Batch posting is terribly slow!!! Why?

We do all our invoice batch processing at night. We created a nightly batch table of jobs that either run slowly or cause record locks for other users. The jobs run quickly when there are no users on the system.

Would you mind sharing what technique you use to drive the after-hours batch? It’s a good idea. Attain has a built-in function for batch processing, but I don’t believe any of the earlier versions do. Dave Studebaker das@libertyforever.com Liberty Grove Software A Navision Services Partner

1. Create a nightly batch job table that includes object ID, name, an enable flag, etc.
2. Write a codeunit to run each job (report) in the table, one after another.
3. Create a form with user-entered posting dates, the time to run, and any other variables necessary to run the reports in the nightly table.
4. Set it to run at night: let the timer run until it hits the desired time, at which point the codeunit begins processing each job in the table.
5. After each job completes, update the nightly batch table with the date and time of completion.

It works very well; we run all our batches in less than an hour at night. If they fail due to hardware or software, we run them manually early in the AM. You may have to change some of the Navision batch code, or create copies and modify them, to get the jobs to run properly. BTW, the month-end jobs used to take 40 hours to run on our old AS/400 system (non-Navision).
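The flow described above can be sketched outside of C/AL. Here is a minimal, hedged Python illustration of the idea (all names such as `BatchJob` and `run_nightly_batch` are hypothetical, not from the original poster's objects): a job table with an enable flag, sequential execution, and a completion timestamp written back after each job.

```python
from datetime import datetime

# A minimal sketch of the nightly batch table idea -- not actual
# Navision C/AL; all names here are hypothetical illustrations.
class BatchJob:
    def __init__(self, object_id, name, enabled=True):
        self.object_id = object_id   # e.g. the report/object ID to run
        self.name = name
        self.enabled = enabled       # the "enable" flag from the table
        self.completed_at = None     # stamped when the job finishes

def run_nightly_batch(jobs, run_job):
    """Process every enabled job in order, stamping completion times.

    `run_job` stands in for whatever executes one job -- in Navision
    this would be the codeunit invoking each report.
    """
    for job in jobs:
        if not job.enabled:
            continue
        run_job(job)
        job.completed_at = datetime.now()  # update the batch table row

# Usage: the form's timer would call this once the user-entered
# start time is reached.
jobs = [BatchJob(50001, "Post Invoices"),
        BatchJob(50002, "Post Shipments", enabled=False)]
run_nightly_batch(jobs, run_job=lambda job: None)
```

The timer loop and failure handling (re-running manually in the morning) sit outside this sketch, as in the original description.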

It looks like a useful tool. Thanks for sharing. Dave Studebaker das@libertyforever.com Liberty Grove Software A Navision Services Partner

Three things that can have the greatest impact on speed have been mentioned by various users, but I thought I’d summarize and add a comment.

1. Commit cache (better have a secure UPS).
2. A large DBMS cache.
3. Make sure that the users’ settings take advantage of the object cache. This saves a user from constantly requesting an object; instead it is stored in memory for later use.

Also, you may want to examine the keys used by Navision. They are usually pretty good, but I have found that on occasion a slight change can produce dramatic improvements. The rule of thumb in setting keys and filtering is to reduce the maximum number of records first. Bill Benefiel Manager of Information Systems Overhead Door Company billb@ohdindy.com (317) 842-7444 ext 117 Edited by - wbenefiel on 2001 Oct 16 18:01:29
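Bill's rule of thumb about keys can be illustrated outside Navision. In this hedged Python sketch (the data and function names are invented for illustration), a key whose leading field matches the applied filter lets the read visit only the matching records, while a key that leads with an unfiltered field forces a scan of the whole table:

```python
# Illustration (not Navision code) of the rule of thumb above: the
# leading fields of a key can narrow the read, so choose a key that
# eliminates the most records first.
records = [{"customer": c, "status": s}
           for c in range(1000)
           for s in ("open", "posted")]   # 2000 rows

def records_visited(records, key_fields, filters):
    """Simulate reading via a key: leading key fields that are
    filtered can be seeked; a gap in the key forces a scan."""
    visited = records
    for field in key_fields:
        if field in filters:
            visited = [r for r in visited if r[field] == filters[field]]
        else:
            break  # remaining fields cannot narrow the read
    return len(visited)

# Key leading with the filtered field visits only matching records.
narrow = records_visited(records, ["customer", "status"], {"customer": 42})
# Key leading with an unfiltered field scans everything.
wide = records_visited(records, ["status", "customer"], {"customer": 42})
print(narrow, wide)  # 2 2000
```

The same filter costs a 2-record read or a 2000-record scan depending only on key order, which is why a "slight change" to keys can produce dramatic improvements.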

A cautionary addendum to Bill’s recommendation: we have found that setting the DBMS cache too big can sometimes have the same slowdown effect as setting it too small. I’m afraid I don’t know why, but in one case we set the DBMS cache to 100 MB and the system ground almost to a halt. We finally found that particular configuration optimized with a DBMS cache of about 20 to 30 MB. Given the time it took to tune the system, we didn’t spend the additional time it would have taken to figure out why. This was a Version 2.0B installation running on an NT server. We had a similar experience later, when the customer set the DBMS cache way up and the system slowed down until the cache setting was reduced. I would echo Bill’s comment about keys. And I am still of the opinion that having a generous amount of DBMS internal freespace is a good thing. Dave Studebaker das@libertyforever.com Liberty Grove Software A Navision Services Partner

As long as the cache fits in physical memory and the CPU is fast enough, the bigger the better. (With enough qualifiers, anything can be true…) Anyway, the most common problem with cache size is that it ends up in virtual memory and thus gets swapped out, which really defeats the purpose. I’ve always wondered why this was even possible. It seems like you could automatically reduce the size of the cache if the OS tried to swap it out, but at least up to about Navision 2.1, it didn’t. Are later versions smart enough to refuse cache sizes that would get swapped?

As to the DBMS cache: as long as it leaves enough room for the operating system and other processes, it should be set as high as possible, unless increasing it no longer improves performance. That can happen if you have a small enough database; if so, you will probably start seeing diminishing returns as you increase the cache. As for us, we have a 28 GB database with about 70% usage. Our server has 1 GB of RAM, and we use a 500 MB cache setting. The commit cache has an even more dramatic effect. Bill Benefiel Manager of Information Systems Overhead Door Company billb@ohdindy.com (317) 842-7444 ext 117
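As a back-of-the-envelope sketch of the sizing logic above (the OS reservation figure and the helper name are assumptions for illustration, not numbers from the posters): give the cache whatever physical RAM is left after the OS, capped by the amount of database actually in use, since beyond that point returns diminish.

```python
# Rough illustration of the cache-sizing reasoning above; the
# os_reserve_mb figure is an assumption, not a recommendation.
def suggest_dbms_cache_mb(ram_mb, os_reserve_mb, db_size_mb, db_usage):
    used_db_mb = db_size_mb * db_usage      # data the cache could ever hold
    available_mb = ram_mb - os_reserve_mb   # physical RAM left for the cache
    return int(min(available_mb, used_db_mb))

# The poster's setup: 28 GB database at ~70% usage on a 1 GB server.
# Reserving ~524 MB for NT and other processes lands on roughly the
# 500 MB cache setting they use.
print(suggest_dbms_cache_mb(1024, 524, 28 * 1024, 0.70))  # 500
```

Note that this only caps the obvious waste; as Dave's post above warns, some installations still slowed down with a large cache, so the computed figure is a starting point for tuning, not a guarantee.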

A couple of additions to my earlier comments. Remember, we are working with Windows NT Server here, so some of what might be ‘expected behaviour’ in the area of virtual memory management has to be taken with a little licence. In my experience with virtual memory on Windows (as opposed to, say, Unix):

- Memory leaks influence the result. We tend to see about 10 to 20 MB per week of memory leakage in the Navision server service, so a little spare space in RAM for this is useful.
- The server tends to migrate RAM to swap when it is idle. This has to be swapped back when the application (e.g. the Navision server) becomes active. You absolutely do not want the Navision service to swap; this triggers another problem (reported in other postings on this bulletin board) where the server CPU goes to 100% doing nothing, and a server reboot is required.

Another point to watch if you are running anti-virus software on the Navision server: make sure that Navision’s Server.exe and Slave.exe are excluded from virus checking. Having a virus scanner check all accesses to your database file is not going to improve performance.

Moved from “Attain Developer Forum” to “Attain Developer FAQ” forum.