Performance using CALCSUMS vs CALCFIELDS

Hello.

I have a situation with which I need some help. In our system we have our own report to calculate a requisition plan. This report uses table 27 (Item) as a DataItem, and we currently have about 110,000 records in that table. Basically it does a CALCFIELDS on four fields.

CALCFIELDS(Inventory,"Qty. on Purch. Order","Qty. on Sales Order","Return Order (Qty.)");

The problem we are having is that this report basically locks the system for 4-5 minutes. We also use a web solution built around an application server, and while the report is running the web solution stops responding as well.

I've looked at report 699, which is the standard report in Navision, and it is designed in a different way.

I'm trying to improve performance by finding another solution and I need some help. Would it be a good idea to, instead of doing CALCFIELDS on Item, use CALCSUMS on tables 32, 37 and 39 respectively? Any help would be appreciated.
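To illustrate what I have in mind, here is a rough sketch (InvQty and QtyOnPurchOrder are just local decimal variables I would add, ItemLedgEntry and PurchLine are record variables for tables 32 and 39, and the keys would have to match ones that actually carry SIFT totals in your version):

// Sketch only: replace the CALCFIELDS with explicit CALCSUMS against the underlying tables.
ItemLedgEntry.SETCURRENTKEY("Item No.");
ItemLedgEntry.SETRANGE("Item No.","No.");
ItemLedgEntry.CALCSUMS(Quantity);
InvQty := ItemLedgEntry.Quantity;

PurchLine.SETCURRENTKEY("Document Type",Type,"No.");
PurchLine.SETRANGE("Document Type",PurchLine."Document Type"::Order);
PurchLine.SETRANGE(Type,PurchLine.Type::Item);
PurchLine.SETRANGE("No.","No.");
PurchLine.CALCSUMS("Outstanding Qty. (Base)");
QtyOnPurchOrder := PurchLine."Outstanding Qty. (Base)";

// "Qty. on Sales Order" and "Return Order (Qty.)" would be summed the same way
// from table 37 with the matching document type filters.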

Perhaps you should look at the TransactionType property of the report. You can read about the four options in the NAV C/SIDE Help. Setting it to Snapshot might solve your problem, as that makes the report a read-only transaction instead of an update transaction. This assumes that your report does not modify any data.

You might want to run it after hours with a NAS server. That’s what our Manufacturing group does.

Thank you for the suggestions. Unfortunately we have an INSERT in the code which inserts a requisition line, so transaction type Snapshot is not an option. Neither is running it after hours on NAS, because of our suppliers' cut-off times for placing orders.

I've pasted the code below. I've checked the code in report 699 and it uses a COMMIT. I've tried adding one at the end of OnAfterGetRecord and it seems to solve the locking issue in our web solution. However, I was under the impression that using COMMIT is bad practice.

I need an explanation of the following: since we have an INSERT in the code, will Navision place a lock on all the tables involved, thereby blocking our web solution? And does a COMMIT end the write transaction and release those locks? Is this correct, and is it really good practice?

Item - OnAfterGetRecord()

Window.UPDATE(1,"No.");
CALCFIELDS(Inventory,"Qty. on Purch. Order","Qty. on Sales Order","Return Order (Qty.)");
AvailQty := Inventory + "Qty. on Purch. Order" + "Return Order (Qty.)" - "Qty. on Sales Order";
QuantToPurch := "Reorder Point" - AvailQty;
IF QuantToPurch < 0 THEN
  QuantToPurch := 0;
IF "Reorder Quantity" <> 0 THEN
  QuantToPurch := "Reorder Quantity" * ROUND(QuantToPurch / "Reorder Quantity",1,'>');
IF QuantToPurch > 0 THEN
  InsertReqLine("Vendor No.",QuantToPurch);

COMMIT releases the current write transaction from memory, writes everything to the database, and sets a new rollback point. It also gives the system a small window to pick up transactions from other users. If your process fails somewhere further down the line, though, the system can only roll back to the point where it last executed a COMMIT. So for a long-running process that loops through large amounts of data, it can be useful.

You seem to have well over 100K Items to process, so what probably happens is that at some point RAM fills up, the system starts paging, and the extra I/O slows everything down. Another symptom is that it locks other transactions out of the tables while it is running. Putting some COMMIT statements in there does speed things up, but you need to be sure of what is going on. If you put a COMMIT in the middle of a transaction and an error occurs before its completion, the system will not be able to roll back to its last known "consistent" state. You can use COMMIT, but only at the end of a transaction, and you need to use it sparingly.
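To make the placement point concrete, here is a sketch; WriteHeader and WriteLines are purely hypothetical routines standing for two writes that belong to the same logical unit:

// Hypothetical illustration of why a mid-transaction COMMIT is risky.
WriteHeader(Item);
COMMIT;            // the header is now permanently in the database
WriteLines(Item);  // if an ERROR occurs here, the rollback stops at the COMMIT
                   // above, leaving a header without its lines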

COMMITs also use system resources, so if you put one after every single Item you might not get the best possible performance increase. If I were in your shoes, I would start with a COMMIT after every 10,000 Items, and if it is still slow, try every 5,000, decreasing the batch size until you reach an optimal point.

These are good suggestions.

Small code optimization is a great place to start. You could do something like IF ItemsProcessed MOD 10000 = 0 THEN COMMIT;
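In the OnAfterGetRecord trigger that could look roughly like this, where ItemsProcessed is an Integer variable you add yourself and 10000 is just a starting point:

// Sketch: commit in batches rather than per Item or once for the whole run.
// Put this at the very end of Item - OnAfterGetRecord(), after InsertReqLine,
// so each committed batch is a complete logical unit.
ItemsProcessed := ItemsProcessed + 1;
IF ItemsProcessed MOD 10000 = 0 THEN
  COMMIT;   // ends the current write transaction and releases its locks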

Sometimes code changes are not enough and you have to examine the underlying hardware. You can always consider splitting your database over multiple disks, adding more RAM, or upgrading to SQL Server if you are still on the native database. I’m not sure what your setup is. You probably won’t even need to get to this point, though. Just keep it in mind as your item master grows.

Hi Peter,

Another observation is that the dialog update can slow down a process. I see from the code that the item number is updated for every item; you may want to show it only every 10,000 items, as suggested for the COMMIT.
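For example, roughly like this, reusing the same counter as for the COMMIT (again, ItemsProcessed and the batch size of 10000 are just placeholders):

// Sketch: refresh the dialog only once per batch instead of for every Item.
ItemsProcessed := ItemsProcessed + 1;
IF ItemsProcessed MOD 10000 = 0 THEN BEGIN
  Window.UPDATE(1,"No.");
  COMMIT;
END;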

Good observation. You would be shocked at how much the dialog update slows things down.

Thank you for all the suggestions. I am actually quite shocked at how much the dialog update slowed down the process; it cut about one minute off the running time. I've experimented quite a lot and found that a COMMIT every 1,000 records works out quite well. That is approximately 100 COMMITs being executed. The performance has increased considerably. Most importantly, the COMMIT releases the locks on some tables, which lets our web solution process requests while the report runs.

//Peter

The tiny window that the COMMIT provides is important so that other processes don't have to wait too long. Glad to hear you made it work, and thanks for updating us with your final results. This topic will be helpful for others who run into the same issue.