I have an issue here with performance on tables defined as temporary.
I have a rather large table which, due to some issues, I need to load as a temporary table.
While working with this large table I need to do a MODIFYALL on a field in the table. When NAV executes the MODIFYALL it displays "Modifying records in the xx table" and takes a long time…
My theory is that it's actually not putting the table into memory, but swapping it to the client's temp file.
If this is the case, then how can I:
Check how much it's actually swapping
Extend the memory (we are running the clients under Citrix).
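For reference, the call in question is essentially the following (a minimal sketch only; TempLargeRec and the Description field are placeholders, not the actual table and field):

TempLargeRec.RESET;
// This single call shows "Modifying records in the xx table" and, on a temporary record,
// rewrites every row in the client's temporary storage rather than on the server
TempLargeRec.MODIFYALL(Description, 'new value');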
Can you refactor your code so that you perform the modification like in the snippet I've posted? I think you will be able to decrease the processing time by 50%:
Let me know if you notice a major processing time improvement…
From my recollection, recreating fresh records (in a temp table) is always faster than performing a MODIFYALL on them.
// >> Tarek
YourTempRec.RESET;
StartTime := TIME;
IF YourTempRec.FIND('-') THEN
  REPEAT
    YourTempRec2 := YourTempRec; // Copy the temp rec to a second temporary record variable of the same table
    YourTempRec2.Description := YourTempRec."No." + 'x'; // The field you would otherwise update with MODIFYALL - perform the modification here
    YourTempRec2.INSERT; // Insert the new temp rec into memory
    YourTempRec.DELETE; // Drop the record from the original temp table as it is read
  UNTIL YourTempRec.NEXT = 0;
MESSAGE('%1', TIME - StartTime); // Show the elapsed time
// << Tarek
Interesting observation, Tarek. Has anyone else verified this? How many records were you performing the MODIFYALL on? I imagine it would scale better to some extent… and then perhaps it would decline again if the transaction grew too large…
From what I have understood, you are trying to sort/delete temp tables somehow in a local file on the client computer? I don't know how you are trying to achieve this, but memory operations are faster than file operations.
Temp tables are slower. All the records have to be brought to the Navision client and kept in memory, and there is also a limit on the maximum number of records that can be kept there. I try to follow a simple rule, which might not be consensual to everybody: when I have large temp tables I try to keep them server side. For those tables I create a real table whose primary key contains the user login, so each user only sees his own records. You can also optimize that table to get better performance in SQL Server.
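Roughly, the idea is something like this (a sketch only; the ProcessingBuffer table with a "User ID","Entry No." primary key and its Description field are placeholder names, not objects from the thread):

// Clear this user's slice of the server-side buffer table before refilling it
ProcessingBuffer.RESET;
ProcessingBuffer.SETRANGE("User ID", USERID);
ProcessingBuffer.DELETEALL;
EntryNo := 0;
// ... for each source record, insert a row keyed on the user login ...
EntryNo := EntryNo + 1;
ProcessingBuffer.INIT;
ProcessingBuffer."User ID" := USERID;
ProcessingBuffer."Entry No." := EntryNo;
ProcessingBuffer.INSERT;
// Because the buffer is a real table, the mass update now runs on SQL Server
// instead of row by row in the client's temp storage
ProcessingBuffer.SETRANGE("User ID", USERID);
ProcessingBuffer.MODIFYALL(Description, 'x');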
In the extreme you could use ADO and SQL's own temp table support, but it would be a risk.
When using the SQL Server option in NAV, if you declare a record variable as temporary, it will, in fact, write to a temporary SQL table, not store the information in memory.
I was shocked when I first noticed this, too, Nuno.
As you fill your temp table in NAV, watch the size of your tempdb in SQL Mgmt Studio.
I agree with Nuno’s suggestion to try and keep this server side.
However, using the USERID as the primary key of your table isn't a good idea (at least I've been caught out by that approach a few times). You will want to use the session ID + User ID combination from the Session table (filter on My Session).
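Something along these lines, assuming the standard Session virtual table with its "My Session", "Connection ID" and "User ID" fields (the variable names here are placeholders):

// Find the row in the Session table that belongs to the current connection
SessionRec.RESET;
SessionRec.SETRANGE("My Session", TRUE);
IF SessionRec.FIND('-') THEN BEGIN
  MySessionID := SessionRec."Connection ID";
  MyUserID := SessionRec."User ID";
END;
// Use the MySessionID + MyUserID pair (rather than USERID alone) as the key of the
// server-side buffer table, so two sessions of the same user do not collide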
Girish, I have to disagree with you. Navision temp tables are not stored on SQL; Navision stores them in temp files in the local application data temporary folder. The following code only sends one SQL statement to SQL:
FOR i := 1 TO 1000 DO BEGIN
  TempItemLedger."Entry No." := i;
  TempItemLedger.INSERT;
END;
FOR i := 1 TO 1000 DO BEGIN
  TempItemLedger.GET(i); // position on each record before changing it
  TempItemLedger."Item No." := FORMAT(i);
  TempItemLedger.MODIFY;
END;
Well, on my side the performance improvement wasn't really noticeable, maybe 10% or so. But whether you wait 54 seconds or 60 seconds for a form to open, it's still a lot of time for the user (and for me testing!).
I think I'm more back to the theory of the temp file being used too much.
Oh yes Tarek, one more thing. Why would you delete the records as you read them? Is NAV releasing memory when you delete the TEMP records?
I've not checked the impact on RAM, but I've always assumed (wrongly?) that handling my own garbage collection, i.e. clearing variables that are no longer used, was good programming practice.
I wonder if someone has already benchmarked this with a tool that scans the memory usage.
But I do hope that if I've filled a temp table with one million records and I then use DELETE to delete them all, Navision will deallocate the allocated memory; this has always seemed obvious to me… But maybe someone working at Vedbaek could have an answer on how Navision handles memory allocation & deallocation.
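If anyone wants to try it, a crude test could look like the sketch below (watch the client process in Task Manager, or the size of the client's temp file, at each MESSAGE; TempRec and its "Entry No." key are placeholder names):

// Fill the temp table with one million records
FOR i := 1 TO 1000000 DO BEGIN
  TempRec."Entry No." := i;
  TempRec.INSERT;
END;
MESSAGE('Filled - check memory / temp file size now');
// Delete them all again and see whether the memory / temp file actually shrinks
TempRec.RESET;
TempRec.DELETEALL;
MESSAGE('Deleted - check memory / temp file size again');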
I would guess that you are doing some kind of complex seek, say for instance you are filtering on a field that is not indexed and thus forcing a clustered index scan. SQL is then building the query in the TEMPDB, thus it is growing. Once the search is built, it then moves the result to the temp table, but what you are seeing is just the normal behaviour of SQL. To test this theory, run the same code but with a non-temporary table.
To me it looks like it is the read from SQL that is using tempdb.
Actually I had a case like this a while ago, where a FIND was taking about 15 minutes to find a record (the table was about 20 GB), and the delay was the building of the TempDB. Which, by the way, leads me to wonder why MS keeps saying that TempDB is not important for NAV.