v4 upgrade performance very slow


I am upgrading a client from 3.10 to 4.02. They currently run SQL Server 2000 and we are moving to 2005. The database holds 60GB of data, with 6.5m records in the Item Ledger Entry table, over 7m in Value Entry and 21m in the G/L Entry table.

The checking and transfer-to-temporary-tables step takes days, and so does the transfer back from the temporary tables. I run the client via a Citrix connection.

What I’ve done:

  1. They don’t use bins so I can skip the negative bin check on the ILE table.
  2. Dropped all indexes (except PK) and SIFT on the largest tables for the transfer out and back.
  3. Put the SQL Server into Simple recovery mode.
  4. I’m running the upgrade on a good box: 4 CPUs, 8GB RAM. The disks are 10k rpm, which is the best I can do at the moment. SQL Server is configured to use 7GB.
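For anyone repeating steps 2 and 3, the changes can be scripted on the SQL side. This is only a sketch - the database, table and index names below are illustrative (real NAV table names carry the company prefix), and NAV keys are normally disabled via the MaintainSQLIndex property in the table designer rather than raw DDL:

```sql
-- Switch to simple recovery so the mass transfer doesn't grow the log.
ALTER DATABASE [NavUpgrade] SET RECOVERY SIMPLE;

-- List the secondary (non-PK) indexes on a large table before dropping them.
EXEC sp_helpindex '[MyCompany$Item Ledger Entry]';

-- Drop a secondary index (SQL 2005 syntax); repeat for each non-PK index.
DROP INDEX [$2] ON [MyCompany$Item Ledger Entry];
```

Remember to re-enable the keys (and let NAV rebuild the SIFT tables) after the transfer back.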

What I plan to do on the next run:

  1. Apply KB article 911450 to increase the update interval of the progress window.

What I’m wondering is:

  1. The upgrade code doesn’t use the new FINDFIRST and FINDSET functions. Has anyone converted the main codeunits to use these, and did it make much difference?
  2. Would putting the Nav client on the same box as the SQL Server help? The SQL box is good (see above) but the Nav client box is just a small test machine. I can’t see why this would help much, as the SQL Server is doing the grunt work - but then I seemed to get a better run on a 2GB box with the client installed alongside the SQL Server?
  3. SQL is holding millions of page locks on the table being transferred. I would have thought these should be escalated to a table lock, which would use fewer resources. Has anyone come across this before?
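On point 3, one way to see whether SQL 2005 is actually escalating is to count the locks the transfer session holds. A rough sketch, assuming the upgrade database is named NavUpgrade (escalation to a table lock is normally attempted at around 5,000 locks per statement, but it can be skipped under memory pressure or blocked by other sessions):

```sql
-- Count locks by type and mode for the upgrade database (SQL 2005 DMV).
SELECT resource_type, request_mode, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('NavUpgrade')
GROUP BY resource_type, request_mode
ORDER BY lock_count DESC;
```

If page locks are piling up by the millions, escalation has failed or been skipped; a LOCKTABLE in the upgrade code (or committing in smaller batches) tackles the problem at the source.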

Has anyone got any timings for similar upgrades, i.e. a comparable data size and version jump, so I can set some expectations and do some planning?



Have you tried splitting up those tables and making archive tables instead of schlepping across all those records at once?

Trek on

Hi - no, I haven’t tried that yet, and we haven’t date compressed either. Date compression has caused ‘strangeness’ at other sites, I understand, though perhaps that was in earlier versions. If I archived, I’d have to check a lot of processes to see what was affected; e.g. the Inventory Valuation report would have to be changed, since it uses all the ILE data to produce its valuation. I’m not that keen on doing that.



Have you gone through the upgrade yet? One of our customers is planning to upgrade from 3.01 to 4.00SP3. They are on a SQL database approaching 200GB in size and are currently archiving/deleting data in an attempt to reduce the size of the db. I am interested in your experience, especially the actual timing of the data conversion part of the upgrade. Any feedback would be greatly appreciated.


My experiences with new functions:

  • FINDFIRST: Works fine if you are only going to use the first record. If you then loop through the recordset, Navision changes the SQL call back to a FIND(’-’)-style call, so it won’t help in that case.
  • FINDSET: Very useful… in some cases. It depends on what you want to do with the recordset. Also check what your record set caching is set to (500 by default). I tend to choose between FINDSET and FIND(’-’) based on how many records I expect back and whether I am going to modify them. I think Navision still makes the final decision and changes the SQL call on the fly depending on what happens with the recordset: you can call FINDSET, but if there are more than 500 records it automatically switches to a FIND(’-’)-type call (if you get what I am saying).
  • ISEMPTY: Now this one I recommend. It does not return a recordset, so performance is improved. You can use it instead of IF NOT rec.FIND(’-’) etc.
  • MODIFYALL: Now offers better performance than a REPEAT…UNTIL loop - if you can find places to use this, then definitely do so.

I did not go into technical details here. Check Navision online help, it is quite detailed.
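A minimal C/AL sketch of the patterns above, for reference (the ItemLedgEntry variable and the Open field are just illustrative examples, not taken from any particular upgrade codeunit):

```
// ISEMPTY: existence test - no recordset is fetched.
IF ItemLedgEntry.ISEMPTY THEN
  EXIT;

// FINDSET: read a set forward-only; pass TRUE if you intend to modify records.
IF ItemLedgEntry.FINDSET THEN
  REPEAT
    // read-only work on each record
  UNTIL ItemLedgEntry.NEXT = 0;

// MODIFYALL: one UPDATE statement instead of a REPEAT..UNTIL modify loop.
ItemLedgEntry.MODIFYALL(Open,FALSE);
```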


That is nice to know, thanks. Do you know if it’s due to a change in Navision (and if so, which version), or if it’s just handled better in SQL 2005?


I ran the test 2 days ago on SQL2005 with 4SP2 client.

It went from 3800ms down to 400ms. (I know this is specific to my tables etc., but it’s proof.)

Hynek also told me that it had been improved - when I saw him at Convergence.

Shep, you need to have someone who knows what they are doing come and do a full analysis. You can’t just disable all secondary keys and expect the system to perform better. That may improve write performance, but it will shut down read performance in other instances. Indexes are a good thing, necessary even, but you have to have GOOD ones. Disabling all of them will only create new problems.

It’s good to know, because “back in the old days” MODIFYALL was always far and away the best way to update one field across a table, and it’s odd how it just kept getting slower.

PS still didn’t answer why you didn’t drop into the booths at Convergence and say hi [:(]


An update on my experience (mostly bad, some good) so far and some specific answers to comments posted.

Daniel - I just want to clarify that I didn’t drop all secondary keys as a permanent setup option, just as one part of a temporary solution so the upgrade programs would run in a reasonable time. In the end I got the upgrade process down from taking 5 days plus without finishing to doing the whole thing in about 25 hours. This was the first hurdle and a relief, as I had ‘heard’ (unconfirmed) of an upgrade in Singapore that had Navision shut down for a week - that would have been a show stopper for my client. I was unable to find anyone in Australia or New Zealand who had done an upgrade to v4/2005 over 40GB. The v4 database, with its heavy default index structure and without index tuning for the business, is over 100GB.

Nic - Thanks for your comments on FINDSET etc. You seem to be saying you use both FIND(’-’) and FINDSET. But I thought, as you said, Navision upgrades (downgrades, really) a FINDSET call to FIND(’-’) if the recordset is over 500 (the default), and therefore you don’t need to worry about using FIND(’-’) yourself. If the recordset is under 500 you get away without triggering a dynamic cursor; if it is over 500 you are no worse off. Can you confirm?


The bad news - we haven’t gone live yet. There have been a number of factors, but the root of the problem seems to be that cached query plans on large data sets cause table scans and can lock sessions for minutes. We’ve had two hot fixes from MS (the latest last week). We found the sales history form could cause a session to ‘stop’ if users paged down too quickly on large datasets and trigger a table scan. This is a retail business with high throughput, and what was needed were recent, specific history lookups, so I did something targeted to close this hole. Table scans of large tables caused buffer flushing and in turn slowed posting performance. But we were getting other unexplained slowdowns - the response from MS on this was:

“SQL 2005’s use of cached query plans is affecting the performance in NAV in addition to how NAV’s UI is designed to pull the data. The current design of NAV’s UI uses synchronous update with SQL. This synchronous update behavior affects the performance of the application as you have seen in your load tests. It is for this part of the issue where we cannot do anything for now as it will require a considerable re-design of the NAV client.”

Other secondary factors include hardware (corrected last week I hope), Christmas period and budget wrangles.

I’ll update further when things start to move again.


Hi - I posted a reply on my experience to date. Has your 200GB upgrade started? If so, I’d like to hear your experience so far. Thanks.

Shep, I am having exactly the same problem with large datasets. I feel your pain [:’(]

Microsoft supplied me with hotfix KB930775, but I am still having the same issues. Definitely on the 4SP2 client; I have also used 4SP3 (not so much yet) but have reproduced it there too.

I have also had the same problem with cached query plans, and I may possibly have lucked out with a fix. It’s still early to tell though.

We were running 32-bit Windows and 32-bit SQL 2005. We experienced very bad locking in the larger tables; sometimes the locking would last up to 10 minutes. We were able to capture SQL traces to prove it was the “cached query plan problem”.

We tried 2 hotfixes with no positive results.

We recently realized that our server had x64 processors, so we wiped the entire machine and reloaded 64-bit Windows + 64-bit SQL 2005 (Dynamics executables at SP3). So far it seems to have corrected the problem! We will be conducting more testing with more users, so I will keep you posted on the results.

Also, when running 64-bit there is a known caching problem (we haven’t experienced it yet). Read this article, it’s pretty good: http://support.microsoft.com/?id=927396

Email me at jwilder@stonewallkitchen.com if you would like to discuss further.

We have always been using a 64-bit machine, but with SP2. When I tried to upgrade to the SP3 client last year it corrupted the database and I had to upgrade again. I asked MS about this when I was discussing the performance problems and they let me know the cause, which as I remember was related to logins.

Thanks for the article; I’ll read that.

Nic - did you agree with my comments on FINDSET and FIND?

Hi Shep,

Yes, it is what I have been told. Perhaps it is a bit more complicated to test. I would suggest playing around with these commands a bit and draw your own conclusion. I don’t want to mislead anyone.

Anyway, about SQL2005 and the cached query plans. I got a response from Microsoft proposing the following:

Install SQL2005 hotfix KB930775.

Use the Plan Guide feature of SQL2005:

  1. Generate a query “Plan Guide” for the particular queries which perform poorly, specifying WITH RECOMPILE as part of the guide.
  2. This entails:

i. Using SQL Profiler to trace and capture the precise queries which are causing the poor performance.

ii. Creating one or more plan guides (one for each problem query) which use the RECOMPILE option to cause a new plan to be generated every time the same query is called.

iii. These plans will then be optimised based on the actual parameter values each time the query is executed.
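For reference, step 2.ii maps onto the sp_create_plan_guide procedure in SQL 2005. A hedged sketch only - the guide name, table name and parameter below are made up, and the @stmt text must match the query captured in Profiler character-for-character or the guide will never be matched:

```sql
EXEC sp_create_plan_guide
    @name  = N'NAV_SalesInvHeader_Recompile',  -- hypothetical name
    @stmt  = N'SELECT * FROM "MyCompany$Sales Invoice Header" WHERE ("No_">@P1) ORDER BY "No_"',
    @type  = N'SQL',
    @module_or_batch = NULL,           -- NULL means the batch is the statement itself
    @params = N'@P1 varchar(20)',      -- required because NAV submits parameterized batches
    @hints = N'OPTION (RECOMPILE)';    -- force a fresh plan on every execution
```

Existing guides can be inspected in sys.plan_guides and disabled or dropped with sp_control_plan_guide.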

Hi Nic

Yes, got the same plan from MS two weeks ago. We haven’t started on this yet due to a number of factors. Would be interested in how you progress with it. Also, what is the size of the database and the user base of your client?


25GB, now optimized, 133 concurrent users.

I found this article very useful:


I managed to create a plan guide for the Sales Invoice Header list, and so far it looks promising, but I don’t want to jump the gun and say the issue is solved without further testing. I have passed my findings on to our SQL DBA to play around with a bit. We’ll see how it goes. I am off skiing for 10 days, so will update when I get back… [<:o)]


We went live back in June and it has been very successful. The server is performing well, with transaction times consistent and low even under load. For example, sales order posting time is consistent at 500ms per line; transfer orders (ship and receive) are about 400ms per line. All ended well, and I went on holiday in August to recover from the experience.

Now the client wants to upgrade this database to v5 and take on another company, bringing the total to 150 users and averaging 2100 non-zero sales shipment lines per day - oh dear!