NAV SQL DB Size

We have a customer whose DB is growing at a rate of about 2 GB per month. They are a high-volume, low-value company, so with value entries etc. there is not much you can do about it. Is there a theoretical limit to a SQL DB, and does anyone have any suggestions as to how we can maximise the performance of the system when the size will be 100 GB+ in just a year or two? They are running 5.0. I have heard conflicting arguments as to whether going to SP1 gives significant SQL performance improvements or not.

Any comments or experiences would be most welcome

Andrew

Hi Andrew,

actually, performance issues can have multiple causes - if you search the forum for “SQL Performance” you’ll get gazillions of recommendations and advice …

Unfortunately 5.0 is one of the worst NAV versions ever; there are a LOT of severe issues.
First of all you have to look into “SIFT Optimization”, reducing the number of SIFT Buckets - this will also decrease the DB size and slow down the growth. Again, for this and much more please search the forum.
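To get a quick impression of where the space is actually going, a simple size query is usually enough. This is only a sketch (SQL Server 2005 or later, nothing NAV-specific in it); the SIFT helper tables in 5.0 are separate tables, typically recognisable by the numeric table/key suffix in their names:

-- List the 25 largest tables in the current database by reserved space.
-- Read-only; requires VIEW DATABASE STATE permission.
SELECT TOP (25)
       t.name AS table_name,
       SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb,
       SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t ON t.object_id = ps.object_id
GROUP BY t.name
ORDER BY reserved_mb DESC;

If the top of that list is dominated by the SIFT tables of the big ledger tables, that’s usually the place to start.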
Another big problem in 5.0 is the (wrong) usage of the “SQLIndex” property in many important tables. The indexes defined there are mostly crap and dramatically degrade performance - MS removed all those indexes with 5.0 SP1.
Ideally, remove these “SQL Indexes” manually in the “Object Designer”, or overwrite them with a 1:1 copy of the “Key” definition.
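After changing or clearing the property and recompiling the table, it is worth checking on the SQL side which indexes actually exist now - for example (the company/table name below is only an illustration, not necessarily yours):

-- Show all indexes and their key columns for one NAV table
SELECT i.name AS index_name, i.type_desc, c.name AS column_name, ic.key_ordinal
FROM sys.indexes AS i
JOIN sys.index_columns AS ic ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID(N'dbo.[CRONUS International Ltd_$Value Entry]')
ORDER BY i.name, ic.key_ordinal;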

Upgrading to 5.0 SP1 or higher might be an option, as VSIFT will perform better than standard (non-optimized) SIFT. But do not expect miracles; there are other issues as well (advantages and disadvantages; check out this one: http://www.mibuso.com/dlinfo.asp?FileID=1052).
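Just to illustrate what VSIFT means technically: from SP1 onwards the SIFT totals are maintained as SQL Server indexed views instead of separate SIFT tables. Very roughly it looks like this - the object and column names below are only an example, not the exact objects NAV generates:

-- Schema-bound view aggregating a (hypothetical) Value Entry table;
-- NAV columns are NOT NULL, which is required for SUM in an indexed view.
CREATE VIEW dbo.[CRONUS$Value Entry$VSIFT$1]
WITH SCHEMABINDING
AS
SELECT [Item No_],
       [Posting Date],
       SUM([Cost Amount (Actual)]) AS [SUM$Cost Amount (Actual)],
       COUNT_BIG(*) AS [$Cnt]
FROM dbo.[CRONUS$Value Entry]
GROUP BY [Item No_], [Posting Date];
GO

-- The unique clustered index materializes the view; SQL Server then keeps
-- the totals up to date automatically on every insert/update/delete.
CREATE UNIQUE CLUSTERED INDEX [VSIFT$1$idx]
ON dbo.[CRONUS$Value Entry$VSIFT$1] ([Item No_], [Posting Date]);

The upside is that SQL Server maintains these aggregates itself; the downside is that every write to the base table also has to update the views, which is why disabling unneeded VSIFT levels matters just as much as reducing the old SIFT buckets did.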
Maybe you could just test such a scenario?

So there’s a lot you can and have to do to improve the performance …

Regards,
Jörg

Thanks for that, we will definitely look into this. But apart from reports and batch jobs running slower, should there be any real problems with having a DB that big? Obviously assuming the hardware is up to the job?

Andrew

Well, technically the DB size alone is not that dramatic - I have customers running databases from 300 to 700 GB. What matters is the “transaction volume”, meaning the number of transactions or data read/write operations within a period. Non-optimized SIFT structures unnecessarily increase this volume and are responsible for too much DB growth.

Of course, large databases require a sufficient hardware infrastructure regarding storage and memory. But before adding hardware I really recommend looking into that SIFT stuff: once you’ve optimized the most important tables you’ll get rid of thousands (millions?) of pointless records, the DB size will be reduced, read/write operations might execute faster, and you’ll also encounter fewer blocks …
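To make the reduction visible, you can compare the figures before and after the cleanup - sp_spaceused without a parameter shows the database size and unallocated space, and with a table name it shows a single table (the name below is only an example):

-- Whole database: size, unallocated space, reserved, data, index_size, unused
EXEC sp_spaceused;

-- A single table, e.g. one of the big ledger tables
EXEC sp_spaceused N'dbo.[CRONUS International Ltd_$Value Entry]';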

That way you at least gain time to investigate further options like upgrading to 5.0 SP1.

Hope this helps you a little.

Best regards,
Jörg

Thanks for all this, it’s a great help. One last question: would upgrading the executables to SP1 help at all, or would we really need to do a full upgrade to get the benefits?

Andrew

I’d say a technical update - just changing the executables etc. - should be sufficient in the first place. With NAV 5.0 SP1 it is important to have a fairly recent build version (Update 2 was just released on 2nd March, which is maybe a little bit too new at the moment), as the early builds were also affected by some bugs (do you know Waldo’s blog? Check this one: http://dynamicsuser.net/blogs/waldo/archive/2009/12/22/platform-updates-overview-3-70-b-nav2009-sp1-updated-2.aspx).

But again: NAV 5.0 SP1 has some advantages regarding communication with SQL Server, but the same stuff could also cause some trouble. Basically the new VSIFT views are indeed an improvement, but they may also require some tuning in several places …
(the webcast I mentioned before explains those features and also mentions the potential risks)

Regarding the NAV 5.0 code base (the NAV objects), I strongly recommend getting rid of those problematic “SQL Indexes”, at least in the most important tables. See also http://blogs.msdn.com/microsoft_dynamics_nav_sustained_engineering/archive/2008/04/21/sqlindex-property.aspx

If you are having performance issues with a database this size (I don’t consider 24 GB per year to be very large), then you must have serious issues. I agree with Joerg that you should look at some basic performance tuning, but also take a look at the hardware configuration (which I will bet is bad), and then look at the code you have written and how it was written.

No matter what, though, the information you provided is nowhere near enough to even start a diagnosis. Please post some details of the system.

We aren’t having any significant performance issues at the moment; it was more a sounding out as to what could be done and what other people’s experiences were. The hardware is powerful enough - a SAN. It was more about what we could do to ensure that this carries on being adequate, and what we can say to reassure our client, who seems to be concerned about the size. Plus a feel for 5.0 against 5.0 SP1.

Their only real problem is that they only receive purchase invoices once a month, which means they can only run the Adjust Cost - Item Entries batch job once a month, and that takes a long time. Unfortunately there is nothing that can be done about that.

Andrew

Actually, they weren’t removed because they were “crap”; they were removed because the cursor type that NAV uses was changed, which forces SQL Server to act more like the NAV native database server. On regular 5.0 and earlier versions, many of those SQLIndexes showed measurable, significant performance improvements.

Obviously we have quite different experiences here; as far as I got “in touch” with those indexes, they just caused trouble, even in older pre-Dynamic-Cursor versions … removing fields from an index (even if supposedly non-selective) without exact knowledge about the filtering (and we’re talking about standard filters in most cases) wasn’t such a good idea even with Fast-Forward cursors … but anyway, that’s history now …

A SAN is no guarantee that everything is fast. If badly configured, a SAN can slow things down a lot. In short, the same rules for the disks apply: dedicated disks, no RAID5, correct cluster size, stripe size and offset, …

For SQL it is more interesting to have more write cache on the SAN than read cache (SQL has its own read cache anyway), but giving too much write cache can hinder the performance of other applications using the SAN.
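If you want to know whether the SAN actually delivers, don’t trust the spec sheet - look at the measured latency per database file, for example (SQL Server 2005 or later; the numbers are cumulative since the last restart):

-- Average read/write latency per file; consistently high values here
-- usually point at storage/SAN configuration rather than at NAV itself.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_write_ms DESC;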

Joerg didn’t mention it himself, but he has written a good book about performance tuning NAV on SQL. It is a good starting point for you.