The document “Tuning Navision for Better Performance” recommends a fill factor of 70% for the Navision table type “Value Entry”. Which tables does that include? The document can be found at http://www.naviclub.fr/Docs/Tuning%20Navision%20for%20better%20performanceV2.pdf
This document was written 4 years ago and is based on NAV 3.70 and SQL 2000. While many things we have learned in the past still have bearing, I’m not sure I would take this document at face value with newer versions of both NAV and SQL.
With Value Entry, unless you have changed the primary key (not an option back in 3.70), I see little value in that much free space.
Can I ask, why fill factor, and why Value Entry in particular? Are you having performance issues with SQL and Inventory?
We have not upgraded NAV in 4 years, so the document still applies to the system I’m using. I would like to update NAV, but it’s not in the budget for this year. I am just trying to find the optimal values for fill factors in order to get the best performance from our system. Too many page splits can lead to more deadlocks and blocking, which I am trying to eliminate. If this document is no longer valid, can you recommend a way to determine the optimal fill factors?
Hey John. My understanding of page splits is the following. Let’s say a page can contain 4 records/index entries per page, and we have:

Page 1 : A B D E
Page 2 : F G H I

We now insert “C” into the equation. The DBMS does the following (I’m not sure which side each half goes to, but the principle is the same):

Page 1 : A B C X
Page 2 : D E X X
Page 3 : F G H I

(X being space that will never be used.)

Now, as that’s my understanding of it, tables like the Value Entry table always insert in numeric order, so you should not have this issue. It’s the non-ledger-entry tables where you can start to see page split issues. To combat this, you can defrag or rebuild your indexes at night, and of course update statistics. That might be a better start than tinkering with fill factors, unless you’ve tried that already.
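For what it’s worth, the nightly maintenance I mean can be done in SQL 2000 with commands along these lines. The database and table names here are just examples; NAV prefixes SQL table names with the company name, so substitute your own:

```sql
-- Defragment the clustered index (index ID 1) in place; this is an
-- online operation with relatively low impact (SQL 2000 syntax).
DBCC INDEXDEFRAG ('MyNavDB', 'MyCompany$Value Entry', 1)

-- Or rebuild every index on the table instead (offline, but it
-- removes fragmentation completely).
DBCC DBREINDEX ('MyCompany$Value Entry')

-- Refresh the optimizer statistics for the table.
UPDATE STATISTICS [MyCompany$Value Entry]
```

DBREINDEX also updates the statistics as a side effect, so the last step is only needed after an INDEXDEFRAG.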
If the new record is inserted into the middle of the page and causes a split, the smaller part will go to the new page and the larger part will remain on the old page. Nevertheless, once a page is full, a page split occurs. Rebuilding the indexes is part of our regular maintenance plan. Index rebuilds automatically update the statistics, but they also set the fill factor back to the default of 100%. Therefore, an increase in page splits occurs after the index rebuild, which slows performance. Eventually, after enough writes and updates, the fill factors become irrelevant, but fragmentation begins to occur, which also causes performance problems. The best option is to rebuild the indexes on a regular schedule using the optimal fill factors. The optimal level varies according to the number of writes and updates that occur. My problem is in determining these optimal levels. Too high and too many page splits occur; too low and reads suffer because the data is spread out across more pages. I need a way to find the correct balance for each table.
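What I mean by rebuilding with an optimal fill factor is that in SQL 2000 the fill factor can be passed to the rebuild itself, so the rebuild and the fill-factor setting happen in one step. The table name and the 80% figure below are just examples, not recommendations:

```sql
-- Rebuild all indexes on the table ('' = every index) and leave each
-- leaf page 80% full, i.e. 20% free space for future inserts,
-- instead of letting the rebuild fall back to the default.
DBCC DBREINDEX ('MyCompany$Value Entry', '', 80)
```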
Thanks for clarifying where the different parts of the split page go.
However, the main tables that you are going to find data in are the entry tables: G/L Entry, Vendor Ledger Entry, Customer Ledger Entry, Value Entry, etc.
The entry tables in NAV are ALWAYS sequential on the primary key: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. It’s an integer, not a float, so you should see no page splits.
Where you are going to see splits is on the working tables, where data tends (but not always) to get written and then deleted (Sales Header, Production Order, requisition worksheet). But these are smaller in terms of data. And since the data tends to be added and removed, reindexing/defragging is essential regardless of resetting the fill factor; if the data is in the page one day and not the next, the fill factor will only make the insertion possibly faster. In the long run you could end up with plenty of unused space on pages.
Can I ask what version of NAV you are on?
Are you actually having performance issues with your system right now, or are you just attempting to implement a finer level of maintenance on your system?
Your initial post on here said
“The document “Tuning Navision for Better Performance” recommends a fill factor of 70% for the Navision table type “Value Entry”. Which tables does that include?”
Value Entry is its own table; it is linked in a one-to-many relationship with the Item Ledger Entry table, and is used to maintain monetary values against an item.
The Value Entry table, as discussed above, inserts records in sequential order and will never skip a value, so changing the fill factor for page-split purposes is irrelevant.
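If it helps, here is the shape of a per-index fill factor setting in SQL 2000. The index and column names are examples modelled on NAV’s naming (dots in field names become underscores in SQL), so check yours before running anything:

```sql
-- Entry No. is monotonically increasing, so new rows only ever go on
-- the last page; 100% fill wastes no space and causes no mid-page splits.
CREATE UNIQUE CLUSTERED INDEX [MyCompany$Value Entry$0]
    ON [MyCompany$Value Entry] ([Entry No_])
    WITH FILLFACTOR = 100, DROP_EXISTING
```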
We’re using version “US 2.60.G (4.0SP1)” on SQL Server 2000. A little behind the curve, I know, but there’s no money in the IT budget this year to update.
During periods of normal user activity, I’m seeing page splits in the perf monitor of around 4/sec, sometimes spiking to 10. I’ve heard that sometimes adding a page to a linked list when the new row doesn’t fit into the 8K limit can be recorded in the perf mon as a page split although no data is actually being moved. Regardless, whenever the perf mon is showing high page splits/sec we get a slowdown in performance and end users complain they are “locking up”. I’m trying to determine which tables are causing the page splits and where to set the fill factors on those tables. I’ve heard that DBCC SHOWCONTIG can be used to determine the scan density, and anything with a scan density significantly lower than 100% is probably experiencing excessive page splits and needs its indexes rebuilt. Setting the fill factors on the index rebuild should improve tables with poor stats. Any thoughts on this?
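The check I have in mind looks like this (the table name is just an example):

```sql
-- Report fragmentation for every index on the table (SQL 2000 syntax).
DBCC SHOWCONTIG ('MyCompany$Item Ledger Entry') WITH ALL_INDEXES
-- In the output, look at "Scan Density [Best Count:Actual Count]":
-- a value well below 100% means the pages are out of order and/or
-- partly empty, which is consistent with heavy page splitting, and
-- the index is a candidate for a rebuild with an adjusted fill factor.
```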
As an aside, our most troublesome tables are Item Ledger Entry, General Ledger Entry, and Vendor Ledger Entry, but I don’t know if this has anything to do with page splits or not. It would seem contrary to your previous post if page splits were at fault. Problems typically occur on these tables when users are entering or updating information.
There is an old saying: “A man with a hammer sees every problem as a nail.”
It’s rare that any one fix is going to make a huge difference to making your system run faster; you need to make small changes one at a time and measure the effect they have.
Step one is to have a solid hardware foundation.
Step two is to make sure that SQL is implemented correctly.
But in reality, so long as your hardware and SQL implementations are correct, even if not perfect, any tweaks and tuning at this level are only going to make very minor changes to performance. Of course, if you have something silly like the log file on a shared drive, or AutoShrink set on the database, fixing these will help, but I consider those major faux pas, not “tweaking”.
Where the real performance boost comes is in steps 3, 4 and 5.
Step 3 is key, index and SIFT tuning (which is where most of the performance boost comes from).
Step 4 is code tuning, where the most work is involved.
Step 5 is tuning your business procedures (which is always the hardest to sell to the client).
Put down your hammer for a bit, and take a broader view of the issues. [;)]
Thank you for your sage advice, David. Although it may not seem that way from this thread, I am taking a more holistic approach to tuning our system. I have a handful of things I want to investigate, and fill factors just happen to be the area I’m concentrating on at the moment. The biggest problem with our system is actually I/O contention, which our network administrator is addressing. Key, index, SIFT and code tuning is being handled by our Microsoft Navision partner. Optimizing the fill factors is definitely not a cure-all, but it’s not going to hurt either. And, as a SQL professional, it’s something I want to know how to do. Perhaps I should start a thread on a SQL Server forum. I’m not getting any real answers here, only platitudes.
You are welcome. From the original post it looked like you were getting stuck on one thing, but it looks like you have it under control.
By the way, be very careful when using that document; it tends to oversimplify things that are very complex, and I have seen many badly implemented NAV systems based on the advice there being misused.