DBCC INDEXDEFRAG

Finally I ran some DBCC INDEXDEFRAG after some help from this community. The fill rate of the transaction log gave me some trouble, but I'm still employed.
The worst table had a logical fragmentation above 40%, but just a few days after the defrag the sob is back to 40%+!!! How is that possible?

The 'Sales Shipment Line' table seems OK. It's at 5%+ some weeks after the defrag.

Would it do any good to rebuild the index instead? Or is there another way?

Is there a specific or general performance problem?

Can you make a copy of the live database and optimise the tables on the copy?

It's more of a general performance problem, but not very bad and also a bit random. Sales Shipment Line has 12,000,000 rows. The one with logical fragmentation of 40%+ has only 325,000 rows.

Yes, I have a copy. Do you mean optimize in Navision / tables / optimize?

You should schedule a weekly defrag in database maintenance and run it on the weekend or late at night.

Thanx Denster!

I'll look into that. If it's very complicated, you may drop a reply here with some examples.

/K

You set up a maintenance plan in SQL Server. It's basically the same thing that you'd do for a scheduled backup, only you enter the DBCC INDEXDEFRAG as a scheduled command. Search Books Online for how exactly that works.
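To give you an idea, the command in such a job step could look something like this; the database, table, and index names below are just placeholders, so substitute your own objects:

-- Defragment one index: DBCC INDEXDEFRAG (database, table, index)
-- All names here are placeholders, not your actual objects!
DBCC INDEXDEFRAG (MyNavDb, [MyCompany$Sales Shipment Line], [MyIndexName])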

Some general stuff ahead:

To avoid performance problems, you have to - among other things! - maintain the indexes periodically. Maintenance in this case means restoring the index pages by re-adjusting the defined FillFactor. The FillFactors have to be optimized to strike the best balance between cache usage and read/write performance:

Low FillFactor = big index = poor read/good write performance (fewer page splits)

High FillFactor = small index = good read/poor write performance (many page splits)

So, the FF basically depends on how many read/write transactions you have on a table. If your table is causing trouble, you could try to adjust the FF; a too-high fragmentation could indicate that the FF is too high (= many page splits), so you could try to set a lower FF. But take care! If the FF is set too low, performance will also decrease! Usually you have a standard FF of 90%, so maybe you could try 85% and see what happens … (and: never go below 70%!)
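If you want to see which FF an index currently has before fiddling with it, something like this small sketch should do; the table and index names are placeholders:

-- Read the fill factor stored for one index (names are placeholders)
SELECT INDEXPROPERTY(OBJECT_ID('MyCompany$Sales Shipment Line'),
    'MyIndexName', 'IndexFillFactor') AS [Fill Factor]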

Regarding index maintenance:

To rebuild the FF, you basically have these options (see the T-SQL sketches after the list):

DROP the indexes and re-CREATE them from scratch: best and cleanest solution, but time-consuming, and no parallel user activity is possible during re-creation

DBCC DBREINDEX: almost the same as DROP/CREATE

DBCC INDEXDEFRAG: Not as clean as the two options above, but could be performed with parallel user activity
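As rough T-SQL sketches (all object names and FF values here are just placeholders, so don't copy them blindly!):

-- 1) DROP and re-CREATE from scratch (offline, cleanest):
DROP INDEX [MyTable].[MyIndexName]
CREATE INDEX [MyIndexName] ON [MyTable] ([MyColumn]) WITH FILLFACTOR = 90

-- 2) DBCC DBREINDEX: rebuild in one statement, optionally with a new FF:
DBCC DBREINDEX ('MyTable', 'MyIndexName', 90)

-- 3) DBCC INDEXDEFRAG: online, users can keep working:
DBCC INDEXDEFRAG (MyDb, MyTable, MyIndexName)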

Additionally, you also need to maintain the statistics - don't forget!

My personal opinion is that the "standard" maintenance tools of NAVISION (table optimizer) or SQL Server (Maintenance Plan) are insufficient. To set up a more precise maintenance, you have to have some SQL Server skills, as it is not that easy … In our subsidiaries we have implemented SQLSunrise (www.sqlsunrise.com) to get the most efficient maintenance easily …

Finally, one word about the NAVISION Table Optimizer:

This thing resets all index settings to the IMHO very insufficient NAV standard; all adjustments made from "outside" are gone … I really hate this thing

Hope this helps a little!

Thanx again!

FillFactor = ScanDensity?

Oh, no!

The FillFactor defines which percentage of an index page should be used; e.g. the page size is 8 kB, so with a FF of 75% only 6 kB are used when restoring the index page! Of course, SQL Server will fill in more data than just 6 kB; it will insert until the page is full at 8 kB. If the page is full, SQL Server will allocate a new (empty) page, moving the "overload" of 2 kB from the full page to the new one, thus restoring the original FF of 75% (this process is called a page split).

So, if a too-high FF is defined, the pages get full more often, hence more page splits have to be performed. The pages that are allocated e.g. for an index are not necessarily in order, so a newly allocated page could be physically in a "distant" location. This "physical disorder" is called fragmentation. Not only the pages can be fragmented, but also the extents (1 extent = 8 pages = 64 kB), so there are different types of fragmentation (from Books Online about DBCC SHOWCONTIG):

Pages Scanned: Number of pages in the table or index.

Extents Scanned: Number of extents in the table or index.

Extent Switches: Number of times the DBCC statement moved from one extent to another while it traversed the pages of the table or index.

Avg. Pages per Extent: Number of pages per extent in the page chain.

Scan Density [Best Count: Actual Count]: Best count is the ideal number of extent changes if everything is contiguously linked. Actual count is the actual number of extent changes. The number in scan density is 100 if everything is contiguous; if it is less than 100, some fragmentation exists. Scan density is a percentage.

Logical Scan Fragmentation: Percentage of out-of-order pages returned from scanning the leaf pages of an index. This number is not relevant to heaps and text indexes. An out-of-order page is one for which the next page indicated in an IAM is a different page than the page pointed to by the next-page pointer in the leaf page.

Extent Scan Fragmentation: Percentage of out-of-order extents in scanning the leaf pages of an index. This number is not relevant to heaps. An out-of-order extent is one for which the extent containing the current page for an index is not physically the next extent after the extent containing the previous page for an index.

Avg. Bytes Free per Page: Average number of free bytes on the pages scanned. The higher the number, the less full the pages are. Lower numbers are better. This number is also affected by row size; a large row size can result in a higher number.

Avg. Page Density (full): Average page density (as a percentage). This value takes into account row size, so it is a more accurate indication of how full your pages are. The higher the percentage, the better.
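To get these numbers for one of your own tables, you can run something like this (the table name is a placeholder):

-- Show fragmentation statistics for all indexes of one table (name is a placeholder)
DBCC SHOWCONTIG ('MyCompany$Sales Shipment Line') WITH ALL_INDEXES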

But to avoid misunderstandings: a high FF is not always bad! For example, in NAV a lot of entry tables have an ever-increasing number as the primary key (thus Clustered Index = physical order of records); here a FF of up to 100% could be feasible (e.g. Item Ledger Entry: the records are physically sorted by Entry No. 1, 2, 3, …, so you'll always append, never insert in between = FF 100% on the Clustered Index).

To get the optimal FF, you actually have to monitor the growth of data and indexes between the re-indexing intervals …

You can set the FillFactor per index, e.g. via a T-SQL script or via Enterprise Manager:

Right-click on the desired table, select "Design Table", there open "Table and Index Properties", select the "Keys/Indexes" tab, and voilà, there you can set the FF per index.
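Via T-SQL, a sketch could look like this; the table, index, and column names as well as the 85% are placeholders, so adjust them to your own objects:

-- Rebuild one index with a new fill factor in one step (names are placeholders):
DBCC DBREINDEX ('MyCompany$Sales Shipment Line', 'MyIndexName', 85)

-- Or re-create the index explicitly with the wanted FF:
CREATE INDEX [MyIndexName] ON [MyCompany$Sales Shipment Line] ([Document No_], [Line No_])
    WITH DROP_EXISTING, FILLFACTOR = 85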

But as mentioned, when fiddling with the FF you should really be careful … you could increase performance if done right, but any mistake could f*** up the performance, too …

Good Luck!

Thanx Jörg!

More to look into.

I think our statistics are set to automatic, so I can forget that part?

/K

Well, these "Auto. Create Statistics" and "Auto. Update Statistics" options could be a little bit problematic; one issue was recently discussed here: http://dynamicsusers.org/forums/thread/1139.aspx

Another issue is that with these automatic options the statistics are maintained whenever SQL Server "thinks" it's a good time … which may cause trouble for other processes …

IMO the best way to maintain the stats is with the SQL Server Agent, running a job which simply executes

exec sp_updatestats

exec sp_createstats 'indexonly'

In our installations we run this on a weekly basis. If you need more "intensive" statistics maintenance (e.g. updating the stats of a specific table daily), you have to create scripts/jobs using the CREATE STATISTICS and UPDATE STATISTICS commands with their enhanced features …
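For example, a daily job for one heavily used table could execute something like this; the table, column, and statistic names here are just placeholders:

-- Update all statistics of one table with a full scan (name is a placeholder):
UPDATE STATISTICS [MyCompany$Sales Shipment Line] WITH FULLSCAN

-- Create an additional column statistic explicitly (names are placeholders):
CREATE STATISTICS MyColumnStat ON [MyCompany$Sales Shipment Line] ([Posting Date]) WITH FULLSCAN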

I agree; you should disable all automatic features and schedule jobs to run periodically. SQL Server always kicks these off when you least want them.
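If you want to switch the automatic options off per database, a sketch like this should do it (the database name is a placeholder):

-- Turn off the automatic statistics options for one database (name is a placeholder):
EXEC sp_dboption 'MyNavDb', 'auto create statistics', 'false'
EXEC sp_dboption 'MyNavDb', 'auto update statistics', 'false'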

Thanks again!