Performance on SQL 3.70

Yo! Our largest table, Sales Shipment Line, has over 12,000,000 records. Some months ago the number series passed 999,999 and started on 1,000,000. Because this little s.o.b. is an alphanumeric field, the new records get squeezed in among the old ones instead of being added at the end, right? Now, a block in SQL is 16k? The new records will then not fit in the block and will be placed in a “bucket”? This could be devastating for performance. We enter about 10,000 new records a day. How can we measure/monitor this? How can we solve this? /Hilfe
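
For illustration only (throwaway temp table, names made up), this is the ordering effect being described: in a varchar column '1000000' sorts before '999999', so the new documents land in the middle of the clustered index instead of at the end:

-- Hypothetical demo of alphabetic vs numeric ordering of document numbers
CREATE TABLE #DocNo (No varchar(20))
INSERT INTO #DocNo VALUES ('999998')
INSERT INTO #DocNo VALUES ('999999')
INSERT INTO #DocNo VALUES ('1000000')
-- Alphabetic order: 1000000 comes first, i.e. in front of the old records
SELECT No FROM #DocNo ORDER BY No
-- Numeric order: 1000000 comes last, i.e. appended after the old records
SELECT No FROM #DocNo ORDER BY CAST(No AS int)
DROP TABLE #DocNo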

One option is to define the “SQLDataType” of the “No.” [Code20] field of the “Sales Shipment Header” as INTEGER; this would avoid the alphabetic sorting. If you have the db property “MaintainRelations” active, you need to implement this change in all fields that are related to this “No.” field. Another option would be to define the fill factors (SQL Server side) of the pages for this table (including the indexes) very low (maybe 50 to 70%); then the insertion of new records would not perform that badly. But in this case you have to re-organize the pages frequently and you’re wasting a huge amount of cache and disk space! (SQL Server always loads complete 8k pages into the cache; if the fill factor is 50%, only 4k contains data and 4k is empty.) If you cannot get rid of the old records, you should maybe look into option 1 …
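
If you go the fill factor route, the rebuild can be scheduled with DBCC DBREINDEX; passing an empty index name rebuilds all indexes on the table with the given fill factor (the table name below is just the Cronus example, substitute your own company prefix):

-- Rebuild all indexes on the table with a 70% fill factor (example name only)
DBCC DBREINDEX ('CRONUS UK Ltd_$Sales Shipment Line', '', 70)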

Do you need to keep all of the records in the Sales Shipment Line table? We use a year identifier in all posted tables - perhaps next time…

Try the SQLDataType with “Variant”. This worked for me without the need to change the other fields.

The problem with “Integer” as SQLDataType is that you cannot change the number series format to an alphanumeric value in the future, even if you need to: with the Integer SQLDataType, Code fields only allow positive numbers with no decimals. When changing to Variant instead, the sorting matches the Navision understanding of Code field sorting: 100, 200, 1000, and not 100, 1000, 200 like the default (actually VarChar) SQL data type does.
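
Just to illustrate (and assuming, as I understand it, that purely numeric Code values end up stored as integers inside the sql_variant): integers inside a sql_variant compare numerically, which is why the order comes out as 100, 200, 1000:

-- Demo: integers stored in a sql_variant column sort numerically
CREATE TABLE #CodeDemo (No sql_variant)
INSERT INTO #CodeDemo VALUES (CAST(100 AS int))
INSERT INTO #CodeDemo VALUES (CAST(1000 AS int))
INSERT INTO #CodeDemo VALUES (CAST(200 AS int))
SELECT No FROM #CodeDemo ORDER BY No   -- returns 100, 200, 1000
DROP TABLE #CodeDemo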

Thanks for all the replies! Step one must be to make sure IF it really is a performance problem. Isn’t there a way to check the table? /K

SQL blocks are 8K, but this isn’t too relevant. You could renumber the sequence so that your series starts at Z1000000…, thus ensuring the records are added at the end. Other than that you can set the index fill to, say, 25% and run defrag or reindexing overnight. Or you could uncluster the index so that the table fills free space instead of being kept sorted in order. All these are suck it & see options though…
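
For the overnight defrag, DBCC INDEXDEFRAG runs online, so it can be scheduled without blocking posting; index id 1 is the clustered index (database and table names below are placeholders):

-- Online defragmentation of the clustered index (index id 1); names are examples
DBCC INDEXDEFRAG ('MyNavDb', 'CRONUS UK Ltd_$Sales Shipment Line', 1)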

[:0)] Even though it doesn’t matter: a block (aka extent) in SQL Server is 64k (8 pages of 8k). You really have to have a good reason to define a fill factor smaller than 70%; if you go below 50% you’ll f*** up the system completely (at the suggested 25% you’d waste 75% of the cache space). And: you can’t be serious? Unclustering a table? Remove the CI? Store 12 million records in a heap? Hell on earth for SQL Server! Another option for a solution - just thinking, not tested: add a new field of type GUID (which is a unique number), create this GUID on OnInsert and create a clustered index on this GUID. GUIDs are not created in sequence, so if the table is physically sorted (clustered index) by this GUID, the records will always be inserted into different pages. This would reduce locking problems and would prevent SQL Server from squeezing in records at the same position … With this it is crucial to restore the fill factors frequently, but as mentioned: 70% - and just for this table! - should be enough … Your opinion about this!?
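
Purely as a sketch of what that would look like on the SQL side (not tested, and the column and index names are invented; the field itself would be added in the Navision table and filled in the OnInsert trigger as described above, and the existing clustered key would first have to be made non-clustered):

-- Sketch only: move the clustered index onto the invented GUID column
CREATE UNIQUE CLUSTERED INDEX [$GUID Key]
ON [CRONUS UK Ltd_$Sales Shipment Line] ([Insert GUID])
WITH FILLFACTOR = 70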

If you want to see the effect of the changes, try perfmon for continuous analysis or SQL Profiler for transactional analysis. Unclustering the primary key will mean fewer page (“bucket”) splits. You can still keep a clustered index if you really want, but DO measure the effect. As with every tuning option, you must balance the insert performance against the read performance. A table in a heap is only a REALLY bad option if you want to select * from… with no indexed access (so for Navision this may well be a bad thing to do). As with all tuning, measure your problem, make changes, measure it again. See what works best before you discount what seem like crazy options. If you want to see the cache hit/miss performance, use that counter in perfmon.
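
If you would rather read the buffer cache counters from inside SQL Server than from perfmon, on SQL 2000 you can query sysperfinfo (the object name shown is for a default instance; named instances report under MSSQL$<instance> instead of SQLServer):

-- Buffer cache hit ratio = (hit ratio counter / hit ratio base) * 100
SELECT counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name = 'SQLServer:Buffer Manager'
AND counter_name IN ('Buffer cache hit ratio', 'Buffer cache hit ratio base')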

At my last site we had similar numbers of records and some performance problems in the posting areas when inserting records. Little was achieved by tuning SQL, as it does quite a good job by itself, especially if you drop & recreate the indexes frequently. It has also been known to ignore indexes completely and do table scans on read accesses, especially with tables like Sales Line (due to the type being in the key). I would be surprised if any table structure changes made much difference overall. However, it may be worth investigating the posting routines to fix inefficient code (mixing reads & writes, for example) and looking at locking contention on (for example) the No. Series table. The Microsoft performance documentation says it’s usually 90% Navision and 10% SQL/hardware when identifying bottlenecks. My experience with 3.70 & SQL backed this up. If your SQL & server are already OK, look at the code…

If you want to check index fragmentation, use:

DBCC SHOWCONTIG ('dbo.CRONUS UK Ltd_$Sales Shipment Line') WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS

in Query Analyzer or Management Studio, changing the CRONUS UK Ltd company prefix to yours and Sales Shipment Line to whatever table you want.