Navision Version 4.0 Installation SQL Option

I am installing Navision 4.0 on SQL Server for the first time. Any tips, suggestions or recommendations for database configuration? For example: the number of database files and their sizes on the server hard disks; the number of logical file names and their sizes; the file growth option; transaction log location, growth and configuration; SQL database maintenance. What are the implications of transaction log growth and how do we control it? All input will be appreciated.

First, search this site. You will find a fair amount of information. In the meantime, some information: the standard Navision SQL database consists of 3 files (primary data, secondary data, and transaction log). It is best if each of these is on a different drive set.

Primary Data File: This file contains the data that is common to all companies, such as users, permissions, and Navision objects. It also contains the DB catalog. This file usually will not get too large. Having this file on a different drive allows you to back up the transaction log in the event of a failure of the disk holding the secondary data file. This would allow a possible recovery to the point of failure instead of only to the last transaction log backup.

Secondary Data File: This holds the data for your companies. This file will account for most of the read/write access to your database and should be the focus of your disk system. Use RAID 10 across as many disks as you can. On larger systems, when you think you have enough hard disks, add more.

Transaction Log File: Place the transaction log file for each database on its own RAID 1 drive set. Do place any other files on this drive. If you expect to need a transaction log larger than one drive set, then span the transaction log across multiple physical files, placing each on its own drive set.

File Growth: This applies to both data and transaction logs. Watch the file usage and expand manually as needed during off hours. Rely on AutoGrow only as a safety net (someone adds an unexpected amount of data that fills the file). Avoid growing the files in small increments.

Maintenance: Rebuild indexes and statistics on a regular basis. The more often a table is used, the more often you should rebuild its indexes and statistics.
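As a rough illustration of the "watch usage, grow manually" advice above, here is a minimal T-SQL sketch. The database name [Navision] and the logical file name 'Navision_Data_1' are placeholders, and the sizes are examples only:

USE [Navision]
GO
-- Allocated vs. used space per file (sysfiles reports sizes in 8 KB pages)
SELECT name,
       size / 128                            AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sysfiles
GO
-- Grow the secondary data file manually during off hours,
-- in one large step instead of many small AutoGrow increments.
ALTER DATABASE [Navision]
MODIFY FILE (NAME = 'Navision_Data_1', SIZE = 40960MB)
GO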

Thanks for the input, that was useful. What is the maximum number of DB files I can create, for example on a 100 GB hard disk? Are there any limits to the number of DB files? Do I split all three (the primary, transaction log, and secondary data files) into multiple files or not? How do I decide the cache size?

There is little value in creating multiple data files unless they are on different drives. Expanding the number of physical drives in the RAID 10 set and using a good RAID controller is a much more economical way of gaining performance. Oh yeah, the database is limited to 32,767 files, including data and transaction log files, and each file can be 32 TB. I don't think this will be a limitation. Navision's DB cache setting has no impact on SQL; SQL will determine how much memory to use (server settings may affect this). How large is your database and how many users?

Those limits apply when you have the biggest SQL license. I think there are size limitations with the 'lower' SQL editions, although I don't know exactly what those limitations are.

Well, there are about 60-70 users, and some of them will be connecting through Citrix. We will be starting the implementation shortly, so I am gathering information related to the setup, mainly the splitting of the database files across the disks and the database configuration.

The only versions with size limitations are SQL Server 2005 Express (db limit 4 GB) and SQL Server 2000 MSDE (db limit 2 GB). These versions also have other limitations and are meant for smaller applications and development. The size limits I listed earlier are for SQL 2000. The different editions do have memory and processor limits, which make the lower editions impractical for running very large databases.

"My two cents": You should at least run NAVISION 4.00 Service Pack 1 with SQL Server, especially if you are considering SQL Server 2005, to get the best performance (plus the other recommendations you'll find in this forum). You can request the latest build 4.0.1.21990 (an unpublished/"unofficial" hotfix) from Microsoft; part of this update is a script that creates CLUSTERED INDEXES for all tables (CIs no longer exist by default, a really stupid "enhancement" - but obviously MS learned from that [}:)]). When it comes to files, this issue is also important: having multiple files within one filegroup causes a kind of load balancing between these files, which can increase performance when each of them is on a dedicated disk/RAID/etc. Along the same lines, it might be feasible (if you have multiple CPUs) to have multiple filegroups, as each filegroup can be processed by a parallel thread! We usually implement a dedicated filegroup for indexes, which really "pushes" performance... And:
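Along the lines of the dedicated index filegroup mentioned above, a hedged T-SQL sketch of adding one; the database name, filegroup name, logical file name, physical path and sizes are all placeholders:

USE [master]
GO
-- Add a dedicated filegroup for indexes and give it one file on its own drive/RAID set
ALTER DATABASE [Navision] ADD FILEGROUP [IndexGroup]
GO
ALTER DATABASE [Navision]
ADD FILE (NAME = 'Navision_Idx_1',
          FILENAME = 'I:\SQLData\Navision_Idx_1.ndf',
          SIZE = 10240MB,
          FILEGROWTH = 1024MB)
TO FILEGROUP [IndexGroup]
GO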

quote:

Transaction Log File Place the transaction log file for each database on its own RAID 1 drive set. Do place any other files on this drive. If you expect to need a transaction log larger than one drive set, then span the transaction log across multiple physical files, placing each on its own drive set.

I guess it was a mistake; it has to be "Do NOT place any other files on this drive!". And I would go for a "fast-as-hell" and safe RAID 10 for the NAVISION db...

Stryk: Good catch on my typo. Creating multiple files within the filegroup is a good option, but only if the files are on different RAID 10 sets (as you stated). Different controllers are even better. This can quickly drive up cost.

Do the clustered indexes really improve performance? My understanding is that only certain types of data sets/keys should be clustered, based on their content, but I'm not sure. I noticed that on my version of 4.0 SP1, Clustered is an option for the keys, and I have been toying with the idea of enabling it on some of our slow-performing tables.

Let me clarify the clustered index thing. Navision on SQL has, since version 2.5, always created clustered indexes on primary keys, and this was not changeable in Navision, although you could change it externally in Enterprise Manager or the like. From 4.0 SP1, a new Clustered property was added for a key, which allowed the developer, within Navision, to nominate a key other than the primary key to be clustered, because in many cases the primary key is not the best candidate. Even if the primary key was not explicitly marked 'Clustered', it is still clustered in the SQL database. Unfortunately, a bug in SP1 caused the clustered index on the primary key to be removed from a table if the Clustered property was not explicitly set (instead of leaving it on the primary key). This can have an adverse effect on performance - it turns the table into a 'heap' and can be expensive for many read operations. This has been fixed (either in SP2 or an SP2 hotfix, I can't remember which) so that Navision does the right thing, but any database already accessed with the SP1 version needs to be corrected using the script. The bottom line is that all Navision tables are clustered, with the exception of the bug behaviour mentioned above.
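If you suspect a database was hit by the SP1 behaviour described above, one way to check (sketched here against SQL Server 2005, which is an assumption) is to list the user tables that currently have no clustered index at all:

USE [Navision]
GO
-- Heaps = tables without a clustered index (sys.indexes type 0)
SELECT OBJECT_NAME(i.object_id) AS table_name
FROM sys.indexes i
JOIN sys.objects o ON o.object_id = i.object_id
WHERE i.type = 0
  AND o.type = 'U'
ORDER BY table_name
GO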

quote:

Do the clustered indexes really improve performance?

Ohhhhh YES they do! Let me give you an example:

Table 357 Document Dimension
Primary Key: Table ID, Document Type, Document No., Line No., Dimension Code

By standard this gives this CLUSTERED INDEX (CI): Table ID, Document Type, Document No., Line No., Dimension Code. This defines the physical sorting of records within the db, e.g.:

01: 36/1/Doc001/10000/DEP
02: 36/1/Doc001/20000/DEP
03: 36/1/Doc002/10000/DEP
...
50: 37/1/Doc001/10000/CAT
51: 37/1/Doc001/20000/CAT
52: 37/1/Doc002/10000/CAT
...

When posting a document, NAVISION accesses these records like this:

SETFILTER("Table ID", '36|37');
SETRANGE("Document Type", 1);
SETRANGE("Document No.", 'Doc001');

With this, SQL Server locks the full range from line 01 to 51. If you want to post Doc002 you are blocked, because line 03 is within this range. The more documents and dimensions you have, the more severe this blocking problem becomes.

Now change the CI like this, following the "SQL rules for indexes" of putting the most selective fields first (the PK on the NAVISION side is untouched): Document No., Table ID, Document Type, Line No., Dimension Code. This gives this physical sorting:

01: Doc001/36/1/10000/DEP
02: Doc001/36/1/20000/DEP
03: Doc001/37/1/10000/CAT
04: Doc001/37/1/20000/CAT
05: Doc002/36/1/10000/DEP
06: Doc002/37/1/10000/CAT
...

Now, with the same filter, only the range from line 01 to 04 is blocked - no influence on Doc002. And this doesn't just improve the locking behaviour; SQL Server can also access the required records MUCH faster! This simply shows how important the CLUSTERED INDEX is. And keep in mind that the CI is used by all other NON-CLUSTERED INDEXES! Finally: just the change of the CI as shown above increased the posting speed of sales documents by a factor of 4 ("quad speed")! And that's just the beginning...
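Before changing anything, it can help to see what the clustered index currently is. A small check, using the CRONUS example table name from the script later in this thread (adjust the company part of the name to your own installation):

USE [Navision]
GO
-- Lists all indexes on the table; the clustered one shows up in index_description
EXEC sp_helpindex 'CRONUS - Development CCPE$Document Dimension'
GO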

A point about moving indexes to a dedicated filegroup: I thought that, since the Navision development environment doesn't 'know' SQL, all indexes will be stored in the primary filegroup if they are ever modified or rebuilt. Isn't this correct?

If you create a new SQL Server database using the NAVISION client, C/SIDE creates 2 database files, one in the filegroup "PRIMARY", the other in "Data Filegroup 1", which is set as the default filegroup. So all new table data - and indexes - are created within "Data Filegroup 1". To move indexes into a dedicated filegroup, you have to do that "manually" from the SQL Server side. If you modify such a moved index by changing the key in NAVISION - or if you run the NAVISION table optimization - C/SIDE will reset everything to defaults.
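Since the move has to be done from the SQL Server side, here is a hedged sketch of rebuilding one non-clustered index onto the dedicated [IndexGroup] filegroup. The table and index names reuse the CRONUS example appearing later in this thread, and the filegroup name and FILLFACTOR are placeholders:

USE [Navision]
GO
-- Rebuild the secondary index in place, but on the dedicated index filegroup
CREATE NONCLUSTERED INDEX [$1]
ON [dbo].[CRONUS - Development CCPE$Document Dimension]
   ([Dimension Code],[Dimension Value Code])
WITH DROP_EXISTING, FILLFACTOR = 95
ON [IndexGroup]
GO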

Hi all. Well, putting all non-clustered indexes on a separate filegroup is always beneficial, because even on the same drive there will be less competition between tables and indexes as they grow.

Choosing a clustered index other than the primary key can be dangerous: if there are non-clustered indexes, those will reference the clustered one, and if the clustered one has poor selectivity, performance will suffer due to too many page reads. There is a tool called SQLSunrise (www.sqlsunrise.com) which puts all non-clustered indexes automatically on a separate filegroup and chooses the most selective column order for the primary key; the selectivity always depends on the data contained in the tables. The optimal fillfactors are also calculated and applied, which is crucial for the best SQL Server performance.

By default, all Navision base-table non-clustered indexes are unique, as the (missing) primary key fields are added. This does not make sense, because it only generates overhead for SQL Server: the clustered index fields are already included in any non-clustered index, because they form the clustering key of the table. SQLSunrise corrects this, which results in fewer page reads due to smaller index sizes - which also frees up space in RAM.

Also consider maintaining only the SIFT indexes you need, because they are maintained by SQL Server triggers, which is slow. SQL indexes often perform even better than SIFT indexes, and reducing SIFT to a minimum reduces blocking situations.

Manual index tuning should also be applied to get the best results. Usually there are too many never-used indexes, which slow down write operations. Those never-used indexes can easily be identified with SQL Server 2005 - SQL Server 2000 requires a lot more effort. On the other hand, some very important indexes are usually missing. For example, Navision seems to just love optimizer hints such as OPTION (FAST ...) in conjunction with ORDER BY, which pretty often forces SQL Server to choose the worst query plan. Manual index tuning can ease this pain, but you have to know what you are doing - you need to identify the problem queries, which is the hardest part, and deep SQL Server knowledge is needed here. Manual index tuning should be documented, as Navision changes can destroy the tuning work. SQLSunrise keeps all manual SQL Server indexes in mind, so that they can be identified and reapplied easily. I hope this information was helpful.
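On the point that never-used indexes are easy to find on SQL Server 2005, a sketch using the index usage DMV. Note that the counters reset whenever the instance restarts, so let the system run for a representative period first; the database name is a placeholder:

USE [Navision]
GO
-- Non-clustered indexes with no recorded seeks, scans or lookups since the last restart
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name
FROM sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats s
       ON s.object_id   = i.object_id
      AND s.index_id    = i.index_id
      AND s.database_id = DB_ID()
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
  AND i.type = 2
  AND (s.index_id IS NULL
       OR s.user_seeks + s.user_scans + s.user_lookups = 0)
ORDER BY table_name, index_name
GO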

Hi everyone. Let me first say how amazing it is that all the people here share their knowledge and experience freely. I for one am deeply grateful for all the information I have received in this forum over the years. Thanks everyone! Now back to business :slight_smile: I am one of the numerous Navision people who struggle with a lack of SQL knowledge. Now I am trying to understand this clustered index issue and the Document Dimension table. I understand how an index starting with Document No. increases performance, but I am not sure how to create it. Can I just create this key inside Navision, set the Clustered property to Yes, and that will take care of the issue? Will SQL then select this key even if there is a SETCURRENTKEY command selecting a different key? Or do I have to do anything in Enterprise Manager? /Daniel

Here is a Transact-SQL script example that would do the job:

quote:

USE [Navision]
GO
-- SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
--
-- Table: 357 "Document Dimension" (CRONUS - Development CCPE$Document Dimension)

-- Drop Non-Clustered-Indexes
DROP INDEX [CRONUS - Development CCPE$Document Dimension].[$1]
GO

-- Drop Clustered-Index
ALTER TABLE [dbo].[CRONUS - Development CCPE$Document Dimension]
  DROP CONSTRAINT [CRONUS - Development CCPE$Document Dimension$0]
GO

-- Create Clustered-Index
ALTER TABLE [dbo].[CRONUS - Development CCPE$Document Dimension] WITH NOCHECK
  ADD CONSTRAINT [CRONUS - Development CCPE$Document Dimension$0] PRIMARY KEY CLUSTERED
  -- original: ([Table ID],[Document Type],[Document No_],[Line No_],[Dimension Code])
  ([Document No_],[Table ID],[Document Type],[Line No_],[Dimension Code])
  WITH FILLFACTOR = 95 ON [PRIMARY]
GO

-- Create Non-Clustered-Indexes
CREATE NONCLUSTERED INDEX [$1]
  ON [dbo].[CRONUS - Development CCPE$Document Dimension]
  ([Dimension Code],[Dimension Value Code])
  WITH FILLFACTOR = 95 ON [IndexGroup]
GO

-- IF @@TRANCOUNT > 0 COMMIT TRAN
GO

This script first deletes the non-clustered indexes, then the clustered one, then creates the new clustered index, and finally the new non-clustered indexes. Please change the FILLFACTOR and/or FILEGROUP settings (and of course the COMPANYNAME part of the table/index names). As you can see, it would be a tough job to re-configure all clustered indexes with scripts like this.

[edited by author]

Added 15:34: I forgot - don't worry about the NAVISION side; leave everything as it is and just change the indexes on the SQL Server side. But keep in mind: if you change the index inside NAVISION (or if you use the table optimization), the changes made by the script will be undone - back to default values!

And as we were talking about SIFT, here is some advice about which buckets to disable. Example: Table 17 G/L Entry, Key No. 2: G/L Account No.,Posting Date, with SumIndexFields: Amount, Debit Amount, Credit Amount, Additional-Currency Amount, Add.-Currency Debit Amount, Add.-Currency Credit Amount. By default, these SIFT levels are activated:

Bucket No. / SIFT Level / Maintain
0 / GRAND TOTAL / No
1 / G/L Account No. / Yes
2 / G/L Account No.,Posting Date:Year / Yes
3 / G/L Account No.,Posting Date:Month / Yes
4 / G/L Account No.,Posting Date:Day / Yes
5 / G/L Account No.,Posting Date:Day,Entry No. / No

But actually only the most detailed one, bucket number 4, is required (number 5 is bullsh**); all other levels can be calculated on the basis of bucket 4. E.g. if a sum per month is required (bucket 3), SQL Server will sum up all records of bucket 4 within this period. Usually this "manual" summing up is very fast, and the gain in performance when writing data is higher than the potential loss of performance when reading. As there is a chance - depending on the business logic - that the other SIFT levels may be required, the system has to be monitored (SQL Profiler) after disabling SIFT levels to assure that no problems occur. For small, "hot" tables - such as "Warehouse Activity Line", "G/L Journal Line", "Item Journal Line" etc. and maybe "Sales Line" or "Purchase Line" - it could be feasible to remove all SIFT buckets ("MaintainSIFTIndex" = No). That's at least a start...
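To illustrate the "sum up from the most detailed level" argument, an indicative query that computes a month total directly from the detail rows. The company name, account number and dates are placeholders; with an index on G/L Account No. and Posting Date such a range sum is normally very fast:

USE [Navision]
GO
-- Month total for one G/L account, computed from the detail rows instead of a coarser SIFT bucket
SELECT SUM([Amount])        AS [Amount],
       SUM([Debit Amount])  AS [Debit Amount],
       SUM([Credit Amount]) AS [Credit Amount]
FROM [dbo].[CRONUS International Ltd_$G_L Entry]
WHERE [G_L Account No_] = '2910'
  AND [Posting Date] >= '20060101'
  AND [Posting Date] <  '20060201'
GO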

Would running the SQLSunrise NavTune fix the clustered index for the Document Dimension table? So running that every night instead of the table optimization would take care of it? Thanks! Daniel

We are now a long way from 60-70 users connecting to Navision on SQL. I strongly recommend going with a vanilla implementation of SQL as per the documented recommendations and then monitoring and tuning the performance. Without doubt the SQL implementation of Navision will perform less quickly than native, BUT bottlenecks and contention will primarily be generated by the application and its associated demands on the database. 40 sessions running sales order entry concurrently will create their own demands, whilst end-of-period inventory reporting will create other demands.