I would like to know what the latest recommendations are regarding Maintenance Plans for Dynamics NAV on SQL 2005.
First: “Maintenance Plans” with SQL 2005 have much improved since SQL Server 2000 (where the sqlmaint.exe frequently crashed [:@])!
When creating Maintenance Plans, I recommend not putting all kinds of maintenance into a single plan, but creating separate MPs instead; that makes later adjustments easier.
So what I would propose is to do this via MP:
- Creating Full/Diff/TLog backups of the NAV database (daily, hourly, etc.)
- Creating Full backups of the system databases master, msdb, model (weekly)
- Cleaning-up old backup files
- Cleaning-up old history entries
- Re-Organize Indexes* **
I do not recommend using the “Statistics Update” feature; this is better done by executing some T-SQL via a job.
*) Regarding Re-Indexing: This is a critical point in NAV. Actually it is necessary to do a full index optimization, but that’s a different story. Anyway, re-indexing will keep performance on a “reasonable” level.
**) Caution: with build 9.0.1399 there is a bug when defining a fillfactor! Don’t use this feature, upgrade to the latest version (SQL Server 2005 SP2a 9.0.3050, bug is solved in SP1 9.0.2047)
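For illustration, the backup tasks in the MPs above could also be scripted as plain T-SQL in Agent job steps. This is only a sketch; the database name `NAVDB` and the target paths are placeholders, not from the post:

```sql
-- Hypothetical sketch of the backup tasks; "NAVDB" and paths are placeholders.

-- Daily full backup of the NAV database
BACKUP DATABASE [NAVDB]
TO DISK = N'X:\Backup\NAVDB_full.bak'
WITH INIT, CHECKSUM;

-- Hourly transaction log backup
BACKUP LOG [NAVDB]
TO DISK = N'X:\Backup\NAVDB_log.trn'
WITH INIT, CHECKSUM;

-- Weekly full backups of the system databases
BACKUP DATABASE [master] TO DISK = N'X:\Backup\master_full.bak' WITH INIT;
BACKUP DATABASE [msdb]   TO DISK = N'X:\Backup\msdb_full.bak'  WITH INIT;
BACKUP DATABASE [model]  TO DISK = N'X:\Backup\model_full.bak' WITH INIT;
```

In practice you would put a date/time stamp into the file names so the cleanup task can delete old files by age.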
Thank you. We are, of course, already running the backup part of the MPs. It was more about the other jobs.
Why would you not run the update statistics in a MP?
I would usually recommend the following.
Transaction log backup every 15 min
Full Backup once a day
I recommend that the backups are copied offsite as soon as possible (easy with the transaction logs over a WAN but may not be possible for the full backup).
I make physically removing the backup files part of a “backup of the backups”: a process dumps them to tape and, if that succeeds, deletes files older than two days.
I try to time the backups to happen after any overnight polling/processing but before the start of the working day, so the first thing the DBA should do is remove the backup tape and put it in a fireproof safe.
Make sure the backups go to their own drive, not the database or operating system drive. If one thing is going to fail in the database world it is the backup, and you will end up filling the database or OS drive with backups until something eventually stops.
Also use the SQL Server mail system to notify the DBA when one of the processes fails.
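As a sketch of how that notification can be wired up with SQL Server Agent (this assumes Database Mail is already configured; the operator name, address, and job name are placeholders):

```sql
-- Hypothetical sketch: e-mail a DBA operator when an Agent job fails.
-- Requires Database Mail; names and addresses below are placeholders.
USE msdb;
GO
EXEC dbo.sp_add_operator
    @name = N'NAV DBA',
    @email_address = N'dba@example.com';
GO
EXEC dbo.sp_update_job
    @job_name = N'NAVDB - Full Backup',
    @notify_level_email = 2,                 -- 2 = notify on failure
    @notify_email_operator_name = N'NAV DBA';
```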
Oh yes, this one is very important: take one of your backup tapes and do a full database restore to midday on a different server, just to make sure everything is working properly. You would be surprised how often things are not working. The time to find this out is not when you have a system crash and are trying to rebuild the database. The restore should be done monthly and timed to see how long it takes. This way the DBA knows that the backup works, how to do a full restore, and how long it will take — invaluable when everything goes wrong.
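A restore test like the one described could look roughly like this. Everything here is a placeholder (test database name, file paths, logical file names, and the STOPAT time), so treat it as a sketch, not a recipe:

```sql
-- Hypothetical sketch of a point-in-time restore test on a spare server.
-- Restore the full backup first, without recovering...
RESTORE DATABASE [NAVDB_Test]
FROM DISK = N'X:\Backup\NAVDB_full.bak'
WITH MOVE N'NAVDB_Data' TO N'Y:\Data\NAVDB_Test.mdf',
     MOVE N'NAVDB_Log'  TO N'Y:\Log\NAVDB_Test.ldf',
     NORECOVERY;

-- ...then roll the log forward to midday and recover the database.
RESTORE LOG [NAVDB_Test]
FROM DISK = N'X:\Backup\NAVDB_log.trn'
WITH STOPAT = N'2007-05-14 12:00:00',
     RECOVERY;
```

Timing this run also gives you the recovery-time number the post asks for.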
I would also have some process to defrag/ rebuild indexes on a weekly basis
The “Update Statistics Task” of the MP actually performs an UPDATE STATISTICS statement. That means existing stats are updated, which presumes that the stats have been created previously. But stats are only created automatically if the “Auto Create Statistics” property is set; and this is not really recommended, as it will cause the creation of a lot of useless statistics (I skip the details here).
So, I usually disable the “Auto. Create/Update Stats” stuff, and implement an Agent job, which executes just this on the NAV database:
exec sp_createstats 'indexonly'
This creates the missing INDEX statistics; the existing stats are updated separately (e.g. with sp_updatestats). (Normally, before implementing this I run my own procedure to get rid of all the “useless” stats first.)
Finally, all required stats are created and up-to-date!
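The Agent job described above could be sketched like this. `NAVDB` is a placeholder, and the explicit `sp_updatestats` call is my addition to make the “update existing, then create missing” sequence concrete:

```sql
-- Sketch of the statistics Agent job; "NAVDB" is a placeholder database name.
USE [NAVDB];
GO
-- Update all existing statistics in the database...
EXEC sp_updatestats;
GO
-- ...then create any missing statistics, on index columns only.
EXEC sp_createstats 'indexonly';
```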
I agree, but if you have more or less 24-7 activity (we have users in both Americas, EMEA and Asia), what’s then the best solution?
In the optimization topics there are a few scripts related to SumIndexField optimization. On the SQL side there are also a few scripts (which I never tested) for index optimization.
I also don’t usually run update statistics. I prefer to periodically rebuild indexes during the night. I might not rebuild large tables every night. You may have a 24x7 business, but don’t forget that the individual companies don’t run 24-hour days, so you can rebuild indexes per company.
SQL Server Enterprise Edition has the option to rebuild indexes online, even during business hours, without blocking users (never tested).
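For reference, the online rebuild mentioned above is a one-liner in T-SQL (Enterprise Edition only; the table and index names below are made-up placeholders):

```sql
-- Hypothetical example: online index rebuild (Enterprise Edition feature).
-- Users can keep reading and writing the table while it runs.
ALTER INDEX [MyIndex] ON [dbo].[MyTable]
REBUILD WITH (ONLINE = ON);

-- Or rebuild all indexes of the table online:
ALTER INDEX ALL ON [dbo].[MyTable]
REBUILD WITH (ONLINE = ON);
```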
You need to keep the statistics on the indexes current, though. sp_updatestats will update the index statistics, which in turn makes the query optimizer more accurate.
Good statistics are critical to the query optimizer. But there are several ways to update statistics: automatic (slower), asynchronous (runs in the background and updates when it doesn’t affect a query; new in SQL Server 2005), or disabled (updated manually).
On some indexes the automatic update can be disabled, and it’s enough to rebuild those indexes during the night.
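The statistics modes listed above map onto T-SQL like this (a sketch; `NAVDB`, the table, and the index name are placeholders):

```sql
-- Sketch: the statistics-update modes mentioned above, as T-SQL.

-- Asynchronous auto-update (new in SQL Server 2005):
ALTER DATABASE [NAVDB] SET AUTO_UPDATE_STATISTICS_ASYNC ON;

-- Disable automatic updates for the statistics on one index,
-- so they are refreshed only by the nightly rebuild:
EXEC sp_autostats N'dbo.MyTable', 'OFF', N'MyIndex';
```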
In SSIS you can rebuild indexes instead of updating stats.
As I have said, I normally prefer rebuilding. On quite large tables I periodically update stats, and on the weekend I rebuild indexes.
But each business has different needs….
BTW, is anyone optimizing SIFT (sum index) tables using SQL queries?
My plan is:
- Job 1: DBCC DBREINDEX [Table] x n (n = processing tables) - daily
- Job 2: DBCC DBREINDEX [Table] x n (n = entry tables) - weekly
- Job 3: DBCC DBREINDEX [Table] x n (n = other tables) - monthly
- Job 4: DelZeroSift - daily (ref: http://www.mibuso.com/dlinfo.asp?FileID=812)
- Job 5: Update Stats - daily (as Jorg Stryk recommended)
- The maintenance of Logs etc is a bit more detailed…
I am changing to this by the end of May and am currently testing it.
Config: SQL 2005, DB = 50 GB, 141 concurrent users, 3 integrated eCommerce sites, warehouse, SQL Reporting Services
I would like to know if anyone has some more advice for this plan?
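For reference, jobs 1-3 in the plan above boil down to something like the following. The table names are placeholders; `DBCC DBREINDEX` is the SQL 2000-style syntax (still supported on 2005), and `ALTER INDEX ... REBUILD` is the SQL Server 2005 equivalent:

```sql
-- Sketch of the re-index jobs; table names below are placeholders.

-- SQL 2000-style syntax, still supported on SQL Server 2005:
DBCC DBREINDEX (N'dbo.[Sales Line]');
DBCC DBREINDEX (N'dbo.[Item Ledger Entry]');

-- The SQL Server 2005 equivalent for one table:
ALTER INDEX ALL ON dbo.[Sales Line] REBUILD;
```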
I think that you should also check the integrity of your databases. This is done with DBCC CHECKDB.
About the index rebuild and reorganize you could check the index fragmentation levels with sys.dm_db_index_physical_stats. This way you only have to do something about the indexes with a high fragmentation level. You could also dynamically do a rebuild or a reorganize based on the fragmentation level.
The advantage with this is that your index optimization will run faster and that your transaction log backups after the index optimization will be smaller.
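A minimal sketch of that idea is below. The 5% / 30% thresholds are common rules of thumb, not from the post, and this only reports suggested actions rather than executing them:

```sql
-- Minimal sketch: fragmentation-driven index maintenance (SQL Server 2005).
-- Thresholds of 5% / 30% are common rules of thumb, not from the post.
SELECT OBJECT_NAME(ips.object_id)          AS table_name,
       i.name                              AS index_name,
       ips.avg_fragmentation_in_percent,
       CASE
           WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
           WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
           ELSE 'leave alone'
       END                                 AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.index_id > 0;   -- skip heaps
```

A real job would build and execute the matching `ALTER INDEX ... REBUILD` or `... REORGANIZE` statements from this result set.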
I have a stored procedure that could help you with this. It’s being used today on a very large Navision installation.
Thanks Ola, pretty impressive. I can’t wait to have a look at these [:D] You rock
Thanks, Nicholas. Please feel free to contact me directly if you have any questions on how to use it.
Is there any document with Maintenance Plan recommendations for the NAV database?