Could anyone (who has a gigantic Navision database) share some knowledge with me on how to manage the exponentially growing data? I somehow couldn’t visualize a Navision database growing to hold 10 years, 20 years … worth of data without some kind of data archiving and starting over …
Does it make sense (or is it even possible) to use a SQL script to go through each Navision table, filter the last 2 years of data where applicable, and bulk insert them into a new production database?
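Something along these lines is what I have in mind, as a rough sketch only. The company prefix, table name, and the NewProd/CurrentProd database names are placeholders for illustration, and each Navision table would need its own filter (some have no date column at all):

-- Copy only the last 2 years of one table into a new, empty production database.
-- The target table must already exist with an identical schema.
-- NOTE: if the table has a timestamp/rowversion column, list the columns
-- explicitly instead of SELECT *.
INSERT INTO NewProd.dbo.[CRONUS$G_L Entry] WITH (TABLOCK)
SELECT *
FROM CurrentProd.dbo.[CRONUS$G_L Entry]
WHERE [Posting Date] >= DATEADD(YEAR, -2, GETDATE());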
You raise an interesting question here. One for which there is no single right or wrong answer. What makes sense for one database may not be right for another. In fact, what works for one table may not be right for another.
To start with I think you have to analyze your database to determine where the majority of the data is, and how it’s used. For example, would it make sense to worry about deleting records from a table that represents only a small fraction of the database?
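A query along these lines (standard SQL Server DMVs, nothing Navision-specific) will show where the space actually goes, largest tables first:

-- List tables by reserved space, largest first.
SELECT t.name AS table_name,
       SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count,
       SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t ON t.object_id = ps.object_id
GROUP BY t.name
ORDER BY reserved_mb DESC;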
The first thing is to ask yourself what you would gain from deleting the data. Also what you would lose. Repeat that for each table.
What I am thinking is that if I only need to access and use, at the oldest, the last 2 years of data, then could I:
1. Take a full backup of the whole db, of course.
2. Restore the backup to a different server to make sure the whole db is up and running OK.
3. Delete all past data (through SQL), except the last 2 years up to current, from the production database, so it can serve as the latest new production db (yes, I do have to go through each table and run analysis; good thing not all Navision modules are used). See the sketch below.
4. The backup of #3 could be restored as a development db.
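For steps 1 and 3, something roughly like this is what I’m picturing; just a sketch, with the database name, backup path, table, and company prefix as placeholders, and every table (plus its related entry/dimension tables) would need its own analysis and filter:

-- Step 1: full backup of production before touching anything.
BACKUP DATABASE NavProd
TO DISK = N'D:\Backup\NavProd_full.bak'
WITH COMPRESSION, CHECKSUM;

-- Step 3: per table, purge everything older than the 2-year cutoff.
DECLARE @cutoff date = DATEADD(YEAR, -2, GETDATE());

DELETE FROM NavProd.dbo.[CRONUS$G_L Entry]
WHERE [Posting Date] < @cutoff;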
My opening point is that you may not need to deal with all tables. For example, in the database I’m looking at right now, only a few tables account for roughly 88% of the used space. So would I really need to bother looking elsewhere if I wanted to reduce the size of this DB?
I think I kind of understand your point, yet I’m not sure what else I could do if not deleting records:
Say the GL, of course, is huge, as are the ledger entry dimensions where Table ID = 17, plus a couple more custom tables. How could I shrink/reduce the file size without deleting GL entries? And if I delete GL entries, how could I not also have to delete the corresponding GL-related entries, e.g. the vendor ledger, which has its corresponding detailed vendor ledger and their corresponding ledger entry dimensions? (If I want to get a new working production db to start over while archiving the old data.) I guess here’s where I got stuck.
This illustrates my point that you have to take each table as a unique case. For tables such as posted documents, you could take the approach of deleting everything prior to a certain date. But with tables such as GL, you would need to take more of a data compression approach.
But what is the actual goal? Is it to reduce the number of records, or the physical size of the database?
Have you looked at the record and page compression features of SQL Server Enterprise? These can reduce the size of selected tables, and the overall database, without removing any data.
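For example, you can estimate the savings before changing anything with sp_estimate_data_compression_savings (the table name here is just an example):

-- Estimate what page compression would save on the clustered index
-- of the G/L Entry table (NULL partition number = all partitions).
EXEC sp_estimate_data_compression_savings
     @schema_name = N'dbo',
     @object_name = N'CRONUS$G_L Entry',
     @index_id = 1,            -- clustered index only
     @partition_number = NULL,
     @data_compression = N'PAGE';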
Below are some notes from a compression test I ran a while back. This was using page compression on a few tables.
The resulting space savings were within 10% of what was estimated by the “estimate” query.
Initial compression was fairly quick. GL Entry (150 million records) took 5 minutes and recovered about 80% of its space. CPU and disk utilization were fairly high during this time; all 16 cores were running at 96-98%.
After compressing a few tables, the database recovered about 25% of its space. This also improved backup time by about 18% and reduced the backup file by about 15%. This was with compressed backups before and after.
On the performance side I had mixed results, but nothing bad. I used a couple of reports for this testing.
This did not seem to help processes that read data in small chunks (even though the table was large). For this example I used the Inventory Valuation report. While the Value Entry table has many records, this report only reads them in small chunks. I saw no difference between uncompressed and compressed.
On the other hand, it did seem to slightly improve processes that read large numbers of sequential records. For this test I used the G/L Register report with a register containing ~30,000 GL Entries. Using the compressed version yielded about a 15% performance improvement.
Thank you so much. I’m encouraged and hopeful. Could you confirm a couple of things for me? I thought I read that compression is not recommended for Navision … or maybe that meant date compression, which is different from SQL page compression? And after page compression, the GL table can only be read, but not posted to, correct?
I’m reading about the SQL Server record structure and record and page compression right now, and I think I have my two questions answered: date compression is different from record compression, and the GL table should function as normal.
I would test out both record compression and page compression in a development environment. Anything else I need to pay attention to?
Page compression is not for every table. It is best suited to tables where records are only added to the end of the clustered index and not modified. GL Entry and Value Entry are good examples.
Compress only the table (clustered index).
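For example, something like this (the table name is a placeholder; an ONLINE rebuild is also possible on Enterprise if you can’t take the table offline):

-- Rebuild just the table (its clustered index) with page compression;
-- nonclustered indexes are left uncompressed.
ALTER TABLE dbo.[CRONUS$G_L Entry]
REBUILD WITH (DATA_COMPRESSION = PAGE);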
And yes, test it well.
Not sure if I mentioned it, but this does require SQL Server Enterprise edition.