Optimizing Tables

Does anyone have/know of a good set schedule for optimizing tables, i.e. once a week, once a month, or once every six months? How important is it to do this, if at all?

Only do it to save space; it does not make Navision faster. (Unless you have less than 25% of the database free, in which case speed increases indirectly because of the extra space.) Buy more disks, they are cheaper than your time.

Optimisation does not make Navision faster? Is that really true? I spend 2 to 3 hours every 2 weeks doing the optimisation.

ExpandIT www.expandit.com provides a tool for automatic optimization of tables … Regards, Jörg

quote:


Originally posted by mumu
Optimisation does not make Navision faster? Is that really true? I spend 2 to 3 hours every 2 weeks doing the optimisation.


As with almost every program, there are two parameters which can be optimized: time (= runtime) and space (= memory consumption). Usually, you can only optimize one of these at the cost of the other [;)] Navision optimization is an optimization of the disk space used up by your tables. This may make read access a bit faster, but modifications will take more time, because the table will reorganize itself again during inserts and deletes. You might want to read a bit about B-, B+- and B*-trees to get the idea - I don’t know how to explain this in less than five screen pages [:p]
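Heinz’s trade-off can be shown with a toy model. To be clear, this is only an illustration of the general B-tree idea, not Navision’s actual storage engine; the page capacity and the 50/50 split policy here are invented for demonstration.

```python
# Toy model of the space/write-speed trade-off in B-tree-like index pages.
# NOT Navision's real storage engine: the page capacity and 50/50 split
# policy are made up for illustration.

PAGE_CAPACITY = 4  # keys per page (tiny, to make splits easy to see)

def insert(pages, key):
    """Insert key into a sorted list of pages; return the number of page splits."""
    for i, page in enumerate(pages):
        # Key belongs in this page (or we have reached the last page).
        if key <= page[-1] or i == len(pages) - 1:
            page.append(key)
            page.sort()
            if len(page) > PAGE_CAPACITY:
                # Page overflowed: split it 50/50, as a B-tree would.
                mid = len(page) // 2
                pages[i:i + 1] = [page[:mid], page[mid:]]
                return 1
            return 0
    return 0

# After an "optimize": the same keys packed into completely full pages.
packed = [[1, 2, 3, 4], [5, 6, 7, 8]]
# Before: the keys spread out with slack left over from earlier splits.
loose = [[1, 2], [3, 4], [5, 6, 7, 8]]

print(insert(packed, 3))  # 1 -> the freshly packed page must split immediately
print(insert(loose, 3))   # 0 -> the half-empty page absorbs the key
```

The freshly packed layout saves space, but pays for it with a split on the very next insert into a full page - which is exactly why reads can get a bit faster while writes get slower.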

Sorry Heinz, but for Navision 2.00 Native databases you are wrong about the modify time, because there is no real reorganization. The data is stored elsewhere in the database and only the pointer structures of the indices are updated. Even the new index is not stored in a list which must be reorganized. BUT: in many experiments with database optimisation, we did not see any effect on runtime. The only thing that seemed to happen is that all jobs had enough memory to run.

We use Expand-It Table Optimizer and it works great on our 3.60 database (5GB). We have it run every Sunday at 5:00pm. It runs for about 3 hours, frees up lots of space, and we see some speed improvement.

quote:


Originally posted by pszoke1
… and we see some speed improvement.


Can you document the speed increases? I am curious about the performance gains you are getting. The only thing that really should run faster is something like a backup, since there would be slightly less disk activity involved, but any write transaction must take longer after the tables are optimized.

By the way, in the old days of Navision, optimizing did make the system faster, but only because you had to buy database space, and users would therefore ignore the 25%-free guideline and run with 80% or 90% of their database used. Obviously, in those cases, optimizing when it takes you from 20% free to 25% free makes a big difference.

You can also split this discussion in two, for Navision Server and for SQL Server. Both use balanced index trees driven by a clustering index (on SQL it doesn’t have to be, but in practice it is, and all Navision SQL tables are clustered). But the two implement index page overflow, page splits, and secondary (i.e. non-primary) indexes differently, so a general discussion of the B-flavours of trees is not really relevant.

In Navision Server, an optimization will result in fewer page reads, because page splits and overflow pages are reorganized by the optimization rather than in place during modifications. Fewer page reads bring about faster index data access, at least in the short term, but the degree to which this is noticeable and measurable is proportional to the update activity that caused the overflow. Its principal effect is space saving in the clustered index pages, and to a lesser degree in the secondary index pages.

SQL Server is quite different. A Navision optimization runs a SQL CREATE INDEX … WITH DROP_EXISTING, which addresses fragmentation within index pages; that fragmentation can affect query performance when it becomes inefficient. This is more noticeable when a FILLFACTOR is used for the index (Navision does not expose this property, but it can be changed externally for an index).

In both cases it is sensible to determine the benefits of optimizing, rather than just doing it because of a belief of improvement. On SQL Server, an UPDATE STATISTICS is usually more beneficial, because this directly affects execution plans.
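As a concrete sketch of the two SQL Server statements mentioned above: the table and index names here are hypothetical (Navision generates its own object names on SQL Server), and the FILLFACTOR value is only an example.

```sql
-- Hypothetical names: "CRONUS$Item Ledger Entry" and its clustered index.
-- Rebuild the index in place, roughly what a Navision Optimize issues:
CREATE UNIQUE CLUSTERED INDEX [Item Ledger Entry$0]
    ON [CRONUS$Item Ledger Entry] ([Entry No_])
    WITH DROP_EXISTING, FILLFACTOR = 90;

-- Often more beneficial on SQL Server: refresh the statistics the
-- query optimizer uses when building execution plans.
UPDATE STATISTICS [CRONUS$Item Ledger Entry];
```

The FILLFACTOR line is the knob Robert refers to as not exposed by Navision: a lower fill factor leaves slack in the pages, trading space for fewer page splits on subsequent writes.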

Thank you very much for the detailed explanations. Now I understand the Navision optimization logic. I will buy more hard disks rather than come to the office early. I use ExpandIT for Backup, ECC and ECM; however, the ExpandIT optimisation is quite costly for me. Best regards,

When I have run it on Financials 2.6 and SQL 2000, it seemed to be slower for me (we typically do a lot of writes), but the space savings were large in some cases, especially on some of the big, frequently used tables.

Thanks Robert for adding some clarity. It is clear that optimization in Navision should make a theoretical increase in read speed, which should actually be trivial. On the other hand, the fact that write time becomes higher can be noticeable in a heavy transactional environment. In the end, adding more drives will add a lot of additional performance to your system, and will cost very little. It is the preferred path.

It’s nice to see that this little question turned into such a good discussion. Thanks to everyone for their input.