Why disable caching

When you have more than one database part, the Navision rule is to disable write cache on the: - Controller - Hard disks (RAID1). But why? (I have a UPS), and this hardware caching was invented to speed things up (see HP/IBM). The controller even has its own UPS! Navision doesn’t know about the hardware, so …

Please note that I really need a whiteboard and three colored markers to explain this properly.[:p] Navision Server basically runs two caches, the normal cache and the commit cache. If you have write transactions in the commit cache and you lose power, you will lose any transactions in the cache at that time, BUT the database will remain consistent, and all you will need to do is re-enter the lost data. Commit cache is an extension of the hard disks, which is why there is one commit cache (SLAVE.EXE) for each physical hard drive.

Navision uses a version principle which ensures that only complete transactions are ever committed to the database, but it makes no distinction whether the commit has been made to commit cache or disk. The way the tree works is that after all the data is written, the pointer changes. The sequence of writes here is quite critical, since changing the pointer is in effect the commit, and if the commit happens in the middle of a transaction then that transaction is corrupt, and the database is destroyed.

If you have a disk controller with a write cache, then it is probably intelligent, and will write to disk by writing to the track that is nearest the head at that time. So I may have 5 blocks to write, call them A, B, C, D and E, with E being the commit. Now these may be scattered over the hard disk, with A, D and E in the center, B on the outside, and C on the inside. The controller may decide to write the tracks in the sequence C, A, D, E, B for performance (this can only be done if the controller or hard disk has a write cache). You can see that if anything goes wrong, say C, A, D, E are written but not B, then we have a corrupt database. It’s not just a case of replacing one of the drives in the RAID 1 array; the system could still be fully functional with a corrupt database. I know that controllers could be built to be Navision-aware, but they aren’t.
Maybe with Microsoft, some smart company out there will design a controller that can use a write cache with Navision, much as you can get SQL-aware controllers. By the way, similar logic can be applied to why RAID 5 arrays often corrupt Navision databases. In any case there really is not enough performance gain to justify the risk. Hope this helps more than it confuses. PS: I have been told that write caches in some circumstances can actually slow down Navision commit performance, but I have not been able to prove it to myself mathematically, so thus far I have disregarded it.
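The write-reordering scenario above can be sketched in a few lines of code. This is purely an illustration (the block names and crash points are made up, not anything Navision actually exposes); it just shows why a reordered flush plus a crash leaves the commit pointer referencing incomplete data:

```python
# Hypothetical sketch of the A,B,C,D,E example from the post. Blocks A-D
# are transaction data; E is the commit pointer. The database is only
# consistent if E reaches disk strictly AFTER all of A-D, or not at all.

def crash_is_safe(flushed):
    """Return True if a crash after writing `flushed` leaves a consistent DB."""
    if "E" not in flushed:
        return True  # pointer unchanged: old version still valid, just re-enter data
    return {"A", "B", "C", "D"} <= set(flushed)  # pointer written: need all data

# No write cache: blocks hit disk in issue order, commit pointer last.
in_order = ["A", "B", "C", "D", "E"]
print(all(crash_is_safe(in_order[:k]) for k in range(6)))  # True: safe at every step

# Write-back cache: controller reorders by head position, e.g. C,A,D,E,B.
reordered = ["C", "A", "D", "E", "B"]
print(crash_is_safe(reordered[:4]))  # False: E is on disk but B is not -> corrupt
```

The point of the sketch: every crash point in the ordered sequence is recoverable, while the reordered sequence has a window where the "commit" exists but the data it points at does not.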

Obviously I’m talking to the right person here :slight_smile: Your answer raises more questions, if you don’t mind.

1. I can buy a HDD with more cache, but is there a parameter in Navision to size the commit cache?
2. What happens with a large commit? The normal cache can’t hold it and needs to flush to the HDD.
3. How much performance is lost by not using hardware cache?
4. I heard on some HDDs you can set the cache to read-only. Does this help?
5. Does commit cache only work when you have more than one database part?


Originally posted by ajhvdb
Obviously i’m talking to the right person here :slight_smile:

Thanks [:I] - 1 - In the early days it was possible to manually configure the Navision cache and commit cache separately, but it became complex, so Navision changed it to a Yes/No parameter instead. When Navision starts, 1/3 of the cache is allocated to normal cache, 1/3 to commit cache, and 1/3 is dynamically allocated between the two. The commit cache allocation is divided equally between each of the hard disks. The normal cache (which is used primarily for sorting tables) is FAR more important to performance than commit cache. For ideal performance, the normal cache should be 1.6 times the size of the largest key that you want to sort (Database → Information → Tables → Keys; it will probably be the primary key of Item Ledger Entries).
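The split described in point 1 is easy to put into numbers. A minimal sketch, assuming an example 300Mb DBMS cache and a made-up largest-key size (read yours from Database → Information → Tables → Keys):

```python
# Cache split per the post: 1/3 normal, 1/3 commit, 1/3 dynamic.
# The 300Mb total and 55Mb key size below are illustrative numbers only.
dbms_cache_mb = 300

normal_cache_mb = dbms_cache_mb / 3   # fixed normal cache (used for sorting)
commit_cache_mb = dbms_cache_mb / 3   # fixed commit cache, shared across disks
dynamic_mb = dbms_cache_mb / 3        # floats between the two as needed

# Rule of thumb: normal cache should be ~1.6x the largest key to sort,
# typically the Item Ledger Entry primary key.
largest_key_mb = 55                   # hypothetical value for your database
recommended_normal_mb = 1.6 * largest_key_mb

print(normal_cache_mb, recommended_normal_mb)   # 100.0 88.0
print(normal_cache_mb >= recommended_normal_mb) # True: this cache size is adequate
```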

  • 2 - Once the commit cache reaches 2/3 of the available cache, the server locks out all transactions and forces a flush to disk. Again, in the earlier versions you could manually set the parameters, but not now [V]. If ever you get the message that the cache is being flushed, then you MUST get more drives; there is no other real solution. Getting more RAM is NOT the correct solution.
  • 3 - I was told that a write cache on a hard disk would actually SLOW Navision down! The logic being that you are writing from one cache to another unnecessarily, but I think with controllers these days a write cache would be marginally faster. BUT definitely not enough that I would risk using it. Buy more hard disks instead.
  • 4 - Setting the read cache on a hard disk can help, and you should use it.
  • 5 - Navision shares the commit cache equally amongst the hard disks. If you have only one hard disk, it gets 1/3 of the cache; if you have 5 hard disks, they each get 1/5 of 1/3 of the cache. This is a reason it is CRITICAL that the database parts are all exactly the same size to get optimum performance.
    Now one last comment. Commit cache will have no effect whatsoever on the overall performance of your system, nor will it have any effect on how fast you can commit data to the disk arrays! All the commit cache really does is give a reply back to each client faster. In the meantime, the SLAVE.EXE programs are busy doing the work that the server would normally be doing. Think of commit cache as something like a print buffer, which does nothing to help you print faster; it just lets you get on with something else while the job is being printed. If there are another 10 jobs in the queue, I still need to wait for them to be done before mine is ready. If I find that the printer is not fast enough to print everyone’s print jobs in a day, then I need to get either more printers or a faster printer. Hard disks are the same for Navision. If you like I can send you some objects that allow you to monitor server performance, so you can study this in more detail.
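The per-disk division in point 5 can be shown with a tiny helper. This is a sketch of the arithmetic only (the 300Mb cache size is an example, and `commit_cache_per_disk` is a made-up name, not a Navision parameter):

```python
# Per-disk commit cache share described in point 5: each physical disk's
# SLAVE.EXE gets an equal slice of the 1/3 of the DBMS cache reserved
# for commit cache.
def commit_cache_per_disk(dbms_cache_mb, n_disks):
    return (dbms_cache_mb / 3) / n_disks

print(commit_cache_per_disk(300, 1))  # 100.0 Mb on a single disk
print(commit_cache_per_disk(300, 5))  # 20.0 Mb each across five disks
```

This also illustrates why equally sized database parts matter: the cache slices are equal regardless of part size, so a larger part gets proportionally less cache per Mb of data.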

Yes, send them to me. I’ve used the standard objects but will look at what you’ve got!

7. How large does a database part need to be to give the best performance? The number I’ve learned is 4Gb.
8. When you format a partition in Windows 2000 the allocation size = default. Because the HDDs are 18Gb and my parts are 4Gb, maybe the OS can help with performance too?

7 - Actually 2Gb is ideal, but 4Gb seems to be OK. 8 - Ideally the drive should be as small as possible, and the partition should also be as small as possible. But if you make small partitions, make sure you don’t think about using the rest of the space for something else. 6 - What happened to 6?

  8. So, on the HDD I create a 4 to 5Gb partition for a 4Gb database part, and the rest of the 18Gb HDD is unused space. So many questions; while typing 6, I answered it myself.

8 - yes.

Thank you, I will change our installation procedure to create smaller partitions. On the client there is cache too :slight_smile:

9. The object cache is easy to understand. We set the DBMS cache high (160Mb) for a heavy user because we found out this really helps to speed things up. Why? (Maybe temp-table vars.)
10. A lot of data is sent and processed on the client. Is it an option to create a RAM drive and point the temp path to it?
11. Does adding a faster network card help? (from 100Mb to 1Gb) The server has 1Gb.

The above options are related to heavy users only!

By the way, the performance increase from reducing the size of the partitions is probably microscopic; it is more a case of being aware of it. Just try to keep all the disks identical and the partitions the same size.

9 - You may be confusing things a little here. Going to options on a client and changing DBMS Cache or Commit Cache has NO EFFECT at all. Object cache is where Navision stores objects locally. The entire application is about 70Mb, so even on a busy day you are never going to use close to this, so there is no point in setting the object cache above, say, 25Meg. Even 8M is normally more than you ever need.

10 - Pointing the temp path to a RAM drive can help, but not a lot. What is important is to make sure it is located locally on a fast drive. For example, if you are running on Citrix, you need to make sure that the temp path is pointing to a drive that is physically on the Citrix box. I have seen a lot of slow Citrix implementations because of this.

11 - For a client, 100 Meg is more than you will need. What you really want is the 1Gb from the server going to a 100 Meg switch, and just make sure that the heavy users have a true 100 Meg to the server. In normal running, 10 Meg is enough per client; it is just a matter of getting that bandwidth balanced. If you have some tasks that really seem to chew bandwidth, then run those tasks (posting or printing) on your Citrix server, which can then be linked by a high-speed connection to the Navision server.

11a - DON’T run Navision on the server when people are working, and NEVER run big jobs on the server. It may seem faster for that task, but you are using resources that are then slowing everyone else down.
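The bandwidth balancing in point 11 is simple arithmetic. A sketch with the figures from the post (1Gb server uplink, ~10Mb steady-state per client; the variable names are mine):

```python
# Rough bandwidth budget from point 11 (illustrative, steady-state only;
# ignores protocol overhead and bursty traffic).
server_uplink_mbit = 1000   # 1Gb NIC from the server to the switch
per_client_mbit = 10        # typical need per Navision client in normal running

max_clients_at_full_load = server_uplink_mbit // per_client_mbit
print(max_clients_at_full_load)  # 100 clients before the uplink saturates
```

This is why the advice is to balance the bandwidth rather than upgrade client NICs: the server uplink, not the 100Mb client link, is the shared resource.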

  9. I understand your point about setting DBMS cache; it’s only used when the client is opening a local database. That’s what I thought also, BUT if you change it, for example, to 150Mb, connect to a server and start a heavy process (where temp-tables are used in code), it really speeds things up.

Are we the only solution center that uses this trick to speed up processing, or is this trick common to you all …?


Update: I had been informed many years ago that using the write cache on a Navision server’s hard disks would actually slow down the server, but though I realized it was dangerous, I never saw it as a performance issue. Well… I am working with a client that has a guy in IT who likes “toys”, so against all my recommendations, he decided to go with write cache on the server. We had a test server, which was basically a workstation: 1G P4, one 20Gb IDE drive, 256Mb RAM, etc. With 25 users it was slow but usable for testing. So after installing their 4 x Xeon / 4Gb RAM, 12-HD machine, it performed like a dog (no insult meant to hard-working dogs, of course). A rename on Cronus took 2 hours!!! After a struggle to get the write-back cache turned off, the machine flew… Moral of the story: write cache on the server absolutely killed performance. Also, a more major issue was that we were continually getting locking issues, which have also now gone. My feeling here is that the write cache was interfering with Navision’s commit cache, and incorrectly reporting data back on reads.

Thanks for your update.