Attain 3.60 server problem!

Hi! We have Attain 3.60 running on a Windows 2000 server with its own database (32 GB), 1 Xeon processor and 2 GB RAM. Our database is spread across 16 different disks (database1.fdb, database2.fdb, …). Once a month we run a special billing job. When this job starts it is fast at first, but then it spawns many slave.exe processes in Task Manager and becomes slower and slower. Eventually we have 17 slave.exe processes: 15 of them use 518,992 KB of memory, and the other 2 use 518,996 KB and 519,040 KB (as shown in Task Manager). While our job is running, our users are also working in the Navision database. What is the reason for so many slave.exe processes? Do we have too little RAM in the server (2 GB)? Does anyone have an idea what our problem is? Thanks!

Hi, the slave processes handle the commit memory for each of your database files. I don’t think they actually take up any memory. One possible reason for the system getting slower is that you have little free space in the database and your job doesn’t commit until the very end. If it’s a big job with no intermediate commits, all changes are stored in the free part of the database, like virtual memory. If you then don’t have enough free space, the system gets very slow. So either enlarge your database or insert some commits in your code, if that’s possible without creating any “half” transactions. Best regards Daniel
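To illustrate the "insert some commits" idea, a periodic COMMIT in a C/AL batch loop could be sketched like this. This is a hypothetical example, not the actual billing job: the table ("Cust. Ledger Entry"), the ProcessEntry routine and the interval of 1000 records are all made up for the sketch.

```
// C/AL sketch (hypothetical): commit every 1000 records so the open
// transaction does not keep growing for the whole batch run.
// "Cust. Ledger Entry", ProcessEntry and the interval are illustrative only.
Counter := 0;
IF CustLedgEntry.FIND('-') THEN
  REPEAT
    ProcessEntry(CustLedgEntry); // the actual billing work goes here
    Counter := Counter + 1;
    IF Counter MOD 1000 = 0 THEN
      COMMIT; // only commit at a point where the data is consistent
  UNTIL CustLedgEntry.NEXT = 0;
COMMIT; // final commit for the remaining records
```

As Daniel says, only place the COMMIT where no "half" transaction can result, i.e. where all related records for a unit of work have been written.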

Are you using SQL or Navision?

Daniel is right. You can’t sum up the RAM usage of all the slave processes; Navision doesn’t use that much RAM. Only the RAM usage that server.exe reports is actually used. How big is your cache? It should be 800 MB+ with a database of this size. /Lars

Well, if Task Manager is reporting 500 MB or so for a slave.exe process - this is what Rubernik says - then they are indeed using this memory, independently of the server. Task Manager does not know the relationship between server.exe and the slaves; they are all just processes. The memory reported here is actually the working set of the process. I’m not surprised your system is crawling to a halt if 17 slaves are consuming this amount of memory. You could perhaps use Performance Monitor to track page reads/writes (i.e. swap file activity) - I expect these will be high. What happens if you restart the server? Do the slaves gradually accumulate the memory again?

We have our database split across seven disks, and Task Manager likewise reports seven slave processes that each take as much memory as server.exe. But Task Manager also reports the total memory used, and that doesn’t add up if what Robert says is true. It’s impossible that the slave.exe processes actually take up any memory (or at least not as much as Task Manager reports for them). Best Daniel

Sorry guys - it’s not true what I said! Since the slaves have shared memory from the server mapped into their address space, this is added to their reported memory and makes it look like they are taking up independent memory. Task Manager cannot differentiate that - so this is normal and is not in fact the cause of your performance problem. Sorry for the misinformation.

quote:


Originally posted by cnicola
Are you using SQL or Navision?


Just guessing, but he’s talking about slave processes… these are used by the Navision Server, so probably NO SQL [:o)] But seriously, is the commit cache enabled on the server? If so, you could try disabling the commit cache and running the job that way.

Disabling the commit cache… Hmmmmm… I thought the problem was performance when writing to disk. I wouldn’t turn off the commit cache then. Just make sure you have a UPS when using the commit cache. /Lars

quote:


Originally posted by Lars Westman
Disabling the commit cache… Hmmmmm… I thought the problem was performance when writing to disk. I wouldn’t turn off the commit cache then. Just make sure you have a UPS when using the commit cache.


Well, first of all MBS recommends not using the commit cache, and the problem here is the memory usage of the slave processes. When you’re running large batches, the memory usage grows bigger and bigger because the transactions are cached. (Correct me if I’m wrong [:)])
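For anyone wanting to try this: cache size and commit cache are set in the Navision Database Server startup parameters. The line below is a hypothetical sketch using the classic server.exe parameter syntax as I remember it (the service name, database path and cache size are made up; cache is specified in KB, so 800000 is roughly the 800 MB Lars mentions) - check your own server documentation before changing anything.

```
server servername=NAVSRV, database=c:\navision\database1.fdb, cache=800000, commitcache=no
```

Switching commitcache back to yes re-enables the commit cache; as noted above, run it with a UPS in that case.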