I have a question about the field Memory Usage (KB) in the Session table.
The documentation states:
The number of kilobytes in the procedure cache that are currently allocated to this session.
It's pretty clear, but I just want to make sure. If a user has, say, 200,000 KB in this field, does that mean that the SQL Server has allocated those 200 MB for this user? So if the server only has 1 GB of memory, does that mean that every other user has to share the remaining 800 MB?
The reason I ask is because today I had a client complaining about Navision being slow. We just implemented a new functionality that runs a very heavy process every morning, and I could see that there was 1.6 GB allocated to this process. So I was thinking that perhaps this process stole so much memory from the server that everyone else experienced the system being slow.
Could that be?
The client is on SQL 2000 and Nav 4 SP2.
First of all, you will have to monitor the performance of your server and of SQL Server with perfmon, including some SQL Server specific counters. That way you can see whether the hardware is adequate while your process is executing. Then you have to analyze whether the process itself can be optimized. The main counters to watch:
Processor : % Processor time
Physical Disk : Disk Transfers/sec
SQL Server: Buffer Manager : Buffer cache hit ratio.
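For reference, a "cache hit ratio" style counter is essentially derived from raw hit/lookup counts. A minimal, illustrative sketch of that computation (the function name and sample values below are made up for illustration, not actual perfmon API calls):

```python
def cache_hit_ratio(page_hits: int, page_lookups: int) -> float:
    """Percentage of page requests served from the buffer cache
    rather than read from disk."""
    if page_lookups == 0:
        return 100.0  # nothing looked up yet, so nothing has missed
    return 100.0 * page_hits / page_lookups

# A healthy OLTP server usually stays above 90%:
print(cache_hit_ratio(page_hits=9800, page_lookups=10000))  # 98.0
```

When a single huge process floods the cache, the hits drop while lookups keep climbing, which is exactly the dip you would watch for in perfmon.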
How much memory is there on the server?
You can also look at Task Manager on the server and check the CPU utilisation of each application / process. This will show the process making the greatest demand on resources.
The server has 4GB of memory, and there are around 60 users (well within the recommendation from Microsoft).
Thanks for your suggestions! I forwarded them to the IT dept at this client so they can look into it more.
As always, SQL Server performance is a tricky thing …
Which edition of SQL Server do you use on which Operating System?
In SQL Server 2000 Standard Edition the server cannot allocate more than 2 GB for cache etc. If that's the case, then one transaction which deals with 1.6 GB (!!!) of data actually occupies nearly the whole SQL Server cache. You'll then see a remarkable decrease of "Buffer Cache Hit Ratio" (normal value > 90%) and a drastic increase of "Read Aheads", and probably heavy disk I/O. The default fix for this problem is "Add more RAM", but as you use STD this will not help.
If you use SQL Server 2000 Enterprise Edition - which I always recommend - the server can allocate up to 3 GB RAM (in your case, as you have 4 GB installed, but 1 GB is reserved for the OS). To enable this you have to set the /3GB switch in the c:\boot.ini file. If you're experiencing the problems described above, then you could simply add more RAM, and, depending on your OS, you could use features like PAE or AWE.
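To put rough numbers on the difference between the editions, a back-of-the-envelope calculation (the caps below are the simplified figures from this thread, not exact product limits):

```python
# Illustrative arithmetic only, for the 32-bit SQL Server 2000 scenario
# discussed above. All figures are the simplified ones from the thread.

installed_ram_gb = 4.0
std_cap_gb = 2.0              # Standard Edition cache ceiling
ent_cap_with_3gb_gb = 3.0     # Enterprise Edition with /3GB in boot.ini

batch_working_set_gb = 1.6    # the morning batch seen in the Session table

# Under Standard, the batch alone eats most of the usable cache:
print(batch_working_set_gb / std_cap_gb)  # 0.8, i.e. 80% of the cache
# With Enterprise + /3GB there is noticeably more headroom:
print(batch_working_set_gb / ent_cap_with_3gb_gb)
```

So on Standard, everyone else is squeezed into the remaining few hundred megabytes of cache, which matches the "everything is slow" symptom.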
But anyway, I would recommend re-designing this batch process, maybe splitting it into smaller portions …
That’s actually what we have ended up doing, splitting the process into several smaller updates.
It is still a huge update though, and it still uses a lot of memory, because it updates 150,000 items multiplied by 50 vendors, 5 times a day …
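As a rough illustration of the idea, a chunked loop processes one small portion at a time instead of pushing everything through in one transaction. A minimal sketch in Python, with hypothetical names and scaled-down counts (in Navision this would of course be a filtered C/AL loop with periodic COMMITs, not Python):

```python
# Sketch only: process_batch, items and vendors are hypothetical stand-ins.
from itertools import islice

def batched(pairs, size):
    """Yield successive chunks of at most `size` (item, vendor) pairs."""
    it = iter(pairs)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

def process_batch(chunk):
    # placeholder for the real per-chunk update followed by a commit,
    # so locks and cache pressure are released between portions
    return len(chunk)

items = [f"ITEM{i:06d}" for i in range(150)]   # stand-in for 150,000 items
vendors = [f"V{j:02d}" for j in range(50)]     # the 50 vendors

work = ((i, v) for i in items for v in vendors)  # 150 * 50 = 7,500 pairs
processed = sum(process_batch(chunk) for chunk in batched(work, size=1000))
print(processed)  # 7500
```

Each chunk commits on its own, so no single transaction ever holds the whole working set in cache at once.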
The client does use the Enterprise Edition, so we are looking into enabling that extra gigabyte of RAM. Cannot hurt to have that anyway.
Thanks for your input