Max Usage of Navision....

Hi,

We are in the retail business and we operate one Navision Server per region, with around 40-50 users at a time. The SQL database size is around 200-300 GB, and we experience a heavy surge in data of around 1-2 GB per day. There are 9 such servers for 9 different locations. The servers are hosted on SUN Blade Servers with quad-core processors and around 16 GB RAM. Each server is connected to a SUN StorageTek 6140 (having 16 disks of 146 GB in one expansion), with 4 or 6 disks of 146 GB holding the data files on RAID 1+0.

Anyway, we are experiencing huge problems whenever someone accesses the system for reports or tries to process some data.

Now management has decided to have 2 regions on 1 server, which means we are going to put 2 Navision SQL databases on 1 server to save costs (SQL licence cost), and we are coming up with 6 more regions.

Management doesn’t want to spend on extra servers but is OK with spending on disks. The data will be spread across as many disks as possible, of course in different sets of RAID 1+0 LUNs for the two databases. I would like to know what the impact of such a scenario would be.

First of all, is this feasible? Do I need more RAM, more CPU, more disks?

In some regions the data is around 400 GB. I am worried that there would be some impact on the server with such heavy usage. What would one need to do in such a scenario? What special care do I need to take while I do this activity? I would appreciate it if all you gurus could guide and advise me.

Regards,

Prashant

Hi Prashant,

just to make sure I understood this right: your management is planning to have 1 SQL Server running up to 7 databases (1 per region), each with a size of 200-400 GB, and each database used by 40 to 50 users!?! Or did I miss anything? I would not recommend this …

This would be at maximum 7 x 50 users = 350 concurrent users; thus with a 32-bit system you should have at least 16 CPU cores, e.g. 4 x QuadCores (with 64-bit one Quad is the minimum, better use 2) … but anyway, with such a volume a 64-bit system is mandatory!

Regarding RAM, well, depending on the transaction volume I would consider at least 8 GB per “region”, thus it would be 7 x 8 GB = 56 GB (= 64 GB).

For each NAV database you should have at least 2 dedicated physical disk volumes (ideally each RAID10) - one for mdf/ndf, and one for ldf - that would be 14 RAID10 volumes (= min. 56 spindles) plus system, programs, OS, etc. … So you need a huge storage system - and keep in mind that a huge SAN will not help just because it’s huge; it must be configured properly!
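As a quick sanity check, a standard catalog query against sys.master_files (available from SQL Server 2005 onwards) shows where every database file actually lives, so you can verify that data and log really sit on separate volumes:

```sql
-- Lists every database file with its physical location and type,
-- to verify that mdf/ndf and ldf are on separate volumes.
SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_name,
       physical_name,
       type_desc            -- ROWS = data file, LOG = transaction log
FROM sys.master_files
ORDER BY database_name, type_desc;
```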

I would not do that. You’ll need a tremendous “box”, and even then I would not dare to predict how it would perform … besides: you’ll create a single point of failure; if this one server crashes, all 7 regions are screwed. So you’ll need to look into failover solutions/redundancy, too. In the end you’ll have a big investment in hardware etc., and I’m not sure that after all this it will be cheaper than separate servers with separate licenses …

I would vote for one SQL Server per 1 or 2 regions, but not more.

And of course, you should look into the general optimization measures of a NAV/SQL system!

Hey Jorg,

Just the person I needed … No, you got it slightly wrong … They are going for 2 regions per server. I was worried about the response times the users will face while posting and doing other activities. Also, there would be 2 batch posting windows open on the server, because whatever is posted goes into the queue.

Another thing is that we are on 64-bit systems. I was worried about the memory and CPU; as mentioned, we are on quad core.

What would be your inputs regarding a DB size of 500 GB: what config should I go for in terms of disks in the SAN, memory and CPU? Each server has 4 internal disks (146 GB x 4) for the OS, SQL binaries and programs. My only worry is the SAN config, memory and CPU.

I would like your opinion on this.

Thanx a ton Jorg

Best Regards,

Prashant

Puuhhh … actually I’m glad I misunderstood the question [:|]

OK, 1 Server, 2 Regions, 2 Databases, 80 to 100 users. Well, whenever it comes to sizing a server, it primarily depends on the transaction volume that is processed. As you said, there is 1 to 2 GB of data produced per day, and I consider this a very high volume!

My suggestion for such a server would be:

CPU: 2 (4?) x Quad Core

RAM: 32 GB or more

Disk-Subsystem:

C:\ local RAID1 OS, Swapfile, Programs, Misc

D:\ local RAID1 SQL Server master, model, msdb

E:\ SAN RAID1 tempdb (4 files)

F:\ SAN RAID10 NAV Region 1 (mdf/ndf)

G:\ SAN RAID10 NAV Region 1 (ldf)

H:\ SAN RAID10 NAV Region 2 (mdf/ndf)

I:\ SAN RAID10 NAV Region 2 (ldf)

J:\ SAN RAID1 Backups, Misc

When it comes to the SAN, you should not have one huge physical aggregate and just assign logical LUNs to the server. Each volume should be a physical array of disks!

How will the users connect to the Server? Terminal Server/Citrix? Then you should make sure that these boxes are sufficiently sized, too. Or Fat Clients? In any case you might need a sufficient network connection!

Well, this proposal is just a proposal and should be discussed …
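Just to illustrate the file placement from this layout, a minimal sketch of creating the Region 1 database with data on F:\ and log on G:\ could look like this (database name, folder paths and sizes are placeholders only):

```sql
-- Sketch only: hypothetical name, paths and sizes - the target folders must already exist.
CREATE DATABASE [NAV_Region1]
ON PRIMARY
    ( NAME = N'NAV_Region1_Data',
      FILENAME = N'F:\SQLData\NAV_Region1_Data.mdf',
      SIZE = 250GB, FILEGROWTH = 1GB )
LOG ON
    ( NAME = N'NAV_Region1_Log',
      FILENAME = N'G:\SQLLog\NAV_Region1_Log.ldf',
      SIZE = 30GB, FILEGROWTH = 1GB );
```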

Hi,

Thanx Jorg …that was very useful…I was coming to your question too.

Yes, the users log in locally as well as from the regions via Terminal Server or Citrix Server.

In this scenario, what would be the ideal setup for these servers?

We experience a lot of problems with the terminal servers and often have to restart them.

After a restart everything is happy-bappy for a few days, and then we are back to hell again.

You suggest a separate space for the swap file … beats me … if you could throw some light on that, I would be glad.

Can I redirect the Navision temp folder to the drive where the swap file would be?

Another thing: can I restrict MSSQL to use 30 GB instead of 32 GB and leave 2 GB for Windows?

I need your expertise on the above. Thanx once again Jorg.

Best Regards,

Prashant

First, about the RAM: if you have 32 GB installed (64-bit system), then Windows and SQL Server will actually “negotiate” who gets which amount of RAM, which will typically end up around 2 GB for the OS and 30 GB for SQL. But you can also set a “Min.” and “Max. Server Memory” in the SQL Server configuration.
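For example, capping SQL Server at roughly 30 GB - as you asked - could be done like this (the values are just examples and have to be adjusted to your box):

```sql
-- 'max server memory' is an advanced option, so enable advanced options first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Values are in MB: here ~30 GB max and 4 GB min (example figures only).
EXEC sp_configure 'max server memory (MB)', 30720;
EXEC sp_configure 'min server memory (MB)', 4096;
RECONFIGURE;
```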

In my disk-layout proposal I didn’t mean the NAV temp path but the SQL Server database “tempdb” - and this can easily be moved after installing the SQL Server.
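As a sketch (hypothetical E:\ paths, sizes to be adjusted), moving tempdb to its own volume and adding the extra files for the “4 files” mentioned above could look like this; the relocation of the existing files only takes effect after a SQL Server restart:

```sql
-- Relocate the existing tempdb files to the dedicated volume (effective after restart;
-- the target folder must already exist).
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = N'E:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = N'E:\TempDB\templog.ldf');

-- Add three more data files so tempdb has 4 in total (example sizes).
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = N'E:\TempDB\tempdb2.ndf', SIZE = 2GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = N'E:\TempDB\tempdb3.ndf', SIZE = 2GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = N'E:\TempDB\tempdb4.ndf', SIZE = 2GB, FILEGROWTH = 512MB);
```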

Regarding the swap file I’d like to change my proposal a little (see old posting).

Regarding Terminal Servers (or Citrix):

The “NAV Hardware Sizing Guide” has a calculation formula about how to size a Terminal/Citrix Server:

  • 10-15 Dynamics NAV users per processor core, depending on work load
  • 64 MB of memory per Dynamics NAV user (assumes an object cache of 32 MB)
  • 1 GB of memory for the operating system
  • Internal SCSI or SAS RAID 1, 10-15K RPM, with 500 MB of disk space available for each user
  • 1 Gigabit Ethernet connection

I would calculate conservatively, allowing 10 users on a DualCore CPU and no more than 40 on a QuadCore - depending on the expected transaction volume. Thus, having 40 users on a QuadCore CPU you need 40 x 64 MB = 2560 MB (= 2.5 GB) plus 1 GB for the OS; hence it’s 4 GB of RAM. 40 x 500 MB disk space = 20 GB (shouldn’t be a problem).

But: this covers only the requirements for NAV! If you run other applications on the same TS/CTX boxes, you need to add their requirements! Office applications especially - the worst is Outlook/Exchange or the like - also consume a lot of resources.

And keep in mind: the NAV memory requirements just refer to the “finsql.exe” requirements plus the “Object Cache”, but this ignores the fact that all data is processed on the client side. Hence, when dealing with large result sets of records, the client (also TS/CTX) needs the capacity to store the records (more RAM) and to process them fast (CPU).

Finally, from my experience I would go for a QuadCore CPU, 4 GB of RAM, disks as required and Gigabit LAN per no more than 20 concurrent users … but then again: this also needs to be discussed on the basis of your specific requirements and work-load …

Thank you once again Jorg … I am clear about this now.

Regards,

Prashant

Just a question to Jorg out of interest.

For each region, you’ve suggested the mdf/ndf files on one disk volume. If there were additional spare RAID 10 volumes available, would there be any gains, in general terms, in splitting some of the ‘high use’ tables (however they might be defined: size, transaction volume, etc.) into additional ndf files on those additional volumes?

For example, say you identified 3 key tables that were the largest in size and also the highest in both read and write transactions, with large additions/modifications to data:

C:\ local RAID1 OS, Swapfile, Programs, Misc

D:\ local RAID1 SQL Server master, model, msdb

E:\ SAN RAID1 tempdb (4 files)

F:\ SAN RAID10 NAV Region 1 (mdf only)

G:\ SAN RAID10 NAV Region 1 (ndf 1) - containing table 1 as identified above (and even spread of other tables)

H:\ SAN RAID10 NAV Region 1 (ndf 2) - containing table 2 as identified above (and even spread of other tables)

I:\ SAN RAID10 NAV Region 1 (ndf 3) - containing table 3 as identified above (and even spread of other tables)

… etc

Do you think that would be over-engineering the solution? Do you think that even in ideal circumstances the difference would be negligible?

I’d be interested to know your thoughts.

Well, as so often … it depends …

Splitting the DB into multiple files could indeed improve performance (more CPU threads, dedicated I/O). To do that, it should first be ensured that there IS enough CPU capacity to handle the additional threads. Then, the multiple files could either be stored on one HDD volume (just benefitting from the CPU), but then it has to be ensured that the disk controller is able to deal with that - if too many files are handled, some controllers might screw up … (it depends). So the real benefit comes from having the files on separate volumes (as you suggested), but then the system gets rather expensive … and again: you need disk controllers which are able to handle all this properly. Just adding a new LUN from the same SAN might not do the job well …

When it comes to splitting the db there are several strategies. The one I favor is to segregate the Non-Clustered Indexes and the SIFT tables (incl. their indexes), as these are implicit transactions triggered by one process.
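Just as a rough sketch of that approach (database, file, table and index names are made up for the example; an existing non-clustered index is rebuilt onto a new filegroup on its own volume):

```sql
-- Create a dedicated filegroup with an ndf on a separate volume (hypothetical path).
ALTER DATABASE [NAV_Region1] ADD FILEGROUP [NCIDATA];
ALTER DATABASE [NAV_Region1]
    ADD FILE ( NAME = N'NAV_Region1_NCI',
               FILENAME = N'K:\SQLData\NAV_Region1_NCI.ndf',
               SIZE = 20GB, FILEGROWTH = 1GB )
    TO FILEGROUP [NCIDATA];

-- Rebuild an existing non-clustered index onto that filegroup
-- (the index [$1] must already exist on the table; names are examples only).
CREATE NONCLUSTERED INDEX [$1]
    ON [CRONUS$Item Ledger Entry] ([Item No_], [Posting Date])
    WITH (DROP_EXISTING = ON)
    ON [NCIDATA];
```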

What are your experiences with splitting up the db?

In a previous life, we split the DB for disk space concerns rather than performance. We basically sorted the tables by size and put half in one file and half in another, plus a separate file for the non-clustered indexes on the larger tables.

From a performance perspective it was unfortunately difficult to measure, as we were running a sub-optimal configuration (partitioned RAID 01); however, I have a ‘feeling’ that it had some marginal effect.

We later moved to a better hard disk setup (4 RAID10 arrays for SQL plus 2 x RAID1 for OS and backups) and nobody really complained about everyday performance after that.

I think that on a properly configured RAID10 array, the performance gain in disk I/O would not be worth the cost. My feeling is that splitting up a DB would be last on my to-do list when troubleshooting performance. I would think application tuning would be higher on the list of performance to-dos than DB splitting.

Have you seen, first hand, noticeable performance gains with this setup?

I agree, splitting up the db is at the rear end of optimizations (first NCI, second SIFT), and only for very large databases (300 GB+). In these cases we usually split the db AND placed the files on separate disk volumes, which in total did give a performance gain (though not giant leaps) due to the “relief” of the original disks.

I have no figures on hand about the impact of only splitting up the files, but I guess it is marginal, as nowadays the servers are usually sufficiently sized regarding CPU (mostly slightly oversized - which is good!), so that getting more threads out of them has only a minor impact …

So finally the primary reason for splitting the db is to separate I/O and prevent disk latencies due to overloaded disk controllers … and again: this should only be necessary for large databases with high transaction volume and many concurrent users …

May I ask why your database is so large? I do a lot of work with NAV for retail clients, and in every case where I have seen this, it was because of one of the following:

  1. They weren’t using or avoiding SIFT properly. A technical upgrade to 5.01 may cut your database size in half. For a “vanilla” install, it will shrink the size of the Item Ledger Entry table by a factor of 10, which is typically about 70% of the data.
  2. They had large tables that were over-indexed.
  3. They weren’t archiving or compressing old transactions. Effectively, they would track detail down to every employee’s logon on every POS for all time. Not a great use of space …

Hi Girish,

I do not know … We are running NAV 4 with SP3 … I run the SIFT key optimizer from Stryk … I update stats daily … reindex key tables daily and huge ones weekly …

But still the db growth is enormous … We also stopped running the cost batch because it would take days to complete … One thing I have noticed is that if I feed the servers more disks, then they perform extremely well.
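For reference, that kind of daily maintenance on SQL 2005 boils down to statements like these (the table name is only a placeholder; the real jobs cover the key NAV tables):

```sql
-- Refresh statistics on a heavily used table (placeholder NAV table name).
UPDATE STATISTICS [CRONUS$Value Entry] WITH FULLSCAN;

-- Rebuild all indexes on a key table (daily for key tables, weekly for the huge ones).
ALTER INDEX ALL ON [CRONUS$Value Entry] REBUILD WITH (FILLFACTOR = 90);
```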

Also, I wanted to ask: we are planning to shift from MSSQL 2005 Enterprise x64 to the MSSQL 2005 Standard x64 version.

How will it impact Navision connectivity and performance? Can I do this safely … I mean via backup and restore?

Also, I run transactional replication from my master server, which is the Enterprise version, to the regional servers, which we are planning to move to the Standard version.

Your valuable expertise would be appreciated.

Thanx & Best Regards,

Prashant