Is Navision really slow on SQL?

80 GB database, up to 50 users, ~8,000,000 item ledger entries. Report 1001 Inventory Valuation. Hardware: 2 Xeons, 2 GB RAM, RAID … BTW: I have asked Navision support about this a lot of times. They blame the $%#$@^#$ 3.01 version => you should upgrade => money => customer: what a F%$#@#^, we did 2 upgrades 1 year ago ^#%@$^@ OFF => no money :confused: But that is another topic, never mind.

Navision Support software code:

case Problem of
  'SQL performance':
    BEGIN
      dialg.OPEN('Customer Version: #1###');
      dialg.INPUT(1,CustVer);
      ERROR('It was\will be fixed in version %1',CustVer + 1);
    END;
end;

[;)]

Hi Dalius, one last try…[:p] Can you let us know the COMPLETE SQL configuration that you have? I do thank you, though, for CONFIRMING that you are definitely running a server that is NOT configured to Navision requirements, so that is a good indication that, once again, the “Navision” performance issue is probably hardware related. I am guessing that as well as the wrong CPU and RAM, you probably also have the hard drives incorrectly configured. My guess is that you have a resolvable problem here.

Also, how did you make the comparison? You say above that on the C/SIDE database, with an 80 GB database and 8,000,000 item ledger entries, the Inventory Valuation report runs in 10 minutes. Firstly, I am VERY interested in the hardware you used for this, since I find that even on smaller systems with only a million or so entries, the Inventory Valuation report can take up to an hour to run. Of course that depends on the data too. But an increase from 10 minutes to 48 hours is of course a big difference.

Can you also let us know how long it takes to run the Inventory Adjustment routine? And how often does the client run it?

My hardware is OK. I tested a lot of configurations on the SQL server before starting Navision. I’m using HDD striping. Comparison? One report is the native Navision “Inventory Valuation” - 48 h; the other was a Transact-SQL script, using the same SQL database and server - 10 min. If T-SQL queries over 8 million records run in 10 minutes, do you still think the configuration is wrong? [:)]
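For readers wondering what such a hand-written script might look like: a set-based valuation query of the kind Dalius describes could be sketched roughly as below. The table and column names here are only illustrative of the idea, not the exact Navision SQL schema, and the cutoff date is made up.

```sql
-- Hypothetical set-based inventory valuation: one aggregation pass over
-- the ledger tables instead of fetching and summing record by record.
-- Table/column names are illustrative, not the actual Navision schema.
SELECT e.[Item No_],
       SUM(e.[Quantity])    AS [Inventory Qty],
       SUM(v.[Cost Amount]) AS [Inventory Value]
FROM   [Item Ledger Entry] e
JOIN   [Value Entry] v ON v.[Item Ledger Entry No_] = e.[Entry No_]
WHERE  e.[Posting Date] <= '2003-06-30'
GROUP BY e.[Item No_]
ORDER BY e.[Item No_];
```

A single GROUP BY like this lets SQL Server scan the tables once, which goes some way to explaining how a script can finish in minutes while a report that pulls each record through the client takes far longer.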

a/ You have 2 CPUs and 2 GB RAM. From the Navision specs, this just is not enough. b/ Since you have this incorrectly configured, I am assuming that the drives are also incorrectly configured. c/ You have not answered my question.[V] But seriously, Dalius, you may be angry that Navision does not work the way you think it should, but wouldn’t it just be easier to configure the machine correctly and get on with life? You would do your customer much better justice to just get it working.[;)]

OK, I can add more RAM and change the RAID, but what performance increase can I achieve? 2 or 3, at most 5 times. Compared to the 100 - 10000 times increase when SQLisms are used, it is nothing. BTW, building tools based on SQLisms for reporting is cheaper than hardware and software upgrades every year.

About angry: I’m not the only one who thinks like that. You think the Navision implementation on SQL is bad too: >>From what I see, Navision made one fundamental (performance) error with SQL, and that was making it 100% C/SIDE compatible (but a great marketing decision). [:)]

David, sorry, but I don’t like talking about cache hits, random seek times … Let’s keep the discussion philosophical. I’m looking for ideas there, not solutions. So, returning to your first question, I hope that no one chose SQL to increase “read performance”. There are a lot of problems of another nature pushing people to the SQL platform: database size, locking, integration with other databases … Buying plenty of RAM, HDDs or CPUs isn’t the solution. If you have locking problems, hardware can minimize lock wait times, but rewriting the code is more effective. A hardware upgrade can’t compensate for software diseases.

Marketing: Hmmm, I don’t think so. This was more a lifebelt for escaping from a dead end than a reasoned improvement.
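The “SQLism” argument in a nutshell: record-at-a-time C/SIDE-style access translates into one round trip per row, while a set-based statement does the same work in a single pass on the server. A rough sketch of the contrast (hypothetical table and column names, not actual Navision code):

```sql
-- Row-at-a-time, roughly how C/SIDE-style access maps onto SQL Server:
-- one fetch and one update per record.
DECLARE @no INT;
DECLARE c CURSOR FOR
    SELECT [Entry No_] FROM [Item Ledger Entry] WHERE [Open] = 1;
OPEN c;
FETCH NEXT FROM c INTO @no;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE [Item Ledger Entry] SET [Open] = 0 WHERE [Entry No_] = @no;
    FETCH NEXT FROM c INTO @no;
END;
CLOSE c;
DEALLOCATE c;

-- The set-based equivalent: one statement, one scan.
UPDATE [Item Ledger Entry] SET [Open] = 0 WHERE [Open] = 1;
```

On millions of rows the difference between these two shapes is exactly the orders-of-magnitude gap being discussed, and no amount of hardware closes it the way rewriting the code does.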

So one last try. Is anyone out there able to answer my original question?

Dalius - I agree with your arguments about hardware improvements vs. application code improvements. There is no substitute for the latter. My experience is that those who have achieved a successfully running system are those who have invested time in tracking down specific problems and fixing them with application changes. I have seen evidence on this forum of complaints about performance - but when faced with having to attack the problem with Client Monitor or Profiler etc., it is just too much. The efficiency of Navision Server, and its ability to be forgiving toward bad application code or key design (someone mentioned ‘fire-and-forget’, a good summary), has made it too easy to code against this platform. Most application developers targeting SQL systems (of any kind) don’t have that luxury.

David, You could also ask, then: Who is running with satisfactory performance, and what is your database/server setup to achieve it? (or for that matter what changes were made to application code also?). If you get no hits then I suppose summer laziness is stopping people from posting their experiences.

David, I know that my early experiences with SQL, especially 2.50, really put me off it, but we all know that was a bad version. Yes, it was really bad. Navision had a lot to learn about using complex Structured Query Language code. There were simple things, like filter orders, etc., now well documented, that compounded the problem. It is hard to see any direction for Navision except SQL now, because of our new master. I am not sure the new master is the issue so much as market demand. I see IT shops insisting on SQL. IT can always sell SQL skills in the job market, but not C/SIDE.

So my question: is SQL slow? Hardware is so cheap now that the hardware cost should not be an issue at all. And in reality, every SQL performance issue that I have seen so far was due to either poorly written code or a bad hardware configuration, not directly due to SQL. Yes, I think we have all seen this. Like all systems, by understanding its strengths and weaknesses, a winning solution can be designed and delivered. In this case, by understanding how Navision works with SQL, you can avoid the bad design that exaggerates the weaknesses, thus minimizing the performance issues.

From what I see, Navision made one fundamental (performance) error with SQL, and that was making it 100% C/SIDE compatible (but a great marketing decision). Maybe they should have allowed some SQLisms to carry across, even if it meant supporting two products. In any case, we all know that GREEN will not have anything to do with C/SIDE. Is this really a fundamental error? Like all solutions, everyone has their own opinion on how to solve it. With that said, fighting the solution’s being 100% C/SIDE compatible will not solve the problem. There are others who have voiced this already, and it appears they have been successful (satisfied) with their SQL implementations by working around the shortcomings. I see Navision as an application that can be run on 2 database engines (C/SIDE and SQL).
Just like all database engines, they have their strengths and weaknesses. This is then compounded by the strengths and weaknesses of how the application accesses the database engine. So, can anyone out there give a specific example where they had a properly tuned server and properly written code, they weren’t using dimensions, they were not renaming records, and their system still showed a huge performance difference between SQL and C/SIDE? I know I cannot give a specific example of the difference between SQL and C/SIDE, because most systems have been using SQL since Attain, and a comparison before Attain is probably not realistic given the known performance issues at that time. Plus, it was noted at that time that SQL was slower.

And just like yourself and others, I have seen bad hardware, bad SQL tuning and bad code all degrade the performance of Navision. We are all used to the performance of C/SIDE and compare this with SQL. But should we be asking how Navision's performance on SQL compares with competing products today? When 2.50 came out, it was noted that Navision’s performance was better than Great Plains, but slower than C/SIDE. Also base this on a hardware configuration using 10 or more drives, with SQL having at least 3/4 CPUs and at least 4 GB RAM (who cares, it's so cheap anyway). I am not an SQL expert, but I think I understand the beast well enough to work with it. My experience has been based on systems ranging from a single processor to systems with 4 processors, 4 GB of RAM and databases up to 30 GB processing several thousand sales lines a day.

Scott, these are good points. I’d just like to say that there was a very small implementation difference between 2.50 (which is regarded as a bad performer) and 2.60, which is regarded as a much better performer. The only thing done was that filter values (generated by C/SIDE or by C/AL SETFILTER, SETRANGE) were parameterized in 2.60 but were placed as literals in 2.50. There was no change to the SQL. They were placed as literals because SQL Server 7.0 had an autoparameterization mechanism which was believed to be doing this anyway. After 2.50, it was found that it was not, and the parameters had to be built manually by the client. SQL Server 2000 still does autoparameterization, supposedly, but it is not used anymore by Navision. It is there mainly for ad-hoc end-user queries where manual parameterization is not possible.
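In concrete terms, the 2.50-vs-2.60 difference described above looks like this (the table, filter values and parameter names are made up for illustration; only the literal-vs-parameter shape is the point):

```sql
-- 2.50 style: filter values emitted as literals. Every distinct filter
-- value yields a distinct statement text, so without working
-- autoparameterization each one is compiled and cached separately.
SELECT * FROM [Item Ledger Entry]
WHERE [Item No_] = '70000' AND [Posting Date] <= '2003-06-30';

-- 2.60 style: the client builds one parameterized statement and reuses
-- the compiled plan for every filter value.
EXEC sp_executesql
     N'SELECT * FROM [Item Ledger Entry]
       WHERE [Item No_] = @p1 AND [Posting Date] <= @p2',
     N'@p1 NVARCHAR(20), @p2 DATETIME',
     @p1 = '70000', @p2 = '2003-06-30';
```

Since C/AL code generates the same filter shapes over and over with different values, plan reuse from the parameterized form is exactly the kind of change that can move a version from “bad performer” to “much better performer” without touching the SQL itself.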

Moved to SQL forum.