Is Navision really slow on SQL?

We continually hear about issues with SQL performance. I know that my early experiences with SQL, especially 2.50, really put me off it, but we all know that was a bad version. It is hard to see any direction for Navision except SQL now, because of our new master. So my question: is SQL slow? Hardware is so cheap now that the hardware cost should not be an issue at all. And in reality, every SQL performance issue that I have seen so far was due to either poorly written code or a bad hardware configuration, not directly due to SQL. From what I see, Navision made one fundamental (performance) error with SQL, and that was making it 100% C/SIDE compatible (though a great marketing decision). They maybe should have allowed some SQLisms to carry across, even if it meant supporting two products. In any case we all know that GREEN will not have anything to do with C/SIDE. So, can anyone out there give specific examples where they had a properly tuned server and properly written code, they weren’t using dimensions, they were not renaming records, and their system still showed a huge performance difference between SQL and C/SIDE? Also base this on a hardware configuration using 10 or more drives, with the SQL server having at least 3 or 4 CPUs and at least 4 GB of RAM (who cares, it’s so cheap anyway).

As you mention yourself: Navision on SQL is slow, SQL by itself is not slow at all. But indeed, due to the way Navision implemented the SQL option (FlowFields stored in separate tables instead of using standard SQL statements such as SUM, AVG and so on), one can say that if you can avoid it, don’t run Navision on SQL.
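To make that concrete, here is a minimal sketch of what is meant (the account number is made up): a CALCSUMS over a SumIndexField. On the native database this reads totals maintained alongside the key; on the SQL option those totals live in separate SIFT tables that must be maintained on every insert, modify and delete, instead of the server simply running a SUM over the ledger.

  GLEntry.SETCURRENTKEY("G/L Account No.","Posting Date"); // key that carries Amount as a SumIndexField
  GLEntry.SETRANGE("G/L Account No.",'6110');              // hypothetical account number
  GLEntry.SETRANGE("Posting Date",0D,WORKDATE);
  GLEntry.CALCSUMS(Amount);                                // reads the pre-aggregated SIFT data, not SUM(Amount)
  TotalAmount := GLEntry.Amount;                           // after CALCSUMS the field holds the filtered total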

I currently work for a company whose main database is a Navision 2.60 SQL back end (clients running Attain). We are in the process of ‘tuning’ the system and, as you mentioned David, SQL Server is not the problem; it’s the implementation of the Navision SQL option. Our hardware configuration has been approved by the ‘experts’, so we know this is not the issue in our situation. As our database is heavily modified, I don’t expect our firm will want to move to a newer version, so it’s a major task for my colleagues and me to fine-tune our installation and hope that we can improve performance enough to satisfy our users. As for the future, I have no idea what is going to happen, and in our case anyway I cannot see us ever moving away from what we’ve got. It would be a major overhaul.

Well, we know for sure that SQL itself is NOT slow at all. And when you consider the lower-end market Financials/Attain is targeted at, it should FLY on a reasonably configured SQL server. We know it does not. While it certainly was quite a trick to get Financials/Attain to run on SQL using the C/SIDE functions, it has come at the cost of performance. Not using basic SQL features, like stored procedures and views, has hurt a lot. Often, when those C/SIDE GETs, SETRANGEs, SETFILTERs, etc. are translated to standard SQL statements, they are not optimized as well as they would be if written by hand. In the case of my company, we went with SQL from the start, but we have scaling problems when we go through busy periods with about 50-60 concurrent users. Some of it can be traced to custom coding, but with this load the server should be screaming.
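A small, hand-wavy illustration of that last point (the customer number is made up): the same filter can be cheap or expensive depending on whether the key the driver ends up using actually matches it, which is exactly the kind of decision a hand-written SQL statement would get right.

  // the chosen key covers the filtered fields, so the generated SELECT can seek an index
  CustLedgEntry.SETCURRENTKEY("Customer No.","Posting Date");
  CustLedgEntry.SETRANGE("Customer No.",'10000');       // hypothetical customer
  CustLedgEntry.SETRANGE("Posting Date",0D,WORKDATE);
  IF CustLedgEntry.FIND('-') THEN
    REPEAT
      // process the entries
    UNTIL CustLedgEntry.NEXT = 0;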

I may have been misunderstood.[V] My question of “Is Sql Slow?” refered to Navision implemented on SQL. (Hey this is a Navision Forum[:p]), I am not refering to a native SQL program. My concern is that we hear a lot of for/against Navision on SQL, but to me it apears that Navision runs fine on SQL, and the performance issue relate either to a poor hardware configuration, or badly designed modifications. What I want to know is - Are there any Navision implementations out there that are running on SQL, the Code is good, the hardware is good, the user training and system setup is good, and still performance is substantially slower than on the Native database.[?]

Hi David, we do have a client with this issue: standard Navision Attain running on SQL. The problems do not appear during normal transactions, and the hardware is set up fine. It is the transaction log that is causing the slow performance. The transaction log file suddenly starts to grow massively, and this sometimes exhausts the storage space allocated for the log. We also noticed how database expansion takes place on SQL when running batch jobs like Adjust Cost - Item Entries or Calculate Plan on a requisition worksheet. All the users are slowed down; sometimes they can even go out for lunch until the job finishes. The reason we found was database expansion: the temporary tables or variables used in these batch jobs are filled while they run. On native Navision you would get an error asking you to expand the database, although after the run the data size stays about the same (an increase of 2 or 3%). But SQL just hangs on us, because the database expansion can only take place after a write to the database, and these batch jobs write only on completion.
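To illustrate the "writes only on completion" point: everything such a batch job does before its final COMMIT sits in one open transaction, which is what the log has to hold. A rough sketch (object names made up, and not something the standard batch jobs do) of breaking the work into smaller committed chunks, where restarting is safe:

  Counter := 0;
  ItemLedgEntry.SETRANGE("Item No.",'70000','80000');   // hypothetical range of items to process
  IF ItemLedgEntry.FIND('-') THEN
    REPEAT
      // ... do the adjustment work for this entry ...
      Counter := Counter + 1;
      IF Counter MOD 1000 = 0 THEN
        COMMIT; // ends the open transaction so the log space can be reused; only safe if the job can be restarted
    UNTIL ItemLedgEntry.NEXT = 0;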

Hi Prashanth, thanks for your comments. Could I ask you please for some details of your system, i.e. current database size, number of actual users, hardware configuration etc.? Thanks.

Hi David, current database size is around 3 GB, transaction log around 400 MB. Number of users: 75 concurrent so far. They are using RAID 5 with 4 HDDs, partitioned as (36, 36, 36, 18) times 2.

quote:

Originally posted by prashycool
The hardware is setup fine. … They are using a Raid 5 with 4 HDD partitioned to (36, 36, 36, 18) times 2.
Hey Prashycool, this hardware is NOT set up fine. Please do a search in this forum using the keywords “SQL”, “hardware” and “performance” for help on how you should configure your server. A single RAID 5 array just is not enough for 75 users. I was looking for a scenario where the SQL server was set up correctly. In fact this rather supports my theory that incorrect hardware is a major reason for SQL getting a bad name with us.

I must agree with you, David. OK, Navision on SQL requires a lot more hardware, but that’s no surprise. There is documentation available from former Navision and Microsoft which can act as guidelines for installing SQL. Prashanth wouldn’t have had that setup if the guidelines had been followed. In almost all the postings on this forum, the badly performing SQL implementations have been on a bad hardware setup. Maybe Navision could perform better on SQL if it were more SQL-ish. I won’t argue with that; I think it would. Maybe Navision on SQL requires a bit “too much” hardware. But that’s just one (and a small one) of many factors when calculating on a purchase of Navision on SQL. You can’t expect Navision to perform as well as on C/SIDE, given the way they did the implementation on SQL. I don’t think the developers in Vedbaek did either. But is there anyone having bad performance after having followed, for example, the sizing guide that was written for 2.60? That’s the issue here.

Hey Lars, exactly what I was trying to say.

There don’t seem to have been any performance enhancements in the database driver for the SQL option since 3.10, according to the last pages of the “adg” for 3.70, so Navision itself must be quite pleased with the situation as it is.

Hi guys, I think there are some issues with the SQL interface that make it run a lot slower, or with problems, compared to the native db. The examples I am offering were tried on my laptop, hardly the best server [:D], but on the other hand I think they illustrate the issues.

1. One cool piece of code is the following:

  Table.SETCURRENTKEY(FieldToGroupBy);
  IF Table.FIND('-') THEN
    REPEAT
      Table.SETRANGE(FieldToGroupBy,Table.FieldToGroupBy);
      Table.CALCSUMS(FieldToSubtotalBy); // optional
      Table.FIND('+');
      Table.SETRANGE(FieldToGroupBy);
    UNTIL Table.NEXT = 0;

This code is supposed to let you step through a table from unique value to unique value for a field that is not part of the primary key. But what it does is not important. What is important is that this code ran in 10 seconds max on native for about 30,000 records. For the same data and the same code (and the same laptop [;)]) on SQL it took about 1.5 minutes. After a lot of research and hair pulling [}:)] I found out why there was such a difference. The little Navision-to-SQL engine knows about FIND … NEXT loops. But because I have that FIND('+'), the little engine that couldn’t treats the NEXT as reading a completely new set of records and every time requests the data straight from the db rather than reading it from the cache. And it does that every single time. [xx(] So the only way you could rewrite the above code to make it read from the cache is to read the records one by one and add them up yourself (a rough sketch of this follows at the end of this post). [}:)][xx(] I actually found a comment in the Navision-SQL documentation that such a thing could happen, resulting in very poor performance. And then, in true Navision style, it starts a new chapter without offering any solutions. [B)]

2. This one I think is quite well known, but just in case I’ll mention it. Consider the code:

  CustLedgEntry.SETCURRENTKEY("Customer No.","Country Code");
  CustLedgEntry.SETRANGE("Country Code",'Bora Bora'); // before you start: I know it is not a country, but still a nice place [;)]
  IF CustLedgEntry.FIND('-') THEN
    REPEAT
      // go to the beach and get a suntan [:)]
    UNTIL CustLedgEntry.NEXT = 0;

Now the above code performs rather well on native. Not as fast as filtering on Customer No., but still better than using the primary key. On SQL, if you do not filter on the first field of the key, it uses the primary key instead. Therefore the above code generates a table scan to find all records with Country Code = Bora Bora. I was too lazy to test whether the same holds if you filter on fields of the key other than the first one, but either way this is hardly ideal, and I would not call it poor code. So there is another problem with no easy solution. I mean, you are not going to start writing code like: if you filter on fields 1, 2 and 5 then use key 3, else if you filter on fields 2 and 5 use key 7, and so on. Somehow that does not fly for me.

3. This one everyone knows. Due to the way they implemented FlowFields (additional tables), data modification is seriously slowed down on heavily used large tables and when posting. Did you know that adding a date field as part of a SIFT key creates 3 records in the additional table for every record inserted in the original table? Did you? [:p]

4. The locking behaviour of SQL is different from native. That is both good and bad. After some really bad experiences at a client running native Attain with locking on the No. Series tables and dimensions (about 60-70 concurrent users), I would prefer SQL in those cases, because there record locking truly works. But when posting it is a different story.
I am not a SQL person so I cannot speak very well about it, but I think the posting routines would have needed to be rewritten for SQL to establish some standard locking order for the tables, since SQL is a lot more sensitive to this. As it stands, I think it is a lot easier to get deadlocks on SQL than on native Navision, regardless of the supposed record-level locking. I mean, did you know that after LOCKTABLE, SQL locks the entire set of records once the first record is retrieved? So the code:

  CustLedgEntry.LOCKTABLE;
  CustLedgEntry.SETCURRENTKEY("Customer No.","Country Code");
  CustLedgEntry.SETRANGE("Country Code",'Bora Bora');
  IF CustLedgEntry.FIND('-') THEN ...

will lock all records with that country code? Not too bad … Does this code sound familiar [;)]:

  GLEntry.LOCKTABLE;
  GLEntry.RESET;
  IF GLEntry.FIND('+') THEN
    NextLineNo := GLEntry."Entry No." + 1
  ELSE
    NextLineNo := 1;

As far as I understand, you get the whole table locked. So much for record-level locking. As I said, I am not a SQL person and have not tested this last thing fully, so I would welcome someone showing me that I am wrong. Please do, as a matter of fact [:D]. I think that should be enough to keep the debate open [:D] Cristi. P.S. David, do not tell me I understood your question wrong. Do you have any idea how much time it took to write this!!! [:p]
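A rough sketch of the “read the records one by one” rewrite referred to in point 1 above (the table and field names are placeholders, so treat it as pseudo-C/AL rather than compilable code):

  Total := 0;
  Table.SETCURRENTKEY(FieldToGroupBy);
  IF Table.FIND('-') THEN BEGIN
    LastValue := Table.FieldToGroupBy;
    REPEAT
      IF Table.FieldToGroupBy <> LastValue THEN BEGIN
        // consume/output the subtotal for LastValue here
        Total := 0;
        LastValue := Table.FieldToGroupBy;
      END;
      Total := Total + Table.FieldToSubtotalBy;
    UNTIL Table.NEXT = 0;
    // consume/output the subtotal for the final group
  END;

Because it never leaves the single FIND('-') … NEXT loop, the driver can keep serving records from its read-ahead cache instead of issuing a fresh request per group.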

Of course there’s a lot that could have been done better if Navision had created a new product for SQL. But where would we have been then? Today we have one code base. We can upgrade our customers’ systems from C/SIDE to SQL without them even noticing the difference (given that the customer tripled the hardware). We do not have to do a lot of upgrading/conversion to move to SQL, and we do not have to do additional training. What would the cost for Navision, and for us as NSCs (and our customers), have been if we had two product lines to support? I think those costs would have been substantially higher for our customers than just buying a lot of hardware. When developing a new product we will hopefully see a lot more performance per hardware from Microsoft, since they won’t have to care about C/SIDE compatibility then.

Hi Lars, I am contesting the solution they went for. I mean, it is a compromise solution and it can never be as good as native SQL. First, I am contesting David’s assertion that if you had the right hardware, right setup and right coding there should be no issues with performance. There are things outside of our control that affect performance in a negative way and that did not exist on native. Second, I think they could have put more effort into redoing the posting routines to cater to SQL, rather than just that stupid “IF RecordLevelLocking …” thing. I think there would have been less locking/deadlocking on SQL if they had rearranged some things in the posting routines, and I do not think the changes would have had any impact on native databases. Third, I do not know how hard it would be, but I doubt that they couldn’t improve the little C/SIDE-to-SQL engine any further … Cristi
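A minimal sketch of what a fixed locking order could look like (illustrative only, not taken from the standard posting routines; the table choice and order are assumptions): every routine touches the shared tables in the same sequence, so two sessions never wait on each other in opposite order. Note that on the SQL option the locks are actually acquired by the reads that follow LOCKTABLE, not by LOCKTABLE itself.

  // illustrative only: each posting routine locks the shared tables in the same agreed order
  GLEntry.LOCKTABLE;
  CustLedgEntry.LOCKTABLE;
  VATEntry.LOCKTABLE;
  // on SQL the lock is taken by the first read after LOCKTABLE, e.g.:
  IF GLEntry.FIND('+') THEN
    NextEntryNo := GLEntry."Entry No." + 1
  ELSE
    NextEntryNo := 1;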

Being heavily involved in SQL, I’ll do my bit… Those who have realised that a better-performing SQL implementation for Navision would have meant introducing some SQL into C/AL, and with it a new approach to programming against the DB, new skill sets etc., are right. How many C/AL developers are very proficient with SQL, not just a SELECT * FROM table, but can roll out a good SQL statement without much effort? Being good at the SQL language is not easy and takes time to learn to do well, like most things. The assumption with the SQL version was that these skills were not possessed by most people who were already very skilled with C/AL. That was a fact in-house at Navision. There was a decision from day one not to touch the C/AL language or introduce even a pseudo-SQL: the requirement that C/AL code should just run on the SQL version with no changes was priority no. 1, for better or for worse. The C/AL programmer is shielded from knowing:

1. The ‘standard’ SQL language (if you can’t use derived tables, correlated queries, a nice 3-way nested EXISTS query or an INSERT/SELECT/OUTER-JOIN copy, you won’t get far).
2. SQL Server data types and all that comes with them (explicit/implicit conversions, families, character collations bla bla).
3. Transaction isolation levels.
4. Lock hints, index hints, cursors, triggers, views, functions, stored procedures.
5. Data definition: table/index creation/alteration.
6. Security: SQL object permissions and role membership.
7. ???

There is very little that was needed in the SQL solution that required special handling. The introduction of the TransactionType in C/AL, for example, was unavoidable, but it was only done because there needed to be some openness regarding transaction and locking control. And nobody liked or understood even that small addition. This really leads to the ‘little C/SIDE to SQL engine’ mentioned here. Although this has improved over the versions, it will never provide the performance necessary to really exploit SQL Server, because it is still just a mapper from an ISAM-driven language to a set-driven one. Another problem with it is that it does not even know what C/AL commands are being executed, because it sits below the DB interface, which is used by both C/AL and C/SIDE internally (and C/FRONT and C/ODBC); it does not know about NEXT loops, for example. But it does recognise patterns and changes the SQL statements that are used when it realises what is really being done in C/AL. E.g. it will use a cursor in response to a FIND('-'/'+') if a NEXT was also used, but it will use a singleton SELECT if no NEXT was met. You can see this easily by running a small snippet of code 3 times; you’ll end up with the optimal SQL statements for that C/AL.

Using stored procedures is really not a solution, because Navision is not a fixed application environment. It is a development system where code is of course dynamic. Assuming that no one would like to port their C/AL application to T-SQL by hand, this would mean making a C/AL to T-SQL translator so the Navision development system could compile an application into native SQL. And this was deemed too expensive a task development-wise and not a strategic solution. T-SQL is one of the only remaining languages at Microsoft that is there for legacy reasons, and not until SQL Server 9 (Yukon) will there be an alternative to it for server-side code, in the shape of Common Language Runtime languages (C#, VB.NET, Managed C++ etc.). SQL Server is not very forgiving if you don’t use stored procedures.
It has some very nasty limitations for reading sets off the wire. Its most optimal method of reading data, the one your client should always use, ties up the entire connection until you’ve read the whole set or cancelled the query. This would correspond to one operation of one active C/AL record variable. It is essentially unusable for Navision. (The limitation will be removed in SQL Server 9, but hey, everyone can write C# server code instead of C/AL client code then anyway!) Microsoft realised this problem and introduced several ‘fast’ cursors and autofetching options etc. as a next best thing. The point being that SQL Server likes fixed, server-side code, with fixed queries, views etc., not dynamic client-side code. Anyway, I’m rambling.

Regarding some of the points made earlier, the C/AL examples where performance is a problem still need to be addressed. You will always be able to write some C/AL that will not be optimal. The case of a NEXT being done on a different record to the ‘current’ record on the same C/AL variable, as given earlier, is one to avoid for sure, and it is not that common in most application code. The troubleshooting guide tools look for this one. There has been much confusion about LOCKTABLE and its effect, and I made a huge post about LOCKTABLE some time back so I won’t go over it again. But the assumption about it locking the entire table is partly fueled by a bug that caused too much locking (not by LOCKTABLE, which does not visit the server, but by the subsequent FIND). The bug caused a cursor, which reads in blocks of records, to lock the whole block. This is fixed in 3.60 hotfix 17 and 3.70.

On row-level locking: I don’t think you can win the locking-granularity vs. deadlock war. With any complex application, if you open up lower locking granularity you will open up deadlocks, and it takes a lot of work to change existing code to use the right locking order etc. to avoid them. In Navision, all locking is done with a ROWLOCK overload sent to the server, which means SQL Server is refused the right to decide for itself whether to use row locks or a page/table lock. This is to improve concurrency, of course, but it can also be an advantage to take coarser locks, both performance-wise and deadlock-avoidance-wise. From 3.70 (and one of the 3.60 hotfixes) there is a way to do this. But you cannot be certain of the effect on a huge application, so it is somewhat hidden. Those programming in T-SQL would make these decisions as part of their code. I think I’ll stop there, though there is more to say. David, sorry this wasn’t an ‘answer’ to your question either!
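To illustrate the pattern recognition described above, two tiny fragments (the item number and posting group are made-up examples): the first does a FIND with no NEXT, so once the driver has seen it run it can use a singleton SELECT; the second is a FIND … NEXT loop, for which it will use a cursor.

  // lookup only - no NEXT follows the FIND
  Item.SETRANGE("No.",'70000');
  IF Item.FIND('-') THEN
    Description := Item.Description;

  // loop - FIND followed by NEXT on the same record variable
  Item.SETRANGE("Inventory Posting Group",'RESALE');
  IF Item.FIND('-') THEN
    REPEAT
      // process each item
    UNTIL Item.NEXT = 0;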

Point 1 of cnicola’s reply is a case in point. Given how often FIND('+') is used within Navision, if it takes 9 times as long on a SQL server (as the test suggests), is it any wonder that the SQL version has got a poor rep over the years? The other thing I dislike about SQL is that, as a system, you really need someone with a reasonable amount of knowledge to administer it, whereas the native server is just start and forget. IMHO many smaller customers just aren’t interested in the overheads of SQL. Cheers, John

Thanks John for the additional comments,[:)] but… did anyone actually read my question[V]? It seems that about 350 people have read the question, and none have replied with any SQL problems under the defined circumstances, so QED Navision is not slow on SQL … right[;)].

David, I have one question for you. If a Navision report runs for more than 48 hours and I can produce the same report in 10 minutes with T-SQL, is Navision slow or not? T-SQL reporting performance is 100 to 10,000 times faster compared to Navision. The Navision database engine had hit a dead end with database size and locking. You should remember the times when 100 MB database-limit granules were sold; I think they were trying to limit database sizes. Then came the expansion to 64 GB, a dead end too. The pointer was expanded to 64 bits; expansion to 128 bits would cause performance problems. So the SQL platform was chosen not for performance, but for the database size limit and locking (session count). The Navision database on SQL gives us a lot of possibilities, but the Navision application doesn’t exploit SQL as we expected. BTW, Navision should implement a new function for the SQL option in C/AL, something like: [TableRef] := CREATEVIRTUALTABLE('T-SQL statement'); Such things would rule!!! MyTab := CREATEVIRTUALTABLE('select … from … where …');

Hi Dalius, thanks for your partial response to my question. To complete the answer, could you please let me know the SQL configuration you have, and most importantly, which standard Navision report you are running and how many records it is based on. Thanks for the reply.