Decrease in performance with SQL Server

Situation: we have 80 points of sale located in Italy, linked to a central server via Citrix. The problem is that after about two hours of work (writing about 100 sales lines) the system suffers an incredible decrease in performance. We have 2 GB of RAM, and the total infrastructure is more than enough (so the Microsoft specialists said). The Sales Line table is about 4.5 GB and contains about 800,000 records. Does anyone have a suggestion to help us? Thank you very much. Elena

The first thing to do is check Task Manager and see what is going on with the CPU. If it is not being maxed out, you will have to check for bandwidth problems. Also, if you have the right version of Citrix, you can use Resource Manager to check for memory leaks and for details on each application: how much CPU it uses, etc. Good luck.

quote:


Originally posted by Elena Staffolani
Situation: we have 80 points of sale … The Sales Line table is about 4.5 GB and contains about 800,000 records. Elena


4.5 GB in the Sales Line table is a big issue. It looks like you have some code that consolidates the sales lines from your POS systems prior to posting. This is a BIG NO-NO … it should be done in a separate table. You need to contact your NSC and have them fix the code.

quote:


Originally posted by JohnT
The first thing to do is check Task Manager and see what is going on with the CPU. If it is not being maxed out, you will have to check for bandwidth problems. Also, if you have the right version of Citrix, you can use Resource Manager to check for memory leaks and for details on each application: how much CPU it uses, etc. Good luck.


He should also check disk activity… If the disks are overly busy, there could be problems in the code regarding the use of keys and filters… Another question: is this Navision native or SQL Server? With SQL Server there is a VERY BIG ISSUE with SIFT indexes, which must not be maintained on big tables (G/L Entry, Cust. Ledger Entry, Detailed Cust. Ledg. Entry, …). Bostjan

We are working on SQL Server. We have created 4 new keys with SumIndexFields on the Sales Invoice Line table (not Sales Line as I said before, sorry). The problem is apparently disk activity: when performance slows down, we see abnormal activity on the disk. TempDB grows to as much as 5 GB, and a simple INSERT on Sales Invoice Line takes several minutes to complete. After a few minutes of inactivity, TempDB shrinks back to a normal size and everything works again… until the next episode, about an hour later. What’s funny is that we have another very small server (0.5 GB of RAM) with the same database, and it works excellently. Free disk space is not an issue, as we have > 190 GB free at the moment. We are going to try disabling the Maintain SIFT Index property on the new keys. Let’s see what happens… We hope our Project Manager has not committed suicide yet… he disappeared this morning :slight_smile:

quote:


Originally posted by Elena Staffolani
We are working on SQL Server. We have created 4 new keys with SumIndexFields on the Sales Invoice Line table (not Sales Line as I said before, sorry). The problem is apparently disk activity: when performance slows down, we see abnormal activity on the disk. TempDB grows to as much as 5 GB, and a simple INSERT on Sales Invoice Line takes several minutes to complete. After a few minutes of inactivity, TempDB shrinks back to a normal size and everything works again… until the next episode, about an hour later. What’s funny is that we have another very small server (0.5 GB of RAM) with the same database, and it works excellently. Free disk space is not an issue, as we have > 190 GB free at the moment. We are going to try disabling the Maintain SIFT Index property on the new keys. Let’s see what happens… We hope our Project Manager has not committed suicide yet… he disappeared this morning :slight_smile:


[:D] SIFT it is!!! Since you are using the SQL option, I strongly suggest that you disable the maintenance of SIFT on all big tables. This will do miracles… We had the same problems… We leave only one SIFT on G/L Entry, on the key G/L Account No., Posting Date. Remember: SIFT on big tables kills SQL performance. Bostjan
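To decide which tables count as “big” before trimming SIFT maintenance, it helps to look at row and page counts first. Here is a minimal sketch for SQL Server 2000 using the sysindexes system table; note the reserved page count for indid 0/1 is only an approximation of total table size, and you would run this inside the Navision database:

```sql
-- Sketch: list the 20 largest user tables by allocated space (SQL Server 2000).
-- sysindexes stores sizes in 8 KB pages; rows for indid 0 (heap) or 1
-- (clustered index) carry the table-level totals.
SELECT TOP 20
    o.name                 AS table_name,
    i.rows                 AS row_count,
    i.reserved * 8 / 1024  AS reserved_mb
FROM sysobjects o
JOIN sysindexes i ON i.id = o.id
WHERE o.xtype = 'U'        -- user tables only
  AND i.indid IN (0, 1)
ORDER BY i.reserved DESC
```

Tables that come out on top here (entry and posted-document tables, typically) are the candidates whose keys deserve a review of their SIFT maintenance settings.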

Try running the Optimize function: File → Database → Information → Tables → click Optimize. Another good idea when using SQL Server is to set up a regular maintenance plan which INCLUDES index rebuilding and update-statistics activities. You should also check with your MS SQL Server administrator the frequency of backing up the transaction log; this can have a big impact on performance. And finally, let your NSC recheck any code modifications made to tables and/or posting routines. /I still remember fixing a “VERY COMPLEX” 16-hour report just by selecting the proper key in the properties; the run time dropped to 10 minutes [:D]/ Good luck
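For the maintenance-plan part, a sketch of the kind of commands such a job would run on SQL Server 2000 (the database and table names below are taken from the trace later in this thread and should be replaced with your own; run index rebuilds off-hours, as they take exclusive locks):

```sql
-- Sketch of periodic maintenance for SQL Server 2000.
USE [Gestionale-ORO]
GO
-- Rebuild all indexes on the big posted-lines table (90% fill factor)
DBCC DBREINDEX ('Gestionale ORO$Sales Invoice Line', '', 90)
GO
-- Refresh optimizer statistics for that table with a full scan
UPDATE STATISTICS [Gestionale ORO$Sales Invoice Line] WITH FULLSCAN
GO
-- Or refresh statistics for every table in the database at once
EXEC sp_updatestats
GO
```

Stale statistics in particular can make the optimizer pick terrible plans, which fits the symptom of a server that degrades over time and recovers after a pause.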

Hi Elena, how exactly is the server configured? I mean, you should report here the exact software and hardware configuration so that we can evaluate all the possibilities. [:D]

Hi Elena, this is more helpful now. Basically you have badly designed code (it would appear that you have programmers who have not taken SQL into account), and most importantly you DEFINITELY have a badly configured server. You need to get this checked out. Unless you are a Navision developer with a strong SQL background, DO NOT make major changes to your system, such as modifying keys; leave this to your NSC. In any case, it is more than programming and hardware: it seems the whole concept you have is problematic. Navision makes it clear that such processing should NOT be performed on the document tables, but on the entry tables instead.

Where do you disable SIFT?

Disabling SIFT simply moves a performance problem from one place to another. If you don’t have a full understanding of the implications, you should not “turn off” the SIFT fields; contact the SQL expert at your NSC and discuss it with them. In reality, the better solution is to redesign the FlowFields so that you can then optimize the SIFT fields. This will give a much better performance payback than just disabling things.

Hi, I understand that it may impact other things; I just wanted to try it on our test system to see if I was barking up the right tree. Thanks

The SIFT triggers make no use of tempdb; even the cursors created therein do not need it. The only situation where tempdb would be used is when a set operation is performed on the base table: an UPDATE or DELETE with a WHERE clause matching many rows. In C/AL, a MODIFYALL or DELETEALL does this in some cases. But even then it would be a small set and should not cause tempdb to grow as you describe, and a single insert will not do any of this. It is very easy to see which SQL statement is causing the problem: use SQL Profiler and/or the Client Monitor. It is better than guessing. The size of your tempdb sounds very strange. I would set up some (SQL) performance counters on it to see what write activity is going on.
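Alongside Profiler, a couple of quick server-side checks give a first picture of what tempdb is doing. A minimal sketch for SQL Server 2000:

```sql
-- Sketch: quick checks on tempdb growth (SQL Server 2000).
USE tempdb
GO
-- Current space reserved and used inside tempdb
EXEC sp_spaceused
GO
-- Transaction-log fill level for every database, tempdb included
DBCC SQLPERF (LOGSPACE)
GO
```

Running these before, during, and after a slowdown episode shows whether it is data pages or the log that balloons, which narrows down what kind of statement is responsible.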

In my experience with Navision native and SQL statements via C/ODBC, the ORDER BY part of the SQL statement must match at least one of the table keys. For instance, in the Sales Line table most of the keys start with “Document Type”; if the first field in the ORDER BY is not “Document Type”, then the server will sort the records manually, as if the key did not exist. Open the “Sales Line” table and click “Sort” to see the valid key orders. For instance, here are some keys that belong to that table:

Document Type,Document No.,Line No.
Document Type,Type,No.,Variant Code

As you see, they start with “Document Type”. If you use ‘ORDER BY “Document No.”’, the server will do a manual sort without a key. The correct syntax for better performance is ‘ORDER BY “Document Type”, “Document No.”’. You can omit the trailing parts of a key, but make sure that you use the same field sequence as it appears in the list of keys.
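Side by side, the point above looks like this. A sketch, assuming the standard Navision SQL naming (company prefix with “$”, periods in field names replaced by underscores; “CRONUS” is a placeholder company name):

```sql
-- Slow: ORDER BY does not start with "Document Type", so no Navision key
-- matches and the server must sort the result set itself.
SELECT * FROM "CRONUS$Sales Line"
ORDER BY "Document No_"

-- Fast: the field sequence matches the key (Document Type, Document No.,
-- Line No.), so the rows can be read through the index, already sorted.
SELECT * FROM "CRONUS$Sales Line"
ORDER BY "Document Type", "Document No_"
```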

C/ODBC is not relevant here since it only applies to Navision Server, and Elena’s implementation is on SQL Server.

quote:


Originally posted by Naji
As you see, they start with “Document Type”. If you use ‘ORDER BY “Document No.”’, the server will do a manual sort without a key. The correct syntax for better performance is ‘ORDER BY “Document Type”, “Document No.”’.


[?][?][?] Where do you use ‘ORDER BY’ in Navision? Bostjan

Hi all, many thanks for your help. Sorry for the small delay; we were busy with the project. Now everything is working well, and the problem was actually solved simply by installing Service Pack 3 on SQL Server 2000.

Just to answer David Singleton’s points: we did not write bad code, as the slow codeunit was the standard codeunit 80. We simply have a lot of data in table 113, because our client is a retail organization (one of the largest in Italy), and we need all that data in order to run a centralized purchase plan and a point-of-sale refill plan, based on a sales forecast which is calculated partly from the sales history. By the way, this procedure of ours (which uses those CALCSUMS on table 113) had worked well from the beginning.

We tried disabling the SIFT fields on the keys, and we got a small step forward, but not enough (15 minutes with SIFT, 2 without, for posting a single-line invoice). As I said, we had tested the complete solution on our development server without problems. Then we saw that the difference was the service pack (3 on our server, 2 on the client’s server). Fortunately, the responsibility for managing the server lies with the client’s EDP people; they are experts, as they have used SQL Server for many years with other applications. They did use SQL Profiler, as Robert Cinkaid suggested, and found that everything hung after an “insert” into table 113. At the bottom of this message you can read the test transaction they wrote to try out the various situations on the two servers.

Don’t ask me exactly what changed with Service Pack 3; I’ve heard them guessing it had to do with SQL Server’s statistics. Anyway, the transaction in question now takes 2 seconds. That’s all, folks. “Ciao” to you all from Italy. ELENA

p.s. our project manager is fine!
============================
HERE IS THE SLOW TRANSACTION
============================
declare @P1 int
set @P1=85
exec sp_prepexec 85 output, N'@P1 varchar(20),@P2 int,@P3 varchar(20),@P4 int,@P5 varchar(20),@P6 varchar(10),@P7 varchar(10),@P8 datetime,@P9 varchar(50),@P10 varchar(50),@P11 varchar(10),@P12 decimal(38,20),@P13 decimal(38,20),@P14 decimal(38,20),@P15 decimal(38,20),@P16 decimal(38,20),@P17 decimal(38,20),@P18 decimal(38,20),@P19 decimal(38,20),@P20 tinyint,@P21 decimal(38,20),@P22 decimal(38,20),@P23 decimal(38,20),@P24 decimal(38,20),@P25 int,@P26 varchar(20),@P27 varchar(20),@P28 varchar(10),@P29 varchar(20),@P30 int,@P31 varchar(10),@P32 varchar(10),@P33 varchar(10),@P34 varchar(20),@P35 tinyint,@P36 varchar(10),@P37 varchar(20),@P38 decimal(38,20),@P39 tinyint,@P40 varchar(10),@P41 varchar(10),@P42 int,@P43 varchar(10),@P44 varchar(10),@P45 int,@P46 varchar(10),@P47 varchar(10),@P48 varchar(10),@P49 varchar(20),@P50 tinyint,@P51 varchar(10),@P52 varchar(10),@P53 varchar(10),@P54 varchar(20),@P55 int,@P56 decimal(38,20),@P57 decimal(38,20),@P58 decimal(38,20),@P59 decimal(38,20),@P60 varchar(10),@P61 varchar(10),@P62 varchar(20),@P63 decimal(38,20),@P64 varchar(10),@P65 decimal(38,20),@P66 datetime,@P67 varchar(10),@P68 tinyint,@P69 varchar(10),@P70 tinyint,@P71 varchar(10),@P72 varchar(20),@P73 varchar(10),@P74 int,@P75 varchar(30),@P76 varchar(10),@P77 tinyint,@P78 varchar(10),@P79 varchar(10),@P80 int,@P81 varchar(20),@P82 varchar(20),@P83 varchar(20),@P84 int,@P85 int,@P86 varchar(10),@P87 varchar(10),@P88 tinyint,@P89 varchar(10),@P90 decimal(38,20),@P91 decimal(38,20),@P92 decimal(38,20),@P93 varchar(12),@P94 varchar(10),@P95 tinyint,@P96 decimal(38,20),@P97 decimal(38,20),@P98 decimal(38,20),@P99 decimal(38,20),@P100 decimal(38,20)',
begin transaction
INSERT INTO "Gestionale-ORO"."dbo"."Gestionale ORO$Sales Invoice Line" WITH (REPEATABLEREAD, ROWLOCK) ("Document No_","Line No_","Sell-to Customer No_","Type","No_","Location Code","Posting Group","Shipment Date","Description","Description 2","Unit of Measure","Quantity","Unit Price","Unit Cost (LCY)","VAT %","Line Discount %","Line Discount Amount","Amount","Amount Including VAT","Allow Invoice Disc_","Gross Weight","Net Weight","Units per Parcel","Unit Volume","Appl_-to Item Entry","Shortcut Dimension 1 Code","Shortcut Dimension 2 Code","Customer Price Group","Job No_","Appl_-to Job Entry","Phase Code","Task Code","Step Code","Job Applies-to ID","Apply and Close (Job)","Work Type Code","Bill-to Customer No_","Inv_ Discount Amount","Drop Shipment","Gen_ Bus_ Posting Group","Gen_ Prod_ Posting Group","VAT Calculation Type","Transaction Type","Transport Method","Attached to Line No_","Exit Point","Area","Transaction Specification","Tax Area Code","Tax Liable","Tax Group Code","VAT Bus_ Posting Group","VAT Prod_ Posting Group","Blanket Order No_","Blanket Order Line No_","VAT Base Amount","Unit Cost","Line Amount","VAT Difference","VAT Identifier","Variant Code","Bin Code","Qty_ per Unit of Measure","Unit of Measure Code","Quantity (Base)","FA Posting Date","Depreciation Book Code","Depr_ until FA Posting Date","Duplicate in Depreciation Book","Use Duplication List","Responsibility Center","Cross-Reference No_","Unit of Measure (Cross Ref_)","Cross-Reference Type","Cross-Reference Type No_","Item Category Code","Nonstock","Purchasing Code","Product Group Code","Appl_-from Item Entry","Service Contract No_","Service Order No_","Service Item No_","Appl_-to Service Entry","Service Item Line No_","Serv_ Price Adjmt_ Gr_ Code","Return Reason Code","Allow Line Disc_","Customer Disc_ Group","Deductable %","Peso Totale Articolo","Peso Fino Totale Articolo","BarCode","Codice Convenzione","Scontrino Annullato","Costo STD Articolo","Costo Materiale","Costo Manifattura","Costo Complessivo","Ricarico(valore)") VALUES ( 'x530-300038', 10000, 'SCO530', 2, '40100004', '530', '', 'Oct 4 2003 12:00AM', 'PEN DELFINO DIAM. IN CERCHIO', '', 'Pezzo', 1.00000000000000000000, 43.00000000000000000000, 0.00000000000000000000, 20.00000000000000000000, 0.00000000000000000000, 0.00000000000000000000, 35.83000000000000000000, 43.00000000000000000000, 1, 0.00000000000000000000, 0.00000000000000000000, 0.00000000000000000000, 0.00000000000000000000, 0, '4010', 'RETAIL', '', '', 0, '', '', '', '', 0, '', 'SCO530', 0.00000000000000000000, 0, 'STANDARD', 'STANDARD', 0, '', '', 0, '', '', '', '', 0, '', 'RETAIL530', 'IVA20', '', 0, 35.83000000000000000000, 0.00000000000000000000, 43.00000000000000000000, 0.00000000000000000000, 'IVA20', '', '', 1.00000000000000000000, 'PZ', 1.00000000000000000000, 'Jan 1 1753 12:00AM', '', 0, '', 0, '', '', '', 0, '', '', 0, '', '', 0, '', '', '', 0, 0, '', '', 1, '', 100.00000000000000000000, 1.40000000000000000000, 1.10600000000000000000, '401000040043', '', 0, 18.62000000000000000000, 0.00000000000000000000, 0.00000000000000000000, 0.00000000000000000000, 0.00000000000000000000)
rollback transaction

Congratulations on the successful closing of the case. Regarding the “slow transaction” example, my only comment is: “Oh, really! Do you think this is slow?” [:D]