SQL Server LOG

We have a customer with NAV 3.70A on SQL Server 2000.

We have a problem with the Purchase Order form: at random, the user has to wait 30 seconds when opening the form.

The slowness seems to occur when the user opens the form for the first time, or when they were positioned on the first record and open the Navision client again.

I have investigated with SQL Profiler, and when the form is opened, all the time is spent in rows with EventClass = TransactionLog and EventSubclass = BeginXact, ModifyRow(IndexInterior), and so on.

One of the rows, near the beginning, has EventSubclass = CreateIndex.

I have not found any unusual FlowFields on either form (main form and subform), but I don't know the real extent of the customizations (in tables or keys).

Does anyone have an idea of what it is doing, or know how to get more information from the LOG file (especially about which index is being created)?

There was a maintenance job (reorganizing data and indexes) running once a week; today I have scheduled it for three times a week (hopefully that will help).

Thank you in advance for any information

Try running Optimize from Navision:
File -> Database -> Information -> Tables -> filter on the Purchase Header and Purchase Line tables and click Optimize.

This rebuilds the indexes online, like the maintenance job does. But it also deletes all SIFT records with value = 0; that last part is something SQL Server itself will never do.
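For reference, a rough server-side equivalent of the index-rebuild part on SQL 2000 would look like this (the "Company1$" table prefix is an assumption; substitute your own company name). Note it only covers the indexes, not the SIFT cleanup that Optimize performs:

```sql
-- Rebuild all indexes on the NAV purchase tables (SQL 2000 syntax).
-- The "Company1$" prefix is an assumption; use your own company name.
DBCC DBREINDEX ('Company1$Purchase Header')
DBCC DBREINDEX ('Company1$Purchase Line')
-- Unlike Navision's Optimize, DBCC DBREINDEX does not delete
-- zero-value SIFT records; only the client-side Optimize does that.
```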

Some things to check:
-Change the form property SourceTablePlacement to First.
-Check whether your client has enough object cache. The fact that it only happens (if I understood correctly) the first time the user opens it could be because the form has to load a lot of objects into memory, and the second time the objects are already in memory.
It is also possible that the forms don't use the primary index. That makes SQL Server read the records, order them, and then send them to the client. The second client (or the same client when it reopens the form) is faster because the server already has the records in memory.

BTW:how many records are there in the header and detail tables?

First of all , thank you for the information.

With Profiler I noticed that the slowness occurred during a cursor open:

sp_cursoropen @P1 output, N'SELECT * FROM "database1"."dbo"."Company1$Purchase Header" WHERE (("Document Type"=@P1)) AND "Document Type"=@P2 AND "No_">=@P3 ORDER BY "Document Type","No_" OPTION (FAST 10)', @P2 output, @P3 output, @P4 output, N'@P1 int,@P2 int,@P3 varchar(20)', 1, 1, 'L1'

and then the TransactionLog rows begin, etc.

I have followed the suggestion, and during the Optimize function Profiler shows almost the same rows: CreateIndex, ModifyRow(IndexInterior), etc.

After the Optimize it is fast; so the problem was indeed the rebuilding of the (clustered?) index during the form opening.

But why did it need to rebuild the index? That is, I have increased the frequency of the maintenance job, but I am afraid it may happen again.

I have already tested with SourceTablePlacement=First, but it was worse.

It was better with SourceTablePlacement=Last, maybe because it looks for "No_" >= xxx, and for the last record it did not need to rebuild the index.

There are 13,000 records in Purchase Header and 150,000 in Purchase Line, and the problem occurred for the same client every time it tried to reopen the same form.

Thanks again

My guess was that your SIFT table for Purchase Line had grown out of proportion, which fits since you have 13K purchase headers and 150K purchase lines. The Optimize rebuilds the SIFT tables; on SQL Server these tables look like CompanyName$Integer. You will want to read up on the SIFT implementation on SQL Server. In any table where you do a lot of inserts and deletes, the SIFT tables keep growing bigger and bigger, and the Optimize rebuilds them in addition to rebuilding the indexes.
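A quick way to check for oversized SIFT tables is to query the system tables. This is a hedged sketch for SQL 2000, assuming the Company$&lt;table no&gt;$&lt;key no&gt; naming that NAV's SQL option uses for its SIFT tables (39 being Purchase Line):

```sql
-- List SIFT tables for table 39 with their row counts (SQL 2000).
-- indid 0/1 = heap/clustered index row, which carries the row count.
SELECT o.name, i.rows
FROM sysobjects o
JOIN sysindexes i ON i.id = o.id AND i.indid IN (0, 1)
WHERE o.type = 'U'
  AND o.name LIKE '%$39$%'
ORDER BY i.rows DESC
```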

There is no SIFT table for tables 38 and 39.

Apparently someone before me set MaintainSIFTIndex=No on all the keys with SumIndexFields.

I inherited this situation, and I don't know whether it was done because of slowness when updating/inserting rows.

Turning off SIFT can be a benefit but can also slow these tables down. It depends on how many records are in them, and so on. If the Optimize has worked, then great; no need to dig further unless you have performance problems while the order card is running.

Sort of off topic, but… consider an upgrade of the executables to at least 3.70B, or preferably 4.00 SP3, since 3.70A had a lot of problems with SQL.

Yesterday my Navision database on the network was troublesome.
It seemed to be because the .LOG file had grown significantly.
Usually it is only 1 GB, but now it is 18 GB.
How could this have happened?

Hi all!

Well, reading this thread, I would say we are talking about a number of different issues …

To the original question: How to get more information in the Log (and I assume we are talking about the Error Log here)?

There are so-called "Trace Flags" which can be activated to provide more information. The most commonly used are 1204, 1205 and 3605 (SQL 2000) or 1204 and 1222 (SQL 2005), to gather detailed information about deadlocks in the Log. For more trace flags, see Books Online.

Trace flags can be set as a startup parameter or via the DBCC TRACEON / DBCC TRACEOFF commands.
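For example, to enable the deadlock flags mentioned above at runtime on SQL 2000 (the -1 argument applies them server-wide rather than only to the current session):

```sql
-- Turn the deadlock trace flags on server-wide.
DBCC TRACEON (1204, 3605, -1)
-- Check which trace flags are currently active.
DBCC TRACESTATUS (-1)
-- Turn them off again when done.
DBCC TRACEOFF (1204, 3605, -1)
```

Alternatively, add -T1204 and -T3605 as startup parameters so the flags survive a server restart.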

About indexes: when opening a form, certainly no indexes are created or anything like that. NAV sends a SELECT query to build a cursor. In the case of the Purchase Order form it should do that using the clustered index, if nothing was customized, and that is what your Profiler trace has shown. If this opening takes too long, the reasons are, as stated in previous postings, insufficient index maintenance (fragmentation, fill factors, statistics), which was obviously the case with your issue, and SIFT usage due to FlowFields being displayed. When it is about SIFT, be careful: having too many enabled is as problematic as having too few, depending on the size of the tables which are actually the "source" of a FlowField!

To get better information from the Error Log, I recommend setting up a job which executes the sp_cycle_errorlog procedure daily at 00:00, to get a fresh log per day. It makes investigations easier.
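The cycling itself is a one-liner; scheduling it daily at 00:00 is then just a standard SQL Server Agent job wrapped around it:

```sql
-- Close the current error log and start a new one; SQL Server keeps
-- the previous logs as ERRORLOG.1, ERRORLOG.2, and so on.
EXEC sp_cycle_errorlog
```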

Stan: are you talking about the Error Log, or is it about the Transaction Log? I guess we are talking about the TLog here …

Well, how big is the database? If the Recovery Model is set to FULL, the TLog contains information about every single transaction. So if the TLog was growing, this simply means that there have been that many transactions. If the difference between the usual size and the current size is so remarkable, then maybe there have been massive data imports, postings, cleanups, etc., or this could also be caused by index maintenance …

To reduce the size of the TLog you should create a backup of it (with truncate); then you can shrink it.
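On SQL 2000 that looks roughly like the following (the logical log file name is an assumption; check yours with sp_helpfile). Note that truncating breaks the log backup chain, so take a full database backup afterwards:

```sql
-- Empty the transaction log without backing it up (SQL 2000 syntax;
-- WITH TRUNCATE_ONLY was removed in later SQL Server versions).
BACKUP LOG database1 WITH TRUNCATE_ONLY
-- Shrink the physical log file back down; the logical name
-- 'database1_Log' is assumed, and the target size here is 1024 MB.
USE database1
DBCC SHRINKFILE ('database1_Log', 1024)
```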

This sounds like a parameter-sniffing problem. That is, when SQL Server executes a query for the first time, its query plan gets stored in the cache and will be reused for all subsequent queries that match the pattern. If that first plan was not exactly "representative", other queries may suffer.

By reindexing you invalidated the cached query plans, so that a new, better-suited one had to be generated.

Try an index that supports this particular query no matter what parameter values you supply. That usually helps.
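As a hedged sketch, an index matching the WHERE and ORDER BY of the captured cursor query would look like this (the index name is made up, and on an uncustomized table the clustered primary key on Document Type, No_ should already cover it):

```sql
-- Index supporting WHERE "Document Type" = ? AND "No_" >= ?
-- with ORDER BY "Document Type", "No_" from the captured query.
CREATE INDEX IX_PurchHeader_DocType_No
ON [Company1$Purchase Header] ([Document Type], [No_])
```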