locking recordset in SQL

When using a recordset and modifying records within it, I see that the complete recordset is locked.

Question: How can I prevent this from happening when I'm changing records from within a report? I would like to have a record lock on a single record. When I use a new record variable to make a Variable.GET statement, I expect a record lock on one single record to take place. Instead I notice that even then, the complete recordset is locked. I've only noticed this by running the report and trying to edit the records at the same time.

Question: How, and with which tool, can I see which recordsets in Navision are/were locked? What do I have to look for? I've already read the "Performance Troubleshooting Guide" from the Attain 3.10 CD.

I'm using SQL Server 2000 with Attain 3.60.

Hi, there is a property on the report called TransactionType. Read the help on that; maybe it will make more sense to you than my explanation... Also, while SQL Server does record-level locking, I believe from what I understood that when a lock statement (or a write statement) is issued, it locks the entire set that you are working on (and therefore the entire table if there are no filters)... I have not tested that yet, but maybe you want to look into it (and if you do, please let me know too :-)). Hope that helps, Cristi

It's correct that the recordset is locked on the first MODIFY. The only solution I could come up with is placing a COMMIT after every X records that are processed.
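A minimal C/AL sketch of that workaround (the Counter variable and the batch size of 100 are illustrative, not from the post; note that each COMMIT ends the transaction, so the run is no longer atomic if it fails part-way through):

```
// Commit every 100 modified records so the locks taken so far are released.
// Counter (Integer) and the batch size of 100 are illustrative choices.
Counter := 0;
IF Rec.FIND('-') THEN
  REPEAT
    // ... change fields here ...
    Rec.MODIFY;
    Counter := Counter + 1;
    IF Counter MOD 100 = 0 THEN
      COMMIT; // releases all locks held by the transaction so far
  UNTIL Rec.NEXT = 0;
```

The trade-off is concurrency versus consistency: other sessions can get in between batches, but a crash mid-run leaves earlier batches already committed.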

If you're on a SQL database, and record-level locking is supported on the underlying table, then modifying a record should not lock the whole table out; it should only lock the record you are modifying. If you explicitly use the C/AL LOCKTABLE command with a SQL db, then any records read by the code between the LOCKTABLE and the COMMIT (via FINDs or GETs) are also locked, but the whole table should not be locked.

Can you be more specific about what you mean by a recordset? In C/AL terms, do you mean a FIND/REPEAT ... UNTIL NEXT = 0 type loop? Then we can talk about what/when read-locking is done, how the TransactionType affects it, and what/when write-locking is done.

Regarding GET: GET places a single-row KEY lock on the record (since it is a clustered index, it is a KEY lock rather than a row/RID lock). It uses the REPEATABLEREAD lock hint in the SELECT statement to override the SERIALIZABLE isolation level that is in effect if you have used LOCKTABLE, which would otherwise have placed a KEY-RANGE lock between the GET row and the next row. The same happens with FIND, which might be what you are referring to with your recordset.

Here is an example, which I observe and which you should be able to reproduce as well. With the following code:

  OnRun()
  Country.LOCKTABLE;
  Country.GET('GB');

After the GET but before the end of the run (i.e. before the commit) you should get the following SQL Server locks, which you can see with sp_lock in Query Analyzer (ignore the other locks listed):

  spid dbid ObjId    IndId Type Resource       Mode Status
  51   7    98099390 1     PAG  1:142          IU   GRANT
  51   7    98099390 1     KEY  (89007e14cc99) U    GRANT
  51   7    98099390 0     TAB                 IX   GRANT

On this ObjId, which is the Country table (your id will be different), this shows that the record itself is locked with an UPDATE lock: the KEY lock on index 1, which is the primary key clustered index. The UPDATE lock is compatible with SHARE locks but not with other UPDATE locks, so if you were reading the same record on two clients, one of them would block. The page that the row resides on is INTENT UPDATE locked, which means at least one row on it is UPDATE locked. And the table is INTENT EXCLUSIVE locked, which means that an object within the table is locked for update.

If you run the same code on a second client but change the value for the GET to the prior adjacent value (for my data this is the 'FR' country code), this will not block against the 'GB' GET. You'll get these additional locks:

  53   7    98099390 1     PAG  1:142          IU   GRANT
  53   7    98099390 1     KEY  (98005b35609d) U    GRANT
  53   7    98099390 0     TAB                 IX   GRANT

Since this is a different KEY resource for a different record, and it is not a KEY-RANGE lock, there is no conflict. Then take the next adjacent record, here 'GR'; again there is no conflict, with the locks:

  54   7    98099390 1     KEY  (99001a047b84) U    GRANT
  54   7    98099390 0     TAB                 IX   GRANT
  54   7    98099390 1     PAG  1:142          IU   GRANT

So I successfully GET three adjacent values with no conflict, and you should also see this. If you are seeing more locks, it's the result of more calls than just GET, e.g. explicit FINDs, or the report run-time performing FINDs. By the way, on SQL Server, record-level locking is supported on all tables, and on all views provided they are not views into non-SQL Server linked servers.

SQL Server doesn't only use table locks. SQL Server takes locks on memory pages of a size I don't remember how many kilobytes (see Books Online, sorry [:(]). When a process starts, SQL Server begins locking those pages, but if the process is too heavy (works with very many records), consumes a lot of time, or reads and/or writes too many tables, the page space runs out and SQL Server then takes control of the process and locks the tables by itself. This means that Navision has no control over table locks. But there is a solution anyway: please start the SQL Server Books Online and search for sp_tableoption. There you can find a parameter named something like BULK_OPTION or similar... With this you could force a table NOT to be table-locked, and you could reach the record locking you want. Please contact me at my mail: malonso@infogroup.es. I have a query saved to avoid this problem, and it worked in a similar case... Be happy & yourself, regards

You are referring to lock escalation. Anyway... the sp_tableoption 'table lock on bulk load' will not make any difference to locking from Navision. As the name says, it allows row locks for the bulk load API instead of a bulk update lock. But you're getting off track here. Navision places ROWLOCK directives on all set reads it does anyway (verifiable with the Profiler), which prevents SQL Server from taking a table or page lock instead. This also prevents lock escalation to a table lock. So you are already getting the finest possible locking granularity. This strategy is actually rather strict, since maintaining so many row locks increases the locking overhead on the server, which itself can have a performance cost, although concurrency is optimal.
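For reference, a sketch of the kind of statement you would see in such a Profiler trace (the table name is a hypothetical Cronus-demo-style name and purely illustrative; the exact SQL that Attain emits may differ):

```sql
-- Hypothetical example: a row-level read with the hints discussed above.
-- ROWLOCK keeps the lock granularity at row level (no page/table lock,
-- no escalation); REPEATABLEREAD holds the row lock for the rest of the
-- transaction instead of a SERIALIZABLE key-range lock.
SELECT * FROM "CRONUS$Country" WITH (ROWLOCK, REPEATABLEREAD)
WHERE "Code" = 'GB'
```

You can confirm what your own installation emits by tracing the SQL:BatchCompleted events in the Profiler while running the report.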

That's right: executing sp_tableoption 'table lock on bulk load' changes the locking overhead on the server. Our problem was located in Sales Header & Sales Line (tables 36 & 37). We have developed a module which continually writes orders into those tables; it's a module to work, for example, at a supermarket cash register (a point of sale, I think it's called in English). These orders are posted (shipping only) automatically. If someone combines shipments, the other users can't work, meaning they can't post orders... So we had to "push down" the lock level on those tables and on tables 110 & 111 and 112 & 113 (shipments & invoices)... After running sp_tableoption on the server, the users can work fine... Possibly it wasn't the best solution [:(]... but it works [:)]...
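For completeness, the option is set per table with sp_tableoption; a sketch of the call (the table name is illustrative, and whether 'true' or 'false' is appropriate depends on whether you want a single table lock or row locks during bulk load):

```sql
-- Hypothetical example: take one table lock during bulk load of the
-- Sales Header table (table 36) instead of many row locks.
-- Pass 'false' to keep row locks during bulk load (the default).
EXEC sp_tableoption 'CRONUS$Sales Header', 'table lock on bulk load', 'true'
```

Note that, as discussed above, this option only affects the bulk load API; it does not change how Attain itself locks the table.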

So with sp_tableoption 'table lock on bulk load' you change the locking from record locking to table locking. Normally, if you work in a multi-user environment, performance must be worse than before, or am I wrong? Nico

Micky is using this option because his module uses the bulk insert function that this option is designed for. In that case they benefit from a single table lock instead of many row locks for the insert process, but it does mean less concurrency than row locks. Since this happens outside of Attain, it has no influence on how Attain works; Attain has nothing to do with bulk insert. Row locks will still be used by Attain.