YIELD and SLEEP

I’ve created a codeunit which reads an ASCII file and fills the data into different tables. Unfortunately, the source ASCII files have become bigger and bigger, so the codeunit now takes more than an hour to update the tables. A lot of users need to work with these tables in the meantime, and while the codeunit is running their processes are very slow. So I would like to pause the codeunit from time to time, and as I read in the online help there are two functions (YIELD and SLEEP) which could help. But I couldn’t find out exactly what they do. Can anybody tell me whether one of these functions could provide a temporary break, or how else I could achieve this? Thanks in advance.

Under current operating systems they are pretty much the same, but they will not help with your problem: you want to shift system utilisation from the import to users reading the tables, not from the database server task to other, non-Navision tasks running on the server. The latter is what YIELD and SLEEP do. You have run into the file-handling problem of Navision: for every operation it performs on a file, Navision re-reads the file from the start up to the current position. I recommend splitting the file and importing the parts separately.
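For reference, this is all the two functions amount to in C/AL (the millisecond value below is only an example):

  SLEEP(500);  // suspends execution of the client for 500 milliseconds
  YIELD;       // hands the time slice back to the operating system

Neither statement ends the transaction or releases table locks, which is why they don’t solve your problem.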

Hey Alex, the received file has a header and a footer with very special (typical Deutsche Telekom) data in it, so splitting the file would be a very difficult task. Is there no way to give users access to the tables while they are being updated? BTW: the users are only reading the tables (they are our hotline department); they don’t need to make entries in the updated tables.

If you have not used the LOCKTABLE statement, all users should have access to the table. Can you give some more information? Does the codeunit have an internal loop for every line/field to import, or is the loop external, calling another codeunit which does the insert of a record? Pretty strange problem… BTW: in my first post I made a mistake; YIELD and SLEEP only have an effect on the client, not the server.
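For clarity, this is the statement I mean (a minimal sketch; CallRec is just an example record variable):

  CallRec.LOCKTABLE;  // explicitly locks the table for the rest of the transaction
  CallRec.INSERT;     // other write transactions on this table now wait until COMMIT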

The task is to import invoice records from Deutsche Telekom. The data in the source file has to be divided among several tables (such as customer, call data, invoice data and so on). The import usually takes 1 to 2 hours. It is a single codeunit which opens the source file, reads a record and checks which type of record it is; depending on the record type, the data is pushed into a table. Reading a record and distributing it to the desired table is a loop which finishes when the footer is read. I guess it would be okay if I could make every loop iteration “sleep” for a while, so the database can serve a request from a user. It wouldn’t matter if the runtime of the codeunit grew to 3 or 4 hours; more important is that the users can do their work in the meantime. BTW: of course we read the source file at night (if it is available). But if we receive a file during business hours it has to be imported ASAP, and that causes the described problems.
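To make it concrete, the loop looks roughly like this (simplified; the record type codes, the path and the function names are not the real ones):

  SrcFile.TEXTMODE(TRUE);
  SrcFile.OPEN('C:\Import\telekom.txt');   // path invented for the example
  REPEAT
    SrcFile.READ(Line);
    RecType := COPYSTR(Line, 1, 2);        // first characters identify the record type
    CASE RecType OF
      '10': InsertCustomer(Line);          // invented helpers, one per target table
      '20': InsertCallData(Line);
      '30': InsertInvoiceData(Line);
    END;
  UNTIL RecType = '99';                    // the footer record ends the loop
  SrcFile.CLOSE;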

Hi Stefan! Maybe you can “speed up” the operation this way:
1. Read the file and store all fields in a buffer table (text fields only!), with no further processing of the data.
2. Then process the stored data from the dummy table and transfer it to the “real” tables.
This could be faster than doing all the processing while reading the file! When running step 2, you could create a kind of “processor form” which uses the OnTimer trigger to work on defined “packages” of records, e.g. every 5 minutes process 100 records, then pause again, and so on… Not nice, but maybe a workaround… Regards, Jörg
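P.S.: The OnTimer trigger could look roughly like this (a sketch only; the buffer table, its Processed field and the transfer function are invented, and TimerInterval would be set on the form, e.g. to 300000 ms = 5 minutes):

  // OnTimer trigger of the "processor form":
  Buffer.SETRANGE(Processed, FALSE);
  IF Buffer.FIND('-') THEN BEGIN
    Cnt := 0;
    REPEAT
      TransferToRealTables(Buffer);   // moves one stored line into the "real" tables
      Buffer.Processed := TRUE;
      Buffer.MODIFY;
      Cnt := Cnt + 1;
    UNTIL (Buffer.NEXT = 0) OR (Cnt >= 100);
    COMMIT;                           // release the locks until the next timer tick
  END;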

Hey Joerg, although I can’t imagine that pushing the records into a temporary table first and then distributing them to the final tables could be faster, I’ll give it a try. It’s always interesting to see which solutions are found for certain problems… :wink: I’ll post when I have done so. Thanks so far for your efforts.

Hi Stefan, my guess is that you are doing minimal processing with this file, so using a buffer table won’t actually speed up the process (it will actually take longer), but it will make it easy to release control back to the server, so that other users will be able to access the table and work with it.

Also Alex, the issue of Navision restarting from the beginning of the file is only an issue with dataports, and is probably the reason Stefan is using a codeunit.

I would not normally recommend using COMMIT, but in this case it is the way to do it, and I think you have been around long enough to get it right. My recommendation would be as follows (I assume it is not practical to break up the file into smaller chunks, otherwise you would have done that):

1. Determine what is a reasonable time to have control of the system, say 10 seconds, then estimate how many records can be read in that time. Store this in a variable “NoRecsToCommit”::Integer.
2. Create new fields on the imported records: “In Progress”::Boolean, set to TRUE while importing; “Import File Name”::Text, the name of the file you are importing; “Import Line No.”::Integer, a sequential line number.
3. Modify your codeunit to count the number of lines imported, and each time you reach the “NoRecsToCommit” value, reset the counter and do a COMMIT. (You may or may not need to add a GetLatestVersion here, and maybe a SLEEP command, but you can experiment with that.) See the sketch at the end of this post.
4. At the end of the import, add the following: MODIFYALL(“In Progress”,FALSE);

So what happens? Each time the COMMIT executes, the records are committed to the database and other users get access to the data.

Warnings: COMMIT is a dangerous command if used incorrectly, do not try this at home. If the codeunit fails, you will have a number of records already committed; you have two solutions: write code to track these using the new fields, or (my recommendation) delete ALL records with “In Progress” = TRUE and start the import again. You must fully understand all code attached to this table, and you must create filters and tests so that no form, codeunit or user has any access to any record with “In Progress” = TRUE. You will also need to analyse performance: this process will definitely slow down your system, because the repeated COMMITs cause a lot of transfers between the commit cache and disk, and you will need to weigh this against the requirement of users having access to the data.

Please let me know how it goes. PS: Yes, I have done this before, and it does work.
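To sketch what step 3 could look like inside the existing read loop (the variable and function names, apart from those defined above, are invented):

  LineCount := 0;
  REPEAT
    SrcFile.READ(Line);
    RecType := COPYSTR(Line, 1, 2);
    InsertIntoTargetTable(RecType, Line);   // invented helper; sets "In Progress" := TRUE on each record
    LineCount := LineCount + 1;
    IF LineCount >= NoRecsToCommit THEN BEGIN
      COMMIT;                               // committed records become visible to other users
      SLEEP(2000);                          // optional pause; tune by experiment
      LineCount := 0;
    END;
  UNTIL RecType = '99';                     // footer record ends the import
  TargetRec.SETRANGE("In Progress", TRUE);
  TargetRec.MODIFYALL("In Progress", FALSE);  // step 4: flag the import as complete
  COMMIT;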