Deleting over 1 million records

A client has a DB size of 17 GB. We are talking about Navision 2.01 here. The Item Ledger Entry table itself has over 4 million records in it. I wanted to delete the 4 million records and leave around 100,000, but the process is taking an extremely long time. I am using DELETEALL as shown in this code:

ILE.INIT;
ILE.SETCURRENTKEY(ILE."Entry No.");
ILE.SETFILTER(ILE."Entry No.",'%1..%2',1000000,2000000);
ILE.DELETEALL(FALSE);

It deletes around 3 to 4 records per second, give or take one record. If you do the math, deleting 4 million records this way is going to take a very looong time. I also tried a report that deletes the records one at a time; that is not much help either, at about 3 records per second.

Is there any way to delete the records faster? Has anyone else faced a similar issue? I believe there was one thread about this, but it had to do with Navision on SQL Server, while this installation is on a regular fdb. The database is on a Windows 2000 Server with 1 GB RAM, and the DB cache size is 1000000.

Any help or suggestion would be appreciated.

Regards,
Omair

I'm not sure, but it could be faster using SETRANGE instead of SETFILTER. BTW, ILE.INIT doesn't make much sense here; you should use ILE.RESET. Alex

Have you tried exporting the entries you want to preserve, then doing a DELETEALL without any filters, and then importing the exported entries again? Remember a backup, of course [:)]

Another way could be to use a loop that sets a filter on the first 10,000 records or so, deletes them with DELETEALL, does a COMMIT, and then starts the loop again. 4 million records are a lot to be kept in the commit cache. Alex
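The chunked-delete loop described above could look roughly like this in C/AL. This is only a sketch, not tested code; ILE is assumed to be a Record variable for table 32, ChunkSize an Integer, and I use FIND('-') since FINDFIRST does not exist in 2.01:

// Delete in chunks of 10,000 entries, committing after each chunk.
ChunkSize := 10000;
ILE.RESET;
ILE.SETCURRENTKEY("Entry No.");
ILE.SETRANGE("Entry No.", 1, 4000000); // the range you want gone
WHILE ILE.FIND('-') DO BEGIN
  // Narrow the filter to one chunk, starting at the first remaining entry.
  ILE.SETRANGE("Entry No.", ILE."Entry No.", ILE."Entry No." + ChunkSize - 1);
  ILE.DELETEALL(FALSE);
  COMMIT; // release the transaction before the next chunk
  // Restore the outer filter for the next pass.
  ILE.SETRANGE("Entry No.", 1, 4000000);
END;

Run it from a codeunit, not from a form, so the COMMITs are not wrapped in a larger transaction.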

Create a copy of table 32, move the entries you want to keep into that table, then do a DELETEALL on table 32 without any filters, then move the entries back. This is the fastest solution.
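A rough C/AL sketch of the copy-out/copy-back approach, untested and with assumed names: ILECopy is a Record variable pointing at a copy of table 32 (e.g. a custom table 50000 with identical fields), and the filter below stands in for whatever range you actually want to keep:

// Copy the entries to keep into the shadow table.
ILE.RESET;
ILE.SETRANGE("Entry No.", 4000001, 4100000); // the ~100,000 entries to keep
IF ILE.FIND('-') THEN
  REPEAT
    ILECopy.TRANSFERFIELDS(ILE);
    ILECopy.INSERT;
  UNTIL ILE.NEXT = 0;

// An unfiltered DELETEALL empties the whole table quickly.
ILE.RESET;
ILE.DELETEALL(FALSE);

// Then copy back from ILECopy the same way, and delete the shadow table.

The unfiltered DELETEALL is the key: with no filter set, the database can drop the table contents wholesale instead of removing records one by one.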

Hi, first of all there is no use in changing the key, as "Entry No." is the default key. I agree with the solution of deleting the records in small chunks, say 50,000 per chunk, then COMMIT and start again. WR Hari

It worked! 4 million plus records just vanished in less than a minute. I exported around 10,000 records using a dataport, then ran the codeunit with DELETEALL, and voila, the ILE was empty. Thanks guys.

By the way, the commit cache is not a memory cache that can be filled up with something, in the way that the object cache is. Its purpose is to batch or delay the writing of committed blocks to the database files, and it is flushed as appropriate by the database manager. So there is no need to worry about filling it through record update activity like in this example.