We have a batch job scheduled to run daily at 12 PM that deletes records from the AifMessageLog and AifDocumentLog tables. The criterion is records created more than 30 days ago. However, the job caused table locks because roughly 20 million records had to be deleted.
Here are the delete statements in the run() method:
delete_from msgLog where msgLog.createdDateTime <= DateTimeUtil::addDays(currentTime, -30);
delete_from docLog where docLog.createdDateTime <= DateTimeUtil::addDays(currentTime, -30);
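For context, a self-contained version of the run() method would also declare the table buffers and the current time; this is a sketch, and the variable names match the statements above but are otherwise assumptions:

```xpp
public void run()
{
    AifMessageLog  msgLog;
    AifDocumentLog docLog;
    // Cutoff is evaluated once, at job start
    utcDateTime    currentTime = DateTimeUtil::utcNow();

    // Set-based deletes; X++ falls back to row-by-row
    // if delete() is overridden on the table
    delete_from msgLog
        where msgLog.createdDateTime <= DateTimeUtil::addDays(currentTime, -30);
    delete_from docLog
        where docLog.createdDateTime <= DateTimeUtil::addDays(currentTime, -30);
}
```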
I proposed running the job more frequently, perhaps once every half hour.
We don’t want to delete them all; we keep records for 30 days, and any records created more than 30 days ago get deleted. The infrastructure team wants to reclaim some storage.
First of all, make sure you have an index on createdDateTime.
Nevertheless, if you create 20 million records every day, don’t you think that such aggressive logging has an impact on performance? I bet it does! Do you really have to log all these things? Wouldn’t you rather enable message logging only when you need to debug something?
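Beyond indexing createdDateTime, one way to avoid a long-running table lock is to delete in small committed batches instead of one giant transaction. A minimal X++ sketch, assuming the standard firstOnly1000 hint and a chunk size that is purely illustrative:

```xpp
public void run()
{
    AifMessageLog msgLog;
    utcDateTime   cutoff = DateTimeUtil::addDays(DateTimeUtil::utcNow(), -30);
    int           deleted;

    do
    {
        deleted = 0;

        // Each chunk is its own transaction, so locks are
        // held only briefly and lock escalation is less likely
        ttsBegin;
        while select forUpdate firstOnly1000 msgLog
            where msgLog.createdDateTime <= cutoff
        {
            msgLog.delete();
            deleted++;
        }
        ttsCommit;
    }
    while (deleted > 0);   // repeat until no old records remain
}
```

Row-by-row delete() is slower per record than delete_from, but the short transactions keep the table usable while the cleanup runs; the same loop would apply to AifDocumentLog.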
Thanks, Martin.
We do have a request from our QA to “evaluate database logging requests to ensure no performance issues will occur by adding the logging”.