When using the Data Crawler against a SQL Server database set up with full recovery, we end up with enormous transaction log files (the crawler creates between 10 and 120 transactions/second, depending on the load setting). At medium load, the log grows by about 500 MB/hour, versus roughly 10 MB/hour when the crawler is not running.

This would not be a problem if the daemon did not continuously restart automatically. I only need one table indexed (the item table, about 70,000 records), and it only gets updated a couple of times a week. As a result, I get these transactions 24 hours a day until a user remembers to stop the daemon (after somehow knowing that it has finished the table at least once, without sitting and watching it).

Any ideas? Current options:

1. Take the database out of full recovery (not going to happen).
2. Customize the crawler to remember that the first table has been indexed and stop instead of re-indexing it.
3. Run it on low load and live with the space impact (which will likely still be around 80 MB/hour).

Thanks!
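For what it's worth, pending a proper fix inside the crawler, option 2 can be approximated from the outside with a small watchdog script that polls the index size and stops the daemon once the table has been covered at least once. This is only a sketch under stated assumptions: `get_indexed_count` and `stop_daemon` are hypothetical hooks standing in for whatever status query and stop command the real crawler exposes, which are not specified here.

```python
import time

ITEM_TABLE_ROWS = 70_000  # size of the item table from the question


def crawl_complete(indexed_count: int, expected_rows: int) -> bool:
    """Return True once the crawler has covered the table at least once."""
    return indexed_count >= expected_rows


def watchdog(get_indexed_count, stop_daemon,
             expected_rows=ITEM_TABLE_ROWS, poll_seconds=60):
    """Poll the index size and stop the daemon after one full pass.

    get_indexed_count and stop_daemon are injected callables because the
    real crawler's status API and stop command are not known here; wire
    them to whatever your installation provides.
    """
    while not crawl_complete(get_indexed_count(), expected_rows):
        time.sleep(poll_seconds)
    stop_daemon()
```

Run as a scheduled task alongside the daemon, this would cap the log churn to a single indexing pass instead of a continuous restart loop.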