Importing a comma-delimited text file using a dataport

I have created a dataport to import records from a text file. The problem is that the file is huge, has many fields, and some lines are 'bad': any of the roughly 50 fields in a line may be in error. I can identify some bad lines using MESSAGE(Text000, RecCount), but this is a laborious process (if the 200th line is bad, the message shows 200 times and then the process terminates). What code do I need to add to this dataport so that it identifies the bad lines, writes them to a text file, and then skips only those bad lines? All other lines should be inserted as records in the table, giving a partial completion of the import. Any help is appreciated.

First: have you eliminated from the source text all characters that are used as field delimiters/field qualifiers? You can use Excel to do this. Could you use a 'special' character as the field delimiter, ~ or | for example? Secondly, you could read each line into one long text variable and then split it into fields yourself in the OnAfterImportRecord trigger.

The problem is not with reading the line; all the fields are populated properly. However, the record is not saved because a value in one of the fields fails validation. I can live with that, as long as I can identify which line(s) are in error out of, say, 6000 lines.

What you need to do is switch field validation off. You can do this fairly easily by changing the CallFieldValidate property from Yes to No. The property can be reached in the dataport's field list by pointing to a field and selecting Properties from the View menu.

I don't think I explained it well. Switching field validation off is a temporary solution. What is really required is to write the 'bad' lines to, say, c:\badlines.txt, import all the 'good' lines, then correct the bad lines and import those again. So I have two requirements in this dataport: 1. Code to identify the bad lines, which can then be written to c:\badlines.txt. 2. Continue processing the rest of the file. I hope I have explained it better here.

This is very difficult to achieve. You would need to import into a different table with the same structure but without any triggers in it. Then you need a codeunit that tries, line by line, to move (and validate) the data into the final target table. You call the codeunit with:

IF NOT CU.RUN(Buffer) THEN
  // Log error file
ELSE
  // Do something else
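A minimal C/AL sketch of this buffer-table pattern, assuming a buffer table "Import Buffer" and a "Customer" target (all table, field, and variable names here are hypothetical, not from the thread):

```
// Codeunit 50000 "Move Import Line" - OnRun(VAR ImportBuffer : Record "Import Buffer")
// Moves one buffer record into the target table with full validation.
// Any validation error makes RUN return FALSE instead of stopping the batch.
Customer.INIT;
Customer.VALIDATE("No.", ImportBuffer."No.");
Customer.VALIDATE(Name, ImportBuffer.Name);
// ... VALIDATE the remaining fields ...
Customer.INSERT(TRUE);

// Calling code, e.g. in a processing report, after the raw import has filled the buffer:
COMMIT;  // IF CODEUNIT.RUN requires no open write transaction
IF ImportBuffer.FINDSET THEN
  REPEAT
    IF NOT MoveImportLine.RUN(ImportBuffer) THEN BEGIN
      fileBadLines.WRITE(ImportBuffer."Raw Line");  // log the failing line
      COMMIT;
    END;
  UNTIL ImportBuffer.NEXT = 0;
```

The point of the extra table is that the raw import always succeeds; all validation is deferred to the codeunit, where a failure is just a FALSE return value you can log.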

Here's a guess. Save the 'next line' using the following code in the OnBeforeImportRecord trigger:

intSavePos := CurrFile.POS;
boolSaveTextMode := CurrFile.TEXTMODE(TRUE);
CurrFile.READ(txtCurrentLine);
CurrFile.TEXTMODE(boolSaveTextMode);
CurrFile.SEEK(intSavePos);

You say that you 'are able to identify bad lines' (presumably in the OnAfterImportRecord trigger). When you identify a 'bad' record, add the following code to the OnAfterImportRecord trigger:

IF ThisRecIsBad THEN BEGIN
  fileBadLines.WRITE(txtCurrentLine);
  CurrDataport.SKIP;
END;

Of course, you'll have to open/create fileBadLines in the OnPreDataItem trigger and close it in OnPostDataItem…
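For completeness, a sketch of the open/close handling the post alludes to, using the c:\badlines.txt path from the thread (the variable name and append behaviour are my assumptions):

```
// Dataport - OnPreDataItem: open the log file for writing, creating it if needed
fileBadLines.TEXTMODE(TRUE);
fileBadLines.WRITEMODE(TRUE);
IF NOT fileBadLines.OPEN('c:\badlines.txt') THEN
  fileBadLines.CREATE('c:\badlines.txt');
fileBadLines.SEEK(fileBadLines.LEN);  // append to an existing log rather than overwrite

// Dataport - OnPostDataItem: close the log file
fileBadLines.CLOSE;
```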

Thank you, all. I am quite close to the solution, but it is not yet what I wanted. Is there a way to check which line will result in a bad record that cannot be saved? One way is to check it field by field, i.e. Rec.VALIDATE(Field, Value), but this requires writing a lot of code, or creating a function to check each field. How do we validate the record without going through each field's validation individually?
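One generic alternative, assuming your NAV version supports RecordRef/FieldRef, is to loop over the fields and re-VALIDATE each one, so no per-field code is needed. Run it from a codeunit via IF NOT CU.RUN(...) so a validation failure does not stop the batch:

```
// Sketch: generic re-validation of every normal field in a record.
// RecRef : RecordRef; FldRef : FieldRef; i : Integer.
RecRef.GETTABLE(Rec);
FOR i := 1 TO RecRef.FIELDCOUNT DO BEGIN
  FldRef := RecRef.FIELDINDEX(i);
  IF FldRef.CLASS = 'Normal' THEN  // skip FlowFields/FlowFilters
    FldRef.VALIDATE;               // re-runs the field's OnValidate trigger
END;
```

Note this re-runs each OnValidate in field-number order, which may differ from the order a user (or the dataport) would assign the values in, so treat it as a sketch rather than an exact equivalent of per-field VALIDATE calls.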

How about switching off AutoSave and doing this in the OnAfterImportRecord trigger:

IF NOT INSERT THEN
  HandleTheError;

Haven't tested it, but it seems logical. //Pelle
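A sketch of that suggestion combined with the logging idea from earlier in the thread (AutoSave=No on the dataitem; txtCurrentLine is the saved raw line from the OnBeforeImportRecord code above). Note that IF NOT INSERT only catches insert-time failures such as a duplicate primary key; an ERROR raised inside a field's OnValidate trigger would still abort unless trapped via CODEUNIT.RUN:

```
// Dataport - OnAfterImportRecord (with the dataitem's AutoSave property set to No)
IF NOT Rec.INSERT(TRUE) THEN BEGIN   // TRUE also runs the table's OnInsert trigger
  fileBadLines.WRITE(txtCurrentLine);
  CurrDataport.SKIP;
END;
```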