I have a DTS import that pulls in 50-70 MB of data from a number of
flat text files (CSV).
I have pre-processing that logs an error for missing files, and when DTS
detects fewer fields than the transformation uses, we log an error for
that as well.

What I don't have is a way to detect too many columns in the text file.
We need a good way to catch bad records that have too many
comma-separated columns, and to catch the case where the source export
changes format and adds extra columns in the middle or at the end,
whether on purpose or by accident.

For example, the source file has 5 fields, and the transformation puts
all 5 into a destination table (the table may have more columns that
aren't used). If the text file has only 4 columns, DTS raises an error
that one of the mapped columns is missing. If it has 6 columns, we want
to flag that as an error too.

We tried checking the Dynamic Properties in an ActiveX Script task, with
no luck, and you can't run SQL against a text file source.
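
The closest I've come up with is to scan the file myself in an ActiveX
Script task before the transformation runs, something like the rough
sketch below. The path and EXPECTED_FIELDS are placeholders for our real
values, and it assumes a plain comma delimiter with no quoted fields
containing embedded commas. I don't know if this is the right way to go:

' Rough sketch of a pre-check in a DTS ActiveX Script task. Assumes a
' comma delimiter with no quoted fields containing embedded commas; the
' path and EXPECTED_FIELDS are placeholders for our real values.
Option Explicit

Function Main()
    Const EXPECTED_FIELDS = 5
    Dim fso, ts, lineText

    Set fso = CreateObject("Scripting.FileSystemObject")
    Set ts = fso.OpenTextFile("C:\import\source.csv", 1) ' 1 = ForReading

    Do While Not ts.AtEndOfStream
        lineText = ts.ReadLine
        ' Split returns a zero-based array, so UBound + 1 is the count.
        ' DTS already catches too few fields, so only check for extras.
        If UBound(Split(lineText, ",")) + 1 > EXPECTED_FIELDS Then
            ts.Close
            ' Fail the task so the package stops before loading bad data.
            Main = DTSTaskExecResult_Failure
            Exit Function
        End If
    Loop

    ts.Close
    Main = DTSTaskExecResult_Success
End Function

Scanning 50-70 MB line by line this way adds some run time, so checking
just the first line might be enough to catch a format change, though it
wouldn't catch individual bad records.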

Any ideas?

Thanks,
- David

