Why don't you read the file line by line and split each line on the pipe, 
then yield return the result? 
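Something along these lines — a minimal sketch, assuming a pipe is never part of a field value (the path parameter and class name are just placeholders):

```csharp
using System.Collections.Generic;
using System.IO;

static class PipeFileReader
{
    // Lazily yields each line of the file, split on '|'.
    // No record class needed; each row is just a string array.
    public static IEnumerable<string[]> ReadRows(string path)
    {
        using (var reader = new StreamReader(path))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                yield return line.Split('|');
            }
        }
    }
}
```

Because it's an iterator, only one line is held in memory at a time, which matters for large files.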

On Wednesday, July 18, 2012 at 12:22:43 PM UTC+2, Daventry wrote:
>
> Hi,
>  
> We've got to load large pipe-delimited files. When loading these into a 
> SQL Server DB by using Rhino ETL (relying upon FileHelpers), is it 
> mandatory to provide a record class?
> We have to load files into different tables, each with dozens of columns - 
> writing the record classes by hand could take a whole day. I suppose we 
> could write a small tool to generate the record classes from the SQL 
> Server tables.
>  
> Another approach would be to write an IDataReader wrapper around a 
> FileStream and then pass it on to a SqlBulkCopy.
>  
> SqlBulkCopy does require column mappings as well but it does allow column 
> ordinals - that's easy.
>  
> Any ideas/suggestions?
>  
> Thanks.
>
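For the SqlBulkCopy route in the quoted question, the ordinal mappings really are easy. A sketch under the assumption that source and destination columns line up one-to-one (connection string and table name are placeholders):

```csharp
using System.Data;
using System.Data.SqlClient;

static class BulkLoader
{
    public static void Load(DataTable rows, string connectionString, string tableName)
    {
        using (var bulkCopy = new SqlBulkCopy(connectionString))
        {
            bulkCopy.DestinationTableName = tableName;

            // Map source column i to destination column i by ordinal,
            // so no record class or column names are required.
            for (int i = 0; i < rows.Columns.Count; i++)
            {
                bulkCopy.ColumnMappings.Add(i, i);
            }

            bulkCopy.WriteToServer(rows);
        }
    }
}
```

WriteToServer also accepts an IDataReader, so the same mapping loop works with the streaming wrapper approach instead of a DataTable.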

-- 
You received this message because you are subscribed to the Google Groups 
"Rhino Tools Dev" group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/rhino-tools-dev/-/ZyfdkVquFYgJ.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/rhino-tools-dev?hl=en.
