A parsing routine is usually the fastest. The low-level file functions
(don't forget to use _MLINE), FILETOSTR(), or APPEND MEMO followed by
parsing each line all come out about even in processing time and code,
unless the file is really extreme (10 GB, million-byte lines, funny
characters...).
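To make the FILETOSTR() route concrete, here is a minimal sketch. The file name, cursor name, and column layout are all hypothetical; it assumes every line has exactly three pipe-delimited columns, with the third one longer than 254 characters:

```foxpro
* Sketch: read the whole file, split into lines, then split each
* line on the pipe delimiter. Field names and widths are assumptions.
CREATE CURSOR csvData (cField1 C(50), cField2 C(50), mBigField M)

LOCAL lcText, lnLines, lnI
LOCAL ARRAY laLines[1], laCols[1]
lcText  = FILETOSTR("import.txt")
lnLines = ALINES(laLines, lcText)        && splits on CR, LF, or CRLF
FOR lnI = 1 TO lnLines
    * Flag 1 trims each element; "|" is the parse character.
    ALINES(laCols, laLines[lnI], 1, "|")
    INSERT INTO csvData VALUES (laCols[1], laCols[2], laCols[3])
ENDFOR
```

The memo field takes the oversized column without the 254-character limit that trips up APPEND FROM ... DELIMITED.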

Whil ran into a situation where long fields and an impossible number
of columns made that technique impractical, and he came up with an
innovative solution: import the data into SQLite, then use ODBC from
VFP to manipulate the data from there. He published his whitepaper
here: http://hentzenwerke.com/catalog/sqlite2gb.htm
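For the flavor of that approach (the whitepaper has the details), a hedged sketch: assuming the text file has already been loaded into a SQLite database (e.g. with sqlite3's .import command) and that a SQLite ODBC driver is installed, VFP can pull it back through SQL pass-through. The driver string, database path, and table name below are all illustrative:

```foxpro
* Sketch only: connect to a SQLite database via ODBC and pull the
* imported table into a VFP cursor. All names are assumptions.
LOCAL lnHandle
lnHandle = SQLSTRINGCONNECT("Driver=SQLite3 ODBC Driver;" + ;
    "Database=c:\temp\import.db")
IF lnHandle > 0
    SQLEXEC(lnHandle, "SELECT * FROM import_table", "csvData")
    SQLDISCONNECT(lnHandle)
ENDIF
```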

[Disclaimer: I tech-edited this, but make no money from promoting it.]

On Thu, Apr 20, 2017 at 10:58 AM, Matt Wiedeman
<[email protected]> wrote:
> Hello everyone,
>
> I need to set up a job to import a pipe-delimited text file. This is easy
> enough, but one of the fields is larger than 254 characters. If I use a memo
> field, it does not import that field. I started to set up a routine to step
> through each character and store the fields manually, but I would rather not
> do it that way.
>
> Does anyone have a function or tip they can share to resolve this situation?
>
[excessive quoting removed by server]

_______________________________________________
Post Messages to: [email protected]
Subscription Maintenance: http://mail.leafe.com/mailman/listinfo/profox
OT-free version of this list: http://mail.leafe.com/mailman/listinfo/profoxtech
Searchable Archive: http://leafe.com/archives/search/profox
This message: 
http://leafe.com/archives/byMID/profox/cacw6n4trm9o-x9zgzxdcwsgecwnbfoducf0empuq+qbamgk...@mail.gmail.com
** All postings, unless explicitly stated otherwise, are the opinions of the 
author, and do not constitute legal or medical advice. This statement is added 
to the messages for those lawyers who are too stupid to see the obvious.