It might be best to use Perl for this processing, since it is better
suited to line-by-line work on large text files like these.
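
Something along these lines would do it (an untested sketch; the file
names, the tab-delimited layout, and the filter condition are placeholder
assumptions to adjust to your data). Reading one line at a time keeps
memory use flat even for files of ~10^6 lines:

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical input/output names; replace with your own files.
my $infile  = 'data.txt';
my $outfile = 'data_out.txt';

open my $in,  '<', $infile  or die "Cannot open $infile: $!";
open my $out, '>', $outfile or die "Cannot open $outfile: $!";

# Stream one line at a time so the whole 300/400 MB file is never in memory.
while (my $line = <$in>) {
    chomp $line;
    my @fields = split /\t/, $line;           # assumes tab-delimited columns
    next if @fields == 0 || $fields[0] eq ''; # example filter: skip rows with an empty first field
    print {$out} join("\t", @fields), "\n";   # write the (possibly modified) line
}

close $in;
close $out;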

On Wed, Jul 9, 2008 at 12:18 PM, Paolo Sonego <[EMAIL PROTECTED]> wrote:
> I apologize for giving the wrong information again ...  :-[
> The number of files is not a problem (30/40). The real deal is that some of
> my files have ~10^6 lines (file size ~ 300/400 MB)  :'(
> Thanks again for your help and advice!
>
> Regards,
> Paolo
>
>
> jim holtman wrote:
>>
>> How much time is it taking on the files, and how many files do you have
>> to process?  I tried it with your data duplicated so that I had 57K
>> lines, and it took 27 seconds to process.  How much faster do you want?
>
>



-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem you are trying to solve?
