Hi,

   Recently I was dealing with large CSV (comma-separated value)
   files, around 500 MB in size.

   I was using Perl to parse these files: reading one and duplicating
   it through a CSV module took around 40 minutes. Python's csv module
   took about an hour. I suspect that even hand-written C code that
   opened and parsed the file would have taken a long time.
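
   In case it helps, my Python attempt was essentially the following
   (a rough sketch; the file names are placeholders):

       import csv

       # Read the source file row by row through the csv module and
       # write each row back out, duplicating the file.
       with open("input.csv", newline="") as src, \
            open("copy.csv", "w", newline="") as dst:
           writer = csv.writer(dst)
           for row in csv.reader(src):
               writer.writerow(row)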

   However, when I used MySQL to create a database from the same file,
   the entire load took around 2 minutes. I would like to know how this
   is possible - is it threading, memory mapping, or just a better
   algorithm?
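
   For comparison, a MySQL load of this kind can be done with a single
   LOAD DATA statement; driven from Python it would look roughly like
   the sketch below (the table name, file name, connection details,
   and the local_infile option are all assumptions, not necessarily
   what I ran):

       import MySQLdb

       # Hand the whole CSV file to the MySQL server in one
       # statement; the server does the parsing and row insertion.
       conn = MySQLdb.connect(host="localhost", user="me",
                              passwd="secret", db="test",
                              local_infile=1)
       cur = conn.cursor()
       cur.execute(
           "LOAD DATA LOCAL INFILE 'input.csv' INTO TABLE mytable "
           "FIELDS TERMINATED BY ',' "
           "LINES TERMINATED BY '\\n'"
       )
       conn.commit()
       conn.close()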

   I would be thankful to anyone who can give me a good answer to this
   question, as I can't figure it out myself.

Anindya.
