On Fri, Mar 12, 2010 at 11:23 AM, Paul Rubin <no.em...@nospam.invalid> wrote:
> "D'Arcy J.M. Cain" <da...@druid.net> writes:
>> Just curious, what database were you using that wouldn't keep up with
>> you?  I use PostgreSQL and would never consider going back to flat
>> files.
>
> Try making a file with a billion or so names and addresses, then
> compare the speed of inserting that many rows into a postgres table
> against the speed of copying the file.
>

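Paul's comparison can be sketched in a few lines, with the stdlib's sqlite3 standing in for a client/server database (an assumption for illustration only — PostgreSQL over a socket pays even more per row): each INSERT pays parsing, index-maintenance, and transaction costs that a raw sequential file write never sees.

```python
# Hypothetical sketch: bulk file write vs. per-row inserts into an
# indexed table, using sqlite3 as a stand-in for a real database.
import os
import sqlite3
import tempfile
import time

rows = [("name%d" % i, "addr%d" % i) for i in range(10000)]

# Flat-file path: one buffered sequential write.
t0 = time.time()
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    for name, addr in rows:
        f.write("%s\t%s\n" % (name, addr))
flat_time = time.time() - t0
os.unlink(f.name)

# Database path: per-row inserts into a table with an index to maintain.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, addr TEXT)")
conn.execute("CREATE INDEX idx_name ON people (name)")
t0 = time.time()
for row in rows:
    conn.execute("INSERT INTO people VALUES (?, ?)", row)
conn.commit()
db_time = time.time() - t0

count = conn.execute("SELECT COUNT(*) FROM people").fetchone()[0]
print("flat file: %.3fs, database: %.3fs, rows: %d"
      % (flat_time, db_time, count))
```

(For fairness, PostgreSQL's COPY command is the closer analogue to the file write, and it is much faster than row-at-a-time INSERTs for exactly these reasons.)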
Also consider how much work it is to partition data from flat files
versus PostgreSQL tables.

>> The only thing I can think of that might make flat files faster is
>> that flat files are buffered, whereas PG guarantees that your
>> information is written to disk before returning.
>
> Don't forget all the shadow page operations and the index operations,
> and that a lot of these operations require reading as well as writing
> remote parts of the disk, so buffering doesn't help avoid every disk
> seek.
>

Plus the fact that all your other database operations slow down under the load.
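The durability cost in the quoted point can be sketched directly: a buffered append versus forcing each record to the platter with fsync, which is the kind of wait a durable commit implies. This is a hypothetical micro-benchmark, not a claim about PostgreSQL's exact write path (real commits batch via the WAL and group commit):

```python
# Hypothetical sketch: buffered appends vs. fsync-per-record writes.
import os
import tempfile
import time

def write_records(path, n, durable):
    """Append n records; if durable, force each one to disk first."""
    with open(path, "w") as f:
        for i in range(n):
            f.write("record %d\n" % i)
            if durable:
                f.flush()
                os.fsync(f.fileno())  # wait for the disk, like a commit

with tempfile.TemporaryDirectory() as d:
    for durable in (False, True):
        path = os.path.join(d, "out.txt")
        t0 = time.time()
        write_records(path, 1000, durable)
        print("durable=%s: %.3fs" % (durable, time.time() - t0))
```

On spinning disks the durable run is typically orders of magnitude slower, which is most of the "flat files are faster" effect — and it disappears the moment you need the durability anyway.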

-- 
Jonathan Gardner
jgard...@jonathangardner.net
-- 
http://mail.python.org/mailman/listinfo/python-list