PFC wrote:

I was doing some testing comparing "insert" to "select into". I inserted 100 000 rows (with 8 column values) into a table, which took 14 seconds, compared to a select into, which took 0.8 seconds. (FYI, the inserts were batched, autocommit was turned off, and it all happened on the local machine.)
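
(For concreteness, the two approaches being compared look roughly like this; "items", "items_copy" and the column names are placeholders, not the actual schema:)

    -- Row-by-row: this prepared INSERT is executed 100 000 times,
    -- once per row, with autocommit off.
    INSERT INTO items_copy (c1, c2, c3, c4, c5, c6, c7, c8)
    VALUES ($1, $2, $3, $4, $5, $6, $7, $8);

    -- Set-based: one statement that creates a new table and fills it
    -- with all 100 000 rows from an existing table in a single pass.
    SELECT c1, c2, c3, c4, c5, c6, c7, c8
    INTO items_copy2
    FROM items;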

    Did you use prepared statements?
    Did you use INSERT INTO ... VALUES () with a long list of values, or just 100K insert statements?
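
(A sketch of the difference being asked about, using a made-up two-column table t; the multi-row VALUES form is only available on newer PostgreSQL versions:)

    -- One statement per row: 100 000 separate statements, each one
    -- parsed, planned and sent to the server individually.
    INSERT INTO t (c1, c2) VALUES (1, 'a');
    INSERT INTO t (c1, c2) VALUES (2, 'b');
    -- ... 99 998 more ...

    -- One statement carrying many rows: far fewer round trips and far
    -- less per-statement overhead.
    INSERT INTO t (c1, c2) VALUES
        (1, 'a'),
        (2, 'b'),
        (3, 'c');   -- and so on, up to a few thousand rows per statement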

It was prepared statements, and I tried it both batched and non-batched (not much difference on a local machine).
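
(At the SQL level, a prepared statement amounts to roughly the following; the client library does the same thing through its own API, and the names here are again placeholders:)

    -- Parse and plan once...
    PREPARE ins (int, text) AS
        INSERT INTO t (c1, c2) VALUES ($1, $2);

    -- ...then execute once per row with new parameter values; a client
    -- library's "batch" mode mostly just ships many of these per round trip.
    EXECUTE ins (1, 'a');
    EXECUTE ins (2, 'b');
    -- ... and so on, once per row ...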

It's the time to parse statements, plan, and execute; the round trips with the client; context switches; the time for your client library to escape and encode the data and for postgres to decode it; etc. In a word: OVERHEAD.

I know there is some overhead, but that much when running it batched...?

    By the way, which language and client library are you using?

FYI: 14 s / 100k = 140 microseconds per individual SQL query. That ain't slow at all.

Unfortunately it's not fast enough; it needs to be done in no more than 1-2 seconds (and in production it will be maybe 20-50 columns of data, perhaps divided over 5-10 tables). Additionally, it needs to scale to perhaps three times as many columns and 2-3 times as many rows in some situations, still within 1 second. Furthermore, it needs to allow for about 20-50 clients reading much of that data before the next batch of data arrives.

I know the production machine is going to be much faster than the one I am testing with, but I need to make sure the solution scales well.


regards

thomas
