Re: [HACKERS] Libpq enhancement
On Tue, Jun 21, 2011 at 3:55 PM, Merlin Moncure <mmonc...@gmail.com> wrote:
>> For update, it's a bit more complex - we don't have a replace into
>> operator...
>
> Actually, we do. 9.1 supports data-modifying CTEs, around which it's
> possible to rig a perfectly reasonable upsert... barring that, you
> could trivially do something similar in a hand-rolled backend upsert
> function that takes a row or a set of rows (fed in as a composite
> array).

I don't believe that any of the solutions we have today are guaranteed
to behave correctly in the face of concurrent activity. Because of the
way snapshot isolation works, you can try to update an existing record,
find that there isn't one, and then fail when you go to insert because
some other backend has meanwhile inserted one that isn't visible to
your snapshot. Doing the operations in the other order is no better.
I'm not saying this is the biggest problem in the entire world, but I
do think it's a non-imaginary problem.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
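[Editor's note: the race Robert describes is why the classic upsert example in the PostgreSQL documentation retries in a loop, trapping the unique-key failure. A sketch along those lines follows; the table db(a, b) and the function name are illustrative.]

```sql
CREATE TABLE db (a INT PRIMARY KEY, b TEXT);

CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS void AS
$$
BEGIN
    LOOP
        -- First try to update an existing row.
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- No row there: try to insert.  If another backend inserts
        -- the same key concurrently, we get a unique_violation and
        -- loop back to retry the UPDATE.
        BEGIN
            INSERT INTO db (a, b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- Do nothing, and loop to try the UPDATE again.
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

This handles the insert/insert race Robert mentions, but it still is not a true atomic upsert: for instance, against a concurrent stream of inserts and deletes it can in principle loop indefinitely.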
Re: [HACKERS] Libpq enhancement
On Sun, Jun 19, 2011 at 8:08 PM, Robert Haas <robertmh...@gmail.com> wrote:
> On Sun, Jun 19, 2011 at 11:04 AM, Jeff Shanab <jsha...@smartwire.com> wrote:
>> I am wondering if I am missing something obvious. If not, I have a
>> suggestion for plpgsql. Stored procedures can accept rows. Libpq can
>> receive rows (PGresult). Wouldn't it be a great interface if PGresult
>> were "bi-directional"? Create a result set on the client, then call
>> the database with a command.
>
> For insert, we have something like this already - this is what copy
> is for.

'copy' is a *bulk* insert statement -- it's great for the very specific
case where you are dumbly stuffing data into the database, especially
if performance is critical and sane error handling is not. It is not
suitable for anything else: feeding data into functions,
update/upsert/delete, insert with join, pre-/post-processing, etc.
Also, copy runs through libpq textually at the line level, not at the
field level like the rest of libpq.

> For update, it's a bit more complex - we don't have a replace into
> operator...

Actually, we do. 9.1 supports data-modifying CTEs, around which it's
possible to rig a perfectly reasonable upsert... barring that, you
could trivially do something similar in a hand-rolled backend upsert
function that takes a row or a set of rows (fed in as a composite
array). The point being, the server has the necessary features -- it's
the client that's the (solved) problem.

At the risk of sounding 'broken record repetitive', let me echo
Andrew's comment upthread that libpqtypes solves the OP's problem
completely, in a very elegant way. The basic M.O. is to:

1. register the type you are using for transport (this can be either
   the table or a composite type)
2. for each record you want to send, PQputf that record, and if you
   are sending more than one, PQputf the record into its array
3. PQparamExec() a query that might look like one of:

    /* straight up insert */
    PQparamExec(conn, param,
        "INSERT INTO foo SELECT (unnest(%foo[])).*", resfmt);

    /* send to function */
    PQparamExec(conn, param, "SELECT do_stuff(%foo[])", resfmt);

    /* upsert -- pre-9.1 this could be done in a plpgsql loop, etc. */
    WITH foos AS (SELECT (unnest(%foo[])).*),
         updated AS (UPDATE foo SET a = foos.a ... FROM foos
                     WHERE foo.id = foos.id RETURNING foo.id)
    INSERT INTO foo
    SELECT foos.* FROM foos
    LEFT JOIN updated USING (id)
    WHERE updated.id IS NULL;

Basically, the trick is to exploit the server's composite array type
features on the client side to do exactly what the OP is gunning for.
You can send anything from simple arrays to entire complex nested
structures that way -- although the complex stuff would typically go to
a function. Performance-wise, it's faster than traditional query
methods (everything is sent in binary) but slower than 'copy'.

merlin
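[Editor's note: Merlin's "hand-rolled backend upsert function" taking a set of rows as a composite array could be sketched roughly as below. The table foo(id, a) and the function name are assumptions for the sketch, and it inherits the concurrency caveat Robert raises elsewhere in the thread.]

```sql
-- Sketch only: assumes a table foo(id int primary key, a text).
CREATE FUNCTION upsert_foo(items foo[]) RETURNS void AS
$$
BEGIN
    -- Update the rows that already exist...
    UPDATE foo SET a = r.a
    FROM unnest(items) AS r
    WHERE foo.id = r.id;

    -- ...then insert the ones that don't.
    INSERT INTO foo
    SELECT r.*
    FROM unnest(items) AS r
    WHERE NOT EXISTS (SELECT 1 FROM foo f WHERE f.id = r.id);
END;
$$ LANGUAGE plpgsql;
```

From libpqtypes the composite array would then be fed in as the %foo[] parameter, e.g. PQparamExec(conn, param, "SELECT upsert_foo(%foo[])", resfmt);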
Re: [HACKERS] Libpq enhancement
On 6/19/2011 11:04 AM, Jeff Shanab wrote:
> I am wondering if I am missing something obvious. If not, I have a
> suggestion for plpgsql. Stored procedures can accept rows. Libpq can
> receive rows (PGresult). Wouldn't it be a great interface if PGresult
> were "bi-directional"? Create a result set on the client, then call
> the database with a command. Perhaps...
>
> PQinsert(PGresult, "schema.table");   // iterate thru rows, inserting
> PQupdate(PGresult, "schema.table");   // iterate thru rows, updating
> PQexec(connection, "schema.function", PGresult);  // iterate thru
>     rows, passing each row as an arg to a stored procedure

Have you looked into libpqtypes? It allows you to pack nested
structures/arrays and pass them as query/function parameters.

http://pgfoundry.org/projects/libpqtypes/
http://libpqtypes.esilo.com/ (docs)

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
Re: [HACKERS] Libpq enhancement
Hey Jeff,

2011/6/19 Jeff Shanab <jsha...@smartwire.com>
> I am wondering if I am missing something obvious. If not, I have a
> suggestion for plpgsql. Stored procedures can accept rows. Libpq can
> receive rows (PGresult). Wouldn't it be a great interface if PGresult
> were "bi-directional"? Create a result set on the client, then call
> the database with a command. Perhaps...
>
> PQinsert(PGresult, "schema.table");   // iterate thru rows, inserting
> PQupdate(PGresult, "schema.table");   // iterate thru rows, updating

IMO, mapping C functions to SQL operators is a bad idea. If I
understood you correctly, you want to turn libpq into an ORM. But
without implementing something like C++ virtual functions on the
_backend_ side, that is either impossible or ugly.

--
// Dmitriy.
Re: [HACKERS] Libpq enhancement
On Sun, Jun 19, 2011 at 11:04 AM, Jeff Shanab <jsha...@smartwire.com> wrote:
> I am wondering if I am missing something obvious. If not, I have a
> suggestion for plpgsql. Stored procedures can accept rows. Libpq can
> receive rows (PGresult). Wouldn't it be a great interface if PGresult
> were "bi-directional"? Create a result set on the client, then call
> the database with a command.

For insert, we have something like this already - this is what copy is
for. For update, it's a bit more complex - we don't have a replace into
operator...

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
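[Editor's note: for reference, the copy path Robert mentions is a client-driven bulk load; a minimal illustration, with a made-up table and tab-delimited data:]

```sql
COPY foo (id, a) FROM STDIN;
1	one
2	two
\.
```

From libpq this corresponds to issuing the COPY and then streaming data with PQputCopyData()/PQputCopyEnd(), which is what makes it line-oriented rather than field-oriented as Merlin notes elsewhere in the thread.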
Re: [HACKERS] libpq enhancement for multi-process application
Sébastien Bonnet wrote:
> Hi all, and mainly PostgreSQL developers. I've been reading old posts
> about the libpq interface as it relates to multi-process applications.
> The main problem being that after a fork, each process has a DB
> connection -- actually the same one. If one closes it, the other
> remains in an unknown or unstable state.

Uhm... I always thought that sharing the same socket between processes
was simply wrong. My multi-process daemon works like Apache, with a
pool of processes, each with its own connection to the DB. The
connection is only opened AFTER the fork, and it stays open for as long
as the process lives, just to avoid making a new connection for each
accept.

Bye!