Re: [HACKERS] multithreading in Batch/pipelining mode for libpq
On 21 April 2017 at 21:31, Ilya Roublev wrote:

> What I need is to make a huge amount of inserts

This may be a silly question, but I assume you've already considered using server-side COPY? That's the most efficient way to load a lot of data currently.

--
greg

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] multithreading in Batch/pipelining mode for libpq
On 2017-04-22 09:14:50 +0800, Craig Ringer wrote:

> > 2) if the answer to the previous question is negative, is it possible to
> > send asynchronous queries in one thread while reading results in another
> > thread?
>
> Not right now. libpq's state tracking wouldn't cope.
>
> I imagine it could be modified to work with some significant refactoring.
> You'd need to track state with a shared fifo of some kind, where dispatch
> puts queries on the fifo as it sends them and receive pops them from it.

FWIW, I think it'd be a *SERIOUSLY* bad idea to try to make individual PGconn interactions threadsafe. It'd imply significant overhead in a lot of situations, and programming it would become a lot more complicated (since you'd need to synchronize command submission between threads). For almost all cases it's better either to use multiple connections or to use a coarse-grained mutex around all of libpq.

- Andres
Re: [HACKERS] multithreading in Batch/pipelining mode for libpq
On 22 Apr. 2017 6:04 am, "Ilya Roublev" wrote:

> 1) is it possible technically (possibly by changing some part of libpq
> code) to ignore results (especially for this sort of queries like insert),
> processing somehow separately the situation when some error occurs?

There is a patch out there to allow libpq result processing by callback, I think. It might be roughly what you want.

> 2) if the answer to the previous question is negative, is it possible to
> send asynchronous queries in one thread while reading results in another
> thread?

Not right now. libpq's state tracking wouldn't cope.

I imagine it could be modified to work with some significant refactoring. You'd need to track state with a shared fifo of some kind, where dispatch puts queries on the fifo as it sends them and receive pops them from it.

I started on that for the batch mode stuff, but it's not in any way thread safe there: without locking, the info in PGconn very quickly becomes inconsistent, the number of queries sent does not correspond to the number of results to be read, etc.

> So I'd like to know at first: is it possible at all (possibly by some
> changes to be made in libpq)? Sorry if my idea sounds rather naive. And
> thanks for your answer and advice.

Yeah, it's possible. The protocol can handle it; it's just libpq that can't.