On Jul 16, 2014 7:05 AM, "Alvaro Herrera" <alvhe...@2ndquadrant.com> wrote:
>
> Tom Lane wrote:
> > Dilip kumar <dilip.ku...@huawei.com> writes:
> > > On 15 July 2014 19:01, Magnus Hagander Wrote,
> > >> I am late to this game, but the first thing to my mind was - do we
> > >> really need the whole forking/threading thing on the client at all?
> >
> > > Thanks for the review, I understand your point, but I think if we
> > > do this directly with independent connections,
> > > it's difficult to divide the jobs equally between multiple independent
> > > connections.
> >
> > That argument seems like complete nonsense.  You're confusing work
> > allocation strategy with the implementation technology for the multiple
> > working threads.  I see no reason why a good allocation strategy
> > couldn't work with either approach; indeed, I think it would likely be
> > easier to do some things *without* client-side physical parallelism,
> > because that makes it much simpler to handle feedback between the
> > results of different operational threads.
>
> So you would have one initial connection, which generates a task list;
> then open N libpq connections.  Launch one vacuum on each, and then
> sleep on select() on the N sockets.  Whenever one returns read-ready,
> that vacuum is done and we send it another item from the task list.
> Repeat until the task list is empty.  No need to fork anything.
>

Yeah, those are exactly my points. I think it would be significantly
simpler to do it that way, rather than forking and threading. And also
easier to make portable...

(and as an optimization on Alvaro's suggestion, you can of course reuse the
initial connection as one of the workers, as long as you get the full list
of tasks from it up front, which I think you do anyway in order to sort
the tasks...)

/Magnus
