* Robert Haas (robertmh...@gmail.com) wrote:
On Fri, Jan 9, 2015 at 12:24 PM, Stephen Frost sfr...@snowman.net wrote:
Yeah, we also need to consider the I/O side of this, which will
definitely be tricky. There are I/O systems out there which are faster
than a single CPU and ones where a
Amit,
* Amit Kapila (amit.kapil...@gmail.com) wrote:
On Sun, Jan 11, 2015 at 9:09 AM, Robert Haas robertmh...@gmail.com wrote:
I don't think you should be switching queues. The tuples should be
sent to the tuple queue, and errors and notices to the error queue.
Agreed.
To achieve what
* Robert Haas (robertmh...@gmail.com) wrote:
On Thu, Jan 8, 2015 at 2:46 PM, Stephen Frost sfr...@snowman.net wrote:
Yeah, if we come up with a plan for X workers and end up not being able
to spawn that many then I could see that being worth a warning or notice
or something. Not sure what
On Sun, Jan 11, 2015 at 10:37 AM, Jim Nasby jim.na...@bluetreble.com wrote:
On 1/10/15, 7:11 AM, Michael Paquier wrote:
If we had an independent transaction coordinator then I agree with you
Kevin. I think Robert is proposing that if we are controlling one of the
nodes that's participating as
Hi,
I want to know what kind of hash function Postgres is currently using. Can
someone please explain the algorithm Postgres uses for the hash
function in the hash join?
Thanks,
K.Ravikiran
Andreas Karlsson andr...@proxel.se writes:
On 01/11/2015 02:36 AM, Andres Freund wrote:
b) I'm doubtful that AC_CHECK_TYPES is a sufficiently good test on all
platforms.
Should I fix it to actually compile some code which uses the 128-bit types?
We used to have code in configure to test that
On 01/11/2015 11:27 AM, Stephen Frost wrote:
* Robert Haas (robertmh...@gmail.com) wrote:
On Thu, Jan 8, 2015 at 2:46 PM, Stephen Frost sfr...@snowman.net wrote:
Yeah, if we come up with a plan for X workers and end up not being able
to spawn that many then I could see that being worth a
On 11/01/15 08:56, Kohei KaiGai wrote:
2015-01-11 10:40 GMT+09:00 Jim Nasby jim.na...@bluetreble.com:
Yeah, there are actually several places in the code where relid means the
index in the range table and not the OID of the relation; it still manages
to confuse me. Nothing this patch can do about that.
Well,
Petr Jelinek p...@2ndquadrant.com writes:
On 11/01/15 08:56, Kohei KaiGai wrote:
2015-01-11 10:40 GMT+09:00 Jim Nasby jim.na...@bluetreble.com:
Yeah, there are actually several places in the code where relid means the
index in the range table and not the OID of the relation; it still manages
to confuse me.
Andres Freund and...@2ndquadrant.com writes:
On 2015-01-10 18:40:58 -0500, Tom Lane wrote:
Andres Freund and...@2ndquadrant.com writes:
Actually, it looks like I only translated the logic from barrier.h 1:1
and it already was broken there. Hm, it looks like the current code
essentially is
Christoph Berg c...@df7cb.de writes:
Re: Tom Lane 2015-01-10 22432.1420915...@sss.pgh.pa.us
So what I propose we do with this is patch HEAD and 9.4 only.
We need to do *something* in 9.4 to address Christoph's complaint, and
that branch is new enough that we can probably get away with changing
Re: Tom Lane 2015-01-11 13609.1420998...@sss.pgh.pa.us
The problem isn't present in 9.3 and earlier (at least with
postfix-pgsql), so there's no need to go back further.
I've committed a fix for this in HEAD and 9.4.
I've just tested with the HEAD libpq and the issue is fixed. Thanks!
In
On 11/01/15 05:07, Andreas Karlsson wrote:
On 01/11/2015 02:36 AM, Andres Freund wrote:
@@ -3030,6 +3139,18 @@ int8_avg_accum(PG_FUNCTION_ARGS)
 Datum
 int2_accum_inv(PG_FUNCTION_ARGS)
 {
+#ifdef HAVE_INT128
+	Int16AggState *state;
+
+	state = PG_ARGISNULL(0) ? NULL : (Int16AggState *)
On Sat, Jan 10, 2015 at 11:14 PM, Amit Kapila amit.kapil...@gmail.com wrote:
I don't think you should be switching queues. The tuples should be
sent to the tuple queue, and errors and notices to the error queue.
To achieve what you said (The tuples should be sent to the tuple
queue, and
On Sun, Jan 11, 2015 at 5:27 AM, Stephen Frost sfr...@snowman.net wrote:
* Robert Haas (robertmh...@gmail.com) wrote:
On Thu, Jan 8, 2015 at 2:46 PM, Stephen Frost sfr...@snowman.net wrote:
Yeah, if we come up with a plan for X workers and end up not being able
to spawn that many then I
On Mon, Jan 12, 2015 at 01:45:41AM +0100, Andres Freund wrote:
On 2015-01-11 19:37:53 -0500, Noah Misch wrote:
I recommend either (a) taking no action or (b) adding a regression test
verifying WaitLatchOrSocket() conformance in this scenario.
Do you have a good idea how to test b) save a C
On Mon, Jan 12, 2015 at 3:27 AM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Jan 11, 2015 at 5:27 AM, Stephen Frost sfr...@snowman.net wrote:
* Robert Haas (robertmh...@gmail.com) wrote:
On Thu, Jan 8, 2015 at 2:46 PM, Stephen Frost sfr...@snowman.net
wrote:
Yeah, if we come up with
On 2015-01-11 19:37:53 -0500, Noah Misch wrote:
I recommend either (a) taking no action or (b) adding a regression test
verifying WaitLatchOrSocket() conformance in this scenario.
Do you have a good idea how to test b) save a C function in regress.c
that does what your test does using latches?
On Sat, Jan 10, 2015 at 11:35 AM, Andres Freund and...@2ndquadrant.com wrote:
Interesting. I dimly remembered you mentioning this, that's how I
rediscovered this message.
Do you remember any details?
No, not really.
My guess is that it's not so much the overhead of the latch itself, but the
lack
Or else we implement what you suggest below (more comments below):
Thinking about the 'release' flag a bit more - maybe we could do this
instead:
if (release && astate->private_cxt)
    MemoryContextDelete(astate->mcontext);
On 2015-01-11 05:07:13 +0100, Andreas Karlsson wrote:
On 01/11/2015 02:36 AM, Andres Freund wrote:
a) Afaics only __int128/unsigned __int128 is defined. See
https://gcc.gnu.org/onlinedocs/gcc/_005f_005fint128.html
Both GCC and Clang define both of them. Which one you use seems to just be a
On Mon, Jan 12, 2015 at 12:40:50AM +0100, Andres Freund wrote:
On 2015-01-11 16:36:07 -0500, Noah Misch wrote:
On Sat, Jan 10, 2015 at 03:25:42AM +0100, Andres Freund wrote:
0001-Allow-latches-to-wait-for-socket-writability-without.patch
Imo pretty close to commit and can be
On Thu, Jan 8, 2015 at 6:31 AM, David Rowley dgrowle...@gmail.com wrote:
I'd be keen to know what people's thoughts are about the nodeAlternativePlan
only surviving until the plan is initialised.
I find it scary, although sometimes I am easily frightened.
--
Robert Haas
EnterpriseDB:
On Sat, Jan 10, 2015 at 03:25:42AM +0100, Andres Freund wrote:
0001-Allow-latches-to-wait-for-socket-writability-without.patch
Imo pretty close to commit and can be committed independently.
The key open question is whether all platforms of interest can reliably detect
end-of-file when
On 2015-01-11 16:47:53 -0500, Robert Haas wrote:
My guess is that it's not so much the overhead of the latch itself, but the
lack of the directed wakeup stuff the OS provides for semaphores.
That's possible.
On Sat, Jan 10, 2015 at 11:35 AM, Andres Freund and...@2ndquadrant.com
wrote:
On Sun, Jan 11, 2015 at 6:09 AM, Stephen Frost sfr...@snowman.net wrote:
I think what Robert's getting at here is that pq_redirect_to_shm_mq()
might be fine for the normal data heading back, but we need something
separate for errors and notices. Switching everything back and forth
between the
On Sun, Jan 11, 2015 at 6:01 AM, Stephen Frost sfr...@snowman.net wrote:
So, for my 2c, I've long expected us to parallelize at the relation-file
level for these kinds of operations. This goes back to my other
thoughts on how we should be thinking about parallelizing inbound data
for bulk
On 2015-01-11 16:36:07 -0500, Noah Misch wrote:
On Sat, Jan 10, 2015 at 03:25:42AM +0100, Andres Freund wrote:
0001-Allow-latches-to-wait-for-socket-writability-without.patch
Imo pretty close to commit and can be committed independently.
The key open question is whether all platforms
On Mon, Jan 12, 2015 at 3:30 AM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Jan 11, 2015 at 6:01 AM, Stephen Frost sfr...@snowman.net wrote:
So, if the workers have been started but aren't keeping up, the master
should do nothing until they produce tuples rather than participating?