Re: [HACKERS] Thanks for the TAP framework
Hello. For me, Testgres (https://github.com/postgrespro/testgres) works much better, because I'm allergic to Perl. Unfortunately, it's not part of Postgres... 2017-03-20 10:21 GMT+03:00 Craig Ringer: > Hi > > It just occurred to me that much of what I've been doing recently > would've been exceedingly difficult to write and even harder to debug > without the TAP framework. I would've spent a LOT of time writing test > scripts and wondering whether the bug was in my scripts or my Pg code. > > I still spend a while swearing at Perl, but I can't really imagine > doing nontrivial development without them anymore. > > So ... thanks. > > -- > Craig Ringer http://www.2ndQuadrant.com/ > PostgreSQL Development, 24x7 Support, Training & Services > > > -- > Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-hackers >
Re: [HACKERS] WIP: About CMake v2
2017-02-12 20:55 GMT+03:00 Vladimir Rusinov: > Overall, when things go wrong debugging cmake requires cmake knowledge, > while autotools mostly require shell knowledge which is much more common > (again, for sysadmins/packagers). That's not really true: CMake scripts are much easier to follow than the piles of shell (configure) and m4 scripts in Autotools. Also, please don't forget about building with Windows MSVC, Xcode, and so on. PS: I personally like Google's internal version of https://bazel.build/ a > lot. I've never used open-source version but I presume it's similar. While > it has many problems (Java, lack of popular IDE support, lack of popularity > and, again, Java) good parts are rules are both machine- and human- > readable and writable and generally easy to debug. I'm not bold enough to > propose PostgreSQL to use it, but I'd be happy to see ideas from it to be > used elsewhere. There are many other build systems as well, for example Meson (http://mesonbuild.com/), but CMake is the best meta-build system available today.
Re: [HACKERS] WIP: About CMake v2
2017-01-28 1:50 GMT+03:00 Michael Paquier: > On Fri, Jan 27, 2017 at 11:09 PM, Peter Eisentraut > wrote: > > On 1/24/17 8:37 AM, Tom Lane wrote: > >> Craig Ringer writes: > >>> Personally I think we should aim to have this in as a non default build > >>> mode in pg10 if it can be made ready, and aim to make it default in > pg11 at > >>> least for Windows. > >> > >> AFAIK we haven't committed to accepting this at all, let alone trying > >> to do so on a tight schedule. And I believe there was general agreement > >> that we would not accept it as something to maintain in parallel with > >> the existing makefiles. If we have to maintain two build systems, we > >> have that already. > > > > My preferred scenario would be to replace the Windows build system by > > this first, then refine it, then get rid of Autoconf. > > > > The ideal timeline would be to have a ready patch to commit early in a > > development cycle, then get rid of the Windows build system by the end > > of it. Naturally, this would need buy-in from Windows developers. > > This looks like a really good plan to me. I think it's the best plan, because once this patch is in Postgres, people from the community can test it on Unix systems too. Supporting two build systems is really not a big deal; I have been doing exactly that for the past year without any major trouble. Besides, we already have a second, Perl-based build system...
Re: [HACKERS] WIP: About CMake v2
> > I don't understand what this has to do with cmake. If this is a > worthwhile improvement for the Windows build, then please explain why, > with a "before" and "after" output and a patch for the existing build > system as well. During the porting process I run into situations where I have to fix something, because I build in a different way, and the current build system avoids many sharp corners. As for this particular case: without strict mode, many of the "float" checks don't work correctly; you can read the link above for details. Besides, this option is set by the build system. I think we can start a new thread for this change (with a patch for the current Perl system). > It might also be worth refactoring the existing Autoconf code here to > make this consistent. I did it because it's convenient in CMake, but I can change it; it's no big deal. > Please explain what the circular dependency is. If there is one, we > should also side-port this change. This is an important part. I have a rule that generates rijndael.tbl using gen-rtab, which is built from rijndael.c (with a special define), which in turn includes rijndael.tbl. So generating rijndael.tbl forces a rebuild of gen-rtab, which would regenerate rijndael.tbl again. CMake tracks #include dependencies in files, but we can add a wrapper macro to hide the include from it. > This patch removes the uuid.h include but doesn't add it anywhere else. > How does it work? CMake passes the right location of uuid.h to the compiler (I mean -I/usr/include and the like for gcc). > Yeah, I think this is how the MSVC stuff effectively works right now as > well. I'm glad to hear it. 2017-01-03 17:11 GMT+03:00 Peter Eisentraut < peter.eisentr...@2ndquadrant.com>: > On 12/30/16 9:10 AM, Yuriy Zhuravlev wrote: > > cmake_v2_2_c_define.patch > > > > Small chages in c.h . At first it is “#pragma fenv_access (off)” it is > > necessary if we use /fp:strict for MSVC compiler. Without this pragma we > > can’t calc floats for const variables in compiller time (2 * M_PI for > > example). 
Strict mode important if we want to be close with ieee754 > > float format on MSVC (1.0 / 0.0 = inf for example). Detail info here: > > https://msdn.microsoft.com/en-us/library/e7s85ffb.aspx > > I don't understand what this has to do with cmake. If this is a > worthwhile improvement for the Windows build, then please explain why, > with a "before" and "after" output and a patch for the existing build > system as well. > > > Second change is because I find and set HAVE_INT128 directly from CMake. > > PG_INT128_TYPE used only for autoconfig scripts. > > It might also be worth refactoring the existing Autoconf code here to > make this consistent. > > (My assumption is that if we were to move forward with cmake or any > other build system change, we would have to keep the old one alongside > at least for a little while. So any changes to the C code would need to > be side-ported.) > > > cmake_v2_3_rijndael.patch > > > > First I added special wraparound because here CMake have circular > > dependency (cmake very smart here). Second I removed rijndael.tbl > > because it generated during build process every time. > > Please explain what the circular dependency is. If there is one, we > should also side-port this change. > > > cmake_v2_4_uuid.patch > > > > Another small patch. Right place for uuid.h I find by CMake and not > > necessary this ifdef hell. > > This patch removes the uuid.h include but doesn't add it anywhere else. > How does it work? > > > Questions for discussion: > > > > In generated project by CMake we always have only one enter point. Also > > INSTALL macross support only including to “all” targets. It follows that > > it is impossible build contrib modules separately only with “all” > > target. Here write about this behavior: > > https://cmake.org/cmake/help/v3.7/prop_tgt/EXCLUDE_FROM_ALL.html > > Yeah, I think this is how the MSVC stuff effectively works right now as > well. 
> > -- > Peter Eisentraut http://www.2ndQuadrant.com/ > PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services >
[HACKERS] Small fix in pg_rewind (redundant declaration)
Hello hackers. I've stumbled upon some strange code. In src/bin/pg_rewind/datapagemap.h we declare: extern void datapagemap_destroy(datapagemap_t *map); but it is implemented nowhere. I think the declaration of this function should be removed. I'm not sure such a trivial change needs a patch. Thanks. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
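A quick mechanical check for such orphaned declarations can be sketched as below. This is only an illustration: the regexes are deliberately simplistic, and the header/source strings are stand-ins for the real datapagemap.h/datapagemap.c contents, not the actual files.

```python
import re

# Stand-in file contents; in practice, read datapagemap.h and datapagemap.c.
header = """
extern void datapagemap_destroy(datapagemap_t *map);
extern void datapagemap_add(datapagemap_t *map, BlockNumber blkno);
"""
source = """
void
datapagemap_add(datapagemap_t *map, BlockNumber blkno)
{
    /* ... */
}
"""

# Names declared as 'extern <type> name(' in the header (single-word types only).
declared = re.findall(r'extern\s+\w+\s+(\w+)\s*\(', header)

# A declaration is an orphan if no 'name(...) {' definition appears in the source.
orphans = [name for name in declared
           if not re.search(rf'\b{name}\s*\([^;]*?\)\s*\{{', source)]
print(orphans)  # → ['datapagemap_destroy']
```

Run against the real tree, a check like this would presumably flag datapagemap_destroy as declared but never defined.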
Re: [HACKERS] Some questions about the array.
On Friday 04 December 2015 16:52:48 Teodor Sigaev wrote: > Seems, omitting boundaries in insert/update isn't a good idea. I suggest to > allow omitting only in select subscripting. It was my last attempt to do so. So now I agree, the most simple is now disabled for insert and update. New patch in attach. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Companydiff --git a/doc/src/sgml/array.sgml b/doc/src/sgml/array.sgml index 4385a09..6ee71a5 100644 --- a/doc/src/sgml/array.sgml +++ b/doc/src/sgml/array.sgml @@ -257,6 +257,26 @@ SELECT schedule[1:2][1:1] FROM sal_emp WHERE name = 'Bill'; (1 row) + Possible to skip the lower-bound or + upper-bound + for get first or last element in slice. + + +SELECT schedule[:][:] FROM sal_emp WHERE name = 'Bill'; + +schedule + + {{meeting,lunch},{training,presentation}} +(1 row) + +SELECT schedule[:2][2:] FROM sal_emp WHERE name = 'Bill'; + +schedule + + {{lunch},{presentation}} +(1 row) + + If any dimension is written as a slice, i.e., contains a colon, then all dimensions are treated as slices. 
Any dimension that has only a single number (no colon) is treated as being from 1 diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c index 29f058c..d9bf977 100644 --- a/src/backend/executor/execQual.c +++ b/src/backend/executor/execQual.c @@ -268,10 +268,12 @@ ExecEvalArrayRef(ArrayRefExprState *astate, bool eisnull; ListCell *l; int i = 0, -j = 0; +j = 0, +indexexpr; IntArray upper, lower; int *lIndex; + AnyArrayType *arrays; array_source = ExecEvalExpr(astate->refexpr, econtext, @@ -293,6 +295,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate, foreach(l, astate->refupperindexpr) { ExprState *eltstate = (ExprState *) lfirst(l); + eisnull = false; if (i >= MAXDIM) ereport(ERROR, @@ -300,10 +303,23 @@ ExecEvalArrayRef(ArrayRefExprState *astate, errmsg("number of array dimensions (%d) exceeds the maximum allowed (%d)", i + 1, MAXDIM))); - upper.indx[i++] = DatumGetInt32(ExecEvalExpr(eltstate, - econtext, - , - NULL)); + if (eltstate == NULL && astate->refattrlength <= 0) + { + if (isAssignment) +ereport(ERROR, + (errcode(ERRCODE_ARRAY_SUBSCRIPT_ERROR), + errmsg("cannot determine upper index for empty array"))); + arrays = (AnyArrayType *)DatumGetArrayTypeP(array_source); + indexexpr = AARR_LBOUND(arrays)[i] + AARR_DIMS(arrays)[i] - 1; + } + else + indexexpr = DatumGetInt32(ExecEvalExpr(eltstate, + econtext, + , + NULL)); + + upper.indx[i++] = indexexpr; + /* If any index expr yields NULL, result is NULL or error */ if (eisnull) { @@ -321,6 +337,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate, foreach(l, astate->reflowerindexpr) { ExprState *eltstate = (ExprState *) lfirst(l); + eisnull = false; if (j >= MAXDIM) ereport(ERROR, @@ -328,10 +345,19 @@ ExecEvalArrayRef(ArrayRefExprState *astate, errmsg("number of array dimensions (%d) exceeds the maximum allowed (%d)", j + 1, MAXDIM))); - lower.indx[j++] = DatumGetInt32(ExecEvalExpr(eltstate, - econtext, - , - NULL)); + if (eltstate == NULL) + { +arrays = (AnyArrayType 
*)DatumGetArrayTypeP(array_source); +indexexpr = AARR_LBOUND(arrays)[j]; + } + else +indexexpr = DatumGetInt32(ExecEvalExpr(eltstate, + econtext, + , + NULL)); + + lower.indx[j++] = indexexpr; + /* If any index expr yields NULL, result is NULL or error */ if (eisnull) { diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index 26264cb..a761263 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -2417,6 +2417,8 @@ _copyAIndices(const A_Indices *from) COPY_NODE_FIELD(lidx); COPY_NODE_FIELD(uidx); + COPY_SCALAR_FIELD(lidx_default); + COPY_SCALAR_FIELD(uidx_default); return newnode; } diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c index aa6e102..e75b448 100644 --- a/src/backend/nodes/equalfuncs.c +++ b/src/backend/nodes/equalfuncs.c @@ -2162,6 +2162,8 @@ _equalAIndices(const A_Indices *a, const A_Indices *b) { COMPARE_NODE_FIELD(lidx); COMPARE_NODE_FIELD(uidx); + COMPARE_SCALAR_FIELD(lidx_default); + COMPARE_SCALAR_FIELD(uidx_default); return true; } diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c index 012c14b..ed77c75 100644 --- a/src/backend/nodes/outfuncs.c +++ b/src/backend/nodes/outfuncs.c @@ -2773,6 +2773,8 @@ _outA_Indices(StringInfo str, const A_Indices *node) WRITE_NODE_FIELD(lidx); WRITE_NODE_FIELD(uidx); + WRITE_BOOL_FIELD(lidx_default); + WRITE_BOOL_FIELD
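To make the documented behavior concrete, here is a small Python model of the 1-D case (an illustration only, not part of the patch): PostgreSQL slice bounds are inclusive and 1-based by default, an omitted lower bound falls back to the array's lower bound, and an omitted upper bound falls back to its last element.

```python
def pg_slice(arr, lower=None, upper=None, lbound=1):
    """Model of omitted-bound slicing for a 1-D array: bounds are
    inclusive; lbound is the array's lower bound (1 by default)."""
    lo = lbound if lower is None else lower                 # the arr[:u] case
    hi = lbound + len(arr) - 1 if upper is None else upper  # the arr[l:] case
    return arr[lo - lbound: hi - lbound + 1]

week = ['meeting', 'lunch', 'training', 'presentation']
assert pg_slice(week) == week                                   # week[:]
assert pg_slice(week, upper=2) == ['meeting', 'lunch']          # week[:2]
assert pg_slice(week, lower=3) == ['training', 'presentation']  # week[3:]
```

The multi-dimensional case in the patch applies the same defaulting per dimension via AARR_LBOUND and AARR_DIMS.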
Re: [HACKERS] Some questions about the array.
On Tuesday 01 December 2015 15:30:47 Teodor Sigaev wrote: > As I understand, update should fail with any array, so, first update should > fail too. Am I right? You right. Done. New patch in attach. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Companydiff --git a/doc/src/sgml/array.sgml b/doc/src/sgml/array.sgml index 4385a09..6ee71a5 100644 --- a/doc/src/sgml/array.sgml +++ b/doc/src/sgml/array.sgml @@ -257,6 +257,26 @@ SELECT schedule[1:2][1:1] FROM sal_emp WHERE name = 'Bill'; (1 row) + Possible to skip the lower-bound or + upper-bound + for get first or last element in slice. + + +SELECT schedule[:][:] FROM sal_emp WHERE name = 'Bill'; + +schedule + + {{meeting,lunch},{training,presentation}} +(1 row) + +SELECT schedule[:2][2:] FROM sal_emp WHERE name = 'Bill'; + +schedule + + {{lunch},{presentation}} +(1 row) + + If any dimension is written as a slice, i.e., contains a colon, then all dimensions are treated as slices. Any dimension that has only a single number (no colon) is treated as being from 1 diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c index 29f058c..6643714 100644 --- a/src/backend/executor/execQual.c +++ b/src/backend/executor/execQual.c @@ -268,10 +268,12 @@ ExecEvalArrayRef(ArrayRefExprState *astate, bool eisnull; ListCell *l; int i = 0, -j = 0; +j = 0, +indexexpr; IntArray upper, lower; int *lIndex; + AnyArrayType *arrays; array_source = ExecEvalExpr(astate->refexpr, econtext, @@ -293,6 +295,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate, foreach(l, astate->refupperindexpr) { ExprState *eltstate = (ExprState *) lfirst(l); + eisnull = false; if (i >= MAXDIM) ereport(ERROR, @@ -300,10 +303,23 @@ ExecEvalArrayRef(ArrayRefExprState *astate, errmsg("number of array dimensions (%d) exceeds the maximum allowed (%d)", i + 1, MAXDIM))); - upper.indx[i++] = DatumGetInt32(ExecEvalExpr(eltstate, - econtext, - , - NULL)); + if (eltstate == NULL && astate->refattrlength <= 0) + { + if 
(isAssignment) +ereport(ERROR, + (errcode(ERRCODE_ARRAY_SUBSCRIPT_ERROR), + errmsg("cannot determine upper index for empty array"))); + arrays = (AnyArrayType *)DatumGetArrayTypeP(array_source); + indexexpr = AARR_LBOUND(arrays)[i] + AARR_DIMS(arrays)[i] - 1; + } + else + indexexpr = DatumGetInt32(ExecEvalExpr(eltstate, + econtext, + , + NULL)); + + upper.indx[i++] = indexexpr; + /* If any index expr yields NULL, result is NULL or error */ if (eisnull) { @@ -321,6 +337,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate, foreach(l, astate->reflowerindexpr) { ExprState *eltstate = (ExprState *) lfirst(l); + eisnull = false; if (j >= MAXDIM) ereport(ERROR, @@ -328,10 +345,20 @@ ExecEvalArrayRef(ArrayRefExprState *astate, errmsg("number of array dimensions (%d) exceeds the maximum allowed (%d)", j + 1, MAXDIM))); - lower.indx[j++] = DatumGetInt32(ExecEvalExpr(eltstate, - econtext, - , - NULL)); + if (eltstate == NULL) + { +arrays = (AnyArrayType *)DatumGetArrayTypeP(array_source); +indexexpr = AARR_LBOUND(arrays)[j]; + } + else +indexexpr = DatumGetInt32(ExecEvalExpr(eltstate, + econtext, + , + NULL)); + + lower.indx[j++] = indexexpr; + + /* If any index expr yields NULL, result is NULL or error */ if (eisnull) { diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index 26264cb..a761263 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -2417,6 +2417,8 @@ _copyAIndices(const A_Indices *from) COPY_NODE_FIELD(lidx); COPY_NODE_FIELD(uidx); + COPY_SCALAR_FIELD(lidx_default); + COPY_SCALAR_FIELD(uidx_default); return newnode; } diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c index aa6e102..e75b448 100644 --- a/src/backend/nodes/equalfuncs.c +++ b/src/backend/nodes/equalfuncs.c @@ -2162,6 +2162,8 @@ _equalAIndices(const A_Indices *a, const A_Indices *b) { COMPARE_NODE_FIELD(lidx); COMPARE_NODE_FIELD(uidx); + COMPARE_SCALAR_FIELD(lidx_default); + COMPARE_SCALAR_FIELD(uidx_default); return true; } 
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c index 012c14b..ed77c75 100644 --- a/src/backend/nodes/outfuncs.c +++ b/src/backend/nodes/outfuncs.c @@ -2773,6 +2773,8 @@ _outA_Indices(StringInfo str, const A_Indices *node) WRITE_NODE_FIELD(lidx); WRITE_NODE_FIELD(uidx); + WRITE_BOOL_FIELD(lidx_default); + WRITE_BOOL_FIELD(uidx_default); } static void diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y index
Re: [HACKERS] Some questions about the array.
On Tuesday 01 December 2015 15:43:47 you wrote: > On Tuesday 01 December 2015 15:30:47 Teodor Sigaev wrote: > > As I understand, update should fail with any array, so, first update > > should > > fail too. Am I right? > > You right. Done. New patch in attach. Found error when omitted lower bound in INSERT like this: INSERT INTO arrtest_s (a[:2], b[1:2]) VALUES ('{1,2,3,4,5}', '{7,8,9}'); I fix it in new patch. Lower bound for new array is 1 by default. Thanks. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Companydiff --git a/doc/src/sgml/array.sgml b/doc/src/sgml/array.sgml index 4385a09..6ee71a5 100644 --- a/doc/src/sgml/array.sgml +++ b/doc/src/sgml/array.sgml @@ -257,6 +257,26 @@ SELECT schedule[1:2][1:1] FROM sal_emp WHERE name = 'Bill'; (1 row) + Possible to skip the lower-bound or + upper-bound + for get first or last element in slice. + + +SELECT schedule[:][:] FROM sal_emp WHERE name = 'Bill'; + +schedule + + {{meeting,lunch},{training,presentation}} +(1 row) + +SELECT schedule[:2][2:] FROM sal_emp WHERE name = 'Bill'; + +schedule + + {{lunch},{presentation}} +(1 row) + + If any dimension is written as a slice, i.e., contains a colon, then all dimensions are treated as slices. 
Any dimension that has only a single number (no colon) is treated as being from 1 diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c index 29f058c..f300b31 100644 --- a/src/backend/executor/execQual.c +++ b/src/backend/executor/execQual.c @@ -268,10 +268,12 @@ ExecEvalArrayRef(ArrayRefExprState *astate, bool eisnull; ListCell *l; int i = 0, -j = 0; +j = 0, +indexexpr; IntArray upper, lower; int *lIndex; + AnyArrayType *arrays; array_source = ExecEvalExpr(astate->refexpr, econtext, @@ -293,6 +295,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate, foreach(l, astate->refupperindexpr) { ExprState *eltstate = (ExprState *) lfirst(l); + eisnull = false; if (i >= MAXDIM) ereport(ERROR, @@ -300,10 +303,23 @@ ExecEvalArrayRef(ArrayRefExprState *astate, errmsg("number of array dimensions (%d) exceeds the maximum allowed (%d)", i + 1, MAXDIM))); - upper.indx[i++] = DatumGetInt32(ExecEvalExpr(eltstate, - econtext, - , - NULL)); + if (eltstate == NULL && astate->refattrlength <= 0) + { + if (isAssignment) +ereport(ERROR, + (errcode(ERRCODE_ARRAY_SUBSCRIPT_ERROR), + errmsg("cannot determine upper index for empty array"))); + arrays = (AnyArrayType *)DatumGetArrayTypeP(array_source); + indexexpr = AARR_LBOUND(arrays)[i] + AARR_DIMS(arrays)[i] - 1; + } + else + indexexpr = DatumGetInt32(ExecEvalExpr(eltstate, + econtext, + , + NULL)); + + upper.indx[i++] = indexexpr; + /* If any index expr yields NULL, result is NULL or error */ if (eisnull) { @@ -321,6 +337,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate, foreach(l, astate->reflowerindexpr) { ExprState *eltstate = (ExprState *) lfirst(l); + eisnull = false; if (j >= MAXDIM) ereport(ERROR, @@ -328,10 +345,25 @@ ExecEvalArrayRef(ArrayRefExprState *astate, errmsg("number of array dimensions (%d) exceeds the maximum allowed (%d)", j + 1, MAXDIM))); - lower.indx[j++] = DatumGetInt32(ExecEvalExpr(eltstate, - econtext, - , - NULL)); + if (eltstate == NULL) + { +if (*isNull) + indexexpr = 1; +else +{ + arrays = 
(AnyArrayType *)DatumGetArrayTypeP(array_source); + indexexpr = AARR_LBOUND(arrays)[j]; +} + } + else +indexexpr = DatumGetInt32(ExecEvalExpr(eltstate, + econtext, + , + NULL)); + + lower.indx[j++] = indexexpr; + + /* If any index expr yields NULL, result is NULL or error */ if (eisnull) { diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index 26264cb..a761263 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -2417,6 +2417,8 @@ _copyAIndices(const A_Indices *from) COPY_NODE_FIELD(lidx); COPY_NODE_FIELD(uidx); + COPY_SCALAR_FIELD(lidx_default); + COPY_SCALAR_FIELD(uidx_default); return newnode; } diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c index aa6e102..e75b448 100644 --- a/src/backend/nodes/equalfuncs.c +++ b/src/backend/nodes/equalfuncs.c @@ -2162,6 +2162,8 @@ _equalAIndices(const A_Indices *a, const A_Indices *b) { COMPARE_NODE_FIELD(lidx); COMPARE_NODE_FIELD(uidx); + COMPARE_SCALAR_FIELD(lidx_default); + COMPARE_SCALAR_FIELD(uidx_default); return true; } diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c index 012c14b..ed77c75 1
Re: [HACKERS] Some questions about the array.
On Tuesday 01 December 2015 08:38:21 you wrote: > it (zero > based indexing support) doesn't meet the standard of necessity for > adding to the core API and as stated it's much to magical. We do not touch the arrays themselves; we simply provide a function to access them with more comfortable behavior. Creating a separate array type as an extension would be very difficult, IMHO. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] Some questions about the array.
On Monday 30 November 2015 08:58:49 you wrote: > +1 IMO this line of thinking is a dead end. Better handled via > functions, not syntax Maybe then add an array_pyslice(start, end) function, with a zero-based start and support for negative indexes? Only for 1-D arrays. What do you think? -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
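For reference, the Python-style semantics such a function would mimic (zero-based, end-exclusive, negative indexes counting from the end, either bound omittable) can be modeled in Python itself; array_pyslice here is just the name proposed in this message, not an existing SQL function.

```python
def array_pyslice(arr, start=None, end=None):
    """Model of the proposed 1-D function: zero-based, end-exclusive,
    negative indexes count from the end, either bound may be omitted."""
    return list(arr[start:end])  # Python's native slicing already behaves this way

schedule = ['meeting', 'lunch', 'training', 'presentation']
assert array_pyslice(schedule, 0, 2) == ['meeting', 'lunch']
assert array_pyslice(schedule, -2) == ['training', 'presentation']
assert array_pyslice(schedule, end=-1) == ['meeting', 'lunch', 'training']
assert array_pyslice(schedule) == schedule
```

This contrasts with PostgreSQL's own subscripting, which is 1-based with inclusive bounds.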
Re: [HACKERS] Some questions about the array.
The new version of the patch. On Friday 27 November 2015 17:23:35 Teodor Sigaev wrote: > 1 > Documentation isn't very informative Added example with different results. > 2 > Seems, error messages are too inconsistent. If you forbid omitting bound in > assigment then if all cases error message should be the same or close. Done. Skipping lower boundary is no longer an error. Thank you for your review. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Companydiff --git a/doc/src/sgml/array.sgml b/doc/src/sgml/array.sgml index 4385a09..5a51e07 100644 --- a/doc/src/sgml/array.sgml +++ b/doc/src/sgml/array.sgml @@ -257,6 +257,25 @@ SELECT schedule[1:2][1:1] FROM sal_emp WHERE name = 'Bill'; (1 row) + You can skip the lower-bound or upper-bound + for get first or last element in slice. + + +SELECT schedule[:][:] FROM sal_emp WHERE name = 'Bill'; + +schedule + + {{meeting,lunch},{training,presentation}} +(1 row) + +SELECT schedule[:2][2:] FROM sal_emp WHERE name = 'Bill'; + +schedule + + {{lunch},{presentation}} +(1 row) + + If any dimension is written as a slice, i.e., contains a colon, then all dimensions are treated as slices. 
Any dimension that has only a single number (no colon) is treated as being from 1 diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c index 29f058c..6643714 100644 --- a/src/backend/executor/execQual.c +++ b/src/backend/executor/execQual.c @@ -268,10 +268,12 @@ ExecEvalArrayRef(ArrayRefExprState *astate, bool eisnull; ListCell *l; int i = 0, -j = 0; +j = 0, +indexexpr; IntArray upper, lower; int *lIndex; + AnyArrayType *arrays; array_source = ExecEvalExpr(astate->refexpr, econtext, @@ -293,6 +295,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate, foreach(l, astate->refupperindexpr) { ExprState *eltstate = (ExprState *) lfirst(l); + eisnull = false; if (i >= MAXDIM) ereport(ERROR, @@ -300,10 +303,23 @@ ExecEvalArrayRef(ArrayRefExprState *astate, errmsg("number of array dimensions (%d) exceeds the maximum allowed (%d)", i + 1, MAXDIM))); - upper.indx[i++] = DatumGetInt32(ExecEvalExpr(eltstate, - econtext, - , - NULL)); + if (eltstate == NULL && astate->refattrlength <= 0) + { + if (isAssignment) +ereport(ERROR, + (errcode(ERRCODE_ARRAY_SUBSCRIPT_ERROR), + errmsg("cannot determine upper index for empty array"))); + arrays = (AnyArrayType *)DatumGetArrayTypeP(array_source); + indexexpr = AARR_LBOUND(arrays)[i] + AARR_DIMS(arrays)[i] - 1; + } + else + indexexpr = DatumGetInt32(ExecEvalExpr(eltstate, + econtext, + , + NULL)); + + upper.indx[i++] = indexexpr; + /* If any index expr yields NULL, result is NULL or error */ if (eisnull) { @@ -321,6 +337,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate, foreach(l, astate->reflowerindexpr) { ExprState *eltstate = (ExprState *) lfirst(l); + eisnull = false; if (j >= MAXDIM) ereport(ERROR, @@ -328,10 +345,20 @@ ExecEvalArrayRef(ArrayRefExprState *astate, errmsg("number of array dimensions (%d) exceeds the maximum allowed (%d)", j + 1, MAXDIM))); - lower.indx[j++] = DatumGetInt32(ExecEvalExpr(eltstate, - econtext, - , - NULL)); + if (eltstate == NULL) + { +arrays = (AnyArrayType 
*)DatumGetArrayTypeP(array_source); +indexexpr = AARR_LBOUND(arrays)[j]; + } + else +indexexpr = DatumGetInt32(ExecEvalExpr(eltstate, + econtext, + , + NULL)); + + lower.indx[j++] = indexexpr; + + /* If any index expr yields NULL, result is NULL or error */ if (eisnull) { diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index 26264cb..a761263 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -2417,6 +2417,8 @@ _copyAIndices(const A_Indices *from) COPY_NODE_FIELD(lidx); COPY_NODE_FIELD(uidx); + COPY_SCALAR_FIELD(lidx_default); + COPY_SCALAR_FIELD(uidx_default); return newnode; } diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c index aa6e102..e75b448 100644 --- a/src/backend/nodes/equalfuncs.c +++ b/src/backend/nodes/equalfuncs.c @@ -2162,6 +2162,8 @@ _equalAIndices(const A_Indices *a, const A_Indices *b) { COMPARE_NODE_FIELD(lidx); COMPARE_NODE_FIELD(uidx); + COMPARE_SCALAR_FIELD(lidx_default); + COMPARE_SCALAR_FIELD(uidx_default); return true; } diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c index 012c14b..ed77c75 100644 --- a/src/backend/nodes/outfuncs.c +++ b/src/backend/nodes/outfuncs.c @@ -2773,6 +2773,8 @@ _outA_Indices(StringInfo str, const A_Indices *node
Re: [HACKERS] WIP: About CMake v2
On Thursday 26 November 2015 19:28:15 you wrote: > I think you don't understand the point: start with the *right* cmake > version because you could have to redo (a lot of) your work There is no fundamental difference between CMake versions. My laptop and desktop already have 3.4.0 (the new KDE requires it), and my friends generally have something above 3.0.0 as well. Developing under 2.8 is simply inconvenient right now, but I try not to use 3.0 features; later, a simple check for 2.8 compatibility will be enough. Importantly, don't forget that CMake != GNU Make; CMake == autotools + gnumake. New CMake versions mostly fix modules such as FindBISON. It is a different philosophy. Thanks. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] WIP: About CMake v2
On Thursday 26 November 2015 01:29:37 Euler Taveira wrote: > I give it a try. Nice WIP. IMHO you should try to support cmake version > that are available in the stable releases. Looking at [1], I think the > best choice is 2.8.11 (because it will cover Red Hat based distros and > also Debian based ones). Are you using a new feature from 3.1? I mean, > it should be nice to cover old stable releases, if it is possible. Maybe you are right, but by the time I finish my work I think 3.0 will have become the standard. CMake is developing rapidly and will soon reach version 3.4.1. And one more thing: decent documentation only arrived with 3.0. :) Still, I try to keep my code compatible with 2.8.11; I currently have 3.4.0 (the latest for Gentoo). Thanks. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] WIP: About CMake v2
On Thursday 26 November 2015 11:10:36 you wrote: > Adding a new build > dependency is bad enough; adding one that isn't easily available is a > show-stopper. If someone decides to compile Postgres from source rather than install it from an RPM, building CMake will not be a problem for them either. On the one hand it is good that GNU Make is stable and has not changed, but that cannot last forever. It is possible to support CMake 2.6, but now is not the time to think about it; I'll come back to this once all the functionality works. Thanks. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] WIP: About CMake v2
On Thursday 26 November 2015 17:42:16 you wrote: > No point in doing any work if you don't agree with the basic prerequisites. I meant that I'll add support for older CMake versions after the rest of the functionality is implemented. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] WIP: About CMake v2
On Wednesday 25 November 2015 13:27:33 you wrote: > That was years ago, mind, but I > haven't seen CMake exactly taking over the world since then. KDE, MariaDB, neovim, and so on; new projects usually pick CMake. > Many prewritten CMake modules fail to follow basic practices that make them > work anywhere but where the author wanted/needed. Everything is much better now. It is also worth separating "make" from configuration: CMake is better than GNU Make, and for configuration it plays a role similar to m4, although m4 is older and more mature. These are my impressions from the current work. Thanks. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] WIP: About CMake v2
Hello hackers. News about CMake: I have built postgres, initdb, createdb, psql, and pg_ctl using CMake. After make install you can run initdb, then postgres, then createdb, and use the database through psql. It is Linux-only for now and really buggy (and the code is very dirty), but it works! If someone wants to test it or to help: https://github.com/stalkerg/postgres_cmake Thanks. PS: All defines for pg_config.h are generated and tested properly. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] Some questions about the array.
On Friday 06 November 2015 12:55:44 you wrote: > Omitted bounds are common in other languages and would be handy. I > don't think they'd cause any issues with multi-dimensional arrays or > variable start-pos arrays. And yet, what about my patch? The discussions about ~ and {:} seem optional to me. Thanks. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] Some questions about the array.
On Wednesday 11 November 2015 17:29:31 you wrote: > In this case the syntax is major issue. Any language should not to have any > possible feature on the world. I am talking about omitted boundaries. That barely changes the syntax and conflicts with nothing. Thanks. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] Some questions about the array.
On Monday 09 November 2015 13:29:30 you wrote: > It is ugly, but you can wrap it to function - so still I don't see any > reason, why it is necessary For example, I write a lot of queries by hand... This functionality is available in many languages, and it is simply convenient. Of course it is possible to live without it, but why should we? Thanks. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] Some questions about the array.
On Monday 09 November 2015 13:50:20 Pavel Stehule wrote: > New symbols increase a complexity of our code and our documentation. > > If some functionality can be implemented via functions without performance > impacts, we should not to create new operators or syntax - mainly for > corner use cases. > > Regards > > Pavel Ok, we can use {:} instead of [:] for zero-based array access. A function is only half a solution. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] Some questions about the array.
On Monday 09 November 2015 04:33:28 you wrote: > You can write it as a separate function instead of changing current syntax. I don't think so, because we have multi-dimensional arrays. And why do we have the [:] syntax now, then? -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] Some questions about the array.
On Sunday 08 November 2015 16:49:20 you wrote: > I'm not necessarily objecting to that, but it's not impossible that it > could break something for some existing user. We can decide not to > care about that, though. We had an idea: you can use ~ to address the array as if it always started at 0. Then we can use negative indexes, and you can always find the beginning of the array. Example: we have the array [-3:3]={1,2,3,4,5,6,7} array[~0] == 1 array[~-1] == 7 array[~2:~-2] == {3,4,5,6} What do you think? -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
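To make the proposed semantics concrete, here is a minimal Python sketch. The helper names (tilde_index, fetch, fetch_slice) are invented for illustration; only the ~ mapping itself (zero-based from the front, negative counts from the end, as in Python) comes from the proposal above.

```python
def tilde_index(lb, n, i):
    """Map a '~'-style zero-based index onto a PostgreSQL array whose
    physical bounds are [lb : lb + n - 1].  Negative i counts from
    the end, as in Python."""
    return lb + i if i >= 0 else lb + n + i

# The example array [-3:3] = {1,2,3,4,5,6,7}
lb, data = -3, [1, 2, 3, 4, 5, 6, 7]
n = len(data)

def fetch(i):
    """array[~i]"""
    return data[tilde_index(lb, n, i) - lb]

def fetch_slice(i, j):
    """array[~i:~j] -- SQL slice bounds are inclusive"""
    lo = tilde_index(lb, n, i) - lb
    hi = tilde_index(lb, n, j) - lb
    return data[lo:hi + 1]

assert fetch(0) == 1            # array[~0]
assert fetch(-1) == 7           # array[~-1]
assert fetch_slice(2, -2) == [3, 4, 5, 6]   # array[~2:~-2]
```

The point of the mapping is that ~0 always names the first element regardless of the array's physical lower bound.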
Re: [HACKERS] Some questions about the array.
On Monday 09 November 2015 12:48:54 you wrote: > I am sorry - it is looking pretty obscure. Really need this feature? IMHO yes. Today, to write array[~2:~-2] you need something like: array[array_lower(array, 1)+2 : array_upper(array, 1)-1] It gets worse with long names, not to mention the extra function calls. Thanks. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] Some questions about the array.
On Thursday 05 November 2015 22:33:37 you wrote: > Would something like array[1:~1] as a syntax be acceptable to denote > backward counting? Very interesting idea! I could implement it; I just need to check for side effects. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] Some questions about the array.
On Thursday 05 November 2015 23:45:53 you wrote: > On Thu, Nov 5, 2015 at 9:57 AM, YUriy Zhuravlev > > <u.zhurav...@postgrespro.ru> wrote: > > Hello hackers. > > There are comments to my patch? Maybe I should create a separate thread? > > Thanks. > > You should add this on commitfest.postgresql.org. I created one a couple of weeks ago: https://commitfest.postgresql.org/7/397/ > > I'm sure I know your answer, but what do other people think? I wonder the same thing. Thanks. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] Some questions about the array.
Hello hackers. Are there any comments on my patch? Maybe I should create a separate thread? Thanks. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] clearing opfuncid vs. parallel query
> I thought that's what you were proposing. Process the struct > definitions and emit .c files. We have two options. The first is to always generate the *.c files directly from the *.h files. The other is to generate XML/JSON from the *.h files once, and then generate the *.c files from that (parsing XML/JSON is easy). > Anything that is part of the build process will have to be done in C or > Perl. I know about the relationship between Postgres and C/Perl. Still, Perl is not the language I would choose for anything related to code generation; Python is better in many ways. I'm not trying to convince anyone; I just see the situation, and I don't like it. What do you think about the serialization format? Right now it is very primitive; for example, fields are read back by position rather than by key. In my own work I used jsonb, which also made it easy to store query plans in a Postgres table. Thanks. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] clearing opfuncid vs. parallel query
On Thursday 22 October 2015 09:26:46 David Fetter wrote: > On Thu, Oct 22, 2015 at 07:15:35PM +0300, YUriy Zhuravlev wrote: > > Hello. > > Currently using nodeToString and stringToNode you can not pass a > > full plan. In this regard, what is the plan to fix it? Or in the > > under task parallel query does not have such a problem? > > > > > This turns out not to be straightforward to code, because we don't > > > have a generic plan tree walker, > > > > I have an inner development. I am using python analyzing header > > files and generates a universal walker (parser, paths ,executer and > > etc trees), as well as the serializer and deserializer to jsonb. > > Maybe I should publish this code? > > Please do. Tom Lane and Robert Haas are very unhappy with Python. Is there any reason for that? Thanks! -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] clearing opfuncid vs. parallel query
On Friday 23 October 2015 12:41:50 you wrote: > Requirement of python with pycparser as build dependency is a > serious cataclysm. For instance, how many buildfarms will survive it? > This is why Tom and Robert are looking for ways to evade it. I agree. But it is also a fact that Perl is less suited for this kind of development. And Python is more common these days: http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] clearing opfuncid vs. parallel query
On Thursday 22 October 2015 13:25:52 you wrote: > It would be more useful, if we're going to autogenerate code, Are we going to autogenerate code? > to do it from the actual struct definitions. I can generate XML/JSON from the actual structs. I suggested XML/JSON because analyzing C code directly is much harder, but my current generator already takes the structures from the header files (via pycparser). Thanks. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] clearing opfuncid vs. parallel query
On Thursday 22 October 2015 12:53:49 you wrote: > On Thu, Oct 22, 2015 at 12:15 PM, YUriy Zhuravlev > > <u.zhurav...@postgrespro.ru> wrote: > > Hello. > > Currently using nodeToString and stringToNode you can not pass a full > > plan. In this regard, what is the plan to fix it? Or in the under task > > parallel query does not have such a problem? > > It's already fixed. See commits > a0d9f6e434bb56f7e5441b7988f3982feead33b3 and > 9f1255ac859364a86264a67729dbd1a36dd63ff2. Ahh. Thanks. And then another question: what do you think about generating equalfuncs.c, copyfuncs.c, readfuncs.c and outfuncs.c from XML or JSON, so that we don't have to change the code in four places every time a node changes? -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
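As a sketch of what such generation could look like, here is a hypothetical Python generator driven by a JSON node description. The JSON schema and the helper names are invented for illustration; only the COPY_*_FIELD macro names come from the existing copyfuncs.c conventions.

```python
import json

# Hypothetical JSON description of one parse node.  The field list here
# mirrors A_Indices for illustration; a real file would be generated
# once from src/include/nodes/*.h.
NODE_DEFS = json.loads("""
{"A_Indices": [
    {"name": "lidx", "kind": "node"},
    {"name": "uidx", "kind": "node"},
    {"name": "lidx_default", "kind": "scalar"}
]}
""")

COPY_MACRO = {"node": "COPY_NODE_FIELD", "scalar": "COPY_SCALAR_FIELD"}

def emit_copy_func(node, fields):
    """Emit a _copyX function body in the style of copyfuncs.c."""
    lines = [f"static {node} *",
             f"_copy{node}(const {node} *from)",
             "{",
             f"\t{node}\t   *newnode = makeNode({node});",
             ""]
    lines += [f"\t{COPY_MACRO[f['kind']]}({f['name']});" for f in fields]
    lines += ["", "\treturn newnode;", "}"]
    return "\n".join(lines)

src = emit_copy_func("A_Indices", NODE_DEFS["A_Indices"])
print(src)
```

The same JSON could drive four emitters (copy, equal, out, read), so adding a field to a node would mean editing one description instead of four .c files.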
Re: [HACKERS] clearing opfuncid vs. parallel query
Hello. Currently you cannot pass a full plan through nodeToString and stringToNode. What is the plan to fix that? Or does the parallel query work not have this problem? > This turns out not to be straightforward to code, because we > don't have a generic plan tree walker, I have an internal development: I use Python to analyze the header files and generate a universal walker (for the parser, path, executor, etc. trees), as well as a serializer and deserializer to jsonb. Maybe I should publish this code? Thanks. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] Some questions about the array.
Hello again. I attached simple patch for omitted boundaries in the slice. This will simplify the writing of SQL. Instead: select arr[2:array_upper(arr, 1)]; you can write: select arr[2:]; simple and elegant. Omitted boundaries is prohibited in UPDATE. Thanks. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Companydiff --git a/doc/src/sgml/array.sgml b/doc/src/sgml/array.sgml index 4385a09..57614b7 100644 --- a/doc/src/sgml/array.sgml +++ b/doc/src/sgml/array.sgml @@ -257,6 +257,25 @@ SELECT schedule[1:2][1:1] FROM sal_emp WHERE name = 'Bill'; (1 row) + You can skip the lower-bound or upper-bound + for get first or last element in slice. + + +SELECT schedule[:][:] FROM sal_emp WHERE name = 'Bill'; + +schedule + + {{meeting,lunch},{training,presentation}} +(1 row) + +SELECT schedule[:2][1:] FROM sal_emp WHERE name = 'Bill'; + +schedule + + {{meeting,lunch},{training,presentation}} +(1 row) + + If any dimension is written as a slice, i.e., contains a colon, then all dimensions are treated as slices. 
Any dimension that has only a single number (no colon) is treated as being from 1 diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c index 29f058c..6643714 100644 --- a/src/backend/executor/execQual.c +++ b/src/backend/executor/execQual.c @@ -268,10 +268,12 @@ ExecEvalArrayRef(ArrayRefExprState *astate, bool eisnull; ListCell *l; int i = 0, -j = 0; +j = 0, +indexexpr; IntArray upper, lower; int *lIndex; + AnyArrayType *arrays; array_source = ExecEvalExpr(astate->refexpr, econtext, @@ -293,6 +295,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate, foreach(l, astate->refupperindexpr) { ExprState *eltstate = (ExprState *) lfirst(l); + eisnull = false; if (i >= MAXDIM) ereport(ERROR, @@ -300,10 +303,23 @@ ExecEvalArrayRef(ArrayRefExprState *astate, errmsg("number of array dimensions (%d) exceeds the maximum allowed (%d)", i + 1, MAXDIM))); - upper.indx[i++] = DatumGetInt32(ExecEvalExpr(eltstate, - econtext, - , - NULL)); + if (eltstate == NULL && astate->refattrlength <= 0) + { + if (isAssignment) +ereport(ERROR, + (errcode(ERRCODE_ARRAY_SUBSCRIPT_ERROR), + errmsg("cannot determine upper index for empty array"))); + arrays = (AnyArrayType *)DatumGetArrayTypeP(array_source); + indexexpr = AARR_LBOUND(arrays)[i] + AARR_DIMS(arrays)[i] - 1; + } + else + indexexpr = DatumGetInt32(ExecEvalExpr(eltstate, + econtext, + , + NULL)); + + upper.indx[i++] = indexexpr; + /* If any index expr yields NULL, result is NULL or error */ if (eisnull) { @@ -321,6 +337,7 @@ ExecEvalArrayRef(ArrayRefExprState *astate, foreach(l, astate->reflowerindexpr) { ExprState *eltstate = (ExprState *) lfirst(l); + eisnull = false; if (j >= MAXDIM) ereport(ERROR, @@ -328,10 +345,20 @@ ExecEvalArrayRef(ArrayRefExprState *astate, errmsg("number of array dimensions (%d) exceeds the maximum allowed (%d)", j + 1, MAXDIM))); - lower.indx[j++] = DatumGetInt32(ExecEvalExpr(eltstate, - econtext, - , - NULL)); + if (eltstate == NULL) + { +arrays = (AnyArrayType 
*)DatumGetArrayTypeP(array_source); +indexexpr = AARR_LBOUND(arrays)[j]; + } + else +indexexpr = DatumGetInt32(ExecEvalExpr(eltstate, + econtext, + , + NULL)); + + lower.indx[j++] = indexexpr; + + /* If any index expr yields NULL, result is NULL or error */ if (eisnull) { diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c index 0b4ab23..6d9cad4 100644 --- a/src/backend/nodes/copyfuncs.c +++ b/src/backend/nodes/copyfuncs.c @@ -2415,6 +2415,8 @@ _copyAIndices(const A_Indices *from) COPY_NODE_FIELD(lidx); COPY_NODE_FIELD(uidx); + COPY_SCALAR_FIELD(lidx_default); + COPY_SCALAR_FIELD(uidx_default); return newnode; } diff --git a/src/backend/nodes/equalfuncs.c b/src/backend/nodes/equalfuncs.c index aa6e102..e75b448 100644 --- a/src/backend/nodes/equalfuncs.c +++ b/src/backend/nodes/equalfuncs.c @@ -2162,6 +2162,8 @@ _equalAIndices(const A_Indices *a, const A_Indices *b) { COMPARE_NODE_FIELD(lidx); COMPARE_NODE_FIELD(uidx); + COMPARE_SCALAR_FIELD(lidx_default); + COMPARE_SCALAR_FIELD(uidx_default); return true; } diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c index df7f6e1..6769740 100644 --- a/src/backend/nodes/outfuncs.c +++ b/src/backend/nodes/outfuncs.c @@ -2756,6 +2756,8 @@ _outA_Indices(StringInfo str, const A_Indices *node) WRITE_NODE_FIELD(lidx); WRITE_NODE_FIELD(uidx); + WRITE_BOOL_FIELD(lidx_default); + WRITE_BOOL_FIELD(uidx_default); } static void
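A minimal Python model of the omitted-boundary slice semantics the patch above adds: a missing bound falls back to array_lower/array_upper of that dimension. The helper slice_1d is a made-up name for illustration and handles only the one-dimensional case.

```python
def slice_1d(arr, lb, lower=None, upper=None):
    """Emulate arr[lower:upper] for a 1-D PostgreSQL array whose
    physical bounds are [lb : lb + len(arr) - 1].  A missing bound
    defaults to array_lower / array_upper, as in the patch."""
    lo = lb if lower is None else lower
    hi = lb + len(arr) - 1 if upper is None else upper
    # SQL slice bounds are inclusive, Python's are half-open
    return arr[lo - lb: hi - lb + 1]

arr = [10, 20, 30, 40, 50]            # physical bounds [1:5]

# arr[2:]  is shorthand for  arr[2:array_upper(arr, 1)]
assert slice_1d(arr, 1, lower=2) == [20, 30, 40, 50]
# arr[:]   returns the whole array
assert slice_1d(arr, 1) == [10, 20, 30, 40, 50]
# arr[:3]  is shorthand for  arr[array_lower(arr, 1):3]
assert slice_1d(arr, 1, upper=3) == [10, 20, 30]
```

This also shows why the patch forbids omitted bounds in UPDATE: there the bounds define the target shape, so nothing sensible exists to default to for an empty array.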
[HACKERS] Some questions about the array.
We have some questions about the behavior of arrays. 1. We would like to implement negative array indices (counting from the end), as in Python or Ruby: arr[-2] or arr[1:-1]. But since an array can already be indexed in the negative range, this probably cannot be done. 2. We would like to allow omitted boundaries in slices, for example arr[2:] or arr[:2]. But there is a problem with updating an empty array: arr[1:][1:] = {1,2,3,4,5,6} can be interpreted as arr[1:3][1:2], arr[1:2][1:3], or arr[1:1][1:6]. What is the history behind arrays like these? Maybe something can be improved? P.S. I would like a List datatype, as in Python. Are there any fundamental objections, or did we just never have the time and enthusiasm? The current implementation I would call vectors or matrices, but not arrays. IMHO -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] No Issue Tracker - Say it Ain't So!
On Wednesday 30 September 2015 14:41:34 you wrote: > On Tue, Sep 29, 2015 at 12:08:56PM +1300, Gavin Flower wrote: > > Linux kernel project uses bugzilla (https://bugzilla.kernel.org) > AIUI this is not mandatory for kernel hackers, and more opt-in from a > some/many/a few(?) subsystem maintainers. Some parts use it more, some > less or not at all. > > > and so does LibreOffice (https://bugs.documentfoundation.org) > > That is true, however. > > Same for freedesktop.org and the Gnome project, I believe. > > > Michael What about Trac? http://trac.edgewall.org/wiki/TracUsers -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] No Issue Tracker - Say it Ain't So!
On Monday 28 September 2015 08:23:46 David Fetter wrote: > They may well be, but until we decide it's worth the switching costs > to move to a totally different way of doing things, that system will > stay in place. Until we decide, we're wasting time. > Neither magic wands nor a green field project are in operation here. Right now any stick would help. IMHO -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] No Issue Tracker - Say it Ain't So!
On Thursday 24 September 2015 12:10:07 Ryan Pedela wrote: > Kam Lasater wrote: > > I'd suggest: Github Issues, Pivotal Tracker or Redmine (probably in > > that order). There are tens to hundreds of other great ones out there, > > I'm sure one of them would also work. > > Why not just use Github issues? I will also vote for GitHub. We have a GitHub mirror now; in a pinch, we could use GitLab. PS: mailing lists are outdated, IMHO. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] Move PinBuffer and UnpinBuffer to atomics
On Tuesday 15 September 2015 04:06:25 Andres Freund wrote: > And here's an actual implementation of that approach. It's definitely > work-in-progress and could easily be optimized further. Don't have any > big machines to play around with right now tho. Thanks. Interesting. We had a version like your patch, but this is only half the work. Example: state = pg_atomic_read_u32(&buf->state); if ((state & BUF_REFCOUNT_MASK) == 0 && (state & BUF_USAGECOUNT_MASK) == 0) After the first statement somebody can change buf->state, so the local copy is no longer current. In that form there is no significant difference between the two patches. To be really correct, the whole IF statement would have to go through CAS. Thanks! Hope for understanding. ^_^ -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
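To illustrate the point about re-checking under CAS, here is a hedged C11 sketch. The bit layout and the function claim_if_free are invented for illustration and do not match PostgreSQL's actual state word; the sketch only shows how compare-and-swap makes the check-then-update atomic, so a concurrent change between the read and the update forces a retry instead of acting on a stale snapshot.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative layout, not PostgreSQL's actual one: low 18 bits hold
 * the refcount, the next 4 bits hold the usage count. */
#define BUF_REFCOUNT_MASK   ((uint32_t) 0x3FFFF)
#define BUF_USAGECOUNT_MASK ((uint32_t) (0xF << 18))
#define BUF_USAGECOUNT_ONE  ((uint32_t) (1 << 18))

/* Pin the buffer only if it is currently unreferenced and unused.
 * The CAS re-validates the condition atomically: if somebody changed
 * the state after our read, the CAS fails, reloads 'old', and the
 * loop re-checks the (now fresh) value. */
static bool
claim_if_free(_Atomic uint32_t *state)
{
    uint32_t old = atomic_load(state);

    while ((old & (BUF_REFCOUNT_MASK | BUF_USAGECOUNT_MASK)) == 0)
    {
        if (atomic_compare_exchange_weak(state, &old,
                                         old + 1 + BUF_USAGECOUNT_ONE))
            return true;        /* pinned: refcount = 1, usage = 1 */
        /* CAS failed: 'old' now holds the current value; re-check */
    }
    return false;               /* somebody else holds the buffer */
}
```

A plain read followed by a non-atomic update would pass the emptiness check and then clobber a concurrent pin; the loop above cannot.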
Re: [HACKERS] Move PinBuffer and UnpinBuffer to atomics
On Saturday 12 September 2015 04:15:43 David Rowley wrote: > I've run this on a single CPU server and don't see any speedup, so I assume > I'm not getting enough contention. > As soon as our 4 socket machine is free I'll try a pgbench run with that. Excellent! We will wait. > Just for fun, what's the results if you use -M prepared ? Unfortunately we cannot check that right now. :( -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] Move PinBuffer and UnpinBuffer to atomics
On Tuesday 15 September 2015 16:50:44 Andres Freund wrote: > No, they can't in a a relevant manner. We hold the buffer header lock. I'm sorry, I did not notice the LockBufHdr. In this form your approach seems very similar to s_lock: the loop in PinBuffer behaves like s_lock. In LockBufHdr: if (pg_atomic_compare_exchange_u32(&buf->state, &state, state | BM_LOCKED)) conflicts with: while (unlikely(state & BM_LOCKED)) in PinBuffer. Thus your patch does not remove the contention on PinBuffer. We will try to check your patch this week. > You're posting > things for review and you seem completely unwilling to actually respond > to points raised. I think we're just talking about different things. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] Move PinBuffer and UnpinBuffer to atomics
> That path is only taken if somebody else has already locked the buffer > (e.g. BufferAlloc()). If you have contention in PinBuffer() your > workload will be mostly cache resident and neither PinBuffer() nor > UnpinBuffer() set BM_LOCKED. Thanks. Now I understand. It might work; we will test it. > your workload A simple pgbench -S on NUMA. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] Scaling PostgreSQL at multicore Power8
On Monday 31 August 2015 17:43:08 Tomas Vondra wrote: > Well, I could test the patch on a x86 machine with 4 sockets (64 cores), > but I wonder whether it makes sense at this point, as the patch really > is not correct (judging by what Andres says). Can you test the patch from this thread: http://www.postgresql.org/message-id/2400449.GjM57CE0Yg@dinodell ? In our view it is correct, although that is not obvious. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] Move PinBuffer and UnpinBuffer to atomics
On Friday 11 September 2015 18:50:35 you wrote: > a) As I said upthread there's a patch to remove these locks entirely That is very interesting; could you provide a link? Though it is still not ideal, since the bottleneck then becomes PinBuffer/UnpinBuffer instead of the LWLocks. > b) It doesn't matter anyway. Not every pin goes through the buffer > mapping table. StrategyGetBuffer(), SyncOneBuffer(), ... StrategyGetBuffer is called only from BufferAlloc. SyncOneBuffer is not a problem either, because: PinBuffer_Locked(bufHdr); LWLockAcquire(bufHdr->content_lock, LW_SHARED); And please read the comment before LockBufHdr(bufHdr) in SyncOneBuffer. We checked all the functions that touch refcount and usage_count. Thanks! ^_^ -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] Move PinBuffer and UnpinBuffer to atomics
On Friday 11 September 2015 18:14:21 Andres Freund wrote: > This way we can leave the for (;;) loop > in BufferAlloc() thinking that the buffer is unused (and can't be further > pinned because of the held spinlock!) We lose the lock after PinBuffer_Locked in BufferAlloc, so in essence nothing has changed. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] Move PinBuffer and UnpinBuffer to atomics
On Friday 11 September 2015 18:37:00 you wrote: > so unless I'm missing something, no, we haven't lost the lock. That section is protected by something like LWLockAcquire(newPartitionLock, LW_EXCLUSIVE) before it (and nobody can get this buffer from the hash table). -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
[HACKERS] Move PinBuffer and UnpinBuffer to atomics
Hello hackers! Continuing the theme: http://www.postgresql.org/message-id/3368228.mTSz6V0Jsq@dinodell This time, we fairly rewrote 'refcount' and 'usage_count' to atomic in PinBuffer and UnpinBuffer (but save lock for buffer flags in Unpin). In the same time it doesn't affect to correctness of buffer manager because that variables already have LWLock on top of them (for partition of hashtable). If someone pinned buffer after the call StrategyGetBuffer we just try again (in BufferAlloc). Also in the code there is one more check before deleting the old buffer, where changes can be rolled back. The other functions where it is checked 'refcount' and 'usage_count' put exclusive locks. Also stress test with 256 KB shared memory ended successfully. Without patch we have 417523 TPS and with patch 965821 TPS for big x86 server. All details here: https://gist.github.com/stalkerg/773a81b79a27b4d5d63f Thank you. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Companydiff --git a/contrib/pg_buffercache/pg_buffercache_pages.c b/contrib/pg_buffercache/pg_buffercache_pages.c index 6622d22..50ca2a5 100644 --- a/contrib/pg_buffercache/pg_buffercache_pages.c +++ b/contrib/pg_buffercache/pg_buffercache_pages.c @@ -33,14 +33,14 @@ typedef struct BlockNumber blocknum; bool isvalid; bool isdirty; - uint16 usagecount; + uint32 usagecount; /* * An int32 is sufficiently large, as MAX_BACKENDS prevents a buffer from * being pinned by too many backends and each backend will only pin once * because of bufmgr.c's PrivateRefCount infrastructure. 
*/ - int32 pinning_backends; + uint32 pinning_backends; } BufferCachePagesRec; @@ -160,8 +160,8 @@ pg_buffercache_pages(PG_FUNCTION_ARGS) fctx->record[i].reldatabase = bufHdr->tag.rnode.dbNode; fctx->record[i].forknum = bufHdr->tag.forkNum; fctx->record[i].blocknum = bufHdr->tag.blockNum; - fctx->record[i].usagecount = bufHdr->usage_count; - fctx->record[i].pinning_backends = bufHdr->refcount; + fctx->record[i].usagecount = pg_atomic_read_u32(>usage_count); + fctx->record[i].pinning_backends = pg_atomic_read_u32(>refcount); if (bufHdr->flags & BM_DIRTY) fctx->record[i].isdirty = true; @@ -236,7 +236,7 @@ pg_buffercache_pages(PG_FUNCTION_ARGS) values[7] = Int16GetDatum(fctx->record[i].usagecount); nulls[7] = false; /* unused for v1.0 callers, but the array is always long enough */ - values[8] = Int32GetDatum(fctx->record[i].pinning_backends); + values[8] = UInt32GetDatum(fctx->record[i].pinning_backends); nulls[8] = false; } diff --git a/src/backend/storage/buffer/buf_init.c b/src/backend/storage/buffer/buf_init.c index 3ae2848..e139a7c 100644 --- a/src/backend/storage/buffer/buf_init.c +++ b/src/backend/storage/buffer/buf_init.c @@ -96,8 +96,8 @@ InitBufferPool(void) CLEAR_BUFFERTAG(buf->tag); buf->flags = 0; - buf->usage_count = 0; - buf->refcount = 0; + pg_atomic_init_u32(>usage_count, 0); + pg_atomic_init_u32(>refcount, 0); buf->wait_backend_pid = 0; SpinLockInit(>buf_hdr_lock); diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c index 8c0358e..afba360 100644 --- a/src/backend/storage/buffer/bufmgr.c +++ b/src/backend/storage/buffer/bufmgr.c @@ -962,7 +962,6 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum, * into the buffer. 
*/ buf = GetBufferDescriptor(buf_id); - valid = PinBuffer(buf, strategy); /* Can release the mapping lock as soon as we've pinned it */ @@ -1013,7 +1012,15 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum, */ buf = StrategyGetBuffer(strategy); - Assert(buf->refcount == 0); + /* + * Ok, we can skip this but then we have to remove new buffer from + * hash table. Better to just try again. + */ + if (pg_atomic_read_u32(>refcount) != 0) + { + UnlockBufHdr(buf); + continue; + } /* Must copy buffer flags while we still hold the spinlock */ oldFlags = buf->flags; @@ -1211,7 +1218,7 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum, * over with a new victim buffer. */ oldFlags = buf->flags; - if (buf->refcount == 1 && !(oldFlags & BM_DIRTY)) + if (pg_atomic_read_u32(>refcount) == 1 && !(oldFlags & BM_DIRTY)) break; UnlockBufHdr(buf); @@ -1234,10 +1241,10 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum, buf->tag = newTag; buf->flags &= ~(BM_VALID | BM_DIRTY | BM_JUST_DIRTIED | BM_CHECKPOINT_NEEDED | BM_IO_ERROR | BM_PERMANENT); if (relpersistence == RELPERSISTENCE_PERMANENT) - buf->flags |= BM_TAG_VALID | BM_PERMANENT; + buf->flags|= BM_TAG_VALID | BM_PERMANENT; else buf->flags |= BM_TAG_VALID; - buf->usage_count = 1; + pg_atomic_write_u3
[HACKERS] Fix small bug for build without HAVE_SYMLINK
Hello hackers. During my work on CMake I stumbled upon this simple bug. I think it is important for MinGW users: the function tries to return a variable that has not been declared. Patch attached. Probably relevant for the stable releases too. Thanks! -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company diff --git a/src/backend/commands/tablespace.c b/src/backend/commands/tablespace.c index ff0d904..eba9c6a 100644 --- a/src/backend/commands/tablespace.c +++ b/src/backend/commands/tablespace.c @@ -383,13 +383,15 @@ CreateTableSpace(CreateTableSpaceStmt *stmt) /* We keep the lock on pg_tablespace until commit */ heap_close(rel, NoLock); + + return tablespaceoid; #else /* !HAVE_SYMLINK */ ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("tablespaces are not supported on this platform"))); -#endif /* HAVE_SYMLINK */ - return tablespaceoid; + return InvalidOid; +#endif /* HAVE_SYMLINK */ } /*
Re: [HACKERS] WIP: About CMake v2
On Tuesday 01 September 2015 11:46:05 you wrote: > I would actually suggest that the cmake conversion would be better off > to ignore src/tools/msvc altogether to begin with. Building postgres with MSVC is an important task for me, but Linux comes first, of course. > A separate cmake build system would certainly > require maintenance *every* time we touch the Makefiles. I can maintain the CMake build in a separate git branch on GitHub. Once we (and users) see that all is well, we can think about when and how to switch from GNU Make to CMake. -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] Scaling PostgreSQL at multicore Power8
On Monday 31 August 2015 17:54:17 Tomas Vondra wrote:
> So does this mean it's worth testing the patch on x86
> or not, in it's current state?

It's really interesting. But you need a true 64 cores without HT (32 cores + HT shows no effect).

-- 
YUriy Zhuravlev
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Re: [HACKERS] Scaling PostgreSQL at multicore Power8
On Monday 31 August 2015 17:48:50 Andres Freund wrote:
> Additionally it's, for default pgbench, really mostly a bottlneck after
> GetSnapshotData() is fixed. You can make it a problem much earlier if
> you have index nested loops over a lot of rows.

Is 100 000 000 a lot? A simple select query like pgbench's is a common workload, not for everyone, but...

-- 
YUriy Zhuravlev
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
[HACKERS] Scaling PostgreSQL at multicore Power8
Hello hackers,

Recently we were given access to an IBM test server, a 9119-MHE with 8 CPUs * 8 cores * 8 threads. We decided to take advantage of this to find bottlenecks for read scalability (pgbench -S). All the details are here: http://www.postgrespro.ru/blog/pgsql/2015/08/30/p8scaling

Performance of 9.4 stopped growing after 100 clients, and 9.5/9.6 stopped after 150 (at 4 NUMA nodes). Profiling with perf showed that ProcArrayLock in GetSnapshotData was the inhibitor, but inserting a stub in place of GetSnapshotData did not significantly increase scalability. Trying to find the bottleneck with gdb, we found another place: s_lock in PinBuffer and UnpinBuffer. For a test we rewrote PinBuffer and UnpinBuffer using atomic operations, and we liked the result: the performance degradation almost completely disappeared, and scaling continued up to 400 clients (4 NUMA nodes with 256 "CPUs").

To scale Postgres on large NUMA machines, bufmgr must be ported to atomic operations. During our tests we found no errors in this patch, but most likely that is not true, and the patch is for testing only. Who has any thoughts?
-- 
YUriy Zhuravlev
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

diff --git a/src/backend/storage/buffer/buf_init.c b/src/backend/storage/buffer/buf_init.c
index 3ae2848..5fdaca7 100644
--- a/src/backend/storage/buffer/buf_init.c
+++ b/src/backend/storage/buffer/buf_init.c
@@ -95,9 +95,9 @@ InitBufferPool(void)
 		BufferDesc *buf = GetBufferDescriptor(i);

 		CLEAR_BUFFERTAG(buf->tag);
-		buf->flags = 0;
-		buf->usage_count = 0;
-		buf->refcount = 0;
+		buf->flags.value = 0;
+		buf->usage_count.value = 0;
+		buf->refcount.value = 0;
 		buf->wait_backend_pid = 0;

 		SpinLockInit(&buf->buf_hdr_lock);
diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c
index cd3aaad..8cf97cb 100644
--- a/src/backend/storage/buffer/bufmgr.c
+++ b/src/backend/storage/buffer/bufmgr.c
@@ -714,8 +714,8 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
 		if (isLocalBuf)
 		{
 			/* Only need to adjust flags */
-			Assert(bufHdr->flags & BM_VALID);
-			bufHdr->flags &= ~BM_VALID;
+			Assert(bufHdr->flags.value & BM_VALID);
+			bufHdr->flags.value &= ~BM_VALID;
 		}
 		else
 		{
@@ -727,8 +727,8 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
 			do
 			{
 				LockBufHdr(bufHdr);
-				Assert(bufHdr->flags & BM_VALID);
-				bufHdr->flags &= ~BM_VALID;
+				Assert(bufHdr->flags.value & BM_VALID);
+				bufHdr->flags.value &= ~BM_VALID;
 				UnlockBufHdr(bufHdr);
 			} while (!StartBufferIO(bufHdr, true));
 		}
@@ -746,7 +746,7 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
 	 * it's not been recycled) but come right back here to try smgrextend
 	 * again.
 	 */
-	Assert(!(bufHdr->flags & BM_VALID));		/* spinlock not needed */
+	Assert(!(bufHdr->flags.value & BM_VALID));	/* spinlock not needed */

 	bufBlock = isLocalBuf ? LocalBufHdrGetBlock(bufHdr) : BufHdrGetBlock(bufHdr);
@@ -824,7 +824,7 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
 	if (isLocalBuf)
 	{
 		/* Only need to adjust flags */
-		bufHdr->flags |= BM_VALID;
+		bufHdr->flags.value |= BM_VALID;
 	}
 	else
 	{
@@ -952,10 +952,10 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
 		 */
 		buf = StrategyGetBuffer(strategy);

-		Assert(buf->refcount == 0);
+		Assert(buf->refcount.value == 0);

 		/* Must copy buffer flags while we still hold the spinlock */
-		oldFlags = buf->flags;
+		oldFlags = buf->flags.value;

 		/* Pin the buffer and then release the buffer spinlock */
 		PinBuffer_Locked(buf);
@@ -1149,8 +1149,8 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
 		 * recycle this buffer; we must undo everything we've done and start
 		 * over with a new victim buffer.
 		 */
-		oldFlags = buf->flags;
-		if (buf->refcount == 1 && !(oldFlags & BM_DIRTY))
+		oldFlags = buf->flags.value;
+		if (buf->refcount.value == 1 && !(oldFlags & BM_DIRTY))
 			break;

 		UnlockBufHdr(buf);
@@ -1171,12 +1171,12 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
 	 * 1 so that the buffer can survive one clock-sweep pass.)
 	 */
 	buf->tag = newTag;
-	buf->flags &= ~(BM_VALID | BM_DIRTY | BM_JUST_DIRTIED | BM_CHECKPOINT_NEEDED | BM_IO_ERROR | BM_PERMANENT);
+	buf->flags.value &= ~(BM_VALID | BM_DIRTY | BM_JUST_DIRTIED | BM_CHECKPOINT_NEEDED | BM_IO_ERROR | BM_PERMANENT);
 	if (relpersistence == RELPERSISTENCE_PERMANENT)
-		buf->flags |= BM_TAG_VALID | BM_PERMANENT;
+		buf->flags.value |= BM_TAG_VALID | BM_PERMANENT;
 	else
-		buf->flags |= BM_TAG_VALID;
-	buf->usage_count = 1;
+		buf
Re: [HACKERS] Scaling PostgreSQL at multicore Power8
On Monday 31 August 2015 13:03:07 you wrote:
> That's definitely not correct, you should initialize the atomics using
> pg_atomic_init_u32() and write to by using pg_atomic_write_u32() - not
> access them directly. This breaks the fallback paths.

You're right. For now it is only there to silence the compiler. This patch is a concept only.

-- 
YUriy Zhuravlev
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Re: [HACKERS] WIP: About CMake v2
Thanks, all hackers. I have not heard of any fundamental problems, so I will continue development. :)

-- 
YUriy Zhuravlev
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Re: [HACKERS] WIP: About CMake v2
On Friday 28 August 2015 13:51:30 you wrote:
> It's broadly interesting, but since it bakes in a build dependency on
> CMake, there is some risk that the dependencies become an insurmountable
> problem. (Does CMake run on a VAX 11/780?? :-))

http://public.kitware.com/Bug/view.php?id=13605 - do you mean this?

> It is probably worth a try, to see what improvements arise, albeit with
> the need to accept some risk of refusal of the change. The experiment is
> most likely necessary: we won't know the benefits without trying.

You're right.

> If the results represent little improvement, there will be little or no
> appetite to jump through the dependency hoops needed to get the change
> accepted. On the other hand, if there are big gains, that encourages
> pushing thru the dependency issues.
>
> On Aug 28, 2015 10:45, YUriy Zhuravlev u.zhurav...@postgrespro.ru wrote:
> > Hello Hackers
> >
> > How would you react if I provided a patch which introduces a CMake
> > build system? Old thread:
> > http://www.postgresql.org/message-id/200812291325.13354.pete...@gmx.net
> >
> > The main argument against it is that it's too hard. Am I right?
> > Thanks!

-- 
YUriy Zhuravlev
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
[HACKERS] WIP: About CMake v2
Hello Hackers,

How would you react if I provided a patch which introduces a CMake build system? Old thread: http://www.postgresql.org/message-id/200812291325.13354.pete...@gmx.net

The main argument against it is that it's too hard. Am I right? Thanks!

-- 
YUriy Zhuravlev
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
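For readers unfamiliar with the proposal, here is a minimal hypothetical CMakeLists.txt sketch, not taken from any real PostgreSQL branch, showing the style CMake uses in place of configure + Makefile, including a feature probe analogous to autoconf's symlink check (the project and target names are invented for illustration):

```cmake
cmake_minimum_required(VERSION 3.5)
project(pg_sketch C)

# Feature test analogous to autoconf's AC_CHECK_FUNCS(symlink):
# sets HAVE_SYMLINK if the symbol is declared in unistd.h.
include(CheckSymbolExists)
check_symbol_exists(symlink "unistd.h" HAVE_SYMLINK)

add_executable(pg_sketch main.c)
if(HAVE_SYMLINK)
    target_compile_definitions(pg_sketch PRIVATE HAVE_SYMLINK=1)
endif()
```

The same file drives Unix Makefiles, Ninja, MSVC, and Xcode generators, which is the "meta build system" point made elsewhere in the thread.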