Re: Protect syscache from bloating with negative cache entries
On 2017-12-16 22:25:48 -0500, Robert Haas wrote:
> On Wed, Dec 13, 2017 at 11:20 PM, Kyotaro HORIGUCHI wrote:
> > Thank you very much for the valuable suggestions. I still would
> > like to solve this problem and the
> > a-counter-freely-running-in-minute(or several seconds)-resolution
> > and pruning-too-long-unaccessed-entries-on-resizing seems to me
> > to work enough for at least several known bloat cases. This still
> > has a defect that this is not workable for a very quick
> > bloating. I'll try thinking about the remaining issue.
>
> I'm not sure we should regard very quick bloating as a problem in need
> of solving. Doesn't that just mean we need the cache to be bigger, at
> least temporarily?

Leaving that aside, isn't that actually solved, at least to a good degree, by that approach? By bumping the generation on hash resize, we have recency information we can take into account.

Greetings,

Andres Freund
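The generation-on-resize idea can be illustrated with a toy cache (hypothetical names and sizes; this is a sketch of the idea under discussion, not the actual syscache patch):

```c
#include <assert.h>

/*
 * Hypothetical sketch of the generation idea: each entry remembers the
 * generation in which it was last accessed; the generation counter is
 * bumped where the hash table would otherwise be resized, and entries
 * untouched for several generations are pruned instead of growing the
 * table.  All names and sizes here are invented for illustration.
 */
#define NENTRIES	8
#define PRUNE_AGE	2			/* generations of disuse before eviction */

typedef struct
{
	int			key;			/* 0 means empty slot */
	unsigned	last_access;	/* generation of last access */
} CacheEntry;

static CacheEntry cache[NENTRIES];
static unsigned generation = 0;

/* Record an access, inserting the key if it is not present yet. */
static void
touch_entry(int key)
{
	for (int i = 0; i < NENTRIES; i++)
	{
		if (cache[i].key == key || cache[i].key == 0)
		{
			cache[i].key = key;
			cache[i].last_access = generation;
			return;
		}
	}
}

/* Called where the table would otherwise be enlarged; returns evictions. */
static int
bump_generation_and_prune(void)
{
	int			pruned = 0;

	generation++;
	for (int i = 0; i < NENTRIES; i++)
	{
		if (cache[i].key != 0 &&
			generation - cache[i].last_access >= PRUNE_AGE)
		{
			cache[i].key = 0;	/* evict long-unaccessed entry */
			pruned++;
		}
	}
	return pruned;
}
```

The recency information Andres mentions is exactly the `last_access` generation: on resize, entries that have not been touched recently are candidates for pruning instead of forcing the table to grow.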
Re: Race to build pg_isolation_regress in "make -j check-world"
On Mon, Nov 06, 2017 at 12:07:52AM -0800, Noah Misch wrote:
> I've been enjoying the speed of parallel check-world, but I get spurious
> failures from makefile race conditions. Commit c66b438 fixed the simple ones.
> More tricky is this problem of multiple "make" processes entering
> src/test/regress concurrently, which causes failures like these:
>
> gcc: error: pg_regress.o: No such file or directory
> make[4]: *** [pg_isolation_regress] Error 1
>
> /bin/sh: ../../../src/test/isolation/pg_isolation_regress: Permission denied
> make -C test_extensions check
> make[2]: *** [check] Error 126
> make[2]: Leaving directory `/home/nm/src/pg/backbranch/10/src/test/isolation'
>
> /bin/sh: ../../../../src/test/isolation/pg_isolation_regress: Text file busy
> make[3]: *** [isolationcheck] Error 126
> make[3]: Leaving directory `/home/nm/src/pg/backbranch/10/src/test/modules/snapshot_too_old'
>
> This is reproducible since commit 2038bf4 or earlier; "make -j check-world"
> had worse problems before that era. A workaround is to issue "make -j; make
> -j -C src/test/isolation" before the check-world.

Commit de0aca6 fixed that problem, but I now see similar trouble from multiple "make" processes running "make -C contrib/test_decoding install" concurrently. This is a risk for any directory named in an EXTRA_INSTALL variable of more than one makefile. Under the right circumstances, this would affect contrib/hstore and others in addition to contrib/test_decoding. That brings me back to the locking idea:

> The problem of multiple "make" processes in a directory (especially src/port)
> shows up elsewhere. In a cleaned tree, "make -j -C src/bin" or "make -j
> installcheck-world" will do it. For more-prominent use cases, src/Makefile
> prevents this with ".NOTPARALLEL:" and building first the directories that
> are frequent submake targets. Perhaps we could fix the general problem with
> directory locking; targets that call "$(MAKE) -C FOO" would first sleep until
> FOO's lock is available.

That could be tricky to make robust. If one is willing to assume that a lock-holding process never crashes, locking in a shell script is simple: mkdir to lock, rmdir to unlock. I don't want to assume that. The bakery algorithm provides convenient opportunities for checking whether the last locker crashed; I have attached a shell script demonstrating this approach. Better ideas? Otherwise, I'll look into integrating this design into the makefiles.

Thanks,
nm

Attachment: bakery.sh (Bourne shell script)
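For illustration, the simple non-crash-safe scheme mentioned above (mkdir to lock, rmdir to unlock) could look like this in C. The function names are invented for this sketch; the attached bakery.sh addresses the crash-recovery problem this version deliberately ignores:

```c
#include <assert.h>
#include <errno.h>
#include <sys/stat.h>
#include <unistd.h>

/*
 * Minimal sketch of mkdir/rmdir locking: mkdir() is atomic, so whichever
 * process creates the directory first owns the lock.  As noted in the
 * mail, this is NOT crash-safe: if the holder dies, the directory stays
 * behind and every other process waits until it gives up.
 */
static int
acquire_dir_lock(const char *lockdir, int max_tries)
{
	while (max_tries-- > 0)
	{
		if (mkdir(lockdir, 0700) == 0)
			return 0;			/* we now hold the lock */
		if (errno != EEXIST)
			return -1;			/* unexpected filesystem error */
		usleep(100000);			/* held by someone else; retry shortly */
	}
	return -1;					/* gave up waiting */
}

static void
release_dir_lock(const char *lockdir)
{
	rmdir(lockdir);				/* unlock by removing the directory */
}
```

The bakery approach in the attachment improves on this by giving waiters a chance to detect that the previous locker crashed, which the bare mkdir scheme cannot do.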
Re: [HACKERS] Custom compression methods
On Thu, Dec 14, 2017 at 12:23 PM, Tomas Vondra wrote:
> Can you give an example of such an algorithm? Because I haven't seen such
> an example, and I find arguments based on hypothetical compression methods
> somewhat suspicious.
>
> FWIW I'm not against considering such compression methods, but OTOH it
> may not be such a great primary use case to drive the overall design.

Well, it isn't, really. I am honestly not sure what we're arguing about at this point. I think you've agreed that (1) opening avenues for extensibility is useful, (2) substituting a general-purpose compression algorithm could be useful, and (3) having datatype compression that is enabled through TOAST rather than built into the datatype might sometimes be desirable. That's more than adequate justification for this proposal, whether half-general compression methods exist or not. I am prepared to concede that there may be no useful examples of such a thing.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Moving relation extension locks out of heavyweight lock manager
On Thu, Dec 14, 2017 at 5:45 AM, Masahiko Sawada wrote:
> Here is the result.
> I've measured the throughput for some cases on my virtual machine.
> Each client loads a 48k file into a different relation located on
> either an xfs or an ext4 filesystem, for 30 sec.
>
> Case 1: COPYs to relations on different filesystems (xfs and ext4) and
> N_RELEXTLOCK_ENTS is 1024
>
> clients = 2, avg = 296.2068
> clients = 5, avg = 372.0707
> clients = 10, avg = 389.8850
> clients = 50, avg = 428.8050
>
> Case 2: COPYs to relations on different filesystems (xfs and ext4) and
> N_RELEXTLOCK_ENTS is 1
>
> clients = 2, avg = 294.3633
> clients = 5, avg = 358.9364
> clients = 10, avg = 383.6945
> clients = 50, avg = 424.3687
>
> And the result of current HEAD is the following:
>
> clients = 2, avg = 284.9976
> clients = 5, avg = 356.1726
> clients = 10, avg = 375.9856
> clients = 50, avg = 429.5745
>
> In case 2, the throughput decreased compared to case 1, but it seems
> to be almost the same as current HEAD. Because the speed of acquiring and
> releasing the extension lock is about 10x faster than current HEAD, as I
> mentioned before, the performance degradation may be smaller than I
> expected even in case 2.
> Since my machine doesn't have enough resources, the result for clients =
> 50 might not be valid.

I have to admit that result is surprising to me.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
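As background for the N_RELEXTLOCK_ENTS numbers above: the patch under discussion keeps relation extension locks in a fixed-size array and hashes each relation to a slot, so with a single entry every relation shares one lock. A toy sketch of that slot mapping (invented names and hash; this is not the actual patch code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical sketch of what an N_RELEXTLOCK_ENTS setting means: a
 * relation's OID is hashed into a fixed-size array of extension locks.
 * With 1024 entries, unrelated relations rarely collide; with 1 entry,
 * every COPY contends for the same lock, which is the worst case being
 * benchmarked above.
 */
#define N_RELEXTLOCK_ENTS 1024

typedef uint32_t Oid;

static inline uint32_t
relextlock_slot(Oid relid)
{
	/* cheap multiplicative hash; the real code uses its own hash function */
	return (relid * 2654435761u) % N_RELEXTLOCK_ENTS;
}
```

The surprising part of the result is that even the all-collisions case (one entry) stays close to HEAD, presumably because each individual acquire/release became so much cheaper.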
Re: Protect syscache from bloating with negative cache entries
On Wed, Dec 13, 2017 at 11:20 PM, Kyotaro HORIGUCHI wrote:
> Thank you very much for the valuable suggestions. I still would
> like to solve this problem and the
> a-counter-freely-running-in-minute(or several seconds)-resolution
> and pruning-too-long-unaccessed-entries-on-resizing seems to me
> to work enough for at least several known bloat cases. This still
> has a defect that this is not workable for a very quick
> bloating. I'll try thinking about the remaining issue.

I'm not sure we should regard very quick bloating as a problem in need of solving. Doesn't that just mean we need the cache to be bigger, at least temporarily?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: pg_(total_)relation_size and partitioned tables
On Thu, Dec 14, 2017 at 12:23 AM, Amit Langote wrote:
> You may have guessed from $subject that the two don't work together.

It works exactly as documented:

    pg_total_relation_size(regclass) - Total disk space used by the
    specified table, including all indexes and TOAST data

It says nothing about including partitions. If we change this, then we certainly need to update the documentation (which might be a good idea even if we decide not to change the behavior).

Personally, I'm -1 on including partitions, because then you can no longer expect that the sum of pg_total_relation_size(regclass) across all relations in the database will equal the size of the database itself. Partitions would be counted a number of times equal to their depth in the partitioning hierarchy. However, I understand that I might get outvoted.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Proposal: Local indexes for partitioned table
On Fri, Dec 15, 2017 at 5:18 PM, Alvaro Herrera wrote:
> We have two options for marking valid:
>
> 1. after each ALTER INDEX ATTACH, verify whether the set of partitions
> that contain the index is complete; if so, mark it valid, otherwise do
> nothing. This sucks because we have to check that over and over for
> every index that we attach
>
> 2. We invent yet another command, say
> ALTER INDEX VALIDATE

If ALTER INDEX .. ATTACH is already taking AEL on the parent, then I think it might as well try to validate while it's at it. But if not, then we might want to go with #2.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: Package version in PG_VERSION and version()
On Fri, Dec 15, 2017 at 10:23 AM, Tom Lane wrote:
> Christoph Berg writes:
>> Re: Michael Paquier 2017-12-15
>>> Why reinventing the wheel when there is already --with-extra-version
>>> that you can use for the same purpose?
>
>> That modifies the PG version number as such, as what psql is showing
>> on connect. I'd think that is too intrusive.
>
> I'm really pretty much -1 on having two different ways to do very nearly
> the same thing, with the differences determined only by somebody's
> arbitrary choices of where they think the modified version should be
> exposed. IMO, either you think the Debian package version is important
> enough to show, or you don't. (I'd incline to the "don't" side anyway.)

Unfortunately, actually modifying the main version number breaks large numbers of tools and drivers that think they know what a PostgreSQL version number looks like, as many people who work for my employer can testify from personal experience with a piece of software that displays a non-default version number. I think --with-extra-version is therefore badly designed and probably mostly useless in its current form, and as Christoph's example shows, it's not really adapted for the kind of string he wants to add. I don't really care whether we leave --with-extra-version as-is and add something else for the kind of thing Christoph wants to do, or whether we add a different thing that does what he wants to do, but I think it's a very good idea to provide something along the lines of what he wants.

In short, "the version number is important enough to show" != "the version number is important enough to break compatibility with large numbers of tools and drivers".

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: Why does array_position_common bitwise NOT an Oid type?
David Rowley writes:
> I was puzzled to see the following code:
>
>     my_extra->element_type = ~element_type;
>
> It looks quite wrong, but if it's right then I think it needs a comment
> to explain it. I don't see any in the area which mentions it. My best
> guess would be that it's using this to know if the type data has been
> cached, but then why would it not use InvalidOid for that?

If memory serves, the idea was to force the subsequent datatype-lookup path to be taken, even if for some reason element_type is InvalidOid. If we take the lookup path then the bogus element_type will be detected and reported; if we don't, it won't be. We could instead add an explicit test for element_type == InvalidOid, but that's just more duplicative code.

regards, tom lane
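Tom's explanation can be sketched as follows (invented names; this is not the real array_position_common code). The key property is that ~element_type can never equal element_type, so the first call always takes the lookup path, even when element_type is a bogus InvalidOid, which the lookup path can then detect and report. Priming the cache with InvalidOid instead would make an InvalidOid argument look already-cached and skip the check:

```c
#include <assert.h>

typedef unsigned int Oid;
#define InvalidOid ((Oid) 0)

/* Hypothetical stand-in for the fn_extra caching struct. */
typedef struct
{
	Oid			element_type;	/* type whose data is currently cached */
	int			lookups;		/* counts real lookups, for illustration */
} my_extra_t;

static void
init_cache(my_extra_t *extra, Oid element_type)
{
	/* ~element_type != element_type always, so the first call looks up */
	extra->element_type = ~element_type;
	extra->lookups = 0;
}

static void
get_cached_type_data(my_extra_t *extra, Oid element_type)
{
	if (extra->element_type != element_type)
	{
		/* the real lookup path would error out here on InvalidOid */
		extra->lookups++;
		extra->element_type = element_type;
	}
}
```

With this initialization, a bogus InvalidOid argument still reaches the lookup path on the first call, which is exactly where it would be detected and reported.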
Why does array_position_common bitwise NOT an Oid type?
Hi,

I was puzzled to see the following code:

    my_extra->element_type = ~element_type;

It looks quite wrong, but if it's right then I think it needs a comment to explain it. I don't see any in the area which mentions it. My best guess would be that it's using this to know if the type data has been cached, but then why would it not use InvalidOid for that?

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [sqlsmith] Parallel worker executor crash on master
Thomas Munro writes:
> On Sat, Dec 16, 2017 at 10:13 PM, Andreas Seltenreich wrote:
>> Core was generated by `postgres: smith regression [local] SELECT'.
>> Program terminated with signal SIGSEGV, Segmentation fault.
>> #0  gather_getnext (gatherstate=0x555a5fff1350) at nodeGather.c:283
>> 283             estate->es_query_dsa = gatherstate->pei->area;
>> #1  ExecGather (pstate=0x555a5fff1350) at nodeGather.c:216
>
> Hmm, thanks. That's not good. Do we know if gatherstate->pei is
> NULL, or if it's somehow pointing to garbage?

It was NULL on all the coredumps I looked into. Below[1] is a full gatherstate.

> Not sure how either of those things could happen, since we only set it
> to NULL in ExecShutdownGather() after which point we shouldn't call
> ExecGather() again, and any MemoryContext problems with pei should
> have caused problems already without this patch (for example in
> ExecParallelCleanup). Clearly I'm missing something.

FWIW, all backtraces collected so far are identical for the first nine frames. After ExecProjectSet, they are pretty random executor innards.

#1  ExecGather at nodeGather.c:216
#2  0x555bc9fb41ea in ExecProcNode at ../../../src/include/executor/executor.h:242
#3  ExecutePlan at execMain.c:1718
#4  standard_ExecutorRun at execMain.c:361
#5  0x555bc9fc07cc in postquel_getnext at functions.c:865
#6  fmgr_sql (fcinfo=0x555bcba07748) at functions.c:1161
#7  0x555bc9fbc4f7 in ExecMakeFunctionResultSet at execSRF.c:604
#8  0x555bc9fd7cbb in ExecProjectSRF at nodeProjectSet.c:175
#9  0x560828dc8df5 in ExecProjectSet at nodeProjectSet.c:105

regards,
Andreas

Footnotes:
[1]  (gdb) p *gatherstate
$3 = {
  ps = {
    type = T_GatherState,
    plan = 0x555bcb9faf30,
    state = 0x555bcba3d098,
    ExecProcNode = 0x555bc9fc9e30,
    ExecProcNodeReal = 0x555bc9fc9e30,
    instrument = 0x0,
    worker_instrument = 0x0,
    qual = 0x0,
    lefttree = 0x555bcba3d678,
    righttree = 0x0,
    initPlan = 0x0,
    subPlan = 0x0,
    chgParam = 0x0,
    ps_ResultTupleSlot = 0x555bcba3d5b8,
    ps_ExprContext = 0x555bcba3d3c8,
    ps_ProjInfo = 0x0
  },
  initialized = 1 '\001',
  need_to_scan_locally = 1 '\001',
  tuples_needed = -1,
  funnel_slot = 0x555bcba3d4c0,
  pei = 0x0,
  nworkers_launched = 0,
  nreaders = 0,
  nextreader = 0,
  reader = 0x0
}
Re: [sqlsmith] Parallel worker executor crash on master
On Sat, Dec 16, 2017 at 10:13 PM, Andreas Seltenreich wrote:
> Amit Kapila writes:
>
>> This seems to be another symptom of the problem related to
>> es_query_dsa for which Thomas has sent a patch on a different thread
>> [1]. After applying that patch, I am not able to see the problem. I
>> think due to the wrong usage of dsa across nodes, it can lead to
>> sending some wrong values for params to workers.
>>
>> [1] - https://www.postgresql.org/message-id/CAEepm%3D0Mv9BigJPpribGQhnHqVGYo2%2BkmzekGUVJJc9Y_ZVaYA%40mail.gmail.com
>
> while my posted recipe is indeed inconspicuous with the patch applied,
> it seems to have made matters worse from the sqlsmith perspective:
> instead of one core dump per hour I get one per minute. Sample
> backtrace below. I could not find a recipe yet to reproduce these
> (beyond starting sqlsmith).
>
> Core was generated by `postgres: smith regression [local] SELECT'.
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0  gather_getnext (gatherstate=0x555a5fff1350) at nodeGather.c:283
> 283             estate->es_query_dsa = gatherstate->pei->area;
> #1  ExecGather (pstate=0x555a5fff1350) at nodeGather.c:216

Hmm, thanks. That's not good. Do we know if gatherstate->pei is NULL, or if it's somehow pointing to garbage? Not sure how either of those things could happen, since we only set it to NULL in ExecShutdownGather(), after which point we shouldn't call ExecGather() again, and any MemoryContext problems with pei should have caused problems already without this patch (for example in ExecParallelCleanup). Clearly I'm missing something.

--
Thomas Munro
http://www.enterprisedb.com
Re: Reproducible builds: genbki.pl vs schemapg.h
Christoph Berg writes:
>> Agreed so far as the script name goes. However, two out of three of these
>> scripts also print their input file names, and I'm suspicious that that
>> output is also gonna change in a VPATH build. I'm a little less inclined
>> to buy the claim that we're not losing anything if we suppress that :-(

> Well, patching this instance of $0 would fix a binary-package
> variation in practice. Of course there might be more issues waiting to
> come into effect, but I don't see why that would be an argument
> against fixing the current issue.

I think we're talking at cross-purposes. I'm not saying we should not fix this problem. I'm saying that the proposed fix appears incomplete, which means that (a) even if it solves your problem, it probably does not solve related problems for other people; (b) since it's not clear why this patch is apparently sufficient for you, I'd like to understand that in some detail before deeming the problem solved; and (c) leaving instances of the problematic code in our tree is just about guaranteed to mean you'll have the same problem in future, when somebody either copies that coding pattern into some new script or tweaks the way those existing scripts are being used.

regards, tom lane
Re: Reproducible builds: genbki.pl vs schemapg.h
Re: Tom Lane 2017-12-16 <5525.1513381...@sss.pgh.pa.us>
>>> As per
>>> https://tests.reproducible-builds.org/debian/rb-pkg/unstable/amd64/postgresql-10.html,
>>> that's the only place that makes it into the resulting binary.
>
> I'm fairly confused by this claim. Since the string in question is in a
> comment, it really shouldn't affect built binaries at all. I can believe
> that it would affect the non-binary contents of the finished package,

"Binary" in that context was the .deb package file (in contrast to the .dsc source package).

> In my build, neither one of these files contains any path information;
> I speculate that you need to use a VPATH build to have an issue, or
> maybe Debian's build environment does something even weirder.

This is a VPATH build, yes.

>> It's not like $0 instead of a hardcoded name in the header actually buys
>> us anything afaict.
>
> Agreed so far as the script name goes. However, two out of three of these
> scripts also print their input file names, and I'm suspicious that that
> output is also gonna change in a VPATH build. I'm a little less inclined
> to buy the claim that we're not losing anything if we suppress that :-(

Well, patching this instance of $0 would fix a binary-package variation in practice. Of course there might be more issues waiting to come into effect, but I don't see why that would be an argument against fixing the current issue.

Christoph
--
Senior Berater, Tel.: +49 2166 9901 187
credativ GmbH, HRB Mönchengladbach 12080, USt-ID-Nummer: DE204566209
Trompeterallee 108, 41189 Mönchengladbach
Geschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer
pgp fingerprint: 5C48 FE61 57F4 9179 5970 87C6 4C5A 6BAB 12D2 A7AE
Re: genomic locus
On 12/15/2017 05:50 PM, Michael Paquier wrote:
>> I have seen a lot of bit rot in other extensions (never contributed) that I
>> have not maintained since 2009 and now I am unable to fix some of them, so
>> I wonder how much of old knowledge is still applicable. In other words, is
>> what I see in new code just a change of macros or a change of principles?
>
> APIs in Postgres are usually stable. You should be able to update your
> own extensions. If you want to discuss a couple of things in
> particular, don't hesitate!

I keep most of the out-of-tree extensions I maintain green by building and testing them in a buildfarm member. That way I become aware pretty quickly if an API change has broken them, as happened just the other day in fact. To do this requires writing a small perl package. There are three examples in the buildfarm client sources at https://github.com/PGBuildFarm/client-code/tree/master/PGBuild/Modules and one of these is included in the buildfarm client releases.

cheers

andrew

--
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: Backfill bgworker Extension?
On 12/15/17 23:50, Jeremy Finzel wrote:
> The common ground is some column in some table needs to be bulk updated.
> I may not be explaining well, but in our environment we have done
> hundreds of these using a generic framework to build a backfill. So I'm
> not sure what you are questioning about the need? We have had to build a
> worker to accomplish this because it can't be done as a sql script alone.

I'm trying to identify the independently useful pieces in your use case. A background worker to backfill large tables is a very specific use case. If instead we had a job/scheduler mechanism and a way to have server-side scripts that can control transactions, then that might satisfy your requirements as well (I'm not sure), but it would also potentially address many other uses.

> I'm not sure what you mean by a stored procedure in the background.
> Since it would not be a single transaction, it doesn't fit as a stored
> procedure, at least in Postgres when a function is 1 transaction.

In progress: https://commitfest.postgresql.org/16/1360/

--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [sqlsmith] Parallel worker executor crash on master
Amit Kapila writes:
> This seems to be another symptom of the problem related to
> es_query_dsa for which Thomas has sent a patch on a different thread
> [1]. After applying that patch, I am not able to see the problem. I
> think due to the wrong usage of dsa across nodes, it can lead to
> sending some wrong values for params to workers.
>
> [1] - https://www.postgresql.org/message-id/CAEepm%3D0Mv9BigJPpribGQhnHqVGYo2%2BkmzekGUVJJc9Y_ZVaYA%40mail.gmail.com

While my posted recipe is indeed inconspicuous with the patch applied, it seems to have made matters worse from the sqlsmith perspective: instead of one core dump per hour I get one per minute. Sample backtrace below. I could not find a recipe yet to reproduce these (beyond starting sqlsmith).

regards,
Andreas

Core was generated by `postgres: smith regression [local] SELECT'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  gather_getnext (gatherstate=0x555a5fff1350) at nodeGather.c:283
283             estate->es_query_dsa = gatherstate->pei->area;
#1  ExecGather (pstate=0x555a5fff1350) at nodeGather.c:216
#2  0x555a5d51a1ea in ExecProcNode (node=0x555a5fff1350) at ../../../src/include/executor/executor.h:242
#3  ExecutePlan (execute_once=, dest=0x555a604f78a0, direction=, numberTuples=1, sendTuples=, operation=CMD_SELECT, use_parallel_mode=, planstate=0x555a5fff1350, estate=0x555a5fff1138) at execMain.c:1718
#4  standard_ExecutorRun (queryDesc=0x555a604f78f8, direction=, count=1, execute_once=) at execMain.c:361
#5  0x555a5d5267cc in postquel_getnext (es=0x555a604f7418, es=0x555a604f7418, fcache=0x555a5fd1a658, fcache=0x555a5fd1a658) at functions.c:865
#6  fmgr_sql (fcinfo=0x555a60376470) at functions.c:1161
#7  0x555a5d5224f7 in ExecMakeFunctionResultSet (fcache=0x555a60376400, econtext=econtext@entry=0x555a60374090, argContext=0x555a5fd449d0, isNull=0x555a6037a60e "", isDone=isDone@entry=0x555a6037a698) at execSRF.c:604
#8  0x555a5d53dcbb in ExecProjectSRF (node=node@entry=0x555a60373f78, continuing=continuing@entry=0 '\000') at nodeProjectSet.c:175
#9  0x555a5d53ddf5 in ExecProjectSet (pstate=0x555a60373f78) at nodeProjectSet.c:105
#10 0x555a5d53d556 in ExecProcNode (node=0x555a60373f78) at ../../../src/include/executor/executor.h:242
#11 ExecNestLoop (pstate=0x555a60373da0) at nodeNestloop.c:109
#12 0x555a5d53d556 in ExecProcNode (node=0x555a60373da0) at ../../../src/include/executor/executor.h:242
#13 ExecNestLoop (pstate=0x555a60373248) at nodeNestloop.c:109
#14 0x555a5d536699 in ExecProcNode (node=0x555a60373248) at ../../../src/include/executor/executor.h:242
#15 ExecLimit (pstate=0x555a60372650) at nodeLimit.c:95
#16 0x555a5d5433eb in ExecProcNode (node=0x555a60372650) at ../../../src/include/executor/executor.h:242
#17 ExecSetParamPlan (node=, econtext=0x555a6045e948) at nodeSubplan.c:968
#18 0x555a5d513da8 in ExecEvalParamExec (state=, op=0x555a604619f0, econtext=) at execExprInterp.c:1921
#19 0x555a5d516b7e in ExecInterpExpr (state=0x555a604616e0, econtext=0x555a6045e948, isnull=) at execExprInterp.c:1038
#20 0x555a5d547cad in ExecEvalExprSwitchContext (isNull=0x7ffecac290ce "", econtext=0x555a6045e948, state=0x555a604616e0) at ../../../src/include/executor/executor.h:300
#21 ExecProject (projInfo=0x555a604616d8) at ../../../src/include/executor/executor.h:334
#22 ExecWindowAgg (pstate=0x555a6045e670) at nodeWindowAgg.c:1761
#23 0x555a5d536699 in ExecProcNode (node=0x555a6045e670) at ../../../src/include/executor/executor.h:242
#24 ExecLimit (pstate=0x555a6045df28) at nodeLimit.c:95
#25 0x555a5d51a1ea in ExecProcNode (node=0x555a6045df28) at ../../../src/include/executor/executor.h:242
#26 ExecutePlan (execute_once=, dest=0x555a604322a0, direction=, numberTuples=0, sendTuples=, operation=CMD_SELECT, use_parallel_mode=, planstate=0x555a6045df28, estate=0x555a5ffef128) at execMain.c:1718
#27 standard_ExecutorRun (queryDesc=0x555a5ff8e418, direction=, count=0, execute_once=) at execMain.c:361
#28 0x555a5d668ecc in PortalRunSelect (portal=portal@entry=0x555a5fbf5f00, forward=forward@entry=1 '\001', count=0, count@entry=9223372036854775807, dest=dest@entry=0x555a604322a0) at pquery.c:932
#29 0x555a5d66a4c0 in PortalRun (portal=portal@entry=0x555a5fbf5f00, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=1 '\001', run_once=run_once@entry=1 '\001', dest=dest@entry=0x555a604322a0, altdest=altdest@entry=0x555a604322a0, completionTag=0x7ffecac29380 "") at pquery.c:773
#30 0x555a5d66608b in exec_simple_query (query_string=0x555a5fb78178 "[...]") at postgres.c:1120
#31 0x555a5d667de1 in PostgresMain (argc=, argv=argv@entry=0x555a5fbb5710, dbname=, username=) at postgres.c:4139
#32 0x555a5d36af16 in BackendRun (port=0x555a5fb9d280) at postmaster.c:4412
#33 BackendStartup (port=0x555a5fb9d280) at postmaster.c:4084
#34 ServerLoop () at postmaster.c:1757
#35 0x