Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade

2017-12-07 Thread Gunther
at 4.5 MB/s when the system sustains over 100 MB/s during processing of table scan and hash join queries. Something is wrong with the dump thing. And no, it's not SSL or whatever, I am doing it on a local system with local connections. Version 9.5 something. regards, -Gunther On

neither CPU nor IO bound, but throttled performance

2019-02-20 Thread Gunther
Is there something wrong with the total CPU% estimated by both top and iostat? From top it looks like I have 24 worker processes using 8% each, most of them in R(unning) state, so that would be 192%, which, divided over the 2 CPUs of the t2.medium instance, is really 96%. So I am CPU bound after all? regards, -Gunther
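The arithmetic in that message can be checked directly; the key point is that top reports each process's CPU as a percentage of a single core, so summing across workers and dividing by the core count gives whole-machine utilization. A minimal sketch using the numbers from the message:

```python
# Sanity-check the CPU accounting from the message: 24 workers at ~8% each,
# where top reports percent of one core, on a 2-vCPU t2.medium instance.
workers = 24
pct_per_worker = 8.0   # percent of a single core, as shown by top
vcpus = 2              # t2.medium has 2 vCPUs

total_core_pct = workers * pct_per_worker   # 192% of one core
machine_pct = total_core_pct / vcpus        # 96% of the whole machine

print(f"{total_core_pct:.0f}% of one core = {machine_pct:.0f}% of the machine")
```

At ~96% utilization the instance would indeed look CPU bound, unless (as the follow-up in this thread shows) the missing cycles are hypervisor steal.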

Massive parallel queue table causes index deterioration, but REINDEX fails with deadlocks.

2019-02-23 Thread Gunther
Defining it with ... WHERE pending; then it would only grow, but never shrink. Maybe that helps somehow? I doubt it though. Adding to an index also causes deterioration, and most of the rows would be irrelevant because they would be past work. It would be nicer if there were another smooth way. regards, -Gunther

Re: Massive parallel queue table causes index deterioration, but REINDEX fails with deadlocks.

2019-02-23 Thread Gunther
On 2/23/2019 16:13, Peter Geoghegan wrote: On Sat, Feb 23, 2019 at 1:06 PM Gunther wrote: I thought to keep my index tight, I would define it like this: CREATE UNIQUE INDEX Queue_idx_pending ON Queue(jobId) WHERE pending; so that only pending jobs are in that index. When a job is done
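A sketch of the partial-index approach quoted above. The table name, jobId, and pending come from the message; the rest of the schema is assumed for illustration:

```sql
-- Hypothetical queue table; only jobId and pending appear in the message.
CREATE TABLE Queue (
    jobId   bigint  NOT NULL,
    pending boolean NOT NULL DEFAULT true,
    payload text
);

-- The partial unique index from the message: only pending jobs are indexed,
-- so the index's logical content stays small as jobs complete.
CREATE UNIQUE INDEX Queue_idx_pending ON Queue(jobId) WHERE pending;

-- When a job is done, flipping pending to false drops it out of the index's
-- predicate; the dead index tuple is physically reclaimed later by vacuum.
UPDATE Queue SET pending = false WHERE jobId = 42;
```

The catch discussed in the thread is that last point: the index still accumulates dead tuples from completed jobs until vacuum keeps up, which is where the deterioration comes from.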

Re: Massive parallel queue table causes index deterioration, but REINDEX fails with deadlocks.

2019-02-24 Thread Gunther
CONCURRENTLY never completes. Why is that? Also, the REINDEX command always fails with a deadlock because there is a row lock and a complete table lock involved. I consider this ultimately a bug, or at the very least there is room for improvement. And I am on version 11.1. regards, -Gunther

Re: Massive parallel queue table causes index deterioration, but REINDEX fails with deadlocks.

2019-02-24 Thread Gunther
partition the table further by time, maybe alternating even and odd days, such that on any given day one of the two pending partitions is quiet? Is that how it's done? regards, -Gunther
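One way to read the even/odd-day idea: partition by day parity, so that on any given day only one partition receives new pending rows while the other is quiescent and can be reindexed without fighting live traffic. A minimal sketch (names hypothetical; PostgreSQL 11 allows partitioning on an expression):

```sql
-- Queue partitioned by the parity of the creation day: inserts on an even
-- day land in Queue_even, leaving Queue_odd quiet for maintenance.
CREATE TABLE Queue (
    jobId   bigint  NOT NULL,
    created date    NOT NULL,
    pending boolean NOT NULL DEFAULT true
) PARTITION BY LIST ((EXTRACT(DAY FROM created)::int % 2));

CREATE TABLE Queue_even PARTITION OF Queue FOR VALUES IN (0);
CREATE TABLE Queue_odd  PARTITION OF Queue FOR VALUES IN (1);
```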

Re: Facing issue in using special characters

2019-03-15 Thread Gunther
representation, whether that is XML or JSON or SQL or URL query parameters, or a CSV file, or anything at all, you need to escape your string values properly. This question with no detail didn't deserve such a thorough answer, but it's my soap box. I do not accept people complaining about &q

Re: Distributing data over "spindles" even on AWS EBS, (followup to the work queue saga)

2019-03-17 Thread Gunther
On 3/14/2019 11:11, Jeremy Schneider wrote: On 3/14/19 07:53, Gunther wrote: 2. build a low level "spreading" scheme which is to take the partial files 4653828 and 4653828.1, .2, _fsm, etc. and move each to another device and then symlink it back to that directory (I co

Poor man's partitioned index .... not being used?

2019-03-20 Thread Gunther
If the load really is so big, why not partition the entire table. But just for the heck of it, why not this way? regards, -Gunther

Re: Poor man's partitioned index .... not being used?

2019-03-21 Thread Gunther
.id) LIMIT 10; It's OK though. It just goes to show that in a case like this, it is best to just go with the partitioned table anyway. regards, -Gunther

EXPLAIN PLAN for DELETE CASCADE or DELETE not using pkey index despite EXPLAINing that it would?

2019-03-21 Thread Gunther
be traced, I am sure that somewhere inside there is some data structure representing this activity and all it would take is to dump it? regards, -Gunther

Re: EXPLAIN PLAN for DELETE CASCADE or DELETE not using pkey index despite EXPLAINing that it would?

2019-03-22 Thread Gunther
On 3/21/2019 17:16, Tom Lane wrote: Gunther writes: I have 250 rows to delete, but they are a target to a bunch of child tables with foreign key on delete cascade. EXPLAIN DELETE FROM Foo WHERE id = (SELECT fooId FROM Garbage); shows me that it uses the nested loop by Foo_pkey index to find
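A note on what EXPLAIN can and cannot show here: the plan covers only the DELETE on the parent table, because ON DELETE CASCADE work happens in per-row foreign-key triggers on the child tables. EXPLAIN ANALYZE at least reports the time spent in those triggers. A sketch using the names from the message (the constraint and child-table names are hypothetical):

```sql
-- The plan shows only the parent-table delete; cascade work is in triggers.
EXPLAIN ANALYZE
DELETE FROM Foo WHERE id IN (SELECT fooId FROM Garbage);
-- Output ends with lines like:
--   Trigger for constraint bar_fooid_fkey: time=... calls=...
-- A slow cascade usually means a missing index on the child's FK column:
CREATE INDEX ON Bar(fooId);
```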

Block / Page Size Optimization

2019-04-08 Thread Gunther
explanation? If you have a link that I can just read, I appreciate you sharing that. I think that should be on some Wiki or FAQ somewhere. If I get a quick and dirty explanation with some pointers, I can try to write it out into a more complete answer that might be added into some documentation or FAQ somewhere. thanks & regards, -Gunther

Out of Memory errors are frustrating as heck!

2019-04-14 Thread Gunther
It feels like those out-of-memory issues should be handled more gracefully (a garbage collection attempt?) and that somehow there should be more information, so the person can do something about it. Any ideas? -Gunther

Re: Out of Memory errors are frustrating as heck!

2019-04-14 Thread Gunther
rather difficult because there are many tables and columns involved. It would really be nice if the error said exactly which plan step that ExecutorState referred to, so one could narrow it down. regards, -Gunther On 4/14/2019 17:19, Tom Lane wrote: Gunther writes: For weeks now, I am ba

Re: Out of Memory errors are frustrating as heck!

2019-04-14 Thread Gunther
e0) at postmaster.c:4361 #22 BackendStartup (port=0xf6dfe0) at postmaster.c:4033 #23 ServerLoop () at postmaster.c:1706 #24 0x006bf122 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0xf45320) at postmaster.c:1379 #25 0x004822dc in main (argc=3, argv=0xf45320) at main.c:228 That's it. Thank you all very much for your interest in this case. -Gunther

Re: Out of Memory errors are frustrating as heck!

2019-04-14 Thread Gunther
the system that runs this. thanks, -Gunther

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Gunther
BackendStartup (port=0xf6efe0) at postmaster.c:4033 #27 ServerLoop () at postmaster.c:1706 #28 0x006bf122 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0xf45320) at postmaster.c:1379 #29 0x004822dc in main (argc=3, argv=0xf45320) at main.c:228 Good, now I leave this all sitting like that for you to ask me what else you might want to see. We are now close to the edge of the cliff. -Gunther

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Gunther
going to be 512 or whatever. Isn't there some other way? I'm going to try without that DISTINCT step, or perhaps by dismantling this query until it works without this excessive memory growth. -Gunther

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Gunther
On 4/15/2019 21:49, Gunther wrote: I'm going to try without that DISTINCT step, or perhaps by dismantling this query until it works without this excessive memory growth. It also failed. Out of memory. The resident memory size of the backend was 1.5 GB before it crashed. TopMemoryCo

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Gunther
d88 (gdb) b AllocSetAlloc if (int)strcmp(context->name, "ExecutorState") == 0 && *(int *)$rsp != 0x84e7dd && 0x84e8ad != *(int *)$rsp Note: breakpoint 6 also set at pc 0x849030. Breakpoint 7 at 0x849030: file aset.c, line 718. (gdb) delete 6 Now if I continue I don't seem to be stopping any more. Does this help now? -Gunther
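The one-line gdb session above, reflowed for readability (the addresses are specific to that particular run of the backend):

```gdb
# Break in AllocSetAlloc only for allocations in the "ExecutorState" context,
# excluding two known call sites by their return addresses on the stack.
(gdb) b AllocSetAlloc if (int) strcmp(context->name, "ExecutorState") == 0 \
        && *(int *) $rsp != 0x84e7dd \
        && *(int *) $rsp != 0x84e8ad
Breakpoint 7 at 0x849030: file aset.c, line 718.
(gdb) delete 6      # drop the earlier unconditional breakpoint at the same pc
```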

Re: Out of Memory errors are frustrating as heck!

2019-04-16 Thread Gunther
e@entry=0x4b4a198, hashvalue=hashvalue@entry=0x7ffcbf92fe5c, tupleSlot=0x2ae0ab8, hjstate=0x2a1d920) at nodeHashjoin.c:1277 (More stack frames follow...) But now it increased ExecutorState: 1369353824 total in 163399 blocks; 248792 free (36 chunks); 1369105032 used It increases every 3 times I stop at the breakpoint. -Gunther

Re: Out of Memory errors are frustrating as heck!

2019-04-16 Thread Gunther
think it would actually work. I've done it with Java heap dumps, even upgrading the VM to a 32 GB VM just to crunch the heap dump. But can you tell me just a little more about how I need to configure this thing to get the data you want without blowing up memory and disk during this huge query? regards, -Gunther

Re: Out of Memory errors are frustrating as heck!

2019-04-17 Thread Gunther
annoying out-of-memory errors. And yes, I can add that memory profiler thing, if you think it would actually work. I've done it with Java heap dumps, even upgrading the VM to a 32 GB VM just to crunch the heap dump. But can you tell me just a little more about how I need to configure this thing to get the data you want without blowing up memory and disk during this huge query? regards, -Gunther

Re: Out of Memory errors are frustrating as heck!

2019-04-19 Thread Gunther
update pg_class set (relpages, reltuples) = (5505039, 1.13467e+08) where relname = 'tmp_q'; UPDATE 1 foo=# update pg_class set (relpages, reltuples) = (7, 236) where relname = 'tmp_r'; UPDATE 1 but that didn't help. Somehow the planner outsmarts every such trick, so I can't get it to follow my right outer join plan where the big table is hashed. I am sure y'all know some way to force it. regards, -Gunther
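The stock way to "force" a plan shape, rather than faking statistics in pg_class, is to disable the alternative strategies for one session. A sketch using the temp-table names from the message (the join column is hypothetical):

```sql
-- Leave hash join as the only available join method for this transaction.
BEGIN;
SET LOCAL enable_mergejoin = off;
SET LOCAL enable_nestloop  = off;
EXPLAIN SELECT * FROM tmp_q q RIGHT OUTER JOIN tmp_r r USING (id);
ROLLBACK;
```

Which relation ends up on the hash side still follows the planner's row estimates, though; when that is not enough, the pg_hint_plan extension offers explicit Leading()/HashJoin() hints.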

Re: Out of Memory errors are frustrating as heck!

2019-04-20 Thread Gunther
finished, how many it estimates it still has to do, whether its original estimate panned out or not, etc. This would be so tremendously useful for runtime debugging of queries. I think the utility of this can hardly be overstated. I mean, even for end-user applications of some data warehouse, you could probe a long-running query every 5 seconds as to where its execution is. Man, I could not think of a more useful low-hanging-fruit feature. I am sure that if PostgreSQL had originally been written in Java, this feature would naturally exist already. regards and Happy Easter, -Gunther

Re: Out of Memory errors are frustrating as heck!

2019-04-20 Thread Gunther
The Unique step is taking forever on this data. OK, now I will try the various patches that people sent. -Gunther

Re: Out of Memory errors are frustrating as heck!

2019-04-20 Thread Gunther
of the trusted people) wants access to hack directly, I can provide. regards, -Gunther

Re: Out of Memory errors are frustrating as heck!

2019-04-20 Thread Gunther
The MemoryContext is NULL: (gdb) p context $1 = (MemoryContext) 0x0 All patches applied cleanly (with the -p1 option). I see no .rej file, but also no .orig file; not sure why that version of patch didn't create them. But I paid attention and know that there was no error. -Gunther

Re: Out of Memory errors are frustrating as heck!

2019-04-21 Thread Gunther
C [6974] DETAIL: Failed on request of size 32800 in memory context "HashBatchContext". 2019-04-21 19:50:21.338 UTC [6974] STATEMENT: explain analyze select * from reports.v_BusinessOperation; Next I'll apply Tomas' corrected 0002 patch on top of this and see ... -Gunther

Re: Out of Memory errors are frustrating as heck!

2019-04-21 Thread Gunther
After applying Tomas' patch 0002 as corrected, over 0001, same thing: integrator=# set enable_nestloop to off; SET integrator=# explain analyze select * from reports.v_BusinessOperation; WARNING: ExecHashIncreaseNumBatches: increasing number of batches from 16 to 32 WARNING: ExecHashIncreaseNum

Re: Out of Memory errors are frustrating as heck!

2019-04-21 Thread Gunther
terminated with signal SIGSEGV, Segmentation fault. #0 0x006bd792 in ExecParallelHashJoinNewBatch (hjstate=) at nodeHashjoin.c:1127 I don't understand why all of them are at the same location. Doesn't make any sense to me. But I'll leave it at that right now. -Gunther

Re: Out of Memory errors are frustrating as heck!

2019-04-23 Thread Gunther
3) Filter: ((participationtypecode)::text = 'PRD'::text) Rows Removed by Filter: 7415579 Planning Time: 40.559 ms Execution Time: 6896581.566 ms (70 rows) For the first time this query has succeeded now. Memory was bounded. The run time of nearly two hours is crazy, but things sometimes take that long. The important thing was not to get an out-of-memory error. Thank you. Anything else you want to try, I can do it. regards, -Gunther

Re: Out of Memory errors are frustrating as heck!

2019-04-23 Thread Gunther
Execution Time: 7004340.091 ms (70 rows) There isn't really any big news here. But what matters is that it works. thanks & regards, -Gunther Schadow

Re: Out of Memory errors are frustrating as heck!

2019-08-23 Thread Gunther
Have the patches proposed, especially this last one by Tomas Vondra, been applied to the 11 releases? Should I try these older patches from April? regards, -Gunther For what it is worth, this is what I am getting: TopMemoryContext: 67424 total in 5 blocks; 7184 free (7 chunks); 60240 used pgstat TabStatusArray

Re: Out of Memory errors are frustrating as heck!

2019-08-23 Thread Gunther
commented this much (the above bullet points are my own notes.) Anyway, for now, I am good. Thank you very much. regards, -Gunther On 8/23/2019 9:17, Gunther wrote: Hi all, I am connecting to a discussion back from April this year. My data has grown and now I am running into new out

Re: Out of Memory errors are frustrating as heck!

2019-08-24 Thread Gunther
and tables involved. The query plan has 100 lines. Not easy to share for a reproduction, and I have my issue under control by adding some swap just in case. The swap space was never actually used. thanks, -Gunther On 8/23/2019 10:20, Tom Lane wrote: Gunther writes: Hi all, I am connecting to a d

FPGA optimization ...

2019-11-04 Thread Gunther
could donate to, 501(c)(3) tax exempt? I can donate and possibly find some people at Purdue University who might take this on. Interest? regards, -Gunther

Re: FPGA optimization ...

2019-11-04 Thread Gunther
pgsql-hackers can't be involved, of course). I've been involved in a couple of such research projects in Europe; not sure what exactly the situation/rules are in the US. Yes, might work with a university directly, although I will also contact the PostgreSQL foundation in the US. regards, -Gunther

Faster more low-level methods of having hot standby / secondary read-only servers?

2022-09-13 Thread Gunther Schadow
achieved by letting a RAID mirror sync to the same snapshot point and then splitting it off, or doing the read activity of the standby server (querying like crazy) only from that mirror, while batching changes to the RAID master so that they can be applied with very low overhead. Anyone thinking about these things? regards, -Gunther

When you really want to force a certain join type?

2022-12-28 Thread Gunther Schadow
A nested loop is a bad choice for bulk data; it's only for this recursive query that it sometimes makes sense. regards, -Gunther

Re: When you really want to force a certain join type?

2022-12-28 Thread Gunther Schadow
see the recursion, so no need to have that keyword. regards, -Gunther

Re: neither CPU nor IO bound, but throttled performance

2019-02-21 Thread Gunther Schadow
Thank you Magnus. 68% steal. Indeed. You probably hit the target. Yes. That explains the discrepancy. I need to watch and understand that CPU credits issue. regards, -Gunther On 2/21/2019 4:08, Magnus Hagander wrote: On Thu, Feb 21, 2019 at 12:34 AM Gunther <mailto:r...@gusw.net>

Re: Massive parallel queue table causes index deterioration, but REINDEX fails with deadlocks.

2019-02-25 Thread Gunther Schadow
partition the table further by time, maybe alternating even and odd days, such that on any given day one of the two pending partitions is quiet? Is that how it's done? regards, -Gunther

Re: Pg10 : Client Configuration for Parallelism ?

2019-04-17 Thread Gunther Schadow
option, you will see each postgres process identify itself as to its role, e.g. postgres: parallel worker for PID 46687 or postgres: SELECT ... or postgres: wal writer. Extremely useful, this. -Gunther

Re: Pg10 : Client Configuration for Parallelism ?

2019-04-17 Thread Gunther Schadow
SELECT * FROM (SELECT count(1) FROM BigTable) x Parallel or not? -Gunther
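Whether the inner count runs in parallel is visible in the plan: look for a Gather node with a Parallel Seq Scan underneath. A sketch of the expected shape, assuming the table is large enough and max_parallel_workers_per_gather permits workers:

```sql
EXPLAIN SELECT * FROM (SELECT count(1) FROM BigTable) x;
-- Typical parallel plan shape:
--   Finalize Aggregate
--     ->  Gather
--           Workers Planned: 2
--           ->  Partial Aggregate
--                 ->  Parallel Seq Scan on bigtable
```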

Re: Best Filesystem for PostgreSQL

2019-04-17 Thread Gunther Schadow
growing. Any ideas on how ZFS might do? ZFS is of course so much more flexible. -Gunther

Is there a known bug with SKIP LOCKED and "tuple to be locked was already moved to another partition due to concurrent update"?

2020-06-30 Thread Gunther Schadow
etc., but you can't really try it out unless you do some massively parallel magic. So I figured I'd just ask. regards, -Gunther
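For context, this is the canonical SKIP LOCKED work-queue pattern the question is about: each worker claims one job, skipping rows another worker has already locked. A sketch with hypothetical names; the error in the subject line can arise when a concurrent UPDATE moves the claimed row to another partition of a partitioned queue table:

```sql
-- Claim exactly one pending job; concurrent workers skip locked rows
-- instead of blocking on them.
WITH job AS (
    SELECT jobId
    FROM   Queue
    WHERE  pending
    ORDER  BY jobId
    LIMIT  1
    FOR UPDATE SKIP LOCKED
)
UPDATE Queue q
SET    pending = false
FROM   job
WHERE  q.jobId = job.jobId
RETURNING q.jobId;
```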

Conundrum with scaling out of bottleneck with hot standby, PgPool-II, etc.

2020-12-23 Thread Gunther Schadow
twice as wide an IO path and disks. Why would I do anything other than just increase that master db server? regards, -Gunther

Re: Postgres performance comparing GCP and AWS

2021-02-24 Thread Gunther Schadow
check bus throughput of the EC2 instance. Needless to say, you don't want a t* instance, where you have only limited burst CPU capacity. regards, -Gunther On 2/23/2021 1:12 PM, Maurici Meneghetti wrote: Hi everyone, I have 2 postgres instances created from the same dump (backup), one on a GCP V

PosgtgreSQL hot standby reading WAL from muli-attached volume?

2021-04-05 Thread Gunther Schadow
earing? And as I read the log-shipping method may not be suitable for hot standby use? Is this something that has been written about already? regards, -Gunther