Re: [GENERAL] Out of memory on SELECT in 8.3.5

2009-02-09 Thread Matt Magoffin
 Yeah, I remember we found a few xml-related leaks based on your reports.
 However, there's not anything here to suggest that this query is
 straining the capabilities of a 64-bit system with lots of RAM.  It seems
 certain you're hitting some artificial process-size limit, and the only
 one I know about is ulimit.

 I wasn't aware of /proc/pid/limits before, but now that I've heard
 of it, checking that for the postmaster and/or a backend seems like
 a great idea.

This doesn't seem to exist for any process on this box:

[r...@170226-db7 ~]# ls /proc/*/limit*
ls: /proc/*/limit*: No such file or directory

If this were a system-defined process-size limit, then should the query
still run out of memory after restarting Postgres? Most likely we'll have
to restart Postgres soon, and I'll retry this query after doing so. Based
on past experience, I'd expect the query to complete at that time.

From what we experience, Postgres seems to be slowly accumulating memory
in the fashion of a small memory leak and things start to fail with
out-of-memory errors after the server has been running for some time (e.g.
roughly 4-6 weeks). Restarting Postgres clears out the problems (after a
restart we can immediately run queries that were failing before the
restart)... but then the cycle starts again.

I bring this up because I wonder whether something is accumulating within
Postgres that isn't getting freed, and might eventually cause an
out-of-memory error like this one.

Regards,
Matt

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory on SELECT in 8.3.5

2009-02-09 Thread Matt Magoffin
 erp, that stinks.  Must be on an older kernel?  I've got it under (a
 Debian-built) 2.6.26.  I can't recall if there's another way to get
 limit info for an active process..  Could use Tom's suggestion of
 echo'ing ulimit -a out to a file somewhere during database start-up.

Yes, this is a RHEL 5 box with kernel 2.6.18... must not exist for this
ancient kernel :-)
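
A way to work around the missing /proc/<pid>/limits, along the lines of
Tom's suggestion quoted above, is to dump `ulimit -a` to a file from the
same environment that starts the postmaster. A minimal sketch (the log
path here is just an example, not anything from this thread):

```shell
# Record the limits this shell (and thus any process it launches) would
# inherit; in practice this line would go in the init script that starts
# the postmaster, e.g. wrapped in `su -l postgres -c ...`.
LIMITS_LOG=/tmp/pg_startup_limits.log
ulimit -a > "$LIMITS_LOG" 2>&1
cat "$LIMITS_LOG"
```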

Regards,
Matt



Re: [GENERAL] Out of memory on SELECT in 8.3.5

2009-02-09 Thread Matt Magoffin
 [r...@170226-db7 ~]# su -l postgres -c "ulimit -a"
 core file size          (blocks, -c) 0
 data seg size           (kbytes, -d) unlimited
 max nice                        (-e) 0
 file size               (blocks, -f) unlimited
 pending signals                 (-i) 139264
 max locked memory       (kbytes, -l) 32
 max memory size         (kbytes, -m) unlimited
 open files                      (-n) 1024
 pipe size            (512 bytes, -p) 8
 POSIX message queues     (bytes, -q) 819200
 max rt priority                 (-r) 0
 stack size              (kbytes, -s) 10240
 cpu time               (seconds, -t) unlimited
 max user processes              (-u) 139264
 virtual memory          (kbytes, -v) unlimited
 file locks                      (-x) unlimited

I just noticed something: the open files limit lists 1024, which is the
default for this system. A quick count of open data files currently in use
by Postgres returns almost 7000, though.

[r...@170226-db7 ~]# lsof -u postgres | egrep '(/pg_data|/pg_index|/pg_log)' | wc -l
6749

We have 100+ postgres processes running, so for an individual process,
could the 1024 file limit be doing anything to this query? Or would I see
an explicit error message regarding this condition?
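
Since the ulimit applies per process while `lsof -u postgres` counts
across every backend, one way to check is to count the entries in each
backend's /proc/<pid>/fd directory. A sketch (assumes Linux and a
`postgres` user; the /proc/self line is just a smoke test of the method):

```shell
# Count open descriptors per backend; each should stay well under 1024.
for pid in $(pgrep -u postgres 2>/dev/null); do
    echo "$pid: $(ls /proc/"$pid"/fd 2>/dev/null | wc -l) open fds"
done
# Same count for the current shell, demonstrating the technique:
ls /proc/self/fd | wc -l
```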

Regards,
Matt





Re: [GENERAL] Out of memory on SELECT in 8.3.5

2009-02-09 Thread Matt Magoffin
 We have 100+ postgres processes running, so for an individual process,
 could the 1024 file limit be doing anything to this query? Or would I
 see
 an explicit error message regarding this condition?

 you would get something like "Open files rlimit 1024 reached for uid ..."
 in syslog (which you should check out anyhow).

No... nothing like this in syslog.

Regards,
Matt



Re: [GENERAL] Out of memory on SELECT in 8.3.5

2009-02-09 Thread Matt Magoffin
 So did the backend crash on this one, or just produce an 'out of memory'
 message?

No crash, just the error message.

-- m@



Re: [GENERAL] Out of memory on SELECT in 8.3.5

2009-02-09 Thread Matt Magoffin
 Err, yea, overcommit_memory=2, a small amount of swap space, and a low
 overcommit_ratio could cause this to happen...  The default
 ratio is 50 though, which should mean, on this system, there is about
 10G available for user processes, but his usage shows only a bit over 2G
 being used outside of buffers/caches (based on free output)..

 Matt, can you provide the output from these:
 cat /proc/sys/vm/overcommit_memory

 cat /proc/sys/vm/overcommit_ratio

 cat /proc/meminfo

Sure, here you go:

[r...@170226-db7 ~]# cat /proc/sys/vm/overcommit_memory
2

[r...@170226-db7 ~]# cat /proc/sys/vm/overcommit_ratio
50

[r...@170226-db7 ~]# cat /proc/meminfo
MemTotal:     16432296 kB
MemFree:        160188 kB
Buffers:        164764 kB
Cached:       14006456 kB
SwapCached:          0 kB
Active:        6984144 kB
Inactive:      8471456 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:     16432296 kB
LowFree:        160188 kB
SwapTotal:     2096440 kB
SwapFree:      2095992 kB
Dirty:             540 kB
Writeback:           0 kB
AnonPages:     1270076 kB
Mapped:        4258192 kB
Slab:           211448 kB
PageTables:     559248 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:  10312588 kB
Committed_AS:  9760756 kB
VmallocTotal: 34359738367 kB
VmallocUsed:    263556 kB
VmallocChunk: 34359474475 kB
HugePages_Total:     0
HugePages_Free:      0
HugePages_Rsvd:      0
Hugepagesize:     2048 kB

Regards,
Matt



Re: [GENERAL] Out of memory on SELECT in 8.3.5

2009-02-09 Thread Matt Magoffin
 Which is a longwinded way of saying that doesn't seem to match the
 symptoms, but ...  If you're not dead certain that your kernel is
 configured to allow *well* north of 7000 open files, you might consider
 cutting max_files_per_process in half at your next restart.

I think it is, looking at /proc/sys/fs/file-max I get

[r...@170226-db7 64]# cat /proc/sys/fs/file-max
1573080

 OTOH ... wait a minute.  Have you maybe got the system configured to
 start denying memory requests before it gets into significant swapping?
 We typically suggest setting vm.overcommit_memory=2 on Linux, but
 I'm not sure whether that results in the kernel trying to avoid using
 swap at all.  Maybe this really is an issue with system-wide total
 memory use rather than any one process.

We do have vm.overcommit_memory set to 2 on this system.
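
With overcommit_memory=2 the kernel refuses allocations past CommitLimit,
which it computes as swap plus RAM scaled by overcommit_ratio. Plugging
in the values from the /proc/meminfo output posted earlier reproduces the
reported figure exactly:

```shell
# CommitLimit = SwapTotal + MemTotal * overcommit_ratio / 100 (all in kB)
mem_total_kb=16432296
swap_total_kb=2096440
overcommit_ratio=50
commit_limit_kb=$(( swap_total_kb + mem_total_kb * overcommit_ratio / 100 ))
echo "CommitLimit = ${commit_limit_kb} kB"   # 10312588 kB, matching meminfo
```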

I should add that now many queries and just transaction initiations are
failing on this system since yesterday. Now I just saw one of our
applications attempt to initiate a transaction (this is a Java app, so
through JDBC here) and it resulted in an out-of-memory error. This is all
I see in the logs:

2009-02-09 13:52:13 CST lms_kia FATAL:  out of memory
TopMemoryContext: 53952 total in 6 blocks; 7048 free (7 chunks); 46904 used
  Relcache by OID: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  CacheMemoryContext: 91024 total in 4 blocks; 3856 free (9 chunks); 87168 used
  MdSmgr: 0 total in 0 blocks; 0 free (0 chunks); 0 used
  LOCALLOCK hash: 8192 total in 1 blocks; 3744 free (0 chunks); 4448 used
  Timezones: 53584 total in 2 blocks; 3744 free (0 chunks); 49840 used
  ErrorContext: 8192 total in 1 blocks; 8160 free (4 chunks); 32 used
2009-02-09 13:52:29 CST lms_kia FATAL:  out of memory
2009-02-09 13:52:29 CST lms_kia DETAIL:  Failed on request of size 2456.
TopMemoryContext: 97184 total in 11 blocks; 2216 free (7 chunks); 94968 used
  TopTransactionContext: 8192 total in 1 blocks; 7648 free (0 chunks); 544 used
  Prepared Queries: 24576 total in 2 blocks; 15984 free (5 chunks); 8592 used
  Type information cache: 24576 total in 2 blocks; 11888 free (5 chunks); 12688 used
  Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  Operator lookup cache: 24576 total in 2 blocks; 11888 free (5 chunks); 12688 used
  MessageContext: 8192 total in 1 blocks; 6976 free (0 chunks); 1216 used
  smgr relation table: 24576 total in 2 blocks; 11840 free (4 chunks); 12736 used
  TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks); 32 used
  Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  PortalMemory: 8192 total in 1 blocks; 7888 free (0 chunks); 304 used
    PortalHeapMemory: 122880 total in 17 blocks; 7640 free (102 chunks); 115240 used
      ExecutorState: 89312 total in 4 blocks; 1960 free (2 chunks); 87352 used
        HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
          HashBatchContext: 125820976 total in 26 blocks; 496 free (6 chunks); 125820480 used
        HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
          HashBatchContext: 524336 total in 1 blocks; 0 free (0 chunks); 524336 used
        TIDBitmap: 516096 total in 6 blocks; 222528 free (20 chunks); 293568 used
        TupleSort: 32816 total in 2 blocks; 7360 free (0 chunks); 25456 used
        ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
        ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
        ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
        ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
        ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
        ExprContext: 8192 total in 1 blocks; 8096 free (0 chunks); 96 used
        ExprContext: 8192 total in 1 blocks; 8160 free (1 chunks); 32 used
        ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
        AggContext: 8192 total in 1 blocks; 8016 free (0 chunks); 176 used
          TupleHashTable: 8192 total in 1 blocks; 3744 free (0 chunks); 4448 used
        ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
        ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
  Relcache by OID: 24576 total in 2 blocks; 13872 free (3 chunks); 10704 used
  CacheMemoryContext: 1341680 total in 21 blocks; 502840 free (2 chunks); 838840 used
    unnamed prepared statement: 57344 total in 3 blocks; 2008 free (2 chunks); 55336 used
    cl_event_crm_idx: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    cl_event_lead_id_idx: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    cl_event_event_name_idx: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    cl_event_crm_lead_id_idx: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    cl_event_pkey: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    cl_event_first_response_pkey: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
    lead_reporting_data_delivery_meth_idx: 2048 total in 1 blocks; 752

Re: [GENERAL] Out of memory on SELECT in 8.3.5

2009-02-09 Thread Matt Magoffin
 * Matt Magoffin (postgresql@msqr.us) wrote:
 [r...@170226-db7 ~]# cat /proc/meminfo
 CommitLimit:  10312588 kB
 Committed_AS:  9760756 kB

 I suspect this may be it...  Apparently, while you're only using about
 2G, you've got 10G or so of outstanding commitments, and Linux is
 refusing to allocate more.

 You probably want to up your overcommit_ratio, esp. in light of the fact
 that you've only got 2G of swap on this box.  I'd probably say up it to
 80, which would give you 14.8G of commitable memory, leaving some room
 in-memory (1.2G) for cache/buffers and whatnot.  Alternativly, you could
 go for 90, which would allow commits up to 16.4G, so if everyone used
 all their memory, you'd be into swap.

Thanks for the advice. Should we have more than 2GB of swap available? I
thought the goal for a Postgres system was to avoid swap use at all costs.
Would it be better for us to add more swap, or to adjust the
overcommit_ratio as you describe?
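
For reference, this is how the ratio change suggested above would be
applied, plus the resulting ceiling. The sysctl lines are commented out
since they need root, and 80 is simply the value proposed in this thread:

```shell
# To apply immediately (as root):
#   sysctl -w vm.overcommit_ratio=80
# To persist across reboots, add to /etc/sysctl.conf:
#   vm.overcommit_ratio = 80
# Resulting ceiling with ratio 80 on this box (kB):
echo $(( 2096440 + 16432296 * 80 / 100 ))   # 15242276 kB, about 14.5 GiB
```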

Regards,
Matt



Re: [GENERAL] Out of memory on SELECT in 8.3.5

2009-02-09 Thread Matt Magoffin
 I suspect this may be it...  Apparently, while you're only using about
 2G, you've got 10G or so of outstanding commitments, and Linux is
 refusing to allocate more.

 You probably want to up your overcommit_ratio, esp. in light of the fact
 that you've only got 2G of swap on this box.  I'd probably say up it to
 80, which would give you 14.8G of commitable memory, leaving some room
 in-memory (1.2G) for cache/buffers and whatnot.  Alternativly, you could
 go for 90, which would allow commits up to 16.4G, so if everyone used
 all their memory, you'd be into swap.

Also, by adjusting this, would I possibly just be delaying the problem we
currently have (i.e. over time, we start to run out of memory)? I just
wonder why the system is reaching this limit at all... do you feel it is
quite normal for a system with this memory configuration to be configured
with the ratio set to 80? I'm not terribly familiar with these VM
parameters, so I apologize if I sound vague.

Regards,
Matt



Re: [GENERAL] Out of memory on SELECT in 8.3.5

2009-02-09 Thread Matt Magoffin
 with 100 concurrent postgres connections,  if they all did something
 requiring large amounts of work_mem, you could allocate 100 * 125MB (I
 believe thats what you said it was set to?) which is like 12GB :-O

 in fact a single query thats doing multiple sorts of large datasets  for
 a messy join (or other similar activity) can involve several instances
 of workmem.  multiply that by 100 queries, and ouch.

 have you considered using a connection pool to reduce the postgres
 process count?

We do have a connection pool here, and as I mentioned about a quarter of
these are Slony-controlled processes for replication. Most connections are
not doing complex queries of this sort, in fact when this query runs it is
the only query running, most connections are idle (kept open by a
connection pool) or performing short insert/update operations.

But I see your point about the large work_mem, it was set that high to
help speed up big queries such as this one.
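
For what it's worth, the worst case quoted above is easy to sketch; the
numbers below are the ones from this thread (100 connections, work_mem of
128MB), and as noted, a single messy query can use several work_mem-sized
areas at once, multiplying this further:

```shell
connections=100
work_mem_mb=128
# One full work_mem allocation per connection:
echo "$(( connections * work_mem_mb )) MB"   # 12800 MB, roughly the 12GB quoted
```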

-- m@



Re: [GENERAL] Out of memory on SELECT in 8.3.5

2009-02-09 Thread Matt Magoffin
 I'd do both.  But only after I'd reduced work_mem.  Given that
 reducing work_mem removed the problem, it looks to me like pgsql is
 requesting several large blocks of ram, then only using a small port
 of them.  But overcommit set to 2 means that the OS will not allow an
 overcommit of memory to these allocations, the allocations fail, and
 you get your error.

OK, I've re-configured work_mem, and set vm.overcommit_ratio to 80.
Without restarting Postgres, I was now able to run that big query posted
earlier in this thread that failed... so the overcommit_ratio adjustment
helped there. The EXPLAIN ANALYZE for that I've included below, where I
can see it did use in-memory sorting for some of the sorts.


                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=2297842.92..2297943.76 rows=40336 width=234) (actual time=370440.041..370483.133 rows=49317 loops=1)
   Sort Key: s.sale_date, s.vin
   Sort Method:  quicksort  Memory: 9357kB
   ->  Hash Left Join  (cost=1450365.72..2294757.26 rows=40336 width=234) (actual time=293212.004..369857.956 rows=49317 loops=1)
         Hash Cond: (ml.lead_id = phone.lead_id)
         ->  Hash Left Join  (cost=1341348.27..2140418.89 rows=40336 width=219) (actual time=286374.489..362880.702 rows=49317 loops=1)
               Hash Cond: (ml.lead_id = email.lead_id)
               ->  Nested Loop Left Join  (cost=1237270.73..2000634.35 rows=40336 width=204) (actual time=278652.051..355022.014 rows=49317 loops=1)
                     ->  Nested Loop Left Join  (cost=1237270.73..1807035.53 rows=40336 width=141) (actual time=278635.414..323774.871 rows=49317 loops=1)
                           ->  Nested Loop Left Join  (cost=1237270.73..1587513.22 rows=40336 width=126) (actual time=278632.427..317952.620 rows=49317 loops=1)
                                 ->  Nested Loop Left Join  (cost=1237270.73..1414103.19 rows=40336 width=118) (actual time=278606.034..297951.038 rows=49317 loops=1)
                                       ->  Merge Left Join  (cost=1237270.73..1238315.78 rows=40336 width=89) (actual time=278587.812..279498.796 rows=49317 loops=1)
                                             Merge Cond: (ml.lead_id = lrdm.lead_id)
                                             ->  Sort  (cost=46384.08..46484.92 rows=40336 width=78) (actual time=6200.810..6240.526 rows=49317 loops=1)
                                                   Sort Key: ml.lead_id
                                                   Sort Method:  quicksort  Memory: 8472kB
                                                   ->  Hash Left Join  (cost=9430.33..43298.42 rows=40336 width=78) (actual time=1079.869..6084.010 rows=49317 loops=1)
                                                         Hash Cond: ((s.dealer_code)::text = (d.dealer_code)::text)
                                                         ->  Hash Left Join  (cost=9094.04..42407.50 rows=40336 width=60) (actual time=1074.170..5947.646 rows=49317 loops=1)
                                                               Hash Cond: (s.id = ml.sale_id)
                                                               Filter: (((s.sale_type = 'd'::bpchar) AND (NOT ml.lost_sale)) OR ((s.sale_type = 'c'::bpchar) AND (ml.lead_pos = 0)) OR (s.sale_type = '0'::bpchar))
                                                               ->  Index Scan using mb_sale_sale_date_idx on mb_sale s  (cost=0.00..14027.94 rows=43804 width=50) (actual time=55.663..4683.901 rows=49152 loops=1)
                                                                     Index Cond: ((sale_date >= '2009-01-01'::date) AND (sale_date <= '2009-01-31'::date))
                                                               ->  Hash  (cost=5577.35..5577.35 rows=281335 width=26) (actual time=1018.108..1018.108 rows=281335 loops=1)
                                                                     ->  Seq Scan on mb_lead ml  (cost=0.00..5577.35 rows=281335 width=26) (actual time=3.451..516.245 rows=281335 loops=1)
                                                         ->  Hash  (cost=321.13..321.13 rows=1213 width=23) (actual time=5.577..5.577 rows=1213 loops=1)
                                                               ->  Seq Scan on dealer d  (cost=0.00..321.13 rows=1213 width=23) (actual time=0.056..3.684 rows=1213 loops=1)
                                             ->  Sort  (cost=1190886.66..1191208.43 rows=128709 width=19) (actual time=270075.460..271851.519 rows=1442527 loops=1)
                                                   Sort Key: lrdm.lead_id
                                                   Sort Method:  external sort  Disk: 56072kB
                                                   ->  Bitmap

Re: [GENERAL] Out of memory on SELECT in 8.3.5

2009-02-09 Thread Matt Magoffin
 I don't think changing work_mem down is actually going to reduce the
 memory allocated without changing the plan to something less optimal.
 In the end, all of this is putting off the inevitable, if you get enough
 PGs going and enough requests and whatnot, you're going to start running
 out of memory again.  Same if you get larger data sets that take up more
 hash table space or similar.  Eventually you might need a bigger box,
 but let's try to get everything in the current box to at least be used
 first..

Yes... and indeed changing vm.overcommit_ratio to 80 does allow that
previously-failing query to execute successfully. Do you think this is
also what caused the out-of-memory error we saw today just when a
transaction was initiated?

Regards,
Matt



Re: [GENERAL] Out of memory on SELECT in 8.3.5

2009-02-09 Thread Matt Magoffin
 Yes... and indeed changing vm.overcommit_ratio to 80 does allow that
 previously-failing query to execute successfully. Do you think this is
 also what caused the out-of-memory error we saw today just when a
 transaction was initiated?

 Curious, what's the explain analyze look like for that one?

Do you mean the transaction initiation? I'm not sure how to get an EXPLAIN
for that; the application never got to do anything, and from the
application's side it failed with out-of-memory while trying to open the
connection. The most precise information I have is where it failed in the
JDBC driver:

Caused by: org.postgresql.util.PSQLException: FATAL: out of memory
    at org.postgresql.core.v3.ConnectionFactoryImpl.readStartupMessages(ConnectionFactoryImpl.java:444)
    at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:99)
    at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:66)
    at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:124)
    at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30)
    at org.postgresql.jdbc3g.Jdbc3gConnection.<init>(Jdbc3gConnection.java:24)
    at org.postgresql.Driver.makeConnection(Driver.java:386)
    at org.postgresql.Driver.connect(Driver.java:260)

And the corresponding error in the Postgres log is

2009-02-09 13:52:13 CST lms_kia FATAL:  out of memory
TopMemoryContext: 53952 total in 6 blocks; 7048 free (7 chunks); 46904 used
  Relcache by OID: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  CacheMemoryContext: 91024 total in 4 blocks; 3856 free (9 chunks); 87168 used
  MdSmgr: 0 total in 0 blocks; 0 free (0 chunks); 0 used
  LOCALLOCK hash: 8192 total in 1 blocks; 3744 free (0 chunks); 4448 used
  Timezones: 53584 total in 2 blocks; 3744 free (0 chunks); 49840 used
  ErrorContext: 8192 total in 1 blocks; 8160 free (4 chunks); 32 used
2009-02-09 13:52:29 CST lms_kia FATAL:  out of memory
2009-02-09 13:52:29 CST lms_kia DETAIL:  Failed on request of size 2456.





Re: [GENERAL] Out of memory on SELECT in 8.3.5

2009-02-09 Thread Matt Magoffin
 No, explain analyze for the query that wouldn't execute before but now
 does, with, I assume, a large work_mem.  I'd like to see how it
 differes from the one with smaller work_mem.

Ah, I pasted that in an earlier email, sent February 10, 2009 9:58:00 AM
GMT+13:00... that plan is the one still using 128MB of work_mem, after
changing the overcommit_ratio to 80.

Regards,
Matt



[GENERAL] Out of memory on SELECT in 8.3.5

2009-02-08 Thread Matt Magoffin
We've been having persistent out-of-memory errors occur in our production
8.3 deployment, which is now running 8.3.5. I'm not sure the query here is
the cause of the problem, but this is our most-recent example which
triggered an out-of-memory error for us.

Perhaps our configuration needs tweaking somewhere...

shared_buffers = 4000MB
#temp_buffers = 8MB
work_mem = 128MB
maintenance_work_mem = 256MB
#max_stack_depth = 2MB
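
As a back-of-envelope check of those settings against the 16GB box (pure
arithmetic on the values above, nothing measured):

```shell
ram_mb=16384
shared_buffers_mb=4000
# shared_buffers claims roughly a quarter of physical RAM:
echo "shared_buffers = $(( shared_buffers_mb * 100 / ram_mb ))% of RAM"   # 24%
```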

I was wondering if somebody could spot anything odd looking in the query
plan or the memory parameters dumped to the log that might be causing
this?

explain SELECT
s.sale_date,
s.vin,
s.dealer_region as sale_region,
s.dealer_area as sale_area,
s.dealer_code as sale_dealer,
d.name as sale_dealer_name,
s.model as sale_model,
s.sale_type,
ml.lead_id,
l.created,
lrd.reporting_date as lead_date,
ml.lead_pos as lead_order,
ml.lost_sale,
ml.range,
lrd.dealer_region as lead_region,
lrd.dealer_area as lead_area,
lrd.dealer_code as lead_dealer,
lrd.model as lead_model,
l.source as lead_source,
lrd.oem_type as lead_type,
lrd.oem_subtype as lead_subtype,
lrd.unique_all as lead_unique_all,
lrd.unique_region as lead_unique_region,
lrd.unique_area as lead_unique_area,
lrd.unique_dealer as lead_unique_dealer,
lrd.customer_fname,
lrd.customer_lname,
lrdm.item_value as lead_tracking_code,
address_street.street as owner_street,
address.city as owner_city,
address.state as owner_state,
address.postal_code as owner_postal_code,
address.country as owner_country,
email.list_value as owner_email,
phone.list_value as owner_phone

FROM mb_sale s
LEFT OUTER JOIN dealer d on d.dealer_code = s.dealer_code
LEFT OUTER JOIN mb_lead ml on ml.sale_id = s.id
LEFT OUTER JOIN lead_reporting_data lrd ON lrd.lead_id = ml.lead_id
LEFT OUTER JOIN lead_reporting_meta lrdm
ON lrdm.lead_id = ml.lead_id
AND lrdm.item_key = '[ALS:prospectid]TrackingCode'
AND lrdm.pos=1
LEFT OUTER JOIN lead l ON l.id = ml.lead_id

LEFT OUTER JOIN lead_reporting_address address ON address.lead_id = ml.lead_id
LEFT OUTER JOIN lead_reporting_street address_street
    ON address_street.address_id = address.id AND address_street.pos = 0

LEFT OUTER JOIN lead_reporting_list_data phone
    ON phone.lead_id = ml.lead_id AND phone.list_type = 'p' AND phone.pos = 0

LEFT OUTER JOIN lead_reporting_list_data email
    ON email.lead_id = ml.lead_id AND email.list_type = 'e' AND email.pos = 0

WHERE
    ((s.sale_type = 'd' AND ml.lost_sale = FALSE)
     OR (s.sale_type = 'c' AND ml.lead_pos = 0)
     OR (s.sale_type = '0'))
    AND s.sale_date BETWEEN date('2009-01-01') AND date('2009-01-31')
ORDER BY s.sale_date, s.vin

                                   QUERY PLAN
--------------------------------------------------------------------------------
 Sort  (cost=2297525.76..2297626.60 rows=40336 width=234)
   Sort Key: s.sale_date, s.vin
   ->  Hash Left Join  (cost=1450193.63..2294440.10 rows=40336 width=234)
         Hash Cond: (ml.lead_id = phone.lead_id)
         ->  Hash Left Join  (cost=1341176.18..2140101.73 rows=40336 width=219)
               Hash Cond: (ml.lead_id = email.lead_id)
               ->  Nested Loop Left Join  (cost=1237098.64..2000317.19 rows=40336 width=204)
                     ->  Nested Loop Left Join  (cost=1237098.64..1806747.26 rows=40336 width=141)
                           ->  Nested Loop Left Join  (cost=1237098.64..1633400.88 rows=40336 width=133)
                                 ->  Nested Loop Left Join  (cost=1237098.64..1413899.40 rows=40336 width=118)
                                       ->  Merge Left Join  (cost=1237098.64..1238143.69 rows=40336 width=89)
                                             Merge Cond: (ml.lead_id = lrdm.lead_id)
                                             ->  Sort  (cost=46384.08..46484.92 rows=40336 width=78)
                                                   Sort Key: ml.lead_id
                                                   ->  Hash Left Join  (cost=9430.33..43298.42 rows=40336 width=78)
                                                         Hash Cond:

Re: [GENERAL] Out of memory on SELECT in 8.3.5

2009-02-08 Thread Matt Magoffin
 erm..  How much memory do you have in the system?

This system has 16GB of RAM, and Postgres is basically the only service
running on the box.

 shared_buffers = 4000MB

 I hope you've got a fair bit more than 4G of memory if you're going to
 use 4G for shared buffers...  Once that memory is dedicated to shared
 buffers it's not going to be available for other usage.

Yep, we've got 16GB to work with here. I should have also mentioned the
architecture in my original post, sorry. SELECT version() returns this:

PostgreSQL 8.3.5 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC)
4.1.2 20070626 (Red Hat 4.1.2-14)

Regards,
Matt



Re: [GENERAL] Out of memory on SELECT in 8.3.5

2009-02-08 Thread Matt Magoffin
 PostgreSQL 8.3.5 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC)
 4.1.2 20070626 (Red Hat 4.1.2-14)

 Does the result from 'free' look reasonable on this box?

I think so:

             total       used       free     shared    buffers     cached
Mem:      16432296   16273964     158332          0     173536   14321340
-/+ buffers/cache:    1779088   14653208
Swap:      2096440        560    2095880


 How many PG processes are running, generally?

I see about 90 currently, of which 24 are Slony connections and the rest
are client connections.

 Do you see the PG process running this query growing to fill the
 available memory?  That query really didn't look like it was chewing up
 all that much memory to me..

Just running top, it does appear to chew through a fair amount of memory.
Here's a snapshot from top of the postgres processing running this query
from just before it ran out of memory:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 4486 postgres  18   0 4576m 3.6g 3.3g R   90 23.1   0:34.23 postgres: postgres lms_nna [local] EXPLAIN

These values did start out low, for example the RES memory started in the
130MB range, then climbed to the 3.6GB you see here.

 Have
 you tried reducing your work_mem setting to see if that makes the errors
 go away?  It might cause a different plan which might be less efficient
 for some queries, not sure if you have anything which would be
 dramatically affected by that..Do the row counts in the plan match up
 with what you'd expect?  Can you provide output from 'explain analyze'
 on a successful run?

I set the work_mem to 2MB, and the query does actually complete (explain
analyze output below), so does this mean that the query simply uses too
much memory for sorting/joining, and we'd have to either allocate enough
work_mem to allow the query to complete, or a smaller work_mem as shown
here to make the query use slower disk-based sorting? The row counts are
matching what we'd expect from this query.



                                   QUERY PLAN
--------------------------------------------------------------------------------
 Sort  (cost=2345301.47..2345402.31 rows=40336 width=234) (actual time=203429.526..203599.840 rows=49317 loops=1)
   Sort Key: s.sale_date, s.vin
   Sort Method:  external merge  Disk: 4776kB
   ->  Hash Left Join  (cost=1469244.28..2337665.81 rows=40336 width=234) (actual time=162362.791..202883.728 rows=49317 loops=1)
         Hash Cond: (ml.lead_id = phone.lead_id)
         ->  Hash Left Join  (cost=1349360.82..2159151.44 rows=40336 width=219) (actual time=155499.666..194941.423 rows=49317 loops=1)
               Hash Cond: (ml.lead_id = email.lead_id)
               ->  Nested Loop Left Join  (cost=1236733.28..180.90 rows=40336 width=204) (actual time=141890.479..186344.717 rows=49317 loops=1)
                     ->  Nested Loop Left Join  (cost=1236733.28..1806406.02 rows=40336 width=141) (actual time=141890.419..166782.070 rows=49317 loops=1)
                           ->  Nested Loop Left Join  (cost=1236733.28..1586899.03 rows=40336 width=126) (actual time=141890.368..166124.253 rows=49317 loops=1)
                                 ->  Nested Loop Left Join  (cost=1236733.28..1413543.83 rows=40336 width=118) (actual time=141890.281..156284.612 rows=49317 loops=1)
                                       ->  Merge Left Join  (cost=1236733.28..1237778.33 rows=40336 width=89) (actual time=141890.184..143717.900 rows=49317 loops=1)
                                             Merge Cond: (ml.lead_id = lrdm.lead_id)
                                             ->  Sort  (cost=43356.21..43457.05 rows=40336 width=78) (actual time=1722.385..1794.668 rows=49317 loops=1)
                                                   Sort Key: ml.lead_id
                                                   Sort Method:  external merge  Disk: 4152kB
                                                   ->  Hash Left Join  (cost=11354.33..38475.05 rows=40336 width=78) (actual time=919.319..1578.556 rows=49317 loops=1)
                                                         Hash Cond: ((s.dealer_code)::text = (d.dealer_code)::text)
                                                         ->  Hash Left Join  (cost=11018.04..37584.13 rows=40336 width=60) (actual time=914.936..1445.926 rows=49317 loops=1)
                                                               Hash Cond: (s.id = ml.sale_id)
                                                               Filter: (((s.sale_type = 'd'::bpchar) AND (NOT ml.lost_sale)) OR ((s.sale_type = 'c'::bpchar) AND (ml.lead_pos = 0)) OR (s.sale_type = '0'::bpchar))
                                                               ->  Index Scan using mb_sale_sale_date_idx on mb_sale s

Re: [GENERAL] Out of memory on SELECT in 8.3.5

2009-02-08 Thread Matt Magoffin
              total       used       free     shared    buffers     cached
 Mem:      16432296   16273964     158332          0     173536   14321340
 -/+ buffers/cache:    1779088   14653208
 Swap:      2096440        560    2095880

 That certainly looks fine..  And you've got 14G or so which should be
 available for this query.  Was this near the time the query was running?
 Could you give us what 'free' returns when the query is close to the
 out-of-memory error?  I'd expect the 2nd row under 'free' to be getting
 low for the allocation to fail.

It was near the time... here's free just before executing the query:

             total       used       free     shared    buffers     cached
Mem:      16432296   16342056      90240          0     116868   14561200
-/+ buffers/cache:    1663988   14768308
Swap:      2096440        448    2095992

And then here is free close to the time the query runs out of memory (but
while the query is still running):

             total       used       free     shared    buffers     cached
Mem:      16432296   16348260      84036          0      41344   14167384
-/+ buffers/cache:    2139532   14292764
Swap:      2096440        448    2095992

 Uhh..  I saw that your system was 64-bit, but is your PG process
 compiled as 64bit?  Maybe you're hitting an artificial 32-bit limit,
 which isn't exactly helped by your shared_buffers being set up so high
 to begin with?

Yes, it's compiled as 64-bit:

/opt/lms/postgres-8.3_64/bin/postgres: ELF 64-bit LSB executable, AMD
x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses
shared libs), for GNU/Linux 2.6.9, not stripped

We've had nagging memory-related issues with 8.3 that manifest themselves
like memory leaks... some posts I've made in the past have led to some
leaks getting fixed... but I've not been able to track down more specific
causes. It's just that over time Postgres seems to accumulate memory and
then we start to see out-of-memory errors like this one. Again, this query
could have nothing to do with the root cause, but this is just the example
I have to work with at the moment.

Regards,
Matt

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory on SELECT in 8.3.5

2009-02-08 Thread Matt Magoffin
 I think it must be compiled 64-bit, or he'd not be able to get
 shared_buffers that high to start with.  However, it's possible that the
 postmaster's been started under a ulimit setting that constrains each
 backend to just a few hundred meg of per-process memory.

Here's the output of ulimit -a by the postgres user the database is
running under:

[postg...@170226-db7 ~]$ ulimit -a
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
max nice(-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 139264
max locked memory   (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
max rt priority (-r) 0
stack size  (kbytes, -s) 10240
cpu time   (seconds, -t) unlimited
max user processes  (-u) 139264
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited

I think this means it does not have an artificial memory limit imposed,
but is there a specific setting beyond these I could check do you think?
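
In case it helps, here's the sort of check I have in mind (a rough sketch; the PID is a placeholder for the real postmaster pid, e.g. from the postmaster.pid file in the data directory):

```shell
# Sketch: inspect memory-related limits of a running process.
# PID below is illustrative -- substitute the actual postmaster pid.
PID=$$
if [ -r "/proc/$PID/limits" ]; then
    # Newer kernels (>= 2.6.24) expose per-process limits here
    grep -E 'Max (address space|data size|resident set)' "/proc/$PID/limits"
else
    # Older kernels: fall back to the current shell's own view
    ulimit -v
    ulimit -d
fi
```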

Regards,
Matt

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory on SELECT in 8.3.5

2009-02-08 Thread Matt Magoffin
 Hmm ... a gig here, a gig there, pretty soon you're talking about real
 memory?  He's got several sorts and hashes that are each taking over
 100MB according to the memory context dump, so it seems impossible that
 it all fits into a strict 32-bit address space.  There's surely no harm
 in double-checking the executable's file type though.

I did reply to this in a different email in this thread, but just to be
safe, yes, I did verify Postgres is a 64-bit binary:

[postg...@170226-db7 ~]$ file /opt/lms/postgres-8.3_64/bin/postgres
/opt/lms/postgres-8.3_64/bin/postgres: ELF 64-bit LSB executable, AMD
x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses
shared libs), for GNU/Linux 2.6.9, not stripped

Regards,
Matt

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory on SELECT in 8.3.5

2009-02-08 Thread Matt Magoffin
 How about cat /proc/pid/limits for the postmaster?
 And maybe:
 status
 stat
 maps

 Though I'm kinda grasping at straws here, to be honest.  I've had PG up
 and running through 16G of memory at a time before.

There is no /proc/pid/limits file, but here are

status:

Name:   postmaster
State:  S (sleeping)
SleepAVG:   95%
Tgid:   4698
Pid:4698
PPid:   1
TracerPid:  0
Uid:26  26  26  26
Gid:26  26  26  26
FDSize: 1024
Groups: 26
VmPeak:  4301216 kB
VmSize:  4301204 kB
VmLck: 0 kB
VmHWM: 85980 kB
VmRSS: 85980 kB
VmData:  940 kB
VmStk:84 kB
VmExe:  4480 kB
VmLib:  3512 kB
VmPTE:   284 kB
StaBrk: 00ade000 kB
Brk:0c3db000 kB
StaStk: 7fff21fecf40 kB
Threads:1
SigQ:   0/139264
SigPnd: 
ShdPnd: 
SigBlk: 
SigIgn: 01303000
SigCgt: 00014a07
CapInh: 
CapPrm: 
CapEff: 
Cpus_allowed:   
7fff,,,,,,,
Mems_allowed:   ,0001

stat:

4698 (postmaster) S 1 4686 4686 0 -1 4202496 27092033 059343 31 2809
6684 47834 13037706 1828097 15 0 1 0 9296 4404432896 21495
18446744073709551615 4194304 8778268 140733763735360 18446744073709551615
232627404115 0 0 19935232 84487 0 0 0 17 2 0 0 69

maps:

0040-0086 r-xp  08:03 1120168   
/opt/lms/64/postgres/8.3/bin/postgres
00a5f000-00a6b000 rw-p 0045f000 08:03 1120168   
/opt/lms/64/postgres/8.3/bin/postgres
00a6b000-00ade000 rw-p 00a6b000 00:00 0
0c39c000-0c3db000 rw-p 0c39c000 00:00 0
301220-3012234000 r-xp  08:07 5237058   
/usr/lib64/libxslt.so.1.1.17
3012234000-3012433000 ---p 00034000 08:07 5237058   
/usr/lib64/libxslt.so.1.1.17
3012433000-3012435000 rw-p 00033000 08:07 5237058   
/usr/lib64/libxslt.so.1.1.17
3012a0-3012a05000 r-xp  08:07 457029
/lib64/libcrypt-2.5.so
3012a05000-3012c04000 ---p 5000 08:07 457029
/lib64/libcrypt-2.5.so
3012c04000-3012c05000 r--p 4000 08:07 457029
/lib64/libcrypt-2.5.so
3012c05000-3012c06000 rw-p 5000 08:07 457029
/lib64/libcrypt-2.5.so
3012c06000-3012c34000 rw-p 3012c06000 00:00 0
362960-362961a000 r-xp  08:07 457227
/lib64/ld-2.5.so
3629819000-362981a000 r--p 00019000 08:07 457227
/lib64/ld-2.5.so
362981a000-362981b000 rw-p 0001a000 08:07 457227
/lib64/ld-2.5.so
3629a0-3629b46000 r-xp  08:07 457228
/lib64/libc-2.5.so
3629b46000-3629d46000 ---p 00146000 08:07 457228
/lib64/libc-2.5.so
3629d46000-3629d4a000 r--p 00146000 08:07 457228
/lib64/libc-2.5.so
3629d4a000-3629d4b000 rw-p 0014a000 08:07 457228
/lib64/libc-2.5.so
3629d4b000-3629d5 rw-p 3629d4b000 00:00 0
3629e0-3629e02000 r-xp  08:07 457229
/lib64/libdl-2.5.so
3629e02000-362a002000 ---p 2000 08:07 457229
/lib64/libdl-2.5.so
362a002000-362a003000 r--p 2000 08:07 457229
/lib64/libdl-2.5.so
362a003000-362a004000 rw-p 3000 08:07 457229
/lib64/libdl-2.5.so
362aa0-362aa82000 r-xp  08:07 457235
/lib64/libm-2.5.so
362aa82000-362ac81000 ---p 00082000 08:07 457235
/lib64/libm-2.5.so
362ac81000-362ac82000 r--p 00081000 08:07 457235
/lib64/libm-2.5.so
362ac82000-362ac83000 rw-p 00082000 08:07 457235
/lib64/libm-2.5.so
362b60-362b614000 r-xp  08:07 5231431   
/usr/lib64/libz.so.1.2.3
362b614000-362b813000 ---p 00014000 08:07 5231431   
/usr/lib64/libz.so.1.2.3
362b813000-362b814000 rw-p 00013000 08:07 5231431   
/usr/lib64/libz.so.1.2.3
362ba0-362bb33000 r-xp  08:07 5235374   
/usr/lib64/libxml2.so.2.6.26
362bb33000-362bd33000 ---p 00133000 08:07 5235374   
/usr/lib64/libxml2.so.2.6.26
362bd33000-362bd3c000 rw-p 00133000 08:07 5235374   
/usr/lib64/libxml2.so.2.6.26
362bd3c000-362bd3d000 rw-p 362bd3c000 00:00 0
2aaab000-2aaac000 rw-p 2aaab000 00:00 0
2aabd000-2aac1000 rw-p 2aabd000 00:00 0
2aac1000-2aacb000 r-xp  08:07 456988
/lib64/libnss_files-2.5.so
2aacb000-2acca000 ---p a000 08:07 456988
/lib64/libnss_files-2.5.so
2acca000-2accb000 r--p 9000 08:07 456988
/lib64/libnss_files-2.5.so
2accb000-2accc000 rw-p a000 08:07 456988
/lib64/libnss_files-2.5.so

Re: [GENERAL] Out of memory on SELECT in 8.3.5

2009-02-08 Thread Matt Magoffin
 Agreed, that ulimit isn't reflecting any such limit, but is that really
 the same environment the postmaster gets started in?  I wouldn't trust
 a system startup script to be launched in the same environment that a
 login shell gets.  You might try adding
   ulimit -a >/tmp/something
 to the startup script to find out for sure.

Our startup script is calling

su -l postgres ...

So I thought the limits would be the same (from the -l flag)? I then tried
to mimic this with the following:

[r...@170226-db7 ~]# su -l postgres -c "ulimit -a"
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
max nice(-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 139264
max locked memory   (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
max rt priority (-r) 0
stack size  (kbytes, -s) 10240
cpu time   (seconds, -t) unlimited
max user processes  (-u) 139264
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited

which I think should accurately reflect what the postmaster environment
should be seeing.

Regards,
Matt

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Out of memory on SELECT (from sort?) in 8.3

2008-09-25 Thread Matt Magoffin
 Huh, I was overthinking the problem.  xml_out has a memory leak :-(

 More than one in fact: a large leak (equal to size of output) in
 the executor context (hence query lifespan) and a small one in
 LibxmlContext (hence transaction lifespan).

 Try the patch here:
 http://archives.postgresql.org/pgsql-committers/2008-09/msg00159.php

I've applied this patch now to our staging and production environments,
and Postgres performed quite well after testing very large result sets
that were previously failing with out of memory errors... so the patch is
successful for this problem I was having. I didn't see any significant
increase in memory over the course of the query execution, in fact.

Thanks!

-- m@

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory use in 8.3 plpgsql with heavy use of xpath()

2008-08-11 Thread Matt Magoffin
 These numbers don't even have any demonstrable connection to Postgres,
 let alone to an xpath-related memory leak.  You're going to need to come
 up with a concrete test case if you want anyone to investigate.

   regards, tom lane

As I said in the start of this thread, this is all just a hunch, and the
graphs only show you the overall picture of this machine. However Postgres
is the only application running, and I can see on the box that all the
memory is being consumed by various postgres processes. In addition when
Postgres is restarted, all this memory is freed. Something changed in the
behavior of our database between running 8.1 and 8.3, and the most
significant change we made was the use of xpath() and the XML type.

My general question remains: should Postgres slowly be accumulating memory
like this, possibly up to the maximum amount of shared memory we've
allocated for it (4GB in this case)? If so then this memory trend isn't
something I should worry about.

-- m@



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory use in 8.3 plpgsql with heavy use of xpath()

2008-08-10 Thread Matt Magoffin
 Gregory Stark [EMAIL PROTECTED] writes:

 That's just a special case of what would be expected to happen with
 memory
 allocation anyways though. Few allocators return memory to the OS
 anyways.

 Well, that does happen on Linux for instance.  Since Matt knew in his
 original report that the xpath leak was intra-transaction, I assumed
 he must be using a platform where malloc/free can release memory back
 to the OS --- else he couldn't have seen that behavior from outside
 the backend.

 Still, it's entirely possible that some sort of high-water-mark is
 involved somewhere, perhaps in malloc's internal data structures.

 I was really going on a hunch, as I noticed a definite trend of postgres
 processes using more and more memory over time, and it only started after
 switching to 8.3 and starting to use xpath() quite heavily. Most of the
 memory data I have comes from Linux x64 systems with Postgres compiled as
 64-bit. But I did also notice what appeared to be a similar trend on an OS
 X PPC system.

 In any event, I'm sorry I can't provide any more helpful tests, but I'll
 report back how the system changes after I've patched these systems.

I'm following up on this thread from a month ago on PG 8.3 memory use. I'm
afraid even after updating to 8.3.3 + this patch, I still see the same
overall memory trend. You can see what I'm looking at here with a couple
of memory graphs.

The first comes from our old primary database server, which started on 8.1
and then we switched to 8.3. I've added a callout on the chart to when we
switched to 8.3. Before the switch, the application memory use (bright
green) was pretty consistent, around 250MB. After the switch, you can see
the application memory trends upwards over time.

http://msqr.us/pg83-mem/pg83-memlead-1.png

Around June, we then switched to some new hardware. It was on this machine
about a month ago I updated to 8.3.3 + the xml.c patch referred to in this
thread. I've pointed this out on this chart:

http://msqr.us/pg83-mem/pg83-memlead-2.png

Here the application memory continues to trend upwards over time since the
8.3.3 server was started, up to 1.2GB as of now.

Am I right in my understanding that in general I should not see this
ever-increasing use of app memory by PostgreSQL? Or will PostgreSQL
slowly work towards consuming all the shared memory we've allocated (4GB
in this case) and retain that? The only apps running on this box are
PostgreSQL and Slony for replication.

-- m@



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory use in 8.3 plpgsql with heavy use of xpath()

2008-07-06 Thread Matt Magoffin
 Gregory Stark [EMAIL PROTECTED] writes:
 That's just a special case of what would be expected to happen with
 memory
 allocation anyways though. Few allocators return memory to the OS
 anyways.

 Well, that does happen on Linux for instance.  Since Matt knew in his
 original report that the xpath leak was intra-transaction, I assumed
 he must be using a platform where malloc/free can release memory back
 to the OS --- else he couldn't have seen that behavior from outside
 the backend.

 Still, it's entirely possible that some sort of high-water-mark is
 involved somewhere, perhaps in malloc's internal data structures.

I was really going on a hunch, as I noticed a definite trend of postgres
processes using more and more memory over time, and it only started after
switching to 8.3 and starting to use xpath() quite heavily. Most of the
memory data I have comes from Linux x64 systems with Postgres compiled as
64-bit. But I did also notice what appeared to be a similar trend on an OS
X PPC system.

In any event, I'm sorry I can't provide any more helpful tests, but I'll
report back how the system changes after I've patched these systems.

-- m@

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory use in 8.3 plpgsql with heavy use of xpath()

2008-07-02 Thread Matt Magoffin
 OK, I'll try to come up with something. Do you have a recommended way of
 capturing the amount memory being used by Postgres related to this? I
 was
 thinking I would have a plpgsql function that loops a large number of
 times, calling a few xpath() calls,

 Yeah, that's what I'd try first.

   regards, tom lane

Below is a test case that simulates the use of xpath() within a plpgsql
function in my application. I'm not sure of a good way to measure the
retained memory before/after the script runs, however. I ran this several
times and the postgres process that ran it does have more memory allocated
afterwards than before, but I don't know what is expected and what isn't.
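
For reference, I was eyeballing the process memory with plain OS tools, along these lines (a rough sketch; the backend pid and the psql invocation are placeholders -- use the value reported by SELECT pg_backend_pid() in the session that runs tmp_test_loop()):

```shell
# Sketch: report the resident set size (RSS) of a backend process
# before and after running the test function.  BACKEND_PID is a
# placeholder pid for illustration.
BACKEND_PID=${BACKEND_PID:-$$}

# Print the RSS of a process in kB, stripping padding from ps output
rss_kb() { ps -o rss= -p "$1" | tr -d ' '; }

before=$(rss_kb "$BACKEND_PID")
# psql -c 'SELECT count(*) FROM tmp_test_loop();'   # run the test here
after=$(rss_kb "$BACKEND_PID")
echo "RSS before: ${before} kB, after: ${after} kB"
```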

You can adjust the number of loops that run at

num_loops int8 := 100;

-- m@

-
DROP TABLE IF EXISTS tmp_xml_test CASCADE;
DROP TABLE IF EXISTS tmp_xml_addr CASCADE;
CREATE TEMPORARY TABLE tmp_xml_test(
id int4, x xml, primary key (id));
CREATE TEMPORARY TABLE tmp_xml_addr(
id int4, xmlpos int4, street text, city text, state text, zip text,
primary key (id, xmlpos));

CREATE OR REPLACE FUNCTION tmp_extract_address(data_row tmp_xml_test)
RETURNS void AS
$BODY$
DECLARE
	addr_row tmp_xml_addr%ROWTYPE;
	tmp_txt text;
	tmp_array xml[];
BEGIN
	addr_row.id := data_row.id;
	DELETE FROM tmp_xml_addr WHERE id = data_row.id;
	tmp_array := xpath(
		'/po:purchaseOrder/*[name(.) = "shipTo" or name(.) = "billTo"]',
		data_row.x, ARRAY[ARRAY['po', 'http://www.example.com/PO1']]);
	IF array_upper(tmp_array, 1) > 0 THEN
		FOR idx IN 1..array_upper(tmp_array, 1) LOOP
			addr_row.xmlpos := idx;
			addr_row.street := upper(XMLSERIALIZE(CONTENT (xpath(
				'/po:purchaseOrder/*[name(.) = "shipTo" or name(.) = "billTo"]['
				|| idx || ']/po:street[1]/text()',
				data_row.x,
				ARRAY[ARRAY['po', 'http://www.example.com/PO1']]))[1] AS text));
			addr_row.city := upper(XMLSERIALIZE(CONTENT (xpath(
				'/po:purchaseOrder/*[name(.) = "shipTo" or name(.) = "billTo"]['
				|| idx || ']/po:city[1]/text()',
				data_row.x,
				ARRAY[ARRAY['po', 'http://www.example.com/PO1']]))[1] AS text));
			addr_row.state := upper(XMLSERIALIZE(CONTENT (xpath(
				'/po:purchaseOrder/*[name(.) = "shipTo" or name(.) = "billTo"]['
				|| idx || ']/po:state[1]/text()',
				data_row.x,
				ARRAY[ARRAY['po', 'http://www.example.com/PO1']]))[1] AS text));
			addr_row.zip := upper(XMLSERIALIZE(CONTENT (xpath(
				'/po:purchaseOrder/*[name(.) = "shipTo" or name(.) = "billTo"]['
				|| idx || ']/po:zip[1]/text()',
				data_row.x,
				ARRAY[ARRAY['po', 'http://www.example.com/PO1']]))[1] AS text));
			INSERT INTO tmp_xml_addr (id, xmlpos, street, city, state, zip)
			VALUES (addr_row.id, addr_row.xmlpos, addr_row.street,
				addr_row.city, addr_row.state, addr_row.zip);
		END LOOP;
	END IF;
	RETURN;
END;
$BODY$
LANGUAGE 'plpgsql';

CREATE OR REPLACE FUNCTION tmp_populate_addr() RETURNS trigger AS
$BODY$
BEGIN
PERFORM tmp_extract_address(NEW);
RETURN NEW;
END;
$BODY$
LANGUAGE 'plpgsql';

CREATE TRIGGER tmp_populate_addr_trigger AFTER INSERT OR UPDATE ON
tmp_xml_test
FOR EACH ROW EXECUTE PROCEDURE tmp_populate_addr();

DROP FUNCTION IF EXISTS tmp_test_loop();
CREATE OR REPLACE FUNCTION tmp_test_loop() RETURNS SETOF tmp_xml_addr AS
$BODY$
DECLARE
num_loops int8 := 100;
BEGIN
FOR idx IN 1..num_loops LOOP
INSERT INTO tmp_xml_test VALUES (idx,
$$<purchaseOrder xmlns="http://www.example.com/PO1" orderDate="1999-10-20">
   <shipTo country="US">
      <name>Alice Smith</name>
      <street>123 Maple Street</street>
      <city>Mill Valley</city>
      <state>CA</state>
      <zip>90952</zip>
   </shipTo>
   <billTo country="US">
      <name>Robert Smith</name>
      <street>8 Oak Avenue</street>
      <city>Old Town</city>
      <state>PA</state>
      <zip>95819</zip>
   </billTo>
   <comment>Hurry, my lawn is going wild!</comment>
   <items>
      <item partNum="872-AA">
         <productName>Lawnmower</productName>
         <quantity>1</quantity>
         <USPrice>148.95</USPrice>
         <comment>Confirm this is electric</comment>
      </item>
      <item partNum="926-AA">
         <productName>Baby Monitor</productName>
         <quantity>1</quantity>
         <USPrice>39.98</USPrice>
         <shipDate>1999-05-21</shipDate>
      </item>
   </items>
</purchaseOrder>$$);
END LOOP;
FOR idx IN 1..num_loops LOOP
UPDATE tmp_xml_test SET id = idx WHERE id = idx;

Re: [GENERAL] Memory use in 8.3 plpgsql with heavy use of xpath()

2008-07-02 Thread Matt Magoffin
 I looked through the libxml2 sources a little bit but couldn't
 immediately find the problem.  I'm fairly confident though that
 this could be reproduced outside Postgres, by replicating the sequence
 of libxml2 calls we make in xpath().  The next step should probably be
 to build a reproducer program and submit it to the libxml authors,
 or maybe run it under a debugging malloc package that could help
 identify where the leak is.  Anyone want to do that?

Ugh, I'd love to give that a shot but that is a bit outside my comfort zone.

-- m@



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory use in 8.3 plpgsql with heavy use of xpath()

2008-07-02 Thread Matt Magoffin
 I'm able to duplicate the memory leak in this function with the current
 Fedora 8 libxml2 (2.6.32).  The leak is definitely inside libxml2
 itself, because the bloat shows up here:


 I think this should fix it.

 Kris Jurka

It looks like xml.c source has changed considerably since 8.3 (looking at
revision 1.68.2.2 from the 8.3.3 release). Do you know where/if this
patch would apply to the 8.3 branch?

-- m@

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory use in 8.3 plpgsql with heavy use of xpath()

2008-07-02 Thread Matt Magoffin
 I think this should fix it.

 Kris Jurka

 It looks like xml.c source has changed considerably since 8.3 (looking at
 revision 1.68.2.2 from the 8.3.3 release). Do you know where/if this
 patch would apply to the 8.3 branch?

I diff'ed 1.74 and 1.68.2.2, and I'm guessing this new line could be added
after line 3203 in 1.68.2.2? I'll give that a try...

-- m@

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory use in 8.3 plpgsql with heavy use of xpath()

2008-07-02 Thread Matt Magoffin
 This part seems to match the bug though --- the leak is approximately
 the same size as all the text returned by xpath() within the current
 transaction.

 So there may be a second issue remaining to be found.  Can you put
 together a test case for the long-term small leak?

   regards, tom lane

Hmm, I'm not sure what else to add to this test case. This test case was a
good example of what our database is doing with xpath(); it is using quite
a number of them, that's all. Is there something else in particular you'd
be looking for in another test case?

-- m@

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory use in 8.3 plpgsql with heavy use of xpath()

2008-07-02 Thread Matt Magoffin
  Probably the right thing for you to do now is just to install the known
 fix, and keep an eye on your server for awhile to see if you still see
 any indication of the long-term leak behavior.

Certainly, that is my plan. Once I can get the patch rolled out to these
systems, I should be able to see if the memory-creep trend is resolved
after a couple of weeks or so.

-- m@

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Memory use in 8.3 plpgsql with heavy use of xpath()

2008-07-01 Thread Matt Magoffin
I've been using Postgres 8.3 in a production environment for some time
now, and have noticed a difference in long-term non-shared memory use by
Postgres since upgrading the system from 8.1. After initially making the
switch to 8.3 and making only one schema change (changed a text column to
xml) the server ran and used memory more or less the same as it had in
8.1.

Later, I added a large set of plpgsql trigger functions that operate on
that new xml column data, using the xpath() function to extract bits of
XML and populate them into normal tables. The server has been running in
this fashion for many months now, and there is a noticeable difference in
how Postgres is using memory now, in that over time its non-shared memory
use is climbing higher and higher. Right now I'm tracking this from data
captured by Munin on the system. The memory creep is very slight, but over
many months is easy to discern.

I was wondering if anyone else out there is making heavy use of xpath()
and related functions in plpgsql functions, and if they've noticed a trend
like this? Is this normal for Postgres to accumulate non-shared memory
over time, up to some configured maximum?

A second question I have about is about plpgsql and memory use in general.
If I run some of these plpgsql functions using a lot of xpath() calls on
large sets of data, huge amounts of memory are consumed (gigabytes) and
the memory seems to accumulate until the transaction the functions are
running in completes. Do invocations of plpgsql functions retain memory
they used until the end of the transaction?

-- m@



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory use in 8.3 plpgsql with heavy use of xpath()

2008-07-01 Thread Matt Magoffin
 Ugh.  Sounds like small memory leak inside libxml2 --- probably not
 going to be easy to find.  Can you put together a self-contained test
 case?

OK, I'll try to come up with something. Do you have a recommended way of
capturing the amount memory being used by Postgres related to this? I was
thinking I would have a plpgsql function that loops a large number of
times, calling a few xpath() calls, and could take a memory snapshot
before/after the function completes. Is there a call within Postgres I can
make, or should I just rely on OS-level tools for this?

-- m@

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Nested xpath() calls lose namespace support

2008-03-09 Thread Matt Magoffin
I was trying to extract XML fragments within a pl/pgsql function by nested
xpath() function calls, and found that when namespaces are used in the
XML, only the first xpath() call is able to correctly use namespaces.

First here is an example that works, when no namespaces are used:

BEGIN;
CREATE TEMPORARY TABLE tmp_xpath_test (x xml) ON COMMIT DROP;
INSERT INTO tmp_xpath_test VALUES (
    '<foo><bar x="y">bar1</bar><bar x="y">bar2</bar></foo>'::xml);
SELECT (xpath('/foo/bar[1]', t.x))[1] FROM tmp_xpath_test t;
SELECT xpath('/bar/@x', (xpath('/foo/bar[1]', t.x))[1]) FROM
tmp_xpath_test t;
COMMIT;

The first select is just there to show the result of the inner call to
xpath() in the second select, and the second select returns:

SELECT xpath('/bar/@x', (xpath('/foo/bar[1]', t.x))[1]) FROM
tmp_xpath_test t;
 xpath
---
 {y}
(1 row)

Now if I use XML with namespaces, the first SELECT works, but the second
never returns the expected results:

BEGIN;
CREATE TEMPORARY TABLE tmp_xpath_test (x xml) ON COMMIT DROP;
INSERT INTO tmp_xpath_test VALUES (
    '<a:foo xmlns:a="a:urn"><a:bar x="y">bar1</a:bar><a:bar
x="y">bar2</a:bar></a:foo>'::xml);
SELECT (xpath('/a:foo/a:bar[1]', t.x, ARRAY[ARRAY['a','a:urn']]))[1] FROM
tmp_xpath_test t;
SELECT xpath('/a:bar/@x', (xpath('/a:foo/a:bar[1]', t.x,
ARRAY[ARRAY['a','a:urn']]))[1],
ARRAY[ARRAY['a','a:urn']]) FROM tmp_xpath_test t;
COMMIT;

The select results are

SELECT (xpath('/a:foo/a:bar[1]', t.x, ARRAY[ARRAY['a','a:urn']]))[1] FROM
tmp_xpath_test t;
   xpath
---
 <a:bar x="y">bar1</a:bar>
(1 row)

SELECT xpath('/a:bar/@x', (xpath('/a:foo/a:bar[1]', t.x,
ARRAY[ARRAY['a','a:urn']]))[1],
ARRAY[ARRAY['a','a:urn']]) FROM tmp_xpath_test t;
 xpath
---
 {}
(1 row)

For the second select, I expected a single XML text node containing "y",
just like from the no-namespace result.

Am I doing anything wrong here?

-- m@




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Way to avoid expensive Recheck Cond in index lookup?

2007-12-19 Thread Matt Magoffin
 But it's true that it's possible for a slow expression to make the recheck
 very expensive. The planner doesn't have a very good understanding of how
 to
 tell whether the expression is likely to be slow.

 The case I ran into is thing like WHERE x = ANY $1::integer[] which
 become
 very slow for very large arrays. So I'm sure xpath() could possibly
 trigger
 the same case.

 But the number of matching pages would have to be quite large. And in that
 case the alternative (regular index scans) is going to suck too.

So the actual index function expression is _only_ evaluated in the
re-check for some (or all?) matches, if there are more matching pages than
can fit into the memory allocated by work_mem?

I also seemed to notice that after running a query that did return a large
number of results where the query plan did use the text[] index, running
the same query, or a similar one, would stop using the index lookup and
just do a full table scan. Would that be the optimizer changing plans
because of the statistics it gathered when it ran the query initially with
the index lookup but found the re-check condition took such a long time to
execute?

What I was trying to accomplish was to define a text[] index created from
the results of an xpath() expression, for the purposes of being able to do
fast index lookups using the && operator. But I'm finding that even when
the index is used, the query is very slow and I was assuming it was coming
from the re-check condition, which is defined as that xpath() call. So I'm
finding that this approach isn't working out as I had hoped.

-- m@

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Way to avoid expensive Recheck Cond in index lookup?

2007-12-18 Thread Matt Magoffin
Hello, I'm trying to find a way to use a text[] index lookup using an
xpath() function in 8.3, but I suspect this situation is not specific to
8.3 or this exact query style. The query plan looks like

 Bitmap Heap Scan on lead  (cost=37.39..7365.22 rows=2206 width=8)
   Recheck Cond:
((xpath('/als:auto-lead-service/als:[EMAIL 
PROTECTED]com.autoleadservice.TypeFlag]/text()'::text,
xml, '{{als,http://autoleadservice.com/xml/als}}'::text[]))::text[] &&
'{foo,bar}'::text[])
   ->  Bitmap Index Scan on lead_type_flag_gin_idx  (cost=0.00..36.83
rows=2206 width=0)
 Index Cond:
((xpath('/als:auto-lead-service/als:[EMAIL 
PROTECTED]com.autoleadservice.TypeFlag]/text()'::text,
xml,
'{{als,http://autoleadservice.com/xml/als}}'::text[]))::text[] &&
'{foo,bar}'::text[])

The problem for me is, the Recheck Cond is then on the xpath() function
used by the function-based index. My understanding is that then the
database must actually call the xpath() function again on all matches from
the index lookup. Are there ways to re-write the query so the recheck
condition is not necessary? Or a way to define the index differently so
that I might be able to still compare text[] values from the index without
needing the recheck?
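
For context, the index behind this plan is a functional GIN index along these lines (a simplified sketch with an illustrative xpath expression and names, not the exact production definition):

```sql
-- Sketch of the functional GIN index: the xpath() result is cast to
-- text[] so the GIN array operator class can index it.  The xpath
-- expression here is illustrative; the real one is the expression
-- shown in the plan above.
CREATE INDEX lead_type_flag_gin_idx ON lead USING gin (
    ((xpath('/als:auto-lead-service/als:flag/text()', xml,
            '{{als,http://autoleadservice.com/xml/als}}'::text[]))::text[])
);
```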

-- m@

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Way to avoid expensive Recheck Cond in index lookup?

2007-12-18 Thread Matt Magoffin
 The problem for me is, the Recheck Cond is then on the xpath() function
 used by the function-based index. My understanding is that then the
 database must actually call the xpath() function again on all matches
 from
 the index lookup.

 This is mistaken.  It only happens if there are so many hits that the
 bitmap becomes lossy (which you can control to some extent anyway by
 adjusting work_mem).

Ah, great. Thanks for clarifying.

-- m@

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Many 8.3 server crashes in xmlCleanupCharEncodingHandlers()

2007-12-17 Thread Matt Magoffin
Hello, I'm using 8.3b4 and keep experiencing server crash when I execute
various queries using XML functions. The crash backtraces look like this:

---
Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_INVALID_ADDRESS at address: 0x3f847ae1
0x004b140c in xmlCleanupCharEncodingHandlers ()
(gdb) bt
#0  0x004b140c in xmlCleanupCharEncodingHandlers ()
#1  0x004d01fc in xmlCleanupParser ()
#2  0x0026d514 in xpath (fcinfo=value temporarily unavailable, due to
optimizations) at xml.c:3441
#3  0x0010b908 in ExecMakeFunctionResult (fcache=0xa08474,
econtext=0x7fe9ec, isNull=0xbfffdb08 , isDone=0xbfffd8c8) at
execQual.c:1351
#4  0x0010a120 in ExecEvalArrayCoerceExpr (astate=0xa08428,
econtext=0x7fe9ec, isNull=0xbfffdb08 , isDone=0xbfffd8c8) at
execQual.c:3633
#5  0x0010b4c0 in ExecEvalFuncArgs (fcinfo=0xbfffd968, argList=value
temporarily unavailable, due to optimizations, econtext=0x7fe9ec) at
execQual.c:
#6  0x0010b5fc in ExecMakeFunctionResult (fcache=0xa0801c,
econtext=0x7fe9ec, isNull=0xbfffdbf8 , isDone=0x0) at execQual.c:1169
#7  0x0010c120 in ExecQual (qual=value temporarily unavailable, due to
optimizations, econtext=0x7fe9ec, resultForNull=value temporarily
unavailable, due to optimizations) at execQual.c:4501
#8  0x0010fcb8 in ExecScan (node=0x7fe960, accessMtd=0x119a20 IndexNext)
at execScan.c:131
#9  0x00107e80 in ExecProcNode (node=0x7fe960) at execProcnode.c:338
#10 0x0011c818 in ExecNestLoop (node=0x107bb8c) at nodeNestloop.c:170
#11 0x00107ee0 in ExecProcNode (node=0x107bb8c) at execProcnode.c:367
#12 0x0011e158 in ExecSort (node=0x107bb00) at nodeSort.c:102
#13 0x00107f20 in ExecProcNode (node=0x107bb00) at execProcnode.c:386
#14 0x00105f28 in ExecutorRun (queryDesc=value temporarily unavailable,
due to optimizations, direction=ForwardScanDirection, count=0) at
execMain.c:1233
#15 0x001c6834 in PortalRunSelect (portal=0x104be1c, forward=value
temporarily unavailable, due to optimizations, count=0, dest=0x103a228)
at pquery.c:943
#16 0x001c7c4c in PortalRun (portal=0x104be1c, count=2147483647,
isTopLevel=0 '\0', dest=0x103a228, altdest=0x103a228,
completionTag=0xbfffe3c4 ) at pquery.c:769
#17 0x001c59b4 in PostgresMain (argc=value temporarily unavailable, due
to optimizations, argv=value temporarily unavailable, due to
optimizations, username=value temporarily unavailable, due to
optimizations) at postgres.c:1844
#18 0x0018a468 in ServerLoop () at postmaster.c:3180
#19 0x0018b974 in PostmasterMain (argc=4, argv=0x800550) at postmaster.c:1028
#20 0x0012e2fc in main (argc=4, argv=0x800550) at main.c:188

---
#0  0x95c59ca8 in __kill ()
#1  0x95cfe7b8 in abort ()
#2  0x00281338 in ExceptionalCondition (conditionName=value temporarily
unavailable, due to optimizations, errorType=value temporarily
unavailable, due to optimizations, fileName=value temporarily
unavailable, due to optimizations, lineNumber=value temporarily
unavailable, due to optimizations) at assert.c:57
#3  0x002a1460 in pfree (pointer=0x207) at mcxt.c:581
#4  0x004b1428 in xmlCleanupCharEncodingHandlers ()
#5  0x004d01fc in xmlCleanupParser ()
#6  0x0026d514 in xpath (fcinfo=value temporarily unavailable, due to
optimizations) at xml.c:3441
#7  0x0010b908 in ExecMakeFunctionResult (fcache=0xa08474,
econtext=0x7fe9ec, isNull=0xbfffdb08 , isDone=0xbfffd8c8) at
execQual.c:1351
#8  0x0010a120 in ExecEvalArrayCoerceExpr (astate=0xa08428,
econtext=0x7fe9ec, isNull=0xbfffdb08 , isDone=0xbfffd8c8) at
execQual.c:3633
#9  0x0010b4c0 in ExecEvalFuncArgs (fcinfo=0xbfffd968, argList=value
temporarily unavailable, due to optimizations, econtext=0x7fe9ec) at
execQual.c:
#10 0x0010b5fc in ExecMakeFunctionResult (fcache=0xa0801c,
econtext=0x7fe9ec, isNull=0xbfffdbf8 , isDone=0x0) at execQual.c:1169
#11 0x0010c120 in ExecQual (qual=value temporarily unavailable, due to
optimizations, econtext=0x7fe9ec, resultForNull=value temporarily
unavailable, due to optimizations) at execQual.c:4501
#12 0x0010fcb8 in ExecScan (node=0x7fe960, accessMtd=0x119a20 IndexNext)
at execScan.c:131
#13 0x00107e80 in ExecProcNode (node=0x7fe960) at execProcnode.c:338
#14 0x0011c818 in ExecNestLoop (node=0x107c58c) at nodeNestloop.c:170
#15 0x00107ee0 in ExecProcNode (node=0x107c58c) at execProcnode.c:367
#16 0x0011e158 in ExecSort (node=0x107c500) at nodeSort.c:102
#17 0x00107f20 in ExecProcNode (node=0x107c500) at execProcnode.c:386
#18 0x00105f28 in ExecutorRun (queryDesc=value temporarily unavailable,
due to optimizations, direction=ForwardScanDirection, count=0) at
execMain.c:1233
#19 0x001c6834 in PortalRunSelect (portal=0x104781c, forward=value
temporarily unavailable, due to optimizations, count=0, dest=0x102d828)
at pquery.c:943
#20 0x001c7c4c in PortalRun (portal=0x104781c, count=2147483647,
isTopLevel=0 '\0', dest=0x102d828, altdest=0x102d828,
completionTag=0xbfffe3c4 ) at pquery.c:769
#21 0x001c59b4 in PostgresMain (argc=value temporarily unavailable, due
to optimizations, argv=value temporarily unavailable, due to
optimizations, 

Re: [GENERAL] Many 8.3 server crashes in xmlCleanupCharEncodingHandlers()

2007-12-17 Thread Matt Magoffin
 This was reported before,
 http://archives.postgresql.org/pgsql-general/2007-12/msg00716.php
 but neither he nor you have provided anything approximating a
 reproducible test case.  The interactions with libxml are messy
 enough that I'm not even going to think about fixing this without
 a test case to trace through.

I'll try to get a test case to you. I've had trouble getting it to
consistently crash, except via JDBC. Do you still happen to have that
database dump I provided to you previously? I can try to get the crash to
occur on that table.

I also noticed these in my log file, don't know if this is helpful:

TRAP: FailedAssertion(!(pointer == (void *) (((long) ((pointer)) + ((4) -
1)) & ~((long) ((4) - 1)))), File: mcxt.c, Line: 581)
LOG:  server process (PID 714) was terminated by signal 6: Abort trap

TRAP: BadArgument(!(((header->context) != ((void *) 0)) &&
((((Node*)((header->context)))->type) == T_AllocSetContext)), File:
mcxt.c, Line: 589)
LOG:  server process (PID 633) was terminated by signal 6: Abort trap

-- m@





Re: [GENERAL] Many 8.3 server crashes in xmlCleanupCharEncodingHandlers()

2007-12-17 Thread Matt Magoffin
 Not sure --- when are you thinking of, and what was the context?
 I don't usually keep sample data unless the issue still seems open.

I was referring to a dump I provided a link to you called
pg83-leads-sanitized.db which was around 20 Dec, with email subject Re:
[GENERAL] 8.3b2 XPath-based function index server crash.

 I also noticed these in my log file, don't know if this is helpful:
 TRAP: FailedAssertion(!(pointer == (void *) (((long) ((pointer)) + ((4) -
 1)) & ~((long) ((4) - 1)))), File: mcxt.c, Line: 581)
 LOG:  server process (PID 714) was terminated by signal 6: Abort trap
 TRAP: BadArgument(!(((header->context) != ((void *) 0)) &&
 ((((Node*)((header->context)))->type) == T_AllocSetContext)), File:
 mcxt.c, Line: 589)
 LOG:  server process (PID 633) was terminated by signal 6: Abort trap

 These are consistent with the idea that we've got a memory-allocation
 problem, ie, libxml is trying to access data that was already freed.
 But exactly where and how is not any more clear than before.

 FWIW, I think it's unlikely that a single query will reproduce this,
 because the problem looks to be an expectation that leftover data is
 still valid when it ain't.  What you need to be looking for is a series
 of two or more queries that crash PG.  Possibly it'll be easier to
 reproduce with that in mind ...

Thanks for the tips. I am trying to get some sort of reproducible series
of queries, but so far no luck. I'll let you know if I find anything.

-- m@





Re: [GENERAL] how to create aggregate xml document in 8.3?

2007-12-11 Thread Matt Magoffin
 Matt Magoffin [EMAIL PROTECTED] writes:
 Hello, I'm trying to write a query to return an XML document like
 <root foo="bar">
   <range range="x" count="123"/>
   <range range="y" count="345"/>
   ...
 </root>

 Something like this:

 regression=# select xmlelement(name root, xmlagg(x)) from
 regression-#   (select xmlelement(name range, xmlattributes(string4,
 count(*) as count)) as x from tenk1 group by string4) ss;
   
                             xmlelement
 ----------------------------------------------------------------
  <root><range string4="xx" count="2500"/><range string4="xx"
 count="2500"/><range string4="xx" count="2500"/><range
 string4="xx" count="2500"/></root>
 (1 row)

 You need a subquery because your setup requires two levels of
 aggregation: one to make the grouped counts, and then another one
 for the xmlagg() (which is basically just text concatenation).

Thanks very much, that helps. Now I'm wondering if it's also possible to
then fill in another nested element level in the XML output, from the rows
that are aggregated into the range count. So from your example,
something like

<root>
  <range string4="xx" count="2500">
    <string4 foo="bar1"/>
    <string4 foo="bar2"/>
  </range>
  <range string4="xx" count="2500">
    <string4 foo="bar3"/>
    ...
  </range>
  ...
</root>

Any thoughts on this?
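For the archives, one shape this could take, sketched and untested, with the names from the tenk1 example swapped for hypothetical t/grp/foo: the inner query aggregates the child elements per group, the outer query aggregates the resulting range elements under root.

```sql
-- Hedged sketch: two nested levels of xmlagg(). Both count(*) and the
-- inner xmlagg() aggregate over the same GROUP BY grp, which is legal;
-- only nesting one aggregate call directly inside another is not.
select xmlelement(name root, xmlagg(range_elem))
from (
    select xmlelement(name range,
                      xmlattributes(grp as string4, count(*) as count),
                      xmlagg(xmlelement(name string4,
                                        xmlattributes(foo as foo))))
           as range_elem
    from t
    group by grp
) sub;
```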

-- m@



Re: [GENERAL] how to create aggregate xml document in 8.3?

2007-12-11 Thread Matt Magoffin
 Matt Magoffin [EMAIL PROTECTED] writes:
 Thanks very much, that helps. Now I'm wondering if it's also possible to
 then fill in another nested element level in the XML output, from the
 rows
 that are aggregated into the range count.

 Something involving xmlagg in the sub-query, perhaps?  No time to
 experiment with it now.

Thanks for the first tip, anyway. I got stuck with trying this out
myself... I had ended up with

select xmlelement(
name "matchback-months",
xmlattributes(1 as count, 'true' as multi),
xmlagg(ranges)) from (
select xmlelement(name range,
xmlattributes(m.range, count(s.id) as "sales-conv-from-lead"),
xmlagg(sales)) from (
select xmlelement(name sale,
xmlattributes(ss.vin, ms.lead_id as "lead-id")
) as sales
from mb_sale ss
inner join mb_lead ms on ms.sale_id = ss.id
where
ss.sale_date >= date('2007-08-01') and
ss.sale_date <= date('2007-08-30')
and ss.sale_type = 'd'
and ms.range = m.range
order by ss.sale_date
) ssub
) as ranges
from mb_sale s
inner join mb_lead m on m.sale_id = s.id
where
s.sale_date >= date('2007-08-01') and s.sale_date <=
date('2007-08-30')
and s.sale_type = 'd'
group by m.range
order by m.range
) sub;

but this does not compile:

ERROR:  syntax error at or near "from"
LINE 20: from mb_sale s
 ^

If anyone has any suggestions, much appreciated.
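A guess at what trips the parser, offered as an untested sketch rather than a verified fix: the inner SELECT ends up with two FROM clauses, because ") as ranges" tries to alias the subquery in the middle of it, and the aggregated xmlelement expression never gets a column alias. A pared-down version of the outer two levels with those two issues addressed (the per-sale <sale> children are dropped here, since a plain 8.3 subquery cannot reference the outer m.range anyway):

```sql
-- Hedged sketch: one FROM per SELECT, and the aggregated xmlelement gets
-- a column alias (range_elem) inside the subquery's SELECT list.
select xmlelement(name "matchback-months",
                  xmlattributes(1 as count, 'true' as multi),
                  xmlagg(range_elem))
from (
    select xmlelement(name range,
                      xmlattributes(m.range,
                                    count(s.id) as "sales-conv-from-lead"))
           as range_elem
    from mb_sale s
    inner join mb_lead m on m.sale_id = s.id
    where s.sale_date >= date('2007-08-01')
      and s.sale_date <= date('2007-08-30')
      and s.sale_type = 'd'
    group by m.range
) sub;
```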

-- m@



[GENERAL] how to create aggregate xml document in 8.3?

2007-12-10 Thread Matt Magoffin
Hello, I'm trying to write a query to return an XML document like

<root foo="bar">
  <range range="x" count="123"/>
  <range range="y" count="345"/>
  ...
</root>

I started with

select xmlelement(name range, xmlattributes(m.range, count(s.id) as count))
from mb_sale s
inner join mb_lead m on m.sale_id = s.id
where
s.sale_date >= date('2007-08-01') and s.sale_date <= date('2007-08-30')
and s.sale_type = 'd' -- direct sale
group by m.range
order by m.range;

              xmlelement
 ------------------------------------
  <range range="0-30" count="215"/>
  <range range="31-60" count="202"/>
  <range range="61-90" count="64"/>
  <range range="91-120" count="22"/>
(4 rows)

which returns 4 individual rows as shown, but I can't figure out how to
correctly produce this with a root element and the range elements nested
under that.

I tried a variety of ways, including

select xmlelement(name "matchback-months",
xmlattributes('bar' as foo),
xmlagg(
xmlelement(name range,
xmlattributes(m.range, count(s.id) as count)
)
)
)
from mb_sale s
inner join mb_lead m on m.sale_id = s.id
where
s.sale_date >= date('2007-08-01') and s.sale_date <= date('2007-08-30')
and s.sale_type = 'd'
group by m.range
order by m.range;

which returns the error "aggregate function calls cannot be nested".

Is this type of output possible in 8.3?

-- m@







Re: [GENERAL] replication in Postgres

2007-11-26 Thread Matt Magoffin
 So what is the state-of-the-art in the Postgresql world if I _do_ want
 synchronous replication? 2-phase commit from the client application? Any
 success/horror stories about doing it in Java?

For Java, you could check out Sequoia (http://sequoia.continuent.org/) or
their commercial version uni/cluster. I believe it can be configured for
true synchronous replication.

-- m@



Re: [GENERAL] possible to create multivalued index from xpath() results in 8.3?

2007-11-20 Thread Matt Magoffin
 AFAICT that's exactly what it does.

 regression=# select xpath('//[EMAIL PROTECTED]mykey]/text()',
 '<value>ABC<foo key="mykey">XYZ</foo></value><foo key="mykey">RST</foo><foo>DEF</foo>');
    xpath
 -----------
  {XYZ,RST}
 (1 row)

 regression=#

 Of course this is of type xml[], but you can cast to text[] and then
 index.

Ugh, you're right of course! Somehow I had this wrong. So I tried to
create an index on the xml[] result by casting to text[], but I got the
"function must be immutable" error. Is there any reason the xml[] to
text[] cast is not immutable?

I worked around it by writing a function like

CREATE OR REPLACE FUNCTION xpath_to_text(xml_array xml[]) RETURNS text[] AS
$BODY$
BEGIN
RETURN xml_array::text[];
END;
$BODY$
LANGUAGE 'plpgsql' IMMUTABLE;

and wrapping my CREATE INDEX call with that, like:

create index type_flag_idx on lead using gin (
(xpath_to_text(xpath('/[EMAIL PROTECTED]foo]/text()', xml)))
);
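For completeness, a sketch of a query that such a GIN index could serve; the XPath and the value 'bar' are placeholders, and the expression has to match the indexed expression exactly for the planner to consider the index:

```sql
-- Hedged sketch: GIN over text[] supports the array operators
-- && (overlaps) and @> (contains), so membership lookups become
-- index-assisted.
select id
from lead
where xpath_to_text(xpath('/lead/flag/text()', xml)) && ARRAY['bar'];
```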

-- m@



Re: [GENERAL] possible to create multivalued index from xpath() results in 8.3?

2007-11-19 Thread Matt Magoffin
 Matt Magoffin [EMAIL PROTECTED] writes:
 2) Even if I could have an xpath() result return an array with multiple
 values, like {value1,value2,value3} I wasn't able to define a GIN index
 against the xml[] type. Should this be possible?

 Dunno about your other questions, but the answer to this one is no
 --- AFAICS there is no indexing support of any kind for the xml type
 in 8.3.  Try casting to text[] instead.

I had tried that, but it does not actually work because of my first issue,
of a way for the XPath to return 3 individual text nodes. If I return 3
elements, like

/[EMAIL PROTECTED]mykey] = {
  <elem key="mykey">value1</elem>,
  <elem key="mykey">value2</elem>,
  <elem key="mykey">value3</elem>
}

and cast that to text[], I get 3 XML strings, including the <elem></elem>
markup. I want only the element text content.

Should the xpath() function return 3 individual text nodes like this:

/[EMAIL PROTECTED]mykey]/text() = {
  value1,
  value2,
  value3
}

rather than concatenating these into a single text node result? I also
tried something like

string(/[EMAIL PROTECTED]mykey])

but that throws an XPath error. It looks like the code is converting this
to /x/string(/[EMAIL PROTECTED]mykey]) internally, which is not a valid XPath.

So if xpath() cannot return individual text node results, would a possible
solution be to write a trigger function that generates a tsvector from the
XML array, and then use text search to locate matches?
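To make that tsvector idea concrete, a sketch (untested; the column names, the XPath, and the 'simple' configuration are all placeholders):

```sql
-- Hedged sketch: keep a tsvector column in sync from the xpath() values
-- via a trigger, then index it with GIN for full-text lookups.
alter table lead add column search_vec tsvector;

create or replace function lead_search_vec_update() returns trigger as
$body$
begin
    -- xpath() yields xml[]; cast to text[] and flatten to one string
    new.search_vec := to_tsvector('simple',
        array_to_string(xpath('//elem/text()', new.xml)::text[], ' '));
    return new;
end;
$body$ language plpgsql;

create trigger lead_search_vec_trig
    before insert or update on lead
    for each row execute procedure lead_search_vec_update();

create index lead_search_vec_idx on lead using gin (search_vec);
```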

-- m@



[GENERAL] 8.3b2 XPath-based function index server crash

2007-11-18 Thread Matt Magoffin
Hello, I currently have a table in Postgres 8.1 with a text column that
contains XML. I use the xml2 module to define several XPath-based function
indices on that column, and this has worked very well.

I'm now trying to evaluate the native XML support in 8.3b2. I dumped this
table from 8.1, then loaded the data into a new table in 8.3 with the text
column re-defined as the xml type. The load happened without any errors.

Now I am trying to define equivalent XPath based function indices on the
xml column, but am running into a problem where the server processes the
'add index' command for a while, then crashes with a seg fault. While
investigating, I found I was able to reproduce the crash consistently by
executing a select statement with a large offset and limit in the query.

Initially I had thought some particular row in the xml column was causing
a problem. However, if I query directly for any specific row by its
primary key, the server does not crash and returns the result without
error.

Here are some details of the xml functions I am trying to perform. In 8.1,
I have an xml2 module function index defined like

assigned_area_idx btree (xpath_string(xml,
'/als:auto-lead-service/als:[EMAIL PROTECTED]AREA][1]'::text))

Here xml is the text column. In 8.3, then, I was trying to mimic this
same index with this:

create index assigned_area_idx ON lead (
XMLSERIALIZE( CONTENT
(xpath('/als:auto-lead-service/als:[EMAIL PROTECTED]AREA][1]/text()', xml,
ARRAY[ARRAY['als','http://autoleadservice.com/xml/als']]))[1] 
as text)
);

Is this the correct way to define such an index? I tried several different
ways, and this seemed to be the only way I could get it to be accepted.
However, after running for a while, this command fails and the postgres
server crashes.

So I explored with a SELECT statement, thinking there was some specific
XML document causing the crash. I could consistently execute this
statement to get a crash:

select XMLSERIALIZE( CONTENT
(xpath('/als:auto-lead-service/als:[EMAIL PROTECTED]AREA][1]/text()', xml,
ARRAY[ARRAY['als','http://autoleadservice.com/xml/als']]))[1] 
as text)
from lead order by id limit 1 offset 83367;

The query would take a long time to execute, and then crash. I took some
samples with Instruments (OS X) and found while the server was processing
this select, it was spending a ton of time in libxml2, as if it was
evaluating the XMLSERIALIZE on every row leading up to the first returned
offset row. Is that expected for this type of query (i.e. I was thinking
it would just find the first offset row, then execute the XMLSERIALIZE
statement on that row)?

If I query for that same row using its primary key instead of the offset,
the query runs fine and returns the expected results, i.e.

select id, XMLSERIALIZE( CONTENT
(xpath('/als:auto-lead-service/als:[EMAIL PROTECTED]AREA][1]/text()', xml,
ARRAY[ARRAY['als','http://autoleadservice.com/xml/als']]))[1] 
as text)
from lead where id = 84521;

-- m@







Re: [GENERAL] 8.3b2 XPath-based function index server crash

2007-11-18 Thread Matt Magoffin
In article [EMAIL PROTECTED],
 [EMAIL PROTECTED] (Tom Lane) wrote:

 Matt Magoffin [EMAIL PROTECTED] writes:
  So I explored with a SELECT statement, thinking there was some specific
  XML document causing the crash. I could consistently execute this
  statement to get a crash:
 
  select XMLSERIALIZE( CONTENT
  (xpath('/als:auto-lead-service/als:[EMAIL PROTECTED]AREA][1]/text()', xml,
  ARRAY[ARRAY['als','http://autoleadservice.com/xml/als']]))[1] 
  as text)
  from lead order by id limit 1 offset 83367;
 
 Could you put together a self-contained example?  It would probably take
 someone else a long time to guess at all the details you've left out.
 
 One point in particular is that this query would not use the index
 you mentioned, so I doubt that's relevant.
 
   regards, tom lane

Sorry if I left any relevant details out. I've been looking at this for 
a while so many things are probably obvious only to me. Could you hint 
at which additional details you think would be useful here?

-- m@



Re: [GENERAL] 8.3b2 XPath-based function index server crash

2007-11-18 Thread Matt Magoffin
 On Mon, Nov 19, 2007 at 10:02:20AM +1300, Matt Magoffin wrote:
 Sorry if I left any relevant details out. I've been looking at this for
 a while so many things are probably obvious only to me. Could you hint
 at which additional details you think would be useful here?

 What's being asked for is a self contained example. Can you make a
 single script file that when you run it on a blank database causes the
 error?

 If you can't, that in itself is a useful fact. But then we need schema
 dumps and such things. EXPLAIN output for the queries that break.

I understand. I cannot make the SELECT query nor the ADD INDEX command
break on an empty database. I cannot share this database data, either.
I'll try to fill in any missing details. My table schema is this:

create table lead_test (
id int8 not null,
xml xml,
processing_state varchar(20) not null,
created timestamp with time zone not null,
last_processed timestamp with time zone,
processing_step int4, processing_attempts int4,
primary key (id)
);

I have approximately 400k rows loaded in one particular database.

Back in 8.1, I have some additional indices defined using the xml2 module,
like this one:

create index assigned_area_idx ON lead (
xpath_string(xml, '/als:auto-lead-service/als:[EMAIL PROTECTED]AREA][1]')
);

I was first trying to do the same sort of thing in 8.3 using the native
XML support, like this:

create index assigned_area_idx ON lead (
xmlserialize(content(xpath('/als:auto-lead-service/als:[EMAIL PROTECTED]AREA][1]/text()',
xml,
ARRAY[ARRAY['als','http://autoleadservice.com/xml/als']]))[1] as text)
);

I'm not sure if this is the correct way to do this in 8.3, but what I want
is a text-based index that I can query against like I do in 8.1. On an
empty table, this works fine. However, on my table with data in it, this
produces a crash, and the log file contains this:

DEBUG:  0: ProcessUtility
LOCATION:  PortalRunUtility, pquery.c:1142
STATEMENT:  create index assigned_area_idx ON lead (
xmlserialize(content(xpath('/als:auto-lead-service/als:[EMAIL PROTECTED]AREA][1]/text()',
xml,
ARRAY[ARRAY['als','http://autoleadservice.com/xml/als']]))[1] as text)
);

... time passes here with heavy CPU use...

DEBUG:  0: StartTransaction
LOCATION:  ShowTransactionState, xact.c:3995
DEBUG:  0: name: unnamed; blockState:   DEFAULT; state: INPROGR,
xid/subid/cid: 0/1/0, nestlvl: 1, children: 
LOCATION:  ShowTransactionStateRec, xact.c:4020
DEBUG:  0: CommitTransaction
LOCATION:  ShowTransactionState, xact.c:3995
DEBUG:  0: name: unnamed; blockState:   STARTED; state: INPROGR,
xid/subid/cid: 0/1/0, nestlvl: 1, children: 
LOCATION:  ShowTransactionStateRec, xact.c:4020
DEBUG:  0: autovacuum: processing database lms_infiniti
LOCATION:  AutoVacWorkerMain, autovacuum.c:1600
DEBUG:  0: StartTransaction
LOCATION:  ShowTransactionState, xact.c:3995
DEBUG:  0: name: unnamed; blockState:   DEFAULT; state: INPROGR,
xid/subid/cid: 0/1/0, nestlvl: 1, children: 
LOCATION:  ShowTransactionStateRec, xact.c:4020
DEBUG:  0: pg_authid: vac: 0 (threshold 50), anl: 0 (threshold 50)
LOCATION:  relation_needs_vacanalyze, autovacuum.c:2566
DEBUG:  0: pg_database: vac: 0 (threshold 50), anl: 0 (threshold 50)
LOCATION:  relation_needs_vacanalyze, autovacuum.c:2566
DEBUG:  0: pg_attribute: vac: 0 (threshold 543), anl: 0 (threshold 296)
LOCATION:  relation_needs_vacanalyze, autovacuum.c:2566
DEBUG:  0: pg_class: vac: 0 (threshold 111), anl: 0 (threshold 81)
LOCATION:  relation_needs_vacanalyze, autovacuum.c:2566
DEBUG:  0: pg_index: vac: 0 (threshold 81), anl: 0 (threshold 65)
LOCATION:  relation_needs_vacanalyze, autovacuum.c:2566
DEBUG:  0: pg_opclass: vac: 0 (threshold 72), anl: 0 (threshold 61)
LOCATION:  relation_needs_vacanalyze, autovacuum.c:2566
DEBUG:  0: pg_am: vac: 0 (threshold 51), anl: 0 (threshold 50)
LOCATION:  relation_needs_vacanalyze, autovacuum.c:2566
DEBUG:  0: pg_amop: vac: 0 (threshold 119), anl: 0 (threshold 85)
LOCATION:  relation_needs_vacanalyze, autovacuum.c:2566
DEBUG:  0: pg_amproc: vac: 0 (threshold 99), anl: 0 (threshold 74)
LOCATION:  relation_needs_vacanalyze, autovacuum.c:2566
DEBUG:  0: pg_trigger: vac: 0 (threshold 63), anl: 0 (threshold 56)
LOCATION:  relation_needs_vacanalyze, autovacuum.c:2566
DEBUG:  0: CommitTransaction
LOCATION:  ShowTransactionState, xact.c:3995
DEBUG:  0: name: unnamed; blockState:   STARTED; state: INPROGR,
xid/subid/cid: 0/1/0, nestlvl: 1, children: 
LOCATION:  ShowTransactionStateRec, xact.c:4020
DEBUG:  0: proc_exit(0)
LOCATION:  proc_exit, ipc.c:95
DEBUG:  0: shmem_exit(0)
LOCATION:  shmem_exit, ipc.c:156
DEBUG:  0: exit(0)
LOCATION:  proc_exit, ipc.c:113
DEBUG:  0: reaping dead processes
LOCATION:  reaper, postmaster.c:2081
DEBUG:  0: server process (PID 966) exited with exit code 0

Re: [GENERAL] 8.3b2 XPath-based function index server crash

2007-11-18 Thread Matt Magoffin
 Matt Magoffin [EMAIL PROTECTED] writes:
 I understand. I cannot make the SELECT query nor the ADD INDEX command
 break on an empty database. I cannot share this database data, either.

 So try to make a test case using dummy data, or with suitably obscured
 versions of your real data.

 Also, consider compiling a debug-enabled build and showing us a gdb
 stack trace from the core dump.

Generating obscured dummy data would be a difficult task. Here's a
stacktrace from gdb after the server crashed:

(gdb) c
Continuing.
Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_INVALID_ADDRESS at address: 0x00a04000
0x8820 in __memcpy ()
(gdb) bt
#0  0x8820 in __memcpy ()
#1  0x004d9098 in xmlBufferAdd ()
#2  0x004e0dc4 in xmlParserInputBufferCreateMem ()
#3  0x004ced98 in xmlCtxtReadMemory ()
#4  0x0026ea0c in xpath (fcinfo=0x37) at xml.c:3183
#5  0x001095bc in ExecMakeFunctionResultNoSets (fcache=0x10563fc,
econtext=0x10562a8, isNull=0xbfffd4f8 \004Y?P, isDone=0x0) at
execQual.c:1412
#6  0x00109640 in ExecEvalArrayRef (astate=0x1056364, econtext=0x10562a8,
isNull=0xbfffd4f8 \004Y?P, isDone=0x0) at execQual.c:254
#7  0x0010d550 in ExecEvalXml (xmlExpr=0x1056324, econtext=0x10562a8,
isNull=0xbfffd5d8 \001)?\f, isDone=value temporarily unavailable, due
to optimizations) at execQual.c:3012
#8  0x0010ae18 in ExecEvalExprSwitchContext (expression=value temporarily
unavailable, due to optimizations, econtext=value temporarily
unavailable, due to optimizations, isNull=value temporarily unavailable,
due to optimizations, isDone=value temporarily unavailable, due to
optimizations) at execQual.c:3726
#9  0x00076490 in FormIndexDatum (indexInfo=0x1044ef4, slot=0x1044e1c,
estate=0x105621c, values=0xbfffd69c, isnull=0xbfffd67c ) at index.c:1041
#10 0x000770e8 in IndexBuildHeapScan (heapRelation=0x7a7428,
indexRelation=0x7a9ca0, indexInfo=0x1044ef4, callback=0x36c90
btbuildCallback, callback_state=0xbfffda08) at index.c:1705
#11 0x00036bfc in btbuild (fcinfo=value temporarily unavailable, due to
optimizations) at nbtree.c:119
#12 0x002875a8 in OidFunctionCall3 (functionId=value temporarily
unavailable, due to optimizations, arg1=8025128, arg2=8035488,
arg3=17059572) at fmgr.c:1580
#13 0x00074e94 in index_build (heapRelation=0x7a7428,
indexRelation=0x7a9ca0, indexInfo=0x1044ef4, isprimary=0 '\0') at
index.c:1331
#14 0x000762ac in index_create (heapRelationId=601643,
indexRelationName=0x1044fe0 assigned_area_idx, indexRelationId=983040,
indexInfo=0x1044ef4, accessMethodObjectId=403, tableSpaceId=0,
classObjectId=0x10505c0, coloptions=0x10509e0, reloptions=0, isprimary=0
'\0', isconstraint=0 '\0', allow_system_table_mods=0 '\0', skip_build=0
'\0', concurrent=6 '\006') at index.c:817
#15 0x000cef14 in DefineIndex (heapRelation=0x1044eb4,
indexRelationName=0x1044fe0 assigned_area_idx, indexRelationId=0,
accessMethodName=0x1044f80 btree, tableSpaceName=0x273a ,
attributeList=0x1044db4, predicate=0x19, options=0x0, src_options=0x0,
unique=0 '\0', primary=0 '\0', isconstraint=0 '\0', is_alter_table=0 '\0',
check_rights=1 '\001', skip_build=0 '\0', quiet=0 '\0', concurrent=0 '\0')
at indexcmds.c:453
#16 0x001c8aec in ProcessUtility (parsetree=0x103ac88,
queryString=0x1044c1c create index assigned_area_idx ON lead
(\nXMLSERIALIZE(CONTENT(xpath('/als:auto-lead-service/als:[EMAIL PROTECTED]com.autoleadservice.service.AssignDealer.AREA\][1]/text()',
xml,\nARRAY[ARRAY['als','http://au;..., params=0x0, isTopLevel=1 '\001',
dest=0x103ad04, completionTag=value temporarily unavailable, due to
optimizations) at utility.c:919
#17 0x001c566c in PortalRunUtility (portal=0x104bc1c,
utilityStmt=0x103ac88, isTopLevel=1 '\001', dest=0x103ad04,
completionTag=0xbfffe4ca ) at pquery.c:1173
#18 0x001c5af0 in PortalRunMulti (portal=0x104bc1c, isTopLevel=1 '\001',
dest=0x103ad04, altdest=0x103ad04, completionTag=0xbfffe4ca ) at
pquery.c:1266
#19 0x001c647c in PortalRun (portal=0x104bc1c, count=2147483647,
isTopLevel=0 '\0', dest=0x103ad04, altdest=0x103ad04,
completionTag=0xbfffe4ca ) at pquery.c:813
#20 0x001c1584 in exec_simple_query (query_string=0x1039e1c create index
assigned_area_idx ON lead
(\nXMLSERIALIZE(CONTENT(xpath('/als:auto-lead-service/als:[EMAIL PROTECTED]AREA\][1]/text()',
xml,\nARRAY[ARRAY['als','http://au;...) at postgres.c:962
#21 0x001c335c in PostgresMain (argc=4, argv=value temporarily
unavailable, due to optimizations, username=0x1001650 lms) at
postgres.c:3529
#22 0x00188f58 in ServerLoop () at postmaster.c:3175
#23 0x0018a464 in PostmasterMain (argc=8, argv=0x800420) at postmaster.c:1026
#24 0x0012d49c in main (argc=8, argv=0x800420) at main.c:188





Re: [GENERAL] 8.3b2 XPath-based function index server crash

2007-11-18 Thread Matt Magoffin
 Matt Magoffin [EMAIL PROTECTED] writes:
 (gdb) bt
 #0  0x8820 in __memcpy ()
 #1  0x004d9098 in xmlBufferAdd ()
 #2  0x004e0dc4 in xmlParserInputBufferCreateMem ()
 #3  0x004ced98 in xmlCtxtReadMemory ()
 #4  0x0026ea0c in xpath (fcinfo=0x37) at xml.c:3183
 #5  0x001095bc in ExecMakeFunctionResultNoSets (fcache=0x10563fc,
 econtext=0x10562a8, isNull=0xbfffd4f8 \004Y?P, isDone=0x0) at
 execQual.c:1412

 [ squint... ]  Hmm, that seems to be in the same general area as some
 post-beta2 fixes.  Before we spend more time on this, could you update
 to 8.3beta3 and see if the problem is still there?  b3 is not officially
 announced yet, but the tarballs are up.

Sure, I'll give it a shot and report back.

-- m@



Re: [GENERAL] 8.3b2 XPath-based function index server crash

2007-11-18 Thread Matt Magoffin
 Matt Magoffin [EMAIL PROTECTED] writes:
 (gdb) bt
 #0  0x8820 in __memcpy ()
 #1  0x004d9098 in xmlBufferAdd ()
 #2  0x004e0dc4 in xmlParserInputBufferCreateMem ()
 #3  0x004ced98 in xmlCtxtReadMemory ()
 #4  0x0026ea0c in xpath (fcinfo=0x37) at xml.c:3183
 #5  0x001095bc in ExecMakeFunctionResultNoSets (fcache=0x10563fc,
 econtext=0x10562a8, isNull=0xbfffd4f8 \004Y?P, isDone=0x0) at
 execQual.c:1412

 [ squint... ]  Hmm, that seems to be in the same general area as some
 post-beta2 fixes.  Before we spend more time on this, could you update
 to 8.3beta3 and see if the problem is still there?  b3 is not officially
 announced yet, but the tarballs are up.

I tried looking for the 8.3b3 tarballs, but can't find them on any
download host I've tried (starting with ftp.postgresql.org, looking in the
source directory). Are they available some place else?

-- m@





Re: [GENERAL] 8.3b2 XPath-based function index server crash

2007-11-18 Thread Matt Magoffin

 (gdb) bt
 #0  0x8820 in __memcpy ()
 #1  0x004d9098 in xmlBufferAdd ()
 #2  0x004e0dc4 in xmlParserInputBufferCreateMem ()
 #3  0x004ced98 in xmlCtxtReadMemory ()
 #4  0x0026ea0c in xpath (fcinfo=0x37) at xml.c:3183
 #5  0x001095bc in ExecMakeFunctionResultNoSets (fcache=0x10563fc,
 econtext=0x10562a8, isNull=0xbfffd4f8 \004Y?P, isDone=0x0) at
 execQual.c:1412

 [ squint... ]  Hmm, that seems to be in the same general area as some
 post-beta2 fixes.  Before we spend more time on this, could you update
 to 8.3beta3 and see if the problem is still there?  b3 is not officially
 announced yet, but the tarballs are up.

 I tried looking for the 8.3b3 tarballs, but can't find them on any
 download host I've tried (starting with ftp.postgresql.org, looking in the
 source directory). Are they available some place else?

I took the latest snapshot from /dev, but I still get the same crash:

Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_INVALID_ADDRESS at address: 0x00a4b000
0x8b1c in __memcpy ()
(gdb) bt
#0  0x8b1c in __memcpy ()
#1  0x004da098 in xmlBufferAdd ()
#2  0x004e1dc4 in xmlParserInputBufferCreateMem ()
#3  0x004cfd98 in xmlCtxtReadMemory ()
#4  0x0026c32c in xpath (fcinfo=value temporarily unavailable, due to
optimizations) at xml.c:3355
#5  0x001095cc in ExecMakeFunctionResultNoSets (fcache=0x10613fc,
econtext=0x10612a8, isNull=0xbfffd4f8 \004\004?,, isDone=0x0) at
execQual.c:1412
#6  0x00109650 in ExecEvalArrayRef (astate=0x1061364, econtext=0x10612a8,
isNull=0xbfffd4f8 \004\004?,, isDone=0x0) at execQual.c:254
#7  0x0010d560 in ExecEvalXml (xmlExpr=0x1061324, econtext=0x10612a8,
isNull=0xbfffd5d8 \001)?\f, isDone=value temporarily unavailable, due
to optimizations) at execQual.c:3012
#8  0x0010ae28 in ExecEvalExprSwitchContext (expression=value temporarily
unavailable, due to optimizations, econtext=value temporarily
unavailable, due to optimizations, isNull=value temporarily unavailable,
due to optimizations, isDone=value temporarily unavailable, due to
optimizations) at execQual.c:3726
#9  0x00075e00 in FormIndexDatum (indexInfo=0x1054a1c, slot=0x105451c,
estate=0x106121c, values=0xbfffd69c, isnull=0xbfffd67c ) at index.c:1065
#10 0x00076a58 in IndexBuildHeapScan (heapRelation=0x7bc220,
indexRelation=0x7bedcc, indexInfo=0x1054a1c, callback=0x36690
btbuildCallback, callback_state=0xbfffda08) at index.c:1734
#11 0x000365fc in btbuild (fcinfo=value temporarily unavailable, due to
optimizations) at nbtree.c:119
#12 0x00287168 in OidFunctionCall3 (functionId=value temporarily
unavailable, due to optimizations, arg1=8110624, arg2=8121804,
arg3=17123868) at fmgr.c:1580
#13 0x000747a4 in index_build (heapRelation=0x7bc220,
indexRelation=0x7bedcc, indexInfo=0x1054a1c, isprimary=0 '\0') at
index.c:1355
#14 0x00075c20 in index_create (heapRelationId=601643,
indexRelationName=0x1053fa0 assigned_area_idx, indexRelationId=991232,
indexInfo=0x1054a1c, accessMethodObjectId=403, tableSpaceId=0,
classObjectId=0x10550c4, coloptions=0x10554e4, reloptions=0, isprimary=0
'\0', isconstraint=0 '\0', allow_system_table_mods=0 '\0', skip_build=0
'\0', concurrent=0 '\0') at index.c:841
#15 0x000ce824 in DefineIndex (heapRelation=0x1053fcc,
indexRelationName=0x1053fa0 assigned_area_idx, indexRelationId=0,
accessMethodName=0x105400c btree, tableSpaceName=0x273a ,
attributeList=0x1054020, predicate=0x19, options=0x0, src_options=0x0,
unique=0 '\0', primary=0 '\0', isconstraint=0 '\0', is_alter_table=0 '\0',
check_rights=1 '\001', skip_build=0 '\0', quiet=0 '\0', concurrent=0 '\0')
at indexcmds.c:452
#16 0x001c946c in ProcessUtility (parsetree=0x103ac88,
queryString=0x1053e1c create index assigned_area_idx ON lead
(\nXMLSERIALIZE(CONTENT(xpath('/als:auto-lead-service/als:[EMAIL 
PROTECTED]AREA\][1]/text()',
xml,\nARRAY[ARRAY['als','http://au;..., params=0x0, isTopLevel=1 '\001',
dest=0x103ad04, completionTag=value temporarily unavailable, due to
optimizations) at utility.c:919
#17 0x001c5fec in PortalRunUtility (portal=0x1047c1c,
utilityStmt=0x103ac88, isTopLevel=1 '\001', dest=0x103ad04,
completionTag=0xbfffe4ca ) at pquery.c:1173
#18 0x001c6470 in PortalRunMulti (portal=0x1047c1c, isTopLevel=1 '\001',
dest=0x103ad04, altdest=0x103ad04, completionTag=0xbfffe4ca ) at
pquery.c:1266
#19 0x001c6dfc in PortalRun (portal=0x1047c1c, count=2147483647,
isTopLevel=0 '\0', dest=0x103ad04, altdest=0x103ad04,
completionTag=0xbfffe4ca ) at pquery.c:813
#20 0x001c1f04 in exec_simple_query (query_string=0x1039e1c create index
assigned_area_idx ON lead
(\nXMLSERIALIZE(CONTENT(xpath('/als:auto-lead-service/als:[EMAIL 
PROTECTED]com.autoleadservice.service.AssignDealer.AREA\][1]/text()',
xml,\nARRAY[ARRAY['als','http://au;...) at postgres.c:963
#21 0x001c3cdc in PostgresMain (argc=4, argv=value temporarily
unavailable, due to optimizations, username=0x1001454 lms) at
postgres.c:3531
#22 0x001894e8 in ServerLoop () at postmaster.c:3180
#23 0x0018a9f4 in PostmasterMain (argc=8, argv=0x800420) at 

Re: [GENERAL] 8.3b2 XPath-based function index server crash

2007-11-18 Thread Matt Magoffin
 Matt Magoffin [EMAIL PROTECTED] writes:
 I took the latest snapshot from /dev, but I still get the same crash:

 Drat :-(.  Please try a bit harder at generating a self-contained
 test case.  Given where the crash is happening, I suspect it may be
 only weakly if at all dependent on your input data, so you may not need
 to work that hard at generating dummy data.  In any case it's unlikely
 to depend much on the content (as opposed to the XML structure) of your
 data, so possibly you could sanitize your real data sufficiently by
 stripping out everything but the XML tags.

Hmm. I do have a load testing program with which I _might_ be able to
generate a sufficient amount of dummy data. However, it apparently will
require many tens of thousands of rows to reproduce the problem. Will I be
able to post a dump file from this table somewhere? I don't have a public
server I could host it from.
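If stripping the data down to tags turns out to be enough, a generator along
these lines could churn out the tens of thousands of rows needed (a sketch
only; the table name, namespace URI, and XML structure here are illustrative,
not the real schema):

```sql
-- Hypothetical dummy-data generator for reproducing the crash:
-- build many small XML documents that mimic the structure, not the content.
CREATE TABLE lead_test (id serial PRIMARY KEY, xml xml);

INSERT INTO lead_test (xml)
SELECT xmlparse(document
       '<als:auto-lead-service xmlns:als="http://example.com/als">'
       || '<als:prospect area="AREA-' || i || '"/>'
       || '</als:auto-lead-service>')
FROM generate_series(1, 100000) AS s(i);
```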

-- m@



[GENERAL] possible to create multivalued index from xpath() results in 8.3?

2007-11-18 Thread Matt Magoffin
I've been working with XML in Postgres 8.3 and am trying to find a way to
create a text-based index from an XPath that returns multiple nodes. For
example, if I have an XPath like

/[EMAIL PROTECTED]mykey]/text()

which might return a few text nodes like

value1
value2
value3

I'd like 3 index values associated with this row's index key: {value1,
value2, value3}. I was trying to figure out a way to define an index like
this but ran into a couple of issues:

1) The xpath() function returns an array of XML type, but in the above
example the text nodes are joined together into a single xml result node,
like {value1value2value3}. How can I get it to return 3 individual text
nodes, so an array of 3 values instead of 1?

2) Even if I could have an xpath() result return an array with multiple
values, like {value1,value2,value3} I wasn't able to define a GIN index
against the xml[] type. Should this be possible?
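One conceivable workaround for (2) would be an immutable wrapper that turns
xpath()'s xml[] result into a text[], which GIN's built-in array operator
classes can index. This is an untested sketch against 8.3 semantics (which
lacks unnest()), and the function, index, and path names are all made up:

```sql
-- Sketch: convert xpath()'s xml[] result to text[] so GIN can index it.
-- IMMUTABLE is required for use in an index expression.
CREATE OR REPLACE FUNCTION xpath_to_text(p text, x xml) RETURNS text[] AS $$
  SELECT ARRAY(
    SELECT (xpath(p, x))[i]::text
    FROM generate_series(1, array_upper(xpath(p, x), 1)) AS s(i))
$$ LANGUAGE sql IMMUTABLE;

-- Hypothetical GIN index over the extracted values:
CREATE INDEX lead_values_gin ON lead
  USING gin (xpath_to_text('/my/path/text()', xml));
```

Whether this sidesteps issue (1), the text nodes being concatenated into a
single result node, would still depend on how the XPath itself is written.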

-- m@



[GENERAL] trigger inheritence?

2001-03-20 Thread Matt Magoffin

I had to give a loud "Duh!" after reading your response: that's exactly what
I wanted, thanks!

Now I wonder if there is a way for a trigger that's created on a base table
to be fired on any table that inherits from that base table. Otherwise I'm
still stuck creating triggers for each table that I create (that's inherited
from the base table).

For example, if I have:

CREATE TABLE foo (
"name" text
);

CREATE TRIGGER foo_trigger BEFORE INSERT
ON foo FOR EACH ROW EXECUTE PROCEDURE a_func();

CREATE TABLE bar (
) INHERITS (foo);

I would like foo_trigger to be fired when I execute an

INSERT INTO bar VALUES ('Hello, world.');

but it only seems to fire if the INSERT was on foo, not bar. Any way to do
this?
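As far as I know triggers are not inherited, so each child table needs its
own copy pointing at the shared function. For the example above that would
be (a sketch, reusing the a_func() from the example):

```sql
-- Triggers defined on foo do not fire for INSERTs into bar;
-- recreate the trigger on the child table instead.
CREATE TRIGGER bar_trigger BEFORE INSERT
ON bar FOR EACH ROW EXECUTE PROCEDURE a_func();
```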

-- m@

"Stephan Szabo" [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...

 I'd guess you could use TG_RELID or TG_RELNAME inside your trigger.

 On Tue, 20 Mar 2001, Matt Magoffin wrote:

  Is there any way to make use of the tableoid either as an argument to
the
  function or as a reference within the function (in plpgsql)? For
example,
  I'd like to either
 
  CREATE TRIGGER set_default_value BEFORE INSERT
  ON foo FOR EACH ROW EXECUTE PROCEDURE set_value('tableoid');
 
  and within the function set_value():
 
  SELECT p.relname::text FROM pg_class p WHERE p.oid = TG_ARGV[0]::oid;
 
  - or -
 
  CREATE TRIGGER set_default_value BEFORE INSERT
  ON foo FOR EACH ROW EXECUTE PROCEDURE set_value();
 
  and within the function set_value():
 
  SELECT p.relname::text FROM pg_class p WHERE p.oid = NEW.tableoid;
 
  The former produces the error
 
   ERROR:  text_oid: error in "tableoid": can't parse "tableoid"
 
  and the latter produces the error:
 
  ERROR:  record new has no field tableoid
 
  I gather the former method is passing the string "tableoid" into the
  set_value() function. I just want to be able to write one function that
uses
  the tableoid value to produce different results instead of unique
functions
  for each table I create.








Re: [GENERAL] Re : overriding default value in inherited column (+ set_value function)

2001-03-20 Thread Matt Magoffin

Is there any way to make use of the tableoid either as an argument to the
function or as a reference within the function (in plpgsql)? For example,
I'd like to either

CREATE TRIGGER set_default_value BEFORE INSERT
ON foo FOR EACH ROW EXECUTE PROCEDURE set_value('tableoid');

and within the function set_value():

SELECT p.relname::text FROM pg_class p WHERE p.oid = TG_ARGV[0]::oid;

- or -

CREATE TRIGGER set_default_value BEFORE INSERT
ON foo FOR EACH ROW EXECUTE PROCEDURE set_value();

and within the function set_value():

SELECT p.relname::text FROM pg_class p WHERE p.oid = NEW.tableoid;

The former produces the error

 ERROR:  text_oid: error in "tableoid": can't parse "tableoid"

and the latter produces the error:

ERROR:  record new has no field tableoid

I gather the former method is passing the string "tableoid" into the
set_value() function. I just want to be able to write one function that uses
the tableoid value to produce different results instead of unique functions
for each table I create.
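One way to get there without C functions or per-table copies might be
plpgsql's TG_RELNAME variable (as Stephan suggested elsewhere in this
thread), which carries the name of the table that fired the trigger. A
sketch only, written with the modern RETURNS trigger spelling, and with an
illustrative "type" column and values:

```sql
-- One shared trigger function that branches on TG_RELNAME
-- instead of passing tableoid as an argument:
CREATE OR REPLACE FUNCTION set_value() RETURNS trigger AS '
BEGIN
    IF TG_RELNAME = ''bar'' THEN
        NEW.type := 1;
    ELSE
        NEW.type := 0;
    END IF;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER set_default_value BEFORE INSERT
ON foo FOR EACH ROW EXECUTE PROCEDURE set_value();
```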

-- m@

"Nico" [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
 Actually not possible with the "DEFAULT" syntax inside the CREATE TABLE.
 Instead, you could use explicit triggers, for example:

 CREATE table foo (
  "type" int2
 );

 CREATE table bar (
  "type" int2
 ) INHERITS (foo);

 CREATE TRIGGER set_default_value BEFORE INSERT
 ON foo FOR EACH ROW
 EXECUTE PROCEDURE set_value('type', 0);

 CREATE TRIGGER set_default_value BEFORE INSERT
 ON bar FOR EACH ROW
 EXECUTE PROCEDURE set_value('type', 1);

 The function set_value has to be written in C language (plpgsql lang
doesn't
 allow parameter passing for trigger functions).

 Has someone already written that function?

 regards, nico


  From: "Matt Magoffin" [EMAIL PROTECTED]
  X-Newsgroups: comp.databases.postgresql.general
  Subject: overriding default value in inherited column
  Date: Mon, 19 Mar 2001 18:39:27 -0800
 
  Is there an easy way to override the defined default value of a column
in
  an inherited table? For example:
 
  CREATE table foo (
   "type" int2 DEFAULT 0
  );
 
  CREATE table bar (
   "type" int2 DEFAULT 1
  ) INHERITS (foo);
 
  This gives the error:
 
  ERROR: CREATE TABLE: attribute "type" already exists in inherited schema
 
  which is understandable. In essence what I want to do is have each table
  schema default to a different value.
 
  -- m@







[GENERAL] Out of memory on SELECT (from sort?) in 8.3

-- Thread Matt Magoffin
20480 used
  Record information cache: 24576 total in 2 blocks; 15984 free (5 chunks); 8592 used
  Prepared Queries: 24576 total in 2 blocks; 15984 free (5 chunks); 8592 used
  Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  Operator lookup cache: 24576 total in 2 blocks; 11888 free (5 chunks); 12688 used
  MessageContext: 8192 total in 1 blocks; 6976 free (1 chunks); 1216 used
  smgr relation table: 24576 total in 2 blocks; 13904 free (4 chunks); 10672 used
  TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks); 32 used
  Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  PortalMemory: 8192 total in 1 blocks; 7616 free (0 chunks); 576 used
    PortalHeapMemory: 2048 total in 1 blocks; 424 free (0 chunks); 1624 used
      ExecutorState: 841031232 total in 51159 blocks; 1712232 free (56 chunks); 839319000 used
        TIDBitmap: 2088960 total in 8 blocks; 243200 free (25 chunks); 1845760 used
        TupleSort: 28303408 total in 13 blocks; 5346944 free (14 chunks); 22956464 used
        ExprContext: 8192 total in 1 blocks; 8096 free (0 chunks); 96 used
        ExprContext: 8192 total in 1 blocks; 8096 free (0 chunks); 96 used
        ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
    PortalHeapMemory: 1024 total in 1 blocks; 944 free (0 chunks); 80 used
  Relcache by OID: 24576 total in 2 blocks; 13872 free (3 chunks); 10704 used
  CacheMemoryContext: 817392 total in 20 blocks; 23488 free (1 chunks); 793904 used
    CachedPlan: 15360 total in 4 blocks; 1528 free (0 chunks); 13832 used
    CachedPlanSource: 15360 total in 4 blocks; 1440 free (0 chunks); 13920 used
    unnamed prepared statement: 8192 total in 1 blocks; 3856 free (2 chunks); 4336 used
    CachedPlan: 1024 total in 1 blocks; 312 free (0 chunks); 712 used
    CachedPlanSource: 1024 total in 1 blocks; 56 free (0 chunks); 968 used
    SPI Plan: 1024 total in 1 blocks; 832 free (0 chunks); 192 used
    CachedPlan: 1024 total in 1 blocks; 336 free (0 chunks); 688 used
    CachedPlanSource: 1024 total in 1 blocks; 80 free (0 chunks); 944 used
    SPI Plan: 1024 total in 1 blocks; 808 free (0 chunks); 216 used
    CachedPlan: 3072 total in 2 blocks; 488 free (0 chunks);