Re: [GENERAL] Next steps in debugging database storage problems?

2014-12-11 Thread Jacob Bunk Nielsen
Hi

A final follow-up from my side to this post, for anyone who may find this
thread in the archives in the future.

On the 15th of August Jacob Bunk Nielsen ja...@bunk.cc wrote:
 On the 1st of July 2014 Jacob Bunk Nielsen ja...@bunk.cc wrote:

 We have a PostgreSQL 9.3.4 running in an LXC container on Debian
 Wheezy on a Linux 3.10.43 kernel on a Dell R620 server. Data are
 stored on an XFS file system. We are seeing problems such as:

 unexpected data beyond EOF in block 2 of relation base/805208133/1238511128

 and

 could not read block 5 in file base/805208348/1259338118: read only
 0 of 8192 bytes

 This seems to occur every few days after the server has been up for
 30-40 days. If we reboot the server it'll be another 30-40 days before
 we see any problems again. [...]

 This time it took 45 days before this happened:

 LOG:  unexpected EOF on standby connection
 ERROR:  unexpected data beyond EOF in block 140 of relation 
 base/805208885/805209852
 HINT:  This has been seen to occur with buggy kernels; consider updating your 
 system.

 It always happens with small tables with lots of inserts and deletes.
 From previous experience we know that it's now going to happen again in
 a few days, so we'll probably try to schedule a reboot to give us
 another 30-40 days.

We have concluded that it's probably a bug related to autovacuuming. Since
we changed how often we vacuum those busy tables, we haven't seen any
problems for the past two months.

We changed:

autovacuum_vacuum_threshold = 10 (default: 50)

and 

autovacuum_vacuum_scale_factor = 0 (default 0.2, 0 turns it off)

The default settings caused autovacuum to run every minute, and
eventually we would hit some bug that caused the problems described
above.
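
If you only want to change this for the affected busy tables rather than
globally, the same settings can also be applied per table. A minimal sketch
(the table name is just an example, not one of ours):

ALTER TABLE busy_queue_table SET (
    autovacuum_vacuum_threshold = 10,
    autovacuum_vacuum_scale_factor = 0
);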

My colleague, who has done most of the work finding this, has promised to try
to create a working test case and file a proper bug report.

Best regards

Jacob





Re: [GENERAL] new index type with clustering in mind.

2014-12-11 Thread Jack Douglas
 Currently, one issue you're going to face is that brin doesn't rescan a
 range to find the tightest possible summary tuple.

That's going to be an issue I think, thanks for mentioning it. We'd need
some sort of mechanism for achieving this without a complete REINDEX, even
if it only reset the min/max when all the blocks in the range are entirely
cleared out. Ah well :)

 Another issue is how to find the best possible ordering.  For minmax
 opclasses it's easy, but for other opclass designs it's less clear what to
do.
 Even for minmax you need to find some way to communicate to the system
 what's the order to follow ...

Do you mean the ordering for the clustered table tuples or the ordering of
index tuples in the BRIN index? I'm assuming the former, because I'm also
assuming you always scan an entire BRIN index, as there isn't a trivial way
of optimizing the index scan for ranges (unless you 'granulate' the ranges
along the lines of this: http://dba.stackexchange.com/a/22295/1396)?

If you mean the clustering order, for the use cases I'm concerned with it
isn't important - as long as tuples with the same cluster key gravitate
towards the same blocks, it often doesn't matter what order those blocks are
in because the main mission is to reduce the number of blocks scanned.
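
For anyone following along, a minimal sketch of the kind of minmax BRIN index
I have in mind (names invented, syntax as in the in-development brin patch):

CREATE INDEX orders_created_brin ON orders USING brin (created_at)
    WITH (pages_per_range = 32);

-- a range predicate then only has to visit block ranges whose min/max
-- summary overlaps it:
SELECT count(*) FROM orders
WHERE created_at >= '2014-01-01' AND created_at < '2014-02-01';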






[GENERAL] Out of memory condition

2014-12-11 Thread Carlos Henrique Reimer
Hi,

I've been facing an out of memory condition after running SLONY for several
hours to get a 1TB database with about 23,000 tables replicated. The error
occurs after about 50% of the tables have been replicated.

Most of the 48GB memory is being used for file system cache but for some
reason the initial copy of one table performed by SLONY abended due to an
out of memory condition. The table that was being transferred at the moment
of the abend has two text columns.

After the OOM condition is raised, select * of that specific table also
returns out of memory condition.

I guess postgresql is trying to perform an atomic allocation (those which
cannot wait for reclaim) to get a contiguous memory area and is failing due
to memory fragmentation.

My idea to prevent this issue is to reserve 500MB of free memory for
atomic allocations using vm.min_free_kbytes = 5 in
/etc/sysctl.conf.

Is this a good approach to solve it?

Another question: is it safe to flush file system cache using these steps:

1) Shutdown postgresql
2) sync
3) echo 1 > /proc/sys/vm/drop_caches;
4) Startup postgresql

Some data about the issue:

SLONY error:
2014-12-01 12:14:56 BRST ERROR  remoteWorkerThread_1: copy to stdout on
provider - PGRES_FATAL_ERROR ERROR:  out of memory
DETAIL:  Failed on request of size 123410655.

# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.3 (Santiago)

# uname -m
x86_64

# free
 total   used   free sharedbuffers cached
Mem:  49422076   49038348 383728  0 268488   47520476
-/+ buffers/cache:1249384   48172692
Swap: 16777208  0   16777208

# cat /proc/meminfo | grep Commit
CommitLimit:41488244 kB
Committed_AS: 689312 kB

# /sbin/sysctl vm.min_free_kbytes
vm.min_free_kbytes = 135168

After SLONY gets the out of memory condition, select * of the table also
does not work:
FiscalWeb=# select * from 8147_spunico.sincdc;
ERROR:  out of memory
DETAIL:  Failed on request of size 268435456.

Backup of the table using pg_dump also gives out of memory condition.

Buddyinfo indicates memory fragmentation after getting out of memory
condition:
# cat /proc/buddyinfo
Node 0, zone  DMA  3  2  2  3  2  1  1
0  1  0  3
Node 0, zoneDMA32  94091  69426  30367   7531996126  8
0  0  1  0
Node 0, zone   Normal   6840 23  0  0  0  0  0
0  0  0  1
Node 1, zone   Normal730338159 93 44 26 11
9  3  1  3
Node 2, zone   Normal 68535309144 60 18 13
12 32 29  7
Node 3, zone   Normal 319246 341233 173115  52602   5989646232
63  8  3  1

postgres=# select version();

version
---
 PostgreSQL 8.3.21 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC)
4.4.6 20120305 (Red Hat 4.4.6-4)
(1 row)


Thank you!

Carlos Reimer


Re: service allowing arbitrary relations was Re: [GENERAL] hstore, but with fast range comparisons?

2014-12-11 Thread Andy Colson

On 12/10/2014 7:20 PM, Guyren Howe wrote:




I want to do something that is perfectly satisfied by an hstore column. *Except* that 
I want to be able to do fast (ie indexed) <, > etc comparisons, not just 
equality.

 From what I can tell, there isn’t really any way to get hstore to do this, so 
I’ll have to go to a key-value table. But I thought I’d ask just in case I’m 
missing something.



I think you're missing something.

Is it one field in the hstore?  Did you try an expression index?

create index cars_mph on cars ( (data->'mph') );

That's a btree index, which should support < and >.  (Although I've never tried
it.)


Sorry I wasn’t clear. I need a fast comparison on whatever keys someone chooses 
to put in the hstore.

I’m creating a web service where you can post whatever keys and values you 
wish. I am leaning toward having folks declare the relations they want to store 
and the field types, so I could just create a table for each one, but I want 
this to be able to scale to a lot of data for a lot of people.

Perhaps I could give everyone their own schema and just create tables for their 
relations. How heavyweight are schemas?

But if I don’t want to do that, I’m leaning toward a table with a relation name 
and a hstore, and then separate k-v tables for different types. I was thinking 
of keeping the hstore because they will often be searching on fewer fields than 
they’re retrieving, so this would avoid having to do a join for every field 
they need.
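
Roughly the layout I have in mind, as a sketch with invented names:

create table relations (
    relation_name text not null,
    attrs         hstore not null
);

create table relation_values_int (
    relation_name text not null,
    key           text not null,
    value         integer not null
);
create index relation_values_int_kv on relation_values_int (key, value);

The hstore row carries everything needed for retrieval; the typed k-v tables
only exist to make range searches on individual keys indexable.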

Regards,

Guyren G Howe
Relevant Logic LLC






How many rows are we talking?  Have you benchmarked it with normal 
table scans?  They're pretty fast, especially repeated queries because 
the data is cached.  (Normal columns would be faster; a table scan + 
hstore will add a little overhead.)
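
For example, a quick check along these lines (key name invented) would show
how a plain scan behaves before adding any indexes:

explain analyze select * from bigtable where data->'mph' = '100';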


Indexing every possible field is possible, but slow for insert/updates. 
 And would chew up a lot of disk.


Is there any subset of fields you could index that would reduce the 
number of results at least somewhat?


select * from bigtable where generic = 'a' and specific = 'b';

Then only index generic columns.  Your searches will still be fast, as 
will insert/update.
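
A rough sketch of that idea with hstore keys (all names invented):

create index bigtable_generic_idx on bigtable ( (data->'generic') );

select * from bigtable where data->'generic' = 'a' and data->'specific' = 'b';

The expression index narrows things down on the generic key, and the specific
condition is then checked on the few remaining rows.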


Schemas are pretty lightweight.  I run an 80 Gig database split into 
115 schemas without problem.


I'm not sure what'll work best for you.  You'll probably have to mock up 
some different layouts and benchmark them.


-Andy




Re: [GENERAL] Stored procedure workflow question

2014-12-11 Thread Andy Colson

On 12/10/2014 6:53 PM, Israel Brewster wrote:

Currently, when I need to create/edit a stored procedure in Postgresql,
my workflow goes like the following:

- Create/edit the desired function in my DB Commands text file
- Copy and paste function into my development database
- Test
- repeat above until it works as desired
- Copy and paste function into my production DB.

To edit an existing function, the workflow is basically the same as
above, but I first have to find the function in my file.

This whole workflow just feels kludgy to me. Is there a better way? Or
is that essentially the recommended procedure? Thanks.
---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---








This is pretty much what I do as well.  I keep all my scripts and check 
them into subversion though.  I edit them with vim and use the dbext 
plugin to run snippets on the test db.


Once I'm ready I run it on production.  (Could be an entire file, or 
just a copy/paste of a block)  I use psql for everything.  (vim dbext 
uses psql as well)
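
A minimal sketch of what such a re-runnable script might look like (names
invented), so the same file can be applied to test and later to production
with psql -f:

-- functions/order_total.sql
CREATE OR REPLACE FUNCTION order_total(p_order_id bigint)
RETURNS numeric
LANGUAGE sql STABLE AS $$
    SELECT sum(qty * price) FROM order_lines WHERE order_id = p_order_id;
$$;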


It doesn't seem that clunky though.  You do a bunch of stuff on testdb, 
once you get it right you do it on livedb.  I don't see any other ways 
of doing it.


-Andy





Re: [GENERAL] Out of memory condition

2014-12-11 Thread Tom Lane
Carlos Henrique Reimer carlos.rei...@opendb.com.br writes:
 I've facing an out of memory condition after running SLONY several hours to
 get a 1TB database with about 23,000 tables replicated. The error occurs
 after about 50% of the tables were replicated.

I'd try bringing this up with the Slony crew.

 I guess postgresql is trying to perform an atomic allocation (those which
 cannot wait for reclaim) to get a continues memory area and is failing due
 to memory fragmentation.

This theory has nothing to do with reality.  More likely it's just a
garden variety memory leak.  If it was an out-of-memory error reported
by Postgres, there should have been a memory statistics dump written in
the postmaster log --- can you find that and post it?

Another possible theory is that you're just looking at lots of memory
needed to hold relcache entries for all 23000 tables :-(.  If so there
may not be any easy way around it, except perhaps replicating subsets
of the tables.  Unless you can boost the memory available to the backend
--- since this is a 64 bit build, the only reason I can see for
out-of-memory failures would be a restrictive ulimit setting.

 After SLONY gets the out of memory condition, select * of the table also
 does not work:
 FiscalWeb=# select * from 8147_spunico.sincdc;
 ERROR:  out of memory
 DETAIL:  Failed on request of size 268435456.

That's odd ... looks more like data corruption than anything else.
Does this happen even in a fresh session?  What do you have to do
to get rid of the failure?

  PostgreSQL 8.3.21 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC)
 4.4.6 20120305 (Red Hat 4.4.6-4)

You realize of course that this version is years out of support, and that
even if this problem traces to a bug in Postgres, 8.3 is not going to get
fixed.

regards, tom lane




Re: [GENERAL] Out of memory condition

2014-12-11 Thread Carlos Henrique Reimer
Hi,

Yes, I agree, 8.3 has been out of support for a long time, and this is the
reason we are trying to migrate to 9.3 using SLONY to minimize downtime.

I ruled out data corruption because the limit/offset technique returned
different rows each time it was executed. The failure is still happening,
and as the database runs in a virtual machine, the memory allocated to that
virtual machine was increased from 48GB to 64GB, and we have scheduled a
server shutdown/restart for the coming weekend to try to get rid of the
failure.

The replication activity was aborted: SLONY triggers removed, SLONY
processes terminated and SLONY schemas removed.

Ulimit output was appended at the end of this note.

Memory statistics dump from the postmaster log resulting from a select * from
8147_spunico.sincdc; command:

Thank you!

TopMemoryContext: 80800 total in 9 blocks; 4088 free (10 chunks); 76712 used
  Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512
used
  TopTransactionContext: 8192 total in 1 blocks; 7648 free (1 chunks); 544
used
  MessageContext: 57344 total in 3 blocks; 40760 free (6 chunks); 16584 used
  smgr relation table: 24576 total in 2 blocks; 13904 free (4 chunks);
10672 used
  TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks);
32 used
  Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  PortalMemory: 8192 total in 1 blocks; 7888 free (0 chunks); 304 used
PortalHeapMemory: 1024 total in 1 blocks; 720 free (0 chunks); 304 used
  ExecutorState: 381096528 total in 6 blocks; 49856 free (30 chunks);
381046672 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
  Relcache by OID: 24576 total in 2 blocks; 14912 free (3 chunks); 9664 used
  CacheMemoryContext: 817392 total in 20 blocks; 230456 free (3 chunks);
586936 used
pg_toast_729119_index: 2048 total in 1 blocks; 608 free (0 chunks);
1440 used
idx_sincdc_situacao: 2048 total in 1 blocks; 752 free (0 chunks); 1296
used
idx_sincdc_esqtab: 2048 total in 1 blocks; 656 free (0 chunks); 1392
used
idx_sincdc_datahoraexp: 2048 total in 1 blocks; 752 free (0 chunks);
1296 used
pk_sincdc: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
pg_index_indrelid_index: 2048 total in 1 blocks; 704 free (0 chunks);
1344 used
pg_attrdef_adrelid_adnum_index: 2048 total in 1 blocks; 608 free (0
chunks); 1440 used
pg_ts_dict_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks);
1328 used
pg_aggregate_fnoid_index: 3072 total in 2 blocks; 1744 free (3 chunks);
1328 used
pg_language_name_index: 3072 total in 2 blocks; 1744 free (3 chunks);
1328 used
pg_statistic_relid_att_index: 3072 total in 2 blocks; 1600 free (2
chunks); 1472 used
pg_ts_dict_dictname_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_namespace_nspname_index: 3072 total in 2 blocks; 1696 free (2
chunks); 1376 used
pg_opfamily_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks);
1328 used
pg_opclass_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks);
1376 used
pg_ts_parser_prsname_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_amop_fam_strat_index: 3072 total in 2 blocks; 1384 free (2 chunks);
1688 used
pg_opclass_am_name_nsp_index: 3072 total in 2 blocks; 1624 free (3
chunks); 1448 used
pg_trigger_tgrelid_tgname_index: 3072 total in 2 blocks; 1600 free (2
chunks); 1472 used
pg_cast_source_target_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_auth_members_role_member_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_attribute_relid_attnum_index: 3072 total in 2 blocks; 1600 free (2
chunks); 1472 used
pg_ts_config_cfgname_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_authid_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks); 1376
used
pg_ts_config_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks);
1328 used
pg_conversion_default_index: 3072 total in 2 blocks; 1432 free (3
chunks); 1640 used
pg_language_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks);
1328 used
pg_enum_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks); 1328
used
pg_proc_proname_args_nsp_index: 3072 total in 2 blocks; 1624 free (3
chunks); 1448 used
pg_ts_parser_oid_index: 3072 total in 2 blocks; 1744 free (3 chunks);
1328 used
pg_database_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks);
1376 used
pg_conversion_name_nsp_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_class_relname_nsp_index: 3072 total in 2 blocks; 1600 free (2
chunks); 1472 used
pg_attribute_relid_attnam_index: 3072 total in 2 blocks; 1648 free (2
chunks); 1424 used
pg_class_oid_index: 3072 total in 2 blocks; 1696 free (2 chunks); 1376
used
pg_amproc_fam_proc_index: 3072 total in 2 blocks; 1384 free (2 chunks);
1688 used
pg_operator_oprname_l_r_n_index: 3072 total 

[GENERAL] libpq equivalent of jdbc:default:connection

2014-12-11 Thread Jim McLaughlin
Hi all,

I am rewriting a pljava procedure in C++ with libpq. This procedure needs
to access some temp tables that the calling procedure creates and
populates. It seems that PQconnectdb always creates a new connection
(I have tried all permutations of conninfo I could think of). Since the
connection is new, the temp tables are obviously not available. On the
pljava side, we simply created a connection with
DriverManager.getConnection("jdbc:default:connection"), which behaved
admirably. Is there a way to accomplish similar behavior with libpq?

Best,
Jim


Re: [GENERAL] libpq equivalent of jdbc:default:connection

2014-12-11 Thread David G Johnston
Jim McLaughlin wrote
 Hi all,
 
 I am rewriting a pljava procedure in C++ with libpq. This procedure needs
 to access some temp tables that the calling procedure creates and
 populates. It seems that the connection created by PQconnectdb creates a
 new connection (I have tried all permutations of conninfo I could think
 of). Since the connection is new, the temp tables are not available
 obviously. On the pljava side, we simply created a connection with
 DriverManager.getConnection(jdbc:default:connection), which behaved
 admirably. Is there a way to accomplish similar behavior with libpq?
 
 Best,
 Jim

I suspect you want this:

http://www.postgresql.org/docs/9.3/interactive/spi-spi-connect.html

but I've never used c++

David J.









Re: [GENERAL] libpq equivalent of jdbc:default:connection

2014-12-11 Thread Jim McLaughlin
Thanks David! This is what I needed. I figured I was looking in the wrong
place.

On Thu, Dec 11, 2014 at 10:58 AM, David G Johnston 
david.g.johns...@gmail.com wrote:

 Jim McLaughlin wrote
  Hi all,
 
  I am rewriting a pljava procedure in C++ with libpq. This procedure needs
  to access some temp tables that the calling procedure creates and
  populates. It seems that the connection created by PQconnectdb creates a
  new connection (I have tried all permutations of conninfo I could think
  of). Since the connection is new, the temp tables are not available
  obviously. On the pljava side, we simply created a connection with
  DriverManager.getConnection(jdbc:default:connection), which behaved
  admirably. Is there a way to accomplish similar behavior with libpq?
 
  Best,
  Jim

 I suspect you want this:

 http://www.postgresql.org/docs/9.3/interactive/spi-spi-connect.html

 but I've never used c++

 David J.










Re: [GENERAL] Out of memory condition

2014-12-11 Thread Scott Marlowe
Just wondering what slony version you're using?




Re: [GENERAL] Out of memory condition

2014-12-11 Thread Carlos Henrique Reimer
Slony version is 2.2.3

On Thu, Dec 11, 2014 at 3:29 PM, Scott Marlowe scott.marl...@gmail.com
wrote:

 Just wondering what slony version you're using?




-- 
Reimer
47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br


Re: [GENERAL] Out of memory condition

2014-12-11 Thread Vick Khera
On Thu, Dec 11, 2014 at 10:30 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 needed to hold relcache entries for all 23000 tables :-(.  If so there
 may not be any easy way around it, except perhaps replicating subsets
 of the tables.  Unless you can boost the memory available to the backend


I'd suggest this. Break up your replication into something like 50 sets of
500 tables each, then add one at a time to replication, merging it into the
main set. Something like this:

create and replicate set 1.
create and replicate set 2.
merge 2 into 1.
create and replicate set 3.
merge 3 into 1.

repeat until done. this can be scripted.

Given you got about 50% done before it failed, maybe even 4 sets of 6000
tables each may work out.


Re: [GENERAL] Out of memory condition

2014-12-11 Thread Carlos Henrique Reimer
That was exactly what the process was doing and the out of memory error
happened while one of the merges to set 1 was being executed.

On Thu, Dec 11, 2014 at 4:42 PM, Vick Khera vi...@khera.org wrote:


 On Thu, Dec 11, 2014 at 10:30 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 needed to hold relcache entries for all 23000 tables :-(.  If so there
 may not be any easy way around it, except perhaps replicating subsets
 of the tables.  Unless you can boost the memory available to the backend


 I'd suggest this. Break up your replication into something like 50 sets of
 500 tables each, then add one at a time to replication, merging it into the
 main set. Something like this:

 create and replicate set 1.
 create and replicate set 2.
 merge 2 into 1.
 create and replicate set 3.
 merge 3 into 1.

 repeat until done. this can be scripted.

 Given you got about 50% done before it failed, maybe even 4 sets of 6000
 tables each may work out.




-- 
Reimer
47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br


Re: [GENERAL] Out of memory condition

2014-12-11 Thread Scott Marlowe
On Thu, Dec 11, 2014 at 12:05 PM, Carlos Henrique Reimer
carlos.rei...@opendb.com.br wrote:
 That was exactly what the process was doing and the out of memory error
 happened while one of the merges to set 1 was being executed.

You sure you don't have a ulimit getting in the way?




Re: [GENERAL] Out of memory condition

2014-12-11 Thread Carlos Henrique Reimer
Yes, all lines of /etc/security/limits.conf are commented out and session
ulimit -a indicates the defaults are being used:

core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 385725
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 10240
cpu time   (seconds, -t) unlimited
max user processes  (-u) 1024
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited


On Thu, Dec 11, 2014 at 5:19 PM, Scott Marlowe scott.marl...@gmail.com
wrote:

 On Thu, Dec 11, 2014 at 12:05 PM, Carlos Henrique Reimer
 carlos.rei...@opendb.com.br wrote:
  That was exactly what the process was doing and the out of memory error
  happened while one of the merges to set 1 was being executed.

 You sure you don't have a ulimit getting in the way?




-- 
Reimer
47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br


Re: [GENERAL] Out of memory condition

2014-12-11 Thread Tom Lane
Carlos Henrique Reimer carlos.rei...@opendb.com.br writes:
 Yes, all lines of /etc/security/limits.conf are commented out and session
 ulimit -a indicates the defaults are being used:

I would not trust ulimit -a executed in an interactive shell to be
representative of the environment in which daemons are launched ...
have you tried putting "ulimit -a >sometempfile" into the postmaster
start script?

regards, tom lane




[GENERAL] Documentation missing bigint?

2014-12-11 Thread Paul Jungwirth
Hello,

The table showing which C types represent which SQL types seems to be missing bigint:

http://www.postgresql.org/docs/9.3/static/xfunc-c.html#XFUNC-C-TYPE-TABLE

It looks like bigint should be listed and should correspond to an
int64 C type. Also I see there is an INT8OID, PG_GETARG_INT64,
DatumGetInt64, and Int64GetDatum---I think all for bigints. Does that
sound right? If so, would you like a documentation patch?

Thanks,
Paul


-- 
_
Pulchritudo splendor veritatis.




Re: [GENERAL] Out of memory condition

2014-12-11 Thread Carlos Henrique Reimer
Extracted the ulimit values from the postmaster pid and they look as expected:

[root@2-NfseNet ~]# ps -ef | grep /postgres
postgres  2992 1  1 Nov30 ?03:17:46
/usr/local/pgsql/bin/postgres -D /database/dbcluster
root 26694  1319  0 18:19 pts/000:00:00 grep /postgres

[root@2-NfseNet ~]# cat /proc/2992/limits
Limit Soft Limit   Hard Limit
Units
Max cpu time  unlimitedunlimited
seconds
Max file size unlimitedunlimited
bytes
Max data size unlimitedunlimited
bytes
Max stack size10485760 unlimited
bytes
Max core file size0unlimited
bytes
Max resident set  unlimitedunlimited
bytes
Max processes 1024 385725
processes
Max open files1024 4096
files
Max locked memory 6553665536
bytes
Max address space 102400   unlimited
bytes
Max file locksunlimitedunlimited
locks
Max pending signals   385725   385725
signals
Max msgqueue size 819200   819200
bytes
Max nice priority 00
Max realtime priority 00
Max realtime timeout  unlimitedunlimited
us
[root@2-NfseNet-SGDB ~]#


On Thu, Dec 11, 2014 at 6:01 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Carlos Henrique Reimer carlos.rei...@opendb.com.br writes:
  Yes, all lines of /etc/security/limits.conf are commented out and session
  ulimit -a indicates the defaults are being used:

 I would not trust ulimit -a executed in an interactive shell to be
 representative of the environment in which daemons are launched ...
 have you tried putting "ulimit -a >sometempfile" into the postmaster
 start script?

 regards, tom lane




-- 
Reimer
47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br


Re: [GENERAL] Out of memory condition

2014-12-11 Thread Scott Marlowe
So if you watch processes with sort-by-memory turned on in top or htop,
can you see your machine running out of memory? Do you have enough swap
if needed? 48G is pretty small for a modern pgsql server with as much
data and as many tables as you have, so I'd assume you have plenty of
swap just in case.




Re: [GENERAL] Out of memory condition

2014-12-11 Thread Tom Lane
Carlos Henrique Reimer carlos.rei...@opendb.com.br writes:
 Extracted ulimits values from postmaster pid and they look as expected:

 [root@2-NfseNet ~]# cat /proc/2992/limits
 Limit Soft Limit   Hard Limit
 Units
 Max address space 102400   unlimited
 bytes

So you've got a limit of 1GB on process address space ... that's
probably why it's burping on allocations of a couple hundred meg,
especially if you have a reasonably large shared_buffers setting.
You might as well be running a 32-bit build (in fact, a 32-bit
build could still do a lot better than that).

regards, tom lane




Re: [GENERAL] Help Optimizing a Summary Query

2014-12-11 Thread Robert DiFalco
Thanks Arthur. I don't think there is as big a difference between BIGINT and
INTEGER as you think there is. In fact with an extended filesystem you
might not see any difference at all.

As I put in the first email I am using a GIST index on user.name.

I was really more interested in the LEFT OUTER JOINs vs EXISTS queries and
if there was a better alternative I had not considered.

On Tue, Dec 9, 2014 at 11:44 AM, Arthur Silva arthur...@gmail.com wrote:

 On Tue, Dec 9, 2014 at 4:18 PM, Robert DiFalco robert.difa...@gmail.com
 wrote:

 I'm sorry, I missed a JOIN on the second variation. It is:

 SELECT u.id, u.name, u.imageURL, u.bio,
CASE
   WHEN f.friend_id IS NOT NULL THEN 'isFriend'
   WHEN s.to_id IS NOT NULL THEN 'hasSentRequest'
   WHEN r.to_id IS NOT NULL THEN 'hasReceivedRequest'
   ELSE 'none'
END AS 'friendStatus',
(SELECT COUNT(1) AS d
   FROM friends f1
  JOIN friends f2 ON f1.fiend_id = f2.friend_id
   WHERE f1.user_id = 33 AND f2.user_id = u.id)
 FROM users u
 LEFT OUTER JOIN friends f ON f.user_id = 33 AND f.friend_id = u.id
 LEFT OUTER JOIN friend_requests s ON s.to_id = 33 AND s.from_id = u.id
 LEFT OUTER JOIN friend_requests r ON r.to_id = u.id AND r.from_id = 33
 WHERE u.id != 33 AND u.name LIKE '%John%' ORDER BY u.name;


 On Tue, Dec 9, 2014 at 10:15 AM, Robert DiFalco robert.difa...@gmail.com
  wrote:

 I have users, friends, and friend_requests. I need a query that
 essentially returns a summary containing:

 * user (name, imageURL, bio, ...)
 * Friend status (relative to an active user)
* Is the user a friend of the active user?
* Has the user sent a friend request to the active user?
* Has the user received a friend request from the active user?
 * # of mutualFriends
 * Exclude the active user from the result set.

 So I have mocked this up two ways but both have complicated query plans
 that will be problematic with large data sets. I'm thinking that my lack of
 deep SQL knowledge is making me miss the obvious choice.

 Here's my two query examples:

 SELECT u.id, u.name, u.imageURL, u.bio,
CASE
   WHEN EXISTS(SELECT 1 FROM friends f WHERE f.user_id = 33 AND
 f.friend_id = u.id)   THEN 'isFriend'
   WHEN EXISTS(SELECT 1 FROM friend_requests s WHERE s.to_id = 33
 AND s.from_id = u.id) THEN 'hasSentRequest'
   WHEN EXISTS(SELECT 1 FROM friend_requests r WHERE r.to_id = u.id
 AND r.from_id = 33)   THEN 'hasReceivedRequest'
   ELSE 'none'
END AS friendStatus,
(SELECT COUNT(1)
   FROM friends f1
  JOIN friends f2 ON f1.friend_id = f2.friend_id
   WHERE f1.user_id = 33 AND f2.user_id = u.id) AS mutualFriends
 FROM users u
 WHERE u.id != 33 AND u.name LIKE 'John%' ORDER BY u.name;

 SELECT u.id, u.name, u.imageURL, u.bio,
CASE
   WHEN f.friend_id IS NOT NULL THEN 'isFriend'
   WHEN s.to_id IS NOT NULL THEN 'hasSentRequest'
   WHEN r.to_id IS NOT NULL THEN 'hasReceivedRequest'
   ELSE 'none'
END AS 'friendStatus',
(SELECT COUNT(1) AS d
   FROM friends f1
  JOIN friends f2 ON f1.fiend_id = f2.friend_id
   WHERE f1.user_id = 33 AND f2.user_id = u.id)
 FROM users u
 LEFT OUTER JOIN friend_requests s ON s.to_id = 33 AND s.from_id = u.id
 LEFT OUTER JOIN friend_requests r ON r.to_id = u.id AND r.from_id = 33
 WHERE u.id != 33 AND u.name LIKE 'John%' ORDER BY u.name;

 33 is just the id of the active user I am using for testing. The WHERE
 clause could be anything. I'm just using u.name here but I'm more
 concerned about the construction of the result set than the WHERE clause.
 These have more or less similar query plans, nothing that would change
 things factorially. Is this the best I can do or am I missing the obvious?

 Here are the tables:


 CREATE TABLE users (
   idBIGINT,
   name  VARCHAR,
   imageURL  VARCHAR
   created   TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
   phone_natlBIGINT,   /* National Phone Number */
   country_e164  SMALLINT, /* E164 country code */
   email VARCHAR(255),
   PRIMARY KEY (id),
   UNIQUE (email),
   UNIQUE (phone_natl, country_e164)
 );


 CREATE TABLE friends (
   user_id  BIGINT,
   friend_id   BIGINT,
   PRIMARY KEY (user_id, user_id),
   FOREIGN KEY (user_id)REFERENCES users(id) ON DELETE CASCADE,
   FOREIGN KEY (friend_id)  REFERENCES users(id) ON DELETE CASCADE
 );
 CREATE INDEX idx_friends_friend ON friends(friend_id);

 CREATE TABLE friend_requests (
   from_id  BIGINT,
   to_idBIGINT,
   created  TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP,
   PRIMARY KEY (from_id, user_id),
   FOREIGN KEY (from_id)  REFERENCES users(id) ON DELETE CASCADE,
   FOREIGN KEY (to_id)REFERENCES users(id) ON DELETE CASCADE
 );
 CREATE INDEX idx_friend_requests_to ON friend_requests(to_id);

 Let me know if you guys need anything else.




 Hello Robert, none of your schemas worked for me, here's a clean version

 CREATE TABLE users (
 

Re: [GENERAL] Help Optimizing a Summary Query

2014-12-11 Thread Arthur Silva
On Thu, Dec 11, 2014 at 6:52 PM, Robert DiFalco robert.difa...@gmail.com
wrote:

 Thanks Arthur. I don't think there is as big a different between BIGINT
 and INTEGER as you think there is. In fact with an extended filesystem you
 might not see any difference at all.

 As I put in the first emal I am using a GIST index on user.name.

 I was really more interested in the LEFT OUTER JOINs vs EXISTS queries and
 if there was a better alternative I had not considered.

 On Tue, Dec 9, 2014 at 11:44 AM, Arthur Silva arthur...@gmail.com wrote:

 On Tue, Dec 9, 2014 at 4:18 PM, Robert DiFalco robert.difa...@gmail.com
 wrote:

 I'm sorry, I missed a JOIN on the second variation. It is:

 SELECT u.id, u.name, u.imageURL, u.bio,
CASE
   WHEN f.friend_id IS NOT NULL THEN 'isFriend'
   WHEN s.to_id IS NOT NULL THEN 'hasSentRequest'
   WHEN r.to_id IS NOT NULL THEN 'hasReceivedRequest'
   ELSE 'none'
END AS 'friendStatus',
(SELECT COUNT(1) AS d
   FROM friends f1
  JOIN friends f2 ON f1.fiend_id = f2.friend_id
   WHERE f1.user_id = 33 AND f2.user_id = u.id)
 FROM users u
  LEFT OUTER JOIN friends f ON f.user_id = 33 AND f.friend_id = u.id
 LEFT OUTER JOIN friend_requests s ON s.to_id = 33 AND s.from_id = u.id
 LEFT OUTER JOIN friend_requests r ON r.to_id = u.id AND r.from_id = 33
 WHERE u.id != 33 AND u.name LIKE '%John%' ORDER BY u.name;


 On Tue, Dec 9, 2014 at 10:15 AM, Robert DiFalco 
 robert.difa...@gmail.com wrote:

 I have users, friends, and friend_requests. I need a query that
 essentially returns a summary containing:

 * user (name, imageURL, bio, ...)
 * Friend status (relative to an active user)
* Is the user a friend of the active user?
* Has the user sent a friend request to the active user?
* Has the user received a friend request from the active user?
 * # of mutualFriends
 * Exclude the active user from the result set.

 So I have mocked this up two ways but both have complicated query plans
 that will be problematic with large data sets. I'm thinking that my lack of
 deep SQL knowledge is making me miss the obvious choice.

 Here's my two query examples:

 SELECT u.id, u.name, u.imageURL, u.bio,
CASE
   WHEN EXISTS(SELECT 1 FROM friends f WHERE f.user_id = 33 AND
 f.friend_id = u.id)   THEN 'isFriend'
   WHEN EXISTS(SELECT 1 FROM friend_requests s WHERE s.to_id = 33
 AND s.from_id = u.id) THEN 'hasSentRequest'
   WHEN EXISTS(SELECT 1 FROM friend_requests r WHERE r.to_id = u.id
 AND r.from_id = 33)   THEN 'hasReceivedRequest'
   ELSE 'none'
END AS friendStatus,
(SELECT COUNT(1)
   FROM friends f1
  JOIN friends f2 ON f1.friend_id = f2.friend_id
   WHERE f1.user_id = 33 AND f2.user_id = u.id) AS mutualFriends
 FROM users u
 WHERE u.id != 33 AND u.name LIKE 'John%' ORDER BY u.name;

 SELECT u.id, u.name, u.imageURL, u.bio,
CASE
   WHEN f.friend_id IS NOT NULL THEN 'isFriend'
   WHEN s.to_id IS NOT NULL THEN 'hasSentRequest'
   WHEN r.to_id IS NOT NULL THEN 'hasReceivedRequest'
   ELSE 'none'
END AS 'friendStatus',
(SELECT COUNT(1) AS d
   FROM friends f1
  JOIN friends f2 ON f1.fiend_id = f2.friend_id
   WHERE f1.user_id = 33 AND f2.user_id = u.id)
 FROM users u
 LEFT OUTER JOIN friend_requests s ON s.to_id = 33 AND s.from_id = u.id
 LEFT OUTER JOIN friend_requests r ON r.to_id = u.id AND r.from_id = 33
 WHERE u.id != 33 AND u.name LIKE 'John%' ORDER BY u.name;

 33 is just the id of the active user I am using for testing. The WHERE
 clause could be anything. I'm just using u.name here but I'm more
 concerned about the construction of the result set than the WHERE clause.
 These have more or less similar query plans, nothing that would change
 things factorially. Is this the best I can do or am I missing the obvious?

 Here are the tables:


 CREATE TABLE users (
   idBIGINT,
   name  VARCHAR,
   imageURL  VARCHAR
   created   TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
   phone_natlBIGINT,   /* National Phone Number */
   country_e164  SMALLINT, /* E164 country code */
   email VARCHAR(255),
   PRIMARY KEY (id),
   UNIQUE (email),
   UNIQUE (phone_natl, country_e164)
 );


 CREATE TABLE friends (
   user_id  BIGINT,
   friend_id   BIGINT,
   PRIMARY KEY (user_id, user_id),
   FOREIGN KEY (user_id)REFERENCES users(id) ON DELETE CASCADE,
   FOREIGN KEY (friend_id)  REFERENCES users(id) ON DELETE CASCADE
 );
 CREATE INDEX idx_friends_friend ON friends(friend_id);

 CREATE TABLE friend_requests (
   from_id  BIGINT,
   to_idBIGINT,
   created  TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP,
   PRIMARY KEY (from_id, user_id),
   FOREIGN KEY (from_id)  REFERENCES users(id) ON DELETE CASCADE,
   FOREIGN KEY (to_id)REFERENCES users(id) ON DELETE CASCADE
 );
 CREATE INDEX idx_friend_requests_to ON friend_requests(to_id);

 Let me know if you guys need anything else.




 Hello 

Re: [GENERAL] Help Optimizing a Summary Query

2014-12-11 Thread David G Johnston
Robert DiFalco wrote
 I have users, friends, and friend_requests. I need a query that
 essentially
 returns a summary containing:
 
 * user (name, imageURL, bio, ...)
 * Friend status (relative to an active user)
* Is the user a friend of the active user?
* Has the user sent a friend request to the active user?
* Has the user received a friend request from the active user?
 * # of mutualFriends
 * Exclude the active user from the result set.
 
 So I have mocked this up two ways but both have complicated query plans
 that will be problematic with large data sets. I'm thinking that my lack
 of
 deep SQL knowledge is making me miss the obvious choice.
 
 Here's my two query examples:
 
 SELECT u.id, u.name, u.imageURL, u.bio,
CASE
   WHEN EXISTS(SELECT 1 FROM friends f WHERE f.user_id = 33 AND
 f.friend_id = u.id)   THEN 'isFriend'
   WHEN EXISTS(SELECT 1 FROM friend_requests s WHERE s.to_id = 33   AND
 s.from_id = u.id) THEN 'hasSentRequest'
   WHEN EXISTS(SELECT 1 FROM friend_requests r WHERE r.to_id = u.id AND
 r.from_id = 33)   THEN 'hasReceivedRequest'
   ELSE 'none'
END AS friendStatus,
(SELECT COUNT(1)
   FROM friends f1
  JOIN friends f2 ON f1.friend_id = f2.friend_id
   WHERE f1.user_id = 33 AND f2.user_id = u.id) AS mutualFriends
 FROM users u
 WHERE u.id != 33 AND u.name LIKE 'John%' ORDER BY u.name;
 
 SELECT u.id, u.name, u.imageURL, u.bio,
CASE
   WHEN f.friend_id IS NOT NULL THEN 'isFriend'
   WHEN s.to_id IS NOT NULL THEN 'hasSentRequest'
   WHEN r.to_id IS NOT NULL THEN 'hasReceivedRequest'
   ELSE 'none'
END AS 'friendStatus',
(SELECT COUNT(1) AS d
   FROM friends f1
  JOIN friends f2 ON f1.fiend_id = f2.friend_id
   WHERE f1.user_id = 33 AND f2.user_id = u.id)
 FROM users u
 LEFT OUTER JOIN friend_requests s ON s.to_id = 33 AND s.from_id = u.id
 LEFT OUTER JOIN friend_requests r ON r.to_id = u.id AND r.from_id = 33
 WHERE u.id != 33 AND u.name LIKE 'John%' ORDER BY u.name;
 
 33 is just the id of the active user I am using for testing. The WHERE
 clause could be anything. I'm just using u.name here but I'm more
 concerned about the construction of the result set than the WHERE clause.
 These have more or less similar query plans, nothing that would change
 things factorially. Is this the best I can do or am I missing the obvious?

I dislike the multiple LEFT JOIN version, though I did not try to prove that
it is possible for it to give incorrect results.

The goal is to avoid looping - so you want to create temporary results that
will contain all of the data you plan to need and then join them together. 
CTE/WITH is the feature that can do this most easily.

I have no idea how this will perform relative to the CASE WHEN EXISTS
version but it seems like it should be faster.  Again, I don't believe your
original LEFT JOIN query is equivalent to either of these but I cannot be
certain without more effort than I am able to put forth.

Hybrid SQL Code (note in particular that you cannot have literals in the
WITH field alias area...)

WITH user_ref (ref_u_id) AS ( VALUES (33) )
, users_vis_a_vis_ref (u_id, ref_id) AS ( ... WHERE u_id != ref_u_id)
, user_friend (u_id, ref_u_id, 'Friend' AS status_uf) AS ( ... )
, user_sent_request (u_id, ref_u_id, 'Sent' AS status_usr) AS ( ... )
, user_recv_request (u_id, ref_u_id, 'Received' AS status_urr) AS ( ... )
, user_mutuals (u_id, ref_u_id, ## AS mutual_count) AS ( ... )

SELECT u_id, ref_u_id
, COALESCE(status_uf, status_usr, status_urr, 'None') AS FriendStatus
, COALESCE(mutual_count, 0) AS MutualFriendCount
FROM users_vis_a_vis_ref 
NATURAL LEFT JOIN user_friend
NATURAL LEFT JOIN user_sent_request 
NATURAL LEFT JOIN user_recv_request 
NATURAL LEFT JOIN user_mutuals

It is safe to use NATURAL here since you are fully controlling the source
relations since they all come from the CTE/WITH structure.
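
For illustration only, an untested, more concrete version of that shape using
the table and column names from the earlier post might look like:

WITH ref AS (
    SELECT 33::bigint AS ref_id
), candidates AS (
    SELECT u.id, u.name, u.imageURL, u.bio
    FROM users u, ref
    WHERE u.id <> ref.ref_id AND u.name LIKE 'John%'
), is_friend AS (
    SELECT f.friend_id AS id, 'isFriend' AS status
    FROM friends f, ref
    WHERE f.user_id = ref.ref_id
), sent AS (
    SELECT s.from_id AS id, 'hasSentRequest' AS status
    FROM friend_requests s, ref
    WHERE s.to_id = ref.ref_id
), received AS (
    SELECT r.to_id AS id, 'hasReceivedRequest' AS status
    FROM friend_requests r, ref
    WHERE r.from_id = ref.ref_id
), mutuals AS (
    SELECT f2.user_id AS id, count(*) AS mutual_count
    FROM friends f1
    JOIN friends f2 ON f1.friend_id = f2.friend_id
    JOIN ref ON f1.user_id = ref.ref_id
    GROUP BY f2.user_id
)
SELECT c.id, c.name, c.imageURL, c.bio,
       COALESCE(fr.status, s.status, r.status, 'none') AS friendStatus,
       COALESCE(m.mutual_count, 0) AS mutualFriends
FROM candidates c
LEFT JOIN is_friend fr ON fr.id = c.id
LEFT JOIN sent      s  ON s.id  = c.id
LEFT JOIN received  r  ON r.id  = c.id
LEFT JOIN mutuals   m  ON m.id  = c.id
ORDER BY c.name;

Each CTE is computed once against the reference user, and the final SELECT
just stitches them together with LEFT JOINs.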

David J.







Re: [GENERAL] ALTER TYPE ADD SEND AND RECEIVE

2014-12-11 Thread Manuel Kniep

On 11 December 2014 at 00:08:52, Tom Lane (t...@sss.pgh.pa.us) wrote:
 Manuel Kniep writes:
  I have a custom type and want to add the yet missing SEND and RECEIVE 
  functions.
  Is there any way to alter the type definition without dropping and 
  recreating it?
  
 There's no supported way to do that. As an unsupported way, you could
 consider a manual UPDATE on the type's pg_type row.

I also thought about this but I guess I have to INSERT the dependency into
pg_depend too.
Assuming the receiver function is in the same namespace as the type, I'd do
something like this:

UPDATE pg_catalog.pg_type t
  SET typreceive = 'my_recv', typsend = 'my_send'
WHERE t.typname = 'my_type';

INSERT INTO pg_catalog.pg_depend (classid, objid, objsubid, refclassid, 
refobjid, refobjsubid, deptype)
SELECT c.oid AS classid, p.oid AS objid, 0 AS objsubid, refc.oid AS refclassid, 
t.oid AS refobjid, 0 AS refobjsubid, 'n' AS deptype
FROM pg_catalog.pg_class c, pg_catalog.pg_proc p, pg_catalog.pg_class refc, 
pg_catalog.pg_type t
WHERE c.relname = 'pg_proc' AND p.proname IN ('my_recv', 'my_send') AND 
refc.relname = 'pg_type' AND t.typname = 'my_type'
  AND p.pronamespace = t.typnamespace
  AND c.relnamespace = refc.relnamespace
  AND c.relnamespace = (SELECT oid FROM pg_catalog.pg_namespace WHERE nspname = 
'pg_catalog');

Are there any other pg_catalog tables that might need information?

thanks

Manuel






Re: [GENERAL] ALTER TYPE ADD SEND AND RECEIVE

2014-12-11 Thread Tom Lane
Manuel Kniep man...@adjust.com writes:
 On 11 December 2014 at 00:08:52, Tom Lane (t...@sss.pgh.pa.us) wrote:
 There's no supported way to do that. As an unsupported way, you could
 consider a manual UPDATE on the type's pg_type row.

 I also thought about this but I guess I have to INSERT the dependency in 
 pg_depend too.

Personally, I wouldn't bother with that.  Good practice would be to have
any such type packaged as an extension; so if you add the new functions
to the extension (which is supported) there's no need to be terribly
tense about the intra-extension dependencies.  But even if you didn't
do that, the worst consequence of dropping the functions would be
"cache lookup failed" errors.
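
For example, adding already-created functions to an existing extension is
just (function and extension names are only examples):

ALTER EXTENSION my_type_ext ADD FUNCTION my_recv(internal);
ALTER EXTENSION my_type_ext ADD FUNCTION my_send(my_type);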

regards, tom lane




Re: [GENERAL] Documentation missing bigint?

2014-12-11 Thread Michael Paquier
On Fri, Dec 12, 2014 at 5:03 AM, Paul Jungwirth
p...@illuminatedcomputing.com wrote:
 http://www.postgresql.org/docs/9.3/static/xfunc-c.html#XFUNC-C-TYPE-TABLE

 It looks like bigint should be listed and should correspond to an
 int64 C type. Also I see there is an INT8OID, PG_GETARG_INT64,
 DatumGetInt64, and Int64GetDatum---I think all for bigints. Does that
 sound right? If so, would you like a documentation patch?
+1 for adding it.
-- 
Michael

