[GENERAL] ERROR: permission denied for database control

2014-05-07 Thread Huang, Suya
Hi,

I've encountered a weird problem in PostgreSQL :

postgres=# create user test password 'test';

postgres=# grant select on pg_catalog.pg_database_size to test;

postgres=# grant execute on function pg_catalog.pg_database_size(name) to test;

--login as user test
postgres=> select current_user;
 current_user
--------------
 test
(1 row)


--query individual database size is fine
postgres=> select pg_database_size('postgres');
 pg_database_size
------------------
         25611884
(1 row)

--fails if try to get all db size in one sql
postgres=> SELECT current_date,pg_database_size(pg_database.datname) from
pg_database;
ERROR:  permission denied for database control

Thanks,
Suya


Re: [GENERAL] Server process crash - Segmentation fault

2014-05-07 Thread Leif Jensen
   Hello Adrian,

   Thank you for your answer. I can post part of the code that makes these 
calls, but I'm not sure how much it would help. It is a rather large function 
that makes these calls, and it is called all over the program. The part of the 
log posted is only a small excerpt of the use of 
ApplDBConn_22854_f6adeb70_query, which has been used many times before the 
log shown (167 times in all, to be exact ;-) ).

 Leif


- Original Message -
 On 05/06/2014 07:08 AM, Leif Jensen wrote:
  Hello.
 
  I was running PostgreSQL 9.1.4 when I got a server process crash
  (Segmentation fault) as the postgres log shown below. I tried
  upgrade to newest version 9.3.4, but this gives exactly the same
  problem.
 
  It is an (ecpg based) C-program that does tons of these scroll
  cursor exercises. Until recently this worked too but changes to
  totally different part of the program made this happen. (I have
  made way too many changes to this other part to be able to roll
  back the code :-( ). The system generates data all the time for
  this lookup, but I can grab the SQL from the postgres log and
  run it through psql and get the result I expect, so I don't see
  how it can be data related.
 
  Please help,
 
Leif
 
  .
  .
  .
  22864 2014-05-06 15:37:35.350 CEST LOG: statement: close execcurs
  22864 2014-05-06 15:37:35.350 CEST LOG: statement: deallocate
  ApplDBConn_22854_f6adeb70_query
  22864 2014-05-06 15:37:35.352 CEST DEBUG: parse
  ApplDBConn_22854_f6adeb70_query: SELECT data_type FROM
  information_schema.columns WHERE table_name =
  'l2_hb_water_hours_sum' AND column_name = '';
  22864 2014-05-06 15:37:35.353 CEST LOG: statement: declare execcurs
  scroll cursor for SELECT data_type FROM information_schema.columns
  WHERE table_name = 'l2_hb_water_hours_sum' AND column_name = '
  ';
  22864 2014-05-06 15:37:35.356 CEST LOG: statement: fetch first in
  execcurs
  22864 2014-05-06 15:37:35.358 CEST LOG: statement: close execcurs
  22864 2014-05-06 15:37:35.358 CEST LOG: statement: deallocate
  ApplDBConn_22854_f6adeb70_query
  22864 2014-05-06 15:37:35.359 CEST LOG: statement: commit
  22864 2014-05-06 15:37:35.359 CEST LOG: statement: start transaction
  read only
  22864 2014-05-06 15:37:35.360 CEST DEBUG: parse
  ApplDBConn_22854_f6adeb70_query: SELECT montime, year, month, day,
  hh, gal_hour, exp_hour, unsched_hour FROM l2_hb_water_hours_sum
  WHERE l2_hb_water_
  hours_sum.ctrlid = 86 ORDER BY year,month,day,hh OFFSET (SELECT CASE
  WHEN count(*) > 2000 THEN count(*) -2000 ELSE 0 END FROM
  l2_hb_water_hours_sum WHERE l2_hb_water_hours_sum.ctrlid = 86 );
  22864 2014-05-06 15:37:35.365 CEST LOG: statement: declare execcurs
  scroll cursor for SELECT montime, year, month, day, hh, gal_hour,
  exp_hour, unsched_hour FROM l2_hb_water_hours_sum WHERE l2_hb
  _water_hours_sum.ctrlid = 86 ORDER BY year,month,day,hh OFFSET
  (SELECT CASE WHEN count(*) > 2000 THEN count(*) -2000 ELSE 0 END
  FROM l2_hb_water_hours_sum WHERE l2_hb_water_hours_sum.ctrlid = 8
  6 );
 
 The code that generates the above would be helpful. The thing that
 catches my eye is that the first time you use
 ApplDBConn_22854_f6adeb70_query the parse and cursor queries are the
 same and all is good. The second time they are not and you get a
 failure. Without seeing what is going on in your code it is hard to
 tell if this is significant or not.
 
  22864 2014-05-06 15:37:35.432 CEST LOG: statement: fetch first in
  execcurs
  21702 2014-05-06 15:37:35.440 CEST DEBUG: server process (PID 22864)
  was terminated by signal 11: Segmentation fault
  21702 2014-05-06 15:37:35.440 CEST DETAIL: Failed process was
  running: fetch first in execcurs
  21702 2014-05-06 15:37:35.440 CEST LOG: server process (PID 22864)
  was terminated by signal 11: Segmentation fault
  21702 2014-05-06 15:37:35.440 CEST DETAIL: Failed process was
  running: fetch first in execcurs
  21702 2014-05-06 15:37:35.440 CEST LOG: terminating any other active
  server processes
 
 
 --
 Adrian Klaver
 adrian.kla...@aklaver.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: permission denied for database control

2014-05-07 Thread Albe Laurenz
Suya Huang wrote:
 I’ve encountered a weird problem in PostgreSQL :

 postgres= create user test password 'test';
 
 postgres= grant select on pg_catalog.pg_database_size to test;

This statement produces an error:
ERROR:  relation pg_catalog.pg_database_size does not exist

 postgres= grant execute on function pg_catalog.pg_database_size(name) to 
 test;
 
 --login as user test
 postgres= select current_user;
 current_user
 --
 test
 (1 row)
 
 
 --query individual database size is fine
 postgres= select pg_database_size('postgres');
 pg_database_size
 --
  25611884
 (1 row)
 
 --fails if try to get all db size in one sql
 postgres= SELECT current_date,pg_database_size(pg_database.datname) from 
 pg_database;
 ERROR:  permission denied for database control

Works for me.

It seems like you have a database called control for which user test
has no connect privilege.
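
If that is the cause, two possible fixes, sketched under that assumption
(database and user names taken from the thread):

```sql
-- Option 1: allow user test to connect to the "control" database
GRANT CONNECT ON DATABASE control TO test;

-- Option 2: only size the databases the current user may connect to,
-- skipping the ones that would raise "permission denied"
SELECT datname, pg_database_size(datname)
FROM pg_database
WHERE has_database_privilege(current_user, datname, 'CONNECT');
```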

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Oracle to PostgreSQL replication

2014-05-07 Thread Albe Laurenz
Sameer Kumar wrote:
 I need to setup a replication process for continuously replicating changes 
 happening in an Oracle
 Database to a PostgreSQL database.
 
 
 My Oracle Database is version 11.2 and setup as a cluster with RAC
 My Postgres database version is 9.2
 
 Oracle Database is running in Solaris and PostgreSQL is running on RHEL.
 
 Is there any commercial or open source tool available to achieve this?
 
 I was wondering whether anyone has used a foreign data wrapper or

There is no ready-made solution for this from the PostgreSQL side.
You could check with Oracle whether they provide something like that
with their GoldenGate product.

What you could do is to have a trigger record all changes to the Oracle
table in a separate log table and regularly run a program that pulls those
changes from the log table and applies them to a PostgreSQL table,
deleting the log entries as it goes.

You could write such a thing as PostgreSQL function using oracle_fdw,
but you need PostgreSQL 9.3 if you want to update Oracle data that way.
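
A rough sketch of such a PostgreSQL 9.3 function, assuming oracle_fdw already
exposes the Oracle log table as a writable foreign table emp_changes_ft and
the local target table is emp (all table, column, and function names here are
illustrative, not from the thread):

```sql
CREATE OR REPLACE FUNCTION apply_oracle_changes() RETURNS integer
LANGUAGE plpgsql AS $$
DECLARE
    r record;
    n integer := 0;
BEGIN
    -- replay the Oracle-side change log in order
    FOR r IN SELECT * FROM emp_changes_ft ORDER BY change_id LOOP
        IF r.op = 'I' THEN
            INSERT INTO emp (id, name) VALUES (r.emp_id, r.emp_name);
        ELSIF r.op = 'U' THEN
            UPDATE emp SET name = r.emp_name WHERE id = r.emp_id;
        ELSE  -- 'D'
            DELETE FROM emp WHERE id = r.emp_id;
        END IF;
        -- deleting through the foreign table needs the writable-FDW
        -- support that arrived in PostgreSQL 9.3
        DELETE FROM emp_changes_ft WHERE change_id = r.change_id;
        n := n + 1;
    END LOOP;
    RETURN n;
END;
$$;
```

The function could then be run periodically (e.g. from cron via psql); it
returns the number of changes applied.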

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: any psql \copy tricks for default-value columns without source data?

2014-05-07 Thread Rémi Cura
Hey,
you may want to have a look at pg_bulkload (
http://pgbulkload.projects.pgfoundry.org/).
Using filter you could get the function you want.

Another solution is pgloader (http://pgloader.tapoueh.org) , but I don't
know if it is as fast as copy.

Cheers,
Rémi-C


2014-05-06 23:04 GMT+02:00 David G Johnston david.g.johns...@gmail.com:

 On Tue, May 6, 2014 at 4:48 PM, John R Pierce [via PostgreSQL] wrote:

 On 5/6/2014 1:22 PM, David G Johnston wrote:
  I know that I can pre-process the input file and simply add the needed
 data
  but I am curious if maybe there is some trick to having defaults
 populate
  for missing columns WITHOUT explicitly specifying each and every column
 that
  is present?

 if you didn't specify the columns in your file, how would you expect it
 to know what's there and not there?


 ​The default copy behavior is column-order dependent.  If your input file
 has 10 columns and the table has 10 columns they get matched up 1-to-1 and
 everything works just fine.  It would be nice if there was some way to say
 that if the table has 12 columns but the file has 10 columns that the first
 10 columns of the table get matched to the file and the remaining two
 columns use their default values; that way you can add default columns to
 the end of the table and still do an auto-matching import.
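
Until such a feature exists, the workaround within the current copy behavior
is to list the file's columns explicitly, so the trailing table columns fall
back to their defaults (table and column names below are illustrative):

```sql
-- t has 12 columns; the input file supplies only the first 10.
-- Naming those 10 lets c11 and c12 take their default values.
\copy t (c1, c2, c3, c4, c5, c6, c7, c8, c9, c10) FROM 'data.csv' WITH (FORMAT csv)
```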

 David J.
 ​

 --
 View this message in context: http://postgresql.1045698.n5.nabble.com/any-psql-copy-tricks-for-default-value-columns-without-source-data-tp5802795p5802804.html

 Sent from the PostgreSQL - general mailing list archive at Nabble.com.



Re: [GENERAL] Oracle to PostgreSQL replication

2014-05-07 Thread Geoff Montee
On Wed, May 7, 2014 at 12:31 AM, Sameer Kumar sameer.ku...@ashnik.comwrote:

 Hi,


 I need to setup a replication process for continuously replicating changes
 happening in an Oracle Database to a PostgreSQL database.


 My Oracle Database is version 11.2 and setup as a cluster with RAC
 My Postgres database version is 9.2

 Oracle Database is running in Solaris and PostgreSQL is running on RHEL.

 Is there any commercial or open source tool available to achieve this?



Continuent's Tungsten Replicator apparently offers Oracle to MySQL
replication. There's a wiki page that suggests PostgreSQL support was in
development at one time. I'm not sure how far they got, or if they are
still working on it.

http://www.continuent.com/solutions/replication

https://wiki.postgresql.org/wiki/Tungsten

Geoff Montee


Re: [GENERAL] Oracle to PostgreSQL replication

2014-05-07 Thread Serge Fonville
Hi,

I need to setup a replication process for continuously replicating changes
 happening in an Oracle Database to a PostgreSQL database.

 My Oracle Database is version 11.2 and setup as a cluster with RAC
 My Postgres database version is 9.2

 Oracle Database is running in Solaris and PostgreSQL is running on RHEL.

 Is there any commercial or open source tool available to achieve this?


How about EnterpriseDB xDB Replication Server?
http://www.enterprisedb.com/products-services-training/products/complementary-enterprisedb-products/xdb-replication-server

Kind regards/met vriendelijke groet,

Serge Fonville

http://www.sergefonville.nl


2014-05-07 11:44 GMT+02:00 Geoff Montee geoff.mon...@gmail.com:


 On Wed, May 7, 2014 at 12:31 AM, Sameer Kumar sameer.ku...@ashnik.comwrote:

 Hi,


 I need to setup a replication process for continuously replicating
 changes happening in an Oracle Database to a PostgreSQL database.


 My Oracle Database is version 11.2 and setup as a cluster with RAC
 My Postgres database version is 9.2

 Oracle Database is running in Solaris and PostgreSQL is running on RHEL.

 Is there any commercial or open source tool available to achieve this?



 Continuent's Tungsten Replicator apparently offers Oracle to MySQL
 replication. There's a wiki page that suggests PostgreSQL support was in
 development at one time. I'm not sure how far they got, or if they are
 still working on it.

 http://www.continuent.com/solutions/replication

 https://wiki.postgresql.org/wiki/Tungsten

 Geoff Montee



Re: [GENERAL] Crosstab function

2014-05-07 Thread Sim Zacks

What I have done in the past to build a generic reporting application is
to have the function write the results you want in a table and return
the tablename, and then have the client code call select * from that
table.

My standard report tablename is tblreport || userid;
It gets dropped at the beginning of the function, so it is basically a
temp table that doesn't interfere with any other users.

Example:
 execute 'drop table if exists reports.tblreport' || v_userid;
 execute 'drop sequence if exists reports.tblreport' || v_userid || '_id_seq; create sequence reports.tblreport' || v_userid || '_id_seq';
 v_sql = 'create table reports.tblreport' || v_userid || ' as ';
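
Wrapped up as a complete function, the same idea might look like this (a
sketch only; the schema and function names are illustrative, and v_query is
assumed to come from trusted code, per the quoting discussion elsewhere in
this thread):

```sql
CREATE OR REPLACE FUNCTION reports.build_report(v_userid integer, v_query text)
RETURNS text LANGUAGE plpgsql AS $$
DECLARE
    v_tab text := 'reports.tblreport' || v_userid;
BEGIN
    EXECUTE 'drop table if exists ' || v_tab;
    EXECUTE 'create table ' || v_tab || ' as ' || v_query;
    RETURN v_tab;  -- the client then runs: select * from <returned name>
END;
$$;
```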

Sim

On 05/06/2014 06:37 AM, Hengky Liwandouw wrote:

  Very clear instruction!

Thank you very much David. I will do it in my client app and follow your
guidance.


-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of David G Johnston
Sent: Tuesday, May 06, 2014 11:01 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Crosstab function

Hengky Lie wrote

  
Hi David,

Are you sure that there is no pure sql solution for this ? 

I think (with my very limited postgres knowledge), function can solve
this.

Which is the column header I need but I really have no idea how to use
this
as column header.

Anyway, If i can't do this in postgres, I will try to build sql string in
the client application (Windev) and send the fixed sql to the server

  
  
Positive.

You could build the necessary string in a pl/pgsql language function but you
would still have trouble getting the data out of the function the way you
want, unless you output a single text column no matter the original data
(basically output a CSV version of the crosstab result).

There is no dynamic execution in SQL so even though you can get a string
that looks like what you want you cannot do anything with it.  Only data is
allowed to be dynamic; the engine has to know the names and types of all
schema objects before it can start so there is no way a query can retrieve
these things from its own data. It's the whole cart-horse thing...

The solution is to build the query in the client and send it.  Make sure you
look at the various "quote_" functions in order to minimize the risk of SQL
injection attacks.  These are especially useful for pl/pgsql functions but
you might be able to use them in your first query so that you can avoid
coding all the quoting and escaping rules into your application.  At minimum
double-quote all your identifiers and make sure there are no unescaped
embedded double-quotes.  If the only variables are from data in tables
putting constraints on those tables would probably be useful as well - you
limit valid identifiers but minimized risk of bad data causing an issue.
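
As a small illustration of those helpers (the identifier and literal below
are made-up values):

```sql
DO $$
DECLARE
    tabname text := 'my table';   -- identifier that needs quoting
    val     text := 'O''Brien';   -- literal with an embedded quote
    q       text;
BEGIN
    -- format()'s %I quotes identifiers and %L quotes literals;
    -- quote_ident() and quote_literal() are the standalone equivalents
    q := format('SELECT * FROM %I WHERE name = %L', tabname, val);
    RAISE NOTICE '%', q;  -- SELECT * FROM "my table" WHERE name = 'O''Brien'
END;
$$;
```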

David J.








--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Crosstab-function-tp5802402p5802601.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





  




[GENERAL] Re: any psql \copy tricks for default-value columns without source data?

2014-05-07 Thread David G Johnston
Thank you everyone; some good programs to check out, but I just went ahead and
used awk to add two additional columns of data to the input file before
sending it on to psql.

David J.



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/any-psql-copy-tricks-for-default-value-columns-without-source-data-tp5802795p5802914.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Server process crash - Segmentation fault

2014-05-07 Thread Adrian Klaver

On 05/06/2014 11:37 PM, Leif Jensen wrote:

Hello Adrian,

Thank you for your answer. I can post part of the code that makes these 
calls, but I'm not sure how much it would help. It is a rather large function 
that makes these calls, and it is called all over the program. The part of the 
log posted is only a small excerpt of the use of the 
ApplDBConn_22854_f6adeb70_query, which has been used many many times before the 
log shown (167 in all to be exact ;-) ).


Exactly. Something different happened at that point. The hard part will 
be determining what that is. The next step would seem to be running a 
debugger on the Postgres process to get more information. For a step by 
step guide see here:


https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD



  Leif






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: permission denied for database control

2014-05-07 Thread Tom Lane
Albe Laurenz laurenz.a...@wien.gv.at writes:
 Suya Huang wrote:
 --fails if try to get all db size in one sql
 postgres= SELECT current_date,pg_database_size(pg_database.datname) from 
 pg_database;
 ERROR:  permission denied for database control

 It seems like you have a database called control for which user test
 has no connect privilege.

Yeah, this failure is unsurprising.  But it looks like we forgot to
document the restriction :-(.  I see a mention of it in the 8.3 release
notes, but there's nothing in the documentation of the functions
themselves.  Will fix.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Server process crash - Segmentation fault

2014-05-07 Thread Tom Lane
Leif Jensen l...@crysberg.dk writes:
Thank you for your answer. I can post part of the code that makes these 
 calls, but I'm not sure how much it would help. It is a rather large function 
 that makes these calls, and it is called all over the program. The part of 
 the log posted is only a small excerpt of the use of the 
 ApplDBConn_22854_f6adeb70_query, which has been used many many times before 
 the log shown (167 in all to be exact ;-) ).

Perhaps you could get a stack trace from the segfault, then.
https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] probable pg_hba.conf configuration issues

2014-05-07 Thread Ravi Roy
Hi,

I'm sorry if this question has been asked earlier, but I could not find
any valid result through googling.

I'm running postgresql 9.1 on Windows 7 (64 bit) and I have the following
configuration in pg_hba.conf:

host all superuser1  127.0.0.1/32 password
host all normaluser1 127.0.0.1/32 password

Using the command line, psql -U superuser1 and psql -U normaluser1 ask for a
password and log in successfully - perfect. (We are only allowing local
connections; remote connections to the database are disabled.)

But if I try to connect using pgadmin (from the same machine) it gives
access to the database without a password. I'm surprised, as it does not seem
to respect pg_hba.conf, or I'm terribly wrong in the configuration somewhere.

Does somebody have any idea what is wrong?

Thank you

Ravi.


Re: [GENERAL] probable pg_hba.conf configuration issues

2014-05-07 Thread Tom Lane
Ravi Roy ravi.a...@gmail.com writes:
 Using the command line, psql -U superuser1 and psql -U normaluser1 ask for
 password and log in successfully - perfect. (We are only allowing local
 connections, remote connections to database are disabled)

 But if I try to connect using pgadmin (from the same machine) it gives
 access to the database without a password; I'm surprised as it does not seem
 to respect pg_hba.conf or I'm terribly wrong in the configuration somewhere.

pgadmin is no doubt caching the password somewhere.  I think it uses
~/.pgpass, though psql would too, so maybe there's something
pgadmin-specific involved.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] probable pg_hba.conf configuration issues

2014-05-07 Thread Sim Zacks

My bet is you have either another line in the pg_hba that says trust,
or you saved the password and forgot.

Sim

On 05/07/2014 05:33 PM, Ravi Roy wrote:

Hi,

I'm sorry if this question has been asked earlier, but I could not find
any valid result through googling.

I'm running postgresql 9.1 on Windows 7 (64 bit) and I have the following
configuration in pg_hba.conf:

host all superuser1  127.0.0.1/32 password
host all normaluser1 127.0.0.1/32 password

Using the command line, psql -U superuser1 and psql -U normaluser1 ask for a
password and log in successfully - perfect. (We are only allowing local
connections; remote connections to the database are disabled.)

But if I try to connect using pgadmin (from the same machine) it gives
access to the database without a password. I'm surprised, as it does not seem
to respect pg_hba.conf, or I'm terribly wrong in the configuration somewhere.

Does somebody have any idea what is wrong?

Thank you

Ravi.
  


  




Re: [GENERAL] probable pg_hba.conf configuration issues

2014-05-07 Thread David G Johnston
Ravi Roy wrote
 But if I try to connect using pgadmin (from the same machine) it gives
 acess to database without password, i'm surprised as it does not seem to
 respect pg_hba.conf or i'm terribly wrong in the configuration somewhere.

You likely told pgadmin to remember (store) passwords.

If you select “Store password”, pgAdmin stores passwords you enter in the
~/.pgpass file under Unix or %APPDATA%\postgresql\pgpass.conf under
Win32 for later reuse. For details, see the pgpass documentation. It will be
used for all libpq based tools. If you want the password removed, you can
select the server’s properties and uncheck the selection at any time.
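
For reference, each entry in that pgpass file is one colon-separated line of
the documented form (the second line is an illustrative entry, not taken from
this thread; * is a wildcard):

```
hostname:port:database:username:password
localhost:5432:*:superuser1:secret
```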

http://www.pgadmin.org/docs/1.18/connect.html



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/probable-pg-hba-conf-configuration-issues-tp5802949p5802954.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Building Postgres using mingw

2014-05-07 Thread Michael Paquier
Hi all,

Following some instructions on the wiki and the docs, I am trying to
compile the code using mingw:
https://wiki.postgresql.org/wiki/Building_With_MinGW
http://www.postgresql.org/docs/devel/static/installation-platform-notes.html#INSTALLATION-NOTES-MINGW

After installing mingw-w64 and adding its binaries to PATH, I am able
to complete configure correctly (including finding a compiler with
--host=x86_64-w64-mingw32). However mingw-w64 does not have a make
command included so compilation cannot continue.

I have also installed msysgit to facilitate the work and have already
most of the build dependencies at hand. Something obvious that I may
be missing? Is it better to give up with mingw and switch to msvc?

Regards,
-- 
Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Building Postgres using mingw

2014-05-07 Thread Adrian Klaver

On 05/07/2014 08:27 AM, Michael Paquier wrote:

Hi all,

Following some instructions on the wiki and the docs, I am trying to
compile the code using mingw:
https://wiki.postgresql.org/wiki/Building_With_MinGW
http://www.postgresql.org/docs/devel/static/installation-platform-notes.html#INSTALLATION-NOTES-MINGW

After installing mingw-w64 and adding its binaries to PATH, I am able
to complete configure correctly (including finding a compiler with
--host=x86_64-w64-mingw32). However mingw-w64 does not have a make
command included so compilation cannot continue.


It would seem it does:):

http://sourceforge.net/apps/trac/mingw-w64/wiki/Make



I have also installed msysgit to facilitate the work and have already
most of the build dependencies at hand. Something obvious that I may
be missing? Is it better to give up with mingw and switch to msvc?

Regards,




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to fix lost synchronization with server

2014-05-07 Thread Andrus

After upgrading server to Postgres 9.3 in Debian customer cannot create
backups anymore. pg_dump returns
error lost synchronization with server:

C:\myapp\..\pg_dump\pg_dump.exe -ib -Z3 -f C:\mybackup.backup -Fc -h
1.2.3.4 -U user -p 5432 mydb

pg_dump: Dumping the contents of table attachme failed: PQgetCopyData()
failed.
pg_dump: Error message from server: lost synchronization with server: got
message type d, length 5858454
pg_dump: The command was: COPY firma1.attachme (id, idmailbox, attachname,
attachbody, attachtype) TO stdout;


The attachme table contains 4487 records.
Its total size is 1016 MB. Most data is contained in one bytea column.

I changed

ssl_renegotiation_limit = 512GB

in postgresql.conf but problem persists.

postgres log file does not contain any information about this.

How to fix or diagnose the issue ?

Should I

1. Add the --inserts command line option to pg_dump. According to (1) it fixes
the issue.

2. Turn ssl off
3. Change something in VMWare . According to (1) it occurs in VMWare only


Server:

PostgreSQL 9.3.3 on x86_64-unknown-linux-gnu, compiled by gcc (Debian
4.7.2-5) 4.7.2, 64-bit
Debian Linux x64 is running under VMWare, 2 cores

Apache and Mono 3.2.8 with a mod_mono MVC4 application are also running on
this server.


Client:

Windows computer running 9.3 pg_dump.exe over in LAN but external IP address
(1.2.3.4) is used


It worked when the server was Windows 2003 running an earlier Postgres 9
without SSL.


Andrus.

(1) https://groups.google.com/forum/#!topic/pgsql.bugs/-bS1Lba3txA 




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Server process crash - Segmentation fault

2014-05-07 Thread Tom Lane
Leif Jensen l...@crysberg.dk writes:
Here is a gdb dump of the backtrace at the server process crash. I have 
 also included the code that generates these calls. As mentioned below this 
 specific connection has been used many times before the crash. Also, we are 
 aware of the thread caveat of only using a connection from one thread at a 
 time. Therefore the strange connection name that includes both the process 
 id and the thread id. This is for the code to make sure that a connection is 
 only used in the thread it is meant to.

Hm.  The crash looks like it must be because ActiveSnapshot is null
(not set).  Since we're doing a FETCH, the active snapshot ought to
be the one saved for the cursor query by DECLARE CURSOR.  It looks
like the problem is that pquery.c only bothers to install that as the
active snapshot while calling ExecutorRun, but in this stack trace
we're in ExecutorRewind.

I wonder if it's a bad idea for ExecReScanLimit to be executing
user-defined expressions?  But it's been like that for awhile,
and I think we might have a hard time preserving the bounded-sort
optimization if we didn't do that.

Anyway the simple fix would be to ensure we install the query
snapshot as active before calling ExecutorRewind.

One interesting question is why this issue hasn't been seen before;
it seems like it'd not be that hard to hit.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] probable pg_hba.conf configuration issues

2014-05-07 Thread Ravi Roy
Thanks Sim, Tom  David, i'm really amazed with quick answers and insight
provided by you all.
I found it was a password cache issue under
%APPDATA%\postgresql\pgpass.conf; I removed the entry and everything works as
expected.

Thank you again guys!
Regards,
Ravi.


On Wed, May 7, 2014 at 8:22 PM, David G Johnston david.g.johns...@gmail.com
 wrote:

 Ravi Roy wrote
  But if I try to connect using pgadmin (from the same machine) it gives
  acess to database without password, i'm surprised as it does not seem to
  respect pg_hba.conf or i'm terribly wrong in the configuration somewhere.

 You likely told pgadmin to remember (store) passwords.

 If you select “Store password”, pgAdmin stores passwords you enter in the
 ~/.pgpass file under Unix or %APPDATA%\postgresql\pgpass.conf under
 Win32 for later reuse. For details, see the pgpass documentation. It will be
 used for all libpq based tools. If you want the password removed, you can
 select the server’s properties and uncheck the selection at any time.

 http://www.pgadmin.org/docs/1.18/connect.html



 --
 View this message in context:
 http://postgresql.1045698.n5.nabble.com/probable-pg-hba-conf-configuration-issues-tp5802949p5802954.html
 Sent from the PostgreSQL - general mailing list archive at Nabble.com.


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] How to fix lost synchronization with server

2014-05-07 Thread Tom Lane
Andrus kobrule...@hot.ee writes:
 pg_dump: Dumping the contents of table attachme failed: PQgetCopyData()
 failed.
 pg_dump: Error message from server: lost synchronization with server: got
 message type d, length 5858454
 pg_dump: The command was: COPY firma1.attachme (id, idmailbox, attachname,
 attachbody, attachtype) TO stdout;

 attachme table contains 4487 records
 Its  total size is 1016 MB. Most data is contained in one bytea column

I think this is probably an out-of-memory situation inside pg_dump, ie
libpq failing to make its input buffer large enough for the incoming row.
It's hard to believe that there's not 6MB available on any modern machine,
so I'm thinking this is an OS-level restriction on how much memory we can
get.  On a Unix machine I'd recommend looking at the ulimit settings
pg_dump is being run under.  Dunno the equivalent for Windows.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Server process crash - Segmentation fault

2014-05-07 Thread Leif Jensen
   Could it be related to the OFFSET part of the statement ? I have another 
query on the same table without OFFSET, which seems to work fine.

 Leif


- Original Message -
 Leif Jensen l...@crysberg.dk writes:
 Here is a gdb dump of the backtrace at the server process crash.
 I have also included the code that generates these calls. As
 mentioned below this specific connection has been used many times
 before the crash. Also, we are aware of the thread caveat that
 only using a connection from one thread at a time. Therefore the
 strange connection name that includes both the process id and
 the thread id. This is for the code to make sure that a
 connection is only used in the thread it is meant to.
 
 Hm. The crash looks like it must be because ActiveSnapshot is null
 (not set). Since we're doing a FETCH, the active snapshot ought to
 be the one saved for the cursor query by DECLARE CURSOR. It looks
 like the problem is that pquery.c only bothers to install that as the
 active snapshot while calling ExecutorRun, but in this stack trace
 we're in ExecutorRewind.
 
 I wonder if it's a bad idea for ExecReScanLimit to be executing
 user-defined expressions? But it's been like that for awhile,
 and I think we might have a hard time preserving the bounded-sort
 optimization if we didn't do that.
 
 Anyway the simple fix would be to ensure we install the query
 snapshot as active before calling ExecutorRewind.
 
 One interesting question is why this issue hasn't been seen before;
 it seems like it'd not be that hard to hit.
 
 regards, tom lane




Re: [GENERAL] Building Postgres using mingw

2014-05-07 Thread Jeff Janes
On Wed, May 7, 2014 at 8:27 AM, Michael Paquier
michael.paqu...@gmail.comwrote:

 Hi all,

 Following some instructions on the wiki and the docs, I am trying to
 compile the code using minwg:
 https://wiki.postgresql.org/wiki/Building_With_MinGW

 http://www.postgresql.org/docs/devel/static/installation-platform-notes.html#INSTALLATION-NOTES-MINGW

 After installing mingw-w64 and adding its binaries to PATH, I am able
 to complete configure correctly (including finding a compiler with
 --host=x86_64-w64-mingw32). However mingw-w64 does not have a make
 command included so compilation cannot continue.


Did you select the 'developer toolkit' when running the mingw installer?

Cheers,

Jeff


Re: [GENERAL] Server process crash - Segmentation fault

2014-05-07 Thread Tom Lane
Leif Jensen l...@crysberg.dk writes:
Could it be related to the OFFSET part of the statement ? I have another 
 query on the same table without OFFSET, which seems to work fine.

Yeah, the specific code path here involves executing a stable (or possibly
immutable) SQL function in a LIMIT or OFFSET clause.  I was able to
reproduce the crash like so:

create function foo(int) returns int as 'select $1 limit 1'
language sql stable;

begin;

declare c cursor for select * from int8_tbl limit foo(3);

select * from c;

move backward all in c;

select * from c;

commit;

You might be able to dodge the problem if you can make the SQL function
inline-able (the LIMIT 1 in my example is just to prevent that from
happening).  A less appealing alternative is to mark the function
VOLATILE, which I think would also prevent this crash, but might have
negative performance consequences.
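
For illustration, an inline-able variant of the example function might look like the following. This is a sketch, not a guaranteed workaround: a single-statement SQL function whose body is a bare SELECT of an expression (no LIMIT) is a candidate for inlining, so the cursor never executes it through the crashing code path.

```sql
-- Same as foo() above but without the LIMIT 1; the planner can inline
-- this body directly into the calling query, sidestepping the
-- FETCH/ExecutorRewind path that triggers the crash.
create function foo_inlinable(int) returns int as 'select $1'
language sql stable;
```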

If you don't mind building your own PG then you could grab the actual fix
from our git repo; I should have something committed before long.

regards, tom lane




Re: [GENERAL] How to fix lost synchronization with server

2014-05-07 Thread Andrus

Hi!


pg_dump: Error message from server: lost synchronization with server: got
message type d, length 5858454

I think this is probably an out-of-memory situation inside pg_dump, ie
libpq failing to make its input buffer large enough for the incoming row.
It's hard to believe that there's not 6MB available on any modern machine,
so I'm thinking this is an OS-level restriction on how much memory we can
get.  On a Unix machine I'd recommend looking at the ulimit settings
pg_dump is being run under.  Dunno the equivalent for Windows.


Backup computer has modern Windows client OS.
It has GBs of memory and swap file possibility.

To my knowledge there are no memory settings in Windows that can
restrict a 6MB allocation.
On memory shortage Windows shows a message like Increasing swap file size.
The customer didn't report such a message.

The dump worked for years without issues when the server was 32-bit Windows 2003
and Postgres and pg_dump were an earlier version 9 release (though new rows have
been added to the attachme table since the upgrade).

How can I create backup copies or diagnose the issue?
I can change pg_dump execution parameters.
I can install VC++ Express and compile something to add diagnostics if this
can help.
Maybe this message could be improved to include more details about the reason.
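
To narrow down which rows trip the failure, something like the following could help. The column name content is a placeholder; the actual name of attachme's bytea column is not shown in this thread.

```sql
-- "content" is a hypothetical name for the bytea column; substitute
-- the real one.  Lists the ten widest values so an oversized row can
-- be spotted without running a full dump.
SELECT ctid, octet_length(content) AS bytes
FROM attachme
ORDER BY octet_length(content) DESC
LIMIT 10;
```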

Andrus. 






Re: [GENERAL] How to fix lost synchronization with server

2014-05-07 Thread David G Johnston
Andrus Moor wrote
 Dump worked for years without issues when server was 32 bit Windows 2003
 server and Postgres and pg_dump were earlier version 9 (but after upgrade
 new rows are added to attachme table).
 
 How to create backup copies or diagnose the issue ?
 I can change pg_dump execution parameters.
 I can install VC++ Express and compile something to add  diagnozing if
 this
 can help.
 Maybe this message can improved to include more details about the reason.

Given this is an upgrade, though the specifics were omitted, is there some
possibility of a pg_dump/server version mismatch being the cause?  I could
see where a 32-bit client connecting to a 64-bit server could possibly
exhibit apparent memory-related issues.

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/How-to-add-xml-data-to-table-tp4881402p5803020.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Full text: Ispell dictionary

2014-05-07 Thread Oleg Bartunov
btw, take a look at contrib/dict_xsyn; it's more powerful than
the synonym dictionary.
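
A minimal setup might look like this (a sketch; xsyn_sample is the rules file shipped with the contrib module, and you would normally point RULES at your own file):

```sql
CREATE EXTENSION dict_xsyn;

CREATE TEXT SEARCH DICTIONARY my_xsyn (
    TEMPLATE = xsyn_template,
    RULES = 'xsyn_sample',   -- rules file installed with the extension
    KEEPORIG = false         -- emit only the substitute words
);
```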

On Sat, May 3, 2014 at 2:26 AM, Tim van der Linden t...@shisaa.jp wrote:
 Hi Oleg

 Haha, understood!

 Thanks for helping me on this one.

 Cheers
 Tim


 On May 3, 2014 7:24:08 AM GMT+09:00, Oleg Bartunov obartu...@gmail.com
 wrote:

 Tim,

 you did answer yourself - don't use ispell :)

 On Sat, May 3, 2014 at 1:45 AM, Tim van der Linden t...@shisaa.jp wrote:

  On Fri, 2 May 2014 21:12:56 +0400
  Oleg Bartunov obartu...@gmail.com wrote:

  Hi Oleg

  Thanks for the response!

  Yes, it's normal for ispell dictionary, think about morphological
 dictionary.


  Hmm, I see, that makes sense. I thought the morphological aspect of the
 Ispell only dealt with splitting up compound words, but it also deals with
 deriving the word to a more stem like form, correct?

  As a last question on this, is there a way to disable this dictionary to
 emit multiple lexemes?


 The reason I am asking is because in my (fairly new) understanding of
 PostgreSQL's full text it is always best to have as few lexemes as possible
 saved in the vector. This to get smaller indexes and faster matching
 afterwards. Also, if you run a tsquery afterwards to, you can still employ
 the power of these multiple lexemes to find a match.

  Or...probably answering my own question...if I do not desire this
 behavior I should maybe not use Ispell and simply use another dictionary :)

  Thanks again.

  Cheers,
  Tim

  On Fri, May 2, 2014 at 11:54 AM, Tim van der Linden t...@shisaa.jp
 wrote:

  Good morning/afternoon all

  I am currently writing a few articles about PostgreSQL's full text
 capabilities and have a question about the Ispell dictionary which I
 cannot seem to find an answer to. It is probably a very simple issue, so
 forgive my ignorance.

  In one article I am explaining about dictionaries and I have setup a
 sample configuration which maps most token categories to only use a Ispell
 dictionary (timusan_ispell) which has a default configuration:

  CREATE TEXT SEARCH DICTIONARY timusan_ispell (
  TEMPLATE = ispell,
  DictFile = en_us,
  AffFile = en_us,
  StopWords = english
  );

  When I run a simple query like SELECT
 to_tsvector('timusan-ispell','smiling') I get back the following 
 tsvector:

  'smile':1 'smiling':1

  As you can see I get two lexemes with the same pointer.
  The question here is: why does this happen?

  Is it normal behavior for the Ispell dictionary to emit multiple
 lexemes for a single token? And if so, is this efficient? I
 mean, why could it not simply save one lexeme 'smile' which (same as
 the snowball dictionary) would match 'smiling' as well if later matched 
 with
 the accompanying tsquery?

  Thanks!

  Cheers,
  Tim


  --
  Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
  To make changes to your subscription:
  http://www.postgresql.org/mailpref/pgsql-general



  --
  Tim van der Linden t...@shisaa.jp




[GENERAL] Enforce Symmetric Matrix

2014-05-07 Thread Randy Westlund
I have a table with N (over 35k) rows.  I need to compare each of
those elements to every other element in the list, which is the
handshake problem.

This results in a symmetric matrix (an adjacency matrix for an
undirected graph).  Because all relations are symmetric, I only need to
store the upper triangle of this matrix.  A ~ B and B ~ A are the same.

I need some advice on how to implement this.  At first, I thought that
I'd only store the upper triangle, resulting in N^2 / 2 rows of the
form:

id1   id2   value
---   ---   -----
 A     B      1
 A     C      2
 A     D      3
 B     C      4
 B     D      5
 C     D      6

Where value is the result of an expensive computation, and with the
constraint that id1 < id2.

But there are problems.  To get a list of all things compared to B, I
have to look in both columns.  Complicated queries with selects nested
inside updates become very difficult or impossible to do in one go.  I'd
also need to index both columns, which seems like a waste.

So I thought maybe I'd just store the full matrix, with A,B and B,A
having the same data

id1   id2   value
---   ---   -----
 A     B      1
 A     C      2
 A     D      3
 B     A      1
 B     C      4
 B     D      5
 C     A      2
 C     B      4
 C     D      6

With the constraint that id1 != id2 because I don't need the diagonal.

The table is twice as big, but still O(n^2).  This would let me get the
list of all things compared to B with SELECT ... WHERE id1 = B, which is
super easy.

My problem here is that I'm not sure how to enforce the rule that A,B
and B,A have the same value.  I want to use a rule or trigger such that
when row A,B is updated or inserted, row B,A is updated or inserted with
the same data.  But then it would get called recursively, and that
doesn't work.

For my application, the total number of items I have (N) will be growing
over time and this table will need to be updated accordingly.

This seems like a problem that many people must have come across before,
but I've been strangely unable to find advice online.  Any
recommendations?
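
One way to break the recursion, sketched here against a hypothetical pairs(id1, id2, value) table, is to check pg_trigger_depth() (available since PostgreSQL 9.2) so the mirroring trigger does nothing when it was fired by its own write. DELETE handling is omitted for brevity.

```sql
CREATE FUNCTION mirror_pair() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    -- depth > 1 means this firing was caused by the trigger's own
    -- INSERT/UPDATE below, so stop the recursion here.
    IF pg_trigger_depth() > 1 THEN
        RETURN NEW;
    END IF;
    IF TG_OP = 'INSERT' THEN
        INSERT INTO pairs (id1, id2, value)
        VALUES (NEW.id2, NEW.id1, NEW.value);
    ELSE
        UPDATE pairs SET value = NEW.value
        WHERE id1 = NEW.id2 AND id2 = NEW.id1;
    END IF;
    RETURN NEW;
END $$;

CREATE TRIGGER mirror_pair_trg
AFTER INSERT OR UPDATE ON pairs
FOR EACH ROW EXECUTE PROCEDURE mirror_pair();
```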


signature.asc
Description: Digital signature


Re: [GENERAL] How to fix lost synchronization with server

2014-05-07 Thread Andrus

Hi!


Given this is an upgrade, though the specifics were omitted, is there some
possibility of a pg_dump/server version mis-match being the cause.  I could
see where a 32-bit client connecting to a 64bit server could possible
exhibit apparent memory-related issues.


pg_dump is the 32-bit version. pg_dump -V returns

pg_dump (PostgreSQL) 9.3.0


The server is x64:

PostgreSQL 9.3.3 on x86_64-unknown-linux-gnu, compiled by gcc (Debian
4.7.2-5) 4.7.2, 64-bit

Can this cause the issue?

Andrus. 







Re: [GENERAL] How to fix lost synchronization with server

2014-05-07 Thread Tom Lane
Andrus kobrule...@hot.ee writes:
 Given this is an upgrade, though the specifics were omitted, is there some
 possibility of a pg_dump/server version mis-match being the cause.  I could
 see where a 32-bit client connecting to a 64bit server could possible
 exhibit apparent memory-related issues.

 pg_dump is 32-bit version. pg_dump -V returns
 pg_dump (PostgreSQL) 9.3.0

 Can this cause the issue ?

Hm.  It wouldn't *cause* the issue, but certainly a 32-bit pg_dump would
have lots less headroom if there were a memory bloat problem.

I looked back at the previous thread you mentioned (bug #7914) and was
reminded that we never did understand what was going on in that report.
I'm not sure if you are seeing the same thing though.  That user reported
that he was able to see pg_dump's memory consumption bloating well beyond
what it ought to be (I suppose he was watching the process in whatever
Windows' equivalent of ps or top is).  Do you see that?

If there is a memory leakage type issue involved then it's less surprising
that a request for a mere 6MB would fail --- once we've leaked enough
memory, it's certainly gonna fail at some point.  This doesn't get us much
closer to understanding the problem though.

regards, tom lane




Re: [GENERAL] ERROR: permission denied for database control

2014-05-07 Thread Huang, Suya
Thank you Tom and Albe. After granting the CONNECT privilege on the database
to user test, the query runs without problems.
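
For the archives, the fix can be sketched as the following grant, run as a superuser or the database owner (the database name control is taken from the error message):

```sql
-- Allow user "test" to connect to database "control", so
-- pg_database_size() no longer fails on it when iterating pg_database.
GRANT CONNECT ON DATABASE control TO test;
```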

Thanks,
Suya

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Thursday, May 08, 2014 12:13 AM
To: Albe Laurenz
Cc: Huang, Suya; 'pgsql-general General'
Subject: Re: [GENERAL] ERROR: permission denied for database control

Albe Laurenz laurenz.a...@wien.gv.at writes:
 Suya Huang wrote:
 --fails if try to get all db size in one sql postgres= SELECT 
 current_date,pg_database_size(pg_database.datname) from pg_database;
 ERROR:  permission denied for database control

 It seems like you have a database called control for which user test
 has no connect privilege.

Yeah, this failure is unsurprising.  But it looks like we forgot to document 
the restriction :-(.  I see a mention of it in the 8.3 release notes, but 
there's nothing in the documentation of the functions themselves.  Will fix.

regards, tom lane




Re: [GENERAL] Enforce Symmetric Matrix

2014-05-07 Thread David G Johnston
SELECT l.id, u.id, func(l.id, u.id)
FROM ids l CROSS JOIN ids u
WHERE l.id < u.id

Depending on whether you always update a known pair, or instead invalidate
all rows where either id is a given value, you can use various means to
manage the resultant materialized view.  Triggers or interface functions
mainly.

Without calling the value function you would also know, at any given time,
whether a given pair is present.  The usefulness of this depends on how
real-time you need the updates to be; which is a trade-off with performance
during changes.

Adding a simple limit on the two ids sub-queries, and doing the incremental
add in a loop, you can appropriately scale the updates to limit memory usage
during the bulk load phase.  Likely ongoing updates will not have the same
requirement since you only have N updates instead of N^2/2; but can be done
all the same.

SELECT LID, UID, FUNC(lid, uid) FROM 
SELECT CASE WHEN c1 < c2 THEN c1 ELSE c2 END AS LID , CASE WHEN c1 < c2 THEN
c2 ELSE c1 END AS UID FROM
SELECT * FROM - WHERE c1 <> c2
SELECT :newval AS c1, ids.id AS c2 FROM ids

David J.



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Enforce-Symmetric-Matrix-tp5803064p5803126.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Building Postgres using mingw

2014-05-07 Thread Michael Paquier
On Wed, May 7, 2014 at 10:26 AM, Jeff Janes jeff.ja...@gmail.com wrote:
 On Wed, May 7, 2014 at 8:27 AM, Michael Paquier michael.paqu...@gmail.com
 wrote:

 Hi all,

 Following some instructions on the wiki and the docs, I am trying to
 compile the code using minwg:
 https://wiki.postgresql.org/wiki/Building_With_MinGW

 http://www.postgresql.org/docs/devel/static/installation-platform-notes.html#INSTALLATION-NOTES-MINGW

 After installing mingw-w64 and adding its binaries to PATH, I am able
 to complete configure correctly (including finding a compiler with
 --host=x86_64-w64-mingw32). However mingw-w64 does not have a make
 command included so compilation cannot continue.


 Did you select the 'developer toolkit' when running the mingw installer?
Think so... However I finally got it working, and here is a little bit
of feedback for the archive's sake as this is trickier than it seems
at first sight.

When trying to use a recent mingw package, sometimes ./configure is
not able to recognize a compiler even if there is one in PATH. This
worked correctly with the snapshot that the Postgres wiki recommends
though, even if it is a couple of years old.

Then, after looking at some mingw builds available on sourceforge, the
make command is sometimes not available, but some of the latest builds
available name it mingw32-make.exe instead of make.exe to not conflict
with msys things. So a quick solution is to copy it as make.exe in
PATH if your environment does not include it yet. But be careful when
doing that.

I also had to enforce some environment variables to the following
values at configure (instead of the default values set to /bin/*
because this was failing):
SHELL=bash
PERL=perl
BISON=bison
FLEX=flex
MKDIR_P=mkdir -p

I hope that this helps. Perhaps I missed something, so if someone has
better ideas or ways to do that... This has been done on a Win7 dev
box with msysgit installed.
Regards,
-- 
Michael




[GENERAL] Analyze against a table with geometry columns runs out of memory

2014-05-07 Thread Roxanne Reid-Bennett

Hello,

We are working out the upgrade of our servers from Postgres 9.1 and 
PostGIS 2.0 to Postgres 9.3 and PostGIS 2.1.
After building the base stack, The System Admin restored the database 
from a backup. [I'll ask for more details if you need them]


I have 3 tables with geometry columns in them that, when they are 
autovacuumed, vacuumed, or analyzed, run the system out of memory.  I 
have isolated that the problem for one of the tables is related to a 
geometry column. I have tables in the system that are much larger on 
disk with geometry columns in them that vacuum analyze just fine, so it 
isn't just that they have geometry columns. Two of the tables are 
related to each other; the other is a load of Government-supplied data 
and completely separate in detail and concept.


Using the smallest table... we looked at maintenance_work_mem and tried 
several runs with varying values [16MB, 64MB, 256MB, and 500MB].  A larger 
maintenance_work_mem allows the process to run longer before it starts 
gobbling up swap, but the process still spends most of its time in 
uninterruptible sleep (usually IO) state and just eats up the swap 
until all of the memory is gone.


Smallest table definition, config and log file entries, etc follow 
below.  If I have failed to provide necessary or desired information, 
just ask.


We have noted that the memory management was changed going into 9.3 - 
but we haven't been able to find anything that would indicate any known 
issues ... This problem caused us to take a hard look at the stack 
again, and we will be building a new stack anyway because we need a 
newer GEOS - but we are seriously considering dropping Postgres back to 
9.2.


I am out of ideas on what else to try after maintenance_work_mem ...  
Does anybody have any suggestions/questions/observations for me?
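
One untested idea, sketched below, is to shrink the per-column statistics target for the geometry column, which reduces the sample ANALYZE collects for that column (this config sets default_statistics_target = 200):

```sql
-- Hypothetical mitigation: lower the sample size ANALYZE uses for the
-- problem geometry column, then re-run the analyze that was failing.
ALTER TABLE temp.region ALTER COLUMN region SET STATISTICS 10;
ANALYZE VERBOSE temp.region;
```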


Thank you.

Roxanne
--

VirtualBox: 4.1.24   Intel Xeon 2.13 GHz (8)   48 GB RAM
VirtualBox instance:  64-bit, 4 processors, Base Memory: 12 GB

running Ubuntu 12.04.1 LTS
Linux 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 
x86_64 x86_64 x86_64 GNU/Linux


Postgres: PostgreSQL 9.3.4 on x86_64-unknown-linux-gnu, compiled by gcc 
(Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit
PostGis: POSTGIS=2.1.2 r12389 GEOS=3.3.3-CAPI-1.7.4 PROJ=Rel. 
4.8.0, 6 March 2012 GDAL=GDAL 1.9.2, released 2012/10/08 
LIBXML=2.7.8 LIBJSON=UNKNOWN TOPOLOGY RASTER


Postgres.conf entries (of probable interest - if I didn't list it, it's 
probably defaulted):


max_connections = 100
shared_buffers = 4089196kB
work_mem = 128MB
maintenance_work_mem = 64MB
checkpoint_segments = 64
checkpoint_timeout = 30min
checkpoint_completion_target = 0.75
effective_cache_size = 4089196kB
default_statistics_target = 200
autovacuum_max_workers = 1  [this is normally set to 3]

Analyzing the original table activity failed.  Using a copy of the 
original table with no indexes, no foreign keys, and no constraints also 
failed.  However, dropping one of the two geometry columns (region) out 
of the copy allowed it to succeed. Taking a copy of just region (which 
contains (Multi)Polygons) and the primary key via CREATE TABLE ... AS 
(SELECT ...) from the original table activity to create 
temp.region... analyze runs out of memory.  The following were run 
against temp.region.


smallest/shortest table definition from \d:

         Table "temp.region"
   Column    |          Type           | Modifiers
-------------+-------------------------+-----------
 activity_id | integer                 |
 region      | geometry(Geometry,4326) |

shell
HQ4_Staging=# analyze verbose temp.region;
INFO:  0: analyzing temp.region
LOCATION:  do_analyze_rel, analyze.c:335
INFO:  0: region: scanned 1022 of 1022 pages, containing 52990 
live rows and 0 dead rows; 52990 rows in sample, 52990 estimated total rows

LOCATION:  acquire_sample_rows, analyze.c:1299
The connection to the server was lost. Attempting reset: Failed.
/shell

Duration of the above was approximately 1.25 hrs.

The Log files show:
postgres
2014-05-07 16:56:56 EDT|2054| LOG:  server process (PID 6663) was 
terminated by signal 9: Killed
2014-05-07 16:56:56 EDT|2054| DETAIL:  Failed process was running: 
analyze verbose temp.region;
2014-05-07 16:56:56 EDT|2054| LOG:  terminating any other active server 
processes

syslog
May  7 16:56:55 hq4-staging-database kernel: [458605.351369] postgres 
invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0


...  [I have the full stack if anyone wants it]

May  7 16:56:55 hq4-staging-database kernel: [458605.408021] Out of 
memory: Kill process 6663 (postgres) score 920 or sacrifice child
May  7 16:56:55 hq4-staging-database kernel: [458605.412287] Killed 
process 6663 (postgres) total-vm:20269840kB, anon-rss:8625876kB, 
file-rss:3082472kB



This is a test box, which we know is much slower/smaller than our 
production box, but normally sufficient.  As a test box, we had no other 
major activity going on.  

Re: [GENERAL] Oracle to PostgreSQL replication

2014-05-07 Thread Sameer Kumar
Thanks a lot, everyone!

I guess I will be exploring more of the Oracle foreign data wrapper.

Has anyone tried using oracle_fdw with Oracle RAC? I am wondering how it
would handle failovers.