Re: [GENERAL] Installing Postgresql on Linux Friendlyarm

2014-09-23 Thread Abelard Hoffman
On Mon, Sep 22, 2014 at 7:39 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Xiang Gan xiang...@santaitech.com writes:
  OK, so a stupid question: is there any possibility to run
 PostgreSQL as root? (I know this is generally forbidden. But what I found
 out is that in the Linux FriendlyARM environment, root can create a socket
 while a non-root user does not have that privilege.)


Typically, applications open the port as root and then change to a
low-privilege user for the actual service.
See also this thread:

http://stackoverflow.com/questions/413807/is-there-a-way-for-non-root-processes-to-bind-to-privileged-ports-1024-on-l
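As a rough illustration of that pattern (a sketch, not the actual PostgreSQL implementation; the port and uid/gid values in the comment are placeholders), a daemon binds the privileged port while still root and only then sheds its privileges:

```python
import os
import socket

def bind_then_drop(port, uid=None, gid=None):
    """Bind a listening TCP socket, then shed root privileges.

    Binding ports below 1024 requires root (or CAP_NET_BIND_SERVICE on
    Linux); privileges are dropped only if the process is actually root.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("0.0.0.0", port))   # the privileged step
    sock.listen(5)
    if os.getuid() == 0:
        # Drop the group first, then the user; the reverse order would
        # leave us unable to change the group after losing root.
        if gid is not None:
            os.setgid(gid)
        if uid is not None:
            os.setuid(uid)
    return sock

# A privileged daemon would call e.g. bind_then_drop(80, uid=65534, gid=65534)
# at startup, before reading any client data.
```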


Re: [GENERAL] ALTER TEXT field to VARCHAR(1024)

2014-09-23 Thread Jeff Janes
On Mon, Sep 22, 2014 at 8:40 AM, John McKown john.archie.mck...@gmail.com
wrote:

 On Mon, Sep 22, 2014 at 10:31 AM, Merlin Moncure mmonc...@gmail.com
 wrote:
  On Fri, Sep 19, 2014 at 7:16 AM, Bill Moran wmo...@potentialtech.com
 wrote:
  On Fri, 19 Sep 2014 09:32:09 +0200
  Marius Grama marius...@gmail.com wrote:
  Can anybody explain to me what happens in the background when the ALTER
  statement is executed? I've tried it out on a small copy of the table
  (70K rows) and the operation completed in 0.2 seconds.
  Will the table be completely locked during the execution of the ALTER
  statement?
 
  I share Gavin's concern that you're fixing this in the wrong place.  I
 expect
  that you'll be better served by configuring the middleware to do the
 right thing.
 
  I'll pile on here: in almost 20 years of professional database
  development I've never had an actual problem that was solved by
  introducing or shortening a length constraint to text columns except
  in cases where overlong strings violate the data model (like a two
  character state code for example).  It's a database equivalent of C
  programmer's disease.  Input checks from untrusted actors should
  happen in the application.
 
  merlin
 

 I do not have your experience level with databases, but if I may, I
 will make an addition. Input checks should also happen in the RDBMS
 server. I have learned you cannot trust end users _or_ programmers.
 Most are good and conscientious. But there are a few who just aren't.


So fire them.


 And those few seem to be very prolific in making _subtle_ errors. Had
 one person who was really good at replacing every p with a [ and P
 with {


Your solution is what, arbitrarily forbidding the use of '[' when that is
not a logically forbidden character, just because someone might make a
mistake?  What do you do when they wish that someone have a lot of gun on
your vacation?

Nothing ticks me off more than some DBA deciding that it is unreasonable
for my street address to be more than 25 characters long, when obviously
neither I nor the USPS agrees with that arbitrary limitation.  Unless 25 is
the maximum number of characters that physically fit on the mailing label
(and you are sure you will never change label printers), it is not your job
to decide how long my street name can be.  Get over yourself.

If you need to verify that the data is accurate, then implement methods to
verify that.  Verifying that the data is reasonable, according to some
ignorant standard of reasonableness, is not the same thing as verifying
that it is accurate.

More than one company has lost business by refusing to acknowledge that I
might know how to spell my own address.

Cheers,

Jeff, whose street address has 27 characters, whether you like it or not.


[GENERAL] Where art thou, plpython2.dll? (EDB installer)

2014-09-23 Thread Craig Ringer
Hi all

I've had some issues with how the procedural languages are packaged in
the Windows installer for a while, but I was very surprised to see that
plpython2 appears to be entirely absent in 9.3.

It doesn't seem to be provided via EDB's StackBuilder app either.

What's going on? It looks like it was dropped in 9.1.

PL/Python 2 and PL/Python 3 are different things. You can't just upgrade
trivially from plpython2 to plpython3.

Python 2 and Python 3 install side-by-side without issues.

Why isn't plpython2.dll built and included?

It's good to see that the required procedural language runtime versions
are now included in installation-notes.html in the installer, though.
Thanks for that.

(Any thoughts on my prior question about using virtual service accounts
on Windows 7, btw?)

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] csv import error

2014-09-23 Thread Eugenio Trumpy
Hello,

I'm trying to import data using a csv file,
but I got an error:

 
ERROR:  column "key;daprof;aprof;tipo;valore;note;oid;unit_mis" of relation 
"assorb" does not exist
LINE 1: INSERT INTO info_pozzi_hydrocarbon.assorb ("key;daprof;a...
   ^
  In statement:

INSERT INTO info_pozzi_hydrocarbon.assorb 
("key;daprof;aprof;tipo;valore;note;oid;unit_mis")
VALUES ('1001334;19.1;21;A;6;;;11')
My SQL statement for the table that I would like to populate is:

-- Table: info_pozzi_hydrocarbon.assorb

-- DROP TABLE info_pozzi_hydrocarbon.assorb;

CREATE TABLE info_pozzi_hydrocarbon.assorb
(
  key integer,
  daprof double precision,
  aprof double precision,
  tipo character(1),
  valore double precision,
  note character(254),
  oid serial NOT NULL,
  unit_mis smallint,
  CONSTRAINT assorb_pk PRIMARY KEY (oid),
  CONSTRAINT assorb_fk FOREIGN KEY (key)
  REFERENCES pozzi (key) MATCH SIMPLE
  ON UPDATE CASCADE ON DELETE CASCADE,
  CONSTRAINT unit_m_fk FOREIGN KEY (unit_mis)
  REFERENCES info_cod.unita (unita) MATCH SIMPLE
  ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (
  OIDS=FALSE
);


the csv file is like the following (it is only a part):

key;daprof;aprof;tipo;valore;note;oid;unit_mis
1001334;19.1;21;A;6;;;11
1001334;93.5;94;A;30;;;11
1001334;94;115;A;20;;;11
1001334;154.5;255;A;644;;;11
1001334;273;282;A;4;;;11
1001334;298;309;A;7;;;11
1001334;432;1224;P;1850;;;11
4277001;121;901;A;397;ALLARGAMENTO FORO;;11
4277001;121;901;A;96;PERFORAZIONE;;11


The field order is the same, and the decimal separator is set to a dot. I 
also set the field delimiter.
I don't understand what the problem was. 

Can somebody help me?

Eugenio



  

Re: [GENERAL] csv import error

2014-09-23 Thread FarjadFarid(ChkNet)
Hi, 

 

Quotation marks should be around both the name of each and every column
and their values.

Also, use commas rather than semicolons as the separator between columns
in the SQL INSERT statement.

What has happened here is that the values from the CSV were pasted
directly into the SQL.


 Hope this helps. 

 

Best Regards

 

 

Farjad Farid

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On
Behalf Of Eugenio Trumpy
Sent: 23 September 2014 11:26
To: pgsql-general@postgresql.org
Subject: [GENERAL] csv import error

 

[quoted message body snipped; see the original post above]


Re: [GENERAL] csv import error

2014-09-23 Thread Rémi Cura
Why don't you use COPY ?
Cheers,
Rémi-C
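For a file like the one posted, a COPY along these lines loads everything in one statement (a sketch: the path is a placeholder, and server-side COPY needs the file readable by the server process — from a client, psql's \copy takes the same options):

```sql
COPY info_pozzi_hydrocarbon.assorb
    (key, daprof, aprof, tipo, valore, note, oid, unit_mis)
FROM '/path/to/assorb.csv'
WITH (FORMAT csv, DELIMITER ';', HEADER true, NULL '');
```

Note that the empty oid fields would become NULLs and violate the NOT NULL constraint; dropping that column from both the file and the column list lets the serial default fill it in.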

2014-09-23 12:50 GMT+02:00 FarjadFarid(ChkNet) 
farjad.fa...@checknetworks.com:

 [quoted message body snipped; see the original posts above]




[GENERAL] Fw: strange problem with not existing roles

2014-09-23 Thread lud...@kni-online.de

Hi List,

a workaround for my own problems as described below:

In the following system-table columns (type aclitem[]) I replaced all entries containing these non-existing group roles with something like {postgres=arwdDxt/postgres,kniprath=arwdDxt/kniprath}, which resets the privileges to just these two (admin) users.

 pg_namespace.nspacl

 pg_class.relacl

 pg_default_acl.defaclacl

I don't think it's good practice to update system tables manually, but apparently I previously did something in my database that messed up the contents.
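For illustration only, the kind of manual reset described above looks roughly like this (dangerous: superuser only, take a backup first; the role OIDs and ACL string come from the original post, everything else is a sketch):

```sql
-- Reset the ACL on every table in schema user_data whose ACL still
-- references the orphaned role OID 482499.
UPDATE pg_catalog.pg_class c
SET relacl = '{postgres=arwdDxt/postgres,kniprath=arwdDxt/kniprath}'::aclitem[]
FROM pg_catalog.pg_namespace n
WHERE c.relnamespace = n.oid
  AND n.nspname = 'user_data'
  AND c.relacl::text LIKE '%482499%';
```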

Ludwig

Sent: Wednesday, 17 September 2014 at 17:08
From: lud...@kni-online.de lud...@kni-online.de
To: pgsql-general@postgresql.org
Subject: strange problem with not existing roles





Hi list,
I have a strange problem in Postgres (PostgreSQL 9.3.1, compiled by Visual C++ build 1600, 64-bit): there are granted privileges on schemas, tables, and columns for roles that don't exist.


Example:
In pgAdmin, for schema user_data the following wrong grants are reported:
...
GRANT ALL ON SCHEMA user_data TO "482499";
GRANT ALL ON SCHEMA user_data TO "17708";
...


Problem is:
- these roles don't exist,
- they can't be dropped (DROP ROLE "482499"; => ERROR: role "482499" does not exist)
- grants can't be revoked (REVOKE ALL ON SCHEMA user_data FROM "482499"; => ERROR: role "482499" does not exist)
- the roles can be recreated and dropped afterwards, but the grants persist:
 CREATE ROLE "482499";
 DROP OWNED BY "482499";
 REVOKE CONNECT ON DATABASE wver_ims FROM "482499";
 REVOKE ALL ON SCHEMA user_data FROM "482499";
 DROP ROLE "482499";
- new tables can't be created in schemas with these grants:
 CREATE TABLE user_data.test
 (
 id serial,
 PRIMARY KEY (id)
 );
 => ERROR: role "17708" was concurrently dropped
 
 
The roles are not listed in any catalog:
 SELECT * FROM information_schema.xxx WHERE grantee in (243683,243666,243689,482499,482499,17708);


Only pg_auth_members has a row for each of these roles:
 SELECT * FROM pg_catalog.pg_auth_members WHERE member in (243683,243666,243689,482499,482499,17708);



What can I do to get rid of these roles and grants?



Ludwig








Re: [GENERAL] csv import error

2014-09-23 Thread FarjadFarid(ChkNet)
 

Hi, 

 

Quotation marks should be around both the name of each and every column
and their values.

As the column names are all lower case, you may wish to omit the
quotation marks around them, which is easier. But character values do
need quotation marks.

Also, replace the semicolons used as the column separator with commas in
the SQL INSERT statement.

What has happened here is that the values from the CSV were pasted
directly into the SQL. It needs a little bit of extra work.

Hope this helps. 

 

Best Regards

 

 

Farjad Farid

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On
Behalf Of Eugenio Trumpy
Sent: 23 September 2014 11:26
To: pgsql-general@postgresql.org
Subject: [GENERAL] csv import error

 

[quoted message body snipped; see the original post above]


Re: [GENERAL] wide row insert via Postgres jdbc driver

2014-09-23 Thread Bill Moran
On Tue, 23 Sep 2014 13:24:40 +0800
Sameer Kumar sameer.ku...@ashnik.com wrote:
 
 I am working with a vendor and planning to deploy their application on
 PostgreSQL as the backend. They have cautioned the customer that PostgreSQL's
 JDBC driver v9.1 (build 900) has issues which cause deadlocks during wide
 record inserts.

Where are they getting this information?  Sounds like FUD to me.

 Is there any such known problem which anyone else has encountered in this
 regards? Has there been any improvements in future builds/releases on this
 aspect of PostgreSQL drivers/connectors?

I'm not aware of any, and in my previous job we made extensive use of it.

-- 
Bill Moran
I need your help to succeed:
http://gamesbybill.com




Re: [GENERAL] Where art thou, plpython2.dll? (EDB installer)

2014-09-23 Thread Seref Arikan
I've gone through the same journey myself, tried to juggle DLLs etc.
Apparently Python 2 is no longer included in the Windows installer.
Since I have lots of Python 2.x code with native extensions etc., I thought
about compiling Postgres from source on Windows, but as usual the process
turned into a nightmare (compiling anything on Windows is, at least for me).
If my memory is not failing me, there is also the problem that the Postgres
distribution on Windows uses the ActiveState Python distribution instead of
the python.org one.

Anyway, I ended up using VirtualBox with a Linux guest because of this issue.
If there is an easy way to get Python 2.x (2.7?) working with Postgres on
Windows, it would be nice to hear.



On Tue, Sep 23, 2014 at 9:36 AM, Craig Ringer cr...@2ndquadrant.com wrote:

 [quoted message body snipped; see the original post above]



Re: [GENERAL] csv import error

2014-09-23 Thread Eugenio Trumpy
Hi,

I adjusted the CSV, changing the semicolons to commas and adding quotes
around character values.
Now it seems to be better, but I got another error. This one is due to the
fact that the oid column in my PostgreSQL table is a serial and I did not
fill it in in the CSV, because I was thinking it would be filled in
automatically during the data import.

What do you suggest in this regard? How can I solve it?

E.

From: farjad.fa...@checknetworks.com
To: frippe12...@hotmail.com; pgsql-general@postgresql.org
Subject: RE: [GENERAL] csv import error
Date: Tue, 23 Sep 2014 12:03:15 +0100

[quoted message bodies snipped; see the original posts above]

Re: [GENERAL] csv import error

2014-09-23 Thread FarjadFarid(ChkNet)
Hi Eugenio, 

 

How you go about resolving this depends on your project and the size of
the data.

For a simple case, which this seems to be, simply don't insert into the
serial column, i.e. remove both the name of the column and its
corresponding value from the INSERT statement.

PostgreSQL will fill it in automatically.

If the data is larger, or there are dependencies on the oid value
elsewhere in the DB, then recreate the target table without making the
column serial, creating all the columns and inserting all column values.

Once the operation is done, manually create a sequence and attach it to
the oid column.
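The "attach a sequence afterwards" step could look roughly like this (a sketch; the sequence name is assumed, while the table and column come from the definition quoted earlier in the thread):

```sql
CREATE SEQUENCE info_pozzi_hydrocarbon.assorb_oid_seq
    OWNED BY info_pozzi_hydrocarbon.assorb.oid;

-- Start the sequence after the highest oid already loaded.
SELECT setval('info_pozzi_hydrocarbon.assorb_oid_seq',
              COALESCE((SELECT max(oid) FROM info_pozzi_hydrocarbon.assorb), 1));

ALTER TABLE info_pozzi_hydrocarbon.assorb
    ALTER COLUMN oid SET DEFAULT nextval('info_pozzi_hydrocarbon.assorb_oid_seq');
```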

 

The disadvantage of this approach lies in error detection and correction.

If there are errors in the CSV file, which there often are since such
files are sometimes edited manually, it will be more time-consuming to
correct, depending on the amount of data in the CSV file.

 

Hope this helps. 

 

 

Best Regards

 

 

Farjad Farid

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On
Behalf Of Eugenio Trumpy
Sent: 23 September 2014 13:14
To: farjad.fa...@checknetworks.com;
pgsql-general@postgresql.org
Subject: Re: [GENERAL] csv import error

 

[quoted message bodies snipped; see the original posts above]


Re: [GENERAL] wide row insert via Postgres jdbc driver

2014-09-23 Thread Thomas Kellerer
Sameer Kumar schrieb am 23.09.2014 um 07:24:
 I am working with a vendor and planning to deploy their application
 on PostgreSQL as backend. They have cautioned the customer that
 PostgreSQL's jdbc driver v9.1 (build 900) has issues which causes
 deadlocks while wide record inserts.

Can you be a bit more explicit? 
I have never heard the term "wide record inserts" before.


 Is there any such known problem which anyone else has encountered in
 this regards? Has there been any improvements in future
 builds/releases on this aspect of PostgreSQL drivers/connectors?

I have never seen any deadlocks in Postgres that were caused by the driver or 
Postgres itself.

Deadlocks are almost always caused by sloppy programming. 

My guess is that this vendor initially supported some other database 
with less strict transaction handling, or even a DBMS where they couldn't 
(or didn't want to) use transactions.



 





Re: [GENERAL] wide row insert via Postgres jdbc driver

2014-09-23 Thread Bill Moran
On Tue, 23 Sep 2014 14:12:22 +0200
Thomas Kellerer spam_ea...@gmx.net wrote:

 Sameer Kumar schrieb am 23.09.2014 um 07:24:
  I am working with a vendor and planning to deploy their application
  on PostgreSQL as backend. They have cautioned the customer that
  PostgreSQL's jdbc driver v9.1 (build 900) has issues which causes
  deadlocks while wide record inserts.
 
 Can you be a bit more explicit? 
 I have never heard the term wide record inserts before 

I've heard these terms before.  "Wide" generally means at least one of the
following:

* A large number of columns
* At least 1 column with a lot of data

Of course, both of those criteria are incredibly subjective.  How many columns
is a large number?  How much data is a lot?

It generally boils down to the fact that pushing a lot of data (whether many
columns or a single column with a lot of data) takes longer than pushing small
amounts of data (big surprise), and as a result the statistical chance that
the operation will collide with a conflicting operation (causing, in this case,
a deadlock) is increased.
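A minimal sketch of such a collision (hypothetical table t with rows id = 1 and id = 2, not from the thread): two sessions that update the same rows in opposite order will deadlock regardless of the driver:

```sql
-- Session 1                              -- Session 2
BEGIN;                                    BEGIN;
UPDATE t SET v = 1 WHERE id = 1;          UPDATE t SET v = 2 WHERE id = 2;
UPDATE t SET v = 1 WHERE id = 2;  -- waits on session 2's lock
                                          UPDATE t SET v = 2 WHERE id = 1;
                                          -- ERROR: deadlock detected
```

The slower each statement is (wide rows, big values), the longer the window in which the other session can take its first lock, so "wide" inserts merely widen a race that the application already has.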

As you mention, it's usually something that people with poorly written
applications complain about.  I.e. "our application works just fine in our
test environment, so your server must be too slow ... get a faster server."

Of course, the real problem is that the application was written with a large
number of braindead assumptions (things will always be fast; our tests never
encounter deadlocks, so they can't happen; etc.).  I've dealt directly with
this back in my consulting days: clients who insisted that the correct way to
fix their crashes was to buy faster hardware.  The annoying thing is that such
an approach _appears_ to fix the problem, because the faster hardware makes
the problem less likely to occur, and in the minds of people who don't
understand concurrent programming, that means it's "fixed".

The amount of really badly written software out there is a very large number.

-- 
Bill Moran
I need your help to succeed:
http://gamesbybill.com




Re: [GENERAL] Employee modeling question

2014-09-23 Thread Nelson Green
On Fri, Sep 5, 2014 at 11:39 AM, Rich Shepard rshep...@appl-ecosys.com
wrote:

 On Fri, 5 Sep 2014, John McKown wrote:

  They are excellent. They are _not_ for beginners. The For Smarties
 portion is not just a play against the For Dummies series. Joe does some
 high powered SQL.


   For the purpose of developing an employee schema with departments for
 some, his SQL For Smarties provides very sound advice on how to proceed.
 Having separate company, department, and employee tables is a given. But,
 you might need many-to-many tables to keep track of the complex
 relationships. This is all covered in the chapters on DDL (Data Definition
 Language) and is separate from the chapters on DML (Data Manipulation
 Language).

 Good luck,

 Rich


Thank you Rich, and apologies for the delay in getting back to this.
Sometimes
my job has a bad habit of getting in the way of getting work done.

I've been through the first four or five chapters of the SQL For Smarties
book,
and I've glanced at the other two books we have, but I didn't find anything
especially enlightening (and I was surprised at the number of typographical
errors in the content). I have also read through the other references I was
given.

Although I have not completely hashed this whole situation out, I am leaning
towards an exclusivity constraint on department and business, where one of
the
columns will be required to be null, and a check constraint on the business
column that will not allow businesses that are referenced in the department
table. This seems to meet all of my rules and requirements, and will also
work
in the case of external contracts applying to a business or a department.
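A sketch of that kind of exclusivity constraint (the table and column names are hypothetical, not from the thread):

```sql
CREATE TABLE contract (
    contract_id   serial PRIMARY KEY,
    business_id   integer,   -- would reference the business table
    department_id integer,   -- would reference the department table
    -- exactly one of the two parents must be set
    CONSTRAINT contract_excl
        CHECK ((business_id IS NULL) <> (department_id IS NULL))
);
```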

If this plan changes dramatically I will update this posting, and I do
appreciate the advice that I received from you and everyone else. I
especially
appreciate being given pointers to information sources as opposed to
receiving
pat answers without explanations. Reading and learning will prove much more
beneficial in the long run.

Well, back to work. Gotta go explain to someone why two separate and
unrelated tables won't model their multi-step workflow too well (OK, not at
all really). I just love how people who can populate a spreadsheet think
that makes them data professionals.

Nelson







Re: [GENERAL] csv import error

2014-09-23 Thread Eugenio Trumpy
Thank you Farjad,

following your suggestion I'm now able to import my csv, and the serial is 
automatically inserted in the new records.

Bye

E.
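
Farjad's first suggestion (omit the serial column and its value from the
insert, and let PostgreSQL fill it from the sequence) can be sketched as
follows; the table and column names here are hypothetical, not from
Eugenio's schema:

```sql
-- A table with a serial key (hypothetical names).
CREATE TABLE items (
    id   serial PRIMARY KEY,
    name text
);

-- List only the non-serial columns; "id" is filled from its sequence.
INSERT INTO items (name) VALUES ('first'), ('second');

-- Bulk loads work the same way with an explicit column list:
-- COPY items (name) FROM '/path/to/data.csv' WITH (FORMAT csv);
```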

From: farjad.fa...@checknetworks.com
To: frippe12...@hotmail.com; pgsql-general@postgresql.org
Subject: RE: [GENERAL] csv import error
Date: Tue, 23 Sep 2014 13:38:27 +0100

Hi Eugenio,

How you go about resolving this depends on your project and the size of the data.

For a simple case, which this seems to be, simply don't insert the serial 
column: remove both the name of the column and its corresponding value from the 
insert statement. PostgreSQL will fill it in automatically.

If the size of the data is larger, or there are dependencies on the OID value 
elsewhere in the DB, then recreate the target table without making the column 
'serial', but actually creating all columns and inserting all column values. 
Once the operation is done, manually create the serial sequence and attach it 
to the OID column. The disadvantage of this is in error detection and 
correction: if there are errors in the CSV files, which there often are since 
they are sometimes edited manually, it will be more time consuming to correct, 
depending on the size of the data in the CSV file.

Hope this helps.

Best Regards
Farjad Farid

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Eugenio Trumpy
Sent: 23 September 2014 13:14
To: farjad.fa...@checknetworks.com; pgsql-general@postgresql.org
Subject: Re: [GENERAL] csv import error

Hi,

I adjusted the csv, changing the semicolons to commas and adding quotes around 
character values.
Now it seems to be better, but I got another error. This last one is due to the 
fact that the oid column in my postgresql table is a serial, and I did not fill 
it in in the csv because I was thinking it would have been 
filled in automatically during the data import.

What do you suggest on this regard? How can I solve?

E.

From: farjad.fa...@checknetworks.com
To: frippe12...@hotmail.com; pgsql-general@postgresql.org
Subject: RE: [GENERAL] csv import error
Date: Tue, 23 Sep 2014 12:03:15 +0100

Hi,

Quotation marks should be around both the name of each and every column and 
their values. As the column names are all lower case, you may wish to remove 
all quotation marks, which is much easier; but character values need quotation 
marks. Also replace the semicolon used as the column-separation character with 
commas for the SQL insert statement. What has happened here is that the values 
from the CSV are inserted directly into the SQL; it needs a little bit of extra 
work.

Hope this helps.

Best Regards

Farjad Farid 
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Eugenio Trumpy
Sent: 23 September 2014 11:26
To: pgsql-general@postgresql.org
Subject: [GENERAL] csv import error

Hello,

I'm trying to import data using a csv file,
but I got an error:

 
ERROR:  column "key;daprof;aprof;tipo;valore;note;oid;unit_mis" of relation 
"assorb" does not exist
LINE 1: INSERT INTO info_pozzi_hydrocarbon.assorb (key;daprof;a...
                                                   ^
In statement:

INSERT INTO info_pozzi_hydrocarbon.assorb 
(key;daprof;aprof;tipo;valore;note;oid;unit_mis) VALUES 
('1001334;19.1;21;A;6;;;11')
My sql statement for the table that I would like to populate is:

-- Table: info_pozzi_hydrocarbon.assorb

-- DROP TABLE info_pozzi_hydrocarbon.assorb;

CREATE TABLE info_pozzi_hydrocarbon.assorb
(
  key integer,
  daprof double precision,
  aprof double precision,
  tipo character(1),
  valore double precision,
  note character(254),
  oid serial NOT NULL,
  unit_mis smallint,
  CONSTRAINT assorb_pk PRIMARY KEY (oid),
  CONSTRAINT assorb_fk FOREIGN KEY (key)
  REFERENCES pozzi (key) MATCH SIMPLE
  ON UPDATE CASCADE ON DELETE CASCADE,
  CONSTRAINT unit_m_fk FOREIGN KEY (unit_mis)
  REFERENCES info_cod.unita (unita) MATCH SIMPLE
  ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (
  OIDS=FALSE
);

the csv file is like the following (it is only a part):
key;daprof;aprof;tipo;valore;note;oid;unit_mis
1001334;19.1;21;A;6;;;11
1001334;93.5;94;A;30;;;11
1001334;94;115;A;20;;;11
1001334;154.5;255;A;644;;;11
1001334;273;282;A;4;;;11
1001334;298;309;A;7;;;11
1001334;432;1224;P;1850;;;11
4277001;121;901;A;397;ALLARGAMENTO FORO;;11
4277001;121;901;A;96;PERFORAZIONE;;11


The field order is the same, and the decimal separator is set as dot. I also 
set the delimiter for fields.
I don't understand what the problem was. 

Can somebody help me?

Eugenio   
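
For reference, a COPY sketch matching the table definition above and a
semicolon-delimited file. This assumes the empty oid field is first removed
from the file (COPY cannot skip a column that is present in the file), and
the file path is hypothetical:

```sql
-- File layout assumed: key;daprof;aprof;tipo;valore;note;unit_mis
-- (the empty oid column removed), with a header line.
COPY info_pozzi_hydrocarbon.assorb
     (key, daprof, aprof, tipo, valore, note, unit_mis)
FROM '/path/to/assorb.csv'
WITH (FORMAT csv, DELIMITER ';', HEADER true);
```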

Re: [GENERAL] Installing Postgresql on Linux Friendlyarm

2014-09-23 Thread Vick Khera
On Tue, Sep 23, 2014 at 2:29 AM, Abelard Hoffman
abelardhoff...@gmail.com wrote:
 Typically, applications open the port as root and then change to low
 privilege for the actual service.
 See also this thread:

 http://stackoverflow.com/questions/413807/is-there-a-way-for-non-root-processes-to-bind-to-privileged-ports-1024-on-l

Postgres does not use a privileged port, so this is not necessary.
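
For completeness, the pattern Abelard describes (bind while privileged, then
drop to an unprivileged uid/gid before serving) can be sketched as below. As
Vick notes, Postgres itself does not need this; the sketch binds an ephemeral,
unprivileged port so it also runs without root, where a real service would
bind e.g. port 80 first:

```python
import os
import socket

# Open the listening socket first (as root this could be a port < 1024).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("127.0.0.1", 0))   # port 0: kernel picks a free unprivileged port
s.listen(5)

# Then drop privileges before handling any client data.
if os.getuid() == 0:
    os.setgid(65534)       # "nobody" group on many systems
    os.setuid(65534)       # drop group first, then user

port = s.getsockname()[1]
print("listening on port", port)
```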


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Question about Vacuum and Replication failures in 9.3.5

2014-09-23 Thread Adrian Klaver

On 09/22/2014 10:21 PM, Joel Avni wrote:

It's version 9.3.5. What's interesting is that the table grew in size after
the vacuum full, which I did to try to see why the auto vacuum wasn't
working.


Please do not top post, it makes it difficult to follow the thread.


However, after I stopped the PostgreSQL slave instance, vacuum full
did result in a much, much smaller size, as expected. So there appears to
be some interaction between vacuuming on the master and a slave that is
trying to do streaming replication but failing because the requested WALs
have been cycled out. I am not entirely sure that's the case, but I think
I observed it twice. Is it that the master can't clean up tuples that
might be visible at the slave's last replayed transaction? I didn't think
the master was aware of the slave's state, which is why locks can't be
coordinated between the master and slave.


As far as I know the master does not care about the slave (unless you are 
using synchronous replication). It generates and recycles WALs and it is up 
to the slave to keep up. Best guess is that whatever you were doing was 
generating and recycling WALs faster than the slave was able to process 
them.


For more information on the care and feeding of WALs see below:

Configuration:
http://www.postgresql.org/docs/9.3/interactive/runtime-config-wal.html

Explanation:
http://www.postgresql.org/docs/9.3/interactive/wal.html

in particular:

http://www.postgresql.org/docs/9.3/interactive/wal-configuration.html

Best guess is you have wal_keep_segments set too low:

http://www.postgresql.org/docs/9.3/interactive/runtime-config-replication.html

Another option is to set up a WAL archive so the slave could fetch from 
there also. See the Configuration link above.
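
The two options Adrian mentions can be sketched in postgresql.conf; the
values and the archive path are illustrative, not tuned recommendations:

```
# Retain extra WAL segments so streaming standbys can catch up:
wal_keep_segments = 256

# Or keep a WAL archive the standby can fall back to
# (restore_command on the standby would point at the same directory):
archive_mode = on
archive_command = 'cp %p /mnt/wal_archive/%f'
```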





On 9/22/14, 4:04 PM, Adrian Klaver adrian.kla...@aklaver.com wrote:


On 09/22/2014 01:42 PM, Joel Avni wrote:

I noticed that tables on my master PostgreSQL server were growing, and
running vacuum full analyze on them actually made them even bigger.


First what version of Postgres are you using?

Second VACUUM FULL is usually not recommended for the reason you found
out and which is documented here:

http://www.postgresql.org/docs/9.3/interactive/sql-vacuum.html

FULL

 Selects full vacuum, which can reclaim more space, but takes much
longer and exclusively locks the table. This method also requires extra
disk space, since it writes a new copy of the table and doesn't release
the old copy until the operation is complete. Usually this should only
be used when a significant amount of space needs to be reclaimed from
within the table.




At the same time, a slave PostgreSQL server had fallen behind in trying
to replicate, and was stuck constantly looping over 'started streaming
WAL from primary at ...' and 'requested WAL segment ... has already been
removed'. Once I stopped running the slave instance, I was able to
manually vacuum the tables, and it appears that auto vacuum is now able
to vacuum as well. One table (for instance) dropped from 10Gb down to
330Mb after this operation. I don't see anything about auto vacuum not
being able to acquire locks while the slave wasn't able to replicate. I
am unclear why a slave trying to continue streaming would block the auto
vacuum, or is something else at play?


My guess is it was related to the locks your VACUUM FULL was taking, though
it would require more information on what all the various parts were doing
over the time frame.



I did check, and no base backups were in progress at the time this
occurred.

Thank you,
Joel Avni




--
Adrian Klaver
adrian.kla...@aklaver.com






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Unexpected syntax error when using JSON -> in 9.3.5

2014-09-23 Thread Seamus Abshere

hi all,

This part looks correct and expected:


$ psql foobar
psql (9.3.5)
Type "help" for help.

foobar=# select coalesce('{}'::json->'a', 1);
ERROR:  COALESCE types json and integer cannot be matched
LINE 1: select coalesce('{}'::json->'a', 1);
 ^


but check it out when I use a string instead of an integer:


foobar=# select coalesce('{}'::json->'a', 'b');
ERROR:  invalid input syntax for type json
LINE 1: select coalesce('{}'::json->'a', 'b');
 ^
DETAIL:  Token "b" is invalid.
CONTEXT:  JSON data, line 1: b


That seems like the wrong error - shouldn't it be the equiv of [...] 
json and string cannot be matched?


Thanks,
Seamus


--
Seamus Abshere, SCEA
https://github.com/seamusabshere


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Will there be a JSON operator like -> but returning numeric?

2014-09-23 Thread Seamus Abshere

hi,

I've got use cases like


array_remove(array_agg((a->'b')::float), NULL)


It would be nice to replace (a->'b')::float with something like


a->^'b'


that directly returned a numeric... is that in the cards?

Thanks,
Seamus


--
Seamus Abshere, SCEA
https://github.com/seamusabshere


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Where art thou, plpython2.dll? (EDB installer)

2014-09-23 Thread Daniel Lenski
Craig Ringer craig at 2ndquadrant.com writes:

 I've had some issues with how the procedural languages are packaged in
 the Windows installer for a while, but I was very surprised to see that
 plpython2 appears to be entirely absent in 9.3.

 It doesn't seem to be provided via EDB's StackBuilder app either.

 What's going on? It looks like it was dropped in 9.1.

 PL/Python 2 and PL/Python 3 are different things. You can't just upgrade
 trivially from plpython2 to plpython3.

 Python 2 and Python 3 install side-by-side without issues.

 Why isn't plpython2.dll built and included?

 It's good to see that the required procedural language runtime versions
 are now included in installation-notes.html in the installer, though.
 Thanks for that.

The absence of plpython2 support in Windows builds of 9.3 led to quite a
bit of confusion for me especially because the docs seem to make it quite
clear that PL/Python 2 is intended to be the default version in standard
builds of PostgreSQL both now and for the foreseeable future:
http://www.postgresql.org/docs/9.3/static/plpython-python23.html

(By the way, Craig Ringer and I came across this in a discussion on
StackOverflow where we both confirmed that plpython2.dll is absent from the
stock installer for PG 9.3. A number of other threads suggest that we're
not the only ones who've been stumped by this omission.
http://stackoverflow.com/questions/24216627/how-to-install-pl-python-on-postgresql-9-3-x64-windows-7/24218449?noredirect=1#comment40700651_24218449
)

Thanks,
Dan Lenski


Re: [GENERAL] Unexpected syntax error when using JSON -> in 9.3.5

2014-09-23 Thread David G Johnston
seamusabshere wrote
 hi all,
 
 This part looks correct and expected:
 
 $ psql foobar
 psql (9.3.5)
 Type "help" for help.

 foobar=# select coalesce('{}'::json->'a', 1);
 ERROR:  COALESCE types json and integer cannot be matched
 LINE 1: select coalesce('{}'::json->'a', 1);
  ^
 
 but check it out when I use a string instead of an integer:
 
 foobar=# select coalesce('{}'::json->'a', 'b');
 ERROR:  invalid input syntax for type json
 LINE 1: select coalesce('{}'::json->'a', 'b');
  ^
 DETAIL:  Token "b" is invalid.
 CONTEXT:  JSON data, line 1: b
 
 That seems like the wrong error - shouldn't it be the equiv of [...] 
 json and string cannot be matched?

The problem here is that the literal  'b'  is of unknown type, not a
string (varchar or text in PostgreSQL), and so in the context of COALESCE
the parser (planner?) attempts to resolve the unknown type so that the
function call succeeds.   

In the first example it cannot convert a literal  1  to json and its
attempt to do so triggers the type mismatch error.

In the  'b'  example there is an input routine that converts a ''-literal
to json so the parser uses that and that input routine throws the error you
see.

This isn't implicit casting per-se since if we didn't attempt to resolve the
unknown type the language would be nearly impossible to use.

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Unexpected-syntax-error-when-using-JSON-in-9-3-5-tp5820141p5820149.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Unexpected syntax error when using JSON -> in 9.3.5

2014-09-23 Thread Tom Lane
Seamus Abshere sea...@abshere.net writes:
 foobar=# select coalesce('{}'::json->'a', 1);
 ERROR:  COALESCE types json and integer cannot be matched

Right ...

 but check it out when I use a string instead of an integer:

 foobar=# select coalesce('{}'::json->'a', 'b');
 ERROR:  invalid input syntax for type json

This is expected.  The parser correctly determines that the json -> text
operator returns json, and it knows that the two arguments of coalesce
must resolve as the same type, so it tries to coerce the untyped literal
to json --- which means passing that string to the json input function.
A quoted literal is not automatically a string in Postgres, it can get
resolved as some other type if required by context.  In the first example
the integer literal has a well-defined type, but bare quoted literals are
not considered to have a known type.

regards, tom lane
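
Consistent with this explanation, either giving the extracted value a
determinate type or using the text-returning ->> operator makes both
examples succeed (an illustrative sketch, not from the original thread):

```sql
-- Cast the extracted json value to text so both COALESCE arguments agree:
SELECT coalesce(('{}'::json -> 'a')::text, 'b');

-- Or use ->>, which returns text directly:
SELECT coalesce('{}'::json ->> 'a', 'b');
```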


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Will there be a JSON operator like -> but returning numeric?

2014-09-23 Thread Tom Lane
Seamus Abshere sea...@abshere.net writes:
 I've got use cases like
 array_remove(array_agg((a->'b')::float), NULL)
 It would be nice to replace (a->'b')::float with something like
 a->^'b'
 that directly returned a numeric... is that in the cards?

I believe we discussed this, along with variants to return boolean
etc, and concluded that it was not really worth doing.  For one
thing, it's hard to see how to pick reasonably mnemonic operator
names.  The cast syntax may be a bit more verbose but at least
it's pretty self-documenting.

Of course, you're free to create custom operators or functions
for your own use ...

regards, tom lane
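
The custom-operator route Tom mentions could look like the sketch below;
the ->^ operator name and the helper function are hypothetical, not a
shipped API:

```sql
-- ->> returns text; cast it to double precision inside a helper function.
CREATE FUNCTION json_get_float(json, text) RETURNS double precision
    LANGUAGE sql IMMUTABLE
    AS $$ SELECT ($1 ->> $2)::double precision $$;

-- Hypothetical operator name; any unused operator name would do.
CREATE OPERATOR ->^ (
    LEFTARG  = json,
    RIGHTARG = text,
    FUNCTION = json_get_float
);

-- Usage (matching Seamus's example):
-- SELECT array_remove(array_agg(a ->^ 'b'), NULL) FROM ...;
```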


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] fmgr_oldstyle in extensions

2014-09-23 Thread Paul Ramsey
 
Hi all,

I’m trying to implement an spgist index in the PostGIS extension, which seems 
like it should work, but is thus far not working for what appear (to me) to be 
issues in the way spgist expects to pass pointers to user-defined functions.  

Right before anything happens, spgist calls a ‘config’ routine in the user 
code. This is the code from the example implementation (which is internal to 
pgsql, not a run-time add-on like postgis)  

 Datum  
 spg_quad_config(PG_FUNCTION_ARGS)
 {
 /* spgConfigIn *cfgin = (spgConfigIn *) PG_GETARG_POINTER(0); */
 spgConfigOut *cfg = (spgConfigOut *) PG_GETARG_POINTER(1); 

 cfg->prefixType = POINTOID;  
 cfg->labelType = VOIDOID; /* we don't need node labels */
 cfg->canReturnData = true;
 cfg->longValuesOK = false;
 PG_RETURN_VOID();
 }

It is called from the spgist core in the spgGetCache() function, via 

 FunctionCall2Coll(procinfo,
   index->rd_indcollation[0],
   PointerGetDatum(in),
   PointerGetDatum(cache->config));

The part the user function cares about is the pointer to cache->config. 

In the core code, the call stack to the user function goes like this:

postgres`spg_quad_config(fcinfo=0x7fff5f0faf90) + 18 at spgquadtreeproc.c:29
postgres`FunctionCall2Coll(flinfo=0x7ff59a804cc8, collation=0, 
arg1=140734788252568, arg2=140692835814944) + 150 at fmgr.c:1327
 postgres`spgGetCache(index=0x00010254be68) + 220 at spgutils.c:71

So, spgGetCache to FunctionCall2Coll to spg_quad_config.

In my user-defined version of the same thing (I just copied the C code and 
wrote a CREATE OPERATOR CLASS for it) the call stack is this

postgis-2.2.so`gserialized_spgist_quadtree_2d_config(fcinfo=0x7fff5f0fb398) 
+ 30 at gserialized_spgist_2d.c:60
postgres`fmgr_oldstyle(fcinfo=0x7fff5f0faf90) + 424 at fmgr.c:678
postgres`FunctionCall2Coll(flinfo=0x7ff59a039cc8, collation=0, 
arg1=140734788252568, arg2=140692827643424) + 150 at fmgr.c:1327
postgres`spgGetCache(index=0x00010254be68) + 220 at spgutils.c:71

So, spgGetCache to FunctionCall2Coll to fmgr_oldstyle to 
gserialized_spgist_quadtree_2d_config! 

On the way through fmgr_oldstyle things go very much awry and the 
gserialized_spgist_quadtree_2d_config doesn’t get called with a pointer to an 
fcinfo, but with a pointer to the block of memory occupied by the first 
argument, and from there things naturally stop working.

So, I’m wondering what I could have done that is causing my calls to route 
through fmgr_oldstyle instead of the usual path?

P.

--  
Paul Ramsey
http://cleverelephant.ca  
http://postgis.net




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] fmgr_oldstyle in extensions

2014-09-23 Thread Andres Freund
Hi Paul,

On 2014-09-23 09:55:32 -0700, Paul Ramsey wrote:
 I’m trying to implement an spgist index in the PostGIS extension, which seems 
 like it should work, but is thus far not working for what appear (to me) to 
 be issues in the way spgist expects to pass pointers to user-defined 
 functions.  
 
 Right before anything happens, spgist calls a ‘config’ routine in the user 
 code. This is the code from the example implementation (which is internal to 
 pgsql, not a run-time add-on like postgis)  
 
  Datum  
  spg_quad_config(PG_FUNCTION_ARGS)
  {
  /* spgConfigIn *cfgin = (spgConfigIn *) PG_GETARG_POINTER(0); */
  spgConfigOut *cfg = (spgConfigOut *) PG_GETARG_POINTER(1); 
 
  cfg->prefixType = POINTOID;  
  cfg->labelType = VOIDOID; /* we don't need node labels */
  cfg->canReturnData = true;
  cfg->longValuesOK = false;
  PG_RETURN_VOID();
  }
 
 It is called from the spgist core in the spgGetCache() function, via 
 
  FunctionCall2Coll(procinfo,
    index->rd_indcollation[0],
    PointerGetDatum(in),
    PointerGetDatum(cache->config));
 
 The part the user function cares about is the pointer to cache->config. 
 
 In the core code, the call stack to the user function goes like this:
 
 postgres`spg_quad_config(fcinfo=0x7fff5f0faf90) + 18 at 
 spgquadtreeproc.c:29
 postgres`FunctionCall2Coll(flinfo=0x7ff59a804cc8, collation=0, 
 arg1=140734788252568, arg2=140692835814944) + 150 at fmgr.c:1327
  postgres`spgGetCache(index=0x00010254be68) + 220 at spgutils.c:71
 
 So, spgGetCache to FunctionCall2Coll to spg_quad_config.
 
 In my user-defined version of the same thing (I just copied the C code and 
 wrote a CREATE OPERATOR CLASS for it) the call stack is this
 
 postgis-2.2.so`gserialized_spgist_quadtree_2d_config(fcinfo=0x7fff5f0fb398)
  + 30 at gserialized_spgist_2d.c:60
 postgres`fmgr_oldstyle(fcinfo=0x7fff5f0faf90) + 424 at fmgr.c:678
 postgres`FunctionCall2Coll(flinfo=0x7ff59a039cc8, collation=0, 
 arg1=140734788252568, arg2=140692827643424) + 150 at fmgr.c:1327
 postgres`spgGetCache(index=0x00010254be68) + 220 at spgutils.c:71
 
 So, spgGetCache to FunctionCall2Coll to fmgr_oldstyle to 
 gserialized_spgist_quadtree_2d_config! 
 
 On the way through fmgr_oldstyle things go very much awry and the 
 gserialized_spgist_quadtree_2d_config doesn’t get called with a pointer to an 
 fcinfo, but with a pointer to the block of memory occupied by the first 
 argument, and from there things naturally stop working.
 
 So, I’m wondering what I could have done that is causing my calls to route 
 through fmgr_oldstyle instead of the usual path?

You forgot to add a PG_FUNCTION_INFO_V1(yourfunc); for the function.

I really, really think we should get rid of v0 functions. I've seen
errors like this far too many times, and they really don't buy us much.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
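
In the extension source, Andres's fix amounts to declaring the function with
the V1 calling convention. A sketch follows (the function name is
hypothetical, and the fragment assumes the PostgreSQL server headers, so it
is not standalone): without the PG_FUNCTION_INFO_V1 macro, the server
assumes the old "version 0" convention and routes the call through
fmgr_oldstyle, producing the mismatched arguments Paul observed.

```c
#include "postgres.h"
#include "fmgr.h"
#include "access/spgist.h"
#include "catalog/pg_type.h"

PG_MODULE_MAGIC;

/* This macro marks the function as using the V1 calling convention,
 * so fcinfo is passed through intact instead of via fmgr_oldstyle. */
PG_FUNCTION_INFO_V1(my_spgist_config);

Datum
my_spgist_config(PG_FUNCTION_ARGS)
{
    spgConfigOut *cfg = (spgConfigOut *) PG_GETARG_POINTER(1);

    cfg->prefixType = POINTOID;
    cfg->labelType = VOIDOID;
    cfg->canReturnData = true;
    cfg->longValuesOK = false;
    PG_RETURN_VOID();
}
```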


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] fmgr_oldstyle in extensions

2014-09-23 Thread Paul Ramsey
Oh vey, I knew after spending a half-day walking through the debugger it would 
turn out to be a trivial mistake on my part. This is always how it is when the 
Magic Fails to Work :) Thanks so much for the help! (and now I know a lot more 
about the function manager (well, the old one)).

P

-- 
Paul Ramsey
http://cleverelephant.ca
http://postgis.net

On September 23, 2014 at 10:02:33 AM, Andres Freund (and...@2ndquadrant.com) 
wrote:

You forgot to add a PG_FUNCTION_INFO_V1(yourfunc); for the function. 


[GENERAL] deadlock of lock-waits (on transaction and on tuple) using same update statement

2014-09-23 Thread Andrej Vanek
Hi,

My application runs many concurrent sessions with the same transaction code
starting with an update statement.
I would expect locking and serialization of those transactions. But I get
unexpected deadlocks.
As opposed to http://momjian.us/main/writings/pgsql/locking.pdf page 84
(waits for ShareLock on transaction only),
my case looks different:

ERROR:  deadlock detected
DETAIL:  Process 6973 waits for ShareLock on transaction 318396117; blocked
by process 11039.
  ^^
Process 11039 waits for ExclusiveLock on tuple (4,9) of relation
16416 of database 16417; blocked by process 6973.
  ^^
Process 6973: update AELDATA set LOCK_COLUMN = 1 where  (SBO_GRP =
'15')
Process 11039: update AELDATA set LOCK_COLUMN = 1 where  (SBO_GRP =
'15')

Is this an expected behaviour, or maybe a bug?

Thanks, Andrej
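
One pattern in which two identical multi-row UPDATEs can deadlock (an
illustration only, not a diagnosis of this report, with a hypothetical
table) is when the two sessions lock the matching rows in different orders:

```sql
-- Session A:
BEGIN;
UPDATE t SET c = 1 WHERE id = 1;   -- A locks row 1

-- Session B (concurrently):
BEGIN;
UPDATE t SET c = 1 WHERE id = 2;   -- B locks row 2
UPDATE t SET c = 1 WHERE id = 1;   -- B now waits for A

-- Session A:
UPDATE t SET c = 1 WHERE id = 2;   -- A waits for B: deadlock detected
```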
---details
case is reproducible via my application.
I was unable to reproduce it via psql sessions (locking worked fine).
I was unable to reproduce it via shell for loop with psql sessions running
same transactions (100 loops in 10 terminal windows).

postgres version: PostgreSQL 9.3.4 on x86_64-unknown-linux-gnu, compiled by
gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit

postgres log:
2014-09-22 22:21:54 CEST:yy@xx1:[6973] LOG:  duration: 0.021 ms  execute
S_2: ROLLBACK
2014-09-22 22:21:54 CEST:yy@xx1:[11039] LOG:  duration: 7.965 ms  execute
S_3: COMMIT
2014-09-22 22:21:54 CEST:yy@xx1:[6973] LOG:  duration: 0.015 ms  bind S_1:
BEGIN
2014-09-22 22:21:54 CEST:yy@xx1:[6973] LOG:  duration: 0.008 ms  execute
S_1: BEGIN
2014-09-22 22:21:54 CEST:yy@xx1:[6973] LOG:  duration: 0.122 ms  parse
unnamed: select  *  from AELDATA_READ_VIEW where  (SBO_GRP = '15')
2014-09-22 22:21:54 CEST:yy@xx1:[6973] LOG:  duration: 0.317 ms  bind
unnamed: select  *  from AELDATA_READ_VIEW where  (SBO_GRP = '15')
2014-09-22 22:21:54 CEST:yy@xx1:[6973] LOG:  duration: 0.041 ms  execute
unnamed: select  *  from AELDATA_READ_VIEW where  (SBO_GRP = '15')
2014-09-22 22:21:54 CEST:yy@xx1:[6973] LOG:  duration: 0.017 ms  bind S_2:
ROLLBACK
2014-09-22 22:21:54 CEST:yy@xx1:[6973] LOG:  duration: 0.022 ms  execute
S_2: ROLLBACK

2014-09-22 22:21:54 CEST:yy@xx1:[11039] LOG:  duration: 0.017 ms  bind S_1:
BEGIN
2014-09-22 22:21:54 CEST:yy@xx1:[11039] LOG:  duration: 0.010 ms  execute
S_1: BEGIN
2014-09-22 22:21:54 CEST:yy@xx1:[11039] LOG:  duration: 0.083 ms  parse
unnamed: update AELDATA set LOCK_COLUMN = 1 where  (SBO_GRP = '15')
2014-09-22 22:21:54 CEST:yy@xx1:[6973] LOG:  duration: 0.016 ms  bind S_1:
BEGIN
2014-09-22 22:21:54 CEST:yy@xx1:[6973] LOG:  duration: 0.013 ms  execute
S_1: BEGIN
2014-09-22 22:21:54 CEST:yy@xx1:[11039] LOG:  duration: 0.174 ms  bind
unnamed: update AELDATA set LOCK_COLUMN = 1 where  (SBO_GRP = '15')
2014-09-22 22:21:54 CEST:yy@xx1:[6973] LOG:  duration: 0.096 ms  parse
unnamed: update AELDATA set LOCK_COLUMN = 1 where  (SBO_GRP = '15')
2014-09-22 22:21:54 CEST:yy@xx1:[6973] LOG:  duration: 0.152 ms  bind
unnamed: update AELDATA set LOCK_COLUMN = 1 where  (SBO_GRP = '15')
2014-09-22 22:21:55 CEST:yy@xx1:[6973] LOG:  process 6973 detected deadlock
while waiting for ShareLock on transaction 318396117 after 1000.060 ms
2014-09-22 22:21:55 CEST:yy@xx1:[6973] STATEMENT:  update AELDATA set
LOCK_COLUMN = 1 where  (SBO_GRP = '15')
2014-09-22 22:21:55 CEST:yy@xx1:[11039] LOG:  process 11039 still waiting
for ExclusiveLock on tuple (4,9) of relation 16416 of database 16417 after
1000.038 ms
2014-09-22 22:21:55 CEST:yy@xx1:[11039] STATEMENT:  update AELDATA set
LOCK_COLUMN = 1 where  (SBO_GRP = '15')
2014-09-22 22:21:55 CEST:yy@xx1:[6973] ERROR:  deadlock detected
2014-09-22 22:21:55 CEST:yy@xx1:[6973] DETAIL:  Process 6973 waits for
ShareLock on transaction 318396117; blocked by process 11039.
Process 11039 waits for ExclusiveLock on tuple (4,9) of relation
16416 of database 16417; blocked by process 6973.
Process 6973: update AELDATA set LOCK_COLUMN = 1 where  (SBO_GRP =
'15')
Process 11039: update AELDATA set LOCK_COLUMN = 1 where  (SBO_GRP =
'15')
2014-09-22 22:21:55 CEST:yy@xx1:[6973] HINT:  See server log for query
details.
2014-09-22 22:21:55 CEST:yy@xx1:[6973] STATEMENT:  update AELDATA set
LOCK_COLUMN = 1 where  (SBO_GRP = '15')
2014-09-22 22:21:55 CEST:yy@xx1:[11039] LOG:  process 11039 acquired
ExclusiveLock on tuple (4,9) of relation 16416 of database 16417 after
1000.224 ms
2014-09-22 22:21:55 CEST:yy@xx1:[11039] STATEMENT:  update AELDATA set
LOCK_COLUMN = 1 where  (SBO_GRP = '15')
2014-09-22 22:21:55 CEST:yy@xx1:[11039] LOG:  duration: 1004.543 ms
 execute unnamed: update AELDATA set LOCK_COLUMN = 1 where  (SBO_GRP =
'15')
2014-09-22 22:21:55 CEST:yy@xx1:[11039] LOG:  duration: 0.084 ms  parse
unnamed: select AEL_ID, SBO_GRP, 

Re: [GENERAL] Where art thou, plpython2.dll? (EDB installer)

2014-09-23 Thread Nick Guenther



Quoting Seref Arikan serefari...@gmail.com:


On Tue, Sep 23, 2014 at 9:36 AM, Craig Ringer cr...@2ndquadrant.com wrote:


Hi all

I've had some issues with how the procedural languages are packaged in
the Windows installer for a while, but I was very surprised to see that
plpython2 appears to be entirely absent in 9.3.

It doesn't seem to be provided via EDB's StackBuilder app either.

What's going on? It looks like it was dropped in 9.1.



I've gone through the same journey myself. Tried to juggle dlls etc.[...]




I've struggled with plpython on OpenBSD 5.5-amd64 as well. Could it be  
related? Maybe the amount of dependencies python pulls in gets  
overwhelming and things break?



$ psql -h localhost -d postgres
psql (9.3.2)
Type "help" for help.
postgres=# create language plpython2u;
ERROR:  could not load library  
/usr/local/lib/postgresql/plpython2.so: dlopen  
(/usr/local/lib/postgresql/plpython2.so) failed: Cannot load specified  
object

postgres=# create language plperl;
CREATE LANGUAGE
postgres=#


This is strange because /usr/local/lib/postgresql/plpython2.so exists.  
Also, perl loads fine.

$ ls -l /usr/local/lib/postgresql/pl*
-rwxr-xr-x  1 root  bin  100948 Jul 31 02:05  
/usr/local/lib/postgresql/plperl.so
-rwxr-xr-x  1 root  bin  181287 Jul 31 02:05  
/usr/local/lib/postgresql/plpgsql.so
-rwxr-xr-x  1 root  bin  137951 Mar  4 12:45  
/usr/local/lib/postgresql/plpython2.so



I uninstalled all the postgres subpackages and rebuilt them from  
ports, and ended up with an identical plpython2.so, which has these  
checksums:
SHA256 (/usr/local/lib/postgresql/plpython2.so) =  
8c7ff6358d9bf0db342e3aca1762cd7c509075a6803b240291d60a21ca38198b
MD5 (/usr/local/lib/postgresql/plpython2.so) =  
bb6122f27f48f0b3672dbc79cef40eea
SHA1 (/usr/local/lib/postgresql/plpython2.so) =  
4dd79641cbad3f71466648559d74e6b0c4f174a3


Any other OpenBSD users that have run into this, here?

--
Nick Guenther
4B Stats/CS
University of Waterloo






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Where art thou, plpython2.dll? (EDB installer)

2014-09-23 Thread Alan Hodgson
On Tuesday, September 23, 2014 02:05:48 PM Nick Guenther wrote:
 I uninstalled all the postgres subpackages and rebuilt them from
 ports, and ended up with an identical plpython2.so, which has these
 checksums:
 SHA256 (/usr/local/lib/postgresql/plpython2.so) =
 8c7ff6358d9bf0db342e3aca1762cd7c509075a6803b240291d60a21ca38198b
 MD5 (/usr/local/lib/postgresql/plpython2.so) =
 bb6122f27f48f0b3672dbc79cef40eea
 SHA1 (/usr/local/lib/postgresql/plpython2.so) =
 4dd79641cbad3f71466648559d74e6b0c4f174a3
 
 Any other OpenBSD users that have run into this, here?

I'm not a current BSD user, but I'd suggest ldd'ing that library and seeing if 
it has any shared libraries it can't find, or if it references shared libraries 
that are not in whatever OpenBSD uses for a library search path (i.e. the 
equivalent of Linux's ld.so.conf), or if any of those shared libraries have 
permissions issues.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Where art thou, plpython2.dll? (EDB installer)

2014-09-23 Thread Adrian Klaver

On 09/23/2014 11:05 AM, Nick Guenther wrote:



Quoting Seref Arikan serefari...@gmail.com:


On Tue, Sep 23, 2014 at 9:36 AM, Craig Ringer cr...@2ndquadrant.com
wrote:


Hi all

I've had some issues with how the procedural languages are packaged in
the Windows installer for a while, but I was very surprised to see that
plpython2 appears to be entirely absent in 9.3.

It doesn't seem to be provided via EDB's StackBuilder app either.

What's going on? It looks like it was dropped in 9.1.



I've gone through the same journey myself. Tried to juggle dlls etc.[...]




I've struggled with plpython on OpenBSD 5.5-amd64 as well. Could it be
related? Maybe the amount of dependencies python pulls in gets
overwhelming and things break?


$ psql -h localhost -d postgres
psql (9.3.2)
Type "help" for help.
postgres=# create language plpython2u;
ERROR:  could not load library /usr/local/lib/postgresql/plpython2.so:
dlopen (/usr/local/lib/postgresql/plpython2.so) failed: Cannot load
specified object
postgres=# create language plperl;
CREATE LANGUAGE
postgres=#




What user are you running as?

plpythonu is an untrusted language so it can only be installed by a 
superuser.
Also languages are now extensions and should be installed using CREATE 
EXTENSION:


http://www.postgresql.org/docs/9.3/interactive/sql-createlanguage.html
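A minimal sketch of the extension route, run from a superuser session (assuming the plpython2u extension is actually present on the server; the smoke-test function is illustrative only):

```sql
-- plpython2u is untrusted, so this must run as a superuser
CREATE EXTENSION plpython2u;

-- hypothetical smoke test: a trivial PL/Python function
CREATE FUNCTION py_version() RETURNS text AS $$
import sys
return sys.version
$$ LANGUAGE plpython2u;

SELECT py_version();
```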




Any other OpenBSD users that have run into this, here?




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Where art thou, plpython2.dll? (EDB installer)

2014-09-23 Thread Daniel Lenski
On Tue, Sep 23, 2014 at 11:05 AM, Nick Guenther nguen...@uwaterloo.ca
wrote:
 I've struggled with plpython on OpenBSD 5.5-amd64 as well. Could it be
related? Maybe the amount of dependencies python pulls in gets overwhelming
and things break?


 $ psql -h localhost -d postgres
 psql (9.3.2)
 Type "help" for help.
 postgres=# create language plpython2u;
 ERROR:  could not load library /usr/local/lib/postgresql/plpython2.so:
dlopen (/usr/local/lib/postgresql/plpython2.so) failed: Cannot load
specified object
 postgres=# create language plperl;
 CREATE LANGUAGE
 postgres=#


 This is strange because /usr/local/lib/postgresql/plpython2.so exists.
Also, perl loads fine.
 $ ls -l /usr/local/lib/postgresql/pl*
 -rwxr-xr-x  1 root  bin  100948 Jul 31 02:05
/usr/local/lib/postgresql/plperl.so
 -rwxr-xr-x  1 root  bin  181287 Jul 31 02:05
/usr/local/lib/postgresql/plpgsql.so
 -rwxr-xr-x  1 root  bin  137951 Mar  4 12:45
/usr/local/lib/postgresql/plpython2.so


Hmmm... I doubt that this is directly related to the Windows issue, where
the shared library simply doesn't exist in the standard installation.

Can anyone confirm whether the plpython2 omission from the Windows builds
was intentional? If it is intentional, then a few simple changes would make
it easier for users like me to figure out that it's missing, not just
misconfigured:
* Make plpythonu point to plpython3u, rather than plpython2u, by default
* Omit plpython2u from the pg_pltemplate catalog, so that this:
  CREATE LANGUAGE plpython2u
  ... would give a more helpful error message:
  ERROR:  unsupported language "plpython2u"
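As a related sanity check, the set of language names a server will accept in CREATE LANGUAGE can be read straight out of pg_pltemplate (a sketch; column names per the 9.3 system catalog):

```sql
-- which PL templates does this server know about, and which
-- shared library does each one expect to load?
SELECT tmplname, tmpllibrary
  FROM pg_pltemplate
 WHERE tmplname LIKE 'plpython%';
```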

-Dan


[GENERAL] which Update quicker

2014-09-23 Thread Emi Lu

Hello list,

For a big table with more than 1,000,000 records, may I know which 
update is quicker please?


(1) update t1
  set c1 = a.c1
  from a
  where pk and
 t1.c1 <> a.c1;
 ..
  update t1
  set c_N = a.c_N
  from a
  where pk and
 t1.c_N <> a.c_N;


(2)  update t1
  set c1 = a.c1,
c2  = a.c2,
...
c_N = a.c_N
 from a
 where pk AND
   ( t1.c1 <> a.c1 OR t1.c2 <> a.c2 ... OR t1.c_N <> a.c_N )


Or other quicker way for update action?

Thank you
Emi






Re: [GENERAL] which Update quicker

2014-09-23 Thread Daniele Varrazzo
On Tue, Sep 23, 2014 at 8:35 PM, Emi Lu em...@encs.concordia.ca wrote:
 Hello list,

 For a big table with more than 1,000,000 records, may I know which update is
 quicker please?

 (1) update t1
   set c1 = a.c1
   from a
   where pk and
  t1.c1 <> a.c1;
  ..
   update t1
   set c_N = a.c_N
   from a
   where pk and
  t1.c_N <> a.c_N;


 (2)  update t1
   set c1 = a.c1,
 c2  = a.c2,
 ...
 c_N = a.c_N
  from a
  where pk AND
( t1.c1 <> a.c1 OR t1.c2 <> a.c2 ... OR t1.c_N <> a.c_N )

Definitely the second, and it produces less bloat too.


 Or other quicker way for update action?

You may express the comparison as (t1.c1, t1.c2, ... t1.cN) <> (t2.c1,
t2.c2, ... t2.cN)
It's not going to be faster but maybe it's more readable.
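Spelled out, that might look like the following (a sketch; the join column name pk is assumed, since the original post abbreviates the join condition — IS DISTINCT FROM is used instead of <> so that NULLs are compared sanely):

```sql
UPDATE t1
   SET c1  = a.c1,
       c2  = a.c2,
       c_N = a.c_N
  FROM a
 WHERE t1.pk = a.pk
   -- skip rows that would not actually change
   AND (t1.c1, t1.c2, t1.c_N) IS DISTINCT FROM (a.c1, a.c2, a.c_N);
```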


-- Daniele




Re: [GENERAL] which Update quicker

2014-09-23 Thread Steve Crawford

On 09/23/2014 12:35 PM, Emi Lu wrote:

Hello list,

For a big table with more than 1,000,000 records, may I know which 
update is quicker please?


(1) update t1
  set c1 = a.c1
  from a
  where pk and
 t1.c1 <> a.c1;
 ..
  update t1
  set c_N = a.c_N
  from a
  where pk and
 t1.c_N <> a.c_N;


(2)  update t1
  set c1 = a.c1,
c2  = a.c2,
...
c_N = a.c_N
 from a
 where pk AND
   ( t1.c1 <> a.c1 OR t1.c2 <> a.c2 ... OR t1.c_N <> a.c_N )




We don't have any info about table structures, index availability and 
usage for query optimization, whether or not the updated columns are 
part of an index, amount of memory available, disk speed, portion of t1 
that will be updated, PostgreSQL settings, etc. so it's really anyone's 
guess. A million rows is pretty modest so I was able to try a couple 
variants of update...from... on million row tables on my aging desktop 
without coming close to the 60-second mark.


*Usually* putting statements into a single transaction is better (as 
would happen automatically in case 2). Also, to the extent that a given 
tuple would have multiple columns updated you will have less bloat and 
I/O using the query that updates the tuple once rather than multiple 
times. But a lot will depend on the efficiency of looking up the 
appropriate data in a.
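One low-risk way to measure the variants on your own data is to run each inside a transaction that is rolled back (a sketch; pk is an assumed join column):

```sql
BEGIN;
EXPLAIN (ANALYZE, BUFFERS)       -- executes the update and reports timing
UPDATE t1
   SET c1 = a.c1
  FROM a
 WHERE t1.pk = a.pk
   AND t1.c1 <> a.c1;
ROLLBACK;                        -- discard the changes after measuring
```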


Cheers,
Steve








Fwd: Re: [GENERAL] Where art thou, plpython2.dll? (EDB installer)

2014-09-23 Thread Nick Guenther



On September 23, 2014 2:27:29 PM EDT, Adrian Klaver  
adrian.kla...@aklaver.com wrote:

On 09/23/2014 11:05 AM, Nick Guenther wrote:



Quoting Seref Arikan serefari...@gmail.com:


On Tue, Sep 23, 2014 at 9:36 AM, Craig Ringer

cr...@2ndquadrant.com

wrote:


Hi all

I've had some issues with how the procedural languages are packaged

in

the Windows installer for a while, but I was very surprised to see

that

plpython2 appears to be entirely absent in 9.3.

It doesn't seem to be provided via EDB's StackBuilder app either.

What's going on? It looks like it was dropped in 9.1.



I've gone through the same journey myself. Tried to juggle dlls

etc.[...]





I've struggled with plpython on OpenBSD 5.5-amd64 as well. Could it

be

related? Maybe the amount of dependencies python pulls in gets
overwhelming and things break?


$ psql -h localhost -d postgres
psql (9.3.2)
Type "help" for help.

  postgres=# create language plpython2u;

ERROR:  could not load library

/usr/local/lib/postgresql/plpython2.so:

dlopen (/usr/local/lib/postgresql/plpython2.so) failed: Cannot load
specified object
postgres=# create language plperl;
CREATE LANGUAGE
postgres=#




What user are you running as?
plpythonu is an untrusted language so it can only be installed by a
superuser.


My user account which is also the one that ran initdb and the one that  
ran postgres. I discovered this when trying to work on a project I'd  
started on arch linux where everything is peachy.



Also languages are now extensions and should be installed using CREATE
EXTENSION:

http://www.postgresql.org/docs/9.3/interactive/sql-createlanguage.html


Ah, thanks for the tip. I will change over to that. But CREATE  
EXTENSION gives the identical error:


postgres=# create extension plpython2u;
ERROR:  could not load library  
/usr/local/lib/postgresql/plpython2.so: dlopen  
(/usr/local/lib/postgresql/plpython2.so) failed: Cannot load specified  
object


 On September 23, 2014 2:25:43 PM EDT, Alan Hodgson  
ahodg...@simkin.ca wrote:


I'm not a current BSD user, but I'd suggest ldd'ing that library and
see if it
has any shared libraries it can't find or if it references shared
libraries
that are not in whatever OpenBSD uses for a library search path (ie.
the
equivalent of Linux's ld.so.conf), or if any of those shared libraries
have
permissions issues.


Ah, good point.

It turned out that the problem DLL was libpthread. I discovered this  
using LD_DEBUG and worked around it with LD_PRELOAD.



Full output is below, so that hopefully this becomes searchable:

$ uname -a
OpenBSD localhost.localdomain 5.5 GENERIC.MP#315 amd64

$ initdb data
The files belonging to this database system will be owned by user nguenthe.
This user must also own the server process.

The database cluster will be initialized with locale C.
The default database encoding has accordingly been set to SQL_ASCII.
The default text search configuration will be set to english.

Data page checksums are disabled.

creating directory data ... ok
creating subdirectories ... ok
selecting default max_connections ... 40
selecting default shared_buffers ... 128MB
creating configuration files ... ok
creating template1 database in data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... not supported on this platform
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok

WARNING: enabling trust authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

postgres -D data
or
pg_ctl -D data -l logfile start



$ LD_DEBUG=1 postgres -D ./data
rtld loading: 'postgres'
exe load offset:  0x1c200040
 flags postgres = 0x0
head postgres
obj postgres has postgres as head
examining: 'postgres'
loading: libasn1.so.20.0 required by postgres
 flags /usr/lib/libasn1.so.20.0 = 0x0
obj /usr/lib/libasn1.so.20.0 has postgres as head
loading: libroken.so.1.0 required by postgres
 flags /usr/lib/libroken.so.1.0 = 0x0
obj /usr/lib/libroken.so.1.0 has postgres as head
loading: libwind.so.1.0 required by postgres
 flags /usr/lib/libwind.so.1.0 = 0x0
obj /usr/lib/libwind.so.1.0 has postgres as head
loading: libc.so.73.1 required by postgres
 flags /usr/lib/libc.so.73.1 = 0x0
obj /usr/lib/libc.so.73.1 has postgres as head
loading: libkrb5.so.20.0 required by postgres
 flags /usr/lib/libkrb5.so.20.0 = 0x0
obj /usr/lib/libkrb5.so.20.0 has postgres as head
loading: libcrypto.so.23.0 required by postgres
 flags /usr/lib/libcrypto.so.23.0 = 0x0
obj 

[GENERAL] pg_dump: [archiver] -C and -c are incompatible options

2014-09-23 Thread Carlos Carcamo
Hello list, I need some help with this error:
...
pg_dump: saving encoding = UTF8
pg_dump: saving standard_conforming_strings = on
pg_dump: [archiver] -C and -c are incompatible options
pg_dump: *** aborted because of error

Process returned exit code 1.

I get this error when I'm trying to create a backup of some of my
database tables. I'm using pgAdmin3, and I've searched the internet
about it, but I couldn't find useful info.

I will appreciate any help, thanks in advance.

-- 
Development is not material; it is a state of mental consciousness




Re: [GENERAL] pg_dump: [archiver] -C and -c are incompatible options

2014-09-23 Thread Tom Lane
Carlos Carcamo eazyd...@gmail.com writes:
 Hello list, I need some help with this error:
 ...
 pg_dump: saving encoding = UTF8
 pg_dump: saving standard_conforming_strings = on
 pg_dump: [archiver] -C and -c are incompatible options
 pg_dump: *** aborted because of error

Like it says, you should not use both the -C and -c command-line options
to pg_dump.  I'm not sure how that translates to what you're doing in
pgAdmin3, but presumably you're selecting some incompatible options there.

You might want to gripe about it on the pgadmin mailing list (not here),
as it seems like they could provide a more user-friendly complaint.

regards, tom lane




Re: [GENERAL] pg_dump: [archiver] -C and -c are incompatible options

2014-09-23 Thread Adrian Klaver

On 09/23/2014 04:01 PM, Carlos Carcamo wrote:

Hello list, I need some help with this error:
...
pg_dump: saving encoding = UTF8
pg_dump: saving standard_conforming_strings = on
pg_dump: [archiver] -C and -c are incompatible options
pg_dump: *** aborted because of error

Process returned exit code 1.

I get this error when I'm trying to create a backup of some of my
database tables. I'm using pgAdmin3, and I've searched the internet
about it, but I couldn't find useful info.

I will appreciate any help, thanks in advance.


According to pgAdmin3 you cannot use both -C and -c at the same time.

Not sure why:
http://www.postgresql.org/docs/9.3/interactive/app-pgdump.html

-c
--clean

Output commands to clean (drop) database objects prior to 
outputting the commands for creating them. (Restore might generate some 
harmless error messages, if any objects were not present in the 
destination database.)


This option is only meaningful for the plain-text format. For the 
archive formats, you can specify the option when you call pg_restore.


-C
--create

Begin the output with a command to create the database itself and 
reconnect to the created database. (With a script of this form, it 
doesn't matter which database in the destination installation you 
connect to before running the script.) If --clean is also specified, the 
script drops and recreates the target database before reconnecting to it.


This option is only meaningful for the plain-text format. For the 
archive formats, you can specify the option when you call pg_restore.



What version of pgAdmin3 are you using?

What options are you using when running the pgAdmin3 backup tool?

I ask because pgAdmin will not issue a CREATE DATABASE by default.







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_dump: [archiver] -C and -c are incompatible options

2014-09-23 Thread David G Johnston
Tom Lane-2 wrote
 Carlos Carcamo eazyduiz@... writes:
 Hello list, I need some help with this error:
 ...
 pg_dump: saving encoding = UTF8
 pg_dump: saving standard_conforming_strings = on
 pg_dump: [archiver] -C and -c are incompatible options
 pg_dump: *** aborted because of error
 
 Like it says, you should not use both the -C and -c command-line options
 to pg_dump.  I'm not sure how that translates to what you're doing in
 pgAdmin3, but presumably you're selecting some incompatible options there.
 
 You might want to gripe about it on the pgadmin mailing list (not here),
 as it seems like they could provide a more user-friendly complaint.

What version of PostgreSQL?

9.3 documentation indicates that using both together should work.

Are you maybe hitting an older database with a newer version of pgadmin?

David J.








[GENERAL] JSONB spaces in text presentation

2014-09-23 Thread Ilya I. Ashchepkov
Hi.

Are the spaces necessary in the text representation of JSONB?
In my data the resulting text is ~12% spaces.

I'm developing web application, and want to get json-string from pg and
send it to browser without repacking.
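For illustration, jsonb re-serializes on output with a space after each ':' and ',', which is where the overhead comes from (a sketch from psql; note that jsonb also sorts object keys, so key order may differ from the input):

```sql
SELECT '{"a":1,"b":[1,2,3]}'::jsonb::text;
-- {"a": 1, "b": [1, 2, 3]}
```

As of 9.4 there is no server-side option for compact output, so repacking would have to happen client-side.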

-- 
Best regards,
Ilya Ashchepkov koc...@gmail.com


Re: [GENERAL] PROBLEM Service Alert: hostname/check_postgres_old_transaction is CRITICAL **

2014-09-23 Thread Adarsh Sharma
Thanks Jan.!! Will check and update you all the findings.

Cheers

On Sat, Sep 20, 2014 at 4:17 AM, Jan-Pieter Cornet joh...@xs4all.net
wrote:

 On 2014-9-19 20:33 , Adarsh Sharma wrote:
  It returns approx *311 MB* data to the client servers.
 
  root  netstat -p | grep 45355
  tcp0 1531648 localhost:5499  localhost:48848
  ESTABLISHED 45355/postgres
 
  root  strace -p 45355
  -- Remain stuck for *2-3 *hours
  read(306,
 \30\0kB\3\0\0\0\0\0\377\5\0\0\26\0\0\0\2\0\0\0\30\0+\264\3\0\0\0\0\0...,
 8192) = 8192
  sendto(10, 4174\0\0\0\0041535D\0\0\0\30\0\2\0\0\0\006248710\0\0\0...,
 8192, 0, NULL, 0
 
  Then after 2-3 hours it got completed automatically with below
 stacktrace :
 
  # strace -p 45355
  Process 45355 attached - interrupt to quit
  sendto(10, 4174\0\0\0\0041535D\0\0\0\30\0\2\0\0\0\006248710\0\0\0...,
 8192, 0, NULL, 0
 
  ) = -1 ECONNRESET (Connection reset by peer)

 I'd guess that the receiving program crashes. It might not be able to
 stomach the 311 MB that's being sent to it, maybe it's expanding, and
 swapping, until it's killed by the OOM killer?

 Doesn't look like a postgres problem to me. postgres is stuck trying to
 send data... try stracing the client to see what it does? (use lsof -i
 :48848 or whatever the port number of the remote is to find the pid).

 --
 Jan-Pieter Cornet joh...@xs4all.net
 Any sufficiently advanced incompetence is indistinguishable from malice.
 - Grey's Law