Re: [GENERAL] Detecting query timeouts properly

2014-09-22 Thread Albe Laurenz
Evan Martin wrote:
 I'm using PostgreSQL 9.2.8 via Npgsql 2.2.0. When a query times out it
 returns error 57014 with the message "canceling statement due to
 statement timeout". I use the message to detect the timeout and re-try
 in some cases. It seems a bit wrong to rely on the message, though - I
 presume the message text is subject to change between releases. Is there
 a better way? I cannot just rely on code 57014, because the same code is
 used when the user cancels the query (e.g. from pgAdmin), in which case I
 don't want to re-try.

I don't think that there is a good way to disambiguate that, and I agree
that parsing the error message text is not a very robust solution.

Is it really necessary to bother with the case that a statement is
explicitly canceled?  That should not happen too often, and if you have
statement_timeout set to a non-zero value, queries should not hang
indefinitely anyway, right?
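
For reference, both variants of SQLSTATE 57014 are easy to reproduce from
psql, and only the message text differs (a minimal sketch):

    -- timeout path: ERROR: canceling statement due to statement timeout
    SET statement_timeout = '100ms';
    SELECT pg_sleep(1);

    -- user-cancel path: ERROR: canceling statement due to user request
    -- (press Ctrl-C, or run SELECT pg_cancel_backend(pid) from another session)
    RESET statement_timeout;
    SELECT pg_sleep(60);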

Yours
Laurenz Albe



Re: [GENERAL] Feature request: temporary schemas

2014-09-22 Thread Pete Hollobon
On 14 September 2014 22:01, cowwoc cow...@bbs.darktech.org wrote:

 Hi,

 I'd like to propose the ability to create temporary schemas.

 Unlike temporary tables, this feature would enable developers to create a
 temporary schema once and execute CREATE TABLE statements without the
 TEMPORARY parameter.


I think you can use pg_temp for this - it's an alias to the current
session's temporary schema. If you set the current schema with SET
search_path TO pg_temp, all tables, views, functions and so on will be
created in a temporary schema.

The only downside is that you'd need to reference any function calls with
pg_temp explicitly, as pg_temp is never searched for function or operator
names (see
http://www.postgresql.org/docs/current/static/runtime-config-client.html).
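
A minimal sketch of both points (assuming a fresh session):

    SET search_path TO pg_temp;
    CREATE TABLE t (id int);    -- lands in the session's temporary schema
    CREATE FUNCTION f() RETURNS int LANGUAGE sql AS 'SELECT 1';
    SELECT f();                 -- fails: pg_temp is not searched for functions
    SELECT pg_temp.f();         -- works with explicit qualification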

I have just replied to your Stack Exchange answer too.


[GENERAL] Installing Postgresql on Linux Friendlyarm

2014-09-22 Thread Xiang Gan
Hi,

I'm a newbie here, so I'm sorry if this is posted in the wrong place.

I tried to install Postgresql 9.2.9 on Linux Friendlyarm. The Postgresql
was cross-compiled (using arm-linux-gcc 4.4.3) successfully on my desktop
Ubuntu. Then it was moved to the Linux Friendlyarm environment. When I
tried to start the Postgresql server, however, it gave me the following
errors:

LOG: could not create IPv4 socket: Permission denied
WARNING: could not create listen socket for 127.0.0.1
FATAL: could not create any TCP/IP sockets


The kernel info of my Linux FriendlyARM is as follows:
Linux FriendlyARM 2.6.35.7-FriendlyARM #1 PREEMPT Thu Mar 1 17:38:57 HKT 2012 
arm7l GNU/Linux.

Could anyone provide some help with this?
Thanks in advance!

Gerry



Re: [GENERAL] Installing Postgresql on Linux Friendlyarm

2014-09-22 Thread Adrian Klaver

On 09/22/2014 01:22 AM, Xiang Gan wrote:

 Hi,

 I'm a newbie here, so I'm sorry if this is posted in the wrong place.

 I tried to install Postgresql 9.2.9 on Linux Friendlyarm. The Postgresql
 was cross-compiled (using arm-linux-gcc 4.4.3) successfully on my desktop
 Ubuntu. Then it was moved to the Linux Friendlyarm environment. When I
 tried to start the Postgresql server, however, it gave me the following
 errors:

 LOG: could not create IPv4 socket: Permission denied
 WARNING: could not create listen socket for 127.0.0.1
 FATAL: could not create any TCP/IP sockets

 The kernel info of my Linux FriendlyARM is as follows:
 Linux FriendlyARM 2.6.35.7-FriendlyARM #1 PREEMPT Thu Mar 1 17:38:57 HKT
 2012 arm7l GNU/Linux.

 Could anyone provide some help with this?


The user you are running Postgres as does not have the permissions to create 
an IPv4 socket. I know nothing about Linux FriendlyArm, so I don't know 
where to go from here. I would say ask the FriendlyArm forum, but I see 
you have already done that.
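
If TCP sockets are simply off-limits to unprivileged users on that platform,
one thing that might be worth trying (a sketch, untested on FriendlyArm) is
to disable TCP entirely and connect over a Unix-domain socket:

    # postgresql.conf
    listen_addresses = ''            # create no TCP/IP sockets at all
    unix_socket_directory = '/tmp'   # parameter name in 9.2 and earlier

    # then connect with, e.g.:  psql -h /tmp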




Thanks in advance!

Gerry




--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Postgre SQL SHA-256 Compliance

2014-09-22 Thread Anthony Burden
validate some software with you to
ensure that all our installed PostgreSQL software meets SHA-256 compliance.
There are basically two things we are looking for:

1) All COTS software purchased as part of scheduled and budgeted
technology refreshes and upgrades must be SHA-256 compliant.

2) All DOD information systems that have been upgraded or are upgrading to
support SHA-256 compliance must continue to maintain backwards compatibility
with DOD's current SHA-1 credentials.

All the software we are using are:
PostgreSQL 8.2  8.2

Can you confirm that your software is SHA-256 Compliant?








Re: [GENERAL] Postgre SQL SHA-256 Compliance

2014-09-22 Thread Adrian Klaver

On 09/22/2014 07:24 AM, Anthony Burden wrote:

validate some software with you to
ensure that all our installed PostgreSQL software meets SHA-256 compliance.
There is basically two things we are looking for:

1) Identify all COTS software purchased as part of scheduled and budgeted
technology refreshes and upgrades must be SHA-256 compliant.

2) All DOD information systems that have been upgraded or are upgrading to
support SHA-256 compliance must continue to maintain backwards compatibility
with DOD's current SHA-1 credentials.

All the software we are using are:
PostgreSQL 8.2  8.2


First, the oldest supported release is 9.0, currently at 9.0.18.



Can you confirm that your software is SHA-256 Compliant?


Second, what does the above mean?

I found this:

http://www.acq.osd.mil/dpap/ops/docs/Public%20Briefing%20-%20DoD%20SHA-256%20Migration%2018%20Mar%202011.pdf

but my eyes quickly glazed over :) So a synopsis would be helpful.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Postgre SQL SHA-256 Compliance (UNCLASSIFIED)

2014-09-22 Thread Anthony Burden
Classification: UNCLASSIFIED
Caveats: NONE

Adrian,

Correct, I have the DOD Memo and this PPT. However, I am trying to confirm
with the vendor whether PostgreSQL 8.2 is SHA-256 compliant or not. Does that
software utilize the SHA-1/SHA-256 algorithms?

V/r,
Anthony 

-Original Message-
From: Adrian Klaver-4 [via PostgreSQL]
[mailto:ml-node+s1045698n5819918...@n5.nabble.com] 
Sent: Monday, September 22, 2014 10:36 AM
To: Burden, Anthony D CTR USARMY TRADOC (US)
Subject: Re: Postgre SQL SHA-256 Compliance

On 09/22/2014 07:24 AM, Anthony Burden wrote: 

 validate some software with you to
 ensure that all our installed PostgreSQL software meets SHA-256
compliance. 
 There is basically two things we are looking for: 
 
 1) Identify all COTS software purchased as part of scheduled and 
 budgeted technology refreshes and upgrades must be SHA-256 compliant.
 
 2) All DOD information systems that have been upgraded or are 
 upgrading to support SHA-256 compliance must continue to maintain 
 backwards compatibility with DOD's current SHA-1 credentials.
 
 All the software we are using are: 
 PostgreSQL 8.2 8.2

First the oldest supported release is 9.0, currently at 9.0.18 

 
 Can you confirm that your software is SHA-256 Compliant? 

Second what does the above mean? 

I found this: 

http://www.acq.osd.mil/dpap/ops/docs/Public%20Briefing%20-%20DoD%20SHA-256%20Migration%2018%20Mar%202011.pdf

but my eyes quickly glazed over:) So a synopsis would be helpful. 




--
Adrian Klaver
[hidden email] 








Classification: UNCLASSIFIED
Caveats: NONE









Re: [GENERAL] Postgre SQL SHA-256 Compliance

2014-09-22 Thread Albe Laurenz
Anthony Burden wrote:
 validate some software with you to
 ensure that all our installed PostgreSQL software meets SHA-256 compliance.
 There is basically two things we are looking for:
 
 1) Identify all COTS software purchased as part of scheduled and budgeted
 technology refreshes and upgrades must be SHA-256 compliant.
 
 2) All DOD information systems that have been upgraded or are upgrading to
 support SHA-256 compliance must continue to maintain backwards compatibility
 with DOD's current SHA-1 credentials.
 
 All the software we are using are:
 PostgreSQL 8.28.2
 
 Can you confirm that your software is SHA-256 Compliant?

If you mean whether an SSL database connection can use SHA-256 or not,
that depends on the OpenSSL library your PostgreSQL uses.
If your OpenSSL version supports SHA-256, so does PostgreSQL.

Yours,
Laurenz Albe





Re: [GENERAL] Postgre SQL SHA-256 Compliance

2014-09-22 Thread Paul Jungwirth
 Can you confirm that your software is SHA-256 Compliant?

Postgres's SSL certificate and key live at the values of ssl_cert_file
and ssl_key_file in your postgresql.conf. Why not point them at a
SHA-256 certificate, restart, and try it out?
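
Concretely, that test amounts to something like the following (a sketch; the
file names are placeholders, and these parameters exist as of 9.2 -- older
releases look for server.crt/server.key in the data directory):

    # postgresql.conf
    ssl = on
    ssl_cert_file = 'server-sha256.crt'   # certificate signed with SHA-256
    ssl_key_file  = 'server-sha256.key'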

Paul


-- 
_
Pulchritudo splendor veritatis.




Re: [GENERAL] Postgre SQL SHA-256 Compliance

2014-09-22 Thread Neil Tiffin
On Sep 22, 2014, at 9:46 AM, Paul Jungwirth p...@illuminatedcomputing.com 
wrote:

 Can you confirm that your software is SHA-256 Compliant?
 
 Postgres's SSL certificate and key live at the value of ssl_cert_file
 and ssl_key_file in your postgresql.conf. Why not point it at a
 SHA-256 certificate, restart, and try it out?
 
Unfortunately, that is not the way the government usually works.  The person 
requesting the info may not even have access to the system or know how to use 
the system.  This is especially true if the system is classified at any level.




Re: [GENERAL] Postgre SQL SHA-256 Compliance

2014-09-22 Thread Merlin Moncure
On Mon, Sep 22, 2014 at 9:42 AM, Albe Laurenz laurenz.a...@wien.gv.at wrote:
 Anthony Burden wrote:
 validate some software with you to
 ensure that all our installed PostgreSQL software meets SHA-256 compliance.
 There is basically two things we are looking for:

 1) Identify all COTS software purchased as part of scheduled and budgeted
 technology refreshes and upgrades must be SHA-256 compliant.

 2) All DOD information systems that have been upgraded or are upgrading to
 support SHA-256 compliance must continue to maintain backwards compatibility
 with DOD's current SHA-1 credentials.

 All the software we are using are:
 PostgreSQL 8.28.2

 Can you confirm that your software is SHA-256 Compliant?

 If you mean whether a SSL database connection can use SHA-256 or not,
 that depends on the OpenSSL library your PostgreSQL uses.
 If your OpenSSL version supports SHA-256, so does PostgreSQL.

Well, it may be more than that depending on what 'SHA-256 compliance'
means.  Postgres still uses md5 for password authentication.  This has
a significant downside: it requires endlessly explaining the actual
danger to those who are security experts but don't know the difference
between collision and preimage resistance.

For everything but password auth postgres depends on SSL and is configurable.

merlin




Re: [GENERAL] ALTER TEXT field to VARCHAR(1024)

2014-09-22 Thread Merlin Moncure
On Fri, Sep 19, 2014 at 7:16 AM, Bill Moran wmo...@potentialtech.com wrote:
 On Fri, 19 Sep 2014 09:32:09 +0200
 Marius Grama marius...@gmail.com wrote:
 Can anybody explain me what happens in the background when the alter
 statement is executed? I've tried it out on a small copy of the table (70K)
 and the operation completed in 0.2 seconds.
 Will the table be completely locked during the execution of the ALTER
 statement?

 I share Gavin's concern that you're fixing this in the wrong place.  I expect
 that you'll be better served by configuring the middleware to do the right 
 thing.

I'll pile on here: in almost 20 years of professional database
development I've never had an actual problem that was solved by
introducing or shortening a length constraint to text columns except
in cases where overlong strings violate the data model (like a two
character state code for example).  It's a database equivalent of C
programmer's disease.  Input checks from untrusted actors should
happen in the application.

merlin




Re: [GENERAL] Postgre SQL SHA-256 Compliance

2014-09-22 Thread Stephen Frost
Anthony,

* Anthony Burden (anthony.d.burden@mail.mil) wrote:
 validate some software with you to
 ensure that all our installed PostgreSQL software meets SHA-256 compliance.
 There is basically two things we are looking for:
 
 1) Identify all COTS software purchased as part of scheduled and budgeted
 technology refreshes and upgrades must be SHA-256 compliant.
 
 2) All DOD information systems that have been upgraded or are upgrading to
 support SHA-256 compliance must continue to maintain backwards compatibility
 with DOD's current SHA-1 credentials.
 
 All the software we are using are:
 PostgreSQL 8.28.2

PostgreSQL is now at version 9.3, with 9.0 being the oldest version
which is supported by PGDG (the PostgreSQL Global Development Group- aka
the PostgreSQL community).  Support for older versions may be
available from PostgreSQL support vendors- a list of vendors in North
America is available here:

http://www.postgresql.org/support/professional_support/northamerica/

 Can you confirm that your software is SHA-256 Compliant?

As mentioned elsewhere on the thread, if this question is about SHA-256
support in OpenSSL, you would need to check the OpenSSL library on your
system.  If the operating system you're running PostgreSQL on is as old
as the version of PostgreSQL you're running then I would be quite
worried that it does not support SHA-256.

Generally, I'd recommend you look to upgrade to a version of your OS
which includes a version of PostgreSQL which is currently considered
supported by the PGDG (eg: Red Hat Enterprise Linux 7 includes
PostgreSQL 9.2) and verify that the OpenSSL also supports SHA-256 (RHEL7
does).

Thanks!

Stephen




Re: [GENERAL] ALTER TEXT field to VARCHAR(1024)

2014-09-22 Thread John McKown
On Mon, Sep 22, 2014 at 10:31 AM, Merlin Moncure mmonc...@gmail.com wrote:
 On Fri, Sep 19, 2014 at 7:16 AM, Bill Moran wmo...@potentialtech.com wrote:
 On Fri, 19 Sep 2014 09:32:09 +0200
 Marius Grama marius...@gmail.com wrote:
 Can anybody explain me what happens in the background when the alter
 statement is executed? I've tried it out on a small copy of the table (70K)
 and the operation completed in 0.2 seconds.
 Will the table be completely locked during the execution of the ALTER
 statement?

 I share Gavin's concern that you're fixing this in the wrong place.  I expect
 that you'll be better served by configuring the middleware to do the right 
 thing.

 I'll pile on here: in almost 20 years of professional database
 development I've never had an actual problem that was solved by
 introducing or shortening a length constraint to text columns except
 in cases where overlong strings violate the data model (like a two
 character state code for example).  It's a database equivalent of C
 programmer's disease.  Input checks from untrusted actors should
 happen in the application.

 merlin


I do not have your experience level with databases, but if I may, I
will make an addition. Input checks should also happen in the RDBMS
server. I have learned you cannot trust end users _or_ programmers.
Most are good and conscientious. But there are a few who just aren't.
And those few seem to be very prolific in making _subtle_ errors. I had
one person who was really good at replacing every p with a [ and P
with {.


-- 
There is nothing more pleasant than traveling and meeting new people!
Genghis Khan

Maranatha! 
John McKown




Re: [GENERAL] ALTER TEXT field to VARCHAR(1024)

2014-09-22 Thread Rob Sargent

On 09/22/2014 09:40 AM, John McKown wrote:

On Mon, Sep 22, 2014 at 10:31 AM, Merlin Moncure mmonc...@gmail.com wrote:

On Fri, Sep 19, 2014 at 7:16 AM, Bill Moran wmo...@potentialtech.com wrote:

On Fri, 19 Sep 2014 09:32:09 +0200
Marius Grama marius...@gmail.com wrote:

Can anybody explain me what happens in the background when the alter
statement is executed? I've tried it out on a small copy of the table (70K)
and the operation completed in 0.2 seconds.
Will the table be completely locked during the execution of the ALTER
statement?

I share Gavin's concern that you're fixing this in the wrong place.  I expect
that you'll be better served by configuring the middleware to do the right 
thing.

I'll pile on here: in almost 20 years of professional database
development I've never had an actual problem that was solved by
introducing or shortening a length constraint to text columns except
in cases where overlong strings violate the data model (like a two
character state code for example).  It's a database equivalent of C
programmer's disease.  Input checks from untrusted actors should
happen in the application.

merlin


I do not have your experience level with data bases, but if I may, I
will make an addition. Input checks should also happen in the RDBMS
server. I have learned you cannot trust end users _or_ programmers.
Most are good and conscientious. But there are a few who just aren't.
And those few seem to be very prolific in making _subtle_ errors. Had
one person who was really good at replacing every p with a [ and P
with {


You don't want that string to get all the way to the server and fail, 
blow out a transaction and carry that joyous news back to the user who 
now has to start over completely.  Further, no mere length constraint is 
going to fix p=[.  Not to say the db cannot have the constraint (no [ 
allowed?), but a good app checks input on the fly.






Re: [GENERAL] ALTER TEXT field to VARCHAR(1024)

2014-09-22 Thread Tim Clarke
On 22/09/14 17:18, Rob Sargent wrote:
 On 09/22/2014 09:40 AM, John McKown wrote:
 On Mon, Sep 22, 2014 at 10:31 AM, Merlin Moncure mmonc...@gmail.com wrote:
 On Fri, Sep 19, 2014 at 7:16 AM, Bill Moran wmo...@potentialtech.com 
 wrote:
 On Fri, 19 Sep 2014 09:32:09 +0200
 Marius Grama marius...@gmail.com wrote:
 Can anybody explain me what happens in the background when the alter
 statement is executed? I've tried it out on a small copy of the table 
 (70K)
 and the operation completed in 0.2 seconds.
 Will the table be completely locked during the execution of the ALTER
 statement?
 I share Gavin's concern that you're fixing this in the wrong place.  I 
 expect
 that you'll be better served by configuring the middleware to do the right 
 thing.
 I'll pile on here: in almost 20 years of professional database
 development I've never had an actual problem that was solved by
 introducing or shortening a length constraint to text columns except
 in cases where overlong strings violate the data model (like a two
 character state code for example).  It's a database equivalent of C
 programmer's disease.  Input checks from untrusted actors should
 happen in the application.

 merlin

 I do not have your experience level with data bases, but if I may, I
 will make an addition. Input checks should also happen in the RDBMS
 server. I have learned you cannot trust end users _or_ programmers.
 Most are good and conscientious. But there are a few who just aren't.
 And those few seem to be very prolific in making _subtle_ errors. Had
 one person who was really good at replacing every p with a [ and P
 with {


 You don't want that string to get all the way to the server and fail,
 blow out a transaction and carry that joyous news back to the user who
 now has to start over completely.  Further, no mere length constraint
 is going to fix p=[.  Not to say the db cannot have the constraint (no
 [ allowed?), but a good app checks input on the fly.




Indeed - both is the answer; back-end (trigger) checks for safety,
front-end application polite messages for clarity and ease of use.
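
A minimal sketch of such a back-end check (the table, column and rule are
illustrative):

    CREATE FUNCTION check_username() RETURNS trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        -- reject anything outside a conservative whitelist
        IF NEW.username !~ '^[A-Za-z0-9_]+$' THEN
            RAISE EXCEPTION 'invalid username: %', NEW.username;
        END IF;
        RETURN NEW;
    END;
    $$;

    CREATE TRIGGER username_check
        BEFORE INSERT OR UPDATE ON users
        FOR EACH ROW EXECUTE PROCEDURE check_username();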

-- 
Tim Clarke





Re: [GENERAL] ALTER TEXT field to VARCHAR(1024)

2014-09-22 Thread John McKown
Sorry, I guess I wasn't being as clear as I thought. To be a bit more
precise, I really think that validation should occur _first_ at the
point of entry (for a web browser, I put in Javascript code to verify
it there as well as in the web service doing the same validation,
because some people disable Javascript as a possible security breach
vector), and then also do the same, or even more, validation in the
back end server. I.e. don't trust any step of the process which is not
under your immediate control. As the owner of the database, I want
to validate the data myself according to the proper business rules.
The application developer should also validate the input. What I don't
believe in is a "trusted" application from which I would accept data
and not validate it before updating the database. If such an
application were to exist, due to management dictum, I would audit
everything that I could to prove any corruption to the database was
caused by this "can't ever be wrong" application. Yes, I am paranoid.

On Mon, Sep 22, 2014 at 11:18 AM, Rob Sargent robjsarg...@gmail.com wrote:
 On 09/22/2014 09:40 AM, John McKown wrote:

 On Mon, Sep 22, 2014 at 10:31 AM, Merlin Moncure mmonc...@gmail.com wrote:

 On Fri, Sep 19, 2014 at 7:16 AM, Bill Moran wmo...@potentialtech.com
 wrote:

 On Fri, 19 Sep 2014 09:32:09 +0200
 Marius Grama marius...@gmail.com wrote:

 Can anybody explain me what happens in the background when the alter
 statement is executed? I've tried it out on a small copy of the table (70K)
 and the operation completed in 0.2 seconds.
 Will the table be completely locked during the execution of the ALTER
 statement?

 I share Gavin's concern that you're fixing this in the wrong place.  I
 expect
 that you'll be better served by configuring the middleware to do the right
 thing.

 I'll pile on here: in almost 20 years of professional database
 development I've never had an actual problem that was solved by
 introducing or shortening a length constraint to text columns except
 in cases where overlong strings violate the data model (like a two
 character state code for example).  It's a database equivalent of C
 programmer's disease.  Input checks from untrusted actors should
 happen in the application.

 merlin

 I do not have your experience level with data bases, but if I may, I
 will make an addition. Input checks should also happen in the RDBMS
 server. I have learned you cannot trust end users _or_ programmers.
 Most are good and conscientious. But there are a few who just aren't.
 And those few seem to be very prolific in making _subtle_ errors. Had
 one person who was really good at replacing every p with a [ and P
 with {


 You don't want that string to get all the way to the server and fail, blow
 out a transaction and carry that joyous news back to the user who now has to
 start over completely.  Further no mear length constraint is going to fix
 p=[.  Not say the db cannot have the constraint (no [ allowed?) but a good
 app checks input on the fly.






-- 
There is nothing more pleasant than traveling and meeting new people!
Genghis Khan

Maranatha! 
John McKown




Re: [GENERAL] ALTER TEXT field to VARCHAR(1024)

2014-09-22 Thread Merlin Moncure
On Mon, Sep 22, 2014 at 10:40 AM, John McKown
john.archie.mck...@gmail.com wrote:
 On Mon, Sep 22, 2014 at 10:31 AM, Merlin Moncure mmonc...@gmail.com wrote:
 I'll pile on here: in almost 20 years of professional database
 development I've never had an actual problem that was solved by
 introducing or shortening a length constraint to text columns except
 in cases where overlong strings violate the data model (like a two
 character state code for example).  It's a database equivalent of C
 programmer's disease.  Input checks from untrusted actors should
 happen in the application.

 merlin


 I do not have your experience level with data bases, but if I may, I
 will make an addition. Input checks should also happen in the RDBMS
 server. I have learned you cannot trust end users _or_ programmers.
 Most are good and conscientious. But there are a few who just aren't.
 And those few seem to be very prolific in making _subtle_ errors. Had
 one person who was really good at replacing every p with a [ and P
 with {

Sure.  The point is distinguishing things which are *demonstrably*
false (like a US VIN must be exactly 17 chars) from those that are
based on assumption (such as a cityname must be <= 50 characters).  The
former should be validated in the schema and the latter should not be.
If you're paranoid about the user submitting 100mb strings for
username and don't trust the application to deal with that, I'd
maybe consider making a domain 'safetext' which checks length on the
order of a few thousand bytes and using that instead of 'text', and use
it everywhere.  This will prevent the dba from outsmarting the
datamodel, which is a *much* bigger problem in practice than the one
length checks attempt to solve.

Domains have certain disadvantages (like no array type) -- be advised.
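
A minimal sketch of that idea (the limit is an illustrative guess):

    CREATE DOMAIN safetext AS text
        CHECK (length(VALUE) <= 4096);

    -- then use it wherever plain 'text' would otherwise appear:
    CREATE TABLE account (username safetext NOT NULL);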

merlin




Re: [GENERAL] Postgre SQL SHA-256 Compliance

2014-09-22 Thread Karsten Hilbert
On Mon, Sep 22, 2014 at 10:15:36AM -0500, Neil Tiffin wrote:

  Can you confirm that your software is SHA-256 Compliant?
  
  Postgres's SSL certificate and key live at the value of ssl_cert_file
  and ssl_key_file in your postgresql.conf. Why not point it at a
  SHA-256 certificate, restart, and try it out?
  
 Unfortunately, that is not the way the government usually
 works.  The person requesting the info may not even have
 access to the system or know how to use the system.  This is
 especially true if the system is classified at any level.

There is no need for access because the exact version is known.

Install a local PG 8.2, at home, on your laptop, and check.

Karsten
-- 
GPG key ID E4071346 @ gpg-keyserver.de
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] ALTER TEXT field to VARCHAR(1024)

2014-09-22 Thread David G Johnston
Merlin Moncure-2 wrote:
 On Mon, Sep 22, 2014 at 10:40 AM, John McKown
 john.archie.mckown@... wrote:
 On Mon, Sep 22, 2014 at 10:31 AM, Merlin Moncure mmoncure@... wrote:
 I'll pile on here: in almost 20 years of professional database
 development I've never had an actual problem that was solved by
 introducing or shortening a length constraint to text columns except
 in cases where overlong strings violate the data model (like a two
 character state code for example).  It's a database equivalent of C
 programmer's disease.  Input checks from untrusted actors should
 happen in the application.

 merlin


 I do not have your experience level with data bases, but if I may, I
 will make an addition. Input checks should also happen in the RDBMS
 server. I have learned you cannot trust end users _or_ programmers.
 Most are good and conscientious. But there are a few who just aren't.
 And those few seem to be very prolific in making _subtle_ errors. Had
 one person who was really good at replacing every p with a [ and P
 with {
 
 Sure.  The point is distinguishing things which are *demonstrably*
 false (like a US VIN must be exactly 17 chars) from those that are
 based on assumption (such as a cityname must be <= 50 characters).  The
 former should be validated in the schema and the latter should not be.
 If you're paranoid about the user submitting 100mb strings for
 username and don't trust the application to deal with that, I'd
 maybe consider making a domain 'safetext' which checks length on the
 order of a few thousand bytes and using that instead of 'text' and use
 it everywhere.  This will prevent the dba from outsmarting the
 datamodel which is a *much* bigger problem in practice than the one
 length checks attempt to solve.
 
 Domains have certain disadvantages (like no array type) -- be advised.
 
 merlin

These responses all seem beside the point.  The OP isn't concerned that
too-long data is making it into the database but rather that an unadorned
text type is functionally a CLOB which the application he is using is
treating like a document instead of a smallish text field that would be
treated like any other value.  It's like the difference between choosing
input/text or textarea in HTML.  Now, some tools distinguish between text
and varchar only and the length piece is irrelevant; but whether that
applies here I have no idea.

It might be easier to simply create a view over the table, using the desired
type (truncating the actual value if needed), and feed that view to the
reporting engine.
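
Something along these lines, for example (a sketch; table and column names
assumed):

    CREATE VIEW my_table_report AS
    SELECT substr(big_text_col, 1, 1024)::varchar(1024) AS big_text_col,
           other_col
    FROM my_table;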

In the end the two questions are:
1) does adding a length restriction cause a table rewrite?
2) what level of locking occurs while the length check is resolving?

I don't confidently know the answers to those two questions.

David J.










Re: [GENERAL] ALTER TEXT field to VARCHAR(1024)

2014-09-22 Thread Tom Lane
David G Johnston david.g.johns...@gmail.com writes:
 In the end the two questions are:
 1) does adding a length restriction cause a table rewrite?

Yes.  In principle the restriction could be checked with just a scan,
not a rewrite, but ALTER TABLE doesn't currently understand that --- and
in any case a scan could still potentially take a long time.

 2) what level of locking occurs while the length check is resolving?

AccessExclusiveLock.  This would be necessary in any case for a data type
change.
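
For what it's worth, on 9.2 and later you can get the restriction without a
type change at all, via a CHECK constraint (a sketch; names are placeholders,
and NOT VALID on CHECK constraints needs 9.2 or later, if I recall correctly):

    -- takes the lock only briefly, no scan:
    ALTER TABLE my_table
        ADD CONSTRAINT my_col_len CHECK (char_length(my_col) <= 1024) NOT VALID;

    -- scans the table, but does not rewrite it:
    ALTER TABLE my_table VALIDATE CONSTRAINT my_col_len;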

regards, tom lane




[GENERAL] large table

2014-09-22 Thread Luke Coldiron
I am trying to figure out what would have caused a table in PostgreSQL 8.4.16
to get into a state where there is only 1 live tuple, and there has only ever
been 1 tuple, but the size of the table is huge.

CREATE TABLE public.myTable (
   myColumn timestamp with time zone NOT NULL
);

Note: there is no primary key or index on this table.

CREATE OR REPLACE FUNCTION public.myFunc() RETURNS VOID AS $$
BEGIN
   UPDATE public.myTable SET myColumn = CLOCK_TIMESTAMP();
   IF NOT FOUND THEN
      INSERT INTO public.myTable(myColumn) VALUES (CLOCK_TIMESTAMP());
   END IF;
END;
$$ LANGUAGE plpgsql VOLATILE STRICT SECURITY DEFINER;

CREATE OR REPLACE FUNCTION public.wasUpdated(OUT was_updated boolean)
RETURNS BOOLEAN AS $$
BEGIN
   was_updated := COALESCE((SELECT myColumn FROM public.myTable) >
                           (CLOCK_TIMESTAMP() - INTERVAL '5 SECOND'), FALSE);
END;
$$ LANGUAGE plpgsql VOLATILE STRICT SECURITY DEFINER;

SELECT *
FROM pg_stat_all_tables
WHERE relname = 'myTable';

 relid            | 16713
 schemaname       | public
 relname          | myTable
 seq_scan         | 3991833
 seq_tup_read     | 3992001
 idx_scan         |
 idx_tup_fetch    |
 n_tup_ins        | 0
 n_tup_upd        | 3775409
 n_tup_del        | 0
 n_tup_hot_upd    | 3771173
 n_live_tup       | 949135
 n_dead_tup       | 183
 last_vacuum      |
 last_autovacuum  | 2014-09-18 11:28:47.63545+00
 last_analyze     |
 last_autoanalyze | 2014-09-18 11:27:47.134432+00
The stats are very far off with n_live_tup at 949135 when there is only a 
single row in the table. Autovacuum appears to be running on a regular basis.
SELECT *
FROM pgstattuple('public.myTable');

 table_len          | 34709504
 tuple_count        | 1
 tuple_len          | 32
 tuple_percent      | 0
 dead_tuple_count   | 105
 dead_tuple_len     | 3360
 dead_tuple_percent | 0.01
 free_space         | 30757308
 free_percent       | 88.61
The actual size of the table is around 33 MB.

The myFunc function is called every 2.5 seconds and the wasUpdated function
every 2 seconds, by separate processes.

I realize that running a VACUUM FULL or CLUSTER command on the table will
resolve the issue, but I am more interested in a root cause that explains why
this table would end up in this state. I have tried to reproduce this issue
by running the exact setup and have not been able to get the table to grow
like this example. Any plausible causes or explanations would be much
appreciated.

Luke

Re: [GENERAL] large table

2014-09-22 Thread Alan Hodgson
On Monday, September 22, 2014 11:17:05 AM Luke Coldiron wrote:
 The actual size of the table is around 33 MB.
 The myFunc function is called every 2.5 seconds and the wasUpdated function
 every 2 seconds by separate processes. I realize that running a FULL VACUUM
 or CLUSTER command on the table will resolve the issue but I am more
 interested in a root cause that explains why this table would end up in
 this state. I have tried to reproduce this issue by running the exact setup
 and have not been able to get the table to grow like this example. Any
 plausible causes or explanations would be much appreciated. Luke

I'd guess that some other process held a transaction open for a couple of
weeks, and that prevented any vacuuming from taking place.




Re: [GENERAL] large table

2014-09-22 Thread Bill Moran
On Mon, 22 Sep 2014 11:17:05 -0700
Luke Coldiron lukecoldi...@hotmail.com wrote:

 I am trying to figure out what would have caused a table in PostgreSQL
 8.4.16 to get into a state where there is only 1 live tuple, and there has
 only ever been 1 tuple, but the size of the table is huge.

 CREATE TABLE public.myTable (
    myColumn timestamp with time zone NOT NULL
 );

 Note: there is no primary key or index on this table.

 CREATE OR REPLACE FUNCTION public.myFunc() RETURNS VOID AS $$
 BEGIN
    UPDATE public.myTable SET myColumn = CLOCK_TIMESTAMP();
    IF NOT FOUND THEN
       INSERT INTO public.myTable(myColumn) VALUES (CLOCK_TIMESTAMP());
    END IF;
 END;
 $$ LANGUAGE plpgsql VOLATILE STRICT SECURITY DEFINER;

 CREATE OR REPLACE FUNCTION public.wasUpdated(OUT was_updated boolean)
 RETURNS BOOLEAN AS $$
 BEGIN
    was_updated := COALESCE((SELECT myColumn FROM public.myTable) >
                            (CLOCK_TIMESTAMP() - INTERVAL '5 SECOND'), FALSE);
 END;
 $$ LANGUAGE plpgsql VOLATILE STRICT SECURITY DEFINER;

 SELECT * FROM pg_stat_all_tables WHERE relname = 'myTable';

[snip]

 The actual size of the table is around 33 MB.
 The myFunc function is called every 2.5 seconds and the wasUpdated function 
 every 2 seconds by separate processes. 
 I realize that running a FULL VACUUM or CLUSTER command on the table will 
 resolve the issue but I am more interested in a root cause that explains why 
 this table would end up in this state. I have tried to reproduce this issue 
 by running the exact setup and have not been able to get the table to grow 
 like this example. Any plausible causes or explanations would be much 
 appreciated.
  
The UPDATE in myFunc() creates a new row every 2.5 seconds when it updates
the row.  The data from those rows is only reclaimed when a vacuum is run.
So (for example) if autovacuum only triggers a vacuum every 250 seconds,
there will be 249 rows worth of space in the table, on average.  With the
other process querying the table, it's possible that the row that it's
looking at will be something that _should_ be reclaimable, so vacuum may
not clear up all the free space.

As far as running the exact setup: if you're not getting the same results,
then your setup isn't exactly the same.  It's likely that there are things
going on in the setup you're curious about that you're not aware of, such as
additional queries on the table, additional load that causes operations to
take a little longer, thus resulting in different overlap of competing
operations, etc.

Keep in mind that a short-lived incident might have resulted in table bloat
that won't be reclaimed by autovacuum.  I.e., if autovacuum wasn't running
for a while, this table would just keep bloating; then when you start
autovacuum, it will maintain the table size, but it won't get any smaller.

I can't make any sense of the data you provided; it's all on separate rows
and I've given up on trying to figure out what number goes with which value,
so I don't know exactly what the situation is.  It's likely that you can
improve on the situation by tweaking the autovacuum settings for this table
to vacuum it more aggressively.

Although, you don't seem to have a _problem_ that you've stated.  Are you
seeing performance issues?  Is 33M too much data and filling up the drive
(not being sarcastic here, as there are various mobile applications where
33M could be important, even now)?  Because, if this isn't actually causing
any problems, I wouldn't really worry about it.

-- 
Bill Moran
I need your help to succeed:
http://gamesbybill.com




Re: [GENERAL] large table

2014-09-22 Thread Luke Coldiron
  From: ahodg...@simkin.ca
  To: pgsql-general@postgresql.org
  Subject: Re: [GENERAL] large table
  Date: Mon, 22 Sep 2014 11:34:45 -0700
  
  On Monday, September 22, 2014 11:17:05 AM Luke Coldiron wrote:
   The actual size of the table is around 33 MB.
   The myFunc function is called every 2.5 seconds and the wasUpdated 
   function
   every 2 seconds by separate processes. I realize that running a FULL 
   VACUUM
   or CLUSTER command on the table will resolve the issue but I am more
   interested in a root cause that explains why this table would end up in
   this state. I have tried to reproduce this issue by running the exact 
   setup
   and have not been able to get the table to grow like this example. Any
   plausible causes or explanations would be much appreciated. Luke
  
  I'd guess that some other process held a transaction open for a couple of 
  week, and that prevented any vacuuming from taking place.
  
Interesting idea. On the surface I'm not sure how this would have happened in
the system, but I can certainly explore forcing this to happen and see if the
result is similar.
  

Re: [GENERAL] large table

2014-09-22 Thread Luke Coldiron
 Date: Mon, 22 Sep 2014 14:38:52 -0400
 From: wmo...@potentialtech.com
 To: lukecoldi...@hotmail.com
 CC: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] large table
 
 On Mon, 22 Sep 2014 11:17:05 -0700
 Luke Coldiron lukecoldi...@hotmail.com wrote:
 
  I am trying to figure out what would have caused a table in PostgreSQL
  8.4.16 to get into a state where there is only 1 live tuple, and there
  has only ever been 1 tuple, but the size of the table is huge.

  CREATE TABLE public.myTable (
     myColumn timestamp with time zone NOT NULL
  );

  Note: there is no primary key or index on this table.

  CREATE OR REPLACE FUNCTION public.myFunc() RETURNS VOID AS $$
  BEGIN
     UPDATE public.myTable SET myColumn = CLOCK_TIMESTAMP();
     IF NOT FOUND THEN
        INSERT INTO public.myTable(myColumn) VALUES (CLOCK_TIMESTAMP());
     END IF;
  END;
  $$ LANGUAGE plpgsql VOLATILE STRICT SECURITY DEFINER;

  CREATE OR REPLACE FUNCTION public.wasUpdated(OUT was_updated boolean)
  RETURNS BOOLEAN AS $$
  BEGIN
     was_updated := COALESCE((SELECT myColumn FROM public.myTable) >
                             (CLOCK_TIMESTAMP() - INTERVAL '5 SECOND'), FALSE);
  END;
  $$ LANGUAGE plpgsql VOLATILE STRICT SECURITY DEFINER;

  SELECT * FROM pg_stat_all_tables WHERE relname = 'myTable';
 
 [snip]
 
  The actual size of the table is around 33 MB.
  The myFunc function is called every 2.5 seconds and the wasUpdated function 
  every 2 seconds by separate processes. 
  I realize that running a FULL VACUUM or CLUSTER command on the table will 
  resolve the issue but I am more interested in a root cause that explains 
  why this table would end up in this state. I have tried to reproduce this 
  issue by running the exact setup and have not been able to get the table to 
  grow like this example. Any plausible cause'es or explanations would be 
  much appreciated.
 
 The UPDATE in myFunc() creates a new row every 2.5 seconds when it updates 
 the row.
 The data from those rows is only reclaimed when a vacuum is run.  So (for 
 example)
 if autovacuum only triggers a vacuum every 250 seconds, there will be 249 
 rows worth
 of space in the table, on average.  With the other process querying the 
 table, it's
  possible that the row that it's looking at will be something that _should_
  be reclaimable, so vacuum may not clear up all the free space.
 
 As far as running the exact setup: if you're not getting the same results, 
 then
 your setup isn't exactly the same.  It's likely that there are things going 
 on in the
 setup you're curious about that you're not aware of, such as additional 
 queries on
 the table, additional load that causes operations to take a little longer, 
 thus
 resulting in different overlap of competing operations, etc.
It is possible, and that is part of what I am trying to discover; however, I
am very familiar with the system / code base, and in this case there is a
single process updating the timestamp and a single process reading the
timestamp. There are no other user processes programmed to interact with
this table outside of potentially what Postgres is doing.
 
 Keep in mind that a short-lived incident might have resulted in table bloat 
 that
 won't be reclaimed by autovacuum.  I.e., if autovacuum wasn't running for a 
 while,
 this table would just keep bloating; then when you start autovacuum, it will
 maintain the table size, but it won't get any smaller.
I thought this as well, and I have run tests with autovacuum turned off and
don't see this issue occur over thousands of updates. The updates become HOT
updates and reuse dead tuples.
 
  I can't make any sense of the data you provided; it's all on separate rows 
 and I've
 given up on trying to figure out what number goes with which value, so I 
 don't know
 exactly what the situation is.  It's likely that you can improve on the 
 situation
 by tweaking the autovacuum settings for this table to vacuum it more 
 aggressively.
Sorry about that; the email client that I am using messed up the formatting.
Here is another attempt.
SELECT *
FROM pg_stat_all_tables
WHERE relname = 'myTable';

 relid            | 16713
 schemaname       | public
 relname          | myTable
 seq_scan         | 3995023
 seq_tup_read     | 3995296
 idx_scan         |
 idx_tup_fetch    |
 n_tup_ins        | 0
 n_tup_upd        | 3778598
 n_tup_del        | 0
 n_tup_hot_upd    | 3774362
 n_live_tup       | 949135
 n_dead_tup       | 124
 last_vacuum      |
 last_autovacuum  | 2014-09-18 11:28:47.63545+00
 last_analyze     |
 last_autoanalyze | 2014-09-18 11:27:47.134432+00

SELECT *
FROM pgstattuple('public.myTable');

 table_len          | 34709504
 tuple_count        | 1
 tuple_len          | 32
 tuple_percent      | 0
 dead_tuple_count   | 105
 dead_tuple_len     | 3360
 dead_tuple_percent | 0.01
 free_space         | 30757308
 free_percent       | 88.61
So far having autovacuum on or off has not caused the problem to occur. 
Originally I was thinking 

Re: [GENERAL] large table

2014-09-22 Thread John R Pierce

On 9/22/2014 12:33 PM, Luke Coldiron wrote:


It is possible and that is part of what I am trying to discover 
however I am very familiar with the system / code base and in this 
case there is a single process updating the timestamp and a single 
process reading the timestamp. There are no other user processes 
programmed to interact with this table outside of potentially what 
Postgres is doing.


ANY other connection to the same postgres server, even to a different 
database, that has an open long running transaction (most frequently, 
Idle In Transaction) will block autovacuum from marking the old tuples 
as reusable.
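
On 8.4 such sessions are easy to spot (a sketch; in 9.2 and later the
columns are pid, state and query instead):

    SELECT procpid, datname, usename, xact_start, current_query
    FROM pg_stat_activity
    WHERE current_query = '<IDLE> in transaction'
    ORDER BY xact_start;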




--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] large table

2014-09-22 Thread Eduardo Morras
On Mon, 22 Sep 2014 12:15:27 -0700
Luke Coldiron lukecoldi...@hotmail.com wrote:

   I'd guess that some other process held a transaction open for a
   couple of week, and that prevented any vacuuming from taking
   place.
   
 Interesting idea, on the surface I'm not sure how this would have
 happened in the system but I can certainly explore forcing this to
 happen and see if the result is similar.

It happened to me when I developed with Java+Hibernate. It opened a
transaction and made a lot of inserts and deletes while the app ran. It
created GB-size tables with few rows, and a permanent 'IDLE in TRANSACTION'
stops any autovacuum.

---   ---
Eduardo Morras emorr...@yahoo.es




Re: [GENERAL] large table

2014-09-22 Thread Bill Moran
On Mon, 22 Sep 2014 12:46:21 -0700
John R Pierce pie...@hogranch.com wrote:

 On 9/22/2014 12:33 PM, Luke Coldiron wrote:
 
  It is possible and that is part of what I am trying to discover 
  however I am very familiar with the system / code base and in this 
  case there is a single process updating the timestamp and a single 
  process reading the timestamp. There are no other user processes 
  programmed to interact with this table outside of potentially what 
  Postgres is doing.
 
 ANY other connection to the same postgres server, even to a different 
 database, that has an open long running transaction (most frequently, 
 Idle In Transaction) will block autovacuum from marking the old tuples 
 as reusable.

As a possibility, I've seen this happen when people connected to the DB
using various GUI tools (can't remember the exact one where we saw this)
that started and held open a transaction without the user realizing it.
This prevented autovacuum from getting any useful work done until our
Nagios monitoring detected the idle transaction and an engineer tracked
down who was doing it and had them close the program.  IMHO, too many
GUI tools make it too easy to do something without realizing the
consequences.

On a related note, I'm curious as to how an open transaction affects HOT
updates (if at all).  This is an area of behavior I have little experience
with to date.

-- 
Bill Moran
I need your help to succeed:
http://gamesbybill.com




Re: [GENERAL] large table

2014-09-22 Thread Luke Coldiron


 Date: Mon, 22 Sep 2014 12:46:21 -0700
 From: pie...@hogranch.com
 To: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] large table
 
 On 9/22/2014 12:33 PM, Luke Coldiron wrote:
 
  It is possible and that is part of what I am trying to discover 
  however I am very familiar with the system / code base and in this 
  case there is a single process updating the timestamp and a single 
  process reading the timestamp. There are no other user processes 
  programmed to interact with this table outside of potentially what 
  Postgres is doing.
 
 ANY other connection to the same postgres server, even to a different 
 database, that has an open long running transaction (most frequently, 
 Idle In Transaction) will block autovacuum from marking the old tuples 
 as reusable.
 
Good point; I wasn't thinking about that. It is a very real possibility
considering the behavior of the rest of the system.
 
 -- 
 john r pierce  37N 122W
 somewhere on the middle of the left coast
 
 
 
  

[GENERAL] Question about Vacuum and Replication failures in 9.3.5

2014-09-22 Thread Joel Avni
I noticed that tables on my master PostgreSQL server were growing, and
running VACUUM FULL ANALYZE on them actually made them even bigger.

At the same time, a slave PostgreSQL server had fallen behind in trying to
replicate, and was stuck constantly looping over ‘started streaming WAL from
primary at…’ and ‘requested WAL segment … has already been removed’. Once I
stopped the slave instance, I was able to manually vacuum the tables, and it
appears that autovacuum is now able to vacuum as well.  One table (for
instance) dropped from 10 GB down to 330 MB after this operation. I don’t
see anything about autovacuum not being able to acquire locks while the
slave wasn’t able to replicate. I am unclear why a slave trying to continue
streaming would block autovacuum, or is something else at play?

I did check, and no base backups were in progress at the time this occurred.

Thank you,
Joel Avni



Re: [GENERAL] Question about Vacuum and Replication failures in 9.3.5

2014-09-22 Thread Adrian Klaver

On 09/22/2014 01:42 PM, Joel Avni wrote:

I noticed that tables on my master PostgreSQL server were growing, and
running vacuum full analyze on them actually made them even bigger.


First, what version of Postgres are you using?

Second, VACUUM FULL is usually not recommended, for the reason you found
out, which is documented here:


http://www.postgresql.org/docs/9.3/interactive/sql-vacuum.html

FULL

Selects full vacuum, which can reclaim more space, but takes much 
longer and exclusively locks the table. This method also requires extra 
disk space, since it writes a new copy of the table and doesn't release 
the old copy until the operation is complete. Usually this should only 
be used when a significant amount of space needs to be reclaimed from 
within the table.





At the same time, a slave PostgreSQL server had fallen behind in trying
to replicate, and was stuck in constantly looping over ‘started
streaming WAL from primary at…’ and ‘requested WAL segment ….  has
already been removed’. Once I stopped running the slave instance, I was
able to manually vacuum the tables, and appears that auto vacuum is now
able to vacuum as well.  One table (for instance) dropped from 10Gb down
to 330Mb after this operation. I don’t see anything about auto vacuum
not able to acquire  locks while the slave wasn’t able to replicate. I
am unclear why a slave trying to continue streaming would block the auto
vacuum, or is something else at play?


My guess is that it is related to the locks your VACUUM FULL was taking,
though it would require more information on what all the various parts were
doing over the time frame.
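
As for the ‘requested WAL segment … has already been removed’ loop itself,
the usual remedies on 9.3 look like this (a sketch, not a diagnosis of your
particular case):

    # postgresql.conf on the master
    wal_keep_segments = 256   # retain more WAL for lagging standbys

    # or configure WAL archiving on the master and a restore_command in the
    # standby's recovery.conf, so the standby can fetch recycled segments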




I did check, and no base backups were in progress at the time this occurred.

Thank you,
Joel Avni




--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Higher chance of deadlock due to ANALYZE VERBOSE / SHARE UPDATE EXCLUSIVE?

2014-09-22 Thread Dean Toader
I’ve got a question on postgresql locking:

I managed to get a SHARE UPDATE EXCLUSIVE lock while running the following:
/opt/pgsql/bin/vacuumdb -a --analyze-only --verbose -U postgres

While the above was running, I started another session, connected to the
database being analyzed in the first terminal, and ran the following query,
noticing the following:

[06:05 PM|TEST|TEST-tstclone|~]# /opt/pgsql/bin/psql -U postgres mydb
psql (9.0.13)
Type help for help.

mirthmatch=#   SELECT a.datname,
 c.relname,
 l.transactionid,
 l.mode,
 l.granted,
 a.usename,
 a.current_query,
 a.query_start,
 age(now(), a.query_start) AS age,
 a.procpid
FROM  pg_stat_activity a
 JOIN pg_locks l ON l.pid = a.procpid
 JOIN pg_class c ON c.oid = l.relation
ORDER BY a.query_start;

  datname |            relname            | transactionid |           mode           | granted | usename  |  current_query   |          query_start          |       age       | procpid
 ---------+-------------------------------+---------------+--------------------------+---------+----------+------------------+-------------------------------+-----------------+---------
  mydb    | trait_inst_hist_trait_vers_fk |               | AccessShareLock          | t       | postgres | ANALYZE VERBOSE; | 2014-09-22 17:54:51.924328-04 | 00:11:01.319604 |    9979
  mydb    | eis_trait_instance_history_pk |               | AccessShareLock          | t       | postgres | ANALYZE VERBOSE; | 2014-09-22 17:54:51.924328-04 | 00:11:01.319604 |    9979
  mydb    | eis_trait_instance_history    |               | ShareUpdateExclusiveLock | t       | postgres | ANALYZE VERBOSE; | 2014-09-22 17:54:51.924328-04 | 00:11:01.319604 |    9979
This is unexpected to me because, looking at the chart at
http://www.postgresql.org/docs/9.0/static/explicit-locking.html
I see that SHARE UPDATE EXCLUSIVE conflicts with SHARE UPDATE EXCLUSIVE,
SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE and ACCESS EXCLUSIVE. That means an
ANALYZE VERBOSE on a large table that is running concurrently with another
PID holding any of the conflicting lock modes on the same large table may
lead to a blocking situation.

What I observed on a deadlocked system was that a SHARE UPDATE EXCLUSIVE
lock due to an ANALYZE VERBOSE initiated by
/opt/pgsql/bin/vacuumdb -a --analyze-only --verbose -U postgres
was blocking a VACUUM operation (also SHARE UPDATE EXCLUSIVE) initiated by
the autovacuum process on a very large table.

My conclusion in this case is that a user or cronjob-initiated
/opt/pgsql/bin/vacuumdb -a --analyze-only --verbose -U postgres
should really not happen while the autovacuum is active since this is a
scenario where locking (and deadlock) is possible due to
SHARE UPDATE EXCLUSIVE.
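
As a side note, a minimal query like the following (a sketch against
pg_locks, not output captured from the system above) can show which
ungranted lock requests are queued behind which holders:

-- pair each waiting lock request with the granted locks held by other
-- backends on the same relation
SELECT w.pid  AS waiting_pid,
       w.mode AS wanted_mode,
       h.pid  AS holding_pid,
       h.mode AS held_mode,
       c.relname
FROM pg_locks w
JOIN pg_locks h ON h.relation = w.relation
               AND h.granted
               AND h.pid <> w.pid
JOIN pg_class c ON c.oid = w.relation
WHERE NOT w.granted;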

Can anyone see an increased possibility of deadlock occurring with 
ANALYZE VERBOSE (initiated by “vacuumdb -a --analyze-only --verbose -U 
postgres” 
command run once every 24 hrs on a cronjob schedule), VACUUM (initiated by 
autovacuum) 
and say ... a long running UPDATE (initiated by a JEE application)
all happening concurrently on the same table?

Thanks in advance,
Dean Toader






Re: [GENERAL] Higher chance of deadlock due to ANALYZE VERBOSE / SHARE UPDATE EXCLUSIVE?

2014-09-22 Thread Tom Lane
Dean Toader de...@mirthcorp.com writes:
 Can anyone see an increased possibility of deadlock occurring with 
 ANALYZE VERBOSE (initiated by “vacuumdb -a --analyze-only --verbose -U 
 postgres” 
 command run once every 24 hrs on a cronjob schedule), VACUUM (initiated by 
 autovacuum) 
 and say ... a long running UPDATE (initiated by a JEE application)
 all happening concurrently on the same table?

If an autovacuum is part of a deadlock, the deadlock detector should
preferentially kill the autovacuum transaction.  Is that not happening
for you?  But in any case, whether that happens or not, SHARE UPDATE
EXCLUSIVE doesn't block UPDATE commands, so the case you describe
above is not a deadlock.  You would have to be doing some kind of DDL
on the table in question to have any risk.
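
For example (a sketch; the table name is taken from the output earlier in
this thread, and the added column is invented):

-- session 1: ANALYZE takes SHARE UPDATE EXCLUSIVE for its duration
ANALYZE VERBOSE eis_trait_instance_history;

-- session 2: ALTER TABLE needs ACCESS EXCLUSIVE, which conflicts with
-- SHARE UPDATE EXCLUSIVE, so this blocks until session 1 finishes; mix
-- in a transaction that already holds locks session 1 will later want
-- and you have the ingredients for a real deadlock
ALTER TABLE eis_trait_instance_history ADD COLUMN note text;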

regards, tom lane




Re: [GENERAL] Installing Postgresql on Linux Friendlyarm

2014-09-22 Thread Xiang Gan
OK. So a stupid question: is there any possibility of running PostgreSQL as 
root? (I know this is generally forbidden. But what I found is that in the 
Linux FriendlyArm environment, root can create sockets while a non-root 
user does not have that privilege.)

Kind regards,
Gerry


On 09/22/2014 01:22 AM, Xiang Gan wrote:
 Hi,

 I'm newbie here so I'm sorry if this is posted in wrong place.

 I tried to install Postgresql 9.2.9 on Linux Friendlyarm. The Postgresql

 was cross compiled (use arm-linux-gcc 4.4.3) successfully on my desktop
 Ubuntu. Then it was moved to

 Linux Friendlyarm environment. When I tried to start the Postgresql
 server, however,

 it gives me the following errors:

 LOG: could not create IPv4 socket: Permission denied

 WARNING: could not create listen socket for 127.0.0.1

 FATAL: could not create any TCP/IP sockets

 The kernel info. of my Linux friendlyarm is as follows:

 Linux FriendlyARM 2.6.35.7-FriendlyARM #1 PREEMPT Thu Mar 1 17:38:57 HKT
 2012 arm7l GNU/Linux.

 Anyone could provide some help about this?

The user you are running Postgres as does not have the permissions to create 
an IPv4 socket. I know nothing about Linux FriendlyArm, so I don't know 
where to go from here. I would say ask the FriendlyArm forum, but I see 
you have already done that.
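
If TCP/IP really is off limits for unprivileged users there, one possible
workaround sketch (postgresql.conf), assuming local-only access through the
Unix-domain socket is acceptable:

# disable TCP/IP listening entirely; clients then connect through the
# Unix socket instead (e.g. psql -h /tmp ...)
listen_addresses = ''
# on 9.2 the setting is unix_socket_directory (singular)
unix_socket_directory = '/tmp'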


 Thanks in advance!

 Gerry



-- 
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Installing Postgresql on Linux Friendlyarm

2014-09-22 Thread Tom Lane
Xiang Gan xiang...@santaitech.com writes:
 OK. So a stupid question: is there any possibility of running PostgreSQL 
 as root? (I know this is generally forbidden. But what I found is that in 
 the Linux FriendlyArm environment, root can create sockets while a 
 non-root user does not have that privilege.)

So, it's not so much FriendlyArm as StupidAndUselessArm?  What sort of nut
would think that all network services should run as root?

The short answer to your question is that if you're so inclined, you
can easily find and dike out the no-root check in the server source
code.  But the Postgres project never has and never will publish a
version with that check removed or compromised in any way, and we will
not provide support if you have any problems running the server that way.
We do not believe it's advisable to run network-accessible services
as root.

regards, tom lane




Re: [GENERAL] Question about Vacuum and Replication failures in 9.3.5

2014-09-22 Thread Joel Avni
It's 9.3.5, and I did the manual vacuum to try to see where the problem
might be.

On 9/22/14, 4:04 PM, Adrian Klaver adrian.kla...@aklaver.com wrote:

On 09/22/2014 01:42 PM, Joel Avni wrote:
 I noticed that tables on my master PostgreSQL server were growing, and
 running vacuum full analyze on them actually made them even bigger.

First what version of Postgres are you using?

Second VACUUM FULL is usually not recommended for the reason you found
out and which is documented here:

http://www.postgresql.org/docs/9.3/interactive/sql-vacuum.html

FULL

 Selects full vacuum, which can reclaim more space, but takes much
longer and exclusively locks the table. This method also requires extra
disk space, since it writes a new copy of the table and doesn't release
the old copy until the operation is complete. Usually this should only
be used when a significant amount of space needs to be reclaimed from
within the table.



 At the same time, a slave PostgreSQL server had fallen behind in trying
 to replicate, and was stuck constantly looping over 'started
 streaming WAL from primary at…' and 'requested WAL segment … has
 already been removed'. Once I stopped running the slave instance, I was
 able to manually vacuum the tables, and it appears that autovacuum is now
 able to vacuum as well.  One table (for instance) dropped from 10Gb down
 to 330Mb after this operation. I don't see anything about autovacuum
 not being able to acquire locks while the slave wasn't able to replicate. I
 am unclear why a slave trying to continue streaming would block
 autovacuum, or is something else at play?

My guess is it is related to the locks your VACUUM FULL was taking, though it
would require more information on what all the various parts were doing
over the time frame.


 I did check, and no base backups were in progress at the time this
occurred.

 Thank you,
 Joel Avni



-- 
Adrian Klaver
adrian.kla...@aklaver.com





Re: [GENERAL] Question about Vacuum and Replication failures in 9.3.5

2014-09-22 Thread Joel Avni
It's version 9.3.5. What's interesting is that the table grew in size after
the vacuum full, which I did to try to see why the autovacuum wasn't
working.
However, after I stopped the PostgreSQL slave instance, vacuum full
did result in a much, much smaller size, as expected. So it appears
that there must be some interaction between a slave that is trying to do
streaming replication but failing (because the requested WALs have been
cycled out) and vacuuming on the master. I am not entirely sure that's the
case, but I think I observed it twice. Is it that the master can't clean up
tuples that might still be visible as of the slave's last replayed
transaction? I didn't think the master was aware of the slave's state, and
I don't see how locks could be coordinated between the master and slave.
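
For what it's worth, a minimal diagnostic sketch (assuming a plain 9.3
streaming-replication pair):

-- on the standby: if hot_standby_feedback is on, the master retains
-- dead rows the standby may still need, which can keep vacuum from
-- reclaiming space
SHOW hot_standby_feedback;

-- on the master: how far behind is each standby's replay? A loop of
-- 'requested WAL segment … has already been removed' usually means
-- wal_keep_segments is too small for the standby's lag
SELECT application_name, state,
       pg_xlog_location_diff(pg_current_xlog_location(),
                             replay_location) AS replay_lag_bytes
FROM pg_stat_replication;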


On 9/22/14, 4:04 PM, Adrian Klaver adrian.kla...@aklaver.com wrote:

On 09/22/2014 01:42 PM, Joel Avni wrote:
 I noticed that tables on my master PostgreSQL server were growing, and
 running vacuum full analyze on them actually made them even bigger.

First what version of Postgres are you using?

Second VACUUM FULL is usually not recommended for the reason you found
out and which is documented here:

http://www.postgresql.org/docs/9.3/interactive/sql-vacuum.html

FULL

 Selects full vacuum, which can reclaim more space, but takes much
longer and exclusively locks the table. This method also requires extra
disk space, since it writes a new copy of the table and doesn't release
the old copy until the operation is complete. Usually this should only
be used when a significant amount of space needs to be reclaimed from
within the table.



 At the same time, a slave PostgreSQL server had fallen behind in trying
 to replicate, and was stuck constantly looping over 'started
 streaming WAL from primary at…' and 'requested WAL segment … has
 already been removed'. Once I stopped running the slave instance, I was
 able to manually vacuum the tables, and it appears that autovacuum is now
 able to vacuum as well.  One table (for instance) dropped from 10Gb down
 to 330Mb after this operation. I don't see anything about autovacuum
 not being able to acquire locks while the slave wasn't able to replicate. I
 am unclear why a slave trying to continue streaming would block
 autovacuum, or is something else at play?

My guess is it is related to the locks your VACUUM FULL was taking, though it
would require more information on what all the various parts were doing
over the time frame.


 I did check, and no base backups were in progress at the time this
occurred.

 Thank you,
 Joel Avni



-- 
Adrian Klaver
adrian.kla...@aklaver.com





[GENERAL] wide row insert via Postgres jdbc driver

2014-09-22 Thread Sameer Kumar
Hi,

I am working with a vendor and planning to deploy their application on
PostgreSQL as the backend. They have cautioned the customer that
PostgreSQL's JDBC driver v9.1 (build 900) has issues which cause deadlocks
during wide record inserts.

Is there any such known problem that anyone else has encountered in this
regard? Have there been any improvements in later builds/releases on this
aspect of the PostgreSQL drivers/connectors?

I am probably missing a lot of info in this post. I am not sure what other
info would be helpful here other than the version. The cautionary advice
is irrespective of platform/hardware.
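
In the meantime, one way to check the claim on a test deployment (a
sketch; the deadlocks counter in pg_stat_database exists in 9.2 and
later):

-- deadlocks detected per database since the last statistics reset
SELECT datname, deadlocks
FROM pg_stat_database
ORDER BY deadlocks DESC;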

Regards
Sameer

PS: Sent from my mobile device; please ignore typos and abbreviations.