[GENERAL] Revolut - postgres ?

2017-10-01 Thread rakeshkumar464
https://blog.revolut.com/no-excuses-we-let-you-down-32f81e64f974

The careers section of the company's web page lists PostgreSQL as part of its
tech stack.

Would be interesting to know the details.

"At around 07:00 BST on Friday morning, our transaction database began to
malfunction. Naturally, we followed procedure and switched to a backup
server. Unfortunately, the backup server began to drastically slow down and
was struggling to process live transactions.
To make matters worse, we had an unexpected spike of 50% in user activity on
Friday and the backup server could not handle such a large increase in
volume."






[GENERAL] Inconsistent Postgres error when installing our product containing PostgreSQL 9.3.4

2017-09-06 Thread Diags101
Hi,

I'm getting inconsistent Postgres errors when we upgrade our product, which
bundles PostgreSQL 9.3.4.
We have 13 databases, and we get errors at different points during the
database setup.

Here are the failures:
1.
$POSTGRES/bin/psql -c  " CREATE SEQUENCE serial1 START 1 CYCLE ;" -d $ARCHDB
exitIfCommandFailed "Schema creation"

ERROR:  index "pg_type_oid_index" contains corrupted page at block 0
HINT:  Please REINDEX it.

2.
$POSTGRES/bin/psql -c " CREATE TABLE RAWLOG (ID int8  PRIMARY KEY DEFAULT
nextval('serial1'),SOURCE varchar (100),RAWDATA bytea,WRITETIME timestamp
DEFAULT CURRENT_TIMESTAMP,BOOTTIME timestamp DEFAULT CURRENT_TIMESTAMP
,HIDDEN bool,VERSION int4);" -d $WORKINGDB

ERROR:  relation "pg_catalog.pg_database" does not exist
LINE 7: FROM pg_catalog.pg_database d

3.
$POSTGRES/bin/pg_restore -d $EVWEB_DB -p$EVWEB_DB_PORT -U$SFMDB_SU
$VAR_TMP/evwebdump

ERROR:  relation "pg_catalog.pg_namespace" does not exist
LINE 3: FROM pg_catalog.pg_namespace n

4.
$POSTGRES/bin/psql -c " CREATE TABLE RAWLOG (ID int8  PRIMARY KEY DEFAULT
nextval('serial1'),SOURCE varchar (100),RAWDATA bytea,WRITETIME timestamp
DEFAULT CURRENT_TIMESTAMP,BOOTTIME timestamp DEFAULT CURRENT_TIMESTAMP
,HIDDEN bool,VERSION int4);" -d $ARCHDB

ERROR:  invalid page in block 0 of relation base/216257/11812

All these errors are inconsistent; when I repeat the installation they appear
in a different database.

Any pointers or suggestions are welcome.

On behalf of the Diagnostics team,
Sanjay


Re: [GENERAL] storing postgres data on dropbox

2017-06-18 Thread Adrian Klaver

On 06/18/2017 01:13 PM, Martin Mueller wrote:

I think I get it. 'base' is not the data directory but a child of var-9.5, which (with 
its entire path) is the "data directory". I honestly don't recall how I 
installed Postgres, but I'm pretty sure that I picked the default method from the Postgres.


Not something you have to do right this instant, but I would try to 
track down how you did the install. It will come in handy when you do an 
upgrade as the various installers differ in how they do an install. 
Picking the wrong one when you do the upgrade could make things difficult.




Many thanks for your help, which is exceptionally clear and detailed.

MM





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] storing postgres data on dropbox

2017-06-18 Thread Martin Mueller
I think I get it. 'base' is not the data directory but a child of var-9.5, 
which (with its entire path) is the "data directory". I honestly don't recall 
how I installed Postgres, but I'm pretty sure that I picked the default method 
from the Postgres.

Many thanks for your help, which is exceptionally clear and detailed.

MM




On 6/18/17, 3:03 PM, "Adrian Klaver"  wrote:

>On 06/18/2017 01:00 PM, Martin Mueller wrote:
>> Did you mean  that "/users/martin/Library 
>> ApplicationSupport/Postgres/var9.5/base/" is above  or below the data 
>> directory?  As I understand it Postgres is the highest Postgres specific 
>> directory. It contains just one child directory, var-9.5, which has a lot of 
>> subdirectories, including 'base'. So I assume that "base" is the "entire 
>> Postgres directory". Is that correct
>
>Pretty sure the answer is no. I am going to say the Postgres data 
>directory is /users/martin/Library ApplicationSupport/Postgres/var9.5/.
>
>Take a look at this link:
>
>https://www.postgresql.org/docs/9.6/static/storage-file-layout.html
>
>
>and see if what is under:
>
>/users/martin/Library ApplicationSupport/Postgres/var9.5/
>
>looks similar.
>
>*** It would also help to know how you installed Postgres? ***
>
>>   
>> 
>> 
>> 
>> 
>>>
>>> 3) You will need to make sure you are copying/syncing the entire
>>> Postgres data directory:
>>>
>>> https://www.postgresql.org/docs/9.6/static/storage-file-layout.html
>>>
>>>  From you original post the directory you mentioned:
>>>
>>> /users/martin/Library ApplicationSupport/Postgres/var9.5/base/
>>>
>>> looks to be below the top level data directory.
>>>
>>> While I remember, how are you installing/updating Postgres on your machines?
>>>
>
>
>-- 
>Adrian Klaver
>adrian.kla...@aklaver.com



Re: [GENERAL] storing postgres data on dropbox

2017-06-18 Thread Adrian Klaver

On 06/18/2017 01:00 PM, Martin Mueller wrote:

Did you mean  that "/users/martin/Library ApplicationSupport/Postgres/var9.5/base/" is above  or 
below the data directory?  As I understand it Postgres is the highest Postgres specific directory. It 
contains just one child directory, var-9.5, which has a lot of subdirectories, including 'base'. So I assume 
that "base" is the "entire Postgres directory". Is that correct


Pretty sure the answer is no. I am going to say the Postgres data 
directory is /users/martin/Library ApplicationSupport/Postgres/var9.5/.


Take a look at this link:

https://www.postgresql.org/docs/9.6/static/storage-file-layout.html

and see if what is under:

/users/martin/Library ApplicationSupport/Postgres/var9.5/

looks similar.

*** It would also help to know how you installed Postgres? ***

  







3) You will need to make sure you are copying/syncing the entire
Postgres data directory:

https://www.postgresql.org/docs/9.6/static/storage-file-layout.html

From your original post, the directory you mentioned:

/users/martin/Library ApplicationSupport/Postgres/var9.5/base/

looks to be below the top level data directory.

While I remember, how are you installing/updating Postgres on your machines?




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] storing postgres data on dropbox

2017-06-18 Thread Martin Mueller
Did you mean  that "/users/martin/Library 
ApplicationSupport/Postgres/var9.5/base/" is above  or below the data 
directory?  As I understand it Postgres is the highest Postgres specific 
directory. It contains just one child directory, var-9.5, which has a lot of 
subdirectories, including 'base'. So I assume that "base" is the "entire 
Postgres directory". Is that correct
 




>
>3) You will need to make sure you are copying/syncing the entire 
>Postgres data directory:
>
>https://www.postgresql.org/docs/9.6/static/storage-file-layout.html
>
>
> From you original post the directory you mentioned:
>
>/users/martin/Library ApplicationSupport/Postgres/var9.5/base/
>
>looks to be below the top level data directory.
>
>While I remember, how are you installing/updating Postgres on your machines?
>



Re: [GENERAL] storing postgres data on dropbox

2017-06-18 Thread Adrian Klaver

On 06/18/2017 11:29 AM, Martin Mueller wrote:

How close is close enough? In my case, the machines run OS Sierra, and the 
installation uses the same directory paths. Keeping the Postgres versions in sync 
should be simple. Is that close enough?

In MySQL you can copy and paste individual tables if the data are kept in ISAM, 
but INNO is hopeless that way. Is Postgres more like INNO than ISAM when it 
comes to table storage?



Postgres will not like you cutting and pasting individual tables.

To keep two Postgres instances on separate machines in sync using an 
external hard drive you will need to:


1) Make sure you have compatible OS'es. You have that covered already.

2) Maintain compatible Postgres versions. For Postgres up to version 9.6 
the versioning scheme is X.X.x where a change in the first two numbers 
denotes a major version change and a change in the last means a minor 
version change. You cannot do what you intend (sync binary files) across 
major versions. Minor versions should not be a problem until they are. 
By this I mean you should check the Release Notes for any gotchas:


https://www.postgresql.org/docs/9.6/static/release.html

For Postgres 10 (now in development) and up the versioning scheme has 
been changed to X.x, where a change in the first number is a major 
change and a change in the second number indicates a minor release 
change. Checking the Release Notes still applies.


3) You will need to make sure you are copying/syncing the entire 
Postgres data directory:


https://www.postgresql.org/docs/9.6/static/storage-file-layout.html

From your original post, the directory you mentioned:

/users/martin/Library ApplicationSupport/Postgres/var9.5/base/

looks to be below the top level data directory.

While I remember, how are you installing/updating Postgres on your machines?

4) Before you do anything I would use pg_dump/pg_dumpall (a minimal sketch follows after point 5):

https://www.postgresql.org/docs/9.6/static/app-pgdump.html
https://www.postgresql.org/docs/9.6/static/app-pg-dumpall.html

to make a backup of your data, just in case things do not work out as 
expected.


5) The above also applies to Steve Atkins' suggestion of using the external 
drive as the sole data directory. The only thing I would be concerned 
about is that the external hard drives I have worked with are not all that 
fast; you will have to see whether that is a problem in your case.
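
Returning to point 4, a minimal sketch of such a backup (the user name, database
name and paths below are illustrative placeholders, not taken from this thread):

    # dump every database plus globals (roles, tablespaces) into one compressed file
    pg_dumpall -U postgres | gzip > /Volumes/backup/cluster-$(date +%Y%m%d).sql.gz

    # or dump a single database in custom format, which pg_restore can restore selectively
    pg_dump -U postgres -Fc -f /Volumes/backup/mydb-$(date +%Y%m%d).dump mydb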



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] storing postgres data on dropbox

2017-06-18 Thread Karsten Hilbert
On Sun, Jun 18, 2017 at 06:29:50PM +, Martin Mueller wrote:

> How close is close enough? In my case, the machines run OS
> Sierra, and the installation uses the same directory paths
> Keeping the Postgres version in sync should be simple. Is
> that close enough?

I am not an expert on that. Methinks the mailing list archive
should have posts on that topic.

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] storing postgres data on dropbox

2017-06-18 Thread Karsten Hilbert
On Sun, Jun 18, 2017 at 06:29:50PM +, Martin Mueller wrote:

> In MySQL you can copy and paste individual tables if the
> data are kept in ISAM, but INNO is hopeless that way. Is
> Postgres more like INNO than ISAM when it comes to table
> storage?

*more* like INNO but not at all *like* INNO :-)

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] storing postgres data on dropbox

2017-06-18 Thread Martin Mueller
How close is close enough? In my case, the machines run OS Sierra, and the 
installation uses the same directory paths. Keeping the Postgres versions in sync 
should be simple. Is that close enough?

In MySQL you can copy and paste individual tables if the data are kept in ISAM, 
but INNO is hopeless that way. Is Postgres more like INNO than ISAM when it 
comes to table storage?




On 6/18/17, 12:58 PM, "pgsql-general-ow...@postgresql.org on behalf of Karsten 
Hilbert"  wrote:

>On Sun, Jun 18, 2017 at 05:30:44PM +, Martin Mueller wrote:
>
>> Thank for this very helpful answer, which can be
>> implemented for less than $100. For somebody who started
>> working a 128k Mac in the eighties, it is mindboggling that
>> for that amount you can buy a terabyte of storage in a device
>> that you put in a coat pocket. I'll read up on rsync
>
>I seem to remember that for this to work the two machines
>must be *very* close in architecture, and the PostgreSQL
>versions best be exactly the same.
>
>Karsten
>-- 
>GPG key ID E4071346 @ eu.pool.sks-keyservers.net
>E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346
>
>



Re: [GENERAL] storing postgres data on dropbox

2017-06-18 Thread Steve Atkins

> On Jun 18, 2017, at 10:58 AM, Karsten Hilbert  wrote:
> 
> On Sun, Jun 18, 2017 at 05:30:44PM +, Martin Mueller wrote:
> 
>> Thank for this very helpful answer, which can be
>> implemented for less than $100. For somebody who started
>> working a 128k Mac in the eighties, it is mindboggling that
>> for that amount you can buy a terabyte of storage in a device
>> that you put in a coat pocket. I'll read up on rsync
> 
> I seem to remember that for this to work the two machines
> must be *very* close in architecture, and the PostgreSQL
> versions best be exactly the same.

If the two machines have the same architecture you can also just
have the data directory (or the whole postgresql installation) installed
on an external drive and run it from there.

Plug it in, start postgresql, use it. Shut down postgresql, unplug it.

I've been running from an external drive for years with no problems,
but backing it up regularly to the machines you plug it into (with
pg_dump) is probably a good idea.

I have the entire postgresql installation on the external drive, and
have /Volumes/whatever/pgsql/bin early in my path, so if the drive
is plugged in pg_ctl, psql etc go to the installation on the external
drive.

With one of the little samsung usb3 SSDs it'll fit in your pocket.
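
A minimal sketch of that setup, assuming an illustrative mount point and layout
(these paths are placeholders, not Steve's actual ones):

    # put the external installation's binaries first in the PATH
    export PATH=/Volumes/PGDRIVE/pgsql/bin:$PATH

    # start the server against the data directory on the drive; stop it before unplugging
    pg_ctl -D /Volumes/PGDRIVE/pgsql/data -l /Volumes/PGDRIVE/pgsql/logfile start
    pg_ctl -D /Volumes/PGDRIVE/pgsql/data stop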

Cheers,
  Steve





Re: [GENERAL] storing postgres data on dropbox

2017-06-18 Thread Karsten Hilbert
On Sun, Jun 18, 2017 at 05:30:44PM +, Martin Mueller wrote:

> Thank for this very helpful answer, which can be
> implemented for less than $100. For somebody who started
> working a 128k Mac in the eighties, it is mindboggling that
> for that amount you can buy a terabyte of storage in a device
> that you put in a coat pocket. I'll read up on rsync

I seem to remember that for this to work the two machines
must be *very* close in architecture, and the PostgreSQL
versions best be exactly the same.

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] storing postgres data on dropbox

2017-06-18 Thread Martin Mueller
Thanks for this very helpful answer, which can be implemented for less than 
$100. For somebody who started working on a 128K Mac in the eighties, it is 
mind-boggling that for that amount you can buy a terabyte of storage in a device 
that you can put in a coat pocket. I'll read up on rsync.






On 6/18/17, 11:13 AM, "Adrian Klaver"  wrote:

>On 06/18/2017 06:16 AM, Martin Mueller wrote:
>> Why not a PostgreSQL-database somewhere in the cloud? Good question, but 
>> it's a question of money and performance. I used MySQL for many years and 
>> then moved a dataset to an instance on AWS. The performance was horribly 
>> slow. Then some kind soul at my institution hooked me up with "Aurora," 
>> which I take to be MySQL on steroids. That was great, and the performance 
>> was almost as good as on my desktopc. But it cost hundreds of dollars per 
>> month. I work at home with a machine that has 32 GB of memory. In order to 
>> get comparable performance from a cloud-based Postgres instance, I'd have to 
>> spend a lot of money that I don't have. Dropbox costs $120 a year for a 
>> terabyte of storage, which is very affordable.
>
>If it where me I would pick up 1TB external hard drive then:
>
>1) On your Mac(Location 1) stop Postgres and then back up/sync your base 
>directory to the external harddrive.
>
>2) Take the external hard drive to Location 2.
>
>3) Stop Postgres at Location 2 and then sync to base directory there.
>
>4) Start Postgres at location 2.
>
>5) Repeat for going other direction.
>
>It is similar to using Dropbox, with the difference being you do not 
>have Dropbox trying to sync while you are using the database. That I am 
>pretty sure will not end well. The above does depend on familiarity with 
>programs like rsync or Unison for the syncing portion.
>
>
>
>
>-- 
>Adrian Klaver
>adrian.kla...@aklaver.com



Re: [GENERAL] storing postgres data on dropbox

2017-06-18 Thread Bruno Wolff III

On Sun, Jun 18, 2017 at 13:16:16 +,
 Martin Mueller  wrote:

Why not a PostgreSQL-database somewhere in the cloud? Good question, but it's a question 
of money and performance. I used MySQL for many years and then moved a dataset to an 
instance on AWS. The performance was horribly slow. Then some kind soul at my institution 
hooked me up with "Aurora," which I take to be MySQL on steroids. That was 
great, and the performance was almost as good as on my desktopc. But it cost hundreds of 
dollars per month. I work at home with a machine that has 32 GB of memory. In order to 
get comparable performance from a cloud-based Postgres instance, I'd have to spend a lot 
of money that I don't have. Dropbox costs $120 a year for a terabyte of storage, which is 
very affordable.


You aren't going to be able to use copies of the raw files taken while the 
database is running, to restore the database. Storing compressed output 
from pg_dumpall is probably the best way to create backups you can restore 
from.





Re: [GENERAL] storing postgres data on dropbox

2017-06-18 Thread Adrian Klaver

On 06/18/2017 06:16 AM, Martin Mueller wrote:

Why not a PostgreSQL-database somewhere in the cloud? Good question, but it's a question 
of money and performance. I used MySQL for many years and then moved a dataset to an 
instance on AWS. The performance was horribly slow. Then some kind soul at my institution 
hooked me up with "Aurora," which I take to be MySQL on steroids. That was 
great, and the performance was almost as good as on my desktopc. But it cost hundreds of 
dollars per month. I work at home with a machine that has 32 GB of memory. In order to 
get comparable performance from a cloud-based Postgres instance, I'd have to spend a lot 
of money that I don't have. Dropbox costs $120 a year for a terabyte of storage, which is 
very affordable.


If it were me I would pick up a 1TB external hard drive and then:

1) On your Mac (Location 1) stop Postgres and then back up/sync your base 
directory to the external hard drive.


2) Take the external hard drive to Location 2.

3) Stop Postgres at Location 2 and then sync to base directory there.

4) Start Postgres at location 2.

5) Repeat for going other direction.

It is similar to using Dropbox, with the difference being you do not 
have Dropbox trying to sync while you are using the database. That I am 
pretty sure will not end well. The above does depend on familiarity with 
programs like rsync or Unison for the syncing portion.
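
A minimal sketch of steps 1 and 3 using rsync, with placeholder paths (neither
$PGDATA nor the mount point comes from this thread), and with Postgres stopped
before copying in either direction:

    # Location 1: stop Postgres, then copy the data directory to the external drive
    pg_ctl -D "$PGDATA" stop
    rsync -a --delete "$PGDATA/" /Volumes/pgsync/data/

    # Location 2: stop Postgres, copy from the drive into the data directory, then start
    pg_ctl -D "$PGDATA" stop
    rsync -a --delete /Volumes/pgsync/data/ "$PGDATA/"
    pg_ctl -D "$PGDATA" start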





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] storing postgres data on dropbox

2017-06-18 Thread Martin Mueller
Why not a PostgreSQL-database somewhere in the cloud? Good question, but it's a 
question of money and performance. I used MySQL for many years and then moved a 
dataset to an instance on AWS. The performance was horribly slow. Then some 
kind soul at my institution hooked me up with "Aurora," which I take to be 
MySQL on steroids. That was great, and the performance was almost as good as on 
my desktop. But it cost hundreds of dollars per month. I work at home with a 
machine that has 32 GB of memory. In order to get comparable performance from a 
cloud-based Postgres instance, I'd have to spend a lot of money that I don't 
have. Dropbox costs $120 a year for a terabyte of storage, which is very 
affordable. 




On 6/18/17, 2:43 AM, "pgsql-general-ow...@postgresql.org on behalf of Andreas 
Kretschmer"  wrote:

>
>
>Am 18.06.2017 um 03:03 schrieb Martin Mueller:
>> This is a queestion from a Postgresql novice.
>>
>> I use Postgresql in a single-user environment on a Mac with OS Sierra. 
>> I use AquaFold DataStudio as a client, which is nice but also keeps me 
>> woefully ignorant about many aspects of the underlying application.
>>
>> As I understand it, Postgres data are stored in my homedirectory 
>> /users/martin/Library ApplicationSupport/Postgres/var9.5/base/.  I 
>> have read things on the Web about  backing up data to Dropbox (I have 
>> 80GB of data) .  But that means that my data sit first in the base 
>> directory, then in the Dropbox directory from which they are backed up 
>> to the Dropbox cloud.  So my 80 GB of data take up 160GB on my 
>> machine. Is there a way of installing the base directory directly in 
>> the Dropbox directory so that the data are stored only once on my machine?
>
>You can store Backups there (dumps), but i would strongly  advise 
>against to store the db there. PostgreSQL relys on the fsync, that's not 
>possible in this case. I'm sure, a total data disaster would not be a 
>question, only when it would happen.
>
>
>Why not a PostgreSQL-database somewhere in the cloud?
>
>
>Regards, Andreas
>
>-- 
>2ndQuadrant - The PostgreSQL Support Company.
>www.2ndQuadrant.com
>
>
>



Re: [GENERAL] storing postgres data on dropbox

2017-06-18 Thread Andreas Kretschmer



Am 18.06.2017 um 03:03 schrieb Martin Mueller:

This is a queestion from a Postgresql novice.

I use Postgresql in a single-user environment on a Mac with OS Sierra. 
I use AquaFold DataStudio as a client, which is nice but also keeps me 
woefully ignorant about many aspects of the underlying application.


As I understand it, Postgres data are stored in my homedirectory 
/users/martin/Library ApplicationSupport/Postgres/var9.5/base/.  I 
have read things on the Web about  backing up data to Dropbox (I have 
80GB of data) .  But that means that my data sit first in the base 
directory, then in the Dropbox directory from which they are backed up 
to the Dropbox cloud.  So my 80 GB of data take up 160GB on my 
machine. Is there a way of installing the base directory directly in 
the Dropbox directory so that the data are stored only once on my machine?


You can store backups there (dumps), but I would strongly advise 
against storing the db there. PostgreSQL relies on fsync, and that's not 
possible in this case. I'm sure a total data disaster would not be a question 
of whether, only of when it would happen.



Why not a PostgreSQL-database somewhere in the cloud?


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





[GENERAL] storing postgres data on dropbox

2017-06-17 Thread Martin Mueller
This is a question from a PostgreSQL novice.

I use Postgresql in a single-user environment on a Mac with OS Sierra. I use 
AquaFold DataStudio as a client, which is nice but also keeps me woefully 
ignorant about many aspects of the underlying application.

As I understand it, Postgres data are stored in my home directory 
/users/martin/Library ApplicationSupport/Postgres/var9.5/base/.  I have read 
things on the Web about backing up data to Dropbox (I have 80GB of data). 
But that means that my data sit first in the base 
directory, then in the Dropbox directory from which they are backed up 
to the Dropbox cloud.  So my 80 GB of data take up 160GB on my machine. Is there 
a way of installing the base directory directly in the Dropbox directory so that 
the data are stored only once on my machine?

There is a more radical version of that question. I spend a lot of time in a 
second home, where I have a Mac Pro that runs the same OS. Would it be possible 
to move the entire Postgres environment into Dropbox? If that were possible, I 
could use the same Postgres installation in Location 1 and Location 2.

I assume there are 'gotchas' in that scenario, but it seems worth asking. On 
the other hand, if the primary location of the base directory were inside the 
Dropbox directory, the file path from the application to the data would be 
identical on the two machines.  I am the only user of it, and when I go to 
Location 2 I would make sure that all Dropbox files have properly synced 
before firing up Postgres. In theory it should work, but I am fond of saying 
that the difference between theory and practice is usually greater in practice 
than in theory.

With thanks in advance for any advice

Martin Mueller
Professor emeritus of English and Classics
Northwestern University


Re: [GENERAL] Locks Postgres

2017-02-10 Thread Jeff Janes
On Thu, Feb 9, 2017 at 9:00 PM, Patrick B  wrote:

> Hi guys
>
> I just wanna understand the locks in a DB server:
> [image: Inline image 1]
>
> Access share = Does that mean queries were waiting because an
> update/delete/insert was happening?
>



It would seem more plausible that your chart is showing the locks that are
*held*, not the locks that are *waiting to be granted*.  But without
knowing where the chart came from, we can't know for sure.

If those are locks being held, it just means your server was kind of busy
(which you already knew).  But we don't know how busy.  A single complex
query can easily hold several dozen locks.

Cheers,

Jeff


Re: [GENERAL] Locks Postgres

2017-02-09 Thread Patrick B
2017-02-10 18:18 GMT+13:00 John R Pierce :

> On 2/9/2017 9:16 PM, John R Pierce wrote:
>
>> that spike in your graph suggests you had 8000 concurrent SELECT
>> operations...
>>
>
> errr, 7000, still way too many.
>

Thanks a lot, John!! Got it.

Patrick


Re: [GENERAL] Locks Postgres

2017-02-09 Thread John R Pierce

On 2/9/2017 9:16 PM, John R Pierce wrote:
that spike in your graph suggests you had 8000 concurrent SELECT 
operations...


errr, 7000, still way too many.


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Locks Postgres

2017-02-09 Thread John R Pierce

On 2/9/2017 9:00 PM, Patrick B wrote:


Access share = Does that mean queries were waiting because an 
update/delete/insert was happening?




ACCESS SHARE is taken by a SELECT, and all it blocks is an ACCESS 
EXCLUSIVE lock, which is taken by operations like ALTER TABLE, VACUUM 
FULL, and such global table operations. That spike in your graph 
suggests you had 8000 concurrent SELECT operations going on, which is 
likely way more than you have compute and IO resources to handle 
efficiently.
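
A minimal sketch of one way to see which lock modes are currently held or
waiting (just an illustration; it is not the source of the chart in the
original post):

    SELECT mode, granted, count(*)
    FROM pg_locks
    GROUP BY mode, granted
    ORDER BY count(*) DESC;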



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Locks Postgres

2017-02-09 Thread Adrian Klaver

On 02/09/2017 09:00 PM, Patrick B wrote:

Hi guys

I just wanna understand the locks in a DB server:
[image: Inline image 1]

Access share = Does that mean queries were waiting because an
update/delete/insert was happening?


https://www.postgresql.org/docs/9.3/static/explicit-locking.html



I'm asking because I got a very big spike with > 30 seconds web response
time.
Running PG 9.3

Thanks!
Patrick



--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Locks Postgres

2017-02-09 Thread Patrick B
Hi guys

I just wanna understand the locks in a DB server:
[image: Inline image 1]

Access share = Does that mean queries were waiting because an
update/delete/insert was happening?

I'm asking because I got a very big spike with > 30 seconds web response
time.
Running PG 9.3

Thanks!
Patrick


Re: [GENERAL] R: Postgres 9.6.1 big slowdown by upgrading 8.4.22

2017-01-08 Thread Thomas Kellerer

FWIW you still haven't explained how the upgrade was performed.
That might be a very important piece of information, because the
9.4 cluster might have hint bits set and/or the data may be
mostly frozen, but the 9.6 cluster may not have that yet,
resulting in higher CPU usage.


We upgraded the database with a pg_dumpall from 8.4.22 and then a psql < 
BACKUP.SAV command!


Not to miss the obvious: did you run ANALYZE after importing the backup?
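
For reference, a minimal sketch of what that would look like after a
dump-and-restore upgrade (running "vacuumdb --all --analyze-only" from the
shell is an equivalent per-cluster alternative):

    -- run in each restored database after loading the dump
    ANALYZE;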








Re: [GENERAL] does postgres log the create/refresh of a materialized view anywhere?

2016-12-13 Thread Michael Paquier
On Wed, Dec 14, 2016 at 11:19 AM, Melvin Davidson  wrote:
> Originally, all I wanted was a column to record the creation date/time of an 
> object. One reason it was debunked was that it would lead
> to a request for an additional column to record changes in objects. I 
> maintain that both can be done, but others disagree,

Event triggers could be used to track the creation or drop timestamp
of objects. Even if it is not supported for REFRESH, it may make sense
to support it in the firing matrix.
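
As an illustration of that idea, a minimal sketch of an event trigger that
records DDL timestamps on 9.5 or later (the table, function and trigger names
are made up for the example, and per the note above a REFRESH MATERIALIZED
VIEW would not be captured):

    CREATE TABLE ddl_history (
        command_tag     text,
        object_type     text,
        object_identity text,
        created_at      timestamptz DEFAULT now()
    );

    CREATE FUNCTION log_ddl() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    DECLARE
        r record;
    BEGIN
        FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
            INSERT INTO ddl_history (command_tag, object_type, object_identity)
            VALUES (r.command_tag, r.object_type, r.object_identity);
        END LOOP;
    END;
    $$;

    CREATE EVENT TRIGGER track_ddl ON ddl_command_end
        EXECUTE PROCEDURE log_ddl();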
-- 
Michael




Re: [GENERAL] does postgres log the create/refresh of a materialized view anywhere?

2016-12-13 Thread Melvin Davidson
On Tue, Dec 13, 2016 at 8:50 PM, Kevin Grittner  wrote:

> On Tue, Dec 13, 2016 at 7:37 PM, Melvin Davidson 
> wrote:
> > On Tue, Dec 13, 2016 at 7:36 PM, Kevin Grittner 
> wrote:
> >> On Tue, Dec 13, 2016 at 5:57 PM, Jonathan Vanasco 
> wrote:
> >>
> >>> Is there a way to find out when a materialized view was
> >>> created/refreshed?
> >>
> >>> I can log this manually in postgresql if needed, but was hoping
> >>> there was some "timestamp" on the view in a system table.
> >>
> >> This is not currently tracked in the system catalogs.
>
> > This goes back to a discussion of my request to add relcreated
> > column to pg_class.
> https://www.postgresql.org/message-id/CANu8FiyiRPGZ+gB=1jayryx3hxcuqnflsfgdzfxsvlmj-jt...@mail.gmail.com
> > Apparently the naysayers do not feel it is worthwhile.
>
> Do you see relcreated as being something to set anew whenever the
> data contents of a materialized view change due to the REFRESH
> command?  I wouldn't have thought so, but I guess the problem with
> that proposal is that everyone has a different idea of what the
> semantics of the column would be.  Suggesting that field as the
> solution here seems to reinforce that perception, anyway.
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>

Originally, all I wanted was a column to record the creation date/time of
an object. One reason it was debunked was that it would lead
to a request for an additional column to record changes in objects. I
maintain that both can be done, but others disagree.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] does postgres log the create/refresh of a materialized view anywhere?

2016-12-13 Thread Kevin Grittner
On Tue, Dec 13, 2016 at 7:37 PM, Melvin Davidson  wrote:
> On Tue, Dec 13, 2016 at 7:36 PM, Kevin Grittner  wrote:
>> On Tue, Dec 13, 2016 at 5:57 PM, Jonathan Vanasco  wrote:
>>
>>> Is there a way to find out when a materialized view was
>>> created/refreshed?
>>
>>> I can log this manually in postgresql if needed, but was hoping
>>> there was some "timestamp" on the view in a system table.
>>
>> This is not currently tracked in the system catalogs.

> This goes back to a discussion of my request to add relcreated
> column to pg_class.
> https://www.postgresql.org/message-id/CANu8FiyiRPGZ+gB=1jayryx3hxcuqnflsfgdzfxsvlmj-jt...@mail.gmail.com
> Apparently the naysayers do not feel it is worthwhile.

Do you see relcreated as being something to set anew whenever the
data contents of a materialized view change due to the REFRESH
command?  I wouldn't have thought so, but I guess the problem with
that proposal is that everyone has a different idea of what the
semantics of the column would be.  Suggesting that field as the
solution here seems to reinforce that perception, anyway.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] does postgres log the create/refresh of a materialized view anywhere?

2016-12-13 Thread Melvin Davidson
On Tue, Dec 13, 2016 at 7:36 PM, Kevin Grittner  wrote:

> On Tue, Dec 13, 2016 at 5:57 PM, Jonathan Vanasco 
> wrote:
>
> > Is there a way to find out when a materialized view was
> > created/refreshed?
>
> > I can log this manually in postgresql if needed, but was hoping
> > there was some "timestamp" on the view in a system table.
>
> This is not currently tracked in the system catalogs.
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
>
>

Kevin,
This goes back to a discussion of my request to add relcreated column to
pg_class.
https://www.postgresql.org/message-id/CANu8FiyiRPGZ+gB=1jayryx3hxcuqnflsfgdzfxsvlmj-jt...@mail.gmail.com
Apparently the naysayers do not feel it is worthwhile.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] does postgres log the create/refresh of a materialized view anywhere?

2016-12-13 Thread Kevin Grittner
On Tue, Dec 13, 2016 at 5:57 PM, Jonathan Vanasco  wrote:

> Is there a way to find out when a materialized view was
> created/refreshed?

> I can log this manually in postgresql if needed, but was hoping
> there was some "timestamp" on the view in a system table.

This is not currently tracked in the system catalogs.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[GENERAL] does postgres log the create/refresh of a materialized view anywhere?

2016-12-13 Thread Jonathan Vanasco
Is there a way to find out when a materialized view was created/refreshed?  I 
couldn't find this information anywhere in the docs.

The use case is that I wish to refresh a materialized view a few times a day in 
a clustered environment. I'd like to make sure one of the redundant nodes 
doesn't refresh when it isn't needed. I can log this manually in PostgreSQL if needed, 
but was hoping there was some "timestamp" on the view in a system table.
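
A minimal sketch of that manual logging (the matview and table names are
illustrative, and ON CONFLICT needs PostgreSQL 9.5 or later):

    CREATE TABLE matview_refresh_log (
        view_name    text PRIMARY KEY,
        refreshed_at timestamptz
    );

    REFRESH MATERIALIZED VIEW my_matview;
    INSERT INTO matview_refresh_log (view_name, refreshed_at)
    VALUES ('my_matview', now())
    ON CONFLICT (view_name) DO UPDATE SET refreshed_at = now();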



Re: [GENERAL] Checking Postgres Streaming replication delay

2016-11-02 Thread Jim Nasby

On 10/31/16 3:39 PM, Patrick B wrote:

(
extract(epoch FROM now()) -
extract(epoch FROM pg_last_xact_replay_timestamp())
)::int lag


You could certainly simplify it though...

extract(epoch FROM now()-pg_last_xact_replay_timestamp())
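
Putting that together with the query from the original post, a minimal sketch
(function names as they exist on 9.2; PostgreSQL 10 renames them to
pg_last_wal_receive_lsn() and pg_last_wal_replay_lsn()):

    SELECT
        pg_last_xlog_receive_location() AS receive,
        pg_last_xlog_replay_location()  AS replay,
        extract(epoch FROM now() - pg_last_xact_replay_timestamp())::int AS lag;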
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] Checking Postgres Streaming replication delay

2016-10-31 Thread Patrick B
2016-10-31 15:54 GMT+13:00 Venkata B Nagothi :

>
> On Mon, Oct 31, 2016 at 11:57 AM, Patrick B 
> wrote:
>
>> Hi guys,
>>
>> I'm using this query to measure the delay between a Master and a
>> Streaming Replication Slave server, using PostgreSQL 9.2.
>>
>> SELECT
>>> pg_last_xlog_receive_location() receive,
>>> pg_last_xlog_replay_location() replay,
>>> (
>>> extract(epoch FROM now()) -
>>> extract(epoch FROM pg_last_xact_replay_timestamp())
>>> )::int lag;
>>
>>
>> In your opinion, is that right?
>>
>> Yes, thats right.
>
> Regards,
>
> Venkata B N
> Database Consultant
>
>

Thanks


Re: [GENERAL] Checking Postgres Streaming replication delay

2016-10-30 Thread Venkata B Nagothi
On Mon, Oct 31, 2016 at 11:57 AM, Patrick B 
wrote:

> Hi guys,
>
> I'm using this query to measure the delay between a Master and a Streaming
> Replication Slave server, using PostgreSQL 9.2.
>
> SELECT
>> pg_last_xlog_receive_location() receive,
>> pg_last_xlog_replay_location() replay,
>> (
>> extract(epoch FROM now()) -
>> extract(epoch FROM pg_last_xact_replay_timestamp())
>> )::int lag;
>
>
> In your opinion, is that right?
>
> Yes, that's right.

Regards,

Venkata B N
Database Consultant


[GENERAL] Checking Postgres Streaming replication delay

2016-10-30 Thread Patrick B
Hi guys,

I'm using this query to measure the delay between a Master and a Streaming
Replication Slave server, using PostgreSQL 9.2.

SELECT
> pg_last_xlog_receive_location() receive,
> pg_last_xlog_replay_location() replay,
> (
> extract(epoch FROM now()) -
> extract(epoch FROM pg_last_xact_replay_timestamp())
> )::int lag;


In your opinion, is that right?


Cheers

Patrick


Re: [GENERAL] Understanding Postgres Memory Usage

2016-09-08 Thread Theron Luhn
I've done the upgrade to 9.5.  Memory bloat has reduced to a more
manageable level.  Most workers have an overhead of <20MB, with one outlier
consuming 60MB.


— Theron

On Fri, Aug 26, 2016 at 5:41 AM, Tom Lane  wrote:

> Theron Luhn  writes:
> > Okay, I got a semi-reproducible test case:
> > https://gist.github.com/luhn/2b35a9b31255e3a6a2e6a06d1213dfc9
>
> > The one caveat is that the memory rise only happens when using a
> > HashAggregate query plan (included in the gist), which I can't find a way
> > to get Postgres to reliably use.
>
> OK, I can reproduce some memory bloat in 9.3, but not in 9.5 and up.
> I believe this was fixed by commit b419865a8, which reduced the overhead
> of running a lot of instances of array_agg() concurrently in a HashAgg
> plan.  I think your options are to live with it or upgrade.  Or I guess
> you could turn off enable_hashagg when using array_agg() plus GROUP BY,
> though you'd want to remember to undo that whenever you do upgrade.
>
> regards, tom lane
>


Re: [GENERAL] Understanding Postgres Memory Usage

2016-08-26 Thread Tom Lane
Theron Luhn  writes:
> Okay, I got a semi-reproducible test case:
> https://gist.github.com/luhn/2b35a9b31255e3a6a2e6a06d1213dfc9

> The one caveat is that the memory rise only happens when using a
> HashAggregate query plan (included in the gist), which I can't find a way
> to get Postgres to reliably use.

OK, I can reproduce some memory bloat in 9.3, but not in 9.5 and up.
I believe this was fixed by commit b419865a8, which reduced the overhead
of running a lot of instances of array_agg() concurrently in a HashAgg
plan.  I think your options are to live with it or upgrade.  Or I guess
you could turn off enable_hashagg when using array_agg() plus GROUP BY,
though you'd want to remember to undo that whenever you do upgrade.

regards, tom lane




Re: [GENERAL] Understanding Postgres Memory Usage

2016-08-25 Thread Theron Luhn
Okay, I got a semi-reproducible test case:
https://gist.github.com/luhn/2b35a9b31255e3a6a2e6a06d1213dfc9

The one caveat is that the memory rise only happens when using a
HashAggregate query plan (included in the gist), which I can't find a way
to get Postgres to reliably use.

If you need it, I could probably find another test case.



— Theron

On Thu, Aug 25, 2016 at 5:27 PM, Tom Lane  wrote:

> Theron Luhn  writes:
> > Okay, here's the output:
> > https://gist.github.com/luhn/a39db625ba5eed90946dd4a196d12220
>
> Hm, well the only thing there that looks even slightly out of the
> ordinary is the amount of free space in TopMemoryContext itself:
>
> TopMemoryContext: 3525712 total in 432 blocks; 3444272 free (12654
> chunks); 81440 used
>
> Normally, TopMemoryContext doesn't get to more than a few hundred K,
> and in the cases I've seen where it does, it's usually been because of
> leaky coding that was allocating stuff there and never cleaning it up.
> But you've got no more than the typical amount of space still allocated
> there, which seems to kill the "leak in TopMemoryContext" theory.
> And in any case there is nowhere near 100MB accounted for by the whole
> dump.
>
> Are you using any other PLs besides plpgsql?  We've seen cases where
> bloat occurred within plpython or plperl, and wasn't visible in this
> dump because those languages don't use PG's memory management code.
> Or maybe some nonstandard extension?
>
> If not that, then I'd have to speculate that the query you're running is
> triggering some bug or otherwise pathological behavior.  Can you put
> together a self-contained test case?
>
> regards, tom lane
>


Re: [GENERAL] Understanding Postgres Memory Usage

2016-08-25 Thread Tom Lane
Theron Luhn  writes:
> Okay, here's the output:
> https://gist.github.com/luhn/a39db625ba5eed90946dd4a196d12220

Hm, well the only thing there that looks even slightly out of the
ordinary is the amount of free space in TopMemoryContext itself:

TopMemoryContext: 3525712 total in 432 blocks; 3444272 free (12654 chunks); 
81440 used

Normally, TopMemoryContext doesn't get to more than a few hundred K,
and in the cases I've seen where it does, it's usually been because of
leaky coding that was allocating stuff there and never cleaning it up.
But you've got no more than the typical amount of space still allocated
there, which seems to kill the "leak in TopMemoryContext" theory.
And in any case there is nowhere near 100MB accounted for by the whole
dump.

Are you using any other PLs besides plpgsql?  We've seen cases where
bloat occurred within plpython or plperl, and wasn't visible in this
dump because those languages don't use PG's memory management code.
Or maybe some nonstandard extension?

If not that, then I'd have to speculate that the query you're running is
triggering some bug or otherwise pathological behavior.  Can you put
together a self-contained test case?

regards, tom lane




Re: [GENERAL] Understanding Postgres Memory Usage

2016-08-25 Thread John R Pierce

On 8/25/2016 9:58 AM, Theron Luhn wrote:
> I do not remember exact formula, but it should be something like 
“work_mem*max_connections + shared_buffers” and it should be around 
80% of your machine RAM (minus RAM used by other processes and 
kernel).  It will save you from OOM.




a single query can use multiple work_mem's if it's got subqueries, joins, 
etc.


My Postgres is configured with *very* conservative values.  work_mem 
(4MB) * max_connections (100) + shared buffers (512MB) = ~1GB, yet 
Postgres managed to fill up a 4GB server.  I'm seeing workers 
consuming hundreds of MBs of memory (and not releasing any of it until 
the connection closes), despite work_mem being 4MB.


are you doing queries that return large data sets?


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Understanding Postgres Memory Usage

2016-08-25 Thread Theron Luhn
Hi Ilya,

> Are you talking about buffers/cache increased? AFAIK this memory is used
by kernel as buffer before any block device (HDD for example).

If I'm reading the output correctly, buffers/cached do not increase.  I'm
looking at the 248MB -> 312MB under the "used" column in the "-/+
buffers/cache" row.  This number excludes the buffer/cached, so that can't
explain the ~60MB increase.  "Shared" also remains the same (212MB), so the
shared buffers filling can't explain the increase either.

> I do not remember exact formula, but it should be something like
“work_mem*max_connections + shared_buffers” and it should be around 80% of
your machine RAM (minus RAM used by other processes and kernel).  It will
save you from OOM.

My Postgres is configured with *very* conservative values.  work_mem (4MB)
* max_connections (100) + shared buffers (512MB) = ~1GB, yet Postgres
managed to fill up a 4GB server.  I'm seeing workers consuming hundreds of
MBs of memory (and not releasing any of it until the connection closes),
despite work_mem being 4MB.
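
For what it's worth, a minimal sketch for reading those settings from a running
server (just an illustration, not something from this thread):

    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('work_mem', 'max_connections', 'shared_buffers');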


— Theron

On Thu, Aug 25, 2016 at 8:57 AM, Ilya Kazakevich <
ilya.kazakev...@jetbrains.com> wrote:

> $ free -h  # Before the query
>                      total   used   free   shared  buffers  cached
> Mem:                  7.8G   5.2G   2.6G     212M      90M    4.9G
> -/+ buffers/cache:            248M   7.6G
> Swap:                   0B     0B     0B
>
> $ free -h  # After the query
>                      total   used   free   shared  buffers  cached
> Mem:                  7.8G   5.3G   2.5G     212M      90M    4.9G
> -/+ buffers/cache:            312M   7.5G
> Swap:                   0B     0B     0B
>
>
>
> [I.K >> ] Are you talking about buffers/cache increased? AFAIK this memory
> is used by kernel as buffer before any block device (HDD for example).
>
> Postgres does not use this memory directly, it simply reads data from
> block device, and kernel caches it. Process can’t be OOMed because of it.
>
>
>
>
>
> I am sure you should configure your Postgres to NEVER exceed available
> RAM. You may use tools like (http://pgtune.leopard.in.ua/) or calculate
> it manually.
>
> I do not remember exact formula, but it should be something like
> “work_mem*max_connections + shared_buffers” and it should be around 80% of
> your machine RAM (minus RAM used by other processes and kernel).
>
> It will save you from OOM.
>
>
>
> If you  face performance bottleneck after it, you fix it using tools like
> “log_min_duration_statement”, “track_io_timing” and system-provided tools.
>
>
>
>
>
>
>
>
>
> Ilya Kazakevich
>
>
>
> JetBrains
>
> http://www.jetbrains.com
>
> The Drive to Develop
>
>
>
>
>


Re: [GENERAL] Understanding Postgres Memory Usage

2016-08-25 Thread Theron Luhn
Okay, here's the output:
https://gist.github.com/luhn/a39db625ba5eed90946dd4a196d12220


— Theron

On Thu, Aug 25, 2016 at 12:34 PM, Tom Lane  wrote:

> Theron Luhn  writes:
> >> It would be worth using plain old top to watch this process.  We have
> >> enough experience with that to be pretty sure how to interpret its
> >> numbers: "RES minus SHR" is the value to be worried about.
>
> Sure thing.  https://gist.github.com/luhn/e09522d524354d96d297b153d1479c13#file-top-txt
>
> > RES - SHR is showing a similar increase to what smem is reporting.
>
> Hm, yeah, and the VIRT column agrees --- so 100MB of non-shared
> memory went somewhere.  Seems like a lot.
>
> If you have debug symbols installed for this build, you could try
> doing
>
> gdb /path/to/postgres processID
> gdb> call MemoryContextStats(TopMemoryContext)
> gdb> quit
>
> (when the process has reached an idle but bloated state) and seeing what
> gets printed to the process's stderr.  (You need to have launched the
> postmaster with its stderr directed to a file, not to /dev/null.)
> That would provide a better clue about what's eating space.
>
> regards, tom lane
>


Re: [GENERAL] Understanding Postgres Memory Usage

2016-08-25 Thread Tom Lane
Theron Luhn  writes:
>> It would be worth using plain old top to watch this process.  We have
>> enough experience with that to be pretty sure how to interpret its
>> numbers: "RES minus SHR" is the value to be worried about.

> Sure thing.  
> https://gist.github.com/luhn/e09522d524354d96d297b153d1479c13#file-top-txt

> RES - SHR is showing a similar increase to what smem is reporting.

Hm, yeah, and the VIRT column agrees --- so 100MB of non-shared
memory went somewhere.  Seems like a lot.

If you have debug symbols installed for this build, you could try
doing

gdb /path/to/postgres processID
gdb> call MemoryContextStats(TopMemoryContext)
gdb> quit

(when the process has reached an idle but bloated state) and seeing what
gets printed to the process's stderr.  (You need to have launched the
postmaster with its stderr directed to a file, not to /dev/null.)
That would provide a better clue about what's eating space.

regards, tom lane




Re: [GENERAL] Understanding Postgres Memory Usage

2016-08-25 Thread Theron Luhn
> It would be worth using plain old top to watch this process.  We have
> enough experience with that to be pretty sure how to interpret its
> numbers: "RES minus SHR" is the value to be worried about.

Sure thing.  https://gist.github.com/luhn/e09522d524354d96d297b153d1479c13#file-top-txt

RES - SHR is showing a similar increase to what smem is reporting.

— Theron

On Thu, Aug 25, 2016 at 11:25 AM, Tom Lane  wrote:

> Theron Luhn  writes:
> >> If it's not an outright leak, it's probably consumption of cache space.
> >> We cache stuff that we've read from system catalogs, so sessions that
> >> touch lots of tables (like thousands) can grow due to that.  Another
> >> possible source of large cache consumption is calling lots-and-lots of
> >> plpgsql functions.
>
> > I have a reasonable number of tables (around 50) and very few plpgsql
> > functions.
>
> Doesn't sound like a lot ...
>
>
> >> If the same query, repeated over and over, causes memory to continue
> >> to grow, I'd call it a leak (ie bug).  If repeat executions consume
> >> no additional memory then it's probably intentional caching behavior.
>
> > Here's the results of that:
> > https://gist.github.com/luhn/e09522d524354d96d297b153d1479c13
>
> > So kind of a combination of the two:  Memory usage increases up to a
> > certain point but then plateaus.  So... cache?  It's ~100MB increase,
> > though, which seems an excessive amount.  What could be taking up that
> much
> > cache?
>
> Hmm.  I find it mighty suspicious that the USS, PSS, and RSS numbers are
> all increasing to pretty much the same tune, ie from very little to circa
> 100MB.  I think there is a decent chance that smem is not doing what it
> says on the tin, and in fact is including shared memory consumption in
> "USS".  In which case the apparent leak just corresponds to the process
> gradually touching more and more of the shared buffer arena.  (If your
> shared_buffers settings is not somewhere near 100MB, then this theory
> breaks down.)
>
> It would be worth using plain old top to watch this process.  We have
> enough experience with that to be pretty sure how to interpret its
> numbers: "RES minus SHR" is the value to be worried about.
>
> regards, tom lane
>


Re: [GENERAL] Understanding Postgres Memory Usage

2016-08-25 Thread Kevin Grittner
On Thu, Aug 25, 2016 at 1:25 PM, Tom Lane  wrote:

> Hmm.  I find it mighty suspicious that the USS, PSS, and RSS numbers are
> all increasing to pretty much the same tune, ie from very little to circa
> 100MB.  I think there is a decent chance that smem is not doing what it
> says on the tin, and in fact is including shared memory consumption in
> "USS".  In which case the apparent leak just corresponds to the process
> gradually touching more and more of the shared buffer arena.  (If your
> shared_buffers settings is not somewhere near 100MB, then this theory
> breaks down.)

I can't speak to every implementation of smem, but I have used it
quite a bit under SLES and Ubuntu, and it always seemed to do what
it says -- USS is unshared (process-local) memory and PSS is that
plus the process's portion of shared memory.  (The sum of
differences between PSS and USS == total shared memory.)  RSS has
the usual meaning.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] Understanding Postgres Memory Usage

2016-08-25 Thread Tom Lane
Theron Luhn  writes:
>> If it's not an outright leak, it's probably consumption of cache space.
>> We cache stuff that we've read from system catalogs, so sessions that
>> touch lots of tables (like thousands) can grow due to that.  Another
>> possible source of large cache consumption is calling lots-and-lots of
>> plpgsql functions.

> I have a reasonable number of tables (around 50) and very few plpgsql
> functions.

Doesn't sound like a lot ...


>> If the same query, repeated over and over, causes memory to continue
>> to grow, I'd call it a leak (ie bug).  If repeat executions consume
>> no additional memory then it's probably intentional caching behavior.

> Here's the results of that:
> https://gist.github.com/luhn/e09522d524354d96d297b153d1479c13

> So kind of a combination of the two:  Memory usage increases up to a
> certain point but then plateaus.  So... cache?  It's ~100MB increase,
> though, which seems an excessive amount.  What could be taking up that much
> cache?

Hmm.  I find it mighty suspicious that the USS, PSS, and RSS numbers are
all increasing to pretty much the same tune, ie from very little to circa
100MB.  I think there is a decent chance that smem is not doing what it
says on the tin, and in fact is including shared memory consumption in
"USS".  In which case the apparent leak just corresponds to the process
gradually touching more and more of the shared buffer arena.  (If your
shared_buffers settings is not somewhere near 100MB, then this theory
breaks down.)

It would be worth using plain old top to watch this process.  We have
enough experience with that to be pretty sure how to interpret its
numbers: "RES minus SHR" is the value to be worried about.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Understanding Postgres Memory Usage

2016-08-25 Thread Theron Luhn
> 9.3.which?  We do fix memory leaks from time to time ...

9.3.14

> If it's not an outright leak, it's probably consumption of cache space.
> We cache stuff that we've read from system catalogs, so sessions that
> touch lots of tables (like thousands) can grow due to that.  Another
> possible source of large cache consumption is calling lots-and-lots of
> plpgsql functions.

I have a reasonable number of tables (around 50) and very few plpgsql
functions.

> If the same query, repeated over and over, causes memory to continue
> to grow, I'd call it a leak (ie bug).  If repeat executions consume
> no additional memory then it's probably intentional caching behavior.

Here's the results of that:
https://gist.github.com/luhn/e09522d524354d96d297b153d1479c13

So kind of a combination of the two:  Memory usage increases up to a
certain point but then plateaus.  So... cache?  It's ~100MB increase,
though, which seems an excessive amount.  What could be taking up that much
cache?


— Theron

On Thu, Aug 25, 2016 at 9:27 AM, Tom Lane  wrote:

> Theron Luhn  writes:
> > I have an application that uses Postgres 9.3 as the primary datastore.
>
> 9.3.which?  We do fix memory leaks from time to time ...
>
> > Some of these queries use quite a bit of memory.  I've observed a
> > "high-water mark" behavior in memory usage:  running a query increases
> the
> > worker memory by many MBs (beyond shared buffers), but the memory is not
> > released until the connection is closed.
>
> Hm.  I'm not familiar with smem, but assuming that that USS column
> really is process-private space, that definitely looks bad.
>
> If it's not an outright leak, it's probably consumption of cache space.
> We cache stuff that we've read from system catalogs, so sessions that
> touch lots of tables (like thousands) can grow due to that.  Another
> possible source of large cache consumption is calling lots-and-lots of
> plpgsql functions.
>
> If the same query, repeated over and over, causes memory to continue
> to grow, I'd call it a leak (ie bug).  If repeat executions consume
> no additional memory then it's probably intentional caching behavior.
>
> regards, tom lane
>


Re: [GENERAL] Understanding Postgres Memory Usage

2016-08-25 Thread Tom Lane
Theron Luhn  writes:
> I have an application that uses Postgres 9.3 as the primary datastore.

9.3.which?  We do fix memory leaks from time to time ...

> Some of these queries use quite a bit of memory.  I've observed a
> "high-water mark" behavior in memory usage:  running a query increases the
> worker memory by many MBs (beyond shared buffers), but the memory is not
> released until the connection is closed.

Hm.  I'm not familiar with smem, but assuming that that USS column
really is process-private space, that definitely looks bad.

If it's not an outright leak, it's probably consumption of cache space.
We cache stuff that we've read from system catalogs, so sessions that
touch lots of tables (like thousands) can grow due to that.  Another
possible source of large cache consumption is calling lots-and-lots of
plpgsql functions.

If the same query, repeated over and over, causes memory to continue
to grow, I'd call it a leak (ie bug).  If repeat executions consume
no additional memory then it's probably intentional caching behavior.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Understanding Postgres Memory Usage

2016-08-25 Thread Ilya Kazakevich
$ free -h  # Before the query
             total       used       free     shared    buffers     cached
Mem:          7.8G       5.2G       2.6G       212M        90M       4.9G
-/+ buffers/cache:        248M       7.6G
Swap:           0B         0B         0B

$ free -h  # After the query
             total       used       free     shared    buffers     cached
Mem:          7.8G       5.3G       2.5G       212M        90M       4.9G
-/+ buffers/cache:        312M       7.5G
Swap:           0B         0B         0B

 

[I.K >> ] Are you talking about the buffers/cache increase? AFAIK this memory is 
used by the kernel as a buffer in front of any block device (an HDD, for example). 

Postgres does not use this memory directly; it simply reads data from the block 
device and the kernel caches it. A process can’t be OOMed because of it.

 

 

I am sure you should configure your Postgres to NEVER exceed available RAM. You 
may use tools like pgtune (http://pgtune.leopard.in.ua/) or calculate it manually.

I do not remember the exact formula, but it should be something like 
“work_mem * max_connections + shared_buffers”, and it should be around 80% of your 
machine’s RAM (minus the RAM used by other processes and the kernel).

It will save you from OOM.
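
For reference, a rough sketch of that estimate pulled from pg_settings (it assumes
the formula above, and that work_mem is reported in kB and shared_buffers in
blocks, which is the default build):

-- Rough, worst-case sketch only: work_mem * max_connections + shared_buffers
SELECT pg_size_pretty(
         (SELECT setting::bigint * 1024 FROM pg_settings WHERE name = 'work_mem')
       * (SELECT setting::bigint        FROM pg_settings WHERE name = 'max_connections')
       + (SELECT setting::bigint        FROM pg_settings WHERE name = 'shared_buffers')
       * (SELECT setting::bigint        FROM pg_settings WHERE name = 'block_size')
       ) AS rough_worst_case_ram;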

 

If you face a performance bottleneck after that, you can track it down using tools like 
“log_min_duration_statement”, “track_io_timing” and system-provided tools.

 

 

 

 

Ilya Kazakevich

 

JetBrains

  http://www.jetbrains.com

The Drive to Develop

 

 



[GENERAL] Understanding Postgres Memory Usage

2016-08-25 Thread Theron Luhn
I have an application that uses Postgres 9.3 as the primary datastore.
Like any real-life application, it's not all roses—There are many ugly,
convoluted, and inefficient queries.

Some of these queries use quite a bit of memory.  I've observed a
"high-water mark" behavior in memory usage:  running a query increases the
worker memory by many MBs (beyond shared buffers), but the memory is not
released until the connection is closed.  For example, here's the memory
usage on my test server when running a query once and leaving the
connection open.

$ free -h  # Before the query
             total       used       free     shared    buffers     cached
Mem:          7.8G       5.2G       2.6G       212M        90M       4.9G
-/+ buffers/cache:        248M       7.6G
Swap:           0B         0B         0B
$ free -h  # After the query
             total       used       free     shared    buffers     cached
Mem:          7.8G       5.3G       2.5G       212M        90M       4.9G
-/+ buffers/cache:        312M       7.5G
Swap:           0B         0B         0B
$ sudo -u postgres smem -trs uss
  PID User     Command                      Swap      USS      PSS      RSS
 8263 postgres postgres: postgres souschef     0   200204   203977   209540
 8133 postgres /usr/lib/postgresql/9.3/bin     0    50456    61090    74596
 8266 postgres /usr/bin/python /usr/bin/sm     0     5840     6261     7460
 8138 postgres postgres: autovacuum launch     0      776     1146     2968
 8139 postgres postgres: stats collector p     0      300      470     1872
 8135 postgres postgres: checkpointer proc     0      148      342     1880
 8137 postgres postgres: wal writer proces     0      140      322     1812
 8136 postgres postgres: writer process        0      132     6814    15140
--------------------------------------------------------------------------
    8 1                                        0   257996   280422   315268

This is proving to be very troublesome on my production server because I
use connection pooling (so connections remain open indefinitely) and the
connection memory seems to rise without end, to the point where 25 open
connections OOM'd a 4GB server.

So I have a couple questions:  Is this high-water mark memory behavior
expected?  If so, how can I identify the queries that are using lots of
memory and driving the high-water mark upwards?

I understand that this post is rather vague, I didn't want to talk your ear
off with specifics in case this was pretty basic, well-understood
behavior.  If necessary, I can follow up with an email diving into the
specifics of what I'm observing.

— Theron


Re: [GENERAL] Unix Postgres 9.5. using pg_basebackup and WAL files. Can't get a PITR recovery

2016-07-20 Thread Jeff Janes
On Fri, Jul 15, 2016 at 10:09 AM, Pruett, Jennis
 wrote:
> I can't cut and paste my issues, so I will explain as best I can.
>
>
>
> I’m testing PITR on a single 9.5 db (no replication setups).
>
>
>
> I have all the settings that I can find, wal_level, archive_command,
>
> restore_command, directories defined, logs defined. My WAL files are stored
> off in
>
> another area with gzip in the archive_command. The restore_command also uses
> gunzip. Those work fine.
>
> (pg_resetxlog seems to work OK too)
>
>
>
> I can restore from a pg_basebackup, but if I define PITR in recovery.conf
> along with the
>
>   pg_basebackup in place, I can't get to that point in time.

What does the server log file say?

You should see something like this:

   3071  2016-07-20 08:02:00.698 PDT LOG:  database system was
interrupted; last known up at 2016-07-14 12:59:39 PDT
   3071  2016-07-20 08:02:04.503 PDT LOG:  starting point-in-time
recovery to 2016-07-19 13:50:00-07
   3071  2016-07-20 08:02:04.872 PDT LOG:  restored log file
"0002.history" from archive

What do you see instead?

Cheers,

Jeff


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Unix Postgres 9.5. using pg_basebackup and WAL files. Can't get a PITR recovery

2016-07-19 Thread Pruett, Jennis
I can't cut and paste my issues, so I will explain as best I can.

I'm testing PITR on a single 9.5 db (no replication setups).

I have all the settings that I can find, wal_level, archive_command,
restore_command, directories defined, logs defined. My WAL files are stored off 
in
another area with gzip in the archive_command. The restore_command also uses 
gunzip. Those work fine.
(pg_resetxlog seems to work OK too)

I can restore from a pg_basebackup, but if I define PITR in recovery.conf along 
with the
  pg_basebackup in place, I can't get to that point in time.
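
For reference, the recovery.conf is along these lines (a minimal sketch only;
the archive path and the target timestamp below are placeholders, not the real
values from this setup):

restore_command = 'gunzip < /path/to/walarchive/%f.gz > %p'   # placeholder path
recovery_target_time = '2016-07-15 09:00:00'                  # placeholder timestamp
recovery_target_inclusive = true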

Does it matter if I use hot_standby or archive? hot_standby should include 
archive functionality and then some.
(I've since tried both, seems to not matter in this case)

I am running on a port other than 5432 for this particular db instance. I have 
more than 1 postgres version running.
One of my next experiments will be to run on the 5432 default.
 The only PG var I define is $PGDATA. I use full paths for all commands.

I've run through the examples in the documentation several times, but it seems 
that the WAL files just
won't get picked up by the defined date/time in recovery.conf.
I'm using cron to create time-named tables in the database (every 10 min) to 
see what times I'm getting back in my recoveries.
They all restore from the base backup, but don't replay the WAL files up to the target time.

This is all in experiment mode for now, so there is no urgency.
 But I have almost exhausted all testing permutations and combinations.

Does anyone have an idea?
 Thanks.


Re: [GENERAL] Converting Postgres SQL constraint logic to PHP?

2016-06-15 Thread Ken Tanzer
On Fri, Jun 10, 2016 at 2:23 PM, Ken Tanzer  wrote:

> On Fri, Jun 10, 2016 at 1:47 PM, Steve Atkins  wrote:
>
>>

> You could look at one of the existing SQL parsers implemented in PHP, and
>> use those to parse the constraint to a tree from which you could easily
>> pull PHP.
>>
>
>> This one sounds most promising!  I took a quick Google, and it looks like
> there are lots of them, and a heavy Mysql focus.  Anyone have experience or
> suggestions about which of these might be best, and particularly for
> Postgres?
>
>
> Hi.  Just wanted to follow up one more time on this.  Best parsers in PHP,
especially for Postgres?  Anyone?  Thanks!

Ken



-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://agency-software.org/demo/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Converting Postgres SQL constraint logic to PHP?

2016-06-10 Thread Ken Tanzer
On Fri, Jun 10, 2016 at 1:54 PM, rob stone  wrote:

>
> Hi Ken,
>
> Would this be static or dynamic?
> For example, if you altered a column to become defined as NOT NULL,
> say, when you build the form used to maintain that table you'd like to
> have a "required" attribute against the input field for that column. So
> if it were dynamic you'd have to access the database information_schema
> each time you wanted to send the HTML down the wire as well as when
> carrying out validation.
>

Hi Rob.  I guess this is currently static, in that whenever you change the
tables you have to run an update process that rereads the tables and stores
the configuration information.


> Also, views are updateable and you'd have to check the tables and

columns making up the view.
>

Yeah.  We're not using any updateable views at the moment, so for now I'd
be happy for this to work with tables.  One step at a time! :)


> I've never used pg_meta_data but it might be helpful.
> Another thought would be to write a program that scanned all the tables
> in your database and wrote out a table in your application's schema
> where the table and column constraints were synthesised in a way
> suitable for use in PHP code.
>
> I'm not sure I'm following this one.  Wouldn't such a program have to
parse the SQL and convert it to PHP anyway?


> Cheers,
> Rob
>

Thanks!

Ken

-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://agency-software.org/demo/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Converting Postgres SQL constraint logic to PHP?

2016-06-10 Thread Ken Tanzer
On Fri, Jun 10, 2016 at 1:47 PM, Steve Atkins  wrote:

>
> You could name the check constraints, catch the errors and use a
> client-side mapping between constraint name and a friendly error message
> for display in the web interface.
>
> This seems plausible, but not ideal.  I could get over the aesthetics of
sequence gaps, but don't like throwing lots of database errors on
principle, and also for silting up my logs.  Seems better to catch these
before submitting.



> You could implement the checks in PHP in the database.
> https://public.commandprompt.com/projects/plphp/wiki
>
> I'm aiming as much as possible for this to "just work" with standard
Postgres, and this approach gets away from that.  Plus people would have to
know SQL AND PHP in order to create tables.


> You could look at one of the existing SQL parsers implemented in PHP, and
> use those to parse the constraint to a tree from which you could easily
> pull PHP.
>
> This one sounds most promising!  I took a quick Google, and it looks like
there are lots of them, and a heavy Mysql focus.  Anyone have experience or
suggestions about which of these might be best, and particularly for
Postgres?

Thanks for the suggestions!

Ken


> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://agency-software.org/demo/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Converting Postgres SQL constraint logic to PHP?

2016-06-10 Thread rob stone
On Fri, 2016-06-10 at 13:01 -0700, Ken Tanzer wrote:
> Hi.  I was hoping this list might be able to offer some
> help/advice/suggestions/opinions about feasibility for something I
> want to implement, namely converting Postgres constraints into PHP
> logic.  Here's the context and explanation:
> 
> I work on a PHP web app using Postgres.  When possible, we try to
> build as much logic as possible directly into the DB.  The app
> already automatically reads NOT NULL and foreign key constraints from
> the DB, and enforces them through the UI thus preventing people from
> getting ugly database errors.  It doesn't do that with check
> constraints and table constraints though, which means we either end
> up duplicating the constraint logic in PHP, or else sometimes get
> lazy/expedient and only put the constraint into PHP.  Obviously
> neither of those is ideal.
> 
> What would be ideal is for the app to handle those constraints
> automatically.  It looks like I can pull them out (as SQL) from
> information_schema.check_constraints, with the remaining issue being
> how to make them usable in PHP.
> 
> I'm wondering if anyone has done this already, or if there is some
> kind of library available for this purpose?
> 
> If not, and absent any better suggestions, I'm looking at trying to
> parse/search/replace.  This might well be imperfect, and error-
> prone.  But if I can get something that at least works in a lot of
> cases, that would help a lot.  So as a simple example, converting
> from
> 
> ((sat3 >= 0) AND (sat3 <= 5))
> 
> to the needed format:
> 
> (($rec['sat3'] >=0) and ($rec['sat3'] <= 5))
> 
> seems like it would be relatively straightforward, since the
> structure of the logic is neatly parallel between SQL and PHP.  Each
> of these below, and many others I could pull, all have additional
> complications beyond that though:
>  (((incentive_sent_date IS NULL) OR (incentive_sent_date >=
> residence_date)))
> (((application_date IS NOT NULL) OR (employment_date IS NOT NULL)))
>  (xor((assessed_by IS NULL), (assessor_organization IS NULL)))
>  (((release_to IS NULL) OR ((consent_type_code)::text = ANY
> ((ARRAY['ROI'::character varying, 'MEDIA'::character
> varying])::text[]
> So anyway, there's my issue.  Any thoughts/comments/suggestions
> welcome.  Thanks in advance!
> 
> Ken
> 
> 
Hi Ken,

Would this be static or dynamic?
For example, if you altered a column to become defined as NOT NULL,
say, when you build the form used to maintain that table you'd like to
have a "required" attribute against the input field for that column. So
if it were dynamic you'd have to access the database information_schema
each time you wanted to send the HTML down the wire as well as when
carrying out validation.
Also, views are updateable and you'd have to check the tables and
columns making up the view.
I've never used pg_meta_data but it might be helpful.
Another thought would be to write a program that scanned all the tables
in your database and wrote out a table in your application's schema
where the table and column constraints were synthesised in a way
suitable for use in PHP code. 

Cheers,
Rob


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Converting Postgres SQL constraint logic to PHP?

2016-06-10 Thread Steve Atkins

> On Jun 10, 2016, at 1:01 PM, Ken Tanzer  wrote:
> 
> Hi.  I was hoping this list might be able to offer some 
> help/advice/suggestions/opinions about feasibility for something I want to 
> implement, namely converting Postgres constraints into PHP logic.  Here's the 
> context and explanation:
> 
> I work on a PHP web app using Postgres.  When possible, we try to build as 
> much logic as possible directly into the DB.  The app already automatically 
> reads NOT NULL and foreign key constraints from the DB, and enforces them 
> through the UI thus preventing people from getting ugly database errors.  It 
> doesn't do that with check constraints and table constraints though, which 
> means we either end up duplicating the constraint logic in PHP, or else 
> sometimes get lazy/expedient and only put the constraint into PHP.  Obviously 
> neither of those is ideal.
> 
> What would be ideal is for the app to handle those constraints automatically. 
>  It looks like I can pull them out (as SQL) from 
> information_schema.check_constraints, with the remaining issue being how to 
> make them usable in PHP.
> 
> I'm wondering if anyone has done this already, or if there is some kind of 
> library available for this purpose?
> 
> If not, and absent any better suggestions,

You could name the check constraints, catch the errors and use a client-side 
mapping between constraint name and a friendly error message for display in the 
web interface.
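
For instance (a sketch only, re-using the sat3 example from the original mail;
"client_info" is just a placeholder table name):

-- client_info is a placeholder table name; the constraint name is what the
-- application maps to a friendly error message when it catches the error.
ALTER TABLE client_info
  ADD CONSTRAINT sat3_between_0_and_5 CHECK (sat3 >= 0 AND sat3 <= 5);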

You could implement the checks in PHP in the database. 
https://public.commandprompt.com/projects/plphp/wiki

You could look at one of the existing SQL parsers implemented in PHP, and use 
those to parse the constraint to a tree from which you could easily pull PHP.

I'd go for that first one, if possible. Robust, and zero overhead in the happy 
path.

> I'm looking at trying to parse/search/replace.  This might well be imperfect, 
> and error-prone.  But if I can get something that at least works in a lot of 
> cases, that would help a lot.  So as a simple example, converting from

Cheers,
  Steve

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Converting Postgres SQL constraint logic to PHP?

2016-06-10 Thread Ken Tanzer
Hi.  I was hoping this list might be able to offer some
help/advice/suggestions/opinions about feasibility for something I want to
implement, namely converting Postgres constraints into PHP logic.  Here's
the context and explanation:

I work on a PHP web app using Postgres.  When possible, we try to build as
much logic as possible directly into the DB.  The app already automatically
reads NOT NULL and foreign key constraints from the DB, and enforces them
through the UI thus preventing people from getting ugly database errors.
It doesn't do that with check constraints and table constraints though,
which means we either end up duplicating the constraint logic in PHP, or
else sometimes get lazy/expedient and only put the constraint into PHP.
Obviously neither of those is ideal.

What would be ideal is for the app to handle those constraints
automatically.  It looks like I can pull them out (as SQL) from
information_schema.check_constraints, with the remaining issue being how to
make them usable in PHP.
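
(For what it's worth, a minimal sketch of that query, using only standard
information_schema columns, nothing application-specific:)

SELECT tc.table_name, cc.constraint_name, cc.check_clause
FROM information_schema.check_constraints cc
JOIN information_schema.table_constraints tc
  ON  tc.constraint_schema = cc.constraint_schema
  AND tc.constraint_name   = cc.constraint_name
WHERE tc.constraint_type = 'CHECK'
  AND tc.table_schema = 'public';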

I'm wondering if anyone has done this already, or if there is some kind of
library available for this purpose?

If not, and absent any better suggestions, I'm looking at trying to
parse/search/replace.  This might well be imperfect, and error-prone.  But
if I can get something that at least works in a lot of cases, that would
help a lot.  So as a simple example, converting from

((sat3 >= 0) AND (sat3 <= 5))

to the needed format:

(($rec['sat3'] >=0) and ($rec['sat3'] <= 5))

seems like it would be relatively straightforward, since the structure of
the logic is neatly parallel between SQL and PHP.  Each of these below, and
many others I could pull, all have additional complications beyond that
though:

   -  (((incentive_sent_date IS NULL) OR (incentive_sent_date >=
   residence_date)))
   - (((application_date IS NOT NULL) OR (employment_date IS NOT NULL)))
   -  (xor((assessed_by IS NULL), (assessor_organization IS NULL)))
   -  (((release_to IS NULL) OR ((consent_type_code)::text = ANY
   ((ARRAY['ROI'::character varying, 'MEDIA'::character varying])::text[]

So anyway, there's my issue.  Any thoughts/comments/suggestions welcome.
Thanks in advance!

Ken


-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://agency-software.org/demo/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Partitioned postgres tables don't need update trigger??

2016-06-03 Thread rverghese
Perfect! Thanks for the response! Just wanted to make sure I wasn't missing
anything.



--
View this message in context: 
http://postgresql.nabble.com/Partitioned-postgres-tables-don-t-need-update-trigger-tp5906403p5906415.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Partitioned postgres tables don't need update trigger??

2016-06-03 Thread David G. Johnston
On Fri, Jun 3, 2016 at 5:03 PM, rverghese  wrote:

> I am working with partitioned tables. I have partitioned based on date and
> I have the INSERT trigger in place, I don't have an Update or Delete
> Trigger but both updates and deletes against the master table work
> correctly. I am not sure how these are working without triggers.
> Any insight?
>
> So, this deletes the right number of rows : * delete from
> torque.test_master where tstamp ='2012-08-03 03:00:00'; *
> And this updates the right rows : * update torque.test_master set
> system='zzz' where tstamp ='2012-08-03 04:00:00';*
>
> *Here are the sample tables.*
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> * CREATE TABLE torque.test_master ( testmstr_seq_id bigserial NOT NULL,
> tstamp timestamp without time zone NOT NULL, system text NOT NULL,
> CONSTRAINT pk_testmstr_id PRIMARY KEY (testmstr_seq_id) ) WITH ( OIDS=TRUE
> ); CREATE TABLE torque.test_y2012m08 ( CONSTRAINT pk_test_y2012m08_id
> PRIMARY KEY (testmstr_seq_id), CONSTRAINT test_y2012m08_log_tstamp_check
> CHECK (tstamp >= '2012-08-01 00:00:00'::timestamp without time zone AND
> tstamp < '2012-09-01 00:00:00'::timestamp without time zone) ) INHERITS
> (torque.test_master) WITH ( OIDS=TRUE ); CREATE OR REPLACE FUNCTION
> torque.test_child_insert() RETURNS trigger AS $BODY$ BEGIN IF ( new.tstamp
> >= '2012-08-01' AND new.tstamp < '2012-09-01') THEN INSERT INTO
> torque.test_y2012m08 VALUES (NEW.*); ELSEIF ( new.tstamp >= '2015-05-01'
> AND new.ltstamp < '2015-06-01') THEN INSERT INTO torque.test_y2015m05
> VALUES (NEW.*); ELSE RAISE EXCEPTION 'Date out of range. Fix the
> test_child_insert() function!'; END IF; RETURN NULL; END; $BODY$ LANGUAGE
> plpgsql VOLATILE COST 100; *
>
> CREATE TRIGGER testmaster_insert_trigger BEFORE INSERT ON
> torque.test_master FOR EACH ROW EXECUTE PROCEDURE
> torque.test_child_insert();


Since both UPDATE and DELETE expect the record to already exist, they operate
in much the same way SELECT does: queries against the master are also
applied to the children (and WHERE clause constraint evaluation is
performed).  However, INSERTing a new record basically causes an implicit
ONLY to be added to the query, since by default you cannot issue a single
insert and cause multiple records to appear - and there is no WHERE
clause.  Thus it is necessary to place a trigger on the table named in the
INSERT so that the new record is directed to the correct location.
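
A quick way to see the difference, using the tables from the original post
(sketch only):

-- The parent plus its children are scanned:
EXPLAIN DELETE FROM torque.test_master WHERE tstamp = '2012-08-03 03:00:00';

-- Restricting a statement to the parent alone requires an explicit ONLY:
DELETE FROM ONLY torque.test_master WHERE tstamp = '2012-08-03 03:00:00';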

David J.


[GENERAL] Partitioned postgres tables don't need update trigger??

2016-06-03 Thread rverghese
I am working with partitioned tables. I have partitioned based on date and I
have the INSERT trigger in place. I don't have an Update or Delete trigger,
but both updates and deletes against the master table work correctly. I am
not sure how these are working without triggers.
Any insight?
So, this deletes the right number of rows:
  delete from torque.test_master where tstamp = '2012-08-03 03:00:00';
And this updates the right rows:
  update torque.test_master set system = 'zzz' where tstamp = '2012-08-03 04:00:00';

Here are the sample tables:

CREATE TABLE torque.test_master (
  testmstr_seq_id bigserial NOT NULL,
  tstamp timestamp without time zone NOT NULL,
  system text NOT NULL,
  CONSTRAINT pk_testmstr_id PRIMARY KEY (testmstr_seq_id)
) WITH (OIDS=TRUE);

CREATE TABLE torque.test_y2012m08 (
  CONSTRAINT pk_test_y2012m08_id PRIMARY KEY (testmstr_seq_id),
  CONSTRAINT test_y2012m08_log_tstamp_check CHECK (tstamp >= '2012-08-01 00:00:00'::timestamp without time zone
    AND tstamp < '2012-09-01 00:00:00'::timestamp without time zone)
) INHERITS (torque.test_master) WITH (OIDS=TRUE);

CREATE OR REPLACE FUNCTION torque.test_child_insert() RETURNS trigger AS
$BODY$
BEGIN
  IF ( new.tstamp >= '2012-08-01' AND new.tstamp < '2012-09-01') THEN
    INSERT INTO torque.test_y2012m08 VALUES (NEW.*);
  ELSEIF ( new.tstamp >= '2015-05-01' AND new.ltstamp < '2015-06-01') THEN
    INSERT INTO torque.test_y2015m05 VALUES (NEW.*);
  ELSE
    RAISE EXCEPTION 'Date out of range.  Fix the test_child_insert() function!';
  END IF;
  RETURN NULL;
END;
$BODY$ LANGUAGE plpgsql VOLATILE COST 100;

CREATE TRIGGER testmaster_insert_trigger BEFORE INSERT ON torque.test_master
  FOR EACH ROW EXECUTE PROCEDURE torque.test_child_insert();



--
View this message in context: 
http://postgresql.nabble.com/Partitioned-postgres-tables-don-t-need-update-trigger-tp5906403.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

Re: [GENERAL] Connections - Postgres 9.2

2016-05-23 Thread Lucas Possamai
On 17 May 2016 at 22:24, Sridhar N Bamandlapally 
wrote:

> Hi
>
> I control this way
>
> if "state_change" is from longtime and "state" is idle then I use
>
> function:
>   *pg_terminate_backend ( integer ) * ==> return TRUE if
> killed-successful else FALSE
>
> example:
> # select pg_terminate_backend ( pid ) from pg_stat_activity where
> state='idle' and state_change < (current_timestamp - interval '1 hour');
>
> may be helpful
>
> NOTE: we came to the decision to kill sessions in idle state for more than "1
> hour" after a lot of discussion with the application / implementation /
> stake-holder teams
>
> *removed history as it threw an error due to mail length
>
>
> Thanks
> Sridhar
> OpenText
>
>
>
Hi Sridhar!

Thanks for your help...

I also control the same way.. But was wondering if a pooler would bring me
performance improvements...


cheers


Re: [GENERAL] Connections - Postgres 9.2

2016-05-17 Thread Sridhar N Bamandlapally
Hi

I control this way

if "state_change" is from longtime and "state" is idle then I use

function:
  *pg_terminate_backend ( integer ) * ==> return TRUE if killed-successful
else FALSE

example:
# select pg_terminate_backend ( pid ) from pg_stat_activity where
state='idle' and state_change < (current_timestamp - interval '1 hour');

may be helpful

NOTE: we came to the decision to kill sessions in idle state for more than "1 hour"
after a lot of discussion with the application / implementation / stake-holder
teams

*removed history as it threw an error due to mail length


Thanks
Sridhar
OpenText


Re: [GENERAL] Connections - Postgres 9.2

2016-05-16 Thread Adrian Klaver

On 05/16/2016 02:25 PM, John R Pierce wrote:

On 5/16/2016 2:11 PM, Adrian Klaver wrote:

Yes a connection consumes resources.


an idle connection consumes some memory, a process context, and a
network socket.   its not using CPU or disk IO.


True, but the existence of poolers says that can be an issue.


I note that MRTG style graph showed max 16, average 5 idle
connections.   thats a rather small number to be concerned with.  if it
was 100s, then I'd be worrying about it.


Yeah, I failed to look at the scale of the y axis and just looked at the 
relative numbers.






--
john r pierce, recycling bits in santa cruz




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Connections - Postgres 9.2

2016-05-16 Thread John R Pierce

On 5/16/2016 2:11 PM, Adrian Klaver wrote:

Yes a connection consumes resources.


an idle connection consumes some memory, a process context, and a
network socket.   its not using CPU or disk IO.


True, but the existence of poolers says that can be an issue. 


I note that MRTG style graph showed max 16, average 5 idle 
connections.   thats a rather small number to be concerned with. if it 
was 100s, then I'd be worrying about it.




--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Connections - Postgres 9.2

2016-05-16 Thread Adrian Klaver

On 05/16/2016 02:00 PM, John R Pierce wrote:

On 5/16/2016 1:55 PM, Adrian Klaver wrote:

Would the pool connection solve that IDLE connections? But more
important than that, are the IDLE connections using the machine's
resources ?


Yes a connection consumes resources.


an idle connection consumes some memory, a process context, and a
network socket.   its not using CPU or disk IO.


True, but the existence of poolers says that can be an issue.




--
john r pierce, recycling bits in santa cruz




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Connections - Postgres 9.2

2016-05-16 Thread Lucas Possamai
On 17 May 2016 at 08:56, Venkata Balaji N  wrote:

>
> On Tue, May 17, 2016 at 6:28 AM, Lucas Possamai 
> wrote:
>
>> hmm.. thanks for all the answers guys...
>>
>>
>> One more question: Those IDLE connections.. are using the server's
>> resources?
>> To solve that problem I would need a Pool connection, right?
>>
>> Would the pool connection solve that IDLE connections? But more important
>> than that, are the IDLE connections using the machine's resources ?
>>
>
> Yes. There is no straight way to terminate the IDLE connections from the
> database end. You would need a connection pooler like pgBouncer which can
> help you terminate IDLE connections after a certain time.
>
> It would be much better if your Application can terminate connections to
> the database.
>
> Regards,
> Venkata B N
>
> Fujitsu Australia
>
>


Ok awesome.. thanks a lot!

Lucas


Re: [GENERAL] Connections - Postgres 9.2

2016-05-16 Thread John R Pierce

On 5/16/2016 1:55 PM, Adrian Klaver wrote:

Would the pool connection solve that IDLE connections? But more
important than that, are the IDLE connections using the machine's
resources ?


Yes a connection consumes resources. 


an idle connection consumes some memory, a process context, and a 
network socket.   its not using CPU or disk IO.



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Connections - Postgres 9.2

2016-05-16 Thread Venkata Balaji N
On Tue, May 17, 2016 at 6:28 AM, Lucas Possamai 
wrote:

> hmm.. thanks for all the answers guys...
>
>
> One more question: Those IDLE connections.. are using the server's
> resources?
> To solve that problem I would need a Pool connection, right?
>
> Would the pool connection solve that IDLE connections? But more important
> than that, are the IDLE connections using the machine's resources ?
>

Yes. There is no straight way to terminate the IDLE connections from the
database end. You would need a connection pooler like pgBouncer which can
help you terminate IDLE connections after a certain time.

It would be much better if your Application can terminate connections to
the database.

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] Connections - Postgres 9.2

2016-05-16 Thread Adrian Klaver

On 05/16/2016 01:28 PM, Lucas Possamai wrote:

hmm.. thanks for all the answers guys...


One more question: Those IDLE connections.. are using the server's
resources?
To solve that problem I would need a Pool connection, right?


Yes and no. If your application/clients are generating connections that 
are not being closed then putting a pooler between the client and the 
database just moves the problem to the pooler. In other words if the 
client is asking for a connection from the pooler and then does not 
close it then the pooler is going to have to add connections to deal 
with subsequent connections. Now you can set a max number of connections 
and reject new ones after that number, but that then means you may be 
shutting out connections that need to get through. The first step in 
dealing with this would be to determine what code is generating 
connections and then not closing them. A good place to start would be:


http://www.postgresql.org/docs/9.5/static/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW

Table 27-3. pg_stat_activity View

That should give you an idea of what is creating the connections.
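
A sketch of the kind of query that helps here (pg_stat_activity column names as
of 9.2+; adjust as needed):

SELECT usename, client_addr, state,
       count(*)                  AS connections,
       max(now() - state_change) AS longest_in_state
FROM pg_stat_activity
GROUP BY usename, client_addr, state
ORDER BY connections DESC;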




Would the pool connection solve that IDLE connections? But more
important than that, are the IDLE connections using the machine's
resources ?


Yes a connection consumes resources.



cheers



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Connections - Postgres 9.2

2016-05-16 Thread Lucas Possamai
hmm.. thanks for all the answers guys...


One more question: Those IDLE connections.. are using the server's
resources?
To solve that problem I would need a Pool connection, right?

Would the pool connection solve that IDLE connections? But more important
than that, are the IDLE connections using the machine's resources ?


cheers


Re: [GENERAL] Connections - Postgres 9.2

2016-05-16 Thread Adrian Klaver

On 05/16/2016 09:56 AM, Charles Clavadetscher wrote:

Hello

On 16.05.2016, at 18:32, Francisco Olarte > wrote:


Hi Lucas

On Mon, May 16, 2016 at 4:10 AM, Lucas Possamai > wrote:


Those IDLE connections, might be because the user/application
didn't commit the transaction?


​IIRC Those would be 'Idle in transaction' ( which are normally bad if
numerous, unless your app has a reason for them, as they are the ones
which can block things ). Plain 'Idle' are normally connections
between transactions, totally normal if you use poolers, or if your
app keeps connection opens while it does other things ( like preparing
for a transaction ).



There really is a state 'Idle in transaction'? Good to learn.


http://www.postgresql.org/docs/9.5/static/monitoring-stats.html
"
state   textCurrent overall state of this backend. Possible values are:

active: The backend is executing a query.

idle: The backend is waiting for a new client command.

idle in transaction: The backend is in a transaction, but is not 
currently executing a query.


idle in transaction (aborted): This state is similar to idle in 
transaction, except one of the statements in the transaction caused an 
error.


fastpath function call: The backend is executing a fast-path function.

disabled: This state is reported if track_activities is disabled in 
this backend.

"


Thank you and bye
Charles


Francisco Olarte.






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Connections - Postgres 9.2

2016-05-16 Thread Francisco Olarte
Charles:

On Mon, May 16, 2016 at 6:56 PM, Charles Clavadetscher
 wrote:

> There really is a state 'Idle in transaction'? Good to learn.

Again, IIRC, it was there in the graph legend, orange was Idle, yellow
was Idle in transaction ( not in the data, just in the legend ).

Francisco Olarte.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Connections - Postgres 9.2

2016-05-16 Thread Charles Clavadetscher
Hello

> On 16.05.2016, at 18:32, Francisco Olarte  wrote:
> 
> Hi Lucas
> 
>> On Mon, May 16, 2016 at 4:10 AM, Lucas Possamai  wrote:
>> 
>> Those IDLE connections, might be because the user/application didn't commit 
>> the transaction?
> 
> ​IIRC Those would be 'Idle in transaction' ( which are normally bad if 
> numerous, unless your app has a reason for them, as they are the ones which 
> can block things ). Plain 'Idle' are normally connections between 
> transactions, totally normal if you use poolers, or if your app keeps 
> connection opens while it does other things ( like preparing for a 
> transaction ).
> 

There really is a state 'Idle in transaction'? Good to learn.
Thank you and bye
Charles

> Francisco Olarte.
> 
>  
> 


Re: [GENERAL] Connections - Postgres 9.2

2016-05-16 Thread Francisco Olarte
Hi Lucas

On Mon, May 16, 2016 at 4:10 AM, Lucas Possamai 
wrote:

>
> Those IDLE connections, might be because the user/application didn't
> commit the transaction?
>

​IIRC Those would be 'Idle in transaction' ( which are normally bad if
numerous, unless your app has a reason for them, as they are the ones which
can block things ). Plain 'Idle' are normally connections between
transactions, totally normal if you use poolers, or if your app keeps
connections open while it does other things ( like preparing for a
transaction ).
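
(A sketch for spotting the problematic ones via pg_stat_activity, 9.2+:)

SELECT pid, usename, state, now() - state_change AS in_state_for, query
FROM pg_stat_activity
WHERE state LIKE 'idle in transaction%'
ORDER BY in_state_for DESC;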

Francisco Olarte.


Re: [GENERAL] Connections - Postgres 9.2

2016-05-16 Thread Charles Clavadetscher
Hello

> On 16.05.2016, at 04:10, Lucas Possamai  wrote:
> 
> Hi guys,
> 
> 
> 
> Those IDLE connections, might be because the user/application didn't commit 
> the transaction?

I think that idle means that a client is connected but is doing nothing. 
Possibly it includes processes that terminated without a commit, since they do 
nothing, but I am not sure that you can assume that all idle connections are 
uncommitted transactions.

Bye
Charles


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] understanding postgres backend process memory usage

2016-04-18 Thread Day, David
Hi Jony,

First of all, thanks for considering the matter.

Although the overall database has many tables, the relations/objects accessed 
by high-RAM-usage connections are rather limited ( < 10 ).
There are essentially a couple of “selects” (2-4), dependent on which options are 
enabled, and then the bulk of the subsequent activity is
inserts/updates of call state information about a phone call.

Usage of Bind Variables:

If this is what Bind Variables means then yes.
Each session/connection sets up 19 prepared statements with variables when a 
connection is established.
Some of these prepared statements are used at a later time, dependent on the 
call scenario, if needed, at which time the variables provided by the 
requestor are “bound” to the prepared statement variables.  The prepared 
statements are generally server-side functions with arguments being provided by 
these bind variables.

Periodically resetting the connection is something we could do, but it seems a 
work-around to really understanding what is going on with the RAM usage.  I was 
hoping there might be a simple formula such that, given the postgres config file 
settings, a connection should never exceed a certain amount of RAM as observed 
by the “top” utility.  I am not confident, from googling about, that I have 
found an authoritative answer as to what that formula is.


Best Regards and thanks again.


Dave


From: Jony Cohen [mailto:jony.cohe...@gmail.com]
Sent: Sunday, April 17, 2016 4:24 AM
To: Day, David
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] understanding postgres backend process memory usage

Hi David,
How many different tables/objects do you read from, and do you use bind variables?
Each backend process keeps an internal dictionary of the objects and queries it has run - 
if you have too many objects or planned queries it might take up too much RAM.

If this is indeed your situation, you might benefit from shorter lived 
connections.

Regards,
 - Jony

On Fri, Apr 15, 2016 at 12:14 AM, Day, David 
<d...@redcom.com<mailto:d...@redcom.com>> wrote:
Hello,


Understanding postgresql memory usage ?

While traffic-loading our application that uses a postgres 9.3.11 database 
instance, we observe higher RAM usage than expected.  The system bogs down as 
free memory decreases to minimal values.  Most of the RAM usage seems to be in 
postgres backends.   I am looking for clarity on how postgres backend 
connections use RAM.  I suspect our tuning is bad for a 2 GB RAM pool.  We likely 
need to add memory.

Please look at the "top" output below and offer an opinion on whether the "top RES" 
values could plausibly be that high for the configuration parameters postgres is 
running with.

There are no error messages in postgres log of relevance.

Regards and thank you for consideration on the matter.


Dave Day

--


Details:


Postgresql.conf  ( relevant subset )

max_connections = 100
shared_buffers = 64MB
#temp_buffers = 8MB
work_mem = 1MB
maintenance_work_mem=16MB
max_stack_depth=2MB
#wal_level=minimal
#fsync=on
#synchronous_commit=on
#full_page_writes = on
#wal_buffers = -1
#autovacuum = on
log_destination='syslog'
update_process_title=off

--

Here are all the backend connections via "top" taken after offered load has 
been running a while.

I annotated some of the "top" data as to connection ownership sorted by RES and 
user.
Within the DBM connections I correlate the pid to the client session. 
Connections
0-3 are somewhat specialized towards read operations ( selects )  whiles 
connections 4-9 are
More write oriented. ( Insert/updates ).


top capture:

97 processes:  1 running, 91 sleeping, 5 zombie
CPU:  1.6% user,  0.0% nice,  9.4% system,  0.4% interrupt, 88.6% idle
Mem: 1474M Active, 79M Inact, 387M Wired, 19M Cache, 10M Free
ARC: 156M Total, 44M MFU, 38M MRU, 1885K Anon, 2619K Header, 70M Other
Swap: 2048M Total, 761M Used, 1287M Free, 37% Inuse, 144K In

  PID USERNAME   THR PRI NICE   SIZERES STATE   C   TIMEWCPU 
COMMAND   DBM connection
DBM
 9099 pgsql1  200   373M   205M sbwait  1   4:02   0.00% 
postgres  6
 9094 pgsql1  200   365M   204M sbwait  1   3:53   0.00% 
postgres  4
 9089 pgsql1  200   325M   162M sbwait  1   8:14   0.00% 
postgres  9
 9097 pgsql1  200   309M   145M sbwait  1   3:40   0.00% 
postgres  5
 9096 pgsql1  200   301M   138M sbwait  0   3:24   0.00% 
postgres  8
 9095 pgsql1  200   233M 72648K sbwait  1   1:03   0.00% 
postgres  0
 9092 pgsql1  200   217M 53852K sbwait  1   0:44   0.00% 
postgres  2
 9100 pgsql1  200   217M 53708K sbwait  0   0:44   0.00% 
postgres  3
 9098 pgsql1  200   217M 53552K sbwait  0   0:43   0.00% 
postgres  1
 9093 pgsql1  200   373M 32328K sbwait  1   4:02   0.00% 
postgres  7

 9084 pgsql1  200   112M 21284K select  0   0:37   0.00%

Re: [GENERAL] Re: Postgres 9.4.4/9.4.6: plpython2/3 intallation issues on a Windows 7 machine

2016-03-31 Thread Adrian Klaver

On 03/31/2016 02:53 AM, margrit drescher wrote:

I originally used the 9.4.6 version on
http://www.enterprisedb.com/products-services-training/pgdownload#windows for
64-bit windows and loaded the language pack recommended in the readme file.


Did you do this?:

If you are using Windows 64, you must set the following variables:
=
set PYTHONHOME=C:\EnterpriseDB\LanguagePack\9.4\x64\Python-3.3
set 
PATH=C:\EnterpriseDB\LanguagePack\9.4\x64\Python-3.3;C:\EnterpriseDB\LanguagePack\9.4\x64\Perl-5.16\bin;C:\EnterpriseDB\LanguagePack\9.4\x64\Tcl-8.5\bin;%PATH%



My suspicion is that plpython3u is finding the Python 3.2 install on the 
machine not the Python 3.3 one.




On 31 March 2016 at 11:25, Alex Ignatov-2 [via PostgreSQL] <[hidden
email] > wrote:



On 31.03.2016 10:50, margrit drescher wrote:

 > I installed postgres 9.4.6 on my Windows 7 (64-bit) machine, and
managed to
 > create the plpython3u extension without a problem.
 >
 > I then restored an existing DB (created in postgres 9.3).  The
plpython3
 > functions restored without a hitch and can execute.
 >
 >
 > However, as soon as I try to create a NEW plpython3 function on
the db the
 > postgres service terminates and does an auto-recovery.
 >
 > I then downgraded to postgres 9.4.4 and am still encountering the
same
 > issue.
 >
 >
 >
 > Example:
 >
 > When  running the following code:
 >
 > "create function test() returns void as $$ plpy.notice( 'xxx' );
$$ language
 > plpython3u"
 >
 >
 > the following is generated in the log:
 >
 > "
 > 2016-03-31 09:36:56 CAT LOG:  server process (PID 6376) exited
with exit
 > code 3
 > 2016-03-31 09:36:56 CAT DETAIL:  Failed process was running:  create
 > function test() returns void as $$ plpy.notice( 'xxx' ); $$ language
 > plpython3u
 > 2016-03-31 09:36:56 CAT LOG:  terminating any other active server
processes
 > 2016-03-31 09:36:56 CAT WARNING:  terminating connection because
of crash of
 > another server process
 > 2016-03-31 09:36:56 CAT DETAIL:  The postmaster has commanded
this server
 > process to roll back the current transaction and exit, because
another
 > server process exited abnormally and possibly corrupted shared
memory.
 > 2016-03-31 09:36:56 CAT HINT:  In a moment you should be able to
reconnect
 > to the database and repeat your command.
 > 2016-03-31 09:36:56 CAT WARNING:  terminating connection because
of crash of
 > another server process
 > 2016-03-31 09:36:56 CAT DETAIL:  The postmaster has commanded
this server
 > process to roll back the current transaction and exit, because
another
 > server process exited abnormally and possibly corrupted shared
memory.
 > 2016-03-31 09:36:56 CAT HINT:  In a moment you should be able to
reconnect
 > to the database and repeat your command.
 > 2016-03-31 09:36:56 CAT WARNING:  terminating connection because
of crash of
 > another server process
 > 2016-03-31 09:36:56 CAT DETAIL:  The postmaster has commanded
this server
 > process to roll back the current transaction and exit, because
another
 > server process exited abnormally and possibly corrupted shared
memory.
 > 2016-03-31 09:36:56 CAT HINT:  In a moment you should be able to
reconnect
 > to the database and repeat your command.
 >
 > "
 >
 > I am guessing that my problem might have something to do with the
fact that
 > the plpython2u language is not installed on the DB - although I
am not sure
 > why it should it need it, when executing a plpython3 command.  As I
 > understand it, Postgres 9.4 accomodates both versions of python.
   However I
 > have not been able to figure out how to load plpython2u onto the
db.  The
 > postgres version I installed does not ship with plpython2.dll and
I have not
 > been able to get hold of a copy. ( Python 2.7 /3.2 and 3.3 are
installed on
 > my machine, and I am running postgre versions  9.2, 9.3 and 9.4.4 on
 > different ports).
 >
 >
 >
 >
 >
 > --
 > View this message in context:

http://postgresql.nabble.com/Postgres-9-4-4-9-4-6-plpython2-3-intallation-issues-on-a-Windows-7-machine-tp5896157.html
 > Sent from the PostgreSQL - general mailing list archive at
Nabble.com.
 >
 >
Hello!
What postgres distr for Win  are you using?

--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-general mailing list ([hidden email]
)
To make changes to your subscription:

[GENERAL] Re: Postgres 9.4.4/9.4.6: plpython2/3 intallation issues on a Windows 7 machine

2016-03-31 Thread margrit drescher
I originally used the 9.4.6 version on
http://www.enterprisedb.com/products-services-training/pgdownload#windows
for 64-bit windows and loaded the language pack recommended in the readme
file.

On 31 March 2016 at 11:25, Alex Ignatov-2 [via PostgreSQL] <
ml-node+s1045698n5896174...@n5.nabble.com> wrote:

>
>
> On 31.03.2016 10:50, margrit drescher wrote:
>
> > I installed postgres 9.4.6 on my Windows 7 (64-bit) machine, and managed
> to
> > create the plpython3u extension without a problem.
> >
> > I then restored an existing DB (created in postgres 9.3).  The plpython3
> > functions restored without a hitch and can execute.
> >
> >
> > However, as soon as I try to create a NEW plpython3 function on the db
> the
> > postgres service terminates and does an auto-recovery.
> >
> > I then downgraded to postgres 9.4.4 and am still encountering the same
> > issue.
> >
> >
> >
> > Example:
> >
> > When  running the following code:
> >
> > "create function test() returns void as $$ plpy.notice( 'xxx' ); $$
> language
> > plpython3u"
> >
> >
> > the following is generated in the log:
> >
> > "
> > 2016-03-31 09:36:56 CAT LOG:  server process (PID 6376) exited with exit
> > code 3
> > 2016-03-31 09:36:56 CAT DETAIL:  Failed process was running:  create
> > function test() returns void as $$ plpy.notice( 'xxx' ); $$ language
> > plpython3u
> > 2016-03-31 09:36:56 CAT LOG:  terminating any other active server
> processes
> > 2016-03-31 09:36:56 CAT WARNING:  terminating connection because of
> crash of
> > another server process
> > 2016-03-31 09:36:56 CAT DETAIL:  The postmaster has commanded this
> server
> > process to roll back the current transaction and exit, because another
> > server process exited abnormally and possibly corrupted shared memory.
> > 2016-03-31 09:36:56 CAT HINT:  In a moment you should be able to
> reconnect
> > to the database and repeat your command.
> > 2016-03-31 09:36:56 CAT WARNING:  terminating connection because of
> crash of
> > another server process
> > 2016-03-31 09:36:56 CAT DETAIL:  The postmaster has commanded this
> server
> > process to roll back the current transaction and exit, because another
> > server process exited abnormally and possibly corrupted shared memory.
> > 2016-03-31 09:36:56 CAT HINT:  In a moment you should be able to
> reconnect
> > to the database and repeat your command.
> > 2016-03-31 09:36:56 CAT WARNING:  terminating connection because of
> crash of
> > another server process
> > 2016-03-31 09:36:56 CAT DETAIL:  The postmaster has commanded this
> server
> > process to roll back the current transaction and exit, because another
> > server process exited abnormally and possibly corrupted shared memory.
> > 2016-03-31 09:36:56 CAT HINT:  In a moment you should be able to
> reconnect
> > to the database and repeat your command.
> >
> > "
> >
> > I am guessing that my problem might have something to do with the fact
> that
> > the plpython2u language is not installed on the DB - although I am not
> sure
> > why it should it need it, when executing a plpython3 command.  As I
> > understand it, Postgres 9.4 accomodates both versions of python.
> However I
> > have not been able to figure out how to load plpython2u onto the db.
> The
> > postgres version I installed does not ship with plpython2.dll and I have
> not
> > been able to get hold of a copy. ( Python 2.7 /3.2 and 3.3 are installed
> on
> > my machine, and I am running postgre versions  9.2, 9.3 and 9.4.4 on
> > different ports).
> >
> >
> >
> >
> >
> > --
> > View this message in context:
> http://postgresql.nabble.com/Postgres-9-4-4-9-4-6-plpython2-3-intallation-issues-on-a-Windows-7-machine-tp5896157.html
> > Sent from the PostgreSQL - general mailing list archive at Nabble.com.
> >
> >
> Hello!
> What postgres distr for Win  are you using?
>
> --
> Alex Ignatov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>
>
>
> --
> Sent via pgsql-general mailing list ([hidden email]
> )
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
>

[GENERAL] regarding postgres feedback

2016-03-11 Thread Durgamahesh Manne
Hi Sir

I am very happy to work on PostgreSQL. Super fast responses from the
postgres team whenever I ask any question related to postgres.

regards
mahesh


Re: [GENERAL] bloated postgres data folder, clean up

2016-03-04 Thread Rémi Cura
​Hey Julien,
thanks for the original query !
​
There was a slight mistake in the query:
it was comparing the file name with
pg_class.relfilenode.
That is not safe in some cases (see the "Caution" note on relfilenode in the docs),
so it is better to use the pg_relation_filenode() function.
As a result this database could not be started anymore (no worries, I had a
copy).
However, using pg_relation_filenode() seems to be safe (it passes VACUUM FULL
ANALYZE).
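
For reference, a minimal sketch of the safe mapping (not the full gist query):

-- list the on-disk filenode for every relation with storage in the current database
SELECT c.oid::regclass AS relation,
       pg_relation_filenode(c.oid) AS filenode
FROM pg_class c
WHERE pg_relation_filenode(c.oid) IS NOT NULL;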

I'll modify the query as soon as I have access to gist.

I agree the warning about tablespaces is also important.
I'll put all of this on the wiki as soon as I have permission to create a
new page
(man, how long is this "cool-off"? It's been several days already!).

Cheers,
Rémi-C
​​



2016-03-03 20:10 GMT+01:00 Julien Rouhaud :

> On 03/03/2016 18:15, Rémi Cura wrote:
> > Hey,
>
> Hello Rémi,
>
> > first I forgot something in the querry to remove the annoying .XXX :
> > ---
> > SELECT distinct substring(file_name from '\d+' )
> > FROM find_useless_postgres_file('your_database_name') ;
> > ---
> >
>
> Thanks for working on this :)
>
> I added a comment on the gist URL you provided. It's a simplified
> version of the main query that should work fine and detect more orphan
> files. Double checking it would be a good idea though.
>
> Also, as you can't check other databases than the one you're connected
> to, I used current_database() instead of user defined database name.
>
> It's also important to warn that all of this only works for finding
> orphan files on the default tablespace (and to never blindly remove
> files in the PGDATA of course).
>
>


Re: [GENERAL] bloated postgres data folder, clean up

2016-03-03 Thread Julien Rouhaud
#END AND 
> 
> #   (pg_catalog.pg_relation_filenode(c.oid) IN ('48175847.30')) 
> 
> # ORDER BY relname
> 
> __ __
> 
> /data/postgres # cut -d ';' -f1 /tmp/useless_files.csv | cut -d'.'
> -f1 | sort -n | uniq >/tmp/potential_useless_oids.csv
> 
> /data/postgres # wc -l /tmp/potential_useless_oids.csv
> 
> 1017 /tmp/potential_useless_oids.csv
> 
> __ __
> 
> # get a list of all used oids, instead of examing one by one
> 
> /data/postgres # oid2name -i -S -q -d live | awk '{print $1}' | sort
> -n | uniq >/tmp/used_oids.csv
> 
> /data/postgres # wc -l /tmp/used_oids.csv 
> 
> 940 /tmp/used_oids.csv
> 
> __ __
> 
> /data/postgres # while read i; do grep $i /tmp/used_oids.csv
> >/dev/null || (echo "$i" >>/tmp/not_in_use_oids.csv); done <
> /tmp/potential_useless_oids.csv
> 
> /data/postgres # wc -l /tmp/not_in_use_oids.csv
> 
> 168 /tmp/not_in_use_oids.csv
> 
> __ __
> 
> /data/postgres # egrep -v 'fsm|vm' /tmp/not_in_use_oids.csv |wc -l
> 
> 1
> 
> /data/postgres # egrep -v 'fsm|vm' /tmp/not_in_use_oids.csv   
> 
> file_name
> 
> # The CSV header only
> 
> __ __
> 
> So, no bloated files.
> 
> But, using the function from
> https://wiki.postgresql.org/wiki/Show_database_bloat I get:
> 
> __ __
> 
> live=# select tbloat,wasted_space from table_bloat order by
> wasted_space desc limit 25;
> 
> tbloat │ wasted_space 
> 
> ┼──
> 
> 1.0 │ 9976 kB
> 
> 1.2 │ 98 GB
> 
> 1.0 │ 97 MB
> 
> 1.4 │ 96 kB
> 
> 1.2 │ 920 kB
> 
> 1.2 │ 88 kB
> 
> 1.1 │ 88 kB
> 
> 2.0 │ 8192 bytes
> 
> 0.0 │ 8192 bytes
> 
> 1.3 │ 8192 bytes
> 
> 2.0 │ 8192 bytes
> 
> 1.3 │ 8192 bytes
> 
> 1.5 │ 8192 bytes
> 
> 1.5 │ 8192 bytes
> 
> 2.0 │ 8192 bytes
> 
> 1.1 │ 8192 bytes
> 
> 1.0 │ 8192 bytes
> 
> 1.1 │ 8192 bytes
> 
> 1.3 │ 8192 bytes
> 
> 1.5 │ 8192 bytes
> 
> 1.1 │ 80 kB
> 
> 1.0 │ 7584 kB
> 
> 1.6 │ 71 MB
> 
> 1.0 │ 704 kB
> 
> 1.1 │ 6968 kB
> 
> (25 rows)
> 
> __ __
> 
> So actually, quite a lot of bloated data ☺
> 
> What am I doing wrong?
> 
> __ __
> 
> Mit freundlichen Grüßen / With kind regards,
> 
> Johnny Morano
> 
> 
> 
> __ __
> 
> *Johnny Morano  |  Principal Systems Engineer*
> 
> __ __
> 
> PAY.ON GmbH  |  AN ACI WORLDWIDE COMPANY  |  WWW.PAYON.COM
>     <http://www.payon.com/>
> 
> Jakob-Haringer-Str. 1  |  5020 Salzburg  |  Austria
> 
> __ __
> 
> This email message and any attachments may contain confidential,
> proprietary or non-public information. This information is intended
> solely for the designated recipient(s). If an addressing or
> transmission error has misdirected this email, please notify the
> sender immediately and destroy this email. Any review,
> dissemination, use or reliance upon this information by unintended
> recipients is prohibited. Any opinions expressed in this email are
> those of the author personally.
> 
> __ __
> 
> *From:*Rémi Cura [mailto:remi.c...@gmail.com
> <mailto:remi.c...@gmail.com>]
> *Sent:* Mittwoch, 2. März 2016 17:49
> *To:* Johnny Morano
> *Cc:* Alvaro Herrera; PostgreSQL General
> 
> 
> *Subject:* Re: [GENERAL] bloated postgres data folder, clean up
> 
> __ __
> 
> Hey,
> 
> this is quite the *opposite*.
> 
> The function find files in the postgres database folder that are not
> used by the database.
> 
> To use it : 
> 
>  * connect to the database you want to analyse ( **mandatory** ).
> 
>  * create the function (execute function definition)
> 
>  * Execute `SELECT * FROM
> find_useless_postgres_file('your_database_name')`
> 
> 
> This will output a list of files that are on the disk but not used
> by postgres,
> 
> and so can be removed.
> 
>

Re: [GENERAL] bloated postgres data folder, clean up

2016-03-03 Thread Rémi Cura
gresql.org/wiki/Show_database_bloat I get:
>
>
>
> live=# select tbloat,wasted_space from table_bloat order by wasted_space
> desc limit 25;
>
> tbloat │ wasted_space
>
> ┼──
>
> 1.0 │ 9976 kB
>
> 1.2 │ 98 GB
>
> 1.0 │ 97 MB
>
> 1.4 │ 96 kB
>
> 1.2 │ 920 kB
>
> 1.2 │ 88 kB
>
> 1.1 │ 88 kB
>
> 2.0 │ 8192 bytes
>
> 0.0 │ 8192 bytes
>
> 1.3 │ 8192 bytes
>
> 2.0 │ 8192 bytes
>
> 1.3 │ 8192 bytes
>
> 1.5 │ 8192 bytes
>
> 1.5 │ 8192 bytes
>
> 2.0 │ 8192 bytes
>
> 1.1 │ 8192 bytes
>
> 1.0 │ 8192 bytes
>
> 1.1 │ 8192 bytes
>
> 1.3 │ 8192 bytes
>
> 1.5 │ 8192 bytes
>
> 1.1 │ 80 kB
>
> 1.0 │ 7584 kB
>
> 1.6 │ 71 MB
>
> 1.0 │ 704 kB
>
> 1.1 │ 6968 kB
>
> (25 rows)
>
>
>
> So actually, quite a lot of bloated data ☺
>
> What am I doing wrong?
>
>
>
> Mit freundlichen Grüßen / With kind regards,
>
> Johnny Morano
>
> 
>
>
>
> *Johnny Morano  |  Principal Systems Engineer*
>
>
>
> PAY.ON GmbH  |  AN ACI WORLDWIDE COMPANY  |  WWW.PAYON.COM
> <http://www.payon.com/>
>
> Jakob-Haringer-Str. 1  |  5020 Salzburg  |  Austria
>
>
>
> This email message and any attachments may contain confidential,
> proprietary or non-public information. This information is intended solely
> for the designated recipient(s). If an addressing or transmission error has
> misdirected this email, please notify the sender immediately and destroy
> this email. Any review, dissemination, use or reliance upon this
> information by unintended recipients is prohibited. Any opinions expressed
> in this email are those of the author personally.
>
>
>
> *From:* Rémi Cura [mailto:remi.c...@gmail.com]
> *Sent:* Mittwoch, 2. März 2016 17:49
> *To:* Johnny Morano
> *Cc:* Alvaro Herrera; PostgreSQL General
>
> *Subject:* Re: [GENERAL] bloated postgres data folder, clean up
>
>
>
> Hey,
>
> this is quite the *opposite*.
>
> The function find files in the postgres database folder that are not used
> by the database.
>
> To use it :
>
>  * connect to the database you want to analyse ( **mandatory** ).
>
>  * create the function (execute function definition)
>
>  * Execute `SELECT * FROM find_useless_postgres_file('your_database_name')`
>
>
> This will output a list of files that are on the disk but not used by
> postgres,
>
> and so can be removed.
>
> To be extra sure, you should use oid2name programme to check that the
> useless files are really useless.
>
>
> For this :
>  * output the list of potential useless files with copy for instance
>   ex :
>   COPY ( SELECT file_name
>
>  FROM find_useless_postgres_file('your_database_name')
>
>) TO 'path_to_you_database_folder/potential_useless.txt'
>
>now you've got a file with a list of potential erroneous files.
>
>  * Then use oid2name
>
>   `$su postgres
>$cd path_to_you_database_folder
>
>$while read i; do oid2name -f "$i" -i -S -q -d your_database_name; done
> < potential_useless.txt
>   `
>
>   Nothing should show, meaning that every potential erroneous file
>has not been recognized by oid2name !
>
>   If you feel unconvinced, you can manually try oid2name on some
>of the potential erroneous files, to be extra sure.
>   It should not find anything.
>
>
>
>  * Now delete all the files in `potential_useless.txt`.
>
>   It could be wiser to not delete the files but rename those
>
>   (for instance, adding `.potentially_useless` as a postfix)
>
>   so if it breaks something, you have an easy way to revert everything.
>
>
>
> Anyway, use *-*extra extra*-* caution if you delete.
> Except a backup, there would be no easy way to correct a mistake.
>
> Cheers,
>
> Rémi-C
>
>
>
> 2016-03-02 15:38 GMT+01:00 Johnny Morano <johnny.mor...@payon.com>:
>
> Hi Remi!
>
>
>
> This SQL function you have provided, seems to return all valid files, is
> that correct? In my case, it returned all my ‘base/’ files. Is that normal?
>
> If yes, maybe you rename the function to ‘find_useful_postgres_files’ ;-)
>
>
>
> Could you explain in steps how to use this function to make a cleanup of
> bloated data? (like in an example with commands and example output, if
> possible of course)
>
>
>
> Thanks!
>
>
>
>
>
> Mit besten Grüßen / With best regards,
>
> Johnny Morano
>
> _

Re: [GENERAL] bloated postgres data folder, clean up

2016-03-02 Thread Johnny Morano
Hi,

So, I still think the SQL function isn't really working well.
Here’s what I did:

/data/postgres # psql

postgres=# \copy ( select * from find_useless_postgres_file('live') ) to 
/tmp/useless_files.csv delimiter ';' csv header;
postgres=# \q

/data/postgres # wc -l /tmp/useless_files.csv
7422 /tmp/useless_files.csv

# filter out the .## files, e.g.:
# 
48175847.37;/base/16398/48175847.37;/data/postgres/base/16398/48175847.37;1047420928
# 
48175847.36;/base/16398/48175847.36;/data/postgres/base/16398/48175847.36;1073741824
# 
48175847.35;/base/16398/48175847.35;/data/postgres/base/16398/48175847.35;1073741824
# 
48175847.34;/base/16398/48175847.34;/data/postgres/base/16398/48175847.34;1073741824
# 
48175847.33;/base/16398/48175847.33;/data/postgres/base/16398/48175847.33;1073741824
# 
48175847.32;/base/16398/48175847.32;/data/postgres/base/16398/48175847.32;1073741824
# 
48175847.31;/base/16398/48175847.31;/data/postgres/base/16398/48175847.31;1073741824
# 
48175847.30;/base/16398/48175847.30;/data/postgres/base/16398/48175847.30;1073741824
#
# because oid2name doesn't like them, gives error:
# /data/postgres # oid2name -f 48175847.30 -i -S -q -d live
# oid2name: query failed: ERROR:  invalid input syntax for type oid: 
"48175847.30"
# LINE 11:   (pg_catalog.pg_relation_filenode(c.oid) IN ('48175847.30')...
#^
#
# oid2name: query was: SELECT pg_catalog.pg_relation_filenode(c.oid) as 
"Filenode", relname as "Table Name"
# FROM pg_catalog.pg_class c
#  LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
#  LEFT JOIN pg_catalog.pg_database d ON d.datname = 
pg_catalog.current_database(),
#  pg_catalog.pg_tablespace t
# WHERE relkind IN ('r', 'm', 'i', 'S', 't') AND
#t.oid = CASE
#   WHEN reltablespace <> 0 THEN reltablespace
#   ELSE dattablespace
#END AND
#   (pg_catalog.pg_relation_filenode(c.oid) IN ('48175847.30'))
# ORDER BY relname

/data/postgres # cut -d ';' -f1 /tmp/useless_files.csv | cut -d'.' -f1 | sort 
-n | uniq >/tmp/potential_useless_oids.csv
/data/postgres # wc -l /tmp/potential_useless_oids.csv
1017 /tmp/potential_useless_oids.csv

# get a list of all used oids, instead of examining one by one
/data/postgres # oid2name -i -S -q -d live | awk '{print $1}' | sort -n | uniq 
>/tmp/used_oids.csv
/data/postgres # wc -l /tmp/used_oids.csv
940 /tmp/used_oids.csv

/data/postgres # while read i; do grep $i /tmp/used_oids.csv >/dev/null || 
(echo "$i" >>/tmp/not_in_use_oids.csv); done < /tmp/potential_useless_oids.csv
/data/postgres # wc -l /tmp/not_in_use_oids.csv
168 /tmp/not_in_use_oids.csv

/data/postgres # egrep -v 'fsm|vm' /tmp/not_in_use_oids.csv |wc -l
1
/data/postgres # egrep -v 'fsm|vm' /tmp/not_in_use_oids.csv
file_name
# The CSV header only

So, no bloated files.
But, using the function from 
https://wiki.postgresql.org/wiki/Show_database_bloat I get:

live=# select tbloat,wasted_space from table_bloat order by wasted_space desc 
limit 25;
tbloat │ wasted_space
┼──
1.0 │ 9976 kB
1.2 │ 98 GB
1.0 │ 97 MB
1.4 │ 96 kB
1.2 │ 920 kB
1.2 │ 88 kB
1.1 │ 88 kB
2.0 │ 8192 bytes
0.0 │ 8192 bytes
1.3 │ 8192 bytes
2.0 │ 8192 bytes
1.3 │ 8192 bytes
1.5 │ 8192 bytes
1.5 │ 8192 bytes
2.0 │ 8192 bytes
1.1 │ 8192 bytes
1.0 │ 8192 bytes
1.1 │ 8192 bytes
1.3 │ 8192 bytes
1.5 │ 8192 bytes
1.1 │ 80 kB
1.0 │ 7584 kB
1.6 │ 71 MB
1.0 │ 704 kB
1.1 │ 6968 kB
(25 rows)

So actually, quite a lot of bloated data ☺
What am I doing wrong?

Mit freundlichen Grüßen / With kind regards,
Johnny Morano


Johnny Morano  |  Principal Systems Engineer

PAY.ON GmbH  |  AN ACI WORLDWIDE COMPANY  |  
WWW.PAYON.COM<http://www.payon.com/>
Jakob-Haringer-Str. 1  |  5020 Salzburg  |  Austria

This email message and any attachments may contain confidential, proprietary or 
non-public information. This information is intended solely for the designated 
recipient(s). If an addressing or transmission error has misdirected this 
email, please notify the sender immediately and destroy this email. Any review, 
dissemination, use or reliance upon this information by unintended recipients 
is prohibited. Any opinions expressed in this email are those of the author 
personally.

From: Rémi Cura [mailto:remi.c...@gmail.com]
Sent: Mittwoch, 2. März 2016 17:49
To: Johnny Morano
Cc: Alvaro Herrera; PostgreSQL General
Subject: Re: [GENERAL] bloated postgres data folder, clean up

Hey,
this is quite the *opposite*.
The function find files in the postgres database folder that are not used by 
the database.
To use it :
 * connect to the database you want to analyse ( **mandatory** ).
 * create the function (execute function definition)
 * Execute `SELECT * FROM find_useless_

Re: [GENERAL] bloated postgres data folder, clean up

2016-03-02 Thread Rémi Cura
Hey,
this is quite the *opposite*.
The function finds files in the postgres database folder that are not used
by the database.

To use it :
 * connect to the database you want to analyse ( **mandatory** ).
 * create the function (execute function definition)
 * Execute `SELECT * FROM find_useless_postgres_file('your_database_name')`

This will output a list of files that are on the disk but not used by
postgres,
and so can be removed.

To be extra sure, you should use the oid2name program to check that the
useless files are really useless.


For this :
 * output the list of potentially useless files with COPY, for instance
  ex :
  COPY ( SELECT file_name
 FROM find_useless_postgres_file('your_database_name')
   ) TO 'path_to_your_database_folder/potential_useless.txt'

   now you've got a file with a list of potentially erroneous files.

 * Then use oid2name
  `$su postgres
   $cd path_to_your_database_folder
   $while read i; do oid2name -f "$i" -i -S -q -d your_database_name; done
< potential_useless.txt
  `

  Nothing should show, meaning that none of the potentially erroneous files
   has been recognized by oid2name!
  If you feel unconvinced, you can manually try oid2name on some
   of the potentially erroneous files, to be extra sure.
  It should not find anything.

 * Now delete all the files in `potential_useless.txt`.
  It could be wiser not to delete the files but to rename them
  (for instance, adding `.potentially_useless` as a suffix)
  so if it breaks something, you have an easy way to revert everything.

Anyway, use extra, extra caution if you delete.
Apart from a backup, there would be no easy way to correct a mistake.

Cheers,
Rémi-C

2016-03-02 15:38 GMT+01:00 Johnny Morano <johnny.mor...@payon.com>:

> Hi Remi!
>
>
>
> This SQL function you have provided, seems to return all valid files, is
> that correct? In my case, it returned all my ‘base/’ files. Is that normal?
>
> If yes, maybe you rename the function to ‘find_useful_postgres_files’ ;-)
>
>
>
> Could you explain in steps how to use this function to make a cleanup of
> bloated data? (like in an example with commands and example output, if
> possible of course)
>
>
>
> Thanks!
>
>
>
>
>
> Mit besten Grüßen / With best regards,
>
> Johnny Morano
>
> 
>
>
>
> *Johnny Morano  | Principal Systems Engineer*
>
>
>
> PAY.ON GmbH  |  AN ACI WORLDWIDE COMPANY  |  WWW.PAYON.COM
> <http://www.payon.com/>
>
> Jakob-Haringer-Str. 1  |  5020 Salzburg  |  Austria
>
> Registered at: LG Salzburg  |  Company number: FN 315081 f  |  VAT-ID:
> ATU64439405
>
> Managing Director: Christian Bamberger
>
>
>
>
>
>
>
>
> This email message and any attachments may contain confidential,
> proprietary or non-public information. This information is intended solely
> for the designated recipient(s). If an addressing or transmission error has
> misdirected this email, please notify the sender immediately and destroy
> this email. Any review, dissemination, use or reliance upon this
> information by unintended recipients is prohibited. Any opinions expressed
> in this email are those of the author personally.
>
>
>
> This message and any attachments have been scanned for viruses prior
> leaving PAY.ON; however, PAY.ON does not guarantee the security of this
> message and will not be responsible for any damages arising as a result of
> any virus being passed on or arising from any alteration of this message by
> a third party. PAY.ON may monitor e-mails sent to and from PAY.ON.
>
>
>
>
>
>
>
>
>
> *From:* pgsql-general-ow...@postgresql.org [mailto:
> pgsql-general-ow...@postgresql.org] *On Behalf Of *Rémi Cura
> *Sent:* Mittwoch, 2. März 2016 14:58
> *To:* Alvaro Herrera
> *Cc:* PostgreSQL General
> *Subject:* Re: [GENERAL] bloated postgres data folder, clean up
>
>
>
> Would gladly do it,
>
> but still this "wiki cooloff" stuff,
>
> can't create a page
>
> Cheers,
>
> Rémi-C
>
>
>
> 2016-02-29 20:44 GMT+01:00 Alvaro Herrera <alvhe...@2ndquadrant.com>:
>
> Rémi Cura wrote:
> > Hey dear list,
> > after a fex years of experiments and crash,
> > I ended up with a grossly bloated postgres folder.
> > I had about 8 Go of useless files.
>
> Would you add a new page to the wiki with this?
>
> https://wiki.postgresql.org/wiki/Category:Administrative_Snippets
>
> --
> Álvaro Herrera                http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>
>


Re: [GENERAL] bloated postgres data folder, clean up

2016-03-02 Thread Johnny Morano
Hi Remi!

This SQL function you have provided seems to return all valid files, is that
correct? In my case, it returned all my ‘base/’ files. Is that normal?
If yes, maybe you should rename the function to ‘find_useful_postgres_files’ ;-)

Could you explain in steps how to use this function to make a cleanup of 
bloated data? (like in an example with commands and example output, if possible 
of course)

Thanks!


Mit besten Grüßen / With best regards,
Johnny Morano


Johnny Morano  | Principal Systems Engineer

PAY.ON GmbH  |  AN ACI WORLDWIDE COMPANY  |  
WWW.PAYON.COM<http://www.payon.com/>
Jakob-Haringer-Str. 1  |  5020 Salzburg  |  Austria
Registered at: LG Salzburg  |  Company number: FN 315081 f  |  VAT-ID: 
ATU64439405
Managing Director: Christian Bamberger



This email message and any attachments may contain confidential, proprietary or 
non-public information. This information is intended solely for the designated 
recipient(s). If an addressing or transmission error has misdirected this 
email, please notify the sender immediately and destroy this email. Any review, 
dissemination, use or reliance upon this information by unintended recipients 
is prohibited. Any opinions expressed in this email are those of the author 
personally.

This message and any attachments have been scanned for viruses prior leaving 
PAY.ON; however, PAY.ON does not guarantee the security of this message and 
will not be responsible for any damages arising as a result of any virus being 
passed on or arising from any alteration of this message by a third party. 
PAY.ON may monitor e-mails sent to and from PAY.ON.




From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Rémi Cura
Sent: Mittwoch, 2. März 2016 14:58
To: Alvaro Herrera
Cc: PostgreSQL General
Subject: Re: [GENERAL] bloated postgres data folder, clean up

Would gladly do it,
but still this "wiki cooloff" stuff,
can't create a page
Cheers,
Rémi-C

2016-02-29 20:44 GMT+01:00 Alvaro Herrera 
<alvhe...@2ndquadrant.com<mailto:alvhe...@2ndquadrant.com>>:
Rémi Cura wrote:
> Hey dear list,
> after a fex years of experiments and crash,
> I ended up with a grossly bloated postgres folder.
> I had about 8 Go of useless files.

Would you add a new page to the wiki with this?

https://wiki.postgresql.org/wiki/Category:Administrative_Snippets

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [GENERAL] bloated postgres data folder, clean up

2016-03-02 Thread Rémi Cura
Would gladly do it,
but still this "wiki cooloff" stuff,
can't create a page
Cheers,
Rémi-C

2016-02-29 20:44 GMT+01:00 Alvaro Herrera :

> Rémi Cura wrote:
> > Hey dear list,
> > after a fex years of experiments and crash,
> > I ended up with a grossly bloated postgres folder.
> > I had about 8 Go of useless files.
>
> Would you add a new page to the wiki with this?
>
> https://wiki.postgresql.org/wiki/Category:Administrative_Snippets
>
> --
> Álvaro Herrera                http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


Re: [GENERAL] bloated postgres data folder, clean up

2016-02-29 Thread Alvaro Herrera
Rémi Cura wrote:
> Hey dear list,
> after a fex years of experiments and crash,
> I ended up with a grossly bloated postgres folder.
> I had about 8 Go of useless files.

Would you add a new page to the wiki with this?

https://wiki.postgresql.org/wiki/Category:Administrative_Snippets

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] bloated postgres data folder, clean up

2016-02-29 Thread Peter Devoy
> Hope this may be useful

Thanks for sharing!

Peter


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] bloated postgres data folder, clean up

2016-02-29 Thread Rémi Cura
Hey dear list,
after a few years of experiments and crashes,
I ended up with a grossly bloated postgres folder.
I had about 8 GB of useless files.
All of this is in a VirtualBox VM, so I'm sure I can reproduce it exactly, and
I fried my postgres folder a couple of times before getting it right.

Julien (Rouhaud) helped me find those useless files via SQL.
The idea is to list the files in the postgres data directory with `pg_ls_dir`,
then to check that each file name corresponds to something useful (using
pg_relation_filenode).
--
https://gist.github.com/Remi-C/926eaee04d61a7245eb8
--

To be sure, I export the list of found files,
then use oid2name to check that no file is recognized.

The files can then be deleted (using plpythonu in my case).

So far a vacuum full analyze raises no errors.
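
As an illustration of that last step, a minimal plpython sketch (assuming
plpythonu is installed; quarantine_file is just an illustrative name, and
paths are taken relative to the data directory, which is the server's working
directory). Renaming instead of deleting keeps the operation reversible, and
it should only ever be run on files that oid2name did not recognize:

CREATE OR REPLACE FUNCTION quarantine_file(path text) RETURNS void AS $$
# rename rather than delete, so the change can be reverted if anything breaks
import os
os.rename(path, path + '.potentially_useless')
$$ LANGUAGE plpythonu;

-- then call it on each verified entry of the exported file list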

Warning: for this to work, the SQL query must be run while connected to
the database you want to clean.

Hope this may be useful
Cheers,
Rémi-C


Re: [GENERAL] Why Postgres use a little memory on Windows.

2016-02-20 Thread Adrian Klaver

On 02/20/2016 10:39 AM, Francisco Olarte wrote:

On Sat, Feb 20, 2016 at 7:13 PM, Adrian Klaver
 wrote:
.

FROM
 sym_data d INNER JOIN sym_data_gap g ON g.status = 'GP'
 AND d.data_id BETWEEN g.start_id
 AND g.end_id

.

The thing that stands out to me is that I do not see that sym_data and
sym_data_gp are actually joined on anything.


Yes they are, although the formatting hid it somehow.

It is a classic,  data_gap defines intervals via start+end id over
data, he wants to join every data with the corresponding gap. It is  a
hard optimization problem without knowing more of the data
distributions, maybe the interval types and ginindexes can help him.
When faced with this kind of structure, depending on the data
distribution, I've solved it via two paralell queries ( gap sorted by
start plus end, data sorted by id, sweep them in paralell joining by
code, typical tape-update problem, works like a charm for
non-overlapping ranges and even for some overlapping ones with a
couple of queues  ) . And he seems to want all of the data ( sometime
this goes faster if you can add a couple of range conditions for
data.id / gap.start/end_id.


Thanks to you and Tom for enlightening me. I am going to have to spend 
some time puzzling this out to convert what you have shown into 
something that I can wrap my head around.





Also is it possible to see the schema definitions for the two tables?


My bet is on somethink like data.id ~serial primary key,
gap.start/end_id foreign key to that.

Francisco Olarte.




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why Postgres use a little memory on Windows.

2016-02-20 Thread Francisco Olarte
On Sat, Feb 20, 2016 at 7:37 PM, Tom Lane  wrote:
> It looks like the bitmap heap scan generally returns exactly one row for
> each outer row, which makes me wonder if the BETWEEN couldn't be replaced
> with some sort of equality.

Mm, I'm not good at reading explains, but that seems to confirm my
suspicion that the gaps partition the id range into non-overlapping ranges.

> But that might take some rethinking of the data.

If id is a series and each gap defines a range, he can do something with an
auxiliary table, like

select start_id as a, 0 as b from gaps where status = 'GP'
union all
select id as a, 1 as b from data
union all
select end_id - 1 as a, 2 as b from gaps where status = 'GP' -- end-1 to make the intervals half open
order by a, b

which would give all the ids in a with b=1  surrounded by (0,2) when
valid and by (2,0) when invalid.

and then, with a creative window clause or a small function, filter
that and join with data.id. I suppose adding a third column c, null on
b=1 and equal to b on b=0/2, and selecting the previous non-null value in the
sequence could do it, but that's somewhat above my window-fu; I'm more of
a code guy and would do it with two nested loops in a function.
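
A rough, untested sketch of that idea in SQL (reusing the sym_data /
sym_data_gap names from earlier in the thread and assuming the 'GP' gaps do
not overlap; the grp trick just carries the last boundary marker down to
every data row):

WITH marks AS (
    SELECT g.start_id AS a, 0 AS b FROM sym_data_gap g WHERE g.status = 'GP'
    UNION ALL
    SELECT d.data_id, 1 FROM sym_data d WHERE d.channel_id = 'sale_transaction'
    UNION ALL
    -- the end marker sorts after data rows at end_id, so BETWEEN stays inclusive
    SELECT g.end_id, 2 FROM sym_data_gap g WHERE g.status = 'GP'
), numbered AS (
    SELECT a, b,
           -- running count of boundary rows: each data row belongs to the
           -- group opened by the boundary just before it
           count(CASE WHEN b <> 1 THEN 1 END) OVER (ORDER BY a, b) AS grp
    FROM marks
), flagged AS (
    SELECT a, b,
           max(CASE WHEN b <> 1 THEN b END) OVER (PARTITION BY grp) AS last_boundary
    FROM numbered
)
SELECT a AS data_id
FROM flagged
WHERE b = 1 AND last_boundary = 0    -- last marker seen was a gap start
ORDER BY a;

The result is the set of data_ids that fall inside a 'GP' gap, which can then
be joined back to sym_data; whether it actually beats the nested loop
obviously depends on the data.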

Francisco Olarte.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why Postgres use a little memory on Windows.

2016-02-20 Thread Francisco Olarte
On Sat, Feb 20, 2016 at 7:13 PM, Adrian Klaver
 wrote:
.
> FROM
> sym_data d INNER JOIN sym_data_gap g ON g.status = 'GP'
> AND d.data_id BETWEEN g.start_id
> AND g.end_id
.
> The thing that stands out to me is that I do not see that sym_data and
> sym_data_gp are actually joined on anything.

Yes they are, although the formatting hid it somehow.

It is a classic: data_gap defines intervals via start+end ids over
data, and he wants to join every data row with the corresponding gap. It is a
hard optimization problem without knowing more about the data
distributions; maybe the interval types and GIN indexes can help him.
When faced with this kind of structure, depending on the data
distribution, I've solved it via two parallel queries (gaps sorted by
start plus end, data sorted by id, sweep them in parallel joining by
code, the typical tape-update problem; it works like a charm for
non-overlapping ranges and even for some overlapping ones with a
couple of queues). And he seems to want all of the data (sometimes
this goes faster if you can add a couple of range conditions for
data.id / gap.start_id/end_id).

> Also is it possible to see the schema definitions for the two tables?

My bet is on something like data.id being a ~serial primary key, with
gap.start_id/end_id foreign keys to that.

Francisco Olarte.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why Postgres use a little memory on Windows.

2016-02-20 Thread Tom Lane
Adrian Klaver  writes:
> Took liberty of reformatting the above here:
> ...
> FROM
>  sym_data d INNER JOIN sym_data_gap g ON g.status = 'GP'
>  AND d.data_id BETWEEN g.start_id
>  AND g.end_id
> WHERE
>  d.channel_id = 'sale_transaction'
> ORDER BY
>  d.data_id ASC;

> The thing that stands out to me is that I do not see that sym_data and 
> sym_data_gp are actually joined on anything.

The "d.data_id BETWEEN g.start_id AND g.end_id" part is a join condition
... but not one that can be handled by either hash or merge join, because
those require simple equality join conditions.  So the nestloop plan shown
here is really about as good as you're going to get without redesigning
the query and/or the data representation.

It looks like the bitmap heap scan generally returns exactly one row for
each outer row, which makes me wonder if the BETWEEN couldn't be replaced
with some sort of equality.  But that might take some rethinking of the
data.
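
For what it's worth, one possible direction for that rethinking, as a hedged
sketch only: it needs range types, so PostgreSQL 9.2 or later rather than the
9.0 server from the original post, and it assumes data_id is a bigint and the
'GP' gaps do not overlap.

ALTER TABLE sym_data_gap ADD COLUMN id_range int8range;
UPDATE sym_data_gap SET id_range = int8range(start_id, end_id, '[]');
CREATE INDEX sym_data_gap_id_range_idx ON sym_data_gap USING gist (id_range);

-- the BETWEEN join condition then becomes a containment test the GiST index can use
SELECT d.data_id, d.table_name, d.event_type
FROM sym_data d
     JOIN sym_data_gap g ON g.status = 'GP' AND g.id_range @> d.data_id
WHERE d.channel_id = 'sale_transaction'
ORDER BY d.data_id ASC;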

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why Postgres use a little memory on Windows.

2016-02-20 Thread Adrian Klaver

On 02/20/2016 08:46 AM, tuanhoanganh wrote:

Hello

I have a bad query on PostgreSQL 9.0.23 - 64bit - Windows 2012 R2 - 48GB Ram

explain analyze select d.data_id, d.table_name, d.event_type,
d.row_data, d.pk_data, d.old_data, d.create_time, d.trigger_hist_id,
d.channel_id, d.transaction_id, d.source_node_id, d.external_data, ''
from sym_data d inner join sym_data_gap g on g.status='GP' and d.data_id
between g.start_id and g.end_id where d.channel_id='sale_transaction'
order by d.data_id asc;


Took liberty of reformatting the above here:
http://sqlformat.darold.net/

EXPLAIN ANALYZE
SELECT
d.data_id,
d.table_name,
d.event_type,
d.row_data,
d.pk_data,
d.old_data,
d.create_time,
d.trigger_hist_id,
d.channel_id,
d.transaction_id,
d.source_node_id,
d.external_data,
''
FROM
sym_data d INNER JOIN sym_data_gap g ON g.status = 'GP'
AND d.data_id BETWEEN g.start_id
AND g.end_id
WHERE
d.channel_id = 'sale_transaction'
ORDER BY
d.data_id ASC;

The thing that stands out to me is that I do not see that sym_data and 
sym_data_gap are actually joined on anything.


Also is it possible to see the schema definitions for the two tables?





Here is result

Nested Loop  (cost=319.42..4879348246.58 rows=32820035265 width=1525) (actual 
time=64656.747..5594654.189 rows=3617090 loops=1)
   ->  Index Scan using sym_data_pkey on sym_data d  (cost=0.00..3671742.82 
rows=3867095 width=1525) (actual time=9.775..12465.153 rows=3866359 loops=1)
 Filter: ((channel_id)::text = 'sale_transaction'::text)
   ->  Bitmap Heap Scan on sym_data_gap g  (cost=319.42..1133.51 rows=8487 
width=8) (actual time=1.438..1.439 rows=1 loops=3866359)
 Recheck Cond: ((d.data_id >= g.start_id) AND (d.data_id <= g.end_id))
 Filter: (g.status = 'GP'::bpchar)
 ->  Bitmap Index Scan on sym_data_gap_pkey  (cost=0.00..317.30 
rows=8487 width=0) (actual time=1.436..1.436 rows=1 loops=3866359)
   Index Cond: ((d.data_id >= g.start_id) AND (d.data_id <= 
g.end_id))

http://explain.depesz.com/s/c3DT


I have run vaccum full. Here is my PostgreSQL config

shared_buffers = 2GB
work_mem = 64MB
maintenance_work_mem = 1GB
wal_buffers = 256
effective_cache_size = 4GB
checkpoint_segments = 256
wal_level = hot_standby
max_wal_senders = 5
wal_keep_segments = 256
random_page_cost = 3.5
autovacuum_vacuum_threshold = 1000
autovacuum_analyze_threshold = 250
max_locks_per_transaction = 2000

When I check taskmanager, I found postgres process is user 4-5MB

What happened with my PostgreSQL. Please help me

Thank you in advance.

Tuan Hoang Anh






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Why Postgres use a little memory on Windows.

2016-02-20 Thread tuanhoanganh
Hello

I have a bad query on PostgreSQL 9.0.23 - 64bit - Windows 2012 R2 - 48GB Ram

explain analyze select d.data_id, d.table_name, d.event_type, d.row_data,
d.pk_data, d.old_data, d.create_time, d.trigger_hist_id, d.channel_id,
d.transaction_id, d.source_node_id, d.external_data, '' from sym_data d
inner join sym_data_gap g on g.status='GP' and d.data_id between g.start_id
and g.end_id where d.channel_id='sale_transaction' order by d.data_id asc;

Here is result

Nested Loop  (cost=319.42..4879348246.58 rows=32820035265 width=1525)
(actual time=64656.747..5594654.189 rows=3617090 loops=1)
  ->  Index Scan using sym_data_pkey on sym_data d
(cost=0.00..3671742.82 rows=3867095 width=1525) (actual
time=9.775..12465.153 rows=3866359 loops=1)
Filter: ((channel_id)::text = 'sale_transaction'::text)
  ->  Bitmap Heap Scan on sym_data_gap g  (cost=319.42..1133.51
rows=8487 width=8) (actual time=1.438..1.439 rows=1 loops=3866359)
Recheck Cond: ((d.data_id >= g.start_id) AND (d.data_id <= g.end_id))
Filter: (g.status = 'GP'::bpchar)
->  Bitmap Index Scan on sym_data_gap_pkey  (cost=0.00..317.30
rows=8487 width=0) (actual time=1.436..1.436 rows=1 loops=3866359)
  Index Cond: ((d.data_id >= g.start_id) AND (d.data_id <=
g.end_id))

http://explain.depesz.com/s/c3DT


I have run vacuum full. Here is my PostgreSQL config

shared_buffers = 2GB
work_mem = 64MB
maintenance_work_mem = 1GB
wal_buffers = 256
effective_cache_size = 4GB
checkpoint_segments = 256
wal_level = hot_standby
max_wal_senders = 5
wal_keep_segments = 256
random_page_cost = 3.5
autovacuum_vacuum_threshold = 1000
autovacuum_analyze_threshold = 250
max_locks_per_transaction = 2000

When I check Task Manager, I found the postgres process is only using 4-5 MB.

What happened with my PostgreSQL? Please help me.

Thank you in advance.

Tuan Hoang Anh


Re: [GENERAL] Charlotte Postgres User Group

2016-02-19 Thread Don Parris
Definitely putting this on my calendar.  I have not been aware of other
PostgreSQL users here - would love to meet some other users.

Don

On Wed, Feb 17, 2016 at 10:39 AM, Boyan Botev  wrote:

> If you live near or around Charlotte, please join us for the inaugural
> meeting of the Charlotte PUG on March 1, followed by a second meeting on
> April 11 featuring Bruce Momjian. More information about the two events can
> be found here:
>
> http://www.meetup.com/Charlotte-PostgreSQL-User-Group
>
> Also if you are travelling through Charlotte, NC or you just like to visit
> the "Queen City" you are welcome to stop by an give a talk to the Charlotte
> PUG. Just contact me when you would be in town and I can pencil you in for
> a talk or a presentation.
>
> Thanks,
> Boyan Botev
>



-- 
D.C. Parris, FMP, Linux+, ESL Certificate
Minister, Security/FM Coordinator, Free Software Advocate
http://dcparris.net/ 

GPG Key ID: F5E179BE


[GENERAL] Charlotte Postgres User Group

2016-02-17 Thread Boyan Botev
If you live near or around Charlotte, please join us for the inaugural
meeting of the Charlotte PUG on March 1, followed by a second meeting on
April 11 featuring Bruce Momjian. More information about the two events can
be found here:

http://www.meetup.com/Charlotte-PostgreSQL-User-Group

Also if you are travelling through Charlotte, NC or you just like to visit
the "Queen City" you are welcome to stop by an give a talk to the Charlotte
PUG. Just contact me when you would be in town and I can pencil you in for
a talk or a presentation.

Thanks,
Boyan Botev


Re: [GENERAL] Auotmated postgres failover

2016-01-22 Thread Jehan-Guillaume de Rorthais
Le Thu, 21 Jan 2016 11:34:18 -0800,
John R Pierce  a écrit :

> On 1/21/2016 11:07 AM, jwienc...@comcast.net wrote:
> >
> >
> > I'm looking for a tool to automate PostgreSQL cluster management 
> > failover in the event the master database were to become unavailable. 
> >   Currently are manually issuing a "pg_ctl promote"  once we become 
> > aware that the master database has crashed.
> >
> >
> > Is repmgr a via solution?  Please pass along experiences with repmgr.
> >
> > Are there any other  tools available to automatically issue the 
> > "promote" in the event of a master database crash?

Yes, 3 different Pacemaker resource agents exist for PostgreSQL:

 * the official one, in the package "resource-agents" on most Linux distributions.
   This one is pretty complex and supports both multistate and stateless setups.
 * a simple, stupid, easy and stateless agent:
   https://github.com/dalibo/pgsql-resource-agent/tree/master/stateless
   This one is fine for a 2-node cluster.
 * a multistate-aware agent:
   https://github.com/dalibo/pgsql-resource-agent/tree/master/multistate
   This one is nice for multi-node clusters, searching for the best known slave
   to elect after the master is lost.

Some important docs are available in the pgsql-resource-agent (PRA) repo:
 * https://github.com/dalibo/pgsql-resource-agent/blob/master/FENCING.md
 * the stateless:
   
https://github.com/dalibo/pgsql-resource-agent/blob/master/stateless/README.md
 * the multistate:
   
https://github.com/dalibo/pgsql-resource-agent/blob/master/multistate/README.md
   
https://github.com/dalibo/pgsql-resource-agent/blob/master/multistate/INSTALL.md
   
https://github.com/dalibo/pgsql-resource-agent/blob/master/multistate/docs/Quick_Start.md

> repmgr is a tool you could use in conjunction with a generic cluster 
> management system like linuxha/heartbeat, vcs, etc.
> 
> the most difficult part is reliably determining that A) the master has 
> crashed, and B) fencing the failed old master so it doesn't wake up and 
> think its still in charge.

+1 
-- 
Jehan-Guillaume de Rorthais
Dalibo


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Auotmated postgres failover

2016-01-21 Thread John R Pierce

On 1/21/2016 11:07 AM, jwienc...@comcast.net wrote:



I'm looking for a tool to automate PostgreSQL cluster management 
failover in the event the master database were to become unavailable. 
  Currently we are manually issuing a "pg_ctl promote"  once we become 
aware that the master database has crashed.



Is repmgr a viable solution?  Please pass along experiences with repmgr.

Are there any other  tools available to automatically issue the 
"promote" in the event of a master database crash?



repmgr is a tool you could use in conjunction with a generic cluster 
management system like linuxha/heartbeat, vcs, etc.


the most difficult part is reliably determining that A) the master has 
crashed, and B) fencing the failed old master so it doesn't wake up and 
think it's still in charge.




--
john r pierce, recycling bits in santa cruz


