Re: [GENERAL] postgres sometimes returns no data

2015-11-12 Thread John R Pierce

On 11/12/2015 12:07 PM, Venkata Balaji N wrote:

Around that time I just see a bunch of "checkpoints are occurring too
frequently...".  About a minute later and coincidentally around the time the
pentaho job ends, I see "unexpected eof...could not receive data..target
machine actively refused it.".  The reason this last item interests me is
that this component is the last postgres item to run in the job.  And the
next/last component wouldn't run long when there is an empty file created by
the problematic component.  The file is empty whenever this behavior occurs.



the too frequent checkpoint thing is purely a warning that you probably 
want to increase the checkpoint_segments count; I usually set it to 30 
or 50 or so on a server that sees these warnings during heavy 
activity.  I've never known it to cause data loss.
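
For what it's worth, the relevant 9.1-era settings look like this in
postgresql.conf (the values here are illustrative starting points, not
tuned recommendations):

    checkpoint_segments = 30            # default is 3; raise it to space checkpoints out
    checkpoint_completion_target = 0.9  # spread checkpoint I/O over more of the interval
    checkpoint_warning = 30s            # warn when checkpoints come faster than this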


the unexpected EOF means the socket between the client and the 
database server was closed unexpectedly; usually this means the client 
application exited ungracefully.


HOWEVER, if this is a Linux server, I'd go looking in the system logs 
for signs that the evil 'OOM Killer' has been active.  OOM Killer is an 
abomination on a database server: it more or less randomly kills processes 
that use a lot of memory, and this can cause all sorts of problems.  If this 
is happening, you should A) tune OOM Killer to be MUCH less active, and B) 
reduce your postgresql memory usage to better suit your system environment.
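
The usual Linux-side mitigations look something like this (values are
illustrative; see the PostgreSQL manual's notes on Linux memory overcommit
before applying them):

    # /etc/sysctl.conf -- disable memory overcommit so the OOM killer
    # is far less likely to fire in the first place
    vm.overcommit_memory = 2
    vm.overcommit_ratio = 80     # illustrative; size to your RAM and swap

    # and/or exempt the postmaster at startup, from the init script
    # that launches postgres:
    echo -1000 > /proc/self/oom_score_adj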




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] postgres sometimes returns no data

2015-11-12 Thread db042190
Thanks. The fact that the query runs under pentaho at least half of the time
(it used to be most of the time) makes me think it will run on the db. I'll try
to get a handle on the cpu/handle info you requested.





Re: [GENERAL] postgres sometimes returns no data

2015-11-12 Thread Adrian Klaver

On 11/12/2015 12:54 PM, db042190 wrote:

There is something I need to correct.  Coming out of one of the 3 switches
are two branches that cause updates to the same "interaction log" table in
postgres, not text files like the other 2 switches.


Could you show a sample of the actual log error messages?

Not sure what the log line prefix is, you might want to take a look here:

http://www.postgresql.org/docs/9.1/interactive/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT

log_line_prefix

and add escapes that would help identify who is doing what.
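
For example, a prefix along these lines (illustrative, not a recommendation)
stamps every log line with a timestamp, process id, user and database, which
makes it much easier to tie a server-side error back to a particular client:

    log_line_prefix = '%t [%p] user=%u db=%d '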









--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Does PostgreSQL ever create indexes on its own?

2015-11-12 Thread Doiron, Daniel
I’m troubleshooting a schema and found this:

Indexes:
"pk_patient_diagnoses" PRIMARY KEY, btree (id)
"index_4341548" UNIQUE, btree (id)
"idx_patient_diagnoses_deleted" btree (deleted)
"idx_patient_diagnoses_diagnosis_type_id" btree (diagnosis_type_id)
"idx_patient_diagnoses_icd10" btree (icd10)
"idx_patient_diagnoses_icd9" btree (diagnosis_code)
"idx_patient_diagnoses_is_unknown" btree (is_unknown)
"idx_patient_diagnoses_modified" btree (modified)
"idx_patient_diagnoses_patient_id" btree (patient_id)
"idx_patient_diagnoses_uuid" btree (uuid)
"index_325532921" btree (modified)
"index_4345603" btree (deleted)
"index_4349516" btree (diagnosis_type_id)
"index_4353417" btree (icd10)
"index_4384754" btree (diagnosis_code)
"index_4418849" btree (is_unknown)
"index_4424101" btree (patient_id)
"index_4428458" btree (uuid)

My question is whether these “index_*” indexes could have been created by 
postgresql or whether I have an errant developer using some kind of third-party 
tool?


[GENERAL] postgres sometimes returns no data

2015-11-12 Thread db042190
Hi.  We have a postgres 9.1 query in a pentaho job (table input component
contains the query and is followed by various switches) that runs nightly. 
More and more frequently (about half the time now), the query returns no
data (or appears to return no data, see last paragraph) when there is
clearly data to be extracted.  And pentaho continues as if nothing is wrong. 
Whenever I simply rerun the component manually, the query returns data and I
am able to complete the job.

I'm looking at what I think is a postgres log around one of the times this
last happened.  And trying to correlate it to the time I see in the pentaho
job's log when this component starts up.

Around that time I just see a bunch of "checkpoints are occurring too
frequently...".  About a minute later and coincidentally around the time the
pentaho job ends, I see "unexpected eof...could not receive data..target
machine actively refused it.".  The reason this last item interests me is
that this component is the last postgres item to run in the job.  And the
next/last component wouldn't run long when there is an empty file created by
the problematic component.  The file is empty whenever this behavior occurs.

I'm torn not knowing if this is a postgres issue or a pentaho issue.  The fact
that the pentaho log doesn't show the switches as having "finished
processing" makes me think either pentaho wouldn't execute those switches
when there is no data or pentaho is the problem.  The fact that it's
happening more often makes me think the db is somehow involved in the
problem.

I did a reindex and vacuum recently hoping it would alleviate the problem. 
Not so.  Any advice would be appreciated.





Re: [GENERAL] postgres sometimes returns no data

2015-11-12 Thread Adrian Klaver

On 11/12/2015 11:49 AM, db042190 wrote:

Hi.  We have a postgres 9.1 query in a pentaho job (table input component
contains the query and is followed by various switches) that runs nightly.
More and more frequently (about half the time now), the query returns no
data (or appears to return no data, see last paragraph) when there is
clearly data to be extracted.  And pentaho continues as if nothing is wrong.
Whenever I simply rerun the component manually, the query returns data and I
am able to complete the job.

I'm looking at what I think is a postgres log around one of the times this
last happened.  And trying to correlate it to the time I see in the pentaho
job's log when this component starts up.

Around that time I just see a bunch of "checkpoints are occurring too
frequently...".  About a minute later and coincidentally around the time the
pentaho job ends, I see "unexpected eof...could not receive data..target
machine actively refused it.".  The reason this last item interests me is
that this component is the last postgres item to run in the job.  And the
next/last component wouldn't run long when there is an empty file created by
the problematic component.  The file is empty whenever this behavior occurs.

I'm torn not knowing if this is a postgres issue or a pentaho issue.  The fact
that the pentaho log doesn't show the switches as having "finished
processing" makes me think either pentaho wouldn't execute those switches
when there is no data or pentaho is the problem.  The fact that it's
happening more often makes me think the db is somehow involved in the
problem.

I did a reindex and vacuum recently hoping it would alleviate the problem.
Not so.  Any advice would be appreciated.


The client(Pentaho) and server on the same machine, network, or remotely 
connected?


What OS and version, on either end if appropriate?

What does the Postgres log show just before and after ""unexpected 
eof...could not receive data.." ?


What do the OS system logs show around the time of the error?

What is/are the job(s) doing?












--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] postgres sometimes returns no data

2015-11-12 Thread Venkata Balaji N
On Fri, Nov 13, 2015 at 6:49 AM, db042190 wrote:

> Hi.  We have a postgres 9.1 query in a pentaho job (table input component
> contains the query and is followed by various switches) that runs nightly.
> More and more frequently (about half the time now), the query returns no
> data (or appears to return no data, see last paragraph) when there is
> clearly data to be extracted.  And pentaho continues as if nothing is
> wrong.
> Whenever I simply rerun the component manually, the query returns data and
> I
> am able to complete the job.
>
> I'm looking at what I think is a postgres log around one of the times this
> last happened.  And trying to correlate it to the time I see in the pentaho
> job's log when this component starts up.
>
> Around that time I just see a bunch of "checkpoints are occurring too
> frequently...".  About a minute later and coincidentally around the time
> the
> pentaho job ends, I see "unexpected eof...could not receive data..target
> machine actively refused it.".  The reason this last item interests me is
> that this component is the last postgres item to run in the job.  And the
> next/last component wouldn't run long when there is an empty file created
> by
> the problematic component.  The file is empty whenever this behavior
> occurs.
>

When checkpoints are occurring too frequently, it is important that
you tune the checkpoint parameters to ensure optimal checkpoint behavior.
Do you see any IO spikes on the postgres server?

As the above message says "could not receive data" ...

I suspect the query is returning the rows at the postgres end without any
issues. Did you execute the query on the Postgres database and see if it
returns rows? If yes, then it is not a query-related issue; it has
something to do with performance. Do you notice any other activities on
the database when this query is running which are possibly generating high
CPU / high IO and could block or abort the application connections?

I'm torn not knowing if this is a postgres issue or pentaho issue.  The fact
> that the pentaho log doesn't show the switches as having "finished
> processing" makes me think either pentaho wouldn't execute those switches
> when there is no data or pentaho is the problem.  The fact that it's
> happening more often makes me think the db is somehow involved in the
> problem.
>
> I did a reindex and vacuum recently hoping it would alleviate the problem.
> Not so.  Any advice would be appreciated.
>

The first thing to do is to tune the checkpoint parameters.

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] postgres sometimes returns no data

2015-11-12 Thread db042190
Thanks both for responding.

Because the checkpoint settings probably haven't changed in a long, long time,
and we see as many of those warnings on good nights as bad, I'm going to make
those priority 2, unless there is a cause and effect you can explain between
checkpointing and what we are experiencing.  I'll have to find the config
setting too.

Yes, client and server are on the same machine. 

Windows 7 professional, sp1, 32 gb, 64 bit.

Before the unexpected eof, there are lots of checkpoint warnings going back
to and prior to what I am presuming is the start-up time for this component.
After it are a bunch more unexpected eofs followed by more checkpoint
warnings.

I haven't looked at the OS logs but will post back here what I find.

The job loads a bunch of postgres tables, but in this component, which
follows all of that, a postgres extract query is supposed to run and feed 3
switches in parallel, which in turn direct or divert certain text file
writes.  The component that follows this one does a remote call that
causes an app to import one of the files this problem component creates.





Re: [GENERAL] postgres sometimes returns no data

2015-11-12 Thread Paul Jungwirth

On 11/12/2015 11:49 AM, db042190 wrote:

I see "unexpected eof...could not receive data..target
machine actively refused it.".


That sounds like the same error message as discussed here:

http://www.postgresql.org/message-id/4d75289d.9020...@techbaza.pl

Could it be a problem of too many open connections? Possibly some 
hanging around longer than they should?
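
A quick way to check, for what it's worth (both statements work on 9.1):

    SELECT count(*) FROM pg_stat_activity;   -- current backends
    SHOW max_connections;                    -- configured ceiling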


Paul






Re: [GENERAL] postgres sometimes returns no data

2015-11-12 Thread db042190
There is something I need to correct.  Coming out of one of the 3 switches
are two branches that cause updates to the same "interaction log" table in
postgres, not text files like the other 2 switches.





Re: [GENERAL] Does PostgreSQL ever create indexes on its own?

2015-11-12 Thread Guillaume Lelarge
2015-11-12 23:21 GMT+01:00 Doiron, Daniel:

> I’m troubleshooting a schema and found this:
>
> Indexes:
> "pk_patient_diagnoses" PRIMARY KEY, btree (id)
> "index_4341548" UNIQUE, btree (id)
> "idx_patient_diagnoses_deleted" btree (deleted)
> "idx_patient_diagnoses_diagnosis_type_id" btree (diagnosis_type_id)
> "idx_patient_diagnoses_icd10" btree (icd10)
> "idx_patient_diagnoses_icd9" btree (diagnosis_code)
> "idx_patient_diagnoses_is_unknown" btree (is_unknown)
> "idx_patient_diagnoses_modified" btree (modified)
> "idx_patient_diagnoses_patient_id" btree (patient_id)
> "idx_patient_diagnoses_uuid" btree (uuid)
> "index_325532921" btree (modified)
> "index_4345603" btree (deleted)
> "index_4349516" btree (diagnosis_type_id)
> "index_4353417" btree (icd10)
> "index_4384754" btree (diagnosis_code)
> "index_4418849" btree (is_unknown)
> "index_4424101" btree (patient_id)
> "index_4428458" btree (uuid)
>
> My question is whether these “index_*” indexes could have been created by
> postgresql or whether I have an errant developer using some kind of
> third-party tool?
>

PostgreSQL doesn't create indexes on its own, except for primary keys,
unique constraints, and exclusion constraints.

So, that must be something (or someone) else.
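
One way to audit a table like this is to pull the definitions out of the
pg_indexes view and look for pairs that differ only in name -- a sketch,
using the table name from the original post:

    SELECT indexname, indexdef
    FROM pg_indexes
    WHERE tablename = 'patient_diagnoses'
    ORDER BY substring(indexdef from 'USING .*');  -- sort by definition, ignoring the name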


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


Re: [GENERAL] Does PostgreSQL ever create indexes on its own?

2015-11-12 Thread Thomas Kellerer

Doiron, Daniel schrieb am 12.11.2015 um 23:21:

I’m troubleshooting a schema and found this:

Indexes:
 "pk_patient_diagnoses" PRIMARY KEY, btree (id)
 "index_4341548" UNIQUE, btree (id)
 "idx_patient_diagnoses_deleted" btree (deleted)
 "idx_patient_diagnoses_diagnosis_type_id" btree (diagnosis_type_id)
 "idx_patient_diagnoses_icd10" btree (icd10)
 "idx_patient_diagnoses_icd9" btree (diagnosis_code)
 "idx_patient_diagnoses_is_unknown" btree (is_unknown)
 "idx_patient_diagnoses_modified" btree (modified)
 "idx_patient_diagnoses_patient_id" btree (patient_id)
 "idx_patient_diagnoses_uuid" btree (uuid)
 "index_325532921" btree (modified)
 "index_4345603" btree (deleted)
 "index_4349516" btree (diagnosis_type_id)
 "index_4353417" btree (icd10)
 "index_4384754" btree (diagnosis_code)
 "index_4418849" btree (is_unknown)
 "index_4424101" btree (patient_id)
 "index_4428458" btree (uuid)

My question is whether these “index_*” indexes could have been created by 
postgresql or whether I have an errant developer using some kind of third-party 
tool?


The only index that Postgres "automatically" creates is the unique index 
supporting a primary key or a unique constraint.

But apart from that, Postgres never creates indexes on its own.

So from the list above, only pk_patient_diagnoses has (most probably) been 
created automatically. Everything else was created manually.







Re: [GENERAL] Does PostgreSQL ever create indexes on its own?

2015-11-12 Thread Tom Lane
Thomas Kellerer  writes:
> Doiron, Daniel schrieb am 12.11.2015 um 23:21:
>> I’m troubleshooting a schema and found this:
>> 
>> Indexes:
>> "pk_patient_diagnoses" PRIMARY KEY, btree (id)
>> "index_4341548" UNIQUE, btree (id)
>> "idx_patient_diagnoses_deleted" btree (deleted)
>> "idx_patient_diagnoses_diagnosis_type_id" btree (diagnosis_type_id)
>> "idx_patient_diagnoses_icd10" btree (icd10)
>> "idx_patient_diagnoses_icd9" btree (diagnosis_code)
>> "idx_patient_diagnoses_is_unknown" btree (is_unknown)
>> "idx_patient_diagnoses_modified" btree (modified)
>> "idx_patient_diagnoses_patient_id" btree (patient_id)
>> "idx_patient_diagnoses_uuid" btree (uuid)
>> "index_325532921" btree (modified)
>> "index_4345603" btree (deleted)
>> "index_4349516" btree (diagnosis_type_id)
>> "index_4353417" btree (icd10)
>> "index_4384754" btree (diagnosis_code)
>> "index_4418849" btree (is_unknown)
>> "index_4424101" btree (patient_id)
>> "index_4428458" btree (uuid)

> So from the list above, only pk_patient_diagnoses has (most probably) been 
> created automatically. Everything else was created manually.

Also, *none* of those index names match what Postgres would choose of its
own accord.  The built-in naming schemes can be exhibited thus:

regression=# create table foo (f1 int primary key, f2 int unique, f3 int);
CREATE TABLE
regression=# create index on foo(f3);
CREATE INDEX
regression=# \d foo
  Table "public.foo"
 Column |  Type   | Modifiers 
+-+---
 f1 | integer | not null
 f2 | integer | 
 f3 | integer | 
Indexes:
"foo_pkey" PRIMARY KEY, btree (f1)
"foo_f2_key" UNIQUE CONSTRAINT, btree (f2)
"foo_f3_idx" btree (f3)

There are some additional rules for abbreviating very long derived index
names, and for dealing with index name collisions, but none of those would
have come into play here.  The index names Daniel shows must all have been
specified in DDL commands, either as the name of a constraint or as the
name of an index.

regards, tom lane




Re: [GENERAL] Does PostgreSQL ever create indexes on its own?

2015-11-12 Thread Alvaro Herrera
Thomas Kellerer wrote:
> Doiron, Daniel schrieb am 12.11.2015 um 23:21:
> >I’m troubleshooting a schema and found this:
> >
> >Indexes:
> > "pk_patient_diagnoses" PRIMARY KEY, btree (id)

> The only index that Postgres "automatically" creates is the unique index 
> supporting a primary key or a unique constraint.
> 
> But apart from that, Postgres never creates indexes on its own.
> 
> So from the list above, only pk_patient_diagnoses has (most probably) been 
> created automatically. Everything else was created manually.

As I recall, the naming convention is to append "_pkey", not to prepend
"pk_", so not even that one.  (Of course, you can tell it what name to
use when creating the constraint, which is what was done here.)

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[GENERAL] pg_restore tells schema “test” already exists but it isn't actually

2015-11-12 Thread Alex Luya
When restoring a dump like this:

pg_restore --clean --create --exit-on-error --dbname=test test.tar

these error messages got printed out:

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 21; 2615 80924 SCHEMA test test
pg_restore: [archiver (db)] could not execute query: ERROR:  schema
"test" already exists
Command was: CREATE SCHEMA test;

but when:

select schema_name from information_schema.schemata;

these got printed out

schema_name
pg_toast
pg_temp_1
pg_toast_temp_1
pg_catalog
public
information_schema

It seems like schema "test" doesn't exist yet, so why did I get this kind of
error?


Re: [GENERAL] Issues with german locale on CentOS 5,6,7

2015-11-12 Thread Thomas Munro
On Thu, Oct 8, 2015 at 12:49 PM, Thomas Munro  wrote:

> On Thu, Oct 8, 2015 at 5:52 AM, Peter Geoghegan wrote:
> > On Wed, Oct 7, 2015 at 6:25 AM, Tom Lane  wrote:
> >> The only real way out of such a situation is to REINDEX affected
> indexes.
> >> Refusing to start the server not only doesn't contribute to a solution,
> >> but makes it impossible to fix manually.
> >
> > I agree that that would be almost as bad as carrying on, because there
> > is no reason to think that the locale thing can easily be rolled back.
> > That was my point, in fact.
>
> I have contemplated a maintenance script that would track either the
> md5 checksums of the /usr/lib/locale/*/LC_COLLATE files or the version
> of installed locale packages and automatically reindex things when
> they change (I guess after restarting the cluster to clear any glibc
> caches that might be lurking in long running backends).  Or at least
> tell me that's needed.  Obviously completely OS-specific...
>

I got around to trying this on a recent rainy day:

https://github.com/macdice/check_pg_collations/blob/master/check_pg_collations

It needs Python and psycopg2.  It notices when LC_COLLATE files' checksums
change, and generates REINDEX commands on stdout.  It seems to work OK on
Debian, but I haven't tested much.  Pull requests welcome :-)  But I guess
it would be much cooler as a pure plpgsql extension.
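
The core of the idea fits in a few lines of shell (paths assume a glibc
layout like Debian's; the real script linked above does considerably more):

    # snapshot collation-file checksums and compare with the previous run
    md5sum /usr/lib/locale/*/LC_COLLATE > /var/tmp/lc_collate.md5.new
    if ! cmp -s /var/tmp/lc_collate.md5 /var/tmp/lc_collate.md5.new; then
        echo "collation definitions changed; REINDEX indexes on text columns"
    fi
    mv /var/tmp/lc_collate.md5.new /var/tmp/lc_collate.md5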

-- 
Thomas Munro
http://www.enterprisedb.com


Re: [GENERAL] pg_restore tells schema “test” already exists but it isn't actually

2015-11-12 Thread Adrian Klaver

On 11/12/2015 06:53 PM, Alex Luya wrote:

When restoring a dump like this:

pg_restore --clean --create --exit-on-error --dbname=test test.tar

these error messages got printed out:

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 21; 2615 80924 SCHEMA test test
pg_restore: [archiver (db)] could not execute query: ERROR:  schema
"test" already exists
Command was: CREATE SCHEMA test;

but when:

select schema_name from information_schema.schemata;

these got printed out

schema_name
pg_toast
pg_temp_1
pg_toast_temp_1
pg_catalog
public
information_schema


What database are you running the above on?

What is the database you are trying to CREATE and does it already exist 
when you run the restore?


What happens if you try without --exit-on-error?
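
For context: with --create, pg_restore uses the --dbname argument only as a
bootstrap connection; it issues CREATE DATABASE itself and then reconnects
to the database named in the dump, so all data is restored there.  A sketch
of the usual invocation:

    pg_restore --clean --create --dbname=postgres test.tar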



It seems like schema "test" doesn't exist yet, so why did I get this kind of
error?









--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL]

2015-11-12 Thread Alex Luya
Hello,
   I created a new database by

   create database icare;

   then quit psql and ran:

   pg_restore --clean --create --exit-on-error --dbname=icare icare-test.tar

it complains:

   pg_restore: [archiver (db)] Error while PROCESSING TOC:
   pg_restore: [archiver (db)] Error from TOC entry 21; 2615 80924 SCHEMA icare icare
   pg_restore: [archiver (db)] could not execute query: ERROR:  permission denied for database icare
   Command was: CREATE SCHEMA icare;


Re: [GENERAL] recursive table performance (CTE)

2015-11-12 Thread Merlin Moncure
On Wed, Nov 11, 2015 at 3:44 AM, Dusan  wrote:
> Hi,
> I'm using a table with a parent_id referencing the table itself, and WITH
> RECURSIVE in a SELECT, on about 3 thousand records.
> The "tree" of data is wide (each node has several children) but not deep
> (the maximal depth of a branch is 10 nodes).
>
> I'm planning to use the same schema on a much deeper but narrower tree (most
> nodes will have only one child; only a few nodes will have two or slightly
> more children).
> It will represent points on a map of a line construction with many points
> (nodes). It would have thousands of nodes, and there will be more independent
> trees (up to hundreds), some of them much smaller than others. The count of
> nodes in the table will be a few hundred thousand.
>
> Alternatively I can divide the line constructions into many sections (from
> one crossing of lines to the next) and keep them in a separate table like this:
> CREATE TABLE sections (
> id_section SERIAL PRIMARY KEY
> );
>
> CREATE TABLE section_nodes (
> id_node SERIAL PRIMARY KEY,
> sections_id_section INTEGER REFERENCES sections (id_section),
> x INTEGER,
> y INTEGER,
> sortid INTEGER -- serial number of the node within one section
> );
>
> The solution with recursion is nicer and easier to administer (and to
> SELECT from), but won't there be a performance problem with so much
> recursion? Are there limitations on recursive queries?
> Or is the second solution, with separate sections, better?
>
> Thanks for help and your opinion.

Well, WITH RECURSIVE is not really recursive -- it's iterative.  So if
you have a maximum depth of 10 you get 10 iterations.  Back in the day,
we'd keep two temp tables and ping-pong the data back and forth between
them (deleting one side or the other each time) -- the new approach is
faster and cleaner.  A fully recursive approach, which can be done via
nested set-returning function calls, will typically be even slower.
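
For reference, the iterative shape looks like this (the table nodes(id,
parent_id) is assumed for illustration, not taken from the post verbatim):

    WITH RECURSIVE subtree AS (
        SELECT id, parent_id, 1 AS depth
        FROM nodes
        WHERE id = 1                     -- root of the tree of interest
        UNION ALL
        SELECT n.id, n.parent_id, s.depth + 1
        FROM nodes n
        JOIN subtree s ON n.parent_id = s.id
    )
    SELECT * FROM subtree;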

IMHO, the only way to do better is via materialized path approaches
where you store all the path components of a tree in a column.   This
gives best case performance for extractions (a single sweep of the
btree) at the cost of bigger data and much more expensive move
operations.
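
A sketch of that materialized-path layout (table and column names are
illustrative, not from the original post):

    CREATE TABLE nodes_mp (
        id   serial PRIMARY KEY,
        path text NOT NULL          -- ids from root to node, e.g. '1.17.42'
    );
    -- text_pattern_ops lets LIKE 'prefix%' use the btree regardless of collation
    CREATE INDEX nodes_mp_path_idx ON nodes_mp (path text_pattern_ops);

    -- pull an entire subtree with one index range scan
    SELECT * FROM nodes_mp WHERE path LIKE '1.17.%';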

merlin




Re: [GENERAL] can postgres run well on NFS mounted partitions?

2015-11-12 Thread John McKown
On Thu, Nov 12, 2015 at 12:49 PM, Scott Marlowe wrote:

> On Thu, Nov 12, 2015 at 11:19 AM, anj patnaik  wrote:
> > The Linux VM where postgres is running over NFS is in a different
> location
> > than where I am. Both the NFS mounted storage and VM are on the same
> network
> > connected via 1GB ethernet switch.
> >
> > The physical server for the Linux VM has UPS.
> >
> > Is there any specific test I can run to do power failure?
> >
> > Can I reboot my VM to test this or that wouldn't be good enough?
> >
> > Also, why does a checkpoint need to run? I used the graphical installer
> to
> > install postgres so I assume it would start automatically when the server
> > starts.
> >
> > I was also thinking of blackhole testing. If I do a blackhole to the NFS
> > server would I be able to test this accurately?
> >
> > Folks in the other teams believe NFS should work fine for us so I need to
> > check it out.
> >
> > Your ideas are  highly appreciated!
>
> The point of the checkpoint is to make sure as much as possible is
> being written to the data directory when you "pull the plug". But
> without being able to pull the power plugs on the NAS or db server you
> can't really test for reliability in case of power loss. So you can't
> know that it'll survive one. Just disconnecting its network connection
> etc means it can still write out cached data if it isn't properly
> syncing it.
>

All of the above makes me curious about using NFS for the data files, but
having the WAL files on a local, perhaps SSD, device. I am not
knowledgeable about WAL. Of course, I don't know why the OP wants to put
the database files on an NFS.
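
If anyone wants to try that split, initdb supports it directly -- a sketch
with illustrative paths:

    # data directory on the NFS mount, WAL (pg_xlog) on fast local disk
    initdb -D /mnt/nfs/pgdata --xlogdir=/local/ssd/pg_xlog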

-- 

Schrodinger's backup: The condition of any backup is unknown until a
restore is attempted.

Yoda of Borg, we are. Futile, resistance is, yes. Assimilated, you will be.

He's about as useful as a wax frying pan.

10 to the 12th power microphones = 1 Megaphone

Maranatha! <><
John McKown


Re: [GENERAL] can postgres run well on NFS mounted partitions?

2015-11-12 Thread John R Pierce

On 11/12/2015 10:19 AM, anj patnaik wrote:
The Linux VM where postgres is running over NFS is in a different 
location than where I am. Both the NFS mounted storage and VM are on 
the same network connected via 1GB ethernet switch.


and earlier...



I do need reliability and high speed.


you won't get very high speed over gigE.


--
john r pierce, recycling bits in santa cruz





[GENERAL] Re: [GENERAL] pg_restore tells schema “test” already exists but it isn't actually

2015-11-12 Thread Melvin Davidson
Try this instead:
SELECT current_database();
SELECT nspname FROM pg_namespace ORDER BY 1;

Also, What is the exact command you are using with pg_restore?

On Thu, Nov 12, 2015 at 9:53 PM, Alex Luya  wrote:

> When restoring a dump like this:
>
> pg_restore --clean --create --exit-on-error --dbname=test test.tar
>
> these error messages got printed out:
>
> pg_restore: [archiver (db)] Error while PROCESSING TOC:
> pg_restore: [archiver (db)] Error from TOC entry 21; 2615 80924 SCHEMA test 
> test
> pg_restore: [archiver (db)] could not execute query: ERROR:  schema "test" 
> already exists
> Command was: CREATE SCHEMA test;
>
> but when:
>
> select schema_name from information_schema.schemata;
>
> these got printed out
>
> schema_name 
> pg_toast
> pg_temp_1
> pg_toast_temp_1
> pg_catalog
> public
> information_schema
>
> It seems like schema "test" doesn't exist yet, so why did I get this kind of
> error?
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


[GENERAL] Three Variations of postgresql.conf

2015-11-12 Thread Michael Convey
In Ubuntu 14.10, there are three variations of the postgresql.conf
configuration file, as follows:

/var/lib/postgresql/9.4/main/postgresql.auto.conf
/usr/lib/tmpfiles.d/postgresql.conf
/etc/postgresql/9.4/main/postgresql.conf

What is the difference between these files and which is the correct one to
change for modifying the configurations contained therein?


Re: [GENERAL] Three Variations of postgresql.conf

2015-11-12 Thread Adrian Klaver

On 11/12/2015 07:46 AM, Michael Convey wrote:

In Ubuntu 14.10, there are three variations of the postgresql.conf
configuration file, as follows:

/var/lib/postgresql/9.4/main/postgresql.auto.conf


The above is for when you do ALTER SYSTEM:
http://www.postgresql.org/docs/9.4/interactive/sql-altersystem.html
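
That is, settings changed like this end up in postgresql.auto.conf rather
than in the file you edit by hand:

    ALTER SYSTEM SET work_mem = '64MB';
    SELECT pg_reload_conf();   -- pick up the change without a restart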


/usr/lib/tmpfiles.d/postgresql.conf


This is a system file that the OS uses to determine where to put 
Postgres sockets, lock files, stats temp files, etc.:


"# Directory for PostgreSQL sockets, lockfiles and stats tempfiles
d /var/run/postgresql 2775 postgres postgres - -
# Log directory
d /var/log/postgresql 1775 root postgres - -
"

I would leave this one alone.

In my 14.04 install there are also files for sshd and xconsole.


/etc/postgresql/9.4/main/postgresql.conf


The above is the primary conf file for the cluster. This is where you 
want to change variables, if you are not using ALTER SYSTEM.




What is the difference between these files and which is the correct one
to change for modifying the configurations contained therein?



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Best tool to pull from mssql

2015-11-12 Thread Merlin Moncure
On Tue, Nov 10, 2015 at 12:26 PM, Adrian Klaver wrote:
> On 11/10/2015 10:21 AM, Mammarelli, Joanne T wrote:
>>
>> Hi .. we’re on a hunt to locate either pgloader or the tds_fdw extension
>> to assist us with pulling data FROM a Microsoft sql server (we’re a
>> windows-based postgres installation).
>
> So are we taking about a one time operation or continuously syncing or
> something in between?
>
>> Is this the proper way to handle the process?
>
> Depends on the answer to the above.

My strategy is a little unconventional.  I use pl/sh to call into sqsh
-- from there I export the data to .csv with 'go -m csv' and then load
it via 'COPY'.  This gives great performance and a dblink-ish
ability to pull data in laterally.  If I wasn't doing that, I'd probably
be using one of the fdw approaches -- tds, jdbc, or odbc.
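
A rough sketch of that pl/sh + sqsh combination (host, credentials and
table names are placeholders; it assumes the plsh language is installed):

    CREATE FUNCTION fetch_mssql_csv() RETURNS text AS $$
    #!/bin/sh
    # emit the source table as CSV on stdout
    sqsh -S mssqlhost -U etl_user -P secret -m csv \
         -C "SELECT * FROM dbo.source_table"
    $$ LANGUAGE plsh;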

Any ETL tool that involves a point/click GUI or editing .xml files is
a total non-starter for me.

merlin




Re: [GENERAL] Three Variations of postgresql.conf

2015-11-12 Thread Melvin Davidson
Oops, I forgot to mention pg_createcluster, which is used to initialize the
various version/cluster combinations in Ubuntu.
It makes it easier to have several versions and clusters on the same
server, albeit that is not what most people/companies usually do.
eg: In Ubuntu, you could possibly have the following on one server:

Ver Cluster Port Status Owner    Data directory               Log file
9.1 prod    5432 online postgres /var/lib/postgresql/9.1/prod /var/log/postgresql/postgresql-9.1-prod.log
9.1 dev     5433 online postgres /var/lib/postgresql/9.1/dev  /var/log/postgresql/postgresql-9.1-dev.log
9.4 prod    5435 online postgres /var/lib/postgresql/9.4/prod /var/log/postgresql/postgresql-9.4-prod.log
9.4 dev     5436 online postgres /var/lib/postgresql/9.4/dev  /var/log/postgresql/postgresql-9.4-dev.log
9.4 qa      5437 online postgres /var/lib/postgresql/9.4/qa   /var/log/postgresql/postgresql-9.4-qa.log

On Thu, Nov 12, 2015 at 11:56 AM, Adrian Klaver wrote:

> On 11/12/2015 08:37 AM, Melvin Davidson wrote:
>
>> To clarify.
>>
>> To see the ACTUAL clusters installed, and the LIVE config directories,
>> you use
>> pg_lsclusters.
>>
>
> Actually:
>
> aklaver@arkansas:~$ pg_lsclusters
>
> Ver Cluster Port Status Owner    Data directory               Log file
> 9.3 main    5432 down   postgres /var/lib/postgresql/9.3/main /var/log/postgresql/postgresql-9.3-main.log
> 9.4 main    5434 online postgres /var/lib/postgresql/9.4/main /var/log/postgresql/postgresql-9.4-main.log
>
> You get the $DATA directories and the log files. The config directories
> are in /etc/postgresql/*
>
>
>> pg_ctlclusters then can control each individual cluster
>>
>> So, for example, since apparently you have the 9.4 version of PostgreSQL,
>> pg_ctlcluster 9.4 main status
>> would probably show you that the 9.4 version of PostgreSQL is active and
>> will also show you the command line arguments used to start it.
>>
>>
>> On Thu, Nov 12, 2015 at 11:29 AM, Tom Lane wrote:
>>
>> Adrian Klaver writes:
>> > On 11/12/2015 08:07 AM, Melvin Davidson wrote:
>> >> In Ubuntu 14.10, there are three variations of the postgresql.conf
>> >> configuration file, as follows:
>> >>
>> >> /var/lib/postgresql/9.4/main/postgresql.auto.conf
>> >> /usr/lib/tmpfiles.d/postgresql.conf
>> >> /etc/postgresql/9.4/main/postgresql.conf
>>
>> FWIW, I would imagine that /usr/lib/tmpfiles.d/postgresql.conf is
>> a configuration file for the system's tmp-file-cleaner daemon
>> (probably telling it not to flush some postgres-related files),
>> not a PG config file at all.  If you look into it you'll likely
>> find that it doesn't look anything like PG configuration data.
>>
>> As already noted, postgresql.auto.conf is not for hand-editing.
>>
>>  regards, tom lane
>>
>>
>>
>>
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Three Variations of postgresql.conf

2015-11-12 Thread Melvin Davidson
Ubuntu does not use the standard pg_ctl for postgreSQL. Instead, it uses
pg_ctlcluster.
That in turn controls the different PostgreSQL clusters. When you do an
install of a new cluster, pg_ctlcluster is smart enough to put
postgresql.conf & pg_hba.conf into separate dirs.
So to be specific, /usr/lib/tmpfiles.d/postgresql.conf is just a tmp file
that you can ignore, /etc/postgresql/9.4/main/postgresql.conf is the
original version for the 9.4 cluster and
/var/lib/postgresql/9.4/main/postgresql.auto.conf is the actual, live
version of the 9.4 cluster that you need to change to affect the 9.4
cluster. Likewise for the pg_hba.conf.

On Thu, Nov 12, 2015 at 10:46 AM, Michael Convey  wrote:

> In Ubuntu 14.10, there are three variations of the postgresql.conf
> configuration file, as follows:
>
> /var/lib/postgresql/9.4/main/postgresql.auto.conf
> /usr/lib/tmpfiles.d/postgresql.conf
> /etc/postgresql/9.4/main/postgresql.conf
>
> What is the difference between these files and which is the correct one to
> change for modifying the configurations contained therein?
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Three Variations of postgresql.conf

2015-11-12 Thread Adrian Klaver

On 11/12/2015 08:37 AM, Melvin Davidson wrote:

To clarify.

To see the ACTUAL clusters installed, and the LIVE config directories,
you use
pg_lsclusters.


Actually:

aklaver@arkansas:~$ pg_lsclusters

Ver Cluster Port Status Owner    Data directory               Log file
9.3 main    5432 down   postgres /var/lib/postgresql/9.3/main /var/log/postgresql/postgresql-9.3-main.log
9.4 main    5434 online postgres /var/lib/postgresql/9.4/main /var/log/postgresql/postgresql-9.4-main.log


You get the $DATA directories and the log files. The config directories 
are in /etc/postgresql/*




pg_ctlclusters then can control each individual cluster

So, for example, since apparently you have the 9.4 version of PostgreSQL,
pg_ctlcluster 9.4 main status
would probably show you that the 9.4 version of PostgreSQL is active and
will also show you the command line arguments used to start it.


On Thu, Nov 12, 2015 at 11:29 AM, Tom Lane wrote:

Adrian Klaver writes:
> On 11/12/2015 08:07 AM, Melvin Davidson wrote:
>> In Ubuntu 14.10, there are three variations of the postgresql.conf
>> configuration file, as follows:
>>
>> /var/lib/postgresql/9.4/main/postgresql.auto.conf
>> /usr/lib/tmpfiles.d/postgresql.conf
>> /etc/postgresql/9.4/main/postgresql.conf

FWIW, I would imagine that /usr/lib/tmpfiles.d/postgresql.conf is
a configuration file for the system's tmp-file-cleaner daemon
(probably telling it not to flush some postgres-related files),
not a PG config file at all.  If you look into it you'll likely
find that it doesn't look anything like PG configuration data.

As already noted, postgresql.auto.conf is not for hand-editing.

 regards, tom lane




--
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Three Variations of postgresql.conf

2015-11-12 Thread Melvin Davidson
To clarify.

To see the ACTUAL clusters installed, and the LIVE config directories, you
use
pg_lsclusters.

pg_ctlclusters then can control each individual cluster

So, for example, since apparently you have the 9.4 version of PostgreSQL,
pg_ctlcluster 9.4 main status
would probably show you that the 9.4 version of PostgreSQL is active and
will also show you the command line arguments used to start it.


On Thu, Nov 12, 2015 at 11:29 AM, Tom Lane  wrote:

> Adrian Klaver  writes:
> > On 11/12/2015 08:07 AM, Melvin Davidson wrote:
> >> In Ubuntu 14.10, there are three variations of the postgresql.conf
> >> configuration file, as follows:
> >>
> >> /var/lib/postgresql/9.4/main/postgresql.auto.conf
> >> /usr/lib/tmpfiles.d/postgresql.conf
> >> /etc/postgresql/9.4/main/postgresql.conf
>
> FWIW, I would imagine that /usr/lib/tmpfiles.d/postgresql.conf is
> a configuration file for the system's tmp-file-cleaner daemon
> (probably telling it not to flush some postgres-related files),
> not a PG config file at all.  If you look into it you'll likely
> find that it doesn't look anything like PG configuration data.
>
> As already noted, postgresql.auto.conf is not for hand-editing.
>
> regards, tom lane
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Three Variations of postgresql.conf

2015-11-12 Thread Adrian Klaver

On 11/12/2015 08:07 AM, Melvin Davidson wrote:

Ubuntu does not use the standard pg_ctl for postgreSQL. Instead, it uses
pg_ctlcluster.
That in turn controls the different PostgreSQL clusters. When you do an
install of a new cluster, pg_ctlcluster is smart enough to put
postgresql.conf & pg_hba.conf into separate dirs.
So to be specific, /usr/lib/tmpfiles.d/postgresql.conf is just a tmp
file that you can ignore, /etc/postgresql/9.4/main/postgresql.conf is
the original version for the 9.4 cluster and


No, it is the actual conf file for the cluster.


/var/lib/postgresql/9.4/main/postgresql.auto.conf is the actual, live
version of the 9.4 cluster that you need to change to affect the 9.4
cluster. Likewise for the pg_hba.conf.


No, as you can see if look in the file:

# Do not edit this file manually!
# It will be overwritten by the ALTER SYSTEM command.

There is no pg_hba.conf file in /var/lib/postgresql/9.4/main/ it is in
/etc/postgresql/9.4/main/

/etc/postgresql/* is where configuration is done, with the exception of 
those done through ALTER SYSTEM, which are persisted in 
postgresql.auto.conf.
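
A quick way to confirm which file actually supplies a given setting:

    SELECT name, setting, sourcefile, sourceline
    FROM pg_settings
    WHERE source = 'configuration file';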




On Thu, Nov 12, 2015 at 10:46 AM, Michael Convey wrote:

In Ubuntu 14.10, there are three variations of the postgresql.conf
configuration file, as follows:

/var/lib/postgresql/9.4/main/postgresql.auto.conf
/usr/lib/tmpfiles.d/postgresql.conf
/etc/postgresql/9.4/main/postgresql.conf

What is the difference between these files and which is the correct
one to change for modifying the configurations contained therein?




--
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Three Variations of postgresql.conf

2015-11-12 Thread Tom Lane
Adrian Klaver  writes:
> On 11/12/2015 08:07 AM, Melvin Davidson wrote:
>> In Ubuntu 14.10, there are three variations of the postgresql.conf
>> configuration file, as follows:
>> 
>> /var/lib/postgresql/9.4/main/postgresql.auto.conf
>> /usr/lib/tmpfiles.d/postgresql.conf
>> /etc/postgresql/9.4/main/postgresql.conf

FWIW, I would imagine that /usr/lib/tmpfiles.d/postgresql.conf is
a configuration file for the system's tmp-file-cleaner daemon
(probably telling it not to flush some postgres-related files),
not a PG config file at all.  If you look into it you'll likely
find that it doesn't look anything like PG configuration data.

As already noted, postgresql.auto.conf is not for hand-editing.

regards, tom lane




[GENERAL] get this warning from pgAdmin

2015-11-12 Thread anj patnaik
I get this warning when trying to fetch data for a postgres db. Does this
indicate a real issue? Thanks

Running VACUUM recommended
The estimated rowcount on the table "recorder" deviates significantly from
the actual rowcount. You should run VACUUM ANALYZE on this table.


Re: [GENERAL] can postgres run well on NFS mounted partitions?

2015-11-12 Thread anj patnaik
The Linux VM where postgres is running over NFS is in a different location
than where I am. Both the NFS mounted storage and VM are on the same
network connected via 1GB ethernet switch.

The physical server for the Linux VM has UPS.

Is there any specific test I can run to simulate a power failure?

Can I reboot my VM to test this or that wouldn't be good enough?

Also, why does a checkpoint need to run? I used the graphical installer to
install postgres so I assume it would start automatically when the server
starts.

I was also thinking of blackhole testing. If I blackhole the NFS
server, would I be able to test this accurately?

Folks in the other teams believe NFS should work fine for us so I need to
check it out.

Your ideas are  highly appreciated!



On Tue, Nov 10, 2015 at 9:59 PM, Scott Marlowe wrote:

> On Tue, Nov 10, 2015 at 4:26 PM, anj patnaik  wrote:
> > Thanks for the feedback. I have setup a second Linux VM (running RHEL
> 5.11)
> > and Postgres 9.4. I ran some insertions today from a client running on
> > Windows. The client does a loop of 30 updates.
> >
> > I am seeing about 10-20% increase in latency in the case where DB is on
> NFS
> > (over TCP) compared to directly on disk.
> >
> > The other machine I am using to compare is running RHEL 6.5 and Postgres
> > 9.4.
> >
> > Are there any specific tests that are recommended to test that postgres
> over
> > NFS works well?
> >
> > I am planning on doing a few large data inserts and fetches.
> >
> > With the little testing, the DB over NFS appears fine.
>
> You need to do a power failure test. While running something like
> pgbench for a few minutes, run a checkpoint command and then pull the
> plug on the NFS server and / or the pg server. Bring it back up. Is
> your db corrupted? Then there's something that needs fixing.
>


Re: [GENERAL] can postgres run well on NFS mounted partitions?

2015-11-12 Thread Scott Marlowe
On Thu, Nov 12, 2015 at 11:49 AM, Scott Marlowe  wrote:
> On Thu, Nov 12, 2015 at 11:19 AM, anj patnaik  wrote:
>> The Linux VM where postgres is running over NFS is in a different location
>> than where I am. Both the NFS mounted storage and VM are on the same network
>> connected via 1GB ethernet switch.
>>
>> The physical server for the Linux VM has UPS.
>>
>> Is there any specific test I can run to do power failure?
>>
>> Can I reboot my VM to test this or that wouldn't be good enough?
>>
>> Also, why does a checkpoint need to run? I used the graphical installer to
>> install postgres so I assume it would start automatically when the server
>> starts.
>>
>> I was also thinking of blackhole testing. If I do a blackhole to the NFS
>> server would I be able to test this accurately?
>>
>> Folks in the other teams believe NFS should work fine for us so I need to
>> check it out.
>>
>> Your ideas are  highly appreciated!
>
> The point of the checkpoint is to make sure as much as possible is
> being written to the data directory when you "pull the plug". But
> without being able to pull the power plugs on the NAS or db server you
> can't really test for reliability in case of power loss. So you can't
> know that it'll survive one. Just disconnecting its network connection
> etc means it can still write out cached data if it isn't properly
> syncing it.

Also note that a UPS doesn't preclude the machine losing its power
supplies etc., or the ever-popular faulty power switch / reset button,
which I have experienced on production machines. A UPS does not
mean never having a power failure.




Re: [GENERAL] get this warning from pgAdmin

2015-11-12 Thread Guillaume Lelarge
2015-11-12 19:43 GMT+01:00 Adrian Klaver:

> On 11/12/2015 10:02 AM, anj patnaik wrote:
>
>> I get this warning when trying to fetch data for a postgres db. Does
>> this indicate a real issue? Thanks
>>
>
> Well it means the statistics for the table are out of touch with reality.
> The database will not fall over, but your queries against the table will be
> using stale statistics when the query planner sets up the query.
>
>
>> Running VACUUM recommended
>> The estimated rowcount on the table "recorder" deviates significantly
>> from the actual rowcount. You should run VACUUM ANALYZE on this table.
>>
>
> I would take the suggestion, or you can wait till autovacuum hits the
> table.
>
>
AFAIK, this part of the pgAdmin code hasn't been touched in quite a
few years. You shouldn't rely on it to know which tables need a VACUUM or
an ANALYZE.


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


Re: [GENERAL] can postgres run well on NFS mounted partitions?

2015-11-12 Thread Scott Marlowe
On Thu, Nov 12, 2015 at 11:19 AM, anj patnaik  wrote:
> The Linux VM where postgres is running over NFS is in a different location
> than where I am. Both the NFS mounted storage and VM are on the same network
> connected via 1GB ethernet switch.
>
> The physical server for the Linux VM has UPS.
>
> Is there any specific test I can run to do power failure?
>
> Can I reboot my VM to test this or that wouldn't be good enough?
>
> Also, why does a checkpoint need to run? I used the graphical installer to
> install postgres so I assume it would start automatically when the server
> starts.
>
> I was also thinking of blackhole testing. If I do a blackhole to the NFS
> server would I be able to test this accurately?
>
> Folks in the other teams believe NFS should work fine for us so I need to
> check it out.
>
> Your ideas are  highly appreciated!

The point of the checkpoint is to make sure as much as possible is
being written to the data directory when you "pull the plug". But
without being able to pull the power plugs on the NAS or db server you
can't really test for reliability in case of power loss. So you can't
know that it'll survive one. Just disconnecting its network connection
etc means it can still write out cached data if it isn't properly
syncing it.
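
The test Scott describes looks roughly like this (the pgbench/psql
invocations are illustrative):

    # generate write load, force a checkpoint, then cut power at the wall
    pgbench -c 8 -T 300 testdb &
    psql -d testdb -c 'CHECKPOINT;'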




Re: [GENERAL] get this warning from pgAdmin

2015-11-12 Thread Adrian Klaver

On 11/12/2015 10:02 AM, anj patnaik wrote:

I get this warning when trying to fetch data for a postgres db. Does
this indicate a real issue? Thanks


Well it means the statistics for the table are out of touch with 
reality. The database will not fall over, but your queries against the 
table will be using stale statistics when the query planner sets up the 
query.




Running VACUUM recommended
The estimated rowcount on the table "recorder" deviates significantly
from the actual rowcount. You should run VACUUM ANALYZE on this table.


I would take the suggestion, or you can wait till autovacuum hits the table.
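
If you want to see how far off the statistics are, and refresh them,
something along these lines works ('recorder' being the table named in
the warning):

    SELECT reltuples::bigint AS estimated_rows
    FROM pg_class
    WHERE relname = 'recorder';

    SELECT count(*) AS actual_rows FROM recorder;

    VACUUM ANALYZE recorder;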



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] get this warning from pgAdmin

2015-11-12 Thread Melvin Davidson
This is not necessarily a major issue. It is simply an advisory that a
significant number of rows in the table have been inserted, updated or
deleted and that in order to have reliable statistics, a vacuum analyze
should be done.

Depending on the number of rows in the table and the difference between the
actual row count and the current statistics row count, you can decide
if you need to vacuum analyze or delay.

One rule of thumb is to consider the response time of queries against that
table. IOW, are DML queries involving that table completing within a
couple of seconds? If yes, you can delay a while or wait for autovacuum to
be done on that table. If the DML is taking more than a few seconds, then
yes, it is advisable to vacuum analyze it.

On Thu, Nov 12, 2015 at 1:02 PM, anj patnaik  wrote:

> I get this warning when trying to fetch data for a postgres db. Does this
> indicate a real issue? Thanks
>
> Running VACUUM recommended
> The estimated rowcount on the table "recorder" deviates significantly from
> the actual rowcount. You should run VACUUM ANALYZE on this table.
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Does PostgreSQL ever create indexes on its own?

2015-11-12 Thread Joshua D. Drake

On 11/12/2015 02:21 PM, Doiron, Daniel wrote:

I’m troubleshooting a schema and found this:

Indexes:
 "pk_patient_diagnoses" PRIMARY KEY, btree (id)
 "index_4341548" UNIQUE, btree (id)
 "idx_patient_diagnoses_deleted" btree (deleted)
 "idx_patient_diagnoses_diagnosis_type_id" btree (diagnosis_type_id)
 "idx_patient_diagnoses_icd10" btree (icd10)
 "idx_patient_diagnoses_icd9" btree (diagnosis_code)
 "idx_patient_diagnoses_is_unknown" btree (is_unknown)
 "idx_patient_diagnoses_modified" btree (modified)
 "idx_patient_diagnoses_patient_id" btree (patient_id)
 "idx_patient_diagnoses_uuid" btree (uuid)
 "index_325532921" btree (modified)
 "index_4345603" btree (deleted)
 "index_4349516" btree (diagnosis_type_id)
 "index_4353417" btree (icd10)
 "index_4384754" btree (diagnosis_code)
 "index_4418849" btree (is_unknown)
 "index_4424101" btree (patient_id)
 "index_4428458" btree (uuid)

My question is whether these “index_*” indexes could have been created
by postgresql or whether I have an errant developer using some kind of
third-party tool?


This is definitely a third party tool. The only time an index will be 
implicitly created is:


1. You set a column as the PRIMARY KEY
2. You set a column UNIQUE

Lastly, postgresql would never use such a ridiculous naming scheme.

JD



--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
New rule for social situations: "If you think to yourself not even
JD would say this..." Stop and shut your mouth. It's going to be bad.

