Re: [GENERAL] duplicate key value violates unique constraint and duplicated records

2017-06-30 Thread Steven Chang
Uh... we also encountered duplicate rows in a primary key column after
restoring a database with pg_basebackup.
Hmm.
I don't think it's an issue with primary key index corruption.



2017-07-01 7:30 GMT+08:00 Adrian Klaver :

> On 06/30/2017 07:33 AM, Timokhin Maxim wrote:
>
>> Sure, here it is.
>>
>> pg_basebackup -h servername -R -P -D /data/upgrade/94 -U pgsql -v
>> --xlog-method=stream --checkpoint=fast
>>
>> /usr/pgsql-9.5/bin/initdb -D /data/upgrade/95/ --encoding=utf8
>> --locale=ru_RU.utf8 --lc-collate=ru_RU.utf8 --lc-ctype=ru_RU.utf8
>> --lc-messages=en_US.utf8
>>
>> Then updating:
>> /usr/pgsql-9.5/bin/pg_upgrade -b /usr/pgsql-9.4/bin/ -d /data/upgrade/94
>> -B /usr/pgsql-9.5/bin/ -D /data/upgrade/95 -k
>>
>> and so on to 9.6
>>
>
> The original 9.4 database has the same encoding setup?
>
> FYI, you can use pg_upgrade to go straight from 9.4 to 9.6.
>
> https://www.postgresql.org/docs/9.6/static/pgupgrade.html
>
> "pg_upgrade supports upgrades from 8.4.X and later to the current major
> release of PostgreSQL, including snapshot and alpha releases."
>
>
>
>> after that server starts normally.
>>
>>
>> --
>> Timokhin 'maf' Maxim
>>
>>
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] duplicate key value violates unique constraint and duplicated records

2017-06-30 Thread Adrian Klaver

On 06/30/2017 07:33 AM, Timokhin Maxim wrote:

Sure, here it is.

pg_basebackup -h servername -R -P -D /data/upgrade/94 -U pgsql -v
--xlog-method=stream --checkpoint=fast

/usr/pgsql-9.5/bin/initdb -D /data/upgrade/95/ --encoding=utf8
--locale=ru_RU.utf8 --lc-collate=ru_RU.utf8 --lc-ctype=ru_RU.utf8
--lc-messages=en_US.utf8

Then updating:
/usr/pgsql-9.5/bin/pg_upgrade -b /usr/pgsql-9.4/bin/ -d /data/upgrade/94 -B 
/usr/pgsql-9.5/bin/ -D /data/upgrade/95 -k

and so on to 9.6


The original 9.4 database has the same encoding setup?

FYI, you can use pg_upgrade to go straight from 9.4 to 9.6.

https://www.postgresql.org/docs/9.6/static/pgupgrade.html

"pg_upgrade supports upgrades from 8.4.X and later to the current major 
release of PostgreSQL, including snapshot and alpha releases."
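For example, reusing the paths from your commands above, a direct
9.4 -> 9.6 run could look something like this (an untested sketch; adjust
the paths and locale options to your install):

/usr/pgsql-9.6/bin/initdb -D /data/upgrade/96 --encoding=utf8 --locale=ru_RU.utf8
/usr/pgsql-9.6/bin/pg_upgrade -b /usr/pgsql-9.4/bin/ -d /data/upgrade/94 \
    -B /usr/pgsql-9.6/bin/ -D /data/upgrade/96 -k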




after that server starts normally.


--
Timokhin 'maf' Maxim






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] PostgreSQL Timeout, auto shutdown and Pkey errors

2017-06-30 Thread Adrian Klaver

On 06/30/2017 11:35 AM, Niel Smith wrote:

Please reply to list also.
Ccing list.

Also, to help with replies in the future, could you change your posting
style to something like:


https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

Thanks.

What version of Postgres and where did you get it?

> This was from PostgreSQL 9.5.
> I'm not sure where this was downloaded from; it was installed on the
> server before I started with the company I work for. My guess is from
> the PostgreSQL website.


OS and version?

> This is running a Windows 7 Embedded OS.

Do you have anti-virus software running on this machine?

Which logs?

> From the PostgreSQL logs.



A timeout expired, probably on client side.
Where are the clients relative to the server, local network or Internet? 

> Local network.


Where is the above coming from? I cannot find a reference to
readCurIndex in either the Postgres or psqlODBC code.

> This is from a log called Trace, and it comes from my Central Storage
> interface.

What is Central Storage?



Timestamp: 6/20/2017 3:09:15 PM
Message: Error during indexfile unfixed fields readCurIndex: 
3292CurFileNum: 550

Severity: Error, Machine: DIRELIANT2,  Thread Name: SQLThreadPM
Process Name: C:\Program Files\Dynamic Instruments, Inc\Digital 
Recorder\DICSInterface\DICSInterface.exe


Timestamp: 6/21/2017 4:51:05 AM
Message: Error during indexfile unfixed fields readCurIndex: 
6249CurFileNum: 550

Severity: Error, Machine: DIRELIANT2,  Thread Name: SQLThreadPM
Process Name: C:\Program Files\Dynamic Instruments, Inc\Digital 
Recorder\DICSInterface\DICSInterface.exe




So the above is this third-party instrument, correct?:

http://www.officer.com/company/10028921/dynamic-instruments

If so, you might want to get in touch with them, as this looks to be
coming from their software.



Timestamp: 6/21/2017 9:43:28 AM
Message: Exception during making connection to 
databaseDriver={PostgreSQL 
UNICODE};Server=127.0.0.1;Port=5432;UID=postgres;PWD=root;Database=postgres;ERROR 
[08001] timeout expired


That looks fairly straightforward: the connection attempt timed out.

The question is why?

Do you see anything in the Postgres logs from this time period that 
might help?


Do you have network issues that might prevent the device from connecting 
in a timely fashion?


Is there any consistency to when these errors occur?
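One generic way to narrow it down (an illustrative sketch, not specific
to your vendor's software): time a bare libpq connection from the same
machine, and turn on connection logging so every attempt shows up in the
Postgres log:

psql "host=127.0.0.1 port=5432 user=postgres dbname=postgres connect_timeout=10" -c "SELECT 1"

# in postgresql.conf:
log_connections = on

If psql connects instantly while the device times out, the problem is
more likely in the client stack than in the server.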

at System.Data.Odbc.OdbcConnection.HandleError(OdbcHandle hrHandle, 
RetCode retcode)
at System.Data.Odbc.OdbcConnectionHandle..ctor(OdbcConnection 
connection, OdbcConnectionString constr, OdbcEnvironmentHandle 
environmentHandle)
at System.Data.Odbc.OdbcConnectionOpen..ctor(OdbcConnection 
outerConnection, OdbcConnectionString connectionOptions)
at 
System.Data.Odbc.OdbcConnectionFactory.CreateConnection(DbConnectionOptions 
options, Object poolGroupProviderInfo, DbConnectionPool pool, 
DbConnection owningObject)
at 
System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection 
owningConnection, DbConnectionPoolGroup poolGroup)
at 
System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection 
owningConnection)
at 
System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection 
outerConnection, DbConnectionFactory connectionFactory)

at System.Data.Odbc.OdbcConnection.Open()
at DICSInterface.DISQL.ConnectToSQL(OdbcConnection& conn, String 
DatabaseType, String IPAddress)

Severity: Critical, Machine: DIRELIANT2,  Thread Name:
Process Name: C:\Program Files\Dynamic Instruments, Inc\Digital 
Recorder\DICSInterface\DICSInterface.exe


Niel Smith - Support Technician




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgres: dbname dbuser 9.9.9.9[2222] PARSE waiting

2017-06-30 Thread DrakoRod
You're right, I forgot to say: the OS is RHEL 7.

Actually, I'm reading about it now.

Thanks!





-
Give me a little faith; that will be enough for me.
Rozvo Ware Solutions 




Re: [GENERAL] postgres: dbname dbuser 9.9.9.9[2222] PARSE waiting

2017-06-30 Thread Melvin Davidson
On Fri, Jun 30, 2017 at 11:36 AM, DrakoRod  wrote:

> > Do you control the app?
>
> Nope. I just know how it's developed.
>
> > The app has a pooling component and you still are having problems, have
> > you looked at what the pooler is actually doing?
>
> As far as I know, it's WildFly's JDBC pool. Honestly, I don't know what it
> is doing. I suspect the problem is that the DAOs are not closing sessions,
> or not beginning transactions properly. I am going to ask them to send me
> the logfile, or I could verify the pool behavior myself.
>
> > Not sure what the above means. Are you saying the application you refer
> > to above has a history of not correctly closing connections or are you
> > talking in general terms about applications interacting with databases.
>
> Sorry, it's not like that; it was just a comment. The problem I have is
> with a specific application.
>
> > I've attached two files that may be helpful to you.
>
> Melvin, thanks for the scripts! I owe you one!
>
> I have another question. I've been reading about lock_timeout. Would this
> parameter help, or could it affect all behavior?
>
> Thanks!
>
>
>
>
>
>
> -
> Give me a little faith; that will be enough for me.
> Rozvo Ware Solutions
>
>
>

> I've been reading about lock_timeout. Would this parameter help, or could
> it affect all behavior?

It affects every statement in the sessions where it is set, and it will
abort statements that wait too long to acquire locks.

https://www.postgresql.org/docs/9.4/static/runtime-config-client.html

lock_timeout (integer)

Abort any statement that waits longer than the specified number of
milliseconds while attempting to acquire a lock on a table, index, row, or
other database object. The time limit applies separately to each lock
acquisition attempt. The limit applies both to explicit locking requests
(such as LOCK TABLE, or SELECT FOR UPDATE without NOWAIT) and to
implicitly-acquired locks. If log_min_error_statement is set to ERROR or
lower, the statement that timed out will be logged. A value of zero (the
default) turns this off.

Unlike statement_timeout, this timeout can only occur while waiting for
locks. Note that if statement_timeout is nonzero, it is rather pointless to
set lock_timeout to the same or larger value, since the statement timeout
would always trigger first.

Setting lock_timeout in postgresql.conf is not recommended because it
would affect all sessions.

You are probably much better off using the tcp_keepalives... settings,
provided your system supports them. I don't remember if you ever gave us
the O/S.

Have a look at them. You might be able to use them to force a disconnect
after a set amount of inactivity time.
https://www.postgresql.org/docs/current/static/runtime-config-connection.html
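For example (a sketch; the role name is hypothetical and the values are
illustrative, not recommendations):

-- set lock_timeout per session, or per role, rather than globally:
SET lock_timeout = '5s';
ALTER ROLE app_user SET lock_timeout = '5s';

# keepalive settings go in postgresql.conf:
tcp_keepalives_idle = 60
tcp_keepalives_interval = 10
tcp_keepalives_count = 5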
-- 
Melvin Davidson
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] postgres: dbname dbuser 9.9.9.9[2222] PARSE waiting

2017-06-30 Thread DrakoRod
> Do you control the app?

Nope. I just know how it's developed.

> The app has a pooling component and you still are having problems, have 
> you looked at what the pooler is actually doing? 

As far as I know, it's WildFly's JDBC pool. Honestly, I don't know what it
is doing. I suspect the problem is that the DAOs are not closing sessions,
or not beginning transactions properly. I am going to ask them to send me
the logfile, or I could verify the pool behavior myself.

> Not sure what the above means. Are you saying the application you refer 
> to above has a history of not correctly closing connections or are you 
> talking in general terms about applications interacting with databases. 

Sorry, it's not like that; it was just a comment. The problem I have is
with a specific application.

> I've attached two files that may be helpful to you. 

Melvin, thanks for the scripts! I owe you one!

I have another question. I've been reading about lock_timeout. Would this
parameter help, or could it affect all behavior?

Thanks! 






-
Give me a little faith; that will be enough for me.
Rozvo Ware Solutions 




Re: [GENERAL] duplicate key value violates unique constraint and duplicated records

2017-06-30 Thread Melvin Davidson
On Fri, Jun 30, 2017 at 9:07 AM, Adrian Klaver 
wrote:

> On 06/30/2017 04:58 AM, Timokhin Maxim wrote:
>
>> BTW, we are moving using:
>>
>> pg_basebackup -h servername -R -P -D /data/upgrade/94 -U pgsql -v
>> --xlog-method=stream --checkpoint=fast
>>
>
> Going from 9.4 to 9.6 is a major version upgrade and you cannot use
> pg_basebackup for that. Besides, I can't see how you even got the 9.6.3
> server to start:
>
> /usr/local/pgsql94/bin/pg_basebackup -D /home/aklaver/pgback_test94 -U
> postgres -p 5412 -v --xlog-method=stream
>
> /usr/local/pgsql96/bin/pg_ctl -D /home/aklaver/pgback_test94/ start
> server starting
> FATAL:  database files are incompatible with server
> DETAIL:  The data directory was initialized by PostgreSQL version 9.4,
> which is not compatible with this version 9.6.3
>
>
>> After that we are upgrading to 9.6.3.
>>
>
> Given the above, how did you actually get 9.6.3 to start?
>
>
>> I've looked through the documentation
>> https://postgrespro.ru/docs/postgrespro/9.6/app-pgbasebackup and didn't
>> find details about how pg_basebackup works with b-tree indexes.
>> Is it possible that pg_basebackup just copies indexes as-is, and that is
>> the cause of the corruption? Or does it pass indexes as an instruction
>> that says "after bringing up the db, rebuild indexes"?
>>
>> Thank you.
>>
>>
>> --
>> Timokhin 'maf' Maxim
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Since you are doing a major version upgrade, the correct way to do that,
depending on the size of your DB, is

A. pg_dumpall on the old version and restore with psql on the new version

OR

B. pg_upgrade

https://www.postgresql.org/docs/9.6/static/upgrading.html
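For example, option A, with both clusters running, is roughly this (a
sketch; the ports and paths are illustrative):

/usr/pgsql-9.4/bin/pg_dumpall -p 5432 > db.out
/usr/pgsql-9.6/bin/psql -p 5433 -d postgres -f db.out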
--
Melvin Davidson
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] duplicate key value violates unique constraint and duplicated records

2017-06-30 Thread Adrian Klaver

On 06/30/2017 04:58 AM, Timokhin Maxim wrote:

BTW, we are moving using:

pg_basebackup -h servername -R -P -D /data/upgrade/94 -U pgsql -v
--xlog-method=stream --checkpoint=fast


Going from 9.4 to 9.6 is a major version upgrade and you cannot use 
pg_basebackup for that. Besides, I can't see how you even got the 9.6.3
server to start:


/usr/local/pgsql94/bin/pg_basebackup -D /home/aklaver/pgback_test94 -U 
postgres -p 5412 -v --xlog-method=stream


/usr/local/pgsql96/bin/pg_ctl -D /home/aklaver/pgback_test94/ start
server starting
FATAL:  database files are incompatible with server
DETAIL:  The data directory was initialized by PostgreSQL version 9.4, 
which is not compatible with this version 9.6.3




After that we are upgrading to 9.6.3.


Given the above, how did you actually get 9.6.3 to start?


I've looked through the documentation
https://postgrespro.ru/docs/postgrespro/9.6/app-pgbasebackup and didn't find
details about how pg_basebackup works with b-tree indexes.
Is it possible that pg_basebackup just copies indexes as-is, and that is the
cause of the corruption? Or does it pass indexes as an instruction that says
"after bringing up the db, rebuild indexes"?

Thank you.


--
Timokhin 'maf' Maxim




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Significant discrepancy in index cost estimation

2017-06-30 Thread Mikhail

After setting seq_page_cost to 3, the execution plan became good, without the
SeqScan. But it seems strange to set seq_page_cost almost equal to
random_page_cost, so I set seq_page_cost back to the default and instead
increased the statistics target for "sub_id" in "mba_test.subscr_param" to
1000. That brought the estimated row count for subscr_param_pkey closer to
the real value (est. 64, real 30), which changed the execution plan in the
right way.
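For reference, that change was something like this (a sketch using the
table and column names below):

ALTER TABLE mba_test.subscr_param ALTER COLUMN sub_id SET STATISTICS 1000;
ANALYZE mba_test.subscr_param;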


>Thursday, June 29, 2017 7:21 PM +03:00 from Mikhail :
>
>
>Hi guys,
>I'm at a loss. I'm running:
>=# select version();
> version
>--
>PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 
>(Red Hat 4.8.5-11), 64-bit
> 
>=# show work_mem;
>work_mem
>--
>27MB
> 
>=# show shared_buffers;
>shared_buffers
>
>3520MB
> 
> 
>Having the tables:
>CREATE TABLE mba_test.subscr_param(
>    sub_id integer NOT NULL,
>    res_id integer NOT NULL,
>    rel_status integer,
>    status integer,
>   and so on.. total 35 columns
>    
>CREATE TABLE mba_test.subscr (
>    sub_id integer NOT NULL,
>    sub_name character varying(80),
>    status integer NOT NULL,
>    acc_id integer,
>   and so on.. total 59 columns
>   
>alter table mba_test.subscr_param add primary key (sub_id, res_id);
>alter table mba_test.subscr add primary key (sub_id);
>create index idx_subscr_acc_id on mba_test.subscr(acc_id);
> 
>Tables and indexes has the following sizes / statistics:
>=# select relname, relpages, reltuples, pg_size_pretty(pg_relation_size(oid))
> from pg_class
>    where relname in ('subscr_param', 'subscr', 'idx_subscr_acc_id', 
>'subscr_pkey', 'subscr_param_pkey');
>   relname | relpages |  reltuples  | pg_size_pretty
>---+--+-+
>subscr    |    24086 |  825305 | 188 MB
>subscr_param_pkey |   115968 | 4.22936e+07 | 906 MB
>subscr_param  |  1446158 | 4.22936e+07 | 11 GB
>subscr_pkey   | 2265 |  825305 | 18 MB
>idx_subscr_acc_id | 2265 |  825305 | 18 MB
> 
>And "subscr" data distribution is:
>=# select acc_id, count(*) from mba_test.subscr group by acc_id order by 
>count(*) desc limit 5;
>  acc_id | count
>-+---
>1089212 |  5232
>1000154 |  2884
>1587496 |  1896
>1409682 |  1746
>1249568 |  1149
> 
>=# select count(*) from mba_test.subscr;
>count
>
>825305
> 
>=# select count(*) from mba_test.subscr_param;
>  count
>--
>42293572
> 
>Now, I take the second acc_id (1000154) and run the query below twice (to have
>everything cached that I can). The second execution gives the following:
> 
>=# explain (analyze, buffers) SELECT "SP".res_id, "SP".sub_id
>  FROM mba_test.subscr_param "SP"
>  JOIN mba_test.subscr "S" ON "SP".sub_id = "S".sub_id
>WHERE "S".acc_id = 1000154;
> QUERY PLAN
>-
>Hash Join  (cost=7841.72..2036917.93 rows=138159 width=8) (actual 
>time=39.501..10086.843 rows=86933 loops=1)
>   Hash Cond: ("SP".sub_id = "S".sub_id)
>   Buffers: shared hit=178674 read=1269448
>   ->  Seq Scan on subscr_param "SP"  (cost=0.00..1869093.72 rows=42293572 
>width=8) (actual time=0.024..6294.100 rows=42293572 loops=1)
> Buffers: shared hit=176710 read=1269448
>   ->  Hash  (cost=7808.02..7808.02 rows=2696 width=4) (actual 
>time=3.161..3.161 rows=2884 loops=1)
> Buckets: 4096  Batches: 1  Memory Usage: 134kB
> Buffers: shared hit=1964
> ->  Bitmap Heap Scan on subscr "S"  (cost=53.32..7808.02 rows=2696 
>width=4) (actual time=0.471..2.802 rows=2884 loops=1)
>   Recheck Cond: (acc_id = 1000154)
>   Heap Blocks: exact=1953
>   Buffers: shared hit=1964
>   ->  Bitmap Index Scan on idx_subscr_acc_id  (cost=0.00..52.64 
>rows=2696 width=0) (actual time=0.273..0.273 rows=2884 loops=1)
> Index Cond: (acc_id = 1000154)
> Buffers: shared hit=11
>Planning time: 0.155 ms
>Execution time: 10091.265 ms
> 
>It seems strange to decide to seq-scan a table with 42 million rows and 11 GB
>in size when there is an index (actually, the primary key) containing both
>columns (sub_id and res_id) that is less than 1 GB.
> 
>Now, I've explicitly turned seq scans off and got a perfect execution plan:
> 
>=# set enable_seqscan = off;
>=# explain (analyze, buffers) SELECT "SP".res_id, "SP".sub_id
>  FROM mba_test.subscr_param "SP"
>  JOIN mba_test.subscr "S" ON "SP".sub_id = "S".sub_id
>WHERE "S".acc_id = 1000154;
>   
>QUERY PLAN