Re: Service pgpool

2018-06-09 Thread Bo Peng
Hi,

> Jun 08 04:40:17 asa-pgpool02 pgpool-II-10[30787]: Starting pgpool-II-10 service: pgpool is already running with pid 4285 [FAILED]

It seems Pgpool-II is already running. Did you start Pgpool-II twice?

Also, may I ask some questions?

- How did you install Pgpool-II, using source code or RPM packages?

- If you installed Pgpool-II using RPM packages, did you use the RPM packages
  from the PGDG repository provided by the PostgreSQL community, or from the
  pgpool yum repository provided by the Pgpool-II community?

- Could you provide the RPM package name?

- How did you start Pgpool-II?
  It seems you started Pgpool-II with systemd, but the SysV init script is being used.
  Normally on RHEL 7.x a systemd service file is used to start a service.


On Fri, 8 Jun 2018 10:47:34 +0200
Jean Claude  wrote:

> Hi Bo Peng,
> 
> Thank you for your feedback.
> 
> You can find the status of my service below :
> 
> [root@asa-pgpool02 tmp]# systemctl status pgpool-II-10.service
> ● pgpool-II-10.service - SYSV: Starts and stops the pgpool daemon
>    Loaded: loaded (/etc/rc.d/init.d/pgpool-II-10; bad; vendor preset: disabled)
>    Active: failed (Result: exit-code) since Fri 2018-06-08 04:40:17 EDT; 5min ago
>      Docs: man:systemd-sysv-generator(8)
>   Process: 30787 ExecStart=/etc/rc.d/init.d/pgpool-II-10 start (code=exited, status=1/FAILURE)
> 
> Jun 08 04:40:17 asa-pgpool02 systemd[1]: Starting SYSV: Starts and stops the pgpool daemon...
> Jun 08 04:40:17 asa-pgpool02 pgpool-II-10[30787]: Starting pgpool-II-10 service: pgpool is already running with pid 4285 [FAILED]
> Jun 08 04:40:17 asa-pgpool02 systemd[1]: pgpool-II-10.service: control process exited, code=exited status=1
> Jun 08 04:40:17 asa-pgpool02 systemd[1]: Failed to start SYSV: Starts and stops the pgpool daemon.
> Jun 08 04:40:17 asa-pgpool02 systemd[1]: Unit pgpool-II-10.service entered failed state.
> Jun 08 04:40:17 asa-pgpool02 systemd[1]: pgpool-II-10.service failed.
> [root@asa-pgpool02 tmp]#
> 
> Thanks
> 
> 
> 
> 2018-06-08 4:22 GMT+02:00 Bo Peng :
> 
> > Hi,
> >
> > > below my problem about daemon :
> > >
> > > Jun 07 14:06:45 vm02 systemd[1]: Failed to start SYSV: Starts and stops
> > the pgpool daemon.
> > > Jun 07 14:06:45 vm02 systemd[1]: Unit pgpool-II-10.service entered
> > failed state.
> > > Jun 07 14:06:45 vm02 systemd[1]: pgpool-II-10.service failed.
> >
> > Could you show more error messages about systemd?
> >
> > A common cause of Pgpool-II failing to start is a missing
> > socket file directory, which is specified in pgpool.conf.
> >
> > By the way, as Joshua has already suggested, it would probably
> > be better to ask questions on the dedicated Pgpool-II
> > mailing list.
> >
> > On Thu, 7 Jun 2018 20:16:19 +0200
> > Jean Claude  wrote:
> >
> > > Hi all,
> > >
> > > below my problem about daemon :
> > >
> > > Jun 07 14:06:45 vm02 systemd[1]: Failed to start SYSV: Starts and stops
> > the
> > > pgpool daemon.
> > > Jun 07 14:06:45 vm02 systemd[1]: Unit pgpool-II-10.service entered failed
> > > state.
> > > Jun 07 14:06:45 vm02 systemd[1]: pgpool-II-10.service failed.
> > >
> > > Can you help me?
> > >
> > > Thanks a lot
> >
> >
> > --
> > Bo Peng 
> > SRA OSS, Inc. Japan
> >
> >


-- 
Bo Peng 
SRA OSS, Inc. Japan




Re: Fwd: compressed bits in sql table

2018-06-09 Thread Adrian Klaver

On 06/09/2018 03:43 PM, SANDEEP GOURNENI wrote:

Postgres version is 10.4
field type is char(32).basically its a fee bit field.


Can you explain what you mean by 'fee bit'?

Also not sure why you need to deal with compressed bits; can you not
just pull or push a string from/into the field?
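
For illustration, a minimal sketch of the pull/push-a-string approach
(the table and column names below are made up, not from your schema):

  -- hypothetical table with a char(32) field
  CREATE TABLE fee_data (id integer PRIMARY KEY, fee_bits char(32));

  -- write and read the field as an ordinary string
  INSERT INTO fee_data (id, fee_bits) VALUES (1, '01100101');
  SELECT fee_bits FROM fee_data WHERE id = 1;

  -- if the raw bytes need inspecting, the text value can be converted to bytea
  SELECT convert_to(fee_bits, 'UTF8') FROM fee_data WHERE id = 1;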



it is related to my application.

thanks,
Sandeep

On Sun, Jun 10, 2018 at 4:04 AM Adrian Klaver wrote:


On 06/09/2018 02:40 PM, SANDEEP GOURNENI wrote:
 >
 > Hi all,
 >
 > How can we read compressed bits in postgres.In one of my table
field  i
 > have compressed bits.how to read and write them in postgres using
cobol
 > program.

You are going to need to be more specific:

1) What version of Postgres?

2) What is the field type?

3) Are you referring to the TOAST table?

 >
 > Thanks,
 > Sandeep
 >


-- 
Adrian Klaver

adrian.kla...@aklaver.com 




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Fwd: compressed bits in sql table

2018-06-09 Thread SANDEEP GOURNENI
Postgres version is 10.4
field type is char(32). Basically it's a fee bit field.
it is related to my application.

thanks,
Sandeep

On Sun, Jun 10, 2018 at 4:04 AM Adrian Klaver 
wrote:

> On 06/09/2018 02:40 PM, SANDEEP GOURNENI wrote:
> >
> > Hi all,
> >
> > How can we read compressed bits in postgres.In one of my table field  i
> > have compressed bits.how to read and write them in postgres using cobol
> > program.
>
> You are going to need to be more specific:
>
> 1) What version of Postgres?
>
> 2) What is the field type?
>
> 3) Are you referring to the TOAST table?
>
> >
> > Thanks,
> > Sandeep
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Fwd: compressed bits in sql table

2018-06-09 Thread Adrian Klaver

On 06/09/2018 02:40 PM, SANDEEP GOURNENI wrote:


Hi all,

How can we read compressed bits in postgres.In one of my table field  i 
have compressed bits.how to read and write them in postgres using cobol 
program.


You are going to need to be more specific:

1) What version of Postgres?

2) What is the field type?

3) Are you referring to the TOAST table?



Thanks,
Sandeep




--
Adrian Klaver
adrian.kla...@aklaver.com



Fwd: compressed bits in sql table

2018-06-09 Thread SANDEEP GOURNENI
Hi all,

How can we read compressed bits in Postgres? In one of my table fields I
have compressed bits. How can we read and write them in Postgres using a
COBOL program?

Thanks,
Sandeep


Re: pg_upgrade and wraparound

2018-06-09 Thread Tom Lane
Adrian Klaver  writes:
> On 06/09/2018 03:46 AM, Alexander Shutyaev wrote:
>> The upgrade operation failed after several hours with the following error:
>> database is not accepting commands to avoid wraparound data loss in 
>> database with OID 0

> Do you know which database has an OID of 0?

Well, none do, so the correct question is what is passing an invalid
database OID to the code that's complaining.  This sure looks like a
bug, though I'm not sure we have enough info to locate it.

regards, tom lane



Re: Slow planning time for simple query

2018-06-09 Thread Tom Lane
Maksim Milyutin  writes:
> On hot standby I faced with the similar problem.
> ...
> is planned 4.940 ms on master and *254.741* ms on standby.

Presumably the problem is that the standby isn't authorized to change
the btree index's "entry is dead" bits, so it's unable to prune index
entries previously detected as dead, and thus the logic that intends
to improve this situation doesn't work on the standby.

(I wonder though why, if you executed the same query on the master,
its setting of the index-entry-is-dead bits didn't propagate to the
standby.)

I wonder if we should extend the "SnapshotNonVacuumable" logic introduced
in commit 3ca930fc3 so that in hot standby, *all* index entries are deemed
non vacuumable.  This would essentially get rid of long standby planning
times in this sort of scenario by instead accepting worse (possibly much
worse) planner range estimates.  I'm unsure if that's a good tradeoff or
not.

regards, tom lane



Re: pg_upgrade and wraparound

2018-06-09 Thread Adrian Klaver

On 06/09/2018 03:46 AM, Alexander Shutyaev wrote:

Hello!

I've been trying to upgrade a postgresql cluster from 9.6 to 10. I've 
executed the pg_upgrade with the following options:


  /usr/lib/postgresql/10/bin/pg_upgrade -b /usr/lib/postgresql/9.6/bin/ 
-B /usr/lib/postgresql/10/bin/ -d /var/lib/postgresql/9.6/main -D 
/var/lib/postgresql/10/main -o ' -c 
config_file=/etc/postgresql/9.6/main/postgresql.conf' -O ' -c 
config_file=/etc/postgresql/10/main/postgresql.conf'


The upgrade operation failed after several hours with the following error:

database is not accepting commands to avoid wraparound data loss in 
database with OID 0


Do you know which database has an OID of 0?
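
For what it is worth, the database OIDs, and how close each database is to
wraparound, can be checked with something like this (a sketch, run on the
old cluster):

  SELECT oid, datname, age(datfrozenxid) AS xid_age
  FROM pg_database
  ORDER BY oid;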



Earlier in the log there are a lot of messages like

pg_restore: executing BLOB 1740736966
pg_restore: WARNING:  database with OID 0 must be vacuumed within 
1000279 transactions
HINT:  To avoid a database shutdown, execute a database-wide VACUUM in 
that database.

You might also need to commit or roll back old prepared transactions.
pg_restore: WARNING:  database with OID 0 must be vacuumed within 
1000278 transactions
HINT:  To avoid a database shutdown, execute a database-wide VACUUM in 
that database.

You might also need to commit or roll back old prepared transactions.

I've tried to do VACUUM FULL on my 9.6 cluster on all databases and then 
retried the pg_upgrade - it failed in the same way.


Also to be noted, earlier this cluster was succesfully upgraded with 
pg_upgrade using similar parameters from older versions (at least 2 
times, something like 9.1 -> 9.3, 9.3 -> 9.6). The database is around 
700 GB and has very many pg_largeobjects in it.


What could be the reason of this and how can I perform my upgrade?

Thanks in advance,
Alexander



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Slow planning time for simple query

2018-06-09 Thread Maksim Milyutin

On 09.06.2018 21:49, Maksim Milyutin wrote:


On hot standby I faced with the similar problem.


Sorry, the problem in question is described here 
https://www.postgresql.org/message-id/22136.1528312205%40sss.pgh.pa.us


--
Regards,
Maksim Milyutin




Re: Slow planning time for simple query

2018-06-09 Thread Maksim Milyutin

On hot standby I faced a similar problem.


The following simple query:

SELECT array_to_json(array_agg(t)) from (
    select *
    from main.message m
    join main.message_instance mi on m.message_id = mi.message_id
    join main.channel_type ct on mi.channel_type_id = ct.channel_type_id
    where m.user_id = 2152595
      and ct.name = 'android'
      and m.user_delete_ts is null
    order by
      case
        when read_ts is null then 0
        else 1
      end,
      m.message_id desc
    limit 100
) t;

is planned in 4.940 ms on the master and *254.741* ms on the standby. I tried
to investigate the reasons for such long planning on the replica and determined
that index accesses at the planning stage lead to multiple heap scans.



Here is the execution plan statistics of query above:

 Aggregate (actual time=0.641..0.642 rows=1 loops=1)
   Buffers: shared hit=14
   ->  Subquery Scan on t (actual time=0.337..0.360 rows=2 loops=1)
         Buffers: shared hit=14
         ->  Limit (actual time=0.292..0.301 rows=2 loops=1)
               Buffers: shared hit=14
               ->  Sort (actual time=0.287..0.291 rows=2 loops=1)
                     Sort Key: (CASE WHEN (m.read_ts IS NULL) THEN 0 ELSE 1 END), m.message_id DESC
                     Sort Method: quicksort  Memory: 27kB
                     Buffers: shared hit=14
                     ->  Nested Loop (actual time=0.157..0.219 rows=2 loops=1)
                           Buffers: shared hit=14
                           ->  Seq Scan on channel_type ct (actual time=0.043..0.048 rows=1 loops=1)
                                 Filter: (name = 'android'::text)
                                 Rows Removed by Filter: 7
                                 Buffers: shared hit=1
                           ->  Nested Loop (actual time=0.098..0.148 rows=2 loops=1)
                                 Buffers: shared hit=13
                                 ->  Index Scan using message_user_id_idx1 on message m (actual time=0.055..0.063 rows=2 loops=1)
                                       Index Cond: (user_id = 2152595)
                                       Buffers: shared hit=5
                                 ->  Index Scan using message_instance_pkey on message_instance mi (actual time=0.026..0.029 rows=1 loops=2)
                                       Index Cond: ((message_id = m.message_id) AND (channel_type_id = ct.channel_type_id))
                                       Buffers: shared hit=8

So there are only 14 buffer cache accesses during query execution. But the
pg_statio_user_tables and pg_statio_user_indexes views show a different
picture:


select
    schemaname, relname, indexrelname, idx_blks_hit, idx_blks_read,
pg_size_pretty(pg_relation_size(indexrelid::regclass))
from pg_statio_user_indexes
where idx_blks_hit + idx_blks_read > 0;

 schemaname |     relname      |     indexrelname      | idx_blks_hit | idx_blks_read | pg_size_pretty
------------+------------------+-----------------------+--------------+---------------+----------------
 main       | channel_type     | channel_type_pkey     |            2 |             0 | 16 kB
 main       | message_instance | message_instance_pkey |          666 |             0 | 345 MB
 main       | message          | message_pkey          |           56 |             0 | 53 MB
 main       | message          | message_user_id_idx1  |            3 |             0 | 17 MB


select
    schemaname, relname, heap_blks_read, heap_blks_hit
from pg_statio_user_tables
where heap_blks_read + heap_blks_hit + idx_blks_hit + idx_blks_read > 0;

 schemaname |     relname      | heap_blks_read | heap_blks_hit
------------+------------------+----------------+---------------
 main       | channel_type     |              0 |             3
 main       | message          |              0 |          8682
 main       | message_instance |              0 |        114922

(This experiment was carried out on a test hot standby with no parallel
activity and with cleared system statistics.)



Vacuuming the problem tables (main.message and main.message_instance) on the
master node resolves the problem somehow, but running it frequently slows down
all queries and generally increases IO.
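
For reference, a sketch of the targeted manual vacuum described above, run on
the master for just the two problem tables rather than the whole database:

  VACUUM (VERBOSE) main.message;
  VACUUM (VERBOSE) main.message_instance;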



Is there any way to overcome the problem, or is it a fundamental issue that
makes it necessary to rewrite the query to simplify planning?



--
Regards,
Maksim Milyutin




Re: Performance problem postgresql 9.5

2018-06-09 Thread Miguel Angel Sanchez Sandoval
Guys, indeed I had an intruder; with the recent migration I installed
a remote mirror that had a vulnerability. Thank you very much for your fast
response.
My best regards



2018-06-08 14:56 GMT-05:00 Miguel Angel Sanchez Sandoval <
massan...@gmail.com>:

>
> Hi guys, migrate from 8.4 to 9.5, all OK except that 2-3 days pass and the
> database experiences slowness, I execute the linux top command and it shows
> me a postgres user process executing a strange command (2yhdgrfrt63788)
> that I consume a lot of CPU, I see the querys active and encounter select
> fun ('./ 2yhdgrfrt63788') , this did not happen in version 8.4, any help
> would appreciate it.
>
> regards
>
>


Re: (2^63 - 1)::bigint => out of range? (because of the double precision)

2018-06-09 Thread Alexey Dokuchaev
On Sat, Jun 09, 2018 at 07:20:17AM -0700, Adrian Klaver wrote:
> On 06/09/2018 05:24 AM, Alexey Dokuchaev wrote:
> >OK, but what about highly volatile tables for come-and-go type of things?
> >Think of a session pool, or task queue.  I want to use NO CYCLE for this
> >kind of tables as it would allow me to never worry about hitting "nextval:
> >reached maximum value of sequence" error, recycle ids (because they come
> >and go), and still be safe because PK constraint protects me.  Any flaws
> >in this vision of mine?
> 
> Assuming you meant CYCLE not NO CYCLE, I see no issue.

Oh, mea culpa, I meant CYCLE of course (in the quoted paragraph above).

> If you do use a sequence with NO CYCLE you can use ALTER SEQUENCE some_seq
> RESTART to reset it:
> https://www.postgresql.org/docs/10/static/sql-altersequence.html

I understand that I can reset it; the idea was to minimize the table and
sequence maintenance while allowing it to work, well, forever.  Hence the
CYCLE idea.  Anyway, I've heard you, thanks Adrian.

./danfe



Re: (2^63 - 1)::bigint => out of range? (because of the double precision)

2018-06-09 Thread Adrian Klaver

On 06/09/2018 05:24 AM, Alexey Dokuchaev wrote:

On Fri, Jun 08, 2018 at 10:30:45AM -0700, Adrian Klaver wrote:




My guess is because sequences are often used to provide numbers for a
PRIMARY KEY and NO CYCLE is a heads up for key duplication before the
PK code kicks in.


OK, but what about highly volatile tables for come-and-go type of things?
Think of a session pool, or task queue.  I want to use NO CYCLE for this
kind of tables as it would allow me to never worry about hitting "nextval:
reached maximum value of sequence" error, recycle ids (because they come
and go), and still be safe because PK constraint protects me.  Any flaws
in this vision of mine?


Assuming you meant CYCLE not NO CYCLE, I see no issue. If you do use a 
sequence with NO CYCLE you can use ALTER SEQUENCE some_seq RESTART to 
reset it:


https://www.postgresql.org/docs/10/static/sql-altersequence.html
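
A minimal sketch of both options (the sequence name is made up for
illustration):

  -- wraps around to the minimum value instead of erroring at the maximum
  CREATE SEQUENCE session_id_seq CYCLE;

  -- or, with the default NO CYCLE, reset the sequence manually when needed
  ALTER SEQUENCE session_id_seq RESTART;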



./danfe




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Performance problem postgresql 9.5

2018-06-09 Thread Andreas Kretschmer




Am 08.06.2018 um 22:09 schrieb Alvaro Herrera:

On 2018-Jun-08, Miguel Angel Sanchez Sandoval wrote:


Hi guys, I migrated from 8.4 to 9.5, all OK except that after 2-3 days the
database experiences slowness. I execute the Linux top command and it shows
me a postgres user process executing a strange command (2yhdgrfrt63788)
that consumes a lot of CPU. Looking at the active queries I encounter
select fun('./2yhdgrfrt63788'); this did not happen in version 8.4. Any help
would be appreciated.

Hmm, has your database been compromised?  You may have an intruder there --
beware.


https://www.imperva.com/blog/2018/03/deep-dive-database-attacks-scarlett-johanssons-picture-used-for-crypto-mining-on-postgre-database/
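
For anyone hitting the same thing, a sketch of how to spot and terminate such
a backend from SQL (the pid below is a placeholder for whatever
pg_stat_activity reports):

  SELECT pid, usename, state, query
  FROM pg_stat_activity
  WHERE state = 'active';

  SELECT pg_terminate_backend(12345);  -- hypothetical pid of the offending session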

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




pg_upgrade and wraparound

2018-06-09 Thread Alexander Shutyaev
Hello!

I've been trying to upgrade a postgresql cluster from 9.6 to 10. I've
executed the pg_upgrade with the following options:

 /usr/lib/postgresql/10/bin/pg_upgrade -b /usr/lib/postgresql/9.6/bin/ -B
/usr/lib/postgresql/10/bin/ -d /var/lib/postgresql/9.6/main -D
/var/lib/postgresql/10/main -o ' -c
config_file=/etc/postgresql/9.6/main/postgresql.conf' -O ' -c
config_file=/etc/postgresql/10/main/postgresql.conf'

The upgrade operation failed after several hours with the following error:

database is not accepting commands to avoid wraparound data loss in
database with OID 0

Earlier in the log there are a lot of messages like

pg_restore: executing BLOB 1740736966
pg_restore: WARNING:  database with OID 0 must be vacuumed within 1000279
transactions
HINT:  To avoid a database shutdown, execute a database-wide VACUUM in that
database.
You might also need to commit or roll back old prepared transactions.
pg_restore: WARNING:  database with OID 0 must be vacuumed within 1000278
transactions
HINT:  To avoid a database shutdown, execute a database-wide VACUUM in that
database.
You might also need to commit or roll back old prepared transactions.
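
For reference, the database-wide VACUUM that the HINT asks for can be issued
in each database of the old cluster along these lines (a sketch; FREEZE is
optional but freezes as aggressively as possible):

  VACUUM (FREEZE, VERBOSE);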

I've tried to do VACUUM FULL on my 9.6 cluster on all databases and then
retried the pg_upgrade - it failed in the same way.

Also to be noted: earlier this cluster was successfully upgraded with
pg_upgrade using similar parameters from older versions (at least 2 times,
something like 9.1 -> 9.3 and 9.3 -> 9.6). The database is around 700 GB and
contains a very large number of large objects (rows in pg_largeobject).

What could be the reason for this, and how can I perform my upgrade?

Thanks in advance,
Alexander