[GENERAL] Migration to pglister - Before

2017-11-20 Thread Stephen Frost
Greetings,

We will be migrating these lists to pglister in the next few minutes.

This final email on the old list system is intended to let you know
that future emails will have different headers and you will need to
adjust your filters.

The changes which we expect to be most significant to users can be found
on the wiki here: https://wiki.postgresql.org/wiki/PGLister_Announce

Once the migration of these lists is complete, an 'after' email will be
sent out.

Thanks!

Stephen




Re: [GENERAL] Query Using Massive Temp Space

2017-11-20 Thread legrand legrand
Hi,
There are many Hash and Merge joins that may generate a lot of temp space;
it could also be a problem of a forgotten column in the join.

Could you also provide the index definitions (pk, uk and others)
along with the EXPLAIN (ANALYZE, BUFFERS) output?

(You can limit the scope of the query, to help it finish, by limiting the
number of rows from candidates, e.g.

...
FROM match_candidate_address WHERE account_id NOT IN (:exclude_ids)
LIMIT 100
)

Regards
PAscal








[GENERAL] Registering Event Log on Windows

2017-11-20 Thread Ronen Nofar
Hi Guys,

I'm trying to register the event log on Windows (Windows Server 2012
64-bit) with the following command:
regsvr32.exe C:\PostgreSQL\pg96\lib\postgresql\pgevent.dll
and I'm receiving an error message:

The module "C:\PostgreSQL\pg96\lib\postgresql\pgevent.dll" was loaded but
the entry point DllRegisterServer was not found.
Make sure that "C:\PostgreSQL\pg96\lib\postgresql\pgevent.dll" is a valid
DLL or OCX file and then try again.

I tried several solutions but none of them helped, for example:
I ran the command with another version of pgevent.dll
I ran the command with regasm.exe

The PostgreSQL version is:  PostgreSQL 9.6.4

Any ideas?

Thanks in advance,
Ronen


Re: [GENERAL] Query Using Massive Temp Space

2017-11-20 Thread Semler Miloslav
Hello,
I have one question… Why are you using such a huge number of grouping columns? Is there
some reason for it? It is definitely not a fast method.  I would prefer to first
do a named query grouped by the ids (account_id, candidate_id, parent_id) and then
left join the table candidates (to get the rest of the requested columns) to this
named query on those ids, if it is possible; something like the sketch below.
You would also avoid hash indexing of these nine columns, so you will save a certain
amount of memory.
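
A rough sketch of that shape (untested, and the non-id columns of candidates
are just placeholders):

WITH grouped AS (
    SELECT account_id, candidate_id, parent_id, count(*) AS cnt
    FROM match_candidate_address
    GROUP BY account_id, candidate_id, parent_id
)
SELECT g.*, c.some_col, c.other_col
FROM grouped g
LEFT JOIN candidates c
       ON c.account_id   = g.account_id
      AND c.candidate_id = g.candidate_id
      AND c.parent_id    = g.parent_id;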

Miloslav

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Cory Tucker
Sent: Monday, November 20, 2017 7:17 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Query Using Massive Temp Space

Hello,

I have a query that is using a tremendous amount of temp disk space given the 
overall size of the dataset.  I'd love for someone to try to explain what PG is 
doing and why it's using so much space for the query.

First off, the system is PG 9.6 on Ubuntu with 4 cores and 28 GB of RAM.  The 
query in question is a fairly large join of several tables (6) including some 
aggregations.  The overall dataset size of the 6 tables in question is about 
20GB and the largest table is about 15M rows.  The query is essentially a dump 
of most of the data from these tables joined together to be used in another 
system.

When the query runs it begins to use an aggressive amount of temp space on the 
volume over the course of many hours, until it reaches about 95% capacity and 
then tapers off.  Never completes though.  The temp space it uses is around 
1.5TB out of a 2TB volume.  Again, the total size of the relations in question 
is only 20GB.

Can anyone explain how the query could possibly use so much temp space?

Query and query plan are attached.  Please let me know if any more info is 
needed.

thanks
--Cory


Re: [GENERAL] unnest-like pg_stats.most_common_values and pg_stats.most_common_freqs

2017-11-20 Thread Luca Ferrari
On Fri, Nov 17, 2017 at 9:41 PM, Justin Pryzby  wrote:
> I think you want something like this ?
>
> postgres=# SELECT schemaname, tablename, attname, 
> unnest(histogram_bounds::text::text[]), histogram_bounds FROM pg_stats LIMIT 
> 9;
>  pg_catalog | pg_pltemplate | tmplname| plperl  | 
> {plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}

I don't think that is what I'm looking for; I would like something like:

 select unnest( histogram_bounds::text::text[] ), unnest(
most_common_freqs ) from pg_stats

but with correlation between the two array indexes. Is it something
achievable in SQL? Or should I use a plpgsql loop with an index?
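
(Maybe the multi-argument form of unnest in FROM, available since 9.4, is
what I need - if I understand it correctly it expands the arrays in
parallel, keeping the indexes aligned:

 select s.tablename, s.attname, t.val, t.freq
 from pg_stats s,
      unnest( s.most_common_vals::text::text[],
              s.most_common_freqs ) as t(val, freq);

but I have not verified that.)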

Luca




Re: [GENERAL] Query Using Massive Temp Space

2017-11-19 Thread Laurenz Albe
Cory Tucker wrote:
> I have a query that is using a tremendous amount of temp disk space given the 
> overall size of the dataset.
> I'd love for someone to try to explain what PG is doing and why it's using so 
> much space for the query.

It could be a sort or a hash operation.

To determine what is going on, EXPLAIN (ANALYZE, BUFFERS) output
would be useful.
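
That is, run something like

  EXPLAIN (ANALYZE, BUFFERS) <your query>;

and post the complete output.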

Yours,
Laurenz Albe




[GENERAL] Query Using Massive Temp Space

2017-11-19 Thread Cory Tucker
Hello,

I have a query that is using a tremendous amount of temp disk space given
the overall size of the dataset.  I'd love for someone to try to explain
what PG is doing and why it's using so much space for the query.

First off, the system is PG 9.6 on Ubuntu with 4 cores and 28 GB of RAM.
The query in question is a fairly large join of several tables (6)
including some aggregations.  The overall dataset size of the 6 tables in
question is about 20GB and the largest table is about 15M rows.  The query
is essentially a dump of *most* of the data from these tables joined
together to be used in another system.

When the query runs it begins to use an aggressive amount of temp space on
the volume over the course of many hours, until it reaches about 95%
capacity and then tapers off.  Never completes though.  The temp space it
uses is around 1.5TB out of a 2TB volume.  Again, the *total* size of the
relations in question is only 20GB.

Can anyone explain how the query could possibly use so much temp space?

Query and query plan are attached.  Please let me know if any more info is
needed.

thanks
--Cory


query
Description: Binary data


query_plan
Description: Binary data



Re: [GENERAL] How to store multiple rows in array .

2017-11-19 Thread Tom Lane
brahmesr  writes:
> SELECT ARRAY (SELECT ROW (COL1,COL2, COUNT(*) *AS txn_cnt* )::
> ap.validate_crtr_line_items$inv_lines_rt FROM Distinct_Records GROUP BY
> COL1, COL2  HAVING COUNT(*) > 1) INTO L_INV_LINES;

> ERROR:  syntax error at or near "AS"
> LINE 73: COL1,COL2, COUNT(*) AS txn_cnt...

> Why "AS" is throwing an error ?

"AS" is part of SELECT-list syntax, not ROW(...) syntax.

Even if it were allowed in ROW(), it would be totally pointless in
this context, because when you cast the ROW() result to the
ap.validate_crtr_line_items$inv_lines_rt composite type, that type
is what determines the column names.
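
So, dropping the AS, something like this should work (untested):

SELECT ARRAY (SELECT ROW(COL1, COL2, COUNT(*))::ap.validate_crtr_line_items$inv_lines_rt
              FROM Distinct_Records
              GROUP BY COL1, COL2
              HAVING COUNT(*) > 1)
  INTO L_INV_LINES;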

regards, tom lane




Re: [GENERAL] How to store multiple rows in array .

2017-11-19 Thread brahmesr
I already defined the composite type as
"validate_crtr_line_items$inv_lines_rt" with the selected
columns (COL1, COL2, COL3).

DECLARE block:

INV_LINES_T  validate_crtr_line_items$inv_lines_rt ARRAY;
L_INV_LINES INV_LINES_T%TYPE;
L_INV_LINES$temporary_record ap.validate_crtr_line_items$inv_lines_rt;

 Collecting the records into L_INV_LINES 

SELECT ARRAY (SELECT ROW (COL1,COL2, COUNT(*) *AS txn_cnt* )::
ap.validate_crtr_line_items$inv_lines_rt FROM Distinct_Records GROUP BY
COL1, COL2  HAVING COUNT(*) > 1) INTO L_INV_LINES;

ERROR:  syntax error at or near "AS"
LINE 73: COL1,COL2, COUNT(*) AS txn_cnt...

Why "AS" is throwing an error ?







Re: [GENERAL] parallel query in 9.6.6 + oracle_fdw -> ERROR: invalid cache ID: 41

2017-11-19 Thread Chris Mair

Hi,


So I'd call this an oracle_fdw bug.  It needs to postpone what it's
doing here to the first normal FDW function call in a session.


Thanks a lot for looking so quickly into this!

I've opened an issue with oracle_fdw:

https://github.com/laurenz/oracle_fdw/issues/215

Thanks,
Chris.





Re: [GENERAL] parallel query in 9.6.6 + oracle_fdw -> ERROR: invalid cache ID: 41

2017-11-19 Thread Tom Lane
Chris Mair  writes:
> Whenever a session has performed a query on a foreign table, any subsequent
> query on a local table big enough to use the parallel query feature exits with
> an error:
>  ERROR:  invalid cache ID: 41
>  CONTEXT:  parallel worker

Hm, syscache 41 is PROCNAMEARGSNSP in 9.6, if I counted right.

> (gdb) bt
> #0  0x7f16a0f4d1f7 in raise () from /lib64/libc.so.6
> #1  0x7f16a0f4e8e8 in abort () from /lib64/libc.so.6
> #2  0x008094b4 in errfinish (dummy=dummy@entry=0) at elog.c:557
> #3  0x0080aea2 in elog_finish (elevel=elevel@entry=22, 
> fmt=fmt@entry=0x9d9965 "invalid cache ID: %d") at elog.c:1378
> #4  0x007ffd82 in SearchSysCacheList (cacheId=cacheId@entry=41, 
> nkeys=nkeys@entry=2, key1=key1@entry=139734905138463, key2=, 
> key3=key3@entry=0, key4=key4@entry=0) at syscache.c:1210
> #5  0x7f169161a59f in _PG_init () at oracle_fdw.c:709
> #6  0x0080c476 in internal_load_library (libname=, 
> libname@entry=0x7f16a1bfdde8 ) at 
> dfmgr.c:276
> #7  0x0080c708 in RestoreLibraryState (start_address=0x7f16a1bfdde8 
> ) at dfmgr.c:741
> #8  0x004e72cf in ParallelWorkerMain (main_arg=) at 
> parallel.c:1069

Apparently, oracle_fdw is trying to do a procedure lookup in its _PG_init
function.  This is a horrible idea: it assumes that _PG_init is invoked
inside a transaction, which is wrong if the library is preloaded, for
example.  (I'd bet that adding oracle_fdw to shared_preload_libraries
would fail badly, though perhaps not with this exact error message.)

So I'd call this an oracle_fdw bug.  It needs to postpone what it's
doing here to the first normal FDW function call in a session.

regards, tom lane




Re: [GENERAL] How to store multiple rows in array .

2017-11-19 Thread Tom Lane
Pavel Stehule  writes:
> 2017-11-19 18:57 GMT+01:00 Brahmam Eswar :
>> How do I collect multiple columns into an array which is a composite data
>> type of all the selected columns?

> SELECT ARRAY(SELECT ROW(col1, col2, ...) INTO

You probably need an explicit cast to the rowtype.  That is,

declare myarray rowtypename[];
...
select array(select row(col1, ...)::rowtypename from ...) into myarray;
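
For instance, a throwaway illustration (the type, table and values here
are invented):

create type two_cols as (a int, b text);

do $$
declare
  myarray two_cols[];
begin
  select array(select row(g, 'val ' || g)::two_cols
               from generate_series(1, 3) g)
    into myarray;
  raise notice 'collected: %', myarray;   -- prints the three-element array
end
$$;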

regards, tom lane




[GENERAL] parallel query in 9.6.6 + oracle_fdw -> ERROR: invalid cache ID: 41

2017-11-19 Thread Chris Mair

Hi!

I've found a problem in either PostgreSQL 9.6.6 or oracle_fdw 2.0.0.

Background:

I was testing a setup on a current CentOS 7 system with PostgreSQL 9.6.6
installed from the PGDG repositories and oracle_fdw 2.0.0 installed via pgxn
install. Everything went absolutely fine until I enabled the parallel query
feature (I set max_worker_processes and max_parallel_workers_per_gather both to
8).

Bug:

Whenever a session has performed a query on a foreign table, any subsequent
query on a local table big enough to use the parallel query feature exits with
an error:

ERROR:  invalid cache ID: 41
CONTEXT:  parallel worker

Whenever a session has NOT performed a query on a foreign table, parallel
queries all execute OK. No problem.

The problem is not related to what kind of foreign query was executed.

Reproducibility is 100%!

Some debugging work:

I've seen this was reported some time ago here:

https://www.postgresql.org/message-id/29113.1488910563%40sss.pgh.pa.us

I've followed Tom Lane's advice and did - on the same machine - a debug-enabled
build of the 9.6.6 source code with

./configure --enable-cassert --enable-debug CFLAGS="-ggdb -Og -g3 
-fno-omit-frame-pointer" --prefix=/home/centos/pgbuild

and changed the ERROR to PANIC in src/backend/utils/cache/syscache.c.

I then redid the pgxn install oracle_fdw with paths pointing to the debug
build.  The problem was easily reproduced. A count(*) on a biggish table with 4
workers produced 4 core dumps!

Below is the back trace from gdb from each of the cores.

Let me know if there's anything else I can do or check!

Thanks :)
Chris.



[centos@asia data]$ ls -l core*
-rw---. 1 centos centos 152059904 Nov 19 17:43 core.30430
-rw---. 1 centos centos 152059904 Nov 19 17:43 core.30431
-rw---. 1 centos centos 152059904 Nov 19 17:43 core.30432
-rw---. 1 centos centos 152059904 Nov 19 17:43 core.30433

[centos@asia data]$ gdb ../bin/postgres core.30430
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
...
Reading symbols from /home/centos/pgbuild/bin/postgres...done.
[New LWP 30430]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `postgres: bgworker: parallel worke'.
Program terminated with signal 6, Aborted.
#0  0x7f16a0f4d1f7 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install glibc-2.17-196.el7.x86_64 
libaio-0.3.109-13.el7.x86_64
(gdb) bt
#0  0x7f16a0f4d1f7 in raise () from /lib64/libc.so.6
#1  0x7f16a0f4e8e8 in abort () from /lib64/libc.so.6
#2  0x008094b4 in errfinish (dummy=dummy@entry=0) at elog.c:557
#3  0x0080aea2 in elog_finish (elevel=elevel@entry=22, fmt=fmt@entry=0x9d9965 
"invalid cache ID: %d") at elog.c:1378
#4  0x007ffd82 in SearchSysCacheList (cacheId=cacheId@entry=41, 
nkeys=nkeys@entry=2, key1=key1@entry=139734905138463, key2=, 
key3=key3@entry=0, key4=key4@entry=0) at syscache.c:1210
#5  0x7f169161a59f in _PG_init () at oracle_fdw.c:709
#6  0x0080c476 in internal_load_library (libname=, 
libname@entry=0x7f16a1bfdde8 ) at dfmgr.c:276
#7  0x0080c708 in RestoreLibraryState (start_address=0x7f16a1bfdde8 
) at dfmgr.c:741
#8  0x004e72cf in ParallelWorkerMain (main_arg=) at 
parallel.c:1069
#9  0x0069b0bf in StartBackgroundWorker () at bgworker.c:742
#10 0x006a70bd in do_start_bgworker (rw=rw@entry=0x1f3b8d0) at 
postmaster.c:5612
#11 0x006a722a in maybe_start_bgworkers () at postmaster.c:5809
#12 0x006a7cf3 in sigusr1_handler (postgres_signal_arg=) 
at postmaster.c:4990
#13 
#14 0x7f16a1007783 in __select_nocancel () from /lib64/libc.so.6
#15 0x006a80e5 in ServerLoop () at postmaster.c:1683
#16 0x006a93c0 in PostmasterMain (argc=argc@entry=1, 
argv=argv@entry=0x1f13cc0) at postmaster.c:1327
#17 0x00626905 in main (argc=1, argv=0x1f13cc0) at main.c:228
(gdb) quit

[centos@asia data]$ gdb ../bin/postgres core.30431
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
...
Reading symbols from 

Re: [GENERAL] How to store multiple rows in array .

2017-11-19 Thread Pavel Stehule
2017-11-19 18:57 GMT+01:00 Brahmam Eswar :

> Hi ,
>
> The system is migrating from Oracle to PostgreSQL.
> Oracle provides the BULK COLLECT INTO feature to collect multiple
> records from a table.
>
>  Select COL1,COL2 ,COL3  BULK COLLECT INTO LINES from Distinct_Records.
>
> LINES IS TABLE OF TABLE1  (Defined lines as IS TABLE OF type).
>
> In Postgres:
>
>INV_LINES_T validate_crtr_line_items$inv_lines_rt ARRAY;
> L_INV_LINES INV_LINES_T%TYPE;
> L_INV_LINES$temporary_record ap.validate_crtr_line_items$inv_lines_rt;
>
>  I'm trying to collect the records in L_INV_LINES
>
> SELECT ARRAY (SELECT COL1,COL2,COL3  FROM Distinct_Records) INTO
> L_INV_LINES;
>
> It seems selecting multiple columns into an array doesn't work in
> PL/pgSQL.
>
> How do I collect multiple columns into an array which is a composite data
> type of all the selected columns?
>

SELECT ARRAY(SELECT ROW(col1, col2, ...) INTO

>
> --
> Thanks & Regards,
> Brahmeswara Rao J.
>


[GENERAL] How to store multiple rows in array .

2017-11-19 Thread Brahmam Eswar
Hi ,

The system is migrating from Oracle to PostgreSQL.
Oracle provides the BULK COLLECT INTO feature to collect multiple
records from a table.

 Select COL1,COL2 ,COL3  BULK COLLECT INTO LINES from Distinct_Records.

LINES IS TABLE OF TABLE1  (Defined lines as IS TABLE OF type).

In Postgres:

   INV_LINES_T validate_crtr_line_items$inv_lines_rt ARRAY;
L_INV_LINES INV_LINES_T%TYPE;
L_INV_LINES$temporary_record ap.validate_crtr_line_items$inv_lines_rt;

 I'm trying to collect the records in L_INV_LINES

SELECT ARRAY (SELECT COL1,COL2,COL3  FROM Distinct_Records) INTO
L_INV_LINES;

It seems selecting multiple columns into an array doesn't work in PL/pgSQL.

How do I collect multiple columns into an array which is a composite data
type of all the selected columns?

-- 
Thanks & Regards,
Brahmeswara Rao J.


Re: [GENERAL] mild modification to pg_dump

2017-11-18 Thread Matt Zagrabelny
On Fri, Nov 17, 2017 at 3:58 PM, marcelo  wrote:

> Again: knowing of .pgpass (thank you Scott), this is what I will do.
>
>
Just in case you might not know: the permissions on the .pgpass file must
disallow any access by group or others, or libpq will ignore it. For instance:

chmod 0600 .pgpass
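
Each line of the file has this format (any field can also be the
wildcard *), for example:

hostname:port:database:username:password
localhost:5432:mydb:myuser:s3cret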

-m


[GENERAL] Changing the location of the default data directory on PG 9.6.6 (CentOS 7)?

2017-11-17 Thread Robert Gordon
I'm trying to identify which postgresql.conf file I should be editing, in order 
to change the default database files location for Postgres 9.6.6, when 
installed on CentOS 7.x/

Is the bet method for changing the default data directory at the time of 
database init, to include the $PGDATA variable at initialization, such as:

su - postgres -c '/usr/pgsql-9.6/bin/initdb --pgdata=$PGDATA', where $PGDATA is 
the directory path that I want the psql database files to reside?




Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread Andres Freund
On 2017-11-17 18:56:45 -0300, marcelo wrote:
> Truly, I'm caught up in a very big app, so I have no time to read all
> the docs.

People on this list also have jobs.

- Andres




Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread marcelo

Again: knowing of .pgpass (thank you Scott), this is what I will do.

On 17/11/17 17:49, Ron Johnson wrote:

On 11/17/2017 02:23 PM, John R Pierce wrote:

On 11/17/2017 12:19 PM, marcelo wrote:

Sorry, I was not exact.
I don't need nor like to change pg_dump. Rather, based on pg_dump 
code, I need to develop a daemon which can receive a TCP message 
(from a privileged app) containing some elements: the database to 
dump, the user under which to do that, and his password. (My apps are 
using that same data, of course, encrypted to the common users).



I would just fork pg_dump to do the actual dump rather than try and 
incorporate its source code into your app.




Specifically, do you mean to write a simple daemon which forks pg_dump 
at the appropriate time?









Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread marcelo

Thank you, Scott.
That's happening to me because of incomplete reading of the docs.
Truly, I'm caught up in a very big app, so I have no time to read all the
docs.



On 17/11/17 18:31, Scott Mead wrote:


On Fri, Nov 17, 2017 at 4:06 PM, marcelo > wrote:


I need to "emulate" the pg_dump code because of the password prompt.
Years ago I wrote a program (for the QNX environment) that caught
some prompt and emulated the standard input. I don't want to do
that again.


pg_dump can use an environment variable "PGPASSWORD" upon execution 
(actually, all libpq programs can).  You could have a wrapper that 
sets the environment variable and then executes pg_dump; this would 
get you around that prompt.  Similarly, you could use the .pgpass file.


https://www.postgresql.org/docs/9.5/static/libpq-envars.html
https://www.postgresql.org/docs/9.5/static/libpq-pgpass.html

--Scott



On 17/11/17 17:23, John R Pierce wrote:

On 11/17/2017 12:19 PM, marcelo wrote:

Sorry, I was not exact.
I don't need nor like to change pg_dump. Rather, based on
pg_dump code, I need to develop a daemon which can receive
a TCP message (from a privileged app) containing some
elements: the database to dump, the user under which to do
that, and his password. (My apps are using that same data,
of course, encrypted to the common users).



I would just fork pg_dump to do the actual dump rather than
try and incorporate its source code into your app.










--
--
Scott Mead
Sr. Architect
OpenSCG
http://openscg.com




Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread Scott Mead
On Fri, Nov 17, 2017 at 4:06 PM, marcelo  wrote:

> I need to "emulate" the pg_dump code because of the password prompt. Years
> ago I wrote a program (for the QNX environment) that caught some prompt
> and emulated the standard input. I don't want to do that again.


pg_dump can use an environment variable "PGPASSWORD" upon execution
(actually, all libpq programs can).  You could have a wrapper that sets the
environment variable and then executes pg_dump; this would get you around
that prompt.  Similarly, you could use the .pgpass file.

https://www.postgresql.org/docs/9.5/static/libpq-envars.html
https://www.postgresql.org/docs/9.5/static/libpq-pgpass.html
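
A minimal wrapper sketch (database name, user and paths are just examples):

#!/bin/sh
# hand the password to pg_dump through libpq's PGPASSWORD
PGPASSWORD='s3cret' pg_dump -h localhost -U backup_user -Fc mydb > /backups/mydb.dump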

--Scott



>
>
> On 17/11/17 17:23, John R Pierce wrote:
>
>> On 11/17/2017 12:19 PM, marcelo wrote:
>>
>>> Sorry, I was not exact.
>>> I don't need nor like to change pg_dump. Rather, based on pg_dump code,
>>> I need to develop a daemon which can receive a TCP message (from a
>>> privileged app) containing some elements: the database to dump, the user
>>> under which to do that, and his password. (My apps are using that same data,
>>> of course, encrypted to the common users).
>>>
>>
>>
>> I would just fork pg_dump to do the actual dump rather than try and
>> incorporate its source code into your app.
>>
>>
>>
>
>
>



-- 
--
Scott Mead
Sr. Architect
OpenSCG
http://openscg.com


Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread Ron Johnson


What about the pgpass file?

https://www.postgresql.org/docs/9.2/static/libpq-pgpass.html

On 11/17/2017 03:06 PM, marcelo wrote:
I need to "emulate" the pg_dump code because of the password prompt. Years 
ago I wrote a program (for the QNX environment) that caught some prompt 
and emulated the standard input. I don't want to do that again.


On 17/11/17 17:23, John R Pierce wrote:

On 11/17/2017 12:19 PM, marcelo wrote:

Sorry, I was not exact.
I don't need nor like to change pg_dump. Rather, based on pg_dump code, 
I need to develop a daemon which can receive a TCP message (from a 
privileged app) containing some elements: the database to dump, the user 
under which to do that, and his password. (My apps are using that same 
data, of course, encrypted to the common users).



I would just fork pg_dump to do the actual dump rather than try and 
incorporate its source code into your app.







--
World Peace Through Nuclear Pacification





Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread marcelo

I will give expect a try, but with the source code embedded in my daemon.

On 17/11/17 17:49, Ron Johnson wrote:

On 11/17/2017 02:23 PM, John R Pierce wrote:

On 11/17/2017 12:19 PM, marcelo wrote:

Sorry, I was not exact.
I don't need nor like to change pg_dump. Rather, based on pg_dump 
code, I need to develop a daemon which can receive a TCP message 
(from a privileged app) containing some elements: the database to 
dump, the user under which to do that, and his password. (My apps are 
using that same data, of course, encrypted to the common users).



I would just fork pg_dump to do the actual dump rather than try and 
incorporate its source code into your app.




Specifically, do you mean to write a simple daemon which forks pg_dump 
at the appropriate time?









Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread Vick Khera
pg_dump is a libpq client, and thus will read the environment for a
variable with the password. No need to emulate any command-prompt TTY
operations.

On Fri, Nov 17, 2017 at 4:06 PM, marcelo  wrote:

> I need to "emulate" the pg_dump code because of the password prompt. Years
> ago I wrote a program (for the QNX environment) that caught some prompt
> and emulated the standard input. I don't want to do that again.
>
> On 17/11/17 17:23, John R Pierce wrote:
>
>> On 11/17/2017 12:19 PM, marcelo wrote:
>>
>>> Sorry, I was not exact.
>>> I don't need nor like to change pg_dump. Rather, based on pg_dump code,
>>> I need to develop a daemon which can receive a TCP message (from a
>>> privileged app) containing some elements: the database to dump, the user
>>> under which to do that, and his password. (My apps are using that same data,
>>> of course, encrypted to the common users).
>>>
>>
>>
>> I would just fork pg_dump to do the actual dump rather than try and
>> incorporate its source code into your app.
>>
>>
>>
>
>
>


Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread marcelo
I need to "emulate" the pg_dump code because of the password prompt. Years 
ago I wrote a program (for the QNX environment) that caught some prompt 
and emulated the standard input. I don't want to do that again.


On 17/11/17 17:23, John R Pierce wrote:

On 11/17/2017 12:19 PM, marcelo wrote:

Sorry, I was not exact.
I don't need nor like to change pg_dump. Rather, based on pg_dump 
code, I need to develop a daemon which can receive a TCP message 
(from a privileged app) containing some elements: the database to 
dump, the user under which to do that, and his password. (My apps are 
using that same data, of course, encrypted to the common users).



I would just fork pg_dump to do the actual dump rather than try and 
incorporate its source code into your app.









Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread Ron Johnson

On 11/17/2017 02:23 PM, John R Pierce wrote:

On 11/17/2017 12:19 PM, marcelo wrote:

Sorry, I was not exact.
I don't need nor like to change pg_dump. Rather, based on pg_dump code, I 
need to develop a daemon which can receive a TCP message (from a 
privileged app) containing some elements: the database to dump, the user 
under which to do that, and his password. (My apps are using that same data, 
of course, encrypted to the common users).



I would just fork pg_dump to do the actual dump rather than try and 
incorporate its source code into your app.




Specifically, do you mean to write a simple daemon which forks pg_dump at 
the appropriate time?



--
World Peace Through Nuclear Pacification





Re: [GENERAL] unnest-like pg_stats.most_common_values and pg_stats.most_common_freqs

2017-11-17 Thread Justin Pryzby
On Fri, Nov 17, 2017 at 09:32:23PM +0100, Luca Ferrari wrote:
> Hi all,
> maybe this is trivial, but I need a hint on a way to see a table form
> of the MCVs and MCFs out of pg_stats with a query. Is it possible to
> get a set of rows each with a most common value in one column and the
> corresponding value in the other? (assuming I can cast the array
> of MCVs to the right type of array)

I think you want something like this ?

postgres=# SELECT schemaname, tablename, attname, 
unnest(histogram_bounds::text::text[]), histogram_bounds FROM pg_stats LIMIT 9;
 pg_catalog | pg_pltemplate | tmplname| plperl  | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplname| plperlu | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplname| plpgsql | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplname| plpython2u  | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplname| plpython3u  | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplname| plpythonu   | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplname| pltcl   | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplname| pltclu  | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplhandler | plperl_call_handler | 
{plperl_call_handler,plperlu_call_handler,plpgsql_call_handler,plpython2_call_handler,plpython3_call_handler,plpython_call_handler,pltcl_cal
l_handler,pltclu_call_handler}

Justin




[GENERAL] unnest-like pg_stats.most_common_values and pg_stats.most_common_freqs

2017-11-17 Thread Luca Ferrari
Hi all,
maybe this is trivial, but I need a hint on a way to see a table form
of the MCVs and MCFs out of pg_stats with a query. Is it possible to
get a set of rows each with a most common value in one column and the
corresponding value in the other? (assuming I can cast the array
of MCVs to the right type of array)

Thanks,
Luca




Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread John R Pierce

On 11/17/2017 12:19 PM, marcelo wrote:

Sorry, I was not exact.
I don't need nor like to change pg_dump. Rather, based on pg_dump 
code, I need to develop a daemon which can receive a TCP message (from 
a privileged app) containing some elements: the database to dump, the 
user under which to do that, and his password. (My apps are using that 
same data, of course, encrypted to the common users).



I would just fork pg_dump to do the actual dump rather than try and 
incorporate its source code into your app.



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread marcelo

Sorry, I was not exact.
I don't need nor like to change pg_dump. Rather, based on pg_dump code, 
I need to develop a daemon which can receive a TCP message (from a 
privileged app) containing some elements: the database to dump, the user 
under which to do that, and his password. (My apps are using that same 
data, of course, encrypted to the common users).

Thank you, Scott.

On 17/11/17 10:49, Scott Mead wrote:



On Fri, Nov 17, 2017 at 7:51 AM, marcelo > wrote:


I would need to do a mild change to pg_dump, working against a 9.4
server on linux.
Which source tree do I need? Have gcc 4.9.2 in my Lubuntu
installation.
TIA


What exactly do you need to change?  Most likely, there is a quick and 
easy fix for whatever you're doing without modifying pg_dump itself.


That being said, if you really want to modify the source, download the 
source tarball: https://www.postgresql.org/ftp/source/











--
--
Scott Mead
Sr. Architect
OpenSCG
http://openscg.com




[GENERAL] explain analyze output with parallel workers - question about meaning of information for explain.depesz.com

2017-11-17 Thread hubert depesz lubaczewski
Hi,

Up to the introduction of parallel execution, when a node in explain
analyze showed "loops=x" with x more than 1, it meant that the "actual
time" had to be multiplied by loops to get the real time spent in the node.

For example, check step 13 in https://explain.depesz.com/s/gNBd

It shows time of 3ms, but loops of 1873, so the actual time is ~ 5600ms.

But with parallel execution it seems to be no longer the case.

For example:
https://explain.depesz.com/s/LTMp
or
https://explain.depesz.com/s/QHRi

It looks like the actual time is really the actual time, and loops is
"worker nodes + 1".

Is that really the case? Should I, for explain.depesz.com, when dealing
with partial* and parallel* nodes, use "loops=1" for calculation of
exclusive/inclusive time? Always? Some other nodes?

or am I missing something in here?

Best regards,

depesz





Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread Scott Mead
On Fri, Nov 17, 2017 at 7:51 AM, marcelo  wrote:

> I would need to do a mild change to pg_dump, working against a 9.4 server
> on linux.
> Which source tree do I need? Have gcc 4.9.2 in my Lubuntu installation.
> TIA


What exactly do you need to change?  Most likely, there is a quick and easy
fix for whatever you're doing without modifying pg_dump itself.

That being said, if you really want to modify the source, download the
source tarball: https://www.postgresql.org/ftp/source/


>
>
>
>
>



-- 
--
Scott Mead
Sr. Architect
OpenSCG
http://openscg.com


[GENERAL] mild modification to pg_dump

2017-11-17 Thread marcelo
I would need to do a mild change to pg_dump, working against a 9.4 
server on linux.

Which source tree do I need? Have gcc 4.9.2 in my Lubuntu installation.
TIA





Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread Tom Lane
Andres Freund  writes:
> It doesn't seem impossible to get into a situation where syslogger is
> the source of the OOM. Just enabling a lot of logging in a workload with
> many large query strings might do it.  So making it less likely to be
> killed might make the problem worse...

Hm, so that's another angle David didn't report on: is it possible that
his workload could have resulted in a very large volume of incomplete
in-progress log messages?

regards, tom lane




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread Andres Freund


On November 16, 2017 7:06:23 PM PST, Tom Lane  wrote:
>Andres Freund  writes:
>> On 2017-11-16 21:39:49 -0500, Tom Lane wrote:
>>> What might be worth thinking about is allowing the syslogger process to
>>> inherit the postmaster's OOM-kill-proofness setting, instead of dropping
>>> down to the same vulnerability as the postmaster's other child processes.
>
>> Hm. I'm a bit scared about that - it doesn't seem that inconceivable
>> that various backends log humongous multi-line messages, leading to
>> syslogger *actually* taking up a fair amount of memory. Note that we're
>> using plain stringinfos that ereport(ERROR) out of memory situations,
>> rather than failing more gracefully.
>
>True, but there's no hard limits on the postmaster's memory consumption
>either ... 

Is there a credible scenario where it'd allocate many gigabytes of memory?

> and if the syslogger does get killed on such a basis, we
>have at the least lost a bunch of log output.  On the whole I think we'd be
>better off trying to prevent OOM kills on the syslogger.  (That doesn't
>preclude other mitigation measures.)

It doesn't seem impossible to get into a situation where syslogger is the 
source of the OOM. Just enabling a lot of logging in a workload with many large 
query strings might do it.  So making it less likely to be killed might make 
the problem worse...

Andres
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread Tom Lane
Andres Freund  writes:
> On 2017-11-16 21:39:49 -0500, Tom Lane wrote:
>> What might be worth thinking about is allowing the syslogger process to
>> inherit the postmaster's OOM-kill-proofness setting, instead of dropping
>> down to the same vulnerability as the postmaster's other child processes.

> Hm. I'm a bit scared about that - it doesn't seem that inconceivable
> that various backends log humongous multi-line messages, leading to
> syslogger *actually* taking up a fair amount of memory. Note that we're
> using plain stringinfos that ereport(ERROR) out of memory situations,
> rather than failing more gracefully.

True, but there's no hard limits on the postmaster's memory consumption
either ... and if the syslogger does get killed on such a basis, we have
at the least lost a bunch of log output.  On the whole I think we'd be
better off trying to prevent OOM kills on the syslogger.  (That doesn't
preclude other mitigation measures.)

regards, tom lane




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread Andres Freund
On 2017-11-16 21:39:49 -0500, Tom Lane wrote:
> > We could work around a situation like that if we made postmaster use a
> > *different* pipe as stderr than the one we're handing to normal
> > backends. If postmaster created a new pipe and closed the read end
> > whenever forking a syslogger, we should get EPIPEs when writing after
> > syslogger died and could fall back to proper stderr or such.
> 
> I think that's nonsense, unfortunately.

Nice phrasing.


> If the postmaster had its own pipe, that would reduce the risk of this
> deadlock because only the postmaster would be filling that pipe, not
> the postmaster and all its other children --- but it wouldn't
> eliminate the risk.

The deadlock happens because postmaster is waiting for syslogger to accept
a message, and syslogger waits for postmaster to restart it. To resolve
the deadlock postmaster needs to not wait for a dead syslogger, even if it
hasn't yet received & processed the SIGCLD - what other postmaster
children do or don't do doesn't matter for resolving that cycle. The
reason postmaster currently blocks on writing to the pipe, instead of
getting EPIPE, is because both ends of the pipe still exist. Which in
turn is the case because we need to be able to restart syslogger without
passing a new file descriptor to all subprocesses.  If postmaster instead
uses a different pipe to write to, it'll not block anymore, instead
getting EPIPE, and can continue towards starting a new syslogger.  So I
don't think the described deadlock exists if we were to apply my proposed
fix.


What this obviously would *not* guarantee is being able to start a new
syslogger, but it seems fairly impossible to guarantee that. So sure,
other processes would still block until syslogger has successfully
restarted - but it's a resolvable situation rather than a hard deadlock,
which the described situation appears to be.


Note that there are plenty of cases where you could run into this even
without being unable to fork new processes. You could, e.g., also run
into this while logging the exit of some other subprocess or such;
there are enough ereports in postmaster.


> I doubt the increase in reliability would be enough to justify the
> extra complexity and cost.

I'm doubtful about that too.


> What might be worth thinking about is allowing the syslogger process to
> inherit the postmaster's OOM-kill-proofness setting, instead of dropping
> down to the same vulnerability as the postmaster's other child processes.
> That presumes that this was an otherwise-unjustified OOM kill, which
> I'm not quite sure of ... but it does seem like a situation that could
> arise from time to time.

Hm. I'm a bit scared about that - it doesn't seem that inconceivable
that various backends log humongous multi-line messages, leading to
syslogger *actually* taking up a fair amount of memory. Note that we're
using plain stringinfos that ereport(ERROR) out of memory situations,
rather than failing more gracefully.

- Andres




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread Tom Lane
Andres Freund  writes:
> On 2017-11-06 15:35:03 -0500, Tom Lane wrote:
>> David Pacheco  writes:
>>> I ran into what appears to be a deadlock in the logging subsystem.  It
>>> looks like what happened was that the syslogger process exited because it
>>> ran out of memory.  But before the postmaster got a chance to handle the
>>> SIGCLD to restart it, it handled a SIGUSR1 to start an autovacuum worker.
>>> That also failed, and the postmaster went to log a message about it, but
>>> it's blocked on the pipe that's normally connected to the syslogger,
>>> presumably because the pipe is full because the syslogger is gone and
>>> hasn't read from it.

>> Ugh.

> I'm somewhat inclined to say that one has to live with this if the
> system is so resource constrainted that processes barely using memory
> get killed.

David's report isn't too clear: did the syslogger process actually run
out of memory and exit of its own volition after an ENOMEM, or did it get
killed by the dreaded OOM killer?  In either case, it's unclear whether
it was really using an excessive amount of memory.  We have not heard
reports suggesting a memory leak in the syslogger, but maybe there is
one under unusual circumstances?

I think you're probably right that the real cause here is the OOM
killer just randomly seizing on the syslogger as a victim process;
although since the syslogger disconnects from shared memory, it's
not very clear why it would score high on the OOM killer's metrics.
The whole thing is definitely odd.

> We could work around a situation like that if we made postmaster use a
> *different* pipe as stderr than the one we're handing to normal
> backends. If postmaster created a new pipe and closed the read end
> whenever forking a syslogger, we should get EPIPEs when writing after
> syslogger died and could fall back to proper stderr or such.

I think that's nonsense, unfortunately.  If the postmaster had its
own pipe, that would reduce the risk of this deadlock because only
the postmaster would be filling that pipe, not the postmaster and
all its other children --- but it wouldn't eliminate the risk.
I doubt the increase in reliability would be enough to justify the
extra complexity and cost.

What might be worth thinking about is allowing the syslogger process to
inherit the postmaster's OOM-kill-proofness setting, instead of dropping
down to the same vulnerability as the postmaster's other child processes.
That presumes that this was an otherwise-unjustified OOM kill, which
I'm not quite sure of ... but it does seem like a situation that could
arise from time to time.

regards, tom lane




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread Michael Paquier
On Fri, Nov 17, 2017 at 11:14 AM, Andres Freund  wrote:
> On 2017-11-17 11:09:56 +0900, Michael Paquier wrote:
>> when redirection_done is switched to true because the first process
>> generating a message to the syslogger pipe needs to open it first if
>> not done yet?
>
> I can't follow. The syslogger pipe is created when the first syslogger
> is started (before it's forked!). Which happens before other processes
> are created, because they all need to inherit that file descriptor.

Ah, OK. I didn't recall this dependency. Sorry for the confusion.
-- 
Michael




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread Andres Freund
On 2017-11-17 11:09:56 +0900, Michael Paquier wrote:
> On Fri, Nov 17, 2017 at 10:50 AM, Andres Freund  wrote:
> > On 2017-11-06 15:35:03 -0500, Tom Lane wrote:
> >> David Pacheco  writes:
> >> > I ran into what appears to be a deadlock in the logging subsystem.  It
> >> > looks like what happened was that the syslogger process exited because it
> >> > ran out of memory.  But before the postmaster got a chance to handle the
> >> > SIGCLD to restart it, it handled a SIGUSR1 to start an autovacuum worker.
> >> > That also failed, and the postmaster went to log a message about it, but
> >> > it's blocked on the pipe that's normally connected to the syslogger,
> >> > presumably because the pipe is full because the syslogger is gone and
> >> > hasn't read from it.
> >>
> >> Ugh.
> >
> > I'm somewhat inclined to say that one has to live with this if the
> > system is so resource constrainted that processes barely using memory
> > get killed.
> >
> > We could work around a situation like that if we made postmaster use a
> > *different* pipe as stderr than the one we're handing to normal
> > backends. If postmaster created a new pipe and closed the read end
> > whenever forking a syslogger, we should get EPIPEs when writing after
> > syslogger died and could fall back to proper stderr or such.
> 
> I don't have the code on top of my mind, but isn't a custom fd causing
> a small penalty

Sure, there's some minor overhead because every process would need to
close another fd after forking.


> when redirection_done is switched to true because the first process
> generating a message to the syslogger pipe needs to open it first if
> not done yet?

I can't follow. The syslogger pipe is created when the first syslogger
is started (before it's forked!). Which happens before other processes
are created, because they all need to inherit that file descriptor.


> So you'd need proper locking to save from race conditions.

I completely fail to see why this'd be the case. All I'm talking about
is using another pipe between syslogger and postmaster than between
other-processes and syslogger.

Greetings,

Andres Freund




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread Michael Paquier
On Fri, Nov 17, 2017 at 10:50 AM, Andres Freund  wrote:
> On 2017-11-06 15:35:03 -0500, Tom Lane wrote:
>> David Pacheco  writes:
>> > I ran into what appears to be a deadlock in the logging subsystem.  It
>> > looks like what happened was that the syslogger process exited because it
>> > ran out of memory.  But before the postmaster got a chance to handle the
>> > SIGCLD to restart it, it handled a SIGUSR1 to start an autovacuum worker.
>> > That also failed, and the postmaster went to log a message about it, but
>> > it's blocked on the pipe that's normally connected to the syslogger,
>> > presumably because the pipe is full because the syslogger is gone and
>> > hasn't read from it.
>>
>> Ugh.
>
> I'm somewhat inclined to say that one has to live with this if the
> system is so resource constrainted that processes barely using memory
> get killed.
>
> We could work around a situation like that if we made postmaster use a
> *different* pipe as stderr than the one we're handing to normal
> backends. If postmaster created a new pipe and closed the read end
> whenever forking a syslogger, we should get EPIPEs when writing after
> syslogger died and could fall back to proper stderr or such.

I don't have the code on top of my mind, but isn't a custom fd causing
a small penalty when redirection_done is switched to true because the
first process generating a message to the syslogger pipe needs to open
it first if not done yet? So you'd need proper locking to save from
race conditions. Or is the first redirected message always
generated by the postmaster or the syslogger? I don't recall that this
is actually true.
-- 
Michael




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread Andres Freund
On 2017-11-06 15:35:03 -0500, Tom Lane wrote:
> David Pacheco  writes:
> > I ran into what appears to be a deadlock in the logging subsystem.  It
> > looks like what happened was that the syslogger process exited because it
> > ran out of memory.  But before the postmaster got a chance to handle the
> > SIGCLD to restart it, it handled a SIGUSR1 to start an autovacuum worker.
> > That also failed, and the postmaster went to log a message about it, but
> > it's blocked on the pipe that's normally connected to the syslogger,
> > presumably because the pipe is full because the syslogger is gone and
> > hasn't read from it.
> 
> Ugh.

I'm somewhat inclined to say that one has to live with this if the
system is so resource constrainted that processes barely using memory
get killed.

We could work around a situation like that if we made postmaster use a
*different* pipe as stderr than the one we're handing to normal
backends. If postmaster created a new pipe and closed the read end
whenever forking a syslogger, we should get EPIPEs when writing after
syslogger died and could fall back to proper stderr or such.

Greetings,

Andres Freund




Re: [GENERAL] postmaster deadlock while logging after syslogger exited

2017-11-16 Thread David Pacheco
On Mon, Nov 6, 2017 at 12:35 PM, Tom Lane  wrote:

> David Pacheco  writes:
> > I ran into what appears to be a deadlock in the logging subsystem.  It
> > looks like what happened was that the syslogger process exited because it
> > ran out of memory.  But before the postmaster got a chance to handle the
> > SIGCLD to restart it, it handled a SIGUSR1 to start an autovacuum worker.
> > That also failed, and the postmaster went to log a message about it, but
> > it's blocked on the pipe that's normally connected to the syslogger,
> > presumably because the pipe is full because the syslogger is gone and
> > hasn't read from it.
>
> Ugh.
>


Should I file a bug on this issue?

Thanks,
Dave


Re: [GENERAL] pg_restore load data

2017-11-16 Thread Ron Johnson

On 11/16/2017 03:13 PM, bricklen wrote:


On Thu, Nov 16, 2017 at 1:07 PM, Ron Johnson > wrote:


v9.2.7  (Yes, I know, it's old.  Nothing I can do about it.)

During a "whole database" restore using pg_restore of a custom dump,
when is the data actually loaded?  I've looked in the list output and
don't see any "load" statements.


Look for COPY lines, that's how the data is restored.


$ pg_restore -l CDSHA01.dump > CDSHA01.txt
$ grep --color -i copy CDSHA01.txt
$ echo $?
1

There are lots of "restoring data", though.  I should have thought to grep 
for that.


One thing that puzzles me is how fast the tables (even large ones) loaded 
compared to how slow the pg_dump -Fc was.  Granted, I'm running -j4 but 
still, these were some really large, poorly compressible tables (the dump 
file was about as big as du -mc data/base).


--
World Peace Through Nuclear Pacification



Re: [GENERAL] pg_restore load data

2017-11-16 Thread melvin6925
The command is COPY, not load.


Sent via the Samsung Galaxy S® 6, an AT&T 4G LTE smartphone
 Original message From: Ron Johnson  
Date: 11/16/17  16:07  (GMT-05:00) To: pgsql-general@postgresql.org Subject: 
[GENERAL] pg_restore load data 
Hi,

v9.2.7  (Yes, I know, it's old.  Nothing I can do about it.)

During a "whole database" restore using pg_restore of a custom dump, when is 
the data actually loaded?  I've looked in the list output and don't see any 
"load" statements.

Thanks

-- 
World Peace Through Nuclear Pacification





Re: [GENERAL] pg_restore load data

2017-11-16 Thread bricklen
On Thu, Nov 16, 2017 at 1:07 PM, Ron Johnson  wrote:

> v9.2.7  (Yes, I know, it's old.  Nothing I can do about it.)
>
> During a "whole database" restore using pg_restore of a custom dump, when
> is the data actually loaded?  I've looked in the list output and don't see
> any "load" statements.
>

Look for COPY lines, that's how the data is restored.
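
In the SQL emitted by pg_restore (or in a plain-format dump) the data
sections look roughly like this, with table and column names being just
examples:

COPY public.mytable (id, name) FROM stdin;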


[GENERAL] pg_restore load data

2017-11-16 Thread Ron Johnson

Hi,

v9.2.7  (Yes, I know, it's old.  Nothing I can do about it.)

During a "whole database" restore using pg_restore of a custom dump, when is 
the data actually loaded?  I've looked in the list output and don't see any 
"load" statements.


Thanks

--
World Peace Through Nuclear Pacification





Re: [GENERAL] expanded auto and header linestyle double

2017-11-16 Thread Pavel Stehule
2017-11-16 21:16 GMT+01:00 Matt Zagrabelny :

> Thanks for the reply, Pavel!
>
> On Thu, Nov 16, 2017 at 1:01 AM, Pavel Stehule 
> wrote:
>
>> Hi
>>
>> 2017-11-15 23:37 GMT+01:00 Matt Zagrabelny :
>>
>>> Greetings,
>>>
>>> Using PG 10.1.
>>>
>>> In my .psqlrc I have:
>>>
>>> \x auto
>>> \pset linestyle 'unicode'
>>> \pset unicode_header_linestyle double
>>>
>>> and when the output is expanded, I do not see a double line for the
>>> first record, but I do for all subsequent records. For example:
>>>
>>> % select  * from artist;
>>> ─[ RECORD 1 ]─
>>> artistid │ 1
>>> name │ AC/DC
>>> ═[ RECORD 2 ]═
>>> artistid │ 2
>>> name │ Accept
>>> ═[ RECORD 3 ]═
>>> artistid │ 3
>>> name │ Aerosmith
>>> ═[ RECORD 4 ]═
>>> artistid │ 4
>>> name │ Alanis Morissette
>>> ═[ RECORD 5 ]═
>>> artistid │ 5
>>>
>>> I would like to have the initial "RECORD 1" line have the same "double"
>>> linestyle as the other records.
>>>
>>> Am I missing a config item?
>>>
>>
>> yes - it is border line
>>
>> use \pset border 2
>>
>>
> Hmmm I didn't use the "border" setting.
>
>
>> and you understand
>>
>> you are missing
>>
>> \pset unicode_border_linestyle double
>>
>
> But I did use the above setting.
>
> So my .psqlrc looks like:
>
> \pset linestyle 'unicode'
> \pset unicode_border_linestyle double
> \pset unicode_header_linestyle double
>
> ═[ RECORD 1 ]
> artistid │ 1
> name │ AC/DC
> ═[ RECORD 2 ]
> artistid │ 2
> name │ Accept
> ═[ RECORD 3 ]
> artistid │ 3
> name │ Aerosmith
>
> Thanks for helping me get it corrected.
>
> It still feels strange that when using "expanded" display the first record
> separator would be categorized under "border_linestyle" and the remaining
> record separators would be categorized under "header_linestyle".
>

There is some simplification - this line is header and together border. It
has sense when you use border 2

Regards

Pavel


> Cheers!
>
> -m
>


Re: [GENERAL] expanded auto and header linestyle double

2017-11-16 Thread Matt Zagrabelny
Thanks for the reply, Pavel!

On Thu, Nov 16, 2017 at 1:01 AM, Pavel Stehule 
wrote:

> Hi
>
> 2017-11-15 23:37 GMT+01:00 Matt Zagrabelny :
>
>> Greetings,
>>
>> Using PG 10.1.
>>
>> In my .psqlrc I have:
>>
>> \x auto
>> \pset linestyle 'unicode'
>> \pset unicode_header_linestyle double
>>
>> and when the output is expanded, I do not see a double line for the first
>> record, but I do for all subsequent records. For example:
>>
>> % select  * from artist;
>> ─[ RECORD 1 ]─
>> artistid │ 1
>> name │ AC/DC
>> ═[ RECORD 2 ]═
>> artistid │ 2
>> name │ Accept
>> ═[ RECORD 3 ]═
>> artistid │ 3
>> name │ Aerosmith
>> ═[ RECORD 4 ]═
>> artistid │ 4
>> name │ Alanis Morissette
>> ═[ RECORD 5 ]═
>> artistid │ 5
>>
>> I would like to have the initial "RECORD 1" line have the same "double"
>> linestyle as the other records.
>>
>> Am I missing a config item?
>>
>
> yes - it is border line
>
> use \pset border 2
>
>
Hmmm I didn't use the "border" setting.


> and you understand
>
> you are missing
>
> \pset unicode_border_linestyle double
>

But I did use the above setting.

So my .psqlrc looks like:

\pset linestyle 'unicode'
\pset unicode_border_linestyle double
\pset unicode_header_linestyle double

═[ RECORD 1 ]
artistid │ 1
name │ AC/DC
═[ RECORD 2 ]
artistid │ 2
name │ Accept
═[ RECORD 3 ]
artistid │ 3
name │ Aerosmith

Thanks for helping me get it corrected.

It still feels strange that when using "expanded" display the first record
separator would be categorized under "border_linestyle" and the remaining
record separators would be categorized under "header_linestyle".

Cheers!

-m


Re: [GENERAL] Build in function to verify email addresses

2017-11-16 Thread Michael Nolan
On Thu, Nov 16, 2017 at 1:56 AM, Nick Dro  wrote:

> I believe that every information system needs to send emails.
> Currently PostgreSQL doesn't have a function which gets TEXT and returns
> true if it's a valid email address (x...@yyy.com / .co.ZZ)
> Do you believe such a function should exist in PostgreSQL, or is it best
> to let every user implement his own function?
>

There's a world of difference between an email address that is well-formed
and one that actually works.

In the systems I administer there's a lot of time spent dealing with
bounced mail to make sure that the email addresses we have actually reach
someone, hopefully the intended target.  And in the US, bulk emailers also
have to deal with the CAN-SPAM act, which specifies procedures that must be
in place to allow easy administrative options to remove one's email address
from mailing lists.

Procedures to verify that an email address works and to administer its use
under rules like CAN-SPAM cannot exist solely within the database itself.
And as others have noted, what makes for a 'well-formed' email address has
always been a bit complicated.
--
Mike Nolan


Re: [GENERAL] Build in function to verify email addresses

2017-11-16 Thread Andreas Joseph Krogh
On Thursday 16 November 2017 at 09:05:00, Pavel Stehule <pavel.steh...@gmail.com> wrote:

> 2017-11-16 8:56 GMT+01:00 Nick Dro:
>> I believe that every information system needs to send emails.
>> Currently PostgreSQL doesn't have a function which gets TEXT and returns
>> true if it's a valid email address (x...@yyy.com / .co.ZZ)
>> Do you believe such a function should exist in PostgreSQL, or is it best
>> to let every user implement his own function?
>
> I don't think this functionality should be in upstream - but it is a good
> use case for an extension, placing it in PGXN or the PostgreSQL community
> repository.
>
> Postgres has good regexp support and this case can be implemented by one
> regexp.
>
> you can use PLPerlu and use some modules from CPAN
>
> http://search.cpan.org/~rjbs/Email-Valid-1.202/lib/Email/Valid.pm
>
> Regards
>
> Pavel

In general, I see no reason for a modern RDBMS not to provide an
email datatype. IMV that's no different from other types which also could
have been plain text but are convenient to have datatypes for.
Being an open-source project, I guess one must show initiative and start a
discussion on -hackers to see what interest there is in having one in core.
I for one hope there will be.

--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Re: [GENERAL] Build in function to verify email addresses

2017-11-16 Thread bto...@computer.org


- Original Message -


From: "Nick Dro"  
To: pgsql-general@postgresql.org 
Sent: Thursday, November 16, 2017 2:56:42 AM 
Subject: [GENERAL] Build in function to verify email addresses 

I believe that every information system needs to send emails. 
Currently PostgreSQL doesn't have a function which gets TEXT and returns true if 
it's a valid email address (x...@yyy.com / .co.ZZ) 
Do you believe such a function should exist in PostgreSQL, or is it best to let 
every user implement his own function? 




I don't mean to sound snarky, but did you even try to google search? 

https://pgxn.org/dist/email/ 

https://github.com/asotolongo/email 

or 

https://github.com/petere/pgemailaddr 



-- B 






Re: [GENERAL] Build in function to verify email addresses

2017-11-16 Thread Chris Withers
I'm pretty sure this isn't as simple as you think it is; I'd suggest 
having a good read of:


https://stackoverflow.com/a/201378/216229

Chris


On 16/11/2017 07:56, Nick Dro wrote:

I believe that every information system needs to send emails.
Currently PostgreSQL doesn't have a function which gets TEXT and 
returns true if it's a valid email address (x...@yyy.com / .co.ZZ)
Do you believe such a function should exist in PostgreSQL, or is it 
best to let every user implement his own function?




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Build in function to verify email addresses

2017-11-16 Thread Pavel Stehule
Hi

2017-11-16 8:56 GMT+01:00 Nick Dro :

> I believe that every information system needs to send emails.
> Currently PostgreSQL doesn't have a function which gets TEXT and returns
> true if it's a valid email address (x...@yyy.com / .co.ZZ)
> Do you believe such a function should exist in PostgreSQL, or is it best
> to let every user implement his own function?
>

I don't think this functionality should be in upstream - but it is a good
use case for an extension, placing it in PGXN or the PostgreSQL community
repository.

Postgres has good regexp support and this case can be implemented by one
regexp.
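
(A minimal sketch of such a check - the pattern is deliberately simplistic
and rejects some addresses that RFC 5322 allows:

    CREATE DOMAIN email_addr AS text
      CHECK (VALUE ~ '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$');
)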

you can use PLPerlu and use some modules from CPAN

http://search.cpan.org/~rjbs/Email-Valid-1.202/lib/Email/Valid.pm

Regards

Pavel


[GENERAL] Build in function to verify email addresses

2017-11-15 Thread Nick Dro
I believe that every information system needs to send emails.
Currently PostgreSQL doesn't have a function which gets TEXT and returns true if it's a valid email address (x...@yyy.com / .co.ZZ)
Do you believe such a function should exist in PostgreSQL, or is it best to let every user implement his own function?

Re: [GENERAL] expanded auto and header linestyle double

2017-11-15 Thread Pavel Stehule
Hi

2017-11-15 23:37 GMT+01:00 Matt Zagrabelny :

> Greetings,
>
> Using PG 10.1.
>
> In my .psqlrc I have:
>
> \x auto
> \pset linestyle 'unicode'
> \pset unicode_header_linestyle double
>
> and when the output is expanded, I do not see a double line for the first
> record, but I do for all subsequent records. For example:
>
> % select  * from artist;
> ─[ RECORD 1 ]─
> artistid │ 1
> name │ AC/DC
> ═[ RECORD 2 ]═
> artistid │ 2
> name │ Accept
> ═[ RECORD 3 ]═
> artistid │ 3
> name │ Aerosmith
> ═[ RECORD 4 ]═
> artistid │ 4
> name │ Alanis Morissette
> ═[ RECORD 5 ]═
> artistid │ 5
>
> I would like to have the initial "RECORD 1" line have the same "double"
> linestyle as the other records.
>
> Am I missing a config item?
>

yes - it is border line

use \pset border 2

and you understand

you are missing

\pset unicode_border_linestyle double

Regards

Pavel


> Thanks!
>
> -m
>


Re: [GENERAL] pg_basebackup running from a remote machine

2017-11-15 Thread Jeff Janes
On Tue, Nov 14, 2017 at 8:28 AM, Laurenz Albe 
wrote:

> rakeshkumar464 wrote:
> > If pg_basebackup is run from a remote machine with compress option
> --gzip ,
> > compress level 9,
> > will the compression occur prior to the data being sent on the network or
> > after it has been received
> > at the remote machine.
>
> That only means that the output TAR file will be compressed; it has
> nothing to do with the data transferred from the server.
>
> If you want to compress the data sent over the network, use
> pg_basebackup over an SSL connection with SSL compression enabled.
>

But ssl compression is disabled by default on most systems, and not even
supported at all on many without doing a custom compilation of the ssl
library.

Cheers,

Jeff


Re: [GENERAL] Setting up replication slave on remote high latency host

2017-11-15 Thread Rory Falloon
Thank you for that. Back to the drawing board!

On Wed, Nov 15, 2017 at 9:30 PM, John R Pierce  wrote:

> On 11/15/2017 6:02 PM, Rory Falloon wrote:
>
>>
>> Right now I am trying to dump the database, gzip, move across, and import
>> into the new slave (which is configured as a master to perform the initial
>> setup). Ideally I do this dump, move and import during a period of
>> inactivity on the master so the new server will come up and immediately be
>> able to catch up on replication due to lack of activity. However, I have
>> been importing the current db as a test and after 90 minutes it seems to
>> have only got 2/3 of the way. I am not confident this will work but it
>> seems like the most efficient way to start.
>>
>
>
> you can't use pg_dump to create a slave, as it won't have the same
> timeline.
>
> I would use pg_basebackup, but in general streaming replication over a
> high latency erratic link will never work real well.
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Setting up replication slave on remote high latency host

2017-11-15 Thread John R Pierce

On 11/15/2017 6:02 PM, Rory Falloon wrote:


Right now I am trying to dump the database, gzip, move across, and 
import into the new slave (which is configured as a master to perform 
the initial setup). Ideally I do this dump, move and import during a 
period of inactivity on the master so the new server will come up and 
immediately be able to catch up on replication due to lack of 
activity. However, I have been importing the current db as a test and 
after 90 minutes it seems to have only got 2/3 of the way. I am not 
confident this will work but it seems like the most efficient way to 
start.



you can't use pg_dump to create a slave, as it won't have the same timeline.

I would use pg_basebackup, but in general streaming replication over a 
high latency erratic link will never work real well.
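
(If you do try it, a rough sketch on a recent 9.x - host, user and paths are
illustrative only:

    pg_basebackup -h master-host -U repl_user -D /var/lib/pgsql/data -X stream -R -P

where -R writes a starter recovery.conf for the new slave.)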



--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Setting up replication slave on remote high latency host

2017-11-15 Thread Rory Falloon
Hi,

Thoughts and opinions on this please -

I have a db (data dir is 90gb) that I am trying to setup on a replication
slave. The slave is on a host which maintains latency over 300ms at all
times (wan link).

Other times I have done this setup, I have simply rsync'ed the data dir to
another host, set config, ran rsync again and fired up the slave. This
works well.

However, my bandwidth to the host in question fluctuates between 800k/sec
and 3MB/sec. Performing this initial rsync and then having to rsync again if
the replication slave drops out due to network latency is not something I
think is going to work in this situation.

Right now I am trying to dump the database, gzip, move across, and import
into the new slave (which is configured as a master to perform the initial
setup). Ideally I do this dump, move and import during a period of
inactivity on the master so the new server will come up and immediately be
able to catch up on replication due to lack of activity. However, I have
been importing the current db as a test and after 90 minutes it seems to
have only got 2/3 of the way. I am not confident this will work but it
seems like the most efficient way to start.

Have I missed anything here?

Now, assuming I get the slave up, how best can I mitigate the slave
dropping out because of latency, and help it recover? Increasing the
number of retained WAL segments would be the best way, correct?
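
(Concretely, I was thinking of something like this in the master's
postgresql.conf - the figure is just a guess:

    wal_keep_segments = 512    # 512 x 16 MB segments, roughly 8 GB of retained WAL

or is a WAL archive plus a restore_command on the slave the safer fallback?)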

Thanks,


Re: [GENERAL] PGPool in Master-Master mode, is it possible?

2017-11-15 Thread Tatsuo Ishii
I am not familiar with swarm clusters or repmgr, so I wonder why you
wouldn't be able to reach the goal you are trying to achieve without
repmgr. Can you please elaborate?

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp

> I am also interested in this set-up.
> My idea is to run pgpool in a docker swarm cluster and to have two instances 
> of pgpool on different nodes (but not in watchdog mode). In this case it is 
> not possible to give the responsibility to pgpool to do the failover, otherwise 
> both instances would try to do the failover (they don't know about each 
> other, no watchdog mode). So I have to set the failover_command parameter to an 
> empty string and let repmgr do the automatic failover. I have tested this and I 
> believe it is OK: in case of master failure repmgr does the promote; pgpool 
> will not do the failover but will constantly try to reconnect to a new master, 
> and as soon as repmgr has done the promotion it is OK.
> The important thing is that when pgpool starts I have to read the state of 
> the cluster in the repmgr metadata table (repl_nodes) and generate the 
> /tmp/pgpool_status file based on that. (I even have to do a promote_node in 
> the edge case that the master is down and was brought down when pgpool also 
> was down.)
> But the inconvenience is that if pgpool fails, let's say because the docker 
> node of pgpool is stopped, then docker swarm restarts the service on another 
> node, but this can take a few seconds. So I would prefer to have multiple 
> pgpool instances. I have not had the time to test such a scenario, but I am 
> interested to know if it is possible.
> Pierre 
> 
> On Wednesday, November 15, 2017, 11:02:31 AM GMT+1, Vikas Sharma 
>  wrote:  
>  
> Thank you, Tatsuo, for the reply. I will post this in the list you mentioned.
> 
> By Master-Master, I meant two pgpool servers, both active and accepting 
> connections from the application; if one pgpool server becomes unavailable, 
> the other still accepts connections. At an ideal time, both pgpool instances, 
> one on each pgpool server, are working actively. The database in this case 
> will be 1 master and 2 slaves, replicated by streaming replication.
> 
> Regards
> VS
> 
> On 14 November 2017 at 22:56, Tatsuo Ishii  wrote:
> 
>> Hello There,
>>
>> I need to setup two PGPool Servers in Master-Master mode, First thing I
>> want to know, is it possible?
>>
>> I know we can setup 2 PGPool servers in master-slave mode using watchdog.
>> http://www.pgpool.net/pgpool- web/contrib_docs/watchdog_ 
>> master_slave_3.3/en.html
>>
>> Could anyone please enlighten me and any workaround for this?
>>
>> Regards
>> VS
> 
>> I need to setup two PGPool Servers in Master-Master mode, First thing I
>> want to know, is it possible?
>>
>> I know we can setup 2 PGPool servers in master-slave mode using watchdog.
>> http://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/en.html
>>
>> Could anyone please enlighten me and any workaround for this?
> 
> This is not an appropriate list. You should go to the Pgpool mailing
> list:
> 
> https://www.pgpool.net/mailman/listinfo/pgpool-general
> 
> Anyway... I am not sure what you mean by "Master-Master mode" but if
> that means you want to issue write queries by connecting to any of
> Pgpool-II port, it's already possible in the set up you have created.
> 
> Best regards,
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese: http://www.sraoss.co.jp
> 
> 
>   


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] expanded auto and header linestyle double

2017-11-15 Thread Matt Zagrabelny
Greetings,

Using PG 10.1.

In my .psqlrc I have:

\x auto
\pset linestyle 'unicode'
\pset unicode_header_linestyle double

and when the output is expanded, I do not see a double line for the first
record, but I do for all subsequent records. For example:

% select  * from artist;
─[ RECORD 1 ]─
artistid │ 1
name │ AC/DC
═[ RECORD 2 ]═
artistid │ 2
name │ Accept
═[ RECORD 3 ]═
artistid │ 3
name │ Aerosmith
═[ RECORD 4 ]═
artistid │ 4
name │ Alanis Morissette
═[ RECORD 5 ]═
artistid │ 5

I would like to have the initial "RECORD 1" line have the same "double"
linestyle as the other records.

Am I missing a config item?

Thanks!

-m


Re: [GENERAL] problem changing jsonb attribute to null value

2017-11-15 Thread Dmitry Dolgov
> On 15 November 2017 at 22:54, RODRIGUEZ CORTES MARIO IGNACIO <
ignacio.cor...@inegi.org.mx> wrote:
>
> I have a problem with a record in a jsonb-typed table: I'm trying to
> change the value of an attribute to null, but it sets the whole
> content to null and not just the value
>
> prueba=# select jsonb_set('{"v" : 0}'::jsonb, '{"v"}', to_jsonb(null));
> ERROR:  could not determine the polymorphic type because the input
> type is «unknown»
>
> defining the null value as a text type:
>
> prueba=# select jsonb_set('{"v" : 0}'::jsonb, '{"v"}',
> to_jsonb(null::text));
>  jsonb_set
> ---
>
> (1 row)
>
> it leaves the whole record null, when I expect it to keep the null
> value in attribute "v": {"v": null}

I think something like this should work:

=# select jsonb_set('{"v" : 0}'::jsonb, '{"v"}', 'null'::jsonb);
  jsonb_set
-
 {"v": null}
(1 row)

is that what you want?


[GENERAL] problem changing jsonb attribute to null value

2017-11-15 Thread RODRIGUEZ CORTES MARIO IGNACIO
Hello:

I have a problem with a record in a jsonb-typed table: I'm trying to
change the value of an attribute to null, but it sets the whole
content to null, not just the value. First, an example of how to
define the null attribute:

prueba=# select jsonb_build_object('v', null);
 jsonb_build_object 
--------------------
 {"v": null}
(1 row)

this jsonb record is correct; then, when changing the value with
jsonb_set to some value of type jsonb, it works correctly:

prueba=# select jsonb_set('{"v" : 0}'::jsonb, '{"v"}', to_jsonb(1));
 jsonb_set 
-----------
 {"v": 1}
(1 row)

but if I try to put a null value in the "v" attribute, apparently it
is not allowed:

prueba=# select jsonb_set('{"v" : 0}'::jsonb, '{"v"}', to_jsonb(null));
ERROR:  could not determine the polymorphic type because the input
type is «unknown»

defining the null value as a text type:

prueba=# select jsonb_set('{"v" : 0}'::jsonb, '{"v"}',
to_jsonb(null::text));
 jsonb_set 
-----------
 
(1 row)

it leaves the whole record null, when I expect it to keep the null
value in attribute "v": {"v": null}


thank you for your help

Regards
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_basebackup running from a remote machine

2017-11-15 Thread rakeshkumar464
Does pg_basebackup on a remote machine follow the standard libpq protocol?  I
am not able to force it to use ssl, despite having an entry in pg_hba.conf:

hostnossl all all all reject

From the same remote machine, psql is forced to use ssl.

Makes me wonder whether pg_basebackup has a different protocol.



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query on pg_settings view

2017-11-15 Thread Stephen Frost
Greetings,

* Laurenz Albe (laurenz.a...@cybertec.at) wrote:
> Ronen Nofar wrote:
> > I have a weird case when running a query on the pg_settings view.
> > I have two users: the first one is the default user - postgres, which is a
> > superuser - and the other one is a role I created, called test_role, which
> > is not a superuser.
> > When I run a select on pg_settings with these two users I receive different
> > results.
> 
> That is intentional, because some settings should only be
> visible for superusers, for example everything that has to
> do with the operating system (location of configuration file
> or socket directories).

This isn't quite correct any longer- with PG10, we have a default role
called 'pg_read_all_settings' which can be GRANT'd to other roles to
allow viewing of all settings, even those previously restricted to
superuser-only.
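
For example (using the role name from upthread, purely to illustrate):

    GRANT pg_read_all_settings TO test_role;

after which test_role sees the previously superuser-only settings.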

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [GENERAL] Query on pg_settings view

2017-11-15 Thread Laurenz Albe
Ronen Nofar wrote:
> I have a weird case when running a query on the pg_settings view.
> I have two users: the first one is the default user - postgres, which is a
> superuser - and the other one is a role I created, called test_role, which
> is not a superuser.
> When I run a select on pg_settings with these two users I receive different
> results.

That is intentional, because some settings should only be
visible for superusers, for example everything that has to
do with the operating system (location of configuration file
or socket directories).

Yours,
Laurenz Albe


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PGPool in Master-Master mode, is it possible?

2017-11-15 Thread Vikas Sharma
Thank you, Tatsuo, for the reply. I will post this in the list you mentioned.

By Master-Master, I meant two pgpool servers, both active and accepting
connections from the application; if one pgpool server becomes unavailable,
the other still accepts connections. At an ideal time, both pgpool instances,
one on each pgpool server, are working actively. The database in this case
will be 1 master and 2 slaves, replicated by streaming replication.

Regards
VS

On 14 November 2017 at 22:56, Tatsuo Ishii  wrote:

> > Hello There,
> >
> > I need to setup two PGPool Servers in Master-Master mode, First thing I
> > want to know, is it possible?
> >
> > I know we can setup 2 PGPool servers in master-slave mode using watchdog.
> > http://www.pgpool.net/pgpool-web/contrib_docs/watchdog_
> master_slave_3.3/en.html
> >
> > Could anyone please enlighten me and any workaround for this?
> >
> > Regards
> > VS
>
> > I need to setup two PGPool Servers in Master-Master mode, First thing I
> > want to know, is it possible?
> >
> > I know we can setup 2 PGPool servers in master-slave mode using watchdog.
> > http://www.pgpool.net/pgpool-web/contrib_docs/watchdog_
> master_slave_3.3/en.html
> >
> > Could anyone please enlighten me and any workaround for this?
>
> This is not an appropriate list. You should go to the Pgpool mailing
> list:
>
> https://www.pgpool.net/mailman/listinfo/pgpool-general
>
> Anyway... I am not sure what you mean by "Master-Master mode" but if
> that means you want to issue write queries by connecting to any of
> Pgpool-II port, it's already possible in the set up you have created.
>
> Best regards,
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese:http://www.sraoss.co.jp
>


[GENERAL] Query on pg_settings view

2017-11-15 Thread Ronen Nofar
Hi,

I have a weird case when running a query on the pg_settings view.
I have two users: the first one is the default user - postgres, which is a
superuser - and the other one is a role I created, called test_role, which
is not a superuser.
When I run a select on pg_settings with these two users I receive different
results.
Running the query 'select * from pg_settings' with the 'postgres' user returns
269 rows, while running the same query with 'test_role' returns 254 rows.
For example, the following query 'select lower(setting) from pg_settings
where lower(name) = 'config_file'' returns null when I connect with
'test_role', but with the 'postgres' user I receive one row
(/var/lib/pgsql/9.6/data/postgresql.conf).

I have granted all privileges on pg_settings to 'test_role' but it didn't
change the results.

I don't have any idea what's wrong here.

The PostgreSQL version is PostgreSQL 9.6.1 on x86_64-pc-linux-gnu

Can you please help me with this issue?

Thanks in advance,
Ronen


Re: [GENERAL] archive_command not being executed

2017-11-14 Thread Eric D
Hi Jeremy,


Thanks for the info on 9.3 vs 9.5.  While searching for Paul's suggestion I'd 
seen the same thing, but didn't "reply all" so my response didn't make it into 
the mailing list.


Regarding your question, as I understand the process, you need to get the WAL 
files being shipped over to the standby before you start streaming replication. 
 The sequence of events I think is supposed to happen is:


  1.  Start WAL file shipping from master->standby
  2.  Run pg_basebackup from master->standby
  3.  With a recovery.conf in place, start the standby.  The recovery.conf 
needs the connection info for the master for the streaming replication, but it 
also needs the location of the WAL files, to replay what's been going on during 
the pg_basebackup.

I don't think streaming replication has a way to pick up the activity covered 
in the WAL files.
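
(So on SB2 I'd expect a recovery.conf roughly like this - host, user and path
are my guesses:

    standby_mode = 'on'
    primary_conninfo = 'host=sb1-host user=repl_user'
    restore_command = 'cp /path/to/wal_archive/%f "%p"'
)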



From: Jeremy Schneider 
Sent: Monday, November 13, 2017 3:56 PM
To: eric...@hotmail.com
Cc: PostgreSQL General; Paul Jungwirth
Subject: Re: [GENERAL] archive_command not being executed

Hi Eric,

Thanks for using PostgreSQL!

On Fri, Nov 10, 2017 at 9:26 AM, Paul Jungwirth
 wrote:
> Oh this has happened to me before. :-) On SB1 you need to set
> archive_mode to always (not on). Otherwise it is ignored when running as a
> standby.

It looks to me like this feature was not added until 9.5 and Eric is
running 9.3  :(

> On 11/10/2017 09:10 AM, Eric D wrote:
>> I have a standby db server (SB1) that will soon become the master.  SB1
>> is set up with streaming replication from the current master.  I'm
>> trying to set up a third server (SB2) as a slave/standby to SB1, so that
>> when SB1 becomes the master, there will be a standby for it.  First step
>> is to get WAL files shipped from SB1->SB2.

Eric,

From my reading of the docs and commit logs, standby databases
couldn't archive their WALs until 9.5.

https://www.postgresql.org/message-id/cankgpbs7qgakgq-opzy0esam6+wue5mgpyehcgo_eoq7tjv...@mail.gmail.com
http://paquier.xyz/postgresql-2/postgres-9-5-feature-highlight-archive-mode-always/

There is a section in the 9.3 docs about cascading replication:
https://www.postgresql.org/docs/9.3/static/warm-standby.html#CASCADING-REPLICATION

It seems to me this is based on streaming replication only.  Looks
like it's not designed (in 9.3) to start archiving until it becomes a
primary.  "You will also need to set primary_conninfo in the
downstream standby to point to the cascading standby."

Are you able to configure a cascading replica by using streaming
replication on your 9.3 system, without WAL archiving on the standby?

-Jeremy

--
http://about.me/jeremy_schneider





Re: [GENERAL] jsonb

2017-11-14 Thread John R Pierce

On 11/14/2017 2:30 PM, hmidi slim wrote:

I'm trying to use the function to_jsonb and create the name:
to_jsonb ('{"key1":" ' || 'text1' || '","key2":" ' || 'text2' || '"}');
But after that I used Objection.js ORM to get data using the query:
Product.query().where('id',1).then(prod => {console.log(prod)})
I think the problem may be with the usage of the to_jsonb function; 
maybe I missed something. But when I fetch the data with the ORM I found 
that the type was a string and not jsonb




Never heard of your ORM... does it even know what Postgres jsonb is?
Do you know what actual SQL query that piece of ORMism generates?




--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] jsonb

2017-11-14 Thread Johannes Graën
On 11/14/2017 11:30 PM, hmidi slim wrote:
> I'm trying to use the function to_jsonb and create the name:
> to_jsonb ('{"key1":" ' || 'text1' || '","key2":" ' || 'text2' || '"}');

This query converts a string into a JSON value that consists of that
string. I guess what you intend to accomplish is rather:

select jsonb_build_object('key1', 'text1', 'key2', 'text2');




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] jsonb

2017-11-14 Thread Merlin Moncure
On Tue, Nov 14, 2017 at 4:30 PM, hmidi slim  wrote:
> I'm trying to use the function to_jsonb and create the name:
> to_jsonb ('{"key1":" ' || 'text1' || '","key2":" ' || 'text2' || '"}');
> But after that I used Objection.js ORM to get data using the query:
> Product.query().where('id',1).then(prod => {console.log(prod)})
> I think the problem may be with the usage of the to_jsonb function; maybe I
> missed something. But when I fetch the data with the ORM I found that the type
> was a string and not jsonb

Still not quite following. My advice would be to:

#1) work out the SQL you want the database to be running and verify
the results are correct

#2) figure out how to get the ORM to send that SQL

If you need help figuring out that SQL the ORM is actually running,
try turning on statement logging in postgresql.conf and watching the
log.
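
(A quick sketch of the latter: in postgresql.conf set

    log_statement = 'all'

then reload with "SELECT pg_reload_conf();" and tail the server log while the
ORM runs.)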

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PGPool in Master-Master mode, is it possible?

2017-11-14 Thread Tatsuo Ishii
> Hello There,
> 
> I need to setup two PGPool Servers in Master-Master mode, First thing I
> want to know, is it possible?
> 
> I know we can setup 2 PGPool servers in master-slave mode using watchdog.
> http://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/en.html
> 
> Could anyone please enlighten me and any workaround for this?
> 
> Regards
> VS

> I need to setup two PGPool Servers in Master-Master mode, First thing I
> want to know, is it possible?
> 
> I know we can setup 2 PGPool servers in master-slave mode using watchdog.
> http://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/en.html
> 
> Could anyone please enlighten me and any workaround for this?

This is not an appropriate list. You should go to the Pgpool mailing
list:

https://www.pgpool.net/mailman/listinfo/pgpool-general

Anyway... I am not sure what you mean by "Master-Master mode" but if
that means you want to issue write queries by connecting to any of
Pgpool-II port, it's already possible in the set up you have created.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] jsonb

2017-11-14 Thread hmidi slim
I'm trying to use the function to_jsonb and create the name:
to_jsonb ('{"key1":" ' || 'text1' || '","key2":" ' || 'text2' || '"}');
But after that I used Objection.js ORM to get data using the query:
Product.query().where('id',1).then(prod => {console.log(prod)})
I think the problem may be with the usage of the to_jsonb function; maybe I
missed something. But when I fetch the data with the ORM I found that the
type was a string and not jsonb


2017-11-14 23:09 GMT+01:00 Merlin Moncure :

> On Tue, Nov 14, 2017 at 3:32 PM, hmidi slim  wrote:
> > I have a column name of type 'jsonb' on my table named product. The
> format
> > of the column:
> > name: {"key1": "text1", "key2": "text2"}
> >
> > When I make a query to fetch data from the table I got this format:
> > name: '{"key1": "text1", "key2": "text2"}'
> >
> > Why does PostgreSQL return the name as a string type and not jsonb? Is
> > it a bug, or is there something else to add?
>
> not quite following.  Can you paste the query you are trying to
> execute along with the results vs. expectation?  thanks
>
> merlin
>


Re: [GENERAL] jsonb

2017-11-14 Thread Merlin Moncure
On Tue, Nov 14, 2017 at 3:32 PM, hmidi slim  wrote:
> I have a column name of type 'jsonb' on my table named product. The format
> of the column:
> name: {"key1": "text1", "key2": "text2"}
>
> When I make a query to fetch data from the table I got this format:
> name: '{"key1": "text1", "key2": "text2"}'
>
> Why does PostgreSQL return the name as a string type and not jsonb? Is
> it a bug, or is there something else to add?

not quite following.  Can you paste the query you are trying to
execute along with the results vs. expectation?  thanks

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] jsonb

2017-11-14 Thread hmidi slim
I have a column name of type 'jsonb' on my table named product. The format
of the column:
name: {"key1": "text1", "key2": "text2"}

When I make a query to fetch data from the table I got this format:
name: '{"key1": "text1", "key2": "text2"}'

Why does PostgreSQL return the name as a string type and not jsonb? Is
it a bug, or is there something else to add?


Re: [GENERAL] missing public on schema public

2017-11-14 Thread Stephen Frost
Tom, all,

* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Bo Thorbjørn Jensen writes:
> > I have some additional info and a fix.
> > Firstly steps to reproduce:
> 
> Yeah, I can reproduce this.  I suspect it got broken by Stephen's hacking
> around with default ACLs.  A simple example is

Yes, it's related to the work I did with pg_dump's ACL handling, because
we're no longer just always including the whole revoke/grant set of ACLs
for everything in the output.

> $ pg_dump -c -U postgres postgres | grep -i public
> DROP SCHEMA public;
> -- Name: public; Type: SCHEMA; Schema: -; Owner: postgres
> CREATE SCHEMA public;
> ALTER SCHEMA public OWNER TO postgres;
> -- Name: SCHEMA public; Type: COMMENT; Schema: -; Owner: postgres
> COMMENT ON SCHEMA public IS 'standard public schema';
> -- Name: public; Type: ACL; Schema: -; Owner: postgres
> GRANT ALL ON SCHEMA public TO PUBLIC;
> 
> That's fine, but if I shove it through an archive file:

This works because I added into pg_dump.c a check based on whether the output
is a clean (-c) run (and therefore whether the public schema is being recreated
or not).

In hindsight, that wasn't really the right thing to do because it ends
up only working when pg_dump is run with -c and doesn't consider the
case where pg_dump is run without -c but pg_restore is.

> $ pg_dump -f p.dump -Fc -U postgres postgres
> 
> $ pg_restore -c p.dump | grep -i public

This doesn't work because pg_dump isn't run with -c, while pg_restore
is.  If the archive is created with pg_dump -c (as the above was), then
the results match up between the two runs.  Note also that if pg_dump is
run with -c then a pg_restore without -c would actually still include
the GRANT statement, which isn't really correct either.

That's obviously a change from what we had before and wasn't
intentional.

> This is *REALLY BAD*.  Quite aside from the restore being wrong,
> those two sequences should never ever give different results.
> Stephen, you put some filtering logic in the wrong place in pg_dump.

I do wish it was that simple.

Unfortunately, the public schema is just ridiculously special, both in
the way it's a 'user' object but is created by initdb and that it's got
special non-default ACLs on it and how it has explicit special code to
skip over it when a restore is happening, unless -c is used.

What I'm afraid we need to do here is basically continue to hack on that
code in pg_backup_archiver.c's _printTocEntry() to teach it to issue the
default GRANT ALL ON SCHEMA public TO PUBLIC; when we are processing the
TOC entry for CREATE SCHEMA public;.

That would make the recreation of the public schema when pg_dump or
pg_restore is being run with -c actually match how the public schema is
created by initdb, and the rest would end up falling into place, I
think.

One complication, however, is what happens when a user drops and
recreates the public schema.  If that's done, we'll end up not dumping
out the delta from the public schema's initial ACLs, which wouldn't be
correct if you're restoring into a newly initdb'd cluster.  I'm thinking
that we need to forcibly look at the delta from
public-as-installed-by-initdb and whatever-public-is-now, regardless of
if the public schema was recreated by the user or not, because on
restore we are expecting a newly initdb'd cluster with the public schema
as originally installed (or as installed by pg_dump/pg_restore following
the logic above).

I'll play around with this approach and see if things end up working out
in a better fashion with it.  Baking this knowledge into
pg_backup_archiver.c is certainly ugly, but handling of public has
always been hard-coded into that, and we even added more special
handling to that code 10 years ago to deal with the COMMENT on the
public schema, so this is really just more of the same.

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [GENERAL] pg 10 crashes on int8_avg_combine

2017-11-14 Thread Dingyuan Wang
OK, I just avoided it by using sum(cnt::int), since cnt is small.

2017-11-15 00:25, Tom Lane:
> Dingyuan Wang  writes:
>> PostgreSQL 10.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 7.2.0-8)
>> 7.2.0, 64-bit
>>
>> (gdb) bt
>> #0  int8_avg_combine (fcinfo=0x55bdb92472d8) at
>> ./build/../src/backend/utils/adt/numeric.c:4285
> 
> I think this is the same issue being discussed at
> 
> https://www.postgresql.org/message-id/flat/20171110185747.31519.28038%40wrigleys.postgresql.org
> 
>   regards, tom lane
> 
> 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_basebackup running from a remote machine

2017-11-14 Thread Laurenz Albe
rakeshkumar464 wrote:
> If pg_basebackup is run from a remote machine with compress option --gzip ,
> compress level 9,
> will the compression occur prior to the data being sent on the network or
> after it has been received
> at the remote machine.

That only means that the output TAR file will be compressed; it has
nothing to do with the data transferred from the server.

If you want to compress the data sent over the network, use
pg_basebackup over an SSL connection with SSL compression enabled.
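
(Roughly, and assuming both ends were built with SSL compression support:

    pg_basebackup -d 'host=master.example.com sslmode=require sslcompression=1' \
        -D /backup -Ft -z
)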

Yours,
Laurenz Albe



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg 10 crashes on int8_avg_combine

2017-11-14 Thread Tom Lane
Dingyuan Wang  writes:
> PostgreSQL 10.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 7.2.0-8)
> 7.2.0, 64-bit
>
> (gdb) bt
> #0  int8_avg_combine (fcinfo=0x55bdb92472d8) at
> ./build/../src/backend/utils/adt/numeric.c:4285

I think this is the same issue being discussed at

https://www.postgresql.org/message-id/flat/20171110185747.31519.28038%40wrigleys.postgresql.org

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PGPool in Master-Master mode, is it possible?

2017-11-14 Thread Vikas Sharma
Hello There,

I need to setup two PGPool Servers in Master-Master mode, First thing I
want to know, is it possible?

I know we can setup 2 PGPool servers in master-slave mode using watchdog.
http://www.pgpool.net/pgpool-web/contrib_docs/watchdog_master_slave_3.3/en.html

Could anyone please enlighten me and any workaround for this?

Regards
VS


[GENERAL] pg 10 crashes on int8_avg_combine

2017-11-14 Thread Dingyuan Wang
Hi,

PostgreSQL constantly crashes on this query:

select dategrid, category, sum(cnt) from v_crime_grid group by dategrid,
category;

Where v_crime_grid is a materialized view, have columns "dategrid" int4,
"category" int4, and "cnt" int8.

Version is:

PostgreSQL 10.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 7.2.0-8)
7.2.0, 64-bit

GDB bt says:

(gdb) bt
#0  int8_avg_combine (fcinfo=0x55bdb92472d8) at
./build/../src/backend/utils/adt/numeric.c:4285
#1  0x55bdb73b2e4d in advance_combine_function
(pergroupstate=0x55bdb92481b8, pertrans=0x55bdb92471a8,
aggstate=0x55bdb9239378) at ./build/../src/backend/executor/nodeAgg.c:1233
#2  combine_aggregates (aggstate=0x55bdb9239378, pergroup=) at ./build/../src/backend/executor/nodeAgg.c:1167
#3  0x55bdb73b36bd in agg_retrieve_direct (aggstate=0x55bdb9239378)
at ./build/../src/backend/executor/nodeAgg.c:2407
#4  ExecAgg (pstate=0x55bdb9239378) at
./build/../src/backend/executor/nodeAgg.c:2124
#5  0x55bdb73a555d in ExecProcNode (node=0x55bdb9239378) at
./build/../src/include/executor/executor.h:250
#6  ExecutePlan (execute_once=, dest=0x55bdb9238dd8,
direction=, numberTuples=0, sendTuples=,
operation=CMD_SELECT, use_parallel_mode=,
planstate=0x55bdb9239378, estate=0x55bdb9239138) at
./build/../src/backend/executor/execMain.c:1721
#7  standard_ExecutorRun (queryDesc=0x55bdb9202f48, direction=, count=0, execute_once=) at
./build/../src/backend/executor/execMain.c:363
#8  0x7f5e48c1a0c5 in pgss_ExecutorRun (queryDesc=0x55bdb9202f48,
direction=ForwardScanDirection, count=0, execute_once=)
at ./build/../contrib/pg_stat_statements/pg_stat_statements.c:889
#9  0x55bdb74dc456 in PortalRunSelect
(portal=portal@entry=0x55bdb918a7a8, forward=forward@entry=1 '\001',
count=0, count@entry=9223372036854775807, dest=dest@entry=0x55bdb9238dd8)
at ./build/../src/backend/tcop/pquery.c:932
#10 0x55bdb74dda40 in PortalRun (portal=portal@entry=0x55bdb918a7a8,
count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=1
'\001', run_once=run_once@entry=1 '\001',
dest=dest@entry=0x55bdb9238dd8,
altdest=altdest@entry=0x55bdb9238dd8, completionTag=0x7ffd7dd99cd0 "")
at ./build/../src/backend/tcop/pquery.c:773
#11 0x55bdb74d9580 in exec_simple_query (query_string=0x55bdb9153348
"select dategrid, category, sum(cnt) from v_crime_grid group by
dategrid, category;")
at ./build/../src/backend/tcop/postgres.c:1099
#12 0x55bdb74db551 in PostgresMain (argc=,
argv=argv@entry=0x55bdb9188c50, dbname=,
username=) at ./build/../src/backend/tcop/postgres.c:4090
#13 0x55bdb721cd23 in BackendRun (port=0x55bdb91818b0) at
./build/../src/backend/postmaster/postmaster.c:4357
#14 BackendStartup (port=0x55bdb91818b0) at
./build/../src/backend/postmaster/postmaster.c:4029
#15 ServerLoop () at ./build/../src/backend/postmaster/postmaster.c:1753
#16 0x55bdb746b7bf in PostmasterMain (argc=5, argv=0x55bdb9133e40)
at ./build/../src/backend/postmaster/postmaster.c:1361
#17 0x55bdb721e834 in main (argc=5, argv=0x55bdb9133e40) at
./build/../src/backend/main/main.c:228


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pg_basebackup running from a remote machine

2017-11-14 Thread rakeshkumar464
If pg_basebackup is run from a remote machine with compress option --gzip,
compress level 9,
will the compression occur prior to the data being sent on the network, or
after it has been received
at the remote machine?



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] missing public on schema public

2017-11-14 Thread Tom Lane
Bo Thorbjørn Jensen writes:
> I have some additional info and a fix.
> Firstly steps to reproduce:

Yeah, I can reproduce this.  I suspect it got broken by Stephen's hacking
around with default ACLs.  A simple example is

$ pg_dump -c -U postgres postgres | grep -i public
DROP SCHEMA public;
-- Name: public; Type: SCHEMA; Schema: -; Owner: postgres
CREATE SCHEMA public;
ALTER SCHEMA public OWNER TO postgres;
-- Name: SCHEMA public; Type: COMMENT; Schema: -; Owner: postgres
COMMENT ON SCHEMA public IS 'standard public schema';
-- Name: public; Type: ACL; Schema: -; Owner: postgres
GRANT ALL ON SCHEMA public TO PUBLIC;

That's fine, but if I shove it through an archive file:

$ pg_dump -f p.dump -Fc -U postgres postgres

$ pg_restore -c p.dump | grep -i public
DROP SCHEMA public;
-- Name: public; Type: SCHEMA; Schema: -; Owner: postgres
CREATE SCHEMA public;
ALTER SCHEMA public OWNER TO postgres;
-- Name: SCHEMA public; Type: COMMENT; Schema: -; Owner: postgres
COMMENT ON SCHEMA public IS 'standard public schema';

This is *REALLY BAD*.  Quite aside from the restore being wrong,
those two sequences should never ever give different results.
Stephen, you put some filtering logic in the wrong place in pg_dump.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Retrieve the server's time zone

2017-11-14 Thread Thomas Kellerer
Tom Lane wrote on 14.11.2017 at 15:36:
>> is there any way (short of writing a function in an untrusted PL)
>> to determine the actual time zone (or time) of the server OS?
> 
> AFAIK that would only be true if some part of your client stack
> is issuing a SET TIMEZONE command.  (libpq will do that if it finds
> a PGTZ environment variable set, but not in response to plain TZ.)

Ah, interesting. I do that through JDBC, so apparently that's the part to 
blame. 
 
> If that's true, and you can't/don't want to change it, you could try
> 
> select reset_val from pg_settings where name = 'TimeZone';

Hmm, this does not seem to work. 

I am connected to a server with Asia/Bangkok but through JDBC 
that query still returns Europe/Berlin (which is my client's time zone)

So apparently the JDBC driver somehow "persists" this setting. 

I will take this to the JDBC mailing list then, thanks.

Thomas



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Retrieve the server's time zone

2017-11-14 Thread Tom Lane
Thomas Kellerer  writes:
> is there any way (short of writing a function in an untrusted PL) to 
> determine the actual time zone (or time) of the server OS? 

The default value of the timezone parameter is as close as you'll get
in modern versions of PG.

> "show timezone" always returns the client's time zone. 

AFAIK that would only be true if some part of your client stack
is issuing a SET TIMEZONE command.  (libpq will do that if it finds
a PGTZ environment variable set, but not in response to plain TZ.)

If that's true, and you can't/don't want to change it, you could try

select reset_val from pg_settings where name = 'TimeZone';

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: PostgreSQL walsender process doesn't exist after "pg_ctl stop -m fast"

2017-11-14 Thread y39chen
Thank you for the explanation. We shall try the latest PostgreSQL 9.6.6
version.



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Because PostgreSQL is compiling in old versions of OS?

2017-11-14 Thread Jose Maria Terry Jimenez

On 11/11/17 at 0:48, DrakoRod wrote:

Oh!!


Jose Maria TJ wrote

You're wrong, those are gcc versions, not OS versions.

For example in my CentOS 6 Box

cat /etc/redhat-release
CentOS release 6.9 (Final)

gcc -v
[...trimmed...]
gcc version 4.4.7 20120313 (Red Hat 4.4.7-18) (GCC)

You're right! It's the GCC version, not the OS version.

Great! I think that compiling with a GCC 4.x version is good for most OS
distributions, right?

Thanks!



You're welcome!


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] missing public on schema public

2017-11-14 Thread Bo Thorbjørn Jensen
I have some additional info and a fix.

Firstly steps to reproduce:

1.  create database:
CREATE DATABASE test WITH ENCODING='UTF8' OWNER=postgres CONNECTION LIMIT=-1;
-- here public has access to public

2. dump:
pg_dump -f testfile.dump -F c -h localhost -U postgres test

3. restore:
pg_restore -c -h localhost -U postgres -d test testfile.dump
-- here public no longer has access to schema public

It is easily fixable with:
GRANT ALL ON SCHEMA public TO public;

And the issue goes away.. (privilege stays after next dump/restore)

So. What am I missing?
Is this intentional functionality?

Kind regards and again thank you for your time

Bo Thorbjørn Jensen


[GENERAL] Connection loosing at some places - caused by firewall

2017-11-14 Thread Durumdara
Dear Members!


Windows 10, PGSQL 9.4 and 9.6 (Debian with SSL, and Windows without it - it
doesn't matter).

When I (or my boss) work at home, I get connection-lost errors from
PGAdmin (3/4) and from other applications too.


server closed the connection unexpectedly
 This probably means the server terminated abnormally
 before or while processing the request.


PGAdmin, EMS PostgreSQL Manager, Navicat for PGSQL, and psql too.

When at least 5 minutes have passed after the last request (select, insert, or
any other communication I did), the connection is lost.

The keepalive timer is 10 minutes on the server - so it is not the source
of the problem.

I tried setting the keepalive in the clients (EMS; Navicat calls it ping)
lower - no effect.

When I made any request on my side before the 5-minute "limit" passed, I
got a new 5 minutes.

It seems that libpq.dll applications need traffic (initiated by the client)
in 5-minute periods to keep the connection.

In the office I don't have this limitation.

My colleague can work at home without a time limit, and he has the same
Internet provider as me.

It's not PGSQL server based.
When I use a VPN connection from home to reach the office PG server, I also
get this problem.
From the office I can use the same IP and the same server without a problem.

I checked the router and disabled everything - nothing happened.

*I disabled my firewall at home - the problem vanished!!!*

And: if I use a secondary test computer (also Windows 10, PGAdmin III) - it
works at home!!!

I was very surprised, because it seems to be a computer firewall problem.
I tried wifi and a direct cable - no difference, so the network card can be
eliminated from the "problem sources" list.

It's very interesting!
Could the firewall distinguish between networks and limit these packets'
timeout to 5 minutes? Hmmm...

I added an exception to the firewall - nothing happened.

I know this is not a Windows admin list, but you may have experienced the
same problem, or have an idea about what to do.

This has become important for us because one of my customers has the same
problem, and we don't know the solution...

What is your opinion about it?

Thanks
  dd


[GENERAL] Retrieve the server's time zone

2017-11-14 Thread Thomas Kellerer
Hello,

is there any way (short of writing a function in an untrusted PL) to determine 
the actual time zone (or time) of the server OS? 

"show timezone" always returns the client's time zone. 

localtimestamp also converts the server's time to the client time zone (the one 
defined by "timezone")

log_timezone is also unreliable as it can be changed to anything.

I am looking for something along the lines of: "show server_timezone" or 
"select current_timestamp at time zone server_timezone" 

Is that possible? 




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] archive_command not being executed

2017-11-13 Thread Eric D
I'd come to the same conclusion Jeremy had about 9.5 being the first version 
with the 'always' option for archive_mode.   Looking at pg_receivexlog, that 
might work, but with me being a total noob I'm wary of the various steps I'd 
have to take in going from:


Master -> streaming replication to -> SB1 -> pg_receivexlog to -> SB2


to:


New Master (Old SB1) -> streaming replication to -> SB2


And whether or not the conversion from pg_receivexlog to normal streaming 
replication would maintain data integrity.  I need to skew this towards 
simplicity or I'll likely screw it up.


My current thought is to cut off master, promote SB1, set up WAL file shipping 
to SB2, start a pg_basebackup, make SB1 live, then run for a couple days with 
no backup as the pg_basebackup runs.   Far from ideal but at least I have gone 
through most of this before.



From: Michael Paquier 
Sent: Monday, November 13, 2017 6:01 PM
To: Jeremy Schneider
Cc: eric...@hotmail.com; PostgreSQL General; Paul Jungwirth
Subject: Re: [GENERAL] archive_command not being executed

On Tue, Nov 14, 2017 at 8:56 AM, Jeremy Schneider
 wrote:
> From my reading of the docs and commit logs, standby databases
> couldn't archive their WALs until 9.5.

pg_receivexlog is available in 9.3. You could build your archives with it
easily, for example by connecting it to the standby you'd like to get the
archives from.
--
Michael


Re: [GENERAL] PostgreSQL walsender process doesn't exit after "pg_ctl stop -m fast"

2017-11-13 Thread Michael Paquier
On Tue, Nov 14, 2017 at 12:05 AM, Tom Lane  wrote:
> y39chen  writes:
>> We encountered a problem where the PostgreSQL walsender process doesn't exit
>> after "pg_ctl stop -m fast".
>> Uses PostgreSQL 9.6.2
>
> There was a fix in 9.6.4 that's at least related to this problem.
> It would be interesting to see if you can still reproduce it on
> current 9.6.

Commit that may matter here:
commit: e9d4aa594f2caa8c28d55c41c9926420b1efdb79
author: Tom Lane 
date: Fri, 30 Jun 2017 12:00:03 -0400
Fix walsender to exit promptly if client requests shutdown.

It's possible for WalSndWaitForWal to be asked to wait for WAL that doesn't
exist yet.  That's fine, in fact it's the normal situation if we're caught
up; but when the client requests shutdown we should not keep waiting.
The previous coding could wait indefinitely if the source server was idle.

In passing, improve the rather weak comments in this area, and slightly
rearrange some related code for better readability.

Back-patch to 9.4 where this code was introduced.

Discussion: https://postgr.es/m/14154.1498781...@sss.pgh.pa.us
-- 
Michael




Re: [GENERAL] archive_command not being executed

2017-11-13 Thread Michael Paquier
On Tue, Nov 14, 2017 at 8:56 AM, Jeremy Schneider
 wrote:
> From my reading of the docs and commit logs, standby databases
> couldn't archive their WALs until 9.5.

pg_receivexlog is available in 9.3. You could build your archives with it
easily, for example by connecting it to the standby you'd like to get the
archives from.
-- 
Michael




Re: [GENERAL] archive_command not being executed

2017-11-13 Thread Jeremy Schneider
Hi Eric,

Thanks for using PostgreSQL!

On Fri, Nov 10, 2017 at 9:26 AM, Paul Jungwirth
 wrote:
> Oh this has happened to me before. :-) On SB1 you need to set
> archive_mode to always (not on). Otherwise it is ignored when running as a
> standby.

It looks to me like this feature was not added until 9.5 and Eric is
running 9.3  :(

> On 11/10/2017 09:10 AM, Eric D wrote:
>> I have a standby db server (SB1) that will soon become the master.  SB1
>> is set up with streaming replication from the current master.  I'm
>> trying to set up a third server (SB2) as a slave/standby to SB1, so that
>> when SB1 becomes the master, there will be a standby for it.  First step
>> is to get WAL files shipped from SB1->SB2.

Eric,

From my reading of the docs and commit logs, standby databases
couldn't archive their WALs until 9.5.

https://www.postgresql.org/message-id/cankgpbs7qgakgq-opzy0esam6+wue5mgpyehcgo_eoq7tjv...@mail.gmail.com
http://paquier.xyz/postgresql-2/postgres-9-5-feature-highlight-archive-mode-always/

There is a section in the 9.3 docs about cascading replication:
https://www.postgresql.org/docs/9.3/static/warm-standby.html#CASCADING-REPLICATION

It seems to me this is based on streaming replication only.  Looks
like it's not designed (in 9.3) to start archiving until it becomes a
primary.  "You will also need to set primary_conninfo in the
downstream standby to point to the cascading standby."
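
In recovery.conf terms that would be roughly (hostname and user are
placeholders):

# recovery.conf on the downstream standby
standby_mode = 'on'
primary_conninfo = 'host=sb1.example.com port=5432 user=replication'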

Are you able to configure a cascading replica by using streaming
replication on your 9.3 system, without WAL archiving on the standby?

-Jeremy

-- 
http://about.me/jeremy_schneider




Re: [GENERAL] PG-10 + ICU and abbreviated keys

2017-11-13 Thread Andreas Joseph Krogh
On Tuesday, 14 November 2017 at 00:44:11, Peter Geoghegan wrote:
On Mon, Nov 13, 2017 at 2:05 PM, Andreas Joseph Krogh
  wrote:
 > When sorting on text, we're usually doing so using a multi-column index, 
like for instance "CREATE INDEX xxx ON my_table (lower(name) ASC, created 
ASC)". Will abbreviated keys help here?

 Yes, they'll help with that, even though the leading column might be
 low cardinality.
 
Nice to know, thanks.
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


 


Re: [GENERAL] PG-10 + ICU and abbreviated keys

2017-11-13 Thread Peter Geoghegan
On Mon, Nov 13, 2017 at 2:05 PM, Andreas Joseph Krogh
 wrote:
> When sorting on text, we're usually doing so using a multi-column index, 
> like for instance "CREATE INDEX xxx ON my_table (lower(name) ASC, created 
> ASC)". Will abbreviated keys help here?

Yes, they'll help with that, even though the leading column might be
low cardinality.
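
If you want to put a number on it once you're on v10, a rough sketch using
the index from your example (names taken from your example; note the ICU
collation - with a plain libc collation, abbreviation is currently disabled
for text):

\timing on
CREATE INDEX xxx ON my_table (lower(name) COLLATE "en-x-icu" ASC, created ASC);

The build is dominated by the sort, so the abbreviated-key speedup on the
leading text column shows up directly in the build time.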

-- 
Peter Geoghegan




[GENERAL] "Cascading Logical Replication" from a physical replica

2017-11-13 Thread Hannes Erven

Hi,


as of PG10, it is not possible to create logical replication slots on 
standby servers.
Should that fact perhaps be mentioned more explicitly in 
https://www.postgresql.org/docs/10/static/logical-replication.html ?



There seems to be ongoing work by Craig Ringer, but it doesn't seem to be
arriving soon...

https://commitfest.postgresql.org/12/788/


My requirement is to be notified of any row changes 
(insert/update/delete) that become visible on a standby server through 
streaming physical replication.
Table name and primary key would be sufficient; I don't need the actual 
row data. And there's no "replication" at all ;-)



Is this something that pglogical 
(https://www.2ndquadrant.com/en/resources/pglogical/ ) or pgxlogdump 
could provide?



The workaround I've come up with so far would be: connect logical replication
to the master and extract the events I'm interested in, together with the
WAL position of the corresponding TX's commit.
Then send that information to each replica; connect locally and poll the
replayed WAL position every second or so, work on the events that have
become visible, and queue the rest.
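
For the polling part, a minimal sketch (PG10 function names; the LSN literal
stands in for a commit position captured from the master):

-- on the physical replica: how far has WAL replay progressed?
SELECT pg_last_wal_replay_lsn();

-- an event committed on the master at that LSN is visible here once:
SELECT pg_last_wal_replay_lsn() >= '0/16B3748'::pg_lsn;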


But this seems to be quite complex compared to just reading the WAL off 
the standby where it already is, ideally using the logical output plugin 
interface to format the data.



Thanks for any insights!
Best regards,

-hannes erven








Re: [GENERAL] PG-10 + ICU and abbreviated keys

2017-11-13 Thread Andreas Joseph Krogh
On Monday, 13 November 2017 at 22:28:40, Peter Geoghegan wrote:
On Mon, Nov 13, 2017 at 12:48 PM, Andreas Joseph Krogh
  wrote:
 > Thanks.

 As the person that worked on abbreviated keys, I'd like to hear about
 how you get on with this. How much faster is it for you?

 I don't usually get to hear about this, because most users don't
 notice that anything in particular gets faster, because there are many
 performance enhancements added to a release.
 
We haven't migrated any of our databases to v10 yet, so I really can't tell. 
I'm evaluating ICU usage as the last step before we decide on moving to v10.
Being a per-column setting means a pg_dump/reload won't cut it (AFAIU), so I'm
not sure we'll take that route, as it involves much manual tweaking which
we're really not interested in spending time on.
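
For what it's worth, the per-column tweak itself would be something like this
for each text column - assuming a v10 build with ICU and one of its
predefined collations; dependent indexes are rebuilt as part of the command:

ALTER TABLE my_table ALTER COLUMN name TYPE text COLLATE "en-x-icu";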
 
When sorting on text, we're usually doing so using a multi-column index, like 
for instance "CREATE INDEX xxx ON my_table (lower(name) ASC, created ASC)". 
Will abbreviated keys help here?
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


 

