[GENERAL] PGConf.Asia and visa

2016-10-04 Thread Tatsuo Ishii
To all who are planning to join PGConf.Asia:

http://www.pgconf.asia/EN/

The main conference is scheduled for December 2nd and 3rd, 2016 (plus a
developer conference/unconference on December 1st). Registration will
open by the end of October.

If you need a visa to enter Japan, please let me know
(is...@sraoss.co.jp). This time SRA OSS, Inc. Japan is handling visa
matters, because a legal entity is required to issue the invitation
letters.

Obtaining a visa can take a long time, so I recommend contacting me as
soon as possible.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query help

2016-10-04 Thread Daniel Caldeweyher
Try this:

select distinct vendor_no, vendor_name
from ap_vendors
where vendor_no in (
   select vendor_no from ap_vendors
   group by vendor_no
   having array_agg(company_code) @> ARRAY['BUR','EBC','SNJ'])
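The same GROUP BY/HAVING idea also answers the second question (vendors missing from at least one showroom). A minimal, self-contained sketch of both variants, using Python's sqlite3 purely to illustrate the grouping logic, with COUNT(DISTINCT company_code) standing in for the array_agg test (SQLite has no array type):

```python
import sqlite3

# Toy copy of ap_vendors with just the columns that matter here.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ap_vendors (company_code TEXT, vendor_no TEXT, vendor_name TEXT);
    INSERT INTO ap_vendors VALUES
      ('BUR','ADEXU','ADEX USA'), ('BUR','AGORA','Agora Natural Surfaces'),
      ('BUR','AKDOP','AKDO Pacific'),
      ('EBC','ADEXU','ADEX USA'), ('EBC','AGORA','Agora Natural Surfaces'),
      ('EBC','ARIZ01','Arizona Tile'),
      ('SNJ','AKDOP','AKDO Pacific'), ('SNJ','AGORA','Agora Natural Surfaces'),
      ('SNJ','ADEXU','ADEX USA');
""")

# Vendors present in all three showrooms.
in_all = [r[0] for r in conn.execute("""
    SELECT vendor_no FROM ap_vendors
    WHERE company_code IN ('BUR','EBC','SNJ')
    GROUP BY vendor_no
    HAVING COUNT(DISTINCT company_code) = 3
    ORDER BY vendor_no""")]

# Vendors missing from at least one showroom.
not_in_all = [r[0] for r in conn.execute("""
    SELECT vendor_no FROM ap_vendors
    WHERE company_code IN ('BUR','EBC','SNJ')
    GROUP BY vendor_no
    HAVING COUNT(DISTINCT company_code) < 3
    ORDER BY vendor_no""")]

print(in_all)      # ['ADEXU', 'AGORA']
print(not_in_all)  # ['AKDOP', 'ARIZ01']
```

In PostgreSQL you would run the same two queries directly against ap_vendors; COUNT(DISTINCT company_code) also copes with the same vendor appearing under several ap_division_no values within one showroom.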


On Wed, Oct 5, 2016 at 1:31 PM, Bret Stern  wrote:

> Good evening,
> I'm curious about a way to ask the following question of my vendors
> table.
>
> pseudo1 "select all vendors which exist in BUR and EBC and SNJ"
>
> and
> pseudo2 "select all vendors which DO NOT exist in all three showrooms"
>
>
> The data is from a Sage accounting system which I pull out and place in a
> pg db. What we are trying to do is identify which vendors are defined in
> all three of our showrooms, and vice-versa, which ones are not.
>
> ap_vendors table
>   company_code character varying(10) NOT NULL,
>   ap_division_no character varying(2) NOT NULL,
>   vendor_no character varying(7) NOT NULL,
>   terms_code character varying(2),
>   vendor_name character varying(30),
>   address_line1 character varying(30),
>   address_line2 character varying(30),
>   address_line3 character varying(30),
>   city character varying(20),
>   state character varying(2),
>   zip_code character varying(10),
>   telephone_no character varying(17),
>   fax_no character varying(17),
>   CONSTRAINT ap_vendors_pk PRIMARY KEY (company_code, ap_division_no,
> vendor_no)
>
> sample records:
> "BUR";"00";"ADEXU";"30";"ADEX USA";"''";"''";"''";"''";"''";
> "92831";"''";"''"
> "BUR";"00";"AGORA";"30";"Agora Natural Surfaces";"''";"''";"''";"''";
> "''";"90248";"''";"''"
> "BUR";"00";"AKDOP";"30";"AKDO Pacific";"''";"''";"''";"''";"
> ''";"94545";"''";"''"
> "EBC";"00";"ADEXU";"30";"ADEX USA";"''";"''";"''";"''";"''";
> "92831";"''";"''"
> "EBC";"00";"AGORA";"30";"Agora Natural Surfaces";"''";"''";"''";"''";
> "''";"90248";"''";"''"
> "EBC";"00";"ARIZ01";"30";"Arizona Tile";"''";"''";"''";"''";"''"
> ;"94550";"''";"''"
> "SNJ";"00";"AKDOP";"30";"AKDO Pacific";"''";"''";"''";"''";"
> ''";"94545";"''";"''"
> "SNJ";"00";"AGORA";"30";"Agora Natural Surfaces";"''";"''";"''";"''";
> "''";"90248";"''";"''"
> "SNJ";"00";"ADEXU";"30";"ADEX USA";"''";"''";"''";"''";"''";
> "92831";"''";"''"
>
> What I need is a query I can modify to return only vendors that exist
> in all three company_code values (BUR, EBC, SNJ); they represent
> showroom locations.
>
> eg; exists in BUR, EBC, SNJ
> ADEXU
> AGORA
>
> OR
>
> modify the query to return only the vendors which DO NOT exist in all
> three showrooms based on the first column company_code
>
> eg;
> AKDOP only exists in BUR and SNJ
> ARIZ01 only exists in EBC
>
> Thanks
> Bret
>
>
>


Re: [GENERAL] Query help

2016-10-04 Thread Rob Sargent

> On Oct 4, 2016, at 9:31 PM, Bret Stern  
> wrote:
> 
> Good evening,
> I'm curious about a way to ask the following question of my vendors
> table.
> 
> pseudo1 "select all vendors which exist in BUR and EBC and SNJ"
> 
> and
> pseudo2 "select all vendors which DO NOT exist in all three showrooms"
> 
> 
> The data is from a Sage accounting system which I pull out and place in a
> pg db. What we are trying to do is identify which vendors are defined in
> all three of our showrooms, and vice-versa, which ones are not.
> 
> ap_vendors table
>   company_code character varying(10) NOT NULL,
>   ap_division_no character varying(2) NOT NULL,
>   vendor_no character varying(7) NOT NULL,
>   terms_code character varying(2),
>   vendor_name character varying(30),
>   address_line1 character varying(30),
>   address_line2 character varying(30),
>   address_line3 character varying(30),
>   city character varying(20),
>   state character varying(2),
>   zip_code character varying(10),
>   telephone_no character varying(17),
>   fax_no character varying(17),
>   CONSTRAINT ap_vendors_pk PRIMARY KEY (company_code, ap_division_no, 
> vendor_no)
> 
> sample records:
> "BUR";"00";"ADEXU";"30";"ADEX USA";"''";"''";"''";"''";"''";"92831";"''";"''"
> "BUR";"00";"AGORA";"30";"Agora Natural 
> Surfaces";"''";"''";"''";"''";"''";"90248";"''";"''"
> "BUR";"00";"AKDOP";"30";"AKDO 
> Pacific";"''";"''";"''";"''";"''";"94545";"''";"''"
> "EBC";"00";"ADEXU";"30";"ADEX USA";"''";"''";"''";"''";"''";"92831";"''";"''"
> "EBC";"00";"AGORA";"30";"Agora Natural 
> Surfaces";"''";"''";"''";"''";"''";"90248";"''";"''"
> "EBC";"00";"ARIZ01";"30";"Arizona 
> Tile";"''";"''";"''";"''";"''";"94550";"''";"''"
> "SNJ";"00";"AKDOP";"30";"AKDO 
> Pacific";"''";"''";"''";"''";"''";"94545";"''";"''"
> "SNJ";"00";"AGORA";"30";"Agora Natural 
> Surfaces";"''";"''";"''";"''";"''";"90248";"''";"''"
> "SNJ";"00";"ADEXU";"30";"ADEX USA";"''";"''";"''";"''";"''";"92831";"''";"''"
> 
> What I need is a query I can modify to return only vendors that exist
> in all three company_code values (BUR, EBC, SNJ); they represent
> showroom locations.
> 
> eg; exists in BUR, EBC, SNJ
> ADEXU
> AGORA
> 
> OR
> 
> modify the query to return only the vendors which DO NOT exist in all
> three showrooms based on the first column company_code
> 
> eg; 
> AKDOP only exists in BUR and SNJ
> ARIZ01 only exists in EBC
> 
> Thanks
> Bret
> 
> 
Not sure I like the schema, but:

select vendor_no, count(distinct company_code)
from ap_vendors
group by vendor_no
having count(distinct company_code) = 3;

and count(distinct company_code) < 3 is your second answer.





[GENERAL] Query help

2016-10-04 Thread Bret Stern
Good evening,
I'm curious about a way to ask the following question of my vendors
table.

pseudo1 "select all vendors which exist in BUR and EBC and SNJ"

and
pseudo2 "select all vendors which DO NOT exist in all three showrooms"


The data is from a Sage accounting system which I pull out and place in
a
pg db. What we are trying to do is identify which vendors are defined in
all three of our showrooms, and vice-versa, which ones are not.

ap_vendors table
  company_code character varying(10) NOT NULL,
  ap_division_no character varying(2) NOT NULL,
  vendor_no character varying(7) NOT NULL,
  terms_code character varying(2),
  vendor_name character varying(30),
  address_line1 character varying(30),
  address_line2 character varying(30),
  address_line3 character varying(30),
  city character varying(20),
  state character varying(2),
  zip_code character varying(10),
  telephone_no character varying(17),
  fax_no character varying(17),
  CONSTRAINT ap_vendors_pk PRIMARY KEY (company_code, ap_division_no,
vendor_no)

sample records:
"BUR";"00";"ADEXU";"30";"ADEX
USA";"''";"''";"''";"''";"''";"92831";"''";"''"
"BUR";"00";"AGORA";"30";"Agora Natural
Surfaces";"''";"''";"''";"''";"''";"90248";"''";"''"
"BUR";"00";"AKDOP";"30";"AKDO
Pacific";"''";"''";"''";"''";"''";"94545";"''";"''"
"EBC";"00";"ADEXU";"30";"ADEX
USA";"''";"''";"''";"''";"''";"92831";"''";"''"
"EBC";"00";"AGORA";"30";"Agora Natural
Surfaces";"''";"''";"''";"''";"''";"90248";"''";"''"
"EBC";"00";"ARIZ01";"30";"Arizona
Tile";"''";"''";"''";"''";"''";"94550";"''";"''"
"SNJ";"00";"AKDOP";"30";"AKDO
Pacific";"''";"''";"''";"''";"''";"94545";"''";"''"
"SNJ";"00";"AGORA";"30";"Agora Natural
Surfaces";"''";"''";"''";"''";"''";"90248";"''";"''"
"SNJ";"00";"ADEXU";"30";"ADEX
USA";"''";"''";"''";"''";"''";"92831";"''";"''"

What I need is a query I can modify to return only vendors that exist
in all three company_code values (BUR, EBC, SNJ); they represent
showroom locations.

eg; exists in BUR, EBC, SNJ
ADEXU
AGORA

OR

modify the query to return only the vendors which DO NOT exist in all
three showrooms based on the first column company_code

eg; 
AKDOP   only exists in BUR and SNJ
ARIZ01  only exists in EBC

Thanks
Bret




Re: [GENERAL] ZSON, PostgreSQL extension for compressing JSONB

2016-10-04 Thread Michael Paquier
On Wed, Oct 5, 2016 at 12:34 AM, Aleksander Alekseev
 wrote:
> I could align ZSON to the PostgreSQL code style; I only need to run pgindent
> and write a few comments. Do you think the community would be interested in
> adding it to /contrib/? Or does ZSON solve too specific a problem for that?

I find the references to pglz quite troubling, particularly since this
data type apparently uses its own compression logic.
-- 
Michael




Re: [GENERAL] postgresql 9.5 upsert issue with nulls

2016-10-04 Thread Kenneth Marshall
On Tue, Oct 04, 2016 at 07:46:48PM +, Shaun McCready wrote:
> Hello,
> 
> I'm having an issue with using the new UPSERT feature in Postgres 9.5
> 
> I have a table that is used for aggregating data from another table. The 
> composite key is made up of 20 columns, 10 of which can be nullable.
> Below I have created a smaller version of the issue I'm having, specifically 
> with NULL values.
> 
> 
> CREATE TABLE public.test_upsert
> (
>   upsert_id integer NOT NULL DEFAULT 
> nextval('test_upsert_upsert_id_seq'::regclass),
>   name character varying(32) NOT NULL,
>   status integer NOT NULL,
>   test_field text,
>   identifier character varying(255),
>   count integer,
>   CONSTRAINT upsert_id_pkey PRIMARY KEY (upsert_id),
>   CONSTRAINT test_upsert_name_status_test_field_key UNIQUE (name, status, 
> test_field)
> )
> WITH (
>   OIDS=FALSE
> );
> 
> 
> Running this query works as needed (First insert, then subsequent inserts 
> simply increment the count):
> INSERT INTO test_upsert as tu(name,status,test_field,identifier, count) 
> VALUES ('shaun',1,'test value','ident', 1)
> ON CONFLICT (name,status,test_field) DO UPDATE set count = tu.count + 1 where 
> tu.name = 'shaun' AND tu.status = 1 AND tu.test_field = 'test value';
> 
> 
> However if I run this query, 1 row is inserted each time rather than 
> incrementing the count for the initial row:
> INSERT INTO test_upsert as tu(name,status,test_field,identifier, count) 
> VALUES ('shaun',1,null,'ident', 1)
> ON CONFLICT (name,status,test_field) DO UPDATE set count = tu.count + 1 where 
> tu.name = 'shaun' AND tu.status = 1 AND tu.test_field = null;
> 
> This is my issue. I need to simply increment the count value and not create 
> multiple identical rows with null values.
> 
> 
> Attempting to add a partial unique index:
> CREATE UNIQUE INDEX test_upsert_upsert_id_idx
>   ON public.test_upsert
>   USING btree
>   (name COLLATE pg_catalog."default", status, test_field, identifier);
> 
>   However this yields the same results, either multiple null rows being 
> inserted or "ERROR:  there is no unique or exclusion constraint matching the 
> ON CONFLICT specification" messages being returned when trying to insert.
> 
>   I already attempted to add extra details on the partial index such as WHERE 
> test_field is not null OR identifier is not null, however when inserting I 
> get the constraint error message.
> 
> 
>   Any suggestions would be appreciated, thanks!
> 
> 
> -Shaun

Hi Shaun,

NULL <> NULL, so each INSERT with a NULL will create a new row. A NULL
value is defined to be an unknown value, so these two INSERTs:

INSERT INTO test_upsert as tu(name,status,test_field,identifier, count)
VALUES ('shaun',1,null,'ident', 1);
INSERT INTO test_upsert as tu(name,status,test_field,identifier, count)
VALUES ('shaun',1,null,'ident', 1);

insert different rows. You might want to change your NULL to the empty
string or some other fixed token if you actually want the values to
compare as equal.
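The behaviour is easy to reproduce outside PostgreSQL, since SQLite's UNIQUE constraints and UPSERT (SQLite >= 3.24) treat NULLs the same way; this is only an illustration of the semantics, not of the original table (the count column is renamed cnt here to avoid clashing with the aggregate name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE test_upsert (
        name       TEXT    NOT NULL,
        status     INTEGER NOT NULL,
        test_field TEXT,
        cnt        INTEGER,
        UNIQUE (name, status, test_field))""")

# Two identical inserts with a NULL test_field: NULLs compare as
# distinct, so the UNIQUE constraint never fires and we get two rows.
conn.execute("INSERT INTO test_upsert VALUES ('shaun', 1, NULL, 1)")
conn.execute("INSERT INTO test_upsert VALUES ('shaun', 1, NULL, 1)")
rows_with_null = conn.execute(
    "SELECT COUNT(*) FROM test_upsert WHERE status = 1").fetchone()[0]
print(rows_with_null)  # 2

# With a fixed token ('' here) instead of NULL, the second insert does
# conflict, and ON CONFLICT increments the counter as intended.
upsert = """
    INSERT INTO test_upsert VALUES ('shaun', 2, '', 1)
    ON CONFLICT (name, status, test_field)
    DO UPDATE SET cnt = cnt + 1"""
conn.execute(upsert)
conn.execute(upsert)
cnt = conn.execute(
    "SELECT cnt FROM test_upsert WHERE status = 2").fetchone()[0]
print(cnt)  # 2
```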

Regards,
Ken




Re: [GENERAL] Unexpected trouble from pg_basebackup

2016-10-04 Thread otheus uibk
After a 3- to 4-minute delay, pg_basebackup started doing its thing and
finished within a few minutes. So now the question is: why the startup
delay?


Re: [GENERAL] Unexpected trouble from pg_basebackup

2016-10-04 Thread Magnus Hagander
On Tue, Oct 4, 2016 at 10:42 PM, otheus uibk  wrote:

> After a 3- to 4-minute delay, pg_basebackup started doing its thing and
> finished within a few minutes. So now the question is: why the startup
> delay?
>


Sounds to me like it's doing a spread CHECKPOINT, which makes it take
time. Try with "-c fast" and see if the problem goes away.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


[GENERAL] postgresql 9.5 upsert issue with nulls

2016-10-04 Thread Shaun McCready
Hello,

I'm having an issue with using the new UPSERT feature in Postgres 9.5

I have a table that is used for aggregating data from another table. The 
composite key is made up of 20 columns, 10 of which can be nullable.
Below I have created a smaller version of the issue I'm having, specifically 
with NULL values.


CREATE TABLE public.test_upsert
(
  upsert_id integer NOT NULL DEFAULT 
nextval('test_upsert_upsert_id_seq'::regclass),
  name character varying(32) NOT NULL,
  status integer NOT NULL,
  test_field text,
  identifier character varying(255),
  count integer,
  CONSTRAINT upsert_id_pkey PRIMARY KEY (upsert_id),
  CONSTRAINT test_upsert_name_status_test_field_key UNIQUE (name, status, 
test_field)
)
WITH (
  OIDS=FALSE
);


Running this query works as needed (First insert, then subsequent inserts 
simply increment the count):
INSERT INTO test_upsert as tu(name,status,test_field,identifier, count) VALUES 
('shaun',1,'test value','ident', 1)
ON CONFLICT (name,status,test_field) DO UPDATE set count = tu.count + 1 where 
tu.name = 'shaun' AND tu.status = 1 AND tu.test_field = 'test value';


However if I run this query, 1 row is inserted each time rather than 
incrementing the count for the initial row:
INSERT INTO test_upsert as tu(name,status,test_field,identifier, count) VALUES 
('shaun',1,null,'ident', 1)
ON CONFLICT (name,status,test_field) DO UPDATE set count = tu.count + 1 where 
tu.name = 'shaun' AND tu.status = 1 AND tu.test_field = null;

This is my issue. I need to simply increment the count value and not create 
multiple identical rows with null values.


Attempting to add a partial unique index:
CREATE UNIQUE INDEX test_upsert_upsert_id_idx
  ON public.test_upsert
  USING btree
  (name COLLATE pg_catalog."default", status, test_field, identifier);

  However this yields the same results, either multiple null rows being 
inserted or "ERROR:  there is no unique or exclusion constraint matching the ON 
CONFLICT specification" messages being returned when trying to insert.

  I already attempted to add extra details on the partial index such as WHERE 
test_field is not null OR identifier is not null, however when inserting I get 
the constraint error message.


  Any suggestions would be appreciated, thanks!


-Shaun


[GENERAL] Unexpected trouble from pg_basebackup

2016-10-04 Thread otheus uibk
I recently updated my systems from pg 9.1.8 to 9.5.3. A pg_dumpall was used
to migrate the data. Now I'm trying to re-establish replication between
master and slave. I'm getting stuck.

When I run pg_basebackup (via a script which worked flawlessly on 9.1.8,
AND via command line, ala "manual mode"), it creates the PGDATA directory
structure, some files, but soon gets stuck. It writes fewer than 40 MB of a
20GB database. Further, using tcpdump to watch packets on the wire, I
confirm no data is going across.

However, it's clear a connection has been made and the process has started.
On the master, the process table shows:

postgres: wal sender process pgsync A.B.C.D(42821) sending backup

Meanwhile, on the client, I see no output updates. Here's the command:

pg_basebackup -x -P -v -D $PGDATA -w

PGUSER, PGPASSWORD and PGHOST are set accordingly. The user has the
REPLICATION attribute. Just in case, I dropped and re-created the user.

So that's question A.

Question B is related. In attempting to verify that the permissions and HBA
were set correctly, I attempted to do a pg_dump using the same PGUSER and
PGHOST. What I got surprised me:

  $ pg_dump -s onyxeditor
  pg_dump: [archiver (db)] query failed: ERROR:  permission denied for
relation licence
  pg_dump: [archiver (db)] query was: LOCK TABLE public.licence IN ACCESS
SHARE MODE

This is the replication user. Other schemas worked just fine, but not this
one. Is this expected behavior?

 Kind regards,
 Otheus

-- 
Otheus
otheus.u...@gmail.com
otheus.shell...@uibk.ac.at


Re: [GENERAL] Query killed with Out of memory

2016-10-04 Thread Chris Mair

With a heavy query, when the result set grows beyond 600k rows, the query
dies with an out-of-memory error.

Here is the explain analyze:
[...]
work_mem is 512MB, shared_buffers is 3084MB, and system RAM is 10GB. The
Postgres version is 8.4.8 and for some months I cannot upgrade.

Is there a way to solve the problem?


Hi,

a few ideas:

 - What goes out of memory? The client? If you query from a programming
   language, you should set the fetch/batch size so that the client won't
   pull all 600k rows into memory at once... for the psql client you can do
   \set FETCH_COUNT 1000

 - work_mem of 512 MB is high-ish unless you have a small value for
   max_connections...

 - 8.4.8 was released in 2011; the latest 8.4 release is 8.4.22, so you're
   missing lots of patches (and 8.4 was EOLed more than two years ago).
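For the first point, the generic client-side pattern is a batched fetch loop. A sketch using Python's DB-API fetchmany() (sqlite3 here just to make it runnable; with PostgreSQL and psycopg2 you would additionally need a named server-side cursor, since client-side cursors pull the whole result set first):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (id INTEGER)")
conn.executemany("INSERT INTO big VALUES (?)",
                 [(i,) for i in range(2500)])

cur = conn.execute("SELECT id FROM big")
total = batches = 0
while True:
    rows = cur.fetchmany(1000)   # at most 1000 rows in memory at a time
    if not rows:
        break
    batches += 1
    total += len(rows)

print(total, batches)  # 2500 3
```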

Bye,
Chris.



[GENERAL] Query killed with Out of memory

2016-10-04 Thread Job
Hello,

With a heavy query, when the result set grows beyond 600k rows, the query
dies with an out-of-memory error.

Here is the explain analyze:

CTE Scan on lista  (cost=25066.66..47721.23 rows=3678 width=260)

   CTE lista

 ->  Unique  (cost=24956.32..25066.66 rows=3678 width=512)

   ->  Sort  (cost=24956.32..24965.52 rows=3678 width=512)

 Sort Key: "*SELECT* 1".id, "*SELECT* 1".data_log, "*SELECT* 
1".type_log, "*SELECT* 1".ip, "*SELECT* 1".log_id, "*SELECT* 1".url_dominio, 
"*SELECT* 1".porta, "*SELECT* 1".action_bind, "*SELECT* 1".action, 
('DNS_DENIED/403'

::text), "*SELECT* 1".array_dominio

     ->  Append  (cost=905.76..24738.50 rows=3678 width=512)

   ->  Subquery Scan "*SELECT* 1"  (cost=905.76..12423.64 
rows=3652 width=512)

 ->  Hash Anti Join  (cost=905.76..12387.12 
rows=3652 width=512)

   Hash Cond: 
(((public.webtraffic_bind.ip)::text = (wb.ip)::text) AND 
((public.webtraffic_bind.log_id)::text = (wb.log_id)::text) AND 
(public.webtraffic_bind.url_dominio = wb.url_dominio) AND ((public.webt

raffic_bind.porta)::text = (wb.porta)::text))

   Join Filter: ((wb.data_log >= 
public.webtraffic_bind.data_log) AND (wb.data_log < 
(public.webtraffic_bind.data_log + '00:02:00'::interval)))

   ->  Bitmap Heap Scan on webtraffic_bind  
(cost=269.23..11638.68 rows=3678 width=512)

 Recheck Cond: ((type_log)::text = 
'queries'::text)

 ->  Bitmap Index Scan on 
type_log_wbidx  (cost=0.00..268.31 rows=3678 width=0)

   Index Cond: ((type_log)::text = 
'queries'::text)

   ->  Hash  (cost=636.17..636.17 rows=18 
width=274)

 ->  Bitmap Heap Scan on 
webtraffic_bind wb  (cost=564.94..636.17 rows=18 width=274)

   Recheck Cond: (((type_log)::text 
= 'security'::text) AND ((action_bind)::text = 'approved'::text))

   ->  BitmapAnd  
(cost=564.94..564.94 rows=18 width=0)

 ->  Bitmap Index Scan on 
type_log_wbidx  (cost=0.00..268.31 rows=3678 width=0)

   Index Cond: 
((type_log)::text = 'security'::text)

 ->  Bitmap Index Scan on 
action_bind_wbidx  (cost=0.00..296.37 rows=3678 width=0)

   Index Cond: 
((action_bind)::text = 'approved'::text)

   ->  Subquery Scan "*SELECT* 2"  (cost=905.76..12314.86 
rows=26 width=512)

 ->  Hash Semi Join  (cost=905.76..12314.60 rows=26 
width=512)

   Hash Cond: 
(((public.webtraffic_bind.ip)::text = (wb.ip)::text) AND 
((public.webtraffic_bind.log_id)::text = (wb.log_id)::text) AND 
(public.webtraffic_bind.url_dominio = wb.url_dominio) AND ((public.webt

raffic_bind.porta)::text = (wb.porta)::text))

   Join Filter: ((wb.data_log >= 
public.webtraffic_bind.data_log) AND (wb.data_log < 
(public.webtraffic_bind.data_log + '00:02:00'::interval)))

   ->  Bitmap Heap Scan on webtraffic_bind  
(cost=269.23..11638.68 rows=3678 width=512)

 Recheck Cond: ((type_log)::text = 
'queries'::text)

 ->  Bitmap Index Scan on 
type_log_wbidx  (cost=0.00..268.31 rows=3678 width=0)

   Index Cond: ((type_log)::text = 
'queries'::text)

   ->  Hash  (cost=636.17..636.17 rows=18 
width=274)

 ->  Bitmap Heap Scan on 
webtraffic_bind wb  (cost=564.94..636.17 rows=18 width=274)

   Recheck Cond: (((type_log)::text 
= 'security'::text) AND ((action_bind)::text = 'approved'::text))

   ->  BitmapAnd  
(cost=564.94..564.94 rows=18 width=0)

 ->  Bitmap Index Scan on 
type_log_wbidx  (cost=0.00..268.31 rows=3678 width=0)

   Index Cond: 
((type_log)::text = 'security'::text)

 ->  Bitmap Index Scan on 
action_bind_wbidx  (cost=0.00..296.37 rows=3678 width=0)

   Index Cond: 
((action_bind)::text = 'approved'::text)

   SubPlan 2

 ->  Index Scan using stpestensioni_domini_idx on stpestensioni_domini  
(cost=0.01..12.18 rows=2 width=0)

   Index Cond: ((estensione)::text = 

Re: [GENERAL] ZSON, PostgreSQL extension for compressing JSONB

2016-10-04 Thread Dorian Hoxha
@Aleksander
~everyone wants lower data storage and wants some kind of compression.
Can this be made to automatically retrain when analyzing (makes sense?)?
And create a new dictionary only if it changes compared to the last one.

On Tue, Oct 4, 2016 at 5:34 PM, Aleksander Alekseev <
a.aleks...@postgrespro.ru> wrote:

> Hello, Simon.
>
> Thanks for your interest in this project!
>
> > Will you be submitting this to core?
>
> I could align ZSON to the PostgreSQL code style; I only need to run pgindent
> and write a few comments. Do you think the community would be interested in
> adding it to /contrib/? Or does ZSON solve too specific a problem for that?
>
> --
> Best regards,
> Aleksander Alekseev
>


Re: [GENERAL] ZSON, PostgreSQL extension for compressing JSONB

2016-10-04 Thread Aleksander Alekseev
Hello, Simon.

Thanks for your interest in this project!

> Will you be submitting this to core?

I could align ZSON to the PostgreSQL code style; I only need to run pgindent
and write a few comments. Do you think the community would be interested in
adding it to /contrib/? Or does ZSON solve too specific a problem for that?

-- 
Best regards,
Aleksander Alekseev




Re: [GENERAL] Graphical entity relation model

2016-10-04 Thread Martijn Tonies (Upscene Productions)
Hello Johannes,
A new kid on the block – Database Workbench, a Windows application that works 
fine on Linux and Mac via Wine.
http://www.upscene.com/database_workbench/
With regards,

Martijn Tonies
Upscene Productions
http://www.upscene.com

Database Workbench - developer tool for Oracle, MS SQL Server, PostgreSQL,
SQL Anywhere, MySQL, InterBase, NexusDB and Firebird.
On 1 October 2016 at 04:45,  wrote:

  > Does anybody know software for generating graphical entity-relationship
  > models from existing PostgreSQL databases?
  >
  > Best regards Johannes

  I use dbWrench (dbwrench.com). It's not free, but they do have a free trial 
version so you can see if you like it before you buy it. It's also not 
expensive compared to many of these sorts of tools. It also runs on all 3 major 
platforms (it's written in Java) and the developer is responsive if you find a 
problem.

  If money is no object, you can look at Power Designer (by Sybase). I used to 
use it years ago and liked it even if it was MS-Windows only, but the price has 
gone up so much only companies can really afford it now, IMO.

  HTH,
  Kevin






Re: [GENERAL] Graphical entity relation model

2016-10-04 Thread Robert Stone
Hello,

If you have JDK 1.8 or above installed, go to www.executequery.org and
download the latest jar file. Download the JDBC driver from Postgres and
set it up. It's open source.
It has an ERD generator but obviously depends on having all your foreign
keys declared in order to link tables, etc. After generating the ERD you
then have to re-organise it by dragging and dropping so that when you
print, the diagram is readable.
I use it all the time for testing, etc.

HTH,
Rob

On 1 October 2016 at 04:45,  wrote:

> > Does anybody know software for generating graphical entity-relationship
> > models from existing PostgreSQL databases?
> >
> > Best regards Johannes
>
> I use dbWrench (dbwrench.com). It's not free, but they do have a free
> trial version so you can see if you like it before you buy it. It's also
> not expensive compared to many of these sorts of tools. It also runs on all
> 3 major platforms (it's written in Java) and the developer is responsive if
> you find a problem.
>
> If money is no object, you can look at Power Designer (by Sybase). I used
> to use it years ago and liked it even if it was MS-Windows only, but the
> price has gone up so much only companies can really afford it now, IMO.
>
> HTH,
> Kevin
>
>
>


Re: [GENERAL] ZSON, PostgreSQL extension for compressing JSONB

2016-10-04 Thread Oleg Bartunov
On Tue, Oct 4, 2016 at 4:20 PM, Simon Riggs  wrote:

> On 30 September 2016 at 16:58, Aleksander Alekseev
>  wrote:
>
> > I've just uploaded ZSON extension on GitHub:
> >
> > https://github.com/afiskon/zson
> >
> > ZSON learns on your common JSONB documents and creates a dictionary
> > with strings that are frequently used in all documents. After that you
> > can use ZSON type to compress documents using this dictionary. When
> > the document schema evolves and compression becomes inefficient, you can
> > re-learn on new documents. New documents will be compressed with the new
> > dictionary; old documents will be decompressed using the old dictionary.
> >
> > In some cases ZSON can save half of your disk space and give you about
> > 10% more TPS. Everything depends on your data and workload though.
> > Memory is saved as well. For more details see README.md.
> >
> > Please don't hesitate to ask any questions. Any feedback and pull
> > requests are welcome too!
>
> Very good. Oleg had mentioned that dictionary compression was being
> considered.
>
> It would be useful to be able to define compression dictionaries for
> many use cases.
>


Dictionary compression is a different project, we'll publish it after
testing.


>
> Will you be submitting this to core?
>
> --
> Simon Riggs    http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>
>


Re: [GENERAL] ZSON, PostgreSQL extension for compressing JSONB

2016-10-04 Thread Simon Riggs
On 30 September 2016 at 16:58, Aleksander Alekseev
 wrote:

> I've just uploaded ZSON extension on GitHub:
>
> https://github.com/afiskon/zson
>
> ZSON learns on your common JSONB documents and creates a dictionary
> with strings that are frequently used in all documents. After that you
> can use ZSON type to compress documents using this dictionary. When
> the document schema evolves and compression becomes inefficient, you can
> re-learn on new documents. New documents will be compressed with the new
> dictionary; old documents will be decompressed using the old dictionary.
>
> In some cases ZSON can save half of your disk space and give you about
> 10% more TPS. Everything depends on your data and workload though.
> Memory is saved as well. For more details see README.md.
>
> Please don't hesitate to ask any questions. Any feedback and pull
> requests are welcome too!

Very good. Oleg had mentioned that dictionary compression was being considered.

It would be useful to be able to define compression dictionaries for
many use cases.

Will you be submitting this to core?
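For readers unfamiliar with the idea, here is a toy sketch of dictionary compression over string tokens. It is plain Python, not ZSON's actual format; learn(), compress() and decompress() are illustrative names:

```python
from collections import Counter

def learn(docs):
    """Build a code table from strings that recur across documents."""
    counts = Counter(s for doc in docs for s in doc)
    return {s: code for code, (s, n) in enumerate(counts.most_common())
            if n > 1}

def compress(doc, table):
    # Replace known strings with small integer codes; pass others through.
    return [table.get(s, s) for s in doc]

def decompress(encoded, table):
    rev = {code: s for s, code in table.items()}
    return [rev[t] if isinstance(t, int) else t for t in encoded]

# A "document" is reduced to a list of string tokens for simplicity.
docs = [["name", "alice", "status", "active"],
        ["name", "bob", "status", "active"]]
table = learn(docs)

doc = ["name", "bob", "status", "active"]
encoded = compress(doc, table)
assert decompress(encoded, table) == doc   # lossless round trip
assert isinstance(encoded[0], int)         # "name" became a small code
assert encoded[1] == "bob"                 # rare strings stay verbatim
```

Re-learning on new documents then simply produces a new table; as described above, old documents must keep a reference to the dictionary version they were compressed with.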

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [GENERAL] Installing pgAdmin 4 in Oracle Enterprise Linux 7

2016-10-04 Thread Edson Richter
Hi!


(sorry for the top post, but I'm recapping the current status to avoid a
deep search through the details)


In short:

OS = Oracle Enterprise Linux 7.2 with Oracle Unbreakable Kernel (a.k.a. RHEL
7.2)

Environment = Linux backup1.simfrete.com 3.8.13-118.11.2.el7uek.x86_64 #2 SMP 
Wed Sep 21 11:23:36 PDT 2016 x86_64 x86_64 x86_64 GNU/Linux

PostgreSQL = Using PostgreSQL 9.6 final, installed from official yum repository.

yum.repos with "pgdg96-updates-testing" enabled.

Manually installed all "flask..." packages.


Result of installation process:


[root@backup1 pgsql96]# LANG=C yum install pgadmin4-web
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package pgadmin4-web.noarch 0:1.0-1.rhel7 will be installed
--> Processing Dependency: python-psycopg2-debug >= 2.6.2 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-passlib >= 1.6.2 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-html5lib >= 1.0b3 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-crypto >= 2.6.1 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-blinker >= 1.3 for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-click for package: 
pgadmin4-web-1.0-1.rhel7.noarch
--> Running transaction check
---> Package python-blinker.noarch 0:1.4-1.rhel7 will be installed
---> Package python-click.noarch 0:6.3-1.el7 will be installed
---> Package python-html5lib.noarch 1:0.999-5.el7 will be installed
---> Package python-passlib.noarch 0:1.6.2-2.el7 will be installed
---> Package python-psycopg2-debug.x86_64 0:2.6.2-2.rhel7 will be installed
--> Processing Dependency: libpython2.7_d.so.1.0()(64bit) for package: 
python-psycopg2-debug-2.6.2-2.rhel7.x86_64
---> Package python2-crypto.x86_64 0:2.6.1-9.el7 will be installed
--> Processing Dependency: libtomcrypt.so.0()(64bit) for package: 
python2-crypto-2.6.1-9.el7.x86_64
--> Running transaction check
---> Package libtomcrypt.x86_64 0:1.17-23.el7 will be installed
--> Processing Dependency: libtommath >= 0.42.0 for package: 
libtomcrypt-1.17-23.el7.x86_64
--> Processing Dependency: libtommath.so.0()(64bit) for package: 
libtomcrypt-1.17-23.el7.x86_64
---> Package python-psycopg2-debug.x86_64 0:2.6.2-2.rhel7 will be installed
--> Processing Dependency: libpython2.7_d.so.1.0()(64bit) for package: 
python-psycopg2-debug-2.6.2-2.rhel7.x86_64
--> Running transaction check
---> Package libtommath.x86_64 0:0.42.0-4.el7 will be installed
---> Package python-psycopg2-debug.x86_64 0:2.6.2-2.rhel7 will be installed
--> Processing Dependency: libpython2.7_d.so.1.0()(64bit) for package: 
python-psycopg2-debug-2.6.2-2.rhel7.x86_64
--> Finished Dependency Resolution
Error: Package: python-psycopg2-debug-2.6.2-2.rhel7.x86_64 (pgdg96)
   Requires: libpython2.7_d.so.1.0()(64bit)
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
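When a dependency fails on a bare soname like this, one way to check whether any enabled repository can satisfy it is to ask yum directly (a diagnostic sketch only, not a fix for this particular packaging issue; the provides string is copied from the error above):

```console
# Ask yum which package, if any, provides the missing library
yum provides 'libpython2.7_d.so.1.0()(64bit)'

# Or, with yum-utils installed, query the repositories directly
repoquery --whatprovides 'libpython2.7_d.so.1.0()(64bit)'
```

If neither command returns a package, the dependency cannot be satisfied from the enabled repositories and the packaging itself needs fixing.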


If I can help on anything, please let me know.



Regards,


Edson



From: pgsql-general-ow...@postgresql.org  on
behalf of Edson Richter 
Sent: Monday, 3 October 2016 13:24
To: Devrim Gündüz; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Installing pgAdmin 4 in Oracle Enterprise Linux 7

On 03/10/2016 10:12, Devrim Gündüz wrote:
> Hi Edson,
>
> On Mon, 2016-10-03 at 00:18 +, Edson Richter wrote:
>> https://download.postgresql.org/pub/repos/yum/testing/9.6/redhat/rhel-7Server
>> -x86_64/repodata/repomd.xml:
>> [Errno 14] HTTPS Error 404 - Not Found
>> Trying other mirror.
> Fixed this error, sorry for that.
Thanks! OK, this one is working - but I still get the following error:


[root@backup1 yum.repos.d]# LANG=C yum install pgadmin4-web
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package pgadmin4-web.noarch 0:1.0-1.rhel7 will be installed
--> Processing Dependency: python-psycopg2-debug >= 2.6.2 for package:
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-passlib >= 1.6.2 for package:
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-html5lib >= 1.0b3 for package:
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-crypto >= 2.6.1 for package:
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-blinker >= 1.3 for package:
pgadmin4-web-1.0-1.rhel7.noarch
--> Processing Dependency: python-click for package:
pgadmin4-web-1.0-1.rhel7.noarch
--> Running transaction check
---> Package python-blinker.noarch 0:1.4-1.rhel7 will be installed
---> Package python-click.noarch 0:6.3-1.el7 will be installed
---> Package python-html5lib.noarch 1:0.999-5.el7 will be installed
---> Package python-passlib.noarch 0:1.6.2-2.el7 will be installed
---> Package python-psycopg2-debug.x86_64 0:2.6.2-2.rhel7 will be installed
--> Processing 

Re: [GENERAL] Installing pgAdmin 4 in Oracle Enterprise Linux 7

2016-10-04 Thread Edson Richter
Please, ignore this double post. It came from wrong e-mail address.

The issue is already being discussed in another discussion thread.


Re: [GENERAL] Restricted access on DataBases

2016-10-04 Thread Durumdara
Oh, WTF (Word Trade Fenster)! :-o

PGAdmin did that!

There is a subdialog for Default Privileges, with Tables, and with ONLY ONE
ROLE.  This role is used after "TO", but no role is ever used after "FOR"...

Hm

Thank you!


2016-10-04 12:57 GMT+02:00 Albe Laurenz :

> Durumdara wrote:
> [...]
> > --- login with postgres:
> [...]
> >   ALTER DEFAULT PRIVILEGES
> >   GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE, REFERENCES,
> TRIGGER ON TABLES
> >   TO u_tr_db;
> >
> >  login with u_tr_main:
> >
> >   create table t_canyouseeme_1 (k int);
> >
> >  login with u_tr_db:
> >
> >   select * from t_canyouseeme_1;
> >
> >   ERROR: permission denied for relation t_canyouseeme_1
> >   SQL state: 42501
> >
> >  As you see before, u_tr_db got all default privileges on future tables,
> so I don't understand why it
> doesn't get access to "t_canyouseeme_1".
>
> You should have written
>
>ALTER DEFAULT PRIVILEGES FOR ROLE u_tr_main ...
>
> The way you did it, you effectively wrote "FOR ROLE postgres" because
> you were connected as that user.
>
> That means that all future tables created *by postgres* will have
> privileges for user "u_tr_db" added.  But you want tables created
> *by u_tr_main* to get the privileges.
>
> Yours,
> Laurenz Albe
>


Re: [GENERAL] Restricted access on DataBases

2016-10-04 Thread Charles Clavadetscher
Hello

> > Also try this:
> > ALTER DEFAULT PRIVILEGES FOR ex_mainuser GRANT INSERT, SELECT, UPDATE, 
> > DELETE, TRUNCATE, REFERENCES, TRIGGER
> >ON TABLES TO ex_dbuser;
> >
> > You execute the ALTER DEFAULT PRIVILEGES as su, so the grant applies to 
> > objects created by su and not
> >ex_mainuser, unless you specify it with FOR ex_mainuser.
> >
>
> So... I repeated the test.
> 
> --- login with postgres:
> 
>   CREATE DATABASE db_testrole
> WITH ENCODING='UTF8'
>  TEMPLATE=template0
> CONNECTION LIMIT=-1;
> 
>   CREATE ROLE u_tr_db LOGIN NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE 
> NOREPLICATION;
> 
> 
>   CREATE ROLE u_tr_main LOGIN
> NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
>   GRANT u_tr_db TO u_tr_main;
> 
> 
>   ALTER DATABASE db_testrole
> OWNER TO u_tr_db;
> 
>   REVOKE ALL ON DATABASE db_testrole FROM public;
>   GRANT CREATE, TEMPORARY ON DATABASE db_testrole TO public;
>   GRANT ALL ON DATABASE db_testrole TO u_tr_db;
> 
>   ALTER DEFAULT PRIVILEGES
>   GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER 
> ON TABLES
>   TO u_tr_db;

Here you are telling PostgreSQL to grant those privileges to u_tr_db on tables 
created by user postgres.

>  login with u_tr_main:
> 
>   create table t_canyouseeme_1 (k int);
> 
>  login with u_tr_db:
> 
>   select * from t_canyouseeme_1;
> 
>   ERROR: permission denied for relation t_canyouseeme_1
>   SQL state: 42501
>  As you see before, u_tr_db got all default privileges on future tables, so I
> don't understand why it doesn't get access to
> "t_canyouseeme_1".

This is not correct. You issued the ALTER DEFAULT PRIVILEGES statement as user 
postgres. So u_tr_db is granted privileges only on tables created by user 
postgres. Since you created the table as user u_tr_main the default privileges 
don't apply, because there are none defined.
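In other words, default privileges are recorded per granting role. A minimal sketch of the fix, using the roles from this thread (run as a superuser; this mirrors what case B below does):

```sql
-- Default privileges apply only to objects created by the role named
-- after FOR ROLE (or by the current role, if FOR ROLE is omitted).
ALTER DEFAULT PRIVILEGES FOR ROLE u_tr_main
  GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER
  ON TABLES TO u_tr_db;

-- From now on, a table created while logged in as u_tr_main,
-- e.g. CREATE TABLE t_example (k int);
-- is readable by u_tr_db without any further GRANT.
```

In psql, \ddp shows which (owner, schema) combinations have default privileges recorded, which makes the per-grantor behavior visible.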

> If I try to use these things they would work:
> 
>   A.)
> 
>    login with u_tr_main:
> 
>   set role u_tr_db;
> 
>   create table t_canyouseeme_2 (k int);
>
>    login with u_tr_db:
> 
>   select * from t_canyouseeme_2; -- OK!

Yes, because the owner of the table is u_tr_db. With SET ROLE, user u_tr_main is
impersonating user u_tr_db.

>   B.)
> 
>    login with su:
> 
> 
>   ALTER DEFAULT PRIVILEGES FOR role u_tr_main  GRANT INSERT, 
> SELECT, UPDATE, DELETE, TRUNCATE,
> REFERENCES, TRIGGER ON TABLES TO u_tr_db;

Here you are telling PostgreSQL to grant privileges on tables created by 
u_tr_main to u_tr_db.

>    login with u_tr_main:
> 
>   create table t_canyouseeme_3 (k int);
> 
>    login with u_tr_db:
> 
>   select * from t_canyouseeme_3; -- OK!
> 
> 
> A.) works because I can set role to u_tr_db, and then it is the creator and
> gets all rights.
> B.) I don't understand this statement... :-( :-( :-(
> 
> So the main questions.
> Why aren't the default privilege settings applied to the newly created table?
> See:
> 
>   ALTER DEFAULT PRIVILEGES
>   GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER 
> ON TABLES
>   TO u_tr_db;

They do, if the user creating the table is the same user that issued the
statement (in the case above, postgres).

> What is the meaning of this statement if it is not usable for objects
> created by other users?
> U_TR_DB is the owner, so it has all privileges on the tables it will create
> itself.
> So I supposed that "default privileges" were for future objects created by
> different users. But that does not work here.
> 
> I don't understand case B.
> U_TR_MAIN gives all privileges to U_TR_DB for all newly created tables?

Yes. You may also choose to restrict the privileges, instead of granting all of 
them.

> What is the difference between these two?
> 
>   1.  ALTER DEFAULT PRIVILEGES
>   GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER 
> ON TABLES
>   TO u_tr_db;
>   2.  ALTER DEFAULT PRIVILEGES FOR role u_tr_main  GRANT INSERT, SELECT, 
> UPDATE, DELETE, TRUNCATE, REFERENCES,
> TRIGGER ON TABLES TO u_tr_db;

In 1 the rule applies to tables created by the user that issued the ALTER
DEFAULT PRIVILEGES statement; specifically, the current_user at that moment is
the one recorded as the grantor.
In 2 you say explicitly that the rule applies to tables created by user
u_tr_main.

> Why does the second work and the first not?

They both work. In the first statement it works if you create tables as the 
user who was the current_user when you issued the alter default privileges 
statement. In the second it works if you create a table as user u_tr_main.

> ---
> 
> 
>   db_testrole-# \ddp
>   Default access privileges
>  Owner   | Schema | Type  |  Access privileges
>   ---++---+-
>postgres  

Re: [GENERAL] Restricted access on DataBases

2016-10-04 Thread Albe Laurenz
Durumdara wrote:
[...]
> --- login with postgres:
[...]
>   ALTER DEFAULT PRIVILEGES
>   GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER 
> ON TABLES
>   TO u_tr_db;
>
>  login with u_tr_main:
>
>   create table t_canyouseeme_1 (k int);
>
>  login with u_tr_db:
> 
>   select * from t_canyouseeme_1;
> 
>   ERROR: permission denied for relation t_canyouseeme_1
>   SQL state: 42501
> 
>  As you see before, u_tr_db got all default privileges on future tables, so I 
> don't understand why it
> doesn't get access to "t_canyouseeme_1".

You should have written

   ALTER DEFAULT PRIVILEGES FOR ROLE u_tr_main ...

The way you did it, you effectively wrote "FOR ROLE postgres" because
you were connected as that user.

That means that all future tables created *by postgres* will have
privileges for user "u_tr_db" added.  But you want tables created
*by u_tr_main* to get the privileges.

Yours,
Laurenz Albe



Re: [GENERAL] Problems with pg_upgrade after change of unix user running db.

2016-10-04 Thread Benedikt Grundmann
On 4 October 2016 at 09:28, Benedikt Grundmann 
wrote:

>
>
> On 4 October 2016 at 08:17, Benedikt Grundmann 
> wrote:
>
>>
>> On 3 October 2016 at 21:01, Tom Lane  wrote:
>>
>>> Benedikt Grundmann  writes:
>>> > proddb_testing=# SELECT
>>> > conname,convalidated,conislocal,coninhcount,connoinherit
>>> > proddb_testing-# FROM pg_constraint WHERE conrelid =
>>> > 'js_activity_20110101'::regclass;
>>> >conname   | convalidated |
>>> conislocal |
>>> > coninhcount | connoinherit
>>> > -+--
>>> ++-+--
>>> >  seqno_not_null  | f| t
>>>   |
>>> >   1 | f
>>>
>>> After some tracing through the code, I think it's the combination of all
>>> three of coninhcount>0, conislocal, and !convalidated that is producing
>>> the problem, and even then possibly only in binary-upgrade mode.  pg_dump
>>> is jumping through some hoops to try to restore that state, and evidently
>>> not getting it entirely right.
>>>
>>> Is there a reason you've left all these constraints in NOT VALID state?
>>> They're kinda useless that way.
>>
>>
>> Not at all.  I consider the ability to add constraints in not validated
>> form one of the 10 best things that happened in postgres in recent years.
>> They helped us a lot when slowly improving our schemas.
>>
>> Often just enforcing the constraint on new or modified rows is really all we
>> need, or most of what is needed.  Which is the only thing I
>> really care about in this case. And given the size of these tables and
>> their importance validating the constraints during production hours is
>> tricky.  Which means to validate them one of us has to sacrifice part of
>> their Saturday to do these and the marginal utility of having the
>> constraint validated was just never worth it.  But if that is what's
>> required to do the upgrade we will do so (the upgrade itself we will have
>> to do on a Saturday anyway).
>>
>>
>> Probably if you updated them to be valid
>>> (see ALTER TABLE ... VALIDATE CONSTRAINT), the upgrade would go through
>>> without difficulty.
>>>
>>> I'm running all the upgrade attempts on our testing instance (which is
>> nightly restored from the latest backup), it's not a problem to run the
>> validate command there so I'll do that now and find out if you are right.
>>
>
> It looks like you might be right but I don't know for sure yet.  And it
> will take me a long time to find out.  Rationale: After validating
> seqno_not_null I could proceed a bit further but failed at another
> constraint like that (valid_counterparty).  However that constraint
> actually is violated by lots of rows in the past and we had no plans (or
> easy way) to fix this.  The constraint was put in like this to prevent
> future rows.
>
> I guess I could drop the constraint, do the restore, and then put the
> constraint in again.  Sigh.  This is all relatively sad.
>
>

Yep I can confirm that after dropping a few more constraints and then doing
the checkpoint_segments vs min_wal_size/max_wal_size foo in postgresql.conf
 I got the database up.  So far everything seems otherwise fine.


>
>> I'll look into fixing this, but depending on how messy it turns out to be,
>>> it might be something we choose to fix only in HEAD.
>>>
>>> regards, tom lane
>>>
>>
>>
>


Re: [GENERAL] Restricted access on DataBases

2016-10-04 Thread Durumdara
Dear Charles!


Sorry for late answer. Now I got a little time to check this again...

2016-09-14 18:43 GMT+02:00 Charles Clavadetscher :

> Hello
>
>
>
>
>
> Also try this:
>
>
>
> ALTER DEFAULT PRIVILEGES FOR ex_mainuser GRANT INSERT, SELECT, UPDATE,
> DELETE, TRUNCATE, REFERENCES, TRIGGER ON TABLES TO ex_dbuser;
>
>
>
> You execute the ALTER DEFAULT PRIVILEGES as su, so the grant applies to
> objects created by su and not ex_mainuser, unless you specify it with FOR
> ex_mainuser.
>
>
>
> Besides, if the objects in the table will not be created by the owner, but
> by your admin, then I don’t very much see the point in giving ownership.
> That could be done anyway in the public schema, unless you changed that.
>


So... I repeated the test.

--- login with postgres:

  CREATE DATABASE db_testrole
    WITH ENCODING='UTF8'
    TEMPLATE=template0
    CONNECTION LIMIT=-1;

  CREATE ROLE u_tr_db LOGIN
    NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;

  CREATE ROLE u_tr_main LOGIN
    NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
  GRANT u_tr_db TO u_tr_main;

  ALTER DATABASE db_testrole
    OWNER TO u_tr_db;

  REVOKE ALL ON DATABASE db_testrole FROM public;
  GRANT CREATE, TEMPORARY ON DATABASE db_testrole TO public;
  GRANT ALL ON DATABASE db_testrole TO u_tr_db;

  ALTER DEFAULT PRIVILEGES
    GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER ON TABLES
    TO u_tr_db;

--- login with u_tr_main:

  create table t_canyouseeme_1 (k int);

--- login with u_tr_db:

  select * from t_canyouseeme_1;

  ERROR: permission denied for relation t_canyouseeme_1
  SQL state: 42501

As you see before, u_tr_db got all default privileges on future tables, so
I don't understand why it doesn't get access to "t_canyouseeme_1".

If I try these approaches, they work:

A.)
--- login with u_tr_main:

  set role u_tr_db;
  create table t_canyouseeme_2 (k int);

--- login with u_tr_db:

  select * from t_canyouseeme_2; -- OK!

B.)
--- login with su:

  ALTER DEFAULT PRIVILEGES FOR ROLE u_tr_main
    GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER ON TABLES
    TO u_tr_db;

--- login with u_tr_main:

  create table t_canyouseeme_3 (k int);

--- login with u_tr_db:

  select * from t_canyouseeme_3; -- OK!


A.) works because I can set role to u_tr_db, and then it is the creator and
gets all rights.
B.) I don't understand this statement... :-( :-( :-(

So, the main questions.
Why aren't the default privilege settings applied to the newly created table?
See:

  ALTER DEFAULT PRIVILEGES
    GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER ON TABLES
    TO u_tr_db;

What is the meaning of this statement if it is not usable for objects created
by other users?
U_TR_DB is the owner, so it has all privileges on the tables it will create
itself.
So I supposed that "default privileges" were for future objects created by
different users.
But that does not work here.

I don't understand case B.
U_TR_MAIN gives all privileges to U_TR_DB for all newly created tables?

What is the difference between these two?

  1. ALTER DEFAULT PRIVILEGES
       GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER ON TABLES
       TO u_tr_db;

  2. ALTER DEFAULT PRIVILEGES FOR ROLE u_tr_main
       GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER ON TABLES
       TO u_tr_db;

Why does the second work and the first not?


---

db_testrole-# \ddp
Default access privileges
   Owner   | Schema | Type  |  Access privileges
---++---+-
 postgres  || table | postgres=arwdDxt/postgres  +
   ||   | u_tr_db=arwdDxt/postgres
 u_tr_main || table | u_tr_db=arwdDxt/u_tr_main  +
   ||   | u_tr_main=arwdDxt/u_tr_main
(2 rows)

db_testrole-# \d
  List of relations
 Schema |  Name   | Type  |   Owner
+-+---+---
 public | t_canyouseeme_1 | table | u_tr_main
 public | t_canyouseeme_2 | table | u_tr_db
 public | t_canyouseeme_3 | table | u_tr_main
(3 rows)


---


Thank you for your help!

Best wishes
   dd


Re: [GENERAL] Problems with pg_upgrade after change of unix user running db.

2016-10-04 Thread Benedikt Grundmann
On 4 October 2016 at 08:17, Benedikt Grundmann 
wrote:

>
> On 3 October 2016 at 21:01, Tom Lane  wrote:
>
>> Benedikt Grundmann  writes:
>> > proddb_testing=# SELECT
>> > conname,convalidated,conislocal,coninhcount,connoinherit
>> > proddb_testing-# FROM pg_constraint WHERE conrelid =
>> > 'js_activity_20110101'::regclass;
>> >conname   | convalidated |
>> conislocal |
>> > coninhcount | connoinherit
>> > -+--
>> ++-+--
>> >  seqno_not_null  | f| t
>>   |
>> >   1 | f
>>
>> After some tracing through the code, I think it's the combination of all
>> three of coninhcount>0, conislocal, and !convalidated that is producing
>> the problem, and even then possibly only in binary-upgrade mode.  pg_dump
>> is jumping through some hoops to try to restore that state, and evidently
>> not getting it entirely right.
>>
>> Is there a reason you've left all these constraints in NOT VALID state?
>> They're kinda useless that way.
>
>
> Not at all.  I consider the ability to add constraints in not validated
> form one of the 10 best things that happened in postgres in recent years.
> They helped us a lot when slowly improving our schemas.
>
> Often just enforcing the constraint on new or modified rows is really all we
> need, or most of what is needed.  Which is the only thing I
> really care about in this case. And given the size of these tables and
> their importance validating the constraints during production hours is
> tricky.  Which means to validate them one of us has to sacrifice part of
> their Saturday to do these and the marginal utility of having the
> constraint validated was just never worth it.  But if that is what's
> required to do the upgrade we will do so (the upgrade itself we will have
> to do on a Saturday anyway).
>
>
> Probably if you updated them to be valid
>> (see ALTER TABLE ... VALIDATE CONSTRAINT), the upgrade would go through
>> without difficulty.
>>
>> I'm running all the upgrade attempts on our testing instance (which is
> nightly restored from the latest backup), it's not a problem to run the
> validate command there so I'll do that now and find out if you are right.
>

It looks like you might be right but I don't know for sure yet.  And it
will take me a long time to find out.  Rationale: After validating
seqno_not_null I could proceed a bit further but failed at another
constraint like that (valid_counterparty).  However that constraint
actually is violated by lots of rows in the past and we had no plans (or
easy way) to fix this.  The constraint was put in like this to prevent
future rows.

I guess I could drop the constraint, do the restore, and then put the
constraint in again.  Sigh.  This is all relatively sad.
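The drop-and-recreate workaround can be sketched like this (the table name is reused from earlier in the thread only for illustration, the failing constraint may live on a different table, and the CHECK expression is hypothetical since the thread does not show it):

```sql
-- Before the upgrade: drop the constraint that pg_upgrade chokes on.
ALTER TABLE js_activity_20110101 DROP CONSTRAINT valid_counterparty;

-- After the upgrade: re-add it as NOT VALID, so existing rows are not
-- checked, but new and modified rows must satisfy it.
ALTER TABLE js_activity_20110101
  ADD CONSTRAINT valid_counterparty
  CHECK (counterparty IS NOT NULL) NOT VALID;

-- Where the data actually conforms, validation can be done later,
-- outside production hours:
-- ALTER TABLE js_activity_20110101 VALIDATE CONSTRAINT valid_counterparty;
```

VALIDATE CONSTRAINT takes only a SHARE UPDATE EXCLUSIVE lock while it scans, which is why deferring it to a quiet period is usually acceptable.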


>
> I'll look into fixing this, but depending on how messy it turns out to be,
>> it might be something we choose to fix only in HEAD.
>>
>> regards, tom lane
>>
>
>


Re: [GENERAL] Problems with pg_upgrade after change of unix user running db.

2016-10-04 Thread Benedikt Grundmann
On 3 October 2016 at 21:01, Tom Lane  wrote:

> Benedikt Grundmann  writes:
> > proddb_testing=# SELECT
> > conname,convalidated,conislocal,coninhcount,connoinherit
> > proddb_testing-# FROM pg_constraint WHERE conrelid =
> > 'js_activity_20110101'::regclass;
> >conname   | convalidated | conislocal
> |
> > coninhcount | connoinherit
> > -+--
> ++-+--
> >  seqno_not_null  | f| t
> |
> >   1 | f
>
> After some tracing through the code, I think it's the combination of all
> three of coninhcount>0, conislocal, and !convalidated that is producing
> the problem, and even then possibly only in binary-upgrade mode.  pg_dump
> is jumping through some hoops to try to restore that state, and evidently
> not getting it entirely right.
>
> Is there a reason you've left all these constraints in NOT VALID state?
> They're kinda useless that way.


Not at all.  I consider the ability to add constraints in not validated
form one of the 10 best things that happened in postgres in recent years.
They helped us a lot when slowly improving our schemas.

Often just enforcing the constraint on new or modified rows is really all we
need, or most of what is needed.  Which is the only thing I
really care about in this case. And given the size of these tables and
their importance validating the constraints during production hours is
tricky.  Which means to validate them one of us has to sacrifice part of
their Saturday to do these and the marginal utility of having the
constraint validated was just never worth it.  But if that is what's
required to do the upgrade we will do so (the upgrade itself we will have
to do on a Saturday anyway).


Probably if you updated them to be valid
> (see ALTER TABLE ... VALIDATE CONSTRAINT), the upgrade would go through
> without difficulty.
>
> I'm running all the upgrade attempts on our testing instance (which is
nightly restored from the latest backup), it's not a problem to run the
validate command there so I'll do that now and find out if you are right.

I'll look into fixing this, but depending on how messy it turns out to be,
> it might be something we choose to fix only in HEAD.
>
> regards, tom lane
>


[GENERAL] postgresql service is working but showing status failed

2016-10-04 Thread devangn
Everything looks and works as it should; I am just not getting the postgresql
service status as active (running) - instead I am getting active (exited).
What am I missing here?
# netstat -pan | grep 5432
tcp        0      0 0.0.0.0:5432    0.0.0.0:*    LISTEN    589/postgres

# /etc/init.d/postgresql status
● postgresql.service - PostgreSQL RDBMS
   Loaded: loaded (/lib/systemd/system/postgresql.service; enabled)
   Active: active (exited) since Thu 2013-11-14 22:14:42 WAST; 2 years 10
months ago
 Main PID: 606 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/postgresql.service

Nov 14 22:14:42 db01 systemd[1]: Started PostgreSQL RDBMS.

--> below is the content of file /lib/systemd/system/postgresql.service
# systemd service for managing all PostgreSQL clusters on the system. This
# service is actually a systemd target, but we are using a service since
# targets cannot be reloaded.

[Unit]
Description=PostgreSQL RDBMS

[Service]
Type=oneshot
#ExecStartPre=/usr/bin
ExecStart= /bin/true
ExecReload=/bin/true
ExecStop=  /bin/true
RemainAfterExit=on

[Install]
WantedBy=multi-user.target
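Given that unit file, active (exited) is the expected status: the unit is a Type=oneshot wrapper with RemainAfterExit=on whose ExecStart is /bin/true, so systemd marks it active after the no-op exits while the real postgres processes run separately. A sketch of how to check the actual server instead (the cluster name 9.4-main is an assumption for a Debian/Ubuntu-style layout; substitute your own version and cluster):

```console
# The wrapper unit reports "active (exited)" by design
systemctl status postgresql.service

# The per-cluster unit shows the real running server
systemctl status postgresql@9.4-main.service

# Or list clusters and their status directly (Debian/Ubuntu packaging)
pg_lsclusters
```

The netstat output above, showing postgres listening on 5432, already confirms the server itself is up.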




--
View this message in context: 
http://postgresql.nabble.com/postgresql-service-is-working-but-showing-status-failed-tp5924295.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

