Application Dependency/Limitations of Postgres Version Upgrade

2018-03-09 Thread amitabh kumar
Hi,

I would like to know about application dependency of PostgreSQL version
upgrade.

We have multiple servers running PG versions 8.4, 9.2, 9.3, 9.4 and 9.5
that we want to upgrade to 9.6. We want to be sure that all applications
will run smoothly after the upgrade.

Are there any application dependencies or limitations in 9.6 after
upgrading from these versions? The OS platforms we are using are Linux 5,
6.7, 6.8 and Windows 10, 12.

I am happy to provide more information if needed.

Regards,

Amitabh
PostgreSQL DBA


Re: pgpass hostname and IP address

2018-03-09 Thread Adrian Klaver

On 03/09/2018 08:19 PM, Ron Johnson wrote:

(8.4 and 9.2, but soon to 9.6)

If we sometimes access a db server by IP address, and sometimes by 
hostname, must we have two entries for each server+user (one with the 
hostname and the other with IP address), or is there a way to put them 
both on the same line?




Not with .pgpass, though there is the service file method:

https://www.postgresql.org/docs/9.6/static/libpq-pgservice.html

https://www.postgresql.org/docs/10/static/libpq-connect.html#LIBPQ-CONNSTRING

service

Service name to use for additional parameters. It specifies a 
service name in pg_service.conf that holds additional connection 
parameters. This allows applications to specify only a service name so 
connection parameters can be centrally maintained. See Section 33.16.




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: pgpass hostname and IP address

2018-03-09 Thread Ron Johnson

On 03/09/2018 10:26 PM, David G. Johnston wrote:
On Friday, March 9, 2018, Ron Johnson wrote:


(8.4 and 9.2, but soon to 9.6)

If we sometimes access a db server by IP address, and sometimes by
hostname, must we have two entries for each server+user (one with the
hostname and the other with IP address), or is there a way to put them
both on the same line?


https://www.postgresql.org/docs/10/static/libpq-pgpass.html

So unless you can make a wildcard work you are stuck with two separate lines.


Yeah, I read that (well, the ones for my ancient versions), but hoped there 
might be something I missed, or some obscure and clever trick.


--
Angular momentum makes the world go 'round.


Re: pgpass hostname and IP address

2018-03-09 Thread David G. Johnston
On Friday, March 9, 2018, Ron Johnson  wrote:

> (8.4 and 9.2, but soon to 9.6)
>
> If we sometimes access a db server by IP address, and sometimes by
> hostname, must we have two entries for each server+user (one with the
> hostname and the other with IP address), or is there a way to put them both
> on the same line?
>

 https://www.postgresql.org/docs/10/static/libpq-pgpass.html

So unless you can make a wildcard work you are stuck with two separate
lines.

David J.
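Spelled out, the two-line form looks like this (hostnames and credentials are made up):

```
# ~/.pgpass -- format is hostname:port:database:username:password (chmod 0600)
db1.example.com:5432:ABC123:ABCREADONLY:s3cret
192.0.2.10:5432:ABC123:ABCREADONLY:s3cret
```

libpq compares the first field literally against the host connection parameter, with no DNS resolution, which is why a name and its IP address need separate lines; a `*` wildcard matches any host but then applies to every server sharing that port/database/user.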


pgpass hostname and IP address

2018-03-09 Thread Ron Johnson

(8.4 and 9.2, but soon to 9.6)

If we sometimes access a db server by IP address, and sometimes by hostname, 
must we have two entries for each server+user (one with the hostname and the 
other with IP address), or is there a way to put them both on the same line?


--
Angular momentum makes the world go 'round.



Need followup setup instructions to postgresql-10.3-1-linux-x64.run

2018-03-09 Thread Sherman Willden
Name: Sherman
Platform: HP Compaq 6710b
Operating System: Ubuntu 17.10 Fresh Install
postgresql: 10 Fresh Install. No other versions

What post-setup must I perform to get PostgreSQL 10 running?

I used postgresql-10.3-1-linux-x64.run, which ran great. It placed PostgreSQL
in /opt/PostgreSQL. I am logged in as sql-dev, the user I created when
installing Ubuntu.

Now I can't even log in as postgres. postgres has a system password and
sql-dev has a system password. I have to run all commands from
/opt/PostgreSQL/10/bin or such. postgres gets a "password authentication
failed for user ""postgres""" error; the same happens for "sql-dev".

I have postgresql-10-A4.pdf but I haven't found what I need there yet.

Thank you;

Sherman


Re: Ability to create tables

2018-03-09 Thread Ron Johnson

On 03/09/2018 05:46 PM, Tom Lane wrote:

Ron Johnson  writes:

Even though I revoked the  CREATE priv on role ABCREADONLY, it's still able
to create tables.  What can I do to prevent this?
$ psql -c 'revoke create on database "ABC123" from "ABCREADONLY";'

That revokes the ability to create new schemas within that database
(which I suspect the role did not have anyway).  What you need is
to remove its ability to create objects within the public schema
within that database.  By default, that ability is granted to PUBLIC,
so that "revoke create on schema public from "ABCREADONLY";" won't
help either.  What you have to do is "revoke create on schema public
from public", and then grant it back to just the roles that should have
it.

If you don't want the role creating temp tables either, you need to
revoke its TEMP right on the database (which *is* a database-level
privilege).  Again, this'll involve disallowing that to PUBLIC,
since that default grant is how it's getting the privilege.


Thanks.


--
Angular momentum makes the world go 'round.



Re: Ability to create tables

2018-03-09 Thread Tom Lane
Ron Johnson  writes:
> Even though I revoked the  CREATE priv on role ABCREADONLY, it's still able 
> to create tables.  What can I do to prevent this?

> $ psql -c 'revoke create on database "ABC123" from "ABCREADONLY";'

That revokes the ability to create new schemas within that database
(which I suspect the role did not have anyway).  What you need is
to remove its ability to create objects within the public schema
within that database.  By default, that ability is granted to PUBLIC,
so that "revoke create on schema public from "ABCREADONLY";" won't
help either.  What you have to do is "revoke create on schema public
from public", and then grant it back to just the roles that should have
it.

If you don't want the role creating temp tables either, you need to
revoke its TEMP right on the database (which *is* a database-level
privilege).  Again, this'll involve disallowing that to PUBLIC,
since that default grant is how it's getting the privilege.

regards, tom lane
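The steps above, written out as a sketch against the original post's names:

```
\c ABC123
-- CREATE on schema public is granted to PUBLIC by default; revoke it globally
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
-- then grant it back to just the roles that should have it
-- ("app_writer" is a hypothetical role name)
GRANT CREATE ON SCHEMA public TO app_writer;
-- optionally also block temp tables, which is a database-level privilege
REVOKE TEMP ON DATABASE "ABC123" FROM PUBLIC;
```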



Ability to create tables

2018-03-09 Thread Ron Johnson

Hi,

Archaic v9.2.7

Even though I revoked the  CREATE priv on role ABCREADONLY, it's still able 
to create tables.  What can I do to prevent this?


\c postgres
CREATE ROLE "ABCREADONLY" LOGIN INHERIT PASSWORD 'Flying.Fox';
GRANT CONNECT ON DATABASE "ABC123" TO "ABCREADONLY";

\c ABC123
GRANT USAGE ON SCHEMA public, ABC, tms TO "ABCREADONLY";
GRANT SELECT ON ALL TABLES IN SCHEMA public, ABC, tms TO "ABCREADONLY";
ALTER DEFAULT PRIVILEGES IN SCHEMA public, ABC, tms GRANT SELECT ON TABLES 
TO "ABCREADONLY";


$ psql -c 'revoke create on database "ABC123" from "ABCREADONLY";'
REVOKE
$ psql -d ABC123 -U ABCREADONLY -c "create table bar (f1 integer);"
CREATE TABLE
$ psql -d ABC123 -U ABCREADONLY -c "\d bar"
  Table "public.bar"
Column |  Type   | Modifiers
+-+---
f1 | integer |

Thanks

--
Angular momentum makes the world go 'round.



Re: how to get list of sequences owned by a user/role

2018-03-09 Thread Adrian Klaver

On 03/09/2018 02:22 PM, Adrian Klaver wrote:

On 03/09/2018 12:08 PM, Charlin Barak wrote:

Hi,
I can find sequences owned by a schema from 
information_schema.sequences and pg_class but how do I find sequences 
owned by a user/role? What pg table should I be looking at?


https://www.postgresql.org/docs/10/static/view-pg-sequences.html


This is new to PG 10.

For older versions:

https://www.postgresql.org/docs/9.6/static/catalog-pg-class.html
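On those older versions, a query along these lines (an untested sketch) lists the sequences owned by a role:

```
-- relkind 'S' marks sequences; relowner links to the owning role
SELECT n.nspname AS schema, c.relname AS sequence
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
JOIN pg_roles r ON r.oid = c.relowner
WHERE c.relkind = 'S'
  AND r.rolname = 'some_role';   -- hypothetical role name
```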





Thanks.

Charlin






--
Adrian Klaver
adrian.kla...@aklaver.com



Re: how to get list of sequences owned by a user/role

2018-03-09 Thread Adrian Klaver

On 03/09/2018 12:08 PM, Charlin Barak wrote:

Hi,
I can find sequences owned by a schema from information_schema.sequences 
and pg_class but how do I find sequences owned by a user/role? What pg 
table should I be looking at?


https://www.postgresql.org/docs/10/static/view-pg-sequences.html



Thanks.

Charlin



--
Adrian Klaver
adrian.kla...@aklaver.com



how to get list of sequences owned by a user/role

2018-03-09 Thread Charlin Barak
Hi,
I can find sequences owned by a schema from information_schema.sequences
and pg_class but how do I find sequences owned by a user/role? What pg
table should I be looking at?

Thanks.

Charlin


Re: ERROR could not access transaction/Could not open file pg_commit_ts

2018-03-09 Thread Jerry Sievers
Alvaro Herrera  writes:

> Jeremy Finzel wrote:
>
>> UPDATE: what is actually failing is a call to
>> pg_xact_commit_timestamp(xmin) on a given table under the view.  We still
>> think we must have some corruption though with pg_commit_ts.
>
> This is not a valid query, because the xmin may belong to a frozen
> tuple.  pg_commit_ts does not keep data forever; old files are removed.
> Anything older than datfrozenxid could return that error.

OK, but what does it mean if the error is raised on both streaming
standbys but not on the master?

We took a quick look to verify, comparing pg_commit_ts file names
between the master and standbys, and found no differences.

Thanks

-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 312.241.7800



Re: ERROR could not access transaction/Could not open file pg_commit_ts

2018-03-09 Thread Alvaro Herrera
Jeremy Finzel wrote:

> UPDATE: what is actually failing is a call to
> pg_xact_commit_timestamp(xmin) on a given table under the view.  We still
> think we must have some corruption though with pg_commit_ts.

This is not a valid query, because the xmin may belong to a frozen
tuple.  pg_commit_ts does not keep data forever; old files are removed.
Anything older than datfrozenxid could return that error.

-- 
Álvaro Herrera  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
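To illustrate the cutoff Álvaro describes (a hedged sketch, not a fix): the database's datfrozenxid is the horizon behind which commit-timestamp data may already have been removed, and it can be inspected directly:

```
-- Rows whose xmin predates this horizon can make
-- pg_xact_commit_timestamp(xmin) fail with "could not access status
-- of transaction ...".
SELECT datname, datfrozenxid, age(datfrozenxid) AS xids_behind_current
FROM pg_database
WHERE datname = current_database();
```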



Re: ERROR could not access transaction/Could not open file pg_commit_ts

2018-03-09 Thread Jeremy Finzel
On Fri, Mar 9, 2018 at 10:43 AM, Jeremy Finzel  wrote:

> Hello -
>
> Here is our cluster setup:
>
> cluster_a 9.5.11 Ubuntu 16.04.4 LTS
>  --> cluster_b (streamer) 9.5.11 Ubuntu 16.04.4 LTS
>  --> cluster_c (streamer) 9.5.11 Ubuntu 16.04.4 LTS
>
> Very recently, we started seeing these errors when running a query on a
> specific table on the streamer:
>
> 2018-03-09 08:28:16.280 CST,"uname","foo",18692,"0.0.0.0:0",5aa29292.4904,4,
> "SELECT",2018-03-09 07:56:34 CST,18/15992,0,ERROR,58P01,"could not access
> status of transaction 1035047007","Could not open file ""pg_commit_ts/9A45"":
> No such file or directory."
>
> A little history on the cluster:
>
>- The most recent change we made was a point release upgrade
>from 9.5.5 to 9.5.11 on the master, and 9.5.9 to 9.5.11 for the 2 streamers
>- It is a very high WAL traffic reporting system.
>- We actually have synchronous_commit set to off.  It's possible this
>could have bitten us and we are just now seeing issues, however there have
>been no crashes since the table in question was created.
>- We have run pg_repack on many tables on this cluster, but that also
>has not happened in over a month
>- We had a similar error of missing pg_commit_ts file over a year ago
>after an actual crash.  We had serious issues getting the cluster to start,
>and had to resort to recreating the missing pg_commit_ts with null
>bytes (IIRC, we had a snapshot of the system which still showed the file),
>which worked but left us questioning what really caused the issue.
>
>
> The table that is causing the error has been in production and used fine
> since 2/15/2018 when it was created. It is fed by pglogical replication (v.
> 2.1.1 on subscriber) from a system running 9.6.1 and pglogical v. 1.2.1.
> The point release upgrade from earlier 9.5 did take place *after* this.
>
> However, we *only* just started seeing errors in the past 12 hours.  The
> table was autovacuumed on master at 2018-03-08 18:18:15.532137-06, which
> was about 3 hours before the first user query errored, however, I saw that
> 2 hours after the autovac, there was another user query that worked
> successfully on the table.  Not sure if related?
>
> Any insight/ideas would be much appreciated!
>
> Thanks,
> Jeremy
>

UPDATE: what is actually failing is a call to
pg_xact_commit_timestamp(xmin) on a given table under the view.  We still
think we must have some corruption though with pg_commit_ts.


Re: Feature request: min/max for macaddr type

2018-03-09 Thread Tom Lane
Herwin Weststrate  writes:
> ... I tried
> adding some functions for macaddr_smaller/macaddr_larger to implement
> the min/max. The code for it was very trivial, but then I got completely
> lost in pg_proc.h/pg_aggregate.h. I fail to find the documentation for
> this files as well, is there any reference I could use to finish my work?

Well, you'd need two pg_proc entries for macaddr_smaller/macaddr_larger,
and two more for the min and max aggregates, and then two pg_aggregate
entries as well.  Looking at the existing entries related to some other
datatype (inet, perhaps) ought to let you move forward.

However, if you're thinking of submitting a patch ... it's too late
for any more feature submissions for v11, and I expect that by the
time v12 opens, the bootstrap data representation will have changed,
cf the discussion at
https://www.postgresql.org/message-id/flat/cajvsvgwo48jbbwxkjz_ybfygyw-m9ywxnpdxjbuosdc9ou_...@mail.gmail.com

So, while you'll still need the same six new catalog entries, what
you actually have to write is going to look completely different.
You might want to stick this project on the back burner for a month
or so, until that dust settles.

regards, tom lane



ERROR could not access transaction/Could not open file pg_commit_ts

2018-03-09 Thread Jeremy Finzel
Hello -

Here is our cluster setup:

cluster_a 9.5.11 Ubuntu 16.04.4 LTS
 --> cluster_b (streamer) 9.5.11 Ubuntu 16.04.4 LTS
 --> cluster_c (streamer) 9.5.11 Ubuntu 16.04.4 LTS

Very recently, we started seeing these errors when running a query on a
specific table on the streamer:

2018-03-09 08:28:16.280 CST,"uname","foo",18692,"0.0.0.0:0",5aa29292.4904,4,
"SELECT",2018-03-09 07:56:34 CST,18/15992,0,ERROR,58P01,"could not access
status of transaction 1035047007","Could not open file ""pg_commit_ts/9A45"":
No such file or directory."

A little history on the cluster:

   - The most recent change we made was a point release upgrade from 9.5.5
   to 9.5.11 on the master, and 9.5.9 to 9.5.11 for the 2 streamers
   - It is a very high WAL traffic reporting system.
   - We actually have synchronous_commit set to off.  It's possible this
   could have bitten us and we are just now seeing issues, however there have
   been no crashes since the table in question was created.
   - We have run pg_repack on many tables on this cluster, but that also
   has not happened in over a month
   - We had a similar error of missing pg_commit_ts file over a year ago
   after an actual crash.  We had serious issues getting the cluster to start,
   and had to resort to recreating the missing pg_commit_ts with null bytes
   (IIRC, we had a snapshot of the system which still showed the file), which
   worked but left us questioning what really caused the issue.


The table that is causing the error has been in production and used fine
since 2/15/2018 when it was created. It is fed by pglogical replication (v.
2.1.1 on subscriber) from a system running 9.6.1 and pglogical v. 1.2.1.
The point release upgrade from earlier 9.5 did take place *after* this.

However, we *only* just started seeing errors in the past 12 hours.  The
table was autovacuumed on master at 2018-03-08 18:18:15.532137-06, which
was about 3 hours before the first user query errored, however, I saw that
2 hours after the autovac, there was another user query that worked
successfully on the table.  Not sure if related?

Any insight/ideas would be much appreciated!

Thanks,
Jeremy


Re: pg/tcl performance related

2018-03-09 Thread Tom Lane
wolfg...@alle-noten.de writes:
> Now, for pl/tcl, I could use either

> create function check(text) returns text as $$
> set data {
>   # the data value
> }
> foreach d $data {
># the work
> }
> $$ language pltcl;
> or
> create function check(text) returns text as $$
> if ![info exists GD(data)] {
>set GD(data) {
> # the data value
>   }
> }
> foreach d $GD(data) {
>   # the work
> }
> $$ language pltcl; 

> Does this make any difference - i.e. how often will parsing the data happen?

Obviously, in the first case the "set data" will happen every time you
call the function.  The cost of that should be about the same as it
would be in standalone Tcl.  PL/Tcl builds a Tcl code object for the
function body on first call in a session, and thereafter reuses that
code object, so that you shouldn't have extra parsing overhead per
se ... but execution of the command costs something too.

regards, tom lane



pg/tcl performance related

2018-03-09 Thread wolfgang


Hi,

suppose I want to create a tcl function that basically walks through a long 
list and compares its
argument against all entries
In plain tcl I would probably write

set data {
  #  lots of stuff here
}
proc check val {
  global data
  foreach d $data {
#  this does the work
  }
}

Now, for pl/tcl, I could use either

create function check(text) returns text as $$
set data {
  # the data value
}
foreach d $data {
   # the work
}
$$ language pltcl;
or
create function check(text) returns text as $$
if ![info exists GD(data)] {
   set GD(data) {
# the data value
  }
}
foreach d $GD(data) {
  # the work
}
$$ language pltcl; 

Does this make any difference - i.e. how often will parsing the data happen?

Best regards
Wolfgang Hamann




Re: Postgresql upgrade to 9.5.12/10.3 changes pg_dump format for cloning schemas

2018-03-09 Thread Melvin Davidson
On Fri, Mar 9, 2018 at 10:00 AM, Aldrin Martoq Ahumada <
aldrin.mar...@gmail.com> wrote:

> Hi Andre,
>
> Yes, here is the issue: https://github.com/influitive/apartment/issues/532
>
> It happens if you configured apartment with use_sql=true, which means it
> clones the schema from pg_dump. My first attempt was to “fix” the script
> generated by pg_dump, but I feel it will be a mess. We solved our issue
> going back to use_sql=false, which is the default (creates the schema from
> db/schema.rb). But there is people that have other requirements, like
> functions, so the easier way for them is to keep use_sql and replace
> strings in the script.
>
>
> Cheers,
>
> On Mar 9, 2018, at 11:51 AM, Andre Oliveira Freitas <
> afrei...@callixbrasil.com> wrote:
>
> Hello Aldrin,
>
> I'm also using apartment with postgresql 9.6.6, and I don't see any
> issue with it. Are you using Apartment::Tenant.create?
>
> 2018-03-09 10:26 GMT-03:00 Aldrin Martoq Ahumada  >:
>
> Hi,
>
> For a multi tenant system, we are using the following command to blindly
> clone a schema into another:
> pg_dump -s -x -O -n #{default_tenant} #{dbname}
>
>
> This is done for us by a rails gem, which then feeds that script into the
> new created schema for the new tenant.
> https://github.com/influitive/apartment/blob/
> 80a21f2e1cdcbe5b0bd976f88c14332657804536/lib/apartment/
> adapters/postgresql_adapter.rb#L150
>
>
>
>
> When we upgraded from 9.5.11 to 9.5.12, the format of the dump changed (it
> now always appends the schema name), so this is broken. We could patch the
> SQL generated, but that’s not a generic/robust solution.
>
> # pg_dump postgresql 9.5.11
> SET search_path = public, pg_catalog;
> CREATE TABLE ahoy_events (
>id bigint NOT NULL,
>visit_id integer,
>user_id integer,
>name character varying,
>properties jsonb,
>"time" timestamp without time zone
> );
>
> # pg_dump postgresql 9.5.12
> CREATE TABLE public.ahoy_events (
>id bigint NOT NULL,
>visit_id integer,
>user_id integer,
>name character varying,
>properties jsonb,
>"time" timestamp without time zone
> );
>
>
>
> Thinking in the long term, how could be the best way to clone a schema into
> another?
>
>
>
>
> —
> Aldrin
>
>
>
>
> --
>
> André Luis O. Freitas
> System Architect
>
> Rua do Rócio, 220 - Cj. 72
> São Paulo - SP - 04552-000
> 55 11 4063 4222
>
> afrei...@callix.com.br
> www.callix.com.br
>
>
>

> ...how could be the best way to clone a schema into another?

The safest way is to use pg_dump -F p -n <schema> > schema.sql. Then edit
schema.sql and change all references to the old_schema name to the new_schema
name. Finally, use psql < schema.sql to create the new_schema.

That being said, a year ago I optimized a function originally written by
Emanuel '3manuek' called clone_schema, which is added to the public schema.
It clones all sequences, tables, indexes, rules, triggers, data (optional),
views & functions from any existing schema to a new schema.

SAMPLE CALL: SELECT clone_schema('public', 'new_schema', TRUE);

I've attached it for your convenience. Disclaimer: I do not accept any
responsibility for any unknown bugs in the function. Test first and use at
your own risk.


Melvin Davidson
Maj. Database & Exploration Specialist
Universe Exploration Command – UXC
Employment by invitation only!
-- Function: public.clone_schema(text, text, boolean)

-- DROP FUNCTION public.clone_schema(text, text, boolean);

CREATE OR REPLACE FUNCTION public.clone_schema(
source_schema text,
dest_schema text,
include_recs boolean)
  RETURNS void AS
$BODY$
--  Initial code by Emanuel '3manuek' 
--  Last revision 2017-04-17 by Melvin Davidson
--  Added SELECT REPLACE for schema views
--
--  This function will clone all sequences, tables, indexes, rules, triggers, 
--  data(optional), views & functions from any existing schema to a new schema
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema', TRUE);

DECLARE
  src_oid  oid;
  tbl_oid  oid;
  func_oid oid;
  con_oid  oid;
  v_path   text;
  v_func   text;
  v_args   text;
  v_conname    text;
  v_rule   text;
  v_trig   text;
  object   text;
  buffer   text;
  srctbl   text;
  default_ text;
  v_column text;
  qry  text;
  dest_qry text;
  v_def    text;
  v_stat   integer;
  seqval   bigint;
  sq_last_value    bigint;
  sq_max_value bigint;
  sq_start_value   bigint;
  sq_increment_by  bigint;
  sq_min_value bigint;
  sq_cache_value   bigint;
  sq_log_cnt   bigint;
  sq_is_called boolean;
  sq_is_cycled boolean;
  sq_cycled    char(10);

BEGIN

-- Check that source_schema exists
  SELECT oid INTO src_oid
FROM pg_namespace
   WHERE nspname = quote_ident(source_schema);
  IF NOT FOUND
THEN 
    RAISE NOTICE 'source schema % does not exist!', source_schema;

Re: Postgresql upgrade to 9.5.12/10.3 changes pg_dump format for cloning schemas

2018-03-09 Thread Aldrin Martoq Ahumada
Hi Andre,

Yes, here is the issue: https://github.com/influitive/apartment/issues/532 


It happens if you configured apartment with use_sql=true, which means it clones 
the schema from pg_dump. My first attempt was to “fix” the script generated by 
pg_dump, but I feel it will be a mess. We solved our issue going back to 
use_sql=false, which is the default (creates the schema from db/schema.rb). But 
there is people that have other requirements, like functions, so the easier way 
for them is to keep use_sql and replace strings in the script.


Cheers,

> On Mar 9, 2018, at 11:51 AM, Andre Oliveira Freitas 
>  wrote:
> 
> Hello Aldrin,
> 
> I'm also using apartment with postgresql 9.6.6, and I don't see any
> issue with it. Are you using Apartment::Tenant.create?
> 
> 2018-03-09 10:26 GMT-03:00 Aldrin Martoq Ahumada :
>> Hi,
>> 
>> For a multi tenant system, we are using the following command to blindly
>> clone a schema into another:
>> pg_dump -s -x -O -n #{default_tenant} #{dbname}
>> 
>> 
>> This is done for us by a rails gem, which then feeds that script into the
>> new created schema for the new tenant.
>> https://github.com/influitive/apartment/blob/80a21f2e1cdcbe5b0bd976f88c14332657804536/lib/apartment/adapters/postgresql_adapter.rb#L150
>> 
>> 
>> 
>> 
>> When we upgraded from 9.5.11 to 9.5.12, the format of the dump changed (it
>> now always appends the schema name), so this is broken. We could patch the
>> SQL generated, but that’s not a generic/robust solution.
>> 
>> # pg_dump postgresql 9.5.11
>> SET search_path = public, pg_catalog;
>> CREATE TABLE ahoy_events (
>>id bigint NOT NULL,
>>visit_id integer,
>>user_id integer,
>>name character varying,
>>properties jsonb,
>>"time" timestamp without time zone
>> );
>> 
>> # pg_dump postgresql 9.5.12
>> CREATE TABLE public.ahoy_events (
>>id bigint NOT NULL,
>>visit_id integer,
>>user_id integer,
>>name character varying,
>>properties jsonb,
>>"time" timestamp without time zone
>> );
>> 
>> 
>> 
>> Thinking in the long term, how could be the best way to clone a schema into
>> another?
>> 
>> 
>> 
>> 
>> —
>> Aldrin
>> 
> 
> 
> 
> -- 
> 
> André Luis O. Freitas
> System Architect
> 
> Rua do Rócio, 220 - Cj. 72
> São Paulo - SP - 04552-000
> 55 11 4063 4222
> 
> afrei...@callix.com.br
> www.callix.com.br



Re: Postgresql upgrade to 9.5.12/10.3 changes pg_dump format for cloning schemas

2018-03-09 Thread Andre Oliveira Freitas
Hello Aldrin,

I'm also using apartment with postgresql 9.6.6, and I don't see any
issue with it. Are you using Apartment::Tenant.create?

2018-03-09 10:26 GMT-03:00 Aldrin Martoq Ahumada :
> Hi,
>
> For a multi tenant system, we are using the following command to blindly
> clone a schema into another:
> pg_dump -s -x -O -n #{default_tenant} #{dbname}
>
>
> This is done for us by a rails gem, which then feeds that script into the
> new created schema for the new tenant.
> https://github.com/influitive/apartment/blob/80a21f2e1cdcbe5b0bd976f88c14332657804536/lib/apartment/adapters/postgresql_adapter.rb#L150
>
>
>
>
> When we upgraded from 9.5.11 to 9.5.12, the format of the dump changed (it
> now always appends the schema name), so this is broken. We could patch the
> SQL generated, but that’s not a generic/robust solution.
>
> # pg_dump postgresql 9.5.11
> SET search_path = public, pg_catalog;
> CREATE TABLE ahoy_events (
> id bigint NOT NULL,
> visit_id integer,
> user_id integer,
> name character varying,
> properties jsonb,
> "time" timestamp without time zone
> );
>
> # pg_dump postgresql 9.5.12
> CREATE TABLE public.ahoy_events (
> id bigint NOT NULL,
> visit_id integer,
> user_id integer,
> name character varying,
> properties jsonb,
> "time" timestamp without time zone
> );
>
>
>
> Thinking in the long term, how could be the best way to clone a schema into
> another?
>
>
>
>
> —
> Aldrin
>



-- 

André Luis O. Freitas
System Architect

Rua do Rócio, 220 - Cj. 72
São Paulo - SP - 04552-000
55 11 4063 4222

afrei...@callix.com.br
www.callix.com.br



Re: Postgresql upgrade to 9.5.12/10.3 changes pg_dump format for cloning schemas

2018-03-09 Thread David G. Johnston
On Fri, Mar 9, 2018 at 6:26 AM, Aldrin Martoq Ahumada <
aldrin.mar...@gmail.com> wrote:

> Thinking in the long term, how could be the best way to clone a schema
> into another?
>

Depends on why you are cloning schemas.

Generally not cloning is the best bet - instead place the reference schema
into version control and simply install it whenever necessary.
Parameterize the installation script.

​Or, clone into a different database and leave the namespace the same.

pg_dump, as currently written, is tasked with recreating the original
database exactly and safely.  If you need it to do something different you
could teach it the desired behavior and add some relevant command line
switches to enable said behavior.

David J.
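For those staying on the string-replacement stopgap discussed earlier in the thread, the rewrite can at least be scripted. A best-effort sketch (the target schema name is made up, and a plain sed will also rewrite string literals that happen to contain "public.", so review the output):

```shell
# Simulate a fragment of 9.5.12-style pg_dump output; in practice this
# would come from: pg_dump -s -x -O -n public dbname > schema.sql
cat > schema.sql <<'EOF'
CREATE TABLE public.ahoy_events (
    id bigint NOT NULL,
    visit_id integer
);
EOF

# Rewrite the schema qualifier for the new tenant (hypothetical: tenant_new)
sed 's/\bpublic\./tenant_new./g' schema.sql > tenant_new.sql
cat tenant_new.sql
```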


Postgresql upgrade to 9.5.12/10.3 changes pg_dump format for cloning schemas

2018-03-09 Thread Aldrin Martoq Ahumada
Hi,

For a multi tenant system, we are using the following command to blindly clone 
a schema into another:
pg_dump -s -x -O -n #{default_tenant} #{dbname}


This is done for us by a rails gem, which then feeds that script into the new 
created schema for the new tenant.
https://github.com/influitive/apartment/blob/80a21f2e1cdcbe5b0bd976f88c14332657804536/lib/apartment/adapters/postgresql_adapter.rb#L150
 





When we upgraded from 9.5.11 to 9.5.12, the format of the dump changed (it now 
always appends the schema name), so this is broken. We could patch the SQL 
generated, but that’s not a generic/robust solution.

# pg_dump postgresql 9.5.11
SET search_path = public, pg_catalog;
CREATE TABLE ahoy_events (
id bigint NOT NULL,
visit_id integer,
user_id integer,
name character varying,
properties jsonb,
"time" timestamp without time zone
);

# pg_dump postgresql 9.5.12
CREATE TABLE public.ahoy_events (
id bigint NOT NULL,
visit_id integer,
user_id integer,
name character varying,
properties jsonb,
"time" timestamp without time zone
);



Thinking in the long term, how could be the best way to clone a schema into 
another?




— 
Aldrin



Re: Authentication?

2018-03-09 Thread Bjørn T Johansen
On Wed, 7 Mar 2018 10:19:35 -0500
Stephen Frost  wrote:

> Greetings,
> 
> * Bjørn T Johansen (b...@havleik.no) wrote:
> > Is it possible to use one authentication method as default, like LDAP, and 
> > if the user is not found, then try to authenticate using
> > md5/scram-sha-256 ?  
> 
> Not directly in pg_hba.conf.  You might be able to construct a system
> which works like this using PAM though, but it wouldn't be much fun.
> 
> LDAP use really should be discouraged as it involves sending the
> password to the PG server.  If you are operating in an active directory
> environment then you should be using GSSAPI/Kerberos.
> 
> SCRAM is a good alternative as it doesn't send the password to the
> server either, though that is only available in PG10, of course.
> 
> Thanks!
> 
> Stephen

Ok, thx... Will check out GSSAPI/Kerberos instead... :)

BTJ
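For the record, pg_hba.conf lines are matched top-down and the first matching line's method is final; there is no fallback to a second method when authentication fails, which is why Stephen points at PAM. What can be expressed is routing different roles to different methods (the group name, addresses and LDAP details below are hypothetical):

```
# Members of role "local_accounts" authenticate with SCRAM; all other
# roles go to LDAP.  First matching line wins; no fallback on failure.
host  all  +local_accounts  0.0.0.0/0  scram-sha-256
host  all  all              0.0.0.0/0  ldap ldapserver=ldap.example.com ldapbasedn="dc=example,dc=com"
```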