Re: [GENERAL] Unexpected interval comparison

2017-03-30 Thread Kyotaro HORIGUCHI
At Thu, 30 Mar 2017 10:57:19 -0400, Tom Lane  wrote in 
<2087.1490885...@sss.pgh.pa.us>
> Kyotaro HORIGUCHI  writes:
> > At Tue, 21 Mar 2017 07:52:25 -0700, Adrian Klaver 
> >  wrote in 
> > <375c9e5a-960f-942c-913f-55632a1f0...@aklaver.com>
> >> On 03/21/2017 07:42 AM, Tom Lane wrote:
> >>> It looks like the problem is overflow of the result of interval_cmp_value,
> >>> because it's trying to compute
> >> =# select '32618665'::int8 * 30 * 86400 * 1000000;
> >>> ERROR:  bigint out of range
> >>> It's not immediately obvious how to avoid that while preserving the
> >>> same comparison semantics :-(
> 
> > Detecting the overflow during the conversion can fix it while
> > preserving the semantics (except for the value range). The current code
> > tells a lie anyway in these cases, but I'm not sure whether limiting the
> > range of values is acceptable or not.
> 
> I don't think it is.  It'd cause failures in attempting to enter
> very large interval values into btree indexes, for instance.

As for btree indexes on intervals, they use the same conversion function
as plain comparisons do, so the patch covers btree as well.  The following
now fails correctly with the patch applied.

| =# insert into ti values ('32618665 years'::interval);
| ERROR:  interval out of range during comparison

However, strange behavior shows up when creating an index afterward:
without an index the insert succeeds, but building an index over the
stored value then fails.

| =# insert into ti values ('32618665 years'::interval);
| INSERT 0 1
| postgres=# create index on ti using btree (i);
| ERROR:  interval out of range during comparison

So restricting the domain at input time (in interval_in or the like) might
be better.  Since we don't have a wider-than-bigint integer type,
extract(usec) will overflow for a certain range of interval values anyway;
or should such values be returned as numeric?

If we don't mind such inconsistency, just using a wider integer type
would be useful.

> A possible solution is to manually work in wider-than-64-bit
> arithmetic, that is compute the comparison values div and mod
> some pretty-large number and then compare the two halves.
> I seem to recall that we did something similar in a few cases
> years ago, before we were willing to assume that every machine
> had 64-bit integer support.
> 
> Of course, for machines having int128, you could just use that
> type directly.  I'm not sure how widespread that support is
> nowadays.  Maybe a 95%-good-enough solution is to use int128
> if available and otherwise throw errors for intervals exceeding
> 64 bits.

int128 is already used in numeric.c, so the same approach is doable here.
In that case comparison will be a bit slower on platforms without int128.

By the way, is it right that we treat this not as a bug fix to be done in
the PostgreSQL 10 development cycle, but as an improvement for 11?

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


REFERENCES privilege should not be symmetric (was Re: [GENERAL] Postgres Permissions Article)

2017-03-30 Thread Tom Lane
Paul Jungwirth  writes:
>> Also I don't understand why you wrote “You need the permission on both
>> tables”: Only the owner of a table can add constraints to it

> Ah, this piece was really helpful for me in making it click. Thanks so 
> much! I added a couple new paragraphs to my post with a link back to 
> this thread. I feel like it all makes sense now! :-)

> FYI "You need this permission on both tables" is what the docs say 
> (https://www.postgresql.org/docs/9.6/static/sql-grant.html):

>>> To create a foreign key constraint, it is necessary to have this 
>>> privilege on both the referencing and referenced columns.

> Maybe it would be worth clarifying there that you need to *own* the 
> referencing table, and you need REFERENCES on the referenced table?

Hmm ... interesting.  A bit of excavating in tablecmds.c shows that
in order to do ADD FOREIGN KEY, you need to be owner of the table
the constraint is being attached to (checked by ATSimplePermissions,
which is used for AT_AddConstraint by ATPrepCmd), *and* you need
REFERENCES on both tables, or at least on the columns involved in
the proposed FK constraint (checked by checkFkeyPermissions, which
is invoked against each of the tables by ATAddForeignKeyConstraint).

So yeah, this seems a little bit redundant.  In principle, a table owner
could revoke her own REFERENCES permissions on the table and thereby
disable creation of FKs leading out of it, but it'd be pretty unusual
to want to do so.
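
To make that concrete, a sketch with hypothetical table names (under the
current code the owner's implicit REFERENCES on her own table is what
satisfies the referencing-side check, so revoking it blocks outgoing FKs):

-- run as alice, who owns "items" and also holds REFERENCES on "categories":
REVOKE REFERENCES ON items FROM alice;
ALTER TABLE items ADD FOREIGN KEY (cat_id) REFERENCES categories (id);
-- fails with a permission-denied error under today's behavior,
-- even though alice owns "items"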

Moreover, this definition seems neither intuitive (REFERENCES doesn't
seem like it should be symmetric) nor compliant with the SQL standard.
In SQL:2011 section 11.8  I read

Access Rules
1) The applicable privileges for the owner of T shall include REFERENCES
for each referenced column.

(T is the referenced table.)  I see nothing suggesting that the command
requires REFERENCES privilege on the referencing table.  Now this is a
little garbled, because surely they meant the owner of the referencing
table (who is issuing the command) not the owner of the referenced table,
but I think the intent is clear enough.

In short, it seems like this statement in the docs is correctly describing
our code's behavior, but said behavior is wrong and should be changed.
I'd propose fixing it like that in HEAD; I'm not sure if the back branches
should also be changed.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres Permissions Article

2017-03-30 Thread Paul Jungwirth

Also I don't understand why you wrote “You need the permission on both
tables”: Only the owner of a table can add constraints to it


Ah, this piece was really helpful for me in making it click. Thanks so 
much! I added a couple new paragraphs to my post with a link back to 
this thread. I feel like it all makes sense now! :-)


FYI "You need this permission on both tables" is what the docs say 
(https://www.postgresql.org/docs/9.6/static/sql-grant.html):


> To create a foreign key constraint, it is necessary to have this 
privilege on both the referencing and referenced columns.


Maybe it would be worth clarifying there that you need to *own* the 
referencing table, and you need REFERENCES on the referenced table?


In any case, thanks again to you all for your help figuring this out!

Paul



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump recording privileges on foreign data wrappers

2017-03-30 Thread Tom Lane
Adam Mackler  writes:
> If I grant a privilege on a foreign data wrapper like this:
> GRANT USAGE ON FOREIGN DATA WRAPPER postgres_fdw TO myuser;

> from within psql, then a dump of the database produced using pg_dump
> seems to lack an equivalent GRANT statement, even though it contains
> the CREATE EXTENSION statement for that foreign data wrapper.

Yeah, it was only in 9.6 that we grew any infrastructure that would
allow handling this properly.  Before that, there was no way to track
which GRANTs on an extension object were part of the extension's
initial script and which had been issued later --- so pg_dump just
assumed they were all of the former kind and didn't dump anything
for extension member objects.

> I am using version 9.5.6.

9.6.x will make this better for you.
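
In the meantime, on 9.5 one workaround is to keep such grants in a separate
script and re-apply it after restoring the dump, e.g.:

GRANT USAGE ON FOREIGN DATA WRAPPER postgres_fdw TO myuser;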

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] inevitability of to_date() when converting representations which don't represent whole timestamps

2017-03-30 Thread Adrian Klaver

On 03/30/2017 08:21 AM, Shaun Cutts wrote:



On Mar 30, 2017, at 10:02 AM, Adrian Klaver wrote:

On 03/29/2017 06:19 PM, Shaun Cutts wrote:

When being asked to convert a day of the week, the to_date() function
returns the same day ('0001-01-01 BC’) no matter which day is converted:

# select to_date(‘Monday’, ‘Day’)
'0001-01-01 BC’

# select to_date(‘Tuesday’, ‘Day’)
'0001-01-01 BC’

However, if it were to return a date that was that day of the week,
it could be inverted:

# select extract(dow from '0001-01-01 BC'::date); — this date should
be the result of to_date(‘Sunday’, ‘Day’)
6

# select extract(dow from '0001-01-02 BC'::date); — this date should
be the result of to_date(‘Monday’, ‘Day’)
0


The two examples are not the same. In the second you are starting from a
known date and extracting a day number. In the first you are asking
for a day of the week that is not anchored to a date, so any date past
or present that is on that day would fit. I have no problem with
normalizing that to a placeholder date.


Normalizing to a placeholder date is indeed what I’m after.


What would the requirement be?:

That Day dates w/o a year would be sequential from 0001-01-01 BC?


Yes — that sounds fine, so:

to_date(‘Sunday’, ‘Day’) -> ‘0001-01-01 BC’::date
to_date(‘Monday’, ‘Day’) -> ‘0001-01-02 BC’::date

and so on.


I tend to doubt that is going to happen, as it would change current 
behavior out from under code that depends on it. I was asking more in 
the vein of what you were after. See the possible solution below.






Or some other Sunday in some other year?

It comes down to what you want "select to_date(‘Monday’, ‘Day’)" to
provide you and for what purpose?



My use case is to convert the name of a day to a day of the week number
— now testing in English, but ultimately locale insensitive, so relying
on to_date() to recognize the day in whatever the database locale is.




To build on David's suggestion, something like maybe:

WITH day_mapping AS (
    SELECT
        to_char(week_date, 'Day') AS day_name,
        to_char(week_date, 'D')   AS day_number
    FROM
        generate_series('03/26/2017'::date,
                        '04/01/2017'::date,
                        '1 day') AS week_date
)
SELECT
    *
FROM
    day_mapping
WHERE
    trim(day_name) = 'Sunday';

  day_name   | day_number
-------------+------------
 Sunday      | 1

This would be for non-ISO numbering. Change 'D' to 'ID' to get ISO day 
numbering.


--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pg_dump recording privileges on foreign data wrappers

2017-03-30 Thread Adam Mackler
If I grant a privilege on a foreign data wrapper like this:

GRANT USAGE ON FOREIGN DATA WRAPPER postgres_fdw TO myuser;

from within psql, then a dump of the database produced using pg_dump
seems to lack an equivalent GRANT statement, even though it contains
the CREATE EXTENSION statement for that foreign data wrapper.

Am I understanding correctly that when I feed that output of pg_dump
back into psql it will result in a database that has the foreign data
wrapper but without the privileges that were set in the database that
was dumped?  Is that really what is supposed to happen?  Is there a
way to get pg_dump to output the necessary statements such that
running the dump back through psql results in the same privileges that
I started with?

I am using version 9.5.6.

Thanks very much,
-- 
Adam Mackler


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tablespace Default Behavior

2017-03-30 Thread Adrian Klaver

On 03/30/2017 10:02 AM, harpagornis wrote:

Nope, I installed v4.1.3 and it is the same, pg_default appears as the
tablespace.  Even when I use the drop down list in the right pane of PgAdmin
to select the my_space tablespace, and then click the Save button, it does
not change.


pgAdmin is a separate project from the Postgres server so I would 
suggest adding that information to the existing issue in their issue 
tracker:


https://redmine.postgresql.org/issues/2069






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tablespace Default Behavior

2017-03-30 Thread harpagornis
Nope, I installed v4.1.3 and it is the same, pg_default appears as the
tablespace.  Even when I use the drop down list in the right pane of PgAdmin
to select the my_space tablespace, and then click the Save button, it does
not change.



--
View this message in context: 
http://www.postgresql-archive.org/Tablespace-Default-Behavior-tp5952910p5953081.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] HotSync Replicate doubt?

2017-03-30 Thread Periko Support
Good, then I must test it. Thanks, guys!

On Thu, Mar 30, 2017 at 9:49 AM, Adrian Klaver
 wrote:
> On 03/30/2017 09:39 AM, Periko Support wrote:
>>
>> Hi guys.
>>
>> I had some doubt about PSQL internal replication mechanics(Streaming).
>>
>> If I setup a Master-Slave Hot StandBy.
>>
>> The slave is on a Read-Only stage, but we can query the server.
>>
>> If a user create a table in the database:
>>
>> CREATE DATABASE mynewdb;
>> CREATE TABLE mytbale-name + fields.
>>
>> Or if changes a table with new fields or remove fields.
>>
>> Does the replication will send this commands to the SLAVE without user
>> intervention?
>>
>> I run bucardo for replication, but this feature doesn't work, we need
>> to manually do it on the
>> SLAVE and some other steps to have both DB sync.
>>
>> We already know how to do it, bucardo works.
>>
>> Just wondering if PSQL can handle this automatically?
>
>
> Yes, with an exception:
>
> https://www.postgresql.org/docs/9.6/static/warm-standby.html
>
> 26.2.1. Planning
>
> "Keep in mind that if CREATE TABLESPACE is executed on the primary, any new
> mount point needed for it must be created on the primary and all standby
> servers before the command is executed"
>
>>
>> Thanks.
>>
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] HotSync Replicate doubt?

2017-03-30 Thread Adrian Klaver

On 03/30/2017 09:39 AM, Periko Support wrote:

Hi guys.

I had some doubt about PSQL internal replication mechanics(Streaming).

If I setup a Master-Slave Hot StandBy.

The slave is on a Read-Only stage, but we can query the server.

If a user create a table in the database:

CREATE DATABASE mynewdb;
CREATE TABLE mytbale-name + fields.

Or if changes a table with new fields or remove fields.

Does the replication will send this commands to the SLAVE without user
intervention?

I run bucardo for replication, but this feature doesn't work, we need
to manually do it on the
SLAVE and some other steps to have both DB sync.

We already know how to do it, bucardo works.

Just wondering if PSQL can handle this automatically?


Yes, with an exception:

https://www.postgresql.org/docs/9.6/static/warm-standby.html

26.2.1. Planning

"Keep in mind that if CREATE TABLESPACE is executed on the primary, any 
new mount point needed for it must be created on the primary and all 
standby servers before the command is executed"




Thanks.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] HotSync Replicate doubt?

2017-03-30 Thread Bill Moran
On Thu, 30 Mar 2017 09:39:40 -0700
Periko Support  wrote:

> Hi guys.
> 
> I had some doubt about PSQL internal replication mechanics(Streaming).
> 
> If I setup a Master-Slave Hot StandBy.
> 
> The slave is on a Read-Only stage, but we can query the server.
> 
> If a user create a table in the database:
> 
> CREATE DATABASE mynewdb;
> CREATE TABLE mytbale-name + fields.
> 
> Or if changes a table with new fields or remove fields.
> 
> Does the replication will send this commands to the SLAVE without user
> intervention?
> 
> I run bucardo for replication, but this feature doesn't work, we need
> to manually do it on the
> SLAVE and some other steps to have both DB sync.
> 
> We already know how to do it, bucardo works.
> 
> Just wondering if PSQL can handle this automatically?

Postgres' built-in streaming replication _does_ replicate this automatically.
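
For example, run some DDL on the primary and check the standby a moment later
(a quick sanity check, not a formal test):

-- on the primary:
CREATE TABLE repl_check (id integer);
-- on the standby, shortly afterwards:
SELECT pg_is_in_recovery();        -- returns t: this session is on the standby
SELECT count(*) FROM repl_check;   -- the new table is already visible here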

-- 
Bill Moran 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] HotSync Replicate doubt?

2017-03-30 Thread Periko Support
Hi guys,

I have some doubts about PostgreSQL's built-in (streaming) replication
mechanics.

Say I set up a master-slave hot standby. The slave is read-only, but we can
still query it.

If a user creates a database or a table:

CREATE DATABASE mynewdb;
CREATE TABLE mytable_name ( ... fields ... );

or alters a table to add or remove columns, will replication send these
commands to the slave without user intervention?

I run Bucardo for replication, and there this doesn't happen automatically:
we need to apply the changes manually on the slave, plus some other steps,
to keep both databases in sync. We already know how to do that, and Bucardo
works.

I'm just wondering whether PostgreSQL can handle this automatically.

Thanks.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL and Kubernetes

2017-03-30 Thread Moreno Andreo

Il 30/03/2017 15:27, Adrian Klaver ha scritto:

On 03/30/2017 03:10 AM, Moreno Andreo wrote:


Hi everyone,
I was googling around while searching stuff to finish my cluster
solution (already running in production in single-server mode) with
barman and pgbouncer, when I ran into this

https://hackernoon.com/postgresql-cluster-into-kubernetes-cluster-f353cde212de 



Since I'm on Google Cloud Platform, I thought it would be a good idea to
see what it offers. 

[snip]

What's your advice?


I don't use Kubernetes,

Neither do I, at the moment, but if there's something new to dig into,
I'm always ready!



but I do know one of the former members of the Postgres Core does:

https://opensource.com/article/17/2/stateful-applications

That's a *niiice* article!


and delivered a talk at Scale15x on it:

http://www.socallinuxexpo.org/scale/15x/presentations/ccp-containerized-clustered-postgres 



https://www.youtube.com/watch?v=mxmZv9xkc8s

I'll take a look later

Thanks a lot!!!
Best regards
Moreno.


Go to about 3 minutes in on the video as there are sound/video issues 
before that.




Thanks in advance,
Moreno








--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tablespace Default Behavior

2017-03-30 Thread Adrian Klaver

On 03/30/2017 08:36 AM, harpagornis wrote:


Postgres version is 9.4

PgAdmin version is 1.0


Alright, it seems this is a known issue:

https://redmine.postgresql.org/issues/2069

You will need a Postgres community account:

https://www.postgresql.org/account/signup/

to see the issue though.

According to the issue it is fixed in pgAdmin4 1.2, and the current 
stable release is 1.3, so I would upgrade your pgAdmin.




The value returned by the suggested query is correctly reported as
'my_space'.



--
View this message in context: 
http://www.postgresql-archive.org/Tablespace-Default-Behavior-tp5952910p5953046.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] inevitability of to_date() when converting representations which don't represent whole timestamps

2017-03-30 Thread David G. Johnston
On Thu, Mar 30, 2017 at 8:21 AM, Shaun Cutts 
wrote:

>
> My use case is to convert the name of a day to a day of the week number —
> now testing in English, but ultimately locale insensitive, so relying on
> to_date() to recognize the day in whatever the database locale is.
>
>
Patches are welcome, but I'd suggest that you not attempt to incorporate
this behavior into to_date.  That function returns a date, and what you want
is a normalized integer (or, in reverse, a locale-specific string).

Both are locale-specific, and hence stable, not immutable:
day_of_week(text) : integer
day_of_week(integer) : text

Given that "select to_char(now(), 'Day')" works there should be few, if
any, technical barriers to overcome.

You'd need to decide whether to support only the  "ISO 8601" numbering
scheme (1-7) or to add additional arguments and/or function to number
Sunday as 0 instead of 7.
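
Given that, a minimal sketch of such a wrapper (the anchor week is an
arbitrary Monday-to-Sunday range; 'TMDay' makes the name matching follow
lc_time, which is why the function is STABLE rather than IMMUTABLE):

CREATE FUNCTION day_of_week(day_name text) RETURNS integer
STABLE LANGUAGE sql AS $$
    SELECT to_char(d, 'ID')::int              -- ISO numbering: Monday = 1 .. Sunday = 7
    FROM generate_series('2017-03-27'::date,  -- any full ISO week will do
                         '2017-04-02'::date,
                         interval '1 day') AS d
    WHERE trim(to_char(d, 'TMDay')) = trim(day_name);
$$;

-- e.g. SELECT day_of_week('Sunday');  returns 7, or NULL for an unrecognized name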

David J.


Re: [GENERAL] Tablespace Default Behavior

2017-03-30 Thread harpagornis

Postgres version is 9.4

PgAdmin version is 1.0

The value returned by the suggested query is correctly reported as
'my_space'.



--
View this message in context: 
http://www.postgresql-archive.org/Tablespace-Default-Behavior-tp5952910p5953046.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tablespace Default Behavior

2017-03-30 Thread Adrian Klaver

On 03/30/2017 07:35 AM, harpagornis wrote:

In PgAdmin, when I right-click the table, in the Properties pane on the right
side of the screen,  the tablespace is listed as "pg_default"


What we know. Correct me if I am wrong:

1) In postgresql.conf you have:
default_tablespace = ''

2) You created a table in the database:

CREATE TABLE myschema.mytable (rc_id integer NOT NULL) TABLESPACE
my_space;

FYI, the TABLESPACE clause is redundant, as the table would be created in 
my_space anyway since that is the default for the database.


3) When you queried pg_tables, the tablespace field is NULL for the 
table. Which would be correct:

https://www.postgresql.org/docs/9.6/static/catalogs.html
tablespace (name, references pg_tablespace.spcname): "Name of tablespace 
containing table (null if default for database)"


4) pgAdmin shows the tablespace as being pg_default for the table.

What we do not know:

1) Postgres version

2) pgAdmin version

3) select spcname  from pg_database join pg_tablespace on 
pg_database.dattablespace=pg_tablespace.oid  where datname= your_db_name;
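
As an aside, a query along these lines resolves the effective tablespace of a
given table, falling back to the database default when pg_class.reltablespace
is 0 (i.e. when pg_tables.tablespace is NULL); a sketch, with 'mytable' as a
placeholder name:

SELECT c.relname,
       COALESCE(ts.spcname, dts.spcname) AS effective_tablespace
FROM pg_class c
LEFT JOIN pg_tablespace ts ON ts.oid = c.reltablespace
JOIN pg_database d         ON d.datname = current_database()
JOIN pg_tablespace dts     ON dts.oid = d.dattablespace
WHERE c.relname = 'mytable';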






--
View this message in context: 
http://www.postgresql-archive.org/Tablespace-Default-Behavior-tp5952910p5953028.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Unexpected interval comparison

2017-03-30 Thread Tom Lane
Kyotaro HORIGUCHI  writes:
> At Tue, 21 Mar 2017 07:52:25 -0700, Adrian Klaver  
> wrote in <375c9e5a-960f-942c-913f-55632a1f0...@aklaver.com>
>> On 03/21/2017 07:42 AM, Tom Lane wrote:
>>> It looks like the problem is overflow of the result of interval_cmp_value,
>>> because it's trying to compute
> >> =# select '32618665'::int8 * 30 * 86400 * 1000000;
>>> ERROR:  bigint out of range
>>> It's not immediately obvious how to avoid that while preserving the
>>> same comparison semantics :-(

> Detecting the overflow during the conversion can fix it while
> preserving the semantics (except for the value range). The current code
> tells a lie anyway in these cases, but I'm not sure whether limiting the
> range of values is acceptable or not.

I don't think it is.  It'd cause failures in attempting to enter
very large interval values into btree indexes, for instance.

A possible solution is to manually work in wider-than-64-bit
arithmetic, that is compute the comparison values div and mod
some pretty-large number and then compare the two halves.
I seem to recall that we did something similar in a few cases
years ago, before we were willing to assume that every machine
had 64-bit integer support.
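
Just to put a number on it, computing the same comparison value in numeric
(effectively arbitrary-width arithmetic) shows how far past the int8 range
the problem case lands; this is only an illustration, not the proposed fix:

SELECT 32618665::numeric * 12 * 30 * 86400 * 1000000 AS cmp_value_usecs,
       9223372036854775807::numeric                  AS int8_max;
-- cmp_value_usecs is roughly 1.0e21, about 100x larger than int8_max (9.2e18)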

Of course, for machines having int128, you could just use that
type directly.  I'm not sure how widespread that support is
nowadays.  Maybe a 95%-good-enough solution is to use int128
if available and otherwise throw errors for intervals exceeding
64 bits.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tablespace Default Behavior

2017-03-30 Thread harpagornis
In PgAdmin, when I right-click the table, in the Properties pane on the right
side of the screen,  the tablespace is listed as "pg_default"



--
View this message in context: 
http://www.postgresql-archive.org/Tablespace-Default-Behavior-tp5952910p5953028.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: inevitability of to_date() when converting representations which don't represent whole timestamps

2017-03-30 Thread David G. Johnston
On Thursday, March 30, 2017, Peter J. Holzer  wrote:

> I don't understand what you mean by "inevitability" in the subject.
> >
> > David tells this is not a bug, but it still seems like a reasonable
> > requirement on to_date() to me. Is there some reason why this isn’t
> > possible?
>
> The documentation warns that to_date “interpret input liberally, with
> minimal error checking” and while it “produce[s] valid output, the
> conversion can yield unexpected results.”
>
> I would agree that producing the same date for every day of the week
> crosses the line between “unexpected (but valid) result” and “bug”.
>
>
What to_date appears to do with the Day specification is throw away the
match, just like placeholder text.  Left with no other information to
process, it then constructs a date in which the day, month, and year are all
missing.

Yes, one could hard-code the word Tuesday to some particular date (absolute
or relative) and deal with the consequent localization issue.  But that
isn't in to_date's charter.  It doesn't think; it simply captures the
necessary date elements from a string into variables and then combines
those values into an actual date.  Not all info in the string is useful for
this purpose, so some of it can be discarded.

Maybe the documentation could be improved here but the behavior shown is
not unreasonable.

It would be good to know what Oracle does here, since the primary purpose of
to_date is as a compatibility function with that system.

David J.


Re: [GENERAL] Handling psql lost connections

2017-03-30 Thread Peter J. Holzer
On 2017-03-29 08:49:57 -0700, Steve Crawford wrote:
> When firewalls/VPNs stand between my psql client and a remote PostgreSQL 
> server
> the connection will on occasion time out and drop. This results in the
> following scenario:
> 
> -Leave for lunch mid project - leave psql open.
> 
> -Return from lunch, complete and submit large query.
> 
> -Notice query is taking too long. cancel it.
> 
> -Cancel doesn't return - realize that connection has dropped.
> 
> -Kill psql - history is not written out. Start query from scratch.
> 
> Is there:
[...]
> Yes, I know I and my coworkers could spend brain cycles trying to unerringly
> remember to close and restart connections, write all queries in an external
> editor and then submit them, etc. but I'm looking for more user friendly
> options.

One workaround could be to log in to the server, start a screen session,
and run psql inside it. Then if your network connection drops you can
simply log in again and resume the screen session. Of course this only
works if you have a shell login on the server, which may not be the case.

hp

-- 
   _  | Peter J. Holzer| A coding theorist is someone who doesn't
|_|_) || think Alice is crazy.
| |   | h...@hjp.at | -- John Gordon
__/   | http://www.hjp.at/ |http://downlode.org/Etext/alicebob.html


signature.asc
Description: Digital signature


Re: [GENERAL] Tablespace Default Behavior

2017-03-30 Thread Adrian Klaver

On 03/29/2017 09:18 PM, harpagornis wrote:

That is what I suspected, and thank you for the explanation.  I think it is
misleading and a bug in PgAdmin for explicitly listing the tablespace as
pg_default.


Listing it where?





--
View this message in context: 
http://www.postgresql-archive.org/Tablespace-Default-Behavior-tp5952910p5952929.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] inevitability of to_date() when converting representations which don't represent whole timestamps

2017-03-30 Thread Peter J. Holzer
I don't understand what you mean by "inevitability" in the subject.

On 2017-03-29 21:19:56 -0400, Shaun Cutts wrote:
> When being asked to convert a day of the week, the to_date() function
> returns the same day ('0001-01-01 BC’) no matter which day is
> converted:
> 
> # select to_date(‘Monday’, ‘Day’)
> '0001-01-01 BC’
> 
> # select to_date(‘Tuesday’, ‘Day’)
> '0001-01-01 BC’
> 
> However, if it were to return a date that was that day of the week, it
> could be inverted:
> 
> # select extract(dow from '0001-01-01 BC'::date); — this date should be the 
> result of to_date(‘Sunday’, ‘Day’)
> 6
> 
> # select extract(dow from '0001-01-02 BC'::date); — this date should be the 
> result of to_date(‘Monday’, ‘Day’)
> 0
> 
> ….
> 
> David tells this is not a bug, but it still seems like a reasonable
> requirement on to_date() to me. Is there some reason why this isn’t
> possible?

The documentation warns that to_date “interpret input liberally, with
minimal error checking” and while it “produce[s] valid output, the
conversion can yield unexpected results.”

I would agree that producing the same date for every day of the week
crosses the line between “unexpected (but valid) result” and “bug”.

On the other hand I have no idea what the result of to_date(‘Monday’,
‘Day’) should be. “Any date which is a Monday” seems too vague. “The
nearest Monday”, “the previous Monday”, “the next Monday” might be
useful in practice, but whichever of them you pick, you've picked the
wrong one with a probability of 2/3. “The first monday in the year -1 of
the proleptic Gregorian calendar” would be consistent with how
to_timestamp('12:34:56', 'HH24:MI:SS') works, but apart from that and
being invertible it seems to be a quite useless choice.

hp

-- 
   _  | Peter J. Holzer| A coding theorist is someone who doesn't
|_|_) || think Alice is crazy.
| |   | h...@hjp.at | -- John Gordon
__/   | http://www.hjp.at/ |http://downlode.org/Etext/alicebob.html


signature.asc
Description: Digital signature


Re: [GENERAL] inevitability of to_date() when converting representations which don't represent whole timestamps

2017-03-30 Thread Adrian Klaver

On 03/29/2017 06:19 PM, Shaun Cutts wrote:

When being asked to convert a day of the week, the to_date() function returns 
the same day ('0001-01-01 BC’) no matter which day is converted:

# select to_date(‘Monday’, ‘Day’)
'0001-01-01 BC’

# select to_date(‘Tuesday’, ‘Day’)
'0001-01-01 BC’

However, if it were to return a date that was that day of the week, it could be 
inverted:

# select extract(dow from '0001-01-01 BC'::date); — this date should be the 
result of to_date(‘Sunday’, ‘Day’)
6

# select extract(dow from '0001-01-02 BC'::date); — this date should be the 
result of to_date(‘Monday’, ‘Day’)
0


The two examples are not the same. In the second you are starting from a 
known date and extracting a day number. In the first you are asking for 
a day of the week that is not anchored to a date, so any date past or 
present that is on that day would fit. I have no problem with 
normalizing that to a placeholder date.




….

David tells this is not a bug, but it still seems like a reasonable requirement 
on to_date() to me. Is there some reason why this isn’t possible?


What would the requirement be?:

That Day dates w/o a year would be sequential from 0001-01-01 BC?

Or some other Sunday in some other year?

It comes down to what you want "select to_date(‘Monday’, ‘Day’)" to 
provide you and for what purpose?











--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Keycloak and Postgres

2017-03-30 Thread Marc Tempelmeier
Hi,

I have a replication question: we have a big Cisco UCS VM setup, where VMs 
are snapshotted, the drives are abstracted, etc. If a VM crashes it will be 
resumed within a minute from another rack. What does master-slave replication, 
or some other kind of replication, give us in this setup? Should we do it to 
guard against other kinds of failure?

Best regards

Marc


Re: [GENERAL] Postgres Permissions Article

2017-03-30 Thread Peter J. Holzer
On 2017-03-29 08:05:23 -0700, Paul Jungwirth wrote:
> On 03/29/2017 06:36 AM, Tom Lane wrote:
> >Karsten Hilbert  writes:
> >>Being able to create foreign keys may allow to indirectly
> >>discover whether certain values exists in a table which I
> >>don't otherwise have access to (by means of failure or
> >>success to create a judiciously crafted FK).
> >
> >Aside from that, an FK can easily be used to cause effective
> >denial-of-service, for example preventing rows from being deleted
> >within a table, or adding enormous overhead to such a deletion.
> 
> Thank you both for taking a look! I agree those are both worthwhile
> concerns. It still seems a little strange it is not just part of the CREATE
> permission (for example). I understand why not everyone can create a foreign
> key, I just have trouble imagining a use case where it is helpful to
> separate it from other DDL commands.

A foreign key affects not only the table on which it is defined but also
the table it references. 

If Alice creates a table “master” and Bob creates a table “detail”
referencing “master”, Bob can prevent Alice from deleting entries from
her own table. So Alice must be able to decide whom she allows to
reference her tables.
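
A minimal sketch of that opt-in, with hypothetical names, assuming each
statement is run by the owner named in the comment:

-- as alice:
CREATE TABLE master (id integer PRIMARY KEY);
GRANT REFERENCES ON master TO bob;   -- alice decides who may reference her table
-- as bob:
CREATE TABLE detail (id integer PRIMARY KEY,
                     master_id integer REFERENCES master (id));

Without the GRANT, bob's CREATE TABLE fails with a permission error.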

I don't see how this could be part of the create privilege - I
certainly want different roles to be able to create their own tables (or
views, or whatever) without being able to DOS each other (accidentally
or intentionally).

(Also I don't understand why you wrote “You need the permission on both
tables”: Only the owner of a table can add constraints to it - this
privilege cannot be granted to other roles at all. So to create a
foreign key constraint you need to be the owner of the referencing table
and have the references privilege on the referenced table. It's not
symmetrical.)

hp

-- 
   _  | Peter J. Holzer| A coding theorist is someone who doesn't
|_|_) || think Alice is crazy.
| |   | h...@hjp.at | -- John Gordon
__/   | http://www.hjp.at/ |http://downlode.org/Etext/alicebob.html


signature.asc
Description: Digital signature


Re: [GENERAL] PostgreSQL and Kubernetes

2017-03-30 Thread Adrian Klaver

On 03/30/2017 03:10 AM, Moreno Andreo wrote:


Hi everyone,
I was googling around while searching stuff to finish my cluster
solution (already running in production in single-server mode) with
barman and pgbouncer, when I ran into this

https://hackernoon.com/postgresql-cluster-into-kubernetes-cluster-f353cde212de

Since I'm on Google Cloud Platform, I thought it would be a good idea to
see what it offers. After a quick read, I found a comment saying
*"There was a series of comments by Kelsey Hightower about running
databases in Kubernetes, both at conferences and on Twitter; he
essentially says that you should absolutely NOT run your database in a
container/kubernetes pod/service/deployment, as databases aren't built
for type of dynamic scheduling that something like kubernetes (or any
other container management solution), due to how they interact with the
filesystem, network stack, and more.*
*Further more, he suggests that because of this lack of integration,
that if you do run your database in kubernetes/docker, you are very
likely to encounter data corruption and data loss"*

What's your advice?


I don't use Kubernetes, but I do know one of the former members of the 
Postgres core team does:


https://opensource.com/article/17/2/stateful-applications

and delivered a talk at Scale15x on it:

http://www.socallinuxexpo.org/scale/15x/presentations/ccp-containerized-clustered-postgres

https://www.youtube.com/watch?v=mxmZv9xkc8s

Go to about 3 minutes in on the video as there are sound/video issues 
before that.




Thanks in advance,
Moreno



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL and Kubernetes

2017-03-30 Thread Vick Khera
On Thu, Mar 30, 2017 at 6:10 AM, Moreno Andreo 
wrote:

> Since I'm on Google Cloud Platform, I thought it would be a good idea to
> see what it offers.


They currently have in beta a Postgres flavor of their cloudsql. I haven't
used it yet, but I plan to test it sometime in the next couple of months.


Re: [GENERAL] Huge Pages - setting the right value

2017-03-30 Thread pinker


On 2017-03-30 11:45:55, pinker wrote:
> Hi,
> I'm currently testing performance with and without huge pages. Documentation
> says that in order to estimate the number of huge pages needed one should
> check the postmaster's VmPeak value. I wonder if it's only postmaster memory
> usage what's matters? Or I could get better estimation from the most memory
> intensive postgres process - not necessarly postmaster? I'm using following
> command to check it:
> for i in $(ps -ef | grep postgres|awk '{print $2}'); do grep ^VmPeak
> /proc/${i}/status|awk '{print $2}' >> log; done; sort -n -r log | head -1
> 
> I'm asking because some other process takes 606788kB while postmaster only
> 280444kB.
> 
> 
> 
> --
> View this message in context: 
> http://www.postgresql-archive.org/Huge-Pages-setting-the-right-value-tp5952972.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
> 


Or maybe the sum over all processes?
I assume that the memory allocated by the postmaster means shared buffers, so 
if one wants huge pages to be used for sorting as well, should one then set a 
bigger number of huge pages in the kernel? Is that a right assumption?





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PostgreSQL and Kubernetes

2017-03-30 Thread Moreno Andreo

  
  

Hi everyone,
    I was googling around while searching stuff to finish my cluster
solution (already running in production in single-server mode) with
barman and pgbouncer, when I ran into this

https://hackernoon.com/postgresql-cluster-into-kubernetes-cluster-f353cde212de

Since I'm on Google Cloud Platform, I thought it would be a good
idea to see what it offers. After a quick read, I found a comment
saying 
"There was a series of comments by Kelsey Hightower about running
  databases in Kubernetes, both at conferences and on Twitter; he
  essentially says that you should absolutely NOT run your database
  in a container/kubernetes pod/service/deployment, as databases
  aren't built for type of dynamic scheduling that something like
  kubernetes (or any other container management solution), due to
  how they interact with the filesystem, network stack, and more.
Further more, he suggests that because of this lack of
  integration, that if you do run your database in
  kubernetes/docker, you are very likely to encounter data
  corruption and data loss"

What's your advice?
Thanks in advance,
Moreno
  





[GENERAL] Huge Pages - setting the right value

2017-03-30 Thread pinker
Hi,
I'm currently testing performance with and without huge pages. The documentation
says that in order to estimate the number of huge pages needed, one should
check the postmaster's VmPeak value. I wonder if it's only the postmaster's
memory usage that matters, or whether I could get a better estimate from the
most memory-intensive postgres process - not necessarily the postmaster? I'm
using the following command to check it:
for i in $(ps -ef | grep postgres|awk '{print $2}'); do grep ^VmPeak
/proc/${i}/status|awk '{print $2}' >> log; done; sort -n -r log | head -1

I'm asking because some other process takes 606788 kB while the postmaster
takes only 280444 kB.



--
View this message in context: 
http://www.postgresql-archive.org/Huge-Pages-setting-the-right-value-tp5952972.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Unexpected interval comparison

2017-03-30 Thread Kyotaro HORIGUCHI
Hello,

At Tue, 21 Mar 2017 07:52:25 -0700, Adrian Klaver  
wrote in <375c9e5a-960f-942c-913f-55632a1f0...@aklaver.com>
> On 03/21/2017 07:42 AM, Tom Lane wrote:
> > Frazer McLean  writes:
> >> I came across an unexpected comparison (tested on PostgreSQL 9.4 and
> >> 9.6) for intervals with a large difference in magnitude.
> >
> >>   '1 year'::interval > '32618665 years'::interval;
> >
> >> Is this a bug?
> >
> > It looks like the problem is overflow of the result of
> > interval_cmp_value,
> > because it's trying to compute
> >
> > =# select '32618665'::int8 * 30 * 86400 * 1000000;
> > ERROR:  bigint out of range
> >
> > It's not immediately obvious how to avoid that while preserving the
> > same
> > comparison semantics :-(

This is an apparent bug in interval comparison. During comparison the
interval is converted into an int64 value in microseconds, but that
overflows in this case.

Detecting the overflow during the conversion can fix it while
preserving the semantics (except for the value range). The current code
tells a lie anyway in these cases, but I'm not sure whether limiting the
range of values is acceptable or not.

| =# select '106751990 days 24:59:59'::interval;
| interval 
| -
|  106751990 days 24:59:59
| =# select '106751990 days 24:59:59'::interval > '1 year'::interval;
| ERROR:  interval out of range during comparison

If this is not acceptable, some refactoring would be required.


> Not sure if it helps but this works:
> 
> test=# select extract(epoch from '1 year'::interval) > extract(epoch
> from '32618665 years'::interval);
>  ?column?
> --
>  f

It calculates in seconds, so it is useful if subseconds are not
significant. But extract also silently overflows while converting
the same interval to usecs. This seems to need the same amendment.

> =# select extract(usec from '32618665 years'::interval);
>  date_part 
> ---
>  0

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
diff --git a/src/backend/utils/adt/timestamp.c b/src/backend/utils/adt/timestamp.c
index 4be1999..f77cfcc 100644
--- a/src/backend/utils/adt/timestamp.c
+++ b/src/backend/utils/adt/timestamp.c
@@ -2293,10 +2293,23 @@ static inline TimeOffset
 interval_cmp_value(const Interval *interval)
 {
 	TimeOffset	span;
+	TimeOffset timedayfraction;
+	TimeOffset timedays;
 
-	span = interval->time;
-	span += interval->month * INT64CONST(30) * USECS_PER_DAY;
-	span += interval->day * INT64CONST(24) * USECS_PER_HOUR;
+	timedays = ((int64)interval->time) / USECS_PER_DAY;
+	timedayfraction = interval->time - timedays * USECS_PER_DAY;
+
+	/* calculate span in days. this cannot overflow */
+	span = timedays;
+	span += interval->month * INT64CONST(30);
+	span += interval->day;
+
+	/* overflow check */
+	if (span > INT64CONST(0x7fffffffffffffff) / USECS_PER_DAY - 1)
+		ereport(ERROR,
+			  (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
+			   errmsg ("interval out of range during comparison")));
+	span = span * USECS_PER_DAY + timedayfraction;
 
 	return span;
 }
@@ -2304,6 +2317,7 @@ interval_cmp_value(const Interval *interval)
 static int
 interval_cmp_internal(Interval *interval1, Interval *interval2)
 {
+	
 	TimeOffset	span1 = interval_cmp_value(interval1);
 	TimeOffset	span2 = interval_cmp_value(interval2);
 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Using relations in the SELECT part

2017-03-30 Thread Giuseppe Sacco
Hello Tom,

Il giorno mer, 29/03/2017 alle 09.59 -0400, Tom Lane ha scritto:
[...]
> > Furthermore, let's assume postgres does a cartesian product, if I
> > add a
> > new relation as third element, does it create 4x3 product?
> 
> You've hit on the reason why it's semantically ugly: it's not very
> clear what to do with multiple SRFs in one targetlist.  LATERAL,
> together with the ROWS FROM construct, allows clear specification
> of both of the useful behaviors (cartesian product and eval-set-
> returning-functions-in-lockstep).  The multiple-SRFs-in-targetlist
> behavior that we inherited from Berkeley is just a mess, as it
> effectively runs the SRFs until reaching the least common multiple of
> their periods.  We're changing that for v10 though.  You might find
> this commit informative (at least the commit message and
> documentation
> changes):
> 
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=69f4b9c85

Thanks for the detailed answer. I'll study LATERAL joins more closely and
change my query.

BTW, the commit you pointed out has been very very instructive for me.
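
In case it helps other readers, a minimal sketch of the two behaviors Tom
describes, using generate_series as a stand-in set-returning function:

-- lockstep evaluation: ROWS FROM pairs the functions positionally
SELECT * FROM ROWS FROM (generate_series(1, 3), generate_series(1, 2)) AS t(a, b);
-- cartesian product: LATERAL (here effectively a cross join) combines every pair
SELECT a, b FROM generate_series(1, 3) AS a, LATERAL generate_series(1, 2) AS b;
-- contrast with the old targetlist form, which runs to the LCM of the periods:
-- SELECT generate_series(1, 3), generate_series(1, 2);   -- 6 rows before v10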

Thank you,
Giuseppe



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Issue in autovacuum

2017-03-30 Thread Agarwal, Prateek
Thanks for your responses!
We are not using prepared transactions, nor replication slots.
We do have a lot of transactions being opened and closed by the update
queries, though.
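
For anyone chasing the same symptom, the causes suggested below can be checked
with queries along these lines (a sketch using the standard system views;
adjust the age threshold to taste):

SELECT * FROM pg_prepared_xacts;                      -- lingering prepared transactions
SELECT slot_name, active FROM pg_replication_slots;   -- slots holding back cleanup
SELECT pid, state, xact_start FROM pg_stat_activity
 WHERE xact_start < now() - interval '1 hour';        -- long-open transactions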

Thanks,
Prateek
-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Tuesday, March 28, 2017 11:54 PM
To: Adrian Klaver 
Cc: Agarwal, Prateek ; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Issue in autovacuum

Adrian Klaver  writes:
> On 03/28/2017 03:15 AM, Agarwal, Prateek wrote:
>> When I upgrade my db to 9.4.6 (my application remains same and hardly 
>> any code change w.r.t transaction declarations), it started 
>> complaining about dead rows below. I am not sure if it is something 
>> to do with upgrade or not.

> So are you using replication slots?

Or prepared transactions?  Or maybe you just have a really old open transaction?

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general