Tracking DDL and DML changes in PostgreSQL and different versions of the database (advanced)

2018-05-29 Thread Łukasz Jarych
Hi Guys,

I am using an Access FE and a PostgreSQL BE.
To track DML changes I have a history log table working like here:

https://www.fontstuff.com/access/acctut21.htm

It is OK, but this doesn't solve the problem of tracking DDL changes.

For DDL changes I can use the Flyway system, but how do I combine this
with my DML tracking system?

Do you have any solutions for this?

My boss wants to have the possibility to save the current version (state) of
the database with DML and DDL changes, and to easily revert and go back to a
previous version (rolling back all changes).

I do not think that PostgreSQL has triggers on DDL changes - or is this
the most optimal solution?
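(For what it's worth, PostgreSQL 9.3 and later do support DDL event triggers, which could feed the same kind of history table. A minimal sketch follows; the audit table and function names are illustrative, not part of any existing setup:)

```sql
-- Hypothetical audit table; columns default to the acting user and time.
CREATE TABLE ddl_history (
    event_time  timestamptz DEFAULT now(),
    username    text DEFAULT current_user,
    command_tag text,
    object_type text,
    object_name text
);

-- Log every completed DDL command via pg_event_trigger_ddl_commands().
CREATE OR REPLACE FUNCTION log_ddl() RETURNS event_trigger AS $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
        INSERT INTO ddl_history (command_tag, object_type, object_name)
        VALUES (r.command_tag, r.object_type, r.object_identity);
    END LOOP;
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER track_ddl ON ddl_command_end
    EXECUTE PROCEDURE log_ddl();
```

This only records DDL; reverting to an earlier schema version would still be Flyway's job.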

Best Wishes,
Luke


Re: existence of a savepoint?

2018-05-29 Thread David G. Johnston
On Tuesday, May 29, 2018, Stuart McGraw  wrote:

>  But in my case I don't control the size of the input data
>

Not in production but you have an idea of both size and complexity and
should be able to generate performance test scenarios, and related
monitoring queries (system and service) to obtain some idea.  The specifics
are beyond my experience but this is not brand new technology and people
have done similar stuff with it before.

And, as an extension to what you said, given such lack of control you are
going to want to monitor performance in production anyway even with an
assumed bullet-resistant solution.

David J.


Re: existence of a savepoint?

2018-05-29 Thread Stuart McGraw

On 05/29/2018 05:01 PM, Alvaro Herrera wrote:

On 2018-May-29, Stuart McGraw wrote:


Alternatively if there were a setting to tell Postgresql to
follow the SQL standard behavior of overwriting rather than stacking
savepoints, that too would also solve my current problem I think.
Perhaps it is just my limited experience but the former behavior
has always seemed more useful in practice than the latter.


I think if what we're doing breaks the semantics of the SQL spec, we're
definitely open to changing our behavior.


My comment was based on the statement in the PostgreSQL docs for SAVEPOINT:

  "SQL requires a savepoint to be destroyed automatically when another
  savepoint with the same name is established. In PostgreSQL, the old
  savepoint is kept, though only the more recent one will be used when
  rolling back or releasing. (Releasing the newer savepoint with RELEASE
  SAVEPOINT will cause the older one to again become accessible to ROLLBACK
  TO SAVEPOINT and RELEASE SAVEPOINT.) Otherwise, SAVEPOINT is fully SQL
  conforming."


But that wouldn't solve your
problem today.  What I think could solve your problem today is a
C-language extension that uses xact.c callbacks in order to expose a
list that you can query from user space.


Even that won't solve it unless I can find someone who has already
written it and is willing to share it and my Googling hasn't found
anything yet.  :-(



Re: existence of a savepoint?

2018-05-29 Thread Stuart McGraw

On 05/29/2018 05:18 PM, David G. Johnston wrote:

On Tue, May 29, 2018 at 4:01 PM, Alvaro Herrera wrote:

On 2018-May-29, Stuart McGraw wrote:

> Alternatively if there were a setting to tell Postgresql to
> follow the SQL standard behavior of overwriting rather than stacking
> savepoints, that too would also solve my current problem I think.
> Perhaps it is just my limited experience but the former behavior
> has always seemed more useful in practice than the latter.

I think if what we're doing breaks the semantics of the SQL spec, we're
definitely open to changing our behavior.  But that wouldn't solve your
problem today.  What I think could solve your problem today is a
C-language extension that uses xact.c callbacks in order to expose a
list that you can query from user space.

​Stuart:​

That said, have you measured this "leaking" and can show that it is non-trivial 
(given the large size of the overall transaction)?


No I haven't and am not sure how I would.  Are you saying I shouldn't worry 
about it and just not bother releasing any of the savepoints?  I would feel a 
little uneasy about that the same way I would feel about a program that never 
freed allocated memory or closed open files.  If I know there are relatively 
small limits on how much data will be processed or how long the program will 
run, sure.  But in my case I don't control the size of the input data and I 
don't understand the internals of savepoints so I think caution is prudent.

Also I'm not sure the warnings against premature optimization when talking 
about code performance tweaks apply to resource leaks.  The former attempt to 
make a program run faster but don't (in theory) affect its correctness.  
Resource problems often show up unexpectedly and catastrophically.  So being 
more preemptively concerned about the latter I think is justified.


Beyond that, bulk ETL leveraging SAVEPOINT is not something I've encountered or 
contemplated.  Expecting and reacting to errors is expensive and itself 
error-prone.  I'd much rather try to design something where failure is 
simply bad - usually by bulk loading with fewer constraints and then ensuring 
that future queries don't attempt to do something illegal like insert 
duplicates.


Funny you should say that :-)  I am looking at rewriting these import programs 
(there are several) to do just that.  But it is not a trivial job and in the 
meantime I need to keep what already exists, working.





Re: reduce number of multiple values to be inserted

2018-05-29 Thread tango ward
On Wed, May 30, 2018 at 8:40 AM, Rob Sargent  wrote:

>
>
>
> On 05/29/2018 06:36 PM, Adrian Klaver wrote:
>
>> On 05/29/2018 05:10 PM, tango ward wrote:
>>
>>>
>>> Hi,
>>>
>> Not sure where you are pulling the data from and how it is ordered, but
>> an outline:
>>
>> data_input (Assuming sorted by village and then age)
>>
>> for village in data_input:
>>     for age in village:
>>         curr_pgsql.execute('''
>>             INSERT INTO student (created, modified, name,
>>                                  address, age, level)
>>             VALUES (current_timezone, current_timezone,
>>                     %s, %s, %s, 2)''', (name, village, age))
>>
>>
>>>
>>> Thanks,
>>> J
>>>
>>>
>>
>> You might need random(name)? ;)



Noted. Thank you Sirs. I love you both.


Re: reduce number of multiple values to be inserted

2018-05-29 Thread Rob Sargent





On 05/29/2018 06:36 PM, Adrian Klaver wrote:

On 05/29/2018 05:10 PM, tango ward wrote:


Hi,
Not sure where you are pulling the data from and how it is ordered, 
but an outline:


data_input (Assuming sorted by village and then age)

for village in data_input:
    for age in village:
        curr_pgsql.execute('''
            INSERT INTO student (created, modified, name,
                                 address, age, level)
            VALUES (current_timezone, current_timezone,
                    %s, %s, %s, 2)''', (name, village, age))




Thanks,
J





You might need random(name)? ;)



Re: reduce number of multiple values to be inserted

2018-05-29 Thread Adrian Klaver

On 05/29/2018 05:10 PM, tango ward wrote:


Hi,

I am working on inserting multiple values for a table. I need to insert 
3 values of data for each age of the students from the same village. It 
will be 3 different ages of student per village.


My sample code:


curr_pgsql.execute('''
    INSERT INTO student (created, modified, name,
     address, age, level
     )
    VALUES (current_timezone, current_timezone,
    'Scott', 'Malayan Village', 21, 2),
   (current_timezone, current_timezone,
    'Ben', 'Malayan Village', 21, 2),
   (current_timezone, current_timezone,
    'Scott', 'Malayan Village', 21, 2),

   (current_timezone, current_timezone,
    'Andrew', 'Malayan Village', 25, 2),
   (current_timezone, current_timezone,
    'Larry', 'Malayan Village', 25, 2),
   (current_timezone, current_timezone,
    'Adam', 'Malayan Village', 25, 2),

   (current_timezone, current_timezone,
    'Elisse', 'Malayan Village', 27, 2),
   (current_timezone, current_timezone,
    'Xena', 'Malayan Village', 27, 2),
   (current_timezone, current_timezone,
    'Karen', 'Malayan Village', 27, 2)

I will repeat the same process for 13 villages so that will be 117 of 
values. I would like to know if there's a way to reduce the script? This 
has to be done strictly via script.


Not sure where you are pulling the data from and how it is ordered, but 
an outline:


data_input (Assuming sorted by village and then age)

for village in data_input:
    for age in village:
        curr_pgsql.execute('''
            INSERT INTO student (created, modified, name,
                                 address, age, level)
            VALUES (current_timezone, current_timezone,
                    %s, %s, %s, 2)''', (name, village, age))
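A runnable variant of the outline above, in pure Python. The nested input structure and the `build_rows` helper are hypothetical (the thread never shows the real data source), and the database call is left commented out since it needs a live connection:

```python
# Flatten a hypothetical {village: {age: [names]}} structure into
# parameter tuples for one parameterized INSERT statement.
def build_rows(data_input):
    rows = []
    for village, ages in data_input.items():
        for age, names in ages.items():
            for name in names:
                rows.append((name, village, age))
    return rows

data_input = {
    "Malayan Village": {
        21: ["Scott", "Ben", "Scott"],
        25: ["Andrew", "Larry", "Adam"],
        27: ["Elisse", "Xena", "Karen"],
    },
}

rows = build_rows(data_input)
# With 13 villages x 3 ages x 3 names this yields the 117 rows mentioned
# in the thread; one village gives 9.
# curr_pgsql.executemany('''
#     INSERT INTO student (created, modified, name, address, age, level)
#     VALUES (current_timestamp, current_timestamp, %s, %s, %s, 2)''', rows)
```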




Thanks,
J




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: reduce number of multiple values to be inserted

2018-05-29 Thread Rob Sargent


> On May 29, 2018, at 6:32 PM, tango ward  wrote:
> 
> On Wed, May 30, 2018 at 8:29 AM, Rob Sargent wrote:
> 
> Is “current_timezone, current_timezone” just a typo?  I think you need to 
> make the 117 data lines and load using \copy
> 
> 
> 
> Sorry, yeah, it's current_timestamp.  
> 

David and I are suggesting the same thing.  You have 117 unique student 
definitions, so you need to explicitly define each of them. That can be in your 
python code, or in a file your python code reads to generate the inserts, or 
simply a psql script which reads the file using \copy tablename from file.  
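A sketch of the file-generating approach (the input structure and helper name are hypothetical). The resulting CSV can then be loaded with something like `\copy student (name, address, age, level) from 'students.csv' csv`, letting `created`/`modified` be filled by column defaults:

```python
import csv
import io

# Write one CSV line per student: name, village, age, constant level 2.
# Returns the number of rows written.
def write_students(out, villages):
    w = csv.writer(out)
    n = 0
    for village, ages in villages.items():
        for age, names in ages.items():
            for name in names:
                w.writerow([name, village, age, 2])
                n += 1
    return n

# In real use, open("students.csv", "w", newline="") instead of StringIO.
buf = io.StringIO()
count = write_students(buf, {"Malayan Village": {21: ["Scott", "Ben"]}})
```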

Re: reduce number of multiple values to be inserted

2018-05-29 Thread tango ward
On Wed, May 30, 2018 at 8:29 AM, Rob Sargent  wrote:

>
> Is “current_timezone, current_timezone” just a typo?  I think you need to
> make the 117 data lines and load using \copy
>
>

Sorry, yeah, it's current_timestamp.


Re: reduce number of multiple values to be inserted

2018-05-29 Thread Rob Sargent



> On May 29, 2018, at 6:10 PM, tango ward  wrote:
> 
> 
> Hi,
> 
> I am working on inserting multiple values for a table. I need to insert 3 
> values of data for each age of the students from the same village. It will be 
> 3 different ages of student per village.
> 
> My sample code:
> 
> 
> curr_pgsql.execute('''
>INSERT INTO student (created, modified, name,
> address, age, level
> )
>VALUES (current_timezone, current_timezone,
>'Scott', 'Malayan Village', 21, 2),
>   (current_timezone, current_timezone,
>'Ben', 'Malayan Village', 21, 2),
>   (current_timezone, current_timezone,
>'Scott', 'Malayan Village', 21, 2),
>  
>   (current_timezone, current_timezone,
>'Andrew', 'Malayan Village', 25, 2),
>   (current_timezone, current_timezone,
>'Larry', 'Malayan Village', 25, 2),
>   (current_timezone, current_timezone,
>'Adam', 'Malayan Village', 25, 2),
>   
>   (current_timezone, current_timezone,
>'Elisse', 'Malayan Village', 27, 2),
>   (current_timezone, current_timezone,
>'Xena', 'Malayan Village', 27, 2),
>   (current_timezone, current_timezone,
>'Karen', 'Malayan Village', 27, 2)
> 
> I will repeat the same process for 13 villages so that will be 117 of values. 
> I would like to know if there's a way to reduce the script? This has to be 
> done strictly via script.
> 
> 
> Thanks,
> J
> 
Is “current_timezone, current_timezone” just a typo?  I think you need to make 
the 117 data lines and load using \copy
 




Re: reduce number of multiple values to be inserted

2018-05-29 Thread tango ward
On Wed, May 30, 2018 at 8:21 AM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Tuesday, May 29, 2018, tango ward  wrote:
>>
>> I will repeat the same process for 13 villages so that will be 117 of
>> values. I would like to know if there's a way to reduce the script? This
>> has to be done strictly via script.
>>
>>
> VALUES and CROSS JOIN might help but you haven't explained the pattern
> well enough, if there is one, to know for sure.  Frankly, you'll probably
> spend more time figuring out the elegant way than just copy-paste-change so
> unless you need to leverage this elsewhere I'd say just brute-force it.
>
> David J.
>
>
Can you elaborate on the idea of brute-forcing this, Sir?


Thanks,
J


Re: reduce number of multiple values to be inserted

2018-05-29 Thread David G. Johnston
On Tuesday, May 29, 2018, tango ward  wrote:
>
> I will repeat the same process for 13 villages so that will be 117 of
> values. I would like to know if there's a way to reduce the script? This
> has to be done strictly via script.
>
>
VALUES and CROSS JOIN might help but you haven't explained the pattern well
enough, if there is one, to know for sure.  Frankly, you'll probably spend
more time figuring out the elegant way than just copy-paste-change so
unless you need to leverage this elsewhere I'd say just brute-force it.
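For the record, even without a clean cross-join pattern the statement itself can shrink by moving the repeated constants out of the VALUES list (a sketch with only two rows shown, assuming current_timestamp is the intended function):

```sql
-- Constants (timestamps, level) appear once; only the varying
-- (name, village, age) triples are listed.
INSERT INTO student (created, modified, name, address, age, level)
SELECT current_timestamp, current_timestamp, v.name, v.village, v.age, 2
FROM (VALUES
    ('Scott', 'Malayan Village', 21),
    ('Ben',   'Malayan Village', 21)
) AS v(name, village, age);
```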

David J.


reduce number of multiple values to be inserted

2018-05-29 Thread tango ward
Hi,

I am working on inserting multiple values for a table. I need to insert 3
values of data for each age of the students from the same village. It will
be 3 different ages of student per village.

My sample code:


curr_pgsql.execute('''
   INSERT INTO student (created, modified, name,
address, age, level
)
   VALUES (current_timezone, current_timezone,
   'Scott', 'Malayan Village', 21, 2),
  (current_timezone, current_timezone,
   'Ben', 'Malayan Village', 21, 2),
  (current_timezone, current_timezone,
   'Scott', 'Malayan Village', 21, 2),

  (current_timezone, current_timezone,
   'Andrew', 'Malayan Village', 25, 2),
  (current_timezone, current_timezone,
   'Larry', 'Malayan Village', 25, 2),
  (current_timezone, current_timezone,
   'Adam', 'Malayan Village', 25, 2),

  (current_timezone, current_timezone,
   'Elisse', 'Malayan Village', 27, 2),
  (current_timezone, current_timezone,
   'Xena', 'Malayan Village', 27, 2),
  (current_timezone, current_timezone,
   'Karen', 'Malayan Village', 27, 2)

I will repeat the same process for 13 villages, so that will be 117 sets of
values. I would like to know if there's a way to reduce the script? This
has to be done strictly via script.


Thanks,
J


Re: existence of a savepoint?

2018-05-29 Thread David G. Johnston
On Tue, May 29, 2018 at 4:01 PM, Alvaro Herrera 
wrote:

> On 2018-May-29, Stuart McGraw wrote:
>
> > Alternatively if there were a setting to tell Postgresql to
> > follow the SQL standard behavior of overwriting rather than stacking
> > savepoints, that too would also solve my current problem I think.
> > Perhaps it is just my limited experience but the former behavior
> > has always seemed more useful in practice than the latter.
>
> I think if what we're doing breaks the semantics of the SQL spec, we're
> definitely open to changing our behavior.  But that wouldn't solve your
> problem today.  What I think could solve your problem today is a
> C-language extension that uses xact.c callbacks in order to expose a
> list that you can query from user space.


​Stuart:​

That said, have you measured this "leaking" and can show that it is
non-trivial (given the large size of the overall transaction)?

Beyond that, bulk ETL leveraging SAVEPOINT is not something I've encountered
or contemplated.  Expecting and reacting to errors is expensive and itself
error-prone.  I'd much rather try to design something where failure is
simply bad - usually by bulk loading with fewer constraints and then
ensuring that future queries don't attempt to do something illegal like
insert duplicates.

David J.


Re: Pgagent is not reading pgpass file either in Windows or Linux.

2018-05-29 Thread Adrian Klaver

On 05/29/2018 03:57 PM, nageswara Bandla wrote:

#1. Windows:

My guess because the LocalSystem user does not have permissions on your:
%APPDATA%/postgresql/pgpass.conf
file. This seems to be confirmed by it working when you run pgagent as 
the login user.

Also:
passfile
     Specifies the name of the file used to store passwords (see Section 
33.15). Defaults to ~/.pgpass, or %APPDATA%\postgresql\pgpass.conf on
Microsoft Windows. (No error is reported if this
file does not exist.)



Does that mean we cannot use pgagent (when installed) as a 
"LocalSystem" service account and define PGPASSFILE to a valid 
accessible location?


Not sure as I do not use Windows much and do not have a good handle on 
how it handles permissions. You might try raising an issue below to see 
if you can get some guidance:


https://github.com/postgres/pgagent/issues

Since I have tried copying the pgpass.conf file to C:\pgpass.conf and 
defining PGPASSFILE=C:\pgpass.conf, it was still throwing the "no 
password supplied" error.


#2. Linux
What happens if from the terminal as the postgres user you do?:
/path/to/pgagent 'connection string'

pgagent throws the same error from the terminal as the postgres user as 
well. I have enabled debug logging mode. It's not showing me which 
password file it's trying to access.


Remember there is a difference between the postgres system user and the 
postgres database user. So when you did the above, were you in the 
postgres system user's shell? That is where you set up the .pgpass file. 
Running as the postgres database user from another system account will not 
work unless you create a .pgpass file in that home directory as well.
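For reference, a .pgpass line matching the connection shown below has the form host:port:database:user:password, one entry per line (the password here is a placeholder, and the file must be mode 0600):

```
127.0.0.1:5432:linuxpostgresdb:postgres:yourpassword
```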


It also seems that you have more than one instance of Postgres 
installed, and it looks like the 8.4.20 version is being found first. So 
I wonder if there is a library compatibility issue going on, given that 
8.4 is well past EOL.




When I run psql as the postgres user, it works fine.
/usr/bin/pgagent_96 -f -l 2 hostaddr=127.0.0.1 dbname=linuxpostgresdb 
user=postgres port=5432

DEBUG: Creating primary connection
DEBUG: Connection Information:
DEBUG:      user         : postgres
DEBUG:      port         : 5432
DEBUG:      host         : 127.0.0.1
DEBUG:      dbname       : linuxpostgresdb
DEBUG:      password     :
DEBUG:      conn timeout : 0
DEBUG: Connection Information:
DEBUG:      user         : postgres
DEBUG:      port         : 5432
DEBUG:      host         : 127.0.0.1
DEBUG:      dbname       : linuxpostgresdb
DEBUG:      password     :
DEBUG:      conn timeout : 0
DEBUG: Creating DB connection: user=postgres port=5432 
hostaddr=127.0.0.1 dbname=linuxpostgresdb
WARNING: Couldn't create the primary connection (attempt 1): 
fe_sendauth: no password supplied

DEBUG: Clearing all connections
DEBUG: Connection stats: total - 1, free - 0, deleted - 1

# psql command is running fine:
psql -h 127.0.0.1 -d linuxpostgresdb -U postgres

psql (8.4.20, server 9.6.6)
WARNING: psql version 8.4, server version 9.6.
          Some psql features might not work.
Type "help" for help.

linuxpostgresdb=#







--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Pgagent is not reading pgpass file either in Windows or Linux.

2018-05-29 Thread nageswara Bandla
#1. Windows:

My guess because the LocalSystem user does not have permissions on your:
%APPDATA%/postgresql/pgpass.conf
file. This seems to be confirmed by it working when you run pgagent as the
login user.
Also:
passfile
Specifies the name of the file used to store passwords (see Section
33.15). Defaults to ~/.pgpass, or %APPDATA%\postgresql\pgpass.conf on
Microsoft Windows. (No error is reported if this file does not exist.)


Does that mean we cannot use pgagent (when installed) as a "LocalSystem"
service account and define PGPASSFILE to a valid accessible location?
Since I have tried copying the pgpass.conf file to C:\pgpass.conf and
defining PGPASSFILE=C:\pgpass.conf, it was still throwing the "no
password supplied" error.


#2. Linux
What happens if from the terminal as the postgres user you do?:
/path/to/pgagent 'connection string'

pgagent throws the same error from the terminal as the postgres user as
well. I have enabled debug logging mode. It's not showing me which
password file it's trying to access.

When I run psql as the postgres user, it works fine.

/usr/bin/pgagent_96 -f -l 2 hostaddr=127.0.0.1 dbname=linuxpostgresdb
user=postgres port=5432
DEBUG: Creating primary connection
DEBUG: Connection Information:
DEBUG:  user : postgres
DEBUG:  port : 5432
DEBUG:  host : 127.0.0.1
DEBUG:  dbname   : linuxpostgresdb
DEBUG:  password :
DEBUG:  conn timeout : 0
DEBUG: Connection Information:
DEBUG:  user : postgres
DEBUG:  port : 5432
DEBUG:  host : 127.0.0.1
DEBUG:  dbname   : linuxpostgresdb
DEBUG:  password :
DEBUG:  conn timeout : 0
DEBUG: Creating DB connection: user=postgres port=5432 hostaddr=127.0.0.1
dbname=linuxpostgresdb
WARNING: Couldn't create the primary connection (attempt 1): fe_sendauth:
no password supplied
DEBUG: Clearing all connections
DEBUG: Connection stats: total - 1, free - 0, deleted - 1

# psql command is running fine:
psql -h 127.0.0.1 -d linuxpostgresdb -U postgres

psql (8.4.20, server 9.6.6)
WARNING: psql version 8.4, server version 9.6.
 Some psql features might not work.
Type "help" for help.

linuxpostgresdb=#




On Tue, May 29, 2018 at 3:32 PM, Adrian Klaver 
wrote:

> On 05/29/2018 12:14 PM, nageswara Bandla wrote:
>
>> The intention of this post is to find out ways to run pgagent without
>> passing password in its connection string.
>>
>> *Windows:*
>>
>> I have installed pgagent on windows and configured to run under Local
>> System account.
>>
>> Command:-
>>
>> C:\PostgreSQL\bigsql\pgagent\bin\pgagent.exe INSTALL pgagent -l 2 -u
>> LocalSystem hostaddr=127.0.0.1 dbname=postgresdb user=postgres
>>
>> I have logged into my windows account; where my profile has pgpass.conf
>> in %APPDATA%/postgresql/pgpass.conf file.
>>
>> 127.0.0.1:5432:*:postgres:postgres1
>>
>> Pgagent is not started and throws error –“The pgagent service on Local
>> Computer started and then stopped. Some services stop automatically if they
>> are not in use by other services or programs”.
>>
>> Eventvwr log messages have these error messages - “Couldn't create the
>> primary connection (attempt 10): fe_sendauth: no password supplied”
>>
>> *PGPASSFILE env variable:*
>>
>> As per the link
>> (https://www.postgresql.org/docs/9.6/static/libpq-pgpass.html) I set
>> to point to pgpass.conf location. Even then, it’s throwing same above error
>> message. I have found out that pgagent is not reading pgpass.conf file when
>> configured under LocalSystem account.
>>
>> When I change the properties of the pgagent service to run under my login
>> user account. Then, it’s reading pgpass.conf file under
>> %APPDATA%/postgresql/pgpass.conf.
>>
>> I am clueless, why pgagent is not honoring PGPASSFILE env variable.
>>
>
> My guess because the LocalSystem user does not have permissions on your:
>
> %APPDATA%/postgresql/pgpass.conf
>
> file. This seems to be confirmed by it working when you run pgagent as the
> login user.
>
> Also:
>
> passfile
>
> Specifies the name of the file used to store passwords (see Section
> 33.15). Defaults to ~/.pgpass, or %APPDATA%\postgresql\pgpass.conf on
> Microsoft Windows. (No error is reported if this file does not exist.)
> ^^
>
>
>
>
>> *Linux:*
>>
>> In Linux, I have installed pgagent_96 using yum command. And tried to run
>> pgagent using command /etc/init.d/pgagent_96 from postgres user account.
>>
>
> This is going to depend on what pgagent_96 is doing?
>
>
>> It fails with error message – “WARNING: Couldn't create the primary
>> connection (attempt 2): fe_sendauth: no password supplied”
>>
>
> What happens if from the terminal as the postgres user you do?:
>
> /path/to/pgagent 'connection string'
>
>
>
>> I made sure that I have .pgpass configuration file with 0600 permissions
>> in postgres user home directory.
>>
>> I have tried by defining PGPASSFILE env variable in the postgres user
>> account. But it's not working.

Re: existence of a savepoint?

2018-05-29 Thread Stuart McGraw

On 05/29/2018 08:38 AM, Alvaro Herrera wrote:

On 2018-May-27, Stuart McGraw wrote:


Is there some way to test if a savepoint of a given name
exists?  Or better yet, the number of stacked savepoints of
that name?


What is the use case for this?


I have a process that extracts data from a source and inserts
it into a Postgresql database with one big commit at the end.
The flow is not straight forward: the data is read in blocks
that contain sub-blocks and certain database errors (duplicate
keys, etc) may need to be rolled back to the previous insert,
sub-block or block depending on context.  Trying to keep track
of which savepoints are active without "leaking" them (failing
to release when no longer needed) in the presence of many branches
and exception handlers is not impossible but would be much easier
if I could introspect the savepoint state rather than having to
try and track it myself.
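Lacking server-side introspection, one client-side workaround is to mirror the savepoint stack in a small wrapper. This is only a sketch: the class and method names are hypothetical, and the returned SQL strings would be handed to a real cursor in practice.

```python
class SavepointTracker:
    """Mirror PostgreSQL's savepoint stack client-side so code can ask
    whether a name exists and how many times it is stacked."""

    def __init__(self):
        self.stack = []  # savepoint names, innermost last; duplicates allowed

    def savepoint(self, name):
        self.stack.append(name)
        return f"SAVEPOINT {name}"  # SQL to pass to cursor.execute()

    def release(self, name):
        # RELEASE destroys the most recent savepoint of this name and
        # every savepoint established after it.
        i = len(self.stack) - 1 - self.stack[::-1].index(name)
        del self.stack[i:]
        return f"RELEASE SAVEPOINT {name}"

    def rollback_to(self, name):
        # ROLLBACK TO keeps the named savepoint but discards later ones.
        i = len(self.stack) - 1 - self.stack[::-1].index(name)
        del self.stack[i + 1:]
        return f"ROLLBACK TO SAVEPOINT {name}"

    def count(self, name):
        return self.stack.count(name)

t = SavepointTracker()
t.savepoint("blk")
t.savepoint("sub")
t.savepoint("blk")  # same name stacked, as PostgreSQL allows
```

The fragile part is keeping this mirror in sync with what the server actually executed, which is exactly the bookkeeping burden described above.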

Alternatively if there were a setting to tell Postgresql to
follow the SQL standard behavior of overwriting rather than stacking
savepoints, that too would also solve my current problem I think.
Perhaps it is just my limited experience but the former behavior
has always seemed more useful in practice than the latter.



Re: existence of a savepoint?

2018-05-29 Thread Stuart McGraw

On 05/27/2018 04:39 PM, David G. Johnston wrote:

On Sunday, May 27, 2018, Stuart McGraw wrote:

Is there some way to test if a savepoint of a given name
exists?  Or better yet, the number of stacked savepoints of
that name?

A scan of the documentation doesn't show any commands or functions that would 
provide this information.


I was hoping that I was overlooking something.  :-(



Re: pgdg-keyring (or apt-key) failure on fresh 9.6 install

2018-05-29 Thread Tim Cross


Moreno Andreo  writes:

> Hi Tim,
>
> Il 29/05/2018 00:06, Tim Cross ha scritto:
>> Moreno Andreo  writes:
>>
>>> Hi folks,
>>> I'm trying to install Postgresql 9.6 on a test machine in Google Cloud
>>> Platform
>>> After a fresh install with Debian 9 (just after the instance has been
>>> created) I follow steps from here
>>>
>>> https://wiki.postgresql.org/wiki/Apt
>>>
>>> (instead of pg 10 I install pg 9.6)
>>>
>>> During the installation process i encounter the following strange
>>> warnings that, even if that's a test machine, make me think twice before
>>> going ahead.
>>>
>>> [...]
>>> Processing triggers for man-db (2.7.6.1-2) ...
>>> Setting up pgdg-keyring (2017.3) ...
>>> Removing apt.postgresql.org key from trusted.gpg: Warning: The postinst
>>> maintainerscript of the package pgdg-keyring
>>> Warning: seems to use apt-key (provided by apt) without depending on
>>> gnupg or gnupg2.
>>> Warning: This will BREAK in the future and should be fixed by the
>>> package maintainer(s).
>>> Note: Check first if apt-key functionality is needed at all - it
>>> probably isn't!
>>> OK
>>> Setting up xml-core (0.17) ...
>>> [...]
>>>
>>> I have to say that the installation is successful and the database server
>>> goes up, apparently with no problems at all.
>>>
>> This looks like a warning for the package maintainers regarding ensuring
>> the package depends on either gnupg or gnupg2 and nothing you need to
>> worry about unless you are building/maintaining deb packages for postgres.
> Brilliant. That's what I needed to know. Just to avoid bitter surprises 
> in the future... :-)
>>
>> The Debian package manager, apt, uses gpg keys to verify the
>> authenticity of packages it downloads. My guess is that previously, you
>> only needed to ensure the package had a dependency on apt-key and now
>> apt has/is changing such that you need to have an explicit dependency on
>> either gnupg or gnupg2.
>>
> ... so if I update/upgrade this instance in the future it will be 
> automatically fixed (and there shouldn't be issues), right?
> Thanks a lot!
>

Right. In fact, there are no issues now. That warning is from the Debian
package management system and about the package management system, so
nothing to do with Postgres.

When you upgrade in the future, provided the new Postgres packages have
been created with the dependency for gnupg/gnupg2, there will be no
warnings.

Personally, I tend to prefer using the packages which come with the
particular flavour of Linux you're installing, as they are often more
in line with the current version of the package management system being
used. I only grab packages from the specific Postgres repo if the
package is not in the current version of the distribution I'm
installing. 

-- 
Tim Cross



Re: How to drop a value from an ENUM?

2018-05-29 Thread Tom Lane
Torsten Förtsch  writes:
> I am absolutely sure a certain value of one of my ENUM types is not used in
> the entire database. Now I am asked to drop that value. Unfortunately,
> there is no ALTER TYPE DROP VALUE.

Yup.

> On my development box I tried
> delete from pg_enum
>  where enumtypid='my_type_name'::regtype
>    and enumlabel='my_label'
> It worked and I could not find any adverse effects.
> Given the value is not used anywhere, is this a safe command?

No.  If it were, we'd have an ALTER DROP VALUE command.

The key problem that is hard to fix here is that, even if today you have
no live rows containing that value, it may still be present in indexes.
In a btree, for example, the value might've migrated up into upper index
pages as a page boundary value.  Once that's happened, it's likely to
persist indefinitely, even if the live occurrences in the underlying table
get deleted and vacuumed away.

Now, even if that's happened, you might be okay, because of the
optimizations that typically allow enum value comparisons to be done
without consulting pg_enum.  But if you're in the habit of altering
enums, it's that much more likely that you would have done an ALTER TYPE
that defeats those optimizations; so I wouldn't rely on this.  Sooner
or later you're going to get burnt by complaints about an invalid enum
value (not sure of the exact wording) when you access certain parts
of the index.

You could maybe get around all of that by reindexing any indexes
containing the altered enum type after you're certain that all
entries of the unwanted enum value are dead and vacuumed away.
But it's not terribly safe.
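The commonly recommended safe route is to rebuild the type rather than touch pg_enum at all (a sketch; the type, label, table, and column names are placeholders):

```sql
-- Recreate the enum without the unwanted label, then swap it in.
ALTER TYPE my_type_name RENAME TO my_type_name_old;

CREATE TYPE my_type_name AS ENUM ('keep_a', 'keep_b');  -- minus my_label

ALTER TABLE my_table
    ALTER COLUMN my_col TYPE my_type_name
    USING my_col::text::my_type_name;

DROP TYPE my_type_name_old;
```

The USING cast fails if any row still holds the dropped label, which makes this approach self-checking, at the cost of rewriting the table.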

regards, tom lane



Re: Pgagent is not reading pgpass file either in Windows or Linux.

2018-05-29 Thread Adrian Klaver

On 05/29/2018 12:14 PM, nageswara Bandla wrote:
The intention of this post is to find out ways to run pgagent without 
passing the password in its connection string.


*Windows:*

I have installed pgagent on windows and configured to run under Local 
System account.


Command:-

C:\PostgreSQL\bigsql\pgagent\bin\pgagent.exe INSTALL pgagent -l 2 -u 
LocalSystem hostaddr=127.0.0.1 dbname=postgresdb user=postgres


I have logged into my windows account; where my profile has pgpass.conf 
in %APPDATA%/postgresql/pgpass.conf file.


127.0.0.1:5432:*:postgres:postgres1

Pgagent is not started and throws error –“The pgagent service on Local 
Computer started and then stopped. Some services stop automatically if 
they are not in use by other services or programs”.


Eventvwr log messages have these error messages - “Couldn't create the 
primary connection (attempt 10): fe_sendauth: no password supplied”


*PGPASSFILE env variable:*

As per the link- 
(https://www.postgresql.org/docs/9.6/static/libpq-pgpass.html) I set 
PGPASSFILE environment variable to point to pgpass.conf location. Even 
then, it’s throwing same above error message. I have found out that 
pgagent is not reading pgpass.conf file when configured under 
LocalSystem account.


When I change the properties of the pgagent service to run under my 
login user account. Then, it’s reading pgpass.conf file under 
%APPDATA%/postgresql/pgpass.conf.


I am clueless, why pgagent is not honoring PGPASSFILE env variable.


My guess because the LocalSystem user does not have permissions on your:

%APPDATA%/postgresql/pgpass.conf

file. This seems to be confirmed by it working when you run pgagent as 
the login user.


Also:

passfile

Specifies the name of the file used to store passwords (see Section 
33.15). Defaults to ~/.pgpass, or %APPDATA%\postgresql\pgpass.conf on 
Microsoft Windows. (No error is reported if this file does not exist.)

^^





*Linux:*

In Linux, I have installed pgagent_96 using yum command. And tried to 
run pgagent using command /etc/init.d/pgagent_96 from postgres user account.


This is going to depend on what pgagent_96 is doing?



It fails with error message – “WARNING: Couldn't create the primary 
connection (attempt 2): fe_sendauth: no password supplied”


What happens if, from the terminal as the postgres user, you do:

/path/to/pgagent 'connection string'



I made sure that I have a .pgpass configuration file with 0600 permissions 
in the postgres user's home directory.


I have tried defining the PGPASSFILE env variable in the postgres user 
account, but it's not working.


pgagent starts only when we pass the password in its connection 
string, which is not good practice at all.






--
Adrian Klaver
adrian.kla...@aklaver.com



Pgagent is not reading pgpass file either in Windows or Linux.

2018-05-29 Thread nageswara Bandla
The intention of this post is to find out ways to run pgagent without
passing the password in its connection string.



*Windows:*

I have installed pgagent on Windows and configured it to run under the
Local System account.



Command:-

C:\PostgreSQL\bigsql\pgagent\bin\pgagent.exe  INSTALL pgagent -l 2 -u
LocalSystem hostaddr=127.0.0.1 dbname=postgresdb user=postgres



I have logged into my Windows account, where my profile has pgpass.conf in
%APPDATA%/postgresql/pgpass.conf.



127.0.0.1:5432:*:postgres:postgres1



pgagent does not start and throws the error – “The pgagent service on Local
Computer started and then stopped. Some services stop automatically if they
are not in use by other services or programs”.

Eventvwr log messages have these error messages - “Couldn't create the
primary connection (attempt 10): fe_sendauth: no password supplied”



*PGPASSFILE env variable:*

As per the link- (
https://www.postgresql.org/docs/9.6/static/libpq-pgpass.html)   I set
the PGPASSFILE environment variable to point to the pgpass.conf location. Even
then, it throws the same error message as above. I have found out that pgagent
does not read the pgpass.conf file when configured under the LocalSystem account.



When I change the properties of the pgagent service to run under my login
user account, it reads pgpass.conf under
%APPDATA%/postgresql/pgpass.conf.



I am clueless as to why pgagent is not honoring the PGPASSFILE env variable.





*Linux:*

In Linux, I have installed pgagent_96 using the yum command, and tried to run
pgagent using the command /etc/init.d/pgagent_96 from the postgres user account.

It fails with error message – “WARNING: Couldn't create the primary
connection (attempt 2): fe_sendauth: no password supplied”

I made sure that I have a .pgpass configuration file with 0600 permissions in
the postgres user's home directory.



I have tried defining the PGPASSFILE env variable in the postgres user account,
but it's not working.



pgagent starts only when we pass the password in its connection string,
which is not good practice at all.


Re: Login with LDAP authentication takes 5 seconds

2018-05-29 Thread Laurenz Albe
Andreas Schmid wrote:
> I configured my PostgreSQL 10 DB on Debian 9.2 with LDAP authentication 
> (simple bind mode).
> While this basically works, it has the strange effect that the first login 
> with psql
> takes around 5 seconds. When I reconnect within 60 seconds, the login 
> completes immediately.
> 
> The LDAP server is behind a firewall. So for a test, in pg_hba.conf I put the 
> LDAP servers
> IP address instead of its DNS name (for parameter ldapserver). Like that, all 
> logins
> complete immediately. But in general I prefer specifying the DNS name rather 
> than the IP.
> 
> When I checked on the DB machine with the following commands
> host my.ldap.server.org
> dig my.ldap.server.org
> both always returned the host name and IP address of the LDAP server 
> immediately. 
> 
> Does anyone of you have an explanation for this, or a hint, where I could do 
> some further
> investigation?

I would run a network trace with timestamps to see where the time is spent.
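
Before reaching for a full trace, a cheap first check is to time one name 
lookup from the DB host, since getaddrinfo() in libpq resolves the name 
through the same NSS configuration that getent uses (a sketch with stated 
assumptions: 'localhost' stands in for the real LDAP server name, and date's 
%N nanosecond format assumes GNU coreutils):

```shell
# Hedged sketch: time a single name lookup via getent, which goes through
# /etc/nsswitch.conf much like libpq's getaddrinfo() does.
# 'localhost' is a placeholder for the real LDAP server's DNS name.
host=localhost
start=$(date +%s%N)
getent hosts "$host" > /dev/null
end=$(date +%s%N)
elapsed_ms=$(( (end - start) / 1000000 ))
echo "resolving $host took ${elapsed_ms} ms"
```

The reported pattern — first login slow, then fast for about 60 seconds — 
would be consistent with a resolver cache entry (nscd, systemd-resolved, or 
a slow or timed-out AAAA answer) expiring; the network trace should confirm 
or rule that out.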

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Re: existence of a savepoint?

2018-05-29 Thread Alvaro Herrera
On 2018-May-27, Stuart McGraw wrote:

> Is there some way to to test if a savepoint of a given name
> exists?  Or better yet, the number of stacked savepoints of
> that name?

What is the use case for this?

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



LDAP authentication slow

2018-05-29 Thread C GG
This is PostgreSQL 9.5 -- We just enabled LDAP(S) authentication (to an
Active Directory server) for a certain grouping of users

pg_hba.conf

#...

hostssl all +ldap_group 0.0.0.0/0 ldap ldaptls="1" ldapserver="
hostssl all all 0.0.0.0/0 md5

#...

I'm getting complaints from the users authenticating using LDAP that
database operations are taking quite a bit longer than they were previously
when they were authenticating with MD5-stored passwords. Clearly, there's
more machinery at work with that kind of operation, but the increase in
time is way more than I would have estimated.

I still want them to be able to type in a password, so GSSAPI is out as an
alternative (right?) ... Is there something I can do to help speed things
up? Is there any telemetry that I can generate (logs, stats, etc.) which
might be able to pinpoint a bottleneck?
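
On the server side, one low-effort source of such telemetry is the 
connection log; a sketch using standard postgresql.conf settings (%m 
prefixes each line with a millisecond-resolution timestamp):

```
# postgresql.conf — timestamp every connection attempt
log_connections = on
log_disconnections = on
log_line_prefix = '%m [%p] user=%u db=%d '
```

Comparing the gap between the "connection received" and "connection 
authorized" lines for an LDAP user against an md5 user should show whether 
the extra time really is spent inside authentication rather than in the 
queries themselves.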

Thanks all,

CG


Login with LDAP authentication takes 5 seconds

2018-05-29 Thread Andreas Schmid
 Hi,

I configured my PostgreSQL 10 DB on Debian 9.2 with LDAP authentication
(simple bind mode). While this basically works, it has the strange effect
that the first login with psql takes around 5 seconds. When I reconnect
within 60 seconds, the login completes immediately.

The LDAP server is behind a firewall. So for a test, in pg_hba.conf I put
the LDAP servers IP address instead of its DNS name (for parameter
ldapserver). Like that, all logins complete immediately. But in general I
prefer specifying the DNS name rather than the IP.

When I checked on the DB machine with the following commands
host my.ldap.server.org
dig my.ldap.server.org
both always returned the host name and IP address of the LDAP server
immediately.

Does anyone of you have an explanation for this, or a hint, where I could
do some further investigation?

Thanks,
Andy


Update rules on views

2018-05-29 Thread Maroš Kollár
Hello,

I am currently evaluating multiple ways of denying certain updates on
records AND indicating whether an update was denied because it did not
match some criteria or if it simply was not found.

One of these methods uses the rule system and behaves in an odd
way. The unexpected results are indicated below.

-- Function for debugging
CREATE OR REPLACE FUNCTION public.test_debug(
message text
) RETURNS text
LANGUAGE plpgsql
AS $function$
BEGIN
RAISE NOTICE 'Calling test_debug: %', message;
RETURN message;
END; $function$;

DROP TABLE test CASCADE ;
-- Main table
CREATE TABLE test(id int primary key, animal text NOT NULL, sound text
NOT NULL);
-- Populate table
INSERT into test VALUES (1,'rabbit','purr'),(2,'fox','shriek');
-- Create simple view
CREATE VIEW test_view AS SELECT * FROM test;
-- Unconditional fallback update rule
CREATE OR REPLACE RULE test_fallback AS ON UPDATE TO test_view
DO INSTEAD NOTHING;
-- Rule to deny update if animal is the same
CREATE RULE test_deny AS ON UPDATE TO test_view
WHERE NEW.animal = OLD.animal
DO INSTEAD (SELECT test_debug('deny'));
-- Rule to allow update if animal was changed
CREATE OR REPLACE RULE test_allow AS ON UPDATE TO test_view
WHERE NEW.animal <> OLD.animal
DO INSTEAD (
UPDATE test SET
animal = NEW.animal,
sound = NEW.sound
WHERE id = OLD.id;
SELECT test_debug('allow');
);

-- Test an update that should be denied by the rule since animal is the same
UPDATE test_view SET animal = 'rabbit', sound = 'bark' WHERE id = 1;
-- NOTICE:  Calling test_debug: deny
--  test_debug
-- ------------
--  deny
-- (1 row)
--
-- UPDATE 0

-- Check if the record was not altered
SELECT * FROM test WHERE id = 1;
--  id | animal | sound
-- ++---
--   1 | rabbit | purr
-- (1 row)

-- Test an update that should be accepted by the rule since animal is different
UPDATE test_view SET animal = 'bear',sound = 'roar'  WHERE id = 2;
-- NOTICE:  Calling test_debug: deny <--- expecting allow instead !
--  test_debug
-- ------------
--  deny <--- expecting allow instead !
-- (1 row)
--
-- UPDATE 1

-- Check if the record was altered
SELECT * FROM test WHERE id = 2;
--  id | animal | sound
-- ++---
--   2 | bear   | roar   <--- record was updated as expected,
although it returned 'deny'
-- (1 row)

-- Test an update on a record that does not exist
UPDATE test_view SET animal = 'dog',sound = 'bark'  WHERE id = 3;
-- test_debug
-- ------------
-- (0 rows)
--
-- UPDATE 0

Why do I see 'deny' on an update that was handled by the 'test_allow' rule?

All tests were run on postgres 9.6.7.

Cheers
Maroš



Re: How to drop a value from an ENUM?

2018-05-29 Thread Melvin Davidson
On Mon, May 28, 2018 at 11:08 PM, Torsten Förtsch 
wrote:

> Hi,
>
> I am absolutely sure a certain value of one of my ENUM types is not used
> in the entire database. Now I am asked to drop that value. Unfortunately,
> there is no ALTER TYPE DROP VALUE.
>
> On my development box I tried
>
> delete from pg_enum
>  where enumtypid='my_type_name'::regtype
>and enumlabel='my_label'
>
> It worked and I could not find any adverse effects.
>
> Given the value is not used anywhere, is this a save command?
>
> Thanks,
> Torsten
>




Well, IMHO, you should avoid enums at all cost. As you have discovered,
enums are hard to maintain and have long been replaced by foreign keys.
With that being said, apparently your command was safe. However, the
following is the best way to drop/delete an enum.

1. Determine that the particular enum value is NOT referenced by any
   column of any table in the database.

2. As a superuser, use the following queries:

SELECT t.typname,
       e.enumlabel,
       e.enumsortorder,
       e.enumtypid
  FROM pg_type t
  JOIN pg_enum e ON e.enumtypid = t.oid
 WHERE t.typtype = 'e'
   AND e.enumlabel = 'your_enum_value'
 ORDER BY 1, e.enumsortorder;

DELETE FROM pg_enum
 WHERE enumtypid = <enumtypid from the query above>
   AND enumlabel = 'your_enum_value';


Melvin Davidson
Maj. Database & Exploration Specialist
Universe Exploration Command – UXC
Employment by invitation only!


Re: SQL problem (forgot to change header with earlier post!).

2018-05-29 Thread Adrian Klaver

On 05/29/2018 06:52 AM, Adrian Klaver wrote:

On 05/29/2018 05:05 AM, Paul Linehan wrote:

Hi again, and thanks for your efforts on my behalf!


WITH num AS
(
    SELECT count (*) as cnt1 FROM v1
  ),
div AS
(
    SELECT count (*) as cnt2 FROM v2
  )
  SELECT (num.cnt1::numeric/div.cnt2)
 From num cross join div;



I've tried running this code 4 different ways and none of them work -
your original and my efforts to tweak the code!

This always ends up giving just 1 (integer division - using float) or
1.00 (using numeric).


It would, each view has only a single row for the count value. From the 
fiddle:


SELECT * FROM v1;

cnt1
13

SELECT * FROM v2;

cnt2
11

So doing:

SELECT count (*) as cnt1 FROM v1 (or v2)

is going to return 1 in both cases and 1/1 = 1.

Change:

SELECT count (*) as cnt1 FROM v1

SELECT count (*) as cnt2 FROM v2

to

SELECT cnt1 FROM v1

SELECT cnt2 FROM v1


Cut and paste error, should be:

SELECT cnt2 FROM v2





Check out the fiddle here:
https://dbfiddle.uk/?rdbms=postgres_10=b1bd443baf16d85dee0436333a6fd919 




You could have also written it like your first statement without the 
CTEs.

This way requires joining the tables with a cross or Cartesian join.


Yes, the first statement is the way to go on this particular case, but
I'm also trying to understand the ins and outs of CTEs, so I'm
interested in solving this one!


Thanks again,


Rgs,


Pól...




Todd









--
Adrian Klaver
adrian.kla...@aklaver.com



Re: SQL problem (forgot to change header with earlier post!).

2018-05-29 Thread Adrian Klaver

On 05/29/2018 05:05 AM, Paul Linehan wrote:

Hi again, and thanks for your efforts on my behalf!


WITH num AS
(
SELECT count (*) as cnt1 FROM v1
  ),
div AS
(
SELECT count (*) as cnt2 FROM v2
  )
  SELECT (num.cnt1::numeric/div.cnt2)
 From num cross join div;



I've tried running this code 4 different ways and none of them work -
your original and my efforts to tweak the code!

This always ends up giving just 1 (integer division - using float) or
1.00 (using numeric).


It would, each view has only a single row for the count value. From the 
fiddle:


SELECT * FROM v1;

cnt1
13

SELECT * FROM v2;

cnt2
11

So doing:

SELECT count (*) as cnt1 FROM v1 (or v2)

is going to return 1 in both cases and 1/1 = 1.

Change:

SELECT count (*) as cnt1 FROM v1

SELECT count (*) as cnt2 FROM v2

to

SELECT cnt1 FROM v1

SELECT cnt2 FROM v1



Check out the fiddle here:
https://dbfiddle.uk/?rdbms=postgres_10=b1bd443baf16d85dee0436333a6fd919



You could have also written it like your first statement without the CTEs.
This way requires joining the tables with a cross or Cartesian join.


Yes, the first statement is the way to go on this particular case, but
I'm also trying to understand the ins and outs of CTEs, so I'm
interested in solving this one!


Thanks again,


Rgs,


Pól...




Todd






--
Adrian Klaver
adrian.kla...@aklaver.com



Re: SQL problem (forgot to change header with earlier post!).

2018-05-29 Thread Paul Linehan
Hi again, and thanks for your efforts on my behalf!

> WITH num AS
> (
>SELECT count (*) as cnt1 FROM v1
>  ),
> div AS
> (
>SELECT count (*) as cnt2 FROM v2
>  )
>  SELECT (num.cnt1::numeric/div.cnt2)
> From num cross join div;


I've tried running this code 4 different ways and none of them work -
your original and my efforts to tweak the code!

This always ends up giving just 1 (integer division - using float) or
1.00 (using numeric).

Check out the fiddle here:
https://dbfiddle.uk/?rdbms=postgres_10=b1bd443baf16d85dee0436333a6fd919


> You could have also written it like your first statement without the CTEs.
> This way requires joining the tables with a cross or Cartesian join.

Yes, the first statement is the way to go on this particular case, but
I'm also trying to understand the ins and outs of CTEs, so I'm
interested in solving this one!


Thanks again,


Rgs,


Pól...



> Todd



Re: SQL problem (forgot to change header with earlier post!).

2018-05-29 Thread Paul Linehan
Hi and grazie for your reply!


> If it's not an exercise, I think you don't need them


Not an exercise - I have to use the VIEW though - this was only a
sample. In real life the VIEW is trickier!


> select (select count(*) from t1) / (select count(*) from t2)::float

Looks as if the CAST was part of it.

Check here: 
https://dbfiddle.uk/?rdbms=postgres_10=9a15766de01946d7f57b4298d8fb1028

Thanks for your input!


Pól...



Re: SQL problem (forgot to change header with earlier post!).

2018-05-29 Thread Paul Linehan
Hi, and thanks for taking the trouble to reply!


> WITH num AS
> (
>   SELECT count (*) as cnt1 FROM v1
> ),
> div AS
> (
>   SELECT count (*) as cnt2 FROM v2
> )
> SELECT (num.cnt1::numeric/div.cnt2);

I get this error

ERROR:  missing FROM-clause entry for table "num"
LINE 9: SELECT (num.cnt1::numeric/div.cnt2);

Check out the fiddle here -
https://dbfiddle.uk/?rdbms=postgres_10=9fbe33f971b12ce637d03c1e7e452831


> Casting as numeric just in case you might have integer division...


Yeah, forgot about the CASTing bit for the other method!

Thanks again!


Pól...


> Todd



Re: SQL problem (forgot to change header with earlier post!).

2018-05-29 Thread Moreno Andreo

On 29/05/2018 13:14, Paul Linehan wrote:

Hi all,

I have a problem that I just can't seem to solve:

I want to divide the count of one table by the count of another -
seems simple enough!
I created simple VIEWs with counts of the tables, but I just can't
grasp the logic!

If it's not an exercise, I think you don't need them

DDL and DML (simplified) at the bottom of post.


I tried various combinations of things like basic SELECTs.


SELECT avg FROM ((SELECT cnt1 FROM v1)/(SELECT cnt2 FROM v2));


Maybe I didn't catch the problem, but

select (select count(*) from t1) / (select count(*) from t2)::float

should be a starting point (if you need an integer as a return value, 
simply remove the ::float at the end).
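
The trap the ::float cast avoids is ordinary integer division, which 
truncates in any integer-arithmetic context, not just SQL; a two-line shell 
illustration using the same row counts as this thread (13 and 11 — this 
only mirrors the semantics, it does not touch the database):

```shell
# 13 rows in t1, 11 rows in t2, as in the fiddle.
echo $(( 13 / 11 ))                       # shell arithmetic is integer-only, prints 1
awk 'BEGIN { printf "%.2f\n", 13 / 11 }'  # forced floating point, prints 1.18
```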


HTH
Moreno.-




Re: binaries for 11 beta compiled with --with-llvm?

2018-05-29 Thread Andreas Kretschmer
On 29 May 2018 13:12:33 CEST, Paul Linehan  wrote:
>Hi all,
>
>I have a problem that I just can't seem to solve:
>

Please create a new thread for a new question.


Regards, Andreas

-- 
2ndQuadrant - The PostgreSQL Support Company



SQL problem (forgot to change header with earlier post!).

2018-05-29 Thread Paul Linehan
Hi all,

I have a problem that I just can't seem to solve:

I want to divide the count of one table by the count of another -
seems simple enough!
I created simple VIEWs with counts of the tables, but I just can't
grasp the logic!

DDL and DML (simplified) at the bottom of post.


I tried various combinations of things like basic SELECTs.


SELECT avg FROM ((SELECT cnt1 FROM v1)/(SELECT cnt2 FROM v2));

and I also tried to use CTEs as follows:

WITH num AS
(
  SELECT cnt1 FROM v1
),
div AS
(
  SELECT cnt2 FROM v2
)
SELECT (num.cnt1/div.cnt2);

Should you require any further information or if this should be on
another list, please don't hesitate to contact me and/or let me know.

I would appreciate a short explanation of where I'm going wrong also.

TIA and rgs,


Pól...

== DDL and DML


CREATE TABLE t1 (x INT);

INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);


CREATE VIEW v1 AS (SELECT COUNT(*) AS cnt1 FROM t1);

CREATE TABLE t2 (y INT);

INSERT INTO t2 VALUES (5);
INSERT INTO t2 VALUES (5);
INSERT INTO t2 VALUES (5);
INSERT INTO t2 VALUES (5);
INSERT INTO t2 VALUES (5);
INSERT INTO t2 VALUES (5);
INSERT INTO t2 VALUES (5);
INSERT INTO t2 VALUES (5);
INSERT INTO t2 VALUES (5);
INSERT INTO t2 VALUES (5);
INSERT INTO t2 VALUES (5);

CREATE VIEW v2 AS (SELECT COUNT(*) AS cnt2 FROM t2);



Re: binaries for 11 beta compiled with --with-llvm?

2018-05-29 Thread Paul Linehan
Hi all,

I have a problem that I just can't seem to solve:

I want to divide the count of one table by the count of another -
seems simple enough!
I created simple VIEWs with counts of the tables, but I just can't
grasp the logic!

DDL and DML (simplified) at the bottom of post.


I tried various combinations of things like basic SELECTs.


SELECT avg FROM ((SELECT cnt1 FROM v1)/(SELECT cnt2 FROM v2));

and I also tried to use CTEs as follows:

WITH num AS
(
  SELECT cnt1 FROM v1
),
div AS
(
  SELECT cnt2 FROM v2
)
SELECT (num.cnt1/div.cnt2);

Should you require any further information or if this should be on
another list, please don't hesitate to contact me and/or let me know.

I would appreciate a short explanation of where I'm going wrong also.

TIA and rgs,



== DDL and DML


CREATE TABLE t1 (x INT);

INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (4);


CREATE VIEW v1 AS (SELECT COUNT(*) AS cnt1 FROM t1);

CREATE TABLE t2 (y INT);

INSERT INTO t2 VALUES (5);
INSERT INTO t2 VALUES (5);
INSERT INTO t2 VALUES (5);
INSERT INTO t2 VALUES (5);
INSERT INTO t2 VALUES (5);
INSERT INTO t2 VALUES (5);
INSERT INTO t2 VALUES (5);
INSERT INTO t2 VALUES (5);
INSERT INTO t2 VALUES (5);
INSERT INTO t2 VALUES (5);
INSERT INTO t2 VALUES (5);

CREATE VIEW v2 AS (SELECT COUNT(*) AS cnt2 FROM t2);

2018-05-29 7:47 GMT+01:00 Thomas Kellerer :
> On 29.05.2018 at 08:28, Andres Freund wrote:
>>> But neither the Linux binaries nor the Windows binaries were compiled with 
>>> the --with-llvm option
>>> (will JITting be possible with Windows at all?)
>>
>> Not in 11.
>
> I assumed that ;)
> Not a real problem.
>
> But what about Linux binaries with JITting enabled?
> I would like to test some of our DWH queries to see if that improves 
> performance
>
> Or do I need to compile Postgres 11 myself to get that?
>
> Thomas
>
>
>



Re: pgdg-keyring (or apt-key) failure on fresh 9.6 install

2018-05-29 Thread Moreno Andreo

Hi Tim,

On 29/05/2018 00:06, Tim Cross wrote:

Moreno Andreo  writes:


Hi folks,
I'm trying to install Postgresql 9.6 on a test machine in Google Cloud
Platform
After a fresh install with Debian 9 (just after the instance has been
created) I follow steps from here

https://wiki.postgresql.org/wiki/Apt

(instead of pg 10 I install pg 9.6)

During the installation process I encounter the following strange 
warnings that, even if that's a test machine, make me think twice before
going ahead.

[...]
Processing triggers for man-db (2.7.6.1-2) ...
Setting up pgdg-keyring (2017.3) ...
Removing apt.postgresql.org key from trusted.gpg: Warning: The postinst
maintainerscript of the package pgdg-keyring
Warning: seems to use apt-key (provided by apt) without depending on
gnupg or gnupg2.
Warning: This will BREAK in the future and should be fixed by the
package maintainer(s).
Note: Check first if apt-key functionality is needed at all - it
probably isn't!
OK
Setting up xml-core (0.17) ...
[...]

I have to say that the installation is successful and the database server 
comes up, apparently with no problems at all.


This looks like a warning for the package maintainers regarding ensuring
the package depends on either gnupg or gnupg2 and nothing you need to
worry about unless you are building/maintaining deb packages for postgres.
Brilliant. That's what I needed to know. Just to avoid bitter surprises 
in the future... :-)


The Debian package manager, apt, uses gpg keys to verify the
authenticity of packages it downloads. My guess is that previously, you
only needed to ensure the package had a dependency on apt-key, and now
apt is changing such that you need to have an explicit dependency on
either gnupg or gnupg2.

... so if I update/upgrade this instance in the future it will be 
automatically fixed (and there shouldn't be issues), right?

Thanks a lot!

Moreno.-




Re: binaries for 11 beta compiled with --with-llvm?

2018-05-29 Thread Christoph Moench-Tegeder
## Thomas Kellerer (spam_ea...@gmx.net):

> But what about Linux binaries with JITting enabled? 

The Debian packages do have JIT enabled.
https://www.postgresql.org/download/linux/debian/

Regards,
Christoph

-- 
Spare Space



Re: binaries for 11 beta compiled with --with-llvm?

2018-05-29 Thread Thomas Kellerer
On 29.05.2018 at 08:28, Andres Freund wrote:
>> But neither the Linux binaries nor the Windows binaries were compiled with 
>> the --with-llvm option 
>> (will JITting be possible with Windows at all?)
> 
> Not in 11.

I assumed that ;) 
Not a real problem.

But what about Linux binaries with JITting enabled? 
I would like to test some of our DWH queries to see if that improves performance.

Or do I need to compile Postgres 11 myself to get that? 

Thomas





Re: binaries for 11 beta compiled with --with-llvm?

2018-05-29 Thread Andres Freund
Hi,

On 2018-05-29 07:54:52 +0200, Thomas Kellerer wrote:
> But neither the Linux binaries nor the Windows binaries were compiled with 
> the --with-llvm option 
> (will JITting be possible with Windows at all?)

Not in 11.

Greetings,

Andres Freund