Re: [GENERAL] Retrieve the server's time zone

2017-11-14 Thread Thomas Kellerer
Tom Lane schrieb am 14.11.2017 um 15:36:
>> is there any way (short of writing a function in an untrusted PL)
>> to determine the actual time zone (or time) of the server OS?
> 
> AFAIK that would only be true if some part of your client stack
> is issuing a SET TIMEZONE command.  (libpq will do that if it finds
> a PGTZ environment variable set, but not in response to plain TZ.)

Ah, interesting. I do that through JDBC, so apparently that's the part to 
blame. 
 
> If that's true, and you can't/don't want to change it, you could try
> 
> select reset_val from pg_settings where name = 'TimeZone';

Hmm, this does not seem to work. 

I am connected to a server whose time zone is Asia/Bangkok, but through JDBC 
that query still returns Europe/Berlin (which is my client's time zone).

So apparently the JDBC driver somehow "persists" this setting. 
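
One way to check whether the driver (and not the server configuration) set the 
value is the "source" column of pg_settings. A sketch, assuming a session 
opened through the driver in question:

   -- if the JDBC driver issued SET TIMEZONE at connect time, "source"
   -- reports "session" (or "client") instead of "configuration file"
   select name, setting, source, boot_val, reset_val
   from pg_settings
   where name = 'TimeZone';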

I will take this to the JDBC mailing list then, thanks.

Thomas



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Retrieve the server's time zone

2017-11-14 Thread Thomas Kellerer
Hello,

is there any way (short of writing a function in an untrusted PL) to determine 
the actual time zone (or time) of the server OS? 

"show timezone" always returns the client's time zone. 

localtimestamp also converts the server's time to the client's time zone (the 
one defined by "timezone").

log_timezone is also unreliable as it can be changed to anything.

I am looking for something along the lines of: "show server_timezone" or 
"select current_timestamp at time zone server_timezone" 

Is that possible? 






Re: [GENERAL] idle in transaction, why

2017-11-06 Thread Thomas Kellerer
Rob Sargent schrieb am 06.11.2017 um 23:09:
> Gosh I wish I could learn to proof-read my posts.
> My support crew graciously set
> 
> idle_transaction_timeout = 1
> 
> Now to ponder if I need zero or some large number.

The unit of that setting is milliseconds (if no unit is specified). 
Zero disables that feature.

One millisecond seems like an awfully short period to allow a transaction to be 
idle. 

I would figure values in "minutes" to be more realistic depending on the 
workload and characteristics of the application. 

A transaction that has several seconds of "think time" between individual 
statements doesn't seem that unrealistic. 
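
For illustration, assuming the setting meant here is Postgres 9.6's 
idle_in_transaction_session_timeout, a value in minutes can be given with an 
explicit unit:

   -- abort any transaction that stays idle for more than 5 minutes;
   -- without a unit, the number is interpreted as milliseconds
   set idle_in_transaction_session_timeout = '5min';

   -- 0 (the default) disables the timeout entirely
   set idle_in_transaction_session_timeout = 0;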


Thomas






[GENERAL] Why does a SELECT query cause "dirtied" buffers?

2017-10-27 Thread Thomas Kellerer

When using explain (analyze, buffers) things like this can show up:

Buffers: shared hit=137 read=6545 dirtied=46

Given the description in the manual:

The number of blocks dirtied indicates the number of previously unmodified 
blocks that were changed by this query

I am a bit surprised why a SELECT (read-only) query would cause buffers to be 
dirtied?

Under which situation does a SELECT query change a block?

Regards
Thomas
 







Re: [GENERAL] Monitoring Tool for PostgreSQL

2017-10-18 Thread Thomas Kellerer

Fabricio Pedroso Jorge schrieb am 18.10.2017 um 19:37:

is there a "official" monitoring tool for PostgreSQL databases? For
example, i come from Oracle Database, and there, we have Enterprise
Manager to monitor and administrer the product... is there such a
similar tool for PostgreSQL?


There is nothing "official" in the sense that it's developed by the Postgres 
development group.

But OPM looks quite promising:

http://opm.io/

There is a pretty extensive list in the Postgres wiki:

https://wiki.postgresql.org/wiki/Monitoring







Re: [GENERAL] Making subscribers read only in Postgres 10 logical replication

2017-10-17 Thread Thomas Kellerer
rverghese schrieb am 11.10.2017 um 20:38:
> You mean at the user permissions level? Yes, I could, but would mean doing so
> table by table, which is not our current structure. I guess there is nothing
> at the database level.

Not at the database level, but at the schema level:

You can revoke those privileges for all tables in a schema:

  revoke insert, update, delete
    on all tables in schema public
    from the_user;

You can do that for all future tables as well:

  alter default privileges
    in schema public
    revoke insert, update, delete on tables
    from the_user;

Thomas





[GENERAL] Postgres 10 manual breaks links with anchors

2017-10-16 Thread Thomas Kellerer
I don't know if this is intentional, but the Postgres 10 manual started to use 
lowercase IDs as anchors in the manual.

So, if I have e.g.: the following URL open in my browser:

   
https://www.postgresql.org/docs/current/static/sql-createindex.html#sql-createindex-concurrently

I cannot simply switch to an older version by replacing "current" with e.g. 
"9.5" because in the 9.5 manual the anchor was all uppercase, and the URL would 
need to be: 

   
https://www.postgresql.org/docs/9.5/static/sql-createindex.html#SQL-CREATEINDEX-CONCURRENTLY

Is this intentional? 

This also makes "cleaning" up links in e.g. StackOverflow that point to 
outdated versions of the manual a bit more cumbersome. 

Regards
Thomas
 





Re: [GENERAL] Non-overlapping updates blocking each other

2017-10-14 Thread Thomas Kellerer

Melvin Davidson schrieb am 14.10.2017 um 17:32:

More than likely, the optimizer has determined that a table scan is best, in 
which case it will use a table lock.
That means the updates will be blocking each other.


Since when does Postgres lock the whole table during an update?







Re: [GENERAL] Multiple Schemas vs. Multiple Databases

2017-10-13 Thread Thomas Kellerer

Melvin Davidson schrieb am 13.10.2017 um 21:42:

If that is what you need to do, then definitely use multiple schemas.
In PostgreSQL, the only way to do cross db queries / DML, is with the
dblink extension, and from personal use, it is a PIA to use.


dblink is not the only way to do that.

Nowadays, cross-DB queries can quite easily be done using foreign tables (and 
they are quite efficient as well - much more efficient than dblink).

Thomas






Re: [GENERAL] "Shared strings"-style table

2017-10-13 Thread Thomas Kellerer

Seamus Abshere schrieb am 13.10.2017 um 18:43:

On Fri, Oct 13, 2017 at 8:49 AM, Seamus Abshere wrote:

Theoretically / blue sky, could there be a table or column type that
transparently handles "shared strings" like this, reducing size on disk
at the cost of lookup overhead for all queries?
(I guess maybe it's like TOAST, but content-hashed and de-duped and not
only for large objects?)


On Fri, Oct 13, 2017, at 01:29 PM, Melvin Davidson wrote:

What was described is exactly what relations and Foreign Keys are for.


hi Melvin, appreciate the reminder. Our issue is that we have 300+
columns and frequently include them in the SELECT or WHERE clauses... so
traditional normalization would involve hundreds of joins.

That's why I ask about a new table or column type that handles basic
translation and de-duping transparently, keeping the coded values
in-table.



As those are all strings: why not put them into a hstore (or jsonb) column?

Then all of them could be compressed (TOASTed) as a single value instead of 300 
independent values.
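
A minimal sketch of that idea (table and column names are made up for 
illustration):

   -- one jsonb column instead of 300 individual columns;
   -- the whole document is TOASTed/compressed as a single value
   create table properties
   (
     id    integer primary key,
     attrs jsonb
   );

   -- individual keys can still be queried, and a GIN index
   -- can speed up containment searches
   create index idx_properties_attrs on properties using gin (attrs);

   select id
   from properties
   where attrs @> '{"color": "red"}';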


 







[GENERAL] Why does increasing the precision of a numeric column rewrites the table?

2017-10-11 Thread Thomas Kellerer
When increasing the length constraint on a varchar column, Postgres is smart 
enough to not rewrite the table. 

I expected the same thing to be true when increasing the size of a numeric 
column.

However this does not seem to be the case:

Consider the following table:

create table foo 
(
  some_number numeric(12,2)
);


The following statement returns "immediately", regardless of the number of rows 
in the table

alter table foo alter column some_number type numeric(15,2);

However, when running (on the original table definition)

alter table foo alter column some_number type numeric(15,3);

it takes quite a while (depending on the number of rows) which indicates a 
table rewrite is taking place. 

I don't understand why going from numeric(12,2) to numeric(15,3) would require 
a table rewrite. 
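
One way to verify whether a rewrite happened is to compare the table's 
relfilenode before and after the ALTER - if it changed, the table was 
rewritten:

   select relfilenode from pg_class where relname = 'foo';

   alter table foo alter column some_number type numeric(15,2);

   -- same relfilenode as before: no rewrite took place
   select relfilenode from pg_class where relname = 'foo';

   alter table foo alter column some_number type numeric(15,3);

   -- a new relfilenode indicates the table was rewritten
   select relfilenode from pg_class where relname = 'foo';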

Thomas





Re: [GENERAL] Automatically check for anti-patterns in SQL queries

2017-10-09 Thread Thomas Kellerer
Joy Arulraj schrieb am 08.10.2017 um 13:50:
> Hi folks -- We developed a static analysis tool, called SQLCheck, for
> automatically identifying anti-patterns in SQL queries.
> 
> https://github.com/jarulraj/sqlcheck
> 
> Our goal is to provide hints to the developers about potential
> performance and security issues present in SQL queries. I believe
> that integrating such a tool (maybe using a special SQL clause) with
> PostgreSQL can benefit users. Feedback would be much appreciated.

Looks interesting. 

However on Windows 10 this does not seem to work. e.g.:

   sqlcheck -h

simply does nothing. The program briefly starts (I can see the title of my 
cmd.exe window change) but then exits immediately without even showing the help. 

Something like "sqlcheck -f test.sql" also shows no result at all (no error 
message, no output, nothing).

Regards
Thomas
 





Re: [GENERAL] Performance appending to an array column

2017-09-21 Thread Thomas Kellerer

Paul A Jungwirth schrieb am 21.09.2017 um 23:05:

but maybe I could write my own extension to
load regular files into Postgres arrays, sort of getting the best of
both worlds.


There is a foreign data wrapper for that:

   https://github.com/adunstan/file_text_array_fdw

but it's pretty old and seems un-maintained.











Re: [GENERAL] What's the best way in postgres to use ANY() with LIKE '%'?

2017-09-05 Thread Thomas Kellerer
Ryan Murphy schrieb am 05.09.2017 um 16:19:
> Thanks, I saw that page earlier; what I'm looking for is kind of the
> opposite - instead of comparing a single value to see if it matches
> any of a list of patterns, I'm trying to take a list of values and
> see if any of them match a given pattern.
> 

You mean something like this?

https://stackoverflow.com/q/46047339/330315
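
One way to express "does any element of a list match a given pattern" is to 
unnest the values and use EXISTS - a sketch with made-up table and column 
names:

   -- returns the rows where at least one element of the
   -- text array "tags" matches the pattern
   select *
   from some_table t
   where exists (select 1
                 from unnest(t.tags) as x(val)
                 where x.val like 'foo%');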

Thomas






Re: [GENERAL] clustered index benchmark comparing Postgresql vs Mariadb

2017-08-31 Thread Thomas Kellerer
유상지 schrieb am 31.08.2017 um 04:03:
> Cluster secondary indexes were faster than those without cluster indexes in 
> pg, but slower than mariadb.

There is no such thing as a "clustered index" in Postgres. 

The Postgres "cluster" command physically sorts the rows of a table according 
to the sort order of an index, but that is something completely different than 
a "clustered index". The data is still stored separately in the index and in 
the table. 

A clustered index in MariaDB/MySQL stores the entire table data. So the table 
and the index is the same thing (Oracle calls that an "index organized table" 
which describes this a lot better). As the table and index are the same thing 
you can't have two clustered indexes on the same table. 

An index in Postgres only stores the data of the indexed columns (plus an 
internal row identifier). There is no technical difference between a primary 
key index and any other index; the structure and storage are always the same. 
So the term "secondary index" does not really make sense in Postgres.
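
For reference, the CLUSTER command mentioned above is a one-time physical 
reordering and is not maintained for subsequent writes (index and table names 
are made up):

   -- rewrites the table so the rows are physically ordered
   -- following idx_orders_date; new rows are NOT kept in that order
   cluster orders using idx_orders_date;

   -- has to be repeated to re-establish the ordering
   cluster orders;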








[GENERAL] _page_cost parameter with values < 1

2017-07-20 Thread Thomas Kellerer
Hello,

recently I have seen a Postgres configuration with the following values:

  seq_page_cost = 0.5
  random_page_cost = 0.6

Is there any advantage (or maybe disadvantage) compared to using e.g. 1.0 and 
1.2? 
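
For comparison: the defaults are seq_page_cost = 1.0 and random_page_cost = 4.0. 
The parameters can be changed per session to see how the plans react (table and 
column names are made up):

   -- lowering random_page_cost relative to seq_page_cost tells the
   -- planner that random I/O is comparatively cheap (e.g. the data
   -- is cached or on SSDs), making index scans more attractive
   set seq_page_cost = 1.0;
   set random_page_cost = 1.2;

   explain (analyze, buffers)
   select * from some_table where some_column = 42;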

Regards
Thomas
 







Re: [GENERAL] What is exactly a schema?

2017-07-14 Thread Thomas Kellerer
marcelo schrieb am 14.07.2017 um 13:59:
> Could I select a specific schema in the connection string? Say, by example 
> database=mydb.schemanumbertwo ?

The JDBC driver does indeed support that:

   jdbc:postgresql://localhost/somedatabase?currentSchema=some_schema 

I think in the background it then simply runs a 

   set search_path = some_schema;

after the connection has been established.






[GENERAL] Re: Feature proposal, DBURL: psql pgsql://joe:p4zzw...@example.org:2345/dbname

2017-07-05 Thread Thomas Kellerer
Hans Schou schrieb am 05.07.2017 um 14:27:
> The dburl (or dburi) has become common to use by many systems
> connecting to a database. The feature is that one can pass all
> parameters in a string, which has similar pattern as http-URI do.
>
> Especially when using psql in a script, having the credentials in one string 
> is convenient.
> 
> The syntax could be:
>   
> [scheme[<+>dsn]]://[[username[:[password]]@][host][:port][/[dbname][/[[table[/[column[,column...]*]]]|sql]]]

Besides the fact that something like that is already possible: 

What's the use of "table" and "column" in the URI? You connect to a database, 
not to a table.

Thomas





Re: [GENERAL] Download 9.6.3 Binaries

2017-06-29 Thread Thomas Kellerer

Andreas Kretschmer schrieb am 23.06.2017 um 20:58:

I expected to find binaries for 9.6.3 at
https://www.enterprisedb.com/download-postgresql-binaries but I only
see
9.6.2.

Am I looking at the wrong place?

 
Yeah, use the community version from postgresql.org ;-)


Regards, Andreas


There are no Windows binaries available at postgresql.org







Re: [GENERAL] sub-select with multiple records, columns

2017-06-19 Thread Thomas Kellerer

Israel Brewster schrieb am 19.06.2017 um 22:17:

SELECT
  ...
  (SELECT array_agg(to_json(row(notedate, username, note)))
   FROM sabrenotes
     INNER JOIN users ON author = users.id
   WHERE ticket = sabretickets.id) notes
FROM tickets
WHERE ...

The only problem with this query is that the notes aren't sorted. Of
course, simply adding an ORDER BY clause to the sub-select doesn't
work - it throws an error about needing to use notedate in a GROUP BY
clause or aggregate function. Is there some way I can get sorting as
well here? Of course, I could just run a second query to get the
notes, and combine in code, but that's no fun... :-)


You can supply an ORDER BY to an aggregate function:

   array_agg(to_json(row(notedate,username,note)) order by ...)

I have to admit that I fail to see the advantage of an array of JSON objects 
rather than a single JSON value with the elements inside.

json_object_agg() or json_agg() might be better suited for this.
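
A sketch of the json_agg() variant with sorting, loosely reusing the names from 
the quoted query:

   select tickets.*,
          (select json_agg(to_json(row(notedate, username, note))
                           order by notedate)
           from sabrenotes
             join users on author = users.id
           where ticket = tickets.id) as notes
   from tickets;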

Thomas




   






Re: [GENERAL] pg_catalog tables don't have constraints?

2017-06-09 Thread Thomas Kellerer

kbran...@pwhome.com schrieb am 09.06.2017 um 20:57:

Neil Anderson  wrote:


I've been exploring the pg_catalog tables and pointed a couple of tools at
it to extract an ER diagram for a blog post. At first I thought it was a bug
in the drawing tool but it appears that the relationships between the
pg_catalog tables are implicit rather than enforced by the database, is that 
correct?


Every time I have to dive into the pg_* tables, I really want such a diagram 
because
the relationships aren't obvious to me, so I've been looking for a diagram like 
that
and haven't found one. 


https://wiki.postgresql.org/wiki/Developer_FAQ#Is_there_a_diagram_of_the_system_catalogs_available.3F







Re: [GENERAL] dump to pg

2017-06-01 Thread Thomas Kellerer

Nicolas Paris schrieb am 31.05.2017 um 16:43:

Hi,

I have dumps from oracle and microsoft sql server (no more details). Is it possible to 
load them "directly" into postgres  (without oracle/mssql license)?
dump -> csv -> postgtres
or something ?


If those are binary dumps (e.g. a DataPump dump in Oracle or a "backup" in SQL 
Server), then the answer is no.
Only Oracle or SQL Server can read those files.

If they aren't too big, you might get away with installing the express edition 
of the respective DBMS, then importing them using the native tools and 
exporting the data as CSV files.

Thomas






Re: [GENERAL] 9.5 "chained equality" behavior

2017-05-30 Thread Thomas Kellerer

Joshua Ma schrieb am 30.05.2017 um 22:56:

We're going to fix usages of this to
instead do (a = b && a = c) instead of (a = b = c).
 
That has to be (a = b AND a = c)


The && operator is not valid for booleans







Re: [GENERAL] Postgres 10 Beta1 - pg_upgrade fails on Windows 10

2017-05-28 Thread Thomas Kellerer

Adrian Klaver schrieb am 28.05.2017 um 17:51:

After I finally found the EnterpriseDB Beta binaries (they are pretty well 
hidden) I tested with those, and everything works fine there.


For those following along, where would that be?


Here they are:

https://www.enterprisedb.com/products-services-training/pgdevdownload

Note that the "Binaries Version" (the ZIP download) is missing several .exe 
programs (most importantly psql.exe and pg_dump.exe, but some others as well).

The files in the installer package are complete.







Re: [GENERAL] Postgres 10 Beta1 - pg_upgrade fails on Windows 10

2017-05-28 Thread Thomas Kellerer

Tom Lane schrieb am 26.05.2017 um 20:18:

I just tried pg_upgrade from the Postgres 10 Beta1 (from the BigSQL 
distribution) to upgrade a 9.6 cluster.
pg_upgrade --check fails with the following messages:



could not load library "$libdir/pgxml":
ERROR:  could not load library 
"d:/etc/postgres-10/pgsql/lib/postgresql/pgxml.dll": unknown error 126


Apparently BigSQL forgot to include contrib/xml2 in their distribution;
you should ping them about that one.


I can confirm that it's a BigSQL problem.
After I finally found the EnterpriseDB Beta binaries (they are pretty well 
hidden) I tested with those, and everything works fine there.
 

When I then run pg_upgrade without the --check option, it fails when taking the 
schema only dump from one database.
The error message reported in the logfile is:
pg_dump: unrecognized collation provider: p


Ugh :-( ... seems like a rather obvious typo in dumpCollation().
Thanks for finding it!


When I drop that collation from the source database, pg_upgrade works without 
problems with the EnterprsieDB binaries.







Re: [GENERAL] Postgres 10 Beta1 - pg_upgrade fails on Windows 10

2017-05-26 Thread Thomas Kellerer

Adrian Klaver schrieb am 26.05.2017 um 23:41:

If that's coming from port/dynloader/win32.c, as I think it must be
because the non-conformant-to-message-style-guidelines phrase "unknown
error" appears nowhere else in our tree, then that's an error code that
FormatMessage doesn't recognize.  Anybody have a clue what it means?



https://msdn.microsoft.com/en-us/library/windows/desktop/ms681382(v=vs.85).aspx

"The System Error Codes are very broad. Each one can occur in one of
many hundreds of locations in the system. Consequently the
descriptions of these codes cannot be very specific. Use of these
codes requires some amount of investigation and analysis. You need to
note both the programmatic and the run-time context in which these
errors occur. Because these codes are defined in WinError.h for
anyone to use, sometimes the codes are returned by non-system
software. Sometimes the code is returned by a function deep in the
stack and far removed from your code that is handling the error."


ERROR_MOD_NOT_FOUND
126 (0x7E)
The specified module could not be found.


I thought so as well, but as I said the "trace" from ProcessMonitor didn't show 
any errors.

It didn't even log an attempt to load that dll.





Re: [GENERAL] Postgres 10 Beta1 - pg_upgrade fails on Windows 10

2017-05-26 Thread Thomas Kellerer

Adrian Klaver schrieb am 26.05.2017 um 23:16:

 could not load library "$libdir/pgxml":
 ERROR:  could not load library 
"d:/etc/postgres-10/pgsql/lib/postgresql/pgxml.dll": unknown error 126


Apparently BigSQL forgot to include contrib/xml2 in their distribution;
you should ping them about that one.


I wasn't sure where the problem is, I will report this to BigSQL as well.

However, the xml2 extension is included. The extension control file as well as 
libxml2-2.dll.


But the error message is looking for pgxml.dll, not libxml2-2.dll.
Those are clearly different files.


Well Thomas is using pg_upgrade from a BigSQL 10beta1 install to
upgrade from a 9.6 instance of unknown provenance. pg_upgrade is
complaining that the 9.6 cluster has pgxml.dll whereas the 10beta1
cluster does not even though xml2 is installed on both clusters. The
question is: is this a packaging oops on the part of BigSQL or something
else?


Interestingly enough, the same error message appears when I run "create extension 
xml2;" on Postgres 10.
It somehow looks as if there is a DLL dependency between XML2 and whatever uses 
pgxml.dll (the new xmltable perhaps?)


 from a 9.6 instance of unknown provenance


The 9.6 installation is a 9.6.2 binary from EnterpriseDB (unfortunately they 
don't supply the beta binaries, otherwise I would have used those)


Thomas





Re: [GENERAL] Postgres 10 Beta1 - pg_upgrade fails on Windows 10

2017-05-26 Thread Thomas Kellerer

Tom Lane schrieb am 26.05.2017 um 20:18:

I just tried pg_upgrade from the Postgres 10 Beta1 (from the BigSQL 
distribution) to upgrade a 9.6 cluster.
pg_upgrade --check fails with the following messages:



could not load library "$libdir/pgxml":
ERROR:  could not load library 
"d:/etc/postgres-10/pgsql/lib/postgresql/pgxml.dll": unknown error 126


Apparently BigSQL forgot to include contrib/xml2 in their distribution;
you should ping them about that one.


I wasn't sure where the problem is, I will report this to BigSQL as well.

However, the xml2 extension is included. The extension control file as well as 
libxml2-2.dll.


When I then run pg_upgrade without the --check option, it fails when taking the 
schema only dump from one database.
The error message reported in the logfile is:
pg_dump: unrecognized collation provider: p


Ugh :-( ... seems like a rather obvious typo in dumpCollation().
Thanks for finding it!








[GENERAL] Postgres 10 Beta1 - pg_upgrade fails on Windows 10

2017-05-26 Thread Thomas Kellerer

I just tried pg_upgrade from the Postgres 10 Beta1 (from the BigSQL 
distribution) to upgrade a 9.6 cluster.

pg_upgrade --check fails with the following messages:

Performing Consistency Checks
-----------------------------
Checking cluster versions   ok
Checking database user is the install user  ok
Checking database connection settings   ok
Checking for prepared transactions  ok
Checking for reg* system OID user data typesok
Checking for contrib/isn with bigint-passing mismatch   ok
Checking for invalid "unknown" user columns ok
Checking for presence of required libraries fatal
Your installation references loadable libraries that are missing from the
new installation.  You can add these libraries to the new installation,
or remove the functions using them from the old installation.  A list of
problem libraries is in the file:
loadable_libraries.txt

loadable_libraries.txt contains:

  could not load library "$libdir/pgxml":
  ERROR:  could not load library 
"d:/etc/postgres-10/pgsql/lib/postgresql/pgxml.dll": unknown error 126

I ran pg_upgrade as administrator and regular user with the same results. I 
traced the file system calls of pg_upgrade using ProcessMonitor, but I couldn't 
see any obvious problems. I can supply the logfile of that trace if it helps.

Postgres 10 itself runs just fine, and so does e.g. the new XMLTABLE function, 
so I am unsure why pg_upgrade isn't able to load that dll


When I then run pg_upgrade without the --check option, it fails when taking the 
schema only dump from one database.

The error message reported in the logfile is:

  pg_dump: unrecognized collation provider: p

In the database where pg_dump (v10) fails, I have created the following custom 
collation:

  CREATE COLLATION "de" (lc_collate = 'German_Germany.1251', lc_ctype = 
'German_Germany.1251');
  ALTER COLLATION "de" OWNER TO "thomas";

pg_dump (9.6) runs fine with the same arguments on that database

I can supply a (schema only) dump from the database in question if this helps

Regards
Thomas
 






Re: [GENERAL] logical replication in PG10 BETA

2017-05-23 Thread Thomas Kellerer
Igor Neyman schrieb am 22.05.2017 um 21:33:
> Does built-in logical replication work on Windows in PG10 BETA release?
> 
> I can't make it work so far.
> 
> I created Publication on “source” PG server, but when I’m trying to CREATE 
> SUBSCRIPTION… on “destination” server, I’m getting:
> 
> “ERROR: could not connect to the publisher: could not send data to
> server: Socket is not connected (0x2749/10057) could not send SSL
> negotiation packet: Socket is not connected (0x2749/10057) SQL
> state: XX000”

> could not send SSL negotiation packet

sounds more like a SSL setup problem between the two Postgres servers, not 
really related to logical replication. 

Can you try to set up the connections without SSL?
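
To rule SSL out, the subscription's connection string can disable it explicitly 
- a sketch with placeholder host and database names:

   create subscription my_sub
     connection 'host=publisher.example.com port=5432 dbname=source_db user=repl_user sslmode=disable'
     publication my_pub;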






Re: [GENERAL] storing large files in database - performance

2017-05-16 Thread Thomas Kellerer
John R Pierce schrieb am 16.05.2017 um 16:44:
> On 5/16/2017 7:35 AM, Thomas Kellerer wrote:
>> When my (JDBC based) SQL client and the database server are on the same 
>> computer...
> 
> node.js is Javascript, not java w/ jdbc

I know that. 

I mentioned JDBC so that it's clear that the timings were done using a 
different technology

Maybe it's Node.js or the JavaScript "driver" that causes the problems.






Re: [GENERAL] storing large files in database - performance

2017-05-16 Thread Thomas Kellerer
Eric Hill schrieb am 16.05.2017 um 14:25:
> I have a node.js application using PostgreSQL to store uploaded
> files. The column in which I am storing the file contents is of type
> “bytea” with “Storage” type set to “EXTENDED”. Storing a 12.5 MB file
> is taking 10 seconds, and storing a 25MB file is taking 37 seconds.
> Two notable things about those numbers: It seems like a long time,
> and the time seems to grow exponentially with file size rather than
> linearly.
> 
> Do these numbers surprise you? 

When my (JDBC based) SQL client and the database server are on the same 
computer, I can store a 45MB file in about 4 seconds and a 240MB file in about 
20 seconds. 
When I do the same with a server on the (same) network, the 45MB file takes 
about 9 seconds and the 240MB file about 60 seconds.

So yes, these numbers sound a bit surprising. 

> Could there be something about my methodology that is slowing things down?

How far apart are your app server and the database server? 
Maybe it's just a case of a slow network.

Thomas





Re: [GENERAL] Insert performance and disk usage in JSON vs JSONB

2017-05-15 Thread Thomas Kellerer

Ronny Abraham schrieb am 15.05.2017 um 19:25:

4. Insert 10,000 rows to JSON, execution time (sec):

122.855001211

5. Insert 10,000 rows to JSONB, execution time (sec):

122.128999233


What’s interesting is that inserting to JSONB is slightly faster than inserting 
to JSON.


A difference of 0.7 seconds in a single test run is not significant enough to 
warrant the statement "JSONB is slightly faster".

That could very well have been caused by other things going on on your system, 
or maybe just some caching of metadata.

Unless you repeat this at least 4 or 5 times, you can't tell if one is really 
faster than the other.






Re: [GENERAL] Top posting....

2017-05-12 Thread Thomas Kellerer
Tom Lane schrieb am 11.05.2017 um 19:43:
> Bottom posting without trimming is just an awful combination:
> whatever you do, don't do that.
Amen to that. 







Re: [GENERAL] Migration Query

2017-05-09 Thread Thomas Kellerer
PAWAN SHARMA schrieb am 08.05.2017 um 17:13:
> Does one any having list of bottlenecks and workarounds while migrating data
> from Oracle to Postgresql. like what are thing which we can migrate from 
> Oracle database to Postgresql and what we can't?

One thing I have seen regularly is the wrong translation of Oracle's NUMBER 
data type to the corresponding Postgres type. 

e.g. NUMBER(22) is very often translated to a numeric in Postgres but it should 
be translated to integer or bigint. 

numeric is slightly slower and takes more space than integer or bigint.

You also need to think about how you used the DATE data type in Oracle. 
If you rely on the time part, you have to translate that into a TIMESTAMP in 
Postgres.
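A minimal sketch of both mappings (the table and column names are made up for illustration):

```sql
-- Oracle:                         Postgres:
--   id       NUMBER(22)    -->      bigint    (not numeric!)
--   created  DATE          -->      timestamp (if the time part is used)
create table orders (
    id      bigint,
    created timestamp
);
```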

Thomas






Re: [GENERAL] browser interface to forums please?

2017-04-04 Thread Thomas Kellerer
vinny schrieb am 04.04.2017 um 12:01:
> I'd love to have an RSS feed that contains only new questions, so I can just 
> watch the popup
> on my screen the way I do with the rest of the world, and not have to deal 
> with replies to topics that I don't care about anyway.

You can read them as a newsgroup provided by news.gmane.org - I do it that way. 

Thunderbird works quite well with that. 

Thomas







[GENERAL] Suggestion to improve select pg_reload_conf()

2017-04-03 Thread Thomas Kellerer
I would like to suggest an improvement to the select pg_reload_conf() function.

Currently this will only return true or false indicating if reloading was 
successful.

I think it would be a "nice-to-have" if the function would also return the GUCs 
that have been changed, similar to what is being written to the logfile. 

To not break existing code (e.g. scripts that only expect true/false), this 
could be done through an optional boolean parameter (e.g. named "verbose").

Any thoughts?

Thomas





Re: [GENERAL] Do I need to COMMIT an analyze statement?

2017-03-31 Thread Thomas Kellerer

David G. Johnston schrieb am 31.03.2017 um 16:49:

On Friday, March 31, 2017, Thomas Kellerer <spam_ea...@gmx.net> wrote:

Hello,

something I have always wondered:

If I run with autocommit turned OFF, do I need to commit an ANALYZE statement, or is 
that "non-transactional"?


The results are stored in a catalog table so they are transactional.


Thanks.

That's what I assumed. Just wanted to be sure.

Thomas







[GENERAL] Do I need to COMMIT an analyze statement?

2017-03-31 Thread Thomas Kellerer
Hello,

something I have always wondered: 

If I run with autocommit turned OFF, do I need to commit an ANALYZE statement, 
or is that "non-transactional"?

Thomas





Re: [GENERAL] Trigger based logging alternative to table_log

2017-03-27 Thread Thomas Kellerer

Jeff Janes schrieb am 27.03.2017 um 19:07:

I have some code which uses table_log
(http://pgfoundry.org/projects/tablelog/) to keep a log of changes to
selected tables.  I don't use the restore part, just the logging
part.

It creates a new table for each table being logged, with several
additional columns, and adds triggers to insert rows in the new table
for changes in the original.

The problem is that table_log hasn't been maintained in nearly 10
years, and pgfoundry itself seems to have one foot in the grave and
one on a banana peel.

There are several other systems out there which store the data in
hstore or json, which I would probably use if doing this from
scratch.  But I'd rather preserve the existing log tables than either
throw away that data, or port it over to a new format.

Is there any better-maintained code out there which would be
compatible with the existing schema used by table_log?


Logical replication maybe?

I don't know which programming language you are using, but the JDBC driver has 
a nice example:

https://jdbc.postgresql.org/documentation/head/replication.html

The upside of a trigger-based solution is that it's faster.
The downside of logical replication is that the "consumer" has to be running to 
ensure the changes are actually logged.







Re: [GENERAL] Combining INSERT with DELETE RETURNING

2017-03-24 Thread Thomas Kellerer
Alexander Farber schrieb am 24.03.2017 um 16:06:

> the doc https://www.postgresql.org/docs/9.6/static/sql-delete.html states:
> 
> "The syntax of the RETURNING list is identical to that of the output list of 
> SELECT."
> 
> So I am trying to rewrite the INSERT .. ON CONFLICT DO NOTHING followed by a 
> DELETE:
> into a single statement:
> 
> INSERT INTO words_reviews (
> uid,
> author,
> nice,
> review,
> updated
> ) VALUES (
> DELETE FROM words_reviews
> WHERE author <> out_uid
> AND author = ANY(_uids)
> RETURNING
> uid,
> out_uid,-- change to out_uid
> nice,
> review,
> updated
> )
> ON CONFLICT DO NOTHING;

You need a CTE:

with deleted as (
DELETE FROM words_reviews
WHERE author <> out_uid
AND author = ANY(_uids)
RETURNING
uid,
out_uid,-- change to out_uid
nice,
review,
updated
)
INSERT INTO words_reviews (uid, author, nice, review, updated)
select *
from deleted
ON CONFLICT DO NOTHING;





Re: [GENERAL] Incremental / Level -1 backup in PG

2017-03-22 Thread Thomas Kellerer
Rakesh Kumar schrieb am 22.03.2017 um 01:27:
> PG does not have a concept of incremental  backup. 

Postgres doesn't, but external tools can. 

e.g. Barman can do incremental backups:

https://blog.2ndquadrant.com/incremental-backup-barman-1-4-0/






Re: [GENERAL] CenOS 5/Postgresql 9.6

2017-03-17 Thread Thomas Kellerer

Steve Crawford schrieb am 17.03.2017 um 20:15:

I'm aware of all those dates. Also that 9.6 has been out for
six-months, and that RHEL/CentOS 5 are listed among the currently
supported versions at https://yum.postgresql.org/.

The question remains - does anyone know where I might find packages so I don't 
have to compile them myself?


You can download the binaries (no RPM) from here:

https://www.enterprisedb.com/products-services-training/pgbindownload

Or an installer:

https://www.enterprisedb.com/downloads/postgres-postgresql-downloads







Re: [GENERAL] CentOS 7.3, PostgreSQL 9.6.2, PHP 5.4 deliver array as string

2017-03-02 Thread Thomas Kellerer

Alexander Farber schrieb am 02.03.2017 um 21:45:

I am calling this stored function -

CREATE OR REPLACE FUNCTION words_get_board(
in_gid integer
) RETURNS TABLE (
out_bid integer,
out_letters varchar[15][15],
out_values integer[15][15]
) AS
$func$
SELECT
bid,
letters,
values
FROMwords_games
WHERE   gid = in_gid;

$func$ LANGUAGE sql STABLE;

by a PHP script -

$sth = $dbh->prepare(SQL_GET_BOARD);
$sth->execute(array($gid));
if ($row = $sth->fetch(PDO::FETCH_ASSOC)) {
$bid = $row['bid'];
$letters = $row['letters'];
$values  = $row['values'];
}


What exactly is the query in SQL_GET_BOARD?
How exactly are you calling that function?








Re: [GENERAL] Problems with Greatest

2017-02-15 Thread Thomas Kellerer
Teddy Schmitz schrieb am 16.02.2017 um 05:38:
> As a quick follow up I just did an explain on the query,
> 
> 
> Aggregate  (cost=258007258.87..258007258.88 rows=1 width=8)
> ->  Nested Loop  (cost=0.00..184292254.83 rows=14743000807 width=16)
> ->  Seq Scan on t1  (cost=0.00..3796.41 rows=263141 width=8)
> ->  Materialize  (cost=0.00..1088.40 rows=56027 width=8)
> ->  Seq Scan on t2  (cost=0.00..808.27 rows=56027 width=8)
> 
> 
> It seems it has to do a loop on 14 billion rows?  Can someone explain why 
> this would happen?

You are not joining those two tables. 

"from t1, t2" is a cross join that will calculate the Cartesian product of both 
tables.
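A sketch of the difference (the column names "a", "b" and the join column "id" are hypothetical, since the original query isn't shown):

```sql
-- implicit cross join: every row of t1 is combined with every row of t2
-- (263,141 * 56,027 ~ 14.7 billion rows, matching the plan's estimate)
select max(greatest(t1.a, t2.b)) from t1, t2;

-- an explicit join only combines matching rows
select max(greatest(t1.a, t2.b))
from t1
  join t2 on t2.id = t1.id;
```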





[GENERAL] Documentation inconsistency (at least to me)

2017-02-13 Thread Thomas Kellerer

I wonder why regexp_split_to_array() is listed under "String functions and operators" [1] 
but string_to_array() is listed under "Array functions and operators" [2]

I find that a bit inconsistent - I would expect to find both in the same 
chapter.

I would suggest putting both into "String functions and operators" because after 
all string_to_array() does more or less the same as regexp_split_to_array(). But at 
the end of the day I think it's just important that both are in the same chapter (unless 
I overlooked a huge difference between the two that warrants this distinction).


Thomas


[1] https://www.postgresql.org/docs/current/static/functions-string.html
[2] https://www.postgresql.org/docs/current/static/functions-array.html





[GENERAL] Re: Causeless CPU load waves in backend, on windows, 9.5.5 (EDB binary).

2017-02-13 Thread Thomas Kellerer

Nikolai Zhubr schrieb am 13.02.2017 um 23:03:

Maybe I should have been more specific.
What I need is debugging/profiling pure communication side of server
operation, implying huge lots of requests and replies going over the
wire to and from the server within some continued (valid) session,
but so that the server is not actually doing anything above that (no
sql, no locking, no synchronizing, zero usefull activity, just
pumping network I/O)



If you are willing to drop the "no sql" requirement you could use something like

select rpad('*', 1, '*');

this will send a lot of data over the wire, while the SQL overhead should be fairly 
small.

You can send more data if you combine that with e.g. generate_series()
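Combined, a sketch along those lines (the row count and padding length are arbitrary, chosen only to produce a sizeable result set):

```sql
-- returns 10,000 rows of 1,000 characters each (roughly 10 MB of payload)
select rpad('*', 1000, '*')
from generate_series(1, 10000);
```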










Re: [GENERAL] Loose indexscan and partial indexes

2017-02-10 Thread Thomas Kellerer
Peter J. Holzer schrieb am 10.02.2017 um 14:02:
> So it's doing a sequential scan on the initial select in the recursive
> CTE, but using the index on the subsequent selects.
> 
> But why? If it uses the index on
> SELECT MIN(periodizitaet) FROM facttable_imf_ifs WHERE periodizitaet > 'x'
> shouldn't it be able to use the same index on 
> SELECT MIN(periodizitaet) FROM facttable_imf_ifs

What is the definition of the index facttable_imf_ifs_periodizitaet_idx?






Re: [GENERAL] Result of timestamp - timestamp

2017-02-05 Thread Thomas Kellerer
Adrian Klaver schrieb am 05.02.2017 um 22:40:
>> I just stumbled about a report that has been running for a long time now
>> and that relied on the fact that the interval "timestamp - timestamp"
>> always returns an interval with days, hours, minutes. But never a
>> "justified" interval with years, months, days and so on.
> 
> According to the docs:
> 
> https://www.postgresql.org/docs/9.6/static/functions-datetime.html
> 
> "Subtraction of date or timestamp values with the "-" operator returns the 
> number of days (24-hours) and hours/minutes/seconds between the values, 
> making the same adjustments."
> 
> It should always return days and hours.


Ah, great. For some reason I didn't find that. 
 
>> The query uses "extract(day from timestamp - timestamp)" which is
>> working fine, but would apparently fail if a justified interval was
>> returned
> 
> Did that happen?

No it didn't. I just was curious if I relied on something that might change.

Thanks





[GENERAL] Result of timestamp - timestamp

2017-02-05 Thread Thomas Kellerer

Hello,

I just stumbled about a report that has been running for a long time now and that relied on the 
fact that the interval "timestamp - timestamp" always returns an interval with days, 
hours, minutes. But never a "justified" interval with years, months, days and so on.

The query uses "extract(day from timestamp - timestamp)" which is working 
fine, but would apparently fail if a justified interval was returned.

But I wonder if I'm relying on undocumented behaviour or if there is any situation where 
timestamp - timestamp would return a "justified" interval.

So, my question is: will timestamp '2017-02-05 18:19:20' - timestamp '2016-11-18 
23:00:00' always return "78 days 21:00:00"?

Or is there any situation where the returned interval would be "2 mons 18 days 
21:00:00" without using justify_interval() on it.

I couldn't find a clear statement on that in the manual.
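For reference, this is a sketch of the expressions the report relies on:

```sql
-- subtraction of two timestamps yields a days/time interval
select timestamp '2017-02-05 18:19:20' - timestamp '2016-11-18 23:00:00' as diff,
       extract(day from timestamp '2017-02-05 18:19:20'
                        - timestamp '2016-11-18 23:00:00') as days;

-- justify_interval() would be needed to get a months/days representation
select justify_interval(timestamp '2017-02-05 18:19:20'
                        - timestamp '2016-11-18 23:00:00');
```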

Thomas






Re: [GENERAL] Data Modeling Tools - Version specific to Postgres

2017-02-01 Thread Thomas Kellerer

Greg Slawek schrieb am 01.02.2017 um 19:35:


Can anyone recommend a data modeling tool (preferably Mac OSX
compatible)?

I would like to be sure it can export version specific SQL code (ie
9.1 vs 9.4)

I have used Toad Data Modeler years ago on Windows, which was pretty
good at sticking to the differences in each postgres version. I've
seen Navicat advertised a few places which I was considering
purchasing, and also checked out the list at the postgres wiki -
https://wiki.postgresql.org/wiki/GUI_Database_Design_Tools

I am curious if anyone has any experience using one



Have a look at DbSchema: http://www.dbschema.com/

It's Java based and should run on Mac OS as well.






[GENERAL] Re: How does Postgres estimate the memory needed for sorting/aggregating

2017-01-25 Thread Thomas Kellerer

Tomas Vondra schrieb am 25.01.2017 um 22:46:

I guess this is based on the column statistics stored in pg_stats, but I
am not sure:



It is based on the average length of values in that column, yes.


Thanks for confirming that.

I assume this is taken from pg_stats.avg_width ?


I'm not sure what you mean by 'dynamically resize'. The above
decision is pretty much how planner decides whether to use hash
aggregate or group aggregate. If we estimate that the hash aggregate
will fit into work_mem, the planner will consider both possibilities.
If the estimate says hash aggregate would not fit into work_mem,
we'll only consider group aggregate, because that can work with very
little memory.

At execution time we'll only use as much memory as actually needed.
The trouble is that if we under-estimated the amount of memory,
there's no way back.


The "under-estimation" is what I am referring to with "dynamically resize".

What happens if the planner assumes 100kb but in reality it needs 100MB?

Thomas










[GENERAL] How does Postgres estimate the memory needed for sorting/aggregating

2017-01-25 Thread Thomas Kellerer

There was a question on dba.stackexchange recently:

   http://dba.stackexchange.com/a/162117/1822

That question (and the answer) deals with the performance difference of a query 
caused by the _declared_ length of a VARCHAR column in SQL Server (everything 
else being equal - especially the actual data length).

For the curious: it does make a (big) difference in performance whether you declare 
varchar(100) or varchar(2000) in SQL Server - something that really surprised 
me.

The difference in performance in SQL Servers seems to be caused by SQL Server's 
optimizer that uses the _declared_ length of a column to estimate the memory 
needed for the aggregation (or sorting).

Now, we all know that there is no performance difference whatsoever for varchar 
columns regardless of the declared length.

In one of the comments, to that answer the question was asked how Postgres 
knows how much memory it needs to allocate to do the aggregation.

I guess this is based on the column statistics stored in pg_stats, but I am not 
sure:

So here is my question: how does Postgres estimate/know the memory needed for 
the aggregation? Or does it dynamically resize the memory if the initial 
assumption was wrong?
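As an aside, the per-column statistics in question can be inspected directly; avg_width holds the average length of the stored values:

```sql
-- "some_table" is a placeholder for the table being aggregated
select attname, avg_width, n_distinct
from pg_stats
where tablename = 'some_table';
```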

Thomas





Re: [GENERAL] Searching array for multiple items

2017-01-25 Thread Thomas Kellerer
Alex Magnum schrieb am 25.01.2017 um 09:29:
> I can search an array with 1 = ANY('{1,3,4,7}'::int[])
> 
> I need to check for one or multiple items in the array.
> 
> e.g.'1,7,3'  = ANY('{1,3,4,7}'::int[]
> 
> I do need to check if
> a) all items exist in the array

You can use the contains (or is contained) operator for that:

   array[1,7,3] <@ array[1,3,4,7] is true

   array[1,7,10] <@ array[1,3,4,7] is false

> b) at least one item exists in the array

You can use the "overlaps" operator: 

   array[1,7,3] && array[1,3,4,7] returns true

   array[10,11] && array[1,3,4,7] returns false

> Does the order of left and right side matter?

For the contains or (is contained) operator the order matters, for the overlaps 
operator it does not.

For more details see 
https://www.postgresql.org/docs/current/static/functions-array.html
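Both checks can also be combined in one query; the literals below are taken from the examples above:

```sql
select array[1,7,3] <@ array[1,3,4,7]  as all_items_present,  -- true
       array[1,7,10] && array[1,3,4,7] as any_item_present;   -- true (1 and 7 overlap)
```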

Thomas





Re: [GENERAL] Doubts regarding postgres Security

2017-01-21 Thread Thomas Kellerer

Stephen Frost schrieb am 21.01.2017 um 22:37:

So, there is no solution for my first question, we need if users enter the
wrong password more than 5 times than their account gets locked and then
only DBA will unlock this account.


I understood the question and there is an answer- use PAM.



What about LDAP?








Re: [GENERAL] PgPool or alternatives

2017-01-21 Thread Thomas Kellerer

Simon Windsor schrieb am 21.01.2017 um 21:26:

I was wondering if there is another option that will allow me to
spool all ALTER|CREATE|DELETE|DROP|INSERT|UPDATE commands to all
nodes, and SELECTs to any of the connected nodes. The apllication can
actually handle separate READ|WRITE nodes from how it was written for
Oracle.


You could use logical replication: https://2ndquadrant.com/en/resources/pglogical/






[GENERAL] Combining count() and row_number() as window functions

2017-01-19 Thread Thomas Kellerer
I was playing around with a query that essentially looked something like this:

select row_number() over (order by foo_date) as rn, 
   count(*) over () as total_count, 
   f.* 
from foo f;

(The actual query limits the output based on the row_number() for pagination 
purposes, but for this question this turned out to be irrelevant)

I assumed that the count() wouldn't increase the runtime of the query as the 
result of the row_number() can be used to calculate that. 

However it turns out that using a scalar subquery to calculate the count() is 
faster despite the duplicated Seq Scan on the table:

select row_number() over (order by foo_date) as rn,
   (select count(*) from foo) as total_count, 
   f.* 
from foo f

I used the following test setup:

create table foo 
as
select i, 
   date '2000-01-01' + (random() * 17 * 365)::int as foo_date,
   'Some Text '||i as data
from generate_series(1,50) g(i);

explain (analyze, verbose, buffers)
select row_number() over (order by foo_date) as rn, 
   count(*) over () as total_count, 
   f.* 
from foo f;

explain (analyze, verbose, buffers)
select row_number() over (order by foo_date) as rn,
   (select count(*) from foo) as total_count, 
   f.* 
from foo f;

This is the plan for the first query

WindowAgg  (cost=44191.13..49344.89 rows=429480 width=56) (actual 
time=1295.152..1491.642 rows=50 loops=1)
  Output: (row_number() OVER (?)), count(*) OVER (?), i, foo_date, data
  Buffers: shared hit=2048 read=1531 dirtied=1531, temp read=5781 written=5078
  I/O Timings: read=26.022
  ->  WindowAgg  (cost=44191.13..47841.71 rows=429480 width=48) (actual 
time=611.887..987.440 rows=50 loops=1)
Output: foo_date, i, data, row_number() OVER (?)
Buffers: shared hit=2048 read=1531 dirtied=1531, temp read=2123 
written=2123
I/O Timings: read=26.022
->  Sort  (cost=44191.13..45264.83 rows=429480 width=40) (actual 
time=611.872..723.216 rows=50 loops=1)
  Output: foo_date, i, data
  Sort Key: f.foo_date
  Sort Method: external merge  Disk: 16976kB
  Buffers: shared hit=2048 read=1531 dirtied=1531, temp read=2123 
written=2123
  I/O Timings: read=26.022
  ->  Seq Scan on stuff.foo f  (cost=0.00..4008.48 rows=429480 
width=40) (actual time=0.064..158.561 rows=50 loops=1)
Output: foo_date, i, data
Buffers: shared hit=2048 read=1531 dirtied=1531
I/O Timings: read=26.022
Planning time: 0.711 ms
Execution time: 1523.306 ms


and this is the plan for the second query:

WindowAgg  (cost=49273.31..52923.89 rows=429480 width=56) (actual 
time=660.543..1036.534 rows=50 loops=1)
  Output: row_number() OVER (?), $0, f.i, f.foo_date, f.data
  Buffers: shared hit=7158, temp read=2123 written=2123
  InitPlan 1 (returns $0)
->  Aggregate  (cost=5082.18..5082.18 rows=1 width=8) (actual 
time=105.307..105.307 rows=1 loops=1)
  Output: count(*)
  Buffers: shared hit=3579
  ->  Seq Scan on stuff.foo  (cost=0.00..4008.48 rows=429480 width=0) 
(actual time=0.041..54.075 rows=50 loops=1)
Output: foo.i, foo.foo_date, foo.data
Buffers: shared hit=3579
  ->  Sort  (cost=44191.13..45264.83 rows=429480 width=40) (actual 
time=555.216..663.021 rows=50 loops=1)
Output: f.foo_date, f.i, f.data
Sort Key: f.foo_date
Sort Method: external merge  Disk: 16976kB
Buffers: shared hit=3579, temp read=2123 written=2123
->  Seq Scan on stuff.foo f  (cost=0.00..4008.48 rows=429480 width=40) 
(actual time=0.030..107.520 rows=50 loops=1)
  Output: f.foo_date, f.i, f.data
  Buffers: shared hit=3579
Planning time: 0.134 ms
Execution time: 1065.572 ms

I uploaded both plans in case formatting breaks the above:

First query: https://explain.depesz.com/s/BT8y
Second query: https://explain.depesz.com/s/cbTm

The major contributor to the runtime is obviously the order by, which is to be 
expected.
But I am surprised that adding the count(*) in the first query adds additional 
work, as from my perspective the count() could be "derived" from the 
row_number(). 

Is this a case of "just not implemented yet" or a case of "too expensive to 
optimize"?

This is on 9.6.1 

Regards
Thomas
 








[GENERAL] Re: Is it possible to specify an ESCAPE for a wildcard search using LIKE ANY (...)

2017-01-17 Thread Thomas Kellerer
Karsten Hilbert schrieb am 17.01.2017 um 14:42:
>> I recently stumbled over the need to use a wildcard escape character for a 
>> condition that makes use of LIKE ANY, something like:
>>
>>select *
>>from some_table
>>where name like any (array['foo_bar%', 'bar_foo%']) escape '/';
>>
>> so that the underscore wouldn't be treated as a wildcard
> 
> May I ask for clarification:
> 
> Do you need to have the _ NOT be recognized as a wildcard ?

Yes, the underscore should NOT be a wildcard in this case. 
 
Now that I think about it - my question actually doesn't make sense. 

In order to be able to _use_ an escape character I would need to supply one. 
And if I can supply one, I can use the standard one as well. 

Sorry for the noise





[GENERAL] Re: Is it possible to specify an ESCAPE for a wildcard search using LIKE ANY (...)

2017-01-17 Thread Thomas Kellerer
Tom Lane schrieb am 17.01.2017 um 13:41:
> Thomas Kellerer <spam_ea...@gmx.net> writes:
>> So my question is: Is there any way to specify an alternate wildcard escape 
>> when using LIKE ANY (..)? 
> 
> No, not with ESCAPE.  [ manfully resists temptation to run down SQL
> committee's ability to design composable syntax ... oops ]
> 
> You could do what PG does under the hood, which is to run the pattern
> through like_escape():
> 
>  select *
>from some_table
>where name like any (array[like_escape('foo_bar%', '/'),
>   like_escape('bar_foo%', '/')]);
> 
> If that seems too verbose, maybe build a function to apply like_escape
> to each member of an array.

OK, thanks. 

I was trying to avoid actually changing the input list, but apparently there 
is no other way. 








[GENERAL] Is it possible to specify an ESCAPE for a wildcard search using LIKE ANY (...)

2017-01-17 Thread Thomas Kellerer
I recently stumbled over the need to use a wildcard escape character for a 
condition that makes use of LIKE ANY, something like:

   select *
   from some_table
   where name like any (array['foo_bar%', 'bar_foo%']) escape '/';

so that the underscore wouldn't be treated as a wildcard (I can't really change 
the values _inside_ the array as they are user-provided).

The above throws a syntax error. 

So my question is: Is there any way to specify an alternate wildcard escape 
when using LIKE ANY (..)? 

Thomas







Re: [GENERAL] R: Postgres 9.6.1 big slowdown by upgrading 8.4.22

2017-01-08 Thread Thomas Kellerer

FWIW you still haven't explained how the upgrade was performed.
That might be a very important piece of information, because the
9.4 cluster might have hint bits set and/or the data may be
mostly frozen, but the 9.6 cluster may not have that yet,
resulting in higher CPU usage.


We upgraded the database with a pg_dumpall from 8.4.22 and then a psql < 
BACKUP.SAV command!


Not to miss the obvious: did you run ANALYZE after importing the backup?








Re: [GENERAL] What's the benefit (or usage scenario) of a "typed table"?

2016-12-31 Thread Thomas Kellerer

David G. Johnston schrieb am 31.12.2016 um 16:51:

 I wonder what the benefit of a typed table is and when this would be useful?


But I'd say if you want a table with said structure you should plan on
dropping the original type after you've altered all references to it to
point to the new implicit type created with the table.


I am not planning to use it.

I am just wondering _if_ there is an advantage to this setup

Apparently at some point someone thought it would be useful, otherwise
it wouldn't have been implemented I guess.

Thomas










[GENERAL] What's the benefit (or usage scenario) of a "typed table"?

2016-12-31 Thread Thomas Kellerer

I recently stumbled over "typed tables" in Postgres
(there were several questions containing this on stackoverflow recently)

create type some_type as (id integer, data text);
create table some_table of some_type;

I wonder what the benefit of a typed table is and when this would be useful?


Regards
Thomas
 






Re: [GENERAL] How to convert MS SQL functions to pgSQL functions

2016-12-31 Thread Thomas Kellerer

Yogi Yang 007 schrieb am 31.12.2016 um 11:06:

Hello,

I am stuck while trying to convert/port a MS SQL server database to pgSQL.

Here is the code of the function in MS SQL server:

CREATE FUNCTION [dbo].[AccountGroupHierarchy]
-- Description: 
(
@groupId numeric(18,0)
)

RETURNS @table_variable TABLE (accountGroupId NUMERIC(18,0))
AS
BEGIN
WITH GroupInMainGroupP AS (SELECT accountGroupId, 1 AS HierarchyLevel
FROM dbo.tbl_AccountGroup
WHERE (accountGroupId = @groupId)
UNION ALL
SELECT e.accountGroupId, G.HierarchyLevel + 1 AS HierarchyLevel
FROM dbo.tbl_AccountGroup AS e CROSS JOIN
GroupInMainGroupP AS G
WHERE (e.groupUnder = G.accountGroupId))


INSERT INTO @table_variable

(accountGroupId)
(
SELECT accountGroupId FROM GroupInMainGroupP)

Return
END

I need to convert this code to pgSQL.

Please help.

Thanks,

Yogi Yang



Something like this:

CREATE FUNCTION account_group_hierarchy(p_group_id numeric(18,0))
   RETURNS TABLE (account_group_id NUMERIC(18,0))
AS
$$
WITH recursive GroupInMainGroupP AS
(
  SELECT accountGroupId, 1 AS HierarchyLevel
  FROM dbo.tbl_AccountGroup
  WHERE (accountGroupId = p_group_id)
  UNION ALL
  SELECT e.accountGroupId, G.HierarchyLevel + 1 AS HierarchyLevel
  FROM dbo.tbl_AccountGroup AS e
JOIN GroupInMainGroupP AS G ON e.groupUnder = G.accountGroupId
)
select accountGroupId
from GroupInMainGroupP;
$$
language sql;

  
Note that Postgres folds all unquoted identifiers to lower case, so it's usually better to use snake_case instead of CamelCase.
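A call would then look like this (the group id 42 is made up):

```sql
select account_group_id
from account_group_hierarchy(42);
```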








Re: [GENERAL] Er Data Modeller for PostgreSQL

2016-12-23 Thread Thomas Kellerer

Stephen Davies schrieb am 23.12.2016 um 10:08:

I'm looking for an ER data modelling tool for PostgreSQL. I use Navicat Premium
for PostgreSQL and the tool has a modeller, but I would like to display a
database model containing only the tables of a schema under a database.

If I use Navicat's modeller, I have to drag and drop every table that I
want to add to the data model. So it's a pretty manual process.

Is there any way to display an ER model that shows only a chosen schema under
the database?

Any advice would be appreciated.


Try SchemaSpy


Scott Mead just blogged about using SchemaSpy with Postgres

http://www.openscg.com/2016/12/postgresql-schema-visualization/





Re: [GENERAL] pg_dump and quoted identifiers

2016-12-15 Thread Thomas Kellerer
Tom Lane wrote on 15.12.2016 at 16:20:
>> Still doesn't work:
>>   -bash-4.1$ pg_dump -d postgres -t "\"Statuses\""
>>   pg_dump: no matching tables were found
> 
> Hmm.  It might shed some light if you put "echo" in front of that
> to see what gets printed:
> 
> $ echo pg_dump -d postgres -t "\"Statuses\""
> pg_dump -d postgres -t "Statuses"

Same here:

  -bash-4.1$ echo pg_dump -d postgres -t "\"Statuses\""
  pg_dump -d postgres -t "Statuses"
  

> Also, when having strange issues, I've found the combo
> 
> echo  | od -tx1 -tc
> 
> very useful; this helps rule out the potential fancy quotes suspected previously


-bash-4.1$ echo pg_dump -d postgres -t "\"Statuses\"" | od -tx1 -tc
0000000  70  67  5f  64  75  6d  70  20  2d  64  20  70  6f  73  74  67
          p   g   _   d   u   m   p       -   d       p   o   s   t   g
0000020  72  65  73  20  2d  74  20  22  53  74  61  74  75  73  65  73
          r   e   s       -   t       "   S   t   a   t   u   s   e   s
0000040  22  0a
          "  \n
0000042







Re: [GENERAL] pg_dump and quoted identifiers

2016-12-14 Thread Thomas Kellerer
Adrian Klaver wrote on 14.12.2016 at 15:32:
>>> I'm suspicious that you're not actually typing plain-ASCII single and
>>> double quotes, but some fancy curly quote character.
>>
>> Definitely not. I typed this manually on the command line using Putty
> 
> So you are reaching the Bash shell via Putty on a Windows machine, correct?

Correct.
 
> So have you tried the answer from the SO question?:
> 
> "\"Statuses\""

Still doesn't work:

  -bash-4.1$ pg_dump -d postgres -t "\"Statuses\""
  pg_dump: no matching tables were found

Must apparently be something strange with the bash on that server.

It's not really important, as we never use quoted identifiers where I work 
anyway (and the above problem is precisely one of the reasons). 

I was just curious what the correct solution would be, but apparently there is 
no single answer and it depends on the environment.

Thomas





Re: [GENERAL] pg_upgrade 9.0 to 9.6

2016-12-14 Thread Thomas Kellerer
Mikhail wrote on 13.12.2016 at 10:57:
> Should i check all the production environments for the problems,
> mentioned in all interim versions release notes, is it enough only to
> check the last minor upgrade release note (9.6 --> 9.6.1) or there is
> another quick way to check if i should apply some additional actions
> to standard pg_upgrade+stats_collect?

There is a nice page which consolidates all release notes:

   https://bucardo.org/postgres_all_versions.html

The nice thing about it is that, for each change in one release, it also
mentions whether that change was made in other releases too, e.g.:

   https://bucardo.org/postgres_all_versions.html#version_9.6.1

Thomas








Re: [GENERAL] pg_dump and quoted identifiers

2016-12-13 Thread Thomas Kellerer
Tom Lane wrote on 13.12.2016 at 19:35:
>>> These cases work for me.  Maybe your shell is doing something weird
>>> with the quotes?
> 
>> Hmm, that's the default bash from CentOS 6 (don't know the exact version)
> 
> I'm using bash from current RHEL6, should be the same.
> 
> I'm suspicious that you're not actually typing plain-ASCII single and
> double quotes, but some fancy curly quote character.

Definitely not. I typed this manually on the command line using Putty

Bash version is "GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)"







Re: [GENERAL] pg_dump and quoted identifiers

2016-12-13 Thread Thomas Kellerer

Tom Lane wrote on 13.12.2016 at 18:02:

-bash-4.1$ pg_dump -d postgres -t '"Statuses"'
pg_dump: no matching tables were found



-bash-4.1$ pg_dump -d postgres -t 'public."Statuses"'
pg_dump: no matching tables were found


These cases work for me.  Maybe your shell is doing something weird
with the quotes?


Hmm, that's the default bash from CentOS 6 (don't know the exact version)

Thomas







Re: [GENERAL] pg_dump and quoted identifiers

2016-12-13 Thread Thomas Kellerer

David G. Johnston wrote on 13.12.2016 at 18:05:

On Tue, Dec 13, 2016 at 9:43 AM, Pavel Stehule wrote:


pg_dump -t '"Statuses"' postgres


To elaborate - your shell was removing the double-quotes. You need
make it so after shell processing the double-quotes remain. Wrapping
the double-quote string in single-quotes should do it.


That was one of the options I tried, but it worked neither on Linux (using
bash) nor on Windows.

Thomas
 






[GENERAL] pg_dump and quoted identifiers

2016-12-13 Thread Thomas Kellerer
Inspired by this question:

   http://dba.stackexchange.com/q/158044/1822

I tried that for myself, and it seems that pg_dump indeed can not parse quoted 
identifiers:

psql (9.6.1)
Type "help" for help.

postgres=# create table "Statuses" (id integer);
CREATE TABLE
postgres=# \q

-bash-4.1$ pg_dump -d postgres -t "Statuses"
pg_dump: no matching tables were found

-bash-4.1$ pg_dump -d postgres -t '"Statuses"'
pg_dump: no matching tables were found

-bash-4.1$ pg_dump -d postgres -t 'public."Statuses"'
pg_dump: no matching tables were found

Running 9.6.1 on CentOS 6 but under Windows this is the same.

Any ideas? 
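For what it's worth, the quote handling can be checked without pg_dump at all. Python's shlex module mimics POSIX shell word splitting, and it shows that wrapping the double quotes in single quotes should deliver them to pg_dump intact - so a shell that strips them here is behaving unusually:

```python
import shlex

# how a POSIX-compliant shell splits the two command lines into arguments
print(shlex.split("pg_dump -d postgres -t Statuses")[-1])        # → Statuses
print(shlex.split("pg_dump -d postgres -t '\"Statuses\"'")[-1])  # → "Statuses"
```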






Re: [GENERAL] About the MONEY type

2016-11-30 Thread Thomas Kellerer
Tobia Conforto wrote on 30.11.2016 at 12:15:
> I think MONEY is a great datatype, at least in theory.

I personally find it pretty useless to be honest - especially because the 
currency symbol depends on the client. 

So if I store a money value in the database, some clients see CHF, some see Kč, 
some see £ and others might see € - all see the same amount.
Which seems totally wrong, because 10€ is something completely different than
10Kč or 10£.

Plus: inside a programming language (e.g. Java/JDBC) it's hard to work with the
values, because the database sends them as strings (it has to, because of the
currency symbol) even though in reality they are numbers - and you can't just
convert the string back to a number because of the symbol.

So I always recommend not using it (in Postgres just as well as in other DBMS,
e.g. SQL Server)
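The "string that is really a number" problem can be illustrated with a small, purely hypothetical parser (no driver ships anything like this): stripping the symbol naively works for one locale and silently breaks for another.

```python
def parse_money(text):
    """Naive money parser: keep digits, sign and decimal point.
    Wrong as soon as the locale uses ',' as the decimal separator."""
    cleaned = "".join(ch for ch in text if ch.isdigit() or ch in "-.")
    return float(cleaned)

print(parse_money("$1,234.50"))  # → 1234.5
print(parse_money("10.50 Kč"))   # → 10.5
print(parse_money("10,50 €"))    # → 1050.0, not 10.5: silently wrong
```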








Re: [GENERAL] Backup "Best Practices"

2016-11-28 Thread Thomas Kellerer

Israel Brewster wrote on 28.11.2016 at 23:50:


pg_archivecleanup -n /mnt/server/archiverdir 
00010010.0020.backup


Ok, but where does that "00010010.0020.backup"
come from? I mean, I can tell it's a WAL segment file name (plus a
backup label), but I don't have anything like that in my WAL
archives, even though I've run pg_basebackup a couple of times. Do I
have to call something to create that file? Some flag to
pg_basebackup? At the moment I am running pg_basebackup such that it
generates gziped tar files, if that makes a difference.


The .backup file will be inside the tar file if I'm not mistaken






Re: [GENERAL] Strict min and max aggregate functions

2016-11-20 Thread Thomas Kellerer

Jeff Janes wrote on 19.11.2016 at 22:12:

I need "strict" MIN and MAX aggregate functions, meaning they return
NULL upon any NULL input, and behave like the built-in aggregates if
none of the input values are NULL.

This doesn't seem like an outlandish thing to want, and I'm surprised
I can't find other discussion of it. Perhaps because none of the
words here are very effective as search terms as they are so
individually common.

I've hit upon a solution that works, but it is both ugly and slow
(about 50 fold slower than the built-ins; for my current purpose this
is not a big problem but I would love it to be faster if that could
be done easily).


This is not really pretty either, but it might be faster:

select a,
   case when group_count = nn_count then min_b end as min_b
from (
  select a,
 min(b) as min_b,
 count(b) as nn_count,
 count(*) as group_count
  from x
  group by a
) t;

As the expensive part is the group by I wouldn't expect the additional 
aggregates to make a big difference.

Alternatively:

select a, case when no_nulls then min_b end as min_b
from (
  select a,
 min(b) as min_b,
 bool_and(b is not null) as no_nulls
  from x
  group by a
) t;
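Outside SQL, the requested semantics are easy to state; as a sketch, this is what the CASE/bool_and variant computes per group (Python only for illustration):

```python
def strict_min(values):
    """MIN that is strict in the SQL sense: any NULL (None) input
    makes the result NULL; otherwise behave like the built-in min."""
    values = list(values)
    if not values or any(v is None for v in values):
        return None
    return min(values)

print(strict_min([3, 1, 2]))     # → 1
print(strict_min([3, None, 2]))  # → None
```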





Re: [GENERAL] Master - slave replication?

2016-10-26 Thread Thomas Kellerer
Bjørn T Johansen wrote on 26.10.2016 at 13:17:
> I have a database that I would like to replicate in case of hardware failure 
> on this server. So I gather I just need a streaming replication (warm
> standby?) and I found a howto describing how to set it up, that looks like 
> this:
> 
> 1. Edit postgresql.conf on the master to turn on streaming replication. 
> Change these settings:
>  listen_addresses = '*'
>  wal_level = hot_standby
>  max_wal_senders = 3
> 2. Edit pg_hba.conf on the master in order to let the standby connect.
>  host  replication   all   192.168.0.2/32  trust
> 3. Edit postgresql.conf on the standby to set up hot standby. Change this 
> line:
>  hot_standby = on
> 4. Create or edit recovery.conf on the standby to set up replication and 
> standby mode. Save a file in the standby's data directory, called 
> recovery.conf, with the following lines:
>  standby_mode = 'on'
>  primary_conninfo = 'host=192.168.0.1'
> 5. Shut down both the master and standby, and copy the files. You want to 
> copy most but not all files between the two servers, excluding the 
> configuration files and the pg_xlog directory. An example rsync script would 
> be:
>  rsync -av --exclude pg_xlog --exclude postgresql.conf data/* 
> 192.168.0.2:/var/lib/postgresql/data/
> 6. Start the standby first, so that they can't get out of sync. (Messages 
> will be logged about not being able to connect to the primary server, that's 
> OK.)
> 7. Start the master.
> 
> Is this all that is needed to get a working master - slave replication? (It 
> just looks too easy... :) )

Yes, that's all. 

In fact, step 5 could be done without shutting down the master, using pg_basebackup

> Do I need to enable archive mode for this to work or?

No. 

"wal_level = hot_standby" includes that. 





Re: [GENERAL] Drop user cascade

2016-10-19 Thread Thomas Kellerer
Alex Ignatov (postgrespro) wrote on 19.10.2016 at 12:26:
> Hello!
> 
> Why we can DROP TABLE CASCADE, DROP VIEW CASCADE, DROP SEQUENCE CASCADE but 
> we can’t DROP USER/ROLE CASCADE?
> 
> Why do Postgres have no such functionality as DROP USER CASCADE? Is there any 
> reasons in that absence?

You can do 

  drop owned by user_name;
  drop user user_name;

Thomas





Re: [GENERAL] Getting the currently used sequence for a SERIAL column

2016-10-19 Thread Thomas Kellerer
Hanne Moa wrote on 19.10.2016 at 10:31:
>> You can use the following statement to find the sequences that a table uses:
>>
> Thanks. This assumes that there is only one nextval per table though.
> While this holds for the database we need this on right now, it's not a
> sufficiently generic solution. How do I, for the sake of paranoia,
> specify a column?

No, it will return a row for every column in that table that has a sequence
default.
If you add "col.attname" to the select list you can see each column.

create sequence s1;
create sequence s2;
create table t1 (id1 integer default nextval('s1'), id2 integer default 
nextval('s2'), id3 integer default nextval('s2'));

select sn.nspname as sequence_schema, s.relname as sequence_name, 
col.attname
from pg_class s
  join pg_namespace sn on sn.oid = s.relnamespace 
  join pg_depend d on d.refobjid = s.oid and 
d.refclassid='pg_class'::regclass 
  join pg_attrdef ad on ad.oid = d.objid and d.classid = 
'pg_attrdef'::regclass
  join pg_attribute col on col.attrelid = ad.adrelid and col.attnum = 
ad.adnum
  join pg_class tbl on tbl.oid = ad.adrelid 
  join pg_namespace n on n.oid = tbl.relnamespace 
where s.relkind = 'S' 
  and d.deptype in ('a', 'n')  
  and n.nspname = 'public'
  and tbl.relname = 't1'

Returns

 sequence_schema | sequence_name | attname
-----------------+---------------+---------
 public          | s1            | id1
 public          | s2            | id2
 public          | s2            | id3


Thomas





Re: [GENERAL] Getting the currently used sequence for a SERIAL column

2016-10-19 Thread Thomas Kellerer
Hanne Moa wrote on 19.10.2016 at 09:06:
>> regression=# create table t1 (f1 serial);
>> CREATE TABLE
>> regression=# select * from pg_depend where objid = 't1_f1_seq'::regclass or 
>> refobjid = 't1_f1_seq'::regclass;
>>  classid | objid | objsubid | refclassid | refobjid | refobjsubid | deptype 
>> ---------+-------+----------+------------+----------+-------------+---------
>> 1247 | 47198 |0 |   1259 |47197 |   0 | i
>> 1259 | 47197 |0 |   2615 | 2200 |   0 | n
>> 2604 | 47202 |0 |   1259 |47197 |   0 | n
>> 1259 | 47197 |0 |   1259 |47199 |   1 | a
>> (4 rows)
> 
> This seems to assume that I already know the name of the sequence?
> Looking at the sequence names that's already in use, I cannot safely
> assume anything about the format of their names. I start out knowing
> only the table and column, and I need a string with the sequence name to
> pass on to whatever wants it.

You can use the following statement to find the sequences that a table uses:

select sn.nspname as sequence_schema, s.relname as sequence_name
from pg_class s
  join pg_namespace sn on sn.oid = s.relnamespace 
  join pg_depend d on d.refobjid = s.oid and 
d.refclassid='pg_class'::regclass 
  join pg_attrdef ad on ad.oid = d.objid and d.classid = 
'pg_attrdef'::regclass
  join pg_attribute col on col.attrelid = ad.adrelid and col.attnum = 
ad.adnum
  join pg_class tbl on tbl.oid = ad.adrelid 
  join pg_namespace n on n.oid = tbl.relnamespace 
where s.relkind = 'S' 
  and d.deptype in ('a', 'n')  
  and n.nspname = 'public'
  and tbl.relname = 'foo'

Of course you can do that for multiple tables as well:

 and (n.nspname, tbl.relname) in ( ('public', 'foo'), ('public', 'bar') )

I am not entirely sure if that is the "shortest way" to do it, but it works for 
me.







Re: [GENERAL] Getting the currently used sequence for a SERIAL column

2016-10-18 Thread Thomas Kellerer

Tom Lane wrote on 18.10.2016 at 16:11:

I thought pg_depend only stores the dependency if the sequence was assigned
an owning column (through OWNED BY).


No, there will be regular expression dependencies as well.

That 'a' dependency is the one that pg_get_serial_sequence() looks for,
but the default-to-sequence dependency will be there in any case.

regression=# create table t2(f2 int default nextval('t1_f1_seq'));
CREATE TABLE
regression=# select pg_describe_object(classid,objid,objsubid) as
obj, pg_describe_object(refclassid,refobjid,refobjsubid) as ref,
deptype from pg_depend where objid = 't1_f1_seq'::regclass or
refobjid = 't1_f1_seq'::regclass;
              obj               |        ref         | deptype
--------------------------------+--------------------+---------
 type t1_f1_seq                 | sequence t1_f1_seq | i
 sequence t1_f1_seq             | schema public      | n
 default for table t1 column f1 | sequence t1_f1_seq | n
 sequence t1_f1_seq             | table t1 column f1 | a
 default for table t2 column f2 | sequence t1_f1_seq | n


Great, thanks. I meant to include that dependency in my SQL Workbench as well,
but could never find the correct way of joining the tables.









Re: [GENERAL] Getting the currently used sequence for a SERIAL column

2016-10-18 Thread Thomas Kellerer
Tom Lane wrote on 18.10.2016 at 15:20:
>> Furthermore, what's stored in the column seems to be a string of the
>> format "nextval('sequencename'::regclass)". Is there a function to
>> parse this, to return just the sequence name, or will the sequence
>> name always be without for instance a schema name so that a naive
>> parser of our own will do? Googling found no candidates.
> 
> Personally, I'd try looking in pg_depend to see if the column's default
> expression has a dependency on a relation of type sequence.  That avoids
> all the fun of parsing the expression and turns it into a simple SQL
> join problem.


I thought pg_depend only stores the dependency if the sequence was assigned
an owning column (through OWNED BY). 

I don't see any entries in pg_depend for a simple "default 
nextval('some_sequence')" expression 
but maybe I am just missing something. 

Thomas





Re: [GENERAL] Getting the currently used sequence for a SERIAL column

2016-10-18 Thread Thomas Kellerer
> Is this to be regarded as internal API or is it safe to use this to
> find the correct sequence?

I think it's safe to use. 


> Furthermore, what's stored in the column seems to be a string of the
> format "nextval('sequencename'::regclass)". Is there a function to
> parse this, to return just the sequence name, or will the sequence
> name always be without for instance a schema name so that a naive
> parser of our own will do? Googling found no candidates.

In my experience, this could also be in the form 
"nextval('schema.sequencename'::regclass)" 
if the sequence is not in the same schema as the table.
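If parsing the default expression really is unavoidable, a naive extraction could look like the sketch below (a hypothetical helper; it will mis-handle quoted sequence names that contain embedded quotes, which is another reason to prefer the pg_depend route):

```python
import re

def sequence_from_default(default_expr):
    """Pull the (possibly schema-qualified) sequence name out of a
    column default such as "nextval('schema.seq_name'::regclass)"."""
    m = re.match(r"nextval\('([^']+)'(?:::regclass)?\)", default_expr)
    return m.group(1) if m else None

print(sequence_from_default("nextval('t1_f1_seq'::regclass)"))         # → t1_f1_seq
print(sequence_from_default("nextval('public.t1_f1_seq'::regclass)"))  # → public.t1_f1_seq
```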

Thomas





Re: [GENERAL] import_bytea function

2016-10-08 Thread Thomas Kellerer

Stephen Davies wrote on 08.10.2016 at 02:57:

I will have to regenerate that code to get the exact error message text but it 
basically said that the parameter substitution was invalid.

A follow-up question.
Once the bytea column is populated, how best to display  the content in a web 
page?

I have :

byte [] imgB;
ResultSet rs = st1.executeQuery("select pic from part where pno='" + p + "'");
  if(rs.next()){
  imgB = rs.getBytes(1);
  if (imgB != null){
out.write("Content-type: image/jpeg");
out.write("Content-length: " + (int)imgB.length);
out.write(imgB.toString());
  }
  }

but this does not work.
The toString() looks wrong but removing it makes the write fail.


Assuming you are doing this in a Servlet, you should be writing the binary data 
to the HttpServletResponse

Something like:

ResultSet rs = st1.executeQuery("select pic from part where pno='" + p + 
"'");
if (rs.next())
{
   byte[] image = rs.getBytes(1);
   response.setContentType("image/jpeg");
   response.setContentLength(image.length);
   response.getOutputStream().write(image);
}

But this is getting quite off-topic now.






Re: [GENERAL] import_bytea function

2016-10-08 Thread Thomas Kellerer

Jan de Visser wrote on 08.10.2016 at 16:11:

You need to stream the data. Working from memory here, and it's been a long 
time, but it's something like

rs = conn.executeQuery("SELECT bytea_column FROM foo WHERE bar = ?");
Blob b = (Blob) rs.getObject(1);


No. getBytes() works fine with the JDBC driver.

The problem is calling toString() on it and sending that via print








Re: [GENERAL] import_bytea function

2016-10-08 Thread Thomas Kellerer

Stephen Davies wrote on 08.10.2016 at 02:57:

A follow-up question.
Once the bytea column is populated, how best to display  the content in a web 
page?

I have :

byte [] imgB;
ResultSet rs = st1.executeQuery("select pic from part where pno='" + p + "'");
  if(rs.next()){
  imgB = rs.getBytes(1);
  if (imgB != null){
out.write("Content-type: image/jpeg");
out.write("Content-length: " + (int)imgB.length);
out.write(imgB.toString());
  }
  }

but this does not work.
The toString() looks wrong but removing it makes the write fail.


What is "out"?

(And please use a PreparedStatement to run the SELECT for security reasons)






Re: [GENERAL] import_bytea function

2016-10-07 Thread Thomas Kellerer
Stephen Davies wrote on 07.10.2016 at 10:46:
>> You can store the contents of a file in a bytea using plain JDBC no 
>> lo_import() required
>>
>> String sql = "insert into images (id, image_data) values (?,?)";
>> Connection con = ;
>> File uploaded = new File("...");
>> InputStream in = new FileInputStream(uploaded);
>> PreparedStatement pstmt = con.prepareStatement(sql);
>> pstmt.setInt(1, 42);
>> pstmt.setBinaryStream(2, in, (int)uploaded.length());
>> pstmt.executeUpdate();
>>
>> This *only* works with bytea column, not with "large objects".
>>
>> In production code you obviously need to close all resources and handle 
>> errors.
>> I left that out for simplicity.

> That looks reasonable but I need to update rather than insert and my similar 
> code with sql="update part set pic=? where id=3" did not work.

That *will* work (using that myself for updates as well). 

What exactly is your problem? What was the error/exception?







Re: [GENERAL] import_bytea function

2016-10-07 Thread Thomas Kellerer
Stephen Davies wrote on 07.10.2016 at 09:12:
> I am trying to use the import_bytea function described in various list posts 
> (PG version 9.3.14) in a jsp.
> 
> I get an error saying that only the super user can use server-side 
> lo_import().
> 
> If I change the Java connection to use user postgres, the function works but 
> I would prefer not to do this.
> 
> Is there a better way to update a bytea column from an uploaded file (in this 
> case a small jpeg)?

Since you mention JSP, I assume you are using JDBC. 

You can store the contents of a file in a bytea using plain JDBC no lo_import() 
required

String sql = "insert into images (id, image_data) values (?,?)";
Connection con = ;
File uploaded = new File("...");
InputStream in = new FileInputStream(uploaded);
PreparedStatement pstmt = con.prepareStatement(sql);
pstmt.setInt(1, 42);
pstmt.setBinaryStream(2, in, (int)uploaded.length());
pstmt.executeUpdate();

This *only* works with bytea column, not with "large objects".

In production code you obviously need to close all resources and handle errors. 
I left that out for simplicity.

Thomas







Re: [GENERAL] postgres failed to start from services manager on windows 2008 r2

2016-10-01 Thread Thomas Kellerer

PHANIKUMAR G wrote on 01.10.2016 at 17:30:

We then tried to start the service with the following changes.


 a. we changed the "Log on as" type to "This account"
 b. changed the account name to ".\"
 c. And entered the password for this account and saved.

After that we are able to start the postgresql service successfully.

Found that the pg_log directory is created under data directory with 
postgres-Sep.log


Why is it failing with the local system account? The local system account
is part of the administrators group. If we specifically provide
credentials to the service as explained above, the service gets
started. Please help me understand what is causing this.



Where is the data directory? And how did you create the data directory?

Check the privileges on the data directory.
This sounds as if the local system account does not have the privilege to write 
to the data directory.

The local system account (or "Administrator") doesn't have the privilege to read
and write all files.
Those accounts only have the ability to give themselves these privileges.

Also: if I'm not mistaken, Postgres 9.3 creates the service with the "Local Network 
Service", not with "Local System Account"

So how did you create the initial service?








Re: [GENERAL] pg_upgrade from 9.5 to 9.6 fails with "invalid argument"

2016-09-30 Thread Thomas Kellerer
Tom Lane wrote on 29.09.2016 at 23:10:
> Thomas Kellerer <spam_ea...@gmx.net> writes:
>> for some reason pg_upgrade failed on Windows 10 for me, with an error 
>> message that one specifc _vm file couldn't be copied.
> 
> Hmm ... a _vm file would go through rewriteVisibilityMap(), which is new
> code for 9.6 and hasn't really gotten that much testing.  Its error
> reporting is shamefully bad --- you can't tell which step failed, and
> I wouldn't even put a lot of faith in the errno being meaningful,
> considering that it does close() calls before capturing the errno.
> 
> But what gets my attention in this connection is that it doesn't
> seem to be taking the trouble to open the files in binary mode.
> Could that lead to the reported failure?  Not sure, but it seems
> like at the least it could result in corrupted VM files.

I did this on two different computers, one with Windows 10 the other with 
Windows 7. 
(only test-databases, so no real issue anyway)

In both cases running a "vacuum full" for the table in question fixed the 
problem and pg_upgrade finished without problems.






Re: [GENERAL] pg_upgrade from 9.5 to 9.6 fails with "invalid argument"

2016-09-29 Thread Thomas Kellerer

Adrian Klaver wrote on 29.09.2016 at 22:55:

After running a "vacuum full" on the table in question the upgrade goes
through.


Assuming you did that on old cluster?


Yes, correct. I did that on the 9.5 cluster


Where both clusters installed the same way?


Yes.

I always download the ZIP Archive from 
http://www.enterprisedb.com/products-services-training/pgbindownload then run 
initdb manually.

Both were initialized using:

   initdb -D "..."  --lc-messages=English -U postgres --pwfile=pwfile.txt -E 
UTF8 -A md5


What was the complete command line invocation of pg_upgrade?


That was in a batch file:

set LC_MESSAGES=English

set oldbin=c:\Programme\PostgreSQL\9.5\bin
set newbin=c:\Programme\PostgreSQL\9.6\bin
"%newbin%\pg_upgrade" ^
  --username=postgres ^
  --old-bindir="%oldbin%" ^
  --new-bindir="%newbin%" ^
  --old-datadir=d:/Daten/db/pgdata95 ^
  --new-datadir=d:/Daten/db/pgdata96 ^
  --retain ^
  --verbose ^
  --old-port=5432 ^
  --new-port=5433

 







[GENERAL] pg_upgrade from 9.5 to 9.6 fails with "invalid argument"

2016-09-29 Thread Thomas Kellerer

Hello,

for some reason pg_upgrade failed on Windows 10 for me, with an error message 
that one specifc _vm file couldn't be copied.

When I try to copy that file manually everything works fine.

After running a "vacuum full" on the table in question the upgrade goes through.

One thing I noticed in the --verbose output of pg_upgrade is that the old cluster - 
despite being a 9.5 one - has the "pg_control version number 942"

Here is the part of the pg_upgrade output:

.

copying "d:/Daten/db/pgdata95/base/16410/85351" to 
"d:/Daten/db/pgdata96/base/16411/85351"
  d:/Daten/db/pgdata95/base/16410/85351_fsm
copying "d:/Daten/db/pgdata95/base/16410/85351_fsm" to 
"d:/Daten/db/pgdata96/base/16411/85351_fsm"
  d:/Daten/db/pgdata95/base/16410/85351_vm
copying "d:/Daten/db/pgdata95/base/16410/85351_vm" to 
"d:/Daten/db/pgdata96/base/16411/85351_vm"
  d:/Daten/db/pgdata95/base/16410/85358
copying "d:/Daten/db/pgdata95/base/16410/85358" to 
"d:/Daten/db/pgdata96/base/16411/85358"
  d:/Daten/db/pgdata95/base/16410/85358.1
copying "d:/Daten/db/pgdata95/base/16410/85358.1" to 
"d:/Daten/db/pgdata96/base/16411/85358.1"
  d:/Daten/db/pgdata95/base/16410/85358.2
copying "d:/Daten/db/pgdata95/base/16410/85358.2" to 
"d:/Daten/db/pgdata96/base/16411/85358.2"
  d:/Daten/db/pgdata95/base/16410/85358.3
copying "d:/Daten/db/pgdata95/base/16410/85358.3" to 
"d:/Daten/db/pgdata96/base/16411/85358.3"
  d:/Daten/db/pgdata95/base/16410/85358_fsm
copying "d:/Daten/db/pgdata95/base/16410/85358_fsm" to 
"d:/Daten/db/pgdata96/base/16411/85358_fsm"
  d:/Daten/db/pgdata95/base/16410/85358_vm
copying "d:/Daten/db/pgdata95/base/16410/85358_vm" to 
"d:/Daten/db/pgdata96/base/16411/85358_vm"

error while copying relation "public.wb_downloads" 
("d:/Daten/db/pgdata95/base/16410/85358_vm" to 
"d:/Daten/db/pgdata96/base/16411/85358_vm"): Invalid argument
Failure, exiting

The file in question is 65,536 bytes in size.

I saved all log files and the complete output from the failed run, so if you 
are interested I can supply them (I ran pg_upgrade with the --retain option).

Regards
Thomas
 






Re: [GENERAL] isnull() function in pgAdmin3

2016-09-28 Thread Thomas Kellerer
dudedoe01 wrote on 27.09.2016 at 19:04:
> I am trying to emulate the isnull() function from MySQL in PostgreSQL. I
> have tried different approaches, such as IS NULL, but they are not producing
> the desired results. I am doing a data migration from MySQL to PostgreSQL and
> need help with isnull() in pgAdmin3.
> 
> Any assistance provided would be greatly appreciated.
> 
> Thanks,
> 
> In MySQL:
> 
> (case
> when
> ((`s`.`Funding_Date` = '')
> and (isnull(`s`.`Actual_Close_Date`)
> or (`s`.`Actual_Close_Date` = '')))
> then
> 'RPG_INV'
> when
> ((isnull(`s`.`Funding_Date`)
> or (`s`.`Funding_Date` <> ''))
> and ((`s`.`Actual_Close_Date` = '')
> or isnull(`s`.`Actual_Close_Date`)))
> then
> 'Builder_Inventory'
> else 'Owner_Inventory'
> end) AS `Lot_Status`,
> 

Unrelated, but: why are you storing DATEs in VARCHAR columns? 

A migration might be a good moment to clean up the data model and store DATEs 
in a DATE (or TIMESTAMP) column. 
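That said, isnull() needs no emulation: standard SQL (and thus Postgres) spells 
it "expr IS NULL", and coalesce() is available when you want a substitute value 
instead of a test. A sketch of the quoted expression, keeping the column names 
from the MySQL schema (the "from sales s" part is made up here, substitute 
whatever table `s` refers to):

  select (case
            when s.Funding_Date = ''
                 and (s.Actual_Close_Date is null or s.Actual_Close_Date = '')
              then 'RPG_INV'
            when (s.Funding_Date is null or s.Funding_Date <> '')
                 and (s.Actual_Close_Date = '' or s.Actual_Close_Date is null)
              then 'Builder_Inventory'
            else 'Owner_Inventory'
          end) as Lot_Status
  from sales s;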







Re: [GENERAL] Chante domain type - Postgres 9.2

2016-09-26 Thread Thomas Kellerer
Rakesh Kumar schrieb am 26.09.2016 um 15:08:
>>You sound like you think that varchar(50) is somehow cheaper than text.
> 
> The biggest impediment to text cols in other RDBMS is that no index is
> allowed. If PG has an elegant solution to that, then yes I see the point
> made by the original poster.

Don't confuse Postgres' "text" data type with "text" in other DBMS. 

There is no difference whatsoever between varchar and text in Postgres. 
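
In particular, text columns can be indexed just like varchar ones. A quick
(made-up) example:

  create table notes (id serial primary key, body text);

  -- a plain btree index on a text column, exactly as with varchar
  create index notes_body_idx on notes (body);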









[GENERAL] Re: Clustered index to preserve data locality in a multitenant application?

2016-08-30 Thread Thomas Kellerer
Nicolas Grilly schrieb am 30.08.2016 um 13:12:
> We rely on clustered indexes to preserve data locality for each
> tenant. Primary keys start with the tenant ID. This way, rows
> belonging to the same tenant are stored next to each other. Because
> all requests hit only one tenant, this is a great performance
> improvement.
> 

What about partitioning by tenant? With a local index on each partition. 

Partitioning is currently a bit limited in Postgres (e.g. you can't have 
incoming foreign keys) but this would fit your requirements pretty much as I 
understand them. 
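
A minimal sketch of what that could look like with inheritance-based
partitioning (table and column names are made up):

  create table orders (
      tenant_id int    not null,
      order_id  bigint not null,
      payload   text
  );

  create table orders_tenant_1 (
      check (tenant_id = 1)
  ) inherits (orders);

  -- local index on each partition, leading with the tenant ID
  create index orders_tenant_1_idx on orders_tenant_1 (tenant_id, order_id);

With constraint_exclusion enabled, a query filtered on a literal tenant_id
only touches the matching child table.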






[GENERAL] Re: Best way to insert a row with two columns having the value based on the same argument generated by another postgres function.

2016-07-07 Thread Thomas Kellerer
Silk Parrot schrieb am 07.07.2016 um 08:56:
> Hi,
>  
>I am trying to build a user database. The steps for creating a new user 
> are:
> 
> 1. Use gen_salt to create a salt.
> 2. Compute the hash based on the salt and password and store both the hash 
> and the salt into a new row.
> 
> 
> The pl/pgsql would look like this:
> 
> CREATE OR REPLACE FUNCTION system.create_enduser(IN emailArg TEXT, IN 
> passwordArg TEXT, IN nicknameArg TEXT, IN roundsArg int) RETURNS TEXT AS
> $$
> #print_strict_params on
> DECLARE
> salt TEXT;
> inserted_uuid TEXT;
> BEGIN
> salt := public.gen_salt('bf', roundsArg);
> INSERT INTO system.enduser (email, password_hash, password_salt, 
> nickname, user_state) VALUES (emailArg, public.crypt(passwordArg, salt), 
> salt, nicknameArg, 'REGISTERED') RETURNING uuid into inserted_uuid;
> RETURN inserted_uuid;
> END
> $$
>   LANGUAGE 'plpgsql' VOLATILE
> ;
> 
> 
> Is there a way to do this in a single SQL statement without using a
> function? One way I can think of is using a trigger, but that still
> requires another setup.
> 

Something like this maybe?

  with salted (new_salt) as (
    select public.gen_salt('bf', roundsArg)
  )
  INSERT INTO system.enduser (email, password_hash, password_salt, nickname,
                              user_state)
  select emailArg, public.crypt(passwordArg, s.new_salt), s.new_salt,
         nicknameArg, 'REGISTERED'
  from salted s
  RETURNING uuid;








Re: [GENERAL] 9.6 beta2 win-x64 download links still point to beta1

2016-07-01 Thread Thomas Kellerer

Bruce Momjian schrieb am 28.06.2016 um 05:36:

On Fri, Jun 24, 2016 at 07:44:17AM +0200, Thomas Kellerer wrote:

the Beta2 downloads on

   http://www.enterprisedb.com/products-services-training/pgdownload
   http://www.enterprisedb.com/products-services-training/pgbindownload

still lead to Beta1 for the Windows 64bit builds.

All others properly link to beta2


This looks fixed now.



I still get the beta1 packages.

This

  
http://www.enterprisedb.com/postgresql-960-binaries-win64?ls=Crossover=Crossover

redirects to

  
http://get.enterprisedb.com/postgresql/postgresql-9.6.0-beta1-windows-x64-binaries.zip


and

  
http://www.enterprisedb.com/postgresql-960-beta-installers-win64?ls=Crossover=Crossover

redirects to

  http://get.enterprisedb.com/postgresql/postgresql-9.6.0-beta1-windows-x64.exe


Replacing beta1 with beta2 leads to a 404 (for both direct download links)

Thomas






[GENERAL] 9.6 beta2 win-x64 download links still point to beta1

2016-06-23 Thread Thomas Kellerer
Hello,

the Beta2 downloads on 

  http://www.enterprisedb.com/products-services-training/pgdownload
  http://www.enterprisedb.com/products-services-training/pgbindownload

still lead to Beta1 for the Windows 64bit builds. 

All others properly link to beta2

Thomas





[GENERAL] Re: Moving from PHP to Java: A result was returned when none was expected.

2016-06-15 Thread Thomas Kellerer
Alexander Farber schrieb am 15.06.2016 um 15:56:
> Good afternoon,
> 
> at PostgreSQL 9.5.3 I have a stored function (full source code below) 
> returning void, which I successfully call with PHP:
> 
> function skipGame($dbh, $uid, $gid) {
> $sth = $dbh->prepare('SELECT words_skip_game(?, ?)');
> $sth->execute(array($uid, $gid));
> }
> 
> Now I am trying to call the same function through JDBC driver 9.4.1208.jre7:
> 
> private static final String SQL_SKIP_GAME   =
> "SELECT words_skip_game(?, ?)";
> 
> try (PreparedStatement st = 
> mDatabase.prepareStatement(SQL_SKIP_GAME)) {
> st.setInt(1, mUid);
> st.setInt(2, gid);
> st.executeUpdate();
> }
> 
> and sadly get the SQLException "A result was returned when none was 
> expected.".
> 
> Shouldn't I call executeUpdate() method here - according to the doc
> https://www.postgresql.org/docs/7.4/static/jdbc-callproc.html  ?
> 

A SELECT statement needs to be run using `executeQuery()`. 

You can also use `execute()` if you want to get the update count via 
Statement.getUpdateCount(), but I am not sure if that is actually populated 
by the JDBC driver for SELECT statements calling functions.






[GENERAL] pgAdmin 4 beta not working on Windows 10

2016-06-11 Thread Thomas Kellerer

Dave Page schrieb am 10.06.2016 um 16:48:

I'm pleased to announce that the release of pgAdmin 4 v1.0 Beta 1 for
testing. You can find more details on the website:

Announcement: https://www.pgadmin.org/

Documentation: https://www.pgadmin.org/docs4/dev/index.html

Downloads: https://www.pgadmin.org/download/

Bug tracker: https://redmine.postgresql.org/projects/pgadmin4/issues
(requires a PostgreSQL community login)



I don't have a community login, so I'm posting that here:

On Windows 10 64bit it doesn't start.

When running pgadmin4.exe I get a dialog

   "Fatal Error"
   An Error occurred initialising the application server:
   Failed to launch the application server, server thread exiting.

Then it prompts me for a "Python Path" and an "Application Path".

I pointed that to

  c:\Program Files (x86)\pgAdmin 4\v1\runtime\python27.dll
  and
  c:\Program Files (x86)\pgAdmin 4\v1\runtime\pgAdmin4.exe

but after that, nothing happens.








Re: [GENERAL] How SQL SELECT * statement works in Postgres?

2016-06-05 Thread Thomas Kellerer

Sachin Srivastava schrieb am 05.06.2016 um 11:16:

Kindly let me know how PostgreSQL processes SQL statements internally.

How does a SQL "SELECT *" statement work in Postgres? 




Check out Bruce's presentations:

http://momjian.us/main/presentations/internals.html

Specifically: http://momjian.us/main/writings/pgsql/internalpics.pdf






