Re: [GENERAL] Question about SELECT statements with subselects

2017-09-22 Thread Albe Laurenz
Miloslav Semler wrote:
> I found strange behavior with subselects and I am not able to explain
> it. I have several tables in schema:
> 
> tramecky, mt_hodnoty, plata_kusy
> 
> in these tables, id is always primary key (serial), table_id is always
> foreign key to table. When I run this query:
> 
> select tramecky.id FROM a.tramecky WHERE
>  id NOT IN(SELECT tramecky_id FROM a.plata_kusy) AND
>  expedicni_plato IS NULL
> 
> I get 55 rows.
> 
> When I run this query:
> 
> select tramecky.id FROM a.tramecky WHERE
>  id NOT IN(SELECT tramecky_id FROM a.plata_kusy) AND
>  id NOT IN(SELECT tramecky_id FROM a.mt_hodnoty) AND
>  expedicni_plato IS NULL
> 
> I get no rows.. so I expect that rows with foreign keys tramecky_id of
> 55 rows are present in table mt_hodnoty. However result of query:
> 
> select mt_hodnoty.id FROM a.mt_hodnoty WHERE tramecky_id NOT IN(SELECT
> tramecky_id FROM a.plata_kusy)
> 
> is empty set. Can anybody explain such strange behavior?

There is probably at least one row in "mt_hodnoty" where "tramecky_id" is NULL.

Then the subselect
   SELECT tramecky_id FROM a.mt_hodnoty
contains a NULL value, so the whole NOT IN condition evaluates to NULL,
which is not TRUE, and the result set is empty.

The NULL values themselves do not show up in your third query, because
the condition NULL NOT IN (...) also always evaluates to NULL.
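
For illustration, here is the trap in isolation (psql displays NULL as an
empty string):

   test=> SELECT 42 NOT IN (VALUES (1), (2)) AS without_null,
                 42 NOT IN (VALUES (1), (NULL)) AS with_null;
    without_null | with_null
   --------------+-----------
    t            |
   (1 row)

A NULL-safe way to write your second query would be NOT EXISTS:

   SELECT t.id
   FROM a.tramecky t
   WHERE NOT EXISTS (SELECT 1 FROM a.plata_kusy p WHERE p.tramecky_id = t.id)
     AND NOT EXISTS (SELECT 1 FROM a.mt_hodnoty m WHERE m.tramecky_id = t.id)
     AND t.expedicni_plato IS NULL;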

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] VM-Ware Backup of VM safe?

2017-09-20 Thread Albe Laurenz
Thomas Güttler wrote:
> We run a PostgreSQL 9.6 server in a virtual machine.
> 
> The virtual machine is managed by the customer.
> 
> He does backup the VM.
> 
> Is this enough, is this safe?

I don't know about VMware, but the general rule is that
if the backup is truly atomic (it is guaranteed to capture
a consistent state of the file system), you can use it
to back up the database.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] org.postgresql.util.PSQLException: FATAL: terminating connection due to administrator command

2017-08-02 Thread Albe Laurenz
Emi wrote:
> Running psql table updates() by using
> org.springframework.scheduling.quartz.JobDetailFactoryBean cronjob from
> web application. Got the following exception:
> 
> org.postgresql.util.PSQLException: FATAL: terminating connection due to
> administrator command
> 
> 
> Re-run the same cronjob several times, and cannot re-generate the error.
> 
> May I know what might cause the above error message please? And which
> log file(online doc) could help provide more detail information about
> this please?

You could look into the PostgreSQL server log.

An administrator (or some software) terminated your session, typically with
pg_terminate_backend(), or the server was shut down.  A statement_timeout
would produce a different message ("canceling statement due to statement
timeout").
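
For example, this produces exactly that FATAL message in the terminated
sessions (the filter conditions are only illustrative):

   SELECT pg_terminate_backend(pid)
   FROM pg_stat_activity
   WHERE usename = 'webapp' AND state = 'idle';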

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Schemas and foreign keys

2017-07-21 Thread Albe Laurenz
marcelo wrote:
> Recently I asked regarding schemas, and received very useful answers. I
> conclude that I can put some tables in one schema and left others in the
> public one.
> If my app selects some schema, the backend will automatically find the
> absent tables in "public".
> So far, so good.
> But what about foreign keys? At least, I will have foreign keys from the
> tables in the specified schema to the tables in "public", because I'm
> thinking that the tables in "public" would be references, while the
> tables residing in the specified schema will be the transactional ones.

That shouldn't be a problem; schemas are just namespaces (with the option
to exclude users through permissions).

Try it and report back if you encounter problems.
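
For example, a sketch with invented names, showing a foreign key that
crosses schemas:

   CREATE SCHEMA app;
   CREATE TABLE public.currency (code text PRIMARY KEY);
   CREATE TABLE app.invoice (
      id serial PRIMARY KEY,
      currency_code text REFERENCES public.currency(code)
   );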

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Changing collate & ctype for an existing database

2017-07-14 Thread Albe Laurenz
rihad wrote:
> Btw, can we wrap the update of datcollate  and rebuilding of
> textual indices inside a transaction with effectively 0 downtime?

No.  Building indexes takes time and will lock the tables
until the transaction is done.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] DATA Integrity & Recovery

2017-07-12 Thread Albe Laurenz
chris faber wrote:
> Postgres 9.2
> 
> We have a POSTGRES database that we have been backing up via Incremental 
> backups.

You are talking of a physical base backup and WAL archives, right?

> We had an incident where we had to recover from backup. Our software vendor 
> has completed
> a restore and we have lost 10 days of data. There is no explanation as to the 
> reason we
> have sustained this loss.

Then press your software vendor for a reason.
They did the restore, so they should know.

> I would appreciate the communities help in the following:
> 
> 1. Determine if data from the incremental backups can be restored or 
> recovered.

If properly done, you can recover to any point in time after the backup with
a base backup and WAL archives.
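
For example, a 9.2-style point-in-time recovery is driven by a recovery.conf
along these lines (the archive path and target time are placeholders):

   restore_command = 'cp /mnt/wal_archive/%f %p'
   recovery_target_time = '2017-07-01 12:00:00'

With that file in the restored data directory, PostgreSQL replays the
archived WAL up to the given time on startup.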

> 2. Determine if data can be recovered from individual files backed up from 
> main Postgres
> data directory.

That is more tricky.  There is no straightforward way to extract such
information, particularly if the commit log is missing.
If that file is from a base backup, there is the additional difficulty
that the file could be in an inconsistent state.

You would have to hire a specialist for such work.


It sounds like you should consider letting somebody more reliable than
your software vendor manage your database backups.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Changing collate & ctype for an existing database

2017-07-12 Thread Albe Laurenz
rihad wrote:
> Hi there. We have a working database that was unfortunately created by
> initdb with default ("C") collation & ctype. All other locale specific
> settings have the value en_US.UTF-8 in postgresql.conf. The database
> itself is multilingual and all its data is stored in UTF-8. Sorting
> doesn't work correctly, though. To fix that, can I just do this:
> 
> 
> update pg_database set datcollate='en_US.UTF-8', datctype='en_US.UTF-8'
> where datname='mydb';
> 
> 
> This does seem to work on a testing copy of the database, i.e. select
> lower('БлаБлаБла') now works correctly when connected to that database.
> 
> 
> Is there still any chance for corrupting data by doing this, or indexes
> stopping working etc?
> 
> p.s.: postgres 9.6.3

As explained, yes.  Indexes on string columns will be corrupted.

See this example:

test=# CREATE DATABASE breakme LC_COLLATE 'C' LC_CTYPE 'C' TEMPLATE template0;
test=# \c breakme
breakme=# CREATE TABLE sort(id integer PRIMARY KEY, val text NOT NULL);
breakme=# INSERT INTO sort VALUES (1, 'LITTLE'), (2, 'big'), (3, 'b-less');
breakme=# CREATE INDEX ON sort(val);
breakme=# SET enable_seqscan=off;  -- force index use
breakme=# SELECT * FROM sort ORDER BY val;
┌────┬────────┐
│ id │  val   │
├────┼────────┤
│  1 │ LITTLE │
│  3 │ b-less │
│  2 │ big    │
└────┴────────┘
(3 rows)

breakme=# UPDATE pg_database SET datcollate='en_US.UTF-8', datctype='en_US.UTF-8' WHERE datname='breakme';
breakme=# \c breakme
breakme=# SET enable_seqscan=off;  -- force index use
breakme=# SELECT * FROM sort ORDER BY val;
┌────┬────────┐
│ id │  val   │
├────┼────────┤
│  1 │ LITTLE │
│  3 │ b-less │
│  2 │ big    │
└────┴────────┘
(3 rows)

breakme=# SET enable_seqscan=on;  -- this and the following force a sequential scan
breakme=# SET enable_bitmapscan=off;
breakme=# SET enable_indexscan=off;
breakme=# SET enable_indexonlyscan=off;
breakme=# SELECT * FROM sort ORDER BY val;  -- this returns the correct order
┌────┬────────┐
│ id │  val   │
├────┼────────┤
│  2 │ big    │
│  3 │ b-less │
│  1 │ LITTLE │
└────┴────────┘
(3 rows)

As you see, your index is still sorted according to the C collation
and scanning it returns wrong results.
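
If you do change the collation this way regardless, every index on a string
column has to be rebuilt before the database can be trusted again, e.g.:

   breakme=# REINDEX DATABASE breakme;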

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] loading file with en dash character into postgres 9.6.1 database

2017-07-12 Thread Albe Laurenz
Tom Lane wrote:
> "Hu, Patricia"  writes:
>> The server and client encoding are both set to UTF8, and according to this
>> http://www.fileformat.info/info/unicode/char/2013/index.htm en dash is a 
>> valid UTF8
>> character, but when running a script with insert statement with en dash 
>> character in it, I
>> got the error below.
>> psql:activity_type.lst:379: ERROR:  invalid byte sequence for encoding 
>> "UTF8": 0x96
> 
> Well, that certainly isn't valid UTF8, so your script file isn't in UTF8.
> 
>> If I set client_encoding to WIN1252, the same file will be run ok
>> but afterwards the en dash character showed up as "û", instead of the en 
>> dash character
>> "-"
> 
> This indicates that your terminal program does *not* think its encoding
> is WIN1252.  Having loaded that script file, you need to revert
> client_encoding to whatever your terminal program is using, or non-ASCII
> characters are going to be displayed wrong.
> 
> A bit of poking around suggests that your terminal may be operating
> with code page 437 or similar, as 0x96 is "û" in that encoding ---
> according to Wikipedia, at least:
> https://en.wikipedia.org/wiki/Code_page_437
> I don't think Postgres supports that as a client_encoding setting,
> so one way or another you're going to need to switch the terminal
> program's character set setting.

Running "chcp 1252" in your Windows console before starting psql
should do the trick.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tsquery error

2017-07-11 Thread Albe Laurenz
Stephen Davies wrote:
> The following query give the error:
> 
> select
> id,title,dtype,source,used_for,ts_headline('english',content,to_tsquery('english','
> ma waterflux'),'minWords = 99, maxWords = 999') from info where  clob @@
> to_tsquery('english',' ma waterflux') order by title,dtype,source,used_for;
> ERROR:  syntax error in tsquery: " ma waterflux"
> 
> Remove either the "ma" or the "waterflux" and the query works.
> 
> What is causing the error?
> 
> (MA Waterflux is a product name.)

It is the fact that the string contains two words.

You would have to place an operator between the words,
probably & in that case.
(https://www.postgresql.org/docs/current/static/datatype-textsearch.html#DATATYPE-TSQUERY)

Or you use "plainto_tsquery" instead of "to_tsquery".
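
For example, both of the following produce the tsquery 'ma' & 'waterflux':

   SELECT to_tsquery('english', 'ma & waterflux');
   SELECT plainto_tsquery('english', 'ma waterflux');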

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql doesn't pass on exported shell environment functions

2017-07-07 Thread Albe Laurenz
David G. Johnston wrote:
>> It works for me on Linux with 9.6.3 psql:
> 
> ​Except you haven't recreated the scenario I presented.​
> 
> ​You only are involving a single script and that script defines 
> "testfunction" itself
> (which makes exporting pointless).  In my example the script being executed 
> within the
> psql script does not define testfunction itself.
> 
> -> == execute in subshell
> 
> ​main-script (def func) -> psql -> psql-call-bash (invoke func)

I am confused; my shell script does *not* contain a function definition.

I copied and pasted a shell session:

First, show the script that contains a function invocation.
Then, define and export the function.
Then, call psql.
Then, escape to a subshell.
Then, call the script that successfully calls the function.

Am I missing something?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql doesn't pass on exported shell environment functions

2017-07-07 Thread Albe Laurenz
David G. Johnston wrote:
> In hindsight I'm not surprised but couldn't find a ready explanation on the 
> web and
> figured I'd inquire here.  In short: "export VAR" and "export -f 
> functionname" behave
> differently when psql is acting as a relay.

It works for me on Linux with 9.6.3 psql:

laurenz:~> cat psql-call-bash
#!/usr/bin/env bash

echo "Enter"
echo "EnvVar: $TEST_ENVVAR"
echo "Invoking Function..."

testfunction

exit
laurenz:~> chmod 0700 psql-call-bash
laurenz:~> function testfunction() { echo "Function Test"; }
laurenz:~> export -f testfunction
laurenz:~> export TEST_ENVVAR='Test'
laurenz:~> psql
Border style is 2.
Line style is unicode.
psql (9.6.3)
Type "help" for help.

test=> \!
laurenz:~> ./psql-call-bash
Enter
EnvVar: Test
Invoking Function...
Function Test

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Feature proposal, DBURL: psql pgsql://joe:p4zzw...@example.org:2345/dbname

2017-07-05 Thread Albe Laurenz
Hans Schou wrote:
> The dburl (or dburi) has become common to use by many systems connecting to a 
> database.
> The feature is that one can pass all parameters in a string, which has 
> similar pattern as
> http-URI do.
> 
> Especially when using psql in a script, having the credentials in one string 
> is
> convenient.
> 
> 
> The syntax could be:
> 
> [scheme[<+>dsn]]://[[username[:[password]]@][host][:port][/[dbname][/[[table[/[column[,column...]*]]]|sql]]]
> 
> 
> Example of usage:
>   psql pgsql://joe:p4zzw...@example.org:2345/dbname

[...]

> I have attached an example of how it could be implemented. It uses libpcre 
> RegEx to parse
> the dburl.

Unless I misunderstand, this has been in PostgreSQL since 9.2:

https://www.postgresql.org/docs/current/static/libpq-connect.html#AEN45571
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=b035cb9db7aa7c0f28581b23feb10d3c559701f6

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: Unable to understand index only scan as it is not happening for one table while it happens for other

2017-06-27 Thread Albe Laurenz
rajan wrote:
> why the index-only scan *works only* with an *order by*?
> localdb=# explain analyse verbose select uid from mm where uid>100 *order
> by* uid;
>                                                           QUERY PLAN
> -----------------------------------------------------------------------------------------------------------------------
>  Index Only Scan using mm_pkey on public.mm  (cost=0.27..22.47 rows=354 width=8) (actual time=0.023..0.079 rows=354 loops=1)
>Output: uid
>Index Cond: (mm.uid > 100)
>Heap Fetches: 0
>  Planning time: 0.096 ms
>  Execution time: 0.131 ms
> (6 rows)

I'd guess that the index-only scan would also work without the ORDER BY, but
PostgreSQL chooses a sequential scan instead, because too many rows meet the
condition "uid > 100".

If you add the ORDER BY, the plan with the sequential scan also has to
sort the data, which makes it much more expensive, while the index only scan
returns the data in sorted order anyway and does not have to sort,
which makes it cheaper.
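
You can test that guess by making the sequential scan unattractive and
comparing the plans:

   SET enable_seqscan = off;
   EXPLAIN ANALYZE SELECT uid FROM mm WHERE uid > 100;
   RESET enable_seqscan;

If the index-only scan appears now, the planner simply estimated it as more
expensive before.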

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Writing a C function to return the log file name

2017-06-06 Thread Albe Laurenz
Kouber Saparev wrote:
> I am trying to write a function in C to return the log file name by given 
> timestamp. I
> will use that later to make dynamic creation of a foreign table (file_fdw) to 
> read the csv
> logs themselves. The thing is I do now want to hardcode neither the format, 
> nor the
> directory in my extension.
> 
> I already looked into the adminpack extension, but the format is hardcoded to 
> the default
> one there, so it does not serve my needs.
> 
> Here is what I currently have:
> https://gist.github.com/kouber/89b6e5b647452a672a446b12413e20cf
> 
> 
> The thing is the function is returning random results, obtained by 
> pg_strftime().
> 
> kouber=# select now()::timestamp, sqlog.log_path(now()::timestamp);
> NOTICE:  Log directory = "pg_log"
> NOTICE:  Log filename = "postgresql-%F.log"
> NOTICE:  Length = "7"
> NOTICE:  Filename = "pg_log/postgresql-17422165-04-30.log"
>             now             |               log_path
> ----------------------------+---------------------------------------
>  2017-06-02 14:17:47.832446 | pg_log/postgresql-17422165-04-30.csv
> (1 row)
> 
> kouber=# select now()::timestamp, sqlog.log_path(now()::timestamp);
> NOTICE:  Log directory = "pg_log"
> NOTICE:  Log filename = "postgresql-%F.log"
> NOTICE:  Length = "7"
> NOTICE:  Filename = "pg_log/postgresql-17422166-02-08.log"
>             now             |               log_path
> ----------------------------+---------------------------------------
>  2017-06-02 14:18:12.390558 | pg_log/postgresql-17422166-02-08.csv
> (1 row)
> 
> Any idea what am I doing wrong?
> 
> 
> I copied logfile_getname() from syslogger.c, and simply added some debug 
> messages in
> there.

You are mixing up "Timestamp" and "pg_time_t".

Both are int64, but the former contains the number of microseconds since
2000-01-01 00:00:00, while the latter represents "the number of seconds
elapsed since 00:00:00 on January 1, 1970, Coordinated Universal Time (UTC)"
(quote from "man localtime").
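
The offset between the two epochs is 946684800 seconds, so the conversion
boils down to this (the microsecond value is an arbitrary example):

   SELECT 1496404667832446 / 1000000 + 946684800 AS unix_seconds;

i.e. truncate the microseconds to seconds and add the offset between
1970-01-01 and 2000-01-01.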

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] plpgsql function with offset - Postgres 9.1

2017-05-29 Thread Albe Laurenz
Patrick B wrote:
> I am running a background task on my DB, which will copy data from tableA to 
> tableB. For
> that, I'm writing a PL/PGSQL function which basically needs to do the 
> following:
> 
> 
> 1.Select the data from tableA
> 2.The limit will be put when calling the function
> 3.insert the selected data on Step 1 onto new table
> 
> Question:
> 
> * When I stop it and start it again, how can the query "know" that it has 
> already
> processed some rows so it won't do it twice on the same rows? If it stopped 
> on row number
> 100, I need it to continue on row number 101, for example.
> 
> * How can I ask the function to return the number of processed rows?
> 
> 
> I can add a column on TableB if needed, but not on tableA.
> 
> This is what I've done so far:

>   CREATE or REPLACE FUNCTION data_copy(rows integer)
>   RETURNS SETOF bigint AS $$

CREATE or REPLACE FUNCTION data_copy(p_limit integer, p_offset integer)
RETURNS integer;

>   declare
>   row record;
>   offset_num integer;

num_rows integer := 0;

>   BEGIN
>   FOR row IN EXECUTE '
>   SELECT
>   id,
>   path,
>   name,
>   name_last,
>   created_at
>   FROM
>   tablea
>   WHERE
>   ready = true
>   ORDER BY 1 LIMIT ' || rows || ' OFFSET ' || rows || ''

'... LIMIT ' || p_limit || ' OFFSET ' || p_offset

>   LOOP

num_rows := num_rows + 1;

>   INSERT INTO tableB (id,path,name,name_last,created_at)
>   VALUES (row.id,row.path,row.name,row.name_last,row.created_at);
> 
>   END LOOP;

RETURN num_rows;

>   END
>   $$ language 'plpgsql';

There are two problems with this approach:

1. It will do the wrong thing if rows are added or deleted in "tablea" while
   you process it.

2. Queries with high OFFSET values have bad performance.

The solution is to avoid OFFSET and to use "keyset pagination":
http://use-the-index-luke.com/no-offset
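
A sketch of the query inside the loop with keyset pagination instead; it
also answers the restart question, because the copy resumes after the
highest id that already made it into tableB:

   SELECT id, path, name, name_last, created_at
   FROM tablea
   WHERE ready
     AND id > (SELECT coalesce(max(id), 0) FROM tableb)
   ORDER BY id
   LIMIT p_limit;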

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] libpg sources

2017-05-23 Thread Albe Laurenz
Igor Korot wrote:

> Can I put libpg sources into my project? Or I will have to provide binaries?

You can do anything you want as long as you stick with the license:
https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob_plain;f=COPYRIGHT;hb=HEAD

All you have to do is include the following information:

Portions Copyright (c) 1996-2017, PostgreSQL Global Development Group

Portions Copyright (c) 1994, The Regents of the University of California

IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR
DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.

THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS FOR A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS
ON AN "AS IS" BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS TO
PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory consumption for Query

2017-04-24 Thread Albe Laurenz
dhaval jaiswal wrote:
> How to check how much memory query is consuming.
> 
> Is there tool can check of query consuming memory for the execution or output.
> 
> Let's say for following query how to calculate memory consumption.
> 
> select * from test where id=1;

That query will not consume memory worth mentioning unless
"test" is a non-trivial view.

You can run "EXPLAIN (ANALYZE) SELECT ..." to see how much memory is used
for memory intense operations like sort, hash or materialize.

Other operations don't really consume much memory.
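
For example (table and sort key are made up), run

   EXPLAIN (ANALYZE) SELECT * FROM test ORDER BY id;

and look for lines like "Sort Method: quicksort  Memory: 25kB" in the output.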

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] expensive function in select list vs limit clause

2017-04-05 Thread Albe Laurenz
Chris Mair wrote:
> I've found a (simple) situation where the planner does something I don't 
> understand.
> 
> Below is a complete test case followed by output.
> 
>  From the timings it appears that in the second explain analyze query a 
> function
> call in the select list (expensive()) is evaluated in the sequential scan node
> *for each* row in big, despite the use of limit.
> 
> I would have expected expensive() to be evaluated only for the ten rows
> in the result set. Hence the second explain analyze query shouldn't be more
> expensive than the first one.
> 
> My trust in Postgres' planner goes so far as I feel the planner is right and 
> there
> must be a reason for this :)
> 
> Could someone help me understand this behaviour?
[...]
> create function expensive() returns double precision as
> $$
>  begin
>  for i in 1 .. 15000 loop
>  end loop;
>  return random();
>  end;
> $$ language 'plpgsql';

This is unrelated, but you should set COST for an expensive function
to help the planner.
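
For example (the number is a rough estimate, in units of cpu_operator_cost):

   ALTER FUNCTION expensive() COST 100000;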

[...]
> -- now do the same, but add an expensive() column to the result:
> -- takes ~ 29s => WHY?
> 
> explain analyze select r, expensive() from big order by r offset 0 limit 10;
[...]
>                                                         QUERY PLAN
> ---------------------------------------------------------------------------------------------------------------------------
>  Limit  (cost=286034.64..286034.67 rows=10 width=8) (actual time=28932.311..28932.314 rows=10 loops=1)
>    ->  Sort  (cost=286034.64..288534.64 rows=1000000 width=8) (actual time=28932.309..28932.310 rows=10 loops=1)
>          Sort Key: r
>          Sort Method: top-N heapsort  Memory: 25kB
>          ->  Seq Scan on big  (cost=0.00..264425.00 rows=1000000 width=8) (actual time=0.062..28822.520 rows=1000000 loops=1)
>  Planning time: 0.038 ms
>  Execution time: 28932.339 ms
> (7 rows)

ORDER BY can only be processed after all rows have been fetched, this
includes the expensive result column.

You can easily avoid that by applying the LIMIT first:

  SELECT r, expensive()
  FROM (SELECT r
        FROM big
        ORDER BY r
        LIMIT 10) AS q;

I don't know how hard it would be to only fetch the necessary columns before
the ORDER BY and fetch the others after the LIMIT has been applied, but it
is probably nontrivial and would require processing time for *everybody*
who runs a query with ORDER BY to solve a rare problem that can easily be
worked around.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Request to confirm which command is use for exclusive operation

2017-03-08 Thread Albe Laurenz
Yogesh Sharma wrote:
> I observed there is some problem in REINDEX operation in older PostgreSQL 
> versions.
> That's why I want to add an explicit lock.

Which problem?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres, apps, special characters and UTF-8 encoding

2017-03-08 Thread Albe Laurenz
Ken Tanzer wrote:
> Hi.  I've got a recurring problem with character encoding for a 
> Postgres-based web PHP app, and am
> hoping someone can clue me in or at least point me in the right direction.  
> I'll confess upfront my
> understanding of encoding issues is extremely limited.  Here goes.
> 
> The app uses a Postgres database, UTF-8 encoded.  Through their browsers, 
> users can add and edit
> records often including text.  Most of the time this works fine.  Though 
> sometimes this will fail with
> Postgres complaining, for example, "Could query with ... , The error text 
> was: ERROR: invalid byte
> sequence for encoding "UTF8": 0xe9 0x20 0x67"
> 
> So this generally happens when people copy and paste things out of their word 
> documents and such.
> 
> As I understand it, those are likely encoded in something non-UTF-8, like 
> WIN-1251 or something.  And
> that one way or another, the encoding needs to be translated before it can be 
> placed into the
> database.  I'm not clear how this is supposed to happen though.  
> Automatically by the browser?  Done
> in the app?  Some other way?  And if in the app, how is one supposed to know 
> what the incoming
> encoding is?
> 
> Thanks in advance for any help or pointers.

The byte sequence 0xe9 0x20 0x67 means "é g" in ISO-8859-1 and WINDOWS-1252,
so I think that your setup is as follows:

- The PHP application gets data encoded in ISO-8859-1 or WINDOWS-1252
  and tries to store it in a database.
- The PHP application has a database connection with client_encoding
  set to UTF8.

Then the database thinks it gets UTF-8 and will choke if it gets something
different.

The solution:

- Make sure that your web application gets data in only one encoding.
- Set client_encoding to that encoding.
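
For example, if the application really sends WINDOWS-1252 data, run this
after connecting:

   SET client_encoding = 'WIN1252';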

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Request to confirm which command is use for exclusive operation

2017-03-08 Thread Albe Laurenz
Yogesh Sharma wrote:
>  I want to apply an explicit lock mechanism so that once an insert operation
> is in progress, REINDEX will wait.
> And vice versa.
> So, please let me know this type of handling is possible.

Maybe I misunderstand something, but you don't need to do that because it
happens automatically.

If you run REINDEX, it will take an ACCESS EXCLUSIVE lock on the index
before it starts its work.

That means that it has to wait until all earlier INSERTing transactions have
ended, and all INSERTs that come after the REINDEX will have to wait until
the REINDEX is done.

The database takes care that the data are consistent, so why would you
want to do that explicitly?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Question about TOAST table - PostgreSQL 9.2

2017-02-28 Thread Albe Laurenz
Patrick B wrote:
> I have a database which is 4TB big. We currently store binary data in a bytea 
> data type column
> (seg_data BYTEA). The column is behind binary_schema and the files types 
> stored are: pdf, jpg, png.

> Questions:
> 
> 1 - If I take out 500GB of bytea data ( by updating the column seg_data and 
> setting it to null ), will
> I get those 500GB of free disk space? or do I need to run vacuum full or 
> either pg_dump?

You'll need VACUUM (FULL) or dump/restore.

> 2 - If I choose going ahead with VACUUM FULL, I have 3 streaming replication 
> slaves, Will I need to
> run the vacuum full on them too?

No, and indeed you cannot.
The changes made by VACUUM on the primary will be replicated.

> 3 - [2] vacuum full needs some free disk space as same size as the target 
> table. It locks the table
> (cannot be used while running vacuum full) and a REINDEX might be needed 
> after. AM I right?

It locks the table against all concurrent access, but a REINDEX is not
necessary, as the indexes are rewritten as well.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Missing feature - how to differentiate insert/update in plpgsql function?

2017-02-15 Thread Albe Laurenz
hubert depesz lubaczewski wrote:
> I have a function, in PostgreSQL 9.6, which does:
> 
> INSERT INTO table () values (...)
> ON CONFLICT DO UPDATE ...;
> 
> The thing is that the function should return information whether the row
> was modified, or created - and currently it seems that this is not
> available. Or am I missing something?

Maybe the following answer can help:
http://stackoverflow.com/a/39204667/6464308

I don't really know how stable that (undocumented) behaviour will be, though.
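
In short, the trick from that answer (names are illustrative; it relies on
the undocumented detail that xmax is 0 for a freshly inserted row):

   INSERT INTO t (id, val) VALUES (1, 'x')
   ON CONFLICT (id) DO UPDATE SET val = EXCLUDED.val
   RETURNING (xmax = 0) AS was_inserted;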

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running out of memory the hard way ...

2017-02-08 Thread Albe Laurenz
Bill Moran wrote:
> If you run a transaction with lots of server side functions that use a
> lot of memory, this can trigger the OOM killer in Linux, causing the
> PosgreSQL backend to receive a SIGKILL and all the associated bad
> stuff.
> 
> Tuning the OOM killer is not sufficient. No setting I've found for the
> OOM killer will guarantee that it won't SIGKILL a process that's essentially
> untenable anyway (because it's going to use more memory than actually
> exists on the system at some point anyway). Additionally, "add more RAM"
> doesn't "solve" the problem, it only delays it until datasets
> scale up to even larger transactions that use even more memory.

I routinely set vm.overcommit_memory = 2 and configure vm.overcommit_ratio
so that the kernel does not try to commit more memory than there is in
the machine.

That should get rid of the problem, of course at the risk of leaving
some memory unused.
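
For example, in /etc/sysctl.conf (the ratio is only a sketch and depends on
the RAM and swap sizing of the machine):

   vm.overcommit_memory = 2
   vm.overcommit_ratio = 90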

> This is particularly prevalent with Postgis, because some Postgis functions
> are very memory intensive, but I'd be willing to bet real money that I could
> trigger it with just about any stored procedure that allocates memory in
> such as way that it doesn't get reclaimed until the transaction completes.
[...]
> 
> What I feel is the best way to mitigate the situation, is to have some
> setting that limits the maximum RAM any backend can consume. Attempting to
> exceed this limit would cause an error and rollback for that particular
> backend without affecting other backends.
[...]
>
> My first question: does this setting exist somewhere and I'm simply not
> finding it for some reason?
> 
> Assuming this doesn't exist (I haven't found it) my next question is
> whether there's a philosophical or technical reason that such a feature
> doesn't exist? Should I take this discussion to -hackers?

I don't think that there is such a setting.

work_mem sets a limit per operation, but that is a soft limit that
PostgreSQL server code can choose to ignore if it pleases.
Moreover, it does not limit the *total* memory a backend can use.

I'd delegate that problem to the operating system which, after all,
should know best of all how much memory a process uses.
And I don't see a big advantage in a PostgreSQL generated error message
over an "out of memory" error that is propagated from the operating system.

Of course, if there is no way to limit the amount of memory per process
(excluding shared memory!), you have a point.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why is materialized view creation a "security-restricted operation"?

2017-01-24 Thread Albe Laurenz
Joshua Chamberlain wrote:
> I see this has been discussed briefly before[1], but I'm still not clear on 
> what's happening and why.
> 
> I wrote a function that uses temporary tables in generating a result set. I 
> can use it when creating
> tables or views, e.g.,
> CREATE TABLE some_table AS SELECT * FROM my_func();
> CREATE VIEW some_view AS SELECT * FROM my_func();
> 
> But creating a materialized view fails:
> CREATE MATERIALIZED VIEW some_view AS SELECT * FROM my_func();
> 
> ERROR:  cannot create temporary table within security-restricted operation
> 
> 
> The docs explain that this is expected[2], but not why. On the contrary, this 
> is actually quite
> surprising to me, given that tables and views work just fine. What makes a 
> materialized view so
> different? Are there any plans to make this more consistent?

There is a comment in the source that explains it quite well:

/*
 * Security check: disallow creating temp tables from security-restricted
 * code.  This is needed because calling code might not expect untrusted
 * tables to appear in pg_temp at the front of its search path.
 */

"Security-restricted" is explained in this comment:

 * SECURITY_RESTRICTED_OPERATION indicates that we are inside an operation
 * that does not wish to trust called user-defined functions at all.  This
 * bit prevents not only SET ROLE, but various other changes of session state
 * that normally is unprotected but might possibly be used to subvert the
 * calling session later.  An example is replacing an existing prepared
 * statement with new code, which will then be executed with the outer
 * session's permissions when the prepared statement is next used.  Since
 * these restrictions are fairly draconian, we apply them only in contexts
 * where the called functions are really supposed to be side-effect-free
 * anyway, such as VACUUM/ANALYZE/REINDEX.


The idea here is that if you run REFRESH MATERIALIZED VIEW,
you don't want it to change the state of your session.
In this case, a new temporary table with the same name as a normal table
might suddenly get used by one of your queries.

I guess that the problem is more relevant here than in other places,
because REFRESH MATERIALIZED VIEW is likely to be called regularly in
sessions with high privileges.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] change type from NUMERIC(14,4) to NUMERIC(24,12)

2017-01-24 Thread Albe Laurenz
Torsten Förtsch wrote:
> we have a large table and want to change the type of one column from 
> NUMERIC(14,4) to NUMERIC(24,12).
> If the new type is just NUMERIC without any boundaries, the operation is 
> fast. If (24,12) is
> specified, it takes ages.
> 
> I think it takes so long because the database wants to check that all data in 
> the table is compatible
> with the new type. But the old type has stricter boundaries both before and 
> after the dot. So, it is
> compatible. It has to be.
> 
> Is there a way to change the type skipping the additional check?
> 
> This is 9.6.

If you don't mind doing something unsupported, you could just modify
the attribute metadata in the catalog:

test=# CREATE TABLE a(x numeric(14,4));
CREATE TABLE
test=# INSERT INTO a VALUES (1234567890.1234);
INSERT 0 1
test=# UPDATE pg_attribute
   SET atttypmod = atttypmod + (24 - 14) * 65536 + (12 - 4)
   WHERE attrelid = 'a'::regclass AND attname = 'x';
UPDATE 1
test=# \d a
  Table "public.a"
 Column |  Type  | Modifiers
++---
 x  | numeric(24,12) |

test=# SELECT * FROM a;
        x
-----------------
 1234567890.1234
(1 row)

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Querying dead rows

2017-01-09 Thread Albe Laurenz
Rakesh Kumar wrote:
> Is there a way to query dead rows (that is, rows which are dead and still not 
> cleaned up by Vacuum)
> using SQL.  I am asking this just as an academical question.

Sort of.

You can use heap_page_item_attrs() from the pageinspect contrib module to get 
at the
data, but you will only see the binary representation.

If your question is motivated by security concerns, you cannot do this unless 
you
are a superuser.
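
A sketch (as superuser; the table name is arbitrary):

   CREATE EXTENSION pageinspect;

   SELECT t_xmin, t_xmax, t_ctid, t_attrs
   FROM heap_page_item_attrs(get_raw_page('mytable', 0), 'mytable'::regclass);

Tuples whose t_xmax is set to a committed (non-locking) transaction are
dead row versions.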

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] When updating row that has TOAST column, is the TOAST column also reinserted ? Or just the oid to the value?

2016-12-13 Thread Albe Laurenz
Dorian Hoxha wrote:
> When updating row that has TOAST column, is the TOAST column also inserted ? 
> Or just the oid?
> 
> Say I have a 1MB value in the TOAST column, and I update the row by changing 
> another column, and since
> every update is an insert, will it also reinsert the toast-column ? The 
> column that I will update will
> have an index so I think hot-update won't work in this case ? The same 
> question also when full-page-
> writes is enabled ?
> 
> 
> Using 9.6.

The TOAST table will remain unchanged by the UPDATE; you can see that with the
"pageinspect" contrib module:

CREATE TABLE longtext (
   id integer primary key,
   val text NOT NULL,
   other integer NOT NULL
);

INSERT INTO longtext VALUES (
   4,
   (SELECT string_agg(chr((random()*25+65)::integer), '')
   FROM generate_series(1, 2000)),
   42
);

SELECT reltoastrelid, reltoastrelid::regclass FROM pg_class
   WHERE oid = 'longtext'::regclass;  

 reltoastrelid |      reltoastrelid
---------------+-------------------------
         25206 | pg_toast.pg_toast_25203
(1 row)

Use "pageinspect" to see the tuples in the table and the TOAST table:

SELECT t_xmin, t_xmax, t_ctid, t_attrs[1] AS id, t_attrs[2] AS val, t_attrs[3] AS other
   FROM heap_page_item_attrs(get_raw_page('longtext', 0), 25203);

 t_xmin | t_xmax | t_ctid |     id     |                  val                   |   other
--------+--------+--------+------------+----------------------------------------+------------
   2076 |      0 | (0,1)  | \x04000000 | \x0112d4070000d00700007b62000076620000 | \x2a000000
(1 row)

SELECT t_xmin, t_xmax, t_ctid, t_attrs[1] AS chunk_id, t_attrs[2] AS chunk_seq
   FROM heap_page_item_attrs(get_raw_page('pg_toast.pg_toast_25203', 0), 25206);

 t_xmin | t_xmax | t_ctid |  chunk_id  | chunk_seq
--------+--------+--------+------------+------------
   2076 |      0 | (0,1)  | \x7b620000 | \x00000000
   2076 |      0 | (0,2)  | \x7b620000 | \x01000000
(2 rows)

Now let's UPDATE:

UPDATE longtext SET other = -1 WHERE id = 4;

Let's look at the tuples again:

SELECT t_xmin, t_xmax, t_ctid, t_attrs[1] AS id, t_attrs[2] AS val, t_attrs[3] AS other
   FROM heap_page_item_attrs(get_raw_page('longtext', 0), 25203);

 t_xmin | t_xmax | t_ctid |     id     |                  val                   |   other
--------+--------+--------+------------+----------------------------------------+------------
   2076 |   2077 | (0,2)  | \x04000000 | \x0112d4070000d00700007b62000076620000 | \x2a000000
   2077 |      0 | (0,2)  | \x04000000 | \x0112d4070000d00700007b62000076620000 | \xffffffff
(2 rows)

A new tuple has been entered, but "val" still points to chunk ID 0x627b
(this is a little-endian machine).

SELECT t_xmin, t_xmax, t_ctid, t_attrs[1] AS chunk_id, t_attrs[2] AS chunk_seq
   FROM heap_page_item_attrs(get_raw_page('pg_toast.pg_toast_25203', 0), 25206);

 t_xmin | t_xmax | t_ctid |  chunk_id  | chunk_seq
--------+--------+--------+------------+------------
   2076 |      0 | (0,1)  | \x7b620000 | \x00000000
   2076 |      0 | (0,2)  | \x7b620000 | \x01000000
(2 rows)

The TOAST table is unchanged!

This was a HOT update, but it works the same for a non-HOT update:

UPDATE longtext SET id = 1 WHERE id = 4;

SELECT t_xmin, t_xmax, t_ctid, t_attrs[1] AS id, t_attrs[2] AS val, t_attrs[3] AS other
   FROM heap_page_item_attrs(get_raw_page('longtext', 0), 25203);

 t_xmin | t_xmax | t_ctid |     id     |                  val                   |   other
--------+--------+--------+------------+----------------------------------------+------------
   2076 |   2077 | (0,2)  | \x04000000 | \x0112d4070000d00700007b62000076620000 | \x2a000000
   2077 |   2078 | (0,3)  | \x04000000 | \x0112d4070000d00700007b62000076620000 | \xffffffff
   2078 |      0 | (0,3)  | \x01000000 | \x0112d4070000d00700007b62000076620000 | \xffffffff
(3 rows)

SELECT t_xmin, t_xmax, t_ctid, t_attrs[1] AS chunk_id, t_attrs[2] AS chunk_seq
   FROM heap_page_item_attrs(get_raw_page('pg_toast.pg_toast_25203', 0), 25206);

 t_xmin | t_xmax | t_ctid |  chunk_id  | chunk_seq
--------+--------+--------+------------+------------
   2076 |      0 | (0,1)  | \x7b620000 | \x00000000
   2076 |      0 | (0,2)  | \x7b620000 | \x01000000
(2 rows)

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] WAL

2016-12-12 Thread Albe Laurenz
Torsten Förtsch wrote:
> if I do something like this:
> 
> BEGIN;
> UPDATE tbl SET data='something' WHERE pkey='selector';
> UPDATE tbl SET data=NULL WHERE pkey='selector';
> COMMIT;
> 
> Given 'selector' actually exists, I get a separate WAL entry for each of the 
> updates. My question is,
> does the first update actually hit the data file?

It should, yes.

> If I am only interested in the first update hitting the WAL, does it make 
> sense to do something like
> the above in a transaction? Would that help to keep the table small in a high 
> concurrency situation?
> The table itself has a small fillfactor. So, in most cases there should be 
> enough space to do a HOT
> update. For that HOT update, is that second update setting data to NULL 
> beneficial or rather adverse?

How could the second update *not* be WAL logged?

Maybe you could explain what you are trying to achieve.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Rounding Problems?

2016-11-29 Thread Albe Laurenz
elbriga wrote:
> Thanks for the detailed answer!
> 
> Changing the function signature seems to have solved the problem:
> CREATE OR REPLACE FUNCTION ceilDecimal(num numeric) RETURNS float AS
> $BODY$
> BEGIN
>   RETURN CEIL(num * 100) / 100;
> END
> $BODY$
> LANGUAGE 'plpgsql';
> 
> 
> SELECT ceilDecimal(0.07);
>  ceildecimal
> -------------
>         0.07

Yes, because "numeric" stores the exact number, so it will be
exactly 0.07 after conversion.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Rounding Problems?

2016-11-29 Thread Albe Laurenz
elbriga wrote:
> Hi,
>   I have this pl function:
> CREATE OR REPLACE FUNCTION ceilDecimal(num float) RETURNS float AS
> $BODY$
> BEGIN
>   RETURN CEIL(num * 100) / 100;
> END
> $BODY$
> LANGUAGE 'plpgsql';
> 
>   It is supposed to do a "decimail ceil" for 2 decimal places.
>   But when I do "SELECT ceilDecimal(0.07)"
> It will return
>  ceildecimal
> -------------
>         0.08
> 
>   For other numbers the result is as expected:
> SELECT ceilDecimal(0.17);
>  ceildecimal
> -------------
>         0.17
> 
> WHY? rsrsr

Let's do your calculation step by step.

100 is (implicitly) an integer value.

When a "real" and an "integer" are multiplied, PostgreSQL casts them to
"double precision" before the operation. That would be necessary anyway,
because "ceil()" only operates on "double precision" (or "numeric").

test=> SELECT CAST (REAL '0.07' AS double precision);
       float8
--------------------
 0.0700000002980232
(1 row)

The weird digits appear because 0.07 can never be represented exactly
as a floating point number (with base 2).
They become visible because "double precision" has greater precision.

test=> SELECT REAL '0.07' * 100;
     ?column?
------------------
 7.00000002980232
(1 row)

test=> SELECT ceil(REAL '0.07' * 100);
 ceil
------
    8
(1 row)

The value is rounded up correctly, because it is greater than 7.


For some "real" values, the representation will be slightly less
then the correct value:

test=> SELECT CAST (REAL '0.47' AS double precision);
       float8
-------------------
 0.469999998807907
(1 row)

For such values, your function will work as you expect.

You could work around the problem by subtracting a small delta
from the value after converting it to "double precision".
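
A sketch of that workaround (the delta is ad hoc; it must be larger than
the representation error you expect, yet smaller than what your data can
legitimately contain):

   CREATE OR REPLACE FUNCTION ceilDecimal(num float) RETURNS float AS
   $BODY$
   BEGIN
      RETURN CEIL(num * 100 - 1e-6) / 100;
   END
   $BODY$
   LANGUAGE plpgsql;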

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] min/max_wal_size

2016-11-23 Thread Albe Laurenz
Torsten Förtsch wrote:
> Now, I have a database with very low write activity. Archive_command is 
> called about once per hour to
> archive one segment. When the database was moved to PG 9.5, it was initially 
> configured with insanely
> high settings for max_wal_size, min_wal_size and wal_keep_segments. I reset 
> min/max_wal_size to the
> default settings of 80MB and 1GB and reduced wal_keep_segments to 150.
> 
> Why does a database that generates a little more than 1 WAL file per hour and 
> has a checkpoint_timeout
> of 30 minutes with a completion_target=0.7 need so many of them? The default 
> value for min_wal_size is
> 80MB which amounts to 5 segments. That should be totally enough for this 
> database.
> 
> Is this because of the previously insanely high setting (min=1GB, max=9GB)? 
> Should I expect this value
> to drop in a week's time? Or is there anything that I am not aware of?

The size of pg_xlog will shrink slowly if there is little activity on the 
database.
Whenever a WAL segment has been archived, it will be deleted rather than 
recycled
as long as WAL size exceeds max_wal_size.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Converting a TimestampTz into a C# DateTime

2016-11-15 Thread Albe Laurenz
valeriof wrote:
> BTW, a comment says this about the floating point representation: "A
> deprecated compile-time option of PostgreSQL switches to a floating-point
> representation of some date/time
> fields. Npgsql (currently) does not support this mode." Is it safe to say
> that the floating point format is less in use compared to the long int? If
> Npgsql doesn't support it, any application that uses Npgsql will have this
> limitation anyway. Am I correct?

It looks that way.

64-bit integer representation for datetimes was introduced in 7.3 and
became the default in 8.4, over 7 years ago.
So I guess you won't encounter databases with floating point datetimes
very often these days.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How the Planner in PGStrom differs from PostgreSQL?

2016-11-15 Thread Albe Laurenz
Mark Anns wrote:
> How do the planning factors of PGStrom differ from the planning factors of
> PostgreSQL?

I don't know what exactly you mean by a "planning factor".

What PGStrom does is estimate the cost of the GPU operations
and attach these costs to a custom scan node which is part of
a query plan.

Many different plans are generated; some will involve PGStrom,
some will not.  The optimizer then chooses among these plans the
one with the lowest total cost.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Converting a TimestampTz into a C# DateTime

2016-11-14 Thread Albe Laurenz
Jerome Wagner wrote:
> seeing you answer I have a question for which I found no answer a few weeks 
> ago : is there a way to
> know at runtime which internal representation timestamps have ?
> I am trying to deal with the COPY binary protocol with only SQL access to the 
> remote server and would
> like to find a way to know the internal representation to read / write the 
> correct timestamps.

I guess you have to use the SPI interface to run the SQL statement

   SHOW integer_datetimes;

and check if the result is 'on'.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Converting a TimestampTz into a C# DateTime

2016-11-14 Thread Albe Laurenz
valeriof wrote:
> I'm handling a TimestampTz value inside a plugin to stream WAL changes to a
> .NET client application. What I'm trying to do is to return all possible
> column changes as binary (don't like to have Postgres handle the conversion
> to string as I may need to have access to the bytes at the client level). In
> case of a TimestampTz, is it possible to return the 8-bytes long integer and
> then from the C# application convert the value to Ticks?

Sure, if you know how it is stored internally.

One of your problems will be that the format depends on whether PostgreSQL
was configured with --disable-integer-datetimes or not.

With that switch, a timestamp is a double precision value, otherwise a
64-bit integer value. In the former case, it measures seconds after
midnight 2000-01-01, while in the latter case it measures microseconds
after that timestamp.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Surviving connections after internet problem

2016-11-08 Thread Albe Laurenz
Durumdara wrote:
>> You can use pg_terminate_backend to kill a database session.
>>
>> Setting the keepalive options in postgresql.conf can make PostgreSQL
>> discover dead connections more quickly.
> 
> The server is licenced, so we can't access the conf file now.
> We will report this to the provider.

You cannot use ALTER SYSTEM from SQL either?

> For that moment could we set these parameters from clients after the 
> connection established?
> 
> For example:
> 
> set param bla = nnn?

Yes, that should work, e.g.:

SET tcp_keepalives_idle = 600;

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Linux equivalent library for "postgres.lib" from Windows

2016-11-08 Thread Albe Laurenz
John R Pierce wrote:
>> I am new to the product and in windows “postgres.lib” provides certain 
>> functions which we are
>> using in windows for creating extensions.
>> 
>> Now I am porting the project to Linux and there no straight library with 
>> this name in Linux
>> binaries packages.
>> 
>> Can someone please advise the equivalent library for postgres.lib in Linux?
> 
> I am not sure what this postgres.lib is, what are the functions you're using ?

With MSVC, you have to link with the mylibrary.lib file if you want to use
functions from the shared library mylibrary.dll.
This is not necessary on Linux, where references to a shared library are 
resolved
at load time.

So the answer to the original question is that there is no replacement
for postgres.lib on Linux because you don't need it to link with PostgreSQL.
It is enough to #include the required PostgreSQL headers during compilation.

Some information on how to link on Linux can be found here:
https://www.postgresql.org/docs/current/static/xfunc-c.html#DFUNC
But as others have remarked, using PGXS is much better than doing
it by hand.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Surviving connections after internet problem

2016-11-07 Thread Albe Laurenz
Durumdara wrote:
> Linux server, 9.4 PG, Windows clients far-far away.
> 
> They called us that they had an "internet reset" at 13.00, but many client 
> locks are alive now
> (14:00).
> I checked server status, and and saw 16 connections.
> 
> In Windows PG server I read about keepalive parameters which are control and 
> redefine default TCP
> keepalive values.
> 
> As I read it could be two hours in Windows, and it is a system default, so we 
> can change for all
> applications.
> 
> I don't know what happens with Linux server and Windows clients.
> 
> May Linux version of PGSQL also uses 2 hour keepalive (default), or the it 
> caused by Windows clients,
> or an active device?
> 
> Or how could they survive this internet connection reset? :-o
> 
> May we must limit these parameters in clients after the starting of the 
> connection?

Don't bother about the clients, just see that the backends go away on the 
server.

You can use pg_terminate_backend to kill a database session.

Setting the keepalive options in postgresql.conf can make PostgreSQL
discover dead connections more quickly.
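
For example (the values are just a starting point):

   tcp_keepalives_idle = 60        # idle seconds before probing starts
   tcp_keepalives_interval = 10    # seconds between probes
   tcp_keepalives_count = 6        # lost probes until the connection is dropped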

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database Recovery from Corrupted Dump or Raw database table file.

2016-11-07 Thread Albe Laurenz
Howard News wrote:
> I have a raid catastrophe which has effectively blitzed a cluster data
> directory.  I have several pg_dump backups but these will not restore
> cleanly. I assume the disk has been failing for some time and the
> backups are of the corrupted database.
> 
> Using a selective pg_restore on the dumps, I have restored 2/3rds of the
> data but some tables I cannot recover directly, so I would like to see
> if it is possible to examine the dumps (they are in compressed format)
> to see if there are any rows which are recoverable. I do not know how or
> if it is even possible to pick out rows from an individual table, when
> in all likelyhood the file itself is corrupted.
> 
> I also have some parts of the data directory, so the tables may be
> accessible if I know the ID of the corrupt datatables. Is the ID listed
> in the pg_dump --list file?  And can the data be extracted from the raw
> data files without running a cluster. I am unsure if there is enough
> data to actually start a cluster.

A backup created with pg_dump consists of SQL statements to recreate the
objects.

You can extract the SQL statements as text with

  pg_restore -f sqlfile backupfilename

That should help you with restoring the data.

What exactly do you mean by "do not restore cleanly"?
Do you get error messages or is the content not ok?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: What is the best thing to do with PUBLIC schema in Postgresql database

2016-11-07 Thread Albe Laurenz
Patricia Hu wrote:
> Since it could potentially be a security loop hole. So far the action taken 
> to address it falls into
> these two categories:
> 
> 1. Drop the PUBLIC schema altogether. One of the concerns is that some of
>    the system objects that have been exposed through the PUBLIC schema
>    previously will now need other explicit grants to be accessible to
>    users, e.g. pg_stat_statements.
> 2. Keep the PUBLIC schema but revoke all privileges on it from the public
>    role, then grant as necessity comes up.
> 
> Any feedback and lessons from those who have implemented this?

I'd prefer the second approach as it is less invasive and prevents
undesirable objects in schema "public" just as well.

> Confidentiality Notice::  This email, including attachments, may include 
> non-public, proprietary,
> confidential or legally privileged information.  If you are not an intended 
> recipient or an authorized
> agent of an intended recipient, you are hereby notified that any 
> dissemination, distribution or
> copying of the information contained in or transmitted with this e-mail is 
> unauthorized and strictly
> prohibited.

You are hereby notified that any dissemination, distribution or copying of the 
information
contained in or transmitted with your e-mail is hunky-dory.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgres_fdw : disable extended queries

2016-10-24 Thread Albe Laurenz
Nicolas Paris wrote:
> I have a 9.6 pg instance, and I am trying to link a foreign postgresql 
> database that does not accept
> extended queries. (only simple queries 
> https://www.postgresql.org/docs/current/static/protocol.html )
> 
> When I run a query against the foreign pg instance thought postres_fdw, it 
> looks like it sends a
> transaction containing
> 
> DECLARE c1 CURSOR FOR
> SELECT customer_id FROM foodmart.customer
> 
> -> is there a way to run a simple query with postgres_fdw such:
> 
> SELECT customer_id FROM foodmart.customer

No, it is part of the design that cursors are used, so that rows can be
fetched one at a time and concurrent DML statements can be run.

You might consider using dblink.
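
dblink sends the query as a plain string (as far as I know it uses libpq's
PQexec, i.e. the simple query protocol), roughly like this (connection
string made up):

    CREATE EXTENSION dblink;

    SELECT customer_id
    FROM dblink('host=remotehost dbname=foodmart',
                'SELECT customer_id FROM foodmart.customer')
         AS t(customer_id integer);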

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sequences / Replication

2016-10-21 Thread Albe Laurenz
Jonathan Eastgate wrote:
> We're seeing some odd behaviour from a PostgreSQL group - one running as 
> primary and the other as a
> hot slave using streaming replication.
> 
> When a failover event occurs and we switch to the hot slave as primary 
> sequences in tables jump by 33
> - so where the last number allocated in the sequence was 100 prior to 
> failover once adding the next
> entry the sequence will produce the number 133.

That is working as expected.

When nextval() is called, a number of sequence numbers are reserved (by default 
one,
you can set this with the CACHE clause of CREATE SEQUENCE).  If the transaction 
is aborted,
these entries won't be used.

So if there were 30 inserting transactions that got aborted when your server
went down, that would explain the behaviour quite nicely.

There is a second effect: for crash safety, nextval() writes WAL only every
32 values (it logs 32 fetches ahead), so after a crash or failover the
sequence can jump ahead by up to 33 - which matches the number you observe.
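
You can see the first effect in isolation like this (sequence name made up):

    BEGIN;
    SELECT nextval('t_id_seq');   -- say this returns 101
    ROLLBACK;
    SELECT nextval('t_id_seq');   -- returns 102; 101 is never reused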

This should not be a problem.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] journaled FS and and WAL

2016-10-19 Thread Albe Laurenz
t.dalpo...@gmail.com wrote:
> I don't mind about performance but I absolutely mind about reliability,
> so I was thinking about the safest setting of linux FS and postgresql I
> can use.

Sure, use journaling then.
I do it all the time.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Collations and codepages

2016-10-18 Thread Albe Laurenz
Raimo Jormakka wrote:
> In Windows 7, and using PostgreSQL 9.4.5, the collation gets set to 
> "English_United States.1252" when
> I select the "English, United States" locale in the installer. In Linux, the 
> collation is set to
> "en_US.UTF-8". The encoding is set to UTF-8 in both instances.
>
> Will these two instances behave identically in terms of collation logic? And 
> if not, is there
> something I can do about it? In general, what's the impact of the codepage 
> part of a collation to
> begin with?

The two collations will probably not behave identically, since PostgreSQL uses 
the
operating system collations instead of having its own, and odds are that 
Microsoft's
collations and glibc's are slightly different.

I don't know if the impact will be large; maybe run a couple of tests to see if 
the
ordering is similar enough for your purposes.
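
A quick sanity check could look like this - run it on both systems and
compare the output (the word list is arbitrary):

    SELECT w
    FROM (VALUES ('ab'), ('a b'), ('Ab'), ('a-b')) AS t(w)
    ORDER BY w;

Whitespace and punctuation handling is where collation implementations tend
to differ most.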

I don't think that the actual encoding (UTF-8 or Windows-1252) has any impact on 
the ordering,
but I am not certain.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] journaled FS and and WAL

2016-10-14 Thread Albe Laurenz
t.dalpo...@gmail.com wrote:
>   two question related to the WAL.
> 
> 1) I read in the doc that journaled FS is not important as WAL is
> journaling itself.  But who garantees that the WAL is written correctly?
> I know that it's sequential and a partial update of WAL can be discarded
> after a restart. But am I sure that without a journaled FS, if there is
> a crash during the WAL update, nothing already updated in the WAL before
> my commit can get corrupted?

At commit time, the WAL is "synchronized": PostgreSQL instructs the operating
system to write the data to the physical medium (not just a memory cache)
and only return success if that write was successful.

After a successful commit, the WAL file and its metadata are on disk.
Moreover, the file metadata won't change (except for the write and access
timestamps) because WAL files are created with their full size and never
extended, so no WAL file should ever get "lost" because of partial metadata
writes.

> 2) Let's suppose that I have one database, one table of 10 rows,
> each 256 bytes. Now, in a single SQL commit, I update row 10, row 3
> and row 8. How much should I expect the WAL increase by? (supposing
> no WAL segments will be deleted). I could guess 8192x3 but I'm not sure

It will be that much immediately after a checkpoint, but for subsequent writes
to the same disk block only the actually changed parts of the data block will
be written to WAL.
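
If you want to check this, you can measure the WAL volume of a statement on
an otherwise idle system (pre-10 function names; table and column names are
assumptions):

    SELECT pg_current_xlog_location() AS before \gset
    UPDATE t SET payload = payload WHERE id IN (3, 8, 10);
    SELECT pg_xlog_location_diff(pg_current_xlog_location(), :'before');

The result is the number of WAL bytes written in between (\gset requires
psql 9.3 or later).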

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SERIALIZABLE and INSERTs with multiple VALUES

2016-10-12 Thread Albe Laurenz
Kevin Grittner wrote:
> Sent: Tuesday, October 11, 2016 10:00 PM
> To: Jason Dusek
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] SERIALIZABLE and INSERTs with multiple VALUES
> 
> On Tue, Oct 11, 2016 at 2:29 PM, Jason Dusek  wrote:
> 
>> I notice the following oddity:
> 
>>  =# CREATE TABLE with_pk (i integer PRIMARY KEY);
>> CREATE TABLE
> 
>>  =# BEGIN;
>> BEGIN
>>  =# INSERT INTO with_pk VALUES (2), (2) ON CONFLICT DO NOTHING;
>> ERROR:  could not serialize access due to concurrent update
>>  =# END;
>> ROLLBACK
> 
> I don't see that on development HEAD.  What version are you
> running?  What is your setting for default_transaction_isolation?

The subject says SERIALIZABLE, and I can see it on my 9.5.4 database:

test=> CREATE TABLE with_pk (i integer PRIMARY KEY);
CREATE TABLE
test=> START TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION
test=> INSERT INTO with_pk VALUES (2), (2) ON CONFLICT DO NOTHING;
ERROR:  could not serialize access due to concurrent update

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Restricted access on DataBases

2016-10-04 Thread Albe Laurenz
Durumdara wrote:
[...]
> --- login with postgres:
[...]
>   ALTER DEFAULT PRIVILEGES
>   GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER 
> ON TABLES
>   TO u_tr_db;
>
>  login with u_tr_main:
>
>   create table t_canyouseeme_1 (k int);
>
>  login with u_tr_db:
> 
>   select * from t_canyouseeme_1;
> 
>   ERROR: permission denied for relation t_canyouseeme_1
>   SQL state: 42501
> 
>  As you see before, u_tr_db got all default privileges on future tables, so I 
> don't understand why he
> don't get to "t_canyouseeme_1".

You should have written

   ALTER DEFAULT PRIVILEGES FOR ROLE u_tr_main ...

The way you did it, you effectively wrote "FOR ROLE postgres" because
you were connected as that user.

That means that all future tables created *by postgres* will have
privileges for user "u_tr_db" added.  But you want tables created
*by u_tr_main* to get the privileges.
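
Spelled out with the privileges from your script, that would be (run as
postgres, a sketch):

    ALTER DEFAULT PRIVILEGES FOR ROLE u_tr_main
       GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER
       ON TABLES TO u_tr_db;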

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database fixed size

2016-09-28 Thread Albe Laurenz
Adir Shaban wrote:
> Is there anyway to limit a database size?
> For example, I need to create a database for user X and I don't want it to 
> use more than 5 GB.

You can create a tablespace on a device with limited size.
Then you can create the database on that tablespace.
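
Roughly like this (path and names invented):

    CREATE TABLESPACE user_x_space LOCATION '/mnt/small_volume/pgdata';
    CREATE DATABASE user_x_db TABLESPACE user_x_space;

Note that this is a hard limit: once the device fills up, writes in that
database will start failing with errors.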

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Unable to create oracle_fdw (foreign data wrapper) extension

2016-09-16 Thread Albe Laurenz
Arun Rangarajan wrote:
> But when I try to create the extension, I get the following error:
>
> postgres=# create extension oracle_fdw;
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> The connection to the server was lost. Attempting reset: Failed.

> t=2016-09-15 16:16:08 PDT d= h= p=25327 a=LOG:  0: server process (PID 
> 20397) was terminated by signal 11: Segmentation fault

Well, as I told you, get a stack trace with debugging symbols.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Duplicate data despite unique constraint

2016-09-02 Thread Albe Laurenz
Jonas Tehler wrote:
> We’re running Postgresql 9.4.5 on Amazon RDS. One of our tables looks 
> something like this:
> 
> 
> CREATE TABLE users
> (
>   ...
>   email character varying(128) NOT NULL,
>   ...
>   CONSTRAINT users_email_key UNIQUE (email)
> )
> 
> Despite this we have rows with very similar email values. I discovered the 
> problem when I tried to add
> a column and got the following error:
> 
> ActiveRecord::RecordNotUnique: PG::UniqueViolation: ERROR:  could not create 
> unique index
> "users_email_key"
> DETAIL:  Key (email)=(x...@yyy.com) is duplicated.
> : ALTER TABLE "users" ADD COLUMN "client_branch" character varying(255) 
> DEFAULT ‘beta'
> 
> 
> Now look at this:
> 
> => select email from users where email = 'x...@yyy.com';
>email
> ---
>  x...@yyy.com
> (1 row)
> 
> 
> => select email from users where email LIKE 'x...@yyy.com';
>email
> ---
>  x...@yyy.com
>  x...@yyy.com
> (2 rows)
> 
> 
> I have tried to compare the binary data in various ways, email::bytes, 
> md5(email),
> encode(email::bytea, 'hex’), char_length(email) and it all looks the same for 
> both rows.
> 
> Any suggestions how I can discover the difference between the values and how 
> they could have been
> added without triggering the constraint? I know that the values were added 
> after the constraint was
> added.
> 
> The data was added from a Ruby on Rails app that also has unique constraints 
> on the email field and
> validation on the email format.

That looks very much like data corruption.

I guess there is an index on "users" that is used for one query but not the 
other.
Can you verify with EXPLAIN?

Assuming that it is a 'text_ops' or 'varchar_ops' index, I'd say it gets used 
for the first
query, but not for the second.  That would mean that there is an extra entry in 
the table that
is not in the index.

Did you have any crashes, standby promotion, restore with PITR or other unusual 
occurrences recently?

Make sure you have a physical backup; there may be other things corrupted.

This is a possible path to proceed:

Once you have made sure that you have a physical backup, try to add the "ctid" 
column to both queries.

Then delete the extra row from the second query with "DELETE FROM users WHERE
ctid = ...".
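
Along these lines (the ctid value is of course made up):

    SELECT ctid, email FROM users WHERE email LIKE 'x...@yyy.com';
    -- take the ctid of the row that is missing from the first query
    DELETE FROM users WHERE ctid = '(1234,5)';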

Then, to make sure there is no other corruption lurking, make a logical backup
with pg_dumpall, create a new cluster with "initdb" and restore the data.
restore the data.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] jdbc 9.4-1208 driver for PostgreSQL 9.5?

2016-07-13 Thread Albe Laurenz
Joek Hondius wrote:
> (I hope i am on the right list)

pgsql-jdbc would have been the perfect list.

> jdbc.postgresql.org lists version 9.4 build 1208 as the lastest.
> Is this the correct version to use with PostgreSQL 9.5 (or even 9.6-beta)?
> I cannot find info on this elsewhere.

Yes, you should just use the latest driver.
Don't be worried if they don't have the same version number.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Slow SQL?

2016-07-12 Thread Albe Laurenz
Bjørn T Johansen wrote:
> Thx for your suggestions. Tried to use NOT EXISTS and the query was about 
> half a second quicker so not
> much difference...
> But when I try to run the 3 queries separately, then they are very quick, 2 
> barely measurable and the
> third takes about 1,5 seconds. The union query
> takes a little over 9 seconds, so I guess the union part is the bottleneck?

Looking at EXPLAIN (ANALYZE) output should tell you.

If yes, is UNION ALL an option for you?
That should be cheaper.
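
Comparing the plans should show it (table and column names invented):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT id FROM t1 WHERE flag
    UNION ALL
    SELECT id FROM t2 WHERE flag;

UNION has to remove duplicates, which typically means sorting or hashing
the whole result; UNION ALL simply appends the results.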

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Slow SQL?

2016-07-12 Thread Albe Laurenz
haman...@t-online.de wrote:
Bjørn T Johansen wrote:
>> I am trying to move a small system from Oracle to PostgreSQL and I have come 
>> upon a sql that runs
>> really slow compared to on the Oracle database and I am not able to 
>> interpret why this is slow.

> I have experienced that some subqueries can be quite slow, and would suspect 
> the NOT IN
> clause. I occasionally rewrite
> NOT IN (select key from unwanted_candidates)
> as
> IN (select key from possible_candidates except select key from 
> unwanted_candidates)

I would try to rewrite these clauses to NOT EXISTS, for example

   a.avlsnr Not In (Select avlsnr From dyr_pause_mot)

could be

   NOT EXISTS (SELECT NULL FROM dyr_pause_mot WHERE avlsnr = a.avlsnr)

This can be executed as anti-join and is often more efficient.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Protect a table against concurrent data changes while allowing to vacuum it

2016-06-22 Thread Albe Laurenz
Sameer Kumar wrote:
> On Wed, Jun 22, 2016 at 6:08 PM Vlad Arkhipov  wrote:
>> I am running PostgreSQL 9.5.
>> 
>> CREATE TABLE t (id BIGINT NOT NULL PRIMARY KEY, name TEXT);
>> 
>> The constraint that the data must satisfy is `there is no more than 3 
>> records with the same name`.
>> 
>> I am not in control of queries that modify the table, so advisory locks can 
>> hardly be of help to me.
> 
> 
> Define a function which counts the rows; if the count is 3 it returns false,
> and if the count is less it returns true.
> 
> Use check constraint with this function. I have not tried this so not sure if 
> you can use function
> with SELECT on same table in CHECK constraint. So test it out first.
> 
> If this works, any insert trying to get the 4th record in table would fail.

You cannot use subqueries in a check constraint:

ALTER TABLE t
   ADD CONSTRAINT name_count
  CHECK ((SELECT count(*) FROM t t1 WHERE t1.name = name) <= 3);
ERROR:  cannot use subquery in check constraint

> A last resort could be using triggers. But either of these approaches will 
> cause issues if you have
> high concurrency.

Yes, triggers is the way to go:

CREATE OR REPLACE FUNCTION check_t() RETURNS trigger LANGUAGE plpgsql AS
$$BEGIN
   IF (SELECT count(*) FROM t WHERE name = NEW.name) > 3 THEN
  RAISE EXCEPTION 'More than three values!';
   END IF;
   RETURN NEW;
END;$$;

CREATE TRIGGER check_t AFTER INSERT OR UPDATE ON t FOR EACH ROW
   EXECUTE PROCEDURE check_t();

But be warned that this will only work if all transactions involved use
the isolation level SERIALIZABLE.

Otherwise two concurrent INSERTs would not see each other's entry, and the
triggers would not raise an error even if there are more than three entries
after COMMIT.
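
If you control the database but not the applications, you could make that
the default (database name assumed):

    ALTER DATABASE mydb SET default_transaction_isolation = 'serializable';

Individual sessions can still override this setting, though.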

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Protect a table against concurrent data changes while allowing to vacuum it

2016-06-22 Thread Albe Laurenz
Vlad Arkhipov wrote:
> I have a constraint that requires a table to be locked before checking
> it (i.e. no more than 2 records with the same value in the same column).
> If I lock the table in the SHARE ROW EXCLUSIVE mode, any vacuuming (or
> autovacuuming) process prevents me from checking the constraint. What
> are possible solutions?

Can you describe your check in more detail?
Why don't you use simple unique constraints?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Hot disable WAL archiving

2016-06-17 Thread Albe Laurenz
Job wrote:
> is there a way in Postgresql-9.5 to disable temporarily WAL archiving to 
> speed up pg_bulkload with
> restarting database engine?

You can set 'archive_command=/bin/true' and reload, then no WAL
archives will be written.

Make sure to perform a base backup as soon as your bulk load
is finished.
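
That is, roughly:

    # in postgresql.conf
    archive_command = '/bin/true'

then reload, e.g. with

    SELECT pg_reload_conf();

and afterwards restore the original archive_command, reload again and take
the base backup.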

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Changelog version from 8.1.2 to 9.3.6

2016-06-15 Thread Albe Laurenz
Yogesh Sharma wrote:
> I have doubt regarding release notes of all versions.
> As per release notes, below change logs are mentioned in all versions.
> 
> "(8.3.8,8.4.1,8.2.14) Make LOAD of an already-loaded loadable module into a 
> no-op (Tom Lane)"
> 1. What is meaning of above lines?
> 2. This changes are implemented only in above versions or is it propagated on 
> every above newer
> versions after that?
> 3. Why same line is mentioned in the release notes of above 
> 8.3.8,8.4.1,8.2.14 versions?

Ad 1:
As of the cited versions, nothing will be done if you use LOAD to load a module
that is already loaded.

Ad 2:
The change was a bugfix that was applied to the development version (so it is 
included
in 9.0 and later) and was backpatched to 8.2, 8.3 and 8.4.

If you want to know details about a change like this, you'll have to dig into 
the
git logs and possibly the code.
The commit was 602a9ef5a7c60151e10293ae3c4bb3fbb0132d03 in master,
57710f39cc55cba1e98c718300a811aadacec7c5 in 8.4,
5927d9f642c4cf6233e5fedd3468087995c00523 in 8.3 and
22f77b0f9db1220789b262cda6dccad49d031643 in 8.2.

Ad 3:
Because the same fix was applied to these three versions.

It is safe to assume that a problem that was fixed in 8.4 will also be
fixed in 9.0 and above, even though the 9.0 release notes don't explicitly
mention it - they mostly contain new features.


When reading up on the changes between 8.1.2 and 9.3.6 (a daunting task)
you can usually limit yourself to reading 8.2, 8.3, 8.4, 9.0, 9.1, 9.2
and 9.3.  This should cover all behaviour changes that are likely to
affect you.
The minor releases should not change behaviour other than fixing bugs.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can SET_VARSIZE cause a memory leak?

2016-06-08 Thread Albe Laurenz
Николай Бабаджанян wrote:
> I didn't find an easy way to convert ucs-2 bytea to utf-8, so I decided to 
> write a C-function. Since
> ucs-2 has a fixed symbol size of 2 bytes, the output bytea size may differ.
> 
> I do the following:
> 
>   bytea   *result= (bytea *)  palloc0(VARSIZE(in_bytea)); // 
> allocating memory for the result
>   SET_VARSIZE(result, VARSIZE_ANY(in_bytea));
> 
>   ... // some calculations resulting in `result` having some trailing 0-s 
> (since palloc0 was
> used). We don't need those, so:
> 
>   SET_VARSIZE(result, new_varsize_result+VARHDRSZ); // new_varsize_result 
> was calculated during
> the convertion
> 
>   PG_RETURN_BYTEA_P(result);
> 
> The question is am I leaking memory by doing this, and if I am, should I use 
> pfree() manually on each
> address that is left trailing, or is there some other way to make this work?

This is safe, and the memory will be freed at the end of the transaction.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgresql embedded mode

2016-05-23 Thread Albe Laurenz
David G. Johnston wrote:
> On Mon, May 23, 2016 at 6:54 AM, aluka raju  wrote:
>> As given in the FAQ's that postgresql cannot be embedded
>> https://wiki.postgresql.org/wiki/FAQ#Can_PostgreSQL_be_embedded.3F .
>> 
>> Is their any possibility to make it embedded. Till now postgresql has not 
>> done this embedded
>> mode and i want to work on how it can be embedded and contribute. please 
>> help me how to start or
>> suggest the idea-.
> 
> ​Step 1 - Fork the project...
> 
> ​Changing the fundamental architecture of the system is not something I would 
> expect a project with
> this long of history to attempt to do directly.  While the people working on 
> the core product are
> welcome to spend their time however they like I don't imagine any of them 
> would be willing to commit
> code to core pertaining to this capability.  Most, if not all, of them likely 
> don't believe it is even
> a good idea generally.

There was this patch by Tom Lane in 2012:
http://www.postgresql.org/message-id/12511.1346631...@sss.pgh.pa.us

This can be used as something like an embedded database.
Nothing more happened with this patch, though.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG wire protocol question

2016-05-17 Thread Albe Laurenz
Boszormenyi Zoltan wrote:
> it has been a long time since I have read this list or written to it.
> 
> Now, I have a question. This blog post was written about 3 years ago:
> https://aphyr.com/posts/282-jepsen-postgres
> 
> Basically, it talks about the client AND the server as a system
> and if the network is cut between sending COMMIT and
> receiving the answer for it, the client has no way to know
> whether the transaction was actually committed.
> 
> The client connection may just timeout and a reconnect would
> give it a new connection but it cannot pick up its old connection
> where it left. So it cannot really know whether the old transaction
> was committed or not, possibly without doing expensive queries first.
> 
> Has anything changed on that front?

That blog post seems ill-informed - that has nothing to do with
two-phase commit.

The problem - that the server may commit a transaction, but the client
never receives the server's response - is independent of whether
two-phase commit is used or not.

This is not a problem of PostgreSQL, it is a generic problem of communication.

What would be the alternative?
That the server has to wait for the client to receive the commit response?
But what if the client received the message and the server or the network
go down before the server learns of the fact?
You see that this would lead to an infinite regress.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres RFC3339 datetime formatting

2016-05-10 Thread Albe Laurenz
Jasim Mohd wrote:
> Is there any way to format datetime to RFC3339Nano Eg: 
> 2006-01-02T15:04:05.9Z07:00 in postgres
> 9.3 or 9.5?
> 
> I tried with to_char. But there is no documentation how to handle T, Z, 
> +07:00, -07:00 etc.

The best I can get is:

SELECT to_char(current_timestamp, 'FXYYYY-MM-DD"T"HH24:MI:SS.US"000Z"OF');

but it will suppress the minute part of the time zone offset if it is 0.

I don't think you can get nanosecond precision in PostgreSQL.

Is that good enough?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Does the initial postgres user have a password?

2016-05-03 Thread Albe Laurenz
dandl wrote:
> I have a new 9.5 installation, Windows x64, developer only. No users have 
> been added, and no passwords
> set.
> 
> I can access the system:
> · using pgAdmin3, without specifying a user or password
> · using psql, specifying user ‘postgres’ but no password
> 
> I cannot access the system with various other tools that require a connection 
> string eg
> password authentication failed for user "postgres"
> 
> There is a ‘postgres’ user defined in pg_shadow with a password of:
> "md5a19959576d12fc69375fc3dadaeab90b"
> 
> What is going on here? Is there a password for user ‘postgres’ or isn’t 
> there? What is it? Why don’t I
> need it sometimes, and I do other times?

You have a password set, and it is probably in a password file, which
might have been created by pgAdmin III.

See http://www.postgresql.org/docs/current/static/libpq-pgpass.html
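
On Windows the file is %APPDATA%\postgresql\pgpass.conf, with lines of the
form host:port:database:user:password, for example (password made up):

    localhost:5432:*:postgres:secret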

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] truncate table getting blocked

2016-04-26 Thread Albe Laurenz
Jayadevan M wrote:
> I have a python script. It opens a cursor, and sets the search_path (using 
> psycopg2). In case
> something goes wrong in the script, a record is inserted into a table. In
> that script, I am not doing anything else other than reading a file and
> publishing the lines to a queue (no database operations).
> The database is used just to track the errors. But my SET search_path is
> blocking a TRUNCATE TABLE I am executing from a psql session. Is this expected?
> 
> When the truncate table hung, I used this query
>  SELECT blocked_locks.pid AS blocked_pid,
>  blocked_activity.usename  AS blocked_user,
>  blocking_locks.pid AS blocking_pid,
>  blocking_activity.usename AS blocking_user,
>  blocked_activity.query AS blocked_statement,
>  blocking_activity.query   AS current_statement_in_blocking_process
>FROM  pg_catalog.pg_locks blocked_locks
> JOIN pg_catalog.pg_stat_activity blocked_activity  ON 
> blocked_activity.pid = blocked_locks.pid
> JOIN pg_catalog.pg_locks blocking_locks
> ON blocking_locks.locktype = blocked_locks.locktype
> AND blocking_locks.DATABASE IS NOT DISTINCT FROM 
> blocked_locks.DATABASE
> AND blocking_locks.relation IS NOT DISTINCT FROM 
> blocked_locks.relation
> AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page
> AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple
> AND blocking_locks.virtualxid IS NOT DISTINCT FROM 
> blocked_locks.virtualxid
> AND blocking_locks.transactionid IS NOT DISTINCT FROM 
> blocked_locks.transactionid
> AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid
> AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid
> AND blocking_locks.objsubid IS NOT DISTINCT FROM 
> blocked_locks.objsubid
> AND blocking_locks.pid != blocked_locks.pid
>JOIN pg_catalog.pg_stat_activity blocking_activity ON 
> blocking_activity.pid = blocking_locks.pid
> 
>WHERE NOT blocked_locks.GRANTED;
> 
> and got this (schemaname/user/tablename modified)-
> 
>  blocked_pid | blocked_user | blocking_pid | blocking_user | blocked_statement                | current_statement_in_blocking_process
> -------------+--------------+--------------+---------------+----------------------------------+---------------------------------------
>         9223 | myuser       |        12861 | myuser        | truncate table myschema.table1; | SET search_path TO myschema,public
> 
> 
> PG version :
> 
> PostgreSQL 9.4.5 on x86_64-suse-linux-gnu, compiled by gcc (SUSE Linux) 
> 4.8.5, 64-bit

It is not the "SET search_path" statement that is blocking the truncate,
but probably some earlier statement issued in the same transaction.

Take a look at pg_locks to find out what lock the transaction is holding on 
myschema.table1.

Use statement logging to find out which statement causes the lock.
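
For example (using the table from your report):

    SELECT locktype, mode, granted, pid
    FROM pg_locks
    WHERE relation = 'myschema.table1'::regclass;

For the statement logging, set in postgresql.conf and reload:

    log_statement = 'all'

Then the log will show what the session ran before the SET search_path.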

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Error: no connection to the server

2016-04-25 Thread Albe Laurenz
Marco Bambini wrote:
> I have a multithreaded C client and sometimes I receive the "no connection to 
> the server" error
> message.
> I haven't found any documentation about it and about how to fix this issue.
> 
> Can anyone point me in the right direction?

The error message is reported in interfaces/libpq/fe-exec.c:

static bool
PQsendQueryStart(PGconn *conn)
{
    if (!conn)
        return false;

    /* clear the error string */
    resetPQExpBuffer(&conn->errorMessage);

    /* Don't try to send if we know there's no live connection. */
    if (conn->status != CONNECTION_OK)
    {
        printfPQExpBuffer(&conn->errorMessage,
                          libpq_gettext("no connection to the server\n"));
        return false;
    }

So it looks like you closed the connection (maybe in some other thread)
and then try to reuse it.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How do BEGIN/COMMIT/ABORT operate in a nested SPI query?

2016-04-20 Thread Albe Laurenz
David Bennett wrote:
> > From: Albe Laurenz [mailto:laurenz.a...@wien.gv.at]
> >
> > > I am attempting to create a new language implementation. The language
> > > is Andl (andl.org), so the handler is plandl.
> > > This is a question about executing SPI queries from inside plandl.
> > >
> > > The documentation makes it clear that SPI allows nested queries; that
> > > in some instances it will be necessary to call SPI_push() and
> > > SPI_pop(), but in others this will be handled automatically. Se
> > http://www.postgresql.org/docs/9.5/interactive/spi-spi-push.html.
> > >
> > > It is an important design feature of plandl to allow nested queries.
> > >
> > > My question is: where are the transaction boundaries if the
> > > inner/outer query do or do not contain BEGIN/ABORT/COMMIT? Do they
> > > nest, or does an inner COMMIT finish a transaction started by an outer
> > BEGIN, or is it ignored?
> >
> > You cannot have BEGIN or COMMIT inside a function.
> 
> Are you sure you meant it like that? I already have BEGIN/COMMIT inside a 
> function and it works
> perfectly. If it did not, then it would be impossible to use BEGIN/COMMIT in 
> any language handler,
> since every call to a language handler is a call to a function.
> 
> Did you mean 'inside a nested function'? Or something else?

I guess I'm out of my depth when it comes to language handlers...

But I cannot see how you can have BEGIN or COMMIT called from inside one.

Doesn't it look like that:

BEGIN;
SELECT my_andl_function();
COMMIT;

Since there are no autonomous transactions in PostgreSQL, how can you have
BEGIN and COMMIT called from the code that is invoked by "SELECT 
my_andl_function()"?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Enhancement Request

2016-04-20 Thread Albe Laurenz
Melvin Davidson wrote:
> On Tue, Apr 19, 2016 at 4:37 PM, Rob Brucks  wrote:
>> I'd like to propose two enhancements to the PostgreSQL code, but I'm not 
>> sure if this is the
>> correct mailing list.  So if it's not then please let me know where I need 
>> to post this.

I'd post feature requests to the -hackers list, but -general is usually fine 
too.

>> * General monitoring:
>> We have a need for a "monitoring" role in PostgreSQL that has read-only 
>> access to any "pg_stat"
>> view.  As of 9.4, only a super-user can read all columns of 
>> "pg_stat_activity", "pg_stat_replication",
>> and "pg_stat_archiver" (there may be other restricted views as well).  These 
>> views provide critical
>> insight on how well the cluster is operating and what is going on.
>> 
>> There appears to be only two ways to gain access to these views:
>> 
>> 1.   grant super-user to my monitoring user
>> 2.   write custom functions owned by a super-user with "SECURITY DEFINER" 
>> and grant access to
>> my monitoring user

I think #2 is good enough.
To reduce the pain of deploying such a function, you can create it in
"template1" and CREATE DATABASE will automatically copy it.

>> * Streaming Replication Monitoring:

>> The only accurate method I have found to measure standby lag is to create a 
>> synthetic update
>> that runs periodically.  This works, but is less than ideal and requires 
>> adding a table to every
>> cluster (which then has to be vacuumed frequently too) and writing and 
>> maintaining a process to update
>> it and purge it.

I second that, I have wished for something like that myself.

You don't actually have to create a table for this, a simple "SELECT 
txid_current()"
on the primary will do the trick.  Still it is annoying.
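
On the standby you can at least get an approximation without synthetic
writes, as long as the primary keeps committing:

    SELECT now() - pg_last_xact_replay_timestamp() AS replication_delay;

The catch is that on an idle primary this value grows although nothing is
lagging - which is exactly why the synthetic update is needed.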

> Enhancement or feature requests should probably go to Customer Feedback
> 

I didn't know that site.
Was it ever announced?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How are files of tables/indexes/etc deleting?

2016-04-19 Thread Albe Laurenz
Иван Фролков wrote:
> When I drop a table, the file which contains real data will be deleted, but 
> only when I will commit
> transaction. It seems like such job must do vacuum, but I cannot locate the 
> code which does the real
> job. Could anybody explain in details how it works?

See RelationDropStorage() in src/backend/catalog/storage.c

No vacuum is necessary, the files that belong to the relation are deleted at 
commit time.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How do BEGIN/COMMIT/ABORT operate in a nested SPI query?

2016-04-18 Thread Albe Laurenz
da...@andl.org wrote:
> I am attempting to create a new language implementation. The language is Andl 
> (andl.org), so the
> handler is plandl.
> This is a question about executing SPI queries from inside plandl.
> 
> The documentation makes it clear that SPI allows nested queries; that in some 
> instances it will be
> necessary to call SPI_push() and SPI_pop(), but in others this will be 
> handled automatically. Se
> http://www.postgresql.org/docs/9.5/interactive/spi-spi-push.html.
> 
> It is an important design feature of plandl to allow nested queries.
> 
> My question is: where are the transaction boundaries if the inner/outer query 
> do or do not contain
> BEGIN/ABORT/COMMIT? Do they nest, or does an inner COMMIT finish a 
> transaction started by an outer
> BEGIN, or is it ignored?

You cannot have BEGIN or COMMIT inside a function.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] what's the exact command definition in read committed isolation level?

2016-04-18 Thread Albe Laurenz
Jinhua Luo wrote:
> The document said, "Read Committed mode starts each command with a new
> snapshot that includes all transactions committed up to that instant".
> 
> But what about the embedded commands within the outer command itself?
> Do they share the same snapshot with the outer command?
> 
> a) trigger
> b) CTE
> c) other forms of sub-query

All these share the snapshot of the command to which they belong.

As you wrote, things would break if it were different.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to detoast a column of type BYTEAOID

2016-04-18 Thread Albe Laurenz
da...@andl.org wrote:
> I am attempting to create a new language implementation. The language is
> Andl (andl.org), so the handler is plandl.
> I am having a bit of trouble executing SPI queries from inside plandl.
> 
> The particular problem arises after calling SPI_cursor_fetch() and then
> SPI_getbinval() on a column of type BYTEA (BYTEAOID 17). I was expecting to
> get back a Datum of type bytea, but the length is crazy. From the look of
> the data, I suspect that what I have is a compressed TOAST, and I don't know
> how to handle those.
> 
> 1. Does SPI_getbinval() indeed return compressed TOAST values?
> 2. If so, what should I do (in general) to ensure the values I retrieve are
> de-toasted?

You should use the PG_DETOAST_DATUM* macros from fmgr.h; see the documentation 
there.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql 9.3.4 file system compatibility

2016-04-11 Thread Albe Laurenz
Marllius wrote:
> OCFS2 = oracle cluster file system 2

I think using OCFS2 for PostgreSQL data is a good idea if you want
to be the first at something or try to discover bugs in OCFS2.

Why do you want a cluster file system for PostgreSQL?  You cannot
have more than one server access the same data at the same time
anyway.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] max_stack_depth problem though query is substantially smaller

2016-04-08 Thread Albe Laurenz
Charles Clavadetscher wrote:
> We have a process in R which reads statistical raw data from a table and 
> computes time series values
> from them.
> The time series values are in a hstore field with the date as the key and the 
> value as the value.
> The process writes the computed value into a temporary table and locks the 
> corresponding row(s) of the
> target table for update.
> Finally the row(s) are updated if they already exist or inserted if they do 
> not exist.
> 
> This process runs nightly and processes many GB of data without generating 
> any error. Normally these
> are low frequency time series
> (monthly or quarterly data).
> 
> Now we have a daily time series with about 46'000 key/value pairs. In the
> near future this will increase to 86'000, including data from previous years.
> 
> When R processes the daily time series we get a stack size exceeded error,
> followed by the hint to increase the max_stack_depth. My colleague, who
> wrote the R function and performed the test, read the docs and, according
> to ulimit -s, increased the max_stack_depth to 7MB.
> 
> Here the details of OS and PG:
> OS: osx 10.10.5
> PG: 9.3.3
> 
> ulimit -s = 8192
> 
> The resize did work, as "show max_stack_depth;" has shown. After this change,
> however, the query states the same error as before, just with the new limit
> of 7 MB.
> 
> The query itself was written to a file in order to verify its size. The size 
> turned out to be 1.7MB,
> i.e. even below the
> conservative default limit of 2 MB, let alone substantially below 7 MB.
> 
> Apart from the fact that we could consider using a different strategy to 
> store time series, we would
> like to understand what is
> causing the problem.
> 
> Here is the query as it looks in the R code:
> sql_query_data <- sprintf("BEGIN;
>     CREATE TEMPORARY TABLE ts_updates(ts_key varchar, ts_data hstore, ts_frequency integer) ON COMMIT DROP;
>     INSERT INTO ts_updates(ts_key, ts_data) VALUES %s;
>     LOCK TABLE %s.timeseries_main IN EXCLUSIVE MODE;
>
>     UPDATE %s.timeseries_main
>     SET ts_data = ts_updates.ts_data
>     FROM ts_updates
>     WHERE ts_updates.ts_key = %s.timeseries_main.ts_key;
>
>     INSERT INTO %s.timeseries_main
>     SELECT ts_updates.ts_key, ts_updates.ts_data, ts_updates.ts_frequency
>     FROM ts_updates
>     LEFT OUTER JOIN %s.timeseries_main ON (%s.timeseries_main.ts_key = ts_updates.ts_key)
>     WHERE %s.timeseries_main.ts_key IS NULL;
>     COMMIT;",
>     values, schema, schema, schema, schema, schema, schema, schema)
> 
> And here is what it looks like at the end:
> 
> INSERT INTO ts_updates(ts_key, ts_data, ts_frequency) VALUES
> ('somekey', hstore('1900-01-01','-0.395131869823009') ||
>            hstore('1900-01-02','-0.595131869823009') ||
>            hstore('1900-01-03','-0.395131869823009') ||
>            [...] 46'000 times
>            hstore('1900-01-04','-0.395131869823009'), NULL);

I don't understand which query causes the error.
The queries you quote above are smaller than 1.7 MB...
You could log errors to find out which statement causes the error.
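
For example, in postgresql.conf (reload afterwards):

    log_min_error_statement = error

makes the server log the complete statement that caused each error.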

One idea would be to attach a debugger to the backend, set a breakpoint in 
check_stack_depth()
where the error is thrown, and take a stack trace when you hit the error.
Maybe that can show you what is going on.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Trying to understand page structures in PG

2016-04-08 Thread Albe Laurenz
Rakesh Kumar wrote:
>> Every row has two system columns associated with it: xmin and xmax
>>
>> xmin is the transaction ID that created the row, while xmax is
>> the transaction ID that removed the row.
>>
>> So when an update takes place, xmax of the original row and xmin
>> of the new row are set to the transaction ID of the current transaction.
>>
>> Furthermore, the commit log (CLOG) logs for each transaction whether
>> it was committed or rolled back.
>>
>> Now when a backend examines a row, it first checks if the row is
>> visible, i.e. xmin must be less or equal to the current transaction ID
>> and xmax must be 0 or greater than the current transaction ID
>> or belong to a transaction that was rolled back.
>>
>> To save CLOG lookups, the first reader who consults the CLOG will
>> save the result in so-called "hint bits" on the row itself.
> 
> I am assuming the same must be true for the indexes also. Does PG look
> up primary key
> by examining the rows like you described above.

Rows remain in the index as long as they remain in the table.

What actually removes old rows from the table - VACUUM - also
removes them from the index.

The index does not contain visibility information about the indexed row,
so every row found in an index must still be checked for visibility,
either as described above, or in the case of an index-only scan
by checking the visibility map if all tuples on that heap page are
visible to all transactions.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Trying to understand page structures in PG

2016-04-08 Thread Albe Laurenz
Jeff Janes wrote:
>> I am curious because of "while xmax is the transaction ID that
>> *removed* the row".
> 
> "marked for removal" would be more accurate.  If the row were actually
> physically removed, it would no longer have a xmax to set.

Yes, thanks for the clarification.
I was thinking "logically removed", but I realize my wording was confusing.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Trying to understand page structures in PG

2016-04-06 Thread Albe Laurenz
Rakesh Kumar wrote:
> I understand that when an update of say 100,000 rows are made, PG
> writes the updated rows as a new row. These new rows are not visible
> to any sessions except the one creating it. At commit time PG flips
> something internally to make these rows visible to all.
> 
> My Q: what happens to those rows which use to contain the values
> before the update. Shouldn't something change in those rows to
> indicate that those rows are no longer valid. Who does it chain those
> rows to the new rows.

Every row has two system columns associated with it: xmin and xmax

xmin is the transaction ID that created the row, while xmax is
the transaction ID that removed the row.

So when an update takes place, xmax of the original row and xmin
of the new row are set to the transaction ID of the current transaction.

Furthermore, the commit log (CLOG) logs for each transaction whether
it was committed or rolled back.

Now when a backend examines a row, it first checks if the row is
visible, i.e. xmin must be less or equal to the current transaction ID
and xmax must be 0 or greater than the current transaction ID
or belong to a transaction that was rolled back.

To save CLOG lookups, the first reader who consults the CLOG will
save the result in so-called "hint bits" on the row itself.
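
You can watch this happen (table and values made up):

    BEGIN;
    SELECT txid_current();                        -- say it returns 5000
    UPDATE t SET val = val + 1 WHERE id = 42;
    SELECT xmin, xmax, id FROM t WHERE id = 42;   -- xmin of the new row is 5000
    COMMIT;

A concurrent session still sees the old row version, whose xmax is now 5000.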

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL advocacy

2016-03-26 Thread Albe Laurenz
Jernigan, Kevin wrote:
>On 3/24/16, 3:09 PM, "Albe Laurenz" <laurenz.a...@wien.gv.at> wrote:
>>> Disk is only a single point of failure in RAC if you configure 
>>> non-redundant storage.
>>> In general, Oracle recommends triple mirroring to protect against disk 
>>> failures,
>>> as they have had many experiences over the years where customers with 
>>> mirrored disks
>>> would see consecutive disk failures within short periods of time.
>>
>>The single point of failure in Oracle RAC is the ASM file system.
>
> Only if you misconfigure ASM for RAC: with RAC, an ASM instance will run on 
> every RAC node,
> and if the ASM instance fails on any one node, the RAC instance on that node 
> will go down,
> but the RAC instances on the other nodes will continue to run - so the 
> database will remain
> accessible, though with fewer processors available.
>
> If you configure ASM to implement at least dual mirroring for storage - and 
> I’m pretty sure
> Oracle intentionally makes it hard to configure ASM without mirroring - then 
> ASM will continue
> run through any single disk failure.

I think you missed my point.

I am not talking about disk failure, but about some failure (possibly a 
software bug or
a combination of hardware problem and software weakness) that causes the on-disk
data to be corrupted. File system corruption.

Mirroring will only mirror such a corruption, and multiple ASM instances that 
all access
the same corrupted data won't help either.

Of course Oracle says that ASM is so simple and bullet-proof that this cannot 
happen,
but claiming that something cannot fail is not good enough.
RAC is a shared storage system, and that shared storage is a single point of 
failure.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL advocacy

2016-03-24 Thread Albe Laurenz
Jernigan, Kevin wrote:
> Disk is only a single point of failure in RAC if you configure non-redundant 
> storage.
> In general, Oracle recommends triple mirroring to protect against disk 
> failures,
> as they have had many experiences over the years where customers with 
> mirrored disks
> would see consecutive disk failures within short periods of time.

The single point of failure in Oracle RAC is the ASM file system.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Confusing deadlock report

2016-03-19 Thread Albe Laurenz
Thomas Kellerer wrote:
>>> The error as reported in the Postgres log file is this:
>>>
>>> 2016-03-12 13:51:29.305 CET [23912]: [1-1] 
>>> user=arthur,db=prod,app=[unknown] ERROR: deadlock detected
>>> 2016-03-12 13:51:29.305 CET [23912]: [2-1] 
>>> user=arthur,db=prod,app=[unknown] DETAIL: Process 23912
>>> waits for ShareLock on transaction; blocked by process 24342.
>>> Process 24342 waits for ShareLock on transaction 39632974; blocked 
>>> by process 23912.
>>> Process 23912: UPDATE alpha SET some_flag = $1 WHERE (id = $2)
>>> Process 24342: INSERT INTO bravo (..., alpha_id) VALUES ($1, $2, 
>>> $3, $4, $5, $6, $7, $8, $9,
>>> $10)
>>>
>>> Can the foreign key between bravo and alpha play a role here? With some 
>>> simple test setups I could not
>>> get the insert to wait even if it was referencing the row that the other 
>>> process has updated.
>>>
>>> This happened on 9.3.10 running on Debian

>> The probable culprit is a foreign key between these tables.
>>
>> What foreign keys are defined?

> The FK in question is:
> 
>alter table bravo foreign key (alpha_id) references alpha (id);
> 
> But by simply creating two tables (with a foreign key) and doing an update in 
> one transaction and the
> insert in another, I do not get any locks or waiting transactions.
> (And to be honest: I would have been pretty disappointed if I had)

Hm, true; I cannot get a lock with these two statements.

Can you determine what statements were executed in these transactions before 
the deadlock?
It was probably one of these that took the conflicting lock.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Confusing deadlock report

2016-03-18 Thread Albe Laurenz
Thomas Kellerer wrote:
>> Can you determine what statements were executed in these transactions before 
>> the deadlock?
>> It was probably one of these that took the conflicting lock.
> 
> Unfortunately not. Statement logging is not enabled on that server 
> (space-constrained).
> 
> And while we know the statements that can possibly be executed by these parts 
> of the application,
> several on them depend on the actual data, so it's hard to tell which path 
> the two transactions
> actually used.

But that's where the solution to your problem must be...

Look at all statements that modify "alpha" and could be in the same transaction
with the INSERT to "bravo".
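
The usual missing piece is a statement that locked the referenced row in
"alpha" strongly enough to conflict with the foreign key check, e.g.
(a sketch with invented values):

    -- transaction 1
    BEGIN;
    SELECT * FROM alpha WHERE id = 1 FOR UPDATE;

    -- transaction 2
    BEGIN;
    INSERT INTO bravo (id, alpha_id) VALUES (100, 1);   -- blocks

The INSERT takes a FOR KEY SHARE lock on the referenced row in "alpha",
which conflicts with FOR UPDATE (and with DELETE or an update of the key
column), but not with an ordinary update of non-key columns - that is why
your simple test did not wait.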

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Confusing deadlock report

2016-03-18 Thread Albe Laurenz
Thomas Kellerer wrote:
> we have a strange (at least to me) deadlock situation which does not seem to 
> fall into the "usual"
> deadlock category.
> 
> The error as reported in the Postgres log file is this:
> 
> 2016-03-12 13:51:29.305 CET [23912]: [1-1] user=arthur,db=prod,app=[unknown] 
> ERROR: deadlock detected
> 2016-03-12 13:51:29.305 CET [23912]: [2-1] user=arthur,db=prod,app=[unknown] 
> DETAIL: Process 23912
> waits for ShareLock on transaction; blocked by process 24342.
> Process 24342 waits for ShareLock on transaction 39632974; blocked by 
> process 23912.
> Process 23912: UPDATE alpha SET some_flag = $1 WHERE (id = $2)
> Process 24342: INSERT INTO bravo (..., alpha_id) VALUES ($1, $2, $3, 
> $4, $5, $6, $7, $8, $9,
> $10)
> 
> (I have "obfuscated" the table names)
> 
> 
> Process 24342 did update table alpha in an earlier step, but a different row 
> than Process 23912
> updated.
> Table bravo has a foreign key to table alpha.
> 
> My understanding of the deadlock report is that the statements shown in the 
> log are the actual
> statements on which the two processes were waiting.
> 
> What I think is unusual in this situation is the INSERT statement that is 
> part of the deadlock
> situation.
> 
> The only way I can think of how a deadlock could happen during an insert, is 
> if process 23912 had
> inserted a row into bravo with the same PK value that process 24342 is trying 
> to insert. But process
> 23912 never even touches that table, so I am a bit confused on how this can 
> happen.
> 
> Can the foreign key between bravo and alpha play a role here? With some 
> simple test setups I could not
> get the insert to wait even if it was referencing the row that the other 
> process has updated.
> 
> This happened on 9.3.10 running on Debian

The probable culprit is a foreign key between these tables.

What foreign keys are defined?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql question: aborting a "script"

2016-03-15 Thread Albe Laurenz
John McKown wrote:
> I'm likely abusing the psql program. What I have is an awk program which 
> reads a file and produces a
> number of INSERT INTO commands. I then feed these commands into psql to 
> execute them. Yes, a Perl
> program would be a better idea. Anyway, sometimes the commands are rejected 
> due to some problem, such
> as duplicate primary key. What I wish is that the psql command had a switch, 
> or control command, which
> would say "exit from psql if anything fails". To me, this is like the BASH 
> "set -e" command for a
> shell script. Does this sound like a useful addition. Or am I just missing 
> where it already exists?

Did you try "psql -v ON_ERROR_STOP=on"?
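
For example (file and database names made up):

    psql -v ON_ERROR_STOP=on -f inserts.sql mydb

or, inside the script itself:

    \set ON_ERROR_STOP on

You might also like -1 (--single-transaction), which wraps the whole script
in a single transaction so that nothing is committed if anything fails.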

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Windows default directory for client certificates

2016-03-09 Thread Albe Laurenz
Lupi Loop wrote:
> PostgreSQL documentation at 
> http://www.postgresql.org/docs/9.5/static/libpq-ssl.html
> says that when a client certificate is requested by a server, a windows 
> client psql will use by
> default the credentials  located at %APPDATA%\postgresql\postgresql.crt and
> %APPDATA%\postgresql\postgresql.key
> 
> However, my psql client application (v 9.5.0) in a Windows Server 2012 R2 
> cannot find the certificates
> in this location and only works when this location is specifically set using 
> the sslcert and sslkey
> attributes when connecting. Is this a bug or am I using a wrong path?
> 
> This is an example of execution:
> 
> ---
> C:\Program Files\PostgreSQL\9.5\data> ..\bin\psql 
> "postgresql://postgres@localhost/postgres"
> psql: FATAL:  connection requires a valid client certificate
> FATAL:  no pg_hba.conf entry for host "::1", user "postgres", 
> database"postgres", SSL off
> 
> C:\Program Files\PostgreSQL\9.5\data> ..\bin\psql
> "postgresql://postgres@localhost/postgres?sslkey=postgresql\postgresql.key=postgresql\postgres
> ql.crt"
> psql (9.5.0)
> SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384,bits: 
> 256, compression: off)
> Type "help" for help.
> postgres=#
> 
> cheers

To make this work, you will have to have a root certificate "root.crt" in the
server's data directory and the configuration parameter "ssl_ca_file" set to 
"root.crt".

The corresponding line in pg_hba.conf should look like this:
hostssl  <database>  <user>  <client address>/32  md5  clientcert=1

Then you have to restart the server.
But I guess you have done that since it works if you specify the files 
explicitly.

Perhaps you are not in the %APPDATA% directory.
What do you get when you type
   echo %APPDATA%
on the command prompt?

One possibility to investigate this is to run "Process Monitor" and add the 
filters
"Process Name is psql.exe" and "Patch ends with postgresql.key".
Then you should see where psql looks for the client key.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Does RAISE EXCEPTION rollback previous commands in a stored function?

2016-03-02 Thread Albe Laurenz
Alexander Farber wrote:
> On Wed, Mar 2, 2016 at 11:18 AM, Albe Laurenz <laurenz.a...@wien.gv.at> wrote:
>> You cannot use START TRANSACTION, BEGIN, SAVEPOINT, COMMIT or ROLLBACK
>> inside a function.  A function always runs within one transaction.
>> 
>> Savepoints or subtransactions are written with a BEGIN ... EXCEPTION
>> block in PL/pgSQL, so you could write:
>> 
>> CREATE FUNCTION ... AS
>> $$BEGIN
>>    /* UPDATE 1 */
>>    UPDATE ...;
>>    BEGIN  /* sets a savepoint */
>>       /* UPDATE 2, can cause an error */
>>       UPDATE ...;
>>    EXCEPTION
>>       /* rollback to savepoint, ignore error */
>>       WHEN OTHERS THEN NULL;
>>    END;
>> END;$$ LANGUAGE plpgsql;
>> 
>> Even if UPDATE 2 throws an error, UPDATE 1 will be committed.

> Thank you, this is very helpful, just 1 little question:
> 
> 
> Why do you write just EXCEPTION?
> 
> 
> Shouldn't it be RAISE EXCEPTION?

That's something entirely different, see
http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING

The above construct *catches* the exception, which might be
raised by the UPDATE statement.
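
A minimal illustration of the difference (nothing here is from your code):

   DO $$
   BEGIN
      BEGIN
         RAISE EXCEPTION 'boom';        -- RAISE EXCEPTION *throws* an error
      EXCEPTION
         WHEN OTHERS THEN               -- an EXCEPTION clause *catches* it
            RAISE NOTICE 'caught: %', SQLERRM;
      END;
   END;$$;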

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Does RAISE EXCEPTION rollback previous commands in a stored function?

2016-03-02 Thread Albe Laurenz
Alexander Farber wrote:
> how to set such a savepoint inside of a stored function?
> 
> Can I call "START TRANSACTION", and then at some point later in the same 
> stored function call RAISE
> EXCEPTION?

I realize that what I wrote must be confusing.

You cannot use START TRANSACTION, BEGIN, SAVEPOINT, COMMIT or ROLLBACK
inside a function.  A function always runs within one transaction.

Savepoints or subtransactions are written with a BEGIN ... EXCEPTION
block in PL/pgSQL, so you could write:

CREATE FUNCTION ... AS
$$BEGIN
   /* UPDATE 1 */
   UPDATE ...;
   BEGIN  /* sets a savepoint */
      /* UPDATE 2, can cause an error */
      UPDATE ...;
   EXCEPTION
      /* rollback to savepoint, ignore error */
      WHEN OTHERS THEN NULL;
   END;
END;$$ LANGUAGE plpgsql;

Even if UPDATE 2 throws an error, UPDATE 1 will be committed.
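
To see that in action, here is a self-contained sketch (table and function
names are invented):

   CREATE TABLE t (id integer PRIMARY KEY);

   CREATE FUNCTION partial_work() RETURNS void AS
   $$BEGIN
      INSERT INTO t VALUES (1);            /* UPDATE 1 in the pattern above */
      BEGIN                                /* sets a savepoint */
         INSERT INTO t VALUES (1);         /* fails: duplicate key */
      EXCEPTION
         WHEN unique_violation THEN NULL;  /* roll back to the savepoint */
      END;
   END;$$ LANGUAGE plpgsql;

   SELECT partial_work();  -- succeeds, and t contains the one row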

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Does RAISE EXCEPTION rollback previous commands in a stored function?

2016-03-02 Thread Albe Laurenz
Andreas Kretschmer wrote:
>> Alexander Farber  hat am 1. März 2016 um 19:41
>> geschrieben:
>>
>>
>> Good evening,
>>
>> in PostgreSQL 9.5 does RAISE EXCEPTION reliably rollback all previous
>> commands in a stored function?
> 
> Yes.

That is, unless you set a savepoint to which you can rollback.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Only owners can ANALYZE tables...seems overly restrictive

2016-02-29 Thread Albe Laurenz
David G. Johnston wrote:
>>>> In a production environment you don't want a user to change your table
>>>> statistics.
>>>>
>>>> They could just set default_statistics_target to something stupid,
>>>> run ANALYZE and wreck the statistics for everyone.
>>>> And then come back to the DBA and complain that things don't work.

>>> Setting default_statistics_target and running ANALYZE are two entirely
>>> different things.

>> Setting default_statistics_target affects the statistics computed by ANALYZE,
>> so I cannot follow you here.

> Just because I can run ANALYZE doesn't mean I should be able to update the
> statistic targets.  While the features are related, the permissions are not.

See http://www.postgresql.org/docs/current/static/planner-stats.html

"The amount of information stored in pg_statistic by ANALYZE, in particular the
 maximum number of entries in the most_common_vals and histogram_bounds arrays
 for each column, can be set on a column-by-column basis using the
 ALTER TABLE SET STATISTICS command, or globally by setting the
 default_statistics_target configuration variable."
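
That is the scenario I mean; a sketch with invented names:

   SET default_statistics_target = 1;  -- session-local, any user may set it
   ANALYZE important_table;            -- if permitted, wrecks the histograms

ALTER TABLE ... SET STATISTICS at least requires table ownership, but
default_statistics_target plus ANALYZE would not.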

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Only owners can ANALYZE tables...seems overly restrictive

2016-02-29 Thread Albe Laurenz
David G. Johnston wrote:
> On Mon, Feb 29, 2016 at 2:52 AM, Albe Laurenz <laurenz.a...@wien.gv.at> wrote:
>> John R Pierce wrote:
>>> analyze has arguably fewer side effects, it's a performance enhancement;
>>> it's neither altering the schema nor changing the data.
 
>> In a production environment you don't want a user to change your table
>> statistics.
>> 
>> They could just set default_statistics_target to something stupid,
>> run ANALYZE and wreck the statistics for everyone.
>> And then come back to the DBA and complain that things don't work.
>>
>> We have a policy that users are not table owners, and with the
>> current behaviour we can be certain that any bad table statistics
>> are the fault of the DBA or wrong configuration.

> Setting default_statistics_target and running ANALYZE are two entirely
> different things.

Setting default_statistics_target affects the statistics computed by ANALYZE,
so I cannot follow you here.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Only owners can ANALYZE tables...seems overly restrictive

2016-02-29 Thread Albe Laurenz
John R Pierce wrote:
> On 2/28/2016 8:58 PM, Tom Lane wrote:
>>> I think the check for whether a given user can or cannot analyze a table
>>> should be whether the user has INSERT, UPDATE, or DELETE privileges.

>> By that argument, we should allow anyone with any write access to do
>> TRUNCATE.  Or perhaps even DROP TABLE.  I'm not impressed.

> I don't see why anyone with delete privileges shouldn't be able to
> truncate (after all, that's the same as deleting all records).
> 
> analyze has arguably fewer side effects, it's a performance enhancement;
> it's neither altering the schema nor changing the data.

In a production environment you don't want a user to change your table
statistics.

They could just set default_statistics_target to something stupid,
run ANALYZE and wreck the statistics for everyone.
And then come back to the DBA and complain that things don't work.

We have a policy that users are not table owners, and with the
current behaviour we can be certain that any bad table statistics
are the fault of the DBA or wrong configuration.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PosgreSQL Security Architecture

2016-02-15 Thread Albe Laurenz
John R Pierce wrote:
> On 2/12/2016 5:20 AM, Lesley Kimmel wrote:
>> Thanks for the reply Laurenz. Of course the first thing that I thought
>> of to prevent man-in-the-middle was SSL. However, I also like to try
>> to address the issue in a way that seems to get at what they are
>> intending. It seemed to me that they wanted to do some configuration
>> within the database related to session IDs.
> 
> when the connection is broken, the process exits and the session ceases
> to exist. there are no 'session IDs' to speak of (they are process
> IDs instead, but a new process mandates new authentication; there are no
> residual authorizations associated with a PID).

I might be misunderstanding, but is there any connection to a
man-in-the-middle attack?

Without SSL, anybody who can tap into the TCP communication can inject
SQL statements.  No session ID is required.
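
If that is the concern, the remedy is to force SSL on the server, for
example (a sketch; adjust file names and addresses to your setup):

   # postgresql.conf
   ssl = on
   ssl_cert_file = 'server.crt'
   ssl_key_file = 'server.key'

   # pg_hba.conf -- "hostssl" rejects unencrypted connections
   hostssl  all  all  0.0.0.0/0  md5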

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PosgreSQL Security Architecture

2016-02-12 Thread Albe Laurenz
Lesley Kimmel wrote:
> I'm working to secure a PosgreSQL database according to a DoD security guide. 
> It has many very generic
> requirements that get more toward the internal architecture of the system 
> that wouldn't be apparent to
> the average admin. I was hoping someone might have some insight to the 
> following requirements:
> 
> 
> a) The DBMS must maintain the authenticity of communications sessions by 
> guarding against man-in-the-
> middle attacks that guess at Session ID values.

You can have that if you use SSL encryption which is available in PostgreSQL:
http://www.postgresql.org/docs/current/static/ssl-tcp.html

It uses the widely used OpenSSL library, so an encrypted database connection
is as safe from man-in-the-middle attacks as OpenSSL is.
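
To actually defeat a man in the middle, the client should also verify the
server certificate, e.g. (host and file names invented):

   psql "host=db.example.com dbname=mydb sslmode=verify-full sslrootcert=root.crt"

"verify-full" checks both the certificate chain and that the host name
matches the certificate.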

> b) Check DBMS settings and vendor documentation to verify the DBMS properly 
> handles transactions in
> the event of a system failure. The consistent state must include a security 
> configuration that is at
> least as restrictive as before the system failure. This must be guaranteed.

I don't understand what is meant by "security configuration" here.
Is that defined somewhere?

PostgreSQL handles system failures well; it uses a "Write Ahead Log" (WAL)
to record transactions as they get committed. In the case of a system failure,
the recovery process starts at the latest checkpoint (a known consistent state)
before the failure and exactly replays all WAL-logged committed transactions
up to the point of the crash:
http://www.postgresql.org/docs/current/static/wal-intro.html

After crash recovery, the database is in the same state as it was after the last
successful transaction.
The (unrecovered) database files of a crashed database are no less secure than
any file system backup is.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] memory problem with refresh materialized view

2016-02-12 Thread Albe Laurenz
Enrico Pirozzi wrote:
> I have a new postgresql 9.5.0 installation on a new virtual server debian 8.3 
> x64 with 4gb RAM, I have
> compiled postgresql from source.
> 
> When I import a dump with materialized views I see that postgres process 
> takes about all 4 Gb and then
> I have this error
> 
>  fork: Cannot allocate memory

What are the memory-related settings of the database into which you restore?
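
For example, what do these show?

   SHOW shared_buffers;
   SHOW work_mem;
   SHOW maintenance_work_mem;
   SHOW max_connections;

Refreshing a materialized view can use up to maintenance_work_mem for sorts
and index builds, so a high value there could explain the failed fork() on a
4GB machine.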

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/pgSQL debugger

2016-02-01 Thread Albe Laurenz
Dane Foster wrote:
> I googled "plpgsql debugger" and eventually ended up at 
> http://pgfoundry.org/projects/edb-debugger/,
> where the latest release is almost 8 years old. I am aware that this is not 
> an EDB forum but given
> that I sent them an e-mail at 9AM EST and I haven't received a response yet I 
> figured it can't hurt to
> try my question here. Is anyone using it on recent (e.g., 9.5) versions of 
> PostgreSQL. If not, is
> there any consensus on what one should be using to debug PL/pgSQL functions?

The current version lives here:
http://git.postgresql.org/gitweb/?p=pldebugger.git
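
Roughly, building and enabling it looks like this (a sketch; paths vary):

   # in the pldebugger source tree, against your PostgreSQL installation
   make USE_PGXS=1 install

   # postgresql.conf (requires a server restart)
   shared_preload_libraries = 'plugin_debugger'

   -- then, in the database where you want to debug:
   CREATE EXTENSION pldbgapi;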

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] A motion

2016-01-25 Thread Albe Laurenz
Regina Obe wrote:
> At this point I feel we should:
> 
> a) Move this to pgsql-advocacy [...]

> Or
> 
> b) Start a new PostgreSQL mailing list - call it -  pgsql-coc.

-1

While I personally feel that a code of conduct does not need to be an explicit
document and is something that "happens" through the way people on the lists
behave and the way the core team and list maintainers handle problems,
pgsql-general is where the community meets, and that is where such a discussion
should take place.

If it annoys some people, so be it; if people express their dislike, that's
a statement as well.  A code of conduct is about non-technical implications
of activity on the mailing lists, so using the non-technical nature of this
discussion as a reason to push it off the radar is counter-productive.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] long transfer time for binary data

2016-01-21 Thread Albe Laurenz
Andy Colson wrote:
> On 01/21/2016 03:59 PM, Johannes wrote:
>> Here are some transferring measurements (from server to client) with the
>> same file.
>>
>> scp
>> +ssl -compression 1.3 sec
>> +ssl +compression 4.6 sec
>>
>> pgadmin
>> select lo_get(12345);
>> -ssl  3.4 sec
>> +ssl +compression 5.5 sec
>> +ssl -compression 4.5 sec
>>
>> psql
>> select lo_get(12345);
>> +ssl -compression 6.0 sec
>> -ssl  4.4 sec
>>
>> java/jdbc
>> only while(in.read(buf,0,len))
>> +ssl -compression 6.0 sec
>> -ssl  3.0 sec (+ 1.8 sec for new Image())
>>
>> Here is a link for insecure ssl compression:
>> https://en.wikipedia.org/wiki/Comparison_of_TLS_implementations#Compression

> Thanks for the link on ssl compression, I'd not seen that before.  I'm going 
> to have to read up.
> 
> Your numbers ... look ...  odd.  scp +compression is slower?  pgadmin -ssl 
> and psql -ssl and java -ssl
> are all different speeds?  ssl always adds extra time?  Maybe a high latency 
> thing?  If you ping the
> other box what sort of times do you get?  Maybe the extra ssl handshakes up 
> front + high latency is
> causing it.  You could try a shared/cached ssh connection to avoid the 
> overhead.

Johannes' measurements make sense to me.

In situations where network bandwidth is not the bottleneck, you will be
slower with SSL compression than without.  The time you lose is the time the
CPU needs to compress and decompress the data.  I observed that behaviour in
one of our systems that transferred byteas with images over SSL, which led me
to introduce the "sslcompression" connection parameter into PostgreSQL.
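
So if bandwidth is plentiful, it is worth measuring with compression
explicitly disabled, e.g. (connection parameters invented):

   psql "host=db.example.com dbname=mydb sslmode=require sslcompression=0"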

The variation of times between different clients could be random (are these
differences consistent across repeated runs?), could be caused by different
SSL implementations in Java and OpenSSL, or by additional processing in
pgAdmin III.

Yours,
Laurenz Albe


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres and timezones

2016-01-20 Thread Albe Laurenz
Steve Rogerson wrote:
> Hi, this is wrong:
> 
> # select to_char('2016-01-20 00:00'::timestamp at time zone 'Europe/Lisbon',
> 'TZ');
>  to_char
> -
>  GMT
> (1 row)
> 
> 
> It should be WET, "Western European Time". Is there something I'm doing wrong?

That query will always give you your local timezone.

Here in Austria I get:

test=> select to_char('2016-01-20 00:00'::timestamp at time zone 
'Asia/Yerevan', 'TZ');
┌─┐
│ to_char │
├─┤
│ CET │
└─┘
(1 row)
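
If you want the abbreviation for Lisbon, set the session time zone instead
(a sketch):

   SET timezone = 'Europe/Lisbon';
   SELECT to_char('2016-01-20 00:00'::timestamptz, 'TZ');  -- WET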

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Function error

2016-01-14 Thread Albe Laurenz
Sachin Srivastava wrote:
> In my function the problem is that global variables are defined inside the
> function.
> These variables are visible to functions defined inside a function.
> If we move these inner functions to outside of the main function,
> they will lose the visibility of the global variables.
> So I think that's why we have to correct these functions rather than
> creating them individually.

I see.
You'll have to rewrite the functions then, probably by passing the shared
values as parameters instead of having them in global variables.
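
Schematically (names invented):

   -- before: an inner function reads a "global" variable of the outer function
   -- after:  the shared value travels as an explicit parameter
   CREATE FUNCTION inner_step(shared_val bigint) RETURNS void AS
   $$BEGIN
      RAISE NOTICE 'working with %', shared_val;  /* stands in for the real work */
   END;$$ LANGUAGE plpgsql;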

> We have migrated our data from Oracle to Postgres through ORA2PG Tool
> after that we are getting this error for this function.

ora2pg cannot translate all PL/SQL code, you have to be prepared to rewrite
most nontrivial functions.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Function error

2016-01-13 Thread Albe Laurenz
Sachin Srivastava wrote:
> I am getting the below error for a function; please see the bold line in 
> the "Function code" and suggest
> what I should do to correct this code.
> 
>  ---
> 
> ERROR:  syntax error at or near "*"
> LINE 35: SELECT * from logError(msg text) is
> ^
> CONTEXT:  invalid type name "* from logError(msg text) is

That error message does not look like it could come from the
function definition below, but the function definition is clearly
syntactically incorrect:

> CREATE OR REPLACE FUNCTION cp_property_room_count_trans(
> subscriberid bigint,
> incompanyid bigint,
> loginname text)
>   RETURNS void AS
> $BODY$
> DECLARE

[...]

>   SELECT logError(v_errorMesg text, procedureName text, subscriberID bigint, 
> companyID bigint,
> supplierSku text, loginName text) is
> 
> BEGIN
>  insert into SMERROR_LOG
>  (error_message, method_name, system_message, error_log_id, 
> subscriber_id, company_id,
> creation_date, creation_user, update_date, update_user)
>  values(v_errorMesg, procedureName, supplierSku, 
> nextval('smerror_log_sequence'), subscriberID,
> companyID, LOCALTIMESTAMP, loginName, LOCALTIMESTAMP, 'V7Admin');
>   end;

You cannot declare a function inside the DECLARE section of
another function.  Besides, what is "SELECT logError" supposed to mean?

You will have to declare each of the helper functions with its own
CREATE FUNCTION statement.
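
Based on the INSERT above, logError would become something like this
(a sketch, untested against your schema):

   CREATE OR REPLACE FUNCTION logerror(
      v_errormesg text, procedurename text, subscriberid bigint,
      companyid bigint, suppliersku text, loginname text)
     RETURNS void AS
   $$BEGIN
      INSERT INTO smerror_log
         (error_message, method_name, system_message, error_log_id,
          subscriber_id, company_id, creation_date, creation_user,
          update_date, update_user)
      VALUES
         (v_errormesg, procedurename, suppliersku,
          nextval('smerror_log_sequence'), subscriberid, companyid,
          LOCALTIMESTAMP, loginname, LOCALTIMESTAMP, 'V7Admin');
   END;$$ LANGUAGE plpgsql;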

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Code of Conduct: Is it time?

2016-01-11 Thread Albe Laurenz
Tom Lane wrote:
> In my admittedly-limited experience with dealing with such problems,
> it's a lot easier to achieve positive results if you can discuss
> issues in private, before people's positions harden.
> 
> In short, I wouldn't characterize that complainant as "a troll" for
> the substance of her complaint, but maybe so for the way in which
> she went about making it.  If we're to have a CoC, I'd really like
> it (and any associated enforcement mechanism) to be designed to
> discourage this sort of let's-begin-with-public-attacks approach to
> problem resolution.  How we get to that exactly, I don't know.

There's a time-tested idea in Mt 18, 15-17:

If your brother or sister sins, go and point out their fault, just
between the two of you.  If they listen to you, you have won them over.
But if they will not listen, take one or two others along, so that
'every matter may be established by the testimony of two or three witnesses.'
If they still refuse to listen, tell it to the church; and if they refuse
to listen even to the church, treat them as you would a pagan or a tax 
collector.

"Take one or two others along" could be a CC.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql INET select and default route ?

2015-12-13 Thread Albe Laurenz
Tim Smith wrote:
> Re: I am surprised that you don't expect "0.0.0.0/0" to be returned by the 
> first
> query if you expect it to be returned by the second.
> Is that an oversight?
> 
> Thanks for the question, but no, it wasn't an oversight; I am only
> looking for 0.0.0.0/0 to be returned if there is no more specific
> match.

I see, but then you'll have to use a different query:

SELECT a from test where '11.1.2.3' <<= a
   ORDER BY masklen(a) DESC LIMIT 1;
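
For example, with this (invented) table content:

   CREATE TABLE test (a inet);
   INSERT INTO test VALUES ('0.0.0.0/0'), ('11.1.0.0/16'), ('11.1.2.0/24');

   SELECT a FROM test WHERE '11.1.2.3' <<= a
      ORDER BY masklen(a) DESC LIMIT 1;   -- returns 11.1.2.0/24

Delete the two specific networks, and the same query falls back to 0.0.0.0/0.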

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


  1   2   3   4   5   6   7   8   9   10   >