Re: [HACKERS] standalone backend PANICs during recovery

2016-08-24 Thread Bernd Helmle


--On 20. August 2016 12:41:48 -0400 Tom Lane  wrote:

> So at this point I'm pretty baffled as to what the actual use-case is
> here.  I am tempted to say that a standalone backend should refuse to
> start at all if a recovery.conf file is present.  If we do want to
> allow the case, we need some careful thought about what it should do.

Well, the "use case" was a crashed hot standby at a customer site (it kept
PANICking after restarting), where I decided to have a look at the recovery
process with gdb. The exact PANIC was

2016-03-29 15:12:40 CEST [16097]: [26-1] user=,db=,xid=0,vxid=1/0 FATAL:
unexpected GIN leaf action: 0

I had the idea that using a single backend would be a quick and dirty way to
do that, and was surprised that this time it PANICked differently.

That said, I'm okay if --single is not intended to bring up a hot standby.
There are many other ways to debug such problems.

-- 
Thanks

Bernd




Re: [HACKERS] standalone backend PANICs during recovery

2016-09-16 Thread Bernd Helmle


--On 3 September 2016 02:05:00 -0400 Tom Lane wrote:

>> Well, the "use case" was a crashed hot standby at a customer site (it
>> kept PANICing after restarting), where i decided to have a look through
>> the recovery process with gdb. The exact PANIC was
> 
>> 2016-03-29 15:12:40 CEST [16097]: [26-1] user=,db=,xid=0,vxid=1/0 FATAL:
>> unexpected GIN leaf action: 0
> 
> BTW, that didn't happen to be on big-endian hardware did it?

You're right, this was RHEL7 on a POWER7 machine.

-- 
Thanks

Bernd




[HACKERS] recovery_min_apply-delay and remote_apply

2016-09-16 Thread Bernd Helmle
Current PostgreSQL Documentation on recovery.conf has this about
recovery_min_apply_delay[1]:

---<---

This parameter is intended for use with streaming replication deployments;
however, if the parameter is specified it will be honored in all cases.
Synchronous replication is not affected by this setting because there is
not yet any setting to request synchronous apply of transaction commits.

--->---

If I understand correctly, this is no longer true as of 9.6, where
remote_apply provides exactly the behavior the paragraph above claims does
not exist: any transaction executed with synchronous_commit=remote_apply
will wait at least recovery_min_apply_delay before finishing. Given that
synchronous_commit can be set by any user, this might be dangerous if
someone isn't careful enough.

I think we need a doc patch for that at least; see the attached patch
against master, but 9.6 should get a corrected one, too.

[1] 


-- 
Thanks

Bernd

recovery-config-doc.patch
Description: Binary data



[HACKERS] recovery_min_apply_delay vs. synchronous_commit

2016-09-18 Thread Bernd Helmle
Current PostgreSQL Documentation on recovery.conf has this about
recovery_min_apply_delay[1]:

---<---

This parameter is intended for use with streaming replication deployments;
however, if the parameter is specified it will be honored in all cases.
Synchronous replication is not affected by this setting because there is
not yet any setting to request synchronous apply of transaction commits.

--->---

If I understand correctly, this is no longer true as of 9.6, where
remote_apply provides exactly the behavior the paragraph above claims does
not exist: any transaction executed with synchronous_commit=remote_apply
will wait at least recovery_min_apply_delay before finishing. Given that
synchronous_commit can be set by any user, this might be dangerous if
someone isn't careful enough.
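To illustrate the interaction, here is a minimal sketch (the standby is
assumed to be listed in synchronous_standby_names; the table is made up):

-- assumed setting in recovery.conf on the 9.6 standby:
--   recovery_min_apply_delay = '30min'
-- on the primary, any ordinary user can then do:
SET synchronous_commit = remote_apply;
BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;  -- hypothetical table
COMMIT;  -- waits roughly 30 minutes, until the delayed standby has applied it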

I think we need a doc patch for that at least; see the attached patch
against master, but 9.6 should get a corrected one, too.

[1] <https://www.postgresql.org/docs/devel/static/standby-settings.html>

-- 
Kind regards

    Bernd Helmle


recovery-config-doc.patch
Description: Binary data



Re: [HACKERS] parallel worker (PID ) exited with exit code 1

2017-10-06 Thread Bernd Helmle
On Friday, 06.10.2017 at 17:49 +0530, tushar wrote:
> I got some few queries after running sqlsmith against PG HEAD , where
> i 
> am getting LOG message like - "parallel worker (PID) exited with
> exit 
> code 1"
> 
> set force_parallel_mode =1;
>   select
>pg_catalog.pg_wal_replay_pause() as c0,
>ref_0.ev_type as c1
>  from
>pg_catalog.pg_rewrite as ref_0
>  where ref_0.ev_enabled > ref_0.ev_type
>  limit 53;

Looks like pg_wal_replay_pause() is marked as parallel safe:

SELECT proparallel FROM pg_proc WHERE proname = 'pg_wal_replay_pause';
-[ RECORD 1 ]--
proparallel | s
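For context, the same check can be run for the related recovery-control
functions (a quick sketch; 's' = safe, 'r' = restricted, 'u' = unsafe):

SELECT proname, proparallel
FROM pg_catalog.pg_proc
WHERE proname IN ('pg_wal_replay_pause', 'pg_wal_replay_resume');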

Bernd





[HACKERS] Minor comment issue in receivelog.c

2017-11-02 Thread Bernd Helmle
Please find a minor comment fix for receivelog.c, HandleCopyStream().

The comment talks about a START_STREAMING command, but I think
START_REPLICATION is what's meant here.


Bernd
diff --git a/src/bin/pg_basebackup/receivelog.c b/src/bin/pg_basebackup/receivelog.c
index 888458f4a9..befbbb092b 100644
--- a/src/bin/pg_basebackup/receivelog.c
+++ b/src/bin/pg_basebackup/receivelog.c
@@ -763,7 +763,7 @@ ReadEndOfStreamingResult(PGresult *res, XLogRecPtr *startpos, uint32 *timeline)
 
 /*
  * The main loop of ReceiveXlogStream. Handles the COPY stream after
- * initiating streaming with the START_STREAMING command.
+ * initiating streaming with the START_REPLICATION command.
  *
  * If the COPY ends (not necessarily successfully) due a message from the
  * server, returns a PGresult and sets *stoppos to the last byte written.



Re: [HACKERS] LWLock optimization for multicore Power machines

2017-02-21 Thread Bernd Helmle
On Tuesday, 14.02.2017 at 15:53 +0300, Alexander Korotkov wrote:
> +1
> And you could try to use pg_wait_sampling
>  to sampling of wait
> events.

I've tried this with your example from your blog post[1] and got this:

(pgbench scale 1000)

pgbench -Mprepared -S -n -c 100 -j 100 -T 300 -P2 pgbench2

SELECT-only:

SELECT * FROM profile_log ;
 ts |  event_type   | event | count 
+---+---+---
 2017-02-21 15:21:52.45719  | LWLockNamed   | ProcArrayLock | 8
 2017-02-21 15:22:11.19594  | LWLockTranche | lock_manager  | 1
 2017-02-21 15:22:11.19594  | LWLockNamed   | ProcArrayLock |24
 2017-02-21 15:22:31.220803 | LWLockNamed   | ProcArrayLock | 1
 2017-02-21 15:23:01.255969 | LWLockNamed   | ProcArrayLock | 1
 2017-02-21 15:23:11.272254 | LWLockNamed   | ProcArrayLock | 2
 2017-02-21 15:23:41.313069 | LWLockNamed   | ProcArrayLock | 1
 2017-02-21 15:24:31.37512  | LWLockNamed   | ProcArrayLock |19
 2017-02-21 15:24:41.386974 | LWLockNamed   | ProcArrayLock | 1
 2017-02-21 15:26:41.530399 | LWLockNamed   | ProcArrayLock | 1
(10 rows)

Write-heavy pgbench runs have far more events logged; see the attached text
file. Maybe this is of interest...
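For anyone without pg_wait_sampling installed, a rough approximation is to
poll the wait-event columns of pg_stat_activity (available since 9.6), e.g.:

SELECT wait_event_type, wait_event, count(*)
FROM pg_stat_activity
WHERE wait_event IS NOT NULL
GROUP BY 1, 2
ORDER BY count(*) DESC;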


[1] http://akorotkov.github.io/blog/2016/03/25/wait_monitoring_9_6/

 ts |  event_type   |event | count 
+---+--+---
 2017-02-21 15:45:18.875842 | Lock  | tuple|18
 2017-02-21 15:45:18.875842 | LWLockTranche | lock_manager |35
 2017-02-21 15:45:18.875842 | LWLockTranche | buffer_mapping   | 1
 2017-02-21 15:45:18.875842 | LWLockTranche | buffer_content   |   402
 2017-02-21 15:45:18.875842 | LWLockTranche | wal_insert   |  1880
 2017-02-21 15:45:18.875842 | Lock  | transactionid|   716
 2017-02-21 15:45:18.875842 | LWLockNamed   | ProcArrayLock|   644
 2017-02-21 15:45:18.875842 | LWLockNamed   | CLogControlLock  |   255
 2017-02-21 15:45:18.875842 | LWLockNamed   | XidGenLock   |   104
 2017-02-21 15:45:18.875842 | Lock  | extend   |   647
 2017-02-21 15:45:28.889785 | Lock  | tuple|65
 2017-02-21 15:45:28.889785 | LWLockTranche | lock_manager |96
 2017-02-21 15:45:28.889785 | LWLockTranche | buffer_content   |   811
 2017-02-21 15:45:28.889785 | LWLockTranche | wal_insert   |  2542
 2017-02-21 15:45:28.889785 | Lock  | transactionid|  1686
 2017-02-21 15:45:28.889785 | LWLockNamed   | ProcArrayLock|  1194
 2017-02-21 15:45:28.889785 | LWLockNamed   | CLogControlLock  |  1244
 2017-02-21 15:45:28.889785 | LWLockNamed   | XidGenLock   |   401
 2017-02-21 15:45:28.889785 | Lock  | extend   |   818
 2017-02-21 15:45:38.904761 | Lock  | tuple|54
 2017-02-21 15:45:38.904761 | LWLockTranche | lock_manager |87
 2017-02-21 15:45:38.904761 | LWLockTranche | buffer_content   |   756
 2017-02-21 15:45:38.904761 | LWLockTranche | wal_insert   |  2161
 2017-02-21 15:45:38.904761 | Lock  | transactionid|  1624
 2017-02-21 15:45:38.904761 | LWLockNamed   | ProcArrayLock|  1154
 2017-02-21 15:45:38.904761 | LWLockNamed   | CLogControlLock  |  1418
 2017-02-21 15:45:38.904761 | LWLockNamed   | XidGenLock   |   450
 2017-02-21 15:45:38.904761 | Lock  | extend   |   650
 2017-02-21 15:45:48.917774 | Lock  | tuple|51
 2017-02-21 15:45:48.917774 | LWLockTranche | lock_manager |85
 2017-02-21 15:45:48.917774 | LWLockTranche | buffer_content   |   947
 2017-02-21 15:45:48.917774 | LWLockTranche | wal_insert   |  2050
 2017-02-21 15:45:48.917774 | Lock  | transactionid|  1625
 2017-02-21 15:45:48.917774 | LWLockNamed   | ProcArrayLock|  1094
 2017-02-21 15:45:48.917774 | LWLockNamed   | CLogControlLock  |  2575
 2017-02-21 15:45:48.917774 | LWLockNamed   | XidGenLock   |   372
 2017-02-21 15:45:48.917774 | Lock  | extend   |   528
 2017-02-21 15:45:58.930732 | Lock  | tuple|63
 2017-02-21 15:45:58.930732 | LWLockTranche | lock_manager |86
 2017-02-21 15:45:58.930732 | LWLockTranche | buffer_content   |  1026
 2017-02-21 15:45:58.930732 | LWLockTranche | wal_insert   |  1543
 2017-02-21 15:45:58.930732 | Lock  | transactionid|  1794
 2017-02-21 15:45:58.930732 | LWLockNamed   | ProcArrayLock|  1154
 2017-02-21 15:45:58.930732 | LWLockNamed   | CLogControlLock  |  3850
 2017-02-21 15:45:58.930732 | LWLockNamed   | XidGenLock   |   315
 2017-02-21 15:45:58.930732 | Lock  | extend   |   

Re: [HACKERS] [patch] reorder tablespaces in basebackup tar stream for backup_label

2017-02-22 Thread Bernd Helmle
On Tuesday, 21.02.2017 at 11:17 +0100, Michael Banck wrote:
> However, third party tools using the BASE_BACKUP command might want
> to
> extract the backup_label, e.g. in order to figure out the START WAL
> LOCATION. If they make a big tarball for the whole cluster
> potentially
> including all external tablespaces, then the backup_label file is
> somewhere in the middle of it and it takes a long time for tar to
> extract it.
> 
> So I am proposing the attached patch, which sends the base tablespace
> first, and then all the other external tablespaces afterwards, thus
> having base_backup be the first file in the tar in all cases. Does
> anybody see a problem with that?
> 
> 
> Michael
> 
> [1] Chapter 52.3 of the documentation says "one or more CopyResponse
> results will be sent, one for the main data directory and one for
> each
> additional tablespace other than pg_default and pg_global.", which
> makes
> it sound like the main data directory is first, but in my testing,
> this
> is not the case.

The comment in the code explicitly says to add the base directory to
the end of the list; I'm not sure whether that is for a particular reason.

I'd say this is an oversight in the implementation. I'm currently
working on a tool using the streaming protocol directly, and I'd
understood it exactly that way: the default tablespace is the first
one in the stream.

So +1 for the patch.




[HACKERS] Make subquery alias optional in FROM clause

2017-02-22 Thread Bernd Helmle
From time to time, especially during migration projects from Oracle to
PostgreSQL, I'm faced with people questioning why the alias in the FROM
clause for subqueries in PostgreSQL is mandatory. The default answer
here is that the SQL standard requires it.

This also is exactly the comment in our parser about this topic:

/*
 * The SQL spec does not permit a subselect
 * () without an alias clause,
 * so we don't either.  This avoids the problem
 * of needing to invent a unique refname for it.
 * That could be surmounted if there's sufficient
 * popular demand, but for now let's just implement
 * the spec and see if anyone complains.
 * However, it does seem like a good idea to emit
 * an error message that's better than "syntax error".
 */

So I thought I'd stand up and vote to relax this, making the alias
optional.

The main problem, as mentioned in the parser's comment, is inventing
machinery to create a unique alias for each subquery/VALUES expression
in the FROM clause. I pondered it a little and came up with the attached
result.

The patch generates an auto-alias in the format *SUBQUERY_n* for
subqueries and *VALUES_n* for VALUES expressions, where n is the range
table index it gets during transformRangeSubselect().

Doc patch and simple regression tests included.
diff --git a/doc/src/sgml/ref/select.sgml b/doc/src/sgml/ref/select.sgml
index 211e4c3..f2d21aa 100644
--- a/doc/src/sgml/ref/select.sgml
+++ b/doc/src/sgml/ref/select.sgml
@@ -51,7 +51,7 @@ SELECT [ ALL | DISTINCT [ ON ( expressiontable_name [ * ] [ [ AS ] alias [ ( column_alias [, ...] ) ] ]
 [ TABLESAMPLE sampling_method ( argument [, ...] ) [ REPEATABLE ( seed ) ] ]
-[ LATERAL ] ( select ) [ AS ] alias [ ( column_alias [, ...] ) ]
+[ LATERAL ] ( select ) [ [ AS ] alias [ ( column_alias [, ...] ) ] ]
 with_query_name [ [ AS ] alias [ ( column_alias [, ...] ) ] ]
 [ LATERAL ] function_name ( [ argument [, ...] ] )
 [ WITH ORDINALITY ] [ [ AS ] alias [ ( column_alias [, ...] ) ] ]
@@ -408,8 +408,7 @@ TABLE [ ONLY ] table_name [ * ]
 output were created as a temporary table for the duration of
 this single SELECT command.  Note that the
 sub-SELECT must be surrounded by
-parentheses, and an alias must be
-provided for it.  A
+parentheses.  An optional alias can be used to name the subquery.  A
  command
 can also be used here.

@@ -1891,6 +1890,31 @@ SELECT distributors.* WHERE distributors.name = 'Westward';
   
 
   
+   Optional subquery alias names in FROM clauses
+
+   
+PostgreSQL allows one to omit
+alias names for subqueries used in FROM-clauses, which
+are required in the SQL standard. Thus, the following SQL is valid in PostgreSQL:
+
+SELECT * FROM (VALUES(1), (2), (3));
+ column1 
+-
+   1
+   2
+   3
+(3 rows)
+
+SELECT * FROM (SELECT 1, 2, 3);
+ ?column? | ?column? | ?column? 
+--+--+--
+1 |2 |3
+(1 row)
+
+   
+  
+
+  
ONLY and Inheritance
 

diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 6c6d21b..865b3ce 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -11302,30 +11302,13 @@ table_ref:	relation_expr opt_alias_clause
 	/*
 	 * The SQL spec does not permit a subselect
 	 * () without an alias clause,
-	 * so we don't either.  This avoids the problem
-	 * of needing to invent a unique refname for it.
-	 * That could be surmounted if there's sufficient
-	 * popular demand, but for now let's just implement
-	 * the spec and see if anyone complains.
-	 * However, it does seem like a good idea to emit
-	 * an error message that's better than "syntax error".
+	 * but PostgreSQL isn't that strict here. We
+	 * provide an unique, auto-generated alias
+	 * name instead, which will be done through
+	 * the transform/analyze phase later. See
+	 * parse_clause.c, transformRangeSubselect() for
+	 * details.
 	 */
-	if ($2 == NULL)
-	{
-		if (IsA($1, SelectStmt) &&
-			((SelectStmt *) $1)->valuesLists)
-			ereport(ERROR,
-	(errcode(ERRCODE_SYNTAX_ERROR),
-	 errmsg("VALUES in FROM must have an alias"),
-	 errhint("For example, FROM (VALUES ...) [AS] foo."),
-	 parser_errposition(@1)));
-		else
-			ereport(ERROR,
-	(errcode(ERRCODE_SYNTAX_ERROR),
-	 errmsg("subquery in FROM must have an alias"),
-	 errhint("For example, FROM (SELECT ...) [AS] foo."),
-	 parser_errposition(@1)));
-	}
 	$$ = (Node *) n;
 }
 			| LATERAL_P select_with_parens opt_alias_clause
@@ -11335,22 +11318,6 @@ table_ref:	relation_expr opt_alias_clause
 	n->subquery = $2;
 	n->alias = $3;
 	/* same comment as above */
-	if ($3 == NULL)
-	{
-		if (IsA($2, SelectStmt) &&
-			((SelectStmt *) $2)->va

Re: [HACKERS] Make subquery alias optional in FROM clause

2017-02-22 Thread Bernd Helmle
On Wed, 2017-02-22 at 10:08 -0500, Tom Lane wrote:
> 
> Indeed.  When I wrote the comment you're referring to, quite a few
> years
> ago now, I thought that popular demand might force us to allow
> omitted
> aliases.  But the demand never materialized.  At this point it seems
> clear to me that there isn't really good reason to exceed the spec
> here.
> It just encourages people to write unportable SQL code.
> 

Years ago I didn't hear anything about it either. But in the last few
months I've heard such a demand several times, so I thought we should
give it another try.

> > The patch generates an auto-alias for subqueries in the format
> > *SUBQUERY_* for subqueries and *VALUES_* for values
> > expressions.  is the range table index it gets during
> > transformRangeSubselect().
> 
> This is not a solution, because it does nothing to avoid conflicts
> with
> table names elsewhere in the FROM clause.  If we were going to relax
> this
> --- which, I repeat, I'm against --- we'd have to come up with
> something
> that would thumb through the whole query and make sure what it was
> generating didn't already appear somewhere else.  

I've thought about this already. One thing that came to mind was to
maintain a lookup list of alias names during the transform phase and
throw an ereport as soon as the generated string duplicates one of them.
I'm not sure about the details, but I was worried about the performance
impact in this area...

> Or else not generate
> a name at all, in which case there simply wouldn't be a way to refer
> to
> the subquery by name; I'm not sure what that might break though.
> 

Hmm, maybe that's an option. Though I think parts of the code aren't
prepared to deal with empty (or even NULL) aliases; that's likely much
more invasive.






Re: [HACKERS] Make subquery alias optional in FROM clause

2017-02-22 Thread Bernd Helmle
On Wed, 2017-02-22 at 08:13 -0700, David G. Johnston wrote:
> I'll contribute to the popular demand aspect but given that the error
> is
> good and the fix is very simple its not exactly a strong desire.

In one project I've seen recently, they have had to maintain an
application twice for years, one version for Oracle and the other for
Postgres. To be honest, subqueries aren't the only problem, but having
this solved in the backend itself would help decrease the maintenance
effort in such projects.

I thought this would be another way to lessen the migration pain
without being too invasive, given that there had already been some
thoughts about relaxing alias usage.





Re: [HACKERS] Make subquery alias optional in FROM clause

2017-02-23 Thread Bernd Helmle
On Wednesday, 22.02.2017 at 22:17 -0500, Tom Lane wrote:
> [ shrug... ]  Well, I won't resist this hard as long as it's done
> competently, which to me means "the subquery name doesn't conflict
> with
> anything else".  Not "it doesn't conflict unless you're unlucky
> enough
> to have used the same name elsewhere".  There are a couple ways we
> could
> achieve that result, but the submitted patch fails to.

Right, I'm going to give it a try then. Currently I see these options:

* Validate any generated alias against a list of explicit alias names.

This means we have to collect explicit alias names in, say, a hash table,
and validate a generated name against potential collisions and retry.
Or better, generate the name in a way that doesn't collide with this
list in the first place.

* Don't force/generate an alias at all.

I have no idea how to do this yet, and Tom was already concerned about
what it might break. There are several places in the transform phase
where the refnames are required (e.g. isLockedRefname()).


Thanks

Bernd




Re: [HACKERS] [patch] reorder tablespaces in basebackup tar stream for backup_label

2017-03-03 Thread Bernd Helmle
On Friday, 03.03.2017 at 15:44 +0900, Michael Paquier wrote:
> So, the main directory is located at the end on purpose. When using
> --wal-method=fetch the WAL segments are part of the main tarball, so
> if you send the main tarball first you would need to generate a
> second
> tarball with the WAL segments that have been generated between the
> moment the main tarball has finished until the end of the last
> tablespace taken if you want to have a consistent backup. Your patch
> would work with the stream mode though.

Ah right, I assumed there must be something, otherwise the comment
wouldn't be there ;)

We could special-case that part to distinguish fetch/stream mode, but I
fear that leads to more confusion than it resolves. The other option of
a separate tar file looks awkward from a usability point of view.




Re: [HACKERS] Clarification of FDW API Documentation

2014-06-18 Thread Bernd Helmle



--On 13 June 2014 13:46:38 -0400 Tom Lane wrote:


Imagine if `BeginForeignScan` set up a remote cursor and
`IterateForeignScan` just fetched _one tuple at a time_ (unlike the
current behavior where they are fetched in batches). The tuple would be
passed to `ExecForeignDelete` (as is required), but the remote cursor
would remain pointing at that tuple. Couldn't `ExecForeignDelete` just
call `DELETE FROM table WHERE CURRENT OF cursor` to then delete that
tuple?


No.  This is not guaranteed (or even likely) to work in join cases: the
tuple to be updated/deleted might no longer be the current one of the
scan. You *must* arrange for the scan to return enough information to
uniquely identify the tuple later, and that generally means adding some
resjunk columns.


Yeah, this is exactly the trap I ran into while implementing the
informix_fdw driver. It initially used an updatable cursor to implement
the modify actions, as proposed above. Consider a query like

UPDATE remote SET f1 = t.id FROM local t WHERE t.id = f1

The planner might choose a hash join where the hash table is built by
running the cursor forward via the foreign scan. You'll end up with the
cursor positioned at the end, with no way to get it back "in sync" when
the modify action is actually called.
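As a sketch of the failure mode ("remote" being the foreign table, "local"
a plain table, both hypothetical):

EXPLAIN (COSTS OFF)
UPDATE remote SET f1 = t.id FROM local t WHERE t.id = remote.f1;
-- If the planner hashes the output of the Foreign Scan on "remote", the
-- remote cursor has already been run to completion by the time
-- ExecForeignUpdate fires, so WHERE CURRENT OF can no longer find the row.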


--
Thanks

Bernd




Re: [HACKERS] SSL information view

2014-07-21 Thread Bernd Helmle



--On 12 July 2014 15:08:01 +0200 Magnus Hagander
wrote:



Before doing that, however, I'd like to ask for opinions :) The hack
currently exposes a separate view that you can join to
pg_stat_activity (or pg_stat_replication) on the pid -- this is sort
of the same way that pg_stat_replication works in the first place. Do
we want something similar to that for a builtin SSL view as well, or
do we want to include the fields directly in pg_stat_activity and
pg_stat_replication?


I've heard more than once the wish to get this information without
contrib, especially the SSL version used (client and server alike).
So ++1 for this feature.

I'd vote for a separate view; that keeps the information in a single
place, and one can easily join extra information to it.
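A separate view of this kind later appeared in 9.5 as pg_stat_ssl;
assuming its columns, the join would look roughly like:

SELECT a.pid, a.usename, s.ssl, s.version, s.cipher
FROM pg_stat_activity a
JOIN pg_stat_ssl s USING (pid);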


--
Thanks

Bernd




[HACKERS] 9.3.9 and pg_multixact corruption

2015-09-10 Thread Bernd Helmle
A customer had a severe issue with a PostgreSQL 9.3.9/sparc64/Solaris 11
instance.

The database crashed with the following log messages:

2015-09-08 00:49:16 CEST [2912] PANIC:  could not access status of
transaction 1068235595
2015-09-08 00:49:16 CEST [2912] DETAIL:  Could not open file
"pg_multixact/members/5FC4": No such file or directory.
2015-09-08 00:49:16 CEST [2912] STATEMENT:  delete from StockTransfer
where oid = $1 and tanum = $2 

When they called us later, it turned out that the crash happened during a
base backup, leaving a backup_label behind that prevented the database
from coming up again due to an invalid checkpoint location. However,
removing the backup_label still didn't let the database get through
recovery; it failed again with the former error, this time during recovery:

2015-09-08 11:40:04 CEST [27047] LOG:  database system was interrupted
while in recovery at 2015-09-08 11:19:44 CEST
2015-09-08 11:40:04 CEST [27047] HINT:  This probably means that some data
is corrupted and you will have to use the last backup for recovery.
2015-09-08 11:40:04 CEST [27047] LOG:  database system was not properly
shut down; automatic recovery in progress
2015-09-08 11:40:05 CEST [27047] LOG:  redo starts at 1A52/2313FEF8
2015-09-08 11:40:47 CEST [27082] FATAL:  the database system is starting up
2015-09-08 11:40:59 CEST [27047] FATAL:  could not access status of
transaction 1068235595
2015-09-08 11:40:59 CEST [27047] DETAIL:  Could not seek in file
"pg_multixact/members/5FC4" to offset 4294950912: Invalid argument.
2015-09-08 11:40:59 CEST [27047] CONTEXT:  xlog redo create mxid 1068235595
offset 2147483648 nmembers 2: 2896635220 (upd) 2896635510 (keysh) 
2015-09-08 11:40:59 CEST [27045] LOG:  startup process (PID 27047) exited
with exit code 1
2015-09-08 11:40:59 CEST [27045] LOG:  aborting startup due to startup
process failure

Some side notes:

An additional recovery from a base backup plus archive recovery yielded
the same error as soon as the affected tuple was touched with a DELETE.
The affected table was fully dumpable via pg_dump, though.

We also have a core dump, but no direct access to the machine. If more
information is required (and I believe it is), let me know where to dig
deeper. I would also like to provide a backtrace from the existing core
dump, but in the absence of a sparc64 machine here we need to ask the
customer to get one.

-- 
Thanks

Bernd




Re: [HACKERS] 9.3.9 and pg_multixact corruption

2015-09-11 Thread Bernd Helmle
...and here it is ;)

--On 10. September 2015 19:45:46 -0300 Alvaro Herrera
 wrote:

> Anyway, can you please request pg_controldata to be run on the failed
> cluster and paste it here?

pg_control version number:            937
Catalog version number:   201306121
Database system identifier:   5995776571405068134
Database cluster state:   in archive recovery
pg_control last modified: Di 08 Sep 2015 14:58:36 CEST
Latest checkpoint location:   1A52/3CFAF758
Prior checkpoint location:1A52/3CFAF758
Latest checkpoint's REDO location:1A52/2313FEF8
Latest checkpoint's REDO WAL file:00011A520023
Latest checkpoint's TimeLineID:   1
Latest checkpoint's PrevTimeLineID:   1
Latest checkpoint's full_page_writes: on
Latest checkpoint's NextXID:  0/2896610102
Latest checkpoint's NextOID:  261892
Latest checkpoint's NextMultiXactId:  1068223816
Latest checkpoint's NextMultiOffset:  2147460090
Latest checkpoint's oldestXID:2693040605
Latest checkpoint's oldestXID's DB:   16400
Latest checkpoint's oldestActiveXID:  0
Latest checkpoint's oldestMultiXid:   1012219584
Latest checkpoint's oldestMulti's DB: 16400
Time of latest checkpoint:            Di 08 Sep 2015 00:47:01 CEST
Fake LSN counter for unlogged rels:   0/1
Minimum recovery ending location: 1A52/2313FEF8
Min recovery ending loc's timeline:   1
Backup start location:1A52/2313FEF8
Backup end location:  0/0
End-of-backup record required:no
Current wal_level setting:            archive
Current max_connections setting:  500
Current max_prepared_xacts setting:   0
Current max_locks_per_xact setting:   64
Maximum data alignment:   8
Database block size:  8192
Blocks per segment of large relation: 131072
WAL block size:   8192
Bytes per WAL segment:16777216
Maximum length of identifiers:64
Maximum columns in an index:  32
Maximum size of a TOAST chunk:1996
Date/time type storage:   64-bit integers
Float4 argument passing:  by value
Float8 argument passing:  by value
Data page checksum version:   0

-- 

Bernd





Re: [HACKERS] Re: [COMMITTERS] pgsql: Add a GUC to report whether data page checksums are enabled.

2014-02-19 Thread Bernd Helmle



--On 18 February 2014 22:23:59 +0200 Heikki Linnakangas
wrote:



I considered it a new feature, so not back-patching was the default. If
you want to back-patch it, I won't object.


That was my original feeling, too, but +1 for backpatching.

--
Thanks

Bernd




Re: [HACKERS] Selecting large tables gets killed

2014-02-20 Thread Bernd Helmle



--On 20 February 2014 14:49:28 +0530 Ashutosh Bapat
 wrote:



If I set some positive value for this variable, psql runs smoothly with
any size of data. But unset that variable, and it gets killed. But it's
nowhere written explicitly that psql can run out of memory while
collecting the result set. Either the documentation or the behaviour
should be modified.


Maybe somewhere in the future we should consider single-row mode for psql,
see

However, I think nobody has tackled this yet, AFAIR.

--
Thanks

Bernd




Re: [HACKERS] Selecting large tables gets killed

2014-02-20 Thread Bernd Helmle



--On 20 February 2014 09:51:47 -0500 Tom Lane wrote:


Yeah.  The other reason that you can't just transparently change the
behavior is error handling: people are used to seeing either all or
none of the output of a query.  In single-row mode that guarantee
fails, since some rows might get output before the server detects
an error.


That's true. I'd never envisioned doing this transparently either, for
exactly this reason. However, I find having single-row mode somewhere has
some attractiveness, be it only to have some code around that shows how to
do it right. But I fear we might complicate things in psql beyond what we
really want.


--
Thanks

Bernd




[HACKERS] GUC for data checksums

2013-09-14 Thread Bernd Helmle
Attached is a small patch to add a new GUC to report whether data checksums
for a particular cluster are enabled. The only way to get this info AFAIK
is to look into pg_control and the version number used, but I'd welcome a
way to access this remotely, too. If there aren't any objections I'll add
this to the CF.
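As a usage sketch, assuming the GUC is named data_checksums (as it
eventually was in core):

SHOW data_checksums;
-- or, for remote/programmatic access:
SELECT current_setting('data_checksums');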


--
Thanks

Bernd

data_checksums_guc.patch
Description: Binary data



Re: [HACKERS] GUC for data checksums

2013-09-14 Thread Bernd Helmle



--On 15. September 2013 00:25:34 +0200 Andres Freund 
 wrote:



Looks like a good idea to me. The implementation looks sane as well,
except that I am not sure if we really need to introduce that faux
variable. If the variable cannot be set and we have a SHOW hook, do we
need it?


It's along the lines of the other informational variables like block_size
et al. Do you want a function instead, or what's your intention?

One benefit is having them all in SHOW ALL, which can be used to compare
database/cluster settings, to mention one use case I have in mind.


--
Thanks

Bernd




Re: [HACKERS] psql should show disabled internal triggers

2013-09-18 Thread Bernd Helmle



--On 18. September 2013 13:52:29 +0200 Andres Freund 
 wrote:



If you do ALTER TABLE ... DISABLE TRIGGER ALL; and then individually
re-enable the disabled triggers it's easy to miss internal triggers.
A \d+ tablename will not show anything out of the ordinary for that
situation since we don't show internal triggers. But foreign key checks
won't work.
So, how about displaying disabled internal triggers in psql?


I had exactly the same concerns this morning while starting to look at the
ENABLE/DISABLE constraint patch. However, I wouldn't display them as
triggers, but maybe more generally as "disabled constraints" or some such.
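A catalog query in that spirit, to spot internal (e.g. foreign key)
triggers that are currently disabled -- a sketch, not part of the patch
discussed here:

SELECT tgrelid::regclass AS rel, tgname, tgenabled
FROM pg_trigger
WHERE tgisinternal AND tgenabled = 'D';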


--
Thanks

Bernd




Re: [HACKERS] psql should show disabled internal triggers

2013-09-18 Thread Bernd Helmle



--On 18. September 2013 15:19:27 +0200 Andres Freund 
 wrote:



Well, that will lead the user in the wrong direction, won't it? They
haven't disabled the constraint but the trigger. Especially as we
already have NOT VALID and might grow DISABLED for constraint
themselves...



Valid point. But it is also nice to know in detail which constraints
stopped working. OK, it is documented which constraints are affected, and
maybe I'm lost in too much detail at the moment, but I find people sometimes
get confused about this internal trigger thing. Won't they get confused
about a suddenly appearing RI_ConstraintTrigger_a_54015, too?


--
Thanks

Bernd




Re: [HACKERS] ENABLE/DISABLE CONSTRAINT NAME

2013-09-24 Thread Bernd Helmle



--On 13. September 2013 20:17:19 -0400 Robert Haas  
wrote:



You're missing the point.  Peter wasn't worried that your patch throws
an error; he's concerned about the fact that it doesn't.

In PostgreSQL, you can only create the following view because test1
has a primary key over column a:

=> create table test1 (a int constraint pk primary key, b text);
=> create view test2 as select a, b from test1 group by a;
=> alter table test1 drop constraint pk;

The reason that, if the primary key weren't there, it would be
ambiguous which row should be returned as among multiple values where
a is equal and b is not.  If you can disable the constraint, then you
can create precisely that problem.


Hmm, I'm not sure I understand this argument either: this patch doesn't
allow disabling a primary key. It only supports FKs and CHECK constraints
explicitly.


--
Thanks

Bernd




Re: [HACKERS] Wait free LW_SHARED acquisition

2013-09-30 Thread Bernd Helmle



--On 27. September 2013 09:57:07 +0200 Andres Freund 
 wrote:



Ok, was free:

padding + 16 partitions:
tps = 147884.648416

padding + 32 partitions:
tps = 141777.841125

padding + 64 partitions:
tps = 141561.539790

padding + 16 partitions + new lwlocks
tps = 601895.580903 (yeha, still reproduces after some sleep!)


Hmm, out of interest, and since I have access to a (currently idle) DL580
G7 (4x E7-4800, 10 cores each), I've tried your benchmark against this
machine and got this (best of three):


HEAD (default):

tps = 181738.607247 (including connections establishing)
tps = 182665.993063 (excluding connections establishing)

HEAD (padding + 16 partitions + your lwlocks patch applied):

tps = 269328.259833 (including connections establishing)
tps = 270685.666091 (excluding connections establishing)

So, still an improvement, but far from what you got. Do you have some
other tweaks in your setup?


--
Thanks

Bernd




Re: [HACKERS] Wait free LW_SHARED acquisition

2013-09-30 Thread Bernd Helmle



--On 30. September 2013 19:00:06 +0200 Andres Freund 
 wrote:



HEAD (default):

tps = 181738.607247 (including connections establishing)
tps = 182665.993063 (excluding connections establishing)

HEAD (padding + 16 partitions + your lwlocks patch applied):

tps = 269328.259833 (including connections establishing)
tps = 270685.666091 (excluding connections establishing)

So, still an improvement but far away from what you got. Do you have some
other tweaks in your setup?


The only relevant setting changed was -c shared_buffers=1GB, no other
patches applied. At which scale did you pgbench -i?


I've used a scale factor of 10 (I recall you mentioned using the same
upthread...).

Okay, I had used 2GB shared_buffers; repeating with your setting I get a
far more noticeable speedup:


tps = 346292.008580 (including connections establishing)
tps = 347997.073595 (excluding connections establishing)

Here's the perf output:

+   4.34%  207112  postgres  postgres [.] 
AllocSetAlloc

+   4.07%  194476  postgres  libc-2.13.so [.] 0x127b33
+   2.59%  123471  postgres  postgres [.] 
SearchCatCache

+   2.49%  118974   pgbench  libc-2.13.so [.] 0x11aaef
+   2.48%  118263  postgres  postgres [.] 
GetSnapshotData
+   2.46%  117646  postgres  postgres [.] 
base_yyparse
+   2.02%   96546  postgres  postgres [.] 
MemoryContextAllocZeroAligned
+   1.58%   75326  postgres  postgres [.] 
AllocSetFreeIndex
+   1.23%   58587  postgres  postgres [.] 
hash_search_with_hash_value

+   1.01%   48391  postgres  postgres [.] palloc
+   0.93%   44258  postgres  postgres [.] 
LWLockAttemptLock

+   0.91%   43575   pgbench  libc-2.13.so [.] free
+   0.89%   42484  postgres  postgres [.] 
nocachegetattr

+   0.89%   42378  postgres  postgres [.] core_yylex
+   0.88%   42001  postgres  postgres [.] 
_bt_compare
+   0.84%   39997  postgres  postgres [.] 
expression_tree_walker
+   0.76%   36533  postgres  postgres [.] 
ScanKeywordLookup

+   0.74%   35515   pgbench  libc-2.13.so [.] malloc
+   0.64%   30715  postgres  postgres [.] 
LWLockRelease
+   0.56%   26779  postgres  postgres [.] 
fmgr_isbuiltin

+   0.54%   25681   pgbench  [kernel.kallsyms][k] _spin_lock
+   0.48%   22836  postgres  postgres [.] new_list
+   0.48%   22700  postgres  postgres [.] hash_any
+   0.47%   22378  postgres  postgres [.] 
FunctionCall2Coll

+   0.46%   22095  postgres  postgres [.] pfree
+   0.44%   20929  postgres  postgres [.] palloc0
+   0.43%   20592  postgres  postgres [.] 
AllocSetFree

+   0.40%   19495  postgres  [unknown][.] 0x81cf2f
+   0.40%   19247  postgres  postgres [.] 
hash_uint32

+   0.38%   18227  postgres  postgres [.] PinBuffer
+   0.38%   18022   pgbench  [kernel.kallsyms][k] do_select

--
Thanks

Bernd




Re: [HACKERS] btreecheck extension

2015-10-07 Thread Bernd Helmle


--On 16 June 2014 18:47:30 -0700 Peter Geoghegan wrote:

> Attached prototype patch adds contrib extension, btreecheck. This
> extension provides SQL-callable functions for checking these
> conditions on nbtree indexes on live systems.

What's the current state of this module? I see you are interested in stress
testing, but I'm not sure how far this all has gone.

This tool actually did a very good job in identifying index
corruption due to collation issues[1].

[1]


-- 
Thanks

Bernd




[HACKERS] Duplicated assignment of slot_name in walsender.c

2015-10-20 Thread Bernd Helmle
walsender.c, CreateReplicationSlot() currently has this:

slot_name = NameStr(MyReplicationSlot->data.name);

if (cmd->kind == REPLICATION_KIND_LOGICAL)
{
[...]
}
else if (cmd->kind == REPLICATION_KIND_PHYSICAL && cmd->reserve_wal)
{
[...]
}

slot_name = NameStr(MyReplicationSlot->data.name);

The 2nd assignment to slot_name looks unnecessary?

-- 
Thanks

Bernd




Re: [HACKERS] Avoid full page images in streaming replication?

2015-10-26 Thread Bernd Helmle


--On 22 October 2015 22:23:58 -0300 Alvaro Herrera
 wrote:

>> You can? The xlog format between 9.4 and 9.5 changed, so I can't see how
>> that'd work?
> 
> Oh, crap.  Must have been some other cross-version trial run I did,
> then.  I would hope it's at least not terribly difficult to back-patch
> that commit locally, anyway.

You might want to try 



Adrian created it basically to track down archive replay performance issues
a while ago. Maybe it's useful somehow.

-- 
Thanks

Bernd




Re: [HACKERS] Avoid full page images in streaming replication?

2015-10-26 Thread Bernd Helmle


--On 23 October 2015 00:03:30 +0200 Andres Freund
wrote:

> 
> Note that FPIs are often pretty good for replay performance, avoiding
> lots of synchronous random reads.

That's a very important argument, I think. The difference can be
significant, even with decent storage, rendering a replica unusable. We had
examples in the past where only optimizing aggressive writes of FPIs led
to an acceptable lag on the replica.

-- 
Thanks

Bernd




Re: [HACKERS] Disabling START TRANSACTION for a SuperUser

2015-10-29 Thread Bernd Helmle


--On 27 October 2015 14:07:06 + Kevin Grittner
wrote:

>  It would be a boon to big shops if they could
> declare (preferably with the option to set it at a role level) that
> specific default_transaction_* settings could not be overridden.

A while ago I was faced with exactly the same problem. Thinking about it
again, I think this also applies to various other parameters a DBA might
want to restrict for their roles, e.g. resource consumption limits
(work_mem, ...), session constraints like the discussed transaction modes,
or even disallowing changes to application_name.

AFAICR, Oracle has CREATE PROFILE, which is primarily intended to add
resource or password restrictions to users. Maybe this can be seen as a
blueprint for introducing the concept of GUC profiles to Postgres, where a
set of restrictions on the allowed values for certain GUCs can be attached
to roles. That is, for sure, a huge project.

Another idea (maybe not as invasive as the profile idea) might be to just
introduce a concept of "read only" GUCs. A role would get a list of GUCs
which it is not allowed to change, given with ALTER ROLE ... SET (maybe
restricted to PGC_USERSET). That could be stored the same way as
pg_db_role_settings. However, I haven't checked how complicated this would
be to incorporate into the GUC assign hooks, but maybe it's doable somehow.
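For reference, the existing per-role machinery such a feature would build
on (the role name is made up):

ALTER ROLE app_user SET work_mem = '32MB';  -- hypothetical role
SELECT * FROM pg_db_role_settings;          -- where per-role settings are stored today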

-- 
Thanks

Bernd




Re: [HACKERS] pg_stats_recovery view

2012-01-26 Thread Bernd Helmle



--On 15 January 2012 02:50:00 -0500 Jaime Casanova
wrote:



Attached is a patch thats implements a pg_stat_recovery view that
keeps counters about processed wal records. I just notice that it
still lacks documentation but i will add it during the week.


Hi Jaime,

Do you have an updated patch? The current v1 patch doesn't apply cleanly
anymore, and before I go and rebase it I thought I'd ask...


--
Thanks

Bernd



Re: [HACKERS] pg_stats_recovery view

2012-02-02 Thread Bernd Helmle



--On 2 February 2012 17:12:11 +0900 Fujii Masao wrote:


If only core developer is interested in this view, ISTM that short
description for
each WAL record is not required because he or she can know the meaning of each
WAL record by reading the source code. No? Adding short descriptions for every
WAL records seems to be overkill.


Yes, for a developer option alone, adding all these *_short_desc functions
looks like too much code for too little benefit. However, if someone with
less code affinity is interested in debugging their server during recovery,
it might be easier for them to interpret the statistics counters.

Unfortunately I didn't manage to do it this week, but what I'm also
interested in is how large the performance regression is when the
track_recovery variable is activated. I'm not sure whether it really makes
a big difference, but maybe debugging recovery from a large archive could
slow down to a degree where you want the GUC but can't afford it?

And, for display purposes, if this is intended for developers only,
shouldn't it be treated like all the other debug options, as a
DEVELOPER_OPTION, too?


--
Thanks

Bernd



Re: [HACKERS] Patch pg_is_in_backup()

2012-02-02 Thread Bernd Helmle



--On 2 February 2012 18:47:35 +0100 Gilles Darold
wrote:



For now the internal function BackupInProgress() only return a boolean.
I like the idea that pg_is_in_backup() just works as pg_is_in_recovery()
do. I mean it doesn't return the timestamp of the recovery starting time
but only true or false.

Maybe we can have an other function that will return all information
available in the backup_label file ?


I've seen customers using pg_read_file() to do exactly this, e.g. searching
for the START TIME value, if backup_label is present, to report the backup
start time.
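Roughly what such monitoring does (requires superuser, and errors out if
no backup_label exists):

SELECT pg_read_file('backup_label');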


--
Thanks

Bernd



Re: [HACKERS] Patch pg_is_in_backup()

2012-02-03 Thread Bernd Helmle



--On 3 February 2012 13:21:11 +0900 Fujii Masao wrote:


It seems to be more user-friendly to introduce a view like pg_stat_backup
rather than the function returning an array.


I like this idea. A use case I saw for monitoring backup_label in the past
was mainly to discover a forgotten exclusive pg_stop_backup() (e.g. due to
broken backup scripts). If the view were able to distinguish exclusive and
non-exclusive backups, that would be great.
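For reference, what eventually landed in 9.3 were the functions
pg_is_in_backup() and pg_backup_start_time(), which only report exclusive
backups:

SELECT pg_is_in_backup(), pg_backup_start_time();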


--
Thanks

Bernd



Re: [HACKERS] background worker and normal exit

2013-05-26 Thread Bernd Helmle



--On 26 May 2013 11:38:55 +0900 Michael Paquier
 wrote:




This flag makes a worker not to restart only in case of a crash. To solve
your problem, you could as well allow your process to restart and put it
in indefinite sleep if server is not in recovery such it it will do
nothing in your case.


Hmm, so you can't have workers just "do something once" and exit? I have
to admit I didn't follow bgworkers closely in the past, but could you give
a short insight into why this is currently not possible?


--
Thanks

Bernd




Re: [HACKERS] Hard limit on WAL space used (because PANIC sucks)

2013-06-07 Thread Bernd Helmle



--On 6 June 2013 16:25:29 -0700 Josh Berkus wrote:


Archiving
-

In some ways, this is the simplest case.  Really, we just need a way to
know when the available WAL space has become 90% full, and abort
archiving at that stage.  Once we stop attempting to archive, we can
clean up the unneeded log segments.

What we need is a better way for the DBA to find out that archiving is
falling behind when it first starts to fall behind.  Tailing the log and
examining the rather cryptic error messages we give out isn't very
effective.


Slightly OT, but I've always wondered whether we could create a function,
say

pg_last_xlog_removed()

returning a value suitable for calculating the distance to the current
position. An increasing value could be used to instruct monitoring to
throw a warning if a certain threshold is exceeded.

I've also seen people creating monitoring scripts that look into
archive_status, do simple counts of the .ready files, and warn if that
exceeds an expected maximum value.
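A sketch of such a check done from SQL instead of a shell script (9.x
directory layout; superuser only):

SELECT count(*) AS segments_waiting_for_archiving
FROM pg_ls_dir('pg_xlog/archive_status') AS f
WHERE f LIKE '%.ready';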


I haven't looked at the code very deeply, but I think we already store the
position of the last removed xlog segment in shared memory; maybe that can
be used somehow. AFAIK we do cleanup only during checkpoints, so this all
has too much delay...


--
Thanks

Bernd




[HACKERS] Passing fdw_private data from PlanForeignScan to PlanForeignModify

2013-06-13 Thread Bernd Helmle
I'm currently implementing DML actions in the informix FDW module and
trying to understand how UPDATE/DELETE actions interact with the various
structures, especially how state could be passed between the associated
ForeignScan and ForeignModify actions. As far as I understand, with e.g.
an UPDATE action on a foreign table you get the following sequence of
callbacks:


GetForeignRelSize
GetForeignPaths
GetForeignPlan

PlanForeignModify

BeginForeignScan

BeginForeignModify


IterateForeignScan

ExecForeignUpdate



EndForeignScan

EndForeignModify

During the planning phase, the Informix FDW currently plans a server-side
cursor, whose identifier I'd like to pass into the planning phase of the
modify action to reuse it in conjunction with UPDATE ... WHERE CURRENT OF.


I understand that the ModifyTable structure passed to PlanForeignModify has
a list of associated plan nodes, from which I can access the ForeignScan
plan node associated with the current modify action, thus having access to
the fdw_private data generated during the planning phase of the ForeignScan
node. However, it's a list, and I currently don't understand how to
reference the associated ForeignScan node reliably, given that there are
cases with more than one node in this list.


Any pointers, someone?

--
Thanks

Bernd




Re: [HACKERS] Passing fdw_private data from PlanForeignScan to PlanForeignModify

2013-06-13 Thread Bernd Helmle



--On 13 June 2013 11:06:31 -0400 Tom Lane wrote:


My
recommendation would be to see if you can't save the relevant info in
the RelOptInfo node for the relation, probably during GetForeignPlan,
and then get it from there in PlanForeignModify instead of digging in
the plan tree.  (You can use the fdw_private field of RelOptInfo for
whatever you need in this line.)


Hmm, I tried this already, but maybe I'm doing something entirely wrong
here.

What I tried before was to access (in PlanForeignModify) the RelOptInfo
structure through PlannerInfo->simple_rel_array, assuming that the
resultRelation index points to the right array member. However, that didn't
work; the fdw_private List is not the one filled by GetForeignPlan. Is
there another way to get back to the RelOptInfo worked on earlier?


--
Thanks

Bernd




Re: [HACKERS] Passing fdw_private data from PlanForeignScan to PlanForeignModify

2013-07-01 Thread Bernd Helmle



--On 13 June 2013 18:12:05 -0400 Tom Lane wrote:


What i tried before was to access (in PlanForeignModify) the RelOptInfo
structure through PlannerInfo->simple_rel_array, assuming the the
resultRelation index points to the right array member. However, that
didn't  work, the fdw_private List is not the one filled by
GetForeignPlan...is  there another way to get back the RelOptInfo worked
on earlier?


It should work ... *if* there was in fact a RelOptInfo worked on
earlier.  There sometimes isn't.  You might need to do something like
what make_modifytable() has to do to call you in the first place:


Sorry for the late feedback; I didn't manage to get back to this earlier.

This works indeed: the RelOptInfo structure stored in simple_rel_array (if
present) allows an FDW to access its earlier scan state, assigned in
GetForeignPlan() for example.


Thanks for the hints!

--

Bernd





[HACKERS] psql tab completion for updatable foreign tables

2013-07-08 Thread Bernd Helmle
Recently I got annoyed that psql doesn't tab-complete updatable foreign
tables.

Attached is a patch to address this. I'm using the new
pg_relation_is_updatable() function to accomplish it. The function could
also be used for the trigger-based part of the query, but I haven't touched
that part. The patch is against HEAD, but maybe it's worth applying it to
REL9_3_STABLE, too; I'm not sure what our policy is at this stage...
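For reference, the server-side function the completion query relies on
(9.3+); the foreign table name is made up:

-- second argument: whether trigger-based updatability should count as well
SELECT pg_relation_is_updatable('my_foreign_table'::regclass, false);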


--
Thanks

Bernd

psql_tab_complete_updatable_fdw.patch
Description: Binary data



Re: [HACKERS] psql tab completion for updatable foreign tables

2013-07-10 Thread Bernd Helmle



--On 8 July 2013 16:04:31 + Dean Rasheed
wrote:



* pg_relation_is_updatable is only available in 9.3, whereas psql may
connect to older servers, so it needs to guard against that.



Oh, of course, I forgot about this. Thanks for pointing that out.


* If we're doing this, I think we should do it for auto-updatable
views at the same time.

There was concern that pg_relation_is_updatable() would end up opening
every relation in the database, hammering performance. I now realise
that these tab-complete queries have a limit (1000 by default) so I
don't think this is such an issue. I tested it by creating 10,000
randomly named auto-updatable views on top of a table, and didn't see
any performance problems.

Here's an updated patch based on the above points.


Okay, are you adding this to the September commitfest?

--
Thanks

Bernd




[HACKERS] Disabling trust/ident authentication configure option

2015-04-16 Thread Bernd Helmle
We have a customer using a patch to harden their PostgreSQL installation
(see attached) that they would like to contribute. This patch adds the
ability to disable "trust" and "ident" authentication at compile time via
configure options, thus making it impossible to use these authentication
methods for sloppy configuration purposes. This is critical to their
software deployment, as stated in their use case description:


PostgreSQL is deployed as part of a larger technical solution (e.g. a
Telecommunication system) and a field engineer has to install/upgrade this
solution. The engineer is a specialist in the Telco domain and has only
little knowledge of databases and especially PostgreSQL setup.

We now want to provide these kinds of users with pre-hardened packages that
make it very hard to accidentally expose their database. For this purpose
the patch allows to optionally disable the "trust" and "ident"
authentication methods. Especially the "trust" mechanism is very critical
as it might actually provide useful functionality for our user. Think of an
engineer who has to do a night shift upgrade with a customer breathing down
his neck to get the system back online. Circumventing all these
authentication configuration issues by just enabling "trust" is very easy
and looks well supported and documented. (the documentation states the
dangers but there are no *big red flags* or something). After finishing the
upgrade the engineer forgets to restore the secure configuration, and a
malicious attacker later uses this access method to gain access to the
database.


Currently the patch adds new configure options --without-trust-auth and
--without-ident-auth and makes "peer" authentication default when these
options are set. My testing shows that regression tests (which are normally
using trust) are still working. This works as far as Linux and friends go;
Windows is currently not addressed yet. Maybe it's worth considering
making "sspi" the default on this platform instead.

There was a discussion some time ago ([1]), but I think the reasoning
behind this patch is a little bit different from what was discussed there.

[1]


-- 
Thanks

Bernd

disable_trust_ident_configure.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Move pg_upgrade from contrib/ to src/bin/

2015-04-16 Thread Bernd Helmle


--On 15. April 2015 15:02:05 -0400 Andrew Dunstan 
wrote:

> We've handled the buildfarm being red for a few days before. People are
> usually good about applying fixes fairly quickly.

It took me some time to get to that due to my mail backlog, but I've done the
hotfix for dotterel and forced a run for all branches on this box.

-- 
Thanks

Bernd


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] cost_index() and path row estimate.

2015-04-30 Thread Bernd Helmle
While looking into a customer performance problem, I saw this in
costsize.c, cost_index() (9.3.6, but it looks the same in HEAD):

	/* Mark the path with the correct row estimate */
	if (path->path.param_info)
	{
		path->path.rows = path->path.param_info->ppi_rows;
		/* also get the set of clauses that should be enforced by the scan */
		allclauses = list_concat(list_copy(path->path.param_info->ppi_clauses),
								 baserel->baserestrictinfo);
	}
	else
	{
		path->path.rows = baserel->rows;
		/* allclauses should just be the rel's restriction clauses */
		allclauses = baserel->baserestrictinfo;
	}

What I'm wondering about is the else branch, where the baserel row estimate
is assigned to the IndexPath. However, it occurs to me that in conjunction
with a partial index, the overall row estimate will be constrained by the
row estimate from the partial index itself, no? I see that this doesn't have
an impact on the cost estimation of the index scan itself, since
cost_index() does this later:

/* estimate number of main-table tuples fetched */
tuples_fetched = clamp_row_est(indexSelectivity * baserel->tuples);

I stumbled across this since I see heavy misestimates in the EXPLAIN
output, where the estimated row count from the partial index differs heavily
from the real row count. In the customer's case, there are two tables,
where one of the relations has many tuples in the JOIN condition which are
NULLs, like:

CREATE TABLE a2(id integer);
CREATE TABLE b2(id integer);

INSERT INTO a2 SELECT NULL FROM generate_series(1, 9800) AS t(id);
INSERT INTO a2 SELECT t.id FROM generate_series(1, 200) AS t(id);
INSERT INTO b2 SELECT t.id FROM generate_series(1, 200) AS t(id);

CREATE INDEX ON a2(id) WHERE id IS NOT NULL;
CREATE INDEX ON b2(id);

Here i get the following plan:

EXPLAIN ANALYZE SELECT * FROM b2 INNER JOIN a2 ON a2.id = b2.id ;

 Merge Join  (cost=10.79..12.63 rows=4 width=8) (actual time=0.084..0.291
rows=200 loops=1)
   Merge Cond: (b2.id = a2.id)
   ->  Sort  (cost=10.64..11.14 rows=200 width=4) (actual time=0.069..0.082
rows=200 loops=1)
 Sort Key: b2.id
 Sort Method: quicksort  Memory: 35kB
 ->  Seq Scan on b2  (cost=0.00..3.00 rows=200 width=4) (actual
time=0.010..0.027 rows=200 loops=1)
   ->  Index Only Scan using a2_id_idx on a2  (cost=0.14..15.14 rows=1
width=4) (actual time=0.012..0.074 rows=200 loops=1)
 Heap Fetches: 200
 Total runtime: 0.350 ms

EXPLAIN ANALYZE SELECT * FROM b2 INNER JOIN a2 ON a2.id = b2.id WHERE a2.id
IS NOT NULL;

 Merge Join  (cost=10.79..12.12 rows=1 width=8) (actual time=0.080..0.281
rows=200 loops=1)
   Merge Cond: (b2.id = a2.id)
   ->  Sort  (cost=10.64..11.14 rows=200 width=4) (actual time=0.063..0.070
rows=200 loops=1)
 Sort Key: b2.id
 Sort Method: quicksort  Memory: 35kB
 ->  Seq Scan on b2  (cost=0.00..3.00 rows=200 width=4) (actual
time=0.010..0.034 rows=200 loops=1)
   ->  Index Only Scan using a2_id_idx on a2  (cost=0.14..15.64 rows=200
width=4) (actual time=0.012..0.052 rows=200 loops=1)
 Index Cond: (id IS NOT NULL)
 Heap Fetches: 200
 Total runtime: 0.335 ms

With the partial index predicate explicitly specified the row estimate is
accurate, without the predicate the row estimate of the index scan on
a2_id_idx assumes 1.

It's very likely I'm missing something really important here; could someone
shed some light on this?

-- 
Thanks

Bernd


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Disabling trust/ident authentication configure option

2015-05-04 Thread Bernd Helmle


--On 30. April 2015 08:00:23 -0400 Robert Haas 
wrote:

> But... the user could use password authentication with the password
> set to "x" and that would be insecure, too, yet not prevented by any
> of this.  I think it's pretty hard to prevent someone who has
> filesystem-level access to the database server from configuring it
> insecurely.

Sure. But I think the point is to make their engineers think about what
they're doing. Typing in a password gives you at least a hint that you
probably should use something safe.

I agree that you couldn't really make that bulletproof from just this
excluded functionality, but I could imagine that this makes sense in a more
system-wide context.

> 
> Of course, it's fine for people to make changes like this in their own
> copies of PostgreSQL, but I'm not in favor of incorporating those
> changes into core.  I don't think there's enough general utility to
> this to justify that, and more to the point, I think different people
> will want different things.  We haven't, for example, ever had a
> request for this specific thing before.

Well, I found at least one such proposal here:




-- 
Thanks

Bernd


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] cost_index() and path row estimate.

2015-05-04 Thread Bernd Helmle


--On 1. Mai 2015 11:30:51 -0700 Tom Lane  wrote:

> No.  The non-parameterized paths for a given relation must all have the
> same rowcount estimates; otherwise the path comparison logic fails
> fundamentally.  Another way to look at it is that every correct path
> will yield the same number of rows in reality; so it would be wrong to
> give a path that makes use of a partial index a rowcount advantage over
> a path that is not using the partial index but nonetheless is enforcing
> exactly the same set of scan restriction clauses.

Thanks for the explanation. 

-- 
Thanks

Bernd


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Disabling trust/ident authentication configure option

2015-05-08 Thread Bernd Helmle


--On 6. Mai 2015 16:28:43 -0400 Andrew Dunstan  wrote:

>> Single user sessions would work, but the "peer" authentication is also 
>> still available and should be the preferred method to reset passwords 
>> when trust is disabled, so this should not be an issue.
> 
> (Personally I think there's a very good case for completely ripping out
> RFC1413 ident auth. I've not seen it used in a great long while, and it's
> always been a security risk.)

I have the same feeling. I haven't seen it used anywhere in the last 6+
years and I personally think it's a relic... so +1.

-- 
Thanks

Bernd


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hardening pg_upgrade

2014-08-25 Thread Bernd Helmle



--On 21. August 2014 22:08:58 +0200 Magnus Hagander  
wrote:



I vote for discarding 8.3 support in pg_upgrade.  There are already
enough limitations on pg_upgrade from pre-8.4 to make it of questionable
value; if it's going to create problems like this, it's time to cut the
rope.


+1. 8.3 has been unsupported for a fairly long time now, and you can
still do a two-step upgrade if you're on that old a version.


Also +1 from my side. I've still seen some old 8.3 installations at 
customers, but they aren't large and can easily be upgraded with a two-step 
upgrade.


--
Thanks

Bernd


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Better support of exported snapshots with pg_dump

2014-09-01 Thread Bernd Helmle



--On 1. September 2014 17:00:32 +0900 Michael Paquier 
 wrote:



Currently pg_dump does not allow a user to specify an exported snapshot
name that he would like to use for a dump using SET TRANSACTION SNAPSHOT
(now pg_export_snapshot is only used for parallel pg_dump within it). I
imagine that this would be handy to take a consistent dump of a given
database after creating a logical replication slot on it. Thoughts?


There was a discussion of this kind of feature some time ago here:



Not sure if all those arguments still hold true with the appearance of MVCC 
catalog scans.
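
For reference, a minimal sketch of the mechanism under discussion (the
snapshot identifier is just an example value):

-- session 1: export a snapshot from an open transaction
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT pg_export_snapshot();          -- returns e.g. '000003A1-1'

-- session 2: what pg_dump would effectively have to do to share it
BEGIN ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION SNAPSHOT '000003A1-1';
-- ... consistent reads against the same snapshot as session 1 ...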


--
Thanks

Bernd


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] COPY TO returning empty result with parallel ALTER TABLE

2014-11-04 Thread Bernd Helmle



--On 3. November 2014 18:15:04 +0100 Sven Wegener 
 wrote:



I've check git master and 9.x and all show the same behaviour. I came up
with the patch below, which is against curent git master. The patch
modifies the COPY TO code to create a new snapshot, after acquiring the
necessary locks on the source tables, so that it sees any modification
commited by other backends.


Well, I have the feeling that there's nothing wrong with it. The ALTER 
TABLE command has rewritten all tuples with its own XID, thus the current 
snapshot does not "see" these tuples anymore. I suppose that under 
SERIALIZABLE or REPEATABLE READ transaction isolation your proposal still 
doesn't return the tuples you'd like to see.
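
A rough two-session sketch of the behaviour being discussed (not taken from
the original report, and timing-dependent):

-- session 1
CREATE TABLE t (id int);
INSERT INTO t SELECT generate_series(1, 1000);
BEGIN;
ALTER TABLE t ALTER COLUMN id TYPE bigint;  -- rewrite stamps rows with this XID

-- session 2, issued while session 1's transaction is still open: COPY takes
-- its snapshot first, then blocks waiting for the table lock
COPY t TO STDOUT;

-- session 1
COMMIT;

-- session 2's COPY now resumes with its pre-ALTER snapshot and, since all
-- rows carry the newer XID, it emits nothing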


--
Thanks

Bernd


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [GENERAL] Re: [BUGS] [HACKERS] COPY TO returning empty result with parallel ALTER TABLE

2014-11-04 Thread Bernd Helmle



--On 4. November 2014 17:18:14 -0500 Tom Lane  wrote:


Yeah, and I think that it's entirely reasonable for rewriting ALTER TABLEs
to update the xmin of the rewritten tuples; after all, the output data
could be arbitrarily different from what the previous transactions put
into the table.  But that is not the question here.  If the COPY blocks
until the ALTER completes --- as it must --- why is its execution snapshot
not taken *after* the lock is acquired?


COPY waits in DoCopy(), but its snapshot gets acquired earlier, in 
PortalRunUtility(). A SELECT already has its lock during the 
transform/analyze phase and its snapshot is taken much later. Looks like we 
need something that analyzes a utility statement to acquire its lock before 
executing.


--
Thanks

Bernd


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Amazon Redshift

2014-11-06 Thread Bernd Helmle



--On 5. November 2014 23:36:03 +0100 philip taylor  
wrote:



Date Functions

http://docs.aws.amazon.com/redshift/latest/dg/r_ADD_MONTHS.html
http://docs.aws.amazon.com/redshift/latest/dg/r_DATEADD_function.html
http://docs.aws.amazon.com/redshift/latest/dg/r_DATEDIFF_function.html
http://docs.aws.amazon.com/redshift/latest/dg/r_LAST_DAY.html
http://docs.aws.amazon.com/redshift/latest/dg/r_MONTHS_BETWEEN_function.h
tml http://docs.aws.amazon.com/redshift/latest/dg/r_NEXT_DAY.html


FYI, except for DATEDIFF(), all of these functions can be installed by using 
the orafce extension, I believe.
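
A quick sketch, assuming orafce is installed (function names as shipped by
orafce; depending on the version they may live in the "oracle" schema):

CREATE EXTENSION IF NOT EXISTS orafce;

SELECT add_months(date '2014-01-31', 1);                      -- 2014-02-28
SELECT last_day(date '2014-11-06');                           -- 2014-11-30
SELECT months_between(date '2014-11-06', date '2014-05-06');  -- 6
SELECT next_day(date '2014-11-06', 'FRIDAY');                 -- 2014-11-07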


--
Thanks

Bernd


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal: Store "timestamptz" of database creation on "pg_database"

2013-01-03 Thread Bernd Helmle



--On 2. Januar 2013 23:04:43 -0500 Robert Haas  
wrote:



TBH, I don't think anyone has any business changing the creation
timestamp.  Ever.  For me, the fact that pg_dump wouldn't preserve
this information would be a feature, not a bug.  I mostly meant to
point out that someone could bypass it if they cared enough, not to
recommend it.  Honestly, I'd probably *rather* store this information
someplace where it couldn't be changed via SQL *at all*.  But I don't
think we have such a place, so I'm happy enough to store it in the
catalogs, with the associated risks of catalog hackery that entails.


This is exactly what Informix does: it stores the creation or modification 
date of a table in its system catalog (systables.created, to be specific). 
An export/import of tables doesn't preserve the dates; if you restore a 
database (or table), the creation date is adjusted. I'm not aware of any 
SQL interface to influence this.


--
Thanks

Bernd


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Hanging backends and possible index corruption

2013-01-25 Thread Bernd Helmle
We are currently analyzing an issue at one of our customers PostgreSQL 
database.


The current version is 9.1.6 (an update to 9.1.7 is scheduled for next 
Monday; no downtime is possible before then). It runs on POWER7 (pSeries 740) 
on an RHEL 6.3 64-bit LPAR. The packages are built from PGDG SVN sources, no 
special tweaks added. We saw no hardware-related errors on this machine, nor 
any crashes.


What currently happens on this machine is hanging statements (SELECTs and 
occasionally INSERTs) with 100% CPU. After some investigation it turned out 
that the corresponding backends are seeking within an index file over and 
over again in a loop. Looking into the hanging queries, I've recognized 
certain keys which seem to have the problem; other keys used in the WHERE 
condition run smoothly. Turning off index and bitmap index scans caused the 
suspicious keys to return results, too.
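
For reference, the planner-level workaround mentioned above (per-session
only, purely a diagnostic step):

SET enable_indexscan = off;
SET enable_bitmapscan = off;
-- re-run the previously hanging SELECT; if it now completes, the index
-- itself is the prime suspect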


So I saved the index file (a normal B-tree index with a single bigint 
column), did a REINDEX, and the problem was gone. Looking at the index file 
with pg_filedump and pgbtreecheck from Alvaro gave me the following output:


pgbtreecheck gives warnings about pages' parents and then loops visiting 
the same pages over and over again:


NOTICE: fast root: block 290 at level 2
NOTICE: setting page parents
WARNING: block 12709 already had a parent (8840); new parent is 12177
WARNING: block 12710 already had a parent (12439); new parent is 10835
NOTICE: done setting parents
NOTICE: Checking forward scan of level 0, starting at block 1

-- loop starts

WARNING: right sibling 12710 does not point left to me (11680); points to 
10924 instead


Looking into the relevant pages and their prev and next pointers gives me 
the following:


pg_filedump -i -R 11680 ~/tmp/100252789 | grep Blocks
 Blocks: Previous (12651)  Next (12710)  Level (0)  CycleId (0)

pg_filedump -i -R 10924 ~/tmp/100252789 | grep Blocks
 Blocks: Previous (10923)  Next (12710)  Level (0)  CycleId (0)

pg_filedump -i -R 12710 ~/tmp/100252789 | grep Blocks
 Blocks: Previous (10924)  Next (10925)  Level (0)  CycleId (0)

pg_filedump -i -R 10925 ~/tmp/100252789 | grep Blocks
 Blocks: Previous (12710)  Next (10926)  Level (0)  CycleId (0)

pg_filedump -i -R 12709 ~/tmp/100252789 | grep Blocks
 Blocks: Previous (8849)  Next (8850)  Level (0)  CycleId (0)

pg_filedump -i -R 8840 ~/tmp/100252789 | grep Blocks
 Blocks: Previous (8555)  Next (9125)  Level (1)  CycleId (0)

pg_filedump -i -R 12439 ~/tmp/100252789 | grep Blocks
 Blocks: Previous (11405)  Next (11690)  Level (1)  CycleId (0)

$ pg_filedump -i -R 12177 ~/tmp/100252789 | grep Blocks
 Blocks: Previous (11690)  Next (0)  Level (1)  CycleId (0)

$ pg_filedump -i -R 10835 ~/tmp/100252789 | grep Blocks
 Blocks: Previous (10550)  Next (11120)  Level (1)  CycleId (0)

This symptom has happened three times in the last couple of weeks now. 
Looking at the numbers doesn't give me the impression that flaky hardware 
could be involved. What else can we do to track down this problem? Any 
suggestions?


--
Thanks

Bernd


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hanging backends and possible index corruption

2013-01-25 Thread Bernd Helmle



--On 25. Januar 2013 16:28:16 +0100 Andres Freund  
wrote:



Did you reindex after upgrading to 9.1.6? Did you ever have any crashes
or failovers before upgrading to 9.1.6?
I have seen pretty similar symptoms caused by "Fix persistence marking
of shared buffers during WAL replay" in 9.1.6.


Hmm, it has happened only on a single, heavily used table so far, and this 
table was REINDEXed twice after updating to 9.1.6 (each time the issue 
occurred).


--
Thanks

Bernd


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hanging backends and possible index corruption

2013-01-26 Thread Bernd Helmle



--On 25. Januar 2013 20:37:32 -0500 Tom Lane  wrote:



Don't know how careful pgbtreecheck is.  The pg_filedump output isn't
very helpful because you filtered away the flags, so we can't tell if
any of these pages are deleted.  (If they are, the duplicate-looking
links might not be errors, since we intentionally don't reset a deleted
page's left/right links when deleting it.)



Ah, wasn't aware of this.


Could we see the whole special-space dump for each of the pages you're
worried about?



Attached


One thought that occurs to me is that POWER is a weak-memory-ordering
architecture, so that it's a tenable idea that this has something to do
with changing page links while not holding sufficient lock on the page.
I don't see btree doing that anywhere, but ...

BTW, how long has this installation been around, and when did you start
seeing funny behavior?  Can you say with reasonable confidence that the
bug was *not* present in any older PG versions?



This machine went into production around August last year with (AFAIR) 
9.1.5. There were also performance and stress tests on this machine before 
it went into production, with no noticeable problems.


However, what I missed before is that there was some trouble with the 
storage multipaths. It seems that in early January the machine lost some of 
its paths to the SAN, but they were recovered a few seconds later... so I 
cannot exclude this as the cause anymore, though the paths are redundant. 
What strikes me is that the index was recreated in the meantime, after this 
issue...


We will watch this machine more closely over the next couple of weeks to see 
if the issue comes back.


--
Thanks

Bernd

dump.txt.bz2
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] walreceiver fallback_application_name

2011-01-17 Thread Bernd Helmle



--On 16. Januar 2011 21:53:47 +0100 Dimitri Fontaine 
 wrote:



Is "walreceiver" something that "the average DBA" is going to realize
what it is? Perhaps go for something like "replication slave"?


I think walreceiver is very good here, and the user is already
confronted to such phrasing.


http://www.postgresql.org/docs/9.0/interactive/runtime-config-wal.html#GU
C-MAX-WAL-SENDERS


Hmm, given this link, we have already mentioned "standby" multiple times. 
Wouldn't it be better to follow that phrasing?


--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] bug in SignalSomeChildren

2011-01-20 Thread Bernd Helmle



--On 22. Dezember 2010 15:51:09 +0900 Fujii Masao  
wrote:



How about doing target != ALL test at the head for the most common case
(target == ALL)?


That's an idea, but the test you propose implements it incorrectly.


Thanks! I revised the patch.


I had a look at this for the current CF and the patch looks reasonable to 
me. Some testing shows that the changes are working as intended (at least, 
the WAL sender now actually receives signals from SignalSomeChildren(), as 
far as the DEBUG4 output shows). Maybe we should put in a small comment 
explaining why we special-case BACKEND_TYPE_ALL (following Tom's comment 
about expensive shared memory access and IsPostmasterChildWalSender()). I 
marked it as "Ready for Committer".


Question for my understanding:

While reading the small patch, I realized that there's no 
BACKEND_TYPE_WALRECV or similar. If I understand correctly, there's no need 
to handle it this way, since there's only one WAL receiver process per 
instance?


--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] PGXS contrib builds broken?

2013-02-25 Thread Bernd Helmle

Looks like the recent refactoring of code into common/ stopped
PGXS builds within the PostgreSQL source tree from working; I get

/Users/bernd/pgsql-dev/install/HEAD/include/server/postgres_fe.h:27:32: 
fatal error: common/fe_memutils.h: No such file or directory


when trying to build pg_upgrade, pgbench, pg_xlogdump, ... with PGXS.

I don't see common/fe_memutils.h getting installed in include/, so I suppose 
it's either a bug between chair and keyboard or some inconsistency here.


--
Thanks

Bernd


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Materialized View patch broke pg_dump

2013-03-06 Thread Bernd Helmle
It looks like the recent matview patch broke pg_dump in a way which makes 
it impossible to dump 9.1 and 9.2 databases.


it fails with

pg_dump: [Archivierer (DB)] query failed: ERROR:  function 
pg_relation_is_scannable(oid) does not exist


Looking into this issue, it seems the version check in getTables() of 
pg_dump.c is wrong. Shouldn't the check be


if (fout->remoteVersion >= 90300)
{

}

since this is where pg_relation_is_scannable() is introduced?

--
Thanks

Bernd


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] pg_dump in current master segfaults when dumping 9.2/9.1 databases

2013-03-26 Thread Bernd Helmle

My current master segfaults with pg_dump when dumping a 9.1 or 9.2 database:

$ LC_ALL=en_US.utf8 pg_dump -s -p 5448
pg_dump: column number -1 is out of range 0..22
zsh: segmentation fault  LC_ALL=en_US.utf8 pg_dump -s -p 5448

The reason seems to be that getTables() in pg_dump.c forgets to select 
relpages in the query for releases >= 90100. The error message comes from 
PQgetvalue(res, i, i_relpages), which complains about i_relpages being -1 
and will then return NULL...


--
Thanks

Bernd


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_controldata gobbledygook

2013-04-26 Thread Bernd Helmle



--On 25. April 2013 23:19:14 -0400 Tom Lane  wrote:


I think I've heard of scripts grepping the output of pg_controldata for
this that or the other.  Any rewording of the labels would break that.
While I'm not opposed to improving the labels, I would vote against your
second, abbreviated scheme because it would make things ambiguous for
simple grep-based scripts.


I had exactly this kind of discussion just a few days ago with a customer 
who wants to use the output in their scripts and was a little worried about 
compatibility between major versions.


I don't think we explicitly guarantee any output format compatibility for 
corresponding symbols between major versions, but given that pg_controldata 
seems to have a broad use case here, maybe we should document somewhere 
whether we discourage or encourage people to rely on it?


--
Thanks

Bernd


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_stop_backup does not complete

2010-02-26 Thread Bernd Helmle



--On 24. Februar 2010 16:01:02 -0500 Tom Lane  wrote:


One objection to this is that it's not very clear to the user when
pg_stop_backup has finished with actual work and is just waiting for the
archiver, ie when is it safe to hit control-C?  Maybe we should emit a
"backup done, waiting for archiver to complete" notice before entering
the sleep loop.


+1 for this. This hint would certainly help to recognize the issue 
immediately (or at least point to a possible cause).


--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_ctl stop -m immediate on the primary server inflates sequences

2010-04-10 Thread Bernd Helmle



--On 10. April 2010 09:26:41 +0200 Boszormenyi Zoltan  
wrote:



The above is quite reproducable, "pg_ctl stop -m immediate"
"usually" inflated my serial sequence, but I had two occasions
when not. The 69 -> 70 was one. The inflated increase is always 33:


AFAICS, sequences are pre-logged to WAL 32 values ahead to avoid overhead. I 
suspect this is why you are seeing those gaps.
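
A small illustration of that pre-logging (the values shown are what one
would typically observe, not taken from the report above):

CREATE SEQUENCE demo_seq;
SELECT nextval('demo_seq');   -- returns 1, but WAL pre-logs up to value 33

-- after a crash or "pg_ctl stop -m immediate" plus recovery:
SELECT nextval('demo_seq');   -- returns 34, i.e. the gap of 33 seen above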


--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] psql: Add setting to make '+' on \d implicit

2010-04-25 Thread Bernd Helmle



--On 23. April 2010 14:34:45 -0700 Steve Atkins  wrote:


Or more generally an ability to set aliases via .psqlrc similar to \set,
maybe?

\alias "\d-" = "\d"
\alias "\d" = "\d+"


You mean something like this?



--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] no universally correct setting for fsync

2010-05-07 Thread Bernd Helmle



--On 7. Mai 2010 09:48:53 -0500 Kevin Grittner 
 wrote:



I think it goes beyond "tweaking" -- I think we should have a bald
statement like "don't turn this off unless you're OK with losing the
entire contents of the database cluster."  A brief listing of some
cases where that is OK might be illustrative.



+1


I never meant to suggest any statement in that section is factually
wrong; it's just all too rosy, leading people to believe it's no big
deal to turn it off.


I think one mistake in this paragraph is the passing mention of 
"performance". I've seen installations in the past with fsync=off only 
because the admin was pressured to instantly get "more speed" out of the 
database (think of "fast_mode=on"). In my opinion, phrases like 
"performance penalty" are misleading if you need that setting in 99% of 
all use cases for reliable operation.


I've recently even started to wonder if the performance gain with fsync=off 
is still that large on modern hardware. While testing large migration 
procedures to a new version some time ago (on admittedly fast storage), I 
forgot now and then to turn it off, without a significant degradation in 
performance.



--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] no universally correct setting for fsync

2010-05-07 Thread Bernd Helmle



--On 7. Mai 2010 19:49:15 -0400 Tom Lane  wrote:


Bernd Helmle  writes:

I've recently even started to wonder if the performance gain with
fsync=off  is still that large on modern hardware. While testing large
migration  procedures to a new version some time ago (on an admitedly
fast storage) i  forgot here and then to turn it off, without a
significant degradation in  performance.


That says to me either that you're using a battery-backed write cache,
or your fsyncs don't really work (no write barriers or something like
that).



Well, yes, a BBU is present and the storage is proven. Maybe I'm wrong, but 
it seems battery-backed write caches aren't that rare even in low-end 
systems nowadays.


--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] BYTEA / DBD::Pg change in 9.0 beta

2010-05-18 Thread Bernd Helmle



--On 18. Mai 2010 23:20:26 +0200 Jesper Krogh  wrote:


That was it. Knowing what the problem was I had no problem finding it in
the release notes.

May I ask whats the reason is for "breaking" the compatibillity?


"Efficency", if i am allowed to call it this way. The new hex 
representation should be more efficient to retrieve and to handle than the 
old one. I think bytea_output was set to hex for testing purposes on the 
first hand, but not sure wether there was a consensus to leave it there 
finally later.
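
For illustration, the per-session switch that restores the pre-9.0 behaviour
for clients that cannot cope with the hex format yet (output shown roughly as
psql would display it):

SHOW bytea_output;                   -- 'hex' is the 9.0+ default

SELECT decode('61626300', 'hex');    -- displayed as \x61626300

SET bytea_output = 'escape';         -- per-session fallback for old drivers
SELECT decode('61626300', 'hex');    -- displayed as abc\000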


--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Scheduled back-branch releases?

2012-07-12 Thread Bernd Helmle
Do we have a schedule for when the next back-branch releases will be packaged
(I hope I didn't miss any announcement...)?

-- 
Thanks

Bernd

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: Triggers on VIEWs

2010-09-22 Thread Bernd Helmle



--On 5. September 2010 09:09:55 +0100 Dean Rasheed 
 wrote:


I had a first look at your patch; great work!


Attached is an updated patch with more tests and docs, and a few minor
code tidy ups. I think that the INSTEAD OF triggers part of the patch
is compliant with Feature T213 of the SQL 2008 standard. As discussed,


Reading the past discussions, there was some mention of the RETURNING 
clause. I see Oracle doesn't allow its RETURNING INTO clause with INSTEAD OF 
triggers (at least my 10g XE instance here doesn't allow it; not sure about 
newer versions). I assume the following example might have some surprising 
effects on users:


CREATE TABLE foo(id integer);
CREATE VIEW vfoo AS SELECT 'bernd', * FROM foo; 

CREATE OR REPLACE FUNCTION insert_foo() RETURNS trigger
LANGUAGE plpgsql
AS $$
   BEGIN INSERT INTO foo VALUES(NEW.id);
RETURN NEW;
END; $$;

CREATE TRIGGER insert_vfoo INSTEAD OF INSERT ON vfoo
   FOR EACH ROW EXECUTE PROCEDURE insert_foo();

INSERT INTO vfoo VALUES('helmle', 2) RETURNING *;
  text  | id
--------+----
 helmle |  2
(1 row)

SELECT * FROM vfoo;

  text  | id
--------+----
 bernd  |  2
(1 row)

This is solvable by a properly designed trigger function, but maybe we need 
to do something about this?
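
A hedged sketch of such a "properly designed" trigger function (assuming the
view's computed column is really named "text", as the psql output above
suggests): fill in the computed column on NEW before returning it, so that
RETURNING matches what a later SELECT would show.

CREATE OR REPLACE FUNCTION insert_foo() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    INSERT INTO foo VALUES (NEW.id);
    NEW."text" := 'bernd';    -- mirror the view's computed column
    RETURN NEW;
END; $$;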



I don't plan to add the syntax to allow triggers on views to be
disabled at this time, but that should be easy to implement, if there
is a use case for it.


I really don't see a need for this at the moment. We don't have DISABLE 
RULE either. I'm going to post some additional comments once I've understood 
everything.


--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: Triggers on VIEWs

2010-09-23 Thread Bernd Helmle



--On 23. September 2010 08:59:32 +0100 Dean Rasheed 
 wrote:



Yes, I agree. To me this is the least surprising behaviour. I think a
more common case would be where the trigger computed a value (such as
the 'last updated' example). The executor doesn't have any kind of a
handle on the row inserted by the trigger, so it has to rely on the
function return value to support RETURNING.


I didn't mean to forbid it altogether, but at least to document explicitly 
that the trigger returns the VIEW's NEW tuple, not the base table's one (and 
may modify it). But you've already addressed this in your doc patches, so 
nothing further to worry about.


--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: Extend NOT NULL representation to pg_constraint

2010-09-26 Thread Bernd Helmle



--On 25. September 2010 19:55:02 -0300 José Arthur Benetasso Villanova 
 wrote:



One thing that I take notice is when you create a simple table like this
one: select count(*) from pg_constraint ; 12 rows appears in
pg_constraint, 10 to the sequence. Is that ok?


Not sure I get you here; can you elaborate more on this?

--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: Extend NOT NULL representation to pg_constraint

2010-09-27 Thread Bernd Helmle



--On 26. September 2010 15:50:06 -0400 Tom Lane  wrote:


I think his question was - how do we feel about the massive catalog
bloat this patch will create?


It's a fair question.

I can imagine designing things so that we don't create an explicit
pg_constraint row for the simplest case of an unnamed, non-inherited
NOT NULL constraint.  Seems like it would complicate matters quite
a lot though, in exchange for saving what in the end isn't an enormous
amount of space.


What I can try is to record the inheritance information only in the case of 
attinhcount > 0. This would make maintenance of the pg_constraint records 
for NOT NULL columns a little more complicated, though. Another thing we 
should consider is that Peter's functional dependency patch was supposed to 
rely on this feature (1), once it gets done. Not sure this still holds true.


1) 



--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] TODO: You can alter it, but you can't view it

2010-09-27 Thread Bernd Helmle



--On 27. September 2010 16:54:32 +0900 Itagaki Takahiro 
 wrote:



I found it in 8.4 and newer versions. It might be an internal API
(for pg_dump?), but it'd be better to add documentation for it.


Additionally, we could extend pg_tables with an extra column? This would 
make the query more user-friendly, too.


--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: starting to review the Extend NOT NULL representation to pg_constraint patch

2010-09-30 Thread Bernd Helmle



--On 29. September 2010 23:05:11 -0400 Andrew Geery 
 wrote:



Reference: https://commitfest.postgresql.org/action/patch_view?id=312

The patch from
http://archives.postgresql.org/message-id/ca2e4c4762eae28d68404...@amenop
his does not apply cleanly to the current git master:


Yeah, there were some changes to master in the meantime which generated 
some merge failures... I will provide a new version along with other fixes 
soon. Are you going to update the commitfest page?


Thanks
   Bernd




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] ALTER DATABASE RENAME with HS/SR

2010-10-03 Thread Bernd Helmle
Our documentation in 
 currently 
says the following:



Running DROP DATABASE, ALTER DATABASE ... SET TABLESPACE, or ALTER DATABASE 
... RENAME on the primary will generate a WAL entry that will cause all 
users connected to that database on the standby to be forcibly 
disconnected. This action occurs immediately, whatever the setting of 
max_standby_streaming_delay.



However, renaming a database doesn't trigger a disconnect here on a HS/SR 
setup:


* first, on the primary do:

CREATE DATABASE foo;

* ...wait until the database creation has arrived on the standby, then connect:

psql foo

* now rename the database on the primary

ALTER DATABASE foo RENAME TO bar;

* on the standby do in the same connection as before:

foo=# SELECT datname FROM pg_database;
 datname
---
template1
template0
postgres
bernd
pgbench
bar
(6 rows)

That looks contrary to the documented behavior. Shouldn't I get a forced 
disconnect on this connection instead?


--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER DATABASE RENAME with HS/SR

2010-10-04 Thread Bernd Helmle



--On 4. Oktober 2010 13:24:37 -0400 Robert Haas  
wrote:



I understand that we need to disconnect users if the database is
dropped (it's kind of hard to access a database that's not there any
more...) but I'm fuzzy on why we'd need to do that if it is merely
renamed.


Yeah, if there's no real technical reason (besides some potential confusion 
for the user...), this feels like overkill. If the behavior is okay, we need 
to change the documentation, however.


--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: Triggers on VIEWs

2010-10-05 Thread Bernd Helmle



--On 30. September 2010 07:38:18 +0100 Dean Rasheed 
 wrote:



This version doesn't apply cleanly anymore due to some rejects in
planmain.c. Corrected version attached.



Ah yes, those pesky bits have been rotting while I wasn't looking.
Thanks for fixing them!


Basic summary of this patch:

* The patch includes a fairly complete discussion of INSTEAD OF triggers 
and their usage on views. There are also additional enhancements to the 
RULE documentation, which seems reasonable given that this might supersede 
the usage of RULES for updatable views.


* The patch passes the regression tests and comes with a bunch of its own 
regression tests. I think they are complete; they cover statement- and 
row-level triggers and show the usage for JOINed views, for example.


* I've checked against a draft of the SQL standard; the behavior of the 
patch seems to match the spec (my copy might be out of date, however).


* The code looks pretty good to me. There are some low-level error messages 
exposing implementation details which could be changed (e.g. "wholerow is 
NULL"), but given that this is unexpected most of the time and is used in 
some older code as well, this doesn't seem very important.


* The implementation introduces the notion of "wholerow". This is a junk 
target list entry which allows the executor to carry the view information 
to an INSTEAD OF trigger. In the case of DELETE/UPDATE, the rewriter is 
responsible for injecting the new "wholerow" TLE and making the original 
query point to a new range table entry (correct me if I'm wrong), which is 
based on the view's query. I'm not sure I'm happy with the name "wholerow" 
here; maybe "viewrow" or "viewtarget" would be more descriptive?


* I'm inclined to say that INSTEAD OF triggers have less overhead than 
RULES, but this has not yet been proven with a reasonable benchmark.


I would like to do some more tests/review, but I'm going to mark this patch 
as "Ready for Committer" so that someone more qualified on the executor part 
can have a look at it during this commitfest, if that's okay?


--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: starting to review the Extend NOT NULL representation to pg_constraint patch

2010-10-14 Thread Bernd Helmle



--On 14. Oktober 2010 11:42:27 -0400 Robert Haas  
wrote:



I did a sanity make clean && make && make check before applying the
patch and all the tests passed.  After applying the patch and doing
make clean && make && make check, I got a number of failures of the
form “FAILED (test process exited with exit code 2)”.  The exact
number of failures varies by run, so I’m wondering if I didn’t do
something wrong...


That indicates that PostgreSQL is crashing.  So I think this patch is
definitely not ready for prime time yet, and needs some debugging.  In
view of the fact that we are out of time for this CommitFest, I'm
going to mark this Returned with Feedback in the CommitFest
application.  Hopefully it will be resubmitted for the next CommitFest
after further refinement, because I think this is a good and useful
improvement.


Yeah, it's crashing, but it doesn't do that here on my MacBook (passing the 
regression tests is one of my top priorities when submitting a patch). It 
must be some broken pointer or similar oversight which is triggered on 
Andrew's box. Andrew, which OS and architecture have you tested on?


--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: starting to review the Extend NOT NULL representation to pg_constraint patch

2010-10-14 Thread Bernd Helmle



--On 14. Oktober 2010 10:02:12 -0400 Andrew Geery  
wrote:



The first failure I get is in the inherit tests (tail of
/src/test/regress/results/inherit.out):

alter table a alter column aa type integer using bit_length(aa);
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
Connection to server was lost

This test consistently breaks in this location and breaks for both
make check and make installcheck.


Okay, I can reproduce it on a Linux box here; I will be back with a fixed 
version.


--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: starting to review the Extend NOT NULL representation to pg_constraint patch

2010-10-14 Thread Bernd Helmle



--On 14. Oktober 2010 19:16:56 +0100 Dean Rasheed 
 wrote:



Program received signal SIGSEGV, Segmentation fault.
ATExecSetNotNullInternal (is_local=1 '\001',
is_new_constraint=, atttup=,
attr_rel=, rel=)
at tablecmds.c:4847
4847Form_pg_constraint constr =
(Form_pg_constraint) GETSTRUCT(copy_tuple);

Looking in that function, there is a similar "found" variable that
isn't being initialised (which my compiler didn't warn about).
Initialising that to false, sems to fix the problem and all the
regression tests then pass.


Yep, that was it. I had CFLAGS='-O0' in my dev build from a former 
debugging cycle and forgot about it (which reminds me to do a 
maintainer-clean more often between coding sessions). This is also the 
reason I haven't seen the compiler warnings and the crash in the regression 
tests. Shame on me, but I think I have learned the lesson ;)


--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: starting to review the Extend NOT NULL representation to pg_constraint patch

2010-10-15 Thread Bernd Helmle



--On 14. Oktober 2010 20:47:27 +0100 Dean Rasheed 
 wrote:



OK, here it is.

I've not cured this compiler warning (in fact I'm not sure what it's
complaining about, because the variable *is* used):

tablecmds.c: In function 'ATExecSetNotNull':
tablecmds.c:4747: warning: unused variable 'is_new_constraint'


Just a leftover from earlier coding; I have removed this in my patch 
version. I have adapted your fixes and would like to propose that we proceed 
with my version, if that's okay with you?


--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: starting to review the Extend NOT NULL representation to pg_constraint patch

2010-10-15 Thread Bernd Helmle



--On 15. Oktober 2010 13:36:38 -0400 Tom Lane  wrote:


Bernd Helmle  writes:

Here is an updated version of the patch. It fixes the following issues
Andrew discovered during his review cycle:


I looked through this a bit.  It's not ready to commit unfortunately.


Thanks for looking at this. I didn't reckon that I had really gotten 
everything done (admittedly, docs and regression tests were liberally left 
out), and given your comments, your review is invaluable at this point.



The main gripe I've got is that you've paid very little attention to
updating comments that your code changes invalidate.  That's not even
a little bit acceptable: people will still have to read this code later.
Two examples are that struct CookedConstraint is now used for notnull
constraints in addition to its other duties, but you didn't adjust the
comments in its declaration; and that you made transformColumnDefinition
put NOTNULL constraints into the ckconstraints list, ignoring the fact
that both its name and the comments say that that's only CHECK
constraints.  In the latter case it'd probably be a better idea to add
a separate "nnconstraints" list to CreateStmtContext.



Agreed, there's much more cleanup needed.


Some other random points:

* The ALTER TABLE changes seem to be inserting a whole lot of
CommandCounterIncrement calls in places where there were none before.
Do we really need those?  Usually the theory is that one at the end
of an operation is sufficient.


Hmm, I seem to remember that I had some problems during coding and testing 
where changes were invisible during recursion... I will go through them 
again.




* All those bits with deconstruct_array could usefully be folded into
a subroutine, along the lines of
bool constraint_is_for_single_column(HeapTuple constraintTup, int attno)



Ok


* As best I can tell from looking, the patch *always* generates a name
for NOT NULL constraints.  It should honor the user's name for the
constraint if one is given, ie
create table foo (f1 int constraint nn1 not null);
Historically we've had to drop such names on the floor, but that should
stop.



Oh, I really missed that.


* pg_dump almost certainly needs some updates.  I imagine the behavior
we'll want from it is to print NOT NULL only when the column's notnull
constraint shows that it's locally defined.  If it gets printed for
columns that only have an inherited NOT NULL, then dump and reload
will change the not-null inheritance state.  This may be a bit tricky
since pg_dump also has to still work against older servers, but with
any luck you can steal its logic for inherited check constraints.
We probably want it to start preserving the names of notnull
constraints, too.



I will look at it.


* In general you should probably search for all places that reference
pg_constraint.contype, as a means of spotting any other code that needs
to be updated for this.



Ok


* Lots of bogus trailing whitespace.  "git diff --check" can help you
with that.  Also, please try to avoid unnecessary changes of whitespace
on lines the patch isn't otherwise changing.  That makes it harder for
reviewers to see what the patch *is* changing, and it's a useless
activity anyway because the next run of pg_indent will undo such
changes.



Whoops... I've set (setq-default show-trailing-whitespace t) in my .emacs, 
so I don't miss it again.



* No documentation updates.  At the very least, catalogs.sgml has to
be updated: both the pg_attribute and pg_constraint pages probably
have to have something to say about this.

* No regression test updates.  There ought to be a few test cases that
demonstrate the new behavior.



I'll include them. It was important for me to get a feeling for whether 
the direction in refactoring/extending this code is the correct one or 
needs more thinking. So, thanks again for reviewing.


--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: Trailing Whitespace Tips (was: [HACKERS] Re: starting to review the Extend NOT NULL representation to pg_constraint patch)

2010-10-16 Thread Bernd Helmle



--On 16. Oktober 2010 12:35:06 +0300 Peter Eisentraut  
wrote:



Maybe it should go in src/tools/editors/emacs.samples, too?


Yeah, I think we should recommend some way to highlight faulty
whitespace.

The problem is, after you turn it on, it will make you cry as you
realize how sloppy most code and other files are written.


That's exactly why it is mostly off in my case. But you can always put it 
into a special editing mode, which I'm currently experimenting with. Thanks 
for your tips.


--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER OBJECT any_name SET SCHEMA name

2010-10-31 Thread Bernd Helmle



--On 30. Oktober 2010 18:59:30 -0400 Tom Lane  wrote:


I'm not sure whether that really fixes anything, or just provides people
with a larger-caliber foot-gun.  See for example recent complaints about
citext misbehaving if it's not in the public schema (or more generally,
any schema not in the search path).  I think we'd need to think a bit
harder about the behavior of objects that aren't in the search path
before creating a facility like this, since it seems to be tantamount
to promising that extensions won't break when pushed around to different
schemas.

I'm also a bit less than enthused about the implementation approach.
If we're going to have a policy that every object type must support
ALTER SET SCHEMA, I think it might be time to refactor, rather than
copying-and-pasting similar boilerplate code for every one.


This reminds me of a small discussion we had some years ago when I targeted 
this for the sake of completeness of ALTER ... SET SCHEMA (see 
).


I didn't follow the previous discussions about EXTENSION very closely, but 
to amend the idea made in the mentioned thread, couldn't we just invent a 
facility that moves only the classes of objects belonging to an extension?


--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] How can we tell how far behind the standby is?

2010-11-08 Thread Bernd Helmle



--On 5. November 2010 11:46:08 -0700 Josh Berkus  wrote:


I'll work on some tools to make this a bit more palatable, but I
disagree with earlier assertions that we have the replication monitoring
"done".  There's still a *lot* of work to do.


While getting familiar with our SR/HS infrastructure I scripted this:



Not sure if it does everything right, but it helped me a lot while load 
testing SR. AFAIK Magnus has a monitoring script, too; I think this is 
it:




Maybe that helps, too.

--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Improved parallel make support

2010-11-15 Thread Bernd Helmle



--On 14. November 2010 11:08:13 -0500 Robert Haas  
wrote:



+1.  The current master branch fails to build on my (rather new) Mac with
make -j2.  I could upgrade my toolchain but it seems like more trouble
than it's worth, not to mention a possible obstacle to new users and
developers.


Same here, too. And it doesn't matter whether you use the shipped make 
(3.81) or the one from MacPorts (currently 3.82); both fail with:


ld: file not found: ../../../../../../src/backend/postgres
collect2: ld returned 1 exit status
make[3]: *** [ascii_and_mic.so] Error 1
make[2]: *** [all-ascii_and_mic-recurse] Error 2
make[1]: *** [all-backend/utils/mb/conversion_procs-recurse] Error 2
make[1]: *** Waiting for unfinished jobs

--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JDBC connections to 9.1

2011-04-18 Thread Bernd Helmle



--On 18. April 2011 09:44:38 -0400 Tom Lane  wrote:


I'm getting JDBC exceptions when I try to connect to 9.1 (master) with
the postgresql-9.0-801.jdbc3.jar  I don't have this issue with 9.0.


Hmm, what shows up in the postmaster log?


A quick check with an application here gives the following with JDBC
loglevel=2

16:09:47.910 (1) PostgreSQL 9.1devel JDBC4 (build 900)
16:09:47.914 (1) Trying to establish a protocol version 3 connection to 
localhost:5438
16:09:47.930 (1)  FE=> StartupPacket(user=bernd, database=mailstore, 
client_encoding=UNICODE, DateStyle=ISO, extra_float_digits=2)

16:09:47.933 (1)  <=BE AuthenticationOk
16:09:47.942 (1)  <=BE ParameterStatus(application_name = )
16:09:47.942 (1)  <=BE ParameterStatus(client_encoding = UTF8)
org.postgresql.util.PSQLException: Protocol error.  Session setup failed.
	at 
org.postgresql.core.v3.ConnectionFactoryImpl.readStartupMessages(ConnectionFactoryImpl.java:498)
	at 
org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:112)
	at 
org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:66)
	at 
org.postgresql.jdbc2.AbstractJdbc2Connection.(AbstractJdbc2Connection.java:125)
	at 
org.postgresql.jdbc3.AbstractJdbc3Connection.(AbstractJdbc3Connection.java:30)
	at 
org.postgresql.jdbc3g.AbstractJdbc3gConnection.(AbstractJdbc3gConnection.java:22)
	at 
org.postgresql.jdbc4.AbstractJdbc4Connection.(AbstractJdbc4Connection.java:32)

at org.postgresql.jdbc4.Jdbc4Connection.(Jdbc4Connection.java:24)
at org.postgresql.Driver.makeConnection(Driver.java:393)
at org.postgresql.Driver.connect(Driver.java:267)
at java.sql.DriverManager.getConnection(DriverManager.java:582)
at java.sql.DriverManager.getConnection(DriverManager.java:154)
at de.oopsware.mailstore.PGSQLMailstore.connect(Unknown Source)
at de.oopsware.mailstore.PGSQLMailstore.connect(Unknown Source)
at mailstore.main(Unknown Source)
SQLException: SQLState(08P01)
getConnection failed: org.postgresql.util.PSQLException: Protocol error. 
Session setup failed.

org.postgresql.util.PSQLException: Protocol error.  Session setup failed.

Hmm, it seems it stumbles while reading client_encoding.

--
Thanks

Bernd



Re: [HACKERS] JDBC connections to 9.1

2011-04-18 Thread Bernd Helmle



--On 18. April 2011 16:17:57 +0200 Bernd Helmle  wrote:


16:09:47.942 (1)  <=BE ParameterStatus(client_encoding = UTF8)
org.postgresql.util.PSQLException: Protocol error.  Session setup failed.
    at org.postgresql.core.v3.ConnectionFactoryImpl.readStartupMessages(ConnectionFactoryImpl.java:498)


ConnectionFactoryImpl.readStartupMessages() has this:

    else if (name.equals("client_encoding"))
    {
        if (!value.equals("UNICODE"))
            throw new PSQLException(GT.tr("Protocol error.  Session setup failed."),
                                    PSQLState.PROTOCOL_VIOLATION);

        pgStream.setEncoding(Encoding.getDatabaseEncoding("UNICODE"));
    }

If I am reading it correctly, it reads "UTF8" from the backend while
expecting only "UNICODE". Not sure what change caused this, though. If I
extend the check to also accept "UTF8", everything seems to work.
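
For illustration, a minimal standalone sketch of that relaxed check (my own
naming -- this is not the driver's actual code, it only mirrors the
condition readStartupMessages() would need in order to accept both
spellings):

// Hypothetical standalone sketch -- not the JDBC driver's code. It only
// demonstrates accepting both encoding names seen in the log above.
final class ClientEncodingCheck
{
    static boolean isAcceptableClientEncoding(String value)
    {
        return "UNICODE".equals(value) || "UTF8".equals(value);
    }

    public static void main(String[] args)
    {
        System.out.println(isAcceptableClientEncoding("UNICODE")); // true  (the value the driver requests)
        System.out.println(isAcceptableClientEncoding("UTF8"));    // true  (the value 9.1devel reports back)
        System.out.println(isAcceptableClientEncoding("LATIN1"));  // false (still a protocol error)
    }
}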


--
Thanks

Bernd




Re: [HACKERS] Back branch update releases this week; beta postponed

2011-04-21 Thread Bernd Helmle



--On 12. April 2011 10:58:25 -0400 Tom Lane  wrote:


Hmm, I would like to see the patch for

going in for 8.4.8.


Simon, was there a reason you only back-patched that to 9.0?


So it seems we have shipped 8.4.8 without a fix for this?

--
Thanks

Bernd



Re: [HACKERS] Proposed patch: Smooth replication during VACUUM FULL

2011-04-30 Thread Bernd Helmle



--On 30. April 2011 20:19:36 +0200 Gabriele Bartolini 
 wrote:



I have noticed that during VACUUM FULL on reasonably big tables, replication
lag climbs. In order to smooth down the replication lag, I propose the
attached patch which enables vacuum delay for VACUUM FULL.


Hmm, but this just trades one problem for another: you need to hold the
exclusive lock longer than necessary, and given that we discourage regular
use of VACUUM FULL, I cannot see a real benefit to it...


--
Thanks

Bernd




Re: [HACKERS] tuning autovacuum

2011-06-09 Thread Bernd Helmle



--On 9. Juni 2011 11:53:22 -0400 Greg Smith  wrote:


There are at least four interesting numbers to collect each time autovacuum
runs:

1) This one, when was the threshold crossed.  I believe one of the AV workers
would have to pause periodically to update these if they're all busy doing
work.
2) What time did the last autovacuum start at
3) How many dead rows were there at the point when it started
4) When did the last autovacuum end (currently the only value stored)


This is something I've found myself thinking about repeatedly in the past,
too. I'd go further and also expose the details reported by VACUUM VERBOSE
in the view, at least the number of pages visited and cleaned (or dead but
not yet cleaned). Customers are very interested in these numbers, and I've
found pgfouine's reporting of them quite useful. Having this information in
a view would make building monitoring infrastructure for it much easier
(depending on how easy or expensive tracking this information is; I haven't
looked into the code yet).
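
Just to illustrate what is visible today (Java/JDBC, placeholder connection
details): the per-table counters below are essentially all we can monitor
right now -- the pages-visited/cleaned numbers discussed above are exactly
what is missing from this picture:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal sketch of what can be monitored today: the dead-tuple estimate
// and the time of the last autovacuum per table. Connection details are
// placeholders.
public class AutovacuumOverview
{
    public static void main(String[] args) throws Exception
    {
        String sql = "SELECT relname, n_dead_tup, last_autovacuum "
                   + "FROM pg_stat_user_tables "
                   + "ORDER BY n_dead_tup DESC LIMIT 10";

        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost:5432/postgres", "postgres", "secret");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql))
        {
            while (rs.next())
                System.out.printf("%-30s %10d  %s%n",
                    rs.getString("relname"),
                    rs.getLong("n_dead_tup"),
                    rs.getTimestamp("last_autovacuum"));
        }
    }
}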


--
Thanks

Bernd



Re: [HACKERS] procpid?

2011-06-15 Thread Bernd Helmle



--On 15. Juni 2011 16:47:55 + Greg Sabino Mullane  wrote:


Or perhaps pg_connections. Yes, +1 to making things fully backwards
compatible by keeping pg_stat_activity around but making a better
designed and better named table (view/SRF/whatever).


I thought about that too when reading the thread the first time, but
"pg_stat_sessions" sounds better. Our documentation also primarily refers
to a database connection as a "session", I think.


--
Thanks

Bernd



Re: [HACKERS] Patch - Debug builds without optimization

2011-06-16 Thread Bernd Helmle



--On 16. Juni 2011 14:30:27 +0200 Radosław Smogura  
wrote:



 Hello,

 I'm sending following patch which disables optimization when  --enable-debug
is passed. It was nasty (for me, at least) that debug  build required passing
of CFLAGS with -O0 to get nice traceable code.



-O0 hides bugs in your code (e.g. look at 
 
and replies for an example of how to do it better). Doing this
automatically on debug builds would be a step backwards.


--
Thanks

Bernd



Re: [HACKERS] procpid?

2011-06-16 Thread Bernd Helmle



--On 16. Juni 2011 15:33:35 + Greg Sabino Mullane  wrote:


No, this is clearly connections, not sessions. At least based on the items
in the postgresql.conf file, especially max_connections (probably one of the
items most closely associated with pg_stat_activity)


Well, it doesn't show database connections only; it also shows which
actions are currently being performed through the various connections and
state information about them. I'm not a native English speaker, but I have
the feeling that "sessions" is better suited for this kind of interactive
monitoring. I believe Oracle also has a v$session view for querying various
information about what's going on.


--
Thanks

Bernd



Re: [HACKERS] Re: starting to review the Extend NOT NULL representation to pg_constraint patch

2011-06-16 Thread Bernd Helmle



--On 16. Juni 2011 13:25:05 -0400 Tom Lane  wrote:


Possible solution is to leave bootstrap's behavior alone, and have a
step during initdb's post-bootstrap stuff that creates a matching
pg_constraint row for every pg_attribute entry that's marked attnotnull.


+1 for this idea. I never got anywhere with this myself because I didn't
have any clue how to do it efficiently.
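
Just as an illustration (not the patch itself; Java/JDBC with placeholder
connection details), such a post-bootstrap step would essentially have to
walk the following list of columns and create a matching pg_constraint row
for each:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Illustration only -- not the proposed patch. It merely lists the
// pg_attribute entries marked attnotnull that would need a matching
// pg_constraint row. Connection details are placeholders.
public class NotNullColumns
{
    public static void main(String[] args) throws Exception
    {
        String sql = "SELECT attrelid::regclass AS rel, attname "
                   + "FROM pg_attribute "
                   + "WHERE attnotnull AND attnum > 0 AND NOT attisdropped "
                   + "ORDER BY attrelid, attnum";

        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost:5432/postgres", "postgres", "secret");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql))
        {
            while (rs.next())
                System.out.println(rs.getString("rel") + "." + rs.getString("attname"));
        }
    }
}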


--
Thanks

Bernd



Re: [HACKERS] Another swing at JSON

2011-06-16 Thread Bernd Helmle



--On 29. März 2011 21:15:11 -0400 Joseph Adams  
wrote:



Thanks.  I applied a minor variation of this trick to the JSON module,
so now it builds/installs/tests cleanly on both REL8_4_0 and HEAD
(though it won't work if you copy contrib/json into a pre-9.1
PostgreSQL source directory and type `make` without USE_PGXS=1).

I also went ahead and renamed uninstall_json.sql to
json--uninstall--0.1.sql (again, it's for pre-9.1 users) and removed
unnecessary trailing spaces.


Anything going into the PostgreSQL code base will be for 9.2, so
anything else would be a separate (if somewhat related) project.  I
suspect the code will be a good deal cleaner if you do just the 9.2+
version and see who wants it back-patched, if anyone does :)


It's a trivial matter to remove backward compatibility from
contrib/json, if anybody wants me to do it.  I can just remove
compat.[ch], */init-pre9.1.* , remove the PREFIX_PGVER trick from the
Makefile, remove a few lines in the source code, and maintain the
backported json module elsewhere.  It's just a matter of whether or
not explicit backward-compatibility is desirable in modules shipped
with releases.


I started looking into this. A very minor adjustment to the filelist.sgml
hunk was required to apply the patch cleanly to current HEAD (attached).


After reading Joseph's comment upthread, I don't see any consensus on
whether the existing pre-9.1 support is required or even desired. Maybe I
missed it, but do we really expect an extension (or contrib module) shipped
in contrib/ to be backward compatible with earlier major releases?


--
Thanks

Bernd



json_contrib_cleaned.patch.gz
Description: GNU Zip compressed data



Re: [HACKERS] Another swing at JSON

2011-06-16 Thread Bernd Helmle



--On 16. Juni 2011 17:38:07 -0400 Tom Lane  wrote:


After reading Joseph's comment upthread, I don't see any consensus
wether the existing pre-9.1 support is required or even desired. Maybe
i missed it, but do we really expect an extension (or contrib module)
to be backwards compatible to earlier major releases, when shipped in
contrib/ ?


No, we don't.  You won't find any attempt in any contrib module to build
against prior releases.  There's not much point, since they're shipped
with a specific release of the core.


Okay, then we should remove this code. It doesn't do anything complicated,
but it seems like wasted code in this case (and from a maintenance point of
view).


Joseph, are you able to remove the compatibility code for this CF?

--
Thanks

Bernd



  1   2   3   4   5   >