Re: [HACKERS] pgbench patches

2013-07-10 Thread Fabien COELHO


Hello Tatsuo,


I have looked into this:
https://commitfest.postgresql.org/action/patch_view?id=1105
because it's marked as Ready for committer. However I noticed that
you worried about other pgbench patches such as
https://commitfest.postgresql.org/action/patch_view?id=1103 .


So I would like to know whether the throttling patch will be committed, and
then update the progress patch to take that into account.


Shall I wait until your pgbench --throttle patch becomes ready for committer?


No. I'll submit another patch to the next commitfest to improve the 
progress behavior under throttling, once both initial patches are 
committed.


--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Removing Inner Joins

2013-07-10 Thread Antonin Houska

On 07/10/2013 07:28 AM, Atri Sharma wrote:
 I am not sure if the part you mentioned is in line with the case I am
 talking about.

Can you please post an example of such a join removal? I mean a query
before and after the removal. Thanks,


//Tony




Re: [HACKERS] Removing Inner Joins

2013-07-10 Thread Atri Sharma
 Can you please post an example of such a join removal? I mean a query before
 and after the removal. Thanks,

Courtesy Robert Haas:

SELECT foo.x, foo.y, foo.z FROM foo, bar WHERE foo.x = bar.x

Conditions:

1) foo.x is not null.

2) foo (x) is a foreign key referencing bar (x).

We can ignore bar completely in this case i.e. avoid scanning bar.
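To make the equivalence concrete, here is a small illustration (a sketch using Python's sqlite3 module purely for demonstration; the thread is of course about PostgreSQL's planner). With foo.x NOT NULL and constrained to reference bar(x), the join neither adds nor removes rows, so both queries return the same result:

```python
import sqlite3

# Toy schema mirroring the example (illustration only): foo.x is
# NOT NULL and every foo.x value exists in bar(x).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bar (x INTEGER PRIMARY KEY);
    CREATE TABLE foo (x INTEGER NOT NULL REFERENCES bar(x),
                      y TEXT, z TEXT);
    INSERT INTO bar VALUES (1), (2), (3);
    INSERT INTO foo VALUES (1, 'a', 'p'), (2, 'b', 'q'), (2, 'c', 'r');
""")

# Original query: inner join against bar.
with_join = conn.execute(
    "SELECT foo.x, foo.y, foo.z FROM foo, bar WHERE foo.x = bar.x"
).fetchall()

# After join removal: bar is never scanned.
without_join = conn.execute("SELECT x, y, z FROM foo").fetchall()

assert sorted(with_join) == sorted(without_join)
```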

Regards,

Atri


--
Regards,

Atri
l'apprenant




Re: [HACKERS] Removing Inner Joins

2013-07-10 Thread Hannu Krosing
On 07/10/2013 09:18 AM, Atri Sharma wrote:
 Can you please post an example of such a join removal? I mean a query before
 and after the removal. Thanks,
 Courtesy Robert Haas:

 SELECT foo.x, foo.y, foo.z FROM foo, bar WHERE foo.x = bar.x

 Conditions:

 1) foo.x is not null.
I guess that this is also not needed: you can just filter out the rows where

foo.x is null

That is, replace the join with a foo.x IS NOT NULL condition.

 2) foo (x) is a foreign key referencing bar (x).

 We can ignore bar completely in this case i.e. avoid scanning bar.
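The refinement suggested above (drop the NOT NULL requirement and rewrite the join as a filter) can be illustrated the same way, again with a hypothetical Python/sqlite3 sketch rather than PostgreSQL code: with a nullable foo.x, the join silently drops the NULL rows, so a foo.x IS NOT NULL filter is the equivalent rewrite and bar still need not be scanned:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bar (x INTEGER PRIMARY KEY);
    CREATE TABLE foo (x INTEGER REFERENCES bar(x), y TEXT);  -- x nullable
    INSERT INTO bar VALUES (1), (2);
    INSERT INTO foo VALUES (1, 'a'), (NULL, 'b'), (2, 'c');
""")

# The join never matches NULL values of foo.x ...
with_join = conn.execute(
    "SELECT foo.x, foo.y FROM foo, bar WHERE foo.x = bar.x").fetchall()

# ... so a NOT NULL filter on foo alone is the equivalent rewrite.
rewritten = conn.execute(
    "SELECT x, y FROM foo WHERE x IS NOT NULL").fetchall()

assert sorted(with_join) == sorted(rewritten)
```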

 Regards,

 Atri


 --
 Regards,

 Atri
 l'apprenant




-- 
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ





Re: [HACKERS] robots.txt on git.postgresql.org

2013-07-10 Thread Craig Ringer
On 07/09/2013 11:30 PM, Andres Freund wrote:
 On 2013-07-09 16:24:42 +0100, Greg Stark wrote:
 I note that git.postgresql.org's robot.txt refuses permission to crawl
 the git repository:

 http://git.postgresql.org/robots.txt

 User-agent: *
 Disallow: /


 I'm curious what motivates this. It's certainly useful to be able to
 search for commits.
 
 Gitweb is horribly slow. I don't think anybody with a bigger git repo
 using gitweb can afford to let all the crawlers go through it.

Wouldn't whacking a reverse proxy in front be a pretty reasonable
option? There's a disk space cost, but using Apache's mod_proxy or
similar would do quite nicely.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Removing Inner Joins

2013-07-10 Thread Atri Sharma
On Wed, Jul 10, 2013 at 1:11 PM, Hannu Krosing ha...@2ndquadrant.com wrote:
 On 07/10/2013 09:18 AM, Atri Sharma wrote:
 Can you please post an example of such a join removal? I mean a query before
 and after the removal. Thanks,
 Courtesy Robert Haas:

 SELECT foo.x, foo.y, foo.z FROM foo, bar WHERE foo.x = bar.x

 Conditions:

 1) foo.x is not null.
 I guess that this is also not needed: you can just filter out the rows where

 foo.x is null

 That is, replace the join with a foo.x IS NOT NULL condition.

Yeah, sounds good. We should explore it further.

Regards,

Atri


--
Regards,

Atri
l'apprenant




Re: [HACKERS] Review: extension template

2013-07-10 Thread Markus Wanner
Peter,

On 07/09/2013 11:04 PM, Peter Eisentraut wrote:
 I think there is an intrinsic conflict here.  You have things inside the
 database and outside.  When they depend on each other, it gets tricky.
 Extensions were invented to cope with that.  They do the job, more or
 less.

I agree. And to extend upon that, I think it's important to distinguish
between the created extension and the available one, i.e. the template.
Only the template lives outside. The created extension itself is firmly
sitting in the database, possibly with multiple dependencies from other
objects. It does not depend on anything outside of the database
(assuming the absence of a DSO of the extension, which does not follow
that template concept).

And yes, we decided the objects that are part of the extension should
not get dumped with pg_dump. Nobody argues to change that. Note,
however, that this very decision is what raises the intrinsic conflict
for pg_restore, because CREATE EXTENSION in the dump depends on the
outside extension. If anything, Dimitri's patch solves that.

 Now you want to take the same mechanism and apply it entirely
 inside the database.  But that wasn't the point of extensions!  That's
 how you get definitional issues like, should extensions be dumped or not.

IMO the point of extensions is to extend Postgres (with code that's not
part of core). Whether their templates (SQL sources, if you want) are
stored on the file system (outside) or within the database is irrelevant
to the concept.

Think of it that way: Take one of those FUSE-Postgres-FS things [1],
which uses Postgres as the backend for a file system and allows you to
store arbitrary files. Mount that to the extensions directory of your
Postgres instance and make your extension templates available there
(i.e. copy them there). CREATE EXTENSION would just work, reading the
templates for the extension to create from itself, via that fuse
wrapper. (If the FUSE wrapper itself was using an extension, you'd get
into an interesting chicken and egg problem, but even that would be
resolvable, because the installed extension doesn't depend on the
template it was created from.)

Think of Dimitri's patch as a simpler and more elegant way to achieve
the very same thing. (Well, modulo our disagreement about the dependency
between extension and templates.) And as opposed to the file system or
fuse approach, you'd even gain transactional safety, consistency (i.e. a
constraint can enforce a full version exists as the basis for an upgrade
script), etc... But who am I to tell you the benefits of storing data in
a database?

Of course, you then also want to be able to backup your templates (not
the extensions) stored in the database. Just like you keep a backup of
your file-system templates. Either by simply making a copy, or maybe by
keeping an RPM or DEB package of it available. Thus, of course,
templates for extensions need to be dumped as well.

 I don't believe the above use case.  (Even if I did, it's marginal.)
 You should always be able to arrange things so that an upgrade of an
 inside-the-database-package is possible before or after pg_restore.

Dimitri's scenario assumes an old and a new version of an extension as
well as an old and a new Postgres major version. Where the old extension
is not compatible with the new Postgres major version. Which certainly
seems like a plausible scenario to me (postgis-2.0 is not compatible
with Postgres-9.3, for example - granted, it carries a DSO, so it's not
really a good example).

Given how extensions work, to upgrade to the new Postgres major version
*and* to the required new version of the extension, you don't ever need
to upgrade an inside-the-database-package. Instead, you need to:

createdb -> provide templates -> CREATE EXTENSION -> restore data

Now, CREATE EXTENSION and restoring your data has effectively been
merged for the user, as pg_dump emits proper CREATE EXTENSION commands.
Providing templates so far meant installing an RPM or DEB. Or copying
the files manually.

But in fact, how and where you provide templates for the extension is
irrelevant to that order. And the possibility to merge the second step
into the 'restore data' step certainly sounds appealing to me.

 pg_dump and pg_restore are interfaces between the database and the
 outside. They should have nothing to do with upgrading things that live
 entirely inside the database.

I don't get your point here. In my view, libpq is intended to modify the
things that live inside the database, including extensions (i.e. ALTER
EXTENSION ADD FUNCTION). Or what kind of things that live entirely
inside the database do you have in mind?

 There would be value to inside-the-database package management, but it
 should be a separate concept.

Anything that's incompatible with extensions is not gonna fly. There are
too many of them available, already. We need to ease management of
those, not come up with yet another concept.

Regards

Markus Wanner


[1]: for example, 

Re: [HACKERS] robots.txt on git.postgresql.org

2013-07-10 Thread Dave Page
On Wed, Jul 10, 2013 at 9:25 AM, Craig Ringer cr...@2ndquadrant.com wrote:
 On 07/09/2013 11:30 PM, Andres Freund wrote:
 On 2013-07-09 16:24:42 +0100, Greg Stark wrote:
 I note that git.postgresql.org's robot.txt refuses permission to crawl
 the git repository:

 http://git.postgresql.org/robots.txt

 User-agent: *
 Disallow: /


 I'm curious what motivates this. It's certainly useful to be able to
 search for commits.

 Gitweb is horribly slow. I don't think anybody with a bigger git repo
 using gitweb can afford to let all the crawlers go through it.

 Wouldn't whacking a reverse proxy in front be a pretty reasonable
 option? There's a disk space cost, but using Apache's mod_proxy or
 similar would do quite nicely.

It's already sitting behind Varnish, but the vast majority of pages on
that site would only ever be hit by crawlers anyway, so I doubt that'd
help a great deal as those pages would likely expire from the cache
before it really saved us anything.

--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] robots.txt on git.postgresql.org

2013-07-10 Thread Magnus Hagander
On Wed, Jul 10, 2013 at 10:25 AM, Craig Ringer cr...@2ndquadrant.com wrote:
 On 07/09/2013 11:30 PM, Andres Freund wrote:
 On 2013-07-09 16:24:42 +0100, Greg Stark wrote:
 I note that git.postgresql.org's robot.txt refuses permission to crawl
 the git repository:

 http://git.postgresql.org/robots.txt

 User-agent: *
 Disallow: /


 I'm curious what motivates this. It's certainly useful to be able to
 search for commits.

 Gitweb is horribly slow. I don't think anybody with a bigger git repo
 using gitweb can afford to let all the crawlers go through it.

 Wouldn't whacking a reverse proxy in front be a pretty reasonable
 option? There's a disk space cost, but using Apache's mod_proxy or
 similar would do quite nicely.

We already run this, that's what we did to make it survive at all. The
problem is there are so many thousands of different URLs you can get
to on that site, and google indexes them all by default.

It was before we had this that the site regularly died.


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




[HACKERS] Regex pattern with shorter back reference does NOT work as expected

2013-07-10 Thread Jeevan Chalke
Hi Tom,

Following example does not work as expected:

-- Should return TRUE but returning FALSE
SELECT 'Programmer' ~ '(\w).*?\1' as t;

-- Should return P, a and er i.e. 3 rows but returning just one row with
-- value Programmer
SELECT REGEXP_SPLIT_TO_TABLE('Programmer','(\w).*?\1');

Initially I thought that back-references are not supported and thus we are
getting those results. But while trying a few cases related to back-references
I saw that it gives an invalid back-reference number error, so we
do have support for back-references. So I tried a few more scenarios, and I
observed that with 'rogrammer' as the input string we get perfect
results, i.e. when the very first character is back-referenced. But it fails
when the first character is not part of the back-reference.

This happens only for the shorter, non-greedy match. The greedy pattern
'(\w).*\1' works well.

Clearly, the above example has two matching substrings, 'rogr' and 'mm'.
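As a cross-check of the expected behavior, the same non-greedy back-reference pattern in Python's re engine (a different regex implementation, used here only for comparison) does find both matches:

```python
import re

# The non-greedy pattern with a back-reference from the report.
pattern = re.compile(r'(\w).*?\1')

# search() finds the first match even though it does not start at
# the first character of the string.
m = pattern.search('Programmer')
assert m is not None and m.group(0) == 'rogr'

# Scanning the whole string yields both expected matches,
# which is why REGEXP_SPLIT_TO_TABLE should give 'P', 'a' and 'er'.
matches = [m.group(0) for m in pattern.finditer('Programmer')]
assert matches == ['rogr', 'mm']
```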

So I started debugging to find the root cause. It is too complex to
understand exactly what is happening here, but while debugging I reached this
chunk in regexec.c:cfindloop(), from which we are returning with
REG_NOMATCH:

    {
        /* no point in trying again */
        *coldp = cold;
        return REG_NOMATCH;
    }

The match attempt was starting at 'P' and ending in the above block. It was
strange that it did not continue with the next character, i.e. from 'r'. So I
replaced the above chunk with a break statement so that it continues from the
next character. This trick worked well.

Since I have very little familiarity with this area of the code, I am unsure
that it is indeed a correct fix, and thus thought of mailing -hackers.

I have attached a patch with the above change, along with a few tests in
regex.sql.

Your valuable insights please...

Thanks
-- 
Jeevan B Chalke


regexp_backref_shorter.patch
Description: Binary data



Re: [HACKERS] LogSwitch

2013-07-10 Thread Kevin Grittner
mohsen soodkhah mohammadi mohsensoodk...@gmail.com wrote:

 what is log switch and when does it occur?


What log are you talking about?

http://wiki.postgresql.org/wiki/Guide_to_reporting_problems

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Changing recovery.conf parameters into GUCs

2013-07-10 Thread Heikki Linnakangas

On 10.07.2013 02:54, Josh Berkus wrote:

On 07/08/2013 11:43 PM, Simon Riggs wrote:

1. MOVE SETTINGS
All settings move into the postgresql.conf.

Comment: AFAIK, all agree on this.


Good to go then.


+1.


2. RELOCATE RECOVERY PARAMETER FILE(s)
As of 9.2, relocating the postgresql.conf means there are no user
writable files needed in the data directory.

Comment: AFAIK, all except Heikki wanted this. He has very strong
objections to my commit that would have allowed relocating
recovery.conf outside of the data directory. By which he means both
the concepts of triggerring replication and of specifying parameters.
Changes in 9.3 specifically write files to the data directory that
expect this.


Yeah, I didn't understand why this was shot down either.

Heikki?


Does this refer to this:

http://www.postgresql.org/message-id/5152f778.2070...@vmware.com

? I listed some objections and suggestions there. Probably the biggest 
issue back then, however, was that it was committed so late in the 
release cycle. In any case, relocating the config/trigger file has 
nothing to do with turning recovery.conf parameters into GUCs, so let's 
not confuse this patch with that goal.



3. SEPARATE TRIGGER FILE
Creating a standby.enabled file in the directory that houses the
postgresql.conf (same logic as include uses to locate things) puts the
system into recovery mode.  That feature needs to save some state, and
making those decisions based on existence of a file is already a thing
we do.  Rather than emulating the rename to recovery.done that happens
now, the server can just delete it, to keep from incorrectly returning
to a state it's exited.  A UI along the lines of the promote one,
allowing pg_ctl standby, should fall out of here.  I think this is
enough that people who just want to use replication features need never
hear about this file at all, at least during the important to simplify
first run through.

Comment: AFAIK, all except Heikki are OK with this.


Sorry, I don't quite understand what this is about. Can you point me to 
the previous discussion on this?


pg_ctl standby sounds handy. It's not very useful without something 
like pg_rewind or some other mechanism to do a clean failover, though. 
We also have to make sure that there are enough safeguards in place that 
you can't shoot yourself in the foot: if you turn a master server into a 
standby with that, we must make sure that you can't corrupt the database 
if you point that standby at another standby.


But I don't see how that's related to changing recovery.conf parameters 
into gucs.



One bit of complexity I'd like to see go away is that we have two
trigger files: one to put a server into replication, and one to promote
it.  The promotion trigger file is a legacy of warm standby, I believe.
  Maybe, now that we have pg_ctl promote available, we can eliminate the
promotion trigger?


No, see http://www.postgresql.org/message-id/5112a54b.8090...@vmware.com.


Also, previously, deleting the recovery.conf file did not cause the
server to be promoted AFAIK.  Is that something we should change if
we're going to keep a trigger file to start replication?


Deleting the recovery.conf file (and restarting) takes the server out of 
standby mode, but in an unsafe way. Yeah, it would be nice to do something 
about that.



4. DISALLOW PREVIOUS API
If startup finds a recovery.conf file where it used to live at,
abort--someone is expecting the old behavior.  Hint to RTFM or include a
short migration guide right on the spot.  That can have a nice section
about how you might use the various postgresql.conf include* features if
they want to continue managing those files separately.  Example: rename
it as replication.conf and use include_if_exists if you want to be able
to rename it to recovery.done like before.  Or drop it into a conf.d
directory where the rename will make it then skipped.

Comment: I am against this. Tool vendors are not the problem here.
There is no good reason to just break everybody's scripts with no
warning of future deprecation and an alternate API, especially since
we now allow multiple parameter files.


Well, the issue is not so much the presence of a recovery.conf file full
of config variables ... which as you point out is now effectively
supported ... but the use of that as a trigger file.   So I think the
two points you make here are flipped.

Personally, I don't care if we support the old recovery.conf-trigger
behavior as long as I'm not forced to use it.  The main objection to
supporting both was code complexity, I believe.


I'm also undecided on this one. If we can figure out a good way forward 
that keeps backwards-compatibility, good. But it's not worth very much 
to me - if we can get a better interface overall by dropping 
backwards-compatibility, then let's drop it.


- Heikki



[HACKERS] Re: [BUGS] BUG #7873: pg_restore --clean tries to drop tables that don't exist

2013-07-10 Thread Josh Kupershmidt
On Fri, Jul 5, 2013 at 12:16 PM, Pavel Stehule pavel.steh...@gmail.com wrote:

 I am sending a patch that removes the strict requirements for DROP IF
 EXISTS statements. This behavior is similar to our current ALTER IF
 EXISTS behavior.

+1 for this idea. But this patch should be treated as a separate issue
from the use of IF EXISTS in pg_dump/pg_restore, right? If so, I
suggest starting a new thread about this patch to make reviewing
easier.

Josh




Re: [HACKERS] Millisecond-precision connect_timeout for libpq

2013-07-10 Thread Merlin Moncure
On Wed, Jul 10, 2013 at 2:54 AM, ivan babrou ibob...@gmail.com wrote:
 On 9 July 2013 19:17, Dmitriy Igrishin dmit...@gmail.com wrote:



 2013/7/9 Merlin Moncure mmonc...@gmail.com

 On Fri, Jul 5, 2013 at 12:28 PM, ivan babrou ibob...@gmail.com wrote:
  Hi, guys! I made a quick patch to support floating number in
  connect_timeout param for libpq. This will treat floating number as
  seconds so this is backwards-compatible. I don't usually write in C,
  so there may be mistakes. Could you review it and give me some
  feedback?

 First thing that jumps into my head: why not use asynchronous
 connection (PQconnectStart, etc) and code the timeout on top of that?

 +1.


 merlin






 --
 // Dmitriy.


 Doesn't look like a straightforward solution to me. In my case existing
 drivers will benefit from my patch; in the async case they would have to
 be rewritten. We don't use libpq directly, we use the native pgsql module
 from php.

 Even with that, the kernel can wait for milliseconds — why should we limit
 precision 1000x and reinvent milliseconds in userspace?

Fair point.  Although I still agree with Tom in the sense that if I
were in your shoes I would be reconsidering certain parts of the
connection stack since you have such demanding requirements; even with
this solved I think other issues are lurking right around the corner.
That said, I did scan your patch briefly and noted it only changed
internal API functions and seems pretty straightforward. I withdraw my
objection.
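For reference, the asynchronous route suggested earlier in the thread (PQconnectStart plus polling) has the same shape as a generic non-blocking connect with a select() deadline. The sketch below illustrates that technique with plain sockets in Python; it is an illustration only, not libpq code, and connect_with_timeout is a made-up helper name:

```python
import errno
import select
import socket

def connect_with_timeout(host, port, timeout_s):
    """Non-blocking connect with a caller-chosen (possibly sub-second)
    timeout -- the same shape as a PQconnectStart/PQconnectPoll loop."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setblocking(False)
    err = s.connect_ex((host, port))  # returns immediately
    if err not in (0, errno.EINPROGRESS, errno.EWOULDBLOCK):
        s.close()
        raise OSError(err, "connect failed")
    # Wait until the socket is writable (connected) or the deadline passes.
    _, writable, _ = select.select([], [s], [], timeout_s)
    if not writable:
        s.close()
        raise TimeoutError("no connection within %gs" % timeout_s)
    # A writable socket may still carry a deferred error; check it.
    err = s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
    if err != 0:
        s.close()
        raise OSError(err, "connect failed")
    return s
```

With this shape the caller gets whatever timeout granularity the kernel supports, without the library having to parse fractional timeouts at all.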

merlin




[HACKERS] Re: [BUGS] BUG #7873: pg_restore --clean tries to drop tables that don't exist

2013-07-10 Thread Pavel Stehule
Yes, I wrote a separate patch for next commitfest.
Dne 10.7.2013 16:54 Josh Kupershmidt schmi...@gmail.com napsal(a):

 On Fri, Jul 5, 2013 at 12:16 PM, Pavel Stehule pavel.steh...@gmail.com
 wrote:

  I am sending a patch that removes the strict requirements for DROP IF
  EXISTS statements. This behavior is similar to our current ALTER IF
  EXISTS behavior.

 +1 for this idea. But this patch should be treated as a separate issue
 from the use of IF EXISTS in pg_dump/pg_restore, right? If so, I
 suggest starting a new thread about this patch to make reviewing
 easier.

 Josh



[HACKERS] Re: [BUGS] BUG #7873: pg_restore --clean tries to drop tables that don't exist

2013-07-10 Thread Josh Kupershmidt
On Tue, Jul 2, 2013 at 5:39 AM, Pavel Stehule pavel.steh...@gmail.com wrote:

 2013/3/8 Josh Kupershmidt schmi...@gmail.com:
 On Fri, Mar 8, 2013 at 2:23 AM, Pavel Stehule pavel.steh...@gmail.com 
 wrote:
 2013/3/8 Josh Kupershmidt schmi...@gmail.com:

 Cool. I think it would also be useful to check that --clean may only
 be used with --format=p to avoid any confusion there. (This issue
 could be addressed in a separate patch if you'd rather not lard this
 patch.)

 no

 some people - like we in our company would to use this feature for
 quiet and strict mode for plain text format too.

 enabling this feature has zero overhead so there are no reason block
 it anywhere.

 I'm not sure I understand what you're getting at, but maybe my
 proposal wasn't so clear. Right now, you can specify --clean along
 with -Fc to pg_dump, and pg_dump will not complain even though this
 combination is nonsense. I am proposing that pg_dump error out in this
 case, i.e.

   $ pg_dump -Fc --file=test.dump --clean -d test
   pg_dump: option --clean only valid with plain format dump

 Although this lack of an error a (IMO) misfeature of existing pg_dump,
 so if you'd rather leave this issue aside for your patch, that is
 fine.


 I tested the last patch and I think this patch makes sense for the
 custom format too

 [postgres@localhost ~]$ /usr/local/pgsql/bin/pg_dump --clean -t a
 -Fc postgres > dump
 [postgres@localhost ~]$ psql -c "drop table a"
 DROP TABLE
 [postgres@localhost ~]$ pg_restore --clean -d postgres dump
 pg_restore: [archiver (db)] Error while PROCESSING TOC:
 pg_restore: [archiver (db)] Error from TOC entry 171; 1259 16462 TABLE
 a postgres
 pg_restore: [archiver (db)] could not execute query: ERROR:  table "a"
 does not exist
 Command was: DROP TABLE public.a;

 WARNING: errors ignored on restore: 1

 [postgres@localhost ~]$ /usr/local/pgsql/bin/pg_dump --if-exists
 --clean -t a -Fc postgres > dump
 [postgres@localhost ~]$ psql -c "drop table a"
 DROP TABLE
 [postgres@localhost ~]$ pg_restore --clean -d postgres dump

 So restricting this to the plain format would be too strict

I'm not sure I understand what you're proposing here, but for the
record I really don't think it's a good idea to encourage the use of
  pg_dump -Fc --clean ...

Right now, we don't error out on this combination of command-line
options, but the --clean is effectively a no-op; you have to tell
pg_restore to use --clean. IMO, this is basically how it should be:
pg_dump, at least in custom format, is preserving the contents of your
database with the understanding that you may wish to use pg_restore to
restore from this dump in a variety of possible ways. Putting
restrictions like --clean into the custom-format dump file just makes
that dump file less useful overall.

Although it's still not clear to me why the examples you showed above
used *both* `pg_dump -Fc --clean ...` and `pg_restore --clean ...`
together. Surely the user should be specifying this preference in only
one place?

Josh




[HACKERS] tab-completion for \lo_import

2013-07-10 Thread Josh Kupershmidt
Hi all,
Is there any reason not to tab-complete the local filename used by the
\lo_import command? Small patch to do so attached.
Josh


tab_complete_lo_import.diff
Description: Binary data



[HACKERS] Listen/notify across clusters

2013-07-10 Thread Greg Jaskiewicz
Hi masters of PostgreSQL,

I recently got asked about the possibility of listening to notifications on a
warm standby.
So the question: how hard would that be to implement?
Is it even possible without major changes to the architecture?





Re: [HACKERS] Millisecond-precision connect_timeout for libpq

2013-07-10 Thread ivan babrou
On 9 July 2013 19:17, Dmitriy Igrishin dmit...@gmail.com wrote:



 2013/7/9 Merlin Moncure mmonc...@gmail.com

 On Fri, Jul 5, 2013 at 12:28 PM, ivan babrou ibob...@gmail.com wrote:
  Hi, guys! I made a quick patch to support floating number in
  connect_timeout param for libpq. This will treat floating number as
  seconds so this is backwards-compatible. I don't usually write in C,
  so there may be mistakes. Could you review it and give me some
  feedback?

 First thing that jumps into my head: why not use asynchronous
 connection (PQconnectStart, etc) and code the timeout on top of that?

 +1.


 merlin






 --
 // Dmitriy.


Doesn't look like a straightforward solution to me. In my case existing
drivers will benefit from my patch; in the async case they would have to
be rewritten. We don't use libpq directly, we use the native pgsql module
from php.

Even with that, the kernel can wait for milliseconds — why should we limit
precision 1000x and reinvent milliseconds in userspace?

--
Regards, Ian Babrou
http://bobrik.name http://twitter.com/ibobrik skype:i.babrou




Re: [HACKERS] Millisecond-precision connect_timeout for libpq

2013-07-10 Thread ivan babrou
On 9 July 2013 18:43, Andres Freund and...@2ndquadrant.com wrote:
 On 2013-07-05 21:28:59 +0400, ivan babrou wrote:
 Hi, guys! I made a quick patch to support floating number in
 connect_timeout param for libpq. This will treat floating number as
 seconds so this is backwards-compatible. I don't usually write in C,
 so there may be mistakes. Could you review it and give me some
 feedback?

 --
 Regards, Ian Babrou
 http://bobrik.name http://twitter.com/ibobrik skype:i.babrou

 diff --git a/src/interfaces/libpq/fe-connect.c 
 b/src/interfaces/libpq/fe-connect.c
 index 18fcb0c..58c1a35 100644
 --- a/src/interfaces/libpq/fe-connect.c
 +++ b/src/interfaces/libpq/fe-connect.c
 @@ -1452,7 +1452,7 @@ static int
  connectDBComplete(PGconn *conn)
  {
   PostgresPollingStatusType flag = PGRES_POLLING_WRITING;
 - time_t  finish_time = ((time_t) -1);
 + struct timeval  finish_time;

   if (conn == NULL || conn->status == CONNECTION_BAD)
   return 0;
 @@ -1462,17 +1462,14 @@ connectDBComplete(PGconn *conn)
*/
   if (conn->connect_timeout != NULL)
   {
 - int timeout = atoi(conn->connect_timeout);
 + int timeout_usec = (int) 
 (atof(conn->connect_timeout) * 1000000);

 I'd rather not use a plain int for storing usecs. An overflow is rather
 unlikely, but still. Also, I'd rather use something like USECS_PER_SEC
  instead of a plain 1000000 in multiple places.


 - if (timeout > 0)
 + if (timeout_usec > 0)
   {
 - /*
 -  * Rounding could cause connection to fail; need at 
 least 2 secs
 -  */
 - if (timeout < 2)
 - timeout = 2;
 - /* calculate the finish time based on start + timeout 
 */
 - finish_time = time(NULL) + timeout;
 + gettimeofday(&finish_time, NULL);
 + finish_time.tv_usec += (int) timeout_usec;

 Accordingly adjust this.

 Looks like a sensible thing to me.

 *Independent* from this patch, you might want look into server-side
 connection pooling using transaction mode. If that's applicable for
 your application it might reduce latency noticeably.

 Greetings,

 Andres Freund

 --
  Andres Freund http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services

I tried to make it more safe. Still not sure about constants, I
haven't found any good examples in libpq.
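The adjustment Andres is asking for above is the carry: after adding microseconds to tv_usec, the value can exceed one second and must be normalized into tv_sec. The arithmetic can be sketched like this (illustrative Python, not libpq code; timeval_add_usec is a made-up helper name, and USECS_PER_SEC echoes the constant Andres suggested):

```python
USECS_PER_SEC = 1_000_000

def timeval_add_usec(sec, usec, delta_usec):
    """Add delta_usec to a (tv_sec, tv_usec) pair, normalizing so that
    0 <= tv_usec < USECS_PER_SEC, as the C code must do by hand."""
    total = usec + delta_usec
    return sec + total // USECS_PER_SEC, total % USECS_PER_SEC

# 1.5s after t = (100, 900_000): the extra 400_000 usec carries over.
assert timeval_add_usec(100, 900_000, 1_500_000) == (102, 400_000)
```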

--
Regards, Ian Babrou
http://bobrik.name http://twitter.com/ibobrik skype:i.babrou


connect_timeout_in_ms.patch
Description: Binary data



Re: [HACKERS] Listen/notify across clusters

2013-07-10 Thread Christopher Browne
Shouldn't be possible.

The act of requesting to LISTEN requires doing a sort of update to the
database.  In older versions, it put tuple(s) into pg_catalog.pg_listener,
and that's Right Well Disallowed on a WAL-based replica.

I would think that if you're keen on building an event detection
substrate, particularly one that's supposed to cross clusters, then you
should consider using something actually attuned to that, such as a message
queueing system, whether an AMQP implementation such as RabbitMQ, or a
message bus like Spread.  If you do that, then you can do this in much
broader cross-cluster ways for unrelated Postgres clusters.


Re: [HACKERS] [PERFORM] In progress INSERT wrecks plans on table

2013-07-10 Thread Josh Berkus
On 07/08/2013 10:11 AM, Josh Berkus wrote:
 On 06/23/2013 09:43 PM, Abhijit Menon-Sen wrote:
 (Cc: to pgsql-performance dropped, pgsql-hackers added.)

 At 2013-05-06 09:14:01 +0100, si...@2ndquadrant.com wrote:

 New version of patch attached which fixes a few bugs.

 I read the patch, but only skimmed the earlier discussion about it. In
 isolation, I can say that the patch applies cleanly and looks sensible
 for what it does (i.e., cache pgprocno to speed up repeated calls to
 TransactionIdIsInProgress(somexid)).

 In that sense, it's ready for committer, but I don't know if there's a
 better/more complete/etc. way to address the original problem.
 
 Has this patch had performance testing?  Because of the list crossover I
 don't have any information on that.

Due to the apparent lack of performance testing, I'm setting this back
to needs review.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




[HACKERS] hardware donation

2013-07-10 Thread Benedikt Grundmann
Jane Street has a spare server we would like to donate to the postgres
community.  We originally planned to use it for one of our database clusters
and it matches exactly what we use in production at the moment.

Rough specs:
CPU: 8x Intel(R) Xeon(R) CPU   X5570  @ 2.93GHz
MEM: 48GB

The server will probably be most interesting for the disks in it.  That is
where we spend the largest amount of time optimizing (for sequential scan
speed in particular):

22x600GB disks in a Raid6+0 (Raid0 of 2x 10disk raid 6 arrays) + 2 spare
disks. Overall size 8.7 TB in that configuration.

Is this something the community would find useful?  If so, who is the right
person to talk to about shipping/delivery?

Cheers,

Bene


Re: [HACKERS] hardware donation

2013-07-10 Thread Josh Berkus
On 07/10/2013 09:53 AM, Benedikt Grundmann wrote:
 Jane Street has a spare server we would like to donate to the postgres
 community.  We originally planned to use it for one of our database clusters
 and it matches exactly what we use in production at the moment.
 
 Rough specs:
 CPU: 8x Intel(R) Xeon(R) CPU   X5570  @ 2.93GHz
 MEM: 48GB
 
 The server will probably be most interesting for the disks in it.  That is
 where we spend the largest amount of time optimizing (for sequential scan
 speed in particular):
 
 22x600GB disks in a Raid6+0 (Raid0 of 2x 10disk raid 6 arrays) + 2 spare
 disks. Overall size 8.7 TB in that configuration.
 
 Is this something the community would find useful?  If so, who is the right
 person to talk to about shipping/delivery?

Mark, we can use this in the performance farm, no?

I know I could use it for testing ...

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] hardware donation

2013-07-10 Thread Mark Wong
On Wed, Jul 10, 2013 at 10:00 AM, Josh Berkus j...@agliodbs.com wrote:
 On 07/10/2013 09:53 AM, Benedikt Grundmann wrote:
 Jane Street has a spare server we would like to donate to the postgres
 community.  We originally planned to use it for one of our database clusters
 and it matches exactly what we use in production at the moment.

 Rough specs:
 CPU: 8x Intel(R) Xeon(R) CPU   X5570  @ 2.93GHz
 MEM: 48GB

 The server will probably be most interesting for the disks in it.  That is
 where we spend the largest amount of time optimizing (for sequential scan
 speed in particular):

 22x600GB disks in a Raid6+0 (Raid0 of 2x 10disk raid 6 arrays) + 2 spare
 disks. Overall size 8.7 TB in that configuration.

 Is this something the community would find useful?  If so, who is the right
 person to talk to about shipping/delivery?

 Mark, we can use this in the performance farm, no?

 I know I could use it for testing ...

Yes, we certainly can.  I can be the contact for shipping/delivery.  Thanks!

Regards,
Mark




Re: [HACKERS] Millisecond-precision connect_timeout for libpq

2013-07-10 Thread Josh Berkus
On 07/09/2013 01:18 PM, Merlin Moncure wrote:
 On Fri, Jul 5, 2013 at 3:01 PM, Josh Berkus j...@agliodbs.com wrote:
 It's fairly common with certain kinds of apps, including Rails and PHP.
  This is one of the reasons why we've discussed having a kind of
 stripped-down version of pgbouncer built into Postgres as a connection
 manager.  If it weren't valuable to be able to relocate pgbouncer to
 other hosts, I'd still say that was a good idea.
 
 for the record, I think this is a great idea.

Well, we discussed this a bit last year.  I'd personally love to have an
event-based connection manager tied to local Postgres, which could use
all of PostgreSQL's various authentication methods.  For the simple
case, where a user has a rails/mod_python/PHP app which just spawns lots
of connections, this could make the whole "too many connections"
headache go away as a concern for first-time users.

We'd still need pgbouncer for the serious scalability cases (so that we
could put it on other machines), but I am a bit tired of explaining how
to manage max_connections to people.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Listen/notify across clusters

2013-07-10 Thread Josh Berkus
On 07/10/2013 09:27 AM, Christopher Browne wrote:
 The act of requesting to LISTEN requires doing a sort of update to the
 database.  In elder versions, it put tuple(s) into pg_catalog.pg_listener,
 and that's Right Well Disallowed on a WAL-based replica.
 
 I would think that if you're keen on building an event detection
 substrate, particularly one that's supposed to cross clusters, then you
 should consider using something actually attuned to that, such as a message
 queueing system, whether an AMQP implementation such as RabbitMQ, or a
 message bus like Spread.  If you do that, then you can do this in much
 broader cross-cluster ways for unrelated Postgres clusters.

Huh?  LISTEN/NOTIFY across replication has been a desired feature since
we introduced streaming replication.  We want it, there's just no
obvious way to do it.

Your email kinda implies that it's not desirable.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] changeset generation v5-01 - Patches git tree

2013-07-10 Thread Kevin Grittner
Sorry for the delay in reviewing this.  I must make sure never to
take another vacation during a commitfest -- the backlog upon
return is a killer

Kevin Grittner kgri...@ymail.com wrote:
 Andres Freund and...@2ndquadrant.com wrote:

 Otherwise, could you try applying my git tree so we are sure we
 test the same thing?

 $ git remote add af 
 git://git.postgresql.org/git/users/andresfreund/postgres.git
 $ git fetch af
 $ git checkout -b xlog-decoding af/xlog-decoding-rebasing-cf4
 $ ./configure ...
 $ make

 Tried that, too, and problem persists.  The log shows the last
 commit on your branch as 022c2da1873de2fbc93ae524819932719ca41bdb.

The good news: the regression tests now work for me, and I'm back
on testing this at a high level.

The bad news:

(1)  The code checked out from that branch does not merge with
master.  Not surprisingly, given the recent commits, xlog.c is a
problem.  Is there another branch I should now be using?  If not,
please let me know when I can test with something that applies on
top of the master branch.

(2)  An initial performance test didn't look very good.  I will be
running a more controlled test to confirm but the logical
replication of a benchmark with a lot of UPDATEs of compressed text
values seemed to suffer with the logical replication turned on.
Any suggestions or comments on that front, before I run the more
controlled benchmarks?

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] SSL renegotiation

2013-07-10 Thread Alvaro Herrera
Hi,

I'm having a look at the SSL support code, because one of our customers
reported it behaves unstably when the network is unreliable.  I have yet
to reproduce the exact problem they're having, but while reading the
code I notice this in be-secure.c:secure_write() :

if (ssl_renegotiation_limit && port->count > ssl_renegotiation_limit * 
1024L)
{
SSL_set_session_id_context(port->ssl, (void *) &SSL_context,
   sizeof(SSL_context));
if (SSL_renegotiate(port->ssl) <= 0)
ereport(COMMERROR,
(errcode(ERRCODE_PROTOCOL_VIOLATION),
 errmsg("SSL renegotiation failure")));
if (SSL_do_handshake(port->ssl) <= 0)
ereport(COMMERROR,
(errcode(ERRCODE_PROTOCOL_VIOLATION),
 errmsg("SSL renegotiation failure")));
if (port->ssl->state != SSL_ST_OK)
ereport(COMMERROR,
(errcode(ERRCODE_PROTOCOL_VIOLATION),
 errmsg("SSL failed to send renegotiation request")));
port->ssl->state |= SSL_ST_ACCEPT;
SSL_do_handshake(port->ssl);
if (port->ssl->state != SSL_ST_OK)
ereport(COMMERROR,
(errcode(ERRCODE_PROTOCOL_VIOLATION),
 errmsg("SSL renegotiation failure")));
port->count = 0;
}

I call your attention to the fact that we're calling SSL_do_handshake
twice here; and what's more, we're messing with the internal
port->ssl->state variable by setting the SSL_ST_ACCEPT bit.  According
to the commit message that introduces this[1], this is text book
correct, but I'm unable to find the text book that specifies that this
is correct.  Is it?  I have gone through the OpenSSL documentation and
can find not a single bit on this; and I have also gone through the
OpenSSL mailing list archives and as far as I can tell they say you have
to call SSL_renegotiate() and then SSL_do_handshake() once, and that's
it.  (You can call SSL_renegotiate_pending() afterwards to verify that
the renegotiation completed successfully, which is something we don't
do.)

I have found in our archives several reports of people that get SSL
renegotiation failed error messages in the log, and no explanation has
ever been found.  I instrumented that block, and I have observed that
after the second handshake call, ssl->state is 0x2111, 0x21c0, 0x21a0
and other values; never SSL_ST_OK.  (0x2000 is SSL_ST_ACCEPT, which is
the bit we added; SSL_ST_OK is 0x03.)  If I remove the second
SSL_do_handshake() call and the changes to port->ssl->state, everything
appears to work perfectly well and there's no extra log spam (and
ssl->state is SSL_ST_OK by the time things are finished).  So apparently
renegotiation is not actually failing, but we're just adding a confusing
error message for no apparent reason.

Thoughts?

I have still to find where the actual problems are happening in
unreliable networks ...


[1] commit 17386ac45345fe38a10caec9d6e63afa3ce31bb9
Author: Bruce Momjian br...@momjian.us
Date:   Wed Jun 11 15:05:50 2003 +

patch by Sean Chittenden (in CC), posted as [2]

[2] 
http://www.postgresql.org/message-id/20030419190821.gq79...@perrin.int.nxad.com

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] changeset generation v5-01 - Patches git tree

2013-07-10 Thread Andres Freund
On 2013-07-10 12:21:23 -0700, Kevin Grittner wrote:
 Sorry for the delay in reviewing this.  I must make sure never to
 take another vacation during a commitfest -- the backlog upon
 return is a killer

Heh. Yes. Been through it before...

 Kevin Grittner kgri...@ymail.com wrote:
  Andres Freund and...@2ndquadrant.com wrote:
 
  Otherwise, could you try applying my git tree so we are sure we
  test the same thing?
 
  $ git remote add af 
  git://git.postgresql.org/git/users/andresfreund/postgres.git
  $ git fetch af
  $ git checkout -b xlog-decoding af/xlog-decoding-rebasing-cf4
  $ ./configure ...
  $ make
 
  Tried that, too, and problem persists.  The log shows the last
  commit on your branch as 022c2da1873de2fbc93ae524819932719ca41bdb.
 
 The good news: the regression tests now work for me, and I'm back
 on testing this at a high level.

 The bad news:
 
 (1)  The code checked out from that branch does not merge with
 master.  Not surprisingly, given the recent commits, xlog.c is a
 problem.  Is there another branch I should now be using?  If not,
 please let me know when I can test with something that applies on
 top of the master branch.

That one is actually relatively easy to resolve. The mvcc catalog scan
patch is slightly harder. I've pushed an updated patch that fixes the
latter in a slightly not-so-nice way. I am not sure yet what the final
fix for that is going to look like; it depends on whether we will get rid
of SnapshotNow altogether...

I'll push my local tree with that fixed in a sec.

 (2)  An initial performance test didn't look very good.  I will be
 running a more controlled test to confirm but the logical
 replication of a benchmark with a lot of UPDATEs of compressed text
 values seemed to suffer with the logical replication turned on.
 Any suggestions or comments on that front, before I run the more
 controlled benchmarks?

Hm. There theoretically shouldn't actually be anything added in that
path. Could you roughly sketch what that test is doing? Do you actually
stream those changes out or did you just turn on wal_level=logical?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] Re: [BUGS] BUG #7873: pg_restore --clean tries to drop tables that don't exist

2013-07-10 Thread Josh Kupershmidt
On Tue, Jul 2, 2013 at 7:47 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
 Hello

 remastered patch

 still there is an issue with dependencies

Several of the issues from my last review [1] seem to still be present
in this patch, such as review notes #1 and #4.

And as discussed previously, I think that the --clean option belongs
solely with pg_restore for custom-format dumps. The way the patch
handles this is rather confusing, forcing the user to do:

  $  pg_dump -Fc --clean --if-exists --file=backup.dump ...
and then:
  $  pg_restore --clean ... backup.dump (without --if-exists)

to get the desired behavior.

Josh

[1] 
http://www.postgresql.org/message-id/CAK3UJRG__4=+f46xamiqa80f_-bqhjcpfwyp8g8fpspqj-j...@mail.gmail.com




Re: [HACKERS] changeset generation v5-01 - Patches git tree

2013-07-10 Thread Kevin Grittner
Andres Freund and...@2ndquadrant.com wrote:
 Kevin Grittner kgri...@ymail.com wrote:

 (2)  An initial performance test didn't look very good.  I will be
 running a more controlled test to confirm but the logical
 replication of a benchmark with a lot of UPDATEs of compressed text
 values seemed to suffer with the logical replication turned on.
 Any suggestions or comments on that front, before I run the more
 controlled benchmarks?

 Hm. There theoretically shouldn't actually be anything added in that
 path. Could you roughly sketch what that test is doing? Do you actually
 stream those changes out or did you just turn on wal_level=logical?

It was an update of every row in a table of 72 rows, with
each row updated by primary key using a separate UPDATE statement,
modifying a large text column with a lot of repeating characters
(so compressed well).  I got a timing on a master build and I got a
timing with the patch in the environment used by
test_logical_decoding.  It took several times as long in the latter
run, but it was very much a preliminary test in preparation for
getting real numbers.  (I'm sure you know how much work it is to
set up for a good run of tests.)  I'm not sure that (for example)
the synchronous_commit setting was the same, which could matter a
lot.  I wouldn't put a lot of stock in it until I can re-create it
under a much more controlled test.

The one thing about the whole episode that gave me pause was that
the compression and decompression routines were very high on the
`perf top` output in the patched run and way down the list on the
run based on master.  I don't have a ready explanation for that,
unless your branch was missing a recent commit for speeding
compression which was present on master.  It might be worth
checking that you're not detoasting more often than you need.

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] SSL renegotiation

2013-07-10 Thread Alvaro Herrera
I think this block is better written as:

if (ssl_renegotiation_limit && port->count > ssl_renegotiation_limit * 
1024L)
{
SSL_set_session_id_context(port->ssl, (void *) &SSL_context,
   sizeof(SSL_context));
if (SSL_renegotiate(port->ssl) <= 0)
ereport(COMMERROR,
(errcode(ERRCODE_PROTOCOL_VIOLATION),
 errmsg("SSL renegotiation failure in renegotiate")));
else
{
int handshake;

do {
handshake = SSL_do_handshake(port->ssl);
if (handshake <= 0)
ereport(COMMERROR,
(errcode(ERRCODE_PROTOCOL_VIOLATION),
 errmsg("SSL renegotiation failure in "
"handshake, retrying")));
} while (handshake <= 0);

if (port->ssl->state != SSL_ST_OK)
ereport(COMMERROR,
(errcode(ERRCODE_PROTOCOL_VIOLATION),
 errmsg("SSL failed to send renegotiation "
"request")));
else
port->count = 0;
}
}

In other words, retry the SSL_do_handshake() until it reports OK, but
not more than that; this seems to give better results in my (admittedly
crude) experiments.  I am unsure about checking port->ssl->state after
the handshake; it's probably pointless, really.

In any case, the old code was calling SSL_do_handshake() even if
SSL_renegotiate() failed; and it was resetting the port->count even if
the handshake failed.  Both of these smell like bugs to me.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] changeset generation v5-01 - Patches git tree

2013-07-10 Thread Andres Freund
On 2013-07-10 15:14:58 -0700, Kevin Grittner wrote:
 Andres Freund and...@2ndquadrant.com wrote:
  Kevin Grittner kgri...@ymail.com wrote:
 
  (2)  An initial performance test didn't look very good.  I will be
  running a more controlled test to confirm but the logical
  replication of a benchmark with a lot of UPDATEs of compressed text
  values seemed to suffer with the logical replication turned on.
  Any suggestions or comments on that front, before I run the more
  controlled benchmarks?
 
  Hm. There theoretically shouldn't actually be anything added in that
  path. Could you roughly sketch what that test is doing? Do you actually
  stream those changes out or did you just turn on wal_level=logical?
 
 It was an update of every row in a table of 72 rows, with
 each row updated by primary key using a separate UPDATE statement,
 modifying a large text column with a lot of repeating characters
 (so compressed well).  I got a timing on a master build and I got a
 timing with the patch in the environment used by
 test_logical_decoding.  It took several times as long in the latter
 run, but it was very much a preliminary test in preparation for
 getting real numbers.  (I'm sure you know how much work it is to
 set up for a good run of tests.)  I'm not sure that (for example)
 the synchronous_commit setting was the same, which could matter a
 lot.  I wouldn't put a lot of stock in it until I can re-create it
 under a much more controlled test.

So you didn't explicitly start anything to consume those changes?
I.e. using pg_receivellog or SELECT * FROM
start/init_logical_replication(...)?

Any chance there still was an old replication slot around?
SELECT * FROM pg_stat_logical_decoding;
should show them. But theoretically the make check in
test_logical_decoding should finish without one active...

 The one thing about the whole episode that gave me pause was that
 the compression and decompression routines were very high on the
 `perf top` output in the patched run and way down the list on the
 run based on master.

That's interesting. Unless there's something consuming the changestream
and the output plugin does something that actually requests
decompression of the Datums there shouldn't be *any* added/removed calls
to toast (de-)compression...
While consuming the changes there could be ReorderBufferToast* calls in
the profile. I haven't yet seen them in profiles, but that's not saying
all that much.

So:
 I don't have a ready explanation for that, unless your branch was
 missing a recent commit for speeding compression which was present on
 master.

It didn't have 031cc55bbea6b3a6b67c700498a78fb1d4399476 - but I can't
really imagine that making *such* a big difference. But maybe you hit
some sweet spot with the data?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] psql tab completion for updatable foreign tables

2013-07-10 Thread Bernd Helmle



--On 8. Juli 2013 16:04:31 + Dean Rasheed dean.a.rash...@gmail.com 
wrote:



* pg_relation_is_updatable is only available in 9.3, whereas psql may
connect to older servers, so it needs to guard against that.



Oh of course, i forgot about this. Thanks for pointing out.


* If we're doing this, I think we should do it for auto-updatable
views at the same time.

There was concern that pg_relation_is_updatable() would end up opening
every relation in the database, hammering performance. I now realise
that these tab-complete queries have a limit (1000 by default) so I
don't think this is such an issue. I tested it by creating 10,000
randomly named auto-updatable views on top of a table, and didn't see
any performance problems.

Here's an updated patch based on the above points.


Okay, are you adding this to the september commitfest?

--
Thanks

Bernd




Re: [HACKERS] [SPAM] SSL renegotiation

2013-07-10 Thread Sean Chittenden
Wow, that was a long time ago... I remember a few things about this

1) I was running into an issue where every 64KB of transfer (or 
something inanely low like that), SSL was being renegotiated. This was 
causing performance problems over the wire. I think we settled on once 
an hour renegotiating the key, but I'd have to look again to check. It 
looks like the default is now 512MB, which is a more sane limit, but 
still pretty easy to exhaust - I have several hosts that would run past 
that default every 45sec or so, probably faster at peak times.

2) The system that I was running this on was Solaris 2.5, I believe, 
and /dev/random was having a problem staying populated given the 
frequent renegotiations, which prompted me to look in to this. In your 
testing and attempts to repro, try draining your prng pool, or patch 
things on Linux to read from /dev/random instead of /dev/urandom... 
something like that may be at fault and why limited testing won't 
expose this, but under load you might see it. *shrug* A WAG, but 
possibly relevant.

Rough notes inline below (almost all of this should be wrapped in an 
iirc block):

 if (ssl_renegotiation_limit && port->count > ssl_renegotiation_limit 
 * 1024L)

This doesn't seem right. 512MB * 1024? Maybe that's why I haven't 
actually had to play with this limit in a long time. Every 512GB is 
much more reasonable in that it would take 12hrs to renegotiate on a 
busy host. The * 1024L seems suspicious to me and should probably be 
removed in favor of the constant passed in from the config.

 {
 SSL_set_session_id_context(port->ssl, (void *) &SSL_context,
sizeof(SSL_context));
 if (SSL_renegotiate(port->ssl) <= 0)
 ereport(COMMERROR,
 (errcode(ERRCODE_PROTOCOL_VIOLATION),
  errmsg("SSL renegotiation failure")));

This sets a bit asking the peer to renegotiation.

 if (SSL_do_handshake(port->ssl) <= 0)
 ereport(COMMERROR,
 (errcode(ERRCODE_PROTOCOL_VIOLATION),
  errmsg("SSL renegotiation failure")));
 if (port->ssl->state != SSL_ST_OK)
 ereport(COMMERROR,
 (errcode(ERRCODE_PROTOCOL_VIOLATION),
  errmsg("SSL failed to send renegotiation request")));

Push the request to renegotiate out to the peer and check the status.

 port->ssl->state |= SSL_ST_ACCEPT;
 SSL_do_handshake(port->ssl);

In OpenSSL 0.9.6 this was the correct way to renegotiate a connection 
and I would need to confirm if this is still required for OpenSSL >= 
0.9.7.

 if (port->ssl->state != SSL_ST_OK)
 ereport(COMMERROR,
 (errcode(ERRCODE_PROTOCOL_VIOLATION),
  errmsg("SSL renegotiation failure")));
 port->count = 0;
 }

 I call your attention to the fact that we're calling SSL_do_handshake
 twice here; and what's more, we're messing with the internal
 port-ssl-state variable by setting the SSL_ST_ACCEPT bit.  According
 to the commit message that introduces this[1], this is text book
 correct, but I'm unable to find the text book that specifies that this
 is correct.  Is it?  I have gone through the OpenSSL documentation and
 can find not a single bit on this; and I have also gone through the
 OpenSSL mailing list archives and as far as I can tell they say you have
 to call SSL_renegotiate() and then SSL_do_handshake() once, and that's
 it.  (You can call SSL_renegotiate_pending() afterwards to verify that
 the renegotiation completed successfully, which is something we don't
 do.)

It would not surprise me if we could #ifdef out the |= SSL_ST_ACCEPT 
and second call to SSL_do_handshake() if we're running OpenSSL 0.9.7 or 
newer. iirc, 0.9.7's big feature was improving renegotiation.

 I have found in our archives several reports of people that get SSL
 renegotiation failed error messages in the log, and no explanation has
 ever been found.  I instrumented that block, and I have observed that
 after the second handshake call, ssl->state is 0x2111, 0x21c0, 0x21a0
 and other values; never SSL_ST_OK.  (0x2000 is SSL_ST_ACCEPT which is
 the bit we added; SSL_ST_OK is 0x03).  If I remove the second
 SSL_do_handshake() call and the changes to port->ssl->state, everything
 appears to work perfectly well and there's no extra log spam (and
 ssl->state is SSL_ST_OK by the time things are finished).  So apparently
 renegotiation is not actually failing, but we're just adding a confusing
 error message for no apparent reason.

 Thoughts?

Since 0.9.6 almost certainly has vulnerabilities that haven't been 
fixed, can we just deprecate anything older than OpenSSL 0.9.7 or 
#ifdef the above out? 0.9.7 is also ancient, but we're talking about 
new releases... and while we're at it, can we prefer PFS ciphers?

 I have still to find 

Re: [HACKERS] pgbench patches

2013-07-10 Thread Tatsuo Ishii
 Hello Tatsuo,
 
 I have looked into this:
 https://commitfest.postgresql.org/action/patch_view?id=1105
 because it's marked as Ready for committer. However I noticed that
 you worried about other pgbench patches such as
 https://commitfest.postgresql.org/action/patch_view?id=1103 .

 So I would like to know whether the throttling patch is committed and
 then update the progress patch to take that into account.

 Shall I wait until your pgbench --throttle patch becomes ready for
 committer?
 
 No. I'll submit another patch to the next commitfest to improve the
 progress behavior under throttling, if and when both initial patches are
 committed.

Ok, so I looked into the progress patch. One thing I noticed was:

case 'P':
progress = atoi(optarg);
if (progress <= 0)
{
fprintf(stderr,
   "thread progress delay (-P) must not "
"be negative (%s)\n",
optarg);
exit(1);
}
break;

For me, the error message is not quite right, because progress == 0
case is considered error as well in your patch. I sugges you change
the error message something like:

"thread progress delay (-P) must be a positive number (%s)\n",
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp




Re: [HACKERS] Bugfix and new feature for PGXS

2013-07-10 Thread Josh Berkus
On 07/08/2013 12:51 PM, Josh Berkus wrote:
 
 I think everything has been committed - as I think the CF app shows. The
 only thing left in this area from Cédric is the installation of headers,
 which Peter is down to review and is in "Waiting on Author" status.
 
 Yeah, that's the one I'm asking about.  is that going to get done in the
 next few days, or does it bounce?
 

OK, I'm setting this to returned with feedback because there has been
no further discussion of this patch here in the last several days.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] review: Non-recursive processing of AND/OR lists

2013-07-10 Thread Josh Berkus

 I think it's a waste of code to try to handle bushy trees.  A list is
 not a particularly efficient representation of the pending list; this
 will probably be slower than recursing in the common case.  I'd suggest
 keeping the logic to handle left-deep trees, which I find rather
 elegant, but ditching the pending list.

Is there going to be further discussion of this patch, or do I return it?


-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] [PATCH] Remove useless USE_PGXS support in contrib

2013-07-10 Thread Josh Berkus
On 07/04/2013 06:11 AM, Cédric Villemain wrote:
 Le mercredi 3 juillet 2013 23:56:42, Josh Berkus a écrit :
 Peter, Cedric, etc.:

 Where are we on this patch?  Seems like discussion died out.  Should it
 be bounced?
 
 I for myself have been persuaded that it is a good idea. The things that appear 
 to be lost are not really lost; it just shows that we need better coverage in the 
 PGXS area, so that is another thing to consider (TODO: add regression tests for PGXS ...)
 

To be more focused: is this patch going to be committed in the next 4
days or not?  If not, it's time to mark it returned so we can have a
leisurely spec argument later.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] [PATCH] Add transforms feature

2013-07-10 Thread Josh Berkus
On 07/08/2013 12:00 PM, Josh Berkus wrote:
 (b) we can expect maybe a dozen to 18 of them in core based on the data
  types there, and I hate to clutter up /contrib, and
 
  Well, that's a matter of opinion.  I'd be more happy with 250 contribs
  all on the same level versus a bunch of subdirectories structured based
  on personal preferences.
 
  But hey, we disagreed on config.sgml for similar reasons, IIRC. ;-)
 Yeah, I'd like to see some other opinions on this.
 

Well, based on the total lack of opinions from anyone on -hackers, I'd
say we leave your patch alone and leave the transforms in their current
directory location.  We can always move them later.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] SSL renegotiation

2013-07-10 Thread Troels Nielsen
Hi,

These are the relevant bits from Apache2.4's mod_ssl.

SSL_renegotiate(ssl);
SSL_do_handshake(ssl);

if (SSL_get_state(ssl) != SSL_ST_OK) {
    ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r, APLOGNO(02225)
                  "Re-negotiation request failed");
    ssl_log_ssl_error(SSLLOG_MARK, APLOG_ERR, r->server);

    r->connection->keepalive = AP_CONN_CLOSE;
    return HTTP_FORBIDDEN;
}

ap_log_rerror(APLOG_MARK, APLOG_INFO, 0, r, APLOGNO(02226)
              "Awaiting re-negotiation handshake");

/* XXX: Should replace setting state with SSL_renegotiate(ssl);
 * However, this causes failures in perl-framework currently,
 * perhaps pre-test if we have already negotiated?
 */
#ifdef OPENSSL_NO_SSL_INTERN
SSL_set_state(ssl, SSL_ST_ACCEPT);
#else
ssl->state = SSL_ST_ACCEPT;
#endif
SSL_do_handshake(ssl);

sslconn->reneg_state = RENEG_REJECT;

if (SSL_get_state(ssl) != SSL_ST_OK) {
    ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r, APLOGNO(02261)
                  "Re-negotiation handshake failed: "
                  "Not accepted by client!?");

    r->connection->keepalive = AP_CONN_CLOSE;
    return HTTP_FORBIDDEN;
}

That code supports at least OpenSSL 0.9.7 and later.

Some explanation for it can be found here:

http://books.google.dk/books?id=IIqwAy4qEl0C&pg=PT184&lpg=PT184&dq=SSL_do_handshake&source=bl&ots=ma632U4vWv&sig=d2qqS0svhu4EwIZZaONdHoTulVI&hl=en&sa=X&ei=xdPdUczoDuf34QSzmoDQDg&ved=0CIIDEOgBMCo

The snippet there is probably the textbook implementation.

So the original code looks OK. Perhaps the check
on the return code of the first SSL_do_handshake (and SSL_renegotiate)
is unnecessary and may lead to unwarranted error messages, though.
And it may be wrong to continue the renegotiation if
the state is not SSL_ST_OK after the first SSL_do_handshake.

If the renegotiation fails, I suppose two things can be done:
1. Quit the connection
2. Carry on pretending nothing happened.

I think 2 is correct in the vast majority of cases (as it looks like is
being done now).
And in that case: not resetting port->count will make for a very slow
and awkward connection, as new handshakes will be attempted again and again,
even if the other party persistently refuses to shake hands.

Kind Regards
Troels Nielsen


On Thu, Jul 11, 2013 at 12:34 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:

 I think this block is better written as:

 if (ssl_renegotiation_limit && port->count > ssl_renegotiation_limit * 1024L)
 {
     SSL_set_session_id_context(port->ssl, (void *) &SSL_context,
                                sizeof(SSL_context));
     if (SSL_renegotiate(port->ssl) <= 0)
         ereport(COMMERROR,
                 (errcode(ERRCODE_PROTOCOL_VIOLATION),
                  errmsg("SSL renegotiation failure in renegotiate")));
     else
     {
         int         handshake;

         do {
             handshake = SSL_do_handshake(port->ssl);
             if (handshake <= 0)
                 ereport(COMMERROR,
                         (errcode(ERRCODE_PROTOCOL_VIOLATION),
                          errmsg("SSL renegotiation failure in "
                                 "handshake, retrying")));
         } while (handshake <= 0);

         if (port->ssl->state != SSL_ST_OK)
             ereport(COMMERROR,
                     (errcode(ERRCODE_PROTOCOL_VIOLATION),
                      errmsg("SSL failed to send renegotiation "
                             "request")));
         else
             port->count = 0;
     }
 }

 In other words, retry the SSL_do_handshake() until it reports OK, but
 not more than that; this seems to give better results in my (admittedly
 crude) experiments.  I am unsure about checking port->ssl->state after
 the handshake; it's probably pointless, really.

 In any case, the old code was calling SSL_do_handshake() even if
 SSL_renegotiate() failed; and it was resetting port->count even if
 the handshake failed.  Both of these smell like bugs to me.

 --
 Álvaro Herrera                http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services





Re: [HACKERS] Differences in WHERE clause of SELECT

2013-07-10 Thread Prabakaran, Vaishnavi
Hi Berkus,

Thanks for your time and response.

I do understand that there is no LIKE operator support for integers, and it 
would be great if you could help me understand why it is not supported.

My intention is to know whether this is not supported because of any technical 
limitation or because it is against the PostgreSQL/SQL standards.

My use cases are like the following:
Integer LIKE pattern [ESCAPE escape-character]
1. List all the customers who have a negative balance:
SELECT * from Customer where balance LIKE '-%'

2. List all the customers whose id starts with 1:
SELECT * from Customer where cust_id LIKE '1%'

Thanks & Regards,
Vaishnavi


-Original Message-
From: pgsql-hackers-ow...@postgresql.org 
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Josh Berkus
Sent: Wednesday, 10 July 2013 9:13 AM
To: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Differences in WHERE clause of SELECT

Prabakaran,


 I am a newbie to PostgreSQL and was wondering about the following 
 behaviour.

pgsql-hackers is not the appropriate list for this kind of question.  In the 
future, please post to pgsql-novice, pgsql-sql, or pgsql-general with this kind 
of question.  Thanks.

 Can you please help me understand why 'LIKE' does not use an implicit 
 cast?

LIKE uses the operator class text_pattern_ops, which doesn't include an 
implicit cast.  For one thing, the implicit cast is from text -> integer, not 
the other way around, and there is no LIKE operator for integers.

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com






Re: [HACKERS] Differences in WHERE clause of SELECT

2013-07-10 Thread Jaime Casanova
On Wed, Jul 10, 2013 at 8:42 PM, Prabakaran, Vaishnavi
vaishna...@fast.au.fujitsu.com wrote:
 Hi Berkus,

 Thanks for your time and response.

 I do understand that there is no LIKE operator support for integers, and it
 would be great if you could help me understand why it is not supported.

 My intention is to know whether this is not supported because of any
 technical limitation or because it is against the PostgreSQL/SQL standards.


the latter

 My use cases are like below ones :
 Integer LIKE pattern [ESCAPE escape-character]
 1. List all the customers who are having negative balance:
 SELECT * from Customer where balance LIKE ‘-%’


Isn't this cleaner implemented this way?
SELECT * FROM customer WHERE balance < 0;

 2. List all the customers whose id starts with 1:
 SELECT * from Customer where cust_id LIKE ‘1%’


Is there any real use for that query? I understand asking for
all customers whose names begin with 'A', but why would the code
begin with '1'?

--
Jaime Casanova www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación
Phone: +593 4 5107566 Cell: +593 987171157




Re: [HACKERS] Add regression tests for DISCARD

2013-07-10 Thread Fabrízio de Royes Mello
On Sun, Jul 7, 2013 at 12:49 AM, Robins Tharakan thara...@gmail.com wrote:

 Thanks Fabrizio.

 Although parallel_schedule was a miss for this specific patch, however, I
 guess I seem to have missed out serial_schedule completely (in all patches)
 and then thanks for pointing this out. Subsequently Robert too noticed the
 miss at the serial_schedule end.

 Please find attached a patch, updated towards serial_schedule /
 parallel_schedule as well as the role name change as per Robert's feedback
 on CREATE OPERATOR thread.


Ok.

Some basic checks on this patch:

- Removed unnecessary extra-lines: Yes
- Cleanly applies to Git-Head: Yes
- Documentation Updated: N/A
- Tests Updated: Yes
- All tests pass: Yes.
- Does it Work: Yes
- Do we want it?: Yes
- Is this a new feature: No
- Does it support pg_dump: No
- Does it follow coding guidelines: Yes
- Any visible issues: No
- Any corner cases missed out: No
- Performance tests required: No
- Any compiler warnings: No
- Are comments sufficient: Yes
- Others: N/A

Regards,

-- 
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Blog sobre TI: http://fabriziomello.blogspot.com
 Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello


Re: [HACKERS] [PATCH] Remove useless USE_PGXS support in contrib

2013-07-10 Thread Andrew Dunstan


On 07/10/2013 09:04 PM, Josh Berkus wrote:

On 07/04/2013 06:11 AM, Cédric Villemain wrote:

Le mercredi 3 juillet 2013 23:56:42, Josh Berkus a écrit :

Peter, Cedric, etc.:

Where are we on this patch?  Seems like discussion died out.  Should it
be bounced?

I for myself have been persuaded that it is a good idea. Things apparently 
lost are not; it just outlines that we need better coverage in the PGXS area, so 
it is another thing to consider (TODO: add regression tests for PGXS ...)


To be more focused: is this patch going to be committed in the next 4
days or not?  If not, it's time to mark it returned so we can have a
leisurely spec argument later.


I think that would probably be the better course. I'm not sure there is 
a consensus about this.


cheers

andrew








Re: [HACKERS] changeset generation v5-01 - Patches git tree

2013-07-10 Thread Kevin Grittner
Andres Freund and...@2ndquadrant.com wrote:

 Any chance there still was an old replication slot around?

It is quite likely that there was.



--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company





Re: [HACKERS] SSL renegotiation

2013-07-10 Thread Sean Chittenden
 If the renegotiation fails

AH! Now I remember. SSL clients can optionally renegotiate, it's not
required to renegotiate the session if the other side chooses not to
(almost certainly due to a bug or limitation in the client's connecting
library). By monkeying with the state, you can explicitly force a client
to renegotiate.

I don't think in code from yesteryear it was portable or possible to see
if the server successfully renegotiated a connection before 0.9.6, so
you just forced the client to renegotiate after the server and ignored
the result. A client pausing for a few extra round trips was probably
never noticed. I'm not saying this is correct, but I think that was the
thinking back in the day.

 , I suppose two things can be done:

 1. Quit the connection

With my Infosec hat on, this is the correct option - force the client
back in to compliance with whatever the stated crypto policy is through
a reconnection.

 2. Carry on pretending nothing happened.

This is almost never correct in a security context (all errors or
abnormalities must boil up).

 I think 2 is correct in the vast majority of cases (as it looks like
 is being done now).

That is a correct statement in that most code disregards renegotiation,
but that is because there is a pragmatic assumption that HTTPS
connections will be short lived. In the case of PostgreSQL, there is a
good chance that a connection will be established for weeks or months.
In the case of Apache, allowing a client to renegotiate every byte would
be a possible CPU DoS, but I digress ...

 And in that case: not resetting port->count will make for a very slow
 and awkward connection as new handshakes will be attempted again and
 again,
 even if the other party persistently refuse to shake hands.

Which could lead to a depletion of random bits. This sounds like a
plausible explanation to me.


Too bad we're stuck using an ill-conceived SSL implementation and can't
use botan[1].

 I think this block is better written as:

 if (ssl_renegotiation_limit && port->count >
 ssl_renegotiation_limit * 1024L)

I don't think the * 1024L is right.

 {
     SSL_set_session_id_context(port->ssl, (void *) &SSL_context,
                                sizeof(SSL_context));
     if (SSL_renegotiate(port->ssl) <= 0)
         ereport(COMMERROR,
                 (errcode(ERRCODE_PROTOCOL_VIOLATION),
                  errmsg("SSL renegotiation failure in "
                         "renegotiate")));
     else
     {
         int         handshake;

         do {
             handshake = SSL_do_handshake(port->ssl);
             if (handshake <= 0)
                 ereport(COMMERROR,
                         (errcode(ERRCODE_PROTOCOL_VIOLATION),
                          errmsg("SSL renegotiation failure "
                                 "in handshake, retrying")));
         } while (handshake <= 0);

It's worth noting that for broken SSL implementations or SSL
implementations that refuse to renegotiate, this will yield a stalled
connection, though at least it will be obvious in the logs. I think this
is the correct approach.

It is probably prudent to set an upper bound on this loop in order to
free up the resource and unblock the client who will appear to be
mysteriously hung for no reason until they look at the PostgreSQL server
logs.

         if (port->ssl->state != SSL_ST_OK)
             ereport(COMMERROR,
                     (errcode(ERRCODE_PROTOCOL_VIOLATION),
                      errmsg("SSL failed to send "
                             "renegotiation request")));
         else
             port->count = 0;
     }
 }

 In other words, retry the SSL_do_handshake() until it reports OK, but
 not more than that; this seems to give better results in my
 (admittedly
 crude) experiments.  I am unsure about checking port->ssl->state after
 the handshake; it's probably pointless, really.

Correct. In async IO, this would be important, but since the server is
synchronous in its handling of communication, we can remove the if/else
(state != SSL_ST_OK) block.

 In any case, the old code was calling SSL_do_handshake() even if
 SSL_renegotiate() failed; and it was resetting port->count even if
 the handshake failed.  Both of these smell like bugs to me.

I don't know how SSL_renegotiate() could fail in the past.
SSL_renegotiate(3) should never fail on a well formed implementation
(e.g. ssl/t1_reneg.c @ssl_add_serverhello_renegotiate_ext()).


While we're on the subject of crypto and OpenSSL, we force server
ciphers to be preferred instead of client ciphers:

  SSL_CTX_set_options(SSL_context, SSL_OP_CIPHER_SERVER_PREFERENCE);

http://www.openssl.org/docs/ssl/SSL_CTX_set_options.html#NOTES

SSL_get_secure_renegotiation_support() would 

Re: [HACKERS] Differences in WHERE clause of SELECT

2013-07-10 Thread Prabakaran, Vaishnavi
On Wed, Jul 10, 2013 at 8:42 PM, Prabakaran, Vaishnavi 
vaishna...@fast.au.fujitsu.com wrote:
 Hi Berkus,

 Thanks for your time and response.

 I do understand that there is no LIKE operator support for integers, and it
 would be great if you could help me understand why it is not supported.

 My intention is to know whether this is not supported because of any
 technical limitation or because it is against the PostgreSQL/SQL standards.


 the latter

I see. Understood. Looking at the SQL standard, it does say that the operands 
need to be character or octet strings. But I was hoping this could be overridden by 
implicit conversion rules, which are implementation-specific.

 My use cases are like the following:
 Integer LIKE pattern [ESCAPE escape-character]
 1. List all the customers who have a negative balance:
 SELECT * from Customer where balance LIKE '-%'


Isn't this cleaner implemented this way?
SELECT * FROM customer WHERE balance < 0;

 2. List all the customers whose id starts with 1:
 SELECT * from Customer where cust_id LIKE '1%'


 Is there any real use for that query? I understand asking for all
 customers whose names begin with 'A', but why would the code begin with '1'?

A legacy application we are migrating does have a weird requirement like this 
because it was running on 'another' RDBMS which does support implicit 
casting in the LIKE predicate.

Rgds,
Vaishnavi


--
Jaime Casanova www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación
Phone: +593 4 5107566 Cell: +593 987171157






Re: [HACKERS] [PERFORM] In progress INSERT wrecks plans on table

2013-07-10 Thread Abhijit Menon-Sen
At 2013-07-10 09:47:34 -0700, j...@agliodbs.com wrote:

 Due to the apparent lack of performance testing, I'm setting this back
 to needs review.

The original submission (i.e. the message linked from the CF page)
includes test results that showed a clear performance improvement.
Here's an excerpt:

 OK, here's a easily reproducible test...
 
 Prep:
 DROP TABLE IF EXISTS plan;
 CREATE TABLE plan
 (
   id INTEGER NOT NULL,
   typ INTEGER NOT NULL,
   dat TIMESTAMP,
   val TEXT NOT NULL
 );
 insert into plan select generate_series(1,10), 0,
 current_timestamp, 'some texts';
 CREATE UNIQUE INDEX plan_id ON plan(id);
 CREATE INDEX plan_dat ON plan(dat);
 
 testcase.pgb
 select count(*) from plan where dat is null and typ = 3;
 
 Session 1:
 pgbench -n -f testcase.pgb -t 100
 
 Session 2:
 BEGIN; insert into plan select 100 + generate_series(1, 10),
 3, NULL, 'b';
 
 Transaction rate in Session 1: (in tps)
 (a) before we run Session 2:
 Current: 5600tps
 Patched: 5600tps
 
 (b) after Session 2 has run, yet before transaction end
 Current: 56tps
 Patched: 65tps
 
 (c) after Session 2 has aborted
 Current/Patched: 836, 1028, 5400tps
 VACUUM improves timing again
 
 New version of patch attached which fixes a few bugs.
 
 Patch works and improves things, but we're still swamped by the block
 accesses via the index.

-- Abhijit

