[HACKERS] [patch] for "psql : Allow processing of multiple -f (file) options "

2012-04-08 Thread Vikash3 S
Hi,

Please find attached a patch with trivial changes for the To Do list item
"psql : Allow processing of multiple -f (file) options". Looking forward
to valuable feedback.

Thanks,
Vikash
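For context, the TODO item essentially means turning psql's single -f slot
into an ordered list. A rough sketch of the idea in C (hypothetical names,
not the patch itself):

/*
 * Hypothetical sketch: collect every -f argument in command-line order
 * instead of keeping a single action string.
 */
#include <stdlib.h>

typedef struct FileEntry
{
    const char *fname;
    struct FileEntry *next;
} FileEntry;

static FileEntry *file_list = NULL;
static FileEntry **file_tail = &file_list;

static void
remember_file(const char *fname)
{
    FileEntry *e = malloc(sizeof(FileEntry));

    e->fname = fname;
    e->next = NULL;
    *file_tail = e;             /* append, preserving command-line order */
    file_tail = &e->next;
}

/*
 * In psql's getopt_long loop, "case 'f':" would then call
 * remember_file(optarg) instead of storing one file name, and startup
 * would later walk file_list, processing each file in turn.
 */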



startup.patch
Description: Binary data


getopt_long.patch
Description: Binary data



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-04-08 Thread Gerald Devotta
This is Gerald Devotta, Recruitment Specialist with Newt Global LLC, a
global staffing solutions provider that has been serving the
telecommunications and utility industries. 

 

I am contacting you to let you know that your resume came to my attention
while I was conducting a search for a project in Bloomington, Illinois. I
have reviewed your resume with particular interest and am excited to let you
know that many of your qualifications match my client's needs.

 

PostgreSQL database developer

Bloomington Illinois

2+ Years Contract

Attractive Compensation + Expenses paid


Required:

Solid experience with PgSQL programming on the PostgreSQL ORDBMS

Candidate can work 4 days on site and one day from home

 

If you are interested, kindly send your updated resume with your expected
rate/salary.


Regards,

Gerald Devotta

Recruitment Analyst

Newt Global Consulting LLC

Phone: 202.470.2492

Email: gdevo...@newtglobal.com   |
www.newtglobal.com

1300 W Walnut Hill Lane, Suite #230, Irving, TX 75038

 


Re: [HACKERS] Last gasp

2012-04-08 Thread Noah Misch
On Sat, Apr 07, 2012 at 04:51:03PM -0400, Robert Haas wrote:
> I think this basically just boils down to too many patches and not
> enough people.  I was interested in Command Triggers from the
> beginning of this CommitFest, and I would have liked to pick it up
> sooner, but there were a LOT of patches to work on for this
> CommitFest.  The first three CommitFests of this cycle each had
> between 52 and 60 patches, while this one had 106 which included
> several very complex and invasive patches, command triggers among
> them.  So there was just a lot more to do, and a number of the people
> who submitted all of those patches didn't do a whole lot to help
> review them, sometimes because they were still furiously rewriting
> their submissions.  It's not surprising that more patches + fewer
> reviewers = each patch getting less attention, or getting it later.
> 
> Even before this CommitFest, it's felt to me like this hasn't been a
> great cycle for reviewing.  I think we have generally had fewer people
> doing reviews than we did during the 9.0 and 9.1 cycles.  I think we
> had a lot of momentum with the CommitFest process when it was new, but
> three years on I think there's been some ebbing of the relatively
> enthusiastic volunteerism that got it off the ground.  I don't have a
> very good idea what to do about that, but I think it bears some
> thought.

http://wiki.postgresql.org/wiki/Running_a_CommitFest suggests marking a patch
Returned with Feedback after five consecutive days of Waiting on Author.  That
was a great tool for keeping things moving, and I think we should return to it
or a similar timer.  It's also an objective test approximating the subjective
"large patch needs too much rework" test.  One cure for insufficient review
help is to then ratchet down the permitted Waiting on Author days.  The queue
will clear faster, and patch authors will have clearer schedules to consider
assisting with review of the remaining patches.  Incidentally, I can't feel
too sorry for any patch bounced after one solid review when some other patch
gets an implicit bounce after incomplete review or no review.  No patch in the
latest CF has or will have that fate.  Several patches in CF 2011-11 did end
that way, and it wasn't the first incident thereof.

I liked Simon's idea[1] for increasing the review supply: make a community
policy that patch submitters shall furnish commensurate review effort.  If
review is available-freely-but-we-hope-you'll-help, then the supply relative
to patch submissions is unpredictable.  Feature sponsors should see patch
review as efficient collaborative development.  When patch authorship teams
spend part of their time reviewing other submissions with the expectation of
receiving comparable reviews of their own work, we get a superior final
product compared to allocating all that time to initial patch writing.  (The
details might need work.  For example, do we give breaks for new contributors
or self-sponsored authors?)

[1] http://archives.postgresql.org/pgsql-hackers/2012-03/msg01612.php

> When it's discovered
> and agreed that a patch has serious problems that can't be corrected
> in a few days, the response to that should be "crap, OK, see you next
> CommitFest".  When people aren't willing to accept that, then we end
> up having arguments.

Agreed.  If we can develop consensus on a litmus test for that condition, it
can and should be the author recognizing it and marking his own patch Returned
with Feedback.  When the author fails to do so, the reviewer should do it.
When neither of them takes action, only then should it fall on the CF manager.
We have relied on the CF manager to make the bulk of these calls, and that's a
demonstrated recipe for patch-specific policy debates and CF manager burnout.

> I think the fact that we insist on good design and
> high-quality code is a real strength of this project and something
> that makes me proud to be associated with it.  Yeah, it's harder that
> way.  Yeah, sometimes it means that it takes longer before a given
> feature gets committed.  Yeah, some things don't get done at all.  But
> on the flip side, it means that when you use a PostgreSQL feature, you
> can pretty much count on it working.  And if by some chance it
> doesn't, you can count on it being an oversight that will be corrected
> in the next maintenance release rather than a design problem that will
> never get fixed.  I like that, and I think our users do too.

Fully agreed.  We could occasionally benefit from allowing "experimental"
features documented as liable to mutate in any later major release.  They
should exhibit the same high implementation quality, but we can relax somewhat
about perfecting a long-term user interface.  Take pg_autovacuum as a rough
example of this strategy from the past.

Thanks,
nm



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-04-08 Thread Shigeru HANADA
(2012/04/08 5:19), Thom Brown wrote:
> 2012/4/7 Shigeru HANADA:
>> I've updated pgsql_fdw so that it can collect statistics from foreign
>> data with new FDW API.
> 
> I notice that if you restart the remote server, the connection is
> broken, but the client doesn't notice this until it goes to fire off
> another command.  Should there be an option to automatically
> re-establish the connection upon noticing the connection has dropped,
> and issue a NOTICE that it had done so?

Hm, I'd prefer reporting the connection failure and aborting the local
transaction, because reconnecting to the server would break consistency
between the results coming from multiple foreign tables.  A server shutdown
(or other trouble, e.g. a network failure) might happen at various points
in the sequence of remote queries (or sampling in ANALYZE).  For example,
when we execute a local query that touches two foreign tables, foo and
bar, the sequence of libpq activity would look like this.

1) connect to the server at the beginning of the local query
2) execute EXPLAIN for foreign table foo
3) execute EXPLAIN for foreign table bar
4) execute actual query for foreign table foo
5) execute actual query for foreign table bar
6) disconnect from the server at the end of the local query

If the connection breaks between 4) and 5) and an immediate reconnect
succeeds, the retrieved results for foo and bar might be inconsistent from
the viewpoint of transaction isolation.

In the current implementation, the next local query that references a
foreign table on the failed server tries to reconnect.

> Also I'm not particularly keen on the message provided to the user in
> this event:
> 
> ERROR:  could not execute EXPLAIN for cost estimation
> DETAIL:  FATAL:  terminating connection due to administrator command
> FATAL:  terminating connection due to administrator command
> 
> There's no explanation what the "administrator" command was, and I
> suspect this is really just a "I don't know what's happened here"
> condition.  I don't think we should reach that point.

That FATAL message is returned by the remote backend's ProcessInterrupts()
during some administrator commands, such as immediate shutdown or
pg_terminate_backend().  If the remote backend died of a fast shutdown or
SIGKILL, no error message is available (see the sample below).

postgres=# select * From pgsql_branches ;
ERROR:  could not execute EXPLAIN for cost estimation
DETAIL:
HINT:  SELECT bid, bbalance, filler FROM public.pgbench_branches

I agree that the message is confusing.  How about showing a message like
"pgsql_fdw connection failure on <server name>", along with the remote
error message, for such cases?  It can be achieved by adding an extra
check of the connection status right after PQexec()/PQexecParams().
Although some word polishing would be required :)

postgres=# select * from pgsql_branches ;
ERROR:  pgsql_fdw connection failure on subaru_pgbench
DETAIL:  FATAL:  terminating connection due to administrator command
FATAL:  terminating connection due to administrator command

This should give users the correct impression that the trouble is on the
remote side.
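For illustration, a minimal sketch of that extra check, assuming a
hypothetical helper inside pgsql_fdw (only the libpq calls and the ereport
machinery are real; servername stands for the foreign server's name):

#include "postgres.h"
#include "libpq-fe.h"

static PGresult *
exec_checked(PGconn *conn, const char *sql, const char *servername)
{
    PGresult *res = PQexec(conn, sql);

    /*
     * If the connection died mid-query, report it as a connection
     * failure, attaching whatever error text libpq still holds.
     */
    if (PQstatus(conn) == CONNECTION_BAD)
    {
        PQclear(res);
        ereport(ERROR,
                (errcode(ERRCODE_CONNECTION_FAILURE),
                 errmsg("pgsql_fdw connection failure on %s", servername),
                 errdetail("%s", PQerrorMessage(conn))));
    }
    return res;
}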

Regards,
-- 
Shigeru HANADA



Re: [HACKERS] Last gasp

2012-04-08 Thread Greg Smith

On 04/07/2012 04:51 PM, Robert Haas wrote:

On a related note, letting CommitFests go on for three
months because there's insufficient reviewer activity to get them done
in one or two is, in my opinion, not much of a solution.  If there's
even less reviewer activity next time, are we going to let it go on
for four months?  Six months?  Twelve months?


There should be a feedback loop here, one that makes it increasingly 
obvious to people writing features that such work will screech to a halt 
unless it's paired with review work.  An extreme enforcement might be a 
bias toward outright rejecting things from people we haven't seen a 
review from before.  That sort of message is hard to deliver without 
discouraging new developers though, even though I think there's a lot of 
data to support such a thing.  You learn a lot about how to harden a 
submission against the things any reviewer is likely to do by doing a 
review yourself.


That said, it seems to me that the large patches are the ones that clog 
the review system the most, and those are usually coming from known 
contributors.  Unfortunately, even if you know the situation here, 
making it clear to sponsors that review is just as important as new 
development is a hard sell sometimes.  There's a sales pitch needed 
there, one that makes it clear to people that the review workflow is 
actually an essential part of why the PostgreSQL code is high quality. 
Going through review isn't overhead, it's a key part of the process for 
the benefit of the feature.



I think that there are many projects that are more open to "just
committing things" than we are - perhaps no better exemplified than by
Tom's recent experiences with Henry Spencer's regular expression
engine and the Tcl project.  If you're willing to do work, we'll
assume it's good; if you know something and are willing to do work,
here are the keys.  We could decide to take that approach, and just
generally lower our standards for commit across the board.


This is a bit of a strawman argument as you constructed it.  There's a 
large middle ground between the current CF process and "just committing 
things".  One problem here is that there just isn't enough unique people 
committing things, and adjusting the commit standards may be necessary 
to improve that.


Right now I think there's a systemic structural problem to how people 
and companies approach the development cycle for each release.  Getting 
things into the last CF just before a release has a better rate of 
return on the work, making it easier for people to justify spending time 
on.  There's less elapsed time before you will see the results of your 
work in production.


But that's completely backwards from the risk/reward curve for 
committers, and it's no wonder clashes here are so common.  The idea of 
committing something that may not be perfect yet is a lot easier to 
stomach if there's plenty of time left in the release cycle for testing 
it.  Even a full reversion is straightforward to handle when something 
is changed early in the release.  In a risk-averse project like this
one, the last batch of feature commits for a release should be extremely 
picky about what they accept.


On a related note, one reason I'm not quite as concerned about the 9.2 
schedule is that none of the more speculative submissions went in near 
the end this time.  The changes that scare me most have been committed 
for months already.  That was surely not the case for the last month of 
9.0 or 9.1 development, where some pretty big and disruptive things 
didn't land until very late.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com



Re: [HACKERS] patch: improve SLRU replacement algorithm

2012-04-08 Thread Robert Haas
On Sun, Apr 8, 2012 at 12:53 PM, Tom Lane  wrote:
> However, I do have a couple of quibbles with the comments.

Good points.  I made some adjustments; see what you think.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] WIP: Collecting statistics on CSV file data

2012-04-08 Thread Etsuro Fujita
Thanks!

Best regards,
Etsuro Fujita

-Original Message-
From: pgsql-hackers-ow...@postgresql.org
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Tom Lane
Sent: Saturday, April 07, 2012 4:20 AM
To: Shigeru HANADA
Cc: Etsuro Fujita; pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] WIP: Collecting statistics on CSV file data 

Shigeru HANADA  writes:
> Just after my post, Fujita-san posted another v7 patch[1], so I merged
> v7 patches into v8 patch.

I've committed a modified version of this, but right after pushing it I had
a better idea about what the AnalyzeForeignTable API should do.
An issue that I'd not previously figured out is how analysis of an
inheritance tree could deal with foreign-table members, because it wants to
estimate the members' sizes before collecting the actual sample rows.
However, given that we've got the work split into a precheck phase and a
sample collection phase, that's not hard to solve: we could insist that the
FDW give back a size estimate in the precheck phase, not the sample
collection phase.  I'm off to fix that up ...
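Roughly, the revised callback split could look like the sketch below
(loosely following the 9.2-era FDW API; the bodies are illustrative, not
the committed code):

#include "postgres.h"
#include "foreign/fdwapi.h"

static int
file_acquire_sample_rows(Relation relation, int elevel,
                         HeapTuple *rows, int targrows,
                         double *totalrows, double *totaldeadrows)
{
    /* sample collection phase: read the data, gather up to targrows rows */
    *totalrows = 0;
    *totaldeadrows = 0;
    return 0;
}

static bool
fileAnalyzeForeignTable(Relation relation,
                        AcquireSampleRowsFunc *func,
                        BlockNumber *totalpages)
{
    /* precheck phase: return a cheap size estimate, no sampling yet */
    *totalpages = 1;            /* real code would estimate, e.g. bytes/BLCKSZ */
    *func = file_acquire_sample_rows;   /* sampling deferred until later */
    return true;                /* yes, we can analyze this table */
}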

regards, tom lane






Re: [HACKERS] ECPG FETCH readahead

2012-04-08 Thread Noah Misch
On Sun, Apr 08, 2012 at 04:25:01PM +0200, Michael Meskes wrote:
> On Sat, Apr 07, 2012 at 11:50:42AM -0400, Noah Misch wrote:
> > I do call your attention to a question I raised in my second review: if a
> > program contains "DECLARE foo READAHEAD 5 CURSOR FOR ..." and the user runs
> > the program with ECPGFETCHSZ=10 in the environment, should that cursor use a
> > readahead window of 5 or of 10?  Original commentary:
> > http://archives.postgresql.org/message-id/20120329004323.ga17...@tornado.leadboat.com
> 
> I'd say it should be 5. I don't like an environment variable overwriting a
> hard-coded setting. I think this is what you, Noah, thought, too, right?

Yes.
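In code terms, the agreed precedence would amount to something like this
(a sketch only; the function and its caller are hypothetical, not
ecpglib's actual code):

#include <stdlib.h>

static int
effective_readahead(int declared)   /* from DECLARE ... READAHEAD, 0 if absent */
{
    const char *env = getenv("ECPGFETCHSZ");

    if (declared > 0)
        return declared;            /* hard-coded setting wins */
    if (env != NULL && atoi(env) > 0)
        return atoi(env);           /* environment only fills in the default */
    return 1;                       /* fall back: single-row window */
}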



[HACKERS] bug in fast-path locking

2012-04-08 Thread Robert Haas
On Sun, Apr 8, 2012 at 12:43 PM, Boszormenyi Zoltan  wrote:
>> Indeed, the unpatched GIT version crashes if you enter
>>  =#lock TABLE pgbench_accounts ;
>> the second time in session 2 after the first one failed. Also,
>> manually spelling it out:
>>
>> Session 1:
>>
>> $ psql
>> psql (9.2devel)
>> Type "help" for help.
>>
>> zozo=# begin;
>> BEGIN
>> zozo=# lock table pgbench_accounts;
>> LOCK TABLE
>> zozo=#
>>
>> Session 2:
>>
>> zozo=# begin;
>> BEGIN
>> zozo=# savepoint a;
>> SAVEPOINT
>> zozo=# lock table pgbench_accounts;
>> ERROR:  canceling statement due to statement timeout
>> zozo=# rollback to a;
>> ROLLBACK
>> zozo=# savepoint b;
>> SAVEPOINT
>> zozo=# lock table pgbench_accounts;
>> The connection to the server was lost. Attempting reset: Failed.
>> !>
>>
>> Server log after the second lock table:
>>
>> TRAP: FailedAssertion("!(locallock->holdsStrongLockCount == 0)", File:
>> "lock.c", Line: 749)
>> LOG:  server process (PID 12978) was terminated by signal 6: Aborted
>
>
> Robert, the Assert triggering with the above procedure
> is in your "fast path" locking code with current GIT.

Yes, that sure looks like a bug.  It seems that if the top-level
transaction is aborting, then LockReleaseAll() is called and
everything gets cleaned up properly; or if a subtransaction is
aborting after the lock is fully granted, then the locks held by the
subtransaction are released one at a time using LockRelease(), but if
the subtransaction is aborted *during the lock wait* then we only do
LockWaitCancel(), which doesn't clean up the LOCALLOCK.  Before the
fast-lock patch, that didn't really matter, but now it does, because
that LOCALLOCK is tracking the fact that we're holding onto a shared
resource - the strong lock count.  So I think that LockWaitCancel()
needs some kind of adjustment, but I haven't figured out exactly what
it is yet.
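To see the bookkeeping problem in isolation, here is a toy model of the
invariant (an illustration only, not PostgreSQL source):

#include <assert.h>

typedef struct
{
    int holdsStrongLockCount;   /* did this LOCALLOCK bump the shared count? */
} LocalLock;

static int strongLockCount;     /* stand-in for the shared strong-lock counter */

static void
begin_strong_lock_acquire(LocalLock *ll)
{
    assert(ll->holdsStrongLockCount == 0);      /* the assertion that trips */
    strongLockCount++;
    ll->holdsStrongLockCount = 1;
}

static void
lock_wait_cancel(LocalLock *ll)
{
    /*
     * The cleanup that is missing: without these lines, retrying the same
     * LOCK TABLE re-enters begin_strong_lock_acquire() with the flag still
     * set, which is exactly the reported crash.
     */
    if (ll->holdsStrongLockCount)
    {
        strongLockCount--;
        ll->holdsStrongLockCount = 0;
    }
}

int
main(void)
{
    LocalLock ll = {0};

    begin_strong_lock_acquire(&ll);     /* lock wait begins */
    lock_wait_cancel(&ll);              /* statement canceled mid-wait */
    begin_strong_lock_acquire(&ll);     /* retry now passes the assertion */
    return 0;
}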

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Re: pg_stat_statements normalisation without invasive changes to the parser (was: Next steps on pg_stat_statements normalisation)

2012-04-08 Thread Peter Geoghegan
On 8 April 2012 20:51, Tom Lane  wrote:
> Applied with some cosmetic adjustments.

Thanks.

Having taken another look at the code, I wonder if we wouldn't have
been better off just fastpathing out of pgss_store in the first call
(in the pair of calls made by a backend as part of the execution of some
non-prepared query) iff there is already an entry in the hashtable.
After all, we're now going to the trouble of acquiring the spinlock
just to increment the usage for the entry by 0 (likewise every other
field), which is obviously superfluous.  I apologise for not having
spotted this before submitting my last patch.

I have attached a patch with the modifications described.

This is more than a micro-optimisation, since it will cut the number
of spinlock acquisitions approximately in half for non-prepared
queries.
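A toy model of the proposed fast path (illustrative, not the actual
pg_stat_statements source):

#include <stdbool.h>
#include <stddef.h>

typedef struct Entry
{
    long    calls;
    double  total_time;
    /* the per-entry spinlock is elided in this model */
} Entry;

static Entry the_entry;             /* stand-in for the shared hash table */
static bool  entry_exists = false;

static Entry *
hash_lookup(void)
{
    return entry_exists ? &the_entry : NULL;
}

static void
pgss_store(bool first_call, double elapsed)
{
    Entry *entry = hash_lookup();

    if (first_call && entry != NULL)
        return;                 /* fast path: nothing to add, skip the spinlock */

    if (entry == NULL)
    {
        entry = &the_entry;     /* stand-in for creating the entry */
        entry_exists = true;
    }

    /* the spinlock would be taken here */
    entry->calls += first_call ? 0 : 1;
    entry->total_time += elapsed;
}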

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services


pg_stat_statements_optimization_2012_04_08.patch
Description: Binary data



Re: [HACKERS] Last gasp

2012-04-08 Thread Josh Berkus
> In short, the idea of strongly calendar-driven releases looks more
> and more attractive to me the more times we go through this process.
> If your patch isn't ready on date X, then it's not getting into this
> release; but there'll be another bus coming along before long.
> Stretching out release cycles to get in those last few neat features
> just increases the pressure for more of the same, because people don't
> know how long it will be to the next release.

As you know, I've supported this view for several years.

> Just to be clear ... I don't believe that we can have hard-and-fast
> *release* dates.  I am suggesting that it might be a good idea to have
> a hard deadline for committing new features.  But beta test phase will
> take however long it takes.  I don't think shaking out bugs is a
> predictable process.

It could have more visibility though, which would probably also make it
go faster.  Something to work on.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [HACKERS] Is there any way to disable compiler optimization and enable debug?

2012-04-08 Thread Peter Eisentraut
On Sun, 2012-04-08 at 14:47 -0400, Andrew Dunstan wrote:
> Try:
> 
> CFLAGS=-O0 ./configure --enable-debug 

Better yet:

./configure CFLAGS=-O0 --enable-debug

(Passing CFLAGS as an argument to configure gets recorded, so the setting
is preserved if configure is re-run via config.status.)





Re: [HACKERS] Re: pg_stat_statements normalisation without invasive changes to the parser (was: Next steps on pg_stat_statements normalisation)

2012-04-08 Thread Tom Lane
Peter Geoghegan  writes:
> On 29 March 2012 21:05, Tom Lane  wrote:
>> Barring objections I'll go fix this, and then this patch can be
>> considered closed except for possible future tweaking of the
>> sticky-entry decay rule.

> Attached patch fixes a bug, and tweaks sticky-entry decay.

Applied with some cosmetic adjustments.

regards, tom lane



Re: [HACKERS] Is there any way to disable compiler optimization and enable debug?

2012-04-08 Thread Andrew Dunstan



On 04/08/2012 01:42 PM, clover white wrote:

HI,
  I would like to debug PG because I have a problem when I run initdb, 
and I have a question about the configure file.


When I used the command below to configure PG, it was built with
debugging symbols (-g) but also -O2 compiler optimization, which makes
the execution order not match the source code order:
./configure --enable-debug --enable-depend --enable-cassert
--prefix=/home/pgsql/pgsql


Then I exported CFLAGS=O0, but it still didn't work.
I read a little of the configure file and found that CFLAGS is
unset in configure, and that CFLAGS is also controlled by the global
variables ac_env_CFLAGS_set and ac_env_CFLAGS_value.


But I do not know how I could pass ac_env_CFLAGS_set and
ac_env_CFLAGS_value to configure.


For now, I have replaced all the -O2 flags with -O0 in the configure
file to work around the debugging order problem temporarily.


Is there any other way to disable compiler optimization and enable debug?

Thank you for your help.


Try:

   CFLAGS=-O0 ./configure --enable-debug 


cheers

andrew



[HACKERS] Is there any way to disable compiler optimization and enable debug?

2012-04-08 Thread clover white
HI,
  I would like to debug PG because I have a problem when I run initdb, and
I have a question about the configure file.

When I used the command below to configure PG, it was built with
debugging symbols (-g) but also -O2 compiler optimization, which makes
the execution order not match the source code order:
./configure --enable-debug --enable-depend --enable-cassert
--prefix=/home/pgsql/pgsql

Then I exported CFLAGS=O0, but it still didn't work.
I read a little of the configure file and found that CFLAGS is
unset in configure, and that CFLAGS is also controlled by the global
variables ac_env_CFLAGS_set and ac_env_CFLAGS_value.

But I do not know how I could pass ac_env_CFLAGS_set and
ac_env_CFLAGS_value to configure.

For now, I have replaced all the -O2 flags with -O0 in the configure
file to work around the debugging order problem temporarily.

Is there any other way to disable compiler optimization and enable debug?

Thank you for your help.


Re: [HACKERS] ECPG FETCH readahead

2012-04-08 Thread Michael Meskes
On Sun, Apr 08, 2012 at 06:35:33PM +0200, Boszormenyi Zoltan wrote:
> Do you want me to change this or will you do it? I am on holiday
> and will be back to work on wednesday.

I don't think waiting till later this week is a real problem. 

> The possibility to test different readahead window sizes
> without modifying the source and recompiling was useful.

Sure, but you can still do that when not defining a fixed number in the
statement.

> The -R option simply provides a default without ornamenting
> the DECLARE statement.

Could you please incorporate these changes, too, when you're back from vacation?

> >I cannot find a test that tests the environment variable giving the fetch 
> >size.
> >Could you please point me to that?
> 
> I didn't write such a test. The reason is that while variables are
> exported by make from the Makefile to the binaries run by make
> e.g.  CFLAGS et.al. for $(CC), "make check" simply runs pg_regress
> once which uses its own configuration file that doesn't have a
> way to set or unset an environment variable. This could be a useful
> extension to pg_regress though.

How about calling setenv() from the test program itself? 
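Sketched, the test could pin the environment itself before connecting
(illustrative only):

#include <stdlib.h>

int
main(void)
{
    /* third argument 1 = overwrite any value inherited from make */
    setenv("ECPGFETCHSZ", "10", 1);

    /* EXEC SQL CONNECT TO ...; followed by the cursor readahead test */
    return 0;
}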

Michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
Jabber: michael.meskes at googlemail dot com
VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL



Re: [HACKERS] Last gasp

2012-04-08 Thread Andrew Dunstan



On 04/07/2012 11:03 PM, Tom Lane wrote:

Andrew Dunstan  writes:

On 04/07/2012 06:33 PM, Peter Geoghegan wrote:

I hope that that policy will not be applied without some degree of
discrimination.

If we are to have time-based releases, then I assume it won't; it will
be pretty much a hard-and-fast rule.
I admit I haven't been a fan in the past, but I can see advantages, not
least being predictability of release times. It would be nice to be able
to say "In June" when asked when the next release will be, as I often am.

Just to be clear ... I don't believe that we can have hard-and-fast
*release* dates.  I am suggesting that it might be a good idea to have
a hard deadline for committing new features.  But beta test phase will
take however long it takes.  I don't think shaking out bugs is a
predictable process.




I agree, but I think the release date will be much closer to being 
predictable if we chop off the last commitfest more rigorously. But time 
will tell.


cheers

andrew



Re: [HACKERS] [PATCH] lock_timeout and common SIGALRM framework

2012-04-08 Thread Boszormenyi Zoltan

On 2012-04-08 11:24, Boszormenyi Zoltan wrote:

On 2012-04-06 14:47, Cousin Marc wrote:

On 05/04/12 08:02, Boszormenyi Zoltan wrote:

On 2012-04-04 21:30, Alvaro Herrera wrote:

I think this patch is doing two things: first touching infrastructure
stuff and then adding lock_timeout on top of that.  Would it work to
split the patch in two pieces?


Sure. Attached is the split version.

Best regards,
Zoltán Böszörményi


Hi,

I've started looking at and testing both patches.

Technically speaking, I think the source looks much better than the
first version of lock timeout, and may help adding other timeouts in the
future.


Thanks.


  I haven't tested it in depth though, because I encountered the
following problem:

While testing the patch, I found a way to crash PG. But what's weird is
that it crashes also with an unpatched git version.

Here is the way to reproduce it (I have done it with a pgbench schema):

- Set a small statement_timeout (just to save time during the tests)

Session1:
=#BEGIN;
=#lock TABLE pgbench_accounts ;

Session 2:
=#BEGIN;
=#lock TABLE pgbench_accounts ;
ERROR:  canceling statement due to statement timeout
=# lock TABLE pgbench_accounts ;

I'm using \set ON_ERROR_ROLLBACK INTERACTIVE by the way. It can also be
done with a rollback to savepoint of course.

Session 2 crashes with this: TRAP: FailedAssertion(
"!(locallock->holdsStrongLockCount == 0)", File: "lock.c", Line: 749).

It can also be done without a statement_timeout, and a control-C on the
second lock table.


Indeed, the unpatched GIT version crashes if you enter
  =#lock TABLE pgbench_accounts ;
the second time in session 2 after the first one failed. Also,
manually spelling it out:

Session 1:

$ psql
psql (9.2devel)
Type "help" for help.

zozo=# begin;
BEGIN
zozo=# lock table pgbench_accounts;
LOCK TABLE
zozo=#

Session 2:

zozo=# begin;
BEGIN
zozo=# savepoint a;
SAVEPOINT
zozo=# lock table pgbench_accounts;
ERROR:  canceling statement due to statement timeout
zozo=# rollback to a;
ROLLBACK
zozo=# savepoint b;
SAVEPOINT
zozo=# lock table pgbench_accounts;
The connection to the server was lost. Attempting reset: Failed.
!>

Server log after the second lock table:

TRAP: FailedAssertion("!(locallock->holdsStrongLockCount == 0)", File: 
"lock.c", Line: 749)
LOG:  server process (PID 12978) was terminated by signal 6: Aborted

Best regards,
Zoltán Böszörményi


Robert, the Assert triggering with the above procedure
is in your "fast path" locking code with current GIT.

Best regards,
Zoltán Böszörményi





I didn't touch anything but this. It occurs every time when asserts are
activated.

I tried it on 9.1.3, and I couldn't make it crash with the same sequence
of events. So maybe it's something introduced since then? Or is the assert
still valid?

Cheers







--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/




Re: [HACKERS] ECPG FETCH readahead

2012-04-08 Thread Boszormenyi Zoltan

On 2012-04-08 16:25, Michael Meskes wrote:

On Sat, Apr 07, 2012 at 11:50:42AM -0400, Noah Misch wrote:

Both.  The second patch appeared after my first review, based on a comment in
that review.  I looked at it during my re-review before marking the overall
project Ready for Committer.

Thanks.


I do call your attention to a question I raised in my second review: if a
program contains "DECLARE foo READAHEAD 5 CURSOR FOR ..." and the user runs
the program with ECPGFETCHSZ=10 in the environment, should that cursor use a
readahead window of 5 or of 10?  Original commentary:
http://archives.postgresql.org/message-id/20120329004323.ga17...@tornado.leadboat.com

I'd say it should be 5. I don't like an environment variable overwriting a
hard-coded setting. I think this is what you, Noah, thought, too, right? I'd
say let's change this.


Do you want me to change this or will you do it? I am on holiday
and will be back to work on wednesday.

The possibility to test different readahead window sizes
without modifying the source and recompiling was useful.


  Is it possible to allow just READAHEAD without a number?
In that case I would accept the environment variable.


After the 2nd patch is applied, this is exactly the case.
The cursors are driven and accounted using the new functions
but with the readahead window being a single row as the default
value for fetch_readahead is 1.




And some comments mostly directed at Zoltan:

ecpg --help says ...default 0 (disabled)..., but options less than 1 are not
accepted and the default setting of 1 has a comment "Disabled by default". I
guess this needs to be adjusted.


Yes, the help text was not changed in the 2nd patch, I missed that.



Is there a reason why two new options for ecpg were invented? Normally ecpg
options define how the preprocessor works but not the resulting binary.


The -R option simply provides a default without ornamenting
the DECLARE statement.


  Well,
different preprocessor behaviour might result in different binary behaviour of
course. The only option that only affects the resulting binary is "-r" for
"runtime". Again, this is not completely true as the option has to make its way
into the binary, but that's it. Now I wonder whether it would make more sense
to add the two options as runtime options instead. The
--detect-cursor-resultset-size option should work there without a problem.


You are right. This can be a suboption to "-r".


  I
haven't delved into the source code enough to find out if -R changes something
in the compiler stage.


"-R" works just like "-r" in the sense that a value gets passed
to the runtime. "-R" simply changes the default value that gets
passed if no READAHEAD N clause is specified for a cursor.
This is true only if you intend to apply both paches.

Without the 2nd patch and fetch_readahead=0 (no -R option given)
or NO READAHEAD is specified for a cursor, the compiler makes
a distinction between uncacheable cursors driven by ECPGdo() and
cacheable cursors driven by the new runtime functions.

With the 2nd patch applied, this distinction is no more.



The test case cursor-readahead.pgc has a comment saying "test automatic prepare
for all statements". Copy/Paste error?


It must be, yes.


I cannot find a test that tests the environment variable giving the fetch size.
Could you please point me to that?


I didn't write such a test. The reason is that while variables are
exported by make from the Makefile to the binaries run by make
e.g.  CFLAGS et.al. for $(CC), "make check" simply runs pg_regress
once which uses its own configuration file that doesn't have a
way to set or unset an environment variable. This could be a useful
extension to pg_regress though.

Best regards,
Zoltán Böszörményi



Michael



--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/




Re: [HACKERS] patch: improve SLRU replacement algorithm

2012-04-08 Thread Tom Lane
Robert Haas  writes:
> On reflection, it seems to me that the right fix here is to make
> SlruSelectLRUPage() to avoid selecting a page on which an I/O is
> already in progress.

This patch seems reasonably sane to me.  It's not intuitively obvious
that we should ignore I/O-busy pages, but your tests seem to prove
that that's a better algorithm.

However, I do have a couple of quibbles with the comments.  The first
para in the large block comment in SlruSelectLRUPage:

 * If we find any EMPTY slot, just select that one. Else locate the
 * least-recently-used slot to replace.

seems now to be quite out of touch with reality, and correcting it two
paras down doesn't really fix that.  Besides which, you ought to explain
*why* it's ignoring I/O-busy pages.  So perhaps merge the first and
third paras of the comment into something like

 * If we find any EMPTY slot, just select that one.  Else choose
 * a victim page to replace.  We normally take the least recently
 * used valid page, but we will never take the slot containing
 * latest_page_number, even if it appears least recently used.
 * Slots that are already I/O busy are never selected, either:
 * a read-busy slot will not be least recently used once the read
 * finishes, while waiting behind someone else's write has been
 * shown to be less efficient than starting another write.

Or maybe you have a better short description of why this is a good idea,
but there ought to be something here about it.

Also, as a matter of style, I think this comment ought to be inside the
"if" block, not before it:

/*
 * All pages (except possibly the latest one) are I/O busy. We'll have
 * to wait for an I/O to complete and then retry.  We choose to wait
 * for the I/O on the least recently used slot, on the assumption that
 * it was likely initiated first of all the I/Os in progress and may
 * therefore finish first.
 */
if (best_valid_delta < 0)
{
SimpleLruWaitIO(ctl, bestinvalidslot);
continue;
}

I don't know about you, but I read a comment like this as asserting a
fact about the situation when control reaches where the comment is.
So it needs to be inside the "if".  (Analogy: if it were an actual
Assert(all-pages-are-IO-busy), it would have to be inside the if, no?)
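Concretely, that looks like:

if (best_valid_delta < 0)
{
    /*
     * All pages (except possibly the latest one) are I/O busy. We'll have
     * to wait for an I/O to complete and then retry.  We choose to wait
     * for the I/O on the least recently used slot, on the assumption that
     * it was likely initiated first of all the I/Os in progress and may
     * therefore finish first.
     */
    SimpleLruWaitIO(ctl, bestinvalidslot);
    continue;
}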

regards, tom lane



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-08 Thread Dave Cramer
Hi Atri,

Is there some JDBC API that supports this in newer versions of the API?

Dave Cramer

dave.cramer(at)credativ(dot)ca
http://www.credativ.ca



On Sat, Apr 7, 2012 at 7:07 AM, Atri Sharma  wrote:
> Hi All,
>
>
>
> I submitted a GSoc application yesterday. Please review it and let me know
> if anyone needs any clarifications.
>
>
>
> Atri



Re: [HACKERS] ECPG FETCH readahead

2012-04-08 Thread Michael Meskes
On Sat, Apr 07, 2012 at 11:50:42AM -0400, Noah Misch wrote:
> Both.  The second patch appeared after my first review, based on a comment in
> that review.  I looked at it during my re-review before marking the overall
> project Ready for Committer.

Thanks.

> I do call your attention to a question I raised in my second review: if a
> program contains "DECLARE foo READAHEAD 5 CURSOR FOR ..." and the user runs
> the program with ECPGFETCHSZ=10 in the environment, should that cursor use a
> readahead window of 5 or of 10?  Original commentary:
> http://archives.postgresql.org/message-id/20120329004323.ga17...@tornado.leadboat.com

I'd say it should be 5. I don't like an environment variable overwriting a
hard-coded setting. I think this is what you, Noah, thought, too, right? I'd
say let's change this. Is it possible to allow just READAHEAD without a number?
In that case I would accept the environment variable.

And some comments mostly directed at Zoltan:

ecpg --help says ...default 0 (disabled)..., but options less than 1 are not
accepted and the default setting of 1 has a comment "Disabled by default". I
guess this needs to be adjusted.

Is there a reason why two new options for ecpg were invented? Normally ecpg
options define how the preprocessor works but not the resulting binary. Well,
different preprocessor behaviour might result in different binary behaviour of
course. The only option that only affects the resulting binary is "-r" for
"runtime". Again, this is not completely true as the option has to make its way
into the binary, but that's it. Now I wonder whether it would make more sense
to add the two options as runtime options instead. The
--detect-cursor-resultset-size option should work there without a problem. I
haven't delved into the source code enough to find out if -R changes something
in the compiler stage.

The test case cursor-readahead.pgc has a comment saying "test automatic prepare
for all statements". Copy/Paste error?

I cannot find a test that tests the environment variable giving the fetch size.
Could you please point me to that?

Michael 
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
Jabber: michael.meskes at googlemail dot com
VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL



Re: [HACKERS] [PATCH] lock_timeout and common SIGALRM framework

2012-04-08 Thread Boszormenyi Zoltan

On 2012-04-06 14:47, Cousin Marc wrote:

On 05/04/12 08:02, Boszormenyi Zoltan wrote:

On 2012-04-04 21:30, Alvaro Herrera wrote:

I think this patch is doing two things: first touching infrastructure
stuff and then adding lock_timeout on top of that.  Would it work to
split the patch in two pieces?


Sure. Attached is the split version.

Best regards,
Zoltán Böszörményi


Hi,

I've started looking at and testing both patches.

Technically speaking, I think the source looks much better than the
first version of lock timeout, and may help adding other timeouts in the
future.


Thanks.


  I haven't tested it in depth though, because I encountered the
following problem:

While testing the patch, I found a way to crash PG. But what's weird is
that it crashes also with an unpatched git version.

Here is the way to reproduce it (I have done it with a pgbench schema):

- Set a small statement_timeout (just to save time during the tests)

Session1:
=#BEGIN;
=#lock TABLE pgbench_accounts ;

Session 2:
=#BEGIN;
=#lock TABLE pgbench_accounts ;
ERROR:  canceling statement due to statement timeout
=# lock TABLE pgbench_accounts ;

I'm using \set ON_ERROR_ROLLBACK INTERACTIVE by the way. It can also be
done with a rollback to savepoint of course.

Session 2 crashes with this: TRAP: FailedAssertion(
"!(locallock->holdsStrongLockCount == 0)", File: "lock.c", Line: 749).

It can also be done without a statement_timeout, and a control-C on the
second lock table.


Indeed, the unpatched GIT version crashes if you enter
  =#lock TABLE pgbench_accounts ;
the second time in session 2 after the first one failed. Also,
manually spelling it out:

Session 1:

$ psql
psql (9.2devel)
Type "help" for help.

zozo=# begin;
BEGIN
zozo=# lock table pgbench_accounts;
LOCK TABLE
zozo=#

Session 2:

zozo=# begin;
BEGIN
zozo=# savepoint a;
SAVEPOINT
zozo=# lock table pgbench_accounts;
ERROR:  canceling statement due to statement timeout
zozo=# rollback to a;
ROLLBACK
zozo=# savepoint b;
SAVEPOINT
zozo=# lock table pgbench_accounts;
The connection to the server was lost. Attempting reset: Failed.
!>

Server log after the second lock table:

TRAP: FailedAssertion("!(locallock->holdsStrongLockCount == 0)", File: 
"lock.c", Line: 749)
LOG:  server process (PID 12978) was terminated by signal 6: Aborted

Best regards,
Zoltán Böszörményi



I didn't touch anything but this. It occurs every time when asserts are
activated.

I tried it on 9.1.3, and I couldn't make it crash with the same sequence
of events. So maybe it's something introduced since then? Or is the assert
still valid?

Cheers




--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/




Re: [HACKERS] Last gasp

2012-04-08 Thread Boszormenyi Zoltan

On 2012-04-05 20:55, Michael Meskes wrote:

On Thu, Apr 05, 2012 at 02:23:03PM -0400, Tom Lane wrote:

I think the ECPG fetch patch is about ready to go.  Normally Michael
Meskes handles all ECPG patches, but I'm not sure what his schedule is
like.  I'm not sure what the politics are of someone else touching
that code.

I think we should leave that one for Michael.  Frankly, none of the
rest of us pay enough attention to ecpg to be candidates to take
responsibility for nontrivial patches there.

I will take care of this over the next couple days.


Thank you.


  Is the patch that Zoltan
sent last Friday the latest version?


Yes.



Michael



--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/




[HACKERS] Why can't I use pgxs to build a plpgsql plugin?

2012-04-08 Thread Guillaume Lelarge
Hi,

I recently wrote a plpgsql plugin. I wanted to enable the use of pgxs,
to make it easier to compile the plugin, but I eventually found that I
can't do that because the plpgsql.h file is not available in the include
directory.

I'm wondering if we shouldn't put the header files of the plpgsql source
code in the include directory. It would help with compiling the PL/pgSQL
debugger and profiler (and of course my own plugin), as sketched below.
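For reference, the reason a plugin needs plpgsql.h at all is the
PLpgSQL_plugin struct it declares. A minimal sketch of the registration a
plugin performs (illustrative; trailing struct fields omitted):

#include "postgres.h"
#include "fmgr.h"
#include "plpgsql.h"    /* the header that is not installed today */

PG_MODULE_MAGIC;

/* a do-nothing statement hook, just to show what the header provides */
static void
my_stmt_beg(PLpgSQL_execstate *estate, PLpgSQL_stmt *stmt)
{
    elog(DEBUG1, "plpgsql statement at line %d", stmt->lineno);
}

static PLpgSQL_plugin plugin_funcs = {
    NULL,               /* func_setup */
    NULL,               /* func_beg */
    NULL,               /* func_end */
    my_stmt_beg,        /* stmt_beg */
    NULL                /* stmt_end */
};

void
_PG_init(void)
{
    PLpgSQL_plugin **plugin_ptr;

    plugin_ptr = (PLpgSQL_plugin **) find_rendezvous_variable("PLpgSQL_plugin");
    *plugin_ptr = &plugin_funcs;
}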

There could be a good reason why we can't (or don't want to) do this,
but I don't see it right now.

Thanks.

Regards.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com

