Re: [HACKERS] bug in fast-path locking

2012-04-10 Thread Jeff Davis
On Mon, 2012-04-09 at 22:47 -0700, Jeff Davis wrote:
 but other similar paths do:
 
   if (!proclock)
   {
 AbortStrongLockAcquire();
 
 I don't think it's necessary outside of LockErrorCleanup(), right?

I take that back, it's necessary for the dontwait case, too.

Regards,
Jeff Davis


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [BUGS] BUG #6522: PostgreSQL does not start

2012-04-10 Thread Amit Kapila
I cannot see your task manager; maybe you can send it as a .bmp attachment
with this mail.

As there is only one postgres process, it seems your PostgreSQL server itself
has not started.

For the second point ("I have little experience with computers, you could
help me write the correct command"):

a. Go to your PostgreSQL installation directory in a command prompt.

b. Run the command: postgres.exe -D "your data directory path"

c. For your data directory path, check the path where you created the
initial database with initdb.

From: Tatiana Ortiz [mailto:tatyp...@gmail.com] 
Sent: Tuesday, April 10, 2012 9:37 AM
To: Amit Kapila
Subject: Re: [BUGS] BUG #6522: PostgreSQL does not start

Thanks for your help, and sorry for the delay; in Puerto Rico we had some
days off. I did the first thing you recommended.

In task manager I see one process:

For the second, I have little experience with computers; could you help me
write the correct command?

Tatiana

On Thu, Apr 5, 2012 at 9:24 PM, Amit Kapila amit.kap...@huawei.com wrote:

From what I can see in this report, it is not confirmed whether the
PostgreSQL server has started properly.

You can try confirming the following 2 points:
1. How many postgres processes are you able to see in your task manager?
   This can give a hint whether the appropriate postgres services are started.
2. Try to start postgres from the command prompt with the command
   postgres.exe -D "your data directory path"
   Please tell what it prints on the command prompt.



-Original Message-
From: pgsql-bugs-ow...@postgresql.org
[mailto:pgsql-bugs-ow...@postgresql.org] On Behalf Of Kevin Grittner
Sent: Monday, April 02, 2012 11:43 PM
To: Tatiana Ortiz
Cc: pgsql-b...@postgresql.org
Subject: Re: [BUGS] BUG #6522: PostgreSQL does not start

[Please keep the list copied.  I won't respond to any more emails
directly to me without a copy to the list.]

Tatiana Ortiz tatyp...@gmail.com wrote:
 Kevin Grittner kevin.gritt...@wicourts.gov wrote:

 Test if you have network connectivity from your client to the
 server host using ping or equivalent tools.

 Do you get a response when you ping 127.0.0.1?

 I have not tried that.

Well, nobody here can, so how will we know if that is working?

 Is your network / VPN/SSH tunnel / firewall configured
 correctly?

 What did you do to check that?

 It's configured correctly; I have verified it in the control
 panel.

What, exactly, did you verify was true about the configuration?

 If you double-checked your configuration but still get this
 error message, it's still unlikely that you have encountered a fatal
 PostgreSQL misbehavior. You probably have some low-level network
 connectivity problems (e.g. firewall configuration). Please
 check this thoroughly before reporting a bug to the PostgreSQL
 community.

 What did you do to check this?

 The Firewall configuration is correct.

Something isn't.  If you have a firewall running, I sure wouldn't
rule it out without pretty good evidence pointing to something else.
Do you have an anti-virus product installed?  (Note, I didn't ask
whether it was enabled -- even when supposedly disabled, many AV
products can cause problems like this.)

 Your previous email mentioned deleting the postmaster.pid file.
 Do you have any more detail on what you did?

 When I deleted the postmaster.pid and then went to Services
 to restart the PostgreSQL service, the file reappeared.

That's an interesting data point, although not enough to pin it down
without other facts not in evidence.

 If you know of something I could do to gain access to the
 database let me know.

 Start Task Manager and look in the Processes tab.  Are there any
 Postgres processes active?

 [suggestion apparently ignored]

If you won't provide information, nobody can help you.

 From a command line, run:

 netstat -p TCP -a

 and see if anything is listening on port 5432.

 I tried this, and it gave me this result:

 [image: nothing listening on port 5432]

So, either the PostgreSQL service isn't running, or it is not
offering IP services on the default port.

Is anything appearing in any of the Windows event logs around the
time you attempt to start the service?  Can you find a PostgreSQL
log file anywhere?  Without knowing what installer was used, it
would be hard to suggest where to look, but sometimes the log files
are in a subdirectory named pg_log, and sometimes there is a file
named logfile in the PostgreSQL data directory.

Assistance on this list is provided by volunteers.  If you don't
care enough about what you've got wrong in your environment to
perform the requested diagnostic steps, those contributing their
time are likely to lose interest and stop responding.  I have 200
databases running just fine on 100 servers scattered across the
state.  What are you doing that isn't working?  It's not my
responsibility to sort that out, but I'm willing to help if you're
willing to take responsibility for your 

Re: [HACKERS] To Do wiki

2012-04-10 Thread Heikki Linnakangas

On 10.04.2012 03:32, Jeff Janes wrote:

The To Do wiki says not to add things to the page without discussing them here.

So here are some things to discuss.  Assuming the discussion is a
brief yup or nope, it seems to make sense to lump them into one email:

Vacuuming a table with a large GIN index is painfully slow, because
the index is vacuumed in logical order not physical order.  Is making
a vacuum in physical order a to-do?  Does this belong to vacuuming, or
to GIN indexing?  Looking at the complexity of how this was done for
btree index, I would say this is far from easy.  I wonder if there is
an easier way that is still good enough, for example every time you
split a page, check to see if a vacuum is in the index, and if so only
move tuples physically rightward.  If the table is so active that
there is essentially always a vacuum in the index, this could lead to
bloat.  But if the table is that large and active, under the current
non-physical order the vacuum would likely take approximately forever
to finish and so the bloat would be just as bad under that existing
system.


Yup, seems like a todo. It doesn't sound like a good idea to force 
tuples to be moved right when a vacuum is in progress, since that could 
lead to bloating, but it should be feasible to implement the same 
cycleid mechanism in GIN that we did in b-tree.



"Speed up COUNT(*)" is marked as done.  While index-only scans should
speed this up in certain cases, it is nothing compared to the speed-up
that could be obtained by using a fixed row count and a +/- count to
follow MVCC visibility rules, and that speed-up is the one people
used to MyISAM are expecting.  We might not want to actually implement
the fixed-row-count +/- MVCC-count idea, but we probably shouldn't
mark the whole thing as done just because one approach to it was
implemented.


I think the way we'd speed up COUNT(*) further would be to implement 
materialized views. Then you could define a materialized view on 
COUNT(*), and essentially get a row counter similar to MyISAM. I think 
it's fair to mark this as done.



SortSupport was implemented for plain tuple sorting only; the To Do is
to extend it to index-creation sorts (item 2 from message
1698.1323222...@sss.pgh.pa.us).


Index-creation sorts are already handled, Tom is referring to using the 
new comparator API for index searches in that email. The change would go 
to _bt_compare().


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] bug in fast-path locking

2012-04-10 Thread Boszormenyi Zoltan

On 2012-04-09 19:32, Robert Haas wrote:

On Sun, Apr 8, 2012 at 9:37 PM, Robert Haas robertmh...@gmail.com  wrote:

Robert, the Assert triggering with the above procedure
is in your fast path locking code with current GIT.

Yes, that sure looks like a bug.  It seems that if the top-level
transaction is aborting, then LockReleaseAll() is called and
everything gets cleaned up properly; or if a subtransaction is
aborting after the lock is fully granted, then the locks held by the
subtransaction are released one at a time using LockRelease(), but if
the subtransaction is aborted *during the lock wait* then we only do
LockWaitCancel(), which doesn't clean up the LOCALLOCK.  Before the
fast-lock patch, that didn't really matter, but now it does, because
that LOCALLOCK is tracking the fact that we're holding onto a shared
resource - the strong lock count.  So I think that LockWaitCancel()
needs some kind of adjustment, but I haven't figured out exactly what
it is yet.

I looked at this more.  The above analysis is basically correct, but
the problem goes a bit beyond an error in LockWaitCancel().  We could
also crap out with an error before getting as far as LockWaitCancel()
and have the same problem.  I think that a correct statement of the
problem is this: from the time we bump the strong lock count, up until
the time we're done acquiring the lock (or give up on acquiring it),
we need to have an error-cleanup hook in place that will unbump the
strong lock count if we error out.   Once we're done updating the
shared and local lock tables, the special handling ceases to be
needed, because any subsequent lock release will go through
LockRelease() or LockReleaseAll(), which will do the appropriate
cleanup.

The attached patch is an attempt at implementing that; any reviews appreciated.


This patch indeed fixes the scenario discovered by Cousin Marc.

Reading this patch also made me realize that my lock_timeout
patch needs adjusting, i.e. needs an AbortStrongLockAcquire()
call if waiting for a lock timed out.

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/




Re: [HACKERS] ECPG FETCH readahead

2012-04-10 Thread Boszormenyi Zoltan

On 2012-04-08 19:38, Michael Meskes wrote:

On Sun, Apr 08, 2012 at 06:35:33PM +0200, Boszormenyi Zoltan wrote:

Do you want me to change this or will you do it? I am on holiday
and will be back to work on wednesday.

I don't think waiting till later this week is a real problem.


OK.




The possibility to test different readahead window sizes
without modifying the source and recompiling was useful.

Sure, but you can still do that when not defining a fixed number in the
statement.


OK.




The -R option simply provides a default without ornamenting
the DECLARE statement.

Could you please incorporate these changes, too, when you're back from vacation?


Sure.

So, it's established that a specified READAHEAD N should not
be overridden. Even an explicit READAHEAD 1.

Only a non-decorated cursor can be overridden, even if
a different default readahead window size is specified with
e.g. ecpg -R 8. If ECPGFETCHSZ is not present, 8 will be used,
if ECPGFETCHSZ is present, its value will be used. ECPGopen()
will need an extra bool argument to distinguish this.

Is this acceptable? Noah, Michael?




I cannot find a test that tests the environment variable giving the fetch size.
Could you please point me to that?

I didn't write such a test. The reason is that while variables are
exported by make from the Makefile to the binaries make runs
(e.g. CFLAGS et al. for $(CC)), make check simply runs pg_regress
once, which uses its own configuration file that doesn't have a
way to set or unset an environment variable. This could be a useful
extension to pg_regress, though.

How about calling setenv() from the test program itself?


Sure, I didn't think about it. It should be done before the
first EXEC SQL OPEN cursor.

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/




Re: [HACKERS] Deprecating non-select rules (was Re: Last gasp)

2012-04-10 Thread Andres Freund
On Monday, April 09, 2012 05:32:43 PM Noah Misch wrote:
 On Mon, Apr 09, 2012 at 03:35:06PM +0200, Andres Freund wrote:
  On Monday, April 09, 2012 03:25:36 PM Robert Haas wrote:
   contrib/xml2 isn't doing us much harm beyond being an ugly wart, but
   non- SELECT rules are a land mine for the unwary at best.
  
  Which we could start deprecating now btw. since INSTEAD triggers landed
  in 9.1. There were quite some use-cases for non-select rules that
  couldn't be fullfilled before but I think saying that we won't support
  those rules for more than 3 releases or so might be a good idea. I have
  seen too many bugs being caused by experienced people not realizing the
  pitfalls of rules.
 
 A new documentation section Pitfalls of the Rule System discussing the
 known hazards would help users immediately and be far easier to adopt.
You're right. The pitfalls should at least be hinted at. Writing a 
comprehensive guide on what's safe and what's not isn't exactly easy 
without going into quite a bit of detail.

Here is what I know and what comes to my mind right now:

1. anything but INSTEAD rules is unsafe
2. the use of volatile functions (like nextval()) directly or indirectly 
(DEFAULT) needs to be vetted very carefully even if 1. is adhered to
3. the snapshot behaviour of an expanded statement is a bit confusing if it 
contains multiple statements, which causes problems with the rather frequent 
attempts to build rules with upsert-ish behaviour

While documenting it is sensible, a hidden note somewhere doesn't help very 
much. I personally think the rule system is too broken to be salvageable for 
anything but the internals of views.

A very trivial, seemingly innocuous, but totally broken usage of rules:

CREATE TABLE data(data_id serial primary key, data int);
CREATE TABLE audit(audit_id serial primary key, data_id int NOT NULL,
data int);

CREATE RULE data_audit AS ON INSERT TO data DO ALSO
INSERT INTO audit(data_id, data) VALUES(NEW.data_id, NEW.data);

INSERT INTO data(data) VALUES(1);

-- The two data_id values disagree: in the rule, NEW.data_id expands to
-- the column's DEFAULT expression, so nextval() runs a second time and
-- audit records a different data_id than data.
SELECT * FROM data;
SELECT * FROM audit;

Greetings,

Andres

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] pg_prewarm

2012-04-10 Thread Cédric Villemain
  pgfincore does not use the PostgreSQL buffer manager; it uses POSIX
  calls. It can proceed per block or per full relation.
  
  Both need a POSIX_FADVISE-compatible system to be efficient.
  
  The main difference between pgfincore and pg_prewarm for full-relation
  warming is that pgfincore makes very few system calls where pg_prewarm
  makes many more.
 
 That's a fair complaint, but I'm not sure it matters in practice,
 because I think that in real life the time spent prewarming is going
 to be dominated by I/O, not system call time.  Now, that's not an
 excuse for being less efficient, but I actually did have a reason for
 doing it this way, which is that it makes it work on systems that
 don't support POSIX_FADVISE, like Windows and MacOS X.  Unless I'm
 mistaken or it's changed recently, pgfincore makes no effort to be
 cross-platform, whereas pg_prewarm should be usable anywhere that
 PostgreSQL is, and you'll be able to do prewarming in any of those
 places, though of course it may be a bit less efficient without
 POSIX_FADVISE, since you'll have to use the read or buffer mode
 rather than prefetch.  Still, being able to do it less efficiently
 is better than not being able to do it at all.
 
 Again, I'm not saying this to knock pgfincore: I see the advantages of
 its approach in exposing a whole suite of tools to people running on,
 well, the operating systems on which the largest number of people run
 PostgreSQL.  But I do think that being cross-platform is an advantage,
 and I think it's essential for anything we'd consider shipping as a
 contrib module.  I think you could rightly view all of this as
 pointing to a deficiency in the APIs exposed by core: there's no way
 for anything above the smgr layer to do anything with a range of
 blocks, which is exactly what we want to do here.  But I wasn't as
 interested in fixing that as I was in getting something which did what
 I needed, which happened to be getting the entirety of a relation into
 shared_buffers without much ado.

Agreed; pgfincore's first use was to analyze cache usage and performance 
impacts. (This works on systems having mincore(), not only Linux; only 
Windows is really different, and while I could add support for it, I've 
never been asked for that. I can do it if it helps going to contrib/, if 
someone cares.)

Warming with pg_prewarm looks really cool and it does the job. pgfincore's 
only advantage here is that if you call POSIX_FADVISE on the whole file, the 
kernel will *try* to load as much as possible without evicting the cache it 
already has. My experience is that if you advise block per block, all the 
blocks you touch end up in cache (and eviction can occur more often).

 
  The current implementation of pgfincore allows to make a snapshot and
  restore via pgfincore or via pg_prewarm (just need some SQL-fu for the
  later).
 
 Indeed.
 
 Just to make completely clear my position on pgfincore vs. pg_prewarm,
 I think they are complementary utilities with a small overlap.  I
 think that the prewarming is important enough to a broad enough group
 of people that we should find some way of exposing that functionality
 in core or contrib, and I wrote pg_prewarm as a minimalist
 implementation of that concept.  I am not necessarily opposed to
 someone taking the bull by the horns and coming up with a grander
 vision for what kind of tool we pull into the core distribution -
 either by extending pg_prewarm, recasting pgfincore as a contrib
 module with appropriate cross-platform sauce, or coming up with some
 third approach that is truly the one ring to rule them all and in the
 darkness bind them.  At the same time, I want to get something done
 for 9.3 and I don't want to make it harder than it needs to be.  I
 honestly believe that just having an easy way to pull stuff into
 memory/shared_buffers will give us eighty to ninety percent of what
 people need in this area; we can do more, either in core or elsewhere,
 as the motivation may strike us.
 
 Attached is an updated patch, with fixes for documentation typo noted
 by Jeff Janes and some addition documentation examples also inspired
 by comments from Jeff.

Load-per-block is indeed very useful, as the slave can catch up with the 
workload much more quickly in case of switchover, for example (this is why 
I've moved the pgfincore results into a varbit that can be shared with the 
slaves more easily).

I have no problem deprecating overlapping features from pgfincore as soon as 
I can do a «depend:pg_prewarm[os_warm]»  :)
It would have been better to split pgfincore's analyze and warming parts some 
time ago, anyway.

-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation


Re: [HACKERS] Regarding GSoc proposal

2012-04-10 Thread Albe Laurenz
Atri Sharma wrote:
 I submitted a proposal for GSoC 2012. Please review it and let me know
 your comments.
 
 The link is:
 

https://google-melange.appspot.com/gsoc/proposal/review/google/gsoc2012/
atrisharma/1001

I think that this is a pretty cool idea.
Have you contacted the developers of PL/Java? Close collaboration with
them might be beneficial for both projects.

Yours,
Laurenz Albe



Re: [HACKERS] Regarding GSoc proposal

2012-04-10 Thread Atri Sharma
On Tue, Apr 10, 2012 at 5:38 PM, Albe Laurenz laurenz.a...@wien.gv.at wrote:
 Atri Sharma wrote:
 I submitted a proposal for GSoC 2012. Please review it and let me know
 your comments.

 The link is:


 https://google-melange.appspot.com/gsoc/proposal/review/google/gsoc2012/
 atrisharma/1001

 I think that this is a pretty cool idea.
 Have you contacted the developers of PL/Java?  Close collaboration with
 them
 might be beneficial for both projects.

 Yours,
 Laurenz Albe

Hi Laurenz,

Thank you for appreciating my proposal.

I will surely contact the PL/Java team and get their view on this.

Please let me know if you have any suggestions.

Atri

-- 
Regards,

Atri
l'apprenant



Re: [HACKERS] Patch: add timing of buffer I/O requests

2012-04-10 Thread Robert Haas
On Thu, Apr 5, 2012 at 11:45 AM, Robert Haas robertmh...@gmail.com wrote:
 Meanwhile, pg_stat_statements converts the same data to seconds but
 makes it a double rather than a bigint.  I think that was a bad idea,
 and we should make it consistently use a bigint and milliseconds while
 we still can.

Hmm.  So, on further review, this is not as simple as it seems.  I'd
like some input from other people on what we should do here.

pg_stat_statements has long exposed a column called total_time as a
float8.  It now exposes columns time_read and time_write which are
actually measuring the time spent reading and writing data blocks, and
those are also exposed as a float8; all these count seconds.

Meanwhile, all times exposed by the stats collector (including the new
and analogous pg_stat_database.block_read_time and
pg_stat_database.block_write_time columns) are exposed as int8; these
count milliseconds.

So, should we make the new columns exposed by pg_stat_statements use
milliseconds, so that the block read/write timings are everywhere in
milliseconds, or should we keep them as a float8, so that all the
times exposed by pg_stat_statements use float8?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



[HACKERS] disposition of remaining patches

2012-04-10 Thread Robert Haas
Looking over the remaining patches that still aren't closed in the
January CommitFest:

Foreign keys with arrays - Tom wants to commit this at the beginning
of a release cycle rather than the end, but there's no actual known
problem with it.  Therefore I suggest moving it to the first 9.3
CommitFest.

ECPG FETCH readahead - Michael Meskes is going to commit this soon;
everyone seems to agree it's ready to go.

pgsql_fdw contrib module - It seems like this is still in the midst of
discussions about what the behavior should be, so it seems like
Returned with Feedback is the only place for it to go.

check function statement - Heikki stated that he isn't comfortable
committing this because it's got too much duplicative code, so I think
we should mark Returned with Feedback until someone does some more
work in that area.

Add timing of buffer I/O requests - This is basically committed, but I
have an outstanding question which I just posted on the relevant
thread.

URI connection string support for libpq - I'm unclear whether Alvaro or
Peter still intends to try to slip this one in.  It's simple enough
that I think that would be OK if it can be done in the next day or
two.  Otherwise, 9.3.

parallel pg_dump - I think this one needs to get moved to the first
9.3 CommitFest.  There is more work to be done there than we can
realistically do right now, but I think we can pick it up for the next
cycle.

Thoughts/comments?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-10 Thread Andrew Dunstan



On 04/09/2012 01:25 PM, Atri Sharma wrote:

On Mon, Apr 9, 2012 at 10:15 PM, Andrew Dunstan and...@dunslane.net  wrote:


On 04/09/2012 12:14 PM, Dave Cramer wrote:


So I'm confused, once they link a file to an FDW can't you just read
it with an normal select ?

What additional functionality will this provide ?




I'm confused about what you're confused about. Surely this won't be linking
files to an FDW, but foreign DBMS tables, in anything you can access via
JDBC. All you'll need on the postgres side is the relevant JDBC driver, so
you'd have instant access via standard select queries to anything you can
get a JDBC driver to talk to. That seems to me something worth having.

I imagine it would look rather like this:

   CREATE FOREIGN DATA WRAPPER foodb HANDLER pljava_jdbc_handler
   OPTIONS (driver 'jdbc.foodb.org');
   CREATE SERVER myfoodb FOREIGN DATA WRAPPER foodb
   OPTIONS (host '1.2.3.4', user 'foouser', password 'foopw');
   CREATE FOREIGN TABLE footbl (id int, data text) SERVER myfoodb;
   SELECT * FROM footbl;


cheers

andrew

Hi Andrew,

Thanks for going through my proposal and commenting on it.

I think you have hit the nail on the head. We will be connecting to the
foreign DBMS tables. The main aim of the project is to wrap JDBC so we
can connect to anything that can be reached through a JDBC URL.

I am considering two paths for doing this:
The first one takes the help of the SPI(Server Programming Interface)
and the second one directly connects through Pl/Java and JNI(Java
Native Interface).



I'd say forget SPI - I don't think it's going to help you here. Just 
concentrate on getting the functionality via a PL/Java wrapper. I 
wouldn't worry too much about jdbc style URLs either, since logically I 
think you'd want to specify the connection parameters via server and FDW 
options as in my example above - that way it would be consistent with 
other FDWs. But that's a piece of bikeshedding for now. Basically, you 
want to implement the handler function to start with.


cheers

andrew



Re: [HACKERS] Deprecating non-select rules (was Re: Last gasp)

2012-04-10 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 Here is what I know and what comes to my mind right now:

 1. anything but INSTEAD rules is unsafe

How so?  I agree that volatile functions are problematic, but unless
there's one of those in the picture I think rules work pretty much as
documented.

 3. the snapshot behaviour of an expanded statement is a bit confusing if it 
 contains multiple statements, which causes problems with the rather frequent 
 attempts to build rules with upsert-ish behaviour

Again, not sure what you're complaining about here.

 A very trivial, seemingly innocuous, but totally broken usage of rules:

The problem illustrated here is all down to nextval() being volatile,
no?

regards, tom lane



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-10 Thread Merlin Moncure
On Tue, Apr 10, 2012 at 8:42 AM, Andrew Dunstan and...@dunslane.net wrote:
 I am considering two paths for doing this:
 The first one takes the help of the SPI(Server Programming Interface)
 and the second one directly connects through Pl/Java and JNI(Java
 Native Interface).


 I'd say forget SPI - I don't think it's going to help you here. Just
 concentrate on getting the functionality via a PL/Java wrapper. I wouldn't
 worry too much about jdbc style URLs either, since logically I think you'd
 want to specify the connection parameters via server and FDW options as in
 my example above - that way it would be consistent with other FDWs. But
 that's a piece of bikeshedding for now. Basically, you want to implement the
 handler function to start with.

how do you cross from FDW into a pl/java routine without SPI?

merlin



Re: [HACKERS] disposition of remaining patches

2012-04-10 Thread Andrew Dunstan



On 04/10/2012 09:40 AM, Robert Haas wrote:


parallel pg_dump - I think this one needs to get moved to the first
9.3 CommitFest.  There is more work to be done there than we can
realistically do right now, but I think we can pick it up for the next
cycle.




Yeah, I'm only about 1/4 of the way through it :-(

cheers

andrew



Re: [HACKERS] Patch: add timing of buffer I/O requests

2012-04-10 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 Hmm.  So, on further review, this is not as simple as it seems.  I'd
 like some input from other people on what we should do here.

 pg_stat_statements has long exposed a column called total_time as a
 float8.  It now exposes columns time_read and time_write which are
 actually measuring the time spent reading and writing data blocks, and
 those are also exposed as a float8; all these count seconds.

 Meanwhile, all times exposed by the stats collector (including the new
 and analogous pg_stat_database.block_read_time and
 pg_stat_database.block_write_time columns) are exposed as int8; these
 count milliseconds.

 So, should we make the new columns exposed by pg_stat_statements use
 milliseconds, so that the block read/write timings are everywhere in
 milliseconds, or should we keep them as a float8, so that all the
 times exposed by pg_stat_statements use float8?

Given that we've whacked pg_stat_statements' behavior around rather
thoroughly in this release, maybe we could get away with redefining
total_time as being measured in msec rather than sec, thereby aligning
units as msec across the board.  It's arguably a smaller deal than the
way we've redefined what the query column contains...

float8 vs int8 is a distinct issue, and probably one that is not as
much of an impact on clients if we change it.  It is not hard to predict
that somebody will eventually want sub-msec resolution on these things,
which would suggest that float8 would be the better idea.  But perhaps
we could leave that change for a future release.

regards, tom lane



Re: [HACKERS] Patch: add timing of buffer I/O requests

2012-04-10 Thread Peter Geoghegan
On 10 April 2012 14:33, Robert Haas robertmh...@gmail.com wrote:
 So, should we make the new columns exposed by pg_stat_statements use
 milliseconds, so that the block read/write timings are everywhere in
 milliseconds, or should we keep them as a float8, so that all the
 times exposed by pg_stat_statements use float8?

I believe that we should keep them as float8, on the basis that a user
is more likely to generalise from total_time's format (when writing a
script to query the view or whatever) than from that of
pg_stat_database.

A part of me would like to change the view definitions so that all the
columns are strongly typed (i.e. all these values would be intervals).
I realise that that isn't practical though.

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-10 Thread Andrew Dunstan



On 04/10/2012 09:48 AM, Merlin Moncure wrote:

On Tue, Apr 10, 2012 at 8:42 AM, Andrew Dunstan and...@dunslane.net wrote:

I am considering two paths for doing this:
The first one takes the help of the SPI(Server Programming Interface)
and the second one directly connects through Pl/Java and JNI(Java
Native Interface).


I'd say forget SPI - I don't think it's going to help you here. Just
concentrate on getting the functionality via a PL/Java wrapper. I wouldn't
worry too much about jdbc style URLs either, since logically I think you'd
want to specify the connection parameters via server and FDW options as in
my example above - that way it would be consistent with other FDWs. But
that's a piece of bikeshedding for now. Basically, you want to implement the
handler function to start with.

how do you cross from FDW into a pl/java routine without SPI?




Add the FDW handler as a sibling function of the function call handler. 
At least that would be my first approach to writing a DBI::DBD FDW 
wrapper for plperl, which I naturally know rather better than the 
PL/Java code.


cheers

andrew



Re: [HACKERS] disposition of remaining patches

2012-04-10 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 Looking over the remaining patches that still aren't closed in the
 January CommitFest:
 [ all but ECPG readahead and maybe libpq URIs have to go to 9.3 ]

Yeah, I agree.  I'm not comfortable with squeezing in the array foreign
keys stuff at this point, and the others are clearly not ready.

I put the buffer I/O timing units issue on the open-items list
yesterday:
http://wiki.postgresql.org/wiki/PostgreSQL_9.2_Open_Items
and I encourage people to start using that to track must-fix-for-9.2
items.

We should at this point be focusing our efforts on getting a beta out.
Some but perhaps not all of the open items have to be resolved before
beta1, and we definitely need at least draft-quality release notes
so beta testers know what to test.  Any other must-do tasks out there?

regards, tom lane



Re: [HACKERS] disposition of remaining patches

2012-04-10 Thread Michael Meskes
On Tue, Apr 10, 2012 at 09:40:58AM -0400, Robert Haas wrote:
 ECPG FETCH readahead - Michael Meskes is going to commit this soon;
 everyone seems to agree it's ready to go.

It still needs a couple minor tweaks but I think it will be done shortly.

Michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
Jabber: michael.meskes at googlemail dot com
VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL



Re: [HACKERS] Last gasp

2012-04-10 Thread Kevin Grittner
Christopher Browne cbbro...@gmail.com wrote:
 Robert Haas robertmh...@gmail.com wrote:
 
 CommitFests are a time for patches that are done or very nearly
 done to get committed, and a time for other patches to get
 reviewed if they haven't been already.  If we make it clear that
 the purpose of the CommitFest is to assess whether the patch is
 committable, rather than to provide an open-ended window for it
 to become committable, we might do better.
 
 Yeah, I think there's pretty good room for a +1 on that.
 
Yeah, +1 for sure.
 
One other sort of mechanical test which I think can and should be
applied to patches submitted to the last CF is that if *at the start
of the CF* the patch doesn't apply, compile, pass regression tests,
and demonstrably provide the functionality claimed for the patch, it
should not be a candidate for inclusion in the release.  A patch on
which the author is continuing to work even in the absence of review
should be considered a "WIP, want feedback" submission; it should not
be allowed to constitute a placeholder for inclusion in the
release.  It's one thing if review turns up corner case bugs missed
by the author; it's quite another if there is a month or two of
solid development left to be done.  The CF period is not the time for
"now I'll get serious about wrapping this up."
 
<onlyhalfkidding>Perhaps we should have a concept of "feature
months" -- so that when we look at holding up a release with 20
features for two months so that one more feature can make it in, the
cost side of the equation is 40 feature-months, and the benefit is
10 feature-months.  (Remember, you can't count the added feature as
though it's there for a year before the next release if it holds the
release up.)</onlyhalfkidding>
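Worked through with the example's hypothetical numbers (a sketch of the half-joking metric, not real release data):

```python
# Hypothetical numbers from the example: 20 finished features, a
# two-month slip, and a twelve-month release cycle.
features_ready = 20
delay_months = 2
release_cycle_months = 12

# Cost: every finished feature is withheld from users for the slip.
cost = features_ready * delay_months                 # 40 feature-months

# Benefit: the late feature ships earlier than the *next* release,
# but only by the cycle length minus the slip -- not a full year.
benefit = 1 * (release_cycle_months - delay_months)  # 10 feature-months

print(cost, benefit)  # 40 10
```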
 
-Kevin



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-10 Thread Merlin Moncure
On Tue, Apr 10, 2012 at 9:15 AM, Andrew Dunstan and...@dunslane.net wrote:


 On 04/10/2012 09:48 AM, Merlin Moncure wrote:

 On Tue, Apr 10, 2012 at 8:42 AM, Andrew Dunstan and...@dunslane.net
  wrote:

 I am considering two paths for doing this:
 The first one takes the help of the SPI(Server Programming Interface)
 and the second one directly connects through Pl/Java and JNI(Java
 Native Interface).

 I'd say forget SPI - I don't think it's going to help you here. Just
 concentrate on getting the functionality via a PL/Java wrapper. I
 wouldn't
 worry too much about jdbc style URLs either, since logically I think
 you'd
 want to specify the connection parameters via server and FDW options as
 in
 my example above - that way it would be consistent with other FDWs. But
 that's a piece of bikeshedding for now. Basically, you want to implement
 the
 handler function to start with.

 how do you cross from FDW into a pl/java routine without SPI?



 Add the FDW handler as a sibling function of the function call handler. At
 least that would be my first approach to writing a DBI::DBD FDW wrapper for
 plperl, which I naturally know rather better than the PL/Java code.

right -- well looking at plperl.c, I take it you are meaning to invoke
(or write a similar function to) the plperl_call_handler, right?  hm
-- if you could do that then yes, that would work for pl/java as well
and it would bypass the SPI interface.  this is avoiding the public
APIs, so it's going to take some time to figure out how to set up the
call and walk the result since I don't know how to do that off the top
of my head.

merlin



Re: [HACKERS] ECPG FETCH readahead

2012-04-10 Thread Noah Misch
On Tue, Apr 10, 2012 at 09:35:21AM +0200, Boszormenyi Zoltan wrote:
 So, it's established that a specified READAHEAD N should not
 be overridden. Even an explicit READAHEAD 1.

 Only a non-decorated cursor can be overridden, even if
 a different default readahead window size is specified with
 e.g. ecpg -R 8. If ECPGFETCHSZ is not present, 8 will be used,
 if ECPGFETCHSZ is present, its value will be used. ECPGopen()
 will need an extra bool argument to distinguish this.

 Is this acceptable? Noah, Michael?

Sounds perfect.
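A sketch of the precedence rule being agreed on here (the function and argument names are invented for illustration; they are not ecpg's):

```python
import os

def readahead_window(declared_size, compiled_default, env=None):
    """Hypothetical helper showing the precedence just agreed on: an
    explicit READAHEAD N -- even N = 1 -- is never overridden; an
    undecorated cursor takes ECPGFETCHSZ if present, else the default
    given with ecpg -R."""
    if env is None:
        env = os.environ
    if declared_size is not None:
        return declared_size        # explicit READAHEAD N always wins
    fetchsz = env.get("ECPGFETCHSZ")
    if fetchsz is not None:
        return int(fetchsz)         # env var overrides the -R default
    return compiled_default         # e.g. ecpg -R 8

print(readahead_window(None, 8, env={}))                      # 8: -R default
print(readahead_window(None, 8, env={"ECPGFETCHSZ": "256"}))  # 256: env wins
print(readahead_window(1, 8, env={"ECPGFETCHSZ": "256"}))     # 1: decoration wins
```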



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-10 Thread Andrew Dunstan



On 04/10/2012 10:34 AM, Merlin Moncure wrote:

On Tue, Apr 10, 2012 at 9:15 AM, Andrew Dunstan and...@dunslane.net wrote:


On 04/10/2012 09:48 AM, Merlin Moncure wrote:

On Tue, Apr 10, 2012 at 8:42 AM, Andrew Dunstan and...@dunslane.net
  wrote:

I am considering two paths for doing this:
The first one takes the help of the SPI(Server Programming Interface)
and the second one directly connects through Pl/Java and JNI(Java
Native Interface).


I'd say forget SPI - I don't think it's going to help you here. Just
concentrate on getting the functionality via a PL/Java wrapper. I
wouldn't
worry too much about jdbc style URLs either, since logically I think
you'd
want to specify the connection parameters via server and FDW options as
in
my example above - that way it would be consistent with other FDWs. But
that's a piece of bikeshedding for now. Basically, you want to implement
the
handler function to start with.

how do you cross from FDW into a pl/java routine without SPI?



Add the FDW handler as a sibling function of the function call handler. At
least that would be my first approach to writing a DBI::DBD FDW wrapper for
plperl, which I naturally know rather better than the PL/Java code.

right -- well looking at plperl.c, I take it you are meaning to invoke
(or write a similar function to) the plperl_call_handler, right?  hm
-- if you could do that then yes, that would work for pl/java as well
and it would bypass the SPI interface.  this is avoiding the public
APIs, so it's going to take some time to figure out how to set up the
call and walk the result since I don't know how to do that off the top
of my head.




I don't understand what the heck you're talking about, TBH. From a user 
perspective there is nothing to work out. It will look like any other 
FDW. The implementor of the FDW handler will have to work out the glue 
between postgres and the JVM, but that's not going to be you, right?


cheers

andrew



Re: [HACKERS] ECPG FETCH readahead

2012-04-10 Thread Michael Meskes
On Tue, Apr 10, 2012 at 10:37:22AM -0400, Noah Misch wrote:
  Only a non-decorated cursor can be overridden, even if
  a different default readahead window size is specified with
  e.g. ecpg -R 8. If ECPGFETCHSZ is not present, 8 will be used,
  if ECPGFETCHSZ is present, its value will be used. ECPGopen()
  will need an extra bool argument to distinguish this.
 
  Is this acceptable? Noah, Michael?
 
 Sounds perfect.

Fine by me.

Michael

-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
Jabber: michael.meskes at googlemail dot com
VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL



Re: [HACKERS] Last gasp

2012-04-10 Thread Will Crawford
On 6 April 2012 01:19, Noah Misch n...@leadboat.com wrote:
 On Thu, Apr 05, 2012 at 02:34:30PM -0400, Robert Haas wrote:
 On Thu, Apr 5, 2012 at 2:23 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  The FK arrays one I'm kind of queasy about.  It's a cool-sounding idea
  but I'm not convinced that all the corner-case details have been
  adequately thought through, and I'm scared of being unable to fix any
  such bugs in later versions because of backwards compatibility worries.
  It'd be a lot better to be pushing this in at the start of a devel cycle
  than the end.

 I've been feeling that that patch has been suffering from a lack of
 reviewer attention, which is a real shame, because I think the
 functionality is indeed really cool.  But I haven't looked at it
 enough to know what kind of shape it's in.

 As the reviewer, I'm not aware of any unexplored corner cases or problems that
 ought to preclude commit.  That said, it is a large patch; I doubt anyone
 could pick it up from scratch and commit it with less than a solid day's
 effort, and 2-3 days might be more likely.  In retrospect, I should have
 suggested splitting the new ON DELETE/ON UPDATE actions into their own patch.
 That would have nicely slimmed the base patch and also isolated it from the ON
 DELETE EACH CASCADE judgement call.

As a likely user of this feature (not sure if this needs a
disclaimer, but my employer offered a small bounty towards the
development), I'd only need ON DELETE RESTRICT behaviour, currently,
and wouldn't ever need ON UPDATE ... as the referent column would
always be a SERIAL. In the meantime, I'm pretty sure the restriction
could be handled by a hand-rolled trigger on insert and delete, but
the delete one would be a lot slower without some kind of indexing.



Re: [HACKERS] disposition of remaining patches

2012-04-10 Thread Alvaro Herrera

Excerpts from Robert Haas's message of Tue Apr 10 10:40:58 -0300 2012:

 URI connection string support for libpq - I'm unclear whether Alvaro or
 Peter still intend to try to slip this one in.  It's simple enough
 that I think that would be OK if it can be done in the next day or
 two.  Otherwise, 9.3.

I intend to commit this today, unless someone is still unhappy with the
choice of syntax.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-10 Thread Merlin Moncure
On Tue, Apr 10, 2012 at 9:47 AM, Andrew Dunstan and...@dunslane.net wrote:
 I don't understand what the heck you're talking about, TBH. From a user
 perspective there is nothing to work out. It will look like any other FDW.

yes, that is correct.

 The implementor of the FDW handler will have to work out the glue between
 postgres and the JVM, but that's not going to be you, right?

Correct.  I think I understand what you're driving at.  Basically,
PL/Java is the glue.  My thinking was to make SPI calls inside the FDW
callbacks to invoke the PL/Java routines.  Other FDW implementations
mostly wrap C libraries -- which makes things very easy, since you can
jump directly into the routine for foreign execution -- a luxury we
don't have.  We have to invoke Java, and there are two basic ways to
tie into the Java runtime: one is to jump through SPI via the SQL
executor; the other is JNI into the PL/Java JVM, which I think you
were hinting was the better approach.

Doing an SPI call from an FDW callback is inefficient -- that's an
extra call into the executor (although you can prepare it), and you
have to walk the SPI result just to build it up again in the FDW
iterator.  A JNI solution would instead jump into the JVM and do the
Java invocation directly; I believe that would drive the difficulty of
this project up a couple of notches, whereas the SPI approach uses a
well-documented interface.  We're not stuck on the approach though --
I'm pushing Atri to get the environment set up so we can explore
alternative solutions.

In other words, our proposal is basically pretty similar to what you'd
end up with if you wrapped dblink into an FDW, making dblink calls
inside the FDW over SPI.

merlin



Re: [HACKERS] To Do wiki

2012-04-10 Thread Greg Stark
On Tue, Apr 10, 2012 at 7:27 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 I think the way we'd speed up COUNT(*) further would be to implement
 materialized views. Then you could define a materialized view on COUNT(*),
 and essentially get a row counter similar to MyISAM. I think it's fair to
 mark this as done.

If only because it comes up so frequently it would be good to have
this noted in the TODO, either under materialized views or as a
pointer to them.

A good materialized views implementation -- including automatically
determining what delta data to keep -- sure would be nice to have.

-- 
greg



Re: [HACKERS] Last gasp

2012-04-10 Thread Peter Geoghegan
On 10 April 2012 15:26, Kevin Grittner kevin.gritt...@wicourts.gov wrote:
 A patch on which the author is continuing to work even in the absence of 
 review
 should be considered a "WIP, want feedback" submission; it should not
 be allowed to constitute a placeholder for inclusion in the
 release.

To be fair, I doubt that anyone actually believes that. If they did,
they wouldn't have to pay attention very long to receive a rude
awakening.

 <onlyhalfkidding>Perhaps we should have a concept of "feature
 months" -- so that when we look at holding up a release with 20
 features for two months so that one more feature can make it in, the
 cost side of the equation is 40 feature-months, and the benefit is
 10 feature-months.  (Remember, you can't count the added feature as
 though it's there for a year before the next release if it holds the
 release up.)</onlyhalfkidding>

I am broadly in favour of assessing the value of features in the same
way that a commercial organisation might - the more adoption a feature
spurs, the more valuable it is, and the more hesitant we should be to
bump it (though other factors are also very important). I take this
idea seriously, or at the very least share the mentality of the idea -
I'm just not sure that we can formalise it, or that we should.

I also think that we should try and reward good will. I think that we
generally do so, but an automatic cut-off date seems contrary to that.
The law of unintended consequences might see us lower our standards to
commit something to meet the deadline, that would otherwise not be
immediately committed. We're only human, and it would be foolish to
assume that committers don't feel that kind of pressure.
-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-10 Thread Tom Lane
Merlin Moncure mmonc...@gmail.com writes:
 ...  We have to invoke java and there
 are two basic ways to tie into the java runtime:  one is to jump
 through SPI via the SQL executor. The other is JNI into the pl/java
 jvm which I think you were hinting was the better approach.

Hm?  SPI doesn't know anything about Java either.

regards, tom lane



Re: [HACKERS] ECPG FETCH readahead

2012-04-10 Thread Boszormenyi Zoltan

On 2012-04-10 16:55, Michael Meskes wrote:

On Tue, Apr 10, 2012 at 10:37:22AM -0400, Noah Misch wrote:

Only a non-decorated cursor can be overridden, even if
a different default readahead window size is specified with
e.g. ecpg -R 8. If ECPGFETCHSZ is not present, 8 will be used,
if ECPGFETCHSZ is present, its value will be used. ECPGopen()
will need an extra bool argument to distinguish this.

Is this acceptable? Noah, Michael?

Sounds perfect.

Fine by me.

Michael


OK. Next question: now that both patches are intended to be applied,
should I send a unified single patch that contains the previous functionality
and the required fixes or a new one that only contains the last required fixes?

Thanks in advance,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/




Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-10 Thread Dave Cramer
On Tue, Apr 10, 2012 at 11:25 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Merlin Moncure mmonc...@gmail.com writes:
 ...  We have to invoke java and there
 are two basic ways to tie into the java runtime:  one is to jump
 through SPI via the SQL executor. The other is JNI into the pl/java
 jvm which I think you were hinting was the better approach.

 Hm?  SPI doesn't know anything about Java either.

                        regards, tom lane

Having PL/Java as a dependency here makes this a very complex
feature to set up.
The potential benefits are quite minimal, since almost any decent ETL
tool can handle multiple data sources.

Dave



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-10 Thread Atri Sharma
On Tue, Apr 10, 2012 at 8:55 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Merlin Moncure mmonc...@gmail.com writes:
 ...  We have to invoke java and there
 are two basic ways to tie into the java runtime:  one is to jump
 through SPI via the SQL executor. The other is JNI into the pl/java
 jvm which I think you were hinting was the better approach.

 Hm?  SPI doesn't know anything about Java either.

                        regards, tom lane

Hi Tom,

We plan to call SQL through SPI from the FDW, which in turn would call
the PL/Java routine.

We are also working on the JNI Invocation API approach.  We will be
researching the differences between the two approaches (SPI
and JNI).

Atri

-- 
Regards,

Atri
l'apprenant



Re: [HACKERS] To Do wiki

2012-04-10 Thread Jeff Janes
On Mon, Apr 9, 2012 at 11:27 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 On 10.04.2012 03:32, Jeff Janes wrote:

 The To Do wiki says not to add things to the page without discussing them here.
...

 sort_support was implemented for plain tuple sorting only, To Do is
 extend to index-creation sorts (item 2 from message
 1698.1323222...@sss.pgh.pa.us)


 Index-creation sorts are already handled, Tom is referring to using the new
 comparator API for index searches in that email. The change would go to
 _bt_compare().

If I do "select count(distinct bid) from pgbench_accounts" I get many
calls to btint4fastcmp, but if I do "create index on pgbench_accounts
(bid)" I instead get many calls to btint4cmp.  If the index build is
using SortSupport, shouldn't it also be calling btint4fastcmp like the
distinct does?

Cheers,

Jeff



Re: [HACKERS] ECPG FETCH readahead

2012-04-10 Thread Michael Meskes
On Tue, Apr 10, 2012 at 05:24:55PM +0200, Boszormenyi Zoltan wrote:
 OK. Next question: now that both patches are intended to be applied,
 should I send a unified single patch that contains the previous functionality
 and the required fixes or a new one that only contains the last required 
 fixes?

I'm fine with whatever is easier for you. 

Michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
Jabber: michael.meskes at googlemail dot com
VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-10 Thread Tom Lane
Atri Sharma atri.j...@gmail.com writes:
 On Tue, Apr 10, 2012 at 8:55 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Hm?  SPI doesn't know anything about Java either.

 We plan to call SQL through SPI from the FDW,which in turn would call
 the Pl/Java routine.

If you're saying that every Java function that the FDW needs would have
to be exposed as a SQL function, that seems like a pretty high-risk
(not to mention low performance) approach.  Not only do you have to
design a SQL representation for every datatype you need, but you have to
be sure that you do not have any security holes arising from
unscrupulous users calling those SQL functions manually with arguments
of their choosing.

regards, tom lane



Re: [HACKERS] ECPG FETCH readahead

2012-04-10 Thread Boszormenyi Zoltan

On 2012-04-10 17:34, Michael Meskes wrote:

On Tue, Apr 10, 2012 at 05:24:55PM +0200, Boszormenyi Zoltan wrote:

OK. Next question: now that both patches are intended to be applied,
should I send a unified single patch that contains the previous functionality
and the required fixes or a new one that only contains the last required fixes?

I'm fine with whatever is easier for you.

Michael


I guess the second option is easier for all of us because
reviewing it doesn't invalidate the previous ones.

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/




Re: [HACKERS] To Do wiki

2012-04-10 Thread Heikki Linnakangas

On 10.04.2012 18:31, Jeff Janes wrote:

On Mon, Apr 9, 2012 at 11:27 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com  wrote:

On 10.04.2012 03:32, Jeff Janes wrote:


The To Do wiki says not to add things to the page without discussing them here.

...



sort_support was implemented for plain tuple sorting only, To Do is
extend to index-creation sorts (item 2 from message
1698.1323222...@sss.pgh.pa.us)


Index-creation sorts are already handled, Tom is referring to using the new
comparator API for index searches in that email. The change would go to
_bt_compare().


If I do "select count(distinct bid) from pgbench_accounts" I get many
calls to btint4fastcmp, but if I do "create index on pgbench_accounts
(bid)" I instead get many calls to btint4cmp.  If the index build is
using SortSupport, shouldn't it also be calling btint4fastcmp like the
distinct does?


Oh, sorry, you're right. I stand corrected.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Last gasp

2012-04-10 Thread Robert Haas
On Tue, Apr 10, 2012 at 11:24 AM, Peter Geoghegan pe...@2ndquadrant.com wrote:
 On 10 April 2012 15:26, Kevin Grittner kevin.gritt...@wicourts.gov wrote:
 A patch on which the author is continuing to work even in the absence of 
 review
 should be considered a "WIP, want feedback" submission; it should not
 be allowed to constitute a placeholder for inclusion in the
 release.

 To be fair, I doubt that anyone actually believes that. If they did,
 they wouldn't have to pay attention very long to receive a rude
 awakening.

Uhm, this has been done, repeatedly.  It is not an imaginary problem.
I've observed people doing the following anti-social things:

1. Adding patches to the CommitFest after the deadline, sometimes
weeks after.  The system doesn't prohibit this because there can be
legitimate reasons for doing it, such as when a patch submitted on
time gets a minor piece of it split out into a separate entry.

2. Adding fake CommitFest entries that point to a non-existent
email, and then editing them later to point to the real patch.

3. Posting a patch in time for the CommitFest deadline that is not
even code-complete and then continuing to hack on it vigorously
throughout the CommitFest.  Or, a variant: it's code completed, but
not debugged.  Both command triggers and foreign key locks fell into
this category, AFAICT.

When these things are pointed out to the people who are doing them,
the response is often either (a) "this feature is so important we're
all going to die if it's not in the release, how can you even think
about bouncing it?" or (b) "I'm not really still hacking on it; these
are all just minor changes."  It's surprisingly easy to hoodwink even
experienced contributors into thinking that your patch is "really,
really almost done, honest, it just needs a couple more tweaks" when in
fact it's nowhere close.  I try not to attribute to bad faith what can
be explained by incurable optimism, so maybe we just have a lot of
incurable optimism.  But it's doing nobody any good.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] To Do wiki

2012-04-10 Thread Peter Geoghegan
On 10 April 2012 16:40, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 On 10.04.2012 18:31, Jeff Janes wrote:
 If I do "select count(distinct bid) from pgbench_accounts" I get many
 calls to btint4fastcmp, but if I do "create index on pgbench_accounts
 (bid)" I instead get many calls to btint4cmp.  If the index build is
 using SortSupport, shouldn't it also be calling btint4fastcmp like the
 distinct does?


 Oh, sorry, you're right. I stand corrected.

There is an impedance mismatch between tuplesort_begin_heap and
tuplesort_begin_index_btree that prevented this from being done with
the initial commit. Strangely, the SortSupport commit message didn't
comment on this.

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-10 Thread Andrew Dunstan



On 04/10/2012 11:36 AM, Tom Lane wrote:

Atri Sharma atri.j...@gmail.com writes:

 On Tue, Apr 10, 2012 at 8:55 PM, Tom Lane t...@sss.pgh.pa.us wrote:

Hm?  SPI doesn't know anything about Java either.

We plan to call SQL through SPI from the FDW,which in turn would call
the Pl/Java routine.

If you're saying that every Java function that the FDW needs would have
to be exposed as a SQL function, that seems like a pretty high-risk
(not to mention low performance) approach.  Not only do you have to
design a SQL representation for every datatype you need, but you have to
be sure that you do not have any security holes arising from
unscrupulous users calling those SQL functions manually with arguments
of their choosing.





Yeah. I think this design is horribly baroque and unnecessary. SPI is 
for talking SQL. It's completely in the way of a straight-forward 
implementation of this feature IMNSHO.


cheers

andrew



Re: [HACKERS] Patch: add timing of buffer I/O requests

2012-04-10 Thread Robert Haas
On Tue, Apr 10, 2012 at 10:06 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 Hmm.  So, on further review, this is not as simple as it seems.  I'd
 like some input from other people on what we should do here.

 pg_stat_statements has long exposed a column called total_time as a
 float8.  It now exposes columns time_read and time_write which are
 actually measuring the time spent reading and writing data blocks, and
 those are also exposed as a float8; all these count seconds.

 Meanwhile, all times exposed by the stats collector (including the new
 and analagous pg_stat_database.block_read_time and
 pg_stat_database.block_write_time columns) are exposed as int8; these
 count milliseconds.

 So, should we make the new columns exposed by pg_stat_statements use
 milliseconds, so that the block read/write timings are everywhere in
 milliseconds, or should we keep them as a float8, so that all the
 times exposed by pg_stat_statements use float8?

 Given that we've whacked pg_stat_statements' behavior around rather
 thoroughly in this release, maybe we could get away with redefining
 total_time as being measured in msec rather than sec, thereby aligning
 units as msec across the board.  It's arguably a smaller deal than the
 way we've redefined what the query column contains...

Retyping columns is an awfully good way to produce grumpy users.  Then
again, if we're going to do it, it would certainly be better to do it
now rather than later, because right now I'm guessing
pg_stat_statements is a lot less heavily used than it will be after
9.2 hits shelves.

 float8 vs int8 is a distinct issue, and probably one that is not as
 much of an impact on clients if we change it.  It is not hard to predict
 that somebody will eventually want sub-msec resolution on these things,
 which would suggest that float8 would be the better idea.  But perhaps
 we could leave that change for a future release.

Well, internally, the I/O timing stuff as well as the function timing
stuff use microseconds, and the SQL functions expose it as
microseconds, but then the view divides by 1000 to convert to
milliseconds.  I made the I/O timing stuff do it that way because
that's how the function timing stuff does it, but it does seem a
little random.  One thing in its favor is that it provides a way for
users to get this if they want it, without screwing readability for
the vast majority who don't care.

On the flip side, the new checkpoint timing stuff is in milliseconds
all the way through, though it seems vanishingly unlikely that anyone
needs more resolution in that case.  We have lots of other things in
milliseconds, too.

No matter what we end up doing here it will be consistent with
something; I am reminded of the phrase "the good thing about standards
is that there are so many to choose from".
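The conversion chain Robert describes (microseconds internally, milliseconds in the view) can be sketched as follows; the accumulator value is invented and the names are illustrative, not PostgreSQL's actual variables:

```python
# Internal accumulators count microseconds, as the I/O timing and
# function timing code do.
total_block_read_us = 1234567   # hypothetical accumulated value

# Stats-collector style: int8 milliseconds; sub-msec resolution is lost.
as_int8_ms = total_block_read_us // 1000

# float8 milliseconds: the division the view performs, but keeping the
# sub-msec detail that a future user might want.
as_float8_ms = total_block_read_us / 1000.0

assert as_int8_ms == 1234
assert abs(as_float8_ms - 1234.567) < 1e-9
```

This is the crux of Tom's float8-vs-int8 point: only the float8 form preserves sub-millisecond resolution without changing the unit again later.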

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-10 Thread Merlin Moncure
On Tue, Apr 10, 2012 at 10:36 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Atri Sharma atri.j...@gmail.com writes:
 On Tue, Apr 10, 2012 at 8:55 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Hm?  SPI doesn't know anything about Java either.

 We plan to call SQL through SPI from the FDW, which in turn would call
 the Pl/Java routine.

 If you're saying that every Java function that the FDW needs would have
 to be exposed as a SQL function, that seems like a pretty high-risk
 (not to mention low performance) approach.  Not only do you have to
 design a SQL representation for every datatype you need, but you have to
 be sure that you do not have any security holes arising from
 unscrupulous users calling those SQL functions manually with arguments
 of their choosing.

Hm, well, for data type representation, an 'all text' representation
would avoid that requirement (although could certainly add it back in
later for performance reasons).  That's not all that different from
what the other fdw projects are doing -- mostly wrapping
BuildTupleFromCStrings and such.  But totally agree that for top
performance you'd need direct native transfer.  I'm in the 'perfect is
the enemy of the good' mindset here.
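Merlin's "all text" representation can be sketched as a toy marshalling layer in Python; the names and the cast tuple are invented for illustration, and real code would go through the FDW type machinery rather than ad-hoc casts:

```python
# Every value crosses the Java/PG boundary as a string (the moral
# equivalent of wrapping BuildTupleFromCStrings), and is cast back on
# the receiving side.
remote_row = (42, 3.14, True)                    # what the JDBC side produced
wire_row = tuple(str(v) for v in remote_row)     # all-text marshalling

# Hypothetical per-column casts derived from the foreign table's types.
casts = (int, float, lambda s: s == 'True')
decoded = tuple(cast(v) for cast, v in zip(casts, wire_row))

assert decoded == (42, 3.14, True)
```

The round trip is lossy-free for simple types but costs an extra format/parse per value, which is why native binary transfer would eventually matter for top performance.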

I think the security argument is mostly bogus -- pl/java is already
well into the untrusted side of things and I was figuring being able
to bypass the fdw layer and invoke the functions dblink style was a
feature, not a bug.

But adding up all the comments I see healthy skepticism that running
through SPI is the proper approach and it is noted.  So the way
forward is a more direct hook to the jvm or to go back to the drawing
board I suppose.  I agree that JNI isn't required -- we're going to
have to study the pl/java system a bit to determine the best way to
hook in.  This could end up getting us into the 'biting of more than
can chew' territory admittedly, but Atri is enthusiastic and wants to
give it a go.

Additionally, Dave is skeptical that pl/java dependency is a good
foundation for a generally useful library.  I'm not buying that --
pl/java is the 'best of class' for implementing java inside the
database that I'm aware of.  I see absolutely no reason why it
couldn't be packaged as an extension -- the project is a bit dusty and
needs some TLC but does what it does very well.  I also respectfully
disagree that the presence of high quality ETL engines eliminate the
usefulness of a direct database to database transfer mechanism.  I
personally never go the ETL route when I can just dblink the data
across and do the massaging in SQL.  Other developers may think
differently of course.  Of course, if there was a good way to
implement jdbc/fdw without using pl/java that would be good to know.

merlin



Re: [HACKERS] Patch: add timing of buffer I/O requests

2012-04-10 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 No matter what we end up doing here it will be consistent with
 something; I am reminded of the phrase "the good thing about standards
 is that there are so many to choose from".

Well, FWIW I vote for making the new columns be float8 msec.  If you
don't want to change total_time to match, I guess there's no law that
says it *has* to be consistent ...

regards, tom lane



Re: [HACKERS] Patch: add timing of buffer I/O requests

2012-04-10 Thread Magnus Hagander
On Tue, Apr 10, 2012 at 17:58, Robert Haas robertmh...@gmail.com wrote:
 On Tue, Apr 10, 2012 at 10:06 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 Hmm.  So, on further review, this is not as simple as it seems.  I'd
 like some input from other people on what we should do here.

 pg_stat_statements has long exposed a column called total_time as a
 float8.  It now exposes columns time_read and time_write which are
 actually measuring the time spent reading and writing data blocks, and
 those are also exposed as a float8; all these count seconds.

 Meanwhile, all times exposed by the stats collector (including the new
 and analagous pg_stat_database.block_read_time and
 pg_stat_database.block_write_time columns) are exposed as int8; these
 count milliseconds.

 So, should we make the new columns exposed by pg_stat_statements use
 milliseconds, so that the block read/write timings are everywhere in
 milliseconds, or should we keep them as a float8, so that all the
 times exposed by pg_stat_statements use float8?

 Given that we've whacked pg_stat_statements' behavior around rather
 thoroughly in this release, maybe we could get away with redefining
 total_time as being measured in msec rather than sec, thereby aligning
 units as msec across the board.  It's arguably a smaller deal than the
 way we've redefined what the query column contains...

 Retyping columns is an awfully good way to produce grumpy users.  Then
 again, if we're going to do it, it would certainly be better to do it
 now rather than later, because right now I'm guessing
 pg_stat_statements is a lot less heavily used than it will be after
 9.2 hits shelves.

Agreed. It's better if we can also change the name of it - provided we
can come up with a reasonable new name. Then people's applications will
break *visibly*, which is a lot better than breaking invisibly. (This
is the main reason why we renamed current_query in pg_stat_activity.)

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] Patch: add timing of buffer I/O requests

2012-04-10 Thread Tom Lane
Magnus Hagander mag...@hagander.net writes:
 On Tue, Apr 10, 2012 at 17:58, Robert Haas robertmh...@gmail.com wrote:
 On Tue, Apr 10, 2012 at 10:06 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Given that we've whacked pg_stat_statements' behavior around rather
 thoroughly in this release, maybe we could get away with redefining
 total_time as being measured in msec rather than sec, thereby aligning
 units as msec across the board.  It's arguably a smaller deal than the
 way we've redefined what the query column contains...
 
 Retyping columns is an awfully good way to produce grumpy users.  Then
 again, if we're going to do it, it would certainly be better to do it
 now rather than later, because right now I'm guessing
 pg_stat_statements is a lot less heavily used than it will be after
 9.2 hits shelves.

 Agreed. It's better if we can also change the name of it - provided we
 can come up with a reasonable new name. Then people's applications will
 break *visibly*, which is a lot better than breaking invisibly. (This
 is the main reason why we renamed current_query in pg_stat_activity.)

That might be overkill.  Changing the column name will definitely break
anything more specific than "select * from pg_stat_statements".
However, it's less clear that changing the units in which the column is
expressed will break things.  It seems likely to me that nobody out
there is doing anything much more sophisticated than sorting by the
column, and that's still going to work the same.
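Tom's claim that a pure unit change leaves sort-based usage intact can be checked mechanically; here is a sketch with invented data, not real pg_stat_statements output:

```python
import random

random.seed(0)
# Hypothetical total_time values in seconds, and the same column
# re-expressed in milliseconds.
total_time_sec = [random.random() * 10 for _ in range(1000)]
total_time_ms = [t * 1000.0 for t in total_time_sec]

# Scaling by a positive constant is order-preserving, so a query that
# only sorts by the column returns rows in the same order either way.
order_sec = sorted(range(1000), key=total_time_sec.__getitem__)
order_ms = sorted(range(1000), key=total_time_ms.__getitem__)

assert order_sec == order_ms
```

Anything that compares the column against an external timing (Magnus's case below) breaks, of course; only rank-based uses survive the rescale.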

regards, tom lane



Re: [HACKERS] Last gasp

2012-04-10 Thread Peter Geoghegan
On 10 April 2012 16:51, Robert Haas robertmh...@gmail.com wrote:
 When these things are pointed out to the people who are doing them,
 the response is often either (a) "this feature is so important we're
 all going to die if it's not in the release, how can you even think
 about bouncing it" or (b) "I'm not really still hacking on it, these
 are all just minor changes."  It's surprisingly easy to hoodwink even
 experienced contributors into thinking that your patch is "really,
 really almost done, honest, it just needs a couple more tweaks" when in
 fact it's nowhere close.  I try not to attribute to bad faith what can
 be explained by incurable optimism, so maybe we just have a lot of
 incurable optimism.  But it's doing nobody any good.

I think that you may be missing the greater point here. The people
that do this are kind of like the defectors in prisoner's dilemma - at
a certain point, some people cannot resist the temptation to push
their own patch forward at the expense of others by asserting
dubiously that it's ready-for-committer, or maybe they really do
incorrectly believe it to be so, or maybe, unlike you, they understand
that term to mean "I've done as much as I can, as has my reviewer", or
whatever. To play devil's advocate, that might be an anti-social act,
but at a certain point, who wants to be the last honest sap? Besides,
isn't everyone's crime no crime at all?

ISTM that this is symptomatic of the wider problem of a dire shortage
of committer resources. 100% of my non-doc patches so far have been
committed by 3 people. I would really like to see us figure out a way
of making more hackers committers, perhaps subject to certain
conditions that don't currently exist for committers. You might find
that given commit bits, some people will take their responsibilities
as a reviewer far more seriously. Maybe you don't think that any of
the likely candidates are quite ready for that responsibility, but you
must admit that it's a serious problem.

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



Re: [HACKERS] Patch: add timing of buffer I/O requests

2012-04-10 Thread Magnus Hagander
On Tue, Apr 10, 2012 at 18:27, Tom Lane t...@sss.pgh.pa.us wrote:
 Magnus Hagander mag...@hagander.net writes:
 On Tue, Apr 10, 2012 at 17:58, Robert Haas robertmh...@gmail.com wrote:
 On Tue, Apr 10, 2012 at 10:06 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Given that we've whacked pg_stat_statements' behavior around rather
 thoroughly in this release, maybe we could get away with redefining
 total_time as being measured in msec rather than sec, thereby aligning
 units as msec across the board.  It's arguably a smaller deal than the
 way we've redefined what the query column contains...

 Retyping columns is an awfully good way to produce grumpy users.  Then
 again, if we're going to do it, it would certainly be better to do it
 now rather than later, because right now I'm guessing
 pg_stat_statements is a lot less heavily used than it will be after
 9.2 hits shelves.

 Agreed. It's better if we can also change the name of it - provided we
 can come up with a reasonable new name. Then people's applications will
 break *visibly*, which is a lot better than breaking invisibly. (This
 is the main reason why we renamed current_query in pg_stat_activity.)

 That might be overkill.  Changing the column name will definitely break
 anything more specific than "select * from pg_stat_statements".
 However, it's less clear that changing the units in which the column is
 expressed will break things.  It seems likely to me that nobody out
 there is doing anything much more sophisticated than sorting by the
 column, and that's still going to work the same.

I've seen cases where the timing is correlated with external timings,
e.g. from the application. Have I seen it a lot? No - but then I
haven't seen a big usage of pg_stat_statements either, which might be
the better argument for allowing a change of unit but not name.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] Last gasp

2012-04-10 Thread Tom Lane
Kevin Grittner kevin.gritt...@wicourts.gov writes:
 One other sort of mechanical test which I think can and should be
 applied to patches submitted to the last CF is that if *at the start
 of the CF* the patch doesn't apply, compile, pass regression tests,
 and demonstrably provide the functionality claimed for the patch, it
 should not be a candidate for inclusion in the release.

I would not be in favor of inflexible application of such a rule.
For instance, if a patch had gotten broken by a conflict with some
other patch applied the day before the CF starts, it would be unfair
to not give the patch author a reasonable amount of time to rebase.
And such conflicts occurring after the CF starts are hardly unusual
either.

 A patch on
 which the author is continuing to work even in the absence of review
 should be considered a "WIP, want feedback" submission; it should not
 be allowed to constitute a placeholder for inclusion in the
 release.  It's one thing if review turns up corner case bugs missed
 by the author; it's quite another if there is a month or two of
 solid development left to be done. The CF period is not the time for
 "now I'll get serious about wrapping this up."

Agreed here, though.  Chris Browne mentioned upthread that we really
need a somewhat different process for WIP patches as opposed to those
that are thought to be committable or nearly so.  I don't know if we
should institute his idea of a separate series of HackFest events,
but at the very least we should try harder to draw a distinction between
WIP and finished patches.  They need different sorts of reviewing.

regards, tom lane



Re: [HACKERS] Last gasp

2012-04-10 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 ...  It's surprisingly easy to hoodwink even
 experienced contributors into thinking that your patch is "really,
 really almost done, honest, it just needs a couple more tweaks" when in
 fact it's nowhere close.  I try not to attribute to bad faith what can
 be explained by incurable optimism, so maybe we just have a lot of
 incurable optimism.  But it's doing nobody any good.

The first, and possibly most significant, observation in Brooks' classic
_The Mythical Man-Month_ is: "All programmers are optimists."  I don't
think you're going to get rid of the incurable-optimism problem.
Rather, we'd better design around it.

regards, tom lane



Re: [HACKERS] Last gasp

2012-04-10 Thread Kevin Grittner
Tom Lane t...@sss.pgh.pa.us wrote:
 Kevin Grittner kevin.gritt...@wicourts.gov writes:
 One other sort of mechanical test which I think can and should be
 applied to patches submitted to the last CF is that if *at the
 start of the CF* the patch doesn't apply, compile, pass
 regression tests, and demonstrably provide the functionality
 claimed for the patch, it should not be a candidate for inclusion
 in the release.
 
 I would not be in favor of inflexible application of such a rule.
 For instance, if a patch had gotten broken by a conflict with some
 other patch applied the day before the CF starts, it would be
 unfair to not give the patch author a reasonable amount of time to
 rebase.  And such conflicts occurring after the CF starts are
 hardly unusual either.
 
I didn't mean to exclude rebasing because of conflicts with recent
commits, so perhaps "mechanical" was overstating it.  But maybe not
-- perhaps each patch submission should state which commit it was
last confirmed to compile and work with, and if there are problems
against HEAD that could be confirmed before asking for the rebase. 
That wouldn't be too much extra work for the initial reviewer, and
it would help establish objective criteria for categorizing whether
a patch should be treated as WIP.
 
-Kevin



Re: [HACKERS] plpython triggers are broken for composite-type columns

2012-04-10 Thread Jan Urbański

On 10/04/12 07:35, Jan Urbański wrote:

On 10/04/12 04:20, Tom Lane wrote:

Don't know if anybody noticed bug #6559
http://archives.postgresql.org/pgsql-bugs/2012-03/msg00180.php

I've confirmed that the given test case works in 9.0 but fails in
9.1 and HEAD.


So, I know what's going on, I still don't know what's the best way to 
handle it.


The function that converts Python objects to PG data checks what type 
it's supposed to produce and acts accordingly. In 9.0 it checked for 
bool, bytea and arrays, in 9.1 it also takes composite types into account.


This has been done to support functions returning composite types - to 
do that they need to return a dictionary or a list, for instance 
{'col1': 1, 'col2': 2}.


The problem is that the routine that converts PG data into Python 
objects does not handle composite type inputs all that well - it just 
bails and returns the string representation, hence '(3)' appearing in 
Python land.


Now previously, the Python-PG function did not see that the given 
conversion is supposed to return a composite so it also bailed and used 
a default text-composite conversion, so '(3)' was converted to ROW(3) 
and all went well. The new code tries to treat what it gets as a 
dictionary/list/tuple and fails in a more or less random way.


Now that I understand what's been going on, I'll try to think of a 
non-invasive way of fixing that...
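A rough Python sketch of the mismatch described above; the function names are illustrative stand-ins for plpython's internal C routines, not its real API:

```python
def pg_to_python(composite_text):
    # The input converter bails on composite inputs and hands Python
    # the raw string representation, e.g. '(3)'.
    return composite_text

def python_to_pg_90(value):
    # 9.0: composites are not special-cased, so the string falls
    # through to the default text-to-composite cast: '(3)' -> ROW(3).
    return str(value)

def python_to_pg_91(value):
    # 9.1: a composite result is expected to arrive as a mapping such
    # as {'col1': 1}, so a bare string like '(3)' is mishandled.
    return tuple(value[key] for key in value)

row = pg_to_python('(3)')
assert python_to_pg_90(row) == '(3)'      # old path round-trips cleanly

try:
    python_to_pg_91(row)                  # new path chokes on the string
    failed = False
except TypeError:
    failed = True
assert failed
```

The trigger case hits exactly this asymmetry: the input side still produces a string, while the output side now assumes structured data.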


Cheers,
Jan



Re: [HACKERS] invalid search_path complaints

2012-04-10 Thread Christoph Berg
Re: Tom Lane 2012-04-04 28647.1333558...@sss.pgh.pa.us
 Now, Scott's comment seems to me to offer a principled way out of this:
 if we define the intended semantics of search_path as being similar
 to the traditional understanding of Unix PATH, then it's not an error
 or even unexpected to have references to nonexistent schemas in there.

Btw, the default setting does already work like this: "$user,public".
It is not an error for $user not to exist, but it is a very nice
default because it will be used as soon as it appears.
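That PATH-like behavior can be sketched as follows; the names and data structures are invented for illustration and are not PostgreSQL's actual lookup code:

```python
def resolve(name, search_path, schemas):
    # Nonexistent schemas are silently skipped, Unix-PATH style,
    # rather than raising an error.
    for schema in search_path:
        if name in schemas.get(schema, ()):
            return f'{schema}.{name}'
    return None

schemas = {'public': {'tbl'}}

# The "alice" ($user) schema doesn't exist yet: no error, the lookup
# simply falls through to public.
assert resolve('tbl', ['alice', 'public'], schemas) == 'public.tbl'

# Once alice's schema appears, it takes effect with no config change.
schemas['alice'] = {'tbl'}
assert resolve('tbl', ['alice', 'public'], schemas) == 'alice.tbl'
```

Under these semantics a nonexistent entry is not an error but a latent preference, which is exactly the behavior $user,public already exhibits.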

It would be logical to treat all other cases the same. I then could
put the search_path into my .psqlrc and then have a "one size fits
all" search path for all my databases, etc...

 But as soon as you say I want warnings in some cases, I think we have
 a mess that nobody is ever going to be happy with, because there will
 never be a clear and correct definition of which cases should get
 warnings.

As it looks impossible to divide the gray area, I'd opt to just drop
the warning and accept all syntactically valid strings.

Christoph
-- 
c...@df7cb.de | http://www.df7cb.de/




Re: [HACKERS] bug in fast-path locking

2012-04-10 Thread Jim Nasby

On 4/9/12 6:12 PM, Jeff Davis wrote:

On Mon, 2012-04-09 at 17:42 -0500, Jim Nasby wrote:

Dumb question... should operations in the various StrongLock functions
take place in a critical section? Or is that already ensured outside of
these functions?


Do you mean CRITICAL_SECTION() in the postgres sense (that is, avoid
error paths by making all ERRORs into PANICs and preventing interrupts);
or the sense described here:


Postgres sense. I thought there was concern about multiple people trying to 
increment or decrement the count at the same time, and if that was the case 
perhaps there was an issue with it not being in a CRITICAL_SECTION as well. But 
I could certainly be wrong about this. :)

And yes, we'd definitely not want to be in a CRITICAL_SECTION for the duration 
of the operation...
--
Jim C. Nasby, Database Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net



Re: [HACKERS] Last gasp

2012-04-10 Thread Greg Smith

On 04/09/2012 11:12 PM, Christopher Browne wrote:


It seems as though we need to have a bad guy that will say, "that
sure isn't ready to COMMIT, so we'd better step back from imagining
that it ought to be completed as part of this COMMITfest."


There's no reward for anyone in the PostgreSQL community to be a bad 
guy.  If you're too aggressive about it, submitters get mad; too loose, 
and you get both committers and people worried about the release 
schedule mad.  And the community is tight enough that the person you 
tick off today might be someone you have to work with next week.


Having sat in this particular seat several times now, I'd say the role 
needed here is more mediator than pointy-haired boss.  When I write bad 
news e-mail to submitters, I try to make the tone more about clarifying 
what was learned and what is needed to improve things for a next round 
of submissions.  It's not easy to adopt a writing tone for that sort of 
message while not coming off as insulting to someone.


Getting a feature punted forward is easier to take if a submitter leaves 
with a better roadmap and idea what standards they have to meet.  On 
bigger features in particular, that sometimes requires feedback from a 
committer earlier in the process, even if they haven't reached "Ready
for Committer" via a reviewer yet.  My comment upthread about nailing
down the committer for big features earlier than smaller ones was along 
these same lines.



I wonder if we're starting to have enough data to establish meaningful
statistics on feedback.


I had Robert send me a dump of the data that's in the CF app the other 
day.  I'm hoping to do some useful data mining on it before PGCon.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com



Re: [HACKERS] Last gasp

2012-04-10 Thread Robert Haas
On Tue, Apr 10, 2012 at 12:28 PM, Peter Geoghegan pe...@2ndquadrant.com wrote:
 I think that you may be missing the greater point here. The people
 that do this are kind of like the defectors in prisoner's dilemma - at
 a certain point, some people cannot resist the temptation to push
 their own patch forward at the expense of others ...

The question is - what exactly are we supposed to do about that?  I
think that when prisoner A rats out prisoner B and prisoner B spends
10 years in Leavenworth, prisoner B is unlikely to be very happy with
the situation, even though he's surely not confused about *why*
prisoner A did it.  So here.  If we accept your argument that some
people simply cannot help themselves, then the only solution is to
make it cease to be a prisoner's dilemma, and that can only be done by
changing the incentives, which presumably means handing down
punishments to people who push their own patches forward at the
expense of others.  Unless we care to choose a benevolent dictator, I
don't see any way to accomplish that.

It's feasible to think that we might be able to streamline the process
of booting patches that are not close to committable at the start of a
CommitFest, and especially at the start of the final CommitFest.  For
example, limiting patches to a small number of days in the "Waiting on
Author" state would help a great deal.  But the more general problem
of people arguing that *their* patch is the special one without which
the earth will cease to revolve about its axis, or that it's ready
when it's really not, is more difficult to solve.  How would you
propose we deal with that problem?

 ISTM that this is symptomatic of the wider problem of a dire shortage
 of committer resources. 100% of my non-doc patches so far have been
 committed by 3 people. I would really like to see us figure out a way
 of making more hackers committers, perhaps subject to certain
 conditions that don't currently exist for committers. You might find
 that given commit bits, some people will take their responsibilities
 as a reviewer far more seriously. Maybe you don't think that any of
 the likely candidates are quite ready for that responsibility, but you
 must admit that it's a serious problem.

I don't agree with that.  I think that there are a few people who
don't now have commit bits who should be given them - in particular,
Fujii Masao and Kevin Grittner, both of whom have been doing
consistently excellent work for several years.   But giving people a
commit bit in the hopes that they will do better reviews seems
completely backwards to me: we should instead give commit bits to
people who have *already* demonstrated that they can be trusted to do
good reviews and exercise good judgement, and no one else.

The fact is that we have no shortage of committers - there are 19
people who have access to push code into our master git repository.  A
handful of those people have basically completely left the project and
their commit rights should probably be revoked on that basis; most of
them are still involved in one way or another but just not able to
devote a lot of time to reviewing other people's code.  The problem is
even more acute for large patches, which only a handful of people are
qualified to review, and which also take a lot of wall clock time to
review thoroughly.  But that's not a problem that's going to go away
because we make more committers.  Giving more people the ability to
commit stuff will neither force them to devote time to it nor make
them qualified to do it if they aren't already.

Every time someone's favorite patch gets rejected, there is an outcry
of - the standards for commit are too high!  But this overlooks the
fact that there are some people who regularly meet them.  A patch from
Fujii Masao, Kevin, or Noah is about ten times more likely to be
applied without comment than one from the average submitter.  That's
not because I like them (although I have to admit to liking Kevin
quite a lot; I have met Fujii Masao at most briefly and Noah not at
all), or because I necessarily care about their patches more than
anyone else's; it's because they do really good work.  Over time, such
people tend to become committers, and then everyone complains that the
committers have high standards.  Well, yes.  They are committers
*because* they have high standards, and that is exactly as it should
be.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Last gasp

2012-04-10 Thread Robert Haas
On Tue, Apr 10, 2012 at 1:27 PM, Greg Smith g...@2ndquadrant.com wrote:
 There's no reward for anyone in the PostgreSQL community to be a bad guy.
  If you're too aggressive about it, submitters get mad; too loose, and you
 get both committers and people worried about the release schedule mad.  And
 the community is tight enough that the person you tick off today might be
 someone you have to work with next week.

Yep.

 Having sat in this particular seat several times now, I'd say the role
 needed here is more mediator than pointy-haired boss.  When I write bad news
 e-mail to submitters, I try to make the tone more about clarifying what was
 learned and what is needed to improve things for a next round of
 submissions.  It's not easy to adopt a writing tone for that sort of message
 while not coming off as insulting to someone.

Agreed.  I used to be better at this, but the increasing volume of
patches that get reviewed by no one else has forced me to curtail the
amount of time I spend on each one, and that is causing my natural
bluntness to come to the fore.  Unfortunately.

I also think that people were more receptive to my reviews before I
got a commit bit.  Back then, I was the guy who was telling you what
you were going to have to fix so Tom didn't boot your patch.  Now, I'm
the guy who is threatening to boot your patch if you don't fix what I
want fixed.  It comes off differently, even if the content is
identical.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: add timing of buffer I/O requests

2012-04-10 Thread Robert Haas
On Tue, Apr 10, 2012 at 12:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 No matter what we end up doing here it will be consistent with
 something; I am reminded of the phrase "the good thing about standards
 is that there are so many to choose from"

 Well, FWIW I vote for making the new columns be float8 msec.  If you
 don't want to change total_time to match, I guess there's no law that
 says it *has* to be consistent ...

Ugh.  So the three ways of doing timing that we have already aren't
enough, and we need a fourth one?  Ack!

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] psql: tab completions for 'WITH'

2012-04-10 Thread Peter Eisentraut
On tis, 2012-04-03 at 22:34 -0700, Josh Kupershmidt wrote:
 I noticed psql's tab-completion for 'WITH' is a bit overeager. If you
 try to tab-complete commands like:
   ALTER ROLE jsmith WITH [TAB]
   COPY tbl FROM 'filename' WITH [TAB]
 
 you'll get 'RECURSIVE' unhelpfully filled in. I think 'RECURSIVE'
 should only be suggested if 'WITH' is the first and only word of the
 line.

Committed that.

 On a related note, I found it annoying that after fixing the above
 problem, trying:
 ALTER ROLE jsmith WITH [TAB]
 CREATE ROLE jsmith WITH [TAB]
 
 didn't suggest any tab-completions -- it only works if you leave off
 the 'WITH' noise word, which I happen to use.

Hmm, but now you've set it up so that you can complete ALTER ROLE foo
WITH WITH.  Were you aware of that?


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: add timing of buffer I/O requests

2012-04-10 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Tue, Apr 10, 2012 at 12:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Well, FWIW I vote for making the new columns be float8 msec.

 Ugh.  So the three ways of doing timing that we have already aren't
 enough, and we need a fourth one?  Ack!

Huh?  I understood what you said upthread to be that we have two ways
in existing releases (anything unreleased has zero standing in this
discussion): float8 sec in pg_stat_statements.total_time, and
int8 msec everywhere else.  Did I miss something?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: add timing of buffer I/O requests

2012-04-10 Thread Robert Haas
On Tue, Apr 10, 2012 at 1:44 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Tue, Apr 10, 2012 at 12:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Well, FWIW I vote for making the new columns be float8 msec.

 Ugh.  So the three ways of doing timing that we have already aren't
 enough, and we need a fourth one?  Ack!

 Huh?  I understood what you said upthread to be that we have two ways
 in existing releases (anything unreleased has zero standing in this
 discussion): float8 sec in pg_stat_statements.total_time, and
 int8 msec everywhere else.  Did I miss something?

We also have int8 usec floating around.  But even if we didn't, float8
msec would be a new one, regardless of whether it would be third or
fourth...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] PgNext CFP is still open

2012-04-10 Thread Joshua D. Drake


Hey,

Just a reminder that the CFP for PgNext in Denver is still open. Let's 
get those talks in, folks!


https://www.postgresqlconference.org/

Sincerely,

Joshua D. Drake


--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
The PostgreSQL Conference - http://www.postgresqlconference.org/
@cmdpromptinc - @postgresconf - 509-416-6579

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ECPG FETCH readahead

2012-04-10 Thread Boszormenyi Zoltan

Hi,

On 2012-04-10 16:55, Michael Meskes wrote:

On Tue, Apr 10, 2012 at 10:37:22AM -0400, Noah Misch wrote:

Only a non-decorated cursor can be overridden, even if
a different default readahead window size is specified with
e.g. ecpg -R 8. If ECPGFETCHSZ is not present, 8 will be used,
if ECPGFETCHSZ is present, its value will be used. ECPGopen()
will need an extra bool argument to distinguish this.

Is this acceptable? Noah, Michael?

Sounds perfect.

Fine by me.

Michael


you commented that the two new options added should be suboptions to -r.
I looked at "man getopt_long" to see what I can do about the -R option,
and there seems to be a getsubopt() call which is an extension to getopt_long.
an extension to getopt_long. My manpage under Fedora 16 says this:

NAME
   getsubopt - parse suboption arguments from a string

SYNOPSIS
   #include <stdlib.h>

   int getsubopt(char **optionp, char * const *tokens, char **valuep);

   Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

   getsubopt():
   _XOPEN_SOURCE >= 500 || _XOPEN_SOURCE && _XOPEN_SOURCE_EXTENDED
   || /* Since glibc 2.12: */ _POSIX_C_SOURCE >= 200809L

I wonder whether the manual parsing of -r suboptions may be rewritten
using this function or PostgreSQL supports systems without the above
X/Open or POSIX support levels.

Anyway, to make it possible to rewrite using the above call, I modified -R
and it's now -r readahead=number. Documentation is adjusted.

With the above, it would be possible to use a comma-separated list of -r
suboptions, e.g. -r prepare,questionmarks,readahead=16 in one option.

Summary of other changes:
- The result set size detection is a suboption of -r, documentation is 
adjusted.
- Only undecorated cursors use ECPGFETCHSZ, documentation is adjusted
- ecpg --help says "...default 0 (disabled)..."; fixed.
- Comment in cursor-readahead.pgc is fixed.
- New regression test that exercises ECPGFETCHSZ=8 and a non-readahead
  cursor. The stderr file shows the fetch forward 8 executed by the runtime.
- Also added a note to the documentation about a possible performance trap
  if a previously written ECPG application uses its own custom readahead via
  multi-row FETCH statements.

This patch should be applied over the two patches I last sent.

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig  Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/



ecpg-cursor-readahead-fixes-v3.patch.gz
Description: Unix tar archive

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: add timing of buffer I/O requests

2012-04-10 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Tue, Apr 10, 2012 at 1:44 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Huh?  I understood what you said upthread to be that we have two ways
 in existing releases (anything unreleased has zero standing in this
 discussion): float8 sec in pg_stat_statements.total_time, and
 int8 msec everywhere else.  Did I miss something?

 We also have int8 usec floating around.  But even if we didn't, float8
 msec would be a new one, regardless of whether it would be third or
 fourth...

It would still be the second one, because it would replace the only use
of float8 sec, no?  And more to the point, it converges us on msec being
the only exposed unit.

The business about underlying microseconds is maybe not so good, but
I don't think we want to touch that right now.  In the long run
I think it would make sense to converge on float8 msec as being the
standard for exposed timing values, because that is readily adaptable to
the underlying data having nsec or even better precision.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: add timing of buffer I/O requests

2012-04-10 Thread Robert Haas
On Tue, Apr 10, 2012 at 1:58 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Tue, Apr 10, 2012 at 1:44 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Huh?  I understood what you said upthread to be that we have two ways
 in existing releases (anything unreleased has zero standing in this
 discussion): float8 sec in pg_stat_statements.total_time, and
 int8 msec everywhere else.  Did I miss something?

 We also have int8 usec floating around.  But even if we didn't, float8
 msec would be a new one, regardless of whether it would be third or
 fourth...

 It would still be the second one, because it would replace the only use
 of float8 sec, no?  And more to the point, it converges us on msec being
 the only exposed unit.

 The business about underlying microseconds is maybe not so good, but
 I don't think we want to touch that right now.  In the long run
 I think it would make sense to converge on float8 msec as being the
 standard for exposed timing values, because that is readily adaptable to
 the underlying data having nsec or even better precision.

Hmm.  Maybe we should think about numeric ms, which would have all the
same advantages but without the round-off error.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: add timing of buffer I/O requests

2012-04-10 Thread k...@rice.edu
On Tue, Apr 10, 2012 at 02:01:02PM -0400, Robert Haas wrote:
 On Tue, Apr 10, 2012 at 1:58 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  Robert Haas robertmh...@gmail.com writes:
  On Tue, Apr 10, 2012 at 1:44 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  Huh?  I understood what you said upthread to be that we have two ways
  in existing releases (anything unreleased has zero standing in this
  discussion): float8 sec in pg_stat_statements.total_time, and
  int8 msec everywhere else.  Did I miss something?
 
  We also have int8 usec floating around.  But even if we didn't, float8
  msec would be a new one, regardless of whether it would be third or
  fourth...
 
  It would still be the second one, because it would replace the only use
  of float8 sec, no?  And more to the point, it converges us on msec being
  the only exposed unit.
 
  The business about underlying microseconds is maybe not so good, but
  I don't think we want to touch that right now.  In the long run
  I think it would make sense to converge on float8 msec as being the
  standard for exposed timing values, because that is readily adaptable to
  the underlying data having nsec or even better precision.
 
 Hmm.  Maybe we should think about numeric ms, which would have all the
 same advantages but without the round-off error.
 
 -- 
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company
 

They are also a lot bigger with tons of added overhead. :)

Regards,
Ken

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: add timing of buffer I/O requests

2012-04-10 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Tue, Apr 10, 2012 at 1:58 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 The business about underlying microseconds is maybe not so good, but
 I don't think we want to touch that right now.  In the long run
 I think it would make sense to converge on float8 msec as being the
 standard for exposed timing values, because that is readily adaptable to
 the underlying data having nsec or even better precision.

 Hmm.  Maybe we should think about numeric ms, which would have all the
 same advantages but without the round-off error.

Color me unimpressed ... numeric calculations are vastly more expensive
than float, and where are you going to get timing data that has more
than sixteen decimal digits of accuracy?  IME we're lucky to get three
repeatable digits in any timing measurement.  The point of using a
non-integer type here is not so much precision as dynamic range:
sometimes you might be measuring queries that run for hours, and other
times ones that run for microseconds.  In the latter case it's important
to be able to represent nanoseconds, but not so much in the former.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] plpython triggers are broken for composite-type columns

2012-04-10 Thread Tom Lane
Jan Urbański wulc...@wulczer.org writes:
 On 10/04/12 04:20, Tom Lane wrote:
 Don't know if anybody noticed bug #6559
 http://archives.postgresql.org/pgsql-bugs/2012-03/msg00180.php

 So, I know what's going on, I still don't know what's the best way to 
 handle it.

 The function that converts Python objects to PG data checks what type 
 it's supposed to produce and acts accordingly. In 9.0 it checked for 
 bool, bytea and arrays, in 9.1 it also takes composite types into account.

 This has been done to support functions returning composite types - to 
 do that they need to return a dictionary or a list, for instance 
 {'col1': 1, 'col2': 2}.

 The problem is that the routine that converts PG data into Python 
 objects does not handle composite type inputs all that well - it just 
 bails and returns the string representation, hence '(3)' appearing in 
 Python land.

 Now previously, the Python-PG function did not see that the given 
 conversion is supposed to return a composite so it also bailed and used 
 a default text-composite conversion, so '(3)' was converted to ROW(3) 
 and all went well. The new code tries to treat what it gets as a 
 dictionary/list/tuple and fails in a more or less random way.

 Now that I understand what's been going on, I'll try to think of a 
 non-invasive way of fixing that...

ISTM that conversion of a composite value to Python ought to produce a
dict, now that the other direction expects a dict.  I can see that this
is probably infeasible for compatibility reasons in 9.1, but it's not
too late to fix it for 9.2.  We might have to leave the bug unfixed in
9.1, since anything we do about it will represent a compatibility break.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] plpython triggers are broken for composite-type columns

2012-04-10 Thread Tom Lane
I wrote:
 Jan Urbański wulc...@wulczer.org writes:
 Now that I understand what's been going on, I'll try to think of a 
 non-invasive way of fixing that...

 ISTM that conversion of a composite value to Python ought to produce a
 dict, now that the other direction expects a dict.  I can see that this
 is probably infeasible for compatibility reasons in 9.1, but it's not
 too late to fix it for 9.2.  We might have to leave the bug unfixed in
 9.1, since anything we do about it will represent a compatibility break.

On reflection, can't we fix this as follows: if the value coming in from
Python is a string, just feed it to record_in, the same as we used to.
When I traced through the logic before, it seemed like it was failing
to distinguish strings from sequences, but I would hope that Python
is more strongly typed than that.

I still think the conversion in the other direction ought to yield a
dict, but that's clearly not back-patch material.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Last gasp

2012-04-10 Thread Peter Geoghegan
On 10 April 2012 18:28, Robert Haas robertmh...@gmail.com wrote:
 If we accept your argument that some
 people simply cannot help themselves, then the only solution is to
 make it cease to be a prisoner's dilemma, and that can only be done by
 changing the incentives, which presumably means handing down
 punishments to people who push their own patches forward at the
 expense of others.  Unless we care to choose a benevolent dictator, I
 don't see any way to accomplish that.

Well, I was really pointing out that people are somewhat forced into a
corner by the current state of affairs, because committers are not
typically able to look at anything in sufficient detail that isn't
ready for committer, particularly as we approach crunch-time - their
time is simply too precious. By not marking the patch ready for
committer, they are basically asking for their patch to be passed
over, and they may be incapable of bridging the chasm between what
really is their best effort, and what you'd consider to be the
ready-for-committer gold standard. Some people cannot exclusively
dedicate their time to their patch, or lack sufficient experience to
meet that standard.

 It's feasible to think that we might be able to streamline the process
 of booting patches that are not close to committable at the start of a
 CommitFest, and especially at the start of the final CommitFest.  For
 example, limiting patches to a small number of days in the "Waiting on
 Author" state would help a great deal.  But the more general problem
 of people arguing that *their* patch is the special one without which
 the earth will cease to revolve about its axis, or that it's ready
 when it's really not, is more difficult to solve.  How would you
 propose we deal with that problem?

As I've already said, I think that needs to be decided impartially,
ideally by people who are removed from the engineering process. I
don't mean that we'd get a marketing person to make those decisions -
far from it. I just mean that some separation of powers can be a good
thing in some circumstances.

 I don't agree with that.  I think that there are a few people who
 don't now have commit bits who should be given them - in particular,
 Fujii Masao and Kevin Grittner, both of whom have been doing
 consistently excellent work for several years.

I agree with you about both individuals. I hope that this happens
sooner rather than later.

 Giving more people the ability to
 commit stuff will neither force them to devote time to it nor make
 them qualified to do it if they aren't already.

One major component of being qualified, is, of course, knowing what
you don't know, and the risk of being left with egg on your face turns
out to be a pretty effective way of preventing new committers from
being too eager. Giving more people bits has a cost: in general, I'd
expect it to result in a higher bug-to-line ratio when code is
committed. However, not doing so has an opportunity cost: less code is
committed, which may, on balance, result in an inferior release than
what we could have had. Maybe you think that we have the balance
perfectly right, and you are of course perfectly entitled to that
view, as well as being perfectly entitled to having your opinion more
heavily weighed than mine, but I'd like to see a dialogue about it at
some point.

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Last gasp

2012-04-10 Thread Tom Lane
Peter Geoghegan pe...@2ndquadrant.com writes:
 On 10 April 2012 18:28, Robert Haas robertmh...@gmail.com wrote:
 I don't agree with that.  I think that there are a few people who
 don't now have commit bits who should be given them - in particular,
 Fujii Masao and Kevin Grittner, both of whom have been doing
 consistently excellent work for several years.

 I agree with you about both individuals. I hope that this happens
 sooner rather than later.

FYI, the core committee traditionally has a discussion about whom
to appoint as new committers at the end of each release cycle.
I'm sure we'll be thinking about these names this time.

 Giving more people the ability to
 commit stuff will neither force them to devote time to it nor make
 them qualified to do it if they aren't already.

 One major component of being qualified, is, of course, knowing what
 you don't know, and the risk of being left with egg on your face turns
 out to be a pretty effective way of preventing new committers from
 being too eager. Giving more people bits has a cost: in general, I'd
 expect it to result in a higher bug-to-line ratio when code is
 committed. However, not doing so has an opportunity cost: less code is
 committed, which may, on balance, result in an inferior release than
 what we could have had. Maybe you think that we have the balance
 perfectly right, and you are of course perfectly entitled to that
 view, as well as being perfectly entitled to having your opinion more
 heavily weighed than mine, but I'd like to see a dialogue about it at
 some point.

We've done pretty well over the past fifteen years by being chary in
handing out commit bits.  I don't particularly want to change that
policy.  Obviously we do need a steady supply of new blood, since
people do leave the project, but relaxing our standards doesn't seem
like the way to get it.  The impression I have is that we have a pretty
good and even increasing supply of new interested people, so letting
them acquire experience with the code base and eventually mature into
qualified committers doesn't look like a dead-end strategy from here.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] plpython triggers are broken for composite-type columns

2012-04-10 Thread Jan Urbański

On 10/04/12 20:47, Tom Lane wrote:

I wrote:

Jan Urbański wulc...@wulczer.org writes:

Now that I understand what's been going on, I'll try to think of a
non-invasive way of fixing that...



ISTM that conversion of a composite value to Python ought to produce a
dict, now that the other direction expects a dict.  I can see that this
is probably infeasible for compatibility reasons in 9.1, but it's not
too late to fix it for 9.2.  We might have to leave the bug unfixed in
9.1, since anything we do about it will represent a compatibility break.


On reflection, can't we fix this as follows: if the value coming in from
Python is a string, just feed it to record_in, the same as we used to.
When I traced through the logic before, it seemed like it was failing
to distinguish strings from sequences, but I would hope that Python
is more strongly typed than that.


Yeah, we can fix PLyObject_ToTuple to check for strings too and use the 
default PG input function. The reason it was complaining about length is 
that we're checking if the object passed implements the sequence 
protocol, which Python strings do (length, iteration, etc). Sticking an 
if branch above that to catch the string case should be sufficient.




I still think the conversion in the other direction ought to yield a
dict, but that's clearly not back-patch material.


Yes, that would be ideal, even though not backwards-compatible. 
Back-patching is out of the question, but do we want to change trigger 
functions to receive dictionaries in NEW? If so, should this be 9.2 
material, or just a TODO?


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] plpython triggers are broken for composite-type columns

2012-04-10 Thread Tom Lane
Jan Urbański wulc...@wulczer.org writes:
 On 10/04/12 20:47, Tom Lane wrote:
 On reflection, can't we fix this as follows: if the value coming in from
 Python is a string, just feed it to record_in, the same as we used to.
 When I traced through the logic before, it seemed like it was failing
 to distinguish strings from sequences, but I would hope that Python
 is more strongly typed than that.

 Yeah, we can fix PLyObject_ToTuple to check for strings too and use the 
 default PG input function. The reason it was complaining about length is 
 that we're checking if the object passed implements the sequence 
 protocol, which Python strings do (length, iteration, etc). Sticking an 
 if branch above that to catch the string case should be sufficient.

Ah, makes sense then.  (Perhaps the dict case ought to be tested before
the sequence case, too, just to be safe?)

 I still think the conversion in the other direction ought to yield a
 dict, but that's clearly not back-patch material.

 Yes, that would be ideal, even though not backwards-compatible. 
 Back-patching is out of the question, but do we want to change trigger 
 functions to receive dictionaries in NEW?

Hm, I was not thinking of this as being trigger-specific, but more a
general principle that composite columns of tuples ought to be handled
in a recursive fashion.

 If so, should this be 9.2 material, or just a TODO?

If it can be done quickly and with not much risk, I'd vote for
squeezing it into 9.2, because it seems to me to be a clear bug that the
two directions are not handled consistently.  If you don't have time for
it now or you don't think it would be a small/safe patch, we'd better
just put it on TODO.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] plpython triggers are broken for composite-type columns

2012-04-10 Thread Jan Urbański

On 10/04/12 21:27, Tom Lane wrote:

Jan Urbański wulc...@wulczer.org writes:

Yes, that would be ideal, even though not backwards-compatible.
Back-patching is out of the question, but do we want to change trigger
functions to receive dictionaries in NEW?


Hm, I was not thinking of this as being trigger-specific, but more a
general principle that composite columns of tuples ought to be handled
in a recursive fashion.


Sure, that would be the way.


If so, should this be 9.2 material, or just a TODO?


If it can be done quickly and with not much risk, I'd vote for
squeezing it into 9.2, because it seems to me to be a clear bug that the
two directions are not handled consistently.  If you don't have time for
it now or you don't think it would be a small/safe patch, we'd better
just put it on TODO.


I'll see if making the conversion function recursive is easy and, 
independently, whip up a patch that checks for strings and routes them 
through InputFunctionCall, for back-patching purposes.


Cheers,
Jan

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Last gasp

2012-04-10 Thread Robert Haas
On Tue, Apr 10, 2012 at 2:49 PM, Peter Geoghegan pe...@2ndquadrant.com wrote:
 Well, I was really pointing out that people are somewhat forced into a
 corner by the current state of affairs, because committers are not
 typically able to look at anything in sufficient detail that isn't
 ready for committer, particularly as we approach crunch-time - their
 time is simply too precious. By not marking the patch ready for
 committer, they are basically asking for their patch to be passed
 over, and they may be incapable of bridging the chasm between what
 really is their best effort, and what'd you'd consider to be the
 ready-for-committer gold standard. Some people cannot exclusively
 dedicate their time to their patch, or lack sufficient experience to
 meet that standard.

I think that's partly true and partly false.  I actually spend a lot
of time looking at patches that are not marked Ready for Committer, on
the theory that I'd like to move things along that haven't been
formally tagged with that designation if they are nevertheless ready
to go, whereas Tom I think actively avoids it, on the theory that no
one else will volunteer to review if the committers just do
everything.  These positions are in tension but neither seems to me to
be without merit.

I do understand that not everyone is going to write code that meets
our standards for commit, and I have rewritten my share of patches -
sometimes, even quite large patches - to try to bring them up to that
level.  I had more time to do that last year than I have this year,
because this year I've been focused on performance stuff.  But, on the
flip side, I think we still did a pretty good job handling pretty much
everything submitted before November.  The stuff that ran into trouble
was the stuff that came in at the end, which in many cases was not
only late to the table but overly ambitious in its scope.  I think the
question should be not so much "why didn't those big patches get
committed?" as "why does anyone think that they have a right to be
upset that they didn't?"  They were given, basically, two to three
extra months to become committable, and still fell short.  And I'm
still very willing to devote more time to them to make them
committable *even though they are not my projects*, but I am *not*
willing to do it while the features I worked hard on to get ready
early sit there and don't get released.  That might be a reasonable
expectation if the original patches were submitted in May and I had
blithely ignored them in favor of my own work all year, but that ain't
what happened.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_receivexlog stops upon server restart

2012-04-10 Thread Magnus Hagander
On Friday, April 6, 2012, Thom Brown wrote:

 Hi,

 I've tried out pg_receivexlog and have noticed that when restarting
 the cluster, pg_receivexlog gets cut off... it doesn't keep waiting.
 This is surprising as the DBA would have to remember to start
 pg_receivexlog up again.


This is intentional insofar as that's how the code was written; there isn't
a malfunctioning piece of code somewhere.

It would probably make sense to have an auto-reconnect feature, and to have
an option to turn it on/off.

If you haven't already (my wifi here is currently quite useless, which is
why I'm working on my email backlog, so I can't check), please add it to
the open items list.

//Magnus



-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Last gasp

2012-04-10 Thread Jeff Davis
On Mon, 2012-04-09 at 23:12 -0400, Christopher Browne wrote:
 But there is also a flip side to that, namely that if we do so, there
 ought to be some aspect to the process to help guide those items that
 *aren't* particularly close to being committable.

I have benefited immensely from review of my WIP patches, and a lot of
the serious review tends to happen during commitfests. This is most
important for features with a significant user interface, where it's
harder to guess what people will want.

My current strategy is to submit WIP-marked patches during a commitfest.

I agree that we should continue to have a mechanism to review patches
that aren't ready for commit, though I'm fine if we change it.

Regards,
Jeff Davis




[HACKERS] pg_tablespace_location() error message

2012-04-10 Thread Bruce Momjian
The new pg_tablespace_location() function added in PG 9.2 to remove the
directory location from pg_tablespace returns an odd error for '0', which
is InvalidOID:

test=> select pg_tablespace_location(0);
ERROR:  could not read symbolic link "pg_tblspc/0": No such file or directory

Is this OK?  It handles NULL just fine:

test=> select pg_tablespace_location(null);
 pg_tablespace_location
------------------------
 
(1 row)

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] [JDBC] Regarding GSoc Application

2012-04-10 Thread Merlin Moncure
On Tue, Apr 10, 2012 at 11:07 AM, Merlin Moncure mmonc...@gmail.com wrote:
  I agree that JNI isn't required -- we're going to
 have to study the pl/java system a bit to determine the best way to
 hook in.  This could end up getting us into the 'biting of more than
 can chew' territory admittedly, but Atri is enthusiastic and wants to
 give it a go.

Well, maybe I spoke too soon... JNI is probably the best route.  Since
SPI is off the table, all we're really pulling in from pl/java is the
(non-trivial) proper installation of a jvm into a postgres process.
pl/java is essentially a wrapper to JNI that does postgres to jni type
conversion and builds a database driven class lookup system that
allows building the java environment inside the database.   That part
is not needed at all; since we're not going to install our routines as
pl/java functions, they will be installed into a separate and more
typical class path from a jvm point of view. There are no exposed
routines in pljava.so that allow for easy manual invocation of a java
routine -- it's all pretty much hardwired to the language function
call handler system.

This begs the question about why pl/java should be used at all. Well,
as I said the jvm insertion into postgres is non-trivial so that alone
is worth something.  Maybe though we should just rip out the jvm guts
that we really need (mostly some parts of backend.c and some of the
type files) and do a completely standalone jni wrapper.  pl/java's
build system is a mess anyways so we're not losing all that much in
trying to go off on our own and do something.  OTOH, throwing a build
system on top of the growing pile of things to do is turning this into
a daunting project vs the (admittedly cobbled together) approach I was
hoping to do earlier.  Anyways, it's really Atri's call if he's
comfortable proceeding.

merlin



Re: [HACKERS] invalid search_path complaints

2012-04-10 Thread Tom Lane
Christoph Berg c...@df7cb.de writes:
 Re: Tom Lane 2012-04-04 28647.1333558...@sss.pgh.pa.us
 Now, Scott's comment seems to me to offer a principled way out of this:
 if we define the intended semantics of search_path as being similar
 to the traditional understanding of Unix PATH, then it's not an error
 or even unexpected to have references to nonexistent schemas in there.

 Btw, the default setting does already work like this: $user,public.
 It is not an error for $user not to exist, but it is a very nice
 default because it will be used as soon as it appears.

Yeah.  Between that and the fact that there are a lot of cases where we
simply fail to check path validity at all (eg, if it's coming from
postgresql.conf), I'm becoming more and more convinced that just
removing the existence check is the best thing.

Attached is a proposed patch for this.  (Note: the docs delta includes
mention of permissions behavior, which was previously undocumented but
has not actually changed.)
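For illustration, here is roughly what the proposed semantics look like from
a client session (a sketch, not output from the patch):

```sql
-- With the existence check removed, a nonexistent schema is skipped
-- silently, much like a missing directory in a Unix PATH:
SET search_path = no_such_schema, public;

-- Unqualified creation falls through to the first schema that does
-- exist and is usable -- here, public:
CREATE TABLE t (x int);
```

If no_such_schema is later created, it starts taking effect without any
further SET, just as $user already behaves in the default setting.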

I am not sure whether we should consider back-patching this into 9.1,
although that would be necessary if we wanted to fix Robert's original
complaint against 9.1.  Thoughts?

regards, tom lane

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 640defde860d57a81d0671f2957b99ded15a3566..361ad7b99a52bbbcec570b639800c175d3c19ab7 100644
*** a/doc/src/sgml/config.sgml
--- b/doc/src/sgml/config.sgml
*** COPY postgres_log FROM '/full/path/to/lo
*** 4670,4679 
  
 para
  The value for varnamesearch_path/varname must be a comma-separated
! list of schema names.  If one of the list items is
! the special value literal$user/literal, then the schema
! having the name returned by functionSESSION_USER/ is substituted, if there
! is such a schema.  (If not, literal$user/literal is ignored.)
 /para
  
 para
--- 4670,4686 
  
 para
  The value for varnamesearch_path/varname must be a comma-separated
! list of schema names.  Any name that is not an existing schema, or is
! a schema for which the user does not have literalUSAGE/
! permission, is silently ignored.
!/para
! 
!para
! If one of the list items is the special name
! literal$user/literal, then the schema having the name returned by
! functionSESSION_USER/ is substituted, if there is such a schema
! and the user has literalUSAGE/ permission for it.
! (If not, literal$user/literal is ignored.)
 /para
  
 para
*** COPY postgres_log FROM '/full/path/to/lo
*** 4697,4712 
  
 para
  When objects are created without specifying a particular target
! schema, they will be placed in the first schema listed
! in the search path.  An error is reported if the search path is
! empty.
 /para
  
 para
  The default value for this parameter is
! literal'$user, public'/literal (where the second part will be
! ignored if there is no schema named literalpublic/).
! This supports shared use of a database (where no users
  have private schemas, and all share use of literalpublic/),
  private per-user schemas, and combinations of these.  Other
  effects can be obtained by altering the default search path
--- 4704,4718 
  
 para
  When objects are created without specifying a particular target
! schema, they will be placed in the first valid schema named in
! varnamesearch_path/varname.  An error is reported if the search
! path is empty.
 /para
  
 para
  The default value for this parameter is
! literal$user, public/literal.
! This setting supports shared use of a database (where no users
  have private schemas, and all share use of literalpublic/),
  private per-user schemas, and combinations of these.  Other
  effects can be obtained by altering the default search path
diff --git a/src/backend/catalog/namespace.c b/src/backend/catalog/namespace.c
index dc8f8eaf3f3f60f51fd8b59aa78ccfc36e1b23f9..e92efd863ed74fb77425333d772c194c3d36851b 100644
*** a/src/backend/catalog/namespace.c
--- b/src/backend/catalog/namespace.c
*** ResetTempTableNamespace(void)
*** 3773,3786 
   * Routines for handling the GUC variable 'search_path'.
   */
  
! /* check_hook: validate new search_path, if possible */
  bool
  check_search_path(char **newval, void **extra, GucSource source)
  {
- 	bool		result = true;
  	char	   *rawname;
  	List	   *namelist;
- 	ListCell   *l;
  
  	/* Need a modifiable copy of string */
  	rawname = pstrdup(*newval);
--- 3773,3784 
   * Routines for handling the GUC variable 'search_path'.
   */
  
! /* check_hook: validate new search_path value */
  bool
  check_search_path(char **newval, void **extra, GucSource 

Re: [HACKERS] pg_tablespace_location() error message

2012-04-10 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 The new pg_tablespace_location() function added in PG 9.2 to remove the
 director location from pg_tablespace returns an odd error for '0', which
 is InvalidOID:

Well, it's the same odd error you'd get for any other bogus OID.

The way the function is coded, it has no need to look into pg_tablespace
as such, which is why you don't get something like "no such tablespace".
We could add such a lookup purely for error detection purposes, but I'm
not real sure I see the point.

 Is this OK?  It handles NULL just fine:

That's a consequence of the function being marked strict; it has nothing
much to do with anything.

regards, tom lane



Re: [HACKERS] Patch: add timing of buffer I/O requests

2012-04-10 Thread Greg Smith

On 04/10/2012 12:27 PM, Tom Lane wrote:

Changing the column name will definitely break
anything more specific than select * from pg_stat_statements.
However, it's less clear that changing the units in which the column is
expressed will break things.  It seems likely to me that nobody out
there is doing anything much more sophisticated than sorting by the
column, and that's still going to work the same.


I am doing more sophisticated things with it, so I'll celebrate this as 
my opportunity to say I did something you didn't see coming for 2012.


All the sites involved will happily shred those scripts and rewrite for 
either normalized queries *or* better I/O timing info though, so net 
positive for 9.2 changes even if this part breaks on them.  I think this 
is one of those rare opportunities where there's enough positive 
goodwill from changes to ask "what's the best way to handle this
long-term?" and get away with whatever change that requires, too.  I'm 
really not liking the look of this wart now that Robert has pointed it out.


I'd prefer to see at least usec resolution and 8 bytes of dynamic 
range for query related statistics.  Any of these would be fine from a 
UI perspective to me:


- float8 seconds
- float8 msec
- float8 usec
- int8 usec

I don't think int8 msec will be enough resolution to time queries for 
very long, if it's not already obsolete.  The committed example for 
pg_test_timing on good hardware already clocks trivial events at a 
single usec.  Even I/O is getting there.  I've measured my Fusion-io 
loaner card peaking at 8GB/s, which works out to 1 usec per 8K page. 
None of that is even "price no object" hardware today; it's the stuff 
sitting in my office.
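For comparison, a back-of-the-envelope check of the dynamic range of the
candidate representations (my arithmetic, not from the patch):

```sql
-- How many years of accumulated time each representation can hold
-- before losing exactness (float8, exact integers up to 2^53) or
-- overflowing (int8):
SELECT (2^53) / (1e6 * 86400 * 365) AS float8_exact_usec_years,  -- ~285
       (2^63) / (1e6 * 86400 * 365) AS int8_usec_years;          -- ~292471
```

Either bound is comfortably beyond any realistic accumulation window for
per-query statistics, which is why the choice mostly comes down to interface
convenience rather than range.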


If anything, I'd expect more timing code in the database that only has 
ms resolution right now will start looking fat in a year or two, and 
more things might need to be shifted to usec instead.  Checkpoint timing 
can survive having less resolution because its primary drumbeat is very 
unlikely to drop below the minutes range.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com



Re: [HACKERS] pg_tablespace_location() error message

2012-04-10 Thread Bruce Momjian
On Tue, Apr 10, 2012 at 05:43:12PM -0400, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  The new pg_tablespace_location() function added in PG 9.2 to remove the
  director location from pg_tablespace returns an odd error for '0', which
  is InvalidOID:
 
 Well, it's the same odd error you'd get for any other bogus OID.
 
 The way the function is coded, it has no need to look into pg_tablespace
 as such, which is why you don't get something like no such tablespace.
 We could add such a lookup purely for error detection purposes, but I'm
 not real sure I see the point.
 
  Is this OK?  It handles NULL just fine:
 
 That's a consequence of the function being marked strict; it has nothing
 much to do with anything.

OK, just checking before this function gets into a release.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] pg_tablespace_location() error message

2012-04-10 Thread Robert Haas
On Tue, Apr 10, 2012 at 5:43 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Bruce Momjian br...@momjian.us writes:
 The new pg_tablespace_location() function added in PG 9.2 to remove the
 director location from pg_tablespace returns an odd error for '0', which
 is InvalidOID:

 Well, it's the same odd error you'd get for any other bogus OID.

 The way the function is coded, it has no need to look into pg_tablespace
 as such, which is why you don't get something like no such tablespace.
 We could add such a lookup purely for error detection purposes, but I'm
 not real sure I see the point.

I think what Bruce might be getting at is that 0 is more likely than a
randomly chosen value to be passed to this function; for example, one
can imagine wanting to pass pg_class.reltablespace.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] invalid search_path complaints

2012-04-10 Thread Robert Haas
On Tue, Apr 10, 2012 at 5:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Christoph Berg c...@df7cb.de writes:
 Re: Tom Lane 2012-04-04 28647.1333558...@sss.pgh.pa.us
 Now, Scott's comment seems to me to offer a principled way out of this:
 if we define the intended semantics of search_path as being similar
 to the traditional understanding of Unix PATH, then it's not an error
 or even unexpected to have references to nonexistent schemas in there.

 Btw, the default setting does already work like this: $user,public.
 It is not an error for $user not to exist, but it is a very nice
 default because it will be used as soon as it appears.

 Yeah.  Between that and the fact that there are a lot of cases where we
 simply fail to check path validity at all (eg, if it's coming from
 postgresql.conf), I'm becoming more and more convinced that just
 removing the existence check is the best thing.

 Attached is a proposed patch for this.  (Note: the docs delta includes
 mention of permissions behavior, which was previously undocumented but
 has not actually changed.)

 I am not sure whether we should consider back-patching this into 9.1,
 although that would be necessary if we wanted to fix Robert's original
 complaint against 9.1.  Thoughts?

I guess my feeling would be no, because it seems like a clear
behavior change, even though I agree the new behavior's better.  Since
my original investigation was prompted by a customer complaint, it's
tempting to say we should, but there's not much good making customer A
happy if we make customer B unhappy with the same change.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] pg_tablespace_location() error message

2012-04-10 Thread Bruce Momjian
On Tue, Apr 10, 2012 at 06:16:31PM -0400, Robert Haas wrote:
 On Tue, Apr 10, 2012 at 5:43 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  Bruce Momjian br...@momjian.us writes:
  The new pg_tablespace_location() function added in PG 9.2 to remove the
  director location from pg_tablespace returns an odd error for '0', which
  is InvalidOID:
 
  Well, it's the same odd error you'd get for any other bogus OID.
 
  The way the function is coded, it has no need to look into pg_tablespace
  as such, which is why you don't get something like no such tablespace.
  We could add such a lookup purely for error detection purposes, but I'm
  not real sure I see the point.
 
 I think what Bruce might be getting at is that 0 is more likely than a
 randomly chosen value to be passed to this function; for example, one
 can imagine wanting to pass pg_class.reltablespace.

Yes, that was my point.  In tracking down a pg_upgrade bug, I discovered
that zero means the cluster default location, while
pg_tablespace_location() returning '' means the default _database_ (or
global) tablespace.   We are quite unclear on what DEFAULTTABLESPACE_OID
means (the database default).

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Patch: add timing of buffer I/O requests

2012-04-10 Thread Peter Geoghegan
On 10 April 2012 23:07, Greg Smith g...@2ndquadrant.com wrote:
 On 04/10/2012 12:27 PM, Tom Lane wrote:
 I am doing more sophisticated things with it, so I'll celebrate this as my
 opportunity to say I did something you didn't see coming for 2012.

This is why I requested that we expose the query_id hash value - I
believe that it will be generally useful in clustering situations. It
would be nice to have a persistent identifier. While we're discussing
revising pg_stat_statement's interface, are you still opposed to
exposing that value, Tom?

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



Re: [HACKERS] Last gasp

2012-04-10 Thread Greg Smith

On 04/10/2012 01:33 PM, Robert Haas wrote:


I also think that people were more receptive to my reviews before I
got a commit bit.


That's not true; many people were just as annoyed at you back then.


(Robert knows I'm kidding.  I hope.)

--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com



Re: [HACKERS] pg_tablespace_location() error message

2012-04-10 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 On Tue, Apr 10, 2012 at 06:16:31PM -0400, Robert Haas wrote:
 On Tue, Apr 10, 2012 at 5:43 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 The way the function is coded, it has no need to look into pg_tablespace
 as such, which is why you don't get something like no such tablespace.

 I think what Bruce might be getting at is that 0 is more likely than a
 randomly chosen value to be passed to this function; for example, one
 can imagine wanting to pass pg_class.reltablespace.

 Yes, that was my point.

Hm.  I have no objection to special-casing zero here, but what behavior
do you want?  Should it return an empty string as we do for
DEFAULTTABLESPACE_OID, or throw a different error?

regards, tom lane



Re: [HACKERS] invalid search_path complaints

2012-04-10 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Tue, Apr 10, 2012 at 5:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I am not sure whether we should consider back-patching this into 9.1,
 although that would be necessary if we wanted to fix Robert's original
 complaint against 9.1.  Thoughts?

 I guess my feeling would be no, because it seems like a clear
 behavior change, even though I agree the new behavior's better.  Since
 my original investigation was prompted by a customer complaint, it's
 tempting to say we should, but there's not much good making customer A
 happy if we make customer B unhappy with the same change.

Well, although it's a behavior change, it consists entirely of removing
an error check.  To suppose that this would break somebody's app,
you'd have to suppose that they were relying on SET search_path =
no_such_schema to throw an error.  That's possible I guess, but it
seems significantly less likely than that somebody would be expecting
the ALTER ... SET case to not result in warnings.  There are
considerably cheaper and easier-to-use methods for checking whether a
schema exists than catching an error.

Anyway, if you're happy with 9.1 being an outlier on this behavior,
I won't press the point.

regards, tom lane



Re: [HACKERS] pg_tablespace_location() error message

2012-04-10 Thread Bruce Momjian
On Tue, Apr 10, 2012 at 07:09:33PM -0400, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  On Tue, Apr 10, 2012 at 06:16:31PM -0400, Robert Haas wrote:
  On Tue, Apr 10, 2012 at 5:43 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  The way the function is coded, it has no need to look into pg_tablespace
  as such, which is why you don't get something like no such tablespace.
 
  I think what Bruce might be getting at is that 0 is more likely than a
  randomly chosen value to be passed to this function; for example, one
  can imagine wanting to pass pg_class.reltablespace.
 
  Yes, that was my point.
 
 Hm.  I have no objection to special-casing zero here, but what behavior
 do you want?  Should it return an empty string as we do for
 DEFAULTTABLESPACE_OID, or throw a different error?

I have no idea.  The big problem is that we currently use '' for the
cluster default, while 0 means the database default.  I can't think of a
good return result --- I think it has to be an error of some kind.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Patch: add timing of buffer I/O requests

2012-04-10 Thread Robert Haas
On Tue, Apr 10, 2012 at 6:32 PM, Peter Geoghegan pe...@2ndquadrant.com wrote:
 On 10 April 2012 23:07, Greg Smith g...@2ndquadrant.com wrote:
 On 04/10/2012 12:27 PM, Tom Lane wrote:
 I am doing more sophisticated things with it, so I'll celebrate this as my
 opportunity to say I did something you didn't see coming for 2012.

 This is why I requested that we expose the query_id hash value - I
 believe that it will be generally useful in clustering situations. It
 would be nice to have a persistent identifier. While we're discussing
 revising pg_stat_statement's interface, are you still opposed to
 exposing that value, Tom?

If people need something like that, couldn't they create it by hashing
the normalized query text with an arbitrary algorithm?

The only obvious advantage of exposing the value used internally is
that it might be helpful in terms of understanding the collision
behavior.  But hopefully collisions are pretty rare anyway, so...
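To sketch the suggestion: a caller could derive a stable identifier from the
normalized text itself, with any algorithm it likes (md5 is an arbitrary
choice here, and this assumes the 9.2 pg_stat_statements columns):

```sql
-- Hash the normalized query text client-side instead of relying on
-- the extension's internal, platform-dependent hash value:
SELECT md5(query) AS query_id, calls, total_time
FROM pg_stat_statements
ORDER BY total_time DESC;
```

This identifier survives a stats reset and can be compared across servers,
though it is only as stable as the normalized text it is computed from.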

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Patch: add timing of buffer I/O requests

2012-04-10 Thread Tom Lane
Peter Geoghegan pe...@2ndquadrant.com writes:
 On 10 April 2012 23:07, Greg Smith g...@2ndquadrant.com wrote:
 On 04/10/2012 12:27 PM, Tom Lane wrote:
 I am doing more sophisticated things with it, so I'll celebrate this as my
 opportunity to say I did something you didn't see coming for 2012.

 This is why I requested that we expose the query_id hash value - I
 believe that it will be generally useful in clustering situations. It
 would be nice to have a persistent identifier. While we're discussing
 revising pg_stat_statement's interface, are you still opposed to
 exposing that value, Tom?

I still am.  I'm unconvinced by references to clustering situations,
because as constructed the hash is extremely database-specific.
It will vary depending on OID assignments, not to mention platform
characteristics such as word width and endianness.

regards, tom lane



Re: [HACKERS] pg_tablespace_location() error message

2012-04-10 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 On Tue, Apr 10, 2012 at 07:09:33PM -0400, Tom Lane wrote:
 Hm.  I have no objection to special-casing zero here, but what behavior
 do you want?  Should it return an empty string as we do for
 DEFAULTTABLESPACE_OID, or throw a different error?

 I have no idea.  The big problem is that we currently use '' for the
 cluster default, while 0 means the database default.  I can't think of a
 good return result --- I think it has to be an error of some kind.

If we expect this function to mainly be applied to pg_class.reltablespace,
then it seems like it ought to understand that zero means the database
default and substitute the database's default tablespace.  That might
or might not be the same as the cluster default.

Alternatively, we could expect pg_upgrade to understand that and make
the substitution itself, but if the same would be needed by most uses of
the function, maybe we should just do it here.
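In the meantime a caller can make that substitution explicitly, along these
lines (a hypothetical workaround, not what the function does today):

```sql
-- Map reltablespace = 0 to the database's default tablespace before
-- asking pg_tablespace_location() for the path:
SELECT c.relname,
       pg_tablespace_location(
           CASE WHEN c.reltablespace <> 0 THEN c.reltablespace
                ELSE (SELECT dattablespace FROM pg_database
                      WHERE datname = current_database())
           END) AS location
FROM pg_class c
WHERE c.relkind = 'r';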

regards, tom lane



Re: [HACKERS] Patch: add timing of buffer I/O requests

2012-04-10 Thread Peter Geoghegan
On 11 April 2012 00:35, Robert Haas robertmh...@gmail.com wrote:
 If people need something like that, couldn't they create it by hashing
 the normalized query text with an arbitrary algorithm?

That supposes that the normalised query text is perfectly stable. It
may well not be, particularly for things like ad-hoc queries or
queries generated by ORMs, across database clusters and over long
periods of time - you're basically throwing the benefit of all of that
intelligent normalisation out of the window, because it's pretty close
to free to expose it. What if a developer tweaks an alias in the
application for clarity? Also, as you point out, it has additional
utility in advertising when a collision has happened, and setting the
user's expectations appropriately. I assume that collisions are very
rare, but when they do happen, this gives you a fighting chance of
noticing them.

As Tom points out, the query hash will vary according to platform
specific characteristics, including endianness, and will require OIDs
are the same on every node. However, it is still going to be useful in
clusters that use streaming replication, though not a third party
trigger based replication system for example, because streaming
replication does of course require that those factors (and rather a
lot more) will be identical across the cluster anyway. Realistically,
I'd expect a large majority of people interested in this feature to
only want to use it with streaming replication anyway.

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



Re: [HACKERS] pg_upgrade incorrectly equates pg_default and database tablespace

2012-04-10 Thread Bruce Momjian
On Thu, Mar 22, 2012 at 02:55:32PM +0200, Ants Aasma wrote:
 Hi,
 
 while working on a support case I stumbled upon a bug in pg_upgrade.
 Upgrade fails with No such file or directory when a database is
 moved to a non-default tablespace and contains a table that is moved
 to pg_default. The cause seems to be that the following test
 incorrectly equates empty spclocation with database tablespace:
 
 tblspace = PQgetvalue(res, relnum, i_spclocation);
 /* if no table tablespace, use the database tablespace */
 if (strlen(tblspace) == 0)
     tblspace = dbinfo->db_tblspace;
 
 Patch to fix this is attached.

Thank you for the fine bug report, and patch (and the bug confirmation
from Jeff Davis).  Sorry for the delay in replying.

You have certainly found a bug, and one that exists all the way back to
pg_upgrade 9.0.  I was able to reproduce the bug with this SQL:

-- test database in different tablespace with table in cluster 
-- default tablespace
CREATE DATABASE tbltest TABLESPACE tt;
\connect tbltest
CREATE TABLE t1 (x int);
CREATE TABLE t2 (x int) TABLESPACE pg_default;

It is exactly as you described --- the database is in a user-defined
tablespace, but the table (t2) is in the cluster default location.  Not
sure how no one else reported this failure before.

The crux of the confusion is that pg_class.reltablespace == 0 means the
database default tablespace, while a join to pg_tablespace that returns
a zero-length string means it is in the cluster data directory.  The new
code properly looks at reltablespace rather than testing the tablespace
location, which was your fix as well.

I have applied three different patches very similar to your helpful
suggestion, attached.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/contrib/pg_upgrade/info.c b/contrib/pg_upgrade/info.c
new file mode 100644
index 1f5b7ae..02d3e0f
*** a/contrib/pg_upgrade/info.c
--- b/contrib/pg_upgrade/info.c
*** get_rel_infos(migratorContext *ctx, cons
*** 306,311 
--- 306,312 
int i_relname = -1;
int i_oid = -1;
int i_relfilenode = -1;
+   int i_reltablespace = -1;
int i_reltoastrelid = -1;
charquery[QUERY_ALLOC];
  
*** get_rel_infos(migratorContext *ctx, cons
*** 320,326 
  
snprintf(query, sizeof(query),
 SELECT DISTINCT c.oid, n.nspname, c.relname, 
!  c.relfilenode, c.reltoastrelid, t.spclocation 
 FROM pg_catalog.pg_class c JOIN 
   pg_catalog.pg_namespace n 
   ON c.relnamespace = n.oid 
--- 321,327 
  
snprintf(query, sizeof(query),
 SELECT DISTINCT c.oid, n.nspname, c.relname, 
!  c.relfilenode, c.reltoastrelid, 
c.reltablespace, t.spclocation 
 FROM pg_catalog.pg_class c JOIN 
   pg_catalog.pg_namespace n 
   ON c.relnamespace = n.oid 
*** get_rel_infos(migratorContext *ctx, cons
*** 339,345 
 ('pg_largeobject', 
'pg_largeobject_loid_pn_index'%s) )) 
   AND relkind IN ('r','t', 'i'%s)
 GROUP BY  c.oid, n.nspname, c.relname, c.relfilenode,
!  c.reltoastrelid, t.spclocation, 

   n.nspname 
 ORDER BY n.nspname, c.relname;,
 FirstNormalObjectId,
--- 340,346 
 ('pg_largeobject', 
'pg_largeobject_loid_pn_index'%s) )) 
   AND relkind IN ('r','t', 'i'%s)
 GROUP BY  c.oid, n.nspname, c.relname, c.relfilenode,
!  c.reltoastrelid, 
c.reltablespace, t.spclocation, 
   n.nspname 
 ORDER BY n.nspname, c.relname;,
 FirstNormalObjectId,
*** get_rel_infos(migratorContext *ctx, cons
*** 361,366 
--- 362,368 
i_relname = PQfnumber(res, relname);
i_relfilenode = PQfnumber(res, relfilenode);
i_reltoastrelid = PQfnumber(res, reltoastrelid);
+   i_reltablespace = PQfnumber(res, reltablespace);
i_spclocation = PQfnumber(res, spclocation);
  
	for (relnum = 0; relnum < ntups; relnum++)
*** get_rel_infos(migratorContext *ctx, cons
*** 379,388 
	curr->relfilenode = atooid(PQgetvalue(res, relnum, 
		i_relfilenode));

Re: [HACKERS] pg_tablespace_location() error message

2012-04-10 Thread Bruce Momjian
On Tue, Apr 10, 2012 at 07:57:30PM -0400, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  On Tue, Apr 10, 2012 at 07:09:33PM -0400, Tom Lane wrote:
  Hm.  I have no objection to special-casing zero here, but what behavior
  do you want?  Should it return an empty string as we do for
  DEFAULTTABLESPACE_OID, or throw a different error?
 
  I have no idea.  The big problem is that we currently use '' for the
  cluster default, while 0 means the database default.  I can't think of a
  good return result --- I think it has to be an error of some kind.
 
 If we expect this function to mainly be applied to pg_class.reltablespace,
 then it seems like it ought to understand that zero means the database
 default and substitute the database's default tablespace.  That might
 or might not be the same as the cluster default.

Well, do we really want to be reporting the _current_ data directory
location?  We do track tablespace symlink moves because we read the
symlinks now, so that isn't out of the question.  A bigger question is
whether returning '' for a database-default location is valid --- I am
thinking no.

 Alternatively, we could expect pg_upgrade to understand that and make
 the substitution itself, but if the same would be needed by most uses of
 the function, maybe we should just do it here.

I just applied a patch to pg_upgrade to do exactly that, and it is
needed in pre-9.2 as well because pg_upgrade was testing for an empty
spclocation, which could be '' or it could be NULL (from an outer join),
but pg_upgrade wasn't distinguishing the two.  :-(
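For illustration, the distinction at issue can be sketched like this (a
hypothetical helper, not the actual pg_upgrade code):

```python
def tablespace_path(spclocation, default_path):
    """Resolve a tablespace location from the pg_upgrade query result.

    spclocation can be '' (the cluster default, as stored in
    pg_tablespace) or None (SQL NULL from the outer join when
    reltablespace is 0, i.e. the database default); both cases must
    fall back to the default data-directory path, and the old code
    only checked for the empty string.
    """
    if spclocation is None or spclocation == "":
        return default_path
    return spclocation
```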

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Last gasp

2012-04-10 Thread Noah Misch
On Tue, Apr 10, 2012 at 11:53:23AM -0500, Kevin Grittner wrote:
 Tom Lane t...@sss.pgh.pa.us wrote:
  Kevin Grittner kevin.gritt...@wicourts.gov writes:
  One other sort of mechanical test which I think can and should be
  applied to patches submitted to the last CF is that if *at the
  start of the CF* the patch doesn't apply, compile, pass
  regression tests, and demonstrably provide the functionality
  claimed for the patch, it should not be a candidate for inclusion
  in the release.
  
  I would not be in favor of inflexible application of such a rule.
  For instance, if a patch had gotten broken by a conflict with some
  other patch applied the day before the CF starts, it would be
  unfair to not give the patch author a reasonable amount of time to
  rebase.  And such conflicts occurring after the CF starts are
  hardly unusual either.
  
 I didn't mean to exclude rebasing because of conflicts with recent
 commits, so perhaps "mechanical" was overstating it.  But maybe not
 -- perhaps each patch submission should state which commit it was
 last confirmed to compile and work with, and if there are problems
 against HEAD that could be confirmed before asking for the rebase. 
 That wouldn't be too much extra work for the initial reviewer, and
 it would help establish objective criteria for categorizing whether
 a patch should be treated as WIP.

Of the patches I've reviewed that fall into one the problem categories Robert
outlined, all applied cleanly and passed regression tests.  On the flip side,
I have submitted at least two patches that failed regression tests for the
reviewer due to isolated, easily-fixed blunders.  Consequently, I'm not
hopeful about these checks as coarse indicators of patch readiness.  I would
certainly like an objective test for assigning patches to those categories,
but I don't have a better idea for such a test.

nm

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: add timing of buffer I/O requests

2012-04-10 Thread Tom Lane
Peter Geoghegan pe...@2ndquadrant.com writes:
 On 11 April 2012 00:35, Robert Haas robertmh...@gmail.com wrote:
 If people need something like that, couldn't they create it by hashing
 the normalized query text with an arbitrary algorithm?

 That supposes that the normalised query text is perfectly stable. It
 may well not be, particularly for things like ad-hoc queries or
 queries generated by ORMs, across database clusters and over long
 periods of time -

Indeed, but the hash value isn't stable either given those sorts of
assumptions, so I'm not convinced that there's any advantage there.

What I think people would actually like to know, if they're in a
situation where distinct query texts are getting hashed to the same
thing, is *which* different texts got hashed to the same thing.
But there's no good way to expose that given the pg_stat_statements
infrastructure, and exposing the hash value doesn't help.
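As a sketch of the "hash it yourself" idea raised upthread -- assuming
you already have the normalized query text in hand (nothing below is
part of the pg_stat_statements API):

```python
import hashlib

def query_key(normalized_text):
    # Derive a client-side key from a normalized query text.  Any
    # algorithm works; md5 is just a convenient choice.  Note the
    # caveat above: if the normalized text itself is not stable
    # across clusters or over time, this key is no more stable than
    # the server-side hash value would be.
    return hashlib.md5(normalized_text.encode("utf-8")).hexdigest()
```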

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


  1   2   >