Re: [HACKERS] Estimating total amount of shared memory required by postmaster

2011-06-03 Thread Heikki Linnakangas

On 02.06.2011 21:58, Alexey Klyukin wrote:

Hello,

We've recently come across the task of estimating the size of shared memory
required for PostgreSQL to start. This comes from the problem of validating
postgresql.conf files
(http://archives.postgresql.org/pgsql-hackers/2011-03/msg01831.php), i.e.
checking that the server will be able to start with new configuration options
without actually performing the restart. Currently, I see a couple of ways
to get the estimate:

- Use the code from ipci.c to get the total size of the shared memory segment
   that Postmaster would be allocating with the given configuration options
   (shared_buffers, etc.). This would require getting the actual amount of
   available shared memory somehow, which is platform dependent and might not
   be very reliable. The other downside is that the code would need to be
   updated if the original estimates in ipci.c change.

- Try to actually allocate the shared memory the way the postmaster does it
   nowadays; if the process fails, analyze the error code to check whether the
   failure is due to the shmmax or shmall limits being too low. This would
   need to be run as a separate process (not a postmaster child) to avoid
   messing with the postmaster's own shared memory, which means that this would
   be hard to implement as a user-callable stored function.

I'm also looking for other ideas. Any suggestions?


There's a patch floating around to use POSIX shared memory, which 
doesn't obey the shmmax and shmall limits:


http://archives.postgresql.org/message-id/d9edacf7-53f1-4355-84f8-2e74cd19d...@themactionfaction.com 



That would allow us to fly under the radar and avoid the whole issue. If 
you could review that patch, that would be great.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PQdeleteTuple function in libpq

2011-06-03 Thread Pavel Golub
Hello, Andrew.

You wrote:

AC On 6/2/2011 11:02 AM, Alvaro Herrera wrote:
 Excerpts from Andrew Chernow's message of jue jun 02 10:12:40 -0400 2011:

 Andrew, why we have PQmakeEmptyPGresult, PQcopyResult,
 PQsetResultAttrs, PQsetvalue and PQresultAlloc in this case? Of course
 there's no big deal with their absence but let's be consistent.

 I'm not entirely sure what you are trying to do, but can't you use
 PQmakeEmptyPGresult, PQsetResultAttrs and PQsetvalue to construct a
 result that excludes the tuples you don't want followed by a
 PQclear(initial_result)?

 Seems pretty wasteful if you want to delete a single tuple from a large
 result.  I think if you desired to compact the result to free some
 memory after deleting a large fraction of the tuples in the result it
 could be useful to do that, otherwise just live with the unused holes in
 the storage area as suggested by Pavel.


AC Another solution is to manually cursor through the set (like grab 1000
AC tuples at a time) and copy the set to your own structure.  That way, the
AC temporary double memory to perform the copy is not as big of a hit.  By
AC using your own structure, you can organize the memory in a fashion that
AC is optimized for your requirement.

I agree that there are a lot of possible solutions. But let me compare
my function with the official PQsetvalue:

1. Both allow changing data in PGresult
2. Both allow changing tuples number
3. Both leave the old data untouched, to be freed by PQclear
4. PQsetvalue allocates more memory as it works; mine does not (with a
little fix it could even allow the deleted tuple's space to be reused)

So why shouldn't we have both of them to make life easier?

-- 
With best wishes,
 Pavel  mailto:pa...@gf.microolap.com




Re: [HACKERS] storing TZ along timestamps

2011-06-03 Thread Peter Eisentraut
On tor, 2011-06-02 at 22:58 -0500, Jim Nasby wrote:
 I'm torn between whether the type should store the original time or
 the original time converted to GMT. I believe you would have the most
 accuracy if you stored the original time... but then indexing becomes
 problematic. I don't know if this data quality issue can be solved by
 anything short of somehow storing the actual timezone rule that was in
 place when the data was set.
 
 Speaking of input; someone asked what timezone should be used as the
 original timezone. We should use whatever timezone was passed in
 with the value, and if one wasn't passed in we should use whatever the
 timezone GUC is set to (I'm assuming that's what timestamptz does).

I think all of that comes down to business rules.  Train and airline
companies etc. have probably figured this out for themselves, not
necessarily consistent with each other.  So it's doubtful whether a
single solution that we can hash out here is going to work well in
practice.  I think making it easier to implement particular business
rules in this area in userspace (composite types, etc.) might go a
longer way.




Re: [HACKERS] SSI predicate locking on heap -- tuple or row?

2011-06-03 Thread Heikki Linnakangas

On 03.06.2011 00:14, Kevin Grittner wrote:

Attached is a comments-only patch for this, along with one
correction to  the comments you added and a couple other typos.


Ok, committed.


I'll submit a patch for the README-SSI file once I find a reference
I like to a paper describing what Dan's proof uses as a premise --
that the transaction on the rw-conflict *out* side of the pivot must
not only be the first of the three transactions in the dangerous
structure to commit, but the first in the entire cycle of which the
dangerous structure is a part.  With that premise as a given, the
proof is short, clear, and unassailable; but I think we need a cite
to use that argument convincingly.


Agreed.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



[HACKERS] WIP: Fast GiST index build

2011-06-03 Thread Alexander Korotkov
Hackers,

WIP patch of fast GiST index build is attached. Code is dirty and comments
are lacking, but it works. Now it is ready for first benchmarks, which
should prove the efficiency of the selected technique. It's time to compare
fast GiST index build with repeated-insert build on large enough datasets
(datasets which don't fit in cache). The aims of the testing are:
1) Measure the acceleration of index build.
2) Measure the change in index quality.
I'm going to do the first tests using synthetic datasets. Everybody who has
interesting real-life datasets for testing is welcome.

--
With best regards,
Alexander Korotkov.


gist_fast_build-0.0.3.patch.gz
Description: GNU Zip compressed data



[HACKERS] Cannot receive posts to pgsql-hackers through news server

2011-06-03 Thread MauMau

Hello,

I'm reading the posts of pgsql-hackers through the news server 
news.postgresql.org using Windows Mail on Windows Vista, to avoid receiving 
many emails. I don't seem to be able to receive posts from June: all I could 
download were posted before June. But I could receive June messages on other 
MLs such as pgsql-general, pgsql-jdbc, etc.


What could be wrong?

Regards
MauMau




Re: [HACKERS] Hacking gram.y Error syntax error at or near MERGEJOIN

2011-06-03 Thread HuangQi
Thanks all for your ideas. Though it's weird, I reinstalled Postgres and
tried again. The error message disappeared; the parser now works fine.

On 3 June 2011 01:13, Alvaro Herrera alvhe...@commandprompt.com wrote:

 Excerpts from HuangQi's message of jue jun 02 11:17:21 -0400 2011:
  Hi, thanks a lot for your ideas. But I've done all these things. I've
  checked the gram.y and kwlist.h files many times but can not find what's
  wrong. So is there any possibility that the problem comes from something
  after the parser, though it seems it should come from the parser?

 If you want more input, post the patch.

 --
 Álvaro Herrera alvhe...@commandprompt.com
 The PostgreSQL Company - Command Prompt, Inc.
 PostgreSQL Replication, Consulting, Custom Development, 24x7 support




-- 
Best Regards
Huang Qi Victor


Re: [HACKERS] pg_upgrade automatic testing

2011-06-03 Thread Andrew Dunstan



On 05/25/2011 03:07 PM, Peter Eisentraut wrote:

On ons, 2011-04-27 at 18:14 -0400, Noah Misch wrote:

Enthusiastic +1 for this concept.  There's at least one rough edge: it fails if
you have another postmaster running on port 5432.

This has now been addressed: pg_upgrade accepts PGPORT settings.
Attached is a slightly updated patch that runs the test suite with a port of
65432, which you can override by setting PGPORT yourself.

Anyway, is this something that people want in the repository?  It's not
as polished as the pg_regress business, but it is definitely helpful.



As is, this will probably break on a bunch of platforms. I suspect you 
will need the equivalent of this snippet from pg_regress.c:


add_to_path("LD_LIBRARY_PATH", ':', libdir);
add_to_path("DYLD_LIBRARY_PATH", ':', libdir);
add_to_path("LIBPATH", ':', libdir);
   #if defined(WIN32)
add_to_path("PATH", ';', libdir);
   #elif defined(__CYGWIN__)
add_to_path("PATH", ':', libdir);
   #endif


For extra credit, you could create a subroutine in vcregress.pl to run 
the check for MSVC builds.


cheers

andrew







Re: [HACKERS] ALTER TABLE ... ADD FOREIGN KEY ... NOT ENFORCED

2011-06-03 Thread Robert Haas
On Thu, Jun 2, 2011 at 5:34 PM, Thom Brown t...@linux.com wrote:
 On 8 February 2011 03:50, Robert Haas robertmh...@gmail.com wrote:
 On Thu, Feb 3, 2011 at 11:00 PM, Robert Haas robertmh...@gmail.com wrote:
 On Fri, Jan 14, 2011 at 6:15 AM, Simon Riggs si...@2ndquadrant.com wrote:
 Patch to implement the proposed feature attached, for CFJan2011.

 2 sub-command changes:

 ALTER TABLE foo ADD FOREIGN KEY fkoo ... NOT VALID;

 ALTER TABLE foo VALIDATE CONSTRAINT fkoo;

 This patch, which seems to be the latest version, no longer applies,
 and has not been updated based on the previous provided review
 comments.

 Also, this diff hunk looks scary to me:

 +       if (must_use_query)
 +               ereport(ERROR,
 +                               (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
 +                                errmsg("cannot SELECT from primary key of relation \"%s\"",
 +                                               RelationGetRelationName(rel))));
 +

 What's the justification for that?

 Since this patch was reviewed on January 23rd by Marko Tiikkaja and
 more briefly on February 3rd by me, and has not been updated, I am
 marking it Returned with Feedback.

 Just a note that since Alvaro created a patch to provide similar
 functionality for constraints, I identified an issue with database
 dumps, which apparently affects invalid foreign keys too:
 http://archives.postgresql.org/pgsql-hackers/2011-06/msg00162.php

 In other words, a database containing foreign keys that haven't yet
 been validated will not produce a dump containing the necessary NOT
 VALID parameters.  This would be fixed by Alvaro's patch.

Sounds like someone needs to extract and apply that portion of
Alvaro's patch.  I've added this to the open items list.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-03 Thread Kevin Grittner
Robert Haas robertmh...@gmail.com wrote:
 
 That's an improvement of about ~3.5x.
 
Outstanding!
 
I don't want to even peek at this until I've posted the two WIP SSI
patches (now both listed on the Open Items page), but will
definitely take a look after that.
 
-Kevin



Re: [HACKERS] BLOB support

2011-06-03 Thread Alvaro Herrera
Excerpts from Radosław Smogura's message of jue jun 02 15:26:29 -0400 2011:

 So do I understand correctly that we should think about creating an improved
 TOAST to support values larger than a 30-bit length allows? I like this much
 more,

Good :-)

(BTW while it'd be good to have longer-than-30 bit length words for
varlena, I'm not sure we have room for that.)

 but without object IDs, querying a relation with LOBs will require locking
 the relation for some time,

Why?  The tuples are not going away due to MVCC anyway.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] Domains versus polymorphic functions, redux

2011-06-03 Thread Tom Lane
Noah Misch n...@leadboat.com writes:
 On Fri, Jun 03, 2011 at 12:27:35AM -0400, Robert Haas wrote:
 and we
 treat the call as a request to smash bar to the underlying array type.

 No, there's no need to do that.  The domain is an array, not merely
 something that can be coerced to an array.  Therefore, it can be
 chosen as the polymorphic type directly.  Indeed, all released
 versions do this.

No, it cannot be chosen as the polymorphic type directly.  The problem
with that is that there is no principled way to resolve ANYELEMENT
unless you consider that you have downcasted the domain to the array
type.  You could perhaps ignore that problem for polymorphic functions
that use only ANYARRAY and not ANYELEMENT in their arguments and return
type --- but I'm not happy with the idea that that case would work
differently from a function that does use both.

So far as the other points you raise are concerned, I'm still of the
opinion that we might be best off to consider that domains should be
smashed to their base types when matching them to ANYELEMENT, too.
Again, if we don't do that, we have a problem with figuring out what
ANYARRAY ought to be (since we don't create an array type to go with a
domain).  More generally, this dodges the whole problem of needing
polymorphic functions to enforce domain constraints, something I still
believe is entirely impractical from an implementation standpoint.

regards, tom lane



Re: [HACKERS] BLOB support

2011-06-03 Thread Tom Lane
Alvaro Herrera alvhe...@commandprompt.com writes:
 Excerpts from Radosław Smogura's message of jue jun 02 15:26:29 -0400 2011:
 So do I understand correctly that we should think about creating an improved
 TOAST to support values larger than a 30-bit length allows? I like this much
 more,

 Good :-)

 (BTW while it'd be good to have longer-than-30 bit length words for
 varlena, I'm not sure we have room for that.)

You wouldn't want to push such values around as whole values anyway.
Possibly what would work here is a variant form of TOAST pointer for
which we'd simply throw error if you tried to fetch the entire value
at once.

regards, tom lane



Re: [HACKERS] Identifying no-op length coercions

2011-06-03 Thread Alexey Klyukin
Hi,

On Jun 2, 2011, at 10:22 PM, Noah Misch wrote:

 Hi Alexey,
 
...
 Is your interest in cheap varchar(N) -> varchar(N+M) conversions
 specifically, or in some broader application of this facility?

Exactly varchar conversions.

 
 Thanks for volunteering to review; that will be a big help.  Actually, I could
 especially use some feedback now on a related design and implementation:
  
 http://archives.postgresql.org/message-id/20110524104029.gb18...@tornado.gateway.2wire.net
 Note that the third and fifth sentences of that description are incorrect.  
 The
 rest stands without them.  Even just some feedback on the mundane issue noted 
 in
 the last paragraph would help.

Ok, I'll review it.

Thank you,
Alexey.

--
Command Prompt, Inc.  http://www.CommandPrompt.com
PostgreSQL Replication, Consulting, Custom Development, 24x7 support






[HACKERS] About bug #6049

2011-06-03 Thread Tom Lane
I looked into $SUBJECT, which complains about this:

CREATE VIEW test_view AS VALUES (1), (2), (3) ORDER BY 1;

This dumps like so:

regression=# \d+ test_view
View public.test_view
 Column  |  Type   | Modifiers | Storage | Description 
-+-+---+-+-
 column1 | integer |   | plain   | 
View definition:
 VALUES (1), (2), (3)
  ORDER BY *VALUES*.column1;

which is problematic because it'll fail during dump/restore, because
you can't write it that way:

regression=# VALUES (1), (2), (3) ORDER BY *VALUES*.column1;
ERROR:  invalid reference to FROM-clause entry for table *VALUES*
LINE 1: VALUES (1), (2), (3) ORDER BY *VALUES*.column1;
  ^
HINT:  There is an entry for table *VALUES*, but it cannot be referenced from 
this part of the query.

The HINT gives a hint what's going on: we make an RTE for the VALUES
clause, and then we have to give it an alias, for which we use
"*VALUES*".  But the code is trying to hide the existence of that
nonstandard alias by not exposing it in the parser's p_relnamespace
list.  So you can write "column1" to refer to the first result column
of the VALUES, but not "*VALUES*".column1.

On reflection this looks pretty stupid --- column1 is just as
nonstandard an alias, but we're allowing that to be used explicitly,
so why not the made-up table alias as well?

But anyway, there are basically two things we could do here: either
allow the table alias to be referenced, or try to teach ruleutils.c
not to qualify the column reference.  The second looks pretty tricky
and maybe not future-proof, so I'm leaning to the first.  Comments?

regards, tom lane



Re: [HACKERS] Estimating total amount of shared memory required by postmaster

2011-06-03 Thread Alexey Klyukin

On Jun 2, 2011, at 10:49 PM, Tom Lane wrote:

 Alexey Klyukin al...@commandprompt.com writes:
 We've recently come across the task of estimating the size of shared memory
 required for PostgreSQL to start.
 
 ...
 
 - Try to actually allocate the shared memory in a way postmaster does this
  nowadays, if the process fails - analyze the error code to check whether the
  failure is due to the shmmax or shmall limits being too low. This would
  need to be run as a separate process (not postmaster's child) to avoid
  messing with the postmaster's own shared memory, which means that this would
  be hard to implement as a user-callable stored function.
 
 The results of such a test wouldn't be worth the electrons they're
 written on anyway: you're ignoring the likelihood that two instances of
 shared memory would overrun the kernel's SHMALL limit, when a single
 instance would be fine.

As Alvaro already pointed out, I'm not ignoring shmall. I think that:
- shmmax case is more frequent.
- there is a way to detect that shmall is a problem.

The other question is what to do when we detect that shmall is a problem.
I don't have a good answer for that ATM.

 
 Given that you can't do it in the context of a live installation, just
 trying to start the postmaster and seeing if it works (same as initdb
 does) seems as good as anything else.

In the context of a configuration file validator, a postmaster restart is
definitely not what we are looking for.

Alexey.

--
Command Prompt, Inc.  http://www.CommandPrompt.com
PostgreSQL Replication, Consulting, Custom Development, 24x7 support






Re: [HACKERS] Domains versus polymorphic functions, redux

2011-06-03 Thread Robert Haas
On Fri, Jun 3, 2011 at 1:14 AM, Noah Misch n...@leadboat.com wrote:
 No, there's no need to do that.  The domain is an array, not merely 
 something
 that can be coerced to an array.  Therefore, it can be chosen as the 
 polymorphic
 type directly.  Indeed, all released versions do this.

Well, as Bill Clinton once said, it depends on what the meaning of
the word 'is' is.  I think of array types in PostgreSQL as meaning
the types whose monikers end in a pair of square brackets.  We don't
in general have the ability to create a type that behaves like
another type.  In particular, you can't create a user-defined type
that is an array in the same way that a domain-over-array is an
array.  If we had some kind of type interface facility that might be
possible, but we don't.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Identifying no-op length coercions

2011-06-03 Thread Jim Nasby
On Jun 3, 2011, at 10:11 AM, Alexey Klyukin wrote:
 Is your interest in cheap varchar(N) -> varchar(N+M) conversions
 specifically, or in some broader application of this facility?
 
 Exactly varchar conversions.

Why limit it to varchar? Shouldn't we be able to do this for any varlena? The 
only challenge I see is numeric; we'd need to ensure that both size and 
precision are not decreasing.
--
Jim C. Nasby, Database Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net





Re: [HACKERS] BLOB support

2011-06-03 Thread Robert Haas
On Thu, Jun 2, 2011 at 12:53 PM, Radosław Smogura
rsmog...@softperience.eu wrote:
 1. No tracking of unused LOs (you store just the id of such an object). You
 may leak a LO after a row remove/update. Users may write triggers for this,
 but that is not an argument - the BLOB type is popular, and its simplicity of
 use is quite important. When I create an app this is the worst thing.

 2. No support for casting in UPDATE/INSERT. So there is no way to simply
 migrate data (e.g. from too-long varchars), or to copy BLOBs.

 3. Limitation of field size to 1GB.

As a general point, it would probably be a good idea to address each
of these issues separately, and to have a separate discussion about
each one.

As to #1 specifically, if you use a text or bytea field rather than a
large object per se, then this issue goes away.  But then you lose the
streaming functionality.  So at least some people here are saying that
we should try to fix that by adding the streaming functionality to
text/bytea rather than by doing anything to the large object facility.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] switch UNLOGGED to LOGGED

2011-06-03 Thread Robert Haas
On Tue, May 31, 2011 at 12:25 PM, Leonardo Francalanci m_li...@yahoo.it wrote:
 Well, I  sort of assumed the design was OK, too, but the more we talk
 about this  WAL-logging stuff, the less convinced I am that I really
 understand the  problem.  :-(


 I see. In fact, I think nobody thought about restart points...

 To sum up:

 1) everything seems ok in the wal_level = minimal
 case. In this case, we fsync everything and at transaction commit
 we remove the init fork; in case of a crash, we don't replay
 anything (as nothing has been written to the log), and we
 remove the main fork as we do now.

Yeah, that seems like it should work.

 2) in the wal_level != minimal case things become more
 complicated: if the standby reaches a restart point
 and then crashes, we are in trouble: it would remove the
 main fork at startup, and would replay only a portion of
 the table.
 I guess the same applies to the master too? I mean:
 if we log HEAP_XLOG_NEWPAGEs, reach a checkpoint,
 and then crash, at server restart the main fork would be
 deleted, and the pages written to the log couldn't be
 replayed. But the problem on the master can be removed by
 using another type of log record instead of HEAP_XLOG_NEWPAGE
 (to be replayed by the standbys only).

I think that's about right, except that I feel we're missing some
trick here that's needed to make all this work out nicely.  Somehow we
need to maintain some state that an unlogged-logged conversion is in
progress; that state needs to then get cleaned up at commit or abort
time (including implicit abort).

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] About bug #6049

2011-06-03 Thread Robert Haas
On Fri, Jun 3, 2011 at 11:25 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 I looked into $SUBJECT, which complains about this:

 CREATE VIEW test_view AS VALUES (1), (2), (3) ORDER BY 1;

 This dumps like so:

 regression=# \d+ test_view
                View public.test_view
  Column  |  Type   | Modifiers | Storage | Description
 -+-+---+-+-
  column1 | integer |           | plain   |
 View definition:
  VALUES (1), (2), (3)
  ORDER BY *VALUES*.column1;

 which is problematic because it'll fail during dump/restore, because
 you can't write it that way:

 regression=# VALUES (1), (2), (3) ORDER BY *VALUES*.column1;
 ERROR:  invalid reference to FROM-clause entry for table *VALUES*
 LINE 1: VALUES (1), (2), (3) ORDER BY *VALUES*.column1;
                                      ^
 HINT:  There is an entry for table *VALUES*, but it cannot be referenced 
 from this part of the query.

 The HINT gives a hint what's going on: we make an RTE for the VALUES
 clause, and then we have to give it an alias, for which we use
 *VALUES*.  But the code is trying to hide the existence of that
 nonstandard alias by not exposing it in the parser's p_relnamespace
 list.  So you can write column1 to refer to the first result column
 of the VALUES, but not *VALUES*.column1.

 On reflection this looks pretty stupid --- column1 is just as
 nonstandard an alias, but we're allowing that to be used explicitly,
 so why not the made-up table alias as well?

 But anyway, there are basically two things we could do here: either
 allow the table alias to be referenced, or try to teach ruleutils.c
 not to qualify the column reference.  The second looks pretty tricky
 and maybe not future-proof, so I'm leaning to the first.  Comments?

I think that makes sense, although it would be less totally arbitrary if
the alias were just values rather than *VALUES*.  The asterisks
suggest that the identifier is fake.  But it's probably too late to do
anything about that.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] BLOB support

2011-06-03 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Thu, Jun 2, 2011 at 12:53 PM, Radosław Smogura
 rsmog...@softperience.eu wrote:
  1. No tracking of unused LOs (you store just the id of such an object). You
  may leak a LO after a row remove/update. Users may write triggers for this,
  but that is not an argument - the BLOB type is popular, and its simplicity
  of use is quite important. When I create an app this is the worst thing.
  
  2. No support for casting in UPDATE/INSERT. So there is no way to simply
  migrate data (e.g. from too-long varchars), or to copy BLOBs.
  
  3. Limitation of field size to 1GB.

 As a general point, it would probably be a good idea to address each
 of these issues separately, and to have a separate discussion about
 each one.

 As to #1 specifically, if you use a text or bytea field rather than a
 large object per se, then this issue goes away.  But then you lose the
 streaming functionality.  So at least some people here are saying that
 we should try to fix that by adding the streaming functionality to
 text/bytea rather than by doing anything to the large object facility.

#2 is also a problem that only becomes a problem if you insist that LOBs
have to be a distinct kind of value.

regards, tom lane



Re: [HACKERS] pg_basebackup

2011-06-03 Thread Robert Haas
On Wed, May 25, 2011 at 5:28 AM, Emanuel Calvo postgres@gmail.com wrote:
 Hi all,

 I'm looking at the results of pg_basebackup, and I saw postmaster.opts.
 It's not necessary, although it's inoffensive. But it has the name of the
 original cluster inside. If it's only kept for information purposes,
 maybe backup_label could be the right place.

 It's just an idea.

 Thougths?

I think we're just copying all the files we find.  We could add an
exception specifically for that case, but I'm not sure whether that's
a good idea or not.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] BLOB support

2011-06-03 Thread Radosław Smogura
Tom Lane t...@sss.pgh.pa.us Friday 03 of June 2011 16:44:13
 Alvaro Herrera alvhe...@commandprompt.com writes:
  Excerpts from Radosław Smogura's message of jue jun 02 15:26:29 -0400 
2011:
  So do I understand correctly that we should think about creating an
  improved TOAST to support values larger than a 30-bit length allows? I
  like this much more,
  
  Good :-)
  
  (BTW while it'd be good to have longer-than-30 bit length words for
  varlena, I'm not sure we have room for that.)
 
 You wouldn't want to push such values around as whole values anyway.
 Possibly what would work here is a variant form of TOAST pointer for
 which we'd simply throw error if you tried to fetch the entire value
 at once.
 
   regards, tom lane

Ok, now it's more clear about this, what You have talked about, but I still 
need to pass constant ID to client.

Actually, this variant must be passed to client.

Form other side, as BLOB may be created before statement invoke or if it's 
called. This will require to create tempolary BLOBs, and introducing v3.1 
protocol, which will allow to stream values greater then 4GB, by passing -2 
size in length fields, and introducing stream_in/out in pg_type (this is from 
my concept of streaming protocol).

So I think it would be better to introduce the streaming protocol first, as
LOBs sit on top of it. I will start a thread for this in a moment.

 Why?  The tuples are not going away due to MVCC anyway.
Vacuum/autovacuum plus no lock may be enough, I think. A constant ID is required.

Regards,
Radek



Re: [HACKERS] Identifying no-op length coercions

2011-06-03 Thread Kevin Grittner
Jim Nasby j...@nasby.net wrote:
 
 The only challenge I see is numeric; we'd need to ensure that both
 size and precision are not decreasing.
 
To be picky, wouldn't that need to be neither (precision - scale)
nor scale is decreasing?
 
-Kevin



Re: [HACKERS] About bug #6049

2011-06-03 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Fri, Jun 3, 2011 at 11:25 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 But anyway, there are basically two things we could do here: either
 allow the table alias to be referenced, or try to teach ruleutils.c
 not to qualify the column reference.  The second looks pretty tricky
 and maybe not future-proof, so I'm leaning to the first.  Comments?

I think that makes sense, although it would be less totally arbitrary if
 the alias were just values rather than *VALUES*.  The asterisks
 suggest that the identifier is fake.  But it's probably too late to do
 anything about that.

Hmm.  Right now, since the identifier can't be referenced explicitly,
you could argue that a change might be painless.  But if what we're
trying to accomplish is to allow existing view definitions of this form
to be dumped and restored, that wouldn't work.  I'm inclined to leave
it alone.

regards, tom lane



Re: [HACKERS] Change 'pg_ctl: no server running' Exit Status to 3

2011-06-03 Thread Robert Haas
On Mon, May 23, 2011 at 8:50 PM, Aaron W. Swenson
aaron.w.swen...@gmail.com wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256

 According to Linux Standard Base Core Specification 3.1 [1], the exit
 status should be '3' when the server isn't running.

 I've attached a very simple patch that resolves this cosmetic issue,
 which applies to all branches.

 My initial search regarding other platforms has turned up nil. It looks
 like they just check for a zero or non-zero exit status.

 As for the necessity of the patch: Gentoo Linux uses pg_ctl in all of
 its initscript actions for PostgreSQL. I'd imagine that Gentoo is not
 the only distribution that takes advantage of pg_ctl for its initscript
 actions.

I'd be disinclined to back-patch this as it might break things for
people.  But I see no harm in applying it to master, if nobody
objects...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] About bug #6049

2011-06-03 Thread Robert Haas
On Fri, Jun 3, 2011 at 12:13 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Fri, Jun 3, 2011 at 11:25 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 But anyway, there are basically two things we could do here: either
 allow the table alias to be referenced, or try to teach ruleutils.c
 not to qualify the column reference.  The second looks pretty tricky
 and maybe not future-proof, so I'm leaning to the first.  Comments?

 I think that makes sense, although it would be less totally arbitrary if
 the alias were just values rather than *VALUES*.  The asterisks
 suggest that the identifier is fake.  But it's probably too late to do
 anything about that.

 Hmm.  Right now, since the identifier can't be referenced explicitly,
 you could argue that a change might be painless.  But if what we're
 trying to accomplish is to allow existing view definitions of this form
 to be dumped and restored, that wouldn't work.  I'm inclined to leave
 it alone.

Yep.  I think we're stuck with it at this point.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Identifying no-op length coercions

2011-06-03 Thread Robert Haas
On Fri, Jun 3, 2011 at 11:43 AM, Jim Nasby j...@nasby.net wrote:
 On Jun 3, 2011, at 10:11 AM, Alexey Klyukin wrote:
 Is your interest in cheap varchar(N)->varchar(N+M) conversions specifically, or
 in some broader application of this facility?

 Exactly varchar conversions.

 Why limit it to varchar? Shouldn't we be able to do this for any varlena? The 
 only challenge I see is numeric; we'd need to ensure that both size and 
 precision are not decreasing.

More than that: you should also be able to make it work for things
like xml -> text.

Indeed, I believe Noah has plans to do just that.

Which, quite frankly, will be awesome.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



[HACKERS] Streaming solution and v3.1 protocol

2011-06-03 Thread Radosław Smogura
Hello,

Sorry for the short introduction, and please, as far as possible, consider it
separately from LOBs, since LOB support would sit on top of it.

The idea of streaming is to reduce memory copying, mainly while receiving and
sending tuples. Currently, receiving works as follows:
1. Read the bytes of the tuple (allocating memory for them).
2. Possibly convert them to the database encoding.
3. Use this data to create a datum (which is a slightly changed copy of 1 or 2).
Streaming will be allowed only in binary mode, and stream in/out will actually
return binary data.

Look, for example, at the execution chain from textrecv.

The idea is to add stream_in and stream_out columns to pg_type.

When a value is to be serialized, the sin/sout function is called. The struct
must carry the length of the data and a stream struct (similar to C's FILE*).

The caller should verify that all bytes have been consumed (simple methods
would be exposed for this).

To implement (coding the API requirements):
The first stream is a buffered socket reader.

Fast substreams: create a fast stream limited to x bytes on top of another stream.

Skipping bytes, plus skipAll().

Stream filtering: do fast encoding conversion (it will be faster if the
conversion happens in buffered chunks).

Support for a direct PG printf() version. Linux has the ability to create
cookie streams and use them with fprintf(), which is a great advantage for
formatting huge strings. Other systems would have to buffer the output. One
problem: if a Linux cookie stream fails, will it have written something to the
output already? A Windows proxy would push the value to a temporary buffer.

It may be a good idea to introduce a new version of the protocol reserving
length-field values:
(-2) for fixed-size streams above 4GB;
(-3) for chunked streaming - this is actually innovative functionality and is
not required by any driver.

With streaming, one could imagine the socket's fd being passed to the sin
functions.

Problems: during output, something may fail while writing. Resolution: add
control flags for every n bytes sent to the client. This prevents sending e.g.
4GB of data when the first byte failed; you send only n bytes and then an
abort is received - or send the data in frames.

Regards,
Radek



Re: [HACKERS] Identifying no-op length coercions

2011-06-03 Thread Noah Misch
On Fri, Jun 03, 2011 at 10:43:17AM -0500, Jim Nasby wrote:
 On Jun 3, 2011, at 10:11 AM, Alexey Klyukin wrote:
  Is your interest in cheap varchar(N)->varchar(N+M) conversions specifically, or
  in some broader application of this facility?
  
  Exactly varchar conversions.
 
 Why limit it to varchar? Shouldn't we be able to do this for any varlena? The 
 only challenge I see is numeric; we'd need to ensure that both size and 
 precision are not decreasing.

I've implemented support for varchar, varbit, numeric, time, timetz, timestamp,
timestamptz, and interval.  However, I'll probably submit only varchar in the
initial infrastructure patch and the rest in followup patches in a later CF.

For numeric, we store the display scale in every datum, so any change to it
rewrites the table.  You'll be able to cheaply change numeric(7,2) to
numeric(9,2) but not to numeric(9,3).



Re: [HACKERS] Domains versus polymorphic functions, redux

2011-06-03 Thread Ross J. Reedstrom
On Fri, Jun 03, 2011 at 11:22:34AM -0400, Robert Haas wrote:
 On Fri, Jun 3, 2011 at 1:14 AM, Noah Misch n...@leadboat.com wrote:
  No, there's no need to do that.  The domain is an array, not merely 
  something
  that can be coerced to an array.  Therefore, it can be chosen as the 
  polymorphic
  type directly.  Indeed, all released versions do this.
 
 Well, as Bill Clinton once said, it depends on what the meaning of
 the word 'is' is.  I think of array types in PostgreSQL as meaning
 the types whose monikers end in a pair of square brackets.  We don't
 in general have the ability to create a type that behaves like
 another type.  In particular, you can't create a user-defined type
 that is an array in the same way that a domain-over-array is an
 array.  If we had some kind of type interface facility that might be
 possible, but we don't.
 
Early on in this thread, one of the users of domains-over-array-type
mentioned that he really didn't want to use them that way, he'd be
perfectly happy with array-over-domain: i.e.: mydomain[]. How does that
impact all this at the rhetorical level under discussion?

Ross
-- 
Ross Reedstrom, Ph.D. reeds...@rice.edu
Systems Engineer  Admin, Research Scientistphone: 713-348-6166
Connexions  http://cnx.orgfax: 713-348-3665
Rice University MS-375, Houston, TX 77005
GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0 BEDE



Re: [HACKERS] reducing the overhead of frequent table locks - now, with WIP patch

2011-06-03 Thread Robert Haas
On Fri, Jun 3, 2011 at 10:13 AM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
 Robert Haas robertmh...@gmail.com wrote:

 That's an improvement of about ~3.5x.

 Outstanding!

 I don't want to even peek at this until I've posted the two WIP SSI
 patches (now both listed on the Open Items page), but will
 definitely take a look after that.

Yeah, those SSI items are important to get nailed down RSN.  But
thanks for your interest in this patch.  :-)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] creating CHECK constraints as NOT VALID

2011-06-03 Thread Thom Brown
On 2 June 2011 17:48, Alvaro Herrera alvhe...@commandprompt.com wrote:
 Excerpts from Alvaro Herrera's message of mié jun 01 20:56:12 -0400 2011:
 Excerpts from Thom Brown's message of mié jun 01 19:48:44 -0400 2011:

  Is this expected?
  [ pg_dump fails to preserve not-valid status of constraints ]

 Certainly not.

  Shouldn't the constraint be dumped as not valid too??

 Sure, I'll implement that tomorrow.

 Actually, it turns out that NOT VALID foreign keys were already buggy
 here, and fixing them automatically fixes this case as well, because the
 fix involves touching pg_get_constraintdef to dump the flag.  This also
 gets it into psql's \d.  Patch attached.

 (Maybe the changes in psql's describe.c should be reverted, not sure.)

Nice work Alvaro :)  Shouldn't patches be sent to -hackers instead of
the obsolete -patches list?  Plus I'm a bit confused as to why the
patch looks like an email instead of a patch.

According to the SQL:2011 standard: "The SQL Standard allows you to
turn the checking on and off for CHECK constraints, UNIQUE constraints
and FOREIGN KEYS."

So is it much work to also add the ADD CONSTRAINT UNIQUE (column, ...)
NOT VALID syntax to this too?  This would mean we're completely
covered for this standards requirement.

Cheers

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] BLOB support

2011-06-03 Thread Radosław Smogura
Tom Lane t...@sss.pgh.pa.us Friday 03 of June 2011 18:08:56
 Robert Haas robertmh...@gmail.com writes:
  On Thu, Jun 2, 2011 at 12:53 PM, Radosław Smogura
  
  rsmog...@softperience.eu wrote:
  1. No tracking of unused LOs (you store just the id of such an object). You
  may leak an LO after a row remove/update. Users may write triggers for this,
  but that is not an argument - the BLOB type is popular, and its simplicity
  of use is quite important. When I create an app, this is the worst thing.
  
  2. No support for casting in UPDATE/INSERT. So there is no simple way to
  migrate data (e.g. from too-long varchars), or to copy BLOBs.
  
  3. Limitation of field size to 1GB.
  
  As a general point, it would probably be a good idea to address each
  of these issues separately, and to have a separate discussion about
  each one.
  
  As to #1 specifically, if you use a text or bytea field rather than a
  large object per se, then this issue goes away.  But then you lose the
  streaming functionality.  So at least some people here are saying that
  we should try to fix that by adding the streaming functionality to
  text/bytea rather than by doing anything to the large object facility.
 
 #2 is also a problem that only becomes a problem if you insist that LOBs
 have to be a distinct kind of value.
 
   regards, tom lane

And one more topic to discuss: should a BLOB be referenceable? E.g., I create
a new Blob in JDBC and set a stream for it; what should happen if I then call
UPDATE t SET b = ? WHERE 1=1
?

This is not about copy on write.

Regards,
Radek



Re: [HACKERS] creating CHECK constraints as NOT VALID

2011-06-03 Thread Alvaro Herrera
Excerpts from Thom Brown's message of vie jun 03 12:47:58 -0400 2011:
 On 2 June 2011 17:48, Alvaro Herrera alvhe...@commandprompt.com wrote:

  Actually, it turns out that NOT VALID foreign keys were already buggy
  here, and fixing them automatically fixes this case as well, because the
  fix involves touching pg_get_constraintdef to dump the flag.  This also
  gets it into psql's \d.  Patch attached.
 
  (Maybe the changes in psql's describe.c should be reverted, not sure.)
 
 Nice work Alvaro :)  Shouldn't patches be sent to -hackers instead of
 the obsolete -patches list?  Plus I'm a bit confused as to why the
 patch looks like an email instead of a patch.

Did I really email pgsql-patches?  If so, I didn't notice -- but I don't
see it (and the archives seem to agree with me, there's no email after
2008-10).

The patch looks like an email because that's what git format-patch
produced, and I attached it instead of putting it inline.

 According to the SQL:2011 standard: The SQL Standard allows you to
 turn the checking on and off for CHECK constraints, UNIQUE constraints
 and FOREIGN KEYS.
 
 So is it much work to also add the ADD CONSTRAINT UNIQUE (column, ...)
 NOT VALID syntax to this too?  This would mean we're completely
 covered for this standards requirement.

Yeah, UNIQUE is a completely different beast.  There's already some work
on making them accept invalid (duplicate) values temporarily, but making
that more general, even if it was acceptable to the community at large
(which I'm not sure it is) is way beyond what I set out to do here :-)

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] Domains versus polymorphic functions, redux

2011-06-03 Thread David E. Wheeler
On Jun 3, 2011, at 8:22 AM, Robert Haas wrote:

 Well, as Bill Clinton once said, it depends on what the meaning of
 the word 'is' is.  I think of array types in PostgreSQL as meaning
 the types whose monikers end in a pair of square brackets.

Man, range types are going to fuck with your brainz.

D




Re: [HACKERS] ALTER TABLE ... ADD FOREIGN KEY ... NOT ENFORCED

2011-06-03 Thread Alvaro Herrera
Excerpts from Robert Haas's message of vie jun 03 09:34:03 -0400 2011:

  Just a note that since Alvaro created a patch to provide similar
  functionality for constraints, I identified an issue with database
  dumps, which apparently affects invalid foreign keys too:
  http://archives.postgresql.org/pgsql-hackers/2011-06/msg00162.php
 
  In other words, a database containing foreign keys that hasn't yet
  been validated will not produce a dump containing the necessary NOT
  VALID parameters.  This would be fixed by Alvaro's patch.
 
 Sounds like someone needs to extract and apply that portion of
 Alvaro's patch.  I've added this to the open items list.

It's already a separate patch; I'll apply soon.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] Estimating total amount of shared memory required by postmaster

2011-06-03 Thread Alvaro Herrera
Excerpts from Tom Lane's message of jue jun 02 15:49:53 -0400 2011:
 Alexey Klyukin al...@commandprompt.com writes:

  - Try to actually allocate the shared memory in a way postmaster does this
    nowadays, if the process fails - analyze the error code to check whether the
    failure is due to the shmmax or shmmall limits being too low. This would
    need to be run as a separate process (not postmaster's child) to avoid
    messing with the postmaster's own shared memory, which means that this would
    be hard to implement as a user-callable stored function.
 
 The results of such a test wouldn't be worth the electrons they're
 written on anyway: you're ignoring the likelihood that two instances of
 shared memory would overrun the kernel's SHMALL limit, when a single
 instance would be fine.
 
 Given that you can't do it in the context of a live installation, just
 trying to start the postmaster and seeing if it works (same as initdb
 does) seems as good as anything else.

BTW the other idea we discussed at PGCon was that we could have a
postmaster option that would parse the config file and then report the
amount of shared memory needed to run with it.  If we had that, then
it's easy to write a platform-specific shell script or program that
verifies that the given number is within the allowed limits.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] Streaming solution and v3.1 protocol

2011-06-03 Thread Heikki Linnakangas

On 03.06.2011 19:19, Radosław Smogura wrote:

Hello,

Sorry for the short introduction, and please, as far as possible, consider it
separately from LOBs, since LOB support would sit on top of it.

The idea of streaming is to reduce memory copying, mainly while receiving and sending
tuples. Currently, receiving works as follows:
1. Read the bytes of the tuple (allocating memory for them).
2. Possibly convert them to the database encoding.
3. Use this data to create a datum (which is a slightly changed copy of 1 or 2).
Streaming will be allowed only in binary mode, and stream in/out will actually
return binary data.


Hmm, I was thinking that streaming would be a whole new mode, alongside 
the current text and binary mode.



Look, for example, at the execution chain from textrecv.

The idea is to add stream_in and stream_out columns to pg_type.

When a value is to be serialized, the sin/sout function is called. The struct
must carry the length of the data and a stream struct (similar to C's FILE*).

The caller should verify that all bytes have been consumed (simple methods
would be exposed for this).

To implement (coding the API requirements):
The first stream is a buffered socket reader.

Fast substreams: create a fast stream limited to x bytes on top of another stream.

Skipping bytes, plus skipAll().

Stream filtering: do fast encoding conversion (it will be faster if the
conversion happens in buffered chunks).

Support for a direct PG printf() version. Linux has the ability to create
cookie streams and use them with fprintf(), which is a great advantage for
formatting huge strings. Other systems would have to buffer the output. One
problem: if a Linux cookie stream fails, will it have written something to the
output already? A Windows proxy would push the value to a temporary buffer.


This is pretty low-level stuff, I think we should focus on the protocol 
changes and user-visible libpq API first.


However, we don't want to use anything Linux-specific here, so that 
cookie streams are not an option.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] creating CHECK constraints as NOT VALID

2011-06-03 Thread Thom Brown
On 3 June 2011 17:58, Alvaro Herrera alvhe...@commandprompt.com wrote:
 Excerpts from Thom Brown's message of vie jun 03 12:47:58 -0400 2011:
 On 2 June 2011 17:48, Alvaro Herrera alvhe...@commandprompt.com wrote:

  Actually, it turns out that NOT VALID foreign keys were already buggy
  here, and fixing them automatically fixes this case as well, because the
  fix involves touching pg_get_constraintdef to dump the flag.  This also
  gets it into psql's \d.  Patch attached.
 
  (Maybe the changes in psql's describe.c should be reverted, not sure.)

 Nice work Alvaro :)  Shouldn't patches be sent to -hackers instead of
 the obsolete -patches list?  Plus I'm a bit confused as to why the
 patch looks like an email instead of a patch.

 Did I really email pgsql-patches?  If so, I didn't notice -- but I don't
 see it (and the archives seem to agree with me, there's no email after
 2008-10).

My bad. I was reading your patch, which contained an email subject beginning
with [PATCH] (similar to mailing list subject prefixes); had I given it any
further thought, I'd have realized that doesn't mean it's on the -patches
list.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Domains versus polymorphic functions, redux

2011-06-03 Thread Noah Misch
On Fri, Jun 03, 2011 at 10:42:01AM -0400, Tom Lane wrote:
 Noah Misch n...@leadboat.com writes:
  On Fri, Jun 03, 2011 at 12:27:35AM -0400, Robert Haas wrote:
  and we
  treat the call as a request to smash bar to the underlying array type.
 
  No, there's no need to do that.  The domain is an array, not merely
  something that can be coerced to an array.  Therefore, it can be
  chosen as the polymorphic type directly.  Indeed, all released
  versions do this.
 
 No, it cannot be chosen as the polymorphic type directly.

This already happens today.

 The problem
 with that is that there is no principled way to resolve ANYELEMENT
 unless you consider that you have downcasted the domain to the array
 type.

I have nothing material to add to my statement in
http://archives.postgresql.org/message-id/20110511093217.gb26...@tornado.gateway.2wire.net

Let's be concrete; run this on 9.0:

  CREATE DOMAIN foo AS int[];

  SELECT pg_typeof(array_append('{1}'::foo, 1));
  SELECT pg_typeof(array_prepend(1, '{1}'::foo));
  SELECT pg_typeof(array_fill(1, array[3]));

Which of those type resolutions do you find unprincipled, or what hypothetical
function does 9.0 resolve in an unprincipled way?  (What the function actually
does isn't important.)

 You could perhaps ignore that problem for polymorphic functions
 that use only ANYARRAY and not ANYELEMENT in their arguments and return
 type --- but I'm not happy with the idea that that case would work
 differently from a function that does use both.

It would be no good to bifurcate the rules that way.

 So far as the other points you raise are concerned, I'm still of the
 opinion that we might be best off to consider that domains should be
 smashed to their base types when matching them to ANYELEMENT, too.
 Again, if we don't do that, we have a problem with figuring out what
 ANYARRAY ought to be (since we don't create an array type to go with a
 domain).

Perhaps, though it paints us into a backward compatibility corner should we ever
support arrays of domains.  How many field complaints has the longstanding error
outcome produced?

 More generally, this dodges the whole problem of needing
 polymorphic functions to enforce domain constraints, something I still
 believe is entirely impractical from an implementation standpoint.

I wrote about the implementation scope in
http://archives.postgresql.org/message-id/20110511191249.ga29...@tornado.gateway.2wire.net

While I don't doubt the presence of implementation challenges beyond those I
identified, we can't meaningfully discuss them as generalities.

nm



[HACKERS] Remove support for 'userlocks'?

2011-06-03 Thread Bruce Momjian
According to our documentation, 'userlocks' were removed in PG 8.2:


http://developer.postgresql.org/pgdocs/postgres/runtime-config-developer.html

trace_userlocks (boolean)

If on, emit information about user lock usage. Output is the same as
for trace_locks, only for user locks.

User locks were removed as of PostgreSQL version 8.2. This option
currently has no effect.

This parameter is only available if the LOCK_DEBUG macro was defined
when PostgreSQL was compiled.

Should we remove this parameter and the supporting code since pre-8.2 is
now end-of-life?

--
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Domains versus polymorphic functions, redux

2011-06-03 Thread Robert Haas
On Fri, Jun 3, 2011 at 1:00 PM, David E. Wheeler da...@kineticode.com wrote:
 On Jun 3, 2011, at 8:22 AM, Robert Haas wrote:

 Well, as Bill Clinton once said, it depends on what the meaning of
 the word 'is' is.  I think of array types in PostgreSQL as meaning
 the types whose monikers end in a pair of square brackets.

 Man, range types are going to fuck with your brainz.

They may, but probably not for this reason.  Domain types have this
weird property that we want all of the base type operations to still
work on them, except when we don't want that.  Range types won't have
that property, or at least I don't think so.  Someone might expect
1::foo + 2::foo to work when foo is a domain over int, but they
probably won't expect '[1,2]'::intrange + '[2,3)'::intrange to work.
The real crux of the issue here is: under what circumstances should we
look through the domain wrapper around an underlying type, and under
what circumstances should we refrain from doing so?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-03 Thread Kevin Grittner
On 1 May 2011 I wrote: 
 
 Consider this a WIP patch
 
Just so people know where this stands...
 
By 8 May 2011 I had the attached.  I didn't post it because I was
not confident I had placed the calls to the SSI functions for DROP
TABLE and TRUNCATE TABLE correctly.  Then came PGCon and also the
row versus tuple discussion -- the patch for which had conflicts
with this one.
 
I will be reviewing this on the weekend and making any final
adjustments, but the patch is very likely to look just like this
with the possible exception of placement of the DROP TABLE and
TRUNCATE TABLE calls.  If anyone wants to start reviewing this now,
it would leave more time for me to make changes if needed.
 
Also, if anyone has comments or hints about the placement of those
calls, I'd be happy to receive them.
 
This patch compiles with `make world`, passes `make check-world`,
and passes both `make installcheck-world` and `make -C
src/test/isolation installcheck` against a database with
default_transaction_isolation = 'serializable'.  It also passed a
few ad hoc tests of the implemented functionality.
 
-Kevin

*** a/src/backend/catalog/heap.c
--- b/src/backend/catalog/heap.c
***
*** 1658,1663  heap_drop_with_catalog(Oid relid)
--- 1658,1671 
CheckTableNotInUse(rel, DROP TABLE);
  
/*
+* This effectively deletes all rows in the table, and may be done in a
+* serializable transaction.  In that case we must record a rw-conflict in
+* to this transaction from each transaction holding a predicate lock on
+* the table.
+*/
+   CheckTableForSerializableConflictIn(rel);
+ 
+   /*
 * Delete pg_foreign_table tuple first.
 */
if (rel-rd_rel-relkind == RELKIND_FOREIGN_TABLE)
***
*** 1688,1693  heap_drop_with_catalog(Oid relid)
--- 1696,1706 
}
  
/*
+* Clean up any remaining predicate locks on the relation.
+*/
+   DropAllPredicateLocksFromTable(rel);
+ 
+   /*
 * Close relcache entry, but *keep* AccessExclusiveLock on the relation
 * until transaction commit.  This ensures no one else will try to do
 * something with the doomed relation.
*** a/src/backend/catalog/index.c
--- b/src/backend/catalog/index.c
***
*** 54,59 
--- 54,60 
  #include parser/parser.h
  #include storage/bufmgr.h
  #include storage/lmgr.h
+ #include storage/predicate.h
  #include storage/procarray.h
  #include storage/smgr.h
  #include utils/builtins.h
***
*** 1311,1316  index_drop(Oid indexId)
--- 1312,1323 
CheckTableNotInUse(userIndexRelation, DROP INDEX);
  
/*
+* All predicate locks on the index are about to be made invalid.
+* Promote them to relation locks on the heap.
+*/
+   TransferPredicateLocksToHeapRelation(userIndexRelation);
+ 
+   /*
 * Schedule physical removal of the files
 */
RelationDropStorage(userIndexRelation);
***
*** 2787,2792  reindex_index(Oid indexId, bool skip_constraint_checks)
--- 2794,2805 
 */
CheckTableNotInUse(iRel, "REINDEX INDEX");
  
+   /*
+* All predicate locks on the index are about to be made invalid.
+* Promote them to relation locks on the heap.
+*/
+   TransferPredicateLocksToHeapRelation(iRel);
+ 
PG_TRY();
{
/* Suppress use of the target index while rebuilding it */
*** a/src/backend/commands/cluster.c
--- b/src/backend/commands/cluster.c
***
*** 39,44 
--- 39,45 
  #include "optimizer/planner.h"
  #include "storage/bufmgr.h"
  #include "storage/lmgr.h"
+ #include "storage/predicate.h"
  #include "storage/procarray.h"
  #include "storage/smgr.h"
  #include "utils/acl.h"
***
*** 385,390  cluster_rel(Oid tableOid, Oid indexOid, bool recheck, bool verbose,
--- 386,397 
	if (OidIsValid(indexOid))
		check_index_is_clusterable(OldHeap, indexOid, recheck,
								   AccessExclusiveLock);
  
+   /*
+* All predicate locks on the table and its indexes are about to be made
+* invalid.  Promote them to relation locks on the heap.
+*/
+   TransferPredicateLocksToHeapRelation(OldHeap);
+ 
/* rebuild_relation does all the dirty work */
rebuild_relation(OldHeap, indexOid, freeze_min_age, freeze_table_age,
 verbose);
*** a/src/backend/storage/lmgr/predicate.c
--- b/src/backend/storage/lmgr/predicate.c
***
*** 155,160 
--- 155,162 
   *   BlockNumber newblkno);
   *PredicateLockPageCombine(Relation relation, BlockNumber oldblkno,
   *   BlockNumber newblkno);
+  *TransferPredicateLocksToHeapRelation(const Relation relation)
+  *

[HACKERS] Re: reducing the overhead of frequent table locks - now, with WIP patch

2011-06-03 Thread Noah Misch
On Fri, Jun 03, 2011 at 09:17:08AM -0400, Robert Haas wrote:
 As you can see, this works out to a bit more than a 4% improvement on
 this two-core box.  I also got access (thanks to Nate Boley) to a
 24-core box and ran the same test with scale factor 100 and
 shared_buffers=8GB.  Here are the results of alternating runs without
 and with the patch on that machine:
 
 tps = 36291.996228 (including connections establishing)
 tps = 129242.054578 (including connections establishing)
 tps = 36704.393055 (including connections establishing)
 tps = 128998.648106 (including connections establishing)
 tps = 36531.208898 (including connections establishing)
 tps = 131341.367344 (including connections establishing)

Nice!

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Streaming solution and v3.1 protocol

2011-06-03 Thread Merlin Moncure
On Fri, Jun 3, 2011 at 12:04 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 On 03.06.2011 19:19, Radosław Smogura wrote:

 Hello,

 Sorry for the short introduction, and please, as far as possible, consider
 this separately from LOBs, as it sits on top of LOBs.

 Idea of streaming is to reduce memory copy mainly during receiving and
 sending
 tuples. Currently receive works as follows:
 1. Read the bytes of the tuple (allocate x bytes of memory).
 2. Convert it to the database encoding if necessary.
 3. Use this data to create a datum (which is a slightly changed copy of 1 or 2).
 4. Streaming will be allowed only in binary mode, and actually stream
 in/out
 will return binary data.

 Hmm, I was thinking that streaming would be a whole new mode, alongside the
 current text and binary mode.

 Look for example at execution chain from textrecv.

 Idea is to add stream_in, stream_out columns pg_type.

 When a value is to be serialized, the sin/sout is called. The struct must
 pass the length of the data and a stream struct (similar to C FILE*).

 The caller should validate that all bytes have been consumed (expose simple
 methods for this).

 To implement(code API requirements):
 The first stream is a buffered socket reader.

 Fast substreams - create a fast stream limited to x bytes based on another
 stream.

 Skipping bytes + skipAll()

 Stream filtering - do fast (faster will be if conversion will occure in
 buffered chunks) encoding conversion.

 Support for a direct PG printf() version. Linux has the ability to create
 cookie streams and use them with fprintf(), so it's a great advantage for
 formatting huge strings. Other systems should buffer the output. The problem
 is: if the Linux cookie stream fails, will it already have written something
 to the output? The Windows proxy will push the value to a temp buffer.

 This is pretty low-level stuff, I think we should focus on the protocol
 changes and user-visible libpq API first.

+1.  In particular, I'd like to see the libpq API changes.

merlin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Remove support for 'userlocks'?

2011-06-03 Thread Merlin Moncure
On Fri, Jun 3, 2011 at 1:01 PM, Bruce Momjian br...@momjian.us wrote:
 According to our documentation, 'userlocks' were removed in PG 8.2:

        
 http://developer.postgresql.org/pgdocs/postgres/runtime-config-developer.html

        trace_userlocks (boolean)

        If on, emit information about user lock usage. Output is the same as
        for trace_locks, only for user locks.

        User locks were removed as of PostgreSQL version 8.2. This option
        currently has no effect.

        This parameter is only available if the LOCK_DEBUG macro was defined
        when PostgreSQL was compiled.

 Should we remove this parameter and the supporting code since pre-8.2 is
 now end-of-life?

hm, shouldn't it be replaced with trace_advisory_locks?

merlin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Domains versus polymorphic functions, redux

2011-06-03 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 The real crux of the issue here is: under what circumstances should we
 look through the domain wrapper around an underlying type, and under
 what circumstances should we refrain from doing so?

That's half of it.  The other half is: when we *do* look through the
wrapper, is that equivalent to having implicitly inserted a downcast
to the base type, so that the results are now indistinguishable from
having given a base-type value to begin with?  Or is the expression's
behavior still affected by the fact of having given a domain value,
and if so, how exactly?

I assert that matching a domain-over-array to an ANYARRAY parameter
certainly involves having looked through the wrapper.  It's
considerably fuzzier though what should happen when matching a domain
to ANYELEMENT.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-03 Thread Heikki Linnakangas

On 03.06.2011 21:04, Kevin Grittner wrote:

Also, if anyone has comments or hints about the placement of those
calls, I'd be happy to receive them.


heap_drop_with_catalog() schedules the relation for deletion at the end 
of transaction, but it's still possible that the transaction aborts and 
the heap doesn't get dropped after all. If you put the 
DropAllPredicateLocksFromTable() call there, and the transaction later 
aborts, you've lost all the locks already.


I think you'll need to just memorize the lock deletion command in a 
backend-local list, and perform the deletion in a post-commit function. 
Something similar to the PendingRelDelete stuff in storage.c. In fact, 
hooking into smgrDoPendingDeletes would work, but that seems like a 
modularity violation.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] permissions of external PID file

2011-06-03 Thread Peter Eisentraut
The external PID file, if configured, is currently generated with 600
permissions, which results from the umask setting in the postmaster.  I
think it would be nicer if we could make that 644.

I have often dealt with scripts and such that look through PID files
in /var/run, and it's always annoying when some PID file is not readable
for some reason or no reason at all.  (One simple benefit is that you
don't need extra privileges to check whether the process is alive.)
Almost all PID files on my system are world-readable now.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Domains versus polymorphic functions, redux

2011-06-03 Thread Kevin Grittner
Robert Haas robertmh...@gmail.com wrote:
 
 The real crux of the issue here is: under what circumstances
 should we look through the domain wrapper around an underlying
 type, and under what circumstances should we refrain from doing
 so?
 
I don't know if this is the intent of domains in the SQL standard,
but I find them useful to extend other types (in the
object-oriented sense).  CaseNoT *is a* varchar(14), but not every
varchar(14) is a CaseNoT.  In my view, behaviors should be
determined based on that concept.
 
In the long run, it would be fabulous if a domain could extend
another domain -- or even better, multiple other domains with the
same base type.  DOT hair color code and DOT eye color code domains
might both extend a DOT color codes domain.
 
Another long-range nicety, which I have seen in some other databases
and which is consistent with the inheritance theme, is that you can't
compare or assign dissimilar domains -- an
error is thrown. So if you try to join from the eye color column in
a person table to the key of a hair color table, you get an error
unless you explicitly cast one or both of them to the common type.
 
I know that doesn't directly answer the question, but I think it
provides some framework for making choices.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Domains versus polymorphic functions, redux

2011-06-03 Thread Tom Lane
BTW, a possibly relevant point here is that SQL:2008 says (in 4.12)

The purpose of a domain is to constrain the set of valid values
that can be stored in a column of a base table by various
operations.

and in 4.17.4

A domain constraint is a constraint that is specified for a domain.
It is applied to all columns that are based on that domain, and
to all values cast to that domain.

If you take that literally, it means that domain constraints are applied
(1) in an assignment to a table column of a domain type, and
(2) in an explicit CAST to the domain type, and
(3) nowhere else.

In particular I fail to see any support in the spec for the notion that
domain constraints should be applied to the results of expressions that
happen to include a domain value.

In our terms, that definitely suggests that domains should always be
implicitly downcast to base type when passed to polymorphic functions,
so that the result of the function is never automatically of the domain
type.

This all leads to the conclusion that domains are not first-class types
and can't be made so ...

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Domains versus polymorphic functions, redux

2011-06-03 Thread Tom Lane
Kevin Grittner kevin.gritt...@wicourts.gov writes:
 Another long-range nicety would be something which I have seen in
 some other databases, and which is consistent with the inheritance
 theme, is that you can't compare or assign dissimilar domains -- an
 error is thrown. So if you try to join from the eye color column in
 a person table to the key of a hair color table, you get an error
 unless you explicitly cast one or both of them to the common type.

[ raised eyebrow ... ]  This is all pretty cute, but I think it goes
against both the letter and spirit of the SQL standard.  What you
are describing might be a useful thing to have, but it isn't a SQL
domain.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-03 Thread Kevin Grittner
Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
 On 03.06.2011 21:04, Kevin Grittner wrote:
 Also, if anyone has comments or hints about the placement of
 those calls, I'd be happy to receive them.
 
 heap_drop_with_catalog() schedules the relation for deletion at
 the end of transaction, but it's still possible that the
 transaction aborts and the heap doesn't get dropped after all. If
 you put the DropAllPredicateLocksFromTable() call there, and the
 transaction later aborts, you've lost all the locks already.
 
 I think you'll need to just memorize the lock deletion command in
 a backend-local list, and perform the deletion in a post-commit
 function. Something similar to the PendingRelDelete stuff in
 storage.c. In fact, hooking into smgrDoPendingDeletes would work,
 but that seems like a modularity violation.
 
Thanks.  That's helpful.  Will look at that code and do something
similar.
 
I knew it didn't look right in the place it was, but couldn't quite
see what to do instead.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Error in PQsetvalue

2011-06-03 Thread Pavel Golub
Hello.

Reproduced under Windows XP SP3 using Visual C++ 2008 and Delphi. If
PQsetvalue is called with second parameter equals to PQntuples then
memory corruption appears. But it should grow internal tuples array
and populate the last item with provided data. Please see the code:


#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>
#pragma comment(lib, "libpq.lib")

static void
exit_nicely(PGconn *conn)
{
    PQfinish(conn);
    exit(1);
}

int
main(int argc, char **argv)
{
    const char *conninfo;
    PGconn *conn;
    PGresult   *res;
    if (argc > 1)
        conninfo = argv[1];
    else
        conninfo = "dbname = postgres user = postgres password = password";

    conn = PQconnectdb(conninfo);

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "Connection to database failed: %s",
                PQerrorMessage(conn));
        exit_nicely(conn);
    }

    res = PQexec(conn, "SELECT generate_series(1, 10)");

    if (!PQsetvalue(res, PQntuples(res), 0, "1", 1))   /* <- here
we have memory corruption */
    {
        fprintf(stderr, "Shit happens: %s", PQerrorMessage(conn));
        exit_nicely(conn);
    }

    PQclear(res);
    PQfinish(conn);
    return 0;
}

BUT! If we change direct call to:

...
res = PQexec(conn, "SELECT generate_series(1, 10)");

res2 = PQcopyResult(res, PG_COPYRES_TUPLES);

if (!PQsetvalue(res2, PQntuples(res), 0, "1", 1))
{
    fprintf(stderr, "Shit happens: %s", PQerrorMessage(conn));
    exit_nicely(conn);
}
...
then all is OK!

As you can see, I copied result first. No error occurs.
Can anybody check this on other platforms?
-- 
Nullus est in vitae sensus ipsa vera est sensus.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Vacuum, visibility maps and SKIP_PAGES_THRESHOLD

2011-06-03 Thread Bruce Momjian
Heikki Linnakangas wrote:
 On 27.05.2011 16:52, Pavan Deolasee wrote:
  On closer inspection, I realized that we have
  deliberately put in this hook to ensure that we use visibility maps
  only when we see at least SKIP_PAGES_THRESHOLD worth of all-visible
  sequential pages to take advantage of possible OS seq scan
  optimizations.
 
 That, and the fact that if you skip any page, you can't advance 
 relfrozenxid.
 
  My statistical skills are limited, but wouldn't that mean that for a
  fairly well distributed write activity across a large table, if there
  are even 3-4% update/deletes, we would most likely hit a
  not-all-visible page for every 32 pages scanned ? That would mean that
  almost entire relation will be scanned even if the visibility map
  tells us that only 3-4% pages require scanning ?  And the probability
  will increase with the increase in the percentage of updated/deleted
  tuples. Given that the likelihood of anyone calling VACUUM (manually
  or through autovac settings) on a table which has less than 3-4%
  updates/deletes is very low, I am worried that we might be losing all
  advantages of visibility maps for a fairly common use case.
 
 Well, as with normal queries, it's usually faster to just seqscan the 
 whole table if you need to access more than a few percent of the pages, 
 because sequential I/O is so much faster than random I/O. The visibility 
 map really only helps if all the updates are limited to some part of the 
 table. For example, if only recent records are updated frequently, 
 and old ones are almost never touched.

I realize we just read the pages from the kernel to maintain sequential
I/O, but do we actually read the contents of the page if we know it
doesn't need vacuuming?  If so, do we need to?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] DOCS: SGML identifier may not exceed 44 characters

2011-06-03 Thread Peter Eisentraut
On tis, 2011-05-31 at 16:17 +1000, Brendan Jurd wrote:
 Hi folks,
 
 I was working on a little docs patch today, and when I tried to
 `make`, openjade choked on an identifier in information_schema.sgml,
 which is very much unrelated to my changes:
 
 openjade:information_schema.sgml:828:60:Q: length of name token must
 not exceed NAMELEN (44)

Fixed.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-03 Thread Tom Lane
Kevin Grittner kevin.gritt...@wicourts.gov writes:
 Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
 I think you'll need to just memorize the lock deletion command in
 a backend-local list, and perform the deletion in a post-commit
 function. Something similar to the PendingRelDelete stuff in
 storage.c. In fact, hooking into smgrDoPendingDeletes would work,
 but that seems like a modularity violation.
 
 Thanks.  That's helpful.  Will look at that code and do something
 similar.

Keep in mind that it's too late to throw any sort of error post-commit.
Any code you add there will need to have negligible probability of
failure.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Domains versus polymorphic functions, redux

2011-06-03 Thread Kevin Grittner
Tom Lane t...@sss.pgh.pa.us wrote:
 
 A domain constraint is a constraint that is specified for a
 domain.
   It is applied to all columns that are based on that domain, and
   to all values cast to that domain.
 
 If you take that literally, it means that domain constraints are
 applied
 (1) in an assignment to a table column of a domain type, and
 (2) in an explicit CAST to the domain type, and
 (3) nowhere else.
 
I'm curious how you jumped from "all values cast" to "explicit CAST"
and "nowhere else".  The standard does describe implicit casts.
 
That said, section 4.7.5 talks about supertypes and subtypes, so if
I ever want such behavior, it seems to match pretty well (on a quick
scan) to what the standard outlines there.  No need to invent
different mechanisms.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Error in PQsetvalue

2011-06-03 Thread Andrew Chernow

On 6/3/2011 3:03 PM, Pavel Golub wrote:

Hello.

Reproduced under Windows XP SP3 using Visual C++ 2008 and Delphi. If
PQsetvalue is called with second parameter equals to PQntuples then
memory corruption appears. But it should grow internal tuples array
and populate the last item with provided data. Please see the code:




At first glance (have not tested this theory), looks like pqAddTuple() 
doesn't zero the newly allocated tuples slots like PQsetvalue() does. 
PQsetvalue is depending on the unassigned tuple table slots to be NULL 
to detect when a tuple must be allocated.  Around line 446 on fe-exec.c. 
 I never tested this case since libpqtypes never tried to call 
PQsetvalue on a PGresult created by the standard libpq library.


The solution I see would be to zero the new table slots within 
pqAddTuple.  Any other ideas?


--
Andrew Chernow
eSilo, LLC
global backup
http://www.esilo.com/

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Error in PQsetvalue

2011-06-03 Thread Merlin Moncure
On Fri, Jun 3, 2011 at 2:03 PM, Pavel Golub pa...@microolap.com wrote:
 Hello.

 Reproduced under Windows XP SP3 using Visual C++ 2008 and Delphi. If
 PQsetvalue is called with second parameter equals to PQntuples then
 memory corruption appears. But it should grow internal tuples array
 and populate the last item with provided data. Please see the code:


 #include <stdio.h>
 #include <stdlib.h>
 #include <libpq-fe.h>
 #pragma comment(lib, "libpq.lib")

 static void
 exit_nicely(PGconn *conn)
 {
     PQfinish(conn);
     exit(1);
 }

 int
 main(int argc, char **argv)
 {
     const char *conninfo;
     PGconn     *conn;
     PGresult   *res;
     if (argc > 1)
         conninfo = argv[1];
     else
         conninfo = "dbname = postgres user = postgres password = password";

     conn = PQconnectdb(conninfo);

     if (PQstatus(conn) != CONNECTION_OK)
     {
         fprintf(stderr, "Connection to database failed: %s",
                 PQerrorMessage(conn));
         exit_nicely(conn);
     }

    res = PQexec(conn, "SELECT generate_series(1, 10)");

    if (!PQsetvalue(res, PQntuples(res), 0, "1", 1))   /* <- here
  we have memory corruption */

hm, what are the exact symptoms that cause you to think you are having
memory corruption? (your example is working for me).

merlin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Change 'pg_ctl: no server running' Exit Status to 3

2011-06-03 Thread Peter Eisentraut
On mån, 2011-05-23 at 20:50 -0400, Aaron W. Swenson wrote:
 According to Linux Standard Base Core Specification 3.1 [1], the exit
 status should be '3' when the server isn't running.
 
 I've attached a very simple patch that resolves this cosmetic issue,
 which applies to all branches.

If we're going to make the exit status meaningful, it should probably be
added to the documentation.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-03 Thread Kevin Grittner
Tom Lane t...@sss.pgh.pa.us wrote:
 Kevin Grittner kevin.gritt...@wicourts.gov writes:
 Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
 I think you'll need to just memorize the lock deletion command
 in a backend-local list, and perform the deletion in a
 post-commit function. Something similar to the PendingRelDelete
 stuff in storage.c. In fact, hooking into smgrDoPendingDeletes
 would work, but that seems like a modularity violation.
  
 Thanks.  That's helpful.  Will look at that code and do something
 similar.
 
 Keep in mind that it's too late to throw any sort of error
 post-commit. Any code you add there will need to have negligible
 probability of failure.
 
Thanks for the heads-up.  I think we're OK in that regard, though. 
We have two commit-time functions in SSI:
 
PreCommit_CheckForSerializationFailure(void) which is the last
chance for failure before actual commit, and

ReleasePredicateLocks(const bool isCommit) which is for resource
cleanup after rollback or commit is effective.
 
We pretty much can't fail on the latter except for Assert
statements, and this new logic would be the same.  It's hard to fail
when freeing resources unless something is quite seriously hosed
already.  This is where I was planning to put the freeing of the
locks, based on queuing up the request at the time the DROP TABLE
statement is encountered.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Getting a bug tracker for the Postgres project

2011-06-03 Thread Bruce Momjian
Josh Berkus wrote:
 All,
 
 Let me mention some of the reasons we as a project could use a bug
 tracker which have nothing to do with actually fixing bugs.
 
 (1) Testing: a bug tracker could be used for beta testing instead of the
 ad-hoc system I'm writing.  Assuming it has the right features, of course.
 
 (2) User information: right now, if a user has an issue, it's very very
 hard for them to answer the question Has this already been reported
 and/or fixed in a later release.  This is a strong source of
 frustration for business users who don't actively participate in the
 community, a complaint I have heard multiple times.

Also, bug reporters frequently don't get any email feedback on when
their bug was fixed.  It is also hard to identify what major/minor
release fixed a specific bug, especially if the bug was rare.

 Where *fixing* bugs is concerned, I'm concerned that a bug tracker would
 actually slow things down.  I'm dubious about our ability to mobilize
 volunteers for anything other than bug triage, and the fact that we
 *don't* triage is an advantage in bug report responsiveness (I have
 unconfirmed bugs for Thunderbird which have been pending for 3 years).
  So I'm skeptical about bug trackers on that score.

Yes, I agree.  Too many bug systems are just a dumping-pile for bugs.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Getting a bug tracker for the Postgres project

2011-06-03 Thread Bruce Momjian
Greg Stark wrote:
 On Mon, May 30, 2011 at 6:52 PM, Robert Haas robertmh...@gmail.com wrote:
  The number of people reading and replying to
  emails on pgsql-bugs is already insufficient, perhaps because of the
  (incorrect) perception that Tom does or will fix everything and no one
  else needs to care.  So anything that makes it harder for people to
  follow along and participate is a non-starter IMV.
 
 Actually I think most of our bugs don't come in from pgsql-bugs. I
 think we want to add other bugs that come up from discussions on
 -hackers or -general which for whatever reason don't get immediately
 fixed.

Agreed.  At that point the TODO list is no longer needed, perhaps.  It
would be nice to have a system where we could categorize items, and add
features as well because the bug/feature distinction is often very
hard to make.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Getting a bug tracker for the Postgres project

2011-06-03 Thread Bruce Momjian
Tom Lane wrote:
 Peter Eisentraut pete...@gmx.net writes:
  That doesn't mean that better integration cannot be worked on later, but
  this illusion that a bug tracker must have magical total awareness of
  the entire flow of information in the project from day one is an
  illusion and has blocked this business for too long IMO.
 
 If it has only a partial view of the set of bugs being worked on, it's
 not going to meet the goals that are being claimed for it.

The problem with a bug tracker that only tracks some bugs is that people
will mistakenly believe the system is complete, when it is not.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Error in PQsetvalue

2011-06-03 Thread Merlin Moncure
On Fri, Jun 3, 2011 at 3:06 PM, Andrew Chernow a...@esilo.com wrote:
 On 6/3/2011 3:03 PM, Pavel Golub wrote:

 Hello.

 Reproduced under Windows XP SP3 using Visual C++ 2008 and Delphi. If
 PQsetvalue is called with second parameter equals to PQntuples then
 memory corruption appears. But it should grow internal tuples array
 and populate the last item with provided data. Please see the code:



 At first glance (have not tested this theory), looks like pqAddTuple()
 doesn't zero the newly allocated tuples slots like PQsetvalue() does.
 PQsetvalue is depending on the unassigned tuple table slots to be NULL to
 detect when a tuple must be allocated.  Around line 446 on fe-exec.c.  I
 never tested this case since libpqtypes never tried to call PQsetvalue on a
 PGresult created by the standard libpq library.

 The solution I see would be to zero the new table slots within pqAddTuple.
  Any other ideas?

It might not be necessary to do that.  AIUI the tuple table slot guard
is there essentially to let setval know if it needs to allocate tuple
attributes, which always has to be done after a new tuple is created
after a set.  It should be enough to keep track of the 'allocation
tuple' as an int (which is incremented after attributes are allocated
for the new tuple).  So if the tuple number is the same as the
allocation tuple, allocate the atts and increment the number; otherwise
just do a straight 'set'.  Basically we are taking advantage of the fact
that only one tuple can be allocated at a time, and it always has to be
the next one after the current set.

merlin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Error in PQsetvalue

2011-06-03 Thread Andrew Chernow

On 6/3/2011 4:06 PM, Andrew Chernow wrote:

On 6/3/2011 3:03 PM, Pavel Golub wrote:

Hello.

Reproduced under Windows XP SP3 using Visual C++ 2008 and Delphi. If
PQsetvalue is called with second parameter equals to PQntuples then
memory corruption appears. But it should grow internal tuples array
and populate the last item with provided data. Please see the code:




At first glance (have not tested this theory), looks like pqAddTuple()
doesn't zero the newly allocated tuples slots like PQsetvalue() does.
PQsetvalue is depending on the unassigned tuple table slots to be NULL
to detect when a tuple must be allocated. Around line 446 on fe-exec.c.
I never tested this case since libpqtypes never tried to call PQsetvalue
on a PGresult created by the standard libpq library.

The solution I see would be to zero the new table slots within
pqAddTuple. Any other ideas?



Eeekks.  Found an additional bug.  PQsetvalue only allocates the actual 
tuple if the provided tup_num equals the number of tuples (append) and 
that slot is NULL.  This is wrong.  The original idea behind PQsetvalue 
was you can add tuples in any order and overwrite existing ones.


Also, doing res->ntups++ whenever a tuple is allocated is incorrect as 
well, since we may first add tuple 3.  In that case, if ntups is 
currently <= 3 we need to set it to 3+1, otherwise do nothing.  In other 
words, while adding tuples via PQsetvalue the ntups should always be 
equal to (max_supplied_tup_num + 1) with all unset slots being NULL.


PQsetvalue(res, 3, 0, x, 1); // currently sets res->ntups to 1
PQsetvalue(res, 2, 0, x, 1); // as code is now, this returns FALSE due 
to bounds checking


--
Andrew Chernow
eSilo, LLC
global backup
http://www.esilo.com/



Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-03 Thread Kevin Grittner
Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
 
 I think you'll need to just memorize the lock deletion command in
 a backend-local list, and perform the deletion in a post-commit
 function. 
 
Hmm.  As mentioned earlier in the thread, cleaning these up doesn't
actually have any benefit beyond freeing space in the predicate
locking collections.  I'm not sure that benefit is enough to justify
this much new mechanism.  Maybe I should just leave them alone and
let them get cleaned up in due course with the rest of the locks. 
Any opinions on that?
 
-Kevin



Re: [HACKERS] Getting a bug tracker for the Postgres project

2011-06-03 Thread Bruce Momjian
Just to throw out a crazy idea, there has been talk of bug ids.  What if
a thread, made up of multiple message ids, was in fact the bug id, and
the first message in the thread (ignoring month boundaries) was the
definitive bug id, but any of the message ids could be used to represent
the definitive one.

That way, a message id mentioned in a commit message could track back to
the definitive bug id and therefore be used to close the bug.

If you think of it that way, your email stream is just a stream of
threads, with a definitive bug id per thread, that is either "not a
bug", "a bug", "a fix", or "other".

In a way, all you need to do is for someone to add the thread to the
bug system via email, and change its status via email.

Yes, crazy, but that is kind of how I track open items in my mailbox.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Error in PQsetvalue

2011-06-03 Thread Andrew Chernow




At first glance (have not tested this theory), looks like pqAddTuple()
doesn't zero the newly allocated tuples slots like PQsetvalue() does.
PQsetvalue is depending on the unassigned tuple table slots to be NULL to
detect when a tuple must be allocated.  Around line 446 on fe-exec.c.  I
never tested this case since libpqtypes never tried to call PQsetvalue on a
PGresult created by the standard libpq library.

The solution I see would be to zero the new table slots within pqAddTuple.
  Any other ideas?


It might not be necessary to do that.  AIUI the tuple table slot guard
is there essentially to let setval know if it needs to allocate tuple
attributes, which always has to be done after a new tuple is created
after a set.


Trying to append a tuple to a libpq generated PGresult will core dump 
due to the pqAddTuple issue, unless the append operation forces 
PQsetvalue to grow the tuple table; in which case new elements are 
zero'd.  OP attempted to append.


res = PQexec(returns 2 tuples);
PQsetvalue(res, PQntuples(res), ...);

--
Andrew Chernow
eSilo, LLC
global backup
http://www.esilo.com/



Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-03 Thread Heikki Linnakangas

On 03.06.2011 23:44, Kevin Grittner wrote:

Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:


I think you'll need to just memorize the lock deletion command in
a backend-local list, and perform the deletion in a post-commit
function.


Hmm.  As mentioned earlier in the thread, cleaning these up doesn't
actually have any benefit beyond freeing space in the predicate
locking collections.  I'm not sure that benefit is enough to justify
this much new mechanism.  Maybe I should just leave them alone and
let them get cleaned up in due course with the rest of the locks.
Any opinions on that?


Is there a chance of false positives if oid wraparound happens and a new 
table gets the same oid as the old one? It's also possible for a heap to 
get the OID of an old index or vice versa, will that confuse things?


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-03 Thread Kevin Grittner
Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
 On 03.06.2011 23:44, Kevin Grittner wrote:
 
 Hmm.  As mentioned earlier in the thread, cleaning these up
 doesn't actually have any benefit beyond freeing space in the
 predicate locking collections.  I'm not sure that benefit is
 enough to justify this much new mechanism.  Maybe I should just
 leave them alone and let them get cleaned up in due course with
 the rest of the locks.  Any opinions on that?
 
 Is there a chance of false positives if oid wraparound happens and
 a new table gets the same oid as the old one? It's also possible
 for a heap to get the OID of an old index or vice versa, will that
 confuse things?
 
Tuple locks should be safe from that because we use the tuple xmin
as part of the target key, but page and heap locks could result in
false positive serialization failures on OID wraparound unless we do
the cleanup.  I guess that tips the scales in favor of it being
worth the extra code.  I think it's still in that gray area where
it's a judgment call because it would be very infrequent and it
would be recoverable; but the fewer mysterious rollbacks, the fewer
posts about it we'll get.  So any costs in maintaining the extra
code will probably be saved in reduced support, even if the
performance benefit is negligible.
 
-Kevin



Re: [HACKERS] Getting a bug tracker for the Postgres project

2011-06-03 Thread Peter Eisentraut
On fre, 2011-06-03 at 16:42 -0400, Bruce Momjian wrote:
 Just to throw out a crazy idea, there has been talk of bug ids.  What
 if a thread, made up of multiple message ids, was in fact the bug id,
 and the first message in the thread (ignoring month boundaries) was
 the definitive bug id, but any of the message ids could be used to
 represent the definitive one.

That way, if someone breaks a thread, you can't reattach the
conversation to a bug.  And you couldn't take a thread off a bug or to a
new bug.

A heavily email-based tracker such as debbugs works almost like that,
but for those mentioned reasons, it's simpler to have the messages
belonging to a bug stored separately.




Re: [HACKERS] Getting a bug tracker for the Postgres project

2011-06-03 Thread Christopher Browne
On Fri, Jun 3, 2011 at 8:42 PM, Bruce Momjian br...@momjian.us wrote:
 Just to throw out a crazy idea, there has been talk of bug ids.  What if
 a thread, made up of multiple message ids, was in fact the bug id, and
 the first message in the thread (ignoring month boundaries) was the
 definitive bug id, but any of the message ids could be used to represent
 the definitive one.

 That way, a message id mentioned in a commit message could track back to
 the definitive bug id and therefore be used to close the bug.

 If you think of it that way, your email stream is just a stream of
 threads, with a definitive bug id per thread, that is either "not a
 bug", "a bug", "a fix", or "other".

 In a way, all you need to do is for someone to add the thread to the
 bug system via email, and change its status via email.

 Yes, crazy, but that is kind of how I track open items in my mailbox.

That doesn't seem crazy at all...  It seems to parallel the way that
distributed SCMs treat series of versions as the intersections of
related repository versions, each identified by a hash code.

There is one problem I see with the definitive bug ID, which is that
a thread might wind up discussing *two* problems, and it would be
regrettable to discover that this got forced to be treated as a single
bug.
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"



[HACKERS] Postmaster holding unlinked files for pg_largeobject table

2011-06-03 Thread Alexander Shulgin
Hello Hackers,

There is some strange behavior we're experiencing with one of the customer's 
DBs (8.4).

We've noticed that free disk space went down heavily on a system, and after a 
short analysis determined that the reason was that postmaster was holding lots 
of unlinked files open.  A sample of lsof output was something like this:

postmaste 15484  postgres   57u  REG  253,0 1073741824   
41125093 /srv/pgsql/data/base/352483309/2613.2 (deleted)
postmaste 15484  postgres   58u  REG  253,0 1073741824   
41125094 /srv/pgsql/data/base/352483309/2613.3 (deleted)
postmaste 15484  postgres   59u  REG  253,0 1073741824   
41125095 /srv/pgsql/data/base/352483309/2613.4 (deleted)

There were about 450 such (or similar) files, all of them having /2613 in the 
filename.  Since 2613 is the regclass of pg_largeobject, and we are indeed 
working with quite a few large objects in that DB, we suspect this is where 
our problem lies.

Restarting PostgreSQL obviously helps the issue and the disk space occupied by 
those unlinked files (about 63GB actually) is reclaimed.

So what happens on that host is that we drop/restore a fresh version of the DB 
from the production host, followed by a migration script which among other 
things loads around 16GB of data files as large objects.  This happens nightly.

But if we go and run the whole drop/restore and migration manually, the problem 
doesn't manifest itself right after migration is successfully finished.

Our next thought was that it must be the dropdb part of the nightly script that 
removes the pg_largeobject's files (still, we don't know what makes it keep 
them open), but dropping the DB doesn't manifest the problem either.

I'm currently running a VACUUM pg_largeobject on the problematic DB, to see if 
it triggers the problem, but so far it hasn't.

At this point it would be nice to hear your thoughts.  What could cause such 
behavior?

--
Regards,
Alex



Re: [HACKERS] Error in PQsetvalue

2011-06-03 Thread Merlin Moncure
On Fri, Jun 3, 2011 at 3:38 PM, Andrew Chernow a...@esilo.com wrote:

 Eeekks.  Found an additional bug.  PQsetvalue only allocates the actual
 tuple if the provided tup_num equals the number of tuples (append) and that
 slot is NULL.  This is wrong.  The original idea behind PQsetvalue was you
 can add tuples in any order and overwrite existing ones.

That was by design -- you are only supposed to be able to add a tuple
if you pass in exactly the insertion position (which is the same as
PQntuples()).  If you pass less than that, you will overwrite the
value at that position.  If you pass greater, you should get an error.
 This is also how the function is documented.  That's why you don't
have to zero out the tuple slots at all -- the insertion position is
always known and you just need to make sure the tuple atts are not
allocated more than one time per inserted tuple.  Meaning, PQsetvalue
needs to work more like pqAddTuple (and the insertion code should
probably be abstracted).

merlin



Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock

2011-06-03 Thread Kevin Grittner
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
 
 Tuple locks should be safe from that because we use the tuple xmin
 as part of the target key, but page and heap locks
 
That should have read page and relation locks.
 
 I guess that tips the scales in favor of it being worth the extra
 code.  I think it's still in that gray area
 
I just thought of something which takes it out of the gray area for
me: pg_locks.  Even though it would be extremely rare for a false
positive to actually occur if we let this go, people would be sure
to get confused by seeing locks on the dropped objects in the
pg_locks view.  They've got to be cleaned up.
 
-Kevin



Re: [HACKERS] Error in PQsetvalue

2011-06-03 Thread Andrew Chernow

On 6/3/2011 5:54 PM, Merlin Moncure wrote:

On Fri, Jun 3, 2011 at 3:38 PM, Andrew Chernow a...@esilo.com wrote:


Eeekks.  Found an additional bug.  PQsetvalue only allocates the actual
tuple if the provided tup_num equals the number of tuples (append) and that
slot is NULL.  This is wrong.  The original idea behind PQsetvalue was you
can add tuples in any order and overwrite existing ones.


That was by design -- you are only supposed to be able to add a tuple
if you pass in exactly the insertion position (which is the same as
PQntuples()).  If you pass less than that, you will overwrite the
value at that position.  If you pass greater, you should get an error.


Actually, the original idea, as I stated UT, was to allow adding tuples 
in any order as well as overwriting them.  Obviously lost that while 
trying to get libpqtypes working, which only appends.



  This is also how the function is documented.  That's why you don't
have to zero out the tuple slots at all -- the insertion position is
always known and you just need to make sure the tuple atts are not
allocated more than one time per inserted tuple.  Meaning, PQsetvalue
needs to work more like pqAddTuple (and the insertion code should
probably be abstracted).



You need to have insertion point zero'd in either case.  Look at the 
code.  It only allocates when appending *AND* the tuple at that index is 
NULL.  If pqAddTuple allocated the table, the tuple slots are 
uninitialized memory, thus it is very unlikely that the next tuple slot 
is NULL.


It is trivial to make this work the way it was initially intended (which 
mimics PQgetvalue in that you can fetch values in any order; that was 
the goal).  Are there any preferences?  I plan to supply a patch that 
allows setting values in any order as well as overwriting, unless someone 
speaks up.  I'll fix the docs as well.


--
Andrew Chernow
eSilo, LLC
global backup
http://www.esilo.com/



[HACKERS] Assert failure when rechecking an exclusion constraint

2011-06-03 Thread Noah Misch
Commit d2f60a3ab055fb61c8e1056a7c5652f1dec85e00 added an assert to indexam.c's
RELATION_CHECKS to block use of an index while it's being rebuilt.  This
assert trips while rechecking an exclusion constraint after an ALTER TABLE
rebuild:

CREATE TABLE t (
c   int,
EXCLUDE (c WITH =)
);

INSERT INTO t VALUES (1), (2);
ALTER TABLE t ALTER c TYPE smallint USING 1;

Here's the end of the stack trace (at -O1):

#0  0x7f3d2bae3095 in raise () from /lib/libc.so.6
#1  0x7f3d2bae4af0 in abort () from /lib/libc.so.6
#2  0x00705449 in ExceptionalCondition (conditionName=<value optimized 
out>, errorType=<value optimized out>, fileName=<value optimized out>, 
lineNumber=<value optimized out>) at assert.c:57
#3  0x00486148 in index_beginscan_internal 
(indexRelation=0x7f3d2c6990a0, nkeys=1, norderbys=0) at indexam.c:283
#4  0x0048622c in index_beginscan (heapRelation=0x7f3d2c68fb48, 
indexRelation=0x6a73, snapshot=0x7fff679d17b0, nkeys=-1, norderbys=0) at 
indexam.c:237
#5  0x0058a375 in check_exclusion_constraint (heap=0x7f3d2c68fb48, 
index=0x7f3d2c6990a0, indexInfo=0xc97968, tupleid=0xc8c06c, 
values=0x7fff679d18d0, isnull=0x7fff679d19e0 "", estate=0xc98ac8, newIndex=1 
'\001', errorOK=0 '\000') at execUtils.c:1222
#6  0x004d164e in IndexCheckExclusion (heapRelation=0x7f3d2c68fb48, 
indexRelation=0x7f3d2c6990a0, indexInfo=0xc97968, isprimary=0 '\000', 
isreindex=1 '\001') at index.c:2328
#7  index_build (heapRelation=0x7f3d2c68fb48, indexRelation=0x7f3d2c6990a0, 
indexInfo=0xc97968, isprimary=0 '\000', isreindex=1 '\001') at index.c:1765
#8  0x004d21b8 in reindex_index (indexId=131529, 
skip_constraint_checks=0 '\000') at index.c:2814
#9  0x004d2450 in reindex_relation (relid=<value optimized out>, 
flags=6) at index.c:2988
#10 0x0052a86a in finish_heap_swap (OIDOldHeap=131524, 
OIDNewHeap=131531, is_system_catalog=0 '\000', swap_toast_by_content=<value 
optimized out>, check_constraints=1 '\001', frozenXid=<value optimized out>) at 
cluster.c:1427
#11 0x0055fc1b in ATRewriteTables (rel=<value optimized out>, 
cmds=<value optimized out>, recurse=<value optimized out>, lockmode=8) at 
tablecmds.c:3329
#12 ATController (rel=<value optimized out>, cmds=<value optimized out>, 
recurse=<value optimized out>, lockmode=8) at tablecmds.c:2738
...

I could not come up with an actual wrong behavior arising from this usage, so
I'll tentatively call it a false positive.  reindex_index() could instead
unconditionally clear indexInfo->ii_Exclusion* before calling index_build(),
then pop pendingReindexedIndexes and call IndexCheckExclusion() itself.  Popping
pendingReindexedIndexes as we go can make the success of a reindex_relation()
dependent on the order in which we choose to rebuild indexes, though.

Another option is to just remove the assert as not worth preserving.

Opinions, other ideas?

Thanks,
nm



[HACKERS] Re: [COMMITTERS] pgsql: Disallow SELECT FOR UPDATE/SHARE on sequences.

2011-06-03 Thread Tatsuo Ishii
 and even wrap around completely.  Since the row lock is ignored by nextval
 and setval, the usefulness of the operation is highly debatable anyway.

As for pgpool, this is plain wrong. The reason pgpool uses a sequence
row lock is to obtain a sequence-table-lock-like effect, since taking
such a table lock directly is not currently permitted. Whether nextval
and setval ignore the sequence row lock is irrelevant.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp



Re: [HACKERS] Error in PQsetvalue

2011-06-03 Thread Merlin Moncure
On Fri, Jun 3, 2011 at 6:22 PM, Andrew Chernow a...@esilo.com wrote:
 Actually, the original idea, as I stated UT, was to allow adding tuples in
 any order as well as overwriting them.  Obviously lost that while trying to
 get libpqtypes working, which only appends.

Well, that may or not be the case, but that's irrelevant.  This has to
be backpatched to 9.0 and we can't use a bug to sneak in a behavior
change...regardless of what's done, it has to work as documented --
the current behavior isn't broken.  If we want PQsetvalue to support
creating tuples past the insertion point, that should be proposed for
9.2.

 You need to have insertion point zero'd in either case.  Look at the code.
  It only allocates when appending *AND* the tuple at that index is NULL.  If
 pqAddTuple allocated the table, the tuple slots are uninitialized memory,
 thus it is very unlikely that the next tuple slot is NULL.

I disagree -- I think the fix is a one-liner.  Line 446:
if (tup_num == res->ntups && !res->tuples[tup_num])

should just become
if (tup_num == res->ntups)

also the memset of the tuple slots when the slot array is expanded can
be removed.  (In addition, the tuple array expansion should really be
abstracted, but that isn't strictly necessary here.)

merlin



Re: [HACKERS] Remove support for 'userlocks'?

2011-06-03 Thread Josh Berkus
On 6/3/11 11:01 AM, Bruce Momjian wrote:
 According to our documentation, 'userlocks' were removed in PG 8.2:
 
   
 http://developer.postgresql.org/pgdocs/postgres/runtime-config-developer.html

I take it this doesn't trace advisory locks, and trace_locks does?

If so, then by all means remove the code.  I highly doubt anyone is
using that option anyway ...

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



[HACKERS] Next CF in 2 weeks

2011-06-03 Thread Josh Berkus
Hackers,

Our next commitfest begins June 15.  I will be CF manager for this one.

Currently we already have 40 patches in the queue.

If you already have code for a 9.2 feature, please get it in now.  If
you submitted a patch for this CF, we'd really appreciate it if you
would volunteer to review someone else's patch.  That will help ensure
that others have time to give your patch a thorough review.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [HACKERS] Error in PQsetvalue

2011-06-03 Thread Andrew Chernow

On 6/3/2011 7:14 PM, Merlin Moncure wrote:

On Fri, Jun 3, 2011 at 6:22 PM, Andrew Chernow a...@esilo.com wrote:

Actually, the original idea, as I stated UT, was to allow adding tuples in
any order as well as overwriting them.  Obviously lost that while trying to
get libpqtypes working, which only appends.


Well, that may or not be the case, but that's irrelevant.  This has to
be backpatched to 9.0 and we can't use a bug to sneak in a behavior
change...regardless of what's done, it has to work as documented --
the current behavior isn't broken.  If we want PQsetvalue to support
creating tuples past the insertion point, that should be proposed for
9.2.



Well, not really irrelevant, since understanding the rationale behind changes is 
important.  I actually reversed my opinion on this and was preparing a patch 
that simply did a memset in pqAddTuple.  See below for why.



You need to have insertion point zero'd in either case.  Look at the code.
  It only allocates when appending *AND* the tuple at that index is NULL.  If
pqAddTuple allocated the table, the tuple slots are uninitialized memory,
thus it is very unlikely that the next tuple slot is NULL.


I disagree -- I think the fix is a one-liner.  Line 446:
if (tup_num == res->ntups && !res->tuples[tup_num])

should just become
if (tup_num == res->ntups)

also the memset of the tuple slots when the slot array is expanded can
be removed.  (In addition, the tuple array expansion should really be
abstracted, but that isn't strictly necessary here.)



All true.  This is a cleaner fix to something that was in fact broken ;)  You 
want to apply it?


The fact that the tuples were being zero'd, plus the NULL check, is evidence of 
the original intent, which is what confused me.  I found internal libpqtypes 
notes related to this change; it actually had to do with producing a result with 
dead tuples that would cause PQgetvalue and others to crash.  Thus, it seemed 
better to only allow creating a result that is always *valid*.


--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/



Re: [HACKERS] creating CHECK constraints as NOT VALID

2011-06-03 Thread Alvaro Herrera
Excerpts from Thom Brown's message of vie jun 03 13:45:57 -0400 2011:
 On 3 June 2011 17:58, Alvaro Herrera alvhe...@commandprompt.com wrote:
  Excerpts from Thom Brown's message of vie jun 03 12:47:58 -0400 2011:

  Nice work Alvaro :)  Shouldn't patches be sent to -hackers instead of
  the obsolete -patches list?  Plus I'm a bit confused as to why the
  patch looks like an email instead of a patch.
 
  Did I really email pgsql-patches?  If so, I didn't notice -- but I don't
  see it (and the archives seem to agree with me, there's no email after
  2008-10).
 
 My bad, I was reading your patch, which contained an email subject
 beginning with [PATCH] (similar to mailing list subject prefixes)
 which, if I had given it any further thought, doesn't mean it's on the
 -patches list.

Ah, that makes sense.  The pgsql-patches tag was [PATCHES] actually,
though :-)

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] Postmaster holding unlinked files for pg_largeobject table

2011-06-03 Thread Alvaro Herrera
Excerpts from Alexander Shulgin's message of vie jun 03 17:45:28 -0400 2011:

 There were about 450 such (or similar) files, all of them having /2613 in the 
 filename.  Since 2613 is a regclass of pg_largeobject and we are indeed 
 working with quite a few large objects in that DB so this is where our 
 problem lies we suspect.
 
 Restarting PostgreSQL obviously helps the issue and the disk space occupied 
 by those unlinked files (about 63GB actually) is reclaimed.
 
 So what happens on that host is that we drop/restore a fresh version of the 
 DB from the production host, followed by a migration script which among other 
 things loads around 16GB of data files as large objects.  This happens 
 nightly.

What surprises me is that the open references remain after a database
drop.  Surely this means that no backends keep open file descriptors to
any table in that database, because there are no connections.

I also requested Alexander to run a checkpoint and see if that made the
FDs go away (on the theory that bgwriter could be the culprit) -- no dice.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] Remove support for 'userlocks'?

2011-06-03 Thread Bruce Momjian
Josh Berkus wrote:
 On 6/3/11 11:01 AM, Bruce Momjian wrote:
  According to our documentation, 'userlocks' were removed in PG 8.2:
  
  
  http://developer.postgresql.org/pgdocs/postgres/runtime-config-developer.html
 
 I take it this doesn't trace advisory locks, and trace_locks does?
 
 If so, then by all means remove the code.  I highly doubt anyone is
 using that option anyway ...

OK, that is something I will do for 9.2.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Error in PQsetvalue

2011-06-03 Thread Andrew Chernow



I disagree -- I think the fix is a one-liner.  Line 446:
if (tup_num == res->ntups && !res->tuples[tup_num])

should just become
if (tup_num == res->ntups)

also the memset of the tuple slots when the slot array is expanded can
be removed.  (In addition, the tuple array expansion should really be
abstracted, but that isn't strictly necessary here.)



All true. This is a cleaner fix to something that was in fact broken ;) You want


Attached is a patch that fixes the OP's issue.  PQsetvalue now uses pqAddTuple to 
grow the tuple table, and the remnants of an older idea that caused the bug have 
been removed.


--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c
index 83c5ea3..9f013ed 100644
--- a/src/interfaces/libpq/fe-exec.c
+++ b/src/interfaces/libpq/fe-exec.c
@@ -424,28 +424,8 @@ PQsetvalue(PGresult *res, int tup_num, int field_num, char *value, int len)
 	if (tup_num < 0 || tup_num > res->ntups)
 		return FALSE;
 
-	/* need to grow the tuple table? */
-	if (res->ntups >= res->tupArrSize)
-	{
-		int			n = res->tupArrSize ? res->tupArrSize * 2 : 128;
-		PGresAttValue **tups;
-
-		if (res->tuples)
-			tups = (PGresAttValue **) realloc(res->tuples, n * sizeof(PGresAttValue *));
-		else
-			tups = (PGresAttValue **) malloc(n * sizeof(PGresAttValue *));
-
-		if (!tups)
-			return FALSE;
-
-		memset(tups + res->tupArrSize, 0,
-			   (n - res->tupArrSize) * sizeof(PGresAttValue *));
-		res->tuples = tups;
-		res->tupArrSize = n;
-	}
-
-	/* need to allocate a new tuple? */
-	if (tup_num == res->ntups && !res->tuples[tup_num])
+	/* need to allocate a new tuple. */
+	if (tup_num == res->ntups)
 	{
 		PGresAttValue *tup;
 		int			i;
@@ -457,6 +437,12 @@ PQsetvalue(PGresult *res, int tup_num, int field_num, char *value, int len)
 		if (!tup)
 			return FALSE;
 
+		if (!pqAddTuple(res, tup))
+		{
+			free(tup);
+			return FALSE;
+		}
+
 		/* initialize each column to NULL */
 		for (i = 0; i < res->numAttributes; i++)
 		{
@@ -464,11 +450,12 @@ PQsetvalue(PGresult *res, int tup_num, int field_num, char *value, int len)
 			tup[i].value = res->null_field;
 		}
 
-		res->tuples[tup_num] = tup;
-		res->ntups++;
+		attval = &tup[tup_num][field_num];
+	}
+	else
+	{
+		attval = &res->tuples[tup_num][field_num];
 	}
-
-	attval = &res->tuples[tup_num][field_num];
 
 	/* treat either NULL_LEN or NULL value pointer as a NULL field */
 	if (len == NULL_LEN || value == NULL)



Re: [HACKERS] Error in PQsetvalue

2011-06-03 Thread Andrew Chernow

On 6/3/2011 10:26 PM, Andrew Chernow wrote:



I disagree -- I think the fix is a one-liner.  Line 446:
if (tup_num == res->ntups && !res->tuples[tup_num])

should just become
if (tup_num == res->ntups)

also the memset of the tuple slots when the slot array is expanded can
be removed.  (In addition, the tuple array expansion should really be
abstracted, but that isn't strictly necessary here.)



All true. This is a cleaner fix to something that was in fact broken ;) You want


Attached is a patch that fixes the OP's issue.  PQsetvalue now uses pqAddTuple to
grow the tuple table, and the remnants of an older idea that caused the bug have
been removed.



Sorry, I attached the wrong patch.  Here is the correct one.

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c
index 83c5ea3..b4a394f 100644
--- a/src/interfaces/libpq/fe-exec.c
+++ b/src/interfaces/libpq/fe-exec.c
@@ -424,28 +424,8 @@ PQsetvalue(PGresult *res, int tup_num, int field_num, char *value, int len)
 	if (tup_num < 0 || tup_num > res->ntups)
 		return FALSE;
 
-	/* need to grow the tuple table? */
-	if (res->ntups >= res->tupArrSize)
-	{
-		int			n = res->tupArrSize ? res->tupArrSize * 2 : 128;
-		PGresAttValue **tups;
-
-		if (res->tuples)
-			tups = (PGresAttValue **) realloc(res->tuples, n * sizeof(PGresAttValue *));
-		else
-			tups = (PGresAttValue **) malloc(n * sizeof(PGresAttValue *));
-
-		if (!tups)
-			return FALSE;
-
-		memset(tups + res->tupArrSize, 0,
-			   (n - res->tupArrSize) * sizeof(PGresAttValue *));
-		res->tuples = tups;
-		res->tupArrSize = n;
-	}
-
-	/* need to allocate a new tuple? */
-	if (tup_num == res->ntups && !res->tuples[tup_num])
+	/* need to allocate a new tuple. */
+	if (tup_num == res->ntups)
 	{
 		PGresAttValue *tup;
 		int			i;
@@ -457,15 +437,18 @@ PQsetvalue(PGresult *res, int tup_num, int field_num, char *value, int len)
 		if (!tup)
 			return FALSE;
 
+		if (!pqAddTuple(res, tup))
+		{
+			free(tup);
+			return FALSE;
+		}
+
 		/* initialize each column to NULL */
 		for (i = 0; i < res->numAttributes; i++)
 		{
 			tup[i].len = NULL_LEN;
 			tup[i].value = res->null_field;
 		}
-
-		res->tuples[tup_num] = tup;
-		res->ntups++;
 	}
 
 	attval = &res->tuples[tup_num][field_num];
