Re: [HACKERS] Re: Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Heikki Linnakangas
Josh Berkus wrote:
>> That is exactly the core idea I was trying to suggest in my rambling
>> message.  Just that small additional bit of information transmitted and
>> published to the master via that route, and it's possible to optimize
>> this problem in a way not available now.  And it's a way that I believe
>> will feel more natural to some users who may not be well served by any
>> of the existing tuning possibilities.
> 
> Well, if both you and Tom think it would be relatively easy (or at least
> easier than continuing to pursue query cancel troubleshooting), then
> please start coding it.  It was always a possible approach, we just
> collectively thought that query cancel would be easier.

You still need query cancels. A feedback loop just makes it happen less
frequently.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Heikki Linnakangas
Greg Stark wrote:
> On Fri, Feb 26, 2010 at 9:19 PM, Tom Lane  wrote:
>> There's *definitely* not going to be enough information in the WAL
>> stream coming from a master that doesn't think it has HS slaves.
>> We can't afford to record all that extra stuff in installations for
>> which it's just useless overhead.  BTW, has anyone made any attempt
>> to measure the performance hit that the patch in its current form is
>> creating via added WAL entries?
> 
> What extra entries?

* An xact-assignment record is written for every PGPROC_MAX_CACHED_SUBXIDS
(= 64) subtransaction IDs assigned to a single top-level transaction.

* A running-xacts record is written at every online checkpoint

* A btree-reuse-page record is written whenever a dead b-tree page is
recycled

* A vacuum cleanup-info record is written once per VACUUM of a table

* A standby-lock record is written for each AccessExclusiveLock acquired.

Am I missing something?

I doubt any of these are noticeable, though I don't think anyone has
measured it.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Testing of parallel restore with current snapshot

2010-02-26 Thread Gokulakannan Somasundaram
Tom,
I just picked up the patch, but it seems to be in binary format. Could you
resend the patch to me?

Thanks,
Gokul.

On Sat, May 30, 2009 at 3:12 AM, Tom Lane  wrote:

> Josh Berkus  writes:
> > Tom,
> >> Is anyone interested enough to try it if I code it?
>
> > If you're patient for results, sure.  I seem to be doing a customer
> > migration or upgrade every week now, so it wouldn't take me long to have
> > a test subject with a fairly complex database.
>
> Here's a draft patch that does ordering using two lists, as I proposed.
> Please test to see if it's any faster or slower than the original logic.
>
> Note: since this changes struct TocEntry, be sure to recompile all files
> in src/bin/pg_dump/ after patching.
>
>regards, tom lane
>
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>
>


Re: [HACKERS] Lock Wait Statistics (next commitfest)

2010-02-26 Thread Gokulakannan Somasundaram
I am just adding my two cents; please ignore it if it's totally irrelevant.
While doing performance testing/tuning of any application, the standard
monitoring requirements from a database are:
a) Different types of wait events and the time spent in each of them
b) Top ten queries by total logical reads & average logical reads
c) Top ten queries by total CPU time & average CPU time

The monitoring methodology should not add so much overhead during the test
that it invalidates the application response times captured during the
performance test (let's not worry about Heisenberg uncertainty for now :)) )

Of all the databases I have worked with, Oracle provides the best monitoring
product in the form of Statspack.

Statspack works as follows: it takes a copy of the important catalog tables
(the counterparts of our pg_ tables), which store information such as
cumulative wait statistics per wait event, cumulative I/O statistics per
SQL_Hash (and SQL_Text), whether a particular statement went for a hard parse
or a soft parse (because of plan caching), and the status of various
in-memory data structures.

So we take a snapshot like this before and after the test and generate a
Statspack report out of it, which contains all the necessary information for
database-level tuning. That way we are never left in the dark from a database
tuning perspective.

Recently I wrote a set of SQL statements which do the same for SQL Server
from its sys tables (wait_io_events, query_io_stats, etc.) and finally
retrieve the information in the same format as Statspack.
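
To make the idea concrete in Postgres terms, the same snapshot/diff trick can
be approximated today against the cumulative statistics views. This is only a
rough sketch (the snapshot table names are made up, and pg_stat_database is
just one of several views worth capturing):

    CREATE TABLE stats_snap_before AS
        SELECT now() AS snap_time, * FROM pg_stat_database;

    -- ... run the performance test ...

    CREATE TABLE stats_snap_after AS
        SELECT now() AS snap_time, * FROM pg_stat_database;

    -- Report the deltas, roughly what a Statspack-style report would show.
    SELECT a.datname,
           a.xact_commit - b.xact_commit AS commits,
           a.blks_read   - b.blks_read   AS blocks_read,
           a.blks_hit    - b.blks_hit    AS blocks_hit
      FROM stats_snap_after a
      JOIN stats_snap_before b USING (datname);

Of course this only covers what the statistics collector already tracks;
wait-event timing, which is what this thread is really about, is not there.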

But I think we lack functionality like that in Postgres. Things like DTrace
are more for developers than for users and, as already pointed out, work only
on Solaris. While we can expect it on Linux shortly, people on Windows do not
have many options. (That said, I maintain that DTrace is an absolutely
wonderful hooking mechanism.) So we should aim to develop a monitoring
mechanism like Statspack for Postgres.

I hope I have conveyed my concern.

Thanks,
Gokul.




On Sat, Feb 27, 2010 at 10:40 AM, Greg Smith  wrote:

> Bruce Momjian wrote:
>
>> What happened to this patch?
>>
>>
>
> Returned with feedback in October after receiving a lot of review, no
> updated version submitted since then:
>
> https://commitfest.postgresql.org/action/patch_view?id=98
>
> --
> Greg Smith  2ndQuadrant US  Baltimore, MD
> PostgreSQL Training, Services and Support
> g...@2ndquadrant.com   www.2ndQuadrant.us
>
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>


Re: [HACKERS] Correcting Error message

2010-02-26 Thread Tom Lane
Jaime Casanova  writes:
> On Fri, Feb 26, 2010 at 7:12 PM, Michael Glaesemann  
> wrote:
>> In any event, I couldn't get your example to work on Postgres 8.4 regardless
>> due to the varchar2 type. Which version of Postgres are you using?
>> 
>> test=# CREATE TABLE footable(id int4, name varchar2(10));
>> ERROR:  type "varchar2" does not exist

> it's probably Postgres Plus (the EnterpriseDB fork),

Yeah, particularly given the OP's address ;-).  The example goes through
fine in standard Postgres if you use varchar, or indeed any other
datatype.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Correcting Error message

2010-02-26 Thread Jaime Casanova
On Fri, Feb 26, 2010 at 7:12 PM, Michael Glaesemann
 wrote:
>
> In any event, I couldn't get your example to work on Postgres 8.4 regardless
> due to the varchar2 type. Which version of Postgres are you using?
>
> test=# CREATE TABLE footable(id int4, name varchar2(10));
> ERROR:  type "varchar2" does not exist
>

it's probably Postgres Plus (the EnterpriseDB fork), because varchar2
is an Oracle invention

-- 
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] A thought on Index Organized Tables

2010-02-26 Thread Gokulakannan Somasundaram
>
>
> Actually Tom, I am not able to understand that completely. What you are
> saying is that in the current scenario, when there is an index on a broken
> data type, it will return no results, but it will never return wrong
> results. So the update will never corrupt the heap data. I take it as you
> say (please correct me if I am wrong).
> But even returning no results might lead to failures in unique checks.
> While inserting, I try to check whether a particular value is already
> present, and if the index returns no results, the insert will go ahead
> assuming that the unique check has passed, while in reality it has failed.
>
> Wait a minute.  Bingo!  So for unique checks we are already going to the
> index from the heap. So it is the same thing I am doing with the thick
> index. So if we can trust our current unique checks, then we should trust
> the thick index.
>
> Thanks, Tom, for having this good conversation.
>
> I think this broken data type problem / volatile function issue has to be
> resolved for the current index, if we advocate stopping the thick index.
> WOW!!!
>
>
> Can I get feedback from Tom / Heikki regarding my observation?

Regards,
Gokul.


Re: [HACKERS] Lock Wait Statistics (next commitfest)

2010-02-26 Thread Greg Smith

Bruce Momjian wrote:
> What happened to this patch?

Returned with feedback in October after receiving a lot of review, no
updated version submitted since then:

https://commitfest.postgresql.org/action/patch_view?id=98

--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Greg Smith

Aidan Van Dyk wrote:
> Would we (ya, the royal we) be willing to say that if you want the
> benefit of removing the MVCC overhead of long-running queries you need
> to run PITR backup/archive recovery, and if you want SR, you get a
> closed-loop master-follows-slave-xmin behaviour?

To turn that question around a little, I think it's reasonable to say 
that closed-loop master-follows-slave-xmin behavior is only practical to 
consider implementing with SR--and even there, it should be optional 
rather than required until there's more field experience on the whole 
thing.  Whether it's the default or not could take a bit of debate to 
sort out too.


If you think of it in those terms, the idea that "you need to run PITR 
backup/archive recovery" to not get that behavior isn't an important 
distinction anymore.  If you run SR with the option enabled you could 
get it, any other setup and you won't.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Aidan Van Dyk
* Greg Smith  [100226 23:39]:

> Just not having the actual query running on the master is such a  
> reduction in damage that I think it's delivering the essence of what  
> people are looking for regardless.  That it might be possible in some  
> cases to additionally avoid the overhead that comes along with any  
> long-running query is a nice bonus, and it's great the design allows for  
> that possibility.  But if that's only possible with risk, heavy  
> tweaking, and possibly some hacks, I'm not sure that's making the right  
> trade-offs for everyone.

Would we (ya, the royal we) be willing to say that if you want the
benefit of removing the MVCC overhead of long-running queries you need
to run PITR backup/archive recovery, and if you want SR, you get a
closed-loop master-follows-slave-xmin behaviour?

a.

-- 
Aidan Van Dyk Create like a god,
ai...@highrise.ca   command like a king,
http://www.highrise.ca/   work like a slave.


signature.asc
Description: Digital signature


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Greg Smith

Greg Stark wrote:
> But if they move from having a plain old PITR warm standby to having
> one they can run queries on they might well assume that the big
> advantage of having the standby to play with is precisely that they
> can do things there that they have never been able to do on the master
> previously without causing damage.

Just not having the actual query running on the master is such a 
reduction in damage that I think it's delivering the essence of what 
people are looking for regardless.  That it might be possible in some 
cases to additionally avoid the overhead that comes along with any 
long-running query is a nice bonus, and it's great the design allows for 
that possibility.  But if that's only possible with risk, heavy 
tweaking, and possibly some hacks, I'm not sure that's making the right 
trade-offs for everyone.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Greg Smith

Joshua D. Drake wrote:
> On Sat, 27 Feb 2010 00:43:48 +, Greg Stark  wrote:
>> I want my ability to run large batch queries without any performance
>> or reliability impact on the primary server.
>
> +1
>
> I can use any number of other technologies for high availability.

Remove "must be an instant-on failover at the same time" from the 
requirements and you don't even need 9.0 to handle that, this has been a 
straightforward to solve problem since 8.2.  It's the combination of HA 
and queries that make things hard to do.


If you just want batch queries on another system without being concerned 
about HA at the same time, the first option is to just fork the base 
backup and WAL segment delivery to another server and run queries there.


Some simple filesystem snapshot techniques will also suffice to handle
it all on the same standby.  Stop warm standby recovery, snapshot,
trigger the server, run your batch job; once finished, roll back to the
snapshot, grab the latest segment files, and resume standby catchup.
Even the lame Linux LVM snapshot features can handle that job--one of my
coworkers even has the whole thing scripted, this is so common.


And if you have to go live because there's a failover, you're back to 
the same "cold standby" situation a large max_standby_delay puts you at, 
so it's not even very different from what you're going to get in 9.0 if 
this is your priority mix.  The new version is just lowering the 
operational complexity involved.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] caracara failing to bind to localhost?

2010-02-26 Thread Tom Lane
Buildfarm member caracara has been failing the last few days because of
this:

LOG:  could not bind socket for statistics collector: Cannot assign requested 
address
LOG:  disabling statistics collector for lack of working socket

That code hasn't changed recently, AFAIK, so I'm thinking something's
broken in the machine's environment.  Any ideas?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Greg Stark
On Fri, Feb 26, 2010 at 9:44 PM, Tom Lane  wrote:
> Greg Stark  writes:
>
>> What extra entries?
>
> Locks, just for starters.  I haven't read enough of the code yet to know
> what else Simon added.  In the past it's not been necessary to record
> any transient information in WAL, but now we'll have to.

Haven't we been writing locks to the WAL since two-phase commit?

-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Joshua D. Drake
On Sat, 27 Feb 2010 00:43:48 +, Greg Stark  wrote:
> On Fri, Feb 26, 2010 at 11:56 PM, Greg Smith  wrote:
>> This is also the reason why the whole "pause recovery" idea is a
>> fruitless path to wander down.  The whole point of this feature is
>> that people have a secondary server available for high-availability,
>> *first and foremost*, but they'd like it to do something more
>> interesting than leave it idle all the time.  The idea that you can
>> hold off on applying standby updates for long enough to run seriously
>> long reports is completely at odds with the idea of high-availability.

> I want my ability to run large batch queries without any performance
> or reliability impact on the primary server.

+1

I can use any number of other technologies for high availability.

Joshua D. Drake

 
-- 
PostgreSQL - XMPP: jdrake(at)jabber(dot)postgresql(dot)org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Alpha4 Available Now

2010-02-26 Thread Дмитрий Фефелов
> 
> On Feb 26, 2010, at 0:55 , Дмитрий Фефелов wrote:
> 
> > http://developer.postgresql.org/pgdocs/postgres/release-9-0.html
> >
> > Performance section:
> >
> >> Simplify the forms foo <> true and foo <> false to foo = false and
> >> foo = true during query optimization.
> >
> > Will it work correctly when foo is NULL?
> 
> It shouldn't have any effect: NULL <> anything and NULL = anything is  
> NULL

I already got it, dumb question ;))


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Greg Stark
On Sat, Feb 27, 2010 at 2:43 AM, Greg Smith  wrote:
>
> But if you're running the 8 hour report on the master right now, aren't you
> already exposed to a similar pile of bloat issues while it's going?  If I
> have the choice between "sometimes queries will get canceled" vs. "sometimes
> the master will experience the same long-running transaction bloat issues as
> in earlier versions even if the query runs on the standby", I feel like
> leaning toward the latter at least leads to a problem people are used to.

If they move from running these batch queries on the master to running
them on the slave then sure, the situation will be no worse than
before.

But if they move from having a plain old PITR warm standby to having
one they can run queries on they might well assume that the big
advantage of having the standby to play with is precisely that they
can do things there that they have never been able to do on the master
previously without causing damage.

I agree that having queries randomly and unpredictably canceled is
pretty awful. My argument was that max_standby_delay should default to
infinity on the basis that any other value has to be picked based on
actual workloads and SLAs.

My feeling is that there are probably only two types of configurations
that make sense, a HA replica with a low max_standby_delay or a
reporting replica with a high max_standby_delay. Any attempt to
combine the two on the same system will only work if you know your
application well and can make compromises with both.

I would also like to be able to handle load balancing read-only
queries but that will be really tricky. You want up-to-date data and
you want to be able to run moderately complex queries. That kind of
workload may well require synchronous replication to really work
properly.

-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER ROLE/DATABASE RESET ALL versus security

2010-02-26 Thread Bruce Momjian
Alvaro Herrera wrote:
> Tom Lane wrote:
> > Alvaro Herrera  writes:
> > > Tom Lane wrote:
> > >> It looks to me like the code in AlterSetting() will allow an ordinary
> > >> user to blow away all settings for himself.  Even those that are for
> > >> SUSET variables and were presumably set for him by a superuser.  Isn't
> > >> this a security hole?  I would expect that an unprivileged user should
> > >> not be able to change such settings, not even to the extent of
> > >> reverting to the installation-wide default.
> > 
> > > Yes, it is, but this is not a new hole.  This works just fine in 8.4
> > > too:
> > 
> > So I'd argue for changing it in 8.4 too.
> 
> Understood.  I'm starting to look at what this requires.
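
For anyone catching up on the thread, the scenario under discussion is
roughly the following (role name and setting chosen only for illustration):

    -- A superuser pins a SUSET setting on an ordinary role:
    ALTER ROLE bob SET log_statement = 'all';

    -- Later, connected as the unprivileged role itself:
    ALTER ROLE bob RESET ALL;   -- silently discards the superuser-set value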

Any progress on this?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Lock Wait Statistics (next commitfest)

2010-02-26 Thread Bruce Momjian

What happened to this patch?

---

Mark Kirkwood wrote:
> Where I work they make extensive use of PostgreSQL. One of the things 
> they typically want to know about is lock waits. Out of the box there 
> is not much in the way of tracking for such, particularly in older 
> versions. The view pg_stats is fine for stuff happening *now*, but 
> typically I find I'm being asked about something that happened last 
> night...
> 
> Now for those platforms with dtrace there is a lock wait probe, useful - 
> but not for those of us on Linux! There is the conf option to log lock 
> waits > deadlock timeout (8.3 onwards), and this is great, but I 
> wondered if having something like this available as part of the stats 
> module would be useful.
> 
> So here is my initial attempt at this, at this point merely to spark 
> discussion (attached patch)
> 
> I have followed some of the ideas from the function execution stats, but 
> locks required some special treatment.
> 
> - new parameter track_locks (maybe should be track_lock_waits?)
> - new hash locks attached to stats dbentry
> - several new stat*lock c functions
> - new view pg_stat_lock_waits
> - new attributes for pg_stat_database
> 
> This version has simply exposed the locktag structure in the view along 
> with corresponding #waits and wait time. This should probably get 
> reformed to look a little more like pg_locks. I figured this is easily 
> doable along with the no doubt many changes coming from revew comments.
> 
> Also I did not do any clever amalgamation of transaction lock waits - 
> there is probably gonna be a lot of those and keeping the detail is 
> unlikely to be useful. It would be easy to collect them all together in 
> one transaction entry.
> 
> regards
> 
> Mark
> 
> 
> 

[ application/x-gzip is not supported, skipping... ]

> 
> -- 
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] plpgsql: numeric assignment to an integer variable errors out

2010-02-26 Thread Bruce Momjian
Tom Lane wrote:
> Bruce Momjian  writes:
> > Whatever happened to this patch?
> 
> I think we bounced it on the grounds that it would represent a
> fundamental change in plpgsql behavior and break a whole lot of
> applications.  People have been relying on plpgsql's coerce-via-IO
> assignment behavior for ten years.  If you prefer coerce via
> cast conversion, you can get that by writing an explicit cast.
> 
> Now it is true that a lot of the uses for that were subsumed when
> we added coerce-via-IO to the native cast capabilities; but I'm
> still quite scared of what this would break, and I don't see any
> field demand for a change.

Thanks.  Sorry to be asking so many questions but it is the only way I
can be sure we have covered everything.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] plpgsql: numeric assignment to an integer variable errors out

2010-02-26 Thread Tom Lane
Bruce Momjian  writes:
> Whatever happened to this patch?

I think we bounced it on the grounds that it would represent a
fundamental change in plpgsql behavior and break a whole lot of
applications.  People have been relying on plpgsql's coerce-via-IO
assignment behavior for ten years.  If you prefer coerce via
cast conversion, you can get that by writing an explicit cast.
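
For instance, with the example upthread, something along these lines should
assign cleanly under the existing behavior (just a sketch):

    x := (9E3/2)::int;   -- explicit numeric-to-integer cast, instead of
                         -- relying on the assignment's coerce-via-IO path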

Now it is true that a lot of the uses for that were subsumed when
we added coerce-via-IO to the native cast capabilities; but I'm
still quite scared of what this would break, and I don't see any
field demand for a change.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ECPG, two varchars with same name on same line

2010-02-26 Thread Bruce Momjian
Michael Meskes wrote:
> On Fri, May 01, 2009 at 03:49:47PM +0300, Heikki Linnakangas wrote:
> > ECPG constructs internal struct names for VARCHAR fields using the field  
> > name and line number it's defined on. In a contrived example, though,  
> > that's not unique. Consider the following example:
> > ...
> > That hardly happens in practice, of course, but it's trivial to fix by  
> > just adding some more salt to the struct name, like a simple counter, so  
> > it seems we should.
> 
> In principle you're right. However, the change needs to be made in several
> places, like the internal variable structure that keeps the lineno anyway
> but would need to carry the counter too. BTW, we can remove the lineno then,
> I think. Anyway, given that we are close to a release and the bug apparently
> never came up in real-life usage for years, I'd prefer not to change it now
> but wait until the release has been done.

Was this fixed?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Testing of parallel restore with current snapshot

2010-02-26 Thread Tom Lane
Bruce Momjian  writes:
> I don't see this as ever having been applied.  What should we do with
> it?

I believe we decided that there wasn't any measurable win.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] plpgsql: numeric assignment to an integer variable errors out

2010-02-26 Thread Bruce Momjian

Whatever happened to this patch?

---

Nikhil Sontakke wrote:
> Hi,
> 
>  wrote:
> 
> > The following plpgsql function errors out with cvs head:
> >
> > CREATE function test_assign() returns void
> > AS
> > $$ declare x int;
> > BEGIN
> > x := 9E3/2;
> > END
> > $$ LANGUAGE 'plpgsql';
> >
> > postgres=# select test_assign();
> > ERROR:  invalid input syntax for integer: "4500."
> > CONTEXT:  PL/pgSQL function "test_assign" line 3 at assignment
> >
> > We do have an existing cast from numeric to type integer. But here
> > basically we convert the value to string in exec_cast_value before calling
> > int4in. And then use of strtol in pg_atoi leads to this complaint. Guess
> > converting the value to string is not always a good strategy.
> >
> 
> PFA, patch which uses find_coercion_pathway to find a direct
> COERCION_PATH_FUNC function and uses that if it is available. Or is there a
> better approach? Seems to handle the above issue with this patch.
> 
> Regards,
> Nikhils
> -- 
> http://www.enterprisedb.com

[ Attachment, skipping... ]

> 
> -- 
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Greg Smith

Greg Stark wrote:
> Eh? That's not what I meant at all. Actually it's kind of the exact
> opposite of what I meant.


Sorry about that--I think we just hit one of those language usage drift 
bits of confusion.  "Sit in the corner" has a very negative tone to it 
in US English and I interpreted your message badly as a result.  A 
Google search for images using that phrase will quickly show you what I 
mean.



> What I meant was that your description of the "High Availability first
> and foremost" is only one possible use case. Simon in the past
> expressed the same single-minded focus on that use case. It's a
> perfectly valid use case and I would probably agree if we had to
> choose just one it would be the most important.


Sure, there are certainly others, and as much as possible more 
flexibility here is a good thing.  What I was suggesting is that if the 
only good way to handle long-running queries has no choice but to 
sacrifice high-availability, which is the situation if 
max_standby_delay is the approach you use, then the most obvious users 
for this feature are not being well served by that situation.  I would 
guess a large portion of the users looking forward to Hot Standby are in 
the "have an underutilized high-availability standby I'd like to use for 
offloading long running reports" camp, and if there is no way to serve 
them well this feature is missing the mark a bit. 

You really can't do any better without better master/standby integration 
though, and as pointed out a couple of times here that was considered 
and just not followed through on yet.  I'm increasingly concerned that 
nothing else will really do though.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Testing of parallel restore with current snapshot

2010-02-26 Thread Bruce Momjian

I don't see this as ever having been applied.  What should we do with
it?

---

Tom Lane wrote:
> Josh Berkus  writes:
> > Tom,
> >> Is anyone interested enough to try it if I code it?
> 
> > If you're patient for results, sure.  I seem to be doing a customer 
> > migration or upgrade every week now, so it wouldn't take me long to have 
> > a test subject with a fairly complex database.
> 
> Here's a draft patch that does ordering using two lists, as I proposed.
> Please test to see if it's any faster or slower than the original logic.
> 
> Note: since this changes struct TocEntry, be sure to recompile all files
> in src/bin/pg_dump/ after patching.
> 
>   regards, tom lane
> 

Content-Description: alternate-parallel-restore-1.patch.gz

[ Type application/octet-stream treated as attachment, skipping... ]

> 
> -- 
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] add_path optimization

2010-02-26 Thread Bruce Momjian

Did this ever get applied/resolved?

---

Robert Haas wrote:
> I've been doing some benchmarking and profiling on the PostgreSQL
> query analyzer, and it seems that (at least for the sorts of queries
> that I typically run) the dominant cost is add_path().  I've been able
> to find two optimizations that seem to help significantly:
> 
> 1. add_path() often calls compare_fuzzy_path_costs() twice on the same
> pair of paths, and when the paths compare equal on one criterion, some
> comparisons are duplicated.  I've refactored this function to return
> the results of both calculations without repeating any floating-point
> arithmetic.
> 
> 2. match_unsorted_outer() adds as many as 5 nested loop joins at a
> time with the same set of pathkeys.  In my tests, it tended to be ~3 -
> cheapest inner, cheapest inner materialized, and cheapest inner index.
>  Since these all have the same pathkeys, clearly only the one with the
> cheapest total cost is in the running for cheapest total cost for that
> set of pathkeys, and likewise for startup cost (and the two may be the
> same).  Yet we compare all of them against the whole pathlist, one
> after the other, including (for the most part) the rather expensive
> pathkey comparison.  I've added a function add_similar_paths() and
> refactored match_unsorted_outer() to use it.
> 
> On a couple of complex (and proprietary) queries with 12+ joins each,
> I measure a planning time improvement of 8-12% with the attached patch
> applied.  It would be interesting to try to replicate this on a
> publicly available data set, but I don't know of a good one to use.
> Suggestions welcome - results of performance testing on your own
> favorite big queries even more welcome.  Simple test harness also
> attached.  I took the approach of dropping caches, starting the
> server, and then running this 5 times each on several queries,
> dropping top and bottom results.
> 
> ...Robert

[ Attachment, skipping... ]

[ Attachment, skipping... ]

> 
> -- 
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Was partitioned table row estimation fixed in 9.0?

2010-02-26 Thread Tom Lane
Josh Berkus  writes:
> There were a flurry of patches around this from Stark and Aster Data, so
> I'm checking if I should be testing on 9.0 or adding this to the TODO list.

> The problem I'm grappling with is that OUTER JOINS against the master in
> a partitioned table (joining to the append node) gives a row estimate
> which does not take into account any CE applicable to the partitioned
> table.  The CE is properly executed when the join is actually executed,
> but with the row estimate way off (orders of magnitude) you often get
> bad plans for other joins in the query.

Would you give a concrete example?  There was some work done in this
area but it's impossible to tell whether it solves your problem with
only that much detail.
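
For readers following along, a hypothetical minimal shape of the complaint,
with made-up names, would be something like:

    CREATE TABLE measurements (dim_id int, ts timestamptz, value int);
    CREATE TABLE measurements_2010_01 (
        CHECK (ts >= '2010-01-01' AND ts < '2010-02-01')
    ) INHERITS (measurements);
    CREATE TABLE measurements_2010_02 (
        CHECK (ts >= '2010-02-01' AND ts < '2010-03-01')
    ) INHERITS (measurements);

    EXPLAIN
    SELECT *
      FROM some_dimension d
      LEFT JOIN measurements m
             ON m.dim_id = d.id
            AND m.ts >= '2010-02-10' AND m.ts < '2010-02-11';

where constraint exclusion prunes all but one partition at execution, but the
estimated row count for the join against the append node reportedly does not
reflect that pruning.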

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Bruce Momjian
Greg Smith wrote:
> Heikki Linnakangas wrote:
> > One such landmine is that the keepalives need to flow from client to
> > server while the WAL records are flowing from server to client. We'll
> > have to crack that problem for synchronous replication too, but I think
> > that alone is a big enough problem to make this 9.1 material.
> >   
> 
> This seems to be the real sticking point then, given that the 
> xmin/PGPROC side on the master seems logically straightforward.  For 
> some reason I thought the sync rep feature had the reverse message flow 
> already going, and that some other sort of limitation just made it 
> impractical to merge into the main codebase this early.  My hope was 
> that just this particular part could get cherry-picked out of there, and 
> that it might even have been thought about already in that context given 
> the known HS keepalive "serious issue".  If there was a solution or 
> partial solution in progress to that floating around, my thought was 
> that just piggybacking this extra xid info on top of it would be easy 
> enough.
> 
> If there's not already a standby to primary communications backchannel 
> implementation available that can be harvested from that work, your 
> suggestion that this may not be feasible at all for 9.0 seems like a 
> more serious concern than I had thought it was going to be.

I suspect the master could connect to the slave to pull an xid.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] visibility maps and heap_prune

2010-02-26 Thread Bruce Momjian
Pavan Deolasee wrote:
> On Fri, Feb 26, 2010 at 8:19 AM, Bruce Momjian  wrote:
> 
> >
> > Whatever happened to this?  It was in the first 9.0 commitfest but was
> > returned with feedback but never updated:
> >
> >
> Though Alex did some useful tests and review, and in fact confirmed that the
> VACUUM time dropped from 16494 msec to 366 msec, I somehow kept waiting for
> Heikki's decision on the general direction of the patch and lost interest in
> between. If we are still interested in this, I can work out a patch and
> submit for next release if not this.

OK, TODO added:

Have single-page pruning update the visibility map
* https://commitfest.postgresql.org/action/patch_view?id=75

Hopefully Heikki can comment on this.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Greg Smith

Heikki Linnakangas wrote:
> One such landmine is that the keepalives need to flow from client to
> server while the WAL records are flowing from server to client. We'll
> have to crack that problem for synchronous replication too, but I think
> that alone is a big enough problem to make this 9.1 material.


This seems to be the real sticking point then, given that the 
xmin/PGPROC side on the master seems logically straightforward.  For 
some reason I thought the sync rep feature had the reverse message flow 
already going, and that some other sort of limitation just made it 
impractical to merge into the main codebase this early.  My hope was 
that just this particular part could get cherry-picked out of there, and 
that it might even have been thought about already in that context given 
the known HS keepalive "serious issue".  If there was a solution or 
partial solution in progress to that floating around, my thought was 
that just piggybacking this extra xid info on top of it would be easy 
enough.


If there's not already a standby to primary communications backchannel 
implementation available that can be harvested from that work, your 
suggestion that this may not be feasible at all for 9.0 seems like a 
more serious concern than I had thought it was going to be.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Correcting Error message

2010-02-26 Thread Michael Glaesemann


On Feb 26, 2010, at 21:03 , Tom Lane wrote:

> Michael Glaesemann  writes:
>> On Feb 26, 2010, at 3:30 , Piyush Newe wrote:
>>> SELECT (footable.*).foofunc FROM footable;
>>> ERROR:  column footable.foofunc does not exist
>
>> Is that calling syntax correct?  I'd think it should be:
>> SELECT foofunc(footable.*, 10) FROM footable;
>
> He's relying on the f(x) === x.f syntactic equivalence, as per the
> comments for ParseFuncOrColumn:
>
>> Note there are two arguments to foofunc (in either version)
>
> ... and the example also relies on the presence of default arguments for
> both functions.  This makes both of them match a single-argument call,
> resulting in an ambiguous-function situation.  The proposed change
> would cause it to actually throw an "ambiguous function" error.

Ah! Learned two new things. Thanks, Tom!

Michael Glaesemann
grzm seespotcode net




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Greg Smith

Bruce Momjian wrote:
> Well, I think the choice is either you delay vacuum on the master for 8
> hours or pile up 8 hours of WAL files on the slave, and delay
> application, and make recovery much slower.  It is not clear to me which
> option a user would prefer because the bloat on the master might be
> permanent.


But if you're running the 8 hour report on the master right now, aren't 
you already exposed to a similar pile of bloat issues while it's going?  
If I have the choice between "sometimes queries will get canceled" vs. 
"sometimes the master will experience the same long-running transaction 
bloat issues as in earlier versions even if the query runs on the 
standby", I feel like leaning toward the latter at least leads to a 
problem people are used to. 

This falls into the principle of least astonishment category to me.  
Testing the final design for how transactions get canceled here led me 
to some really unexpected situations, and the downside for a mistake is 
"your query is lost".  Had I instead discovered that sometimes 
long-running transactions on the standby can ripple back to cause a 
maintenance slowdown on the master, that's not great.  But it would not 
have been so surprising, and it won't result in lost query results. 

I think people will expect that their queries cancel because of things 
like DDL changes.  And the existing knobs allow inserting some slack for 
things like locks taking a little bit of time to acquire sometimes.  
What I don't think people will see coming is that a routine update on an 
unrelated table is going to kill a query they might have been waiting 
hours for the result of, just because that update crossed an autovacuum 
threshold for the other table and introduced a dead row cleanup.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Greg Stark
On Sat, Feb 27, 2010 at 1:53 AM, Greg Smith  wrote:
> Greg Stark wrote:
>>
>> Well you can go sit in the same corner as Simon with your high
>> availability servers.
>>
>> I want my ability to run large batch queries without any performance
>> or reliability impact on the primary server.
>>
>
> Thank you for combining a small personal attack with a selfish commentary
> about how yours is the only valid viewpoint.  Saves me a lot of trouble
> replying to your messages, can just ignore them instead if this is how
> you're going to act.

Eh? That's not what I meant at all. Actually it's kind of the exact
opposite of what I meant.

What I meant was that your description of the "High Availability first
and foremost" is only one possible use case. Simon in the past
expressed the same single-minded focus on that use case. It's a
perfectly valid use case and I would probably agree if we had to
choose just one it would be the most important.

But we don't have to choose just one. There are other valid use cases,
such as load balancing and isolating your large batch queries from
your production systems. I don't want us to throw out all these other
use cases because we treat high availability as the only use case
we're interested in.


-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Correcting Error message

2010-02-26 Thread Tom Lane
Michael Glaesemann  writes:
> On Feb 26, 2010, at 3:30 , Piyush Newe wrote:
>> SELECT (footable.*).foofunc FROM footable;
>> ERROR:  column footable.foofunc does not exist

> Is that calling syntax correct?  I'd think it should be:
> SELECT foofunc(footable.*, 10) FROM footable;

He's relying on the f(x) === x.f syntactic equivalence, as per the
comments for ParseFuncOrColumn:
 
 *  For historical reasons, Postgres tries to treat the notations tab.col
 *  and col(tab) as equivalent: if a single-argument function call has an
 *  argument of complex type and the (unqualified) function name matches
 *  any attribute of the type, we take it as a column projection.  Conversely
 *  a function of a single complex-type argument can be written like a
 *  column reference, allowing functions to act like computed columns.

or see the user-facing documentation near the end of section 34.4.2:
http://www.postgresql.org/docs/8.4/static/xfunc-sql.html#AEN43797

It's still an unnecessarily awkward example though, as you could just
as well write
SELECT footable.foofunc FROM footable;
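
As a quick aside, the equivalence is easier to see with an unambiguous
single-argument function (made-up names, just for illustration):

    CREATE TABLE t (a int, b int);
    CREATE FUNCTION total(t) RETURNS int
        AS $$ SELECT $1.a + $1.b $$ LANGUAGE SQL;
    INSERT INTO t VALUES (1, 2);

    SELECT total(t) FROM t;   -- function-call notation
    SELECT t.total  FROM t;   -- column-projection notation; both return 3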

> Note there are two arguments to foofunc (in either version)

... and the example also relies on the presence of default arguments for
both functions.  This makes both of them match a single-argument call,
resulting in an ambiguous-function situation.  The proposed change
would cause it to actually throw an "ambiguous function" error.

I'm not very sure if the proposed change is an improvement or not.
The given message is 100% correct: there is no such column.  Now
if you were intending a function call, it would be more useful if it
complained about "ambiguous function" instead, but if you really just
typo'd a column name then that could be mighty confusing.  I'm inclined
to think that if you were intending a function call, you'd be most
likely to write it as a function call, especially if you didn't
understand why you were getting an error; and then you'd get the
message that was helpful for that case.  So I'm inclined to leave
the code alone.  It's a judgment call though, without a doubt.

It might help to make a decision if we saw a real-world case where
this happened and the other error message would be more desirable.
The example seems a bit contrived to me; who'd really create such a
pair of functions and then try to invoke them this way?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Greg Smith

Greg Stark wrote:
> Well you can go sit in the same corner as Simon with your high
> availability servers.
>
> I want my ability to run large batch queries without any performance
> or reliability impact on the primary server.


Thank you for combining a small personal attack with a selfish 
commentary about how yours is the only valid viewpoint.  Saves me a lot 
of trouble replying to your messages, can just ignore them instead if 
this is how you're going to act.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Bruce Momjian
Greg Smith wrote:
> You can think of the idea of passing an xmin back from the standby as 
> being like an auto-tuning vacuum_defer_cleanup_age. It's 0 when no 
> standby queries are running, but grows in size to match longer ones. And 
> you don't have to know anything to set it correctly; just toggle 
> on the proposed "feedback xid from the standby" feature and you're safe.

Yes, there is no question there is value in passing the xid back to the
master.  My point is that it is like your blog entry:


http://blog.2ndquadrant.com/en/2010/02/tradeoffs-in-hot-standby-deplo.html

you can't have all the options:

1.  aggressive vacuum on master
2.  fast recovery of slave
3.  no query cancel on slave

Again, pick two.  Passing the xid back to the master gives you #2 and
#3, and that might be good for some people, but not others.  Do we have
any idea which options most administrators would want?  If we get xid
passback to the master, do we keep the other configuration options?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-26 Thread Mark Mielke

On 02/26/2010 07:03 PM, Tom Lane wrote:
> Robert Haas  writes:
>> Basically, what I really want here is some kind of keyword or other
>> syntax that I can stick into a PL/pgsql query that requests a replan
>> on every execution.
>
> Wouldn't it be better if it just did the right thing automatically?


Yes please. :-) Often, we are just users of the application, and we do 
not have the freedom to change it.



> The sort of heuristic I'm envisioning would essentially do "replan every
> time" for some number of executions, and give up only if it noticed that
> it wasn't getting anything better than the generic plan.  So you'd have
> a fixed maximum overhead per session when the custom plan was useless,
> and the Right Thing when it wasn't.


My other comments aside - I think generic plan + specific plan where 
specific plan continues to beat generic plan, will meet the cases that 
really annoyed me, and would make a lot of us very happy... Thanks.
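
In the meantime, the closest workaround is dynamic SQL: a statement issued
through EXECUTE in PL/pgSQL is planned afresh on every call.  A rough sketch,
with invented table and column names:

    CREATE FUNCTION recent_orders(p_customer int)
    RETURNS SETOF orders AS $$
    BEGIN
        -- EXECUTE ... USING replans on each execution, so the plan can
        -- adapt to the actual parameter value.
        RETURN QUERY EXECUTE
            'SELECT * FROM orders WHERE customer_id = $1
               AND ts > now() - interval ''1 day'''
            USING p_customer;
    END;
    $$ LANGUAGE plpgsql;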


Cheers,
mark


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Greg Stark
On Fri, Feb 26, 2010 at 11:56 PM, Greg Smith  wrote:
> This is also the reason why the whole "pause recovery" idea is a fruitless
> path to wander down.  The whole point of this feature is that people have a
> secondary server available for high-availability, *first and foremost*, but
> they'd like it to do something more interesting that leave it idle all the
> time.  The idea that you can hold off on applying standby updates for long
> enough to run seriously long reports is completely at odds with the idea of
> high-availability.

Well you can go sit in the same corner as Simon with your high
availability servers.

I want my ability to run large batch queries without any performance
or reliability impact on the primary server.

You can have one or the other but you can't get both. If you set
max_standby_delay low then you get your high availability server, if
you set it high you get a useful report server.

If you build sync replication, which we don't have today and which will
open another huge can of usability worms when we haven't even finished
bottling the two we've already opened, then you lose the lack of impact
on the primary. Suddenly the queries you run on the slaves cause your
production database to bloat. Plus you have extra network connections
which take resources on your master and have to be kept up at all
times or you lose your slaves.

I think the design constraint of not allowing any upstream data flow
is actually very valuable. Eventually we'll have it for sync
replication but it's much better that we've built things incrementally
and can be sure that nothing really depends on it for basic
functionality. This is what allows us to know that the slave imposes
no reliability impact on the master. It's what allows us to know that
everything will work identically regardless of whether you have a
walreceiver running or are running off archived log files.

Remember I wanted to entirely abstract away the walreceiver and allow
multiple wal communication methods. I think it would make more sense
to use something like Spread to distribute the logs so the master only
has to send them once and as many slaves as you want can pick them up.
The current architecture doesn't scale very well if you want to have
hundreds of slaves for one master.


-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Bruce Momjian
Greg Smith wrote:
> Bruce Momjian wrote:
> > Doesn't the system already adjust the delay based on the length of slave
> > transactions, e.g. max_standby_delay.  It seems there is no need for a
> > user switch --- just max_standby_delay really high.
> >   
> 
> The first issue is that you're basically saying "I don't care about high 
> availability anymore" when you increase max_standby_delay to a high 
> value.  Want to offload an 8 hour long batch report every day to the 
> standby?  You can do it with max_standby_delay=8 hours.  But the day 
> your master crashes 7 hours into that, you're in for a long wait before 
> your standby is available while it replays all the queued up segments.  
> Your 'hot standby' has actually turned into the old form of 'cold 
> standby' just when you need it to be responsive.

Well, I think the choice is either you delay vacuum on the master for 8
hours or pile up 8 hours of WAL files on the slave, and delay
application, and make recovery much slower.  It is not clear to me which
option a user would prefer because the bloat on the master might be
permanent.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Correcting Error message

2010-02-26 Thread Michael Glaesemann


On Feb 26, 2010, at 3:30 , Piyush Newe wrote:


Hi,

Consider following testcase,

CREATE TABLE footable(id int4, name varchar2(10));

CREATE FUNCTION foofunc(a footable, b integer DEFAULT 10)
 RETURNS integer AS $$ SELECT 123; $$ LANGUAGE SQL;

CREATE FUNCTION foofunc(a footable, b numeric DEFAULT 10)
 RETURNS integer AS $$ SELECT 123; $$ LANGUAGE SQL;

SELECT (footable.*).foofunc FROM footable;
ERROR:  column footable.foofunc does not exist
LINE 1: SELECT (footable.*).foofunc FROM footable;
  ^


Is that calling syntax correct?  I'd think it should be:

SELECT foofunc(footable.*, 10) FROM footable;

Note there are two arguments to foofunc (in either version)

test=# SELECT version();
                                                              version
---------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 8.4.2 on i386-apple-darwin9.8.0, compiled by GCC i686-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5493), 32-bit
(1 row)

test=# CREATE TABLE footable(id int4, name varchar(10));
CREATE TABLE
test=# INSERT INTO footable (id, name) VALUES (1, 'foo'), (2, 'bar');
INSERT 0 2
test=# CREATE FUNCTION foofunc(a footable, b integer DEFAULT 10)
postgres-#  RETURNS integer AS $$ SELECT 123; $$ LANGUAGE SQL;
CREATE FUNCTION
test=# CREATE FUNCTION foofunc(a footable, b numeric DEFAULT 10)
postgres-#  RETURNS integer AS $$ SELECT 456; $$ LANGUAGE SQL;
CREATE FUNCTION
test=# SELECT name, foofunc(footable.*, 10) FROM footable;
 name | foofunc
------+---------
 foo  |     123
 bar  |     123
(2 rows)

test=# SELECT name, foofunc(footable.*, 10.0) FROM footable;
 name | foofunc
------+---------
 foo  |     456
 bar  |     456
(2 rows)

In any event, I couldn't get your example to work on Postgres 8.4  
regardless due to the varchar2 type. Which version of Postgres are you  
using?


test=# CREATE TABLE footable(id int4, name varchar2(10));
ERROR:  type "varchar2" does not exist



Michael Glaesemann
grzm seespotcode net




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Alpha4 Available Now

2010-02-26 Thread Michael Glaesemann


On Feb 26, 2010, at 0:55 , Дмитрий Фефелов wrote:


http://developer.postgresql.org/pgdocs/postgres/release-9-0.html

Performance section:


Simplify the forms foo <> true and foo <> false to foo = false and
foo = true during query optimization.


Will it work correctly when foo is NULL?


It shouldn't have any effect: NULL <> anything and NULL = anything is  
NULL


SELECT arg1, arg2,
   (arg1 <> arg2) AS "arg1 <> arg2",
   (arg1 = arg2) AS "(arg1 = arg2)"
  FROM (VALUES (TRUE, TRUE), (TRUE, FALSE),
   (FALSE, TRUE), (FALSE, FALSE),
   (NULL, TRUE), (NULL, FALSE)) AS bools (arg1, arg2)
  ORDER BY arg1, arg2;

  arg1  | arg2 | arg1 <> arg2 | (arg1 = arg2)
--------+------+--------------+---------------
 f      | f    | f            | t
 f      | t    | t            | f
 t      | f    | t            | f
 t      | t    | f            | t
 (null) | f    | (null)       | (null)
 (null) | t    | (null)       | (null)
(6 rows)

Michael Glaesemann
grzm seespotcode net




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Josh Berkus

> That is exactly the core idea I was trying to suggest in my rambling
> message.  Just that small additional bit of information transmitted and
> published to the master via that route, and it's possible to optimize
> this problem in a way not available now.  And it's a way that I believe
> will feel more natural to some users who may not be well served by any
> of the existing tuning possibilities.

Well, if both you and Tom think it would be relatively easy (or at least
easier that continuing to pursue query cancel troubleshooting), then
please start coding it.  It was always a possible approach, we just
collectively thought that query cancel would be easier.

--Josh Berkus


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Greg Smith

Bruce Momjian wrote:

5 Early cleanup of data still visible to the current query's
  snapshot

#5 could be handled by using vacuum_defer_cleanup_age on the master.
Why is vacuum_defer_cleanup_age not listed in postgresql.conf?
  


I noticed that myself and fired off a corrective patch to Simon
yesterday; he said it was intentional, but I'm not sure why that is yet.
We'll sort that out.


You are correct that my suggestion is targeting primarily #5 on this 
list. There are two problems with the possible solutions using that 
parameter though:


-vacuum_defer_cleanup_age is set in a unit that people cannot be
expected to work in--transaction ids. The UI is essentially useless,
and there's no obvious way to make a better one. The best you can do
will still be really fragile.


-If you increase vacuum_defer_cleanup_age, it's active all the time.
You're basically making every single transaction that could be cleaned
up pay for the fact that a query it needs to avoid *might* be running
on the standby.


You can think of the idea of passing an xmin back from the standby as
being like an auto-tuning vacuum_defer_cleanup_age. It's 0 when no
standby queries are running, but grows in size to match longer ones. And
you don't have to know anything to set it correctly; just toggle on the
proposed "feedback xid from the standby" feature and you're safe.
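
To make that concrete, here is a minimal sketch of the feedback path; the
function names are invented for illustration (this is not patch code), and a
real implementation would need proper protocol plumbing and care about when
the value is updated:

/* Standby side: compute the oldest xmin still needed by local queries,
 * to be sent upstream by the walreceiver along with its replies. */
static TransactionId
StandbyFeedbackXmin(void)
{
    /* GetOldestXmin() is the existing procarray routine */
    return GetOldestXmin(true, true);
}

/* Master side: the walsender publishes the reported xmin in its own
 * PGPROC entry, so the master's GetOldestXmin() calculation honors it
 * exactly as it honors any local backend's snapshot. */
static void
WalSndPublishStandbyXmin(TransactionId feedbackXmin)
{
    LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);
    MyProc->xmin = feedbackXmin;
    LWLockRelease(ProcArrayLock);
}

When no queries are running on the standby the reported value simply tracks
the replay position, so the extra bloat on the master falls back to zero,
which is the auto-tuning behavior described above.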


Expecting that anyone will ever set vacuum_defer_cleanup_age correctly 
in the field in its current form is pretty unreasonable I think. Since 
there's no timestamp-based memory of past xid activity, it's difficult 
to convert it to that form instead, and I think something in terms of 
time is what people would like to set this in.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-26 Thread Tom Lane
Robert Haas  writes:
> Basically, what I really want here is some kind of keyword or other
> syntax that I can stick into a PL/pgsql query that requests a replan
> on every execution.

Wouldn't it be better if it just did the right thing automatically?

The sort of heuristic I'm envisioning would essentially do "replan every
time" for some number of executions, and give up only if it noticed that
it wasn't getting anything better than the generic plan.  So you'd have
a fixed maximum overhead per session when the custom plan was useless,
and the Right Thing when it wasn't.
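
For illustration, a sketch of roughly what that heuristic could look like in
the plan cache; the names and the 10% threshold are invented here, not an
actual patch:

#define CUSTOM_PLAN_TRIES 5     /* replan this many times before judging */

typedef struct CachedPlanStats
{
    int     num_custom_plans;   /* parameter-specific plans built so far */
    double  total_custom_cost;  /* sum of their estimated costs */
    double  generic_cost;       /* estimated cost of the generic plan */
} CachedPlanStats;

static bool
choose_custom_plan(CachedPlanStats *stats)
{
    /* Always build a parameter-specific plan for the first few executions. */
    if (stats->num_custom_plans < CUSTOM_PLAN_TRIES)
        return true;

    /* After that, keep replanning only if the custom plans have been
     * noticeably cheaper than the generic plan on average. */
    return (stats->total_custom_cost / stats->num_custom_plans)
            < stats->generic_cost * 0.90;
}

The fixed maximum overhead per session is then CUSTOM_PLAN_TRIES wasted
replans per cached statement when the custom plans turn out to be useless.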

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Greg Smith

Bruce Momjian wrote:

Doesn't the system already adjust the delay based on the length of slave
transactions, e.g. max_standby_delay.  It seems there is no need for a
user switch --- just max_standby_delay really high.
  


The first issue is that you're basically saying "I don't care about high 
availability anymore" when you increase max_standby_delay to a high 
value.  Want to offload an 8 hour long batch report every day to the 
standby?  You can do it with max_standby_delay=8 hours.  But the day 
your master crashes 7 hours into that, you're in for a long wait before 
your standby is available while it replays all the queued up segments.  
Your 'hot standby' has actually turned into the old form of 'cold 
standby' just when you need it to be responsive.


This is also the reason why the whole "pause recovery" idea is a 
fruitless path to wander down.  The whole point of this feature is that 
people have a secondary server available for high-availability, *first 
and foremost*, but they'd like it to do something more interesting that 
leave it idle all the time.  The idea that you can hold off on applying 
standby updates for long enough to run seriously long reports is 
completely at odds with the idea of high-availability.


The second major problem is that the day the report actually takes 8.1 
hours instead, because somebody else ran another report that slowed you 
down a little, you're screwed if that's something you depend on being 
available--it just got canceled only *after* wasting 8 hours of 
reporting resource time.


max_standby_delay is IMHO only useful for allowing non-real-time web-app 
style uses of HS (think "Facebook status updates"), where you say "I'm 
OK giving people slightly out of date info sometimes if it lets me split 
the query load over two systems".  Set max_standby_delay to a few 
seconds or maybe a minute, enough time to service a typical user query, 
make your app tolerate the occasional failed query and only send big 
ones to the master, and you've just scaled up all the small ones.  
Distributed queries with "eventual consistency" on all nodes is where 
many of the web app designs are going, and this feature is a reasonable 
match for that use case.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] A thought on Index Organized Tables

2010-02-26 Thread Gokulakannan Somasundaram
> Wait a minute.  Bingo!  So for unique checks we are already going to the
> index from the heap. So it is the same thing I am doing with the thick
> index. So if we can trust our current unique checks, then we should trust
> the thick index.
>
> Thanks Tom!!! for having this good conversation
>
> I think this broken data type problem / volatile function issue has to be
> resolved for the current index, if we advocate to stop the thick index.
> WOW!!!
>

I think this opens up a lot of opportunities for improvement in Postgres.
a) HOT can now extend its reach beyond page boundaries.
b) If a heap has three indexes and the update is going to affect only one
index, then we need not update the other two indexes.

HOT can have a cleaner, fresher approach. If we have both the normal index
without snapshot and the thick index, Postgres can boast a very rich index
family, with some index structures for update/delete-intensive transactions
(the normal index) and the thick index for select-based transactions.

Marketing folks can easily advertise the product.

Gokul.


Re: [HACKERS] Re: Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Greg Smith

Tom Lane wrote:

I don't see a "substantial additional burden" there.  What I would
imagine is needed is that the slave transmits a single number back
--- its current oldest xmin --- and the walsender process publishes
that number as its transaction xmin in its PGPROC entry on the master.
  


That is exactly the core idea I was trying to suggest in my rambling 
message.  Just that small additional bit of information transmitted and 
published to the master via that route, and it's possible to optimize 
this problem in a way not available now.  And it's a way that I believe 
will feel more natural to some users who may not be well served by any 
of the existing tuning possibilities.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Was partitioned table row estimation fixed in 9.0?

2010-02-26 Thread Josh Berkus
There was a flurry of patches around this from Stark and Aster Data, so
I'm checking if I should be testing on 9.0 or adding this to the TODO list.

The problem I'm grappling with is that OUTER JOINS against the master in
a partitioned table (joining to the append node) gives a row estimate
which does not take into account any CE applicable to the partitioned
table.  The CE is properly executed when the join is actually executed,
but with the row estimate way off (orders of magnitude) you often get
bad plans for other joins in the query.

This seems fixable for 9.1 if it hasn't been done already.

--Josh Berkus

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-26 Thread Robert Haas
On Fri, Feb 26, 2010 at 12:01 PM, Tom Lane  wrote:
> Robert Haas  writes:
>> On Fri, Feb 26, 2010 at 11:27 AM, Tom Lane  wrote:
>>> Also, I think there is a lot of confusion here over two different
>>> issues: generic plan versus parameter-specific plan, and bad planner
>>> estimates leading to a wrong plan choice.  While the latter is certainly
>>> an issue sometimes, there is no reason to believe that it affects
>>> prepared statements worse than non-prepared ones.  So I think that
>>> designing a fix for prepared statements on the assumption that you can't
>>> trust the planner's estimates is solving the wrong problem.
>
>> The two issues seem intimately connected to me.  Of course, estimates
>> can be inaccurate for many reasons, but ONE of those reasons is that
>> the planner is optimizing for an unknown parameter value which has
>> very different distribution from the actually supplied value.
>
> Right, but if the parameter is unknown then its distribution is also
> unknown.

I must be losing my mind.  The distribution of the parameter will be
known at execution time because a specific value will be supplied.  Of
course, it's not known at prepare time.

> In any case that's just nitpicking, because the solution is
> to create a custom plan for the specific value supplied.  Or are you
> suggesting that we should create a way for users to say "here is the
> expected distribution of this parameter", and then try to fold that into
> the planner estimates?  I think the uptake on that would be about nil;
> not to mention that it fails to fix several of the most pressing
> problems, such as LIKE indexing and partition selection, where all the
> distribution information in the world doesn't help.  You need the exact
> value.

No, I'm not suggesting that.

Basically, what I really want here is some kind of keyword or other
syntax that I can stick into a PL/pgsql query that requests a replan
on every execution.  It's easy to identify the cases where this is
needed: they are precisely the cases where my function runs for a
small eternity.  I realize that you can use EXECUTE for this, but as
you pointed out upthread, that's not without its share of pain.
Exposing the same functionality via other interfaces (e.g. PQprepare,
other PLs) would be nice too, but for me personally, PL/pgsql is the
one that keeps biting me in the rear end.

One particularly ugly misadventure in this area had me write a
function which updated a bunch of data in user tables either for one
particular widget, if a widget id was passed in as an argument, or for
all widgets, if NULL was passed in.  I did this by writing ...some big
hairy query... WHERE ... various conditions ... AND (id = $1 OR $1 IS
NULL).  The point was that sometimes the caller knew that only a
particular widget needed to have its stuff recomputed, and sometimes
it didn't know for sure what might need updating so it just wanted to
update everything.  Of course it turned out that this didn't really
work: the same plan was used in both cases, so the version with an id
specified took just as long to run as the generic version.
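
For what it's worth, the extended-query protocol already draws this line for
libpq clients: the unnamed statement used by PQexecParams is planned at Bind
time, when the actual parameter values are visible, while a named statement
created with PQprepare gets the generic plan. A minimal sketch, with an
invented "widgets" table standing in for the real schema and error handling
omitted:

#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=test");
    const char *param[1] = { "rare-status" };
    PGresult   *res;

    /* Unnamed statement: planned during Bind, so the row estimate for
     * status = 'rare-status' reflects the supplied value and an index
     * scan can win for a selective one. */
    res = PQexecParams(conn,
                       "SELECT count(*) FROM widgets WHERE status = $1",
                       1, NULL, param, NULL, NULL, 0);
    PQclear(res);

    /* Named prepared statement: planned once for any possible value,
     * i.e. the generic plan being complained about in this thread. */
    res = PQprepare(conn, "count_by_status",
                    "SELECT count(*) FROM widgets WHERE status = $1",
                    1, NULL);
    PQclear(res);
    res = PQexecPrepared(conn, "count_by_status",
                         1, param, NULL, NULL, 0);
    PQclear(res);

    PQfinish(conn);
    return 0;
}

Something equivalent that is reachable from PL/pgsql without resorting to
EXECUTE is essentially what's being asked for here.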

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Bruce Momjian
Dimitri Fontaine wrote:
> Bruce Momjian  writes:
> > Doesn't the system already adjust the delay based on the length of slave
> > transactions, e.g. max_standby_delay.  It seems there is no need for a
> > user switch --- just max_standby_delay really high.
> 
> Well that GUC looks like it allows you to set a compromise between HA and
> reporting, not to say "do not ever give the priority to the replay while
> I'm running my reports". At least that's how I understand it.

Well, if you set it high, it effectively is that.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Dimitri Fontaine
Bruce Momjian  writes:
> Doesn't the system already adjust the delay based on the length of slave
> transactions, e.g. max_standby_delay.  It seems there is no need for a
> user switch --- just max_standby_delay really high.

Well that GUC looks like it allows you to set a compromise between HA and
reporting, not to say "do not ever give the priority to the replay while
I'm running my reports". At least that's how I understand it.

The feedback loop might get expensive on the master server when running
reporting queries on the slave, unless you can "pause" it explicitly, I
think. I don't see how the system will guess that you're running a
reporting server rather than a HA node, and max_standby_delay is just a
way to tell the standby to please be nice in case of abuse.

Regards,
-- 
dim

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Bruce Momjian
Dimitri Fontaine wrote:
> Tom Lane  writes:
> > Well, as Heikki said, a stop-and-go WAL management approach could deal
> > with that use-case.  What I'm concerned about here is the complexity,
> > reliability, maintainability of trying to interlock WAL application with
> > slave queries in any sort of fine-grained fashion.
> 
> Some admin functions for Hot Standby were removed from the patch to ease
> its integration; there was a pause() and resume() feature.
> 
> I think that offering this explicit control to the user would allow them
> to choose between an HA setup and a reporting setup easily enough: just
> pause the replay when running the reporting, and resume it to get fresh
> data again. If you don't pause, any query can get killed; replay is the
> priority.

Doesn't the system already adjust the delay based on the length of slave
transactions, e.g. max_standby_delay.  It seems there is no need for a
user switch --- just max_standby_delay really high.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] A thought on Index Organized Tables

2010-02-26 Thread Gokulakannan Somasundaram
> No, what generally happens is it fails to find a matching index entry at
> all, because the search algorithm concludes there can be no match based
> on the limited set of comparisons it's done.  Transitivity failures lead
> to searching the wrong subset of the index.
>

Actually Tom, I am not able to understand that completely. But what you are
saying is that in the current scenario, when there is a broken-data-type-based
index, it will return no results, but never wrong results. So the update will
never corrupt the heap data. I take it as you say (please correct me if I am
wrong).
But even returning no results might lead to failures in unique checks. While
inserting, I try to check whether a particular datum is already inserted, and
if the check returns no results, it will go ahead and insert the data assuming
that the unique check has passed, while in reality it has failed.

Wait a minute.  Bingo!  So for unique checks we are already going to the
index from the heap. So it is the same thing I am doing with the thick
index. So if we can trust our current unique checks, then we should trust
the thick index.

Thanks Tom!!! for having this good conversation

I think this broken data type problem / volatile function issue has to be
resolved for the current index, if we advocate to stop the thick index.
WOW!!!


Thanks,
Gokul.


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Tom Lane
Greg Stark  writes:
> On Fri, Feb 26, 2010 at 9:19 PM, Tom Lane  wrote:
>> There's *definitely* not going to be enough information in the WAL
>> stream coming from a master that doesn't think it has HS slaves.
>> We can't afford to record all that extra stuff in installations for
>> which it's just useless overhead.  BTW, has anyone made any attempt
>> to measure the performance hit that the patch in its current form is
>> creating via added WAL entries?

> What extra entries?

Locks, just for starters.  I haven't read enough of the code yet to know
what else Simon added.  In the past it's not been necessary to record
any transient information in WAL, but now we'll have to.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Dimitri Fontaine
Tom Lane  writes:
> Well, as Heikki said, a stop-and-go WAL management approach could deal
> with that use-case.  What I'm concerned about here is the complexity,
> reliability, maintainability of trying to interlock WAL application with
> slave queries in any sort of fine-grained fashion.

Some admin functions for Hot Standby were removed from the patch to ease
its integration; there was a pause() and resume() feature.

I think that offering this explicit control to the user would allow them
to choose between an HA setup and a reporting setup easily enough: just
pause the replay when running the reporting, and resume it to get fresh
data again. If you don't pause, any query can get killed; replay is the
priority.

Now as far as the feedback loop is concerned, I guess the pause()
function would cause the slave to stop publishing any xmin in the
master's procarray so that it's free to vacuum and archive whatever it
wants to.

Should the slave accumulate too much lag, it will resume from the
archive rather than live from the SR link.

How much does that help?

Regards,
-- 
dim

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] A thought on Index Organized Tables

2010-02-26 Thread Tom Lane
Gokulakannan Somasundaram  writes:
>> It does.  The point is that the system is set up to limit the bad
>> consequences.  You might (will) get wrong query answers, but the
>> heap data won't get corrupted.
>> 
> Again Tom, if there is an update based on an index scan, then it takes the
> tupleid and updates the wrong heap data, right?

No, what generally happens is it fails to find a matching index entry at
all, because the search algorithm concludes there can be no match based
on the limited set of comparisons it's done.  Transitivity failures lead
to searching the wrong subset of the index.

The case you're thinking about could arise if VACUUM failed to clean out
an index entry; after some unrelated tuple is inserted at the
just-cleared TID, searches finding that index entry would mistakenly
process the new tuple.  This is why we insist on VACUUM not assuming
very much about the consistency of the index.

It's also a problem for thick indexes, because if you try to do a normal
index search for the index tuple to update its copy of the tuple
xmin/xmax data, you might fail to find it --- but that doesn't mean it's
not there.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Greg Stark
On Fri, Feb 26, 2010 at 9:19 PM, Tom Lane  wrote:
> There's *definitely* not going to be enough information in the WAL
> stream coming from a master that doesn't think it has HS slaves.
> We can't afford to record all that extra stuff in installations for
> which it's just useless overhead.  BTW, has anyone made any attempt
> to measure the performance hit that the patch in its current form is
> creating via added WAL entries?

What extra entries?


-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Bruce Momjian
bruce wrote:
>   4 The standby waiting longer than max_standby_delay to acquire a
...
> #4 can be controlled by max_standby_delay, where a large value only
> delays playback during crash recovery --- again, a rare occurrence.

One interesting feature is that max_standby_delay will _only_ delay if
it encounters a conflict when applying WAL, meaning only if there are
long-running transactions.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Tom Lane
Greg Stark  writes:
> Why shouldn't it have any queries at walreceiver startup? It has any
> xlog segments that were copied from the master and any it can find in
> the archive, it could easily reach a consistent point long before it
> needs to connect to the master. If you really want to protect your
> master from any additional overhead you don't currently need to
> configure a streaming connection at all, you can just use the file
> shipping interface.

There's *definitely* not going to be enough information in the WAL
stream coming from a master that doesn't think it has HS slaves.
We can't afford to record all that extra stuff in installations for
which it's just useless overhead.  BTW, has anyone made any attempt
to measure the performance hit that the patch in its current form is
creating via added WAL entries?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Greg Stark
On Fri, Feb 26, 2010 at 8:30 PM, Tom Lane  wrote:
> How's it going to do that, when it has no queries at the instant
> of startup?
>

Why shouldn't it have any queries at walreceiver startup? It has any
xlog segments that were copied from the master and any it can find in
the archive, it could easily reach a consistent point long before it
needs to connect to the master. If you really want to protect your
master from any additional overhead you don't currently need to
configure a streaming connection at all, you can just use the file
shipping interface.

-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Josh Berkus

> Well, as Heikki said, a stop-and-go WAL management approach could deal
> with that use-case.  What I'm concerned about here is the complexity,
> reliability, maintainability of trying to interlock WAL application with
> slave queries in any sort of fine-grained fashion.

This sounds a bit brute-force, but what about simply having some form of
automatic query retry on the slave?

--Josh Berkus


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] A thought on Index Organized Tables

2010-02-26 Thread Gokulakannan Somasundaram
> It does.  The point is that the system is set up to limit the bad
> consequences.  You might (will) get wrong query answers, but the
> heap data won't get corrupted.
>
>
Tom,
   if this is our goal - "can return wrong query answers, but should not
corrupt the heap data" - and if we make thick indexes capable of that, can I
consider that as a thumbs up from your side? As you may already know, this
will only happen when there is a volatile-function-based index.

Heikki,
  Please let me know, if you feel otherwise.

Thanks,
Gokul.


Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-26 Thread Tom Lane
Mark Mielke  writes:
> Here are parts that can be done "fixed":

> 1) Statement parsing and error checking.
> 2) Identification of tables and columns involved in the query.

The above two are done in the parser, not the planner.

> 3) Query the column statistics for involved columns, to be used in plan 
> cost estimation now and later.

What makes you think that these can be cached any more effectively than
they are now?

> 4) Determine plan constraints under which elements of the plan must be 
> executed a certain way (something like constant folding for a compiler), 
> or for which parameter substitution would not impact the outcome.

Some marginal savings achievable there, perhaps.  But you can't just
push const-folding upstream, because one of the purposes of it is to
const-fold expressions involving parameter values, if the planner is
allowed to assume the parameter values are known.  So while we could
do a pass of const-folding upstream (in the rewriter say), we would
still need a pass at plan time.  Not very clear that this nets out
as a win.

> 5) Identify the elements of the plan that still require plan enumeration 
> and plan selection, to be used in a later part of the pipeline.

[ shrug... ] In practice that will typically be "all of them".  The only
time it wouldn't be is when a parameter is only used in a join
condition, but I think the majority of practical cases have parameters
that are involved in relation scan conditions.  Even if you could cache
the path results for some of the relations involved in a query, the
required size of the cache would be staggering.

> And that all said, I think I am challenging the status quo and ticking 
> people off. So while my intent is to challenge the status quo, it is not 
> to tick people off. So, please let me know if you would like me to 
> continue, or if you have already written this off. :-)

It would be more productive if you spent awhile studying the code as it
actually exists now, and then started to propose rearchitecting.  What
I'm reading here seems to be largely uninformed speculation.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-26 Thread Yeb Havinga

Mark Mielke wrote:

On 02/26/2010 03:11 PM, Yeb Havinga wrote:
Or instead of letting users give the distribution, gather it 
automatically in some plan statistics catalog? I suspect in most 
applications queries stay the same for months and maybe years, so 
after some number of iterations it is possible to have decent call 
statistics / parameter distributions. Maybe the parameter value 
distribution could even be annotated with actual cached plans.


The problem with the last - actual cached plans - is that it implies 
the other aspect I have been suggesting: In order to have a custom 
cached plan, the primary model must be to use custom plans. If 
PREPARE/EXECUTE uses generic plans normally, then the only cached 
plans available will be generic plans.
I should have been clearer: with 'actual cached plans' I meant 'cached 
plans planned with actual parameters' or 'cached custom plans'. It makes 
no sense to annotate points or intervals in a gathered value 
distribution with generic plans.


regards,
Yeb Havinga


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Tom Lane
Greg Stark  writes:
> On Fri, Feb 26, 2010 at 7:16 PM, Tom Lane  wrote:
>> I don't see a "substantial additional burden" there.  What I would
>> imagine is needed is that the slave transmits a single number back
>> --- its current oldest xmin --- and the walsender process publishes
>> that number as its transaction xmin in its PGPROC entry on the master.

> And when we want to support cascading slaves?

So?  Fits right in.  The walsender on the first-level slave is
advertising an xmin from the second-level one, which will be included in
what's passed back up to the master.

> Or when you want to bring up a new slave and it suddenly starts
> advertising a new xmin that's older than the current oldestxmin?

How's it going to do that, when it has no queries at the instant
of startup?

> But in any case if I were running a reporting database I would want it
> to just stop replaying logs for a few hours while my big batch report
> runs, not cause the master to be unable to vacuum any dead records for
> hours. That defeats much of the purpose of running the queries on the
> slave.

Well, as Heikki said, a stop-and-go WAL management approach could deal
with that use-case.  What I'm concerned about here is the complexity,
reliability, maintainability of trying to interlock WAL application with
slave queries in any sort of fine-grained fashion.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-26 Thread Mark Mielke

On 02/26/2010 03:11 PM, Yeb Havinga wrote:

Tom Lane wrote:

Right, but if the parameter is unknown then its distribution is also
unknown.  In any case that's just nitpicking, because the solution is
to create a custom plan for the specific value supplied.  Or are you
suggesting that we should create a way for users to say "here is the
expected distribution of this parameter", and then try to fold that into
the planner estimates?
Or instead of letting users give the distribution, gather it 
automatically in some plan statistics catalog? I suspect in most 
applications queries stay the same for months and maybe years, so 
after some number of iterations it is possible to have decent call 
statistics / parameter distributions. Maybe the parameter value 
distribution could even be annotated with actual cached plans.


The problem with the last - actual cached plans - is that it implies the 
other aspect I have been suggesting: In order to have a custom cached 
plan, the primary model must be to use custom plans. If PREPARE/EXECUTE 
uses generic plans normally, then the only cached plans available will 
be generic plans.


Cheers,
mark


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] C libpq frontend library fetchsize

2010-02-26 Thread Yeb Havinga

Hello list,

I'm wondering if there would be community support for adding use of the
execute message with a rownum > 0 in the C libpq client library, as it
is used by the JDBC driver with setFetchSize.
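
To make the proposal concrete, a straw-man of what the caller side might look
like; PQsetFetchSize() below is an invented name that does not exist in libpq
today, only the protocol-level row limit it would expose does:

/* Hypothetical API sketch.  The v3 protocol's Execute message takes a
 * maximum row count; the backend suspends the portal when it is reached
 * and the client sends another Execute to fetch the next chunk.  That is
 * the mechanism behind the JDBC driver's setFetchSize(). */
PGconn   *conn = PQconnectdb("dbname=test");
PGresult *res;

PQsetFetchSize(conn, 1000);     /* invented call: limit rows per Execute */

res = PQexecParams(conn, "SELECT * FROM bigtable",  /* bigtable is a stand-in */
                   0, NULL, NULL, NULL, NULL, 0);
/* Today this buffers the entire result set in one PGresult; with a fetch
 * size in effect the library could instead hand back 1000-row chunks. */
PQclear(res);
PQfinish(conn);

The awkward part is probably the PGresult contract: callers would need some
way to consume the result in pieces rather than as a single object.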


kind regards,
Yeb Havinga


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-26 Thread Mark Mielke

On 02/26/2010 02:57 PM, Tom Lane wrote:

Mark Mielke  writes:
   

There must be some way to lift the cost of planning out of the plan
enumeration and selection phase, such that only plan enumeration and
selection is run at execute time. In most cases, plan enumeration and
selection, provided that all data required to make these decisions is
all cached in data structures ready to go, should be very fast? Right?
 

Huh?  What exactly do you think the cost of planning is, if not
enumeration and selection?  There isn't very much that's cacheable,
at least not in any sanely-sized cache.
   


I think most operations, including this one, can be broken into a fixed 
portion and a dynamic portion. The PREPARE should concern itself only 
with the fixed portion, and should leave the dynamic portion to EXECUTE. 
At present, the "planning process" is one big blob.


Here are parts that can be done "fixed":

1) Statement parsing and error checking.
2) Identification of tables and columns involved in the query.
3) Query the column statistics for involved columns, to be used in plan 
cost estimation now and later.
4) Determine plan constraints under which elements of the plan must be 
executed a certain way (something like constant folding for a compiler), 
or for which parameter substitution would not impact the outcome.
5) Identify the elements of the plan that still require plan enumeration 
and plan selection, to be used in a later part of the pipeline.


At a minimum, I am suggesting that 1), 2), and 3) should take a chunk 
out of the planning process. I think 4) and 5) are more complex but 
still valuable in terms of extracting the fixed portion out of the 
planning process.


I think an assumption is being made that the planning process is an 
atomic unit that cannot be turned into a pipeline or assembly line. I 
think this assumption was what originally tied PREPARE = PLAN, and 
EXECUTE = RUN. I think this assumption is leading to the conclusion that 
EXECUTE should re-plan. I also expect that this assumption is tightly 
woven into the current implementation and changing it would require some 
amount of re-architecture. :-)



By "not worth it", do you mean development effort or run time?
 

Run time.  The development cost of what you are proposing is negligible:
just rip out the plan cache altogether.  I don't believe it would be a
performance win though.
   


That's not my proposal, though. I'm suspecting you didn't read it. :-)

I'm fine with you saying "too hard and not worth my development effort" 
after you read it. I agree it would be a lot of work.


But if the conclusion is that the current architecture is the best that 
can be had, and the decision is only about when to do a custom re-plan 
or when to use the generic plan, I am putting my opinion out there that 
the generic plan has always been a compromise, and it will always be a 
compromise, and that this discussion exists primarily because the 
compromise is not adequate in many real world scenarios.


And that all said, I think I am challenging the status quo and ticking 
people off. So while my intent is to challenge the status quo, it is not 
to tick people off. So, please let me know if you would like me to 
continue, or if you have already written this off. :-)


Cheers,
mark


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Aidan Van Dyk
* Greg Stark  [100226 15:10]:
> On Fri, Feb 26, 2010 at 7:16 PM, Tom Lane  wrote:
> > I don't see a "substantial additional burden" there.  What I would
> > imagine is needed is that the slave transmits a single number back
> > --- its current oldest xmin --- and the walsender process publishes
> > that number as its transaction xmin in its PGPROC entry on the master.
> 
> And when we want to support cascading slaves?
> 
> Or when you want to bring up a new slave and it suddenly starts
> advertising a new xmin that's older than the current oldestxmin?
> 
> But in any case if I were running a reporting database I would want it
> to just stop replaying logs for a few hours while my big batch report
> runs, not cause the master to be unable to vacuum any dead records for
> hours. That defeats much of the purpose of running the queries on the
> slave.

*I* would be quite happy having the stop-and-go and the closed-loop be
the only 2 modes of operation, and I'd even be quite happy if they were
each limited to a separate method:

1) Running SR - then you are forced to use a closed-loop
2) Running HS from a backup/archive - forced to use stop-n-go


#1 still needs to deal with a slave "disappearing" and not advancing xmin
for a period (TCP timeout)?

I'll note that until SR does synchronous streaming rep (which will
likely require some closed-loop plan to allow the slave to be hot), I
want situation #2, and hopefully the knob to control how long it allows
a "stop" before going again can be a HUP'able knob so I can change it
occasionally without taking the server down...

-- 
Aidan Van Dyk Create like a god,
ai...@highrise.ca   command like a king,
http://www.highrise.ca/   work like a slave.


signature.asc
Description: Digital signature


Re: [HACKERS] Re: Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Joshua D. Drake
On Fri, 2010-02-26 at 12:02 -0800, Josh Berkus wrote:
> > I don't see a "substantial additional burden" there.  What I would
> > imagine is needed is that the slave transmits a single number back
> > --- its current oldest xmin --- and the walsender process publishes
> > that number as its transaction xmin in its PGPROC entry on the master.
> 
> If the main purpose of the slave is long-running queries, though, this
> could cause a lot of bloat on the master.  That's a special case, but a
> reason why we would want to preserve the stop replication functionality.
> 

Do we really think that users using the slave to run long-running
queries is a special case? One of the number one things I can see this
being used for is reporting.

Sincerely,

Joshua D. Drake


-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
Respect is earned, not gained through arbitrary and repetitive use of Mr. or 
Sir.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Tom Lane
Heikki Linnakangas  writes:
> I don't actually understand how tight synchronization on its own would
> solve the problem. What if the connection to the master is lost? Do you
> kill all queries in the standby before reconnecting?

Sure.  So what?  They'd have been killed if they individually lost
connections to the master (or the slave), too.

> [ assorted analysis based on WAL contents ]

The problem is all the interactions that are not reflected (historically
anyway) to WAL.  We already know about btree page reclamation interlocks
and relcache init files.  How many others are there, and how messy and
expensive is it going to be to deal with them?

> If you really think the current approach is unworkable, I'd suggest that
> we fall back to a stop-and-go system, where you either let the recovery
> progress or allow queries to run, but not both at the same time. But
> FWIW I don't think the situation is that grave.

I might be wrong.  I hope for the sake of the project schedule that I am
wrong.  But I'm afraid that we will spend several months beavering away
to try to make the current approach solid and user-friendly, and
eventually conclude that it's a dead end.  It would be prudent to have
a Plan B; and it looks to me like closed-loop synchronization is the
best Plan B.  Putting off all thought about it for the next release
cycle seems like a recipe for a scheduling disaster.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: [BUGS] BUG #5021: ts_parse doesn't recognize email addresses with underscores

2010-02-26 Thread Teodor Sigaev

Oleg, Teodor, can you look at this?  I tried to fix it in wparser_def.c,
but couldn't figure out how.  Thanks.


select distinct token as email
from ts_parse('default', ' first_l...@yahoo.com '   )
where tokid = 4


Patch in attachment; it allows an underscore in the middle of the local part
of an email address and in a host name (similarly to the '-' character).



I'm not sure about backpatching, because it could break existing search 
configuration.

--
Teodor Sigaev   E-mail: teo...@sigaev.ru
   WWW: http://www.sigaev.ru/


underscore.patch.gz
Description: Unix tar archive

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-26 Thread Yeb Havinga

Tom Lane wrote:

Right, but if the parameter is unknown then its distribution is also
unknown.  In any case that's just nitpicking, because the solution is
to create a custom plan for the specific value supplied.  Or are you
suggesting that we should create a way for users to say "here is the
expected distribution of this parameter", and then try to fold that into
the planner estimates?
Or instead of letting users give the distribution, gather it 
automatically in some plan statistics catalog? I suspect in most 
applications queries stay the same for months and maybe years, so after 
some number of iterations it is possible to have decent call statistics 
/ parameter distributions. Maybe the parameter value distribution 
could even be annotated with actual cached plans.


regards,
Yeb Havinga

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Greg Stark
On Fri, Feb 26, 2010 at 7:16 PM, Tom Lane  wrote:
> I don't see a "substantial additional burden" there.  What I would
> imagine is needed is that the slave transmits a single number back
> --- its current oldest xmin --- and the walsender process publishes
> that number as its transaction xmin in its PGPROC entry on the master.

And when we want to support cascading slaves?

Or when you want to bring up a new slave and it suddenly starts
advertising a new xmin that's older than the current oldestxmin?

But in any case if I were running a reporting database I would want it
to just stop replaying logs for a few hours while my big batch report
runs, not cause the master to be unable to vacuum any dead records for
hours. That defeats much of the purpose of running the queries on the
slave.

-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Heikki Linnakangas
Tom Lane wrote:
> Josh Berkus  writes:
>> On 2/26/10 10:53 AM, Tom Lane wrote:
>>> I think that what we are going to have to do before we can ship 9.0
>>> is rip all of that stuff out and replace it with the sort of closed-loop
>>> synchronization Greg Smith is pushing.  It will probably be several
>>> months before everyone is forced to accept that, which is why 9.0 is
>>> not going to ship this year.
> 
>> I don't think that publishing visibility info back to the master ... and
>> subsequently burdening the master substantially for each additional
>> slave ... are what most users want.
> 
> I don't see a "substantial additional burden" there.  What I would
> imagine is needed is that the slave transmits a single number back
> --- its current oldest xmin --- and the walsender process publishes
> that number as its transaction xmin in its PGPROC entry on the master.

The additional burden comes from the old snapshot effect. It makes it
unusable for offloading reporting queries, for example. In general, it
is a very good architectural property that the master is not affected by
what happens in a standby, and a closed-loop synchronization would break
that.

I don't actually understand how tight synchronization on its own would
solve the problem. What if the connection to the master is lost? Do you
kill all queries in the standby before reconnecting?

One way to think about this is to first consider a simple stop-and-go
system. Clearly the database must be consistent at any point in the WAL
sequence, if recovery was stopped and the database started up. So it is
always safe to pause recovery and run a read-only query against the
database as it is at that point in time (this assumes that the index
"cleanup" operations are not required for consistent query results BTW).
After the read-only transaction is finished, you can continue recovery.

The next step up is to relax that so that you allow replay of those WAL
records that are known to not cause trouble to the read-only queries.
For example, heap_insert records are very innocent, they only add rows
with a yet-uncommitted xmin.

Things get more complex when you allow the replay of commit records; all
the known-assigned-xids tracking is related to that, so that
transactions that are not committed when a snapshot is taken in the
standby are still considered uncommitted by that snapshot even after the
commit record is later replayed. If that feels too fragile, there might
be other methods to achieve that. One I once pondered is to not track
all in-progress transactions in shared memory like we do now, but only
OldestXmin. When a backend wants to take a snapshot in the slave, it
memcpy()s clog from OldestXmin to the latest committed XID, and includes
it in the snapshot. The visibility checks use the copy instead of the
actual clog, so they see the situation as it was when the snapshot was
taken. To keep track of the OldestXmin in the slave, the master can emit
that as a WAL record every now and then; it's ok if it lags behind.
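
A rough sketch of that idea, with invented structure and helper names (this
is not code from the patch):

typedef struct StandbySnapshotData
{
    TransactionId oldest_xmin;  /* OldestXmin last received from the master */
    TransactionId latest_xid;   /* newest XID whose commit had been replayed */
    char         *clog_copy;    /* private copy of clog for that XID range */
} StandbySnapshotData;

static bool
XidCommittedInSnapshot(StandbySnapshotData *snap, TransactionId xid)
{
    if (TransactionIdPrecedes(xid, snap->oldest_xmin))
        return TransactionIdDidCommit(xid);    /* old XIDs can no longer change */

    if (TransactionIdFollows(xid, snap->latest_xid))
        return false;       /* not yet committed when the snapshot was taken */

    /* Consult the frozen copy, so commit records replayed after the
     * snapshot was taken cannot change the answer. */
    return ClogCopyXidCommitted(snap->clog_copy, snap->oldest_xmin, xid);
                                /* ClogCopyXidCommitted is an invented helper */
}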

Then there's the WAL record types that remove data that might still be
required by the read-only transactions. This includes vacuum and index
deletion records.


If you really think the current approach is unworkable, I'd suggest that
we fall back to a stop-and-go system, where you either let the recovery
progress or allow queries to run, but not both at the same time. But
FWIW I don't think the situation is that grave.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-26 Thread Alex Hunsaker
On Fri, Feb 26, 2010 at 09:50, Robert Haas  wrote:
> On Fri, Feb 26, 2010 at 1:29 AM, Alex Hunsaker  wrote:
>> Prepared plans + exec plan (new guc/ protocol thing):
>>  Use: not quite sure
>>  Problems: slow because it would replan every time
>>  Solutions: use a prepared plan with the appropriate things not
>> parametrized...?
>>
>> [ aka we already have this, its called dont use a prepared statement ]
>
> The point is sometimes you'd like to replan every time, but not
> reparse every time.  There's no way to do that ATM.

So you save on parse time?  Maybe that's worth it.  I've never
run the numbers nor have I seen them in this thread.  I probably
missed 'em...  My _hunch_ is planning will on average take
significantly longer than parse time (read: in the noise of plan time).
But that's unfounded :)  I can certainly imagine cases where you have
HUGE queries where the parse time is too slow -- won't the plan most of
the time be an order of magnitude slower?  Anyway I'll stop until I get
a chance to do _some_ kind of benchmarking; I'm really quite clueless
here.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Josh Berkus

> I don't see a "substantial additional burden" there.  What I would
> imagine is needed is that the slave transmits a single number back
> --- its current oldest xmin --- and the walsender process publishes
> that number as its transaction xmin in its PGPROC entry on the master.

If the main purpose of the slave is long-running queries, though, this
could cause a lot of bloat on the master.  That's a special case, but a
reason why we would want to preserve the stop replication functionality.

> I don't doubt that this approach will have its own gotchas that we
> find as we get into it.  But it looks soluble.  I have no faith in
> either the correctness or the usability of the approach currently
> being pursued.

So, why not start working on it now, instead of arguing about it?  It'll
be easy to prove the approach once we have some test code.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] A thought on Index Organized Tables

2010-02-26 Thread Greg Stark
On Fri, Feb 26, 2010 at 6:30 PM, Gokulakannan Somasundaram
 wrote:
> http://archives.postgresql.org/pgsql-hackers/2008-03/msg00682.php
> I think, the buy-in became difficult because of the code quality.
>

Er, yeah. That's something we need to work on a bit. You should
probably expect your first few attempts to just be completely wrong.
Tom did give a very brief hint what was wrong with the patch but it
wasn't a point by point howto either.

It looks like your patch was unnecessarily complex.
slot_deform_tuple/heap_deform_tuple should handle missing columns
automatically already so they shouldn't need any modification.

All you need to do is check in heap_form_tuple whether there's a block
of nulls at the end and trim them off. If you can do this in a
cpu-efficient way it would be valuable because this is a very critical
path in the code.
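
For concreteness, the check itself could be as simple as this sketch (an
invented helper, not the submitted patch); heap_form_tuple would then store
the reduced attribute count, and the existing deforming code already treats
the missing trailing attributes as NULLs:

/* Count attributes after dropping trailing NULLs, so the tuple can be
 * built with a shorter natts and a shorter null bitmap. */
static int
natts_without_trailing_nulls(int natts, const bool *isnull)
{
    while (natts > 0 && isnull[natts - 1])
        natts--;
    return natts;
}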

Tom's concerns about benchmarking are interesting but I'm not sure
there's much we can do. We're talking about spending cpu time for
space gains which is usually worthwhile. I guess the best to hope for
is that on any macro benchmark there's no measurable performance
penalty even with a lot of nulls at the end of a very narrow row. Or
that in a microbenchmark there's a negligible penalty, perhaps under
10% for trimming 100+ trailing null columns.

--
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-26 Thread Tom Lane
Mark Mielke  writes:
> There must be some way to lift the cost of planning out of the plan 
> enumeration and selection phase, such that only plan enumeration and 
> selection is run at execute time. In most cases, plan enumeration and 
> selection, provided that all data required to make these decisions is 
> all cached in data structures ready to go, should be very fast? Right? 

Huh?  What exactly do you think the cost of planning is, if not
enumeration and selection?  There isn't very much that's cacheable,
at least not in any sanely-sized cache.

> By "not worth it", do you mean development effort or run time?

Run time.  The development cost of what you are proposing is negligible:
just rip out the plan cache altogether.  I don't believe it would be a
performance win though.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] A thought on Index Organized Tables

2010-02-26 Thread Gokulakannan Somasundaram
> It does.  The point is that the system is set up to limit the bad
> consequences.  You might (will) get wrong query answers, but the
> heap data won't get corrupted.
>
>
Again Tom, if there is an update based on an index scan, then it takes the
tupleid and updates the wrong heap data, right?
The only difference between a normal index and a thick index is that the thick
index reaches back to the same index tuple to update the snapshot. How will
that corrupt the heap data? Did you intend to say that it corrupts the index
data?

Thanks,
Gokul.


Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-26 Thread Mark Mielke

On 02/26/2010 01:59 PM, Tom Lane wrote:

> Mark Mielke  writes:
>> Just to point out that I agree, and as per my original post, I think the
>> only time prepared statements should be re-planned for the statistics
>> case, is after 'analyze' has run. That sounds like a quicker solution,
>> and a much smaller gain. After 'analyze' of an object, invalidate all
>> cached plans for prepared statements that rely on that object and
>> require a re-plan.
>
> Please note that that has been happening since 8.3, which is probably
> why you haven't detected a problem.


Excellent, and sorry for missing the release note on this.

Thanks,
mark



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-26 Thread Mark Mielke

On 02/26/2010 01:59 PM, Tom Lane wrote:

>> ... It's walking around the problem
>> that the idea of a generic plan is just wrong. The only time a generic
>> plan is right, is when the specific plan would result in the same.
>
> I think that's a significant overstatement.  There are a large number
> of cases where a custom plan isn't worth it, even if it doesn't generate
> exactly the same plan.

There must be some way to lift the cost of planning out of the plan 
enumeration and selection phase, such that only plan enumeration and 
selection is run at execute time. In most cases, plan enumeration and 
selection, provided that all data required to make these decisions is 
all cached in data structures ready to go, should be very fast? Right? 
Wrong? If right, my original post suggested that prepare should do the 
parts of planning which are fixed, and not change based on the input 
parameters, while execute should do the dynamic parts that would change 
based on the input parameters.


By "not worth it", do you mean development effort or run time?

For development effort, it would definitely be worth it in the grand 
scheme of things, but perhaps not worth it to specific individuals.


For run time, I'm having trouble seeing the situation where it would 
not be worth it. In the case that the resulting plan is the same (custom 
vs generic) there should be no cost. In the case that the plan is 
different, I think the difference proves that it is worth it. The case 
where it wouldn't be worth it would be if a prepared statement was 
called many times with many different parameters, and each set of 
parameters required a re-plan - but my experience in this regard tells 
me that the current model is to choose a sub-optimal plan, and the 
entire query will run much slower than the planning time, on every 
execute. We wouldn't be having this discussion if generic plans were 
considered adequate. So, I feel that it is worth it in this case as well.


It's the development effort that is the problem. I can't do it, and I 
can't make you do it. If you say "too hard", there isn't anything I can 
do about it. :-)


Cheers,
mark


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Tom Lane
Josh Berkus  writes:
> On 2/26/10 10:53 AM, Tom Lane wrote:
>> I think that what we are going to have to do before we can ship 9.0
>> is rip all of that stuff out and replace it with the sort of closed-loop
>> synchronization Greg Smith is pushing.  It will probably be several
>> months before everyone is forced to accept that, which is why 9.0 is
>> not going to ship this year.

> I don't think that publishing visibility info back to the master ... and
> subsequently burdening the master substantially for each additional
> slave ... are what most users want.

I don't see a "substantial additional burden" there.  What I would
imagine is needed is that the slave transmits a single number back
--- its current oldest xmin --- and the walsender process publishes
that number as its transaction xmin in its PGPROC entry on the master.
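
A minimal sketch of how the master side might look, assuming a hypothetical
feedback handler in the walsender (nothing like this exists in the tree
today; the function name and the message plumbing are invented for
illustration):

#include "postgres.h"
#include "access/transam.h"
#include "storage/lwlock.h"
#include "storage/proc.h"

/*
 * Called when a feedback message carrying the standby's oldest xmin
 * arrives.  Publishing it as the walsender's own xmin means
 * GetOldestXmin() on the master will hold back VACUUM's cleanup horizon
 * for as long as the standby needs the tuples.
 */
static void
WalSndPublishStandbyXmin(TransactionId standby_xmin)
{
	/* ignore obviously bogus feedback */
	if (!TransactionIdIsValid(standby_xmin))
		return;

	LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);
	MyProc->xmin = standby_xmin;
	LWLockRelease(ProcArrayLock);
}

(A real version would also have to guard against the reported xmin moving
backwards or wrapping around, which is glossed over here.)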

I don't doubt that this approach will have its own gotchas that we
find as we get into it.  But it looks soluble.  I have no faith in
either the correctness or the usability of the approach currently
being pursued.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Assertion failure twophase.c (testing HS/SR)

2010-02-26 Thread Heikki Linnakangas
Erik Rijkers wrote:
> 9.0devel (cvs yesterday) primary+server, with this patch:
>   extend_format_of_recovery_info_funcs_v2.patch
>   ( http://archives.postgresql.org/pgsql-hackers/2010-02/msg02116.php )
> 
> A large (500 GB) restore left to run overnight, gave the below crash. The 
> standby was restarted,
> and seems to be catching up OK.
> 
> LOG:  entering standby mode
> LOG:  redo starts at 0/120
> LOG:  consistent recovery state reached at 0/200
> LOG:  database system is ready to accept read only connections
> TRAP: FailedAssertion("!(((xid) != ((TransactionId) 0)))", File: 
> "twophase.c", Line: 1201)
> LOG:  startup process (PID 21044) was terminated by signal 6: Aborted
> LOG:  terminating any other active server processes
> LOG:  database system was interrupted while in recovery at log time 
> 2010-02-26 06:42:14 CET
> HINT:  If this has occurred more than once some data might be corrupted and 
> you might need to
> choose an earlier recovery target.
> cp: cannot stat 
> `/var/data1/pg_stuff/dump/hotslave/replication_archive/00010015003F':
> No such file or directory
> LOG:  entering standby mode
> LOG:  redo starts at 15/3400E828
> LOG:  consistent recovery state reached at 15/6D6D9FD8
> LOG:  database system is ready to accept read only connections

Aha, there seems to be a typo in KnownAssignedXidsRemoveMany(), see
attached patch.

But I wonder how an invalid XID found its way to that function, with
keepPreparedXacts==true? I don't think that should happen; it's called
from ExpireOldKnownAssignedTransactionIds(), which is called from
ProcArrayApplyRecoveryInfo() and at replay of checkpoint records in
xlog_redo. So either the oldestRunningXid value in a running-xacts
record or the nextXid field in a checkpoint record is invalid, and
neither should ever be.

We need to track that down, but in any case we should add an assertion
to ExpireOldKnownAssignedTransactionIds() so that we still catch such cases
after fixing KnownAssignedXidsRemoveMany().

> (btw, I think I have seen this exact same one (File "twophase.c", Line: 1201) 
> a few times before,
> without reporting it here, so it might have no connection to this particular 
> patch. Sorry to be
> vague about that)

Thanks for the report, an assertion failure is always a bug.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c
index 12de877..4691c51 100644
--- a/src/backend/storage/ipc/procarray.c
+++ b/src/backend/storage/ipc/procarray.c
@@ -2317,6 +2317,7 @@ ExpireAllKnownAssignedTransactionIds(void)
 void
 ExpireOldKnownAssignedTransactionIds(TransactionId xid)
 {
+	Assert(TransactionIdIsValid(xid));
 	LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);
 	KnownAssignedXidsRemoveMany(xid, true);
 	LWLockRelease(ProcArrayLock);
@@ -2512,7 +2513,7 @@ KnownAssignedXidsRemoveMany(TransactionId xid, bool keepPreparedXacts)
 
 		if (!TransactionIdIsValid(xid) || TransactionIdPrecedes(removeXid, xid))
 		{
-			if (keepPreparedXacts && StandbyTransactionIdIsPrepared(xid))
+			if (keepPreparedXacts && StandbyTransactionIdIsPrepared(removeXid))
 continue;
 			else
 			{

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Josh Berkus
On 2/26/10 10:53 AM, Tom Lane wrote:
> I think that what we are going to have to do before we can ship 9.0
> is rip all of that stuff out and replace it with the sort of closed-loop
> synchronization Greg Smith is pushing.  It will probably be several
> months before everyone is forced to accept that, which is why 9.0 is
> not going to ship this year.

I don't think that publishing visibility info back to the master ... and
subsequently burdening the master substantially for each additional
slave ... are what most users want.  Certainly for use cases like NTT's,
it is, but not for most of our users.

In fact, I seem to remember specifically discussing the approach of
trying to publish snapshots back to the master, and rejecting it on this
list during the development of SR.

Does anyone know how Oracle solves these issues?  Does their structure
(separate rollback log) make it easier for them?

--Josh Berkus

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] A thought on Index Organized Tables

2010-02-26 Thread Tom Lane
Gokulakannan Somasundaram  writes:
> But Tom, can you please explain me why that broken ordering example doesn't
> affect the current index scans.

It does.  The point is that the system is set up to limit the bad
consequences.  You might (will) get wrong query answers, but the
heap data won't get corrupted.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Josh Berkus
On 2/26/10 6:57 AM, Richard Huxton wrote:
> 
> Can we not wait to cancel the transaction until *any* new lock is
> attempted though? That should protect all the single-statement
> long-running transactions that are already underway. Aggregates etc.

I like this approach.  Is it fragile in some non-obvious way?

--Josh Berkus

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Bruce Momjian
Tom Lane wrote:
> I'm going to make an unvarnished assertion here.  I believe that the
> notion of synchronizing the WAL stream against slave queries is
> fundamentally wrong and we will never be able to make it work.
> The information needed isn't available in the log stream and can't be
> made available without very large additions (and consequent performance
> penalties).  As we start getting actual beta testing we are going to
> uncover all sorts of missed cases that are not going to be fixable
> without piling additional ugly kluges on top of the ones Simon has
> already crammed into the system.  Performance and reliability will both
> suffer.
> 
> I think that what we are going to have to do before we can ship 9.0
> is rip all of that stuff out and replace it with the sort of closed-loop
> synchronization Greg Smith is pushing.  It will probably be several
> months before everyone is forced to accept that, which is why 9.0 is
> not going to ship this year.

Wow, can I have some varnish with that.  :-O

You are right that we need to go down the road a bit before we know what
we need for 9.0 or 9.1.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-26 Thread Tom Lane
Mark Mielke  writes:
> Just to point out that I agree, and as per my original post, I think the 
> only time prepared statements should be re-planned for the statistics 
> case, is after 'analyze' has run. That sounds like a quicker solution, 
> and a much smaller gain. After 'analyze' of an object, invalidate all 
> cached plans for prepared statements that rely on that object and 
> require a re-plan.

Please note that that has been happening since 8.3, which is probably
why you haven't detected a problem.

> ... It's walking around the problem 
> that the idea of a generic plan is just wrong. The only time a generic 
> plan is right, is when the specific plan would result in the same.

I think that's a significant overstatement.  There are a large number
of cases where a custom plan isn't worth it, even if it doesn't generate
exactly the same plan.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] A thought on Index Organized Tables

2010-02-26 Thread Gokulakannan Somasundaram
> IIRC, what was being talked about was shoehorning some hint bits into
> the line pointers by assuming that size and offset are multiples of 4.
> I'm not thrilled with having mutable status bits there for reliability
> reasons, but it could be done without breaking a lot of existing code.
> What I was reacting to above was a suggestion that we could delete the
> itempointer size field altogether, which seems unworkable for the
> reasons I mentioned.
>

I think then we can pursue using an IndexTuple structure similar to
HeapTuple (as you have suggested in an earlier update). This would involve
(I believe):
a) Making the current IndexTuple into IndexTupleHeader
b) Creating a new structure called IndexTuple which will store the size and
have a pointer to IndexTupleHeader. A rough sketch is below.
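
Purely illustrative declarations -- the field names mirror today's
IndexTupleData, but this layout is hypothetical and not what the
PostgreSQL headers contain:

#include "postgres.h"
#include "storage/itemptr.h"

typedef struct IndexTupleHeaderData
{
	ItemPointerData t_tid;		/* reference TID to heap tuple */
	unsigned short	t_info;		/* size mask, has-nulls and var-width bits */
	/* key data follows on disk */
} IndexTupleHeaderData;

typedef IndexTupleHeaderData *IndexTupleHeader;

typedef struct IndexTupleData
{
	uint32			 t_len;		/* length of the on-page tuple */
	IndexTupleHeader t_data;	/* -> header and key data */
} IndexTupleData;

This mirrors the HeapTupleData/HeapTupleHeaderData split, so in-memory code
passes around the small wrapper while the on-page layout keeps its current
form.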

But Tom, can you please explain to me why that broken ordering example doesn't
affect the current index scans?

Thanks,
Gokul.


Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-26 Thread Mark Mielke

On 02/26/2010 11:27 AM, Tom Lane wrote:

> Also, I think there is a lot of confusion here over two different
> issues: generic plan versus parameter-specific plan, and bad planner
> estimates leading to a wrong plan choice.  While the latter is certainly
> an issue sometimes, there is no reason to believe that it affects
> prepared statements worse than non-prepared ones.  So I think that
> designing a fix for prepared statements on the assumption that you can't
> trust the planner's estimates is solving the wrong problem.


Just to point out that I agree, and as per my original post, I think the 
only time prepared statements should be re-planned for the statistics 
case, is after 'analyze' has run. That sounds like a quicker solution, 
and a much smaller gain. After 'analyze' of an object, invalidate all 
cached plans for prepared statements that rely on that object and 
require a re-plan. I doubt this will help me or many others very often. 
It's something that should be done some day, but I don't recall ever 
concluding that a performance problem I was experiencing was related to 
using prepared statements too long. Also, the client is able to figure 
this out. The client can choose to free prepared statements after 1 
minute or 1000 calls. It's not really a problem.


It also has nothing to do with trust of the planner's estimates. Given 
the same criteria, the planner should come up with the same best plan 
most or all of the time. Trial and error planning, with the exception of 
hugely complicated plans that cannot be produced in a finite time frame, 
does not appeal to me at all. I do trust the planner's estimates.


The issue of specific parameters is the one I think most of us would 
benefit from, and I think the most effective benefit is to not create 
generic plans. I would prefer a prepare with specific plan and re-plan 
when the specific plan does not apply, over generic plan, every time. 
This has nothing to do with "time to prepare" or a ratio of "time to 
prepare" vs "time to execute", or plans that are expected to take some 
time to execute. The fact that I can run a PREPARE/EXECUTE, and SELECT, 
and with only one invocation see a difference of over 100X shows that 
generic plans is just not the right approach. It works according to 
spec, but it is not practical under the current model.


Generic plans are the problem. My post was to bring attention to this, as 
I see most comments focusing on an assumption that generic plans provide 
value, and specific plans should only be used when generic plans are 
expected to take a while to execute. It's walking around the problem 
that the idea of a generic plan is just wrong. The only time a generic 
plan is right, is when the specific plan would result in the same.


Cheers,
mark


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Tom Lane
Greg Stark  writes:
> In the model you describe any long-lived queries on the slave cause
> tables in the master to bloat with dead records.

Yup, same as they would do on the master.

> I think this model is on the roadmap but it's not appropriate for
> everyone and I think one of the benefits of having delayed it is that
> it forces us to get the independent model right before throwing in
> extra complications. It would be too easy to rely on the slave
> feedback as an answer for hard questions about usability if we had it
> and just ignore the question of what to do when it's not the right
> solution for the user.

I'm going to make an unvarnished assertion here.  I believe that the
notion of synchronizing the WAL stream against slave queries is
fundamentally wrong and we will never be able to make it work.
The information needed isn't available in the log stream and can't be
made available without very large additions (and consequent performance
penalties).  As we start getting actual beta testing we are going to
uncover all sorts of missed cases that are not going to be fixable
without piling additional ugly kluges on top of the ones Simon has
already crammed into the system.  Performance and reliability will both
suffer.

I think that what we are going to have to do before we can ship 9.0
is rip all of that stuff out and replace it with the sort of closed-loop
synchronization Greg Smith is pushing.  It will probably be several
months before everyone is forced to accept that, which is why 9.0 is
not going to ship this year.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Bruce Momjian
Heikki Linnakangas wrote:
> > How to handle situations where the standby goes away for a while,
> > such as a network outage, so that it doesn't block the master from ever
> > cleaning up dead tuples is a concern.
> 
> Yeah, that's another issue that needs to be dealt with. You'd probably
> need some kind of a configurable escape valve in the master, to let it
> ignore a standby's snapshot once it gets too old.
> 
> > But I do know that the current Hot Standby implementation is going to be
> > frustrating to configure correctly for people.
> 
> Perhaps others who are not as deep into the code as I am will have a
> better view on this, but I seriously don't think that's such a big
> issue. I think the max_standby_delay setting is quite intuitive and easy
> to explain. Sure, it would better if there was no tradeoff between
> killing queries and stalling recovery, but I don't think it'll be that
> hard to understand the tradeoff.

Let's look at the five documented cases of query conflict (from our manual):

1 Access Exclusive Locks from primary node, including both explicit
  LOCK commands and various DDL actions 

2 Dropping tablespaces on the primary while standby queries are
  using those tablespaces for temporary work files (work_mem
  overflow) 

3 Dropping databases on the primary while users are connected to
  that database on the standby.  

4 The standby waiting longer than max_standby_delay to acquire a
  buffer cleanup lock.  

5 Early cleanup of data still visible to the current query's
  snapshot

We might have a solution to #1 by only cancelling queries that try to
take locks.

#2 and #3 seem like rare occurrences.

#4 can be controlled by max_standby_delay, where a large value only
delays playback during crash recovery --- again, a rare occurrence.

#5 could be handled by using vacuum_defer_cleanup_age on the master.

Why is vacuum_defer_cleanup_age not listed in postgresql.conf?

In summary, I think passing snapshots to the master is not something
possible for 9.0, and ideally we will never need to add that feature.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] A thought on Index Organized Tables

2010-02-26 Thread Gokulakannan Somasundaram
Missed the group..

On Sat, Feb 27, 2010 at 12:00 AM, Gokulakannan Somasundaram <
gokul...@gmail.com> wrote:

>
>> I definitely think thick indexes were too ambitious of a target for a
>> first time patch. Sequential index scans is very ambitious itself
>> despite being significantly simpler (if you have a solution which
>> works -- we haven't had one thus far).
>>
>
> The point, i am trying to bring out is that i want to work with one of the
> senior persons of the community to do my first few patches.
>
>
>>
>> Can you point me to the thread on "trailing nulls"? I think trimming
>> off any null columns from the ends of tuples when forming them should
>> be a cheap and easy optimization which just nobody's gotten around to
>> doing. If that's what you mean then I'm surprised you had any trouble
>> getting buy-in for it.
>>
> http://archives.postgresql.org/pgsql-hackers/2008-03/msg00682.php
> I think, the buy-in became difficult because of the code quality.
>
>
> Thanks,
> Gokul.
>
>


Re: [HACKERS] ProcSignalSlot vs. PGPROC

2010-02-26 Thread Tom Lane
Markus Wanner  writes:
> do I understand correctly that a BackendId is just an index into the 
> ProcSignalSlots array and not (necessarily) the same as the index into 
> ProcArrayStruct's procs?

> If yes, could these be synchronized? Why is ProcSignalSlot not part of 
> PGPROC at all? Both are shared memory structures per backend (plus 
> NUM_AUXILIARY_PROCS). What am I missing?

It's easier to keep those modules separate.  I believe also that there
are (or could be) processes that have entries in one array but not the
other.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Avoiding bad prepared-statement plans.

2010-02-26 Thread Tom Lane
Robert Haas  writes:
> On Fri, Feb 26, 2010 at 10:07 AM, Tom Lane  wrote:
>> I think this is basically a planner problem and should be fixed in the
>> planner, not by expecting users to make significant changes in
>> application logic in order to create an indirect effect.

> I would agree if I thought that were possible, but I'm skeptical about
> your proposed solution.

Fair enough --- maybe it will work well enough, or maybe it won't.
But the same can be said of every other proposal that's been made.
I'm in favor of trying the simpler approaches first.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ecpg tests broken by pgindent run

2010-02-26 Thread Bruce Momjian
Andrew Dunstan wrote:
> 
> 
> Bruce Momjian wrote:
>    
>  
> >>> Don't look further, interfaces/ecpg/include/sqlda.h has changed
> >>> by the pgindent run.
> >>>   
> >> Yea, it is that, and sqltypes.h and one other file I am trying to find
> >> now.
> >> 
> >
> > I have reverted changes to three include files who's output appears in
> > the regression tests.   I asssume this is new 9.0 behavior because I
> > didn't have this issue before.   I also updated the pgindent script to
> > prevent these from being changed in the future.
> >
> >   
> 
> Should not the instructions include doing a full set of regression tests 
> to ensure that nothing breaks? The new installcheck-world target should 
> help there.

Wow, I didn't know that was available.  I have updated the pgindent
README.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com
  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do
  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Greg Stark
On Fri, Feb 26, 2010 at 4:43 PM, Richard Huxton  wrote:
> Let's see if I've got the concepts clear here, and hopefully my thinking it
> through will help others reading the archives.
>
> There are two queues:

I don't see two queues. I only see the one queue of operations which
have been executed on the master but not replayed yet on the slave.
Every write operation on the master enqueues an operation to it and
every operation replayed on the slave dequeues from it. Only a subset
of operations create conflicts with concurrent transactions on the
slave, namely vacuums and a few similar operations (HOT pruning and
btree index pruning).

There's no question we need to make sure users have good tools to
monitor this queue and are aware of these tools. You can query each
slave for its currently replayed log position and hopefully you can
find out how long it's been delayed (ie, if it's looking at a log
record and waiting for a conflict to clear how long ago that log
record was generated). You can also find out what the log position is
on the master.




-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hot Standby query cancellation and Streaming Replication integration

2010-02-26 Thread Robert Haas
On Fri, Feb 26, 2010 at 10:21 AM, Heikki Linnakangas
 wrote:
> Richard Huxton wrote:
>> Can we not wait to cancel the transaction until *any* new lock is
>> attempted though? That should protect all the single-statement
>> long-running transactions that are already underway. Aggregates etc.
>
> Hmm, that's an interesting thought. You'll still need to somehow tell
> the victim backend "you have to fail if you try to acquire any more
> locks", but a single per-backend flag in the procarray would suffice.
>
> You could also clear the flag whenever you free the last snapshot in the
> transaction (ie. between each query in read committed mode).

Wow, that seems like it would help a lot.  Although I'm not 100% sure
I follow all the details of how this works.

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

