Re: [HACKERS] switch UNLOGGED to LOGGED

2011-04-09 Thread Leonardo Francalanci
 I'm pretty sure we wouldn't accept a patch for a feature that would
 only work with wal_level=minimal, but it might be a useful starting
 point for someone else to keep hacking on.


I understand.

Reading your post at 
http://archives.postgresql.org/pgsql-hackers/2011-01/msg00315.php
I thought I got the part:

what happens if we *crash* without writing an abort record?  It
seems like that could leave a stray file around on a standby,
because the current code only cleans things up on the standby
at the start of recovery


But re-reading it, I don't understand: what's the difference between
creating a new regular table and crashing before emitting the abort
record, and converting an unlogged table to logged and crashing before
emitting the abort record? How do the standby servers handle a 
CREATE TABLE followed by a ROLLBACK if the master crashes
before writing the abort record? I thought that too would leave a
stray file around on a standby.



Leonardo


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_upgrade bug found!

2011-04-09 Thread Bruce Momjian
bricklen wrote:
 On Fri, Apr 8, 2011 at 10:01 PM, bricklen brick...@gmail.com wrote:
  Update on the status of the steps we took, which were:
  - test on a hot standby by bringing it live, running the script,
  determining the missing clog files, copying them into the live (hot
  standby) pg_clog dir
 
  Now, on the master, copied the same old clog files into the production
  *master*, ran vacuumdb -a -v -F. The step I should have taken on the
  master before the vacuumdb -F would have been to run the
  http://wiki.postgresql.org/wiki/20110408pg_upgrade_fix script to see
  if I was missing any clog files on the master.
  That vacuum freeze step pointed out a clog file, I copied that into
  the master pg_clog dir, ran the aforementioned script. It didn't fail
  on any of the clog files this time, so now I am rerunning the vacuum
  freeze command and hoping like hell it works!
 
  If the current run of the vacuum freeze fails, I'll report back.
 
  Thanks again for everyone's help.
 
 
 The vacuumdb -a -v -F completed successfully this time.

YEA!

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] pg_upgrade bug found!

2011-04-09 Thread Bruce Momjian
Alvaro Herrera wrote:
 
 Why is it important to have the original pg_clog files around?  Since
 the transactions in question are below the freeze horizon, surely the
 tuples that involve those transaction have all been visited by vacuum
 and thus removed if they were leftover from aborted transactions or
 deleted, no?  So you could just fill those files with the 0x55 pattern
 (signalling all transactions are committed) and the net result should
 be the same.  No?
 
 Forgive me if I'm missing something.  I haven't been following this
 thread and I'm more than a little tired (but wanted to shoot this today
 because I'm gonna be able to, until Monday).

Well, in most cases vacuum (or SELECT?) is going to set those xids as
committed on the tuple, but if there have been few deletes in the toast
table, it is possible vacuum did not run.  I think the fact that we only
have three reported query-error cases is because in most cases vacuum is
already taking care of this as part of space reuse.

relfrozenxid is not going to cause a freeze to run, and therefore those
xids, even though marked as committed, still remain on the tuple, so we
need this script to be run.

In fact, if the tuple is marked as committed, do we even bother to mark
the xids as fixed via vacuum freeze?  Seems we don't have to.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] pgindent

2011-04-09 Thread Bruce Momjian
Robert Haas wrote:
 On Fri, Apr 8, 2011 at 11:21 PM, Andrew Dunstan and...@dunslane.net wrote:
  We've got more work to do before that works, so I have committed what we
  have. Some symbols have disappeared, some because of code changes and some
  probably because Cygwin has changed the way it does objdump. This is
  probably harmless, but whoever does the pgindent run needs to look at the
  results carefully before committing them (as always).
 
 Well, that's normally Bruce.  Bruce?

I can run it tonight, in 15 hours.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] pg_upgrade bug found!

2011-04-09 Thread Bruce Momjian
Bruce Momjian wrote:
 Alvaro Herrera wrote:
  
  Why is it important to have the original pg_clog files around?  Since
  the transactions in question are below the freeze horizon, surely the
  tuples that involve those transaction have all been visited by vacuum
  and thus removed if they were leftover from aborted transactions or
  deleted, no?  So you could just fill those files with the 0x55 pattern
  (signalling all transactions are committed) and the net result should
  be the same.  No?
  
  Forgive me if I'm missing something.  I haven't been following this
  thread and I'm more than a little tired (but wanted to shoot this today
  because I'm gonna be able to, until Monday).

To answer your other question, it is true we _probably_ could assume all
the rows were committed, except that again, vacuum might not have run
and the pages might not be full so single-page cleanup wasn't done
either.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] really lazy vacuums?

2011-04-09 Thread Cédric Villemain
2011/3/24 Jim Nasby j...@nasby.net:
 On Mar 22, 2011, at 11:46 AM, Cédric Villemain wrote:
 2011/3/22 Greg Stark gsst...@mit.edu:
 On Mon, Mar 21, 2011 at 6:08 PM, Jim Nasby j...@nasby.net wrote:
 Has anyone looked at the overhead of measuring how long IO requests to the 
 kernel take? If we did that not only could we get an idea of what our IO 
 workload looked like, we could also figure out whether a block came out of 
 cache or not. That information could potentially be useful to the planner, 
 but even if the database couldn't use that knowledge itself it would be a 
 damn useful statistic to have... IMHO, far more useful than our current 
 hit rate statistics.


 I've done this -- actually better, I used mincore to actually check
 whether the block was in cache before issuing the read -- but it turns
 out you can't get what you're looking for this way.

 The Linux fincore() syscall never got into the kernel; maybe something
 to revive...

 Is there an equivalent in other OSes? Could we use time measurement as an 
 alternative if not?

I made a quick test with time measurement, and quickly found the main
bottleneck with this strategy: how to know whether a block has been
fetched from OS memory, SAN memory, a fast RAID, a slow SATA disk...
I just added a gettimeofday() around the read() call, and adjusted the
threshold (in ms or µs) used to separate disk fetches from memory
fetches. By manually adjusting this duration I get good results, but I
wonder how it could be adjusted automatically on other systems; also,
the method used for measuring may itself affect the measurement.

Maybe using it to just track 'slow' access, and define 'slow access' in a GUC...



 It turns out when you do this you see one block being read from disk
 followed by n blocks that all appear to be cache hits. Because they've
 been prefetched by the kernel.

 I did the same, and I now believe that it is not very important to have
 the very exact numbers.
 Prefetched blocks *are* in memory when we request them; the first read
 access reads more than one block because the cost is the same.

 Yeah... there are places in the planner where we make guesses as to the 
 likelihood of something being in-cache. If we could actually track complete 
 hit rate over time (PG buffers + FS cache), then we wouldn't have to guess at 
 things anymore.

 And having this info in pg_stats would be extremely valuable.

 What you end up with is actually something like the number of iops
 which is also an interesting measure but not really what you were
 looking for.

 My getrusage patch, which I should still dig out though it's rather
 too late to be committing now unless someone tells me otherwise, would
 tell you how much i/o a plan node actually did. But you won't know
 which blocks did the i/o since I was only tracking totals for the plan
 node. That's probably what you're looking for here.

 Please show us the patch :)
 --
 Jim C. Nasby, Database Architect                   j...@nasby.net
 512.569.9461 (cell)                         http://jim.nasby.net






-- 
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] really lazy vacuums?

2011-04-09 Thread Andres Freund
On Thursday, March 24, 2011 06:32:10 PM Jim Nasby wrote:
 Is there an equivalent in other OSes?
Some have mincore which can be used for that in combination with mmap.

Andres



Re: [HACKERS] pg_upgrade bug found!

2011-04-09 Thread Aidan Van Dyk
On Sat, Apr 9, 2011 at 7:03 AM, Bruce Momjian br...@momjian.us wrote:
 Bruce Momjian wrote:
 Alvaro Herrera wrote:
 
  Why is it important to have the original pg_clog files around?  Since
  the transactions in question are below the freeze horizon, surely the
  tuples that involve those transaction have all been visited by vacuum
  and thus removed if they were leftover from aborted transactions or
  deleted, no?  So you could just fill those files with the 0x55 pattern
  (signalling all transactions are committed) and the net result should
  be the same.  No?
 
  Forgive me if I'm missing something.  I haven't been following this
  thread and I'm more than a little tired (but wanted to shoot this today
  because I'm gonna be able to, until Monday).

 To answer your other question, it is true we _probably_ could assume all
 the rows were committed, except that again, vacuum might not have run
 and the pages might not be full so single-page cleanup wasn't done
 either.

OK, continuing the thought of just marking all the old clog files as
all-committed...

Since it only affects toast tables, the only time the system (with
normal queries) would look up a particular toast tuple is when the
tuple referring to it has been committed, right?  So forcing all
transactions committed for the older clog segments might mean a scan
on a *toast* heap could return tuples as committed when they had
actually been aborted, but the real table heap would never refer to
those, right?

a.


-- 
Aidan Van Dyk                                             Create like a god,
ai...@highrise.ca                                       command like a king,
http://www.highrise.ca/                                   work like a slave.



Re: [HACKERS] [BUGS] Failed assert ((data - start) == data_size) in heaptuple.c

2011-04-09 Thread Brendan Jurd
On 9 April 2011 00:41, Alvaro Herrera alvhe...@commandprompt.com wrote:
 Excerpts from Brendan Jurd's message of vie abr 08 06:00:22 -0300 2011:
 Memtest didn't report any errors.  I intend to try swapping out the
 RAM tomorrow, but in the meantime we got a *different* assertion
 failure today.  The fact that we are tripping over various different
 assertions seems to lend further weight to the flaky hardware
 hypothesis.

 TRAP: FailedAssertion(!(((lpp)->lp_flags == 1)), File: "heapam.c", Line: 727)

 Yep.


I swapped the RAM with another machine, and after a few hours running
the other machine popped a segfault.  The faulty RAM diagnosis is now
official, so I won't be bothering you folks about this any further.

Cheers,
BJ



[HACKERS] Evaluation of secondary sort key.

2011-04-09 Thread Jesper Krogh

This seems like a place where there is room for improvement.

2011-04-09 15:18:08.016 testdb=# select id from test1 where id < 3 order 
by id;

 id
----
  1
  2
(2 rows)

Time: 0.328 ms
2011-04-09 15:18:11.936 testdb=# CREATE or Replace FUNCTION testsort(id 
integer) returns integer as $$ BEGIN perform pg_sleep(id); return id; 
END; $$ language plpgsql;

CREATE FUNCTION
Time: 12.349 ms
2011-04-09 15:18:22.138 testdb=# select id from test1 where id < 3 order 
by id,testsort(id);

 id
----
  1
  2
(2 rows)

Time: 3001.896 ms

It seems strange that there is a need to evaluate testsort(id) at all in 
this case.


--
Jesper



Re: [HACKERS] getting carriage return character in vacuumo

2011-04-09 Thread Martijn van Oosterhout
On Fri, Apr 08, 2011 at 02:10:56PM -0400, Tom Lane wrote:
 Muhammad Usama m.us...@gmail.com writes:
  While using the vacuumlo utility I encountered a redundant carriage
  return ('\r') character in the output. Is it required in any scenario? If not,
  please find attached a tiny patch which will get rid of that extra '\r'
  character.
 
 I think the idea there is to overwrite successive progress messages
 on the same line.  It's maybe not going to work in all environments,
 though, so perhaps we should reconsider that bit of cuteness.

One could hinge it on batch or non-batch mode, the default being
determined by whether it's running in a terminal or not.

Have a nice day,
-- 
Martijn van Oosterhout   klep...@svana.org   http://svana.org/kleptog/
 Patriotism is when love of your own people comes first; nationalism,
 when hate for people other than your own comes first. 
   - Charles de Gaulle




Re: [HACKERS] Evaluation of secondary sort key.

2011-04-09 Thread David Fetter
On Sat, Apr 09, 2011 at 03:22:14PM +0200, Jesper Krogh wrote:
 This seems like a place where there is room for improvement.
 
 2011-04-09 15:18:08.016 testdb=# select id from test1 where id < 3
 order by id;
  id
 ----
   1
   2
 (2 rows)
 
 Time: 0.328 ms
 2011-04-09 15:18:11.936 testdb=# CREATE or Replace FUNCTION
 testsort(id integer) returns integer as $$ BEGIN perform
 pg_sleep(id); return id; END; $$ language plpgsql;
 CREATE FUNCTION
 Time: 12.349 ms
 2011-04-09 15:18:22.138 testdb=# select id from test1 where id < 3
 order by id,testsort(id);
  id
 ----
   1
   2
 (2 rows)
 
 Time: 3001.896 ms
 
 It seems strange that there is a need to evaluate testsort(id) at
 all in this case.

How would PostgreSQL know that sorting by id leaves no ambiguity for
the next key to address?

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [HACKERS] Evaluation of secondary sort key.

2011-04-09 Thread Jesper Krogh

 
 How would PostgreSQL know that sorting by id leaves no ambiguity for
 the next key to address?

It wouldn't.  But it could postpone evaluation until ambiguity was actually 
encountered.

Jesper
 



Re: [HACKERS] Evaluation of secondary sort key.

2011-04-09 Thread Martijn van Oosterhout
On Sat, Apr 09, 2011 at 09:17:10AM -0700, David Fetter wrote:

  2011-04-09 15:18:22.138 testdb=# select id from test1 where id < 3
  order by id,testsort(id);
   id
  ----
    1
    2
  (2 rows)
  
  Time: 3001.896 ms
  
  It seems strange that there is a need to evaluate testsort(id) at
  all in this case.
 
 How would PostgreSQL know that sorting by id leaves no ambiguity for
 the next key to address?

Well, it doesn't know that, but I guess the point is it could defer
evaluating the second key until it needs it.

The reason it works as it does now is that the ORDER BY fields are
added as hidden fields to the query, like:

select id, /*hidden*/ id, /*hidden*/ testsort(id) from test1 where id < 3 order 
by 2, 3;

Here it's obvious why the evaluation happens. To avoid this evaluation
would require redoing the way sorts work (I think).

Have a nice day,
-- 
Martijn van Oosterhout   klep...@svana.org   http://svana.org/kleptog/
 Patriotism is when love of your own people comes first; nationalism,
 when hate for people other than your own comes first. 
   - Charles de Gaulle




Re: [HACKERS] Evaluation of secondary sort key.

2011-04-09 Thread Heikki Linnakangas

On 09.04.2011 19:17, David Fetter wrote:

On Sat, Apr 09, 2011 at 03:22:14PM +0200, Jesper Krogh wrote:

This seems like a place where there is room for improvement.

2011-04-09 15:18:08.016 testdb=# select id from test1 where id < 3
order by id;
  id
----
   1
   2
(2 rows)

Time: 0.328 ms
2011-04-09 15:18:11.936 testdb=# CREATE or Replace FUNCTION
testsort(id integer) returns integer as $$ BEGIN perform
pg_sleep(id); return id; END; $$ language plpgsql;
CREATE FUNCTION
Time: 12.349 ms
2011-04-09 15:18:22.138 testdb=# select id from test1 where id < 3
order by id,testsort(id);
  id
----
   1
   2
(2 rows)

Time: 3001.896 ms

It seems strange that there is a need to evaluate testsort(id) at
all in this case.


How would PostgreSQL know that sorting by id leaves no ambiguity for
the next key to address?


Presumably there's a primary key constraint on id. This is one of those 
cases where we could optimize, but then again, there's no reason to 
write a query like that in the first place.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Evaluation of secondary sort key.

2011-04-09 Thread Tom Lane
Martijn van Oosterhout klep...@svana.org writes:
 On Sat, Apr 09, 2011 at 09:17:10AM -0700, David Fetter wrote:
 It seems strange that there is a need to evaluate testsort(id) at
 all in this case.

 How would PostgreSQL know that sorting by id leaves no ambiguity for
 the next key to address?

 Well, it doesn't know that, but I guess the point is it could defer
 evaluating the second key until it needs it.

I think that would be a positive disimprovement.  The current design
guarantees that volatile sort expressions are evaluated exactly once,
in the order the rows are read from the data source.  There would be no
guarantees at all, either as to the number of evaluations or the order
in which they happen, if we tried to do evaluation only during the
actual sort.

Another small problem is that any such thing would require carrying
along some kind of closure (ie, the expression and all its input
values), not just the final sort key value, in tuples being sorted.
The ensuing complexity, speed penalty, and increase in data volume
to be sorted would be paid by everybody, making this probably a net
performance loss when considered across all applications.

regards, tom lane
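Tom's exactly-once guarantee versus deferred evaluation can be shown with a toy comparator experiment (this is an illustration, not PostgreSQL code; expensive_key() stands in for a costly expression like testsort(id), and the counters show how often it actually runs):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

static long eager_evals = 0;
static long lazy_evals = 0;

/* Stand-in for an expensive secondary sort expression. */
static int expensive_key(int id, long *counter)
{
    (*counter)++;               /* imagine pg_sleep(id) here */
    return id % 3;
}

typedef struct { int id; int key2; } Row;

/* Current design: key2 was computed exactly once per row, up front. */
static int cmp_eager(const void *a, const void *b)
{
    const Row *ra = a, *rb = b;

    if (ra->id != rb->id)
        return (ra->id > rb->id) - (ra->id < rb->id);
    return ra->key2 - rb->key2;
}

/* Deferred design: key2 recomputed inside the comparator, only on ties;
 * the evaluation count and order now depend on the sort algorithm. */
static int cmp_lazy(const void *a, const void *b)
{
    const Row *ra = a, *rb = b;

    if (ra->id != rb->id)
        return (ra->id > rb->id) - (ra->id < rb->id);
    return expensive_key(ra->id, &lazy_evals)
         - expensive_key(rb->id, &lazy_evals);
}

int main(void)
{
    enum { N = 100 };
    Row eager[N], lazy[N];

    for (int i = 0; i < N; i++) {
        int id = (i * 37) % N;                  /* unique, shuffled ids */
        lazy[i].id = id;
        eager[i].id = id;
        eager[i].key2 = expensive_key(id, &eager_evals);  /* up-front pass */
    }

    qsort(eager, N, sizeof(Row), cmp_eager);
    qsort(lazy, N, sizeof(Row), cmp_lazy);

    printf("eager evaluations: %ld, lazy evaluations: %ld\n",
           eager_evals, lazy_evals);
    assert(eager_evals == N);   /* exactly once per row, guaranteed */
    assert(lazy_evals == 0);    /* unique ids: never reached a tie */
    return 0;
}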



Re: [HACKERS] using a lot of maintenance_work_mem

2011-04-09 Thread Tom Lane
Stephen Frost sfr...@snowman.net writes:
 * Tom Lane (t...@sss.pgh.pa.us) wrote:
 If we were actually trying to support such large allocations,
 what I'd be inclined to do is introduce a separate call along the lines
 of MemoryContextAllocLarge() that lacks the safety check.  

 This sounds like the right approach to me.  Basically, I'd like to have
 MemoryContextAllocLarge(), on 64bit platforms, and have it be used for
 things like sorts and hash tables.  We'd need to distinguish that usage
 from things which allocate varlena's and the like.

Yes, but ...

 But before
 expending time on that, I'd want to see some evidence that it's actually
 helpful for production situations.  I'm a bit dubious that you're going
 to gain much here.

 I waited ~26hrs for a rather simple query:

The fact that X is slow does not prove anything about whether Y will
make it faster.  In particular I see nothing here showing that this
query is bumping up against the 1GB-for-sort-pointers limit, or that
if it is, any significant gain would result from increasing that.
I think the only real way to prove that is to hack the source code
to remove the limit and see what happens.  (You could try using malloc
directly, not palloc at all, to have a non-production-quality but
very localized patch to test.)

BTW, it sounded like your argument had to do with whether it would use
HashAgg or not -- that is *not* dependent on the per-palloc limit, and
never has been.

regards, tom lane



[HACKERS] pg_hba.conf needs sample replication line, replication user

2011-04-09 Thread Joshua Berkus
All,

We left this out of 9.0; let's not leave it out of 9.1.  We need an example 
replication line in pg_hba.conf, commented out.  e.g.

# host   replication   all samenet   md5

Also, what happened to having a replication user defined by default?  We 
talked this to death last year, I thought that's what we'd decided to do?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
San Francisco



Re: [HACKERS] pg_hba.conf needs sample replication line, replication user

2011-04-09 Thread Magnus Hagander
On Sat, Apr 9, 2011 at 19:29, Joshua Berkus j...@agliodbs.com wrote:
 All,

 We left this out of 9.0; let's not leave it out of 9.1.  We need an example 
 replication line in pg_hba.conf, commented out.  e.g.

 # host   replication   all     samenet   md5

 Also, what happened to having a replication user defined by default?  We 
 talked this to death last year, I thought that's what we'd decided to do?

Didn't that end up with allowing the default superuser (postgres) to
have replication privileges by default? Which it does...

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] Evaluation of secondary sort key.

2011-04-09 Thread David Fetter
On Sat, Apr 09, 2011 at 07:24:15PM +0300, Heikki Linnakangas wrote:
 On 09.04.2011 19:17, David Fetter wrote:
 On Sat, Apr 09, 2011 at 03:22:14PM +0200, Jesper Krogh wrote:
 This seems like a place where there is room for improvement.
 
 2011-04-09 15:18:08.016 testdb=# select id from test1 where id < 3
 order by id;
   id
 ----
    1
    2
 (2 rows)
 
 Time: 0.328 ms
 2011-04-09 15:18:11.936 testdb=# CREATE or Replace FUNCTION
 testsort(id integer) returns integer as $$ BEGIN perform
 pg_sleep(id); return id; END; $$ language plpgsql;
 CREATE FUNCTION
 Time: 12.349 ms
 2011-04-09 15:18:22.138 testdb=# select id from test1 where id < 3
 order by id,testsort(id);
   id
 ----
    1
    2
 (2 rows)
 
 Time: 3001.896 ms
 
 It seems strange that there is a need to evaluate testsort(id) at
 all in this case.
 
 How would PostgreSQL know that sorting by id leaves no ambiguity
 for the next key to address?
 
 Presumably there's a primary key constraint on id. This is one of
 those cases where we could optimize, but then again, there's no
 reason to write a query like that in the first place.

Given the horrors query generators perpetrate, it might be worth
dropping provably redundant ORDER BYs on the floor at planning time.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



[HACKERS] Bug in pg_hba.conf or pg_basebackup concerning replication connections

2011-04-09 Thread Joshua Berkus
All,

If I have the following line in pg_hba.conf:

host    replication     replication     all                     md5

pg_basebackup -x -v -P -h master1 -U replication -D $PGDATA
pg_basebackup: could not connect to server: FATAL:  no pg_hba.conf entry for 
replication connection from host "216.121.61.233", user "replication"

But, if I change it to all users, replication succeeds:

host    replication     all             all                     md5

... even if the user postgres (the only other user in this test) is declared 
with NOREPLICATION.

I can't figure out what's going wrong here; either HBA is broken and won't 
accept a replication line unless the user is "all", or pg_basebackup is doing 
something to test a connection as postgres, even though no such connection 
attempt shows up in the logs.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
San Francisco



[HACKERS] Feature request: pg_basebackup --force

2011-04-09 Thread Joshua Berkus
Magnus, all:

It seems a bit annoying to have to do an rm -rf $PGDATA/* before resyncing a 
standby using pg_basebackup.  This means that I still need to wrap pg_basebackup 
in a shell script, instead of having it do everything for me ... especially if 
I have multiple tablespaces.

Couldn't we have a --force option which would clear all data and tablespace 
directories before resyncing?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
San Francisco



Re: [HACKERS] Evaluation of secondary sort key.

2011-04-09 Thread Jesper Krogh

On 2011-04-09 20:00, David Fetter wrote:

Given the horrors query generators perpetrate, it might be worth
dropping provably redundant ORDER BYs on the floor at planning time.

Well, many people often add a secondary sort key to their SQL
for the sole purpose of obtaining a consistent result in the
corner cases where the first sort key is ambiguous.

If the first sort key isn't planned to be supported by an index scan,
then you'll end up calculating the second sort key for the entire
dataset even if you end up doing a LIMIT 100 at the end.

You can only deem it redundant if there is a primary key in front of
it, whereas the fix may be really good in cases where you have an
n_distinct at or near -1 in pg_stats for the column.

Jesper
--
Jesper



Re: [HACKERS] Evaluation of secondary sort key.

2011-04-09 Thread Jesper Krogh

On 2011-04-09 18:54, Tom Lane wrote:

I think that would be a positive disimprovement. The current design
guarantees that volatile sort expressions are evaluated exactly once,
in the order the rows are read from the data source.  There would be no
guarantees at all, either as to the number of evaluations or the order
in which they happen, if we tried to do evaluation only during the
actual sort.

Another small problem is that any such thing would require carrying
along some kind of closure (ie, the expression and all its input
values), not just the final sort key value, in tuples being sorted.
The ensuing complexity, speed penalty, and increase in data volume
to be sorted would be paid by everybody, making this probably a net
performance loss when considered across all applications.

The current approach means that:

select id from test1 where <some clause that matches, say, 10% at random 
by another index> order by sortfunc1(id), sortfunc2(id) limit 20;

on a table with 100,000 elements will end up applying both
sortfunc1(id) and sortfunc2(id) to all 10,000 matching elements,
even though sortfunc2(id) might only bring value for a very small
number of tuples (the ones needed as secondary sort keys for the
top 20 within the dataset).

It might be worth noting in the manual that, if at all possible, you
should stuff sortfunc2(id) into the table as a column (perhaps computed
by a BEFORE trigger), since it might actually be evaluated way more
often than you anticipated.

Thanks a lot for the insight.

Jesper
--
Jesper



Re: [HACKERS] Bug in pg_hba.conf or pg_basebackup concerning replication connections

2011-04-09 Thread Brendan Jurd
On 10 April 2011 04:23, Joshua Berkus j...@agliodbs.com wrote:
 If I have the following line in pg_hba.conf:

 host    replication     replication             all                     md5

 pg_basebackup -x -v -P -h master1 -U replication -D $PGDATA
 pg_basebackup: could not connect to server: FATAL:  no pg_hba.conf entry for 
 replication connection from host 216.121.61.233, user replication


Welcome to the wonderful world of keywords in hba not being specific
to fields.  I encountered this problem myself back in Oct 2010 [1] and
predicted that it would bite other users.  You've been kind enough to
validate that prediction.  I submitted a WIP patch aimed at fixing it
just over a week ago [2].

Until that patch (or some other solution) goes through, you'll need to
quote "replication" in your pg_hba.conf if you want to use it as a
username.
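Concretely, the workaround looks like this (hypothetical entries for illustration; quoting a token in pg_hba.conf makes it lose its keyword meaning):

```
# Keyword: matches replication connections for *any* user:
host    replication     all             all             md5

# Quoted: matches only the role actually named "replication":
host    replication     "replication"   all             md5
```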

Cheers,
BJ

[1] 
http://archives.postgresql.org/message-id/AANLkTi=q8dzj79okrwc-ke9zg-rh-1tcqdqbsbkfo...@mail.gmail.com
[2] 
http://archives.postgresql.org/message-id/aanlktin8p0son1yjexo3cgidlxev67oh4c7vtj7e0...@mail.gmail.com



Re: [HACKERS] \dO versus collations for other encodings

2011-04-09 Thread Peter Eisentraut
On fre, 2011-04-08 at 20:14 -0400, Tom Lane wrote:
 Given that this display doesn't include any encoding column, I'm
 thinking that the intention was to show only relevant collation
 entries.
 Which we could do by adding a WHERE clause about the encoding.
 If the intention was to not restrict that way, don't we need an
 encoding column?  (But I'm not actually sure how we could make that
 work unsurprisingly without changes in CollationGetCollid, which
 would likely break other things, so I don't really want to hear
 suggestions that we should do it the other way ...)

The fix you pushed looks OK.

The whole way of dealing with the wrong encodings has been schizophrenic
throughout the development of this feature.  One idea was that at some
point we could add support for creating collations in the wrong encoding
in template databases, if that turns out to be something that is
requested.  You can, of course, do that manually already.





Re: [HACKERS] Open issues for collations

2011-04-09 Thread Peter Eisentraut
On mån, 2011-03-28 at 20:02 -0400, Tom Lane wrote:
 Peter Eisentraut pete...@gmx.net writes:
  On lör, 2011-03-26 at 00:36 -0400, Tom Lane wrote:
  * It'd sure be nice if we had some nontrivial test cases that work
  in encodings besides UTF8.  I'm still bothered that the committed
  patch failed to cover single-byte-encoding cases in 
 upper/lower/initcap.
 
  Well, how do we want to maintain these test cases without doing too
  much duplication?  It would be easy to run a small sed script over
  collate.linux.utf8.sql to create, say, a latin1 version out of it.
 
 I tried.  The upper/lower test cases require Turkish characters that
 aren't in Latin1.  I'm not sure if we can readily produce test cases
 that cover both sorting changes and case-folding changes in just one
 single-byte encoding --- anybody?
 
 One thing I noticed but didn't push to committing is that the test
 case has a largely-unnecessary assumption about how the local system's
 locale names spell utf8.  We could eliminate that by having it use
 the trimmed locale names created by initdb.

I see you went for the latter option.  That works pretty well already.
I've also been playing around with separating out the Turkish tests
into a separate file.  That would then probably get the remaining
latin1 file passing, if we also dropped the encoding mention from this
error message:

ERROR:  collation "foo" for encoding "UTF8" does not exist

I had thought hard about this in the past and didn't want to do it, but
since we are now making every effort to effectively hide collations with
the wrong encoding, this would possibly be acceptable.

I'm also seeing promising signs that we might get this test (minus
Turkish, perhaps) passing on Windows.





[HACKERS] Teaching regex operators about collations

2011-04-09 Thread Tom Lane
Since ILIKE now responds to collations, it would be nice if the
case-insensitive regex operators did too.  The hard part of that is
getting the information from src/backend/utils/adt/regexp.c to
src/backend/regex/regc_locale.c.  In principle we could probably add
a field to the data structures carried around in the regex library,
but that is looking a bit invasive, and since we share that code with
the Tcl project I'm loath to change it too much.  So what I'm thinking
about is just having a couple of static variables in regc_locale.c that
we initialize before each use of the regex library.  This is a bit
grotty, but there's no need for the regex library to be re-entrant,
so it wouldn't cause any problems until that improbable day when
somebody succeeds in multi-threading the backend.
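
As a toy illustration only (all names here are hypothetical, not the actual
regc_locale.c API), the static-variable approach amounts to the caller
stashing per-use state before invoking a non-reentrant library, instead of
threading a context argument through the library's data structures:

```c
#include <assert.h>

/*
 * Hypothetical sketch: per-use state set by the caller before each use
 * of a non-reentrant library.  The library's internals read the static
 * rather than receiving a context parameter.
 */
static int current_collation;   /* 0 = no case folding, nonzero = fold */

static void
set_regex_collation(int collation)
{
	current_collation = collation;
}

static int
regex_fold_case(int c)
{
	/* library code consults the static instead of a passed-in context */
	if (current_collation && c >= 'A' && c <= 'Z')
		return c - 'A' + 'a';
	return c;
}
```

The obvious cost, as noted above, is that the library stops being
re-entrant; the caller must re-initialize the statics before every use.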

Comments?

regards, tom lane



Re: [HACKERS] Open issues for collations

2011-04-09 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 On mån, 2011-03-28 at 20:02 -0400, Tom Lane wrote:
 One thing I noticed but didn't push to committing is that the test
 case has a largely-unnecessary assumption about how the local system's
 locale names spell utf8.  We could eliminate that by having it use
 the trimmed locale names created by initdb.

 I see you went for the latter option.  That works pretty well already.
 I've also been playing around with separating out the Turkish tests
 into a separate file.  That would then probably get the remaining
 latin1 file passing, if we also dropped the encoding mention from this
 error message:

 ERROR:  collation "foo" for encoding "UTF8" does not exist

 I had thought hard about this in the past and didn't want to do it, but
 since we are now making every effort to effectively hide collations with
 the wrong encoding, this would possibly be acceptable.

Not sure.  If we had the test refactored to the point where that was the
only diff you got with a different server encoding, maybe it'd be worth
changing; but right now we're still a long way from there.  I was seeing
this change as mainly targeted towards making the test useful on more
platforms, and since that encoding name is ours and not
platform-specific, it doesn't create any portability issues to show it.

regards, tom lane



Re: [HACKERS] Bug in pg_hba.conf or pg_basebackup concerning replication connections

2011-04-09 Thread Josh Berkus

 Welcome to the wonderful world of keywords in hba not being specific
 to fields.  I encountered this problem myself back in Oct 2010 [1] and
 predicted that it would bite other users.  You've been kind enough to
 validate that prediction.  I submitted a WIP patch aimed at fixing it
 just over a week ago [2].

Well, I'd like to add this to the Open Issues.  Given that I managed to
hit this issue pretty much immediately on a blind test, I'm not going to
be even close to the last user who experiences it.

Has this always been an issue if you have users and databases in
pg_hba.conf with the same name?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: [HACKERS] Teaching regex operators about collations

2011-04-09 Thread David E. Wheeler
On Apr 9, 2011, at 2:40 PM, Tom Lane wrote:

 Since ILIKE now responds to collations, it would be nice if the
 case-insensitive regex operators did too.  The hard part of that is
 getting the information from src/backend/utils/adt/regexp.c to
 src/backend/regex/regc_locale.c.  In principle we could probably add
 a field to the data structures carried around in the regex library,
 but that is looking a bit invasive, and since we share that code with
 the Tcl project I'm loath to change it too much.  So what I'm thinking
 about is just having a couple of static variables in regc_locale.c that
 we initialize before each use of the regex library.  This is a bit
 grotty, but there's no need for the regex library to be re-entrant,
 so it wouldn't cause any problems until that improbable day when
 somebody succeeds in multi-threading the backend.
 
 Comments?

Sounds reasonable. Is this something that CITEXT could take advantage of 
somehow? Right now, it's using a nasty hack to make ILIKE and friends work 
properly…

Best,

David




Re: [HACKERS] Bug in pg_hba.conf or pg_basebackup concerning replication connections

2011-04-09 Thread Andrew Dunstan



On 04/09/2011 03:18 PM, Brendan Jurd wrote:

On 10 April 2011 04:23, Joshua Berkus j...@agliodbs.com wrote:

If I have the following line in pg_hba.conf:

host    replication     replication     all     md5

pg_basebackup -x -v -P -h master1 -U replication -D $PGDATA
pg_basebackup: could not connect to server: FATAL:  no pg_hba.conf entry for replication connection 
from host "216.121.61.233", user "replication"


Welcome to the wonderful world of keywords in hba not being specific
to fields.  I encountered this problem myself back in Oct 2010 [1] and
predicted that it would bite other users.  You've been kind enough to
validate that prediction.  I submitted a WIP patch aimed at fixing it
just over a week ago [2].

Until that patch (or some other solution) goes through, you'll need to
quote replication in your hba.conf if you want to use it as a
username.

Cheers,
BJ

[1] 
http://archives.postgresql.org/message-id/AANLkTi=q8dzj79okrwc-ke9zg-rh-1tcqdqbsbkfo...@mail.gmail.com
[2] 
http://archives.postgresql.org/message-id/aanlktin8p0son1yjexo3cgidlxev67oh4c7vtj7e0...@mail.gmail.com




That's a 2000 line patch that looks like it's out of the question now. 
But I think this should fix Josh's immediate problem if we want to do it:


diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 2def6ce..4306071 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -492,6 +492,8 @@ check_role(const char *role, Oid roleid, char *param_str)
 
 			return true;
 		}
 		else if (strcmp(tok, role) == 0 ||
+				 (strcmp(tok, "replication\n") == 0 &&
+				  strcmp(role, "replication") == 0) ||
 				 strcmp(tok, "all\n") == 0)
 			return true;



Incidentally, are walsenders supposed to be able to match any db name 
other than 'replication'? If not, I think we have a bug in check_db(), 
which is probably missing an else return false; in the amwalsender branch.

cheers

andrew



Re: [HACKERS] Bug in pg_hba.conf or pg_basebackup concerning replication connections

2011-04-09 Thread Andrew Dunstan



On 04/09/2011 07:11 PM, Andrew Dunstan wrote:




Incidentally, are walsenders supposed to be able to match any db name 
other than 'replication'? If not, I think we have a bug in check_db(), 
which is probably missing an else return false; in the amwalsender 
branch.





Sorry, I misread the code. It will fall through. Sorry for the noise.

cheers

andrew



Re: [HACKERS] pg_upgrade bug found!

2011-04-09 Thread Noah Misch
On Sat, Apr 09, 2011 at 09:05:42AM -0400, Aidan Van Dyk wrote:
 On Sat, Apr 9, 2011 at 7:03 AM, Bruce Momjian br...@momjian.us wrote:
  Bruce Momjian wrote:
  Alvaro Herrera wrote:
  
   Why is it important to have the original pg_clog files around?  Since
   the transactions in question are below the freeze horizon, surely the
   tuples that involve those transaction have all been visited by vacuum
   and thus removed if they were leftover from aborted transactions or
   deleted, no?  So you could just fill those files with the 0x55 pattern
   (signalling all transactions are committed) and the net result should
   be the same.  No?
  
   Forgive me if I'm missing something.  I haven't been following this
   thread and I'm more than a little tired (but wanted to shoot this today
   because I'm gonna be able to, until Monday).
 
  To answer your other question, it is true we _probably_ could assume all
  the rows were committed, except that again, vacuum might not have run
  and the pages might not be full so single-page cleanup wasn't done
  either.
 
 OK, continuing the thought of just making all the old clog files as
 all committed...
 
 Since it only affects toast tables, the only time the system (with
 normal queries) would check for a particular toast tuple, the tuple
 referring to it would have been committed, right?  So forcing all
 transactions committed for the older clog segments might mean a scan
 on a *toast* heap might return tuples as committed when they might
 have been aborted, but the real table heap would never refer to those,
 right?

Yes; it would be relatively harmless to retain some unreferenced TOAST chunks.
However, "all xacts committed" is not equivalent to "all tuples visible".  If
the user rolled back a DELETE shortly before the pg_upgrade run, we need to
recognize that outcome to keep any deleted TOAST entries visible.
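
A concrete sketch of the hazard (table and row here are hypothetical; the
point is that marking every old transaction as committed would make a
rolled-back DELETE look effective):

```sql
-- Shortly before pg_upgrade:
BEGIN;
DELETE FROM big_table WHERE id = 1;  -- sets xmax on the heap row
ROLLBACK;                            -- aborted: the row must stay visible

-- If clog later claims the deleting xact committed, this row (and the
-- TOAST data it references) would wrongly appear deleted.
```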



[HACKERS] How would sorting work with millions of rows in a huge DB with PG?

2011-04-09 Thread Vaibhav Kaushal
Hello all,

I was going through some papers related to sorting, and since I am
studying the PG code side by side, I wondered how sorting would be done on
a DB with millions of rows on disk and GBs of data. Since holding
everything in memory would not be possible, how do we actually sort the
results in such conditions?

A pointer to some doc page or code would be great for me. 

Regards,
Vaibhav




Re: [HACKERS] Bug in pg_hba.conf or pg_basebackup concerning replication connections

2011-04-09 Thread Josh Berkus

 That's a 2000 line patch that looks like it's out of the question now.
 But I think this should fix Josh's immediate problem if we want to do it:

I have confirmed that Andrew's patch works.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



[HACKERS] Typed table DDL loose ends

2011-04-09 Thread Noah Misch
While looking at the typed table/pg_upgrade problem, I ran into a few smaller
problems in the area.  I'm not envisioning a need for much code shift to fix
them, but there are a few points of policy.

* Table row types used in typed tables vs. ALTER TABLE
As previously noted:
  CREATE TABLE t ();
  CREATE TABLE is_a OF t;
  ALTER TABLE t ADD c int;
  \d is_a
  -- No columns

At first I thought we should just forbid the use of table row types in CREATE
TABLE OF.  However, we've been quite systematic about not distinguishing between
table row types and CREATE TYPE AS types; I've only found a distinction in ALTER
TABLE/ALTER TYPE, where we direct you to the other command.  It would be nice to
preserve this heritage.  That doesn't look particularly difficult; it may
actually yield a net code reduction.

There is a minor policy question: when should ALTER TABLE behave like ALTER TYPE
... RESTRICT, if ever?  Would using the inheritance recursion decision (driven
by ONLY, *, and sql_inheritance) be sufficiently reasonable, or do we need a
distinct signal?  I can't envision a case where you'd want to recurse to
inheritance children but error on typed table children (YMMV).  ALTER TABLE DROP
COLUMN currently uses RESTRICT/CASCADE in a completely different sense, so any
syntactic signal would need to step around that.

* Inheriting from a typed table blocks further type DDL
  CREATE TYPE t AS (x int);
  CREATE TABLE parent OF t;
  CREATE TABLE child () INHERITS (parent);
  ALTER TYPE t ADD ATTRIBUTE y int CASCADE;
  -- ERROR:  column must be added to child tables too
We ought to just set INH_YES on the downstream command in ATTypedTableRecursion.
If we get to that point, the user did choose ALTER TYPE CASCADE; it seems fair
to assume he'd want inheritance recursion rather than a later error.

* Users can CREATE TABLE OF on a type they don't own
This in turn blocks the owner's ability to alter the table/type.  However, we
already have this hazard with composite-type columns.  A TODO to address this
broadly seems in order, but it's not a 9.1 issue.

* Can create a permanent table using a temp table row type
  CREATE TEMP TABLE tempt ();
  CREATE TABLE permt OF tempt; -- silently dropped on session exit
Looks easy to fix, with no policy questions.

Does any of this appear incorrect or unreasonable?

Thanks,
nm



Re: [HACKERS] using a lot of maintenance_work_mem

2011-04-09 Thread Greg Stark
On Sat, Apr 9, 2011 at 6:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 BTW, it sounded like your argument had to do with whether it would use
 HashAgg or not -- that is *not* dependent on the per-palloc limit, and
 never has been.


His point was he wanted to be allowed to set work_mem > 1GB. This is
going to become a bigger and bigger problem with 72-128GB and larger
machines already becoming quite standard.

-- 
greg



Re: [HACKERS] using a lot of maintenance_work_mem

2011-04-09 Thread Joshua D. Drake
On Sun, 2011-04-10 at 03:05 +0100, Greg Stark wrote:
 On Sat, Apr 9, 2011 at 6:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  BTW, it sounded like your argument had to do with whether it would use
  HashAgg or not -- that is *not* dependent on the per-palloc limit, and
  never has been.
 
 
 His point was he wanted to be allowed to set work_mem > 1GB. This is
 going to become a bigger and bigger problem with 72-128GB and larger
 machines already becoming quite standard.
 

Yes it is, it even came up at East. 1GB just doesn't cut it anymore... 

JD

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt




Re: [HACKERS] using a lot of maintenance_work_mem

2011-04-09 Thread Stephen Frost
* Greg Stark (gsst...@mit.edu) wrote:
 On Sat, Apr 9, 2011 at 6:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  BTW, it sounded like your argument had to do with whether it would use
  HashAgg or not -- that is *not* dependent on the per-palloc limit, and
  never has been.
 
 His point was he wanted to be allowed to set work_mem > 1GB. This is
 going to become a bigger and bigger problem with 72-128GB and larger
 machines already becoming quite standard.

Actually, Tom has a point in that work_mem can be set above 1GB (which
is where I had it set previously..).  I didn't think it'd actually do
anything given the MaxAlloc limit, but surprisingly, it does (at least,
under 8.4).  I'm currently trying to see if we've got anything that's
going to *break* with work_mem set up that high; right now I have a
hashagg plan running across this data set which has 2.4G allocated to
it so far.

I'll update this thread with whatever I find out.  I'm trying to
remember the other issues that I ran in to with this limit (beyond the
whole sort limit, which I do think would be helped by allowing a larger
value, but it's not as big a deal).
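
For reference, a sketch of the session-level settings under discussion
(values illustrative; the exact upper bound varies by version and platform):

```sql
-- Accepted well above 1GB on 64-bit builds of the era discussed here.
-- Note this is separate from the ~1GB MaxAllocSize cap on any single
-- palloc'd chunk, which limits individual allocations, not work_mem.
SET work_mem = '1900MB';
SET maintenance_work_mem = '1900MB';
SHOW work_mem;
```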

Thanks,

Stephen




Re: [HACKERS] pg_upgrade bug found!

2011-04-09 Thread Bruce Momjian
Aidan Van Dyk wrote:
 On Sat, Apr 9, 2011 at 7:03 AM, Bruce Momjian br...@momjian.us wrote:
  Bruce Momjian wrote:
  Alvaro Herrera wrote:
  
   Why is it important to have the original pg_clog files around?  Since
   the transactions in question are below the freeze horizon, surely the
   tuples that involve those transaction have all been visited by vacuum
   and thus removed if they were leftover from aborted transactions or
   deleted, no?  So you could just fill those files with the 0x55 pattern
   (signalling all transactions are committed) and the net result should
   be the same.  No?
  
   Forgive me if I'm missing something.  I haven't been following this
   thread and I'm more than a little tired (but wanted to shoot this today
   because I'm gonna be able to, until Monday).
 
  To answer your other question, it is true we _probably_ could assume all
  the rows were committed, except that again, vacuum might not have run
  and the pages might not be full so single-page cleanup wasn't done
  either.
 
 OK, continuing the thought of just making all the old clog files as
 all committed...
 
 Since it only affects toast tables, the only time the system (with
 normal queries) would check for a particular toast tuple, the tuple
 referring to it would have been committed, right?  So forcing all
 transactions committed for the older clog segments might mean a scan
 on a *toast* heap might return tuples as committed when they might
 have been aborted, but the real table heap would never refer to those,
 right?

Uh, good point.  I think you are right that you only get to a toast row
from a non-aborted heap row.  I think the problem might be in following
the toast chain but even then I am unclear how that works.  Anyone?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] How would sorting work with millions of rows in a huge DB with PG?

2011-04-09 Thread tomas

On Sun, Apr 10, 2011 at 05:20:02AM +0530, Vaibhav Kaushal wrote:
 Hello all,
 
 I was going through some papers related to sorting and since I am
 studying PG code side by side, I wondered how sorting would be done on a
 DB with millions of rows on disk with GBs of data. Since holding
 everything in memory would not be the possible solution, how do we
 actually sort the results in such conditions.

Look for external sort and external merge

Regards
-- tomás
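
In PostgreSQL the on-disk sort code lives in src/backend/utils/sort/tuplesort.c
(a tape-based merge sort following Knuth, Vol. 3). As a language-neutral
illustration of the general technique only, here is a minimal sketch of an
external merge sort: sort memory-sized runs, spill each run to a temp file,
then stream a k-way merge over the runs:

```python
import heapq
import os
import tempfile


def _spill(sorted_chunk):
    """Write one sorted run to a temp file, one value per line."""
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "w") as f:
        f.writelines(f"{v}\n" for v in sorted_chunk)
    return path


def external_sort(values, chunk_size=1000):
    """Sort integers that don't fit in memory: sort fixed-size chunks,
    spill them to disk, then k-way merge the runs with heapq.merge."""
    runs, chunk = [], []
    for v in values:
        chunk.append(v)
        if len(chunk) >= chunk_size:
            runs.append(_spill(sorted(chunk)))
            chunk = []
    if chunk:
        runs.append(_spill(sorted(chunk)))

    files = [open(path) for path in runs]
    try:
        # heapq.merge streams the runs; only one line per run is in memory
        for line in heapq.merge(*files, key=int):
            yield int(line)
    finally:
        for f in files:
            f.close()
        for path in runs:
            os.unlink(path)
```

The real implementation is far more sophisticated (multiple "tapes", run
length boosted by replacement selection, memory accounting against
work_mem), but the sort-runs-then-merge structure is the same.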
