[HACKERS] [SQL] Proposed patch - psql wraps at window width

2008-03-04 Thread Bryce Nesbitt
I've got a patch to psql that offers a "no wider than terminal" option, 
patched against cvs head.  Would anyone be willing to test this before I 
submit the patch?


# \pset format aligned-wrapped
# \pset border 2
# select * from distributors order by did;
Word wrap debug: rows=11 terminal=66 total_w=108
Max= 8 18 56 13
Avg= 2 11 17 0
+-+---+--+---+
| did |   name|descr | long_col_name |
+-+---+--+---+
|   5 | Food Heaven   | default  |   |
|  10 | Lah   | default  |   |
|   3 | Cat Food Heaven 3 | default  |   |
|   4 | Cat Food Heaven 4 | default  |   |
|   2 | Cat Food Heaven 2 | abcdefghijklmnopqrst :   |
| |   | uvwxyz   |   |
| 289 : short name| short|   |
| 287 :   |  |   |
|  32 |   |  |   |
|  99 | Tab \x09 tab  | --   |   |
|   6 | multiwrap | line one :   |
| |   | e|   |
| |   | line two |   |
| |   | line thr :   |
| |   |  :   |
| |   |  |   |
+-+---+--+---+
(9 rows)
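The core of the feature is deciding how much width each column gets when the natural widths exceed the terminal. A minimal sketch of one possible allocation scheme (hypothetical — not necessarily the patch's actual algorithm), in Python for brevity:

```python
def allocate_widths(natural_widths, terminal_width, overhead):
    """Shrink columns until the table fits in terminal_width.

    natural_widths: widest cell per column; overhead: characters used
    by borders and separators.  The widest column is trimmed first, one
    character at a time; cells wider than the final width get wrapped.
    """
    widths = list(natural_widths)
    while sum(widths) + overhead > terminal_width:
        i = widths.index(max(widths))
        if widths[i] <= 1:  # cannot shrink any further
            break
        widths[i] -= 1
    return widths

# With the Max= widths and terminal size from the debug line above:
print(allocate_widths([8, 18, 56, 13], terminal_width=66, overhead=13))
```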


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-hackers


Re: [HACKERS] "could not open relation 1663/16384/16584: No such file or directory" in a specific combination of transactions with temp tables

2008-03-04 Thread Bruce Momjian
Tom Lane wrote:
> "Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
> > John Smith wrote:
> >> [3] I am not certain how widespread they might be, but I think there
> >> may be some backward compatibility concerns with the patch you are
> >> proposing.
> 
> > Well, the current behavior is certainly broken, so an application 
> > relying on it is in trouble anyway :-(. Even if we came up with a patch 
> > for 8.4 to relax the limitation, I doubt it would be safe enough to 
> > backport to stable branches.
> 
> As Heikki pointed out later, PG 8.1 correctly enforces the restriction
> against preparing a transaction that has dropped a temp table.  It's
> only 8.2.x and 8.3.0 that (appear to) allow this.  So I'm not persuaded
> by backwards-compatibility arguments.
> 
> I've applied Heikki's new patch, and I think that's as much as we can do
> for 8.2 and 8.3.  Any improvement in the functionality would be new
> development (and not trivial development, either) for 8.4 or later.

Is there a TODO here?

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>        http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] 8.3.0 Core with concurrent vacuum fulls

2008-03-04 Thread Tom Lane
"Gavin M. Roy" <[EMAIL PROTECTED]> writes:
> (gdb) where
> #0  0x003fe362e21d in raise () from /lib64/tls/libc.so.6
> #1  0x003fe362fa1e in abort () from /lib64/tls/libc.so.6
> #2  0x0063a2e3 in errfinish ()
> #3  0x005974c4 in DeadLockReport ()
> #4  0x0059381f in LockAcquire ()
> #5  0x00592357 in LockRelationOid ()
> #6  0x00457255 in relation_open ()
> #7  0x004574c3 in heap_open ()
> #8  0x0062cf41 in CatalogCacheInitializeCache ()
> #9  0x0062dfad in PrepareToInvalidateCacheTuple ()
> #10 0x0062e8c5 in CacheInvalidateHeapTuple ()
> #11 0x0045c803 in heap_page_prune ()
> #12 0x005086cd in vacuum_rel ()
> #13 0x005096bb in vacuum ()
> #14 0x005a163b in PortalRunUtility ()
> #15 0x005a1714 in PortalRunMulti ()
> #16 0x005a1d30 in PortalRun ()
> #17 0x0059f4b6 in PostgresMain ()
> #18 0x005760c0 in ServerLoop ()
> #19 0x0050 in PostmasterMain ()
> #20 0x0052fd3e in main ()

So what did DeadLockReport put in the server log before panic'ing?

I'm wondering exactly why CatalogCacheInitializeCache is being called
here --- seems like that should have been done long before we got to
VACUUM.  But it would be useful to know just what deadlock it saw.

regards, tom lane



Re: [HACKERS] 8.3.0 Core with concurrent vacuum fulls

2008-03-04 Thread Gavin M. Roy
[EMAIL PROTECTED] backup]$ cat /etc/redhat-release
CentOS release 4.4 (Final)

BINDIR = /usr/local/pgsql/bin
DOCDIR = /usr/local/pgsql/doc
INCLUDEDIR = /usr/local/pgsql/include
PKGINCLUDEDIR = /usr/local/pgsql/include
INCLUDEDIR-SERVER = /usr/local/pgsql/include/server
LIBDIR = /usr/local/pgsql/lib
PKGLIBDIR = /usr/local/pgsql/lib
LOCALEDIR =
MANDIR = /usr/local/pgsql/man
SHAREDIR = /usr/local/pgsql/share
SYSCONFDIR = /usr/local/pgsql/etc
PGXS = /usr/local/pgsql/lib/pgxs/src/makefiles/pgxs.mk
CONFIGURE = '--with-ldap' '--with-perl' '--enable-integer-datetimes'
CC = gcc
CPPFLAGS = -D_GNU_SOURCE
CFLAGS = -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline
-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing
CFLAGS_SL = -fpic
LDFLAGS = -Wl,-rpath,'/usr/local/pgsql/lib'
LDFLAGS_SL =
LIBS = -lpgport -lz -lreadline -ltermcap -lcrypt -ldl -lm
VERSION = PostgreSQL 8.3.0

(gdb) where
#0  0x003fe362e21d in raise () from /lib64/tls/libc.so.6
#1  0x003fe362fa1e in abort () from /lib64/tls/libc.so.6
#2  0x0063a2e3 in errfinish ()
#3  0x005974c4 in DeadLockReport ()
#4  0x0059381f in LockAcquire ()
#5  0x00592357 in LockRelationOid ()
#6  0x00457255 in relation_open ()
#7  0x004574c3 in heap_open ()
#8  0x0062cf41 in CatalogCacheInitializeCache ()
#9  0x0062dfad in PrepareToInvalidateCacheTuple ()
#10 0x0062e8c5 in CacheInvalidateHeapTuple ()
#11 0x0045c803 in heap_page_prune ()
#12 0x005086cd in vacuum_rel ()
#13 0x005096bb in vacuum ()
#14 0x005a163b in PortalRunUtility ()
#15 0x005a1714 in PortalRunMulti ()
#16 0x005a1d30 in PortalRun ()
#17 0x0059f4b6 in PostgresMain ()
#18 0x005760c0 in ServerLoop ()
#19 0x0050 in PostmasterMain ()
#20 0x0052fd3e in main ()

On Tue, Mar 4, 2008 at 11:35 AM, Alvaro Herrera <[EMAIL PROTECTED]>
wrote:

> Gavin M. Roy wrote:
> > This morning I had a postgres 8.3 install core while multiple
> > vacuum fulls were taking place. I saved the core file; would anyone be
> > interested in dissecting it?  I've otherwise had no issues with this
> > machine or pgsql install.
>
> Of course.  Please post the backtrace.
>
> --
> Alvaro Herrera
> http://www.CommandPrompt.com/
> The PostgreSQL Company - Command Prompt, Inc.
>


Re: [HACKERS] newbie: renaming sequences task

2008-03-04 Thread craigp
--- Tom Lane <[EMAIL PROTECTED]> wrote:

> The first one is really not related to the others --- it just proposes
> that when renaming a table or individual column, we should look for
> sequences "owned by" that column or columns, and rename them so that
> they still look like "table_column_seq".  This is about 50%
> straightforward searching of pg_depend, and about 50% dealing with
> collisions --- if there's already something of that name, you'd need
> to go through the same type of fallback name selection that's already
> done when a serial column is first made.
> 
> (Thinking about it, I kinda wonder whether we even *want* such behavior
> anymore.  In the presence of ALTER SEQUENCE ... OWNED BY, it's entirely
> possible that an owned sequence has a name that's got nothing to do with
> table_column_seq, and which the user wouldn't really want us to forcibly
> rename.  Maybe this TODO has been overtaken by events?)

Well, if we just look at the first one, I wonder (as well) whether or not it is
useful (and, as you said, perhaps surprising in some cases).

I did a quick test, and neither sequences nor primary keys are renamed [1].
Maybe this should be the default behaviour, unless we provide some other
option to ALTER TABLE RENAME? Another possibility might be to only rename
sequences and/or primary keys which had been created implicitly during table
creation (assuming that information is tracked)?

[1]
create table t1 (id serial8 primary key not null);

alter table t1 rename to t2;

-- pk still named t1_pkey
-- sequence still named t1_id_seq
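The collision-handling fallback Tom describes could be sketched roughly as follows (a hypothetical illustration in Python; the real code would work against the catalogs and also truncate names to NAMEDATALEN):

```python
def choose_sequence_name(table, column, existing):
    """Prefer 'table_column_seq'; on collision, append a counter,
    mirroring the fallback used when a serial column's sequence is
    first created."""
    base = f"{table}_{column}_seq"
    name = base
    n = 0
    while name in existing:
        n += 1
        name = f"{base}{n}"
    return name

# After ALTER TABLE t1 RENAME TO t2, the owned sequence would become:
print(choose_sequence_name("t2", "id", existing={"t1_id_seq"}))  # t2_id_seq
```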



  




Re: [HACKERS] "could not open relation 1663/16384/16584: No such file or directory" in a specific combination of transactions with temp tables

2008-03-04 Thread Tom Lane
"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
> John Smith wrote:
>> [3] I am not certain how widespread they might be, but I think there
>> may be some backward compatibility concerns with the patch you are
>> proposing.

> Well, the current behavior is certainly broken, so an application 
> relying on it is in trouble anyway :-(. Even if we came up with a patch 
> for 8.4 to relax the limitation, I doubt it would be safe enough to 
> backport to stable branches.

As Heikki pointed out later, PG 8.1 correctly enforces the restriction
against preparing a transaction that has dropped a temp table.  It's
only 8.2.x and 8.3.0 that (appear to) allow this.  So I'm not persuaded
by backwards-compatibility arguments.

I've applied Heikki's new patch, and I think that's as much as we can do
for 8.2 and 8.3.  Any improvement in the functionality would be new
development (and not trivial development, either) for 8.4 or later.

regards, tom lane



Re: [HACKERS] How to handle error message in PG_CATCH

2008-03-04 Thread Zdenek Kotala

Tom Lane napsal(a):

Zdenek Kotala <[EMAIL PROTECTED]> writes:

Alvaro Herrera napsal(a):

Zdenek Kotala wrote:

At the end I got following message:

ERROR:  Error test
CONTEXT:  Context error
server sent data ("D" message) without prior row description ("T" message)

I don't see anything wrong with this code.  Perhaps the problem is
somewhere else?


Here is the whole test code. It is a stored procedure and there is nothing 
special about it.  The difference between this and autovacuum is that 
autovacuum works without a client side.


The problem with this is it's violating the wire protocol.  Once you've
sent the client an ERROR message, it no longer expects to see any result
from the SELECT that called the function.



Thanks for the explanation. I added an extra error message at the end of 
the function and it works now.



Thanks Zdenek




Re: [HACKERS] Google Summer of Code 2008

2008-03-04 Thread Jan Urbański

Oleg Bartunov wrote:

Jan,

the problem is well known and a solution is often requested. From your
proposal it's not clear what the idea is.


I guess the first approach could be to populate some more columns in 
pg_statistics for tables with tsvectors. I see there are some statistics 
already being gathered (pg_stat's histogram_bounds are populated for 
tsvector columns), so maybe one could use that?
Even remembering a few of the most frequently appearing lexemes could in 
my opinion help. I plotted distinct lexemes against the number of documents 
containing them (basically the output of stat()) in one of our databases 
and came up with this:

http://www.fiok.pl/~jurbanski/kaired-depesze.png
The three high values are really stopwords, and partially because of 
that I wrote my first FTS patch, but this shows that if we'd remember 
the ~40 most frequent lexemes, we could give much better estimates for 
popular queries (and I think the ones that hurt performance most are 
those which underestimate the row count).
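The idea of keeping the most frequent lexemes amounts to a most-common-values list for tsvector columns. A toy sketch of the lookup (the numbers and the fallback constant are made up; real statistics would be gathered by ANALYZE):

```python
DEFAULT_LEXEME_SEL = 0.005  # fallback guess for unseen lexemes (made-up value)

def lexeme_selectivity(lexeme, most_common):
    """most_common maps the ~40 most frequent lexemes to the fraction of
    documents containing them; anything else gets a default guess."""
    return most_common.get(lexeme, DEFAULT_LEXEME_SEL)

mcv = {"postgresql": 0.45, "database": 0.30, "vacuum": 0.02}
print(lexeme_selectivity("postgresql", mcv))  # frequent lexeme -> 0.45
print(lexeme_selectivity("heikki", mcv))      # unseen lexeme -> 0.005
```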


As for a more general solution I'd have to read deeper into the tsearch 
code to understand how the tsvector type and @@ operator work and give 
it a bit more thought. I'm planning to do that in the next three weeks 
(read: before the student applications period starts). Maybe some kind 
of heuristic could be implemented? Possibly someone could load some 
information specific to her language, which would tell the planner how 
common (more or less) a given word is?


Another attempt at it would be: return lower estimates for tsqueries 
consisting of more parts - 'X'::tsquery is usually far less selective 
than 'X & Y & Z & V'::tsquery.
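Under an independence assumption, that heuristic falls out naturally: the estimate for 'X & Y & Z' is the product of the per-term estimates, which can only shrink as terms are added. A sketch (independence is a simplification, not how a production estimator would have to work):

```python
def and_tsquery_selectivity(term_selectivities):
    """Selectivity of t1 & t2 & ... assuming the terms occur
    independently -- the estimate decreases monotonically as query
    parts are added."""
    sel = 1.0
    for s in term_selectivities:
        sel *= s
    return sel

print(and_tsquery_selectivity([0.45]))              # 'X'
print(and_tsquery_selectivity([0.45, 0.30, 0.02]))  # 'X & Y & Z'
```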


I searched through the list archives, but couldn't find any other 
attempts at this problem - were there any?


Jan

--
Jan Urbanski
GPG key ID: E583D7D2

ouden estin





Re: [HACKERS] "could not open relation 1663/16384/16584: No such file or directory" in a specific combination of transactions with temp tables

2008-03-04 Thread Tom Lane
"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Looking back, I think it was driven by the desire to tie the behavior
>> directly to things that are going to get persisted, such as locks.
>> From that standpoint your initial patch to attach a temp-check to
>> relation-drop 2PC entries would be the right kind of design.  However,
>> given what we now know about the lock situation, I'd feel uncomfortable
>> with applying that without also fixing LockTagIsTemp, and right now
>> that's looking like much more complexity and possible performance
>> penalty than it's worth.

> Looking closer, this actually worked in 8.1, and was broken in 8.2 by 
> this change:

Argh, so it's actually my bug :-(

> Before that, we had an isTempObject flag in LOCALLOCK, which worked even 
> when the relation was dropped later on, unlike LockTagIsTemp.

Yeah.  I guess my hindbrain was remembering that arrangement, because
adding such a field to LOCALLOCK was what I was first thinking about.
The problem though is that we need to take a lock on a table before
reading its pg_class row, in order to avoid race conditions when the
row is being deleted or updated.  So we can't easily know at the time
the lock is taken whether it's a temp table or not.  (In some contexts
such as the parser we might know which schema the table is in, but
most places are expected to be able to open and lock the table knowing
only its OID.)  I think the only feasible solution like that would
involve calling the lock manager a second time to mark the lock temp
after we'd looked at the pg_class row.  Which is not impossible, but
it would cost an extra hashtable search for each open of a temp table,
and it isn't really buying us anything compared to setting a global
flag in the same places.
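The global-flag approach being compared here can be modeled in miniature (a Python sketch of the control flow only, not the actual C patch):

```python
class Backend:
    """Each backend tracks whether the current transaction has touched
    any temporary relation; PREPARE TRANSACTION checks that one flag
    instead of trying to classify individual locks after the fact."""
    def __init__(self):
        self.accessed_temp_rel = False

    def open_relation(self, oid, is_temp):
        # The lock is taken before pg_class is read, so temp-ness is
        # only known here, after the catalog lookup -- at which point
        # setting a flag is cheaper than re-marking the lock entry.
        if is_temp:
            self.accessed_temp_rel = True

    def prepare_transaction(self):
        if self.accessed_temp_rel:
            raise RuntimeError(
                "cannot PREPARE a transaction that has operated on temporary tables")

    def commit(self):
        self.accessed_temp_rel = False  # reset at end of transaction
```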

> Anyway, patches attached, using the global flag approach, for 8.2 and 
> 8.3. As discussed earlier, since the flag is global, we won't allow 
> PREPARE TRANSACTION if you have operated on a temp table in an aborted 
> subxact, but I think that's acceptable.

Patch looks good in a fast once-over, I'll check it more carefully
and apply today.

regards, tom lane



Re: [HACKERS] How to handle error message in PG_CATCH

2008-03-04 Thread Tom Lane
Zdenek Kotala <[EMAIL PROTECTED]> writes:
> Alvaro Herrera napsal(a):
>> Zdenek Kotala wrote:
>>> At the end I got following message:
>>> 
>>> ERROR:  Error test
>>> CONTEXT:  Context error
>>> server sent data ("D" message) without prior row description ("T" message)
>> 
>> I don't see anything wrong with this code.  Perhaps the problem is
>> somewhere else?

> Here is the whole test code. It is a stored procedure and there is nothing 
> special about it.  The difference between this and autovacuum is that 
> autovacuum works without a client side.

The problem with this is it's violating the wire protocol.  Once you've
sent the client an ERROR message, it no longer expects to see any result
from the SELECT that called the function.
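The client-side expectation can be pictured as a tiny state machine (a toy model for illustration, not libpq's actual implementation): after an ErrorResponse ('E'), the frontend no longer expects result rows, so a DataRow ('D') without a fresh RowDescription ('T') is exactly the "server sent data" complaint seen above.

```python
class ProtocolError(Exception):
    pass

class ClientState:
    """Toy model of the frontend's result-set expectation."""
    def __init__(self):
        self.expecting_rows = False

    def receive(self, msg):
        if msg == 'T':            # RowDescription starts a result set
            self.expecting_rows = True
        elif msg == 'E':          # ErrorResponse aborts the result set
            self.expecting_rows = False
        elif msg == 'D' and not self.expecting_rows:
            raise ProtocolError(
                'server sent data ("D" message) without prior '
                'row description ("T" message)')
```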

regards, tom lane



Re: [HACKERS] Google Summer of Code 2008

2008-03-04 Thread Oleg Bartunov

Jan,

the problem is well known and a solution is often requested. From your
proposal it's not clear what the idea is.

Oleg
On Tue, 4 Mar 2008, Jan Urbański wrote:


Tom Lane wrote:

Jan Urbański <[EMAIL PROTECTED]> writes:

2. Implement better selectivity estimates for FTS.


+1 for that one ...


OK, this one might very well be the one that'd be more useful. And I can 
always reuse the other idea for my thesis, after expanding it a bit.
Speaking of @@ selectivity, I even mailed about it once on the -performance 
list, but the mail somehow got lost in the processing queue and never reached 
the list. Anyway, the idea has been on my mind for some time now.
So are you considering putting FTS selectivity estimates on this year's SoC 
ideas list? That is, if PostgreSQL is planning to participate in SoC this 
year, which I'm sure it does ;)


cheers,
Jan Urbanski




Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83



Re: [HACKERS] Google Summer of Code 2008

2008-03-04 Thread Josh Berkus
Jan,

> OK, this one might very well be the one that'd be more useful. 

Well, you should submit *both* once SoC opens for applications.  The mentors 
will decide which.

-- 
Josh Berkus
PostgreSQL @ Sun
San Francisco



Re: [HACKERS] Google Summer of Code 2008

2008-03-04 Thread Dave Page
On Tue, Mar 4, 2008 at 4:47 PM, Jan Urbański
<[EMAIL PROTECTED]> wrote:
> Tom Lane wrote:
>  > Jan Urbański <[EMAIL PROTECTED]> writes:
>  >> 2. Implement better selectivity estimates for FTS.
>  >
>  > +1 for that one ...
>
>  OK, this one might very well be the one that'd be more useful. And I can
>  always reuse the other idea for my thesis, after expanding it a bit.
>  Speaking of @@ selectivity, I even mailed about it once on the
>  -performance list, but the mail somehow got lost in the processing queue
>  and never reached the list. Anyway, the idea has been on my mind for
>  some time now.
>  So are you considering putting FTS selectivity estimates on this year's
>  SoC ideas list? That is, if PostgreSQL is planning to participate in SoC
>  this year, which I'm sure it does ;)

We are - but the idea doesn't need to be on the list for us to
consider it. Just write up a good project outline and plan ready to
submit when the doors open.


-- 
Dave Page
EnterpriseDB UK Ltd: http://www.enterprisedb.com
PostgreSQL UK 2008 Conference: http://www.postgresql.org.uk



Re: [HACKERS] How to handle error message in PG_CATCH

2008-03-04 Thread Zdenek Kotala

Alvaro Herrera napsal(a):

Zdenek Kotala wrote:


PG_TRY();
{
	...
	ereport(ERROR, (errmsg("Error test")));
	...
}
PG_CATCH();
{
	errcontext("Context error");
	EmitErrorReport();
	FlushErrorState();
}
PG_END_TRY();

At the end I got following message:

ERROR:  Error test
CONTEXT:  Context error
server sent data ("D" message) without prior row description ("T" message)


I don't see anything wrong with this code.  Perhaps the problem is
somewhere else?




Here is the whole test code. It is a stored procedure and there is nothing 
special about it.  The difference between this and autovacuum is that 
autovacuum works without a client side.


Datum
pg_check(PG_FUNCTION_ARGS)
{
	PG_TRY();
	{
		ereport(ERROR, (errmsg("Error test")));
	}
	PG_CATCH();
	{
		errcontext("Context error");
		EmitErrorReport();
		FlushErrorState();
	}
	PG_END_TRY();

	PG_RETURN_DATUM(0);
}


Zdenek



Re: [HACKERS] Google Summer of Code 2008

2008-03-04 Thread Jan Urbański

Tom Lane wrote:

Jan Urbański <[EMAIL PROTECTED]> writes:

2. Implement better selectivity estimates for FTS.


+1 for that one ...


OK, this one might very well be the one that'd be more useful. And I can 
always reuse the other idea for my thesis, after expanding it a bit.
Speaking of @@ selectivity, I even mailed about it once on the 
-performance list, but the mail somehow got lost in the processing queue 
and never reached the list. Anyway, the idea has been on my mind for 
some time now.
So are you considering putting FTS selectivity estimates on this year's 
SoC ideas list? That is, if PostgreSQL is planning to participate in SoC 
this year, which I'm sure it does ;)


cheers,
Jan Urbanski

--
Jan Urbanski
GPG key ID: E583D7D2

ouden estin





Re: [HACKERS] How to handle error message in PG_CATCH

2008-03-04 Thread Alvaro Herrera
Zdenek Kotala wrote:

> PG_TRY();
> {
> 	...
> 	ereport(ERROR, (errmsg("Error test")));
> 	...
> }
> PG_CATCH();
> {
> 	errcontext("Context error");
> 	EmitErrorReport();
> 	FlushErrorState();
> }
> PG_END_TRY();
>
> At the end I got following message:
>
> ERROR:  Error test
> CONTEXT:  Context error
> server sent data ("D" message) without prior row description ("T" message)

I don't see anything wrong with this code.  Perhaps the problem is
somewhere else?

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [HACKERS] 8.3.0 Core with concurrent vacuum fulls

2008-03-04 Thread Alvaro Herrera
Gavin M. Roy wrote:
> This morning I had a postgres 8.3 install core while multiple
> vacuum fulls were taking place. I saved the core file; would anyone be
> interested in dissecting it?  I've otherwise had no issues with this machine
> or pgsql install.

Of course.  Please post the backtrace.

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [HACKERS] RFP: Recursive query in 8.4

2008-03-04 Thread Tatsuo Ishii
> > Tatsuo Ishii <[EMAIL PROTECTED]> wrote:
> > 
> > > 5. Limitation with PostgreSQL
> > > 
> > > 1) we do not implement SEARCH clause and CYCLE clause. This is because
> > >we need array of rows to implement them. Note that there's no
> > >support for array of rows in PostgreSQL.
> > 
> > What is the difference between "array of rows" and
> > arrays of composite types, which is a new feature in 8.3?
> > 
> > =# CREATE TABLE foo (i integer);
> > CREATE TABLE
> > =# CREATE TABLE bar (foos foo[]);  -- *here*
> > CREATE TABLE
> 
> Will check. Thanks for pointing it out.

Yes, I agree that the 8.3 new feature might be used to
implement the SEARCH clause and CYCLE clause. The problem is, it requires
that the named composite type (the standard's term is "row type") is
previously defined.  Maybe we need an "anonymous" row type?
--
Tatsuo Ishii
SRA OSS, Inc. Japan



Re: [HACKERS] pg_dump additional options for performance

2008-03-04 Thread Bruce Momjian
bruce wrote:
> Joshua D. Drake wrote:
> > -BEGIN PGP SIGNED MESSAGE-
> > Hash: SHA1
> > 
> > On Mon, 3 Mar 2008 20:33:08 -0500 (EST)
> > Bruce Momjian <[EMAIL PROTECTED]> wrote:
> > 
> > > Added to TODO based on this discussion:
> > > 
> > > o Allow pg_dump to utilize multiple CPUs and I/O channels by
> > > dumping multiple objects simultaneously
> > > 
> > >   The difficulty with this is getting multiple dump processes
> > > to produce a single dump output file.
> > >   
> > > http://archives.postgresql.org/pgsql-hackers/2008-02/msg00205.php
> > 
> > Isn't part of this problem also a consistent snapshot?
> 
> Yes, that might also be an issue though I thought we had some ideas
> about snapshot sharing.

OK, I have added the possible requirement of shared snapshots to that
pg_dump TODO item.  Thanks.
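One way to attack the single-output-file difficulty could be sketched like this (a hypothetical scheme: each worker dumps its object independently and the coordinator concatenates the segments in the original order; the shared-snapshot problem is untouched here):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_dump(objects, dump_one, outfile):
    """dump_one(obj) -> str runs in parallel across workers; results
    are written out in the original object order, so the combined dump
    file is deterministic even though dumping is concurrent."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        # map() yields results in submission order, regardless of
        # which worker finishes first.
        segments = pool.map(dump_one, objects)
        with open(outfile, "w") as out:
            for seg in segments:
                out.write(seg)
```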

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>        http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] RFP: Recursive query in 8.4

2008-03-04 Thread Tatsuo Ishii
> Tatsuo Ishii wrote:
> >> On Tue, Feb 19, 2008 at 3:36 AM, Tatsuo Ishii <[EMAIL PROTECTED]> wrote:
> >> 
> > I hope so. But the first thing I would like to do is, to implement the
> > right thing (i.e. following the standard).
> >
> > I don't see any reason that the proposal gets less performance than
> > existing functions.  Moreover the proposal could better cooperate with
> > the optimizer since it can feed more info to it. Any ideas to enhance
> > the performance are welcome.
> >   
> 
> I agree about following the standard but I think it's true that the 
> standard creates some challenges for the optimizer.
> 
> The standard recursive query syntax is quite general. It can represent 
> arbitrary non-linear recursive queries including possibly mutually 
> recursive queries, for example. The challenge is that there are no extra 
> hints when you have the more usual case of a simple linear recursion.

It seems the standard does not allow non-linear recursive queries. In
the SQL:2008 draft, p. 380:

7.13 <query expression>

Syntax Rules 2) g) iv)

"If WLE_i is recursive, then WLE_i shall be linearly recursive."

So now the problem is, how to detect the non-linear recursive queries
and reject them.
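A crude illustration of the detection problem (a textual sketch only — a real implementation would count self-references in the parsed query tree, not in the SQL string):

```python
import re

def is_linearly_recursive(name, body_sql):
    """A recursive WITH list element is linearly recursive if its body
    references the recursive name exactly once; more than one
    self-reference makes it non-linear and (per the standard) it
    should be rejected."""
    refs = re.findall(r'\b{}\b'.format(re.escape(name)), body_sql)
    return len(refs) == 1

print(is_linearly_recursive("t", "SELECT n + 1 FROM t WHERE n < 10"))
print(is_linearly_recursive("t", "SELECT a.n FROM t a JOIN t b ON a.n = b.n"))
```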

> You really do want to discover such linear recursive structures because 
> you can use simpler algorithms and recover memory sooner if you know you 
> have a linear recursive query. You can also support the SEARCH and CYCLE 
> clauses to do depth-first searches which you can't do for arbitrary 
> recursive queries. I also don't have much hope for good optimizer 
> estimates for general recursive queries but for linear recursive queries 
> we can probably do better.
>
> But I think it's actually easier to implement the general case than the 
> special nodes to handle the linear case more efficiently. To handle the 
> general case we need the memoize node to handle recursive loops in the 
> plan and then we can use otherwise normal plan nodes.
> 
> My plan was to implement the general case first, then look for ways to 
> add intelligence in the planner to discover linearity and add new paths 
> to take advantage of it.
> 



[HACKERS] 8.3.0 Core with concurrent vacuum fulls

2008-03-04 Thread Gavin M. Roy
This morning I had a postgres 8.3 install core while multiple
vacuum fulls were taking place. I saved the core file; would anyone be
interested in dissecting it?  I've otherwise had no issues with this machine
or pgsql install.
Gavin


Re: [HACKERS] pg_dump additional options for performance

2008-03-04 Thread Bruce Momjian
Joshua D. Drake wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> On Mon, 3 Mar 2008 20:33:08 -0500 (EST)
> Bruce Momjian <[EMAIL PROTECTED]> wrote:
> 
> > Added to TODO based on this discussion:
> > 
> > o Allow pg_dump to utilize multiple CPUs and I/O channels by
> > dumping multiple objects simultaneously
> > 
> >   The difficulty with this is getting multiple dump processes
> > to produce a single dump output file.
> >   http://archives.postgresql.org/pgsql-hackers/2008-02/msg00205.php
> 
> Isn't part of this problem also a consistent snapshot?

Yes, that might also be an issue though I thought we had some ideas
about snapshot sharing.

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>        http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [PATCHES] [HACKERS] Show INHERIT in \du

2008-03-04 Thread Bruce Momjian

Your patch has been added to the PostgreSQL unapplied patches list at:

http://momjian.postgresql.org/cgi-bin/pgpatches

It will be applied as soon as one of the PostgreSQL committers reviews
and approves it.

---------------------------------------------------------------------------


Brendan Jurd wrote:
> I've done up a patch per Tom's idea of combining the binary role
> attributes into a single column.
> 
> Each attribute which differs from the default is listed on a separate
> line, like so:
> 
>                      List of roles
>   Role name  |   Attributes   |     Member of
>  ------------+----------------+-------------------
>  bob         |                | {readers,writers}
>  brendanjurd | Superuser      | {}
>              : Create role
>              : Create DB
>  harry       | No inherit     | {}
>  jim         | 10 connections | {readers}
>  readers     | No login       | {}
>  writers     | No login       | {}
> (6 rows)
> 
> Notes:
> 
>  * The patch relies on array_to_string's current treatment of NULL
> values in the array; they are ignored.  If that behaviour changes in
> the future, the \du output will become very ugly indeed.
>  * I'm not sure whether "No login" and "No inherit" are the best
> phrases to use.  I took my cue from the SQL setting names NOLOGIN and
> NOINHERIT, but maybe something more grammatically sensible with
> "Cannot login" and "No inheritance" would be preferable.
>  * If accepted, this patch would supersede the earlier patch mentioned
> by Bernd Helmle upthread, which adds LOGIN to the output as a new
> column: http://archives.postgresql.org/pgsql-patches/2007-11/msg00014.php
> 
> Cheers,
> BJ

[ Attachment, skipping... ]

> 
> ---(end of broadcast)---
> TIP 3: Have you checked our extensive FAQ?
> 
>http://www.postgresql.org/docs/faq

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>        http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [PATCHES] [HACKERS] Show INHERIT in \du

2008-03-04 Thread Bruce Momjian
Alvaro Herrera wrote:
> Brendan Jurd escribi?:
> > I've done up a patch per Tom's idea of combining the binary role
> > attributes into a single column.
> 
> This patch seems to be missing from the queue.

Yes, because I am still processing email from two weeks ago.  I was in
Europe for a week.

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>        http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [PATCHES] [HACKERS] Show INHERIT in \du

2008-03-04 Thread Alvaro Herrera
Brendan Jurd escribió:
> I've done up a patch per Tom's idea of combining the binary role
> attributes into a single column.

This patch seems to be missing from the queue.


-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] "could not open relation 1663/16384/16584: No such file or directory" in a specific combination of transactions with temp tables

2008-03-04 Thread Heikki Linnakangas

Tom Lane wrote:

"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:

Tom Lane wrote:

Actually ... why are we using the lock manager to drive this at all?


Good question. It has always seemed a bit strange to me. The assumption 
that we always hold the lock on a temp table until end of transaction, 
while true today, seems weak to me.


Looking back, I think it was driven by the desire to tie the behavior
directly to things that are going to get persisted, such as locks.
From that standpoint your initial patch to attach a temp-check to
relation-drop 2PC entries would be the right kind of design.  However,
given what we now know about the lock situation, I'd feel uncomfortable
with applying that without also fixing LockTagIsTemp, and right now
that's looking like much more complexity and possible performance
penalty than it's worth.


Looking closer, this actually worked in 8.1, and was broken in 8.2 by 
this change:



date: 2006-07-31 21:09:05 +0100;  author: tgl;  state: Exp;  lines: +167 -48;
Change the relation_open protocol so that we obtain lock on a relation
(table or index) before trying to open its relcache entry.  This fixes
race conditions in which someone else commits a change to the relation's
catalog entries while we are in process of doing relcache load.  Problems
of that ilk have been reported sporadically for years, but it was not
really practical to fix until recently --- for instance, the recent
addition of WAL-log support for in-place updates helped.

Along the way, remove pg_am.amconcurrent: all AMs are now expected to support
concurrent update.


Before that, we had an isTempObject flag in LOCALLOCK, which worked even 
when the relation was dropped later on, unlike LockTagIsTemp.


Anyway, patches attached, using the global flag approach, for 8.2 and 
8.3. As discussed earlier, since the flag is global, we won't allow 
PREPARE TRANSACTION if you have operated on a temp table in an aborted 
subxact, but I think that's acceptable.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
Index: src/backend/access/heap/heapam.c
===
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend/access/heap/heapam.c,v
retrieving revision 1.249
diff -c -r1.249 heapam.c
*** src/backend/access/heap/heapam.c	30 Jan 2008 18:35:55 -	1.249
--- src/backend/access/heap/heapam.c	4 Mar 2008 12:43:25 -
***
*** 868,873 
--- 868,877 
  	if (!RelationIsValid(r))
  		elog(ERROR, "could not open relation with OID %u", relationId);
  
+ 	/* Make note that we've accessed a temporary relation. */
+ 	if (r->rd_istemp)
+ 		MyXactAccessedTempRel = true;
+ 
  	pgstat_initstats(r);
  
  	return r;
***
*** 912,917 
--- 916,925 
  	if (!RelationIsValid(r))
  		elog(ERROR, "could not open relation with OID %u", relationId);
  
+ 	/* Make note that we've accessed a temporary relation. */
+ 	if (r->rd_istemp)
+ 		MyXactAccessedTempRel = true;
+ 
  	pgstat_initstats(r);
  
  	return r;
***
*** 958,963 
--- 966,975 
  	if (!RelationIsValid(r))
  		elog(ERROR, "could not open relation with OID %u", relationId);
  
+ 	/* Make note that we've accessed a temporary relation. */
+ 	if (r->rd_istemp)
+ 		MyXactAccessedTempRel = true;
+ 
  	pgstat_initstats(r);
  
  	return r;
Index: src/backend/access/transam/xact.c
===
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend/access/transam/xact.c,v
retrieving revision 1.257
diff -c -r1.257 xact.c
*** src/backend/access/transam/xact.c	15 Jan 2008 18:56:59 -	1.257
--- src/backend/access/transam/xact.c	4 Mar 2008 12:29:21 -
***
*** 189,194 
--- 189,200 
  static bool forceSyncCommit = false;
  
  /*
+  * MyXactAccessedTempRel is set when a temporary relation is accessed. We
+  * don't allow PREPARE TRANSACTION in that case.
+  */
+ bool MyXactAccessedTempRel = false;
+ 
+ /*
   * Private context for transaction-abort work --- we reserve space for this
   * at startup to ensure that AbortTransaction and AbortSubTransaction can work
   * when we've run out of memory.
***
*** 1445,1450 
--- 1451,1457 
  	XactIsoLevel = DefaultXactIsoLevel;
  	XactReadOnly = DefaultXactReadOnly;
  	forceSyncCommit = false;
+ 	MyXactAccessedTempRel = false;
  
  	/*
  	 * reinitialize within-transaction counters
***
*** 1770,1775 
--- 1777,1808 
  
  	/* NOTIFY and flatfiles will be handled below */
  
+ 	/*
+ 	 * Don't allow PREPARE TRANSACTION if we've accessed a temporary table
+ 	 * in this transaction. It's not clear what should happen if a prepared
+ 	 * transaction holds a lock on a temp table, and the original backend
+ 	 * exits and deletes the file, for example. Also, if a temp table is
+ 	 * dropped, we have no way of flushing temp buffers from the backend-
+ 	 * private temp buffer cache of the original backend at COMMIT 

Re: [HACKERS] HOT and autovacuum

2008-03-04 Thread Alvaro Herrera
Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > I'm seeing a 8.3 database whose autovacuum-initiated vacuums are being
> > cancelled and I am not sure of the cause.  I am wondering if a HOT
> > cleanup round on a page could try to acquire locks on it that would
> > conflict with autovacuum (basically anything that conflicts with a
> > vacuum lock).  This would cause autovacuum to commit hara-kiri.
> 
> This is nonsense, no?  A buffer cleanup lock is an LWLock not a lock
> manager lock.

Yeah, I realized that after a while.  There must be something else they
are doing.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] 8.3 / 8.2.6 restore comparison

2008-03-04 Thread Heikki Linnakangas

Bruce Momjian wrote:

Heikki, are there any TODO items here?


Hmm. You could put an entry there for "profile and optimize COPY", with 
the below list of ideas as a starting point. It's more about profiling 
and performance testing than coding.



---

Heikki Linnakangas wrote:

Joshua D. Drake wrote:

On Sun, 24 Feb 2008 00:43:18 +
"Heikki Linnakangas" <[EMAIL PROTECTED]> wrote:
 

Incidentally, I've been working on a patch to speed up CopyReadLine.
I was going to run some more tests first, but since we're talking
about it, I guess I should just post the patch. I'll post to
pgsql-patches shortly.

On your post to patches you mentioned only about a 5% improvement.
Don't get me wrong, 5% is 5% and I respect it greatly but as far as I
can tell we are about 300% behind the curve.
Yeah. Looking at the profile, the time is spent really all over the 
place. There's no one clear bottleneck to focus on. I think we could do 
a few more ~5% improvements, but


At some point, I think we have to bite the bullet and find a way to use 
multiple CPUs for a single load. I don't have any good ideas or plans 
for that, but hopefully someone does.


My tests were maxing out at ~22G an hour, on hardware that can do
360G an hour, and that is assuming > 50% overhead between OS, libs,
etc... I have no choice but to conclude we have a much, much deeper and
more fundamental issue going on with COPY. I am inspired by Itagaki Takahiro
and his batch update of indexes, which should help greatly overall but
doesn't help my specific issue.

Yep, the index build idea is an I/O improvement, not a CPU one.


Forgive me for not being a C programmer; Alvaro is not online, or I
would have vetted these questions with him first.

I know that copy is in theory a bulk loader but, when performing the
readline how many lines are we reading?  Do we read up to 8192? Or do we
shove in say 8megs of data before we invoke DoCopy?
We read 64 KB at a time, and then CopyReadLineText returns one line at a 
time from that buffer.


Looking at your profile more, and after the memchr patch, the "raw input 
side" of copy, consisting of reading the data from disk in 64KB blocks, 
splitting that into lines, and splitting lines into columns, still takes 
~20% of the CPU time. I suspect CopyReadAttributesText is the biggest 
culprit there.


You could avoid the ~8% spent in XLogInsert in PostgreSQL 8.3, by 
creating the table (or truncating it) in the same transaction with the COPY.
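The WAL-skipping trick mentioned above looks like this in a session (8.3, with WAL archiving off; the table name and input path are placeholders):

```sql
BEGIN;
TRUNCATE my_table;                    -- or CREATE TABLE in this transaction
COPY my_table FROM '/tmp/data.copy';  -- hypothetical input file
COMMIT;
-- Because the table's relfilenode is new in this transaction, 8.3 can
-- skip WAL-logging the COPY (as long as archive_mode is off).
```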


After that, heap_formtuple is high on the list. I wonder if we could do 
something about that.



I am just curious if there is some simple low hanging fruit that is
possibly missing.
I don't see any piece of code that's causing problems. We can shave off 
a few percents here and there I think, but don't expect a 300% 
improvement anytime soon. A few ideas I've thought about are:


- use a specialized version of strtol, for base 10. That won't help on 
your table, but I've seen strtol consume a significant amount of time on 
tables with numeric/integer columns.


- Instead of pallocing and memcpying the text fields, leave a little bit 
of room between fields in the attribute_buf, and write the varlen header 
there directly. This might help you since your table has a lot of text 
fields.


- Instead of the normal PG function calling conventions, provide 
specialized fastpath input functions for the most common data types. 
InputFunctionCall consumed 4.5% of the CPU time in your profile.


- Use a simpler memory context implementation, that's like a stack with 
no pfree support, for the per-tuple context.


--
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com






--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] "could not open relation 1663/16384/16584: No such file or directory" in a specific combination of transactions with temp tables

2008-03-04 Thread Heikki Linnakangas

John Smith wrote:

BTW, I found an easier way of reproducing this (see attached 2pc.sql).
It might help with debugging or verifying a fix/regression.


Thanks.


[1] The data file is reported missing in the second transaction only
if the first transaction was ended using PREPARE TRANSACTION. The
error does not show up if a direct COMMIT is performed (commit.sql)
instead of PREPARE TRANSACTION + COMMIT PREPARED. Why is that so?


On normal COMMIT, all buffers related to the table are flushed from the 
buffer cache, and the file is deleted. On PREPARE TRANSACTION, the 
buffers and the file are not immediately dropped, but the relfilenode (= 
filename) of the file is stored in the two-phase state file. On COMMIT 
PREPARED, the state file is read, the buffers are dropped and the file 
is deleted.


Temporary tables don't use the shared buffer cache, but a backend-local 
buffer cache. In PREPARE TRANSACTION, we don't make any note of which 
tables are temporary, because there shouldn't be any: we should already 
have aborted if the transaction operated on temporary tables. But as we 
found out, that check in the lock manager isn't working. Therefore when 
COMMIT PREPARED is run, we delete the file, but don't flush the buffers 
from the backend-local temporary buffer cache. The leftover buffers 
cause the "relation not found" error later on, when we try to flush them 
to disk to make room in the cache for other pages.
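Putting that sequence together, the failure mode can be sketched as a session. This is a hypothetical illustration (the attached 2pc.sql is not reproduced here, and all names and sizes are invented); it assumes max_prepared_transactions > 0:

```sql
BEGIN;
CREATE TEMP TABLE scratch (x int);
INSERT INTO scratch SELECT generate_series(1, 10000);
PREPARE TRANSACTION 'bug_demo';   -- should be rejected, but slips through
COMMIT PREPARED 'bug_demo';       -- deletes the file; stale temp buffers remain
-- A later temp-table operation that needs to evict those leftover buffers
-- then fails with: ERROR: could not open relation ...
```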



[2] From all of the discussion here since my first post, I understand
that there are complications for session-level TEMP tables. But is it
easier to support PREPARE TRANSACTION for transactions that create and
drop their TEMP tables, i.e., so that the tables are not session-level
but just transaction-level?


Yes, if the table is created and dropped in the same transaction, that 
avoids many of the problems. I think we could get away with dropping the 
buffers, deleting the file, and releasing locks on it immediately at 
PREPARE TRANSACTION in that case. It wouldn't behave exactly the same as 
a normal transaction, though. The lock would be released early, which 
would allow another transaction to create a table with the same name 
before the transaction has been committed/rolled back.


ON COMMIT DELETE ROWS could be handled like that as well.


[3] I am not certain how widespread they might be, but I think there
may be some backward compatibility concerns with the patch you are
proposing. On the one hand, the documentation says, "It is not
currently allowed to PREPARE a transaction that has executed any
operations involving temporary tables or created any cursors WITH
HOLD." But temporary tables that are created ON COMMIT DROP are more
like cursors that do not have WITH HOLD specified. So it does not seem
clear from the documentation that PREPARE TRANSACTION is not
supported, and indeed due to the lack of a check in Postgres today, it
seems as though it is supported. Do you think there is a risk in
breaking applications?


Well, the current behavior is certainly broken, so an application 
relying on it is in trouble anyway :-(. Even if we came up with a patch 
for 8.4 to relax the limitation, I doubt it would be safe enough to 
backport to stable branches.


Is your application relying on this? As a workaround, you could use 
non-temporary tables instead.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



[HACKERS] How to handle error message in PG_CATCH

2008-03-04 Thread Zdenek Kotala
I'm working on implementing pg_check functions. Because I want to check 
the whole table, I need to catch errors and handle them myself. I use the 
following construct:




PG_TRY();
{
...
ereport(ERROR, (errmsg("Error test")));
...
}
PG_CATCH();
{
errcontext("Context error");
EmitErrorReport();
FlushErrorState();
}
PG_END_TRY();

At the end I got the following message:

ERROR:  Error test
CONTEXT:  Context error
server sent data ("D" message) without prior row description ("T" message)

and also nothing appears in the log file. A similar construct is used in
autovacuum.c.


Any idea what is wrong?

Thanks for help Zdenek
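For comparison, the pattern used elsewhere in the backend (e.g. PL/pgSQL) copies the error data out of ErrorContext before flushing it, rather than re-emitting it to the client with EmitErrorReport() in the middle of a result set. A hedged sketch, not a drop-in fix; caller_context is a placeholder for a memory context owned by the caller:

```c
MemoryContext	oldcxt;
ErrorData	   *edata;

PG_TRY();
{
	/* ... work that may ereport(ERROR) ... */
}
PG_CATCH();
{
	/* Switch out of ErrorContext so the copied data survives the flush. */
	oldcxt = MemoryContextSwitchTo(caller_context);
	edata = CopyErrorData();
	FlushErrorState();
	MemoryContextSwitchTo(oldcxt);

	/* Handle the error ourselves, e.g. just log it and carry on. */
	elog(LOG, "pg_check: caught error: %s", edata->message);
	FreeErrorData(edata);
}
PG_END_TRY();
```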



Re: [HACKERS] Replacing the corrupt "global" folder with older one

2008-03-04 Thread Zdenek Kotala

chris.jurado wrote:

Sorry for sending this directly to the hackers mailing list, but I think it 
does not belong on any other list, as it involves internals of the files in 
the data directory.


PostgreSQL version: 8.1.2
Operating system:   Windows XP/2003

The PostgreSQL service is not starting anymore. When I manually start it, it 
said something like it started but ended immediately because it had no work to 
do. I encountered this error before but a simple re-installation fixed it. This 
time, it didn't.

I've tried running the service using a backup data folder which was from 2 
days ago; it runs. When I switch back to the current one, I get the error 
above. So, I 
figured the current data must be corrupt. After examination of the data folder, 
the global folder is now not a folder but a file that is 8KB in size (I found 
this out by trying out the reset_xlog command and it said it could no longer 
find the global/xxx file).  It could be the hard disk has a problem. I'm going 
to replace it soon.

I searched the documentation about what the global folder contains and found 
out that these contain cluster-wide tables like the list of databases, etc.

I wanted not to lose the latest transactions by restoring a backup. Now the 
question is this: is it ok to just copy the global folder from my backup 2 
days ago, and replace the one in my current data folder? I'm very very sure 
no DDL statements were executed and no new databases/other objects were 
created since my last backup. Only DML statements were executed on the 
user's databases. Will I lose data if I do this?


Global tables also contain information about vacuuming, freeze xid and 
OIDs. You could have problems, for example with tuple visibility.


You can try it, but back up the current state first. And when it works, 
dump the whole database cluster and regenerate it from scratch. And try to 
use a newer version of PostgreSQL (8.1.11, or better, 8.3).


Zdenek



[HACKERS] pt_br translation

2008-03-04 Thread jomarba
Dear Magnus,

I've sent another XML file to your e-mail. Let's see if this one is OK...

Best regards,

Jomar

