On Wed, Jun 28, 2006 at 01:12:17PM -0500, Jim C. Nasby wrote:
On Wed, Jun 28, 2006 at 01:49:55PM -0400, Andrew Dunstan wrote:
Personally I don't buy the misuse objection - we already have plenty of
things that can be misused. As long as there is a reasonable valid use
and we can make it
On Thu, Jun 29, 2006 at 02:02:32AM -0400, [EMAIL PROTECTED] wrote:
It was written by Nathan Wagner [EMAIL PROTECTED] and myself, and
is based off the OSSP ( http://www.ossp.org/ ) UUID implementation.
I'm not an expert on the license, but it seems acceptable to me:
...
If there is interest -
On Jun 28, 2006, at 10:14 AM, [EMAIL PROTECTED] wrote:
All the geometric types that I'll never use in core,
with few or no uses, including functions to operate on these types,
and no UUID type... Hehe... To me, that's irony... :-)
Interestingly, the superior geometry capability is driving a
Tom Lane [EMAIL PROTECTED] writes:
Greg Stark [EMAIL PROTECTED] writes:
Hm, so it could be stored on disk without the length header as long as
the length header is added to the in-memory representation? I don't
think the type system has hooks for reading and storing data to disk
though.
[EMAIL PROTECTED] wrote:
On Wed, Jun 28, 2006 at 01:12:17PM -0500, Jim C. Nasby wrote:
On Wed, Jun 28, 2006 at 01:49:55PM -0400, Andrew Dunstan wrote:
Personally I don't buy the misuse objection - we already have plenty of
things that can be misused. As long as there is a reasonable valid use
[EMAIL PROTECTED] wrote:
On Thu, Jun 29, 2006 at 02:02:32AM -0400, [EMAIL PROTECTED] wrote:
It was written by Nathan Wagner [EMAIL PROTECTED] and myself, and
is based off the OSSP ( http://www.ossp.org/ ) UUID implementation.
I'm not an expert on the license, but it seems acceptable to me:
...
If there is interest - I'm sure Nathan and I would be willing to put
it on pgfoundry, and at some point give it up for inclusion into
PostgreSQL.
One requirement would be that it runs on Windows. Is that something you
have tested?
In case it influences anyone, MySQL 5 already has built-in
Hi All,
I am unable to compile the source code for Windows. The command in the INSTALL file, nmake /f win32.mak, is not working!!! Can anyone help???
Thanks in advance,
sandeep
Here is an overview of the SITC method:
http://momjian.us/cgi-bin/pgsitc
A pretty fundamental problem is that the method assumes it's
OK to change the CTID of a live tuple (by swapping its item
pointer with some expired version). It is not --- this will break:
I am having
On Thu, Jun 29, 2006 at 02:40:15AM -0400, Greg Stark wrote:
Greg Stark [EMAIL PROTECTED] writes:
No, it doesn't, and we'd pay a nonzero price for allowing that.
Currently the executor doesn't have to care (much) about whether a
tuple is on-disk or in-memory --- the individual datums look
On one fine day, Wed, 2006-06-28 at 18:19, Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Here is an overview of the SITC method:
http://momjian.us/cgi-bin/pgsitc
A pretty fundamental problem is that the method assumes it's OK to
change the CTID of a live tuple (by
On one fine day, Tue, 2006-06-27 at 12:16, Bruce Momjian wrote:
Hannu Krosing wrote:
On one fine day, Tue, 2006-06-27 at 10:38, Hannu Krosing wrote:
On one fine day, Mon, 2006-06-26 at 23:08, Bruce Momjian wrote:
Jim C. Nasby wrote:
On Mon, Jun 26, 2006 at 02:32:59PM
On Thu, Jun 29, 2006 at 01:39:51AM +0300, Hannu Krosing wrote:
And anyway, ctid is a usable unique row identifier only within read-only
transactions, or not ?
Err, no. The ctid is the only identifier of a tuple in any case. When
you do a delete, the tuple to be deleted is indicated by the ctid
And anyway, ctid is a usable unique row identifier only within
read-only transactions, or not ?
actually for as long as no vacuum comes along. This would change
with SITC. (Maybe it would help to only reuse old versions of the same
row,
then anybody holding a ctid would at least be still
Hello,
Recently an interview with Marten Mickos (mysql's ceo) appeared in
Russian IT news (e.g. http://citcity.ru/12776/). I'd like to ask some
[similar] questions to PostgreSQL core developers and write an article
in Russian. Actually, I'm sure that there is a great lack of PG news
in our mass
Hi, I can't find any function which tells me something like a session
id. Is there something like that? I need it in my AM, because I need
to know whether something I wrote to a file was written in the
current session or previously.
How about
select procpid||' '||backend_start from pg_stat_activity
Rodrigo De Leon wrote:
Hi, I can't find any function which tells me something like a session
id. Is there something like that? I need it in my AM, because I need
to know whether something I wrote to a file was written in the
current session or previously.
How about
select procpid||' '||backend_start from pg_stat_activity
Martijn van Oosterhout kleptog@svana.org writes:
A tuple is just an array of datums, with some header information. The
problems come when you don't have a tuple anymore, but only the datum,
like in arguments for functions.
I think it's more a case that most places that deal with datums
That's pretty roundabout. We already expose (hex coded) pid.starttime as
a session identifier in log_line_prefix (it's the %c escape) so I don't
see any reason not to provide either the same thing directly in a
function, or at least to expose the backend pid.
That would be nice.
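For what it's worth, the hex-coded identifier described above is easy to sketch. A minimal Python illustration (the pid and start time are made-up values, and the exact field order of the %c escape should be checked against elog.c before relying on it):

```python
# Sketch of the hex-coded "pid.starttime" session identifier that the
# %c log_line_prefix escape is described as producing above.
pid = 4242               # hypothetical backend pid
start_time = 1151596800  # hypothetical backend start time (Unix epoch seconds)

session_id = "%x.%x" % (pid, start_time)
print(session_id)  # → 1092.44a3f900
```

The pair is unique for practical purposes because a pid can only be reused after the original backend exits, at which point the start time differs.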
Greg Stark wrote:
Martijn van Oosterhout kleptog@svana.org writes:
To be honest, it seems like a lot of work to save the four bytes of
overhead for the varlena structure on disk if you're going to need it
in memory anyway. And anything like RAW(16) which people want for
UUIDs, if it's going
Andrew Dunstan [EMAIL PROTECTED] writes:
Rodrigo De Leon wrote:
You could do this:
SELECT procpid||' '||backend_start
FROM pg_stat_activity
WHERE datname = current_database()
AND usename = session_user
AND client_addr = inet_client_addr()
AND client_port = inet_client_port();
That's
Hi.
Probably, if you are building from the 8.1.4 source, you need to pull
the stable branch from CVS or apply the patch referenced here:
http://archives.postgresql.org/pgsql-patches/2006-05/msg00232.php
Regards,
Hiroshi Saito
- Original Message -
From: Sandeep
On Thu, Jun 29, 2006 at 03:54:36PM +0200, Thomas Hallgren wrote:
I have to concur with this. Assume you use a bytea for a UUID that in
turn is used as a primary key. The extra overhead will be reflected in
all indexes, all foreign keys, etc. In a normalized database some tables
may consist
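The storage arithmetic behind this point is simple to check. A minimal Python sketch (the 4-byte figure is the classic varlena length header, ignoring alignment padding and TOAST):

```python
import uuid

u = uuid.uuid4()

raw = u.bytes   # 16-byte binary form a dedicated fixed-width UUID type could store
text = str(u)   # 36-character textual form, e.g. '550e8400-e29b-41d4-a716-446655440000'

VARLENA_HEADER = 4  # length word prepended to every bytea value
bytea_size = VARLENA_HEADER + len(raw)

# 16 bytes raw vs 20 bytes as bytea: a 25% overhead repeated in every
# index entry and foreign key that carries the value.
print(len(raw), bytea_size, len(text))  # → 16 20 36
```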
Hannu Krosing wrote:
But we still have to think about similar cases (index entries pointing
inside CITC chains), unless we plan to disallow adding indexes to
tables.
CREATE INDEX has to undo any chains where the new indexed columns change
in the chain, and add index entries to
Martijn van Oosterhout kleptog@svana.org writes:
On Thu, Jun 29, 2006 at 03:54:36PM +0200, Thomas Hallgren wrote:
I have to concur with this. Assume you use a bytea for a UUID that in
turn is used as a primary key. The extra overhead will be reflected in
all indexes, all foreign keys,
On Tom Lane's advice, we upgraded to Postgres 8.0.8. We also upgraded
slony to 1.1.5, due to some rpm issues. Apart from that everything is
as described below.
We were able to corrupt the index within 90 minutes of starting up our
cluster. A slony-induced vacuum was under way on the provider
Marc Munro [EMAIL PROTECTED] writes:
On Tom Lane's advice, we upgraded to Postgres 8.0.8.
OK, so it's not an already-known problem.
We were able to corrupt the index within 90 minutes of starting up our
cluster. A slony-induced vacuum was under way on the provider at the
time the subscriber
Greg Stark [EMAIL PROTECTED] writes:
In the current setup the only reason for Postgres to have this data type at
all is purely for legacy compatibility.
Yes. So?
regards, tom lane
On Thu, 2006-06-29 at 12:11 -0400, Tom Lane wrote:
OK, so it's not an already-known problem.
We were able to corrupt the index within 90 minutes of starting up our
cluster. A slony-induced vacuum was under way on the provider at the
time the subscriber failed.
You still haven't given
On Thu, Jun 29, 2006 at 09:12:32AM +0200, Thomas Hallgren wrote:
The split makes sense since clients often have powerful UUID utilities handy
and hence have limited or no use for such utilities in the database (true
for all .NET and Java clients). Some PL's will also enable such packages
out
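The point about client-side generation is easy to illustrate. In Python, for instance, the standard library already covers the common UUID versions (shown purely as an example of the "powerful UUID utilities" clients have at hand):

```python
import uuid

u4 = uuid.uuid4()  # version 4: random
u1 = uuid.uuid1()  # version 1: time-based, the scheme the OSSP library also implements
u5 = uuid.uuid5(uuid.NAMESPACE_DNS, "example.org")  # version 5: name-based (SHA-1)

print(u4.version, u1.version, u5.version)  # → 4 1 5
```

With generation handled client-side, a server-side type only needs to store, compare, and index the 16-byte value.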
Martijn van Oosterhout kleptog@svana.org writes:
Tom - what do you think of the other related idea, that of reusing dead
index entries ?
Possibly workable for btree now that we do page-at-a-time index scans;
however I'm pretty hesitant to build any large infrastructure atop that
change until
Martijn van Oosterhout wrote:
On Thu, Jun 29, 2006 at 03:54:36PM +0200, Thomas Hallgren wrote:
I have to concur with this. Assume you use a bytea for a UUID that in
turn is used as a primary key. The extra overhead will be reflected in
all indexes, all foreign keys, etc. In a normalized
Marc Munro [EMAIL PROTECTED] writes:
As you see, slony is attempting to enter one tuple
('374520943','22007','0') two times.
Each previous time we have had this problem, rebuilding the indexes on
slony log table (sl_log_1) has fixed the problem. I have not reindexed
the table this time as I
On Thu, Jun 29, 2006 at 06:40:13PM +0200, Thomas Hallgren wrote:
Martijn van Oosterhout wrote:
On Thu, Jun 29, 2006 at 03:54:36PM +0200, Thomas Hallgren wrote:
I have to concur with this. Assume you use a bytea for a UUID that in
turn is used as a primary key. The extra overhead will be
On one fine day, Thu, 2006-06-29 at 12:35, Tom Lane wrote:
Martijn van Oosterhout kleptog@svana.org writes:
Tom - what do you think of the other related idea, that of reusing dead
index entries ?
Possibly workable for btree now that we do page-at-a-time index scans;
however I'm
Tom Lane wrote:
Another issue is that this would replace a simple hint-bit setting with
an index change that requires a WAL entry. There'll be more WAL traffic
altogether from backends retail-deleting index tuples than there would
be from VACUUM cleaning the whole page at once
Speaking of
On Thu, Jun 29, 2006 at 12:35:12PM -0400, Tom Lane wrote:
Another issue is that this would replace a simple hint-bit setting with
an index change that requires a WAL entry. There'll be more WAL traffic
altogether from backends retail-deleting index tuples than there would
be from VACUUM
Martijn van Oosterhout wrote:
One thing I am confused about, currently the ctid chain follows tuple
history so that transactions can find the latest version of any tuple,
even if the key fields have changed. This proposal breaks that, I'm not
sure how important that is though.
No, SITC
Martijn van Oosterhout wrote:
You can't truncate a tuple to just the header, or at least it's not
going to be very useful to do it, unless you can also move other tuples
to coalesce the free space on the page. Which means you need a
VACUUM-strength page lock. If you're trying to do this
We have reproduced the problem again. This time it looks like vacuum is
not part of the picture. From the provider's log:
2006-06-29 14:02:41 CST DEBUG2 cleanupThread: 101.057 seconds for vacuuming
And from the subscriber's:
2006-06-29 13:00:43 PDT ERROR remoteWorkerThread_1: insert into
Adding -advocacy
On Thu, Jun 29, 2006 at 04:48:01PM +0400, Nikolay Samokhvalov wrote:
Recently an interview with Marten Mickos (mysql's ceo) appeared in
Russian IT news (e.g. http://citcity.ru/12776/). I'd like to ask some
[similar] questions to PostgreSQL core developers and write an article
in
[EMAIL PROTECTED] (Marc Munro) writes:
As you see, slony is attempting to enter one tuple
('374520943','22007','0') two times.
Each previous time we have had this problem, rebuilding the indexes on
slony log table (sl_log_1) has fixed the problem. I have not reindexed
the table this time as
Marc Munro [EMAIL PROTECTED] writes:
Tom,
we have a newer and much smaller (35M) file showing the same thing:
Thanks. Looking into this, what I find is that *both* indexes have
duplicated entries for the same heap tuple:
idx1:
Item 190 -- Length: 24 Offset: 3616 (0x0e20) Flags: USED
On one fine day, Thu, 2006-06-29 at 16:42, Chris Browne wrote:
[EMAIL PROTECTED] (Marc Munro) writes:
As you see, slony is attempting to enter one tuple
('374520943','22007','0') two times.
Each previous time we have had this problem, rebuilding the indexes on
slony log table
On one fine day, Thu, 2006-06-29 at 17:23, Tom Lane wrote:
Marc Munro [EMAIL PROTECTED] writes:
Tom,
we have a newer and much smaller (35M) file showing the same thing:
Thanks. Looking into this, what I find is that *both* indexes have
duplicated entries for the same heap tuple:
On Fri, 2006-06-30 at 00:37 +0300, Hannu Krosing wrote:
Marc: do you have triggers on some replicated tables ?
We have a non-slony trigger on only 2 tables, neither of them involved
in this transaction. We certainly have no circular trigger structures.
I remember having some corruption in a
I wrote:
What I speculate right at the moment is that we are not looking at index
corruption at all, but at heap corruption: somehow, the first insertion
into ctid (27806,2) got lost and the same ctid got re-used for the next
inserted row. We fixed one bug like this before ...
Further study
Ooops: forgot to cc this to the list.
On Thu, 2006-06-29 at 19:27 -0400, Tom Lane wrote:
Are you *certain* this slave isn't running 8.0.2 or older? If you can
verify that, then I guess we need to look for another mechanism that
could cause the same kind of thing.
Certain. We built new rpms
On Thu, 2006-06-29 at 19:59 -0400, Tom Lane wrote:
Ummm ... you did restart the server? select version(); would be
the definitive test.
Can't say I blame you for the skepticism but I have confirmed it again.
test=# select version();
version
Marc Munro [EMAIL PROTECTED] writes:
If there's anything we can do to help debug this we will. We can apply
patches, different build options, etc.
One easy thing that would be worth trying is to build with
--enable-cassert and see if any Asserts get provoked during the
failure case. I don't
[ back to the start of the thread... ]
Marc Munro [EMAIL PROTECTED] writes:
We have now experienced index corruption on two separate but identical
slony clusters. In each case the slony subscriber failed after
attempting to insert a duplicate record. In each case reindexing the
sl_log_1
On Thu, 2006-06-29 at 21:47 -0400, Tom Lane wrote:
One easy thing that would be worth trying is to build with
--enable-cassert and see if any Asserts get provoked during the
failure case. I don't have a lot of hope for that, but it's
something that would require only machine time not people
Marc Munro [EMAIL PROTECTED] writes:
By dike out, you mean remove? Please confirm and I'll try it.
Right, just remove (or comment out) the lines I quoted.
We ran this system happily for nearly a year on the
previous kernel without experiencing this problem (tcp lockups are a
different
On Thu, 2006-06-29 at 21:59 -0400, Tom Lane wrote:
[ back to the start of the thread... ]
BTW, a couple of thoughts here:
* If my theory about the low-level cause is correct, then reindexing
sl_log_1 would make the duplicate key errors go away, but nonetheless
you'd have lost data --- the
Marc Munro [EMAIL PROTECTED] writes:
I'll get back to you with kernel build information tomorrow. We'll also
try to talk to some kernel hackers about this.
Some googling turned up recent discussions about race conditions in
Linux NFS code: