David Fetter <[EMAIL PROTECTED]> writes:
> 1. We can continue to support 5.6 until we can't any more, and
> statistically speaking that "can't any more" is quite likely to happen
> between two minor releases.
That's a silly and unfounded assertion. What sort of event do you
foresee that is going
"Vitali Stupin" <[EMAIL PROTECTED]> writes:
> The error "invalid memory alloc request size 4294967293" appears when
> selecting array of empty arrays:
> select ARRAY['{}'::text[],'{}'::text[]];
I can get a core dump off it too, sometimes. The problem is in
ExecEvalArray, which computes the dimensi
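For what it's worth, the reported allocation size is exactly 2^32 - 3, i.e. a small negative value reinterpreted as an unsigned 32-bit integer — consistent with a size/dimension calculation going negative for the empty sub-arrays. The arithmetic below is certain; tying it to ExecEvalArray is only an inference from the message above:

```sql
-- -3 reinterpreted as an unsigned 32-bit integer:
SELECT 4294967296::bigint - 3 AS alloc_request;  -- 4294967293, matching the error

-- Minimal reproduction from the report:
SELECT ARRAY['{}'::text[], '{}'::text[]];
```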
Martijn van Oosterhout writes:
> It's not clear whether you actually want to allow people to put utf8
> characters directly into their source (especially if the database is
> not in utf8 encoding anyway). There is always the \u{} escape.
Well, if the database encoding isn't utf8 then we'd not iss
Hi,

For the last 6 months or so we've had an intermittent issue while doing a data import with a simple update statement. The fix that we've found for this issue is to REINDEX TABLE ; Has anyone seen this error before?

Again, the error is: duplicate key violates unique constraint pg_toast_<>

Thank
"Paul Laughlin" <[EMAIL PROTECTED]> writes:
> For the last 6 months or so we've had an intermittent issue while doing a
> data import with a simple update statement. The fix that we've found for
> this issue is to REINDEX TABLE ;
What PG version is this?
Are you sure that the REINDEX actually do
warehouse=# select count(distinct chunk_id) from pg_toast.pg_toast_635216540;
 count
-------
 74557
(1 row)

We're on version 8.0.7
"Paul Laughlin" <[EMAIL PROTECTED]> writes:
> warehouse=# select count(distinct chunk_id) from
> pg_toast.pg_toast_635216540;
> count
> ---
> 74557
> (1 row)
> We're on version 8.0.7
Well, 8.0 is definitely at risk for OID collisions in a toast table,
but with so few entries I'd have thought
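A sketch of a diagnostic that could confirm chunk_id (OID) reuse — hedged: the toast table name is taken from the report above, and pg_toast tables carry a unique index on (chunk_id, chunk_seq), which is the constraint the error message names:

```sql
-- Look for duplicate (chunk_id, chunk_seq) pairs, i.e. exactly what the
-- "duplicate key violates unique constraint" error is complaining about:
SELECT chunk_id, chunk_seq, count(*)
FROM pg_toast.pg_toast_635216540
GROUP BY chunk_id, chunk_seq
HAVING count(*) > 1;

-- Compare total rows against distinct chunk_ids; a gap suggests OID reuse:
SELECT count(*) AS total_chunks, count(DISTINCT chunk_id) AS distinct_ids
FROM pg_toast.pg_toast_635216540;
```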
On Mon, Oct 16, 2006 at 10:00:13AM -0400, Tom Lane wrote:
> David Fetter <[EMAIL PROTECTED]> writes:
> > 1. We can continue to support 5.6 until we can't any more, and
> > statistically speaking that "can't any more" is quite likely to
> > happen between two minor releases.
>
> That's a silly and
We got it early last week and again this morning. Before these two it was about six months ago.
"Paul Laughlin" <[EMAIL PROTECTED]> writes:
> We got it early last week and again this morning. Before these two it was
> about six months ago.
A certain amount of clustering could be expected, if a lot of the
entries were made at the time of initial table load --- they'd have
nearby OIDs. You c
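One hypothetical way to look for that kind of clustering (Tom's actual suggestion is truncated above, so this is only a guess at a follow-up check):

```sql
-- A narrow band of chunk_id values relative to the row count would fit
-- entries created together during the initial table load:
SELECT min(chunk_id) AS lowest, max(chunk_id) AS highest
FROM pg_toast.pg_toast_635216540;
```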
The following query will crash the server process:

INSERT INTO news.news SELECT * FROM news.news2;

news.news is empty; news.news2 contains 600 records. Both have the same structure (see end of mail). When the tsvector column & trigger is dumped from the two tables, everything works as expected
Since it seems this mail got lost in the depths of maintainer
approval, pardon my resend.
The following bug has been logged online:
Bug reference: 2697
Logged by: Rusty Conover
Email address: [EMAIL PROTECTED]
PostgreSQL version: 8.2beta1
Operating system: Fedora Core 5
Description: