Basically, I'm wondering if anyone can see a problem with my standard
workaround to the macro-expansion-vs-nextval problem with views. I can't
see how PG changes might break it, but I might be using it in a
presentation to others so thought I'd best check.
BEGIN;
CREATE TABLE foo (f_id serial
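The message is cut off above, so here is a hedged reconstruction of the pattern under discussion (the bar table, the view, the sequence name, and the rule are assumptions, not Richard's actual code): rule actions are macro-expanded, so a bare nextval() appearing in an ON INSERT rule can be evaluated more than once; the usual workaround hides the nextval()/currval() pair inside a function so they run exactly once per call.

```sql
-- Sketch only: every name beyond foo and foobar_ins_fn is a guess.
BEGIN;
CREATE TABLE foo (f_id serial PRIMARY KEY, f1 int4);
CREATE TABLE bar (b_id int4 REFERENCES foo (f_id), b1 int4);
CREATE VIEW foobar AS
    SELECT f.f_id, f.f1, b.b1 FROM foo f JOIN bar b ON b.b_id = f.f_id;

CREATE FUNCTION foobar_ins_fn(p_f1 int4, p_b1 int4) RETURNS void AS $$
BEGIN
    -- nextval()/currval() execute exactly once per call here, so both
    -- inserts are guaranteed to share the same generated id
    INSERT INTO foo (f_id, f1) VALUES (nextval('foo_f_id_seq'), p_f1);
    INSERT INTO bar (b_id, b1) VALUES (currval('foo_f_id_seq'), p_b1);
END;
$$ LANGUAGE plpgsql;

CREATE RULE foobar_ins AS ON INSERT TO foobar
    DO INSTEAD SELECT foobar_ins_fn(NEW.f1, NEW.b1);
COMMIT;
```

With the rule in place, an INSERT INTO foobar (f1, b1) VALUES (1, 2) routes through the function; the function can also be called directly with SELECT foobar_ins_fn(1, 2).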
Tom Lane wrote:
The only alternative I can see is the one Heikki suggested: don't
truncate clog until the freeze horizon. That's safe (given the planned
change to WAL-log tuple freezing) and clean and simple, but a permanent
requirement of 250MB+ for pg_clog would put the final nail in the coffin
Richard Huxton writes:
> Basically, I'm wondering if anyone can see a problem with my standard
> workaround to the macro-expansion-vs-nextval problem with views.
> CREATE FUNCTION foobar_ins_fn(p_f1 int4, p_b1 int4) RETURNS void AS $$
> BEGIN
>INSERT INTO foo (f_id, f1) VALUES (nextval('foo_f
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> I got another idea. If we make sure that vacuum removes any aborted xid
> older than OldestXmin from the table, we can safely assume that any xid
> < the current clog truncation point we are going to be interested in is
> committed. Vacuum already
Tom Lane wrote:
Richard Huxton writes:
Basically, I'm wondering if anyone can see a problem with my standard
workaround to the macro-expansion-vs-nextval problem with views.
CREATE FUNCTION foobar_ins_fn(p_f1 int4, p_b1 int4) RETURNS void AS $$
BEGIN
INSERT INTO foo (f_id, f1) VALUES (next
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Simon Riggs wrote:
>> Ouch! We did discuss that also. Flushing the buffercache is nasty with
>> very large caches, so this makes autovacuum much less friendly - and
>> could take a seriously long time if you enforce the vacuum delay
>> costings.
> Hmm,
It seems that we're converging on the conclusion that not truncating
clog early is the least bad alternative. This has the advantage of
making things a lot simpler --- we won't need to track minxid at all.
Allow me to summarize what I think has to happen:
* VACUUM will determine a freeze cutoff X
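For context on what a "freeze cutoff" means in practice: in the releases that grew out of this design (8.2 and later), each table's frozen horizon is exposed as pg_class.relfrozenxid, and its distance from the cutoff can be inspected with a query like the following (a diagnostic sketch, not part of the proposal itself):

```sql
-- How many transactions old is each table's frozen horizon?
SELECT relname, age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY xid_age DESC
LIMIT 10;
```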
On Tue, Oct 31, 2006 at 12:55:46PM -0500, Andrew Dunstan wrote:
> To this you propose, as I understand it, to have a fourth possibility
> which would be spec compliant for comparison purposes but would label
> result set columns with the case preserved name originally used (or
> would you use th
On Mon, 2006-10-30 at 20:40 -0500, Tom Lane wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
> > That was understood; in the above example I agree you need to flush. If
> > you don't pass a truncation point, you don't need to flush whether or
> > not you actually truncate. So we don't need to flu
We treated quoted identifiers as case-specific, as the spec requires.
In the catalog, we stored TWO columns... The column name with case
converted as appropriate (as PostgreSQL already does), used for looking
up the attribute,
and a second column, which was the column name with the case exactly as
Oh... And Microsoft SQL Server does something similar.
At Greenplum, we've already gotten complaints from customers about this
when they were switching from MSSQL to GP's PostgreSQL-based database.
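The stock behavior the thread keeps contrasting against can be seen directly in psql; this sketch shows PostgreSQL's case folding (the spec mandates folding unquoted identifiers to upper case; PostgreSQL folds to lower):

```sql
CREATE TABLE CaseTest ("MixedCol" int, plaincol int);
-- Unquoted identifiers are folded to lower case, so these both work:
SELECT plaincol FROM casetest;
SELECT PLAINCOL FROM CASETEST;
-- Quoted identifiers are stored and matched exactly as written:
SELECT "MixedCol" FROM casetest;   -- works
-- SELECT MixedCol FROM casetest;  -- ERROR: column "mixedcol" does not exist
```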
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Monday, October 30, 2006
There was some discussion a couple of years ago on the -hackers list
about it, so you might like to review the archives. The consensus seemed
to be that behaviour would need to be set no later than createdb time.
The options I thought of were:
. current postgres behaviour (we need to do thi
Hello, everybody,
I am developing a PostgreSQL-based distributed relational data integration
system, similar to the IBM DB2 Information Integrator, and I have started a
pgfoundry project: http://pgfoundry.org/projects/ignite (the code there is
already obsolete). The goal of the system is to p
On Tue, 31 Oct 2006, Chuck McDevitt wrote:
> We treated quoted identifiers as case-specific, as the spec requires.
>
> In the catalog, we stored TWO columns... The column name with case
> converted as appropriate (as PostgreSQL already does), used for looking
> up the attribute,
> And a second col
On Wed, Nov 01, 2006 at 02:13:47AM +0800, Lee Riquelmei wrote:
>
>Hello, everybody,
>I am developing a PostgreSQL-based distributed relational data integration
>system which is like the IBM DB2 Information Integrator. And I have started
>a pgfoundry project: http://pgfoundry.org
-Original Message-
From: Stephan Szabo [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 31, 2006 10:23 AM
To: Chuck McDevitt
Cc: Tom Lane; beau hargis; pgsql-sql@postgresql.org;
pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] [SQL] Case Preservation disregarding case
On Tue, 31 Oct
"Tom Lane" <[EMAIL PROTECTED]> writes:
> The added WAL volume should be pretty minimal, because only tuples that have
> gone untouched for a long time incur extra work.
That seems like a weak point in the logic. It seems like it would make
VACUUM, which is already an I/O hog, even more of one. Perhaps
The problem as I remember it is pg_trgm, not tsearch2 directly; I've sent a
self-contained test case directly to Teodor which shows the error.
'ERROR: index row requires 8792 bytes, maximum size is 8191'
Uh, I see. But I'm really surprised: why do you use pg_trgm on big text? pg_trgm
is desig
On October 31, 2006 06:42 am, Joshua D. Drake wrote:
> Teodor Sigaev wrote:
> >> The problem I am after is the 8k index size issue. It is very easy to
> >> get a GIST index (especially when using tsearch2) that is larger than
> >> that.
The problem as I remember it is pg_trgm not tsearch2 directl
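For readers trying to reproduce the 8 kB limit themselves, the setup under discussion looks roughly like this (a sketch; the actual failing test case was sent to Teodor privately, and on recent servers pg_trgm is loaded with CREATE EXTENSION rather than the 8.x-era contrib SQL script):

```sql
CREATE EXTENSION pg_trgm;          -- 8.x era: run contrib/pg_trgm.sql instead
CREATE TABLE docs (body text);
CREATE INDEX docs_trgm_idx ON docs USING gist (body gist_trgm_ops);
-- Inserting a very large value with many distinct trigrams (e.g. escaped
-- binary data) is what produced the error reported in this thread:
--   ERROR: index row requires 8792 bytes, maximum size is 8191
```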
Teodor Sigaev wrote:
>> The problem I am after is the 8k index size issue. It is very easy to
>> get a GIST index (especially when using tsearch2) that is larger than
>> that.
> Hmm, the tsearch2 GiST index is specially designed to support huge index
> entries:
> first, every lexeme in the tsvector is t
On October 31, 2006 08:53 am, Teodor Sigaev wrote:
> > The problem as I remember it is pg_trgm, not tsearch2 directly; I've sent
> > a self-contained test case directly to Teodor which shows the error.
> >
> > 'ERROR: index row requires 8792 bytes, maximum size is 8191'
>
> Uh, I see. But I'm rea
Darcy Buskermolen wrote:
> On October 31, 2006 08:53 am, Teodor Sigaev wrote:
> > > The problem as I remember it is pg_trgm, not tsearch2 directly; I've sent
> > > a self-contained test case directly to Teodor which shows the error.
> > >
> > > 'ERROR: index row requires 8792 bytes, maximum size
Gregory Stark wrote:
>
> "Tom Lane" <[EMAIL PROTECTED]> writes:
>
> > The added WAL volume should be pretty minimal, because only tuples that have
> > gone untouched for a long time incur extra work.
>
> That seems like a weak point in the logic. It seems like it would make VACUUM
> which is alr
Bruce, all:
This is a longstanding performance issue which just came up again on IRC,
and I can't find a TODO item for it. So I'd like it added to TODO.
Suggested phrasing:
-- Improve performance of queries with IN() clauses containing hundreds or
more literal values, possibly by re-writing
Josh Berkus wrote:
> Bruce, all:
>
> This is a longstanding performance issue which just came up again on IRC,
> and I can't find a TODO item for it. So I'd like it added to TODO.
> Suggested phrasing:
>
> -- Improve performance of queries with IN() clauses containing hundreds or
> more lite
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Josh Berkus wrote:
>> -- Improve performance of queries with IN() clauses containing hundreds or
>> more literal values, possibly by re-writing it as a join to a virtual
>> table.
> Hmm, wasn't there some work on this regard in 8.2?
I think it's pret
Current beta seems to have the 2006k version of the zic database. The current
zic database, however, is version 2006n. Since this update does not include
source code but just data files, I would vote to do the update during beta;
what do others think?
One prominent update concerns Brazil which changes to
"Joachim Wieland" <[EMAIL PROTECTED]> writes:
> Current beta seems to have the 2006k version of the zic database. The current
> zic database, however, is version 2006n. Since this update does not include
> source code but just data files, I would vote to do the update during beta;
> what do others think?
Tom,
> I think it's pretty much done. Try WHERE foo IN (VALUES (1),(2),...)
> if you have so many values that a non-nestloop join seems indicated.
Hmmm. Was there a reason not to automate this? Thread link is fine if
you can remember the subject line ... I can't find it on archives.
> The p
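Tom's suggested manual rewrite, spelled out (table and column names here are invented for illustration): presenting the literal list as a VALUES list turns it into a subquery the 8.2 planner can join against, instead of a long OR-expanded chain of equality checks.

```sql
-- Before: the planner expands this into hundreds of equality comparisons
SELECT * FROM orders WHERE customer_id IN (1, 2, 3 /* ... hundreds more */);

-- After: VALUES acts as a virtual table, so a hash join becomes possible
SELECT *
FROM orders
WHERE customer_id IN (VALUES (1), (2), (3) /* ... hundreds more */);
```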
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Huh, but the log would not be flushed for each operation that the vacuum
> logs. Only when it's going to commit.
It strikes me that the vacuum cost delay feature omits to consider
generation of WAL records as a cost factor. It may not be a big problem
I've been looking at adding SASL or GSSAPI as an auth method. I have
some questions about how to handle the flow of control changes.
When you do one of the above, authentication is not (necessarily)
a simple one-packet exchange. In fact the exchange may involve
trying several different
These are logs from Beta 2.
Did I miss a discussion where we removed the name of the portal
during parse and bind?
5715%2006-11-01 01:02:26.631 PST%454862a0.1653%SELECT LOG: execute
S_2: select t0.c_id, t0.c_contact, t0.c_credit_limit, t0.c_state,
t0.c_zip, t0.c_phone, t0.c_credit, t0.
"Henry B. Hotz" <[EMAIL PROTECTED]> writes:
> I notice that all the
> authentication (pg_fe_sendauth()) is done inside PWConnectPoll(),
> which sounds like something that isn't expected to block on network
> access.
That's right.
> Is this behavior important during startup?
You needn't bot
Dave Cramer <[EMAIL PROTECTED]> writes:
> These are logs from Beta 2.
With what logging settings? log_duration has rather different behavior
from what it used to do.
regards, tom lane
I created a user-defined function which invokes a remote Web service
internally and executed it with a SELECT statement.
select ( parameters)
When I implemented the Web service invocation statically, it worked.
However, when I tried dynamic invocation, I got the following errors.
==
Teodor Sigaev wrote:
>> The problem as I remember it is pg_trgm, not tsearch2 directly; I've
>> sent a self-contained test case directly to Teodor which shows the
>> error.
>> 'ERROR: index row requires 8792 bytes, maximum size is 8191'
> Uh, I see. But I'm really surprised why do you use pg_trgm
Alvaro Herrera wrote:
> Darcy Buskermolen wrote:
>> On October 31, 2006 08:53 am, Teodor Sigaev wrote:
The problem as I remember it is pg_trgm, not tsearch2 directly; I've sent
a self-contained test case directly to Teodor which shows the error.
'ERROR: index row requires 8792
Joshua D. Drake wrote:
> Alvaro Herrera wrote:
> > Darcy Buskermolen wrote:
> >> On October 31, 2006 08:53 am, Teodor Sigaev wrote:
> The problem as I remember it is pg_trgm, not tsearch2 directly; I've sent
> a self-contained test case directly to Teodor which shows the error.
>
>
>> We are not storing bytea, a customer is. We are trying to work around
>> customer requirements. The data that is being stored is not always text,
>> sometimes it is binary (a flash file or jpeg). We are using escaped text
>> to be able to search the string contents of that file.
>
> Hmm, have
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Do you mean you actually find stuff based on text attributes in JPEG
> images and the like? I thought those were compressed ...
Typically not --- the design assumption is that the text size wouldn't
amount to anything anyway compared to the image data,
"Chuck McDevitt" <[EMAIL PROTECTED]> writes:
> Equivalent, yes. But I can interpret that clause to mean I can show
> either the case-folded or non-case-folded value in the information
> schema, as they are equivalent.
Well, that's an interesting bit of specs-lawyering, but I don't see
how you can