Tom Lane wrote:
I've been making another pass over getting rid of buildfarm failures.
The remaining ones I see at the moment are:
firefly HEAD: intermittent failures in the stats test. We seem to have
fixed every other platform back in January, but not this one.
kudu HEAD: one-time
I would like to implement Allow postgresql.conf file values to be
changed via an SQL API, perhaps using SET GLOBAL functionality. Is
there anybody who works on it? Is there any detailed explanation?
Thanks Zdenek
---(end of broadcast)---
On Friday, 2 June 2006 at 09:46, Zdenek Kotala wrote:
I would like to implement Allow postgresql.conf file values to be
changed via an SQL API, perhaps using SET GLOBAL functionality. Is
there anybody who works on it? Is there any detailed explanation?
I don't think the semantics are all that
Tom Lane wrote:
I've been making another pass over getting rid of buildfarm failures.
The remaining ones I see at the moment are:
firefly HEAD: intermittent failures in the stats test. We seem to
have fixed every other platform back in January, but not this one.
firefly 7.4: dblink
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Andrew Dunstan
Sent: 02 June 2006 03:31
To: [EMAIL PROTECTED]
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] 'CVS-Unknown' buildfarm failures?
cvs-unknown means there are unknown
Hi All,
Just a small comment from a mortal user.
On Thursday 01 June 2006 19:28, Josh Berkus wrote:
5. random_page_cost (as previously discussed) is actually a function of
relatively immutable hardware statistics, and as such should not need to
exist as a GUC once the cost model is fixed.
Josh, Greg, and Tom,
I do not know how sensitive the plans will be to the correlation,
but one thought might be to map the histogram X histogram correlation
to a square grid of values. Then you can map them to an integer which
would give you 8 x 8 with binary values, a 5 x 5 with 4 values per
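The grid encoding sketched above (an 8 x 8 grid of binary values, or a 5 x 5 grid with 4 values per cell, folded into a single integer) can be illustrated directly. This is a hedged sketch of one possible row-major packing, not code from the thread:

```python
def pack_grid(grid, bits_per_cell):
    """Pack a square grid of small ints into one integer,
    row-major, bits_per_cell bits per cell."""
    value = 0
    for row in grid:
        for cell in row:
            assert 0 <= cell < (1 << bits_per_cell)
            value = (value << bits_per_cell) | cell
    return value

def unpack_grid(value, size, bits_per_cell):
    """Inverse of pack_grid for a size x size grid."""
    mask = (1 << bits_per_cell) - 1
    cells = []
    for _ in range(size * size):
        cells.append(value & mask)
        value >>= bits_per_cell
    cells.reverse()
    return [cells[i * size:(i + 1) * size] for i in range(size)]

# An 8 x 8 binary grid fits in 64 bits; a 5 x 5 grid with
# 4 values per cell (2 bits each) fits in 50 bits.
```

Either variant keeps the whole correlation summary in a single integer column of the statistics.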
Dave Page said:
I have repeatedly advised buildfarm member owners not to build by hand
in the buildfarm repos.
Not everybody listens, apparently.
The owner of snake can guarantee that that is not the case - that box
is not used for *anything* other than the buildfarm and hasn't even
Larry Rosenman said:
Tom Lane wrote:
I've been making another pass over getting rid of buildfarm failures.
The remaining ones I see at the moment are:
firefly HEAD: intermittent failures in the stats test. We seem to
have fixed every other platform back in January, but not this one.
Tom Lane wrote:
Or is it worth improving buildfarm to be able to skip specific tests?
There is a session on buildfarm improvements scheduled for the Toronto
conference. This is certainly one possibility.
cheers
andrew
-Original Message-
From: Andrew Dunstan [mailto:[EMAIL PROTECTED]
Sent: 02 June 2006 12:18
To: Dave Page
Cc: [EMAIL PROTECTED]; pgsql-hackers@postgresql.org
Subject: RE: [HACKERS] 'CVS-Unknown' buildfarm failures?
That's why I said almost always :-)
:-)
I strongly suspect
Stefan Kaltenbrunner [EMAIL PROTECTED] writes:
FWIW: lionfish had a weird make check error 3 weeks ago which I
(unsuccessfully) tried to reproduce multiple times after that:
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=lionfish&dt=2006-05-12%2005:30:14
Weird.
SELECT ''::text AS eleven,
Tom had posted a question about file compression with copy. I thought
about it, and I want to throw this out and see if anyone thinks it is a
good idea.
Currently, the COPY command only copies a table, what if it could operate
with a query, as:
COPY (select * from mytable where foo='bar') as
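The semantics being floated here (dump the result set of an arbitrary SELECT in COPY-style delimited form, rather than a whole table) can be illustrated client-side. This sketch uses Python's sqlite3 module purely as a stand-in engine, which is an assumption for illustration; the proposal itself concerns the PostgreSQL backend:

```python
import csv
import io
import sqlite3

def copy_query(conn, query):
    """Stream the result of an arbitrary query as
    tab-delimited text, COPY-style."""
    out = io.StringIO()
    writer = csv.writer(out, delimiter="\t", lineterminator="\n")
    for row in conn.execute(query):
        writer.writerow(row)
    return out.getvalue()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER, foo TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)",
                 [(1, "bar"), (2, "baz"), (3, "bar")])

# Equivalent in spirit to:
#   COPY (SELECT * FROM mytable WHERE foo = 'bar') TO ...
dump = copy_query(conn, "SELECT * FROM mytable WHERE foo = 'bar'")
```

The point of the proposal is that the server would do this filtering itself, so the client never sees the rows it doesn't want.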
Andrew Dunstan [EMAIL PROTECTED] writes:
I strongly suspect that snake is hitting the file/directory doesn't
disappear immediately when you unlink/rmdir problem on Windows that we have
had to code around inside Postgres. It looks like cvs is trying to prune an
empty directory but isn't fast
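The code-around being described (retrying an unlink/rmdir that can fail transiently on Windows while another process still has the file open) follows a generic retry pattern. A hedged sketch; the attempt count and delay here are invented for illustration, not PostgreSQL's actual values:

```python
import time

def rmdir_with_retry(rmdir, path, attempts=10, delay=0.1):
    """Retry a directory removal that may fail transiently
    (e.g. Windows keeps the entry visible briefly after
    unlink/rmdir). Returns the number of attempts used."""
    for i in range(attempts):
        try:
            rmdir(path)
            return i + 1
        except OSError:
            if i == attempts - 1:
                raise          # give up after the last attempt
            time.sleep(delay)

# Simulate a directory that refuses to go away twice, then succeeds.
calls = {"n": 0}
def flaky_rmdir(path):
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("directory not empty")

attempts_used = rmdir_with_retry(flaky_rmdir, "tmpdir", delay=0)
```

A tool like cvs that lacks such a retry loop will see spurious failures on exactly this kind of box.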
Andrew Dunstan [EMAIL PROTECTED] writes:
Larry Rosenman said:
If I generate fixes for firefly (I'm the owner), would they have a
prayer of being applied?
Sure, although I wouldn't bother with 7.3 - just take 7.3 out of firefly's
build schedule. That's not carte blanche on fixes, of course -
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
Larry Rosenman said:
If I generate fixes for firefly (I'm the owner), would they have a
prayer of being applied?
Sure, although I wouldn't bother with 7.3 - just take 7.3 out of
firefly's build schedule. That's not carte blanche on
Mark Woodward wrote:
Tom had posted a question about file compression with copy. I thought
about it, and I want to throw this out and see if anyone thinks it is a
good idea.
Currently, the COPY command only copies a table, what if it could operate
with a query, as:
COPY (select * from
Mark Woodward wrote:
Mark Woodward wrote:
Tom had posted a question about file compression with copy. I thought
about it, and I want to throw this out and see if anyone thinks it is a
good idea.
Currently, the COPY command only copies a table, what if it could operate
with a query, as:
Andrew Dunstan wrote:
Mark Woodward wrote:
Tom had posted a question about file compression with copy. I thought
about it, and I want to throw this out and see if anyone thinks it is a
good idea.
Currently, the COPY command only copies a table, what if it could operate
with a query, as:
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
I strongly suspect that snake is hitting the file/directory doesn't
disappear immediately when you unlink/rmdir problem on Windows that we have
had to code around inside Postgres. It looks like cvs is trying to prune an
empty directory
Andrew Dunstan [EMAIL PROTECTED] writes:
What's happening here is that cvs actually creates the directory and
then later prunes it when it finds it is empty.
I find that explanation pretty unconvincing. Why would cvs print a ?
for such a directory?
regards, tom lane
Tom Lane wrote:
Stefan Kaltenbrunner [EMAIL PROTECTED] writes:
FWIW: lionfish had a weird make check error 3 weeks ago which I
(unsuccessfully) tried to reproduce multiple times after that:
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=lionfish&dt=2006-05-12%2005:30:14
Weird.
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
What's happening here is that cvs actually creates the directory and
then later prunes it when it finds it is empty.
I find that explanation pretty unconvincing. Why would cvs print a ?
for such a directory?
cvs will print a ? if it
Larry Rosenman wrote:
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
Larry Rosenman said:
If I generate fixes for firefly (I'm the owner), would they have a
prayer of being applied?
Sure, although I wouldn't bother with 7.3 - just take 7.3 out of
firefly's build schedule.
Joshua D. Drake wrote:
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
What's happening here is that cvs actually creates the directory and
then later prunes it when it finds it is empty.
I find that explanation pretty unconvincing. Why would cvs print a ?
for such a directory?
Andrew Dunstan [EMAIL PROTECTED] writes:
I suppose I could provide a switch to turn it off ... in one recent case
the repo was genuinely not clean, though, so I am not terribly keen on
that approach - but I am open to persuasion.
No, I agree it's a good check. Just wondering if we can
Tom Lane wrote:
Sudden thought: is there any particularly good reason to use the cvs
update -P switch in buildfarm repositories? If we simply eliminated
the create/prune thrashing for these directories, it'd fix the problem,
if Andrew's idea is correct. Probably save a few cycles too. And
On Fri, 2006-06-02 at 09:56 -0400, Andrew Dunstan wrote:
Allow COPY to output from views
FYI, there is a patch for this floating around -- I believe it was
posted to -patches a few months back.
Another idea would be to allow actual SELECT statements in a COPY.
Personally I strongly
Neil Conway wrote:
On Fri, 2006-06-02 at 09:56 -0400, Andrew Dunstan wrote:
Allow COPY to output from views
FYI, there is a patch for this floating around -- I believe it was
posted to -patches a few months back.
I have it. The pieces of it that I can use to implement the idea below,
David Fetter [EMAIL PROTECTED] writes:
In the prior discussions someone posted the paper with the algorithm
I mentioned. That paper mentions that previous work showed poor
results at estimating n_distinct even with sample sizes as large as
50% or more.
Which paper? People have
Larry Rosenman wrote:
Larry Rosenman wrote:
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
Larry Rosenman said:
If I generate fixes for firefly (I'm the owner), would they have a
prayer of being applied?
Sure, although I wouldn't bother with 7.3 - just take 7.3 out of
Sorry, but I thought that it was the most appropriate list for the issue.
I was following these instructions:
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/custom-dict.html
And what happens is that the function works just once. Perhaps a malloc/free issue?
$ psql fuzzy
fuzzy=# select
Tom et al,
Bruce and I talked a little bit about modularizing the xlog code a bit
more. As you know, one of the PostgreSQL Summer of Code projects is
to enhance xlogdump. This and other projects which would like to be
able to use the xlog code directly (like resetxlog in the -f option)
rather
Rodrigo,
you gave us too little information. Did you use your own dictionary ?
What's your configuration, version, etc.
Oleg
On Fri, 2 Jun 2006, Rodrigo Hjort wrote:
Sorry, but I thought that it was the most appropriate list for the issue.
I was following these instructions:
Greg,
Using a variety of synthetic and real-world data sets, we show that
distinct sampling gives estimates for distinct values queries that
are within 0%-10%, whereas previous methods were typically 50%-250% off,
across the spectrum of data sets and queries studied.
Aha. It's a
Mark Woodward wrote:
...
create table as select ...; followed by a copy of that table
if it really is faster than just the usual select fetch?
Why create table?
Just to simulate and time the proposal.
SELECT ... already works over the network and if COPY from a
select (which would basically
Josh Berkus josh@agliodbs.com writes:
Using a variety of synthetic and real-world data sets, we show that
distinct sampling gives estimates for distinct values queries that
are within 0%-10%, whereas previous methods were typically 50%-250% off,
across the spectrum of data sets
Tino Wildenhain [EMAIL PROTECTED] writes:
Ok, but why not just implement this into pg_dump or psql?
Why bother the backend with that functionality?
You're not seriously suggesting we reimplement evaluation of WHERE clauses
on the client side, are you?
regards, tom lane
Greg Stark wrote:
Josh Berkus josh@agliodbs.com writes:
Using a variety of synthetic and real-world data sets, we show that
distinct sampling gives estimates for distinct values queries that
are within 0%-10%, whereas previous methods were typically 50%-250% off,
across the spectrum
Greg Stark [EMAIL PROTECTED] writes:
And a 5% sample is pretty big. In fact my tests earlier showed the i/o from
5% block sampling took just as long as reading all the blocks. Even if we
figure out what's causing that (IMHO surprising) result and improve matters I
would only expect it to be
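The claim that small samples estimate distinct values poorly is easy to reproduce with a toy simulation. The numbers below are invented for illustration: a table of 100,000 rows holding 10,000 distinct values, sampled at 5%. Both the raw distinct-in-sample count and a naive scale-up miss badly:

```python
import random

random.seed(42)

N_ROWS, N_DISTINCT = 100_000, 10_000
rows = [i % N_DISTINCT for i in range(N_ROWS)]  # each value appears 10 times

sample = random.sample(rows, N_ROWS // 20)      # 5% row sample
d_sample = len(set(sample))                     # raw distinct-in-sample
d_scaled = d_sample * 20                        # naive scale-up

# d_sample lands far below the true 10,000 (each value has only a
# ~40% chance of appearing in the sample at all), while the naive
# scale-up overshoots wildly -- the error regime quoted above.
```

This is exactly why estimators that interpolate between these two extremes, or that sample differently, are being discussed.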
On Fri, Jun 02, 2006 at 01:39:32PM -0700, Michael Dean wrote:
I'm sorry to interrupt your esoteric (to me) discussion, but I have
a very simple question: would you define a good unbiased sample?
My statistics professor Dan Price (rest his soul) would tell me
there are only random samples of
Tom Lane wrote:
Tino Wildenhain [EMAIL PROTECTED] writes:
Ok, but why not just implement this into pg_dump or psql?
Why bother the backend with that functionality?
You're not seriously suggesting we reimplement evaluation of WHERE clauses
on the client side, are you?
no, did I? But what is
Now that we've got a nice amount of tuneability in the bgwriter, it
would be nice if we had as much insight into how it's actually doing.
I'd like to propose that the following info be added to the stats
framework to assist in tuning it:
bgwriter_rounds - number of rounds that have run
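The proposal above is a set of cumulative counters in the stats framework. A minimal sketch of the bookkeeping; only bgwriter_rounds comes from the message, and the other field names here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class BgwriterStats:
    """Cumulative bgwriter counters of the kind proposed above.
    Only bgwriter_rounds is from the proposal; buffers_written
    and maxwritten_stops are assumed names for this sketch."""
    bgwriter_rounds: int = 0
    buffers_written: int = 0
    maxwritten_stops: int = 0

    def record_round(self, written, hit_limit):
        self.bgwriter_rounds += 1
        self.buffers_written += written
        self.maxwritten_stops += 1 if hit_limit else 0

    def avg_written_per_round(self):
        return self.buffers_written / max(self.bgwriter_rounds, 1)

stats = BgwriterStats()
for written, hit in [(10, False), (25, True), (5, False)]:
    stats.record_round(written, hit)
```

Derived figures like the average writes per round are what would actually guide tuning of the bgwriter GUCs.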
Mark Woodward wrote:
...
pg_dump -t mytable | psql -h target -c "COPY mytable FROM STDIN"
With a more selective copy, you can use pretty much this mechanism to
limit a copy to a subset of the records in a table.
Ok, but why not just implement this into pg_dump or psql?
Why bother the backend
On Fri, Jun 02, 2006 at 09:56:07AM -0400, Andrew Dunstan wrote:
Mark Woodward wrote:
Tom had posted a question about file compression with copy. I thought
about it, and I want to throw this out and see if anyone thinks it is a
good idea.
Currently, the COPY command only copies a table,
Tom Lane [EMAIL PROTECTED] writes:
Greg Stark [EMAIL PROTECTED] writes:
And a 5% sample is pretty big. In fact my tests earlier showed the i/o from
5% block sampling took just as long as reading all the blocks. Even if we
figure out what's causing that (IMHO surprising) result and
Tino Wildenhain [EMAIL PROTECTED] writes:
Tom Lane wrote:
Tino Wildenhain [EMAIL PROTECTED] writes:
Ok, but why not just implement this into pg_dump or psql?
Why bother the backend with that functionality?
You're not seriously suggesting we reimplement evaluation of WHERE clauses
on
Greg, Tom,
But for most users analyze doesn't really have to run as often as
vacuum. One sequential scan per night doesn't seem like that big a deal
to me.
Clearly you don't have any 0.5 TB databases.
I'd still be worried about the CPU pain though. ANALYZE can afford to
expend a
Josh Berkus wrote:
Greg, Tom,
But for most users analyze doesn't really have to run as often as
vacuum. One sequential scan per night doesn't seem like that big a deal
to me.
Clearly you don't have any 0.5 TB databases.
Perhaps something like ANALYZE FULL? Then only those who need the
I wrote:
In general it seems to me that for CPU-bound databases, the default values
of the cpu_xxx_cost variables are too low. ... rather than telling people
to manipulate all three of these variables individually, I think it might
also be a good idea to provide a new GUC variable named
Oleg,
Actually I got PG 8.1.4 compiled from source on a Debian GNU/Linux 2.6.16-k7-2.
My locale is pt_BR, but I configured TSearch2 to use rules from the 'simple'.
Then I just followed the instructions from the link. The fact is that it only works the first time.
Regards,
Rodrigo Hjort
Josh Berkus josh@agliodbs.com writes:
Greg, Tom,
But for most users analyze doesn't really have to run as often as
vacuum. One sequential scan per night doesn't seem like that big a deal
to me.
Clearly you don't have any 0.5 TB databases.
Actually I did not so long ago.
One objection to this is that after moving off the gold standard of
1.0 = one page fetch, there is no longer any clear meaning to the
cost estimate units; you're faced with the fact that they're just an
arbitrary scale. I'm not sure that's such a bad thing, though. For
instance, some people
We got another report of this failure today:
http://archives.postgresql.org/pgsql-novice/2006-06/msg00020.php
which I found particularly interesting because it happened on a Fedora
machine, and I had thought Fedora impervious because it considers
glibc-common a standard component. Seems it can
Rod Taylor [EMAIL PROTECTED] writes:
One objection to this is that after moving off the gold standard of
1.0 = one page fetch, there is no longer any clear meaning to the
cost estimate units; you're faced with the fact that they're just an
arbitrary scale. I'm not sure that's such a bad
Allow COPY to output from views
Another idea would be to allow actual SELECT statements in a COPY.
Personally I strongly favor the second option as being more flexible
than the first.
I second that - allowing arbitrary SELECT statements as a COPY source
seems much more powerful and
Not to be a sour apple or anything but I don't see how any of this is
needed in the backend since we can easily use Psql to do it, or pretty
much any other tool.
There is an important difference between a capability in the backend vs
one synthesized in the frontend.
And that would be? The
Hello,
I was looking at this todo item and I was wondering why we want to do
this? I have had to use -o -P on many occasions and was wondering if
there is something new to replace it in newer PostgreSQL?
Joshua D. Drake
--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Mark Woodward wrote:
Allow COPY to output from views
Another idea would be to allow actual SELECT statements in a COPY.
Personally I strongly favor the second option as being more flexible
than the first.
I second that - allowing arbitrary SELECT statements as a COPY source
seems much more
VIEW.
Not to be a sour apple or anything but I don't see how any of this is
needed in the backend since we can easily use Psql to do it, or pretty
much any other tool.
There is an important difference between a capability in the backend vs
one synthesized in the frontend.
After much
Joshua D. Drake wrote:
Hello,
I was looking at this todo item and I was wondering why we want to do
this? I have had to use -o -P on many occasions and was wondering if
there is something new to replace it in newer PostgreSQL?
Uh, are you confusing it with
postgres -O -P?
Keep in mind
Alvaro Herrera [EMAIL PROTECTED] writes:
Joshua D. Drake wrote:
I was looking at this todo item and I was wondering why we want to do
this? I have had to use -o -P on many occasions and was wondering if
there is something new to replace it in newer PostgreSQL?
Keep in mind that postgres
Tino Wildenhain [EMAIL PROTECTED] writes:
Tom Lane wrote:
You're not seriously suggesting we reimplement evaluation of WHERE clauses
on the client side, are you?
no, did I? But what is wrong with something like:
\COPY 'SELECT foo,bar,baz FROM footable WHERE baz=5 ORDER BY foo' TO
Jim Nasby [EMAIL PROTECTED] wrote
Now that we've got a nice amount of tuneability in the bgwriter, it
would be nice if we had as much insight into how it's actually doing.
I'd like to propose that the following info be added to the stats
framework to assist in tuning it:
In general, I