Greg Stark <[EMAIL PROTECTED]> writes:
> Perhaps what this indicates is that the real meat is in track sampling, not
> block sampling.
Fwiw, I've done a little benchmarking and I'm starting to think this isn't a
bad idea. I see a dramatic speed improvement for samples of 1-10% as the block size
On Saturday 03 June 2006 17:27, Tom Lane wrote:
> PFC <[EMAIL PROTECTED]> writes:
> >[snip - complicated update logic proposal]
> > What do you think ?
>
> Sounds enormously complicated and of very doubtful net win --- you're
>
> [snip - ... bad idea reasoning] :)
What if every backend wh
Original Message
From: Tom Lane <[EMAIL PROTECTED]>
kudu HEAD: one-time failure 6/1/06 in statement_timeout test, never seen
before. Is it possible system was under enough load that the 1-second
timeout fired before control reached the exception block?
The load here was
Andreas Pflug wrote:
> Tom Lane wrote:
> > After re-reading what I just wrote to Andreas about how compression of
> > COPY data would be better done outside the backend than inside, it
> > struck me that we are missing a feature that's fairly common in Unix
> > programs. Perhaps COPY ought to have
Martijn van Oosterhout wrote:
> On Wed, May 31, 2006 at 01:08:28PM -0700, Steve Atkins wrote:
> > On May 31, 2006, at 12:58 PM, Dave Page wrote:
> > >On 31/5/06 19:13, "Andreas Pflug" <[EMAIL PROTECTED]> wrote:
> > >
> > >>I wonder if we'd be able to ship gzip with t
Greg Stark wrote:
It would have been awfully nice to be able to do
SELECT ... FROM (VALUES (a,b,c),(d,e,f),(g,h,i))
The trouble with supporting it for any case other than INSERT is that
you have to work out what the column datatypes of the construct ought
to be. This is the same as
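The construct Greg wishes for can be tried out without a patched PostgreSQL: SQLite also accepts a bare VALUES list as a table expression, so a minimal sketch of VALUES-in-FROM (using SQLite purely as a stand-in engine; the column names it assigns are its own defaults, not anything this thread specifies) is:

```python
import sqlite3

# VALUES used as a table expression inside FROM -- the construct
# discussed above, demonstrated against SQLite's in-memory engine.
conn = sqlite3.connect(":memory:")
rows = conn.execute(
    "SELECT * FROM (VALUES (1, 'a'), (2, 'b'), (3, 'c'))"
).fetchall()
print(rows)  # -> [(1, 'a'), (2, 'b'), (3, 'c')]
```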
Hannu Krosing <[EMAIL PROTECTED]> writes:
> Disks can read at full rotation speed, so skipping (not reading) some
> blocks will not make reading the remaining blocks from the same track
> faster. And if there are more than 20 8k pages per track, you still have
> a very high probability you need
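Hannu's point can be made concrete with a quick calculation: if blocks are sampled independently at fraction f and a track holds n blocks, the chance that at least one block on the track is needed is 1 - (1 - f)^n. A minimal sketch (the 20-pages-per-track figure is from the message above; the 5% rate is the sample size discussed elsewhere in the thread):

```python
# Probability that a whole track must be read when blocks are
# sampled independently at a given fraction.
def track_hit_probability(sample_fraction: float, blocks_per_track: int) -> float:
    """P(at least one of the track's blocks is in the sample)."""
    return 1.0 - (1.0 - sample_fraction) ** blocks_per_track

# With 20 8k pages per track and a 5% block sample, roughly two
# thirds of all tracks still have to be read:
p = track_hit_probability(0.05, 20)
print(f"{p:.3f}")  # -> 0.642
```

which is why skipping 95% of the blocks buys much less I/O than one might expect.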
pgbench appears to already support arbitrary SQL queries with the -f
switch, so why couldn't we just make it a little smarter and have people
enable SQL query logging for a day or two, then pass the log off to
pgbench:
pgbench -f
Seems to me like that wouldn't be too difficult to do, and would g
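For reference, `-f` takes a script of SQL commands, one per line, optionally using pgbench's variable substitution. A hypothetical script distilled from a query log might look like this (the script name, table, and value ranges are made up for illustration; the `\setrandom` meta-command is pgbench's syntax of that era):

```sql
-- weblog.sql: replay-style script for pgbench -f
\setrandom uid 1 100000
SELECT * FROM users WHERE user_id = :uid;
UPDATE users SET last_seen = now() WHERE user_id = :uid;
```

and would be run as something like `pgbench -f weblog.sql -c 10 -t 1000 mydb`.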
On Fri, 2006-06-02 at 16:23, Greg Stark wrote:
> And a 5% sample is pretty big. In fact my tests earlier showed the i/o from
> 5% block sampling took just as long as reading all the blocks. Even if we
> figure out what's causing that (IMHO surprising) result and improve matter
Tom Lane <[EMAIL PROTECTED]> writes:
> The interesting point here is that a is defined as a
> parenthesized , which means that you ought to be able to
> use a parenthesized VALUES list anyplace you could use a parenthesized
> SELECT. So FROM lists, IN clauses, = ANY and friends, etc all really o
On Sat, 2006-06-03 at 10:43, Jim Nasby wrote:
> Might also be worth adding analyze delay settings, ala
> vacuum_cost_delay.
Actually we should have delay settings for all potential
(almost-)full-scan service ops - VACUUM, ANALYSE, CREATE INDEX, ADD
CONSTRAINT, maybe more - s
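For context, the existing knobs Jim alludes to live in postgresql.conf; a sketch of the cost-based delay settings (the values are illustrative, not recommendations):

```
# Cost-based delay for VACUUM (ANALYZE shares this machinery)
vacuum_cost_delay = 10    # ms to sleep once the cost limit is hit; 0 disables
vacuum_cost_limit = 200   # accumulated page-cost that triggers the sleep
```

The proposal above would extend the same throttling idea to the other full-scan operations.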
Tom Lane wrote:
Greg Stark <[EMAIL PROTECTED]> writes:
PFC <[EMAIL PROTECTED]> writes:
MySQL already does this for INSERT :
INSERT INTO x (a,b) VALUES (1,2), (3,4), (5,6)...;
The above syntax is SQL-standard, so we ought to support it sometime,
performance benefits or no.
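The multi-row form is easy to try against SQLite as well, which accepts the same SQL-standard syntax (SQLite is used here only as a convenient stand-in; the table is hypothetical):

```python
import sqlite3

# SQL-standard multi-row INSERT ... VALUES, as in the MySQL example above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE x (a INTEGER, b INTEGER)")
conn.execute("INSERT INTO x (a, b) VALUES (1, 2), (3, 4), (5, 6)")
rows = conn.execute("SELECT a, b FROM x ORDER BY a").fetchall()
print(rows)  # -> [(1, 2), (3, 4), (5, 6)]
```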
Greg Stark <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> writes:
>> Supporting VALUES only in INSERT would be relatively trivial BTW,
>> but the spec actually thinks it should be allowed as a
>> in FROM ...
> How does that syntax work?
If you look at SQL92, INSERT ... VALUES is actu
Jim Nasby wrote:
yes, it's a file/directory it doesn't know about.
At one stage I suppressed these checks, but I found that too many
times we saw errors due to unclean repos. So now buildfarm insists
on having a clean repo.
I suppose I could provide a switch to turn it off ... in one
Tom Lane <[EMAIL PROTECTED]> writes:
> Supporting VALUES only in INSERT would be relatively trivial BTW,
> but the spec actually thinks it should be allowed as a
> in FROM ...
How does that syntax work?
INSERT INTO x (a,b) from select x,y,z from t from select x2,y2,z2 from t
? doesn't seem to
On Jun 2, 2006, at 5:24 PM, Todd A. Cook wrote:
Josh Berkus wrote:
Greg, Tom,
But for most users analyze doesn't really have to run as often as vacuum.
One sequential scan per night doesn't seem like that big a deal to me.
Clearly you don't have any 0.5 TB databases.
Perhaps something lik
On Jun 2, 2006, at 10:27 AM, Andrew Dunstan wrote:
Joshua D. Drake wrote:
Tom Lane wrote:
Andrew Dunstan <[EMAIL PROTECTED]> writes:
What's happening here is that cvs actually creates the directory
and then later prunes it when it finds it is empty.
I find that explanation pretty unconvinci
PFC <[EMAIL PROTECTED]> writes:
> What do you think ?
Sounds enormously complicated and of very doubtful net win --- you're
moving a lot of overhead into SELECT in order to make UPDATE cheaper,
and on top of that the restriction to same-page will limit the
usefulness quite a lot (unless we d
Mark Woodward wrote:
>> Mark Woodward wrote:
...
>>> This runs completely in the background and can serve as a running
>>> backup.
>> And you are sure it would be much faster then a server local running
>> psql just dumping the result of a query?
>
> No I can't be sure of that at all, but Th
Hello,
Sometimes people complain that UPDATE is slow in postgres. UPDATE...
- generates dead tuples which must be vacuumed.
- needs to hit all indexes even if only one column was modified.
From what I know UPDATE creates a new copy of the old row with the
rele
MySQL already does this for INSERT :
INSERT INTO x (a,b) VALUES (1,2), (3,4), (5,6)...;
Does MySQL really let you stream that? Trying to do syntax like that in
Postgres wouldn't work because the parser would try to build up a parse
tree for the whole statement before runnin
Greg Stark <[EMAIL PROTECTED]> writes:
> PFC <[EMAIL PROTECTED]> writes:
>> MySQL already does this for INSERT :
>> INSERT INTO x (a,b) VALUES (1,2), (3,4), (5,6)...;
> Does MySQL really let you stream that? Trying to do syntax like that in
> Postgres wouldn't work because the parser would try to
PFC <[EMAIL PROTECTED]> writes:
> > I was also vaguely pondering whether all the DDL commands could be
> > generalized to receive or send COPY formatted data for repeated execution.
> > It would be neat to be able to prepare an UPDATE with placeholders and
> > stream data in COPY format as parame
I was also vaguely pondering whether all the DDL commands could be
generalized to receive or send COPY formatted data for repeated execution.
It would be neat to be able to prepare an UPDATE with placeholders and
stream data in COPY format as parameters to the UPDATE to execute it thousand
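The prepare-once / stream-many-parameter-rows shape described here is roughly what `executemany` gives you at the client level today. A sketch of the idea (sqlite3 used as a stand-in engine; table and values are hypothetical):

```python
import sqlite3

# One UPDATE with placeholders, driven by a stream of parameter rows --
# the client-side analogue of streaming COPY-format parameters.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t (id, val) VALUES (?, ?)",
                 [(1, "old"), (2, "old"), (3, "old")])

# The "stream" here is just an iterator of parameter tuples.
updates = ((f"new{i}", i) for i in (1, 3))
conn.executemany("UPDATE t SET val = ? WHERE id = ?", updates)

result = conn.execute("SELECT id, val FROM t ORDER BY id").fetchall()
print(result)  # -> [(1, 'new1'), (2, 'old'), (3, 'new3')]
```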
Think about version API compatibility.
Suppose you have a working database on server A which uses module foo
version 1.
Some time passes, you buy another server B and install postgres on it.
Meanwhile the module foo has evolved into version 2 which is cooler, but
has some minor A