Hi,
Georgi Chulkov wrote:
Please allow me to ask then:
1. In your opinion, would the above scenario indeed benefit from a raw-device
interface for large objects?
No, because file systems also try to do what you outline above. They
certainly don't split sequential data up into blocks and
Bruce Momjian wrote:
If we only prune on an update (or insert), why not just prune every
time? I figure the prune/defrag has to be lighter than the
update/insert itself.
Pruning is a quite costly operation. You need to check the visibility of
each tuple on the page, following tuple chains
Tom Lane wrote:
* We also need to think harder about when to invoke the page pruning
code. As the patch stands, if you set a breakpoint at
heap_page_prune_opt it'll seem to be hit constantly (eg, once for every
system catalog probe), which seems uselessly often. And yet it also
seems not
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
Tom Lane wrote:
What I think we'd need to have a complete solution is
convert(text, name) returns bytea
-- convert from DB encoding to arbitrary encoding
convert(bytea, name, name) returns bytea
-- convert between any two
On a fine day, Tue, 2007-09-18 at 08:08, Andrew Dunstan wrote:
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
Tom Lane wrote:
What I think we'd need to have a complete solution is
convert(text, name) returns bytea
-- convert from DB encoding to arbitrary
Hannu Krosing wrote:
On a fine day, Tue, 2007-09-18 at 08:08, Andrew Dunstan wrote:
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
Tom Lane wrote:
What I think we'd need to have a complete solution is
convert(text, name) returns bytea
--
Bruce Momjian [EMAIL PROTECTED] writes:
Tom Lane wrote:
But then what happens when you want to update a second tuple on the same
page? None of our existing plan types release and reacquire pin if they
don't have to, and I really doubt that we want to give up that
optimization.
You will
Heikki Linnakangas [EMAIL PROTECTED] writes:
There is one wacky idea I haven't dared to propose yet:
We could lift the limitation that you can't defragment a page that's
pinned, if we play some smoke and mirrors in the buffer manager. When
you prune a page, make a *copy* of the page you're
Georgi Chulkov [EMAIL PROTECTED] writes:
Here's the reason why I'm looking at raw device storage for large objects
only
(as opposed to all tables): with raw device I/O I can control, to an extent,
spatial locality. So, if I have an application that wants to store N large
objects (totaling
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Tom Lane wrote:
But then what happens when you want to update a second tuple on the same
page? None of our existing plan types release and reacquire pin if they
don't have to, and I really doubt that we want to give up that
FYI, I am speaking in Moscow twice in the next week. I leave tomorrow
(Wednesday), and return the following Thursday, 9/27. I am staying with
Teodor so will have internet access and will try to address any major
issues while I am away.
--
Bruce Momjian [EMAIL PROTECTED]
in 8.2
SELECT headline('default', 'a b c', 'c'::tsquery,
E'StartSel=<span class=\\"style1\\">, StopSel=</span>');
In 8.2 this produces an error:
SELECT headline('default', 'a b c', 'c'::tsquery,
'StartSel=<span class=style1>, StopSel=</span>');
ERROR: syntax error
--
Teodor Sigaev
Andrew Dunstan [EMAIL PROTECTED] writes:
What's bothering me here though is that in the two argument forms, if
the first argument is text the second argument is the destination
encoding, but if the first argument is a bytea the second argument is
the source encoding. That strikes me as
Pavan Deolasee [EMAIL PROTECTED] writes:
On 9/18/07, Tom Lane [EMAIL PROTECTED] wrote:
Perhaps we could
replace that heuristic with something that is page-local; seems like
dividing the total used space by the number of item pointers would give
at least a rough approximation of the page's
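The page-local heuristic suggested above (total used space divided by the number of item pointers) could be sketched as follows; the helper name is hypothetical and this is not PostgreSQL's actual implementation:

```c
#include <stddef.h>

/* Rough per-page average tuple size, as suggested: the space used on
 * the page divided by its item-pointer count. Illustrative only. */
size_t
approx_avg_tuple_size(size_t used_space, unsigned num_item_pointers)
{
    if (num_item_pointers == 0)
        return 0;  /* empty page: nothing to estimate */
    return used_space / num_item_pointers;
}
```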
On 9/18/07, Tom Lane [EMAIL PROTECTED] wrote:
Another option would be to prune whenever the free space goes
below table fillfactor
If default fillfactor weren't 100% then this might be good ;-). But
we could use max(1-fillfactor, BLCKSZ/8) or some such.
Yet another option would be to
It was suggested to me today that I should clarify how others should be
able to test this patch themselves by writing a sort of performance
reviewer's guide; that information has been scattered among material
covering development. That's what you'll find below. Let me know if any
of it seems
I wrote:
* The patch makes undocumented changes that cause autovacuum's decisions
to be driven by total estimated dead space rather than total number of
dead tuples. Do we like this?
No one seems to have picked up on this point, but after reflection
I think there's actually a pretty big
Hi all,
As we will soon enter beta, I decided to give a try to 8.3devel. The
first step is of course to load a dump from an existing database.
The dump is a text dump of 1.6 GB (database is approximately 4 GB).
The restore is far slower than with 8.2. From time to time ALTER TABLE
queries
On 9/18/07, Tom Lane [EMAIL PROTECTED] wrote:
I wrote:
* The patch makes undocumented changes that cause autovacuum's decisions
to be driven by total estimated dead space rather than total number of
dead tuples. Do we like this?
No one seems to have picked up on this point, but after
Tom Lane wrote:
I wrote:
* The patch makes undocumented changes that cause autovacuum's decisions
to be driven by total estimated dead space rather than total number of
dead tuples. Do we like this?
If we do this, then it's not clear that
Pavan Deolasee [EMAIL PROTECTED] writes:
On 9/18/07, Tom Lane [EMAIL PROTECTED] wrote:
In a system with
HOT running well, the reasons to vacuum a table will be:
1. Remove dead index entries.
2. Remove LP_DEAD line pointers.
3. Truncate off no-longer-used end pages.
4. Transfer knowledge
On 9/18/07, Tom Lane [EMAIL PROTECTED] wrote:
* I'm still pretty unhappy about the patch's use of a relcache copy of
GetAvgFSMRequestSize()'s result. The fact that there's no provision for
ever updating the value while the relcache entry lives is part of it,
but the bigger part is that I'd
On 9/18/07, Tom Lane [EMAIL PROTECTED] wrote:
Pavan Deolasee [EMAIL PROTECTED] writes:
On 9/18/07, Tom Lane [EMAIL PROTECTED] wrote:
In a system with
HOT running well, the reasons to vacuum a table will be:
1. Remove dead index entries.
2. Remove LP_DEAD line pointers.
3. Truncate
On Tuesday 18 September 2007 17:15:43 Teodor Sigaev wrote:
in 8.2
SELECT headline('default', 'a b c', 'c'::tsquery,
E'StartSel=<span class=\\"style1\\">, StopSel=</span>');
In 8.2 this produces an error:
SELECT headline('default', 'a b c', 'c'::tsquery,
'StartSel=<span class=style1>,
Tom Lane wrote:
Anyway, on the strength of that, these functions are definitely
best named to stay away from the spec syntax, so +1 for your
proposal above.
OK, I have committed this and the other functional changes that
should close the encoding holes.
Catalog version
On Tue, Sep 18, 2007 at 11:32:52AM -0400, Tom Lane wrote:
Another option would be to prune whenever the free space goes
below the table fillfactor and hope that users would set fillfactor so that
at least one updated tuple can fit in the block. I know it's not best to
rely on the users though.
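A rough sketch of this trigger, combined with the BLCKSZ/8 floor suggested upthread so a 100% fillfactor still prunes; the function name and standalone framing are illustrative, not PostgreSQL's actual code:

```c
#include <stdbool.h>
#include <stddef.h>

#define BLCKSZ 8192  /* PostgreSQL's default block size */

/* Prune when free space drops below the larger of the fillfactor
 * reservation and BLCKSZ/8. Hypothetical sketch for illustration. */
bool
should_prune(size_t free_space, int fillfactor_pct)
{
    size_t reserved = (size_t) BLCKSZ * (100 - fillfactor_pct) / 100;
    size_t floor_bytes = BLCKSZ / 8;
    size_t threshold = reserved > floor_bytes ? reserved : floor_bytes;
    return free_space < threshold;
}
```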
[EMAIL PROTECTED] (Andrew Dunstan) writes:
Log Message:
The two argument form of convert() is gone,
Um ... so that means CONVERT(c USING y) now fails entirely? That might
be going a bit far. If we do want to get rid of that syntax then I'm
noting a lack of parser changes in this patch.
Hello,
pg_ctl -w -D ... start doesn't work when unix_socket_directory is set
to somewhere other than the compiled-in default (/tmp). Having this work
is useful for startup scripts, so that the "done" status actually means
success instead of maybe.
Jeff Davis wrote about it a while ago:
Tom Lane wrote:
[EMAIL PROTECTED] (Andrew Dunstan) writes:
Log Message:
The two argument form of convert() is gone,
Um ... so that means CONVERT(c USING y) now fails entirely? That might
be going a bit far. If we do want to get rid of that syntax then I'm
noting a lack of parser
This has a trivial workaround - just set PGHOST for pg_ctl:
[EMAIL PROTECTED] inst.codfix.5705]$
PGHOST=/home/andrew/pgl/inst.codfix.5705 bin/pg_ctl -D data/ -l logfile
-w start
waiting for server to start... done
server started
[EMAIL PROTECTED] inst.codfix.5705]$
cheers
andrew
Andrew Dunstan [EMAIL PROTECTED] writes:
We can revert that if necessary. It will open up a hole, though. Take
your pick - spec compliance or validly coded data.
I would rather take CONVERT USING out altogether, than have an
implementation that so clearly disregards the spec as to not even
Tom Lane wrote:
OTOH we may be talking at cross-purposes --- on looking into gram.y
I see that this syntax is transformed to a call of convert_using(),
which may mean it has nothing to do with your changes.
No, I changed convert_using -
Andrew Dunstan [EMAIL PROTECTED] writes:
Um ... so that means CONVERT(c USING y) now fails entirely? That might
be going a bit far. If we do want to get rid of that syntax then I'm
noting a lack of parser changes in this patch.
No, that's still there. The only change there is that it
Andrew Dunstan [EMAIL PROTECTED] [18-09-2007 23:42]:
This has a trivial workaround - just set PGHOST for pg_ctl:
[EMAIL PROTECTED] inst.codfix.5705]$
PGHOST=/home/andrew/pgl/inst.codfix.5705 bin/pg_ctl -D data/ -l logfile -w
start
That would be fine for a particular installation, but isn't
Radoslaw Zielinski [EMAIL PROTECTED] writes:
pg_ctl -w -D ... start doesn't work when unix_socket_directory is set
to somewhere other than the compiled-in default (/tmp).
pg_ctl not working is going to be the very least of your worries;
pretty much nothing else will work either.
If you want some
Radoslaw Zielinski wrote:
Andrew Dunstan [EMAIL PROTECTED] [18-09-2007 23:42]:
This has a trivial workaround - just set PGHOST for pg_ctl:
[EMAIL PROTECTED] inst.codfix.5705]$
PGHOST=/home/andrew/pgl/inst.codfix.5705 bin/pg_ctl -D data/ -l logfile -w
start
That would be
Andrew Dunstan [EMAIL PROTECTED] writes:
Tom Lane wrote:
No. We have a function overloading system; we should use it.
In general I agree with you.
What's bothering me here though is that in the two argument forms, if the
first
argument is text the second argument is the destination
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
We can revert that if necessary. It will open up a hole, though. Take
your pick - spec compliance or validly coded data.
I would rather take CONVERT USING out altogether, than have an
implementation that so clearly disregards
On Tue, Sep 18, 2007 at 09:31:03AM -0700, Joshua D. Drake wrote:
Tom Lane wrote:
I wrote:
* The patch makes undocumented changes that cause autovacuum's decisions
to be driven by total estimated dead space rather than total number of
dead
On Tue, Sep 18, 2007 at 06:51:04PM +0200, Guillaume Smet wrote:
Hi all,
As we will soon enter beta, I decided to give a try to 8.3devel. The
first step is of course to load a dump from an existing database.
The dump is a text dump of 1.6 GB (database is approximately 4 GB).
The restore
Decibel! wrote:
On Tue, Sep 18, 2007 at 09:31:03AM -0700, Joshua D. Drake wrote:
If we do this, then it's not clear that having pgstats track dead space
is worth the trouble at all. It might possibly be of value for testing
purposes to see how