On Thu, 2006-01-05 at 00:33 -0500, Greg Stark wrote:
Simon Riggs [EMAIL PROTECTED] writes:
The approach I suggested uses the existing technique for selecting
random blocks, then either an exhaustive check on all of the rows in a
block or the existing random row approach, depending upon
+&lt;listitem&gt;&lt;para&gt;Fix bug in &lt;filename&gt;contrib/pgcrypto&lt;/filename&gt; Openwall
+gen_salt processing (Marko Kreen)&lt;/para&gt;&lt;/listitem&gt;
I guess it should be a bit more explicit:
&lt;listitem&gt;&lt;para&gt;Fix bug in &lt;filename&gt;contrib/pgcrypto&lt;/filename&gt; gen_salt,
which caused it not to use all available salt space for the md5 and
xdes algorithms (Marko
Do you *really* want the median estimate in this case? Are you certain
you do not want something with the opposite behavior of Chaudhuri's
estimate, so that for small sample sizes the bias is toward a high
estimate of D? (Converges on D from the right instead of the left.)
Tom, et al,
Sorry for the longish email; if you're mostly interested in a change to
the ACL system to allow more privileges, then skip to the bottom, where
I worked up a change to give us more options without much of a
performance impact (I don't think so, anyway). Personally, I'd prefer
that to
Stephen Frost wrote:
I'm not a particularly big fan of this though because, while I'd
like to be able to give TRUNCATE permissions I'm not a big fan of SET
RELIABILITY because it would affect PITR backups.
As far as I have understood the discussion... with WAL archiving turned on,
the whole
* Michael Paesold ([EMAIL PROTECTED]) wrote:
Stephen Frost wrote:
I'm not a particularly big fan of this though because, while I'd
like to be able to give TRUNCATE permissions I'm not a big fan of SET
RELIABILITY because it would affect PITR backups.
As far as I have understood the
Josh Berkus josh@agliodbs.com writes:
Only if your sample is random and independent. The existing mechanism tries
fairly hard to ensure that every record has an equal chance of being
selected.
If you read the entire block and not appropriate samples then you'll
introduce systematic
Josh Berkus josh@agliodbs.com writes:
These statements are at odds with my admittedly basic understanding of
statistics. Isn't the power of a sample related more to the absolute
size of the sample than to the sample as a fraction of the population?
Why not just pick a smallish sample
Thanks. Updated.
---
Marko Kreen wrote:
+&lt;listitem&gt;&lt;para&gt;Fix bug in &lt;filename&gt;contrib/pgcrypto&lt;/filename&gt; Openwall
+gen_salt processing (Marko Kreen)&lt;/para&gt;&lt;/listitem&gt;
I guess it should be a bit more explicit:
&lt;listitem&gt;&lt;para&gt;Fix bug in
If we find at the bottom of test_config_settings() that we have not been
able to run successfully at all (i.e. status != 0 at about line 1183 of
initdb.c) is there any point in continuing? Don't we know that we are
bound to fail at the template1 creation stage? Maybe we should just exit
nicely
On Thu, 2006-01-05 at 09:41 -0500, Stephen Frost wrote:
* Michael Paesold ([EMAIL PROTECTED]) wrote:
Stephen Frost wrote:
I'm not a particularly big fan of this though because, while I'd
like to be able to give TRUNCATE permissions I'm not a big fan of SET
RELIABILITY because it would
Andrew Dunstan [EMAIL PROTECTED] writes:
If we find at the bottom of test_config_settings() that we have not been
able to run successfully at all (i.e. status != 0 at about line 1183 of
initdb.c) is there any point in continuing? Don't we know that we are
bound to fail at the template1
FYI, with the options merged, we still have this TODO item:
* %Remove behavior of postmaster -o
---
Peter Eisentraut wrote:
Here's the plan for assimilating the command-line options of the postmaster
and
Simon Riggs wrote:
So, we need a name for EXCLUSIVE mode that suggests how it is different
from TRUNCATE, and in this case, the difference is that EXCLUSIVE
preserves the previous contents of the table on recovery, while TRUNCATE
does not. Do you want to call the mode PRESERVE, or
Simon Riggs wrote:
On Thu, 2005-12-29 at 11:37 -0500, Bruce Momjian wrote:
Having COPY behave differently because it is
in a transaction is fine as long as it is user-invisible, but once you
require users to do that to get the speedup, it isn't user-invisible
anymore.
Since we're
Josh Berkus wrote:
Euler,
It should but it's not implemented yet. There is no difficulty in doing
it. But I want to propose the following idea: if some object depends on
another object and its type is 'DEPENDENCY_INTERNAL' we could
grant/revoke privileges automagically to it. Or maybe
Greg,
We *currently* use a block based sampling algorithm that addresses this
issue by taking care to select rows within the selected blocks in an
unbiased way. You were proposing reading *all* the records from the
selected blocks, which throws away that feature.
The block-based algorithms
On Thu, Jan 05, 2006 at 10:12:29AM -0500, Greg Stark wrote:
Worse, my recollection from the paper I mentioned earlier was that sampling
small percentages like 3-5% didn't get you an acceptable accuracy. Before you
got anything reliable you found you were sampling very large percentages of
the
hi
i was really more than happy when i saw table partitioning in release info for 8.1.
then i tried to use it, and hit some serious problem (described on pgsql-general).
basically the question is - is anybody at the moment working on improving partitioning capabilities? like improving queries to
hubert depesz lubaczewski wrote:
hi
i was really more than happy when i saw table partitioning in release
info for 8.1.
then i tried to use it, and hit some serious problem (described on
pgsql-general).
basically the question is - is anybody at the moment working on
improving partitioning
On Thu, 2006-01-05 at 12:27 -0500, Bruce Momjian wrote:
Seems like a nice optimization.
Negative thoughts: Toast tables have a toast index on them, yes? We have
agreed that we cannot use the optimization if we have indexes on the
main table. It follows that we cannot use the optimization if we
Jeremy Drake [EMAIL PROTECTED] writes:
We have encountered a very nasty but apparently rare bug which appears to
result in catalog corruption.
I've been fooling around with this report today. In several hours of
trying, I've been able to get one Assert failure from running Jeremy's
example on
I did some profiling related to printtup() with a simple libpq SELECT *
test program (revised from the libpq programming sample in the
documentation, without retrieving the results). There are 260k or so
records in table test(i int).
/* original version - prepare tuple and send */
SELECT * TIMING: 0.63 sec
Hinted by this thread:
http://archives.postgresql.org/pgsql-performance/2006-01/msg00016.php
I wonder if we should really implement the file-system-cache-warmup
strategy which we have discussed before. There are two natural good
places to do this:
(1) sequential scan
(2)
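A minimal sketch of what a file-level warmup hint could look like, using the POSIX posix_fadvise() call with POSIX_FADV_WILLNEED to ask the kernel to pre-read a file into its page cache. This illustrates the idea only; it is not code from any discussed patch, and the helper name is hypothetical.

```c
#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hint the kernel to prefetch the whole file into the page cache.
 * Returns 0 on success, nonzero on any failure. */
static int warm_file_cache(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    struct stat st;
    if (fstat(fd, &st) != 0) {
        close(fd);
        return -1;
    }

    /* Advisory only: the kernel may start readahead or ignore us. */
    int rc = posix_fadvise(fd, 0, st.st_size, POSIX_FADV_WILLNEED);
    close(fd);
    return rc;
}
```

Because the hint is advisory, it cannot force out pages the kernel considers hotter, which connects to Tom's objection later in the thread about second-guessing kernel cache policy.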
Qingqing Zhou [EMAIL PROTECTED] writes:
So we spend a portion of time preparing tuples in printtup() by
converting the tuple format to a network format. I am not quite familiar
with that part, so I wonder whether it is possible to send the original
tuple format with minimal preparation work
Qingqing Zhou [EMAIL PROTECTED] writes:
Hinted by this thread:
http://archives.postgresql.org/pgsql-performance/2006-01/msg00016.php
I wonder if we should really implement file-system-cache-warmup strategy
which we have discussed before.
The difference between the cached and non-cached
On Thu, 5 Jan 2006, Tom Lane wrote:
The difference between the cached and non-cached states is that the
kernel has seen fit to remove those pages from its cache. It is
reasonable to suppose that it did so because there was a more immediate
use for the memory. Trying to override that
Here is some additional information that I have managed to gather today
regarding this. It is not really what causes it, so much as what does
not.
I removed all plperl from the loading processes. I did a VACUUM FULL
ANALYZE, and then I reindexed everything in the database (Including
starting
Jeremy Drake [EMAIL PROTECTED] writes:
Here is some additional information that I have managed to gather today
regarding this. It is not really what causes it, so much as what does
not.
...
Similar for pg_type, there being 248 index row versions vs 244 row
versions in the table.
The
On Thu, 5 Jan 2006, Qingqing Zhou wrote:
Feasibility: Our bufmgr lock rewrite already makes this possible. But to
enable it, we may need more work: (w1) make the buffer pool relation-wise,
which makes our estimation of data page residence easier and more
reliable; (w2) add aggressive pre-read on
Josh Berkus josh@agliodbs.com writes:
Greg,
We *currently* use a block based sampling algorithm that addresses this
issue by taking care to select rows within the selected blocks in an
unbiased way. You were proposing reading *all* the records from the
selected blocks, which throws