How about:
Implement a function estimated_count that can be used instead of
count. It could use something like the algorithm in
src/backend/commands/analyze.c to get a reasonably accurate pseudo-count
quickly.
The advantage of this approach is that count still means (exact) count
(for your
Shridhar Daithankar wrote:
Something like select reltuples from pg_class where relname='foo'?
Shridhar
[chuckles] - I had envisaged something more accurate than the last
ANALYZE - estimated_count would effectively *do* acquire_sample_rows()
then and there for you...
regards
Mark
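A minimal sketch of the reltuples-based variant Shridhar suggests (the
function name and cast are assumptions; the result is only as fresh as
the last ANALYZE):

    CREATE FUNCTION estimated_count(tablename text) RETURNS bigint AS '
        SELECT reltuples::bigint FROM pg_class WHERE relname = $1;
    ' LANGUAGE SQL;

    -- usage:
    SELECT estimated_count('foo');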
is possible or reasonable - others who know feel free to
chip in :-)
regards
Mark
Randolf Richardson wrote:
[EMAIL PROTECTED] (Mark Kirkwood) wrote in
comp.databases.postgresql.hackers:
[sNip]
How about:
Implement a function estimated_count that can be used instead of
count. It could use
Could certainly do that - a scalar function that returns reltuples from
pg_class. I was hoping to do 2 additional things:
i) provide better accuracy than the last ANALYZE
ii) make it behave like an aggregate
So I wanted to be able to use estimated_count as you would use count, i.e.:
SELECT
Couldn't agree more - syntax like
SHOW TABLES;
is intuitive and somehow right - [chuckles] - MySQL does not have
*everything* wrong!
regards
Mark
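For reference, the catalog query such a command would wrap is roughly
the following (the pg_% filter to hide system tables is an assumption):

    SELECT tablename FROM pg_tables WHERE tablename NOT LIKE 'pg_%';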
Bruce Momjian wrote:
I like the idea of adding a new syntax to show that information using
simple SQL command syntax, and putting it in the
It would be interesting to see the results if you do this test again with :
- scaling factor somewhere in the 100-300 range (so the generated db ~=
your RAM amount)
- number of clients somewhere in 10-100
- number of transactions somewhere in 1000-1000
best wishes
Mark
P.s - Are you using the
Just updated now. Compiles. Fast fix :-)
Tom Lane wrote:
I plan to fix that later tonight
if no one beats me to it.
regards, tom lane
It seems to me that the existing situation is actually correct:
The configuration is a property of the initialized database cluster, so
a logical place for it is in the root of said cluster.
It is *not* a property of the installed binary distribution (e.g.
/usr/local/pgsql/etc) - as you may
[EMAIL PROTECTED] wrote:
IMHO my patch can do this in a self-documenting
way, thus making it easier to do, i.e.
postmaster -C /etc/postgres/fundb.conf
postmaster -C /etc/postgres/testdb.conf
I think that is far more intuitive than:
postmaster -D /some/path/who/knows/where/fundb
postmaster -D
Bruce Momjian wrote:
My idea was to put config files in /usr/local/pgsql/data/etc, not
pgsql/etc.
We don't put Unix configuration files in /, we put them in /etc.
Sorry, I missed the 'data' pathname. However - I may be a bit slow - but
I do not see how this will handle the situation where
Bruce Momjian wrote:
Mark Kirkwood wrote:
Bruce Momjian wrote:
My idea was to put config files in /usr/local/pgsql/data/etc, not
pgsql/etc.
We don't put Unix configuration files in /, we put them in /etc.
Sorry, I missed the 'data' pathname. However - I may be a bit slow
Bruce Momjian wrote:
Let me tell you the compromise I thought of.
First, we put the config files (postgresql.conf, pg_hba.conf,
pg_ident.conf) in data/etc by default.
Sorry Bruce,
I was being slow :-) , I was thinking you were going to associate the
config files with the binary
[EMAIL PROTECTED] wrote:
ii) I think the -C switch *WITH* the -D switch has viable usability.
Consider this, you are testing two different database layouts and/or RAID
controllers. You could easily bounce back and forth from *identical*
configurations like this:
Convenient indeed, but I would
Tom Lane wrote:
I think if you spelled the subdir name config rather than etc,
it would be more obvious what's what.
How about 'conf'? (familiar to anyone who has used Apache or Tomcat)
regards
Mark
Joe Conway wrote:
Tom Lane wrote:
Personally I rely quite a lot on setting PGDATA to keep straight which
installation I'm currently working with, so I'm not going to be happy
with a redesign that eliminates that variable without providing an
adequate substitute :-(
I'll second that.
I'll
Fresh checkout of CVS HEAD yesterday, updated today :
$ ./configure --prefix=/usr/local/pgsql/7.5
$ make
gmake[4]: Entering directory
`/usr/home/postgres/develop/c/pgsql/src/backend/access/nbtree'
gcc -O2 -fno-strict-aliasing -Wall -Wmissing-prototypes
-Wmissing-declarations
What scale factor TPC-H are you importing?
additionally - might be worth giving the specs of the machine you are
doing this on.
(I seem to recall trying this with Pg 7.2 a while ago without this
issue, mind you - think I had ~1G of RAM and used the scale factor 1
dataset, i.e. 1G)
regards
Mark
Have you tried this with Oracle or similar commercial database?
I have timed COPY/LOAD times for PostgreSQL/MySQL/Oracle/DB2 -
the rough comparison is:
DB2 and MySQL fastest (DB2 slightly faster)
Oracle approx twice as slow as DB2
PostgreSQL about 3.5-4 times slower than DB2
However
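For reference, the PostgreSQL side of such a load test is a single COPY
along these lines (table name and path hypothetical):

    COPY lineitem FROM '/tmp/lineitem.tbl' WITH DELIMITER '|';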
Tom Lane wrote:
Has anyone done the corresponding experiments on the other DBMSes to
identify exactly when they allow CURRENT_TIMESTAMP to advance?
I have DB2 on hand and examined CURRENT TIMESTAMP in an SQL procedure.
(IBM have implemented it without the _)
The short of it is that
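For comparison, PostgreSQL freezes CURRENT_TIMESTAMP at transaction
start, which is easy to demonstrate:

    BEGIN;
    SELECT current_timestamp;  -- reports transaction start time
    SELECT current_timestamp;  -- same value: it does not advance mid-transaction
    COMMIT;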
Dear hackers,
I have recently been playing with DB2 8.1 Beta. It has introduced a
feature to enable index clustering on more than one key. This reminded
me of a previous thread on HACKERS about index access anding/bitmaps in
Firebird. So anyway, here is a little snip from the 8.1 manual as a
Justin Clift wrote:
Hi guys,
Have created a Solaris 8 Intel package for PostgreSQL 7.3.1, but don't
have any SPARC boxes here any more.
Does anyone have a SPARC box handy that would be available for
compiling PostgreSQL 7.3.1 on? It would need to be Solaris 8 (or
maybe 9), and have things
Lamar Owen wrote:
On Wednesday 15 January 2003 09:20, Justin Clift wrote:
Sound like a plan? Will also need someone else with a Solaris 8 SPARC
system to try the packages out too, just in case there are weird library
dependencies happening that might catch us out.
I have access to
snippage
The Oracle system stops from time to time because of various arcane
reasons. You get the error message, look it up on alltheweb.com, and
fix it. The whole system is bogus. It DEMANDS a full time DBA.
PostgreSQL does not.
I could be accused of being cynical here (gosh)... but I
Adrian 'Dagurashibanipal' von Bidder wrote:
I'm just saying that there are
*some* small arcane details in postgres, too (although, at least, they
don't affect stability, just performance).
Indeed you are right... Pg has its own collection of arcane details too,
but hopefully the culture of
Fred Zellinger wrote:
I also am a Linux Nut and use Postgres whenever possible because I like the
freedom of access to the HACKERS mailing list...something only a few highly
renowned DBA snobs have with Oracle.
Indeed, I think this is a significant component of the appeal of open
source
I
On Fri, 2002-05-03 at 04:25, mlw wrote:
IMHO, if we can come up with a strong, positive statement, it would help MBA
trained CIOs and CTOs choose PostgreSQL. To them, it will show a professional
minded development group, it will be recognizable to them.
I am not so sure about that -
In my
There has been a discussion on the general list about this area. One of
the members produced a test case for demonstrating rapid size increase.
I decided to see if I could induce similar behaviour with a more
(seemingly) benign example.
I tried this :
1) Create a table and load 10 rows
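A sketch of the kind of test meant here (table definition and update
pattern are assumptions):

    CREATE TABLE growth (id integer, filler text);
    -- load the rows, then repeat many times:
    UPDATE growth SET filler = filler;  -- each pass dead-tuples every row
    -- after a VACUUM, relpages shows the on-disk size:
    SELECT relpages FROM pg_class WHERE relname = 'growth';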
On Sat, 2002-05-11 at 11:24, Tom Lane wrote:
Hannu Krosing [EMAIL PROTECTED] writes:
Was it not the case that lazy vacuum had problems freeing tuples that
have toasted fields ?
News to me if so.
regards, tom lane
It looks like this may in fact be the case.
I
On Wed, 2002-05-22 at 03:10, Tom Lane wrote:
(snippage) That might be enough to cause the growth. It may be
worth playing around with the details of the threshold-setting policy.
(snippage)
Possibly the initial threshold set in create_fsm_rel also needs to be
smaller than it is. Not sure
On Wed, 2002-06-05 at 16:33, Bruce Momjian wrote:
OK, I think I am now caught up on the Win32/cygwin discussion, and would
like to make some remarks.
First, are we doing enough to support the Win32 platform? I think the
answer is clearly no. There are 3-5 groups/companies working on Win32
What about the TPC-H benchmark ?
I can't recall if it has more than 10 tables, but it seemed like the
queries were quite good for a benchmark. In addition it comes with a
data generator.
regards
Mark
On Sat, 2002-07-13 at 04:05, Neil Conway wrote:
I'd like to look at the performance of the
Lamar Owen wrote:
And you added this by doing what, exactly? I'm not familiar with pg_dump
largefile support as a standalone feature.
Enabling largefile support for the utilities was accomplished by :
CFLAGS=-O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 ./configure ...
It seemed to me
Ralph Graulich wrote:
Hi,
just my two cents worth: I like having the files sized in a way I can
handle them easily with any UNIX tool on nearly any system. No matter
whether I want to cp, tar, dump, dd, cat or gzip the file: Just keep it at
a maximum size below any limits, handy for handling.
Andrew Sullivan wrote:
On Sat, Aug 10, 2002 at 09:21:07AM -0500, Greg Copeland wrote:
I'm actually amazed that postgres isn't already using large file
support. Especially for tools like dump.
Except it would only cause confusion if you ran such a program on a
system that didn't itself have
I was attempting to measure random page cost a while ago -
I used three programs in this archive :
http://techdocs.postgresql.org/markir/download/benchtool/
It writes a single big file and seems to give more realistic
measurements (like 6 for a Solaris SCSI system and 10 for a Linux IDE
Tom Lane wrote:
Perhaps it's time to remind people that what we want to measure
is the performance seen by a C program issuing write() and read()
commands, transferring 8K at a time, on a regular Unix filesystem
Yes...and at the risk of being accused of marketing ;-) , that is
exactly what
Curt Sampson wrote:
On Wed, 11 Sep 2002, Mark Kirkwood wrote:
Hm, it appears we've both been working on something similar. However,
I've just released version 0.2 of randread, which has the following
features:
funny how often that happens... (I think it's often worth the effort
We could perhaps do something similar to the Apache 1.3 win platform
notes, where they (still) say *something* like :
Apache on windows is not as stable as on unix... but is being actively
improved all the time
This is a bit more positive than "it's dangerous!".
As for people not reading the
The "warn 'em in several places" approach seems like a very good one.
regards
Mark
Matthew T. O'Connor wrote:
There will always be people who won't read the notes, or ignore the
notes, as there will always be people doing all sorts of stupid things
that we can't protect them from. There is only so much
[EMAIL PROTECTED] wrote:
The best phrasing would be "the accumulating overhead of deletes and
updates".
Yes.
Are you using 7.3?
I am asking because in 7.3 high update / delete tables could suffer
(index and toast) bloat that was untamable via (lazy) VACUUM and FSM.
I believe this is fixed in
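For a table already bloated that way, the usual remedies of that era
were (table name hypothetical):

    VACUUM FULL foo;    -- rewrites the table to reclaim dead space
    REINDEX TABLE foo;  -- rebuilds the bloated indexes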
Joshua D. Drake wrote:
The PostgreSQL Enhanced Server (How's that name? Maybe we call it Zerver
and use PEZ?) idea is how to take the excellent core of PostgreSQL and
productize it in much the same way distributions take the Linux kernel and
make a GNU/Linux system.
It would seem to me that
[EMAIL PROTECTED] wrote:
I have been talking about two types of problems which are both based on
PostgreSQL's behavior with frequently updated tables.
Summary table: In the single row table system, you have to vacuum very
frequently, and this affects performance.
Frequently updated tables: think
This seems like a nice idea -
It might even be worth targeting a couple pf specific ranges - e.g :
machines with 1G RAM and 4G RAM ( medium are large come to mind, I
know it's a bit like that other database product we know of but
that doesn't mean it's necessarily bad!)
Mark
Christopher
Oh dear, a lot of typos here, hopefully still decipherable... apologies.
Mark
Mark Kirkwood wrote:
This seems like a nice idea -
It might even be worth targeting a couple pf specific ranges - e.g :
machines with 1G RAM and 4G RAM ( medium are large come to mind,
I know it's a bit like
I don't know if this provides any more info than you already have -
but here are the last few lines from a single-process backend run with valgrind:
==19666== Syscall param write(buf) contains uninitialised or
unaddressable byte(s)
==19666==    at 0x404D94F8: __GI___libc_write (in /lib/libc-2.3.2.so)
For the past few days I have experienced compile failure with CVS HEAD
on win32.
The culprit is src/port/pipe.c which uses ereport() when it only
#includes c.h.
I cured the error by #including postgres.h - but after a bit of
thought suspect that utils/elog.h is all that is necessary!
Is
You are right! (goes away to sign up)
Merlin Moncure wrote:
p.s. this probably belongs on win32 hackers list.
Merlin
I noticed that compiling with 5_1 patch applied fails due to
XLOG_archive_dir being removed from xlog.c , but
src/backend/commands/tablecmds.c still uses it.
I did the following to tablecmds.c :
5408c5408
extern char XLOG_archive_dir[];
---
extern char
I tried what I thought was a straightforward scenario, and seem to have
broken it :-(
Here is the little tale
1) initdb
2) set archive_mode and archive_dest in postgresql.conf
3) startup
4) create database called 'test'
5) connect to 'test' and type 'checkpoint'
6) backup PGDATA using 'tar
Simon Riggs wrote:
First, thanks for sticking with it to test this.
I've not received such a message myself - this is interesting.
Is it possible to copy that directory to one side and re-run the test?
Add another parameter in postgresql.conf called archive_debug = true
Does it happen identically
Couldn't agree more. Maybe we should have made more noise :-)
Glen Parker wrote:
Simon Riggs wrote:
On Thu, 2004-07-15 at 23:18, Devrim GUNDUZ wrote:
Thanks for the vote of confidence, on or off list.
too many people spend a lot of
money for proprietary databases, just for
Simon Riggs wrote:
So far:
I've tried to re-create the problem as exactly as I can, but it works
for me.
This is clearly an important case to chase down.
I assume that this is the very first time you tried recovery? Second and
subsequent recoveries using the same set have a potential loophole,
on the second log (and gives the zero length
redo at 0/1E0 message).
Let me know if I can do any more legwork on this (I am considering
re-compiling with WAL_DEBUG now that example is simpler)
regards
Mark
Simon Riggs wrote:
On Thu, 2004-07-15 at 10:47, Mark Kirkwood wrote:
I tried what I thought
There are some silly bugs in the script:
- forgot to export PGDATA and PATH after changing them
- forgot to mention the need to edit test.sql (COPY line needs path to
dump file)
Apologies - I will submit a fixed version a little later
regards
Mark
Mark Kirkwood wrote:
A script to run the whole
Fixed.
Mark Kirkwood wrote:
There are some silly bugs in the script:
- forgot to export PGDATA and PATH after changing them
- forgot to mention the need to edit test.sql (COPY line needs path to
dump file)
Apologies - I will submit a fixed version a little later
regards
Mark
Mark Kirkwood wrote:
I have been doing some re-testing with CVS HEAD from about 1 hour ago
using the simplified example posted previously.
It is quite interesting:
i) create the table as:
CREATE TABLE test0 (filler TEXT);
and COPY 100,000 rows of length 109, then recovery succeeds.
ii) create the table as:
CREATE
Great that it's not fundamental - and hopefully with this discovery, the
probability you mentioned is being squashed towards zero a bit more :-)
Don't let this early bug detract from what is really a superb piece of work!
regards
Mark
Tom Lane wrote:
In any case this isn't a fundamental bug,
FYI - I can confirm that the patch fixes the main issue.
Simon Riggs wrote:
This was a very confusing test...Here's what I think happened:
.
The included patch doesn't attempt to address those issues, yet.
Best regards, Simon Riggs
This is presumably a standard feature of any PITR design - if the
failure event destroys the current transaction log, then you can only
recover transactions that committed in the last *archived* log.
regards
Mark
Simon Riggs wrote:
The test works, but gives what looks like strange results: the
Looks good to me. Log file numbering scheme seems to have changed - is
that part of the fix too?
Tom Lane wrote:
This is done in CVS tip. Mark, could you retest to verify it's fixed?
regards, tom lane
I was wondering about this point - might it not be just as reasonable
for the copied file to *be* an exact image of pg_control? Then a very
simple variant of pg_controldata (or maybe even just adding switches to
pg_controldata itself) would enable the relevant info to be extracted
P.s : would
Ok - that is a much better way of doing it!
regards
Mark
Tom Lane wrote:
Zeugswetter Andreas SB SD [EMAIL PROTECTED] writes:
If you use a readable file you will also need a feature for restore
(or a tool) to create an appropriate pg_control file, or are you
intending to still require that
Just out of interest, what happens to the difference if you use *ext3*
(perhaps with data=writeback)
regards
Mark
[EMAIL PROTECTED] wrote:
I did a little test on the various options of fsync.
...
create table testndx (value integer, name varchar);
create index testndx_val on testndx (value);
The pg_dump fix in 8.0 that stops the destruction of existing users in
the target database via DELETE FROM pg_shadow WHERE usesysid <> (...
would be great!
regards
Mark
Tom Lane wrote:
Comments anyone? Backpatches for other bugs?
Sorry - I meant pg_dump*all* rather than pg_dump.
Mark Kirkwood wrote:
The pg_dump fix in 8.0 that stops the destruction of existing users in
the target database via DELETE FROM pg_shadow WHERE usesysid <> (...
would be great!
regards
Mark
Tom Lane wrote:
Comments anyone? Backpatches for other
Greg Stark wrote:
It's only allowed when the transaction is in READ UNCOMMITTED
isolation level.
Something Postgres doesn't currently support. In fact I'm not aware of any SQL
database that supports it, though I'm sure there's one somewhere.
FYI - DB2 supports this isolation level, I don't
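For reference, PostgreSQL accepts the standard syntax but silently gives
READ COMMITTED semantics instead:

    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    -- behaves exactly like READ COMMITTED in PostgreSQL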
Tom Lane wrote:
I believe that the term bitmap index is also used with a different
meaning wherein it actually does describe a particular kind of on-disk
index structure, with one bit per table row.
IMHO building in-memory bitmaps (the first idea) is a very good idea to
pursue for Postgres. I'm
Simon Riggs wrote:
I believe that the benefit of on-disk bitmap indexes is supposed to be
reduced storage size (compared to btree).
The main problem is the need for the table to be read-only. Until we have
partitioning, we wouldn't be able to easily guarantee parts of a table as
being
Greg Stark wrote:
I think what you're trying to accomplish is better accomplished through
partitioned tables. Then the user can decide which keys to use to partition
the data and the optimizer can use the data to completely exclude some
partitions from consideration. And it wouldn't interfere with
Mark Kirkwood wrote:
I think that bitmap indexes provide a flexible may to get fact access
to the result set
that should be *fast* access to ... sorry
I like it - nice and simple, but targets a large (and likely) foot-gun
situation.
regards
Mark
Simon Riggs wrote:
If a further pg_ctl mode, recover, were implemented, this would allow a
fail-safe mode for recovery.
e.g. pg_ctl -D datadir recover
pg_ctl could then check for the existence of a
While this is nice, it will not help you if the restoration directory is
different from your archive directory. That is : restore_command in
recovery.conf fetches from somewhere other than where archive_command in
postgresql.conf copied.
I am not sure how likely this situation is, but setting
I was thinking that even mildly experienced folks could benefit from a
helpful sanity check. Typically the need to recover a system never comes
at a good time, and features that help prevent silly mistakes are a
great stress saver.
As an aside, while testing recovery during pre-beta, I think I
I am looking at implementing this TODO item. e.g. (max case):
rewrite
SELECT max(foo) FROM bar
as
SELECT foo FROM bar ORDER BY foo DESC LIMIT 1
if there is an index on bar(foo)
Suggestions about the most suitable point in the parser/planner stage to
perform this sort of rewrite would be most
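One subtlety such a rewrite must handle: max() ignores NULLs, while
ORDER BY foo DESC sorts NULLs first, so the transformed query needs an
explicit filter - a sketch:

    SELECT foo FROM bar
    WHERE foo IS NOT NULL
    ORDER BY foo DESC LIMIT 1;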
Tom Lane wrote:
A more radical way of handling it would be to detect the relevance of an
indexscan in indxpath.c and generate a special kind of Path node; this
would not generalize to other sorts of things as you were hoping, but
I'm unconvinced that the mechanism is going to be very
,
and if that works well, then look at extending to more complex rewrites.
What do you think?
Jim C. Nasby wrote:
On Thu, Nov 11, 2004 at 11:48:49AM +1300, Mark Kirkwood wrote:
I am looking at implementing this TODO item. e.g. (max case):
rewrite
SELECT max(foo) FROM bar
as
SELECT foo FROM bar ORDER
the best approach.
regards
Mark
Bruno Wolff III wrote:
On Thu, Nov 11, 2004 at 17:57:42 +1300,
Mark Kirkwood [EMAIL PROTECTED] wrote:
Your example and ones like :
SELECT max(foo), count(foo) FROM bar
SELECT max(a.foo1), max(b.foo2) FROM bar1 AS a NATURAL JOIN bar2 AS b
have made me realize
Probably for a small table, where the machinery of reading the index,
followed by checking the table for non-visible tuples is more costly
than just scanning the table!
regards
Mark
John Hansen wrote:
Why not just change the function altogether to 'select $1 from $2
order by $1 desc limit 1;'
It seems that the check in src/pl/plperl/plperl.c
eval_pv((safe_version < 2.09 ? safe_bad : safe_ok), FALSE);
is not working quite as expected (CVS HEAD from today):
I have Safe.pm at version 2.09, yet any plperl function I run fails with :
ERROR: error from function: trusted perl functions
A bit more thinking led me to try:
float safe_version;
...
eval_pv((safe_version < (float)2.09 ? safe_bad : safe_ok), FALSE);
which seems to fix the issue. (after all float *should* be accurate
enough in this case)
cheers
Mark
P.s : trivial patch attached
Andrew Dunstan wrote:
Could
I think a summary of where the discussion went might be helpful
(especially for me after a week or so away doing perl).
There were a number of approaches suggested, which I will attempt to
summarize in a hand-wavy fashion - (apologies for any misrepresentation
caused):
i) Rewrite max/min
Bruce Momjian wrote:
Peter Eisentraut wrote:
Does the FreeBSD one actually produce different output? I don't
remember seeing any of that and I am not running FreeBSD.
On my 5.4 system autoconf259 and gnu-autoconf both fetch the *same* src
file (autoconf-2.59.tar.bz2 with md5sum
Rodrigo Moreno wrote:
Hi All,
The pg_autovacuum on FreeBSD and pg 8.0.3 is not working. It just does
nothing: no log, nothing on screen, it does not daemonize.
It was OK on pg 7.4.6.
Could someone help me?
What version of FreeBSD are you running?
Mark
Jim C. Nasby wrote:
It should be possible to see what the crossover point is in terms of
benefit using dbt2 and tweaking the transactions that are run, something
I can do if there's interest. But I agree with Satoshi; if there are
people who will benefit from this option (which doesn't hurt
Tom Lane wrote:
Anyone have SMP boxes of other types that they can try this on?
For those of us running antiques:
2x PIII 1GHz 2G
Running on FreeBSD 6.0beta4 (non-debug kernel)
8.0.3:
N=1: 158s   N=2: 271s   N=4: 567s
8.1beta1 (2005-08-28):
N=1: 85s    N=2: 139s   N=4: 220s
Wow - a
Tom Lane wrote:
I attach two proposed patches: the first removes the cmpb/jne from
the x86 TAS assembly code, and the second one does the s_lock changes
enumerated as points #2, #3, #4. The first one in particular needs
more testing to see if it hurts performance on any non-Opteron x86
chips.
In two of the sections covered by #ifdef WAL_DEBUG there are
declarations like:
char buf[8192];
It seems to me that these should be:
char buf[BLCKSZ];
- or have I misunderstood what is going on here?
I realize that it's probably not terribly significant, as most people
will
Qingqing Zhou wrote:
Mark Kirkwood [EMAIL PROTECTED] wrote
In two of the sections covered by #ifdef WAL_DEBUG there are declarations
like:
char buf[8192];
Those two 8192 have nothing to do with BLCKSZ, it is just an arbitrary
buffer size as long as it is big enough to hold debug
Tom Lane wrote:
Gavin Sherry [EMAIL PROTECTED] writes:
I haven't had time to prototype whether we can easily implement async IO
Just as with any suggestion to depend on threads, you are going to have
to show results that border on astounding to have any chance of getting
this in. Otherwise
Gavin Sherry wrote:
The paper I linked to seemed to suggest that they weren't using async IO
in 9.2 -- which is fairly old. I'm not sure why the authors didn't test
10g.
There have been async IO type parameters in Oracle's init.ora files since
(at least) 8i (disk_async_io=true IIRC) - on
Simon,
I am seeing a reasonably reproducible performance boost after applying
your patch (I'm not sure if that was one of the main objectives, but it
certainly is nice).
I *was* seeing a noticeable decrease between 7.4.6 and 8.0.0RC1 running
pgbench. However, after applying your patch, 8.0 is
Simon Riggs wrote:
100pct.patch (SR)
Test results to date:
1. Mark Kirkwood ([HACKERS] [Testperf-general] BufferSync and bgwriter)
pgbench 1xCPU 1xDisk shared_buffers=1
showed 8.0RC1 had regressed compared with 7.4.6, but patch improved
performance significantly against 8.0RC1
It occurs
Mark Kirkwood wrote:
It occurs to me that cranking up the number of transactions (say
1000-10) and seeing if said regression persists would be
interesting. This would give the smoothing effect of the bgwriter
(plus the ARC) a better chance to shine.
I ran a few of these over the weekend
Bruce Momjian wrote:
well, I usually get results that differ by that much from run to run.
Probably you ran in to more checkpoints on the second test.
Also, did you reinitialize the bench database with pgbench -i ?
I destroyed the database and recreated it.
The only way I managed to
Michael Fuhr wrote:
On Thu, Jan 13, 2005 at 05:05:00PM -0500, Tom Lane wrote:
SELECT delitel(100, 1);
Mean times over the last five of six runs on my poor 500MHz FreeBSD
4.11-PRERELEASE box:
6741 ms 7.4.6 (from FreeBSD ports collection)
14427 ms 8.0.0rc5 (from CVS source)
Looks like
Marc G. Fournier wrote:
What the client did was a 'delete from pg_attribute where ... ' ...
The database is a 7.4.2 one ... my first thought was one of the older
standbys ... rebuild the schema and move the data files into place over
top of that ... but of course, 7.3 and beyond are OID based
Marc,
In case you don't work out a better way to sort this, I can reproduce
and fix the error 'catalog is missing n attribute(s) for relid 16396'
caused by directly deleting (part of) pg_attribute:
Setup :
$ initdb
$ pg_ctl start
$ createdb test
Backup :
$ pg_ctl stop
$ tar -czvf pgdata.tar.gz
Simon Riggs wrote:
On Thu, 2005-01-20 at 23:17 +1100, Neil Conway wrote:
(snippage)
For 8.0.x, I wonder
if it would be better to just replace ARC with LRU.
Sequential scans will still flood
the cache, but I don't view that as an enormous problem.
Agree with everything apart from the idea that
Jim C. Nasby wrote:
Does anyone have working code they could contribute? It would be best to
give at least an example in the docs. Even better would be something in
pgfoundry that helps build a summary table and the rules/triggers you
need to maintain it.
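A minimal sketch of the trigger-maintained approach (all names
hypothetical). Note that a single-row summary table like this turns
every insert/delete into an update of one hot row - exactly the
contention and bloat issue raised earlier for frequently updated tables:

    CREATE TABLE foo_summary (rowcount bigint);
    INSERT INTO foo_summary VALUES (0);

    CREATE FUNCTION foo_maintain_count() RETURNS trigger AS '
    BEGIN
        IF TG_OP = ''INSERT'' THEN
            UPDATE foo_summary SET rowcount = rowcount + 1;
        ELSE
            UPDATE foo_summary SET rowcount = rowcount - 1;
        END IF;
        RETURN NULL;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER foo_count AFTER INSERT OR DELETE ON foo
        FOR EACH ROW EXECUTE PROCEDURE foo_maintain_count();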
Marc G. Fournier wrote:
please let me know the URL for the message, so that I can see why it
was overlooked, and see if I can't improve the 'search' ...
Marc - here is one I submitted that was not picked up :-)
http://archives.postgresql.org/pgsql-patches/2005-01/msg00145.php
regards
Mark