Andrew Dunstan wrote:
It fixes the build error on Windows - haven't tried because I don't have
time, but I know it won't work on Cygwin, because WIN32 isn't (usually)
defined on Cygwin - see previous almost endless discussions.
Yes - I recall that discussion a while ago.
This patch should sort the
Andrew Dunstan wrote:
Mark Kirkwood wrote:
Andrew Dunstan wrote:
It fixes the build error on Windows - haven't tried because I don't have
time, but I know it won't work on Cygwin, because WIN32 isn't (usually)
defined on Cygwin - see previous almost endless discussions.
Yes - I recall
Andrew Dunstan wrote:
... is apparently broken for Windows and Cygwin. See for example
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=loris&dt=2005-03-16%2001:55:33
cheers
hmmm - never tried to compile it on win32! I am getting http 502 from
the above url, so I will have a go at building on
Tom Lane wrote:
Would those of you with access to other DBMSes try this:
create table tab (col integer);
select 1 from tab having 1=0;
select 1 from tab having 1=1;
insert into tab values(1);
insert into tab values(2);
select 1 from tab having 1=0;
select 1 from tab having 1=1;
I claim that a
Tom Lane wrote:
The question is whether we are willing to back-patch a fairly large
amount of not-very-well-tested code into 8.0. See
http://archives.postgresql.org/pgsql-patches/2005-02/msg00123.php
http://archives.postgresql.org/pgsql-committers/2005-02/msg00127.php
Bruce Momjian wrote:
Jim C. Nasby wrote:
On Mon, Feb 14, 2005 at 09:55:38AM -0800, Ron Mayer wrote:
I still suspect that the correct way to do it would not be
to use the single correlation, but 2 stats - one for estimating
how sequential/random accesses would be; and one for estimating
the number
Andrew Dunstan wrote:
There is no news in the problem you're complaining of. It's completely
known and documented. You've stated before that you've been using
PostgreSQL for years - why is this suddenly so urgent that we have to
drop everything and backpatch old releases? Please move along,
Although this is all true, consider that adding hints will mean that the
Pg developers *never* get bug reports to drive the optimizer improvement
process. This will have the effect of stagnating its development. I
think this would be a bad thing :-)
As an aside, note that DB2 UDB does not let you
[EMAIL PROTECTED] wrote:
actually [EMAIL PROTECTED], is Mark Woodward. Pleased to meet you.
:)
(I hate using my name on lists like this because of spammers)
Not to be confused with me :-)
---(end of broadcast)---
TIP 6: Have you searched our list archives?
Maybe I am missing something - ISTM that you can increase your
statistics target for those larger tables to obtain a larger (i.e.
better) sample.
regards
Mark
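For anyone following along, raising the statistics target for a column is a one-liner; table and column names below are placeholders, and the target (up to 1000 in releases of this era) trades ANALYZE time for sample size:

```sql
-- sample more rows for this column at the next ANALYZE
ALTER TABLE bigtable ALTER COLUMN col SET STATISTICS 500;
ANALYZE bigtable;
```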
[EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] writes:
Any and all random sampling assumes a degree of uniform distribution. This
is the
[EMAIL PROTECTED] wrote:
In this case, the behavior observed could be changed by altering the
sample size for a table. I submit that an arbitrary fixed sample size is
not a good base for the analyzer, but that the sample size should be based
on the size of the table or some calculation of its
Marc G. Fournier wrote:
please let me know the URL for the message, so that I can see why it
was overlooked, and see if I can't improve the 'search' ...
Marc - here is one I submitted that was not picked up :-)
http://archives.postgresql.org/pgsql-patches/2005-01/msg00145.php
regards
Mark
Jim C. Nasby wrote:
Does anyone have working code they could contribute? It would be best to
give at least an example in the docs. Even better would be something in
pgfoundry that helps build a summary table and the rules/triggers you
need to maintain it.
Simon Riggs wrote:
On Thu, 2005-01-20 at 23:17 +1100, Neil Conway wrote:
(snippage)
For 8.0.x, I wonder
if it would be better to just replace ARC with LRU.
Sequential scans will still flood
the cache, but I don't view that as an enormous problem.
Agree with everything apart from the idea that
Michael Fuhr wrote:
On Thu, Jan 13, 2005 at 05:05:00PM -0500, Tom Lane wrote:
SELECT delitel(100, 1);
Mean times over the last five of six runs on my poor 500MHz FreeBSD
4.11-PRERELEASE box:
6741 ms 7.4.6 (from FreeBSD ports collection)
14427 ms 8.0.0rc5 (from CVS source)
Looks like
Marc G. Fournier wrote:
What the client did was a 'delete from pg_attribute where ... ' ...
The database is a 7.4.2 one ... my first thought was one of the older
standbys ... rebuild the schema and move the data files into place over
top of that ... but of course, 7.3 and beyond are OID based
Marc,
In case you don't work out a better way to sort this, I can reproduce
and fix the error 'catalog is missing n attribute(s) for relid 16396'
caused by directly deleting (part of) pg_attribute:
Setup :
$ initdb
$ pg_ctl start
$ createdb test
Backup :
$ pg_ctl stop
$ tar -czvf pgdata.tar.gz
Bruce Momjian wrote:
well, I usually get results that differ by that much from run to run.
Probably you ran in to more checkpoints on the second test.
Also, did you reinitialize the bench database with pgbench -i ?
I destroyed the database and recreated it.
The only way I managed to
Mark Kirkwood wrote:
It occurs to me that cranking up the number of transactions (say
1000-10) and seeing if said regression persists would be
interesting. This would give the smoothing effect of the bgwriter
(plus the ARC) a better chance to shine.
I ran a few of these over the weekend
Simon Riggs wrote:
100pct.patch (SR)
Test results to date:
1. Mark Kirkwood ([HACKERS] [Testperf-general] BufferSync and bgwriter)
pgbench 1xCPU 1xDisk shared_buffers=1
showed 8.0RC1 had regressed compared with 7.4.6, but patch improved
performance significantly against 8.0RC1
It occurs
Simon,
I am seeing a reasonably reproducible performance boost after applying
your patch (I'm not sure if that was one of the main objectives, but it
certainly is nice).
I *was* seeing a noticeable decrease between 7.4.6 and 8.0.0RC1 running
pgbench. However, after applying your patch, 8.0 is
I think a summary of where the discussion went might be helpful
(especially for me after a week or so away doing perl).
There were a number of approaches suggested, which I will attempt to
summarize in a hand wavy fashion - (apologies for any misrepresentation
caused):
i) Rewrite max/min
It seems that the check in src/pl/plperl/plperl.c
eval_pv((safe_version < 2.09 ? safe_bad : safe_ok), FALSE);
is not working quite as expected (CVS HEAD from today):
I have Safe.pm at version 2.09, yet any plperl function I run fails with :
ERROR: error from function: trusted perl functions
A bit more thinking led me to try:
float safe_version;
...
eval_pv((safe_version < (float)2.09 ? safe_bad : safe_ok), FALSE);
which seems to fix the issue. (after all float *should* be accurate
enough in this case)
cheers
Mark
P.s : trivial patch attached
Andrew Dunstan wrote:
Could
I am looking at implementing this TODO item. e.g. (max case):
rewrite
SELECT max(foo) FROM bar
as
SELECT foo FROM bar ORDER BY foo DESC LIMIT 1
if there is an index on bar(foo)
Suggestions about the most suitable point in the parser/planner stage to
perform this sort of rewrite would be most
Tom Lane wrote:
A more radical way of handling it would be to detect the relevance of an
indexscan in indxpath.c and generate a special kind of Path node; this
would not generalize to other sorts of things as you were hoping, but
I'm unconvinced that the mechanism is going to be very
and if that works well, then look at extending to more complex rewrites.
What do you think?
Jim C. Nasby wrote:
On Thu, Nov 11, 2004 at 11:48:49AM +1300, Mark Kirkwood wrote:
I am looking at implementing this TODO item. e.g. (max case):
rewrite
SELECT max(foo) FROM bar
as
SELECT foo FROM bar ORDER
the best approach.
regards
Mark
Bruno Wolff III wrote:
On Thu, Nov 11, 2004 at 17:57:42 +1300,
Mark Kirkwood [EMAIL PROTECTED] wrote:
Your example and ones like :
SELECT max(foo), count(foo) FROM bar
SELECT max(a.foo1), max(b.foo2) FROM bar1 AS a NATURAL JOIN bar2 AS b
have made me realize
Probably for a small table, where the machinery of reading the index,
followed by checking the table for non-visible tuples is more costly
than just scanning the table!
regards
Mark
John Hansen wrote:
Why not just change the function altogether to 'select $1 from $2
order by $1 desc limit 1;'
I like it - nice and simple, but targets a large (and likely) foot gun
situation.
regards
Mark
Simon Riggs wrote:
If a further pg_ctl mode, recover, were implemented, this would allow a
fail safe mode for recovery.
e.g. pg_ctl -D datadir recover
pg_ctl could then check for the existence of a
While this is nice, it will not help you if the restoration directory is
different from your archive directory. That is : restore_command in
recovery.conf fetches from somewhere other than where archive_command in
postgresql.conf copied.
I am not sure how likely this situation is, but setting
I was thinking that even mildly experienced folks could benefit from a
helpful sanity check. Typically the need to recover a system never comes
at a good time, and features that help prevent silly mistakes are a
great stress saver.
As an aside, while testing recovery during pre beta, I think I
Greg Stark wrote:
I think what you're trying to accomplish is better accomplished through
partitioned tables. Then the user can decide which keys to use to partition
the data and the optimizer can use the data to completely exclude some
partitions from consideration. And it wouldn't interfere with
Mark Kirkwood wrote:
I think that bitmap indexes provide a flexible may to get fact access
to the result set
that should be *fast* access to - sorry
Simon Riggs wrote:
I believe that the benefit of on-disk bitmap indexes is supposed to be
reduced storage size (compared to btree).
The main problem is the need for the table to be read-only. Until we have
partitioning, we wouldn't be able to easily guarantee parts of a table as
being
Tom Lane wrote:
I believe that the term bitmap index is also used with a different
meaning wherein it actually does describe a particular kind of on-disk
index structure, with one bit per table row.
IMHO building in-memory bitmaps (the first idea) is a very good idea to
pursue for Postgres. I'm
Greg Stark wrote:
It's only allowed when the transaction is in READ UNCOMMITTED
isolation level.
Something Postgres doesn't currently support. In fact I'm not aware of any SQL
database that supports it, though I'm sure there's one somewhere.
FYI - DB2 supports this isolation level, I don't
The pg_dump fix in 8.0 that stops the destruction of existing users in
the target database via DELETE FROM pg_shadow WHERE usesysid (...
would be great!
regards
Mark
Tom Lane wrote:
Comments anyone? Backpatches for other bugs?
Sorry - I meant pg_dump*all* rather than pg_dump.
Mark Kirkwood wrote:
The pg_dump fix in 8.0 that stops the destruction of existing users in
the target database via DELETE FROM pg_shadow WHERE usesysid (...
would be great!
regards
Mark
Tom Lane wrote:
Comments anyone? Backpatches for other
Just out of interest, what happens to the difference if you use *ext3*
(perhaps with data=writeback)
regards
Mark
[EMAIL PROTECTED] wrote:
I did a little test on the various options of fsync.
...
create table testndx (value integer, name varchar);
create index testndx_val on testndx (value);
Ok - that is a much better way of doing it!
regards
Mark
Tom Lane wrote:
Zeugswetter Andreas SB SD [EMAIL PROTECTED] writes:
If you use a readable file you will also need a feature for restore
(or a tool) to create an appropriate pg_control file, or are you
intending to still require that
I was wondering about this point - might it not be just as reasonable
for the copied file to *be* an exact image of pg_control? Then a very
simple variant of pg_controldata (or maybe even just adding switches to
pg_controldata itself) would enable the relevant info to be extracted
P.s : would
Looks good to me. Log file numbering scheme seems to have changed - is
that part of the fix too?
Tom Lane wrote:
This is done in CVS tip. Mark, could you retest to verify it's fixed?
regards, tom lane
Great that it's not fundamental - and hopefully with this discovery, the
probability you mentioned is being squashed towards zero a bit more :-)
Don't let this early bug detract from what is really a superb piece of work!
regards
Mark
Tom Lane wrote:
In any case this isn't a fundamental bug,
FYI - I can confirm that the patch fixes main issue.
Simon Riggs wrote:
This was a very confusing test...Here's what I think happened:
.
The included patch doesn't attempt to address those issues, yet.
Best regards, Simon Riggs
This is presumably a standard feature of any PITR design - if the
failure event destroys the current transaction log, then you can only
recover transactions that committed in the last *archived* log.
regards
Mark
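For context, the pair of settings involved (as released in 8.0 - the paths below are made up) is exactly where the mismatch described above can creep in, since nothing forces the two commands to point at the same directory:

```
# postgresql.conf
archive_command = 'cp %p /mnt/server/archive/%f'

# recovery.conf
restore_command = 'cp /mnt/server/archive/%f %p'
```

Here %p is the full path of the log file and %f its file name; recovery can only replay up to the last segment that archive_command actually copied.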
Simon Riggs wrote:
The test works, but gives what looks like strange results: the
I have been doing some re-testing with CVS HEAD from about 1 hour ago
using the simplified example posted previously.
It is quite interesting:
i) create the table as:
CREATE TABLE test0 (filler TEXT);
and COPY 100 000 rows of length 109, then recovery succeeds.
ii) create the table as:
CREATE
There are some silly bugs in the script:
- forgot to export PGDATA and PATH after changing them
- forgot to mention the need to edit test.sql (COPY line needs path to
dump file)
Apologies - I will submit a fixed version a little later
regards
Mark
Mark Kirkwood wrote:
A script to run the whole
fixed.
Mark Kirkwood wrote:
There are some silly bugs in the script:
- forgot to export PGDATA and PATH after changing them
- forgot to mention the need to edit test.sql (COPY line needs path to
dump file)
Apologies - I will submit a fixed version a little later
regards
Mark
Mark Kirkwood wrote
on the second log (and gives the zero length
redo at 0/1E0 message).
Let me know if I can do any more legwork on this (I am considering
re-compiling with WAL_DEBUG now that example is simpler)
regards
Mark
Simon Riggs wrote:
On Thu, 2004-07-15 at 10:47, Mark Kirkwood wrote:
I tried what I thought
I tried what I thought was a straightforward scenario, and seem to have
broken it :-(
Here is the little tale
1) initdb
2) set archive_mode and archive_dest in postgresql.conf
3) startup
4) create database called 'test'
5) connect to 'test' and type 'checkpoint'
6) backup PGDATA using 'tar
Simon Riggs wrote:
First, thanks for sticking with it to test this.
I've not received such a message myself - this is interesting.
Is it possible to copy that directory to one side and re-run the test?
Add another parameter in postgresql.conf called archive_debug = true
Does it happen identically
Couldn't agree more. Maybe we should have made more noise :-)
Glen Parker wrote:
Simon Riggs wrote:
On Thu, 2004-07-15 at 23:18, Devrim GUNDUZ wrote:
Thanks for the vote of confidence, on or off list.
too many people spend a lot of
money for proprietary databases, just for
Simon Riggs wrote:
So far:
I've tried to re-create the problem as exactly as I can, but it works
for me.
This is clearly an important case to chase down.
I assume that this is the very first time you tried recovery? Second and
subsequent recoveries using the same set have a potential loophole,
I noticed that compiling with 5_1 patch applied fails due to
XLOG_archive_dir being removed from xlog.c , but
src/backend/commands/tablecmds.c still uses it.
I did the following to tablecmds.c :
5408c5408
extern char XLOG_archive_dir[];
---
extern char
For the past few days I have experienced compile failure with CVS HEAD
on win32.
The culprit is src/port/pipe.c which uses ereport() when it only
#includes c.h.
I cured the error by #including postgres.h - but after a bit of
thought suspect that utils/elog.h is all that is necessary!
Is
You are right! (goes away to sign up)
Merlin Moncure wrote:
p.s. this probably belongs on win32 hackers list.
Merlin
I don't know if this provides any more info than you already have -
but is my last few lines from a single process backend run with valgrind :
==19666== Syscall param write(buf) contains uninitialised or
unaddressable byte(s)
==19666==at 0x404D94F8: __GI___libc_write (in /lib/libc-2.3.2.so)
This seems like a nice idea -
It might even be worth targeting a couple pf specific ranges - e.g :
machines with 1G RAM and 4G RAM ( medium are large come to mind, I
know it's a bit like that other database product we know of but
that doesn't mean it's necessarily bad!)
Mark
Christopher
Oh dear, a lot of typos here, hopefully still decipherable... apologies.
Mark
Mark Kirkwood wrote:
This seems like a nice idea -
It might even be worth targeting a couple pf specific ranges - e.g :
machines with 1G RAM and 4G RAM ( medium are large come to mind,
I know it's a bit like
Joshua D. Drake wrote:
The PostgreSQL Enhanced Server (How's that name? Maybe we call it Zerver
and use PEZ?) idea is how to take the excellent core of PostgreSQL and
productize it in much the same way distributions take the Linux kernel and
make a GNU/Linux system.
It would seem to me that
[EMAIL PROTECTED] wrote:
I have been talking about two types of problems which are both based on
PostgreSQL's behavior with frequently updated tables.
Summary table: In the single row table system, you have to vacuum very
frequently, and this affects performance.
Frequently updated tables: think
[EMAIL PROTECTED] wrote:
The best phrasing would be the accumulating overhead of deletes and
updates.
Yes.
Are you using 7.3?
I am asking because in 7.3 high update / delete tables could suffer
(index and toast) bloat that was untamable via (lazy) VACUUM and FSM.
I believe this is fixed in
The warn 'em in several places seems like a very good approach.
regards
Mark
Matthew T. O'Connor wrote:
There will always be people who won't read the notes, or ignore the
notes, as there will always be people doing all sorts of stupid things
that we can't protect them from. There is only so much
We could perhaps do something similar to the Apache 1.3 win platform
notes, where they (still) say *something* like :
Apache on windows is not as stable as on unix... but is being actively
improved all the time
This is a bit more positive than "it's dangerous!".
As for people not reading the
Fresh checkout of CVS HEAD yesterday, updated today :
$ ./configure --prefix=/usr/local/pgsql/7.5
$ make
gmake[4]: Entering directory
`/usr/home/postgres/develop/c/pgsql/src/backend/access/nbtree'
gcc -O2 -fno-strict-aliasing -Wall -Wmissing-prototypes
-Wmissing-declarations
What scale factor TPC-H are you importing?
additionally - might be worth giving the specs of the machine you are
doing this on.
(I seem to recall trying this with Pg 7.2 a while ago without this
issue, mind you - think I had ~1G of RAM and used the scale factor 1
dataset, i.e. 1G)
regards
Mark
Joe Conway wrote:
Tom Lane wrote:
Personally I rely quite a lot on setting PGDATA to keep straight which
installation I'm currently working with, so I'm not going to be happy
with a redesign that eliminates that variable without providing an
adequate substitute :-(
I'll second that.
I'll
Bruce Momjian wrote:
Let me tell you the compromise I thought of.
First, we put the config files (postgresql.conf, pg_hba.conf,
pg_ident.conf) in data/etc by default.
Sorry Bruce,
I was being slow :-) , I was thinking you were going to associate the
config files with the binary
[EMAIL PROTECTED] wrote:
ii) I think the -C switch *WITH* the -D switch has viable usability.
Consider this, you are testing two different database layouts and/or RAID
controllers. You could easily bounce back and forth from *identical*
configurations like this:
Convenient indeed, but I would
Tom Lane wrote:
I think if you spelled the subdir name config rather than etc,
it would be more obvious what's what.
How about 'conf' - (familiar to anyone who has used Apache or Tomcat)
regards
Mark
Bruce Momjian wrote:
Mark Kirkwood wrote:
Bruce Momjian wrote:
My idea was to put config files in /usr/local/pgsql/data/etc, not
pgsql/etc.
We don't put Unix configuration files in /, we put them in /etc.
Sorry, I missed the 'data' pathname. However - I may be a bit slow
It seems to me that the existing situation is actually correct:
The configuration is a property of the initialized database cluster, so
a logical place for it is in the root of said cluster.
It is *not* a property of the installed binary distribution (e.g
/usr/local/pgsql/etc) - as you may
[EMAIL PROTECTED] wrote:
IMHO my patch can do this in a self
documenting way, thus making it easier to do, i.e.
postmaster -C /etc/postgres/fundb.conf
postmaster -C /etc/postgres/testdb.conf
I think that is far more intuitive than:
postmaster -D /some/path/who/knows/where/fundb
postmaster -D
Bruce Momjian wrote:
My idea was to put config files in /usr/local/pgsql/data/etc, not
pgsql/etc.
We don't put Unix configuration files in /, we put them in /etc.
Sorry, I missed the 'data' pathname. However - I may be a bit slow - but
I do not see how this will handle the situation where
Just updated now. compiles. fast fix :-)
Tom Lane wrote:
I plan to fix that later tonight
if no one beats me to it.
regards, tom lane
It would be interesting to see the results if you do this test again with :
- scaling factor somewhere in the 100-300 range (so the generated db ~=
your ram amount)
- number of clients somewhere in 10-100
- number of transactions somewhere in 1000-1000
best wishes
Mark
P.s - Are you using the
Couldn't agree more - syntax like
SHOW TABLES;
is intuitive and somehow right - [chuckles] - MySQL does not have
*everything* wrong!
regards
Mark
Bruce Momjian wrote:
I like the idea of adding a new syntax to show that information using
simple SQL command syntax, and putting it in the
Could certainly do that - a scalar function that returns reltuples from
pg_class. I was hoping to do 2 additional things:
i) provide better accuracy than the last ANALYZE
ii) make it behave like an aggregate
So I wanted to be able to use estimated_count as you would use count, i.e:
SELECT
is possible or reasonable - others who know feel free to
chip in :-)
regards
Mark
Randolf Richardson wrote:
[EMAIL PROTECTED] (Mark Kirkwood) wrote in
comp.databases.postgresql.hackers:
[sNip]
How about:
Implement a function estimated_count that can be used instead of
count. It could use
Shridhar Daithankar wrote:
Something like select reltuples from pg_class where relname='foo'?
Shridhar
[chuckles] - I had envisaged something more accurate than the last
ANALYZE; estimated_count would effectively *do* acquire_sample_rows()
then and there for you...
regards
Mark
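The quick-and-dirty version of Shridhar's suggestion, for reference (the table name is a placeholder, and the figure is only as fresh as the last ANALYZE or VACUUM):

```sql
-- planner's row estimate, maintained by ANALYZE and VACUUM
SELECT reltuples::bigint AS estimated_count
FROM pg_class
WHERE relname = 'foo';
```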
How about:
Implement a function estimated_count that can be used instead of
count. It could use something like the algorithm in
src/backend/commands/analyze.c to get a reasonably accurate pseudo count
quickly.
The advantage of this approach is that count still means (exact)count
(for your
Finger or brain trouble, here is the correction :
NUMBER -> INTEGER when transporting schemas from Oracle to Pg.
(This needs a little care - as NUMBER in Oracle has bigger *precision*
than INTEGER in Pg)
Thinking about this a bit more, its probably fine to just substitute
NUMERIC for NUMBER,
I think he means that you can do this in oracle :
CREATE TABLE test (id NUMBER);
Oracle treats NUMBER as NUMBER(40) I think.
This seems to be an example of Oracle making up standards as they go
along - do we want to copy this sort of thing ?
I usually just run a substitution of NUMBER(..) -
As part of attempting to gain an understanding of how Postgres works, I
wanted to see if I could display a summary of what relations were using
pages in the cache.
Having done that, I was all set to trash the code when I wondered if it
might be useful in its own right...
Here is a sample of
Hans,
You are right about the startup memory - here is the top line for a few
seconds after startup :
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
10116 postgres 15 0 3816 3816 3180 R 33.8 1.0 0:01.03 postmaster
seems that VIRT, RES, SHR all get the
Hans-Jürgen Schönig wrote:
I can hardly imagine that the backend started working with 9mb of
memory. what did you do that PostgreSQL needed so much memory from the
beginning??? are you using the default settings? usually the
postmaster does not need more than 3mb at startup (in this scenario).
Stephan Szabo wrote:
I rebuild without debug, and ran just the start/insert/commit sequence
over and over and noticed that on my machine it looked to grow as above
but that if I let it go long enough it seemed to basically stop (or at
least the growth was slow enough to go without notice as
Greg Stark wrote:
Shridhar Daithankar [EMAIL PROTECTED] writes:
But a database is not a webserver. It is not supposed to handle tons of concurrent
requests. That is a fundamental difference.
And in one fell swoop you've dismissed the entire OLTP database industry.
Have you ever called a
Fred Zellinger wrote:
I also am a Linux Nut and use Postgres whenever possible because I like the
freedom of access to the HACKERS mailing list...something only a few highly
renowned DBA snobs have with Oracle.
Indeed, I think this is a significant component of the appeal of open
source
I
Justin Clift wrote:
Hi guys,
Have created a Solaris 8 Intel package for PostgreSQL 7.3.1, but don't
have any SPARC boxes here any more.
Does anyone have a SPARC box handy that would be available for
compiling PostgreSQL 7.3.1 on? It would need to be Solaris 8 (or
maybe 9), and have things
Lamar Owen wrote:
On Wednesday 15 January 2003 09:20, Justin Clift wrote:
Sound like a plan? Will also need someone else with a Solaris 8 SPARC
system to try the packages out too, just in case there are weird library
dependencies happening that might catch us out.
I have access to
snippage
The Oracle system stops from time to time because of various arcane
reasons. You get the error message, look it up on alltheweb.com, and
fix it. The whole system is bogus. It DEMANDS a full time DBA.
PostgreSQL does not.
I could be accused of being cynical here (gosh)... but I
Adrian 'Dagurashibanipal' von Bidder wrote:
I'm just saying that there are
*some* small arcane details in postgres, too (although, at least, they
don't affect stability, just performance).
Indeed you are right... Pg has its own collection of arcane details too,
but hopefully the culture of
Dear hackers,
I have recently been playing with DB2 8.1 Beta. It has introduced a
feature to enable index clustering on more than one key. This reminded
me of a previous thread on HACKERS about index access anding/bitmaps in
Firebird. So anyway, here is a little snip from the 8.1 manual as a
Tom Lane wrote:
Has anyone done the corresponding experiments on the other DBMSes to
identify exactly when they allow CURRENT_TIMESTAMP to advance ?
I have Db2 on hand and examined CURRENT TIMESTAMP in an sql procedure.
(IBM have implemented it without the _ )
The short of it is that
Have you tried this with Oracle or similar commercial database?
I have timed COPY/LOAD times for Postgresql/Mysql/Oracle/Db2 -
the rough comparison is :
Db2 and Mysql fastest (Db2 slightly faster)
Oracle approx twice as slow as Db2
Postgresql about 3.5-4 times slower than Db2
However
Curt Sampson wrote:
On Wed, 11 Sep 2002, Mark Kirkwood wrote:
Hm, it appears we've both been working on something similar. However,
I've just released version 0.2 of randread, which has the following
features:
funny how often that happens... (I think it's often worth the effort
I was attempting to measure random page cost a while ago -
I used three programs in this archive :
http://techdocs.postgresql.org/markir/download/benchtool/
It writes a single big file and seems to give more realistic
measurements (like 6 for a Solaris scsi system and 10 for a Linux ide
Tom Lane wrote:
Perhaps it's time to remind people that what we want to measure
is the performance seen by a C program issuing write() and read()
commands, transferring 8K at a time, on a regular Unix filesystem
Yes...and at the risk of being accused of marketing ;-) , that is
exactly what