On Tue, May 24, 2005 at 04:35:14PM -0700, Josh Berkus wrote:
Pretty much. There has been discussion about allowing index-only access to
"frozen" tables, i.e. archive partitions. But it all sort of hinges on
someone implementing it and testing
Is there any way to expose the planner estimates?
On Mon, Jun 06, 2005 at 10:08:23AM -0500, John A Meinel wrote:
I don't know if you can do it, but it would be nice to see this be 1
RAID1 for OS, 1 RAID10 for pg_xlog, and 1 RAID10 for data.
That's probably overkill--it's a relatively small sequential-write
partition with really small writes; I don't see how pg_xlog would
benefit.
On Mon, Jun 06, 2005 at 10:52:09AM -0500, John A Meinel wrote:
pg_xlog benefits from being super fast, because it has to be fully
synced before the rest of the data can be committed. Yes, the writes
are small, but if you can make them fast you eliminate that overhead.
It also benefits from having its own dedicated disk.
On Tue, Jun 07, 2005 at 01:39:04PM -0700, Tom Arthurs wrote:
Yes, shared buffers in postgres are not used for caching
That raises the question of what they are used for. :)
Mike Stone
On Tue, Jun 07, 2005 at 11:50:33PM -0400, Tom Lane wrote:
Again, let's see some evidence that it's worth putting effort into that.
(Offhand it seems this is probably an easier fix than changing the
shared-memory allocation code; but conventional wisdom is that really
large values of work_mem are a bad idea.)
On Wed, Jun 08, 2005 at 09:36:31AM +0200, Martin Fandel wrote:
I've installed the same postgres-8.0.1 setup I had on reiserfs, this
time on xfs.
Do you have pg_xlog on a separate partition? I've noticed that ext2
seems to have better performance than xfs for the pg_xlog workload
(with all the syncs).
On Sat, Jun 18, 2005 at 06:42:27PM +0200, Yves Vindevogel wrote:
With striping, each file is distributed over several disks, making
the physical write faster because several disks can do the work.
Same for reading: multiple disks each return a part of the file.
A mirror behaves almost exactly the
On Wed, Jun 22, 2005 at 10:31:29PM -0400, Keith Worthington wrote:
Use RAID 10 (striping across mirrored disks)
or RAID 0+1 (mirror a striped array) for your data.
Yikes! Never tell an unsuspecting person to use mirrored stripes--that
configuration has lower reliability and performance than striped
mirrors.
Is it possible to tweak the size of a block that postgres tries to read
when doing a sequential scan? It looks like it reads in fairly small
blocks, and I'd expect a fairly significant boost in i/o performance
when doing a large (multi-gig) sequential scan if larger blocks were
used.
Mike Stone
On Tue, Jun 28, 2005 at 12:02:55PM -0500, John A Meinel wrote:
There has been discussion about changing the reading/writing code to be
able to handle multiple pages at once, (using something like vread())
but I don't know that it has been implemented.
That sounds promising.
Also, this would hu
Did you separate the data and the transaction log? I've noticed less than
optimal performance on xfs if the transaction log is on the xfs data
partition, and it's silly to put the xlog on a journaled filesystem
anyway. Try putting xlog on an ext2 for all the tests.
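A minimal sketch of relocating pg_xlog onto an ext2 partition via a
symlink; the data directory and mountpoint below are assumptions:
    pg_ctl -D /var/lib/pgsql/data stop
    mv /var/lib/pgsql/data/pg_xlog /mnt/ext2log/pg_xlog
    ln -s /mnt/ext2log/pg_xlog /var/lib/pgsql/data/pg_xlog
    pg_ctl -D /var/lib/pgsql/data start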
Mike Stone
On Thu, Jul 14, 2005 at 02:15:52PM +0200, Grega Bremec wrote:
I'm curious as to what this means - did they have problems integrating
it into their toolchain or are there actual problems going on in jfs
currently?
I've found jfs to be the least stable linux filesystem and won't allow
it anywhere near my systems.
On Thu, Jul 14, 2005 at 04:29:58PM -0600, Dan Harris wrote:
Ok, I tried this one. My ssh keeps getting cut off by a router
somewhere between me and the server due to inactivity timeouts, so
all I know is that both the select and explain analyze are taking
over an hour to run.
Try running it under screen (or nohup) so the inactivity timeout can't
kill your session.
On Sat, Jul 16, 2005 at 01:12:27AM -0700, Jeffrey W. Baker wrote:
Another frequent suggestion is to put the xlog on a separate device. I
tried this, and, for a given number of disks, it appears to be
counter-productive. A RAID5 of 5 disks holding both logs and data is
about 15% faster than a RAID setup with the logs split onto separate
disks.
I'd say, "don't do that". Unless you've deleted a lot of stuff and are
expecting the DB to shrink, a full vacuum shouldn't really be needed. On
a DB that big a full vacuum is just going to take a long time. If you
really are shrinking, consider structuring things so you can just drop a
table instead.
On Tue, Jul 26, 2005 at 11:23:23AM -0700, Luke Lonergan wrote:
Yup - interesting and very niche product - it seems like its only obvious
application is for the PostgreSQL WAL problem :-)
On the contrary--it's not obvious that it is an ideal fit for a WAL. A
ram disk like this is optimized for
On Fri, Aug 05, 2005 at 01:11:31PM +0200, Dirk Lutzebäck wrote:
I will compile the latest PostgreSQL 8.1 snapshot for 32bit to evaluate
the new shared buffer code from Tom.
I think the 64-bit build is slow because my queries are CPU intensive.
Have you actually tried it, or are you guessing? If you're guessing,
measure first.
On Thu, Aug 11, 2005 at 01:23:21PM +0100, Paul Johnson wrote:
I'm guessing that this is because pg_xlog has gone from a 9 spindle LUN to
a single spindle disk?
In cases such as this, where an external storage array with a hardware
RAID controller is used, the normal advice to separate the data from
the xlog doesn't necessarily apply.
On Thu, Aug 11, 2005 at 10:18:44AM -0700, Mark Lewis wrote:
Actually, it seems to me that with the addition of the WAL in PostgreSQL
and the subsequent decreased need to fsync the data files themselves
(only during checkpoints?), that the only time a battery-backed write
cache would make a really
On Thu, Aug 11, 2005 at 07:13:27PM -0400, Jeffrey Tenny wrote:
The system for testing was 512MB
That's definitely *not* a "large ram" system. If you're reading a subset
of data that totals 70MB I'm going to guess that your data set is larger
than or at least a large fraction of 512MB.
addition
On Tue, Aug 16, 2005 at 09:12:31AM -0700, Josh Berkus wrote:
However, you are absolutely correct in that it's *relative* advice, not
absolute advice. If, for example, you're using a $100,000 EMC SAN as your
storage you'll probably be better off giving it everything and letting its
controller
On Sat, Aug 20, 2005 at 12:52:08AM -0600, Dan Harris wrote:
On Aug 19, 2005, at 12:55 AM, Jeffrey W. Baker wrote:
Have you considered booting your
machine with elevator=deadline?
Although I'm not the OP for this problem, I thought I'd try it out.
WOW.. this should be in a Pg tuning guide
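For anyone who wants to try the same thing: elevator=deadline is a
kernel boot parameter, and 2.6 kernels that expose the sysfs knob can
also switch a single device at runtime. Kernel and device names below
are assumptions:
    # boot-time: append to the kernel line in grub.conf, e.g.
    #   kernel /vmlinuz-2.6.12 ro root=/dev/sda2 elevator=deadline
    # runtime, per device:
    echo deadline > /sys/block/sda/queue/scheduler
    cat /sys/block/sda/queue/scheduler   # active scheduler shown in [brackets]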
On Sat, Aug 20, 2005 at 01:12:15AM -0600, Dan Harris wrote:
XFS seems to be a trusted choice, followed by Reiser and JFS both
with the occasional controversy when the comparisons pop up.
And don't put the xlog on a journaled filesystem. There is no advantage
to doing so, and it will slow things down.
On Sat, Aug 20, 2005 at 02:17:54PM +0300, Marko Ristola wrote:
Based on my knowledge, ext3 is good at keeping filesystem integrity
AND data integrity while pressing the reset button. However, by
selecting data=writeback, you gain more speed, but you risk the data
integrity during a crash: Ext3
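The tradeoff is selected per mount; a sketch of an /etc/fstab line for
a dedicated data partition (device and mountpoint are assumptions):
    /dev/sdb1  /var/lib/pgsql  ext3  noatime,data=writeback  0 0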
On Tue, Aug 23, 2005 at 12:38:04PM -0700, Josh Berkus wrote:
which have a clear and measurable effect on performance and are fixable
without bloating the PG code. Some of these issues (COPY path, context
switching
Does that include increasing the size of read/write blocks? I've noticed
that w
On Tue, Aug 23, 2005 at 05:29:01PM -0400, Jignesh Shah wrote:
Actually, the OS already does some of that readahead etc., if it does
some sort of throttling/clubbing of reads/writes.
Note that I specified the fully cached case--even with the workload in
RAM the system still has to process a heck
On Tue, Aug 23, 2005 at 06:09:09PM -0400, Chris Browne wrote:
What we have been finding, as RAID controllers get smarter, is that it
is getting increasingly futile to try to attach knobs to 'disk stuff;'
it is *way* more effective to add a few more spindles to an array than
it is to fiddle with w
On Wed, Aug 24, 2005 at 03:34:41PM -0400, [EMAIL PROTECTED] wrote:
It isn't an urban myth that 64-bit math on a 64-bit processor is
faster, at least if done using registers. It definitely is faster.
It may be an urban myth, though, that most applications perform
a sufficient amount of 64-bit arithmetic to notice.
On Mon, Aug 29, 2005 at 09:42:46AM -0400, Rémy Beaumont wrote:
We have been trying to pinpoint what originally seemed to be an I/O
bottleneck but which now seems to be an issue with either PostgreSQL or
RHES 3.
Nope, it's an IO bottleneck.
The behavior we see is that when running queries that d
On Tue, Aug 30, 2005 at 07:02:28PM -0400, Ron wrote:
purpose(s). That's why the TPC benchmarked systems tend to have
literally hundreds of HDs, and they tend to be split into very focused
purposes.
Of course, TPC benchmark systems are constructed such that cost and
storage capacity are irrelevant.
On Tue, Aug 30, 2005 at 08:41:40PM -0400, Ron wrote:
The scary thing is that I've worked on RW production systems that
bore a striking resemblance to a TPC benchmark system. As you can
imagine, they uniformly belonged to BIG organizations (read: lots o'
$$$) who were using the systems for mission-critical purposes.
On Tue, Sep 20, 2005 at 10:51:41AM +0300, Michael Ben-Nes wrote:
I must admit I'm a bit amazed that such an important parameter is so
ambiguous; an optimal stripe size can improve the performance of the db
significantly.
It's configuration dependent. IME, it has an insignificant effect. If
anything
On Fri, Sep 23, 2005 at 08:49:27AM +0200, Joost Kraaijeveld wrote:
3. Can I anyhow improve the performance without replacing my hardware,
e.g. by tweaking the software?
It's not clear what your object id generator does. If it's just a
sequence, it's not clear that you need this program at all--
On Fri, Sep 23, 2005 at 12:21:15PM +0200, Joost Kraaijeveld wrote:
On Fri, 2005-09-23 at 05:55 -0400, Michael Stone wrote:
It's not clear what your object id generator does. If it's just a
sequence, it's not clear that you need this program at all--just use a
SELECT INTO and mak
On Fri, Sep 23, 2005 at 03:49:25PM +0200, Joost Kraaijeveld wrote:
On Fri, 2005-09-23 at 07:05 -0400, Michael Stone wrote:
Ok, that's great, but you didn't respond to the suggestion of using COPY
INTO instead of INSERT.
Part of the code I left out are some data conversions (e.g. fr
On Sun, Sep 25, 2005 at 10:57:56AM -0400, Dave Cramer wrote:
I would think software RAID would be quite inappropriate, considering
that postgres takes a fair amount of CPU when it is working, as would
software RAID. Does anyone know if this is really the case?
It's not. Modern CPUs can handle the RAID overhead without much trouble.
On Sun, Sep 25, 2005 at 01:41:06PM -0400, Greg Stark wrote:
Also, RAID 5 is particularly inappropriate for write-heavy database
traffic. RAID 5 actually hurts write latency dramatically, and databases
are very sensitive to latency.
Software RAID 5 actually may have an advantage here. The main ca
On Fri, Sep 30, 2005 at 01:41:22PM -0700, Josh Berkus wrote:
Realistically, you can't do better than about 25MB/s on a single-threaded
I/O on current Linux machines,
What on earth gives you that idea? Did you drop a zero?
Mike Stone
On Mon, Oct 03, 2005 at 01:34:01PM -0700, Josh Berkus wrote:
>Realistically, you can't do better than about 25MB/s on a
> single-threaded I/O on current Linux machines,
What on earth gives you that idea? Did you drop a zero?
Nope, LOTS of testing, at OSDL, GreenPlum and Sun. For comparison,
On Sat, Oct 01, 2005 at 06:19:41PM +0200, Martijn van Oosterhout wrote:
COPY TO /dev/null WITH binary
13MB/s, 55% user 45% system (ergo, CPU bound)
[snip]
the most expensive. But it does point out that the whole process is
probably CPU bound more than anything else.
Note that 45% of that c
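For reference, the cited test is easy to reproduce; the table name is a
placeholder, and server-side COPY TO a file requires superuser:
    time psql -c "COPY bigtable TO '/dev/null' WITH BINARY"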
On Tue, Oct 04, 2005 at 12:43:10AM +0300, Hannu Krosing wrote:
Just FYI, I ran a count(*) on a 15.6GB table on a lightly loaded db and
it ran in 163 sec. (Dual Opteron 2.6GHz, 6GB RAM, 6 x 74GB 15k disks in
RAID10, reiserfs.) A little less than 100MB/sec.
And none of that 15G table is in the 6GB of RAM?
On Wed, Oct 05, 2005 at 11:24:07AM -0400, Luke Lonergan wrote:
Nope - it would be disk wait.
I said I/O overhead; i.e., it could be the overhead of calling the
kernel for I/O's. E.g., the following process is having I/O problems:
time dd if=/dev/sdc of=/dev/null bs=1 count=1000
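The point being syscall overhead rather than disk wait: the same number
of bytes costs wildly different numbers of kernel calls depending on the
block size (device name is an assumption):
    time dd if=/dev/sdc of=/dev/null bs=1 count=8192    # 8192 one-byte read() calls
    time dd if=/dev/sdc of=/dev/null bs=8192 count=1    # same bytes, one read() call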
On Wed, Oct 05, 2005 at 04:55:51PM -0700, Luke Lonergan wrote:
You've proven my point completely. This process is bottlenecked in the CPU.
The only way to improve it would be to optimize the system (libc) functions
like "fread" where it is spending most of its time.
Or to optimize its I/O handling.
On Mon, Oct 10, 2005 at 03:28:42PM -0700, Steve Poe wrote:
I don't have room for more drives, but I am considering moving the XFS
external log
There is absolutely no reason to move the xfs log on a system that
small.
Mike Stone
On Tue, Oct 25, 2005 at 03:44:36PM +0200, Chris Mair wrote:
Is there a better, faster way to do these inserts?
COPY is generally the fastest way to do bulk inserts (see
PQputCopyData).
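A minimal sketch; table and file names are placeholders. \copy feeds a
client-side file through the COPY protocol (which is what PQputCopyData
does in libpq), while plain COPY reads a server-side file and needs
superuser:
    psql -c "\copy mytable from '/tmp/mytable.dat'"
    psql -c "COPY mytable FROM '/tmp/mytable.dat'"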
Mike Stone
On Thu, Nov 10, 2005 at 03:25:35PM +0100, Steinar H. Gunderson wrote:
Ah, yes, I forgot -- it's ext3. We're considering simply moving the WAL onto
a separate partition (with data=writeback and noatime) if that can help us
any.
There's no reason to use a journaled filesystem for the wal. Use ext2.
On Thu, Nov 10, 2005 at 09:52:38AM -0600, Scott Marlowe wrote:
Not from what I understood. Ext2 can't guarantee that your data will
even be there in any form after a crash.
It can if you sync the data. (Which is the whole point of the WAL.)
I believe only metadata journaling is needed, though.
On Tue, Nov 15, 2005 at 09:33:25AM -0500, Luke Lonergan wrote:
write performance is now up to par with the best cards I believe. We
find that you still need to set Linux readahead to at least 8MB
(blockdev --setra) to get maximum read performance on them, is that your
What on earth does that do?
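For reference, --setra takes 512-byte sectors, so 8MB of readahead is
16384 sectors (the device name is an assumption):
    blockdev --setra 16384 /dev/sda   # set readahead to 8MB
    blockdev --getra /dev/sda         # verify the setting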
On Sat, Nov 19, 2005 at 09:54:23AM -0800, Craig A. James wrote:
First, I occasionally rebuild the keywords, after which the VACUUM FULL
ANALYZE takes a LONG time - like 24 hours.
You know you just need vacuum, not vacuum full, right?
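A minimal illustration of the suggestion; "keywords" is the table from
the post, and plain vacuum reclaims dead space for reuse without VACUUM
FULL's long exclusive lock:
    psql -c "VACUUM ANALYZE keywords"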
Mike Stone
On Mon, Nov 21, 2005 at 02:01:26PM -0500, Greg Stark wrote:
I also fear that heading in that direction could push Postgres even further
from the niche of software that works fine even on low end hardware into the
realm of software that only works on high end hardware. It's already suffering
a bit
On Mon, Nov 21, 2005 at 10:14:29AM -0800, Luke Lonergan wrote:
This has partly been a challenge to get others to post their results.
You'll find that people respond better if you don't play games with
them.
On Thu, Dec 01, 2005 at 02:50:56PM +0900, Tatsumi Abe wrote:
Could anyone advise some solutions for this fragmentation problem
without stopping PostgreSQL?
Stop doing VACUUM FULL so often. If your table size is constant anyway
you're just wasting time by compacting the table and shrinking it,
On Fri, Dec 02, 2005 at 03:15:00AM -0500, Ron wrote:
I've seen enough HD failures that I tend to be paranoid and journal the
metadata of filesystems dedicated to WAL as well, but that may very well
be overkill.
Especially since it wouldn't gain anything. Journalling doesn't give you
any advantage whatsoever here.
On Fri, Dec 02, 2005 at 12:15:57AM -0500, Luke Lonergan wrote:
That's good to know - makes sense. I suppose we might still thrash over
a 1GB range in seeks if the BG writer starts running at full rate in the
background, right? Or is there some write combining in the BG writer?
That part your
On Fri, Dec 02, 2005 at 01:24:31PM -0800, Luke Lonergan wrote:
From a performance standpoint no argument, although you're betting that you
can do parsing / conversion faster than the COPY core in the backend can
Not necessarily; you may be betting that it's more *efficient* to do the
parsing on the client side.
On Tue, Dec 06, 2005 at 04:03:22PM -0600, Ameet Kini wrote:
I'm running postgresql v8.0 and my problem is that running vacuum on my
indices is blazing fast (up to 10x faster) AFTER running reindex. For a
table with only 1 index, the time to do a vacuum (without full) went down
from 45 minutes to
On Tue, Dec 06, 2005 at 09:36:23PM +0000, Rory Campbell-Lange wrote:
Yes, we don't have any spare disks unfortunately. We have enabled the
BBU write cache, so we are hoping for good performance.
Even if you don't use separate disks you'll probably get better
performance by putting the WAL on a separate partition.
On Tue, Dec 06, 2005 at 07:52:25PM -0500, Alex Turner wrote:
I would argue that you almost certainly won't gain by doing that, as you
will create a new place even further away for the disk head to seek to,
instead of just another file on the same FS that is probably closer to
the current head position.
I
On Wed, Dec 14, 2005 at 11:53:52AM -0500, Andrew Rawnsley wrote:
Other goofy things about it: it isn't 1 device with 14 disks and redundant
controllers. It's 2 7-disk arrays with non-redundant controllers. It doesn't
do RAID10.
And if you want hot spares you need *two* per tray (one for each
controller).
On Fri, Dec 16, 2005 at 04:18:01PM -0600, Jim C. Nasby wrote:
Even if you're doing a lot of random IO? I would think that random IO
would perform better if you use smaller (8K) blocks, since there's less
data being read in and then just thrown away that way.
The overhead of reading an 8k block
On Fri, Dec 16, 2005 at 06:25:25PM -0600, Jim C. Nasby wrote:
True, but now you've got 4x the amount of data in your cache that you
probably don't need.
Or you might be 4x more likely to have data cached that's needed later.
If you're hitting disk either way, that's probably more likely than th
On Sat, Dec 24, 2005 at 05:45:20PM -0500, Ron wrote:
Caches help, and the bigger the cache the better, but once you are
doing enough writes fast enough (and that doesn't take much even with
a few GBs of cache) the recalculate-checksums-and-write-new-ones
overhead will decrease the write speed of the array.
On Mon, Dec 26, 2005 at 12:32:19PM -0500, Alex Turner wrote:
It's irrelevant which controller: you still have to actually write the
parity blocks, which slows down your write speed because you have to
write n+n/2 blocks instead of just n blocks, making the system write
50% more data.
RAID 5 must w
On Tue, Dec 27, 2005 at 11:50:16AM -0500, Ron wrote:
Sorry. A decade+ RWE in production with RAID 5 using controllers as
bad as Adaptec and as good as Mylex, Chaparral, LSI Logic (including
their Engino stuff), and Xyratex under 5 different OS's (Sun, Linux,
M$, DEC, HP) on each of Oracle, SQL
On Tue, Dec 27, 2005 at 02:57:13PM -0500, Ron wrote:
Your quoted position is "there isn't a 'RAID 5 penalty' assuming
you've got a reasonably fast controller and you're doing large
sequential writes (or have enough cache that random writes can be
batched as large sequential writes)."
And you
On Fri, Jan 06, 2006 at 09:00:06AM -0500, Ian Westmacott wrote:
We use RAID50 instead of RAID0 for the tables for some fault-tolerance.
We use RAID0 for the WAL for performance.
I'm missing the implication of the question...
If you have the WAL on RAID 0 you have no fault tolerance, regardless.
On Sun, Nov 26, 2006 at 05:41:02PM -0600, Jim C. Nasby wrote:
What's interesting is that apparently FreeBSD also has overcommit (and
IIRC no way to disable it), yet I never hear people going off on OOM
kills in FreeBSD.
Could just be that nobody is using FreeBSD.
Seriously, though, there are
On Mon, Dec 04, 2006 at 12:37:29PM -0500, Alex Turner wrote:
This discussion I think is important, as I think it would be useful for this
list to have a list of RAID cards that _do_ work well under Linux/BSD for
people as recommended hardware for Postgresql. So far, all I can recommend
is what
On Mon, Dec 04, 2006 at 12:52:46PM -0500, Alex Turner wrote:
http://en.wikipedia.org/wiki/RAID_controller
What is the wikipedia quote supposed to prove? Pray tell, if you
consider RAID==HBA, what would you call a SCSI (e.g.) controller that
has no RAID functionality? If you'd call it an HBA,
On Tue, Dec 05, 2006 at 01:21:38AM -0500, Alex Turner wrote:
My other and most important point is that I can't find any solid
recommendations for a SCSI card that can perform optimally in Linux or
*BSD. Off by a factor of 3x is pretty sad IMHO. (and yes, we know the
Adaptec cards suck worse, tha
On Tue, Dec 05, 2006 at 07:57:43AM -0500, Alex Turner wrote:
The problem I see with software raid is the issue of a battery backed unit:
If the computer loses power, then the 'cache' which is held in system
memory, goes away, and fubars your RAID.
Since the Adaptec doesn't have a BBU, it's a la
On Wed, Dec 06, 2006 at 06:59:12PM +0100, Arnaud Lesauvage wrote:
Markus Schiltknecht wrote:
What's common practice? What's it on the pgsql mailing lists?
Netiquette usually advises mailers to wrap after 72 characters
on mailing lists.
This does not apply for format=flowed, I guess (that
On Mon, Dec 11, 2006 at 09:05:56AM -0200, Daniel van Ham Colchete wrote:
unfortunately I don't have any benchmarks right now.
That's fairly normal for gentoo users pushing their compile options.
Mike Stone
On Mon, Dec 11, 2006 at 12:15:51PM -0500, Ron wrote:
I'd say the fairest attitude is to do everything we can to support
having the proper experiments done w/o presuming the results.
Who's presuming results?[1] It is fair to say that making extraordinary
claims without any evidence should be discouraged.
On Mon, Dec 11, 2006 at 01:20:50PM -0500, Ron wrote:
(The validity of the claim has nothing to do with the skills or
experience of the claimant or anyone else in the discussion; it rests
only on the evidence.)
Please go back and reread the original post. I don't think the response
was unwarranted.
On Mon, Dec 11, 2006 at 10:30:55AM -0800, Luke Lonergan wrote:
Here's one - I wrote a general purpose Computational Fluid Dynamics analysis
method used by hundreds of people to perform aircraft and propulsion systems
analysis.
That's kinda the opposite of what I meant by general code. I was t
On Mon, Dec 11, 2006 at 02:51:09PM -0500, Ron wrote:
Let's support getting definitive evidence.
Since nobody opposed the concept of contrary evidence, I don't suppose
you're fighting an uphill battle on that particular point.
It's fine to get preachy about supporting intellectual curiosity,
On Mon, Dec 11, 2006 at 08:22:42PM -0200, Daniel van Ham Colchete wrote:
TEST 01: CFLAGS="-O2 -march=i686" fsync=false
tps = 734.948620 (including connections establishing)
tps = 736.866642 (excluding connections establishing)
[snip]
TEST 03: CFLAGS="-O2 -march=pentium4" fsync=false
tps = 846
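Those tps lines match pgbench output; a minimal sketch of how such a
comparison is typically driven (scale, client count, and flags are
assumptions):
    ./configure CFLAGS="-O2 -march=i686" && make && make install
    createdb bench
    pgbench -i -s 10 bench         # populate a scale-10 test database
    pgbench -c 10 -t 1000 bench    # 10 clients x 1000 transactions, reports tps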
On Tue, Dec 12, 2006 at 01:35:04AM -0500, Greg Smith wrote:
These changes could easily explain the magnitude of difference in results
you're seeing, especially when combined with a 20% greater raw CPU clock.
I'm not interested in comparing the numbers between the systems (which
is obviously po
On Tue, Dec 12, 2006 at 07:10:34AM -0200, Daniel van Ham Colchete wrote:
are you using "-mtune/-mcpu" or "-march" with GCC?
I used exactly the options you said you used.
Which GCC version? Are you working with a 32-bit OS or 64-bit?
3.3.5; 32
Mike Stone
On Tue, Dec 12, 2006 at 12:29:29PM +0100, Alexander Staubo wrote:
I suspect the system's real maximum performance is ~150 tps, but that
the LSI's write cache is buffering the writes. I would love to validate
this hypothesis, but I'm not sure how.
With fsync off? The write cac
On Tue, Dec 12, 2006 at 01:42:06PM +0100, Cosimo Streppone wrote:
"-O0" ~ 957 tps
"-O1 -mcpu=pentium4 -mtune=pentium4" ~ 1186 tps
"-O2 -mcpu=pentium4 -mtune=pentium4" ~ 1229 tps
"-O3 -mcpu=pentium4 -mtune=pentium4" ~ 1257 tps
"-O6 -mcpu=pentium4 -mtune=pentium4" ~ 1254 tps
I'm curious now to get
On Tue, Dec 12, 2006 at 07:48:06AM -0500, Michael Stone wrote:
I'd be curious to see -O2 with and without the arch-specific flags,
since that's mostly what the discussion is about.
That came across more harshly than I intended; I apologize for that.
It's certainly a useful exercise.
On Wed, Dec 13, 2006 at 01:03:04PM -0500, Ron wrote:
What I find interesting is that so far Guido's C2D Mac laptop has
gotten the highest values by far in this set of experiments, and no
one else is even close.
The slowest results, Michael's, are on the system with what appears
to be the slowes
On Fri, Dec 15, 2006 at 12:24:46PM -0500, Ron wrote:
ATM, the most we can say is that in a number of systems with modest
physical IO subsystems
So I reran it on a 3.2GHz Xeon with 6G RAM off a ramdisk; I/O ain't the
bottleneck on that one. Results didn't show any significant gain.
On Sat, Dec 16, 2006 at 10:53:21AM -0500, Ron wrote:
The most important "gain" IMO is Knowledge, and I'd say there is
still more to learn and/or verify IMHO. YMMV.
Well, I think there are other areas where I can spend my time where
potential gains are more likely. YMMV (although, I note, you d
On Sun, Mar 18, 2007 at 06:45:34AM -0600, Barry Moore wrote:
Does anyone know how I can repeatedly run the same query in the
"worst case scenario" of no postgres data in the disk cache (e.g.,
clear the page cache or force it to be ignored)?
Depends on your OS. On linux you can run:
echo 1 > /proc/sys/vm/drop_caches
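A fuller sketch: restart postgres too, since shared_buffers is a
separate cache the kernel knob doesn't touch. drop_caches needs Linux
2.6.16+, and the data directory path is an assumption:
    pg_ctl -D /var/lib/pgsql/data stop
    sync                                # flush dirty pages so they can be dropped
    echo 1 > /proc/sys/vm/drop_caches   # discard the clean page cache
    pg_ctl -D /var/lib/pgsql/data start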
On Thu, Mar 22, 2007 at 01:30:35PM +0200, [EMAIL PROTECTED] wrote:
An approximated count?
Why? Who would need it? Where can you use it?
Do a google query. Look at the top of the page, where it says
"results N to M of about O". For user interfaces (which is where a lot
of this count(*) stuff
On Thu, Mar 22, 2007 at 09:39:18AM -0400, Merlin Moncure wrote:
You can get the approximate count by selecting reltuples from
pg_class. It is valid as of the last analyze.
Of course, that only works if you're not using any WHERE clause.
Here's a (somewhat ugly) example of getting an approximate count.
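The usual ugly trick is to scrape the planner's row estimate out of
EXPLAIN output; a sketch, with table and predicate as placeholders:
    psql -tA -c "EXPLAIN SELECT * FROM mytable WHERE x > 0" | \
        head -n1 | sed 's/.*rows=\([0-9]*\).*/\1/'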
On Thu, Mar 22, 2007 at 04:55:02PM +0100, Dimitri wrote:
In my case I have several CPUs on the server and quite a powerful storage
box which is not really busy with a single vacuum. So, my idea is quite
simple - speed up vacuum with parallel execution (just an algorithm):
Vacuum is I/O intensive,
On Thu, Mar 22, 2007 at 06:27:32PM +0100, Tino Wildenhain wrote:
Craig A. James schrieb:
You guys can correct me if I'm wrong, but the key feature that's missing
from Postgres's flexible indexing is the ability to maintain state
across queries. Something like this:
select a, b, my_index_sta
On Thu, Mar 22, 2007 at 07:24:38PM +0100, Dimitri wrote:
you're right until you're using a single disk :)
Now, imagine you have more disks
I do have more disks. I maximize the I/O performance by dedicating
different sets of disks to different tables. YMMV. I do suggest watching
your I/O rates
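For the curious, the stock way to dedicate disks to particular tables
in 8.0+ is tablespaces; a minimal sketch, names and paths assumed
(CREATE TABLESPACE can't run inside a transaction block, hence the
separate calls):
    psql -c "CREATE TABLESPACE fastdisk LOCATION '/mnt/disk2/pgdata'"
    psql -c "ALTER TABLE big_table SET TABLESPACE fastdisk"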
On Thu, Mar 22, 2007 at 02:24:39PM -0400, Merlin Moncure wrote:
Tino was saying that rather than build a complete indexing storage
management solution that lives outside the database, it is better to
do intelligent session management so that you get the simplicity of a
two-tier client/server system.
On Fri, Mar 23, 2007 at 01:01:02PM +0100, Tino Wildenhain wrote:
This discussion is a bit theoretical until we see the actual problem
and the proposed solution here.
It's good to see you back off a bit from your previous stance of
assuming that someone doesn't know what they're doing and that
On Fri, Mar 23, 2007 at 04:37:32PM +0100, Dimitri wrote:
A 4x speed-up is obtained just because a single vacuum process reaches
at most 80MB/sec in throughput.
I'd look at trying to improve that; it seems very low.
Mike Stone
On Fri, Mar 30, 2007 at 10:00:30PM -0600, Guido Neitzer wrote:
On 30.03.2007, at 19:18, Christopher Browne wrote:
2. There are known issues with the combination of Xeon processors and
PAE memory addressing; that sort of hardware tends to be *way* less
speedy than the specs would suggest.
That
On Thu, Apr 05, 2007 at 11:19:04PM -0400, Ron wrote:
Both statements are the literal truth.
Repeating something over and over again doesn't make it truth. The OP
asked for statistical evidence (presumably real-world field evidence) to
support that assertion. Thus far, all the publicly available data says
otherwise.
On Fri, Apr 06, 2007 at 02:00:15AM -0400, Tom Lane wrote:
It seems hard to believe that the vendors themselves wouldn't burn in
the drives for half a day, if that's all it takes to eliminate a large
fraction of infant mortality. The savings in return processing and
customer goodwill would surely make it worthwhile.
On Fri, Apr 06, 2007 at 08:49:08AM -0400, Ron wrote:
Not quite. Each of our professional experiences is +also+
statistical evidence. Even if it is a personally skewed sample.
I'm not sure that word means what you think it means. I think the one
you're looking for is "anecdotal".
My experie
On Fri, Apr 06, 2007 at 12:41:25PM -0400, Ron wrote:
3. based on personal observation, case study reports, or random
investigations rather than systematic scientific evaluation: anecdotal
evidence.
Here you even quote the appropriate definition before ignoring it.
In short, professional advice is anecdotal evidence.