On Tue, May 24, 2005 at 04:35:14PM -0700, Josh Berkus wrote:
Pretty much. There has been discussion about allowing index-only access to
frozen tables, i.e. archive partitions. But it all sort of hinges on
someone implementing it and testing
Is there any way to expose the planner
On Mon, Jun 06, 2005 at 10:08:23AM -0500, John A Meinel wrote:
I don't know if you can do it, but it would be nice to see this be 1
RAID1 for OS, 1 RAID10 for pg_xlog,
That's probably overkill--it's a relatively small sequential-write
partition with really small writes; I don't see how
On Mon, Jun 06, 2005 at 10:52:09AM -0500, John A Meinel wrote:
pg_xlog benefits from being super fast. Because it has to be fully
synced before the rest of the data can be committed. Yes they are small,
but if you can make it fast, you eliminate that overhead. It also
benefits from having its
On Tue, Jun 07, 2005 at 01:39:04PM -0700, Tom Arthurs wrote:
Yes, shared buffers in postgres are not used for caching
That begs the question of what they are used for. :)
Mike Stone
---(end of broadcast)---
TIP 9: the planner will ignore your
On Tue, Jun 07, 2005 at 11:50:33PM -0400, Tom Lane wrote:
Again, let's see some evidence that it's worth putting effort into that.
(Offhand it seems this is probably an easier fix than changing the
shared-memory allocation code; but conventional wisdom is that really
large values of work_mem are
On Wed, Jun 08, 2005 at 09:36:31AM +0200, Martin Fandel wrote:
I've installed the same installation of my reiser-fs-postgres-8.0.1
with xfs.
Do you have pg_xlog on a separate partition? I've noticed that ext2
seems to have better performance than xfs for the pg_xlog workload (with
all the
On Sat, Jun 18, 2005 at 06:42:27PM +0200, Yves Vindevogel wrote:
With striping, each file is distributed over several disks, making
the physical write faster because several disks can do the work.
Same for reading, multiple disks return a part of the file.
A mirror behaves almost exactly
On Wed, Jun 22, 2005 at 10:31:29PM -0400, Keith Worthington wrote:
Use RAID 10 (striping across mirrored disks)
or RAID 0+1 (mirror a striped array) for your data.
yikes! never tell an unsuspecting person to use mirrored stripes--that
configuration has lower reliability and performance than
Is it possible to tweak the size of a block that postgres tries to read
when doing a sequential scan? It looks like it reads in fairly small
blocks, and I'd expect a fairly significant boost in i/o performance
when doing a large (multi-gig) sequential scan if larger blocks were
used.
Mike Stone
On Tue, Jun 28, 2005 at 12:02:55PM -0500, John A Meinel wrote:
There has been discussion about changing the reading/writing code to be
able to handle multiple pages at once, (using something like vread())
but I don't know that it has been implemented.
that sounds promising
Also, this would
Did you separate the data and the transaction log? I've noticed less than
optimal performance on xfs if the transaction log is on the xfs data
partition, and it's silly to put the xlog on a journaled filesystem
anyway. Try putting xlog on an ext2 for all the tests.
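One way to follow that advice is to build an ext2 filesystem on a spare partition and point pg_xlog at it via a symlink. A sketch only: the device name, mount point, and data directory are assumptions, and the cluster must be shut down first.

```shell
# Move pg_xlog to a dedicated, non-journaled ext2 partition.
mkfs.ext2 /dev/sdb1                              # no journal: fine for the WAL
mkdir -p /mnt/pg_xlog
mount -o noatime /dev/sdb1 /mnt/pg_xlog
mv /var/lib/pgsql/data/pg_xlog/* /mnt/pg_xlog/   # preserve existing segments
rmdir /var/lib/pgsql/data/pg_xlog
ln -s /mnt/pg_xlog /var/lib/pgsql/data/pg_xlog   # postgres follows the symlink
```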
Mike Stone
On Thu, Jul 14, 2005 at 02:15:52PM +0200, Grega Bremec wrote:
I'm curious as to what this means - did they have problems integrating
it into their toolchain or are there actual problems going on in jfs
currently?
I've found jfs to be the least stable linux filesystem and won't allow
it
On Thu, Jul 14, 2005 at 04:29:58PM -0600, Dan Harris wrote:
Ok, I tried this one. My ssh keeps getting cut off by a router
somewhere between me and the server due to inactivity timeouts, so
all I know is that both the select and explain analyze are taking
over an hour to run.
Try running
On Sat, Jul 16, 2005 at 01:12:27AM -0700, Jeffrey W. Baker wrote:
Another frequent suggestion is to put the xlog on a separate device. I
tried this, and, for a given number of disks, it appears to be
counter-productive. A RAID5 of 5 disks holding both logs and data is
about 15% faster than a
I'd say, don't do that. Unless you've deleted a lot of stuff and are
expecting the DB to shrink, a full vacuum shouldn't really be needed. On
a DB that big a full vacuum is just going to take a long time. If you
really are shrinking, consider structuring things so you can just drop a
table
On Tue, Jul 26, 2005 at 11:23:23AM -0700, Luke Lonergan wrote:
Yup - interesting and very niche product - it seems like its only obvious
application is for the Postgresql WAL problem :-)
On the contrary--it's not obvious that it is an ideal fit for a WAL. A
ram disk like this is optimized for
On Fri, Aug 05, 2005 at 01:11:31PM +0200, Dirk Lutzebäck wrote:
I will compile the latest PostgreSQL 8.1 snapshot for 32bit to evaluate
the new shared buffer code from Tom.
I think the 64bit is slow because my queries are CPU intensive.
Have you actually tried it or are you guessing? If
On Thu, Aug 11, 2005 at 01:23:21PM +0100, Paul Johnson wrote:
I'm guessing that this is because pg_xlog has gone from a 9 spindle LUN to
a single spindle disk?
In cases such as this, where an external storage array with a hardware
RAID controller is used, the normal advice to separate the data
On Thu, Aug 11, 2005 at 10:18:44AM -0700, Mark Lewis wrote:
Actually, it seems to me that with the addition of the WAL in PostgreSQL
and the subsequent decreased need to fsync the data files themselves
(only during checkpoints?), that the only time a battery-backed write
cache would make a
On Thu, Aug 11, 2005 at 07:13:27PM -0400, Jeffrey Tenny wrote:
The system for testing was 512MB
That's definitely *not* a large ram system. If you're reading a subset
of data that totals 70MB I'm going to guess that your data set is larger
than or at least a large fraction of 512MB.
On Tue, Aug 16, 2005 at 09:12:31AM -0700, Josh Berkus wrote:
However, you are absolutely correct in that it's *relative* advice, not
absolute advice. If, for example, you're using a $100,000 EMC SAN as your
storage you'll probably be better off giving it everything and letting its
controller
On Tue, Aug 23, 2005 at 12:38:04PM -0700, Josh Berkus wrote:
which have a clear and measurable effect on performance and are fixable
without bloating the PG code. Some of these issues (COPY path, context
switching
Does that include increasing the size of read/write blocks? I've noticed
that
On Tue, Aug 23, 2005 at 05:29:01PM -0400, Jignesh Shah wrote:
Actually, the OS already does some of that readahead, etc., if it does
some sort of throttling/clubbing of reads/writes.
Note that I specified the fully cached case--even with the workload in
RAM the system still has to process a
On Tue, Aug 23, 2005 at 06:09:09PM -0400, Chris Browne wrote:
What we have been finding, as RAID controllers get smarter, is that it
is getting increasingly futile to try to attach knobs to 'disk stuff;'
it is *way* more effective to add a few more spindles to an array than
it is to fiddle with
On Wed, Aug 24, 2005 at 03:34:41PM -0400, [EMAIL PROTECTED] wrote:
It isn't an urban myth that 64-bit math on a 64-bit processor is
faster, at least if done using registers. It definitely is faster.
It may be an urban myth, though, that most applications perform
a sufficient amount of 64-bit
On Mon, Aug 29, 2005 at 09:42:46AM -0400, Rémy Beaumont wrote:
We have been trying to pinpoint what originally seemed to be an I/O
bottleneck but which now seems to be an issue with either Postgresql or
RHES 3.
Nope, it's an IO bottleneck.
The behavior we see is that when running queries that
On Tue, Aug 30, 2005 at 07:02:28PM -0400, Ron wrote:
purpose(s). That's why the TPC benchmarked systems tend to have
literally 100's of HD's and they tend to be split into very focused
purposes.
Of course, TPC benchmark systems are constructed such that cost and
storage capacity are
On Tue, Aug 30, 2005 at 08:41:40PM -0400, Ron wrote:
The scary thing is that I've worked on RW production systems that
bore a striking resemblance to a TPC benchmark system. As you can
imagine, they uniformly belonged to BIG organizations (read: lots 'o
$$$) who were using the systems for
On Tue, Sep 20, 2005 at 10:51:41AM +0300, Michael Ben-Nes wrote:
I must admit I'm a bit amazed that such an important parameter is so
ambiguous; an optimal stripe size can improve the performance of the db
significantly.
It's configuration dependent. IME, it has an insignificant effect. If
On Fri, Sep 23, 2005 at 08:49:27AM +0200, Joost Kraaijeveld wrote:
3. Can I anyhow improve the performance without replacing my hardware,
e.g. by tweaking the software?
It's not clear what your object id generator does. If it's just a
sequence, it's not clear that you need this program at
On Fri, Sep 23, 2005 at 12:21:15PM +0200, Joost Kraaijeveld wrote:
On Fri, 2005-09-23 at 05:55 -0400, Michael Stone wrote:
It's not clear what your object id generator does. If it's just a
sequence, it's not clear that you need this program at all--just use a
SELECT INTO and make the object id
On Fri, Sep 23, 2005 at 03:49:25PM +0200, Joost Kraaijeveld wrote:
On Fri, 2005-09-23 at 07:05 -0400, Michael Stone wrote:
Ok, that's great, but you didn't respond to the suggestion of using COPY
INTO instead of INSERT.
Part of the code I left out are some data conversions (e.g. from
path
On Sun, Sep 25, 2005 at 10:57:56AM -0400, Dave Cramer wrote:
I would think software raid would be quite inappropriate considering
postgres when it is working is taking a fair amount of CPU as would
software RAID. Does anyone know if this is really the case ?
It's not. Modern cpu's can
On Sun, Sep 25, 2005 at 01:41:06PM -0400, Greg Stark wrote:
Also, Raid 5 is particularly inappropriate for write-heavy Database traffic.
Raid 5 actually hurts write latency dramatically and Databases are very
sensitive to latency.
Software raid 5 actually may have an advantage here. The main
On Sat, Oct 01, 2005 at 06:19:41PM +0200, Martijn van Oosterhout wrote:
COPY TO /dev/null WITH binary
13MB/s, 55% user 45% system (ergo, CPU bound)
[snip]
the most expensive. But it does point out that the whole process is
probably CPU bound more than anything else.
Note that 45% of that
On Tue, Oct 04, 2005 at 12:43:10AM +0300, Hannu Krosing wrote:
Just FYI, I run a count(*) on a 15.6GB table on a lightly loaded db and
it ran in 163 sec. (Dual opteron 2.6GHz, 6GB RAM, 6 x 74GB 15k disks in
RAID10, reiserfs). A little less than 100MB sec.
And none of that 15G table is in the
On Wed, Oct 05, 2005 at 11:24:07AM -0400, Luke Lonergan wrote:
Nope - it would be disk wait.
I said I/O overhead; i.e., it could be the overhead of calling the
kernel for I/O's. E.g., the following process is having I/O problems:
time dd if=/dev/sdc of=/dev/null bs=1 count=1000
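The point of that bs=1 example is per-syscall overhead rather than disk speed. A self-contained variant that runs anywhere (using /dev/zero so no real disk is involved) makes the contrast visible:

```shell
# Same amount of data, wildly different syscall counts: bs=1 issues
# ~1 million read()/write() pairs, bs=1M issues one of each, so the
# first run is CPU/syscall bound even with no disk in the picture.
time dd if=/dev/zero of=/dev/null bs=1 count=1048576
time dd if=/dev/zero of=/dev/null bs=1048576 count=1
```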
On Mon, Oct 10, 2005 at 03:28:42PM -0700, Steve Poe wrote:
I don't have room for more drives, but I am considering moving the XFS
external log
There is absolutely no reason to move the xfs log on a system that
small.
Mike Stone
On Tue, Oct 25, 2005 at 03:44:36PM +0200, Chris Mair wrote:
Is there a better, faster way to do these inserts?
COPY is generally the fastest way to do bulk inserts (see
PQputCopyData).
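A minimal sketch of the COPY route from the client side (the database, table "items(id int, name text)", and CSV file are assumptions; this needs a running server):

```shell
# Bulk-load with COPY instead of per-row INSERTs. psql's \copy streams
# the client-side file through the COPY protocol (PQputCopyData under
# the hood), one transaction, no per-row round trips.
psql -d mydb -c "\copy items FROM 'items.csv' WITH CSV"
```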
Mike Stone
On Thu, Nov 10, 2005 at 03:25:35PM +0100, Steinar H. Gunderson wrote:
Ah, yes, I forgot -- it's ext3. We're considering simply moving the WAL onto
a separate partition (with data=writeback and noatime) if that can help us
any.
There's no reason to use a journaled filesystem for the wal. Use
On Thu, Nov 10, 2005 at 09:52:38AM -0600, Scott Marlowe wrote:
Not from what I understood. Ext2 can't guarantee that your data will
even be there in any form after a crash.
It can if you sync the data. (Which is the whole point of the WAL.)
I believe only metadata journaling is needed
On Tue, Nov 15, 2005 at 09:33:25AM -0500, Luke Lonergan wrote:
write performance is now up to par with the best cards I believe. We
find that you still need to set Linux readahead to at least 8MB
(blockdev --setra) to get maximum read performance on them, is that your
What on earth does that
On Sat, Nov 19, 2005 at 09:54:23AM -0800, Craig A. James wrote:
First, I occasionally rebuild the keywords, after which the VACUUM FULL
ANALYZE takes a LONG time - like 24 hours.
You know you just need vacuum, not vacuum full, right?
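The distinction, as a sketch (table name taken from the post; assumes a running server): plain VACUUM marks dead rows reusable without an exclusive lock, while VACUUM FULL compacts the file but locks the table for the duration.

```shell
# After rebuilding the keywords, plain VACUUM (plus ANALYZE for fresh
# statistics) is normally enough; no exclusive lock, no 24-hour rewrite.
psql -d mydb -c 'VACUUM ANALYZE keywords;'
```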
Mike Stone
On Mon, Nov 21, 2005 at 02:01:26PM -0500, Greg Stark wrote:
I also fear that heading in that direction could push Postgres even further
from the niche of software that works fine even on low end hardware into the
realm of software that only works on high end hardware. It's already suffering
a
On Thu, Dec 01, 2005 at 02:50:56PM +0900, Tatsumi Abe wrote:
Could anyone advise some solutions for this fragmentation problem
without stopping PostgreSQL ?
Stop doing VACUUM FULL so often. If your table size is constant anyway
you're just wasting time by compacting the table and shrinking it,
On Fri, Dec 02, 2005 at 03:15:00AM -0500, Ron wrote:
I've seen enough HD failures that I tend to be paranoid and log the
metadata of fs dedicated to WAL as well, but that may very well be overkill.
Especially since it wouldn't gain anything. Journalling doesn't give you
any advantage
On Fri, Dec 02, 2005 at 12:15:57AM -0500, Luke Lonergan wrote:
That's good to know - makes sense. I suppose we might still thrash over
a 1GB range in seeks if the BG writer starts running at full rate in the
background, right? Or is there some write combining in the BG writer?
That part your
On Fri, Dec 02, 2005 at 01:24:31PM -0800, Luke Lonergan wrote:
From a performance standpoint no argument, although you're betting that you
can do parsing / conversion faster than the COPY core in the backend can
Not necessarily; you may be betting that it's more *efficient* to do the
parsing
On Tue, Dec 06, 2005 at 04:03:22PM -0600, Ameet Kini wrote:
I'm running postgresql v8.0 and my problem is that running vacuum on my
indices is blazing fast (up to 10x faster) AFTER running reindex. For a
table with only 1 index, the time to do a vacuum (without full) went down
from 45 minutes to
On Tue, Dec 06, 2005 at 09:36:23PM +, Rory Campbell-Lange wrote:
Yes, we don't have any spare disks unfortunately. We have enabled the
BBU write, so we are hoping for good performance.
Even if you don't use separate disks you'll probably get better
performance by putting the WAL on a
On Tue, Dec 06, 2005 at 07:52:25PM -0500, Alex Turner wrote:
I would argue that you almost certainly won't gain by doing that, as you will
create a new place even further away for the disk head to seek to
instead of just another file on the same FS that is probably closer to
the current head position.
I
On Wed, Dec 14, 2005 at 11:53:52AM -0500, Andrew Rawnsley wrote:
Other goofy things about it: it isn't 1 device with 14 disks and redundant
controllers. It's 2 7-disk arrays with non-redundant controllers. It doesn't
do RAID10.
And if you want hot spares you need *two* per tray (one for each
On Fri, Dec 16, 2005 at 04:18:01PM -0600, Jim C. Nasby wrote:
Even if you're doing a lot of random IO? I would think that random IO
would perform better if you use smaller (8K) blocks, since there's less
data being read in and then just thrown away that way.
The overhead of reading an 8k block
On Fri, Dec 16, 2005 at 06:25:25PM -0600, Jim C. Nasby wrote:
True, but now you've got 4x the amount of data in your cache that you
probably don't need.
Or you might be 4x more likely to have data cached that's needed later.
If you're hitting disk either way, that's probably more likely than
On Sat, Dec 24, 2005 at 05:45:20PM -0500, Ron wrote:
Caches help, and the bigger the cache the better, but once you are
doing enough writes fast enough (and that doesn't take much even with
a few GBs of cache) the recalculate-checksums-and-write-new-ones
overhead will decrease the write speed
On Mon, Dec 26, 2005 at 12:32:19PM -0500, Alex Turner wrote:
It's irrelevant what controller, you still have to actually write the
parity blocks, which slows down your write speed because you have to
write n+n/2 blocks instead of just n blocks, making the system write
50% more data.
RAID 5 must
On Tue, Dec 27, 2005 at 11:50:16AM -0500, Ron wrote:
Sorry. A decade+ RWE in production with RAID 5 using controllers as
bad as Adaptec and as good as Mylex, Chaparral, LSI Logic (including
their Engino stuff), and Xyratex under 5 different OS's (Sun, Linux,
M$, DEC, HP) on each of Oracle,
On Tue, Dec 27, 2005 at 02:57:13PM -0500, Ron wrote:
Your quoted position is there isn't a 'RAID 5 penalty' assuming
you've got a reasonably fast controller and you're doing large
sequential writes (or have enough cache that random writes can be
batched as large sequential writes).
And you
On Fri, Jan 06, 2006 at 09:00:06AM -0500, Ian Westmacott wrote:
We use RAID50 instead of RAID0 for the tables for some fault-tolerance.
We use RAID0 for the WAL for performance.
I'm missing the implication of the question...
If you have the WAL on RAID 0 you have no fault tolerance,
OIDs seem to be on their way out, and most of the time you can get a
more helpful result by using a serial primary key anyway, but I wonder
if there's any extension to INSERT to help identify what unique id a
newly-inserted key will get? Using OIDs the insert would return the OID
of the inserted
On Fri, Jan 13, 2006 at 04:29:15PM -0500, Neil Conway wrote:
There are really no additional operations required:
INSERT INTO t2 VALUES (currval('t1_id_seq'), ...);
You need a separate SELECT if you want to use the generated sequence
value outside the database,
That would, of course, be the
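The currval() approach in full, as a sketch (tables t1/t2 and the sequence name are hypothetical; needs a running server). currval() returns the value nextval() most recently produced *in this session*, so it is safe under concurrent inserts:

```shell
psql -d mydb <<'SQL'
INSERT INTO t1 (name) VALUES ('parent');
INSERT INTO t2 (t1_id, note) VALUES (currval('t1_id_seq'), 'child');
SELECT currval('t1_id_seq');  -- the extra SELECT exposes the id to the client
SQL
```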
On Tue, Jan 17, 2006 at 11:33:02AM +0100, Michael Riess wrote:
did you read my post? In the first part I explained why I don't want to
increase the FSM that much.
Since you didn't quantify it, that wasn't much of a data point. (IOW,
you'd generally have to be seriously resource constrained
On Tue, Jan 17, 2006 at 03:50:38PM +0100, Michael Riess wrote:
about the FSM: You say that increasing the FSM is fairly cheap - how
should I know that?
Why would you assume otherwise, to the point of not considering changing
the setting?
The documentation explains how much memory is used
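To put a number on "fairly cheap": the 8.x documentation puts the FSM cost at roughly six bytes of shared memory per max_fsm_pages slot (plus a small per-relation cost). A quick back-of-the-envelope check:

```shell
# ~6 bytes per page slot, so even a generous setting costs little:
max_fsm_pages=1000000
echo "$(( max_fsm_pages * 6 / 1024 / 1024 ))MB"   # prints 5MB of shared memory
```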
On Tue, Jan 17, 2006 at 08:56:00PM +0100, Alessandro Baretta wrote:
I understand most of these issues, and expected this kind of reply. Please,
allow me to insist that we reason on this problem and try to find a
solution. My reason for doing so is that the future software industry is
likely to
On Sun, Jan 29, 2006 at 12:25:23PM +0100, hubert depesz lubaczewski wrote:
hmm .. do i understand correctly that you're suggesting that using
raid 10 and/or hardware raid adapter might hurt disc subsystem
performance? could you elaborate on the reasons, please?
I think it's been fairly well
On Tue, Jan 24, 2006 at 07:40:22PM +0100, Arnau Rebassa Villalonga wrote:
I have a performance problem and I don't know where my bottleneck is.
[snip]
Most of the time the idle value is even higher than 60%.
It's generally a fairly safe bet that if you are running slow and your
cpu is
On Tue, Jan 31, 2006 at 08:09:40PM -0800, Jeffrey W. Baker wrote:
I think I've proved my point. Software RAID1 read balancing provides
0%, 300%, 100%, and 100% speedup on 1, 2, 4, and 8 threads,
respectively. In the presence of random I/O, the results are even
better.
Umm, the point *was*
On Tue, Mar 21, 2006 at 06:01:58AM -0600, Jim C. Nasby wrote:
On Tue, Mar 21, 2006 at 12:56:18PM +0100, Steinar H. Gunderson wrote:
For the record, that's the wrong way round. For the data partitioning
metadata journaling is enough, and for the WAL partition you don't need any
FS journaling at
On Fri, Mar 24, 2006 at 01:21:23PM -0500, Chris Browne wrote:
A naive read on this is that you might start with one backend process,
which then spawns 16 more. Each of those backends is scanning through
one of those 16 files; they then throw relevant tuples into shared
memory to be
On Thu, Mar 30, 2006 at 01:59:10PM +0200, Peter Eisentraut wrote:
The table has seen VACUUM FULL and REINDEX before this.
But no analyze?
Mike Stone
On Thu, Mar 30, 2006 at 02:31:34PM +0200, Steinar H. Gunderson wrote:
Well, it's logical enough; it scans along activity_id until it finds one with
state=1 or state=10001. You obviously have a _lot_ of records with low
activity_id and state none of these two, so Postgres needs to scan all
On Thu, Mar 30, 2006 at 02:51:47PM +0200, Steinar H. Gunderson wrote:
On Thu, Mar 30, 2006 at 07:42:53AM -0500, Michael Stone wrote:
Yes. And the estimates are bad enough (orders of magnitude) that I can't
help but wonder whether pg could come up with a better plan with better
statistics
On Thu, Apr 13, 2006 at 02:59:23PM -0400, Francisco Reyes wrote:
In RAID 10 would it matter that WAL is in the same RAID set?
Would it be better:
4 disks in RAID10 Data
2 disks RAID 1 WAL
2 hot spares
I guess the first question is why 2 hot spares? You don't have many
spindles, so you don't
On Fri, Apr 14, 2006 at 02:01:56PM -0400, Francisco Reyes wrote:
Michael Stone writes:
I guess the first question is why 2 hot spares?
Because we are using RAID 10
I still don't follow that. Why would the RAID level matter? IOW, are you
actually wanting 2 spares, or are you just stick
From my initial testing this is very promising for a postgres server.
Benchmark-wise, a simple dd with an 8k blocksize gets ~200MB/s as
compared to ~140MB/s on the same hardware without the patch. Also, that
200MB/s seems to be unaffected by the dd blocksize, whereas without the
patch a 512k
On Wed, Apr 26, 2006 at 04:33:40PM -0700, Luke Lonergan wrote:
I'm thinking about it, we're already using a fixed read-ahead of 16MB using
blockdev on the stock Redhat 2.6.9 kernel, it would be nice to not have to
set this so we may try it.
FWIW, I never saw much performance difference from
On Wed, Apr 26, 2006 at 10:43:48AM -0400, Michael Stone wrote:
patch a 512k blocksize would get ~100MB/s. I'm now watching to see how
it does over a couple of days on real-world workloads.
I've got one DB where the VACUUM ANALYZE generally takes 11M-12M ms;
with the patch the job took 1.7M ms
On Thu, Apr 27, 2006 at 08:38:55AM -0400, Ketema Harris wrote:
I am looking for the best solution to have a large amount of disk storage
attached to my PostgreSQL 8.1 server.
What other options/protocols are there to get high performance and data
integrity while having the benefit of not
On Thu, Apr 27, 2006 at 08:57:51AM -0400, Ketema Harris wrote:
OK. My thought process was that having non local storage as say a big raid
5 san ( I am talking 5 TB with expansion capability up to 10 )
That's two disk trays for a cheap slow array. (Versus a more expensive
solution with more
On Thu, Apr 27, 2006 at 09:41:21AM -0400, Ketema Harris wrote:
No, backups are completely unrelated to your storage type; you need them
either way.
Please see the other post. I meant the storage would do the backups.
Which isn't a backup. Even expensive storage arrays can break or burn
down.
On Thu, Apr 27, 2006 at 12:50:16PM -0500, Jim C. Nasby wrote:
Yes, but some dedicated storage devices actually provide good
performance with RAID5. Most simpler solutions give pretty abysmal write
performance.
dedicated storage device != SAN != NAS. You can get good performance in
a dedicated
On Sat, Apr 29, 2006 at 11:18:10AM +0800, K C Lau wrote:
I apologize for simplistic ideas again. I presume that the equivalent tuple
header information is not maintained for index entries. What if they are,
probably only for the most commonly used index types to allow recycling
where possible?
On Tue, May 02, 2006 at 07:28:34PM -0400, Bill Moran wrote:
Reindexing is in a different class than vacuuming.
Kinda, but it is in the same class as vacuum full. If vacuum neglect (or
dramatic change in usage) has gotten you to the point of 10G of overhead
on a 2G table you can get a
On Tue, May 02, 2006 at 08:09:52PM -0600, Brendan Duddridge wrote:
---Sequential Output ---Sequential Input-- --Random--
-Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %
On Wed, May 03, 2006 at 09:19:52AM -0400, Jeff Trout wrote:
Bonnie++ is able to use very large datasets. It also tries to figure
out the size you want (2x ram) - the original bonnie is limited to 2GB.
Yes, and once you get into large datasets like that the quality of the
data is fairly poor
On Wed, May 03, 2006 at 11:07:15AM -0500, Scott Marlowe wrote:
I have often used the mem=xxx arguments to lilo when needing to limit
the amount of memory for testing purposes. Just google for limit memory
and your bootloader to find the options.
Or, just don't worry about it. Even if you get
On Wed, May 03, 2006 at 01:08:21PM -0500, Jim C. Nasby wrote:
Well, in this case the question was about random write access, which dd
won't show you.
That's the kind of thing you need to measure against your workload.
Mike Stone
On Wed, May 03, 2006 at 02:40:15PM -0500, Scott Marlowe wrote:
Note that I'm referring to bonnie++ as was an earlier poster. It
certainly seems capable of giving you a good idea of how your hardware
will behave under load.
IME it gives fairly useless results. YMMV. Definitely the numbers
On Wed, May 03, 2006 at 04:30:32PM -0500, Scott Marlowe wrote:
If you've not tried bonnie++ on a limited memory machine, you really
should.
Yes, I have. I also patched bonnie to handle large files and other such
nifty things before bonnie++ was forked. Mostly I just didn't get much
value out
On Fri, May 05, 2006 at 09:57:58AM -0400, mcelroy, tim wrote:
Are you saying the kernel's disc cache may be getting whacked? No, I
understand that PG should use as much memory as it can and the system as
well. The main problem here is that with almost all the 8GB of RAM 'in use'
when I try to
On Fri, May 05, 2006 at 10:27:10AM -0400, mcelroy, tim wrote:
Sorry, been up all night and maybe provided too much information or not the
right information and only confused folks, tired I guess. When I say 'in
use' I am referring to the 'used' column.
Which is a mostly irrelevant number.
On Fri, May 05, 2006 at 06:33:33PM +0200, Gábriel Ákos wrote:
if postgres tries to allocate more memory (because the settings allow
it) than the kernel itself is configured for, then you will see similar
error messages.
If you're talking about the shared memory limits, postgres will bomb out
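The usual fix on Linux, as a config sketch (the values are illustrative, not recommendations; make them permanent in /etc/sysctl.conf):

```shell
# If shared_buffers asks for more than the kernel's SysV shared memory
# limits, the postmaster fails at startup with a shmget error.
sysctl -w kernel.shmmax=1073741824   # max single segment, in bytes (1GB here)
sysctl -w kernel.shmall=262144       # total shared memory, in 4kB pages
```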
On Mon, May 08, 2006 at 03:38:23PM -0400, Vivek Khera wrote:
On May 8, 2006, at 1:30 PM, Jim C. Nasby wrote:
Yeah, I prefer my surgeons to work this way too. training is for the
birds.
I think you read too quickly past the part where Tim said he'd taken a
week-long training class.
On Wed, Jun 14, 2006 at 04:32:23PM +0200, Sven Geisler wrote:
For example, you run two queries with two clients and each query needs
to read some indices from disk. In this case it is more efficient to read
from different volumes than to read from one large volume where the disc
arms have to
On Mon, Jun 19, 2006 at 08:09:47PM +1000, Tim Allen wrote:
Certainly, the read performance of the SATA disk still beats the SAN,
and there is no way to lie about read performance.
Sure there is: you have the data cached in system RAM. I find it real
hard to believe that you can sustain
On Mon, Jun 26, 2006 at 08:33:34PM +0100, Simon Riggs wrote:
of the SQL standard, so being unaware of them when using SQL is strange
to me.
Welcome to the world of programs designed for mysql. You'll almost never
see them batch inserts, take advantage of referential integrity, etc.
You end
On Mon, Jul 17, 2006 at 12:52:17AM +0200, Mikael Carneholm wrote:
I have finally gotten my hands on the MSA1500 that we ordered some time
ago. It has 28 x 10K 146Gb drives, currently grouped as 10 (for wal) +
18 (for data). There's only one controller (an emulex), but I hope
You've got 1.4TB
On Sat, Jul 22, 2006 at 10:26:53AM -0700, Craig A. James wrote:
This causes massive file-system activity and flushes all files that the
kernel has cached. If you run this between each Postgres test (let it run
for a couple minutes), it gives you an apples-to-apples comparison between
On Mon, Jul 31, 2006 at 12:48:11PM +0200, Axel Rau wrote:
WHERE P.path ~ '^%@/[^/]*/$' ) AS NLPC
This can't be indexed. You might try something like
WHERE P.path LIKE '[EMAIL PROTECTED]' AND P.path ~ '^%@/[^/]*/$'
The schema could be a lot more intelligent here. (E.g., store
On Mon, Jul 31, 2006 at 05:06:00PM +0200, Axel Rau wrote:
Please reconsider your proposals with the above
I'm not sure what you're getting at; could you be more specific?
Mike Stone