William Yu wrote:
We upgraded our disk system for our main data processing server earlier
this year. After pricing out all the components, basically we had the
choice of:
LSI MegaRaid 320-2 w/ 1GB RAM+BBU + 8 15K 150GB SCSI
or
Areca 1124 w/ 1GB RAM+BBU + 24 7200RPM 250GB SATA
My mistake
Usually when simple queries take a long time to run, it's the system
tables (pg_*) that have become bloated and need vacuuming. But that's
just a random guess on my part w/o more detailed info.
Greg Stumph wrote:
Well, since I got no response at all to this message, I can only assume that
I've
[EMAIL PROTECTED] wrote:
I have an Intel Pentium D 920, and an AMD X2 3800+. These are very
close in performance. The retail price difference is:
Intel Pentium D 920 is selling for $310 CDN
AMD X2 3800+ is selling for $347 CDN
Anybody who claims that Intel is 2X more
David Boreham wrote:
It isn't only Postgres. I work on a number of other server applications
that also run much faster on Opterons than the published benchmark
figures would suggest they should. They're all compiled with gcc4,
so possibly there's a compiler issue. I don't run Windows on any
of
Benjamin Arai wrote:
Obviously, I have done this to improve write performance for the update
each week. My question is if I install a 3ware or similar card to
replace my current software RAID 1 configuration, am I going to see a
very large improvement? If so, what would be a ball park
Steinar H. Gunderson wrote:
On Wed, Jan 18, 2006 at 01:58:09PM -0800, William Yu wrote:
The key is getting a card with the ability to upgrade the onboard ram.
Our previous setup was a LSI MegaRAID 320-1 (128MB), 4xRAID10,
fsync=off. Replaced it with a ARC-1170 (1GB) w/ 24x7200RPM SATA2 drives
David Lang wrote:
raid 5 is bad for random writes as you state, but how does it do for
sequential writes (for example data mining where you do a large import
at one time, but seldom do other updates). I'm assuming a controller
with a reasonable amount of battery-backed cache.
Random write
Luke Lonergan wrote:
Note that host-based SCSI raid cards from LSI, Adaptec, Intel, Dell, HP
and others have proven to have worse performance than a single disk
drive in many cases, whether for RAID0 or RAID5. In most circumstances
This is my own experience. Running a LSI MegaRAID in pure
Juan Casero wrote:
Can you elaborate on the reasons the opteron is better than the Xeon when it
comes to disk io? I have a PostgreSQL 7.4.8 box running a DSS. One of our
Opterons have 64-bit IOMMU -- Xeons don't. That means in 64-bit mode,
for transfers to addresses above 4GB, the OS must allocate the
Michael Riess wrote:
Well, I'd think that's where your problem is. Not only do you have a
(relatively speaking) small server -- you also share it with other
very-memory-hungry services! That's not a situation I'd like to be in.
Try putting Apache and Tomcat elsewhere, and leave the bulk of the 1GB
Alan Stange wrote:
Luke Lonergan wrote:
The so-called iowait is the problem here - iowait is not idle (otherwise it
would be in the idle column).
Iowait is time spent waiting on blocking io calls. As another poster
pointed out, you have a two CPU system, and during your scan, as
iowait time is
Welty, Richard wrote:
David Boreham wrote:
I guess I've never bought into the vendor story that there are
two reliability grades. Why would they bother making two
different kinds of bearing, motor etc ? Seems like it's more
likely an excuse to justify higher prices.
then how to account for
Alex Turner wrote:
Opteron 242 - $178.00
Opteron 242 - $178.00
Tyan S2882 - $377.50
Total: $733.50
Opteron 265 - $719.00
Tyan K8E - $169.00
Total: $888.00
You're comparing the wrong CPUs. The 265 is the 2x of the 244 so you'll
have to bump up the price more although not enough to make a
Joshua Marsh wrote:
On 11/17/05, *William Yu* [EMAIL PROTECTED] wrote:
No argument there. But it's pointless if you are IO bound.
Why would you just accept "we're IO bound, nothing we can do"? I'd do
everything in my power to make my app go from IO bound
Alex Turner wrote:
Not at random access in RAID 10 they aren't, and anyone with their
head screwed on right is using RAID 10. The 9500S will still beat the
Areca cards at RAID 10 database access pattern.
The max 256MB onboard for 3ware cards is disappointing though. While
good enough for 95%
James Mello wrote:
Unless there was a way to guarantee consistency, it would be hard at
best to make this work. Convergence on large data sets across boxes is
non-trivial, and diffing databases is difficult at best. Unless there
was some form of automated way to ensure consistency, going 8 ways
Alex Stapleton wrote:
You're going to have to factor in the increased failure rate in your cost
measurements, including any downtime or performance degradation whilst
rebuilding parts of your RAID array. It depends on how long you're
planning for this system to be operational as well of course.
Alex Turner wrote:
Spend a fortune on dual core CPUs and then buy crappy disks... I bet
for most applications this system will be IO bound, and you will see a
nice lot of drive failures in the first year of operation with
consumer grade drives.
Spend your money on better Disks, and don't
David Boreham wrote:
Spend a fortune on dual core CPUs and then buy crappy disks... I bet
for most applications this system will be IO bound, and you will see a
nice lot of drive failures in the first year of operation with
consumer grade drives.
I guess I've never bought into the vendor
Merlin Moncure wrote:
You could instead buy 8 machines that total 16 cores, 128GB RAM and
It's hard to say what would be better. My gut says the 5u box would be
a lot better at handling high cpu/high concurrency problems...like your
typical business erp backend. This is pure speculation of
Carlos Henrique Reimer wrote:
I forgot to say that it's a 12GB database...
Ok, I'll set shared buffers to 30,000 pages but even so shouldn't meminfo
and top show some shared pages?
I heard something about Redhat 9 not being able to handle RAM higher
than 2GB. Is that right?
Thanks in
Donald Courtney wrote:
I built PostgreSQL 8.1 64-bit on Solaris 10 a few months ago
and side by side with the 32-bit PostgreSQL build saw no improvement. In
fact the 64-bit result was slightly lower.
I'm not surprised 32-bit binaries running on a 64-bit OS would be faster
than
Donald Courtney wrote:
in that even if you ran postgreSQL on a 64 bit address space
with larger number of CPUs you won't see much of a scale up
and possibly even a drop. I am not alone in having the *expectation*
What's your basis for believing this is the case? Why would PostgreSQL's
Ron wrote:
PERC4eDC-PCI Express, 128MB Cache, 2-External Channels
Looks like they are using the LSI Logic MegaRAID SCSI 320-2E
controller. IIUC, you have 2 of these, each with 2 external channels?
A lot of people have mentioned Dell's versions of the LSI cards can be
WAY slower than the
I've been running 2x265's on FC4 64-bit (2.6.11-1+) and it's been
running perfectly. With NUMA enabled, it runs incrementally faster than
NUMA off. Performance is definitely better than the 2x244s they replaced
-- how much faster, I can't measure since I don't have the transaction
volume to
My Dual Core Opteron server came in last week. I tried to do some
benchmarks with pgbench to get some numbers on the difference between
1x1 - 2x1 - 2x2 but no matter what I did, I kept getting the same TPS
on all systems. Any hints on what the pgbench parameters I should be using?
In terms of
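Identical TPS across 1x1, 2x1 and 2x2 setups usually means the benchmark itself isn't generating enough concurrency to load more than one CPU. A minimal sketch of a client-count sweep, assuming PostgreSQL's pgbench with its standard flags (-i initialize, -s scale factor, -c concurrent clients, -t transactions per client) and a hypothetical database named "bench" -- the specific numbers are illustrative, not recommendations:

```python
# Hedged sketch: pgbench invocations that can actually load a multi-CPU box.
# With -c 1 (the default), every machine bottlenecks on a single connection,
# which would explain identical TPS on 1x1, 2x1 and 2x2 systems.
scale = 50                      # pgbench_accounts gets scale * 100,000 rows
init_cmd = f"pgbench -i -s {scale} bench"

# Sweep client counts well past the number of cores under test.
run_cmds = [f"pgbench -c {c} -t 1000 bench" for c in (1, 2, 4, 8, 16)]

print(init_cmd)
for cmd in run_cmds:
    print(cmd)
```

If TPS still plateaus with 8-16 clients, the bottleneck is likely I/O (WAL fsyncs) rather than CPU, and more cores won't show up in the numbers.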
Rory Campbell-Lange wrote:
Processor:
First of all I noted that we were intending to use Opteron processors. I
guess this isn't a straightforward choice because I believe Debian (our
Linux of choice) doesn't have a stable AMD64 port. However some users on
this list suggest that Opterons work
We are considering two RAID1 system disks, and two RAID1 data disks.
We've avoided buying Xeons. The machine we are looking at looks like
this:
Rackmount Chassis - 500W PSU / 4 x SATA Disk Drive Bays
S2882-D - Dual Opteron / AMD 8111 Chipset / 5 x PCI Slots
2x - (Dual) AMD Opteron
A pretty awful way is to mangle the SQL statement so the logical statements
on the other fields look like so:
select * from mytable where 0+field = 100
Tobias Brox wrote:
Is it any way to attempt to force the planner to use some specific index
while creating the plan? Other than eventually
I've used LSI MegaRAIDs successfully in the following systems with both
Redhat 9 and FC3 64bit.
Arima HDAMA/8GB RAM
Tyan S2850/4GB RAM
Tyan S2881/4GB RAM
I've previously stayed away from Adaptec because we used to run Solaris
x86 and the driver was somewhat buggy. For Linux and FreeBSD, I'd
- 257184
4x848 - 360008
2x275 - 392634
order entry stored procedures
2x248 - 2939
1x175 - 3215
4x848 - 4500
2x275 - 4908
Greg Stark wrote:
William Yu [EMAIL PROTECTED] writes:
It turns out the latency in a 2xDC setup is just so much lower and most apps
like lower latency than higher bandwidth
I'm sure there's some corner case where more memory helps. If you
consider that 1GB of RAM is about $100, I'd max out memory on the
controller just for the hell of it.
Josh Berkus wrote:
Steve,
Past recommendations for a good RAID card (for SCSI) have been the LSI
MegaRAID 2x. This unit comes
Unfortunately, Anandtech has only used Postgres a single time in its
benchmarks. And what it did show back then was a huge performance
advantage for the Opteron architecture over Xeon in this case. Where the
fastest Opterons were just 15% faster in MySQL/MSSQL/DB2 than the
fastest Xeons, it
I posted this link a few months ago and there was some surprise over the
difference in postgresql compared to other DBs. (Not much surprise in
Opteron stomping on Xeon in pgsql as most people here have had that
experience -- the surprise was in how much smaller the difference was in
other
The Linux kernel is definitely headed this way. The 2.6 allows for
several different I/O scheduling algorithms. A brief overview about the
different modes:
http://nwc.serverpipeline.com/highend/60400768
Although a much older article from the beta-2.5 days, more in-depth info
from one of the
My experience:
1xRAID10 for postgres
1xRAID1 for OS + WAL
Jeff Frost wrote:
Now that we've hashed out which drives are quicker and more money equals
faster...
Let's say you had a server with 6 separate 15k RPM SCSI disks, what raid
option would you use for a standalone postgres server?
a)
Problem with this strategy. You want battery-backed write caching for
best performance safety. (I've tried IDE for WAL before w/ write
caching off -- the DB got crippled whenever I had to copy files from/to
the drive on the WAL partition -- ended up just moving WAL back on the
same SCSI drive
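The RAID5-vs-RAID10 tradeoff being debated above can be put in rough numbers with the textbook small-write model: a RAID5 random write costs four disk I/Os (read data, read parity, write both), a RAID10 write costs two (one per mirror side). A back-of-envelope sketch -- the 180 IOPS/disk figure for a 15K SCSI drive is an assumption for illustration, not a measurement:

```python
# Textbook random-write model for the 6-disk question above.
# penalty = disk I/Os consumed per logical write (RAID5: 4, RAID10: 2).
def random_write_iops(n_disks, per_disk_iops, penalty):
    return n_disks * per_disk_iops / penalty

disks, per_disk = 6, 180          # assumed per-disk random IOPS (15K SCSI)
raid5  = random_write_iops(disks, per_disk, 4)   # 270 writes/sec
raid10 = random_write_iops(disks, per_disk, 2)   # 540 writes/sec
```

The flip side is capacity: RAID5 leaves 5 of 6 disks usable versus 3 of 6 for RAID10, which is why RAID5 keeps coming up for bulk-load/data-mining workloads despite the write penalty.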
Alex Turner wrote:
I'm no drive expert, but it seems to me that our write performance is
excellent. I think what most are concerned about is OLTP where you
are doing heavy write _and_ heavy read performance at the same time.
Our system is mostly read during the day, but we do a full system
update
If someone has a simple benchmark test database to run, I would be
happy to run it on our hardware here.
Alex Turner
On Apr 6, 2005 3:30 AM, William Yu [EMAIL PROTECTED] wrote:
Alex Turner wrote:
I'm no drive expert, but it seems to me that our write performance is
excellent. I think what most
Jeremiah Jahn wrote:
I have about 5M names stored on my DB. Currently the searches are very
quick unless, they are on a very common last name ie. SMITH. The Index
is always used, but I still hit 10-20 seconds on a SMITH or Jones
search, and I average about 6 searches a second and max out at about
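The slowdown on common surnames is a selectivity problem, not an index problem, and simple arithmetic shows why. A sketch where the 1% frequency for SMITH and the 100 uncached random reads/sec are assumptions for illustration:

```python
# Rough selectivity math for the common-surname case above.  Both the
# smith_frac and random_reads_per_sec figures are assumed, not measured.
total_rows = 5_000_000
smith_frac = 0.01                     # assume ~1% of rows match SMITH
matches = int(total_rows * smith_frac)           # 50,000 matching rows

random_reads_per_sec = 100            # uncached heap fetches, one spindle
worst_case_secs = matches / random_reads_per_sec # 500s fully uncached
```

Even with the index "always used", 50,000 heap fetches dominate the query; the observed 10-20 seconds implies most of those pages are already cached, and the usual fixes are more RAM or clustering the table on the name index.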
You can get 64-bit Xeons also but it takes a hit in the I/O department due
to the lack of a hardware I/O MMU which limits DMA transfers to
addresses below 4GB. This has a two-fold impact:
1) transferring data to addresses above 4GB requires first a transfer to
below 4GB and then a copy to the final destination.
2) You
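The first impact amounts to routing every high-memory DMA through a bounce buffer below the 4GB line. A toy model of that extra copy -- addresses and buffers here are simulated, not real DMA:

```python
# Toy model of the bounce-buffer penalty described above: without an IOMMU,
# DMA can only target physical addresses below 4GB, so data bound for a
# higher address takes two copies instead of one.
FOUR_GB = 4 * 1024**3

def copies_needed(dest_addr, has_iommu):
    if has_iommu or dest_addr < FOUR_GB:
        return 1          # DMA straight to the destination
    return 2              # DMA to a bounce buffer below 4GB, then memcpy up

high_dest = 6 * 1024**3   # a buffer living above the 4GB line
assert copies_needed(high_dest, has_iommu=True) == 1    # Opteron-style
assert copies_needed(high_dest, has_iommu=False) == 2   # 64-bit Xeon-style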
Bruce Momjian wrote:
William Yu wrote:
You can get 64-bit Xeons also but it takes a hit in the I/O department due
to the lack of a hardware I/O MMU which limits DMA transfers to
addresses below 4GB. This has a two-fold impact:
1) transferring data to addresses above 4GB requires first a transfer to
below 4GB
Jim C. Nasby wrote:
On Tue, Feb 01, 2005 at 07:35:35AM +0100, Cosimo Streppone wrote:
You might look at Opteron's, which theoretically have a higher data
bandwidth. If you're doing anything data intensive, like a sort in
memory, this could make a difference.
Would Opteron systems need 64-bit
Hervé Piedvache wrote:
My point being is that there is no free solution. There simply isn't.
I don't know why you insist on keeping all your data in RAM, but the
mysql cluster requires that ALL data MUST fit in RAM all the time.
I don't insist on having all data in RAM, but when you use
Hervé Piedvache wrote:
Sorry but I don't agree with this ... Slony is a replication solution ... I
don't need replication ... what will I do when my database grows to 50
GB ... I'll need more than 50 GB of RAM on each server ???
This solution is not very realistic for me ...
Have you
I inferred this from reading up on the compressed vm project. It can be
higher or lower depending on what devices you have in your system --
however, I've read messages from kernel hackers saying Linux is very
aggressive in reserving memory space for devices because it must be
allocated at
[EMAIL PROTECTED] wrote:
Since the optimal state is to allocate a small amount of memory to
Postgres and leave a huge chunk to the OS cache, this means you are
already hitting the PAE penalty at 1.5GB of memory.
How could I avoid this hit?
Upgrade to 64-bit processors + 64-bit linux.
My experience is RH9 auto-detects machines with >= 2GB of RAM and installs
the PAE bigmem kernel by default. I'm pretty sure the FC2/3 installer
will do the same.
[EMAIL PROTECTED] wrote:
I understand that the 2.6.* kernels are much better at large memory
support (with respect to performance
Gavin Sherry wrote:
There is no problem with free Linux distros handling 4 GB of memory. The
problem is that 32-bit hardware must make use of some less than efficient
mechanisms to be able to address the memory.
The threshold for using PAE is actually far lower than 4GB. 4GB is the
total memory
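The address-space arithmetic behind this thread is worth writing down: plain 32-bit addressing tops out at 4GB of physical RAM, PAE extends physical addresses to 36 bits (64GB), but each process still sees only a 32-bit virtual space -- 3GB of user space under the usual Linux split:

```python
# The arithmetic behind the PAE discussion above.
GB = 1024**3
plain_32bit_limit   = 2**32          # 4GB physical without PAE
pae_limit           = 2**36          # 64GB physical with PAE (36-bit)
per_process_virtual = 2**32          # unchanged by PAE: still 32-bit virtual
linux_user_space    = 3 * GB         # default 3G/1G user/kernel split
```

This is why PAE helps the OS cache more data but does nothing for a single process that wants more than ~3GB -- and why the threshold for paying PAE's translation overhead kicks in well below 4GB of installed RAM.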
[EMAIL PROTECTED] wrote:
Now I turned hyperthreading off and readjusted the conf. I found the
problem query, which was:
update one flag of the table [8 million records, which I don't think is too
much]. When I turned this query off, everything went fine.
I don't know whether update the data is much slower than
[EMAIL PROTECTED] wrote:
I will try to reduce shared buffer to 1536 [1.87 Mb].
1536 is probably too low. I've tested a bunch of different settings on my
8GB Opteron server and 10K seems to be the best setting.
also, effective_cache_size is the sum of kernel buffers + shared_buffers, so
it should be
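Since effective_cache_size is expressed in 8KB pages, the "kernel buffers + shared_buffers" sum has to be converted. A sketch where the 3.5GB of kernel cache is a made-up input for an 8GB box (read the real figure from `free`), combined with the 10K shared_buffers setting mentioned above:

```python
# Converting "kernel buffers + shared_buffers" into effective_cache_size,
# which PostgreSQL counts in 8KB pages.  The 3.5GB cached figure is an
# assumed example input; take the real number from `free` (buffers + cached).
PAGE = 8 * 1024                             # PostgreSQL page size in bytes
kernel_cached_bytes  = int(3.5 * 1024**3)   # assumed: from `free`
shared_buffers_pages = 10_000               # the 10K setting mentioned above

effective_cache_size = kernel_cached_bytes // PAGE + shared_buffers_pages
```

The point of the setting is purely advisory: it tells the planner how much of the database is plausibly cached somewhere, so overstating it slightly is far less harmful than leaving it at the tiny default.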
Dave Cramer wrote:
William Yu wrote:
[EMAIL PROTECTED] wrote:
I will try to reduce shared buffer to 1536 [1.87 Mb].
1536 is probably too low. I've tested a bunch of different settings on
my 8GB Opteron server and 10K seems to be the best setting.
Be careful here, he is not using opterons which
Alex wrote:
Hi,
I recently ran pgbench against different servers and got some results I
don't quite understand.
A) EV1: Dual Xeon, 2GHz, 1GB Memory, SCSI 10Krpm, RHE3
B) Dual Pentium3 1.4GHz (Blade), SCSI Disk 10Krpm, 1GB Memory, Redhat 8
C) P4 3.2GHz, IDE 7.2Krpm, 1GB Mem, Fedora Core2
Running
IDE disks lie about write completion (This can be disabled on some
drives) whereas SCSI drives wait for the data to actually be written
before they report success. It is quite
easy to corrupt a PG (Or most any db really) on an IDE drive. Check
the archives for more info.
Do we have any real
Greg Stark wrote:
William Yu [EMAIL PROTECTED] writes:
Biggest speedup I've found yet is the backup process (pg_dump | gzip). 100%
faster in 64-bit mode. This drastic speedup might be more the result of 64-bit
GZIP though, as I've seen benchmarks in the past showing encryption/compression
running 2
I just finished upgrading the OS on our Opteron 148 from Redhat9 to
Fedora FC2 X86_64 with full recompiles of Postgres/Apache/Perl/Samba/etc.
The verdict: a definite performance improvement. I tested just a few CPU
intensive queries and many of them are a good 30%-50% faster.
William Yu wrote:
I just finished upgrading the OS on our Opteron 148 from Redhat9 to
Fedora FC2 X86_64 with full recompiles of Postgres/Apache/Perl/Samba/etc.
The verdict: a definite performance improvement. I tested just a few CPU
intensive queries and many of them are a good 30%-50% faster
I gave -O3 a try with -funroll-loops, -fomit-frame-pointer and a few
others. Seemed to perform about the same as the default -O2 so I just
left it as -O2.
Gustavo Franklin Nóbrega wrote:
Hi William,
Which GCC flags did you use to compile PostgreSQL?
Best regards,
Gustavo
Josh Berkus wrote:
1) Query caching is not a single problem, but rather several different
problems requiring several different solutions.
2) Of these several different solutions, any particular query result caching
implementation (but particularly MySQL's) is rather limited in its
Ron St-Pierre wrote:
Yes, I know that it's not a very good idea, however queries are allowed
against all of those columns. One option is to disable some or all of the
indexes when we update, run the update, and recreate the indexes,
however it may slow down user queries. Because there are so
You're not getting much of a bump with this server. The CPU is
incrementally faster -- in the absolutely best case scenario where your
queries are 100% cpu-bound, that's about ~25%-30% faster.
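The "best case scenario" caveat above is just Amdahl's law: a 30% faster CPU only buys the full 30% on queries that are 100% CPU-bound, and the gain shrinks fast as the I/O fraction grows. A quick check:

```python
# Amdahl-style check of the claim above: cpu_fraction is the share of query
# time spent on CPU, cpu_speedup the improvement of the new CPU (1.30 = 30%).
def speedup(cpu_fraction, cpu_speedup):
    return 1 / ((1 - cpu_fraction) + cpu_fraction / cpu_speedup)

fully_cpu_bound = speedup(1.0, 1.30)   # the whole 30% shows up
half_io_bound   = speedup(0.5, 1.30)   # ~13% -- most of the gain is gone
```

Which is the argument the thread keeps circling back to: for an I/O-bound workload, money spent on disks moves the overall number far more than money spent on CPUs.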
What about using Dual Athlon MP instead of a Xeon? Would be much less expensive,
but have higher
Anjan Dave wrote:
We have a Quad-Intel XEON 2.0GHz (1MB cache), 12GB memory, running RH9,
PG 7.4.0. There's an internal U320, 10K RPM RAID-10 setup on 4 drives.
We are expecting a pretty high load, a few thousands of 'concurrent'
users executing either select, insert, update, statments.
The
David Teran wrote:
Hi,
we are trying to speed up a database which has about 3 GB of data. The
server has 8 GB RAM and we wonder how we can ensure that the whole DB is
read into RAM. We hope that this will speed up some queries.
regards David
---(end of
Some arbitrary data processing job
WAL on single drive: 7.990 rec/s
WAL on 2nd IDE drive: 8.329 rec/s
WAL on tmpfs: 13.172 rec/s
A huge jump in performance but a bit scary having a WAL that can
disappear at any time. I'm gonna work up an rsync script and do some
power-off experiments to see how
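The relative gains implied by the rec/s figures quoted above:

```python
# Relative speedups from the quoted WAL-placement figures (rec/s).
single  = 7.990      # WAL on single drive
ide_2nd = 8.329      # WAL on 2nd IDE drive
tmpfs   = 13.172     # WAL on tmpfs

gain_2nd_drive = ide_2nd / single - 1   # ~4% -- barely worth a spindle
gain_tmpfs     = tmpfs / single - 1     # ~65% -- hence the temptation
```

The ~65% jump is really a measure of how much time the workload spends waiting on WAL fsyncs, which is the same gap a battery-backed write cache or a solid-state WAL device targets without the disappearing-WAL risk.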
Russell Garrett wrote:
WAL on single drive: 7.990 rec/s
WAL on 2nd IDE drive: 8.329 rec/s
WAL on tmpfs: 13.172 rec/s
A huge jump in performance but a bit scary having a WAL that can
disappear at any time. I'm gonna work up an rsync script and do some
power-off experiments to see how badly it gets
Jeff Bohmer wrote:
We're willing to shell out extra bucks to get something that will
undoubtedly handle the projected peak load in 12 months with excellent
performance. But we're not familiar with PG's performance on Linux and
don't like to waste money.
Properly tuned, PG on Linux runs really
Jeff Bohmer wrote:
It seems I don't fully understand the bigmem situation. I've searched
the archives, googled, checked RedHat's docs, etc. But I'm getting
conflicting, incomplete and/or out of date information. Does anyone
have pointers to bigmem info or configuration for the 2.4 kernel?
Sean Shanny wrote:
First question is do we gain anything by moving the RH Enterprise
version of Linux in terms of performance, mainly in the IO realm as we
are not CPU bound at all? Second and more radical, has anyone run
postgreSQL on the new Apple G5 with an XRaid system? This seems like a
Ivar Zarans wrote:
I am experiencing strange behaviour, where simple UPDATE of one field is
very slow, compared to INSERT into table with multiple indexes. I have
two tables - one with raw data records (about 24000), where one field
In Postgres and any other DB that uses MVCC (multi-version
Tom Lane wrote:
William Yu [EMAIL PROTECTED] writes:
I then tried to put the WAL directory onto a ramdisk. I turned off
swapping, created a tmpfs mount point and copied the pg_xlog directory
over. Everything looked fine as far as I could tell but Postgres just
panic'd with a file permissions
This is an intriguing thought which leads me to think about a similar
solution for even a production server and that's a solid state drive for
just the WAL. What's the max disk space the WAL would ever take up?
There's quite a few 512MB/1GB/2GB solid state drives available now in
the
Josh Berkus wrote:
William,
When my current job batch is done, I'll save a copy of the dir and give
the WAL on ramdrive a test. And perhaps even buy a Sandisk at the local
store and run that through the hooper.
We'll be interested in the results. The Sandisk won't be much of a
performance
Josh Berkus wrote:
William,
The SanDisks do seem a bit pokey at 16MBps. On the other hand, you could
get 4 of these suckers, put them in a mega-RAID-0 stripe for 64MBps. You
shouldn't need to do mirroring with a solid state drive.
I wouldn't count on RAID0 improving the speed of SANDisk's much.
Rob Sell wrote:
Not being one to hijack threads, but I haven't heard of this performance hit
when using HT. I have what should by all rights be a pretty fast server, dual
2.4 Xeons with HT, a 205GB RAID 5 array, 1 gig of memory. And it is only 50% as
fast as my old server which was a dual AMD MP 1400's
So what is the ceiling on 32-bit processors for RAM? Most of the 64-bit
vendors are pushing Athlon64 and G5 as breaking the 4GB barrier, and even
I can do the math on 2^32. All these 64-bit vendors, then, are talking
about the limit on ram *per application* and not per machine?
64-bit CPU on
I have never worked with a XEON CPU before. Does anyone know how it performs
running PostgreSQL 7.3.4 / 7.4 on RedHat 9 ? Is it faster than a Pentium 4?
I believe the main difference is cache memory, right? Aside from cache mem,
it's basically a Pentium 4, or am I wrong?
Well, see the problem is
1) Memory - clumsily adjusted shared_buffers - tried three values: 64,
128, 256 with no discernible change in performance. Also adjusted,
clumsily, effective_cache_size to 1000, 2000, 4000 - with no discernible
change in performance. I looked at the Admin manual and googled around
for how to
Relaxin wrote:
I have a table with 102,384 records in it, each record is 934 bytes.
Using the follow select statement:
SELECT * from table
PG Info: version 7.3.4 under cygwin on Windows 2000
ODBC: version 7.3.100
Machine: 500 Mhz/ 512MB RAM / IDE HDD
Under PG: Data is returned in 26 secs!!
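The numbers above point away from the disk: roughly 91MB returned in 26 seconds is about 3.5MB/s, well below even a 2003-era IDE drive's sequential rate, so the bottleneck is almost certainly the client path (ODBC under cygwin) rather than storage:

```python
# Throughput implied by the figures quoted above.
rows, row_bytes, secs = 102_384, 934, 26
total_mb   = rows * row_bytes / 1024**2   # ~91.2 MB of row data
mb_per_sec = total_mb / secs              # ~3.5 MB/s to the client
```

That per-row transfer and conversion overhead is the usual suspect when `SELECT * FROM table` over ODBC is an order of magnitude slower than the raw disk could deliver.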
Shridhar Daithankar wrote:
Be careful here, we've seen that with the P4 Xeons that are
hyper-threaded, a system that has very high disk I/O becomes
sluggish and slow. But after disabling the hyper-threading
itself, our system flew.
Anybody have Opteron working? How's the
Shridhar Daithankar wrote:
Just a guess here, but does a precompiled postgresql for x86 vs. an
x86-64-optimized one make a difference?
Opteron is one place on earth you can watch difference between 32/64
bit on same machine. Can be handy at times..
I don't know yet. I tried building a 64-bit