On Mar 31, 2005, at 9:01 PM, Steve Poe wrote:
We need to purchase a good U320 RAID card now. Any suggestions
for cards that run well under Linux?
Not sure if it works with Linux, but under FreeBSD 5, the LSI MegaRAID
cards are well supported. You should be able to pick up a 320-2X with
1
byte record, one row
per transaction.
Well, if you're not heavily multitasking, the advantage of SCSI is lost
on you.
Vivek Khera, Ph.D.
+1-301-869-4449 x806
made today (18+ GB is the smallest now)
that there are no defects on the surfaces?
/me remembers trying to cram an old donated 5MB (yes M) disk into an
old 8088 Zenith PC in college...
Vivek Khera, Ph.D.
+1-301-869-4449 x806
r $7k, including onsite warranty.
They totally blow away the Dell Dual XEON with external 14 disk RAID
(also 15kRPM drives, manufacturer unknown) which also has 4GB RAM and a
Dell PERC 3/DC controller, the whole of which set me back over $15k.
Vivek Khera, Ph.D.
+1-301-869-4449 x806
bad blocks
so that the OS knows not to use them. You can also run the badblocks
command to try to find new bad blocks.
my point was that you cannot assume a linear correlation between block
number and physical location, since the bad blocks will be mapped all
over the place.
Vivek Khera, Ph.D.
+1-301-869-4449 x806
ce scans to find and lock
the referenced rows in the parent tables.
Make sure you have indexes on your FK columns (on *both* tables), and
that the data type on both tables is the same.
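A hedged sketch of what that looks like (table and column names are
made up):

    -- parent side: the PRIMARY KEY gives the referenced column its index
    CREATE TABLE parent (id integer PRIMARY KEY);
    -- child side: same data type (integer) on the FK column, plus an
    -- explicit index, since PostgreSQL does not create one for you
    CREATE TABLE child (
        id        serial PRIMARY KEY,
        parent_id integer REFERENCES parent (id)
    );
    CREATE INDEX child_parent_id_idx ON child (parent_id);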
Vivek Khera, Ph.D.
+1-301-869-4449 x806
go with fewer, bigger boxes with RAID so I can sleep better at night
:-)
Vivek Khera, Ph.D.
+1-301-869-4449 x806
On Apr 19, 2005, at 11:07 PM, Josh Berkus wrote:
RAID 1: 2 disks (OS, pg_xlog)
RAID 1+0: 4 disks (pgdata)
This is my preferred setup, but I do it with 6 disks on RAID10 for
data, and since I have craploads of disk space I set
checkpoint_segments to 256 (and checkpoint_timeout to 5 minutes).
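In postgresql.conf terms, that corresponds to something like (values
from this setup, not a recommendation for yours):

    checkpoint_segments = 256   # needs plenty of pg_xlog disk space
    checkpoint_timeout = 300    # 5 minutes, expressed in seconds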
Vivek
, and having 64-bit
all the way to the disk controller helps... just be sure to run a
64-bit version of your OS.
Vivek Khera, Ph.D.
+1-301-869-4449 x806
On Apr 20, 2005, at 4:22 PM, Josh Berkus wrote:
Realistically I don't think a $30k Dell is something that needs to be
junked. I am pretty sure if I got MSSQL running on it, it would
outperform my two-proc box. I can agree it may not have been the
optimal platform. My decision is not based sole
know about the GNU extensions/changes to Makefile syntax.
Vivek Khera, Ph.D.
+1-301-869-4449 x806
g out my other
boxes in speed, but the I/O sucks out the wazoo. I'm migrating to
Opteron-based DB servers with LSI-branded cards (not the Dell
re-branded ones).
Vivek Khera, Ph.D.
+1-301-869-4449 x806
ry heavy insert/update/delete
load. Database + indexes hovers at about 50GB.
I don't use the Adaptec controllers because they don't support
FreeBSD well (and vice versa), and the management tools are not there
for FreeBSD in a supported fashion like they are for LSI.
Vivek Khera, Ph.D.
+1-301-869-4449 x806
13 active semaphores.
Vivek Khera, Ph.D.
+1-301-869-4449 x806
redundancy on the data like in a RAID 5?
I'd recommend 4 disks in a hardware RAID10 plus a hot spare, or use
the 5th disk as boot + OS if you're feeling lucky.
Vivek Khera, Ph.D.
+1-301-869-4449 x806
ng my procedure. Is there a known issue, or a workaround?
Just because your application frees the memory doesn't mean that the
OS takes it back. In other words, don't confuse memory usage with
memory leakage.
Vivek Khera, Ph.D.
+1-301-869-4449 x806
5kRPM drives.
Vivek Khera, Ph.D.
+1-301-869-4449 x806
your only recourse is to throw hardware at
the problem. I would suspect that getting faster disks and splitting
the checkpoint log to its own RAID partition would help you here.
Adding more RAM while you're at it always does wonders for me :-)
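A sketch of the WAL split, with made-up paths (stop the postmaster
first; this is the usual symlink trick, not the only way):

    mv /usr/local/pgsql/data/pg_xlog /raid_wal/pg_xlog
    ln -s /raid_wal/pg_xlog /usr/local/pgsql/data/pg_xlog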
Vivek Khera, Ph.D.
+1-301-869-4449 x806
erver-side prepared statements when
you do $dbh->prepare() against an 8.x database server.
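At the SQL level, what the driver sets up amounts to this (statement
name, table, and parameter are illustrative):

    PREPARE get_user (integer) AS
        SELECT * FROM users WHERE id = $1;
    EXECUTE get_user (42);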
Vivek Khera, Ph.D.
+1-301-869-4449 x806
ther vendors of SSDs out there. Some even have *real* power-fail
strategies, such as dumping to a physical disk. These are not cheap,
but you gets what ya pays for...
Vivek Khera, Ph.D.
+1-301-869-4449 x806
write performance than RAID10.
Well, then run your own tests and find out :-)
If I were using LSI MegaRAID controllers, I'd probably go RAID10, but
I don't see why you need 6 disks for this... perhaps just 4 would be
enough? Or are your logs really that big?
Vivek Khera, Ph.D.
+1-301-869-4449 x806
rate. Also, updating to 8.0 may help.
Vivek Khera, Ph.D.
+1-301-869-4449 x806
hat's what I thought until the first time that list needed to be
altered. At that point, it becomes a royal pain.
Point to take: do it right the first time, or you have to do it over,
and over, and over...
Vivek Khera, Ph.D.
+1-301-869-4449 x806
elease the lock, then process it at our leisure to do the inserts to
Pg in one big transaction.
Vivek Khera, Ph.D.
+1-301-869-4449 x806
cross your RAID data channels on your test machine: I put each pair
of the RAID10 mirrors on opposite channels, so both channels of my
RAID controller are pretty evenly loaded during write.
Vivek Khera, Ph.D.
+1-301-869-4449 x806
On Oct 3, 2005, at 7:02 AM, Steinar H. Gunderson wrote:
Anybody know a good reason why you can't put a WAL on this, and enjoy
a hefty speed boost for a fraction of the price of a traditional SSD?
(Yes, it's SATA, not PCI, so the throughput is not all that
impressive -- but still, it's got
On Oct 11, 2005, at 10:54 AM, Claus Guttesen wrote:
Thank you for your reply. Does this apply to FreeBSD 5.4 or 6.0 on
amd64 (or both)?
It applies to FreeBSD >= 5.0.
However, I have not been able to get a real answer from the FreeBSD
hacker community on what the max buffer space usage will
On Nov 15, 2005, at 3:28 AM, Claus Guttesen wrote:
Hardware-wise I'd say dual-core Opterons. One dual-core Opteron
performs better than two single-cores at the same speed. Tyan makes
At 5TB of data, I'd vote that the application is disk I/O bound, and
the difference in CPU speed at the level of
On Nov 16, 2005, at 4:50 PM, Claus Guttesen wrote:
I'm (also) FreeBSD-biased but I'm not sure whether the 5 TB fs will
work so well if tools like fsck are needed. Gvinum could be one option
but I don't have any experience in that area.
Then look into an external filer and mount via NFS. The
On Nov 18, 2005, at 10:13 AM, Luke Lonergan wrote:
Still, there is a CPU limit here - this is not I/O bound, it is CPU
limited, as evidenced by the sensitivity to readahead settings. If
the filesystem could do 1GB/s, you wouldn't go any faster than
244MB/s.
Yeah, and MySQL would probably be faster o
On Nov 18, 2005, at 1:07 AM, Luke Lonergan wrote:
A $1,000 system with one CPU and two SATA disks in a software RAID0
will perform exactly the same as an $80,000 system with 8 dual-core
CPUs and the world's best SCSI RAID hardware on a large database for
decision support (what the poster as
On Nov 22, 2005, at 11:59 AM, Anjan Dave wrote:
This is a Dell Quad Xeon. Hyperthreading is turned on, and I am
planning to turn it off as soon as I get a chance to bring it down.
You should probably also upgrade to Pg 8.0 or newer, since there is a
known problem with Xeon processors and older postgres
On Dec 6, 2005, at 12:44 PM, Ameet Kini wrote:
I have a question on postgres's performance tuning, in particular, the
vacuum and reindex commands. Currently I do a vacuum (without full)
on all
of my tables. However, it's noted in the docs (e.g.
http://developer.postgresql.org/docs/postgres/r
On Dec 6, 2005, at 2:04 PM, Anjan Dave wrote:
interestingly, it was experiencing 3x more context switches than the
Intel box (up to 100k, versus ~30k avg on Dell). Both are RH4.0
I'll assume that's context switches per second... so for the Opteron
that's 6540 cs's and for the Dell that's
On Dec 6, 2005, at 11:14 AM, Ameet Kini wrote:
need for vacuums. However, it'd be great if there were a similar
automatic reindex utility, like say, a pg_autoreindex daemon. Are
there any plans for this feature? If not, then would cron scripts be
the next best
what evidence do you have th
On Dec 6, 2005, at 5:03 PM, Ameet Kini wrote:
table with only 1 index, the time to do a vacuum (without full) went
down from 45 minutes to under 3 minutes. Maybe that's not bloat, but
that's surely surprising. And this was after running vacuum
periodically.
I'll bet either your FSM settings
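If it is the FSM, the relevant postgresql.conf knobs (pre-8.4) look
like this; the numbers are illustrative and should be sized from the
totals VACUUM VERBOSE prints at the end:

    max_fsm_pages = 2000000     # pages with free space to remember
    max_fsm_relations = 10000   # tables and indexes to track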
I have a choice to make on a RAID enclosure:
14x 36GB 15kRPM ultra 320 SCSI drives
OR
12x 72GB 10kRPM ultra 320 SCSI drives
both would be configured into RAID 10 over two SCSI channels using a
MegaRAID 320-2X card.
My goal is speed. Either would provide more disk space than I would
need
On Dec 9, 2005, at 10:50 AM, Andreas Pflug wrote:
Well, if your favourite dealer can't supply you with such common
equipment as 15k drives, you should consider changing the dealer.
They don't seem to be aware of db hardware requirements.
Thanks to all for your opinions. I'm definitely stick
On Dec 8, 2005, at 2:21 PM, Jeffrey W. Baker wrote:
For the write transactions, the speed and size of the DIMM on that
LSI card will matter the most. I believe the max memory on that
adapter is 512MB. These cost so little that it wouldn't make sense
to go with anything smaller.
From wher
On Dec 12, 2005, at 5:16 PM, J. Andrew Rogers wrote:
We've swapped out the DIMMs on MegaRAID controllers. Given the
cost of a standard low-end DIMM these days (which is what the LSI
controllers use last I checked), it is a very cheap upgrade.
What's the max you can put into one of these c
On Dec 20, 2005, at 1:27 PM, Antal Attila wrote:
The budget line is about $30,000 to $40,000.
Like Jim said, without more details it is hard to give specific
recommendations, but I'm architecting something like this for my
current app, which needs ~100GB disk space. I made room to
On Dec 22, 2005, at 9:44 PM, Juan Casero wrote:
Agreed. I have a 13 million row table that gets 100,000 new records
every week. There are six indexes on this table. Right about the
time when it
I have some rather large tables that grow much faster than this (~1
million per day on
On Dec 22, 2005, at 11:14 PM, David Lang wrote:
but it boils down to the fact that there just isn't enough
experience with the new Sun systems to know how well they will work.
They could end up being fabulous speed demons, or dogs (and it could
even be both, depending on your workload)
T
On Dec 23, 2005, at 5:15 PM, Mark Kirkwood wrote:
Vivek Khera wrote:
and only the Opteron boxes needed to come from Sun. Add a
zero-return policy and you wonder how they expect to stay in
business.
Sorry, I had to vent.
Just out of interest - why did the Opterons need to come from
On Dec 4, 2006, at 12:10 PM, Mark Lonsdale wrote:
- 4 physical CPUs (hyperthreaded to 8)
I'd tend to disable hyperthreading on Xeons...
shared_buffers - 50,000 - From what I'd read, increasing this number
any higher won't have any advantages?
If you can, increase it until you
I've got one logging table with over 330 million rows storing 6
months' worth of data. It consists of two integers and a 4-character
string. I have a primary key on the two integers, and an additional
index on the second integer.
I'm planning to use inheritance to split t
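A hedged sketch of that kind of inheritance split; the names, types,
and range bounds are made up, and constraint_exclusion assumes 8.1
or later:

    CREATE TABLE log (
        obj_id integer NOT NULL,
        ts_id  integer NOT NULL,
        code   char(4) NOT NULL,
        PRIMARY KEY (obj_id, ts_id)
    );
    -- one child per range; constraints and indexes are not inherited,
    -- so repeat them on each child
    CREATE TABLE log_p1 (
        CHECK (ts_id >= 0 AND ts_id < 1000000)
    ) INHERITS (log);
    ALTER TABLE log_p1 ADD PRIMARY KEY (obj_id, ts_id);
    CREATE INDEX log_p1_ts_idx ON log_p1 (ts_id);
    -- let the planner skip children whose CHECK rules them out
    SET constraint_exclusion = on;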
On Mar 20, 2007, at 11:20 AM, Heiko W.Rupp wrote:
Inserting through the master table about halved the speed with 4
partitions, and caused a 50% slowdown with 2 partitions.
Please note: this is not representative in any way!
I fully intend to build knowledge of the partitions into the insert
On Apr 13, 2007, at 4:01 PM, Dan Harris wrote:
Is there a pg_stat_* table or the like that will show how bloated
an index is? I am trying to squeeze some disk space and want to
track down where the worst offenders are before performing a global
REINDEX on all tables, as the database is rou
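One rough way to find the biggest indexes (size, not bloat per se)
is a catalog query; pg_relation_size() assumes 8.1 or later:

    SELECT c.relname, c.relpages,
           pg_relation_size(c.oid) AS bytes
    FROM pg_class c
    JOIN pg_index i ON c.oid = i.indexrelid
    ORDER BY c.relpages DESC
    LIMIT 10;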
On Apr 23, 2007, at 12:09 PM, Scott Marlowe wrote:
And do you have 32 or 64 megs of memory in that machine?
'Cause honestly, that's the kinda hardware I was running 7.0.2 on,
so you might as well get retro in your hardware department while
you're at it.
I think you're being too conservati
On May 18, 2007, at 2:30 PM, Andrew Sullivan wrote:
Note also that your approach of updating all 121 million records in
one statement is approximately the worst way to do this in Postgres,
because it creates 121 million dead tuples on your table. (You've
created some number of those by killing
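The usual workaround is to chunk the UPDATE so vacuum can reclaim
the dead tuples between batches; table, column, and range bounds
here are made up:

    UPDATE big_table SET flag = true WHERE id >= 0 AND id < 1000000;
    VACUUM big_table;
    UPDATE big_table SET flag = true WHERE id >= 1000000 AND id < 2000000;
    VACUUM big_table;
    -- ...and so on, walking up the key range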
On May 18, 2007, at 11:40 AM, Liviu Ionescu wrote:
8.1 might have similar problems, but the point here is different: if
what was manually tuned to work in 8.1 confuses the 8.2 planner, and
performance drops so much (from 2303 to 231929 ms in my case),
upgrading a production machine to 8.2 i
On May 23, 2007, at 2:32 AM, Andreas Kostyrka wrote:
You forgot pulling some RAID drives at random times to see how the
hardware deals with that, and how it deals with the rebuild
afterwards. (Many RAID solutions leave you with the worst of both
worlds, taking longer to rebuild than a rest
On May 23, 2007, at 9:26 AM, Susan Russo wrote:
I've played 'catch up' wrt adjusting max_fsm_pages (seems to be a
regular event); however, I am wondering if the vacuum analyze which
reported the error actually completed?
Yes, it completed. However, not all pages with open space in them are
On May 23, 2007, at 4:40 PM, Peter Schuller wrote:
Sounds like you need to increase your shared memory limits.
Unfortunately this will require a reboot on FreeBSD :(
No, it does not. You can tune some of the SysV IPC parameters at
runtime; shmmax and shmall are such parameters.
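For example, on FreeBSD (values illustrative; shmmax is in bytes,
shmall in pages):

    sysctl kern.ipc.shmmax=1073741824
    sysctl kern.ipc.shmall=262144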
On Jun 11, 2007, at 9:14 PM, Francisco Reyes wrote:
RAID card 1 with 8 drives. 7200 RPM SATA RAID10
RAID card 2 with 4 drives. 10K RPM SATA RAID10
What RAID card have you got? I'm playing with an external enclosure
which has an Areca SATA RAID in it and connects to the host via
fibre ch
On Jun 12, 2007, at 8:33 PM, Francisco Reyes wrote:
Vivek Khera writes:
What RAID card have you got?
Two 3ware cards; I believe both are 9550SX.
I'm playing with an external enclosure which has an Areca SATA
RAID in it and connects to the host via fibre channel.
What is the OS? Fr
On Jun 13, 2007, at 6:25 AM, Christo Du Preez wrote:
Is there some kind of performance testing utility available for
postgresql? Something I can run after installing postgresql to help
me identify whether my installation is optimal.
Your own app is the only one that will give you meaningful
resul
On Jun 13, 2007, at 10:36 PM, Francisco Reyes wrote:
FreeBSD, indeed. The vendor, Partners Data Systems, did a wonderful
This one?
http://www.partnersdata.com
That's the one.
job ensuring that everything integrated well, to the point of
talking with various FreeBSD developers, LSI engi
On Jul 9, 2007, at 1:02 PM, Joshua D. Drake wrote:
It is also the reason that those in the know typically ignore all
benchmarks and do their own testing.
Heresy!
On Jul 14, 2007, at 11:50 AM, Patric de Waha wrote:
Yesterday I switched from 8.1 to 8.2, so I needed to dump the
database and reimport it. After 4 months of running without
"vacuum full", the database had reached 60 gigabytes of disk space.
Now, after a fresh import, it is only 5 gigabytes!
On Jul 18, 2007, at 1:08 PM, Steven Flatt wrote:
Some background: we make extensive use of partitioned tables. In
fact, I'm really only considering reindexing partitions that have
"just closed". In our simplest/most general case, we have a table
partitioned by a timestamp column, each pa
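Reindexing just the closed child is then cheap compared to the whole
table; the partition name here is made up:

    REINDEX TABLE log_2007_06;  -- rebuilds every index on that child only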
On Aug 8, 2007, at 11:34 PM, justin wrote:
So what are the thoughts on a combined rack/disks/cpu combo around
the $10k-$15k point, currently?
I just put into production testing this setup:
SunFire X4100M2 (2x dual-core Opteron) with 20GB RAM and an LSI
PCI-e dual-channel 4Gb Fibre ch
On Aug 9, 2007, at 3:47 PM, Joe Uhl wrote:
PowerEdge 1950 paired with a PowerVault MD1000
2 x Quad Core Xeon E5310
16 GB 667MHz RAM (4 x 4GB, leaving room to expand if we need to)
PERC 5/E RAID Adapter
2 x 146 GB SAS in RAID 1 for OS + logs.
A bunch of disks in the MD1000 configured in RAID 10 f
On Aug 10, 2007, at 4:36 PM, Merlin Moncure wrote:
I'm not so sure I agree. They are using LSI firmware now (and so is
everyone else). The servers are well built (highly subjective, I
admit) and configurable. I have had some bad experiences with IBM
gear (Adaptec controller, though), and whit
On Aug 30, 2007, at 2:08 PM, Mark Lewis wrote:
If you're not running regular VACUUMs at all but are instead
exclusively running VACUUM FULL, then I don't think you would see
warnings about running out of fsm entries, which would explain why
you did not notice the bloat. I haven't confirmed th
On Sep 6, 2007, at 2:42 PM, Scott Marlowe wrote:
I'd recommend against Dell unless you're at a company that orders
computers by the hundred lot. My experience with Dell has been that
unless you are a big customer you're just another number (a small one
at that) on a spreadsheet.
I order mayb
On Sep 28, 2007, at 10:28 AM, Radhika S wrote:
20775 ?  S  0:00 postgres: abc myDB [local] idle in transaction
20776 ?  S  0:00 postgres: abc myDB [local] idle
17509 ?  S  0:06 postgres: abc myDB [local] VACUUM waiting
24656 ?  S  0:00
On Jan 18, 2006, at 1:09 PM, Benjamin Arai wrote:
Obviously, I have done this to improve write performance for the
update each
week. My question is if I install a 3ware or similar card to
replace my
I'll bet that if you increase your checkpoint_segments (and
corresponding timeout value)
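Something along these lines in postgresql.conf (numbers illustrative;
the old default of 3 segments is tiny for a bulk update):

    checkpoint_segments = 64
    checkpoint_timeout = 900    # seconds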
On Feb 1, 2006, at 4:37 PM, Matthew T. O'Connor wrote:
As far as I know, we are still looking for real-world feedback. 8.1
is the first release to have the integrated autovacuum. The
thresholds in 8.1 are a good bit less conservative than the
thresholds in the contrib version. The con
On Feb 9, 2006, at 6:36 PM, Rafael Martinez wrote:
This is an application that we have not programmed, so I am not sure
what they are trying to do here. I will contact the developers.
Tomorrow I will try to test some of your suggestions.
Well, obviously you're running RT... what you want t
On Feb 22, 2006, at 5:38 AM, Chethana, Rao (IE10) wrote:
It is rich in features but slow in performance.
No, it is fast and feature-rich. But you have to tune it for your
specific needs; the default configuration is not ideal for large DBs.
On Feb 22, 2006, at 10:44 PM, Chethana, Rao (IE10) wrote:
That is what I wanted to know: how do I tune it?
If there were a simple formula for doing it, it would already have
been written up as a program that runs once you install postgres.
You have to monitor your usage, use your understanding of y
On Feb 23, 2006, at 11:38 AM, Ron Peacetree wrote:
Where "*" ==
{print | save to PDF | save to format | display on screen}
Anyone know of one?
There's a Perl module, GraphViz::DBI::General, which does a rather
nifty job of taking a schema and making a graphviz "dot" file from
it, which
On Feb 24, 2006, at 9:29 AM, Bruce Momjian wrote:
Dell often says part X is included, but part X is not the exact same
as part X sold by the original manufacturer. To hit a specific price
point, Dell is willing to strip things out of commodity hardware,
and often does so even when performance
On Feb 24, 2006, at 11:32 AM, Scott Marlowe wrote:
My bad experiences were with the 2600 series machines. We now have
some 2800s and they're much better than the 2600/2650s I've used in
the past.
Yes, the 2450 and 2650 were CRAP disk performers. I haven't any 2850
to compare, just an 18
On Mar 14, 2006, at 4:19 PM, mcelroy, tim wrote:
Humm, well I am running 8.0.1 and use that option, and see the
following in my vacuum output log:
vacuumdb: vacuuming database "template1"
It has done so since at least 7.4, probably 7.3. The "-a" flag
really does what it says.
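For reference, the documented flags make the full invocation look
like this (-a for all databases, -z to analyze, -v for verbose):

    vacuumdb -a -z -v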
On Mar 17, 2006, at 8:55 AM, Merlin Moncure wrote:
I like their approach... DDR RAM + RAID sanity backup +
super-reliable power system. Their prices are on Jupiter (and I
don't mean Jupiter, FL) but hopefully there will be some competition
and the inevitable
Nothing unique to them. I have a 4
On Mar 17, 2006, at 5:07 PM, Scott Marlowe wrote:
Open Source SSD via iSCSI with commodity hardware... hmmm, sounds
like a useful project.
Shh! Don't give away our top-secret plans!
On Mar 17, 2006, at 5:11 PM, Kenji Morishige wrote:
In summary, my questions:
1. Would running PG on FreeBSD 5.x or 6.x or Linux improve
performance?
FreeBSD 6.x will definitely get you improvements. Many speedups have
been made to both the generic disk layer and the
speci
On Mar 20, 2006, at 2:44 PM, Merlin Moncure wrote:
For my use it was worth the price. However, given the speed increase
of other components since then, I don't think I'd buy one today.
Parallelism (if you can do it like Luke suggested) is the way to go.
That's an interesting statement. My pe
If you do put on FreeBSD 6, I'd love to see the output of
"diskinfo -v -t" on your RAID volume(s).
Not directly related ...
I have an HP DL380 G3 with an Array 5i controller (1+0); these are
my results
[...]
Is this good enough?
Is that on a loaded box or a mostly quiet box? Those number se
On Mar 20, 2006, at 6:04 PM, Miguel wrote:
Umm, on my box I see better seek times but worse transfer rates;
does that make sense? I think I have something wrong; the question I
can't answer is: what tuning am I missing?
Well, I forgot to mention I have 15k RPM disks, so the transfers
should
On Mar 21, 2006, at 6:03 AM, Mark Kirkwood wrote:
The so-called limit (controllable via various sysctls) is on the
amount of memory used for kvm mapped pages, not cached pages; i.e.,
it's a subset of the cached pages that are set up for immediate
access (the
Thanks... now that makes sens
On Mar 20, 2006, at 6:27 PM, PFC wrote:
Expensive SCSI hardware RAID cards with expensive 10kRPM hard disks
should not get humiliated by such a simple (and cheap) setup. (I'm
referring to the 12-drive RAID10 mentioned before, not the other
one, which was a simple 2-disk mirror). Toms hardwa
On Mar 21, 2006, at 2:04 PM, PFC wrote:
especially since I have desktop PCI and the original poster has a
real server with PCI-X, I think.
That was me :-)
But yeah, I never seem to get full line speed for some reason. I
don't know if it is because of inadequate measurement tools or what...
On Mar 21, 2006, at 12:59 PM, Jim C. Nasby wrote:
atapci1:
And note that this is using FreeBSD gmirror, not the built-in RAID
controller.
I get a similar counter-intuitive slowdown with gmirror SATA disks
on an IBM e326m I'm evaluating. If/when I buy one I'll get the
onboard SCSI RAID in
On Mar 28, 2006, at 1:57 PM, Madison Kelly wrote:
From what I understand, PostgreSQL is designed with stability and
reliability as key tenets. MySQL favors performance and ease of
use. An
From my point of view, MySQL favors single-user performance over
all else. Get into multiple upda
On Mar 28, 2006, at 1:59 PM, Scott Marlowe wrote:
Generally you'll find the PostgreSQL gotchas are of the sort that make
you go "oh, that's interesting" and the MySQL gotchas are the kind
that
make you go "Dear god, you must be kidding me!"
But that's just my opinion, I could be wrong.
I
On Mar 28, 2006, at 11:55 AM, Marcos wrote:
The application will be a web chat; the chats will be stored on the
server. At a fixed interval... more or less 2 seconds, the
application will poll for new messages.
We bought software for this purpose (phplive). It is b
On Apr 3, 2006, at 10:10 PM, Mark Kirkwood wrote:
I've always left them on, and never had any issues...(even after
unscheduled power loss - which happened here yesterday). As I
understand it, the softupdate code reorders *metadata* operations,
and does not alter data operations - so the ef
On Apr 5, 2006, at 6:07 PM, Jim Nasby wrote:
More importantly, it allows the system to come up and do fsck in
the background. If you've got a large database that's a pretty big
benefit.
That's a UFS2 feature, not a soft-updates feature.
On Apr 5, 2006, at 5:58 PM, August Zajonc wrote:
Most involve some AMD Opterons, lots of spindles with a good RAID
controller (preferred to one or two large disks), and a good helping
of RAM. It would be interesting to get some numbers on the SunFire
machine.
I can highly recommend the SunFire X4100, how
On Apr 5, 2006, at 9:11 PM, Marcelo Tada wrote:
What do you think about the Sun Fire X64 X4200 Server?
I use the X4100 and like it a lot. I'm about to buy another. I see
no advantage to the X4200 unless you want the extra internal disks.
I use an external array.
On Apr 6, 2006, at 12:47 AM, Leigh Dyer wrote:
I'm sure those little SAS drives would be great for web servers and
other non-IO-intensive tasks though -- I'd love to get some X4100s
in to replace our PowerEdge 1750s for that. It's a smart move
overall IMHO,
For this purpose, bang for the
On Apr 10, 2006, at 3:55 AM, Jesper Krogh wrote:
I'd run pg_dump | gzip > sqldump.gz on the old system. That took
about 30 hours and gave me a 90GB zipped file. Running
cat sqldump.gz | gunzip | psql
into the 8.1 database seems to take about the same time. Are there
any tricks I can use to
On Apr 13, 2006, at 2:59 PM, Francisco Reyes wrote:
This particular server is pretty much what I inherited for now for
this project, and it's RAID 5. There is a new server I am setting up
soon... 8 disks which we are planning to set up as:
6 disks in RAID 10
2 hot spares
In RAID 10, would it matte
On Apr 14, 2006, at 8:00 AM, Marc Cousin wrote:
So, you'll probably end up being slowed down by WAL fsyncs... and
you won't have a lot of solutions. Maybe you should start with
trying to set fsync=no as a test to confirm that (you should have a
lot of iowaits right now if you haven't dis
On Apr 25, 2006, at 2:14 PM, Bill Moran wrote:
Where I'm stuck is in deciding whether we want to go with dual-core
Pentiums with 2M cache, or with HT Pentiums with 8M cache.
In order of preference:
Opterons (dual-core or single-core)
Xeon with HT *disabled* at the BIOS level (dual or single
On Apr 25, 2006, at 5:09 PM, Ron Peacetree wrote:
...and even if you do buy Intel, =DON'T= buy Dell unless you like
causing trouble for yourself.
Bad experiences with Dell in general, and their poor PERC RAID
controllers in particular, are all over this and other DB forums.
I don't think that
On Apr 28, 2006, at 11:37 AM, Erik Myllymaki wrote:
When I had this installed on a single SATA drive running from the
PE1800's on-board SATA interface, this operation took anywhere from
65-80 seconds.
With my new RAID card and drives, this operation took 272 seconds!?
Switch it to RAID10.