- Original Message -
From: David Boreham david_l...@boreham.org
To: pgsql-performance@postgresql.org pgsql-performance@postgresql.org
Cc:
Sent: Tuesday, 2 October 2012, 16:14
Subject: Re: [PERFORM] hardware advice
On 10/2/2012 2:20 AM, Glyn Astill wrote:
newer R910s recently
From: M. D. li...@turnkey.bz
To: pgsql-performance@postgresql.org
Sent: Friday, 28 September 2012, 18:33
Subject: Re: [PERFORM] hardware advice
On 09/28/2012 09:57 AM, David Boreham wrote:
On 9/28/2012 9:46 AM, Craig James wrote:
Your best warranty would
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Glyn Astill
Sent: Tuesday, October 02, 2012 4:21 AM
To: M. D.; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] hardware advice
From: M. D. li...@turnkey.bz
To: pgsql-performance
On Tue, Oct 2, 2012 at 10:51:46AM -0400, Franklin, Dan (FEN) wrote:
Look around and find another vendor, even if your company has to pay
more for you to have that blame avoidance.
We're currently using Dell and have had enough problems to think about
switching.
What about HP?
If you
On 10/2/2012 2:20 AM, Glyn Astill wrote:
newer R910s recently all of a sudden went dead to the world; no prior symptoms
showing in our hardware and software monitoring, no errors in the os logs,
nothing in the dell drac logs. After a hard reset it's back up as if
nothing happened, and it's an
On Tue, Oct 2, 2012 at 9:14 AM, Bruce Momjian br...@momjian.us wrote:
On Tue, Oct 2, 2012 at 10:51:46AM -0400, Franklin, Dan (FEN) wrote:
We're currently using Dell and have had enough problems to think about
switching.
What about HP?
If you need a big vendor, I think HP is a good choice.
On 09/27/2012 10:22 PM, M. D. wrote:
On 09/27/2012 02:55 PM, Scott Marlowe wrote:
On Thu, Sep 27, 2012 at 2:46 PM, M. D. li...@turnkey.bz wrote:
select item.item_id,item_plu.number,item.description,
(select number from account where asset_acct = account_id),
(select number from account where
On Thu, Sep 27, 2012 at 03:50:33PM -0500, Shaun Thomas wrote:
On 09/27/2012 03:44 PM, Scott Marlowe wrote:
This 100x this. We used to buy our boxes from aberdeeninc.com and got
a 5 year replacement parts warranty included. We spent ~$10k on a
server that was right around $18k from dell for
On 9/27/2012 1:56 PM, M. D. wrote:
I'm in Belize, so what I'm considering is from ebay, where it's unlikely
that I'll get the warranty. Should I consider some other brand rather? To
build my own or buy custom might be an option too, but I would not get any
warranty.
Your best warranty would
On 9/28/2012 9:46 AM, Craig James wrote:
Your best warranty would be to have the confidence to do your own
repairs, and to have the parts on hand. I'd seriously consider
putting your own system together. Maybe go to a few sites with
pre-configured machines and see what parts they use. Order
On 09/28/2012 09:57 AM, David Boreham wrote:
On 9/28/2012 9:46 AM, Craig James wrote:
Your best warranty would be to have the confidence to do your own
repairs, and to have the parts on hand. I'd seriously consider
putting your own system together. Maybe go to a few sites with
pre-configured
On Fri, Sep 28, 2012 at 11:33 AM, M. D. li...@turnkey.bz wrote:
On 09/28/2012 09:57 AM, David Boreham wrote:
On 9/28/2012 9:46 AM, Craig James wrote:
Your best warranty would be to have the confidence to do your own
repairs, and to have the parts on hand. I'd seriously consider
putting
Hi everyone,
I want to buy a new server, and am contemplating a Dell R710 or the
newer R720. The R710 has the x5600 series CPU, while the R720 has the
newer E5-2600 series CPU.
At this point I'm dealing with a fairly small database of 8 to 9 GB.
The server will be dedicated to Postgres
On Thu, Sep 27, 2012 at 4:11 PM, M. D. li...@turnkey.bz wrote:
At this point I'm dealing with a fairly small database of 8 to 9 GB.
...
The on_hand lookup table
currently has 3 million rows after 4 years of data.
...
For both servers I'd have at least 32GB Ram and 4 Hard Drives in raid 10.
On Thu, Sep 27, 2012 at 12:11 PM, M. D. li...@turnkey.bz wrote:
Hi everyone,
I want to buy a new server, and am contemplating a Dell R710 or the newer
R720. The R710 has the x5600 series CPU, while the R720 has the newer
E5-2600 series CPU.
At this point I'm dealing with a fairly small
On 09/27/2012 01:22 PM, Claudio Freire wrote:
On Thu, Sep 27, 2012 at 4:11 PM, M. D. li...@turnkey.bz wrote:
At this point I'm dealing with a fairly small database of 8 to 9 GB.
...
The on_hand lookup table
currently has 3 million rows after 4 years of data.
...
For both servers I'd have at
On 9/27/2012 1:11 PM, M. D. wrote:
I want to buy a new server, and am contemplating a Dell R710 or the
newer R720. The R710 has the x5600 series CPU, while the R720 has the
newer E5-2600 series CPU.
For this the best data I've found (excepting actually running tests on
the physical
On 9/27/2012 1:37 PM, Craig James wrote:
We use a white box vendor (ASA Computers), and have been very happy
with the results. They build exactly what I ask for and deliver it in
about a week. They offer on-site service and warranties, but don't
pressure me to buy them. I'm not locked in to
On 09/27/2012 01:47 PM, David Boreham wrote:
On 9/27/2012 1:37 PM, Craig James wrote:
We use a white box vendor (ASA Computers), and have been very happy
with the results. They build exactly what I ask for and deliver it in
about a week. They offer on-site service and warranties, but don't
On 9/27/2012 1:56 PM, M. D. wrote:
I'm in Belize, so what I'm considering is from ebay, where it's
unlikely that I'll get the warranty. Should I consider some other
brand rather? To build my own or buy custom might be an option too,
but I would not get any warranty.
I don't have any recent
On Thursday, September 27, 2012 02:13:01 PM David Boreham wrote:
The equivalent Supermicro box looks to be somewhat less expensive :
http://www.newegg.com/Product/Product.aspx?Item=N82E16816101693
When you consider downtime and the cost to ship equipment back to the
supplier, a warranty
On Thu, Sep 27, 2012 at 2:31 PM, Alan Hodgson ahodg...@simkin.ca wrote:
On Thursday, September 27, 2012 02:13:01 PM David Boreham wrote:
The equivalent Supermicro box looks to be somewhat less expensive :
http://www.newegg.com/Product/Product.aspx?Item=N82E16816101693
When you consider
On 09/27/2012 01:37 PM, Craig James wrote:
I don't think you've supplied enough information for anyone to give
you a meaningful answer. What's your current configuration? Are you
I/O bound, CPU bound, memory limited, or some other problem? You need
to do a specific analysis of the queries
On 09/27/2012 03:44 PM, Scott Marlowe wrote:
This 100x this. We used to buy our boxes from aberdeeninc.com and got
a 5 year replacement parts warranty included. We spent ~$10k on a
server that was right around $18k from dell for the same numbers and a
3 year warranty.
Whatever you do, go
On 09/27/2012 02:40 PM, David Boreham wrote:
I think the newer CPU is the clear winner with a specintrate
performance of 589 vs 432.
The comparisons you linked to had 24 absolute threads pitted against 32,
since the newer CPUs have a higher maximum cores per CPU. That said,
you're right
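Shaun's per-thread point is easy to check with quick arithmetic. The 589 and 432 SPECint_rate figures are from David's message, and the 32 vs 24 thread counts are Shaun's; the sketch below just divides them out (a rough illustration, not an official SPEC comparison):

```python
# SPECint_rate figures quoted in the thread, divided by thread count
new_rate, new_threads = 589, 32   # newer E5-2600 box
old_rate, old_threads = 432, 24   # older X5600 box

per_thread_new = new_rate / new_threads   # ~18.4 per thread
per_thread_old = old_rate / old_threads   # 18.0 per thread

# The aggregate gap (~36%) mostly reflects the extra threads;
# per-thread throughput is nearly identical (~2% apart).
aggregate_gain = new_rate / old_rate - 1
per_thread_gain = per_thread_new / per_thread_old - 1
```

In other words, the newer box wins mostly by having more cores, which matters for concurrent workloads but much less for a single heavy query.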
On Thu, Sep 27, 2012 at 2:46 PM, M. D. li...@turnkey.bz wrote:
select item.item_id,item_plu.number,item.description,
(select number from account where asset_acct = account_id),
(select number from account where expense_acct = account_id),
(select number from account where income_acct =
On 09/27/2012 03:55 PM, Scott Marlowe wrote:
Have you tried re-writing this query first? Is there a reason to have
a bunch of subselects instead of joining the tables? What pg version
are you running btw? A newer version of pg might help too.
Wow, yeah. I was just about to say something
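Scott's rewrite suggestion can be sketched as follows. The schema is a guess at the `item`/`account` tables behind the quoted query, and SQLite stands in for Postgres purely so the two forms can be compared side by side; the point is replacing the per-column correlated subselects with aliased joins:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE account (account_id INTEGER PRIMARY KEY, number TEXT);
CREATE TABLE item (item_id INTEGER PRIMARY KEY, description TEXT,
                   asset_acct INTEGER, expense_acct INTEGER, income_acct INTEGER);
INSERT INTO account VALUES (1,'1000'),(2,'5000'),(3,'4000');
INSERT INTO item VALUES (10,'widget',1,2,3);
""")

# Original style: one correlated subselect per account column.
subselect = """
SELECT item.item_id, item.description,
       (SELECT number FROM account WHERE account_id = item.asset_acct),
       (SELECT number FROM account WHERE account_id = item.expense_acct),
       (SELECT number FROM account WHERE account_id = item.income_acct)
FROM item"""

# Join style: one aliased join per referenced account row, which a
# planner can usually optimize better than N correlated subselects.
joined = """
SELECT item.item_id, item.description, a.number, e.number, i.number
FROM item
LEFT JOIN account a ON a.account_id = item.asset_acct
LEFT JOIN account e ON e.account_id = item.expense_acct
LEFT JOIN account i ON i.account_id = item.income_acct"""

rows_sub = con.execute(subselect).fetchall()
rows_join = con.execute(joined).fetchall()
```

Both forms return identical rows; the join version simply gives the optimizer more room to work.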
On 9/27/2012 2:55 PM, Scott Marlowe wrote:
Whatever you do, go for the Intel ethernet adaptor option. We've had so many
headaches with integrated Broadcom NICs. :(
Sound advice, but not a get-out-of-jail card unfortunately: we had a
horrible problem with the Intel e1000 driver in RHEL for
On 9/27/2012 2:47 PM, Shaun Thomas wrote:
On 09/27/2012 02:40 PM, David Boreham wrote:
I think the newer CPU is the clear winner with a specintrate
performance of 589 vs 432.
The comparisons you linked to had 24 absolute threads pitted against
32, since the newer CPUs have a higher maximum
Hello,
from benchmarking on my r/o in-memory database, I can tell that 9.1 on an X5650 is
faster than 9.2 on an E5-2440.
I do not have an X5690, but I have a not-so-loaded E5-2660.
If you can give me a dump and some queries, I can bench them.
Nevertheless the X5690 seems more efficient on single threaded
On Thu, Sep 27, 2012 at 6:08 PM, David Boreham david_l...@boreham.org wrote:
We went from Dunnington to Nehalem, and it was stunning how much better
the X5675 was compared to the E7450. Sandy Bridge isn't quite that much of a
jump though, so if you don't need that kind of bleeding-edge, you
On Thu, Sep 27, 2012 at 2:50 PM, Shaun Thomas stho...@optionshouse.com wrote:
On 09/27/2012 03:44 PM, Scott Marlowe wrote:
This 100x this. We used to buy our boxes from aberdeeninc.com and got
a 5 year replacement parts warranty included. We spent ~$10k on a
server that was right around
On Thursday, September 27, 2012 03:04:51 PM David Boreham wrote:
On 9/27/2012 2:55 PM, Scott Marlowe wrote:
Whatever you do, go for the Intel ethernet adaptor option. We've had so
many
headaches with integrated Broadcom NICs. :(
Sound advice, but not a get-out-of-jail card unfortunately
On 09/27/2012 04:08 PM, Evgeny Shishkin wrote:
from benchmarking on my r/o in-memory database, I can tell that 9.1
on an X5650 is faster than 9.2 on an E5-2440.
How did you run those benchmarks? I find that incredibly hard to
believe. Not only does 9.2 scale *much* better than 9.1, but the E5-2440
On 09/27/2012 02:55 PM, Scott Marlowe wrote:
On Thu, Sep 27, 2012 at 2:46 PM, M. D. li...@turnkey.bz wrote:
select item.item_id,item_plu.number,item.description,
(select number from account where asset_acct = account_id),
(select number from account where expense_acct = account_id),
(select
On 9/27/2012 3:16 PM, Claudio Freire wrote:
Careful with AMD, since many (I'm not sure about the latest ones)
cannot saturate the memory bus when running single-threaded. So, great
if you have a high concurrent workload, quite bad if you don't.
Actually we test memory bandwidth with John
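David's message is cut off, but the usual tool for this kind of test is McCalpin's STREAM benchmark in C. As a crude stdlib-only illustration of what "measuring memory bandwidth" means (buffer size and repetition count are arbitrary, and Python overhead makes the absolute number much lower than STREAM would report):

```python
import time

N = 16 * 1024 * 1024      # 16 MiB buffer (arbitrary size for illustration)
src = bytearray(N)
reps = 4

start = time.perf_counter()
for _ in range(reps):
    dst = bytes(src)      # one full read + one full write pass over the buffer
elapsed = time.perf_counter() - start

# Bytes moved per repetition: N read + N written.
bandwidth_gb_s = (2 * N * reps) / elapsed / 1e9
```

Running one copy loop per process, pinned to different cores, is the simplest way to see whether a single thread can saturate the memory bus or whether aggregate bandwidth keeps climbing with concurrency.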
Please don't take responses off list, someone else may have an insight I'd miss.
On Thu, Sep 27, 2012 at 3:20 PM, M. D. li...@turnkey.bz wrote:
On 09/27/2012 02:55 PM, Scott Marlowe wrote:
On Thu, Sep 27, 2012 at 2:46 PM, M. D. li...@turnkey.bz wrote:
select
On Thu, Sep 27, 2012 at 3:16 PM, Claudio Freire klaussfre...@gmail.com wrote:
On Thu, Sep 27, 2012 at 6:08 PM, David Boreham david_l...@boreham.org wrote:
We went from Dunnington to Nehalem, and it was stunning how much better
the X5675 was compared to the E7450. Sandy Bridge isn't quite that
On Sep 28, 2012, at 1:20 AM, Shaun Thomas stho...@optionshouse.com wrote:
On 09/27/2012 04:08 PM, Evgeny Shishkin wrote:
from benchmarking on my r/o in memory database, i can tell that 9.1
on x5650 is faster than 9.2 on e2440.
How did you run those benchmarks? I find that incredibly hard
On Thu, Sep 27, 2012 at 3:36 PM, Scott Marlowe scott.marl...@gmail.com wrote:
Conversely, we often got MUCH better parallel performance from our
quad 12 core opteron servers than I could get on a dual 8 core xeon at
the time.
Clarification that the two base machines were about the same price.
On 09/27/2012 04:39 PM, Scott Marlowe wrote:
Clarification that the two base machines were about the same price.
48 opteron cores (2.2GHz) or 16 xeon cores at ~2.6GHz. It's been a
few years, I'm not gonna testify to the exact numbers in court.
Same here. We got really good performance on
On Sep 28, 2012, at 1:36 AM, Scott Marlowe scott.marl...@gmail.com wrote:
On Thu, Sep 27, 2012 at 3:16 PM, Claudio Freire klaussfre...@gmail.com
wrote:
On Thu, Sep 27, 2012 at 6:08 PM, David Boreham david_l...@boreham.org
wrote:
We went from Dunnington to Nehalem, and it was stunning
On Thu, Sep 27, 2012 at 3:40 PM, Evgeny Shishkin itparan...@gmail.com wrote:
On Sep 28, 2012, at 1:36 AM, Scott Marlowe scott.marl...@gmail.com wrote:
On Thu, Sep 27, 2012 at 3:16 PM, Claudio Freire klaussfre...@gmail.com
wrote:
On Thu, Sep 27, 2012 at 6:08 PM, David Boreham
On Thu, Sep 27, 2012 at 3:44 PM, Shaun Thomas stho...@optionshouse.com wrote:
On 09/27/2012 04:39 PM, Scott Marlowe wrote:
Clarification that the two base machines were about the same price.
48 opteron cores (2.2GHz) or 16 xeon cores at ~2.6GHz. It's been a
few years, I'm not gonna testify
On Thu, Sep 27, 2012 at 3:28 PM, David Boreham david_l...@boreham.org wrote:
On 9/27/2012 3:16 PM, Claudio Freire wrote:
Careful with AMD, since many (I'm not sure about the latest ones)
cannot saturate the memory bus when running single-threaded. So, great
if you have a high concurrent
Hi Chris,
A couple comments on the NetApp SAN.
We use NetApp, primarily with Fiber connectivity and FC drives. All of the
Postgres files are located on the SAN and this configuration works well.
We have tried iSCSI, but performance is horrible. Same with SATA drives.
The SAN will definitely be
Hi list,
My employer will be donated a NetApp FAS 3040 SAN [1] and we want to run
our warehouse DB on it. The pg9.0 DB currently comprises ~1.5TB of
tables, 200GB of indexes, and grows ~5%/month. The DB is not update
critical, but undergoes larger read and insert operations frequently.
My
chris wrote:
My employer is a university with little funds and we have to find a
cheap way to scale for the next 3 years, so the SAN seems a good chance
to us.
A SAN is rarely ever the cheapest way to scale anything; you're paying
extra for reliability instead.
I was thinking to put the
1 x Intel Xeon X5670, 6C, 2.93GHz, 12M Cache
16 GB (4x4GB) Low Volt DDR3 1066Mhz
PERC H700 SAS RAID controller
4 x 300 GB 10k SAS 6Gbps 2.5 in RAID 10
Apart from Greg's excellent recommendations, I would strongly suggest
more memory. 16GB in 2011 is really on the low side.
PG is using
On 7/15/2011 2:10 AM, Greg Smith wrote:
chris wrote:
My employer is a university with little funds and we have to find a
cheap way to scale for the next 3 years, so the SAN seems a good chance
to us.
A SAN is rarely ever the cheapest way to scale anything; you're paying
extra for reliability
On Fri, Jul 15, 2011 at 12:34 AM, chris chri...@gmx.net wrote:
I was thinking to put the WAL and the indexes on the local disks, and
the rest on the SAN. If funds allow, we might downgrade the disks to
SATA and add a 50 GB SATA SSD for the WAL (SAS/SATA mixup not possible).
Just to add to the
On Fri, Jul 15, 2011 at 10:39 AM, Robert Schnabel
schnab...@missouri.edu wrote:
I'm curious what people think of these:
http://www.pc-pitstop.com/sas_cables_enclosures/scsase166g.asp
I currently have my database on two of these and for my purpose they seem to
be fine and are quite a bit less
Just to add to the conversation, there's no real advantage to putting
WAL on SSD. Indexes can benefit from them, but WAL is mostly
sequential throughput and for that a pair of SATA 1TB drives at
7200RPM work just fine for most folks.
Actually, there's a strong disadvantage to putting WAL
Hi list,
Thanks a lot for your very helpful feedback!
I've tested MD1000, MD1200, and MD1220 arrays before, and always gotten
seriously good performance relative to the dollars spent
Great hint, but I'm afraid that's too expensive for us. But it's a great
way to scale over the years, I'll keep
On Fri, Jul 15, 2011 at 11:49 AM, chris r. chri...@gmx.net wrote:
Hi list,
Thanks a lot for your very helpful feedback!
I've tested MD1000, MD1200, and MD1220 arrays before, and always gotten
seriously good performance relative to the dollars spent
Great hint, but I'm afraid that's too
On 7/14/11 11:34 PM, chris wrote:
Any comments on the configuration? Any experiences with iSCSI vs. Fibre
Channel for SANs and PostgreSQL? If the SAN setup sucks, do you see a
cheap alternative how to connect as many as 16 x 2TB disks as DAS?
Here's the problem with iSCSI: on gigabit ethernet,
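The gigabit-ethernet ceiling Josh is pointing at is easy to quantify. The numbers below are idealized wire rates and an assumed per-disk sequential figure, ignoring TCP/iSCSI protocol overhead, so treat them as back-of-the-envelope only:

```python
# 1 GbE wire rate vs a modest local (DAS) disk array
gbe_bits_per_s = 1_000_000_000
gbe_mb_s = gbe_bits_per_s / 8 / 1e6        # 125 MB/s, before protocol overhead

disk_mb_s = 120                            # assumed: one 7200RPM SATA drive, sequential
raid10_disks = 16                          # the 16 x 2TB DAS option from the question
das_write_mb_s = disk_mb_s * raid10_disks // 2   # RAID10: half the spindles stripe writes
das_vs_gbe = das_write_mb_s / gbe_mb_s     # DAS streams ~7-8x what one GbE link carries
```

So even a small DAS array outruns a single gigabit iSCSI link many times over; that gap is why Fibre Channel, bonded links, or 10GbE come up whenever a SAN is meant to serve a warehouse workload.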
:57 AM
Subject: Re: [PERFORM] Hardware advice
Hi Alexandru,
Alexandru Coseru schrieb:
[...]
Question 1:
The RAID layout should be:
a) 2 hdd in raid 1 for system and pg_xlog and 6 hdd in
raid10 for data ?
b) 8
PROTECTED] on behalf of Sven Geisler
Sent: Wed 12/6/2006 1:09 AM
To: Alex Turner
Cc: Alexandru Coseru; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Hardware advice
Hi Alex,
Please check out http://www.powerpostgresql.com/PerfList before you
use RAID 5 for PostgreSQL.
Anyhow
Hi Alexandru,
Alexandru Coseru schrieb:
[...]
Question 1:
The RAID layout should be:
a) 2 hdd in raid 1 for system and pg_xlog and 6 hdd in
raid10 for data ?
b) 8 hdd in raid10 for all ?
c) 2 hdd in raid1 for system , 2 hdd in raid1 for pg_xlog ,
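The trade-off between the three layouts can be roughed out with spindle math. The per-disk figure below is purely illustrative (8 identical drives assumed), and RAID10 write throughput is approximated as one disk's worth per mirrored pair:

```python
# Rough sequential-write math for the three proposed layouts, 8 disks total.
per_disk_mb_s = 150   # assumed per-disk sequential rate, illustrative only

# a) RAID1 pair for system + pg_xlog, 6-disk RAID10 for data
layout_a = {"xlog_write": per_disk_mb_s * 1,   # RAID1: one disk's write speed
            "data_write": per_disk_mb_s * 3}   # 6-disk RAID10: 3 mirrored stripes

# b) one 8-disk RAID10 for everything (xlog and data share the array)
layout_b = {"xlog_write": per_disk_mb_s * 4,
            "data_write": per_disk_mb_s * 4}

# c) RAID1 for system, RAID1 for pg_xlog, 4-disk RAID10 for data
layout_c = {"xlog_write": per_disk_mb_s * 1,
            "data_write": per_disk_mb_s * 2}
```

Layout (b) has the highest raw throughput but lets data reads/writes compete with WAL's sequential stream; (a) and (c) trade peak numbers for keeping pg_xlog on dedicated, uncontended spindles, which is the usual reason to separate it.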
, 2006 11:57 AM
Subject: Re: [PERFORM] Hardware advice
Hi Alexandru,
Alexandru Coseru schrieb:
[...]
Question 1:
The RAID layout should be:
a) 2 hdd in raid 1 for system and pg_xlog and 6 hdd in
raid10 for data ?
b) 8 hdd in raid10 for all ?
c) 2 hdd
-performance@postgresql.org
Sent: Tuesday, December 05, 2006 11:57 AM
Subject: Re: [PERFORM] Hardware advice
Hi Alexandru,
Alexandru Coseru schrieb:
[...]
Question 1:
The RAID layout should be:
a) 2 hdd in raid 1 for system and pg_xlog and 6 hdd in
raid10 for data
Hello..
I'm waiting for my new system, and meanwhile I have some questions.
First , here are the specs:
The server will have kernel 2.1.19 and it will be use only as a postgresql
server (nothing else... no named, dhcp, web, mail, etc.).
Postgresql version will be 8.2.
It will be heavily
Alexandru,
The server will have kernel 2.1.19 and it will be use only as a postgresql
Assuming you're talking Linux, I think you mean 2.6.19?
--
Josh Berkus
PostgreSQL @ Sun
San Francisco
---(end of broadcast)---
TIP 2: Don't 'kill -9' the
Hello..
Yes , sorry for the mistype..
Regards
Alex
- Original Message -
From: Josh Berkus josh@agliodbs.com
To: pgsql-performance@postgresql.org
Cc: Alexandru Coseru [EMAIL PROTECTED]
Sent: Sunday, December 03, 2006 10:11 PM
Subject: Re: [PERFORM] Hardware advice
Alexandru
On 30/5/03 6:17 pm, scott.marlowe [EMAIL PROTECTED] wrote:
On Fri, 30 May 2003, Adam Witney wrote:
Hi scott,
Thanks for the info
You might wanna do something like go to all 146 gig drives, put a mirror
set on the first 20 or so gigs for the OS, and then use the remainder
(5x120gig or