Glyn Astill wrote:
Last month I found myself taking a power drill to our new Dell
boxes in order to route cables to replacement RAID cards. Having
to do that made me feel really unprofessional and a total cowboy,
but it was either that or shitty performance.
Can you be more specific? Which Dell
--- On Fri, 25/12/09, Scott Marlowe wrote:
> It does kind of knock the stuffing out of the argument that buying
> from the big vendors ensures good hardware experiences. I've had
> similar problems from all the big vendors in the past. I can't
> imagine getting treated that way by my curr
On Thu, Dec 24, 2009 at 5:15 PM, Richard Neill wrote:
>
>
> Scott Marlowe wrote:
>>
>> On Thu, Dec 24, 2009 at 3:51 PM, Richard Neill wrote:
>>>
>>> Adam Tauno Williams wrote:
This isn't true. IBM's IPS series controllers can be checked and
configured via the ipssend utility that
Scott Marlowe wrote:
On Thu, Dec 24, 2009 at 3:51 PM, Richard Neill wrote:
Adam Tauno Williams wrote:
This isn't true. IBM's IPS series controllers can be checked and
configured via the ipssend utility that works very well in 2.6.x Linux.
Unfortunately, what we got (in the IBM) was the g
On Thu, Dec 24, 2009 at 3:51 PM, Richard Neill wrote:
>
>
> Adam Tauno Williams wrote:
>>
>> This isn't true. IBM's IPS series controllers can be checked and
>> configured via the ipssend utility that works very well in 2.6.x Linux.
>>
>
> Unfortunately, what we got (in the IBM) was the garbage S
Adam Tauno Williams wrote:
This isn't true. IBM's IPS series controllers can be checked and configured
via the ipssend utility that works very well in 2.6.x Linux.
Unfortunately, what we got (in the IBM) was the garbage ServeRaid 8kl
card. This one is atrocious - it shipped with a hideous
This isn't true. IBM's IPS series controllers can be checked and configured
via the ipssend utility that works very well in 2.6.x Linux.
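For anyone who wants to script a periodic health check around that utility, here is a rough sketch of polling it from cron and flagging anything that is not reported as Online. The binary path, the "getconfig" subcommand and the state keywords are assumptions from memory, not taken from this thread, so verify them against your own ipssend documentation first.

#!/usr/bin/env python
# Rough sketch: poll an IBM ServeRAID controller via the ipssend CLI and
# flag anything that does not report as Online. The binary path, the
# "getconfig" subcommand and the state keywords are assumptions -- check
# your firmware's ipssend docs before relying on this.
import subprocess
import sys

IPSSEND = "/usr/sbin/ipssend"   # assumed install location
CONTROLLER = "1"                # first controller

def controller_report(controller=CONTROLLER):
    """Return the raw configuration/status text for one controller."""
    out = subprocess.run([IPSSEND, "getconfig", controller],
                         capture_output=True, text=True, check=True)
    return out.stdout

def degraded_lines(report):
    """Pick out lines that mention a non-Online state."""
    bad = ("Degraded", "Offline", "Failed", "Rebuild")
    return [line for line in report.splitlines()
            if any(word in line for word in bad)]

if __name__ == "__main__":
    problems = degraded_lines(controller_report())
    for line in problems:
        print(line)
    sys.exit(1 if problems else 0)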
"Scott Marlowe" wrote:
>On Thu, Dec 24, 2009 at 11:09 AM, Richard Neill wrote:
>>
>>
>> Jeremy Harris wrote:
>>>
>>> On 12/24/2009 05:12 PM, Richard Neill w
Richard and others, thank you all for your answers.
My comments inline.
Richard Neill wrote:
> 2. Also, for reads, the more RAM you have, the better (for caching). I'd
> suspect that another 8GB of RAM is a better expenditure than a 2nd drive
> in many cases.
The size of the RAM is already four
On Thu, Dec 24, 2009 at 11:09 AM, Richard Neill wrote:
>
>
> Jeremy Harris wrote:
>>
>> On 12/24/2009 05:12 PM, Richard Neill wrote:
>>>
>>> Of course, with a server machine, it's nearly impossible to use mdadm
>>> raid: you are usually compelled to use a hardware raid card.
>>
>> Could you expand
Jeremy Harris wrote:
On 12/24/2009 05:12 PM, Richard Neill wrote:
Of course, with a server machine, it's nearly impossible to use mdadm
raid: you are usually compelled to use a hardware raid card.
Could you expand on that?
Both of the last machines I bought (an IBM X3550 and an HP DL380) c
On 12/24/2009 05:12 PM, Richard Neill wrote:
Of course, with a server machine, it's nearly impossible to use mdadm
raid: you are usually compelled to use a hardware raid card.
Could you expand on that?
- Jeremy
Greg Smith wrote:
Richard Neill wrote:
3. RAID 0 is twice as unreliable as no RAID. I'd recommend using RAID 1
instead. If you use the Linux software mdraid, remote admin is easy.
The main thing to be wary of with Linux software RAID-1 is that you
configure things so that both drives are capa
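To make the "both drives bootable" point concrete, a minimal sketch of putting the boot loader onto every member of the boot mirror; the device names and array name are placeholders, and it assumes GRUB on a BIOS machine rather than anything from this thread.

#!/usr/bin/env python
# Minimal sketch: install the boot loader on every member of a RAID-1 boot
# mirror so the box still boots if either drive dies. Device names are
# placeholders; assumes GRUB on a BIOS machine (one grub-install per disk).
import subprocess

MIRROR_MEMBERS = ["/dev/sda", "/dev/sdb"]   # adjust to your array members

for disk in MIRROR_MEMBERS:
    # grub-install writes the boot code to the named disk's MBR
    subprocess.run(["grub-install", disk], check=True)

# Sanity check: confirm the md array itself is healthy afterwards
subprocess.run(["mdadm", "--detail", "/dev/md0"], check=True)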
Mark Mielke wrote:
Can you be more specific? I am using ext4 without problems that I have
discerned - but mostly for smaller databases (~10 databases, one of
almost 1 GByte, most under 500 MBytes).
Every time I do hear about ext4, so far it's always in the context of
something that doesn't
On 12/24/2009 10:51 AM, Greg Smith wrote:
7. If you have 3 equal disks, try doing some experiments. My inclination
would be to set them all up with ext4...
I have yet to hear a single positive thing about using ext4 for
PostgreSQL. Stick with ext3, where the problems you might run into
are at
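If you do run the suggested experiments, the number that usually matters most for the WAL is fsync throughput on the filesystem under test. A crude, purely illustrative sketch of timing synced 8 kB writes follows; the test path is invented, and pgbench against a real workload remains the better experiment.

#!/usr/bin/env python
# Crude fsync timing sketch: write and fsync a stream of 8 kB blocks on the
# filesystem under test, roughly what the WAL writer does. Purely
# illustrative -- pgbench or a real workload is the better experiment.
import os
import time

TEST_FILE = "/mnt/testfs/fsync_probe"   # put this on the fs you are testing
BLOCK = b"\0" * 8192
COUNT = 1000

fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT, 0o600)
start = time.time()
for _ in range(COUNT):
    os.write(fd, BLOCK)
    os.fsync(fd)
elapsed = time.time() - start
os.close(fd)
os.unlink(TEST_FILE)

print("%d fsyncs in %.2fs -> %.0f syncs/sec" % (COUNT, elapsed, COUNT / elapsed))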
Gaël Le Mignot wrote:
This solution costs only one extra disk (which is quite cheap
nowadays)
I would wager that the system being used here only has enough space to
house 3 drives, thus the question, which means that adding a fourth
drive probably requires buying a whole new server.
Hello,
Instead of using 3 disks in RAID-0 and one without RAID for archive, I
would rather invest in one extra disk and have either a RAID 1+0
setup or use two disks in RAID-1 for the WAL and two disks in RAID-1
for the main database (I'm not sure which performs better between those
two so
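If the fourth disk is an option, the two-mirror layout described above could be assembled with mdadm roughly as sketched below; the device names, partition layout and choice of ext3 are invented for the example, not taken from the thread.

#!/usr/bin/env python
# Sketch of the "two RAID-1 pairs" layout suggested above: one mirror for
# the WAL, one for the main data directory. Device names, partitions and
# the ext3 choice are invented for the example -- adapt before running.
import subprocess

def make_mirror(md_dev, members):
    subprocess.run(["mdadm", "--create", md_dev,
                    "--level=1", "--raid-devices=2"] + members, check=True)

make_mirror("/dev/md0", ["/dev/sda1", "/dev/sdb1"])   # WAL (pg_xlog)
make_mirror("/dev/md1", ["/dev/sdc1", "/dev/sdd1"])   # main data directory

# One filesystem per mirror; mount or symlink pg_xlog onto the WAL mirror.
for md in ("/dev/md0", "/dev/md1"):
    subprocess.run(["mkfs.ext3", md], check=True)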
Richard Neill wrote:
3. RAID 0 is twice as unreliable as no RAID. I'd recommend using RAID 1
instead. If you use the Linux software mdraid, remote admin is easy.
The main thing to be wary of with Linux software RAID-1 is that you
configure things so that both drives are capable of booting the s
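The "twice as unreliable" figure follows directly from the failure arithmetic, assuming independent drive failures; the 3% annual failure rate below is an arbitrary illustrative number, not data from this thread.

#!/usr/bin/env python
# Back-of-the-envelope reliability arithmetic behind "RAID 0 is twice as
# unreliable as no RAID". Assumes independent failures; the 3% annual
# failure rate is an arbitrary illustrative number.
p = 0.03  # probability one drive fails within the period

single = p                      # one disk: data lost if it fails
raid0  = 1 - (1 - p) ** 2       # striped pair: lost if EITHER disk fails
raid1  = p ** 2                 # mirrored pair: lost only if BOTH fail

print("single disk : %.4f" % single)
print("RAID 0 pair : %.4f  (~2x the single-disk risk)" % raid0)
print("RAID 1 pair : %.6f" % raid1)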
A couple of thoughts occur to me:
1. For reads, RAID 1 should also be good: it will allow a read to occur
from whichever disk can provide the data fastest.
2. Also, for reads, the more RAM you have, the better (for caching). I'd
suspect that another 8GB of RAM is a better expenditure than a 2nd
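On the RAM-versus-extra-spindle point, the usual way to let PostgreSQL actually benefit from added memory is to size the memory-related settings to match. A small sketch that prints rule-of-thumb values follows; the 25%/75% fractions are common folklore defaults rather than anything recommended in this thread.

#!/usr/bin/env python
# Rule-of-thumb sizing sketch for the memory-related postgresql.conf knobs.
# The 25% (shared_buffers) and 75% (effective_cache_size) fractions are
# folklore defaults, not figures from this thread -- tune for real workloads.
ram_gb = 16   # total RAM in the box, e.g. after adding another 8 GB

shared_buffers_gb       = ram_gb * 0.25
effective_cache_size_gb = ram_gb * 0.75

print("shared_buffers       = %dGB" % shared_buffers_gb)
print("effective_cache_size = %dGB" % effective_cache_size_gb)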
2009/12/24 Ognjen Blagojevic :
> Hi all,
>
> I'm trying to figure out which HW configuration with 3 SATA drives is the
> best in terms of reliability and performance for a Postgres database.
>
> I'm thinking of connecting two drives in RAID 0, and keeping the database (and
> WAL) on these disks - to imp
Hi all,
I'm trying to figure out which HW configuration with 3 SATA drives is
the best in terms of reliability and performance for a Postgres database.
I'm thinking of connecting two drives in RAID 0, and keeping the database
(and WAL) on these disks - to improve the write performance of the SATA