On Sat, 2020-11-14 at 14:37 -0700, Warren Young wrote:
> On Nov 14, 2020, at 5:56 AM, hw wrote:
> > On Wed, 2020-11-11 at 16:38 -0700, Warren Young wrote:
> > > On Nov 11, 2020, at 2:01 PM, hw wrote:
> > > > I have yet to see software RAID that doesn't kill the performance.
> > >
> > > When was
> On Nov 14, 2020, at 8:45 PM, hw wrote:
>
> On Sat, 2020-11-14 at 18:55 +0100, Simon Matter wrote:
>>> On Wed, 2020-11-11 at 16:38 -0700, Warren Young wrote:
>>>> On Nov 11, 2020, at 2:01 PM, hw wrote:
>>>>> I have yet to see software RAID that doesn't kill the performance.
>>>> When
On Sat, Nov 14, 2020 at 6:32 PM hw wrote:
>
> I don't like the idea of flashing one. I don't have the firmware and I
> don't know if they can be flashed with Linux. Aren't there any good ---
> and cost-efficient --- ones that do JBOD by default, preferably including
> 16-port cards with
On Sat, 2020-11-14 at 18:55 +0100, Simon Matter wrote:
> > On Wed, 2020-11-11 at 16:38 -0700, Warren Young wrote:
> > > On Nov 11, 2020, at 2:01 PM, hw wrote:
> > > > I have yet to see software RAID that doesn't kill the performance.
> > >
> > > When was the last time you tried it?
> >
> > I'm
On Sat, 2020-11-14 at 07:11 -0800, John Pierce wrote:
> On Sat, Nov 14, 2020, 4:57 AM hw wrote:
>
> > On Wed, 2020-11-11 at 16:38 -0700, Warren Young wrote:
> >
> > > > And where
> > > > do you get cost-efficient cards that can do JBOD?
> > >
> > > $69, 8 SATA/SAS ports:
On Nov 14, 2020, at 5:56 AM, hw wrote:
>
> On Wed, 2020-11-11 at 16:38 -0700, Warren Young wrote:
>> On Nov 11, 2020, at 2:01 PM, hw wrote:
>>> I have yet to see software RAID that doesn't kill the performance.
>>
>> When was the last time you tried it?
>
> I'm currently using it, and the
On Sat, Nov 14, 2020, 4:57 AM hw wrote:
> On Wed, 2020-11-11 at 16:38 -0700, Warren Young wrote:
>
> > > And where
> > > do you get cost-efficient cards that can do JBOD?
> >
> > $69, 8 SATA/SAS ports: https://www.newegg.com/p/0ZK-08UH-0GWZ1
>
> That says it's for HP. So will you still get
On Wed, 2020-11-11 at 16:38 -0700, Warren Young wrote:
> On Nov 11, 2020, at 2:01 PM, hw wrote:
> > I have yet to see software RAID that doesn't kill the performance.
>
> When was the last time you tried it?
I'm currently using it, and the performance sucks. Perhaps it's
not the software
>
>
>> On Nov 11, 2020, at 6:00 PM, John Pierce wrote:
>>
>> On Wed, Nov 11, 2020 at 3:38 PM Warren Young wrote:
>>
>>> On Nov 11, 2020, at 2:01 PM, hw wrote:
>>>> I have yet to see software RAID that doesn't kill the performance.
>>>
>>> When was the last time you tried it?
>>>
>>> Why
> On Nov 11, 2020, at 8:04 PM, John Pierce wrote:
>
> In large RAIDs, I label my disks with the last 4 or 6 digits of the drive
> serial number (or, for SAS disks, the WWN). This is visible via smartctl,
> and I record it with the zpool documentation I keep on each server
> (typically a text
> On Nov 11, 2020, at 8:07 PM, John Pierce wrote:
>
> On Wed, Nov 11, 2020 at 5:47 PM Valeri Galtsev
> wrote:
>
>> I’m sure you can reflash an LSI card to make it a SATA or SAS HBA, or a
>> MegaRAID hardware RAID adapter. As far as I recollect it is the same
>> electronics board. I reflashed a
On Nov 11, 2020, at 7:04 PM, Warren Young wrote:
>
> zpool mount -d /dev/disk/by-partlabel
Oops, I’m mixing the zpool and zfs commands. It’d be “zpool import”.
And you do this just once: afterward, the automatic on-boot import brings the
drives back in using the names they had before, so
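The scheme Warren describes can be sketched in a couple of commands. This is a hedged illustration, not from the thread: the partition name "bay03", pool name "tank", and device "/dev/sdb" are example values you would adapt to your own enclosure layout.

```shell
# Give the pool member's GPT partition a human-readable name
# that matches its physical bay:
sgdisk --change-name=1:bay03 /dev/sdb

# Export and re-import the pool so ZFS picks up the label-based paths:
zpool export tank
zpool import -d /dev/disk/by-partlabel tank

# After the one-time import, "zpool status" lists members by partition
# label (e.g. "bay03"), so a failed drive maps straight to a bay.
```

As the follow-up notes, the import only has to be done once; subsequent boots reuse the same names.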
On Wed, Nov 11, 2020 at 5:47 PM Valeri Galtsev
wrote:
> I’m sure you can reflash an LSI card to make it a SATA or SAS HBA, or a
> MegaRAID hardware RAID adapter. As far as I recollect it is the same
> electronics board. I reflashed a couple of HBAs to make them MegaRAID boards.
>
you can reflash SOME
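As a rough outline of the kind of crossflash being discussed here (only some cards support it, as noted above): this sketch assumes an LSI SAS2008-based card such as a 9211-8i, and the firmware file names are examples only. Flashing the wrong image can brick a card, so verify your exact model and firmware first.

```shell
# Identify the controller and its current firmware:
sas2flash -listall

# Erase the existing flash (controller 0):
sas2flash -o -e 6

# Write the IT-mode (plain HBA) firmware:
sas2flash -o -f 2118it.bin

# Optional: add the boot ROM only if you need to boot from the card:
sas2flash -o -b mptsas2.rom
```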
In large RAIDs, I label my disks with the last 4 or 6 digits of the drive
serial number (or, for SAS disks, the WWN). This is visible via smartctl,
and I record it with the zpool documentation I keep on each server
(typically a text file on a cloud drive). zpools don't actually care
WHAT
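The labeling scheme above is easy to script. A minimal bash sketch — the serial number here is made up for illustration; in practice it would come from `smartctl -i /dev/sda`:

```shell
# Build a short bay label from a drive's serial number,
# keeping only the last 6 characters as described above.
serial="WD-WCC4N0123456"   # example value, not a real drive
label="sn-${serial: -6}"   # bash substring: last 6 characters
echo "$label"
```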
On Nov 11, 2020, at 6:37 PM, Valeri Galtsev wrote:
>
> how do you map a failed software RAID drive to the physical port of,
> say, a SAS-attached enclosure?
With ZFS, you set a partition label on the whole-drive partition pool member,
then mount the pool with something like “zpool mount -d
> On Nov 11, 2020, at 5:38 PM, Warren Young wrote:
>
> On Nov 11, 2020, at 2:01 PM, hw wrote:
>>
>> I have yet to see software RAID that doesn't kill the performance.
>
> When was the last time you tried it?
>
> Why would you expect that a modern 8-core Intel CPU would impede I/O in any
>
> On Nov 11, 2020, at 6:00 PM, John Pierce wrote:
>
> On Wed, Nov 11, 2020 at 3:38 PM Warren Young wrote:
>
>> On Nov 11, 2020, at 2:01 PM, hw wrote:
>>>
>>> I have yet to see software RAID that doesn't kill the performance.
>>
>> When was the last time you tried it?
>>
>> Why would you
On Wed, Nov 11, 2020 at 3:38 PM Warren Young wrote:
> On Nov 11, 2020, at 2:01 PM, hw wrote:
> >
> > I have yet to see software RAID that doesn't kill the performance.
>
> When was the last time you tried it?
>
> Why would you expect that a modern 8-core Intel CPU would impede I/O in
> any
On Nov 11, 2020, at 2:01 PM, hw wrote:
>
> I have yet to see software RAID that doesn't kill the performance.
When was the last time you tried it?
Why would you expect that a modern 8-core Intel CPU would impede I/O in any
measurable way as compared to the outdated single-core 32-bit RISC
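One way to move this debate from impressions to numbers is to run the same workload against the md array and against one of its member disks. A hypothetical fio invocation — device names are examples, and `--readonly` keeps the test non-destructive:

```shell
# Sequential-read throughput of the software RAID device:
fio --name=md-seq --filename=/dev/md0 --rw=read --bs=1M \
    --direct=1 --size=4G --readonly

# The same test against a single member disk, for comparison:
fio --name=disk-seq --filename=/dev/sda --rw=read --bs=1M \
    --direct=1 --size=4G --readonly

# md's own view of the arrays and any resync activity:
cat /proc/mdstat
```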
On Wed, 2020-11-11 at 11:34 +0100, Thomas Bendler wrote:
> Am Mi., 11. Nov. 2020 um 07:28 Uhr schrieb hw :
>
> > [...]
> > With this experience, these controllers are now deprecated. RAID
> > controllers that can't rebuild an array after a disk has failed and
> > has been replaced are
Am Mi., 11. Nov. 2020 um 07:28 Uhr schrieb hw :
> [...]
> With this experience, these controllers are now deprecated. RAID
> controllers that can't rebuild an array after a disk has failed and
> has been replaced are virtually useless.
> [...]
HW RAID is often delivered with quite limited
On Mon, 2020-11-09 at 16:30 +0100, Thomas Bendler wrote:
> Am Fr., 6. Nov. 2020 um 20:38 Uhr schrieb hw :
>
> > [...]
> > Some search results indicate that it's possible that other disks in the
> > array have read errors and might prevent rebuilding for RAID 5. I don't
> > know if there are read
Am Fr., 6. Nov. 2020 um 20:38 Uhr schrieb hw :
> [...]
> Some search results indicate that it's possible that other disks in the
> array have read errors and might prevent rebuilding for RAID 5. I don't
> know if there are read errors, and if it's read errors, I think it would
> mean that these
On Fri, 2020-11-06 at 12:08 +0100, Thomas Bendler wrote:
> Am Fr., 6. Nov. 2020 um 00:52 Uhr schrieb hw :
>
> > [...]
> > logicaldrive 1 (14.55 TB, RAID 1+0, Ready for Rebuild)
> > [...]
>
> Have you checked the rebuild priority:
>
> ❯ ssacli ctrl slot=0 show config detail | grep "Rebuild
Am Fr., 6. Nov. 2020 um 00:52 Uhr schrieb hw :
> [...]
> logicaldrive 1 (14.55 TB, RAID 1+0, Ready for Rebuild)
> [...]
Have you checked the rebuild priority:
❯ ssacli ctrl slot=0 show config detail | grep "Rebuild Priority"
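If the priority turns out to be the problem, it can be raised with the same tool. A sketch based on the quoted command — the slot number comes from the thread and may differ on your controller, so check your own `show config detail` output first:

```shell
# Inspect the current setting (as suggested above):
ssacli ctrl slot=0 show config detail | grep -i "Rebuild Priority"

# Raise it so the rebuild gets I/O bandwidth ahead of host requests:
ssacli ctrl slot=0 modify rebuildpriority=high
```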
Hi,
is there a way to rebuild an array using ssacli with a P410?
A failed disk has been replaced and now the array is not
rebuilding like it should:
Array A (SATA, Unused Space: 1 MB)
logicaldrive 1 (14.55 TB, RAID 1+0, Ready for Rebuild)
physicaldrive 1I:0:1 (port 1I:box