Re: mdadm - two questions

2016-11-30 Thread Rick Thomas

On Nov 30, 2016, at 3:40 AM, Kamil Jońca  wrote:

> Rick Thomas  writes:
> 
>> Hi Kamil,
>> 
>> You’d get a bit more space by configuring your 4 drives as a RAID5
>> array (3TB usable for RAID5, vs 2TB usable for RAID10).  The downside
>> of RAID5 is that the RAID10 (or the one LV with two RAID1 PVs — they
>> amount to the same thing for this discussion) can survive losing two
>> drives at once — if they happen to be the right two drives: i.e. not
>> both sides of a single mirrored pair — while RAID5 would not be able
>> to survive any failure that involved two drives at once.  Either
>> configuration would survive losing any single drive, of course.
>> 
>> If you want to be able to survive simultaneous loss of any two drives, you 
>> should look at RAID6, which would have the same usable capacity (2TB) as the 
>> RAID10.
> 
> I thought about this, but I'm worried about the performance hit
> (calculating parity checksums). Is that worry needless?
> KJ

There’s a three-way tradeoff: usable space; performance; survivability.

For my application (backups), performance is less important than space and
survivability.  The performance hit definitely exists, but I do not find it a
problem.  Of course, YMMV — "your mileage may vary".

Enjoy!
Rick



Re: mdadm - two questions

2016-11-30 Thread Kamil Jońca
Rick Thomas  writes:

> Hi Kamil,
>
> You’d get a bit more space by configuring your 4 drives as a RAID5
> array (3TB usable for RAID5, vs 2TB usable for RAID10).  The downside
> of RAID5 is that the RAID10 (or the one LV with two RAID1 PVs — they
> amount to the same thing for this discussion) can survive losing two
> drives at once — if they happen to be the right two drives: i.e. not
> both sides of a single mirrored pair — while RAID5 would not be able
> to survive any failure that involved two drives at once.  Either
> configuration would survive losing any single drive, of course.
>
> If you want to be able to survive simultaneous loss of any two drives, you 
> should look at RAID6, which would have the same usable capacity (2TB) as the 
> RAID10.

I thought about this, but I'm worried about the performance hit
(calculating parity checksums). Is that worry needless?
KJ

-- 
http://stopstopnop.pl/stop_stopnop.pl_o_nas.html
Davis' Law of Traffic Density:
The density of rush-hour traffic is directly proportional to
1.5 times the amount of extra time you allow to arrive on time.



Re: mdadm - two questions

2016-11-30 Thread Kamil Jońca
Andy Smith  writes:

> Hi Kamil,
>
> On Tue, Nov 29, 2016 at 01:26:55AM +0100, Kamil Jońca wrote:
>> My first plan was to somehow migrate to RAID10. I thought that it is simply
>> "raid0 over some raid1 arrays", so it should be legal to use 2*1TB +
>> 2*1GB devices and then extend 2*1GB => 2*1TB. But it does not work that
>> way. All devices in a Linux mdadm raid10 array must be the same size, or
>> I'm missing something.
>
> In <87d1hnff79.fsf@alfa.kjonca> you said you were hoping to go from
> 2*1TB to 4*1TB. What's the "2*1TB + 2*1GB" you mention now?

All the RAID1 -> RAID10 migration scenarios I have read assume that there
is a period of time when the array is degraded and has no redundancy.

So I thought I could do something like:
1. Create a new md1 array with the new 1TB drives + 2 small partitions (say 1GB).
2. Extend the VG with this array.
3. pvmove to the new array.
4. Remove the old md0 array.
5. Sequentially replace the 1GB partitions with the 1TB disks (from the old
   array) + --grow.
The result would be a raid10 array with the new + old disks. But my
scenario fails at step 1. :)
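
For illustration, the commands I had in mind were roughly these (device
names are only examples):

  # Step 1 is where it breaks: as far as I can tell, md sizes every
  # raid10 member to the smallest device, so mixing 1TB disks with 1GB
  # partitions gives a ~2GB array, not something I can grow later.
  mdadm --create /dev/md1 --level=10 --raid-devices=4 \
      /dev/sdc1 /dev/sdd1 /dev/sda3 /dev/sdb3   # 2*1TB + 2*1GB
  # Steps 2-4 would then have been the usual LVM shuffle:
  pvcreate /dev/md1
  vgextend myvg /dev/md1
  pvmove /dev/md0 /dev/md1
  vgreduce myvg /dev/md0
  mdadm --stop /dev/md0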

I hope it is now clearer why I speak about 2*1TB + 2*1GB.
KJ


-- 
http://wolnelektury.pl/wesprzyj/teraz/
The Tree of Learning bears the noblest fruit, but noble fruit tastes bad.



Re: mdadm - two questions

2016-11-30 Thread Rick Thomas
Hi Kamil,

You’d get a bit more space by configuring your 4 drives as a RAID5 array (3TB
usable for RAID5, vs 2TB usable for RAID10).  The downside of RAID5 is that the
RAID10 (or the one LV with two RAID1 PVs — they amount to the same thing for
this discussion) can survive losing two drives at once — if they happen to be
the right two drives: i.e. not both sides of a single mirrored pair — while
RAID5 would not be able to survive any failure that involved two drives at
once.  Either configuration would survive losing any single drive, of
course.

If you want to be able to survive simultaneous loss of any two drives, you 
should look at RAID6, which would have the same usable capacity (2TB) as the 
RAID10.
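
For four 1TB drives, the usable-capacity arithmetic works out like this
(one drive's worth of parity for RAID5, two for RAID6, half the drives
for mirroring):

  RAID5:  (n-1) * size = 3 * 1TB = 3TB usable
  RAID6:  (n-2) * size = 2 * 1TB = 2TB usable
  RAID10: (n/2) * size = 2 * 1TB = 2TB usable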

Just my two cents…

Rick


Re: mdadm - two questions

2016-11-29 Thread Andy Smith
Hi Kamil,

On Tue, Nov 29, 2016 at 01:26:55AM +0100, Kamil Jońca wrote:
> My first plan was to somehow migrate to RAID10. I thought that it is simply
> "raid0 over some raid1 arrays", so it should be legal to use 2*1TB +
> 2*1GB devices and then extend 2*1GB => 2*1TB. But it does not work that
> way. All devices in a Linux mdadm raid10 array must be the same size, or
> I'm missing something.

In <87d1hnff79.fsf@alfa.kjonca> you said you were hoping to go from
2*1TB to 4*1TB. What's the "2*1TB + 2*1GB" you mention now?

Yes, all your devices will need to be the same size. You've already
been advised of a way to go from RAID-1 to RAID-10¹, so if you
really do have a total of four 1TB drives I can't see why you can't
do that.

Your proposed solution…

> So simplest way in my case is to make second device and assign it as PV
> to VG.

…has the advantage of simplicity, and perhaps that you do not need
to reboot² (assuming hot-swap insertion of the new drives). But if
you have four identical drives that you intend to use for the same
purpose, it would really be neater and probably more performant to
have them all in one RAID-10, wouldn't it? Data will get striped
across four devices instead of two.

If you really do need to make a separate md array and add it to your
VG, you may want to use RAID-10 on it anyway (md RAID-10 works fine
with fewer than four devices). It is a little bit faster than RAID-1.
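
A minimal sketch of that, assuming example device and VG names:

  # md RAID-10 with only two members is legal; with the default 'near'
  # layout it keeps two copies of everything, like RAID-1, but is
  # reportedly a little faster for some workloads
  mdadm --create /dev/md1 --level=10 --raid-devices=2 /dev/sdc1 /dev/sdd1
  pvcreate /dev/md1
  vgextend myvg /dev/md1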

The other thing you could try, if forced to use two PVs, is to configure
your LVM to stripe extents across both PVs instead of just
allocating them linearly from one PV or another. That would get you
back a bit of the performance.
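
As a sketch, with hypothetical PV and VG names:

  # create the LV with extents striped across both PVs (-i 2) rather
  # than allocated linearly; note this applies to newly created LVs
  lvcreate -n data -i 2 -L 500G myvg /dev/md0 /dev/md1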

Cheers,
Andy

¹ Namely:

  0. Have backups in case one of the new drives encounters an error
     during step (5) below.

  1. Make a four device RAID-10 with two missing devices.

  2. Copy your data from your existing RAID-1 to the new (degraded)
     RAID-10.

  3. Adjust config to make the new RAID-10 the real thing that's used.

  4. Reboot to test it all.

  5. Take a deep breath and consider that after what you're about to
     do, any kind of error on the two devices running your RAID-10
     will result in you needing to go to your backups from step
     (0).

     Kill your RAID-1 and add its devices to your RAID-10, so
     it's not degraded any more.

  6. Breathe out in relief as your data is now on a redundant array
     again.
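
A rough command sketch of steps (1), (2) and (5), with example device
and mount point names:

  # step 1: degraded four-device RAID-10, one real member per mirror
  # pair, the other two slots left missing for now
  mdadm --create /dev/md1 --level=10 --raid-devices=4 \
      /dev/sdc1 missing /dev/sdd1 missing
  # step 2: filesystem-level copy (a pvmove at the LVM level also works)
  mkfs.ext4 /dev/md1
  mount /dev/md1 /mnt/new
  rsync -aHAX /srv/data/ /mnt/new/
  # step 5: retire the RAID-1 and donate its devices to the RAID-10
  mdadm --stop /dev/md0
  mdadm --zero-superblock /dev/sda1 /dev/sdb1
  mdadm /dev/md1 --add /dev/sda1 /dev/sdb1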

² You don't need to reboot to go from RAID-1 to RAID-10 as already
  discussed, either, but I think I'd be a bit nervous of the machine
  not booting correctly after I had switched everything over to
  using the new (temporarily degraded) RAID-10, and so I'd want to
  test the full boot process before consigning my working RAID-1 to
  oblivion.

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: mdadm - two questions

2016-11-28 Thread Kamil Jońca
kjo...@poczta.onet.pl (Kamil Jońca) writes:

[...]
> 2. There is md0 (raid1) with two disks in it. It is a PV for lvm.
> I want to extend the space by adding another two disks. Is it possible
> somehow to extend md0? Or is the only way to create a second md device
> and assign it to the volume group?

My first plan was to somehow migrate to RAID10. I thought that it is simply
"raid0 over some raid1 arrays", so it should be legal to use 2*1TB +
2*1GB devices and then extend 2*1GB => 2*1TB. But it does not work that
way. All devices in a Linux mdadm raid10 array must be the same size, or
I'm missing something.
So the simplest way in my case is to make a second md device and assign it
as a PV to the VG.
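
Something like this, I suppose (example names):

  # second RAID1 from the two new disks, then hand it to LVM as a PV
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
  pvcreate /dev/md1
  vgextend myvg /dev/md1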

KJ

-- 
http://wolnelektury.pl/wesprzyj/teraz/
You will have good luck and overcome many hardships.



Re: mdadm - two questions

2016-11-23 Thread Frédéric Marchal
On Tuesday 22 November 2016 18:10:53 Kamil Jońca wrote:
> 2. There is md0 (raid1) with two disks in it. It is a PV for lvm.
> I want to extend the space by adding another two disks. Is it possible
> somehow to extend md0? Or is the only way to create a second md device
> and assign it to the volume group?

I can only answer this question. I did something similar on an old computer
several years ago, except that I had to grow the VG using existing partitions
instead of adding new disks.

The computer had two disks. Each disk had a partition assembled as a raid1 
array md0. md0 was the only PV of the VG.

When I had to grow the VG, I reclaimed two existing partitions on each disk
and assembled them pairwise as the raid1 arrays md1 and md2. Then I added
/dev/md1 and /dev/md2 to the VG.

It doesn't matter to the VG if it is made of /dev/sda1 and /dev/sdb2 or 
/dev/md0 and /dev/md1.

Then follow the usual procedure to grow the LV and the file system.
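
As a sketch, assuming an ext4 filesystem and example names:

  pvcreate /dev/md1 /dev/md2           # initialize the new arrays as PVs
  vgextend myvg /dev/md1 /dev/md2      # add them to the volume group
  lvextend -L +500G /dev/myvg/data     # grow the logical volume
  resize2fs /dev/myvg/data             # grow the filesystem (online for ext4)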

Frederic



Re: mdadm - two questions

2016-11-22 Thread Kamil Jońca
Dan Ritter  writes:

> http://serverfault.com/questions/43677/best-way-to-grow-linux-software-raid-1-to-raid-10

Yes. And that was my idea (except that instead of rsync I plan to do a
pvmove to the new array). Thanks for the confirmation.
KJ
-- 
http://stopstopnop.pl/stop_stopnop.pl_o_nas.html
The sweeter the apple, the blacker the core --
Scratch a lover and find a foe!
-- Dorothy Parker, "Ballad of a Great Weariness"



Re: mdadm - two questions

2016-11-22 Thread Jens Sauer
> Unfortunately I cannot see how to migrate from a raid1 of 2*1TB disks to
> a raid1 (raid10?) of 4*1TB disks.

I don't think you can reshape a RAID 1 to a RAID 10.

mdadm can reshape RAID 1/5/6. You can move from RAID 5 to 6 or the other
way around.
*Maybe* you can even reshape from RAID 1 to RAID 5/6.

I see these options:

* Create a second RAID 1 with the new devices, make it a PV and add it to
  your VG.
* Back up all data, then create a RAID 10/5/6 with all four devices.
* Test whether you can reshape from RAID 1 to 5/6 (a sketch follows below).

The last option only makes sense if RAID 5/6 serves your needs.
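
If you try that route, the reshape I have in mind would look roughly like
this (an untested sketch with example device names; as far as I know md
can convert a two-disk RAID 1 to RAID 5 and then reshape onto more disks):

  # convert the two-disk RAID 1 to a two-disk RAID 5 (same usable size)
  mdadm --grow /dev/md0 --level=5
  # add the two new disks and reshape across all four
  # (older mdadm versions may ask for a --backup-file here)
  mdadm /dev/md0 --add /dev/sdc1 /dev/sdd1
  mdadm --grow /dev/md0 --raid-devices=4
  # once the reshape finishes, let LVM see the bigger PV
  pvresize /dev/md0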

Regards,
Jens


Re: mdadm - two questions

2016-11-22 Thread Dan Ritter
On Tue, Nov 22, 2016 at 09:10:50PM +0100, Kamil Jońca wrote:
> Dan Ritter  writes:
> 
> >> I want to extend the space by adding another two disks. Is it possible
> >> somehow to extend md0? Or is the only way to create a second md device
> >> and assign it to the volume group?
> >
> 
> Unfortunately I cannot see how to migrate from a raid1 of 2*1TB disks to
> a raid1 (raid10?) of 4*1TB disks.


http://serverfault.com/questions/43677/best-way-to-grow-linux-software-raid-1-to-raid-10

-dsr-



Re: mdadm - two questions

2016-11-22 Thread Kamil Jońca
Dan Ritter  writes:

>> I want to extend the space by adding another two disks. Is it possible
>> somehow to extend md0? Or is the only way to create a second md device
>> and assign it to the volume group?
>

Unfortunately I cannot see how to migrate from a raid1 of 2*1TB disks to
a raid1 (raid10?) of 4*1TB disks.
KJ
-- 
http://stopstopnop.pl/stop_stopnop.pl_o_nas.html
If you want to program in C, program in C.  It's a nice language.  I
use it occasionally...   :-)
-- Larry Wall in <7...@jpl-devvax.jpl.nasa.gov>



Re: mdadm - two questions

2016-11-22 Thread Dan Ritter
On Tue, Nov 22, 2016 at 06:10:53PM +0100, Kamil Jońca wrote:
> (I do not know if this is the proper place to ask these questions.)
> 1.
> man mdadm has some info about CONTAINER-s.
> 
> I think that I understand how to use them, but I cannot imagine a use
> case for containers.
> Can someone explain when it is desirable to use containers, especially
> DDF, instead of regular mdX devices?

Only when you have bizarre Windows compatibility needs and
appropriate hardware.

> 2. There is md0 (raid1) with two disks in it. It is a PV for lvm.
> I want to extend the space by adding another two disks. Is it possible
> somehow to extend md0? Or is the only way to create a second md device
> and assign it to the volume group?

Both are available:
https://raid.wiki.kernel.org/index.php/Growing

but if you're doing this frequently, you
should consider ZFS instead of mdadm and lvm.
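
For the "grow md0 itself" side, note that adding members to a RAID-1 only
adds mirrors, not capacity, which is why a second array (or a level change)
is the usual answer. A sketch, with example device names:

  # turns the 2-disk mirror into a 4-way mirror: more redundancy,
  # same usable space
  mdadm /dev/md0 --add /dev/sdc1 /dev/sdd1
  mdadm --grow /dev/md0 --raid-devices=4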

-dsr-



mdadm - two questions

2016-11-22 Thread Kamil Jońca
(I do not know if this is the proper place to ask these questions.)
1.
man mdadm has some info about CONTAINER-s.

I think that I understand how to use them, but I cannot imagine a use
case for containers.
Can someone explain when it is desirable to use containers, especially
DDF, instead of regular mdX devices?

2. There is md0 (raid1) with two disks in it. It is a PV for lvm.
I want to extend the space by adding another two disks. Is it possible
somehow to extend md0? Or is the only way to create a second md device
and assign it to the volume group?

KJ

-- 
http://stopstopnop.pl/stop_stopnop.pl_o_nas.html
Oh yeah.  Forgot about those.  Getting senile, I guess...
-- Larry Wall in <199710261551.haa17...@wall.org>