Re: Raid 1 borked

2020-10-26 Thread Leslie Rhorer




On 10/26/2020 7:55 AM, Bill wrote:

Hi folks,

So we're setting up a small server with a pair of 1 TB hard disks 
sectioned into 5x100GB Raid 1 partition pairs for data,  with 400GB+ 
reserved for future uses on each disk.


	Oh, also, why are you leaving so much unused space on the drives?  One 
of the big advantages of RAID and LVM is the ability to manage storage 
space.  Unmanaged space on drives doesn't serve much purpose.




Re: Raid 1 borked

2020-10-26 Thread Leslie Rhorer

This might be better handled on linux-r...@vger.kernel.org

On 10/26/2020 10:35 AM, Dan Ritter wrote:

Bill wrote:

So we're setting up a small server with a pair of 1 TB hard disks sectioned
into 5x100GB Raid 1 partition pairs for data,  with 400GB+ reserved for
future uses on each disk.


That's weird, but I expect you have a reason for it.


	It does seem odd.  I am curious what the reasons might be.  Do you 
perhaps mean that, rather than RAID 1 pairs on each disk, each 
partition is paired with the corresponding partition on the other drive?


Also, why so small and so many?


I'm not sure what happened, we had the five pairs of disk partitions set up
properly through the installer without problems. However, now the Raid 1
pairs are not mounted as separate partitions but do show up as
subdirectories under /, ie /datab, and they do seem to work as part of the
regular / filesystem.  df -h does not show any md devices or sda/b devices,
neither does mount. (The system partitions are on an nvme ssd).


Mounts have to happen at mount points, and mount points are
directories. What you have is five mount points and nothing
mounted on them.



lsblk reveals sda and sdb with sda[1-5] and sdb[1-5] but no md[0-5]. blkid
reveals that sda[1-5] and sdb[1-5] are still listed as
TYPE="linux_raid_member".

So first of all I'd like to be able to diagnose what's going on. What
commands should I use for that? And secondly, I'd like to get the raid
arrays remounted as separate partitions. How to do that?


Well, you need to get them assembled and mounted. I'm assuming
you used mdadm.

Start by inspecting /proc/mdstat. Does it show 5 assembled MD
devices? If not:

mdadm -A /dev/md0
mdadm -A /dev/md1
mdadm -A /dev/md2
mdadm -A /dev/md3
mdadm -A /dev/md4

And tell us any errors.


	Perhaps before that (or after), what are the contents of 
/etc/mdadm/mdadm.conf?  Try:


grep -v "#" /etc/mdadm/mdadm.conf
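
For reference, on a working Debian setup the non-comment lines of that 
file usually include one ARRAY definition per array, something like the 
following (UUIDs here are made up purely for illustration):

DEVICE partitions
ARRAY /dev/md0 metadata=1.2 UUID=0a1b2c3d:4e5f6071:8293a4b5:c6d7e8f9
ARRAY /dev/md1 metadata=1.2 UUID=1b2c3d4e:5f607182:93a4b5c6:d7e8f90a

If there are no ARRAY lines at all, that would be one likely reason the 
arrays are not being assembled automatically.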


Once they are assembled, mount them:

mount -a

if that doesn't work -- did you remember to list them in
/etc/fstab? Put them in there, something like:

/dev/md0   /dataa   ext4   defaults   0   0

and try again.

-dsr-




Fortunately, there is no data to worry about. However, I'd rather not
reinstall as we've put in a bit of work installing and configuring things.
I'd prefer not to lose that. Can someone help us out?


	Don't fret.  There is rarely, if ever, any need to re-install a 
system to accommodate changes to its RAID configuration.  Even if / or 
/boot are on RAID arrays - which does not seem to be the case here - 
one can ordinarily manage the RAID system without resorting to a 
re-install.  I cannot think of any reason why a re-install would be 
required in order to manage a mounted file system.  Even if /home is 
part of a mounted file system (other than /, of course), the root user 
can handle any sort of change to mounted file systems.  That is 
especially true in your case, where your file systems aren't even 
mounted yet.  Even in the worst case - and yours is far from that - 
one can ordinarily boot from a DVD or a USB drive and manage the 
system from there.
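
Purely as an illustration of that last point - and assuming the arrays 
themselves are intact - the rescue path from a live or rescue 
environment is usually no more than:

mdadm --assemble --scan    # assemble every array described by the member superblocks
cat /proc/mdstat           # confirm they came up
mount /dev/md0 /mnt        # then mount and inspect or repair as needed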




Re: Raid 1 borked

2020-10-26 Thread Mark Neyhart
On 10/26/20 4:55 AM, Bill wrote:

> lsblk reveals sda and sdb with sda[1-5] and sdb[1-5] but no md[0-5].
> blkid reveals that sda[1-5] and sdb[1-5] are still listed as
> TYPE="linux_raid_member".
> 
> So first of all I'd like to be able to diagnose what's going on. What
> commands should I use for that? And secondly, I'd like to get the raid
> arrays remounted as separate partitions. How to do that?
> 
    Bill

mdadm will give you some information about which partitions have been
configured as part of a raid device.

mdadm --examine /dev/sda1

It can also report on a raid device

mdadm --detail /dev/md1

If these commands don't report anything, you will need to define the
raid devices again.
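
As a minimal sketch of defining one pair again - assuming sda1 and sdb1 
are the intended members and hold nothing worth keeping, since creating 
the array will overwrite them:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0    # then put a filesystem on the new array

Repeat for the remaining pairs with md1 through md4.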

Mark



Re: Raid 1 borked

2020-10-26 Thread R. Ramesh

Hi folks,

So we're setting up a small server with a pair of 1 TB hard disks 
sectioned into 5x100GB Raid 1 partition pairs for data, with 400GB+ 
reserved for future uses on each disk.

I'm not sure what happened, we had the five pairs of disk partitions 
set up properly through the installer without problems. However, now 
the Raid 1 pairs are not mounted as separate partitions but do show up 
as subdirectories under /, ie /datab, and they do seem to work as part 
of the regular / filesystem. df -h does not show any md devices or 
sda/b devices, neither does mount. (The system partitions are on an 
nvme ssd).

lsblk reveals sda and sdb with sda[1-5] and sdb[1-5] but no md[0-5]. 
blkid reveals that sda[1-5] and sdb[1-5] are still listed as

TYPE="linux_raid_member".

So first of all I'd like to be able to diagnose what's going on. What 
commands should I use for that? And secondly, I'd like to get the raid 
arrays remounted as separate partitions. How to do that?

Fortunately, there is no data to worry about. However, I'd rather not 
reinstall as we've put in a bit of work installing and configuring 
things. I'd prefer not to lose that. Can someone help us out?

Thanks in advance,

Bill


Did you create the md raid1s after partitioning the disks?

Normally, when you install mdadm or install the system from USB/ISO for 
the first time, the respective md devices are assembled and set up 
appropriately, provided you have already created them.


If you added and partitioned the disks after the main system was 
already installed and running, you will have to create the md raid1s 
and enable automatic assembly through the /etc/mdadm/mdadm.conf file. 
You may also need to update your initrd, though I am not sure about 
that. To access and use the md raid1s as file systems, you also need 
to add the appropriate fstab entries to mount them.
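
A rough outline of those last steps, assuming the arrays already exist 
and /dev/md0 is meant to end up on /dataa (adjust names to suit):

mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record the arrays for assembly
update-initramfs -u                              # so the initrd knows about them at boot
echo '/dev/md0  /dataa  ext4  defaults  0  2' >> /etc/fstab
mkdir -p /dataa && mount -a                      # ensure the mount point exists, then mount everything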


Hope I am not trivializing your issues.

Regards
Ramesh



Re: Raid 1 borked

2020-10-26 Thread Dan Ritter
Bill wrote: 
> So we're setting up a small server with a pair of 1 TB hard disks sectioned
> into 5x100GB Raid 1 partition pairs for data,  with 400GB+ reserved for
> future uses on each disk.

That's weird, but I expect you have a reason for it.

> I'm not sure what happened, we had the five pairs of disk partitions set up
> properly through the installer without problems. However, now the Raid 1
> pairs are not mounted as separate partitions but do show up as
> subdirectories under /, ie /datab, and they do seem to work as part of the
> regular / filesystem.  df -h does not show any md devices or sda/b devices,
> neither does mount. (The system partitions are on an nvme ssd).

Mounts have to happen at mount points, and mount points are
directories. What you have is five mount points and nothing
mounted on them.
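
One quick way to confirm that, taking /datab as an example:

findmnt /datab    # prints nothing while the directory is just part of /
df -h /datab      # shows the root filesystem as the backing device for now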


> lsblk reveals sda and sdb with sda[1-5] and sdb[1-5] but no md[0-5]. blkid
> reveals that sda[1-5] and sdb[1-5] are still listed as
> TYPE="linux_raid_member".
> 
> So first of all I'd like to be able to diagnose what's going on. What
> commands should I use for that? And secondly, I'd like to get the raid
> arrays remounted as separate partitions. How to do that?

Well, you need to get them assembled and mounted. I'm assuming
you used mdadm.

Start by inspecting /proc/mdstat. Does it show 5 assembled MD
devices? If not:

mdadm -A /dev/md0
mdadm -A /dev/md1
mdadm -A /dev/md2
mdadm -A /dev/md3
mdadm -A /dev/md4

And tell us any errors.
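
If those fail because mdadm.conf has no ARRAY entries to go on, you can 
also name the members explicitly, or let mdadm find everything from the 
superblocks (partition names as in your lsblk output):

mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1   # assemble one pair by member
mdadm --assemble --scan                         # or assemble all arrays it can find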

Once they are assembled, mount them:

mount -a

if that doesn't work -- did you remember to list them in
/etc/fstab? Put them in there, something like:

/dev/md0   /dataa   ext4   defaults   0   0

and try again.

-dsr-


> 
> Fortunately, there is no data to worry about. However, I'd rather not
> reinstall as we've put in a bit of work installing and configuring things.
> I'd prefer not to lose that. Can someone help us out?
> 
> Thanks in advance,
> 
>   Bill
> -- 
> Sent using Icedove on Debian GNU/Linux.
> 

-- 
https://randomstring.org/~dsr/eula.html is hereby incorporated by reference.
there is no justice, there is just us.



Raid 1 borked

2020-10-26 Thread Bill

Hi folks,

So we're setting up a small server with a pair of 1 TB hard disks 
sectioned into 5x100GB Raid 1 partition pairs for data,  with 400GB+ 
reserved for future uses on each disk.


I'm not sure what happened, we had the five pairs of disk partitions set 
up properly through the installer without problems. However, now the 
Raid 1 pairs are not mounted as separate partitions but do show up as 
subdirectories under /, ie /datab, and they do seem to work as part of 
the regular / filesystem.  df -h does not show any md devices or sda/b 
devices, neither does mount. (The system partitions are on an nvme ssd).


lsblk reveals sda and sdb with sda[1-5] and sdb[1-5] but no md[0-5]. 
blkid reveals that sda[1-5] and sdb[1-5] are still listed as

TYPE="linux_raid_member".

So first of all I'd like to be able to diagnose what's going on. What 
commands should I use for that? And secondly, I'd like to get the raid 
arrays remounted as separate partitions. How to do that?


Fortunately, there is no data to worry about. However, I'd rather not 
reinstall as we've put in a bit of work installing and configuring 
things. I'd prefer not to lose that. Can someone help us out?


Thanks in advance,

Bill
--
Sent using Icedove on Debian GNU/Linux.