On Tue, Jan 31 2017, Andrei Borzenkov wrote:
>>>> Changing the
>>>>     Conflicts=sys-devices-virtual-block-%i.device
>>>> lines to
>>>>     ConditionPathExists=/sys/devices/virtual/block/%i
>>>> might make the problem go away, without any negative consequences.
>>>
>>> Ugly, but yes, may be
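For reference, a sketch of that suggested edit applied to the stock
mdadm-last-resort@.service shipped by mdadm (the ExecStart path varies by
distribution; treat the exact lines as illustrative, not authoritative):

    # mdadm-last-resort@.service (sketch of the proposed change)
    [Unit]
    Description=Activate md array even though degraded
    DefaultDependencies=no
    # was: Conflicts=sys-devices-virtual-block-%i.device
    # Only attempt last-resort activation if the md device node actually
    # exists, instead of declaring a conflict that systemd resolves by
    # stopping the device (and everything mounted on it):
    ConditionPathExists=/sys/devices/virtual/block/%i

    [Service]
    Type=oneshot
    ExecStart=/sbin/mdadm --run /dev/%i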
31.01.2017 01:19, NeilBrown wrote:
> On Mon, Jan 30 2017, Andrei Borzenkov wrote:
>
>> On Mon, Jan 30, 2017 at 9:36 AM, NeilBrown wrote:
>> ...
>>>
>>> systemd[1]: Created slice system-mdadm\x2dlast\x2dresort.slice.
>>> systemd[1]: Starting
> Does
> systemctl list-dependencies sys-devices-virtual-block-md0.device
> report anything interesting? I get
>
> sys-devices-virtual-block-md0.device
> ● └─mdmonitor.service
Nothing interesting, the same output as you have above.
> Could you try running with systemd.log_level=debug on
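For anyone wanting to reproduce that, two standard ways to get systemd
debug output (the grep pattern is just this thread's array name):

    # raise the manager's log level at runtime:
    systemd-analyze set-log-level debug

    # or, for the next boot, append to the kernel command line:
    #   systemd.log_level=debug systemd.log_target=kmsg

    # then pull the relevant messages from the journal:
    journalctl -b | grep -E 'md0|last-resort'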
On Mon, Jan 30, 2017 at 9:36 AM, NeilBrown wrote:
...
>
> systemd[1]: Created slice system-mdadm\x2dlast\x2dresort.slice.
> systemd[1]: Starting system-mdadm\x2dlast\x2dresort.slice.
> systemd[1]: Starting Activate md array even though degraded...
> systemd[1]:
On 30.01.2017 at 07:36, NeilBrown wrote:
By virtue of the "Following" attribute: dev-md0.device is following
sys-devices-virtual-block-md0.device, so stopping the latter will also
stop the former.
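(That relationship can be checked directly; the expected output below is
what the explanation implies, not captured from the affected machine:

    systemctl show -p Following dev-md0.device
    # expected: Following=sys-devices-virtual-block-md0.device
)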
Ahh.. now I see why I never saw this.
Two reasons.
1/ My /etc/fstab has
On Mon, Jan 30 2017, Andrei Borzenkov wrote:
> 30.01.2017 04:53, NeilBrown wrote:
>> On Fri, Jan 27 2017, Andrei Borzenkov wrote:
>>
>>> 26.01.2017 21:02, Luke Pyzowski wrote:
>>>> Hello,
>>>> I have a large RAID6 device with 24 local drives on CentOS7.3. Randomly
>>>> (around 50% of the time)
30.01.2017 04:53, NeilBrown wrote:
> On Fri, Jan 27 2017, Andrei Borzenkov wrote:
>
>> 26.01.2017 21:02, Luke Pyzowski wrote:
>>> Hello,
>>> I have a large RAID6 device with 24 local drives on CentOS7.3. Randomly
>>> (around 50% of the time) systemd will unmount my RAID device thinking it is
On Fri, Jan 27 2017, Andrei Borzenkov wrote:
> 26.01.2017 21:02, Luke Pyzowski wrote:
>> Hello,
>> I have a large RAID6 device with 24 local drives on CentOS7.3. Randomly
>> (around 50% of the time) systemd will unmount my RAID device thinking it is
>> degraded after the
27.01.2017 22:44, Luke Pyzowski wrote:
...
> Jan 27 11:33:14 lnxnfs01 kernel: md/raid:md0: raid level 6 active with 24 out
> of 24 devices, algorithm 2
...
> Jan 27 11:33:14 lnxnfs01 kernel: md0: detected capacity change from 0 to
> 45062020923392
> Jan 27 11:33:14 lnxnfs01 systemd[1]: Found
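(A side note on those numbers, as a sanity check: a capacity of
45062020923392 bytes is roughly 45 TB, and RAID6 across 24 drives leaves
22 data drives, so 45062020923392 / 22 ≈ 2.05 TB per drive. The kernel
announced the full 24-of-24, non-degraded geometry here.)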
I've modified a number of settings to try to resolve this; so far, no
success. I've created an explicit mount unit for the RAID array:
/etc/systemd/system/share.mount
In there I've experimented with TimeoutSec=, and in
/etc/systemd/system/mdadm-last-resort@.timer I've worked with
OnActiveSec= (sketches of both below).
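To make those two experiments concrete, minimal sketches of each unit
follow. The mount point, device path, filesystem type and timeout values
are assumptions for illustration, not taken from the original report:

    # /etc/systemd/system/share.mount (sketch; a mount unit's name must
    # match its mount point, so share.mount mounts on /share)
    [Unit]
    Description=RAID6 data share

    [Mount]
    What=/dev/md0
    Where=/share
    Type=xfs
    TimeoutSec=300

    [Install]
    WantedBy=multi-user.target

    # /etc/systemd/system/mdadm-last-resort@.timer (sketch; the stock unit
    # ships a short OnActiveSec, raised here as one of the experiments)
    [Unit]
    Description=Timer to wait for more drives before activating degraded array.
    DefaultDependencies=no
    Conflicts=sys-devices-virtual-block-%i.device

    [Timer]
    OnActiveSec=120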
> 26.01.2017 21:02, Luke Pyzowski wrote:
> > Hello,
> > I have a large RAID6 device with 24 local drives on CentOS7.3.
> > Randomly (around 50% of the time) systemd will unmount my RAID
> > device thinking it is degraded after the mdadm-last-resort@.timer
> > expires, however the device is working
26.01.2017 21:02, Luke Pyzowski wrote:
> Hello,
> I have a large RAID6 device with 24 local drives on CentOS7.3. Randomly
> (around 50% of the time) systemd will unmount my RAID device thinking it is
> degraded after the mdadm-last-resort@.timer expires, however the device is
> working normally
On 26.01.2017 at 19:02, Luke Pyzowski wrote:
> I have a large RAID6 device with 24 local drives on CentOS7.3. Randomly (around
> 50% of the time) systemd will unmount my RAID device thinking it is degraded
> after the mdadm-last-resort@.timer expires, however the device is working
> normally by
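For anyone hitting the same thing, the "working normally" part can be
confirmed from the shell with the usual md interfaces (md0 being this
thread's array name):

    cat /proc/mdstat            # should show md0 active with [24/24] devices
    mdadm --detail /dev/md0     # "State :" should read clean, not degraded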