hi there,

 As a newbie in Btrfs land, I installed a RAID1 configuration and played with it.
 After hot-removing a drive from the RAID, I ran into two issues that I cannot
find answers to with my Google foo.

 So, to not die stupid, I am posting a little email here to see if there are
solutions for this :)

  The experiment finished well: the RAID was reconstructed, and some untouched
partitions were even recreated automatically when the drive was put back, which
is very cool!
  The one with changes had to be manually replaced, which was expected.

  The issues are:

- logs: my systemd journal and my kernel.log were completely flooded with btrfs
messages.
       In the mdadm world you get a few messages about the disappearing drive
and then it stops.
       In my test there were hundreds of megabytes of btrfs error logs.

- monitoring: with mdadm you have a daemon that can warn you and even
automatically run programs on a failure.
               I think ZFS has this too with ZED.
               I was not able to find such a thing for btrfs.
               Does a monitoring system exist that can warn the admin when
errors appear in btrfs, other than cron-grepping the logs or running btrfs
device stats? (A rough sketch of what I mean by the latter is below.)
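
  To be concrete, this is the kind of cron job I had in mind: just a sketch in
Python, where the device path, the mail recipient and the use of the local
"mail" command are my own assumptions, not anything btrfs ships.

#!/usr/bin/env python3
# Sketch of a cron check: run `btrfs device stats` and mail the admin
# if any error counter is non-zero. Device path and recipient are
# placeholders for illustration.
import subprocess
import sys

DEVICE = "/dev/sda"      # assumption: the btrfs device to watch
ALERT_ADDRESS = "root"   # assumption: local mail recipient

def read_error_counters(device):
    """Return the non-zero error counters reported by `btrfs device stats`."""
    out = subprocess.run(
        ["btrfs", "device", "stats", device],
        capture_output=True, text=True, check=True,
    ).stdout
    errors = {}
    for line in out.splitlines():
        # Lines look like: "[/dev/sda].write_io_errs   0"
        parts = line.split()
        if len(parts) != 2:
            continue
        name, value = parts
        if int(value) != 0:
            errors[name] = int(value)
    return errors

def main():
    errors = read_error_counters(DEVICE)
    if errors:
        body = "\n".join(f"{name}: {count}" for name, count in errors.items())
        # Send the non-zero counters to the admin via the local mail command.
        subprocess.run(
            ["mail", "-s", f"btrfs errors on {DEVICE}", ALERT_ADDRESS],
            input=body, text=True,
        )
        sys.exit(1)

if __name__ == "__main__":
    main()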


  Is there any way to mitigate the log issue, and has someone taken a shot at a
monitoring system for btrfs?

--
best regards,
Ghislain.
