On Wed, Aug 29, 2018 at 09:58:58AM, Duncan wrote:
> Cerem Cem ASLAN posted on Wed, 29 Aug 2018 09:58:21 +0300 as excerpted:
>
> > Thinking again, this is totally acceptable. If the requirement is a
> > healthy disk, then I think I must check the disk health myself.
> > I may believe
Cerem Cem ASLAN posted on Wed, 29 Aug 2018 09:58:21 +0300 as excerpted:
> Thinking again, this is totally acceptable. If the requirement is a
> healthy disk, then I think I must check the disk health myself.
> I may believe that the disk is in a good state, or make a quick test or
> make s
On 08/28/2018 12:51 AM, Cerem Cem ASLAN wrote:
> Hi,
>
Good morning.
>
> I'm not sure how to proceed at the moment. Taking successful backups
> made me think that everything might be okay, but I'm not sure whether
> I should continue trusting the drive. What additional checks should I
> perform?
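A reasonable first answer to the "what additional checks" question is SMART. This is only a sketch: `/dev/sdX` is a placeholder for the actual drive, and the commands are echoed as a dry run rather than executed, since they need a real device and root privileges.

```shell
# SMART health sketch (the device name /dev/sdX is a placeholder).
# Commands are echoed as a dry run; remove the echoes to run them.
DEV=/dev/sdX
echo "smartctl -H $DEV"       # overall health self-assessment
echo "smartctl -A $DEV"       # attributes: watch Reallocated_Sector_Ct,
                              # Current_Pending_Sector, Offline_Uncorrectable
echo "smartctl -t long $DEV"  # start an extended self-test
```

A passing `-H` verdict alone is weak evidence; nonzero pending or reallocated sector counts in `-A` are the more telling sign of a failing drive.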
Chris Murphy wrote on Wed, Aug 29, 2018 at 02:58:
>
> On Tue, Aug 28, 2018 at 5:04 PM, Cerem Cem ASLAN
> wrote:
> > What I want to achieve is to add the problematic disk as
> > raid1 and see how/when it fails and how BTRFS recovers from these
> > failures. While the party goes on, th
On Tue, Aug 28, 2018 at 5:04 PM, Cerem Cem ASLAN wrote:
> What I want to achieve is to add the problematic disk as
> raid1 and see how/when it fails and how BTRFS recovers from these
> failures. While the party goes on, the main system shouldn't be
> interrupted since this is a production syste
What I want to achieve is to add the problematic disk as
raid1 and see how/when it fails and how BTRFS recovers from these
failures. While the party goes on, the main system shouldn't be
interrupted since this is a production system. For example, I would
never expect to end up with such a r
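The experiment described above amounts to `btrfs device add` plus a convert balance, with a scrub to exercise the recovery path. The device and mount-point names below are placeholders, and the commands are echoed as a dry run because they rewrite every chunk on a real filesystem.

```shell
# Sketch of the raid1 experiment; /dev/sdX and /mnt/pool are placeholders.
# Echoed as a dry run -- remove the echoes to actually run them.
BAD=/dev/sdX      # the questionable disk
MNT=/mnt/pool
echo "btrfs device add $BAD $MNT"
echo "btrfs balance start -dconvert=raid1 -mconvert=raid1 $MNT"
echo "btrfs scrub start -Bd $MNT"   # re-verify checksums on both devices
```

A periodic scrub is what actually detects and repairs corruption on the bad mirror, so it is the interesting command to watch during the "party".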
On Tue, Aug 28, 2018 at 12:50 PM, Cerem Cem ASLAN wrote:
> I've successfully moved everything to another disk. (The only hard
> part was configuring the kernel parameters, as my root partition was
> on LVM, which is on a LUKS partition. Here are the notes, if anyone
> needs them:
> https://github.com/cer
I've successfully moved everything to another disk. (The only hard
part was configuring the kernel parameters, as my root partition was
on LVM, which is on a LUKS partition. Here are the notes, if anyone
needs them:
https://github.com/ceremcem/smith-sync/blob/master/create-bootable-backup.md)
Now I'm see
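For context on the "hard part", the kernel command line for root on LVM-on-LUKS looks roughly like the fragment below. This is a sketch only: `/dev/sdX2` and the `master` mapping name are assumptions for illustration (though `/dev/mapper/master-root` matches the device name that appears later in this thread), and the `cryptdevice=` parameter is the GRUB/Arch-style form; Debian derivatives typically configure the mapping via `/etc/crypttab` and the initramfs instead.

```shell
# /etc/default/grub -- sketch only; /dev/sdX2 and the "master" mapping
# name are assumptions for illustration.
GRUB_CMDLINE_LINUX="cryptdevice=/dev/sdX2:master root=/dev/mapper/master-root"
# then run update-grub, and make sure cryptsetup and lvm2 support are
# included in the initramfs
```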
On Mon, Aug 27, 2018 at 6:49 PM, Cerem Cem ASLAN wrote:
> Thanks for your guidance, I'll get the device replaced first thing in
> the morning.
>
> Here are the balance results, which I think turned out not too bad:
>
> sudo btrfs balance start /mnt/peynir/
> WARNING:
>
> Full balance without filte
Thanks for your guidance, I'll get the device replaced first thing in
the morning.
Here are the balance results, which I think turned out not too bad:
sudo btrfs balance start /mnt/peynir/
WARNING:
Full balance without filters requested. This operation is very
intense and takes potential
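The warning above is why a full, unfiltered balance is rarely what you want: usage filters limit the work to chunks below a fill threshold, which is usually all that is needed to reclaim space. A dry-run sketch, using the mount point from the thread:

```shell
# Filtered balance sketch: only rewrite data/metadata chunks that are
# at most 50% full. Echoed as a dry run; remove the echo to run it.
MNT=/mnt/peynir
echo "btrfs balance start -dusage=50 -musage=50 $MNT"
```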
On Mon, Aug 27, 2018 at 6:38 PM, Chris Murphy wrote:
>> Metadata,single: Size:8.00MiB, Used:0.00B
>>/dev/mapper/master-root 8.00MiB
>>
>> Metadata,DUP: Size:2.00GiB, Used:562.08MiB
>>/dev/mapper/master-root 4.00GiB
>>
>> System,single: Size:4.00MiB, Used:0.00B
>>/dev/m
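The usage output above shows empty `single` metadata and system chunks left over alongside the `DUP` ones, a common remnant of how the filesystem was originally created. A convert balance with the `soft` filter cleans them up while skipping chunks already in the target profile (dry-run sketch):

```shell
# Convert the stray single chunks to DUP; "soft" skips chunks that are
# already DUP, and -f is required when converting the system profile.
# Echoed as a dry run; remove the echo to run it.
MNT=/mnt/peynir
echo "btrfs balance start -mconvert=dup,soft -sconvert=dup,soft -f $MNT"
```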
On Mon, Aug 27, 2018 at 6:05 PM, Cerem Cem ASLAN wrote:
> Note that I received this reply directly, not via the mailing list.
> I'm not sure whether this was intended.
I intended to do Reply to All but somehow this doesn't always work out
between the user and Gmail, I'm just gonna assume gmail is being
Hi,
I'm getting DRDY ERR messages, which cause a system crash on the server:
# tail -n 40 /var/log/kern.log.1
Aug 24 21:04:55 aea3 kernel: [ 939.228059] lxc-bridge: port
5(vethI7JDHN) entered disabled state
Aug 24 21:04:55 aea3 kernel: [ 939.300602] eth0: renamed from vethQ5Y2OF
Aug 24 21:04:55 a
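When DRDY ERR appears, the surrounding `ata` lines (sector addresses, link resets) indicate whether it is a media problem or a cabling/link problem, so filtering them out of the log is the useful first step. A runnable sketch, with a short hypothetical sample standing in for the real `/var/log/kern.log`:

```shell
# Filter ATA error lines out of a kernel log. The two-line sample below
# is hypothetical and stands in for the real /var/log/kern.log; on the
# server you would grep the real file (or `journalctl -k`) the same way.
cat <<'EOF' > /tmp/kern.sample
Aug 24 21:04:55 aea3 kernel: [  939.100000] ata1.00: status: { DRDY ERR }
Aug 24 21:04:55 aea3 kernel: [  939.300602] eth0: renamed from vethQ5Y2OF
EOF
grep -E 'ata[0-9]|DRDY' /tmp/kern.sample
```

Only the first sample line matches; unrelated traffic such as the veth rename is filtered out.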
13 matches