Satoru Takeuchi wrote on 2016/04/22 11:21 +0900:
On 2016/04/21 20:58, Qu Wenruo wrote:
On 04/21/2016 03:45 PM, Satoru Takeuchi wrote:
On 2016/04/21 15:23, Satoru Takeuchi wrote:
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
Am 18.04.2016 um 09:22 schrieb Qu Wenruo:
BTW, it would be
Hi Mark,
Thanks for your contribution to btrfs-filesystem-du command.
However, there seems to be some strange behavior related to reflink (and
further in-band dedupe).
(And the root cause lies quite deep in the kernel backref-resolving code.)
["Exclusive" value not really exclusive]
When a
On 2016/04/21 20:58, Qu Wenruo wrote:
On 04/21/2016 03:45 PM, Satoru Takeuchi wrote:
On 2016/04/21 15:23, Satoru Takeuchi wrote:
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
Am 18.04.2016 um 09:22 schrieb Qu Wenruo:
BTW, it would be better to post the dmesg for better debug.
So here
Jussi Kansanen posted on Thu, 21 Apr 2016 18:09:31 +0300 as excerpted:
> The replace operation is super slow (no other load) with avg. 3x20MB/s
> (old disks) reads and 1.4MB/s write (new disk) with CFQ scheduler. Using
> deadline sched. the performance is better with avg. 3x40MB/s reads and
>
Mark Fasheh wrote on 2016/04/21 16:53 -0700:
Thank you for the review, comments are below.
On Wed, Apr 20, 2016 at 09:48:54AM +0900, Satoru Takeuchi wrote:
On 2016/04/20 7:25, Mark Fasheh wrote:
+# Force a small leaf size to make it easier to blow out our root
+# subvolume tree
This has been broken since Linux v4.1. We may have worked out a solution on
the btrfs list but in the meantime sending a test to expose the issue seems
like a good idea.
Changes from v1 to v2:
- cleanups
- added 122.out
Signed-off-by: Mark Fasheh
---
tests/btrfs/122 | 88
Thank you for the review, comments are below.
On Wed, Apr 20, 2016 at 09:48:54AM +0900, Satoru Takeuchi wrote:
> On 2016/04/20 7:25, Mark Fasheh wrote:
> >+# Force a small leaf size to make it easier to blow out our root
> >+# subvolume tree
> >+_scratch_mkfs "--nodesize 16384"
>
> nodesize
On Thu, Apr 21, 2016 at 6:53 AM, Martin Svec wrote:
> Hello,
>
> we use btrfs subvolumes for rsync-based backups. During backups btrfs often
> fails with "No space left" error and goes to readonly mode (dmesg output is
> below) while there's still plenty of unallocated
>we use btrfs subvolumes for rsync-based backups. During backups btrfs often
>fails with "No space left" error and goes to readonly mode (dmesg output is
>below) while there's still plenty of unallocated space
I have the same use case and the same issue with no real solution that
I've found.
Am 21.04.2016 um 07:43 schrieb Qu Wenruo:
> There are already unmerged patches which will partly do the mdadm level
> behavior, like automatically change to degraded mode without making the fs RO.
>
> The original patchset:
> http://comments.gmane.org/gmane.comp.file-systems.btrfs/48335
The
Am 21.04.2016 um 13:28 schrieb Henk Slager:
>> Can anyone explain this behavior?
>
> All 4 drives (WD20, WD75, WD50, SP2504C) get a disconnect twice in
> this test. What is on WD20 is unclear to me, but the raid1 array is
> {WD75, WD50, SP2504C}
> So the test as described by Matthias is not what
On 04/21/2016 04:02 AM, Austin S. Hemmelgarn wrote:
> On 2016-04-20 16:23, Konstantin Svist wrote:
>> Pretty much all commands print out the usage message when no device is
>> specified:
>>
>> [root@host ~]# btrfs scrub start
>> btrfs scrub start: too few arguments
>> usage: btrfs scrub start
Hello,
I have a 4x 2TB HDD raid5 array and one of the disks started going bad
(according to SMART; no read/write errors seen by btrfs). After replacing the
disk with a new one I ran "btrfs replace", which resulted in a kernel crash at
about 0.5% done:
BTRFS info (device dm-10): dev_replace from
On Thu, 2016-03-31 at 23:43 +0100, Duncan wrote:
> Chris Murray posted on Thu, 31 Mar 2016 21:49:29 +0100 as excerpted:
>
> > I'm using Proxmox, based on Debian. Kernel version 4.2.8-1-pve.
> Btrfs
> > v3.17.
>
> The problem itself is beyond my level, but aiming for the obvious
> low-hanging
Hello,
we use btrfs subvolumes for rsync-based backups. During backups btrfs often
fails with "No space left" error and goes to readonly mode (dmesg output is
below) while there's still plenty of unallocated space:
$ btrfs fi df /backup
Data, single: total=15.75TiB, used=15.72TiB
System, DUP:
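The pattern reported here (ENOSPC despite unallocated device space) is visible in the `btrfs fi df` numbers above: the allocated data chunks are nearly full. As a rough illustration, this sketch parses the quoted Data line and computes the remaining slack inside already-allocated chunks; the balance command in the comment is the commonly suggested mitigation, not something taken from this thread:

```shell
# Values copied from the "btrfs fi df /backup" report quoted above.
line="Data, single: total=15.75TiB, used=15.72TiB"
total=$(echo "$line" | sed 's/.*total=\([0-9.]*\)TiB.*/\1/')
used=$(echo "$line" | sed 's/.*used=\([0-9.]*\)TiB.*/\1/')

# Only a tiny amount of slack is left inside the allocated data chunks,
# so new writes force fresh chunk allocations.
awk -v t="$total" -v u="$used" 'BEGIN {printf "slack: %.2f TiB\n", t - u}'

# A commonly suggested mitigation is to compact nearly-empty chunks so
# their space returns to the unallocated pool, e.g.:
#   btrfs balance start -dusage=10 /backup
```

The slack here comes out to about 0.03 TiB, which is why writes stall as soon as chunk allocation fails for any reason.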
On 04/21/2016 03:45 PM, Satoru Takeuchi wrote:
On 2016/04/21 15:23, Satoru Takeuchi wrote:
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
Am 18.04.2016 um 09:22 schrieb Qu Wenruo:
BTW, it would be better to post the dmesg for better debug.
So here we go. I did the same test again. Here is
On Thu, Apr 21, 2016 at 8:23 AM, Satoru Takeuchi
wrote:
> On 2016/04/20 14:17, Matthias Bodenbinder wrote:
>>
>> Am 18.04.2016 um 09:22 schrieb Qu Wenruo:
>>>
>>> BTW, it would be better to post the dmesg for better debug.
>>
>>
>> So here we go. I did the same test
On 2016-04-21 02:23, Satoru Takeuchi wrote:
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
Am 18.04.2016 um 09:22 schrieb Qu Wenruo:
BTW, it would be better to post the dmesg for better debug.
So here we go. I did the same test again. Here is a full log of what I
did. It seems to me like
On 2016-04-20 16:23, Konstantin Svist wrote:
Pretty much all commands print out the usage message when no device is
specified:
[root@host ~]# btrfs scrub start
btrfs scrub start: too few arguments
usage: btrfs scrub start [-BdqrRf] [-c ioprio_class -n ioprio_classdata]
|
...
However, balance
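The behavior under discussion (subcommands bailing out with a usage message when no path is given) amounts to a simple argument-count check before doing anything. A minimal sketch, with a hypothetical shell function `scrub_start` standing in for the real check, which lives in btrfs-progs' C code:

```shell
# Hypothetical sketch of the argument check these subcommands perform;
# the usage text mirrors the output quoted above.
scrub_start() {
    if [ "$#" -lt 1 ]; then
        echo "btrfs scrub start: too few arguments" >&2
        echo "usage: btrfs scrub start [-BdqrRf] [-c ioprio_class -n ioprio_classdata] <path>|<device>" >&2
        return 1
    fi
    echo "starting scrub on $1"
}

scrub_start || true    # no arguments: prints the usage message, returns 1
scrub_start /mnt/data  # prints "starting scrub on /mnt/data"
```

The complaint in the thread is that not every subcommand performs this check consistently before falling through to its default behavior.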
On 2016/04/21 15:23, Satoru Takeuchi wrote:
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
Am 18.04.2016 um 09:22 schrieb Qu Wenruo:
BTW, it would be better to post the dmesg for better debug.
So here we go. I did the same test again. Here is a full log of what I did. It
seems to me
lenovomi wrote on 2016/04/21 08:53 +0200:
Hello Liu,
please find both files stored here:
https://drive.google.com/folderview?id=0B6RZ_9vVuTEcMDV6eGNmRlZ0ZjQ=sharing
Thanks
On Thu, Apr 21, 2016 at 7:27 AM, Liu Bo wrote:
On Wed, Apr 20, 2016 at 10:09:07PM +0200,
On 04/21/2016 01:15 PM, Matthias Bodenbinder wrote:
Am 20.04.2016 um 15:32 schrieb Anand Jain:
1. mount the raid1 (2 disc with different size)
2. unplug the biggest drive (hotplug)
Btrfs won't know that you have unplugged a disk.
Though it experiences IO failures, it won't close
I run:
od -x /tmp/corrupt-dev2.txt > a
od -x /tmp/corrupt-dev1.txt > b
cmp a b;
diff a b;
Both files look identical; does that mean both metadata copies
got corrupted?
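The od-then-diff round trip above can be shortened: `cmp` already compares the two files byte-for-byte and its exit status says whether they match. A minimal sketch on throwaway files (the file names here are illustrative, not the actual dumps from this thread):

```shell
# cmp -s is silent and exits 0 only when the files are byte-identical,
# so no od/diff round trip is needed.
printf 'same bytes' > /tmp/dump-a
printf 'same bytes' > /tmp/dump-b

if cmp -s /tmp/dump-a /tmp/dump-b; then
    echo "identical"
else
    echo "different"
fi
```

If the two metadata dumps really are identical, that is consistent with both copies carrying the same corruption, which is what the question above is getting at.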
thanks
On Thu, Apr 21, 2016 at 8:53 AM, lenovomi wrote:
> Hello Liu,
>
> please find both files
Hello Liu,
please find both files stored here:
https://drive.google.com/folderview?id=0B6RZ_9vVuTEcMDV6eGNmRlZ0ZjQ=sharing
Thanks
On Thu, Apr 21, 2016 at 7:27 AM, Liu Bo wrote:
> On Wed, Apr 20, 2016 at 10:09:07PM +0200, lenovomi wrote:
>> Hi Chris,
>>
>> please
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
Am 18.04.2016 um 09:22 schrieb Qu Wenruo:
BTW, it would be better to post the dmesg for better debug.
So here we go. I did the same test again. Here is a full log of what I did. It
seems to me like a bug in btrfs.
Sequenz of events:
1. mount
Liu Bo wrote on 2016/04/20 23:02 -0700:
On Thu, Apr 21, 2016 at 01:43:56PM +0800, Qu Wenruo wrote:
Matthias Bodenbinder wrote on 2016/04/21 07:22 +0200:
Am 20.04.2016 um 09:25 schrieb Qu Wenruo:
Unfortunately, this is the designed behavior.
The fs is rw just because it doesn't hit any
On Thu, Apr 21, 2016 at 01:43:56PM +0800, Qu Wenruo wrote:
>
>
> Matthias Bodenbinder wrote on 2016/04/21 07:22 +0200:
> >Am 20.04.2016 um 09:25 schrieb Qu Wenruo:
> >
> >>
> >>Unfortunately, this is the designed behavior.
> >>
> >>The fs is rw just because it doesn't hit any critical problem.
>