OK so we know for raid5 data block groups there can be RMW. And
because of that, any interruption results in the write hole. On Btrfs,
though, the write hole is on disk only. If there's a lost strip
(failed drive or UNC read), reconstruction from wrong parity results
in a checksum error and EIO. Th
> That's the proper answer. In practice... all hope isn't yet lost.
I understood the proper answer.
I'll take care of it in the future.
Is there some step/method I can take in this situation?
Thank you
Hiroshi Honda
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On 2016-10-14 06:11, Hiroshi Honda wrote:
That's the proper answer. In practice... all hope isn't yet lost.
I understood the proper answer.
I'll take care of it in the future.
Is there some step/method I can take in this situation?
You should probably look at `btrfs restore`. I'm not sure
On 2016-10-13 17:21, Alberto Bursi wrote:
Hi, I'm using OpenSUSE on a btrfs volume spanning 2 disks (set as raid1
for both metadata and data), no separate /home partition.
The distro loves to create dozens of subvolumes for various things and
makes snapshots, see:
alby@openSUSE-xeon:~> sudo btrfs
On 10/14/2016 08:28 AM, Stefan Priebe - Profihost AG wrote:
Hello list,
while running the same workload on two machines (single xeon and a dual
xeon) both with 64GB RAM.
I need to run echo 3 >/proc/sys/vm/drop_caches every 15-30 minutes to
keep the speed as good as on the non-NUMA system. I'm n
On 2016-10-14 02:28, Stefan Priebe - Profihost AG wrote:
Hello list,
while running the same workload on two machines (single xeon and a dual
xeon) both with 64GB RAM.
I need to run echo 3 >/proc/sys/vm/drop_caches every 15-30 minutes to
keep the speed as good as on the non-NUMA system. I'm not
Am 06.10.2016 um 04:51 schrieb Wang Xiaoguang:
> This issue was revealed by modifying BTRFS_MAX_EXTENT_SIZE(128MB) to 64KB,
> When modifying BTRFS_MAX_EXTENT_SIZE(128MB) to 64KB, fsstress test often
> gets these warnings from btrfs_destroy_inode():
> WARN_ON(BTRFS_I(inode)->outstanding_exten
Am 06.10.2016 um 04:51 schrieb Wang Xiaoguang:
> When testing btrfs compression, we sometimes get an ENOSPC error even though
> the fs still has much free space; xfstests generic/171, generic/172, generic/173,
> generic/174, generic/175 can reveal this bug in my test environment when
> compression is enabled
Dear Julian,
Am 14.10.2016 um 14:26 schrieb Julian Taylor:
> On 10/14/2016 08:28 AM, Stefan Priebe - Profihost AG wrote:
>> Hello list,
>>
>> while running the same workload on two machines (single xeon and a dual
>> xeon) both with 64GB RAM.
>>
>> I need to run echo 3 >/proc/sys/vm/drop_caches ev
On 10/06/16 04:51, Wang Xiaoguang wrote:
> When testing btrfs compression, we sometimes get an ENOSPC error even though
> the fs still has much free space; xfstests generic/171, generic/172, generic/173,
> generic/174, generic/175 can reveal this bug in my test environment when
> compression is enabled.
>
>
Hi,
Am 14.10.2016 um 15:19 schrieb Stefan Priebe - Profihost AG:
> Dear Julian,
>
> Am 14.10.2016 um 14:26 schrieb Julian Taylor:
>> On 10/14/2016 08:28 AM, Stefan Priebe - Profihost AG wrote:
>>> Hello list,
>>>
>>> while running the same workload on two machines (single xeon and a dual
>>> xeon)
On Fri, Oct 14, 2016 at 08:50:31AM +0800, Qu Wenruo wrote:
> > I can pull the branches from you, either to devel or integration. Please
> > base them on the last release point (ie. master) if there are no other
> > dependencies. Otherwise, I'll publish a branch that contains the desired
> > patches
On Thu, Oct 13, 2016 at 03:25:57PM +0800, Qu Wenruo wrote:
> At 10/13/2016 01:26 AM, David Sterba wrote:
> > On Wed, Oct 12, 2016 at 10:01:27PM +0800, Qu Wenruo wrote:
> >>
> >>
> >> On 10/12/2016 09:58 PM, Abhay Sachan wrote:
> >>> Hi,
> >>> I tried building latest btrfs progs on CentOS 6, "./conf
On 10/14/16 10:49 AM, David Sterba wrote:
> On Thu, Oct 13, 2016 at 03:25:57PM +0800, Qu Wenruo wrote:
>> At 10/13/2016 01:26 AM, David Sterba wrote:
>>> On Wed, Oct 12, 2016 at 10:01:27PM +0800, Qu Wenruo wrote:
On 10/12/2016 09:58 PM, Abhay Sachan wrote:
> Hi,
> I tried bui
As it stands today, btrfs-progs won't build if the installed
kernel headers don't define FIEMAP_EXTENT_SHARED.
But that doesn't tell us anything about the capabilities of
the /running/ kernel - so just define it locally if not found
in the fiemap header, and allow the build to proceed.
If run aga
The FIEMAP_EXTENT_SHARED fiemap flag was introduced in 2.6.33. If the
headers do not provide the definition, the build will fail. The support
of the fiemap sharing depends on the running kernel. There are still
systems with 2.6.32 kernel headers but running newer versions.
To support such environm
On Fri, Oct 14, 2016 at 12:00:56PM -0500, Eric Sandeen wrote:
> On 10/14/16 10:49 AM, David Sterba wrote:
> > On Thu, Oct 13, 2016 at 03:25:57PM +0800, Qu Wenruo wrote:
> >> At 10/13/2016 01:26 AM, David Sterba wrote:
> >>> On Wed, Oct 12, 2016 at 10:01:27PM +0800, Qu Wenruo wrote:
On Wed, Oct 12, 2016 at 07:28:20PM +0530, Abhay Sachan wrote:
> Hi,
> I tried building latest btrfs progs on CentOS 6, "./configure" failed
> with the error:
>
> checking for FIEMAP_EXTENT_SHARED defined in linux/fiemap.h... no
> configure: error: no definition of FIEMAP_EXTENT_SHARED found
>
> A
On Mon, Oct 10, 2016 at 10:56:12AM +1100, Dave Chinner wrote:
> On Fri, Oct 07, 2016 at 06:05:51PM +0200, David Sterba wrote:
> > On Fri, Oct 07, 2016 at 08:18:38PM +1100, Dave Chinner wrote:
> > > On Thu, Oct 06, 2016 at 04:12:56PM +0800, Qu Wenruo wrote:
> > > > So I'm wondering if I can just upl
On Fri, Oct 14, 2016 at 01:16:05AM -0600, Chris Murphy wrote:
> OK so we know for raid5 data block groups there can be RMW. And
> because of that, any interruption results in the write hole. On Btrfs,
> though, the write hole is on disk only. If there's a lost strip
> (failed drive or UNC read), re
Hi Linus,
My for-linus-4.9 branch:
git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git
for-linus-4.9
Has some fixes from Omar and Dave Sterba for our new free space tree.
This isn't heavily used yet, but as we move toward making it the new
default we wanted to nail down an endia
Zygo Blaxell posted on Fri, 14 Oct 2016 15:55:45 -0400 as excerpted:
> The current btrfs raid5 implementation is a thin layer of bugs on top of
> code that is still missing critical pieces. There is no mechanism to
> prevent RMW-related failures combined with zero tolerance for
> RMW-related fail
On Fri, Oct 14, 2016 at 1:55 PM, Zygo Blaxell
wrote:
>
>> And how common is RMW for metadata operations?
>
> RMW in metadata is the norm. It happens on nearly all commits--the only
> exception seems to be when both ends of a commit write happen to land
> on stripe boundaries accidentally, which
This may be relevant and is pretty terrible.
http://www.spinics.net/lists/linux-btrfs/msg59741.html
Chris Murphy
On 10/14/2016 01:38 PM, Austin S. Hemmelgarn wrote:
> On 2016-10-13 17:21, Alberto Bursi wrote:
>> Hi, I'm using OpenSUSE on a btrfs volume spanning 2 disks (set as raid1
>> for both metadata and data), no separate /home partition.
>> -
>> I'd like to be able to clone verbatim the whole volum
It should be that -e can accept a listing of all the subvolumes you want to
send at once. And possibly an -r flag, if it existed, could
automatically populate -e. But the last time I tested -e I just got
errors.
https://bugzilla.kernel.org/show_bug.cgi?id=111221
--
Chris Murphy
On Fri, Oct 14, 2016 at 3:38 PM, Chris Murphy wrote:
> On Fri, Oct 14, 2016 at 1:55 PM, Zygo Blaxell
> wrote:
>
>>
>>> And how common is RMW for metadata operations?
>>
>> RMW in metadata is the norm. It happens on nearly all commits--the only
>> exception seems to be when both ends of a commit
On 10/15/2016 12:17 AM, Chris Murphy wrote:
> It should be that -e can accept a listing of all the subvolumes you want to
> send at once. And possibly an -r flag, if it existed, could
> automatically populate -e. But the last time I tested -e I just got
> errors.
>
> https://bugzilla.kernel.org/show_b
On Fri, Oct 14, 2016 at 04:30:42PM -0600, Chris Murphy wrote:
> Also, is there RMW with raid0, or even raid10?
No. Mirroring is writing the same data in two isolated places. Striping
is writing data at different isolated places. No matter which sectors
you write through these layers, it does n