Larkin Lowrey posted on Sun, 26 Oct 2014 12:20:45 -0500 as excerpted:
One unusual property of my setup is I have my fs on top of bcache. More
specifically, the stack is md raid6 - bcache - lvm - btrfs. When the
fs mounts, it has the 'ssd' mount option because bcache sets
Zygo Blaxell posted on Mon, 27 Oct 2014 00:39:25 -0400 as excerpted:
One thing that may be significant is _when_ those 3 hanging filesystems
are hanging: when using rsync to update local files. These machines
are using the traditional rsync copy-then-rename method rather than
--inplace
Marc Joliet posted on Mon, 27 Oct 2014 02:24:15 +0100 as excerpted:
On Sat, 25 Oct 2014 14:35:33 -0600, Chris Murphy
li...@colorremedies.com wrote:
On Oct 25, 2014, at 2:33 PM, Chris Murphy li...@colorremedies.com
wrote:
On Oct 25, 2014, at 6:24 AM, Marc Joliet mar...@gmx.de wrote:
On Mon, Oct 27, 2014 at 08:18:12AM +0800, Qu Wenruo wrote:
Original Message
Subject: Re: [PATCH] btrfs: Enhance btrfs chunk allocation algorithm
to reduce ENOSPC caused by unbalanced data/metadata allocation.
From: Liu Bo bo.li@oracle.com
To: Qu Wenruo
Original Message
Subject: Re: [PATCH] btrfs: Enhance btrfs chunk allocation algorithm to
reduce ENOSPC caused by unbalanced data/metadata allocation.
From: Liu Bo bo.li@oracle.com
To: Qu Wenruo quwen...@cn.fujitsu.com
Date: October 27, 2014, 16:14
On Mon, Oct 27, 2014 at
If we couldn't find our extent item, we accessed the current slot
(path->slots[0]) to check if it corresponds to an equivalent skinny
metadata item. However this slot could be beyond our last item in the
leaf (i.e. path->slots[0] >= btrfs_header_nritems(leaf)), in which case
we shouldn't process it.
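A minimal stand-alone sketch of the bounds check described above (plain C with simplified stand-in types, not the kernel's actual btrfs structures): after a search returns "not found", path->slots[0] can point one past the last item in the leaf, so it must be compared against the leaf's item count before the slot is ever dereferenced.

```c
/* Stand-in types; the real btrfs structures are far more involved. */
struct leaf { int nritems; };   /* number of items stored in the leaf   */
struct path { int slots[1]; };  /* slots[0]: candidate item slot index  */

/* Returns nonzero only when slots[0] refers to an existing item.
 * On a search miss, slots[0] may equal nritems (one past the end),
 * and such a slot must not be processed. */
static int slot_in_leaf(const struct path *p, const struct leaf *l)
{
    return p->slots[0] < l->nritems;
}
```

So for a leaf holding 3 items, slot 3 is exactly the "one past the end" case the patch guards against.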
We have a race that can lead us to miss skinny extent items in the function
btrfs_lookup_extent_info() when the skinny metadata feature is enabled.
So basically the sequence of steps is:
1) We search in the extent tree for the skinny extent, which returns 0
(not found);
2) We check the
Hello Folks,
I used to have an array of 4x4TB drives with BTRFS in raid10.
The kernel version is: 3.13-0.bpo.1-amd64
BTRFS version is: v3.14.1
When it was reaching 80% space usage, I added another 4TB drive to the array with:
btrfs device add /dev/sdf /mnt/backup
And started the balancing to the
On Mon, 27 Oct 2014 09:16:55 +, Filipe Manana wrote:
If we couldn't find our extent item, we accessed the current slot
(path->slots[0]) to check if it corresponds to an equivalent skinny
metadata item. However this slot could be beyond our last item in the
leaf (i.e. path->slots[0] >=
If we couldn't find our extent item, we accessed the current slot
(path->slots[0]) to check if it corresponds to an equivalent skinny
metadata item. However this slot could be beyond our last item in the
leaf (i.e. path->slots[0] >= btrfs_header_nritems(leaf)), in which case
we shouldn't process it.
On Mon, Oct 27, 2014 at 10:34 AM, Christian Kujau li...@nerdbynature.de wrote:
(somehow this message did not make it to the list)
Hi,
After upgrading from linux 3.17.0 to 3.18.0-rc2, I cannot mount my btrfs
partition any more. It's just one btrfs partition, no raid, no
compression, no fancy
On Mon, 27 Oct 2014 09:19:52 +, Filipe Manana wrote:
We have a race that can lead us to miss skinny extent items in the function
btrfs_lookup_extent_info() when the skinny metadata feature is enabled.
So basically the sequence of steps is:
1) We search in the extent tree for the skinny
On 2014-10-26 13:20, Larkin Lowrey wrote:
On 10/24/2014 10:28 PM, Duncan wrote:
Robert White posted on Fri, 24 Oct 2014 19:41:32 -0700 as excerpted:
On 10/24/2014 04:49 AM, Marc MERLIN wrote:
On Thu, Oct 23, 2014 at 06:04:43PM -0500, Larkin Lowrey wrote:
I have a 240GB VirtualBox vdi image
On Mon, Oct 27, 2014 at 11:08 AM, Miao Xie mi...@cn.fujitsu.com wrote:
On Mon, 27 Oct 2014 09:19:52 +, Filipe Manana wrote:
We have a race that can lead us to miss skinny extent items in the function
btrfs_lookup_extent_info() when the skinny metadata feature is enabled.
So basically the
Reproducer:
# mkfs.btrfs -f -b 20G /dev/sdb
# mount /dev/sdb /mnt/test
# fallocate -l 17G /mnt/test/largefile
# btrfs fi df /mnt/test
Data, single: total=17.49GiB, used=6.00GiB - only 6G, but actually it
should be 17G.
System, DUP: total=8.00MiB,
Hi,
I created a filesystem and mounted it with compress-force=lzo. Then I did:
# df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/loop0 100M 4.1M 96M 5% /mnt
# yes Hello World | dd of=/mnt/test iflag=fullblock bs=1M count=20
status=none
yes: standard output: Broken pipe
On Monday, 27 October 2014, 13:59:24, Swâmi Petaramesh wrote:
On Monday, 27 October 2014, 13:56:07, Marc Dietrich wrote:
oops, no compression.
Is this intended?
"Compression does not work for NOCOW files" is clearly stated in
As far as I understood, NOCOW means that modified parts of files are rewritten
in place, whereas compression causes compressed blocks of variable sizes to
be created (depending upon their compression ratio). Changing a block in a file
will most probably change its compressed size, and then you
On Mon, Oct 27, 2014 at 12:11 PM, Filipe David Manana
fdman...@gmail.com wrote:
On Mon, Oct 27, 2014 at 11:08 AM, Miao Xie mi...@cn.fujitsu.com wrote:
On Mon, 27 Oct 2014 09:19:52 +, Filipe Manana wrote:
We have a race that can lead us to miss skinny extent items in the function
Hi!
My btrfs system partition went readonly. After reboot it doesn't mount
anymore. System was openSUSE 13.1 Tumbleweed (kernel 3.17.??). Now I'm
on openSUSE 13.2-RC1 rescue (kernel 3.16.3). I dumped (dd) the whole 250
GB SSD to some USB file and tried some btrfs tools on another copy per
On Oct 26, 2014, at 7:40 PM, Qu Wenruo quwen...@cn.fujitsu.com wrote:
BTW what's the output of the 'df' command?
Jasper,
What do you get for the conventional df command when this btrfs volume is
mounted? Thanks.
Chris Murphy
On Mon, Oct 27, 2014 at 10:57:59AM +, Filipe David Manana wrote:
The only thing fancy may be the machine: PowerBook G4 (powerpc 32 bit),
running Debian/Linux (stable).
The message comes from the newly added fs/btrfs/disk-io.c:
if (sb->num_devices > (1UL << 31))
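Read literally, that condition (reconstructed here as sb->num_devices > (1UL << 31)) is a superblock sanity check: a device count read from disk is untrusted input, so an implausibly large value is rejected. A hedged stand-alone version, with a hypothetical helper name and none of the kernel's surrounding context:

```c
/* Hypothetical stand-alone version of the quoted sanity check:
 * num_devices comes from the on-disk superblock and is untrusted,
 * so anything above 2^31 is treated as corruption. The bound is
 * taken from the quoted line; this is not the kernel code. */
static int num_devices_plausible(unsigned long long num_devices)
{
    return num_devices <= (1ULL << 31);
}
```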
Hi guys!
Thanks for your input on the issue so far.
To my knowledge, raid1 in btrfs means 2 copies of each piece of data,
independent of the number of disks used.
So 4 x 2.73 TB would result in a total storage of roughly 5.5 TB, right?
Shouldn't this be more than enough?
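For what it's worth, that estimate can be checked mechanically. A toy calculation, assuming equal-size devices and btrfs raid1's fixed two copies of every block:

```c
/* btrfs raid1 keeps exactly two copies of every block regardless of
 * how many devices are in the filesystem, so usable capacity on
 * equal-size devices is roughly (device_count * device_size) / 2. */
static double raid1_usable_tb(int device_count, double device_size_tb)
{
    return device_count * device_size_tb / 2.0;
}
```

With 4 devices of 2.73 TB each this gives about 5.46 TB usable, matching the rough 5.5 TB figure above.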
btw, here is the
On Oct 27, 2014, at 9:56 AM, Jasper Verberk jverb...@hotmail.com wrote:
These are the results to a normal df:
http://paste.debian.net/128932/
The mountpoint is /data.
OK so this is with the new computation in kernel 3.17 (which I think contains a
bug by counting free space twice); so
On Oct 27, 2014, at 3:26 AM, Stephan Alz stephan...@gmx.com wrote:
My question is where to go from here? What I'm going to do right now is copy
the most important data to another separate XFS drive.
What I'm planning to do is:
1. Upgrade the kernel
2. Upgrade BTRFS
3. Continue the
On Mon, 27 Oct 2014 at 16:35, David Sterba wrote:
Yeah sorry, I sent the v2 too late, here's an incremental that applies
on top of current 3.18-rc
https://patchwork.kernel.org/patch/5160651/
Yup, that fixes it. Thank you! If it's needed:
Tested-by: Christian Kujau li...@nerdbynature.de
On Mon, Oct 27, 2014 at 11:21:13AM -0700, Christian Kujau wrote:
On Mon, 27 Oct 2014 at 16:35, David Sterba wrote:
Yeah sorry, I sent the v2 too late, here's an incremental that applies
on top of current 3.18-rc
https://patchwork.kernel.org/patch/5160651/
Yup, that fixes it. Thank
Revisit of a previous issue. Set up a single 640GB drive with BTRFS and
compression. This was not a system drive, just a place to put random
junk.
Made a RAID1 of just the metadata with another drive. Was in
that state for less than 12 hours-ish, removed the second drive and
now cannot get to any
On Tue, Oct 21, 2014 at 6:12 AM, Filipe Manana fdman...@suse.com
wrote:
If right after starting the snapshot creation ioctl we perform a
write against a
file followed by a truncate, with both operations increasing the
file's size, we
can get a snapshot tree that reflects a state of the source
On 27.10.14 at 14:23, Ansgar Hockmann-Stolle wrote:
Hi!
My btrfs system partition went readonly. After reboot it doesn't mount
anymore. System was openSUSE 13.1 Tumbleweed (kernel 3.17.??). Now I'm
on openSUSE 13.2-RC1 rescue (kernel 3.16.3). I dumped (dd) the whole 250
GB SSD to some USB file
Ansgar Hockmann-Stolle posted on Mon, 27 Oct 2014 14:23:19 +0100 as
excerpted:
Hi!
My btrfs system partition went readonly. After reboot it doesn't mount
anymore. System was openSUSE 13.1 Tumbleweed (kernel 3.17.??). Now I'm
on openSUSE 13.2-RC1 rescue (kernel 3.16.3). I dumped (dd) the
On 10/26/2014 12:59 AM, Christian Tschabuschnig wrote:
Hello,
currently I am trying to recover a btrfs filesystem which had a few subvolumes.
When running
# btrfs restore -sx /dev/xxx .
one subvolume gets restored.
Important Aside: The one time I had to resort to btrfs restore I didn't
get
Original Message
Subject: Re: btrfs unmountable: read block failed check_tree_block;
Couldn't read tree root
From: Qu Wenruo quwen...@cn.fujitsu.com
To: Ansgar Hockmann-Stolle ansgar.hockmann-sto...@uni-osnabrueck.de,
linux-btrfs@vger.kernel.org
Date: October 28, 2014, 09:05
On Mon, 27 Oct 2014 13:44:22 +, Filipe David Manana wrote:
On Mon, Oct 27, 2014 at 12:11 PM, Filipe David Manana
fdman...@gmail.com wrote:
On Mon, Oct 27, 2014 at 11:08 AM, Miao Xie mi...@cn.fujitsu.com wrote:
On Mon, 27 Oct 2014 09:19:52 +, Filipe Manana wrote:
We have a race that
On Thu, 2014-10-23 at 15:23 +0200, Petr Janecek wrote:
Hello Gui,
Oh, it seems that there are btrfs with missing devs that are bringing
troubles to the @open_ctree_... function.
what do you mean by missing devs? I have no degraded fs.
Ah, sorry, I'm too focused on the problem that
On Thu, 2014-10-23 at 21:36 +0800, Anand Jain wrote:
there is no point in re-creating so much of the btrfs kernel's logic in user
space. It's just unnecessary when the kernel is already doing it; use
some interface to get info from the kernel after the device is registered
(not necessarily mounted).