Hi Linus,
Please pull my for-linus branch:
git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git for-linus
This was held up a little trying to track down a use-after-free in btrfs
raid5/6. It's not clear yet if this is just made easier to trigger with
this pull or if it's a new
Karl-Philipp Richter posted on Thu, 19 Feb 2015 13:17:50 +0100 as
excerpted:
According to
https://btrfs.wiki.kernel.org/index.php/
Problem_FAQ#How_do_I_report_bugs_and_issues.3F
bugs ought to be reported on bugzilla.kernel.org and on the mailing
list. Is "and" to be interpreted as a logical AND
Max Schettler posted on Thu, 19 Feb 2015 12:49:37 +0100 as excerpted:
I recently was looking for the status of hot relocation on btrfs.
There seemed to be some activity on the mailinglist around 5/2013
regarding patches that should provide the functionality.
However they have not been merged
My system comprises 2 x 3TB hard drives, each partitioned into a 4GB
swap, a 36GB /, and the rest for /home. / and /home are (were) then
assembled into btrfs raid1 arrays, with both metadata and data being
mirrored.
I then installed the wrong
Is it possible to create a subvolume and define the id?
Like btrfs subvolume create TEST id=?
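For context on the question above: as far as I know, btrfs assigns subvolume IDs itself and there is no supported way to choose one at creation time. A minimal sketch of creating a subvolume and reading back the ID the filesystem picked (the mount point /mnt is an example):

```shell
# Create a subvolume; btrfs assigns the next free ID itself.
btrfs subvolume create /mnt/TEST

# Read back the ID that was assigned.
btrfs subvolume list /mnt
# e.g. a line like: ID 258 gen 12 top level 5 path TEST

# The assigned ID can then be used where an ID is expected,
# for instance to make it the default subvolume:
btrfs subvolume set-default 258 /mnt
```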
On 19/02/15 09:29, Bob Williams wrote:
My system comprises 2 x 3TB hard drives, each partitioned into a
4GB swap, a 36GB /, and the rest for /home. / and /home are (were)
then assembled into btrfs raid1 arrays, with both metadata and data
being
Hi,
I recently was looking for the status of hot relocation on btrfs.
There seemed to be some activity on the mailinglist around 5/2013
regarding patches that should provide the functionality.
However they have not been merged yet and there hasn't been
further discussion about them (to my
Hi,
According to
https://btrfs.wiki.kernel.org/index.php/Problem_FAQ#How_do_I_report_bugs_and_issues.3F
bugs ought to be reported on bugzilla.kernel.org and on the mailing
list. Is "and" to be interpreted as a logical AND or rather an XOR? If the
former is the case, is it sufficient to set the mailing
On 19/02/15 10:06, Bob Williams wrote:
On 19/02/15 09:29, Bob Williams wrote:
My system comprises 2 x 3TB hard drives,
[...]
Whoops. I really meant to say:
# btrfs device add -f /dev/sdg2 /
# btrfs balance start -dconvert=raid1
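As a hedged aside, a fuller version of that recovery sequence usually also converts the metadata chunks back to raid1; a sketch, using the device and mount point from the thread as examples:

```shell
# Add the replacement device to the mounted filesystem.
btrfs device add -f /dev/sdg2 /

# Rebalance, converting both data and metadata chunks to raid1.
btrfs balance start -dconvert=raid1 -mconvert=raid1 /

# A balance can run for a long time; progress can be checked
# from another shell with:
btrfs balance status /
```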
On Mon, Feb 16, 2015 at 10:02 PM, Dave Chinner da...@fromorbit.com wrote:
On Mon, Feb 16, 2015 at 09:45:22AM +, Filipe David Manana wrote:
On Mon, Feb 16, 2015 at 12:33 AM, Dave Chinner da...@fromorbit.com wrote:
On Fri, Feb 13, 2015 at 12:47:54PM +, Filipe Manana wrote:
This test is
On Thu, Jan 29, 2015 at 12:46 AM, Xing Gu gux.f...@cn.fujitsu.com wrote:
Regression test for a btrfs issue of resizing 'thread_pool' when
remounting the fs.
This regression was introduced by the following linux kernel commit:
btrfs: Added btrfs_workqueue_struct implemented ordered
We can get into inconsistency between inodes and directory entries
after fsyncing a directory. The issue is that while a directory gets
the new dentries persisted in the fsync log and replayed at mount time,
the link count of the inode that directory entries point to doesn't
get updated, staying
This test is motivated by an fsync issue discovered in btrfs.
The issue was that after adding a new hard link to an existing file
(one that was created in a past transaction) and fsync'ing the parent
directory of the new hard link, after the fsync log replay the file's
inode link count did not get
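The scenario described above can be sketched roughly as follows; the paths and the simulated crash point are illustrative assumptions, not the actual fstests script:

```shell
# Assumes a scratch btrfs filesystem mounted at /mnt.
touch /mnt/foo           # file created in one transaction
sync                     # commit that transaction

ln /mnt/foo /mnt/bar     # new hard link added in a later transaction
xfs_io -c fsync /mnt     # fsync only the parent directory

# If power is lost at this point, log replay on the next mount
# restores the new dentry for bar, but with the bug described
# above foo's inode link count is not bumped to match, leaving
# the directory entries and the inode inconsistent.
```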
Chris Murphy posted on Thu, 19 Feb 2015 10:51:57 -0700 as excerpted:
Chris is looking at a per file autodefrag setting,
Just to be clear, that's Chris _Mason_, not a third-person reference by
Chris _Murphy_ to himself... =:^)
People who know that Chris _Mason_ is btrfs lead dev won't have
Since commit 8e5cfb55d3f (Btrfs: Make raid_map array be inlined in
btrfs_bio structure), the raid map array is allocated along with the
btrfs bio in alloc_btrfs_bio. The calculation used to decide how much
we need to allocate was using the wrong parameter passed into the
allocation function.
The
Systemd 219 now sets the special FS_NOCOW file flag for its journal
files[1]. This unfortunately breaks the ability to repair the journal on
RAID 1/5/6 btrfs volumes, should a bad sector happen to appear there. Is
this something that can be configured for systemd? Is btrfs going to
someday fix
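For reference, the flag in question is the one chattr exposes as the 'C' (No_COW) attribute; a sketch of inspecting and setting it, assuming systemd's usual journal location under /var/log/journal:

```shell
# Show the attributes; a 'C' in the output means NOCOW is set.
lsattr /var/log/journal/*/system.journal

# NOCOW only takes full effect on new or empty files, so it is
# normally set on the directory, and newly created journal
# files then inherit it.
chattr +C /var/log/journal
```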
On Thu, Feb 19, 2015 at 7:30 AM, Konstantinos Skarlatos
k.skarla...@gmail.com wrote:
Systemd 219 now sets the special FS_NOCOW file flag for its journal
files[1]. This unfortunately breaks the ability to repair the journal on
RAID 1/5/6 btrfs volumes, should a bad sector happen to appear there.
Konstantinos Skarlatos posted on Thu, 19 Feb 2015 16:30:37 +0200 as
excerpted:
Systemd 219 now sets the special FS_NOCOW file flag for its journal
files[1]. This unfortunately breaks the ability to repair the journal on
RAID 1/5/6 btrfs volumes, should a bad sector happen to appear there. Is
Duncan 1i5t5.dun...@cox.net schrieb:
Max Schettler posted on Thu, 19 Feb 2015 12:49:37 +0100 as excerpted:
I recently was looking for the status of hot relocation on btrfs.
There seemed to be some activity on the mailinglist around 5/2013
regarding patches that should provide the