On Sat, Mar 22, 2014 at 02:04:56PM -0700, Marc MERLIN wrote:
After deleting a huge directory tree in my /home subvolume, syncing
snapshots now fails with:
ERROR: rmdir o1952777-157-0 failed. No such file or directory
Error line 156 with status 1
DIE: Code dump:
153 if [[ -n $init ]]; then
On Mon, Feb 24, 2014 at 10:36:52PM -0800, Marc MERLIN wrote:
I got this during a btrfs send:
BTRFS error (device dm-2): did not find backref in send_root. inode=22672,
offset=524288, disk_byte=1490517954560 found extent=1490517954560
I'll try a scrub when I've finished my backup, but is there
On Mon, May 05, 2014 at 11:10:54PM -0700, David Brown wrote:
On Mon, Feb 24, 2014 at 10:36:52PM -0800, Marc MERLIN wrote:
I got this during a btrfs send:
BTRFS error (device dm-2): did not find backref in send_root. inode=22672,
offset=524288, disk_byte=1490517954560 found extent=1490517954560
On Sat, May 10, 2014 at 04:57:18PM -0700, Marc MERLIN wrote:
On Fri, May 09, 2014 at 11:39:13AM -0700, Anacron wrote:
/etc/cron.daily/btrfs-scrub:
scrub device /dev/mapper/cryptroot (id 1) done
scrub started at Fri May 9 06:09:14 2014 and finished after 19153 seconds
total
On Tue, May 13, 2014 at 08:44:44PM -0300, Bernardo Donadio wrote:
Hi!
I'm trying to do a send/receive of a snapshot between two disks on
Fedora 20 with Linux 3.15-rc5 (and also tried with 3.14 and 3.11) and
SELinux disabled, and then I'm receiving the following error:
[root@darwin /]# btrfs
On Wed, Nov 23, 2011 at 07:35:34PM -0500, Jeff Mahoney wrote:
As part of the effort to eliminate BUG_ON as an error handling
technique, we need to determine which errors are actual logic errors,
which are on-disk corruption, and which are normal runtime errors
e.g. -ENOMEM.
Annotating these
On Wed, Nov 23, 2011 at 09:22:06PM -0500, Jeff Mahoney wrote:
On 11/23/2011 09:05 PM, David Brown wrote:
On Wed, Nov 23, 2011 at 07:35:34PM -0500, Jeff Mahoney wrote:
As part of the effort to eliminate BUG_ON as an error handling
technique, we
On Thu, Nov 24, 2011 at 09:36:55PM -0500, Jeff Mahoney wrote:
Probably best not to, it makes them inconsistent with the rest of
the kernel's history when imported into git. The body becomes the
commit text directly.
I'll change them to do this since you're obviously correct. You're the
first
Hi,
I was wondering if there has been any thought or progress in
content-based storage for btrfs beyond the suggestion in the Project
ideas wiki page?
The basic idea, as I understand it, is that a longer data extent
checksum is used (long enough to make collisions unrealistic), and merge
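The merge-by-checksum idea described above can be sketched in a few lines. This is an illustrative model only, not btrfs code; the `ExtentStore` class and its methods are invented for the sketch:

```python
import hashlib

class ExtentStore:
    """Toy content-addressed extent store: extents with identical
    content hash to the same key and are stored only once, mirroring
    the dedup idea described above (hypothetical names throughout)."""

    def __init__(self):
        self._extents = {}   # digest -> data
        self._refcount = {}  # digest -> number of references

    def write_extent(self, data: bytes) -> str:
        # A long hash (SHA-256 here) makes accidental collisions
        # unrealistic, so the digest can stand in for the content.
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self._extents:
            self._extents[digest] = data
            self._refcount[digest] = 0
        self._refcount[digest] += 1
        return digest

    def read_extent(self, digest: str) -> bytes:
        return self._extents[digest]

store = ExtentStore()
a = store.write_extent(b"hello world" * 1000)
b = store.write_extent(b"hello world" * 1000)  # duplicate content
assert a == b                    # same digest, so the extents merge
assert len(store._extents) == 1  # stored only once
```

A real filesystem would of course verify the bytes on a digest hit (or rely on the hash length alone, which is the trade-off the thread debates).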
On 16/03/2010 23:45, Fabio wrote:
Some years ago I was searching for that kind of functionality and found
an experimental ext3 patch to allow the so-called COW-links:
http://lwn.net/Articles/76616/
I'd read about the COW patches for ext3 before. While there is
certainly some similarity
On 17/03/2010 01:45, Hubert Kario wrote:
On Tuesday 16 March 2010 10:21:43 David Brown wrote:
Hi,
I was wondering if there has been any thought or progress in
content-based storage for btrfs beyond the suggestion in the Project
ideas wiki page?
The basic idea, as I understand
On Sat, Jun 12, 2010 at 06:06:23PM -0500, C Anthony Risinger wrote:
# btrfs subvolume create new_root
# mv . new_root/old_root
can i at least get confirmation that the above is possible?
I've had no problem with
# btrfs subvolume snapshot . new_root
# mkdir old_root
# mv * old_root
On 16/06/2010 21:35, Freddie Cash wrote:
snip a lot of fancy math that missed the point
That's all well and good, but you missed the part where he said ext2
on a 5-way LVM stripeset is many times faster than btrfs on a 5-way
btrfs stripeset.
IOW, same 5-way stripeset, different filesystems and
On Wednesday 28 July 2010, Ken D'Ambrosio said:
Hello, all. I'm thinking of rolling out a BackupPC server, and --
based on the strength of the recent Phoronix benchmarks
(http://benchmarkreviews.com/index.php?option=com_content&task=view&id=11156&Itemid=23)
-- had been strongly considering
On 29/09/2010 23:31, Yuehai Xu wrote:
On Wed, Sep 29, 2010 at 3:59 PM, Sean Bartell <wingedtachik...@gmail.com> wrote:
On Wed, Sep 29, 2010 at 02:45:29PM -0400, Yuehai Xu wrote:
On Wed, Sep 29, 2010 at 1:08 PM, Sean Bartell <wingedtachik...@gmail.com> wrote:
On Wed, Sep 29, 2010 at 11:30:14AM
On 12/10/2010 11:34, Tomasz Torcz wrote:
On Tue, Oct 12, 2010 at 11:32:07AM +0200, David Brown wrote:
On 11/10/2010 19:06, Chris Ball wrote:
Hi,
Is it possible to turn a 1-disk (partition) btrfs filesystem into
RAID-1?
Not yet, but I'm pretty sure it's on the roadmap.
- Chris
On Mon, Oct 25, 2010 at 03:20:58PM -0500, C Anthony Risinger wrote:
For example, right now extlinux support booting btrfs, but _only_ from
the top-level root. if i just had a way to swap the top-level root
with a different subvol, i could overcome several problems i have with
users all at
On Tue, Jan 03, 2012 at 01:05:07PM -0500, Calvin Walton wrote:
The best way to get the btrfs-progs source is probably via git; Chris
Mason's repository for it can be found at
http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-progs.git
Chris,
The wiki at
I've been creating some time-based snapshots, e.g.
# btrfs subvolume snapshot @root 2012-01-09-@root
After some changes, I wanted to see what had changed, so I tried:
# btrfs subvolume find-new @root 2012-01-09-@root
transid marker was 37
which doesn't print anything out.
On Wed, Feb 22, 2012 at 10:30:55AM +0300, Dan Carpenter wrote:
Gcc warns that ret can be used uninitialized. It can't actually be
used uninitialized because btrfs_num_copies() always returns 1 or more.
Signed-off-by: Dan Carpenter <dan.carpen...@oracle.com>
diff --git
Marc MERLIN <m...@merlins.org> writes:
I made a mistake and copied data in the root of a new btrfs filesystem.
I created a subvolume, and used mv to put everything in there.
Something like:
cd /mnt
btrfs subvolume create dir
mv * dir
Except it's been running for over a day now (ok, it's 5TB
On 19/11/13 00:25, H. Peter Anvin wrote:
On 11/18/2013 02:35 PM, Andrea Mazzoleni wrote:
Hi Peter,
The Cauchy matrix has the mathematical property to always have itself
and all submatrices not singular. So, we are sure that we can always
solve the equations to recover the data disks.
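The submatrix property claimed above can be checked directly with a toy example over the rationals (real parity implementations work in GF(2^8); the xs/ys values here are arbitrary and the helper names are my own):

```python
from fractions import Fraction
from itertools import combinations

def cauchy(xs, ys):
    """Cauchy matrix C[i][j] = 1 / (x_i + y_j). With the x_i distinct,
    the y_j distinct, and every x_i + y_j nonzero, the matrix and all
    its square submatrices are nonsingular."""
    return [[Fraction(1, x + y) for y in ys] for x in xs]

def det(m):
    # Plain Laplace expansion; fine for the tiny matrices used here.
    if len(m) == 1:
        return m[0][0]
    total = Fraction(0)
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

xs, ys = [1, 2, 3], [4, 5, 6]   # arbitrary example parameters
C = cauchy(xs, ys)

# Every square submatrix (any k rows x any k columns) has a nonzero
# determinant, so any combination of lost disks leaves a solvable
# system of equations -- the recovery guarantee described above.
n = len(xs)
for k in range(1, n + 1):
    for rows in combinations(range(n), k):
        for cols in combinations(range(n), k):
            sub = [[C[r][c] for c in cols] for r in rows]
            assert det(sub) != 0
```

This works because any submatrix of a Cauchy matrix is itself a Cauchy matrix (built from subsets of the xs and ys), so the same nonsingularity argument applies at every size.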
On 20/11/13 02:23, John Williams wrote:
On Tue, Nov 19, 2013 at 4:54 PM, Chris Murphy <li...@colorremedies.com>
wrote:
If anything, I'd like to see two implementations of RAID 6 dual
parity. The existing implementation in the md driver and btrfs could
remain the default, but users could opt
On 20/11/13 19:09, John Williams wrote:
On Wed, Nov 20, 2013 at 2:31 AM, David Brown <david.br...@hesbynett.no> wrote:
That's certainly a reasonable way to look at it. We should not limit
the possibilities for high-end systems because of the limitations of
low-end systems that are unlikely
On 20/11/13 19:34, Andrea Mazzoleni wrote:
Hi David,
The choice of ZFS to use powers of 4 was likely not optimal,
because to multiply by 4, it has to do two multiplications by 2.
I can agree with that. I didn't copy ZFS's choice here
David, it was not my intention to suggest that you
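The cost being discussed above is easy to make concrete. A minimal sketch of GF(2^8) doubling with the RAID-6 field polynomial 0x11d, showing that multiplying by 4 takes two doublings (function names are my own, not from any of the implementations under discussion):

```python
def gf_mul2(a):
    """Multiply by 2 (the generator x) in GF(2^8) with the RAID-6
    polynomial 0x11d: shift left, then reduce if the top bit was set."""
    a <<= 1
    if a & 0x100:
        a ^= 0x11d
    return a & 0xff

def gf_mul4(a):
    # Multiplying by 4 costs two doublings -- the overhead the
    # message above attributes to a power-of-4 generator choice.
    return gf_mul2(gf_mul2(a))

assert gf_mul2(0x01) == 0x02
assert gf_mul4(0x01) == 0x04
assert gf_mul2(0x80) == 0x1d   # top bit set: 0x100 ^ 0x11d = 0x1d
```

On real hardware both steps are cheap shift/xor sequences (or table lookups), but per-byte over terabytes the factor of two is exactly the kind of cost the thread is weighing.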
On 21/11/13 02:28, Stan Hoeppner wrote:
On 11/20/2013 10:16 AM, James Plank wrote:
Hi all -- no real comments, except as I mentioned to Ric, my tutorial
in FAST last February presents Reed-Solomon coding with Cauchy
matrices, and then makes special note of the common pitfall of
assuming that
On 21/11/13 10:54, Adam Goryachev wrote:
On 21/11/13 20:07, David Brown wrote:
I can see plenty of reasons why raid15 might be a good idea, and even
raid16 for 5 disk redundancy, compared to multi-parity sets. However,
it costs a lot in disk space. For example, with 20 disks at 1 TB each
On 21/11/13 21:52, Piergiorgio Sartor wrote:
Hi David,
On Thu, Nov 21, 2013 at 09:31:46PM +0100, David Brown wrote:
[...]
If this can all be done to give the user an informed choice, then it
sounds good.
that would be my target.
To _offer_ more options to the (advanced) user.
It _must_
On 22/11/13 01:30, Stan Hoeppner wrote:
I don't like it either. It's a compromise. But as RAID1/10 will soon
be unusable due to URE probability during rebuild, I think it's a
relatively good compromise for some users, some workloads.
An alternative is to move to 3-way raid1 mirrors rather
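The URE concern raised above can be put into rough numbers. This is a back-of-envelope sketch with assumed figures, not values from the thread: a consumer-drive URE rate of one unrecoverable read error per 1e14 bits, and a 4 TB mirror rebuild that must read the entire surviving disk, with bit errors treated as independent:

```python
import math

URE_RATE = 1e-14      # errors per bit read (assumed spec-sheet figure)
BITS_READ = 4e12 * 8  # 4 TB surviving disk = 3.2e13 bits (assumption)

# Probability that at least one URE occurs during the rebuild,
# modeling errors as a Poisson process with rate URE_RATE per bit.
p_fail = 1 - math.exp(-URE_RATE * BITS_READ)
print(f"P(rebuild hits a URE) ~ {p_fail:.1%}")
```

Under these assumptions the rebuild has a roughly one-in-four chance of hitting a URE, which is why a 3-way mirror (where a second copy still exists during the rebuild) changes the picture so much.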
On 22/11/13 23:59, NeilBrown wrote:
On Fri, 22 Nov 2013 10:07:09 -0600 Stan Hoeppner <s...@hardwarefreak.com>
wrote:
snip
In the event of a double drive failure in one mirror, the RAID 1 code
will need to be modified in such a way as to allow the RAID 5 code to
rebuild the first replacement
On 28/11/13 08:16, Stan Hoeppner wrote:
Late reply. This one got lost in the flurry of activity...
On 11/22/2013 7:24 AM, David Brown wrote:
On 22/11/13 09:38, Stan Hoeppner wrote:
On 11/21/2013 3:07 AM, David Brown wrote:
For example, with 20 disks at 1 TB each, you can have