Systemd 219 now sets the special FS_NOCOW file flag for its journal
files[1]. This unfortunately breaks the ability to repair the journal on
RAID 1/5/6 btrfs volumes, should a bad sector happen to appear there. Is
this something that can be configured for systemd? Is btrfs going to
someday fix
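For context, the attribute in question can be inspected from userspace; the paths below are assumptions for illustration, not taken from the thread. On btrfs, NOCOW files also skip data checksums, which is why RAID repair cannot tell a good copy from a bad one:

```shell
# Journal files carrying FS_NOCOW_FL show a 'C' in lsattr output:
lsattr /var/log/journal/*/system.journal

# journald applies the flag via the FS_IOC_SETFLAGS ioctl when it
# creates a file; from the shell the equivalent operation would be
# (it only takes effect on empty files):
chattr +C /path/to/empty-file
```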
On 8/1/2015 3:30 PM, Lennart Poettering wrote:
On Wed, 07.01.15 15:10, Josef Bacik (jba...@fb.com) wrote:
On 01/07/2015 12:43 PM, Lennart Poettering wrote:
Heya!
Currently, systemd-journald's disk access patterns (appending to the
end of files, then updating a few pointers in the front)
On 10/12/2014 9:28 PM, Marc Joliet wrote:
On Wed, 10 Dec 2014 10:51:15 +0800,
Anand Jain anand.j...@oracle.com wrote:
Is there any relevant log in dmesg?
Not in my case; at least, nothing that made it into the syslog.
Same with me, no messages at all
--
To unsubscribe from this
I've got the exact same problem, with a 4 drive RAID1. kernel 3.18-git
and btrfs tools-git, all built yesterday.
On 22/11/2014 2:13 PM, Marc Joliet wrote:
Hi all,
While I haven't gotten any "scrub already running"-type errors any more, I
do get one strange case of state misreporting. When running
with
it. It was with scrub and was fixed by Liu Bo[1], so I think
skinny-metadata is mature enough to be a default.
[1] https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg34493.html
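The feature being proposed as a default can already be enabled explicitly; a minimal sketch, with the device name assumed:

```shell
# Create a filesystem with skinny metadata extent refs (a smaller,
# more compact extent tree):
mkfs.btrfs -O skinny-metadata /dev/sdX

# An existing filesystem can be converted offline with btrfstune:
btrfstune -x /dev/sdX
```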
--
Konstantinos Skarlatos
(|/.*)))$' \
../x220_home.img .
done
And I now have back my ~2800 photos (~13 GB).
Many thanks to those who helped!
I am glad I could help!
Best regards,
Jean-Denis Girard
On 30/08/2014 10:12, Jean-Denis Girard wrote:
On 28/08/2014 21:40, Konstantinos Skarlatos wrote:
On 28/8
it be a problem?
Thanks,
Jean-Denis Girard
option of rsync.
On 22/8/2014 12:58 PM, Filipe David Manana wrote:
On Fri, Aug 22, 2014 at 8:35 AM, Duncan 1i5t5.dun...@cox.net wrote:
Konstantinos Skarlatos posted on Fri, 22 Aug 2014 09:56:55 +0300 as
excerpted:
I would stay with rsync for a while, because there is always the
possibility of a bug
On 13/8/2014 2:01 PM, David Pottage wrote:
On 12/08/14 12:00, Konstantinos Skarlatos wrote:
Maybe help with Andrea Mazzoleni's New RAID library supporting up to
six parities? It seems to be a great feature for btrfs.
https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg31735.html
On 7/7/2014 6:48 PM, Duncan wrote:
Konstantinos Skarlatos posted on Mon, 07 Jul 2014 16:54:05 +0300 as
excerpted:
On 7/7/2014 4:38 PM, André-Sebastian Liebe wrote:
can anyone tell me how much time is acceptable and assumable for a
multi-disk btrfs array with classical hard disk drives
On 7/7/2014 5:24 PM, André-Sebastian Liebe wrote:
On 07/07/2014 03:54 PM, Konstantinos Skarlatos wrote:
On 7/7/2014 4:38 PM, André-Sebastian Liebe wrote:
Hello List,
can anyone tell me how much time is acceptable and assumable for a
multi-disk btrfs array with classical hard disk drives
- RAID1 with copies on each device
- RAID5/6
- n-way striped+parity with n > 2
- stacked layouts (RAID10 as e.g. MD has it, ... RAID50, 60)
And terminology should really be re-worked... IMHO it's very bad to use
the term RAID1, if it's not what classic RAID1 does.
Cheers,
Chris.
On 19/6/2014 12:22 AM, Duncan wrote:
Konstantinos Skarlatos posted on Wed, 18 Jun 2014 16:23:04 +0300 as
excerpted:
I guess that btrfs developers have put these BUG_ONs in so that they get
reports from users when btrfs gets into these unexpected situations. But
if most of these reports are ignored
/0xb0
[69932.967493] [8108c8a0] ? kthread_create_on_node+0x180/0x180
[69932.967505] INFO: task kworker/u16:15:30882 blocked for more than 120
seconds.
[ 995.654816] BTRFS info (device sdh): force zlib compression
[ 995.654827] BTRFS info (device sdh): disk space
On 18/6/2014 5:11 AM, Jens Axboe wrote:
On 2014-06-17 14:35, Konstantinos Skarlatos wrote:
Hi all,
with 3.16-rc1 rsync stops writing to my btrfs filesystem and stays in a
D+ state.
git bisect showed that the problematic commit is:
762380ad9322951cea4ce9d24864265f9c66a916 is the first bad
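A bisect like the one described follows the standard pattern; the tags below are assumed from the kernel versions mentioned:

```shell
git bisect start
git bisect bad v3.16-rc1   # rsync hangs in D+ state here
git bisect good v3.15      # last known-good kernel
# Build and boot each commit git checks out, then mark it:
git bisect good            # or: git bisect bad
# ...repeat until git prints "<sha> is the first bad commit", then:
git bisect reset
```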
_much_ better at reporting what happened,
which file was implicated and, if it is a multiple-disk fs, the disk
where the problem is and the sector where that occurred.
PS.
I am not a kernel developer, so please be kind if I have said something
completely wrong :)
Thanks,
Marc
On 5/6/2014 1:59 AM, Konstantinos Skarlatos wrote:
Hi, I get this after doing a few runs of rsync on my btrfs filesystem.
kernel: 3.15.0-rc8
filesystem has 6x2TB disks, data is RAID0, fs was created with skinny
metadata, mount options are noatime, compress-force=zlib. No quota or
defrag
On 5/6/2014 10:05 AM, Liu Bo wrote:
Hi, Konstantinos
On Thu, Jun 05, 2014 at 09:28:16AM +0300, Konstantinos Skarlatos wrote:
On 5/6/2014 1:59 AM, Konstantinos Skarlatos wrote:
Hi, I get this after doing a few runs of rsync on my btrfs filesystem.
kernel: 3.15.0-rc8
filesystem has 6x2TB disks
On 21/5/2014 3:58 AM, Chris Murphy wrote:
On May 20, 2014, at 4:56 PM, Konstantinos Skarlatos k.skarla...@gmail.com
wrote:
On 21/5/2014 1:37 AM, Mark Fasheh wrote:
On Tue, May 20, 2014 at 01:07:50AM +0300, Konstantinos Skarlatos wrote:
Duperemove will be shipping as supported software
On 20/5/2014 5:07 AM, Russell Coker wrote:
On Mon, 19 May 2014 23:47:37 Brendan Hide wrote:
This is extremely difficult to measure objectively. Subjectively ... see
below.
[snip]
*What other failure modes* should we guard against?
I know I'd sleep a /little/ better at night knowing that a
On 21/5/2014 1:37 AM, Mark Fasheh wrote:
On Tue, May 20, 2014 at 01:07:50AM +0300, Konstantinos Skarlatos wrote:
Duperemove will be shipping as supported software in a major SUSE release, so
it will be bug-fixed etc. as you would expect. At the moment I'm very busy
trying to fix qgroup bugs, so I
On 19/5/2014 7:01 PM, Brendan Hide wrote:
On 19/05/14 15:00, Scott Middleton wrote:
On 19 May 2014 09:07, Marc MERLIN m...@merlins.org wrote:
On Wed, May 14, 2014 at 11:36:03PM +0800, Scott Middleton wrote:
I read so much about btrfs that I mistook Bedup for Duperemove.
Duperemove is
On 19/5/2014 8:38 PM, Mark Fasheh wrote:
On Mon, May 19, 2014 at 06:01:25PM +0200, Brendan Hide wrote:
On 19/05/14 15:00, Scott Middleton wrote:
On 19 May 2014 09:07, Marc MERLIN m...@merlins.org wrote:
Thanks for that.
I may be completely wrong in my approach.
I am not looking for a file
On 8/5/2014 4:26 AM, Wang Shilong wrote:
This patch adds an option '--check-data-csum' to verify data csums.
fsck won't check data csums unless users specify this option explicitly.
Can this option be added to btrfs restore as well? I think it would be a
good thing if users could tell restore to
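As a sketch of the usage being discussed (device name assumed; the filesystem must be unmounted for an offline check):

```shell
# Offline check that also verifies data checksums (skipped by default
# because reading all data is slow on large filesystems):
btrfs check --check-data-csum /dev/sdX
```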
Hello,
Here are the test results from my testing of the latest patches of btrfs
dedup.
TL;DR:
I rsynced 10 separate copies of a 3.8GB folder with 138 RAW photographs
(23-36 MiB each) on a btrfs volume with dedup enabled.
On the first try, the copy was very slow, and a sync after that took
over 10
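The rough shape of that test, with paths assumed for illustration:

```shell
# Copy the same photo folder ten times onto the dedup-enabled volume,
# timing the sync after each copy (the slow part reported above):
for i in $(seq 1 10); do
    rsync -a /data/raw-photos/ /mnt/dedup/copy-$i/
    time sync
done
# Compare space used against 10 x 3.8GB to estimate dedup savings:
btrfs filesystem df /mnt/dedup
```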
On 10/4/2014 6:48 AM, Liu Bo wrote:
Hello,
This is the 10th attempt at in-band data dedupe, based on the Linux _3.14_ kernel.
Data deduplication is a specialized data compression technique for eliminating
duplicate copies of repeating data.[1]
This patch set is also related to Content based storage
On 4/4/2014 6:20 PM, Filipe David Borba Manana wrote:
This new send flag makes send first calculate the amount of new file data
(in bytes) the send root has relative to the parent root, or, in the case
of a non-incremental send, the total amount of file data we will send
through the send
I am trying to delete a device (device 5, /dev/sdg) that has some read
errors from a multi-device filesystem:
Label: none uuid: f379d9aa-ddfd-4b4e-84c1-cd93d4592862
Total devices 6 FS bytes used 7.11TiB
devid1 size 1.82TiB used 1.21TiB path /dev/sda
devid2
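The two usual approaches for a disk in this state, sketched with the device names from the post (the replacement device name is an assumption):

```shell
# Remove the failing device; btrfs migrates its chunks to the others:
btrfs device delete /dev/sdg /mnt

# If reads from it keep failing, replacing it with a fresh disk can be
# more robust (reads fall back to other copies where redundancy exists):
btrfs replace start /dev/sdg /dev/sdnew /mnt
btrfs replace status /mnt
```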
Hello, I am using btrfs send to copy a snapshot to another btrfs
filesystem on the same machine, and it has a maximum speed of
30-35 MByte/sec.
Incredibly, rsync is much faster, at 120-140 MB/sec. Source btrfs is a
5x2TB RAID0 and target is 1x4TB.
mount options:
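A way to isolate where the time goes in such a comparison (paths assumed; the source snapshot must be read-only for send):

```shell
# Raw send bandwidth, with the receive side taken out of the picture:
btrfs send /mnt/pool/snap | pv > /dev/null

# The full copy, for comparison with rsync:
btrfs send /mnt/pool/snap | btrfs receive /mnt/target
rsync -a --info=progress2 /mnt/pool/snap/ /mnt/target/snap/
```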
Sorry for the spam, I just mixed up the order of your patches. They now
apply cleanly to 3.13 git.
Thanks
On 2/1/2014 4:32 PM, Konstantinos Skarlatos wrote:
Hello, I am trying to test your patches and they do not apply to the
latest 3.12 source or 3.13 git. Am I doing something wrong?
---logs
On 26/11/2013 7:44 PM, Goffredo Baroncelli wrote:
On 2013-11-26 16:12, Konstantinos Skarlatos wrote:
On 25/11/2013 11:23 PM, Goffredo Baroncelli wrote:
Hi all,
nobody is interested in these new features?
Is this ZFS-style recursive snapshotting? If yes, I am interested, and
thanks for your
Hello,
in https://btrfs.wiki.kernel.org/index.php/Btrfs_source_repositories, I
used the Fedora instructions for CentOS.
The problem is that lzo2-devel is named lzo-devel on CentOS, so if
somebody follows the Fedora instructions and doesn't notice that
lzo2-devel is missing, the btrfs-progs build
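The CentOS-side fix amounts to installing the differently-named package; the other dependency names below are assumptions based on typical btrfs-progs build requirements:

```shell
# On CentOS the LZO development package is lzo-devel, not lzo2-devel
# as the Fedora instructions say:
yum install lzo-devel zlib-devel libuuid-devel
```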
On 25/11/2013 11:23 PM, Goffredo Baroncelli wrote:
Hi all,
nobody is interested in these new features?
Is this ZFS-style recursive snapshotting? If yes, I am interested, and
thanks for your great work :)
On 2013-11-16 18:09, Goffredo Baroncelli wrote:
Hi All,
the following patches
According to https://github.com/g2p/bedup/tree/wip/dedup-syscall
The clone call is considered a write operation and won't work on
read-only snapshots.
Is this fixed on newer kernels?
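The restriction follows from the direction of the operation; a small sketch with assumed paths:

```shell
# The clone ioctl writes to the *destination* file, so cloning FROM a
# read-only snapshot into a writable subvolume works:
cp --reflink=always /mnt/snaps/ro/file /mnt/live/file

# ...while cloning INTO a read-only snapshot fails (read-only filesystem):
cp --reflink=always /mnt/live/file /mnt/snaps/ro/file
```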
Hi all,
I have two multi-disk btrfs filesystems on an Arch Linux 3.4.0 system.
After a power failure, both filesystems refuse to mount
[ 10.402284] Btrfs loaded
[ 10.402714] device fsid 1e7c18a4-02d6-44b1-8eaf-c01378009cd3 devid 4
transid 65282 /dev/sdc
[ 10.403108] btrfs: force zlib
On Friday, 8 June 2012 11:28:39 AM, Tomasz Torcz wrote:
On Fri, Jun 08, 2012 at 11:26:21AM +0300, Konstantinos Skarlatos wrote:
Hi all,
I have two multi-disk btrfs filesystems on an Arch Linux 3.4.0
system. After a power failure, both filesystems refuse to mount
Multi-device
On Sunday, 1 April 2012 8:07:54 PM, Norbert Scheibner wrote:
On: Sun, 01 Apr 2012 19:45:13 +0300 Konstantinos Skarlatos wrote
That's my point. This poor man's dedupe would solve my problems here
very well. I don't need a ZFS-variant of dedupe. I can implement such a
file-based dedupe
On 1/4/2012 9:39 PM, Norbert Scheibner wrote:
On: Sun, 01 Apr 2012 19:22:42 +0200, Klaus A. Kreil wrote
I am just an interested reader on the btrfs list and so far have never
posted or sent a message to the list, but I do have a dedup bash script
that searches for duplicates underneath a
On 1/4/2012 9:11 PM, Norbert Scheibner wrote:
On: Sun, 01 Apr 2012 20:19:24 +0300 Konstantinos Skarlatos wrote
I use btrfs for my backups. Once a day I rsync --delete --inplace the
complete system to a subvolume, snapshot it, delete some tempfiles
in the snapshot.
In my setup I rsync
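The backup cycle described above can be sketched as follows; all paths are assumptions:

```shell
# Daily cycle: sync into a working subvolume, then snapshot it.
rsync -a --delete --inplace /source/ /mnt/backup/current/
btrfs subvolume snapshot /mnt/backup/current \
      /mnt/backup/daily-$(date +%F)          # writable snapshot
# Prune temp files inside the fresh snapshot, as described above:
rm -rf /mnt/backup/daily-$(date +%F)/tmp/*
```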
On 22/12/2011 2:24 PM, Chris Samuel wrote:
Christoph,
On Sat, 2 Apr 2011 12:40:11 AM Chris Mason wrote:
Excerpts from Christoph Hellwig's message of 2011-04-01 09:34:05
-0400:
I don't think it's a good idea to introduce any user visible
operations over subvolume boundaries. Currently we
Hello everyone,
I was reading this article on Slashdot about dedupe [1] and I was
wondering about the status of the (offline) dedupe patches in btrfs. Are
they applicable to a recent kernel? Do userspace tools support them?
Kind regards
[1]
Hello all:
I have two machines with btrfs that give me the "blocked for more than
120 seconds" message. After that I cannot write anything to disk, I am
unable to unmount the btrfs filesystem and I can only reboot with
sysrq-trigger.
It always happens when I write many files with rsync over
Well, now machine2 has just crashed too...
http://pastebin.com/gvfUm0az
On Wednesday, 28 December 2011 9:26:07 PM, Konstantinos Skarlatos wrote:
Hello all:
I have two machines with btrfs that give me the "blocked for more
than 120 seconds" message. After that I cannot write anything to disk,
I
On Wednesday, 28 December 2011 11:48:32 PM, Dave Chinner wrote:
On Wed, Dec 28, 2011 at 09:26:07PM +0200, Konstantinos Skarlatos wrote:
Hello all:
I have two machines with btrfs that give me the "blocked for more
than 120 seconds" message. After that I cannot write anything to
disk, I am unable
Even more kernel messages from btrfs crashing when rsyncing large
amounts of data on 3.2-rc4:
Dec 3 15:12:14 mail kernel: [15481.100564] loop0 D
00010044b6c5 0 1729 2 0x
Dec 3 15:12:14 mail kernel: [15481.101550] 8801f9b31b30
0046
On 3 December 2011 2:35:50 AM, Konstantinos Skarlatos wrote:
After about 1TB of rsyncs from multiple servers at the same time, plus
some heavy filesystem loading, I believe that 3.2-rc4 solves the
problem for me. Now if only we had deduplication and an fsck tool :)
On Friday, 2 December 2011
Hi all
On 2/12/2011 3:46 PM, Tobias wrote:
Hi Chris!
On 01.12.2011 19:41, Chris Mason wrote:
So, the transaction close is in btrfs_evict_inode, which sounds like a
deadlock recently fixed by this commit:
I see they got into 3.2-rc4, so I am now compiling it. I will report
back in a few hours.
On Friday, 2 December 2011 5:48:31 PM, Tobias wrote:
On 02.12.2011 16:22, Konstantinos Skarlatos wrote:
So, the transaction close is in btrfs_evict_inode, which sounds like a
deadlock recently fixed
After about 1TB of rsyncs from multiple servers at the same time, plus
some heavy filesystem loading, I believe that 3.2-rc4 solves the problem
for me. Now if only we had deduplication and an fsck tool :)
On Friday, 2 December 2011 9:53:10 PM, Konstantinos Skarlatos wrote:
I see they got
Hello, I have a 5.5TB Btrfs filesystem on top of an md-RAID5 device. Now
if I run some file operations like find, I get these messages.
kernel is 2.6.38.5-1 on Arch Linux
May 5 14:15:12 mail kernel: [13559.089713] parent transid verify failed
on 3062073683968 wanted 5181 found 5188
May 5
On 5/5/2011 2:42 PM, Chris Mason wrote:
Excerpts from Konstantinos Skarlatos's message of 2011-05-05 07:19:52 -0400:
Hello, I have a 5.5TB Btrfs filesystem on top of an md-RAID5 device. Now
if I run some file operations like find, I get these messages.
kernel is 2.6.38.5-1 on Arch Linux
Are
On 5/5/2011 6:06 PM, Chris Mason wrote:
Excerpts from Konstantinos Skarlatos's message of 2011-05-05 10:27:30 -0400:
Attached you can find the whole dmesg log. I can trigger the error again
if more logs are needed.
Yes, I'll send you a patch to get rid of the printk for the transid
failed
I think I made some progress. When I tried to remove the directory that
I suspect contains the problematic file, I got this on the console:
rm -rf serverloft/
2011 May 5 23:32:53 mail [ 200.580195] Oops: [#1] PREEMPT SMP
2011 May 5 23:32:53 mail [ 200.580220] last sysfs file:
On 5/5/2011 11:32 PM, Chris Mason wrote:
Excerpts from Konstantinos Skarlatos's message of 2011-05-05 16:27:54 -0400:
I think I made some progress. When I tried to remove the directory that
I suspect contains the problematic file, I got this on the console:
rm -rf serverloft/
Ok, our one bad
On 6/5/2011 2:50 AM, Chris Mason wrote:
Excerpts from Konstantinos Skarlatos's message of 2011-05-05 17:04:00 -0400:
On 5/5/2011 11:32 PM, Chris Mason wrote:
Excerpts from Konstantinos Skarlatos's message of 2011-05-05 16:27:54 -0400:
I think I made some progress. When I tried to remove the
Hello,
I would like to ask about the status of this feature/patch: is it
accepted into the btrfs code, and how can I use it?
I am interested in enabling compression in a specific folder
(force-compress would be ideal) of a large btrfs volume, and
disabling it for the rest.
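Two possible workarounds with stock tools, sketched under assumed paths (neither forces compression the way a mount-wide compress-force does, which is what the question asks for):

```shell
# New files created in this directory inherit the compression attribute:
chattr +c /mnt/vol/photos

# Newer btrfs-progs can also set the algorithm per file or directory:
btrfs property set /mnt/vol/photos compression zlib

# force-compress itself remains mount-wide only: -o compress-force=zlib
```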
On 21/3/2011 10:57
On 1/4/2011 3:12 PM, Helmut Hullen wrote:
Hello, Struan,
you wrote on 01.04.11:
1) Is the balancing operation expected to take many hours (or days?)
on a filesystem such as this? Or are there known issues with the
algorithm that are yet to be addressed?
May be. Balancing about 15 GByte
On 1/4/2011 1:59 AM, Josef Bacik wrote:
On Thu, Mar 31, 2011 at 05:06:42PM -0400, Calvin Walton wrote:
On Wed, 2011-03-30 at 17:19 -0400, Josef Bacik wrote:
Hello,
Just found a big bug in the free space caching stuff that will result in
early ENOSPC. I'm working on fixing this bug, but it
On 1/4/2011 4:37 PM, Hugo Mills wrote:
On Fri, Apr 01, 2011 at 04:22:39PM +0300, Konstantinos Skarlatos wrote:
On 1/4/2011 3:12 PM, Helmut Hullen wrote:
You wrote on 01.04.11:
dmesg counts down the number of remaining jobs.
Are you sure? Here is a snippet of dmesg from a balance I did
Hello, I have these messages from two not-full filesystems (111GB of
2TB and 452GB of 2TB free).
Eventually the filesystems mount, but I am unable to create new files,
even when I delete data.
Most files are 1.45GB.
[r...@linuxserver ~]# btrfs filesystem df /storage/WD20_1
Data: