On Sat, Jul 15, 2017 at 04:12:45PM -0700, Marc MERLIN wrote:
> On Fri, Jul 14, 2017 at 06:22:16PM -0700, Marc MERLIN wrote:
> > Dear Chris and other developers,
> >
> > Can you look at this bug, which has been happening since 2012 on
> > apparently all kernels from at least 3.4 through 4.11?
>
> Add a -S/--subvol [NAME] option to mkfs.btrfs. It enables users to create
> a subvolume under the toplevel volume and populate the created subvolume
> with files from the rootdir specified by the -r/--rootdir option.
This brings two enhancements; they might be good ideas, but stating a
specific
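For illustration, the proposed option would presumably be used like this (a sketch only: the -S/--subvol option is from the patch under review and may change; the subvolume name "rootfs" and all paths here are examples, not from the patch):

```shell
# Prepare a backing file and loop device for testing (examples only).
truncate -s 10G /tmp/disk.img
sudo losetup -f /tmp/disk.img
# Format the device and place the contents of the rootdir into a newly
# created subvolume named "rootfs" rather than into the toplevel volume:
sudo mkfs.btrfs -f -r /path/to/rootdir -S rootfs /dev/loop0
```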
shally verma posted on Mon, 28 Aug 2017 12:49:10 +0530 as excerpted:
> On Sat, Aug 26, 2017 at 9:45 PM, Adam Borowski wrote:
>> On Sat, Aug 26, 2017 at 01:36:35AM +0000, Duncan wrote:
>>> The second has to do with btrfs scaling issues due to reflinking,
>>> which of course is the operational mechanism for both snapshotting
>>> and dedup.
Hello,
a trace of the kworker looks like this:
kworker/u24:4-13405 [003] 344186.202535: _cond_resched <-find_free_extent
kworker/u24:4-13405 [003] 344186.202535: down_read <-find_free_extent
kworker/u24:4-13405 [003] 344186.202535: block_group_cache_done.isra.27 <-find_free_extent
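A trace of this shape can be reproduced with ftrace's function_graph tracer; a sketch, assuming root access and a mounted tracefs (usually at /sys/kernel/tracing, or /sys/kernel/debug/tracing on older setups):

```shell
# Trace only the call tree under find_free_extent (symbol name must match
# the running kernel), then read back a slice of the captured entries.
cd /sys/kernel/tracing
echo find_free_extent > set_graph_function
echo function_graph > current_tracer
head -n 40 trace
echo nop > current_tracer   # stop tracing again
```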
Hi All,
Unfortunately, your patch crashes on my PC:
$ truncate -s 100G /tmp/disk.img
$ sudo losetup -f /tmp/disk.img
$ # good case
$ sudo ./mkfs.btrfs -f -r /tmp/empty/ /dev/loop0
btrfs-progs v4.12.1-1-gf80d059c
See http://btrfs.wiki.kernel.org for more information.
Making image is completed.
On Mon, 28 Aug 2017 15:03:47 +0300
Nikolay Borisov wrote:
> when the cleaner thread runs again the snapshot's root item is going to
> be deleted for good and you no longer will see it.
Oh, that's pretty sweet -- it means there's actually a way to reliably wait
for cleaner work to be done on all
On 28.08.2017 11:07, Christoph Anton Mitterer wrote:
> Thanks...
>
> Still a bit strange that it displays that entry... especially with a
> generation that seems newer than what I thought was the actually last
> generation on the fs.
Snapshot destroy is a 2-phase process. The first phase deletes the
snapshot's directory entry and queues it for cleanup, so it becomes
inaccessible immediately; in the second phase the cleaner thread removes
the root item and frees the snapshot's blocks for good.
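Both phases can be observed, and the second one reliably waited for, with stock btrfs-progs commands; a sketch, where the mount point /mnt and snapshot name "snap" are placeholders:

```shell
# Phase 1: unlink the snapshot and queue it for the cleaner thread.
sudo btrfs subvolume delete /mnt/snap
# List subvolumes that are deleted but not yet cleaned up:
sudo btrfs subvolume list -d /mnt
# Block until the cleaner has finished dropping all queued subvolumes:
sudo btrfs subvolume sync /mnt
```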
On Mon, Aug 28, 2017 at 12:49:10PM +0530, shally verma wrote:
> I'm a bit confused here: is your description based on offline dedupe,
> or is it about inline deduplication?
It doesn't matter _how_ you get to excessive reflinking, the resulting
slowdown is the same.
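Excessive reflinking is easy to produce from userspace, which is how dedup tools multiply the reference counts in the first place; a minimal sketch of the mechanism, assuming GNU coreutils cp (--reflink=auto silently falls back to an ordinary copy on filesystems without reflink support, so it runs anywhere):

```shell
# Create a file and a reflinked copy of it; on btrfs both names share the
# same extents until one side is modified (copy-on-write).
tmpdir=$(mktemp -d)
printf 'hello reflink\n' > "$tmpdir/a"
cp --reflink=auto "$tmpdir/a" "$tmpdir/b"   # shares extents where supported
cmp -s "$tmpdir/a" "$tmpdir/b" && echo identical
rm -r "$tmpdir"
```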
By the way, you can try "bees".
On 26.08.2017 23:30, Adam Bahe wrote:
> Hello all. Recently I added another 10 TB SAS drive to my btrfs array,
> and during the balance I received the following messages in dmesg. I was
> hoping someone could clarify what seems to be causing this.
>
> Some additional info, I did a smartctl
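When errors show up mid-balance, the usual first step is to localize them per device; a sketch, where /mnt/pool and /dev/sdX are placeholders for the array's mount point and the suspect drive:

```shell
# Per-device btrfs error counters (read/write/flush/corruption/generation):
sudo btrfs device stats /mnt/pool
# The drive's own SMART health data and error log:
sudo smartctl -a /dev/sdX
# Recent btrfs-related kernel messages for cross-reference:
sudo dmesg | grep -i btrfs | tail
```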
On Sat, Aug 26, 2017 at 9:45 PM, Adam Borowski wrote:
> On Sat, Aug 26, 2017 at 01:36:35AM +0000, Duncan wrote:
>> The second has to do with btrfs scaling issues due to reflinking, which
>> of course is the operational mechanism for both snapshotting and dedup.
>> Snapshotting of course reflinks t