Qu Wenruo posted on Wed, 16 Mar 2016 10:48:48 +0800 as excerpted:
> Hi,
>
> While debugging a bug related to balancing metadata chunks, we found
> that if we specify the -m option to "btrfs balance", it will always
> balance system chunks too.
>
> cmds-balance.c:
> ---
> /*
> *
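The coupling being described can be sketched briefly (in Python for illustration; the flag names are made up to mirror the idea, not the actual btrfs-progs definitions):

```python
# Illustrative sketch of the behaviour reported above: requesting a
# metadata balance (-m) also enables the system-chunk balance.
# Flag names are hypothetical, not btrfs-progs code.
BALANCE_DATA = 1 << 0
BALANCE_METADATA = 1 << 1
BALANCE_SYSTEM = 1 << 2

def build_balance_flags(meta=False, data=False):
    flags = 0
    if data:
        flags |= BALANCE_DATA
    if meta:
        # -m implies system chunks too, matching the observed behaviour
        flags |= BALANCE_METADATA | BALANCE_SYSTEM
    return flags

print(hex(build_balance_flags(meta=True)))  # → 0x6 (system bit set with only -m)
```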
On Wed, Mar 16, 2016 at 5:53 AM, Austin S. Hemmelgarn
wrote:
> On 2016-03-16 02:51, Chris Murphy wrote:
>>
>> On Tue, Mar 15, 2016 at 10:23 PM, Nazar Mokrynskyi
>> wrote:
>>>
>>> Sounds like a really good idea!
>>>
> I'll try to implement it in my
Austin S. Hemmelgarn wrote on 2016/03/16 11:26 -0400:
Currently, open_ctree_fs_info will open whatever path you pass it and
try to interpret it as a BTRFS filesystem. While this is not
necessarily dangerous (except possibly if done on a character device),
it does result in some rather cryptic
I tried mounting with -o degraded, but it has the same effect as recovery:
[ 7133.926778] BTRFS info (device sdc): allowing degraded mounts
[ 7133.926783] BTRFS info (device sdc): disk space caching is enabled
[ 7133.932140] BTRFS info (device sdc): bdev (null) errs: wr 921, rd 164889, flush 0, corrupt
sri posted on Fri, 18 Mar 2016 13:36:50 + as excerpted:
> Henk Slager (gmail.com) writes:
>
>> sri (yahoo.co.in) writes:
>> >
>> > I would like to know: between two snapshots of a subvolume, can we
>> > identify which blocks were modified in that particular subvolume?
>> >
>> > there can be
"qgroup create/destroy" have not worked since the following commit.
commit 176aeca9a148 ("btrfs-progs: add getopt stubs where needed")
* actual result
==
# ./btrfs qgroup create 1 /btrfs/sub
btrfs qgroup create: too few arguments
usage: btrfs
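Regressions of this kind usually come from miscounting positional arguments after a new option-parsing loop (checking the full argument count instead of what remains after getopt); a minimal sketch of the correct accounting (Python for illustration, not the actual btrfs-progs code):

```python
import getopt

def count_positional(argv):
    """Run an (empty) option-parsing pass the way a getopt stub would,
    then count the arguments that remain. Checking len(argv) directly
    after adding such a loop is the class of bug that makes a valid
    invocation fail with "too few arguments"."""
    opts, args = getopt.getopt(argv[1:], "")  # no options defined
    return len(args)

# Mirrors "btrfs qgroup create 1 /btrfs/sub": the two positional
# arguments must survive option parsing for the argument check to pass.
print(count_positional(["create", "1", "/btrfs/sub"]))  # → 2
```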
The main thing you haven't tried here is mount -o degraded, which
is the thing to do if you have a missing device in your array.
Also, that kernel's not really all that good for a parity RAID
array -- it's the very first one that had the scrub and replace
implementation, so it's rather less
Anyone? Would really appreciate a pointer or two!
Thanks
Paul
On Tue, Mar 15, 2016 at 4:00 PM, Paul Harrison
wrote:
> Hi all,
>
> I'm new to btrfs, and have just taken over management of this system; can
> anyone point me in the right direction with regard to the
On Tue, Mar 08, 2016 at 08:30:06PM +0530, Lakshmipathi.G wrote:
> + perm)
> + for modes in $(seq 1 );do
This generates way too many files and is the longest part of image
population; given the number of option combinations of ext4 and btrfs,
we need to keep
Hi,
btrfs-progs 4.5-rc2 has been released. The changes in option parsing broke
several commands; that's been fixed since rc1, but I'd rather give it some more
time before the 4.5 release. New ETA is this Sunday.
Changes since rc1:
* bugfixes
* subvol sync: fix crash, memory corruption
*
Hello Josef Bacik,
The patch 47ab2a6c6899: "Btrfs: remove empty block groups
automatically" from Sep 18, 2014, leads to the following static
checker warning:
fs/btrfs/extent-tree.c:10584 btrfs_delete_unused_bgs()
warn: 'ret' can be either negative or positive
The warning here is
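The class of problem such a checker flags can be sketched like this (Python for illustration; the helper is hypothetical, not the actual extent-tree.c code):

```python
def remove_step(n):
    """Hypothetical helper mirroring the flagged pattern: negative
    errno on failure, 0 on success, and a positive value on one
    benign special-case path."""
    if n < 0:
        return -22   # -EINVAL: a real error
    if n == 0:
        return 1     # positive: benign "nothing to do"
    return 0         # success

ret = remove_step(0)
print("buggy check treats it as an error:", ret != 0)  # True
print("checker-suggested check:", ret < 0)             # False
```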
Duncan,
thanks for your extensive answer.
On 17.03.2016 11:51, Duncan wrote:
> Ole Langbehn posted on Wed, 16 Mar 2016 10:45:28 +0100 as excerpted:
>
> Have you tried the autodefrag mount option, then defragging? That should
> help keep rewritten files from fragmenting so heavily, at least.
On Thu, Mar 17, 2016 at 11:44:29AM +0530, Chandan Rajendra wrote:
> The following scenario can occur when running btrfs/066,
>
> Task A                 Task B                 Task C
>
> run_test()
> - Execute _btrfs_stress_subvolume()
> in a background shell.
>
Hi,
I would like to know: between two snapshots of a subvolume, can we identify
which blocks were modified in that particular subvolume?
There can be many subvolumes and snapshots present on the btrfs filesystem,
but I want only the blocks modified since the first snapshot for the specific
subvolume. Blocks should
Ole Langbehn posted on Wed, 16 Mar 2016 10:45:28 +0100 as excerpted:
> Hi,
>
> on my box, frequently, mostly while using firefox, any process doing
> disk IO freezes while btrfs-transacti has a spike in CPU usage for more
> than a minute.
>
> I know about btrfs' fragmentation issue, but have a
On 03/14/2016 01:13 PM, Marc Haber wrote:
This was not asked, and I didn't try. Since this is an encrypted root
filesystem, is adding clear_cache to /etc/fstab, rebuilding the initramfs,
and rebooting a workable approach? Or do you recommend using a rescue system?
You should be able to boot to single user
Hi all,
I am trying to get Anand's patchset for global hotspare functionality
working. It now works for me, but I ran into a number of issues while
applying and testing the patches.
I took the latest versions of the patchset and its dependencies (the
latest as of two weeks ago):
1) Anand's hotspare patchset:
Ole Langbehn posted on Fri, 18 Mar 2016 10:33:46 +0100 as excerpted:
> Duncan,
>
> thanks for your extensive answer.
>
> On 17.03.2016 11:51, Duncan wrote:
>> Ole Langbehn posted on Wed, 16 Mar 2016 10:45:28 +0100 as excerpted:
>>
>> Have you tried the autodefrag mount option, then defragging?
On Fri, Mar 18, 2016 at 12:02 PM, Hugo Mills wrote:
>The main thing you haven't tried here is mount -o degraded, which
> is the thing to do if you have a missing device in your array.
>
>Also, that kernel's not really all that good for a parity RAID
> array -- it's the
Austin S. Hemmelgarn posted on Fri, 18 Mar 2016 07:38:29 -0400 as
excerpted:
>>> 188 Command_Timeout        0x0032   100   099   000   Old_age   Always       -       8590065669
>>
>> Again, a non-zero raw value indicating command timeouts, probably due
>> to those bad seeks. It'll
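The raw value quoted above looks enormous, but on some drives (Seagate in particular) the 48-bit Command_Timeout raw value packs several 16-bit counters; splitting it out, assuming that encoding:

```python
raw = 8590065669  # raw Command_Timeout value from the SMART output above

# Split into 16-bit fields, low to high, assuming the packed-counter
# encoding some vendors use for this attribute.
fields = [(raw >> shift) & 0xFFFF for shift in (0, 16, 32)]
print(fields)  # → [5, 2, 2]
```

Read this way, the attribute records a handful of timeouts rather than billions, which is why the normalized value is still near 100.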
On 2016-03-15 18:29, Peter Chant wrote:
On 03/15/2016 03:52 PM, Duncan wrote:
Tho even with autodefrag, given the previous relatime and snapshotting,
it could be that the free-space in existing chunks is fragmented, which
over time and continued usage would force higher file fragmentation
On 03/18/2016 11:38 AM, Austin S. Hemmelgarn wrote:
> This one is tricky, as it's not very clearly defined in the SMART spec.
> Most manufacturers just count the total time the head has been loaded.
> There are some however who count the time the heads have been loaded,
> multiplied by the
On Fri, Mar 18, 2016 at 05:31:51PM -0600, Chris Murphy wrote:
> On Fri, Mar 18, 2016 at 12:02 PM, Hugo Mills wrote:
> >The main thing you haven't tried here is mount -o degraded, which
> > is the thing to do if you have a missing device in your array.
> >
> >Also, that
Austin S. Hemmelgarn posted on Fri, 18 Mar 2016 14:54:54 -0400 as
excerpted:
> As of right now, the top three brands of SSD as far as quality IMHO are
> Intel, Samsung, and Crucial. I usually go with Crucial myself because
> they are almost on-par with the other two, give more deterministic
>
Pete posted on Fri, 18 Mar 2016 18:16:50 + as excerpted:
> On 03/18/2016 09:17 AM, Duncan wrote:
>> So btrfs raid1 has data integrity and repair features that aren't
>> available on normal raid1, and thus is highly recommended.
>>
>> But, raid1 /does/ mean two copies of both data and
Paul Harrison posted on Fri, 18 Mar 2016 10:25:44 + as excerpted:
> Anyone? Would really appreciate a pointer or two!
I was hoping someone, presumably a dev with understanding of the code,
would reply here, as I'm just an admin-level list regular who uses btrfs
on my own machines and helps