Re: [PATCH] btrfs: properly track when rescan worker is running

2016-08-15 Thread Qu Wenruo
At 08/16/2016 12:10 AM, Jeff Mahoney wrote: The qgroup_flags field is overloaded such that it reflects the on-disk status of qgroups and the runtime state. The BTRFS_QGROUP_STATUS_FLAG_RESCAN flag is used to indicate that a rescan operation is in progress, but if the file system is unmounted

Re: About minimal device number for RAID5/6

2016-08-15 Thread Qu Wenruo
At 08/15/2016 10:10 PM, Austin S. Hemmelgarn wrote: On 2016-08-15 10:08, Anand Jain wrote: IMHO it's better to warn the user about a 2-device RAID5 or a 3-device RAID6. Any comment is welcomed. Based on looking at the code, we do in fact support 2/3 devices for raid5/6 respectively.

Re: btrfs quota issues

2016-08-15 Thread Qu Wenruo
At 08/16/2016 03:11 AM, Rakesh Sankeshi wrote: yes, subvol level.
qgroupid   rfer       excl       max_rfer   max_excl   parent  child
--------   ----       ----       --------   --------   ------  -----
0/5        16.00KiB   16.00KiB   none       none       ---     ---

Re: BTRFS constantly reports "No space left on device" even with a huge unallocated space

2016-08-15 Thread Chris Murphy
On Mon, Aug 15, 2016 at 5:12 PM, Ronan Chagas wrote: > Hi guys! > > It happened again. The computer was completely unusable. The only useful > message I saw was this one: > > http://img.ctrlv.in/img/16/08/16/57b24b0bb2243.jpg > > Does it help? > > I decided to format and

Re: About minimal device number for RAID5/6

2016-08-15 Thread Henk Slager
On Mon, Aug 15, 2016 at 8:30 PM, Hugo Mills wrote: > On Mon, Aug 15, 2016 at 10:32:25PM +0800, Anand Jain wrote: >> >> >> On 08/15/2016 10:10 PM, Austin S. Hemmelgarn wrote: >> >On 2016-08-15 10:08, Anand Jain wrote: >> >> >> >> >> IMHO it's better to warn user about 2

Re: Huge load on btrfs subvolume delete

2016-08-15 Thread Daniel Caillibaud
On 15/08/16 at 10:16, "Austin S. Hemmelgarn" wrote: ASH> With respect to databases, you might consider backing them up separately ASH> too. In many cases for something like an SQL database, it's a lot more ASH> flexible to have a dump of the database as a backup than it
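For what it's worth, a minimal sketch of the kind of separate database dump being suggested, assuming a MySQL instance and hypothetical /backup and /srv paths; nothing here is from the poster's actual setup:

    # Dump the database to plain SQL before taking the btrfs snapshot, so the
    # backup does not rely on replaying a crash-consistent filesystem image.
    mysqldump --single-transaction --all-databases > /backup/sql/all-$(date +%F).sql

    # Then snapshot the rest of the data as usual (read-only snapshot).
    btrfs subvolume snapshot -r /srv/data /srv/snapshots/data-$(date +%F)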

Re: Extents for a particular subvolume

2016-08-15 Thread Graham Cobb
On 03/08/16 22:55, Graham Cobb wrote: > On 03/08/16 21:37, Adam Borowski wrote: >> On Wed, Aug 03, 2016 at 08:56:01PM +0100, Graham Cobb wrote: >>> Are there any btrfs commands (or APIs) to allow a script to create a >>> list of all the extents referred to within a particular (mounted) >>>
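No single btrfs subcommand is quoted in this thread for that; one rough approximation, assuming the subvolume is mounted at a hypothetical /mnt/subvol, is to dump each file's FIEMAP extents with filefrag:

    # List every extent (logical/physical offset and length) referenced by
    # files in the subvolume; shared extents appear once per referencing file.
    find /mnt/subvol -xdev -type f -exec filefrag -v {} + > extents.txt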

Re: btrfs quota issues

2016-08-15 Thread Rakesh Sankeshi
yes, subvol level.
qgroupid   rfer        excl        max_rfer    max_excl   parent  child
--------   ----        ----        --------    --------   ------  -----
0/5        16.00KiB    16.00KiB    none        none       ---     ---
0/258      119.48GiB   119.48GiB   200.00GiB
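For reference, a hedged sketch of the usual commands for inspecting and adjusting the limits shown above, assuming the filesystem is mounted at a hypothetical /mnt:

    # Show per-qgroup usage, parent/child relations and the configured limits.
    btrfs qgroup show -pcre /mnt

    # Raise (or clear with "none") the max referenced limit on qgroup 0/258.
    btrfs qgroup limit 300G 0/258 /mnt

    # If the numbers look stale, force an accounting rescan.
    btrfs quota rescan /mnt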

Re: About minimal device number for RAID5/6

2016-08-15 Thread Hugo Mills
On Mon, Aug 15, 2016 at 10:32:25PM +0800, Anand Jain wrote: > > > On 08/15/2016 10:10 PM, Austin S. Hemmelgarn wrote: > >On 2016-08-15 10:08, Anand Jain wrote: > >> > >> > IMHO it's better to warn user about 2 devices RAID5 or 3 devices RAID6. > > Any comment is welcomed. > >

Re: [GIT PULL] [PATCH v4 00/26] Delete CURRENT_TIME and CURRENT_TIME_SEC macros

2016-08-15 Thread Greg KH
On Sat, Aug 13, 2016 at 03:48:12PM -0700, Deepa Dinamani wrote: > The series is aimed at getting rid of CURRENT_TIME and CURRENT_TIME_SEC > macros. > The macros are not y2038 safe. There is no plan to transition them into being > y2038 safe. > ktime_get_* APIs can be used in their place. And,

[PATCH] btrfs: properly track when rescan worker is running

2016-08-15 Thread Jeff Mahoney
The qgroup_flags field is overloaded such that it reflects the on-disk status of qgroups and the runtime state. The BTRFS_QGROUP_STATUS_FLAG_RESCAN flag is used to indicate that a rescan operation is in progress, but if the file system is unmounted while a rescan is running, the rescan operation

Re: About minimal device number for RAID5/6

2016-08-15 Thread Austin S. Hemmelgarn
On 2016-08-15 10:32, Anand Jain wrote: On 08/15/2016 10:10 PM, Austin S. Hemmelgarn wrote: On 2016-08-15 10:08, Anand Jain wrote: IMHO it's better to warn the user about a 2-device RAID5 or a 3-device RAID6. Any comment is welcomed. Based on looking at the code, we do in fact support 2/3

Re: About minimal device number for RAID5/6

2016-08-15 Thread Anand Jain
On 08/15/2016 10:10 PM, Austin S. Hemmelgarn wrote: On 2016-08-15 10:08, Anand Jain wrote: IMHO it's better to warn the user about a 2-device RAID5 or a 3-device RAID6. Any comment is welcomed. Based on looking at the code, we do in fact support 2/3 devices for raid5/6 respectively.

Re: Huge load on btrfs subvolume delete

2016-08-15 Thread Austin S. Hemmelgarn
On 2016-08-15 10:06, Daniel Caillibaud wrote: On 15/08/16 at 08:32, "Austin S. Hemmelgarn" wrote: ASH> On 2016-08-15 06:39, Daniel Caillibaud wrote: ASH> > I'm a newbie with btrfs, and I have problems with high load after each btrfs subvolume delete […] ASH> Before I start

Re: About minimal device number for RAID5/6

2016-08-15 Thread Anand Jain
Have a look at this: http://www.spinics.net/lists/linux-btrfs/msg54779.html -- the RAID5&6 devs_min values are in the context of a degraded volume, while the RAID1&10 devs_min values are in the context of a healthy volume. RAID56 is correct. We already have devs_max to know the number of devices in

Re: About minimal device number for RAID5/6

2016-08-15 Thread Austin S. Hemmelgarn
On 2016-08-15 10:08, Anand Jain wrote: IMHO it's better to warn the user about a 2-device RAID5 or a 3-device RAID6. Any comment is welcomed. Based on looking at the code, we do in fact support 2/3 devices for raid5/6 respectively. Personally, I agree that we should warn when trying to do this,

Re: About minimal device number for RAID5/6

2016-08-15 Thread Anand Jain
IMHO it's better to warn the user about a 2-device RAID5 or a 3-device RAID6. Any comment is welcomed. Based on looking at the code, we do in fact support 2/3 devices for raid5/6 respectively. Personally, I agree that we should warn when trying to do this, but I absolutely don't think we should

Re: Huge load on btrfs subvolume delete

2016-08-15 Thread Daniel Caillibaud
On 15/08/16 at 08:32, "Austin S. Hemmelgarn" wrote: ASH> On 2016-08-15 06:39, Daniel Caillibaud wrote: ASH> > I'm a newbie with btrfs, and I have problems with high load after each btrfs subvolume delete […] ASH> Before I start explaining possible solutions, it helps to explain

Re: How to stress test raid6 on 122 disk array

2016-08-15 Thread Austin S. Hemmelgarn
On 2016-08-15 09:39, Martin wrote: That really is the case, there's currently no way to do this with BTRFS. You have to keep in mind that the raid5/6 code only went into the mainline kernel a few versions ago, and it's still pretty immature as far as kernel code goes. I don't know when (if

Re: How to stress test raid6 on 122 disk array

2016-08-15 Thread Chris Murphy
On Mon, Aug 15, 2016 at 7:38 AM, Martin wrote: >> Looking at the kernel log itself, you've got a ton of write errors on >> /dev/sdap. I would suggest checking that particular disk with smartctl, and >> possibly checking the other hardware involved (the storage controller

Re: How to stress test raid6 on 122 disk array

2016-08-15 Thread Austin S. Hemmelgarn
On 2016-08-15 09:38, Martin wrote: Looking at the kernel log itself, you've got a ton of write errors on /dev/sdap. I would suggest checking that particular disk with smartctl, and possibly checking the other hardware involved (the storage controller and cabling). I would kind of expect BTRFS
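A short sketch of the smartctl checks being suggested, using /dev/sdap from the log in this thread; exact attributes and output vary by drive:

    # Overall health status, error counters and SMART attributes for the suspect disk.
    smartctl -a /dev/sdap

    # Kick off a long self-test and read the results once it completes.
    smartctl -t long /dev/sdap
    smartctl -l selftest /dev/sdap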

Re: How to stress test raid6 on 122 disk array

2016-08-15 Thread Martin
> That really is the case, there's currently no way to do this with BTRFS. > You have to keep in mind that the raid5/6 code only went into the mainline > kernel a few versions ago, and it's still pretty immature as far as kernel > code goes. I don't know when (if ever) such a feature might get

Re: How to stress test raid6 on 122 disk array

2016-08-15 Thread Martin
> Looking at the kernel log itself, you've got a ton of write errors on > /dev/sdap. I would suggest checking that particular disk with smartctl, and > possibly checking the other hardware involved (the storage controller and > cabling). > > I would kind of expect BTRFS to crash with that many

Re: How to stress test raid6 on 122 disk array

2016-08-15 Thread Chris Murphy
On Mon, Aug 15, 2016 at 6:19 AM, Martin wrote: > > I have now had the first crash, can you take a look if I have provided > the needed info? > > https://bugzilla.kernel.org/show_bug.cgi?id=153141 [337406.626175] BTRFS warning (device sdq): lost page write due to IO error

Re: [PATCH v4 10/26] fs: btrfs: Use ktime_get_real_ts for root ctime

2016-08-15 Thread David Sterba
On Sat, Aug 13, 2016 at 03:48:22PM -0700, Deepa Dinamani wrote: > btrfs_root_item maintains the ctime for root updates. > This is not part of vfs_inode. > > Since current_time() uses struct inode* as an argument > as Linus suggested, this cannot be used to update root > times unless we modify

Re: How to stress test raid6 on 122 disk array

2016-08-15 Thread Austin S. Hemmelgarn
On 2016-08-15 08:19, Martin wrote: I'm not sure what Arch does to their kernels differently from kernel.org kernels. But bugzilla.kernel.org offers a Mainline and a Fedora drop-down for identifying the kernel source tree. IIRC, they're pretty close to mainline kernels. I don't think they

Re: How to stress test raid6 on 122 disk array

2016-08-15 Thread Austin S. Hemmelgarn
On 2016-08-15 08:19, Martin wrote: The smallest disk of the 122 is 500GB. Is it possible to have btrfs see each disk as only e.g. 10GB? That way I can corrupt and resilver more disks over a month. Well, at least you can easily partition the devices for that to happen. Can it be done with

Re: Huge load on btrfs subvolume delete

2016-08-15 Thread Austin S. Hemmelgarn
On 2016-08-15 06:39, Daniel Caillibaud wrote: Hi, I'm a newbie with btrfs, and I have problems with high load after each btrfs subvolume delete. I use snapshots on lxc hosts under debian jessie with - kernel 4.6.0-0.bpo.1-amd64 - btrfs-progs 4.6.1-1~bpo8 For backup, I have each day, for each

Re: How to stress test raid6 on 122 disk array

2016-08-15 Thread Martin
>> The smallest disk of the 122 is 500GB. Is it possible to have btrfs >> see each disk as only e.g. 10GB? That way I can corrupt and resilver >> more disks over a month. > > Well, at least you can easily partition the devices for that to happen. Can it be done with btrfs or should I do it with
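Nothing in the thread gives a btrfs option for capping per-device usage, so the partitioning route would look roughly like this; the device names are placeholders, not the reporter's actual 122-disk array:

    # Carve a 10GiB partition out of each disk and build the array on the
    # partitions instead of the whole devices.
    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        parted -s "$dev" mklabel gpt mkpart primary 1MiB 10GiB
    done
    mkfs.btrfs -d raid6 -m raid6 /dev/sdb1 /dev/sdc1 /dev/sdd1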

Re: How to stress test raid6 on 122 disk array

2016-08-15 Thread Martin
>> I'm not sure what Arch does to their kernels differently from >> kernel.org kernels. But bugzilla.kernel.org offers a Mainline and >> Fedora drop-down for identifying the kernel source tree. > > IIRC, they're pretty close to mainline kernels. I don't think they have any > patches in the

Re: About minimal device number for RAID5/6

2016-08-15 Thread Austin S. Hemmelgarn
On 2016-08-15 03:50, Qu Wenruo wrote: Hi, Recently I found that the manpage of mkfs says the minimal device number for RAID5 and RAID6 is 2 and 3 respectively. Personally speaking, although I understand that RAID5/6 only require 1 or 2 devices for the parity stripes, it is still quite strange behavior. Under most

Re: checksum error in metadata node - best way to move root fs to new drive?

2016-08-15 Thread Austin S. Hemmelgarn
On 2016-08-12 11:06, Duncan wrote: Austin S. Hemmelgarn posted on Fri, 12 Aug 2016 08:04:42 -0400 as excerpted: On a file server? No, I'd ensure proper physical security is established and make sure it's properly secured against network-based attacks and then not worry about it. Unless you

Huge load on btrfs subvolume delete

2016-08-15 Thread Daniel Caillibaud
Hi, I'm a newbie with btrfs, and I have problems with high load after each btrfs subvolume delete. I use snapshots on lxc hosts under debian jessie with - kernel 4.6.0-0.bpo.1-amd64 - btrfs-progs 4.6.1-1~bpo8 For backup, I have each day, for each subvolume btrfs subvolume snapshot -r $subvol $snap #
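The daily rotation described above appears to be roughly the following; a minimal sketch with placeholder variables, not the poster's actual script:

    # Daily: take a read-only snapshot of each subvolume for backup.
    btrfs subvolume snapshot -r "$subvol" "$snap"

    # Later: drop the previous day's snapshot. The actual space reclaim is done
    # asynchronously by the cleaner thread, which is where the load shows up.
    btrfs subvolume delete "$old_snap"

    # Optionally wait for queued deletions to finish (if btrfs-progs has it).
    btrfs subvolume sync "$mountpoint"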

About minimal device number for RAID5/6

2016-08-15 Thread Qu Wenruo
Hi, Recently I found that the manpage of mkfs says the minimal device number for RAID5 and RAID6 is 2 and 3 respectively. Personally speaking, although I understand that RAID5/6 only require 1 or 2 devices for the parity stripes, it is still quite strange behavior. In most cases, users use raid5/6 for striping
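For context, the minimal layouts under discussion can be created as follows; a sketch with placeholder device names:

    # 2-device raid5: one data strip plus one parity strip per stripe.
    mkfs.btrfs -f -d raid5 -m raid5 /dev/vdb /dev/vdc

    # 3-device raid6: one data strip plus two parity strips per stripe.
    mkfs.btrfs -f -d raid6 -m raid6 /dev/vdb /dev/vdc /dev/vdd

At these minimal counts the parity overhead equals or exceeds what raid1 would cost, which is the reason a warning is being discussed.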

Re: [PATCH] code cleanup

2016-08-15 Thread Omar Sandoval
On Sun, Aug 14, 2016 at 04:11:31PM -0400, Harinath Nampally wrote: > This patch checks the ret value and jumps to cleanup in case the > btrfs_add_system_chunk call fails > > Signed-off-by: Harinath Nampally > --- > fs/btrfs/volumes.c | 11 +++ > 1 file changed, 7