Re: data DUP
On 27/04/13 19:53, Alex Elsayed wrote:
> When using btrfs, run a recent kernel :P.

Every software developer says that of what they produce. Newer is almost always better along many different axes.

> Honestly, even leaving aside the lack of backporting, there are other
> benefits to a recent kernel - things like cross-subvolume reflinks,
> btrfs device replace support being far more efficient than
> add/balance/remove/balance, and a bunch more.

Those are all features, none of which I use or have had to use yet.

If it will make you feel better, I did upgrade some systems today to the most recent Ubuntu release, which meant going from kernel 3.5 to 3.8.

Roger

--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
[PATCH RFC] btrfs-progs: don't allow deleting the default subvolume
A default subvolume set via the 'set-default' command can currently be deleted:

# Create btrfs and a subvolume
[root@localhost ~]# mkfs -t btrfs /dev/sda5
[root@localhost ~]# mount /dev/sda5 /mnt/btrfs
[root@localhost ~]# btrfs sub create /mnt/btrfs/vol_1
Create subvolume '/mnt/btrfs/vol_1'
[root@localhost ~]# btrfs sub list -a /mnt/btrfs
ID 256 gen 7 top level 5 path vol_1

# Set subvolid 256 as default volume
[root@localhost ~]# btrfs sub set-default 256 /mnt/btrfs
[root@localhost ~]# btrfs sub get-default /mnt/btrfs/
ID 256 gen 5 top level 5 path vol_1

# Delete it
[root@localhost ~]# btrfs sub delete /mnt/btrfs/vol_1/
Delete subvolume '/mnt/btrfs/vol_1'

# list shows nothing
[root@localhost ~]# btrfs sub list -a /mnt/btrfs

# mounting the default subvolume fails; it's been deleted
[root@localhost ~]# umount /mnt/btrfs
[root@localhost ~]# mount /dev/sda5 /mnt/btrfs
mount: mount(2) failed: No such file or directory

# Have to specify which subvolume to mount
[root@localhost ~]# mount -o subvol=/ /dev/sda5 /mnt/btrfs

It makes more sense to prevent deleting the default subvolume. Also fix some code style issues and magic return values.
Signed-off-by: Eryu Guan <guane...@gmail.com>
---
 cmds-subvolume.c | 58 +++++++++++++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 41 insertions(+), 17 deletions(-)

diff --git a/cmds-subvolume.c b/cmds-subvolume.c
index 74e2130..00712c3 100644
--- a/cmds-subvolume.c
+++ b/cmds-subvolume.c
@@ -198,10 +198,11 @@ static const char * const cmd_subvol_delete_usage[] = {
 static int cmd_subvol_delete(int argc, char **argv)
 {
-	int res, fd, len, e, cnt = 1, ret = 0;
+	int res, fd, vol_fd, len, e, cnt = 1, ret = 1;
 	struct btrfs_ioctl_vol_args	args;
 	char	*dname, *vname, *cpath;
 	char	*path;
+	u64 default_id, root_id;

 	if (argc < 2)
 		usage(cmd_subvol_delete_usage);
@@ -210,29 +211,25 @@ again:
 	path = argv[cnt];

 	res = test_issubvolume(path);
-	if(res<0){
+	if (res < 0) {
 		fprintf(stderr, "ERROR: error accessing '%s'\n", path);
-		ret = 12;
 		goto out;
 	}
-	if(!res){
+	if (!res) {
 		fprintf(stderr, "ERROR: '%s' is not a subvolume\n", path);
-		ret = 13;
 		goto out;
 	}

-	cpath = realpath(path, 0);
+	cpath = realpath(path, NULL);
 	dname = strdup(cpath);
 	dname = dirname(dname);
 	vname = strdup(cpath);
 	vname = basename(vname);
 	free(cpath);

-	if( !strcmp(vname,".") || !strcmp(vname,"..") ||
-	     strchr(vname, '/') ){
+	if (!strcmp(vname, ".") || !strcmp(vname, "..") ||
+	    strchr(vname, '/')) {
 		fprintf(stderr,
 			"ERROR: incorrect subvolume name ('%s')\n", vname);
-		ret = 14;
 		goto out;
 	}

@@ -240,31 +237,58 @@ again:
 	if (len == 0 || len >= BTRFS_VOL_NAME_MAX) {
 		fprintf(stderr, "ERROR: snapshot name too long ('%s)\n",
 			vname);
-		ret = 14;
 		goto out;
 	}

 	fd = open_file_or_dir(dname);
 	if (fd < 0) {
 		fprintf(stderr, "ERROR: can't access to '%s'\n", dname);
-		ret = 12;
 		goto out;
 	}

+	res = btrfs_list_get_default_subvolume(fd, &default_id);
+	if (res) {
+		fprintf(stderr, "ERROR: can't perform the search - %s\n",
+			strerror(errno));
+		goto out_close;
+	}
+	if (default_id == 0) {
+		fprintf(stderr, "ERROR: 'default' dir item not found\n");
+		goto out_close;
+	}
+
+	vol_fd = open_file_or_dir(path);
+	if (vol_fd < 0) {
+		fprintf(stderr, "ERROR: can't access to '%s'\n", path);
+		goto out_close;
+	}
+	res = btrfs_list_get_path_rootid(vol_fd, &root_id);
+	close(vol_fd);
+	if (res)
+		goto out_close;
+
+	if (root_id == default_id) {
+		fprintf(stderr,
+			"Unable to delete current default subvolume '%s/%s'\n",
+			dname, vname);
+		goto out_close;
+	}
+
 	printf("Delete subvolume '%s/%s'\n", dname, vname);
 	strncpy_null(args.name, vname);
 	res = ioctl(fd, BTRFS_IOC_SNAP_DESTROY, &args);
 	e = errno;
-	close(fd);
-
-	if(res < 0 ){
-		fprintf( stderr, "ERROR: cannot delete '%s/%s' - %s\n",
+	if (res < 0) {
+		fprintf(stderr, "ERROR: cannot delete '%s/%s' - %s\n",
 			dname, vname, strerror(e));
-		ret = 11;
-		goto out;
+		ret = 1;
+		goto out_close;
 	}

+	ret = 0;
+out_close:
+	close(fd);
 out:
 	cnt++;
 	if (cnt
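Until a check like the one in this patch lands, the same guard can be approximated from userspace by comparing the default subvolume id against the target's id before deleting. A sketch of that comparison, using output lines copied from the transcript above so the id-matching logic can be exercised without a live btrfs mount (the subvolume name and sample lines are illustrative):

```shell
# Sample outputs taken from the reproduction transcript in this thread:
get_default_out="ID 256 gen 5 top level 5 path vol_1"   # btrfs sub get-default
list_out="ID 256 gen 7 top level 5 path vol_1"          # btrfs sub list

# Field 2 is the subvolume id; the last field is the path.
default_id=$(echo "$get_default_out" | awk '{print $2}')
target_id=$(echo "$list_out" | awk -v p="vol_1" '$NF == p {print $2}')

if [ "$default_id" = "$target_id" ]; then
    echo "refusing to delete default subvolume vol_1"
fi
```

In a real script the two sample strings would come from running `btrfs sub get-default` and `btrfs sub list` against the mountpoint; the patch performs the equivalent comparison in C via btrfs_list_get_default_subvolume() and btrfs_list_get_path_rootid().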
Btrfs performance problem; metadata size to blame?
Hi guys,

My Btrfs fs has a performance problem which I hope you can help me solve.

I have a dataset of around 3.15 TiB that has lived on a ZFS volume for almost two years (ZRAID1, four 2TiB disks). In order to move to Btrfs I bought myself a 4TiB disk, with the idea of buying a second one next week and balancing into a RAID1 of two 4TiB disks. I created a single-disk Btrfs volume with the default mkfs options (no data duplication, metadata duplication on). Next I transferred my dataset to this disk (no problems there).

Today I tried to create a directory and noticed the Btrfs volume was awfully slow; it took a few seconds to create the directory and a few to delete it (which should be milliseconds, as you know). In fact, each and every operation ground the fs to a halt.

FS information:

# btrfs fi df /storage
Data: total=3.29TB, used=3.15TB
System, DUP: total=8.00MB, used=360.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=4.00GB, used=3.88GB
Metadata: total=8.00MB, used=0.00

# btrfs fi show
Label: 'storage' uuid: 3fa262cd-baa9-46dc-92a8-318c87166186
	Total devices 1 FS bytes used 3.16TB
	devid	1 size 3.64TB used 3.30TB path /dev/sdb

I suspect my performance blow has everything to do with the abysmally low amount of metadata space that is left, but since I am not a Btrfs guru I don't know whether this is truly the case and/or how to solve it. btrfs fi balance start -dusage=5 did not help.

Yours,
John
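[For what it's worth, the Metadata line above can be reduced to a single utilization figure. A quick sketch, using the exact line from the report so it can be checked standalone:]

```shell
# Compute metadata utilization from a "btrfs fi df" line.
# The sample line is copied verbatim from the report in this thread.
line="Metadata, DUP: total=4.00GB, used=3.88GB"

# Extract the total= and used= values, then print the percentage used.
echo "$line" |
  sed 's/.*total=\([0-9.]*\)GB, used=\([0-9.]*\)GB/\1 \2/' |
  awk '{ printf "%.0f%% of metadata space used\n", $2 / $1 * 100 }'
```

That works out to about 97% of the allocated metadata chunks in use, which is the kind of headroom at which metadata-heavy operations (directory creation, deletion) tend to slow down badly.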
Re: Btrfs performance problem; metadata size to blame?
I use Ubuntu with kernel 3.8.0-19-generic. I also tested with the latest live disk of Arch Linux; write performance was the same (bad). My mount options: rw,compress=lzo. Iotop does not show any strange disk activity.

2013/4/28 Harald Glatt m...@hachre.de:
> On Sun, Apr 28, 2013 at 9:10 PM, John . btrfsp...@gmail.com wrote:
>> Hi Harald,
>>
>> I did perform a defrag of the volume a few hours ago. This did not
>> make a difference. :(
>>
>> Yours,
>> John
>
> Try to defragment the root of the volume (e.g. the mountpoint). While
> it's mounted: btrfs fi defrag /path/to/mnt
> Then try performance again.
>
> What kernel version do you use? What are your mount options? Try to
> run iotop and see if there is any unusual activity...
Re: Btrfs performance problem; metadata size to blame?
On Sun, Apr 28, 2013 at 9:18 PM, John . btrfsp...@gmail.com wrote:
> I use Ubuntu with kernel 3.8.0-19-generic. I also tested with the
> latest live disk of Arch Linux; write performance was the same (bad).
> My mount options: rw,compress=lzo. Iotop does not show any strange
> disk activity.

Try mount -o inode_cache,space_cache,autodefrag - the first mount with the new options might take a while, and there might be disk activity for a while after mounting with this...
Re: Btrfs performance problem; metadata size to blame?
I just started up my usenet reader (which generates a lot of small files) and transferred two large files (7.5 GiB) at the same time. Performance seems all right again! :D Thanks!

Could you explain to me why each of the options could have a positive effect on performance? The wiki explains what the options imply, but not how they could help boost performance.

root@host:/storage# time mkdir test
real	0m0.001s
user	0m0.000s
sys	0m0.000s
root@test:/storage# time rm -Rf test
real	0m0.001s
user	0m0.000s
sys	0m0.000s

Because performance was good again I was able to spam the volume with data and the metadata size also grew. No problems in that department either. ;-)

2013/4/28 Harald Glatt m...@hachre.de:
> Try mount -o inode_cache,space_cache,autodefrag - first mount with the
> new options might take a while, also there might be disk activity for
> a while after mounting with this...
Re: Btrfs performance problem; metadata size to blame?
On Sun, Apr 28, 2013 at 9:53 PM, John . btrfsp...@gmail.com wrote:
> I just started up my usenet reader (which generates a lot of small
> files) and transferred two large files (7.5 GiB) at the same time.
> Performance seems all right again! :D Thanks!
>
> Could you explain to me why each of the options could have a positive
> effect on performance? The wiki explains what the options imply, but
> not how they could help boost performance.

Great, glad it helped! I'm not a dev so I can only give vague and possibly wrong answers here:

- autodefrag: would actually negatively impact immediate performance, but will make a difference compared to no defrag over time
- inode_cache: is apparently caching the latest inode number(?) that has been used, so that when a new one has to be given out it is immediately available instead of being searched for again
- space_cache: caches the amount of free space; otherwise each free-space 'question' to the volume would require it to recalculate it

If you want better answers you should wait until a dev answers or corrects me :) Or come onto #btrfs on irc.freenode.net and ask there.
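[Putting the options from this thread together, a hypothetical /etc/fstab entry might look like the following. The device path and mountpoint are placeholders, and whether each option is worth keeping permanently is a judgment call:]

```shell
# /etc/fstab sketch - device, mountpoint, and option mix are illustrative only.
# compress=lzo was already in use earlier in this thread; inode_cache and
# autodefrag take effect per mount, while space_cache persists on disk once
# the free-space cache has been written.
/dev/sdb  /storage  btrfs  rw,compress=lzo,inode_cache,space_cache,autodefrag  0  0
```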
Re: Btrfs performance problem; metadata size to blame?
On 28/04/13 12:57, Harald Glatt wrote:
> If you want better answers ...

There is a lot of good information at the wiki and it does see regular updates. For example the performance mount options are on this page:

https://btrfs.wiki.kernel.org/index.php/Mount_options

Roger
Re: Btrfs performance problem; metadata size to blame?
On Sun, Apr 28, 2013 at 2:17 PM, Roger Binns rog...@rogerbinns.com wrote:
> There is a lot of good information at the wiki and it does see regular
> updates. For example the performance mount options are on this page:
> https://btrfs.wiki.kernel.org/index.php/Mount_options
Re: Btrfs performance problem; metadata size to blame?
[how'd that send button get there]

space_cache is the default, set by mkfs, for a year or so now. It's sticky, so even if it wasn't, you'd only need to mount with it once.