Re: Blocket for more than 120 seconds

2013-12-16 Thread Duncan
Hans-Kristian Bakke posted on Mon, 16 Dec 2013 01:06:36 +0100 as excerpted: torrents are really only one thing my storage server gets hammered with. It also does a lot more IO-intensive stuff. I actually run enterprise storage drives in a Supermicro server for a reason, even if it is my home

Re: Blocket for more than 120 seconds

2013-12-16 Thread Hans-Kristian Bakke
Stupid me, I completely forgot that you can run multi-disk arrays with just block-level partitions, just like with md raid! It will introduce a rather significant management overhead in my case though, as managing several individual partitions per drive is quite annoying with so many drives. What
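
A multi-device btrfs filesystem accepts partitions as members the same way md raid does; a minimal sketch of what that setup looks like, with hypothetical device names and partition sizes:

    # parted /dev/sda -- mklabel gpt mkpart data 1MiB 50%
    # parted /dev/sdb -- mklabel gpt mkpart data 1MiB 50%
    # mkfs.btrfs -d raid1 -m raid1 /dev/sda1 /dev/sdb1
    # mount /dev/sda1 /mnt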

Re: Blocket for more than 120 seconds

2013-12-16 Thread Duncan
Hans-Kristian Bakke posted on Mon, 16 Dec 2013 11:55:40 +0100 as excerpted: Stupid me, I completely forgot that you can run multi-disk arrays with just block-level partitions, just like with md raid! It will introduce a rather significant management overhead in my case though, as managing

Re: Blocket for more than 120 seconds

2013-12-16 Thread Chris Mason
On Sun, 2013-12-15 at 03:35 +0100, Hans-Kristian Bakke wrote: I have done some more testing. I turned off everything using the disk and only did defrag. I have created a script that gives me a list of the files with the most extents. I started from the top to reduce the fragmentation of the

Re: Blocket for more than 120 seconds

2013-12-16 Thread Hans-Kristian Bakke
Ok, I guess the essence has been lost in the meta discussion. Basically I get blocking for more than 120 seconds during these workloads: - defragmenting several large fragmented files in succession (leaving time for btrfs to finish writing each file). This has *always* happened in my array,
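
For reference, the per-file defragmentation described here maps onto the btrfs CLI roughly as follows; the file path is a hypothetical stand-in:

    # btrfs filesystem defragment -v /storage/storage-vol0/large-file.iso

-v lists each file as it is processed; the command can return before writeback of the relocated extents completes, which is why the poster waits between files.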

Re: Blocket for more than 120 seconds

2013-12-16 Thread Chris Mason
On Mon, 2013-12-16 at 17:32 +0100, Hans-Kristian Bakke wrote: Ok, I guess the essence has been lost in the meta discussion. Basically I get blocking for more than 120 seconds during these workloads: - defragmenting several large fragmented files in succession (leaving time for btrfs to

Re: Blocket for more than 120 seconds

2013-12-16 Thread Hans-Kristian Bakke
I have explicitly set compress=lzo, and later noatime just to test now, else it's just the default 3.12.4 options (or 3.13-rc2 when I tested that). To make sure, here are my btrfs mounts from /proc/mounts: /dev/sdl /btrfs btrfs rw,noatime,compress=lzo,space_cache 0 0 /dev/sdl /storage/storage-vol0
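
Mount options on a live btrfs filesystem can be checked and adjusted without unmounting; a quick sketch using the options quoted above:

    # grep btrfs /proc/mounts
    # mount -o remount,noatime,compress=lzo /btrfs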

Re: Blocket for more than 120 seconds

2013-12-16 Thread Chris Mason
On Mon, 2013-12-16 at 19:22 +0100, Hans-Kristian Bakke wrote: I have explicitly set compress=lzo, and later noatime just to test now, else it's just the default 3.12.4 options (or 3.13-rc2 when I tested that). To make sure, here are my btrfs mounts from /proc/mounts: /dev/sdl /btrfs btrfs

Re: Blocket for more than 120 seconds

2013-12-16 Thread Hans-Kristian Bakke
No problem. You have to wait a bit though, as the volume is currently going through a reduction in the number of drives from 8 to 7 and I do not feel comfortable stalling the volume while that is happening. I will report back with the logs later on. Best regards, Hans-Kristian Bakke On 16 December 2013
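
The drive reduction mentioned here is an online operation that migrates data off the departing device before releasing it, which is why interrupting it mid-run is unappealing; a minimal sketch with a hypothetical device name:

    # btrfs device delete /dev/sdx /btrfs
    # btrfs filesystem show          (confirms the remaining 7 devices)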

Re: Blocket for more than 120 seconds

2013-12-15 Thread Duncan
Hans-Kristian Bakke posted on Sun, 15 Dec 2013 03:35:53 +0100 as excerpted: I have done some more testing. I turned off everything using the disk and only did defrag. I have created a script that gives me a list of the files with the most extents. I started from the top to reduce the

Re: Blocket for more than 120 seconds

2013-12-15 Thread Hans-Kristian Bakke
Thank you for your very thorough answer, Duncan. Just to clear up a couple of questions. # Backups The backups I am speaking of are backups of data on the btrfs filesystem to another place. The btrfs filesystem sees this as large reads at about 100 mbit/s, at the moment for about a week continuously.

Re: Blocket for more than 120 seconds

2013-12-15 Thread Duncan
Hans-Kristian Bakke posted on Sun, 15 Dec 2013 15:51:37 +0100 as excerpted: # Regarding torrents and preallocation I have actually turned preallocation on specifically in rtorrent thinking that it did btrfs a favour like with ext4 (system.file_allocate.set = yes). It is easy to turn it off.

Re: Blocket for more than 120 seconds

2013-12-15 Thread Charles Cazabon
Chris Murphy li...@colorremedies.com wrote: On Dec 14, 2013, at 4:19 PM, Hans-Kristian Bakke hkba...@gmail.com wrote: # btrfs fi df /storage/storage-vol0/ Data, RAID10: total=13.89TB, used=12.99TB System, RAID10: total=64.00MB, used=1.19MB System: total=4.00MB, used=0.00 Metadata,
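
In btrfs fi df output, the gap between total and used within each chunk type is space already allocated to chunks but not yet filled; from the quoted Data line:

    # echo '13.89 - 12.99' | bc
    .90

i.e. roughly 0.9TB of headroom inside allocated Data chunks, separate from any unallocated raw space on the devices.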

Blocket for more than 120 seconds

2013-12-14 Thread Hans-Kristian Bakke
Hi. During high disk loads, like backups combined with lots of writers, rsync at high speed locally, or btrfs defrag, I always get these messages, and everything grinds to a halt on the btrfs filesystem: [ 3123.062229] INFO: task rtorrent:8431 blocked for more than 120 seconds. [ 3123.062251]
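
The 120-second figure comes from the kernel's hung-task watchdog; the warning (not the underlying stall) can be inspected and silenced through procfs:

    # cat /proc/sys/kernel/hung_task_timeout_secs
    120
    # echo 0 > /proc/sys/kernel/hung_task_timeout_secs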

Re: Blocket for more than 120 seconds

2013-12-14 Thread Chris Murphy
On Dec 14, 2013, at 1:30 PM, Hans-Kristian Bakke hkba...@gmail.com wrote: During high disk loads, like backups combined with lots of writers, rsync at high speed locally, or btrfs defrag, I always get these messages, and everything grinds to a halt on the btrfs filesystem: [ 3123.062229]

Re: Blocket for more than 120 seconds

2013-12-14 Thread Hans-Kristian Bakke
Looking into triggering the error again and capturing dmesg and sysrq output, but here are the other two: # btrfs fi show Label: none uuid: 9302fc8f-15c6-46e9-9217-951d7423927c Total devices 8 FS bytes used 13.00TB devid 4 size 3.64TB used 3.48TB path /dev/sdt devid 3 size 3.64TB
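
The sysrq dump referred to is typically the 'w' trigger, which writes the stack of every uninterruptible (blocked) task to the kernel log:

    # echo 1 > /proc/sys/kernel/sysrq       (enable sysrq if needed)
    # echo w > /proc/sysrq-trigger
    # dmesg | tail -n 80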

Re: Blocket for more than 120 seconds

2013-12-14 Thread Chris Murphy
On Dec 14, 2013, at 4:19 PM, Hans-Kristian Bakke hkba...@gmail.com wrote: Looking into triggering the error again and capturing dmesg and sysrq output, but here are the other two: # btrfs fi show Label: none uuid: 9302fc8f-15c6-46e9-9217-951d7423927c Total devices 8 FS bytes used 13.00TB

Re: Blocket for more than 120 seconds

2013-12-14 Thread Hans-Kristian Bakke
When I look at the entire FS with df-like tools, it is reported as 89.4% used (26638.65 of 29808.2 GB). But this is shared amongst both data and metadata, I guess? I do know that ~90%+ seems full, but it is still around 3TB in my case! Are the percentage rules of old times still valid with modern
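
The quoted figures check out; with the numbers above:

    # echo 'scale=4; 26638.65 / 29808.2' | bc
    .8936
    # echo '29808.2 - 26638.65' | bc
    3169.55

i.e. about 89.4% used and roughly 3.17TB of combined data-and-metadata space left.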

Re: Blocket for more than 120 seconds

2013-12-14 Thread Chris Murphy
On Dec 14, 2013, at 5:28 PM, Hans-Kristian Bakke hkba...@gmail.com wrote: When I look at the entire FS with df-like tools, it is reported as 89.4% used (26638.65 of 29808.2 GB). But this is shared amongst both data and metadata, I guess? Yes. I do know that ~90%+ seems full, but it is

Re: Blocket for more than 120 seconds

2013-12-14 Thread Hans-Kristian Bakke
I have done some more testing. I turned off everything using the disk and only did defrag. I have created a script that gives me a list of the files with the most extents. I started from the top to reduce the fragmentation of the worst files. The most fragmented file was a file of about 32GB with
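
The extent-listing script itself is not shown in the archive; an approximation using filefrag, with the search path and size cutoff as assumptions, could look like:

    # find /storage/storage-vol0 -type f -size +1G -exec filefrag {} + 2>/dev/null \
        | awk '{print $(NF-2), $0}' | sort -rn | head -20

filefrag prints 'path: N extents found' per file, so $(NF-2) is the extent count; sorting on it in descending order surfaces the worst files first.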

Re: Blocket for more than 120 seconds

2013-12-14 Thread George Mitchell
On 12/14/2013 04:28 PM, Hans-Kristian Bakke wrote: I would normally expect that there is no difference between 1TB free space on a FS that is 2TB in total and 1TB free space on a filesystem that is 30TB in total, other than my sense of urgency, and that you would probably expect data growth to be more