Re: Very slow filesystem

2014-06-06 Thread Russell Coker
On Fri, 6 Jun 2014 14:06:53 Mitch Harder wrote: > Every time you update your database, btrfs is going to update > whichever 128 KiB blocks need to be modified. > > Even for a tiny modification, the new compressed block may be slightly > more or slightly less than 128 KiB. > > If you have a 1-2 GB …
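To make the copy-on-write cost concrete, here is a back-of-envelope sketch of the point being made. The file size and update size are illustrative assumptions, not figures from the thread; only the 128 KiB maximum compressed extent size comes from the discussion.

```python
# Rough write-amplification estimate for CoW rewrites of compressed
# extents on btrfs. Sizes below are assumed for illustration.
EXTENT = 128 * 1024             # max compressed extent size (128 KiB)

file_size = 1 * 1024**3         # hypothetical 1 GiB database file
extents = file_size // EXTENT   # number of 128 KiB extents in the file
print(extents)                  # 8192

update = 100                    # a "tiny" 100-byte row modification
# Even a tiny change forces a rewrite of the whole containing extent:
print(EXTENT // update)         # ~1310x write amplification
```

This is why small random updates to a large compressed file fragment it so quickly: every update allocates a fresh extent elsewhere on disk.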

Re: Very slow filesystem

2014-06-06 Thread Duncan
Mitch Harder posted on Fri, 06 Jun 2014 14:06:53 -0500 as excerpted: > Every time you update your database, btrfs is going to update whichever > 128 KiB blocks need to be modified. > > Even for a tiny modification, the new compressed block may be slightly > more or slightly less than 128 KiB. FW…

Re: Very slow filesystem

2014-06-06 Thread Mitch Harder
On Thu, Jun 5, 2014 at 2:53 PM, Duncan <1i5t5.dun...@cox.net> wrote: > Timofey Titovets posted on Thu, 05 Jun 2014 19:13:08 +0300 as excerpted: > >> 2014-06-05 18:52 GMT+03:00 Igor M : >>> One more question. Is there any other way to find out file >>> fragmentation ? >>> I just copied 35Gb file on …

Re: Very slow filesystem

2014-06-05 Thread Duncan
Timofey Titovets posted on Thu, 05 Jun 2014 19:13:08 +0300 as excerpted: > 2014-06-05 18:52 GMT+03:00 Igor M : >> One more question. Is there any other way to find out file >> fragmentation ? >> I just copied 35Gb file on new btrfs filesystem (compressed) and >> filefrag reports 282275 extents found …

Re: Very slow filesystem

2014-06-05 Thread Timofey Titovets
2014-06-05 18:52 GMT+03:00 Igor M : > One more question. Is there any other way to find out file fragmentation ? > I just copied 35Gb file on new btrfs filesystem (compressed) and > filefrag reports 282275 extents found. This can't be right ? Yes it can, because filefrag shows each compressed block (128 KiB) a…
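The reported extent count is in fact about what forced compression predicts. A quick sanity check, assuming "35Gb" means 35 GiB (the thread does not say which unit was meant):

```python
# With compress-force, btrfs splits file data into extents of at most
# 128 KiB, so filefrag on a large file is *expected* to report a huge
# extent count even when the file is laid out reasonably.
EXTENT = 128 * 1024          # 128 KiB maximum compressed extent
file_size = 35 * 1024**3     # assumed: 35 GiB

print(file_size // EXTENT)   # 286720 -- same order as the 282275 reported
```

The small gap between 286720 and 282275 is plausible if some extents are merged or the file is slightly smaller than an exact 35 GiB.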

Re: Very slow filesystem

2014-06-05 Thread Igor M
One more question. Is there any other way to find out file fragmentation? I just copied a 35Gb file onto a new btrfs filesystem (compressed) and filefrag reports 282275 extents found. This can't be right? On Thu, Jun 5, 2014 at 5:05 AM, Duncan <1i5t5.dun...@cox.net> wrote: > Igor M posted on Thu, 05 Jun 2014 …

Re: Very slow filesystem

2014-06-05 Thread Russell Coker
On Thu, 5 Jun 2014 09:50:53 Igor M wrote: > But data to this big tables is only appended, it's never deleted. So > no rewrites should be happening. When you write to the big tables the indexes will be rewritten. Indexes can be in the same file as table data or as separate files depending on what …

Re: Very slow filesystem

2014-06-05 Thread Erkki Seppala
Erkki Seppala writes: > If the number is hitting your seek rate (ie. 1/0.0075 for 7.5 ms seek = > 133), then fragmentation is sure to be blamed. Actually the number may very well be off by at least a factor of two (I tested that my device did 400 tps when I expected 200; perhaps bulk transfers c…

Re: Very slow filesystem

2014-06-05 Thread Erkki Seppala
Igor M writes: > Why btrfs becomes EXTREMELY slow after some time (months) of usage ? Have you tried iostat from sysstat to see the number of IO-operations performed per second (tps) on the devices when it is performing badly? If the number is hitting your seek rate (ie. 1/0.0075 for 7.5 ms seek = 133), then fragmentation is sure to be blamed …
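The seek-rate figure quoted above is a simple reciprocal. A minimal worked version of that arithmetic, using the 7.5 ms average seek time Erkki assumes for a spinning disk:

```python
# Back-of-envelope IOPS ceiling for a seek-bound HDD: if every IO
# costs a full seek, the drive cannot exceed 1/seek_time operations
# per second. 7.5 ms is an assumed typical HDD average seek time.
seek_time = 0.0075            # seconds per seek
tps_ceiling = 1 / seek_time

print(round(tps_ceiling))     # 133
```

If iostat shows tps pinned near this ceiling while throughput is low, the workload is dominated by seeks, which on btrfs usually points to fragmentation.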

Re: Very slow filesystem

2014-06-05 Thread Igor M
Thanks for the explanation. I did read the wiki, but didn't see this mentioned. I saw the 'nodatacow' mount option mentioned, but that disables compression and I need compression. Also I was wrong about file sizes; files can go up to 70GB. But data to these big tables is only appended, it's never deleted. So no rewrites should be happening …

Re: Very slow filesystem

2014-06-04 Thread Duncan
Fajar A. Nugraha posted on Thu, 05 Jun 2014 10:22:49 +0700 as excerpted: > (resending to the list as plain text, the original reply was rejected > due to HTML format) > > On Thu, Jun 5, 2014 at 10:05 AM, Duncan <1i5t5.dun...@cox.net> wrote: >> >> Igor M posted on Thu, 05 Jun 2014 00:15:31 +0200 as excerpted: …

Re: Very slow filesystem

2014-06-04 Thread Fajar A. Nugraha
(resending to the list as plain text, the original reply was rejected due to HTML format) On Thu, Jun 5, 2014 at 10:05 AM, Duncan <1i5t5.dun...@cox.net> wrote: > > Igor M posted on Thu, 05 Jun 2014 00:15:31 +0200 as excerpted: > > > Why btrfs becomes EXTREMELY slow after some time (months) of usage? …

Re: Very slow filesystem

2014-06-04 Thread Duncan
Igor M posted on Thu, 05 Jun 2014 00:15:31 +0200 as excerpted: > Why btrfs becomes EXTREMELY slow after some time (months) of usage? > This has now happened a second time; the first time I thought it was a hard > drive fault, but now the drive seems ok. > Filesystem is mounted with compress-force=lzo and is used …

Re: Very slow filesystem

2014-06-04 Thread Timofey Titovets
I may be mistaken, but I think that: btrfstune -x # can improve performance, because this decreases metadata. Also, in recent versions of btrfs-progs the default nodesize changed from 4k to 16k, which can also help (but for this you must reformat the fs). To clean up btrfs fi df /, you can try: btrfs bal start -f -sconvert=dup,soft -…

Re: Very slow filesystem

2014-06-04 Thread Igor M
On Thu, Jun 5, 2014 at 12:27 AM, Fajar A. Nugraha wrote: > On Thu, Jun 5, 2014 at 5:15 AM, Igor M wrote: >> Hello, >> >> Why btrfs becomes EXTREMELY slow after some time (months) of usage ? > >> # btrfs fi show >> Label: none uuid: b367812a-b91a-4fb2-a839-a3a153312eba >> Total devices 1 …

Re: Very slow filesystem

2014-06-04 Thread Roman Mamedov
On Thu, 5 Jun 2014 05:27:33 +0700 "Fajar A. Nugraha" wrote: > On Thu, Jun 5, 2014 at 5:15 AM, Igor M wrote: > > Hello, > > > > Why btrfs becomes EXTREMELY slow after some time (months) of usage ? > > > # btrfs fi show > > Label: none uuid: b367812a-b91a-4fb2-a839-a3a153312eba > > Total …

Re: Very slow filesystem

2014-06-04 Thread Fajar A. Nugraha
On Thu, Jun 5, 2014 at 5:15 AM, Igor M wrote: > Hello, > > Why btrfs becomes EXTREMELY slow after some time (months) of usage ? > # btrfs fi show > Label: none uuid: b367812a-b91a-4fb2-a839-a3a153312eba > Total devices 1 FS bytes used 2.36TiB > devid 1 size 2.73TiB used 2.38TiB …

Very slow filesystem

2014-06-04 Thread Igor M
Hello, why does btrfs become EXTREMELY slow after some time (months) of usage? This has now happened a second time; the first time I thought it was a hard drive fault, but now the drive seems ok. The filesystem is mounted with compress-force=lzo and is used for MySQL databases; files are mostly big, 2G-8G. Copying from …