Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-24 Thread Jamie Landeg-Jones
Scott Bennett  wrote:

> thousand blocks allocated.  Directories don't shrink.  Directory entries do
> not get moved around within directories when files are added or deleted.
> Directories can remain the same length or they can grow in length.  If a
> directory once had many tens of thousands of filenames and links to their
> primary inodes, then the directory is still that big, even if it now only
> contains two file [plus 20 to 30 directory] entries, probably widely separated.

> IOW, if you want the performance to go back to what it was when the directory
> was fresh (and still small), you have to create a new directory and then move
> the remaining entries from the old directory into the new (small) directory.

Not entirely true. FreeBSD on UFS *will* *truncate* directories where possible:
each time a new directory entry is made, if there is contiguous unused space at
the end of the directory file, that space is trimmed off.

 [ As an aside, tmpfs does 'defragment' and resize directories
   in real time, when a file from *any* location is deleted.   ]

Back to UFS (I don't know about ZFS):

So, whilst you are right about directory entries not being moved around,
consider the following. Note the size of '.' in each directory listing: 

| 17:05 [2] (1) "~" jamie@lapcat% md dir
| dir
|
| 17:05 [2] (2) "~" jamie@lapcat% cd dir
|
| 17:05 [2] (3) "~/dir" jamie@lapcat% l
| total 8
| 4 drwxr-xr-x   2 jamie  jamie  -  512 24 Oct 17:05 ./
| 4 drwx------  72 jamie  jamie  - 3072 24 Oct 17:05 ../
|
| 17:05 [2] (4) "~/dir" jamie@lapcat% jot 999 1 999 | awk '{printf "touch %03d\n", $1}' | sh
|
| 17:05 [2] (5) "~/dir" jamie@lapcat% l | head
| total 16
| 12 drwxr-xr-x   2 jamie  jamie  - 12288 24 Oct 17:05 ./
|  4 drwx------  72 jamie  jamie  -  3072 24 Oct 17:05 ../
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 001
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 002
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 003
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 004
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 005
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 006
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 007
|
| *141* 17:05 [2] (6) "~/dir" jamie@lapcat% rm [678]*
| remove 300 files? y
|
| 17:06 [2] (7) "~/dir" jamie@lapcat% l | head
| total 16
| 12 drwxr-xr-x   2 jamie  jamie  - 12288 24 Oct 17:06 ./
|  4 drwx------  72 jamie  jamie  -  3072 24 Oct 17:05 ../
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 001
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 002
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 003
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 004
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 005
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 006
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 007
|
| *141* 17:06 [2] (8) "~/dir" jamie@lapcat% touch x; rm x
|
| 17:06 [2] (9) "~/dir" jamie@lapcat% l | head
| total 16
| 12 drwxr-xr-x   2 jamie  jamie  - 12288 24 Oct 17:06 ./
|  4 drwx------  72 jamie  jamie  -  3072 24 Oct 17:05 ../
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 001
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 002
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 003
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 004
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 005
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 006
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 007
|
| *141* 17:06 [2] (10) "~/dir" jamie@lapcat% rm 9*
| remove 100 files? y
|
| 17:06 [2] (11) "~/dir" jamie@lapcat% l | head
| total 16
| 12 drwxr-xr-x   2 jamie  jamie  - 12288 24 Oct 17:06 ./
|  4 drwx------  72 jamie  jamie  -  3072 24 Oct 17:05 ../
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 001
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 002
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 003
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 004
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 005
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 006
|  0 -rw-r--r--   1 jamie  jamie  - 0 24 Oct 17:05 007
|
| *141* 17:06 [2] (12) "~/dir" jamie@lapcat% touch x ; rm x
|
| 17:06 [2] (13) "~/dir" jamie@lapcat% l | head
| total 12
| 8 drwxr-xr-x   2 jamie  jamie  - 7680 24 Oct 17:06 ./
| 4 drwx------  72 jamie  jamie  - 3072 24 Oct 17:05 ../
| 0 -rw-r--r--   1 jamie  jamie  -    0 24 Oct 17:05 001
| 0 -rw-r--r--   1 jamie  jamie  -    0 24 Oct 17:05 002
| 0 -rw-r--r--   1 jamie  jamie  -    0 24 Oct 17:05 003
| 0 -rw-r--r--   1 jamie  jamie  -    0 24 Oct 17:05 004
| 0 -rw-r--r--   1 jamie  jamie  -    0 24 Oct 17:05 005
| 0 -rw-r--r--   1 jamie  jamie  -    0 24 Oct 17:05 006
| 0 -rw-r--r--   1 jamie  jamie  -    0 24 Oct 17:05 007
|
| *141* 17:06 [2] (14) "~/dir" jamie@lapcat% rm *
| remove 599 files? y
|
| 17:06 [2] (15) "~/dir" jamie@lapcat% l
| total 12
| 8 drwxr-xr-x   2 jamie  jamie  - 7680 24 Oct 17:06 ./
| 4 drwx------  72 jamie  jamie  - 3072 24 Oct 17:05 ../
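
For reference, a condensed, scripted version of the session above (a sketch;
it assumes a UFS-backed working directory and uses only base-system tools):

  mkdir dir && cd dir
  jot -w %03d 999 1 | xargs touch   # create 999 files; '.' grows to 12288 bytes
  rm -f [6-9]*                      # delete 400 files; '.' stays at 12288 bytes
  touch x && rm x                   # the next create lets UFS trim the trailing free space
  ls -ld .                          # '.' has shrunk (7680 bytes in the session above)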

Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-21 Thread Brandon Allbery
On Fri, Oct 21, 2016 at 10:04 AM, Pete French wrote:

> Not forgotten, just under the impression that ZFS shrinks directories
> unlike good old UFS. Apparently not,
>

Someone offhandedly mentioned this earlier (it's apparently intended for
the future sometime). I at least hope they do something smarter than double
indirect blocks these days.

-- 
brandon s allbery kf8nh   sine nomine associates
allber...@gmail.com  ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonadhttp://sinenomine.net


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-21 Thread Matthew Seaman
On 2016/10/21 13:47, Pete French wrote:
>> In the bad case the metadata of every file will be placed at a random place
>> on the disk. ls needs to access the metadata of every file before it can
>> start outputting the listing.
> 
> Umm, are we not talking about an issue where the directory no longer contains
> any files? It used to have lots, now it has none.
> 
>> I.e. in the bad case you will need tens of thousands of seeks over a disk
>> capable of only 72 seeks per second.
> 
> Why does it need to seek all over the disc if there are no files (and hence,
> surely, no metadata)?
> 
> I am not bothered if a huge directory takes a while to list;
> that's something I am happy to deal with. What I don't like is
> that when it is back down to zero it still takes a long time
> to list. That doesn't make much sense.

Interesting.  Is this somehow related to the old Unixy thing with
directories, where the directory node would grow in size as you created
more and more files or sub-directories (as you might expect), but it
wouldn't shrink immediately if you simply deleted many files -- it would
only shrink later when you next created a new file in that directory?
This was a performance feature IIRC -- it avoided shrinking and
re-growing directory nodes in quick succession for what was apparently a
fairly common usage pattern of clearing out a directory and then
refilling it.

Can't see how that would apply to ZFS though, as the CoW nature means
there should be no benefit to not immediately adjusting the size of the
directory node to fit the amount of contents.

Cheers,

Matthew






Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-21 Thread Pete French
>  Oh, my goodness, how far afield nonsense has gotten!  Have all the
> good folks posting in this thread forgotten how directory blocks are
> allocated in UNIX?

Not forgotten, just under the impression that ZFS shrinks directories
unlike good old UFS. Apparently not, and yes, if that's true then the
behaviour is not surprising in the slightest.

Live and learn... ;-)

-pete. [old enough to have used 32V on a Vax, a long time ago...]



Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-21 Thread Scott Bennett
 On Fri, 21 Oct 2016 16:51:36 +0500 "Eugene M. Zheganin"
 wrote:

>On 21.10.2016 15:20, Slawa Olhovchenkov wrote:
>>
>> ZFS prefetch affects performance depending on the workload (independent of
>> RAM size): for some workloads it wins, for some workloads it loses (for my
>> workload prefetch loses, and it is manually disabled with 128GB RAM).
>>
>> Anyway, this system has only 24MB in ARC with 2.3GB free; this may
>> be too low for this workload.
>You mean - "for getting a list of a directory with 20 subdirectories"?
>Why then does only this directory have this issue with pause, not
>/usr/ports/..., which has more directories in it?
>
>(and yes, /usr/ports/www isn't empty and holds 2410 entities)
>
>/usr/bin/time -h ls -1 /usr/ports/www
>[...]
>0.14s real  0.00s user  0.00s sys
>
 Oh, my goodness, how far afield nonsense has gotten!  Have all the
good folks posting in this thread forgotten how directory blocks are
allocated in UNIX?  This isn't even a BSD-specific thing; it's really
ancient.  What Eugene has complained of is exactly what is to be expected--
on really old hardware.  The only eyebrow-raiser is that he has created a
use case so extreme that a live human can actually notice the delays on
modern hardware.
 I quote from his original posting:  "I also have one directory that used
to have a lot of (tens of thousands) files." and "But now I have 2 files and
a couple of dozen directories in it".  A directory with tens of thousands
of files in it at one point in time most likely has somewhere well over one
thousand blocks allocated.  Directories don't shrink.  Directory entries do
not get moved around within directories when files are added or deleted.
Directories can remain the same length or they can grow in length.  If a
directory once had many tens of thousands of filenames and links to their
primary inodes, then the directory is still that big, even if it now only
contains two file [plus 20 to 30 directory] entries, probably widely separated.  To
read a file's entry, all blocks must be searched until the desired filename
is found.  Likewise, to list the contents of a directory, all blocks must be
read until the number of files found matches the link count for the directory.
IOW, if you want the performance to go back to what it was when the directory
was fresh (and still small), you have to create a new directory and then move
the remaining entries from the old directory into the new (small) directory.
The only real difference here between UFS (or even the early AT&T filesystem)
and ZFS is that the two remaining entries in a formerly huge directory are
likely to be in different directory blocks that could be at effectively random
locations scattered around the space of a partition for one filesystem in UFS
or over an entire pool of potentially many filesystems and much more space in
ZFS.
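
A minimal sketch of that rebuild, with example path names (remember any
dot-files the shell glob misses):

  mkdir /var/spool/dir.new
  mv /var/spool/dir/* /var/spool/dir.new/
  rmdir /var/spool/dir
  mv /var/spool/dir.new /var/spool/dir   # fresh directory, back to a few blocks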


  Scott Bennett, Comm. ASMELG, CFIAG
**
* Internet:   bennett at sdf.org   *xor*   bennett at freeshell.org  *
**
* "A well regulated and disciplined militia, is at all times a good  *
* objection to the introduction of that bane of all free governments *
* -- a standing army."   *
*-- Gov. John Hancock, New York Journal, 28 January 1790 *
**


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-21 Thread Slawa Olhovchenkov
On Fri, Oct 21, 2016 at 01:47:08PM +0100, Pete French wrote:

> > In the bad case the metadata of every file will be placed at a random place
> > on the disk. ls needs to access the metadata of every file before it can
> > start outputting the listing.
> 
> Umm, are we not talking about an issue where the directory no longer contains
> any files? It used to have lots, now it has none.
> 
> > I.e. in the bad case you will need tens of thousands of seeks over a disk
> > capable of only 72 seeks per second.
> 
> Why does it need to seek all over the disc if there are no files (and hence,
> surely, no metadata)?
> 
> I am not bothered if a huge directory takes a while to list;
> that's something I am happy to deal with. What I don't like is
> that when it is back down to zero it still takes a long time
> to list. That doesn't make much sense.

OK, this case may be different.
Maybe zdb can help:
ls -li /parent/dir
Take the inode number, then:
zdb -d <zfs_dataset> <inode_number>

Also run ls under ktrace and analyse the output with `kdump -E`.
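
Spelled out, that diagnosis looks roughly like this (a sketch: the dataset,
paths and object number are examples, and the -dddd verbosity level is an
assumption, not from the original mail):

  ls -lid /tank/data/dir             # -d lists the directory itself; note its inode number
  zdb -dddd tank/data 1234           # dump that object's dnode and ZAP (directory) contents
  ktrace -f ls.out ls /tank/data/dir # trace the syscalls ls makes
  kdump -E -f ls.out | less          # -E prints elapsed timestamps, showing where time goes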


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-21 Thread Pete French
> In the bad case the metadata of every file will be placed at a random place
> on the disk. ls needs to access the metadata of every file before it can
> start outputting the listing.

Umm, are we not talking about an issue where the directory no longer contains
any files? It used to have lots, now it has none.

> I.e. in the bad case you will need tens of thousands of seeks over a disk
> capable of only 72 seeks per second.

Why does it need to seek all over the disc if there are no files (and hence,
surely, no metadata)?

I am not bothered if a huge directory takes a while to list;
that's something I am happy to deal with. What I don't like is
that when it is back down to zero it still takes a long time
to list. That doesn't make much sense.

-pete.


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-21 Thread Slawa Olhovchenkov
On Fri, Oct 21, 2016 at 04:51:36PM +0500, Eugene M. Zheganin wrote:

> Hi.
> 
> On 21.10.2016 15:20, Slawa Olhovchenkov wrote:
> >
> > ZFS prefetch affects performance depending on the workload (independent of
> > RAM size): for some workloads it wins, for some workloads it loses (for my
> > workload prefetch loses, and it is manually disabled with 128GB RAM).
> >
> > Anyway, this system has only 24MB in ARC with 2.3GB free; this may
> > be too low for this workload.
> You mean - "for getting a list of a directory with 20 subdirectories"?
> Why then does only this directory have this issue with pause, not
> /usr/ports/..., which has more directories in it?
> 
> (and yes, /usr/ports/www isn't empty and holds 2410 entities)
> 
> /usr/bin/time -h ls -1 /usr/ports/www
> [...]
> 0.14s real  0.00s user  0.00s sys

You wrote: "(tens of thousands) files".

In the bad case the metadata of every file will be placed at a random
place on the disk.
ls needs to access the metadata of every file before it can start
outputting the listing.
I.e. in the bad case you will need tens of thousands of seeks over a disk
capable of only 72 seeks per second (20,000 seeks at 72/s is nearly five
minutes).

Perhaps /usr/ports/www was created all at once, so the metadata of all its
entries was placed near each other and fewer seeks are needed.

(Assuming the zfs properties primarycache/secondarycache are not off.)


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-21 Thread Eugene M. Zheganin

Hi.

On 21.10.2016 15:20, Slawa Olhovchenkov wrote:


ZFS prefetch affects performance depending on the workload (independent of
RAM size): for some workloads it wins, for some workloads it loses (for my
workload prefetch loses, and it is manually disabled with 128GB RAM).

Anyway, this system has only 24MB in ARC with 2.3GB free; this may
be too low for this workload.
You mean - "for getting a list of a directory with 20 subdirectories"?
Why then does only this directory have this issue with pause, not
/usr/ports/..., which has more directories in it?


(and yes, /usr/ports/www isn't empty and holds 2410 entities)

/usr/bin/time -h ls -1 /usr/ports/www
[...]
0.14s real  0.00s user  0.00s sys

Thanks.
Eugene.


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-21 Thread Andriy Gapon

Instead of the guesswork and black magic, you could try to use tools to analyze
the problem.  E.g., determine whether the delay is because a CPU does a lot of
work or because of waiting.  Find the bottleneck, etc.
pmcstat, dtrace are your friends :-)
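
For instance (a hedged sketch; the path is an example, and these one-liners
assume the stock FreeBSD dtrace providers):

  # where does ls burn CPU? sample its stacks at 997Hz:
  dtrace -n 'profile-997 /execname == "ls"/ { @[stack()] = count(); }' -c 'ls /path/to/slow/dir'

  # where does ls wait? sum blocked time per kernel stack:
  dtrace -n '
    sched:::off-cpu /execname == "ls"/ { self->ts = timestamp; }
    sched:::on-cpu  /self->ts/ { @[stack()] = sum(timestamp - self->ts); self->ts = 0; }
  ' -c 'ls /path/to/slow/dir'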

-- 
Andriy Gapon


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-21 Thread Slawa Olhovchenkov
On Fri, Oct 21, 2016 at 11:02:57AM +0100, Steven Hartland wrote:

> > Mem: 21M Active, 646M Inact, 931M Wired, 2311M Free
> > ARC: 73M Total, 3396K MFU, 21M MRU, 545K Anon, 1292K Header, 47M Other
> > Swap: 4096M Total, 4096M Free
> >
> >   PID USERNAME   PRI NICE   SIZERES STATE   C   TIMEWCPU COMMAND
> >   600 root390 27564K  5072K nanslp  1 295.0H  24.56% monit
> > 0 root   -170 0K  2608K -   1  75:24   0.00% 
> > kernel{zio_write_issue}
> >   767 freeswitch  200   139M 31668K uwait   0  48:29   0.00% 
> > freeswitch{freeswitch}
> >   683 asterisk200   806M   483M uwait   0  41:09   0.00% 
> > asterisk{asterisk}
> > 0 root-80 0K  2608K -   0  37:43   0.00% 
> > kernel{metaslab_group_t}
> > [... others lines are just 0% ...]
> This looks like you only have ~4GB RAM, which is pretty low for ZFS. I
> suspect vfs.zfs.prefetch_disable will be 1, which will hurt the
> performance.

ZFS prefetch affects performance depending on the workload (independent of
RAM size): for some workloads it wins, for some workloads it loses (for my
workload prefetch loses, and it is manually disabled with 128GB RAM).

Anyway, this system has only 24MB in ARC with 2.3GB free; this may
be too low for this workload.


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-21 Thread Steven Hartland

On 21/10/2016 10:04, Eugene M. Zheganin wrote:

Hi.

On 21.10.2016 9:22, Steven Hartland wrote:

On 21/10/2016 04:52, Eugene M. Zheganin wrote:

Hi.

On 20.10.2016 21:17, Steven Hartland wrote:

Do you have atime enabled for the relevant volume?

I do.


If so disable it and see if that helps:
zfs set atime=off 


Nah, it doesn't help at all.
As per with Jonathan, what do gstat -pd and top -SHz show?


gstat (while ls'ing):

dT: 1.005s  w: 1.000s
 L(q)  ops/sr/s   kBps   ms/rw/s   kBps   ms/wd/s kBps   
ms/d   %busy Name
1 49 49   2948   13.5  0  00.0  0 0 0.0   
65.0| ada0
0 32 32   1798   11.1  0  00.0  0 0 0.0   
35.3| ada1



Averagely busy then on rust.

gstat (while idling):

dT: 1.003s  w: 1.000s
 L(q)  ops/sr/s   kBps   ms/rw/s   kBps   ms/wd/s kBps   
ms/d   %busy Name
0  0  0  00.0  0  00.0  0 0 0.0
0.0| ada0
0  2  22550.8  0  00.0  0 0 0.0
0.1| ada1


top -SHz output doesn't really differ while ls'ing or idling:

last pid: 12351;  load averages:  0.46,  0.49, 
0.46   up 39+14:41:02 14:03:05

376 processes: 3 running, 354 sleeping, 19 waiting
CPU:  5.8% user,  0.0% nice, 16.3% system,  0.0% interrupt, 77.9% idle
Mem: 21M Active, 646M Inact, 931M Wired, 2311M Free
ARC: 73M Total, 3396K MFU, 21M MRU, 545K Anon, 1292K Header, 47M Other
Swap: 4096M Total, 4096M Free

  PID USERNAME   PRI NICE   SIZERES STATE   C   TIMEWCPU COMMAND
  600 root390 27564K  5072K nanslp  1 295.0H  24.56% monit
0 root   -170 0K  2608K -   1  75:24   0.00% 
kernel{zio_write_issue}
  767 freeswitch  200   139M 31668K uwait   0  48:29   0.00% 
freeswitch{freeswitch}
  683 asterisk200   806M   483M uwait   0  41:09   0.00% 
asterisk{asterisk}
0 root-80 0K  2608K -   0  37:43   0.00% 
kernel{metaslab_group_t}

[... others lines are just 0% ...]
This looks like you only have ~4GB RAM, which is pretty low for ZFS. I
suspect vfs.zfs.prefetch_disable will be 1, which will hurt the
performance.
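
A quick way to check (a sketch; vfs.zfs.prefetch_disable is the 10.x sysctl,
and re-enabling prefetch on a low-RAM box is at your own risk):

  sysctl vfs.zfs.prefetch_disable     # 1 = prefetch off (auto-set on small-memory systems)
  sysctl vfs.zfs.prefetch_disable=0   # re-enable for testing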


Regards
Steve


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-21 Thread Eugene M. Zheganin

Hi.

On 21.10.2016 9:22, Steven Hartland wrote:

On 21/10/2016 04:52, Eugene M. Zheganin wrote:

Hi.

On 20.10.2016 21:17, Steven Hartland wrote:

Do you have atime enabled for the relevant volume?

I do.


If so disable it and see if that helps:
zfs set atime=off 


Nah, it doesn't help at all.
As per with Jonathan, what do gstat -pd and top -SHz show?


gstat (while ls'ing):

dT: 1.005s  w: 1.000s
 L(q)  ops/sr/s   kBps   ms/rw/s   kBps   ms/wd/s kBps   
ms/d   %busy Name
1 49 49   2948   13.5  0  00.0  0 0
0.0   65.0| ada0
0 32 32   1798   11.1  0  00.0  0 0
0.0   35.3| ada1


gstat (while idling):

dT: 1.003s  w: 1.000s
 L(q)  ops/sr/s   kBps   ms/rw/s   kBps   ms/wd/s kBps   
ms/d   %busy Name
0  0  0  00.0  0  00.0  0 0
0.00.0| ada0
0  2  22550.8  0  00.0  0 0
0.00.1| ada1


top -SHz output doesn't really differ while ls'ing or idling:

last pid: 12351;  load averages:  0.46,  0.49, 
0.46   up 39+14:41:02 14:03:05

376 processes: 3 running, 354 sleeping, 19 waiting
CPU:  5.8% user,  0.0% nice, 16.3% system,  0.0% interrupt, 77.9% idle
Mem: 21M Active, 646M Inact, 931M Wired, 2311M Free
ARC: 73M Total, 3396K MFU, 21M MRU, 545K Anon, 1292K Header, 47M Other
Swap: 4096M Total, 4096M Free

  PID USERNAME   PRI NICE   SIZERES STATE   C   TIMEWCPU COMMAND
  600 root390 27564K  5072K nanslp  1 295.0H  24.56% monit
0 root   -170 0K  2608K -   1  75:24   0.00% 
kernel{zio_write_issue}
  767 freeswitch  200   139M 31668K uwait   0  48:29   0.00% 
freeswitch{freeswitch}
  683 asterisk200   806M   483M uwait   0  41:09   0.00% 
asterisk{asterisk}
0 root-80 0K  2608K -   0  37:43   0.00% 
kernel{metaslab_group_t}

[... others lines are just 0% ...]

Thanks.
Eugene.


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Peter Jeremy
Have you done any ZFS tuning?

Could you try installing ports/sysutils/zfs-stats and posting the output
from "zfs-stats -a".  That might point to a bottleneck or poor cache
tuning.
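
For the record, that boils down to (a sketch; the package name matches the
port):

  pkg install -y zfs-stats
  zfs-stats -a      # dumps the ARC, L2ARC and tunables statistics in one go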

-- 
Peter Jeremy




Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Steven Hartland

On 21/10/2016 04:52, Eugene M. Zheganin wrote:

Hi.

On 20.10.2016 21:17, Steven Hartland wrote:

Do you have atime enabled for the relevant volume?

I do.


If so disable it and see if that helps:
zfs set atime=off 


Nah, it doesn't help at all.

As per with Jonathan, what do gstat -pd and top -SHz show?


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Steven Hartland

In your case, your vdev (ada0) is saturated with writes from postgres.

You should consider more / faster disks.

You might also want to consider enabling lz4 compression on the PG
volume, as it works well in IO-bound situations.
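
A sketch of that change (the dataset name is an example):

  zfs set compression=lz4 tank/postgres
  zfs get compression,compressratio tank/postgres   # verify, and watch the ratio over time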



On 21/10/2016 01:54, Jonathan Chen wrote:

On 21 October 2016 at 12:56, Steven Hartland  wrote:
[...]

When you see the stalling what does gstat -pd and top -SHz show?

On my dev box:

1:38pm# uname -a
FreeBSD irontree 10.3-STABLE FreeBSD 10.3-STABLE #0 r307401: Mon Oct
17 10:17:22 NZDT 2016 root@irontree:/usr/obj/usr/src/sys/GENERIC
amd64
1:49pm# gstat -pd
dT: 1.004s  w: 1.000s
  L(q)  ops/sr/s   kBps   ms/rw/s   kBps   ms/wd/s   kBps
ms/d   %busy Name
 0  0  0  00.0  0  00.0  0  0
  0.00.0| cd0
18618  1128   41.4606  52854   17.2  0  0
  0.0  100.5| ada0
^C
1:49pm# top -SHz
last pid: 83284;  load averages:  0.89,  0.68,  0.46
  up
4+03:11:32  13:49:05
565 processes: 9 running, 517 sleeping, 17 zombie, 22 waiting
CPU:  3.7% user,  0.0% nice,  1.9% system,  0.0% interrupt, 94.3% idle
Mem: 543M Active, 2153M Inact, 11G Wired, 10M Cache, 2132M Free
ARC: 7249M Total, 1325M MFU, 4534M MRU, 906M Anon, 223M Header, 261M Other
Swap: 32G Total, 201M Used, 32G Free

   PID USERNAME   PRI NICE   SIZERES STATE   C   TIMEWCPU COMMAND
83149 postgres380  2197M   528M zio->i  5   1:13  23.19% postgres
83148 jonc220 36028K 13476K select  2   0:11   3.86% pg_restore
   852 postgres200  2181M  2051M select  5   0:27   0.68% postgres
 0 root   -15- 0K  4240K -   6   0:50   0.49%
kernel{zio_write_issue_}
 0 root   -15- 0K  4240K -   6   0:50   0.39%
kernel{zio_write_issue_}
 0 root   -15- 0K  4240K -   6   0:50   0.39%
kernel{zio_write_issue_}
 0 root   -15- 0K  4240K -   7   0:50   0.39%
kernel{zio_write_issue_}
 0 root   -15- 0K  4240K -   7   0:50   0.39%
kernel{zio_write_issue_}
 0 root   -15- 0K  4240K -   7   0:50   0.29%
kernel{zio_write_issue_}
 3 root-8- 0K   112K zio->i  6   1:50   0.20%
zfskern{txg_thread_enter}
12 root   -88- 0K   352K WAIT0   1:07   0.20%
intr{irq268: ahci0}
 0 root   -16- 0K  4240K -   4   0:29   0.20%
kernel{zio_write_intr_4}
 0 root   -16- 0K  4240K -   7   0:29   0.10%
kernel{zio_write_intr_6}
 0 root   -16- 0K  4240K -   0   0:29   0.10%
kernel{zio_write_intr_1}
 0 root   -16- 0K  4240K -   5   0:29   0.10%
kernel{zio_write_intr_2}
 0 root   -16- 0K  4240K -   1   0:29   0.10%
kernel{zio_write_intr_5}
...

Taking another look at the internal dir structure for postgres, I'm
not too sure whether this is related to the original poster's problem
though.

Cheers.




Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin

Hi.

On 20.10.2016 21:17, Steven Hartland wrote:

Do you have atime enabled for the relevant volume?

I do.


If so disable it and see if that helps:
zfs set atime=off 


Nah, it doesn't help at all.

Thanks.
Eugene.


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Jonathan Chen
On 21 October 2016 at 12:56, Steven Hartland  wrote:
[...]
> When you see the stalling what does gstat -pd and top -SHz show?

On my dev box:

1:38pm# uname -a
FreeBSD irontree 10.3-STABLE FreeBSD 10.3-STABLE #0 r307401: Mon Oct
17 10:17:22 NZDT 2016 root@irontree:/usr/obj/usr/src/sys/GENERIC
amd64
1:49pm# gstat -pd
dT: 1.004s  w: 1.000s
 L(q)  ops/sr/s   kBps   ms/rw/s   kBps   ms/wd/s   kBps
ms/d   %busy Name
0  0  0  00.0  0  00.0  0  0
 0.00.0| cd0
   18618  1128   41.4606  52854   17.2  0  0
 0.0  100.5| ada0
^C
1:49pm# top -SHz
last pid: 83284;  load averages:  0.89,  0.68,  0.46
 up
4+03:11:32  13:49:05
565 processes: 9 running, 517 sleeping, 17 zombie, 22 waiting
CPU:  3.7% user,  0.0% nice,  1.9% system,  0.0% interrupt, 94.3% idle
Mem: 543M Active, 2153M Inact, 11G Wired, 10M Cache, 2132M Free
ARC: 7249M Total, 1325M MFU, 4534M MRU, 906M Anon, 223M Header, 261M Other
Swap: 32G Total, 201M Used, 32G Free

  PID USERNAME   PRI NICE   SIZERES STATE   C   TIMEWCPU COMMAND
83149 postgres380  2197M   528M zio->i  5   1:13  23.19% postgres
83148 jonc220 36028K 13476K select  2   0:11   3.86% pg_restore
  852 postgres200  2181M  2051M select  5   0:27   0.68% postgres
0 root   -15- 0K  4240K -   6   0:50   0.49%
kernel{zio_write_issue_}
0 root   -15- 0K  4240K -   6   0:50   0.39%
kernel{zio_write_issue_}
0 root   -15- 0K  4240K -   6   0:50   0.39%
kernel{zio_write_issue_}
0 root   -15- 0K  4240K -   7   0:50   0.39%
kernel{zio_write_issue_}
0 root   -15- 0K  4240K -   7   0:50   0.39%
kernel{zio_write_issue_}
0 root   -15- 0K  4240K -   7   0:50   0.29%
kernel{zio_write_issue_}
3 root-8- 0K   112K zio->i  6   1:50   0.20%
zfskern{txg_thread_enter}
   12 root   -88- 0K   352K WAIT0   1:07   0.20%
intr{irq268: ahci0}
0 root   -16- 0K  4240K -   4   0:29   0.20%
kernel{zio_write_intr_4}
0 root   -16- 0K  4240K -   7   0:29   0.10%
kernel{zio_write_intr_6}
0 root   -16- 0K  4240K -   0   0:29   0.10%
kernel{zio_write_intr_1}
0 root   -16- 0K  4240K -   5   0:29   0.10%
kernel{zio_write_intr_2}
0 root   -16- 0K  4240K -   1   0:29   0.10%
kernel{zio_write_intr_5}
...

Taking another look at the internal dir structure for postgres, I'm
not too sure whether this is related to the original poster's problem
though.

Cheers.
-- 
Jonathan Chen 


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Steven Hartland



On 20/10/2016 23:48, Jonathan Chen wrote:

On 21 October 2016 at 11:27, Steven Hartland  wrote:

On 20/10/2016 22:18, Jonathan Chen wrote:

On 21 October 2016 at 09:09, Peter  wrote:
[...]

I see this on my pgsql_tmp dirs (where Postgres stores intermediate
query data that gets too big for mem - usually lots of files) - in
normal operation these dirs are completely empty, but make heavy disk
activity (even writing!) when doing ls.
Seems normal, I dont care as long as the thing is stable. One would need
to check how ZFS stores directories and what kind of fragmentation can
happen there. Or wait for some future feature that would do
housekeeping. ;)

I'm seeing this as well with an Odoo ERP running on Postgresql. This
lag does matter to me as this is huge performance hit when running
Postgresql on ZFS, and it would be good to see this resolved.
pg_restores can make the system crawl as well.

As mentioned before, could you confirm you have disabled atime?

Yup, also set the blocksize to 4K.

11:46am# zfs get all irontree/postgresql
NAME PROPERTY  VALUE  SOURCE
irontree/postgresql  type  filesystem -
irontree/postgresql  creation  Wed Sep 23 15:07 2015  -
irontree/postgresql  used  43.8G  -
irontree/postgresql  available 592G   -
irontree/postgresql  referenced43.8G  -
irontree/postgresql  compressratio 1.00x  -
irontree/postgresql  mounted   yes-
irontree/postgresql  quota none   default
irontree/postgresql  reservation   none   default
irontree/postgresql  recordsize8K local
irontree/postgresql  mountpoint/postgresql
inherited from irontree
irontree/postgresql  sharenfs  offdefault
irontree/postgresql  checksum  on default
irontree/postgresql  compression   offdefault
irontree/postgresql  atime offlocal
irontree/postgresql  devices   on default
irontree/postgresql  exec  on default
irontree/postgresql  setuidon default
irontree/postgresql  readonly  offdefault
irontree/postgresql  jailedoffdefault
irontree/postgresql  snapdir   hidden default
irontree/postgresql  aclmode   discarddefault
irontree/postgresql  aclinheritrestricted default
irontree/postgresql  canmount  on default
irontree/postgresql  xattr offtemporary
irontree/postgresql  copies1  default
irontree/postgresql  version   5  -
irontree/postgresql  utf8only  off-
irontree/postgresql  normalization none   -
irontree/postgresql  casesensitivity   sensitive  -
irontree/postgresql  vscan offdefault
irontree/postgresql  nbmandoffdefault
irontree/postgresql  sharesmb  offdefault
irontree/postgresql  refquota  none   default
irontree/postgresql  refreservationnone   default
irontree/postgresql  primarycache  alldefault
irontree/postgresql  secondarycachealldefault
irontree/postgresql  usedbysnapshots   0  -
irontree/postgresql  usedbydataset 43.8G  -
irontree/postgresql  usedbychildren0  -
irontree/postgresql  usedbyrefreservation  0  -
irontree/postgresql  logbias   latencydefault
irontree/postgresql  dedup offdefault
irontree/postgresql  mlslabel -
irontree/postgresql  sync  standard   default
irontree/postgresql  refcompressratio  1.00x  -
irontree/postgresql  written   43.8G  -
irontree/postgresql  logicalused   43.4G  -
irontree/postgresql  logicalreferenced 43.4G  -
irontree/postgresql  volmode   defaultdefault
irontree/postgresql  filesystem_limit  none   default
irontree/postgresql  snapshot_limitnone   default
irontree/postgresql  filesystem_count  none   default
irontree/postgresql 

Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Jonathan Chen
On 21 October 2016 at 11:27, Steven Hartland  wrote:
> On 20/10/2016 22:18, Jonathan Chen wrote:
>>
>> On 21 October 2016 at 09:09, Peter  wrote:
>> [...]
>>>
>>> I see this on my pgsql_tmp dirs (where Postgres stores intermediate
>>> query data that gets too big for mem - usually lots of files) - in
>>> normal operation these dirs are completely empty, but make heavy disk
>>> activity (even writing!) when doing ls.
>>> Seems normal, I dont care as long as the thing is stable. One would need
>>> to check how ZFS stores directories and what kind of fragmentation can
>>> happen there. Or wait for some future feature that would do
>>> housekeeping. ;)
>>
>> I'm seeing this as well with an Odoo ERP running on Postgresql. This
>> lag does matter to me as this is huge performance hit when running
>> Postgresql on ZFS, and it would be good to see this resolved.
>> pg_restores can make the system crawl as well.
>
> As mentioned before, could you confirm you have disabled atime?

Yup, also set the blocksize to 4K.

11:46am# zfs get all irontree/postgresql
NAME PROPERTY  VALUE  SOURCE
irontree/postgresql  type  filesystem -
irontree/postgresql  creation  Wed Sep 23 15:07 2015  -
irontree/postgresql  used  43.8G  -
irontree/postgresql  available 592G   -
irontree/postgresql  referenced43.8G  -
irontree/postgresql  compressratio 1.00x  -
irontree/postgresql  mounted   yes-
irontree/postgresql  quota none   default
irontree/postgresql  reservation   none   default
irontree/postgresql  recordsize8K local
irontree/postgresql  mountpoint/postgresql
inherited from irontree
irontree/postgresql  sharenfs  offdefault
irontree/postgresql  checksum  on default
irontree/postgresql  compression   offdefault
irontree/postgresql  atime offlocal
irontree/postgresql  devices   on default
irontree/postgresql  exec  on default
irontree/postgresql  setuidon default
irontree/postgresql  readonly  offdefault
irontree/postgresql  jailedoffdefault
irontree/postgresql  snapdir   hidden default
irontree/postgresql  aclmode   discarddefault
irontree/postgresql  aclinheritrestricted default
irontree/postgresql  canmount  on default
irontree/postgresql  xattr offtemporary
irontree/postgresql  copies1  default
irontree/postgresql  version   5  -
irontree/postgresql  utf8only  off-
irontree/postgresql  normalization none   -
irontree/postgresql  casesensitivity   sensitive  -
irontree/postgresql  vscan offdefault
irontree/postgresql  nbmandoffdefault
irontree/postgresql  sharesmb  offdefault
irontree/postgresql  refquota  none   default
irontree/postgresql  refreservationnone   default
irontree/postgresql  primarycache  alldefault
irontree/postgresql  secondarycachealldefault
irontree/postgresql  usedbysnapshots   0  -
irontree/postgresql  usedbydataset 43.8G  -
irontree/postgresql  usedbychildren0  -
irontree/postgresql  usedbyrefreservation  0  -
irontree/postgresql  logbias   latencydefault
irontree/postgresql  dedup offdefault
irontree/postgresql  mlslabel -
irontree/postgresql  sync  standard   default
irontree/postgresql  refcompressratio  1.00x  -
irontree/postgresql  written   43.8G  -
irontree/postgresql  logicalused   43.4G  -
irontree/postgresql  logicalreferenced 43.4G  -
irontree/postgresql  volmode   defaultdefault
irontree/postgresql  filesystem_limit  none   default
irontree/postgresql  snapshot_limitnone   default
irontree/postgresql  filesystem_count  none   default

Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Steven Hartland

On 20/10/2016 22:18, Jonathan Chen wrote:

On 21 October 2016 at 09:09, Peter  wrote:
[...]

I see this on my pgsql_tmp dirs (where Postgres stores intermediate
query data that gets too big for mem - usually lots of files) - in
normal operation these dirs are completely empty, but cause heavy disk
activity (even writing!) when doing ls.
Seems normal, I don't care as long as the thing is stable. One would need
to check how ZFS stores directories and what kind of fragmentation can
happen there. Or wait for some future feature that would do
housekeeping. ;)

I'm seeing this as well with an Odoo ERP running on Postgresql. This
lag does matter to me, as it is a huge performance hit when running
Postgresql on ZFS, and it would be good to see this resolved.
pg_restores can make the system crawl as well.

As mentioned before, could you confirm you have disabled atime?


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Chris Watson
While I have yet to encounter this with PG on ZFS, knock on wood, this
obviously is not an isolated issue, and if possible those experiencing it
should do as much investigation as possible and open a PR. This seems like
something I'm going to read about over at Hacker News in the future, from a
Linux switcher explaining why they went back to Linux. It's not a
show stopper, but it's obviously an issue.

Chris

Sent from my iPhone 5

> On Oct 20, 2016, at 4:18 PM, Jonathan Chen  wrote:
> 
>> On 21 October 2016 at 09:09, Peter  wrote:
>> [...]
>> 
>> I see this on my pgsql_tmp dirs (where Postgres stores intermediate
>> query data that gets too big for mem - usually lots of files) - in
>> normal operation these dirs are completely empty, but cause heavy disk
>> activity (even writing!) when doing ls.
>> Seems normal, I don't care as long as the thing is stable. One would need
>> to check how ZFS stores directories and what kind of fragmentation can
>> happen there. Or wait for some future feature that would do
>> housekeeping. ;)
> 
> I'm seeing this as well with an Odoo ERP running on Postgresql. This
> lag does matter to me, as it is a huge performance hit when running
> Postgresql on ZFS, and it would be good to see this resolved.
> pg_restores can make the system crawl as well.
> 
> Cheers.
> -- 
> Jonathan Chen 


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Jonathan Chen
On 21 October 2016 at 09:09, Peter  wrote:
[...]
>
> I see this on my pgsql_tmp dirs (where Postgres stores intermediate
> query data that gets too big for mem - usually lots of files) - in
> normal operation these dirs are completely empty, but cause heavy disk
> activity (even writing!) when doing ls.
> Seems normal, I don't care as long as the thing is stable. One would need
> to check how ZFS stores directories and what kind of fragmentation can
> happen there. Or wait for some future feature that would do
> housekeeping. ;)

I'm seeing this as well with an Odoo ERP running on Postgresql. This
lag does matter to me, as it is a huge performance hit when running
Postgresql on ZFS, and it would be good to see this resolved.
pg_restores can make the system crawl as well.

Cheers.
-- 
Jonathan Chen 


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Peter

Eugene M. Zheganin wrote:

Hi.

I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation
on different releases) and a zfs. I also have one directory that used to
have a lot of (tens of thousands) files. It surely takes a lot of time to
get a listing of it. But now I have 2 files and a couple of dozen
directories in it (I sorted the files into directories). Surprisingly,
there's still a lag between "ls" and its output:


I see this on my pgsql_tmp dirs (where Postgres stores intermediate
query data that gets too big for mem - usually lots of files) - in
normal operation these dirs are completely empty, but cause heavy disk
activity (even writing!) when doing ls.
Seems normal, I don't care as long as the thing is stable. One would need
to check how ZFS stores directories and what kind of fragmentation can
happen there. Or wait for some future feature that would do
housekeeping. ;)



Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Steven Hartland

Do you have atime enabled for the relevant volume?

If so disable it and see if that helps:
zfs set atime=off 
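
Spelled out with an example dataset name:

  zfs get atime tank/data      # confirm the current setting
  zfs set atime=off tank/data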

Regards
Steve

On 20/10/2016 14:47, Eugene M. Zheganin wrote:

Hi.

I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation
on different releases) and a zfs. I also have one directory that used
to have a lot of (tens of thousands) files. It surely takes a lot of
time to get a listing of it. But now I have 2 files and a couple of
dozen directories in it (I sorted the files into directories).
Surprisingly, there's still a lag between "ls" and its output:



===Cut===

# /usr/bin/time -h ls
.recycle2016-01 2016-04 2016-07 
2016-10 sort-files.sh
20142016-02 2016-05 2016-08 
ktrace.out  sort-months.sh
20152016-03 2016-06 2016-09 
old sounds

5.75s real  0.00s user  0.02s sys

===Cut===


I've seen this situation before, on other servers, so it's not the
first time I encounter this. However, it's not 100% reproducible (I
mean, if I fill the directory with dozens of thousands of files, I
will not necessarily get this lag after the deletion).


Has anyone seen this, and does anyone know how to resolve it? It's
not a critical issue, but it makes things uncomfortable here. One method
I'm aware of: you can move the contents of this directory to some
other place, then delete it and create it again. But it's kind of a nasty
workaround.



Thanks.

Eugene.



Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Pete French
> > I've the same issue, but only if the ZFS resides on an LSI MegaRAID with one
> > RAID0 per disk.
> >
> Not in my case, both pool disks are attached to the Intel ICH7 SATA300 
> controller.

Nor my case - my discs are on this:

ahci0: 



Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin

Hi.

On 20.10.2016 19:18, Dr. Nikolaus Klepp wrote:


I've the same issue, but only if the ZFS resides on an LSI MegaRAID with one
RAID0 per disk.

Not in my case, both pool disks are attached to the Intel ICH7 SATA300 
controller.


Thanks.

Eugene.



Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin

Hi,

On 20.10.2016 19:12, Pete French wrote:

Have ignored this thread until now, but I observed the same behaviour
on my systems over the last week or so. In my case it's an exim spool
directory, which was hugely full at some point (thousands of
files) and now takes an awfully long time to open and list. I delete
and remake them and the problem goes away, but I believe it is the same thing.

I am running 10.3-STABLE, r303832


Yup, saw this once on a sendmail spool directory.

Thanks.
Eugene.


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin

Hi.

On 20.10.2016 19:03, Miroslav Lachman wrote:


What about snapshots? Are there any snapshots on this filesystem?

Nope.

# zfs list -t all
NAMEUSED  AVAIL  REFER  MOUNTPOINT
zroot   245G   201G  1.17G  legacy
zroot/tmp  10.1M   201G  10.1M  /tmp
zroot/usr  9.78G   201G  7.36G  /usr
zroot/usr/home 77.9M   201G  77.9M  /usr/home
zroot/usr/ports1.41G   201G   857M  /usr/ports
zroot/usr/ports/distfiles   590M   201G   590M  /usr/ports/distfiles
zroot/usr/ports/packages642K   201G   642K  /usr/ports/packages
zroot/usr/src   949M   201G   949M  /usr/src
zroot/var   234G   201G   233G  /var
zroot/var/crash21.5K   201G  21.5K  /var/crash
zroot/var/db127M   201G   121M  /var/db
zroot/var/db/pkg   6.28M   201G  6.28M  /var/db/pkg
zroot/var/empty  20K   201G20K  /var/empty
zroot/var/log   631M   201G   631M  /var/log
zroot/var/mail 24.6M   201G  24.6M  /var/mail
zroot/var/run54K   201G54K  /var/run
zroot/var/tmp   198K   201G   198K  /var/tmp



Or is a scrub running in the background?


No.


Thanks.

Eugene.



Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin

Hi.

On 20.10.2016 18:54, Nicolas Gilles wrote:

Looks like it's not taking up any processing time, so my guess is
the lag probably comes from stalled I/O ... bad disk?
Well, I cannot rule this out completely, but the first time I saw this
lag on this particular server was about two months ago, and I guess two
months is enough time for zfs on a redundant pool to get errors, but as
you can see:


]# zpool status
  pool: zroot
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
Expect reduced performance.
action: Replace affected devices with devices that support the
configured block size, or migrate data to a properly configured
pool.
  scan: resilvered 5.74G in 0h31m with 0 errors on Wed Jun  8 11:54:14 2016
config:

NAMESTATE READ WRITE CKSUM
zroot   ONLINE   0 0 0
  mirror-0  ONLINE   0 0 0
gpt/zroot0  ONLINE   0 0 0  block size: 512B 
configured, 4096B native

gpt/zroot1  ONLINE   0 0 0

errors: No known data errors

there are none. Yup, the disks have different sector sizes, but this issue
happened with one particular directory, not all of them. So I guess this
is irrelevant.



Does a second "ls" return immediately (i.e. the metadata has been
cached)?


Nope. Although the lag varies slightly:

4.79s real  0.00s user  0.02s sys
5.51s real  0.00s user  0.02s sys
4.78s real  0.00s user  0.02s sys
6.88s real  0.00s user  0.02s sys

Thanks.
Eugene.



Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Dr. Nikolaus Klepp
Am Donnerstag, 20. Oktober 2016 schrieb Eugene M. Zheganin:
> Hi.
> 
> I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation 
> on different releases) and a zfs. I also have one directory that used to 
> have a lot of (tens of thousands) files. It surely takes a lot of time to 
> get a listing of it. But now I have 2 files and a couple of dozen 
> directories in it (I sorted the files into directories). Surprisingly, 
> there's still a lag between "ls" and its output:
> 
> 
> ===Cut===
> 
> # /usr/bin/time -h ls
> .recycle2016-01 2016-04 2016-07 2016-10 
> sort-files.sh
> 20142016-02 2016-05 2016-08 ktrace.out  
> sort-months.sh
> 20152016-03 2016-06 2016-09 old 
> sounds
>  5.75s real  0.00s user  0.02s sys
> 
> ===Cut===
> 
> 
> I've seen this situation before, on other servers, so it's not the first 
> time I encounter this. However, it's not 100% reproducible (I mean, if I 
> fill the directory with dozens of thousands of files, I will not 
> necessarily get this lag after the deletion).
> 
> Has anyone seen this, and does anyone know how to resolve it? It's not 
> a critical issue, but it makes things uncomfortable here. One method I'm 
> aware of: you can move the contents of this directory to some other 
> place, then delete it and create it again. But it's kind of a nasty workaround.


Hi!

I've the same issue, but only if the ZFS resides on an LSI MegaRAID with one
RAID0 per disk.

Nik


-- 
Please do not email me anything that you are not comfortable also sharing with 
the NSA.


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Pete French
Have ignored this thread until now, but I observed the same behaviour
on my systems over the last week or so. In my case it's an exim spool
directory, which was hugely full at some point (thousands of
files) and now takes an awfully long time to open and list. I delete
and remake them and the problem goes away, but I believe it is the same thing.

I am running 10.3-STABLE, r303832

-pete.


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Miroslav Lachman

Eugene M. Zheganin wrote on 2016/10/20 15:47:

Hi.

I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation
on different releases) and a zfs. I also have one directory that used to
have a lot of (tens of thousands) files. It surely takes a lot of time to
get a listing of it. But now I have 2 files and a couple of dozen
directories in it (I sorted the files into directories). Surprisingly,
there's still a lag between "ls" and its output:


[...]


I've seen this situation before, on other servers, so it's not the first
time I encounter this. However, it's not 100% reproducible (I mean, if I
fill the directory with dozens of thousands of files, I will not
necessarily get this lag after the deletion).


What about snapshots? Are there any snapshots on this filesystem?

Or is a scrub running in the background?

Miroslav Lachman


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Nicolas Gilles
On Thu, Oct 20, 2016 at 3:47 PM, Eugene M. Zheganin  wrote:
> Hi.
>
> I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation on
> different releases) and a zfs. I also have one directory that used to have a
> lot of (tens of thousands) files. It surely takes a lot of time to get a
> listing of it. But now I have 2 files and a couple of dozen directories in
> it (I sorted the files into directories). Surprisingly, there's still a lag
> between "ls" and its output:
>
>
> ===Cut===
>
> # /usr/bin/time -h ls
> .recycle2016-01 2016-04 2016-07 2016-10
> sort-files.sh
> 20142016-02 2016-05 2016-08 ktrace.out
> sort-months.sh
> 20152016-03 2016-06 2016-09 old
> sounds
> 5.75s real  0.00s user  0.02s sys

Looks like it's not taking up any processing time, so my guess is
the lag probably comes from stalled I/O ... bad disk?

Does a second "ls" return immediately (i.e. the metadata has been
cached)?

>
> ===Cut===
>
>
> I've seen this situation before, on other servers, so it's not the first
> time I encounter this. However, it's not 100% reproducible (I mean, if I
> fill the directory with dozens of thousands of files, I will not necessarily
> get this lag after the deletion).
>
> Has anyone seen this, and does anyone know how to resolve it? It's not
> a critical issue, but it makes things uncomfortable here. One method I'm aware
> of: you can move the contents of this directory to some other place, then
> delete it and create it again. But it's kind of a nasty workaround.
>
>
> Thanks.
>
> Eugene.
>


zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin

Hi.

I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation 
on different releases) and a zfs. I also have one directory that used to 
have a lot of (tens of thousands) files. It surely takes a lot of time to 
get a listing of it. But now I have 2 files and a couple of dozen 
directories in it (I sorted the files into directories). Surprisingly, 
there's still a lag between "ls" and its output:



===Cut===

# /usr/bin/time -h ls
.recycle2016-01 2016-04 2016-07 2016-10 
sort-files.sh
20142016-02 2016-05 2016-08 ktrace.out  
sort-months.sh
20152016-03 2016-06 2016-09 old 
sounds

5.75s real  0.00s user  0.02s sys

===Cut===


I've seen this situation before, on other servers, so it's not the first 
time I encounter this. However, it's not 100% reproducible (I mean, if I 
fill the directory with dozens of thousands of files, I will not 
necessarily get this lag after the deletion).


Has anyone seen this, and does anyone know how to resolve it? It's not 
a critical issue, but it makes things uncomfortable here. One method I'm 
aware of: you can move the contents of this directory to some other 
place, then delete it and create it again. But it's kind of a nasty workaround.



Thanks.

Eugene.
