Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Peter Jeremy
Have you done any ZFS tuning?

Could you try installing ports/sysutils/zfs-stats and posting the output
from "zfs-stats -a".  That might point to a bottleneck or poor cache
tuning.
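For reference, the sequence would look something like this (a sketch assuming pkg is available; building from the port works too):

```shell
# Install the reporting tool and dump all ZFS statistics (FreeBSD-specific):
pkg install -y zfs-stats     # or: make -C /usr/ports/sysutils/zfs-stats install clean
zfs-stats -a                 # summarises ARC size, hit ratios, and related sysctls
```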

-- 
Peter Jeremy




Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Steven Hartland

On 21/10/2016 04:52, Eugene M. Zheganin wrote:

Hi.

On 20.10.2016 21:17, Steven Hartland wrote:

Do you have atime enabled for the relevant volume?

I do.


If so disable it and see if that helps:
zfs set atime=off 


Nah, it doesn't help at all.

As per with Jonathon what does gstat -pd and top -SHz show?
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Steven Hartland

In your case, your vdev (ada0) is saturated with writes from postgres.

You should consider more or faster disks.

You might also want to consider enabling lz4 compression on the PG 
volume, as it works well in I/O-bound situations.
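For what it's worth, that would be something like the following (the dataset name is hypothetical; note that lz4 only applies to data written after it is enabled, so existing records stay uncompressed until rewritten):

```shell
# Enable lz4 on the PostgreSQL dataset and check the effect over time:
zfs set compression=lz4 tank/postgres
zfs get compression,compressratio tank/postgres
```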



On 21/10/2016 01:54, Jonathan Chen wrote:

On 21 October 2016 at 12:56, Steven Hartland  wrote:
[...]

When you see the stalling, what do gstat -pd and top -SHz show?

[...]




Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin

Hi.

On 20.10.2016 21:17, Steven Hartland wrote:

Do you have atime enabled for the relevant volume?

I do.


If so disable it and see if that helps:
zfs set atime=off 


Nah, it doesn't help at all.

Thanks.
Eugene.


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Jonathan Chen
On 21 October 2016 at 12:56, Steven Hartland  wrote:
[...]
> When you see the stalling, what do gstat -pd and top -SHz show?

On my dev box:

1:38pm# uname -a
FreeBSD irontree 10.3-STABLE FreeBSD 10.3-STABLE #0 r307401: Mon Oct
17 10:17:22 NZDT 2016 root@irontree:/usr/obj/usr/src/sys/GENERIC
amd64
1:49pm# gstat -pd
dT: 1.004s  w: 1.000s
  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d   %busy Name
     0      0      0      0    0.0      0      0    0.0      0      0    0.0     0.0| cd0
    18    618     11     28   41.4    606  52854   17.2      0      0    0.0   100.5| ada0
^C
1:49pm# top -SHz
last pid: 83284;  load averages:  0.89,  0.68,  0.46    up 4+03:11:32  13:49:05
565 processes: 9 running, 517 sleeping, 17 zombie, 22 waiting
CPU:  3.7% user,  0.0% nice,  1.9% system,  0.0% interrupt, 94.3% idle
Mem: 543M Active, 2153M Inact, 11G Wired, 10M Cache, 2132M Free
ARC: 7249M Total, 1325M MFU, 4534M MRU, 906M Anon, 223M Header, 261M Other
Swap: 32G Total, 201M Used, 32G Free

  PID USERNAME   PRI NICE   SIZERES STATE   C   TIMEWCPU COMMAND
83149 postgres380  2197M   528M zio->i  5   1:13  23.19% postgres
83148 jonc220 36028K 13476K select  2   0:11   3.86% pg_restore
  852 postgres200  2181M  2051M select  5   0:27   0.68% postgres
    0 root       -15    -     0K  4240K -       6   0:50   0.49% kernel{zio_write_issue_}
    0 root       -15    -     0K  4240K -       6   0:50   0.39% kernel{zio_write_issue_}
    0 root       -15    -     0K  4240K -       6   0:50   0.39% kernel{zio_write_issue_}
    0 root       -15    -     0K  4240K -       7   0:50   0.39% kernel{zio_write_issue_}
    0 root       -15    -     0K  4240K -       7   0:50   0.39% kernel{zio_write_issue_}
    0 root       -15    -     0K  4240K -       7   0:50   0.29% kernel{zio_write_issue_}
    3 root        -8    -     0K   112K zio->i  6   1:50   0.20% zfskern{txg_thread_enter}
   12 root       -88    -     0K   352K WAIT    0   1:07   0.20% intr{irq268: ahci0}
    0 root       -16    -     0K  4240K -       4   0:29   0.20% kernel{zio_write_intr_4}
    0 root       -16    -     0K  4240K -       7   0:29   0.10% kernel{zio_write_intr_6}
    0 root       -16    -     0K  4240K -       0   0:29   0.10% kernel{zio_write_intr_1}
    0 root       -16    -     0K  4240K -       5   0:29   0.10% kernel{zio_write_intr_2}
    0 root       -16    -     0K  4240K -       1   0:29   0.10% kernel{zio_write_intr_5}
...

Taking another look at the internal dir structure for postgres, I'm
not too sure whether this is related to the original poster's problem
though.

Cheers.
-- 
Jonathan Chen 


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Steven Hartland



On 20/10/2016 23:48, Jonathan Chen wrote:

On 21 October 2016 at 11:27, Steven Hartland  wrote:

On 20/10/2016 22:18, Jonathan Chen wrote:

On 21 October 2016 at 09:09, Peter  wrote:
[...]

I see this on my pgsql_tmp dirs (where Postgres stores intermediate
query data that gets too big for mem - usually lots of files) - in
normal operation these dirs are completely empty, but make heavy disk
activity (even writing!) when doing ls.
Seems normal, I dont care as long as the thing is stable. One would need
to check how ZFS stores directories and what kind of fragmentation can
happen there. Or wait for some future feature that would do
housekeeping. ;)

I'm seeing this as well with an Odoo ERP running on Postgresql. This
lag does matter to me as this is huge performance hit when running
Postgresql on ZFS, and it would be good to see this resolved.
pg_restores can make the system crawl as well.

As mentioned before, could you confirm you have disabled atime?

Yup, also set the blocksize to 4K.

11:46am# zfs get all irontree/postgresql
[...]

Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Jonathan Chen
On 21 October 2016 at 11:27, Steven Hartland  wrote:
> On 20/10/2016 22:18, Jonathan Chen wrote:
>>
>> On 21 October 2016 at 09:09, Peter  wrote:
>> [...]
>>>
>>> I see this on my pgsql_tmp dirs (where Postgres stores intermediate
>>> query data that gets too big for mem - usually lots of files) - in
>>> normal operation these dirs are completely empty, but make heavy disk
>>> activity (even writing!) when doing ls.
>>> Seems normal, I dont care as long as the thing is stable. One would need
>>> to check how ZFS stores directories and what kind of fragmentation can
>>> happen there. Or wait for some future feature that would do
>>> housekeeping. ;)
>>
>> I'm seeing this as well with an Odoo ERP running on Postgresql. This
>> lag does matter to me as this is huge performance hit when running
>> Postgresql on ZFS, and it would be good to see this resolved.
>> pg_restores can make the system crawl as well.
>
> As mentioned before, could you confirm you have disabled atime?

Yup, also set the blocksize to 4K.

11:46am# zfs get all irontree/postgresql
NAME PROPERTY  VALUE  SOURCE
irontree/postgresql  type  filesystem -
irontree/postgresql  creation  Wed Sep 23 15:07 2015  -
irontree/postgresql  used  43.8G  -
irontree/postgresql  available 592G   -
irontree/postgresql  referenced43.8G  -
irontree/postgresql  compressratio 1.00x  -
irontree/postgresql  mounted   yes-
irontree/postgresql  quota none   default
irontree/postgresql  reservation   none   default
irontree/postgresql  recordsize8K local
irontree/postgresql  mountpoint            /postgresql            inherited from irontree
irontree/postgresql  sharenfs  offdefault
irontree/postgresql  checksum  on default
irontree/postgresql  compression   offdefault
irontree/postgresql  atime offlocal
irontree/postgresql  devices   on default
irontree/postgresql  exec  on default
irontree/postgresql  setuidon default
irontree/postgresql  readonly  offdefault
irontree/postgresql  jailedoffdefault
irontree/postgresql  snapdir   hidden default
irontree/postgresql  aclmode   discarddefault
irontree/postgresql  aclinheritrestricted default
irontree/postgresql  canmount  on default
irontree/postgresql  xattr offtemporary
irontree/postgresql  copies1  default
irontree/postgresql  version   5  -
irontree/postgresql  utf8only  off-
irontree/postgresql  normalization none   -
irontree/postgresql  casesensitivity   sensitive  -
irontree/postgresql  vscan offdefault
irontree/postgresql  nbmandoffdefault
irontree/postgresql  sharesmb  offdefault
irontree/postgresql  refquota  none   default
irontree/postgresql  refreservationnone   default
irontree/postgresql  primarycache  alldefault
irontree/postgresql  secondarycachealldefault
irontree/postgresql  usedbysnapshots   0  -
irontree/postgresql  usedbydataset 43.8G  -
irontree/postgresql  usedbychildren0  -
irontree/postgresql  usedbyrefreservation  0  -
irontree/postgresql  logbias   latencydefault
irontree/postgresql  dedup offdefault
irontree/postgresql  mlslabel -
irontree/postgresql  sync  standard   default
irontree/postgresql  refcompressratio  1.00x  -
irontree/postgresql  written   43.8G  -
irontree/postgresql  logicalused   43.4G  -
irontree/postgresql  logicalreferenced 43.4G  -
irontree/postgresql  volmode   defaultdefault
irontree/postgresql  filesystem_limit  none   default
irontree/postgresql  snapshot_limitnone   default
irontree/postgresql  filesystem_count  none   default

Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Steven Hartland

On 20/10/2016 22:18, Jonathan Chen wrote:

On 21 October 2016 at 09:09, Peter  wrote:
[...]

I see this on my pgsql_tmp dirs (where Postgres stores intermediate
query data that gets too big for mem - usually lots of files) - in
normal operation these dirs are completely empty, but make heavy disk
activity (even writing!) when doing ls.
Seems normal, I dont care as long as the thing is stable. One would need
to check how ZFS stores directories and what kind of fragmentation can
happen there. Or wait for some future feature that would do
housekeeping. ;)

I'm seeing this as well with an Odoo ERP running on Postgresql. This
lag does matter to me as this is huge performance hit when running
Postgresql on ZFS, and it would be good to see this resolved.
pg_restores can make the system crawl as well.

As mentioned before, could you confirm you have disabled atime?


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Chris Watson
While I have yet to encounter this with PG on ZFS (knock on wood), this 
is obviously not an isolated issue, and those experiencing it should do 
as much investigation as possible and open a PR. This seems like the kind 
of thing I'll eventually read about on Hacker News, in a post from a Linux 
switcher explaining why they went back to Linux. It's not a show stopper, 
but it's obviously an issue. 

Chris

Sent from my iPhone 5

> On Oct 20, 2016, at 4:18 PM, Jonathan Chen  wrote:
> 
>> On 21 October 2016 at 09:09, Peter  wrote:
>> [...]
>> 
>> I see this on my pgsql_tmp dirs (where Postgres stores intermediate
>> query data that gets too big for mem - usually lots of files) - in
>> normal operation these dirs are completely empty, but make heavy disk
>> activity (even writing!) when doing ls.
>> Seems normal, I dont care as long as the thing is stable. One would need
>> to check how ZFS stores directories and what kind of fragmentation can
>> happen there. Or wait for some future feature that would do
>> housekeeping. ;)
> 
> I'm seeing this as well with an Odoo ERP running on Postgresql. This
> lag does matter to me as this is huge performance hit when running
> Postgresql on ZFS, and it would be good to see this resolved.
> pg_restores can make the system crawl as well.
> 
> Cheers.
> -- 
> Jonathan Chen 


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Jonathan Chen
On 21 October 2016 at 09:09, Peter  wrote:
[...]
>
> I see this on my pgsql_tmp dirs (where Postgres stores intermediate
> query data that gets too big for mem - usually lots of files) - in
> normal operation these dirs are completely empty, but make heavy disk
> activity (even writing!) when doing ls.
> Seems normal, I dont care as long as the thing is stable. One would need
> to check how ZFS stores directories and what kind of fragmentation can
> happen there. Or wait for some future feature that would do
> housekeeping. ;)

I'm seeing this as well with an Odoo ERP running on PostgreSQL. This
lag does matter to me, as it is a huge performance hit when running
PostgreSQL on ZFS, and it would be good to see this resolved.
pg_restores can make the system crawl as well.

Cheers.
-- 
Jonathan Chen 


Re: Nightly disk-related panic since upgrade to 10.3

2016-10-20 Thread Peter

Andrea Venturoli wrote:

Hello.

Last week I upgraded a 9.3/amd64 box to 10.3: since then, it crashed and
rebooted at least once every night.


Hi,

  I have a quite similar issue, crash dumps every night, but my
stacktrace is different (crashing mostly in cam/scsi/scsi.c), and my
env is also quite different (old i386, individual disks, extensive use
of ZFS), so the cause here is very likely different. Also, the upgrade
is not the only change: I recently replaced a burnt power supply and
added an SSD cache.
Basically you have two options: A) fire up kgdb, go into the code and
try to understand what exactly is happening. This depends on whether
you have enough clue to go that way; I found gdb(4) and especially the
"Debugging Kernel Problems" PDF by Greg Lehey quite helpful.
B) systematically change parameters. Start by figuring out from the
logs the exact time of the crash and what was happening then, and try
to reproduce it. Then change things and isolate the cause.
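A rough sketch of option A, assuming a default dumpdev/savecore setup (the vmcore number varies per crash):

```shell
# Load the kernel and the latest crash dump into the kernel debugger:
kgdb /boot/kernel/kernel /var/crash/vmcore.0
# At the (kgdb) prompt, typical first steps:
#   bt          # backtrace of the thread that panicked
#   frame 5     # select a frame to inspect its locals
#   list        # show the source around the crash point
```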

Having a RAID controller is a bit ugly in this regard, as it is more
or less a black box, and it is difficult to change parameters or swap
components.


The only exception was on Friday, when it locked without rebooting: it
still answered ping request and logins through HTTP would half work; I'm
under the impression that the disk subsystem was hung, so ICMP would
work since it does no I/O and HTTP too worked as far as no disk access
was required.


Yep. That tends to happen. It doesn't give much of a clue, except that
there is a disk-related problem.



Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Peter

Eugene M. Zheganin wrote:

Hi.

I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation
on different releases) and ZFS. I also have one directory that used to
hold a lot of (tens of thousands of) files. It surely takes a lot of
time to get a listing of it. But now I have 2 files and a couple of
dozen directories in it (I sorted the files into directories).
Surprisingly, there's still a lag between "ls" and any output:


I see this on my pgsql_tmp dirs (where Postgres stores intermediate
query data that gets too big for mem - usually lots of files) - in
normal operation these dirs are completely empty, but make heavy disk
activity (even writing!) when doing ls.
Seems normal; I don't care as long as the thing is stable. One would need
to check how ZFS stores directories and what kind of fragmentation can
happen there. Or wait for some future feature that would do
housekeeping. ;)



Jenkins build is still unstable: FreeBSD_stable_10 #436

2016-10-20 Thread jenkins-admin
https://jenkins.FreeBSD.org/job/FreeBSD_stable_10/436/


Re: I'm upset about FreeBSD

2016-10-20 Thread Ian Smith
On Wed, 19 Oct 2016 16:38:09 -0700, Kevin Oberman wrote:
 > On Wed, Oct 19, 2016 at 3:39 PM, Warner Losh  wrote:
 > 
 > > On Wed, Oct 19, 2016 at 12:21 PM, Rostislav Krasny 
 > > wrote:
 > > > On Tue, Oct 18, 2016 at 21:57:29 +1100, Ian Smith 
 > > wrote:
 > > >>
 > > >> If FreeBSD GPT images (and Kindle readers) can trigger this, so could a
 > > >> theoretically unlimited combination of data on block 2 of USB media;
 > > >> modifying FreeBSD to fix a Windows bug should be out of the question.
 > > >
 > > > Not modifying FreeBSD and not fixing Windows bug but modifying the
 > > > FreeBSD installation media and working around the Windows bug to let
 > > > people install FreeBSD without disappointing at very beginning. Why
 > > > GPT is used in the memstick images at all? Why they can't be MBR
 > > > based? I think they can.
 > >
 > > Can't boot UEFI off of MBR disks on all BIOSes.
 > >
 > > Warner
 >
 > I'll go one farther. You can't boot many new PCs with traditional MBR
 > disks. And. please don't  confuse GPT with UEFI. I have yet to find an
 > amd64 computer that has a problem with a GPT format with MBR. Due to broken
 > BIOS, my 5-year old ThinkPad won't boot UEFI, but it has no problem with
 > MBR, whether GPT formatted or not. As far as I know, the 11.0 memstick
 > images are still MBR, just on a GPT structured disk, not UEFI. (Let me know
 > if I am mistaken on this.)

Well, GPT with protective MBR.  Wikipedia calls FreeBSD's GPT 'hybrid', 
I gather because it also still works with older BIOS booting.

root@x200:/extra/images # mdconfig -lv
md0 vnode 700M  /home/smithi/FreeBSD-11.0-RELEASE-amd64-memstick.img
root@x200:/extra/images # gpart show -p md0
=>  3  1433741md0  GPT  (700M)
3 1600  md0p1  efi  (800k)
 1603  125  md0p2  freebsd-boot  (62k)
 1728  1429968  md0p3  freebsd-ufs  (698M)
  1431696 2048  md0p4  freebsd-swap  (1.0M)

[wondering vaguely what use 1.0M of swap might be?]

 > I do accept that some early amd64 systems and, perhaps, many i386 systems
 > may have problems with GPT, but GPT was designed to be compatible with
 > traditional disk formats and, while they may have problems, they really
 > should work for single partition disks. And I understand that it is
 > frustrating if you hit one of these cases where it fails.

My 8yo Thinkpad X200 knows nothing of UEFI and boots 11.0 amd64 memstick 
fine as is.  It also happily boots from a sliced MBR memstick w/ boot0, 
after dd'ing md0p3 to (in this case) da0s2, adding bootcode after a bit 
of fiddling recreating da0s2 & da0s2a after the above dd clobbers it ..

So it's not hard making an MBR sliced memstick with up to 4 bootables; 
I'm hoping to convert dvd1 to one of these, maybe it'll work this time, 
and - at least theoretically - couldn't "kill Windows" by mere presence.

cheers, Ian


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Steven Hartland

Do you have atime enabled for the relevant volume?

If so disable it and see if that helps:
zfs set atime=off 
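Concretely, with a hypothetical dataset name, that is:

```shell
# Check the current setting, then disable atime for the affected dataset:
zfs get atime tank/data
zfs set atime=off tank/data
```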

Regards
Steve

On 20/10/2016 14:47, Eugene M. Zheganin wrote:

Hi.

I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation 
on different releases) and ZFS. I also have one directory that used 
to hold a lot of (tens of thousands of) files. It surely takes a lot of 
time to get a listing of it. But now I have 2 files and a couple of 
dozen directories in it (I sorted the files into directories). 
Surprisingly, there's still a lag between "ls" and any output:



===Cut===

# /usr/bin/time -h ls
.recycle    2016-01  2016-04  2016-07  2016-10     sort-files.sh
2014        2016-02  2016-05  2016-08  ktrace.out  sort-months.sh
2015        2016-03  2016-06  2016-09  old         sounds

        5.75s real  0.00s user  0.02s sys

===Cut===


I've seen this situation before, on other servers, so it's not the 
first time I've encountered it. However, it's not 100% reproducible (I 
mean, if I fill the directory with tens of thousands of files, I will 
not necessarily get this lag after deleting them).


Has anyone seen this, and does anyone know how to resolve it? It's 
not a critical issue, but it makes things uncomfortable here. One 
method I'm aware of: you can move the contents of this directory to 
some other place, then delete it and create it again. But that's kind 
of a nasty workaround.
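A minimal sketch of that workaround (paths are hypothetical; it skips dotfiles, and the directory briefly disappears, so stop any writers first):

```shell
# Recreate the directory so the filesystem allocates a fresh directory object.
DIR=/tmp/spooldir                      # placeholder for the affected directory
mkdir -p "$DIR"; touch "$DIR/sample"   # demo setup only
mkdir -p "$DIR.new"
mv "$DIR"/* "$DIR.new"/                # plain entries only; dotfiles need extra care
rmdir "$DIR"
mv "$DIR.new" "$DIR"
```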



Thanks.

Eugene.



Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Pete French
> > I've the same issue, but only if the ZFS resides on a LSI MegaRaid and one 
> > RAID0 for each disk.
> >
> Not in my case, both pool disks are attached to the Intel ICH7 SATA300 
> controller.

Nor in my case - my disks are on this:

ahci0: 



Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin

Hi.

On 20.10.2016 19:18, Dr. Nikolaus Klepp wrote:


I've the same issue, but only if the ZFS resides on a LSI MegaRaid and one 
RAID0 for each disk.

Not in my case, both pool disks are attached to the Intel ICH7 SATA300 
controller.


Thanks.

Eugene.



Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin

Hi,

On 20.10.2016 19:12, Pete French wrote:

I ignored this thread until now, but I have observed the same behaviour
on my systems over the last week or so. In my case it's an exim spool
directory, which was hugely full at some point (thousands of files) and
now takes an awfully long time to open and list. I delete and remake
the directories and the problem goes away, but I believe it is the same
thing.
I am running 10.3-STABLE, r303832


Yup, saw this once on a sendmail spool directory.

Thanks.
Eugene.


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin

Hi.

On 20.10.2016 19:03, Miroslav Lachman wrote:


What about snapshots? Are there any snapshots on this filesystem?

Nope.

# zfs list -t all
NAMEUSED  AVAIL  REFER  MOUNTPOINT
zroot   245G   201G  1.17G  legacy
zroot/tmp  10.1M   201G  10.1M  /tmp
zroot/usr  9.78G   201G  7.36G  /usr
zroot/usr/home 77.9M   201G  77.9M  /usr/home
zroot/usr/ports1.41G   201G   857M  /usr/ports
zroot/usr/ports/distfiles   590M   201G   590M  /usr/ports/distfiles
zroot/usr/ports/packages642K   201G   642K  /usr/ports/packages
zroot/usr/src   949M   201G   949M  /usr/src
zroot/var   234G   201G   233G  /var
zroot/var/crash21.5K   201G  21.5K  /var/crash
zroot/var/db127M   201G   121M  /var/db
zroot/var/db/pkg   6.28M   201G  6.28M  /var/db/pkg
zroot/var/empty  20K   201G20K  /var/empty
zroot/var/log   631M   201G   631M  /var/log
zroot/var/mail 24.6M   201G  24.6M  /var/mail
zroot/var/run54K   201G54K  /var/run
zroot/var/tmp   198K   201G   198K  /var/tmp



Or scrub running in the background?


No.


Thanks.

Eugene.



Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin

Hi.

On 20.10.2016 18:54, Nicolas Gilles wrote:

Looks like it's not taking up any processing time, so my guess is
the lag probably comes from stalled I/O ... bad disk?
Well, I cannot rule this out completely, but the first time I saw this 
lag on this particular server was about two months ago, and I guess two 
months is enough time for ZFS on a redundant pool to get errors; but as 
you can see:


]# zpool status
  pool: zroot
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
Expect reduced performance.
action: Replace affected devices with devices that support the
configured block size, or migrate data to a properly configured
pool.
  scan: resilvered 5.74G in 0h31m with 0 errors on Wed Jun  8 11:54:14 2016
config:

NAMESTATE READ WRITE CKSUM
zroot   ONLINE   0 0 0
  mirror-0  ONLINE   0 0 0
gpt/zroot0  ONLINE   0 0 0  block size: 512B configured, 4096B native
gpt/zroot1  ONLINE   0 0 0

errors: No known data errors

there's none. Yes, the disks have different sector sizes, but this 
issue happened with one particular directory, not all of them, so I 
guess that is irrelevant.
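If one did want to double-check the mismatch, something like this would show it (FreeBSD commands; the device and pool names are taken from the output above):

```shell
# Physical vs logical sector size as the disk reports it:
diskinfo -v /dev/ada0 | egrep 'sectorsize|stripesize'
# The pool's allocation shift: ashift=9 means 512B-aligned, ashift=12 means 4K:
zdb -C zroot | grep ashift
```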



Does a second "ls" return immediately (i.e. the metadata has been
cached)?


Nope. Although the lag varies slightly:

4.79s real  0.00s user  0.02s sys
5.51s real  0.00s user  0.02s sys
4.78s real  0.00s user  0.02s sys
6.88s real  0.00s user  0.02s sys

Thanks.
Eugene.



Re: tcsh is not handled correctly UTF-8 in arguments

2016-10-20 Thread Slawa Olhovchenkov
On Thu, Oct 20, 2016 at 08:54:05AM -0600, Alan Somers wrote:

> On Wed, Oct 19, 2016 at 11:10 AM, Slawa Olhovchenkov  wrote:
> > tcsh called by sshd for invocation of scp: `tcsh -c scp -f Расписание.pdf`
> > At this time no any LC_* is set.
> > tcsh read .cshrc and set LC_CTYPE=ru_RU.UTF-8 LC_COLLATE=ru_RU.UTF-8.
> > After this invocation of scp will be incorrect:
> >
> > 7ab0  20 2d 66 20 c3 90 c2 a0  c3 90 c2 b0 c3 91 c2 81  | -f |
> > 7ac0  c3 90 c2 bf c3 90 c2 b8  c3 91 c2 81 c3 90 c2 b0  ||
> > 7ad0  c3 90 c2 bd c3 90 c2 b8  c3 90 c2 b5 5f c3 90 c2  |_...|
> > 7ae0  a2 c3 90 c2 97 c3 90 c2  98 2e 70 64 66 0a|..pdf.  |
> >
> > Correct invocation must be:
> >
> >    20 2d 66 20  | -f |
> > 0010  d0 a0 d0 b0 d1 81 d0 bf  d0 b8 d1 81 d0 b0 d0 bd  ||
> > 0020  d0 b8 d0 b5 5f d0 a2 d0  97 d0 98 2e 70 64 66 0a  |_...pdf.|
> >
> > `d0` =>  `c3 90`
> > `a0` =>  `c2 a0`
> >
> > I.e. every byte re-encoded to utf-8: `d0` =>  `c3 90`
> >
> > As result imposible to access files w/ non-ascii names.
> 
> This might be related to PR213013. Could you please try head after r306782?

I don't think it's related. PR213013 is about character classification; my
report is about the unnecessary re-encoding of shell arguments.
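The byte pattern in the hexdumps above (`d0` becoming `c3 90`) is the
classic double-encoding signature: UTF-8 bytes are treated as Latin-1
characters and encoded to UTF-8 a second time. A minimal sketch
reproducing it, not taken from tcsh itself:

```python
# The UTF-8 bytes of the filename from the report are decoded as Latin-1
# (one character per byte) and then encoded to UTF-8 again, so 0xd0
# becomes 0xc3 0x90 and 0xa0 becomes 0xc2 0xa0, matching the hexdump.
name = "Расписание.pdf"
correct = name.encode("utf-8")                        # what scp should get
mangled = correct.decode("latin-1").encode("utf-8")   # what it actually got

assert correct[:2] == b"\xd0\xa0"           # 'Р' in UTF-8
assert mangled[:4] == b"\xc3\x90\xc2\xa0"   # the same two bytes, re-encoded
print(mangled.hex(" "))
```

Since the mangled name no longer matches the on-disk UTF-8 name, the file
cannot be found, which is exactly the reported failure.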
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"

Re: tcsh does not handle UTF-8 in arguments correctly

2016-10-20 Thread Alan Somers
On Wed, Oct 19, 2016 at 11:10 AM, Slawa Olhovchenkov  wrote:
> tcsh is called by sshd to invoke scp: `tcsh -c scp -f Расписание.pdf`
> At this point no LC_* variables are set.
> tcsh reads .cshrc, which sets LC_CTYPE=ru_RU.UTF-8 LC_COLLATE=ru_RU.UTF-8.
> After this the invocation of scp is incorrect:
>
> 7ab0  20 2d 66 20 c3 90 c2 a0  c3 90 c2 b0 c3 91 c2 81  | -f ............|
> 7ac0  c3 90 c2 bf c3 90 c2 b8  c3 91 c2 81 c3 90 c2 b0  |................|
> 7ad0  c3 90 c2 bd c3 90 c2 b8  c3 90 c2 b5 5f c3 90 c2  |............_...|
> 7ae0  a2 c3 90 c2 97 c3 90 c2  98 2e 70 64 66 0a        |..........pdf.  |
>
> The correct invocation would be:
>
>       20 2d 66 20                                        | -f |
> 0010  d0 a0 d0 b0 d1 81 d0 bf  d0 b8 d1 81 d0 b0 d0 bd  |................|
> 0020  d0 b8 d0 b5 5f d0 a2 d0  97 d0 98 2e 70 64 66 0a  |...._.......pdf.|
>
> `d0` => `c3 90`
> `a0` => `c2 a0`
>
> I.e. every byte is re-encoded to UTF-8: `d0` => `c3 90`.
>
> As a result it is impossible to access files with non-ASCII names.

This might be related to PR213013. Could you please try head after r306782?
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"

Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Dr. Nikolaus Klepp
Am Donnerstag, 20. Oktober 2016 schrieb Eugene M. Zheganin:
> Hi.
> 
> I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation 
> on different releases) and a ZFS filesystem. I also have one directory 
> that used to hold a lot of (tens of thousands of) files, and it surely 
> took a long time to get a listing of it. But now I have 2 files and a 
> couple of dozen directories in it (I sorted the files into directories). 
> Surprisingly, there's still a lag between "ls" and its output:
> 
> 
> ===Cut===
> 
> # /usr/bin/time -h ls
> .recycle       2016-01   2016-04   2016-07   2016-10      sort-files.sh
> 2014           2016-02   2016-05   2016-08   ktrace.out   sort-months.sh
> 2015           2016-03   2016-06   2016-09   old          sounds
>  5.75s real  0.00s user  0.02s sys
> 
> ===Cut===
> 
> 
> I've seen this situation before, on other servers, so it's not the first 
> time I've encountered this. However, it's not 100% reproducible (I mean, 
> if I fill the directory with tens of thousands of files, I will not 
> necessarily get this lag after deleting them).
> 
> Has anyone seen this, and does anyone know how to resolve it? It's not a 
> critical issue, but it makes things uncomfortable here. One method I'm 
> aware of: you can move the contents of the directory somewhere else, 
> delete it, and create it again. But that's kind of a nasty workaround.


Hi!

I have the same issue, but only when the ZFS pool resides on an LSI MegaRAID 
with one RAID0 volume per disk.

Nik


-- 
Please do not email me anything that you are not comfortable also sharing with 
the NSA.
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Pete French
I had ignored this thread until now, but I have observed the same behaviour
on my systems over the last week or so. In my case it's an exim spool
directory, which was hugely full at some point (thousands of
files) and now takes an awfully long time to open and list. I delete
and remake the directories and the problem goes away, but I believe it is the
same thing.

I am running 10.3-STABLE, r303832

-pete.
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Miroslav Lachman

Eugene M. Zheganin wrote on 2016/10/20 15:47:

Hi.

I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation
on different releases) and a ZFS filesystem. I also have one directory
that used to hold a lot of (tens of thousands of) files, and it surely
took a long time to get a listing of it. But now I have 2 files and a
couple of dozen directories in it (I sorted the files into directories).
Surprisingly, there's still a lag between "ls" and its output:


[...]


I've seen this situation before, on other servers, so it's not the first
time I've encountered this. However, it's not 100% reproducible (I mean,
if I fill the directory with tens of thousands of files, I will not
necessarily get this lag after deleting them).


What about snapshots? Are there any snapshots on this filesystem?

Or is a scrub running in the background?
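Both checks can be scripted. A sketch with hypothetical pool and dataset
names (zroot/var/spool is a placeholder; substitute the dataset holding the
slow directory), guarded so it is a no-op where ZFS is absent:

```shell
fs=zroot/var/spool   # hypothetical dataset holding the slow directory
pool=${fs%%/*}       # pool name is the first path component
if command -v zfs >/dev/null 2>&1 && zfs list "$fs" >/dev/null 2>&1; then
    # Snapshots keep the old, bloated directory blocks referenced.
    zfs list -t snapshot -r "$fs"
    # The "scan:" line reports a scrub or resilver in progress.
    zpool status "$pool" | grep 'scan:'
    result=checked
else
    result=skipped   # no ZFS tools, or no such dataset, on this machine
fi
echo "$result"
```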

Miroslav Lachman
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Nicolas Gilles
On Thu, Oct 20, 2016 at 3:47 PM, Eugene M. Zheganin  wrote:
> Hi.
>
> I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation on
> different releases) and a ZFS filesystem. I also have one directory that
> used to hold a lot of (tens of thousands of) files, and it surely took a
> long time to get a listing of it. But now I have 2 files and a couple of
> dozen directories in it (I sorted the files into directories).
> Surprisingly, there's still a lag between "ls" and its output:
>
>
> ===Cut===
>
> # /usr/bin/time -h ls
> .recycle       2016-01   2016-04   2016-07   2016-10      sort-files.sh
> 2014           2016-02   2016-05   2016-08   ktrace.out   sort-months.sh
> 2015           2016-03   2016-06   2016-09   old          sounds
> 5.75s real  0.00s user  0.02s sys

Looks like it's not taking up any processing time, so my guess is
the lag probably comes from stalled I/O ... bad disk?

Does a second "ls" return immediately (i.e. has the metadata been
cached)?

>
> ===Cut===
>
>
> I've seen this situation before, on other servers, so it's not the first
> time I've encountered this. However, it's not 100% reproducible (I mean,
> if I fill the directory with tens of thousands of files, I will not
> necessarily get this lag after deleting them).
>
> Has anyone seen this, and does anyone know how to resolve it? It's not a
> critical issue, but it makes things uncomfortable here. One method I'm
> aware of: you can move the contents of the directory somewhere else,
> delete it, and create it again. But that's kind of a nasty workaround.
>
>
> Thanks.
>
> Eugene.
>
> ___
> freebsd-stable@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin

Hi.

I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation 
on different releases) and a ZFS filesystem. I also have one directory 
that used to hold a lot of (tens of thousands of) files, and it surely 
took a long time to get a listing of it. But now I have 2 files and a 
couple of dozen directories in it (I sorted the files into directories). 
Surprisingly, there's still a lag between "ls" and its output:



===Cut===

# /usr/bin/time -h ls
.recycle       2016-01   2016-04   2016-07   2016-10      sort-files.sh
2014           2016-02   2016-05   2016-08   ktrace.out   sort-months.sh
2015           2016-03   2016-06   2016-09   old          sounds

5.75s real  0.00s user  0.02s sys

===Cut===


I've seen this situation before, on other servers, so it's not the first 
time I've encountered this. However, it's not 100% reproducible (I mean, 
if I fill the directory with tens of thousands of files, I will not 
necessarily get this lag after deleting them).


Has anyone seen this, and does anyone know how to resolve it? It's not a 
critical issue, but it makes things uncomfortable here. One method I'm 
aware of: you can move the contents of the directory somewhere else, 
delete it, and create it again. But that's kind of a nasty workaround.
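The move-delete-recreate workaround described above can be sketched as a
short shell sequence. The demo below runs on a throwaway path under mktemp
rather than a real spool directory; substitute the affected directory on
the ZFS dataset:

```shell
# Recreating the directory gives ZFS a fresh (small) directory object in
# place of one whose on-disk structure grew while it held tens of
# thousands of entries. Demo path is a stand-in for the real directory.
dir=$(mktemp -d)/spool
mkdir -p "$dir"
touch "$dir/a" "$dir/b"     # stand-ins for the surviving files

mkdir "$dir.new"
mv "$dir"/* "$dir.new"/     # dotfiles would need a second mv
rmdir "$dir"                # old directory (and its metadata) is gone
mv "$dir.new" "$dir"        # same name again, fresh directory object
ls "$dir"                   # contents preserved
```

The rename at the end is atomic, so anything that only knows the directory
by name sees it reappear with its contents intact.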



Thanks.

Eugene.

___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Jenkins build is still unstable: FreeBSD_stable_10 #435

2016-10-20 Thread jenkins-admin
https://jenkins.FreeBSD.org/job/FreeBSD_stable_10/435/
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"