Re: Authentication tried for XXX with correct key but not from a permitted host

2010-07-11 Thread Matthew Seaman
On 11/07/2010 04:04:57, Dan Langille wrote:

 That asked, I know if I move the key to the top of the
 ~/.ssh/authorized_keys file, the message is no longer logged. Further
 investigation reveals that if a line of the form:
 
 from=10..etc
 
 appears before the key being used to log in, the message will appear.

Usually the from="10.0.0.100" tag should be inserted at the beginning of
the line for each key it should affect.  It shouldn't do anything on a
line on its own -- in fact that should be a syntax error.  The behaviour
you're seeing sounds like something new: it isn't what sshd(8) describes
in the section on AUTHORIZED_KEYS FILE FORMAT.
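For reference, the documented per-key form puts the option list immediately in front of the key material on the same line; a sketch (address, options and key material are illustrative only, the key is elided):

```
from="10.0.0.0/8",no-port-forwarding ssh-rsa AAAAB3NzaC1yc2E...elided... user@example.org
```

Everything up to the first space is parsed as the comma-separated option list for that one key.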

This new behaviour sounds as if it could be quite useful for easing the
management of complicated authorized_keys files, but I'd have expected
some sort of notice somewhere.  I can't see anything relevant in the
release notes for OpenSSH versions 5.0, 5.1, 5.2, 5.3, 5.4 or 5.5
[Eg. http://www.openssh.org/txt/release-5.4 -- 8.1-PRERELEASE has
OpenSSH 5.4p1 bundled].  Nor anything in any of the ssh(1),
ssh_config(1), sshd(8), sshd_config(8) man pages.

Maybe it's a bug, but one that has fortuitously useful effects.

Cheers,

Matthew

-- 
Dr Matthew J Seaman MA, D.Phil.
PGP: http://www.infracaninophile.co.uk/pgpkey
JID: matt...@infracaninophile.co.uk
7 Priory Courtyard, Flat 3
Ramsgate, Kent, CT11 9PW





dmg handling

2010-07-11 Thread Zoran Kolic
Howdy!
I got a file made on a Mac, 142 MB in size and with
a dmg extension. It is a dump of a 4 GB SD card with
firmware upgrade files. file(1) says:

  disk1s1.dmg: VAX COFF executable not stripped

So, compressed. The file system on that original
card is ext3. Linux people use dmg2img utility to
decompress and then mount it with options like:

  mount -o loop -t hfsplus image dir

If correct, I could try to bunzip2 it first, then
rsync it to a sheevaplug node (running Ubuntu) and
finally mount it. The last step would be to write that
image to a new SD card and use it to upgrade the gadget.
I was sure that a simple dd would suffice, but the more
I read, the more I see it's wrong.
I also assume that I cannot write that image to a
smaller, 2 GB SD card.
Any idea what to do in this situation?
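On the size question: dd copies an image byte-for-byte, so the target card must be at least as large as the image, and a 4 GB dump cannot go onto a 2 GB card. A tiny pre-flight check along these lines (sizes hard-coded for illustration) makes the comparison explicit:

```shell
#!/bin/sh
# Refuse to write an image onto a device smaller than the image itself.
fits() {
    image_bytes=$1
    device_bytes=$2
    [ "$image_bytes" -le "$device_bytes" ]
}

image=$((4 * 1024 * 1024 * 1024))   # 4 GiB card dump
card=$((2 * 1024 * 1024 * 1024))    # 2 GiB target card

if fits "$image" "$card"; then
    echo "image fits on target"
else
    echo "image too large for target card"   # this branch is taken here
fi
```

In practice the two numbers would come from stat(1) on the image and from the disk driver (e.g. diskinfo on FreeBSD) rather than being hard-coded.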
Best regards

   Zoran

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-07-11 Thread Richard Lee
This is on clean FreeBSD 8.1 RC2, amd64, with 4GB memory.

The closest I found by Googling was this:
http://forums.freebsd.org/showthread.php?t=9935

And it talks about all kinds of little tweaks, but in the end, the
only thing that actually works is the stupid 1-line perl code that
forces the kernel to free the memory allocated to (non-zfs) disk
cache, which is the Inactive memory in top.

I have a 4-disk raidz pool, but that's unlikely to matter.

Try to copy large files from non-zfs disk to zfs disk.  FreeBSD will
cache the data read from non-zfs disk in memory, and free memory will
go down.  This is as expected, obviously.

Once there's very little free memory, one would expect whatever is
more important to kick out the cached data (Inact) and make memory
available.

But when almost all of the memory is taken by disk cache (of the non-zfs
file system), the ZFS disks start thrashing like mad and the write
throughput goes down to single-digit MB/s.

I believe it should be extremely easy to duplicate.  Just plug in a
big USB drive formatted in UFS (msdosfs will likely do the same), and
copy large files from that USB drive to zfs pool.

Right after a clean boot, gstat will show something like 20+MB/s
movement from the USB device (da*), and occasional bursts of activity on
the zpool devices at a very high rate.  Once free memory is exhausted, the
zpool devices will change to constant low-speed activity, with the disks
thrashing about constantly.

I tried enabling/disabling prefetch, messing with vnode counts,
zfs.vdev.min/max_pending, etc.  The only thing that works is that
stupid perl 1-liner (perl -e '$x=xx15'), which returns the
activity to that seen right after a clean boot.  It doesn't last very
long, though, as the disk cache again consumes all the memory.

Copying files between zfs devices doesn't seem to affect anything.

I understand zfs subsystem has its own memory/cache management.
Can a zfs expert please comment on this?

And is there a way to force the kernel to not cache non-zfs disk data?

--rich


syslogd's altlog_proglist and isc-dhcpd logging for FreeBSD

2010-07-11 Thread Harald Schmalzbauer

Hello,

Since isc-dhcpd-4.1.1 promises IPv6 support, I wanted to replace my
existing DHCP servers with this new version.

I'm running chrooted. My problem was with logging.

dhcpd is very noisy and setting log-facility local1 in dhcpd.conf 
doesn't work out of the box (*) because of the chrooted environment.


But some good guys have already coded everything needed to have
dhcpd log from a chrooted environment:
- syslogd has the -l switch, which makes it possible to place an
additional log socket inside the chrooted environment.
- /etc/rc.d/syslogd already knows about this and has the variable
altlog_proglist, which checks for possible chrooted daemons.


The problems are:
- /etc/rc.d/syslogd has the altlog_proglist hard-coded.
- /etc/rc.d/syslogd checks rc.conf for daemons that set a *_chrootdir
variable, but rc.d/isc-dhcpd uses dhcpd_rootdir.


So here are the few simple lines that make dhcpd logging work with
individual log-facility configs:


--- etc/rc.d/syslogd	2009-09-06 02:47:31.0 +0200
+++ etc/rc.d/syslogd	2010-07-11 21:27:46.477366986 +0200
@@ -1,6 +1,6 @@
 #!/bin/sh
 #
-# $FreeBSD: src/etc/rc.d/syslogd,v 1.13.2.1 2009/08/03 08:13:06 kensmith Exp $
+# $FreeBSD: src/etc/rc.d/syslogd,v 1.13.2.1.4.1 2010/06/14 02:09:06 kensmith Exp $

 #

 # PROVIDE: syslogd
@@ -19,7 +19,9 @@

 sockfile="/var/run/syslogd.sockets"
 evalargs="rc_flags=\"\`set_socketlist\` \$rc_flags\""
-altlog_proglist="named"
+
+load_rc_config $name
+altlog_proglist=${syslogd_altlog_proglist:-"named"}

 syslogd_precmd()
 {
--- etc/defaults/rc.conf	2009-11-01 15:08:40.0 +0100
+++ etc/defaults/rc.conf	2010-07-11 21:30:04.373974162 +0200
@@ -255,6 +255,7 @@
 syslogd_enable="YES"			# Run syslog daemon (or NO).
 syslogd_program="/usr/sbin/syslogd"	# path to syslogd, if you want a different one.
 syslogd_flags="-s"			# Flags to syslogd (if enabled).
+syslogd_altlog_proglist="named"	# Check for chrooted daemons and place additional socket

 inetd_enable="NO"		# Run the network daemon dispatcher (YES/NO).
 inetd_program="/usr/sbin/inetd"	# path to inetd, if you want a different one.
 inetd_flags="-wW -C 60"	# Optional flags to inetd



--- etc/rc.d/isc-dhcpd.orig 2010-07-08 13:03:45.0 +0200
+++ etc/rc.d/isc-dhcpd  2010-07-11 20:41:36.0 +0200
@@ -32,7 +32,7 @@

 dhcpd_chroot_enable=${dhcpd_chroot_enable:-"NO"}	# runs chrooted?
 dhcpd_devfs_enable=${dhcpd_devfs_enable:-"YES"}	# devfs if available?
-dhcpd_rootdir=${dhcpd_rootdir:-"/var/db/${name}"}	# directory to run in
+dhcpd_rootdir=${dhcpd_chrootdir:-"/var/db/${name}"}	# directory to run in
 # dhcpd_includedir=	# directory for included config files

 safe_run ()# rc command [args...]
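With the two patches above applied, the intended usage would look something like this in /etc/rc.conf (the values are illustrative, the variable names are the ones the patches introduce):

```
syslogd_altlog_proglist="named dhcpd"	# daemons that get a chrooted log socket
dhcpd_enable="YES"
dhcpd_chroot_enable="YES"
dhcpd_chrootdir="/var/db/dhcpd"		# renamed variable per the patch
```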

Is it possible to get these changes into the base system?
@wxs: any objections to changing the dhcpd_rootdir variable to dhcpd_chrootdir?

Shall I file a PR?

Thanks,

-Harry

P.S.: For the record, here is another possibility to make dhcpd use a
different syslog facility in a chrooted environment:

(*)
Changing the syslog facility of dhcpd with log-facility local7; in
dhcpd.conf doesn't work for a chrooted dhcpd.
At startup, it uses the local datagram syslogd socket /dev/log
(/var/run/syslog.sockets).
The syslog facility change happens after the chroot has taken place, so
in the chrooted environment there is no syslogd reachable.
To change the default syslog facility from LOG_DAEMON to LOG_LOCAL7, add
the following to the port's Makefile:

CONFIGURE_ENV=	CPPFLAGS="-DDHCPD_LOG_FACILITY=LOG_LOCAL7" .. *snip*





Re: 8.x grudges

2010-07-11 Thread M. Warner Losh
In message: aanlktikdj39liaffibdwkfa1vgt4w7m8toxevjykh...@mail.gmail.com
Garrett Cooper yanef...@gmail.com writes:
: On Wed, Jul 7, 2010 at 1:17 PM, Mikhail T. mi+t...@aldan.algebra.com wrote:
:  07.07.2010 14:59, Jeremy Chadwick wrote:
: 
:       The COMPAT_FREEBSD7 kernel option is, apparently, a requirement (and
:       thus not an option) -- kernel config files that worked with
:       7.x break without this option in them (in addition to all the
:       nuisance that's documented in UPDATING -- which, somehow, makes
:       the breakage acceptable). config(8) will not warn about this, but
:       the kernel build fails.
: 
: 
:  We don't use this option (meaning it's removed from our kernels).  It's
:  definitely not required.  All it does is ensure your kernel can
:  comprehend executables/binaries built on 7.x.
: 
: 
:  Attached is the kernel config-file (i386), that worked fine under 7.x. The
:  kernel-compile will break (some *freebsd7* structs undefined), without the
:  COMPAT_FREEBSD7 option. Try it for yourself...
: 
: options   SYSVSHM # SYSV-style shared memory
: options   SYSVMSG # SYSV-style message queues
: options   SYSVSEM # SYSV-style semaphores
: 
: Those require COMPAT_FREEBSD7. This does seem like a bug:
: 
: static struct syscall_helper_data shm_syscalls[] = {
: SYSCALL_INIT_HELPER(shmat),
: SYSCALL_INIT_HELPER(shmctl),
: SYSCALL_INIT_HELPER(shmdt),
: SYSCALL_INIT_HELPER(shmget),
: #if defined(COMPAT_FREEBSD4) || defined(COMPAT_FREEBSD5) || \
: defined(COMPAT_FREEBSD6) || defined(COMPAT_FREEBSD7)
: SYSCALL_INIT_HELPER(freebsd7_shmctl),
: #endif
: 
: The check should be for COMPAT_FREEBSD7 only, I would think.
: 
: Apart from that, everything else should work without it, I would think.

You would think that, but you'd be wrong.

In general, if you have COMPAT_FREEBSDx defined, you need all
COMPAT_FREEBSDy for y > x defined.

The reason for this is that we name the compat shim for the version
where it was removed, but it is needed for all prior versions.
freebsd7_shmctl is needed to emulate the earlier versions as well...
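Concretely, that means a kernel config meant to run (say) 4.x binaries currently has to carry the whole ladder of shims; the option names below are the real kernel options, but the grouping and comments are only illustrative:

```
options 	COMPAT_FREEBSD4		# the goal: run 4.x binaries
options 	COMPAT_FREEBSD5		# also required -- each shim is
options 	COMPAT_FREEBSD6		# named for the release where the
options 	COMPAT_FREEBSD7		# old syscall was replaced
```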

This is why we'd like to move to something more like
COMPAT_MIN_FREEBSD=z, but there's hooks into the config system and
syscall tables that make it tricky...

Warner


Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-07-11 Thread Jeremy Chadwick
On Sun, Jul 11, 2010 at 11:25:12AM -0700, Richard Lee wrote:
 This is on clean FreeBSD 8.1 RC2, amd64, with 4GB memory.
 
 The closest I found by Googling was this:
 http://forums.freebsd.org/showthread.php?t=9935
 
 And it talks about all kinds of little tweaks, but in the end, the
 only thing that actually works is the stupid 1-line perl code that
 forces the kernal to free the memory allocated to (non-zfs) disk
 cache, which is the Inactive memory in top.
 
 I have a 4-disk raidz pool, but that's unlikely to matter.
 
 Try to copy large files from non-zfs disk to zfs disk.  FreeBSD will
 cache the data read from non-zfs disk in memory, and free memory will
 go down.  This is as expected, obviously.
 
 Once there's very little free memory, one would expect whatever is
 more important to kick out the cached data (Inact) and make memory
 available.
 
 But when almost all of the memory is taken by disk cache (of non-zfs
 file system), ZFS disks start threshing like mad and the write
 throughput goes down in 1-digit MB/second.
 
 I believe it should be extremely easy to duplicate.  Just plug in a
 big USB drive formatted in UFS (msdosfs will likely do the same), and
 copy large files from that USB drive to zfs pool.
 
 Right after clean boot, gstat will show something like 20+MB/s
 movement from USB device (da*), and occasional bursts of activity on
 zpool devices at very high rate.  Once free memory is exhausted, zpool
 devices will change to constant low-speed activity, with disks
 threshing about constantly.
 
 I tried enabling/disabling prefetch, messing with vnode counts,
 zfs.vdev.min/max_pending, etc.  The only thing that works is that
 stupid perl 1-liner (perl -e '$x=xx15'), which returns the
 activity to that seen right after a clean boot.  It doesn't last very
 long, though, as the disk cache again consumes all the memory.
 
 Copying files between zfs devices doesn't seem to affect anything.
 
 I understand zfs subsystem has its own memory/cache management.
 Can a zfs expert please comment on this?
 
 And is there a way to force the kernel to not cache non-zfs disk data?

I believe you may be describing two separate issues:

1) ZFS using a lot of memory but not freeing it as you expect
2) Lack of disk I/O scheduler

For (1), try this in /boot/loader.conf and reboot:

# Disable UMA (uma(9)) for ZFS; amd64 was moved to exclusively use UMA
# on 2010/05/24.
# http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057162.html
vfs.zfs.zio.use_uma=0

For (2), you may want to try gsched_rr:

http://svnweb.freebsd.org/viewvc/base/releng/8.1/sys/geom/sched/README?view=markup

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



RE: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-07-11 Thread Scott Sanbeg
Using Jeremy's suggestion as follows:
1) ZFS using a lot of memory but not freeing it as you expect
For (1), try this in /boot/loader.conf and reboot:
vfs.zfs.zio.use_uma=0

... works like a charm for me.  Thank you.

Scott

-Original Message-
From: owner-freebsd-sta...@freebsd.org
[mailto:owner-freebsd-sta...@freebsd.org] On Behalf Of Jeremy Chadwick
Sent: Sunday, July 11, 2010 1:48 PM
To: Richard Lee
Cc: freebsd-stable@freebsd.org
Subject: Re: Serious zfs slowdown when mixed with another file system
(ufs/msdosfs/etc.).

On Sun, Jul 11, 2010 at 11:25:12AM -0700, Richard Lee wrote:
 This is on clean FreeBSD 8.1 RC2, amd64, with 4GB memory.
 
 The closest I found by Googling was this:
 http://forums.freebsd.org/showthread.php?t=9935
 
 And it talks about all kinds of little tweaks, but in the end, the
 only thing that actually works is the stupid 1-line perl code that
 forces the kernal to free the memory allocated to (non-zfs) disk
 cache, which is the Inactive memory in top.
 
 I have a 4-disk raidz pool, but that's unlikely to matter.
 
 Try to copy large files from non-zfs disk to zfs disk.  FreeBSD will
 cache the data read from non-zfs disk in memory, and free memory will
 go down.  This is as expected, obviously.
 
 Once there's very little free memory, one would expect whatever is
 more important to kick out the cached data (Inact) and make memory
 available.
 
 But when almost all of the memory is taken by disk cache (of non-zfs
 file system), ZFS disks start threshing like mad and the write
 throughput goes down in 1-digit MB/second.
 
 I believe it should be extremely easy to duplicate.  Just plug in a
 big USB drive formatted in UFS (msdosfs will likely do the same), and
 copy large files from that USB drive to zfs pool.
 
 Right after clean boot, gstat will show something like 20+MB/s
 movement from USB device (da*), and occasional bursts of activity on
 zpool devices at very high rate.  Once free memory is exhausted, zpool
 devices will change to constant low-speed activity, with disks
 threshing about constantly.
 
 I tried enabling/disabling prefetch, messing with vnode counts,
 zfs.vdev.min/max_pending, etc.  The only thing that works is that
 stupid perl 1-liner (perl -e '$x=xx15'), which returns the
 activity to that seen right after a clean boot.  It doesn't last very
 long, though, as the disk cache again consumes all the memory.
 
 Copying files between zfs devices doesn't seem to affect anything.
 
 I understand zfs subsystem has its own memory/cache management.
 Can a zfs expert please comment on this?
 
 And is there a way to force the kernel to not cache non-zfs disk data?

I believe you may be describing two separate issues:

1) ZFS using a lot of memory but not freeing it as you expect
2) Lack of disk I/O scheduler

For (1), try this in /boot/loader.conf and reboot:

# Disable UMA (uma(9)) for ZFS; amd64 was moved to exclusively use UMA
# on 2010/05/24.
# http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057162.html
vfs.zfs.zio.use_uma=0

For (2), may try gsched_rr:

http://svnweb.freebsd.org/viewvc/base/releng/8.1/sys/geom/sched/README?view=markup

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



More buzzing fun with snd_emu10kx - but now with more determinism!

2010-07-11 Thread Garrett Cooper
Getting back to the thread I brought up before (with my now-dead
email address):
http://unix.derkeiler.com/Mailing-Lists/FreeBSD/stable/2010-06/msg00036.html
I now have a more deterministic test case for this issue.
The problem appears to be in vchan-related code. If I start up
4+ applications on my machine that access the audio device, everything
goes wonky on the 4th+ allocation (I was stress-testing the nvidia
driver to see whether or not it'd break with multiple instances of vlc,
and stumbled on this by accident). So pushing up the number of consumers
of the audio subsystem forces a breakdown somewhere (even though the
number of available hardware vchans is set to 16).
I'll continue to look into this further as time permits.
Thanks,
-Garrett


Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-07-11 Thread Richard Lee
On Sun, Jul 11, 2010 at 01:47:57PM -0700, Jeremy Chadwick wrote:
 On Sun, Jul 11, 2010 at 11:25:12AM -0700, Richard Lee wrote:
  This is on clean FreeBSD 8.1 RC2, amd64, with 4GB memory.
  
  The closest I found by Googling was this:
  http://forums.freebsd.org/showthread.php?t=9935
  
  And it talks about all kinds of little tweaks, but in the end, the
  only thing that actually works is the stupid 1-line perl code that
  forces the kernal to free the memory allocated to (non-zfs) disk
  cache, which is the Inactive memory in top.
  
  I have a 4-disk raidz pool, but that's unlikely to matter.
  
  Try to copy large files from non-zfs disk to zfs disk.  FreeBSD will
  cache the data read from non-zfs disk in memory, and free memory will
  go down.  This is as expected, obviously.
  
  Once there's very little free memory, one would expect whatever is
  more important to kick out the cached data (Inact) and make memory
  available.
  
  But when almost all of the memory is taken by disk cache (of non-zfs
  file system), ZFS disks start threshing like mad and the write
  throughput goes down in 1-digit MB/second.
  
  I believe it should be extremely easy to duplicate.  Just plug in a
  big USB drive formatted in UFS (msdosfs will likely do the same), and
  copy large files from that USB drive to zfs pool.
  
  Right after clean boot, gstat will show something like 20+MB/s
  movement from USB device (da*), and occasional bursts of activity on
  zpool devices at very high rate.  Once free memory is exhausted, zpool
  devices will change to constant low-speed activity, with disks
  threshing about constantly.
  
  I tried enabling/disabling prefetch, messing with vnode counts,
  zfs.vdev.min/max_pending, etc.  The only thing that works is that
  stupid perl 1-liner (perl -e '$x=xx15'), which returns the
  activity to that seen right after a clean boot.  It doesn't last very
  long, though, as the disk cache again consumes all the memory.
  
  Copying files between zfs devices doesn't seem to affect anything.
  
  I understand zfs subsystem has its own memory/cache management.
  Can a zfs expert please comment on this?
  
  And is there a way to force the kernel to not cache non-zfs disk data?
 
 I believe you may be describing two separate issues:
 
 1) ZFS using a lot of memory but not freeing it as you expect
 2) Lack of disk I/O scheduler
 
 For (1), try this in /boot/loader.conf and reboot:
 
 # Disable UMA (uma(9)) for ZFS; amd64 was moved to exclusively use UMA
 # on 2010/05/24.
 # http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057162.html
 vfs.zfs.zio.use_uma=0
 
 For (2), may try gsched_rr:
 
 http://svnweb.freebsd.org/viewvc/base/releng/8.1/sys/geom/sched/README?view=markup
 
 -- 
 | Jeremy Chadwick   j...@parodius.com |
 | Parodius Networking   http://www.parodius.com/ |
 | UNIX Systems Administrator  Mountain View, CA, USA |
 | Making life hard for others since 1977.  PGP: 4BD6C0CB |

vfs.zfs.zio.use_uma is already 0.  It looks to be the default, as I never
touched it.  And in my case, Wired memory is stable at around 1GB.  It's
the Inact memory that takes off, but only if reading from non-zfs file
system.  Without other file systems, I can keep moving files around and
see no adverse slowdown.  I can also scp huge files from another system
into the zfs machine, and it doesn't affect memory usage (as reported by
top), nor does it affect performance.

As for gsched_rr, I don't believe this is related.  There is only ONE
access to the zfs devices (4 sata drives), which is purely a sequential
write.

The external USB HDD (UFS2) is a completely different device, and is doing
purely sequential read.  There is only one cp process doing anything at
all.

The FreeBSD system files aren't on either of these devices, either; not
that the system is doing anything with its own disk during this time.

--rich


Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-07-11 Thread Jeremy Chadwick
On Sun, Jul 11, 2010 at 02:12:13PM -0700, Richard Lee wrote:
 On Sun, Jul 11, 2010 at 01:47:57PM -0700, Jeremy Chadwick wrote:
  On Sun, Jul 11, 2010 at 11:25:12AM -0700, Richard Lee wrote:
   This is on clean FreeBSD 8.1 RC2, amd64, with 4GB memory.
   
   The closest I found by Googling was this:
   http://forums.freebsd.org/showthread.php?t=9935
   
   And it talks about all kinds of little tweaks, but in the end, the
   only thing that actually works is the stupid 1-line perl code that
   forces the kernal to free the memory allocated to (non-zfs) disk
   cache, which is the Inactive memory in top.
   
   I have a 4-disk raidz pool, but that's unlikely to matter.
   
   Try to copy large files from non-zfs disk to zfs disk.  FreeBSD will
   cache the data read from non-zfs disk in memory, and free memory will
   go down.  This is as expected, obviously.
   
   Once there's very little free memory, one would expect whatever is
   more important to kick out the cached data (Inact) and make memory
   available.
   
   But when almost all of the memory is taken by disk cache (of non-zfs
   file system), ZFS disks start threshing like mad and the write
   throughput goes down in 1-digit MB/second.
   
   I believe it should be extremely easy to duplicate.  Just plug in a
   big USB drive formatted in UFS (msdosfs will likely do the same), and
   copy large files from that USB drive to zfs pool.
   
   Right after clean boot, gstat will show something like 20+MB/s
   movement from USB device (da*), and occasional bursts of activity on
   zpool devices at very high rate.  Once free memory is exhausted, zpool
   devices will change to constant low-speed activity, with disks
   threshing about constantly.
   
   I tried enabling/disabling prefetch, messing with vnode counts,
   zfs.vdev.min/max_pending, etc.  The only thing that works is that
   stupid perl 1-liner (perl -e '$x=xx15'), which returns the
   activity to that seen right after a clean boot.  It doesn't last very
   long, though, as the disk cache again consumes all the memory.
   
   Copying files between zfs devices doesn't seem to affect anything.
   
   I understand zfs subsystem has its own memory/cache management.
   Can a zfs expert please comment on this?
   
   And is there a way to force the kernel to not cache non-zfs disk data?
  
  I believe you may be describing two separate issues:
  
  1) ZFS using a lot of memory but not freeing it as you expect
  2) Lack of disk I/O scheduler
  
  For (1), try this in /boot/loader.conf and reboot:
  
  # Disable UMA (uma(9)) for ZFS; amd64 was moved to exclusively use UMA
  # on 2010/05/24.
  # http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057162.html
  vfs.zfs.zio.use_uma=0
  
  For (2), may try gsched_rr:
  
  http://svnweb.freebsd.org/viewvc/base/releng/8.1/sys/geom/sched/README?view=markup
  
  -- 
  | Jeremy Chadwick   j...@parodius.com |
  | Parodius Networking   http://www.parodius.com/ |
  | UNIX Systems Administrator  Mountain View, CA, USA |
  | Making life hard for others since 1977.  PGP: 4BD6C0CB |
 
 vfs.zfs.zio.use_uma is already 0.  It looks to be the default, as I never
 touched it.

Okay, just checking, because the default did change at one point, as the
link in my /boot/loader.conf denotes.  Here's further confirmation (same
thread), the first confirming on i386, the second confirming on amd64:

http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057168.html
http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057239.html

 And in my case, Wired memory is stable at around 1GB.  It's
 the Inact memory that takes off, but only if reading from non-zfs file
 system.  Without other file systems, I can keep moving files around and
 see no adverse slowdown.  I can also scp huge files from another system
 into the zfs machine, and it doesn't affect memory usage (as reported by
 top), nor does it affect performance.

Let me get this straight:

The system has ZFS enabled (kernel module loaded), with a 4-disk raidz1
pool defined and used in the past (Wired being @ 1GB, due to ARC).  The
same system also has UFS2 filesystems.  The ZFS pool vdevs consist of
their own dedicated disks, and the UFS2 filesystems also have their own
disk (which appears to be USB-based).

When any sort of read I/O is done on the UFS2 filesystems, Inact
skyrockets, and as a result this impacts performance of ZFS.

If this is correct: can you remove USB from the picture and confirm the
problem still happens?  This is the first I've heard of the UFS caching
mechanism spiraling out of control.

By the way, all the stupid perl 1-liner does is make a process with an
extremely large SIZE, and RES will grow to match it (more or less).  The
intention is to cause the VM to force a swap-out + free of memory by
stressing the VM.  Using 'x15', you'll find something like this:

  PID USERNAME   THR PRI 

Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-07-11 Thread Richard Lee
On Sun, Jul 11, 2010 at 02:45:46PM -0700, Jeremy Chadwick wrote:
 On Sun, Jul 11, 2010 at 02:12:13PM -0700, Richard Lee wrote:
  On Sun, Jul 11, 2010 at 01:47:57PM -0700, Jeremy Chadwick wrote:
   On Sun, Jul 11, 2010 at 11:25:12AM -0700, Richard Lee wrote:
This is on clean FreeBSD 8.1 RC2, amd64, with 4GB memory.

The closest I found by Googling was this:
http://forums.freebsd.org/showthread.php?t=9935

And it talks about all kinds of little tweaks, but in the end, the
only thing that actually works is the stupid 1-line perl code that
forces the kernal to free the memory allocated to (non-zfs) disk
cache, which is the Inactive memory in top.

I have a 4-disk raidz pool, but that's unlikely to matter.

Try to copy large files from non-zfs disk to zfs disk.  FreeBSD will
cache the data read from non-zfs disk in memory, and free memory will
go down.  This is as expected, obviously.

Once there's very little free memory, one would expect whatever is
more important to kick out the cached data (Inact) and make memory
available.

But when almost all of the memory is taken by disk cache (of non-zfs
file system), ZFS disks start threshing like mad and the write
throughput goes down in 1-digit MB/second.

I believe it should be extremely easy to duplicate.  Just plug in a
big USB drive formatted in UFS (msdosfs will likely do the same), and
copy large files from that USB drive to zfs pool.

Right after clean boot, gstat will show something like 20+MB/s
movement from USB device (da*), and occasional bursts of activity on
zpool devices at very high rate.  Once free memory is exhausted, zpool
devices will change to constant low-speed activity, with disks
threshing about constantly.

I tried enabling/disabling prefetch, messing with vnode counts,
zfs.vdev.min/max_pending, etc.  The only thing that works is that
stupid perl 1-liner (perl -e '$x=xx15'), which returns the
activity to that seen right after a clean boot.  It doesn't last very
long, though, as the disk cache again consumes all the memory.

Copying files between zfs devices doesn't seem to affect anything.

I understand zfs subsystem has its own memory/cache management.
Can a zfs expert please comment on this?

And is there a way to force the kernel to not cache non-zfs disk data?
   
   I believe you may be describing two separate issues:
   
   1) ZFS using a lot of memory but not freeing it as you expect
   2) Lack of disk I/O scheduler
   
   For (1), try this in /boot/loader.conf and reboot:
   
   # Disable UMA (uma(9)) for ZFS; amd64 was moved to exclusively use UMA
   # on 2010/05/24.
   # http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057162.html
   vfs.zfs.zio.use_uma=0
   
   For (2), may try gsched_rr:
   
   http://svnweb.freebsd.org/viewvc/base/releng/8.1/sys/geom/sched/README?view=markup
   
   -- 
   | Jeremy Chadwick   j...@parodius.com |
   | Parodius Networking   http://www.parodius.com/ |
   | UNIX Systems Administrator  Mountain View, CA, USA |
   | Making life hard for others since 1977.  PGP: 4BD6C0CB |
  
  vfs.zfs.zio.use_uma is already 0.  It looks to be the default, as I never
  touched it.
 
 Okay, just checking, because the default did change at one point, as the
 link in my /boot/loader.conf denotes.  Here's further confirmation (same
 thread), the first confirming on i386, the second confirming on amd64:
 
 http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057168.html
 http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057239.html
 
  And in my case, Wired memory is stable at around 1GB.  It's
  the Inact memory that takes off, but only if reading from non-zfs file
  system.  Without other file systems, I can keep moving files around and
  see no adverse slowdown.  I can also scp huge files from another system
  into the zfs machine, and it doesn't affect memory usage (as reported by
  top), nor does it affect performance.
 
 Let me get this straight:
 
 The system has ZFS enabled (kernel module loaded), with a 4-disk raidz1
 pool defined and used in the past (Wired being @ 1GB, due to ARC).  The
 same system also has UFS2 filesystems.  The ZFS pool vdevs consist of
 their own dedicated disks, and the UFS2 filesystems also have their own
 disk (which appears to be USB-based).

Yes, correct.

I have:
ad4 (An old 200GB SATA UFS2 main system drive)
ad8, ad10, ad12, ad14 (1TB SATA drives) part of raidz1 and nothing else
da0 is an external USB disk (1TB), but I don't think it's related to USB.

Status looks like this:
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
uchuu   ONLINE   0 0 0
  raidz1ONLINE   0 0 0
ad8 ONLINE   

Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-07-11 Thread Freddie Cash
Search the archives for the -stable, -current, and -fs mailing lists
from the past 3 months.  There are patches floating around to fix
this.  The ZFS code that monitors memory pressure currently only
monitors the "free" amount, and completely ignores the "inact" and
other not-actually-in-use amounts.

-- 
Freddie Cash
fjwc...@gmail.com


Re: More buzzing fun with snd_emu10kx - but now with more determinism!

2010-07-11 Thread Andrew J. Caines

On 07/11/2010 17:03, Garrett Cooper wrote:

The problem appears to be with vchan-related code. If I start up 4+
applications on my machine that access the audio device, all goes
wonky on the 4+ allocation


I can confirm this behaviour, which seems odd with hw.snd.maxautovchans
defaulting to 16. It does not appear to be affected by increasing
dev.pcm.0.play.vchans up from the default of 2 (as I apparently did at
some point up to 7.x), though reading sound(4) it's clear I don't fully
understand vchans.

A problem I encountered with snd_emu10kx in a clean 8.1-RC2 install,
which was not present in any previous version, is a faint, rapid
mechanical clicking sound whose level tracks the cd mixer setting.

The only non-default audio setting I have is in loader.conf:

hint.emu10kx.0.multichannel_disabled=1
hint.emu10kx.1.disabled=1


pcm0: <EMU10Kx DSP front PCM interface> on emu10kx0
pcm0: <TriTech TR28602 AC97 Codec> (id = 0x54524123)
pcm0: Codec features 5 bit master volume, no 3D Stereo Enhancement

FreeBSD Audio Driver (newpcm: 32bit 2009061500/i386)
Installed devices:
pcm0: <EMU10Kx DSP front PCM interface> on emu10kx0 (4p:2v/1r:1v) default
	snddev flags=0x2e2<AUTOVCHAN,BUSY,MPSAFE,REGISTERED,VPC>
	[pcm0:play:dsp0.p0]: spd 48000, fmt 0x00200010, flags 0x2100, 0x0004
	interrupts 726, underruns 0, feed 5, ready 0 [b:4096/2048/2|bs:4096/2048/2]
	channel flags=0x2100<BUSY,HAS_VCHAN>
...


--
-Andrew J. Caines-   Unix Systems Engineer   a.j.cai...@halplant.com
FreeBSD/Linux/Solaris, Web/Mail/Proxy/...   http://halplant.com:2001/
  Machines take me by surprise with great frequency - Alan Turing