Re: tmpfs runs out of space on 8.2pre-release, zfs related?

2011-01-02 Thread miyamoto moesasji
miyamoto moesasji miyamoto.31b at gmail.com writes:


 In setting up tmpfs (so not tmpmfs) on a machine that is using
 ZFS (pool v15, zfs v4) on 8.2-PRERELEASE, I run out of space on the
 tmpfs when copying a ~4.6 GB file from the ZFS filesystem to the
 memory disk. This machine has 8GB of memory backed by swap on the
 hard disk, so I expected the file to copy to memory without problems.


This is in fact worse than I first thought. After leaving the machine
running overnight, the tmpfs is reduced to a size of 4K, which shows
that tmpfs is completely unusable for me. See the output of df:
---
h...@pulsarx4:~/  df -hi /tmp
Filesystem    Size    Used   Avail Capacity iused ifree %iused  Mounted on
tmpfs         4.0K    4.0K      0B   100%      18     0  100%   /tmp
---
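
For context, /tmp here is a plain tmpfs mount, i.e. an /etc/fstab
entry along these lines (options illustrative; no size option is
given, so tmpfs sizes itself from free memory plus swap):

tmpfs   /tmp    tmpfs   rw,mode=1777    0       0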

Relevant zfs-stats info:
---
System Memory Statistics:
        Physical Memory:                        8161.74M
        Kernel Memory:                          4117.40M
        DATA:                           99.29%  4088.07M
        TEXT:                           0.71%   29.33M

ARC Size:
        Current Size (arcsize):         63.58%  4370.60M
        Target Size (Adaptive, c):      100.00% 6874.44M
        Min Size (Hard Limit, c_min):   12.50%  859.31M
        Max Size (High Water, c_max):   ~8:1    6874.44M
---

I'm not sure what triggered this further reduction in size, but the
above 4K figure probably shows how dramatically this goes wrong.
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: New ZFSv28 patchset for 8-STABLE

2011-01-02 Thread Attila Nagy

 On 01/02/2011 05:06 AM, J. Hellenthal wrote:


On 01/01/2011 13:18, Attila Nagy wrote:

  On 12/16/2010 01:44 PM, Martin Matuska wrote:

Link to the patch:

http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101215.patch.xz




I've used this:
http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101223-nopython.patch.xz

on a server with amd64, 8 GB RAM, acting as a file server for
ftp/http/rsync, the content being mounted read-only with nullfs in
jails, and the daemons using sendfile (ftp and http).

The effects can be seen here:
http://people.fsn.hu/~bra/freebsd/20110101-zfsv28-fbsd/
the exact moment of the switch can be seen on zfs_mem-week.png, where
the L2 ARC has been discarded.

What I see:
- increased CPU load
- decreased L2 ARC hit rate and decreased SSD (ad[46]) activity,
therefore increased hard disk load (IOPS graph)

Maybe I could accept the higher system load as normal, because a lot
of things changed between v15 and v28 (though I was hoping that, using
the same feature set, it would require less CPU), but the L2ARC hit
rate dropping so radically seems to point at a major issue somewhere.
As you can see from the memory stats, I have enough kernel memory to
hold the L2 headers, so the L2 devices got filled up to their maximum
capacity.

Any ideas on what could cause these? I haven't upgraded the pool version
and nothing was changed in the pool or in the file system.


Running arc_summary.pl [1] with -p4 should print a summary of your
L2ARC, and in that section you should also notice a high number of
SPA mismatches. Mine usually grew to around 172k before I would
notice a crash, and I could reliably trigger this during a scrub.

Whatever is causing this needs urgent attention!

I emailed mm@ privately off-list when I noticed this going on but have
not received any feedback as of yet.

It's at zero currently (2 days of uptime):
kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 0

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: Specifying root mount options on diskless boot.

2011-01-02 Thread Daniel Braniss
 
 
 [I'm not sure if -stable is the best list for this but anyway...]
 
 I'm trying to convert an old laptop running FreeBSD 8.0 into a diskless
 client (since its internal HDD is growing bad spots faster than I can
 repair them).  I have it pxebooting nicely and running with an NFS root
 but it then reports locking problems: devd, syslogd, moused (and maybe
 others) lock their PID file to protect against multiple instances.
 Unfortunately, these daemons all start before statd/lockd and so the
 locking fails and reports "operation not supported".
 
 It's not practical to reorder the startup sequence to make lockd start
 early enough (I've tried).
 
 Since the filesystem is reserved for this client, there's no real need
 to forward lock requests across the wire and so specifying nolockd
 would be another solution.  Looking through sys/nfsclient/bootp_subr.c,
 DHCP option 130 should allow NFS mount options to be specified
 (though it's not clear that the relevant code path is actually
 followed, because I don't see the associated printf()s anywhere on
 the console).  After getting isc-dhcpd to forward this option (made
 more difficult because its documentation is incorrect), it still
 doesn't work.
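 
 For reference, the dhcpd.conf stanza I used looked roughly like this
 (option name arbitrary, MAC address illustrative):
 
 option nfs-mount-options code 130 = text;
 host laptop {
         hardware ethernet 00:11:22:33:44:55;
         option nfs-mount-options "nolockd";
 }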
 
 Understanding all this isn't helped by kenv(8) reporting three different
 sets of root filesystem options:
 boot.nfsroot.path=/tank/m3
 boot.nfsroot.server=192.168.123.200
 dhcp.option-130=nolockd
 dhcp.root-path=192.168.123.200:/tank/m3
 vfs.root.mountfrom=nfs:server:/tank/m3
 vfs.root.mountfrom.options=rw,tcp,nolockd
 
 And the console also reports conflicting root definitions:
 Trying to mount root from nfs:server:/tank/m3
 NFS ROOT: 192.168.123.200:/tank/m3
 
 Working through all these:
 boot.nfsroot.* appears to be initialised by sys/boot/i386/libi386/pxe.c
 but, whilst nfsclient/nfs_diskless.c can parse boot.nfsroot.options,
 there's no code to initialise that kenv name in pxe.c.
 
 dhcp.* appears to be initialised by lib/libstand/bootp.c - which does
 include code to populate boot.nfsroot.options (using vendor-specific
 DHCP option 20) but this code is not compiled in.  Further study
 of bootp.c shows that it's possible to initialise arbitrary kenvs
 using DHCP options 246-254 - but the DHCPDISCOVER packets do not
 request these options, so they don't work without special DHCP server
 configuration (to forward options that aren't requested).
 
 vfs.root.* is parsed out of /etc/fstab but, other than being
 reported in the console message above, it doesn't appear to be
 used in this environment (it looks like the root entry can be
 commented out of /etc/fstab without problem).
 
 My final solution was to specify 'boot.nfsroot.options=nolockd' in
 loader.conf - and this seems to actually work.
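 
 That is, just this one line in /boot/loader.conf on the NFS root:
 
 boot.nfsroot.options="nolockd"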
 
 It seems rather unfortunate that FreeBSD has code to allow NFS root
 mount options to be specified via DHCP (admittedly in several
 incompatible ways) but none of it actually works.  A quick look at
 -current suggests that the situation there remains equally broken.
 
 Has anyone else tried to use any of this?  And would anyone be interested
 in trying to make it actually work?

Hi Peter,
I have been doing diskless booting for a long time, and am very pleased
(though 8.2-PRERELEASE is causing some problems :-).
In my case /var is mfs, or ufs/zfs, and I have no lockd problems.

Here is what you need to do. Either change, in libstand/bootp.c:
#define DHCP_ENV        DHCP_ENV_NO_VENDOR
to
#define DHCP_ENV        DHCP_ENV_FREEBSD

or pick my version from:
ftp://ftp.cs.huji.ac.il/users/danny/freebsd/diskless-boot/
and compile a new pxeboot.
This new pxeboot will allow you to pass some key options via DHCP.

Next, take a look at
  ftp://ftp.cs.huji.ac.il/users/danny/freebsd/diskless-boot/rc.initdiskless
and make sure that your exported root has /.etc.

If your /var is also NFS-mounted, unionfs might help too.

Just writing quickly so you won't feel discouraged - diskless
actually works.

danny


___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: slow ZFS on FreeBSD 8.1

2011-01-02 Thread Dan Langille

On 12/31/2010 6:47 PM, Jeremy Chadwick wrote:

On Sat, Jan 01, 2011 at 10:33:43AM +1100, Peter Jeremy wrote:



Based on my experiences at home, I converted my desktop at work to
pure ZFS.  The only issues I've run into have been programs that
extensively use mmap(2) - which is a known issue with ZFS.


Is your ZFS root filesystem associated with a pool that's mirrored or
using raidzX?  What about mismatched /boot content (ZFS vs. UFS)?  What
about booting into single-user mode?

http://wiki.freebsd.org/ZFSOnRoot indirectly hints at these problems but
doesn't outright admit them (yet it should), so I'm curious to know how
people have solved them.  Remembering manual one-offs for a system
configured this way is not acceptable (read: highly prone to
error/mistake).  Is it worth the risk?  Most administrators don't have
the tolerance for stuff like that in the middle of a system upgrade or
whatnot; they should be able to follow exactly what's in the handbook,
to a tee.

There's a link to www.dan.me.uk at the bottom of the above Wiki page
that outlines the madness that's required to configure the setup, all
of which has to be done by hand.  I don't know many administrators who
are going to tolerate this when deploying numerous machines, especially
when compounded by the complexities mentioned above.


This basically outlines the reason why I do not use ZFS on root.

--
Dan Langille - http://langille.org/
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: slow ZFS on FreeBSD 8.1

2011-01-02 Thread Bruce Cran
On Fri, 31 Dec 2010 15:47:47 -0800
Jeremy Chadwick free...@jdc.parodius.com wrote:

 There's a link to www.dan.me.uk at the bottom of the above Wiki page
 that outlines the madness that's required to configure the setup,
 all of which has to be done by hand.  I don't know many
 administrators who are going to tolerate this when deploying numerous
 machines, especially when compounded by the complexities mentioned
 above.

All of that page could be summarized as:

mkdir /usb
mount /dev/da4 /usb
/usb/install_zfs.sh

Where da4 is a USB drive and install_zfs.sh is essentially the commands
from that page, with some changes to support different disks.  I'd
imagine that administrators wouldn't use sysinstall in interactive mode
when deploying to many machines.
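
A heavily abbreviated sketch of what such a script boils down to
(disk name, label and pool name illustrative - this is not a drop-in
installer):

#!/bin/sh
DISK=da0
gpart create -s gpt ${DISK}
gpart add -b 34 -s 128 -t freebsd-boot ${DISK}
gpart add -t freebsd-zfs -l disk0 ${DISK}
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${DISK}
zpool create -o altroot=/mnt -o cachefile=/tmp/zpool.cache zroot /dev/gpt/disk0
zpool set bootfs=zroot zroot
# ...then extract base and kernel into /mnt, copy zpool.cache to
# /mnt/boot/zfs/, and set zfs_load="YES" plus
# vfs.root.mountfrom="zfs:zroot" in /mnt/boot/loader.conf.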

-- 
Bruce Cran
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-02 Thread Damien Fleuriot


On 1/1/11 6:28 PM, Jean-Yves Avenard wrote:
 On 2 January 2011 02:11, Damien Fleuriot m...@my.gd wrote:
 
 I remember getting rather average performance on v14 but Jean-Yves
 reported good performance boosts from upgrading to v15.
 
 that was v28 :)
 
 saw no major difference between v14 and v15.
 
 JY


Oopsie :)


Seeing as I, for one, will have no backups, I think I won't be using
v28 on this box, and will stick with v15 instead.


Are there any views regarding the best implementation for a system?

I currently have a ZFS only system but I'm planning on moving it to UFS,
with ZFS used only for mass storage.


I understand ZFS root is much trickier, and my main fear is that if I
somehow break ZFS (by upgrading to v28, for example) I won't be able to
boot anymore, leaving me with no repair process...
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-02 Thread Ronald Klop

On Sun, 02 Jan 2011 15:31:49 +0100, Damien Fleuriot m...@my.gd wrote:




On 1/1/11 6:28 PM, Jean-Yves Avenard wrote:

On 2 January 2011 02:11, Damien Fleuriot m...@my.gd wrote:


I remember getting rather average performance on v14 but Jean-Yves
reported good performance boosts from upgrading to v15.


that was v28 :)

saw no major difference between v14 and v15.

JY



Oopsie :)


Seeing as I, for one, will have no backups, I think I won't be using
v28 on this box, and will stick with v15 instead.


Are there any views regarding the best implementation for a system?

I currently have a ZFS only system but I'm planning on moving it to UFS,
with ZFS used only for mass storage.


I understand ZFS root is much trickier, and my main fear is that if I
somehow break ZFS (by upgrading to v28, for example) I won't be able to
boot anymore, leaving me with no repair process...


You can repair by booting from USB or CD in a lot of cases.
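
E.g. from a livefs Fixit shell, something like (pool name
illustrative):

zpool import -f -R /mnt zroot

gets you at your data to fix things up.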

Ronald.
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-02 Thread Peter Jeremy
On 2010-Dec-30 12:40:00 +0100, Damien Fleuriot m...@my.gd wrote:
What are the steps for properly removing my drives from the zraid1 pool
and inserting them in the zraid2 pool ?

I've documented my experiences in migrating from a 3-way RAIDZ1 to a
6-way RAIDZ2 at http://bugs.au.freebsd.org/dokuwiki/doku.php/zfsraid

Note that, even for a home system, backups are worthwhile.  In my
case, I back up onto a 2TB disk in an eSATA enclosure.  That's
currently (just) adequate, but I'll soon need to identify data that I
can leave off that backup.
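
In outline, the migration is the usual snapshot/send/receive dance,
roughly (pool and device names illustrative; the page above covers
the real caveats, such as building the new vdev while short of disks):

zpool create newtank raidz2 da0 da1 da2 da3 da4 da5
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -F -d newtank
zpool destroy tank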

-- 
Peter Jeremy




Re: tmpfs runs out of space on 8.2pre-release, zfs related?

2011-01-02 Thread Ronald Klop
On Sun, 02 Jan 2011 09:41:04 +0100, miyamoto moesasji  
miyamoto@gmail.com wrote:



miyamoto moesasji miyamoto.31b at gmail.com writes:



In setting up tmpfs (so not tmpmfs) on a machine that is using
ZFS (pool v15, zfs v4) on 8.2-PRERELEASE, I run out of space on the
tmpfs when copying a ~4.6 GB file from the ZFS filesystem to the
memory disk. This machine has 8GB of memory backed by swap on the
hard disk, so I expected the file to copy to memory without problems.



This is in fact worse than I first thought. After leaving the machine
running overnight, the tmpfs is reduced to a size of 4K, which shows
that tmpfs is completely unusable for me. See the output of df:
---
h...@pulsarx4:~/  df -hi /tmp
Filesystem    Size    Used   Avail Capacity iused ifree %iused  Mounted on
tmpfs         4.0K    4.0K      0B   100%      18     0  100%   /tmp
---

Relevant zfs-stats info:
---
System Memory Statistics:
        Physical Memory:                        8161.74M
        Kernel Memory:                          4117.40M
        DATA:                           99.29%  4088.07M
        TEXT:                           0.71%   29.33M

ARC Size:
        Current Size (arcsize):         63.58%  4370.60M
        Target Size (Adaptive, c):      100.00% 6874.44M
        Min Size (Hard Limit, c_min):   12.50%  859.31M
        Max Size (High Water, c_max):   ~8:1    6874.44M
---

I'm not sure what triggered this further reduction in size, but the
above 4K figure probably shows how dramatically this goes wrong.


Is it possible that some program is filling a file (on your tmpfs)
which has been deleted? You will not see it with ls or du, but it
still takes space on the filesystem until the application closes the
file.
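
fstat(1) will show such files; for example:

fstat -f /tmp

lists every process with an open file on that filesystem, including
unlinked ones.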


Ronald.
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: tmpfs runs out of space on 8.2pre-release, zfs related?

2011-01-02 Thread miyamoto moesasji
On Sun, Jan 2, 2011 at 3:26 PM, Ronald Klop ronald-freeb...@klop.yi.org wrote:
 On Sun, 02 Jan 2011 09:41:04 +0100, miyamoto moesasji
 miyamoto@gmail.com wrote:

 miyamoto moesasji miyamoto.31b at gmail.com writes:


 In setting up tmpfs (so not tmpmfs) on a machine that is using
 ZFS (pool v15, zfs v4) on 8.2-PRERELEASE, I run out of space on the
 tmpfs when copying a ~4.6 GB file from the ZFS filesystem to the
 memory disk. This machine has 8GB of memory backed by swap on the
 hard disk, so I expected the file to copy to memory without problems.


 This is in fact worse than I first thought. After leaving the machine
 running overnight, the tmpfs is reduced to a size of 4K, which shows
 that tmpfs is completely unusable for me. See the output of df:
 ---
 h...@pulsarx4:~/  df -hi /tmp
 Filesystem    Size    Used   Avail Capacity iused ifree %iused  Mounted on
 tmpfs         4.0K    4.0K      0B   100%      18     0  100%   /tmp
 ---

 Relevant zfs-stats info:
 ---
 System Memory Statistics:
        Physical Memory:                        8161.74M
        Kernel Memory:                          4117.40M
        DATA:                           99.29%  4088.07M
        TEXT:                           0.71%   29.33M

 ARC Size:
        Current Size (arcsize):         63.58%  4370.60M
        Target Size (Adaptive, c):      100.00% 6874.44M
        Min Size (Hard Limit, c_min):   12.50%  859.31M
        Max Size (High Water, c_max):   ~8:1    6874.44M
 ---

 I'm not sure what triggered this further reduction in size, but the
 above 4K figure probably shows how dramatically this goes wrong.

 Is it possible that some program is filling a file (on your tmpfs)
 which has been deleted? You will not see it with ls or du, but it
 still takes space on the filesystem until the application closes the
 file.

 Ronald.
 ___
 freebsd-stable@freebsd.org mailing list
 http://lists.freebsd.org/mailman/listinfo/freebsd-stable
 To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


I'm pretty sure that this is not the case, as the problem shows up
immediately upon a clean reboot. (/tmp is normally pretty empty: just
22k in use at the moment with the older tmpmfs, determined with
du -xh /tmp as root.)

Note that the key problem was given in the first post: the tmpfs has
8.2GB available immediately after a reboot, yet it is impossible to
copy a 4.6GB file to it from a ZFS drive, even though the machine has
8GB of memory backed by swap.

My feeling is that the ZFS file cache, which also lives in memory, is
somehow causing this; that would be consistent with the Solaris bug
report I linked to, see:
http://bugs.opensolaris.org/bugdatabase/view_bug.do;jsessionid=e4ae9c32983000ef651e38edbba1?bug_id=6804661
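
If the ARC really is crowding out tmpfs, the obvious thing to try
next is capping it in /boot/loader.conf, e.g. (value illustrative for
an 8GB box):

vfs.zfs.arc_max="4G"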
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: slow ZFS on FreeBSD 8.1

2011-01-02 Thread Steven Hartland


- Original Message - 
From: Jeremy Chadwick free...@jdc.parodius.com

Based on my experiences at home, I converted my desktop at work to
pure ZFS.  The only issues I've run into have been programs that
extensively use mmap(2) - which is a known issue with ZFS.


Is your ZFS root filesystem associated with a pool that's mirrored or
using raidzX?  What about mismatched /boot content (ZFS vs. UFS)?  What
about booting into single-user mode?

http://wiki.freebsd.org/ZFSOnRoot indirectly hints at these problems but
doesn't outright admit them (yet it should), so I'm curious to know how
people have solved them.  Remembering manual one-offs for a system
configured this way is not acceptable (read: highly prone to
error/mistake).  Is it worth the risk?  Most administrators don't have
the tolerance for stuff like that in the middle of a system upgrade or
whatnot; they should be able to follow exactly what's in the handbook,
to a tee.

There's a link to www.dan.me.uk at the bottom of the above Wiki page
that outlines the madness that's required to configure the setup, all
of which has to be done by hand.  I don't know many administrators who
are going to tolerate this when deploying numerous machines, especially
when compounded by the complexities mentioned above.


With regard to installing machines with a ZFS root, we now use mfsBSD,
which makes the process as simple as pie, so for those that haven't
used it, give it a whirl:
http://mfsbsd.vx.sk/



The mmap(2) and sendfile(2) complexities will bite a junior or
mid-level SA in the butt too -- they won't know why software starts
failing or behaving oddly (FreeBSD ftpd is a good example).  It just so
happens that Apache, out-of-the-box, comes with mmap and sendfile use
disabled.


This is the same with nginx, which is rapidly taking over from Apache
due to its ability to scale much better.
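
For reference, the knobs in question (stock directives in both
servers' config files):

# Apache httpd.conf
EnableMMAP Off
EnableSendfile Off

# nginx.conf, inside the http block
sendfile off;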

Proper mmap and sendfile integration is the only major issue we have
left blocking us from moving all our machines to ZFS, thanks to the
great work by everyone.

I really hope sendfile support in particular is fixed in the near
future but, as I understand it, that's not going to be simple at all :(

   Regards
   Steve


This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. 


In the event of misdirection, illegible or incomplete transmission please 
telephone +44 845 868 1337
or return the E.mail to postmas...@multiplay.co.uk.

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: Sense fetching [Was: cdrtools /devel ...]

2011-01-02 Thread Joerg Schilling
If FreeBSD writes any warning for SCSI commands sent by cdrtools, then
this is definitely a kernel bug that needs to be fixed.

SCSI is a protocol that lives on apparently failing SCSI commands.
These warnings are related to high-level interaction between cdrecord
and the drive. Only cdrecord is able to decide whether a SCSI command
that apparently failed is worth reporting or not.

It is therefore a bug if the kernel prints related messages.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: umass: AutoSense failed

2011-01-02 Thread Harald Weis
On Tue, Dec 14, 2010 at 01:19:29PM -0700, Ross Alexander wrote:
 On Sat, 11 Dec 2010, Harald Weis wrote:
 
 Message: 7
 Date: Fri, 10 Dec 2010 22:59:17 +0100
 From: Harald Weis ha...@free.fr
 Subject: Re: umass: AutoSense failed
 To: freebsd-stable@freebsd.org
 Cc: Jeremy Chadwick free...@jdc.parodius.com
 Message-ID: 20101210215917.ga2...@pollux.local.net
 Content-Type: text/plain; charset=us-ascii
 
 [big snip]
 
 The AutoSense failure occurs on two desktop computers, whether
 motherboard or not, and on three laptops. As I said, all PC's
 run 8.1-RELEASE.
 What I do not know is whether the Sony DSC (Digital Still Camera) has
 worked on 8.0-RELEASE. I didn't use it for some time, so I'm afraid
 I never tried it on 8.x.
 But I am sure it worked on all earlier releases.
 In my original post I have forgotten to mention that I never had any
 problems with all sorts of thumbdrives and USB disks.
 Fortunately, I can read the 256MB CF memory of the DSC on an Ubuntu
 laptop.
 Could it possibly be a DSC hardware problem which is ignored by Ubuntu,
 but not by FreeBSD?
 
 My Sony DSC camera did work under 8.0-RELEASE.  It's a model DSC-W35.
 It's seeing the umass problem now, and has since 8.1-PRERELEASE, AFAIR.
 
 This fault is the same on i386 or amd64, Intel or AMD chipsets, USB
 1.0 or 2.0, through hubs or straight off the m/b, etc.  I'm keeping a
 backup box (Intel Atom / i386) running 7.3-STABLE around specifically
 to work around the problem.  I'd have said something earlier, but I've
 gotten used to small nits popping up and then disappearing; this one
 is dragging along :(
 
 
I haven't looked at the list for a while, so I only saw your message
yesterday. In the meantime, I've re-learned to work with bootable
memory sticks. I've had no luck yet with rewritable CDs, which I need
for all boxes but one. Perhaps a blanking problem?
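(If so, something along the lines of

burncd -f /dev/acd0 blank

- device name illustrative - would be the first thing to try.)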
Here is the summary of my trials in Fixit mode for my Sony DSC-P10,
which is also (at last) the answer to Jeremy's message:

7.2-RELEASE-i386-livefs:  OK

8.0-RELEASE-i386-memstick: OK

FreeBSD-8.1-STABLE-201009-i386-memstick:
   AutoSense failed

FreeBSD-8.2-PRERELEASE-201012-i386-memstick:
   booting OK, but
   No USB devices found!
<message extract>
CAM status: SCSI Status Error
SCSI sense: UNIT ATTENTION asc: 28,0 (Not ready to ready
change, medium may have changed)
da0: 1915MB (3922944 512 byte sectors: 255H 63S/T 244C)
GEOM: da0: media size does not match label.
</message extract>

FreeBSD-9.0-CURRENT-201009-i386-memstick:
AutoSense failed

Is this not a case for send-pr(1)?

Best regards,
Harald
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: New ZFSv28 patchset for 8-STABLE

2011-01-02 Thread J. Hellenthal
On 01/02/2011 03:45, Attila Nagy wrote:
  On 01/02/2011 05:06 AM, J. Hellenthal wrote:

 On 01/01/2011 13:18, Attila Nagy wrote:
   On 12/16/2010 01:44 PM, Martin Matuska wrote:
 Link to the patch:

 http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101215.patch.xz




 I've used this:
 http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101223-nopython.patch.xz


 on a server with amd64, 8 GB RAM, acting as a file server for
 ftp/http/rsync, the content being mounted read-only with nullfs in
 jails, and the daemons using sendfile (ftp and http).

 The effects can be seen here:
 http://people.fsn.hu/~bra/freebsd/20110101-zfsv28-fbsd/
 the exact moment of the switch can be seen on zfs_mem-week.png, where
 the L2 ARC has been discarded.

 What I see:
 - increased CPU load
 - decreased L2 ARC hit rate and decreased SSD (ad[46]) activity,
 therefore increased hard disk load (IOPS graph)

 Maybe I could accept the higher system load as normal, because a lot
 of things changed between v15 and v28 (though I was hoping that, using
 the same feature set, it would require less CPU), but the L2ARC hit
 rate dropping so radically seems to point at a major issue somewhere.
 As you can see from the memory stats, I have enough kernel memory to
 hold the L2 headers, so the L2 devices got filled up to their maximum
 capacity.

 Any ideas on what could cause these? I haven't upgraded the pool version
 and nothing was changed in the pool or in the file system.

 Running arc_summary.pl [1] with -p4 should print a summary of your
 L2ARC, and in that section you should also notice a high number of
 SPA mismatches. Mine usually grew to around 172k before I would
 notice a crash, and I could reliably trigger this during a scrub.

 Whatever is causing this needs urgent attention!

 I emailed mm@ privately off-list when I noticed this going on but have
 not received any feedback as of yet.
 It's at zero currently (2 days of uptime):
 kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 0
 

Right, but do you have a 'cache' (L2ARC) vdev attached to any pool in
the system? This suggests to me that you do not at this time.

If not, can you attach a cache vdev, run a scrub, and monitor the
value of that MIB?
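
Something like (pool and device names illustrative):

zpool add tank cache da2
zpool scrub tank
sysctl kstat.zfs.misc.arcstats.l2_write_spa_mismatch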

-- 

Regards,

 jhell,v
 JJH48-ARIN
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org