Re: [zfs-discuss] Any rhyme or reason to disk dev names?

2011-12-21 Thread Shawn Ferry
On Dec 21, 2011, at 2:58, Matthew R. Wilson mwil...@mattwilson.org wrote:

 Can anyone offer any suggestions on a way to predict the device naming, or at 
 least get the system to list the disks after I insert one without rebooting?

You have gotten some good responses that should help you out.

However, you shouldn't have to reboot to see the new disks; try devfsadm.
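
For example (a minimal sketch of the usual sequence; adjust for your setup):

  devfsadm -c disk        # build /dev links for newly attached disks
  echo | format           # quick way to list the disks the system now sees
  cfgadm -al              # or check controller/attachment-point status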


Re: [zfs-discuss] Filesystem hang when 100% full

2010-07-25 Thread Shawn Ferry

On Jul 24, 2010, at 2:20 PM, Edward Ned Harvey wrote:

 I remember asking about this a long time ago, and everybody seemed to think 
 it was a non-issue.  The vague and unclearly reported rumor that ZFS behaves 
 poorly when it's 100% full.  Well now I have one really solid data point to 
 confirm it.  And possibly how to reproduce it, avoid it, and prevent it.

I am given to understand that you can delete snapshots in current builds (I 
don't have anything recent where I can test).

  
 I'm looking to see if anyone else has similar or related issues.  Of 
 particular value, if you have any test machine to attempt reproducing the 
 problem, that could be very valuable.
[snip]

  
 
 Before there is a problem, create a new zfs filesystem, with a space 
 reservation.
 zfs create -o reservation=1G tank/reservation

Yes, this works. You don't need much space to recover. What I do is drop the 
reservation down about 10% from its original size and proceed to free up space. 
I don't remove the reservation entirely because I use a small reservation 
(100M), and I don't want to be caught out if something is still trying to write 
and I missed it.
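
For example, with the 1G reservation quoted above, a rough sketch of the
recovery steps (sizes are only illustrative):

  zfs set reservation=900M tank/reservation   # give back about 10% of the reserved space
  # ... remove files or destroy snapshots elsewhere in the pool ...
  zfs set reservation=1G tank/reservation     # restore the full reservation afterwards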

Shawn



Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-04-22 Thread Shawn Ferry

On Apr 22, 2010, at 1:26 PM, Rich Teer wrote:

 Hi all,
 
 I have a server running SXCE b130 and I use ZFS for all file systems.  I
 also have a couple of workstations running the same OS, and all is well.
 But I also have a MacBook Pro laptop running Snow Leopard (OS X 10.6.3),
 and I have troubles creating files on exported ZFS file systems.
 
 From the laptop, I can read and write existing files on the exported ZFS
 file systems just fine, but I can't create new ones.  My understanding is
 that Mac OS makes extensive use of file attributes so I was wondering if
 this might be the cause of the problem (I know ZFS supports file attributes,
 but I wonder if I have to utter some magic incantation to get them working
 properly with Mac OS).
 
 At the moment I have a workaround: I use sftp to copy the files from the
 laptop to the server.  But this is a pain in the ass and I'm sure there's
 a way to make this just work properly!

I haven't seen this behavior. However, all of the file systems used by my
Mac are on pool version 8, filesystem version 2. I don't know if that could
be part of your problem or not.

I am attaching in two ways: direct attach, and an iSCSI zvol with the pool
and FS created locally.
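
For reference, a rough sketch of the iSCSI zvol case (names and sizes are
placeholders, and shareiscsi assumes the legacy iscsitgt target is still
available on your build):

  # on the server
  zfs create -V 20G tank/mac-zvol
  zfs set shareiscsi=on tank/mac-zvol

  # on the Mac, after logging in to the target
  zpool create macpool /dev/diskN     # diskN is whatever the initiator presents
  zfs create macpool/media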

Shawn 



Re: [zfs-discuss] zfs-raidz - simulate disk failure

2009-11-23 Thread Shawn Ferry
I would try using hdadm or cfgadm to specifically offline devices out from 
under ZFS.

I have done that previously with cfgadm for systems I cannot physically access.
You can also use file-backed storage to create your raidz, then move, delete,
or overwrite the files to simulate issues.
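
For example, a minimal file-backed sketch (sizes and paths are arbitrary):

  mkfile 256m /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4
  zpool create testpool raidz /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4
  # scribble over one "disk" to simulate corruption, then let a scrub find it
  dd if=/dev/urandom of=/var/tmp/d3 bs=1024k count=16 conv=notrunc
  zpool scrub testpool
  zpool status -v testpool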

Shawn


On Nov 23, 2009, at 1:32 PM, David Dyer-Bennet wrote:

 
 On Mon, November 23, 2009 11:44, sundeep dhall wrote:
 All,
 
 I have a test environment with 4 internal disks and RAIDZ option.
 
 Q) How do I simulate a sudden 1-disk failure to validate that zfs / raidz
 handles things well without data errors
 
 Options considered
 1. suddenly pulling a disk out
 2. using zpool offline
 
 3.  Use dd to a raw device to corrupt small random parts of the disks
 supporting the zpool.
 
 I think both these have issues in simulating a sudden failure
 
 Probably now it's all three  :-(.
 
 -- 
 David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
 Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
 Photos: http://dd-b.net/photography/gallery/
 Dragaera: http://dragaera.info
 

--
Shawn Ferry  shawn.ferry at sun.com
571.291.4898








Re: [zfs-discuss] cannot delete file when fs 100% full

2008-08-29 Thread Shawn Ferry

On Aug 29, 2008, at 7:09 AM, Tomas Ögren wrote:

 On 15 August, 2008 - Tomas Ögren sent me these 0,4K bytes:

 On 14 August, 2008 - Paul Raines sent me these 2,9K bytes:

 This problem is becoming a real pain to us again and I was wondering
 if there has been in the past few month any known fix or workaround.

I had this problem in the past. Fortunately I was able to recover by  
removing an old snapshot which gave me enough room to deal with my  
problems.

Now, I create a fs called reserved and set a small reservation to  
ensure that there is a small amount of space available.
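
Roughly like this (a sketch matching the properties shown below; adjust the
names and size to taste):

  zfs create -o reservation=50M -o mountpoint=none -o canmount=off noroute/reserved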

[sferry@noroute(0) 12:59 s001]
/Users/sferry
[6] zfs get reservation,mountpoint,canmount,type  noroute/reserved
NAME              PROPERTY     VALUE       SOURCE
noroute/reserved  reservation  50M         local
noroute/reserved  mountpoint   none        inherited from noroute
noroute/reserved  canmount     off         local
noroute/reserved  type         filesystem  -

If I fill the pool now, I reduce the reservation (reduce instead of  
remove in case I have something writing uncontrollably to the pool)  
and clean up.

Shawn

--
Shawn Ferry  shawn.ferry at sun.com
Senior Primary Systems Engineer
Sun Managed Operations








Re: [zfs-discuss] Subversion repository on ZFS

2008-08-28 Thread Shawn Ferry



On Aug 27, 2008, at 4:38 PM, Tim wrote:


On Wed, Aug 27, 2008 at 3:29 PM, Ian Collins [EMAIL PROTECTED] wrote:

Does anyone have any tuning tips for a Subversion repository on  
ZFS?  The

repository will mainly be storing binary (MS Office documents).

It looks like a vanilla, uncompressed file system is the best bet.


I have an SVN-on-ZFS repository with ~75K relatively small files and a few
binaries. It is working well without any special tuning.


Shawn


--
Shawn Ferry  shawn.ferry at sun.com
Senior Primary Systems Engineer
Sun Managed Operations







Re: [zfs-discuss] ZFS number of file systems scalability

2008-02-06 Thread Shawn Ferry

There is a write-up of similar findings, and more information about sharemgr, at
http://developers.sun.com/solaris/articles/nfs_zfs.html

Unfortunately, I don't see anything that says those changes will be in U5.

Shawn

On Feb 5, 2008, at 8:21 PM, Paul B. Henson wrote:


 I was curious to see about how many filesystems one server could
 practically serve via NFS, and did a little empirical testing.

 Using an x4100M2 server running S10U4x86, I created a pool from a slice of
 the hardware raid array created from the two internal hard disks, and set
 sharenfs=on for the pool.

 I then created filesystems, 1000 at a time, and timed how long it took to
 create each thousand filesystems, to set sharenfs=off for all filesystems
 created so far, and to set sharenfs=on again for all filesystems. I
 understand sharetab optimization is one of the features in the latest
 OpenSolaris, so just for fun I tried symlinking /etc/dfs/sharetab to a mfs
 file system to see if it made any difference. I also timed a complete boot
 cycle (from typing 'init 6' until the server was again remotely available)
 at 5000 and 10,000 filesystems.
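
A minimal sketch of the kind of per-thousand loop being described (pool and
filesystem names are placeholders, not the actual test script):

  i=1
  while [ $i -le 1000 ]; do
      zfs create tank/fs$i
      i=`expr $i + 1`
  done
  time zfs set sharenfs=off tank   # unshares every filesystem inheriting from the pool
  time zfs set sharenfs=on tank    # shares them all again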

 Interestingly, filesystem creation itself scaled reasonably well. I
 recently read a thread where someone was complaining it took over eight
 minutes to create a filesystem at the 10,000 filesystem count. In my tests,
 while the first 1000 filesystems averaged only a little more than half a
 second each to create, filesystems 9000-10,000 only took roughly twice
 that, averaging about 1.2 seconds each to create.

 Unsharing scalability wasn't as good, time requirements increasing by a
 factor of six. Having sharetab in mfs made a slight difference, but nothing
 outstanding. Sharing (unsurprisingly) was the least scalable, increasing by
 a factor of eight.

 Boot-wise, the system took about 10.5 minutes to reboot at 5000
 filesystems. This increased to about 35 minutes at the 10,000 filesystem
 count.

 Based on these numbers, I don't think I'd want to run more than 5-7
 thousand filesystems per server to avoid extended outages. Given our user
 count, that will probably be 6-10 servers 8-/. I suppose we could have a
 large number of smaller servers rather than a small number of beefier
 servers; although that seems less than efficient. It's too bad there's no
 way to fast track backporting of openSolaris improvements to production
 Solaris, from what I've heard there will be virtually no ZFS improvements
 in S10U5 :(.

 Here are the raw numbers for anyone interested. The first column is number
 of file systems. The second column is total and average time in seconds to
 create that block of filesystems (eg, the first 1000 took 589 seconds to
 create, the second 1000 took 709 seconds). The third column is the time in
 seconds to turn off NFS sharing for all filesystems created so far (eg, 14
 seconds for 1000 filesystems, 38 seconds for 2000 filesystems). The fourth
 is the same operation with sharetab in a memory filesystem (I stopped this
 measurement after 7000 because sharing was starting to take so long). The
 final column is how long it took to turn on NFS sharing for all filesystems
 created so far.


 #FS    create/avg   off/avg   off(mfs)/avg   on/avg
 1000   589/.59      14/.01    9/.01          32/.03
 2000   709/.71      38/.02    25/.01         107/.05
 3000   783/.78      70/.02    50/.02         226/.08
 4000   836/.84      112/.03   83/.02         388/.10
 5000   968/.97      178/.04   124/.02        590/.12
 6000   930/.93      245/.04   172/.03        861/.14
 7000   961/.96      319/.05   229/.03        1172/.17
 8000   1045/1.05    405/.05   -              1515/.19
 9000   1098/1.10    500/.06   -              1902/.21
 10000  1165/1.17    599/.06   -              2348/.23


 -- 
 Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
 Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
 California State Polytechnic University  |  Pomona CA 91768

--
Shawn Ferry  shawn.ferry at sun.com
Senior Primary Systems Engineer
Sun Managed Operations
571.291.4898







Re: [zfs-discuss] x4500 x2

2008-01-31 Thread Shawn Ferry



On Jan 31, 2008, at 6:13 AM, Jorgen Lundman wrote:


 If we were to get two x4500s, with the idea of keeping one as a  
 passive
 standby (serious hardware failure) are there any clever solutions in
 doing so?

You should take a look at AVS; there are some ZFS and AVS demos online at
http://opensolaris.org/os/project/avs/


 We can not use ZFS itself, but rather zpool volumes, with UFS on top. I
 assume there is no zpool send/recv (although, that would be pretty neat
 if there was!). Doing full rsyncs all the time would probably be slow.

 Would it be possible to do a snapshot, then 10 minutes later, another
 snapshot and only rsync the differences?

zfs send/recv and the incremental capability may also work for you
depending on your needs.
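
For example, a minimal sketch with a zvol (pool, volume, snapshot, and host
names are all placeholders):

  zfs snapshot tank/vol@t1
  zfs send tank/vol@t1 | ssh standby zfs recv backup/vol
  # ... ten minutes later, send only what changed since t1 ...
  zfs snapshot tank/vol@t2
  zfs send -i t1 tank/vol@t2 | ssh standby zfs recv backup/vol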



 Any advice will be appreciated.

 Lund


 -- 
 Jorgen Lundman   | [EMAIL PROTECTED]
 Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
 Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
 Japan| +81 (0)3 -3375-1767  (home)

--
Shawn Ferry  shawn.ferry at sun.com
Senior Primary Systems Engineer
Sun Managed Operations
571.291.4898







Re: [zfs-discuss] ZFS on OS X port now on macosforge

2008-01-10 Thread Shawn Ferry

On Jan 9, 2008, at 7:38 PM, Noël Dellofano wrote:

 Yep, these issues are known and I've listed them with explanations on
 on our website under "The State of the ZFS on OS X World":
 http://zfs.macosforge.org/

 We're working on them.  The Trash and iTunes bugs should be fixed
 soon.  The "your non-replicated drive just went MIA" panic will take a
 little longer, as it requires a more complex fix that will need to
 involve diskutil and friends while also preserving the integrity of
 your pool and all the data therein.

I don't have any non-replicated drives, so I never ran into this
particular issue. The panics when bringing devices online were
always for mirrors.

I realize that I wasn't clear before. Since the by-request-only 101 build,
and continuing with the 102A release, I have been able to do rude things
to the external portion of my mirrors without experiencing a panic.


 This is also a problem the Solaris
 ZFS team is working on as well, it's just not as common that people
 rip out live drives from a Thumper :)

That was almost the first thing I did when we got one to play with :)



 Noel

 On Jan 9, 2008, at 2:45 PM, Shawn Ferry wrote:

 Noël,

 To try and get regular use out of ZFS on OS X, I moved all of my
 Music directory to a mirrored pool implemented as a slice of my internal
 disk and a FW enclosure.

 There are a few issues I see in using ZFS for end user applications
 at the moment.

 1) Trash doesn't work
  Adding files to the trash seems to work, but you
  can't empty it without going to the .Trashes
  directory and manually removing the files
 2) You can't download new iTunes Store music through iTunes
  iTunes can download podcasts and add new music from
  local files even using the consolidate music option
 3) It was somewhat prone to causing panics
  mainly bringing devices back online or rudely disconnecting
  them
 4) Once the kext is loaded, it appears that any user can perform any
 action, e.g. anybody can create or delete a snapshot or a pool

 Shawn

 On Jan 9, 2008, at 2:26 PM, Noël Dellofano wrote:

 As soon as I get in to work and can backup my sparsebundle to a
 spare
 MBP, I'm going to start banging on it.

 Sweet deal :)

 So, do you have all of /Users on zfs, just one account, have you
 tried
 a FileVaulted account too? Or is that just crazy talk? :-)

 I currently just have one account, my personal one, to use ZFS.   
 Then
 I just have another local admin account that uses HFS+ that I don't
 really use for anything except occasional testing.  In my current
 setup, I created a pool, and I have 2 filesystems in it, one of  
 which
 is my home directory.  Then I just created my  account and pointed  
 it
 to use that directory for my home dir.
 I haven't experimented with File Vault yet at all, so feel free to
 have at it.  Hopefully when we get encryption for ZFS then we'll be
 able to just offer it natively that way.

 Noel


 On Jan 8, 2008, at 9:17 PM, Joe Block wrote:

 Thanks for the pointer. I had just checked ADC for a new download
 this
 morning, actually.

 As soon as I get in to work and can backup my sparsebundle to a
 spare
 MBP, I'm going to start banging on it.

 So, do you have all of /Users on zfs, just one account, have you
 tried
 a FileVaulted account too? Or is that just crazy talk? :-)

 On Jan 8, 2008 2:09 PM, Noël Dellofano [EMAIL PROTECTED]  
 wrote:
 Hey everyone,

 This is just a quick announcement to say that the ZFS on OS X
 port is
 now posted for your viewing fun at:

 http://zfs.macosforge.org/

 The page is also linked off of the ZFS Open Solaris page under  
 ZFS
 Ports:
 http://opensolaris.org/os/community/zfs/porting/

 This page holds the status for the ZFS on OSX port and includes a
 small FAQ, some known bugs, announcements, and will include more  
 as
 time goes on.  It also holds the latest source code and binaries
 that
 you can download to your hearts content.  So if you have a Mac,  
 are
 running Leopard, and are feeling bleeding edge please try it out.
 Comments, questions, suggestions and feedback are all very  
 welcome.
 I also want to point out this is BETA.  We're still working on
 getting
 some features going, as well as fleshing out issues with Finder,
 Disk
 Util, iTunes, and other parts of the system.  So when I say
 bleeding,
 I'm not kidding :)  However I'm excited to say that I'm happily
 running ZFS as my home directory on my MacBook Pro which is what I
 work off of every day, and am running weekly snapshots which I  
 'zfs
 send' to my external drive.  Oh happy day.

 thanks!
 Noel Dellofano




 -- 
 Joe Block [EMAIL PROTECTED]
 Macintosh System Administrator
 +1.650.253.7264

 Information is the currency of democracy.
 -Thomas Jefferson


Re: [zfs-discuss] ZFS on OS X port now on macosforge

2008-01-09 Thread Shawn Ferry
Noël,

To try and get regular use out of ZFS on OS X, I moved all of my
Music directory to a mirrored pool implemented as a slice of my internal
disk and a FW enclosure.

There are a few issues I see in using ZFS for end user applications
at the moment.

1) Trash doesn't work
Adding files to the trash seems to work, but you
can't empty it without going to the .Trashes
directory and manually removing the files
2) You can't download new iTunes Store music through iTunes
iTunes can download podcasts and add new music from
local files even using the consolidate music option
3) It was somewhat prone to causing panics
mainly bringing devices back online or rudely disconnecting
them
4) Once the kext is loaded, it appears that any user can perform any
action, e.g. anybody can create or delete a snapshot or a pool

Shawn

On Jan 9, 2008, at 2:26 PM, Noël Dellofano wrote:

 As soon as I get in to work and can backup my sparsebundle to a spare
 MBP, I'm going to start banging on it.

 Sweet deal :)

 So, do you have all of /Users on zfs, just one account, have you  
 tried
 a FileVaulted account too? Or is that just crazy talk? :-)

 I currently just have one account, my personal one, to use ZFS.  Then
 I just have another local admin account that uses HFS+ that I don't
 really use for anything except occasional testing.  In my current
 setup, I created a pool, and I have 2 filesystems in it, one of which
 is my home directory.  Then I just created my  account and pointed it
 to use that directory for my home dir.
 I haven't experimented with File Vault yet at all, so feel free to
 have at it.  Hopefully when we get encryption for ZFS then we'll be
 able to just offer it natively that way.

 Noel


 On Jan 8, 2008, at 9:17 PM, Joe Block wrote:

 Thanks for the pointer. I had just checked ADC for a new download  
 this
 morning, actually.

 As soon as I get in to work and can backup my sparsebundle to a spare
 MBP, I'm going to start banging on it.

 So, do you have all of /Users on zfs, just one account, have you  
 tried
 a FileVaulted account too? Or is that just crazy talk? :-)

 On Jan 8, 2008 2:09 PM, Noël Dellofano [EMAIL PROTECTED] wrote:
 Hey everyone,

 This is just a quick announcement to say that the ZFS on OS X port  
 is
 now posted for your viewing fun at:

 http://zfs.macosforge.org/

 The page is also linked off of the ZFS Open Solaris page under ZFS
 Ports:
 http://opensolaris.org/os/community/zfs/porting/

 This page holds the status for the ZFS on OSX port and includes a
 small FAQ, some known bugs, announcements, and will include more as
 time goes on.  It also holds the latest source code and binaries  
 that
 you can download to your hearts content.  So if you have a Mac, are
 running Leopard, and are feeling bleeding edge please try it out.
 Comments, questions, suggestions and feedback are all very welcome.
 I also want to point out this is BETA.  We're still working on
 getting
 some features going, as well as fleshing out issues with Finder,  
 Disk
 Util, iTunes, and other parts of the system.  So when I say  
 bleeding,
 I'm not kidding :)  However I'm excited to say that I'm happily
 running ZFS as my home directory on my MacBook Pro which is what I
 work off of every day, and am running weekly snapshots which I 'zfs
 send' to my external drive.  Oh happy day.

 thanks!
 Noel Dellofano




 -- 
 Joe Block [EMAIL PROTECTED]
 Macintosh System Administrator
 +1.650.253.7264

 Information is the currency of democracy.
 -Thomas Jefferson


--
Shawn Ferry  shawn.ferry at sun.com
Senior Primary Systems Engineer
Sun Managed Operations
571.291.4898







Re: [zfs-discuss] Trial x4500, zfs with NFS and quotas.

2007-12-14 Thread Shawn Ferry

On Dec 14, 2007, at 12:27 AM, Jorgen Lundman wrote:



 Shawn Ferry wrote:
 Jorgen,

 You may want to try running 'bootadm update-archive'

 Assuming that your boot-archive problem is an out of date boot- 
 archive
 message at boot and/or doing a clean reboot to let the system try to
 write an up to date boot-archive.

 Yeah, it is remembering to do so after something has changed that's
 hard. In this case, I had to break the mirror to install OpenSolaris.
 (shame that the CD/DVD, and miniroot, doesn't have an md driver).

 It would be tempting to add the bootadm update-archive to the boot
 process, as I would rather have it come up half-assed, than not come  
 up
 at all.

It is part of the shutdown process; you just need to stop crashing :)





Re: [zfs-discuss] How to properly tell zfs of new GUID controller numbers after a firmware upgrade changes the IDs

2007-12-13 Thread Shawn Ferry
 of the problem.

 So again, we do NOT want to zero out any of our disks, we are not  
 trying
 to forcibly use replaced disks.  We simply wanted zfs to re-read the
 devices under /dev/rdsk and update each pool with the correct disk
 targets.

 If you can confirm that a simple export/import is the proper  
 procedure for
 this (followed by a clear once the resulting resilvering  
 finishes), I
 would appreciate it.  And, if you can postulate what may have caused  
 the
 freeze that Robert noticed, that would put our minds at ease.



 TIA,

 Any assistance on this would be greatly appreciated and or pointers  
 on helpful documentation.

 -- 
   S U N  M I C R O S Y S T E M S  I N C.

   Jill Manfield - TSE-OS Administration Group
   email: [EMAIL PROTECTED]
   phone: (800)USA-4SUN (Reference your case number)
   address:  1617 Southwood Drive Nashua,NH 03063
   mailstop: NSH-01- B287
   
   OS Support Team 9AM to 6PM EST
Manager  [EMAIL PROTECTED]  x74110


--
Shawn Ferry  shawn.ferry at sun.com
Senior Primary Systems Engineer
Sun Managed Operations
571.291.4898







Re: [zfs-discuss] Trial x4500, zfs with NFS and quotas.

2007-12-13 Thread Shawn Ferry
Jorgen,

You may want to try running 'bootadm update-archive'

Assuming that your boot-archive problem is an out of date boot-archive
message at boot and/or doing a clean reboot to let the system try to
write an up to date boot-archive.
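
If the box ever fails to boot because the archive is stale, the usual recovery
is a failsafe boot followed by something like this (assuming failsafe mounts
your root at /a):

  bootadm update-archive -R /a
  reboot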

I would also encourage you to connect the LOM to the network in case you
have such issues again; you should be able to recover remotely.

Shawn

On Dec 13, 2007, at 10:33 PM, Jorgen Lundman wrote:


 NOC staff couldn't reboot it after the quotacheck crash, and I only just
 got around to going to the Datacenter.  This time I disabled NFS, and
 the rsync that was running, and ran just quotacheck and it completed
 successfully. The reason it didn't boot was that damned boot-archive
 again. Seriously!

 Anyway, I did get a vmcore from the crash, but maybe it isn't so
 interesting. I will continue with the stress testing of UFS on zpool as
 it is the only solution that would be acceptable. Not given up yet, I
 have a few more weeks to keep trying. :)



 -rw-r--r--   1 root root 2345863 Dec 14 09:57 unix.0
 -rw-r--r--   1 root root 4741623808 Dec 14 10:05 vmcore.0

 bash-3.00# adb -k unix.0 vmcore.0
 physmem 3f9789
 $c
 top_end_sync+0xcb(ff0a5923d000, ff001f175524, b, 0)
 ufs_fsync+0x1cb(ff62e757ad80, 1, fffedd6d2020)
 fop_fsync+0x51(ff62e757ad80, 1, fffedd6d2020)
 rfs3_setattr+0x3a3(ff001f1757c8, ff001f1758b8, ff1a0d942080, ff001f175b20, fffedd6d2020)
 common_dispatch+0x444(ff001f175b20, ff0a5a4baa80, 2, 4, f7c7ea78, c06003d0)
 rfs_dispatch+0x2d(ff001f175b20, ff0a5a4baa80)
 svc_getreq+0x1c6(ff0a5a4baa80, fffec7eda6c0)
 svc_run+0x171(ff62becb72a0)
 svc_do_run+0x85(1)
 nfssys+0x748(e, fecf0fc8)
 sys_syscall32+0x101()


 BAD TRAP: type=e (#pf Page fault) rp=ff001f175320 addr=0 occurred in module
 unknown due to a NULL pointer dereference





 -- 
 Jorgen Lundman   | [EMAIL PROTECTED]
 Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
 Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
 Japan| +81 (0)3 -3375-1767  (home)

--
Shawn Ferry  shawn.ferry at sun.com
Senior Primary Systems Engineer
Sun Managed Operations
571.291.4898




