Re: [zfs-discuss] Interaction between ZFS intent log and mmap'd files

2012-07-03 Thread James Litchfield

inline

On 07/02/12 15:00, Nico Williams wrote:

On Mon, Jul 2, 2012 at 3:32 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us  wrote:

On Mon, 2 Jul 2012, Iwan Aucamp wrote:

I'm interested in some more detail on how the ZFS intent log behaves for
updates done via a memory-mapped file - i.e. will the ZIL log updates done
to an mmap'd file or not?


I would expect these writes to go into the intent log unless msync(2) is
used on the mapping with the MS_SYNC option.

You can't count on any writes to mmap(2)ed files hitting disk until
you msync(2) with MS_SYNC.  The system should want to wait as long as
possible before committing any mmap(2)ed file writes to disk.
Conversely you can't expect that no writes will hit disk until you
msync(2) or munmap(2).

Driven by fsflush which will scan memory (in chunks) looking for dirty,
unlocked, non-kernel pages to flush to disk.
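As a minimal illustration of the point (not from the original messages; the
file name "data.bin" and the 4KB length are placeholders, and error handling
is abbreviated), a store through a MAP_SHARED mapping is only guaranteed to
reach stable storage once msync(2) with MS_SYNC returns - which is also the
point at which a synchronous commit, and hence the ZIL, would come into play:

    #include <sys/mman.h>
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
            /* "data.bin" is assumed to exist and be at least one page long. */
            int fd = open("data.bin", O_RDWR);
            if (fd < 0)
                    return (1);
            char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED)
                    return (1);

            memcpy(p, "hello", 5);               /* dirties the page in memory only */
            if (msync(p, 4096, MS_SYNC) != 0)    /* blocks until the page is on disk */
                    return (1);
            (void) munmap(p, 4096);
            (void) close(fd);
            return (0);
    }

Until the msync() returns, the modified page may or may not have been pushed
out by fsflush; nothing in the API promises either way.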


Nico
--



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Interaction between ZFS intent log and mmap'd files

2012-07-03 Thread James Litchfield

Agreed - msync/munmap is the only guarantee.

On 07/ 3/12 08:47 AM, Nico Williams wrote:

On Tue, Jul 3, 2012 at 9:48 AM, James Litchfield
jim.litchfi...@oracle.com  wrote:

On 07/02/12 15:00, Nico Williams wrote:

You can't count on any writes to mmap(2)ed files hitting disk until
you msync(2) with MS_SYNC.  The system should want to wait as long as
possible before committing any mmap(2)ed file writes to disk.
Conversely you can't expect that no writes will hit disk until you
msync(2) or munmap(2).

Driven by fsflush which will scan memory (in chunks) looking for dirty,
unlocked, non-kernel pages to flush to disk.

Right, but one just cannot count on that -- it's not part of the API
specification.

Nico
--



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] tuning zfs_arc_min

2011-10-10 Thread James Litchfield
The value of zfs_arc_min specified in /etc/system must be over 64MB
(0x4000000). Otherwise the setting is ignored. The value is in bytes,
not pages.
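For illustration only (the value here is an arbitrary example, not a
recommendation), a setting that clears that floor would be expressed in
bytes in /etc/system like this:

    set zfs:zfs_arc_min = 0x10000000

0x10000000 is 256MB; a value that is not over 0x4000000 (64MB) is silently
ignored.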

Jim
---

On 10/ 6/11 05:19 AM, Frank Van Damme wrote:

Hello,

quick and stupid question: I'm breaking my head over how to tune
zfs_arc_min on a running system. There must be some magic word to pipe
into mdb -kw but I forgot it. I tried /etc/system but it's still at the
old value after reboot:

ZFS Tunables (/etc/system):
  set zfs:zfs_arc_min = 0x20
  set zfs:zfs_arc_meta_limit=0x1

ARC Size:
  Current Size: 1314 MB (arcsize)
  Target Size (Adaptive):   5102 MB (c)
  Min Size (Hard Limit):2048 MB (zfs_arc_min)
  Max Size (Hard Limit):5102 MB (zfs_arc_max)


I could use the memory now since I'm running out of it, trying to delete
a large snapshot :-/



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-19 Thread James Litchfield




Erik's experiences echo mine. I've never seen a white-box in a medium to
large company that I've visited. Always a name brand.

His comments on sysadmin staffing are dead on.

Jim Litchfield
Oracle Consulting


On 7/19/2010 5:35 PM, Erik Trimble wrote:

  On Mon, 2010-07-19 at 17:54 -0600, Eric D. Mudama wrote:
  
  
On Wed, Jul 14 at 23:51, Tim Cook wrote:


  Out of the fortune 500, I'd be willing to bet there's exactly zero
companies that use whitebox systems, and for a reason.
--Tim
  


Sure, some core SAP system or HR data warehouse runs on name-brand
gear, and maybe they have massive SANs with various capabilities that
run on name brand gear as well, but I'd guess that most every fortune
500 company buys some large number of generic machines as well.

(generic being anything from newegg build-it-yourself to the bargain
SKUs from major PC companies that may not have mission-critical
support contracts associated with them)

Any company that believes it can add more value in their IT supply
chain than the vendor they'd be buying from would be foolish not to
put energy into that space (if they can "afford" to.)  Google is but a
single example, though I am sure there are others.


  
  
They may *believe* they can, but no one ever does, because you trade
increased manpower for up-front hardware cost. And companies aren't
willing to do that. 


I've been around a large number of different environments (finance,
publishing, development, ISP, ASP, even HW manufacturing), and the only
place I've ever seen non-name-brand servers in a datacenter/server room
production configuration is for Google-like massive deployments.
Whitebox machines proliferate in SQE and desktop environs where they're
burnable and disposable. But for any kind of production use (or those
with a Deployment staging or QA setup), I've only ever seen brand-names,
WITH the service contract fully paid up.


IT departments are *always* critically understaffed, and in order to
make a whitebox deployment successful for production use, you need
dedicated staff for that - PERMANENT staff. Companies don't do that.
Admins are just so chronically overworked that they have no ability to
spend any extra time on making a whitebox setup usable for production,
even if they have the expertise.  And you better believe that us Admins
won't even think about production support for a box that doesn't have a
service contract on it. Hardware and Software.  Because no matter how
good you are, you can't think of everything (or, if you can, it takes
awhile) - and, the 20 hours it just took you to fix that machine could
have been 2 hours if it had a service contract. Doesn't take too long
for that kind of math to blow out any savings whiteboxes may have had.

Worst case, someone goes and buys Dell.  :-)





  



-- 

James Litchfield | Senior Consultant
Phone: +1 4082237059 | Mobile: +1 4082180790 
Oracle ACS
California 


Oracle is committed to developing practices and products that
help protect the environment




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs on 32 bit?

2009-06-14 Thread James Litchfield

There are 32-bit and 64-bit versions of the file system module
available on x86. Given the quality of the development team, I'd be *very*
surprised if issues such as those suggested in your message exist.

Jurgen's comment highlights the major issue - the lack of space to
cache data when in 32-bit mode.

Jim Litchfield
---

Erik Trimble wrote:

Jürgen Keil wrote:

besides performance aspects, what`s the con`s of
running zfs on 32 bit ?



The default 32 bit kernel can cache a limited amount of data
(< 512MB) - unless you lower the kernelbase parameter.
In the end the small cache size on 32 bit explains the inferior
performance compared to the 64 bit kernel.
  
It's been a long time, but I seem to recall that the ZFS internals 
were written using values (ints, longs, etc) as found on 64-bit 
architectures, and that there was the possibility that many of them 
wouldn't operate properly in a 32-bit environment (i.e. size 
assumption mismatches on values that might silently drop/truncate or 
screw up calculations).  I don't know if that's still correct (or if 
I'm getting it completely wrong), but the word was (2 years ago), that 
32-bit ZFS might not just have performance problems, but might 
possibly be silently screwing you.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] rename(2), atomicity, crashes and fsync()

2009-03-18 Thread James Litchfield

POSIX has a Synchronized I/O Data (and File) Integrity Completion
definition (line 115434 of the Issue 7 (POSIX.1-2008) specification).
What it says is that writes for a byte range in a file must complete
before any pending reads for that byte range are satisfied.

It does not say that if you have 3 pending writes and pending reads for
a byte range, the writes must complete in the order issued - simply that
they must all complete before any reads complete. See lines 71371-71376
in the write() discussion. The specification explicitly avoids discussing
the behavior of concurrent writes to a file from multiple processes, and
suggests that applications doing this should use some form of concurrency
control.

It is true that because of these semantics, many file system
implementations will use locks to ensure that no reads can occur in the
entire file while writes are happening, which has the side effect of
ensuring the writes are executed in the order they are issued. This is an
implementation detail that can be complicated by async I/O as well. The
only guarantee POSIX offers is that all pending writes to the relevant
byte range in the file will be completed before a read to that byte range
is allowed. An in-progress read is expected to block any writes to the
relevant byte range until the read completes.

The specification also does not say the bits for a file must end up on
the disk without an intervening fsync() operation unless you've explicitly
asked for data synchronization (O_SYNC, O_DSYNC) when you opened the file.
The fsync() discussion (line 31956) says that the bits must undergo a
physical write of data from the buffer cache that should be completed
when the fsync() call returns. If there are errors, the return from the
fsync() call should express the fact that one or more errors occurred.
The only guarantee that the physical write happens is if the system
supports the _POSIX_SYNCHRONIZED_IO option. If not, the comment is to
read the system's conformance documentation (if any) to see what actually
does happen. In the case that _POSIX_SYNCHRONIZED_IO is not supported,
it's perfectly allowable for fsync() to be a no-op.
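To make the practical upshot concrete, here is a sketch (not from the
thread; the function and file names are placeholders) of the
application-side pattern the rename() discussion is about, assuming
_POSIX_SYNCHRONIZED_IO is supported so fsync() really forces the data out:
write the new contents to a temporary file, fsync() it, then rename() it
over the old name.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Replace 'path' with 'len' bytes from 'buf', via a temporary file. */
    int
    replace_file(const char *path, const char *tmppath,
        const void *buf, size_t len)
    {
            int fd = open(tmppath, O_WRONLY | O_CREAT | O_TRUNC, 0644);

            if (fd < 0)
                    return (-1);
            if (write(fd, buf, len) != (ssize_t)len ||   /* into the buffer cache */
                fsync(fd) != 0) {                        /* ... and onto stable storage */
                    (void) close(fd);
                    (void) unlink(tmppath);
                    return (-1);
            }
            (void) close(fd);
            /* After a successful fsync(), a crash leaves either the old
             * or the complete new file under 'path', never a mix. */
            return (rename(tmppath, path));
    }

If fsync() is effectively a no-op, the rename() can become visible before
the new data is on disk, which is the kind of reordering David's question
below is getting at.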

Jim Litchfield
---
David Magda wrote:

On Mar 18, 2009, at 12:43, Bob Friesenhahn wrote:

POSIX does not care about disks or filesystems.  The only correct 
behavior is for operations to be applied in the order that they are 
requested of the operating system.  This is a core function of any 
operating system.  It is therefore ok for some (or all) of the data 
which was written to new to be lost, or for the rename operation to 
be lost, but it is not ok for the rename to end up with a corrupted 
file with the new name.


Out of curiosity, is this what POSIX actually specifies? If that is 
the case, wouldn't that mean that the behaviour of ext3/4 is 
incorrect? (Assuming that it does re-order operations.)





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] fmd dying in zfs shutdown?

2009-02-16 Thread James Litchfield

Known issue? I've seen this 5 times over the past few days. I think
these were, for the most part, BFUs on top of B107. x86.

# pstack fmd.733
core 'fmd.733' of 733:/usr/lib/fm/fmd/fmd
-----------------  lwp# 1 / thread# 1  --------------------------------
fe8c3347 libzfs_fini (0, fed9e000, 8047d08, fed74964) + 17
fed74979 zfs_fini (84701d0, fed9e000, 8047d38, fed7ac40) + 21
fed75adb bltin_fini (84701d0, 4, fed8a340, fed7a9f0) + 1b
fed7aa0f topo_mod_stop (84701d0, fed9e000, 8047d78, fed7b17e) + 2b
fed7b1ba topo_modhash_unload_all (84abe88, 84939a8, 8047dc8, fed803d2) + 4a
fed804b6 topo_close (84abe88, 84fdd70) + f2
0807c75f fmd_topo_fini (807305c, feef12d5, 0, 0, 8047e60, 8047e30) + 37
0806003d fmd_destroy (809a6d8, 4, 8047e78, 8072f67) + 281
08073075 main (1, 8047ea4, 8047eac, 805f2ef) + 365
0805f34d _start   (1, 8047f30, 0, 8047f44, 8047f5c, 8047f7d) + 7d

# mdb fmd.733
Loading modules: [ fmd libumem.so.1 libnvpair.so.1 libtopo.so.1 
libuutil.so.1 libavl.so.1 libsysevent.so.1 ld.so.1 ]

> $c
libzfs.so.1`libzfs_fini+0x17(0, fed9e000, 8047d08, fed74964)
libtopo.so.1`zfs_fini+0x21(84701d0, fed9e000, 8047d38, fed7ac40)
libtopo.so.1`bltin_fini+0x1b(84701d0, 4, fed8a340, fed7a9f0)
libtopo.so.1`topo_mod_stop+0x2b(84701d0, fed9e000, 8047d78, fed7b17e)
libtopo.so.1`topo_modhash_unload_all+0x4a(84abe88, 84939a8, 8047dc8, 
fed803d2)

libtopo.so.1`topo_close+0xf2(84abe88, 84fdd70)
fmd_topo_fini+0x37(807305c, feef12d5, 0, 0, 8047e60, 8047e30)
fmd_destroy+0x281(809a6d8, 4, 8047e78, 8072f67)
main+0x365(1, 8047ea4, 8047eac, 805f2ef)
_start+0x7d(1, 8047f30, 0, 8047f44, 8047f5c, 8047f7d)
> libzfs.so.1`libzfs_fini+0x17/i
libzfs.so.1`libzfs_fini+0x17:   pushl  0x4(%esi)
> $r
%cs = 0x0043%eax = 0xfed74958 libtopo.so.1`zfs_fini
%ds = 0x004b%ebx = 0xfe934000
%ss = 0x004b%ecx = 0x084701d0
%es = 0x004b%edx = 0xfee12a00
%fs = 0x%esi = 0x
%gs = 0x01c3%edi = 0x084701d0

%eip = 0xfe8c3347 libzfs.so.1`libzfs_fini+0x17
%ebp = 0x08047cd8
%kesp = 0x

%eflags = 0x00010212
id=0 vip=0 vif=0 ac=0 vm=0 rf=1 nt=0 iopl=0x0
status=of,df,IF,tf,sf,zf,AF,pf,cf

 %esp = 0x08047cc4
%trapno = 0xe
 %err = 0x4

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Hotplug issues on USB removable media.

2008-10-28 Thread James Litchfield
I believe the answer is in the last email in that thread. hald doesn't offer
the notifications and it's not clear that ZFS can handle them. As is noted,
there are complications with ZFS due to the possibility of multiple disks
comprising a volume, etc. It would be a lot of work to make it work
correctly for any but the simplest single disk case.

Jim
---
Niall Power wrote:
 Hi Tim,

 Tim Foster wrote:
   
 Niall Power wrote:
 
 Bueller? Anyone?
   
 Yeah, I'd love to know the answer too. The furthest I got into
 investigating this last time was:

 http://mail.opensolaris.org/pipermail/zfs-discuss/2007-December/044787.html 


 - does that help at all Niall?
 

 I dug around and found those few hald pieces for zpools also. Seems to me
 that there was at least an intention or desire to make things work with
 hald.
 Some further searching around reveals this conversation thread:
 http://opensolaris.org/jive/thread.jspa?messageID=257186
 The trail goes cold there though.
   
 The context to Niall's question is to extend Time Slider to do proper
 backups to usb devices whenever a device is inserted.  I nearly had this
 working with:

 http://blogs.sun.com/timf/entry/zfs_backups_to_usb_mass
 http://blogs.sun.com/timf/entry/zfs_automatic_backup_0_1

 but I used pcfs on the storage device to store flat zfs send-streams as
 I didn't have a chance to work out what was going on. Getting ZFS plug
 n' play on usb disks would be much much cooler though[1].
 

 Exactly. Having zfs as the native filesystem would enable snapshot browsing
 from within nautilus so it's a requirement for this project.
   
 cheers,
 tim

 [1] and I reckon that by relying on the 'zfs/interval' 'none' setting
 for the auto-snapshot service, doing this now will be a lot easier than
 my previous auto-backup hack.
 
 That could be quite useful alright. We might need to come up with a 
 mechanism
 to delete the snapshot after it's taken and backed up.

 Cheers,
 Niall


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Verify files' checksums

2008-10-26 Thread James Litchfield
A nit on the nit...

cat does not use mmap for files <= 32K in size. For those files
it's a simple read() into a buffer and write() it out.

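A rough sketch (not from the thread; the path is a placeholder) of what
reading the file back actually buys you: a plain read() loop forces ZFS
to fetch and checksum every block, and on a pool with no redundancy a
checksum failure surfaces as EIO.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
            char buf[131072];
            ssize_t n;
            int fd = open("/tank/somefile", O_RDONLY);

            if (fd < 0) {
                    perror("open");
                    return (1);
            }
            while ((n = read(fd, buf, sizeof (buf))) > 0)
                    ;                       /* discard the data; we only want the I/O */
            if (n < 0)
                    perror("read");         /* EIO here suggests a checksum error */
            (void) close(fd);
            return (n < 0);
    }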
Jim
---
Chris Gerhard wrote:
 A slight nit.  

 Using cat(1) to read the file to /dev/null will not actually cause the data 
 to be read thanks to the magic that is mmap().  If you use dd(1) to read the 
 file then yes you will either get the data and thus know its blocks match 
 their checksums or dd will give you an error if you have no redundancy.

 --chris
 --
 This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs status -v tries too hard?

2008-08-06 Thread James Litchfield
After some errors were logged as to a problem with a ZFS file system,
I ran zpool status followed by zpool status -v...

# zpool status
  pool: ehome
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        ehome       ONLINE   6.28K 2.84M     0
          c2t0d0p0  ONLINE   6.28K 2.84M     0

errors: 796332 data errors, use '-v' for a list

[ elided ]

# zpool status -v
  pool: ehome
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A

 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        ehome       ONLINE   3.03K 2.09M     0
          c2t0d0p0  ONLINE   3.03K 2.09M     0

 HANGS HERE

From another window do a truss of zpool status...

ioctl(3, ZFS_IOC_ERROR_LOG, 0x08041DE0)         Err#12 ENOMEM
ioctl(3, ZFS_IOC_ERROR_LOG, 0x08041DE0)         Err#12 ENOMEM
ioctl(3, ZFS_IOC_ERROR_LOG, 0x08041DE0)         Err#12 ENOMEM
ioctl(3, ZFS_IOC_ERROR_LOG, 0x08041DE0)         Err#12 ENOMEM
ioctl(3, ZFS_IOC_ERROR_LOG, 0x08041DE0)         Err#12 ENOMEM
ioctl(3, ZFS_IOC_ERROR_LOG, 0x08041DE0)         Err#12 ENOMEM

One would think it would get the message...

After a reboot, a move of the drive to another USB port on the laptop,
a zpool export of ehome and a zpool import of ehome, it is back on line
with zero errors.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] And the answer to why can't ZFS find a plugged in disk is ...

2008-07-09 Thread James Litchfield
 From an email exchange with a HAL developer...

 This comes about because I boot back and forth between Windows
 and Solaris and when on the Windows side I have the drive unplugged.
 On occasion, I forget to plug it back in before returning to Solaris.

 I wonder then, if Solaris should export removable ZFS volumes on 
 shutdown.

 Seems a strange limitation for HAL to not attempt to mount a zfs file
 system. If it's not imported the mount fails and an error can be 
 generated. If
 it's imported then everything just works. What was the reasoning for 
 this?

 There are multiple reasons. Initially, when HAL was introduced in 
 Solaris (PSARC 2005/399), ZFS did not support hotplug very well or at 
 all. Also, HAL's object model only accomodates traditional single 
 device volumes; it needs to be expanded to account for ZFS's volumes 
 than span multiple devices. There are also more operations than just 
 mount/unmount possible, and sometimes necessary, on ZFS datasets, and 
 HAL simply lacks such interfaces. The third problematic area is that 
 now that ZFS itself includes some sort of hotplug magic, there needs 
 to be coordination with HAL-based volume managers. There are also 
 potential difficulties related to different security models between 
 traditionally mounted filesystems and ZFS.

 In other words, there is nothing fundamentally preventing HAL from 
 supporting ZFS, but the amount of new design is enough for a 
 full-blown project.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why can't ZFS find a plugged in disk

2008-07-08 Thread James Litchfield
Turns out zfs mount -a will pick up the file system.

Fun question is why the OS can't mount the disk by itself.
gnome-mount is what puts up the "Can't access the disk" dialog
and whines to stdout (/dev/null in this case) about:
 ** (gnome-mount:1050): WARNING **: Mount failed for 
 /org/freedesktop/Hal/devices
 /pci_0_0/pci1179_1_1d_7/storage_7_if0_0/scsi_host0/disk1/sd1/p0
 org.freedesktop.Hal.Device.Volume.UnknownFailure : cannot open 
 '/dev/dsk/c2t0d0p
 0': invalid dataset name
gnome-mount never attempts to open, access or mount the disk. It
comes to the above conclusion after an exchange of messages with hald.

Further questions will be directed in that direction.

Jim
---

James Litchfield wrote:
 Indeed, after rebooting we see the following. You'll have to trust me that
 /ehome and /ehome/v1 are the relevant ZFS filesystems. If it makes any
 difference, this file system had been previously mounted. My memory is
 suggesting that zpool import works in this situation whenever the FS
 hasn't been previously mounted.

 Jim
 
 bash-3.2$ rmformat
   
 ld.so.1: rmformat: warning: libumem.so.1: open failed: No such file in 
 secure directories
 Looking for devices...
  1. Logical Node: /dev/rdsk/c1t0d0p0
 Physical Node: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],2/[EMAIL 
 PROTECTED]/[EMAIL PROTECTED],0
 Connected Device: MATSHITA DVD-RAM UJ-841S  1.40
 Device Type: CD Reader
 Bus: IDE
 Size: 2.8 GB
 Label: None
 Access permissions: Medium is not write protected.
  2. Logical Node: /dev/rdsk/c2t0d0p0
 Physical Node: /[EMAIL PROTECTED],0/pci1179,[EMAIL 
 PROTECTED],7/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
 Connected Device: WDC WD16 00BEAE-11UWT0
 Device Type: Removable
 Bus: USB
 Size: 152.6 GB
 Label: None
 Access permissions: Medium is not write protected.
 bash-3.2$ /usr/sbin/mount | egrep ehome
 /ehome on ehome 
 read/write/setuid/devices/nonbmand/exec/xattr/noatime/dev=2d90006 on 
 Sun Jul  6 17:08:12 2008
 /ehome/v1 on ehome/v1 
 read/write/setuid/devices/nonbmand/exec/xattr/noatime/dev=2d90007 on 
 Sun Jul  6 17:08:12 2008
 


 James Litchfield wrote:
   
  Currently on SNV92 + some BFUs but this has been going on for quite a while.

 If I boot my system without a USB drive plugged in and then plug it in,
 rmformat sees it but ZFS seems not to. If I reboot the system, ZFS
 will have no problem with using the disk.


   
 
 # zpool import
 # rmformat
 Looking for devices...
  1. Logical Node: /dev/rdsk/c1t0d0p0
 Physical Node: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],2/[EMAIL 
 PROTECTED]/[EMAIL PROTECTED],0
 Connected Device: MATSHITA DVD-RAM UJ-841S  1.40
 Device Type: DVD Reader/Writer
 Bus: IDE
 Size: 2.8 GB
 Label: None
 Access permissions: Medium is not write protected.
  2. Logical Node: /dev/rdsk/c2t0d0p0
 Physical Node: /[EMAIL PROTECTED],0/pci1179,[EMAIL 
 PROTECTED],7/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
 Connected Device: WDC WD16 00BEAE-11UWT0
 Device Type: Removable
 Bus: USB
 Size: 152.6 GB
 Label: None
 Access permissions: Medium is not write protected.

 
   
  Perhaps because I didn't label the disk before giving it to ZFS?
  If so, bad ZFS for neither complaining nor asking me
  for permission to label the disk.

 Jim
 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool misses the obvious?

2006-10-11 Thread James Litchfield

I have a zfs pool on a USB hard drive attached to my system.
I had unplugged it and when I reconnect it, zpool import does
not see the pool.

# cd /dev/dsk
# fstyp  c3t0d0s0
zfs

When I truss zpool import, it looks everywhere (seemingly) *but*
c3t0d0s0 for the pool...

The relevant portion...

stat64("/dev/dsk/c3t0d0s1", 0x08043150)         = 0
open64("/dev/dsk/c3t0d0s1", O_RDONLY)           Err#5 EIO
stat64("/dev/dsk/c1t0d0p3", 0x08043150)         = 0
open64("/dev/dsk/c1t0d0p3", O_RDONLY)           Err#16 EBUSY

This is Nevada B49, BFUed to B50 and then BFUed to
10/9/2006 nightly. I have been seeing this behavior for a while
so I don't think it is the result of a very recent change...

Thoughts?

Jim Litchfield

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool misses the obvious?

2006-10-11 Thread James Litchfield

Artem Kachitchkine wrote:



# fstyp  c3t0d0s0
zfs


s0? How is this disk labeled? From what I saw, when you put EFI label 
on a USB disk, the whole disk device is going to be d0 (without 
slice). What do these commands print:


# fstyp /dev/dsk/c3t0d0


unknown_fstyp (no matches)


# fdisk -W - /dev/rdsk/c3t0d0



/dev/rdsk/c3t0d0 default fdisk table
Dimensions:
   512 bytes/sector
63 sectors/track
   255 tracks/cylinder
  36483 cylinders

[ eliding almost all the systid cruft ]

*  238: EFI_PMBR

* Id    Act  Bhead  Bsect  Bcyl    Ehead  Esect  Ecyl    Rsect      Numsect
  238   0    255    63     1023    255    63     1023    1          586114703


# fdisk -W /dev/rdsk/c3t0d0p0


Same dimension info as above...

* Id    Act  Bhead  Bsect  Bcyl    Ehead  Esect  Ecyl    Rsect      Numsect
  238   0    255    63     1023    255    63     1023    1          586114703


-Artem.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss