[zfs-discuss] live upgrade incompatibility

2006-09-21 Thread Thomas Maier-Komor
Hi,

Concerning this issue I didn't find anything in the bug database, so I thought
I'd report it here...

When running Live Upgrade on a system with a ZFS pool, LU creates directories for
all ZFS filesystems in the ABE. This causes svc:/system/filesystem/local to go
into maintenance state when booting the ABE, because the zpool won't be
imported due to the existing directory structure in its mount point.

I observed this behavior on a Solaris 10 system with Live Upgrade 11.10.

Tom
 
 


Re: [zfs-discuss] live upgrade incompatibility

2006-09-21 Thread Ian Collins
Thomas Maier-Komor wrote:

Hi,

Concerning this issue I didn't find anything in the bug database, so I thought
I'd report it here...

When running Live Upgrade on a system with a ZFS pool, LU creates directories for
all ZFS filesystems in the ABE. This causes svc:/system/filesystem/local to go
into maintenance state when booting the ABE, because the zpool won't be
imported due to the existing directory structure in its mount point.

I observed this behavior on a Solaris 10 system with Live Upgrade 11.10.

  

Last time I reported this (an upgrade to build 41) I was told the only
solution was to remove the unwanted mount points before booting the new BE.
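
For anyone hitting this, a minimal sketch of that workaround (the BE name and
pool mount point below are hypothetical; double-check the paths before removing
anything):

  # mount the alternate BE, remove the empty directories LU created at the
  # pool's mount point, then unmount the BE again
  lumount newBE /mnt
  rm -rf /mnt/tank
  luumount newBE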

Ian



[zfs-discuss] Re: Re: low disk performance

2006-09-21 Thread Gino Ruopolo
 Looks like you have compression turned on?

We ran tests with compression on and off and found almost no difference.
CPU load was under 3% ...
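
For reference, compression can be checked and toggled per dataset with the
usual zfs commands (the dataset name here is just an example):

  zfs get compression tank/data
  zfs set compression=off tank/data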
 
 


[zfs-discuss] zpool wrongly recognizes disk size

2006-09-21 Thread Robert Milkowski
Hi.


S10U2 Generic_118833-23 (SPARC)

LUNs provided by Symmetrix box.

# zpool create t2 raidz c7t5d176 c7t5d177 c7t5d178 c7t5d179 c7t5d180 c7t5d181 
c7t5d182 c7t5d183 c7t5d184 c7t5d185 c7t5d186 c7t5d187 c7t5d188 c7t5d189 c7t5d190
invalid vdev specification
use '-f' to override the following errors:
raidz contains devices of different sizes
#

Just remove the last one and try again:

# zpool create t2 raidz c7t5d176 c7t5d177 c7t5d178 c7t5d179 c7t5d180 c7t5d181 
c7t5d182 c7t5d183 c7t5d184 c7t5d185 c7t5d186 c7t5d187 c7t5d188 c7t5d189
#
# zpool destroy t2
#

Hmm... let's see if the last two disks differ:

# prtvtoc /dev/rdsk/c7t5d189s0
* /dev/rdsk/c7t5d189s0 partition map
*
* Dimensions:
* 512 bytes/sector
* 17677440 sectors
* 17677373 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      4    00          34  17660989  17661022
       8     11    00    17661023     16384  17677406
# prtvtoc /dev/rdsk/c7t5d190s0
* /dev/rdsk/c7t5d190s0 partition map
*
* Dimensions:
* 512 bytes/sector
* 17677440 sectors
* 17677373 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      4    00          34  17660989  17661022
       8     11    00    17661023     16384  17677406
#

They look the same, and yet d190 is somehow different.

# truss -v all zpool create 
[...]
/1: open(/dev/dsk/c7t5d187, O_RDONLY) = 9
/1: fstat64(9, 0xFFBFB598)  = 0
/1: d=0x047C i=16781665 m=0060640 l=1  u=0 g=3 
rdev=0x008008AF
/1: at = Sep 20 18:25:27 CEST 2006  [ 1158769527 ]
/1: mt = Sep 20 18:25:27 CEST 2006  [ 1158769527 ]
/1: ct = Sep 20 18:25:27 CEST 2006  [ 1158769527 ]
/1: bsz=8192  blks=0 fs=devfs
/1: close(9)= 0
/1: open(/dev/dsk/c7t5d188, O_RDONLY) Err#2 ENOENT
/1: stat64(/dev/dsk/c7t5d188, 0xFFBFB598) Err#2 ENOENT
/1: open(/dev/dsk/c7t5d189, O_RDONLY) = 9
/1: fstat64(9, 0xFFBFB598)  = 0
/1: d=0x047C i=16781697 m=0060640 l=1  u=0 g=3 
rdev=0x008008BF
/1: at = Sep 20 18:25:27 CEST 2006  [ 1158769527 ]
/1: mt = Sep 20 18:25:27 CEST 2006  [ 1158769527 ]
/1: ct = Sep 20 18:25:27 CEST 2006  [ 1158769527 ]
/1: bsz=8192  blks=0 fs=devfs
/1: close(9)= 0
/1: open(/dev/dsk/c7t5d190, O_RDONLY) = 9
/1: fstat64(9, 0xFFBFB598)  = 0
/1: d=0x047C i=16781713 m=0060640 l=1  u=0 g=3 
rdev=0x008008C7
/1: at = Sep 20 19:04:06 CEST 2006  [ 1158771846 ]
/1: mt = Sep 20 19:04:06 CEST 2006  [ 1158771846 ]
/1: ct = Sep 20 19:04:06 CEST 2006  [ 1158771846 ]
/1: bsz=8192  blks=0 fs=devfs
/1: close(9)= 0
/1: fstat64(2, 0xFFBFA5C8)  = 0
/1: d=0x047C i=12582920 m=0020620 l=1  u=0 g=7 
rdev=0x0062
/1: at = Sep 21 11:21:54 CEST 2006  [ 1158830514 ]
/1: mt = Sep 21 11:21:54 CEST 2006  [ 1158830514 ]
/1: ct = Sep 20 18:21:33 CEST 2006  [ 1158769293 ]
/1: bsz=8192  blks=0 fs=devfs
/1: write(2,  i n v a l i d   v d e v.., 27)  = 27
/1: write(2,  u s e   ' - f '   t o  .., 43)  = 43
/1: write(2,  r a i d z, 5)   = 5
/1: write(2,c o n t a i n s   d e.., 37)  = 37
/1: _exit(1)


Hmmm... so maybe it's d188 after all.

# ls -l /dev/dsk/c7t5d187
lrwxrwxrwx   1 root other 46 Sep 20 18:25 /dev/dsk/c7t5d187 -> 
../../devices/[EMAIL PROTECTED],0/[EMAIL PROTECTED],401000/[EMAIL 
PROTECTED],bb:wd
# ls -l /dev/dsk/c7t5d188
/dev/dsk/c7t5d188: No such file or directory
# ls -l /dev/dsk/c7t5d189
lrwxrwxrwx   1 root other 46 Sep 20 18:25 /dev/dsk/c7t5d189 -> 
../../devices/[EMAIL PROTECTED],0/[EMAIL PROTECTED],401000/[EMAIL 
PROTECTED],bd:wd
# ls -l /dev/dsk/c7t5d190
lrwxrwxrwx   1 root root  46 Sep 20 19:04 /dev/dsk/c7t5d190 -> 
../../devices/[EMAIL PROTECTED],0/[EMAIL PROTECTED],401000/[EMAIL 
PROTECTED],be:wd
#


So with format -e I put on an SMI label, then an EFI label again, and now there's a symlink:

# ls -l /dev/dsk/c7t5d188
lrwxrwxrwx   1 root root  46 Sep 21 11:31 /dev/dsk/c7t5d188 -> 
../../devices/[EMAIL PROTECTED],0/[EMAIL PROTECTED],401000/[EMAIL 
PROTECTED],bc:wd

But with d190 I still can't create a pool.

# zpool create t2 raidz c7t5d176 c7t5d177 c7t5d178 c7t5d179 c7t5d180 c7t5d181 
c7t5d182 c7t5d183 c7t5d184 c7t5d185 c7t5d186 c7t5d187 c7t5d188 c7t5d189 c7t5d190
invalid vdev specification
use '-f' to override 

[zfs-discuss] Re: zpool wrongly recognizes disk size

2006-09-21 Thread Robert Milkowski
I forced creation of the raidz pool:
# zpool create symm36 raidz c7t5d161 c7t5d162 c7t5d163 c7t5d164 c7t5d165 
c7t5d166 c7t5d167 c7t5d168 c7t5d169 c7t5d170 c7t5d171 c7t5d172 c7t5d173 
c7t5d174 c7t5d175 raidz c7t5d176 c7t5d177 c7t5d178 c7t5d179 c7t5d180 c7t5d181 
c7t5d182 c7t5d183 c7t5d184 c7t5d185 c7t5d186 c7t5d187 c7t5d188 c7t5d189 
c7t5d190 raidz c7t5d191 c7t5d192 c7t5d193 c7t5d194 c7t5d195 c7t5d196 c7t5d197 
c7t5d198 c7t5d199 c7t5d200 c7t5d201 c7t5d202 c7t5d203 c7t5d204 c7t5d205 raidz 
c7t5d206 c7t5d207 c7t5d208 c7t5d209 c7t5d210 c7t5d211 c7t5d212 c7t5d213 
c7t5d214 c7t5d215 c7t5d216 c7t5d217 c7t5d218 c7t5d219 c7t5d220
invalid vdev specification
use '-f' to override the following errors:
raidz contains devices of different sizes
# zpool create -f symm36 raidz c7t5d161 c7t5d162 c7t5d163 c7t5d164 c7t5d165 
c7t5d166 c7t5d167 c7t5d168 c7t5d169 c7t5d170 c7t5d171 c7t5d172 c7t5d173 
c7t5d174 c7t5d175 raidz c7t5d176 c7t5d177 c7t5d178 c7t5d179 c7t5d180 c7t5d181 
c7t5d182 c7t5d183 c7t5d184 c7t5d185 c7t5d186 c7t5d187 c7t5d188 c7t5d189 
c7t5d190 raidz c7t5d191 c7t5d192 c7t5d193 c7t5d194 c7t5d195 c7t5d196 c7t5d197 
c7t5d198 c7t5d199 c7t5d200 c7t5d201 c7t5d202 c7t5d203 c7t5d204 c7t5d205 raidz 
c7t5d206 c7t5d207 c7t5d208 c7t5d209 c7t5d210 c7t5d211 c7t5d212 c7t5d213 
c7t5d214 c7t5d215 c7t5d216 c7t5d217 c7t5d218 c7t5d219 c7t5d220
# zdb | grep asize
asize=135565148160
asize=135565148160
asize=135565148160
asize=135565148160
# bc


So while zpool complains, it looks like once the pool is created all raidz
groups have the same size. Of course, the question is whether asize is assumed
(based on the first vdev in a given group that reported a size), or whether all
devices were actually checked for size and, from the kernel's point of view,
they are all the same size and only fstat() reports the wrong value for some of them.

???
 
 


[zfs-discuss] zfs gets confused with multiple faults involving hot spares

2006-09-21 Thread Liam McBrien
Hi there,

Not sure if this is a known bug (or even if it's a bug at all), but zfs seems 
to get confused when several consecutive temporary disk faults occur involving 
a hot spare. I couldn't find anything related to this on this forum, so here 
goes:

I'm testing this on a SunBlade 2000 hooked up to a T3 via STMS. The OS version 
is snv48.

This is a bit confusing, so bear with me. Basically, the problem occurs when 
the following happens:

- a pool is created with a hot spare
- a data disk is faulted (so that the spare steps in)
- the data disk is brought back online
- the hot spare is faulted
- the hot spare is brought back online and detached from the pool (to stop it 
from acting as a spare for the data disc that faulted)
- the original data disc is faulted again

When the above takes place, the spare ends up replacing the data disc 
completely in the pool but it still shows up as a spare. This occurs with 
mirror, raidz1 and raidz2 volumes.
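
For reference, the zpool-side steps in that sequence look roughly like the
following (pool and device names are the ones from the walkthrough below; the
faults themselves were injected with the T3's LUN masking rather than with
zpool commands):

  zpool online tank c5t60020F200A78450A918D0003BA4Ad0   # bring the data disk back
  zpool online tank c5t60020F200A7845098A27000B9ED2d0   # bring the spare back
  zpool detach tank c5t60020F200A7845098A27000B9ED2d0   # detach the spare again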

On another note, when a disk is faulted the console output says "AUTO-RESPONSE:
No automated response will occur." - shouldn't this mention that a hot spare
action will happen?



Here's a walkthrough with a 2-way mirror (I'm 'faulting' the discs by making 
them invisible to the host using the T3's LUN masking, then bringing them back 
by making them visible again):

*
***create pool***
*

[EMAIL PROTECTED] zpool create tank mirror 
c5t60020F200A78450A91BE00088501d0 c5t60020F200A78450A918D0003BA4Ad0 
spare c5t60020F200A7845098A27000B9ED2d0
[EMAIL PROTECTED]
[EMAIL PROTECTED] zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME                                   STATE     READ WRITE CKSUM
        tank                                   ONLINE       0     0     0
          mirror                               ONLINE       0     0     0
            c5t60020F200A78450A91BE00088501d0  ONLINE       0     0     0
            c5t60020F200A78450A918D0003BA4Ad0  ONLINE       0     0     0
        spares
          c5t60020F200A7845098A27000B9ED2d0    AVAIL

errors: No known data errors




***fault a data disc (bring spare in)***


t3f1:/:161> lun perm lun 4 none grp v4u2000a

console output
SUNW-MSG-ID: ZFS-8000-D3, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Thu Sep 21 11:45:13 BST 2006
PLATFORM: SUNW,Sun-Blade-1000, CSN: -, HOSTNAME: v4u-2000a-gmp03
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: 3eef63b6-061e-6039-e273-e06c9feb8475
DESC: A ZFS device failed.  Refer to http://sun.com/msg/ZFS-8000-D3 for more 
information.
AUTO-RESPONSE: No automated response will occur.
IMPACT: Fault tolerance of the pool may be compromised.
REC-ACTION: Run 'zpool status -x' and replace the bad device.

[EMAIL PROTECTED] zpool status
  pool: tank
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: resilver completed with 0 errors on Thu Sep 21 11:45:14 2006
config:

        NAME                                     STATE     READ WRITE CKSUM
        tank                                     DEGRADED     0     0     0
          mirror                                 DEGRADED     0     0     0
            c5t60020F200A78450A91BE00088501d0    ONLINE       0     0     0
            spare                                DEGRADED     0     0     0
              c5t60020F200A78450A918D0003BA4Ad0  UNAVAIL      0    62     0  cannot open
              c5t60020F200A7845098A27000B9ED2d0  ONLINE       0     0     0
        spares
          c5t60020F200A7845098A27000B9ED2d0      INUSE     currently in use

errors: No known data errors




*** Bring data disc back online ***


t3f1:/:162> lun perm lun 4 rw grp v4u2000a

[EMAIL PROTECTED] zpool status
  pool: tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: resilver completed with 0 errors on Thu Sep 21 11:48:26 2006
config:

        NAME                                     STATE     READ WRITE CKSUM
        tank                                     ONLINE       0     0     0
          mirror                                 ONLINE       0     0     0
            c5t60020F200A78450A91BE00088501d0    ONLINE       0     0     0
            spare                                ONLINE       0     0     0
  

[zfs-discuss] ZFS Available Space

2006-09-21 Thread Adhari C Mahendra
[Sol 10 6/06 x64]

I am a complete newbie with ZFS.

I have created a 30GB storage pool, which became the root pool. If I run du -hs
from the root directory, it reports only 5.4G used. But when I run df -h, it
reports 26G used. Why does this happen? How can I reclaim the space back down to 5.4G?
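
(A hedged aside, not from the original poster: ZFS's own space accounting is
usually more informative than du here, e.g.

  zfs list
  zfs list -t snapshot

since space held by snapshots or by other datasets in the pool commonly shows
up in df but not in a du of a single directory tree.)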

Thanks,
Regards
 
 


Re: [zfs-discuss] live upgrade incompatibility

2006-09-21 Thread Lori Alt



You can use the -x option (once for each ZFS file system)
to prevent lucreate from creating new copies of
them in the new BE.
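
For example (a sketch only - the BE name, root slice, and dataset mount points
are hypothetical):

  lucreate -n newBE -m /:/dev/dsk/c0t0d0s0:ufs \
      -x /tank -x /tank/home -x /tank/export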

lori

Ian Collins wrote:

Thomas Maier-Komor wrote:



Hi,

Concerning this issue I didn't find anything in the bug database, so I thought
I'd report it here...

When running Live Upgrade on a system with a ZFS pool, LU creates directories for
all ZFS filesystems in the ABE. This causes svc:/system/filesystem/local to go
into maintenance state when booting the ABE, because the zpool won't be
imported due to the existing directory structure in its mount point.

I observed this behavior on a Solaris 10 system with Live Upgrade 11.10.





Last time I reported this (an upgrade to build 41) I was told the only
solution was to remove the unwanted mount points before booting the new BE.

Ian



[zfs-discuss] Re: ZFS vs. Apple XRaid

2006-09-21 Thread Sergey
I had the same problem. Read the following article -
http://docs.info.apple.com/article.html?artnum=302780

Most likely you have "Allow Host Cache Flushing" checked. Uncheck it and try
again.
 
 


Re: [zfs-discuss] Re: drbd using zfs send/receive?

2006-09-21 Thread eric kustarz

Jakob Praher wrote:

Frank Cusack wrote:

On September 18, 2006 5:45:08 PM +0200 Jakob Praher [EMAIL PROTECTED] wrote:



huh.  How do you create a SAN with NFS?
Sorry. Okay, it would be Network Attached Storage, not the other way round.
I guess you are right.


But if we are discussing NFS for distributed storage: what performance data do
you guys have for NFSv4 as a storage node? How well does the
current Solaris NFSv4 stack interoperate with the Linux stack?

Would you go for that?


NFS performance depends on what your application is.  The Solaris
NFS stack can easily saturate a 1GbE link doing straight I/O; for
heavy metadata operations that obviously won't be the case.


If you follow the NFSv4 IETF working group, you'll see that the NFSv4
people have been meeting about every 4 months for over 6 years to do
interoperability testing.  So anyone who has a serious NFSv4 stack (Sun,
NetApp, Linux, IBM, Hummingbird, etc.) interoperates well with others
that have a serious stack.


Your best bet, as always, is to try your specific application and see if it
performs the way you want.




What about iSCSI on top of ZFS? Is that an option? I did some research on
iSCSI vs. NFSv4 once and found that the overhead of transporting
the fs metadata (in the NFSv4 case) is not the real problem for many
scenarios. Especially the COMPOUND messages should help here.


Compound messages may help in the future, but I don't think anyone has
fully taken advantage of them yet - most VFSs are the same, and it's a
little tricky given the historical parts of the kernel.


We've integrated some things in the Solaris kernel to take advantage and 
have thrown around other ideas that haven't made it quite yet.


eric





I have been using DRBD on Linux before, and now I am asking whether some of
you have experience with on-demand network filesystem mirrors.



AFAIK, Solaris does not export file change notification to userland in
any way that would be useful for on-demand filesystem replication.  From
looking at drbd for 5 minutes, it looks like the kind of notification
that windows/linux/macos provides isn't what drbd uses; it does BLOCK
LEVEL replication, and part of the software is a kernel module to export
that data to userspace.  It sounds like that distinction doesn't matter
for what you are trying to achieve, and I believe that this block-by-block
duplication isn't a great idea for zfs anyway.  It might be neat if zfs
could inform userland of each new txg.

Yes, exactly. It is a block device driver that replicates, so it
sits right underneath Linux's VFS.
Okay, that is something I wanted to know. Are there any good heartbeat
control apps for Solaris out there? I mean, if I want to have failover
(even if it is a little bit cheap), it should detect failures and react
accordingly. Switching from sender to receiver should not be difficult,
given that all you need is to make ZFS snapshots (and that is really
cheap in ZFS).




Is this merely a hack, or can it be used to create some sort of failover?

E.g. DRBD has the master/slave option, which can be configured
easily. Something like this would be nice out of the box: in case of failure
another node becomes the master, and when the former master is back again it
is simply the slave, so that both have the current data available again.


Any pointers to solutions in that area are greatly appreciated.


See if http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_now_with
comes close.

I have 2 setups, one using SC 3.2 with a SAN (both systems can access
the same filesystem; yes, it's not as redundant as a remote node and
remote filesystem, but it's for HA, not DR).  I could add another JBOD
to the SAN and configure zfs to mirror between the two enclosures to
get rid of the SPoF of the JBOD backplane/midplane, but it's not
worth it.


JBOD, SPoF - what are these things?


The other setup is using my own cron script (zfs send | zfs recv) to
send snapshots to a remote (just another server in the same rack)
host.  This is for a service that also has very high availability
requirements but where I can't afford shared storage.  I do a homegrown
heartbeat and failover thing.  I'm looking at replacing the cron script
with the SMF service linked above, but I'm in no rush since the cron job
works quite well.
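
A minimal sketch of that kind of cron job (dataset, host, and state-file names
are hypothetical, and a real script needs locking, error handling, and cleanup
of old snapshots):

  #!/bin/ksh
  FS=tank/data
  DEST=standby-host
  STATE=/var/run/last-repl-snap

  NEW=repl-$(date +%Y%m%d%H%M)
  LAST=$(cat $STATE)

  # take a new snapshot and send the delta since the last replicated one
  zfs snapshot $FS@$NEW
  zfs send -i $FS@$LAST $FS@$NEW | ssh $DEST zfs recv $FS
  echo $NEW > $STATE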

If zfs is otherwise a good solution for you, you might want to consider
if you really need true on-demand replication.  Maybe 5-minute or even
1-minute recency is good enough.  I would imagine that you don't actually
get too much better than 30s with drbd anyway, since outside of fsync()
data doesn't actually make it to disk (and then replicated by drbd)
more frequently than that for some generic application.


Okay. I think zfs is nice. I am using xfs+lvm2 on my Linux boxes so far.
That works nicely too.


SMF is the init.d replacement in Solaris, right? What would that look
like? What would SMF do but restart your app if it fails? Would you
like to have a background 

Re: [zfs-discuss] Re: zpool wrongly recognizes disk size

2006-09-21 Thread Eric Schrock
On Thu, Sep 21, 2006 at 03:39:08AM -0700, Robert Milkowski wrote:
 
 1. What would happen if sizes actually are different but it wasn't
 checked and the pool was created?  Will ZFS panic or generate r/w errors
 when accessing non-existent blocks?

No, the devices will all be created with the same size in the kernel.
The size check is solely enforced in userland.

 2. What about my case - why do format() or EMC's inq (and the Symmetrix
 itself) show that all these devices are the same size while
 fstat() shows something different?

There are some oddities w.r.t. specfs, devices, and size determination.
We play some tricks in zpool, such as keeping the device node open, to
try and get a reliable device size.  But my understanding is that it's
still possible to get the wrong answer for some devices.  I would
suggest doing a 'truss -v fstat -t open' and seeing what values actually
get returned.
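
For example, something along these lines (this just spells out the suggestion
above; the exact flag combination is an assumption, and the device names are
the ones from the original report):

  truss -t open,fstat64 -v fstat64 zpool create t2 raidz c7t5d188 c7t5d189 c7t5d190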

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re: [zfs-discuss] zfs gets confused with multiple faults involving hot spares

2006-09-21 Thread Eric Schrock
On Thu, Sep 21, 2006 at 04:25:44AM -0700, Liam McBrien wrote:
 Hi there,
 
 Not sure if this is a known bug (or even if it's a bug at all), but
 zfs seems to get confused when several consecutive temporary disk
 faults occur involving a hot spare. I couldn't find anything related
 to this on this forum, so here goes:
 
 I'm testing this on a SunBlade 2000 hooked up to a T3 via STMS. The OS 
 version is snv48.
 
 This is a bit confusing, so bear with me. Basically, the problem occurs when 
 the following happens:
 
 - a pool is created with a hot spare
 - a data disk is faulted (so that the spare steps in)
 - the data disk is brought back online
 - the hot spare is faulted
 - the hot spare is brought back online and detached from the pool (to
   stop it from acting as a spare for the data disc that faulted)
 - the original data disc is faulted again
 
 When the above takes place, the spare ends up replacing the data disc
 completely in the pool but it still shows up as a spare. This occurs
 with mirror, raidz1 and raidz2 volumes.

Yes, this sounds like a variation of a known bug that's on my queue to
look at.  Basically, the way we determine if something is a spare or not
is rather broken, and you can confuse ZFS to the point of doing the
wrong thing.  I'll take a specific look at this case and see if it's the
same underlying root cause.

 On another note, when a disk is faulted the console output says
 "AUTO-RESPONSE: No automated response will occur." - shouldn't this
 mention that a hot spare action will happen?

Yep.  I'll take care of this when I do the next phase of ZFS/FMA
integration.

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


[zfs-discuss] how do I find out if I am on a zfs filesystem

2006-09-21 Thread Jan Hendrik Mangold
This may be a dumb question, but is there a way to find out if an arbitrary
filesystem is actually a zfs filesystem? Like if I were to write a script that
needs to do different steps based on the underlying filesystem.

Any pointers welcome.

--
Jan Hendrik Mangold
Sun Microsystems
650-585-5484 (x81371)
"idle hands are the developers workshop"


Re: [zfs-discuss] how do I find out if I am on a zfs filesystem

2006-09-21 Thread Mark Shellenbaum

Jan Hendrik Mangold wrote:
This may be a dumb question, but is there a way to find out if an 
arbitrary filesystem is actually a zfs filesystem? Like if I were to 
write a script that needs to do different steps based on the underlying 
filesystem.


Any pointers welcome. 



$ df -n path
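
For example (the output format is from memory, so treat it as approximate):

  $ df -n /tank/home
  /tank/home         : zfs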




Re: [zfs-discuss] how do I find out if I am on a zfs filesystem

2006-09-21 Thread Darren J Moffat

Jan Hendrik Mangold wrote:
This may be a dumb question, but is there a way to find out if an 
arbitrary filesystem is actually a zfs filesystem? Like if I were to 
write a script that needs to do different steps based on the underlying 
filesystem.


From scripting, look at the /etc/mnttab file.

From C code, use statvfs() and look at f_fstr.
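
A minimal sketch of the /etc/mnttab approach (it assumes the path of interest
is itself a mount point; a robust script would match the longest mount-point
prefix instead):

  mp=/tank/home
  fstype=`nawk -v mp="$mp" '$2 == mp { print $3 }' /etc/mnttab`
  if [ "$fstype" = "zfs" ]; then
      echo "$mp is ZFS"
  else
      echo "$mp is $fstype (or not a mount point)"
  fi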

--
Darren J Moffat


Re: [zfs-discuss] how do I find out if I am on a zfs filesystem

2006-09-21 Thread Jan Hendrik Mangold
On Sep 21, 2006, at 1:30 PM, Mark Shellenbaum wrote:

Jan Hendrik Mangold wrote:
This may be a dumb question, but is there a way to find out if an arbitrary
filesystem is actually a zfs filesystem? Like if I were to write a script that
needs to do different steps based on the underlying filesystem.
Any pointers welcome.

$ df -n path

thank you
should have read the man page for df ... ;)

--
Jan Hendrik Mangold
Sun Microsystems
650-585-5484 (x81371)
"idle hands are the developers workshop"


Re: [zfs-discuss] Re: drbd using zfs send/receive?

2006-09-21 Thread Frank Cusack

On September 21, 2006 10:48:34 AM +0200 Jakob Praher [EMAIL PROTECTED] wrote:

Frank Cusack wrote:

On September 18, 2006 5:45:08 PM +0200 Jakob Praher [EMAIL PROTECTED] wrote:


But if we are discussing NFS for distributed storage: what performance data do
you guys have for NFSv4 as a storage node? How well does the
current Solaris NFSv4 stack interoperate with the Linux stack?
Would you go for that?


My last knowledge of Linux NFSv4 vs. Solaris NFSv4 is that they don't
interoperate.  This was about a year ago.  I've always had to force
Linux to v3.


 Are there any good heartbeat
control apps for Solaris out there?


Sun Cluster and Veritas VCS come to mind.  I use ucarp for a homegrown
solution.


JBOD, SPoF - what are these things?


Wow.  Just a Bunch of Disks.  Single Point of Failure.


SMF is the init.d replacement in Solaris, right? What would that look
like? What would SMF do but restart your app if it fails? Would you like
to have a background task running instead of kicking it off with cron?


http://opensolaris.org/os/community/smf/

-frank



Re: [zfs-discuss] Re: drbd using zfs send/receive?

2006-09-21 Thread Eric Kustarz

Frank Cusack wrote:


On September 21, 2006 10:48:34 AM +0200 Jakob Praher [EMAIL PROTECTED] wrote:


Frank Cusack wrote:

On September 18, 2006 5:45:08 PM +0200 Jakob Praher [EMAIL PROTECTED] 
wrote:



But if we are discussing NFS for distributed storage: what performance data do
you guys have for NFSv4 as a storage node? How well does the
current Solaris NFSv4 stack interoperate with the Linux stack?
Would you go for that?



My last knowledge of Linux NFSv4 vs. Solaris NFSv4 is that they don't
interoperate.  This was about a year ago.  I've always had to force
Linux to v3.



They interoperate just fine.  The only weird thing is how the Linux
people implemented their pseudo-filesystem - make sure to add fsid=0
to your exports via 'exportfs'.  So to transition from v3 to v4, they
require you to make an administrative change, or otherwise your
OpenSolaris clients won't be able to mount the Linux server.  And yes,
they are planning on fixing it.
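
For example, a hypothetical /etc/exports on the Linux server (paths and
options are illustrative only):

  /export        *(rw,sync,fsid=0)
  /export/home   *(rw,sync,nohide)

followed by 'exportfs -ra'; NFSv4 clients then mount paths relative to the
fsid=0 root, e.g. server:/home rather than server:/export/home.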


http://wiki.linux-nfs.org/index.php/Nfsv4_configuration
http://blogs.sun.com/macrbg/date/20051020

If you see something that doesn't work, let us or the linux people know.

eric



Re: [zfs-discuss] zpool always thinks it's mounted on another system

2006-09-21 Thread Eric Schrock
On Wed, Sep 20, 2006 at 12:33:50AM -0400, Rich wrote:

 Below is the (lack of) 'zdb -C' output:
 # zdb -C
 # zpool import
 
   pool: moonside
 id: 8290331144559232496
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 The pool may be active on another system, but can be imported using
 the '-f' flag.
 config:
 [snip]
 # zpool import -f moonside
 # zdb -C
 #

Hmmm, that's seriously busted.  This indicates that it wasn't able to
write out the /etc/zfs/zpool.cache file.  Can you do an 'ls -l' of this
file and the containing directory, both before and after you do the
import?
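
That is, something like:

  ls -ld /etc/zfs
  ls -l /etc/zfs/zpool.cache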

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re: [zfs-discuss] Importing ZFS filesystems across architectures...

2006-09-21 Thread Neil Perrin



Philip Brown wrote on 09/21/06 20:28:

Eric Schrock wrote:


If you're using EFI labels, yes (VTOC labels are not endian neutral).
ZFS will automatically convert endianness from the on-disk format, and
new data will be written using the native endianness, so data will be
gradually be rewritten to avoid the byteswap overhead.



now, when you say "data", you just mean metadata, right?


Yes. ZFS has no knowledge of the layout of any structured records
written by applications, so it can't byteswap user data.

Neil



Re: [zfs-discuss] Building large home file server with SATA

2006-09-21 Thread Richard Elling - PAE

Alexei Rodriguez wrote:
I currently have a linux system at home with a pair of 3ware RAID (pci) 
controllers (4 port each) with a total of 8x250GB drives attached. I would 
like to move this existing setup to zfs but the problem I keep running into 
is finding suitable SATA controllers to replace the 3ware cards with.


Bill mentioned the supermicro 8-port SATA controllers for ~$100, but these 
are PCI-X; not something I can make use of in the system... afaik (or can I?). 


Unless they break the spec, yes, it should work.  PCI cards autonegotiate, so
when they state a speed, what they mean is that they handle negotiation up to
that speed, not necessarily that they will actually transfer data at that speed.
 -- richard