Re: [zfs-discuss] Spare Won't Remove

2008-02-19 Thread Robin Guo




Hi, Christopher,

 I just verified that this issue can easily be reproduced on ONNV, and I've
filed CR #6664649 to track it.
Thanks for the report.

# zpool create -f tank c3t5d0s1 spare c3t5d0s0
# mkfile 100m /var/tmp/file
# zpool add tank spare /var/tmp/file
# zpool export tank
# format c3t5d0 ( modify c3t5d0s0 to be unassigned)
# rm /var/tmp/file
# zpool import tank
# zpool status -v tank
 pool: tank
state: ONLINE
scrub: none requested
config:

 NAME             STATE     READ WRITE CKSUM
 tank             ONLINE       0     0     0
   c3t5d0s1       ONLINE       0     0     0
 spares
   c3t5d0s0       UNAVAIL   cannot open
   /var/tmp/file  UNAVAIL   cannot open

If a spare device's status is UNAVAIL, it cannot be removed with 'zpool
remove'; even running 'zpool scrub' did not help.

# zpool remove tank c3t5d0s0
# echo $?
0
# zpool remove tank /var/tmp/file
# echo $?
0
# zpool status -v tank
 pool: tank
state: ONLINE
scrub: none requested
config:

 NAME             STATE     READ WRITE CKSUM
 tank             ONLINE       0     0     0
   c3t5d0s1       ONLINE       0     0     0
 spares
   c3t5d0s0       UNAVAIL   cannot open
   /var/tmp/file  UNAVAIL   cannot open

Christopher Gibbs wrote:

  Oops, I forgot a step. I also upgraded the zpool in snv79b before I
tried the remove. It is now version 10.

On 2/15/08, Christopher Gibbs [EMAIL PROTECTED] wrote:
  
  
The pool was exported from snv_73 and the spare was disconnected from
 the system. The OS was upgraded to snv_79b (SXDE 1/08) and the pool
 was re-imported.

 I think this weekend I'll try connecting a different drive to that
 controller and see if it will remove then.

 Thanks for your help.


 On 2/15/08, Robin Guo [EMAIL PROTECTED] wrote:
  Hi, Christopher,
 
I tried using raw files as the spare, removed the file, then ran 'zpool
   remove'; it looks like the raw files could be removed from the pool.
 
But since you are using a physical device, I suppose there might be a bug here,
   as the status of the spare device has turned to 'UNAVAIL'.
 
Could you point out the OS version you used? I might check with the latest
   ONNV nightly to see if this issue exists.
 
 
   Christopher Gibbs wrote:
I have a hot spare that was part of my zpool but is no longer
connected to the system. I can run the zpool remove command and it
returns fine but doesn't seem to do anything.
   
I have tried adding and removing spares that are connected to the
system, and that works properly. Is zpool remove failing because the disk is
no longer connected to the system?
   
# zpool remove tank c1d0s4
# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:
   
NAMESTATE READ WRITE CKSUM
tankONLINE   0 0 0
  raidz1ONLINE   0 0 0
c2d0ONLINE   0 0 0
c2d1ONLINE   0 0 0
c3d0ONLINE   0 0 0
c3d1ONLINE   0 0 0
c1t0d0  ONLINE   0 0 0
c1t1d0  ONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t3d0  ONLINE   0 0 0
spares
  c1d0s4UNAVAIL   cannot open
   
errors: No known data errors
   
   
   
   
 
 
   --
 
  Regards,
 
 
   Robin Guo, Xue-Bin Guo
   Solaris Kernel and Data Service QE,
   Sun China Engineering and Research Institute
   Phone: +86 10 82618200 +82296
   Email: [EMAIL PROTECTED]
   Blog: http://blogs.sun.com/robinguo
 
 



--
 Chris


  
  

  



-- 
Regards,

Robin Guo, Xue-Bin Guo
Solaris Kernel and Data Service QE,
Sun China Engineering and Research Institute
Phone: +86 10 82618200 +82296
Email: [EMAIL PROTECTED]
Blog: http://blogs.sun.com/robinguo



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] can't share a zfs

2008-02-19 Thread Robin Guo
Hi, Jason,

  Could you succeed with these steps?

# zpool create tank vdev
# zfs set sharenfs=on tank
# share
[EMAIL PROTECTED]  /tank   rw

  The NFS server will be enabled automatically whenever there is any
shareable dataset (sharenfs or sharesmb set to on).
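
  If the share still fails, it may also be worth checking that the NFS
services actually came online; a minimal check might be (a sketch, assuming
the stock SMF service names):

# svcs -x svc:/network/nfs/server:default
# svcadm clear svc:/network/nfs/server:default   (only if it is in maintenance)

svcs -x will print the reason and the log file if the service failed to start.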

jason wrote:
 -bash-3.2$ zfs share tank
 cannot share 'tank': share(1M) failed
 -bash-3.2$ 

 how do i figure out what's wrong?
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   


-- 
Regards,

Robin Guo, Xue-Bin Guo
Solaris Kernel and Data Service QE,
Sun China Engineering and Research Institute
Phone: +86 10 82618200 +82296
Email: [EMAIL PROTECTED]
Blog: http://blogs.sun.com/robinguo

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool shared between OSX and Solaris on a MacBook Pro

2008-02-19 Thread Darren J Moffat
Peter Karlsson wrote:
 Hi,
 
 I got my MacBook Pro set up to dual boot between Solaris and OSX, and I
 have created a zpool to use as shared storage for documents etc.
 However, I got this strange thing when trying to access the zpool from
 Solaris: only root can see it?? I created the zpool on OSX, as it
 uses an old version of the on-disk format; if I create a zpool on
 Solaris, all users can see it. Strange.

What do you mean by "only root can see it"?

All files are owned by root ?
Users don't see the datasets with zfs list ?
Users don't see the mounted filesystems with df ?
Users don't even see the pool with zpool status ?


This looks strange:

zpace  delegation   off default

The default is 'on', not 'off'.
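
You can double-check just that one property with (pool name taken from your
output):

# zpool get delegation zpace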

What build of Solaris are you using ?

Also see this:

zpace/demo  mountpoint  /Volumes/zpace/demodefault

Do you have a /Volumes directory on Solaris ?

-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Regression with ZFS best practice

2008-02-19 Thread Laurent Blume
Hi all,

I've just put my first ZFS into production, and users are complaining about 
some regressions.

One problem for them is that now they can't see all the users' directories in 
the automount point: the homedirs used to be part of a single UFS, and were 
browsable with the correct autofs option. Now, following the ZFS best practice, 
each user has his own FS - but since they are all shared separately, they're not 
browsable anymore.

Is there a way to work around that and have the same behaviour as before, i.e., 
all homedirs shown in /home, whether they're mounted or not?

TIA,

Laurent
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Regression with ZFS best practice

2008-02-19 Thread Mattias Pantzare
 I've just put my first ZFS into production, and users are complaining about 
 some regressions.

 One problem for them is that now, they can't see all the users directories in 
 the automount point: the homedirs used to be part of a single UFS, and were 
 browsable with the correct autofs option. Now, following the ZFS 
 best-practice, each user has his own FS - but being all shared separately, 
 they're not browsable anymore.

 Is there a way to work around that, and have the same behaviour as before, 
 ie, all homedirs shown in /home, whether they're mounted or not?

Remove -nobrowse from the map in auto_master.
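
For example, a hypothetical auto_master entry (map name is illustrative):

/home   auto_home   -browse

With -browse (or simply without -nobrowse), autofs lists all keys in the map
under /home even before they are mounted.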
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help with the best layout

2008-02-19 Thread Kim Tingkær
Thanks everybody :)

The solution I'm using now is the one where I back up to the USB disk and settle 
for a mirror on the two smaller disks.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel panic on arc_buf_remove_ref() assertion

2008-02-19 Thread Stuart Anderson
In this particular case will 127729-07 contain all the bug fixes in
IDR127787-12 (or later?). I have also run into a few other kernel
panics addressed in earlier revisions of this IDR but I am eager
to get back on the main Sol10 branch.

Thanks.

On Mon, Feb 18, 2008 at 08:45:46PM -0800, Prabahar Jeyaram wrote:
 Any IDR (released immediately) is interim relief (it also 
 contains the fix) provided to customers until the official patch 
 (which usually takes longer to be released) is available. The patch should 
 be considered the permanent solution.
 
 --
 Prabahar.
 
 Stuart Anderson wrote:
 Thanks for the information.
 
 How does the temporary patch 127729-07 relate to the IDR127787 (x86) which
 I believe also claims to fix this panic?
 

-- 
Stuart Anderson  [EMAIL PROTECTED]
http://www.ligo.caltech.edu/~anderson
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] filebench for Solaris 10?

2008-02-19 Thread eric kustarz

On Feb 16, 2008, at 5:26 PM, Bob Friesenhahn wrote:

 Some of us are still using Solaris 10 since it is the version of
 Solaris released and supported by Sun.  The 'filebench' software from
 SourceForge does not seem to install or work on Solaris 10.  The
 'pkgadd' command refuses to recognize the package, even when it is set
 to Solaris 2.4 mode.

 I was able to build the software, but from watching what 'make
 install' does, it installs into the private home directory of
 some hard-coded user.  The 'make package' command builds an unusable
 package similar to the one on SourceForge.

Hmm, i'll take a look...

eric


 Are the filebench maintainers aware of this problem?  Will a package
 which works for Solaris 10 (which some of us are still using) be
 posted?

 Thanks,

 Bob
 ==
 Bob Friesenhahn
 [EMAIL PROTECTED], http://www.simplesystems.org/users/ 
 bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool shared between OSX and Solaris on a MacBook Pro

2008-02-19 Thread Peter Karlsson
On Feb 19, 2008, at 17:27, Darren J Moffat wrote:

 Peter Karlsson wrote:
 Hi,
 I got my MacBook Pro set up to dual boot between Solaris and OSX, and 
 I have created a zpool to use as shared storage for documents etc.
 However, I got this strange thing when trying to access the zpool from 
 Solaris: only root can see it?? I created the zpool on OSX, as it 
 uses an old version of the on-disk format; if I create a zpool 
 on Solaris, all users can see it. Strange.

 What do you mean by "only root can see it"?

As root:
-bash-3.2# ls -ld /zpace
drwxr-xr-x   8 root root   9 Feb 19 12:28 /zpace

As myself:
 ls -ld /zpace
/zpace: Permission denied
bash-3.2$ cd /zpace
bash: cd: /zpace: Permission denied

So I created a zpool on Solaris:
 -bash-3.2# zpool create ztst /export/home/tst/a /export/home/tst/b 
/export/home/tst/c

bash-3.2$ ls -ld /ztst
drwxr-xr-x   2 root root   2 Feb 19 17:23 /ztst
bash-3.2$ cd /ztst

So that works; something is strange with the zpool I created on OSX.

 All files are owned by root ?

Nope, some files are owned by other users.

 Users don't see the datasets with zfs list ?
Can:

bash-3.2$ /sbin/zpool list
NAME    SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
zpace    61G   715M  60.3G    1%  ONLINE  -


 Users don't see the mounted filesystems with df ?
Nope:
/zpace (zpace ):124462673 blocks 124462673 files
df: cannot statvfs /zpace/DB: Permission denied
df: cannot statvfs /zpace/Download: Permission denied
df: cannot statvfs /zpace/demo: Permission denied


 Users don't even see the pool with zpool status ?
Can:
bash-3.2$ /sbin/zpool status
  pool: zpace
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zpace       ONLINE       0     0     0
          c1d0p4    ONLINE       0     0     0

errors: No known data errors
bash-3.2$ /sbin/zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
zpace            715M  59.3G   586K  /zpace
zpace/DB        29.5K  59.3G  29.5K  /zpace/DB
zpace/Download   648M  59.3G   648M  /zpace/Download
zpace/demo      66.2M  59.3G  66.2M  /zpace/demo



 This looks strange:

 zpace  delegation   off default

That's the default on OSX, as I created the file system on OSX

On Solaris it reports delegation on:
bash-3.2$ /sbin/zpool get all zpace
NAME   PROPERTY     VALUE                SOURCE
zpace  size         61G                  -
zpace  used         715M                 -
zpace  available    60.3G                -
zpace  capacity     1%                   -
zpace  altroot      -                    default
zpace  health       ONLINE               -
zpace  guid         2692302108782490543  -
zpace  version      6                    local
zpace  bootfs       -                    default
zpace  delegation   on                   default
zpace  autoreplace  off                  default
zpace  cachefile    -                    default
zpace  failmode     wait                 default


 The default is on not off.

 What build of Solaris are you using ?
snvx_b80



 Also see this:

 zpace/demo  mountpoint  /Volumes/zpace/demodefault

It was from OSX, I should note that


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 100% random writes coming out as 50/50 reads/writes

2008-02-19 Thread Glinty McFrikknuts
Thanks for the suggestions.   I re-created the pool, set the record size to 8K, 
re-created the file and increased the I/O size from the application.  It's 
nearly all writes now.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS and small block random I/O

2008-02-19 Thread Marcel Guerin
Hi,

 

We're doing some benchmarking at a customer (using IOzone) and for some
specific small block random tests, performance of their X4500 is very
poor (~1.2 MB/s aggregate throughput for a 5+1 RAIDZ).  Specifically,
the test is the IOzone multithreaded throughput test of an 8GB file size
and 8KB record size, with the server physmem'd to 2GB.

 

I noticed a couple of peculiar anomalies when investigating the slow
results.  I am wondering if Sun has any best practices, tips for
optimizing small block random I/O on ZFS, or any other documents that
might explain what we're seeing and give us guidance on how to most
effectively deploy ZFS in an environment with heavy small block random
I/O.

 

The first anomaly: Brendan Gregg's CacheKit Perl script fcachestat shows
that the segmap cache is hardly used (occasionally during the IOzone random
read benchmark, while the disks are grabbing 20MB/s in aggregate, the
segmap cache gets 100% hits for 1-3 attempts *every 10 seconds*, while
all other samples are 0% for zero attempts).  I don't know the kernel
I/O path as well as I'd like, but I tried to see requests for ZFS to
grab a file/offset block from disk by DTracing fbt::zfs_getpage
(assuming it was the ZFS equivalent of ufs_getpage) and got no hits
either.  In other words, it's as if ZFS isn't using the segmap cache.
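
Concretely, the sort of one-liner I mean is something like this (a sketch; the
fbt probe may simply not exist for ZFS, which would itself explain the lack of
hits):

# dtrace -n 'fbt::zfs_getpage:entry { @[execname] = count(); }'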

 

Secondly, DTrace scripts show the IOzone application is reading 8KB
blocks, but by the time the physical I/O happens it has ballooned into a
26KB read operation per disk.  In other words, a single 8KB read
generates 156KB of actual disk reads.  We tried changing the ZFS recsize
parameter from 128KB down to 8KB (recreated the zpool and ZFS file
system and set recsize before creating the file), and that made the
performance even worse, which has thrown us for a loop.

 

I appreciate any assistance or direction you might be able to provide!

Thanks!
Marcel

 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] can't share a zfs

2008-02-19 Thread jason
That doesn't work.
It looks like something may be corrupt; maybe something didn't get installed 
properly, or I have a bad disc. For some reason my share command doesn't have an 
-F option.

I'm going to get a new disc and reinstall everything.

Thanks for the help, everyone.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] can't share a zfs

2008-02-19 Thread jason
BTW, my machine doesn't have a DNS name, so I had to enter a phony one to get 
nfs/server online.

Can that have any ill effects?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and small block random I/O

2008-02-19 Thread Richard Elling
Start with the man page for zfs(1m), specifically, the recordsize parameter.
More discussion is available on the Solaris Internals ZFS Best Practices 
Guide.
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
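
For example (dataset name is hypothetical), recordsize only takes effect for
files created after the property is set:

# zfs set recordsize=8k tank/iozone
# zfs get recordsize tank/iozone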

-- richard

Marcel Guerin wrote:

 Hi,

 We’re doing some benchmarking at a customer (using IOzone) and for 
 some specific small block random tests, performance of their X4500 is 
 very poor (~1.2 MB/s aggregate throughput for a 5+1 RAIDZ). 
 Specifically, the test is the IOzone multithreaded throughput test of 
 an 8GB file size and 8KB record size, with the server physmem’d to 2GB.

 I noticed a couple of peculiar anomalies when investigating the slow 
 results. I am wondering if Sun has any best practices, tips for 
 optimizing small block random I/O on ZFS, or any other documents that 
 might explain what we’re seeing and give us guidance on how to most 
 effectively deploy ZFS in an environment with heavy small block random 
 I/O.

 The first anomaly, Brendan Gregg’s CacheKit Perl script fcachestat 
 shows the segmap cache is hardly used (occasionally during the IOzone 
 random read benchmark, while the disks are grabbing 20MB/s in 
 aggregate, the segmap cache gets 100% hits for 1-3 attempts **every 10 
 seconds**--while all other samples are zero% for zero attempts. I 
 don’t know the kernel I/O path as well as I’d like, but I tried to see 
 requests for ZFS to grab a file/offset block from disk by DTracing 
 fbt::zfs_getpage (assuming it was the ZFS equivalent of ufs_getpage) 
 and got no hits as well. In other words, it’s as if ZFS isn’t using 
 the segmap cache.

 Secondly, DTrace scripts show the IOzone application is reading 8KB 
 blocks, but by the time the physical I/O happens it’s ballooned into a 
 26KB read operation for each disk. In other words, a single 8KB read 
 generates 156KB of actual disk reads. We tried changing the ZFS 
 recsize parameter from 128KB down to 8KB (recreated the ZPool and ZFS 
 file system and changing recsize before creating the file), and that 
 made the performance even worse—which has thrown us for a loop.

 I appreciate any assistance or direction you might be able to provide!

 Thanks!
 Marcel

 

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] five megabytes per second with Microsoft iSCSI initiator (2.06)

2008-02-19 Thread John Tracy
Hello All-
I've been creating iSCSI targets on the following two boxes:
- Sun Ultra 40 M2 with eight 10K SATA disks
- Sun x2200 M2, with two 15K RPM SAS drives
Both were running build 82

I'm creating a zfs volume, and sharing it with zfs set shareiscsi=on 
poolname/volume. I can access the iSCSI volume without any problems, but IO is 
terribly slow, as in five megabytes per second sustained transfers.
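
For reference, the setup is essentially this (placeholder pool/volume names,
not my actual ones):

# zfs create -V 100g poolname/volume
# zfs set shareiscsi=on poolname/volume
# iscsitadm list target -v

iscsitadm shows the target that the Windows initiators then log into.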

I've tried creating an iSCSI target stored on a UFS filesystem, and get the 
same slow IO. I've tried every level of RAID available in ZFS with the same 
results.

The client machines are Windows 2003 Enterprise Edition SP2, running Microsoft 
iSCSI initiator 2.06, and Windows XP SP2, running MS iSCSI initiator 2.06. I've 
tried moving some of the client machines to the same physical switch as the 
target servers, and get the same results. I've tried another switch, and get 
the same results. I've even physically isolated the computers from my network, 
and get the same results.

I'm not sure where to go from here and what to try next. The network is all 
gigabit. I normally have the Solaris boxes in an 802.3ad LAG group, tying two 
physical NICs together, which should give me a max of 2 Gb/s of bandwidth (250 
megabytes per second). Of course, I've also tried without any LAG, with the same 
results. In short, I've tried every combination of everything I know to try, 
except using a different iSCSI client/server software stack (well, I did try 
the 2.05 version of MS's iSCSI initiator client--same result).

Here is what I'm seeing with performance logs on the Windows side:
On any of the boxes, I see the queue length for the hard disk (iSCSI target) 
go from under 1 to 600+, and then back to under 1 about every four or five 
seconds. 

On the Solaris side, I'm running iostat -xtc 1 which shows me lots of IO 
activity on the hard drives associated with my ZFS pool, and then about three 
or four seconds of pause, and then lots of activity again for a second or two, 
and then a lull again, and the cycle repeats as long as I'm doing active 
sustained IO against the iSCSI target. The output of prstat doesn't show any 
heavy processor/memory usage on the Solaris box. I'm not sure what other 
monitors to run on either side to get a better picture.

Any recommendations on how to proceed? Does anybody else use the Solaris iSCSI 
target software to export iSCSI targets to initiators running the MS iSCSI 
initiator?

Thank you-
John
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] filebench for Solaris 10?

2008-02-19 Thread Marion Hakanson
[EMAIL PROTECTED] said:
 Some of us are still using Solaris 10 since it is the version of  Solaris
 released and supported by Sun.  The 'filebench' software from  SourceForge
 does not seem to install or work on Solaris 10.  The  'pkgadd' command
 refuses to recognize the package, even when it is set  to Solaris 2.4 mode. 

I've installed and run filebench (version 1.1.0) from the SourceForge
packages on Solaris-10 here, both SPARC and x86_64, with no problems.
Looks like I downloaded it 23-Jan-2008.

Regards,

Marion


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how do I fix this situation?

2008-02-19 Thread Max Bruning
 
 Hi everybody,
 while trying to figure out what on earth has been going on
 in my u20m2 due to
 
 6636511 u20m2 bios version 1.45.1 still can't
 distinguish disks on sata channel #1,
 
 I engaged in a lot of cable swapping operations for
 the internal sata drive cables.
 
 Somehow I've managed to end up with an allegedly corrupted zpool,
 which I was unable to do a zpool replace on, and now
 I can't import it either.
 
 Its config is 2 slices on disks c3t0d0 and c3t1d0,
 but the zpool config data reckons it's really using
 c2t1d0 instead of c3t0d0.
 
 Looking at the output from zdb -l /dev/dsk/c3t0d0s0
 I can clearly see that there is a path field which is incorrect.
 How do I change this field to reflect reality? Is there some way
 I can force-import the pool and get that mapping changed?
 (zpool import -f soundandvision fails with invalid vdev
 configuration).
 
 
 
 
 LABEL 0
 
     version=9
     name='soundandvision'
     state=1
     txg=2247550
     pool_guid=7968359165854648625
     hostid=226162178
     hostname='farnarkle'
     top_guid=4672721547114476840
     guid=9244482965678353940
     vdev_tree
         type='mirror'
         id=0
         guid=4672721547114476840
         metaslab_array=14
         metaslab_shift=30
         ashift=9
         asize=199968161792
         is_log=0
         children[0]
             type='disk'
             id=0
             guid=15422701819531588989
             path='/dev/dsk/c2t1d0s0'
             devid='id1,[EMAIL PROTECTED]/a'
             phys_path='/[EMAIL PROTECTED],0/pci108e,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a'
             whole_disk=0
             DTL=85
         children[1]
             type='disk'
             id=1
             guid=9244482965678353940
             path='/dev/dsk/c3t0d0s0'
             devid='id1,[EMAIL PROTECTED]/a'
             phys_path='/[EMAIL PROTECTED],0/pci108e,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a'
             whole_disk=0
             DTL=84
 
 
 
 
 
 Thankyou in advance,
 James C. McPherson
 --
 Senior Kernel Software Engineer, Solaris
 Sun Microsystems
 http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com
 /blog
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Hi James,
Out of curiosity, did you get an answer on this?
thanks,
max
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how do I fix this situation?

2008-02-19 Thread James C. McPherson
Max Bruning wrote:
 Hi everybody,
 while trying to figure out what on earth has been
 going on in my u20m2 due to

 6636511 u20m2 bios version 1.45.1 still can't
 distinguish disks on sata channel #1,

 I engaged in a lot of cable swapping operations for
 the internal sata drive cables.
...

 Hi James,
 Out of curiosity, did you get an answer on this?


Hi Max,
nope, didn't get an answer from the list. I ended up
moving the cables back to how they were previously,
then re-importing the zpool.
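
Roughly, the re-import was nothing more exotic than (a sketch, not an exact
transcript):

# zpool import -d /dev/dsk soundandvision

i.e. letting the import rescan /dev/dsk and match the devices up by their
labels again.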



cheers,
James
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] five megabytes per second with Microsoft iSCSI initiator (2.06)

2008-02-19 Thread Marion Hakanson
[EMAIL PROTECTED] said:
 I'm creating a zfs volume, and sharing it with zfs set shareiscsi=on
 poolname/volume. I can access the iSCSI volume without any problems, but IO
 is terribly slow, as in five megabytes per second sustained transfers.
 
 I've tried creating an iSCSI target stored on a UFS filesystem, and get the
 same slow IO. I've tried every level of RAID available in ZFS with the same
 results. 

Apologies if you've already done so, but try testing your network (without
iSCSI and storage).  You can use ttcp from blastwave.org on the Solaris
side, and PCATTCP on the Windows side.  That should tell you if your
TCP/IP stacks and network hardware are in good condition.
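
Roughly (a sketch; option letters vary a little between ttcp builds, so check
the usage output):

# ttcp -r -s                        (on the Solaris box, receiving)
C:\> PCATTCP -t -s solaris-host     (on the Windows box, transmitting)

Both ends print the achieved throughput when the transfer finishes.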

Regards,

Marion


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 'du' is not accurate on zfs

2008-02-19 Thread Marion Hakanson
[EMAIL PROTECTED] said:
 It may not be relevant, but I've seen ZFS add weird delays to things too.  I
 deleted a file to free up space, but when I checked no more space was
 reported.  A second or two later the space appeared. 

Run the sync command before you do the du.  That flushes the ARC and/or
ZIL out to disk, after which you'll get accurate results.  I do the same when
timing how long it takes to create a file -- time the file creation plus the
sync to see how long it takes to get the data to nonvolatile storage.
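
For example (file name and size are arbitrary):

# ptime sh -c 'mkfile 100m /tank/testfile; sync'
# sync; du -sk /tank

The first line times creation plus the flush to stable storage; the second
gives a du figure that reflects the just-written data.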

Regards,

Marion


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] filebench for Solaris 10?

2008-02-19 Thread Bob Friesenhahn
On Tue, 19 Feb 2008, Marion Hakanson wrote:

 I've installed and run filebench (version 1.1.0) from the SourceForge
 packages on Solaris-10 here, both SPARC and x86_64, with no problems.
 Looks like I downloaded it 23-Jan-2008.

This is what I get with the filebench-1.1.0_x86_pkg.tar.gz from 
SourceForge:

# pkgadd -d .
pkgadd: ERROR: no packages were found in 
/home/bfriesen/src/benchmark/filebench
# ls
install/  pkginfo   pkgmap    reloc/

My system has the latest package management patches applied.  What am 
I missing?

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] five megabytes per second with Microsoft iSCSI initiator (2.06)

2008-02-19 Thread Bob Friesenhahn
It would be useful if people here who have used iSCSI on top of ZFS 
could share their performance experiences.  It is very easy to waste a 
lot of time trying to realize unrealistic expectations.  Hopefully 
iSCSI on top of ZFS normally manages to transfer much more than 
5MB/second!

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] five megabytes per second with Microsoft iSCSI initiator (2.06)

2008-02-19 Thread Erast Benson
http://blogs.sun.com/constantin/entry/x4500_solaris_zfs_iscsi_perfect

On Tue, 2008-02-19 at 14:44 -0600, Bob Friesenhahn wrote:
 It would be useful if people here who have used iSCSI on top of ZFS 
 could share their performance experiences.  It is very easy to waste a 
 lot of time trying to realize unrealistic expectations.  Hopefully 
 iSCSI on top of ZFS normally manages to transfer much more than 
 5MB/second!
 
 Bob
 ==
 Bob Friesenhahn
 [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] filebench for Solaris 10?

2008-02-19 Thread Marion Hakanson

[EMAIL PROTECTED] said:
 This is what I get with the filebench-1.1.0_x86_pkg.tar.gz from  SourceForge:
 
 # pkgadd -d .
 pkgadd: ERROR: no packages were found in 
 /home/bfriesen/src/benchmark/filebench
 # ls
 install/  pkginfo   pkgmap    reloc/
 . . .

Um, cd .. and pkgadd -d . again.  The package is the actual directory
that you unpacked.  Note the instructions for unpacking confused me a bit
as well.  I had expected to pkgadd -d . filebench, but pkgadd is smart
enough to scan the entire -d directory for packages.
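
In other words, using the paths from your example:

# cd /home/bfriesen/src/benchmark
# pkgadd -d .

pkgadd then finds the filebench directory itself as the package to install.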

Regards,

Marion


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 3ware support

2008-02-19 Thread Dave Johnson
Nice putrid spew of FUD regarding 3Ware cards.

Regarding the SuperMicro 8-port SATA PCI-X card, yes, that is a good 
recommendation.

-=dave
  - Original Message - 
  From: Rob Windsor 
  To: zfs-discuss@opensolaris.org 
  Sent: Tuesday, February 12, 2008 12:39 PM
  Subject: Re: [zfs-discuss] 3ware support

  3ware cards do not work (as previously specified).  Even in 
  linux/windows, they're pretty flaky -- if you had Solaris drivers, you'd 
  probably shoot yourself in a month anyway.

  I'm using the SuperMicro aoc-sat2-mv8 at the recommendation of someone 
  else on this list.  It's a JBOD card, which is perfect for ZFS.  Also, 
  you won't be paying for RAID functionality that you're wanting to 
  disable anyway.

  Rob++
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] filebench for Solaris 10?

2008-02-19 Thread Bob Friesenhahn
On Tue, 19 Feb 2008, Marion Hakanson wrote:

 # pkgadd -d .
 pkgadd: ERROR: no packages were found in 
 /home/bfriesen/src/benchmark/filebench
 # ls
 install/  pkginfo   pkgmap    reloc/
 . . .

 Um, cd .. and pkgadd -d . again.  The package is the actual directory
 that you unpacked.  Note the instructions for unpacking confused me a bit
 as well.  I had expected to pkgadd -d . filebench, but pkgadd is smart
 enough to scan the entire -d directory for packages.

Very odd. That worked.  Thank you very much!  It seems that filebench 
is unconventional in almost every possible way.  Installing it based 
on the available documentation was an exercise in frustration.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss