Re: [zfs-discuss] ZFS commands hang after several zfs receives

2009-09-15 Thread Ian Collins

Ian Collins wrote:

I have a case open for this problem on Solaris 10u7.

The case has been identified and I've just received an IDR, which I 
will test next week.  I've been told the issue is fixed in update 8, 
but I'm not sure if there is an nv fix target.


I'll post back once I've abused a test system for a while.

The IDR I was sent appears to have fixed the problem.  I have been 
abusing the box for a couple of weeks without any lockups.  Roll on 
update 8!


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send older version?

2009-09-15 Thread Mark J Musante



On Mon, 14 Sep 2009, Marty Scholes wrote:

I really want to move back to 2009.06 and keep all of my files / 
snapshots.  Is there a way somehow to zfs send an older stream that 
2009.06 will read so that I can import that into 2009.06?


Can I even create an older pool/dataset using 122?  Ideally I would 
provision an older version of the data and simply reinstall 2009.06 and 
just import the pool created under 122.


The zfs send stream is dependent on the version of the filesystem, so the 
only way to create an older stream is to create a back-versioned 
filesystem:


zfs create -o version=N pool/filesystem

You can see what versions your system supports by using the zfs upgrade 
command:


# zfs upgrade -v
The following filesystem versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS filesystem version
 2   Enhanced directory entries
 3   Case insensitive and File system unique identifier (FUID)
 4   userquota, groupquota properties

For more information on a particular version, including supported releases, see:
http://www.opensolaris.org/os/community/zfs/version/zpl/N

Where 'N' is the version number.
#

Of course, creating a version 3 or earlier filesystem will not allow you to 
use user and group quotas, for example, but at least you'll be able to 
zfs-send that filesystem to a version of zfs that can only understand the 
earlier versions.
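
For illustration, a minimal sketch of that approach (the pool, filesystem,
snapshot, and host names below are only examples, and the receiving host must
of course understand both the stream and the pool version):

# zfs create -o version=3 tank/old
  (copy the data you want to keep into tank/old)
# zfs snapshot tank/old@migrate
# zfs send tank/old@migrate | ssh oldhost zfs receive tank/old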


It seems this would be a regular request.  If I understand it correctly, 
an older BE cannot read upgraded pools and file systems, so a boot image 
upgrade followed by a zfs and zpool upgrade would kill a shop's ability 
to fall back.  Or am I mistaken?


You're not mistaken.


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send older version?

2009-09-15 Thread Luca Morettoni

On 09/15/09 02:07 PM, Mark J Musante wrote:

zfs create -o version=N pool/filesystem


Would it be possible to implement, in a future version of ZFS, a versioned send 
command, like:


# zfs send -r2 snap ...

to send a specific release (version 2 in the example) of the metadata?

--
Luca Morettoni luca(AT)morettoni.net | OpenSolaris SCA #OS0344
Web/BLOG: http://www.morettoni.net/ | http://twitter.com/morettoni
jugUmbria founder - https://jugUmbria.dev.java.net/ | Thawte notary
ITL-OSUG leader - http://www.opensolaris.org/os/project/itl-osug/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS commands hang after several zfs receives

2009-09-15 Thread Gary Mills
On Tue, Sep 15, 2009 at 08:48:20PM +1200, Ian Collins wrote:
 Ian Collins wrote:
 I have a case open for this problem on Solaris 10u7.
 
 The case has been identified and I've just received an IDR,which I 
 will test next week.  I've been told the issue is fixed in update 8, 
 but I'm not sure if there is an nv fix target.
 
 I'll post back once I've abused a test system for a while.
 
 The IDR I was sent appears to have fixed the problem.  I have been 
 abusing the box for a couple of weeks without any lockups.  Roll on 
 update 8!

Was that IDR140221-17?  That one fixed a deadlock bug for us back
in May.

-- 
-Gary Mills--Unix Group--Computer and Network Services-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send older version?

2009-09-15 Thread Marty Scholes
 The zfs send stream is dependent on the version of
 the filesystem, so the 
 only way to create an older stream is to create a
 back-versioned 
 filesystem:
 
   zfs create -o version=N pool/filesystem
 You can see what versions your system supports by
 using the zfs upgrade 
 command:

Thanks for the feedback.  So if I have a version X pool/filesystem set, does 
that mean the way to move it back to an older version of TANK is to do 
something like:
* Create OLDTANK with version=N
* For each snapshot in TANK
** (cd tank_snapshot; tar cvf - .) | (cd old_tank; tar xvf -)
** zfs snapshot oldtank@the_snapshot_name

This seems rather involved to get my current files/snaps into an older format.  
What did I miss?
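
(For reference, a rough sketch of that loop, assuming a single source
filesystem tank/data, a back-versioned target oldtank/data, and snapshots
visible under .zfs/snapshot; all names here are hypothetical:)

for snap in $(zfs list -H -t snapshot -o name -r tank/data | cut -d@ -f2); do
    (cd /tank/data/.zfs/snapshot/$snap && tar cf - .) | (cd /oldtank/data && tar xf -)
    zfs snapshot oldtank/data@$snap
done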

Thanks again,
Marty
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS commands hang after several zfs receives

2009-09-15 Thread Dave
 The case has been identified and I've just received
 an IDR,which I will 
 test next week.  I've been told the issue is fixed in
 update 8, but I'm 
 not sure if there is an nv fix target.
 

Does anyone know if there is an OpenSolaris fix for this issue, and when?

This thread seems to be related:
http://www.opensolaris.org/jive/thread.jspa?threadID=112808
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS commands hang after several zfs receives

2009-09-15 Thread Constantin Gonzalez

Hi,

I think I've run into the same issue on OpenSolaris 2009.06.

Does anybody know when this issue will be solved in OpenSolaris?
What's the BugID?

Thanks,
   Constantin

Gary Mills wrote:

On Tue, Sep 15, 2009 at 08:48:20PM +1200, Ian Collins wrote:

Ian Collins wrote:

I have a case open for this problem on Solaris 10u7.

The case has been identified and I've just received an IDR,which I 
will test next week.  I've been told the issue is fixed in update 8, 
but I'm not sure if there is an nv fix target.


I'll post back once I've abused a test system for a while.

The IDR I was sent appears to have fixed the problem.  I have been 
abusing the box for a couple of weeks without any lockups.  Roll on 
update 8!


Was that IDR140221-17?  That one fixed a deadlock bug for us back
in May.



--
Constantin Gonzalez  Sun Microsystems GmbH, Germany
Principal Field Technologist    http://blogs.sun.com/constantin
Tel.: +49 89/4 60 08-25 91   http://google.com/search?q=constantin+gonzalez

Sitz d. Ges.: Sun Microsystems GmbH, Sonnenallee 1, 85551 Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB 161028
Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Wolf Frenkel
Vorsitzender des Aufsichtsrates: Martin Haering
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] EON 0.59.3 based on snv_122 released

2009-09-15 Thread Andre Lue
EON ZFS NAS 0.59.3 based on snv_122 released!
Embedded Operating system/Networking (EON), a RAM-based live ZFS NAS appliance, is 
released on Genunix! Many thanks to Al at Genunix.org for download hosting and 
serving the OpenSolaris community.

It is available in CIFS and Samba flavors:
EON 64-bit x86 CIFS ISO image version 0.59.3 based on snv_122
* eon-0.593-122-64-cifs.iso
* MD5: 8be86fb315b5b4929a04e0346ed0168c
* Size: ~89Mb
* Released: Monday 14-September-2009

EON 64-bit x86 Samba ISO image version 0.59.3 based on snv_122
* eon-0.593-122-64-smb.iso
* MD5: f68fefdc525a517b9c4b66028ae4347e
* Size: ~101Mb
* Released: Monday 14-September-2009

EON 32-bit x86 CIFS ISO image version 0.59.3 based on snv_122
* eon-0.593-122-32-cifs.iso
* MD5: fa71f059aa1eeefbcda597b98006ba9f
* Size: ~56Mb
* Released: Monday 14-September-2009

EON 32-bit x86 Samba ISO image version 0.59.3 based on snv_122
* eon-0.593-122-32-smb.iso
* MD5: 1b9861a780dc01da36ca17d1b4450132
* Size: ~69Mb
* Released: Monday 14-September-2009

New/Fixes:
- added 32/64-bit drivers: bnx, igb
- Workaround fix for IP validation in setup.sh
- added /usr/local/sbin for bin kit to bashrc
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send older version?

2009-09-15 Thread Lori Alt

On 09/15/09 06:27, Luca Morettoni wrote:

On 09/15/09 02:07 PM, Mark J Musante wrote:

zfs create -o version=N pool/filesystem


is possible to implement into a future version of ZFS a released 
send command, like:


# zfs send -r2 snap ...

to send a specific release (version 2 in the example) of the metadata?

I just created an RFE for this problem in general:  6882134.  I'm not 
sure the above suggestion is the best way to solve the problem, but we 
do need some kind of support for inter-version send stream readability.


Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How to map solaris disk devices to physical location for ZFS pool setup

2009-09-15 Thread David Smith
Hi, I'm setting up a ZFS environment running on a Sun x4440 plus J4400 arrays 
(similar to a 7410 environment) and I was trying to figure out the best way to 
map a disk drive's physical location (tray and slot) to the Solaris device 
c#t#d#.  Do I need to install the CAM software to do this, or is there another 
way?  I would like to understand the Solaris device to physical drive location 
mapping so that I can set up my ZFS pool mirrors/RAID properly.

I'm currently running Solaris Express build 119.

Thanks,

David
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Lightning SSD with 180,000 IOPs, 320MB/s writes

2009-09-15 Thread Neal Pollack

http://www.dailytech.com/Startup+Drops+Bombshell+Lightning+SSD+With+180k+IOPS+500320+MBs+ReadWrites/article16249.htm

Pliant Technologies just released two Lightning high performance enterprise SSDs 
that threaten to blow away the competition.  The drives use proprietary ASICs to 
deliver incredible input-output operations per second (IOPS), close to double the 
fastest of its competitors.  The Enterprise Flash Drive (EFD) LS offers 180,000 
IOPS in a 3.5" form factor, while the 2.5" form factor EFD LB claims 140,000 IOPS.

If that's not enough to sate the appetite of even the most die-hard flash drive 
enthusiast, this will be -- the drives also offer 500MB/sec read and 320MB/sec 
write rates for the 3.5" model, and 420MB/sec read and 220MB/sec write rates for 
the 2.5" model.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lightning SSD with 180,000 IOPs, 320MB/s writes

2009-09-15 Thread Roman Naumenko
http://www.plianttechnology.com/lightning_ls.php

Write Endurance Unlimited

:)

--
Roman Naumenko
ro...@frontline.ca
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-09-15 Thread Dale Ghent

On Sep 15, 2009, at 5:21 PM, Richard Elling wrote:



On Sep 15, 2009, at 1:03 PM, Dale Ghent wrote:


On Sep 10, 2009, at 3:12 PM, Rich Morris wrote:


On 07/28/09 17:13, Rich Morris wrote:

On Mon, Jul 20, 2009 at 7:52 PM, Bob Friesenhahn wrote:

Sun has opened internal CR 6859997.  It is now in Dispatched  
state at High priority.


CR 6859997 has recently been fixed in Nevada.  This fix will also  
be in Solaris 10 Update 9.
This fix speeds up the sequential prefetch pattern described in  
this CR without slowing down other prefetch patterns.  Some kstats  
have also been added to help improve the observability of ZFS file  
prefetching.


Awesome that the fix exists. I've been having a hell of a time with  
device-level prefetch on my iscsi clients causing tons of  
ultimately useless IO and have resorted to setting  
zfs_vdev_cache_max=1.


This only affects metadata. Wouldn't it be better to disable
prefetching for data?


Well, that's a surprise to me, but the zfs_vdev_cache_max=1 did  
provide relief.


Just a general description of my environment:

My setup consists of several s10uX iscsi clients which get LUNs from a  
pairs of thumpers. Each thumper pair exports identical LUNs to each  
iscsi client, and the client in turn mirrors each LUN pair inside a  
local zpool. As more space is needed on a client, a new LUN is created  
on the pair of thumpers, exported to the iscsi client, which then  
picks it up and we add a new mirrored vdev to the client's existing  
zpool.


This is so we have data redundancy across chassis, so if one thumper  
were to fail or need patching, etc, the iscsi clients just see one  
side of their mirrors drop out.


The problem that we observed on the iscsi clients was that, when  
viewing things through 'zpool iostat -v', far more IO was being  
requested from the LUs than was being registered for the vdev those  
LUs were a member of.


Being that this was an iscsi setup with stock thumpers (no SSD ZIL,  
L2ARC) serving the LUs, this apparent overhead caused far more  
unnecessary disk IO on the thumpers, thus starving out IO for data  
that was actually needed.


The working set is lots of small-ish files, entirely random IO.

If zfs_vdev_cache_max only affects metadata prefetches, which  
parameter affects data prefetches ?


I have to admit that disabling device-level prefetching was a shot in  
the dark, but it did result in drastically reduced contention on the  
thumpers.


/dale





Question though... why is a bug fix that can be a watershed for  
performance held back for so long? s10u9 won't be available for  
at least 6 months from now, and with a huge environment, I try hard  
not to live off of IDRs.


Am I the only one that thinks this is way too conservative? It's  
just maddening to know that a highly beneficial fix is out there,  
but its release is based on time rather than need. Sustaining  
really needs to be more proactive when it comes to this stuff.


/dale





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-09-15 Thread Richard Elling

Reference below...

On Sep 15, 2009, at 2:38 PM, Dale Ghent wrote:


On Sep 15, 2009, at 5:21 PM, Richard Elling wrote:



On Sep 15, 2009, at 1:03 PM, Dale Ghent wrote:


On Sep 10, 2009, at 3:12 PM, Rich Morris wrote:


On 07/28/09 17:13, Rich Morris wrote:

On Mon, Jul 20, 2009 at 7:52 PM, Bob Friesenhahn wrote:

Sun has opened internal CR 6859997.  It is now in Dispatched  
state at High priority.


CR 6859997 has recently been fixed in Nevada.  This fix will also  
be in Solaris 10 Update 9.
This fix speeds up the sequential prefetch pattern described in  
this CR without slowing down other prefetch patterns.  Some  
kstats have also been added to help improve the observability of  
ZFS file prefetching.


Awesome that the fix exists. I've been having a hell of a time  
with device-level prefetch on my iscsi clients causing tons of  
ultimately useless IO and have resorted to setting  
zfs_vdev_cache_max=1.


This only affects metadata. Wouldn't it be better to disable
prefetching for data?


Well, that's a surprise to me, but the zfs_vdev_cache_max=1 did  
provide relief.


Just a general description of my environment:

My setup consists of several s10uX iscsi clients which get LUNs from  
a pairs of thumpers. Each thumper pair exports identical LUNs to  
each iscsi client, and the client in turn mirrors each LUN pair  
inside a local zpool. As more space is needed on a client, a new LUN  
is created on the pair of thumpers, exported to the iscsi client,  
which then picks it up and we add a new mirrored vdev to the  
client's existing zpool.


This is so we have data redundancy across chassis, so if one thumper  
were to fail or need patching, etc, the iscsi clients just see one  
of side of their mirrors drop out.


The problem that we observed on the iscsi clients was that, when  
viewing things through 'zpool iostat -v', far more IO was being  
requested from the LUs than was being registered for the vdev those  
LUs were a member of.


Being that that was a iscsi setup with stock thumpers (no SSD ZIL,  
L2ARC) serving the LUs, this apparently overhead caused far more  
uneccessary disk IO on the thumpers, thus starving out IO for data  
that was actually needed.


The working set is lots of small-ish files, entirely random IO.

If zfs_vdev_cache_max only affects metadata prefetches, which  
parameter affects data prefetches ?


There are two main areas for prefetch: at the transactional object layer (DMU)
and the pooled storage level (VDEV).  zfs_vdev_cache_max works at the VDEV
level, obviously.  The DMU knows more about the context of the data and is
where the intelligent prefetching algorithm works.

You can easily observe the VDEV cache statistics with kstat:
# kstat -n vdev_cache_stats
module: zfs                             instance: 0
name:   vdev_cache_stats                class:    misc
        crtime                          38.83342625
        delegations                     14030
        hits                            105169
        misses                          59452
        snaptime                        4564628.18130739

This represents a 59% cache hit rate, which is pretty decent.  But you will
notice far fewer delegations+hits+misses than real IOPS because it is only
caching metadata.

Unfortunately, there is not a kstat for showing the DMU cache stats.
But a DTrace script can be written or, even easier, lockstat will show
if you are spending much time in the zfetch_* functions.  More details
are in the Evil Tuning Guide, including how to set zfs_prefetch_disable
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
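
For example, a quick way to look for time spent in those functions might be
(this assumes root/DTrace privileges, and the probe pattern below is only a
guess at the kernel function names):

# lockstat -kIW -D 20 sleep 30
# dtrace -n 'fbt::*zfetch*:entry { @[probefunc] = count(); }'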



I have to admit that disabling device-level prefetching was a shot  
in the dark, but it did result in drastically reduced contention on  
the thumpers.


That is a little bit surprising.  I would expect little metadata  
activity for iscsi
service. It would not be surprising for older Solaris 10 releases,  
though.

It was fixed in NV b70, circa July 2007.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-09-15 Thread Bob Friesenhahn

On Tue, 15 Sep 2009, Dale Ghent wrote:


Question though... why is bug fix that can be a watershed for 
performance be held back for so long? s10u9 won't be available for 
at least 6 months from now, and with a huge environment, I try hard 
not to live off of IDRs.


As someone who currently faces kernel panics with recent U7+ kernel 
patches (on AMD64 and SPARC) related to PCI bus upset, I expect that 
Sun will take the time to make sure that the implementation is as good 
as it can be and is thoroughly tested before release.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs not sharing nfs shares on OSOl 2009.06?

2009-09-15 Thread Tom de Waal

Hi,

I'm trying to identify why my NFS server does not work. I'm using a more 
or less core install of OSOL 2009.06 (release) and installed and 
configured an NFS server.


The issue: the NFS server won't start - it can't find any filesystems in 
/etc/dfs/sharetab. The ZFS file systems do have the sharenfs=on property 
(in fact the pool used to be on a working NV build 100).


Some investigations that I did:
zfs create -o sharenfs=os tank1/home/nfs # just an example fs
cannot share 'tank1/home/nfs': share(1M) failed
filesystem successfully create, but not shared

sharemgr list -v
default enabled nfs
zfs enabled nfs smb


svcs -a | grep nfs
disabled   19:52:51 svc:/network/nfs/client:default
disabled   21:05:36 svc:/network/nfs/server:default
online 19:53:23 svc:/network/nfs/status:default
online 19:53:25 svc:/network/nfs/nlockmgr:default
online 19:53:25 svc:/network/nfs/mapid:default
online 19:53:30 svc:/network/nfs/rquota:default
online 21:05:24 svc:/network/nfs/cbd:default

cat /etc/dfs/sharetab is empty

sharemgr start -v -P nfs zfs
Starting group zfs

share
# no response

share -F nfs /tank1/home/nfs zfs
Could not share: /tank1/home/nfs: system error

pkg list | grep nfs
SUNWnfsc   0.5.11-0.111installed  
SUNWnfsckr 0.5.11-0.111installed  
SUNWnfss   0.5.11-0.111installed  

Note: I also enabled the smb server (CIFS), which works fine (and fills 
sharetab)


Any suggestions on how to resolve this? Am I missing an IPS package or a file?

Regards,



Tom de Waal
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-09-15 Thread Dale Ghent

On Sep 15, 2009, at 6:28 PM, Bob Friesenhahn wrote:


On Tue, 15 Sep 2009, Dale Ghent wrote:


Question though... why is bug fix that can be a watershed for  
performance be held back for so long? s10u9 won't be available for  
at least 6 months from now, and with a huge environment, I try hard  
not to live off of IDRs.


As someone who currently faces kernel panics with recent U7+ kernel  
patches (on AMD64 and SPARC) related to PCI bus upset, I expect that  
Sun will take the time to make sure that the implementation is as  
good as it can be and is thoroughly tested before release.


Are you referring to the same testing that gained you this PCI panic  
feature in s10u7?


Testing is a no-brainer, and I would expect that there already exists  
some level of assurance that a CR fix is correct at the point of  
putback.


But I've dealt with many bugs, both very recently and long in the past,  
where a fix has existed in Nevada for months, even a year, before I  
got bit by the same bug in s10 and then had to go through the support  
channels to A) convince whomever I'm talking to that, yes, I'm hitting  
this bug, B) yes, there is a fix, and then C) pretty please can I have  
an IDR.


Just this week I'm wrapping up testing of an IDR which addresses an  
e1000g hardware erratum that was fixed in onnv earlier this year in  
February. For something that addresses a hardware issue on an Intel  
chipset used on shipping Sun servers, one would think that Sustaining  
would be on the ball and get that integrated ASAP. But the current  
mode of operation appears to be "no CR, no backport", which leaves us  
customers needlessly running into bugs and then begging for their  
fixes... or hearing the dreaded "oh, that fix will be available two  
updates from now". Not cool.


/dale



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs not sharing nfs shares on OSOl 2009.06?

2009-09-15 Thread Brandon Mercer
On Tue, Sep 15, 2009 at 4:31 PM, Tom de Waal tom.dew...@sun.com wrote:
 Hi,

 I'm trying to identify why my nfs server does not work. I'm using a more or
 less core install of OSOL 2009.06 (release) and installed and configured a
 nfs server.

 The issue: nfs server won't start - it can't find any filesystems in
 /etc/dfs/sharetab. the zfs file systems do have sharenfs=on property (infact
 the pool the used to be on a working NV build 100).

 Some investigations that I did:
 zfs create -o sharenfs=os tank1/home/nfs # just an example fs
 cannot share 'tank1/home/nfs': share(1M) failed
 filesystem successfully create, but not shared

 sharemgr list -v
 default enabled nfs
 zfs     enabled nfs smb


 svcs -a | grep nfs
 disabled       19:52:51 svc:/network/nfs/client:default
 disabled       21:05:36 svc:/network/nfs/server:default
 online         19:53:23 svc:/network/nfs/status:default
 online         19:53:25 svc:/network/nfs/nlockmgr:default
 online         19:53:25 svc:/network/nfs/mapid:default
 online         19:53:30 svc:/network/nfs/rquota:default
 online         21:05:24 svc:/network/nfs/cbd:default

 cat /etc/dfs/sharetab is empty

 sharemgr start -v -P nfs zfs
 Starting group zfs

 share
 # no response

 share -F nfs /tank1/home/nfs zfs
 Could not share: /tank1/home/nfs: system error

 pkg list | grep nfs
 SUNWnfsc           0.5.11-0.111    installed  
 SUNWnfsckr         0.5.11-0.111    installed  
 SUNWnfss           0.5.11-0.111    installed  

 Note: I also enabled the smb server (CIFS), which works fine (and fills
 sharetab)

 Any suggestion how to resolve this? Am I missing an  ips package or a file?


Make sure the hosts that are trying to connect resolve properly either
by DNS or by putting them in the /etc/hosts file.
Brandon
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lightning SSD with 180,000 IOPs, 320MB/s writes

2009-09-15 Thread eneal

Quoting Brian Hechinger wo...@4amlunch.net:



If you need to ask you can't afford it?  :-D

-brian
--



We can all dream, can't we?




This email and any files transmitted with it are confidential and are  
intended solely for the use of the individual or entity to whom they  
are addressed. This communication may contain material protected by  
the attorney-client privilege. If you are not the intended recipient,  
be advised that any use, dissemination, forwarding, printing or  
copying is strictly prohibited. If you have received this email in  
error, please contact the sender and delete all copies.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs not sharing nfs shares on OSOl 2009.06?

2009-09-15 Thread Trevor Pretty




Tom

What's in the NFS server log? (svcs -x)

BTW: Why are the NFS services disabled? If it had a problem I would
have expected it to be in the maintenance state.

http://docs.sun.com/app/docs/doc/819-2252/smf-5?a=view


DISABLED
    The instance is disabled. Enabling the service results in a transition to
    the offline state and eventually to the online state with all dependencies
    satisfied.

MAINTENANCE
    The instance is enabled, but not able to run. Administrative action
    (through svcadm clear) is required to move the instance out of the
    maintenance state. The maintenance state might be a temporarily reached
    state if an administrative operation is underway.
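
For instance, a quick check along those lines might look like this (using the
service name from your svcs output):

# svcs -x nfs/server
# svcadm enable -r svc:/network/nfs/server:default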
  


Trevor

Tom de Waal wrote:

  Hi,

I'm trying to identify why my nfs server does not work. I'm using a more 
or less core install of OSOL 2009.06 (release) and installed and 
configured a nfs server.

The issue: nfs server won't start - it can't find any filesystems in 
/etc/dfs/sharetab. the zfs file systems do have sharenfs=on property 
(infact the pool the used to be on a working NV build 100).

Some investigations that I did:
zfs create -o sharenfs=os tank1/home/nfs # just an example fs
cannot share 'tank1/home/nfs': share(1M) failed
filesystem successfully create, but not shared

sharemgr list -v
default enabled nfs
zfs enabled nfs smb


svcs -a | grep nfs
disabled   19:52:51 svc:/network/nfs/client:default
disabled   21:05:36 svc:/network/nfs/server:default
online 19:53:23 svc:/network/nfs/status:default
online 19:53:25 svc:/network/nfs/nlockmgr:default
online 19:53:25 svc:/network/nfs/mapid:default
online 19:53:30 svc:/network/nfs/rquota:default
online 21:05:24 svc:/network/nfs/cbd:default

cat /etc/dfs/sharetab is empty

sharemgr start -v -P nfs zfs
Starting group "zfs"

share
# no response

share -F nfs /tank1/home/nfs zfs
Could not share: /tank1/home/nfs: system error

pkg list | grep nfs
SUNWnfsc   0.5.11-0.111installed  
SUNWnfsckr 0.5.11-0.111installed  
SUNWnfss   0.5.11-0.111installed  

Note: I also enabled the smb server (CIFS), which works fine (and fills 
sharetab)

Any suggestion how to resolve this? Am I missing an  ips package or a file?

Regards,



Tom de Waal
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


-- 
Trevor Pretty | Technical Account Manager | +64 9 639 0652 | +64 21 666 161
Eagle Technology Group Ltd.
Gate D, Alexandra Park, Greenlane West, Epsom
Private Bag 93211, Parnell, Auckland
www.eagle.co.nz
This email is confidential and may be legally privileged. If received in error
please destroy and immediately notify us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Which kind of ACLs does tmpfs support ?

2009-09-15 Thread Roland Mainz

Hi!



Does anyone know out-of-the-head whether tmpfs supports ACLs - and if
yes - which type(s) of ACLs (e.g. NFSv4/ZFS, old POSIX draft ACLs
etc.) are supported by tmpfs ?



Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) roland.ma...@nrubsig.org
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 3992797
 (;O/ \/ \O;)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lightning SSD with 180,000 IOPs, 320MB/s writes

2009-09-15 Thread Marc Bevand
Neal Pollack Neal.Pollack at Sun.COM writes:
 
 Pliant Technologies just released two Lightning high performance
 enterprise SSDs that threaten to blow away the competition.

One can build an SSD-based storage device that gives you:
o 320GB of storage capacity (2.1x better than their 2.5" model: 150GB)
o 1000 MB/s sequential reads (2.4x better than their 2.5" model: 420MB/s)
o 280 MB/s sequential writes (1.3x better than their 2.5" model: 220MB/s)
o 140k random 4kB read IOPS (1.2x better than their 2.5" model: 120k)
o 26k random 4kB write IOPS (Pliant doesn't document it)
o at a price of $920 (half the MINIMUM price hinted by Pliant)

This device is a ZFS stripe of 4 80GB Intel 34nm MLC devices ($230 each).
Now, the acute reader will observe that:
o Pliant's device is SLC, mine is MLC (shorter life - but so cheap it can be
  replaced cheaply)
o The Pliant specs I quote above are from their website; some press releases
  quote slightly higher numbers
o Pliant's device fits in a single 2.5" bay, mine requires 4
o Pliant doesn't quote random 4kB *write* IOPS performance - if I were a
  potential buyer, I would ask them before buying

As a side note, I personally measure 15k random 4kB write IOPS on my Intel 34nm
MLC 80GB drive whereas Intel's official number is 6.6k - they probably give a
pessimistic number representing the performance of the drive after having been
aged.
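
(For reference, a minimal sketch of how such a four-device stripe could be
created with ZFS; the pool and device names below are only examples:)

# zpool create ssdpool c1t0d0 c1t1d0 c1t2d0 c1t3d0
# zpool status ssdpool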

-mrb


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Which kind of ACLs does tmpfs support ?

2009-09-15 Thread Mark Shellenbaum

Roland Mainz wrote:

Hi!



Does anyone know out-of-the-head whether tmpfs supports ACLs - and if
yes - which type(s) of ACLs (e.g. NFSv4/ZFS, old POSIX draft ACLs
etc.) are supported by tmpfs ?



tmpfs does not support ACLs

see _PC_ACL_ENABLED in [f]pathconf(2).  You can query the file system 
for what type of ACLs it supports.





Bye,
Roland




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [osol-discuss] Which kind of ACLs does tmpfs support ?

2009-09-15 Thread Roland Mainz
Norm Jacobs wrote:
 Roland Mainz wrote:
  Does anyone know out-of-the-head whether tmpfs supports ACLs - and if
  yes - which type(s) of ACLs (e.g. NFSv4/ZFS, old POSIX draft ACLs
  etc.) are supported by tmpfs ?
 
 I have some vague recollection that tmpfs doesn't support ACLs and it
 appears to be so...

Is there any RFE which requests the implementation of NFSv4-like ACLs
for tmpfs yet ?

 ZFS
 
 opensolaris% touch /var/tmp/bar
 opensolaris% chmod A=user:lp:r:deny /var/tmp/bar
 opensolaris%
 
 TMPFS
 
 opensolaris% touch /tmp/bar
 opensolaris% chmod A=user:lp:r:deny /tmp/bar
 chmod: ERROR: Failed to set ACL: Operation not supported
 opensolaris%

Ok... does that mean that I have to create a ZFS filesystem to actually
test ([1]) an application which modifies ZFS/NFSv4 ACLs or are there any
other options ?

[1]=The idea is to have a test module which checks whether ACL
operations work correctly; however, the testing framework must only run
as a normal, unprivileged user...



Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) roland.ma...@nrubsig.org
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 3992797
 (;O/ \/ \O;)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Which kind of ACLs does tmpfs support ?

2009-09-15 Thread Trevor Pretty




Interesting question; it takes a few minutes to test...

http://docs.sun.com/app/docs/doc/819-2252/acl-5?l=en&a=view&q=acl%285%29
http://docs.sun.com/app/docs/doc/819-2239/chmod-1?l=en&a=view

ZFS

[tp47...@norton:] df .
Filesystem size used avail capacity Mounted on
rpool/export/home/tp47565
 16G 1.2G 9.7G 11% /export/home/tp47565
[tp47...@norton:] touch file.3
[tp47...@norton:] ls -v file.3
-rw-r- 1 tp47565 staff 0 Sep 16 15:02 file.3
 0:owner@:execute:deny

1:owner@:read_data/write_data/append_data/write_xattr/write_attributes
 /write_acl/write_owner:allow
 2:group@:write_data/append_data/execute:deny
 3:group@:read_data:allow
 4:everyone@:read_data/write_data/append_data/write_xattr/execute
 /write_attributes/write_acl/write_owner:deny
 5:everyone@:read_xattr/read_attributes/read_acl/synchronize:allow
[tp47...@norton:] chmod A+user:lp:read_data:deny file.3
[tp47...@norton:] ls -v file.3 
-rw-r-+ 1 tp47565 staff 0 Sep 16 15:02 file.3
 0:user:lp:read_data:deny
 1:owner@:execute:deny

2:owner@:read_data/write_data/append_data/write_xattr/write_attributes
 /write_acl/write_owner:allow
 3:group@:write_data/append_data/execute:deny
 4:group@:read_data:allow
 5:everyone@:read_data/write_data/append_data/write_xattr/execute
 /write_attributes/write_acl/write_owner:deny
 6:everyone@:read_xattr/read_attributes/read_acl/synchronize:allow
[tp47...@norton:] 



Let's try the new ACLs on tmpfs

[tp47...@norton:] cd /tmp
[tp47...@norton:] df .
Filesystem size used avail capacity Mounted on
swap 528M 12K 528M 1% /tmp
[tp47...@norton:] grep swap /etc/vfstab 
swap  -  /tmp  tmpfs - yes -
/dev/zvol/dsk/rpool/swap -  -  swap - no -
[tp47...@norton:] 

[tp47...@norton:] touch file.3
[tp47...@norton:] ls -v file.3
-rw-r- 1 tp47565 staff 0 Sep 16 14:58 file.3
 0:user::rw-
 1:group::r--  #effective:r--
 2:mask:rwx
 3:other:---
[tp47...@norton:] 

[tp47...@norton:] chmod A+user:lp:read_data:deny file.3
chmod: ERROR: ACL type's are different
[tp47...@norton:] 

So tmpfs does not support the new ACLs.

Do I have to do the old way as well?


Roland Mainz wrote:

  Hi!



Does anyone know out-of-the-head whether tmpfs supports ACLs - and if
"yes" - which type(s) of ACLs (e.g. NFSv4/ZFS, old POSIX draft ACLs
etc.) are supported by tmpfs ?



Bye,
Roland

  


-- 
Trevor Pretty | Technical Account Manager | +64 9 639 0652 | +64 21 666 161
Eagle Technology Group Ltd.
Gate D, Alexandra Park, Greenlane West, Epsom
Private Bag 93211, Parnell, Auckland
www.eagle.co.nz
This email is confidential and may be legally privileged. If received in error
please destroy and immediately notify us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [osol-discuss] Which kind of ACLs does tmpfs support ?

2009-09-15 Thread Ian Collins

Roland Mainz wrote:

Norm Jacobs wrote:
  

Roland Mainz wrote:


Does anyone know out-of-the-head whether tmpfs supports ACLs - and if
yes - which type(s) of ACLs (e.g. NFSv4/ZFS, old POSIX draft ACLs
etc.) are supported by tmpfs ?
  

I have some vague recollection that tmpfs doesn't support ACLs snd it
appears to be so...



Is there any RFE which requests the implementation of NFSv4-like ACLs
for tmpfs yet ?

  

ZFS

opensolaris% touch /var/tmp/bar
opensolaris% chmod A=user:lp:r:deny /var/tmp/bar
opensolaris%

TMPFS

opensolaris% touch /tmp/bar
opensolaris% chmod A=user:lp:r:deny /tmp/bar
chmod: ERROR: Failed to set ACL: Operation not supported
opensolaris%



Ok... does that mean that I have to create a ZFS filesystem to actually
test ([1]) an application which modifies ZFS/NFSv4 ACLs or are there any
other options ?
Use function interposition.  I am currently updating an ACL manipulation 
application and I use mocks for the acl get/set functions.


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [osol-discuss] Which kind of ACLs does tmpfssupport?

2009-09-15 Thread Roland Mainz
Ian Collins wrote:
 Roland Mainz wrote:
  Norm Jacobs wrote:
  Roland Mainz wrote:
  Does anyone know out-of-the-head whether tmpfs supports ACLs - and if
  yes - which type(s) of ACLs (e.g. NFSv4/ZFS, old POSIX draft ACLs
  etc.) are supported by tmpfs ?
 
  I have some vague recollection that tmpfs doesn't support ACLs snd it
  appears to be so...
 
  Is there any RFE which requests the implementation of NFSv4-like ACLs
  for tmpfs yet ?
 
  ZFS
 
  opensolaris% touch /var/tmp/bar
  opensolaris% chmod A=user:lp:r:deny /var/tmp/bar
  opensolaris%
 
  TMPFS
 
  opensolaris% touch /tmp/bar
  opensolaris% chmod A=user:lp:r:deny /tmp/bar
  chmod: ERROR: Failed to set ACL: Operation not supported
  opensolaris%
 
 
  Ok... does that mean that I have to create a ZFS filesystem to actually
  test ([1]) an application which modifies ZFS/NFSv4 ACLs or are there any
  other options ?
 Use function interposition.

Umpf... the matching code is linked with -Bdirect ... AFAIK I can't
interpose library functions linked with this option, right ?



Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) roland.ma...@nrubsig.org
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 3992797
 (;O/ \/ \O;)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [osol-discuss] Which kind of ACLs does tmpfssupport?

2009-09-15 Thread Ian Collins

Roland Mainz wrote:

Ian Collins wrote:
  

Roland Mainz wrote:


Norm Jacobs wrote:
  

Roland Mainz wrote:


Does anyone know out-of-the-head whether tmpfs supports ACLs - and if
yes - which type(s) of ACLs (e.g. NFSv4/ZFS, old POSIX draft ACLs
etc.) are supported by tmpfs ?

  

I have some vague recollection that tmpfs doesn't support ACLs snd it
appears to be so...


Is there any RFE which requests the implementation of NFSv4-like ACLs
for tmpfs yet ?

  

ZFS

opensolaris% touch /var/tmp/bar
opensolaris% chmod A=user:lp:r:deny /var/tmp/bar
opensolaris%

TMPFS

opensolaris% touch /tmp/bar
opensolaris% chmod A=user:lp:r:deny /tmp/bar
chmod: ERROR: Failed to set ACL: Operation not supported
opensolaris%



Ok... does that mean that I have to create a ZFS filesystem to actually
test ([1]) an application which modifies ZFS/NFSv4 ACLs or are there any
other options ?
  

Use function interposition.



Umpf... the matching code is linked with -Bdirect ... AFAIK I can't
interpose library functions linked with this option, right ?

  
I never build test harnesses with explicit ld options (I use the C or 
C++ compiler for linking).


There is a note about interposition in the ld man page description of -B.

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-09-15 Thread Bob Friesenhahn

On Tue, 15 Sep 2009, Dale Ghent wrote:


As someone who currently faces kernel panics with recent U7+ kernel patches 
(on AMD64 and SPARC) related to PCI bus upset, I expect that Sun will take 
the time to make sure that the implementation is as good as it can be and 
is thoroughly tested before release.


Are you referring the the same testing that gained you this PCI panic feature 
in s10u7?


No.  The system worked with the kernel patch corresponding to baseline 
S10U7.  Problems started with later kernel patches (which seem to be 
much less tested).  Of course there could actually be a real hardware 
problem.


Regardless, when the integrity of our data is involved, I prefer to 
wait for more testing rather than to potentially have to recover the 
pool from backup.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intel X25-E SSD in x4500 followup

2009-09-15 Thread Paul B. Henson
On Sun, 13 Sep 2009, Eric Schrock wrote:

 Actually, it's not one byte - the entire page is garbage (as we saw in
 the dtrace output).  But I'm guessing that smartctl (and hardware SATL)
 is aborting on the first invalid record, while we keep going and blindly
 translate one form of garbage into another.

I updated to the new X25-E firmware, and I think it might have resolved the
problem. smartctl under Linux no longer gives a warning, and the diskstat
check under Solaris no longer appears to have garbage. I attached output
from smartctl, diskstat, and the dtrace script at the bottom; does it look
like the firmware is returning valid data now?

 Absolutely.  The SATA code could definitely be cleaned up to bail when
 processing an invalid record.  I can file a CR for you if you haven't
 already done so.

I haven't; even if the new firmware does resolve the problem, I like
robustness :), so it would still be nice in general for the code to be more
forgiving and perhaps just log a warning.

Thanks...

--

smartctl version 5.38 [x86_64-pc-linux-gnu] Copyright (C) 2002-8 Bruce
Allen
Home page is http://smartmontools.sourceforge.net/

=== START OF INFORMATION SECTION ===
Device Model: SSDSA2SH032G1GN INTEL
Serial Number:CVEM902600J6032HGN
Firmware Version: 045C8850
User Capacity:32,000,000,000 bytes
Device is:Not in smartctl database [for details use: -P showall]
ATA Version is:   7
ATA Standard is:  ATA/ATAPI-7 T13 1532D revision 1
Local Time is:Mon Sep 14 18:26:09 2009 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
See vendor-specific Attribute list for marginal Attributes.

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection:
Disabled.
Self-test execution status:  (  32) The self-test routine was
interrupted
by the host with a hard or soft
reset.
Total time to complete Offline
data collection: (   1) seconds.
Offline data collection
capabilities:(0x75) SMART execute Offline immediate.
No Auto Offline data collection
support.
Abort Offline collection upon new
command.
No Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities:(0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability:(0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time:(   2) minutes.
Extended self-test routine
recommended polling time:(   2) minutes.
Conveyance self-test routine
recommended polling time:(   1) minutes.

SMART Attributes Data Structure revision number: 5
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME  FLAG VALUE WORST THRESH TYPE  UPDATED
WHEN_FAILED RAW_VALUE
  3 Spin_Up_Time0x   100   000   000Old_age   Offline
In_the_past 0
  4 Start_Stop_Count0x   100   000   000Old_age   Offline
In_the_past 0
  5 Reallocated_Sector_Ct   0x0002   100   100   000Old_age   Always
-   0
  9 Power_On_Hours  0x0002   100   100   000Old_age   Always
-   68
 12 Power_Cycle_Count   0x0002   100   100   000Old_age   Always
-   151
192 Power-Off_Retract_Count 0x0002   100   100   000Old_age   Always
-   22
232 Unknown_Attribute   0x0003   100   100   010Pre-fail  Always
-   0
233 Unknown_Attribute   0x0002   099   099   000Old_age   Always
-   0
225 Load_Cycle_Count0x   200   200   000Old_age   Offline
-   50147
226 Load-in_Time0x0002   255   000   000Old_age   Always
In_the_past 4294967295
227 Torq-amp_Count  0x0002   000   000   000Old_age   Always
FAILING_NOW 281474976710655
228 Power-off_Retract_Count 0x0002   000   000   000Old_age   Always
FAILING_NOW 4294967295

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_DescriptionStatus  Remaining  LifeTime(hours)
LBA_of_first_error
# 1  Short offline   Completed without error   00%68
-
# 2  Short offline   

Re: [zfs-discuss] [osol-discuss] Which kind of ACLs does tmpfs support ?

2009-09-15 Thread Robert Thurlow

Roland Mainz wrote:


Ok... does that mean that I have to create a ZFS filesystem to actually
test ([1]) an application which modifies ZFS/NFSv4 ACLs or are there any
other options ?


By all means, test with ZFS.  But it's easy to do that:

# mkfile 64m /zpool.file
# zpool create test /zpool.file
# zfs list
test   67.5K  27.4M    18K  /test

Rob T
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intel X25-E SSD in x4500 followup

2009-09-15 Thread Eric Schrock


On Sep 15, 2009, at 8:32 PM, Paul B. Henson wrote:


I updated to the new X25-E firmware, and I think it might have  
resolved the
problem. smartctl under Linux no longer give a warning, and the  
diskstat
check under Solaris no longer appears to have garbage. I attached  
output
from smartctl, diskstat, and the dtrace script at the bottom, does  
it look

like the firmware is returning valid stuff now?


I don't have the ATA spec in front of me, but that looks like  
pretty normal output to me.  Glad to hear they addressed the issue.


- Eric



Absolutely.  The SATA code could definitely be cleaned up to bail  
when

processing an invalid record.  I can file a CR for you if you haven't
already done so.


I haven't; even if the new firmware does resolve the problem, I like
robustness :), so it would still be nice in general for the code to  
be more

forgiving and perhaps just log a warning.

Thanks...

--

smartctl version 5.38 [x86_64-pc-linux-gnu] Copyright (C) 2002-8 Bruce
Allen
Home page is http://smartmontools.sourceforge.net/

=== START OF INFORMATION SECTION ===
Device Model: SSDSA2SH032G1GN INTEL
Serial Number:CVEM902600J6032HGN
Firmware Version: 045C8850
User Capacity:32,000,000,000 bytes
Device is:Not in smartctl database [for details use: -P  
showall]

ATA Version is:   7
ATA Standard is:  ATA/ATAPI-7 T13 1532D revision 1
Local Time is:Mon Sep 14 18:26:09 2009 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
See vendor-specific Attribute list for marginal Attributes.

General SMART Values:
Offline data collection status:  (0x00) Offline data collection  
activity

   was never started.
   Auto Offline Data Collection:
Disabled.
Self-test execution status:  (  32) The self-test routine was
interrupted
   by the host with a hard or soft
reset.
Total time to complete Offline
data collection: (   1) seconds.
Offline data collection
capabilities:(0x75) SMART execute Offline  
immediate.

   No Auto Offline data collection
support.
   Abort Offline collection upon  
new

   command.
   No Offline surface scan  
supported.

   Self-test supported.
   Conveyance Self-test supported.
   Selective Self-test supported.
SMART capabilities:(0x0003) Saves SMART data before  
entering

   power-saving mode.
   Supports SMART auto save timer.
Error logging capability:(0x01) Error logging supported.
   General Purpose Logging  
supported.

Short self-test routine
recommended polling time:(   2) minutes.
Extended self-test routine
recommended polling time:(   2) minutes.
Conveyance self-test routine
recommended polling time:(   1) minutes.

SMART Attributes Data Structure revision number: 5
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME  FLAG VALUE WORST THRESH TYPE   
UPDATED

WHEN_FAILED RAW_VALUE
 3 Spin_Up_Time0x   100   000   000Old_age
Offline

In_the_past 0
 4 Start_Stop_Count0x   100   000   000Old_age
Offline

In_the_past 0
 5 Reallocated_Sector_Ct   0x0002   100   100   000Old_age
Always

-   0
 9 Power_On_Hours  0x0002   100   100   000Old_age
Always

-   68
12 Power_Cycle_Count   0x0002   100   100   000Old_age
Always

-   151
192 Power-Off_Retract_Count 0x0002   100   100   000Old_age
Always

-   22
232 Unknown_Attribute   0x0003   100   100   010Pre-fail   
Always

-   0
233 Unknown_Attribute   0x0002   099   099   000Old_age
Always

-   0
225 Load_Cycle_Count0x   200   200   000Old_age
Offline

-   50147
226 Load-in_Time0x0002   255   000   000Old_age
Always

In_the_past 4294967295
227 Torq-amp_Count  0x0002   000   000   000Old_age
Always

FAILING_NOW 281474976710655
228 Power-off_Retract_Count 0x0002   000   000   000Old_age
Always

FAILING_NOW 4294967295

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_DescriptionStatus  Remaining  LifeTime 
(hours)

LBA_of_first_error
# 1  Short offline   Completed without error   00%68
-
# 2  Short offline   Completed without error   00%68
-
# 3  

Re: [zfs-discuss] [osol-discuss] Which kind of ACLs does tmpfssupport?

2009-09-15 Thread Roland Mainz
Robert Thurlow wrote:
 Roland Mainz wrote:
 
  Ok... does that mean that I have to create a ZFS filesystem to actually
  test ([1]) an application which modifies ZFS/NFSv4 ACLs or are there any
  other options ?
 
 By all means, test with ZFS.  But it's easy to do that:
 
 # mkfile 64m /zpool.file
 # zpool create test /zpool.file
 # zfs list
 test   67.5K  27.4M18K  /test

I know... but AFAIK this requires root privileges, which the test
suite won't have...



Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) roland.ma...@nrubsig.org
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 3992797
 (;O/ \/ \O;)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intel X25-E SSD in x4500 followup

2009-09-15 Thread Paul B. Henson
On Tue, 15 Sep 2009, Eric Schrock wrote:

 I don't have the ATA spec in front of me, but that that looks like pretty
 normal output to me.  Glad to hear they addressed the issue.

Excellent; I reinstalled it in my test x4500. If no other issues show up I
can try to get my proposal to install them in production going again;
they make a huge difference for common sysadmin operations such as tarball
extraction or code development scenarios like revision control checkouts.
If I'm lucky, maybe the ability to import a pool with a dead slog will make
it into U8; that was the only other potential snag in my deployment plan,
as I'd only have one SSD in each system.


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  hen...@csupomona.edu
California State Polytechnic University  |  Pomona CA 91768
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send older version?

2009-09-15 Thread Erik Trimble

Lori Alt wrote:

On 09/15/09 06:27, Luca Morettoni wrote:

On 09/15/09 02:07 PM, Mark J Musante wrote:

zfs create -o version=N pool/filesystem


is possible to implement into a future version of ZFS a released 
send command, like:


# zfs send -r2 snap ...

to send a specific release (version 2 in the example) of the metadata?

I just created a RFE for this problem in general:  6882134.  I'm not 
sure the above suggestion is the best way to solve the problem, but we 
do need some kind of support for inter-version send stream readability.


Lori



I haven't seen this specific problem, but it occurs to me thus:

For the reverse of the original problem, where (say) I back up a 'zfs 
send' stream to tape, then later on, after upgrading my system, I want 
to get that stream back.


Does 'zfs receive' support reading a version X stream and dumping it 
into a version X+N zfs filesystem?


If not, frankly, that's a higher priority than the reverse.
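
(For concreteness, the round trip being described might look like this, with
hypothetical pool, snapshot, and file names:)

# zfs send tank/data@backup > /backup/data.stream
  ... later, after upgrading the system and pool ...
# zfs receive newtank/data < /backup/data.stream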



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send older version?

2009-09-15 Thread Ian Collins

Erik Trimble wrote:

Lori Alt wrote:

On 09/15/09 06:27, Luca Morettoni wrote:

On 09/15/09 02:07 PM, Mark J Musante wrote:

zfs create -o version=N pool/filesystem


is possible to implement into a future version of ZFS a released 
send command, like:


# zfs send -r2 snap ...

to send a specific release (version 2 in the example) of the metadata?

I just created a RFE for this problem in general:  6882134.  I'm not 
sure the above suggestion is the best way to solve the problem, but 
we do need some kind of support for inter-version send stream 
readability.


Lori



I haven't see this specific problem, but it occurs to me thus:

For the reverse of the original problem, where (say) I back up a 'zfs 
send' stream to tape, then later on, after upgrading my system, I want 
to get that stream back.


Does 'zfs receive' support reading a version X stream and dumping it 
into a version X+N zfs filesystem?


If not, frankly, that's a higher priority than the reverse.

I think you'll find it isn't officially supported, but it does work.  That is, 
there's no guarantee a new stream version will be upwardly compatible 
with an earlier one.


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [osol-discuss] Which kind of ACLs does tmpfssupport?

2009-09-15 Thread Ian Collins

Roland Mainz wrote:

Robert Thurlow wrote:
  

Roland Mainz wrote:



Ok... does that mean that I have to create a ZFS filesystem to actually
test ([1]) an application which modifies ZFS/NFSv4 ACLs or are there any
other options ?
  

By all means, test with ZFS.  But it's easy to do that:

# mkfile 64m /zpool.file
# zpool create test /zpool.file
# zfs list
test   67.5K  27.4M18K  /test



I know... but AFAIK this requires root priviledges which the test
suite won't have...
  
It is also more difficult to test error conditions. 

Unless you really have to, don't use fancy link options for a test 
harness so you can easily interpose (or simply define) mock library 
calls.  If you stick with regular dynamic linking, you can simply define 
your mock/stub library functions in your test code.  Easy.
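
As one concrete (and entirely hypothetical) example of that kind of
interposition from the shell: a shared object holding mock acl(2)/facl(2)
wrappers could be preloaded ahead of libc for an ordinarily linked test
binary (direct binding, as Roland notes, can defeat this):

$ cc -Kpic -G -o libaclmock.so acl_mock.c
$ LD_PRELOAD=./libaclmock.so ./acl_test_suite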


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss