[zfs-discuss] Problem with snapshot

2009-01-29 Thread Abishek
Hi

I am facing some problems after rolling back a snapshot created on a pool.

Environment:
bash-3.00# uname -a
SunOS hostname 5.10 Generic_118833-17 sun4u sparc SUNW,Sun-Blade-100

ZFS version:
bash-3.00# zpool upgrade
This system is currently running ZFS version 2.
All pools are formatted using this version.

I have a zpool called testpol with 10G

This is the initial status of the pool:
bash-3.00# zpool list
NAME       SIZE    USED   AVAIL    CAP  HEALTH  ALTROOT
testpol   9.94G     90K   9.94G     0%  ONLINE  -

bash-3.00# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
testpol 84K  9.78G  24.5K  /testpol

Now I run the following commands:
bash-3.00# mkfile 10m /testpol/10megfile
bash-3.00# zfs create testpol/fs1
bash-3.00# mkfile 20m /testpol/fs1/20megfile

bash-3.00# zfs snapshot test...@snap

bash-3.00# zfs create testpol/fs2
bash-3.00# mkfile 30m /testpol/fs2/30megfile
bash-3.00# mkfile 15m /testpol/15megfile

Output of zfs list command after running the above commands
bash-3.00# zfs list    (shows that all the above commands were successfully
executed)
NAME   USED  AVAIL  REFER  MOUNTPOINT
testpol   75.2M  9.71G  25.0M  /testpol
test...@snap  23.5K  -  10.0M  -
testpol/fs1   20.0M  9.71G  20.0M  /testpol/fs1
testpol/fs2   30.0M  9.71G  30.0M  /testpol/fs2

The following are the file/file system entries under /testpol
bash-3.00# ls -lR /testpol
/testpol:
total 51222
-rw------T   1 root     root     10485760 Jan 29 13:32 10megfile
-rw------T   1 root     root     15728640 Jan 29 13:34 15megfile
drwxr-xr-x   2 root     sys             3 Jan 29 13:33 fs1
drwxr-xr-x   2 root     sys             3 Jan 29 13:34 fs2

/testpol/fs1:
total 40977
-rw------T   1 root     root     20971520 Jan 29 13:33 20megfile

/testpol/fs2:
total 61461
-rw------T   1 root     root     31457280 Jan 29 13:34 30megfile

Everything shows up correctly until I roll back to the snapshot test...@snap
bash-3.00# zfs rollback test...@snap
bash-3.00# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
testpol   60.2M  9.72G  10.0M  /testpol
test...@snap  0  -  10.0M  -
testpol/fs1   20.0M  9.72G  20.0M  /testpol/fs1
testpol/fs2   30.0M  9.72G  30.0M  /testpol/fs2

bash-3.00# ls -lR /testpol/
/testpol/:
total 20490
-rw------T   1 root     root     10485760 Jan 29 13:32 10megfile
drwxr-xr-x   2 root     root            2 Jan 29 13:32 fs1
(fs1 is now treated as a normal directory: rm fs1 will succeed, which would
fail for a mounted file system.)

/testpol/fs1:
total 0    (fs1 is empty)
As expected, fs2 (which was created after snapshot test...@snap was taken) is
not listed among the directories.

Issues after rolling back:
1. Before the snapshot was taken fs1 contained 20megfile, which is no longer
present after the snapshot is rolled back.
2. Though file system fs2 is not present on disk, zfs list still shows fs2.
3. The size reported for file system fs1 is incorrect.
4. After the rollback operation fs1 is not treated as a file system:
bash-3.00# mkfile 45m /testpol/fs1/45megfile
bash-3.00# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
testpol105M  9.68G  55.0M  /testpol
test...@snap  23.5K  -  10.0M  -
testpol/fs1   20.0M  9.68G  20.0M  /testpol/fs1
testpol/fs2   30.0M  9.68G  30.0M  /testpol/fs2
You can see that the 45m file got added to /testpol, not to fs1.

Did I do something that I shouldn't be doing?
Can anyone please explain what is wrong with this behavior?

-Abishek
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?

2009-01-29 Thread Orvar Korvar
Thanx for your answers guys. :o)  

I'm not contemplating trying this for my ZFS raid right now, as the SSD drives
are expensive at the moment. I just want to be able to answer questions when I
convert Windows/Linux people to Solaris, and am therefore collecting info. Has
anyone tried this and blogged about it? Would be cool to blog about, before and
after.

BTW, did I tell you that Solaris rocks? :o)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] need to add space to zfs pool that's part of SNDR replication

2009-01-29 Thread Jim Dunham
BJ,

 The means to specify this is sndradm -nE ...,
 where 'E' is equal enabled.

 Got it.  Nothing on the disk, nothing to replicate (yet).

:-)

 The manner in which SNDR can guarantee that
 two or more volumes are write-order consistent as they are
 replicated is to place them in the same I/O consistency group.

 Ok, so my sndradm -nE command with g [same name as first data  
 drive group] simply ADDs a set of drives to the group, it doesn't  
 stop or replace the replication on the first set of drives, and in  
 fact in keeping the same group name I even keep the two sets of  
 drives in each server in sync.  THEN I run my zfs attach command  
 on the non-bitmap slice to my existing pool.  Do I have that all  
 right?

Yes.
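
The zpool half of that last step is just the ordinary attach/add, once the new
SNDR set is enabled and in the group. A rough sketch, with pool and device
names made up rather than taken from your setup (sndradm -P should list the
configured sets and their group):

  # sndradm -P
  # zpool attach tank c2t0d0s0 c2t1d0s0   (mirror an existing device with the
                                           new non-bitmap slice)

or, if the goal is extra capacity rather than redundancy:

  # zpool add tank c2t1d0s0

zpool attach only adds a mirror side; zpool add is what actually grows the pool.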



 Thanks!
 -- 
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Jim Dunham
Engineering Manager
Storage Platform Software Group
Sun Microsystems, Inc.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] strange performance drop of solaris 10/zfs

2009-01-29 Thread Kevin Maguire
Hi

We have been using a Solaris 10 system (Sun-Fire-V245) for a while as
our primary file server. This is based on Solaris 10 06/06, plus
patches up to approx May 2007. It is a production machine, and until
about a week ago has had few problems.

Attached to the V245 is a SCSI RAID array, which presents one LUN to
the OS.  On this LUN is a zpool (tank), and within that 300+ ZFS file
systems (one per user, for automounted home directories). The system is
connected to our LAN via gigabit Ethernet; most of our NFS clients
have just a 100FD network connection.

In recent days performance of the file server seems to have gone off a
cliff.  I don't know how to troubleshoot what might be wrong. Typical
zpool iostat 120 output is shown below. If I run truss -D df I see
each call to statvfs64(/tank/bla) takes 2-3 seconds. The RAID itself
is healthy, and all disks are reporting as OK.

I have tried to establish if some client or clients are thrashing the
server via nfslogd, but without seeing anything obvious.  Is there
some kind of per-zfs-filesystem iostat?

End users are reporting that just saving small files can take 5-30 seconds.
prstat/top shows no process using significant CPU load.  The system
has 8GB of RAM, vmstat shows nothing interesting.

I have another V245, with the same SCSI/RAID/ZFS setup and a similar
(though somewhat lighter) load of data and users, and this problem is NOT
apparent there.

Suggestions?
Kevin

Thu Jan 29 11:32:29 CET 2009
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        2.09T   640G     10     66   825K  1.89M
tank        2.09T   640G     39      5  4.80M   126K
tank        2.09T   640G     38      8  4.73M   191K
tank        2.09T   640G     40      5  4.79M   126K
tank        2.09T   640G     39      5  4.73M   170K
tank        2.09T   640G     40      3  4.88M  43.8K
tank        2.09T   640G     40      3  4.87M  54.7K
tank        2.09T   640G     39      4  4.81M   111K
tank        2.09T   640G     39      9  4.78M   134K
tank        2.09T   640G     37      5  4.61M   313K
tank        2.09T   640G     39      3  4.89M  32.8K
tank        2.09T   640G     35      7  4.31M   629K
tank        2.09T   640G     28     13  3.47M  1.43M
tank        2.09T   640G      5     51   433K  4.27M
tank        2.09T   640G      6     51   450K  4.23M
tank        2.09T   639G      5     52   543K  4.23M
tank        2.09T   640G     26     57  3.00M  1.15M
tank        2.09T   640G     39      6  4.82M   107K
tank        2.09T   640G     39      3  4.80M   119K
tank        2.09T   640G     38      8  4.64M   295K
tank        2.09T   640G     40      7  4.82M   102K
tank        2.09T   640G     43      5  4.79M   103K
tank        2.09T   640G     39      4  4.73M   193K
tank        2.09T   640G     39      5  4.87M  62.1K
tank        2.09T   640G     40      3  4.88M  49.3K
tank        2.09T   640G     40      3  4.80M   122K
tank        2.09T   640G     42      4  4.83M  82.0K
tank        2.09T   640G     40      3  4.89M  42.0K
...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS with Rational (ClearCase VOB) Supported ???

2009-01-29 Thread Vikash Gupta
Hi,

 

Is anyone using ZFS with IBM Rational ClearCase (VOBs)?

I am looking at how to take a VOB backup using a ZFS snapshot and how to
restore it.

 

Is this a supported config from the vendor's side?

 

Any help would be appreciated.

 

Rgds

Vikash  

 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] strange performance drop of solaris 10/zfs

2009-01-29 Thread Mike Gerdts
On Thu, Jan 29, 2009 at 6:13 AM, Kevin Maguire k.c.f.magu...@gmail.com wrote:
 I have tried to establish if some client or clients are thrashing the
 server via nfslogd, but without seeing anything obvious.  Is there
 some kind of per-zfs-filesystem iostat?

The following should work in bash or ksh, so long as the list of zfs
mount points does not overflow the maximum command line length.

$ fsstat $(zfs list -H -o mountpoint | nawk '$1 !~ /^(\/|-|legacy)$/') 5

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?

2009-01-29 Thread Orvar Korvar
Imagine 10 SATA discs in raidz2 and one or two SSD drives as a cache. Each 
Vista client reaches ~90MB/sec to the server, using Solaris CIFS and iSCSI. So 
you want to use iSCSI with this. (iSCSI allows ZFS to export a file system as a 
native SCSI disc to a desktop PC. The desktop PC can mount this iSCSI disk as a 
native SCSI disk and format it with NTFS - on top of ZFS with snapshots, etc.
This is done from the desktop PC's BIOS.)

Now, you install WinXP on the iSCSI ZFS volume, and clone it with a snapshot.
Then you can boot from the clone, with the iSCSI volume on a desktop PC. Thus,
your desktop PC doesn't need any hard drive at all. It uses the iSCSI volume on
the ZFS server as a native SCSI disk, which has WinXP installed.

This way, you can deploy lots of desktop PCs in an instant, using the cloned
WinXP snapshot. And if there are problems, e.g. a virus, just destroy the clone
and create a new one in one second.
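
The snapshot/clone part of that workflow is just a couple of commands; a rough
sketch with made-up dataset names:

  # zfs snapshot tank/xp-master@golden        (freeze the installed XP volume)
  # zfs clone tank/xp-master@golden tank/pc01 (instant, space-efficient copy
                                               for one desktop)
  # zfs destroy tank/pc01                     (virus? throw the clone away...)
  # zfs clone tank/xp-master@golden tank/pc01 (...and re-provision it)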




Has anyone done this? Does the SSD provide extra speed? Any stories to share?
(I've read this iSCSI suggestion on a blog with a black background color; it's
not my idea.)

If I add an SSD disk as a cache, can I remove it? No? Will there be problems if
I remove it? Can I exchange it for a bigger one? (Trying to convert these
Windows people.)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS extended ACL

2009-01-29 Thread Fredrich Maney
On Wed, Jan 28, 2009 at 11:16 PM, Christine Tran
christine.t...@gmail.com wrote:
 On Wed, Jan 28, 2009 at 11:07 PM, Christine Tran
 christine.t...@gmail.com wrote:
 What is wrong with this?

 # chmod -R A+user:webservd:add_file/write_data/execute:allow /var/apache
 chmod: invalid mode: `A+user:webservd:add_file/write_data/execute:allow'
 Try `chmod --help' for more information.


 Never mind. /usr/gnu/bin/chmod  Can we lose GNU, gee louise it is
 OpenSolaris2008.11 isn't it.  ls [-v|-V] is messed up as well.
 Blarhghgh!


There was a very long discussion about this a couple of weeks ago on
one of the lists. Apparently the decision was made to put the GNU
utilities in the default system-wide path ahead of the native Sun utilities
in order to make it easier to attract Linux users by making the
environment more familiar to them. It was apparently assumed that
longtime Solaris users would quickly and easily figure out what the
problem was and adjust the PATH to their liking.
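
For anyone hitting the same error, the native utilities are still there; a
quick workaround (paths as shipped on OpenSolaris 2008.11):

$ /usr/bin/chmod -R A+user:webservd:add_file/write_data/execute:allow /var/apache
$ /usr/bin/ls -V /var/apache

or put the SunOS directories back in front of /usr/gnu/bin for the session:

$ export PATH=/usr/bin:/usr/sbin:$PATH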

Whether or not this was a good idea remains to be determined. From my
vantage, it makes about as much sense to make Solaris more Linux-y in
order to attract more Linux users while annoying current Solaris users
as it does to change the speedometers on cars in the US to metric in
order to make it more comfortable for immigrants and tourists to
drive.

fpsm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS extended ACL

2009-01-29 Thread Christine Tran
 There was a very long discussion about this a couple of weeks ago on
 one of the lists. Apparently the decision was made to put the GNU
 utilities in default system wide path before the native Sun utilities
 in order to make it easier to attract Linux users by making the
 environment more familiar to them. It was apparently assumed that
 longtime Solaris users would quickly and easily figure out what the
 problem was and adjust the PATH to their liking.


Well, it's OpenSOLARIS, comes with nice OpenSOLARIS goodies.  Oooh ZFS
ACL! *rub hands together*  Goody!  chmod A+user... *gets slapped*  ls
-V *gets slapped*  OpenSOLARIS sucks!

It's a quibble, but the way things are, it pleases no one, I don't
think the casual Linux user moseying over to OpenSolaris would like
the scenario above.

CT
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?

2009-01-29 Thread kristof
Kebabber,

You can't expose zfs filesystems over iSCSI.

You only can expose ZFS volumes (raw volumes) over iscsi.
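
For the archives, exposing a ZFS volume over iSCSI on that vintage of
OpenSolaris is roughly (size and names made up):

# zfs create -V 20g tank/xpvol
# zfs set shareiscsi=on tank/xpvol
# iscsitadm list target          (the new target should show up here)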

PS: 2 weeks ago I did a few tests, using filebench.

I saw little to no improvement using a 32GB Intel X25E SSD.

Maybe this is because filebench is flushing the cache in between tests.

I also compared iSCSI boot time (using gPXE as the boot loader).

We are using a raidz storage pool (4 disks). Here again, adding the X25E as a
cache device did not speed up the boot process. So I did not see a real
improvement.

PS: We have 2 master volumes (xp and vista) which we clone to provision 
additional guests. 

I'm now waiting for new SSD disks (STEC Zeus 18GB and STEC Mach 100GB), since
those are used in the Sun 7000 product line. I hope they perform better.

Kristof
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?

2009-01-29 Thread Greg Mason
How were you running this test?

were you running it locally on the machine, or were you running it over 
something like NFS?

What is the rest of your storage like? just direct-attached (SAS or 
SATA, for example) disks, or are you using a higher-end RAID controller?

-Greg

kristof wrote:
 Kebabber,
 
 You can't expose zfs filesystems over iSCSI.
 
 You only can expose ZFS volumes (raw volumes) over iscsi.
 
 PS: 2 weeks ago I did a few tests, using filebench.
 
 I saw little to no improvement using a 32GB Intel X25E SSD.
 
 Maybe this is because filebench is flushing the cache in between tests.
 
 I also compared iscsi boot time (using gpxe as boot loader) ,
 
 We are using raidz storagepool (4disks). here again, adding the X25E as cache 
 device did not speedup the boot proccess. So I did not see real improvement. 
 
 PS: We have 2 master volumes (xp and vista) which we clone to provision 
 additional guests. 
 
 I'm now waiting for new SSD disks (STEC Zeus 18GB en STEC Mach 100GB.), since 
 those are used in SUN 7000 product. I hope they perform better.
 
 Kristof
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Problems with '..' on ZFS pool

2009-01-29 Thread Dustin Marquess
Hello.  I have a really weird problem with a ZFS pool on one machine, and it's 
only with 1 pool on that machine (the other pool is fine).  Non-root users
cannot access '..' on any directories where the pool is mounted, e.g.:

/a1000 on a1000 
read/write/setuid/devices/nonbmand/exec/xattr/noatime/dev=4010002 on Wed Jan 28 
20:55:38 2009
/home on a1000/home 
read/write/setuid/devices/nonbmand/exec/xattr/noatime/dev=4010005 on Wed Jan 28 
20:55:39 2009

$ ls -ld /
drwxr-xr-x  28 root root1024 Jan 29 10:09 /
$ ls -ld /home
drwxr-xr-x  11 root sys   11 Jan  9 14:49 /home
$ ls -ld /home/..
/home/..: Permission denied
$ ls -ld /a1000/..
/a1000/..: Permission denied
$ ls -V /
total 1065
drwxr-xr-x   2 root sys2 Dec  1 14:39 a1000
owner@:--:--:deny
owner@:rwxp---A-W-Co-:--:allow
group@:-w-p--:--:deny
group@:r-x---:--:allow
 everyone@:-w-p---A-W-Co-:--:deny
 everyone@:r-x---a-R-c--s:--:allow
drwxr-xr-x   6 root sys6 Aug 20 11:47 appl
owner@:--:--:deny
owner@:rwxp---A-W-Co-:--:allow
group@:-w-p--:--:deny
group@:r-x---:--:allow
 everyone@:-w-p---A-W-Co-:--:deny
 everyone@:r-x---a-R-c--s:--:allow
lrwxrwxrwx   1 root root   9 Jun 18  2008 bin - ./usr/bin
drwxr-xr-x   3 root sys  512 Jan 28 18:49 boot
 0:user::rwx
 1:group::r-x   #effective:r-x
 2:mask:r-x
 3:other:r-x
drwxr-xr-x  19 root sys 7680 Jan 28 20:54 dev
 0:user::rwx
 1:group::r-x   #effective:r-x
 2:mask:r-x
 3:other:r-x
drwxr-xr-x   2 root sys  512 Jan 28 20:53 devices
 0:user::rwx
 1:group::r-x   #effective:r-x
 2:mask:r-x
 3:other:r-x
drwxr-xr-x  80 root sys 4608 Jan 29 09:40 etc
 0:user::rwx
 1:group::r-x   #effective:r-x
 2:mask:r-x
 3:other:r-x
drwxr-xr-x   2 root sys  512 Jun 18  2008 export
 0:user::rwx
 1:group::r-x   #effective:r-x
 2:mask:r-x
 3:other:r-x
drwxr-xr-x  11 root sys   11 Jan  9 14:49 home
owner@:--:--:deny
owner@:rwxp---A-W-Co-:--:allow
group@:-w-p--:--:deny
group@:r-x---:--:allow
 everyone@:-w-p---A-W-Co-:--:deny
 everyone@:r-x---a-R-c--s:--:allow
drwxr-xr-x  15 root sys  512 Jun 18  2008 kernel
 0:user::rwx
 1:group::r-x   #effective:r-x
 2:mask:r-x
 3:other:r-x
drwxr-xr-x   7 root bin 5632 Jan 28 19:50 lib
 0:user::rwx
 1:group::r-x   #effective:r-x
 2:mask:r-x
 3:other:r-x
drwx--   2 root root8192 Jun 18  2008 lost+found
 0:user::rwx
 1:group::---   #effective:---
 2:mask:---
 3:other:---
drwxr-xr-x   2 root sys  512 Jun 18  2008 mnt
 0:user::rwx
 1:group::r-x   #effective:r-x
 2:mask:r-x
 3:other:r-x
dr-xr-xr-x   2 root root 512 Jun 18  2008 net
 0:user::r-x
 1:group::r-x   #effective:r-x
 2:mask:r-x
 3:other:r-x
-rw-r--r--   1 root root   0 Jun 18  2008 noautoshutdown
 0:user::rw-
 1:group::r--   #effective:r--
 2:mask:r--
 3:other:r--
drwxr-xr-x   7 root sys7 Jan 28 15:50 opt
owner@:--:--:deny
owner@:rwxp---A-W-Co-:--:allow
group@:-w-p--:--:deny
group@:r-x---:--:allow
 everyone@:-w-p---A-W-Co-:--:deny
 everyone@:r-x---a-R-c--s:--:allow
drwxr-xr-x  40 root sys 1536 Jun 18  2008 platform
 0:user::rwx
 1:group::r-x   #effective:r-x
 2:mask:r-x
 3:other:r-x
drwxr-xr-x   2 root sys2 Jul 29  2008 pool
owner@:--:--:deny
owner@:rwxp---A-W-Co-:--:allow
group@:-w-p--:--:deny
group@:r-x---:--:allow
 everyone@:-w-p---A-W-Co-:--:deny
 everyone@:r-x---a-R-c--s:--:allow
dr-xr-xr-x  76 root root  480032 Jan 29 10:23 proc
 0:user::r-x
 1:group::r-x   #effective:r-x
 2:mask:rwx
 3:other:r-x
drwxr-x---  12 root root1024 Jan 29 10:09 root
 0:user::rwx
 1:group::r-x   #effective:r-x
 2:mask:r-x
 3:other:---
drwxr-xr-x   2 root sys 1024 Jan 28 19:37 sbin
 0:user::rwx
 1:group::r-x   #effective:r-x
 2:mask:r-x
 3:other:r-x
-rw-rw-rw-   1 root root1576 Oct 15 12:40 sybinit.err
 0:user::rw-
 1:group::rw-   #effective:rw-
 2:mask:rw-
 

Re: [zfs-discuss] Problems with '..' on ZFS pool

2009-01-29 Thread Dustin Marquess
Forgot to add that a truss shows:

14960:  lstat64(/a1000/.., 0xFFBFF7E8)Err#13 EACCES 
[file_dac_search]

ppriv shows the error in UFS:

$ ppriv -e -D -s -file_dac_search ls -ld /a1000/..
ls[15022]: missing privilege file_dac_search (euid = 100, syscall = 216) 
needed at ufs_iaccess+0x110
/a1000/..: Permission denied

However seeing as it only happens for mounts on that 1 ZFS pool, it being a UFS 
problem seems highly unlikely.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] destroy means destroy, right?

2009-01-29 Thread dick hoogendijk
On Wed, 28 Jan 2009 20:16:37 -0500
Christine Tran christine.t...@gmail.com wrote:

 Everybody respects rm -f *.
+1

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS sxce snv105 ++
+ All that's really worth doing is what we do for others (Lewis Carrol)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Problems with '..' on ZFS pool

2009-01-29 Thread Mark Shellenbaum
Dustin Marquess wrote:
 Forgot to add that a truss shows:
 
 14960:  lstat64(/a1000/.., 0xFFBFF7E8)Err#13 EACCES 
 [file_dac_search]
 
 ppriv shows the error in UFS:
 
 $ ppriv -e -D -s -file_dac_search ls -ld /a1000/..
 ls[15022]: missing privilege file_dac_search (euid = 100, syscall = 216) 
 needed at ufs_iaccess+0x110
 /a1000/..: Permission denied
 
 However seeing as it only happens for mounts on that 1 ZFS pool, it being a 
 UFS problem seems highly unlikely.

unmount the file system and look at the permissions on the UFS mountpoint
directory /a1000.  They will probably be 0700 or something similar.
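
Something along these lines should show it and fix it (dataset names taken
from the mount output earlier in the thread; the 755 mode is just an example):

# zfs umount a1000/home
# zfs umount a1000
# ls -ld /a1000 /home        (inspect the underlying UFS directories)
# chmod 755 /a1000 /home
# zfs mount -a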

   -Mark
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS extended ACL

2009-01-29 Thread Matt Harrison
Christine Tran wrote:
 There was a very long discussion about this a couple of weeks ago on
 one of the lists. Apparently the decision was made to put the GNU
 utilities in default system wide path before the native Sun utilities
 in order to make it easier to attract Linux users by making the
 environment more familiar to them. It was apparently assumed that
 longtime Solaris users would quickly and easily figure out what the
 problem was and adjust the PATH to their liking.

 
 Well, it's OpenSOLARIS, comes with nice OpenSOLARIS goodies.  Oooh ZFS
 ACL! rub hands together  Goody!  chmod A+user... gets slapped  ls
 -V gets slapped  OpenSOLARIS sucks!
 
 It's a quibble, but the way things are, it pleases no one, I don't
 think the casual Linux user moseying over to OpenSolaris would like
 the scenario above.

As a former long-time Linux user who came over for ZFS, I totally
agree. I much preferred to learn the Solaris way and do things right
rather than pretend it was still Linux.

Now I'm comfortable working on both despite their differences, and I'm 
sure I can perform tasks a lot better for it.

Matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Problems with '..' on ZFS pool

2009-01-29 Thread Dustin Marquess
Bingo, they were 0750.  Thanks so much, that was the one thing I didn't think 
of.  I thought I was going crazy :).

Thanks again!
-Dustin
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] firewire card?

2009-01-29 Thread Alan Perry
I am pretty sure that Oxford 911 is a family of parts.  The current Oxford 
Firewire parts are the 934 and 936 families.  It appears that the Oxford 911 
was commonly used in drive enclosures.

The most troublesome part in my experience is the Initio INIC-1430. It does not 
get along with scsa1394 at all.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS extended ACL

2009-01-29 Thread Ross
Yeah, breaking functionality in one of the main reasons people are going to be 
trying OpenSolaris is just dumb... really, really dumb.

One thing Linux, Windows, OS/X, etc all get right is that they're pretty easy 
to use right out of the box.  They're all different, but they all do their own 
jobs pretty well.  So what do Sun do, make OpenSolaris harder to use...
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS extended ACL

2009-01-29 Thread Toby Thain

On 29-Jan-09, at 2:17 PM, Ross wrote:

 Yeah, breaking functionality in one of the main reasons people are  
 going to be trying OpenSolaris is just dumb... really, really dumb.

 One thing Linux, Windows, OS/X, etc all get right is that they're  
 pretty easy to use right out of the box.  They're all different,  
 but they all do their own jobs pretty well.  So what do Sun do,  
 make OpenSolaris harder to use...

I've not used OpenSolaris (yet), but I did spend quite a bit of time  
in Solaris 10 after using Linux and OS X for a long time.

Matt's approach does work. Learn the differences, don't resent that
they're not the same, and get comfortable in both.

Given the massive success of GNU based systems (Linux, OS X, *BSD)  
one can hardly fault OpenSolaris for taking this direction (I assume  
this is part of Ian Murdock's brief to make it a more appealing O/S  
option). Maybe Sun needs to make this optional, leaving the  
traditional SYSV flavour for the kneejerk anti-GNU elements (a  
bizarre phenomenon given SunOS' pure open source genesis ...  
Berkeley, Bill Joy, BSD, yadda yadda).

--Toby


 -- 
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Import Issues Btw 101 104

2009-01-29 Thread Daniel Templeton
I just noticed that my previous assessment was not quite accurate.  It's 
even stranger.  Let's try again.

On S10/b101, I have two pools:

  zpool list
NAME SIZE   USED  AVAILCAP  HEALTH  ALTROOT
double   172G   116G  56.3G    67%  ONLINE  -
single   516G  61.3G   455G    11%  ONLINE  -

double is mirrored.  single is not.  single has the following datasets:

  zfs list -r single
NAME USED  AVAIL  REFER  MOUNTPOINT
single  61.3G   447G  1.50K  none
single/backup   1.55G   447G  1.93M  /export/storage
single/backup/cari18K   447G18K  /export/home/cari/storage
single/backup/dant  1.54G   447G  1.54G  /export/home/dant/storage
single/misc 54.0G   447G  24.5K  none
single/misc/dant54.0G   447G  24.5K  none
single/misc/dant/music  38.1G   447G  38.1G  /export/home/dant/Music
single/misc/dant/vbox   15.9G   447G  15.9G  /export/home/dant/VMs
single/share  18K   447G18K  /export/share
single/software 5.73G   447G  5.73G  /usr/local

After doing a zpool import from b104, I have:

  zpool list
NAME SIZE   USED  AVAILCAP  HEALTH  ALTROOT
double   516G  61.3G   455G    11%  ONLINE  -

and zpool import -D shows me that single is unavailable because it's 
corrupted because zpool thinks it's on the wrong partition.  Same as 
what I wrote before.  What I didn't notice before was a subtle 
difference in the zpool list.  It's more obvious if we look at the zfs list:

  zfs list
NAME USED  AVAIL  REFER  MOUNTPOINT
double  61.3G   447G  1.50K  none
double/backup   1.55G   447G  1.93M  /export/storage
double/backup/cari18K   447G18K  /export/home/cari/storage
double/backup/dant  1.54G   447G  1.54G  /export/home/dant/storage
double/misc 54.0G   447G  24.5K  none
double/misc/dant54.0G   447G  24.5K  none
double/misc/dant/music  38.1G   447G  38.1G  /export/home/dant/Music
double/misc/dant/vbox   15.9G   447G  15.9G  /export/home/dant/VMs
double/share  18K   447G18K  /export/share
double/software 5.73G   447G  5.73G  /usr/local

Notice that those are the datasets from single on b101!  So, not only 
does b104 have the location of the single pool wrong, but it thinks the 
double pool is where the single pool is.  WTF?

Daniel

Daniel Templeton wrote:
 Hi!

 I have a system with S10, b101, and b104 installed in the same 
 partition on disk 1.  On disks 1 and 2 in different partitions, I also 
 created ZFS pools from S10 to be imported by b101 and b104.  Pool 1 is 
 mirrored.  Pool 2 is not.  About every three builds, I replace the 
 oldest build with the latest available and switch to that as the 
 default OS.  Up through 101 everything was fine.  I just installed 
 104, however, and when I do the zpool import, the mirrored pool is 
 picked up just fine, but the non-mirrored pool shows up as corrupted.  
 zpool import -D shows me that zpool on b104 thinks that the pool is on 
 c1t1d0p2, whereas S10, b101, and fdisk agree that it's actually on 
 c1t1d0p3.  How do I convince zpool on b104 where my non-mirrored pool 
 really is?  I'm a bit afraid to do an import -f because that pool is 
 my home directory, and I'd really rather not screw it up.  And I don't 
 see where the -f will change zpool's mind about where the pool 
 actually lives.  Maybe import -c from the default location?  Where is 
 the default location?  Any thoughts or suggestions?

 Thanks!
 Daniel

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] destroy means destroy, right?

2009-01-29 Thread Orvar Korvar
Maybe add a timer or something? When doing a destroy, ZFS will keep 
everything for 1 minute or so, before overwriting. This way the disk won't get 
as fragmented. And if you had fat fingers and typed wrong, you have up to one 
minute to undo. That will catch 80% of the mistakes?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] destroy means destroy, right?

2009-01-29 Thread Jacob Ritorto
I like that, although it's a bit of an intelligence insulter.  Reminds
me of the old pdp11 install (
http://charles.the-haleys.org/papers/setting_up_unix_V7.pdf ) --

This step makes an empty file system.
6.The next thing to do is to restore the data onto the new empty
file system. To do this you respond
  to the ':' printed in the last step with
(bring in the program restor)
: tm(0,4)  ('ht(0,4)' for TU16/TE16)
tape? tm(0,5)  (use 'ht(0,5)' for TU16/TE16)
disk? rp(0,0)(use 'hp(0,0)' for RP04/5/6)
Last chance before scribbling on disk. (you type return)
(the tape moves, perhaps 5-10 minutes pass)
end of tape
Boot
:
  You now have a UNIX root file system.




On Thu, Jan 29, 2009 at 3:42 PM, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
 Maybe add a timer or something? When doing a destroy, ZFS will keep 
 everything for 1 minute or so, before overwriting. This way the disk won't 
 get as fragmented. And if you had fat fingers and typed wrong, you have up to 
 one minute to undo. That will catch 80% of the mistakes?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] write cache and cache flush

2009-01-29 Thread Greg Mason
So, I'm still beating my head against the wall, trying to find our 
performance bottleneck with NFS on our Thors.

We've got a couple Intel SSDs for the ZIL, using 2 SSDs as ZIL devices. 
Cache flushing is still enabled, as are the write caches on all 48 disk 
devices.

What I'm thinking of doing is disabling all write caches, and disabling 
the cache flushing.
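
For reference, the usual knobs on Solaris/OpenSolaris of this vintage (a
sketch; double-check before trying this on production data). To stop ZFS
issuing cache flushes, add this to /etc/system and reboot:

  set zfs:zfs_nocacheflush = 1

(or flip it live:  echo zfs_nocacheflush/W0t1 | mdb -kw)

The per-disk write cache is toggled interactively with:

  format -e  ->  cache  ->  write_cache  ->  disable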

What would this mean for the safety of data in the pool?

And, would this even do anything to address the performance issue?

-Greg
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS extended ACL

2009-01-29 Thread Volker A. Brandt
 Given the massive success of GNU based systems (Linux, OS X, *BSD)

Ouch!  Neither OSX nor *BSD are GNU-based.  They do ship with
GNU-related things but that's been a long and hard battle.

And the massive success has really only been Linux due to brilliant
PR (and FUD about *BSD) and OS X due to Apple's commercial approach 
to BSD.


Regards -- Volker
--

Volker A. Brandt  Consulting and Support for Sun Solaris
Brandt & Brandt Computer GmbH   WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de
Handelsregister: Amtsgericht Bonn, HRB 10513  Schuhgröße: 45
Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS extended ACL

2009-01-29 Thread Ian Collins
Volker A. Brandt wrote:
 Given the massive success of GNU based systems (Linux, OS X, *BSD)
 

 Ouch!  Neither OSX nor *BSD are GNU-based. 

Not here, please.  This topic has been beaten to death on the discuss
list, where it's topical.

-- 
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] destroy means destroy, right?

2009-01-29 Thread Nathan Kroenert
For years, we resisted stopping rm -r / because people should know 
better, until *finally* someone said - you know what - that's just dumb.

Then, just like that, it was fixed.

Yes - This is Unix.

Yes - Provide the gun and allow the user to point it.

Just don't let it go off in their groin or when pointed at their foot, 
or provide at least some protection when they do.

Having even limited amount of restore capability will provide the user 
with steel capped boots and a codpiece. It won't protect them from 
herpes or fungus but it might deflect the bullet.

On 01/30/09 08:19, Jacob Ritorto wrote:
 I like that, although it's a bit of an intelligence insulter.  Reminds
 me of the old pdp11 install (
 http://charles.the-haleys.org/papers/setting_up_unix_V7.pdf ) --
 
 This step makes an empty file system.
 6.The next thing to do is to restore the data onto the new empty
 file system. To do this you respond
   to the ':' printed in the last step with
 (bring in the program restor)
 : tm(0,4)  ('ht(0,4)' for TU16/TE16)
 tape? tm(0,5)  (use 'ht(0,5)' for TU16/TE16)
 disk? rp(0,0)(use 'hp(0,0)' for RP04/5/6)
 Last chance before scribbling on disk. (you type return)
 (the tape moves, perhaps 5-10 minutes pass)
 end of tape
 Boot
 :
   You now have a UNIX root file system.
 
 
 
 
 On Thu, Jan 29, 2009 at 3:42 PM, Orvar Korvar
 knatte_fnatte_tja...@yahoo.com wrote:
 Maybe add a timer or something? When doing a destroy, ZFS will keep 
 everything for 1 minute or so, before overwriting. This way the disk won't 
 get as fragmented. And if you had fat fingers and typed wrong, you have up 
 to one minute to undo. That will catch 80% of the mistakes?
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 


//
// Nathan Kroenert  nathan.kroen...@sun.com //
// Senior Systems Engineer  Phone:  +61 3 9869 6255 //
// Global Systems Engineering   Fax:+61 3 9869 6288 //
// Level 7, 476 St. Kilda Road  //
// Melbourne 3004   VictoriaAustralia   //
//
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Extended attributes in ZFS

2009-01-29 Thread Peter Reiher
Does ZFS currently support actual use of extended attributes?  If so, where can 
I find some documentation that describes how to use them?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Extended attributes in ZFS

2009-01-29 Thread David Magda
On Jan 29, 2009, at 18:02, Peter Reiher wrote:

 Does ZFS currently support actual use of extended attributes?  If  
 so, where can I find some documentation that describes how to use  
 them?

Your best bet would probably be:

http://search.sun.com/docs/index.jsp?qt=zfs+extended+attributes

Is there anything in particular you're wondering about?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Extended attributes in ZFS

2009-01-29 Thread Nicolas Williams
On Thu, Jan 29, 2009 at 03:02:50PM -0800, Peter Reiher wrote:
 Does ZFS currently support actual use of extended attributes?  If so, where 
 can I find some documentation that describes how to use them?

man runat.1 openat.2 etcetera

Nico
-- 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Extended attributes in ZFS

2009-01-29 Thread Cindy . Swearingen
Hi Peter,

Yes, ZFS supports extended attributes.

The runat.1 and fsattr.5 man pages are good places
to start.
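
A quick illustration of the "extended attribute file" interface those pages
describe (the file name here is made up):

  $ touch myfile
  $ runat myfile cp /etc/release .   # drop a file into myfile's attribute space
  $ runat myfile ls -l               # list the attribute files
  $ ls -@ myfile                     # '@' in the mode field marks files that
                                     # carry extended attributes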

Cindy



Peter Reiher wrote:
 Does ZFS currently support actual use of extended attributes?  If so, where 
 can I find some documentation that describes how to use them?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS extended ACL

2009-01-29 Thread Toby Thain

On 29-Jan-09, at 4:53 PM, Volker A. Brandt wrote:

 Given the massive success of GNU based systems (Linux, OS X, *BSD)

 Ouch!  Neither OSX nor *BSD are GNU-based.

I meant, extensive GNU userland (in OS X's case).
(sorry Ian)

--Toby

   They do ship with
 GNU-related things but that's been a long and hard battle.

 And the massive success has really only been Linux due to brilliant
 PR (and FUD about *BSD) and OS X due to Apple's commercial approach
 to BSD.


 Regards -- Volker
 --
 Volker A. Brandt                  Consulting and Support for Sun Solaris
 Brandt & Brandt Computer GmbH              WWW: http://www.bb-c.de/
 Am Wiesenpfad 6, 53340 Meckenheim          Email: v...@bb-c.de
 Handelsregister: Amtsgericht Bonn, HRB 10513       Schuhgröße: 45
 Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Extended attributes in ZFS

2009-01-29 Thread Joerg Schilling
Nicolas Williams nicolas.willi...@sun.com wrote:

 On Thu, Jan 29, 2009 at 03:02:50PM -0800, Peter Reiher wrote:
  Does ZFS currently support actual use of extended attributes?  If so, where 
  can I find some documentation that describes how to use them?

 man runat.1 openat.2 etcetera

Nico, the term extended attributes is overloaded.
What you are talking of is called extended attribute files on Solaris.

We should first ask the OP what he has in mind while asking.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] New RAM disk from ACARD might be interesting

2009-01-29 Thread Janåke Rönnblom
ACARD have launched a new RAM disk which can take up to 64 GB of ECC RAM while
still looking like a standard SATA drive. If anyone remembers the Gigabyte
I-RAM, this might be a new development in this area.

It's called the ACARD ANS-9010 and up...

http://www.acard.com.tw/english/fb01-product.jsp?idno_no=270&prod_no=ANS-9010&type1_title=%20Solid%20State%20Drive&type1_idno=13

This might be interesting to use as a cheap log instead of SSD cards... This 
test compares it with both Intel SSD (consumer and pro):

http://www.techreport.com/articles.x/16255/1

However the test is more from a homeuser point of view...

Anyone got the money and time to test it ;)

-J
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New RAM disk from ACARD might be interesting

2009-01-29 Thread Nathan Kroenert
As it presents as standard SATA, there should be no reason for this not 
to work...

It has battery backup, and CF for backup / restore from DDR2 in the 
event of power loss... Pretty cool. (Would have preferred a super-cap, 
but oh, well... ;)

Should make an excellent ZIL *and* L2ARC style device...
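
For the record, hooking one up as either would just be (pool and device names
made up):

  # zpool add tank log c3t2d0      (dedicated ZIL / slog device)
  # zpool add tank cache c3t3d0    (L2ARC cache device)

Worth remembering that on the pool versions around at the time a cache device
can be removed again with zpool remove, but a log device cannot.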

Seems a little pricey for what it is though.

It's going onto my list of what I'd buy if I had the money... ;)

Nathan.

On 01/30/09 12:10, Janåke Rönnblom wrote:
 ACARD have launched a new RAM disk which can take up to 64 GB of ECC RAM 
 while still looking like a standard SATA drive. If anyone remember the 
 Gigabyte I-RAM this might be a new development in this area.
 
 Its called ACARD ANS-9010 and up...
 
 http://www.acard.com.tw/english/fb01-product.jsp?idno_no=270prod_no=ANS-9010type1_title=%20Solid%20State%20Drivetype1_idno=13
 
 This might be interesting to use as a cheap log instead of SSD cards... This 
 test compares it with both Intel SSD (consumer and pro):
 
 http://www.techreport.com/articles.x/16255/1
 
 However the test is more from a homeuser point of view...
 
 Anyone got the money and time to test it ;)
 
 -J

-- 


//
// Nathan Kroenert  nathan.kroen...@sun.com //
// Senior Systems Engineer  Phone:  +61 3 9869 6255 //
// Global Systems Engineering   Fax:+61 3 9869 6288 //
// Level 7, 476 St. Kilda Road  //
// Melbourne 3004   VictoriaAustralia   //
//
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Problem with snapshot

2009-01-29 Thread Anton B. Rang
Snapshots are not on a per-pool basis but a per-file-system basis.  Thus, when 
you took a snapshot of testpol, you didn't actually snapshot the pool; 
rather, you took a snapshot of the top level file system (which has an implicit 
name matching that of the pool).

Thus, you haven't actually affected file systems fs1 or fs2 at all.

However, apparently you were able to roll back the file system, which either 
unmounted or broke the mounts to fs1 and fs2.  This probably shouldn't have 
been allowed.  (I wonder what would happen with an explicit non-ZFS mount to a 
ZFS directory which is removed by a rollback?)

Your fs1 and fs2 file systems still exist, but they're not attached to their 
old names any more. Maybe they got unmounted. You could probably mount them, 
either on the fs1 directory and on a new fs2 directory if you create one, or at 
a different point in your file system hierarchy.
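
A couple of commands worth trying along those lines (the snapshot name below is
a made-up example, since snapshots are per file system):

bash-3.00# zfs mount testpol/fs1     (or simply: zfs mount -a)
bash-3.00# zfs mount testpol/fs2

and, on releases where recursive snapshots are supported, you can snapshot
every file system in the pool in one step:

bash-3.00# zfs snapshot -r testpol@allsnap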

Anton
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Problems with '..' on ZFS pool

2009-01-29 Thread Anton B. Rang
That bug has been in Solaris forever.  :-(
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New RAM disk from ACARD might be interesting

2009-01-29 Thread Will Murnane
On Thu, Jan 29, 2009 at 21:11, Nathan Kroenert nathan.kroen...@sun.com wrote:
 Seems a little pricey for what it is though.
For what it's worth, there's also a 9010B model that has only one SATA
port and room for six DIMMs instead of eight, at $250 instead of $400.
That might fit into your budget a little easier...  I'm considering one
for a log device.  I wish someone else could test it first and report
problems, but someone's gotta take the jump first.

It looks like this device (the 9010, that is) is also being marketed
as the HyperDrive V at the same price point.

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] write cache and cache flush

2009-01-29 Thread Jim Mauro
Multiple Thors (more than 2?), with performance problems.
Maybe it's the common denominator - the network.

Can you run local ZFS IO loads and determine if performance
is expected when NFS and the network are out of the picture?
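
A crude way to do that for the small-file case: time the same loop once
locally on the Thor and once from an NFS client against the same dataset
(bash or ksh; the path is made up):

$ cd /pool/testfs
$ time bash -c 'i=0; while [ $i -lt 1000 ]; do
    dd if=/dev/zero of=f.$i bs=8k count=1 2>/dev/null
    i=$((i+1))
  done'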

Thanks,
/jim


Greg Mason wrote:
 So, I'm still beating my head against the wall, trying to find our 
 performance bottleneck with NFS on our Thors.

 We've got a couple Intel SSDs for the ZIL, using 2 SSDs as ZIL devices. 
 Cache flushing is still enabled, as are the write caches on all 48 disk 
 devices.

 What I'm thinking of doing is disabling all write caches, and disabling 
 the cache flushing.

 What would this mean for the safety of data in the pool?

 And, would this even do anything to address the performance issue?

 -Greg
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New RAM disk from ACARD might be interesting

2009-01-29 Thread Nathan Kroenert
You could be the first...

Man up! ;)

Nathan.

Will Murnane wrote:
 On Thu, Jan 29, 2009 at 21:11, Nathan Kroenert nathan.kroen...@sun.com 
 wrote:
 Seems a little pricey for what it is though.
 For what it's worth, there's also a 9010B model that has only one sata
 port and room for six dimms instead of eight at $250 instead of $400.
 That might fit in your budget a little easier...  I'm considering one
 for a log device.  I wish someone else could test it first and report
 problems, but someone's gotta take the jump first.
 
 It looks like this device (the 9010, that is) is also being marketed
 as the HyperDrive V at the same price point.
 
 Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New RAM disk from ACARD might be interesting

2009-01-29 Thread Will Murnane
On Thu, Jan 29, 2009 at 22:44, Nathan Kroenert nathan.kroen...@sun.com wrote:
 You could be the first...

 Man up! ;)
*sigh*  The 9010b is ordered.  Ground shipping, unfortunately, but
eventually I'll post my impressions of it.

Will

 Nathan.

 Will Murnane wrote:

 On Thu, Jan 29, 2009 at 21:11, Nathan Kroenert nathan.kroen...@sun.com
 wrote:

 Seems a little pricey for what it is though.

 For what it's worth, there's also a 9010B model that has only one sata
 port and room for six dimms instead of eight at $250 instead of $400.
 That might fit in your budget a little easier...  I'm considering one
 for a log device.  I wish someone else could test it first and report
 problems, but someone's gotta take the jump first.

 It looks like this device (the 9010, that is) is also being marketed
 as the HyperDrive V at the same price point.

 Will

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] write cache and cache flush

2009-01-29 Thread Greg Mason
This problem only manifests itself when dealing with many small files 
over NFS. There is no throughput problem with the network.

I've run tests with the write cache disabled on all disks, and the cache 
flush disabled. I'm using two Intel SSDs for ZIL devices.

This setup is faster than using the two Intel SSDs with write caches 
enabled on all disks, and with the cache flush enabled.

My test used to run around 3.5 to 4 minutes; now it is completing in
around 2.5 minutes. I still think this is a bit slow, but I still have
quite a bit of testing to perform. I'll keep the list updated with my 
findings.

I've already established both via this list and through other research 
that ZFS has performance issues over NFS when dealing with many small 
files. This seems to maybe be an issue with NFS itself, where 
NVRAM-backed storage is needed for decent performance with small files. 
Typically such an NVRAM cache is supplied by a hardware raid controller 
in a disk shelf.

I find it very hard to explain to a user why an upgrade is a step down 
in performance. For the users these Thors are going to serve, such a 
drastic performance hit is a deal breaker...

I've done my homework on this issue, I've ruled out the network as an 
issue, as well as the NFS clients. I've narrowed my particular 
performance issue down to the ZIL, and how well ZFS plays with NFS.

-Greg

Jim Mauro wrote:
 Multiple Thors (more than 2?), with performance problems.
 Maybe it's the common demnominator - the network.
 
 Can you run local ZFS IO loads and determine if performance
 is expected when NFS and the network are out of the picture?
 
 Thanks,
 /jim
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] strange performance drop of solaris 10/zfs

2009-01-29 Thread Sanjeev
Kevin,

Looking at the stats I think the tank pool is about 80% full.
And at this point you are possibly hitting the bug :
6596237 - Stop looking and start ganging

Also, there is another ZIL related bug which worsens the case
by fragmenting the space : 
6683293 concurrent O_DSYNC writes to a fileset can be much improved over NFS

You could compare the disk usage of the other machine that you have.

Also, it would be useful to know what patch levels you are running.

We do have IDRs for the bug#6596237 and the other bug has been
fixed in the official patches.

Hope that helps.

Thanks and regards,
Sanjeev.

On Thu, Jan 29, 2009 at 01:13:29PM +0100, Kevin Maguire wrote:
 Hi
 
 We have been using a Solaris 10 system (Sun-Fire-V245) for a while as
 our primary file server. This is based on Solaris 10 06/06, plus
 patches up to approx May 2007. It is a production machine, and until
 about a week ago has had few problems.
 
 Attached to the V245 is a SCSI RAID array, which presents one LUN to
 the OS.  On this lun is a zpool (tank), and within that 300+ zfs file
 systems (one per user for automounted home directories). The system is
 connected to our LAN via gigabit Ethernet,. most of our NFS clients
 have just 100FD network connection.
 
 In recent days performance of the file server seems to have gone off a
 cliff.  I don't know how to troubleshoot what might be wrong? Typical
 zpool iostat 120 output is shown below. If I run truss -D df I see
 each call to statvfs64(/tank/bla) takes 2-3 seconds. The RAID itself
 is healthy, and all disks are reporting as OK.
 
 I have tried to establish if some client or clients are thrashing the
 server via nfslogd, but without seeing anything obvious.  Is there
 some kind of per-zfs-filesystem iostat?
 
 End users are reporting just saving small files can take 5-30 seconds?
 prstat/top shows no process using significant CPU load.  The system
 has 8GB of RAM, vmstat shows nothing interesting.
 
 I have another V245, with the same SCSI/RAID/zfs setup, and a similar
 (though a bit less) load of data and users where this problem is NOT
 apparent there?
 
 Suggestions?
 Kevin
 
 Thu Jan 29 11:32:29 CET 2009
capacity operationsbandwidth
 pool used  avail   read  write   read  write
 --  -  -  -  -  -  -
 tank2.09T   640G 10 66   825K  1.89M
 tank2.09T   640G 39  5  4.80M   126K
 tank2.09T   640G 38  8  4.73M   191K
 tank2.09T   640G 40  5  4.79M   126K
 tank2.09T   640G 39  5  4.73M   170K
 tank2.09T   640G 40  3  4.88M  43.8K
 tank2.09T   640G 40  3  4.87M  54.7K
 tank2.09T   640G 39  4  4.81M   111K
 tank2.09T   640G 39  9  4.78M   134K
 tank2.09T   640G 37  5  4.61M   313K
 tank2.09T   640G 39  3  4.89M  32.8K
 tank2.09T   640G 35  7  4.31M   629K
 tank2.09T   640G 28 13  3.47M  1.43M
 tank2.09T   640G  5 51   433K  4.27M
 tank2.09T   640G  6 51   450K  4.23M
 tank2.09T   639G  5 52   543K  4.23M
 tank2.09T   640G 26 57  3.00M  1.15M
 tank2.09T   640G 39  6  4.82M   107K
 tank2.09T   640G 39  3  4.80M   119K
 tank2.09T   640G 38  8  4.64M   295K
 tank2.09T   640G 40  7  4.82M   102K
 tank2.09T   640G 43  5  4.79M   103K
 tank2.09T   640G 39  4  4.73M   193K
 tank2.09T   640G 39  5  4.87M  62.1K
 tank2.09T   640G 40  3  4.88M  49.3K
 tank2.09T   640G 40  3  4.80M   122K
 tank2.09T   640G 42  4  4.83M  82.0K
 tank2.09T   640G 40  3  4.89M  42.0K
 ...
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] write cache and cache flush

2009-01-29 Thread Neil Perrin


On 01/29/09 21:32, Greg Mason wrote:
 This problem only manifests itself when dealing with many small files 
 over NFS. There is no throughput problem with the network.
 
 I've run tests with the write cache disabled on all disks, and the cache 
 flush disabled. I'm using two Intel SSDs for ZIL devices.
 
 This setup is faster than using the two Intel SSDs with write caches 
 enabled on all disks, and with the cache flush enabled.
 
 My test would run around 3.5 to 4 minutes, now it is completing in 
 abound 2.5 minutes. I still think this is a bit slow, but I still have 
 quite a bit of testing to perform. I'll keep the list updated with my 
 findings.
 
 I've already established both via this list and through other research 
 that ZFS has performance issues over NFS when dealing with many small 
 files. This seems to maybe be an issue with NFS itself, where 
 NVRAM-backed storage is needed for decent performance with small files. 
 Typically such an NVRAM cache is supplied by a hardware raid controller 
 in a disk shelf.
 
 I find it very hard to explain to a user why an upgrade is a step down 
 in performance. For the users these Thors are going to serve, such a 
 drastic performance hit is a deal breaker...

Perhaps I missed something, but what was your previous setup?
I.e. what did you upgrade from? 

Neil.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Issue with drive replacement

2009-01-29 Thread Wes Morgan
On Wed, 28 Jan 2009, Cuyler Dingwell wrote:

 In the process of replacing a raidz1 of four 500GB drives with four 
 1.5TB drives on the third one I ran into an interesting issue.  The 
 process was to remove the old drive, put the new drive in and let it 
 rebuild.

 The problem was the third drive I put in had a hardware fault.  That 
 caused both drives (c4t2d0) to show as FAULTED.  I couldn't put a new 
 1.5TB drive in as a replacement - it'd still show as a faulted drive. 
 I couldn't remove the faulted since you can't remove a drive without 
 enough replicas. You also can't do anything to a pool in the process of 
 replacing.

 The remedy was to put the original drive back in and let it resilver. 
 Once complete, a new 1.5TB drive was put in and the process was able to 
 complete.

 If I didn't have the original drive (or it was broken) I think I would 
 have been in a tough spot.

 Has anyone else experienced this - and if so, is there a way to force 
 the replacement of drive that failed during resilvering?

Not to my knowledge. You have to first cancel the replacement, and to do 
that you need the actual device (or something claiming to be it) to be
present. At least, I couldn't figure out how...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss