[zfs-discuss] Pool performance when nearly full

2012-12-20 Thread sol
Hi

I know some of this has been discussed in the past but I can't quite find the 
exact information I'm seeking
(and I'd check the ZFS wikis but the websites are down at the moment).

Firstly, which is correct: the free space shown by 'zfs list' or by 'zpool iostat'?

zfs list:
used 50.3 TB, free 13.7 TB, total = 64 TB, free = 21.4%

zpool iostat:
used 61.9 TB, free 18.1 TB, total = 80 TB, free = 22.6%

(That's a big difference, and the percentages don't agree.)

Secondly, there are 8 vdevs, each of 11 disks.
6 vdevs show used 8.19 TB, free 1.81 TB, free = 18.1%
2 vdevs show used 6.39 TB, free 3.61 TB, free = 36.1%
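(For reference, the per-vdev numbers above are just the alloc/free columns from
'zpool iostat -v', i.e. something like "# zpool iostat -v tank", with the pool
name made up here. And I'm guessing the 80 TB vs 64 TB gap is parity overhead,
since zpool iostat counts raw space while zfs list counts usable space:
80 - 64 = 16 TB is 2 TB per 10 TB vdev, which is what raidz2 would eat if these
are raidz2 vdevs. Corrections welcome.)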

I've heard that 
a) performance degrades when free space is below a certain amount
b) data is written to different vdevs depending on free space

So a) how do I determine the exact free-space value at which performance
degrades, and how significant is the degradation?
b) Has that threshold been reached (or exceeded?) in the first six vdevs?
And if so, are the two emptier vdevs being used exclusively to prevent
performance degrading, so that it will only degrade once all vdevs reach the
magic 18.1% free (or whatever it is)?

Presumably there's no way to identify which files are on which vdevs in order 
to delete them and recover the performance?

Thanks for any explanations!
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2012-12-18 Thread sol
From: Cindy Swearingen cindy.swearin...@oracle.com
No doubt. This is a bad bug and we apologize.
1. If you are running Solaris 11 or Solaris 11.1 and have separate
cache devices, you should remove them to avoid this problem.

How is the 7000-series storage appliance affected?
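(For a plain pool I'm assuming the removal is just 'zpool remove' on the cache
device, e.g. "# zpool remove tank c0t5d0" with pool and device names made up;
presumably on the appliance it would have to be done through the BUI or the
appliance CLI instead.)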

2. A MOS knowledge article (1497293.1) is available to help diagnose this
problem.

MOS isn't able to find this article when I search for it.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The format command crashes on 3TB disk but zpool create ok

2012-12-14 Thread sol
Here it is:

# pstack core.format1
core 'core.format1' of 3351:    format
-  lwp# 1 / thread# 1  
 0806de73 can_efi_disk_be_expanded (0, 1, 0, ) + 7
 08066a0e init_globals (8778708, 0, f416c338, 8068a38) + 4c2
 08068a41 c_disk   (4, 806f250, 0, 0, 0, 0) + 48d
 0806626b main     (1, f416c3b0, f416c3b8, f416c36c) + 18b
 0805803d _start   (1, f416c47c, 0, f416c483, f416c48a, f416c497) + 7d
-  lwp# 2 / thread# 2  
 eed690b1 __door_return (0, 0, 0, 0) + 21
 eed50668 door_create_func (0, eee02000, eea1efe8, eed643e9) + 32
 eed6443c _thrp_setup (ee910240) + 9d
 eed646e0 _lwp_start (ee910240, 0, 0, 0, 0, 0)
-  lwp# 3 / thread# 3  
 eed6471b __lwp_park (8780880, 8780890) + b
 eed5e0d3 cond_wait_queue (8780880, 8780890, 0, eed5e5f0) + 63
 eed5e668 __cond_wait (8780880, 8780890, ee90ef88, eed5e6b1) + 89
 eed5e6bf cond_wait (8780880, 8780890, 208, eea740ad) + 27
 eea740f8 subscriber_event_handler (8778dd0, eee02000, ee90efe8, eed643e9) + 5c
 eed6443c _thrp_setup (ee910a40) + 9d
 eed646e0 _lwp_start (ee910a40, 0, 0, 0, 0, 0)





 From: John D Groenveld jdg...@elvis.arl.psu.edu
# pstack core
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS array on marvell88sx in Solaris 11.1

2012-12-13 Thread sol
Oh, I can run the disks off a SiliconImage 3114, but it's the Marvell controller
that I'm trying to get working. I'm fairly sure it's the controller used in the
Thumpers, so it should surely work in Solaris 11.1.




 From: Bob Friesenhahn bfrie...@simple.dallas.tx.us

 If the SATA card you are using is a JBOD-style card (i.e. disks are portable 
 to a different controller), are you able/willing to swap it for one that 
 Solaris is known to support well?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] S11 vs illumos zfs compatiblity

2012-12-13 Thread sol
Hi

I've just tried to use illumos (151a5) to import a pool created on Solaris (11.1)
but it failed with an error about the pool being incompatible.

Are we now at the stage where the two prongs of the zfs fork are pointing in 
incompatible directions?
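(For next time, I'm assuming the safe workaround is to create pools at the last
common on-disk version and to check with something like:
# zpool get version tank
# zpool create -o version=28 tank <disks>
since I believe 28 is the last pool version the illumos code understands; pool
name is made up and corrections are welcome.)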




 From: Matthew Ahrens mahr...@delphix.com
On Thu, Jan 5, 2012 at 6:53 AM, sol a...@yahoo.com wrote:



I would have liked to think that there was some good-will between the ex- and 
current-members of the zfs team, in the sense that the people who created zfs 
but then left Oracle still care about it enough to want the Oracle version to 
be as bug-free as possible.



There is plenty of good will between everyone who's worked on ZFS -- current 
Oracle employees, former employees, and those never employed by Oracle.  We 
would all like to see all implementations of ZFS be the highest quality 
possible.  I'd like to think that we all try to achieve that to the extent 
that it is possible within our corporate priorities.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] The format command crashes on 3TB disk but zpool create ok

2012-12-13 Thread sol
Hi

I added a 3TB Seagate disk (ST3000DM001) and ran the 'format' command but it 
crashed and dumped core.


However, the 'zpool create' command managed to create a pool on the whole disk
(2.68 TB of space).
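(The pool was created on the whole disk with nothing fancy, i.e. something like
"# zpool create tank c0t3d0", with a made-up pool and device name, in case the
exact form matters.)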

I hope that's only a problem with the format command and not with zfs or any 
other part of the kernel.

(Solaris 11.1 by the way)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS array on marvell88sx in Solaris 11.1

2012-12-13 Thread sol
That's right, I'm only using the 3114 out of desperation.

Does anyone else have the marvell88sx working in Solaris 11.1?





 From: Andrew Gabriel andrew.gabr...@oracle.com
3112 and 3114 were very early SATA controllers before there were any SATA 
drivers, which pretend to be ATA controllers to the OS.
No one should be using these today.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS array on marvell88sx in Solaris 11.1

2012-12-12 Thread sol
Hello

I've got a ZFS box running perfectly with an 8-port SATA card
using the marvell88sx driver in opensolaris-2009.

However when I try to run Solaris-11 it won't boot.
If I unplug some of the hard disks it might boot
but then none of them show up in 'format'
and none of them have configured status in 'cfgadm'
(and there's an error or hang if I try to configure them).
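(Concretely, I'm running "# cfgadm -al" to list the SATA attachment points and
then "# cfgadm -c configure sata1/0" and so on for each port; the
attachment-point names here are from memory and may not be exact.)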

Does anyone have any suggestions how to solve the problem?

Thanks!
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS array on marvell88sx in Solaris 11.1

2012-12-12 Thread sol
Thanks for the reply.
I've just tried OpenIndiana and it behaves identically -
disks attached to the mv88sx6081 don't show up as disks
(and an "APIC error interrupt (status0=0, status1=40)" message is emitted at boot).

I've tried some changes to /etc/system with no success
(sata_func_enable=0x5, ahci_msi_enabled=0, sata_max_queue_depth=1)
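(For completeness, the /etc/system lines I tried looked like this, assuming I've
got the module prefixes right, with a reboot after each change:
set sata:sata_func_enable = 0x5
set ahci:ahci_msi_enabled = 0
set sata:sata_max_queue_depth = 1
)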

Is there anything else I can try?



 From: Bob Friesenhahn bfrie...@simple.dallas.tx.us
To: sol a...@yahoo.com 
Cc: zfs-discuss@opensolaris.org zfs-discuss@opensolaris.org 
Sent: Wednesday, 12 December 2012, 14:49
Subject: Re: [zfs-discuss] ZFS array on marvell88sx in Solaris 11.1
 
On Wed, 12 Dec 2012, sol wrote:

 Hello
 
 I've got a ZFS box running perfectly with an 8-port SATA card
 using the marvell88sx driver in opensolaris-2009.
 
 However when I try to run Solaris-11 it won't boot.
 If I unplug some of the hard disks it might boot
 but then none of them show up in 'format'
 and none of them have configured status in 'cfgadm'
 (and there's an error or hang if I try to configure them).
 
 Does anyone have any suggestions how to solve the problem?

Since you were previously using opensolaris-2009, have you considered trying 
OpenIndiana oi_151a7 instead?  You could experiment by booting from the live 
CD and seeing if your disks show up.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS array on marvell88sx in Solaris 11.1

2012-12-12 Thread sol
Some more information about the system:

Solaris 11.1 with latest updates (assembled 19 Sep 2012), amd64
The card is vendor 0x11ab device 0x6081
Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller
 CardVendor 0x11ab card 0x11ab (Marvell Technology Group Ltd., Card unknown)

  STATUS    0x02b8  COMMAND 0x0007
  CLASS     0x01 0x00 0x00  REVISION 0x09
  BIST      0x00  HEADER 0x00  LATENCY 0x20  CACHE 0x10
  BASE0     0xfac0 SIZE 1048576  MEM64
  BASE2     0xc400 SIZE 256  I/O
  BASEROM   0x  addr 0x
  MAX_LAT   0x00  MIN_GNT 0x00  INT_PIN 0x01  INT_LINE 0x0b


I've forgotten where it hung when looking at verbose boot output
(although it hung at a couple of different points)
so I'll post that next time it hangs.
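(For the record, the verbose output comes from booting with the -v kernel
argument, edited in at the GRUB menu; I assume adding -k as well would drop into
kmdb on a hard hang, which might help pin down where it stops.)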


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Any company willing to support a 7410 ?

2012-07-19 Thread sol
Other than Oracle do you think any other companies would be willing to take 
over support for a clustered 7410 appliance with 6 JBODs?

(Some non-Oracle names which popped out of Google:
Joyent/Coraid/Nexenta/Greenbytes/NAS/RackTop/EraStor/Illumos/???)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Persistent errors?

2012-06-18 Thread sol
Hello

It seems as though every time I scrub my mirror I get a few megabytes of 
checksum errors on one disk (luckily corrected by the other). Is there some way 
of tracking down a problem which might be persistent?
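(So far I've only been looking at 'zpool status -v' after each scrub; I'm
assuming 'iostat -En' and 'fmdump -eV' are the other places to look for per-disk
error counters and the underlying ereports, but corrections welcome.)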

I wonder if it's anything to do with these messages which are constantly 
appearing on the console:

Jun 17 12:06:18 sunny scsi: [ID 107833 kern.warning] WARNING: 
/pci@0,0/pci1000,8000@16/sd@0,0 (sd2):
Jun 17 12:06:18 sunny   SYNCHRONIZE CACHE command failed (5)

I've no idea what they are about (this is on Solaris 11 btw).
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] S11 vs illumos zfs compatiblity

2012-01-16 Thread sol
Thanks for that, Matt, very reassuring  :-)





 There is plenty of good will between everyone who's worked on ZFS -- current 
 Oracle employees, former employees, and those never employed by Oracle.  We 
 would all like to see all implementations of ZFS be the highest quality 
 possible.  I'd like to think that we all try to achieve that to the extent 
 that it is possible within our corporate priorities.


--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] S11 vs illumos zfs compatiblity

2012-01-05 Thread sol
 if a bug fixed in Illumos is never reported to Oracle by a customer,
 it would likely never get fixed in Solaris either


:-(

I would have liked to think that there was some good-will between the ex- and 
current-members of the zfs team, in the sense that the people who created zfs 
but then left Oracle still care about it enough to want the Oracle version to 
be as bug-free as possible.

(Obviously I don't expect this to be the case for developers of all software 
but I think filesystem developers are a special breed!)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] S11 vs illumos zfs compatiblity

2011-12-29 Thread sol
Richard Elling wrote: 

 many of the former Sun ZFS team 
 regularly contribute to ZFS through the illumos developer community.  


Does this mean that if they provide a bug fix via illumos then the fix won't 
make it into the Oracle code?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] bug moving files between two zfs filesystems (too many open files)

2011-11-29 Thread sol
Hello

Has anyone else come across a bug moving files between two zfs file systems?

I used "mv /my/zfs/filesystem/files /my/zfs/otherfilesystem" and got the error
"too many open files".

This is on Solaris 11
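(As a workaround I can presumably raise the descriptor limit in the shell before
retrying, e.g. check it with 'ulimit -n' and bump it with 'ulimit -n 1024', but
that feels like papering over whatever mv is doing.)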
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS smb/cifs shares in Solaris 11 (some observations)

2011-11-29 Thread sol
Hi

Several observations with zfs cifs/smb shares in the new Solaris 11.

1) It seems that the previously documented way to set the smb share name no
longer works:
zfs set sharesmb=name=my_share_name
You have to use the long-winded
zfs set share=name=my_share_name,path=/my/share/path,prot=smb
This is fine but not really obvious if moving scripts from Solaris 10 to
Solaris 11 (the full sequence I ended up with is shown after this list).

2) If you use zfs rename to rename a zfs filesystem it doesn't rename the smb 
share name.

3) Also you might end up with two shares having the same name.

4) So how do you rename the smb share? There doesn't appear to be a 'zfs
unset', and if you issue the command twice with different names then both are
listed when you use 'zfs get share'.

5) The 'share' value acts like a property but does not show up if you use 'zfs
get', so that's not really consistent.

6) zfs filesystems created with Solaris 10 and shared with smb cannot be
mounted from Windows when the server is upgraded to Solaris 11.
The client just gets "permission denied", but in the server log you might see
"access denied: share ACL".
If you create a brand new zfs filesystem then it works fine. So what is the
difference?
The ACLs have never been set or changed so it's not that, and the two
filesystems appear to have identical ACLs.
But if you look at the extended attributes, the successful filesystem has xattr
{A--m} and the unsuccessful one has {}.
However that xattr cannot be set on the share to see if it allows it to be
mounted:
"chmod S+cA share" gives "chmod: ERROR: extended system attributes not
supported for share" (even though it has the xattr=on property).
What is the problem here? Why can't a Solaris 10 filesystem be shared via smb?
And how can extended attributes be set on a zfs filesystem?
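(Re point 1 above, for anyone else converting scripts, the full sequence that
works for me looks roughly like this, with pool, dataset and share names made
up:
# zfs create tank/docs
# zfs set share=name=docs,path=/tank/docs,prot=smb tank/docs
# zfs set sharesmb=on tank/docs
i.e. as far as I can tell the share= line only defines the share, and
sharesmb=on is still what actually publishes it.)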

Thanks folks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] bug moving files between two zfs filesystems (too many open files)

2011-11-29 Thread sol
Yes, it's moving a tree of files, and the shell ulimit is the default (which I 
think is 256).


It happened twice recently in normal use but not when I tried to replicate it 
(standard test response ;-))


Anyway, it has only happened when moving between zfs filesystems in Solaris 11;
I've never seen it before, which is why I posted here first. But if it's a
problem elsewhere in Solaris I should move the discussion... although any ideas
are welcome!





 From: casper@oracle.com casper@oracle.com
I think the "too many open files" message is a generic error about
running out of file descriptors. You should check your shell ulimit
information.

Yeah, but mv shouldn't run out of file descriptors, or should be
able to deal with that.

Are we moving a tree of files?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs xattr not supported prevents smb mount

2011-11-11 Thread sol
Hello

I have some zfs filesystems shared via cifs. Some of them I can mount and 
others I can't. They appear identical in properties and ACLs; the only 
difference I've found is the successful ones have xattr {A--m} and the 
others have {}. But I can't set that xattr on the share to see if it allows it 
to be mounted.
chmod S+cA share
chmod: ERROR: extended system attributes not supported for share (even though 
it has the xattr=on property)
(The mount fails (after entering the correct password) with "tree connect
failed: syserr = Permission denied" and the log message "access denied: share
ACL".)
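(The {A--m} and {} values above are what ls reports for the system attributes,
i.e. something like "ls -/ c /path/to/share"; I mention the command in case I'm
simply misreading those flags.)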

Maybe I'm barking up the wrong tree and it's not the xattr of the share which 
is causing the problem but I'd be grateful for some enlightenment. This is 
Solaris 11 (and is a 'regression' from Solaris 11 Express).

Thanks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How can a mirror lose a file?

2010-07-28 Thread sol
Hi

Having just done a scrub of a mirror I've lost a file, and I'm curious how this
can happen in a mirror.  Doesn't it require the almost impossible scenario
of exactly the same sector being trashed on both disks?  However, the
zpool status shows checksum errors, not I/O errors, and I'm not sure what
that means in this case.

I thought that a zfs mirror would be the ultimate in protection but it's not!
Any ideas why and how to protect against this in the future?

(BTW it's osol official release 2009.06 snv_111b)
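(Assuming the usual sequence applies here, my plan is to restore the affected
file from backup and then run 'zpool clear liver' followed by another
'zpool scrub liver' to check that the errors go away; status output below.)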

# zpool status -v
  pool: liver
 state: ONLINE
status: One or more devices has experienced an error resulting in 
data corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the entire 
pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub completed after 3h31m with 1 errors 
config:

        NAME         STATE     READ WRITE CKSUM
        liver        ONLINE       0     0     1
          mirror     ONLINE       0     0     2
            c9d0p0   ONLINE       0     0     2
            c10d0p0  ONLINE       0     0     2

errors: Permanent errors have been detected in the following files:



  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss