[zfs-discuss] LAST CALL: zfs-discuss is moving Sunday, March 24, 2013

2013-03-22 Thread Cindy Swearingen

I hope to see everyone on the other side...

***

The ZFS discussion list is moving to java.net.

This opensolaris/zfs discussion will not be available after March 24.
There is no way to migrate the existing list to the new list.

The solaris-zfs project is here:

http://java.net/projects/solaris-zfs

See the steps below to join the ZFS project or just the discussion list,
but you must create an account on java.net to join the list.

Thanks, Cindy

1. Create an account on java.net.

https://java.net/people/new

2. When logged in to your java.net account, join the solaris-zfs
project as an Observer by clicking the Join This Project link on the
left side of this page:

http://java.net/projects/solaris-zfs

3. Subscribe to the zfs discussion mailing list here:

http://java.net/projects/solaris-zfs/lists
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] This mailing list EOL???

2013-03-20 Thread Cindy Swearingen

Hi Ned,

This list is migrating to java.net and will not be available
in its current form after March 24, 2013.

The archive of this list is available here:

http://www.mail-archive.com/zfs-discuss@opensolaris.org/

I will provide an invitation to the new list shortly.

Thanks for your patience.

Cindy

On 03/20/13 15:05, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

I can't seem to find any factual indication that opensolaris.org mailing
lists are going away, and I can't even find the reference to whoever
said it was EOL in a few weeks ... a few weeks ago.

So ... are these mailing lists going bye-bye?




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Please join us on the new zfs discuss list on java.net

2013-03-20 Thread Cindy Swearingen

Hi Everyone,

The ZFS discussion list is moving to java.net.

This opensolaris/zfs discussion will not be available after March 24.
There is no way to migrate the existing list to the new list.

The solaris-zfs project is here:

http://java.net/projects/solaris-zfs

See the steps below to join the ZFS project or just the discussion list,
but you must create an account on java.net to join the list.

Thanks, Cindy

1. Create an account on java.net.

https://java.net/people/new

2. When logged in to your java.net account, join the solaris-zfs
project as an Observer by clicking the Join This Project link on the
left side of this page:

http://java.net/projects/solaris-zfs

3. Subscribe to the zfs discussion mailing list here:

http://java.net/projects/solaris-zfs/lists
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] partioned cache devices

2013-03-19 Thread Cindy Swearingen

Hi Andrew,

Your original syntax was incorrect.

A p* device is a larger container for the d* device or s* devices.
In the case of a cache device, you need to specify a d* or s* device.
That you can add p* devices to a pool is a bug.

Adding different slices from c25t10d1 as both log and cache devices
would need the s* identifier, but you've already added the entire
c25t10d1 as the log device. A better configuration would be to use
c25t10d1 for log and c25t9d1 for cache, or to provide some spares
for this large pool.

After you remove the log devices, re-add like this:

# zpool add aggr0 log c25t10d1
# zpool add aggr0 cache c25t9d1
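
For completeness (the removal step is not spelled out above), it would look
something like this, assuming the whole-disk log device described earlier;
confirm the exact device name with zpool status first:

# zpool remove aggr0 c25t10d1    # remove the existing whole-disk log device
# zpool status aggr0             # verify the log vdev is gone before re-adding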

You might review the ZFS recommended practices section, here:

http://docs.oracle.com/cd/E26502_01/html/E29007/zfspools-4.html#storage-2

See example 3-4 for adding a cache device, here:

http://docs.oracle.com/cd/E26502_01/html/E29007/gayrd.html#gazgw

Always have good backups.

Thanks, Cindy



On 03/18/13 23:23, Andrew Werchowiecki wrote:

I did something like the following:

format -e /dev/rdsk/c5t0d0p0

fdisk

1 (create)

F (EFI)

6 (exit)

partition

label

1

y

0

usr

wm

64

4194367e

1

usr

wm

4194368

117214990

label

1

y

Total disk size is 9345 cylinders
Cylinder size is 12544 (512 byte) blocks

                                      Cylinders
      Partition   Status    Type      Start   End    Length    %
      =========   ======    ========  =====   ====   ======   ===
          1                 EFI           0   9345     9346   100

partition print

Current partition table (original):

Total disk sectors available: 117214957 + 16384 (reserved sectors)

Part Tag Flag First Sector Size Last Sector

0 usr wm 64 2.00GB 4194367

1 usr wm 4194368 53.89GB 117214990

2 unassigned wm 0 0 0

3 unassigned wm 0 0 0

4 unassigned wm 0 0 0

5 unassigned wm 0 0 0

6 unassigned wm 0 0 0

8 reserved wm 117214991 8.00MB 117231374

This isn't the output from when I did it, but these are exactly the same
steps that I followed.

Thanks for the info about slices, I may give that a go later on. I'm not
keen on that, though, because I have clear evidence (as in zpools set up
this way, right now, working, without issue) that GPT partitions of the
style shown above work, and I want to see why it doesn't work in my setup
rather than simply ignoring it and moving on.

*From:*Fajar A. Nugraha [mailto:w...@fajar.net]
*Sent:* Sunday, 17 March 2013 3:04 PM
*To:* Andrew Werchowiecki
*Cc:* zfs-discuss@opensolaris.org
*Subject:* Re: [zfs-discuss] partioned cache devices

On Sun, Mar 17, 2013 at 1:01 PM, Andrew Werchowiecki
andrew.werchowie...@xpanse.com.au wrote:

I understand that p0 refers to the whole disk... in the logs I
pasted in I'm not attempting to mount p0. I'm trying to work out why
I'm getting an error attempting to mount p2, after p1 has
successfully mounted. Further, this has been done before on other
systems in the same hardware configuration in the exact same
fashion, and I've gone over the steps trying to make sure I haven't
missed something but can't see a fault.

How did you create the partition? Are those marked as a Solaris partition,
or something else (e.g. fdisk on Linux uses type 83 by default)?

I'm not keen on using Solaris slices because I don't have an
understanding of what that does to the pool's OS interoperability.

Linux can read Solaris slices and import Solaris-made pools just fine, as
long as you're using a compatible zpool version (e.g. zpool version 28).

--

Fajar




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What would be the best tutorial cum reference doc for ZFS

2013-03-19 Thread Cindy Swearingen

Hi Hans,

Start with the ZFS Admin Guide, here:

http://docs.oracle.com/cd/E26502_01/html/E29007/index.html

Or, start with your specific questions.

Thanks, Cindy

On 03/19/13 03:30, Hans J. Albertsson wrote:

as used on Illumos?

I've seen a few tutorials written by people who obviously are very
action oriented; afterwards you find you have worn your keyboard down a
bit and not learned a lot at all, at least not in the sense of
understanding what zfs is and what it does and why things are the way
they are.

I'm looking for something that would make me afterwards understand what,
say, commands like zpool import ... or zfs send ... actually do, and
some idea as to why, so I can begin to understand ZFS in a way that
allows me to make educated guesses on how to perform tasks I haven't
tried before.
And mostly without having to ask around for days on end.

For SOME part of zfs I'm already there, but only for the things I had to
do more than twice or so while managing the Swedish lab at Sun Micro.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs-discuss mailing list opensolaris EOL

2013-02-18 Thread Cindy Swearingen

Hi Jim,

We will be restaging the ZFS community info, most likely on OTN.

The zfs discussion list archive cannot be migrated to the new
list on java.net, but you can pick it up here:

http://www.mail-archive.com/zfs-discuss@opensolaris.org/

We are looking at other ways to make the zfs discuss list archive
available, possibly as a file download.

Thanks, Cindy

On 02/16/13 16:41, Jim Klimov wrote:

Hello Cindy,

Are there any plans to preserve the official mailing lists' archives,
or will they go the way of Jive forums and the future digs for bits
of knowledge would rely on alternate mirrors and caches?

I understand that Oracle has some business priorities, but retiring
hardware causes site shutdown? They've gotta be kidding, with all
the buzz about clouds and virtualization ;)

I'd guess, you also are not authorized to say whether Oracle might
permit re-use (re-hosting) of current OpenSolaris.Org materials or
even give away the site and domain for community steering and rid
itself of more black PR by shooting down another public project of
the Sun legacy (hint: if the site does wither and die in community's
hands - it is not Oracle's fault; and if it lives on - Oracle did
something good for karma... win-win, at no price).

Thanks for your helpfulness in the past years,
//Jim

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs-discuss mailing list opensolaris EOL

2013-02-16 Thread cindy swearingen
Hey Ned and Everyone,

This was new news to us too and we were just talking over some options
yesterday afternoon, so please give us a chance to regroup and provide
some alternatives.

This list will be shut down, but we can start a new one on java.net. There
is a huge ecosystem around Solaris and ZFS, particularly within Oracle.
Many of us are still here because we are passionate about ZFS, Solaris 11,
and even Solaris 10. I think we have a great product and a lot of info to
share.

If you're interested in a rejuvenated ZFS discuss list on java.net, then
drop me a note:
cindy.swearin...@oracle.com

We are also considering a new ZFS page in that community. Oracle is very
committed to Solaris and ZFS, but they want to consolidate their community
efforts on java.net, retire some old hardware, and so on.

If you are an Oracle customer with a support contract and you are using
Solaris and ZFS, and you want to discuss support issues, you should
consider that list as well:

https://communities.oracle.com/portal/server.pt/community/oracle_solaris_zfs_file_system/526

Thanks, Cindy

On Fri, Feb 15, 2013 at 9:00 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) 
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:

  So, I hear, in a couple weeks' time, opensolaris.org is shutting down.
 What does that mean for this mailing list?  Should we all be moving over to
 something at illumos or something?


 I'm going to encourage somebody in an official capacity at opensolaris to
 respond...

 I'm going to discourage unofficial responses, like, illumos enthusiasts
 etc simply trying to get people to jump this list.


 Thanks for any info ...



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2013-01-14 Thread Cindy Swearingen

Hi Jamie,

Yes, that is correct.

The S11u1 version of this bug is:

https://bug.oraclecorp.com/pls/bug/webbug_print.show?c_rptno=15852599

and has this notation which means Solaris 11.1 SRU 3.4:

Changeset pushed to build 0.175.1.3.0.4.0

Thanks,

Cindy

On 01/11/13 19:10, Jamie Krier wrote:

It appears this bug has been fixed in Solaris 11.1 SRU 3.4

7191375 15809921 SUNBT7191375 metadata rewrites should coordinate with l2arc


Cindy can you confirm?

Thanks


On Fri, Jan 4, 2013 at 3:55 PM, Richard Elling richard.ell...@gmail.com wrote:

On Jan 4, 2013, at 11:12 AM, Robert Milkowski rmilkow...@task.gda.pl wrote:



Illumos is not so good at dealing with huge memory systems but perhaps
it is also more stable as well.

Well, I guess that it depends on your environment, but generally I would
expect S11 to be more stable, if only because of the sheer amount of bugs
reported by paid customers and bug fixes by Oracle that Illumos is not
getting (lack of resources, limited usage, etc.).

There is a two-edged sword. Software reliability analysis shows that the
most reliable software is the software that is oldest and unchanged. But
people also want new functionality. So while Oracle has more changes
being implemented in Solaris, it is destabilizing while simultaneously
improving reliability. Unfortunately, it is hard to get both wins. What is
more likely is that new features are being driven into Solaris 11 that are
destabilizing. By contrast, the number of new features being added to
illumos-gate (not to be confused with illumos-based distros) is relatively
modest and in all cases they are not gratuitous.
  -- richard

--
richard.ell...@richardelling.com
+1-760-896-4422

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2013-01-14 Thread Cindy Swearingen

I believe the bug.oraclecorp.com URL is accessible with a support
contract, but it's difficult for me to test.

I should have mentioned it. I apologize.

cs

On 01/14/13 14:02, Nico Williams wrote:

On Mon, Jan 14, 2013 at 1:48 PM, Tomas Forsmanst...@acc.umu.se  wrote:

https://bug.oraclecorp.com/pls/bug/webbug_print.show?c_rptno=15852599


Host oraclecorp.com not found: 3(NXDOMAIN)

Would oracle.internal be a better domain name?


Things like that cannot be changed easily.  They (Oracle) are stuck
with that domain name for the foreseeable future.  Also, whoever thought
it up probably didn't consider leakage of internal URIs to the
outside.  *shrug*

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-03 Thread Cindy Swearingen

Free advice is cheap...

I personally don't see the advantage of caching reads
and logging writes to the same devices. (Is this recommended?)

If this pool is serving CIFS/NFS, I would recommend testing
for best performance with a mirrored log device first without
a separate cache device:

# zpool add tank0 log mirror c4t1d0 c4t2d0

Thanks, Cindy

On 01/03/13 14:21, Phillip Wagstrom wrote:

Eugen,

Be aware that p0 corresponds to the entire disk, regardless of how it
is partitioned with fdisk. The fdisk partitions are 1 - 4. By using p0 for
log and p1 for cache, you could very well be writing to the same location
on the SSD and corrupting things.
Personally, I'd recommend putting a standard Solaris fdisk partition on
the drive and creating the two slices under that.
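
For reference (not part of the original message), once the two slices exist
the pool additions would look something like this; the slice numbers s0 and
s1 are only illustrative and must match whatever you create in format:

# zpool add tank0 log mirror c4t1d0s0 c4t2d0s0
# zpool add tank0 cache c4t1d0s1 c4t2d0s1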

-Phil

On Jan 3, 2013, at 2:33 PM, Eugen Leitl wrote:


On Sun, Dec 30, 2012 at 06:02:40PM +0100, Eugen Leitl wrote:


Happy $holidays,

I have a pool of 8x ST31000340AS on an LSI 8-port adapter as


Just a little update on the home NAS project.

I've set the pool sync to disabled, and added a couple of

   8. c4t1d0 <ATA-INTELSSDSA2M080-02G9 cyl 11710 alt 2 hd 224 sec 56>
      /pci@0,0/pci1462,7720@11/disk@1,0
   9. c4t2d0 <ATA-INTELSSDSA2M080-02G9 cyl 11710 alt 2 hd 224 sec 56>
      /pci@0,0/pci1462,7720@11/disk@2,0

I had no clue what the partition names (created with the napp-it web
interface, a la 5% log and 95% cache, of 80 GByte) were, and so I
did an iostat -xnp

    1.4   0.3   5.5   0.0  0.0  0.0   0.0   0.0   0   0 c4t1d0
    0.1   0.0   3.7   0.0  0.0  0.0   0.0   0.5   0   0 c4t1d0s2
    0.1   0.0   2.6   0.0  0.0  0.0   0.0   0.5   0   0 c4t1d0s8
    0.0   0.0   0.0   0.0  0.0  0.0   0.0   0.2   0   0 c4t1d0p0
    0.0   0.0   0.0   0.0  0.0  0.0   0.0   0.0   0   0 c4t1d0p1
    0.0   0.0   0.0   0.0  0.0  0.0   0.0   0.0   0   0 c4t1d0p2
    0.0   0.0   0.0   0.0  0.0  0.0   0.0   0.0   0   0 c4t1d0p3
    0.0   0.0   0.0   0.0  0.0  0.0   0.0   0.0   0   0 c4t1d0p4
    1.2   0.3   1.4   0.0  0.0  0.0   0.0   0.0   0   0 c4t2d0
    0.0   0.0   0.6   0.0  0.0  0.0   0.0   0.4   0   0 c4t2d0s2
    0.0   0.0   0.7   0.0  0.0  0.0   0.0   0.4   0   0 c4t2d0s8
    0.1   0.0   0.0   0.0  0.0  0.0   0.0   0.2   0   0 c4t2d0p0
    0.0   0.0   0.0   0.0  0.0  0.0   0.0   0.0   0   0 c4t2d0p1
    0.0   0.0   0.0   0.0  0.0  0.0   0.0   0.0   0   0 c4t2d0p2

then issued

# zpool add tank0 cache /dev/dsk/c4t1d0p1 /dev/dsk/c4t2d0p1
# zpool add tank0 log mirror /dev/dsk/c4t1d0p0 /dev/dsk/c4t2d0p0

which resulted in

root@oizfs:~# zpool status
  pool: rpool
state: ONLINE
  scan: scrub repaired 0 in 0h1m with 0 errors on Wed Jan  2 21:09:23 2013
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c4t3d0s0  ONLINE   0 0 0

errors: No known data errors

  pool: tank0
state: ONLINE
  scan: scrub repaired 0 in 5h17m with 0 errors on Wed Jan  2 17:53:20 2013
config:

NAME   STATE READ WRITE CKSUM
tank0  ONLINE   0 0 0
  raidz3-0 ONLINE   0 0 0
c3t5000C500098BE9DDd0  ONLINE   0 0 0
c3t5000C50009C72C48d0  ONLINE   0 0 0
c3t5000C50009C73968d0  ONLINE   0 0 0
c3t5000C5000FD2E794d0  ONLINE   0 0 0
c3t5000C5000FD37075d0  ONLINE   0 0 0
c3t5000C5000FD39D53d0  ONLINE   0 0 0
c3t5000C5000FD3BC10d0  ONLINE   0 0 0
c3t5000C5000FD3E8A7d0  ONLINE   0 0 0
logs
  mirror-1 ONLINE   0 0 0
c4t1d0p0   ONLINE   0 0 0
c4t2d0p0   ONLINE   0 0 0
cache
  c4t1d0p1 ONLINE   0 0 0
  c4t2d0p1 ONLINE   0 0 0

errors: No known data errors

which resulted in bonnie++
befo':

NAME   SIZE   Bonnie  Date(y.m.d)  File    Seq-Wr-Chr  %CPU  Seq-Write  %CPU  Seq-Rewr  %CPU  Seq-Rd-Chr  %CPU  Seq-Read  %CPU  Rnd Seeks  %CPU  Files  Seq-Create  Rnd-Create
rpool  59.5G  start   2012.12.28   15576M  24 MB/s     61    47 MB/s    18    40 MB/s   19    26 MB/s     98    273 MB/s  48    2657.2/s   25    16     12984/s     12058/s
tank0  7.25T  start   2012.12.29   15576M  35 MB/s     86    145 MB/s   48    109 MB/s  50    25 MB/s     97    291 MB/s  53    819.9/s    12    16     12634/s     9194/s

aftuh:

-Wr-Chr  %CPU  Seq-Write  %CPU  Seq-Rewr  %CPU  Seq-Rd-Chr  %CPU  Seq-Read  %CPU  Rnd Seeks  %CPU  Files  Seq-Create  Rnd-Create
rpool

Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2012-12-30 Thread cindy swearingen
Existing Solaris 10 releases are not impacted. S10u11 isn't released yet so
I think
we can assume that this upcoming Solaris 10 release will include a
preventative fix.

Thanks, Cindy

On Thu, Dec 27, 2012 at 11:11 PM, Andras Spitzer wsen...@gmail.com wrote:

 Josh,

 You mention that Oracle is preparing patches for both Solaris 11.2 and
 S10u11, does that mean that the bug exist in Solaris 10 as well? I may be
 wrong but Cindy mentioned the bug is only in Solaris 11.

 Regards,
 sendai



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs receive options (was S11 vs illumos zfs compatiblity)

2012-12-21 Thread Cindy Swearingen

Hi Ned,

Which man page are you referring to?

I see the zfs receive -o syntax in the S11 man page.

The bottom line is that not all properties can be set on the
receiving side and the syntax is one property setting per -o
option.

See below for several examples.

Thanks,

Cindy

I don't think version is a property that can be set on the
receiving side. The version must be specified when the file
system is created:

# zfs create -o version=5 tank/joe

You can't change blocksize on the receiving side either because
it is set during the I/O path. You can use shadow migration to
migrate a file system's blocksize.
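
As an aside (not from the original message), shadow migration on Solaris 11
is driven by the shadow property at file system creation time. A rough
sketch, assuming the old data is mounted at /tank/olddata and the new file
system should use an 8K recordsize:

# zfs create -o recordsize=8k -o shadow=file:///tank/olddata pond/newdata
# shadowstat    # monitor migration progress, if the utility is installed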

This syntax errors out because the supported syntax is one -o
property=value setting per option, not a comma-separated list of properties.

# zfs send tank/home/cindy@now | zfs receive -o
compression=on,sync=disabled pond/cindy.backup
cannot receive new filesystem stream: 'compression' must be one of 'on
| off | lzjb | gzip | gzip-[1-9] | zle'

Set multiple properties like this:

# zfs send tank/home/cindy@now | zfs receive -o compression=on -o
sync=disabled pond/cindy.backup2

Enabling compression on the receiving side works, but verifying
the compression can't be done with ls.

The data is compressed on the receiving side:

# zfs list -r pond | grep data
pond/cdata   168K  63.5G   168K  /pond/cdata
pond/nocdata 289K  63.5G   289K  /pond/nocdata

# zfs send -p pond/nocdata@snap1 | zfs recv -Fo compression=on rpool/cdata

# zfs get compression pond/nocdata
NAME  PROPERTY VALUE  SOURCE
pond/nocdata  compression  offdefault
# zfs get compression rpool/cdata
NAME PROPERTY VALUE  SOURCE
rpool/cdata  compression  on local

You can't see the compressed size with the ls command:

# ls -lh /pond/nocdata/file.1
-r--r--r--   1 root root  202K Dec 21 13:52 /pond/nocdata/file.1
# ls -lh /rpool/cdata/file.1
-r--r--r--   1 root root  202K Dec 21 13:52 /rpool/cdata/file.1

You can see the size difference with zfs list:

# zfs list -r pond rpool | grep data
pond/cdata  168K  63.5G   168K  /pond/cdata
pond/nocdata289K  63.5G   289K  /pond/nocdata
rpool/cdata 168K  47.6G   168K  /rpool/cdata

You can also see the size differences with du -h:

# du -h pond/nocdata/file.1
 258K   pond/nocdata/file.1
# du -h rpool/cdata/file.1
 137K   rpool/cdata/file.1
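
Not shown above, but the compressratio property is another quick way to
confirm the setting took effect on the receiving side:

# zfs get compressratio pond/nocdata rpool/cdata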


On 12/21/12 11:41, Edward Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey

zfs send foo/bar@42 | zfs receive -o compression=on,sync=disabled biz/baz

I have not yet tried this syntax.  Because you mentioned it, I looked for it in
the man page, and because it's not there, I hesitate before using it.


Also, readonly=on


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool performance when nearly full

2012-12-20 Thread Cindy Swearingen

Hi Sol,

You can review the Solaris 11 ZFS best practices info, here:

http://docs.oracle.com/cd/E26502_01/html/E29007/practice-1.html#scrolltoc

The above section also provides info about the full pool performance
penalty.

For S11 releases, we're going to increase the 80% pool capacity
recommendation to 90%.
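
Not in the original reply, but a quick way to watch how close each pool is
getting to those thresholds on a current Solaris release:

# zpool list -o name,size,allocated,free,capacity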

Pool/file system space accounting is dependent on the type
of pool that you can read about, here:

http://docs.oracle.com/cd/E26502_01/html/E29007/gbbti.html#scrolltoc

Thanks,

Cindy


On 12/20/12 10:25, sol wrote:

Hi

I know some of this has been discussed in the past but I can't quite
find the exact information I'm seeking
(and I'd check the ZFS wikis but the websites are down at the moment).

Firstly, which is correct, free space shown by zfs list or by zpool
iostat ?

zfs list:
used 50.3 TB, free 13.7 TB, total = 64 TB, free = 21.4%

zpool iostat:
used 61.9 TB, free 18.1 TB, total = 80 TB, free = 22.6%

(That's a big difference, and the percentage doesn't agree)

Secondly, there's 8 vdevs each of 11 disks.
6 vdevs show used 8.19 TB, free 1.81 TB, free = 18.1%
2 vdevs show used 6.39 TB, free 3.61 TB, free = 36.1%

I've heard that
a) performance degrades when free space is below a certain amount
b) data is written to different vdevs depending on free space

So a) how do I determine the exact value when performance degrades and
how significant is it?
b) has that threshold been reached (or exceeded?) in the first six vdevs?
and if so are the two emptier vdevs being used exclusively to prevent
performance degrading
so it will only degrade when all vdevs reach the magic 18.1% free (or
whatever it is)?

Presumably there's no way to identify which files are on which vdevs in
order to delete them and recover the performance?

Thanks for any explanations!




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2012-12-19 Thread Cindy Swearingen

Hi Everyone,

I was mistaken. The ZFSSA is not impacted by this bug.

I provided a set of steps below to help identify this problem.

If you file an SR, an IDR can be applied. Otherwise, you will
need to wait for the SRU.

Thanks,

Cindy

If you are running S11 or S11.1 and you have a ZFS storage
pool with separate cache devices, consider running these
steps to identify whether your pool is impacted.

Until an IDR is applied or the SRU is available, remove the
cache devices.

1. Export the pool.

This step is necessary because zdb needs to be run on a
quiet pool.

# zpool export pool-name

2. Run zdb to identify space map inconsistencies.

# zdb -emm pool-name

3. Based on running zdb, determine your next step:

A. If zdb completes successfully, scrub the pool.

# zpool import pool-name
# zpool scrub pool-name

If scrubbing the pool finds no issues, then your pool
is most likely not impacted by this problem.

If scrubbing the pool finds permanent metadata errors,
then you should open an SR.

B. If zdb doesn't complete successfully, open an SR.
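
For convenience, the same sequence as a single hedged sketch, using a
placeholder pool name tank; zdb can take a long time on a large pool, and a
failure in the zdb step is the cue to open an SR rather than to scrub:

# zpool export tank
# zdb -emm tank && echo "space maps look consistent"
# zpool import tank
# zpool scrub tank
# zpool status -v tank    # check for permanent errors after the scrub completes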




On 12/18/12 09:45, Cindy Swearingen wrote:

Hi Sol,

The appliance is affected as well.

I apologize. The MOS article is for internal diagnostics.

I'll provide a set of steps to identify this problem
as soon as I understand them better.

Thanks, Cindy

On 12/18/12 05:27, sol wrote:

*From:* Cindy Swearingen cindy.swearin...@oracle.com
No doubt. This is a bad bug and we apologize.
1. If you are running Solaris 11 or Solaris 11.1 and have separate
cache devices, you should remove them to avoid this problem.

How is the 7000-series storage appliance affected?

2. A MOS knowledge article (1497293.1) is available to help diagnose
this problem.

MOS isn't able to find this article when I search for it.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2012-12-18 Thread Cindy Swearingen

Hi Sol,

The appliance is affected as well.

I apologize. The MOS article is for internal diagnostics.

I'll provide a set of steps to identify this problem
as soon as I understand them better.

Thanks, Cindy

On 12/18/12 05:27, sol wrote:

*From:* Cindy Swearingen cindy.swearin...@oracle.com
No doubt. This is a bad bug and we apologize.
1. If you are running Solaris 11 or Solaris 11.1 and have separate
cache devices, you should remove them to avoid this problem.

How is the 7000-series storage appliance affected?

2. A MOS knowledge article (1497293.1) is available to help diagnose
this problem.

MOS isn't able to find this article when I search for it.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2012-12-17 Thread Cindy Swearingen

Hi Jamie,

No doubt. This is a bad bug and we apologize.

The email quoted below contains a misconception that this bug is related
to the VM2 project. It is not. It's related to a problem that was
introduced in the ZFS ARC code.

If you would send me your SR number privately, we can work with the
support person to correct this misconception.

We agree with Thomas's advice that you should remove separate cache
devices to help alleviate this problem.

To summarize:

1. If you are running Solaris 11 or Solaris 11.1 and have separate
cache devices, you should remove them to avoid this problem.

When the SRU that fixes this problem is available, apply the SRU.

Solaris 10 releases are not impacted.

2. A MOS knowledge article (1497293.1) is available to help diagnose
this problem.

3. File a MOS SR to get access to the IDR.

4. We hope to have the SRU information available in a few days.

Thanks, Cindy

On 12/12/12 11:21, Jamie Krier wrote:

I've hit this bug on four of my Solaris 11 servers. Looking for anyone
else who has seen it, as well as comments/speculation on cause.

This bug is pretty bad.  If you are lucky you can import the pool
read-only and migrate it elsewhere.

I've also tried setting zfs:zfs_recover=1,aok=1 with varying results.


http://docs.oracle.com/cd/E26502_01/html/E28978/gmkgj.html#scrolltoc


Hardware platform:

Supermicro X8DAH

144GB ram

Supermicro sas2 jbods

LSI 9200-8e controllers (Phase 13 fw)

Zuesram log

ZuesIops sas l2arc

Seagate ST33000650SS sas drives


All four servers are running the same hardware, so at first I suspected
a problem there.  I opened a ticket with Oracle which ended with this email:

-

We strongly expect that this is a software issue because this problem does
not happen on Solaris 10. On Solaris 11, it happens with both the SPARC and
the X64 versions of Solaris.

We have quite a few customers who have seen this issue and we are in the
process of working on a fix. Because we do not know the source of the
problem yet, I cannot speculate on the time to fix. This particular portion
of Solaris 11 (the virtual memory sub-system) is quite different than in
Solaris 10. We re-wrote the memory management in order to get ready for
systems with much more memory than Solaris 10 was designed to handle.

Because this is the memory management system, there is not expected to be
any work-around.

Depending on your company's requirements, one possibility is to use
Solaris 10 until this issue is resolved.

I apologize for any inconvenience that this bug may cause. We are working
on it as a Sev 1 Priority 1 in sustaining engineering.

-


I am thinking about switching to an Illumos distro, but wondering if
this problem may be present there as well.


Thanks


- Jamie




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The format command crashes on 3TB disk but zpool create ok

2012-12-14 Thread Cindy Swearingen

Hey Sol,

Can you send me the core file, please?

I would like to file a bug for this problem.

Thanks, Cindy

On 12/14/12 02:21, sol wrote:

Here it is:

# pstack core.format1
core 'core.format1' of 3351: format
- lwp# 1 / thread# 1 
0806de73 can_efi_disk_be_expanded (0, 1, 0, ) + 7
08066a0e init_globals (8778708, 0, f416c338, 8068a38) + 4c2
08068a41 c_disk (4, 806f250, 0, 0, 0, 0) + 48d
0806626b main (1, f416c3b0, f416c3b8, f416c36c) + 18b
0805803d _start (1, f416c47c, 0, f416c483, f416c48a, f416c497) + 7d
- lwp# 2 / thread# 2 
eed690b1 __door_return (0, 0, 0, 0) + 21
eed50668 door_create_func (0, eee02000, eea1efe8, eed643e9) + 32
eed6443c _thrp_setup (ee910240) + 9d
eed646e0 _lwp_start (ee910240, 0, 0, 0, 0, 0)
- lwp# 3 / thread# 3 
eed6471b __lwp_park (8780880, 8780890) + b
eed5e0d3 cond_wait_queue (8780880, 8780890, 0, eed5e5f0) + 63
eed5e668 __cond_wait (8780880, 8780890, ee90ef88, eed5e6b1) + 89
eed5e6bf cond_wait (8780880, 8780890, 208, eea740ad) + 27
eea740f8 subscriber_event_handler (8778dd0, eee02000, ee90efe8,
eed643e9) + 5c
eed6443c _thrp_setup (ee910a40) + 9d
eed646e0 _lwp_start (ee910a40, 0, 0, 0, 0, 0)



*From:* John D Groenveld jdg...@elvis.arl.psu.edu
# pstack core




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] VXFS to ZFS

2012-12-05 Thread Cindy Swearingen

Hi Morris,

I hope someone has done this recently and can comment, but the process
is mostly manual and it will depend on how much gear you have.

For example, if you have some extra disks, you can build a minimal ZFS
storage pool to hold the bulk of your data. Then, you can do a live
migration of data from the existing VxFS config to the new ZFS pool by
using rsync or your favorite tool.
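
Not in the original reply, but a hedged sketch of the rsync step, assuming
the VxFS file system is mounted at /vxfs/data and the new ZFS file system at
/newpool/data; ACLs and extended attributes may need extra flags or a
different tool depending on your rsync build:

# rsync -aH /vxfs/data/ /newpool/data/   # first pass copies the bulk of the data
# rsync -aH /vxfs/data/ /newpool/data/   # second pass during a quiet window picks up changes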

After the data migration is complete, you can tear down the VxFS config
and add the disks to expand the minimal ZFS storage pool.

If you don't have enough disks to do a live data migration, then the
steps are: back up the data, tear down the VxFS config, create the
ZFS storage pool, and restore the data.

In the Solaris 11 release, I think you could live migrate the data by
using shadow migration.

Thanks, Cindy

On 12/05/12 15:11, Morris Hooten wrote:

Is there a documented way or suggestion on how to migrate data from VXFS
to ZFS?



Thanks

Morris



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Segfault running zfs create -o readonly=off tank/test on Solaris 11 Express 11/11

2012-10-23 Thread Cindy Swearingen

Hi Andreas,

Which release is this... Can you provide the /etc/release info?

It works fine for me on a S11 Express (b162) system:

# zfs create -o readonly=off pond/amy

# zfs get readonly pond/amy
NAME  PROPERTY  VALUE   SOURCE
pond/amy  readonly  off local

This is somewhat redundant syntax since readonly is off by default.

Thanks, Cindy

On 10/23/12 13:57, Andreas Erz wrote:

Hi,

I have tried running

zfs create -o readonly=off tank/test

on two different Solaris 11 Express 11/11 (x86) machines resulting in
segfaults. Can anybody verify this behavior? Or is this some
idiosyncrasy of my configuration?

Any help would be appreciated.

Regards,
Andreas

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sudden and Dramatic Performance Drop-off

2012-10-04 Thread Cindy Swearingen

Hi Charles,

Yes, a faulty or failing disk can kill performance.

I would see if FMA has generated any faults:

# fmadm faulty

Or, if any of the devices are collecting errors:

# fmdump -eV | more
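
Not part of the original reply, but live service times can also point at a
single struggling device:

# iostat -xnz 5    # look for one device with a much higher asvc_t or %b than its peers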

Thanks,

Cindy

On 10/04/12 11:22, Knipe, Charles wrote:

Hey guys,

I’ve run into another ZFS performance disaster that I was hoping someone
might be able to give me some pointers on resolving. Without any
significant change in workload write performance has dropped off
dramatically. Based on previous experience we tried deleting some files
to free space, even though we’re not near 60% full yet. Deleting files
seemed to help for a little while, but now we’re back in the weeds.

We already have our metaslab_min_alloc_size set to 0x500, so I’m
reluctant to go lower than that. One thing we noticed, which is new to
us, is that zio_state shows a large number of threads in
CHECKSUM_VERIFY. I’m wondering if that’s generally indicative of
anything in particular. I’ve got no errors on any disks, either in zpool
status or iostat -e. Any ideas as to where else I might want to dig in
to figure out where my performance has gone?

Thanks

-Charles




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Missing disk space

2012-08-03 Thread Cindy Swearingen

You said you're new to ZFS, so you might consider using zpool list
and zfs list rather than df -k to reconcile your disk space.

In addition, your pool type (mirrored or RAIDZ) provides a
different space perspective in zpool list that is not always
easy to understand.

http://docs.oracle.com/cd/E23824_01/html/E24456/filesystem-6.html#scrolltoc

See these sections:

Displaying ZFS File System Information
Resolving ZFS File System Space Reporting Issues
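
Not in the original reply, but a minimal set of commands for reconciling the
numbers, using the pool name localpool from the report; the -o space output
breaks usage down into data, snapshots, and reservations:

# zpool list localpool
# zfs list -o space -r localpool
# zfs list -t snapshot -r localpool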

Let us know if this doesn't help.

Thanks,

Cindy

On 08/03/12 16:00, Burt Hailey wrote:

I seem to be missing a large amount of disk space and am not sure how to
locate it. My pool has a total of 1.9TB of disk space. When I run df -k
I see that the pool is using ~650GB of space and has only ~120GB
available. Running zfs list shows that my pool (localpool) is using
1.67T. When I total up the amount of snapshots I see that they are using
250GB. Unless I’m missing something it appears that there is ~750GB of
disk space that is unaccounted for. We do hourly snapshots. Two days ago
I deleted 100GB of data and did not see a corresponding increase in
snapshot sizes. I’m new to zfs and am reading the zfs admin handbook but
I wanted to post this to get some suggestions on what to look at.

Burt Hailey




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] online increase of zfs after LUN increase ?

2012-08-01 Thread Cindy Swearingen

Hi--

If the S10 patch is installed on this system...

Can you remind us if you ran the zpool online -e command after the
LUN is expanded and the autoexpand property is set?

I hear that some storage doesn't generate the correct codes in
response to a LUN expansion, so you might need to run this command
even if autoexpand is set.
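
For reference, a minimal sketch of the sequence, assuming the pool is named
tank and the expanded LUN is c0t1d0 (substitute your own names):

# zpool set autoexpand=on tank
# zpool online -e tank c0t1d0
# zpool list tank    # verify that the new size is visible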

Thanks,

Cindy



On 07/26/12 07:04, Habony, Zsolt wrote:

There is bug what I mentioned:  SUNBUG:6430818  Solaris Does Not Automatically 
Handle an Increase in LUN Size
Patch for that is: 148098-03

Its readme says:
Synopsis: Obsoleted by: 147440-15 SunOS 5.10: scsi patch

Looking at current version 147440-21, there is reference for the incorporated 
patch, and for the bug id as well.

(from 148098-03)

6228435 undecoded command in var/adm/messages - Error for Command: undecoded 
cmd 0x5a
6241086 format should allow label adjustment when disk/LUN size changes
6430818 Solaris needs mechanism of dynamically increasing LUN size


-Original Message-
From: Hung-Sheng Tsao (LaoTsao) Ph.D [mailto:laot...@gmail.com]
Sent: 2012. július 26. 14:49
To: Habony, Zsolt
Cc: Cindy Swearingen; Sašo Kiselkov; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] online increase of zfs after LUN increase ?

imho, the 147440-21 does not list the bugs that solved by 148098- even through 
it obsoletes the 148098





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] online increase of zfs after LUN increase ?

2012-07-25 Thread Cindy Swearingen

Hi--

Patches are available to fix this so I would suggest that you
request them from MOS support.

This fix fell through the cracks and we tried really hard to
get it in the current Solaris 10 release but sometimes things
don't work in your favor. The patches are available though.

Relabeling disks on a live pool is not a recommended practice,
so let's review other options, but first some questions:

1. Is this a redundant pool?

2. Do you have an additional LUN (equivalent size) that you
could use as a spare?

What you could do is replace the existing LUN with a larger
LUN, if available. Then reattach the original LUN and detach
the spare LUN, but this depends on your pool configuration.
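
A rough sketch of the replace route under those assumptions, with
placeholder device names; wait for the resilver to complete before
detaching or reusing anything:

# zpool replace pool-name small-LUN larger-LUN
# zpool status pool-name    # watch the resilver finish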

If requesting the patches is not possible and you don't have
a spare LUN, then please contact me directly. I might be able
to walk you through a more manual process.

Thanks,

Cindy


On 07/25/12 09:49, Habony, Zsolt wrote:

Hello,
There is a feature of zfs (autoexpand, or zpool online -e ) that it can 
consume the increased LUN immediately and increase the zpool size.
That would be a very useful ( vital ) feature in enterprise environment.

Though when I tried to use it, it did not work.  LUN expanded and visible in 
format, but zpool did not increase.
I found a bug SUNBUG:6430818 (Solaris Does Not Automatically Handle an Increase 
in LUN Size)
Bad luck.

Patch exists: 148098 but _not_ part of recommended patch set.  Thus my fresh 
install Sol 10 U9 with latest patch set still has the problem.  ( Strange that 
this problem
  is not considered high impact ... )

It mentions a workaround: zpool export, re-label the LUN using the format(1M)
command, then zpool import.

Can you pls. help in that, what does that re-label mean ?
(As I need to ask downtime for the zone now ... , would like to prepare for 
what I need to do )

I used format utility in thousands of times, for organizing partitions, though I have no 
idea how I would relabel a disk.
Also I did not use format to label the disks, I gave the LUN to zpool directly, 
I would not dare to touch or resize any partition with format utility, not 
knowing what zpool wants to see there.

Have you experienced such problem, and do you know how to increase zpool after 
a LUN increase ?

Thank you in advance,
Zsolt Habony





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] online increase of zfs after LUN increase ?

2012-07-25 Thread Cindy Swearingen

Hi--

I guess I can't begin to understand patching.

Yes, you provided a whole disk to zpool create but it actually
creates a part(ition) 0 as you can see in the output below.

Part  TagFlag First SectorSizeLast Sector
   0  usrwm   256  19.99GB  41927902

Part  TagFlag First Sector Size Last Sector
0 usrwm   256   99.99GB  209700062

I'm sorry you had to recreate the pool. This *is* a must-have feature
and it is working as designed in Solaris 11 and with patch 148098-3 (or
whatever the equivalent is) in Solaris 10 as well.

Maybe its time for me to recheck this feature in current Solaris 10
bits.

Thanks,

Cindy



On 07/25/12 16:14, Habony, Zsolt wrote:

Thank you for your replies.

First, sorry for misleading info.  Patch 148098-03  indeed not included in 
recommended set, but trying to download it shows that 147440-15 obsoletes it
and 147440-19 is included in latest recommended patch set.
Thus time solves the problem elsewhere.

Just for fun, my case was:

A standard LUN used as a zfs filesystem, no redundancy (as storage already 
has), and no partition is used, disk is given directly to zpool.
# zpool status -oraarch
   pool: -oraarch
  state: ONLINE
  scan: none requested
config:

 NAME STATE READ WRITE CKSUM
 xx-oraarch   ONLINE   0 0 0
   c5t60060E800570B90070B96547d0  ONLINE   0 0 0

errors: No known data errors

Partitioning shows this.

partition  pr
Current partition table (original):
Total disk sectors available: 41927902 + 16384 (reserved sectors)

Part  TagFlag First SectorSizeLast Sector
   0usrwm   256  19.99GB 41927902
   1 unassignedwm 0  0  0
   2 unassignedwm 0  0  0
   3 unassignedwm 0  0  0
   4 unassignedwm 0  0  0
   5 unassignedwm 0  0  0
   6 unassignedwm 0  0  0
   8   reservedwm  41927903   8.00MB 41944286


As I mentioned I did not partition it, zpool create did.  I had absolutely no 
idea how to resize these partitions, where to get the available number of sectors and how 
many should be skipped and reserved ...
Thus I backed up the 10G, destroyed zpool, created zpool (size was fine now) , 
restored data.

Partition looks like this now, I do not think I could have created it easily 
manually.

partition  pr
Current partition table (original):
Total disk sectors available: 209700062 + 16384 (reserved sectors)

Part  TagFlag First Sector Size Last Sector
   0usrwm   256   99.99GB  209700062
   1 unassignedwm 0   0   0
   2 unassignedwm 0   0   0
   3 unassignedwm 0   0   0
   4 unassignedwm 0   0   0
   5 unassignedwm 0   0   0
   6 unassignedwm 0   0   0
   8   reservedwm 2097000638.00MB  209716446

Thank you for your help.
Zsolt Habony




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Has anyone switched from IR -> IT firmware on the fly ? (existing zpool on LSI 9211-8i)

2012-07-18 Thread Cindy Swearingen

Here's a better link below.

I have seen enough bad things happen to pool devices when hardware is
changed or firmware is updated that I recommend exporting the pool
first, even for an HBA firmware update.

Either shutting down the system (where the pool is hosted) or exporting
the pool should do it.

Always have good backups.

Thanks,

Cindy

http://docs.oracle.com/cd/E23824_01/html/821-1448/gcfog.html#scrolltoc

Considerations for ZFS Storage Pools - see the last bullet
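
A minimal sketch of the conservative sequence described above, using the
MYPOOL name from the original post:

# zpool export MYPOOL
(shut down, flash the HBAs to IT firmware, boot)
# zpool import MYPOOL
# zpool status MYPOOL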

On 07/17/12 18:47, Damon Pollard wrote:

Correct.

LSI 1068E has IR and IT firmwares + I have gone from IR -> IT and IT ->
IR without hassle.

Damon Pollard


On Wed, Jul 18, 2012 at 8:13 AM, Jason Usher jushe...@yahoo.com wrote:


Ok, and your LSI 1068E also had alternate IR and IT firmwares, and
you went from IR -> IT ?

Is that correct ?

Thanks.


--- On Tue, 7/17/12, Damon Pollard damon.poll...@birchmangroup.com wrote:

From: Damon Pollard damon.poll...@birchmangroup.com
Subject: Re: [zfs-discuss] Has anyone switched from IR -> IT
firmware on the fly ? (existing zpool on LSI 9211-8i)
To: Jason Usher jushe...@yahoo.com
Cc: zfs-discuss@opensolaris.org
Date: Tuesday, July 17, 2012, 5:05 PM

Hi Jason,
I have done this in the past. (3x LSI 1068E - IBM BR10i).
Your pool has no tie with the hardware used to host it (including
your HBA). You could change all your hardware, and still import your
pool correctly.

If you really want to be on the safe side; you can export your pool
before the firmware change and then import when
your satisfied the firmware change is complete.
Export: http://docs.oracle.com/cd/E19082-01/817-2271/gazqr/index.html
Import: http://docs.oracle.com/cd/E19082-01/817-2271/gazuf/index.html
Damon Pollard


On Wed, Jul 18, 2012 at 6:14 AM, Jason Usher jushe...@yahoo.com wrote:

We have a running zpool with a 12 disk raidz3 vdev in it ... we gave
ZFS the full, raw disks ... all is well.



However, we built it on two LSI 9211-8i cards and we forgot to
change from IR firmware to IT firmware.



Is there any danger in shutting down the OS, flashing the cards to
IT firmware, and then booting back up ?



We did not create any raid configuration - as far as we know, the
LSI cards are just passing through the disks to ZFS ... but maybe not ?



I'd like to hear of someone else doing this successfully before we
try it ...





We created the zpool with raw disks:



zpool create -m /mount/point MYPOOL raidz3 da{0,1,2,3,4,5,6,7,8,9,10,11}



and diskinfo tells us that each disk is:



da1 512 3000592982016   5860533168



The physical label (the sticker) on the disk also says 5860533168
sectors ... so that seems to line up ...





Someone else in the world has made this change while inflight and
can confirm ?



Thanks.









___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [osol-discuss] Creating NFSv4/ZFS XATTR through dirfd through /proc not allowed?

2012-07-16 Thread Cindy Swearingen

I speak for myself... :-)

If the real bug is in procfs, I can file a CR.

When xattrs were designed right down the hall from me,
I don't think /proc interactions were considered, which
is why I mentioned an RFE.

Thanks,

Cindy




On 07/15/12 15:59, Cedric Blancher wrote:

On 14 July 2012 02:33, Cindy Swearingen cindy.swearin...@oracle.com wrote:

I don't think that xattrs were ever intended or designed
for /proc content.

I could file an RFE for you if you wish.


So Oracle Newspeak now calls it an RFE if you want a real bug fixed, huh? ;-)

This is a real bug in procfs. Problem is, procfs can't do name-based
access checking because the directory has no path and comes back with
EACCESS. Same problem can happen with smbfs if the files no longer
exist on the server but the client still has an open filehandle to it
and a different process tries to access it through
/proc/$pid/fd/$fdnum. The advantage of Olga's testcase is that you
don't need a tricky smbfs/samba setup to reproduce.

Ced

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [osol-discuss] Creating NFSv4/ZFS XATTR through dirfd through /proc not allowed?

2012-07-13 Thread Cindy Swearingen

I don't think that xattrs were ever intended or designed
for /proc content.

I could file an RFE for you if you wish.

Thanks,

Cindy

On 07/13/12 14:00, ольга крыжановская wrote:

Yes, accessing the files through runat works.

I think /proc (and /dev/fd, which has the same trouble but only works
if the same process accesses the fds, for obvious reasons since
/dev/fd is per process and can not be shared between processes unlike
/proc/$pid/fd/) gets confused because the directories have no name.
pfiles gets confused in a similar way and some times crashes, but
without a predictable pattern or test case.

As interestingly side note, doing a cd to the /proc/$$/fd/$fd first works:
 cut here 
touch x4 ; cd -@ x4 ; redirect {n}<. ; cd .. ;
(cd /proc/$$/fd/$n ; print hello1 >myxattr) ;
(cd -@ x4 ; cat myxattr ) ;
rm x4
 stop cutting here 
Accessing the file with the full path directly does not work:
 cut here 
touch x1 ; cd -@ x1 ; redirect {n}<. ; cd .. ;
print hello1 >/proc/$$/fd/$n/myxattr1 ;
(cd -@ x1 ; cat myxattr1 ) ;
rm x1
 stop cutting here 

Olga

On Fri, Jul 13, 2012 at 9:17 PM, Gordon Rossgordon.w.r...@gmail.com  wrote:

On Fri, Jul 13, 2012 at 2:16 AM, ольга крыжановская
olga.kryzhanov...@gmail.com  wrote:

Can some one here explain why accessing a NFSv4/ZFS xattr directory
through proc is forbidden?


[...]

truss says the syscall fails with
open(/proc/3988/fd/10/myxattr, O_WRONLY|O_CREAT|O_TRUNC, 0666) Err#13 EACCES

Accessing files or directories through /proc/$$/fd/ from a shell
otherwise works, only the xattr directories cause trouble. Native C
code has the same problem.

Olga


Does runat let you see those xattr files?

--
Gordon Rossg...@nexenta.com
Nexenta Systems, Inc.  www.nexenta.com
Enterprise class storage for everyone





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding ZFS recovery

2012-07-12 Thread Cindy Swearingen

Hi Rich,

I don't think anyone can say definitively how this problem resolved,
but I believe that the dd command overwrote some of the disk label,
as you describe below.

Your format output below looks like you relabeled the disk and maybe
that was enough to resolve this problem.

I have had success with just relabeling the disk in an active pool,
when I accidentally trampled it with the wrong command.

You could try to use zpool clear to clear the DEGRADED device.
Possibly, scrub again and clear as needed.
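
Concretely, using the device names from the status output quoted below,
that would look something like this; re-run the clear if the errors return:

# zpool clear data2 c2t0d0s0
# zpool scrub data2
# zpool status -v data2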

Thanks,

Cindy

On 07/12/12 08:33, RichTea wrote:

 How did you decide it is okay and that zfs saved you? Did you
 NOT post some further progress in your recovery?

I made no further recovery attempts,  the pool imported cleanly after
rebooting, or so i thought [1] as a zpool status showed no errors and i
could read data from the drive again.



On Thu, Jul 12, 2012 at 2:35 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:

  From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
  boun...@opensolaris.org] On Behalf Of Jim Klimov
 
  Purely speculating, I might however suggest that your disk was
  dedicated to the pool completely, so its last blocks contain
  spare uberblocks (zpool labels) and that might help ZFS detect
  and import the pool -

Certain types of data have multiple copies on disk.  I have
overwritten the
first 1MB of a disk before, and then still been able to import the
pool, so
I suspect, with a little effort, you'll be able to import your pool
again.

After the pool is imported, of course, some of your data is very
likely to
be corrupt.  ZFS should be able to detect it, because the checksum won't
match.  You should run a scrub.



[1]  Ok i have run a scrub on the pool and is now being reported as
being in DEGRADED status again.

I did think it was strange that the zpool had magically recovered itself:

root@n36l:~# zpool status data2
   pool: data2
  state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
 attempt was made to correct the error.  Applications are
unaffected.
action: Determine if the device needs to be replaced, and clear the errors
 using 'zpool clear' or replace the device with 'zpool replace'.
see: http://www.sun.com/msg/ZFS-8000-9P
   scan: scrub repaired 0 in 0h26m with 0 errors on Thu Jul 12 15:07:47 2012
config:

 NAMESTATE READ WRITE CKSUM
 data2   DEGRADED 0 0 0
   c2t0d0s0  DEGRADED 0 0 0  too many errors

errors: No known data errors


At least it is letting me access data for now, i guess the only fix is
to migrate data off and then rebuild the disk.

--
Ritchie


You'll be able to produce a list of all the partially-corrupted
files.  Most
likely, you'll just want to rm those files, and then you'll know you
have
good files, whatever is still left.





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks

2012-06-15 Thread Cindy Swearingen

Hi Hans,

It's important to identify your OS release to determine if
booting from a 4k disk is supported.

Thanks,

Cindy



On 06/15/12 06:14, Hans J Albertsson wrote:

I've got my root pool on a mirror on 2 512 byte blocksize disks.
I want to move the root pool to two 2 TB disks with 4k blocks.
The server only has room for two disks. I do have an esata connector,
though, and a suitable external cabinet for connecting one extra disk.

How would I go about migrating/expanding the root pool to the larger
disks so I can then use the larger disks for booting?

I have no extra machine to use.



Skickat från min Android Mobil



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Spare drive inherited cksum errors?

2012-05-29 Thread Cindy Swearingen

Hi--

You don't say what release this is, but I think the checksum
error accumulation shown on the spare was a zpool status formatting bug
that I have seen myself. This is fixed in a later Solaris release.

Thanks,

Cindy

On 05/28/12 22:21, Stephan Budach wrote:

Hi all,

just to wrap this issue up: as FMA didn't report any other error than
the one which led to the degradation of the one mirror, I detached the
original drive from the zpool which flagged the mirror vdev as ONLINE
(although there was still a cksum error count of 23 on the spare drive).

Afterwards I attached the formerly degraded drive again to the good
drive in that mirror and let the resilver finish, which didn't show any
errors at all. Finally I detached the former spare drive and re-added it
as a spare drive again.

Now, I will run a scrub once more to veryfy the zpool.

Cheers,
budy

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] slow zfs send

2012-05-07 Thread Cindy Swearingen

Hi Karl,

I'd like to verify that no dead or dying disk is killing pool
performance, and your zpool status looks good. Jim has replied
with some ideas to check your individual device performance.

Otherwise, you might be impacted by this CR:

7060894 zfs recv is excruciatingly slow

This CR covers both zfs send/recv ops and should be resolved
in an upcoming Solaris 10 release. It's already available in an
S11 SRU.

Thanks,

Cindy
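Two quick checks that can help narrow this down (dataset name and interval
are just examples):

# zfs send vdipool/somefs@snap | dd of=/dev/null bs=1024k   (raw send speed, with ssh and recv out of the picture)
# iostat -xn 5                                              (per-device %b and asvc_t while the send runs)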

On 5/7/12 10:45 AM, Karl Rossing wrote:

Hi,

I'm showing slow zfs send on pool v29. About 25MB/sec
bash-3.2# zpool status vdipool
  pool: vdipool
 state: ONLINE
 scan: scrub repaired 86.5K in 7h15m with 0 errors on Mon Feb  6 
01:36:23 2012

config:

NAME                       STATE     READ WRITE CKSUM
vdipool                    ONLINE       0     0     0
  raidz1-0                 ONLINE       0     0     0
    c0t5000C500103F2057d0  ONLINE       0     0     0  (SEAGATE-ST31000640SS-0003-931.51GB) Promise Jbod
    c0t5000C5000440AA0Bd0  ONLINE       0     0     0  (SEAGATE-ST31000640SS-0003-931.51GB) Promise Jbod
    c0t5000C500103E9FFBd0  ONLINE       0     0     0  (SEAGATE-ST31000640SS-0003-931.51GB) Promise Jbod
    c0t5000C500103E370Fd0  ONLINE       0     0     0  (SEAGATE-ST31000640SS-0003-931.51GB) Promise Jbod
    c0t5000C500103E120Fd0  ONLINE       0     0     0  (SEAGATE-ST31000640SS-0003-931.51GB) Promise Jbod
logs
  mirror-1                 ONLINE       0     0     0
    c0t500151795955D430d0  ONLINE       0     0     0  (ATA-INTEL SSDSA2VP02-02M5-18.64GB) onboard drive on x4140
    c0t500151795955BDB6d0  ONLINE       0     0     0  (ATA-INTEL SSDSA2VP02-02M5-18.64GB) onboard drive on x4140
cache
  c0t5001517BB271845Dd0    ONLINE       0     0     0  (ATA-INTEL SSDSA2CW16-0362-149.05GB) onboard drive on x4140
spares
  c0t5000C500103E368Fd0    AVAIL           (SEAGATE-ST31000640SS-0003-931.51GB) Promise Jbod


The drives are in an external promise 12 drive jbod. The jbod is also 
connected to another server that uses the other 6 SEAGATE ST31000640SS 
drives.


This is on Solaris 10 8/11 (Generic_147441-01). I'm using an LSI 9200 for 
the external Promise jbod and an internal 9200 for the ZIL and L2ARC, 
which also uses rpool.
FW versions on both cards are  MPTFW-12.00.00.00-IT and 
MPT2BIOS-7.23.01.00.


I'm wondering why the zfs send could be so slow.  Could the other 
server be slowing down the sas bus?


Karl




CONFIDENTIALITY NOTICE:  This communication (including all 
attachments) is
confidential and is intended for the use of the named addressee(s) 
only and

may contain information that is private, confidential, privileged, and
exempt from disclosure under law.  All rights to privilege are expressly
claimed and reserved and are not waived.  Any use, dissemination,
distribution, copying or disclosure of this message and any 
attachments, in
whole or in part, by anyone other than the intended recipient(s) is 
strictly
prohibited.  If you have received this communication in error, please 
notify

the sender immediately, delete this communication from all data storage
devices and destroy all hard copies.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] slow zfs send

2012-05-07 Thread Cindy Swearingen

Hi Karl,

Someone sitting across the table from me (who saw my posting)
informs me that CR 7060894 would not impact Solaris 10 releases,
so I kindly withdraw my comment about CR 7060894.

Thanks,

Cindy

On 5/7/12 11:35 AM, Cindy Swearingen wrote:

Hi Karl,

I'd like to verify that no dead or dying disk is killing pool
performance, and your zpool status looks good. Jim has replied
with some ideas to check your individual device performance.

Otherwise, you might be impacted by this CR:

7060894 zfs recv is excruciatingly slow

This CR covers both zfs send/recv ops and should be resolved
in an upcoming Solaris 10 release. It's already available in an
S11 SRU.

Thanks,

Cindy

On 5/7/12 10:45 AM, Karl Rossing wrote:

Hi,

I'm showing slow zfs send on pool v29. About 25MB/sec
bash-3.2# zpool status vdipool
  pool: vdipool
 state: ONLINE
 scan: scrub repaired 86.5K in 7h15m with 0 errors on Mon Feb  6 
01:36:23 2012

config:

NAME                       STATE     READ WRITE CKSUM
vdipool                    ONLINE       0     0     0
  raidz1-0                 ONLINE       0     0     0
    c0t5000C500103F2057d0  ONLINE       0     0     0  (SEAGATE-ST31000640SS-0003-931.51GB) Promise Jbod
    c0t5000C5000440AA0Bd0  ONLINE       0     0     0  (SEAGATE-ST31000640SS-0003-931.51GB) Promise Jbod
    c0t5000C500103E9FFBd0  ONLINE       0     0     0  (SEAGATE-ST31000640SS-0003-931.51GB) Promise Jbod
    c0t5000C500103E370Fd0  ONLINE       0     0     0  (SEAGATE-ST31000640SS-0003-931.51GB) Promise Jbod
    c0t5000C500103E120Fd0  ONLINE       0     0     0  (SEAGATE-ST31000640SS-0003-931.51GB) Promise Jbod
logs
  mirror-1                 ONLINE       0     0     0
    c0t500151795955D430d0  ONLINE       0     0     0  (ATA-INTEL SSDSA2VP02-02M5-18.64GB) onboard drive on x4140
    c0t500151795955BDB6d0  ONLINE       0     0     0  (ATA-INTEL SSDSA2VP02-02M5-18.64GB) onboard drive on x4140
cache
  c0t5001517BB271845Dd0    ONLINE       0     0     0  (ATA-INTEL SSDSA2CW16-0362-149.05GB) onboard drive on x4140
spares
  c0t5000C500103E368Fd0    AVAIL           (SEAGATE-ST31000640SS-0003-931.51GB) Promise Jbod


The drives are in an external promise 12 drive jbod. The jbod is also 
connected to another server that uses the other 6 SEAGATE 
ST31000640SS drives.


This is on Solaris 10 8/11 (Generic_147441-01). I'm using an LSI 9200 for 
the external Promise jbod and an internal 9200 for the ZIL and L2ARC, 
which also uses rpool.
FW versions on both cards are  MPTFW-12.00.00.00-IT and 
MPT2BIOS-7.23.01.00.


I'm wondering why the zfs send could be so slow.  Could the other 
server be slowing down the sas bus?


Karl




CONFIDENTIALITY NOTICE:  This communication (including all 
attachments) is
confidential and is intended for the use of the named addressee(s) 
only and

may contain information that is private, confidential, privileged, and
exempt from disclosure under law.  All rights to privilege are expressly
claimed and reserved and are not waived.  Any use, dissemination,
distribution, copying or disclosure of this message and any 
attachments, in
whole or in part, by anyone other than the intended recipient(s) is 
strictly
prohibited.  If you have received this communication in error, please 
notify

the sender immediately, delete this communication from all data storage
devices and destroy all hard copies.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool split failing

2012-04-17 Thread Cindy Swearingen

Hi Matt,

Regarding this issue:

As an aside, I have noticed that on the old laptop, it would not boot
if the USB part of the mirror was not attached to the laptop; a
successful boot could only be achieved when both mirror devices were
online. Is this a known issue with ZFS? A bug?

Which Solaris release is this? I see that related bugs are fixed, so I'm
not sure what is going on here.

I detach mirrored root pool disks and booting is not impacted. The
best method is to let ZFS know that the device is detached before
the reboot, like this:

# zpool detach rpool usb-disk

Thanks,

Cindy


On 04/17/12 04:47, Matt Keenan wrote:

Hi Cindy,

I tried out your example below in a vbox env, and detaching a device from
a pool makes that device simply unavailable; it cannot simply be
re-imported.

I then tried setting up a mirrored rpool within a vbox env (agreed, one
device is not USB); when booted into that rpool, split worked. I then
tried booting directly into the rpool on the faulty laptop, and split
still failed.

My only conclusions for the failure are:
- The rpool I'm attempting to split has a LOT of history, having been
around for some 2 years now, so it has gone through a lot of upgrades
etc.; there may be some ZFS history there that's not letting this happen.
BTW, the version is 33, which is current.
- Or is it possible that one of the devices being a USB device is
causing the failure? I don't know.

My reason for splitting the pool was so I could attach the clean USB
rpool to another laptop, simply attach the disk from the new laptop,
let it resilver, installgrub to the new laptop's disk device, and boot
it up, and I would be back in action.

As a workaround I'm trying to simply attach my USB rpool to the new
laptop and use zpool replace to effectively replace the offline device
with the new laptop's disk device. So far so good, 12% resilvering, so
fingers crossed this will work.

As an aside, I have noticed that on the old laptop, it would not boot if
the USB part of the mirror was not attached to the laptop; a successful
boot could only be achieved when both mirror devices were online. Is
this a known issue with ZFS? A bug?

cheers

Matt


On 04/16/12 10:05 PM, Cindy Swearingen wrote:

Hi Matt,

I don't have a way to reproduce this issue and I don't know why
this is failing. Maybe someone else does. I know someone who
recently split a root pool running the S11 FCS release without
problems.

I'm not a fan of root pools on external USB devices.

I haven't tested these steps in a while but you might try
these steps instead. Make sure you have a recent snapshot
of your rpool on the unhealthy laptop.

1. Ensure that the existing root pool and disks are healthy.

# zpool status -x

2. Detach the USB disk.

# zpool detach rpool disk-name

3. Connect the USB disk to the new laptop.

4. Force import the pool on the USB disk.

# zpool import -f rpool rpool2

5. Device cleanup steps, something like:

Boot from media and import rpool2 as rpool.
Make sure the device info is visible.
Reset BIOS to boot from this disk.
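As a rough sketch of step 5 from the media boot (the disk name is just an
example):

# zpool import -f rpool2 rpool     (import the USB pool under its original name)
# zpool status rpool               (confirm the device path is visible)
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0   (x86, if the boot blocks need refreshing)
# zpool export rpool               (then reboot from the disk itself)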

On 04/16/12 04:12, Matt Keenan wrote:

Hi

Attempting to split a mirrored rpool and fails with error :

Unable to split rpool: pool already exists


I have a laptop with main disk mirrored to an external USB. However as
the laptop is not too healthy I'd like to split the pool into two pools
and attach the external drive to another laptop and mirror it to the new
laptop.

What I did :

- Booted laptop into a live DVD

- Import the rpool:
$ zpool import rpool

- Attempt to split :
$ zpool split rpool rpool-ext

- Error message shown and split fails :
Unable to split rpool: pool already exists

- So I tried exporting the pool
and re-importing with a different name and I still get the same
error. There are no other zpools on the system, both zpool list and
zpool export return nothing other than the rpool I've just imported.

I'm somewhat stumped... any ideas ?

cheers

Matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool split failing

2012-04-16 Thread Cindy Swearingen

Hi Matt,

I don't have a way to reproduce this issue and I don't know why
this is failing. Maybe someone else does. I know someone who
recently split a root pool running the S11 FCS release without
problems.

I'm not a fan of root pools on external USB devices.

I haven't tested these steps in a while but you might try
these steps instead. Make sure you have a recent snapshot
of your rpool on the unhealthy laptop.

1. Ensure that the existing root pool and disks are healthy.

# zpool status -x

2. Detach the USB disk.

# zpool detach rpool disk-name

3. Connect the USB disk to the new laptop.

4. Force import the pool on the USB disk.

# zpool import -f rpool rpool2

5. Device cleanup steps, something like:

Boot from media and import rpool2 as rpool.
Make sure the device info is visible.
Reset BIOS to boot from this disk.

On 04/16/12 04:12, Matt Keenan wrote:

Hi

Attempting to split a mirrored rpool and fails with error :

Unable to split rpool: pool already exists


I have a laptop with main disk mirrored to an external USB. However as
the laptop is not too healthy I'd like to split the pool into two pools
and attach the external drive to another laptop and mirror it to the new
laptop.

What I did :

- Booted laptop into a live DVD

- Import the rpool:
$ zpool import rpool

- Attempt to split :
$ zpool split rpool rpool-ext

- Error message shown and split fails :
Unable to split rpool: pool already exists

- So I tried exporting the pool
and re-importing with a different name and I still get the same
error. There are no other zpools on the system, both zpool list and
zpool export return nothing other than the rpool I've just imported.

I'm somewhat stumped... any ideas ?

cheers

Matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Replacing root pool disk

2012-04-12 Thread Cindy Swearingen

Hi Peter,

The root pool disk labeling/partitioning is not so easy.

I don't know which OpenIndiana release this is but in a previous
Solaris release we had a bug that caused the error message below
and the workaround is exactly what you did, use the -f option.

We don't yet have an easy way to clear a disk label, but
sometimes I just create a new test pool on the problematic disk,
destroy the pool, and start over with a more coherent label.
This doesn't work for all scenarios. Some people use the dd command
to wipe an existing label, but you must use it carefully.
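For illustration only, since this destroys whatever is on the disk, so
triple-check the target device first (p0 is the whole x86 disk):

# dd if=/dev/zero of=/dev/rdsk/c2t5000CCA369C89636d0p0 bs=1024k count=10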

Thanks,

Cindy

On 04/12/12 11:35, Peter Wood wrote:

Hi,

I was following the instructions in ZFS Troubleshooting Guide on how to
replace a disk in the root pool on x86 system. I'm using OpenIndiana,
ZFS pool v.28 with mirrored system rpool. The replacement disk is brand new.

root:~# zpool status
   pool: rpool
  state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
 attempt was made to correct the error.  Applications are
unaffected.
action: Determine if the device needs to be replaced, and clear the errors
 using 'zpool clear' or replace the device with 'zpool replace'.
see: http://www.sun.com/msg/ZFS-8000-9P
   scan: resilvered 17.6M in 0h0m with 0 errors on Wed Apr 11 17:45:16 2012
config:

 NAME STATE READ WRITE CKSUM
 rpoolDEGRADED 0 0 0
   mirror-0   DEGRADED 0 0 0
 c2t5000CCA369C55DB8d0s0  OFFLINE  0   126 0
 c2t5000CCA369D5231Cd0s0  ONLINE   0 0 0

errors: No known data errors
root:~#

I'm not very familiar with Solaris partitions and slices, so somewhere in
the format/partition commands I must have made a mistake, because when
I try to replace the disk I'm getting the following error:

root:~# zpool replace rpool c2t5000CCA369C55DB8d0s0 c2t5000CCA369C89636d0s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c2t5000CCA369C89636d0s0 overlaps with
/dev/dsk/c2t5000CCA369C89636d0s2
root:~#

I used -f and it worked but I was wondering is there a way to completely
reset the new disk? Remove all partitions and start from scratch.

Thank you
Peter


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Replacing root pool disk

2012-04-12 Thread Cindy Swearingen

Actually, since I answer the root pool partitioning questions
at least 52 times per year, I provided the steps for clearing
the existing partitions in step 9, here:

http://docs.oracle.com/cd/E23823_01/html/817-5093/disksxadd-2.html#disksxadd-40

How to Create a Disk Slice for a ZFS Root File System

Step 9 uses the format--disk--partition--modify option and
sets the free hog space to slice 0. Then, you press return for
each existing slice to zero them out. This creates one large
slice 0.

cs

On 04/12/12 11:48, Cindy Swearingen wrote:

Hi Peter,

The root pool disk labeling/partitioning is not so easy.

I don't know which OpenIndiana release this is but in a previous
Solaris release we had a bug that caused the error message below
and the workaround is exactly what you did, use the -f option.

We don't yet have an easy way to clear a disk label, but
sometimes I just create a new test pool on the problematic disk,
destroy the pool, and start over with a more coherent label.
This doesn't work for all scenarios. Some people use the dd command
to wipe an existing label, but you must use it carefully.

Thanks,

Cindy

On 04/12/12 11:35, Peter Wood wrote:

Hi,

I was following the instructions in ZFS Troubleshooting Guide on how to
replace a disk in the root pool on x86 system. I'm using OpenIndiana,
ZFS pool v.28 with mirrored system rpool. The replacement disk is
brand new.

root:~# zpool status
pool: rpool
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are
unaffected.
action: Determine if the device needs to be replaced, and clear the
errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: http://www.sun.com/msg/ZFS-8000-9P
scan: resilvered 17.6M in 0h0m with 0 errors on Wed Apr 11 17:45:16 2012
config:

NAME STATE READ WRITE CKSUM
rpool DEGRADED 0 0 0
mirror-0 DEGRADED 0 0 0
c2t5000CCA369C55DB8d0s0 OFFLINE 0 126 0
c2t5000CCA369D5231Cd0s0 ONLINE 0 0 0

errors: No known data errors
root:~#

I'm not very familiar with Solaris partitions and slices, so somewhere in
the format/partition commands I must have made a mistake, because when
I try to replace the disk I'm getting the following error:

root:~# zpool replace rpool c2t5000CCA369C55DB8d0s0
c2t5000CCA369C89636d0s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c2t5000CCA369C89636d0s0 overlaps with
/dev/dsk/c2t5000CCA369C89636d0s2
root:~#

I used -f and it worked but I was wondering is there a way to completely
reset the new disk? Remove all partitions and start from scratch.

Thank you
Peter


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Accessing Data from a detached device.

2012-03-29 Thread Cindy Swearingen

Hi Matt,

There is no easy way to access data from a detached device.

You could try to force import it on another system or under
a different name on the same system with the remaining device.

The easiest way is to split the mirrored pool. See the
steps below.

Thanks,

Cindy


# zpool status pool
  pool: pool
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Mar 28 15:58:44 2012
config:

NAME   STATE READ WRITE CKSUM
pool   ONLINE   0 0 0
  mirror-0 ONLINE   0 0 0
c0t2014C3F04F4Fd0  ONLINE   0 0 0
c0t2014C3F04F38d0  ONLINE   0 0 0

errors: No known data errors
# zpool split pool pool2
# zpool import pool2
# zpool status pool pool2
  pool: pool
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Mar 28 15:58:44 2012
config:

NAME STATE READ WRITE CKSUM
pool ONLINE   0 0 0
  c0t2014C3F04F4Fd0  ONLINE   0 0 0

errors: No known data errors

  pool: pool2
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Mar 28 15:58:44 2012
config:

NAME STATE READ WRITE CKSUM
pool2ONLINE   0 0 0
  c0t2014C3F04F38d0  ONLINE   0 0 0

errors: No known data errors
#



On 03/29/12 09:50, Matt Keenan wrote:

Hi,

Is it possible to access the data from a detached device from an
mirrored pool.

Given a two-device mirrored pool, if you zpool detach one device, can
the data on the removed device be accessed by some means? From what I
can see you can attach the device back to the original pool, but this
will simply re-silver everything from the already attached device back
onto this device.

If I attached this device to a different pool it will simply get
overwritten.

Any ideas ?

cheers

Matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Advice for migrating ZFS configuration

2012-03-07 Thread Cindy Swearingen

Hi Bob,

Not many options because you can't attach disks to convert a
non-redundant pool to a RAIDZ pool.

To me, the best solution is to get one more disk (for a total of 4
disks) to create a mirrored pool. Mirrored pools provide more
flexibility. See 1 below.

See the options below.

Thanks,

Cindy

1. Convert this pool to a mirrored pool by using 4 disks. If your
existing export pool looks like this:

# zpool status export
  pool: export
 state: ONLINE
  scan: none requested
config:

NAME STATE READ WRITE CKSUM
export   ONLINE   0 0 0
  disk1  ONLINE   0 0 0
  disk2  ONLINE   0 0 0

Then, attach the additional 2 disks:

# zpool attach export disk1 disk3
# zpool attach export disk2 disk4

2. Borrow a couple of disks to temporarily create a pool (export1),
copy over the data from export, destroy export, and rebuild export
as a 3-disk RAIDZ pool. Then, copy over the data to export, destroy
export1, and you can have the same export mount points.
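A sketch of option 2 using zfs send/recv (pool, snapshot, and disk names
are examples; -u keeps the received copies unmounted so they don't collide
with the original mount points):

# zpool create export1 mirror c2t0d0 c2t1d0      (the two borrowed disks)
# zfs snapshot -r export@migrate
# zfs send -R export@migrate | zfs receive -Fdu export1
# zpool destroy export
# zpool create export raidz c1t1d0 c1t2d0 c1t3d0
# zfs send -R export1@migrate | zfs receive -Fdu export
# zpool destroy export1
# zfs mount -a                                   (or simply reboot)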




On 03/07/12 14:38, Bob Doolittle wrote:

Hi,

I had a single-disk zpool (export) and was given two new disks for
expanded storage. All three disks are identically sized, no
slices/partitions. My goal is to create a raidz1 configuration of the
three disks, containing the data in the original zpool.

However, I got off on the wrong foot by doing a zpool add of the first
disk. Apparently this has simply increased my storage without creating a
raidz config.

Unfortunately, there appears to be no simple way to just remove that
disk now and do a proper raidz create of the other two. Nor am I clear
on how import/export works and whether that's a good way to copy content
from one zpool to another on a single host.

Can somebody guide me? What's the easiest way out of this mess, so that
I can move from what is now a simple two-disk zpool (less than 50% full)
to a three-disk raidz configuration, starting with one unused disk? In
the end I want the three-disk raidz to have the same name (and mount
point) as the original zpool. There must be an easy way to do this.

Thanks for any assistance.

-Bob

P.S. I would appreciate being kept on the CC list for this thread to
avoid digest mailing delays.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Advice for migrating ZFS configuration

2012-03-07 Thread Cindy Swearingen

 In theory, instead of this missing
 disk approach I could create a two-disk raidz pool and later add the
 third disk to it, right?

No, you can't add a 3rd disk to an existing RAIDZ vdev of two disks.
You would want to add another 2 disk RAIDZ vdev.

See Example 4-2 in this section:

http://docs.oracle.com/cd/E23823_01/html/819-5461/gayrd.html#gazgw

Adding Disks to a RAID-Z Configuration

This section describes what you can and can't do with RAID-Z pools:

http://docs.oracle.com/cd/E23823_01/html/819-5461/gaypw.html#gcvjg

cs

On 03/07/12 15:41, Bob Doolittle wrote:

Perfect, thanks. Just what I was looking for.

How do I know how large to make the fakedisk file? Any old enormous
size will do, since mkfile -n doesn't actually allocate the blocks until
needed?

To be sure I understand correctly: In theory, instead of this missing
disk approach I could create a two-disk raidz pool and later add the
third disk to it, right? Your method looks much more efficient however
so thanks.

It's too bad we can't change a 1-volume zpool to raidz before or while
adding disks. That would make this much easier.

Regards,
Bob

On 03/07/12 17:03, Fajar A. Nugraha wrote:

On Thu, Mar 8, 2012 at 4:38 AM, Bob
Doolittle bob.doolit...@oracle.com wrote:

Hi,

I had a single-disk zpool (export) and was given two new disks for
expanded
storage. All three disks are identically sized, no slices/partitions. My
goal is to create a raidz1 configuration of the three disks,
containing the
data in the original zpool.

However, I got off on the wrong foot by doing a zpool add of the first
disk. Apparently this has simply increased my storage without creating a
raidz config.

IIRC you can't convert a single-disk (or striped) pool to raidz. You
can only convert it to mirror. So even your intended approach (you
wanted to try zpool attach?) was not appropriate.


Unfortunately, there appears to be no simple way to just remove that
disk
now and do a proper raidz create of the other two. Nor am I clear on how
import/export works and whether that's a good way to copy content
from one
zpool to another on a single host.

Can somebody guide me? What's the easiest way out of this mess, so
that I
can move from what is now a simple two-disk zpool (less than 50%
full) to a
three-disk raidz configuration, starting with one unused disk?

- use the one new disk to create a temporary pool
- copy the data (zfs snapshot -r + zfs send -R | zfs receive)
- destroy old pool
- create a three-disk raidz pool using two disks and a fake device,
something like http://www.dev-eth0.de/creating-raidz-with-missing-device/
(see the sketch just after this list)
- destroy the temporary pool
- replace the fake device with now-free disk
- export the new pool
- import the new pool and rename it in the process: zpool import
temp_pool_name old_pool_name
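A sketch of the fake-device trick from that link (sizes and device names
are examples; the sparse file just needs to be at least as large as the
real disks):

# mkfile -n 2048g /var/tmp/fakedisk           (sparse file, no space actually allocated)
# zpool create newpool raidz c1t1d0 c1t2d0 /var/tmp/fakedisk
# zpool offline newpool /var/tmp/fakedisk     (degrade it on purpose so the file is never written)
  ...copy the data in, destroy the temporary pool...
# zpool replace newpool /var/tmp/fakedisk c1t0d0   (resilver onto the now-free real disk)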


In the end I
want the three-disk raidz to have the same name (and mount point) as the
original zpool. There must be an easy way to do this.

Nope.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Strange send failure

2012-02-09 Thread Cindy Swearingen

Hi Ian,

This looks like CR 7097870.

To resolve this problem, apply the latest s11 SRU to both systems.

Thanks,

Cindy

On 02/08/12 17:55, Ian Collins wrote:

Hello,

I'm attempting to dry-run the send of the root data set of a zone from one
Solaris 11 host to another:

sudo zfs send -r rpool/zoneRoot/zone@to_send | sudo ssh remote zfs
receive -ven fileserver/zones

But I'm seeing

cannot receive: stream has unsupported feature, feature flags = 24

The source pool version is 31, the remote pool version is 33. Both the
source filesystem and parent on the remote box are version 5.

I've never seen this before, any clues?


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk failing? High asvc_t and %b.

2012-02-01 Thread Cindy Swearingen

Hi Jan,

These commands will tell you if FMA faults are logged:

# fmdump
# fmadm faulty

This command will tell you if errors are accumulating on this
disk:

# fmdump -eV | more

Thanks,

Cindy
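Two more things worth watching while the load is running (the device name
is taken from your iostat output):

# iostat -En c6t70d0          (per-device soft/hard/transport error counters)
# zpool iostat -v backup 5    (per-vdev ops and bandwidth, to see whether mirror-1 lags its peers)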

On 02/01/12 11:20, Jan Hellevik wrote:

I suspect that something is wrong with one of my disks.

This is the output from iostat:

                    extended device statistics                 ---- errors ----
    r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b  s/w h/w trn tot device
    2.0   18.9   38.1   160.9   0.0  0.1    0.1    3.2   0   6    0   0   0   0 c5d0
    2.7   18.8   59.3   160.9   0.0  0.1    0.2    3.2   0   6    0   0   0   0 c5d1
    0.0   36.8    1.1  3593.7   0.0  0.1    0.0    2.9   0   8    0   0   0   0 c6t66d0
    0.0   38.2    0.0  3693.7   0.0  0.2    0.0    4.6   0  12    0   0   0   0 c6t70d0
    0.0   38.1    0.0  3693.7   0.0  0.1    0.0    2.4   0   5    0   0   0   0 c6t74d0
    0.0   42.0    0.0  4155.4   0.0  0.0    0.0    0.6   0   2    0   0   0   0 c6t76d0
    0.0   36.9    0.0  3593.7   0.0  0.1    0.0    1.4   0   3    0   0   0   0 c6t78d0
    0.0   41.7    0.0  4155.4   0.0  0.0    0.0    1.2   0   4    0   0   0   0 c6t80d0

The disk in question is c6t70d0 - it shows consistently higher %b and asvc_t
than the other disks in the pool. The output is from a 'zfs receive' after 
about 3 hours.
The two c5dx disks are the 'rpool' mirror, the others belong to the 'backup' 
pool.

admin@master:~# zpool status
   pool: backup
  state: ONLINE
  scan: scrub repaired 0 in 5h7m with 0 errors on Tue Jan 31 04:55:31 2012
config:

 NAME STATE READ WRITE CKSUM
 backup   ONLINE   0 0 0
   mirror-0   ONLINE   0 0 0
 c6t78d0  ONLINE   0 0 0
 c6t66d0  ONLINE   0 0 0
   mirror-1   ONLINE   0 0 0
 c6t70d0  ONLINE   0 0 0
 c6t74d0  ONLINE   0 0 0
   mirror-2   ONLINE   0 0 0
 c6t76d0  ONLINE   0 0 0
 c6t80d0  ONLINE   0 0 0

errors: No known data errors

admin@master:~# zpool list
NAME SIZE  ALLOC   FREECAP  DEDUP  HEALTH  ALTROOT
backup  4.53T  1.37T  3.16T30%  1.00x  ONLINE  -

admin@master:~# uname -a
SunOS master 5.11 oi_148 i86pc i386 i86pc

Should I be worried? And what other commands can I use to investigate further?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-19 Thread Cindy Swearingen

Hi Pawel,

In addition to the current SMI label requirement for booting,
I believe another limitation is that the boot info must be
contiguous.

I think an RFE is filed to relax this requirement as well.
I just can't find it right now.

Thanks,

Cindy

On 12/18/11 04:52, Pawel Jakub Dawidek wrote:

On Thu, Dec 15, 2011 at 04:39:07PM -0700, Cindy Swearingen wrote:

Hi Anon,

The disk that you attach to the root pool will need an SMI label
and a slice 0.

The syntax to attach a disk to create a mirrored root pool
is like this, for example:

# zpool attach rpool c1t0d0s0 c1t1d0s0


BTW. Can you, Cindy, or someone else reveal why one cannot boot from
RAIDZ on Solaris? Is this because Solaris is using GRUB and RAIDZ code
would have to be licensed under GPL as the rest of the boot code?

I'm asking, because I see no technical problems with this functionality.
Booting off of RAIDZ (even RAIDZ3) and also from multi-top-level-vdev
pools has worked just fine on FreeBSD for a long time now. Not being forced
to have a dedicated pool just for the root if you happen to have more than
two disks in your box is very convenient.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-16 Thread Cindy Swearingen

Hi Tim,

No, in current Solaris releases the boot blocks are installed
automatically with a zpool attach operation on a root pool.

Thanks,

Cindy

On 12/15/11 17:13, Tim Cook wrote:

Do you still need to do the grub install?

On Dec 15, 2011 5:40 PM, Cindy Swearingen cindy.swearin...@oracle.com
mailto:cindy.swearin...@oracle.com wrote:

Hi Anon,

The disk that you attach to the root pool will need an SMI label
and a slice 0.

The syntax to attach a disk to create a mirrored root pool
is like this, for example:

# zpool attach rpool c1t0d0s0 c1t1d0s0

Thanks,

Cindy

On 12/15/11 16:20, Anonymous Remailer (austria) wrote:


On Solaris 10 If I install using ZFS root on only one drive is
there a way
to add another drive as a mirror later? Sorry if this was discussed
already. I searched the archives and couldn't find the answer.
Thank you.
_
zfs-discuss mailing list
zfs-discuss@opensolaris.org mailto:zfs-discuss@opensolaris.org
http://mail.opensolaris.org/__mailman/listinfo/zfs-discuss
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

_
zfs-discuss mailing list
zfs-discuss@opensolaris.org mailto:zfs-discuss@opensolaris.org
http://mail.opensolaris.org/__mailman/listinfo/zfs-discuss
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-16 Thread Cindy Swearingen

Hi Gregg,

Yes, fighting with partitioning is just silly.

My wish is that Santa will bring us bootable GPT/EFI labels in the
coming year, so you will be able to just attach disks to root pools.

Send us some output so we can see what the trouble is.

In the meantime, the links below might help.

Thanks,

Cindy


http://docs.oracle.com/cd/E23824_01/html/821-1459/disksprep-34.html

http://docs.oracle.com/cd/E23824_01/html/821-1459/diskssadd-2.html#diskssadd-5

http://docs.oracle.com/cd/E23824_01/html/821-1459/disksxadd-2.html#disksxadd-30



On 12/16/11 00:27, Gregg Wonderly wrote:

Cindy, will it ever be possible to just have attach mirror the surfaces,
including the partition tables? I spent an hour today trying to get a
new mirror on my root pool. There was a 250GB disk that failed. I only
had a 1.5TB handy as a replacement. prtvtoc ... | fmthard does not work
in this case and so you have to do the partitioning by hand, which is
just silly to fight with anyway.

Gregg

Sent from my iPhone

On Dec 15, 2011, at 6:13 PM, Tim Cook t...@cook.ms mailto:t...@cook.ms
wrote:


Do you still need to do the grub install?

On Dec 15, 2011 5:40 PM, Cindy Swearingen
cindy.swearin...@oracle.com mailto:cindy.swearin...@oracle.com wrote:

Hi Anon,

The disk that you attach to the root pool will need an SMI label
and a slice 0.

The syntax to attach a disk to create a mirrored root pool
is like this, for example:

# zpool attach rpool c1t0d0s0 c1t1d0s0

Thanks,

Cindy

On 12/15/11 16:20, Anonymous Remailer (austria) wrote:


On Solaris 10 If I install using ZFS root on only one drive is
there a way
to add another drive as a mirror later? Sorry if this was
discussed
already. I searched the archives and couldn't find the answer.
Thank you.
_
zfs-discuss mailing list
zfs-discuss@opensolaris.org mailto:zfs-discuss@opensolaris.org
http://mail.opensolaris.org/__mailman/listinfo/zfs-discuss
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

_
zfs-discuss mailing list
zfs-discuss@opensolaris.org mailto:zfs-discuss@opensolaris.org
http://mail.opensolaris.org/__mailman/listinfo/zfs-discuss
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org mailto:zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-16 Thread Cindy Swearingen

Yep, well said, understood, point taken, I hear you, you're
preaching to the choir. Have faith in Santa.

A few comments:

1. I need more info on the x86 install issue. I haven't seen this
problem myself.

2. We don't use slice2 for anything and it's not recommended.

3. The SMI disk is a long-standing boot requirement. We're
working on it.

4. Both the s10 and s11 installer can create a mirrored root pool
so you don't have to do this manually.

If you do have to do this manually in the S11 release, you can use
this shortcut to slap on a new label, but it does no error checking,
so make sure you have the right disk:

# format -L vtoc -d c1t0d0

Unfortunately, this applies the default partition table, which
might be a 129MB slice 0, so you still have to do the other 17 steps to
create one large slice 0. I filed an RFE to do something like this:

# format -L vtoc -a(ll) s0 c1t0d0

5. The overlapping partition error on x86 systems is a bug (unless they
really are overlapping) and you can override it by using the -f option.

Thanks,

Cindy

On 12/16/11 09:44, Gregg Wonderly wrote:

The issue is really quite simple. The solaris install, on x86 at least,
chooses to use slice-0 for the root partition. That slice is not created
by a default format/fdisk, and so we have the web strewn with

prtvtoc path/to/old/slice2 | fmthard -s - path/to/new/slice2

as a way to cause the two commands to access the entire disk. If you
have to use dissimilar sized disks because 1) that's the only media you
have, or 2) you want to increase the size of your root pool, then all we
end up with is an error message about overlapping partitions and no
ability to make progress.

If I then use dd if=/dev/zero to erase the front of the disk, and then
fire up format, select fdisk, say yes to create solaris2 partitioning,
and then use partition to add a slice 0, I will have problems getting
the whole disk in play.

So, the end result, is that I have to jump through hoops, when in the
end, I'd really like to just add the whole disk, every time. If I say

zpool attach rpool c8t0d0s0 c12d1

I really do mean the whole disk, and I'm not sure why it can't just
happen. Failing to type a slice reference is no worse of a 'typo'
than typing 's2' by accident, because that's what I've been typing with
all the other commands to try and get the disk partitioned.

I just really think there's not a lot of value in all of this,
especially with ZFS, where we can, in fact, add more disks/vdevs to
keep expanding space, and extremely rarely is that going to be done, for
the root pool, with fractions of disks.

The use of SMI and absolute refusal to use EFI partitioning plus all of
this just stacks up to a pretty large barrier to simple and/or easy
administration.

I'm very nervous when I have a simplex filesystem sitting there, and
when a disk has died, I'm doubly nervous that the other half is going
to fall over.

I'm not trying to be hard nosed about this, I'm just trying to share my
angst and frustration with the details that drove me in that direction.

Gregg Wonderly

On 12/16/2011 2:56 AM, Andrew Gabriel wrote:

On 12/16/11 07:27 AM, Gregg Wonderly wrote:

Cindy, will it ever be possible to just have attach mirror the
surfaces, including the partition tables? I spent an hour today
trying to get a new mirror on my root pool. There was a 250GB disk
that failed. I only had a 1.5TB handy as a replacement. prtvtoc ... |
fmthard does not work in this case


Can you be more specific why it fails?
I have seen a couple of cases, and I'm wondering if you're hitting the
same thing.
Can you post the prtvtoc output of your original disk please?


and so you have to do the partitioning by hand, which is just silly
to fight with anyway.

Gregg




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-15 Thread Cindy Swearingen

Hi Anon,

The disk that you attach to the root pool will need an SMI label
and a slice 0.

The syntax to attach a disk to create a mirrored root pool
is like this, for example:

# zpool attach rpool c1t0d0s0 c1t1d0s0

Thanks,

Cindy
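If the second disk is the same size, a common way to do that is to copy
the slice layout from the existing disk and then attach (device names are
examples; installgrub is only needed on releases that don't install the
boot blocks automatically):

# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
# zpool attach rpool c1t0d0s0 c1t1d0s0
# zpool status rpool          (wait for the resilver to complete)
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0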

On 12/15/11 16:20, Anonymous Remailer (austria) wrote:


On Solaris 10 If I install using ZFS root on only one drive is there a way
to add another drive as a mirror later? Sorry if this was discussed
already. I searched the archives and couldn't find the answer. Thank you.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] gaining access to var from a live cd

2011-11-30 Thread Cindy Swearingen

Hi Francois,

A similar recovery process in OS11 is to just mount the BE,
like this:

# beadm mount s11_175 /mnt
# ls /mnt/var
adm croninetlogadm  preservetmp
ai  db  infomailrun tpm
apache2 dhcpinstalladm  nfs sadmuser
audit   dt  krb5ntp samba   xauth
cache   fm  ld  ocm smb yp
coherence   fps ldapopenldapspool
cores   games   lib opt statmon
crash   idmap   log pkg svc
# beadm umount /mnt

It took me awhile to figure out the revised recovery steps without
failsafe mode, but its something like this:

1. Boot single-user mode for some minor recovery, like bad root
shell
2. Boot from media and import the root pool to fix boot-related
issues
3. Boot from media and mount the BE to fix root password
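As a rough sketch of 2 and 3 (the BE name and mount point are examples;
use whatever beadm list shows):

# zpool import -R /a rpool               (after booting from media)
# beadm mount solaris /tmp/mnt
# vi /tmp/mnt/etc/shadow                 (e.g. clear the root password field)
# bootadm update-archive -R /tmp/mnt     (refresh the boot archive if boot files changed)
# beadm umount solaris
# zpool export rpool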

Thanks,

Cindy

On 11/29/11 16:12, Francois Dion wrote:

In the end what I needed to do was to set the mountpoint with:

zfs set mountpoint=/tmp/rescue rpool/ROOT/openindiana

it ended up mounting it in /mnt/rpool/tmp/rescue but still, it gave me
access to var/ld/... and after removing the ld.config, doing a
zpool export and reboot, my desktop is back.

Thanks for the pointers. man zfs did mention mountpoint as a valid
option; not sure why it didn't work. As for mount -F zfs..., it only
works on legacy mounts.

On 11/29/11, Mike Gerdts mger...@gmail.com wrote:

On Tue, Nov 29, 2011 at 4:40 PM, Francois Dion francois.d...@gmail.com
wrote:

It is on openindiana 151a, no separate /var as far as I can tell. But I'll
have to test this on solaris11 too when I get a chance.

The problem is that if I

zfs mount -o mountpoint=/tmp/rescue (or whatever) rpool/ROOT/openindiana

i get a cannot mount /mnt/rpool: directory is not empty.

The reason for that is that I had to do a zpool import -R /mnt/rpool
rpool (or wherever I mount it it doesnt matter) before I could do a
zfs mount, else I dont have access to the rpool zpool for zfs to do
its thing.

chicken / egg situation? I miss the old fail safe boot menu...


You can mount it pretty much anywhere:

mkdir /tmp/foo
zfs mount -o mountpoint=/tmp/foo ...

I'm not sure when the temporary mountpoint option (-o mountpoint=...)
came in. If it's not valid syntax then:

mount -F zfs rpool/ROOT/solaris /tmp/foo

--
Mike Gerdts
http://mgerdts.blogspot.com/


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] bug moving files between two zfs filesystems (too many open files)

2011-11-29 Thread Cindy Swearingen
I think "too many open files" is a generic error message about
running out of file descriptors. You should check your shell ulimit
information.
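For example:

$ ulimit -n                                   (current per-process file descriptor limit)
$ ulimit -n 4096                              (raise it for this shell, then rerun the mv)
$ prctl -n process.max-file-descriptor $$     (the same limit viewed as a resource control)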

On 11/29/11 09:28, sol wrote:

Hello

Has anyone else come across a bug moving files between two zfs file systems?

I used mv /my/zfs/filesystem/files /my/zfs/otherfilesystem and got the
error too many open files.

This is on Solaris 11



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS smb/cifs shares in Solaris 11 (some observations)

2011-11-29 Thread Cindy Swearingen

Hi Sol,

For 1) and several others, review the ZFS Admin Guide for
a detailed description of the share changes, here:

http://docs.oracle.com/cd/E23824_01/html/821-1448/gayne.html

For 2-4), You can't rename a share. You would have to remove it
and recreate it with the new name.

For 6), I think you need to upgrade your file systems.

Thanks,

Cindy
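For 6), something along these lines (the dataset name is an example):

# zfs get version tank/oldfs    (file systems created on Solaris 10 may be at an older version)
# zfs upgrade tank/oldfs        (or "zfs upgrade -a" for everything on the system)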
On 11/29/11 09:46, sol wrote:

Hi

Several observations with zfs cifs/smb shares in the new Solaris 11.

1) It seems that the previously documented way to set the smb share name
no longer works
zfs set sharesmb=name=my_share_name
You have to use the long-winded
zfs set share=name=my_share_name,path=/my/share/path,prot=smb
This is fine but not really obvious if moving scripts from Solaris10 to
Solaris11.

2) If you use zfs rename to rename a zfs filesystem it doesn't rename
the smb share name.

3) Also you might end up with two shares having the same name.

4) So how do you rename the smb share? There doesn't appear to be a zfs
unset and if you issue the command twice with different names then both
are listed when you use zfs get share.

5) The share value act like a property but does not show up if you use
zfs get so that's not really consistent

6) zfs filesystems created with Solaris 10 and shared with smb cannot be
mounted from Windows when the server is upgraded to Solaris 11.
The client just gets permission denied but in the server log you might
see access denied: share ACL.
If you create a brand new zfs filesystem then it works fine. So what is
the difference?
The ACLs have never been set or changed so it's not that, and the two
filesystems appear to have identical ACLs.
But if you look at the extended attributes the successful filesystem has
xattr {A--m} and the unsuccessful has {}.
However that xattr cannot be set on the share to see if it allows it to
be mounted.
chmod S+cA share gives chmod: ERROR: extended system attributes not
supported for share (even though it has the xattr=on property).
What is the problem here, why cannot a Solaris 10 filesystem be shared
via smb?
And how can extended attributes be set on a zfs filesystem?

Thanks folks



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-11-10 Thread Cindy Swearingen

Hi John,

CR 7102272:

 ZFS storage pool created on a 3 TB USB 3.0 device has device label 
problems


Let us know if this is still a problem in the OS11 FCS release.

Thanks,

Cindy


On 11/10/11 08:55, John D Groenveld wrote:

In message4e9db04b.80...@oracle.com, Cindy Swearingen writes:

This is CR 7102272.


What is the title of this BugId?
I'm trying to attach my Oracle CSI to it but Chuck Rozwat
and company's support engineer can't seem to find it.

Once I get upgraded from S11x SRU12 to S11, I'll reproduce
on a more recent kernel build.

Thanks,
John
groenv...@acm.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-18 Thread Cindy Swearingen

Hi John,

I'm going to file a CR to get this issue reviewed by the USB team
first, but if you could humor me with another test:

Can you run newfs to create a UFS file system on this device
and mount it?

Thanks,

Cindy
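Something like this, assuming slice 0 covers the disk (destructive, of
course):

# newfs /dev/rdsk/c1t0d0s0
# mount /dev/dsk/c1t0d0s0 /mnt
# touch /mnt/testfile; ls -l /mnt
# umount /mnt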

On 10/18/11 08:18, John D Groenveld wrote:

In message 201110150202.p9f22w2n000...@elvis.arl.psu.edu, John D Groenveld 
writes:

I'm baffled why zpool import is unable to find the pool on the
drive, but the drive is definitely functional.


Per Richard Elling, it looks like ZFS is unable to find
the requisite labels for importing.

John
groenv...@acm.org

# prtvtoc /dev/rdsk/c1t0d0s2
* /dev/rdsk/c1t0d0s2 partition map
*
* Dimensions:
*4096 bytes/sector
*  63 sectors/track
* 255 tracks/cylinder
*   16065 sectors/cylinder
*   45599 cylinders
*   45597 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*   First SectorLast
*   Sector CountSector
*   0 16065 16064
*
*  First SectorLast
* Partition  Tag  FlagsSector CountSector  Mount Directory
   0  200  16065 732483675 732499739
   2  501  0 732515805 732515804
   8  101  0 16065 16064
# zpool create -f foobar c1t0d0s0
# zpool status foobar
  pool: foobar
 state: ONLINE
 scan: none requested
config:

NAMESTATE READ WRITE CKSUM
foobar  ONLINE   0 0 0
  c1t0d0s0  ONLINE   0 0 0

errors: No known data errors
# zdb -l /dev/dsk/c1t0d0s0

LABEL 0

failed to unpack label 0

LABEL 1

failed to unpack label 1

LABEL 2

failed to unpack label 2

LABEL 3

failed to unpack label 3

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-18 Thread Cindy Swearingen

Yeah, okay, duh. I should have known that large sector size
support is only available for a non-root ZFS file system.

A couple more things if you're still interested:

1. If you re-create the pool on the whole disk, like this:

# zpool create foo c1t0d0

Then, resend the prtvtoc output for c1t0d0s0.

We should be able to tell if format is creating a dummy label,
which means the ZFS data is never getting written to this disk.
This would be a bug.

2. You are running this early S11 release:

SunOS 5.11 151.0.1.12 i386

You might retry this on more recent bits, like the EA release,
which I think is b 171.

I'll still file the CR.

Thanks,

Cindy





On 10/13/11 09:40, John D Groenveld wrote:

In message 201110131150.p9dbo8yk011...@acsinet22.oracle.com,
Casper.Dik@oracle.com writes:

What is the partition table?


I thought about that so I reproduced with the legacy SMI label
and a Solaris fdisk partition with ZFS on slice 0.
Same result as EFI; once I export the pool I cannot import it.

John
groenv...@acm.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FS Reliability WAS: about btrfs and zfs

2011-10-18 Thread Cindy Swearingen

Hi Paul,

Your 1-3 is very sensible advice and I must ask about this
statement:

I have yet to have any data loss with ZFS.

Maybe this goes without saying, but I think you are using
ZFS redundancy.

Thanks,

Cindy

On 10/18/11 08:52, Paul Kraus wrote:

On Tue, Oct 18, 2011 at 9:38 AM, Gregory Shaw greg.s...@oracle.com wrote:


Another item that made me nervous was my experience with ZFS.  Even when
called 'ready for production', a number of bugs were found that were pretty 
nasty.
They've since been fixed (years ago), but there were some surprises there that 
I'd
 rather not encounter on a Linux system.


I know that I have been really spoiled by UFS. It has been around
for so long that it has been really optimized, some might even say
optimized beyond the point of diminishing returns :-) UFS is amazing
and has very reasonable performance, given its roots. I did not have
to live through the early days of UFS and the pain of finding bugs. I
_am_ living through that with ZFS :-(

Having said that, I have yet to have any data loss with ZFS. I
have developed a number of simple rules I follow with ZFS:

1. OS and DATA go on different zpools on different physical drives (if
at all possible)

2. Do NOT move drives around without first exporting any zpools on those drives.

3. Do NOT let a system see drives with more than one OS zpool at the
same time (I know you _can_ do this safely, but I have seen so many
horror stories on this list that I just avoid it).


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-18 Thread Cindy Swearingen

This is CR 7102272.

cs

On 10/18/11 10:50, John D Groenveld wrote:

In message 4e9da8b1.7020...@oracle.com, Cindy Swearingen writes:

1. If you re-create the pool on the whole disk, like this:

# zpool create foo c1t0d0

Then, resend the prtvtoc output for c1t0d0s0.


# zpool create snafu c1t0d0
# zpool status snafu
  pool: snafu
 state: ONLINE
 scan: none requested
config:

NAMESTATE READ WRITE CKSUM
snafu   ONLINE   0 0 0
  c1t0d0ONLINE   0 0 0

errors: No known data errors
# prtvtoc /dev/rdsk/c1t0d0s0
* /dev/rdsk/c1t0d0s0 partition map
*
* Dimensions:
*4096 bytes/sector
* 732566642 sectors
* 732566631 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*   First SectorLast
*   Sector CountSector
*   6   250   255
*
*  First SectorLast
* Partition  Tag  FlagsSector CountSector  Mount Directory
   0  400256 732549997 732550252
   8 1100  732550253 16384 732566636


We should be able to tell if format is creating a dummy label,
which means the ZFS data is never getting written to this disk.
This would be a bug.


# zdb -l /dev/dsk/c1t0d0s0

LABEL 0

failed to unpack label 0

LABEL 1

failed to unpack label 1

LABEL 2

failed to unpack label 2

LABEL 3

failed to unpack label 3


2. You are running this early S11 release:

SunOS 5.11 151.0.1.12 i386

You might retry this on more recent bits, like the EA release,
which I think is b 171.


Doubtful I'll find time to install EA before S11 FCS's
November launch.


I'll still file the CR.


Thank you.

John
groenv...@acm.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-13 Thread Cindy Swearingen

John,

Any USB-related messages in /var/adm/messages for this device?
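For example:

# grep -i usb /var/adm/messages | tail -20
# tail -f /var/adm/messages          (while unplugging and re-plugging the drive)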

Thanks,

Cindy

On 10/12/11 11:29, John D Groenveld wrote:

In message 4e95cb2a.30...@oracle.com, Cindy Swearingen writes:

What is the error when you attempt to import this pool?


cannot import 'foo': no such pool available
John
groenv...@acm.org

# format -e
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c1t0d0 Seagate-External-SG11 cyl 45597 alt 2 hd 255 sec 63
  /pci@0,0/pci108e,6676@2,1/hub@7/storage@2/disk@0,0
   1. c8t0d0 ATA-HITACHI HDS7225-A9CA cyl 30397 alt 2 hd 255 sec 63
  /pci@0,0/pci108e,6676@5/disk@0,0
   2. c8t1d0 ATA-HITACHI HDS7225-A7BA cyl 30397 alt 2 hd 255 sec 63
  /pci@0,0/pci108e,6676@5/disk@1,0
Specify disk (enter its number): ^C
# zpool create foo c1t0d0
# zfs create foo/bar
# zfs list -r foo
NAME  USED  AVAIL  REFER  MOUNTPOINT
foo   126K  2.68T32K  /foo
foo/bar31K  2.68T31K  /foo/bar
# zpool export foo
# zfs list -r foo
cannot open 'foo': dataset does not exist
# truss -t open zpool import foo
open(/var/ld/ld.config, O_RDONLY) Err#2 ENOENT
open(/lib/libumem.so.1, O_RDONLY) = 3
open(/lib/libc.so.1, O_RDONLY)= 3
open(/lib/libzfs.so.1, O_RDONLY)  = 3
open(/usr/lib/fm//libtopo.so, O_RDONLY)   = 3
open(/lib/libxml2.so.2, O_RDONLY) = 3
open(/lib/libpthread.so.1, O_RDONLY)  = 3
open(/lib/libz.so.1, O_RDONLY)= 3
open(/lib/libm.so.2, O_RDONLY)= 3
open(/lib/libsocket.so.1, O_RDONLY)   = 3
open(/lib/libnsl.so.1, O_RDONLY)  = 3
open(/usr/lib//libshare.so.1, O_RDONLY)   = 3
open(/usr/lib/locale/en_US.UTF-8/LC_MESSAGES/SUNW_OST_SGS.mo, O_RDONLY) Err#2 
ENOENT
open(/usr/lib/locale/en_US.UTF-8/LC_MESSAGES/SUNW_OST_OSLIB.mo, O_RDONLY) 
Err#2 ENOENT
open(/usr/lib/locale/en_US.UTF-8/en_US.UTF-8.so.3, O_RDONLY) = 3
open(/usr/lib/locale/en_US.UTF-8/methods_unicode.so.3, O_RDONLY) = 3
open(/dev/zfs, O_RDWR)= 3
open(/etc/mnttab, O_RDONLY)   = 4
open(/etc/dfs/sharetab, O_RDONLY) = 5
open(/lib/libavl.so.1, O_RDONLY)  = 6
open(/lib/libnvpair.so.1, O_RDONLY)   = 6
open(/lib/libuutil.so.1, O_RDONLY)= 6
open64(/dev/rdsk/, O_RDONLY)  = 6
/3: openat64(6, c8t0d0s0, O_RDONLY)   = 9
/3: open(/lib/libadm.so.1, O_RDONLY)  = 15
/9: openat64(6, c8t0d0s2, O_RDONLY)   = 13
/5: openat64(6, c8t1d0s0, O_RDONLY)   = 10
/7: openat64(6, c8t1d0s2, O_RDONLY)   = 14
/8: openat64(6, c1t0d0s0, O_RDONLY)   = 7
/4: openat64(6, c1t0d0s2, O_RDONLY)   Err#5 EIO
/8: open(/lib/libefi.so.1, O_RDONLY)  = 15
/3: openat64(6, c1t0d0, O_RDONLY) = 9
/5: openat64(6, c1t0d0p0, O_RDONLY)   = 10
/9: openat64(6, c1t0d0p1, O_RDONLY)   = 13
/7: openat64(6, c1t0d0p2, O_RDONLY)   Err#5 EIO
/4: openat64(6, c1t0d0p3, O_RDONLY)   Err#5 EIO
/7: openat64(6, c1t0d0s8, O_RDONLY)   = 14
/2: openat64(6, c7t0d0s0, O_RDONLY)   = 8
/6: openat64(6, c7t0d0s2, O_RDONLY)   = 12
/1: Received signal #20, SIGWINCH, in lwp_park() [default]
/3: openat64(6, c7t0d0p0, O_RDONLY)   = 9
/4: openat64(6, c7t0d0p1, O_RDONLY)   = 11
/5: openat64(6, c7t0d0p2, O_RDONLY)   = 10
/6: openat64(6, c8t0d0p0, O_RDONLY)   = 12
/6: openat64(6, c8t0d0p1, O_RDONLY)   = 12
/6: openat64(6, c8t0d0p2, O_RDONLY)   Err#5 EIO
/6: openat64(6, c8t0d0p3, O_RDONLY)   Err#5 EIO
/6: openat64(6, c8t0d0p4, O_RDONLY)   Err#5 EIO
/6: openat64(6, c8t1d0p0, O_RDONLY)   = 12
/8: openat64(6, c7t0d0p3, O_RDONLY)   = 7
/6: openat64(6, c8t1d0p1, O_RDONLY)   = 12
/6: openat64(6, c8t1d0p2, O_RDONLY)   Err#5 EIO
/6: openat64(6, c8t1d0p3, O_RDONLY)   Err#5 EIO
/6: openat64(6, c8t1d0p4, O_RDONLY)   Err#5 EIO
/9: openat64(6, c7t0d0p4, O_RDONLY)   = 13
/7: openat64(6, c7t0d0s1, O_RDONLY)   = 14
/1: open(/usr/share/locale/en_US.UTF-8/LC_MESSAGES/SUNW_OST_OSCMD.cat, 
O_RDONLY) Err#2 ENOENT
open(/usr/lib/locale/en_US.UTF-8/LC_MESSAGES/SUNW_OST_OSCMD.mo, O_RDONLY) 
Err#2 ENOENT
cannot import 'foo': no such pool available
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-12 Thread Cindy Swearingen

Hi John,

What is the error when you attempt to import this pool?

Thanks,

Cindy

On 10/11/11 18:17, John D Groenveld wrote:

Banging my head against a Seagate 3TB USB3 drive.
Its marketing name is:
Seagate Expansion 3 TB USB 3.0 Desktop External Hard Drive STAY3000102
format(1M) shows it identify itself as:
Seagate-External-SG11-2.73TB

Under both Solaris 10 and Solaris 11x, I receive the evil message:
| I/O request is not aligned with 4096 disk sector size.
| It is handled through Read Modify Write but the performance is very low.

However, that's not my big issue as I will use the zpool-12 hack.

My big issue is that once I zpool(1M) export the pool from
my W2100z running S10 or my Ultra 40 running S11x, I can't 
import it.


I thought weird USB connectivity issue, but I can run
format - analyze - read merrily.

Anyone seen this bug?

John
groenv...@acm.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-12 Thread Cindy Swearingen

In the steps below, you're missing a zpool import step.
I would like to see the error message when the zpool import
step fails.

Thanks,

Cindy
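
For reference, the step being asked about is the plain import between the
export and the truss run. A minimal sketch, using the same pool name as in
the transcript below, would be:

# zpool export foo
# zpool import
# zpool import foo

Running zpool import with no arguments first shows whether the pool is even
visible as importable, and the bare zpool import foo is the command whose
error message is of interest.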

On 10/12/11 11:29, John D Groenveld wrote:

In message 4e95cb2a.30...@oracle.com, Cindy Swearingen writes:

What is the error when you attempt to import this pool?


cannot import 'foo': no such pool available
John
groenv...@acm.org

# format -e
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c1t0d0 Seagate-External-SG11 cyl 45597 alt 2 hd 255 sec 63
  /pci@0,0/pci108e,6676@2,1/hub@7/storage@2/disk@0,0
   1. c8t0d0 ATA-HITACHI HDS7225-A9CA cyl 30397 alt 2 hd 255 sec 63
  /pci@0,0/pci108e,6676@5/disk@0,0
   2. c8t1d0 ATA-HITACHI HDS7225-A7BA cyl 30397 alt 2 hd 255 sec 63
  /pci@0,0/pci108e,6676@5/disk@1,0
Specify disk (enter its number): ^C
# zpool create foo c1t0d0
# zfs create foo/bar
# zfs list -r foo
NAME  USED  AVAIL  REFER  MOUNTPOINT
foo   126K  2.68T32K  /foo
foo/bar31K  2.68T31K  /foo/bar
# zpool export foo
# zfs list -r foo
cannot open 'foo': dataset does not exist
# truss -t open zpool import foo
open(/var/ld/ld.config, O_RDONLY) Err#2 ENOENT
open(/lib/libumem.so.1, O_RDONLY) = 3
open(/lib/libc.so.1, O_RDONLY)= 3
open(/lib/libzfs.so.1, O_RDONLY)  = 3
open(/usr/lib/fm//libtopo.so, O_RDONLY)   = 3
open(/lib/libxml2.so.2, O_RDONLY) = 3
open(/lib/libpthread.so.1, O_RDONLY)  = 3
open(/lib/libz.so.1, O_RDONLY)= 3
open(/lib/libm.so.2, O_RDONLY)= 3
open(/lib/libsocket.so.1, O_RDONLY)   = 3
open(/lib/libnsl.so.1, O_RDONLY)  = 3
open(/usr/lib//libshare.so.1, O_RDONLY)   = 3
open(/usr/lib/locale/en_US.UTF-8/LC_MESSAGES/SUNW_OST_SGS.mo, O_RDONLY) Err#2 
ENOENT
open(/usr/lib/locale/en_US.UTF-8/LC_MESSAGES/SUNW_OST_OSLIB.mo, O_RDONLY) 
Err#2 ENOENT
open(/usr/lib/locale/en_US.UTF-8/en_US.UTF-8.so.3, O_RDONLY) = 3
open(/usr/lib/locale/en_US.UTF-8/methods_unicode.so.3, O_RDONLY) = 3
open(/dev/zfs, O_RDWR)= 3
open(/etc/mnttab, O_RDONLY)   = 4
open(/etc/dfs/sharetab, O_RDONLY) = 5
open(/lib/libavl.so.1, O_RDONLY)  = 6
open(/lib/libnvpair.so.1, O_RDONLY)   = 6
open(/lib/libuutil.so.1, O_RDONLY)= 6
open64(/dev/rdsk/, O_RDONLY)  = 6
/3: openat64(6, c8t0d0s0, O_RDONLY)   = 9
/3: open(/lib/libadm.so.1, O_RDONLY)  = 15
/9: openat64(6, c8t0d0s2, O_RDONLY)   = 13
/5: openat64(6, c8t1d0s0, O_RDONLY)   = 10
/7: openat64(6, c8t1d0s2, O_RDONLY)   = 14
/8: openat64(6, c1t0d0s0, O_RDONLY)   = 7
/4: openat64(6, c1t0d0s2, O_RDONLY)   Err#5 EIO
/8: open(/lib/libefi.so.1, O_RDONLY)  = 15
/3: openat64(6, c1t0d0, O_RDONLY) = 9
/5: openat64(6, c1t0d0p0, O_RDONLY)   = 10
/9: openat64(6, c1t0d0p1, O_RDONLY)   = 13
/7: openat64(6, c1t0d0p2, O_RDONLY)   Err#5 EIO
/4: openat64(6, c1t0d0p3, O_RDONLY)   Err#5 EIO
/7: openat64(6, c1t0d0s8, O_RDONLY)   = 14
/2: openat64(6, c7t0d0s0, O_RDONLY)   = 8
/6: openat64(6, c7t0d0s2, O_RDONLY)   = 12
/1: Received signal #20, SIGWINCH, in lwp_park() [default]
/3: openat64(6, c7t0d0p0, O_RDONLY)   = 9
/4: openat64(6, c7t0d0p1, O_RDONLY)   = 11
/5: openat64(6, c7t0d0p2, O_RDONLY)   = 10
/6: openat64(6, c8t0d0p0, O_RDONLY)   = 12
/6: openat64(6, c8t0d0p1, O_RDONLY)   = 12
/6: openat64(6, c8t0d0p2, O_RDONLY)   Err#5 EIO
/6: openat64(6, c8t0d0p3, O_RDONLY)   Err#5 EIO
/6: openat64(6, c8t0d0p4, O_RDONLY)   Err#5 EIO
/6: openat64(6, c8t1d0p0, O_RDONLY)   = 12
/8: openat64(6, c7t0d0p3, O_RDONLY)   = 7
/6: openat64(6, c8t1d0p1, O_RDONLY)   = 12
/6: openat64(6, c8t1d0p2, O_RDONLY)   Err#5 EIO
/6: openat64(6, c8t1d0p3, O_RDONLY)   Err#5 EIO
/6: openat64(6, c8t1d0p4, O_RDONLY)   Err#5 EIO
/9: openat64(6, c7t0d0p4, O_RDONLY)   = 13
/7: openat64(6, c7t0d0s1, O_RDONLY)   = 14
/1: open(/usr/share/locale/en_US.UTF-8/LC_MESSAGES/SUNW_OST_OSCMD.cat, 
O_RDONLY) Err#2 ENOENT
open(/usr/lib/locale/en_US.UTF-8/LC_MESSAGES/SUNW_OST_OSCMD.mo, O_RDONLY) 
Err#2 ENOENT
cannot import 'foo': no such pool available
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] zpool recovery import from dd images

2011-08-24 Thread Cindy Swearingen

Hi Kelsey,

I haven't had to do this myself so someone who has done this
before might have a better suggestion.

I wonder if you need to make links from the original device
name to the new device names.

You can see from the zdb -l output below that the device path
is pointing to the original device names (really long device
names).

Thanks,

Cindy
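
One way to act on that idea (an untested sketch): symlink each dd image to
the device path recorded in its own label, then point the import at /dev/dsk.
The c4t...d0s0 name below stands for the very long path shown in the zdb -l
output that follows; the second image would need a matching link based on its
own label.

# ln -s /jbod1-diskbackup/restore/deep_Lun0.dd /dev/dsk/c4t...d0s0
# zpool import -d /dev/dsk -F deep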



On 08/24/11 12:11, Kelsey Damas wrote:

I am in a rather unique situation.  I've inherited a zpool composed of
two vdevs.   One vdev was roughly 9TB on one RAID 5 array, and the
other vdev is roughly 2TB on a different RAID 5 array.The 9TB
array crashed and was sent to a data recovery firm, and they've given
me a dd image.   I've also taken a dd image of the other side.

-rw-r--r--   1 root wheel   9.2T Aug 18 07:53 deep_Lun0.dd
-rw-r--r--   1 root wheel   2.1T Aug 23 15:55 deep_san01_lun.dd

zpool import identifies that both files are members of the pool, but
there are 'insufficient replicas' and it can not import.

zpool import -d . -F
  pool: deep
id: 8026270260630449340
 state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

deep   UNAVAIL
insufficient replicas
  /jbod1-diskbackup/restore/deep_san01_lun.dd  UNAVAIL  cannot open
  /jbod1-diskbackup/restore/deep_Lun0.dd   UNAVAIL  cannot open

Using zdb, I can see both images have 4 labels.   What are my options
in terms of attempting a recovery?   I had hoped that ZFS could import
the pool even if the vdevs are now files instead of devices, but
unfortunately it was not so easy.


zdb -l deep_Lun0.dd

LABEL 0

version: 15
name: 'deep'
state: 0
txg: 322458504
pool_guid: 8026270260630449340
hostid: 713828554
hostname: 'vali.REDACTED
top_guid: 7224951874042153155
guid: 7224951874042153155
vdev_tree:
type: 'disk'
id: 1
guid: 7224951874042153155
path: 
'/dev/dsk/c4t526169645765622E436F6D202020202030303330383933323030303130363120d0s0'
devid: 'id1,sd@TRaidWeb.Com_003089320001061_/a'
phys_path:
'/scsi_vhci/disk@g526169645765622e436f6d202020202030303330383933323030303130363120:a'
whole_disk: 0
metaslab_array: 148
metaslab_shift: 36
ashift: 9
asize: 10076008480768
is_log: 0
DTL: 213

LABEL 1

version: 15
name: 'deep'
state: 0
txg: 322458504
pool_guid: 8026270260630449340
hostid: 713828554
hostname: 'vali.REDACTED
top_guid: 7224951874042153155
guid: 7224951874042153155
vdev_tree:
type: 'disk'
id: 1
guid: 7224951874042153155
path: 
'/dev/dsk/c4t526169645765622E436F6D202020202030303330383933323030303130363120d0s0'
devid: 'id1,sd@TRaidWeb.Com_003089320001061_/a'
phys_path:
'/scsi_vhci/disk@g526169645765622e436f6d202020202030303330383933323030303130363120:a'
whole_disk: 0
metaslab_array: 148
metaslab_shift: 36
ashift: 9
asize: 10076008480768
is_log: 0
DTL: 213

LABEL 2

version: 15
name: 'deep'
state: 0
txg: 322458504
pool_guid: 8026270260630449340
hostid: 713828554
hostname: 'vali.REDACTED
top_guid: 7224951874042153155
guid: 7224951874042153155
vdev_tree:
type: 'disk'
id: 1
guid: 7224951874042153155
path: 
'/dev/dsk/c4t526169645765622E436F6D202020202030303330383933323030303130363120d0s0'
devid: 'id1,sd@TRaidWeb.Com_003089320001061_/a'
phys_path:
'/scsi_vhci/disk@g526169645765622e436f6d202020202030303330383933323030303130363120:a'
whole_disk: 0
metaslab_array: 148
metaslab_shift: 36
ashift: 9
asize: 10076008480768
is_log: 0
DTL: 213

LABEL 3

version: 15
name: 'deep'
state: 0
txg: 322458504
pool_guid: 8026270260630449340
hostid: 713828554
hostname: 'vali.REDACTED
top_guid: 7224951874042153155
guid: 7224951874042153155
vdev_tree:
type: 'disk'
id: 1
guid: 7224951874042153155
path: 
'/dev/dsk/c4t526169645765622E436F6D202020202030303330383933323030303130363120d0s0'
devid: 'id1,sd@TRaidWeb.Com_003089320001061_/a'
phys_path:
'/scsi_vhci/disk@g526169645765622e436f6d202020202030303330383933323030303130363120:a'
whole_disk: 0
metaslab_array: 148
metaslab_shift: 36
ashift: 9
asize: 10076008480768
  

Re: [zfs-discuss] ZFS raidz on top of hardware raid0

2011-08-15 Thread Cindy Swearingen

D'oh. I shouldn't answer questions first thing Monday morning.

I think you should test this configuration with and without the
underlying hardware RAID.

If RAIDZ is the right redundancy level for your workload,
you might be pleasantly surprised with a RAIDZ configuration
built on the h/w raid array in JBOD mode.

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

cs
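
As a rough sketch of the comparison (device names and grouping here are
hypothetical), the JBOD-mode test would build the RAIDZ groups directly on
the disks:

# zpool create tank \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
    raidz c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0

while the other test run would use the same zpool create against the 20
hardware RAID0 LUNs, so the two layouts can be benchmarked with the same
workload.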

On 08/15/11 08:41, Cindy Swearingen wrote:


Hi Tom,

I think you should test this configuration with and without the
underlying hardware RAID.

If RAIDZ is the right redundancy level for your workload,
you might be pleasantly surprised.

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

Thanks,

Cindy

On 08/12/11 19:34, Tom Tang wrote:
Suppose I want to build a 100-drive storage system. Are there any
disadvantages to setting up 20 arrays of HW RAID0 (5 drives each), then
creating a ZFS file system on these 20 virtual drives and configuring
them as RAIDZ?


I understand people always say ZFS doesn't prefer HW RAID.  In this
case, the HW RAID0 is only for striping (allowing a higher data transfer
rate), while the actual RAID5 (i.e. RAIDZ) is done via ZFS, which takes
care of all the checksum/error detection/auto-repair.  I guess this will
not affect any of the advantages of using ZFS, while I could get a higher
data transfer rate.  Is that the case?
Any suggestion or comment?  Please kindly advise.  Thanks!



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Space usage

2011-08-14 Thread Cindy Swearingen

Hi Ned,

The difference is that for mirrored pools, zpool list displays the
actual available space so that if you have a mirrored pool of two
30-GB disks, zpool list will display 30 GBs, which should jibe with
the zfs list output of available space for file systems.

For RAIDZ pools, zpool list displays the RAW pool space and zfs list
displays actual available pool space for file systems.

The inconsistency in the way zpool list displays AVAIL pool space for
RAIDZ and mirrored pools has been around for a while.

http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq#HZFSAdministrationQuestions
Why doesn't the space that is reported by the zpool list command and the 
zfs list command match?


Cindy
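
As a hypothetical illustration (not Ned's system): for a RAIDZ-1 pool built
from three 30-GB disks, zpool list reports roughly 90 GB of SIZE (the raw
space, including parity), while zfs list reports roughly 60 GB of AVAIL (the
space actually usable by file systems). The two views to compare are:

# zpool list tank
# zfs list tank
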
Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Freddie Cash

zpool list output show raw disk usage, including all redundant copies of
metadata, all redundant copies of data blocks, all redundancy accounted for
(mirror, raidz), etc.  


Perhaps that's true after a certain version?  It's not true in the latest 
solaris 10.  Here are my results on a fully patched solaris 10 installation, 
installed from the latest disc (10u9):

[root@foo ~]# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
rpoolONLINE   0 0 0
  mirror-0   ONLINE   0 0 0
c0t5000C5003424396Bd0s0  ONLINE   0 0 0
c0t5000C5002637311Fd0s0  ONLINE   0 0 0

errors: No known data errors

[root@foo ~]# zpool list rpool
NAMESIZE  ALLOC   FREECAP  HEALTH  ALTROOT
rpool  97.5G  9.22G  88.3G 9%  ONLINE  -

[root@foo ~]# zfs list rpool
NAMEUSED  AVAIL  REFER  MOUNTPOINT
rpool  11.3G  84.6G  32.5K  /rpool

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Recover data from detached ZFS mirror

2011-07-28 Thread Cindy Swearingen

Hi Judy,

Without much to go on, let's try the easier task first.

Is it possible that you can re-attach the detached disk back
to original root mirror disk?

Thanks,

Cindy
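
If re-attaching is an option, it would look something like this (a sketch
with hypothetical device names, where c0t0d0s0 is the disk still in the root
pool and c0t1d0s0 is the previously detached disk):

# zpool attach rpool c0t0d0s0 c0t1d0s0
# zpool status rpool

Wait for the resilver to finish before relying on the re-attached disk.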

On 07/28/11 13:16, Judy Wheeler (QTSI) X7567 wrote:

Does anyone know where the Jeff Bonwick tool to recover a ZFS label is?

The old link was: 
http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg15620.html


But it does not exist anymore.  I have a zfs detached disk that was part
of a root mirror and I need to boot from it. 


Thank you for any information.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Recover data from detached ZFS mirror

2011-07-28 Thread Cindy Swearingen

Hi Judy,

The disk label of the detached disk is intact but the pool info is no
longer accessible so no easy way exists to solve this problem.

It's not simply a matter of getting the boot info back on there, it's the
pool info.

My assessment is that your disk will have to meet up with its other half
before it can be bootable again.

Many experts are on this list so maybe they will have a suggestion.

Thanks,

Cindy




On 07/28/11 13:37, Judy Wheeler (QTSI) X7567 wrote:

Hi Cindy,

Well, the other half of the zfs mirror is on a truck via snail mail to
another location.  We thought we were saving a copy of the root disk when
we did a zfs detach.  We didn't know it would remove the zfs label
information.  Is there a way to get the disk back to a bootable state?

Thanks,

--Judy--


Hi Judy,

Without much to go on, let's try the easier task first.

Is it possible that you can re-attach the detached disk back
to original root mirror disk?

Thanks,

Cindy

On 07/28/11 13:16, Judy Wheeler (QTSI) X7567 wrote:

Does anyone know where the Jeff Bonwick tool to recover a ZFS label is?

The old link was: 
http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg15620.html


But it does not exist anymore.  I have a zfs detached disk that was part
of a root mirror and I need to boot from it. 


Thank you for any information.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Judy Wheeler
System Administrator (321) 494-7567  Patrick AFB
Quantum Technology Services, Inc.(321) 799-9655  QTSI
1980 N. Atlantic Blvd.   (321) 749-1620  Cell Phone
Cocoa Beach, FL. 32931



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Adding mirrors to an existing zfs-pool]

2011-07-26 Thread Cindy Swearingen


Subject: Re: [zfs-discuss] Adding mirrors to an existing zfs-pool
Date: Tue, 26 Jul 2011 08:54:38 -0600
From: Cindy Swearingen cindy.swearin...@oracle.com
To: Bernd W. Hennig consult...@hennig-consulting.com
References: 342994905.11311662049567.JavaMail.Twebapp@sf-app1

Hi Bernd,

If you are talking about attaching 4 new disks to a non redundant pool
with 4 disks, and then you want to detach the previous disks then yes,
this is possible and a good way to migrate to new disks.

The new disks must be the equivalent size or larger than the original
disks.

See the hypothetical example below.

If you mean something else, then please provide your zpool status
output.

Thanks,

Cindy


# zpool status tank
 pool: tank
 state: ONLINE
 scan: resilvered 1018K in 0h0m with 0 errors on Fri Jul 22 15:54:52 2011
config:

NAMESTATE READ WRITE CKSUM
tankONLINE   0 0 0
c4t1d0  ONLINE   0 0 0
c4t2d0  ONLINE   0 0 0
c4t3d0  ONLINE   0 0 0
c4t4d0  ONLINE   0 0 0


# zpool attach tank c4t1d0 c6t1d0
# zpool attach tank c4t2d0 c6t2d0
# zpool attach tank c4t3d0 c6t3d0
# zpool attach tank c4t4d0 c6t4d0

The above syntax will create 4 mirrored pairs of disks.

Attach each new disk, wait for it to resilver, attach the next disk,
resilver, and so on. I would scrub the pool after resilvering is
complete, and check fmdump to ensure all new devices are operational.
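
Those checks might look like the following, as a sketch:

# zpool scrub tank
# zpool status -v tank
# fmdump
# fmdump -eV | more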

When all the disks are replaced and the pool is operational, detach
the original disks.

# zpool detach tank c4t1d0
# zpool detach tank c4t2d0
# zpool detach tank c4t3d0
# zpool detach tank c4t4d0


On 07/26/11 00:33, Bernd W. Hennig wrote:

G'Day,

- zfs pool with 4 disks (from Clariion A)
- must migrate to Clariion B (so I created 4 disks of the same size,
  available for zfs)

The zfs pool has no mirrors; my idea was to attach the 4 new disks from
Clariion B to the 4 disks which are still in the pool - and later
remove the original 4 disks.

All the examples I found show how to create a new pool with mirrors,
but none show how to add a mirror disk to each disk of a pool that has
no mirrors.

- is it possible to attach a disk to each disk in the pool? (They have
  different sizes, so I have to attach exactly the right disk from
  Clariion B to each original disk from Clariion A.)

- can I later remove the disks from Clariion A while the pool stays
  intact and users can keep working with the pool?



??

Sorry for the beginner questions

Tnx for help


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recover zpool with a new installation

2011-07-26 Thread Cindy Swearingen

Hi Roberto,

Yes, you can reinstall the OS on another disk and as long as the
OS install doesn't touch the other pool's disks, your
previous non-root pool should be intact. After the install
is complete, just import the pool.

Thanks,

Cindy
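
A sketch of the post-install step (the pool name here is just an example):

# zpool import
# zpool import datapool

If the pool was never exported before the old root disk failed, zpool import
may require -f to take ownership from the previous host.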



On 07/26/11 10:49, Roberto Scudeller wrote:

Hi all,

I lost my storage because rpool doesn't boot. I tried to recover, but 
opensolaris says to destroy and re-create it.
My rpool is installed on a flash drive, and my pool (with my data) is on 
other disks.


My question is: is it possible to reinstall opensolaris on a new flash 
drive, without touching my pool of disks, and then recover that pool?


Thanks.

Regards,
--
Roberto Scudeller





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Each user has his own zfs filesystem??

2011-07-24 Thread Cindy Swearingen
That is correct. 

Those of us working in ZFS land recommend one file system per user, but the 
software has not yet caught up to that model. The wheels are turning though.

When I get back to office, I will send out some steps that might help during
this transition.

Thanks,

Cindy
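
In the meantime, the one-file-system-per-user model itself is simple to set
up. A minimal sketch, with hypothetical pool and user names:

# zfs create tank/home
# zfs create -o quota=20g tank/home/alice
# zfs create -o quota=20g tank/home/bob
# zfs list -r tank/home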
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Replacing failed drive

2011-07-22 Thread Cindy Swearingen

Hi Chris,

Which Solaris release is this?

Depending on the Solaris release, you have a couple of
different options.

Here's one:

1. Physically replace original failed disk and detach
the spare.

A. If  c10t0d0 was the disk that you physically replaced,
issue this command:

# zpool replace tank c10t0d0
# zpool clear tank

Then check zpool status to see if the spare detached
automatically.

B. If zpool status says that c10t0d0 is ONLINE but the
spare is still INUSE, just detach the spare:

# zpool detach tank c10t6d0

The next one depends on your Solaris release.

Thanks,

Cindy
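
For reference, the other common approach (keep the spare c10t6d0 in the
mirror permanently and turn the newly installed disk into the new spare)
would be roughly this, as an untested sketch:

# zpool detach tank c10t0d0
# zpool add tank spare c10t0d0

The first command removes the failed-disk entry so the spare becomes a
regular mirror member; the second adds the new disk in that slot back as
the hot spare.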


On 07/22/11 12:35, Chris Dunbar - Earthside, LLC wrote:

Hello,

The bad news is I lost a drive on my production server. The good news is the 
spare kicked in and everything kept running as it should. I stopped by the 
datacenter last night, shut down the server, and physically replaced the failed 
drive. I want to make sure I don't screw anything up so would somebody be so 
kind as to lay out what my next steps are? Below is a snippet of what zpool 
status is showing:

tank DEGRADED 0 0 0 
mirror-0 DEGRADED 0 0 0 
spare-0 DEGRADED 0 0 0 
c10t0d0 REMOVED 0 0 0 
c10t6d0 ONLINE 0 0 0 
c11t0d0 ONLINE 

snip... 

spares 
c10t6d0 INUSE currently in use



How do I bring the replaced drive back online and get it into the array? Do I 
make the new drive the spare or do I bring the new drive online in the mirror 
and return the original spare to spare status? Any advice and/or actual 
commands would be greatly appreciated!

Thank you,
Chris Dunbar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] How to recover -- LUNs go offline, now permanent errors?

2011-07-15 Thread Cindy Swearingen

Hi David,

If the permanent error is in some kind of metadata, then it doesn't
translate to a specific file name.

You might try another zpool scrub and then a zpool clear to see if
it clears this error.

Thanks,

Cindy
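
A sketch of that sequence (substitute the real pool name for tank):

# zpool scrub tank
# zpool status -v tank
# zpool clear tank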

On 07/13/11 12:47, David Smith wrote:

I recently had an issue with my LUNs from our storage unit going offline.  This 
caused the zpool to get numerous errors on the luns.  The pool is on-line, and 
I did a scrub, but one of the raid sets is
degraded:

   raidz2-3 DEGRADED 0 0 0
c7t60001FF011C6F3103B00011D1BF1d0  DEGRADED 0 0 0  
too many errors
c7t60001FF011C6F3023900011D1BF1d0  DEGRADED 0 0 0  
too many errors
c7t60001FF011C6F2F53700011D1BF1d0  DEGRADED 0 0 0  
too many errors
c7t60001FF011C6F2E43500011D1BF1d0  DEGRADED 0 0 0  
too many errors
c7t60001FF011C6F2D23300011D1BF1d0  DEGRADED 0 0 0  
too many errors
c7t60001FF011C6F2A93100011D1BF1d0  DEGRADED 0 0 0  
too many errors
c7t60001FF011C6F29A2F00011D1BF1d0  DEGRADED 0 0 0  
too many errors
c7t60001FF011C6F2682D00011D1BF1d0  DEGRADED 0 0 0  
too many errors
c7t60001FF011C6F24C2B00011D1BF1d0  DEGRADED 0 0 0  
too many errors
c7t60001FF011C6F2192900011D1BF1d0  DEGRADED 0 0 0  
too many errors

Also I have the following:
errors: Permanent errors have been detected in the following files:

0x3a:0x3b04

Originally, there was a file, and then a directory listed, but I removed them.  
Now I'm stuck with
the hex codes above.  How do I interpret them?  Can this pool be recovered, or 
basically how do
I proceed?

The system is Solaris 10 U9 with all recent patches.

Thanks,

David

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Non-Global zone recovery

2011-07-07 Thread Cindy Swearingen

Hi Ram,

Which Solaris release is this and how was the OS re-imaged?

If this is a recent Solaris 10 release and you used Live Upgrade,
then the answer is yes.

I'm not so sure about zone behavior in the Oracle Solaris 11
Express release.

You should just be able to import testpool and boot your zones.

Thanks,

Cindy

On 07/07/11 12:08, Ram wrote:

Can we recover a non-global zone if the global zone is reimaged because of an OS 
issue and the non-global zone's OS is running on a different pool which is on SAN?

For ex: Global is running on rpool - Internal disk
Non-Global zone is running on testpool - Which is on SAN.

Thanks,
Ram

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Non-Global zone recovery

2011-07-07 Thread Cindy Swearingen

Okay, so which Solaris 10 release is this?

It might also depend on how your zones are created.
You can review the supported zone configurations here:

http://download.oracle.com/docs/cd/E18752_01/html/819-5461/ggpdm.html#gigek

Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With 
Zones (at Least Solaris 10 5/09)


Also see Example 5-6

When you import your test pool, do you see your file systems:

# zfs list

Can you see your zone information:

# zoneadm list -v

Thanks,

Cindy
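
For what it's worth, when the zone configuration itself was lost with the
old global zone, the usual recovery (a sketch, assuming the zone root is
intact at /zones/Test on the imported pool) is to recreate the configuration
and attach the existing zone root rather than use create -a, which only
works for zones that were detached with zoneadm detach:

# zonecfg -z Test
zonecfg:Test> create -b
zonecfg:Test> set zonepath=/zones/Test
zonecfg:Test> commit
zonecfg:Test> exit
# zoneadm -z Test attach -u
# zoneadm -z Test boot

Any network and other resources from the original configuration would need
to be re-added in zonecfg before the commit.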

On 07/07/11 13:41, Ram kumar wrote:

Hi Cindy,
 
Thanks for the email.
 


We are using Solaris 10 without Live Upgrade.

 

 


Tested following in the sandbox environment:

 

1)  We have one non-global zone (TestZone)  which is running on Test 
zpool (SAN)


2)  Don’t see zpool or non-global zone after re-image of Global zone.

3)  Imported zpool Test

 


Now I am trying to create Non-global zone and it is giving error

 


bash-3.00# zonecfg -z Test

Test: No such zone configured

Use 'create' to begin configuring a new zone.

zonecfg:Test create -a /zones/Test

invalid path to detached zone

Thanks,
Ram


 
On Thu, Jul 7, 2011 at 3:14 PM, Cindy Swearingen 
cindy.swearin...@oracle.com mailto:cindy.swearin...@oracle.com wrote:


Hi Ram,

Which Solaris release is this and how was the OS re-imaged?

If this is a recent Solaris 10 release and you used Live Upgrade,
then the answer is yes.

I'm not so sure about zone behavior in the Oracle Solaris 11
Express release.

You should just be able to import testpool and boot your zones.

Thanks,

Cindy

On 07/07/11 12:08, Ram wrote:

Can we recover non-global zone if Global zone is reimaged
because of an OS issue and non-global zone OS is running on
different pool which is on SAN.

For ex: Global is running on rpool - Internal disk
Non-Global zone is running on testpool - Which is on SAN.

Thanks,
Ram



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] time-slider/plugin:zfs-send

2011-07-06 Thread Cindy Swearingen

Hi Adrian,

I wonder if you have seen these setup instructions:

http://www.oracle.com/technetwork/articles/servers-storage-dev/autosnapshots-397145.html

If you have, let me know if you are still having trouble.

Thanks,

Cindy

On 07/05/11 16:37, Adrian Carpenter wrote:

I've been trying to figure out how the time-slider zfs-send works to no avail.  
Is there any documentation/howto for the zfs-send time slider plugin?

- Adrian

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Trouble mirroring root pool onto larger disk

2011-07-01 Thread Cindy Swearingen

Hi Jiawen,

Yes, the boot failure message would be very helpful.

The first thing to rule out is:

I think you need to be running a 64-bit kernel to
boot from a 2 TB disk.

Thanks,

Cindy
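
Two quick checks that might help narrow it down (the device name is the one
from the message below):

# isainfo -kv
# prtvtoc /dev/rdsk/c13d1s2

The first should report a 64-bit kernel; the second shows the slice layout
actually written to the new disk.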

On 07/01/11 02:58, Jiawen Chen wrote:

Hi,

I have Solaris 11 Express with a root pool installed on a 500 GB disk.  I'd 
like to migrate it to a 2 TB disk.  I've followed the instructions on the ZFS 
troubleshooting guide 
(http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Replacing.2FRelabeling_the_Root_Pool_Disk)
 and the Oracle ZFS Administration Guide 
(http://download.oracle.com/docs/cd/E19253-01/819-5461/ghzvx/index.html) pretty 
carefully.  However, things still don't work: after re-silvering, I switch my BIOS 
to boot from the 2 TB disk and at boot, *some* kind of error message appears for 
less than 1 second before the machine reboots itself.  Is there any way I can view this 
message?  I.e., is this message written to the log anywhere?

As far as I can tell, I've set up all the partitions and slices correctly 
(VTOC below).  The only error message I get is when I do:

# zpool attach rpool c9t0d0s0 c13d1s0

(c9t0d0s0 is the 500 GB original disk, c13d1s0 is the 2 TB new disk)

I get:

invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c13d1s0 overlaps with /dev/dsk/c13d1s2

But that's a well known bug and I use -f to force it since the backup slice 
shouldn't matter.  If anyone has any ideas, I really appreciate it.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-23 Thread Cindy Swearingen

Hi Dave,

Consider the easiest configuration first and it will probably save
you time and money in the long run, like this:

73g x 73g mirror (one large s0 on each disk) - rpool
73g x 73g mirror (use whole disks) - data pool

Then, get yourself two replacement disks, a good backup strategy,
and we all sleep better.

Convert the complexity of some of the suggestions to time and money
for replacement if something bad happens, and the formula would look
like this:

time to configure x time to replace x replacement disks = $$, which is more
than the cost of two replacement disks for two mirrored pools

A complex configuration of slices and a combination of raidZ and
mirrored pools across the same disks will be difficult to administer,
performance will be unknown, not to mention how much time it might take
to replace a disk.

My advice is to use the simplicity of ZFS as it was intended, and you
will save time and money in the long run.

Cindy
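
As a concrete sketch of that layout: the 73g x 73g root mirror is created by
the installer on slice 0 of the first two disks, and the data pool is then
just (hypothetical device names):

# zpool create tank mirror c1t2d0 c1t3d0
# zpool status tank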


On 06/23/11 07:38, Dave U. Random wrote:

Edward Ned Harvey opensolarisisdeadlongliveopensola...@nedharvey.com
wrote:

Well ... 
Slice all 4 drives into 13G and 60G.

Use a mirror of 13G for the rpool.
Use 4x 60G in some way (raidz, or stripe of mirrors) for tank
Use a mirror of 13G appended to tank


Hi Edward! Thanks for your post. I think I understand what you are saying
but I don't know how to actually do most of that. If I am going to make a
new install of Solaris 10 does it give me the option to slice and dice my
disks and to issue zpool commands? Until now I have only used Solaris on
Intel with boxes and used both complete drives as a mirror.

Can you please tell me what the steps are to implement your suggestion?

I imagine I can slice the drives in the installer and then set up a 4-way
root mirror (stupid but as you say not much choice) on the 13G section. Or
maybe one root mirror on two slices and then have 13G aux storage left to
mirror for something like /var/spool? What would you recommend? I didn't
understand what you suggested about appending a 13G mirror to tank. Would
that be something like RAID10 without actually being RAID10 so I could still
boot from it? How would the system use it?

In this setup that will install everything on the root mirror so I will
have to move things around later? Like /var and /usr or whatever I don't
want on the root mirror? And then I just make a RAID10 like Jim was saying
with the other 4x60 slices? How should I move mountpoints that aren't
separate ZFS filesystems?


The only conclusion you can draw from that is:  First take it as a given
that you can't boot from a raidz volume.  Given, you must have one mirror.


Thanks, I will keep it in mind.


Then you raidz all the remaining space that's capable of being put into a
raidz...  And what you have left is a pair of unused space, equal to the
size of your boot volume.  You either waste that space, or you mirror it
and put it into your tank.


So RAID10 sounds like the only reasonable choice since there are an even
number of slices, I mean is RAIDZ1 even possible with 4 slices?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Cannot format 2.5TB ext disk (EFI)

2011-06-23 Thread Cindy Swearingen

Hi Kitty,

Try this:

# zpool create test c5t0d0

Thanks,

Cindy

On 06/23/11 12:34, Kitty Tam wrote:

It wouldn't let me

# zpool create test_pool c5t0d0p0
cannot create 'test_pool': invalid argument for this pool operation

Thanks,
Kitty


On 06/23/11 03:00, Roy Sigurd Karlsbakk wrote:

I cannot run format -e to change it since it will crash my sys or
the server I am trying to attach the disk to.


Did you try to do as Jim Dunham said?

 zpool create test_pool c5t0d0p0
 zpool destroy test_pool
 format -e c5t0d0p0
 partition
 print
 controlD


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Zpool metadata corruption from S10U9 to S11 express

2011-06-23 Thread Cindy Swearingen

Hi David,

I see some inconsistencies between the mirrored pool tank info below
and the device info that you included.

1. The zpool status for tank shows some remnants of log devices (?),
here:

 tank FAULTED  corrupted data
   logs

Generally, the log devices are listed after the pool devices.
Did this pool have log devices at one time? Are they missing?

# zpool status datap
  pool: datap
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
datap   ONLINE   0 0 0
  mirror-0  ONLINE   0 0 0
c1t1d0  ONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
  mirror-1  ONLINE   0 0 0
c1t3d0  ONLINE   0 0 0
c1t4d0  ONLINE   0 0 0
logs
  mirror-2  ONLINE   0 0 0
c1t5d0  ONLINE   0 0 0
c1t8d0  ONLINE   0 0 0

I would like to see this output:

# zpool history tank

2. Can you include the zdb -l output for c9t57d0 because
the zdb -l device output below is from a RAIDZ config, not
a mirrored config, although the pool GUIDs match so I'm
confused.

I don't think this has anything to do with moving from s10u9 to S11
express.

My sense is that if you have remnants of the same pool name on some of
your devices but as different pools, then you will see device problems
like these.

Thanks,

Cindy



On 06/22/11 20:28, David W. Smith wrote:

On Wed, Jun 22, 2011 at 06:32:49PM -0700, Daniel Carosone wrote:

On Wed, Jun 22, 2011 at 12:49:27PM -0700, David W. Smith wrote:

# /home/dws# zpool import
  pool: tank
id: 13155614069147461689
 state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-72
config:

tank FAULTED  corrupted data
logs
  mirror-6   ONLINE
c9t57d0  ONLINE
c9t58d0  ONLINE
  mirror-7   ONLINE
c9t59d0  ONLINE
c9t60d0  ONLINE

Is there something else I can do to see what is wrong.

Can you tell us more about the setup, in particular the drivers and
hardware on the path?  There may be labelling, block size, offset or
even bad drivers or other issues getting in the way, preventing ZFS
from doing what should otherwise be expected to work.   Was there
something else in the storage stack on the old OS, like a different
volume manager or some multipathing?

Can you show us the zfs labels with zdb -l /dev/foo ?

Does import -F get any further?


Original attempt when specifying the name resulted in:

# /home/dws# zpool import tank
cannot import 'tank': I/O error

Some kind of underlying driver problem odour here.

--
Dan.


The system is an x4440 with two dual-port Qlogic 8 Gbit FC cards connected to a 
DDN 9900 storage unit.  There are 60 LUNs configured from the storage unit; we are 
using raidz1 across these LUNs in a 9+1 configuration.  Under Solaris 10U9 
multipathing is enabled.

For example here is one of the devices:


# luxadm display /dev/rdsk/c8t60001FF010DC50AA2E00081D1BF1d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t60001FF010DC50AA2E00081D1BF1d0s2
  Vendor:   DDN 
  Product ID:   S2A 9900
  Revision: 6.11

  Serial Num:   10DC50AA002E
  Unformatted capacity: 15261576.000 MBytes
  Write Cache:  Enabled
  Read Cache:   Enabled
Minimum prefetch:   0x0
Maximum prefetch:   0x0
  Device Type:  Disk device
  Path(s):

  /dev/rdsk/c8t60001FF010DC50AA2E00081D1BF1d0s2
  /devices/scsi_vhci/disk@g60001ff010dc50aa2e00081d1bf1:c,raw
   Controller   /dev/cfg/c5
Device Address  2401ff051232,2e
Host controller port WWN2101001b32bfe1d3
Class   secondary
State   ONLINE
   Controller   /dev/cfg/c7
Device Address  2801ff0510dc,2e
Host controller port WWN2101001b32bd4f8f
Class   primary
State   ONLINE


Here is the output of the zdb command:

# zdb -l /dev/dsk/c8t60001FF010DC50AA2E00081D1BF1d0s0

LABEL 0

version=22
name='tank'
state=0
txg=402415
pool_guid=13155614069147461689
hostid=799263814
hostname='Chaiten'
top_guid=7879214599529115091
guid=9439709931602673823
vdev_children=8
vdev_tree
type='raidz'
id=5
guid=7879214599529115091
nparity=1
metaslab_array=35
metaslab_shift=40
ashift=12
asize=160028491776000
is_log=0
create_txg=22
children[0]
type='disk'
id=0
guid=15738823520260019536

Re: [zfs-discuss] Zpool with data errors

2011-06-22 Thread Cindy Swearingen

Hi Todd,

Yes, I have seen zpool scrub do some miracles but I think it depends
on the amount of corruption.

A few suggestions are:

1. Identify and resolve the corruption problems on the underlying
hardware. No point in trying to clear the pool errors if this
problem continues.

The fmdump command and the fmdump -eV command output will
tell you how long these errors have been occurring.

2. Run zpool scrub and zpool clear to attempt to clear the errors.

3. If the errors below don't clear, then manually remove the corrupted
files below, if possible, and restore from backup. Depending on what
fmdump says, you might check your backups for corruption.

4. Run zpool scrub and zpool clear again as needed.

5. Consider replacing this configuration with a redundant ZFS storage
pool. We can provide the recommended syntax.

Let us know how this turns out.

Thanks,

Cindy
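
For steps 1 and 2 above, the commands would look roughly like this for this
pool:

# fmdump
# fmdump -eV | more
# zpool scrub ABC0101
# zpool status -v ABC0101
# zpool clear ABC0101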

On 06/20/11 23:36, Todd Urie wrote:

I have a zpool that shows the following from a zpool status -v zpool name

brsnnfs0104 [/var/spool/cron/scripts]# zpool status -v ABC0101
  pool:ABC0101
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
ABC0101   ONLINE   0 010
  /dev/vx/dsk/ABC01dg/ABC0101_01  ONLINE   0 0 2
  /dev/vx/dsk/ABC01dg/ABC0101_02  ONLINE   0 0 8
  /dev/vx/dsk/ABC01dg/ABC0101_03  ONLINE   0 010

errors: Permanent errors have been detected in the following files:

/clients/ABC0101/rep/local/bfm/web/htdocs/tmp/rscache/717b52282ea059452621587173561360

/clients/ABC0101/rep/local/bfm/web/htdocs/tmp/rscache/6e6a9f37c4d13fdb3dcb8649272a2a49

/clients/ABC0101/rep/d0/prod1/reports/ReutersCMOLoad/ReutersCMOLoad.ABCntss001.20110620.141330.26496.ROLLBACK_FOR_UPDATE_COUPONS.html

/clients/ABC0101/rep/local/bfm/web/htdocs/tmp/G2_0.related_detail_loader.1308593666.54643.n5cpoli3355.data

/clients/ABC0101/rep/d0/prod1/reports/gp_reports/ALLMNG/20110429/F_OLPO82_A.gp.ABCIM_GA.nlaf.xml.gz

/clients/ABC0101/rep/d0/prod1/reports/gp_reports/ALLMNG/20110429/UNVLXCIAFI.gp.ABCIM_GA.nlaf.xml.gz

/clients/ABC0101/rep/d0/prod1/reports/gp_reports/ALLMNG/20110429/UNIVLEXCIA.gp.BARCRATING_ABC.nlaf.xml.gz


I think that a scrub at least has the possibility to clear this up.  A 
quick search suggests that others have had some good experience with 
using scrub in similar circumstances.  I was wondering if anyone could 
share some of their experiences, good and bad, so that I can assess the 
risk and probability of success with this approach.  Also, any other 
ideas would certainly be appreciated.



-RTU




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] zfs directory inheritance question/issue/problem ?

2011-06-22 Thread Cindy Swearingen

Hi Ed,

This is current Solaris SMB sharing behavior. CR 6582165 is filed to
provide this feature.

You will need to reshare your 3 descendent file systems.

NFS sharing does this automatically.

Thanks,

Cindy
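
Resharing the three descendent file systems would be along these lines,
reusing the share names from the message below:

# zfs set sharesmb=name=Jan tank/documents/Jan
# zfs set sharesmb=name=Feb tank/documents/Feb
# zfs set sharesmb=name=March tank/documents/March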


On 06/22/11 09:46, Ed Fang wrote:

Need a little help.  I set up my zfs storage last year and everything has been 
working great.  The initial setup was as follows

tank/documents  (not shared explicitly)
tank/documents/Jan- shared as Jan
tank/documents/Feb   - shared as Feb
tank/documents/March - shared as March

Anyhow, I now prefer to have one share rather than 3 separate shares.  So I 
entered in zfs set sharesmb=name=documents tank/documents thinking that I could 
just mount the documents directory on my smb clients and have Jan/Feb/Mar show 
up in subsequent directories.  Well, apparently not the case - when I set up 
the parent directory share, nothing below it shows up.  In Solaris, the 
directories are present physically, but do not show up in the parent share.

Is there something I'm doing incorrectly here or did I miss something.  I read 
up on inheritance, but this seems to be the reverse.  Any assistance would be 
greatly appreciated.  Thanks

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Resizing ZFS partition, shrinking NTFS?

2011-06-16 Thread Cindy Swearingen

Hi Clive,

What you are asking is not recommended nor supported and could render
your ZFS root pool unbootable. (I'm not saying that some expert
couldn't do it, but it's risky, like data corruption risky.)

ZFS expects the partition boundaries to remain the same unless you
replace the original disk with another disk, attach another disk and
detach the original disk, or expand a pool's underlying LUN.

If you have a larger disk, this is what I would recommend. Attach the
larger disk and then detach the smaller disk. The full steps are
documented on the solarisinternals.com wiki, ZFS troubleshooting
section, replacing the root pool disk steps.


Thanks,

Cindy
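
A sketch of that attach/detach sequence with hypothetical device names
(c0t0d0s0 is the current root pool slice, c0t1d0s0 a slice on the larger
disk), assuming x86 for the boot-block step:

# zpool attach rpool c0t0d0s0 c0t1d0s0
# zpool status rpool
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
# zpool detach rpool c0t0d0s0

The detach should only be run after the resilver has completed.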


On 06/16/11 13:21, Clive Meredith wrote:

Problem:

I currently run a dual-boot machine with a 45 GB partition for Win7 Ultimate and 
a 25 GB partition for OpenSolaris 10 (134).  I need to shrink NTFS to 20 GB and 
increase the ZFS partition to 45 GB.  Is this possible please?  I have looked at 
using the partition tool in OpenSolaris but both partitions are locked, even 
under admin.  Win7 won't allow me to shrink the dynamic volume, as the Finish 
button is always greyed out, so no luck in that direction.

Thanks in advance.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] # disks per vdev

2011-06-15 Thread Cindy Swearingen

Hi Lanky,

If you created a mirrored pool instead of a RAIDZ pool, you could use
the zpool split feature to split your mirrored pool into two identical
pools.

For example, if you had a 3-way mirrored pool, your primary pool would
remain redundant with 2-way mirrors after the split. Then, you would
have a non-redundant pool as a backup. You could also attach more disks
to the backup pool to make it redundant.

At the end of the week or so, destroy the non-redundant pool and
re-attach the disks to your primary pool and repeat.

This is what I would do with daily snapshots and a monthly backup.

Make sure you develop a backup strategy for any pool you build.

Thanks,

Cindy
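
A rough sketch of that cycle, with hypothetical pool and disk names:

# zpool create tank mirror c1t0d0 c1t1d0 c1t2d0
# zpool split tank backup
# zpool import backup

By default zpool split moves the last disk of the mirror (c1t2d0 here) into
the new pool. At the end of the period:

# zpool destroy backup
# zpool attach tank c1t0d0 c1t2d0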



On 06/15/11 06:20, Lanky Doodle wrote:

That's how I understood autoexpand, about not doing so until all disks have 
been done.

I do indeed rip from disc rather than grab torrents - to VIDEO_TS folders and 
not ISO - on my laptop then copy the whole folder up to WHS in one go. So while 
they're not one large single file, they are lots of small .vob files, but being 
written in one hit.

This is a bit OT, but can you have one vdev that is a duplicate of another 
vdev? By that I mean say you had 2x 7 disk raid-z2 vdevs, instead of them both 
being used in one large pool could you have one that is a backup of the other, 
allowing you to destroy one of them and re-build without data loss?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] changing vdev types

2011-06-01 Thread Cindy Swearingen

Hi Matt,

You have several options in terms of migrating the data but I think the
best approach is to do something like I have described below.

Thanks,

Cindy

1. Create snapshots of the file systems to be migrated. If you
want to capture the file system properties, then see the zfs.1m
man page for a description of what you need.

2. Create your mirrored pool with the new disks and call it
pool2, if your raidz pool is pool1, for example.

3. Use zfs send/receive to send your snapshots to pool2.

4. Review the pool properties on pool1 if you want pool2 set up
similarly.

# zpool get all pool1

5. After your pool2 is setup and your data is migrated, then
you can destroy pool1.

6. You can export pool2 and import it as pool1.
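
A condensed, hypothetical sketch of steps 1-6 (pool2's disks are examples):

# zfs snapshot -r pool1@migrate
# zpool create pool2 mirror c2t0d0 c2t1d0 c2t2d0
# zfs send -R pool1@migrate | zfs receive -Fdu pool2
# zpool destroy pool1
# zpool export pool2
# zpool import pool2 pool1

The -R/-F/-d/-u combination recreates the file system hierarchy and
properties under pool2 without mounting anything until you are ready.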

On 06/01/11 12:54, Matt Harrison wrote:

Hi list,

I've got a pool that's got a single raidz1 vdev. I've just got some more 
disks in and I want to replace that raidz1 with a three-way mirror. I 
was thinking I'd just make a new pool and copy everything across, but 
then of course I've got to deal with the name change.


Basically, what is the most efficient way to migrate the pool to a 
completely different vdev?


Thanks

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] not sure how to make filesystems

2011-05-30 Thread Cindy Swearingen
Hi Bill,

I'm assuming you've already upgraded to a Solaris 10 release that supports a UFS
to ZFS migration...

I don't think Live Upgrade supports the operations below. The UFS to ZFS 
migration takes your existing UFS file systems and creates one ZFS BE in a root 
pool. An advantage to this is that you don't have to manage separate file 
systems. 
No direct connection exists between file systems and disk slices.

If you prefer, you can create a separate var file system if you do an initial
installation. You can't migrate a separate UFS var to a separate ZFS file
system in current Solaris 10 releases yet.

You can read about UFS to ZFS Live Upgrade migrations here:

http://download.oracle.com/docs/cd/E19253-01/819-5461/zfsboot-1/index.html
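
In outline, the supported migration looks like this (a sketch; the slice
used for the new root pool is hypothetical and must have an SMI label):

# zpool create rpool c1t0d0s0
# lucreate -n zfsBE -p rpool
# luactivate zfsBE
# init 6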

Thanks,

Cindy
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Same device node appearing twice in same mirror; one faulted, one not...

2011-05-24 Thread Cindy Swearingen

version: 14
name: 'tank'
state: 0
txg: 3374337
pool_guid: 6242690959503408617
hostid: 8697169
hostname: 'wdssandbox'
top_guid: 17982590661103377266
guid: 1717308203478351258
vdev_children: 1
vdev_tree:
type: 'mirror'
id: 0
guid: 17982590661103377266
whole_disk: 0
metaslab_array: 23
metaslab_shift: 32
ashift: 9
asize: 500094468096
is_log: 0
children[0]:
type: 'disk'
id: 0
guid: 1717308203478351258
path: '/dev/dsk/c5t1d0s0'
devid: 'id1,sd@SATA_WDC_WD5000AAKS-0_WD-WCAWF1939879/a'
phys_path: '/pci@0,0/pci8086,2845@1c,3/pci1095,3132@0/disk@1,0:a'
whole_disk: 1
DTL: 27
children[1]:
type: 'disk'
id: 1
guid: 9267693216478869057
path: '/dev/dsk/c5t1d0s0'
devid: 'id1,sd@SATA_WDC_WD5000AAKS-0_WD-WCAWF1769949/a'
phys_path: '/pci@0,0/pci8086,2845@1c,3/pci1095,3132@0/disk@1,0:a'
whole_disk: 1
DTL: 893

LABEL 3

version: 14
name: 'tank'
state: 0
txg: 3374337
pool_guid: 6242690959503408617
hostid: 8697169
hostname: 'wdssandbox'
top_guid: 17982590661103377266
guid: 1717308203478351258
vdev_children: 1
vdev_tree:
type: 'mirror'
id: 0
guid: 17982590661103377266
whole_disk: 0
metaslab_array: 23
metaslab_shift: 32
ashift: 9
asize: 500094468096
is_log: 0
children[0]:
type: 'disk'
id: 0
guid: 1717308203478351258
path: '/dev/dsk/c5t1d0s0'
devid: 'id1,sd@SATA_WDC_WD5000AAKS-0_WD-WCAWF1939879/a'
phys_path: '/pci@0,0/pci8086,2845@1c,3/pci1095,3132@0/disk@1,0:a'
whole_disk: 1
DTL: 27
children[1]:
type: 'disk'
id: 1
guid: 9267693216478869057
path: '/dev/dsk/c5t1d0s0'
devid: 'id1,sd@SATA_WDC_WD5000AAKS-0_WD-WCAWF1769949/a'
phys_path: '/pci@0,0/pci8086,2845@1c,3/pci1095,3132@0/disk@1,0:a'
whole_disk: 1
DTL: 893







On May 20, 2011, at 8:34 AM, Cindy Swearingen wrote:


Hi Alex

More scary than interesting to me.

What kind of hardware and which Solaris release?

Do you know what steps led up to this problem? Any recent hardware
changes?

This output should tell you which disks were in this pool originally:

# zpool history tank

If the history identifies tank's actual disks, maybe you can determine
which disk is masquerading as c5t1d0.

If that doesn't work, accessing the individual disk entries in format
should tell you which one is the problem, if it's only one.

I would like to see the output of this command:

# zdb -l /dev/dsk/c5t1d0s0

Make sure you have a good backup of your data. If you need to pull a
disk to check cabling, or rule out controller issues, you should
probably export this pool first. Have a good backup.

Others have resolved minor device issues by exporting/importing the
pool but with format/zpool commands hanging on your system, I'm not
confident that this operation will work for you.

Thanks,

Cindy

On 05/19/11 12:17, Alex wrote:

I thought this was interesting - it looks like we have a failing drive in our 
mirror, but the two device nodes in the mirror are the same:
 pool: tank
state: DEGRADED
status: One or more devices could not be used because the label is missing or
   invalid.  Sufficient replicas exist for the pool to continue
   functioning in a degraded state.
action: Replace the device using 'zpool replace'.
  see: http://www.sun.com/msg/ZFS-8000-4J
scrub: scrub completed after 1h9m with 0 errors on Sat May 14 03:09:45 2011
config:
   NAMESTATE READ WRITE CKSUM
   tankDEGRADED 0 0 0
 mirror-0  DEGRADED 0 0 0
   c5t1d0  ONLINE   0 0 0
   c5t1d0  FAULTED  0 0 0  corrupted data
c5t1d0 does indeed only appear once in the format list. I wonder how to go 
about correcting this if I can't uniquely identify the failing drive.
format takes forever to spill its guts, and the zpool commands all hang.. 
clearly there is hardware error here, probably causing that, but not sure how to identify 
which disk to pull.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Same device node appearing twice in same mirror; one faulted, one not...

2011-05-20 Thread Cindy Swearingen

Hi Alex

More scary than interesting to me.

What kind of hardware and which Solaris release?

Do you know what steps led up to this problem? Any recent hardware
changes?

This output should tell you which disks were in this pool originally:

# zpool history tank

If the history identifies tank's actual disks, maybe you can determine
which disk is masquerading as c5t1d0.

If that doesn't work, accessing the individual disk entries in format
should tell you which one is the problem, if it's only one.

I would like to see the output of this command:

# zdb -l /dev/dsk/c5t1d0s0

Make sure you have a good backup of your data. If you need to pull a
disk to check cabling, or rule out controller issues, you should
probably export this pool first. Have a good backup.

Others have resolved minor device issues by exporting/importing the
pool but with format/zpool commands hanging on your system, I'm not
confident that this operation will work for you.

Thanks,

Cindy

On 05/19/11 12:17, Alex wrote:

I thought this was interesting - it looks like we have a failing drive in our 
mirror, but the two device nodes in the mirror are the same:

  pool: tank
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid.  Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: scrub completed after 1h9m with 0 errors on Sat May 14 03:09:45 2011
config:

NAMESTATE READ WRITE CKSUM
tankDEGRADED 0 0 0
  mirror-0  DEGRADED 0 0 0
c5t1d0  ONLINE   0 0 0
c5t1d0  FAULTED  0 0 0  corrupted data

c5t1d0 does indeed only appear once in the format list. I wonder how to go 
about correcting this if I can't uniquely identify the failing drive.

format takes forever to spill its guts, and the zpool commands all hang.. 
clearly there is hardware error here, probably causing that, but not sure how to identify 
which disk to pull.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] bootfs ID on zfs root

2011-05-11 Thread Cindy Swearingen

Hi Ketan,

What steps led up to this problem?

I believe the boot failure messages below are related to a mismatch
between the pool version and the installed OS version.

If you're using the JumpStart installation method, then the root pool is 
re-created each time, I believe. Does it also install a patch that
upgrades the pool version?

Thanks,

Cindy
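
One quick way to check for such a mismatch from the network-booted
environment, after importing rpool, might be:

# zpool get version rpool
# zpool upgrade -v

The second command lists the pool versions the booted OS image understands;
if the pool's version is newer, the installed boot environment will not be
able to mount it.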


On 05/11/11 13:27, Ketan wrote:
So why is my system not coming up? I jumpstarted the system again, but it panics like before. How should I recover it and get it back up? 

The system was booted from the network into single-user mode, rpool was imported, and the following is the listing: 



# zpool list
NAMESIZE  ALLOC   FREECAP  HEALTH  ALTROOT
rpool68G  4.08G  63.9G 5%  ONLINE  -
# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
rpool 9.15G  57.8G98K  /rpool
rpool/ROOT4.08G  57.8G21K  /rpool/ROOT
rpool/ROOT/zfsBE_patched  4.08G  57.8G  4.08G  /
rpool/dump3.01G  60.8G16K  -
rpool/swap2.06G  59.9G16K  -
#



Dataset mos [META], ID 0, cr_txg 4, 137K, 62 objects
Dataset rpool/ROOT/zfsBE_patched [ZPL], ID 47, cr_txg 40, 4.08G, 110376 objects
Dataset rpool/ROOT [ZPL], ID 39, cr_txg 32, 21.0K, 4 objects
Dataset rpool/dump [ZVOL], ID 71, cr_txg 74, 16K, 2 objects
Dataset rpool/swap [ZVOL], ID 65, cr_txg 71, 16K, 2 objects
Dataset rpool [ZPL], ID 16, cr_txg 1, 98.0K, 10 objects


But when the system is rebooted it panics again. Is there any way to recover it? I have tried everything I know. 



SunOS Release 5.10 Version Generic_142900-13 64-bit
Copyright 1983-2010 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
NOTICE: zfs_parse_bootfs: error 48
Cannot mount root on rpool/47 fstype zfs

panic[cpu0]/thread=180e000: vfs_mountroot: cannot mount root

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-26 Thread Cindy Swearingen

Hi--

I don't know why the spare isn't kicking in automatically, it should.

A documented workaround is to outright replace the failed disk with one
of the spares, like this:

# zpool replace fwgpool0 c4t5000C5001128FE4Dd0 c4t5000C50014D70072d0

The autoreplace pool property has nothing to do with automatic spare
replacement. When this property is enabled, a replacement disk will
be automatically labeled and replaced. No need to manually run the
zpool command when this property is enabled.

Then, you can find the original failed c4t5000C5001128FE4Dd0 disk
and physically replace it when you have time. You could then add this
disk back into the pool as the new spare, like this:

# zpool add fwgpool0 spare c4t5000C5001128FE4Dd0


Thanks,

Cindy
On 04/25/11 17:56, Lamp Zy wrote:

Hi,

One of my drives failed in Raidz2 with two hot spares:

# zpool status
  pool: fwgpool0
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: resilver completed after 0h0m with 0 errors on Mon Apr 25 
14:45:44 2011

config:

NAME   STATE READ WRITE CKSUM
fwgpool0   DEGRADED 0 0 0
  raidz2   DEGRADED 0 0 0
c4t5000C500108B406Ad0  ONLINE   0 0 0
c4t5000C50010F436E2d0  ONLINE   0 0 0
c4t5000C50011215B6Ed0  ONLINE   0 0 0
c4t5000C50011234715d0  ONLINE   0 0 0
c4t5000C50011252B4Ad0  ONLINE   0 0 0
c4t5000C500112749EDd0  ONLINE   0 0 0
c4t5000C5001128FE4Dd0  UNAVAIL  0 0 0  cannot open
c4t5000C500112C4959d0  ONLINE   0 0 0
c4t5000C50011318199d0  ONLINE   0 0 0
c4t5000C500113C0E9Dd0  ONLINE   0 0 0
c4t5000C500113D0229d0  ONLINE   0 0 0
c4t5000C500113E97B8d0  ONLINE   0 0 0
c4t5000C50014D065A9d0  ONLINE   0 0 0
c4t5000C50014D0B3B9d0  ONLINE   0 0 0
c4t5000C50014D55DEFd0  ONLINE   0 0 0
c4t5000C50014D642B7d0  ONLINE   0 0 0
c4t5000C50014D64521d0  ONLINE   0 0 0
c4t5000C50014D69C14d0  ONLINE   0 0 0
c4t5000C50014D6B2CFd0  ONLINE   0 0 0
c4t5000C50014D6C6D7d0  ONLINE   0 0 0
c4t5000C50014D6D486d0  ONLINE   0 0 0
c4t5000C50014D6D77Fd0  ONLINE   0 0 0
spares
  c4t5000C50014D70072d0AVAIL
  c4t5000C50014D7058Dd0AVAIL

errors: No known data errors


I'd expect the spare drives to auto-replace the failed one but this is 
not happening.


What am I missing?

I really would like to get the pool back in a healthy state using the 
spare drives before trying to identify which one is the failed drive in 
the storage array and trying to replace it. How do I do this?


Thanks for any hints.

--
Peter
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Bootable root pool?

2011-04-18 Thread Cindy Swearingen

Hi Darren,

Yes, a bootable root pool must be created on a disk slice.

You can use a cache device, but not a log device, and the cache device
must be a disk slice.

See the output below.

Thanks,

Cindy

# zpool add rpool log c0t2d0s0
cannot add to 'rpool': root pool can not have multiple vdevs or separate 
logs

# zpool add rpool cache c0t3d0
cannot label 'c0t3d0': EFI labeled devices are not supported on root pools.
# zpool add rpool cache c0t3d0s0
# zpool status rpool
  pool: rpool
 state: ONLINE
 scan: none requested
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c0t0d0s0  ONLINE   0 0 0
cache
  c0t3d0s0  ONLINE   0 0 0

errors: No known data errors



On 04/18/11 10:22, Darren Kenny wrote:

Hi,

As I understand it there were restrictions on a bootable root pool where it
cannot be defined to use whole-disk configurations for a single disk, or
multiple disks which are mirrored.

Does it still apply that you need to define such pools using slices, i.e., by
first defining a partition (if non-SPARC) and then a slice 0? For example:

   zpool create rpool mirror c1t0d0 c1t1d0        # not bootable

while

   zpool create rpool mirror c1t0d0s0 c1t1d0s0    # IS bootable

Is it possible to use cache and log devices in a root pool? If so, does this
restriction on the use of a slice rather than just the disk device also apply
here? i.e. would the following be supported:

   zpool create rpool mirror c1t0d0s0 c1t1d0s0 cache c2t0d0 log c2t1d0

Thanks,

Darren.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool scrub on b123

2011-04-15 Thread Cindy Swearingen

Hi Karl...

I just saw this same condition on another list. I think the poster
resolved it by replacing the HBA.

Drives go bad but they generally don't all go bad at once, so I would
suspect some common denominator like the HBA/controller, cables, and
so on.

See what FMA thinks by running fmdump like this:

# fmdump
TIME UUID SUNW-MSG-ID
Apr 11 16:02:38.2262 ed0bdffe-3cf9-6f46-f20c-99e2b9a6f1cb ZFS-8000-D3
Apr 11 16:22:23.8401 d4157e2f-c46d-c1e9-c05b-f2d3e57f3893 ZFS-8000-D3
Apr 14 15:55:26.1918 71bd0b08-60c2-e114-e1bc-daa03d7b163f ZFS-8000-D3

This output will tell you when the problem started.

Depending on what fmdump says, which probably indicates multiple drive
problems, I would run diagnostics on the HBA or get it replaced.

Always have good backups.

Thanks,

Cindy
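
A minimal sketch of digging one level deeper, assuming FMA has recorded
error reports (ereports) for these devices; the -e output shows the raw
ereports, and -eV adds full detail that can reveal whether the errors
cluster behind one controller path:

# fmdump -e
# fmdump -eV | less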


On 04/15/11 12:52, Karl Rossing wrote:

Hi,

One of our zfs volumes seems to be having some errors. So I ran zpool 
scrub and it's currently showing the following.


-bash-3.2$ pfexec /usr/sbin/zpool status -x
  pool: vdipool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are 
unaffected.

action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: scrub in progress for 3h10m, 13.53% done, 20h16m to go
config:

NAME STATE READ WRITE CKSUM
vdipool  ONLINE   0 0 0
  raidz1 ONLINE   0 0 0
c9t14d0  ONLINE   0 012  6K repaired
c9t15d0  ONLINE   0 013  167K repaired
c9t16d0  ONLINE   0 011  5.50K repaired
c9t17d0  ONLINE   0 020  10K repaired
c9t18d0  ONLINE   0 015  7.50K repaired
spares
  c9t19d0AVAIL

errors: No known data errors


I have another server connected to the same jbod using drives c8t1d0 to 
c8t13d0 and it doesn't seem to have any errors.


I'm wondering how it could have gotten so screwed up?

Karl





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool scrub on b123

2011-04-15 Thread Cindy Swearingen

D'oh. One more thing.

We had a problem in b120-123 that caused random checksum errors on RAIDZ 
configs. This info is still in the ZFS troubleshooting guide.


See if a zpool clear resolves these errors. If that works, then I would
upgrade to a more recent build and see if the problem is resolved
completely.

If not, then see the recommendation below.

Thanks,

Cindy
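
A minimal sequence for that test, using the pool name from the status
output quoted below:

# zpool clear vdipool
# zpool scrub vdipool
# zpool status -v vdipool      # recheck after the scrub completes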

On 04/15/11 13:18, Cindy Swearingen wrote:

Hi Karl...

I just saw this same condition on another list. I think the poster
resolved it by replacing the HBA.

Drives go bad but they generally don't all go bad at once, so I would
suspect some common denominator like the HBA/controller, cables, and
so on.

See what FMA thinks by running fmdump like this:

# fmdump
TIME UUID SUNW-MSG-ID
Apr 11 16:02:38.2262 ed0bdffe-3cf9-6f46-f20c-99e2b9a6f1cb ZFS-8000-D3
Apr 11 16:22:23.8401 d4157e2f-c46d-c1e9-c05b-f2d3e57f3893 ZFS-8000-D3
Apr 14 15:55:26.1918 71bd0b08-60c2-e114-e1bc-daa03d7b163f ZFS-8000-D3

This output will tell you when the problem started.

Depending on what fmdump says, which probably indicates multiple drive
problems, I would run diagnostics on the HBA or get it replaced.

Always have good backups.

Thanks,

Cindy


On 04/15/11 12:52, Karl Rossing wrote:

Hi,

One of our zfs volumes seems to be having some errors. So I ran zpool 
scrub and it's currently showing the following.


-bash-3.2$ pfexec /usr/sbin/zpool status -x
  pool: vdipool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are 
unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: scrub in progress for 3h10m, 13.53% done, 20h16m to go
config:

NAME STATE READ WRITE CKSUM
vdipool  ONLINE   0 0 0
  raidz1 ONLINE   0 0 0
c9t14d0  ONLINE   0 012  6K repaired
c9t15d0  ONLINE   0 013  167K repaired
c9t16d0  ONLINE   0 011  5.50K repaired
c9t17d0  ONLINE   0 020  10K repaired
c9t18d0  ONLINE   0 015  7.50K repaired
spares
  c9t19d0AVAIL

errors: No known data errors


I have another server connected to the same jbod using drives c8t1d0 
to c8t13d0 and it doesn't seem to have any errors.


I'm wondering how it could have gotten so screwed up?

Karl





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool scrub on b123

2011-04-15 Thread Cindy Swearingen

Yes, the Solaris 10 9/10 release has the fix for RAIDZ checksum errors
if you have ruled out any hardware problems.

cs
On 04/15/11 14:47, Karl Rossing wrote:

Would moving the pool to a Solaris 10U9 server fix the random RAIDZ errors?

On 04/15/2011 02:23 PM, Cindy Swearingen wrote:

D'oh. One more thing.

We had a problem in b120-123 that caused random checksum errors on 
RAIDZ configs. This info is still in the ZFS troubleshooting guide.


See if a zpool clear resolves these errors. If that works, then I would
upgrade to a more recent build and see if the problem is resolved
completely.

If not, then see the recommendation below.

Thanks,

Cindy

On 04/15/11 13:18, Cindy Swearingen wrote:

Hi Karl...

I just saw this same condition on another list. I think the poster
resolved it by replacing the HBA.

Drives go bad but they generally don't all go bad at once, so I would
suspect some common denominator like the HBA/controller, cables, and
so on.

See what FMA thinks by running fmdump like this:

# fmdump
TIME UUID SUNW-MSG-ID
Apr 11 16:02:38.2262 ed0bdffe-3cf9-6f46-f20c-99e2b9a6f1cb ZFS-8000-D3
Apr 11 16:22:23.8401 d4157e2f-c46d-c1e9-c05b-f2d3e57f3893 ZFS-8000-D3
Apr 14 15:55:26.1918 71bd0b08-60c2-e114-e1bc-daa03d7b163f ZFS-8000-D3

This output will tell you when the problem started.

Depending on what fmdump says, which probably indicates multiple drive
problems, I would run diagnostics on the HBA or get it replaced.

Always have good backups.

Thanks,

Cindy


On 04/15/11 12:52, Karl Rossing wrote:

Hi,

One of our zfs volumes seems to be having some errors. So I ran 
zpool scrub and it's currently showing the following.


-bash-3.2$ pfexec /usr/sbin/zpool status -x
  pool: vdipool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are 
unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: scrub in progress for 3h10m, 13.53% done, 20h16m to go
config:

NAME STATE READ WRITE CKSUM
vdipool  ONLINE   0 0 0
  raidz1 ONLINE   0 0 0
c9t14d0  ONLINE   0 012  6K repaired
c9t15d0  ONLINE   0 013  167K repaired
c9t16d0  ONLINE   0 011  5.50K repaired
c9t17d0  ONLINE   0 020  10K repaired
c9t18d0  ONLINE   0 015  7.50K repaired
spares
  c9t19d0AVAIL

errors: No known data errors


I have another server connected to the same jbod using drives c8t1d0 
to c8t13d0 and it doesn't seem to have any errors.


I'm wondering how it could have gotten so screwed up?

Karl





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to rename rpool. Is that recommended ?

2011-04-08 Thread Cindy Swearingen

Arjun,

Yes, you can choose any name for the root pool, but an existing
limitation is that you can't rename the root pool by using the
zpool export/import-with-a-new-name feature.

Too much internal boot info is tied to the root pool name.

What info are you changing? Could you instead create a new BE
and update the info in the new BE?

Creating a new BE is the best solution.

Another solution is to create a mirrored root pool and then use the
zpool split option to create an identical root pool (that is then
imported with a new name, like rpool2), but this identical root pool
won't be bootable either. You would then need to mount rpool2's file
systems and copy whatever data you needed.

Thanks,

Cindy
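
A rough sketch of the split approach, assuming a mirrored root pool named
rpool and rpool2 as the temporary name; the split-off half stays exported
until you import it under an alternate root:

# zpool split rpool rpool2
# zpool import -R /a rpool2
  (mount rpool2's file systems under /a and copy the data you need)
# zpool export rpool2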



On 04/08/11 01:24, Arjun YK wrote:

Hi,

Let me add another query.
I would assume it is perfectly OK to choose any name for the root
pool, instead of 'rpool', during the OS install. Please let me know
if that is not the case.

Thanks
Arjun

On 4/8/11, Arjun YK arju...@gmail.com wrote:

Hello,

I have a situation where a host, which is booted off its 'rpool', needs
to temporarily import the 'rpool' of another host, edit some files in
it, and export the pool back while retaining its original name 'rpool'.
Can this be done?

Here is what I am trying to do:

# zpool import -R /a rpool temp-rpool
# zfs set mountpoint=/mnt temp-rpool/ROOT/s10_u8
# zfs mount temp-rpool/ROOT/s10_u8
### For example, edit /a/mnt/etc/hosts
# zfs set mountpoint=/  temp-rpool/ROOT/s10_u8
# zpool export temp-rpool   -- But I want to give temp-rpool back its
original name 'rpool' before or after this export.

I cannot see how this can be achieved, so I decided to live with the
name 'temp-rpool'. But is renaming 'rpool' a recommended or supported
practice?

Any help is appreciated.

Thanks
Arjun


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zpool resize

2011-04-04 Thread Cindy Swearingen

Hi Albert,

I didn't notice that you are running the Solaris 10 9/10 release.

Although the autoexpand property is provided, the underlying driver
changes to support the LUN expansion are not available in this release.

I don't have the right storage to test, but a possible workaround is
to create another larger LUN and replace the existing (smaller) LUN with 
a larger LUN by using the zpool replace command. Then, either set the
autoexpand property to on or use the following command:

# zpool online -e TEST LUN

The autoexpand features work as expected in the Oracle Solaris 11
release.

Thanks,

Cindy
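
A minimal sketch of that workaround, with hypothetical device names:
c2t1d0 as the existing smaller LUN and c2t2d0 as the new larger one:

# zpool replace TEST c2t1d0 c2t2d0
# zpool set autoexpand=on TEST      # or: zpool online -e TEST c2t2d0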



On 04/04/11 05:38, Albert wrote:

On 04.04.2011 12:44, Fajar A. Nugraha wrote:

On Mon, Apr 4, 2011 at 4:49 PM, For@llfor...@stalowka.info  wrote:

What can I do so that zpool shows the new value?

zpool set autoexpand=on TEST
zpool set autoexpand=off TEST
  -- richard

I tried your suggestion, but it had no effect.

Did you modify the partition table?

IIRC if you pass a whole DISK to zpool create, it creates a
partition/slice on it, either with an SMI label (the default for rpool)
or EFI (the default for other pools). When the disk size changes (like
when you change the LUN size on the storage node side), you PROBABLY
need to resize the partition/slice as well.

When I tested with OpenIndiana b148, simply setting zpool set
autoexpand=on was enough (I tested with Xen, and an OpenIndiana reboot
was required). Again, you might need to both set autoexpand=on and
resize the partition/slice.

As a first step, try choosing c2t1d0 in format, and see what the
size of this first slice is.


Hi,

I ran format and changed the type to auto-configure, and I now see the new
value under partition -> print, but when I exit format and reboot, the old
value comes back. How can I write the new settings?



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot remove zil device

2011-03-31 Thread Cindy Swearingen

You can add and remove mirrored or non-mirrored log devices.

Jordan is probably running into CR 7000154:

cannot remove log device

Thanks,

Cindy
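
For reference, a minimal sketch of log device removal: a single log
device is removed by name, and a mirrored log by the vdev name that
zpool status reports. The pool and device names here are hypothetical:

# zpool remove tank c3t0d0        # single log device
# zpool remove tank mirror-1      # mirrored log, removed as one vdev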



On 03/31/11 12:28, Roy Sigurd Karlsbakk wrote:

http://pastebin.com/nD2r2qmh


Here is zpool status and zpool version


The only thing I wonder about here, is why you have two striped log devices. I 
didn't even know that was supported.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly.
It is an elementary imperative for all pedagogues to avoid excessive use of
idioms of foreign origin. In most cases adequate and relevant synonyms exist
in Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] detach configured log devices?

2011-03-16 Thread Cindy Swearingen

Hi Jim,

Yes, the Solaris 10 9/10 release supports log device removal.

http://download.oracle.com/docs/cd/E19253-01/819-5461/gazgw/index.html

See Example 4-3 in this section.

The ability to import a pool with a missing log device is not yet
available in the Solaris 10 release.

Thanks,

Cindy
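
A rough sketch of the remove/remake/reattach cycle, with hypothetical
names (tank for the pool, mirror-1 as the log vdev name shown by zpool
status, c1t0d0 and c1t1d0 for the remade F20 devices):

# zpool remove tank mirror-1
  (remake the F20 devices)
# zpool add tank log mirror c1t0d0 c1t1d0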

On 03/16/11 10:33, Jim Mauro wrote:

With ZFS, Solaris 10 Update 9, is it possible to
detach configured log devices from a zpool?

I have a zpool with 3 F20 mirrors for the ZIL. They're
coming up corrupted. I want to detach them, remake
the devices and reattach them to the zpool.

Thanks
/jim

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Cindy Swearingen

Hi Robert,

We integrated some fixes that allowed you to replace disks of equivalent
sizes, but 40 MB is probably beyond that window.

Yes, you can do #2 below and the pool size will be adjusted down to the
smaller size. Before you do this, I would check the sizes of both
spares.

If both spares are equivalent smaller sizes, you could use those to
build the replacement pool with the larger disks and then put the extra
larger disks on the shelf.

Thanks,

Cindy
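
A rough sketch of option #2 below, assuming the data has been backed up
first; the new raidz2 pool sizes itself to its smallest member, and the
last shelved disk becomes the spare. All device names other than the
pool name are hypothetical:

# zpool destroy tank
# zpool create tank raidz2 c10t1d0 c10t2d0 c10t3d0 c10t4d0
# zpool add tank spare c10t5d0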



On 03/04/11 09:22, Robert Hartzell wrote:

In 2007 I bought 6 WD1600JS 160GB sata disks and used 4 to create a raidz 
storage pool and then shelved the other two for spares. One of the disks failed 
last night so I shut down the server and replaced it with a spare. When I tried 
to zpool replace the disk I get:

zpool replace tank c10t0d0 
cannot replace c10t0d0 with c10t0d0: device is too small


The 4 original disk partition tables look like this:

Current partition table (original):
Total disk sectors available: 312560317 + 16384 (reserved sectors)

Part  Tag          Flag   First Sector       Size   Last Sector
  0   usr          wm               34   149.04GB     312560350
  1   unassigned   wm                0          0             0
  2   unassigned   wm                0          0             0
  3   unassigned   wm                0          0             0
  4   unassigned   wm                0          0             0
  5   unassigned   wm                0          0             0
  6   unassigned   wm                0          0             0
  8   reserved     wm        312560351     8.00MB     312576734


Spare disk partition table looks like this:

Current partition table (original):
Total disk sectors available: 312483549 + 16384 (reserved sectors)

Part  Tag          Flag   First Sector       Size   Last Sector
  0   usr          wm               34   149.00GB     312483582
  1   unassigned   wm                0          0             0
  2   unassigned   wm                0          0             0
  3   unassigned   wm                0          0             0
  4   unassigned   wm                0          0             0
  5   unassigned   wm                0          0             0
  6   unassigned   wm                0          0             0
  8   reserved     wm        312483583     8.00MB     312499966
 
So it seems that two of the disks are slightly different models and are
about 40 MB smaller than the original disks.


I know I can just add a larger disk, but I would rather use the hardware I
have if possible.
1) Is there anyway to replace the failed disk with one of the spares?
2) Can I recreate the zpool using 3 of the original disks and one of the 
slightly smaller spares? Will zpool/zfs adjust its size to the smaller disk?
3) If #2 is possible would I still be able to use the last still shelved disk 
as a spare?

If #2 is possible I would probably recreate the zpool as raidz2 instead of the 
current raidz1.

Any info/comments would be greatly appreciated.

Robert
  
--   
   Robert Hartzell

b...@rwhartzell.net
 RwHartzell.Net, Inc.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Cindy Swearingen

Robert,

Which Solaris release is this?

Thanks,

Cindy


On 03/04/11 11:10, Mark J Musante wrote:


The fix for 6991788 would probably let the drive that is 40 MB smaller
work, but it would depend on the asize of the pool.


On Fri, 4 Mar 2011, Cindy Swearingen wrote:


Hi Robert,

We integrated some fixes that allowed you to replace disks of equivalent
sizes, but 40 MB is probably beyond that window.

Yes, you can do #2 below and the pool size will be adjusted down to the
smaller size. Before you do this, I would check the sizes of both
spares.

If both spares are equivalent smaller sizes, you could use those to
build the replacement pool with the larger disks and then put the extra
larger disks on the shelf.

Thanks,

Cindy



On 03/04/11 09:22, Robert Hartzell wrote:
In 2007 I bought 6 WD1600JS 160GB sata disks and used 4 to create a 
raidz storage pool and then shelved the other two for spares. One of 
the disks failed last night so I shut down the server and replaced it 
with a spare. When I tried to zpool replace the disk I get:


zpool replace tank c10t0d0
cannot replace c10t0d0 with c10t0d0: device is too small


The 4 original disk partition tables look like this:

Current partition table (original):
Total disk sectors available: 312560317 + 16384 (reserved sectors)

Part  Tag          Flag   First Sector       Size   Last Sector
  0   usr          wm               34   149.04GB     312560350
  1   unassigned   wm                0          0             0
  2   unassigned   wm                0          0             0
  3   unassigned   wm                0          0             0
  4   unassigned   wm                0          0             0
  5   unassigned   wm                0          0             0
  6   unassigned   wm                0          0             0
  8   reserved     wm        312560351     8.00MB     312576734


Spare disk partition table looks like this:

Current partition table (original):
Total disk sectors available: 312483549 + 16384 (reserved sectors)

Part  Tag          Flag   First Sector       Size   Last Sector
  0   usr          wm               34   149.00GB     312483582
  1   unassigned   wm                0          0             0
  2   unassigned   wm                0          0             0
  3   unassigned   wm                0          0             0
  4   unassigned   wm                0          0             0
  5   unassigned   wm                0          0             0
  6   unassigned   wm                0          0             0
  8   reserved     wm        312483583     8.00MB     312499966
So it seems that two of the disks are slightly different models and
are about 40 MB smaller than the original disks. I know I can just add
a larger disk, but I would rather use the hardware I have if possible.

1) Is there anyway to replace the failed disk with one of the spares?
2) Can I recreate the zpool using 3 of the original disks and one of 
the slightly smaller spares? Will zpool/zfs adjust its size to the 
smaller disk?
3) If #2 is possible would I still be able to use the last still 
shelved disk as a spare?


If #2 is possible I would probably recreate the zpool as raidz2 
instead of the current raidz1.


Any info/comments would be greatly appreciated.

Robert
  --  Robert Hartzell
b...@rwhartzell.net
 RwHartzell.Net, Inc.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Regards,
markm

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Hung Hot Spare

2011-03-03 Thread Cindy Swearingen

Hi Paul,

I've seen some spare stickiness too, and it's generally when I'm trying to
simulate a drive failure (like you are below) without actually
physically replacing the device.

If I actually physically replace the failed drive, the spare is
detached automatically after the new device is resilvered. I haven't
seen a multiple drive failure in our systems here so I can't comment on
that part, but I don't think this part is related to spare behavior.

The issue here is that the drive was only disabled; you didn't
actually physically replace it, so ZFS throws an error:

sudo zpool replace nyc-test-01 c5t5000CCA215C28142d0
 Password:
 invalid vdev specification
 use '-f' to override the following errors:
 /dev/dsk/c5t5000CCA215C28142d0s0 is part of active ZFS pool
 nyc-test-01. Please see zpool(1M).

If you want to rerun your test, try these steps:

1. Remove a device from the pool
2. Watch for the spare to kick in
3. Replace the failed device with a new physical device
4. Run the zpool replace command
5. Observe the spare behavior

I don't see the spare as hung, it just needs to be detached as described
here:

http://download.oracle.com/docs/cd/E19253-01/819-5461/6n7ht6qvv/index.html#gjfbs

Thanks,

Cindy
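
With the pool and spare names from the status output quoted below,
returning the in-use spares to the AVAIL state looks roughly like this:

# zpool detach nyc-test-01 c5t5000CCA215C7FD6Ed0
# zpool detach nyc-test-01 c5t5000CCA215C83160d0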



On 03/03/11 07:45, Paul Kraus wrote:

Apologies in advance as this is a Solaris 10 question and not an
OpenSolaris issue (well, OK, it *may* also be an OpenSolaris issue).
System is a T2000 running Solaris 10U9 with latest ZFS patches (zpool
version 22). Storage is a pile of J4400 (5 of them).

I have run into what appears to be (Sun) Bug ID 6995143, and I opened
a case with Oracle requesting to be added to that bug. I am being told
that bug had been abandoned and that ZFS is behaving correctly. Here
is what I am seeing:

1) zpool with multiple vdevs and hot spares
2) multiple drive failures at once
3) multiple hot spares in use (so far, only one in each vdev, but they
are raidz2 so I suppose it could be up to 2 in each vdev)
4) after repair of the failed drives and resilver completes, the hot
spares stay in use

I have NOT seen the issue with only a single drive failure.
I have NOT seen the problem if the failed drive(s) is(are) replaced
BEFORE the resilver of the hot spares completes

In other words, I have only seen the issue if there are more than one
failed drive at once and if the hot spares complete resilvering before
the bad drives are repaired.

This has all been seen in our test environment, and we simulate a
drive failure by either removing a drive or disabling it (via CAM,
these are J4400 drives). This came to light due to testing to
determine resolution of another bug (SATA over SAS multipathing driver
issues).

We do have a work around. Once the resilver of the repaired drives
completes we can 'zpool detach' the hot spare device from the zpool
(vdev) and it goes back into the AVAIL state.

Is this EXPECTED behavior for multiple drive failures ?

Here is some detailed information from my latest test.

Here is the pool in failed state (2 drives failed)...

  pool: nyc-test-01
 state: DEGRADED
status: One or more devices has been removed by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
 scrub: scrub completed after 0h0m with 0 errors on Thu Mar  3 08:41:04 2011
config:

NAME STATE READ WRITE CKSUM
nyc-test-01  DEGRADED 0 0 0
  raidz2-0   DEGRADED 0 0 0
c5t5000CCA215C8A649d0ONLINE   0 0 0
c5t5000CCA215C84A65d0ONLINE   0 0 0
c5t5000CCA215C34786d0ONLINE   0 0 0
spare-3  DEGRADED 0 0 0
  c5t5000CCA215C28142d0  REMOVED  0 0 0
  c5t5000CCA215C7FD6Ed0  ONLINE   0 0 0
  raidz2-1   DEGRADED 0 0 0
c5t5000CCA215C8A5B5d0ONLINE   0 0 0
spare-1  DEGRADED 0 0 0
  c5t5000CCA215C280F8d0  REMOVED  0 0 0
  c5t5000CCA215C83160d0  ONLINE   0 0 0
c5t5000CCA215C34753d0ONLINE   0 0 0
c5t5000CCA215C34823d0ONLINE   0 0 0
spares
  c5t5000CCA215C7FD6Ed0  INUSE currently in use
  c5t5000CCA215C83160d0  INUSE currently in use

errors: No known data errors

Here is the attempt to bring one of the failed drives back online
using 'zpool replace' (after the drive was enabled), which tosses a
warning (as expected)...


sudo zpool replace nyc-test-01 c5t5000CCA215C28142d0

Password:
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c5t5000CCA215C28142d0s0 is part of active ZFS pool
nyc-test-01. Please see zpool(1M).

Re: [zfs-discuss] Format returning bogus controller info

2011-03-01 Thread Cindy Swearingen

(Dave P...I sent this yesterday, but it bounced on your email address)

A small comment from me would be to create some test pools and replace
devices in the pools to see if device names remain the same or change
during these operations.

If the device names change and the pools are unhappy, retest similar
operations while the pools are exported.

I've seen enough controller/device numbering wreak havoc on pool
availability that I'm automatically paranoid when I see the controller
numbering that you started with.

Thanks,

Cindy
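
A minimal sketch of such a test, with hypothetical device names: create a
throwaway pool, note the names, export it before re-cabling or swapping
controllers, then import it again and compare:

# zpool create testpool mirror c12t0d0 c12t1d0
# zpool export testpool
  (re-cable or replace the controller, reboot)
# zpool import testpool
# zpool status testpool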



On 02/28/11 22:39, Dave Pooser wrote:

On 2/28/11 4:23 PM, Garrett D'Amore garr...@nexenta.com wrote:


Drives are ordered in the order they are *enumerated* when they *first*
show up in the system.  *Ever*.


Is the same true of controllers? That is, will c12 remain c12 or
/pci@0,0/pci8086,340c@5 remain /pci@0,0/pci8086,340c@5 even if other
controllers are active?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Performance

2011-02-25 Thread Cindy Swearingen
Hi Dave,

Still true.

Thanks,

Cindy
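
A quick way to keep an eye on that threshold: the CAP column of zpool
list is the percentage of pool space in use (tank is a hypothetical
pool name here):

# zpool list tank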

On 02/25/11 13:34, David Blasingame Oracle wrote:
 Hi All,
 
 In reading the ZFS Best Practices Guide, I'm curious if this statement is
 still true about 80% utilization.
 
 from:
 http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
 (Storage Pool Performance Considerations section)
 
 Keep pool space under 80% utilization to maintain pool performance. 
 Currently, pool performance can degrade when a pool is very full and 
 file systems are updated frequently, such as on a busy mail server. Full 
 pools might cause a performance penalty, but no other issues.
 
 
 
 Dave
 
 
 
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] time-sliderd doesn't remove snapshots

2011-02-18 Thread Cindy Swearingen

Hi Bill,

I think the root cause of this problem is that time slider implemented
the zfs destroy -d feature but this feature is only available in later
pool versions. This means that the routine removal of time slider 
generated snapshots fails on older pool versions.


The zfs destroy -d feature (snapshot user holds) was introduced in pool 
version 18.


I think this bug describes some or all of the problem:

https://defect.opensolaris.org/bz/show_bug.cgi?id=16361

Thanks,

Cindy
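
For reference, the commands involved look like this, using the snapshot
name from the log below and assuming the pool has already been upgraded
to version 18 or later:

# zpool upgrade rpool
# zfs holds rpool/ROOT@zfs-auto-snap_hourly-2010-11-10-15h01
# zfs destroy -d rpool/ROOT@zfs-auto-snap_hourly-2010-11-10-15h01
  (with -d, the snapshot is destroyed once the last hold or clone is gone)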



On 02/18/11 12:34, Bill Shannon wrote:

In the last few days my performance has gone to hell.  I'm running:

# uname -a
SunOS nissan 5.11 snv_150 i86pc i386 i86pc

(I'll upgrade as soon as the desktop hang bug is fixed.)

The performance problems seem to be due to excessive I/O on the main
disk/pool.

The only things I've changed recently is that I've created and destroyed
a snapshot, and I used zpool upgrade.

Here's what I'm seeing:

# zpool iostat rpool 5
 capacity operationsbandwidth
poolalloc   free   read  write   read  write
--  -  -  -  -  -  -
rpool   13.3G   807M  7 85  15.9K   548K
rpool   13.3G   807M  3 89  1.60K   723K
rpool   13.3G   810M  5 91  5.19K   741K
rpool   13.3G   810M  3 94  2.59K   756K

Using iofileb.d from the dtrace toolkit shows:

# iofileb.d
Tracing... Hit Ctrl-C to end.
^C
 PID CMD              KB FILE
   0 sched             6 none
   5 zpool-rpool    7770 none

zpool status doesn't show any problems:

# zpool status rpool
pool: rpool
   state: ONLINE
   scan: none requested
config:

  NAMESTATE READ WRITE CKSUM
  rpool   ONLINE   0 0 0
c3d0s0ONLINE   0 0 0


Perhaps related to this or perhaps not, I discovered recently that 
time-sliderd
was doing just a ton of close requests.  I disabled time-sliderd while
trying to solve my performance problem.

I was also getting these error messages in the time-sliderd log file:

Warning: Cleanup failed to destroy: 
rpool/ROOT@zfs-auto-snap_hourly-2010-11-10-15h01

Details:
['/usr/bin/pfexec', '/usr/sbin/zfs', 'destroy', '-d', 
'rpool/ROOT@zfs-auto-snap_hourly-2010-11-10-15h01'] failed with exit code 1
cannot destroy 'rpool/ROOT@zfs-auto-snap_hourly-2010-11-10-15h01': 
unsupported version


That was the reason I did the zpool upgrade.

I discovered that I had a *ton* of snapshots from time-slider that
hadn't been destroyed, over 6500 of them, presumably all because of this
version problem?

I manually removed all the snapshots and my performance returned to normal.

I don't quite understand what the -d option to zfs destroy does.
Why does time-sliderd use it, and why does it prevent these snapshots
from being destroyed?

Shouldn't time-sliderd detect that it can't destroy any of the snapshots
it's created and stop creating snapshots?

And since I don't quite understand why time-sliderd was failing to begin
with, I'm nervous about re-enabling it.  Do I need to do a zpool upgrade on all
my pools to make it work?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recover data from disk with zfs

2011-02-16 Thread Cindy Swearingen

Sergey,

I think you are saying that you had 4 separate ZFS storage pools on 4
separate disks and one ZFS pool/fs didn't import successfully.

If you created a new storage pool on the disk for the pool that
failed to import then the data on that disk is no longer available
because it was overwritten with new pool info.

Is this what happened?

If a pool fails to import in the Solaris 10 9/10 release, we can
try to import it in recovery mode.

Thanks,

Cindy
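
A minimal sketch of a recovery-mode import on Solaris 10 9/10, assuming
the troubled pool turns out to be named disk4 (the real name comes from
zpool import's listing); -n first reports what the recovery would discard:

# zpool import           # list pools visible on the attached devices
# zpool import -nF disk4
# zpool import -F disk4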

On 02/16/11 05:42, Sergey wrote:

Hello everybody! Please, help me!

I have a Solaris 10 x86_64 server with 5 x 40 GB HDDs.
One HDD, holding the root (/) and /usr partitions (UFS file system), crashed.
It is dead.
The other 4 HDDs (ZFS file system) were mounted as 4 pools (zpool create disk1
c0t1d0, and so on).

I installed Solaris 10 x86_64 on a new disk and then mounted (zpool import) the
other 4 HDDs. 3 disks mounted successfully, but 1 does not mount (I can create
a new pool with this disk, but it is empty).

How can I mount this disk or recover data from this disk?
Sorry for my English.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get rid of phantom pool ?

2011-02-15 Thread Cindy Swearingen

The best way to remove the pool is to reconnect the device and then
destroy the pool, but if the device is faulted or no longer available,
then you'll need a workaround.

If the external drive with the FAULTED pool remnants isn't connected to
the system, then rename the /etc/zfs/zpool.cache file and reboot the
system. The zpool.cache content will be rebuilt based on existing
devices with pool info.

Thanks,

Cindy
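
A minimal sketch of that workaround; the cache file is rebuilt at boot
from the pools that actually import:

# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad
# init 6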



On 02/15/11 01:10, Alxen4 wrote:

I had a pool on an external drive. Recently the drive failed, but the pool
still shows up when I run 'zpool status'.

Any attempt to remove/delete/export the pool ends in unresponsiveness (the
system is still up and running perfectly; it's just this specific command
that hangs, so I have to open a new ssh session).

zpool status shows state: UNAVAIL

When I try zpool clear, I get cannot clear errors for backup: I/O error

Please help me out to get rid of this phantom pool.


Many,many thanks.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


  1   2   3   4   5   6   7   >