[zfs-discuss] how do I fix this situation?

2007-12-10 Thread James C. McPherson

Hi everybody,
while trying to figure out what on earth has been going on
in my u20m2 due to

6636511 u20m2 bios version 1.45.1 still can't distinguish disks on sata 
channel #1,

I engaged in a lot of cable-swapping with the internal SATA
drive cables.

Somehow I've managed to end up with an allegedly corrupted zpool,
which I was unable to do a zpool replace on, and now I can't import
it either.


Its config is 2 slices on disks c3t0d0 and c3t1d0, but the zpool
config data reckons it's really using c2t1d0 instead of c3t0d0.

Looking at the output of zdb -l /dev/dsk/c3t0d0s0, I can clearly
see a path field that is incorrect. How do I change
this field to reflect reality? Is there some way I can force-import
the pool and get that mapping changed? (zpool import -f soundandvision
fails with "invalid vdev configuration".)




LABEL 0

    version=9
    name='soundandvision'
    state=1
    txg=2247550
    pool_guid=7968359165854648625
    hostid=226162178
    hostname='farnarkle'
    top_guid=4672721547114476840
    guid=9244482965678353940
    vdev_tree
        type='mirror'
        id=0
        guid=4672721547114476840
        metaslab_array=14
        metaslab_shift=30
        ashift=9
        asize=199968161792
        is_log=0
        children[0]
            type='disk'
            id=0
            guid=15422701819531588989
            path='/dev/dsk/c2t1d0s0'
            devid='id1,[EMAIL PROTECTED]/a'
            phys_path='/[EMAIL PROTECTED],0/pci108e,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a'
            whole_disk=0
            DTL=85
        children[1]
            type='disk'
            id=1
            guid=9244482965678353940
            path='/dev/dsk/c3t0d0s0'
            devid='id1,[EMAIL PROTECTED]/a'
            phys_path='/[EMAIL PROTECTED],0/pci108e,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0:a'
            whole_disk=0
            DTL=84
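
As far as I understand it, the path= field isn't authoritative - at import
time ZFS matches vdevs by GUID and rewrites the paths from whatever devices
it actually finds. So these are the things I'm planning to try next (a
sketch, no luck guaranteed):

# list importable pools and the devices the scan actually found them on
zpool import

# import by the pool_guid from the label above, in case the cached
# name-to-device mapping is what's stale
zpool import -f 7968359165854648625

# an explicit scan directory sometimes helps after cable swaps
zpool import -d /dev/dsk -f soundandvision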





Thank you in advance,
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog


[zfs-discuss] Expanding a RAIDZ based Pool...

2007-12-10 Thread Karl Pielorz

Hi,

I've seen/read a number of articles on the net about RAIDZ and things 
like dynamic striping et al. I know roughly how this works - but I can't 
seem to get to the bottom of whether expanding existing pool space is even 
possible.

e.g. If I build a RAIDZ pool with 5 * 400 GB drives, and later add a 6th 
400 GB drive to this pool, will its space instantly be available to volumes 
using that pool? (I can't quite see this working myself.)

Other articles talk about replacing one drive at a time, letting it 
re-silver, and at the end, when the last drive is replaced, the space 
available to volumes will reflect the new pool size (i.e. replace each 
400 GB device in turn with a 750 GB device - when the last one is done, 
you'll have a 5 * 750 GB pool with all the space, minus RAIDZ overhead, 
available).
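
If I've understood those articles, the procedure would look roughly like
this (a sketch - pool and device names made up, and apparently on builds of
this era the extra capacity only shows up once the last disk is done and
the pool is re-imported):

# swap one 400 GB member for a 750 GB one, then wait for the resilver
zpool replace tank c0t0d0 c1t0d0
zpool status tank        # repeat only after "resilver completed"

# ...do the same for the remaining four disks...

# after the final replace, nudge the pool to pick up the new size
zpool export tank
zpool import tank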

I know I can add additional RAIDZ vdevs to the pool - but that's only any 
good for adding multiple drives at a time, not single drives (if you want to 
keep the fault tolerance).

Thanks,

-Karl
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Expanding a RAIDZ based Pool...

2007-12-10 Thread Adam Leventhal
On Mon, Dec 10, 2007 at 03:59:22PM +, Karl Pielorz wrote:
 e.g. If I build a RAIDZ pool with 5 * 400Gb drives, and later add a 6th 
 400Gb drive to this pool, will its space instantly be available to volumes 
 using that pool? (I can't quite see this working myself)

Hi Karl,

You can't currently expand the width of a RAID-Z stripe. It has been
considered, but implementing that would require a fairly substantial change
in the way RAID-Z works. Sun's current ZFS priorities are elsewhere, but
there's nothing preventing an interested member of the community from
undertaking this project...
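
In the meantime, the supported way to grow a pool is by whole vdevs
(a sketch; pool and device names hypothetical):

# add a second RAID-Z group alongside the first - needs several disks at once
zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0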

Adam

-- 
Adam Leventhal, FishWorks            http://blogs.sun.com/ahl


Re: [zfs-discuss] Performance writing to USB drive, performance reporting

2007-12-10 Thread Akhilesh Mritunjai
USB 2.0 giving you ~30 MB/s is normal... a little better than mine 
(~25 MB/s, on Windows), actually.

For better performance, switch to eSATA or FireWire. Even FW400 will give 
you better results than USB, as there is less protocol overhead.

However, I'm sure I saw some FireWire+ZFS related bug in the bug database 
some time ago. Please check.
 
 


Re: [zfs-discuss] ZFS with Memory Sticks

2007-12-10 Thread Paul Gress
I did some work over the weekend.  Still having some trouble.


# fdisk -E /dev/rdsk/c7t0d0s2
# zpool create Radical-Vol /dev/dsk/c7t0d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c7t0d0s0 is part of exported or potentially active ZFS pool 
Radical-Vol. Please see zpool(1M).
/dev/dsk/c7t0d0s2 is part of exported or potentially active ZFS pool 
Radical-Vol. Please see zpool(1M).
# dd if=/dev/zero of=/dev/dsk/c7t0d0s2 bs=1
write: No such device or address
8200604161+0 records in
8200604161+0 records out
# fdisk -E /dev/rdsk/c7t0d0s2
# zpool create Radical-Vol /dev/dsk/c7t0d0
cannot label 'c7t0d0': failed to write EFI label
use fdisk(1M) to partition the disk, and provide a specific slice
#


Basically, after doing fdisk -E, I couldn't create the pool Radical-Vol.  
So I decided to zero the whole device and start from scratch. Now I 
have a different error: zpool couldn't write the EFI label. It wants an 
fdisk partition, but as far as I know, once I do that the stick will be 
specific to SPARC or x86. Any other suggestions?
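
For reference, the "potentially active ZFS pool" complaint comes from ZFS's
four label copies - two in the first 512 KB of the device and two in the
last 512 KB - so two targeted dd runs (or zpool create -f) should clear it
without zeroing everything. A sketch, with a placeholder seek value to be
computed from the device's real size:

# clear the two front labels
dd if=/dev/zero of=/dev/rdsk/c7t0d0s2 bs=1024k count=1
# clear the two back labels: oseek is in bs-sized blocks, so seek to
# (device size in MB minus 1) - placeholder value below
dd if=/dev/zero of=/dev/rdsk/c7t0d0s2 bs=1024k oseek=SIZE_MB_MINUS_1 count=1
# then retry, forcing past anything left over
zpool create -f Radical-Vol c7t0d0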

Thanks,

Paul


Re: [zfs-discuss] NFS performance considerations (Linux vs Solaris)

2007-12-10 Thread msl
OK, I proposed it, so I'm trying to implement it. :)
 I hope you can (at least) critique it. :))
 The document is here: http://www.posix.brte.com.br/blog/?p=89
 It is not complete; I'm still running some tests and analyzing the results. But 
I think you can already take a look and contribute some thoughts.
 It was nice to see the write performance for the iSCSI protocol versus 
NFSv3. Why was iSCSI so much better? Why was the read performance the same?  
Do all the guarantees I have with NFS also hold with iSCSI?
 Please comment!

 Thanks a lot for your time!
 
 


Re: [zfs-discuss] Does ZFS handle a SATA II port multiplier ?

2007-12-10 Thread Eric Haycraft
Apparently I spent more than my brain wanted me to believe. Here is what I 
picked up. Even though I am over the 1 meter cable-length limit for SATA II, it worked great. 

http://www.pc-pitstop.com/sata_enclosures/scsat84xb.asp


Eric
 
 


Re: [zfs-discuss] Backup in general (was Does ZFS handle a SATA II ' port multiplier' ?)

2007-12-10 Thread Wade . Stuart
 
  If you care enough to do backups, at least care enough to be
  able to restore.  For my home backups, I use portable drives with
  copies=2 or 3 and compression enabled.  I don't fool with
  incrementals, but many people do.  The failure mode I'm worried
  about is decay, as the drives will be off most of the time.  The
  copies feature works well for this failure mode.

 I am definitely and strongly interested in restoring!  That's why I hate
 my previous backup solutions so much (NTI backup and then Acronis True
 Image); I verified backups and tested restores, and had *FAR* too much
 trouble to be at all comfortable.  The photos and the ebooks are backed
 up eventually (but not always within the month) to good DVDs, and one
 copy is kept off-site, and that's the stuff I'd miss most if it went,
 but I want a good *overall* solution.

 The copies thing sounds familiar from discussion here...ah.  Yes,
 that's exactly perfect; it lets me make up a batch of miscellaneous
 spare disks totaling enough space, each one a vdev, put them into one
 pool (no redundancy), but with copies=2 get nearly the redundancy of
 mirroring which would have required matching drives.   At least, if I

From what I have seen, I think you are overestimating the value of
copies=X.  copies=X guarantees that X copies of each block are stored
_somewhere_ in the pool, but not necessarily on different disks.
So while you may gain mirror-like protection against failed blocks on
a disk (maybe -- the copies could be too close together on the same disk), you
do not necessarily gain protection against a failed disk (all copies of a
block could end up on one disk). With different-sized unprotected disks,
copies=N behaves less and less like a mirror as time passes and the disks
fragment.

http://blogs.sun.com/bill/entry/ditto_blocks_the_amazing_tape
http://blogs.sun.com/relling/entry/zfs_copies_and_data_protection


copies != mirrors.
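
If you do want to experiment with it, the setup itself is tiny (a sketch;
pool and device names made up -- and note that copies only applies to data
written after the property is set):

# pool of mismatched spare disks, no vdev-level redundancy
zpool create backup c1t0d0 c2t0d0 c3t0d0
# ask for two copies of every block written from here on
zfs set copies=2 backup
zfs get copies backup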

 find a solution for connecting that bunch of disks conveniently.  I
 really want one box with easily swappable disks, and one cable.  (And
 then two of them, since of course I need two sets of backup media to
 alternate between.)  And I could update the old full backup to become
 the new one using rsync locally, perhaps much faster than doing a full cp.




Re: [zfs-discuss] Trial x4500, zfs with NFS and quotas.

2007-12-10 Thread Robert Milkowski
Hello Jorgen,

Monday, December 10, 2007, 5:53:31 AM, you wrote:


JL Robert Milkowski wrote:
 Hello Jorgen,
 
 Honestly - I don't think zfs is a good solution to your problem.
 
 What you could try to do however when it comes to x4500 is:
 
 1. Use SVM+UFS+user quotas

JL I am now trying a 1 TB zvol (zfs create -V) with UFS newfs'ed on top of
JL that device. This looks like a potential solution, at least. It even 
JL appears that I am allowed to enable compression on the volume.

JL Thanks


I don't know... while it will work I'm not sure I would trust it.
Maybe just use Solaris Volume Manager with Soft Partitioning + UFS and
forget about ZFS in your case?
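
For reference, the zvol-backed UFS you describe would be set up roughly
like this (a sketch; pool, volume and mount-point names made up), whatever
its trustworthiness:

# carve a 1 TB zvol out of the pool; compression works at the zvol layer
zfs create -V 1tb zpool1/ufsvol
zfs set compression=on zpool1/ufsvol

# put UFS on it and mount with quotas enabled
newfs /dev/zvol/rdsk/zpool1/ufsvol
mount -F ufs -o quota /dev/zvol/dsk/zpool1/ufsvol /export/vol1

# UFS user quotas live in a 'quotas' file at the filesystem root
touch /export/vol1/quotas
edquota someuser         # set per-user limits
quotacheck /export/vol1
quotaon /export/vol1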



-- 
Best regards,
 Robert                           mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] Backup in general (was Does ZFS handle a SATA II ' port multiplier' ?)

2007-12-10 Thread David Dyer-Bennet
[EMAIL PROTECTED] wrote:
 If you care enough to do backups, at least care enough to be
 able to restore.  For my home backups, I use portable drives with
 copies=2 or 3 and compression enabled.  I don't fool with
 incrementals, but many people do.  The failure mode I'm worried
 about is decay, as the drives will be off most of the time.  The
 copies feature works well for this failure mode.
   
 I am definitely and strongly interested in restoring!  That's why I hate
 my previous backup solutions so much (NTI backup and then Acronis True
 Image); I verified backups and tested restores, and had *FAR* too much
 trouble to be at all comfortable.  The photos and the ebooks are backed
 up eventually (but not always within the month) to good DVDs, and one
 copy is kept off-site, and that's the stuff I'd miss most if it went,
 but I want a good *overall* solution.

 The copies thing sounds familiar from discussion here...ah.  Yes,
 that's exactly perfect; it lets me make up a batch of miscellaneous
 spare disks totaling enough space, each one a vdev, put them into one
 pool (no redundancy), but with copies=2 get nearly the redundancy of
 mirroring which would have required matching drives.   At least, if I
 

 From what I have seen I think you are over estimating the value of
 copies=x.  copies=X are guaranteed to store multiple copies (X) of the
 blocks _somewhere_ in the pool,  but not necessarily on different disks.
 So while you may gain mirror like protection when you have failed blocks on
 a disk (maybe -- blocks could be too close together on the same disk);  you
 do not necessarily gain that from a failed disk (block copies could be on
 only one disk). Having different sized unprotected disks and using copies=N
 has less mirror like effect over time and fragmentation of those disks.

 http://blogs.sun.com/bill/entry/ditto_blocks_the_amazing_tape
 http://blogs.sun.com/relling/entry/zfs_copies_and_data_protection
   

Yes, that's quite clear even just from the man page.  That's why I said 
"nearly"; I understand that copies != mirrors, as you put it.  Not 
*necessarily* on different disks, but it *tries* to put them on different 
disks.  "Over time" isn't necessarily an issue, since a new full backup 
could be done into a clean filesystem.

-- 
David Dyer-Bennet, [EMAIL PROTECTED]; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] Yager on ZFS

2007-12-10 Thread Robert Milkowski
Hello can,

Monday, December 10, 2007, 3:35:27 AM, you wrote:

cyg  and it made them slower

cyg That's the second time you've claimed that, so you'll really at
cyg least have to describe *how* you measured this even if the
cyg detailed results of those measurements may be lost in the mists of time.


cyg So far you don't really have much of a position to defend at
cyg all:  rather, you sound like a lot of the disgruntled TOPS users
cyg of that era.  Not that they didn't have good reasons to feel
cyg disgruntled - but they frequently weren't very careful about aiming their 
ire accurately.

cyg Given that RMS really was *capable* of coming very close to the
cyg performance capabilities of the underlying hardware, your
cyg allegations just don't ring true.  Not being able to jump into

And where is your proof that it was capable of coming very close to
the...?

Let me use your own words:

"In other words, you've got nothing, but you'd like people to believe it's 
something."

The phrase "put up or shut up" comes to mind.

Where are your proofs on some of your claims about ZFS?
Where are your detailed concepts how to solve some ZFS issues
(imagined or not)?

Demand nothing less from yourself than you demand from others.

Bill, to be honest I don't understand you - you wrote "I have no
interest in working on it myself." So what is your interest here?
The way you respond to people is sometimes offensive (don't bother to
say that they deserve it... that's just your opinion), and your attitude
from time to time is that of a guru who knows everything but doesn't
actually deliver anything.

So, apart from fighting ZFS everywhere you can, you don't want
to contribute to ZFS - what do you want, then? You seem like a guy with
quite a good technical background (just an impression) who wants to
contribute something but doesn't know exactly what... Maybe you should
try to focus that knowledge a little more and get something useful
out of it instead of writing long essays that don't contribute
much (not that this reply isn't long :)).

I'm not being malicious here - I'm genuinely interested in what your
agenda is. I don't blame other people for accusing you of trolling.

No offense intended.

:)

-- 
Best regards,
 Robert Milkowski   mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] Trial x4500, zfs with NFS and quotas.

2007-12-10 Thread Jorgen Lundman


 
 I don't know... while it will work I'm not sure I would trust it.
 Maybe just use Solaris Volume Manager with Soft Partitioning + UFS and
 forget about ZFS in your case?

Well, the idea was to see if it could replace the existing NetApps as 
that was what Jonathan promised it could do, and we do use snapshots on 
the NetApps, so having zfs snapshots would be attractive, as well as 
easy to grow the file-system as needed. (Although, perhaps I can growfs 
with SVM as well.)


You may be correct about the trust issue though. I copied over a small 
volume from the NetApp:

Filesystem            size   used  avail capacity  Mounted on
                      1.0T   8.7G  1005G     1%    /export/vol1

NAME     SIZE    USED    AVAIL   CAP   HEALTH   ALTROOT
zpool1   20.8T   5.00G   20.8T    0%   ONLINE   -

So 8.7 GB copied to a compressed volume takes up 5 GB. That is quite nice. 
I enabled the same quotas for users, then ran quotacheck:

[snip]
#282759fixed:  files 0 - 4939  blocks 0 - 95888
#282859fixed:  files 0 - 9  blocks 0 - 144
Read from remote host x4500-test: Operation timed out
Connection to x4500-test closed.

and it has not come back, so not a panic, just a complete hang. I'll 
have to get NOC staff to go power cycle it.


We are bending over backwards trying to get the x4500 to work in a 
simple NAS design, but honestly, the x4500 is not a NAS, nor can it 
compete with NetApp. As a Unix server with lots of disks, it is very nice.

Perhaps one day it can compete, mind you; it just is not there today.



-- 
Jorgen Lundman   | [EMAIL PROTECTED]
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)


[zfs-discuss] Faulted raidz1 shows the same device twice ?!?

2007-12-10 Thread Jeff Thompson
zpool status shows:
NAME STATE READ WRITE CKSUM
external DEGRADED 0 0 0
  raidz1 DEGRADED 0 0 0
c18t0d0  ONLINE   0 0 0
c18t0d0  FAULTED  0 0 0  corrupted data

I used to have a normal raidz1 with devices c18t0d0 and c19t0d0, but c19t0d0 
broke, so I plugged a new drive into its slot.  But attach and replace both give 
errors:

# zpool replace external c19t0d0
cannot replace c19t0d0 with c19t0d0: no such device in pool

# zpool attach external c18t0d0 c19t0d0
cannot attach c19t0d0 to c18t0d0: can only attach to mirrors and top-level disks

Why does zpool status show the same drive twice?  How can I clear the fault and 
attach a new, good drive as c19t0d0?
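
One thing I've seen suggested (unverified, and the GUID below is a
placeholder) is that zpool subcommands will accept a vdev's GUID in place
of a device name, which would sidestep the duplicated name:

# the surviving disk's labels record the GUIDs of both raidz children
zdb -l /dev/dsk/c18t0d0s0 | grep guid

# then name the faulted vdev by its GUID rather than by path
zpool replace external 1234567890123456789 c19t0d0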

Any help appreciated,
- Jeff
 
 


Re: [zfs-discuss] Backup in general (was Does ZFS handle a SATA II '

2007-12-10 Thread Anton B. Rang
 Tape drives and tapes seem to be just too expensive. Am I out of date here?

No, I don't think so.  The problem is that the low-end tape market has mostly 
vanished as CDs/DVDs/disks get cheaper -- not that it should have, because tape 
is much more reliable -- so the cost of entry is pretty high. I use AIT-1 tapes 
at home, which give me about 35 GB/tape, but I'd be a lot happier with AIT-3 (100 
GB/tape). The tapes are reasonably affordable; unfortunately, the drives are 
priced for small businesses, not for home use.

 What would I need to buy to back up a system that currently has 
 about 600GB of data in it, growing a few GB a month on average?

I'd probably back up everything onto two large (750 GB or 1 TB) external 
drives, each kept off-site at a different location. (Being paranoid, I'd also 
likely want at least one tape backup, but the initial full backup would take a 
long time.)

 Also, what *software* does one use?  For a full, and for an incremental?

I've heard of Amanda but haven't used it.  I suppose there are other 
open-source backup solutions.  I use commercial backup software, myself.

 ZFS can give me a view equivalent to an 
 incremental, can't it?  Which I could then copy
 somewhere suitable?

Hmmm.  I don't think it exposes such a view right now, though at first glance 
it wouldn't be too hard -- along with a snapshot, you could have a 'snapdiff' 
view, which would expose only the files that had changed since a previous 
snapshot.  I think that would be pretty straightforward to implement.
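
In the meantime, snapshots plus incremental send get you close at the
whole-dataset level (a sketch; dataset and file names made up):

# one-time full backup
zfs snapshot tank/home@backup1
zfs send tank/home@backup1 > /backup/home-full.zfs

# later: snapshot again and send only the delta
zfs snapshot tank/home@backup2
zfs send -i tank/home@backup1 tank/home@backup2 > /backup/home-incr.zfs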

Anton
 
 


Re: [zfs-discuss] Trial x4500, zfs with NFS and quotas.

2007-12-10 Thread Mertol Ozyoney
Hello All;

While not always possible, a ZFS+Thumper solution is not so far away from
replacing NetApp-like equipment that is expensive to buy and own. 
What people sometimes forget is that Thumper and Solaris are general-purpose
products that can be specialized with some effort. 

We have had cases where we had to fine-tune the X4500 and ZFS for more stability
or performance. At the end of the day, the benefits were well worth the effort. 

Best regards

Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email [EMAIL PROTECTED]


