Re: [zfs-discuss] Steps to Recover a ZFS pool.

2010-06-22 Thread Ron Mexico
I ran into the same thing where I had to manually delete directories. 

Once you export the pool you can plug in the drives anywhere else. Reimport the 
pool and the file systems come right up — as long as the drives can be seen by 
the system.
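
For anyone new to this, the whole dance is just the two commands below (a rough sketch; "tank" is a placeholder pool name, not one from the original post):

zpool export tank
  (physically move or re-cable the drives)
zpool import tank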


Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-13 Thread Ron Mexico
I have moved drives between controllers, rearranged drives into other slots, and 
moved disk sets between different machines, and I've never had an issue with a 
zpool not importing. Are you sure you didn't remove the drives while the system 
was powered up?

Try this:

zpool import -D

If zpool lists the pool as destroyed, you can re-import it by doing this:

zpool import -D vault

I know this is a shot in the dark — sorry for not having a better idea.


Re: [zfs-discuss] Hard drives for ZFS NAS

2010-05-13 Thread Ron Mexico
I use the RE4s at work on the storage server, but at home I use the consumer 
1TB green drives.

My system [2009.06] uses an Intel Atom 330-based motherboard, 4 GB of non-ECC 
RAM, and a Supermicro AOC-SAT2-MV8 controller with five 1TB Western Digital 
[WD10EARS] drives in a raidz1.

There are many reasons why this setup isn't optimal, but in the nine months 
it's been online, it hasn't produced a single error.
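
For the curious, a pool like that is just a single raidz1 vdev, created roughly 
like this (a sketch; the device names are placeholders, not my actual ones):

zpool create tank raidz1 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0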


[zfs-discuss] Zpool errors

2010-04-27 Thread Ron Mexico
I had a problem with a UFS file system on a hardware RAID controller. It was 
spitting out errors like crazy, so I rsynced it to a ZFS volume on the same 
machine. There were a lot of read errors during the transfer and the RAID 
controller alarm was going off constantly. Rsync was copying the corrupted 
files to the ZFS volume, and running zpool status -v reported the full 
path name of the affected files. Sometimes only an inode number appeared instead 
of a file path. Is there any way to figure out exactly which files these inodes 
correspond to?

disk_old/some/path/to/a/file
disk_old:0x41229e
disk_old:0x4124bf
disk_old:0x4126a4
disk_old:0x41276f
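
If it helps anyone answer, my understanding is that the hex value after the 
colon is the object (inode) number within that dataset, so something like the 
following should locate a file that still has a path (a sketch, assuming 
disk_old is mounted at /disk_old):

printf '%d\n' 0x41229e        # prints 4268702
find /disk_old -xdev -inum 4268702 -print

I believe zdb -ddddd disk_old 4268702 would dump the same object, path 
included, if find turns up nothing.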


Re: [zfs-discuss] In iSCSI hell...

2010-04-21 Thread Ron Mexico
> are you using comstar or the old iSCSI target (iscsitadm) to provision
> targets?

I'm using zfs set shareiscsi=on to configure the logical units and COMSTAR for 
the rest on the OpenSolaris side. The targets are initiated on Solaris 10 with 
iscsiadm.

This thing was humming right along, and then all of a sudden it stopped working 
after a few weeks. The connection is made via a gigabit crossover cable using the 
e1000g driver on both ends. I get endless errors like this:

Apr 21 19:58:43 storage scsi: [ID 107833 kern.warning] WARNING: 
/p...@0,0/pci8086,3...@6/pci8086,3...@0/pci1028,1...@8 (mpt1):
Apr 21 19:58:43 storage Disconnected command timeout for Target 27
Apr 21 19:58:45 storage scsi: [ID 365881 kern.info] 
/p...@0,0/pci8086,3...@6/pci8086,3...@0/pci1028,1...@8 (mpt1):
Apr 21 19:58:45 storage Log info 0x3114 received for target 27.
Apr 21 19:58:45 storage scsi_status=0x0, ioc_status=0x8048, 
scsi_state=0xc
Apr 21 19:58:45 storage scsi: [ID 365881 kern.info] 
/p...@0,0/pci8086,3...@6/pci8086,3...@0/pci1028,1...@8 (mpt1):
Apr 21 19:58:45 storage Log info 0x3114 received for target 27.
Apr 21 19:58:45 storage scsi_status=0x0, ioc_status=0x8048, 
scsi_state=0xc
Apr 21 19:58:45 storage scsi: [ID 365881 kern.info] 
/p...@0,0/pci8086,3...@6/pci8086,3...@0/pci1028,1...@8 (mpt1):
Apr 21 19:58:45 storage Log info 0x3114 received for target 27.
Apr 21 19:58:45 storage scsi_status=0x0, ioc_status=0x8048, 
scsi_state=0xc

If anyone can suggest an optimal network setting for this, please chime in.
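
In case it's useful to anyone diagnosing this, the link state and MTU on the 
snv_134 end can be checked like so (a sketch; e1000g0 as the interface name is 
an assumption):

dladm show-link
dladm show-linkprop -p mtu e1000g0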


[zfs-discuss] In iSCSI hell...

2010-04-20 Thread Ron Mexico
I have a storage server with snv_134 installed. This has four zfs file systems 
shared with iscsi that are mounted as zfs volumes on a Sun v480.

Everything had been working great for about a month, and then all of a sudden 
the v480 started getting timeout errors when trying to connect to the iSCSI 
volumes on the storage server.

The connection between the two is a gigabit crossover cable, so other network 
traffic isn't an issue. The HBAs, NICs, and cables in the storage server have 
been tested and are working normally. The only common element here is the Intel 
NIC in the v480. It seems to work OK otherwise, but it's the only component in 
this equation that hasn't changed.

What's the consensus on NIC configurations for iscsi? Are errors like this 
common if the MTU is set at the default of 1500?
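
If anyone wants more data, the initiator-side session state on the v480 can be 
dumped with something like this (a sketch; output abbreviated and varies by 
setup):

iscsiadm list target -v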

Any and all comments/opinions are welcome.

Thanks.


[zfs-discuss] Zpool hosed during testing

2009-11-10 Thread Ron Mexico
This didn't occur on a production server, but I thought I'd post this anyway 
because it might be interesting.

I'm currently testing a ZFS NAS machine consisting of a Dell R710 with two Dell 
5/E SAS HBAs. Right now I'm in the middle of torture testing the system, 
simulating drive failures, exporting the storage pool, rearranging the disks in 
different slots, and what have you. Up until now, everything has been going 
swimmingly.

Here was my original zpool configuration: 

NAME         STATE     READ WRITE CKSUM
tank         ONLINE       0     0     0
  raidz2     ONLINE       0     0     0
    c1t1d0   ONLINE       0     0     0
    c1t2d0   ONLINE       0     0     0
    c1t3d0   ONLINE       0     0     0
    c1t4d0   ONLINE       0     0     0
    c1t5d0   ONLINE       0     0     0
    c1t6d0   ONLINE       0     0     0
    c1t7d0   ONLINE       0     0     0
    c1t8d0   ONLINE       0     0     0
    c1t9d0   ONLINE       0     0     0
    c1t10d0  ONLINE       0     0     0
    c1t11d0  ONLINE       0     0     0
    c1t12d0  ONLINE       0     0     0
  raidz2     ONLINE       0     0     0
    c2t25d0  ONLINE       0     0     0
    c2t26d0  ONLINE       0     0     0
    c2t27d0  ONLINE       0     0     0
    c2t28d0  ONLINE       0     0     0
    c2t29d0  ONLINE       0     0     0
    c2t30d0  ONLINE       0     0     0
    c2t31d0  ONLINE       0     0     0
    c2t32d0  ONLINE       0     0     0
    c2t33d0  ONLINE       0     0     0
    c2t34d0  ONLINE       0     0     0
    c2t35d0  ONLINE       0     0     0
    c2t36d0  ONLINE       0     0     0

I exported the tank zpool, rearranged the drives in the chassis, and reimported 
it. I ended up with this:

NAME         STATE     READ WRITE CKSUM
tank         ONLINE       0     0     0
  raidz2     ONLINE       0     0     0
    c2t31d0  ONLINE       0     0     0
    c1t2d0   ONLINE       0     0     0
    c1t3d0   ONLINE       0     0     0
    c1t1d0   ONLINE       0     0     0
    c1t12d0  ONLINE       0     0     0
    c1t6d0   ONLINE       0     0     0
    c1t7d0   ONLINE       0     0     0
    c1t8d0   ONLINE       0     0     0
    c1t9d0   ONLINE       0     0     0
    c1t5d0   ONLINE       0     0     0
    c1t11d0  ONLINE       0     0     0
    c2t25d0  ONLINE       0     0     0
  raidz2     ONLINE       0     0     0
    c1t4d0   ONLINE       0     0     0
    c2t26d0  ONLINE       0     0     0
    c2t27d0  ONLINE       0     0     0
    c2t28d0  ONLINE       0     0     0
    c2t29d0  ONLINE       0     0     0
    c2t30d0  ONLINE       0     0     0
    c1t10d0  ONLINE       0     0     0
    c2t32d0  ONLINE       0     0     0
    c2t33d0  ONLINE       0     0     0
    c2t34d0  ONLINE       0     0     0
    c2t35d0  ONLINE       0     0     0
    c2t48d0  ONLINE       0     0     0

Great. No problems.

Next, I took c2t48d0 offline and then unconfigured it with cfgadm.

# zpool offline tank c2t48d0
# cfgadm -c unconfigure c2::dsk/c2t48d0
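
For reference, the reverse steps to bring the disk back would simply be (same 
device names assumed):

# cfgadm -c configure c2::dsk/c2t48d0
# zpool online tank c2t48d0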

I checked the status next.

# zpool status tank
  pool: tank
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scrub: none requested
config:

NAME         STATE     READ WRITE CKSUM
tank         DEGRADED     0     0     0
  raidz2     ONLINE       0     0     0
    c2t31d0  ONLINE       0     0     0
    c1t2d0   ONLINE       0     0     0
    c1t3d0   ONLINE       0     0     0
    c1t1d0   ONLINE       0     0     0
    c1t12d0  ONLINE       0     0     0
    c1t6d0   ONLINE       0     0     0
    c1t7d0   ONLINE       0     0     0
    c1t8d0   ONLINE       0     0     0
    c1t9d0   ONLINE       0     0     0
    c1t5d0   ONLINE       0     0     0
    c1t11d0  ONLINE       0     0     0
    c2t25d0  ONLINE       0     0     0
  raidz2     DEGRADED     0     0     0
    c1t4d0   ONLINE       0     0     0
    c2t26d0  ONLINE       0     0     0
    c2t27d0  ONLINE       0     0     0
    c2t28d0  ONLINE       0     0     0
    c2t29d0  ONLINE       0     0     0
    c2t30d0  ONLINE       0     0     0
    c1t10d0  ONLINE       0     0     0
    c2t32d0  ONLINE       0     0     0

Re: [zfs-discuss] possibilities of AFP ever making it into ZFS like NFS an

2009-09-21 Thread Ron Mexico
I was able to get Netatalk built on OpenSolaris for my ZFS NAS at home. 
Everything is running great so far, and I'm planning on using it on the 96TB 
NAS I'm building for my office. It would be nice to have this supported out of 
the box, but there are probably licensing issues involved.


Re: [zfs-discuss] Connect couple of SATA JBODs to one storage server

2009-08-27 Thread Ron Mexico
This non-RAID SAS controller is $199 and is based on the LSI SAS 1068.

http://accessories.us.dell.com/sna/products/Networking_Communication/productdetail.aspx?c=usl=ens=bsdcs=04sku=310-8285~lt=popup~ck=TopSellers

What kind of chassis do these drives currently reside in? Does the backplane 
have a SATA connector for each drive, or does it have a SAS backplane [i.e. one 
SFF-8087 connector for every four drive slots]?


[zfs-discuss] Using consumer drives in a zraid2

2009-08-24 Thread Ron Mexico
I'm putting together a 48-bay NAS for my company [24 drives to start]. My 
manager has already ordered 24 2TB WD Caviar Green consumer drives - should we 
send these back and order the 2TB WD RE4-GP enterprise drives instead? 

I'm tempted to try these out. First off, they're about $100 less per drive. 
Second, my experience with so-called consumer drives on a RAID controller has 
been pretty good - only two drive failures in five years with an Apple X-Raid.

Thoughts? Opinions? Flames? All input is appreciated. Thanks.


Re: [zfs-discuss] ZFS configuration input needed.

2009-08-24 Thread Ron Mexico
> Suffice to say, 2 top-level raidz2 vdevs of similar size with copies=2
> should offer very nearly the same protection as raidz2+1.
> -- richard

This looks like the way to go. Thanks for your input. It's much appreciated!
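
For the archives, setting that property is a one-liner (a sketch; the 
pool/dataset name is a placeholder):

zfs set copies=2 tank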


Re: [zfs-discuss] Using consumer drives in a zraid2

2009-08-24 Thread Ron Mexico
Is there a formula to determine the optimal size of dedicated cache space for 
raidz systems to improve speed?


[zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Ron Mexico
I'm in the process of setting up a NAS for my company. It's going to be based 
on OpenSolaris and ZFS, running on a Dell R710 with two SAS 5/E HBAs. Each HBA 
will be connected to a 24-bay Supermicro JBOD chassis. Each chassis will have 
12 drives to start out with, giving us room for expansion as needed.

Ideally, I'd like to have a mirror of a raidz2 setup, but from the 
documentation I've read, it looks like I can't do that, and that a stripe of 
mirrors is the only way to accomplish this.
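
For concreteness, the closest supported layout seems to be striping across two 
top-level raidz2 vdevs, one per chassis, created roughly like this (a sketch; 
device names are placeholders and only six disks per vdev are shown):

zpool create tank \
  raidz2 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
  raidz2 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0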

I'm interested in hearing the opinions of others about the best way to set this 
up.

Thanks!


Re: [zfs-discuss] ZFS configuration input needed.

2009-08-21 Thread Ron Mexico
> You'll have to add a bit of meat to this!
>
> What are your resiliency, space and performance
> requirements?

Resiliency is most important, followed by space and then speed. Its primary 
function is to host digital assets for ad agencies and backups of other servers 
and workstations in the office.

Since I can't make a mirrored raidz2, I'd like the next best thing. If that 
means doing a zfs send from one raidz2 to the other, that's fine.
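
If it does come to a send/receive setup between the two pools, it would look 
roughly like this (a sketch; pool and dataset names are placeholders):

zfs snapshot tank1/assets@nightly
zfs send tank1/assets@nightly | zfs receive tank2/assets

After the first full pass, zfs send -i can be used to ship incrementals between 
successive snapshots.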