[zfs-discuss] Guide to COMSTAR iSCSI?

2010-12-13 Thread Martin Mundschenk

Hi!

I have configured two LUs following this guide:

http://thegreyblog.blogspot.com/2010/02/setting-up-solaris-comstar-and.html

Now I want each LU to be available to only one specific client on the network. 
I have found no straightforward guide anywhere on the internet on how to accomplish this. Any hints?
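
For reference, the usual COMSTAR way to do this appears to be one host group per client plus per-host-group views, instead of the default all-hosts view. A rough sketch only; the initiator IQN and the LU GUID below are placeholders (the real GUIDs come from 'stmfadm list-lu -v'):

  # one host group per client, containing only that client's initiator name
  stmfadm create-hg client-a
  stmfadm add-hg-member -g client-a iqn.1998-01.com.example:client-a

  # drop any existing all-hosts view on the LU, then add a view restricted
  # to that host group
  stmfadm remove-view -l 600144f0aabbccdd0000000000000001 -a
  stmfadm add-view -h client-a 600144f0aabbccdd0000000000000001

Repeating this with a second host group for the second LU should leave each client seeing exactly one LU.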

Martin


Re: [zfs-discuss] zfs-discuss Digest, Vol 59, Issue 13

2010-09-09 Thread Dr. Martin Mundschenk

On 09.09.2010 at 07:00, zfs-discuss-requ...@opensolaris.org wrote:

 What's the write workload like?  You could try disabling the ZIL to see
 if that makes a difference.  If it does, the addition of an SSD-based
 ZIL / slog device would most certainly help.
 
 Maybe you could describe the makeup of your zpool as well?
 
 Ray


The zpool is a mirrored root pool (two 250 GB SATA devices). The box is a Dell PE 
T710. When I copy via NFS, zpool iostat reports 4 MB/s throughout the copy. 
When I copy via scp, I get network throughput of about 50 MB/s, and zpool 
iostat reports 105 MB/s for a short interval roughly 5 seconds after the scp 
completes. 

As far as I can tell, the problem is the NFS commit, which forces the 
filesystem to write the data synchronously to disk instead of caching the data 
stream, as happens in the scp case. 
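
If the synchronous commits really are the limit, the usual remedies are an SSD-based slog or, strictly for testing, disabling the ZIL. A sketch only; the pool name tank and the SSD device c3t0d0 are placeholders, and note that snv134 predates the per-dataset 'sync' property, so the ZIL can only be disabled globally via /etc/system:

  # testing only: disable the ZIL globally (needs a reboot; synchronous
  # write guarantees are lost, so do not run production data like this)
  echo "set zfs:zil_disable = 1" >> /etc/system

  # preferred: add an SSD-based slog device to the data pool
  zpool add tank log c3t0d0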

NFS existed long before SSD-based drives did. I cannot imagine that NFS 
performance has always been no more than a third of the speed of a 10BaseT 
connection...

Martin


[zfs-discuss] NFS performance issue

2010-09-08 Thread Dr. Martin Mundschenk
Hi!

I searched the web for hours, trying to solve the NFS/ZFS low-performance issue 
on my freshly set up OSOL box (snv134). The problem is discussed in many threads, 
but I have found no solution. 

On an NFS-shared volume I get write performance of 3.5 MB/s (!!); read 
performance is about 50 MB/s, which is OK, but on a gigabit network more should 
be possible, since the server's disk performance reaches up to 120 MB/s.
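
For comparison, a streaming write/read test against the NFS mount usually makes the asymmetry obvious; /net/server/tank below is a placeholder mount point:

  # write ~1 GB over NFS
  dd if=/dev/zero of=/net/server/tank/testfile bs=1024k count=1024
  # read it back (remount first so the client cache does not skew the result)
  dd if=/net/server/tank/testfile of=/dev/null bs=1024k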

Does anyone have a suggestion for how I can at least speed up the writes?

Martin


Re: [zfs-discuss] ZFS Storage server hardware

2010-08-26 Thread Dr. Martin Mundschenk

On 26.08.2010 at 04:38, Edward Ned Harvey wrote:

 There is no such thing as reliable external disks.  Not unless you want to
 pay $1000 each, which is dumb.  You have to scrap your mini, and use
 internal (or hotswappable) disks.
 
 Never expect a mini to be reliable.  They're designed to be small and cute.
 Not reliable.


The Mac mini and the disks themselves are just fine. The problem seems to be the 
SATA-to-USB/FireWire bridges: they simply stall when the load gets heavy.

Martin


[zfs-discuss] ZFS Storage server hardware

2010-08-25 Thread Dr. Martin Mundschenk
Hi!

I have been running an OSOL box for quite a while, and I think ZFS is an amazing 
filesystem. As the machine I use an Apple Mac mini with USB and FireWire devices 
attached. Unfortunately the USB, and sometimes the FireWire, devices just die, 
causing the whole system to stall and forcing me to do a hard reboot.

I had the worst experience with a USB-SATA bridge based on an Oxford chipset: 
the four external devices stalled randomly within a day or so. I switched to a 
four-slot RAID box, also with a USB bridge, but with better reliability.

Well, I wonder which components are needed to build a stable system without 
resorting to an enterprise solution: eSATA, USB, FireWire, Fibre Channel?

Martin


Re: [zfs-discuss] Can't detach spare device from pool

2010-08-21 Thread Martin Mundschenk
After about 62 hours and 90%, the resilvering process got stuck; nothing has 
happened for the last 12 hours. As a result, I cannot detach the spare device. 
Is there a way to get the resilvering process running again?
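
For reference, the commands involved; the pool name tank and the spare c16t0d0 are taken from earlier posts in this thread and may differ here:

  # check whether the resilver is actually making progress
  zpool status -v tank

  # only after the resilver has completed can the spare be detached (CR 6909724)
  zpool detach tank c16t0d0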

Martin



On 18.08.2010 at 20:11, Mark Musante wrote:

 You need to let the resilver complete before you can detach the spare.  This 
 is a known problem, CR 6909724.
 
 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6909724



Re: [zfs-discuss] Drive failure causes system to be unusable

2010-02-09 Thread Dr. Martin Mundschenk
On 08.02.2010 at 20:03, Richard Elling wrote:

 Are you sure there is not another fault here?  What does svcs -xv show?

Well, I don't have the output of svcs -xv, since the fault has been recovered by 
now, but it turned out not to be a hardware failure but unstable USB connectivity. 
Still: why does the system get stuck? And even when a USB plug is pulled, why 
does the spare not go online?

Martin


[zfs-discuss] Drive failure causes system to be unusable

2010-02-08 Thread Martin Mundschenk
Hi!

I have an OSOL box as a home file server. It has four 1 TB USB drives and one 
1 TB FireWire drive attached. The USB devices are combined into a RAID-Z pool 
and the FireWire drive acts as a hot spare.

This night, one USB drive faulted and the following happened:

1. The zpool was not accessible anymore
2. changing to a directory on the pool caused the tty to get stuck
3. no reboot was possible
4. the system had to be rebooted ungracefully by pushing the power button

After reboot:

1. The zpool ran in a degraded state
2. the spare device did NOT automatically go online
3. the system did not boot to the usual run level; no auto-boot zones were 
started, and GDM did not start either


NAME           STATE     READ WRITE CKSUM
tank           DEGRADED     0     0     0
  raidz1-0     DEGRADED     0     0     0
    c21t0d0    ONLINE       0     0     0
    c22t0d0    ONLINE       0     0     0
    c20t0d0    FAULTED      0     0     0  corrupted data
    c23t0d0    ONLINE       0     0     0
cache
  c18t0d0      ONLINE       0     0     0
spares
  c16t0d0      AVAIL



My questions:

1. Why does the system get stuck when a device faults?
2. Why does the hot spare not go online? (The manual says that going online 
automatically is the default behavior; see the sketch below.)
3. Why does the system not boot to the usual run level when a zpool is in a 
degraded state at boot time?
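
Regarding question 2: whether the spare kicks in automatically appears to depend on the FMA zfs-retire agent actually diagnosing the device as faulted. A sketch of checking that and of bringing the spare in by hand, using the pool and device names from the status output above:

  # see whether FMA diagnosed the drive as faulted
  fmadm faulty

  # attach the hot spare to the faulted device manually
  zpool replace tank c20t0d0 c16t0d0

  # later, once the original device has been repaired or replaced:
  zpool detach tank c16t0d0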


Regards,
Martin




[zfs-discuss] Boot from external degraded zpool

2009-12-30 Thread Dr. Martin Mundschenk
Hi!

I wonder if the following scenario works:

I have a Mac mini running as an OSOL box. The OS is installed on the internal 
hard drive in the pool rpool, which has no redundancy. 

If I add an external block device (USB/FireWire) to rpool to mirror the 
internal hard drive, and the internal drive later fails, can I reboot the 
system with the internal drive detached, from the degraded mirror half on the 
external drive?
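
For what it's worth, the usual recipe for this on x86 OSOL seems to be attaching the external disk to rpool and then installing the boot blocks on it; a sketch only, where c7d0s0 (internal) and c12t0d0s0 (external) are placeholder device names:

  # attach the external disk as a mirror of the internal root disk
  zpool attach rpool c7d0s0 c12t0d0s0

  # put GRUB on the new mirror half so it can boot on its own
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c12t0d0s0

installgrub is needed because zpool attach alone does not make the second half bootable.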

The Mac is definitely capable of booting from all kinds of devices. But does 
OSOL support it in the way described above?

Regards,
Martin


[zfs-discuss] ZFS Kernel Panic

2009-12-12 Thread Dr. Martin Mundschenk
Hi!

My OpenSolaris 2009.06 box runs into kernel panics almost every day. There are 
four FireWire drives attached to a Mac mini as a RAID-Z pool. The panic seems to 
be related to this known bug:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6835533

Since there are no known workarounds, is my hardware configuration worthless? 

Regards,
Martin


[zfs-discuss] Messed up zpool (double device label)

2009-12-12 Thread Dr. Martin Mundschenk
Hi!

I tried to add another FireWire drive to my existing four devices, but it 
turned out that the OpenSolaris IEEE 1394 support doesn't seem to be 
well engineered.

After the new device was not recognized, and after exporting and importing the 
existing zpool, I get this zpool status:

  pool: tank
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid.  Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: none requested
config:

NAME           STATE     READ WRITE CKSUM
tank           DEGRADED     0     0     0
  raidz1       DEGRADED     0     0     0
    c12t0d0    ONLINE       0     0     0
    c12t0d0    FAULTED      0     0     0  corrupted data
    c14t0d0    ONLINE       0     0     0
    c15t0d0    ONLINE       0     0     0

The device c12t0d0 appears two times!?

'format' returns these devices:

AVAILABLE DISK SELECTIONS:
   0. c7d0 DEFAULT cyl 19454 alt 2 hd 255 sec 63
  /p...@0,0/pci-...@b/i...@0/c...@0,0
   1. c12t0d0 Ext Hard- Disk--931.51GB
  
/p...@0,0/pci10de,a...@16/pci11c1,5...@0/u...@00303c02e014fc66/d...@0,0
   2. c13t0d0 Ext Hard- Disk--931.51GB
  
/p...@0,0/pci10de,a...@16/pci11c1,5...@0/u...@00303c02e014fc32/d...@0,0
   3. c14t0d0 Ext Hard- Disk--931.51GB
  
/p...@0,0/pci10de,a...@16/pci11c1,5...@0/u...@00303c02e014fc61/d...@0,0
   4. c15t0d0 Ext Hard- Disk--931.51GB
  
/p...@0,0/pci10de,a...@16/pci11c1,5...@0/u...@00303c02e014fc9d/d...@0,0


When I scrub data, the devices c12t0d0, c13t0d0 and c14t0d0 are accessed while 
c15t0d0 sleeps. I don't get it! How can such a mess happen, and how do I get it 
straightened out again?
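
A common first step in this situation, sketched under the assumption that the data is intact on the remaining three drives, is to export the pool and re-import it from the device directory so the labels get re-scanned, and to inspect the on-disk labels directly; the s0 slice name below is an assumption:

  zpool export tank
  zpool import -d /dev/dsk tank

  # check which physical device really carries which label / pool GUID
  zdb -l /dev/dsk/c13t0d0s0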

Regards,
Martin
