> > Hi There,
> > 
> > I'd like to use an iSCSI target for my TimeMachine
> > backup. So I took Comstar with default settings and
> > the globalSAN iSCSI initiator software for Mac.

I have been using the Comstar iSCSI target on various snv and oSol respins for 
quite some time. It seems to me one has to be very careful matching the target 
and the initiator. I've had very good experience using various versions of the 
Microsoft iSCSI initiator against different targets (some commercial SANs, 
Solaris with both Comstar and the original daemon, a lot of NetBSD boxes used 
only as iSCSI targets); unfortunately, the results vary a lot with the 
initiator.
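For reference, a minimal zvol-backed Comstar target setup looks roughly like 
the sketch below. The pool and volume names (bpool/tm) are made up, and it 
assumes the stmf/iscsi-target packages are installed; restrict the view with 
host/target groups on anything other than a lab box.

```shell
# Create a sparse 50GB zvol to back the logical unit (bpool/tm is an example name)
zfs create -s -V 50G bpool/tm

# Register the zvol as a SCSI logical unit with the STMF framework;
# this prints the LU's GUID
sbdadm create-lu /dev/zvol/rdsk/bpool/tm

# Expose the LU to all initiators (use the GUID printed above)
stmfadm add-view 600144f0...

# Create an iSCSI target and enable the target service
itadm create-target
svcadm enable -r svc:/network/iscsi/target:default
```

The initiator then just needs the target's IQN and the host's IP; with default 
settings no CHAP or target-group configuration is involved.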


> > 
> > Problem: Transferring data is too slow (1GBit
> > Ethernet). Took about 10 hours for 50GB, which SMB does

I've posted this elsewhere... On a Gigabit network with Jumbo frames I am 
presently trying to install OpenSolaris on a VirtualBox machine hosted on a 
W2K8R2 system, using the built-in VirtualBox initiator, the target being an 
SXCE system, snv_125 BFUed to 128. Up to 20% of the installation goes 
reasonably fast, then the speed drops a lot and I start seeing disk errors in 
the virtual machine (I ran format->analyze->verify on a 15GB iSCSI disk; it 
took some three days, with about 150 bad blocks reported, while on the target 
there were no indications of an error on the physical disk at all).

After the format I decided to try the installation once more. After two days I 
am at 80% done, with about 4480 error messages in the log file of the type:
...
Error for command 'Write sector' - error level fatal
Sense key: ID not found
Vendor 'Gen-ATA ' error code: 0x05
...
with very low bandwidth usage on the target (and very low iostat on the 
underlying ZFS pool - 
...
[cheeky] ~ # zpool iostat bpool 1
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
bpool       24.5G  32.5G     45     45  2.59M   320K
bpool       24.5G  32.5G     68      0  4.07M      0
bpool       24.5G  32.5G     74      0  4.56M      0
bpool       24.5G  32.5G     70      0  4.31M      0
bpool       24.5G  32.5G     66      0  4.13M      0
bpool       24.5G  32.5G     70      0  4.38M      0
bpool       24.5G  32.5G     54    501  3.44M  2.67M
bpool       24.5G  32.5G     50    122  2.67M  1.90M
bpool       24.5G  32.5G     60     10  3.36M  82.9K
bpool       24.5G  32.5G     81      2  4.78M  24.0K
bpool       24.5G  32.5G     60     11  3.26M  95.9K
bpool       24.5G  32.5G     61     12  2.92M   104K
bpool       24.5G  32.5G     69     10  3.21M  87.9K
bpool       24.5G  32.5G     60     12  3.81M   104K
^C
....
(no other activity on this pool ATM, only the single zvol Comstar target). 


> > (on the same server) in about an hour.
> > 
> > Could someone help me tune this up?

This was just an example of what one can hit using iSCSI in untested combinations. 

Caveat emptor, as usual. 

> > 
> > Thx
> > Henri 
> > 
> > PS: I added the same post at Network-Forum a few days
> > before (without an answer). Hope that's OK.
> 
> I can't help, because I'm running the old iSCSI
> daemon rather than the new COMSTAR based target.
> But it does work, and perhaps faster than you've
> experienced, although it's still rather slow.
> (I think I also ported one of the BSD iSCSI target
> daemons to Solaris 9; it also appears to work with
> the GlobalSAN initiator last I tried, but I didn't
> push it much.)

As I mentioned above, I have been using NetBSD targets for quite some time. 
Again, it depends a lot on the underlying infrastructure. While I was on a 
100Mbit network with an old iprb card, I had no problems whatsoever, even 
though I had left the NetBSD box on an old 4.99.51 version. Since I switched to 
a Gigabit network, I've hit a number of problems (especially with Jumbo frames, 
which I am gradually beginning to understand). 
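A common culprit with Jumbo frames is a mismatched MTU somewhere along the 
path. A quick sanity check on an OpenSolaris box looks like this (e1000g0 is 
just an example link name; substitute your own from `dladm show-link`):

```shell
# Show the MTU currently configured on the link
dladm show-linkprop -p mtu e1000g0

# Raise it to 9000 bytes; note that every NIC and switch port on the
# path must agree, or large frames may be silently dropped
dladm set-linkprop -p mtu 9000 e1000g0
```

Both endpoints and any intermediate switches need jumbo frames enabled; a 
target at 9000 talking through a switch still at 1500 produces exactly the 
kind of stalls and retries described above.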

> 
> One of these days, I'll update my SB2K to SXCE 129,
> and then maybe I'll
> be able to try this sort of thing, although to really
> push it will have to wait
> until I get more disks.

With the exception of the machine I mentioned above, all my OpenSolaris 
machines follow the latest respins and are on 130 ATM. There is one set up as a 
target in a W2K8R2 Hyper-V fault-tolerant cluster with SCVMM, which has never 
had any problems doing its job (the speed is also as expected). 

>  (a GigE would be nice too,
> but I don't think I have any
> slots left)

Chavdar Ivanov
-- 
This message posted from opensolaris.org
