[zfs-discuss] Guide to COMSTAR iSCSI?

2010-12-12 Thread Martin Mundschenk

Hi!

I have configured two LUs following this guide:

http://thegreyblog.blogspot.com/2010/02/setting-up-solaris-comstar-and.html

Now I want each LU to be available to only one distinct client on the network,
but I haven't found an easy guide anywhere on the internet on how to accomplish this. Any hints?
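
One approach that appears to fit is COMSTAR host groups: put each client's
initiator IQN in its own host group and bind each LU's view to that group.
A rough sketch (the group name, initiator IQN and LU GUID below are
placeholders):

  # host group for the first client, holding that client's initiator IQN
  stmfadm create-hg client1-hg
  stmfadm add-hg-member -g client1-hg iqn.2010-12.example:client1

  # drop any existing view that exposes the LU to all hosts ...
  stmfadm list-view -l <LU-GUID>
  stmfadm remove-view -l <LU-GUID> -a

  # ... and re-add it, restricted to the host group
  stmfadm add-view -h client1-hg <LU-GUID>

Repeating this with a second host group covers the other LU/client.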

Martin



[zfs-discuss] unable to login with root

2010-12-12 Thread Mohammed Sadiq
dear all

I have Solaris 11 Express installed with ZFS. Recently I forgot the root
password, reset it to an empty password, and now when I reboot I get this
message:

pam_authtok_get : login : empty password not allowed for root from
localhost.

I don't know the IP address of this machine to log in from a remote host, and
there is no other user except root.

Please give your suggestions.

Thanks/Regards
Sadiq


Re: [zfs-discuss] unable to login with root

2010-12-12 Thread Kees Nuyt
On Sun, 12 Dec 2010 13:07:15 +0300, Mohammed Sadiq
sadiq1...@gmail.com wrote:

dear all

I have Solaris 11 Express installed with ZFS. Recently I forgot the root
password, reset it to an empty password, and now when I reboot I get this
message:

pam_authtok_get : login : empty password not allowed for root from
localhost.

I don't know the IP address of this machine to log in from a remote host, and
there is no other user except root.

Trying to log in remotely will probably give the same symptoms.

Boot in single-user mode (add -s to the GRUB kernel line).
You will get a console login prompt; log in as root.
That one might accept an empty password (untested).

If that fails, boot from a live CD, import and mount the root pool under /a or
/mnt, and paste the hash of a known password into the second field
of the entry for root in /etc/shadow.

The hash for the password "solaris" (without the quotes) is:
$5$3Euhlh2Y$E9qTjs62HIoipqTwY75Ox.JDVgk/9QFglv.w1rE4wE0
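
For the live-CD route, the steps look roughly like this (assuming the root
pool is named rpool and the boot environment dataset is rpool/ROOT/solaris;
adjust both to match your system):

  # import the root pool under an alternate root
  zpool import -f -R /a rpool

  # the boot environment dataset has canmount=noauto, so mount it explicitly
  zfs mount rpool/ROOT/solaris

  # put the known hash into the second field of root's entry
  vi /a/etc/shadow

  # clean up and reboot
  zfs umount rpool/ROOT/solaris
  zpool export rpool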

Hope this helps.
-- 
  (  Kees Nuyt
  )
c[_]


Re: [zfs-discuss] What performance to expect from mirror vdevs?

2010-12-12 Thread Bob Friesenhahn

On Sat, 11 Dec 2010, Stephan Budach wrote:


At first I disabled all write cache and read ahead options for each raid 
group on the raids, since I wanted to give ZFS as much control over the 
drives as possible, but the performance was quite poor. I am running this 
zpool on a Sun Fire X4170M2 with 32 GB of RAM, so I ran bonnie++ with -s 63356 
-n 128 and got these results:


Sequential Output
char: 51819
block: 50602
rewrite: 28090


I am not very familiar with bonnie++ output.  Does 51819 mean 
51MB/second?  If so, that is perhaps 1 disk's worth of performance.



Random seeks: 510 - this seems really low to me, doesn't it?


It does seem a bit low.  Everything depends on whether the random seeks 
were satisfied from the ARC cache or from the underlying disks.  You should 
be able to obtain at least the seek rate of half your total disks.  For 
example, with 16 mirror pairs and each disk able to do 100 seeks per 
second, you should expect at least 16*100 = 1600 random seeks per second.  
With zfs mirroring and doing only read-seeks, you should expect to get up 
to 75% of the combined seek capability of all 32 disks.


Since I was curious what would happen if I enabled WriteCache and 
ReadAhead on the raid groups, I turned them on for all 32 devices and re-ran 
bonnie++. To my great dismay, this time zfs had a lot of random trouble with 
the drives: zfs would arbitrarily remove drives from the pool because they 
exceeded the error thresholds. On one run this happened to 4 drives from one 
fc raid; on the next run 3 drives from the other raid got removed from the 
pool.


Ungood.  Note that with this many disks, you should be able to swamp 
your fiber channel link and that the fiber channel should be the 
sequential I/O bottleneck.  It may also be that your RAID array 
firmware/CPUs become severely overloaded.


I know that I'd better disable all optimizations on the raid side, but the 
performance seems just too bad with these settings. Maybe running 16 mirrors 
in a zpool is not a good idea - but that seems unlikely to me.


16 mirrors in a zpool is a very good idea.  Just keep in mind that 
this is a lot of I/O power and you might swamp your FC link and 
adaptor card.



Is there anything else I can check?


Check the output of

  iostat -xn 30

while bonnie++ is running.  This may reveal an issue.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] What performance to expect from mirror vdevs?

2010-12-12 Thread Ian Collins

 On 12/12/10 04:48 AM, Stephan Budach wrote:

Hi,

On Friday I received two of my new fc raids, which I intend to use 
as my new zpool devices. These devices are from CiDesign and their 
type/model is iR16FC4ER. They are fc raids that also allow JBOD 
operation, which is what I chose. So I configured 16 raid groups on 
each system and attached them to their fc channel one by one.


On my Sol11Expr host I have created a zpool of mirror vdevs by 
selecting one disk from each raid. This way I got a zpool that looks 
like this:
At first I disabled all write cache and read ahead options for each 
raid group on the raids, since I wanted to give ZFS as much control 
over the drives as possible, but the performance was quite poor. I am 
running this zpool on a Sun Fire X4170M2 with 32 GB of RAM, so I ran 
bonnie++ with -s 63356 -n 128 and got these results:


Sequential Output
char: 51819
block: 50602
rewrite: 28090

Sequential Input:
char: 62562
block 60979

Random seeks: 510 - this seems really low to me, doesn't it?

Sequential Create:
create: 27529
read: 172287
delete: 30522

Random Create:
create: 25531
read: 244977
delete 29423

The closest I have by way of comparison is an old Thumper with a stripe of 
9 mirrors:


Sequential Output
char: 206479
block: 601102
rewrite: 218089

Sequential Input:
char: 138945
block 702598

Random seeks: 1970

Getting on for an order of magnitude better I/O.


Is there anything else I can check?


iostat, as recommended elsewhere in the thread.

--
Ian.
