Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-17 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
 From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
 
  Found quite a few posts on
  various
  forums of people complaining that RDP with external auth doesn't work (or
  not reliably),
 
 Actually, it does work, and it works reliably, but the setup is very much not
 straightforward.  I'm likely to follow up on this later today, because as
 coincidence would have it, this is on my to-do for today.

I just published simplesmf http://code.google.com/p/simplesmf/
which includes a lot of the work I've done in the last month.  Relevant to this
discussion are the step-by-step instructions to enable VBoxHeadless external
authentication and connect the RDP client to it:
http://code.google.com/p/simplesmf/source/browse/trunk/samples/virtualbox-guest-control/headless-hints.txt
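For the impatient, a rough sketch of the VBoxAuthSimple route those notes cover
(VM name, user and port below are placeholders; the headless-hints.txt above is
the authoritative version, and the Oracle extension pack is needed for VRDP):

  # host-wide: use the simple external auth library shipped with VirtualBox
  VBoxManage setproperty vrdeauthlibrary "VBoxAuthSimple"
  # per VM: enable VRDE, require external auth, pick an unprivileged port
  VBoxManage modifyvm "myvm" --vrde on --vrdeport 5012 --vrdeauthtype external
  # hash a password and register it for the RDP user
  VBoxManage internalcommands passwordhash "secret"
  VBoxManage setextradata "myvm" "VBoxAuthSimple/users/vboxuser" <hash-from-above>
  # start headless and point the RDP client at oi-host:5012
  VBoxHeadless --startvm "myvm" &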


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-14 Thread dswartz
 What was wrong with the suggestion to use VMWare ESXi and Nexenta or
 OpenIndiana to do this?

Sorry, I don't know which specific post you're referring to.  I am already
running ESXi with OI on top and serving storage back to the other guests.
Ned made a good point that running a virtualization solution on top of the
storage host would eliminate any network traffic for disk access by guests.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-14 Thread dswartz
 On 11/14/12 15:20, Dan Swartzendruber wrote:
 Well, I think I give up for now.  I spent quite a few hours over the
 last
 couple of days trying to get gnome desktop working on bare-metal OI,
 followed by virtualbox.  Supposedly that works in headless mode with RDP
 for
 management, but nothing but fail for me.  Found quite a few posts on
 various
 forums of people complaining that RDP with external auth doesn't work
 (or
 not reliably), and that was my experience.  The final straw was when I
 rebooted the OI server as part of cleaning things up, and... It hung.
 Last
 line in verbose boot log is 'ucode0 is /pseudo/ucode@0'.  I power-cycled
 it
 to no avail.  Even tried a backup BE from hours earlier, to no avail.
 Likely whatever was bunged happened prior to that.  If I could get
 something
 that ran like xen or kvm reliably for a headless setup, I'd be willing
 to
 give it a try, but for now, no...

 SmartOS.

Interesting, I may have a play with this...

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-14 Thread Karl Wagner
 

On 2012-11-14 12:55, dswa...@druber.com wrote:

 On 11/14/12 15:20, Dan Swartzendruber wrote:

 Well, I think I give up for now.  I spent quite a few hours over the last
 couple of days trying to get gnome desktop working on bare-metal OI,
 followed by virtualbox.  Supposedly that works in headless mode with RDP
 for management, but nothing but fail for me.  Found quite a few posts on
 various forums of people complaining that RDP with external auth doesn't
 work (or not reliably), and that was my experience.  The final straw was
 when I rebooted the OI server as part of cleaning things up, and... It
 hung.  Last line in verbose boot log is 'ucode0 is /pseudo/ucode@0'.  I
 power-cycled it to no avail.  Even tried a backup BE from hours earlier,
 to no avail.  Likely whatever was bunged happened prior to that.  If I
 could get something that ran like xen or kvm reliably for a headless
 setup, I'd be willing to give it a try, but for now, no...

 SmartOS.

 Interesting, I may have a play with this...

That _does_ look interesting.  All I need is some time to try it out.  Looks
perfect for me to consolidate my home servers.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-14 Thread Dan Swartzendruber
 

-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov
Sent: Tuesday, November 13, 2012 10:08 PM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Dedicated server running ESXi with no RAID card,
ZFS for storage?

On 2012-11-14 03:20, Dan Swartzendruber wrote:
 Well, I think I give up for now.  I spent quite a few hours over the 
 last couple of days trying to get gnome desktop working on bare-metal 
 OI, followed by virtualbox.  Supposedly that works in headless mode 
 with RDP for management, but nothing but fail for me.  Found quite a 
 few posts on various forums of people complaining that RDP with 
 external auth doesn't work (or not reliably), and that was my experience.


I can't say I used VirtualBox RDP extensively, certainly not in the newer
4.x series, yet. For my tasks it sufficed to switch the VM from headless to
GUI and back via savestate, as automated by my script from vboxsvc (vbox.sh
-s vmname startgui for a VM config'd as a vboxsvc SMF service already).

*** The thing for me is my wife needs to be able to get to her XP desktop
console when she reboots, since her company mandates encrypted disks, so at
boot time she gets a PGP prompt before it will even boot :(  Currently she
has two icons on her desktop - an RDP one for her normal production use,
and a VNC one to get to the ESXi machine console.  If I have to diddle
things manually (or show her how to), the WAF will take a big hit.  I can
already imagine the response: everything works just fine now!  Why do you
have to 'improve' things??? :)

Most of my frustration is with the apparent lack of QA at Oracle for basic
things like this.  Jim, I did nothing weird or unusual.  I installed
virtualbox from their site, installed their extensions pack, configured RDP
for the 3 guests, fired up the RDP client from windows, and... FAIL.  Google
indicates their RDP server does the authentication somewhat differently than
windows expects.  I dunno about that - all I know is something basic doesn't
work, and searching the web found literally a dozen threads going back 3-4
years on this same issue.  Things like not being able to run RDP for a guest
if you run it as non-root, since it can't bind to the default low-numbered
port, so I had to use higher-numbered ports.  And on and on and on.  Sorry
to rant here, just frustrated.  If I were hacking on some bleeding-edge
setup, I'd expect to run into things like this and get lots of paper cuts,
but not for something so basic.
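(For reference, the non-root port issue boils down to moving VRDE off 3389; a
minimal sketch, with a made-up VM name and port:)

  # run the VM as an unprivileged user and keep VRDE above 1024
  VBoxManage modifyvm "winxp" --vrde on --vrdeport 5001
  VBoxHeadless --startvm "winxp" &
  # then point the RDP client at oi-host:5001 instead of the default 3389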

  The final straw was when I rebooted the OI server as part of cleaning
  things up, and... It hung.  Last line in verbose boot log is 'ucode0 is
  /pseudo/ucode@0'.  I power-cycled it to no avail.  Even tried a backup BE
  from hours earlier, to no avail.  Likely whatever was bunged happened
  prior to that.  If I could get something that ran like xen or kvm reliably
  for a headless setup, I'd be willing to give it a try, but for now, no...

I can't say much about OI desktop problems either - works for me (along with
VBox 4.2.0 release), suboptimally due to lack of drivers, but reliably.

Try to boot with -k option to use a kmdb debugger as well - maybe the
system would enter it upon getting stuck (does so instead of rebooting when
it is panicking) and you can find some more details there?..

*** Well, I still have the OI boot disk, so I may try that sometime this
weekend when I have cycles...  
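For anyone wanting to try Jim's suggestion, a sketch of the legacy-GRUB tweak
on an OI box (menu.lst path and entry details vary; this only shows where the
flags go):

  # in /rpool/boot/grub/menu.lst, append -k -v to the kernel$ line of the BE:
  kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS -k -v
  # -v gives the verbose probe messages; -k loads kmdb so a panic drops into
  # the debugger instead of rebooting, and a wedged console can sometimes be
  # broken into with F1-A on the local keyboard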


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-14 Thread Dan Swartzendruber
 

-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov
Sent: Tuesday, November 13, 2012 10:08 PM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Dedicated server running ESXi with no RAID card,
ZFS for storage?


  The final straw was when I rebooted the OI server as part of cleaning
  things up, and... It hung.  Last line in verbose boot log is 'ucode0 is
  /pseudo/ucode@0'.  I power-cycled it to no avail.  Even tried a backup BE
  from hours earlier, to no avail.  Likely whatever was bunged happened
  prior to that.  If I could get something that ran like xen or kvm reliably
  for a headless setup, I'd be willing to give it a try, but for now, no...

Try to boot with -k option to use a kmdb debugger as well - maybe the
system would enter it upon getting stuck (does so instead of rebooting when
it is panicking) and you can find some more details there?..

*** I just re-read my notes, and in fact I *was* booting with '-k -v', so no
help there, I'm afraid :(


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-14 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Dan Swartzendruber
 
 Well, I think I give up for now.  I spent quite a few hours over the last
 couple of days trying to get gnome desktop working on bare-metal OI,
 followed by virtualbox.  

I would recommend installing OI desktop, not OI server.  Because I, too, tried
to get gnome working in OI server, to no avail.  But if you install OI desktop,
it simply goes in - brainlessly simple.


 Found quite a few posts on
 various
 forums of people complaining that RDP with external auth doesn't work (or
 not reliably), 

Actually, it does work, and it works reliably, but the setup is very much not 
straightforward.  I'm likely to follow up on this later today, because as 
coincidence would have it, this is on my to-do for today.

Right now, I'll say this much:  When you RDP from a windows machine to a
windows machine, you get prompted for a password.  Nice, right?  Seems pretty
obvious.   ;-)   But the VirtualBox RDP server doesn't have that capability.
Pt...  You need to enter the username & password into the RDP client, and
save it, before attempting the connection.


 The final straw was when I
 rebooted the OI server as part of cleaning things up, and... It hung.  

Bummer.  That might be some unsupported hardware for running OI.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-14 Thread Dan Swartzendruber
On 11/14/2012 9:44 AM, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Dan Swartzendruber

Well, I think I give up for now.  I spent quite a few hours over the last
couple of days trying to get gnome desktop working on bare-metal OI,
followed by virtualbox.
 

I would recommend installing OI desktop, not OI server.  Because I, too, tried
to get gnome working in OI server, to no avail.  But if you install OI desktop,
it simply goes in - brainlessly simple.
   
Ned, sorry if I was unclear.  This was a fresh desktop install.  I tried 
to back-fit GUI to my old server install but it didn't work well...

Found quite a few posts on
various
forums of people complaining that RDP with external auth doesn't work (or
not reliably),
 

Actually, it does work, and it works reliably, but the setup is very much not 
straightforward.  I'm likely to follow up on this later today, because as 
coincidence would have it, this is on my to-do for today.

Right now, I'll say this much:  When you RDP from a windows machine to a
windows machine, you get prompted for a password.  Nice, right?  Seems pretty
obvious.   ;-)   But the VirtualBox RDP server doesn't have that capability.
Pt...  You need to enter the username & password into the RDP client, and
save it, before attempting the connection.
   

Oh, okay.  Yuck...

The final straw was when I
rebooted the OI server as part of cleaning things up, and... It hung.
 

Bummer.  That might be some unsupported hardware for running OI.
   
But it worked!  I did the install on the bare metal, it worked fine.  
Did some updates and rebooted, and that worked fine.  Installed 
virtualbox and some other stuff, and when I rebooted to get a clean 
baseline to work from, that is when it hung.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-14 Thread Dan Swartzendruber
On 11/14/2012 9:44 AM, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Dan Swartzendruber

Well, I think I give up for now.  I spent quite a few hours over the last
couple of days trying to get gnome desktop working on bare-metal OI,
followed by virtualbox.
 

I would recommend installing OI desktop, not OI server.  Because I, too, tried
to get gnome working in OI server, to no avail.  But if you install OI desktop,
it simply goes in - brainlessly simple.


   

Found quite a few posts on
various
forums of people complaining that RDP with external auth doesn't work (or
not reliably),
 

Actually, it does work, and it works reliably, but the setup is very much not 
straightforward.  I'm likely to follow up on this later today, because as 
coincidence would have it, this is on my to-do for today.
   

Please post your results, I'd like to know this.

The final straw was when I
rebooted the OI server as part of cleaning things up, and... It hung.
 

Bummer.  That might be some unsupported hardware for running OI.

   
I had two backup BEs, and I only tried the more recent one.  Maybe the
older one would not hang.  I have to say my perception of OI is that it
is a lot more fragile when it comes to things like this than Linux is.
I'm used to windows being 'the boy in the bubble', but not Unix
variants.  I'd be more charitably inclined if it threw error messages and
still came up okay - a silent hang during boot is frustrating, to put it
mildly.  I'm wondering if virtualbox ties in here somewhere?  This was
the first time I rebooted this box since the vbox install.  My previous
attempt to put gui+vbox on my old server install also hung during boot,
but I assumed that was just because it was being done in a way that was
never intended.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-13 Thread Dan Swartzendruber

Well, I think I give up for now.  I spent quite a few hours over the last
couple of days trying to get gnome desktop working on bare-metal OI,
followed by virtualbox.  Supposedly that works in headless mode with RDP for
management, but nothing but fail for me.  Found quite a few posts on various
forums of people complaining that RDP with external auth doesn't work (or
not reliably), and that was my experience.  The final straw was when I
rebooted the OI server as part of cleaning things up, and... It hung.  Last
line in verbose boot log is 'ucode0 is /pseudo/ucode@0'.  I power-cycled it
to no avail.  Even tried a backup BE from hours earlier, to no avail.
Likely whatever was bunged happened prior to that.  If I could get something
that ran like xen or kvm reliably for a headless setup, I'd be willing to
give it a try, but for now, no...

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-13 Thread Ian Collins

On 11/14/12 15:20, Dan Swartzendruber wrote:

Well, I think I give up for now.  I spent quite a few hours over the last
couple of days trying to get gnome desktop working on bare-metal OI,
followed by virtualbox.  Supposedly that works in headless mode with RDP for
management, but nothing but fail for me.  Found quite a few posts on various
forums of people complaining that RDP with external auth doesn't work (or
not reliably), and that was my experience.  The final straw was when I
rebooted the OI server as part of cleaning things up, and... It hung.  Last
line in verbose boot log is 'ucode0 is /pseudo/ucode@0'.  I power-cycled it
to no avail.  Even tried a backup BE from hours earlier, to no avail.
Likely whatever was bunged happened prior to that.  If I could get something
that ran like xen or kvm reliably for a headless setup, I'd be willing to
give it a try, but for now, no...


SmartOS.

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-13 Thread Jim Klimov

On 2012-11-14 03:20, Dan Swartzendruber wrote:

Well, I think I give up for now.  I spent quite a few hours over the last
couple of days trying to get gnome desktop working on bare-metal OI,
followed by virtualbox.  Supposedly that works in headless mode with RDP for
management, but nothing but fail for me.  Found quite a few posts on various
forums of people complaining that RDP with external auth doesn't work (or
not reliably), and that was my experience.



I can't say I used VirtualBox RDP extensively, certainly not in the
newer 4.x series, yet. For my tasks it sufficed to switch the VM
from headless to GUI and back via savestate, as automated by my
script from vboxsvc (vbox.sh -s vmname startgui for a VM config'd
as a vboxsvc SMF service already).

  The final straw was when I
 rebooted the OI server as part of cleaning things up, and... It hung. 
 Last
 line in verbose boot log is 'ucode0 is /pseudo/ucode@0'.  I 
power-cycled it

 to no avail.  Even tried a backup BE from hours earlier, to no avail.
 Likely whatever was bunged happened prior to that.  If I could get 
something

 that ran like xen or kvm reliably for a headless setup, I'd be willing to
 give it a try, but for now, no...

I can't say much about OI desktop problems either - works for me
(along with VBox 4.2.0 release), suboptimally due to lack of drivers,
but reliably.

Try to boot with -k option to use a kmdb debugger as well - maybe
the system would enter it upon getting stuck (does so instead of
rebooting when it is panicking) and you can find some more details
there?..

//Jim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-13 Thread Edmund White
What was wrong with the suggestion to use VMWare ESXi and Nexenta or
OpenIndiana to do this?

-- 
Edmund White




On 11/13/12 8:20 PM, Dan Swartzendruber dswa...@druber.com wrote:


Well, I think I give up for now.  I spent quite a few hours over the last
couple of days trying to get gnome desktop working on bare-metal OI,
followed by virtualbox.  Supposedly that works in headless mode with RDP
for
management, but nothing but fail for me.  Found quite a few posts on
various
forums of people complaining that RDP with external auth doesn't work (or
not reliably), and that was my experience.  The final straw was when I
rebooted the OI server as part of cleaning things up, and... It hung.
Last
line in verbose boot log is 'ucode0 is /pseudo/ucode@0'.  I power-cycled
it
to no avail.  Even tried a backup BE from hours earlier, to no avail.
Likely whatever was bunged happened prior to that.  If I could get
something
that ran like xen or kvm reliably for a headless setup, I'd be willing to
give it a try, but for now, no...


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-09 Thread Karl Wagner
 

On 2012-11-08 17:49, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:

 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Karl Wagner

 I am just wondering why you export the ZFS system through NFS?  I have had
 much better results (albeit spending more time setting up) using iSCSI.  I
 found that performance was much better,

 A couple years ago, I tested and benchmarked both configurations on the
 same system.  I found that the performance was equal both ways (which
 surprised me because I expected NFS to be slower due to FS overhead.)  I
 cannot say if CPU utilization was different - but the IO measurements were
 the same.  At least, indistinguishably different.

 Based on those findings, I opted to use NFS for several weak reasons.  If
 I wanted to, I could export NFS to more different systems.  I know
 everything nowadays supports iscsi initiation, but it's not as easy to set
 up as an NFS client.  If you want to expand the guest disk, in iscsi, ...
 I'm not completely sure you *can* expand a zvol, but if you can, you at
 least have to shut everything down, then expand and bring it all back up
 and then have the iscsi initiator expand to occupy the new space.  But in
 NFS, the client can simply expand, no hassle.

 I like being able to look in a filesystem and see the guests listed there
 as files, knowing I could, if I wanted to, copy those things out to any
 type of storage I wish.  Someday, perhaps I'll want to move some guest
 VM's over to a BTRFS server instead of ZFS.  But it would be more
 difficult with iscsi.

 For what it's worth, in more recent times, I've opted to use iscsi.  And
 here are the reasons:

 When you create a guest file in a ZFS filesystem, it doesn't automatically
 get a refreservation.  Which means, if you run out of disk space thanks to
 snapshots and stuff, the guest OS suddenly can't write to disk, and it's a
 hard guest crash/failure.  Yes you can manually set the refreservation, if
 you're clever, but it's easy to get wrong.  If you create a zvol, by
 default, it has an appropriately sized refreservation that guarantees the
 guest will always be able to write to disk.

 Although I got the same performance using iscsi or NFS with ESXi...  I did
 NOT get the same result using VirtualBox.  In Virtualbox, if I use a *.vdi
 file...  The performance is *way* slower than using a *.vmdk wrapper for a
 physical device (zvol).  ( using VBoxManage internalcommands createrawvmdk )
 The only problem with the zvol / vmdk idea in virtualbox is that every
 reboot (or remount) the zvol becomes owned by root again.  So I have to
 manually chown the zvol for each guest each time I restart the host.

Fair enough, thanks for the info.

As I say, it was quite a while back and I was using either Xen or KVM (can't
remember which).  It may be that the performance profiles are/were just very
different.  I was also just using an old desktop for testing purposes, which
skews the performance too (it was far too memory- and CPU-limited to be used
for real).

If I were doing this now, I would probably run the ZFS-aware OS on bare
metal, but I still think I would use iSCSI to export the zvols (mainly due
to the ability to use it across a real network, hence allowing guests to be
migrated simply).

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-09 Thread Eugen Leitl
On Thu, Nov 08, 2012 at 04:57:21AM +, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:

 Yes you can, with the help of Dell, install OMSA to get the web interface
 to manage the PERC.  But it's a pain, and there is no equivalent option for
 most HBA's.  Specifically, on my systems with 3ware, I simply installed the
 solaris 3ware utility to manage the HBA.  Which would not be possible on
 ESXi.  This is important because the systems are in a remote datacenter, and
 it's the only way to check for red blinking lights on the hard drives.  ;-)

I thought most IPMI came with full KVM, and also SNMP, and some ssh built-in.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-09 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
 From: Dan Swartzendruber [mailto:dswa...@druber.com]
 
 I have to admit Ned's (what do I call you?) idea is interesting.  I may give
 it a try...

Yup, officially Edward, most people call me Ned.

I contributed to the OI VirtualBox instructions.  See here:
http://wiki.openindiana.org/oi/VirtualBox

Jim's vboxsvc is super powerful - But at first I found it overwhelming, mostly 
due to unfamiliarity with SMF.  One of these days I'm planning to contribute a 
Quick Start guide to vboxsvc, but for now, if you find it confusing in any 
way, just ask for help here.  (Right Jim?)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-09 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
 From: Karl Wagner [mailto:k...@mouse-hole.com]
 
 If I was doing this now, I would probably use the ZFS aware OS bare metal,
 but I still think I would use iSCSI to export the ZVols (mainly due to the 
 ability
 to use it across a real network, hence allowing guests to be migrated simply)

Yes, if your VM host is some system other than your ZFS bare-metal storage
server, then exporting the zvol via iscsi is a good choice, or exporting your
storage via NFS.  Each one has its own pros/cons, and I would personally be
biased in favor of iscsi.

But if you're going to run the guest VM on the same machine that is the ZFS 
storage server, there's no need for the iscsi.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-09 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Eugen Leitl
 
 On Thu, Nov 08, 2012 at 04:57:21AM +, Edward Ned Harvey
 (opensolarisisdeadlongliveopensolaris) wrote:
 
  Yes you can, with the help of Dell, install OMSA to get the web interface
  to manage the PERC.  But it's a pain, and there is no equivalent option for
  most HBA's.  Specifically, on my systems with 3ware, I simply installed the
  solaris 3ware utility to manage the HBA.  Which would not be possible on
  ESXi.  This is important because the systems are in a remote datacenter,
 and
  it's the only way to check for red blinking lights on the hard drives.  ;-)
 
 I thought most IPMI came with full KVM, and also SNMP, and some ssh built-
 in.

Depends.

So, one possible scenario:  You power up the machine for the first time, you
enter the ILOM console, you create a username & password & static IP address.
From now on, you're able to get the remote console, awesome, great.  No need
for ipmitool in the OS.

Another scenario, that I encounter just as often:  You inherit some system from
the previous admin.  They didn't set up IPMI or ILOM.  They installed ESXi, and
now the only way to set it up is to take the system down and do it at the
console.

But in the situation where I inherit a Linux / Solaris machine from a previous
admin who didn't config ipmi...  I don't need to power down.  I can config the
ipmi via ipmitool.

Going a little further down these trails...

If you have a basic IPMI device, then all it does is *true* ipmi, which is a
standard protocol.  You have to send it ipmi commands via ipmitool from your
laptop (or another server).  It doesn't use SSL; it uses either no encryption
or a preshared key.  The preshared key is a random 20-character hex string.
If you configure that at boot time (as in the first situation mentioned
above), then you have to type in at the physical console at first boot: new
username, new password, new static IP address etc., and the new encryption
key.  But if you're running a normal OS, you can skip all that, boot the new
OS, and paste all that stuff in via ssh, using the local ipmitool to config
the local ipmi device.
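For example, a from-the-OS configuration session looks roughly like this
(addresses, user ID and names are made up; channel/user-slot numbers vary by
board):

  ipmitool lan set 1 ipsrc static
  ipmitool lan set 1 ipaddr 192.0.2.50
  ipmitool lan set 1 netmask 255.255.255.0
  ipmitool lan set 1 defgw ipaddr 192.0.2.1
  ipmitool user set name 2 admin
  ipmitool user set password 2 'S3cr3tPass'
  ipmitool user enable 2
  ipmitool channel setaccess 1 2 ipmi=on privilege=4   # 4 = ADMINISTRATOR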

If you have a newer, more powerful ILOM device, then you probably only need to 
assign an IP address to the ilom.  Then you can browse to it via https and do 
whatever else you need to do.

Make sense?

Long story short:  Depends.  ;-)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-09 Thread Jim Klimov
On 2012-11-09 16:14, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

From: Karl Wagner [mailto:k...@mouse-hole.com]

If I was doing this now, I would probably use the ZFS aware OS bare metal,
but I still think I would use iSCSI to export the ZVols (mainly due to the 
ability
to use it across a real network, hence allowing guests to be migrated simply)


Yes, if your VM host is some system other than your ZFS bare-metal storage
server, then exporting the zvol via iscsi is a good choice, or exporting your
storage via NFS.  Each one has its own pros/cons, and I would personally be
biased in favor of iscsi.

But if you're going to run the guest VM on the same machine that is the ZFS 
storage server, there's no need for the iscsi.



Well, since the ease of re-attachment of VM hosts to iSCSI was mentioned
a few times in this thread (and there are particular nuances with iSCSI
to localhost), it is worth mentioning that NFS files can be re-attached
just as easily - including the localhost.

Cloning disks is just as easy when they are zvols or files in dedicated
datasets; note that disk image UUIDs must be re-forged anyway (see doc).

Also note, that in general, there might be need for some fencing (i.e.
only one host tries to start up a VM from a particular backend image).
I am not sure iSCSI inherently does a better job than NFS at this?..

//Jim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-09 Thread Jim Klimov
On 2012-11-09 16:11, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

From: Dan Swartzendruber [mailto:dswa...@druber.com]

I have to admit Ned's (what do I call you?) idea is interesting.  I may give
it a try...


Yup, officially Edward, most people call me Ned.

I contributed to the OI VirtualBox instructions.  See here:
http://wiki.openindiana.org/oi/VirtualBox

Jim's vboxsvc is super powerful


Thanks for kudos, and I'd also welcome some on the SourceForge
project page :)

http://sourceforge.net/projects/vboxsvc/

 for now, if you find it confusing in any way, just ask for help here. 
 (Right Jim?)


I'd prefer the questions and discussion on vboxsvc to continue in
the VirtualBox forum, so it's all in one place for other users too.
It is certainly an offtopic for the lists about ZFS, so I won't
take this podium for too long :)

https://forums.virtualbox.org/viewtopic.php?f=11&t=33249

 One of these days I'm planning to contribute a Quick Start guide to 
vboxsvc,


I agree that the README might need cleaning up; so far it is like a
snowball growing with details and new features. Perhaps some part
should be separated into a concise quick-start guide that would
not scare people off by the sheer amount of letters ;)

I don't think I can point to a chapter and say "Take this as the
QuickStart" :(

 - But at first I found it overwhelming, mostly due to unfamiliarity 
with SMF.


The current README does, however, provide an overview of SMF as was
needed by some of the inquiring users, and an example of command-line
creation of a service to wrap a VM. A feature to do this by the script
itself is pending, somewhat indefinitely.
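(As a very rough illustration only - the FMRI/instance naming below is a guess
based on the manifest filename, and the README is the authoritative reference:)

  # import the sample manifest once, then enable an instance per VM
  svccfg import /var/svc/manifest/site/vbox-svc.xml
  svcadm enable svc:/site/xvm/vbox:myvm
  svcs -xv svc:/site/xvm/vbox:myvm    # check state / method log if it fails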



Also note that for OI desktop users in particular (and likely for
other OpenSolaris-based OSes with X11 too), I'm now adding features
to ease management of VMs that are not executed headless, but rather are
interactive.  These too can now be wrapped as SMF services to
automate shutdown and/or backgrounding into headless mode and back.
I made and use this myself to enter other OSes on my laptop that
are dual-bootable and can run in VBox as well as on hardware.
There is also a new foregrounding startgui mode that can trap the
signals which stop its terminal and properly savestate or shut down
the VM; it also wraps taking ZFS snapshots of the VM disk resources,
if applicable.  There is also a mode where the script spawns a
dedicated xterm for its execution; by closing the xterm you can
properly stop the VM with your preselected method in one click,
before you log out of the X11 session.

However, this part of my work was almost in vain - the end of an X11
session happens as a brute-force close of X connections, so the
interactive GUIs just die before they can process any signals.
This makes sense for networked X servers that can't really send
signals to remote client OSes, but is rather stupid for a local OS.
I hope the desktop environment gurus might come up with something.

Or perhaps I'll come up with an SMF wrapper for X sessions that the
vbox startgui feature could depend on, so that closing a session
would be an SMF disablement.  Hopefully, spawned local X clients would
also be under the SMF contract and would get a chance to stop properly :)

Anyway, if anybody else is interested in the new features described
above - check out the code repository for the vboxsvc project (this
is not yet so finished as to publish a new package version):

http://vboxsvc.svn.sourceforge.net/viewvc/vboxsvc/lib/svc/method/vbox.sh
http://vboxsvc.svn.sourceforge.net/viewvc/vboxsvc/var/svc/manifest/site/vbox-svc.xml
http://vboxsvc.svn.sourceforge.net/viewvc/vboxsvc/usr/share/doc/vboxsvc/README-vboxsvc.txt

See you in the VirtualBox forum thread if you do have questions :)
//Jim Klimov


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-08 Thread Jim Klimov
On 2012-11-08 05:43, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

you've got a Linux or Windows VM inside of ESX, which is writing to a virtual
disk, which ESX is then wrapping up inside NFS and TCP, talking on the virtual 
LAN to the ZFS server, which unwraps the TCP and NFS, pushes it all through the 
ZFS/Zpool layer, writing back to the virtual disk that ESX gave it, which is 
itself a layer on top of Ext3


I think this is the part where you two disagree.  The way I understand
all-in-ones, the VM running a ZFS OS enjoys PCI pass-through, so it gets
dedicated hardware access to the HBA(s) and hard disks at raw speeds, with
no extra layers of lag in between.  So there are a couple of OS disks
where ESXi itself is installed, plus distros, logging and stuff, and the
other disks are managed by ZFS in a VM and served back to ESXi
to store the other VMs on the system.

Also, VMWare does not (AFAIK) use ext3, but their own VMFS which is,
among other things, cluster-aware (same storage can be shared by
several VMware hosts).

That said, on older ESX (with its minimized RHEL userspace interface),
which was picky about only using certified hardware with virt-enabled
drivers, I did combine some disks served by the motherboard into a
Linux mdadm array (within the RHEL-based management OS) and exported
that to the vmkernel over NFS.  Back then disk performance was indeed
abysmal whatever you did, so the NFS disks ended up being used not to
store virtual disks, but rather distros and backups.

HTH,
//Jim Klimov

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-08 Thread Karl Wagner
 

On 2012-11-08 4:43, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:

 When I said performance was abysmal, I meant, if you dig right down and
 pressure the system for throughput to disk, you've got a Linux or Windows
 VM inside of ESX, which is writing to a virtual disk, which ESX is then
 wrapping up inside NFS and TCP, talking on the virtual LAN to the ZFS
 server, which unwraps the TCP and NFS, pushes it all through the ZFS/Zpool
 layer, writing back to the virtual disk that ESX gave it, which is itself
 a layer on top of Ext3, before it finally hits disk.  Based purely on CPU
 and memory throughput, my VM guests were seeing a max throughput of around
 2-3 Gbit/sec.  That's not *horrible* abysmal.  But it's bad to be
 CPU/memory/bus limited if you can just eliminate all those extra layers,
 and do the virtualization directly inside a system that supports zfs.

Hi 

I have some experience with Virtualisation, mainly using Xen
and KVM, but not much. I am just wondering why you export the ZFS system
through NFS? 

I have had much better results (albeit spending more time
setting up) using iSCSI. I found that performance was much better, I
believe because a layer was being cut out of the loop. Rather than the
hypervisor having to emulate block storage from files on NFS, the block
storage is exposed directly from Solaris (in my case) through iSCSI, and
passed through the virtual LAN to the other guests. The hypervisor then
sees nothing but ethernet packets. 
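On a current illumos/OI box the equivalent would be COMSTAR; a rough sketch
(pool, zvol name and size are made up, and no CHAP or per-host views are
shown):

  svcadm enable -r svc:/system/stmf:default
  svcadm enable -r svc:/network/iscsi/target:default
  zfs create -V 50G tank/vms/guest1
  stmfadm create-lu /dev/zvol/rdsk/tank/vms/guest1
  stmfadm add-view 600144F0XXXXXXXXXXXXXXXXXXXXXXXX   # GUID printed by create-lu
  itadm create-target                                 # then point the initiator at it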

This was a few years ago and I
can't remember the numbers, but CPU consumption was drastically reduced
and performance of the guests was increased significantly. 

The only
downside was a slightly more complicated setup for the guests, but not
by enough to sacrifice the performance benefits. 

It is possible that
this is not the case in ESXi, or that more modern hypervisors deal with
it more efficiently. That's why I'm asking the question :)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-08 Thread Dan Swartzendruber
 

-Original Message-
From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
[mailto:opensolarisisdeadlongliveopensola...@nedharvey.com] 
Sent: Wednesday, November 07, 2012 11:44 PM
To: Dan Swartzendruber; Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
Cc: Tiernan OToole; zfs-discuss@opensolaris.org
Subject: RE: [zfs-discuss] Dedicated server running ESXi with no RAID card,
ZFS for storage?

 From: Dan Swartzendruber [mailto:dswa...@druber.com]
 
 I'm curious here.  Your experience is 180 degrees opposite from mine.  
 I run an all in one in production and I get native disk performance, 
 and ESXi virtual disk I/O is faster than with a physical SAN/NAS for 
 the NFS datastore, since the traffic never leaves the host (I get 
 3gb/sec or so usable thruput.)

What is all in one?
I wonder if we crossed wires somehow...  I thought Tiernan said he was
running Nexenta inside of ESXi, where Nexenta exports NFS back to the ESXi
machine, so ESXi will have the benefit of ZFS underneath its storage.

*** This is what we mean by 'all in one'.  ESXi with a single guest (OI say)
running on a small local disk.  It has one or more HBAs passed to it via pci
passthrough with the real data disks attached.  It runs ZFS with a data pool
on those disks, serving the datastore back to ESXi via NFS.  The guests with
their vmdks reside on that datastore.  So, yes, we're talking about the same
thing.
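As a concrete sketch of that wiring (dataset name and the internal vSwitch
addresses are placeholders):

  # on the OI guest: share a dataset to the ESXi host over the internal vSwitch
  zfs create tank/vmstore
  zfs set sharenfs='rw=@10.10.10.0/24,root=@10.10.10.0/24' tank/vmstore
  # on the ESXi host (5.x esxcli syntax): mount it as a datastore
  esxcli storage nfs add --host=10.10.10.2 --share=/tank/vmstore --volume-name=zfs-vmstore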

That's what I used to do.

When I said performance was abysmal, I meant, if you dig right down and
pressure the system for throughput to disk, you've got a Linux or Windows VM
inside of ESX, which is writing to a virtual disk, which ESX is then
wrapping up inside NFS and TCP, talking on the virtual LAN to the ZFS
server, which unwraps the TCP and NFS, pushes it all through the ZFS/Zpool
layer, writing back to the virtual disk that ESX gave it, which is itself a
layer on top of Ext3, before it finally hits disk.  Based purely on CPU and
memory throughput, my VM guests were seeing a max throughput of around 2-3
Gbit/sec.  That's not *horrible* abysmal.  But it's bad to be CPU/memory/bus
limited if you can just eliminate all those extra layers, and do the
virtualization directly inside a system that supports zfs.

*** I guess I don't think 300MB/sec disk I/O aggregate for your guests is
abysmal.  Also, your analysis misses the crucial point that none of us are
talking about the virtualized SAN/NAS writing to vmdks passed to it, but
rather, actual disks via pci passthrough.  As I said, I can get near native
disk I/O this way.  As far as the ESXi vs vbox thing, I think that's a
matter of taste...

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-08 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Jim Klimov
 
 the VM running a ZFS OS enjoys PCI-pass-through, so it gets dedicated
 hardware access to the HBA(s) and harddisks at raw speeds, with no
 extra layers of lags in between. 

Ah.  But even with PCI pass-thru, you're still limited by the virtual LAN 
switch that connects ESXi to the ZFS guest via NFS.  When I connected ESXi and 
a guest this way, obviously your bandwidth between the host & guest is purely
CPU and memory limited.  Because you're not using a real network interface; 
you're just emulating the LAN internally.  I streamed data as fast as I could 
between ESXi and a guest, and found only about 2-3 Gbit.  That was over a year 
ago so I forget precisely how I measured it ... NFS read/write perhaps, or wget 
or something.  I know I didn't use ssh or scp, because those tend to slow down 
network streams quite a bit.  The virtual network is a bottleneck (unless 
you're only using 2 disks, in which case 2-3 Gbit is fine.)

I think THIS is where we're disagreeing:  I'm saying "Only 2-3 Gbit" but I see
Dan's email said "since the traffic never leaves the host (I get 3gb/sec or so
usable thruput.)" and "No offense, but quite a few people are doing exactly
what I describe and it works just fine..."

It would seem we simply have different definitions of "fine" and "abysmal".
;-)


 Also, VMWare does not (AFAIK) use ext3, but their own VMFS which is,
 among other things, cluster-aware (same storage can be shared by
 several VMware hosts).

I didn't know vmfs3 had extensions - I think vmfs3 is based on ext3.  At least, 
all the performance characteristics I've ever observed are on-par with ext3.  
But it makes sense they would extend it in some way.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-08 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Karl Wagner
 
 I am just wondering why you export the ZFS system through NFS?
 I have had much better results (albeit spending more time setting up) using
 iSCSI. I found that performance was much better, 

A couple years ago, I tested and benchmarked both configurations on the same 
system.  I found that the performance was equal both ways (which surprised me 
because I expected NFS to be slower due to FS overhead.)  I cannot say if CPU 
utilization was different - but the IO measurements were the same.  At least, 
indistinguishably different.

Based on those findings, I opted to use NFS for several weak reasons.  If I 
wanted to, I could export NFS to more different systems.  I know everything 
nowadays supports iscsi initiation, but it's not as easy to set up as an NFS
client.  If you want to expand the guest disk, in iscsi, ...  I'm not 
completely sure you *can* expand a zvol, but if you can, you at least have to 
shut everything down, then expand and bring it all back up and then have the 
iscsi initiator expand to occupy the new space.  But in NFS, the client can 
simply expand, no hassle.  
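(For what it's worth, a zvol can be grown online with zfs set volsize; the
iscsi initiator then has to rescan the LUN and grow the partition/filesystem.
A sketch with made-up names:)

  zfs get volsize tank/vms/guest1        # say it reports 20G
  zfs set volsize=40G tank/vms/guest1    # grow the zvol in place
  # then rescan on the initiator side and expand the guest filesystem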

I like being able to look in a filesystem and see the guests listed there as
files, knowing I could, if I wanted to, copy those things out to any type of
storage I wish.  Someday, perhaps I'll want to move some guest VM's over to a 
BTRFS server instead of ZFS.  But it would be more difficult with iscsi.

For what it's worth, in more recent times, I've opted to use iscsi.  And here 
are the reasons:

When you create a guest file in a ZFS filesystem, it doesn't automatically get 
a refreservation.  Which means, if you run out of disk space thanks to 
snapshots and stuff, the guest OS suddenly can't write to disk, and it's a hard 
guest crash/failure.  Yes you can manually set the refreservation, if you're 
clever, but it's easy to get wrong.

If you create a zvol, by default, it has an appropriately sized refreservation 
that guarantees the guest will always be able to write to disk.
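A quick way to see the difference, with hypothetical names (the zvol gets its
reservation for free; the file-backed dataset needs one set by hand):

  zfs create -V 20G tank/vms/guest1
  zfs get volsize,refreservation tank/vms/guest1     # refreservation ~= volsize
  zfs create tank/vmstore/guest2                     # holds the .vdi/.vmdk files
  zfs set refreservation=22G tank/vmstore/guest2     # manual floor, easy to forget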

Although I got the same performance using iscsi or NFS with ESXi...  I did NOT 
get the same result using VirtualBox.

In Virtualbox, if I use a *.vdi file...  The performance is *way* slower than 
using a *.vmdk wrapper for a physical device (zvol).  ( using VBoxManage
internalcommands createrawvmdk )

The only problem with the zvol / vmdk idea in virtualbox is that every reboot 
(or remount) the zvol becomes owned by root again.  So I have to manually chown 
the zvol for each guest each time I restart the host.
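For reference, the zvol/vmdk wrapper and the chown workaround look roughly
like this (paths, VM and user names are made up):

  # wrap an existing zvol as a raw-disk VMDK for VirtualBox
  VBoxManage internalcommands createrawvmdk \
      -filename /vms/guest1.vmdk -rawdisk /dev/zvol/rdsk/tank/vms/guest1
  # after every host reboot the zvol device reverts to root ownership,
  # so hand it back to the user that runs the VM before starting it
  chown vboxuser /dev/zvol/rdsk/tank/vms/guest1
  VBoxHeadless --startvm guest1 &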
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-08 Thread Dan Swartzendruber
On 11/8/2012 12:35 PM, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov

the VM running a ZFS OS enjoys PCI-pass-through, so it gets dedicated
hardware access to the HBA(s) and harddisks at raw speeds, with no
extra layers of lags in between.
 

Ah.  But even with PCI pass-thru, you're still limited by the virtual LAN switch 
that connects ESXi to the ZFS guest via NFS.  When I connected ESXi and a guest 
this way, obviously your bandwidth between the host & guest is purely CPU and
memory limited.  Because you're not using a real network interface; you're just 
emulating the LAN internally.  I streamed data as fast as I could between ESXi and 
a guest, and found only about 2-3 Gbit.  That was over a year ago so I forget 
precisely how I measured it ... NFS read/write perhaps, or wget or something.  I 
know I didn't use ssh or scp, because those tend to slow down network streams quite 
a bit.  The virtual network is a bottleneck (unless you're only using 2 disks, in 
which case 2-3 Gbit is fine.)

I think THIS is where we're disagreeing:  I'm saying "Only 2-3 Gbit" but I see Dan's email said
"since the traffic never leaves the host (I get 3gb/sec or so usable thruput.)" and "No
offense, but quite a few people are doing exactly what I describe and it works just fine..."

It would seem we simply have different definitions of "fine" and "abysmal".
;-)
   
Now you have me totally confused.  How does your setup get data from the 
guest to the OI box?  If thru a wire, if it's gig-e, it's going to be 
1/3-1/2 the speed of the other way.  If you're saying you use 10gig or 
some-such, we're talking about a whole different animal.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-08 Thread Dan Swartzendruber
On 11/8/2012 12:35 PM, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov

the VM running a ZFS OS enjoys PCI-pass-through, so it gets dedicated
hardware access to the HBA(s) and harddisks at raw speeds, with no
extra layers of lags in between.
 

Ah.  But even with PCI pass-thru, you're still limited by the virtual LAN switch 
that connects ESXi to the ZFS guest via NFS.  When I connected ESXi and a guest 
this way, obviously your bandwidth between the host & guest is purely CPU and
memory limited.  Because you're not using a real network interface; you're just 
emulating the LAN internally.  I streamed data as fast as I could between ESXi and 
a guest, and found only about 2-3 Gbit.  That was over a year ago so I forget 
precisely how I measured it ... NFS read/write perhaps, or wget or something.  I 
know I didn't use ssh or scp, because those tend to slow down network streams quite 
a bit.  The virtual network is a bottleneck (unless you're only using 2 disks, in 
which case 2-3 Gbit is fine.)
   
Also, supposedly vmxnet3 interfaces implement a 10gig nic.  I haven't 
tried that recently due to bugginess in the solaris driver...

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-08 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
 From: Dan Swartzendruber [mailto:dswa...@druber.com]
 
 Now you have me totally confused.  How does your setup get data from the
 guest to the OI box?  If thru a wire, if it's gig-e, it's going to be
 1/3-1/2 the speed of the other way.  If you're saying you use 10gig or
 some-such, we're talking about a whole different animal.

Sorry - 

In the old setup, I had ESXi host, with solaris 10 guest, exporting NFS back to 
the host.  So ESXi created the other guests inside the NFS storage pool.  In 
this setup, the bottleneck is the virtual LAN that maxes out around 2-3 Gbit, 
plus TCP/IP and NFS overhead that degrades the usable performance a bit more.

In the new setup, I have openindiana running directly on the hardware (OI is 
the host) and virtualization is managed by VirtualBox.  I would use zones if I 
wanted solaris/OI guests, but it just so happens I want linux & windows guests.
 There is no bottleneck.  My linux guest can read 6Gbit/sec and write 3Gbit/sec 
(I'm using 3 disks mirrored with another 3 disks, each can read/write 1 
Gbit/sec).

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-08 Thread Dan Swartzendruber
On 11/8/2012 1:41 PM, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

From: Dan Swartzendruber [mailto:dswa...@druber.com]

Now you have me totally confused.  How does your setup get data from the
guest to the OI box?  If thru a wire, if it's gig-e, it's going to be
1/3-1/2 the speed of the other way.  If you're saying you use 10gig or
some-such, we're talking about a whole different animal.
 

Sorry -

In the old setup, I had ESXi host, with solaris 10 guest, exporting NFS back to 
the host.  So ESXi created the other guests inside the NFS storage pool.  In 
this setup, the bottleneck is the virtual LAN that maxes out around 2-3 Gbit, 
plus TCP/IP and NFS overhead that degrades the usable performance a bit more.

In the new setup, I have openindiana running directly on the hardware (OI is the 
host) and virtualization is managed by VirtualBox.  I would use zones if I wanted 
solaris/OI guests, but it just so happens I want linux & windows guests.  There
is no bottleneck.  My linux guest can read 6Gbit/sec and write 3Gbit/sec (I'm using 
3 disks mirrored with another 3 disks, each can read/write 1 Gbit/sec).

   
Doesn't vbox have to do some sort of virtual switch?  I think you're
making a distinction that doesn't exist.  What you're saying is that
write performance is marginally better, but read performance is 2x?  You
have me curious enough to try the vmxnet3 driver again (it's been over a
year since the last time - maybe they've fixed the perf bugs...)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-08 Thread Dan Swartzendruber


Wait, my brain caught up with my fingers :)  The guest is running on the
same host, so there is no virtual switch in this setup.  I'm still going
to try vmxnet3 and see what difference it makes...

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-08 Thread Dan Swartzendruber

I have to admit Ned's (what do I call you?) idea is interesting.  I may give
it a try...

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-08 Thread Geoff Nordli
Dan,

If you are going to do the all in one with vbox, you probably want to look
at:

http://sourceforge.net/projects/vboxsvc/

It manages the starting/stopping of vbox vms via smf.

Kudos to Jim Klimov for creating and maintaining it.

Geoff


On Thu, Nov 8, 2012 at 7:32 PM, Dan Swartzendruber dswa...@druber.com wrote:


 I have to admit Ned's (what do I call you?) idea is interesting.  I may give
 it a try...

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-08 Thread Dan Swartzendruber
That looks sweet, thanks!

  _  

From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Geoff Nordli
Sent: Thursday, November 08, 2012 10:51 PM
To: ZFS Discussions
Subject: Re: [zfs-discuss] Dedicated server running ESXi with no RAID card,
ZFS for storage?


Dan,

If you are going to do the all in one with vbox, you probably want to look
at:

http://sourceforge.net/projects/vboxsvc/

It manages the starting/stopping of vbox vms via smf.  

Kudos to Jim Klimov for creating and maintaining it.

Geoff 



On Thu, Nov 8, 2012 at 7:32 PM, Dan Swartzendruber dswa...@druber.com
wrote:



I have to admit Ned's (what do I call you?) idea is interesting.  I may give
it a try...

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-07 Thread Tiernan OToole
Morning all...

I have a Dedicated server in a data center in Germany, and it has 2 3TB
drives, but only software RAID. I have got them to install VMWare ESXi and
so far everything is going ok... I have the 2 drives as standard data
stores...

But I am paranoid... So I installed Nexenta as a VM, gave it a small disk
to boot off and two 1 TB disks on separate physical drives... I have created a
mirror pool, shared it with VMware over NFS, and copied my ISOs to this
share...
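
(A minimal sketch of that setup - device and dataset names here are
illustrative, not the ones actually in use:

  # mirror the two 1 TB virtual disks into one pool
  zpool create tank mirror c1t1d0 c1t2d0
  # dataset for the ESXi datastore, exported over NFS
  zfs create tank/vmstore
  zfs set sharenfs=on tank/vmstore

then point an NFS datastore at <nexenta-ip>:/tank/vmstore from the vSphere
client.)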

So, 2 questions:

1: If you were given the same hardware, what would you do? (A RAID card is
an extra EUR 30 or so a month, which I don't really want to spend, but
could, if need be...)
2: Should I mirror the boot drive for the VM?

Thanks.
-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.geekphotographer.com
www.tiernanotoole.ie
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-07 Thread Sašo Kiselkov
On 11/07/2012 12:39 PM, Tiernan OToole wrote:
 Morning all...
 
 I have a Dedicated server in a data center in Germany, and it has 2 3TB
 drives, but only software RAID. I have got them to install VMWare ESXi and
 so far everything is going ok... I have the 2 drives as standard data
 stores...
 
 But i am paranoid... So, i installed Nexenta as a VM, gave it a small disk
 to boot off and 2 1Tb disks on separate physical drives... I have created a
 mirror pool and shared it with VMWare over NFS and copied my ISOs to this
 share...
 
 So, 2 questions:
 
 1: If you where given the same hardware, what would you do? (RAID card is
 an extra EUR30 or so a month, which i don't really want to spend, but
 could, if needs be...)
 2: should i mirror the boot drive for the VM?

If it were my money, I'd throw ESXi out the window and use Illumos for
the hypervisor as well. You can use KVM for full virtualization and
zones for lightweight VMs. Plus, you'll be able to set up a ZFS mirror on
the data pair and set copies=2 on the rpool if you don't have another
disk to complete the rpool with it. Another possibility, though somewhat
convoluted, is to slice up the disks into two parts: a small OS part and
a large datastore part (e.g. 100GB for the OS, 900GB for the datastore).
Then simply put the OS part in a three-way mirror rpool and the
datastore part in a raidz (plus run installgrub on all disks). That
way, you'll be able to sustain a single-disk failure of any one of the
three disks.
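
A minimal sketch of that sliced-disk layout, with illustrative disk and slice
names (the rpool itself would normally be laid out by the installer and then
extended):

  # grow the installer-created rpool into a three-way mirror of the s0 slices
  zpool attach rpool c0t0d0s0 c0t1d0s0
  zpool attach rpool c0t0d0s0 c0t2d0s0
  # raidz across the large datastore slices
  zpool create tank raidz c0t0d0s1 c0t1d0s1 c0t2d0s1
  # make GRUB bootable from every disk
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t2d0s0
  # or, in the mirror-plus-single-rpool-disk variant, at least keep two copies
  zfs set copies=2 rpool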

--
Saso
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-07 Thread Eugen Leitl
On Wed, Nov 07, 2012 at 12:58:04PM +0100, Sašo Kiselkov wrote:
 On 11/07/2012 12:39 PM, Tiernan OToole wrote:
  Morning all...
  
  I have a Dedicated server in a data center in Germany, and it has 2 3TB
  drives, but only software RAID. I have got them to install VMWare ESXi and
  so far everything is going ok... I have the 2 drives as standard data
  stores...
  
  But i am paranoid... So, i installed Nexenta as a VM, gave it a small disk
  to boot off and 2 1Tb disks on separate physical drives... I have created a
  mirror pool and shared it with VMWare over NFS and copied my ISOs to this
  share...
  
  So, 2 questions:
  
  1: If you where given the same hardware, what would you do? (RAID card is
  an extra EUR30 or so a month, which i don't really want to spend, but
  could, if needs be...)

A RAID card will only hurt you with an all-in-one. Do you have hardware passthrough
with Hetzner (I presume you're with them, from the sound of it) on ESXi?

  2: should i mirror the boot drive for the VM?
 
 If it were my money, I'd throw ESXi out the window and use Illumos for
 the hypervisor as well. You can use KVM for full virtualization and
 zones for light-weight. Plus, you'll be able to set up a ZFS mirror on

I'm very interested, as I'm currently working on an all-in-one with
ESXi (using N40L for prototype and zfs send target, and a Supermicro
ESXi box for production with guests, all booted from USB internally
and zfs snapshot/send source).

Why would you advise against the free ESXi, booted from USB, assuming
your hardware has disk pass-through? The UI is quite friendly, and it's
easy to deploy guests across the network.

 the data pair and set copies=2 on the rpool if you don't have another
 disk to complete the rpool with it. Another possibility, though somewhat
 convoluted, is to slice up the disks into two parts: a small OS part and
 a large datastore part (e.g. 100GB for the OS, 900GB for the datastore).
 Then simply put the OS part in a three-way mirror rpool and the
 datastore part in a raidz (plus do a grubinstall on all disks). That
 way, you'll be able to sustain a single-disk failure of any one of the
 three disks.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-07 Thread Sašo Kiselkov
On 11/07/2012 01:16 PM, Eugen Leitl wrote:
 I'm very interested, as I'm currently working on an all-in-one with
 ESXi (using N40L for prototype and zfs send target, and a Supermicro
 ESXi box for production with guests, all booted from USB internally
 and zfs snapshot/send source).

Well, seeing as Illumos KVM requires an Intel CPU with VT-x and EPT
support, the N40L won't be usable for that test.

 Why would you advise against the free ESXi, booted from USB, assuming
 your hardware has disk pass-through? The UI is quite friendly, and it's
 easy to deploy guests across the network.

Several reasons:

1) Zones - much cheaper VMs than is possible with ESXi and at 100%
   native bare-metal speed.
2) Crossbow integrated straight in (VNICs, virtual switches, IPF, etc.)
   - no need for additional firewall boxes or VMs
3) Tight ZFS integration with the possibility to do VM/zone snapshots,
   replication, etc.
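
To illustrate point 2, a minimal Crossbow sketch (all names made up): an
internal virtual switch plus a VNIC handed to an exclusive-IP zone:

  # create a virtual switch (etherstub) and hang a VNIC off it
  dladm create-etherstub stub0
  dladm create-vnic -l stub0 vnic0
  # give the VNIC to a zone
  zonecfg -z web0 "add net; set physical=vnic0; end"
  # verify
  dladm show-vnic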

In general, for me Illumos is just a tighter package with many features
built-in for which you'd need dedicated hardware in an ESX(i)
deployment. ESX(i) makes sense if you like GUIs for setting things up
and fitting inside neat use-cases, and for that it might be great. But if
you need to step out of line at any point, you're pretty much out of
luck. I'm not saying it's good or bad, I just mean that for me and my
needs, Illumos is a much better hypervisor than VMware.

Cheers,
--
Saso
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-07 Thread Tiernan OToole
Thanks Eugen.

Yeah, I am with Hetzner, but no hardware passthrough... As for ESXi, I am
happy with it, but it's not booting from USB... it's using the disk to boot
from... I am thinking of using a USB key to boot from, though... just need
to figure out how to do this remotely, and whether I should...

Thanks again!

--Tiernan


On Wed, Nov 7, 2012 at 12:16 PM, Eugen Leitl eu...@leitl.org wrote:

 On Wed, Nov 07, 2012 at 12:58:04PM +0100, Sašo Kiselkov wrote:
  On 11/07/2012 12:39 PM, Tiernan OToole wrote:
   Morning all...
  
   I have a Dedicated server in a data center in Germany, and it has 2 3TB
   drives, but only software RAID. I have got them to install VMWare ESXi
 and
   so far everything is going ok... I have the 2 drives as standard data
   stores...
  
   But i am paranoid... So, i installed Nexenta as a VM, gave it a small
 disk
   to boot off and 2 1Tb disks on separate physical drives... I have
 created a
   mirror pool and shared it with VMWare over NFS and copied my ISOs to
 this
   share...
  
   So, 2 questions:
  
   1: If you where given the same hardware, what would you do? (RAID card
 is
   an extra EUR30 or so a month, which i don't really want to spend, but
   could, if needs be...)

 A RAID will only hurt you with all in one. Do you have hardware passthrough
 with Hetzner (I presume you're with them, from the sound of it) on ESXi?

   2: should i mirror the boot drive for the VM?
 
  If it were my money, I'd throw ESXi out the window and use Illumos for
  the hypervisor as well. You can use KVM for full virtualization and
  zones for light-weight. Plus, you'll be able to set up a ZFS mirror on

 I'm very interested, as I'm currently working on an all-in-one with
 ESXi (using N40L for prototype and zfs send target, and a Supermicro
 ESXi box for production with guests, all booted from USB internally
 and zfs snapshot/send source).

 Why would you advise against the free ESXi, booted from USB, assuming
 your hardware has disk pass-through? The UI is quite friendly, and it's
 easy to deploy guests across the network.

  the data pair and set copies=2 on the rpool if you don't have another
  disk to complete the rpool with it. Another possibility, though somewhat
  convoluted, is to slice up the disks into two parts: a small OS part and
  a large datastore part (e.g. 100GB for the OS, 900GB for the datastore).
  Then simply put the OS part in a three-way mirror rpool and the
  datastore part in a raidz (plus do a grubinstall on all disks). That
  way, you'll be able to sustain a single-disk failure of any one of the
  three disks.
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.geekphotographer.com
www.tiernanotoole.ie
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-07 Thread Eugen Leitl
On Wed, Nov 07, 2012 at 01:33:41PM +0100, Sašo Kiselkov wrote:
 On 11/07/2012 01:16 PM, Eugen Leitl wrote:
  I'm very interested, as I'm currently working on an all-in-one with
  ESXi (using N40L for prototype and zfs send target, and a Supermicro
  ESXi box for production with guests, all booted from USB internally
  and zfs snapshot/send source).
 
 Well, seeing as Illumos KVM requires an Intel CPU with VT-x and EPT
 support, the N40L won't be usable for that test.

OK; I know it does support ESXi and disk pass-through, though,
and even the onboard NIC (though I'll add an Intel NIC) with
the HP-patched ESXi.
 
  Why would you advise against the free ESXi, booted from USB, assuming
  your hardware has disk pass-through? The UI is quite friendly, and it's
  easy to deploy guests across the network.
 
 Several reasons:
 
 1) Zones - much cheaper VMs than is possible with ESXi and at 100%
native bare-metal speed.

I use Linux VServer for that, currently. It wouldn't fit this
particular application though, as the needs of the VM guests are
highly heterogeneous, including plenty of Windows (uck, ptui).

 2) Crossbow integrated straight in (VNICs, virtual switches, IPF, etc.)
- no need for additional firewall boxes or VMs

ESXi does this as well, and for this (corporate) application the
firewall is a rented service, administered by the hoster. For my 
personal small-business needs I have a pfSense dual-machine cluster,
with fully redundant hardware and the ability to deal with up to
1 Gbit/s data rates.

 3) Tight ZFS integration with the possibility to do VM/zone snapshots,
replication, etc.

Well, I get this with an NFS export from an all-in-one as well, with
the exception of zones. But I cannot use zones for this anyway.
 
 In general, for me Illumos is just a tighter package with many features
 built-in for which you'd need dedicated hardware in an ESX(i)
 deployment. ESX(i) makes sense if you like GUIs for setting things up

In a corporate environment, I need to create systems which play well
with external customers and can be used by others. GUIs are actually
very useful for less technical co-workers.

 and fitting inside neat use-cases and for that it might be great. But if
 you need to step out of line at any point, you're pretty much out of
 luck. I'm not saying it's good or bad, I just mean that for me and my
 needs, Illumos is a much better hypervisor than VMware.

Thanks!
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-07 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Tiernan OToole
 
 I have a Dedicated server in a data center in Germany, and it has 2 3TB 
 drives,
 but only software RAID. I have got them to install VMWare ESXi and so far
 everything is going ok... I have the 2 drives as standard data stores...

ESXi doesn't do software RAID, so ... what are you talking about?


 But i am paranoid... So, i installed Nexenta as a VM, gave it a small disk to
 boot off and 2 1Tb disks on separate physical drives... I have created a 
 mirror
 pool and shared it with VMWare over NFS and copied my ISOs to this share...

I formerly did exactly the same thing.  Of course performance is abysmal 
because you're booting a guest VM to share storage back to the host where the 
actual VMs run.  Not to mention, there's the startup dependency, which is 
annoying to work around.  But yes, it works.


 1: If you where given the same hardware, what would you do? (RAID card is
 an extra EUR30 or so a month, which i don't really want to spend, but could, 
 if
 needs be...)

I have abandoned ESXi in favor of OpenIndiana or Solaris running as the host, 
with VirtualBox running the guests.  I am SO much happier now.  It takes 
a higher level of expertise than running ESXi, but the results are much better.


 2: should i mirror the boot drive for the VM?

Whenever possible, you should always give more than one storage device to ZFS 
and let it do redundancy of some kind, be it mirror or raidz.
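
For example, turning a single-disk pool into a mirror after the fact is just
an attach (device names below are placeholders):

  # add a second device to an existing single-disk pool -> two-way mirror
  zpool attach rpool c1t0d0s0 c1t1d0s0
  # watch the resilver, then confirm both sides are ONLINE
  zpool status rpool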

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-07 Thread Dan Swartzendruber
On 11/7/2012 10:02 AM, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:


I formerly did exactly the same thing.  Of course performance is abysmal 
because you're booting a guest VM to share storage back to the host where the 
actual VM's run.  Not to mention, there's the startup dependency, which is 
annoying to work around.  But yes it works.
   
I'm curious here.  Your experience is 180 degrees opposite from mine.  I 
run an all-in-one in production and I get native disk performance, and 
ESXi virtual disk I/O is faster than with a physical SAN/NAS for the NFS 
datastore, since the traffic never leaves the host (I get 3gb/sec or so 
usable throughput.)  One essential (IMO) for this is passing an HBA into 
the SAN/NAS VM using VT-d technology.  If you weren't doing this, I'm 
not surprised the performance sucked.  If you were doing this, you were 
obviously doing something wrong.  No offense, but quite a few people are 
doing exactly what I describe and it works just fine - there IS the 
startup dependency, but I can live with that...

1: If you where given the same hardware, what would you do? (RAID card is
an extra EUR30 or so a month, which i don't really want to spend, but could, if
needs be...)
 

I have abandoned ESXi in favor of openindiana or solaris running as the host, 
with virtualbox running the guests.  I am SO much happier now.  But it takes 
a higher level of expertise than running ESXi, but the results are much better.
   

In what respect?  Due to the 'abysmal performance'?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-07 Thread Edmund White
Same thing here. With the right setup, an all-in-one system based on
VMware can be very solid and perform well.
I've documented my process here: http://serverfault.com/a/398579/13325

But I'm surprised at the negative comments about VMware in this context. I
can't see how VirtualBox would run better.

-- 
Edmund White




On 11/7/12 9:45 AM, Dan Swartzendruber dswa...@druber.com wrote:

On 11/7/2012 10:02 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:

 I formerly did exactly the same thing.  Of course performance is
abysmal because you're booting a guest VM to share storage back to the
host where the actual VM's run.  Not to mention, there's the startup
dependency, which is annoying to work around.  But yes it works.

I'm curious here.  Your experience is 180 degrees opposite from mine.  I
run an all in one in production and I get native disk performance, and
ESXi virtual disk I/O is faster than with a physical SAN/NAS for the NFS
datastore, since the traffic never leaves the host (I get 3gb/sec or so
usable thruput.)  One essential (IMO) for this is passing an HBA into
the SAN/NAS VM using vt-d technology.  If you weren't doing this, I'm
not surprised the performance sucked.  If you were doing this, you were
obviously doing something wrong.  No offense, but quite a few people are
doing exactly what I describe and it works just fine - there IS the
startup dependency. but  can live with that...
 1: If you where given the same hardware, what would you do? (RAID card
is
 an extra EUR30 or so a month, which i don't really want to spend, but
could, if
 needs be...)
  
 I have abandoned ESXi in favor of openindiana or solaris running as the
host, with virtualbox running the guests.  I am SO much happier now.
But it takes a higher level of expertise than running ESXi, but the
results are much better.

in what respect?  due to the 'abysmal performance'?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-07 Thread Dan Swartzendruber

On 11/7/2012 10:53 AM, Edmund White wrote:

Same thing here. With the right setup, an all-in-one system based on
VMWare can be very solid and perform well.
I've documented my process here: http://serverfault.com/a/398579/13325

But I'm surprised at the negative comments about VMWare in this context. I
can't see how Virtual Box would run better.
   
Ditto.  I run all my guests headless.  A couple of Windows clients are 
accessed via RDP with no issues.  The rest via SSH/terminal windows.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-07 Thread Jim Klimov
On 2012-11-07 16:02, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tiernan OToole

I have a Dedicated server in a data center in Germany, and it has 2 3TB drives,
but only software RAID. I have got them to install VMWare ESXi and so far
everything is going ok... I have the 2 drives as standard data stores...


ESXi doesn't do software raid, so ... what are you talking about?


I believe he could mean the RAID done by the motherboard/chipset,
utilizing the CPU for its work. Not all OSes even recognize such
mirrors; e.g., on one of my old systems with a motherboard-made
HDD mirror, the Linux live CDs always saw the two separate disks.

There was recently a statement on the list that all RAIDs are in
software. While pedantically true, hardware RAIDs usually have
separate processors (XOR engines and so on) to deal with redundancy,
their own RAM cache (perhaps with its own battery-backed power
source), and HDD bandwidth and I/O paths that are often decoupled
from how the storage is presented to the rest of the host. So there
is some dedicated hardware thrown at hardware RAID, as opposed
to RAID done by an OS or by additional BIOS/chipset features.

As I wished some years back, ZFS running on a RAID HBA card
with its own CPU and RAM, serving ZVOLs as SCSI LUNs to the
hosting computer running any OS, would make for a wonderful product.
Then again, this is already done (possibly cheaper, better and
more highly available) by an external NAS with a ZFS-aware OS inside...

//Jim

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-07 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
 From: Dan Swartzendruber [mailto:dswa...@druber.com]
 
 I'm curious here.  Your experience is 180 degrees opposite from mine.  I
 run an all in one in production and I get native disk performance, and
 ESXi virtual disk I/O is faster than with a physical SAN/NAS for the NFS
 datastore, since the traffic never leaves the host (I get 3gb/sec or so
 usable thruput.)  

What is "all in one"?
I wonder if we crossed wires somehow...  I thought Tiernan said he was running 
Nexenta inside of ESXi, where Nexenta exports NFS back to the ESXi machine, so 
ESXi will have the benefit of ZFS underneath its storage.

That's what I used to do.

When I said performance was abysmal, I meant, if you dig right down and 
pressure the system for throughput to disk, you've got a Linux or Windows VM 
inside of ESX, which is writing to a virtual disk, which ESX is then wrapping 
up inside NFS and TCP, talking on the virtual LAN to the ZFS server, which 
unwraps the TCP and NFS, pushes it all through the ZFS/zpool layer, writing 
back to the virtual disk that ESX gave it, which is itself a layer on top of 
Ext3, before it finally hits disk.  Based purely on CPU and memory throughput, 
my VM guests were seeing a max throughput of around 2-3 Gbit/sec.  That's not 
*horribly* abysmal.  But it's bad to be CPU/memory/bus limited if you can just 
eliminate all those extra layers, and do the virtualization directly inside a 
system that supports ZFS.


  I have abandoned ESXi in favor of openindiana or solaris running as the
 host, with virtualbox running the guests.  I am S much happier now.
 But it takes a higher level of expertise than running ESXi, but the results 
 are
 much better.
 
 in what respect?  due to the 'abysmal performance'?

No - mostly just the fact that I am no longer constrained by ESXi.  In ESXi, 
you have such limited capabilities for monitoring, storage, and how you 
interface with it...  You need a Windows client, you only have a few options in 
terms of guest autostart, and so forth.  If you manage all that in a shell 
script (or whatever) you can literally do anything you want.  Start up one 
guest, then launch something that polls the first guest for the operational 
XMPP interface (or whatever service you happen to care about) before launching 
the second guest, etc.  Obviously you can still do brain-dead timeouts or 
monitoring for the existence of late-boot-cycle services such as vmware-tools 
too, but that's no longer your only option.
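
As a rough illustration only (VM names, the address, and the port being polled
are all made up), such a startup script might look something like:

  #!/bin/sh
  # start the first guest headless
  VBoxManage startvm "dns-server" --type headless
  # wait until the service we actually care about answers (here: DNS on 53)
  while ! nc -z 192.168.1.10 53 2>/dev/null; do
      sleep 5
  done
  # only now bring up the guests that depend on it
  VBoxManage startvm "app-server" --type headless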

Of particular interest, I formerly had ESXi running a guest that was a DHCP and 
DNS server, and everything else had to wait for it.  Now I run DHCP and DNS 
directly inside the OpenIndiana host.  (So I eliminated one VM.)  I am now 
able to connect to guest consoles via VNC or RDP (OK on Mac and Linux), whereas 
with ESXi your only choice is to connect via vSphere from Windows.  

In ESXi, you cannot use a removable USB drive for your rotating backup 
storage.  I was using an eSATA drive, and I needed to reboot the whole system 
every time I rotated backups offsite.  But with OpenIndiana as the host, I can 
add/remove removable storage, perform my zpool imports/exports, etc., all 
without any rebooting.
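
A sketch of that rotation (the pool name is illustrative):

  # before pulling the USB/eSATA backup drive
  zpool export backup
  # after plugging in the next drive in the rotation
  zpool import backup
  zpool status backup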

Stuff like that.  I could go on, but it basically comes down to:  with 
OpenIndiana, you can do a lot more than you can with ESXi, because it's a 
complete OS.  You simply have more freedom, better performance, less 
maintenance, less complexity.  IMHO, it's better in every way.

I say less complexity, but maybe not.  It depends.  I have greater complexity 
in the host OS, but I have less confusion and fewer VM dependencies, so to me 
that's less complexity.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-07 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
 From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
  
 Stuff like that.  I could go on, but it basically comes down to:  With
 openindiana, you can do a lot more than you can with ESXi.  Because it's a
 complete OS.  You simply have more freedom, better performance, less
 maintenance, less complexity.  IMHO, it's better in every way.

Oh - I just thought of an important one - make that two, three...

On ESXi, you can't run ipmitool.  Which means, if you're configuring IPMI, 
you have to do it at power-on, by hitting the BIOS key, and then you have to 
type in your encryption key by hand (20 hex chars).  Whereas, with a real OS, 
you run ipmitool and paste it at the SSH prompt.  (Even if you enable the SSH 
prompt on ESXi, you won't get ipmitool running there.)
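
For example (channel numbers and addresses vary by board, so treat this as a
sketch):

  # inspect the BMC's LAN configuration on channel 1
  ipmitool lan print 1
  # point the BMC at a static address
  ipmitool lan set 1 ipsrc static
  ipmitool lan set 1 ipaddr 10.0.0.50
  # quick health checks without touching the BIOS
  ipmitool chassis status
  ipmitool sel list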

I have two systems that have 3ware HBAs, and I have some systems with Dell 
PERC.  

Yes, you can, with the help of Dell, install OMSA to get the web interface to 
manage the PERC.  But it's a pain, and there is no equivalent option for most 
HBAs.  Specifically, on my systems with 3ware, I simply installed the Solaris 
3ware utility to manage the HBA.  Which would not be possible on ESXi.  This is 
important because the systems are in a remote datacenter, and it's the only way 
to check for red blinking lights on the hard drives.  ;-)
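
With the 3ware CLI, that boils down to something like (controller/port numbers
depend on the box):

  # show controllers, units and drive status - the remote equivalent of
  # walking up to the rack and looking for red lights
  tw_cli show
  tw_cli /c0 show
  # details for a single drive
  tw_cli /c0/p0 show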

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss