That looks sweet, thanks!
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Geoff Nordli
Sent: Thursday, November 08, 2012 10:51 PM
To: ZFS Discussions
Subject: Re: [zfs-discuss] Dedicated server running ESXi with no RAID card,
ZFS for s
Dan,
If you are going to do the all in one with vbox, you probably want to look
at:
http://sourceforge.net/projects/vboxsvc/
It manages the starting/stopping of vbox vms via smf.
Kudos to Jim Klimov for creating and maintaining it.
Geoff
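For a rough idea of what vboxsvc gives you: once its SMF manifest is imported, each VM becomes an SMF instance that can be enabled, disabled, and auto-restarted like any other service. The manifest path and the FMRI below are assumptions for illustration, not vboxsvc's documented names; "myvm" is a placeholder for your VM's name.

```shell
# Sketch only: manifest path and service FMRI are assumptions,
# check vboxsvc's own README for the real names.

# Import the vboxsvc manifest so SMF knows about the service
svccfg import /path/to/vbox-svc.xml

# Enable an SMF instance for one VirtualBox VM; SMF will start it
# at boot and restart it if it dies
svcadm enable svc:/site/xvm/vbox:myvm

# Inspect the state of the managed VM
svcs -l svc:/site/xvm/vbox:myvm
```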
On Thu, Nov 8, 2012 at 7:32 PM, Dan Swartzendruber wrote:
I have to admit Ned's (what do I call you?) idea is interesting. I may give
it a try...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Wait, my brain caught up with my fingers :) The guest is running on the
same host, so there is no virtual switch in this setup. I'm still going
to try the vmxnet3 and see what difference it makes...
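One way to actually measure the difference between the virtual NICs is a quick iperf run between the guest and the ZFS VM (a sketch; the hostname is a placeholder and iperf has to be installed on both ends):

```shell
# On the OI/ZFS VM, start an iperf server:
iperf -s

# On the guest, run a 30-second TCP throughput test against it,
# once with e1000 and once with vmxnet3:
iperf -c zfsbox -t 30
```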
On 11/8/2012 1:41 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
> From: Dan Swartzendruber [mailto:dswa...@druber.com]
>
> Now you have me totally confused. How does your setup get data from the
> guest to the OI box? If thru a wire, if it's gig-e, it's going to be
> 1/3-1/2 the speed of the other way. If you're saying you use 10gig or
> some-such, we're ta
On 11/8/2012 12:35 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
the VM running "a ZFS OS" enjoys PCI-pass-through, so it gets dedicated
hardware access to the HBA(s) and harddisks at raw speeds, with no
extra layers of lags in between.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Karl Wagner
>
> I am just wondering why you export the ZFS system through NFS?
> I have had much better results (albeit spending more time setting up) using
> iSCSI. I found that performance wa
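For comparison, the iSCSI route Karl describes usually means exporting a zvol through COMSTAR on the illumos/OpenIndiana box. A sketch, with placeholder pool, dataset, and size (the LU GUID comes from the create-lu output):

```shell
# Create a zvol to back the LUN (names/sizes are placeholders)
zfs create -V 100g tank/esxi-lun0

# Enable the STMF framework and the iSCSI target service
svcadm enable -r svc:/system/stmf:default
svcadm enable -r svc:/network/iscsi/target:default

# Register the zvol as a logical unit, expose it, and create a target
stmfadm create-lu /dev/zvol/rdsk/tank/esxi-lun0
stmfadm add-view <LU-GUID-from-previous-command>
itadm create-target
```

ESXi then attaches to the target with its software iSCSI initiator instead of mounting an NFS export.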
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> the VM running "a ZFS OS" enjoys PCI-pass-through, so it gets dedicated
> hardware access to the HBA(s) and harddisks at raw speeds, with no
> extra layers of lags in between.
-Original Message-
From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
[mailto:opensolarisisdeadlongliveopensola...@nedharvey.com]
Sent: Wednesday, November 07, 2012 11:44 PM
To: Dan Swartzendruber; Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
Cc: Tiernan OToole;
On 2012-11-08 4:43, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
> When I said performance was abysmal, I meant, if you dig right down and
> pressure the system for throughput to disk, you've got a Linux or Windows
> VM inside of ESX, which is writing to a virtual disk, which ESX is then
> wrapping up inside NFS and TCP, talking on the virtual LAN to the ZFS
> server, which unwraps the TCP and NFS,
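To "pressure the system for throughput to disk" in the way Ned describes, a crude in-guest sequential test can be compared against numbers taken on the ZFS box directly (a sketch; sizes and the file path are arbitrary):

```shell
# Inside the Linux guest, write 4 GiB through the virtual disk,
# forcing data to be flushed so the NFS/TCP layering is exercised
dd if=/dev/zero of=/tmp/ddtest bs=1M count=4096 conv=fdatasync

# Read it back for the other direction
dd if=/tmp/ddtest of=/dev/null bs=1M
```

The gap between this and a local run on the pool shows roughly what the virtual-disk-over-NFS path costs.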