From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
Also, if you have an NFS datastore which is not available at the time of ESX bootup, then the NFS datastore doesn't come online, and there seems to be no way of telling
On 9 Dec 2010, at 13:41, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
Also, if you have an NFS datastore which is not available at the time of ESX bootup, then the NFS datastore doesn't
For anyone who cares:
I created an ESXi machine. Installed two guest (CentOS) machines and vmware-tools. Connected them to each other via only a virtual switch. Used rsh to transfer large quantities of data between the two guests, unencrypted, uncompressed. Have found that the ESXi virtual switch
On Dec 8, 2010, at 11:41 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
For anyone who cares:
I created an ESXi machine. Installed two guest (CentOS) machines and vmware-tools. Connected them to each other via only a virtual switch. Used rsh to transfer
From: Saxon, Will [mailto:will.sa...@sage.com]
What I am wondering is whether this is really worth it. Are you planning to share the storage out to other VM hosts, or are all the VMs running on the host using the 'local' storage? I know we like ZFS vs. traditional RAID and volume
Suppose you wanted to boot from an iSCSI target, just to get a ZFS server up under VMware. Then you could pass through the entire local storage bus(es) to the ZFS server, and you could create other VMs whose storage is backed by the ZFS server on local disk.
One way you could do this is to buy
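The loop described above can be sketched as follows; this is a rough outline, not a tested recipe, and the pool layout, device names, subnet, and datastore label are all invented for illustration:

```shell
# Inside the ZFS guest (e.g. Nexenta), after the SAS controller has been
# passed through -- c2t*d0 are hypothetical device names:
zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0

# Carve out a filesystem for VM storage and export it over NFS,
# restricted to the (invented) storage subnet:
zfs create tank/vmstore
zfs set sharenfs='rw=@10.0.0.0/24,root=@10.0.0.0/24' tank/vmstore

# Back on the ESXi 4.x host, mount it as a datastore:
esxcfg-nas -a -o 10.0.0.10 -s /tank/vmstore zfs-vmstore
```

Other guests' VMDKs would then live on `zfs-vmstore`, backed by the same local disks the ZFS guest controls.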
On 19 Nov 2010, at 03:53, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
SAS Controller and all ZFS Disks/Pools are passed through to Nexenta to have full ZFS-Disk control like on real hardware.
This is precisely the thing I'm interested in.
hmmm
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide

Disabling the ZIL (Don't)
Caution: Disabling the ZIL on an NFS server can lead to client-side corruption. The ZFS pool integrity itself is not compromised by this tuning.

so especially with NFS I won't
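For concreteness, on builds with the per-dataset `sync` property (older releases only had the global `zil_disable` tunable), the trade-off the guide warns about is toggled like this; `tank/nfs` is an invented dataset name:

```shell
# Reversible, per-dataset -- with sync=disabled an NFS client can be told
# its writes are stable before they reach disk, hence the client-side
# corruption risk on a server crash (the pool itself stays consistent):
zfs set sync=disabled tank/nfs

# Inspect the current setting:
zfs get sync tank/nfs

# Restore normal synchronous semantics:
zfs set sync=standard tank/nfs
```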
From: Saxon, Will [mailto:will.sa...@sage.com]
In order to do this, you need to configure passthrough for the device at the host level (host - configuration - hardware - advanced settings). This
Awesome. :-)
The only problem is that once a device is configured to pass-thru to the guest VM,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of VO
How to accomplish ESXi 4 raw device mapping with SATA at least:
http://www.vm-help.com/forum/viewtopic.php?f=14&t=1025
It says:
You can pass-thru individual disks, if you have SCSI, but
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of VO
This sounds interesting as I have been thinking something similar but never implemented it because all the eggs would be in the same basket. If you don't mind me asking for more
From: Gil Vidals [mailto:gvid...@gmail.com]
connected to my ESXi hosts using 1 gigabit switches and network cards. The speed is very good, as can be seen by IOZONE tests:

    KB  reclen   write  rewrite    read   reread
512000      32   71789    76155   94382   101022
512000    1024   75104
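A run producing numbers of that shape could look roughly like this (one pass per record size; the file size in KB matches the table, and the datastore path is invented):

```shell
# write/rewrite (-i 0) and read/reread (-i 1) of a 512000 KB file,
# first with 32 KB records, then with 1024 KB records:
iozone -i 0 -i 1 -s 512000k -r 32k   -f /vmfs/volumes/nfs-ds/iozone.tmp
iozone -i 0 -i 1 -s 512000k -r 1024k -f /vmfs/volumes/nfs-ds/iozone.tmp
```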
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Günther
Disabling the ZIL (Don't)
This is relative. There are indeed situations where it's acceptable to disable the ZIL. To make your choice, you need to understand a few things...
On 19 Nov 2010, at 15:04, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Günther
Disabling the ZIL (Don't)
This is relative. There are indeed situations where it's acceptable to disable the ZIL. To
I have the same problem with my 2U Supermicro server (24x 2.5", connected via 6x mini-SAS 8087) and no additional mounting possibilities for 2.5" or 3.5" drives.

On those machines I use one SAS port (4 drives) of an old Adaptec 3805 (I have used them in my pre-ZFS times) to build a RAID-1 +
On Fri, 19 Nov 2010 07:16:20 PST, Günther wrote:
I have the same problem with my 2U Supermicro server (24x 2.5", connected via 6x mini-SAS 8087) and no additional mounting possibilities for 2.5" or 3.5" drives.

On those machines I use one SAS port (4 drives) of an old Adaptec 3805 (I have
-Original Message-
From: Edward Ned Harvey [mailto:sh...@nedharvey.com]
Sent: Friday, November 19, 2010 8:03 AM
To: Saxon, Will; 'Günther'; zfs-discuss@opensolaris.org
Subject: RE: [zfs-discuss] Faster than 1G Ether... ESX to ZFS
From: Saxon, Will [mailto:will.sa...@sage.com]
Also, most of the big name vendors have a USB or SD
option for booting ESXi. I believe this is the 'ESXi
Embedded' flavor vs. the typical 'ESXi Installable'
that we're used to. I don't think it's a bad idea at
all. I've got a not-quite-production system I'm
booting off USB right now, and
I can confirm that, from the fileserver and storage point of view, I had more network connections in use.
Bruno
On Wed, 17 Nov 2010 22:00:21 +0200, Pasi Kärkkäinen pa...@iki.fi wrote:
On Wed, Nov 17, 2010 at 10:14:10AM +, Bruno Sousa wrote:
Hi all,
Let me tell you all that the MC/S
On Wed, 17 Nov 2010 16:31:32 -0500, Ross Walker rswwal...@gmail.com
wrote:
On Wed, Nov 17, 2010 at 3:00 PM, Pasi Kärkkäinen pa...@iki.fi wrote:
On Wed, Nov 17, 2010 at 10:14:10AM +, Bruno Sousa wrote:
Hi all,
Let me tell you all that the MC/S *does* make a difference...I had a
Up to last year we had 4 ESXi 4 servers, each with its own NFS storage server (NexentaStor/Core + napp-it), directly connected via 10GbE CX4. The second CX4 storage port was connected to our SAN (HP 2910 10GbE switch) for backups. The second port of each ESXi server was connected (tagged
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
SAS Controller and all ZFS Disks/Pools are passed through to Nexenta to have full ZFS-Disk control like on real hardware.
This is precisely the thing I'm interested in. How do you do that? On my ESXi (test) server, I have a
-Original Message-
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
Sent: 19 November 2010 09:54
To: 'Günther'; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS
From
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of
Edward Ned Harvey
Sent: Thursday, November 18, 2010 9:54 PM
To: 'Günther'; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Faster than 1G Ether... ESX
I haven't seen too much talk about the actual file read and write speeds. I recently converted from using OpenFiler, which seems defunct based on their lack of releases, to using NexentaStor. The NexentaStor server is connected to my ESXi hosts using 1 gigabit switches and network cards. The speed
Hi all,
Let me tell you all that the MC/S *does* make a difference...I had a Windows fileserver using an iSCSI connection to a host running snv_134 with an average speed of 20-35 MB/s...After the upgrade to snv_151a (Solaris 11 Express) this same fileserver got a performance boost and now has
On Wed, Nov 17, 2010 at 10:14:10AM +, Bruno Sousa wrote:
Hi all,
Let me tell you all that the MC/S *does* make a difference...I had a Windows fileserver using an iSCSI connection to a host running snv_134 with an average speed of 20-35 MB/s...After the upgrade to snv_151a
On Wed, Nov 17, 2010 at 3:00 PM, Pasi Kärkkäinen pa...@iki.fi wrote:
On Wed, Nov 17, 2010 at 10:14:10AM +, Bruno Sousa wrote:
Hi all,
Let me tell you all that the MC/S *does* make a difference...I had a Windows fileserver using an iSCSI connection to a host running snv_134
tc == Tim Cook t...@cook.ms writes:
tc Channeling Ethernet will not make it any faster. Each
tc individual connection will be limited to 1gbit. iSCSI with
tc mpxio may work, nfs will not.
well...probably you will run into this problem, but it's not
necessarily totally unsolved.
I
On Wed, Nov 17, 2010 at 7:56 AM, Miles Nordin car...@ivy.net wrote:
tc == Tim Cook t...@cook.ms writes:
tc Channeling Ethernet will not make it any faster. Each
tc individual connection will be limited to 1gbit. iSCSI with
tc mpxio may work, nfs will not.
well...probably you
On Nov 16, 2010, at 4:04 PM, Tim Cook t...@cook.ms wrote:
On Wed, Nov 17, 2010 at 7:56 AM, Miles Nordin car...@ivy.net wrote:
tc == Tim Cook t...@cook.ms writes:
tc Channeling Ethernet will not make it any faster. Each
tc individual connection will be limited to 1gbit. iSCSI
On Nov 16, 2010, at 6:37 PM, Ross Walker wrote:
On Nov 16, 2010, at 4:04 PM, Tim Cook t...@cook.ms wrote:
AFAIK, ESX/i doesn't support L4 hash, so that's a non-starter.
For iSCSI one just needs to have a second (third or fourth...) iSCSI session on a different IP to the target and run
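On ESX(i) 4.x that would look roughly like the following; the vmk/vmhba names and device id are placeholders, and round-robin path selection is assumed to be what spreads I/O across the sessions:

```shell
# Bind a second vmkernel NIC to the software iSCSI adapter,
# giving the target a second session from a different IP:
esxcli swiscsi nic add -n vmk1 -d vmhba33

# Confirm both vmknics are bound:
esxcli swiscsi nic list -d vmhba33

# Put the LUN on round-robin so both paths carry I/O
# (naa.6000... stands in for the real device id):
esxcli nmp device setpolicy --device naa.6000... --psp VMW_PSP_RR
```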
On Nov 16, 2010, at 7:49 PM, Jim Dunham james.dun...@oracle.com wrote:
On Nov 16, 2010, at 6:37 PM, Ross Walker wrote:
On Nov 16, 2010, at 4:04 PM, Tim Cook t...@cook.ms wrote:
AFAIK, esx/i doesn't support L4 hash, so that's a non-starter.
For iSCSI one just needs to have a second (third or
: zfs-discuss-boun...@opensolaris.org
Date: Tue, 16 Nov 2010 22:05:05
To: Jim Dunham james.dun...@oracle.com
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS
Hi,
we have the same issue, ESX(i) and Solaris on the storage side.
Link aggregation does not work with ESX(i) (I tried a lot with that for NFS); when you want to use more than one 1G connection you must configure one network or VLAN, and at least one share, for each connection. But this is also limited
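In other words, one vmkernel/IP pair per share. A minimal sketch, assuming a ZFS server reachable at 10.0.1.10 and 10.0.2.10 on two subnets, with invented share and datastore names:

```shell
# On the ZFS server: one filesystem per 1G link
zfs create tank/esx1
zfs create tank/esx2
zfs set sharenfs=on tank/esx1
zfs set sharenfs=on tank/esx2

# On the ESXi host: mount each share via a different subnet, so each
# datastore's traffic rides its own 1G connection:
esxcfg-nas -a -o 10.0.1.10 -s /tank/esx1 ds-esx1
esxcfg-nas -a -o 10.0.2.10 -s /tank/esx2 ds-esx2
```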
Edward,
I recently installed a 7410 cluster, which had added Fiber Channel HBAs.
I know the site also has Blade 6000s running VMware, but no idea if they
were planning to run fiber to those blades (or even had the option to do so).
But perhaps FC would be an option for you?
Mark
On Nov 12,
On Fri, Nov 12, 2010 at 10:03:08AM -0500, Edward Ned Harvey wrote:
Since combining ZFS storage backend, via NFS or iSCSI, with ESXi heads, I'm in love. But for one thing: the interconnect between the head and storage.
1G Ether is so cheap, but not as fast as desired. 10G Ether is fast
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 11/12/2010 10:03 AM, Edward Ned Harvey wrote:
Since combining ZFS storage backend, via NFS or iSCSI, with ESXi heads, I'm in love. But for one thing: the interconnect between the head and storage.
1G Ether is so cheap, but not as fast as
Channeling Ethernet will not make it any faster. Each individual connection
will be limited to 1gbit. iSCSI with mpxio may work, nfs will not.
On Nov 12, 2010 9:26 AM, Eugen Leitl eu...@leitl.org wrote:
On Fri, Nov 12, 2010 at 10:03:08AM -0500, Edward Ned Harvey wrote:
Since combining ZFS
On Fri, Nov 12, 2010 at 09:34:48AM -0600, Tim Cook wrote:
Channeling Ethernet will not make it any faster. Each individual connection
will be limited to 1gbit. iSCSI with mpxio may work, nfs will not.
Would NFSv4 as a cluster system over multiple boxes work?
(This question is not limited to
To: Edward Ned Harvey
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 11/12/2010 10:03 AM, Edward Ned Harvey wrote:
Since combining ZFS storage backend, via NFS or iSCSI, with ESXi heads, I'm in love
Check InfiniBand; the guys at anandtech/zfsbuild.com used that as well.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On 11/13/10 04:03 AM, Edward Ned Harvey wrote:
Since combining ZFS storage backend, via NFS or iSCSI, with ESXi heads, I'm in love. But for one thing: the interconnect between the head and storage.
1G Ether is so cheap, but not as fast as desired. 10G ether is fast
enough, but it’s overkill