I'm likely to be building a ZFS server to act as NFS shared storage for a
couple of VMware ESX servers. Does anybody have experience of using ZFS with
VMware like this, and can anybody confirm the best zpool configuration?
The server will have 16x 500GB SATA drives, with dual Opteron CPU's and
the procedure is as follows:
1. mkdir /tank
2. touch /tank/a
3. zpool create tank c0d0p3
this command gives the following error message:
cannot mount '/tank': directory is not empty;
4. reboot.
then the OS can only be logged into from the console. Is this a bug?
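For reference, the pool in step 3 is actually created; only the mount fails because /tank already contains the file from step 2. A minimal recovery sketch, reusing the names above:

rm /tank/a          # empty the mountpoint directory again
zfs mount tank      # the pool's root dataset should now mount cleanly
# or, overlay-mount on top of the non-empty directory:
zfs mount -O tank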
Hi,
why are you creating a file in the directory tank?
Matthew
2008/6/27 wan_jm [EMAIL PROTECTED]:
the procedure is as follows:
1. mkdir /tank
2. touch /tank/a
3. zpool create tank c0d0p3
this command gives the following error message:
cannot mount '/tank': directory is not empty;
4.
[EMAIL PROTECTED] wrote on 06/27/2008 03:39:41 AM:
I'm likely to be building a ZFS server to act as NFS shared storage
for a couple of VMware ESX servers. Does anybody have experience of
using ZFS with VMware like this, and can anybody confirm the best
zpool configuration?
The server
On Fri, 27 Jun 2008, wan_jm wrote:
the procedure is as follows:
1. mkdir /tank
2. touch /tank/a
3. zpool create tank c0d0p3
this command gives the following error message:
cannot mount '/tank': directory is not empty;
4. reboot.
then the OS can only be logged into from the console. Is this a bug?
On Fri, Jun 27, 2008 at 07:58:42AM -0500, [EMAIL PROTECTED] wrote:
Yes, two caveats though. ZFS is a COW filesystem, currently with no
defrag. Placing a heavy write load (which VMware is) on this type of storage
(especially, but not only, if you are planning on using snapshots) you will
tend to see
Thanks both, very good pieces of advice there.
Wonko, I was about to question how much difference the iRAM will actually make
with it being on a single SATA connection, but after googling, for £70 + RAM
it's worth buying just as an experiment.
I'm really not interested in iSCSI, it might be
Bleh, just found out the i-RAM is 5V PCI only. Won't work on PCI-X slots, which
puts that out of the question for the motherboard I'm using. Vmetro have a 2GB
PCI-E card out, but it's for OEMs only:
http://www.vmetro.com/category4304.html, and I don't have any space in this
server to mount a
Brian Hechinger wrote:
On Fri, Jun 27, 2008 at 07:58:42AM -0500, [EMAIL PROTECTED] wrote:
Yes, two caveats though. ZFS is a COW filesystem, currently with no
defrag. Placing a heavy write load (which VMware is) on this type of storage
(especially, but not only, if you are planning on using snapshots)
Hi,
On OpenSolaris 2008.05 (updated to build 91) with two external USB disks,
output of format -e:
AVAILABLE DISK SELECTIONS:
0. c5d0
/[EMAIL PROTECTED],0/[EMAIL PROTECTED],1/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
1. c7t0d0
/[EMAIL PROTECTED],0/pci10cf,[EMAIL PROTECTED],7/[EMAIL PROTECTED]/[EMAIL
Hi Peter and all,
after a couple of reboots of PowerPath and reconfiguring the LUN
exported by the Clariion, everything now seems to be working fine.
Following the output of zpool status:
---
machine# zpool status
pool: tank
state: ONLINE
scrub: none requested
config:
NAME
Thanks to Brandon High [EMAIL PROTECTED], and I am sorry for my thickness
in assuming I could not export a faulted pool. After the failure, I
exported and then imported the pool with the zpool command and it came back.
Thanks!
Weldon
If memory serves me right, sometime around Wednesday, Weldon S Godfrey
On Fri, Jun 27, 2008 at 08:13:14AM -0700, Ross wrote:
Bleh, just found out the i-RAM is 5V PCI only. Won't work on PCI-X
slots, which puts that out of the question for the motherboard I'm
using. Vmetro have a 2GB PCI-E card out, but it's for OEMs only:
http://www.vmetro.com/category4304.html,
Hi all,
based on comments on this list, I bought a new server with 8 SATA bays
and an AOC-SAT2-MV8 SATA controller. I then fired up a JumpStart install of
Solaris 10 5/08 on the server. The install runs through perfectly, with an
SVM mirror of / and swap on the first two disks. But during the first
boot, I
Hello Blake,
did you end up purchasing this? We're considering buying a SilMech K501
as our new fileserver with a pair of Areca controllers in JBOD mode. Any
experience would be appreciated.
Thanks,
Christophe Dupre
Blake Irvin wrote:
The only supported controller I've found is the Areca
We are currently using the 2-port Areca card SilMech offers for boot, and 2 of
the Supermicro/Marvell cards for our array. Silicon Mechanics gave us great
support and burn-in testing for Solaris 10. Talk to a sales rep there and I
don't think you will be disappointed.
cheers,
Blake
This
On Fri, Jun 27, 2008 at 2:47 PM, Christophe Dupre [EMAIL PROTECTED]
wrote:
Hi all,
based on comments on this list, I bought a new server with 8 SATA bays
and an AOC-SAT2-MV8 SATA controller. I then fired up a JumpStart install of
Solaris 10 5/08 on the server. The install runs through perfectly, with a
On Fri, Jun 27, 2008 at 11:50 AM, Albert Chin
[EMAIL PROTECTED] wrote:
On Fri, Jun 27, 2008 at 08:13:14AM -0700, Ross wrote:
Bleh, just found out the i-RAM is 5V PCI only. Won't work on PCI-X
slots, which puts that out of the question for the motherboard I'm
using. Vmetro have a 2GB PCI-E
On Fri, Jun 27, 2008 at 07:22:48AM -0700, Ross wrote:
Thanks both, very good pieces of advice there.
Wonko, I was about to question how much difference the iRAM will actually
make with it being on a single SATA connection, but after googling, for £70
+ RAM it's worth buying just as an
On Fri, Jun 27, 2008 at 08:32:23AM -0700, Richard Elling wrote:
You will want mirrored slogs.
Yes, always an excellent recommendation.
Note that some companies, Crucial and STEC come to mind,
sell SSDs which fit in disk form factors. IIRC, Mac Book Air and EMC
use STEC's SSDs.
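Purely as an illustration (the pool and device names here are made up, not from this thread), adding a mirrored slog to an existing pool looks roughly like:

zpool add tank log mirror c2t0d0 c2t1d0   # attach a mirrored log vdev
zpool status tank                         # the log mirror shows up as its own vdev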
Hi all ;
There are two things that some customers are constantly asking for about ZFS.
Active-active clustering support.
Ability to mount snapshots somewhere else. [This doesn't look easy; perhaps
a proxy kind of setup?]
Any hope for these features?
Mertol
Unfortunately, we need to be careful here with our terminology.
SSD used to refer strictly to standard DRAM backed with a battery (and,
maybe some sort of a fancy enclosure with a hard drive to write all DRAM
data to after a power outage). It now encompasses the newer Flash-based
devices. My
I made the mistake of upgrading the ZFS version on a USB drive I have while on
my laptop (Solaris SXDE 4), and I really need to access this from a Solaris 10
U5 system. Is there any way around this besides a full backup and restore?
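As a quick sanity check (just the standard version commands, nothing specific to this setup), each box can report what it supports; if the pool version on the USB drive is newer than what U5 understands, it simply won't import there:

zpool upgrade -v   # list the on-disk pool versions this release supports
zpool import       # run on the U5 box: is the USB pool visible and importable?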
Hi,
there is a mirrored pool on an i86 system with Solaris 10. One disk is
internal SATA via a PCI link, the other a USB mobile disk. The OS does not boot
when the USB disk is plugged in. It is necessary to unplug it, then the system
boots, then plug it back in, and then it works OK.
What the hell is that?
Regards,
Re-reading your question, it occurs to me that you might be referring
to the ability to mount a snapshot on *another server*?
There's no built-in feature in zfs for that, but a workaround would be
to do what I just detailed, with the additional step of exporting that
cloned snapshot to the
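A rough sketch of that workaround, with invented dataset and snapshot names:

zfs snapshot tank/vm@before-change            # take (or pick) the snapshot
zfs clone tank/vm@before-change tank/vm-copy  # writable clone of it
zfs set sharenfs=on tank/vm-copy              # share the clone so the other server can mount it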
Sorry about this. I just couldn't resist.
Andrius wrote:
Solaris 10 does not boat
But it does ship!
wink
--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
On Sat, Jun 28, 2008 at 12:58:31AM +0300, Mertol Ozyoney wrote:
Ability to mount snapshots somewhere else. [This doesn't look easy; perhaps
a proxy kind of setup?]
Snapshots are available through .zfs/snapshot/snapshot-name.
Snapshots are read-only. They can be cloned to create read-write
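A short sketch of both points, using hypothetical names:

ls /tank/data/.zfs/snapshot/nightly        # read-only view of the snapshot's contents
zfs clone tank/data@nightly tank/data-rw   # read-write clone of the same snapshot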
Andrius wrote:
Hi,
there is a mirrored pool on an i86 system with Solaris 10. One disk is
internal SATA via a PCI link, the other a USB mobile disk. The OS does not boot
when the USB disk is plugged in. It is necessary to unplug it, then the system
boots, then plug it back in, and then it works OK.
What the hell is
Agreed -- inserting the USB drive can sometimes cause the controller
targets to shift (for example, the boot device might now be c1d0 instead
of c0d0) -- that will cause problems...
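One quick way to check whether that is what happened (ordinary tools, nothing specific to this machine) is to run these with and without the USB disk attached and compare the device names:

format                 # lists the disks and their current cXtYdZ names
zpool status           # shows which device names the pool was last seen on
ls -l /dev/dsk/c0d0s0  # the symlink reveals the physical path behind the name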
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of James C.
McPherson
Sent:
I just gave it a try.
In my opinion, if the directory is not empty, zpool should not create the pool.
Let me give a scenario: some day our software runs at a customer site, and
one of the customer's engineers does the above operation and it fails, but he doesn't do
anything more. A few days later, the OS
I don't know why that failed zfs mount stops all the other network services.
Maybe it is not a bug in ZFS; it must be a bug in SMF, in my opinion. Do you
think so?
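For anyone debugging the same thing, a sketch of how to see what SMF did (standard commands only): a failed mount typically puts svc:/system/filesystem/local into maintenance, and the network services that depend on it never start.

svcs -xv                               # which services are in maintenance, and why
svcs -d network/ssh                    # ssh's dependencies; filesystem/local is usually in the chain
svcadm clear system/filesystem/local   # after fixing the mount, clear the maintenance state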
On Fri, Jun 27, 2008 at 6:30 PM, wan_jm [EMAIL PROTECTED] wrote:
I just gave it a try.
In my opinion, if the directory is not empty, zpool should not create the
pool.
Let me give a scenario: some day our software runs at a customer site, and
one of the customer's engineers does the above
On Fri, Jun 27, 2008 at 03:02:43PM -0700, Erik Trimble wrote:
Unfortunately, we need to be careful here with our terminology.
You are completely and 100% correct, Erik. I've been throwing the
term SSD around, but in the context of what I'm thinking, by SSD I
mean this new-fangled flash based
On Fri, Jun 27, 2008 at 07:04:58PM -0400, Dale Ghent wrote:
Re-reading your question, it occurs to me that you might be referring
to the ability to mount a snapshot on *another server*?
Yes, I believe that's what he's talking about. He's thinking the way a
clustered filesystem would work.
Hello Mike,
Wednesday, June 25, 2008, 9:36:16 PM, you wrote:
MG On Wed, Jun 25, 2008 at 3:09 PM, Robert Milkowski [EMAIL PROTECTED] wrote:
Well, I've seen core dumps bigger than 10GB (even without ZFS)... :)
MG Was that the size in the dump device or the size in /var/crash? If it
MG was the
Here is what I found out.
AVAILABLE DISK SELECTIONS:
0. c5t0d0 DEFAULT cyl 4424 alt 2 hd 255 sec 63
/[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL
PROTECTED]/[EMAIL PROTECTED],0
1. c5t1d0 SEAGATE-ST336754LW-0005-34.18GB
/[EMAIL
Hello Mark,
Tuesday, April 15, 2008, 8:32:32 PM, you wrote:
MM The new write throttle code put back into build 87 attempts to
MM smooth out the process. We now measure the amount of time it takes
MM to sync each transaction group, and the amount of data in that group.
MM We dynamically resize
Can anybody confirm that random read performance is definitely
better with mirrored volumes? Does ZFS use all the disks in the
mirror sets independently when reading data? Am I right in thinking
I could have around 7x better random read performance with the 15
mirrored drives, when
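Purely for illustration (device names are invented; real controller/target numbers will differ), 16 drives laid out as 8 striped mirror pairs would be created roughly like this. ZFS services reads from either side of each mirror, so random reads get the benefit of all the spindles, not just one per vdev:

zpool create tank \
  mirror c1t0d0 c1t1d0  mirror c1t2d0 c1t3d0 \
  mirror c1t4d0 c1t5d0  mirror c1t6d0 c1t7d0 \
  mirror c2t0d0 c2t1d0  mirror c2t2d0 c2t3d0 \
  mirror c2t4d0 c2t5d0  mirror c2t6d0 c2t7d0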