What version of Solaris are you running there? For a long while the default
response on encountering unrecoverable errors was to panic, but I believe that
has been improved in newer builds.
Also, part of your problem may be down to running with just a single disk.
With just one disk, ZFS can detect corruption through its checksums, but it
has no redundant copy to repair the damage from.
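If it's the panic-on-error behaviour you're hitting, here's a minimal sketch
of the knobs involved (assuming a pool named tank; the failmode property only
exists in newer builds, and copies=2 is my suggestion, not something from the
original post):

    # How the pool reacts to unrecoverable I/O errors:
    # wait | continue | panic (panic was the old default)
    zpool get failmode tank
    zpool set failmode=continue tank

    # On a single disk, store two copies of each block so
    # ZFS can self-heal checksum errors from the duplicate
    zfs set copies=2 tank/data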
It's a capital I, not an L: the -I option to zfs send is for sending all
intermediate snapshots to a destination.
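For example (a minimal sketch, assuming a dataset tank/data with snapshots
snap1 and snap3 already taken, and a pool named backup on the far side):

    # Send snap3 plus every intermediate snapshot since snap1
    # (note the capital -I), piped into a receive on the remote
    zfs send -I tank/data@snap1 tank/data@snap3 | \
        ssh remotehost zfs receive -d backup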
Hi all,
I'm about to embark on my first voyage into ZFS (and Solaris, frankly) as it
seems very appealing for a low-cost SAN/NAS solution. I am in the process of
building up an HCL-compliant whitebox server which ultimately will contain
8x1TB SATA disks.
I would appreciate some advice and
Hi,
I'm by no means a ZFS expert, but I do have one comment:
gm_sjo wrote:
- To provide a large slice of storage (~4TB) to a Windows 2003/8 file
server guest on the vmware host, to be accessed by Windows clients over
CIFS.
Solaris provides CIFS support natively too - maybe you can save yourself the
hassle of going through the vmware + windows combo.
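For what it's worth, a minimal sketch of the native route (assuming a dataset
named tank/files; the in-kernel smb/server service ships with recent builds):

    # Enable the in-kernel CIFS server and its dependencies
    svcadm enable -r smb/server

    # Share the dataset over SMB; clients see a share named "files"
    zfs set sharesmb=name=files tank/files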
Currently, you can mirror your boot but not raidz2 it. I'd recommend using 2
of the drives for a mirrored boot and the other 6 drives for raidz2. I used
2x Addonics AE5RCS35NSA to hold the drives to give me hot swappability.
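A minimal sketch of that layout (hypothetical device names c1t0d0 through
c1t7d0; the installer creates rpool on the first disk, so only the mirror
attach and the data pool are shown):

    # Mirror the boot disk onto the second drive
    zpool attach rpool c1t0d0s0 c1t1d0s0
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0

    # Double-parity pool across the remaining six drives
    zpool create tank raidz2 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0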
Out of curiosity, is there any reason you are going with vmware rather than
something like xVM?
2008/9/12 Malachi de Ælfweald:
Currently, you can mirror your boot but not raidz2 it. I'd recommend using 2
of the drives for a mirrored boot and the other 6 drives for raidz2. I used
2x Addonics AE5RCS35NSA to hold the drives to give me hot swappability.
Sorry, forgot to mention - I have
2008/9/12 Michael Schuster:
Solaris provides CIFS support natively too - maybe you can save yourself the
hassle of going through the vmware + windows combo.
There will be approx. 20 vmware guests running on this infrastructure,
so having a Windows guest there for serving files isn't a problem.
Comments inline
On Fri, Sep 12, 2008 at 8:24 AM, gm_sjo [EMAIL PROTECTED] wrote:
2008/9/12 Malachi de Ælfweald:
Currently, you can mirror your boot but not raidz2 it. I'd recommend
using 2 of the drives for a mirrored boot and the other 6 drives for raidz2.
I used 2x Addonics AE5RCS35NSA to hold the drives to give me hot swappability.
2008/9/12 Malachi de Ælfweald:
I'd say that if you are planning on using Windows to host the VMs, then
either vmware or virtualbox is your best bet. If you are looking to have the
OpenSolaris box host the VMs, xVM might be a better choice.
I'm not - as per my original post, the vmware host
greenBytes has a very well-produced teaser commercial on their site.
http://www.green-bytes.com
Actually, I think it is one of the better commercials done by tech
companies in a long time. Do you grok it?
-- richard
On Sep 12, 2008, at 1:35 PM, Richard Elling wrote:
greenBytes has a very well-produced teaser commercial on their site.
http://www.green-bytes.com
Actually, I think it is one of the better commercials done by tech
companies in a long time. Do you grok it?
Did I detect a (well-done)
On Fri, 5 Sep 2008, Kyle McDonald wrote:
Paul Raines wrote:
I am having a very odd problem on one of our ZFS filesystems
When certain files are accessed locally on the Solaris server where the ZFS
filesystem sits, we get an error like the following:
[EMAIL PROTECTED] # ls -l
./README:
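(When individual files throw errors like this, a hedged first step - assuming
the pool is named tank - is to let ZFS name the damaged files and then scrub:)

    # List permanent errors along with the affected file paths
    zpool status -v tank

    # Re-read every block and repair whatever redundancy allows
    zpool scrub tank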
On Sep 11, 2008, at 5:16 PM, A Darren Dunham wrote:
On Thu, Sep 11, 2008 at 04:28:03PM -0400, Jim Dunham wrote:
On Sep 11, 2008, at 11:19 AM, A Darren Dunham wrote:
On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote:
The issue with any form of RAID 1 is that the instant a disk
Corey,
I ran into an odd problem importing a zpool while testing avs. I was
trying to simulate a drive failure, break SNDR replication, and then
import the pool on the secondary. To simulate the drive failure I just
offlined one of the disks in the RAIDZ set.
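(For reference, a minimal sketch of the sequence being described, with a
hypothetical pool name; sndradm -l drops the SNDR set into logging mode,
breaking replication, and the import is then forced on the secondary:)

    # On the primary: put the SNDR set into logging mode
    # (optionally naming the specific set)
    sndradm -n -l

    # On the secondary: force-import the replicated pool
    zpool import -f tank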
Are all constituent
Hi guys,
I recently was adding and removing some devices to a zfs mirror and now
the format command seems to be a bit confused (or is being given
erroneous information).
This happened under
Solaris Express Community Edition snv_81 X86
I have 3 disks in a pool
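(For context, a minimal sketch of the sort of attach/detach cycle described
above, with hypothetical device names:)

    # Attach a new disk to the mirror, wait for the resilver
    # to finish, then detach the old one
    zpool attach tank c0t1d0 c0t2d0
    zpool status tank
    zpool detach tank c0t1d0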
Miles,
mb == Matt Beebe [EMAIL PROTECTED] writes:
mb When using AVS's Async replication with memory queue, am I
mb guaranteed a consistent ZFS on the distant end? The assumed
mb failure case is that the replication broke, and now I'm trying
mb to promote the secondary replicate
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Friday, September 12, 2008 4:34 PM
To: Leopold, Corey
Cc: zfs-discuss@opensolaris.org; [EMAIL PROTECTED]
Subject: Re: [zfs-discuss] ZPOOL Import Problem
Corey,
I ran into an odd problem importing a