Thanks to all for your comments and sharing your experiences.
In my setup the pools are split and then NFS mounted to other nodes, mostly
Oracle DB boxes. These mounts will provide areas for RMAN Flash backups to be
written.
If I lose connectivity to any host I will swing the LUNs over to the
I agree, it looks like it would be perfect, but unfortunately without Solaris
drivers it's pretty much a non-starter.
That hasn't stopped me pestering Fusion-IO wherever I can, though, to see if they
are willing to develop Solaris drivers; almost everywhere I've seen these
reviewed there have
When I issue the zpool remove command on the spare, I receive no response, good
or bad. Afterwards the drive is still listed as a spare in the pool, and zpool
status shows the spare as FAULTED. Any ideas?
$ zpool status
pool: datapool1
state: ONLINE
scrub: none requested
config:
NAME STATE READ
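For what it's worth, here is a hedged sketch of how spare removal usually goes. The pool name is taken from the output above, but the device name c1t9d0 is purely hypothetical, and whether remove or detach applies depends on whether the spare is actually in use (worth double-checking against the zpool man page):

```shell
# Device name c1t9d0 is hypothetical; substitute the one zpool status shows.
zpool status -v datapool1

# A spare that is idle (AVAIL, or FAULTED but not in use) is removed with:
zpool remove datapool1 c1t9d0

# If the spare has actually been pulled into a vdev to replace a failed disk,
# it must be detached from that vdev instead:
zpool detach datapool1 c1t9d0

zpool status -v datapool1   # confirm the spare is gone
```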
I was doing a manual resilver, not with spares. I still suspect the issue
comes from your script running as root, which is common for reporting scripts.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
Just out of curiosity, why don't you use SC?
Leal.
I was under the impression that MLC is the preferred type of SSD, but I
want to prevent myself from having a think-o.
I'm looking to get (2) SSD to use as my boot drive. It looks like I can
get 32GB SSDs composed of either SLC or MLC for roughly equal pricing.
Which would be the better
Erik Trimble wrote:
I was under the impression that MLC is the preferred type of SSD, but I
want to prevent myself from having a think-o.
I'm looking to get (2) SSD to use as my boot drive. It looks like I can
get 32GB SSDs composed of either SLC or MLC for roughly equal pricing.
Which
On Wed, 24 Sep 2008, Erik Trimble wrote:
I was under the impression that MLC is the preferred type of SSD, but I
want to prevent myself from having a think-o.
SLC = single-level cell
MLC = multi-level cell
Since an SLC cell stores only a binary value rather than several possible
encoded values, it becomes
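The arithmetic behind those names can be sketched in a few lines of Python (purely illustrative, not from the thread): an n-bit cell must distinguish 2^n charge levels, so MLC needs much finer sensing margins per cell than SLC, which is where the speed and endurance cost comes from.

```python
# Illustrative only: bits per cell vs. charge levels a flash cell must hold.

def levels_per_cell(bits: int) -> int:
    """Number of distinct charge levels an n-bit flash cell must distinguish."""
    return 2 ** bits

slc_levels = levels_per_cell(1)  # SLC: 1 bit  -> 2 levels
mlc_levels = levels_per_cell(2)  # MLC: 2 bits -> 4 levels

print(slc_levels, mlc_levels)  # prints: 2 4
```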
On Wed, 24 Sep 2008, Erik Trimble wrote:
I was under the impression that MLC is the preferred type of SSD, but I
want to prevent myself from having a think-o.
Depends on what one prefers, I guess. :-)
SLC is preferred for performance reasons; MLC tends to be cheaper.
I installed an SLC SSD in
On Wed, Sep 24, 2008 at 1:41 PM, Erik Trimble [EMAIL PROTECTED] wrote:
I was under the impression that MLC is the preferred type of SSD, but I
want to prevent myself from having a think-o.
I'm looking to get (2) SSD to use as my boot drive. It looks like I can
get 32GB SSDs composed of
On Wed, Sep 24, 2008 at 1:41 PM, Erik Trimble [EMAIL PROTECTED] wrote:
I was under the impression that MLC is the preferred type of SSD, but I
want to prevent myself from having a think-o.
MLC - description as to why can be found in
http://mags.acm.org/communications/200807/
See Flash
Tim wrote:
On Wed, Sep 24, 2008 at 1:41 PM, Erik Trimble [EMAIL PROTECTED] wrote:
I was under the impression that MLC is the preferred type of SSD,
but I
want to prevent myself from having a think-o.
I'm looking to get (2) SSD to use as my boot
[EMAIL PROTECTED] wrote on 09/24/2008 05:54:45 AM:
I agree, it looks like it would be perfect, but unfortunately
without Solaris drivers it's pretty much a non starter.
That hasn't stopped me pestering Fusion-IO wherever I can though to
see if they are willing to develop Solaris drivers,
The man page for dumpadm says this:
A given ZFS volume cannot be configured for both the swap area and the dump
device.
And indeed when I try to use a zvol as both, I get:
zvol cannot be used as a swap device and a dump device
My question is, why not?
Thanks,
John
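Whatever the underlying reason for the restriction, the supported layout is two separate zvols, one per role. A hedged sketch, assuming a root pool named rpool; the volume names and 2G sizes are examples only:

```shell
# Create two distinct zvols (names and sizes are examples).
zfs create -V 2G rpool/swapvol
zfs create -V 2G rpool/dumpvol

# Use one as swap...
swap -a /dev/zvol/dsk/rpool/swapvol

# ...and the other as the dedicated dump device.
dumpadm -d /dev/zvol/dsk/rpool/dumpvol
```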
In general, I think SLC is better, but there are a number of brand-new
MLC devices on the market that are really fast; until a new generation
of SLC devices shows up, the MLC drives kind of win by default.
Intel's supposed to have an SLC drive showing up early next year that
has similar read
John Cecere wrote:
The man page for dumpadm says this:
A given ZFS volume cannot be configured for both the swap area and the dump
device.
And indeed when I try to use a zvol as both, I get:
zvol cannot be used as a swap device and a dump device
My question is, why not?
Swap is a
Darren,
Thanks for the explanation. Would you object if I opened a bug on the zfs man
page to include what you've written here?
Thanks again,
John
Darren J Moffat wrote:
John Cecere wrote:
The man page for dumpadm says this:
A given ZFS volume cannot be configured for both the swap area
John Cecere wrote:
Darren,
Thanks for the explanation. Would you object if I opened a bug on the
zfs man page to include what you've written here?
I don't know whether what I said is considered an implementation detail or not.
Feel free to log the bug, but I can't say either way if it would be
mike wrote:
On Sun, Sep 21, 2008 at 11:49 PM, Volker A. Brandt [EMAIL PROTECTED] wrote:
Hmmm... I run Solaris 10/sparc U4. My /usr/java points to
jdk/jdk1.5.0_16. I am using Firefox 2.0.0.16. Works For Me(TM) ;-)
Sorry, can't help you any further. Maybe a question for desktop-discuss?
On Wed, Sep 24, 2008 at 9:37 PM, James Andrewartha [EMAIL PROTECTED] wrote:
Can you post the Java error to the list? Do you have gzip compression or
aclinherit properties set on your filesystems, possibly hitting bug 6715550?
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-June/048457.html
Darren J Moffat wrote:
John Cecere wrote:
The man page for dumpadm says this:
A given ZFS volume cannot be configured for both the swap area and the dump
device.
And indeed when I try to use a zvol as both, I get:
zvol cannot be used as a swap device and a dump device
My question
I have a strange problem involving changes in large files on a mirrored
zpool in OpenSolaris snv_96.
We use it as storage in a VMware ESXi lab environment. All virtual disk
files get corrupted when changes are made within the files (when running the
machine, that is).
The sad thing is that I've