I have a system running Solaris 10 Update 3 TX with 1 zpool and 5 zones.
Everything on it is running fine. I take the drive to my disk duplicator
and dupe it bit by bit to another drive, put the newly duped drive in the same
machine, and boot it up; everything boots up fine. Then I do a
Just to let everyone know what I did to 'fix' the problem: by halting the
zones and then exporting the zpool, I was able to duplicate the drive without
issue. I just had to import the zpool upon booting and then boot the zones.
Note that my setup uses slices for the zpool; I had said this is not supported
by Sun, but retracted that statement in the above edit.
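For anyone else who hits this, a minimal sketch of the workflow described
above (assuming a pool named 'tank' and zones named zone1 through zone5;
substitute your own names):

  # before duplicating: halt each zone, then export the pool
  zoneadm -z zone1 halt        # repeat for zone2 .. zone5
  zpool export tank

  # after booting the duplicated drive: import the pool, then boot the zones
  zpool import tank
  zoneadm -z zone1 boot        # repeat for zone2 .. zone5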
- Adam...
with raidz2 so you aren't
necessarily going to lose any capacity depending on how you configure your
pool.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
. A coarser solution would be to create a new pool where you
zfs send/zfs recv the filesystems of the old pool.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
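A hedged sketch of that coarser approach (the pool and filesystem names here
are placeholders):

  # snapshot the filesystem on the old pool, then replicate it to the new pool
  zfs snapshot oldpool/fs@migrate
  zfs send oldpool/fs@migrate | zfs recv newpool/fs

Repeat per filesystem, then retire the old pool once everything is verified.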
with.
Anyhow, very slick UI, sort of dubious back end, interesting possibility
for integration with ZFS.
Adam
On Mon, Aug 07, 2006 at 12:08:17PM -0700, Eric Schrock wrote:
Yeah, I just noticed this line:
Backup Time: Time Machine will back up every night at midnight, unless
you select a different time
taking a stab at estimating the comparative benefits?
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
On Wed, Nov 01, 2006 at 10:05:01AM +, Ceri Davies wrote:
On Wed, Nov 01, 2006 at 01:33:33AM -0800, Adam Leventhal wrote:
Rick McNeal and I have been working on building support for sharing ZVOLs
as iSCSI targets directly into ZFS. Below is the proposal I'll be
submitting to PSARC
to be persistent. This is similar to sharing
ZFS filesystems via NFS: you can use share(1M), but it doesn't affect the
persistent properties of the dataset.
What properties are you specifically interested in modifying?
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
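To illustrate the analogy with a sketch (dataset names are made up): the
property is what persists across reboots, while share(1M) only changes the
running state.

  # persistent: stored as a property of the dataset
  zfs set sharenfs=on tank/home

  # transient: shares the filesystem now, but records nothing on the dataset
  share -F nfs /tank/home

  # the proposed iSCSI analogue for a zvol
  zfs set shareiscsi=on tank/vol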
What properties are you specifically interested in modifying?
LUN for example. How would I configure LUN via zfs command ?
You can't. Forgive my ignorance about how iSCSI is deployed, but why would
you want/need to change the LUN?
Adam
On Wed, Nov 01, 2006 at 01:36:05PM +0200, Cyril Plisko
that it could not be shared.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
, but we've considered making that an option you could
set in the 'shareiscsi' property ('alias=blah' for example). The iSCSI
properties I was referring to are the private metadata for the target daemon
such as IQN.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
On Wed, Nov 01, 2006 at 09:25:26PM +0200, Cyril Plisko wrote:
On 11/1/06, Adam Leventhal [EMAIL PROTECTED] wrote:
What properties are you specifically interested in modifying?
LUN for example. How would I configure LUN via zfs command ?
You can't. Forgive my ignorance about how iSCSI
it when it was on Server A?
Clients would need to explicitly change the server they're contacting unless
that new server also took over the IP address, hostname, etc.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
, and
so set 'shareiscsi=on' for the pool.
Hey Adam, what does 'direct' mean?
It's iSCSI target lingo for vanilla disk emulation.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
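Putting the pieces together, a minimal sketch (the pool and zvol names are
assumptions):

  # create a 10 GB zvol and export it as an iSCSI target
  zfs create -V 10g tank/iscsivol
  zfs set shareiscsi=on tank/iscsivol

  # confirm the target exists from the target daemon's point of view
  iscsitadm list target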
I don't think you'd see the same performance benefits on RAID-Z since
parity isn't always on the same disk. Are you seeing hot/cool disks?
Adam
On Sun, Nov 05, 2006 at 04:03:18PM +0100, Pawel Jakub Dawidek wrote:
In my opinion RAID-Z is closer to RAID-3 than to RAID-5. In RAID-3 you
do only
Thanks for all the feedback. This PSARC case was approved yesterday and
will be integrated relatively soon.
Adam
On Wed, Nov 01, 2006 at 01:33:33AM -0800, Adam Leventhal wrote:
Rick McNeal and I have been working on building support for sharing ZVOLs
as iSCSI targets directly into ZFS. Below
Hey Robert,
The iSCSI target is targeting Solaris 10 update 4. There wasn't any issue
with the target; rather, it was the timing of its integration into Nevada,
and the sheer quantity of projects targeting update 3.
Adam
On Thu, Dec 14, 2006 at 02:39:17PM -0500, Robert Petkus wrote:
Folks
the blocksize or recordsize
is relatively closer to the number of bytes in a stripe.
Adam
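For example (a sketch; the dataset name is a placeholder), the recordsize
property is the knob for lining the block size up with the stripe width:

  # set and verify the dataset's maximum block size
  zfs set recordsize=128k tank/data
  zfs get recordsize tank/data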
On Thu, Jan 04, 2007 at 11:17:26PM +, Peter Tribble wrote:
I'm being a bit of a dunderhead at the moment and neither the site search
nor Google are picking up the information I seek...
I'm setting up
For what it's worth, there is a plan to allow data to be scrubbed so that
you can enable compression for extant data. No ETA, but it's on the roadmap.
In fact, I was recently reminded that I filed a bug on this in 2004:
5029294 there should be a way to compress an extant file system
Adam
-- and considerably better than LZJB.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
not supported, shouldn't the header files be shipped so
people can make sense of kernel data structure types?
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
or in the iSCSI target. Please file a bug.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
:
6430003 record size needs to affect zvol reservation size on RAID-Z
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
On Wed, Mar 21, 2007 at 01:36:10AM +0100, Robert Milkowski wrote:
btw: I assume that compression level will be hard coded after all,
right?
Nope. You'll be able to choose from gzip-N with N ranging from 1 to 9 just
like gzip(1).
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
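Usage once the fix is integrated would look like this (consistent with the
zfs(1M) excerpt quoted below):

  # default gzip level
  zfs set compression=gzip tank/fs

  # or pick a level explicitly, from gzip-1 (fastest) to gzip-9 (best ratio)
  zfs set compression=gzip-9 tank/fs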
:
http://blogs.sun.com/ahl/entry/gzip_for_zfs_update
I've also asked Roch Bourbonnais and Richard Elling to do some more
extensive tests.
Adam
From zfs(1M):
compression=on | off | lzjb | gzip | gzip-N
Controls the compression algorithm used for this dataset.
On Fri, Mar 23, 2007 at 11:41:21AM -0700, Rich Teer wrote:
I recently integrated this fix into ON:
6536606 gzip compression for ZFS
Cool! Can you recall into which build it went?
I put it back yesterday so it will be in build 62.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
that FC-AL, ... do better in this case
iSCSI doesn't use TCP, does it? Anyway, the problem is really
transport-independent.
It does use TCP. Were you thinking UDP?
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
configuration, BUT would potentially make much less efficient use of storage.
Adam
[1] http://blogs.sun.com/bonwick/entry/raid_z
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
On Wed, Apr 04, 2007 at 03:34:13PM +0200, Constantin Gonzalez wrote:
- RAID-Z is _very_ slow when one disk is broken.
Do you have data on this? The reconstruction should be relatively cheap
especially when compared with the initial disk access.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
you collect the performance
data? I think a fair test would be to compare the performance of a fully
functional RAID-Z stripe against one with a missing (absent) device.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
.)
There's nothing today preventing Microsoft (or Apple) from sticking ZFS
into their OS. They'd just have to release the (minimal) diffs to
ZFS-specific files.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
Hi folks. I'm looking at putting together a 16-disk ZFS array as a server, and
after reading Richard Elling's writings on the matter, I'm now left wondering
if it'll have the performance we expect of such a server. Looking at his
figures, 5x 3-disk RAIDZ sets seems like it *might* be made to do
Bart Smaalders wrote:
Adam Lindsay wrote:
Okay, the way you say it, it sounds like a good thing. I misunderstood
the performance ramifications of COW and ZFS's opportunistic write
locations, and came up with a much more pessimistic guess that it would
approach random writes. As it is, I have
?)
- If I focused on simple streaming IO, would giving the server less RAM
have an impact on performance?
- I had assumed four cores would be better than the two faster (3.0GHz)
single-core processors the vendor originally suggested. Agree?
Many thanks for any thoughts,
adam
[EMAIL PROTECTED] wrote:
Adam:
Does anyone have a clue as to where the bottlenecks are going to be with
this:
16x hot swap SATAII hard drives (plus an internal boot drive)
Tyan S2895 (K8WE) motherboard
Dual GigE (integral nVidia ports)
2x Areca 8-port PCIe (8-lane) RAID drivers
2x AMD
Nicholas Lee wrote:
On 4/19/07, *Adam Lindsay* [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] wrote:
16x hot swap SATAII hard drives (plus an internal boot drive)
Tyan S2895 (K8WE) motherboard
Dual GigE (integral nVidia ports)
2x Areca 8-port PCIe (8-lane) RAID drivers
2x AMD
for it.
Now time to check on the project budget... :)
thanks,
adam
After reading through the ZFS slides, it appears to be the case that
if ZFS wants to modify a single data block, it must rewrite every
block between that modified block and the uberblock (root of the tree).
Is this really the case? If so, does this mean that every commit
operation (ie every
. There are outstanding problems with compression in
the ZIO pipeline that may contribute to the bursty behavior.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
That would be a great RFE. Currently the iSCSI Alias is the dataset name
which should help with identification.
Adam
On Fri, May 04, 2007 at 02:02:34PM +0200, cedric briner wrote:
cedric briner wrote:
hello dear community,
Is there a way to have a ``local_name'' as defined in iscsitadm(1M)
Try 'trace((int)arg1);' -- 4294967295 is the unsigned representation of -1.
Adam
On Mon, May 14, 2007 at 09:57:23AM -0700, Shweta Krishnan wrote:
Thanks Eric and Manoj.
Here's what ldi_get_size() returns:
bash-3.00# dtrace -n 'fbt::ldi_get_size:return{trace(arg1);}' -c 'zpool
create adsl
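Concretely, the suggested one-liner with the cast applied, so a failing
return shows up as -1 rather than 4294967295:

  dtrace -n 'fbt::ldi_get_size:return { trace((int)arg1); }'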
for the sharenfs property.
Adam
On Thu, May 24, 2007 at 02:39:24PM +0200, cedric briner wrote:
Starting from this thread:
http://www.opensolaris.org/jive/thread.jspa?messageID=118786#118786
I would love to have the possibility to set an iSCSI alias when doing a
shareiscsi=on on ZFS
On Thu, Jun 07, 2007 at 08:38:10PM -0300, Toby Thain wrote:
When should we expect Solaris kernel under OS X? 10.6? 10.7? :-)
I'm sure Jonathan will be announcing that soon. ;-)
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
Those are interesting results. Does this mean you've already written lzo
support into ZFS? If not, that would be a great next step -- licensing
issues can be sorted out later...
Adam
On Sat, Jun 16, 2007 at 04:40:48AM -0700, roland wrote:
btw - is there some way to directly compare lzjb vs lzo
. That is, choose the compression method (if
any), and then, in effect, partition the CD for RAID-Z or mirroring to
stretch the data to fill the entire disc. It wouldn't necessarily be all
that efficient to access, but it would give you resiliency against media
errors.
Adam
--
Adam Leventhal
_synchronous_ writes on an SSD, but at a pretty steep cost.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
could choose a compression algorithm based on its detection of the
type of data being stored.
Adam
On Thu, Jul 05, 2007 at 08:29:38PM -0300, Domingos Soares wrote:
Below follows a proposal for a new OpenSolaris project. Of course,
this is open to change since I just wrote down some ideas I had
On Thu, Aug 16, 2007 at 05:20:25AM -0700, ramprakash wrote:
#zfs mount -a
1. mounts c again.
2. but not vol1.. [ ie /dev/zvol/dsk/mytank/b/c does not contain vol1
]
Is this the normal behavior or is it a bug?
That looks like a bug. Please file it.
Adam
--
Adam Leventhal
, data is
written locally and then transmitted shortly after that.
Synchronous replication obviously imposes a much larger performance hit,
but asynchronous replication means you may lose data over some recent
period (but the data will always be consistent).
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
. Assuming relatively large blocks
written, RAID-Z and RAID-5 should be laid out on disk very similarly
resulting in similar read performance.
Did you compare the I/O characteristic of both? Was the bottleneck in
the software or the hardware?
Very interesting experiment...
Adam
--
Adam Leventhal
all over the place), what
else could it be?
Many thanks,
adam
that are ZFS-friendly (i.e., JBOD), and the relative
unavailability of motherboards that support both the latest CPUs as well
as have a good PCI-X architecture.
If you come across some potential solutions, I think a lot of people
here will thank you for sharing...
adam
on a 100MHz
slot is measurably slower than 133MHz, but contention over a single
bridge can be even worse.
hth,
adam
you say is your most taxing
work load. I say I'm disappointed with the contention on my bus
putting limits on maximum throughputs, but really, what I have far
outstrips my ability to get data into or out of the system.
So all of my disappointment is in theory.
adam
to have on a ZFS
system? (I know this relies on speculation, but) Might anyone know
anything about Norco's usual chipsets to guess about OpenSolaris
compatibility?
adam
have little idea where this comes from, and had no idea that
it would rely on memory concerns.
thanks,
adam
[EMAIL PROTECTED] wrote:
If you don't have a 64bit cpu, add more ram(tm).
Actually, no; if you have a 32 bit CPU, you must not add too much
RAM or the kernel will run out of space to put things.
Hrm. Do you have a working definition of too much?
adam
.
adam
that this probably is asking people to speculate, but since
I'm still waiting on IP addresses locally, I figured it couldn't hurt to
ask for planning purposes.
cheers,
adam
On Wed, Nov 07, 2007 at 01:47:04PM -0800, can you guess? wrote:
I do consider the RAID-Z design to be somewhat brain-damaged [...]
How so? In my opinion, it seems like a cure for the brain damage of RAID-5.
Adam
--
Adam Leventhal, FishWorks http://blogs.sun.com/ahl
not
constitute a significant drawback.
I don't really think this would be feasible given how ZFS is stratified
today, but go ahead and prove me wrong: here are the instructions for
bringing over a copy of the source code:
http://www.opensolaris.org/os/community/tools/scm
- ahl
--
Adam Leventhal
...
Adam
--
Adam Leventhal, FishWorks http://blogs.sun.com/ahl
match a disk of size N with another disk of size 2N
and use RAID-Z to turn them into a single vdev. At that point it's
probably a better idea to build a striped vdev and use ditto blocks to
do
your data redundancy by setting copies=2.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
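A sketch of that alternative (device names are placeholders):

  # stripe the mismatched disks together, then get redundancy from ditto blocks
  zpool create tank c0t0d0 c0t1d0
  zfs set copies=2 tank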
could be
garnered with one using single-parity RAID as with the other using double-
parity RAID. That said, it would be a fairly uncommon scenario.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
be a coarse-grained solution to your problem, work is underway to address
the problematic interaction between scrubs and snapshots.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
/expand_o_matic_raid_z
I'd encourage anyone interested in getting involved with ZFS development to
take a look.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
rather than later.
Well, tip you off _and_ correct the problems if possible. I believe a long-
standing RFE has been to scrub periodically in the background to ensure that
correctable problems don't turn into uncorrectable ones.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
layer which would break all S10 file
systems. So in a very real sense CIFS simply cannot be backported
to S10.
However, the same arguments were made explaining the difficulty backporting
ZFS and GRUB boot to Solaris 10.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Hi All,
I'm new to ZFS but I'm intrigued by the possibilities it presents.
I'm told one of the greatest benefits is that, instead of setting
quotas, each user can have their own 'filesystem' under a single pool.
This is obviously great if you've got 10 users but what if you have
10,000? Are
on this in the next few months. I made a
post to my blog that probably won't answer your questions directly, but
may help inform you about what we have in mind.
http://blogs.sun.com/ahl/entry/flash_hybrid_pools_and_future
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
an I/O request on my
enterprise system.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
.
For writes, however, RAID-Z with an N+1 wide stripe will divide the data
into N chunks plus parity, and reads will need to access all N data chunks.
This reduces the total IOPS by roughly a factor of N+1 for writes and N for
reads, whereas mirroring reduces the IOPS by a factor of 2 for writes and
not at all for reads.
Adam
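To put rough numbers on that (an illustration, not a measurement): take five
disks at roughly 200 random IOPS each. As a 4+1 RAID-Z vdev they deliver on
the order of 200 IOPS for small random reads and writes, since every
operation touches most of the stripe. As two 2-way mirrors, the same four
data disks deliver around 800 read IOPS (either side can serve a read) and
around 400 write IOPS (each write hits both sides of one mirror).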
for a root device. Performance is typically a bit
better with SLC -- especially on the write side -- but it's not such a
huge difference.
The reason you'd use a flash SSD for a boot device is power (with
maybe a dash of performance), and either SLC or MLC will do just fine.
Adam
On Sep 24
of view)?
You would lose transactions, but the pool would still reflect a consistent
state.
So is this idea completely crazy?
On the contrary; it's very clever.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
to the x4540 so that would be required before any upgrade to the
equivalent of the Sun Storage 7210.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
On Nov 11, 2008, at 10:41 AM, Brent Jones wrote:
Wish I could get my hands on a beta of this GUI...
Take a look at the VMware version that you can run on any machine:
http://www.sun.com/storage/disk_systems/unified_storage/resources.jsp
Adam
--
Adam Leventhal, Fishworks
Is this software available for people who already have thumpers?
We're considering offering an upgrade path for people with existing
thumpers. Given the feedback we've been hearing, it seems very likely
that we will. No word yet on pricing or availability.
Adam
--
Adam Leventhal, Fishworks
.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
ZFS that's in OpenSolaris
today. A pool created on the appliance could potentially be imported on an
OpenSolaris system; that is, of course, not explicitly supported in the
service contract.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
. Keep an eye on blogs.sun.com/fishworks.
A little off topic: Do you know when the SSDs used in the Storage 7000 are
available for the rest of us?
I don't think they will be, but it will be possible to purchase them as
replacement parts.
Adam
--
Adam Leventhal, Fishworks
On Tue, Nov 18, 2008 at 09:09:07AM -0800, Andre Lue wrote:
Is the web interface on the appliance available for download or will it make
it to opensolaris sometime in the near future?
It's not, and it's unlikely to make it to OpenSolaris.
Adam
--
Adam Leventhal, Fishworks
The Intel part does about a fourth as many synchronous write IOPS at
best.
Adam
On Jan 16, 2009, at 5:34 PM, Erik Trimble wrote:
I'm looking at the newly-orderable (via Sun) STEC Zeus SSDs, and
they're
outrageously priced.
http://www.stec-inc.com/product/zeusssd.php
I just looked
not have
that assurance.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
that would hardly be measurable given
the architecture of the Hybrid Storage Pool. Recall that unlike other
products in the same space, we get our IOPS from flash rather than from
a bazillion spindles spinning at 15,000 RPM.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
their
drives with stuff they've bought at Fry's. Is this still covered by their
service plan or would this only be in an unsupported config?
Thanks.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
than ZFS?
Are you telling me ZFS is deficient to the point that it can't handle basic
right-sizing like a $15 SATA RAID adapter?
How do these $15 SATA RAID adapters solve the problem? The more details you
could provide the better, obviously.
Adam
--
Adam Leventhal, Fishworks
complexity and little to no value? They seem
very different to me, so I suppose the answer to your question is: no I cannot
feel the irony oozing out between my lips, and yes I'm oblivious to the same.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
to use the full extent of the drives they've paid for, say,
if they're using a vendor that already provides that guarantee?
You know, sort of like you not letting people choose their raid layout...
Yes, I'm not saying it shouldn't be done. I'm asking what the right answer
might be.
Adam
--
Adam
it actually
applies.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
of SSDs with ZFS as a ZIL device, an L2ARC device,
and eventually as primary storage. We'll first focus on the specific
SSDs we certify for use in our general purpose servers and the Sun
Storage 7000 series, and help influence the industry to move to
standards that we can then use.
Adam
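For reference, this is how the first two of those roles attach to a pool
today (a sketch; pool and device names are placeholders):

  # dedicate one SSD as a separate intent log (slog) device
  zpool add tank log c2t0d0

  # and another as an L2ARC cache device
  zpool add tank cache c2t1d0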
write performance.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
of disks
to the OS, so I really only get 14 disks for data. Correct?
That's right. We market the 7110 as either 2TB = 146GB x 14 or 4.2TB =
300GB x 14 raw capacity.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Hey Lawrence,
Make sure you're running the latest software update. Note that this forum
is not the appropriate place to discuss support issues. Please contact your
official Sun support channel.
Adam
On Thu, Jun 18, 2009 at 12:06:02PM -0700, lawrence ho wrote:
We have a 7110 on try and buy
for pointing to relevant documentation.
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
documentation.
The manual for the Supermicro cases [1, 2] does a pretty good job IMO
explaining the different options. See page D-14 and on in the 826
manual, or page D-11 and on in the 846 manual.
I'll read though that, thanks for the detailed pointers.
A.
--
Adam Sherman
CTO, Versature Corp.
Tel
Another thought in the same vein, I notice many of these systems
support SES-2 for management. Does this do anything useful under
Solaris?
Sorry for these questions, I seem to be having a tough time locating
relevant information on the web.
Thanks,
A.
--
Adam Sherman
CTO, Versature
management
uses of SES?
I'm really just exploring. Where can I read about how FMA is going to
help with failures in my setup?
Thanks,
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
have a look at to get
>=12 SATA disks externally attached to my systems?
Thanks!
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113