On Mon, Dec 12, 2011 at 03:01:08PM -0500, "Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D."
wrote:
> 4c@2.4ghz
Yep, that's the plan. Thanks.
On Mon, Dec 12, 2011 at 02:40:52PM -0500, "Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D."
wrote:
> please check out the ZFS appliance 7120 spec: 2.4GHz / 24GB memory and
> ZIL (SSD)
> maybe try the ZFS simulator SW
Good point. Thanks.
> regards
>
Recommendations?
--
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
zfs snapshot tww/opt/chroots/hppa1.1-hp-hpux11...@ab
zfs clone tww/opt/chroots/hppa1.1-hp-hpux11...@ab
tww/opt/chroots/ab/hppa1.1-hp-hpux11.11
...
and then perform another zfs send/receive, the error above occurs. Why?
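A minimal sketch of one way a clone is usually sent (the @copy snapshot, destination host, and pool here are hypothetical, and the origin filesystem name is inferred from the commands above; the receiving side must already hold the origin snapshot @ab, and the clone is then sent as an incremental from that origin):
  zfs snapshot tww/opt/chroots/ab/hppa1.1-hp-hpux11.11@copy
  zfs send -i tww/opt/chroots/hppa1.1-hp-hpux11.11@ab \
      tww/opt/chroots/ab/hppa1.1-hp-hpux11.11@copy | \
      ssh backuphost zfs receive -d backuppool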
--
albert chin (ch...@thewrittenword.com)
the thread:
http://opensolaris.org/jive/thread.jspa?threadID=115503&tstart=0
--
albert chin (ch...@thewrittenword.com)
ans it will
> take 100 hours. Is this normal? If I had 30TB to back up, it would
> take 1000 hours, which is more than a month. Can I speed this up?
It's not immediately obvious what the cause is. Maybe the server running
zfs send has slow MB/s performance reading from disk. Maybe the
> ZFS ACL activity shown by DTrace. I wonder if there is a lot of sync
>> I/O that would benefit from separately defined ZILs (whether SSD or
>> not), so I've asked them to look for fsync activity.
>>
>> Data collected thus far is listed below. I've asked f
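A hedged DTrace one-liner of the kind that can be used to look for that fsync activity (assumes DTrace is available on the server; the glob matches the fsync/fdsync-style system calls and counts them per calling process):
  dtrace -n 'syscall::*sync:entry { @[probefunc, execname] = count(); }'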
On Mon, Oct 19, 2009 at 09:02:20PM -0500, Albert Chin wrote:
> On Mon, Oct 19, 2009 at 03:31:46PM -0700, Matthew Ahrens wrote:
> > Thanks for reporting this. I have fixed this bug (6822816) in build
> > 127.
>
> Thanks. I just installed OpenSolaris Preview based on 125
> --matt
>
> Albert Chin wrote:
>> Running snv_114 on an X4100M2 connected to a 6140. Made a clone of a
>> snapshot a few days ago:
>> # zfs snapshot a...@b
>> # zfs clone a...@b tank/a
>> # zfs clone a...@b tank/b
>>
>> The system started pan
- switching to Comstar, snv124, VBox
> 3.08, etc., but such a dramatic loss of performance probably has a
> single cause. Is anyone willing to speculate?
Maybe this will help:
http://mail.opensolaris.org/pipermail/storage-discuss/2009-September/007118.html
--
albert chin (ch...@thewrittenword.com)
receive [-vnF] -d
>
> For the property list, run: zfs set|get
>
> For the delegated permission list, run: zfs allow|unallow
> r...@xxx:~# uname -a
> SunOS xxx 5.10 Generic_13-03 sun4u sparc SUNW,Sun-Fire-V890
>
> What's wrong?
Looks like -u wa
On Mon, Sep 28, 2009 at 07:33:56PM -0500, Albert Chin wrote:
> When transferring a volume between servers, is it expected that the
> usedbydataset property should be the same on both? If not, is it cause
> for concern?
>
> snv114# zfs list tww/opt/vms/images/vios/
NAME                            USED  AVAIL  REFER  MOUNTPOINT
t/opt/vms/images/vios/near.img 14.5G 2.42T 14.5G -
snv119# zfs get usedbydataset t/opt/vms/images/vios/near.img
NAME                            PROPERTY       VALUE  SOURCE
t/opt/vms/images/vios/near.img usedbydataset 14.5G -
--
albert chin (ch...@thewrittenword.com)
properties are
not sent?
--
albert chin (ch...@thewrittenword.com)
On Mon, Sep 28, 2009 at 10:16:20AM -0700, Richard Elling wrote:
> On Sep 28, 2009, at 3:42 PM, Albert Chin wrote:
>
>> On Mon, Sep 28, 2009 at 12:09:03PM -0500, Bob Friesenhahn wrote:
>>> On Mon, 28 Sep 2009, Richard Elling wrote:
>>>>
>>>> Scrub co
>> So you simply need to read the data.
>
> This should work but it does not verify the redundant metadata. For
> example, the duplicate metadata copy might be corrupt but the problem
> is not detected since it did not happen to be used.
Too bad we cannot scrub a dataset.
Without doing a zpool scrub, what's the quickest way to find files in a
filesystem with cksum errors? Iterating over all files with "find" takes
quite a bit of time. Maybe there's some zdb fu that will perform the
check for me?
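One possibility, assuming the pool has already logged the errors: "zpool status -v" prints the pathnames of the affected files directly, which avoids walking the filesystem at all (tww is the pool name used elsewhere in these posts):
  zpool status -v tww
If nothing has been logged yet, something still has to read the blocks (a scrub, or simply reading the files) before errors show up there.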
--
albert chin (ch...@thewrittenword.com)
l. I'd
pop up there and ask. There are somewhat similar bug reports at
bugs.opensolaris.org. I'd post a bug report just in case.
--
albert chin (ch...@thewrittenword.com)
I started getting this as
> well.. My Mirror array is unaffected.
>
> snv111b (2009.06 release)
What does the panic dump look like?
--
albert chin (ch...@thewrittenword.com)
On Fri, Sep 25, 2009 at 05:21:23AM +, Albert Chin wrote:
> [[ snip snip ]]
>
> We really need to import this pool. Is there a way around this? We do
> have snv_114 source on the system if we need to make changes to
> usr/src/uts/common/fs/zfs/dsl_dataset.c. It seems like the
It seems like the "zfs
destroy" transaction never completed and it is being replayed, causing
the panic. This cycle continues endlessly.
--
albert chin (ch...@thewrittenword.com)
c40 zfs:txg_sync_thread+265 ()
ff00104c0c50 unix:thread_start+8 ()
System is a X4100M2 running snv_114.
Any ideas?
--
albert chin (ch...@thewrittenword.com)
0 0
c4t6d0 ONLINE 0 0 0
c4t7d0 ONLINE 0 0 0
errors: 855 data errors, use '-v' for a list
--
albert chin (ch...@thewrittenword.com)
INUSE currently in use
c6t600A0B800029996605C84668F461d0 INUSE currently in use
c6t600A0B80002999660A454A93CEDBd0 AVAIL
c6t600A0B80002999660ADA4A9CF2EDd0 AVAIL
--
albert chin (ch...@thewrittenword.com)
On Mon, Aug 31, 2009 at 02:40:54PM -0500, Albert Chin wrote:
> On Wed, Aug 26, 2009 at 02:33:39AM -0500, Albert Chin wrote:
> > # cat /etc/release
> > Solaris Express Community Edition snv_105 X86
> >Copyright 2008 Sun Microsystems, Inc.
On Wed, Aug 26, 2009 at 02:33:39AM -0500, Albert Chin wrote:
> # cat /etc/release
> Solaris Express Community Edition snv_105 X86
>Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
> Use is subject to l
ation that might help track this down, just lots of checksum
> errors.
So, on snv_121, can you read the files with checksum errors? Is it
simply the reporting mechanism that is wrong or are the files really
damaged?
--
albert chin (ch...@thewrittenword.com)
_
up.
see: http://www.sun.com/msg/ZFS-8000-8A
scrub: resilver in progress for 0h11m, 2.82% done, 6h21m to go
config:
...
So, why is a resilver in progress when I asked for a scrub?
--
albert chin (ch...@thewrittenword.com)
On Tue, Aug 25, 2009 at 06:05:16AM -0500, Albert Chin wrote:
> [[ snip snip ]]
>
> After the resilver completed:
> # zpool status tww
> pool: tww
> state: DEGRADED
> status: One or more devices has experienced an error resulting in data
> corruption. Applications may be affected.
0299CCC0A194A89E634d0 \
c6t600A0B800029996609EE4A89DA51d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c6t600A0B800029996609EE4A89DA51d0s0 is part of active ZFS
pool tww. Please see zpool(1M).
So, what is going on?
--
albert chin (ch...@thewrittenword.com)
On Mon, Aug 24, 2009 at 02:01:39PM -0500, Bob Friesenhahn wrote:
> On Mon, 24 Aug 2009, Albert Chin wrote:
>>
>> Seems some of the new drives are having problems, resulting in CKSUM
>> errors. I don't understand why I have so many data errors though. Why
>> does th
I don't understand why I have so many data errors though. Why
does the third raidz2 vdev report 34.0K CKSUM errors?
The number of data errors appears to be increasing as well as the
resilver process continues.
--
albert chin (ch...@thewrittenword.com)
On Mon, Nov 24, 2008 at 08:43:18AM -0800, Erik Trimble wrote:
> I _really_ wish rsync had an option to "copy in place" or something like
> that, where the updates are made directly to the file, rather than a
> temp copy.
Isn't this what --inplace does?
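For reference, a minimal example of the option in question (reasonably recent rsync; --inplace updates the destination file directly rather than writing a temporary copy and renaming it; the paths are hypothetical):
  rsync -a --inplace /export/data/ backuphost:/backup/data/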
--
albert chin ([EMAIL PROTECTED])
unfortunately I am unable
> to verify the driver. "pkgadd -d umem_Sol_Drv_Cust_i386_v01_11.pkg"
> hangs on "## Installing part 1 of 3." on snv_95. I do not have other
> Solaris versions to experiment with; this is really just a hobby for
> me.
Does the card c
he clients though.
--
albert chin ([EMAIL PROTECTED])
> > start thinking bigger.
> >
> > I'd also like to know if there's any easy way to see the current performance
> > of the system once it's in use? I know VMware has performance monitoring
> > built into the console, bu
ve too much ram
Well, if the server attached to the J series is doing ZFS/NFS,
performance will increase with zfs:zfs_nocacheflush=1. But, without
battery-backed NVRAM, this really isn't "safe". So, for this usage case,
unless the server has battery-backed NVRAM, I don't see how
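For reference, the tunable under discussion goes in /etc/system on the NFS server and takes effect at the next reboot; as noted above, it is only reasonable when the write cache behind the LUNs is nonvolatile:
  set zfs:zfs_nocacheflush = 1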
> There is another version called
> J4400 with 24 disks.
>
> Doc is here :
> http://docs.sun.com/app/docs/coll/j4200
--
albert chin ([EMAIL PROTECTED])
> http://www.vmetro.com/category4304.html, and I don't have any space in
> this server to mount a SSD.
Maybe you can call Vmetro and get the names of some resellers whom you
could call to get pricing info?
--
albert chin ([EMAIL PROTECTED])
yesterday's date to do the incremental
> dump.
Not if you set a ZFS property with the date of the last backup.
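A minimal sketch of that approach (the property name, dataset, and date are hypothetical; a ZFS user property just needs a colon in its name):
  zfs set local:lastbackup=2008-07-15 tank/export/home
  zfs get -H -o value local:lastbackup tank/export/home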
--
albert chin ([EMAIL PROTECTED])
as terrible. I then manually transferred half the LUNs
> to controller A and it started to fly.
http://groups.google.com/group/comp.unix.solaris/browse_frm/thread/59b43034602a7b7f/0b500afc4d62d434?lnk=st&q=#0b500afc4d62d434
--
albert chin ([EMAIL PROTECTED])
something equivalent to the performance of
ZIL disabled with ZIL/RAM. I'd do ZIL with a battery-backed RAM in a
heartbeat if I could find a card. I think others would as well.
--
albert chin ([EMAIL PROTECTED])
pt.
>
> All these things are being worked on, but it might take sometime
> before everything is made aware that yes it's no longer unusual that
> there can be 1+ filesystems on one machine.
But shouldn't sharemgr(1M) be "a
> ZFS to
> do the mirroring.
Why even bother with a H/W RAID array when you won't use the H/W RAID?
Better to find a decent SAS/FC JBOD with cache. Would definitely be
cheaper.
--
albert chin ([EMAIL PROTECTED])
Unfortunately, no inexpensive
cards exist for the common consumer (with ECC memory anyways). If you
convince http://www.micromemory.com/ to sell you one, let us know :)
Set "set zfs:zil_disable = 1" in /etc/system to gauge the type of
improvement you can expect. Don't use this in production.
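For reference, the /etc/system form above needs a reboot; the runtime toggle commonly cited at the time (assuming a build that still has the zil_disable tunable) was the mdb write below, which lasts only until the next reboot. Neither belongs on a production system:
  set zfs:zil_disable = 1
  echo zil_disable/W0t1 | mdb -kw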
behavior? It really makes
> ZFS less than desirable/reliable.
http://blogs.sun.com/eschrock/entry/zfs_and_fma
FMA For ZFS Phase 2 (PSARC/2007/283) was integrated in b68:
http://www.opensolaris.org/os/community/arc/caselog/2007/283/
http://www.opensolaris.org/os/community/on/flag-days
On Tue, Nov 20, 2007 at 11:39:30AM -0600, Albert Chin wrote:
> On Tue, Nov 20, 2007 at 11:10:20AM -0600, [EMAIL PROTECTED] wrote:
> >
> > [EMAIL PROTECTED] wrote on 11/20/2007 10:11:50 AM:
> >
> > > On Tue, Nov 20, 2007 at 10:01:49AM -0600, [EMAIL PROTECTED] wrote:
| tail -1
> > > > 2007-11-20.02:37:13 zpool replace tww
> > > c0t600A0B8000299966059E4668CBD3d0
> > > > c0t600A0B8000299CCC06734741CD4Ed0
> > > >
> > > > So, why did resilvering restart when no zfs operations occurred? I
and now I get:
> > # zpool status tww
> > pool: tww
> >state: DEGRADED
> > status: One or more devices is currently being resilvered. The pool
> will
> > continue to function, possibly in a degraded state.
> > action: Wait for the resilve
graded state.
action: Wait for the resilver to complete.
scrub: resilver in progress, 0.00% done, 134h45m to go
What's going on?
--
albert chin ([EMAIL PROTECTED])
On Mon, Nov 19, 2007 at 06:23:01PM -0800, Eric Schrock wrote:
> You should be able to do a 'zpool detach' of the replacement and then
> try again.
Thanks. That worked.
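For reference, a sketch of the kind of command involved, using the pool and replacement-device names from the error below:
  zpool detach tww c0t600A0B8000299CCC06734741CD4Ed0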
> - Eric
>
> On Mon, Nov 19, 2007 at 08:20:04PM -0600, Albert Chin wrote:
> > Running ON b66
0
cannot replace c0t600A0B8000299966059E4668CBD3d0 with
c0t600A0B8000299CCC06734741CD4Ed0: cannot replace a replacing device
--
albert chin ([EMAIL PROTECTED])
http://mail.opensolaris.org/pipermail/storage-discuss/2007-July/003080.html
You'll need to determine the performance impact of removing NVRAM from
your data LUNs. Don't blindly do it.
--
albert chin ([EMAIL PROTECTED])
On Tue, Sep 18, 2007 at 12:59:02PM -0400, Andy Lubel wrote:
> I think we are very close to using zfs in our production environment.. Now
> that I have snv_72 installed and my pools set up with NVRAM log devices
> things are hauling butt.
How did you get NVRAM log devices?
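For reference, on builds with separate intent log support (b68 and later, per PSARC 2007/171 mentioned elsewhere in these posts), a log device is added with zpool add; a minimal sketch with a hypothetical pool and device name:
  zpool add tank log c3t0d0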
--
albert chin ([EMAIL PROTECTED])
connector and not sure if it is worth the whole effort
> for my personal purposes.
Huh? So your MM-5425CN doesn't fit into a PCI slot?
> Any comment are very appreciated
How did you obtain your card?
--
albert chin ([EMAIL PROTECTED])
On Wed, Jul 18, 2007 at 01:54:23PM -0600, Neil Perrin wrote:
> Albert Chin wrote:
> > On Wed, Jul 18, 2007 at 01:29:51PM -0600, Neil Perrin wrote:
> >> I wrote up a blog on the separate intent log called "slog blog"
> >> which describes the interface; some
well but they
cannot find anyone selling them.
> - Eric
>
> On Wed, Jul 18, 2007 at 01:54:23PM -0600, Neil Perrin wrote:
> >
> >
> > Albert Chin wrote:
> > > On Wed, Jul 18, 2007 at 01:29:51PM -0600, Neil Perrin wrote:
> > >> I wrote up a blog on
on
So, how did you get a "pci Micro Memory pci1332,5425 card" :) I
presume this is the PCI-X version.
--
albert chin ([EMAIL PROTECTED])
On Tue, Jul 10, 2007 at 07:12:35AM -0500, Al Hopper wrote:
> On Mon, 9 Jul 2007, Albert Chin wrote:
>
> > On Tue, Jul 03, 2007 at 11:02:24AM -0700, Bryan Cantrill wrote:
> >>
> >> On Tue, Jul 03, 2007 at 10:26:20AM -0500, Albert Chin wrote:
> >>> It wo
On Tue, Jul 03, 2007 at 11:02:24AM -0700, Bryan Cantrill wrote:
>
> On Tue, Jul 03, 2007 at 10:26:20AM -0500, Albert Chin wrote:
> > It would also be nice for extra hardware (PCI-X, PCIe card) that
> > added NVRAM storage to various sun low/mid-range servers that are
> >
On Tue, Jul 03, 2007 at 11:02:24AM -0700, Bryan Cantrill wrote:
> On Tue, Jul 03, 2007 at 10:26:20AM -0500, Albert Chin wrote:
> > PSARC 2007/171 will be available in b68. Any documentation anywhere on
> > how to take advantage of it?
> >
> > Some of the Sun storage arr
On Tue, Jul 03, 2007 at 10:31:28AM -0700, Richard Elling wrote:
> Albert Chin wrote:
> > On Tue, Jul 03, 2007 at 09:01:50AM -0700, Richard Elling wrote:
> >> Albert Chin wrote:
> >>> Some of the Sun storage arrays contain NVRAM. It would be really nice
> >>
On Tue, Jul 03, 2007 at 09:01:50AM -0700, Richard Elling wrote:
> Albert Chin wrote:
> > Some of the Sun storage arrays contain NVRAM. It would be really nice
> > if the array NVRAM would be available for ZIL storage. It would also
> > be nice for extra hardware (PCI-X, PCIe c
't seem very impressive:
http://www.adtron.com/products/A25fb-SerialATAFlashDisk.html
http://www.sandisk.com/OEM/ProductCatalog(1321)-SanDisk_SSD_SATA_5000_25.aspx
--
albert chin ([EMAIL PROTECTED])
NVRAM storage
to various sun low/mid-range servers that are currently acting as
ZFS/NFS servers. Or maybe someone knows of cheap SSD storage that
could be used for the ZIL? I think several HD's are available with
SCSI/ATA interfaces.
--
albert chin ([EMAIL PROTECTED])
identifier, though
> some features will not be available without an explicit 'zpool
> upgrade'.
> config:
>
> zones ONLINE
> c0d1s5    ONLINE
zpool import lists the pools available for import. Maybe you need to
actually _import_ the pool.
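A minimal sketch, using the pool name from the listing above:
  zpool import zones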
ut?
Yes.
> Also, no Santricity, just Sun's Common Array Manager. Is it possible
> to use both without completely confusing the array?
I think both are ok. CAM is free. Dunno about Santricity.
--
albert chin ([EMAIL PROTECTED])
some info that hasn't been
flushed by ZIL). Even having your file server on a UPS won't help
here.
http://blogs.sun.com/erickustarz/entry/zil_disable discusses some of
the issues affecting zil_disable=1.
We know we get better performance with zil_disable=1 but we're not
taking an
On Fri, May 25, 2007 at 12:14:45AM -0400, Torrey McMahon wrote:
> Albert Chin wrote:
> >On Thu, May 24, 2007 at 11:55:58AM -0700, Grant Kelly wrote:
> >
> >
> >>I'm getting really poor write performance with ZFS on a RAID5 volume
> >>(5 disks) from a
0GB/10K drives, we get ~46MB/s on a
single-drive RAID-0 array, ~83MB/s on a 4-disk RAID-0 array w/128k
stripe, and ~69MB/s on a seven-disk RAID-5 array w/128k stripe.
--
albert chin ([EMAIL PROTECTED])
On Mon, May 21, 2007 at 13:23:48 -0800, Marion Hakanson wrote:
>Albert Chin wrote:
>> Why can't the NFS performance match that of SSH?
>
> My first guess is the NFS vs array cache-flush issue. Have you
> configured the 6140 to ignore SYNCHRONIZE_CACHE requests? That'
# mount file-server:/opt/test /mnt
# time tar cf - gcc343 | (cd /mnt; tar xpf - )
...
(old) 419721216 bytes in 1:08.65 => 6113928.86 bytes/sec
(new) 419721216 bytes in 0:44.67 => 9396042.44 bytes/sec
>
>
> On 22/05/07, Albert Chin
> <[EMAIL PROTECTE
-oCompression
and/or -oCompressionLevel?
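For comparison, the same copy over ssh with compression explicitly toggled (a sketch reusing the host and paths from the test above; in OpenSSH, CompressionLevel only applies to protocol 1, so Compression=yes/no is the switch that matters for protocol 2):
  time tar cf - gcc343 | ssh -o Compression=no file-server 'cd /opt/test && tar xpf -'
  time tar cf - gcc343 | ssh -o Compression=yes file-server 'cd /opt/test && tar xpf -'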
--
albert chin ([EMAIL PROTECTED])
On Mon, May 21, 2007 at 06:11:36PM -0500, Nicolas Williams wrote:
> On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote:
> > But still, how is tar/SSH any more multi-threaded than tar/NFS?
>
> It's not that it is, but that NFS sync semantics and ZFS sync
> sem
On Mon, May 21, 2007 at 04:55:35PM -0600, Robert Thurlow wrote:
> Albert Chin wrote:
>
> >I think the bigger problem is the NFS performance penalty so we'll go
> >lurk somewhere else to find out what the problem is.
>
> Is this with Solaris 10 or OpenSolaris on t
On Mon, May 21, 2007 at 02:55:18PM -0600, Robert Thurlow wrote:
> Albert Chin wrote:
>
> >Why can't the NFS performance match that of SSH?
>
> One big reason is that the sending CPU has to do all the comparisons to
> compute the list of files to be sent - it has to f
bytes/sec
The network is 100MB. /etc/system on the file server is:
set maxphys = 0x80
set ssd:ssd_max_throttle = 64
set zfs:zfs_nocacheflush = 1
Why can't the NFS performance match that of SSH?
--
albert chin ([EMAIL PROTECTED])
--
albert chin ([EMAIL PROTECTED])
bre channel.
The 6140 controller unit has either 2GB or 4GB cache. Does the 6140
expansion shelf have cache as well or is the cache in the controller
unit used for all expansion shelves?
--
albert chin ([EMAIL PROTECTED])
" and take no time so might as
well just snapshot the file system.
--
albert chin ([EMAIL PROTECTED])
possible to allocate some disks from the 6140 array to ZFS on the
X4100 for the purpose of migrating data from the appliance to ZFS?
--
albert chin ([EMAIL PROTECTED])
> and a rotating spare could disrupt this organization, but would it be
> useful at all?
Agami Systems has the concept of "Enterprise Sparing", where the hot
spare is distributed amongst data drives in the array. When a failure
occurs, the rebuild occurs in parallel across _al
on top of that or a JBOD with ZFS RAIDZ/RAIDZ2 on top of
that.
--
albert chin ([EMAIL PROTECTED])
n making the
> suggestion ... right?
Well, when you buy disk for the Sun 5320 NAS Appliance, you get a
Controller Unit shelf and, if you expand storage, an Expansion Unit
shelf that connects to the Controller Unit. Maybe the Expansion Unit
shelf is a JBOD 6140?
--
albert chin ([EMAIL PROTECTED])
On Thu, Jan 25, 2007 at 10:16:47AM -0500, Torrey McMahon wrote:
> Albert Chin wrote:
> >On Wed, Jan 24, 2007 at 10:19:29AM -0800, Frank Cusack wrote:
> >
> >>On January 24, 2007 10:04:04 AM -0800 Bryan Cantrill <[EMAIL PROTECTED]>
> >>wrote:
> >
If you wanted to use a 6140
with ZFS, and really wanted JBOD, your only choice would be a RAID 0
config on the 6140?
--
albert chin ([EMAIL PROTECTED])
On Tue, Jan 16, 2007 at 01:28:04PM -0800, Eric Kustarz wrote:
> Albert Chin wrote:
> >On Mon, Jan 15, 2007 at 10:55:23AM -0600, Albert Chin wrote:
> >
> >>I have no hands-on experience with ZFS but have a question. If the
> >>file server running ZFS expor
On Mon, Jan 15, 2007 at 10:55:23AM -0600, Albert Chin wrote:
> I have no hands-on experience with ZFS but have a question. If the
> file server running ZFS exports the ZFS file system via NFS to
> clients, based on previous messages on this list, it is not possible
> for an NFS cli
client access to the remote ZFS file system and the clone?
--
albert chin ([EMAIL PROTECTED])