--
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Mon, Dec 12, 2011 at 02:40:52PM -0500, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.
wrote:
please check out the ZFS appliance 7120 spec: 2.4GHz / 24GB memory and
ZIL (SSD)
maybe try the ZFS simulator SW
Good point. Thanks.
regards
On 12/12/2011 2:28 PM, Albert Chin wrote:
We're preparing
On Mon, Dec 12, 2011 at 03:01:08PM -0500, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.
wrote:
4c@2.4ghz
Yep, that's the plan. Thanks.
On 12/12/2011 2:44 PM, Albert Chin wrote:
On Mon, Dec 12, 2011 at 02:40:52PM -0500, Hung-Sheng Tsao (Lao Tsao 老曹)
Ph.D. wrote:
please check out the ZFS appliance
zfs snapshot tww/opt/chroots/hppa1.1-hp-hpux11.11@ab
zfs clone tww/opt/chroots/hppa1.1-hp-hpux11.11@ab \
  tww/opt/chroots/ab/hppa1.1-hp-hpux11.11
...
and then perform another zfs send/receive, the error above occurs. Why?
http://opensolaris.org/jive/thread.jspa?threadID=115503&tstart=0
a month. Can I speed this up?
It's not immediately obvious what the cause is. Maybe the server running
zfs send has slow MB/s performance reading from disk. Maybe the network.
Or maybe the remote system. This might help:
http://tinyurl.com/yl653am
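A quick way to isolate the bottleneck (a sketch; dataset, snapshot, and
host names are placeholders) is to time each stage of the pipeline
separately:

```shell
# 1. Raw read/send throughput from the source pool, no network involved
time zfs send tank/fs@snap > /dev/null

# 2. Add the network and ssh overhead, but skip the receive
time zfs send tank/fs@snap | ssh remotehost 'cat > /dev/null'

# 3. The full path, including the remote zfs receive
time zfs send tank/fs@snap | ssh remotehost 'zfs receive -d backup'
```

Whichever stage first drops to the slow rate points at the culprit: the
local disks, the wire, or the receiving system.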
level (I believe it's S10u6) and ZFS recordsize.
Any suggestions will be appreciated.
regards, Jeff
On Mon, Oct 19, 2009 at 09:02:20PM -0500, Albert Chin wrote:
On Mon, Oct 19, 2009 at 03:31:46PM -0700, Matthew Ahrens wrote:
Thanks for reporting this. I have fixed this bug (6822816) in build
127.
Thanks. I just installed OpenSolaris Preview based on 125 and will
attempt to apply
Albert Chin wrote:
Running snv_114 on an X4100M2 connected to a 6140. Made a clone of a
snapshot a few days ago:
# zfs snapshot a...@b
# zfs clone a...@b tank/a
# zfs clone a...@b tank/b
The system started panicing after I tried:
# zfs snapshot tank/b...@backup
So, I destroyed tank/b
3.08, etc., but such a dramatic loss of performance probably has a
single cause. Is anyone willing to speculate?
Maybe this will help:
http://mail.opensolaris.org/pipermail/storage-discuss/2009-September/007118.html
Without doing a zpool scrub, what's the quickest way to find files in a
filesystem with cksum errors? Iterating over all files with find takes
quite a bit of time. Maybe there's some zdb fu that will perform the
check for me?
This should work but it does not verify the redundant metadata. For
example, the duplicate metadata copy might be corrupt but the problem
is not detected since it did not happen to be used.
Too bad we cannot scrub a dataset/object.
On Mon, Sep 28, 2009 at 10:16:20AM -0700, Richard Elling wrote:
On Sep 28, 2009, at 3:42 PM, Albert Chin wrote:
On Mon, Sep 28, 2009 at 12:09:03PM -0500, Bob Friesenhahn wrote:
On Mon, 28 Sep 2009, Richard Elling wrote:
Scrub could be faster, but you can try:
tar cf - . > /dev/null
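Expanding that into a sketch (pool and mountpoint names are
illustrative): reading every file forces ZFS to verify the checksums of
all blocks the dataset actually references, and any persistent failures
are then listed by zpool status:

```shell
cd /tank/fs
tar cf - . > /dev/null   # read (and checksum-verify) every file
zpool status -v tank     # -v lists files affected by persistent errors
```

As noted, this does not verify redundant metadata copies the way a full
scrub does.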
properties are
not sent?
USED AVAIL REFER MOUNTPOINT
t/opt/vms/images/vios/near.img 14.5G 2.42T 14.5G -
snv119# zfs get usedbydataset t/opt/vms/images/vios/near.img
NAME                            PROPERTY       VALUE  SOURCE
t/opt/vms/images/vios/near.img usedbydataset 14.5G -
On Mon, Sep 28, 2009 at 07:33:56PM -0500, Albert Chin wrote:
When transferring a volume between servers, is it expected that the
usedbydataset property should be the same on both? If not, is it cause
for concern?
snv114# zfs list tww/opt/vms/images/vios/near.img
NAME
filesystem
For the property list, run: zfs set|get
For the delegated permission list, run: zfs allow|unallow
root@xxx:~# uname -a
SunOS xxx 5.10 Generic_13-03 sun4u sparc SUNW,Sun-Fire-V890
What's wrong?
Looks like -u was a recent addition.
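On builds that do support it, -u keeps the received filesystem from
being mounted on the receiving side (a sketch; dataset and host names
are placeholders):

```shell
zfs send tank/fs@snap | ssh backuphost 'zfs receive -u backup/fs'
```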
just in case.
On Fri, Sep 25, 2009 at 05:21:23AM +, Albert Chin wrote:
[[ snip snip ]]
We really need to import this pool. Is there a way around this? We do
have snv_114 source on the system if we need to make changes to
usr/src/uts/common/fs/zfs/dsl_dataset.c. It seems like the zfs destroy
transaction never completed and it is being replayed, causing the
panic. This cycle continues endlessly.
zfs:txg_sync_thread+265 ()
ff00104c0c50 unix:thread_start+8 ()
System is a X4100M2 running snv_114.
Any ideas?
0
c4t6d0 ONLINE 0 0 0
c4t7d0 ONLINE 0 0 0
errors: 855 data errors, use '-v' for a list
INUSE currently in use
c6t600A0B800029996605C84668F461d0 INUSE currently in use
c6t600A0B80002999660A454A93CEDBd0 AVAIL
c6t600A0B80002999660ADA4A9CF2EDd0 AVAIL
On Wed, Aug 26, 2009 at 02:33:39AM -0500, Albert Chin wrote:
# cat /etc/release
Solaris Express Community Edition snv_105 X86
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms
this down, just lots of checksum
errors.
So, on snv_121, can you read the files with checksum errors? Is it
simply the reporting mechanism that is wrong or are the files really
damaged?
: http://www.sun.com/msg/ZFS-8000-8A
scrub: resilver in progress for 0h11m, 2.82% done, 6h21m to go
config:
...
So, why is a resilver in progress when I asked for a scrub?
\
c6t600A0B800029996609EE4A89DA51d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c6t600A0B800029996609EE4A89DA51d0s0 is part of active ZFS
pool tww. Please see zpool(1M).
So, what is going on?
On Tue, Aug 25, 2009 at 06:05:16AM -0500, Albert Chin wrote:
[[ snip snip ]]
After the resilver completed:
# zpool status tww
pool: tww
state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action
does the third raidz2 vdev report 34.0K CKSUM errors?
The number of data errors appears to be increasing as well as the
resilver process continues.
On Mon, Aug 24, 2009 at 02:01:39PM -0500, Bob Friesenhahn wrote:
On Mon, 24 Aug 2009, Albert Chin wrote:
Seems some of the new drives are having problems, resulting in CKSUM
errors. I don't understand why I have so many data errors though. Why
does the third raidz2 vdev report 34.0K CKSUM
On Mon, Nov 24, 2008 at 08:43:18AM -0800, Erik Trimble wrote:
I _really_ wish rsync had an option to copy in place or something like
that, where the updates are made directly to the file, rather than a
temp copy.
Isn't this what --inplace does?
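For reference (paths are illustrative), --inplace makes rsync write
changed data directly into the existing destination file instead of
building a temporary copy and renaming it over the original:

```shell
# --inplace: update destination files in place; on ZFS the unchanged
# blocks stay shared with earlier snapshots instead of being rewritten
rsync -av --inplace /export/src/ /export/dst/
```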
umem_Sol_Drv_Cust_i386_v01_11.pkg
hangs on ## Installing part 1 of 3. on snv_95. I do not have other
Solaris versions to experiment with; this is really just a hobby for
me.
Does the card come with any programming specs to help debug the driver?
series is doing ZFS/NFS,
performance will increase with zfs:zfs_nocacheflush=1. But, without
battery-backed NVRAM, this really isn't safe. So, for this usage case,
unless the server has battery-backed NVRAM, I don't see how the J series
is good for ZFS/NFS usage.
J4400 with 24 disks.
Doc is here :
http://docs.sun.com/app/docs/coll/j4200
, and I don't have any space in
this server to mount a SSD.
Maybe you can call Vmetro and get the names of some resellers whom you
could call to get pricing info?
a ZFS property with the date of the last backup.
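A sketch of that (the property name is illustrative; user properties
just need a colon in the name to distinguish them from native ones):

```shell
# Stamp the dataset with the time of the last backup
zfs set com.example:lastbackup="$(date -u '+%Y-%m-%dT%H:%MZ')" tank/home

# Read it back later
zfs get -H -o value com.example:lastbackup tank/home
```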
equivalent to the performance of
ZIL disabled with ZIL/RAM. I'd do ZIL with a battery-backed RAM in a
heartbeat if I could find a card. I think others would as well.
on, but it might take some time before everything is made aware that
yes, it's no longer unusual that there can be 1+ filesystems on one
machine.
But shouldn't sharemgr(1M) be aware? It's relatively new.
, no inexpensive
cards exist for the common consumer (with ECC memory anyways). If you
convince http://www.micromemory.com/ to sell you one, let us know :)
Set zfs:zil_disable = 1 in /etc/system to gauge the type of
improvement you can expect. Don't use this in production, though.
desirable/reliable.
http://blogs.sun.com/eschrock/entry/zfs_and_fma
FMA For ZFS Phase 2 (PSARC/2007/283) was integrated in b68:
http://www.opensolaris.org/os/community/arc/caselog/2007/283/
http://www.opensolaris.org/os/community/on/flag-days/all/
On Tue, Nov 20, 2007 at 11:39:30AM -0600, Albert Chin wrote:
On Tue, Nov 20, 2007 at 11:10:20AM -0600, [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote on 11/20/2007 10:11:50 AM:
On Tue, Nov 20, 2007 at 10:01:49AM -0600, [EMAIL PROTECTED] wrote:
Resilver and scrub are broken
.
scrub: resilver in progress, 0.00% done, 134h45m to go
What's going on?
is currently being resilvered. The pool will continue to function,
possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress, 0.00% done, 134h45m to go
What's going on?
replace c0t600A0B8000299966059E4668CBD3d0 with
c0t600A0B8000299CCC06734741CD4Ed0: cannot replace a replacing device
On Mon, Nov 19, 2007 at 06:23:01PM -0800, Eric Schrock wrote:
You should be able to do a 'zpool detach' of the replacement and then
try again.
Thanks. That worked.
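For the archives, the sequence that worked here, as a sketch (the pool
name tww is assumed from the poster's other messages; device names are
from the original replace attempt):

```shell
# Detach the stuck replacement device, then redo the replace
zpool detach tww c0t600A0B8000299CCC06734741CD4Ed0
zpool replace tww c0t600A0B8000299966059E4668CBD3d0 \
    c0t600A0B8000299CCC06734741CD4Ed0
```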
- Eric
On Mon, Nov 19, 2007 at 08:20:04PM -0600, Albert Chin wrote:
Running ON b66 and had a drive fail. Ran 'zfs replace
-July/003080.html
You'll need to determine the performance impact of removing NVRAM from
your data LUNs. Don't blindly do it.
On Tue, Sep 18, 2007 at 12:59:02PM -0400, Andy Lubel wrote:
I think we are very close to using zfs in our production environment.. Now
that I have snv_72 installed and my pools set up with NVRAM log devices
things are hauling butt.
How did you get NVRAM log devices?
if it is worth the whole effort
for my personal purposes.
Huh? So your MM-5425CN doesn't fit into a PCI slot?
Any comments are very appreciated
How did you obtain your card?
Memory pci1332,5425 card :) I
presume this is the PCI-X version.
anyone selling them.
- Eric
On Wed, Jul 18, 2007 at 01:54:23PM -0600, Neil Perrin wrote:
Albert Chin wrote:
On Wed, Jul 18, 2007 at 01:29:51PM -0600, Neil Perrin wrote:
I wrote up a blog on the separate intent log called slog blog
which describes the interface; some performance results; and
general status:
http
On Tue, Jul 10, 2007 at 07:12:35AM -0500, Al Hopper wrote:
On Mon, 9 Jul 2007, Albert Chin wrote:
On Tue, Jul 03, 2007 at 11:02:24AM -0700, Bryan Cantrill wrote:
On Tue, Jul 03, 2007 at 10:26:20AM -0500, Albert Chin wrote:
It would also be nice for extra hardware (PCI-X, PCIe card
On Tue, Jul 03, 2007 at 11:02:24AM -0700, Bryan Cantrill wrote:
On Tue, Jul 03, 2007 at 10:26:20AM -0500, Albert Chin wrote:
It would also be nice for extra hardware (PCI-X, PCIe card) that
added NVRAM storage to various sun low/mid-range servers that are
currently acting as ZFS/NFS
NVRAM storage
to various sun low/mid-range servers that are currently acting as
ZFS/NFS servers. Or maybe someone knows of cheap SSD storage that
could be used for the ZIL? I think several HD's are available with
SCSI/ATA interfaces.
/ProductCatalog(1321)-SanDisk_SSD_SATA_5000_25.aspx
On Tue, Jul 03, 2007 at 09:01:50AM -0700, Richard Elling wrote:
Albert Chin wrote:
Some of the Sun storage arrays contain NVRAM. It would be really nice
if the array NVRAM would be available for ZIL storage. It would also
be nice for extra hardware (PCI-X, PCIe card) that added NVRAM
On Tue, Jul 03, 2007 at 10:31:28AM -0700, Richard Elling wrote:
Albert Chin wrote:
On Tue, Jul 03, 2007 at 09:01:50AM -0700, Richard Elling wrote:
Albert Chin wrote:
Some of the Sun storage arrays contain NVRAM. It would be really nice
if the array NVRAM would be available for ZIL storage
On Tue, Jul 03, 2007 at 11:02:24AM -0700, Bryan Cantrill wrote:
On Tue, Jul 03, 2007 at 10:26:20AM -0500, Albert Chin wrote:
PSARC 2007/171 will be available in b68. Any documentation anywhere on
how to take advantage of it?
Some of the Sun storage arrays contain NVRAM. It would
some features will not be available without an explicit 'zpool
upgrade'.
config:
zones ONLINE
c0d1s5ONLINE
zpool import lists the pools available for import. Maybe you need to
actually _import_ the pool first before you can upgrade.
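That is, as a sketch (pool name taken from the status output above):

```shell
zpool import zones    # make the pool active on this system
zpool upgrade zones   # then bring it to the current on-disk version
```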
On Fri, May 25, 2007 at 12:14:45AM -0400, Torrey McMahon wrote:
Albert Chin wrote:
On Thu, May 24, 2007 at 11:55:58AM -0700, Grant Kelly wrote:
I'm getting really poor write performance with ZFS on a RAID5 volume
(5 disks) from a storagetek 6140 array. I've searched the web
your file server on a UPS won't help
here.
http://blogs.sun.com/erickustarz/entry/zil_disable discusses some of
the issues affecting zil_disable=1.
We know we get better performance with zil_disable=1 but we're not
taking any chances.
-Andy
On 5/24/07 4:16 PM, Albert Chin
[EMAIL
confusing the array?
I think both are ok. CAM is free. Dunno about Santricity.
array w/128k
stripe, and ~69MB/s on a seven-disk RAID-5 array w/128k stripe.
On Mon, May 21, 2007 at 13:23:48 -0800, Marion Hakanson wrote:
Albert Chin wrote:
Why can't the NFS performance match that of SSH?
My first guess is the NFS vs array cache-flush issue. Have you
configured the 6140 to ignore SYNCHRONIZE_CACHE requests? That'll
make a huge difference for NFS
is 100MB. /etc/system on the file server is:
set maxphys = 0x80
set ssd:ssd_max_throttle = 64
set zfs:zfs_nocacheflush = 1
Why can't the NFS performance match that of SSH?
On Mon, May 21, 2007 at 02:55:18PM -0600, Robert Thurlow wrote:
Albert Chin wrote:
Why can't the NFS performance match that of SSH?
One big reason is that the sending CPU has to do all the comparisons to
compute the list of files to be sent - it has to fetch the attributes
from both local
On Mon, May 21, 2007 at 06:11:36PM -0500, Nicolas Williams wrote:
On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote:
But still, how is tar/SSH any more multi-threaded than tar/NFS?
It's not that it is, but that NFS sync semantics and ZFS sync
semantics conspire against single
On Mon, May 21, 2007 at 04:55:35PM -0600, Robert Thurlow wrote:
Albert Chin wrote:
I think the bigger problem is the NFS performance penalty so we'll go
lurk somewhere else to find out what the problem is.
Is this with Solaris 10 or OpenSolaris on the client as well?
Client is RHEL 4
and/or -oCompressionLevel?
just snapshot the file system.
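For example (dataset name illustrative), a dated snapshot is atomic and
nearly free to take:

```shell
zfs snapshot tank/home@backup-$(date +%Y%m%d)
zfs list -t snapshot -r tank/home
```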
). Is it
possible to allocate some disks from the 6140 array to ZFS on the
X4100 for the purpose of migrating data from the appliance to ZFS?
the hot
spare is distributed amongst data drives in the array. When a failure
occurs, the rebuild occurs in parallel across _all_ drives in the
array:
http://www.issidata.com/specs/agami/enterprise-classreliability.pdf
of
that.
be a RAID 0
config on the 6140?
On Thu, Jan 25, 2007 at 10:16:47AM -0500, Torrey McMahon wrote:
Albert Chin wrote:
On Wed, Jan 24, 2007 at 10:19:29AM -0800, Frank Cusack wrote:
On January 24, 2007 10:04:04 AM -0800 Bryan Cantrill [EMAIL PROTECTED]
wrote:
On Wed, Jan 24, 2007 at 09:46:11AM -0800, Moazam Raja wrote
). Well - there's no harm in making the
suggestion ... right?
Well, when you buy disk for the Sun 5320 NAS Appliance, you get a
Controller Unit shelf and, if you expand storage, an Expansion Unit
shelf that connects to the Controller Unit. Maybe the Expansion Unit
shelf is a JBOD 6140?
On Tue, Jan 16, 2007 at 01:28:04PM -0800, Eric Kustarz wrote:
Albert Chin wrote:
On Mon, Jan 15, 2007 at 10:55:23AM -0600, Albert Chin wrote:
I have no hands-on experience with ZFS but have a question. If the
file server running ZFS exports the ZFS file system via NFS to
clients, based
access to the remote ZFS file system and the clone?