Re: [zfs-discuss] CPU sizing for ZFS/iSCSI/NFS server

2011-12-12 Thread Albert Chin
On Mon, Dec 12, 2011 at 03:01:08PM -0500, "Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D." wrote: > 4c@2.4ghz Yep, that's the plan. Thanks. > On 12/12/2011 2:44 PM, Albert Chin wrote: > >On Mon, Dec 12, 2011 at 02:40:52PM -0500, "Hung-Sheng Tsao (Lao Tsao 老曹) > >Ph.D

Re: [zfs-discuss] CPU sizing for ZFS/iSCSI/NFS server

2011-12-12 Thread Albert Chin
On Mon, Dec 12, 2011 at 02:40:52PM -0500, "Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D." wrote: > please check out the ZFS appliance 7120 spec 2.4Ghz /24GB memory and > ZIL(SSD) > may be try the ZFS simulator SW Good point. Thanks. > regards > > On 12/12/2011 2:28 PM,

[zfs-discuss] CPU sizing for ZFS/iSCSI/NFS server

2011-12-12 Thread Albert Chin
Recommendations? -- albert chin (ch...@thewrittenword.com) ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] cannot receive new filesystem stream: invalid backup stream

2009-12-27 Thread Albert Chin
oots/hppa1.1-hp-hpux11...@ab zfs snapshot tww/opt/chroots/hppa1.1-hp-hpux11...@ab zfs clone tww/opt/chroots/hppa1.1-hp-hpux11...@ab tww/opt/chroots/ab/hppa1.1-hp-hpux11.11 ... and then perform another zfs send/receive, the error above occurs. Why? -- albert chin (ch...@the
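A likely cause in this situation: a clone can only be sent as an incremental stream from its origin snapshot, and that origin must already exist on the receiving pool, otherwise `zfs receive` rejects the stream as an invalid backup stream. A minimal sketch, with hypothetical dataset and host names (not the truncated names above):

```shell
# Sketch (hypothetical names): send the clone's origin snapshot first,
# then send the clone itself as an incremental from that origin.
zfs snapshot tank/chroots/base@ab
zfs clone tank/chroots/base@ab tank/chroots/ab/work
zfs snapshot tank/chroots/ab/work@s1

zfs send tank/chroots/base@ab | ssh backup zfs receive -d tww
zfs send -i tank/chroots/base@ab tank/chroots/ab/work@s1 | \
    ssh backup zfs receive -d tww
```

If the origin snapshot is missing on the receiving side, the second receive is the one that fails.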

Re: [zfs-discuss] heads up on SXCE build 125 (LU + mirrored root pools)

2009-11-05 Thread Albert Chin
the thread: http://opensolaris.org/jive/thread.jspa?threadID=115503&tstart=0 -- albert chin (ch...@thewrittenword.com)

Re: [zfs-discuss] STK 2540 and Ignore Cache Sync (ICS)

2009-10-26 Thread Albert Chin
p://www.simplesystems.org/users/bfriesen/ > GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] "zfs send..." too slow?

2009-10-26 Thread Albert Chin
ans it will > take 100 hours. Is this normal? If I had 30TB to back up, it would > take 1000 hours, which is more than a month. Can I speed this up? It's not immediately obvious what the cause is. Maybe the server running zfs send has slow MB/s performance reading from disk. Maybe the
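The back-of-the-envelope rate behind such an estimate is easy to check. The figures below (3 TB in 100 hours) are illustrative assumptions, not the poster's exact numbers:

```shell
# Hypothetical figures: 3 TB sent in 100 hours implies this sustained rate.
bytes=3298534883328   # 3 * 2^40
hours=100
awk -v b="$bytes" -v h="$hours" \
    'BEGIN { printf "%.1f MB/s\n", b / (h * 3600) / (1024 * 1024) }'
# prints 8.7 MB/s
```

A rate that low from a healthy pool suggests the bottleneck is the reading side or the transport, not `zfs send` itself.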

Re: [zfs-discuss] Performance problems with Thumper and >7TB ZFS pool using RAIDZ2

2009-10-24 Thread Albert Chin
> ZFS ACL activity shown by DTrace. I wonder if there is a lot of sync >> I/O that would benefit from separately defined ZILs (whether SSD or >> not), so I've asked them to look for fsync activity. >> >> Data collected thus far is listed below. I've asked f

Re: [zfs-discuss] Help! System panic when pool imported

2009-10-20 Thread Albert Chin
On Mon, Oct 19, 2009 at 09:02:20PM -0500, Albert Chin wrote: > On Mon, Oct 19, 2009 at 03:31:46PM -0700, Matthew Ahrens wrote: > > Thanks for reporting this. I have fixed this bug (6822816) in build > > 127. > > Thanks. I just installed OpenSolaris Preview based on 125

Re: [zfs-discuss] Help! System panic when pool imported

2009-10-19 Thread Albert Chin
t; --matt > > Albert Chin wrote: >> Running snv_114 on an X4100M2 connected to a 6140. Made a clone of a >> snapshot a few days ago: >> # zfs snapshot a...@b >> # zfs clone a...@b tank/a >> # zfs clone a...@b tank/b >> >> The system started pan

Re: [zfs-discuss] iscsi/comstar performance

2009-10-13 Thread Albert Chin
- switching to Comstar, snv124, VBox > 3.08, etc., but such a dramatic loss of performance probably has a > single cause. Is anyone willing to speculate? Maybe this will help: http://mail.opensolaris.org/pipermail/storage-discuss/2009-September/007118.html -- a

Re: [zfs-discuss] zfs receive should allow to keep received system

2009-09-28 Thread Albert Chin
receive [-vnF] -d > > For the property list, run: zfs set|get > > For the delegated permission list, run: zfs allow|unallow > r...@xxx:~# uname -a > SunOS xxx 5.10 Generic_13-03 sun4u sparc SUNW,Sun-Fire-V890 > > What's wrong? Looks like -u wa

Re: [zfs-discuss] Should usedbydataset be the same after zfs send/recv for a volume?

2009-09-28 Thread Albert Chin
On Mon, Sep 28, 2009 at 07:33:56PM -0500, Albert Chin wrote: > When transferring a volume between servers, is it expected that the > usedbydataset property should be the same on both? If not, is it cause > for concern? > > snv114# zfs list tww/opt/vms/images/vios/

[zfs-discuss] Should usedbydataset be the same after zfs send/recv for a volume?

2009-09-28 Thread Albert Chin
USED AVAIL REFER MOUNTPOINT t/opt/vms/images/vios/near.img 14.5G 2.42T 14.5G - snv119# zfs get usedbydataset t/opt/vms/images/vios/near.img NAME PROPERTY VALUE SOURCE t/opt/vms/images/vios/near.img usedbydataset 14.5G - -- albert chin (ch

[zfs-discuss] refreservation not transferred by zfs send when sending a volume?

2009-09-28 Thread Albert Chin
properties are not sent? -- albert chin (ch...@thewrittenword.com)

Re: [zfs-discuss] Quickest way to find files with cksum errors without doing scrub

2009-09-28 Thread Albert Chin
On Mon, Sep 28, 2009 at 10:16:20AM -0700, Richard Elling wrote: > On Sep 28, 2009, at 3:42 PM, Albert Chin wrote: > >> On Mon, Sep 28, 2009 at 12:09:03PM -0500, Bob Friesenhahn wrote: >>> On Mon, 28 Sep 2009, Richard Elling wrote: >>>> >>>> Scrub co

Re: [zfs-discuss] Quickest way to find files with cksum errors without doing scrub

2009-09-28 Thread Albert Chin
a. >> So you simply need to read the data. > > This should work but it does not verify the redundant metadata. For > example, the duplicate metadata copy might be corrupt but the problem > is not detected since it did not happen to be used. Too bad we cannot scrub a data

[zfs-discuss] Quickest way to find files with cksum errors without doing scrub

2009-09-28 Thread Albert Chin
Without doing a zpool scrub, what's the quickest way to find files in a filesystem with cksum errors? Iterating over all files with "find" takes quite a bit of time. Maybe there's some zdb fu that will perform the check for me? -- albert chin (ch..
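For errors ZFS has already detected, no new I/O is needed; the caveat is that latent errors in blocks that have never been read will not appear. A sketch, with a hypothetical pool name:

```shell
# Sketch (pool name hypothetical): list files with already-detected
# checksum errors, without a scrub and without new I/O.
zpool status -v tank

# Latent errors in file data only surface once the blocks are read,
# e.g. by forcing a read of every file (this is the slow path the
# poster wants to avoid, and it still skips redundant metadata):
find /tank -type f -exec cp {} /dev/null \;
```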

Re: [zfs-discuss] Help! System panic when pool imported

2009-09-27 Thread Albert Chin
l. I'd pop up there and ask. There are somewhat similar bug reports at bugs.opensolaris.org. I'd post a bug report just in case. -- albert chin (ch...@thewrittenword.com)

Re: [zfs-discuss] Help! System panic when pool imported

2009-09-27 Thread Albert Chin
I started getting this as > well.. My Mirror array is unaffected. > > snv111b (2009.06 release) What does the panic dump look like? -- albert chin (ch...@thewrittenword.com)

Re: [zfs-discuss] Help! System panic when pool imported

2009-09-24 Thread Albert Chin
On Fri, Sep 25, 2009 at 05:21:23AM +, Albert Chin wrote: > [[ snip snip ]] > > We really need to import this pool. Is there a way around this? We do > have snv_114 source on the system if we need to make changes to > usr/src/uts/common/fs/zfs/dsl_dataset.c. It seems like the

[zfs-discuss] Help! System panic when pool imported

2009-09-24 Thread Albert Chin
t seems like the "zfs destroy" transaction never completed and it is being replayed, causing the panic. This cycle continues endlessly. -- albert chin (ch...@thewrittenword.com)

[zfs-discuss] zfs snapshot -r panic on b114

2009-09-23 Thread Albert Chin
c40 zfs:txg_sync_thread+265 () ff00104c0c50 unix:thread_start+8 () System is a X4100M2 running snv_114. Any ideas? -- albert chin (ch...@thewrittenword.com)

[zfs-discuss] How to recover from "can't open objset", "cannot iterate filesystems"?

2009-09-21 Thread Albert Chin
0 0 c4t6d0 ONLINE 0 0 0 c4t7d0 ONLINE 0 0 0 errors: 855 data errors, use '-v' for a list -- albert chin (ch...@thewrittenword.com)

[zfs-discuss] zpool replace complete but old drives not detached

2009-09-06 Thread Albert Chin
INUSE currently in use c6t600A0B800029996605C84668F461d0 INUSE currently in use c6t600A0B80002999660A454A93CEDBd0 AVAIL c6t600A0B80002999660ADA4A9CF2EDd0 AVAIL -- albert chin (ch...@thewrittenword.com

Re: [zfs-discuss] zpool scrub started resilver, not scrub (DTL non-empty?)

2009-09-02 Thread Albert Chin
On Mon, Aug 31, 2009 at 02:40:54PM -0500, Albert Chin wrote: > On Wed, Aug 26, 2009 at 02:33:39AM -0500, Albert Chin wrote: > > # cat /etc/release > > Solaris Express Community Edition snv_105 X86 > >Copyright 2008 Sun Microsystems, Inc.

Re: [zfs-discuss] zpool scrub started resilver, not scrub

2009-08-31 Thread Albert Chin
On Wed, Aug 26, 2009 at 02:33:39AM -0500, Albert Chin wrote: > # cat /etc/release > Solaris Express Community Edition snv_105 X86 >Copyright 2008 Sun Microsystems, Inc. All Rights Reserved. > Use is subject to l

Re: [zfs-discuss] snv_110 -> snv_121 produces checksum errors on Raid-Z pool

2009-08-27 Thread Albert Chin
ation that might help track this down, just lots of checksum > errors. So, on snv_121, can you read the files with checksum errors? Is it simply the reporting mechanism that is wrong or are the files really damaged? -- albert chin (ch...@thewrittenword.com)

[zfs-discuss] zpool scrub started resilver, not scrub

2009-08-26 Thread Albert Chin
up. see: http://www.sun.com/msg/ZFS-8000-8A scrub: resilver in progress for 0h11m, 2.82% done, 6h21m to go config: ... So, why is a resilver in progress when I asked for a scrub? -- albert chin (ch...@thewrittenword.com)

Re: [zfs-discuss] Resilver complete, but device not replaced, odd zpool status output

2009-08-25 Thread Albert Chin
On Tue, Aug 25, 2009 at 06:05:16AM -0500, Albert Chin wrote: > [[ snip snip ]] > > After the resilver completed: > # zpool status tww > pool: tww > state: DEGRADED > status: One or more devices has experienced an error resulting in data > corruption. Appl

[zfs-discuss] Resilver complete, but device not replaced, odd zpool status output

2009-08-25 Thread Albert Chin
0299CCC0A194A89E634d0 \ c6t600A0B800029996609EE4A89DA51d0 invalid vdev specification use '-f' to override the following errors: /dev/dsk/c6t600A0B800029996609EE4A89DA51d0s0 is part of active ZFS pool tww. Please see zpool(1M). So, what is going on? -- al

Re: [zfs-discuss] Why so many data errors with raidz2 config and one failing drive?

2009-08-24 Thread Albert Chin
On Mon, Aug 24, 2009 at 02:01:39PM -0500, Bob Friesenhahn wrote: > On Mon, 24 Aug 2009, Albert Chin wrote: >> >> Seems some of the new drives are having problems, resulting in CKSUM >> errors. I don't understand why I have so many data errors though. Why >> does th

[zfs-discuss] Why so many data errors with raidz2 config and one failing drive?

2009-08-24 Thread Albert Chin
ors though. Why does the third raidz2 vdev report 34.0K CKSUM errors? The number of data errors appears to be increasing as well as the resilver process continues. -- albert chin (ch...@thewrittenword.com)

Re: [zfs-discuss] ZFS space efficiency when copying files from another source

2008-11-24 Thread Albert Chin
On Mon, Nov 24, 2008 at 08:43:18AM -0800, Erik Trimble wrote: > I _really_ wish rsync had an option to "copy in place" or something like > that, where the updates are made directly to the file, rather than a > temp copy. Isn't this what --inplace does? -- alber
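A sketch of the invocation being discussed, with hypothetical paths. `--inplace` writes changed regions into the destination file directly instead of building a temp copy and renaming it over, which is what lets unchanged ZFS blocks in a snapshotted file stay shared:

```shell
# Paths are hypothetical. --no-whole-file keeps rsync's delta-transfer
# algorithm active even when both ends look local to rsync.
rsync -a --inplace --no-whole-file /data/src/ /tank/dst/
```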

Re: [zfs-discuss] ZIL & NVRAM partitioning?

2008-09-06 Thread Albert Chin
unfortunately I am unable > to verify the driver. "pkgadd -d umem_Sol_Drv_Cust_i386_v01_11.pkg" > hangs on "## Installing part 1 of 3." on snv_95. I do not have other > Solaris versions to experiment with; this is really just a hobby for > me. Does the card c

Re: [zfs-discuss] How do you grow a ZVOL?

2008-07-17 Thread Albert Chin
he clients though. -- albert chin ([EMAIL PROTECTED])

Re: [zfs-discuss] zfs-discuss Digest, Vol 33, Issue 19

2008-07-07 Thread Albert Chin
t; > start thinking bigger. > > > > I'd also like to know if there's any easy way to see the current performance > > of the system once it's in use? I know VMware has performance monitoring > > built into the console, bu

Re: [zfs-discuss] J4200/J4400 Array

2008-07-03 Thread Albert Chin
ve too much ram Well, if the server attached to the J series is doing ZFS/NFS, performance will increase with zfs:zfs_nocacheflush=1. But, without battery-backed NVRAM, this really isn't "safe". So, for this usage case, unless the server has battery-backed NVRAM, I don't see how

Re: [zfs-discuss] J4200/J4400 Array

2008-07-02 Thread Albert Chin
e is an another version called > J4400 with 24 disks. > > Doc is here : > http://docs.sun.com/app/docs/coll/j4200 -- albert chin ([EMAIL PROTECTED])

Re: [zfs-discuss] ZFS configuration for VMware

2008-06-27 Thread Albert Chin
tp://www.vmetro.com/category4304.html, and I don't have any space in > this server to mount a SSD. Maybe you can call Vmetro and get the names of some resellers whom you could call to get pricing info? -- albert chin ([EMAIL PROTECTED])

Re: [zfs-discuss] zfs send/recv question

2008-03-10 Thread Albert Chin
yesterday's date to do the incremental > dump. Not if you set a ZFS property with the date of the last backup. -- albert chin ([EMAIL PROTECTED])
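One way to implement that suggestion, sketched with hypothetical dataset, host, and property names (ZFS user properties must contain a colon):

```shell
# Sketch: record the last snapshot sent in a user property, then base
# the next incremental on the property instead of guessing a date.
today=$(date +%Y-%m-%d)
zfs snapshot "tank/data@$today"
prev=$(zfs get -H -o value com.example:lastsent tank/data)
zfs send -i "$prev" "tank/data@$today" | ssh backup zfs receive tank/data
zfs set "com.example:lastsent=tank/data@$today" tank/data
```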

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-15 Thread Albert Chin
as terrible. I then manually transferred half the LUNs > to controller A and it started to fly. http://groups.google.com/group/comp.unix.solaris/browse_frm/thread/59b43034602a7b7f/0b500afc4d62d434?lnk=st&q=#0b500afc4d62d434 -- albert chin ([EMAIL PROTECTED])

Re: [zfs-discuss] LowEnd Batt. backed raid controllers that will deal with ZFS commit semantics correctly?

2008-01-25 Thread Albert Chin
ething equivalent to the performance of ZIL disabled with ZIL/RAM. I'd do ZIL with a battery-backed RAM in a heartbeat if I could find a card. I think others would as well. -- albert chin ([EMAIL PROTECTED])

Re: [zfs-discuss] sharenfs with over 10000 file systems

2008-01-23 Thread Albert Chin
pt. > > All these things are being worked on, but it might take some time > before everything is made aware that yes it's no longer unusual that > there can be 10000+ filesystems on one machine. But shouldn't sharemgr(1M) be "a

Re: [zfs-discuss] LowEnd Batt. backed raid controllers that will deal with ZFS commit semantics correctly?

2008-01-22 Thread Albert Chin
S to > do the mirroring. Why even bother with a H/W RAID array when you won't use the H/W RAID? Better to find a decent SAS/FC JBOD with cache. Would definitely be cheaper. -- albert chin ([EMAIL PROTECTED])

Re: [zfs-discuss] LowEnd Batt. backed raid controllers that will deal with ZFS commit semantics correctly?

2008-01-22 Thread Albert Chin
Unfortunately, no inexpensive cards exist for the common consumer (with ECC memory anyways). If you convince http://www.micromemory.com/ to sell you one, let us know :) Set "set zfs:zil_disable = 1" in /etc/system to gauge the type of improvement you can expect. Don't use this in p
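The tuning named above, as an /etc/system fragment. This is for gauging potential ZIL gains only: with the ZIL disabled, synchronous write guarantees are lost, which is why the post warns against production use.

```shell
# /etc/system fragment -- benchmarking only, never production.
# Requires a reboot to take effect.
set zfs:zil_disable = 1
```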

Re: [zfs-discuss] ZFS Not Offlining Disk on SCSI Sense Error (X4500)

2008-01-03 Thread Albert Chin
behavior? It really makes > ZFS less than desirable/reliable. http://blogs.sun.com/eschrock/entry/zfs_and_fma FMA For ZFS Phase 2 (PSARC/2007/283) was integrated in b68: http://www.opensolaris.org/os/community/arc/caselog/2007/283/ http://www.opensolaris.org/os/community/on/flag-days

Re: [zfs-discuss] Trial x4500, zfs with NFS and quotas.

2007-11-28 Thread Albert Chin
; > Computer Officer, University of Cambridge, Unix Support > > > > -- > Jorgen Lundman | <[EMAIL PROTECTED]> > Unix Administrator | +81 (0)3 -5456-2687 ext 1017 (work) > Shibuya-ku, Tokyo| +81 (0)90-5578-8500 (cell) > Japan| +81 (0)3 -3375-1767 (home) >

Re: [zfs-discuss] Why did resilvering restart?

2007-11-21 Thread Albert Chin
On Tue, Nov 20, 2007 at 11:39:30AM -0600, Albert Chin wrote: > On Tue, Nov 20, 2007 at 11:10:20AM -0600, [EMAIL PROTECTED] wrote: > > > > [EMAIL PROTECTED] wrote on 11/20/2007 10:11:50 AM: > > > > > On Tue, Nov 20, 2007 at 10:01:49AM -0600, [EMAIL PROTECTED] wrote:

Re: [zfs-discuss] Why did resilvering restart?

2007-11-20 Thread Albert Chin
| tail -1 > > > > 2007-11-20.02:37:13 zpool replace tww > > > c0t600A0B8000299966059E4668CBD3d0 > > > > c0t600A0B8000299CCC06734741CD4Ed0 > > > > > > > > So, why did resilvering restart when no zfs operations occurred? I > > &

Re: [zfs-discuss] Why did resilvering restart?

2007-11-20 Thread Albert Chin
and now I get: > > # zpool status tww > > pool: tww > >state: DEGRADED > > status: One or more devices is currently being resilvered. The pool > will > > continue to function, possibly in a degraded state. > > action: Wait for the resilve

[zfs-discuss] Why did resilvering restart?

2007-11-20 Thread Albert Chin
graded state. action: Wait for the resilver to complete. scrub: resilver in progress, 0.00% done, 134h45m to go What's going on? -- albert chin ([EMAIL PROTECTED])

Re: [zfs-discuss] Oops (accidentally deleted replaced drive)

2007-11-19 Thread Albert Chin
On Mon, Nov 19, 2007 at 06:23:01PM -0800, Eric Schrock wrote: > You should be able to do a 'zpool detach' of the replacement and then > try again. Thanks. That worked. > - Eric > > On Mon, Nov 19, 2007 at 08:20:04PM -0600, Albert Chin wrote: > > Running ON b66

[zfs-discuss] Oops (accidentally deleted replaced drive)

2007-11-19 Thread Albert Chin
0 cannot replace c0t600A0B8000299966059E4668CBD3d0 with c0t600A0B8000299CCC06734741CD4Ed0: cannot replace a replacing device -- albert chin ([EMAIL PROTECTED])

Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-25 Thread Albert Chin
http://mail.opensolaris.org/pipermail/storage-discuss/2007-July/003080.html You'll need to determine the performance impact of removing NVRAM from your data LUNs. Don't blindly do it. -- albert chin ([EMAIL PROTECTED])

Re: [zfs-discuss] Zfs log device (zil) ever coming to Sol10?

2007-09-18 Thread Albert Chin
On Tue, Sep 18, 2007 at 12:59:02PM -0400, Andy Lubel wrote: > I think we are very close to using zfs in our production environment.. Now > that I have snv_72 installed and my pools set up with NVRAM log devices > things are hauling butt. How did you get NVRAM log devices? -- al

Re: [zfs-discuss] separate intent log blog

2007-07-27 Thread Albert Chin
connector and not sure if it is worth the whole effort > for my personal purposes. Huh? So your MM-5425CN doesn't fit into a PCI slot? > Any comment are very appreciated How did you obtain your card? -- albert chin ([EMAIL PROTECTED])

Re: [zfs-discuss] separate intent log blog

2007-07-18 Thread Albert Chin
On Wed, Jul 18, 2007 at 01:54:23PM -0600, Neil Perrin wrote: > Albert Chin wrote: > > On Wed, Jul 18, 2007 at 01:29:51PM -0600, Neil Perrin wrote: > >> I wrote up a blog on the separate intent log called "slog blog" > >> which describes the interface; some

Re: [zfs-discuss] separate intent log blog

2007-07-18 Thread Albert Chin
well but they cannot find anyone selling them. > - Eric > > On Wed, Jul 18, 2007 at 01:54:23PM -0600, Neil Perrin wrote: > > > > > > Albert Chin wrote: > > > On Wed, Jul 18, 2007 at 01:29:51PM -0600, Neil Perrin wrote: > > >> I wrote up a blog on

Re: [zfs-discuss] separate intent log blog

2007-07-18 Thread Albert Chin
on So, how did you get a "pci Micro Memory pci1332,5425 card" :) I presume this is the PCI-X version. -- albert chin ([EMAIL PROTECTED])

Re: [zfs-discuss] How to take advantage of PSARC 2007/171: ZFS Separate Intent Log

2007-07-10 Thread Albert Chin
On Tue, Jul 10, 2007 at 07:12:35AM -0500, Al Hopper wrote: > On Mon, 9 Jul 2007, Albert Chin wrote: > > > On Tue, Jul 03, 2007 at 11:02:24AM -0700, Bryan Cantrill wrote: > >> > >> On Tue, Jul 03, 2007 at 10:26:20AM -0500, Albert Chin wrote: > >>> It wo

Re: [zfs-discuss] How to take advantage of PSARC 2007/171: ZFS Separate Intent Log

2007-07-09 Thread Albert Chin
On Tue, Jul 03, 2007 at 11:02:24AM -0700, Bryan Cantrill wrote: > > On Tue, Jul 03, 2007 at 10:26:20AM -0500, Albert Chin wrote: > > It would also be nice for extra hardware (PCI-X, PCIe card) that > > added NVRAM storage to various sun low/mid-range servers that are > >

Re: [zfs-discuss] How to take advantage of PSARC 2007/171: ZFS Separate Intent Log

2007-07-03 Thread Albert Chin
On Tue, Jul 03, 2007 at 11:02:24AM -0700, Bryan Cantrill wrote: > On Tue, Jul 03, 2007 at 10:26:20AM -0500, Albert Chin wrote: > > PSARC 2007/171 will be available in b68. Any documentation anywhere on > > how to take advantage of it? > > > > Some of the Sun storage arr

Re: [zfs-discuss] How to take advantage of PSARC 2007/171: ZFS Separate Intent Log

2007-07-03 Thread Albert Chin
On Tue, Jul 03, 2007 at 10:31:28AM -0700, Richard Elling wrote: > Albert Chin wrote: > > On Tue, Jul 03, 2007 at 09:01:50AM -0700, Richard Elling wrote: > >> Albert Chin wrote: > >>> Some of the Sun storage arrays contain NVRAM. It would be really nice > >>

Re: [zfs-discuss] How to take advantage of PSARC 2007/171: ZFS Separate Intent Log

2007-07-03 Thread Albert Chin
On Tue, Jul 03, 2007 at 09:01:50AM -0700, Richard Elling wrote: > Albert Chin wrote: > > Some of the Sun storage arrays contain NVRAM. It would be really nice > > if the array NVRAM would be available for ZIL storage. It would also > > be nice for extra hardware (PCI-X, PCIe c

Re: [zfs-discuss] How to take advantage of PSARC 2007/171: ZFS Separate Intent Log

2007-07-03 Thread Albert Chin
't seem very impressive: http://www.adtron.com/products/A25fb-SerialATAFlashDisk.html http://www.sandisk.com/OEM/ProductCatalog(1321)-SanDisk_SSD_SATA_5000_25.aspx -- albert chin ([EMAIL PROTECTED])

[zfs-discuss] How to take advantage of PSARC 2007/171: ZFS Separate Intent Log

2007-07-03 Thread Albert Chin
NVRAM storage to various sun low/mid-range servers that are currently acting as ZFS/NFS servers. Or maybe someone knows of cheap SSD storage that could be used for the ZIL? I think several HD's are available with SCSI/ATA interfaces. -- albert chin ([EMAIL PROT

Re: [zfs-discuss] ZFS version 5 to version 6 fails to import or upgrade

2007-06-19 Thread Albert Chin
identifier, though > some features will not be available without an explicit 'zpool > upgrade'. > config: > > zones ONLINE > c0d1s5ONLINE zpool import lists the pools available for import. Maybe you need to actually _import_ the pool

Re: [zfs-discuss] Re: No zfs_nocacheflush in Solaris 10?

2007-05-25 Thread Albert Chin
ut? Yes. > Also, no Santricity, just Sun's Common Array Manager. Is it possible > to use both without completely confusing the array? I think both are ok. CAM is free. Dunno about Santricity. -- albert chin ([EMAIL PROTECTED])

Re: [zfs-discuss] No zfs_nocacheflush in Solaris 10?

2007-05-25 Thread Albert Chin
some info that hasn't been flushed by ZIL). Even having your file server on a UPS won't help here. http://blogs.sun.com/erickustarz/entry/zil_disable discusses some of the issues affecting zil_disable=1. We know we get better performance with zil_disable=1 but we're not taking an

Re: [zfs-discuss] No zfs_nocacheflush in Solaris 10?

2007-05-25 Thread Albert Chin
On Fri, May 25, 2007 at 12:14:45AM -0400, Torrey McMahon wrote: > Albert Chin wrote: > >On Thu, May 24, 2007 at 11:55:58AM -0700, Grant Kelly wrote: > > > > > >>I'm getting really poor write performance with ZFS on a RAID5 volume > >>(5 disks) from a

Re: [zfs-discuss] No zfs_nocacheflush in Solaris 10?

2007-05-24 Thread Albert Chin
0GB/10K drives, we get ~46MB/s on a single-drive RAID-0 array, ~83MB/s on a 4-disk RAID-0 array w/128k stripe, and ~69MB/s on a seven-disk RAID-5 array w/128k stripe. -- albert chin ([EMAIL PROTECTED])

[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?

2007-05-22 Thread Albert Chin
On Mon, May 21, 2007 at 13:23:48 -0800, Marion Hakanson wrote: >Albert Chin wrote: >> Why can't the NFS performance match that of SSH? > > My first guess is the NFS vs array cache-flush issue. Have you > configured the 6140 to ignore SYNCHRONIZE_CACHE requests? That'

Re: [zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?

2007-05-22 Thread Albert Chin
# mount file-server:/opt/test /mnt # time tar cf - gcc343 | (cd /mnt; tar xpf - ) ... (old) 419721216 bytes in 1:08.65 => 6113928.86 bytes/sec (new) 419721216 bytes in 0:44.67 => 9396042.44 bytes/sec > > > On 22/05/07, Albert Chin > <[EMAIL PROTECTE

Re: [zfs-discuss] Re: Rsync update to ZFS server over SSH faster than over

2007-05-21 Thread Albert Chin
ression and/or -oCompressionLevel? -- albert chin ([EMAIL PROTECTED])

Re: [zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?

2007-05-21 Thread Albert Chin
On Mon, May 21, 2007 at 06:11:36PM -0500, Nicolas Williams wrote: > On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote: > > But still, how is tar/SSH any more multi-threaded than tar/NFS? > > It's not that it is, but that NFS sync semantics and ZFS sync > sem

Re: [zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?

2007-05-21 Thread Albert Chin
On Mon, May 21, 2007 at 04:55:35PM -0600, Robert Thurlow wrote: > Albert Chin wrote: > > >I think the bigger problem is the NFS performance penalty so we'll go > >lurk somewhere else to find out what the problem is. > > Is this with Solaris 10 or OpenSolaris on t

Re: [zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?

2007-05-21 Thread Albert Chin
On Mon, May 21, 2007 at 02:55:18PM -0600, Robert Thurlow wrote: > Albert Chin wrote: > > >Why can't the NFS performance match that of SSH? > > One big reason is that the sending CPU has to do all the comparisons to > compute the list of files to be sent - it has to f

[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?

2007-05-21 Thread Albert Chin
bytes/sec The network is 100MB. /etc/system on the file server is: set maxphys = 0x80 set ssd:ssd_max_throttle = 64 set zfs:zfs_nocacheflush = 1 Why can't the NFS performance match that of SSH? -- albert chin ([EMAIL PROTECTED])

Re: [zfs-discuss] ZFS+NFS on storedge 6120 (sun t4)

2007-04-21 Thread Albert Chin
-- albert chin ([EMAIL PROTECTED])

Re: [zfs-discuss] 6410 expansion shelf

2007-04-20 Thread Albert Chin
bre channel. The 6140 controller unit has either 2GB or 4GB cache. Does the 6140 expansion shelf have cache as well or is the cache in the controller unit used for all expansion shelves? -- albert chin ([EMAIL PROTECTED])

Re: [zfs-discuss] File level snapshots in ZFS?

2007-03-29 Thread Albert Chin
quot; and take no time so might as well just snapshot the file system. -- albert chin ([EMAIL PROTECTED])

[zfs-discuss] Hypothetical question about multiple host access to 6140 array

2007-02-23 Thread Albert Chin
possible to allocate some disks from the 6140 array to ZFS on the X4100 for the purpose of migrating data from the appliance to ZFS? -- albert chin ([EMAIL PROTECTED])

Re: [zfs-discuss] hot spares - in standby?

2007-01-30 Thread Albert Chin
gt; and a rotating spare could disrupt this organization, but would it be > useful at all? Agami Systems has the concept of "Enterprise Sparing", where the hot spare is distributed amongst data drives in the array. When a failure occurs, the rebuild occurs in parallel across _al

Re: [zfs-discuss] Re: ZFS or UFS - what to do?

2007-01-29 Thread Albert Chin
on top of that or a JBOD with ZFS RAIDZ/RAIDZ2 on top of that. -- albert chin ([EMAIL PROTECTED])

Re: [zfs-discuss] Re: Thumper Origins Q

2007-01-25 Thread Albert Chin
n making the > suggestion ... right? Well, when you buy disk for the Sun 5320 NAS Appliance, you get a Controller Unit shelf and, if you expand storage, an Expansion Unit shelf that connects to the Controller Unit. Maybe the Expansion Unit shelf is a JBOD 6140? -- albert chin ([EMAIL PROTECTED])

Re: [zfs-discuss] Re: Thumper Origins Q

2007-01-25 Thread Albert Chin
On Thu, Jan 25, 2007 at 10:16:47AM -0500, Torrey McMahon wrote: > Albert Chin wrote: > >On Wed, Jan 24, 2007 at 10:19:29AM -0800, Frank Cusack wrote: > > > >>On January 24, 2007 10:04:04 AM -0800 Bryan Cantrill <[EMAIL PROTECTED]> > >>wrote: > >

Re: [zfs-discuss] Re: Thumper Origins Q

2007-01-25 Thread Albert Chin
f you wanted to use a 6140 with ZFS, and really wanted JBOD, your only choice would be a RAID 0 config on the 6140? -- albert chin ([EMAIL PROTECTED])

Re: [zfs-discuss] Mounting a ZFS clone

2007-01-16 Thread Albert Chin
On Tue, Jan 16, 2007 at 01:28:04PM -0800, Eric Kustarz wrote: > Albert Chin wrote: > >On Mon, Jan 15, 2007 at 10:55:23AM -0600, Albert Chin wrote: > > > >>I have no hands-on experience with ZFS but have a question. If the > >>file server running ZFS expor

Re: [zfs-discuss] Mounting a ZFS clone

2007-01-16 Thread Albert Chin
On Mon, Jan 15, 2007 at 10:55:23AM -0600, Albert Chin wrote: > I have no hands-on experience with ZFS but have a question. If the > file server running ZFS exports the ZFS file system via NFS to > clients, based on previous messages on this list, it is not possible > for an NFS cli

[zfs-discuss] Mounting a ZFS clone

2007-01-15 Thread Albert Chin
lient access to the remote ZFS file system and the clone? -- albert chin ([EMAIL PROTECTED])