Re: [zfs-discuss] SMC Webconsole 3.1 and ZFS Administration 1.0 - stacktraces in snv_b89

2008-05-30 Thread Jim Klimov
No, I did not set that property; not now, not in previous releases. Nice to see "secure by default" coming to the admin tools as well. Waiting for SSH to become 127.0.0.1:22 sometime... just kidding ;) Thanks for the tip! Any ideas about the stack trace? It's still there instead of the web GUI

Re: [zfs-discuss] SMC Webconsole 3.1 and ZFS Administration 1.0 - stacktraces in snv_b89

2008-06-01 Thread Jim Klimov
I checked - this system has a UFS root. When installed as snv_84 and then LU'd to snv_89, and when I fiddled with these packages from various other releases, it had the stack trace instead of the ZFS admin GUI (or the well-known smcwebserver restart effect for the older packages). This system

Re: [zfs-discuss] SMC Webconsole 3.1 and ZFS Administration 1.0 - stacktraces in snv_b89

2008-06-11 Thread Jim Klimov
Likewise. Just plain doesn't work. Not required though, since the command line is okay and way powerful ;) And there are some more interesting challenges to work on, so I haven't pushed this problem any further yet.

Re: [zfs-discuss] SMC Webconsole 3.1 and ZFS Administration 1.0 - stacktraces in snv_b89

2008-06-17 Thread Jim Klimov
Interesting, we'll try that. Our server with the problem has been boxed now, so I'll check the solution when it gets on site. Thanks in advance, anyway ;)

Re: [zfs-discuss] copying a ZFS

2008-07-20 Thread Jim Mauro
should have been clearer about that. I will investigate using ZFS snapshots with ZFS send as a method for accomplishing my task. I'm not convinced it's the best way to achieve my goal, but if it's not, I'd like to make sure I understand why not. Thanks for your interest. /jim Mattias Pantzare wrote:

Re: [zfs-discuss] ZFS deduplication

2008-08-22 Thread Jim Klimov
Just my 2c: Is it possible to do an offline dedup, kind of like snapshotting? What I mean in practice is: we make many Solaris full-root zones. They share a lot of data as complete files. This makes it kind of easy to save space - make one zone as a template, snapshot/clone its dataset, make new
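A minimal sketch of the template-zone approach Jim describes, with hypothetical dataset and zone names:

  # Snapshot the fully installed template zone's dataset
  zfs snapshot pool/zones/template@golden
  # Each new zone starts as a clone, sharing all unchanged blocks with the template
  zfs clone pool/zones/template@golden pool/zones/zone01
  # Point the new zone's zonepath at the cloned dataset's mountpoint
  zonecfg -z zone01 "create; set zonepath=/pool/zones/zone01"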

Re: [zfs-discuss] ZFS deduplication

2008-08-26 Thread Jim Klimov
Ok, thank you Nils and Wade for the concise replies. After much reading I agree that the queued ZFS development features do deserve a higher ranking on the priority list (pool shrinking/disk removal and user/group quotas would be my favourites), so probably the deduplication tool I'd need would,

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-07 Thread Jim Dunham

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-07 Thread Jim Dunham

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-11 Thread Jim Dunham
Ralf, Jim, first of all: I never said that AVS is a bad product. And I never will. I wonder why you act as if you were attacked personally. To be honest, if I were a customer with the original question, such a reaction wouldn't make me feel safer. I am sorry that my response came across

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-11 Thread Jim Dunham

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-11 Thread Jim Dunham
On Sep 11, 2008, at 11:19 AM, A Darren Dunham wrote: On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote: The issue with any form of RAID 1 is that the instant a disk fails out of the RAID set, with the next write I/O to the remaining members of the RAID set, the failed disk (and its

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-12 Thread Jim Dunham
On Sep 11, 2008, at 5:16 PM, A Darren Dunham wrote: On Thu, Sep 11, 2008 at 04:28:03PM -0400, Jim Dunham wrote: On Sep 11, 2008, at 11:19 AM, A Darren Dunham wrote: On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote: The issue with any form of RAID 1, is that the instant a disk

Re: [zfs-discuss] ZPOOL Import Problem

2008-09-12 Thread Jim Dunham
# --- Importing on the primary gives the same error. Anyone have any ideas? Thanks Corey

Re: [zfs-discuss] Will ZFS stay consistent with AVS/ZFS and async replication

2008-09-12 Thread Jim Dunham

Re: [zfs-discuss] ZPOOL Import Problem

2008-09-13 Thread Jim Dunham
be placed into logging mode first. Then ZFS will be left I/O consistent after the disable is done. Corey

Re: [zfs-discuss] ZPOOL Import Problem

2008-09-17 Thread Jim Dunham
On Sep 16, 2008, at 5:39 PM, Miles Nordin wrote: jd == Jim Dunham [EMAIL PROTECTED] writes: If at the time the SNDR replica is deleted the set was actively replicating, along with ZFS actively writing to the ZFS storage pool, I/O consistency will be lost, leaving ZFS

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-19 Thread Jim Dunham
replication 'smarter'. -- Brent Jones [EMAIL PROTECTED]

Re: [zfs-discuss] An slog experiment (my NAS can beat up your NAS)

2008-10-08 Thread Jim Dunham
functionality on a single node, use host-based or controller-based mirroring software. --Joe

Re: [zfs-discuss] ZFS Replication Question

2008-10-10 Thread Jim Dunham

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-20 Thread Jim Dunham
backing stores. If one uses rdsk backing stores of any type, this is not an issue. - Jim I have a similar situation here, with a 2-TB ZFS pool on a T2000 using iSCSI to a NetApp file server. Is there any way to tell in advance if any of those changes will make a difference? Many of them seem

Re: [zfs-discuss] [storage-discuss] Help with bizarre S10U5 / zfs / iscsi / thumper / Oracle RAC problem

2008-11-04 Thread Jim Dunham
to OpenSolaris at build snv_74, and currently being backported to Solaris 10, available in S10u7 next year. The weird behavior seen below with 'dd' is likely Oracle's desire to continually repair one of its many redundant header blocks. - Jim FWIW, you don't need a file that contains zeros, as /dev

Re: [zfs-discuss] ZFS, Kernel Panic on import

2008-11-07 Thread Jim Dunham
TERM=vt100; export TERM (or: setenv TERM vt100) - Jim

Re: [zfs-discuss] Slow death-spiral with zfs gzip-9 compression

2008-11-29 Thread Jim Mauro
on x64 - it will panic your system. None of this has anything to do with ZFS, which uses a completely different mechanism for caching (the ZFS ARC). Thanks, /jim That is what I heard Jim Mauro tell us. I recall feeling a bit disturbed when I heard it. If it is true, perhaps it applies only

Re: [zfs-discuss] zfs iscsi sustained write performance

2009-01-12 Thread Jim Dunham
[initiator] iscsiadm modify target-param -p maxrecvdataseglen=65536 target-IQN - Jim Can you verify the single-connection throughput using any of iperf, uperf, or netperf? -r
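For the throughput check Jim suggests, a minimal iperf run would look like this (host name is a placeholder; uperf or netperf work similarly):

  # On the receiving host: start an iperf server
  iperf -s
  # On the sending host: run a single-connection test for 30 seconds
  iperf -c target-host -t 30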

[zfs-discuss] Can I create ZPOOL with missing disks?

2009-01-15 Thread Jim Klimov
approach, this is useful while expanding home systems when I don't have a spare tape backup to dump my files to and restore afterwards. I think it's an (intended?) limitation in the zpool command itself, since the kernel can very well live with degraded pools. //Jim

Re: [zfs-discuss] ZFS on partitions

2009-01-15 Thread Jim Klimov
Out of curiosity, is it safe to have components of two different ZFS pools on the same drive, with and without the HDD write cache turned on? How will ZFS itself behave? Would it turn on the disk cache if the two imported pools co-own the drive? An example is a multi-disk system like mine

Re: [zfs-discuss] Can I create ZPOOL with missing disks?

2009-01-15 Thread Jim Klimov
Thanks Tomas, I haven't checked yet, but your workaround seems feasible. I've posted an RFE and referenced your approach as a workaround. That's nearly what zpool should do under the hood, and perhaps can be done temporarily with a wrapper script to detect min(physical storage sizes) ;) //Jim

Re: [zfs-discuss] Can I create ZPOOL with missing disks?

2009-01-17 Thread Jim Klimov
Thanks to all those who helped, even despite the non-enterprise approach of this question ;) While experimenting I discovered that Solaris /tmp doesn't seem to support sparse files: mkfile -n still creates full-sized files, which can either use up the swap space or not fit there. ZFS and UFS
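A minimal sketch of the sparse-file workaround discussed in this thread; the backing file must live on a filesystem that honors sparseness (ZFS or UFS, per the observation above, not /tmp), and all names are hypothetical:

  # Create a sparse file the size of the yet-to-be-bought disk
  mkfile -n 500g /export/sparse/fake-disk
  # Build the raidz with the sparse file standing in for the missing drive
  zpool create newpool raidz c1t1d0 c1t2d0 /export/sparse/fake-disk
  # Offline the stand-in at once so nothing substantial is written to it
  zpool offline newpool /export/sparse/fake-disk
  # When the real disk arrives, resilver onto it
  zpool replace newpool /export/sparse/fake-disk c1t3d0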

Re: [zfs-discuss] Aggregate Pool I/O

2009-01-17 Thread Jim Dunham
would next look into one of the various SAR data graphing tools. http://sourceforge.net/projects/ksar/ http://freshmeat.net/projects/ksar - Jim

Re: [zfs-discuss] Can I create ZPOOL with missing disks?

2009-01-18 Thread Jim Klimov
copying data; so estimated speed was bytes or kbytes per sec). //Jim

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-19 Thread Jim Dunham
, model, and firmware revision. Consider planning ahead and reserving some space by creating a slice that is slightly smaller than the whole disk. Creating a slice, instead of using the whole disk, will cause ZFS to not enable write caching on the underlying device. - Jim
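A sketch of that slice-based reservation, with hypothetical device names; note the write-cache caveat Jim mentions, which can be worked around per disk:

  # In format's partition menu, make s0 slightly smaller than the raw disk
  format c1t4d0
  # Build the pool on slices rather than whole disks
  zpool create tank mirror c1t4d0s0 c1t5d0s0
  # ZFS will not enable the write cache for slice vdevs; if the whole disk
  # belongs to ZFS anyway, it can be enabled manually (expert mode)
  format -e c1t4d0    # cache -> write_cache -> enable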

Re: [zfs-discuss] [storage-discuss] AVS on opensolaris 2008.11

2009-01-24 Thread Jim Dunham
Kristof, Jim Yes, in step 5 the commands were executed on both nodes. We did some more tests with OpenSolaris 2008.11 (build 101b). We managed to get the AVS setup up and running, but we noticed that performance was really bad. When we configured a zfs volume for replication, we noticed

Re: [zfs-discuss] [storage-discuss] AVS on opensolaris 2008.11

2009-01-25 Thread Jim Dunham
maxed out on some system limitation, such as CPU and memory. I/O impact should not be a factor, given that a RAM disk is used. The addition of both SNDR and a RAM disk in the data path, regardless of how small their system cost is, will have a profound impact on disk throughput. - Jim Please

Re: [zfs-discuss] [storage-discuss] AVS on opensolaris 2008.11

2009-01-26 Thread Jim Dunham
http://www.opensolaris.org/os/community/performance/filebench/quick_start/ - Jim

Re: [zfs-discuss] [storage-discuss] AVS on opensolaris 2008.11

2009-01-26 Thread Jim Dunham
Richard Elling wrote: Jim Dunham wrote: Ahmed, The setup is not there anymore; however, I will share as many details as I have documented. Could you please post the commands you used and any differences you think might be important. Did you ever test with 2008.11? instead

Re: [zfs-discuss] ? Changing storage pool serial number

2009-01-27 Thread Jim Dunham
http://bugs.opensolaris.org/view_bug.do?bug_id=5097228 It may be that this is an awful idea... in which case I am happy to hear that as well and will feed that back to the customer. For both controller-based and host-based snapshots, replicas, even iSCSI LUs, this would be an awful [good] idea. :-) - Jim

Re: [zfs-discuss] need to add space to zfs pool that's part of SNDR replication

2009-01-28 Thread Jim Dunham
for each drive to be replicated, or is there a better way to do it? Thanks!

Re: [zfs-discuss] need to add space to zfs pool that's part of SNDR replication

2009-01-29 Thread Jim Dunham

Re: [zfs-discuss] write cache and cache flush

2009-01-29 Thread Jim Mauro
Multiple Thors (more than 2?), with performance problems. Maybe it's the common denominator - the network. Can you run local ZFS I/O loads and determine if performance is expected when NFS and the network are out of the picture? Thanks, /jim Greg Mason wrote: So, I'm still beating my head

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Jim Mauro
clients. I've narrowed my particular performance issue down to the ZIL, and how well ZFS plays with NFS. Great. Good luck. /jim

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Jim Mauro
, /jim
#!/usr/bin/ksh -p
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the License).
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src

Re: [zfs-discuss] strange performance drop of solaris 10/zfs

2009-01-30 Thread Jim Mauro
So, granted, tank is about 77% full (not to split hairs ;^), but in this case, 23% is 640GB of free space. I mean, it's not like 15 years ago when a file system was 2GB total, and 23% free meant a measly 460MB to allocate from. 640GB is a lot of space, and our largest writes are less than 5MB.

Re: [zfs-discuss] Oracle raw volumes

2009-02-01 Thread Jim Dunham
-locking at the following location:
http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/zvol.c#1130
- Jim
At least trying to open /dev/zvol/rdsk/datapool/master where master is defined as:
# zpool create -f datapool mirror c1t1d0 c2t0d0
# zfs create -V

Re: [zfs-discuss] Two-level ZFS

2009-02-01 Thread Jim Dunham
solution. Jim Dunham, Engineering Manager, Sun Microsystems, Inc., Storage Platform Software Group What's required to make it work? Consider a file server running ZFS that exports a volume with iSCSI. Consider also an application server that imports the LUN with iSCSI and runs a ZFS filesystem

Re: [zfs-discuss] need to add space to zfs pool that's part of SNDR replication

2009-02-02 Thread Jim Dunham
it out. See a demo at: http://blogs.sun.com/constantin/entry/csi_munich_how_to_save - Jim

[zfs-discuss] ZFS and SNDR..., now I'm confused.

2009-03-06 Thread Jim Dunham
A recent increase in email about ZFS and SNDR (the replication component of Availability Suite) has given me reason to post one of my replies. Well, now I'm confused! A colleague just pointed me towards your blog entry about SNDR and ZFS which, until now, I thought was not a supported

Re: [zfs-discuss] ZFS and SNDR..., now I'm confused.

2009-03-06 Thread Jim Dunham
Andrew, Jim Dunham wrote: ZFS the filesystem is always on-disk consistent, and ZFS does maintain filesystem consistency through coordination between the ZPL (ZFS POSIX Layer) and the ZIL (ZFS Intent Log). Unfortunately for SNDR, ZFS caches a lot of an application's filesystem data

Re: [zfs-discuss] [storage-discuss] ZFS and SNDR..., now I'm confused.

2009-03-06 Thread Jim Dunham
Nicolas, On Fri, Mar 06, 2009 at 10:05:46AM -0700, Neil Perrin wrote: On 03/06/09 08:10, Jim Dunham wrote: A simple test I performed to verify this was to append to a ZFS file (no synchronous filesystem options being set) a series of blocks with a block order pattern contained within

Re: [zfs-discuss] Comstar production-ready?

2009-03-13 Thread Jim Dunham
to Solaris 10 u7. - Jim On Wed, Mar 4, 2009 at 2:47 AM, Scott Lawson scott.law...@manukau.ac.nz wrote: Stephen Nelson-Smith wrote: Hi, I recommended a ZFS-based archive solution to a client needing to have a network-based archive of 15TB of data in a remote datacentre. I based

Re: [zfs-discuss] AVS and ZFS demos - link broken?

2009-03-19 Thread Jim Dunham
. What web browser are you using? It works just fine with Firefox. - Jim James D. Rogers NRA, GOA, DAD -- and I VOTE! 2207 Meadowgreen Circle Franktown, CO 80116 coyote_hunt...@msn.com 303-688-0480 303-885-7410 Cell (Working hours and when coyote huntin

Re: [zfs-discuss] Copying thousands of small files on an expanded ZFS pool crawl to a poor performance-not on other pools.

2009-03-23 Thread Jim Mauro
pool significantly tighter on free space than the other pools (zpool list)? Thanks, /jim Nobel Shelby wrote: Customer has many large ZFS pools. He does the same on all pools: copying large amounts of small files (1-5K) overnight. All but one particular pool (that has been expanded) gives them

Re: [zfs-discuss] [perf-discuss] ZFS performance issue - READ is slow as hell...

2009-03-30 Thread Jim Mauro
- capture kstat -n arcstats before a test, after the write test and after the read test. Sorry - I need to think about this a bit more. Something is seriously broken, but I'm not yet sure what it is. Unless you're running an older Solaris version, and/or missing patches. Thanks, /jim

[zfs-discuss] [Fwd: Re: [perf-discuss] ZFS performance issue - READ is slow as hell...]

2009-03-31 Thread Jim Mauro
zfetch needs a whole lotta love. For both CRs the workaround is disabling prefetch (echo zfs_prefetch_disable/W 1 | mdb -kw). Any other theories on this test case? Thanks, /jim Original Message Subject: Re: [perf-discuss] ZFS performance issue - READ is slow as hell... Date
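The mdb one-liner above patches the running kernel only; to keep the workaround across reboots, the usual counterpart (assuming the tunable exists on your build) is an /etc/system entry:

  # /etc/system -- disable ZFS file-level prefetch (zfetch) at boot
  set zfs:zfs_prefetch_disable = 1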

Re: [zfs-discuss] shareiscsi not sharing

2009-05-28 Thread Jim Dunham
iscsioptions zpool-name/zvol-name - Jim -- Ian. ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss ___ zfs-discuss mailing list zfs-discuss

[zfs-discuss] Recover ZFS destroyed dataset?

2009-06-05 Thread Jim Klimov
)? //Thanks in advance, we're expecting a busy weekend ;( //Jim Klimov -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Recover ZFS destroyed dataset?

2009-06-05 Thread Jim Klimov
was aborted:
2009-05-28.11:44:05 zfs destroy -r pond/zones/ldap03 [user root on thumper:global]
2009-05-28.11:44:06 [internal destroy txg:712330] dataset = 445 [user root on thumper]
//Jim PS: I guess I'm up to an RFE: zfs destroy should have an interactive option, perhaps (un-)set by default
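The excerpt above is the kind of output zpool history produces; the internal events need -i and the user/host annotations need -l:

  # Pool command history with internal events and user/host details
  zpool history -il pond
  # Narrow it to destroy operations
  zpool history -il pond | grep destroy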

Re: [zfs-discuss] Recover ZFS destroyed dataset?

2009-06-05 Thread Jim Klimov
technical expertise buffooned by marketing yells led to poor decisions ;( //Jim

[zfs-discuss] [Fwd: Re: [osol-discuss] Possible ZFS corruption?]

2009-06-11 Thread Jim Walker
Some data I forgot to add: I have tried installing OpenSolaris 2009.06 and importing the pool; it yields the same results as Solaris 10 U7. The array is configured to use all 24 disks in a raidz2 configuration with 2 hot spares; this gives me about 16TB of usable space. The

[zfs-discuss] zpool import hangs the entire server (please help; data included)

2009-07-03 Thread Jim Leonard
As the subject says, I can't import a seemingly okay raidz pool, and I really need to as it has some information on it that is newer than the last backup cycle :-( I'm really in a bind; I hope someone can help... Background: A drive in a four-slice pool failed (I have to use slices due to a

Re: [zfs-discuss] Migrating 10TB of data from NTFS is there a simple way?

2009-07-08 Thread Jim Klimov
to be renamed. Hope this helps, let us know if it does ;) //Jim Klimov

Re: [zfs-discuss] Migrating 10TB of data from NTFS is there a simple way?

2009-07-08 Thread Jim Klimov
that. //HTH, Jim

Re: [zfs-discuss] Migrating 10TB of data from NTFS is there a simple way?

2009-07-09 Thread Jim Klimov
idea to detect errors crawling in. // HTH, Jim

Re: [zfs-discuss] Very slow ZFS write speed to raw zvol

2009-07-09 Thread Jim Klimov
breaks, or errors related to bugs in the firmware itself - in more elaborate conspiracy theories). Due to this it is often recommended to use external RAID implementations as vdevs in a redundant ZFS pool (such as a mirror of two equivalent arrays). //Jim

Re: [zfs-discuss] Very slow ZFS write speed to raw zvol

2009-07-09 Thread Jim Klimov
(say, the boot one). //Jim

Re: [zfs-discuss] zfs snapshoot of rpool/* to usb removable drives?

2009-07-09 Thread Jim Klimov
Probably better to use zfs recv -nFvd first (no-write verbose mode) to be certain about your write targets and about overwriting stuff (i.e. zfs recv -F would destroy any newer snapshots, if any - so you can first check which ones, and possibly clone/rename them first). // HTH, Jim Klimov
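A sketch of that dry-run check, with hypothetical pool and snapshot names:

  # -n parses the stream and reports what would happen without writing;
  # -F forces a rollback on the target, -v is verbose, -d maps dataset names
  zfs send -R rpool@backup | zfs recv -nFvd backuppool
  # If the report looks right, repeat without -n to actually receive
  zfs send -R rpool@backup | zfs recv -Fvd backuppool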

Re: [zfs-discuss] Booting from detached mirror disk

2009-07-09 Thread Jim Klimov
You might also want to force ZFS into accepting a faulty root pool: # zpool set failmode=continue rpool //Jim

Re: [zfs-discuss] Migrating 10TB of data from NTFS is there a simple way?

2009-07-09 Thread Jim Klimov
of these drives, and make a mirrored swap pool on the other couple. //Jim

Re: [zfs-discuss] Migrating 10TB of data from NTFS is there a simple way?

2009-07-09 Thread Jim Klimov
and other factors. It is possible that in the course of your quest you'll try several of them. Starting out with a transactional approach (i.e. not deleting the originals until necessary) pays off in such cases. //Jim

Re: [zfs-discuss] Very slow ZFS write speed to raw zvol

2009-07-09 Thread Jim Klimov
), 234MBps Occasionally I did reruns; user time for the same setups can vary significantly (like 65s vs 84s) while the system time stays pretty much the same. zpool iostat shows larger values (like 320MBps typically) but I think that can be attributed to writing parity stripes on raidz vdevs. //Jim

Re: [zfs-discuss] Migrating 10TB of data from NTFS is there a simple way?

2009-07-09 Thread Jim Klimov
, or otherwise ;) //Jim

Re: [zfs-discuss] Migrating 10TB of data from NTFS is there a simple way?

2009-07-09 Thread Jim Klimov
, and PulsarOS... http://eonstorage.blogspot.com/2008_11_01_archive.html (features page) http://eonstorage.blogspot.com/2009/05/eon-zfs-nas-0591-based-on-snv114.html http://code.google.com/p/pulsaros/ http://pulsaros.digitalplayground.at/ Haven't yet tried them, though. //Jim

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Jim Mauro
Bob - Have you filed a bug on this issue? I am not up to speed on this thread, so I cannot comment on whether or not there is a bug here, but you seem to have a test case and supporting data. Filing a bug will get the attention of ZFS engineering. Thanks, /jim Bob Friesenhahn wrote: On Mon

[zfs-discuss] Strange errors in zpool scrub, Solaris 10u6 x86_64

2009-07-29 Thread Jim Klimov
/w on each of 2 mirrored drives, and 2 h/w errors on one of the drives). Any ideas? Is it a cosmetic problem, or a lurking bug in my hardware, and should I go about replacing something somewhere? I don't see such behavior on any other servers around... Thanks for any ideas, //Jim zpool status

Re: [zfs-discuss] sam-fs on zfs-pool

2009-07-31 Thread Jim Klimov
into the samfs although its structures are only 20% used? If by any chance the latter - I think it would count as a bug. If the former - see the posts above for explanations and workarounds :) Thanks in advance for such detail, Jim -- This message posted from opensolaris.org

Re: [zfs-discuss] sam-fs on zfs-pool

2009-07-31 Thread Jim Klimov
If I understand you right it is as you said. Here's an example and you can see what happened. The sam-fs is filled to only 6% and the zvol is full. I'm afraid I was not clear with my question, so I'll elaborate. The question remains: during this situation, can you write new data into

Re: [zfs-discuss] sam-fs on zfs-pool

2009-07-31 Thread Jim Klimov
the reservation is less than the volume size. Consequently, writes to a sparse volume can fail with ENOSPC when the pool is low on space. For a sparse volume, changes to volsize are not reflected in the reservation. Did you do anything like this? HTH, //Jim
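For reference, a sparse (thin-provisioned) volume is one created with -s, which leaves the reservation unset; names below are hypothetical:

  # Regular zvol: reservation equals volsize, so space is guaranteed up front
  zfs create -V 10g pool/vol-thick
  # Sparse zvol: no reservation; writes can fail with ENOSPC as the pool fills
  zfs create -s -V 10g pool/vol-sparse
  # Compare the two
  zfs get volsize,refreservation pool/vol-thick pool/vol-sparse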

Re: [zfs-discuss] URGENT: very high busy and average service time with ZFS and USP1100

2009-09-22 Thread Jim Mauro
idle. Thanks, /jim Javier Conde wrote: Hello, IHAC with a huge performance problem in a newly installed M8000 configured with a USP1100 and ZFS. From what we can see, 2 disks used in different zpools are 100% busy, and the average service time is also quite high (between 30 and 5 ms

Re: [zfs-discuss] [dtrace-discuss] How to drill down cause of cross-calls in the kernel? (output provided)

2009-09-23 Thread Jim Mauro
genunix`taskq_thread+0xbc
unix`thread_start+0x8
Let's see what the fsstat and zpool iostat data look like when this starts happening. Thanks, /jim
Jim Leonard wrote: It would also be interesting to see some snapshots of the ZFS ARC kstats: kstat -n arcstats Here you

Re: [zfs-discuss] [dtrace-discuss] How to drill down cause of cross-calls in the kernel? (output provided)

2009-09-23 Thread Jim Mauro
, /jim
Jim Leonard wrote: Can you gather some ZFS I/O statistics, like fsstat zfs 1 for a minute or so. Here is a snapshot from when it is exhibiting the behavior:
new  name  name  attr  attr  lookup  rddir  read  read   write  write
file remov chng  get   set   ops     ops    ops   bytes  ops

Re: [zfs-discuss] [dtrace-discuss] How to drill down cause of cross-calls in the kernel? (output provided)

2009-09-23 Thread Jim Leonard
The only thing that jumps out at me is the ARC size - 53.4GB, or most of your 64GB of RAM. This in and of itself is not necessarily a bad thing - if there are no other memory consumers, let ZFS cache data in the ARC. But if something is coming along to flush dirty ARC pages
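When the ARC does need to be bounded to protect other memory consumers, the usual knob is zfs_arc_max in /etc/system (value in bytes; exact behavior varies by release, so treat this as a sketch):

  # /etc/system -- cap the ARC at 32 GB on this 64 GB machine
  set zfs:zfs_arc_max = 34359738368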

Re: [zfs-discuss] Solaris License with ZFS USER quotas?

2009-09-28 Thread Jim Grisanzio
an international group in English for the Tokyo OSUG. There are bilingual westerners and Japanese on both lists, and we have events in Yoga as well. http://mail.opensolaris.org/mailman/listinfo/ug-tsug (English) http://mail.opensolaris.org/mailman/listinfo/ug-jposug (Japanese) Jim -- http://blogs.sun.com

Re: [zfs-discuss] iscsi/comstar performance

2009-10-19 Thread Jim Dunham
device is not a ZVOL. Note: For ZVOL support, there is a corresponding ZFS storage pool change to support this functionality, so a zpool upgrade ... to version 16 is required:
# zpool upgrade -v
. . .
16 stmf property support
- Jim
The options seem to be a) stay

Re: [zfs-discuss] Performance problems with Thumper and 7TB ZFS pool using RAIDZ2

2009-10-24 Thread Jim Mauro
Posting to zfs-discuss. There's no reason this needs to be kept confidential. 5-disk RAIDZ2 - doesn't that equate to only 3 data disks? Seems pointless - they'd be much better off using mirrors, which is a better choice for random IO... Looking at this now... /jim Jeff Savit wrote: Hi all

[zfs-discuss] (home NAS) zfs and spinning down of drives

2009-11-04 Thread Jim Klimov
the flexibility of ZFS and the offline-storage capabilities of HSM? -- Thanks for any replies, including statements that my ideas are insane or my views are outdated ;) But constructive ones are more appreciated ;) //Jim

Re: [zfs-discuss] (home NAS) zfs and spinning down of drives

2009-11-04 Thread Jim Klimov
Thanks for the link, but the main concern in spinning down drives of a ZFS pool is that ZFS by default is not so idle. Every 5 to 30 seconds it closes a transaction group (TXG), which requires a synchronous write of metadata to disk. I mentioned reading many blogs/forums on the matter, and some
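To illustrate that TXG cadence: on builds of this era the interval is governed by the zfs_txg_timeout tunable (an assumption; the tunable's name and default changed across builds), which can be inspected and stretched from mdb:

  # Show the current TXG timeout, in seconds
  echo zfs_txg_timeout/D | mdb -k
  # Stretch it to 60s to batch metadata writes between spin-ups
  # (trades a larger data-loss window for longer idle periods)
  echo zfs_txg_timeout/W 0t60 | mdb -kw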

Re: [zfs-discuss] ZFS dedup issue

2009-12-02 Thread Jim Klimov
? In general, were there any stability issues with snv_128 during internal/BFU testing? TIA, //Jim

Re: [zfs-discuss] ZFS- When do you add more memory?

2009-12-23 Thread Jim Mauro
. As an aside, there's nothing about this that requires it be posted to zfs-discuss-confidential. I posted to zfs-disc...@opensolaris.org. Thanks, /jim Anthony Benenati wrote: Jim, The issue with using scan rate alone is if you are looking for why you have significant performance degradation

Re: [zfs-discuss] ZFS- When do you add more memory?

2009-12-23 Thread Jim Laurent
Think he's looking for a single, intuitively obvious, easy-to-access indicator of memory usage along the lines of the vmstat free column (before ZFS) that shows the current amount of free RAM. On Dec 23, 2009, at 4:09 PM, Jim Mauro wrote: Hi Anthony - I don't get this. How does the presence

[zfs-discuss] Recovering a broken mirror

2010-01-13 Thread Jim Sloey
We have a production Sun Fire V240 that had a ZFS mirror until this week. One of the drives (c1t3d0) in the mirror failed. The system was shut down and the bad disk replaced without an export. I don't know what happened next, but by the time I got involved there was no evidence that the remaining

Re: [zfs-discuss] Recovering a broken mirror

2010-01-13 Thread Jim Sloey
No. Only slice 6, from what I understand. I didn't create this (the person who did has left the company) and all I know is that the pool was mounted on /oraprod before it faulted.

Re: [zfs-discuss] Recovering a broken mirror

2010-01-15 Thread Jim Sloey
Never mind. It looks like the controller is flaky. Neither disk in the mirror is clean. Attempts to back up and recover the remaining disk produced I/O errors that were traced to the controller. Thanks for your help, Victor.

Re: [zfs-discuss] Oracle Performance - ZFS vs UFS

2010-02-13 Thread Jim Mauro
at 90% full. Read the link Richard sent for some additional information. Thanks, /jim Tony MacDoodle wrote: Was wondering if anyone has had any performance issues with Oracle running on ZFS as compared to UFS? Thanks

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-27 Thread Jim Horng
Unclear what you want to do. What's the goal for this exercise? If you want to replace the pool with larger disks and the pool is a mirror or raidz, you just replace one disk at a time and allow the pool to rebuild itself. Once all the disks have been replaced, it will automatically realize the
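A sketch of that one-at-a-time replacement, with hypothetical device names:

  # Swap one member for a larger disk and wait for the resilver to finish
  zpool replace tank c0t1d0 c0t5d0
  zpool status tank        # repeat for each member once resilvering completes
  # On builds that have the autoexpand property, let the pool grow by itself
  zpool set autoexpand=on tank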

Re: [zfs-discuss] Solaris 10 default caching segmap/vpm size

2010-04-27 Thread Jim Mauro
, ZFS will release memory being used by the ARC. But, if no one else wants it... /jim On Apr 27, 2010, at 9:07 PM, Brad wrote: What's the default size of the file system cache for Solaris 10 x86, and can it be tuned? I read various posts on the subject and it's confusing.

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Jim Horng
For this type of migration downtime is required. However, it can be reduced to a few hours, or even a few minutes, depending on how much change needs to be synced. I have done this many times on a NetApp filer, but it can be applied to ZFS as well. The first thing to consider is to only do the migration once
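A sketch of that sync-then-cutover pattern (hypothetical paths; rsync options may need adjusting for ACLs and extended attributes on your platform):

  # Pass 1: bulk copy while the source stays live (can take days)
  rsync -aH /oldpool/data/ /newpool/data/
  # Passes 2..N: each repeat only moves what changed since the previous pass
  rsync -aH --delete /oldpool/data/ /newpool/data/
  # Cutover: stop the writers, run one last short pass, then switch mounts
  rsync -aH --delete /oldpool/data/ /newpool/data/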

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Jim Horng
So, on the point of not needing a migration back: even at 144 disks, they won't be in the same RAID group. So figure out what the best RAID group size is for you, since ZFS doesn't support changing the number of disks in a raidz yet. I usually use the number of slots per shelf, or a good number is

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Jim Horng
Sorry, I need to correct myself. Mirroring LUNs on the Windows side to switch the storage pool under it is a great idea, and I think you can do this without downtime.

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Jim Horng
I understand your point. However, in most production systems the shelves are added incrementally, so it makes sense to tie the RAID group size to the number of slots per shelf. And in most cases, withstanding a shelf failure is too much overhead on storage anyway. For example, in his case he will have to configure 1+0

[zfs-discuss] OpenSolaris snv_134 zfs pool hangs after some time with dedup=on

2010-04-28 Thread Jim Horng
Sorry for the double post, but I think this was better suited for the zfs forum. I am running OpenSolaris snv_134 as a file server in a test environment, testing deduplication. I am transferring a large amount of data from our production server using rsync. The data pool is on a separate raidz1-0
