[zfs-discuss] Why RAID 5 stops working in 2009

2008-07-03 Thread Jim
Anyone here read the article "Why RAID 5 stops working in 2009" at http://blogs.zdnet.com/storage/?p=162 ? Does RAIDZ have the same chance of an unrecoverable read error as RAID 5 on Linux if the array has to be rebuilt because of a faulty disk? I imagine so, because of the physical constraints that
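The failure mode the article describes can be sanity-checked with the numbers it uses: a consumer drive specified at one unrecoverable read error (URE) per 10^14 bits, and roughly 11 TB of surviving data to read during a RAID 5 rebuild. A back-of-the-envelope sketch, with illustrative figures rather than measurements:

```shell
# Illustrative only: probability of hitting at least one URE while reading
# 11 TB of surviving disks during a rebuild, assuming the consumer-drive
# spec of 1 URE per 1e14 bits (the article's numbers, not measured data).
awk 'BEGIN {
    bits = 11 * 1e12 * 8                   # 11 TB of data to read, in bits
    p    = 1 - exp(bits * log(1 - 1e-14))  # P(at least one URE)
    printf "%.1f\n", p                     # prints 0.6
}'
```

Broadly speaking, RAIDZ changes the consequence rather than the odds: ZFS checksums let a scrub or resilver identify exactly which blocks a URE damaged, so the cost is individual blocks or files rather than the whole array.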

[zfs-discuss] Cannot replace a replacing device

2010-03-28 Thread Jim
I had a drive fail and replaced it with a new drive. During the resilvering process the new drive had write faults and was taken offline. These faults were caused by a broken SATA cable (drive checked with the manufacturer's software and all OK). A new cable fixed the failure. However, now the

Re: [zfs-discuss] Cannot replace a replacing device

2010-03-28 Thread Jim
Yes - but it does nothing. The drive remains FAULTED. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Cannot replace a replacing device

2010-03-29 Thread Jim
Thanks for the suggestion, but I have tried detaching and it refuses, reporting no valid replicas. Capture below. C3P0# zpool status pool: tank state: DEGRADED scrub: none requested config: NAME STATE READ WRITE CKSUM tank DEGRADED 0 0 0 raidz1 DEGRADED 0 0 0 ad4 ONLINE 0 0 0 ad6 ONLINE 0 0 0
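For a vdev stuck in a `replacing` state like this, the usual escape is to detach one member of the replacing pair by name or GUID. A sketch only; `ad8` stands in for the faulted replacement drive, whose actual name is not shown in the capture:

```shell
# Inspect the pool; the "replacing" vdev lists the old and new disks.
zpool status -v tank

# Hypothetical device name: try detaching the faulted replacement disk.
zpool detach tank ad8        # a numeric vdev GUID (from 'zdb tank') also works

# If ZFS refuses with "no valid replicas", clear the error state and
# re-attempt the replace:
zpool clear tank
zpool replace tank ad8
```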

Re: [zfs-discuss] Cannot replace a replacing device

2010-03-30 Thread Jim
Thanks - have run it and it returns pretty quickly. Given the output (attached), what action can I take? Thanks, James -- Dirty time logs: tank outage [300718,301073] length 356 outage [301138,301139] length 2 outage

Re: [zfs-discuss] howto: make a pool with ashift=X

2011-05-23 Thread Jim Klimov
either of these methods. I've used (1) successfully on my OI_148a by taking a precompiled binary, and I didn't get around to trying (2). Just my 2c :) //Jim

[zfs-discuss] Re: zfs panic when unpacking open solaris source

2006-05-12 Thread Jim Walker
Looks like CR 6411261, "busy intent log runs out of space on small pools". I found this one. I just bumped up the priority. Jim When unpacking the Solaris source onto a local disk on a system running build 39 I got the following panic: panic[cpu0]/thread=d2c8ade0: really out of space

[zfs-discuss] Re: RE: [Security-discuss] Proposal for new basic privileges related with

2006-06-21 Thread Jim Walker
. In the meantime I would code up your unit tests in ksh so they can be more easily integrated. We'll keep you posted as progress is made in releasing the test suite. Cheers, Jim

[zfs-discuss] Let's get cooking...

2006-06-21 Thread Jim Mauro
http://www.tech-recipes.com/solaris_system_administration_tips1446.html

[zfs-discuss] ZFS components for a minimal Solaris 10 U2 install?

2006-06-28 Thread Jim Connors
to think that this would be all that is needed for ZFS? Thanks, -- Jim C

Re: [zfs-discuss] Big JBOD: what would you do?

2006-07-17 Thread Jim Mauro
and RAS. /jim Gregory Shaw wrote: To maximize the throughput, I'd go with 8 5-disk raid-z{2} luns. Using that configuration, a full-width stripe write should be a single operation for each controller. In production, the application needs would probably dictate the resulting disk layout

Re: [zfs-discuss] zfs sucking down my memory!?

2006-07-21 Thread Jim Mauro
in the messages to get a better handle on the observed behaviour, but this certainly seems like something we should explore further. Watch this space. /jim At any rate, I don't think adding swap will fix the problem I am seeing, in that ZFS is not releasing its unused cache when applications need

Re: [zfs-discuss] ZFS components for a minimal Solaris 10 U2 install?

2006-07-25 Thread Jim Connors
(1M) command. Finding this out via trial and error, there is no dependency mentioned for SUNWsmapi in the SUNWzfsr depend file. Apologies if this is nitpicking, but is this missing dependency worthy of submitting a P5 CR? -- Jim C Jason Schroeder wrote: Dale Ghent wrote: On Jun 28, 2006

[zfs-discuss] ZFS state between reboots for RAM resident OS?

2006-07-25 Thread Jim Connors
available'. So the question is, what sort of state is required between reboots for ZFS? Regards, -- Jim C

[zfs-discuss] Re: ZFS state between reboots for RAM resident OS?

2006-07-25 Thread Jim Connors
I understand. Thanks. Just curious: ZFS manages NFS shares. Have you given any thought to what might be involved for ZFS to manage SMB shares in the same manner? This all goes towards my stateless OS theme. -- Jim C Eric Schrock wrote: You need the following file: /etc/zfs

[zfs-discuss] Re: ZFS state between reboots for RAM resident OS?

2006-07-25 Thread Jim Connors
/etc/zfs/zpool.cache be symbolically linked to /system/ZPOOL.CACHE -- Jim C This file 'knows' about all the pools on the system. These pools can typically be discovered via 'zpool import', but we can't do this at boot because: a. It can be really, really expensive (tasting every disk
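The snippet above explains the role of /etc/zfs/zpool.cache at boot. A sketch of what recovery looks like without the cache, and of the symlink the poster proposes (the /system/ZPOOL.CACHE path is the thread's suggestion, not a standard location):

```shell
# /etc/zfs/zpool.cache records which pools to open at boot. On a RAM-resident
# image that loses /etc across reboots, pools can still be recovered by
# tasting every disk explicitly -- the slow path the thread warns about:
zpool import            # scan attached disks for importable pools
zpool import tank       # import one by name (or by numeric pool GUID)

# The proposed workaround: keep the cache on persistent storage and link it in.
ln -s /system/ZPOOL.CACHE /etc/zfs/zpool.cache
```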

Re: [zfs-discuss] Assertion raised during zfs share?

2006-08-04 Thread Jim Connors
-- Jim C Incidentally, the explicit 'zfs share' isn't needed, as we automatically share the filesystem when the options are set (which did succeed). - Eric On Fri, Aug 04, 2006 at 12:42:02PM -0400, Jim Connors wrote: Working to get ZFS to run on a minimal Solaris 10 U2 configuration

Re: [zfs-discuss] Assertion raised during zfs share?

2006-08-04 Thread Jim Connors
Richard Elling wrote: Jim Connors wrote: Working to get ZFS to run on a minimal Solaris 10 U2 configuration. What does minimal mean? Most likely, you are missing something. -- richard Yeah. Looking at package and SMF dependencies plus a whole lot of trial and error, I've currently

[zfs-discuss] Re: Re: Recommendation ZFS on StorEdge 3320

2006-09-08 Thread Jim Sloey
Roch - PAE wrote: The hard part is getting a set of simple requirements. As you go into more complex data center environments you get hit with older Solaris revs, other OSs, SOX compliance issues, etc. etc. etc. The world where most of us seem to be playing with ZFS is on the lower end of

[zfs-discuss] Re: zfs hot spare not automatically getting used

2006-11-28 Thread Jim Hranicky
So is there a command to make the spare get used, or do I have to remove it as a spare and re-add it if it doesn't get used automatically? Is this a bug to be fixed, or will this always be the case when the disks aren't exactly the same size?
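When a hot spare is not pulled in automatically, it can be engaged by hand with `zpool replace`. A sketch with hypothetical device names (c1t4d0 as the failed disk, c1t9d0 as the configured spare):

```shell
# Hypothetical names: manually start the replacement the spare logic
# should have performed on its own.
zpool replace mypool c1t4d0 c1t9d0

# Once the resilver completes and the bad disk has been physically swapped:
zpool detach mypool c1t4d0

# Note: a spare even slightly smaller than the failed disk cannot be used,
# which may be exactly why it never attached automatically.
```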

[zfs-discuss] Re: zfs hot spare not automatically getting used

2006-11-29 Thread Jim Hranicky
I know this isn't necessarily ZFS specific, but after I reboot I spin the drives back up, but nothing I do (devfsadm, disks, etc) can get them seen again until the next reboot. I've got some older scsi drives in an old Andataco Gigaraid enclosure which I thought supported hot-swap, but I seem

[zfs-discuss] Managed to corrupt my pool

2006-11-30 Thread Jim Hranicky
, a Netapp, but the fact that I appear to have been able to nuke my pool by simulating a hardware error gives me pause. I'd love to know if I'm off-base in my worries. Jim

[zfs-discuss] Re: Managed to corrupt my pool

2006-12-05 Thread Jim Hranicky
So the questions are: - is this fixable? I don't see an inum I could run find on to remove, and I can't even do a zfs volinit anyway: nextest-01# zfs volinit cannot iterate filesystems: I/O error - would not enabling zil_disable have prevented this? - Should I have

[zfs-discuss] Re: Managed to corrupt my pool

2006-12-05 Thread Jim Hranicky
the pool it's going to give me pause in implementing a ZFS solution. Jim

[zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Jim Davis
We have two aging Netapp filers and can't afford to buy new Netapp gear, so we've been looking with a lot of interest at building NFS fileservers running ZFS as a possible future approach. Two issues have come up in the discussion - Adding new disks to a RAID-Z pool (Netapps handle adding

Re: [zfs-discuss] A Plea for Help: Thumper/ZFS/NFS/B43

2006-12-07 Thread Jim Mauro
the workload is not data/bandwidth intensive, but more attribute intensive. Note again zfs_create() is the heavy ZFS function, along with zfs_getattr. Perhaps it's the attribute-intensive nature of the load that is at the root of this. I can spend more time on this tomorrow (traveling today). Thanks, /jim

Re: [zfs-discuss] A Plea for Help: Thumper/ZFS/NFS/B43

2006-12-09 Thread Jim Mauro
component to the NFS configuration, so for any synchronous operation, I would expect things to be slower when done over NFS. Awaiting enlightenment :^) /jim

[zfs-discuss] Can't destroy corrupted pool

2006-12-11 Thread Jim Hranicky
Ok, so I'm planning on wiping my test pool that seems to have problems with non-spare disks being marked as spares, but I can't destroy it: # zpool destroy -f zmir cannot iterate filesystems: I/O error Anyone know how I can nuke this for good? Jim

[zfs-discuss] Re: Can't destroy corrupted pool

2006-12-11 Thread Jim Hranicky
BTW, I'm also unable to export the pool -- same error. Jim

[zfs-discuss] Re: Can't destroy corrupted pool

2006-12-11 Thread Jim Hranicky
Nevermind: # zfs destroy [EMAIL PROTECTED]:28 cannot open '[EMAIL PROTECTED]:28': I/O error Jim

[zfs-discuss] Re: Can't destroy corrupted pool

2006-12-11 Thread Jim Hranicky
on an individual basis). I'm running b51, but I'll try deleting the cache. Jim
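"Deleting the cache" here means removing zpool.cache so the damaged pool is simply forgotten at boot. A sketch of the workaround, which is destructive in the sense that every pool must be re-imported afterwards:

```shell
# Make the system forget a pool that can neither be destroyed nor exported.
# ALL pools disappear from the configuration, so plan to re-import the
# healthy ones afterwards.
rm /etc/zfs/zpool.cache
reboot

# After boot, bring back only the pools you want to keep:
zpool import            # list everything discoverable on the disks
zpool import goodpool   # re-import the healthy pool(s) by name
```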

[zfs-discuss] Re: Can't destroy corrupted pool

2006-12-11 Thread Jim Hranicky
of a drive to a pool wipes the pool of any previous data, especially any zfs metadata. I'll keep the list posted as I continue my tests. Jim

[zfs-discuss] zfs exported a live filesystem

2006-12-11 Thread Jim Hranicky
By mistake, I just exported my test filesystem while it was up and being served via NFS, causing my tar over NFS to start throwing stale file handle errors. Should I file this as a bug, or should I just not do that? :-)

[zfs-discuss] Re: zfs exported a live filesystem

2006-12-12 Thread Jim Hranicky
filesystems first - which would have failed without a -f flag because they were shared. So IMO it is a bug, or at least an RFE. Ok, where should I file an RFE? Jim

[zfs-discuss] Kickstart hot spare attachment

2006-12-12 Thread Jim Hranicky
as the other drives -- does that make a difference? - Is there something inherent to an old SCSI bus that causes spun-down drives to hang the system in some way, even if it's just hanging the zpool/zfs system calls? Would a thumper be more resilient to this? Jim

Re: [zfs-discuss] Project Proposal: Availability Suite

2007-01-26 Thread Jim Dunham
). So all that needs to be done is to design and build a new variant of the letter 'h', and find the place to separate ZFS into two pieces. - Jim Dunham That would be slick alternative to send/recv. Best Regards, Jason On 1/26/07, Jim Dunham [EMAIL PROTECTED] wrote: Project Overview: I propose

[zfs-discuss] Re: ZFS panics system during boot, after 11/06 upgrade

2007-01-29 Thread Jim Walker
to recover. Cheers, Jim

Re: [zfs-discuss] Project Proposal: Availability Suite

2007-01-29 Thread Jim Dunham
checking mechanisms. Jim Best Regards, Jason

Re: [zfs-discuss] Read Only Zpool: ZFS and Replication

2007-02-05 Thread Jim Dunham
, and this underlying storage behavior is not unique to SNDR, as it happens with other host-based replication and controller-based replication. Jim benr.

Re: [zfs-discuss] Project Proposal: Availability Suite

2007-02-05 Thread Jim Dunham
Frank, On Fri, 2 Feb 2007, Torrey McMahon wrote: Jason J. W. Williams wrote: Hi Jim, Thank you very much for the heads up. Unfortunately, we need the write-cache enabled for the application I was thinking of combining this with. Sounds like SNDR and ZFS need some more soak time together

Re: [zfs-discuss] Read Only Zpool: ZFS and Replication

2007-02-05 Thread Jim Dunham
failures between metadata blocks A B. Of course using an instantly accessible II snapshot of an SNDR secondary volume would work just fine, since the data being read is now point-in-time consistent, and static. - Jim I belive what you really need is 'zfs send continuos' feature. We

Re: [zfs-discuss] Read Only Zpool: ZFS and Replication

2007-02-05 Thread Jim Dunham
Ben Rockwood wrote: Jim Dunham wrote: Robert, Hello Ben, Monday, February 5, 2007, 9:17:01 AM, you wrote: BR I've been playing with replication of a ZFS Zpool using the BR recently released AVS. I'm pleased with things, but just BR replicating the data is only part of the problem. The big

[zfs-discuss] FROSUG February Meeting Announcement (2/22/2007)

2007-02-08 Thread Jim Walker
This month's FROSUG (Front Range OpenSolaris User Group) meeting is on Thursday, February 22, 2007. Our presentation is ZFS as a Root File System by Lori Alt. In addition, Jon Bowman will be giving an OpenSolaris Update, and we will also be doing an InstallFest. So, if you want help installing an

[zfs-discuss] UPDATE: FROSUG February Meeting (2/22/2007)

2007-02-15 Thread Jim Walker
***Meeting Update*** We will be having this month's meeting at the Omni Interlocken Resort in Broomfield and a conference call number is being provided for those who can not make the meeting in person, see Meeting Details below for more information. In addition, we will be discussing Solaris

Re: [zfs-discuss] Why number of NFS threads jumps to the max value?

2007-02-27 Thread Jim Mauro
? /jim Leon Koll wrote: Hello, gurus I need your help. During the benchmark test of NFS-shared ZFS file systems at some moment the number of NFS threads jumps to the maximal value, 1027 (NFSD_SERVERS was set to 1024). The latency also grows and the number of IOPS is going down. I've collected

Re: [zfs-discuss] C'mon ARC, stay small...

2007-03-15 Thread Jim Mauro
c02e2a08 uint64_t c_max = 0t1070318720 . . . Perhaps c_max does not do what I think it does? Thanks, /jim Jim Mauro wrote: Running an mmap-intensive workload on ZFS on a X4500, Solaris 10 11/06 (update 3). All file IO is mmap(file), read memory segment, unmap, close. Tweaked the arc

Re: [zfs-discuss] C'mon ARC, stay small...

2007-03-15 Thread Jim Mauro
the ARC size because for mmap-intensive workloads, it seems to hurt more than help (although, based on experiments up to this point, it's not hurting a lot). I'll do another reboot, and run it all down for you serially... /jim Thanks, -j On Thu, Mar 15, 2007 at 06:57:12PM -0400, Jim Mauro wrote

Re: [zfs-discuss] C'mon ARC, stay small...

2007-03-15 Thread Jim Mauro
= 0t221839360 ARC_mfu::print -d size lsize size = 0t26897219584 -- MFU list is almost 27GB ... lsize = 0t26869121024 Thanks, /jim

Re: [zfs-discuss] C'mon ARC, stay small...

2007-03-15 Thread Jim Mauro
Will try that now... /jim [EMAIL PROTECTED] wrote: I suppose I should have been more forward about making my last point. If the arc_c_max isn't set in /etc/system, I don't believe that the ARC will initialize arc.p to the correct value. I could be wrong about this; however, next time you

Re: [zfs-discuss] C'mon ARC, stay small...

2007-03-15 Thread Jim Mauro
c02e2a08 uint64_t c_max = 0t536870912--- c_max is 512MB ... } After a few runs of the workload ... arc::print -d size size = 0t536788992 Ah - looks like we're out of the woods. The ARC remains clamped at 512MB. Thanks! /jim [EMAIL PROTECTED] wrote: I suppose I should have been more
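Per the exchange above, the clamp only held once the ARC maximum was seeded in /etc/system before the ARC initialized. A sketch of the fragment; the `zfs_arc_max` tunable name is the one used in later Solaris 10 / Nevada builds, while on earlier bits this thread pokes `arc.c_max` directly with mdb:

```
* /etc/system fragment: cap the ZFS ARC at 512 MB (0x20000000 = 0t536870912,
* the c_max value shown in the thread). A reboot is required to take effect.
set zfs:zfs_arc_max = 0x20000000
```

After rebooting, the thread's own check applies: `echo 'arc::print -d size' | mdb -k` should stay at or below the cap under load.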

Re: [zfs-discuss] Re: ZFS with raidz

2007-03-20 Thread Jim Mauro
. But, that's me... :^) /jim

[zfs-discuss] The value of validating your backups...

2007-03-20 Thread Jim Mauro
http://www.cnn.com/2007/US/03/20/lost.data.ap/index.html

Re: [zfs-discuss] Re: ZFS with raidz

2007-03-20 Thread Jim Mauro
this problem is unique to ZFS, but I do not have experience or empirical data on mount time for 12k UFS, QFS, ext4, etc, file systems. There is an RFE filed on this: http://bugs.opensolaris.org/view_bug.do?bug_id=6478980 As I said, I wish I had a better answer. Thanks, /jim Kory Wheatley wrote

[zfs-discuss] REMINDER: FROSUG March Meeting Announcement (3/29/2007)

2007-03-28 Thread Jim Walker
(Jim Walker) 6:45pm - 8:30pm Sharemgr (Doug McCallum) Where: Sun Broomfield Campus Building 1 - Conference Center 500 Eldorado Blvd. Broomfield, CO 80021 The meeting is free and open to the public. Pizza and soft drinks will be served at the beginning of the meeting

Re: [zfs-discuss] C'mon ARC, stay small...

2007-04-01 Thread Jim Mauro
used for the dnlc. It might, but I need to look at the code to be sure... Let's start with this... /jim Jason J. W. Williams wrote: Hi Guys, Rather than starting a new thread I thought I'd continue this thread. I've been running Build 54 on a Thumper since Mid January and wanted to ask

Re: [zfs-discuss] Re: zfs boot image conversion kit is posted

2007-04-19 Thread Jim Mauro
virtual hard drives in the ZFS space. So I guess the answer to your question is theoretically yes, but I'm not aware of an implementation that would allow for such a configuration that exists today. I think I just confused the issue... ah well... /jim PS - FWIW, I have a zpool configured in nv62

Re: Fwd: [zfs-discuss] Re: Mac OS X Leopard to use ZFS

2007-06-10 Thread Jim Mauro
. /jim

[zfs-discuss] Re: [storage-discuss] Performance expectations of iscsi targets?

2007-06-19 Thread Jim Dunham
Jim Dunham Solaris, Storage Software Group Sun Microsystems, Inc. 1617 Southwood Drive Nashua, NH 03063 Email: [EMAIL PROTECTED] http

[zfs-discuss] ZFS test suite released on OpenSolaris.org

2007-06-26 Thread Jim Walker
to testing discuss at: http://www.opensolaris.org/os/community/testing/discussions. Happy Hunting, Jim

[zfs-discuss] Sharemgr Test Suite Released on OpenSolaris.org

2007-07-23 Thread Jim Walker
/test/ontest-stc2/src/suites/share/README Any questions about the Sharemgr test suite can be sent to testing discuss at: http://www.opensolaris.org/os/community/testing/discussions Cheers, Jim

Re: [zfs-discuss] Does iSCSI target support SCSI-3 PGR reservation ?

2007-07-27 Thread Jim Dunham
Jim Dunham Solaris, Storage Software Group Sun Microsystems, Inc. 1617 Southwood Drive Nashua, NH 03063 Email: [EMAIL PROTECTED] http://blogs.sun.com/avs

[zfs-discuss] New version of the ZFS test suite released

2007-08-03 Thread Jim Walker
-stc2/src/suites/zfs/ More information on the ZFS test suite is at: http://opensolaris.org/os/community/zfs/zfstestsuite/ Questions about the ZFS test suite can be sent to zfs-discuss at: http://www.opensolaris.org/jive/forum.jspa?forumID=80 Cheers, Jim

Re: [zfs-discuss] do zfs filesystems isolate corruption?

2007-08-11 Thread Jim Dunham
Jim Dunham Solaris, Storage Software Group Sun Microsystems, Inc. 1617 Southwood Drive Nashua, NH 03063 Email: [EMAIL PROTECTED] http://blogs.sun.com/avs

Re: [zfs-discuss] ZFS Under the Hood Presentation Slides

2007-08-17 Thread Jim Mauro
Is the referenced Laminated Handout on slide 3 available anywhere in any form electronically? If not, I'd be happy to create an electronic copy and make it publicly available. Thanks, /jim Joy Marshall wrote: It's taken a while but at last we have been able to post the ZFS Under

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-26 Thread Jim Dunham
system within the SAN, be it Fibre Channel or iSCSI. If the node wanting access to the data is distant, Availability Suite also offers Remote Replication. http://www.opensolaris.org/os/project/avs/ http://www.opensolaris.org/os/project/iscsitgt/ Jim Ronald, thanks for your comments. I

Re: [zfs-discuss] ZFS, XFS, and EXT4 compared

2007-08-30 Thread Jim Mauro
for the same test. Very odd. Still looking... Thanks, /jim Jeffrey W. Baker wrote: I have a lot of people whispering zfs in my virtual ear these days, and at the same time I have an irrational attachment to xfs based entirely on its lack of the 32000 subdirectory limit. I'm not afraid of ext4's

Re: [zfs-discuss] (politics) Sharks in the waters

2007-09-05 Thread Jim Mauro
for seminars and tutorials on Solaris. Each set was color print on heavy, glossy paper. That represented color printing of about 1600 pages total. All so the attorney could question me about 2 of the slides. I almost fell off my chair /jim Rob Windsor wrote: http://news.com.com/NetApp

Re: [zfs-discuss] question about uberblock blkptr

2007-09-17 Thread Jim Mauro
, /jim [EMAIL PROTECTED] wrote: Hi All, I have modified mdb so that I can examine data structures on disk using ::print. This works fine for disks containing ufs file systems. It also works for zfs file systems, but... I use the dva block number from the uberblock_t to print what

Re: [zfs-discuss] io:::start and zfs filenames?

2007-09-26 Thread Jim Mauro
) does not work either. Use the zfs r/w function entry points for now. What sayeth the ZFS team regarding the use of a stable DTrace provider with their file system? Thanks, /jim Neelakanth Nadgir wrote: io:::start probe does not seem to get zfs filenames in args[2]-fi_pathname. Any ideas how

Re: [zfs-discuss] io:::start and zfs filenames?

2007-09-26 Thread Jim Mauro
ufs lookup 8515 Thanks, /jim

Re: [zfs-discuss] io:::start and zfs filenames?

2007-09-26 Thread Jim Mauro
files, and it seems to work /jim Neelakanth Nadgir wrote: Jim I can't use zfs_read/write as the file is mmap()'d so no read/write! -neel On Sep 26, 2007, at 5:07 AM, Jim Mauro [EMAIL PROTECTED] wrote: Hi Neel - Thanks for pushing this out. I've been tripping over this for a while

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-03 Thread Jim Mauro
hurts sustainable performance will depend on several things, but I can envision scenarios where it's overhead I'd rather avoid if I could. Thanks, /jim

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-04 Thread Jim Mauro
buffering and the ARC. This is entirely my opinion (not that of Sun), and I've been wrong before. Thanks, /jim

Re: [zfs-discuss] ZFS File system and Oracle raw files compatibility

2007-10-19 Thread Jim Mauro
answered the question I think you wanted to ask. Thanks, /jim Dale Pannell wrote: I have a customer that would like to know if the ZFS file system is compatible with Oracle raw files. Any help you can provide is greatly appreciated. Please respond directly to me since I am not part

Re: [zfs-discuss] ZFS mirroring

2007-10-22 Thread Jim Dunham
/ Jim Dunham Storage Platform Software Group Sun Microsystems, Inc. 1617 Southwood Drive Nashua, NH 03063 http://blogs.sun.com/avs regards Mertol Ozyoney Storage Practice - Sales Manager Sun Microsystems, TR Istanbul TR Phone +902123352200 Mobile +905339310752 Fax +90212335

Re: [zfs-discuss] iSCSI target using ZFS filesystem as backing

2007-11-21 Thread Jim Dunham
/jelley jelley You can also try: zpool set shareiscsi=on telephone/jelley - Jim Now if I perform a 'iscsitadm list target', the iSCSI target appears like it should: Target: jelley iSCSI Name: iqn.1986-03.com.sun:02:fcaa1650-f202-4fef-b44b- b9452a237511.jelley Connections: 0 Now
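The full sequence behind the snippet above, for exposing a zvol as an iSCSI target via the shareiscsi property. A sketch reusing the thread's pool/zvol names; the 10g size is an arbitrary illustration:

```shell
# Create a zvol and share it as an iSCSI target, as discussed in the thread.
zfs create -V 10g telephone/jelley
zfs set shareiscsi=on telephone/jelley

# Verify that the target was created:
iscsitadm list target
```

Note the snippet's `zpool set shareiscsi=on` is presumably a slip for `zfs set`, since shareiscsi is a dataset property on the zvol, not a pool property.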

Re: [zfs-discuss] Yager on ZFS

2007-12-13 Thread Jim Mauro
Would you two please SHUT THE F$%K UP. Dear God, my kids don't go on like this. Please - let it die already. Thanks very much. /jim can you guess? wrote: Hello can, Thursday, December 13, 2007, 12:02:56 AM, you wrote: cyg On the other hand, there's always the possibility that someone

Re: [zfs-discuss] What does dataset is busy actually mean?

2007-12-13 Thread Jim Klimov
I've hit the problem myself recently, and mounting the filesystem cleared something in the brains of ZFS and allowed me to snapshot. http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg00812.html PS: I'll use Google before asking some questions, a'la (C) Bart Simpson That's how I found

Re: [zfs-discuss] ZFS with array-level block replication (TrueCopy, SRDF, etc.)

2007-12-14 Thread Jim Dunham
Jim Dunham Storage Platform Software Group Sun Microsystems, Inc. wk: 781.442.4042 http

Re: [zfs-discuss] Auto backup and auto restore of ZFS via Firewire drive

2007-12-17 Thread Jim Klimov
It's good he didn't mail you, now we all know some under-the-hood details via Googling ;) Thanks to both of you for this :)

Re: [zfs-discuss] Backup/replication system

2008-01-10 Thread Jim Dunham
vote for the best one. - Click: http://klik.wp.pl/?adr=http%3A%2F%2Fcorto.www.wp.pl%2Fas%2Fsportowiec2007.htmlsid=166 Jim Dunham

Re: [zfs-discuss] Backup/replication system

2008-01-10 Thread Jim Dunham
value is based on many variables, most of which are changing over time and usage patterns. eric Jim Dunham Storage Platform

Re: [zfs-discuss] Break a ZFS mirror and concatenate the disks

2008-01-10 Thread Jim Dunham
, and then revalidated all the data. As stated earlier, sacrificing redundancy (RAID 1 mirroring) for double the storage (RAID 0 concatenation) is being penny wise, and pound foolish. Jim Cindy Kory Wheatley wrote: Currently c2t2d0 c2t3d0 are setup in a mirror. I want to break the mirror and save

Re: [zfs-discuss] iscsi on zvol

2008-01-24 Thread Jim Dunham
data, from the point of view of Solaris, this disk is not a LUN, and thus can not be accessed as such. Jim I know, this is (also) iSCSI-related, but mostly a ZFS question. Thanks for your answers, Jan Dreyer

Re: [zfs-discuss] iscsi on zvol

2008-01-24 Thread Jim Dunham
and iSCSI start and stop at different times during Solaris boot and shutdown, so I would recommend using legacy mount points, or manual zpool import / exports when trying configurations at this level. Jim Dunham Storage Platform Software Group Sun Microsystems, Inc. wk: 781.442.4042 http

Re: [zfs-discuss] [Fwd: Re: Presales support on ZFS]

2008-01-29 Thread Jim Dunham
/white-papers/data_replication_strategies.pdf http://www.sun.com/storagetek/white-papers/enterprise_continuity.pdf Thanks Jim Dunham

Re: [zfs-discuss] ZIL controls in Solaris 10 U4?

2008-01-29 Thread Jim Mauro
into Solaris 10, I'm afraid I don't have the information. Hopefully, someone else will know... Thanks, /jim Jonathan Loran wrote: Is it true that Solaris 10 u4 does not have any of the nice ZIL controls that exist in the various recent Open Solaris flavors? I would like to move my ZIL to solid state

Re: [zfs-discuss] ZFS replication strategies

2008-02-01 Thread Jim Dunham
://www.nexenta.com/demos/auto-cdp.html Very nice job.. Its refreshing to see something I know oh too well, with an updated management interface, and a good portion of the plumbing hidden away. - Jim On Fri, 2008-02-01 at 10:15 -0800, Vincent Fox wrote: Does anyone have any particularly creative

Re: [zfs-discuss] mounting a copy of a zfs pool /file system while orginal is still active

2008-02-04 Thread Jim Dunham
? - Jim # zpool import foopool barpool -- Darren J Moffat Jim Dunham Storage Platform Software Group Sun Microsystems, Inc. work

Re: [zfs-discuss] [Fwd: Re: Presales support on ZFS]

2008-02-11 Thread Jim Dunham
requests. Jim Considering the solution we are offering to our customer (5 remote sites replicating to one central data-center) with ZFS (cheapest solution), I should consider 3 times the network load of a solution based on SNDR-AVS, and 3 times the storage space too... correct? I

Re: [zfs-discuss] iscsi core dumps when under IO

2008-02-22 Thread Jim Dunham
dumped core, being an assert in the T10 state machine. # mdb /core ::status ::quit Jim Dunham

Re: [zfs-discuss] Copying between pools

2008-03-14 Thread Jim Dunham
the new dmx as a sub-mirror of the old dmx and, after the sync is finished, remove the old dmx from the mirror. See: zpool replace [-f] pool old_device [new_device] - Jim Thank you,

Re: [zfs-discuss] iSCSI targets mapped to a VMWare ESX server

2008-04-07 Thread Jim Dunham
will be resolved. Jim We are running the latest Solaris 10 on an X4500 Thumper. We defined a test iSCSI LUN. Output below: Target: AkhanTemp/VM iSCSI Name: iqn.1986-03.com.sun:02:72406bf8-2f5f-635a-f64c-cb664935f3d1 Alias: AkhanTemp/VM Connections: 0 ACL list: TPGT list: LUN

Re: [zfs-discuss] Moving zfs pool to new machine?

2008-05-07 Thread Jim Dunham
the two disks or their contents. To see all available pools to import: zpool import This list should include your prior storage pool's name: zpool import pool-name - Jim The new disks are c6t0d0s0 and c6t1d0s0. They are identical disks set that were set up
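The move sequence the reply outlines, end to end. A sketch with `tank` as a stand-in pool name; the -f variant covers the common case where the old machine died before exporting:

```shell
# On the old host, release the pool cleanly (skip if that host is dead):
zpool export tank

# On the new host, scan attached disks for importable pools:
zpool import

# Import by name; -f forces the import if the pool was never cleanly
# exported and still looks "in use" by the old host:
zpool import -f tank
```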

Re: [zfs-discuss] Image with DD from ZFS partition

2008-05-08 Thread Jim Dunham
-- Jim Dunham Engineering Manager Storage Platform Software Group Sun Microsystems, Inc.

[zfs-discuss] SMC Webconsole 3.1 and ZFS Administration 1.0 - stacktraces in snv_b89

2008-05-29 Thread Jim Klimov
I've installed SXDE (snv_89) and found that the web console only listens on https://localhost:6789/ now, and the module for ZFS admin doesn't work. When I open the link, the left frame lists a stacktrace (below) and the right frame is plain empty. Any suggestions? I tried substituting

[zfs-discuss] Liveupgrade snv_77 with a ZFS root to snv_89

2008-05-29 Thread Jim Klimov
We have a test machine installed with a ZFS root (snv_77/x86 and rootpol/rootfs with grub support). Recently tried to update it to snv_89 which (in Flag Days list) claimed more support for ZFS boot roots, but the installer disk didn't find any previously installed operating system to upgrade.

Re: [zfs-discuss] Liveupgrade snv_77 with a ZFS root to snv_89

2008-05-30 Thread Jim Klimov
You mean this: https://www.opensolaris.org/jive/thread.jspa?threadID=46626&tstart=120 Elegant script, I like it, thanks :) Trying now... Some patching follows: -for fs in `zfs list -H | grep ^$ROOTPOOL/$ROOTFS | awk '{ print $1 };'` +for fs in `zfs list -H | grep ^$ROOTPOOL/$ROOTFS | grep -w

Re: [zfs-discuss] Liveupgrade snv_77 with a ZFS root to snv_89

2008-05-30 Thread Jim Klimov
Alas, didn't work so far. Can the problem be that the zfs-root disk is not the first on the controller (system boots from the grub on the older ufs-root slice), and/or that zfs is mirrored? And that I have snapshots and a data pool too? These are the boot disks (SVM mirror with ufs and grub):
