Re: [zfs-discuss] Cannot format 2.5TB ext disk (EFI)

2011-06-23 Thread Jim Dunham
try to do as Jim Dunham said? zpool create test_pool c5t0d0p0; zpool destroy test_pool; format -e c5t0d0p0; partition; print; control-D
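The run-together commands in the snippet above, separated into steps (a sketch; the device name c5t0d0p0 and the throwaway pool name test_pool come from the thread):

```shell
# Create and destroy a throwaway pool so ZFS writes an EFI label to the disk
zpool create test_pool c5t0d0p0
zpool destroy test_pool

# Then inspect the resulting label in format's expert mode
format -e c5t0d0p0
# at the format> prompt:
#   partition   (enter the partition menu)
#   print       (display the partition table)
#   Ctrl-D      (quit)
```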

Re: [zfs-discuss] Cannot format 2.5TB ext disk (EFI)

2011-06-22 Thread Jim Dunham
Kitty, I am trying to mount a WD 2.5TB external drive (was IFS:NTFS) to my OSS box. After connecting it to my Ultra24, I ran pfexec fdisk /dev/rdsk/c5t0d0p0 and changed the Type to EFI. Then, format -e or format showed the disk was configured with only 291.10GB. The following message about

Re: [zfs-discuss] Modify stmf_sbd_lu properties

2011-05-10 Thread Jim Dunham
Don, Is it possible to modify the GUID associated with a ZFS volume imported into STMF? To clarify- I have a ZFS volume I have imported into STMF and export via iscsi. I have a number of snapshots of this volume. I need to temporarily go back to an older snapshot without removing all
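For context, re-presenting a rolled-back zvol under its original GUID would go roughly as follows with the COMSTAR tools. This is a hypothetical sketch: the pool/volume names and GUID are made up, and whether stmfadm create-lu accepts the guid property on your build should be verified against stmfadm(1M).

```shell
# Note the GUID of the current LU (hypothetical names throughout)
stmfadm list-lu -v

# Remove the LU definition; the backing zvol itself is untouched
sbdadm delete-lu 600144F0C0C0DE0000000000000000AB

# Roll the zvol back to the older snapshot
zfs rollback tank/vol@old

# Re-create the LU, supplying the original GUID so initiators see the same device
stmfadm create-lu -p guid=600144F0C0C0DE0000000000000000AB /dev/zvol/rdsk/tank/vol
```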

Re: [zfs-discuss] Slices and reservations Was: Re: How long should an empty destroy take? snv_134

2011-03-07 Thread Jim Dunham
Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Brandon High Write caching will be disabled on devices that use slices. It can be turned back on by using format -e My experience has been, despite what the BPG
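The write-cache toggle referred to lives in format's expert-mode cache menu; a sketch (disk name hypothetical, menu item names per format(1M)):

```shell
format -e c0t0d0
# at the format> prompt:
#   cache          (cache menu; only visible in expert mode, -e)
#   write_cache    (write cache submenu)
#   display        (show the current write cache state)
#   enable         (turn the write cache back on)
```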

Re: [zfs-discuss] iSCSI initiator question

2011-02-26 Thread Jim Dunham
Roy, Sorry for crossposting, but I'm not really sure where this question belongs. I'm trying to troubleshoot a connection from an s10 box to a SANRAD iSCSI concentrator. After some network issues on the switch, the s10 box seems to lose iSCSI connection to the SANRAD box. The error

Re: [zfs-discuss] existing performance data for on-disk dedup?

2011-02-14 Thread Jim Dunham
Hi Janice, Hello. I am looking to see if performance data exists for on-disk dedup. I am currently in the process of setting up some tests based on input from Roch, but before I get started, thought I'd ask here. I find it somewhat interesting that you are asking this question on behalf

Re: [zfs-discuss] L2ARC - shared or associated with a pool?

2011-01-12 Thread Jim Dunham
Roy, Hi all There was some discussion on #opensolaris recently about L2ARC being dedicated to a pool, or shared. I figured since it's associated with a pool, it must be local, but I really don't know. An L2ARC is made up of one or more Cache Devices associated with a single ZFS storage
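Since cache devices are associated with a single pool, they are added to and removed from that pool explicitly; a minimal sketch with hypothetical device/pool names:

```shell
# Attach an SSD as an L2ARC cache device of the pool 'tank'
zpool add tank cache c4t0d0

# The cache device is listed under the pool it serves
zpool status tank

# And it can be removed from that pool (only) later
zpool remove tank c4t0d0
```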

Re: [zfs-discuss] zpool import is this safe to use -f option in this case ?

2010-11-16 Thread Jim Dunham
sridhar, I have done the following (which is required for my case): created a zpool (smpool) on a device/LUN from an array (IBM 6K) on host1; created an array-level snapshot of the device using dscli to another device, which was successful. Now I make the snapshot device visible to another

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-16 Thread Jim Dunham
On Nov 16, 2010, at 6:37 PM, Ross Walker wrote: On Nov 16, 2010, at 4:04 PM, Tim Cook t...@cook.ms wrote: AFAIK, esx/i doesn't support L4 hash, so that's a non-starter. For iSCSI one just needs to have a second (third or fourth...) iSCSI session on a different IP to the target and run

Re: [zfs-discuss] zpool import is this safe to use -f option in this case ?

2010-11-16 Thread Jim Dunham
Tim, On Wed, Nov 17, 2010 at 10:12 AM, Jim Dunham james.dun...@oracle.com wrote: sridhar, I have done the following (which is required for my case) Created a zpool (smpool) on a device/LUN from an array (IBM 6K) on host1 created a array level snapshot of the device using dscli

Re: [zfs-discuss] adding new disks and setting up a raidz2

2010-10-14 Thread Jim Dunham
Derek, I am relatively new to OpenSolaris / ZFS (have been using it for maybe 6 months). I recently added 6 new drives to one of my servers and I would like to create a new RAIDZ2 pool called 'marketData'. I figured the command to do this would be something like: zpool create
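The command the poster is reaching for would look like the following (disk names are hypothetical; raidz2 gives double parity across the six drives):

```shell
# Six new drives as one double-parity raidz2 vdev in a new pool
zpool create marketData raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
zpool status marketData
```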

Re: [zfs-discuss] ZFS pool issues with COMSTAR

2010-10-08 Thread Jim Dunham
On Oct 8, 2010, at 2:06 AM, Wolfraider wrote: We have a weird issue with our ZFS pool and COMSTAR. The pool shows online with no errors, everything looks good but when we try to access zvols shared out with COMSTAR, windows reports that the devices have bad blocks. Everything has been

Re: [zfs-discuss] Finding corrupted files

2010-10-06 Thread Jim Dunham
Budy, No - not a trick question, but maybe I didn't make myself clear. Is there a way to discover such bad files other than trying to actually read from them one by one, say using cp, or by sending a snapshot elsewhere? As noted in your original email, ZFS reports any corruption using the

Re: [zfs-discuss] [storage-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-05 Thread Jim Dunham
Jim, On May 4, 2010, at 3:45 PM, Jim Dunham wrote: On May 4, 2010, at 2:43 PM, Richard Elling wrote: On May 4, 2010, at 5:19 AM, Przemyslaw Ceglowski wrote: It does not look like it is: r...@san01a:/export/home/admin# svcs -a | grep iscsi online May_01

Re: [zfs-discuss] [storage-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-04 Thread Jim Dunham
Przem, On May 4, 2010, at 2:43 PM, Richard Elling wrote: On May 4, 2010, at 5:19 AM, Przemyslaw Ceglowski wrote: It does not look like it is: r...@san01a:/export/home/admin# svcs -a | grep iscsi online May_01 svc:/network/iscsi/initiator:default online May_01

Re: [zfs-discuss] iscsi/comstar performance

2009-10-19 Thread Jim Dunham
Frank Middleton wrote: On 10/13/09 18:35, Albert Chin wrote: Maybe this will help: http://mail.opensolaris.org/pipermail/storage-discuss/2009-September/007118.html Well, it does seem to explain the scrub problem. I think it might also explain the slow boot and startup problem - the VM

Re: [zfs-discuss] shareiscsi not sharing

2009-05-28 Thread Jim Dunham
Ian, Ian Collins wrote: I have a volume in a pool that was created under Solaris 10 update 6 that I was sharing over iSCSI to some VMs. The pool in now imported on an update 7 system. For some reason, the volume won't share. shareiscsi is on, but iscsiadm list target shows nothing.
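The legacy shareiscsi workflow being debugged looks like this (pool/volume names hypothetical; shareiscsi does nothing unless the old iscsitgt daemon is running):

```shell
# Mark the zvol for sharing via the legacy iscsitgt daemon
zfs set shareiscsi=on tank/iscsivol
zfs get shareiscsi tank/iscsivol

# Verify the target daemon is actually online
svcs -a | grep iscsitgt

# List configured targets on the target host
iscsitadm list target -v
```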

Re: [zfs-discuss] AVS and ZFS demos - link broken?

2009-03-19 Thread Jim Dunham
James, The links to the Part 1 and Part 2 demos on this page (http://www.opensolaris.org/os/project/avs/Demos/ ) appear to be broken. http://www.opensolaris.org/os/project/avs/Demos/AVS-ZFS-Demo-V1/ http://www.opensolaris.org/os/project/avs/Demos/AVS-ZFS-Demo-V2/ They still work for me.

Re: [zfs-discuss] Comstar production-ready?

2009-03-13 Thread Jim Dunham
On Mar 4, 2009, at 7:04 AM, Jacob Ritorto wrote: Caution: I built a system like this and spent several weeks trying to get iscsi share working under Solaris 10 u6 and older. It would work fine for the first few hours but then performance would start to degrade, eventually becoming so poor as

[zfs-discuss] ZFS and SNDR..., now I'm confused.

2009-03-06 Thread Jim Dunham
A recent increase in email about ZFS and SNDR (the replication component of Availability Suite), has given me reasons to post one of my replies. Well, now I'm confused! A colleague just pointed me towards your blog entry about SNDR and ZFS which, until now, I thought was not a supported

Re: [zfs-discuss] ZFS and SNDR..., now I'm confused.

2009-03-06 Thread Jim Dunham
Andrew, Jim Dunham wrote: ZFS the filesystem is always on-disk consistent, and ZFS does maintain filesystem consistency through coordination between the ZPL (ZFS POSIX Layer) and the ZIL (ZFS Intent Log). Unfortunately for SNDR, ZFS caches a lot of an application's filesystem data

Re: [zfs-discuss] [storage-discuss] ZFS and SNDR..., now I'm confused.

2009-03-06 Thread Jim Dunham
Nicolas, On Fri, Mar 06, 2009 at 10:05:46AM -0700, Neil Perrin wrote: On 03/06/09 08:10, Jim Dunham wrote: A simple test I performed to verify this, was to append to a ZFS file (no synchronous filesystem options being set) a series of blocks with a block order pattern contained within

Re: [zfs-discuss] need to add space to zfs pool that's part of SNDR replication

2009-02-02 Thread Jim Dunham
BJ Quinn wrote: Then what if I ever need to export the pool on the primary server and then import it on the replicated server. Will ZFS know which drives should be part of the stripe even though the device names across servers may not be the same? Yes, zpool import will figure it
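The reason device names need not match across servers is that zpool import identifies pool members by their on-disk labels, not by their cXtYdZ names; a sketch with a hypothetical pool name:

```shell
# On the primary: cleanly export the pool (when the host is still up)
zpool export tank

# On the other server: scan all attached devices for importable pools
zpool import

# Import by name; ZFS reassembles the stripe from the device labels it finds
zpool import tank
```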

Re: [zfs-discuss] Oracle raw volumes

2009-02-01 Thread Jim Dunham
Stefan, one question related with this: would KAIO be supported on such configuration ? Yes, but not as one might expect. As seen from the truss output below, the call to kaio() fails with EBADFD, a direct result of the fact that for ZFS its cb_ops interface for asynchronous read and

Re: [zfs-discuss] Two-level ZFS

2009-02-01 Thread Jim Dunham
What's required to make it work? Consider a file server running ZFS that exports a volume with iSCSI. Consider also an application server that imports the LUN with iSCSI and runs a ZFS filesystem

Re: [zfs-discuss] need to add space to zfs pool that's part of SNDR replication

2009-01-29 Thread Jim Dunham

Re: [zfs-discuss] need to add space to zfs pool that's part of SNDR replication

2009-01-28 Thread Jim Dunham
for each drive to be replicated, or is there a better way to do it? Thanks!

Re: [zfs-discuss] ? Changing storage pool serial number

2009-01-27 Thread Jim Dunham
Hi Tim, I took a look at the archives and I have seen a few threads about using array block level snapshots with ZFS and how we face the old issue that we used to see with logical volumes and unique IDs (quite correctly) stopping the same volume being presented twice to the same server.

Re: [zfs-discuss] [storage-discuss] AVS on opensolaris 2008.11

2009-01-26 Thread Jim Dunham
Ahmed, The setup is not there anymore, however, I will share as much details as I have documented. Could you please post the commands you have used and any differences you think might be important. Did you ever test with 2008.11 ? instead of sxce ? Specific to the following: While we

Re: [zfs-discuss] [storage-discuss] AVS on opensolaris 2008.11

2009-01-26 Thread Jim Dunham
Richard Elling wrote: Jim Dunham wrote: Ahmed, The setup is not there anymore, however, I will share as much details as I have documented. Could you please post the commands you have used and any differences you think might be important. Did you ever test with 2008.11 ? instead

Re: [zfs-discuss] [storage-discuss] AVS on opensolaris 2008.11

2009-01-25 Thread Jim Dunham
Ahmed, Thanks for your informative reply. I am involved with kristof (original poster) in the setup, please allow me to reply below Was the follow 'test' run during resynchronization mode or replication mode? Neither, testing was done while in logging mode. This was chosen to simply

Re: [zfs-discuss] [storage-discuss] AVS on opensolaris 2008.11

2009-01-24 Thread Jim Dunham
Kristof, Jim Yes, in step 5 commands were executed on both nodes. We did some more tests with opensolaris 2008.11. (build 101b) We managed to get AVS setup up and running, but we noticed that performance was really bad. When we configured a zfs volume for replication, we noticed that

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-19 Thread Jim Dunham
Richard, Ross wrote: The problem is they might publish these numbers, but we really have no way of controlling what number manufacturers will choose to use in the future. If for some reason future 500GB drives all turn out to be slightly smaller than the current ones you're going to

Re: [zfs-discuss] Aggregate Pool I/O

2009-01-17 Thread Jim Dunham
Brad, I'd like to track a server's ZFS pool I/O throughput over time. What's a good data source to use for this? I like zpool iostat for this, but if I poll at two points in time I would get a number since boot (e.g. 1.2M) and a current number (e.g. 1.3K). If I use the current number
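Running zpool iostat with an interval avoids the since-boot average problem: the first report is the average since boot, and each subsequent report covers only that interval. A sketch with a hypothetical pool name:

```shell
# First line = average since boot; following lines = per-5-second rates
zpool iostat tank 5

# Or take a fixed number of samples, e.g. for periodic logging
zpool iostat tank 5 3
```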

Re: [zfs-discuss] zfs iscsi sustained write performance

2009-01-12 Thread Jim Dunham
Roch Bourbonnais wrote: On Jan 4, 2009, at 9:09 PM, milosz wrote: thanks for your responses, guys... the nagle's tweak is the first thing i did, actually. not sure what the network limiting factors could be here... there's no switch, jumbo frames are on... maybe it's the e1000g driver? it's

Re: [zfs-discuss] ZFS, Kernel Panic on import

2008-11-07 Thread Jim Dunham
Andrew, I woke up yesterday morning, only to discover my system kept rebooting.. It's been running fine for the last while. I upgraded to snv 98 a couple weeks back (from 95), and had upgraded my RaidZ Zpool from version 11 to 13 for improved scrub performance. After some research

Re: [zfs-discuss] [storage-discuss] Help with bizarre S10U5 / zfs / iscsi / thumper / Oracle RAC problem

2008-11-04 Thread Jim Dunham
George, I'm looking for any pointers or advice on what might have happened to cause the following problem... To run Oracle RAC on iSCSI Target LUs, accessible by three or more iSCSI Initiator nodes, requires support for SCSI-3 Persistent Reservations. This functionality was added to

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-20 Thread Jim Dunham

Re: [zfs-discuss] ZFS Replication Question

2008-10-10 Thread Jim Dunham

Re: [zfs-discuss] An slog experiment (my NAS can beat up your NAS)

2008-10-08 Thread Jim Dunham
functionality on a single node, use host based or controller based mirroring software. --Joe

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-19 Thread Jim Dunham
replication 'smarter'. -- Brent Jones

Re: [zfs-discuss] ZPOOL Import Problem

2008-09-17 Thread Jim Dunham
On Sep 16, 2008, at 5:39 PM, Miles Nordin wrote: jd == Jim Dunham [EMAIL PROTECTED] writes: jd If at the time the SNDR replica is deleted the set was jd actively replicating, along with ZFS actively writing to the jd ZFS storage pool, I/O consistency will be lost, leaving ZFS

Re: [zfs-discuss] ZPOOL Import Problem

2008-09-13 Thread Jim Dunham
be placed into logging mode first. Then ZFS will be left I/O consistent after the disable is done. Corey

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-12 Thread Jim Dunham
On Sep 11, 2008, at 5:16 PM, A Darren Dunham wrote: On Thu, Sep 11, 2008 at 04:28:03PM -0400, Jim Dunham wrote: On Sep 11, 2008, at 11:19 AM, A Darren Dunham wrote: On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote: The issue with any form of RAID 1, is that the instant a disk

Re: [zfs-discuss] ZPOOL Import Problem

2008-09-12 Thread Jim Dunham
# --- Importing on the primary gives the same error. Anyone have any ideas? Thanks Corey

Re: [zfs-discuss] Will ZFS stay consistent with AVS/ZFS and async replication

2008-09-12 Thread Jim Dunham

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-11 Thread Jim Dunham
for opportunities to learn about ZFS, AVS and other replication technologies. In their day, similar War wounds and successful battles have been had regarding AVS in use with UFS, QFS, VxFS, SVM, VxVM, Oracle, Sybase and others.

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-11 Thread Jim Dunham

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-11 Thread Jim Dunham
On Sep 11, 2008, at 11:19 AM, A Darren Dunham wrote: On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote: The issue with any form of RAID 1, is that the instant a disk fails out of the RAID set, with the next write I/O to the remaining members of the RAID set, the failed disk (and its

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-07 Thread Jim Dunham

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-07 Thread Jim Dunham

Re: [zfs-discuss] Image with DD from ZFS partition

2008-05-08 Thread Jim Dunham

Re: [zfs-discuss] Moving zfs pool to new machine?

2008-05-07 Thread Jim Dunham
Steve, Can someone tell me or point me to links that describe how to do the following. I had a machine that crashed and I want to move to a newer machine anyway. The boot disk on the old machine is fried. The two disks I was using for a zfs pool on that machine need to be moved to a

Re: [zfs-discuss] iSCSI targets mapped to a VMWare ESX server

2008-04-07 Thread Jim Dunham
Mertol Ozyoney wrote: Hi All ; There are a set of issues being looked at that prevent the VMWare ESX server from working with the Solaris iSCSI Target. http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6597310 At this time there is no target date when this issues will

Re: [zfs-discuss] Cpying between pools

2008-03-14 Thread Jim Dunham
Vahid, We need to move about 1T of data from one zpool on EMC dmx-3000 to another storage device (dmx-3). DMX-3 can be visible on the same host where dmx-3000 is being used on or from another host. What is the best way to transfer the data from dmx-3000 to dmx-3? Is it possible to add

Re: [zfs-discuss] iscsi core dumps when under IO

2008-02-22 Thread Jim Dunham
dumped core, being an assert in the T10 state machine. # mdb /core ::status ::quit

Re: [zfs-discuss] [Fwd: Re: Presales support on ZFS]

2008-02-11 Thread Jim Dunham
Enrico, Is there any forecast to improve the efficiency of the replication mechanisms of ZFS ? Fishwork - new NAS release. I would take some time to talk with and understand exactly what the customer's expectations are for replication. I would not base my decision on the cost of

Re: [zfs-discuss] mounting a copy of a zfs pool /file system while orginal is still active

2008-02-04 Thread Jim Dunham
? - Jim # zpool import foopool barpool -- Darren J Moffat

Re: [zfs-discuss] ZFS replication strategies

2008-02-01 Thread Jim Dunham

Re: [zfs-discuss] [Fwd: Re: Presales support on ZFS]

2008-01-29 Thread Jim Dunham
/white-papers/data_replication_strategies.pdf http://www.sun.com/storagetek/white-papers/enterprise_continuity.pdf Thanks

Re: [zfs-discuss] iscsi on zvol

2008-01-24 Thread Jim Dunham

Re: [zfs-discuss] iscsi on zvol

2008-01-24 Thread Jim Dunham
and iSCSI start and stop at different times during Solaris boot and shutdown, so I would recommend using legacy mount points, or manual zpool import / exports when trying configurations at this level.

Re: [zfs-discuss] Backup/replication system

2008-01-10 Thread Jim Dunham

Re: [zfs-discuss] Backup/replication system

2008-01-10 Thread Jim Dunham
value is based on many variables, most of which are changing over time and usage patterns. eric

Re: [zfs-discuss] Break a ZFS mirror and concatenate the disks

2008-01-10 Thread Jim Dunham
Kory, Yes, I get it now. You want to detach one of the disks and then re-add the same disk, but lose the redundancy of the mirror. Just as long as you realize you're losing the redundancy. I'm wondering if zpool add will complain. I don't have a system to try this at the moment. The
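The operation under discussion, sketched with hypothetical pool/disk names; note the resulting pool has no redundancy at all:

```shell
# Break the mirror: detach one side, leaving a single-disk pool
zpool detach tank c1t1d0

# Re-add the freed disk as a second top-level vdev (a concat/stripe, not a mirror)
zpool add tank c1t1d0
# if zpool add objects to the resulting non-redundant layout, -f overrides
```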

Re: [zfs-discuss] ZFS with array-level block replication (TrueCopy, SRDF, etc.)

2007-12-14 Thread Jim Dunham

Re: [zfs-discuss] iSCSI target using ZFS filesystem as backing

2007-11-21 Thread Jim Dunham

Re: [zfs-discuss] ZFS mirroring

2007-10-22 Thread Jim Dunham

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-26 Thread Jim Dunham
system. Of course everything that you and Tim and Casper said is true, but I'm still inclined to try that scenario. Rainer

Re: [zfs-discuss] do zfs filesystems isolate corruption?

2007-08-11 Thread Jim Dunham

Re: [zfs-discuss] Does iSCSI target support SCSI-3 PGR reservation ?

2007-07-27 Thread Jim Dunham

[zfs-discuss] Re: [storage-discuss] Performance expectations of iscsi targets?

2007-06-19 Thread Jim Dunham

Re: [zfs-discuss] Read Only Zpool: ZFS and Replication

2007-02-05 Thread Jim Dunham
Ben, I've been playing with replication of a ZFS Zpool using the recently released AVS. I'm pleased with things, but just replicating the data is only part of the problem. The big question is: can I have a zpool open in 2 places? No. The ability to have a zpool open in two place would

Re: [zfs-discuss] Project Proposal: Availability Suite

2007-02-05 Thread Jim Dunham
Frank, On Fri, 2 Feb 2007, Torrey McMahon wrote: Jason J. W. Williams wrote: Hi Jim, Thank you very much for the heads up. Unfortunately, we need the write-cache enabled for the application I was thinking of combining this with. Sounds like SNDR and ZFS need some more soak time together

Re: [zfs-discuss] Read Only Zpool: ZFS and Replication

2007-02-05 Thread Jim Dunham
Robert, Hello Ben, Monday, February 5, 2007, 9:17:01 AM, you wrote: BR I've been playing with replication of a ZFS Zpool using the BR recently released AVS. I'm pleased with things, but just BR replicating the data is only part of the problem. The big BR question is: can I have a zpool open

Re: [zfs-discuss] Read Only Zpool: ZFS and Replication

2007-02-05 Thread Jim Dunham
Ben Rockwood wrote: Jim Dunham wrote: Robert, Hello Ben, Monday, February 5, 2007, 9:17:01 AM, you wrote: BR I've been playing with replication of a ZFS Zpool using the BR recently released AVS. I'm pleased with things, but just BR replicating the data is only part of the problem. The big

Re: [zfs-discuss] Project Proposal: Availability Suite

2007-01-29 Thread Jim Dunham
Jason, Thank you for the detailed explanation. It is very helpful to understand the issue. Is anyone successfully using SNDR with ZFS yet? Of the opportunities I've been involved with the answer is yes, but so far I've not seen SNDR with ZFS in a production environment, but that does not mean

Re: [zfs-discuss] Project Proposal: Availability Suite

2007-01-26 Thread Jim Dunham
). So all that needs to be done is to design and build a new variant of the letter 'h', and find the place to separate ZFS into two pieces. - Jim Dunham That would be slick alternative to send/recv. Best Regards, Jason On 1/26/07, Jim Dunham [EMAIL PROTECTED] wrote: Project Overview: I propose

Re: [zfs-discuss] sharing a storage array

2006-07-28 Thread Jim Dunham - Sun Microsystems
Richard Elling wrote: Danger Will Robinson... Jeff Victor wrote: Jeff Bonwick wrote: If one host failed I want to be able to do a manual mount on the other host. Multiple hosts writing to the same pool won't work, but you could indeed have two pools, one for each host, in a dual