try to do as Jim Dunham said?
zpool create test_pool c5t0d0p0
zpool destroy test_pool
format -e c5t0d0p0
partition
print
Control-D
Kitty,
I am trying to mount a WD 2.5TB external drive (was IFS:NTFS) to my OSS box.
After connecting it to my Ultra24, I ran pfexec fdisk /dev/rdsk/c5t0d0p0 and
changed the Type to EFI. Then, format -e or format showed the disk was
configured with 291.10GB only.
The following message about
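A minimal sketch of one way to check and relabel such a drive (the device name c5t0d0 is taken from the excerpt above; everything else is an assumption). Note that fdisk and format work on the p0 device, while zpool normally wants the whole-disk name and will write its own EFI label:
  pfexec format -e c5t0d0
  > type        # verify the reported geometry/capacity matches the 2.5TB drive
  > label       # with -e, choose the EFI label for disks larger than 2TB
  > quit
  pfexec zpool create extpool c5t0d0   # 'extpool' is a hypothetical pool name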
Don,
Is it possible to modify the GUID associated with a ZFS volume imported into
STMF?
To clarify- I have a ZFS volume I have imported into STMF and export via
iscsi. I have a number of snapshots of this volume. I need to temporarily go
back to an older snapshot without removing all
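As a hedged sketch only (the GUID, pool, and volume names below are placeholders, not from this thread), COMSTAR lets you delete a logical unit and re-create one over a different backing device while specifying the same GUID, which keeps the initiator-visible identity stable:
  stmfadm list-lu -v                          # note the GUID of the existing LU
  stmfadm delete-lu 600144F0AAAA...           # remove the LU; the zvol data is untouched
  zfs clone tank/vol@old tank/vol_old         # or: zfs rollback -r tank/vol@old
  stmfadm create-lu -p guid=600144F0AAAA... /dev/zvol/rdsk/tank/vol_old
  stmfadm add-view 600144F0AAAA...            # re-add view entries as needed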
Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Brandon High
Write caching will be disabled on devices that use slices. It can be
turned back on by using format -e
My experience has been, despite what the BPG
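For reference, a sketch of re-enabling the write cache from format's expert menu (the disk name is a placeholder; whether the cache menu is offered depends on the drive and driver):
  pfexec format -e c0t1d0
  > cache
  > write_cache
  > display      # show the current setting
  > enable       # turn the write cache back on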
Roy,
Sorry for crossposting, but I'm not really sure where this question belongs.
I'm trying to troubleshoot a connection from an s10 box to a SANRAD iSCSI
concentrator. After some network issues on the switch, the s10 box seems to
lose iSCSI connection to the SANRAD box. The error
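A small sketch of the initiator-side checks one might run on the s10 box after such an event (no SANRAD-specific commands assumed):
  iscsiadm list discovery          # how targets are being discovered (static, SendTargets, iSNS)
  iscsiadm list target -v          # per-target session state, connection count, login parameters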
Hi Janice,
Hello. I am looking to see if performance data exists for on-disk dedup. I
am currently in the process of setting up some tests based on input from
Roch, but before I get started, thought I'd ask here.
I find it somewhat interesting that you are asking this question on behalf
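For anyone running similar tests, a hedged sketch of enabling dedup on a test dataset and watching the dedup table (pool and dataset names are placeholders):
  zfs set dedup=on tank/dduptest      # new writes into this dataset are deduplicated
  zpool get dedupratio tank           # overall dedup ratio for the pool
  zdb -DD tank                        # DDT histogram: unique vs. duplicate block counts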
Roy,
Hi all
There was some discussion on #opensolaris recently about L2ARC being
dedicated to a pool, or shared. I figured since it's associated with a pool,
it must be local, but I really don't know.
An L2ARC is made up of one or more Cache Devices associated with a single ZFS
storage pool.
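That matches how the commands are structured: a cache device is added to, reported under, and removed from one specific pool. A sketch with hypothetical names:
  zpool add tank cache c2t5d0      # dedicate an SSD as L2ARC for pool 'tank'
  zpool iostat -v tank             # the cache device shows up under this pool only
  zpool remove tank c2t5d0         # detach the cache device from that pool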
sridhar,
I have done the following (which is required for my case)
Created a zpool (smpool) on a device/LUN from an array (IBM 6K) on host1,
created an array-level snapshot of the device using dscli to another device,
which was successful.
Now I make the snapshot device visible to another host
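A hedged sketch of what the import on the second host typically looks like (host and pool names are from the excerpt; -f is needed because the pool was never exported from host1):
  # on host2, after the snapshot LUN is mapped to it
  zpool import            # scan devices; the copy shows up as 'smpool'
  zpool import -f smpool  # force-import the snapshot copy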
On Nov 16, 2010, at 6:37 PM, Ross Walker wrote:
On Nov 16, 2010, at 4:04 PM, Tim Cook t...@cook.ms wrote:
AFAIK, esx/i doesn't support L4 hash, so that's a non-starter.
For iSCSI one just needs to have a second (third or fourth...) iSCSI session
on a different IP to the target and run
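One hedged way to express that on a Solaris initiator (the addresses are documentation placeholders) is to point discovery at two portal addresses and let MPxIO coalesce the resulting sessions into one multipathed LUN:
  iscsiadm add discovery-address 192.0.2.10:3260
  iscsiadm add discovery-address 192.0.2.20:3260
  iscsiadm modify discovery --sendtargets enable
  # MPxIO for the iSCSI initiator is governed by mpxio-disable in /kernel/drv/iscsi.conf
  mpathadm list lu         # both sessions should appear as paths to one logical unit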
Tim,
On Wed, Nov 17, 2010 at 10:12 AM, Jim Dunham james.dun...@oracle.com wrote:
sridhar,
I have done the following (which is required for my case)
Created a zpool (smpool) on a device/LUN from an array (IBM 6K) on host1,
created an array-level snapshot of the device using dscli
Derek,
I am relatively new to OpenSolaris / ZFS (have been using it for maybe 6
months). I recently added 6 new drives to one of my servers and I would like
to create a new RAIDZ2 pool called 'marketData'.
I figured the command to do this would be something like:
zpool create
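Something along these lines, assuming six whole disks (the device names are placeholders, not Derek's actual drives):
  zpool create marketData raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
  zpool status marketData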
On Oct 8, 2010, at 2:06 AM, Wolfraider wrote:
We have a weird issue with our ZFS pool and COMSTAR. The pool shows online
with no errors, everything looks good but when we try to access zvols shared
out with COMSTAR, windows reports that the devices have bad blocks.
Everything has been
Budy,
No - not a trick question., but maybe I didn't make myself clear.
Is there a way to discover such bad files other than trying to actually read
from them one by one, say using cp or by sending a snapshot elsewhere?
As noted by your original email, ZFS reports on any corruption using the
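For completeness, the usual sequence (the pool name is a placeholder): a scrub touches every allocated block, and status -v then lists any files with unrecoverable errors, so nothing has to be read back file by file:
  zpool scrub tank
  zpool status -v tank     # the 'errors:' section lists affected files, if any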
Jim,
On May 4, 2010, at 3:45 PM, Jim Dunham wrote:
On May 4, 2010, at 2:43 PM, Richard Elling wrote:
On May 4, 2010, at 5:19 AM, Przemyslaw Ceglowski wrote:
It does not look like it is:
r...@san01a:/export/home/admin# svcs -a | grep iscsi
online May_01
Przem,
On May 4, 2010, at 2:43 PM, Richard Elling wrote:
On May 4, 2010, at 5:19 AM, Przemyslaw Ceglowski wrote:
It does not look like it is:
r...@san01a:/export/home/admin# svcs -a | grep iscsi
online May_01 svc:/network/iscsi/initiator:default
online May_01
Frank Middleton wrote:
On 10/13/09 18:35, Albert Chin wrote:
Maybe this will help:
http://mail.opensolaris.org/pipermail/storage-discuss/2009-September/007118.html
Well, it does seem to explain the scrub problem. I think it might
also explain the slow boot and startup problem - the VM
Ian,
Ian Collins wrote:
I have a volume in a pool that was created under Solaris 10 update
6 that I was sharing over iSCSI to some VMs. The pool in now
imported on an update 7 system.
For some reason, the volume won't share. shareiscsi is on, but
iscsiadm list target shows nothing.
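A hedged checklist for the legacy (pre-COMSTAR) shareiscsi path, which is what update 6/7 would have used (pool and volume names are placeholders):
  zfs get shareiscsi tank/vol                   # confirm the property really is on
  svcs -l svc:/system/iscsitgt:default          # the old iSCSI target daemon must be online
  svcadm enable svc:/system/iscsitgt:default
  iscsitadm list target -v                      # targets as seen by the target daemon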
James,
The links to the Part 1 and Part 2 demos on this page (http://www.opensolaris.org/os/project/avs/Demos/
) appear to be broken.
http://www.opensolaris.org/os/project/avs/Demos/AVS-ZFS-Demo-V1/
http://www.opensolaris.org/os/project/avs/Demos/AVS-ZFS-Demo-V2/
They still work for me.
On Mar 4, 2009, at 7:04 AM, Jacob Ritorto wrote:
Caution: I built a system like this and spent several weeks trying to
get iscsi share working under Solaris 10 u6 and older. It would work
fine for the first few hours but then performance would start to
degrade, eventually becoming so poor as
A recent increase in email about ZFS and SNDR (the replication
component of Availability Suite), has given me reasons to post one of
my replies.
Well, now I'm confused! A collegue just pointed me towards your blog
entry about SNDR and ZFS which, until now, I thought was not a
supported
Andrew,
Jim Dunham wrote:
ZFS the filesystem is always on-disk consistent, and ZFS does
maintain filesystem consistency through coordination between the
ZPL (ZFS POSIX Layer) and the ZIL (ZFS Intent Log). Unfortunately
for SNDR, ZFS caches a lot of an application's filesystem data
Nicolas,
On Fri, Mar 06, 2009 at 10:05:46AM -0700, Neil Perrin wrote:
On 03/06/09 08:10, Jim Dunham wrote:
A simple test I performed to verify this was to append to a ZFS file
(no synchronous filesystem options being set) a series of blocks with a
block-order pattern contained within
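A sketch of what such a test might look like (the file name and count are made up): append numbered records so that, after a failover, any reordering or truncation is visible by inspecting the tail of the file:
  i=0
  while [ $i -lt 10000 ]; do
    printf 'block %08d\n' $i >> /tank/fs/order_test
    i=$((i+1))
  done
  tail -3 /tank/fs/order_test   # on the remote copy, verify the sequence is contiguous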
BJ Quinn wrote:
Then what if I ever need to export the pool on the primary server
and then import it on the replicated server. Will ZFS know which
drives should be part of the stripe even though the device names
across servers may not be the same?
Yes, zpool import will figure it out.
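ZFS identifies pool members by the labels written on the disks, not by their cNtNdN names, so the usual move looks like this (the pool name is a placeholder):
  # on the primary
  zpool export tank
  # on the replicated server
  zpool import            # scans all devices and shows the pool it found
  zpool import tank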
Stefan,
one question related with this: would KAIO be supported on such
configuration ?
Yes, but not as one might expect.
As seen from the truss output below, the call to kaio() fails with
EBADFD, a direct result of the fact that for ZFS, its cb_ops interface
for asynchronous read and
solution.
What's required
to make it work? Consider a file server running ZFS that exports a
volume with iSCSI. Consider also an application server that imports
the LUN with iSCSI and runs a ZFS filesystem
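A hedged sketch of that two-box layout (names and the 192.0.2.10 address are placeholders; this uses the legacy shareiscsi path for brevity):
  # file server: carve a zvol out of its pool and export it
  zfs create -V 100G tank/vol1
  zfs set shareiscsi=on tank/vol1
  # application server: discover the LUN and build its own pool on top of it
  iscsiadm add discovery-address 192.0.2.10:3260
  iscsiadm modify discovery --sendtargets enable
  devfsadm -i iscsi
  zpool create appool c3t<target>d0     # the device name depends on the target's IQN/GUID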
for each drive to be replicated, or is there a better way to do it?
Thanks!
Jim
Hi Tim,
I took a look at the archives and I have seen a few threads about
using
array block level snapshots with ZFS and how we face the old issue
that we used to see with logical volumes and unique IDs (quite
correctly) stopping the same volume being presented twice to the same
server.
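For what it's worth, the ZFS-level mechanism for the same-name case is to import by numeric pool id and rename on the way in (the id and names below are illustrative). Note this only disambiguates copies that still have distinct pool GUIDs; a block-level array snapshot keeps the original GUID, which is exactly the presented-twice problem described above:
  zpool import                                 # lists each candidate pool with its numeric id
  zpool import 1234567890123456 tank_copy      # import that candidate under a new name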
Ahmed,
The setup is not there anymore, however, I will share as much details
as I have documented. Could you please post the commands you have used
and any differences you think might be important. Did you ever test
with 2008.11 instead of SXCE?
Specific to the following:
While we
Richard Elling wrote:
Jim Dunham wrote:
Ahmed,
The setup is not there anymore, however, I will share as much
details
as I have documented. Could you please post the commands you have
used
and any differences you think might be important. Did you ever test
with 2008.11 ? instead
Ahmed,
Thanks for your informative reply. I am involved with kristof
(original poster) in the setup, please allow me to reply below
Was the following 'test' run during resynchronization mode or
replication
mode?
Neither, testing was done while in logging mode. This was chosen to
simply
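For readers following along, a hedged sketch of how the three SNDR states are usually driven from the command line (the set argument is omitted here; in practice it is the shost:sdev spec or a set name shown by sndradm -P):
  sndradm -P        # print each set and whether it is replicating, syncing, or logging
  sndradm -n -l     # drop into logging mode (writes tracked in the bitmap, not sent)
  sndradm -n -u     # update resync: send only the blocks dirtied while logging
  sndradm -n -m     # full resync: copy the entire primary volume again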
Kristof,
Jim Yes, in step 5 commands were executed on both nodes.
We did some more tests with opensolaris 2008.11. (build 101b)
We managed to get AVS setup up and running, but we noticed that
performance was really bad.
When we configured a zfs volume for replication, we noticed that
Richard,
Ross wrote:
The problem is they might publish these numbers, but we really have
no way of controlling what number manufacturers will choose to use
in the future.
If for some reason future 500GB drives all turn out to be slightly
smaller than the current ones you're going to
Brad,
I'd like to track a server's ZFS pool I/O throughput over time.
What's a good data source to use for this? I like zpool iostat for
this, but if I poll at two points in time I would get a number since
boot (e.g. 1.2M) and a current number (e.g. 1.3K). If I use the
current number
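The usual way around the since-boot number is to give zpool iostat an interval, so every line after the first is a delta for that interval (pool name and intervals are placeholders):
  zpool iostat -v tank 10       # first report = since boot, then one report every 10 seconds
  zpool iostat tank 60 5        # five one-minute samples, handy for feeding a collector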
Roch Bourbonnais wrote:
Le 4 janv. 09 à 21:09, milosz a écrit :
thanks for your responses, guys...
the nagle's tweak is the first thing i did, actually.
not sure what the network limiting factors could be here... there's
no switch, jumbo frames are on... maybe it's the e1000g driver?
it's
Andrew,
I woke up yesterday morning, only to discover my system kept
rebooting..
It's been running fine for the last while. I upgraded to snv 98 a
couple weeks back (from 95), and had upgraded my RaidZ Zpool from
version 11 to 13 for improved scrub performance.
After some research
George,
I'm looking for any pointers or advice on what might have happened
to cause the following problem...
Running Oracle RAC on iSCSI Target LUs, accessible by three or more
iSCSI Initiator nodes, requires support for SCSI-3 Persistent
Reservations. This functionality was added to
functionality on a single node, use
host-based or controller-based mirroring software.
--Joe
replication 'smarter'.
On Sep 16, 2008, at 5:39 PM, Miles Nordin wrote:
jd == Jim Dunham writes:
jd If at the time the SNDR replica is deleted the set was
jd actively replicating, along with ZFS actively writing to the
jd ZFS storage pool, I/O consistency will be lost, leaving ZFS
be placed into logging mode first. Then ZFS will be left I/O consistent
after the disable is done.
Corey
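A short sketch of that ordering with sndradm (the set specification is omitted; -n just suppresses the confirmation prompt):
  sndradm -n -l     # put the set into logging mode so the secondary stops changing
  sndradm -n -d     # then disable/delete the set; the secondary is left I/O consistent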
On Sep 11, 2008, at 5:16 PM, A Darren Dunham wrote:
On Thu, Sep 11, 2008 at 04:28:03PM -0400, Jim Dunham wrote:
On Sep 11, 2008, at 11:19 AM, A Darren Dunham wrote:
On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote:
The issue with any form of RAID 1, is that the instant a disk
Importing on the primary gives the same error.
Anyone have any ideas?
Thanks
Corey
for
opportunities to learn about ZFS, AVS and other replication
technologies. In their day, similar War wounds and successful
battles have been had regarding AVS in use with UFS, QFS, VxFS, SVM,
VxVM, Oracle, Sybase and others.
On Sep 11, 2008, at 11:19 AM, A Darren Dunham wrote:
On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote:
The issue with any form of RAID 1, is that the instant a disk fails
out of the RAID set, with the next write I/O to the remaining members
of the RAID set, the failed disk (and its
Steve,
Can someone tell me or point me to links that describe how to
do the following.
I had a machine that crashed and I want to move to a newer machine
anyway. The boot disk on the old machine is fried. The two disks I was
using for a ZFS pool on that machine need to be moved to the new machine.
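The short version, hedged (the pool name is a placeholder): connect the two data disks to the new machine and import the pool there; -f is needed because the crashed machine never exported it:
  zpool import            # scan attached disks and list any importable pools
  zpool import -f tank    # take ownership of the pool on the new machine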
Mertol Ozyoney wrote:
Hi All ;
There are a set of issues being looked at that prevent the VMWare ESX
server from working with the Solaris iSCSI Target.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6597310
At this time there is no target date for when these issues will
Vahid,
We need to move about 1T of data from one zpool on EMC dmx-3000 to
another storage device (dmx-3). DMX-3 can be visible on the same
host where dmx-3000 is being used on or from another host.
What is the best way to transfer the data from dmx-3000 to dmx-3?
Is it possible to add
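One hedged option, if each top-level vdev in the pool is a single LUN or a mirror: attach a DMX-3 LUN as a mirror of the old one, let it resilver, then detach the DMX-3000 LUN (pool and device names are placeholders):
  zpool attach tank c2t0d0 c3t0d0   # c2t0d0 = old DMX-3000 LUN, c3t0d0 = new DMX-3 LUN
  zpool status tank                 # wait for the resilver to complete
  zpool detach tank c2t0d0          # drop the old LUN; the pool now lives on the DMX-3
This does not work for raidz layouts; there, zfs send/receive into a second pool on the new array is the usual route.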
dumped core, being an assert in the T10 state machine.
# mdb /core
> ::status
> ::quit
Enrico,
Is there any forecast to improve the efficiency of the replication
mechanisms of ZFS? Fishworks - the new NAS release
I would take some time to talk with the customer and understand exactly what the
customer's expectations are for replication. I would not base my
decision on the cost of
?
- Jim
# zpool import foopool barpool
http://www.opensolaris.org/os/community/storage/white-papers/data_replication_strategies.pdf
http://www.sun.com/storagetek/white-papers/enterprise_continuity.pdf
Thanks
and iSCSI start and stop at different times during
Solaris boot and shutdown, so I would recommend using legacy mount
points, or manual zpool import / exports when trying configurations at
this level.
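A sketch of the legacy-mount arrangement being suggested (dataset and mount point are placeholders): with mountpoint=legacy, ZFS stops mounting the filesystem itself and you control when it is mounted, for example from a script that runs after the iSCSI initiator is up:
  zfs set mountpoint=legacy tank/iscsifs
  mount -F zfs tank/iscsifs /mnt/iscsifs      # run this once the iSCSI LUNs are present
  umount /mnt/iscsifs                         # and unmount before the initiator goes away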
value is based on many variables, most of which are changing
over time and usage patterns.
eric
Kory,
Yes, I get it now. You want to detach one of the disks and then readd
the same disk, but lose the redundancy of the mirror.
Just as long as you realize you're losing the redundancy.
I'm wondering if zpool add will complain. I don't have a system to
try this at the moment.
The
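Roughly the sequence in question (pool and device names are placeholders). After the detach the pool is a plain single-disk vdev, so adding the freed disk back creates a second non-redundant top-level vdev; add should accept it, though it may ask for -f if it still sees an old label:
  zpool detach tank c1t1d0     # break the mirror; redundancy is gone from here on
  zpool add tank c1t1d0        # re-add the same disk as a separate (striped) vdev
  zpool status tank            # now two top-level vdevs, neither redundant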
system.
Of course everything that you and Tim and Casper said is true,
but I'm still inclined to try that scenario.
Rainer
Ben,
I've been playing with replication of a ZFS Zpool using the recently released AVS. I'm pleased with things, but just replicating the data is only part of the problem. The big question is: can I have a zpool open in 2 places?
No. The ability to have a zpool open in two places would
Frank,
On Fri, 2 Feb 2007, Torrey McMahon wrote:
Jason J. W. Williams wrote:
Hi Jim,
Thank you very much for the heads up. Unfortunately, we need the
write-cache enabled for the application I was thinking of combining
this with. Sounds like SNDR and ZFS need some more soak time together
Robert,
Hello Ben,
Monday, February 5, 2007, 9:17:01 AM, you wrote:
BR I've been playing with replication of a ZFS Zpool using the
BR recently released AVS. I'm pleased with things, but just
BR replicating the data is only part of the problem. The big
BR question is: can I have a zpool open
Ben Rockwood wrote:
Jim Dunham wrote:
Robert,
Hello Ben,
Monday, February 5, 2007, 9:17:01 AM, you wrote:
BR I've been playing with replication of a ZFS Zpool using the
BR recently released AVS. I'm pleased with things, but just
BR replicating the data is only part of the problem. The big
Jason,
Thank you for the detailed explanation. It is very helpful to
understand the issue. Is anyone successfully using SNDR with ZFS yet?
Of the opportunities I've been involved with the answer is yes, but so
far I've not seen SNDR with ZFS in a production environment, but that
does not mean
).
So all that needs to be done is to design and build a new variant of the
letter 'h', and find the place to separate ZFS into two pieces.
- Jim Dunham
That would be a slick alternative to send/recv.
Best Regards,
Jason
On 1/26/07, Jim Dunham wrote:
Project Overview:
I propose
Richard Elling wrote:
Danger Will Robinson...
Jeff Victor wrote:
Jeff Bonwick wrote:
If one host failed I want to be able to do a manual mount on the
other host.
Multiple hosts writing to the same pool won't work, but you could
indeed
have two pools, one for each host, in a dual
79 matches