Has anyone here read the article "Why RAID 5 stops working in 2009" at
http://blogs.zdnet.com/storage/?p=162
Does RAIDZ have the same chance of an unrecoverable read error as RAID 5 on
Linux if the array has to be rebuilt because of a faulty disk? I imagine so,
because of the physical constraints that
I had a drive fail and replaced it with a new drive. During the resilvering
process the new drive had write faults and was taken offline. These faults were
caused by a broken SATA cable (the drive checked out fine with the
manufacturer's software). A new cable fixed the failure. However, now the
Yes - but it does nothing. The drive remains FAULTED.
Thanks for the suggestion, but I have tried detaching and it refuses,
reporting no valid replicas. Capture below.
C3P0# zpool status
  pool: tank
 state: DEGRADED
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        DEGRADED     0     0     0
          raidz1    DEGRADED     0     0     0
            ad4     ONLINE       0     0     0
            ad6     ONLINE       0     0     0
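For what it's worth, once the cable is known good, something like the
following may be worth a try (a sketch only; the FAULTED device name is cut
off in the capture above, so ad8 here is hypothetical):
C3P0# zpool clear tank ad8
C3P0# zpool online tank ad8
zpool clear resets the device's error counters, and zpool online asks ZFS to
bring the device back into the pool, which should kick off a resilver.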
Thanks - I have run it and it returns pretty quickly. Given the output
(attached), what action can I take?
Thanks
James
Dirty time logs:
tank
outage [300718,301073] length 356
outage [301138,301139] length 2
outage
either of these methods. I've used (1) successfully on my
OI_148a by taking a precompiled binary, and I didn't get around to trying (2).
Just my 2c :)
//Jim
Looks like CR 6411261, "busy intent log runs out of space on small pools".
I found this one. I just bumped up the priority.
Jim
When unpacking the Solaris source onto a local disk on a system running
build 39, I got the following panic:
panic[cpu0]/thread=d2c8ade0:
really out of space
In the meantime I would code up your unit tests in ksh so they can be more
easily integrated. We'll keep you posted as progress is made on releasing
the test suite.
Cheers,
Jim
http://www.tech-recipes.com/solaris_system_administration_tips1446.html
to think that this would be all that is needed for ZFS?
Thanks,
-- Jim C
and RAS.
/jim
Gregory Shaw wrote:
To maximize the throughput, I'd go with 8 five-disk raid-z2 LUNs.
Using that configuration, a full-width stripe write should be a
single operation for each controller.
In production, the application needs would probably dictate the
resulting disk layout
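As a sketch of that layout (hypothetical device names; each raidz2 vdev takes
one disk from each of five controllers):
# zpool create tank \
    raidz2 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 \
    raidz2 c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0
...repeated along the same pattern until all eight raidz2 vdevs are in the pool.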
in the messages to get a better handle on the observed behaviour, but this
certainly seems like something we should explore further.
Watch this space.
/jim
At any rate, I don't think adding swap will fix the problem I am seeing
in that ZFS is not releasing its unused cache when applications need
(1M) command. I found this out via trial and error; there is no dependency
mentioned for SUNWsmapi in the SUNWzfsr depend file.
Apologies if this is nitpicking, but is this missing dependency worthy
of submitting a P5 CR?
-- Jim C
Jason Schroeder wrote:
Dale Ghent wrote:
On Jun 28, 2006
available'. So the question is, what sort of state is
required between reboots for ZFS?
Regards,
-- Jim C
I understand. Thanks.
Just curious: ZFS manages NFS shares. Have you given any thought to
what might be involved for ZFS to manage SMB shares in the same manner?
This all goes towards my stateless OS theme.
-- Jim C
Eric Schrock wrote:
You need the following file:
/etc/zfs
/etc/zfs/zpool.cache be symbolically linked to /system/ZPOOL.CACHE
-- Jim C
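A minimal, untested sketch of that relocation (assuming /system is writable
and persists across reboots):
# mkdir -p /system
# cp /etc/zfs/zpool.cache /system/ZPOOL.CACHE
# rm /etc/zfs/zpool.cache
# ln -s /system/ZPOOL.CACHE /etc/zfs/zpool.cache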
This file 'knows' about all the pools on the system. These pools can
typically be discovered via 'zpool import', but we can't do this at boot
because:
a. It can be really, really expensive (tasting every disk
-- Jim C
Incidentally, the explicit 'zfs share' isn't needed, as we automatically
share the filesystem when the options are set (which did succeed).
- Eric
On Fri, Aug 04, 2006 at 12:42:02PM -0400, Jim Connors wrote:
Working to get ZFS to run on a minimal Solaris 10 U2 configuration
Richard Elling wrote:
Jim Connors wrote:
Working to get ZFS to run on a minimal Solaris 10 U2 configuration.
What does minimal mean? Most likely, you are missing something.
-- richard
Yeah. Looking at package and SMF dependencies plus a whole lot of
trial and error, I've currently
Roch - PAE wrote:
The hard part is getting a set of simple requirements. As you go into
more complex data center environments you get hit with older Solaris
revs, other OSs, SOX compliance issues, etc. etc. etc. The world where
most of us seem to be playing with ZFS is on the lower end of
So is there a command to make the spare get used, or do I have to remove it
as a spare and re-add it if it doesn't get used automatically?
Is this a bug to be fixed, or will this always be the case when
the disks aren't exactly the same size?
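A hedged sketch of pressing the spare into service by hand (names are
hypothetical: c1t2d0 the failed disk, c2t0d0 the configured spare):
# zpool replace tank c1t2d0 c2t0d0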
I know this isn't necessarily ZFS specific, but after I reboot I spin the
drives back
up, but nothing I do (devfsadm, disks, etc) can get them seen again until the
next reboot.
I've got some older scsi drives in an old Andataco Gigaraid enclosure which
I thought supported hot-swap, but I seem
, a Netapp, but the fact that I appear to have been
able to nuke my pool by simulating a hardware error gives me pause.
I'd love to know if I'm off-base in my worries.
Jim
So the questions are:
- Is this fixable? I don't see an inum I could run find on to remove, and I
  can't even do a zfs volinit anyway:
  nextest-01# zfs volinit
  cannot iterate filesystems: I/O error
- Would not enabling zil_disable have prevented this?
- Should I have
the pool it's going to give me pause in implementing
a ZFS solution.
Jim
We have two aging Netapp filers and can't afford to buy new Netapp gear,
so we've been looking with a lot of interest at building NFS fileservers
running ZFS as a possible future approach. Two issues have come up in the
discussion
- Adding new disks to a RAID-Z pool (Netapps handle adding
the
workload is not
data/bandwidth intensive, but more attribute intensive. Note again
zfs_create()
is the heavy ZFS function, along with zfs_getattr. Perhaps it's the
attribute-intensive
nature of the load that is at the root of this.
I can spend more time on this tomorrow (traveling today).
Thanks,
/jim
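A hedged DTrace sketch for confirming that, assuming the fbt probes for these
ZFS entry points are available on your build:
# dtrace -n 'fbt::zfs_create:entry,fbt::zfs_getattr:entry { @[probefunc] = count(); }'
Run it over the workload and Ctrl-C; the per-function counts should show
whether zfs_create/zfs_getattr really dominate.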
component to the NFS configuration, so for any synchronous operation, I
would expect things to be slower when done over NFS.
Awaiting enlightenment
:^)
/jim
Ok, so I'm planning on wiping my test pool that seems to have problems
with non-spare disks being marked as spares, but I can't destroy it:
# zpool destroy -f zmir
cannot iterate filesystems: I/O error
Anyone know how I can nuke this for good?
Jim
BTW, I'm also unable to export the pool -- same error.
Jim
Nevermind:
# zfs destroy [EMAIL PROTECTED]:28
cannot open '[EMAIL PROTECTED]:28': I/O error
Jim
on an individual
basis).
I'm running b51, but I'll try deleting the cache.
Jim
of a drive to a pool wipes the pool of any previous data,
especially any zfs metadata.
I'll keep the list posted as I continue my tests.
Jim
By mistake, I just exported my test filesystem while it was up
and being served via NFS, causing my tar over NFS to start
throwing stale file handle errors.
Should I file this as a bug, or should I just not do that? :-)
filesystems first -
which would have failed without a -f flag because they were shared.
So IMO it is a bug or at least an RFE.
Ok, where should I file an RFE?
Jim
as the other
drives -- does that make a difference?
- Is there something inherent to an old SCSI bus that causes spun-down
drives to hang the system in some way, even if it's just hanging the
zpool/zfs system calls? Would a Thumper be more resilient to this?
Jim
).
So all that needs to be done is to design and build a new variant of the
letter 'h', and find the place to separate ZFS into two pieces.
- Jim Dunham
That would be a slick alternative to send/recv.
Best Regards,
Jason
On 1/26/07, Jim Dunham [EMAIL PROTECTED] wrote:
Project Overview:
I propose
to recover.
Cheers,
Jim
checking mechanisms.
Jim
Best Regards,
Jason
and this underlying storage behavior is not unique to SNDR; it happens
with other host-based replication and controller-based replication.
Jim
benr.
Frank,
On Fri, 2 Feb 2007, Torrey McMahon wrote:
Jason J. W. Williams wrote:
Hi Jim,
Thank you very much for the heads up. Unfortunately, we need the
write-cache enabled for the application I was thinking of combining
this with. Sounds like SNDR and ZFS need some more soak time together
failures between metadata blocks A and B.
Of course using an instantly accessible II snapshot of an SNDR secondary
volume would work just fine, since the data being read is now
point-in-time consistent, and static.
- Jim
I believe what you really need is a 'continuous zfs send' feature.
We
Ben Rockwood wrote:
Jim Dunham wrote:
Robert,
Hello Ben,
Monday, February 5, 2007, 9:17:01 AM, you wrote:
BR I've been playing with replication of a ZFS Zpool using the
BR recently released AVS. I'm pleased with things, but just
BR replicating the data is only part of the problem. The big
This month's FROSUG (Front Range OpenSolaris User Group) meeting is on
Thursday, February 22, 2007. Our presentation is ZFS as a Root File
System by Lori Alt. In addition, Jon Bowman will be giving an OpenSolaris
Update, and we will also be doing an InstallFest. So, if you want help
installing an
***Meeting Update***
We will be having this month's meeting at the Omni Interlocken Resort
in Broomfield and a conference call number is being provided for those
who cannot make the meeting in person; see Meeting Details below for
more information.
In addition, we will be discussing Solaris
?
/jim
Leon Koll wrote:
Hello, gurus
I need your help. During a benchmark test of NFS-shared ZFS file systems, at
some moment the number of NFS threads jumps to the maximum value, 1027
(NFSD_SERVERS was set to 1024). The latency also grows and the number of
IOPS goes down.
I've collected
c02e2a08 uint64_t c_max = 0t1070318720
. . .
Perhaps c_max does not do what I think it does?
Thanks,
/jim
Jim Mauro wrote:
Running an mmap-intensive workload on ZFS on an X4500, Solaris 10 11/06
(update 3). All file IO is mmap(file), read memory segment, unmap, close.
Tweaked the arc
the ARC size because for mmap-intensive workloads,
it seems to hurt more than help (although, based on experiments up to this
point, it's not hurting a lot).
I'll do another reboot, and run it all down for you serially...
/jim
Thanks,
-j
On Thu, Mar 15, 2007 at 06:57:12PM -0400, Jim Mauro wrote
= 0t221839360
ARC_mfu::print -d size lsize
size = 0t26897219584 -- MFU list is almost 27GB ...
lsize = 0t26869121024
Thanks,
/jim
Will try that now...
/jim
[EMAIL PROTECTED] wrote:
I suppose I should have been more forward about making my last point.
If the arc_c_max isn't set in /etc/system, I don't believe that the ARC
will initialize arc.p to the correct value. I could be wrong about
this; however, next time you
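For reference, the usual way to clamp the ARC at boot is a line in
/etc/system, followed by a reboot (a sketch; 0x20000000 = 512MB is just an
example value):
set zfs:zfs_arc_max = 0x20000000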
c02e2a08 uint64_t c_max = 0t536870912   <-- c_max is 512MB
...
}
After a few runs of the workload ...
arc::print -d size
size = 0t536788992
Ah - looks like we're out of the woods. The ARC remains clamped at 512MB.
Thanks!
/jim
[EMAIL PROTECTED] wrote:
I suppose I should have been more
.
But, that's me...
:^)
/jim
http://www.cnn.com/2007/US/03/20/lost.data.ap/index.html
this problem is unique to ZFS, but I do not have experience or empirical
data on mount time for 12k UFS, QFS, ext4, etc., file systems.
There is an RFE filed on this:
http://bugs.opensolaris.org/view_bug.do?bug_id=6478980
As I said, I wish I had a better answer.
Thanks,
/jim
Kory Wheatley wrote
(Jim Walker)
6:45pm - 8:30pm Sharemgr (Doug McCallum)
Where: Sun Broomfield Campus
Building 1 - Conference Center
500 Eldorado Blvd.
Broomfield, CO 80021
The meeting is free and open to the public.
Pizza and soft drinks will be served at the beginning of the meeting
used for
the dnlc. It might, but I need to look at the code to be sure...
Let's start with this...
/jim
Jason J. W. Williams wrote:
Hi Guys,
Rather than starting a new thread I thought I'd continue this thread.
I've been running Build 54 on a Thumper since Mid January and wanted
to ask
virtual hard drives in the ZFS space.
So I guess the answer to your question is theoretically yes, but I'm
not aware of an implementation that would allow for such a
configuration that exists today.
I think I just confused the issue...ah well...
/jim
PS - FWIW, I have a zpool configured in nv62
.
/jim
Jim Dunham
Solaris, Storage Software Group
Sun Microsystems, Inc.
1617 Southwood Drive
Nashua, NH 03063
Email: [EMAIL PROTECTED]
http://blogs.sun.com/avs
to testing discuss at:
http://www.opensolaris.org/os/community/testing/discussions.
Happy Hunting,
Jim
/test/ontest-stc2/src/suites/share/README
Any questions about the Sharemgr test suite can be sent to testing discuss at:
http://www.opensolaris.org/os/community/testing/discussions
Cheers,
Jim
-stc2/src/suites/zfs/
More information on the ZFS test suite is at:
http://opensolaris.org/os/community/zfs/zfstestsuite/
Questions about the ZFS test suite can be sent to zfs-discuss at:
http://www.opensolaris.org/jive/forum.jspa?forumID=80
Cheers,
Jim
Is the referenced Laminated Handout on slide 3 available anywhere in any
form electronically?
If not, I'd be happy to create an electronic copy and make it publicly
available.
Thanks,
/jim
Joy Marshall wrote:
It's taken a while but at last we have been able to post the ZFS Under
system within the
SAN, be it Fibre Channel or iSCSI. If the node wanting access to the
data is distant, Availability Suite also offers Remote Replication.
http://www.opensolaris.org/os/project/avs/
http://www.opensolaris.org/os/project/iscsitgt/
Jim
Ronald,
thanks for your comments.
I
for the same test. Very odd.
Still looking...
Thanks,
/jim
Jeffrey W. Baker wrote:
I have a lot of people whispering zfs in my virtual ear these days,
and at the same time I have an irrational attachment to xfs based
entirely on its lack of the 32000 subdirectory limit. I'm not afraid of
ext4's
for seminars and tutorials on Solaris. Each set was
color print on heavy, glossy paper. That represented color printing of about
1600 pages total. All so the attorney could question me about 2 of the
slides.
I almost fell off my chair
/jim
Rob Windsor wrote:
http://news.com.com/NetApp
,
/jim
[EMAIL PROTECTED] wrote:
Hi All,
I have modified mdb so that I can examine data structures on disk using
::print.
This works fine for disks containing ufs file systems. It also works
for zfs file systems, but...
I use the dva block number from the uberblock_t to print what
) does not work either.
Use the zfs r/w function entry points for now.
What sayeth the ZFS team regarding the use of a stable DTrace provider
with their file system?
Thanks,
/jim
Neelakanth Nadgir wrote:
io:::start probe does not seem to get zfs filenames in
args[2]->fi_pathname. Any ideas how
ufs lookup 8515
Thanks,
/jim
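Following the suggestion above to use the zfs r/w entry points, a hedged fbt
sketch for per-file counts (v_path can be unset for some vnodes, so expect
gaps in the output):
# dtrace -n 'fbt::zfs_read:entry,fbt::zfs_write:entry { @[stringof(args[0]->v_path)] = count(); }'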
files, and it seems to work
/jim
Neelakanth Nadgir wrote:
Jim, I can't use zfs_read/write as the file is mmap()'d, so no read/write!
-neel
On Sep 26, 2007, at 5:07 AM, Jim Mauro [EMAIL PROTECTED] wrote:
Hi Neel - Thanks for pushing this out. I've been tripping over this
for a while
hurts sustainable performance will depend on several things, but I can
envision scenarios where it's overhead I'd rather avoid if I could.
Thanks,
/jim
buffering and the ARC. This is entirely my opinion (not that of
Sun),
and I've been wrong before.
Thanks,
/jim
answered the question I think you wanted to ask.
Thanks,
/jim
Dale Pannell wrote:
I have a customer that would like to know if the ZFS file system is
compatible with Oracle raw files.
Any help you can provide is greatly appreciated. Please respond
directly to me since I am not part
/jelley jelley
You can also try:
zfs set shareiscsi=on telephone/jelley
- Jim
Now if I perform an 'iscsitadm list target', the iSCSI target
appears like it should:
Target: jelley
iSCSI Name: iqn.1986-03.com.sun:02:fcaa1650-f202-4fef-b44b-
b9452a237511.jelley
Connections: 0
Now
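For completeness, a sketch of the whole sequence (pool/volume name taken from
the thread; the 10g size is made up):
# zfs create -V 10g telephone/jelley
# zfs set shareiscsi=on telephone/jelley
# iscsitadm list target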
Would you two please SHUT THE F$%K UP.
Dear God, my kids don't go on like this.
Please - let it die already.
Thanks very much.
/jim
can you guess? wrote:
Hello can,
Thursday, December 13, 2007, 12:02:56 AM, you wrote:
cyg On the other hand, there's always the
possibility that someone
I've hit the problem myself recently, and mounting the filesystem cleared
something in the brains of ZFS and allowed me to snapshot.
http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg00812.html
PS: I'll use Google before asking some questions, a'la (C) Bart Simpson
That's how I found
It's good he didn't mail you; now we all know some under-the-hood details via
Googling ;)
Googling ;)
Thanks to both of you for this :)
value is based on many variables, most of which are changing
over time and usage patterns.
eric
, and then
revalidated all the data.
As stated earlier, sacrificing redundancy (RAID 1 mirroring) for
double the storage (RAID 0 concatenation) is being penny wise and
pound foolish.
Jim
Cindy
Kory Wheatley wrote:
Currently c2t2d0 and c2t3d0 are set up in a mirror. I want to break the
mirror and save
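Assuming it is a ZFS mirror in a pool named tank (name hypothetical),
detaching one side breaks the mirror and leaves the data intact on the
remaining disk:
# zpool detach tank c2t3d0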
data, from the point of view of Solaris, this disk is not a LUN, and thus
cannot be accessed as such.
Jim
I know this is (also) iSCSI-related, but mostly a ZFS question.
Thanks for your answers,
Jan Dreyer
and iSCSI start and stop at different times during
Solaris boot and shutdown, so I would recommend using legacy mount
points, or manual zpool import / exports when trying configurations at
this level.
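A sketch of the legacy mount point approach (dataset and mount point names
hypothetical):
# zfs set mountpoint=legacy tank/iscsifs
# mount -F zfs tank/iscsifs /mnt/iscsifs
or the equivalent /etc/vfstab entry:
tank/iscsifs - /mnt/iscsifs zfs - yes -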
/white-papers/data_replication_strategies.pdf
http://www.sun.com/storagetek/white-papers/enterprise_continuity.pdf
Thanks
into Solaris 10, I'm
afraid I
don't have the information. Hopefully, someone else will know...
Thanks,
/jim
Jonathan Loran wrote:
Is it true that Solaris 10 u4 does not have any of the nice ZIL controls
that exist in the various recent OpenSolaris flavors? I would like to
move my ZIL to solid state
http://www.nexenta.com/demos/auto-cdp.html
Very nice job. It's refreshing to see something I know all too well,
with an updated management interface, and a good portion of the
plumbing hidden away.
- Jim
On Fri, 2008-02-01 at 10:15 -0800, Vincent Fox wrote:
Does anyone have any particularly creative
?
- Jim
# zpool import foopool barpool
--
Darren J Moffat
requests.
Jim
Considering the solution we are offering to our customer (5 remote sites
replicating to one central data center) with ZFS (the cheapest solution), I
should consider 3 times the network load of a solution based on SNDR/AVS,
and 3 times the storage space too... correct?
I
dumped core, being an assert in the T10 state machine.
# mdb /core
::status
::quit
the new DMX as a submirror of the old DMX and, after the sync is finished,
remove the old DMX from the mirror.
See: zpool replace [-f] pool old_device [new_device]
- Jim
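Concretely (device names hypothetical), zpool replace attaches the new LUN
as a mirror of the old one, resilvers, and then detaches the old LUN
automatically:
# zpool replace tank c1t0d0 c5t0d0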
Thank you,
will be resolved.
Jim
We are running the latest Solaris 10 on an X4500 Thumper. We defined a test
iSCSI LUN. Output below:
Target: AkhanTemp/VM
iSCSI Name: iqn.1986-03.com.sun:02:72406bf8-2f5f-635a-f64c-
cb664935f3d1
Alias: AkhanTemp/VM
Connections: 0
ACL list:
TPGT list:
LUN
the two disks or their contents.
To see all available pools to import:
# zpool import
From this list, which should include your prior storage pool name:
# zpool import pool-name
- Jim
The new disks are c6t0d0s0 and c6t1d0s0. They are identical disks set
that were set up
I've installed SXDE (snv_89) and found that the web console only listens on
https://localhost:6789/ now, and the module for ZFS admin doesn't work.
When I open the link, the left frame lists a stacktrace (below) and the right
frame is plain empty. Any suggestions?
I tried substituting
We have a test machine installed with a ZFS root (snv_77/x86 and
rootpol/rootfs with grub support).
Recently tried to update it to snv_89 which (in Flag Days list) claimed more
support for ZFS boot roots, but the installer disk didn't find any previously
installed operating system to upgrade.
You mean this:
https://www.opensolaris.org/jive/thread.jspa?threadID=46626tstart=120
Elegant script, I like it, thanks :)
Trying now...
Some patching follows:
-for fs in `zfs list -H | grep ^$ROOTPOOL/$ROOTFS | awk '{ print $1 };'`
+for fs in `zfs list -H | grep ^$ROOTPOOL/$ROOTFS | grep -w
Alas, didn't work so far.
Can the problem be that the zfs-root disk is not the first on the controller
(system boots from the grub on the older ufs-root slice), and/or that zfs is
mirrored? And that I have snapshots and a data pool too?
These are the boot disks (SVM mirror with ufs and grub):