On Thu, 10 Jul 2008, Mark Phalan wrote:
I find this annoying as well. Another way that would help (but is fairly
orthogonal to your suggestion) would be to write a completion module for
zsh/bash/whatever that could tab-complete options to the z* commands
including zfs filesystems.
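A completion module along those lines is only a few lines of shell. A minimal
bash sketch (function name hypothetical; it completes dataset names only, not
subcommands or options):
_zfs_datasets() {
    # offer every ZFS dataset name as a completion for the current word
    COMPREPLY=( $(compgen -W "$(zfs list -H -o name 2>/dev/null)" -- "${COMP_WORDS[COMP_CWORD]}") )
}
complete -F _zfs_datasets zfs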
On Thu, 10 Jul 2008, Tim Foster wrote:
Mark Musante (famous for recently beating the crap out of lu)
Heh. Although at this point it's hard to tell who's the beat-er and who's
the beat-ee...
Regards,
markm
c0t3d0  ONLINE       0     0     0
But I wanted to get these in a mirror - I am unable to remove c0t3d0 from the
pool. There is already data in the pool with filesystems mounted. So I do not
wish to destroy the pool.
Please help
Mark
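Since the pool already holds data, one way out is to attach a second disk to
the existing one, converting the single-disk vdev into a mirror (a sketch;
pool name and new device are hypothetical):
# zpool attach mypool c0t3d0 c0t4d0
zpool attach resilvers onto the new disk and leaves the pool and its mounted
filesystems intact.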
On Fri, 27 Jun 2008, wan_jm wrote:
the procedure is as follows:
1. mkdir /tank
2. touch /tank/a
3. zpool create tank c0d0p3
this command gives the following error message:
cannot mount '/tank': directory is not empty;
4. reboot.
then the OS can only be logged in to from the console. Is this a bug?
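The message in step 3 is expected: ZFS refuses to mount a dataset over a
non-empty directory, though the pool itself is still created. A minimal
recovery sketch:
# rm /tank/a
# zfs mount tank
Alternatively, create the pool with an explicitly empty mountpoint up front:
# zpool create -m /tank2 tank c0d0p3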
On Jun 26, 2008, at 8:11 PM, Sumit Gupta wrote:
[EMAIL PROTECTED] is the snapshot of the original installation on
snv_92. snv_92.backup is the clone. You can see that the / is
mounted on snv_92.backup but in zfs list output it still shows that
'/' is mounted on snv_92.
It's showing
On Tue, 24 Jun 2008, Justin Vassallo wrote:
# zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
external         449G   427G  27.4K  /external
external/backup  447G   427G   374G  /external/backup
# zoneadm -z anzan boot
could not verify fs /backup: could not access
a valid vnode? How did you find the vnode?
What do you need the inode number for?
-Mark
I think most MAID is sold as a (misguided IMHO) replacement for
Tape, not as a Tier 1 kind of storage. YMMV.
-- mark
John Kunze wrote:
My organization is considering an RFP for MAID storage and we're
wondering about potential conflicts between MAID and ZFS.
We want MAID's power management
based on the mode bits of the file. It's not a real ACL.
Also, remember that NFSv3 doesn't officially support ACLs. Sun provided
a sideband protocol to deal with UFS POSIX Draft ACLs, but it's not part
of the official protocol.
-Mark
On Jun 5, 2008, at 4:43 PM, Bill Sommerfeld wrote:
after install, I'd think you could play games with zfs send | zfs
receive on an inactive BE to rewrite everything with the desired
attributes (more important for copies than compression).
I blogged about something similar about a year ago:
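A minimal sketch of that send/receive rewrite (dataset names hypothetical;
the property is set first so the received copy is written with it):
# zfs set copies=2 rpool/ROOT
# zfs snapshot rpool/ROOT/snv_92@rewrite
# zfs send rpool/ROOT/snv_92@rewrite | zfs receive rpool/ROOT/snv_92_rewritten
Because receive writes every block afresh, the new dataset picks up settings
like copies and compression that otherwise apply only to newly written data.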
On Thu, 5 Jun 2008, Albert Lee wrote:
It doesn't seem to change the bootfs property on zpl or GRUB's menu.lst
on the zpool, so we do it manually:
It *does* actually update bootfs and menu.lst, but not until after the init
6 is run.
Regards,
markm
On Tue, 3 Jun 2008, Gordon Ross wrote:
I'd really like to know: What are the conditions under which the
installer will offer ZFS root?
Only the text-based installer will offer it - not the GUI.
Regards,
markm
if you use star. Whereas zfs send
preserves everything.
-Mark
kevin kramer wrote:
that is my thread and I'm still having issues even after applying that patch.
It just came up again this week.
[localhost] uname -a
Linux dv-121-25.centtech.com 2.6.18-53.1.14.el5 #1 SMP Wed Mar 5 11:37:38 EST
2008 x86_64 x86_64 x86_64 GNU/Linux
[localhost] cat
On Wed, 21 May 2008, Justin Vassallo wrote:
zpool add -f external c12t0d0p0
zpool add -f external c13t0d0p0 (it wouldn't work without -f, and I believe
that's because the fs was online)
No, it had nothing to do with the pool being online. It was because a
single disk was being added to a
On Wed, 21 May 2008, Claus Guttesen wrote:
Isn't one supposed to be able to add more disks to an existing raidz(2)
pool and have the data spread across all disks in the pool automagically?
Alas, that is not yet possible. See Adam's blog for details:
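What you can do today is add another whole raidz vdev, which the pool then
stripes across (a sketch; device names hypothetical):
# zpool add tank raidz c2t0d0 c2t1d0 c2t2d0
Existing data is not rebalanced onto the new vdev; only new writes are spread
across both.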
Paul B. Henson wrote:
On Thu, 8 May 2008, Mark Shellenbaum wrote:
we already have the ability to allow users to create/destroy snapshots
over NFS. Look at the ZFS delegated administration model. If all you
want is snapshot creation/destruction then you will need to grant
snapshot,mount
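The delegation in question is a one-liner per user (a sketch; user and
dataset names hypothetical):
# zfs allow -u alice snapshot,mount tank/home/alice
With that granted, alice can snapshot locally with 'zfs snapshot', or over
NFS by running mkdir in the filesystem's .zfs/snapshot directory.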
fmd(1M) can log faults to syslogd that are already diagnosed. Why
would you want the random spew as well?
-- mark
Carson Gaspar wrote:
[EMAIL PROTECTED] wrote:
It's not safe to jump to this conclusion. Disk drivers that support FMA
won't log error messages to /var/adm/messages. As more
http://www.sun.com/bigadmin/features/articles/selfheal.jsp
-- mark
Carson Gaspar wrote:
Ian Collins wrote:
Carson Gaspar wrote:
If this is possible, it's entirely undocumented... Actually, fmd's
documentation is generally terrible. The sum total of configuration
information
Interesting problem. And yes you are right, there are a number
of problems to solve here, see:
http://blogs.sun.com/mac/en_US/entry/open_archive
-- mark
Bob Friesenhahn wrote:
On Mon, 21 Apr 2008, Mark A. Carlson wrote:
Maybe what you want is to archive files off to optical media
ZFS has always done a certain amount of write throttling. In the past
(or the present, for those of you running S10 or pre build 87 bits) this
throttling was controlled by a timer and the size of the ARC: we would
cut a transaction group every 5 seconds based off of our timer, and
we would also
? Is there any tuning that can be done to either increase
or decrease that?
The limit is 1024, which is the same for ufs POSIX draft ACLs. It can't
currently be changed.
-Mark
On Sun, 23 Mar 2008, msl wrote:
I have some zfs filesystems with two // at the beginning like
//dir1/dir2/dir3, and some other filesystems correct with just one
/ (/dir1/dir2/). The question is: Can I set the mountpoint correctly?
You can set the mountpoint at any time with 'zfs set mountpoint'.
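For example (dataset name hypothetical):
# zfs set mountpoint=/dir1/dir2/dir3 pool/dir3
ZFS remounts the dataset at the new location immediately; no vfstab editing
is involved.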
quickly with the same setup you had.
-- mark
Cyril Plisko wrote:
On Fri, Mar 21, 2008 at 6:53 PM, Torrey McMahon [EMAIL PROTECTED] wrote:
eric kustarz wrote:
So even with the above, if you add a vdev, slog, or l2arc later on,
that can be lost via the history being a ring buffer. There's
Absolutely. The issue is: will you even *remember* all the ZFS configuration
commands that you have done to your setup when you need to restore it? Thus
I need a file I can backup along with (but separate from) the ZFS files.
-- mark
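The command under discussion, for reference:
# zpool history tank
It prints the pool's command log, subject to the ring-buffer limit described
above, which is why it can't substitute for a separately kept config file.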
Cyril Plisko wrote:
On Fri, Mar 21, 2008 at 8:53 PM, Mark
Drew Schatt wrote:
Can anyone explain how the following came about, and/or how to get rid
of it?
What does zdb show? Also, what do the partitions look like for c5t0d0? Did
something get overlapped?
Regards,
markm
Marc Bevand wrote:
Mark Shellenbaum Mark.Shellenbaum at Sun.COM writes:
# ls -V a
-rw-r--r--+ 1 root root 0 Mar 19 13:04 a
owner@:--:--I:allow
group@:--:--I:allow
everyone@:--:--I:allow
Sachin Palav wrote:
We are using this server as an NFS/SAMBA server; we created ZFS file systems
considering its features. But unfortunately we are experiencing problems with
every NFS client (almost all versions of UNIX: AIX/Linux/HP). So I have now
set the server to use NFS version 2, as
I think the answer is that the configuration is hidden
and cannot be backed up so that it can be easily restored
to a brand spanking new machine with new disks.
-- mark
Cyril Plisko wrote:
On Thu, Mar 20, 2008 at 7:52 PM, Sachin Palav
[EMAIL PROTECTED] wrote:
Hello Friends
Can someone
Kyle McDonald wrote:
Darren J Moffat wrote:
Sachin Palav wrote:
Hello Friends,
Can someone please let me know how I can disable ZFS ACLs completely? I want
to use ZFS with plain Unix permissions, without ACL support
I'm really curious as to why you want to do that but it seems that
Here is a draft of the fast track to allow ZFS to inherit mode
permission via owner@, group@ and everyone@
-Mark
SUMMARY:
This proposal is to change the ZFS ACL inheritance rules
when the zfs acl property is set to passthrough.
PROBLEM:
The ZFS ACL
of the application. If no inheritable ACEs are present for owner@,
group@, or everyone@ then the mode will be used.
-Mark
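Under the proposal the behavior is opt-in per dataset (a sketch; dataset
name hypothetical):
# zfs set aclinherit=passthrough tank/export
With passthrough set, inheritable owner@/group@/everyone@ ACEs override the
mode requested by the application; absent such ACEs, the mode is used as
before.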
Paul B. Henson wrote:
On Mon, 17 Mar 2008, Mark Shellenbaum wrote:
I will go ahead and do a fastrack to get the behavior that many people
want. Basically, if inheritable ACEs are present for owner@, group@,
everyone@ then the inherited ACE permissions will override the requested
mode
files from unix the POSIX rules apply and the requestor's mode must be
honored, which results in the owner@, group@, and everyone@ entries
always being set on the file's ACL.
-Mark
Paul B. Henson wrote:
On Fri, 14 Mar 2008, Mark Shellenbaum wrote:
this behavior is only possible from a Windows client. When creating
files from unix the POSIX rules apply and the requestor's mode must be
honored, which results in the owner@, group@, and everyone@ entries
always being set
Paul B. Henson wrote:
On Fri, 14 Mar 2008, Mark Shellenbaum wrote:
That is not correct. The deny entries are necessary for POSIX semantics.
In POSIX you are only allowed to pick up permissions from the owner, group or
other class. You can't pick up part of the permissions you are looking
kevin kramer wrote:
client CentOS 5.1 latest kernel
mount options for the zfs filesystem =
rw,nosuid,nodev,remount,noatime,nfsvers=3,udp,intr,bg,hard,rsize=8192,wsize=8192
directory and parent owned by user and the user's GID, 775
on client touch
for a POSIX draft ACL which ZFS
doesn't support. The ZFS code now returns ENOTSUP which should let the
linux cp program know that ACLs aren't supported on the file system.
what OS is on the client and what OS is on the server?
-Mark
Kevin Kramer wrote:
client is CentOS 5.1
server is running Sol10
You should look into applying the patch I mentioned earlier on your S10
server.
-Mark
Is your ZFS file system on an S10 system?
You are most likely seeing this bug.
6528189 cp -p invalid argument issue
On Fri, 7 Mar 2008, Paul Raines wrote:
zfs create -o quota=131G -o reserv=131G -o recsize=8K zpool1/itgroup_001
and this is still running now. truss on the process shows nothing. I
don't know how to debug it beyond that. I thought I would ask for any
info from this list before I just
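Two standard ways to see where a hung zfs command is stuck (the PID is
hypothetical):
# pstack 1234
# echo "::pgrep zfs | ::walk thread | ::findstack -v" | mdb -k
pstack shows the userland stack; the mdb pipeline shows the kernel-side
thread stacks, which is usually where a wedged 'zfs create' is sitting.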
On Fri, 29 Feb 2008, Justin Vassallo wrote:
# zpool status
pool: external
state: FAULTED
status: One or more devices could not be opened. There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
model, and I still kind of wish we could have
integrated it that way.
-Mark
where the CIFS server team hangs out.
-Mark
It's simply a shell grokking issue, when you allow your (l)users to self
name your files then you will have spaces etc in the filename (breaks
shell arguments). In this case the '[E]' is breaking your command line
argument grokking. We have the same issue in our photos tree. We have to
use
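The fix is plain quoting, e.g.:
# rm 'photo [E].jpg'
Without the quotes the shell treats [E] as a glob character class before the
command ever sees the name.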
On Mon, 14 Jan 2008, Wyllys Ingersoll wrote:
That doesn't work either.
The zpool replace command didn't work? You wouldn't happen to have a copy
of the errors you received, would you? I'd like to see that.
Regards,
markm
On Fri, 11 Jan 2008, Wyllys Ingersoll wrote:
I want to remove c0d0p4:
# zpool remove bigpool c0d0p4
cannot remove c0d0p4: only inactive hot spares or cache devices can be removed
Use replace, not remove.
Regards,
markm
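For example (replacement device hypothetical):
# zpool replace bigpool c0d0p4 c0d0p5
zpool replace swaps the old device for a new one of at least the same size;
shrinking a pool by removing a data vdev isn't supported.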
On Mon, 7 Jan 2008, Andre Lue wrote:
I usually have to do a zpool import -f pool to get it back.
What do you mean by 'usually'?
After the import, what's the output of 'zpool status'?
During reboot, are there any relevant messages in the console?
Regards,
markm
On Wed, 19 Dec 2007, Ross wrote:
The title says it all really, we'll be creating one big zpool here, with
many sub filesystems for various systems. Am I right in thinking that
we can use snapshots of the root filesystem to take a complete backup of
everything?
I believe what you're
I can think of two things to check:
First, is there a 'bootfs' line in your grub entry? I didn't see it
in the original email; not sure if it was left out or it simply isn't
present. If it's not present, ensure the 'bootfs' property is set on
your pool.
Secondly, ensure that there's a
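Both checks boil down to something like (pool and BE names hypothetical):
# zpool set bootfs=rpool/ROOT/mybe rpool
plus a matching line in the GRUB entry:
bootfs rpool/ROOT/mybe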
Stefan,
Looks like delegation+iscsi works in snv_74, but not snv_75+.
I've opened the following bug to address this.
6636599 Integration of PSARC/2007/414 broke zfs delegated admin support
of shareiscsi
The integration PSARC/2007/414 went into build 75.
-Mark
the property, whereas 'share'
lets you share the volume.
-Mark
Stefan de Konink wrote:
Hi Mark,
Local+Descendent permissions on (tank/iscsi_luns)
user block
clone,create,destroy,mount,mountpoint,promote,rename,reservation,rollback,share,shareiscsi,snapshot,volsize
(added share)
The strange thing is that the command seems to fail:
What
have set
another ISCSI target to. The iscsitgtd process will crash out as well.
That's problem #2. Are there known bugs on this?
ta,
Mark.
[EMAIL PROTECTED]
K wrote:
1/ Anchor vNic, the equivalent of linux dummy interfaces, we need more
flexibility in the way we setup xen networking. What is sad is that
the code is already available in the unreleased crossbow bits... but
it won't appear in nevada until Q1 2008 :(
This is a real blocker
how well it works as I've
not used it. Perhaps the CIFS server project team can follow up here
(but keep in mind it's Thanksgiving week...).
Nico
someone from the CIFS team in storage-discuss should be able to answer
these questions.
-Mark
K wrote:
I have an xVM b75 server and use zfs for storage (zfs root mirror and a
raid-z2 datapool).
I see everywhere that it is recommended to have a lot of memory on a
zfs file server... but I also need to relinquish a lot of my memory to
be used by the domUs.
What would a good
On Thu, 15 Nov 2007, Brian Lionberger wrote:
The question is, should I create one zpool or two to hold /export/home
and /export/backup?
Currently I have one pool for /export/home and one pool for /export/backup.
Should it be one pool for both? Would this be better, and why?
One thing to
On Thu, 15 Nov 2007, Manoj Nayak wrote:
I am getting the following error message when I run any zfs command. I have
attached the script I use to create the ramdisk image for Thumper.
# zfs volinit
internal error: Bad file number
Abort - core dumped
This sounds as if you may have somehow lost the
-Mark
using FUSE on Linux. You
should check with the Linux FUSE community...
-Mark
Mark.
on a T2000.
ta,
Mark.
pool: windoze
state: UNAVAIL
status: One or more devices could not be opened. There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
scrub: none requested
config:
NAME
Economics for one.
We run a number of testing environments which mimic the production one.
But we don't want to spend $750,000 on EMC storage each time when
something costing $200,000 will do the job we need.
At the moment we have over 100TB on four SE6140s and we're very happy
with the
Dick Davies wrote:
Does anybody know if the upcoming CIFS integration in b77 will
provide a mechanism for users to see snapshots (like .zfs/snapshot/
does for NFS)?
I don't believe that the version in build 77 will traverse down .zfs
It would be a good thing to add though.
-Mark
On Mon, 2007-11-05 at 02:16 -0800, Thomas Lecomte wrote:
Hello there -
I'm still waiting for an answer from Phillip Lougher [the SquashFS developer].
I had already contacted him some months ago, without any answer though.
I'll still write a proposal, and probably start the work soon too.
-Mark
Rayson Ho wrote:
Does anyone know whether the following (copied from Wikipedia) is true or
not?
Solaris has a project called CIFS client for Solaris, based on the
Mac OS X smbfs.
Rayson
Yes, that is true.
http://www.opensolaris.org/os/project/smbfs/
-Mark
On Mon, 29 Oct 2007, Krzys wrote:
everything is great but I've made a mistake and I would like to remove
emcpower2a from my pool and I cannot do that...
Well the mistake that I made is that I did not format my device
correctly so instead of adding 125gig I added 128meg
You can't remove it
On Tue, 23 Oct 2007, A Darren Dunham wrote:
On Tue, Oct 23, 2007 at 09:55:58AM -0700, Scott Laird wrote:
I'm writing a couple scripts to automate backups and snapshots, and I'm
finding myself cringing every time I call 'zfs destroy' to get rid of a
snapshot, because a small typo could take
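One cheap safeguard is a wrapper that refuses anything that isn't a snapshot
name (a sketch in plain sh):
case "$1" in
  *@*) zfs destroy "$1" ;;
  *)   echo "refusing to destroy non-snapshot: $1" >&2; exit 1 ;;
esac
Snapshot names always contain '@', so a typo that names a filesystem rather
than a snapshot is caught before zfs destroy ever sees it.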
On Mon, 8 Oct 2007, Kugutsumen wrote:
I just tried..
mount -o rw,remount /
zpool import -f tank
mount -F zfs tank/rootfs /a
zpool status
ls -l /dev/dsk/c1t0d0s0
# /[EMAIL PROTECTED],0/pci1000,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a
csh
setenv TERM vt100
vi /a/boot/solaris/bootenv.rc
#
Please don't do this as a rule, it makes for horrendous support issues
and breaks a lot of health check tools.
Actually, you can use the existing name space for this. By default,
ZFS uses /dev/dsk. But everything in /dev is a symlink. So you could
setup your own space, say
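For example (paths hypothetical):
# mkdir /mydevs
# ln -s /dev/dsk/c1t0d0s0 /mydevs/data0
# zpool create tank /mydevs/data0
ZFS records whatever path it is given, so the vdev then shows up under your
own namespace instead of /dev/dsk.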
On Mon, 24 Sep 2007, Michael Schuster wrote:
I recently started seeing zfs chattiness at boot time: reading zfs config
and something like mounting zfs filesystems (n/n).
This was added recently because ZFS can take a while to mount large
configs. Consoles would appear to freeze after the
On Mon, 24 Sep 2007, Michael Schuster wrote:
I'm also quite prepared to see a running tally(?) after an initial timeout
(your minute) has gone by and we haven't finished ... but I guess we'd also
have to make sure that the output generated isn't messed up by other output
to the console that's
On Mon, 17 Sep 2007, Robert Milkowski wrote:
If you do 'zpool create -f test A B C spare D E' and D or E contains a UFS
filesystem, then despite the -f the zpool command will complain that there is
a UFS file system on D.
This was fixed recently in build 73. See CR 6573276.
Regards,
markm
Thanks
Mark
Hey all again,
Looking into a few other options. How about InfiniBand? It would give us more
bandwidth, but will it increase complexity/price? Any thoughts?
Cheers
Mark
drive in the loop can be the reporting one sometimes. It's
just a quirk of that storage unit.
These days devices will usually have an individual internal FC-AL loop
to each drive to alleviate this sort of problem.
Cheers,
Mark.
Hi all,
yesterday we had a drive failure on a fc-al jbod with 14
Does anybody see a problem with this?
Also, I know this isn't ZFS, but is there an upper limit on file size with
Samba?
Thanks for all your help.
Mark
that by
using IPMP the bandwidth is increased due to sharing across all the network
cards, is this true?
Thanks again for all your help
Cheers
Mark
Is this correct?
thanks again for all your help
Cheers
Mark
to buy it all for
around AUS$350: CPU, mobo, RAM and everything. I've tried it with a few
Solaris distros and it's worked fine and been rather fast.
Cheers
Mark
Hey,
I will submit it. However, does OpenSolaris have a separate HCL, or do I just
use the Solaris one?
Cheers
Mark
They will be connected by gigabit ethernet. So my
question is: how do I mirror one raidz array across the network to the other?
Thanks for all your help
Mark.
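The usual approach is periodic zfs send over ssh (a sketch; host and dataset
names hypothetical):
# zfs snapshot tank/data@sync1
# zfs send tank/data@sync1 | ssh otherhost zfs receive backup/data
Later rounds send only the delta:
# zfs send -i tank/data@sync1 tank/data@sync2 | ssh otherhost zfs receive backup/data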
section). Here are some questions:
* Am I missing a command or something?
* Is there support for lofiadm in a more recent version of ZFS?
* Or is there any way to safely mount a file system image?
Thanks for your help.
Regards
Mark
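For mounting a file system image, the usual route is lofiadm (a sketch;
paths and pool name hypothetical):
# lofiadm -a /export/images/disk.img
/dev/lofi/1
# zpool import -d /dev/lofi mypool
For a UFS image you would instead mount /dev/lofi/1 directly with mount -F ufs.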
GOOD NEWS
It looks as if the zfs mount options can stop
snapshots and clones, we can copy an
entire DB setup and create a clone in about ten seconds. Before it took
hours using the EMCs.
Cheers,
Mark.
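For reference, the whole copy is just (dataset names hypothetical):
# zfs snapshot dbpool/oradata@copy1
# zfs clone dbpool/oradata@copy1 dbpool/oradata_test
The clone shares blocks with the snapshot, which is why it takes seconds
rather than hours regardless of database size.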
After reading your post ... I was wondering whether you would give
some input/advice on a certain configuration I'm working on.
A customer (potential
the LUNs and free up the SE6140 arrays so their owners
can begin to use them.
At the moment once a device is in a zpool, it's stuck there. That's a
problem. What sort of time frame are we looking at until it's possible
to remove LUNs from zpools?
ta,
Mark
On Tue, 17 Jul 2007, Kwang-Hyun Baek wrote:
# uname -a
SunOS solaris-devx 5.11 opensol-20070713 i86pc i386 i86pc
===
What's more interesting is that the ZFS version shows that it's 8. Does it
even exist?
Yes, 8 was created to support
On Mon, 16 Jul 2007, Kwang-Hyun Baek wrote:
Is there any way to fix this? I actually tried to destroy the pool and
try to create a new one, but it doesn't let me. Whenever I try, I get
the following error:
[EMAIL PROTECTED]:/var/crash# zpool create -f pool c0d0s5
internal error: No such
On Fri, 13 Jul 2007, Kwang-Hyun Baek wrote:
NAME      STATE     READ WRITE CKSUM
pool      UNKNOWN      0     0     0
  c0d0s5  UNKNOWN      0     0     0
  c0d0s6  UNKNOWN      0     0     0
  c0d0s4  UNKNOWN      0     0     0
On Fri, 13 Jul 2007, Kwang-Hyun Baek wrote:
zpool import pool (my pool is named 'pool') returns
cannot import 'pool': no such pool available
What does 'zpool import' by itself show you? It should give you a list of
available pools to import.
Regards,
markm
On Fri, 13 Jul 2007, Kwang-Hyun Baek wrote:
zpool list
it shows my pool with health UNKNOWN
That means it's already imported. What's the output of 'zpool status'?
Regards,
markm
On Wed, 27 Jun 2007, Jürgen Keil wrote:
Yep, I just tried it, and it refuses to zpool import the newer pool,
telling me about the incompatible version. So I guess the pool format
isn't the correct explanation for Dick Davies' (number9) problem.
Have you tried creating the pool on
Nicolas Williams wrote:
Couldn't wait for ZFS delegation, so I cobbled something together; see
attachment.
Nico
The *real* ZFS delegation code was integrated into Nevada this morning.
I've placed a little overview in my blog.
http://blogs.sun.com/marks
-Mark
On Tue, 19 Jun 2007, John Brewer wrote:
bash-3.00# zpool import
pool: zones
id: 4567711835620380868
state: ONLINE
status: The pool is formatted using an older on-disk version.
action: The pool can be imported using its name or numeric identifier, though
some features will
On Tue, 12 Jun 2007, Tim Cook wrote:
This pool should have 7 drives total, which it does, but for some reason
c4d0 is displayed twice. Once as online (which it is), and once as
unavail (which it is not).
What's the name of the 7th drive? Did you take all the drives from the
old system and
limitations,
beyond those that pertain to ZFS or SVM in isolation.
As a practical measure, you should probably not be duplicating levels of
striping or redundancy in both layers.
--Mark
On Mon, 11 Jun 2007, Rick Mann wrote:
ZFS Readonly implemntation is loaded!
Is that a copy-n-paste error, or is that a typo in the actual output?
Regards,
markm