On Wed, May 03, 2006 at 03:22:53PM -0400, Maury Markowitz wrote:
I think that's the disconnect. WHY are they full-fledged files?
Because that's what the specification calls for.
Right, but that's my concern. To me this sounds like historically
circular reasoning...
20xx) we need a new
On Thu, May 18, 2006 at 12:46:28PM -0700, Charlie wrote:
Traditional (amanda). I'm not seeing a way to dump zfs file systems to
tape without resorting to 'zfs send' being piped through gtar or
something. Even then, the only thing I could restore was an entire file
system. (We frequently
On Tue, May 23, 2006 at 11:49:47AM +0200, Wout Mertens wrote:
Can that same method be used to figure out what files changed between
snapshots?
To figure out what files changed, we need to (a) figure out what object
numbers changed, and (b) do the object number to file name translation.
The
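The reply is cut off above, but the two steps it names can be sketched. This is a toy model with hypothetical object tables, not the real ZFS on-disk structures (ZFS compares per-object birth txgs; the dicts below stand in for that):

```python
# Sketch of the two steps above, using hypothetical inputs:
# (a) find which object numbers changed between two snapshots,
# (b) translate object numbers to file names via a lookup table.

def changed_objects(snap_old, snap_new):
    """Each snapshot maps object number -> last-modification txg."""
    return sorted(obj for obj, txg in snap_new.items()
                  if snap_old.get(obj) != txg)

def to_paths(objnums, obj_to_path):
    return [obj_to_path.get(o, "<object %d>" % o) for o in objnums]

old = {1: 100, 2: 100, 3: 100}
new = {1: 100, 2: 250, 4: 260}   # object 2 modified, object 4 created
changed = changed_objects(old, new)
print(changed)                    # [2, 4]
print(to_paths(changed, {2: "/tank/a.txt", 4: "/tank/b.txt"}))
```

(Deleted objects would need the reverse comparison as well; the sketch only covers modified and created ones.)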
On Tue, May 23, 2006 at 02:34:30PM -0700, Jeff Victor wrote:
* When you share a ZFS fs via NFS, what happens to files and
filesystems that exceed the limits of NFS?
What limits do you have in mind? I'm not an NFS expert, but I think
that NFSv4 (and probably v3) supports 64-bit file sizes, so
On Wed, May 24, 2006 at 03:43:54PM -0400, Scott Dickson wrote:
I said I had several questions to start threads on
What about ZFS and various HSM solutions? Do any of them already work
with ZFS? Are any going to? It seems like HSM solutions that access
things at a file level would
On Fri, May 26, 2006 at 09:40:57PM +0200, Daniel Rock wrote:
So you can see the second disk of each mirror pair (c4tXd0) gets almost no
I/O. How does ZFS decide from which mirror device to read?
You are almost certainly running in to this known bug:
630 reads from mirror are not
On Thu, Jun 01, 2006 at 11:35:41AM -1000, David J. Orman wrote:
3 - App server would be running in one zone, with a (NFS) mounted ZFS
filesystem as storage.
4 - DB server (PgSQL) would be running in another zone, with a (NFS)
mounted ZFS filesystem as storage.
Why would you use NFS? These
On Mon, Jul 17, 2006 at 10:00:44AM -0700, Bart Smaalders wrote:
So as administrator what do I need to do to set
/export/home up for users to be able to create their own
snapshots, create dependent filesystems (but still mounted
underneath their /export/home/usrname)?
In other words, is
On Fri, Jul 07, 2006 at 04:00:38PM -0400, Dale Ghent wrote:
Add an option to zpool(1M) to dump the pool config as well as the
configuration of the volumes within it to an XML file. This file
could then be sucked in to zpool at a later date to recreate/
replicate the pool and its volume
On Thu, Jul 06, 2006 at 12:46:57AM -0700, Patrick Mauritz wrote:
Hi,
after some unscheduled reboots (to put it lightly), I've got an interesting
setup on my notebook's zfs partition:
setup: simple zpool, no raid or mirror, a couple of zfs partitions, one zvol
for swap. /foo is one such
On Thu, Jul 20, 2006 at 12:58:31AM -0700, Trond Norbye wrote:
I have been using iosoop script (see
http://www.opensolaris.org/os/community/dtrace/scripts/) written by
Brendan Gregg to look at the IO operations of my application.
...
So how can I get the same information from a ZFS file-system?
On Tue, Jul 25, 2006 at 11:13:16AM -0700, Brad Plecs wrote:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6431277
What I'd really like to see is ... the ability for the snapshot space
to *not* impact the filesystem space).
Yep, as Eric mentioned, that is the purpose of this
On Tue, Jul 25, 2006 at 07:24:51PM -0500, Mike Gerdts wrote:
On 7/25/06, Brad Plecs [EMAIL PROTECTED] wrote:
What I'd really like to see is ... the ability for the snapshot space
to *not* impact the filesystem space).
The idea is that you have two storage pools - one for live data, one
for
On Thu, Jun 29, 2006 at 08:20:56PM +0200, Robert Milkowski wrote:
btw: I believe it was discussed here before - it would be great if one
could automatically convert a given directory on a zfs filesystem into a zfs
filesystem (without actually copying all the data)
Yep, and an RFE filed: 6400399 want zfs
On Tue, Jun 27, 2006 at 06:30:46PM -0400, Dennis Clarke wrote:
... but I have to ask.
How do I back this up?
The following two RFEs would help you out enormously:
6421958 want recursive zfs send ('zfs send -r')
6421959 want zfs send to preserve properties ('zfs send -p')
As far as RFEs
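Until those RFEs land, a backup script has to walk the filesystems itself. A minimal sketch that only *builds* the send/recv pipelines (the dataset names, host, and destination pool below are made up):

```python
# 'zfs send -r' (RFE 6421958) doesn't exist yet, so enumerate the
# filesystems and send each one individually. This sketch only builds
# the shell command lines; it does not run them.

def build_backup_cmds(filesystems, snapname, dest_host, dest_pool):
    return ["zfs send %s@%s | ssh %s zfs recv -d %s"
            % (fs, snapname, dest_host, dest_pool)
            for fs in filesystems]

for cmd in build_backup_cmds(["tank/home", "tank/home/alice"],
                             "backup", "newbox", "backup"):
    print(cmd)
```

Note that per-filesystem sends also lose properties on the receive side, which is exactly what RFE 6421959 ('zfs send -p') is about.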
On Thu, Jul 27, 2006 at 03:54:02PM -0400, Christine Tran wrote:
- What is the compression algorithm used?
It is based on the Lempel-Ziv algorithm.
- Is there a ZFS feature that will output the real uncompressed size of
the data? The scenario is if they had to move a compressed ZFS
On Thu, Jul 27, 2006 at 08:17:03PM -0500, Malahat Qureshi wrote:
Is there any way to boot from a zfs disk, or a workaround?
Yes, see
http://blogs.sun.com/roller/page/tabriz?entry=are_you_ready_to_rumble
--mat
zfs-discuss mailing list
On Tue, Aug 08, 2006 at 06:11:09PM +0200, Robert Milkowski wrote:
filebench/singlestreamread v440
1. UFS, noatime, HW RAID5 6 disks, S10U2
70MB/s
2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1)
87MB/s
3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2
On Tue, Aug 08, 2006 at 09:54:16AM -0700, Robert Milkowski wrote:
Hi.
snv_44, v440
filebench/varmail results for ZFS RAID10 with 6 disks and 32 disks.
What is surprising is that the results for both cases are almost the same!
6 disks:
IO Summary: 566997 ops 9373.6
On Tue, Aug 08, 2006 at 10:42:41AM -0700, Robert Milkowski wrote:
filebench in varmail by default creates 16 threads - I confirmed it
with prstat, 16 threads are created and running.
Ah, OK. Looking at these results, it doesn't seem to be CPU bound, and
the disks are not fully utilized either.
On Thu, Aug 10, 2006 at 10:23:20AM -0700, Eric Schrock wrote:
A new option will be added, 'canmount', which specifies whether the
given filesystem can be mounted with 'zfs mount'. This is a boolean
property, and is not inherited.
Cool, looks good. Do you plan to implement this using the
On Thu, Aug 10, 2006 at 10:44:46AM -0700, Eric Schrock wrote:
Right now I'm using the generic property mechanism, but have a special
case in dsl_prop_get_all() to ignore searching parents for this
particular property. I'm not thrilled about it, but I only see two
other options:
1. Do not
On Fri, Aug 11, 2006 at 10:02:41AM -0700, Brad Plecs wrote:
There doesn't appear to be a way to move zfspool/www and its
descendants en masse to a new machine with those quotas intact. I have
to script the recreation of all of the descendant filesystems by hand.
Yep, you need
6421959 want
On Thu, Aug 17, 2006 at 02:53:09PM +0200, Robert Milkowski wrote:
Hello zfs-discuss,
Is someone actually working on it? Or any other algorithms?
Any dates?
Not that I know of. Any volunteers? :-)
(Actually, I think that a RLE compression algorithm for metadata is a
higher priority, but
On Thu, Aug 17, 2006 at 10:28:10AM -0700, Adam Leventhal wrote:
On Thu, Aug 17, 2006 at 10:00:32AM -0700, Matthew Ahrens wrote:
(Actually, I think that a RLE compression algorithm for metadata is a
higher priority, but if someone from the community wants to step up, we
won't turn your code
On Sat, Aug 19, 2006 at 07:21:52PM -0700, Frank Cusack wrote:
On August 19, 2006 7:06:06 PM -0700 Matthew Ahrens [EMAIL PROTECTED]
wrote:
My guess is that the filesystem is not mounted. It should be remounted
after the 'zfs recv', but perhaps that is not happening correctly. You
can see
On Sun, Aug 20, 2006 at 08:38:03PM -0700, Luke Lonergan wrote:
Matthew,
On 8/20/06 6:20 PM, Matthew Ahrens [EMAIL PROTECTED] wrote:
This was not the design, we're working on fixing this bug so that many
threads will be used to do the compression.
Is this also true of decompression?
I
On Tue, Aug 22, 2006 at 06:15:08AM -0700, Tony Galway wrote:
A question (well, let's make it 3 really)? Is vdbench a useful tool
when testing file system performance of a ZFS file system? Secondly -
is ZFS write performance really much worse than UFS or VxFS? And third
- what is a good
On Tue, Aug 22, 2006 at 08:43:32AM -0700, roland wrote:
can someone tell how effective ZFS compression and
space-efficiency are (regarding small files)?
since compression works at the block level, I assume compression may
not come into effect as some may expect. (maybe I'm wrong here)
It's
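The point about block-level compression and small files can be illustrated numerically: even a highly compressible small file cannot occupy less than one allocated unit on disk. A toy model (the 512-byte sector size is an assumption of the sketch, not a ZFS constant):

```python
import zlib

SECTOR = 512  # assumed smallest allocation unit for this toy model

def on_disk(compressed_len):
    # Allocation is rounded up to whole sectors, so a tiny file can't
    # shrink below one sector no matter how well it compresses.
    return max(1, -(-compressed_len // SECTOR)) * SECTOR

text = b"hello world\n" * 40           # 480 bytes, very compressible
c = len(zlib.compress(text))
print(len(text), c, on_disk(c))        # compresses well, still one sector
```

So for files already smaller than one block, compression saves little or nothing; the win is on larger, compressible blocks.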
Shane, I wasn't able to reproduce this failure on my system. Could you
try running Eric's D script below and send us the output while running
'zfs list'?
thanks,
--matt
On Fri, Aug 18, 2006 at 09:47:45AM -0700, Eric Schrock wrote:
Can you send the output of this D script while running 'zfs
On Wed, Aug 23, 2006 at 09:57:04AM -0400, James Foronda wrote:
Hi,
[EMAIL PROTECTED] cat /etc/release
Solaris Nevada snv_33 X86
Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
On Thu, Aug 24, 2006 at 08:12:34AM +1000, Boyd Adamson wrote:
Isn't the whole point of the zpool upgrade process to allow users to
decide when they want to remove the fall back to old version option?
In other words shouldn't any change that eliminates going back to an
old rev require an
I just realized that I forgot to send this message to zfs-discuss back
in May when I fixed this bug. Sorry for the delay.
The putback of the following bug fix to Solaris Nevada build 42 and
Solaris 10 update 3 build 3 (and coinciding with the change to ZFS
on-disk version 3) changes the behavior
On Thu, Aug 24, 2006 at 10:12:12AM -0600, Arlina Goce-Capiral wrote:
It does appear that the disk is filled up by 140G.
So this confirms what I was saying, that they are only able to write
ndisks-1 worth of data (in this case, ~68GB * (3-1) == ~136GB). So there
is no unexpected behavior with
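The capacity arithmetic in the reply can be checked directly:

```python
# With raidz, one disk's worth of space goes to parity, so usable
# capacity is (ndisks - 1) * disk_size -- the "~68GB * (3-1)" above.
def raidz_usable(ndisks, disk_size_gb):
    return (ndisks - 1) * disk_size_gb

print(raidz_usable(3, 68))  # 136, matching the ~140G the disk filled to
```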
On Thu, Aug 24, 2006 at 07:07:45AM -0700, Joe Little wrote:
We finally flipped the switch on one of our ZFS-based servers, with
approximately 1TB of 2.8TB (3 stripes of 950MB or so, each of which is
a RAID5 volume on the adaptec card). We have snapshots every 4 hours
for the first few days. If
On Thu, Aug 24, 2006 at 01:15:51PM -0500, Nicolas Williams wrote:
I just tried creating 150,000 directories in a ZFS root directory. It
was speedy. Listing individual directories (lookup) is fast.
Glad to hear that it's working well for you!
Listing the large directory isn't, but that turns
On Thu, Aug 24, 2006 at 02:21:33PM -0700, Joe Little wrote:
well, by deleting my 4-hourlies I reclaimed most of the data. To
answer some of the questions, it's about 15 filesystems (descendants
included). I'm aware of the space used by snapshots overlapping. I was
looking at the total space
James Dickens wrote:
Why not make a snapshots on a production and then send incremental
backups over net? Especially with a lot of files it should be MUCH
faster than rsync.
because it's a ZFS-limited solution; if the source is not ZFS it won't
work, and I'm not sure how much faster
Dick Davies wrote:
On 30/08/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
'zfs send' is *incredibly* faster than rsync.
That's interesting. We had considered it as a replacement for a
certain task (publishing a master docroot to multiple webservers)
but a quick test with ~500Mb of data showed
Theo Bongers wrote:
Please can anyone tell me how to handle a LUN that is expanded (on a RAID
array or SAN storage) and grow the filesystem without data loss?
How does ZFS look at the volume? In other words, how can I grow the filesystem
after LUN expansion.
Do I need to
Roch wrote:
Matthew Ahrens writes:
Robert Milkowski wrote:
IIRC unmounting ZFS file system won't flush its caches - you've got to
export entire pool.
That's correct. And I did ensure that the data was not cached before
each of my tests.
Matt ?
It seems to me that (at least
John Beck wrote:
% zfs snapshot -r [EMAIL PROTECTED]
% zfs send space/[EMAIL PROTECTED] | ssh newbox zfs recv -d space
% zfs send space/[EMAIL PROTECTED] | ssh newbox zfs recv -d space
...
% zfs set mountpoint=/export/home space
% zfs set mountpoint=/usr/local space/local
% zfs set sharenfs=on
Marlanne DeLaSource wrote:
As I understand it, the snapshot of a set is used as a reference by the clone.
So the clone is initially a set of pointers to the snapshot. That's why it is
so fast to create.
How can I separate it from the snapshot ? (so that df -k or zfs list will
display for a
Arlina Goce-Capiral wrote:
Customer's main concern right now is to make the system bootable, but it
seems they couldn't do that since the bad disk is part
of the zfs filesystems. Is there a way to disable or clear out the bad
zfs filesystem so the system can be booted?
Yes, see this FAQ:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Your comments are appreciated!
--matt
A. INTRODUCTION
ZFS stores multiple copies of all metadata. This is accomplished by
storing up to three DVAs (Disk Virtual
James Dickens wrote:
On 9/11/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
B. DESCRIPTION
A new property will be added, 'copies', which specifies how many copies
of the given filesystem will be stored. Its value must be 1, 2, or 3.
Like other properties (eg. checksum, compression), it only
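A sketch of how space accounting would behave under the proposal. This is my reading of the draft, not the final implementation:

```python
# Under the proposed 'copies' property, every block is written N times,
# so the space charged for a file scales with the setting.
def space_charged(file_bytes, copies):
    if copies not in (1, 2, 3):
        raise ValueError("copies must be 1, 2, or 3")
    return file_bytes * copies

print(space_charged(1_000_000, 2))  # 2000000
```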
Mike Gerdts wrote:
Is there anything in the works to compress (or encrypt) existing data
after the fact? For example, a special option to scrub that causes
the data to be re-written with the new properties could potentially do
this.
This is a long-term goal of ours, but with snapshots, this
James Dickens wrote:
though I think this is a cool feature, I think it needs more work. I
think there should be an option to make the extra copies expendable, so the
extra copies are a request: if the space is available, make them; if
not, complete the write and log the event.
Are you asking for the
Robert Milkowski wrote:
Hello Mark,
Monday, September 11, 2006, 4:25:40 PM, you wrote:
MM Jeremy Teo wrote:
Hello,
how are writes distributed as the free space within a pool reaches a
very small percentage?
I understand that when free space is available, ZFS will batch writes
and then issue
Matthew Ahrens wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts to address is when you have some
data that is more important (and thus
Dick Davies wrote:
For the sake of argument, let's assume:
1. disk is expensive
2. someone is keeping valuable files on a non-redundant zpool
3. they can't scrape enough vdevs to make a redundant zpool
(remembering you can build vdevs out of *flat files*)
Given those assumptions, I think
Torrey McMahon wrote:
Matthew Ahrens wrote:
The problem that this feature attempts to address is when you have
some data that is more important (and thus needs a higher level of
redundancy) than other data. Of course in some situations you can use
multiple pools, but that is antithetical
Nicolas Dorfsman wrote:
Hi,
There's something really bizarre in the ZFS snapshot specs: "Uses no
separate backing store."
Hum... if I want to mutualize one physical volume somewhere in my SAN
as THE snapshot backing store... it becomes impossible to do!
Really bad.
Is there any chance to have a
Nicolas Dorfsman wrote:
We need to think ZFS as ZFS, and not as a new filesystem ! I mean,
the whole concept is different.
Agreed.
So. What could be the best architecture ?
What is the problem?
With UFS, I used to have separate metadevices/LUNs for each
application. With ZFS, I thought
Bady, Brant RBCM:EX wrote:
Actually to clarify - what I want to do is to be able to read the
associated checksums ZFS creates for a file and then store them in an
external system e.g. an oracle database most likely
Rather than storing the checksum externally, you could simply let ZFS
verify
Jan Hendrik Mangold wrote:
I didn't ask the original question, but I have a scenario where I
want to use clone as well and encounter a (designed?) behaviour I am
trying to understand.
I create a filesystem A with ZFS and modify it to a point where I
create a snapshot [EMAIL PROTECTED] Then I
Mike Gerdts wrote:
A couple scenarios from environments that I work in, using legacy
file systems and volume managers:
1) Various test copies need to be on different spindles to remove any
perceived or real performance impact imposed by one or the other.
Arguably by having the IO activity
Anantha N. Srirama wrote:
You're most certainly hitting the SSH limitation. Note that
SSH/SCP sessions are single-threaded and won't utilize all of the
system resources even if they are available.
You may want to try 'ssh -c blowfish' to use the (faster) blowfish
encryption algorithm
Michael Phua - PTS wrote:
Hi,
Our customer has an Sun Fire X4100 with Solaris 10 using ZFS and a HW RAID
array (STK D280).
He has extended a LUN on the storage array and wants to make this new size
known to ZFS and Solaris.
Does anyone know if this can be done and how it can be done.
Darren Dunham wrote:
What about ZFS root? And compatibility with Live Upgrade? Any
timetable estimation?
ZFS root has been previously announced as targeted for update 4.
ZFS root support will most likely not be available in Solaris 10 until
update 5. (And of course this is subject to
Stefan Urbat wrote:
By the way, I have to wait a few hours to umount and check mountpoint
permissions, because an automated build is currently running on that
zfs --- the performance of [EMAIL PROTECTED] is indeed rather poor (much worse
than ufs), but this is another, already documented and bug
Ewen Chan wrote:
However, in order for me to lift the unit, I needed to pull the
drives out so that it would actually be moveable, and in doing so, I
think that the drive-cable-port allocation/assignment has
changed.
If that is the case, then ZFS would automatically figure out the new
[EMAIL PROTECTED] wrote:
On Fri, Oct 06, 2006 at 01:14:23AM -0600, Chad Leigh -- Shire.Net LLC wrote:
But I would dearly like to have a versioning capability.
Me too.
Example (real life scenario): there is a samba server for about 200
concurrent connected users. They keep mainly doc/xls files
Jeremy Teo wrote:
A couple of use cases I was considering off hand:
1. Oops i truncated my file
2. Oops i saved over my file
3. Oops an app corrupted my file.
4. Oops i rm -rf the wrong directory.
All of which can be solved by periodic snapshots, but versioning gives
us immediacy.
So is
Frank Cusack wrote:
[EMAIL PROTECTED]:~]# zfs send -i export/zone/www/[EMAIL PROTECTED] export/zone/www/[EMAIL PROTECTED]
| ssh cookies zfs recv export/zone/www/html
cannot receive: destination has been modified since most recent snapshot --
use 'zfs rollback' to discard changes
I was going
Frank Cusack wrote:
If you can't run build 48 or later, then you can workaround the problem
by not mounting the filesystem in between the 'rollback' and the 'recv':
cookies# zfs set mountpoint=none export/zone/www/html
cookies# zfs rollback export/zone/www/[EMAIL PROTECTED]
milk# zfs send -i @4
Frank Cusack wrote:
No, I just tried the @[EMAIL PROTECTED] incremental again. I didn't think to
try
another incremental. So I was basically doing the mountpoint=none trick,
then trying @[EMAIL PROTECTED] again without doing mountpoint=none.
Again, seeing the exact sequence of commands you
Frank Cusack wrote:
Really? I find it hard to believe that mountpoint=none causes any more
problems than 'zfs recv' by itself, since 'zfs recv' of an incremental
stream always unmounts the destination fs while the recv is taking place.
You're right. I forgot I was having problems with this
ttoulliu2002 wrote:
Hi:
I have zpool created
# zpool list
NAME      SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
ktspool   34,5G  33,5K  34,5G  0%   ONLINE  -
However, zpool status shows no known data error. May I know what the problem is?
# zpool status
Stefan Urbat wrote:
What bug was filed?
6421427 is nfs related, but another forum member thought, that it is in fact a
general IDE performance bottleneck behind, and was only made visible in this
case. There is a report, that on an also with simple IDE equipped Blade 150 the
same issue with
Brian Hechinger wrote:
Ok, previous threads have lead me to believe that I want to make raidz
vdevs [0] either 3, 5 or 9 disks in size [1]. Let's say I have 8 disks.
Do I want to create a zfs pool with a 5-disk vdev and a 3-disk vdev?
Are there performance issues with mixing differently sized
Steven Goldberg wrote:
Thanks Matt. So is the config/meta info for the pool that is stored
within the pool kept in a file? Is the file user readable or binary?
It is not user-readable. See the on-disk format document, linked here:
http://www.opensolaris.org/os/community/zfs/docs/
--matt
Jeremy Teo wrote:
Would it be worthwhile to implement heuristics to auto-tune
'recordsize', or would that not be worth the effort?
It would be really great to automatically select the proper recordsize
for each file! How do you suggest doing so?
--matt
Roshan Perera wrote:
Hi Jeff Robert, Thanks for the reply. Your interpretation is
correct and the answer spot on.
This is going to be at a VIP clients QA/production environment and
first introduction to 10, zones and zfs. Anything unsupported is not
allowed. Hence I may have to wait for the
Robert Milkowski wrote:
Hello Richard,
Friday, October 13, 2006, 8:05:18 AM, you wrote:
REP Do you want data availability, data retention, space, or performance?
data availability, space, performance
However we're talking about quite a lot of small IOs (r+w).
Then you should seriously
Robert Milkowski wrote:
Hello Noel,
Friday, October 13, 2006, 11:22:06 PM, you wrote:
ND I don't understand why you can't use 'zpool status'? That will show
ND the pools and the physical devices in each and is also a pretty basic
ND command. Examples are given in the sysadmin docs and
Jeremy Teo wrote:
Would it be worthwhile to implement heuristics to auto-tune
'recordsize', or would that not be worth the effort?
Here is one relatively straightforward way you could implement this.
You can't (currently) change the recordsize once there are multiple
blocks in the file.
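One way such a heuristic could look, sketched under the constraint just mentioned. The "most common write size" criterion and the rounding are my assumptions, not anything ZFS does:

```python
# Sketch of a recordsize auto-tuning heuristic: before the file grows
# past a single block, pick the most common observed write size, rounded
# up to a power of two and clamped to ZFS's 512B..128K range.
def pick_recordsize(write_sizes, lo=512, hi=128 * 1024):
    if not write_sizes:
        return hi                       # no evidence: keep the default max
    common = max(set(write_sizes), key=write_sizes.count)
    rs = lo
    while rs < common:
        rs *= 2
    return min(rs, hi)

print(pick_recordsize([8192, 8192, 8192, 4096]))  # 8192
```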
Torrey McMahon wrote:
Richard Elling - PAE wrote:
Anantha N. Srirama wrote:
I'm glad you asked this question. We are currently expecting 3511
storage sub-systems for our servers. We were wondering about their
configuration as well. This ZFS thing throws a wrench in the old line of
thinking ;-)
Robert Milkowski wrote:
If it happens again I'll try to get some more specific data - however
it depends on when it happens as during peak hours I'll probably just
destroy a snapshot to get it working.
If it happens again, it would be great if you could gather some data
before you destroy the
Jeremy Teo wrote:
Heya Anton,
On 10/17/06, Anton B. Rang [EMAIL PROTECTED] wrote:
No, the reason to try to match recordsize to the write size is so that
a small write does not turn into a large read + a large write. In
configurations where the disk is kept busy, multiplying 8K of data
Erblichs wrote:
Now the stupid question...
If the snapshot is identical to the FS, I can't remove files from the FS
because of the snapshot, and removing files from the snapshot only
removes a reference to the file and leaves the memory.
So, how do
Richard Elling - PAE wrote:
Anthony Miller wrote:
Hi,
I've search the forums and not found any answer to the following.
I have 2 JBOD arrays each with 4 disks.
I want to create a raidz on one array and have it mirrored to
the other array.
Today, the top level raid sets are
Robert Milkowski wrote:
Hello Jeremy,
Monday, October 23, 2006, 5:04:09 PM, you wrote:
JT Hello,
Shrinking the vdevs requires moving data. Once you move data, you've
got to either invalidate the snapshots or update them. I think that
will be one of the more difficult parts.
JT Updating
Jens Elkner wrote:
Yes, I guessed that, but hopefully not that much ...
Thinking about it, it would suggest to me (if I need abs. max. perf) that the best
thing to do is to create a pool inside the zone and use zfs on it?
Using a ZFS filesystem within a zone will go just as fast as in the
Erik Trimble wrote:
Matthew Ahrens wrote:
Erik Trimble wrote:
The ability to expand (and, to a less extent, shrink) a RAIDZ or
RAIDZ2 device is actually one of the more critical missing features
from ZFS, IMHO. It is very common for folks to add additional shelf
or shelves into an existing
Robert Milkowski wrote:
Hi.
On nfs clients which are mounting file system f3-1/d611 I can see 3-5s
periods of 100% busy (iostat) and almost no IOs issued to nfs server, on nfs
server at the same time disk activity is almost 0 (both iostat and zpool
iostat). However CPU activity increases
Juergen Keil wrote:
Sounds familiar. Yes it is a small system a Sun blade 100 with 128MB of
memory.
Oh, 128MB...
Btw, does anyone know if there are any minimum hardware (physical memory)
requirements for using ZFS?
It seems as if ZFS wasn't tested that much on machines with 256MB (or
Jeremy Teo wrote:
This is the same problem described in
6343653 : want to quickly copy a file from a snapshot.
Actually it's a somewhat different problem. Copying a file from a
snapshot is a lot simpler than copying a file from a different
filesystem. With snapshots, things are a lot more
Rince wrote:
Hi all,
I recently created a RAID-Z1 pool out of a set of 7 SCSI disks, using
the following command:
# zpool create magicant raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0
c5t6d0
It worked fine, but I was slightly confused by the size yield (99 GB vs
the 116 GB I had on my
Jeff Victor wrote:
If I add a ZFS dataset to a zone, and then want to zfs send from
another computer into a file system that the zone has created in that
data set, can I zfs send to the zone, or can I send to that zone's
global zone, or will either of those work?
I believe that the 'zfs
Vahid Moghaddasi wrote:
I created a raidz from three 70GB disks and got a total of 200GB out
of it. Isn't that supposed to give 140GB?
You are hitting
6288488 du reports misleading size on RAID-Z
which affects pools created before build 42 or s10u3.
--matt
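A quick sketch of the numbers involved, assuming the pre-fix accounting counted raw space (data plus parity):

```python
# Three 70GB disks in raidz: raw space is 210GB, but one disk's worth is
# parity, so only 140GB is usable for data. Accounting that includes the
# parity overhead explains the ~200GB the poster saw instead of 140GB.
def raidz_sizes(ndisks, disk_gb):
    raw = ndisks * disk_gb
    usable = (ndisks - 1) * disk_gb
    return raw, usable

print(raidz_sizes(3, 70))  # (210, 140)
```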
Robert Milkowski wrote:
PvdZ This could be related to Linux trading reliability for speed by doing
PvdZ async metadata updates.
PvdZ If your system crashes before your metadata is flushed to disk your
PvdZ filesystem might be hosed and a restore
PvdZ from backups may be needed.
you can
Elizabeth Schwartz wrote:
On 11/28/06, *David Dyer-Bennet* [EMAIL PROTECTED] wrote:
Looks to me like another example of ZFS noticing and reporting an
error that would go quietly by on any other filesystem. And if you're
concerned with the integrity of the
Jason J. W. Williams wrote:
Hi all,
Having experienced this, it would be nice if there was an option to
offline the filesystem instead of kernel panicking on a per-zpool
basis. If its a system-critical partition like a database I'd prefer
it to kernel-panick and thereby trigger a fail-over of
Gino Ruopolo wrote:
Hi All,
we have some ZFS pools in production with more than 100 filesystems and more
than 1000 snapshots on them. Now we do backups with zfs send/receive
with some scripting but I'm searching for a way to mirror each zpool
to an other one for backup purposes (so including all
Jeb Campbell wrote:
After upgrade you did actually re-create your raid-z
pool, right?
No, but I did zpool upgrade -a.
Hmm, I guess I'll try re-writing the data first. I know you have to do that if
you change compression options.
Ok -- rewriting the data doesn't work ...
I'll create a new
Anantha N. Srirama wrote:
- Why is the destroy phase taking so long?
Destroying clones will be much faster with build 53 or later (or the
unreleased s10u4 or later) -- see bug 6484044.
- What can explain the unduly long snapshot/clone times
- Why didn't the Zone startup?
- More
Bill Sommerfeld wrote:
On Tue, 2006-12-19 at 16:19 -0800, Matthew Ahrens wrote:
Darren J Moffat wrote:
I believe that ZFS should provide a method of bleaching a disk or part
of it that works without crypto having ever been involved.
I see two use cases here:
I agree with your two, but I
Nathalie Poulet (IPSL) wrote:
Hello,
After an export and an import, the size of the pool remains
unchanged. As there were no data on this partition, I destroyed and
recreated the pool. The size was indeed taken into account.
The correct size is indicated by the command zpool list. The
Jason J. W. Williams wrote:
INFORMATION: If a member of this striped zpool becomes unavailable or
develops corruption, Solaris will kernel panic and reboot to protect
your data.
This is a bug, not a feature. We are currently working on fixing it.
--matt