On Thu, Jun 03, 2010 at 12:40:34PM -0700, Frank Cusack wrote:
On 6/3/10 12:06 AM -0400 Roman Naumenko wrote:
I think there is a difference. I just quickly checked the NetApp site:
"Adding new disks to a RAID group": If a volume has more than one RAID
group, you can specify the RAID group to which you
On Sun, May 09, 2010 at 10:55:08PM +0530, Johnson Thomas wrote:
Customer has this query:
Is there a way to flush the ARC for filebench runs without rebooting
the system?
He is running firmware 2010.02.09.0.2,1-1.13 on the NAS 7410
In the pre-ZFS world I would have suggested unmounting the
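On a general Solaris host (the 7410 appliance shell may differ), the
usual ZFS-era equivalent is to bounce the dataset or the pool; a minimal
sketch with hypothetical names:

    zfs unmount tank/bench && zfs mount tank/bench   # drop cached data for one filesystem
    zpool export tank && zpool import tank           # or drop the ARC contents for the whole pool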
On Wed, Apr 28, 2010 at 09:49:04PM +0200, Ragnar Sundblad wrote:
On 28 apr 2010, at 14.06, Edward Ned Harvey wrote:
What indicators do you have that ONTAP/WAFL has inode-name lookup
functionality? I don't think it has any such thing - WAFL is pretty
much a UFS/FFS that does COW instead of
On Wed, Apr 21, 2010 at 10:10:09PM -0400, Edward Ned Harvey wrote:
From: Nicolas Williams [mailto:nicolas.willi...@oracle.com]
POSIX doesn't allow us to have special dot files/directories outside
filesystem root directories.
So? Tell it to NetApp. They don't seem to have any problem
On Wed, Apr 21, 2010 at 04:49:30PM +0100, Darren J Moffat wrote:
/foo is the filesystem
/foo/bar is a directory in the filesystem
cd /foo/bar/
touch stuff
[ you wait, time passes; a snapshot is taken ]
At this point /foo/bar/.snapshot/.../stuff exists
Now do this:
rm -rf /foo/bar
On Sat, Apr 17, 2010 at 09:03:33AM -0400, Edward Ned Harvey wrote:
zfs list -t snapshot lists in time order.
Good to know. I'll keep that in mind for my zfs send scripts but it's not
relevant for the case at hand. Because zfs list isn't available on the
NFS client, where the users are
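Where zfs(1M) is available, the ordering can also be made explicit
instead of relying on the default; a small example (dataset name
hypothetical):

    zfs list -t snapshot -o name,creation -s creation tank/home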
On Tue, Mar 02, 2010 at 11:42:30AM -0800, Carson Gaspar wrote:
NetApp does _not_ expose an ACL via NFSv3, just old school POSIX
mode/owner/group info. I don't know how NetApp deals with chmod, but
I'm sure it's documented.
I can't get a chmod to succeed in that situation. This particular
On Fri, Feb 05, 2010 at 08:35:15AM -0800, J wrote:
To be more descriptive, I plan to have a RAID 1 array for the OS, and
then I will need 3 additional RAID5/RAIDZ/etc arrays for data
archiving, backups and other purposes. There is plenty of
documentation on how to recover an array if one of
On Wed, Jan 27, 2010 at 10:55:21AM -0800, Albert Frenz wrote:
hi there,
maybe this is a stupid question, yet I haven't found an answer anywhere ;)
Let's say I've got 3x 1.5 TB HDDs: can I create equal partitions out of each and
make a RAID5 out of it? Sure, the safety would drop, but that is not
On Wed, Jan 20, 2010 at 08:11:27AM +1300, Ian Collins wrote:
True, but I wonder how viable its future is. One of my clients
requires 17 LTO-4 tapes for a full backup, which cost more and take
up more space than the equivalent in removable hard drives.
What kind of removable hard drives are
On Thu, Jan 21, 2010 at 12:38:56AM +0100, Ragnar Sundblad wrote:
On 21 jan 2010, at 00.20, Al Hopper wrote:
I remember from about 5 years ago (before the LTO-4 days) that streaming
tape drives would go to great lengths to ensure that the drive kept
streaming - because it took so much time to
On Fri, Jan 15, 2010 at 02:07:40PM -0600, Gary Mills wrote:
I have a ZFS filesystem that I wish to split into two
ZFS filesystems at one of the subdirectories. I understand that I
first need to make a snapshot of the filesystem and then make a clone
of the snapshot, with a different name.
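A minimal sketch of that snapshot-plus-clone procedure, assuming tank/fs
is being split at its subdirectory sub (all names hypothetical):

    zfs snapshot tank/fs@split
    zfs clone tank/fs@split tank/fs-sub
    # then remove sub/ from tank/fs, and everything except sub/ from the
    # clone; zfs promote tank/fs-sub if the clone should own the snapshot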
On Wed, Jan 13, 2010 at 04:38:42PM +0200, Cyril Plisko wrote:
On Wed, Jan 13, 2010 at 4:35 PM, Max Levine max...@gmail.com wrote:
Veritas has this feature called fast mirror resync where they have a
DRL on each side of the mirror, and detaching/re-attaching a mirror
causes only the changed
On Thu, Jan 14, 2010 at 06:11:10PM -0500, Miles Nordin wrote:
zpool offline / zpool online of a mirror component will indeed
fast-resync, and I do it all the time. zpool detach / attach will
not.
Yes, but the offline device is still part of the pool. What are you
doing with the device when
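For reference, the fast-resync sequence being discussed looks like this
(pool and device names hypothetical); while offline, the device stays a
pool member and ZFS tracks what it missed in the dirty time log:

    zpool offline tank c1t2d0    # component out of service, still in the pool
    # ... maintenance window; changed ranges accumulate in the DTL ...
    zpool online tank c1t2d0     # resilvers only what changed while offline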
On Tue, Jan 05, 2010 at 04:49:00PM +, Robert Milkowski wrote:
A possible *workaround* is to use SVM to set-up RAID-5 and create a
zfs pool on top of it.
How does SVM handle R5 write hole? IIRC SVM doesn't offer RAID-6.
As far as I know, it does not address it. It's possible that adding a
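A sketch of that workaround, assuming three slices are free (names
hypothetical); note that ZFS then sees d10 as a single device with no
redundancy of its own:

    metainit d10 -r c1t0d0s0 c1t1d0s0 c1t2d0s0   # SVM RAID-5 metadevice
    zpool create tank /dev/md/dsk/d10            # plain pool on top of it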
On Tue, Dec 29, 2009 at 02:37:20PM -0800, Brad wrote:
I would appreciate some feedback on what I've understood so far:
WRITES
raid5 - A FS block is written to a single disk (or multiple disks
depending on the data size???)
There is no direct relationship between a filesystem and the RAID
On Sun, Dec 27, 2009 at 06:02:18PM +0100, Colin Raven wrote:
Are there any negative consequences as a result of a force import? I mean
STUNT; Sudden Totally Unexpected and Nasty Things
-Me
If the pool is not in use, no. It's a safety check to avoid problems
that can easily crop up when
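The safety check is the host identity recorded in the pool label; a
hedged example of the usual sequence (pool name hypothetical):

    zpool import          # shows the pool and whether it looks in use elsewhere
    zpool import -f tank  # override only after confirming no other host has it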
On Thu, Dec 17, 2009 at 12:30:29PM -0800, Stacy Maydew wrote:
So thanks for that answer. I'm a bit confused, though: if dedup is
applied per ZFS filesystem, not per zpool, why can I only see the dedup
on a per-pool basis rather than for each ZFS filesystem?
Seems to me there should be a way to
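For reference, the property is per-dataset while the statistics surface
per pool; hedged example commands (names hypothetical):

    zfs set dedup=on tank/archive   # enabled per filesystem
    zpool list tank                 # DEDUP column: pool-wide ratio only
    zdb -DD tank                    # DDT histogram, also pool-wide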
On Mon, Dec 14, 2009 at 09:30:29PM +0300, Andrey Kuzmin wrote:
ZFS deduplication is block-level, so to deduplicate one needs data
broken into blocks to be written. With compression enabled, you don't
have these until data is compressed. Looks like cycles waste indeed,
but ...
ZFS compression
On Thu, Dec 03, 2009 at 09:36:23AM -0800, Per Baatrup wrote:
The reason I was speaking about cat instead of cp is that in
addition to copying a single file I would like also to concatenate
several files into a single file. Can this be accomplished with your
(z)cp?
Unless you have special
On Thu, Dec 03, 2009 at 12:44:16PM -0800, Per Baatrup wrote:
any of f1..f5's last blocks are partial
Does this mean that f1,f2,f3,f4 need to be exact multiples of the ZFS
blocksize? This is a severe restriction that will fail except in very
special cases. Is this related to the disk format
On Tue, Nov 10, 2009 at 03:04:24PM -0600, Tim Cook wrote:
No. The whole point of a snapshot is to keep a consistent on-disk state
from a certain point in time. I'm not entirely sure how you managed to
corrupt blocks that are part of an existing snapshot though, as they'd be
read-only.
On Mon, Nov 09, 2009 at 03:25:02PM -0700, Robert Thurlow wrote:
Andrew Daugherity wrote:
if I invoke bart via truss, I see it calls statvfs() and fails. Way to
keep up with the times, Sun!
% file /bin/truss /bin/amd64/truss
/bin/truss: ELF 32-bit LSB executable 80386 Version 1
On Wed, Nov 04, 2009 at 09:59:05AM +, Andrew Gabriel wrote:
It can be done by careful use of fdisk (with some risk of blowing away
the data if you get it wrong), but I've seen other email threads here
that indicate ZFS then won't mount the pool, because the two labels at
the end of the
On Mon, Oct 26, 2009 at 10:24:16AM -0700, Brian wrote:
Why does resilvering an entire disk yield different amounts of
resilvered data each time?
I have read that ZFS only resilvers what it needs to, but in the case
of replacing an entire disk with another formatted clean disk, you
On Fri, Oct 16, 2009 at 01:42:49PM +0200, Sander Smeenk wrote:
Recently I switched on 'snapdir=visible' on one of the zfs volumes to
easily expose the available snapshots, and then I noticed rsync -removes-
snapshots even though I am not able to do so myself, even as root, with
plain /bin/rm.
On Thu, Oct 15, 2009 at 05:31:42AM -0700, Julio Pérez wrote:
I am thinking in another possibility. Format the current NTFS
partition to ZFS and then I would be able to use this space like
another disk, to store the user home for example, or other
stuff. Would it be possible?
Not easily.
On Tue, Oct 13, 2009 at 05:32:35AM -0700, Julio wrote:
Hi,
I have the following partitions on my laptop, Inspiron 6000, from fdisk:
Partition   Status   Type        Start    End   Length    %
    1                Other OS        0     11       12    0
    2                EXT LBA        12   2561     2550   26
    3
On Tue, Oct 06, 2009 at 06:53:15PM -0500, Harry Putnam wrote:
I don't get that... I was thinking of something like
set use:z3 use - mirror rhosts public_html
Probably something more like:
zfs set local:use=mirror rhosts public_html tank/pubfs
local means nothing here. Just something to put
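A user property set that way can be read back per dataset or
recursively; a hedged example reusing the names above:

    zfs set local:use=mirror tank/pubfs
    zfs get -r -s local local:use tank   # show only explicitly set values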
On Mon, Oct 05, 2009 at 02:14:24PM -0700, Mark Horstman wrote:
I have a snapshot that I'd like to destroy:
If you have a filesystem and a clone of that filesystem, a snapshot
always connects them. You can destroy the snapshot only if there are no
clones.
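If the clone must survive, the usual way out is to reverse the
dependency first; a sketch with hypothetical names:

    zfs promote tank/clone       # the shared snapshot now belongs to the clone
    zfs destroy tank/origin      # the former parent is now the dependent
    zfs destroy tank/clone@snap  # the snapshot no longer has clones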
On Wed, Aug 12, 2009 at 04:53:20AM -0700, Sascha wrote:
Confirmed, it's really an EFI label. (see below)
format label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Warning: This disk has an EFI label. Changing to SMI label will erase all
current partitions.
Yes, if you stick (say) a 1.5TB, 1TB, and .5TB drive together in a
RAIDZ, you will get only 1TB of usable space.
On Wed, Aug 12, 2009 at 05:30:14PM -0400, Adam Sherman wrote:
I believe you will get .5 TB in this example, no?
The slices used on each of the three disks will be .5TB. Multiply
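Spelling out the arithmetic: RAIDZ uses the smallest member size, here
0.5 TB, on each of the three disks, for 3 x 0.5 TB = 1.5 TB raw; one
slice's worth goes to parity, leaving 2 x 0.5 TB = 1 TB usable.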
On Tue, Aug 11, 2009 at 09:35:53AM -0700, Sascha wrote:
Then creating a zpool:
zpool create -m /zones/huhctmp huhctmppool c6t6001438002A5435A0001005Ad0
zpool list
NAME          SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
huhctmppool  59.5G   103K  59.5G   0%  ONLINE  -
On Mon, Aug 03, 2009 at 01:15:49PM -0700, Jan wrote:
Yes, I have an EFI label on that device.
This is my procedure to try growing the capacity of the device:
- export the zpool
- overwrite the existing EFI label with format tool
- auto-configure it
- import the zpool
What do you mean
On Wed, Jul 29, 2009 at 03:51:22AM -0700, Jan wrote:
Hi all,
I need to know if it is possible to expand the capacity of a zpool
without loss of data by growing the LUN (2TB) presented from an HP EVA
to a Solaris 10 host.
Yes.
I know that there is a possible way in Solaris Express Community
On Sun, Jun 07, 2009 at 10:38:29AM -0700, Leonid Zamdborg wrote:
Out of curiosity, would destroying the zpool and then importing the
destroyed pool have the effect of recognizing the size change? Or
does 'destroying' a pool simply label a pool as 'destroyed' and make
no other changes...
It
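For what it's worth, the destroy/import mechanics themselves are
straightforward (pool name hypothetical); whether the re-import picks up
a size change is a separate question:

    zpool destroy tank    # marks the pool destroyed; data stays on disk
    zpool import -D       # lists pools flagged as destroyed
    zpool import -D tank  # brings it back, if the disks are untouched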
On Mon, Jun 01, 2009 at 03:19:59PM -0700, Jonathan Loran wrote:
Kinda scary then. Better make sure we delete all the bad files before
I back it up.
That shouldn't be necessary. Clearing the error count doesn't disable
checksums. Every read is going to verify checksums on the file data
On Fri, Apr 10, 2009 at 01:18:05PM -0500, Harry Putnam wrote:
I'm looking for ways to backup data on a linux server that has been
using rsync with the script `rsnapshot'. Some of you may know how
that works... I won't explain it here other than to say only changed
data gets rsynced to the
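A hedged sketch of the ZFS-native analogue of rsnapshot's incremental
rotation, assuming a pool on each end (hostnames and dataset names
hypothetical):

    zfs snapshot tank/data@2009-04-10
    zfs send -i tank/data@2009-04-09 tank/data@2009-04-10 | \
        ssh backuphost zfs receive backup/data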
On Wed, Apr 01, 2009 at 12:41:25AM +, A Darren Dunham wrote:
On Wed, Apr 01, 2009 at 01:41:06AM +0300, Dimitar Vasilev wrote:
Hi all,
Could someone give a hint if it's possible to create rpool/tmp, mount
it as /tmp so that tmpfs has some disk-based back-end instead of
memory-based
On Wed, Apr 01, 2009 at 01:41:06AM +0300, Dimitar Vasilev wrote:
Hi all,
Could someone give a hint if it's possible to create rpool/tmp, mount
it as /tmp so that tmpfs has some disk-based back-end instead of
memory-based size-limited one.
You mean you want /tmp to be a regular ZFS filesystem
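A minimal sketch of the regular-dataset approach, assuming an rpool and
a vfstab edit (names hypothetical); losing tmpfs's memory-speed
behavior is the trade-off:

    # after removing the tmpfs /tmp line from /etc/vfstab and unmounting /tmp:
    zfs create -o mountpoint=/tmp -o setuid=off -o devices=off rpool/tmp
    chmod 1777 /tmp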
On Tue, Mar 17, 2009 at 03:51:25PM -0700, Neal Pollack wrote:
Step 3, you'll be presented with the disks to be selected as in
previous releases. So, for example, to select the boot disks on the
Thumper,
select both of them:
[x] c5t0d0
[x] c4t0d0
Why have the controller
On Wed, Mar 18, 2009 at 07:13:41PM +0100, Carsten Aulbert wrote:
Well, consider one box being installed from CD (external USB-CD) and
another one which is jumpstarted via the network. The results usually
are two different boot device names :(
Q: Is there an easy way to reset this without
On Mon, Mar 16, 2009 at 10:34:54PM +0100, Carsten Aulbert wrote:
Can ZFS make educated guesses about where the seek targets might be, or
will it read the file block by block until it reaches the target
position? In the latter case it might be quite inefficient if the file
is huge and has a large
On Mon, Mar 16, 2009 at 09:54:57PM +0100, Carsten Aulbert wrote:
o what happens when a user opens the file and does a lot of seeking
inside the file? For example our scientists use a data format where
quite compressible data is contained in stretches and the file header
contains a dictionary
On Tue, Mar 10, 2009 at 05:57:16PM -0500, Bob Friesenhahn wrote:
On Tue, 10 Mar 2009, Moore, Joe wrote:
As far as workload, any time you use RAIDZ[2], ZFS must read the
entire stripe (across all of the disks) in order to verify the
checksum for that data block. This means that a 128k read
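A worked example of that geometry: on a 5-disk RAIDZ (4 data + 1
parity), a 128k block is split into four 32k data segments, so even a
single-block random read touches all four data disks, which is roughly
why a RAIDZ vdev delivers about one disk's worth of random-read IOPS.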
On Thu, Feb 12, 2009 at 10:33:40AM -0500, Greg Mason wrote:
What I'm looking for is a faster way to do this than format -e -d disk
-f script, for all 48 disks.
Is the speed critical? I mean, do you have to pause startup while the
script runs, or does it interfere with data transfer?
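If speed does matter, the per-disk invocations can at least run
concurrently; a hedged sh sketch (disk list and script name
hypothetical):

    # disks.txt holds one name per line, e.g. c1t0d0
    while read d; do
        format -e -d "$d" -f label.cmd &   # same script against each disk
    done < disks.txt
    wait                                   # all 48 proceed in parallel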
On Wed, Jan 14, 2009 at 04:39:03PM -0600, Gary Mills wrote:
I realize that any error can occur in a storage subsystem, but most
of these have an extremely low probability. In this discussion I'm
interested in only those that do occur occasionally, and that are
not catastrophic.
What level is
On Tue, Jan 06, 2009 at 08:44:01AM -0800, Jacob Ritorto wrote:
Is this increase explicable / expected? The throughput calculator
sheet output I saw seemed to forecast better iops with the striped
raidz vdevs and I'd read that, generally, throughput is augmented by
keeping the number of vdevs
On Tue, Jan 06, 2009 at 10:22:20AM -0800, Alex Viskovatoff wrote:
I did an install of OpenSolaris in which I specified that the whole disk
should be used for the installation. Here is what format verify produces
for that disk:
Part      Tag    Flag     Cylinders        Size
On Tue, Jan 06, 2009 at 11:49:27AM -0700, cindy.swearin...@sun.com wrote:
My wish for this year is to boot from EFI-labeled disks so examining
disk labels is mostly unnecessary because ZFS pool components could be
constructed as whole disks, and the unpleasant disk
format/label/partitioning
On Tue, Jan 06, 2009 at 01:24:17PM -0800, Alex Viskovatoff wrote:
a...@diotiima:~# installgrub -m /boot/grub/stage1 /boot/grub/stage2
/dev/rdsk/c4t0d0s0
Updating master boot sector destroys existing boot managers (if any).
continue (y/n)?y
stage1 written to partition 0 sector 0 (abs 16065)
On Tue, Jan 06, 2009 at 04:10:10PM -0500, JZ wrote:
Hello Darren,
This one, ok, was a valid thought/question --
Darn, I was hoping...
On Solaris, root pools cannot have EFI labels (the boot firmware doesn't
support booting from them).
http://blog.yucas.info/2008/11/26/zfs-boot-solaris/
On Sat, Jan 03, 2009 at 09:58:37PM -0500, JZ wrote:
Under what situations would you expect any differences between the ZFS
checksums and the NetApp checksums to appear?
I have no evidence, but I suspect the only difference (modulo any bugs)
is how the software handles checksum failures.
As
On Wed, Dec 31, 2008 at 01:53:03PM -0500, Miles Nordin wrote:
The thing I don't like about the checksums is that they trigger for
things other than bad disks, like if your machine loses power during a
resilver, or other corner cases and bugs. I think the NetApp
block-level RAID-layer
On Thu, Dec 18, 2008 at 10:24:26AM +0200, Johan Hartzenberg wrote:
Similarly, adding a device into a raid-Z vdev seems easy to do: All future
writes include that device in the list of devices from which to allocate
blocks.
In general, I agree completely. But in practice there are limitations
On Wed, Dec 17, 2008 at 01:57:37PM -0600, Tim wrote:
On Wed, Dec 17, 2008 at 10:23 AM, cindy.swearin...@sun.com wrote:
Hi Alex,
Sorry, I missed the 1.5 TB disk/boot issue previously.
A project is underway to provide booting for disks that are larger
than 1 TB. This project is outside
On Tue, Dec 16, 2008 at 12:07:52PM +, Ross Smith wrote:
It sounds to me like there are several potentially valid filesystem
uberblocks available, am I understanding this right?
1. There are four copies of the current uberblock. Any one of these
should be enough to load your pool with no
On Wed, Nov 26, 2008 at 04:30:59PM +0100, C. Bergström wrote:
Ok, here's a trick question. To the best of my understanding, zfs
turns off write caching if it doesn't own the whole disk. So what if s0
*is* the whole disk? Is write cache supposed to be turned on or off?
Actually, ZFS
On Tue, Nov 04, 2008 at 05:52:33AM -0800, Ivan Wang wrote:
$ /usr/bin/amd64/ls -l .gtk-bookmarks
-rw-r--r--   1 user   opc    0 oct. 16  2057 .gtk-bookmarks
This is a bit absurd. I thought Solaris was fully 64
bit. I hope those tools will be integrated soon.
Solaris runs on
On Sat, Oct 11, 2008 at 03:19:49AM +0300, Marcus Sundman wrote:
I've used format's volname command to give labels to my drives
according to their physical location. I did quite a lot of work
labeling all my drives (I couldn't figure out which controller got
which numbers so I had to disconnect
On Tue, Sep 30, 2008 at 03:19:40PM -0700, Erik Trimble wrote:
To make Will's argument more succinct (wink), with a NetApp,
undetectable (by the NetApp) errors can be introduced at the HBA and
transport layer (FC switch, slightly damaged cable) levels. ZFS will
detect such errors, and fix
On Tue, Sep 23, 2008 at 08:56:39AM +0200, Nils Goroll wrote:
That case appears to be about trying to get a raidz sized properly
against disks of different sizes. I don't see a similar issue for
someone preferring a concat over a stripe.
I don't quite understand your comment.
The question I
On Mon, Sep 22, 2008 at 01:03:13PM +0200, Nils Goroll wrote:
See
http://www.opensolaris.org/jive/thread.jspa?messageID=271983#271983
The case mentioned there is one where concatenation in zdevs would be
useful.
That case appears to be about trying to get a raidz sized properly
against
On Fri, Sep 19, 2008 at 10:31:07AM -0400, Michael Dvinyaninov wrote:
Hello,
I am sure that this question was answered already but I could not find
an answer.
Is it possible to force a ZFS pool to use concatenation rather than
striping, or can that not be specified?
No, it can't.
How would having
On Thu, Sep 18, 2008 at 01:26:09PM +0200, Nils Goroll wrote:
Thank you very much for correcting my long-time misconception.
On the other hand, isn't there room for improvement here? If it was
possible to break large writes into smaller blocks with individual
checksums (for instance those which
On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote:
The issue with any form of RAID 1 is that the instant a disk fails
out of the RAID set, with the next write I/O to the remaining members
of the RAID set, the failed disk (and its replica) are instantly out
of sync.
Does raidz
On Thu, Sep 11, 2008 at 04:28:03PM -0400, Jim Dunham wrote:
On Sep 11, 2008, at 11:19 AM, A Darren Dunham wrote:
On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote:
The issue with any form of RAID 1 is that the instant a disk fails
out of the RAID set, with the next write I/O
On Fri, Sep 05, 2008 at 03:17:44PM -0400, Paul Raines wrote:
[EMAIL PROTECTED] # ls -l
./README: Value too large for defined data type
total 36
-rw-r- 1 mreuter mreuter 1019 Sep 25 2006 Makefile
-rw-r- 1 mreuter mreuter 3185 Feb 22 2000 lcompgre.cc
-rw-r- 1
On Fri, Aug 22, 2008 at 10:54:00AM -0700, Gordon Ross wrote:
I noted this PSARC thread with interest:
Re: zpool autoexpand property [PSARC/2008/353 Self Review]
because it so happens that during a recent disk upgrade
on a laptop, I migrated a zpool off of one partition
onto a slightly
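For reference, that case ended up as a pool property plus an explicit
per-device expansion; hedged examples (names hypothetical):

    zpool set autoexpand=on tank    # grow automatically when LUNs grow
    zpool online -e tank c1t2d0     # or expand one device by hand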
On Thu, Aug 07, 2008 at 11:34:12AM -0700, Richard Elling wrote:
Anton B. Rang wrote:
First, there are two types of utilities which might be useful in the
situation where a ZFS pool has become corrupted. The first is a file system
checking utility (call it zfsck); the second is a data
On Thu, Jun 12, 2008 at 07:29:08AM -0700, Rich Teer wrote:
Hi all,
Booting from a two-way mirrored metadevice created using SVM
can be a bit risky, especially when one of the drives fails
(not being able to form a quorum, the kernel will panic).
SVM doesn't panic in that situation. At boot
On Thu, Jun 12, 2008 at 07:28:23AM -0400, Brian Hechinger wrote:
I think something else that might help is if ZFS were to boot, see that
the volume it booted from is older than the other one, print a message
to that effect and either halt the machine or issue a reboot pointing
at the other
On Tue, Jun 10, 2008 at 05:32:21PM -0400, Torrey McMahon wrote:
However, some apps will probably be very unhappy if i/o takes 60 seconds
to complete.
It's certainly not uncommon for that to occur in an NFS environment.
All of our applications seem to hang on just fine for minor planned and
On Thu, Jun 05, 2008 at 11:13:01AM -0400, Luke Scharf wrote:
So, can I build a working system without s2?
Build? I'm not so sure. The first label is going to have s2 by
default. You'd have to remove it later. I doubt there's language in
the jumpstart scripts to remove it then.
But yes,
On Wed, Jun 04, 2008 at 06:28:58PM -0400, Luke Scharf wrote:
2. The number s2 is arbitrary. If it were s0, then there would at
least be the beginning of the list. If it were s3, it would be at
the end of a 2-bit list, which could be explained historically.
If it were
On Tue, Jun 03, 2008 at 05:56:44PM -0700, Richard L. Hamilton wrote:
How about SPARC - can it do zfs install+root yet, or if not, when?
Just got a couple of nice 1TB SAS drives, and I think I'd prefer to
have a mirrored pool where zfs owns the entire drives, if possible.
(I'd also eventually
On Wed, May 21, 2008 at 02:43:26PM -0400, Will Murnane wrote:
So, my questions are:
* Are there options I can set server- or client-side to make Solaris
child mounts happen automatically (i.e., match the Linux behavior)?
I think these are known as mirror-mounts in Solaris. They first
On Fri, May 16, 2008 at 07:29:31PM -0700, Paul B. Henson wrote:
For ZFS root, is it required to have a partition and slices? Or can I just
give it the whole disk and have it write an EFI label on it?
Last I heard, no support yet for EFI boot. I'm not sure if that's
something that's being
On Tue, May 13, 2008 at 10:02:01AM -0700, Marc Glisse wrote:
Can't you turn the snapshot into a clone (kind of an editable
snapshot)? Or does the existence of a clone created from this snapshot
prevent from removing the snapshot afterwards?
You can create a clone from the snapshot, but it does
On Tue, May 13, 2008 at 04:33:29PM +0200, Simon Breden wrote:
If multiple snapshots reference (own?) the same file, what's the quickest
way to zap that file from all snapshots?
There is no way.
If you could do that, then they wouldn't really be snapshots.
I'm not saying that the ability
the virtual memory system.
Why a misnomer? "Swap" and "virtual memory" are used as identical
terms in many places in Solaris.
But since /tmp was mentioned, perhaps you're referring to tmpfs instead
of swapfs?
/fixes applied, but I don't know of any
significant feature changes in U5. There are a lot that have been
targeting U6 for quite a while.
appreciably. I'm assuming you'll need to test
with your load to see how it works.
in the
live filesystem.
Because you are dealing with snapshots and quotas, you may also be
interested in the separation of snapshots from the quota considerations
of the parent filesystem:
http://opensolaris.org/os/community/arc/caselog/2007/555/
or particular data to that vdev, so the
added redundancy can't be concentrated anywhere.
When logging was first added to UFS, it had the same issue. But I
believe statvfs was modified to take future removes (logged) into
account. Can ZFS do the same?
on these 500GB disks?
zfs list:
NAME   USED  AVAIL  REFER  MOUNTPOINT
pile   269K  2.67T  40.4K  /pile
What do you have for zpool list?
3.00 TB ≈ 2.73 TiB
I notice that files within a snapshot show a different deviceID to stat
than the parent file does. But this is not true when mounted via NFS.
Is this a limitation of the NFS client, or just what the ZFS fileserver
is doing?
Will this change in the future? With NFS4 mirror mounts?
be obnoxious, I didn't see if an RFE was filed or any status of
such.
don't see how to force an ordering
of pool imports.
This line left
they way they are.
--
Darren Dunham [EMAIL PROTECTED]
Senior Technical Consultant TAOShttp://www.taos.com/
Got some Dr Pepper? San Francisco, CA bay area
This line left intentionally blank to confuse
available. It would be handy to boot from flash, import a
pool, then make that the running root.
Does anyone know if that's a target of any OpenSolaris projects?
On Fri, Oct 05, 2007 at 02:01:29PM -0500, Nicolas Williams wrote:
On Fri, Oct 05, 2007 at 06:54:21PM +, A Darren Dunham wrote:
I wonder how much this would change if a functional pivot-root
mechanism were available. It would be handy to boot from flash, import a
pool, then make
corrupted, it shouldn't cause a
panic on import. It seems reasonable to detect this and fail the import
instead.
of VxVM disks is valuable and powerful because the names are
stored in the diskgroup and are visible to any host that imports it.
(at whatever
level of the stack makes sense), then that will go a long way toward
good solutions. In the meantime, adding a name seems easier and
possibly helpful.