From: zfs-discuss-boun...@opensolaris.org On Behalf Of Joe Auty
I'm also noticing that I'm a little short on RAM. I have 6 320 gig
drives and 4 gig of RAM. If the formula is POOL_SIZE/250, this would
mean that I need at least 6.4 gig of RAM.
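As a sanity check of the POOL_SIZE/250 rule of thumb (both sides in the same units), the 6.4 GB figure works out if POOL_SIZE means usable capacity rather than raw, e.g. a 6-disk raidz with 5 data drives' worth of space; a rough sketch:

```shell
# POOL_SIZE / 250 rule of thumb, taking POOL_SIZE as usable capacity:
# a 6-disk raidz of 320 GB drives has roughly 5 data drives' worth.
pool_gb=$((5 * 320))                 # 1600 GB usable
ram_tenths=$((pool_gb * 10 / 250))   # tenths of a GB, exact integer math
echo "pool: ${pool_gb} GB -> suggested RAM: $((ram_tenths / 10)).$((ram_tenths % 10)) GB"
```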
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Garrett D'Amore
On Mon, 2010-06-07 at 11:49 -0700, Richard Jahnel wrote:
Do you lose the data if you lose that 9V feed at the same time the computer loses power?
Yes. Hence the need
From: zfs-discuss-boun...@opensolaris.org On Behalf Of sensille
I'm quite sure scrub does not check spares or unused areas of the disks (it could check whether the disk detects any errors there).
But what about the parity? Obviously it has to be
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Ross Walker
To get around this, create a basic NTFS partition on the new third drive, copy the data to that drive, and blow away the dynamic mirror.
Better yet, build the OpenSolaris machine
From: Cindy Swearingen [mailto:cindy.swearin...@oracle.com]
A temporary clone is created for an incremental receive and
in some cases, is not removed automatically.
1. Determine clone names:
# zdb -d poolname | grep %
2. Destroy identified clones:
# zfs destroy
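Cindy's two steps can be sketched end to end. The pool and clone names below are hypothetical and the zdb output is simulated; the point is that the leftover temporary clones show up as dataset names containing a '%':

```shell
# Step 1 (simulated): 'zdb -d poolname | grep %' lists leftover clones;
# their dataset names contain a '%' character, e.g. pool/fs/%recv-NNN.
zdb_output='Dataset poolname/fs/%recv-12345 [ZPL], ID 321, cr_txg 9'
clone=$(printf '%s\n' "$zdb_output" | grep -o '[^ ]*%[^ ,]*')
echo "found leftover clone: $clone"
# Step 2: destroy it (commented out -- needs a real pool):
# zfs destroy "$clone"
```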
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Frank Contrepois
r...@fsbu ~# zfs destroy fsbu01-zp/mailbo...@zfs-auto-snap:daily-2010-04-14-00:00:00
cannot destroy 'fsbu01-zp/mailbo...@zfs-auto-snap:daily-2010-04-14-00:00:00': dataset already exists
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Erik Trimble
Step 1: Break the mirror of A B inside Windows 7
Step 2: Purchase the new C hard drive, and install it in the case.
Step 3: Boot to OpenSolaris
Step 4: Make sure
This is the problem:
[r...@nasbackup backup-scripts]# zfs destroy storagepool/nas-lyricp...@nasbackup-2010-05-14-15-56-30
cannot destroy 'storagepool/nas-lyricp...@nasbackup-2010-05-14-15-56-30': dataset already exists
This is apparently a common problem. It's happened to me twice already,
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Cassandra Pugh
I was wondering if there is a special option to share out a set of nested directories? Currently if I share out a directory with /pool/mydir1/mydir2 on a system,
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Gregory J. Benscoter
After looking through the archives I haven't been able to assess the reliability of a backup procedure which employs zfs send and recv.
If there's data corruption in the
From: zfs-discuss-boun...@opensolaris.org On Behalf Of sensille
The basic idea: the main problem when using a HDD as a ZIL device
are the cache flushes in combination with the linear write pattern
of the ZIL. This leads to a whole rotation of
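To put a number on that cost (assuming, hypothetically, a 7200 RPM disk): if every cache flush forces the next linear ZIL write to wait for the platter to come back around, one full rotation bounds the synchronous commit rate:

```shell
# One full rotation on a 7200 RPM disk, and the commit rate it implies:
rpm=7200
rot_us=$((60 * 1000000 / rpm))      # one rotation, in microseconds (~8.3 ms)
max_commits=$((1000000 / rot_us))   # upper bound on flush-bound commits/sec
echo "rotation: ${rot_us} us -> at most ${max_commits} sync commits/sec"
```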
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Nicolas Williams
I recently got a new SSD (OCZ Vertex LE 50GB).
It seems to work really well as a ZIL, performance-wise.
I know it doesn't have a supercap, so let's say data loss
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Kyle McDonald
I've been thinking lately that I'm not sure I like the root pool being
unprotected, but I can't afford to give up another drive bay.
I'm guessing you won't be able to use the
From: Thomas Burgess [mailto:wonsl...@gmail.com]
Just data loss.
WRONG!
I didn't ask about losing my ZIL.
I asked about power loss taking out my pool.
As I recall:
I recently got a new SSD (OCZ Vertex LE 50GB).
It seems to work really well as a ZIL, performance-wise. My question is,
From: Thomas Burgess [mailto:wonsl...@gmail.com]
I might be somewhat confused as to how the ZIL works, but I thought the point of the ZIL was to pretend a write actually happened when it may not have actually been flushed to disk yet...
No. How the ZIL works is like this:
Whenever a process
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Brent Jones
Problem with mbuffer: if you do scripted send/receives, you'd have to pre-start an mbuffer session on the receiving end somehow.
SSH is always running on the receiving end, so no
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Thomas Burgess
but I get an error:
cannot receive incremental stream: destination tank/nas/dump has been modified since most recent snapshot
Whenever you send a snap, and you intend to
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Philippe
c7t2d0s0/o FAULTED 0 0 0 corrupted data
When I've done the zpool replace, I had to add -f to force, because ZFS told me there was a ZFS label on the
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Andrew Gabriel
If you are reading blocks from your initial hdd images (golden images) frequently enough, and you have enough memory on your system, these blocks will end up in the ARC
From: zfs-discuss-boun...@opensolaris.org
Suppose I create a file in a file system, then snapshot the file system, then delete the file.
Is it guaranteed that while the snapshot exists, no new file will be created with the same inode number as the deleted file?
I
From: zfs-discuss-boun...@opensolaris.org On Behalf Of John Hoogerdijk
I'm building a campus cluster with identical storage in two locations
with ZFS mirrors spanning both storage frames. Data will be mirrored
using zfs. I'm looking for the
How full is your filesystem? Give us the output of zfs list.
You might be having a hardware problem, or maybe it's extremely full.
Also, if you have dedup enabled on a 3TB filesystem, you surely want more RAM. I don't know if there's any rule of thumb you could follow, but offhand I'd say 16G
The purpose of zhist is to simplify access to past snapshots. For example, if you run zhist ls somefile, the result will be a list of all the previous snapshot versions of that file or directory. No need to find the right .zfs directory, or check to see which ones have changed. Some
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Geoff Nordli
I was messing around with a ramdisk on a pool and I forgot to remove it before I shut down the server. Now I am not able to mount the pool. I am not concerned with the data
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Hillel Lubman
Oracle (together with Red Hat) contributes a large part of BTRFS development. Given that ZFS and BTRFS share many similar goals, wouldn't it be reasonable for Oracle to
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Edward Ned Harvey
I'm no expert in this subject, but I do feel confident in my knowledge that the license conflict is just one of many obstacles preventing ZFS from working nicely in Linux
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
On Sat, 8 May 2010, Edward Ned Harvey wrote:
A vast majority of the time, the opposite is true. Most of the time, having swap available increases performance, because the kernel is able to choose:
Should I swap out
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Karl Dalen
If I want to reduce the I/O accesses, for example to SSD media on a laptop, and I don't plan to run any big applications, is it safe to delete the swap file?
How do I configure
From: Matt Keenan [mailto:matt...@opensolaris.org]
After some playing around I've noticed some kinks particularly around
booting.
I'm going to continue encouraging you to stay mainstream, because what people do the most is usually what's supported the best. I think you'll have a more
From: Pasi Kärkkäinen [mailto:pa...@iki.fi]
In neither case do you have data or filesystem corruption.
ZFS probably is still OK, since it's designed to handle this (?), but the data can't be OK if you lose 30 secs of writes... 30 secs of writes that have been ack'd as being done to the
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Ragnar Sundblad
But if you have an application, protocol and/or user that demands or expects persistent storage, disabling the ZIL could of course be fatal in case of a crash. Examples are mail
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Matt Keenan
Just wondering whether mirroring a USB drive with main laptop disk for
backup purposes is recommended or not.
Plan would be to connect the USB drive, once or twice a week, let
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Edward Ned Harvey
Thanks to Victor, here is at least proof of concept that yes, it is possible to reverse resolve, inode number -> pathname, and yes, it is almost infinitely faster than doing
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Bob Friesenhahn
From a zfs standpoint, Solaris 10 does not seem to be behind the currently supported OpenSolaris release.
I'm sorry, I'll have to disagree with you there. In Solaris 10,
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Ray Van Dolson
Well, being able to remove ZIL devices is one important feature
missing. Hopefully in U9. :)
I did have a support rep confirm for me that both the log device removal,
and the
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Michael Sullivan
I have a question I cannot seem to find an answer to.
Google for ZFS Best Practices Guide (on solarisinternals). I know this
answer is there.
I know if I set up ZIL on
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Robert Milkowski
If you disable the ZIL and compare the performance to when it is on, it will give you an estimate of the absolute maximum performance increase (if any) from having a
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Steven Stallion
I had a question regarding how the ZIL interacts with zpool import:
Given that the intent log is replayed in the event of a system failure,
does the replay behavior differ
From: Michael Sullivan [mailto:michael.p.sulli...@mac.com]
My Google is very strong and I have the Best Practices Guide committed
to bookmark as well as most of it to memory.
While it explains how to implement these, there is no information
regarding failure of a device in a striped L2ARC
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Michelle Knight
I seem to have a problem changing the owner of a symlinked directory.
As root...
mkdir a
chown admin:audiogroup a
ln -s a b
Directory b shows up owned by root, but I
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Kyle McDonald
If you're only sharing them to Linux machines, then NFS would be so much easier to use. You'll still want relative links though.
Only if you have infrastructure to sanitize
From: Cindy Swearingen [mailto:cindy.swearin...@oracle.com]
Sent: Monday, May 03, 2010 12:58 PM
Hi Ned,
Yes, I agree that it is a good idea not to update your root pool
version before restoring your existing root pool snapshots.
If you are using a later Solaris OS to recover your pool
From: Richard Elling [mailto:richard.ell...@gmail.com]
Once you register your original Solaris 10 OS for updates, are you unable to get updates on the removable OS?
This is not a problem on Solaris 10. It can affect OpenSolaris, though.
That's precisely the opposite of what I thought.
From: Peter Jeremy [mailto:peter.jer...@alcatel-lucent.com]
Therefore, it should be very easy to implement a proof of concept, by writing a setuid root C program, similar to sudo, which could then become root, identify the absolute path of a directory by its inode number, and then print
From: jason.brian.k...@gmail.com On Behalf Of Jason King
If you're just wanting to do something like the netapp .snapshot
(where it's in every directory), I'd be curious if the CIFS shadow
copy support might already have done a lot of the heavy lifting
From: cas...@holland.sun.com On Behalf Of casper@sun.com
It is certainly possible to create a .zfs/snapshot_byinode, but it is not clear when it helps; it can be used for finding the earlier copy of a directory (netapp/.snapshot).
Do you happen to
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Steve Staples
My problem is that not all 2TB hard drives are the same size (even though they should be 2 trillion bytes, there is still sometimes a +/-; I've only noticed this twice so
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Roy Sigurd Karlsbakk
bash-4.0# mkfile -n 2 d0
bash-4.0# zpool create pool $PWD/d0
bash-4.0# mkfile -n 1992869543936 d1
bash-4.0# zpool attach pool $PWD/d0 $PWD/d1
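For scale, the gap between the mkfile size above and a nominal 2 trillion bytes is a few gigabytes; whether attach accepts the smaller device depends on the release's tolerance, but the arithmetic is:

```shell
# Difference between a nominal "2TB" (2 trillion bytes) and the
# 1992869543936-byte file from the example above:
nominal=2000000000000
actual=1992869543936
short=$((nominal - actual))
echo "shortfall: ${short} bytes (~$((short / 1024 / 1024)) MiB)"
```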
As long as
From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net]
Sent: Sunday, May 02, 2010 11:55 AM
I am currently using OpenSolaris 2009.06
If I was to upgrade to the current developer version, forgive my ignorance (since I am new to *solaris), but how would I do this?
# pkg set-publisher
From: Steve Staples [mailto:thestapler...@gmail.com]
I am currently using OpenSolaris 2009.06
If I was to upgrade to the current developer version, forgive my ignorance (since I am new to *solaris), but how would I do this?
If you go to genunix.org (using the URL in my previous email) you
Forget about files for the moment, because directories are fundamentally
easier to deal with.
Let's suppose I've got the inode number of some directory in the present
filesystem.
[r...@filer ~]# ls -id /share/projects/foo/goo/rev1.0/working
14363 /share/projects/foo/goo/rev1.0/working/
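That reverse lookup can be reproduced with stock tools on any filesystem; the sketch below uses a throwaway temp directory, where in the thread you would point find at /tank/.zfs/snapshot instead:

```shell
# Inode -> pathname: take a directory's inode number with ls -id,
# then search for it with find -inum.
dir=$(mktemp -d)
mkdir -p "$dir/projects/foo/working"
inum=$(ls -id "$dir/projects/foo/working" | awk '{print $1}')
found=$(find "$dir" -inum "$inum")
echo "inode $inum -> $found"
rm -rf "$dir"
```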
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Mattias Pantzare
The nfs server can find the file but not the file _name_.
The inode is all that the NFS server needs; it does not need the file name if it has the inode number.
It is not
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
Sent: Saturday, May 01, 2010 7:07 PM
On Sat, 1 May 2010, Peter Tribble wrote:
With the new Oracle policies, it seems unlikely that you will be able to reinstall the OS and achieve what you had before.
And what policies
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Euan Thoms
My ideal solution would be to have the data accessible from the backup media (external HDD) as well as usable for a full system restore. Below is what I would consider ideal:
1.)
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Euan Thoms
pfexec zfs send rp...@first | pfexec zfs receive -u backup-pool/rpool
pfexec zfs send rpool/r...@first | pfexec zfs receive -u backup-pool/rpool/ROOT
pfexec zfs send
From: Peter Jeremy [mailto:peter.jer...@alcatel-lucent.com]
I gather you are suggesting that the inode be extended to contain a
list of the inode numbers of all directories that contain a filename
referring to that inode.
Correct.
[inodes] can have up to 32767 links [to them]. Where
From: Peter Jeremy [mailto:peter.jer...@alcatel-lucent.com]
Whilst it's trivially easy to get from the file to the list of
directories containing that file, actually getting from one directory
to its parent is less so: A directory containing N sub-directories has
N+2 links. Whilst the '.'
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
Sent: Friday, April 30, 2010 1:40 PM
With the new Oracle policies, it seems unlikely that you will be able
to reinstall the OS and achieve what you had before. An exact
recovery method (dd of partition images or recreate pool with
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Euan Thoms
I'm looking for a way to backup my entire system, the rpool zfs pool to
an external HDD so that it can be recovered in full if the internal HDD
fails. Previously with Solaris 10
I finally got it, I think. Somebody (with deep and intimate knowledge of ZFS development) please tell me if I've been hitting the crack pipe too hard. But...
Part 1 of this email:
Netapp snapshot security flaw. Inherent in their implementation of
.snapshot directories.
Part 2 of this
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Edward Ned Harvey
Each inode contains a link count. It seems trivially simple to me that, along with the link count in each inode, the filesystem could also store a list of which inodes link
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Cindy Swearingen
For full root pool recovery see the ZFS Administration Guide, here:
http://docs.sun.com/app/docs/doc/819-5461/ghzvz?l=en&a=view
Recovering the ZFS Root Pool or Root Pool
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Edward Ned Harvey
Look up the inode number of README (for example, ls -i README); suppose it's inode 12345.
find /tank/.zfs/snapshot -inum 12345
Problem is, the find
From: Ragnar Sundblad [mailto:ra...@csc.kth.se]
Sent: Wednesday, April 28, 2010 3:49 PM
What indicators do you have that ONTAP/WAFL has inode-name lookup
functionality?
I don't have any such indicator, and if that's the way my words came out,
sorry for that. Allow me to clarify:
In
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Edward Ned Harvey
If something like this already exists, please let me know. Otherwise, I plan to:
Create a zfshistory command, written in Python (open source, public, free).
So, I
Let's suppose you rename a file or directory.
/tank/widgets/a/rel2049_773.13-4/somefile.txt
Becomes
/tank/widgets/b/foogoo_release_1.9/README
Let's suppose you are now working on widget B, and you want to look at the
past zfs snapshot of README, but you don't remember where it came from.
From: Richard Elling [mailto:richard.ell...@gmail.com]
Sent: Sunday, April 25, 2010 2:12 PM
E did exist. Inode 12345 existed, but it had a different name at the time
OK, I'll believe you.
How about this?
mv a/E/c a/c
mv a/E a/c
mv a/c a/E
The thing that's
From: Ian Collins [mailto:i...@ianshome.com]
Sent: Sunday, April 25, 2010 5:09 PM
To: Edward Ned Harvey
Cc: 'Robert Milkowski'; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS Pool, what happen when disk failure
On 04/26/10 12:08 AM, Edward Ned Harvey wrote:
[why do you snip
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Dave Pooser
(lots of small writes/reads), how much benefit will I see from the SAS
interface?
In some cases, SAS outperforms SATA. I don't know what circumstances those are.
I think the
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Travis Tabbal
I have a few old drives here that I thought might help me a little, though not as much as a nice SSD, for those uses. I'd like to speed up NFS writes, and there have been some
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Travis Tabbal
Oh, one more thing. Your subject says ZIL/L2ARC and your message says you want to speed up NFS writes.
ZIL (log) is used for writes.
L2ARC (cache) is used for reads.
I'd recommend
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Roy Sigurd Karlsbakk
About SAS vs SATA, I'd guess you won't be able to see any change at all. The bottleneck is the drives, not the interface to them.
That doesn't agree with my
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Roy Sigurd Karlsbakk
With 2TB drives priced at €150 or lower, I somehow think paying for drive lifetime is far more expensive than getting a few more drives and adding redundancy
If you have a
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Robert Milkowski
On 24/04/2010 13:51, Edward Ned Harvey wrote:
But what you might not know: If any pool fails, the system will crash.
This actually depends on the failmode property
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Peter Tripp
here, I'll swap it in for the sparse file and let it resilver.
Can someone with a stronger understanding of ZFS tell me why a degraded RaidZ2 (minus one disk) is less efficient
From: Richard Elling [mailto:richard.ell...@gmail.com]
Sent: Saturday, April 24, 2010 10:43 AM
Nope. That discussion seems to be concluded now. And the netapp does not have the problem that was suspected.
I do not recall reaching that conclusion. I think the definition of the
From: Ragnar Sundblad [mailto:ra...@csc.kth.se]
Sent: Saturday, April 24, 2010 5:18 PM
To answer the question you linked to:
.snapshot/snapname.0/a/b/c/d.txt from the top of the filesystem
a/.snapshot/snapname.0/b/c/d.txt
a/e/.snapshot/snapname.0/c/d.txt
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Freddie Cash
From the sounds of it, the .snapshot directory is just a pointer to the
corresponding directory in the actual snapshot tree. The snapshots are
not actually saved per-directory.
From: Richard Elling [mailto:richard.ell...@gmail.com]
Sent: Saturday, April 24, 2010 7:42 PM
Next,
mv /a/e /a/E
ls -l a/e/.snapshot/snaptime
ENOENT?
ls -l a/E/.snapshot/snapname/d.txt
this should be ENOENT because d.txt did not exist in a/E at snaptime.
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Edward Ned Harvey
Actually, I find this very surprising
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Haudy Kazemi
Your remaining space can be configured as slices. These slices can be
added directly to a second pool without any redundancy. If any drive
fails, that whole non-redundant pool
From: zfs-discuss-boun...@opensolaris.org On Behalf Of aneip
I'm really new to zfs and also raid.
I have 3 hard disks: 500GB, 1TB, 1.5TB.
On each HD I wanna create a 150GB partition + remaining space.
I wanna create raidz for 3x150GB
From: zfs-discuss-boun...@opensolaris.org On Behalf Of thomas
Someone on this list threw out the idea a year or so ago to just set up 2 ramdisk servers, export a ramdisk from each, and create a mirror slog from them.
Isn't the whole point of a
From: Richard Elling [mailto:richard.ell...@gmail.com]
One last try. If you change the real directory structure, how are those changes reflected in the snapshot directory structure?
Consider:
echo whee > /a/b/c/d.txt
[snapshot]
mv /a/b /a/B
What does
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Edward Ned Harvey
Actually, I find this very surprising:
Question posted:
http://lopsa.org/pipermail/tech/2010-April/004356.html
As the thread unfolds, it appears, although netapp may
From: Richard Elling [mailto:richard.ell...@gmail.com]
Repeating my previous question in another way...
So how do they handle mv home/joeuser home/moeuser?
Does that mv delete all snapshots below home/joeuser?
To make this work in ZFS, does this require that the mv(1) command only work
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Nicolas Williams
And you can even create, rename and destroy
From: Mark Shellenbaum [mailto:mark.shellenb...@oracle.com]
You can create/destroy/rename snapshots via mkdir, rmdir, and mv inside the .zfs/snapshot directory; however, it will only work if you're running the command locally. It will not work from an NFS client.
It will work over NFS
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Tim Haley
You can see it with ls:
# ls -ld -% all /net/server/export/ws/timh/nvc
drwxr-xr-x 9 timh staff 13 Apr 21 01:25
/net/server/export/ws/timh/nvc/
timestamp:
From: Brandon High [mailto:bh...@freaks.com]
On Fri, Apr 16, 2010 at 10:54 AM, Edward Ned Harvey
solar...@nedharvey.com wrote:
there's a file or something you want to roll back, it's presently difficult to know how far back up the tree you need to go, to find the correct .zfs
From: Richard Elling [mailto:richard.ell...@gmail.com]
What happens when you remove the directory?
Same thing that happens when you remove the .zfs directory. You can't.
Are you sure I cannot rmdir on a NetApp? That seems like basic
functionality to me.
Or are you thinking rmdir
From: Nicolas Williams [mailto:nicolas.willi...@oracle.com]
POSIX doesn't allow us to have special dot files/directories outside
filesystem root directories.
So? Tell it to Netapp. They don't seem to have any problem with it.
From: Richard Elling [mailto:richard.ell...@gmail.com]
So you are saying that the OnTap .snapshot directory is equivalent to a symlink to $FSROOT/.zfs/snapshot? That would solve the directory shuffle problem.
Not quite.
In Ontap, all you do is go into .snapshot, and select which snap
If you did the symlink .snapshot -> $FSROOT/.zfs/snapshot, and somehow made that magically appear in every directory all the time, you would have this:
/share/home/joeuser/foo/.snapshot/bestsnapever/home/joeuser/foo/bar
/share/home/joeuser/.snapshot/bestsnapever/home/joeuser/foo/bar
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
On Mon, 19 Apr 2010, Edward Ned Harvey wrote:
Just be aware that if *any* of your devices fail, all is lost. (Because you've said it's configured as a nonredundant stripe.)
The good news is that it is easy to convert any
From: cas...@holland.sun.com On Behalf Of casper@sun.com
On Mon, 19 Apr 2010, Edward Ned Harvey wrote:
Improbability assessment aside, suppose you use something like the DDRDrive X1 ... Which might be more like 4G instead of 32G ... Is it even
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
Sent: Sunday, April 18, 2010 11:34 PM
To: Edward Ned Harvey
Cc: Christopher George; zfs-discuss@opensolaris.org
Subject: RE: [zfs-discuss] SSD best practices
On Sun, 18 Apr 2010, Edward Ned Harvey wrote:
This seems
From: Kyle McDonald [mailto:kmcdon...@egenera.com]
I think I saw an ARC case go by recently for a new 'zfs diff' command. I think it allows you to compare 2 snapshots, or maybe the live filesystem and a snapshot, and see what's changed.
It sounds really useful; hopefully it will integrate soon.
From: zfs-discuss-boun...@opensolaris.org On Behalf Of Don
Continuing on the best practices theme: how big should the ZIL slog disk be?
The ZFS Evil Tuning Guide suggests enough space for 10 seconds of my synchronous write load, even assuming
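The 10-second guideline turns into a concrete size once you estimate the synchronous write rate; the 100 MB/s below is purely an assumed figure:

```shell
# Evil Tuning Guide rule: slog >= 10 seconds of synchronous write load.
mb_per_sec=100                       # assumed sync write throughput
seconds=10
slog_mb=$((mb_per_sec * seconds))
echo "suggested minimum slog size: ${slog_mb} MB"
```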