Hi,
We are currently using NetApp file clone option to clone multiple VMs on our FS.
ZFS dedup feature is great storage-space-wise, but when we need to clone a lot
of VMs it just takes a lot of time.
Is there a way (or a planned way) to clone a file without going through the
process of actually
On 21/04/2010 07:41, Schachar Levin wrote:
but it suggests that it had nothing to do with a double slash - rather
some process (your shell?) had an open file within the mountpoint. But by
supplying -f you forced zfs to unmount it anyway.
--
Robert Milkowski
http://milek.blogspot.com
On 21/04/2010 06:16, Ryan John wrote:
Thanks. That
On 21/04/2010 04:43, Ben Rockwood wrote:
I'm doing a little research study on ZFS benchmarking and performance
profiling. Like most, I've had my favorite methods, but I'm
re-evaluating my choices and trying to be a bit more scientific than I
have in the past.
To that end, I'm curious if folks
Ben,
never trust a benchmark you haven't faked yourself!
There are many benchmarks out there, but the question is, how relevant are
they for your usage pattern. How important are single stream benchmarks, when
you are opening and closing 1000s of files per second or if you run a DB on
top of
On Tue, Apr 20, 2010 at 06:51:01PM +0100, Bayard Bell wrote:
These folks running the relevant business lines have already said
publicly to the OGB that Oracle's corporate management accepts the
basic premise of OpenSolaris, so why pass the time waiting to learn
how they're going to make
But it is surely taking in a whole boatload of anecdotal information
and precious little in the way of real data or online references.
alan.
Eugen Leitl wrote:
I have set up 2 servers with ZFS pools. I want to replicate continuously to the
second server. Is there a way I can do it with ZFS? Are there any examples I can
follow to build this setup? There are no good books on OpenSolaris explaining
this method.
Also, is there a script repository for zfs
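Since the question asks for examples: below is a minimal sketch of one-way replication using zfs send/receive over ssh. The dataset and host names (tank/data, server2) are placeholders, and a real setup would rotate the incremental base snapshot rather than always sending from @base.

```shell
# One-way replication sketch; tank/data and server2 are placeholders.
DATASET=tank/data
REMOTE=server2

# Initial full copy of the dataset to the second server:
zfs snapshot "$DATASET@base"
zfs send "$DATASET@base" | ssh "$REMOTE" zfs receive -F "$DATASET"

# Repeated (e.g. from cron): take a new snapshot, ship only the delta.
NOW="snap$(date +%Y%m%d%H%M)"
zfs snapshot "$DATASET@$NOW"
zfs send -i "$DATASET@base" "$DATASET@$NOW" | ssh "$REMOTE" zfs receive "$DATASET"
```

In practice you would destroy or rename the old base after each successful incremental so the next send uses the most recent common snapshot.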
Great! Dominik, Oracle needs to silence FUD immediately. Proactive initiative.
:o)
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Nicolas Williams
And you can even create, rename and destroy
On 4/21/10 6:49 AM, Edward Ned Harvey wrote:
From: matthew patton [mailto:patto...@yahoo.com]
Sent: Tuesday, April 20, 2010 12:54 PM
Geoff Nordli geo...@grokworx.com wrote:
With our particular use case we are going to do a save
state on their
virtual machines, which is going to write 100-400 MB
per VM via CIFS or
NFS, then we take a
On 21/04/2010 05:09, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Nicolas Williams
The .zfs/snapshot directory is most certainly available over NFS.
I'm not sure you've been following this thread. Nobody said
If I could mount unlimited NFS mounts on ESX it would be good for me, but
there is a 64 nfs datastore limit.
Right now if I copy a big file it can take several minutes. I already tried
having one VM per zfs file system, and snapshot clone works great, as long
as I stay below 64 NFS mounts.
Note
On 04/21/10 03:24 AM, Darren J Moffat wrote:
On Wed, April 21, 2010 09:18, Schachar Levin wrote:
NetApp has the ability to instantly clone single files, and that would also
solve our problem if it's somewhere on the ZFS road-map (unless the issues we
have above can be resolved)
Beyond things like dedupe (and compression), ZFS currently does
On 04/20/10 11:06 AM, Don wrote:
Who else, besides STEC, is making write optimized drives and what
kind of IOP performance can be expected?
Just got a distributor email about Texas Memory Systems' RamSan-630,
one of a range of huge non-volatile SAN products they make. Other
than that this
From: Mark Shellenbaum [mailto:mark.shellenb...@oracle.com]
You can create/destroy/rename snapshots via mkdir, rmdir, mv inside the
.zfs/snapshot directory; however, it will only work if you're running the
command locally. It will not work from a NFS client.
It will work over NFS
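To make the mechanism above concrete, a quick sketch — the dataset name tank/home and its mountpoint are placeholders; each directory operation maps to the equivalent zfs command shown in the comments:

```shell
# Placeholder dataset mounted at /tank/home; run on the server itself
# (or over NFS on builds where that is supported, per the thread above).
mkdir /tank/home/.zfs/snapshot/before-upgrade
#   ~ zfs snapshot tank/home@before-upgrade
mv /tank/home/.zfs/snapshot/before-upgrade \
   /tank/home/.zfs/snapshot/pre-upgrade
#   ~ zfs rename tank/home@before-upgrade tank/home@pre-upgrade
rmdir /tank/home/.zfs/snapshot/pre-upgrade
#   ~ zfs destroy tank/home@pre-upgrade
```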
Eugen,
Oracle has a number of technologies that they've acquired that have
remained dual-licensed, and that includes acquiring InnoTech, which
they carried forward despite being able to use it as nearly an
existential threat to MySQL. In the case of their acquisition of
Sleepycat, I'm
On Wed, Apr 21, 2010 at 10:45:24AM -0400, Edward Ned Harvey wrote:
On Apr 21, 2010, at 7:24 AM, Frank Middleton wrote:
On 04/21/10 08:45 AM, Edward Ned Harvey wrote:
On Apr 20, 2010, at 11:41 PM, Schachar Levin wrote:
Hi,
We are currently using NetApp file clone option to clone multiple VMs on our
FS.
In your use case, how do you isolate one VM from another?
If they are both in the same file system, then they are not isolated.
If they are not
On Apr 20, 2010, at 10:21 PM, Brandon High wrote:
On Fri, Apr 16, 2010 at 10:54 AM, Edward Ned Harvey
solar...@nedharvey.com wrote:
there's a file or something you want to rollback, it's presently difficult
to know how far back up the tree you need to go, to find the correct .zfs
On Apr 21, 2010, at 6:32 AM, c.hanover wrote:
In my earlier investigations of ZFS, I ran across a product called StorageTek
Availability Suite that sounds like it might do what you want. If I recall
correctly, it would allow you to define devices that are actually hosted on
another
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tim Haley
You can see it with ls:
# ls -ld -% all /net/server/export/ws/timh/nvc
drwxr-xr-x 9 timh staff 13 Apr 21 01:25
/net/server/export/ws/timh/nvc/
timestamp:
From: Brandon High [mailto:bh...@freaks.com]
On 21/04/2010 16:35, Edward Ned Harvey wrote:
From: Richard Elling [mailto:richard.ell...@gmail.com]
Howdy All,
I have a raidz pool that hangs the system when importing. I attempted a pfexec
zpool import -F pool1 (which has been importing for two days with no result),
but doesn't seem to get anywhere and makes the system mostly non-responsive --
existing logins continue to work, new logins
What build are you on?
zpool import hangs for me on b134.
On Wed, Apr 21, 2010 at 9:21 AM, John Balestrini j...@balestrini.netwrote:
I'm on b134 and the disks are at ZFS pool version 22, ZFS filesystem version 4.
Thanks,
John
On Apr 21, 2010, at 10:10 AM, Brad Stone wrote:
On 4/21/10 2:15 AM, Robert Milkowski wrote:
I haven't heard from you in a while! Good to see you here again :)
Sorry for stating the obvious, but at the end of the day it depends on what
your goals are.
Are you interested in micro-benchmarks and comparison to other file
systems?
I think the most
From: Richard Elling [mailto:richard.ell...@gmail.com]
What happens when you remove the directory?
Same thing that happens when you remove the .zfs directory. You can't.
Are you sure I cannot rmdir on a NetApp? That seems like basic
functionality to me.
Or are you thinking rmdir
ISTR POSIX also doesn't allow a number of features that can be turned
on with zfs (even ignoring the current issues that prevent ZFS from
being fully POSIX compliant today). I think an additional option for
the snapdir property ('directory' ?) that provides this behavior (with
suitable warnings
On Wed, Apr 21, 2010 at 01:03:39PM -0500, Jason King wrote:
Richard Elling wrote:
So you are saying that the OnTap .snapshot directory is equivalent to a symlink
to $FSROOT/.zfs/snapshot? That would solve the directory shuffle problem.
Not quite. It's equivalent(ish) to:
cd $MYDIR
mkdir .snapshot
cd .snapshot
for s in $FSROOT/.zfs/snapshot/*; do
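The loop above is cut off; one plausible completion (a guess, not the original author's code) symlinks each snapshot's copy of the current directory. $MYDIR and $FSROOT stand for the working directory and the filesystem root:

```shell
# Emulate an OnTap-style per-directory .snapshot; $MYDIR and $FSROOT
# are placeholders for the working directory and filesystem root.
cd "$MYDIR"
mkdir .snapshot
cd .snapshot
for s in "$FSROOT"/.zfs/snapshot/*; do
    # e.g. snap1 -> $FSROOT/.zfs/snapshot/snap1/<path of $MYDIR below $FSROOT>
    ln -s "$s${MYDIR#$FSROOT}" "$(basename "$s")"
done
```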
On 21 April, 2010 - Justin Lee Ewing sent me these 0,3K bytes:
So I can obviously see what zpools I have imported... but how do I see
pools that have been exported? Kind of like being able to see deported
volumes using vxdisk -o alldgs list.
'zpool import'
/Tomas
--
Tomas Ögren,
Hi Justin,
Maybe I misunderstand your question...
When you export a pool, it becomes available for import by using
the zpool import command. For example:
1. Export tank:
# zpool export tank
2. What pools are available for import:
# zpool import
pool: tank
id: 7238661365053190141
On 04/21/10 02:16 PM, Erik Trimble wrote:
Justin Lee Ewing wrote:
Hi all,
I would like to add a new partition to my ZFS pool but it looks like it's more
tricky than expected.
The layout of my disk is the following:
- first partition for Windows. I want to keep it. (no formatting!)
- second partition for OpenSolaris. This is where I have all the Solaris slices
On 04/22/10 06:59 AM, Justin Lee Ewing wrote:
'zpool import' - kind of counter-intuitive!
--
Ian.
Hello,
Due to a power outage our file server running FreeBSD 8.0p2 will no longer come
up due to zpool corruption. I get the following output when trying to import
the ZFS pool using either a FreeBSD 8.0p2 cd or the latest OpenSolaris snv_143
cd:
FreeBSD mfsbsd
On Apr 20, 2010, at 12:13 AM, Sunil funt...@yahoo.com wrote:
Hi,
I have a strange requirement. My pool consists of 2 500GB disks in
stripe which I am trying to convert into a RAIDZ setup without data
loss but I have only two additional disks: 750GB and 1TB. So, here
is what I thought:
On Wed, Apr 21, 2010 at 10:38 AM, Edward Ned Harvey
solar...@nedharvey.com wrote:
At present, the workaround I have for zfs is:
ln -s .zfs/snapshot snapshot
This makes the snapshot directory plainly visible to all NFS and CIFS users.
Easy to find every time, easy to remember.
It still has the issue that the end user has to know where the root of
the filesystem is in the tree (assuming it's even accessible on the
system -- might not be for an NFS mount).
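One way to widen the workaround so users never have to climb to the filesystem root themselves: plant the symlink at the mountpoint of every filesystem in the pool. A sketch, with 'tank' as a placeholder pool name:

```shell
# Create a 'snapshot' symlink at the mountpoint of each filesystem in a
# pool ('tank' is a placeholder); skips legacy/none mountpoints.
zfs list -H -o mountpoint -t filesystem -r tank | while read mp; do
    if [ -d "$mp" ] && [ ! -e "$mp/snapshot" ]; then
        ln -s .zfs/snapshot "$mp/snapshot"
    fi
done
```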
On Wed, Apr 21, 2010 at 6:01 PM, Brandon High bh...@freaks.com wrote:
On Mon, Apr 19, 2010 at 9:13 PM, Sunil funt...@yahoo.com wrote:
Any potential gotchas?
As others mentioned, doing raidz with two slices on the same disk is
pointless from a redundancy perspective. You may as well just create a
pool using only the 1TB drive, copy the data over, then create a
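Filling in the likely shape of the suggested migration as a guess (device names are placeholders, the -R replication-stream flags are an assumption, and the data must fit on the single 1TB drive):

```shell
# Hypothetical migration of a 2x500GB stripe to raidz; all device names
# are placeholders.
zpool create temppool c4t0d0                      # the 1TB disk
zfs snapshot -r oldpool@move
zfs send -R oldpool@move | zfs receive -F -d temppool
zpool destroy oldpool                             # frees the two 500GB disks
zpool create newpool raidz c1t0d0 c2t0d0 c3t0d0   # 500GB + 500GB + 750GB
zfs send -R temppool@move | zfs receive -F -d newpool
zpool destroy temppool                            # frees the 1TB disk again
```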
On Wed, Apr 21, 2010 at 7:24 AM, Frank Middleton
f.middle...@apogeect.com wrote:
On 04/20/10 11:06 AM, Don wrote:
Just got a distributor email about Texas Memory Systems' RamSan-630,
one of a range of huge non-volatile SAN products they make. Other
than that this has a capacity of 4-10TB,
For those of you still strapped to the Solaris 10 boat anchor, I see
that kernel patch 142901-09 (142900 on SPARC) includes new interesting
zfs fixes:
6586537 async zio taskqs can block out userland commands
6793877 lockd and Apache can block ZFS force-unmounting on behalf of clients
6844158
From: Nicolas Williams [mailto:nicolas.willi...@oracle.com]
POSIX doesn't allow us to have special dot files/directories outside
filesystem root directories.
So? Tell it to Netapp. They don't seem to have any problem with it.
From: Richard Elling [mailto:richard.ell...@gmail.com]
In Ontap, all you do is go into .snapshot, and select which snap
If you did the symlink .snapshot -> $FSROOT/.zfs/snapshot, and somehow made
that magically appear in every directory all the time, you would have this:
/share/home/joeuser/foo/.snapshot/bestsnapever/home/joeuser/foo/bar
/share/home/joeuser/.snapshot/bestsnapever/home/joeuser/foo/bar
Hi Richard
I was looking at NexentaStor developer edition and I see they have given it a
nice GUI,
but I can't configure the auto-cdp plugin. Can I configure it through the shell?
Also, if performance is a problem, what other options are there for a
high-availability ZFS system across 2 systems.
In
are you using comstar or the old iSCSI target (iscsitadm) to provision
targets?
I'm using zfs set shareiscsi=on to configure the logical units and COMSTAR for
the rest on the OpenSolaris side. The targets are initiated on Solaris 10 with
iscsiadm.
This thing was humming right along and all
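As an aside on the configuration described above: shareiscsi=on drives the older iscsitgt path, not COMSTAR. A typical COMSTAR provisioning sequence looks roughly like the sketch below; the pool/volume names, size, and GUID are all placeholders:

```shell
# COMSTAR provisioning sketch; names, size and GUID are placeholders.
zfs create -V 100g tank/lun0
stmfadm create-lu /dev/zvol/rdsk/tank/lun0    # prints the LU GUID
stmfadm add-view 600144F0XXXXXXXXXXXXXXXXXXXXXXXX
itadm create-target
```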
On Apr 21, 2010, at 7:27 PM, Edward Ned Harvey wrote:
From: Richard Elling [mailto:richard.ell...@gmail.com]
In
On Apr 21, 2010, at 8:41 PM, tranceash wrote:
The auto-cdp plugin is managed through nmc. It is simple to manage