which gap?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
http://mail.opensolaris.org/pipermail/onnv-notify/2009-July/009872.html
Second bug; it's the same link as in the first post.
Hmm, I guess that's what I've heard as well.
I do run compression and believe a lot of others would as well. So then, it
seems to me that if I have guests that run a filesystem formatted with 4k
blocks, for example, I'm inevitably going to have this overlap when using ZFS
network storage?
So
I think it is a great idea, assuming the SSD has good write performance.
This one claims up to 230MB/s read and 180MB/s write and it's only $196.
http://www.newegg.com/Product/Product.aspx?Item=N82E16820609393
Compared to this one (250MB/s read and 170MB/s write) which is $699.
Are
That is an interesting bit of kit. I wish a white box manufacturer would
create something like this (hint hint supermicro)
I'll admit, I was cheap at first and my
fileserver right now is consumer drives. You
can bet all my future purchases will be of the enterprise grade. And
guess what... none of the drives in my array are less than 5 years old, so
even
if they did die, and I had bought the
Hi Richard,
So you have to wait for the sd (or other) driver to
timeout the request. By
default, this is on the order of minutes. Meanwhile,
ZFS is patiently awaiting a status on the request. For
enterprise class drives, there is a limited number
of retries on the disk before it reports an
For whatever it's worth to have someone post on a list.. I would *really*
like to see this improved as well. The time it takes to iterate over
both thousands of filesystems and thousands of snapshots makes me very
cautious about taking advantage of some of the built-in zfs features in
an HA
Even if it might not be the best technical solution, I think what a lot of
people are looking for when this comes up is a knob they can use to say I only
want X IOPS per vdev (in addition to low prioritization) to be used while
scrubbing. Doing so probably helps them feel more at ease that they
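For what it's worth, no such per-vdev IOPS knob exists today; the nearest existing controls are the scrub delay tunables in the OpenSolaris code. A rough sketch of how one might throttle a running scrub (the tunable name and values vary by build, so treat this as an assumption and check your source tree):

```
# Insert a delay (in clock ticks) between scrub I/Os on a live kernel:
echo zfs_scrub_delay/W0t4 | mdb -kw

# Persistent variant in /etc/system (takes effect after reboot):
#   set zfs:zfs_scrub_delay=4
```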
Someone on this list threw out the idea a year or so ago to just setup 2
ramdisk servers, export a ramdisk from each and create a mirror slog from them.
Assuming newer version zpools, this sounds like it could be even safer since
there is (supposedly) less of a chance of catastrophic failure if
40k IOPS sounds like best-case, "you'll never see it in the real world"
marketing to me. There are a few benchmarks if you google, and they all seem to
indicate the performance is probably +/- 10% of an Intel X25-E. I would
personally trust Intel over one of these drives.
Is it even possible
On the PCIe side, I noticed there's a new card coming from LSI that claims
150,000 4k random writes. Unfortunately this might end up being an OEM-only
card.
I also notice on the ddrdrive site that they now have an opensolaris driver and
are offering it in a beta program.
Is there a best practice on keeping a backup of the zpool.cache file? Is it
possible? Does it change with changes to vdevs?
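For what it's worth: on OpenSolaris the cache file lives at /etc/zfs/zpool.cache, it is a plain file you can copy, and it is rewritten whenever the vdev configuration changes (add/attach/detach/export), so a backup taken after each config change stays current. A minimal sketch; the function name, parameter, and .bak convention are my own, not anything zfs prescribes:

```shell
#!/bin/sh
# Copy the pool cache file aside; re-run after any vdev change.
backup_zpool_cache() {
    cache=${1:-/etc/zfs/zpool.cache}   # default OpenSolaris location
    cp -p "$cache" "$cache.bak"
}
```

Note that `zpool import` can rebuild the cache from the disks themselves, so the backup is a convenience rather than the only recovery path.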
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Very interesting. This could be useful for a number of us. Would you be willing
to share your work?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Hi
2006/8/22, Constantin Gonzalez [EMAIL PROTECTED]:
Thomas Deutsch wrote:
I'm thinking about to change from Linux/Softwareraid to
OpenSolaris/ZFS. During this, I've got some (probably stupid)
questions:
don't worry, there are no stupid questions :).
1. Is ZFS able to encrypt all the data
does a zfs filesystem get mounted?
Probably a zfs legacy mount together with a lower priority lofs mount
would do it.
Regards,
Thomas
On Fri, Sep 08, 2006 at 08:18:06AM -0400, Steffen Weiberle wrote:
I have a jumpstart server where the install images are on a ZFS pool.
For PXE boot, several
On Sep 12, 2006, at 2:04 PM, Mark Maybee wrote:
Thomas Burns wrote:
Hi,
We have been using zfs for a couple of months now, and, overall,
really
like it. However, we have run into a major problem -- zfs's
memory requirements
crowd out our primary application. Ultimately, we have
Also, where do I set arc.c_max? In /etc/system? Out of
curiosity, why isn't
limiting arc.c_max considered best practice (I just want to make
sure I am
not missing something about the effect limiting it will have)?
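For reference, the usual way at the time was not to poke arc.c_max directly but to cap the ARC from /etc/system via the zfs_arc_max tunable; the 1 GB value below is purely illustrative:

```
* Cap the ZFS ARC at 1 GB (0x40000000 bytes); takes effect after reboot.
set zfs:zfs_arc_max=0x40000000
```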
My guess is
that in our case (lots of small groups -- 50 people or less
using a cluster-framework with heartbeats
and all that great stuff ...
Regards,
Thomas
file descriptors or unmount, then operator-predefined
actions will be triggered.
Actions like zfs create rulebased-name, take a snapshot or
zsend on a snapshot and others could be thought of.
Thomas
the
original disk to complete the array?
Thanks!
Thomas
On 11/30/06, Krzys [EMAIL PROTECTED] wrote:
Ah, did not see your follow up. Thanks.
Chris
On Thu, 30 Nov 2006, Cindy Swearingen wrote:
Sorry, Bart is correct:
If new_device is not specified, it defaults
So there is no current way to specify the creation of a 3 disk raid-z
array with a known missing disk?
On 12/5/06, David Bustos [EMAIL PROTECTED] wrote:
Quoth Thomas Garner on Thu, Nov 30, 2006 at 06:41:15PM -0500:
I currently have a 400GB disk that is full of data on a linux system.
If I buy
of the filesystem?
Thanks!
Thomas
for what purpose ?
Darren's correct, it's a simple case of ease of use. Not
show-stopping by any means but would be nice to have.
Thomas
I'm an Oracle DBA and we are doing ASM on SUN with RAC. I am happy with ASM's
performance but am interested in Clustering. I mentioned to Bob Netherton that
if Sun could make it a clustering file system, that helps them enable the grid
further. Oracle wrote and gave OCFS2 to the Linux Kernel.
How embarrassing is that? Pete kindly pointed me to the man page where it
clearly states that I should use zpool scrub [-s] pool. -s for Stop
scrubbing. Sorry folks, I just looked in the Administration guide where I
couldn't find it. But I am sure it's in there, too.
the read? Would someone please explain how the mechanism
works in that case?
Of course in the meantime we attached another box in mirror configuration
;)
Thanks in advance
Thomas
-
GPG fingerprint: B1 EE D2 39 2C 82 26 DA A5 4D
what they are about
Thomas
is gone forever?
If so, is this a transport independent problem which can also happen if
ZFS used Fibre Channel attached drives instead of iSCSI devices?
Thanks for your help
Thomas
that FC-AL, ... do better in this case
Thomas
to see if they use that sequence. Allow me one more question: why
is fflush() required prior to fsync()?
Putting all the pieces together, this means that if the app doesn't do it, it
suffered from the problem with UFS anyway, just with typically smaller
caches, right?
Thanks again
Thomas
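The ordering question above can be shown in a few lines of C: fsync() acts on the file descriptor, so anything still sitting in stdio's user-space buffer is invisible to the kernel until fflush() pushes it down. A sketch (write_durably and the path are illustrative names, not from this thread):

```c
#include <stdio.h>
#include <unistd.h>

/* Flush stdio's user-space buffer, then ask the kernel to reach stable
 * storage. Skipping fflush() would fsync() a descriptor that has not yet
 * seen the buffered bytes. */
int write_durably(const char *path, const char *msg)
{
    FILE *fp = fopen(path, "w");
    if (fp == NULL)
        return -1;
    fputs(msg, fp);               /* lands in the stdio buffer only    */
    if (fflush(fp) != 0 ||        /* stdio buffer -> kernel page cache */
        fsync(fileno(fp)) != 0) { /* page cache -> stable storage      */
        fclose(fp);
        return -1;
    }
    return fclose(fp);
}
```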
as possible applications
but we need to have redundancy for the fileserver itself too
Thomas
on UFS or any similar FS anymore
It probably will be really slow, but everything should be consistent
all the time I guess.
You might be right about that. I did a quick check with dtrace on the mail
server and it seems IMAP, sendmail and the others nicely sync data as they
should
Thomas
drives online? Shouldn't it have been writing data/parity to
the replacement drive? Is this normal and the expected behavior?
Thanks for any insight!
Thomas
My initial reaction is that the world has got by without
[email|cellphone|
other technology] for a long time ... so not a big deal.
Well, I did say I viewed it as an indefensible position :-)
Now shall we debate if the world is a better place because of cell
phones :-P
So it is expected behavior on my Nexenta alpha 7 server for Sun's nfsd
to stop responding after 2 hours of running a bittorrent client over
nfs4 from a linux client, causing zfs snapshots to hang and requiring
a hard reboot to get the world back in order?
Thomas
There is no NFS over ZFS issue
) and zfs would be
having problems taking snapshots, if I hadn't disabled the hourly
snapshots.
Thanks!
Thomas
[EMAIL PROTECTED] ~]$ rpcinfo -t filer0 nfs
rpcinfo: RPC: Timed out
program 13 version 0 is not available
echo "::pgrep nfsd | ::walk thread | ::findstack -v" | mdb -k
stack pointer
Thanks, Roch! Much appreciated knowing what the problem is and that a
fix is in a forthcoming release.
Thomas
On 6/25/07, Roch - PAE [EMAIL PROTECTED] wrote:
Sorry about that; looks like you've hit this:
6546683 marvell88sx driver misses wakeup for mv_empty_cv
http
Does anyone have a best practice for utilizing ZFS with Hitachi SE99x0 arrays??
I'm curious about what type of parity-groups work best with ZFS for various
application uses. Examples: OLTP, warehousing, NFS, .
Thanks!
the difference being factor 2 between reading
and writing when using a 1:1 mirror setup, I would say you
hit the bottleneck of your PCI bus.
Thomas
On Sun, Jul 15, 2007 at 03:37:06AM -0700, Orvar Korvar wrote:
I did that, and here are the results from the ZFS jury:
bash-3.00$ timex dd if=/dev/zero
Hi
if I create a storage pool with multiple RAID-Z stripes in it does ZFS
dynamically stripe data across all the RAID-Z stripes in the pool
automagically ?
If I relate this back to my storage array experience, this would be
Plaiding which is/was creating a RAID-0 logical volume across
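To the question above: yes. Every top-level vdev, raidz or otherwise, becomes part of one dynamic stripe, so the pool behaves like the "plaiding" described. A sketch with placeholder device names:

```
# Two raidz top-level vdevs; ZFS dynamically stripes writes across both.
zpool create tank \
    raidz c1t0d0 c1t1d0 c1t2d0 \
    raidz c1t3d0 c1t4d0 c1t5d0

# A raidz vdev added later simply widens the stripe for new writes.
zpool add tank raidz c1t6d0 c1t7d0 c1t8d0
```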
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
--
Tim Thomas
Storage Systems Product Group
Sun Microsystems, Inc.
Internal Extension: x(70)18097
Office Direct Dial: +44-161
Hi all,
i am about to put together a one month test configuration for a
graphics-production server (prepress-filer that is). I would like to test zfs
on a x4200 with two sas2sata-jbods attached. Initially i wanted to use an
infortrend fc2sata-jbod-enclosure but these are out of production
mirror A B here lives the OS and userdata-one
pool userdata-two
mirror C D userdata-two spanning CD - XY
mirror X Y
Thomas
On Thu, Sep 27, 2007 at 08:39:40PM +0100, Dick Davies wrote:
On 26/09/2007, Christopher [EMAIL PROTECTED] wrote:
I'm about to build a fileserver and I think I'm
Hi all,
i want to replace a bunch of Apple Xserves with Xraids and HFS+ (brr) by Sun
x4200 with SAS-Jbods and ZFS. The application will be the Helios UB+ fileserver
suite.
I installed the latest Solaris 10 on a x4200 with 8gig of ram and two Sun SAS
controllers, attached two sas-jbods with 8
Hi
this may be of interest:
http://blogs.sun.com/timthomas/entry/samba_performance_on_sun_fire
I appreciate that this is not a frightfully clever set of tests but I
needed some throughput numbers and the easiest way to share the
results is to blog.
Rgds
Tim
Hi again,
i did not want to compare the filebench test with the single mkfile command.
Still, i was hoping to see similar numbers in the filebench stats.
Any hints what i could do to further improve the performance?
Would a raid1 over two stripes be faster?
TIA,
Tom
Hi,
i checked with $nthreads=20 which will roughly represent the expected load and
these are the results:
IO Summary: 7989 ops 7914.2 ops/s, (996/979 r/w) 142.7mb/s, 255us cpu/op, 0.2ms
latency
BTW, smpatch is still running and further tests will get done when the system
is rebooted.
The
i wanted to test some simultaneous sequential writes and wrote this little
snippet:
#!/bin/bash
for ((i=1; i<=20; i++))
do
dd if=/dev/zero of=lala$i bs=128k count=32768 &   # background so the writes overlap
done
wait
While the script was running i watched zpool iostat and measured the time
between starting and stopping of the writes
http://blogs.sun.com/timthomas/entry/another_samba_test_on_sun
What I find nice about Thumper/X4500's is that they behave very
predictably..in my experience anyway.
Rgds
Tim
Hi Eric,
Are you talking about the documentation at:
http://sourceforge.net/projects/filebench
or:
http://www.opensolaris.org/os/community/performance/filebench/
and:
http://www.solarisinternals.com/wiki/index.php/FileBench
?
i was talking about the solarisinternals wiki. I can't find any
Hi,
compression is off.
I've checked rw-performance with 20 simultaneous cp runs started with the
following...
#!/usr/bin/bash
for ((i=1; i<=20; i++))
do
cp lala$i lulu$i &   # background so the 20 copies run simultaneously
done
wait
(lala1-20 are 2gb files)
...and ended up with 546mb/s. Not too bad at all.
Hi all,
i am currently using two XStore XJ 1100 SAS JBOD
enclosures(http://www.xtore.com/product_detail.asp?id_cat=11) attached to a
x4200 for testing. So far it works rather nicely, but i am still looking for
alternatives.
The Infortrend JBOD-expansions are not deliverable at the moment.
What
Hi,
did you read the following?
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Currently, pool performance can degrade when a pool is very full and
filesystems are updated frequently, such as on a busy mail server.
Under these circumstances, keep pool space under 80%
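A trivial way to watch for that 80% threshold (the awk filter and function name are my own sketch; `zpool list -H` emits tab-separated values):

```shell
#!/bin/sh
# Print any pool whose capacity exceeds 80%.
# Reads "name<TAB>capacity%" lines, as emitted by: zpool list -H -o name,capacity
check_pools() {
    awk -F'\t' '{ sub(/%/, "", $2); if ($2 + 0 > 80) print $1, $2 "%" }'
}
# Typical use on a live system:
#   zpool list -H -o name,capacity | check_pools
```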
Hi,
from sun germany i got the info that the 2u JBODs will be officially announced
in q1 2008 and the 4u JBODs in q2 2008.
Both will have SAS connectors and support both SAS and SATA drives.
Regards,
Tom
is the status of this ?
Thanks
Tim
Hi all,
i am planning a zfs-fileserver for a larger prepress-company in Germany.
Knowing that users tend to use all the space they can get, i am looking for a
solution to avoid a rapid performance loss when the production-pool is more
than 80% used.
Would it be a practical solution to just set
bda wrote:
I haven't noticed this behavior when ZFS has (as recommended) the
full disk.
Good to know, as i intended to use the whole disks anyway.
Thanks,
Tom
Ralf Ramge wrote:
Quotas are applied to file systems, not pools, and as such are pretty
independent from the pool size. I found it best to give every user
his/her own filesystem and apply individual quotas afterwards.
Does this mean, that if i have a pool of 7TB with one filesystem for
Nobody out there who ever had problems with low diskspace?
Regards,
Tom
If you can't use zpool status, you should probably check whether your system
is right and not all of the devices needed for this pool are currently
available... e.g. with format...
Regards,
Tom
Ralf Ramge wrote:
Thomas Liesner wrote:
Does this mean, that if i have a pool of 7TB with one filesystem for all
users
with a quota of 6TB i'd be alright?
Yep. Although I *really* recommend creating individual file systems, e.g.
if you have 1,000 users on your server, I'd create 1,000
Hi
I just loaded up opensolaris on an X4500 (Thumper) and tried to connect
to the ZFS GUI (https://x:6789)...and it is not there.
Is this not part of Open Solaris...or do I just need to work out how to
switch it on..
Thanks
Tim
--
Tim Thomas
Staff
ollowing :
It should be there... try starting the webconsole service.
On 2/14/08, Tim Thomas [EMAIL PROTECTED] wrote:
Hi
I just loaded up opensolaris on an X4500 (Thumper) and tried to connect
to the ZFS GUI (https://x:6789)...and
it is not there.
Is this not part of Open Solaris
are you sure the service is actually running? does "svcs -a | grep
webconsole" say "online"?
Yes, it is online
A reboot did it
Tim Thomas said the following :
Thanks Chris
someone else has suggested that to me but it still does not work.
I also tried...
# svccfg -s svc:/system/webconsole setprop options/tcp_listen = true
# svcadm refresh svc:/system/webconsole
Hi,
I've looked at ZFS for a while now and i'm wondering if it's possible on a
server to create a ZFS mirror between two different iSCSI targets (two MD3000i
located in two different server rooms).
Or is there any setup that you guys recommend for maximal data protection?
Thanks,
/Thom
This
Hi Tomas,
I will try it myself, but it's just that if i google the subject i only find
old entries describing things as kernel panics and system freeze. I'm just
wondering if this problem is fixed in the newer releases, or if there is
another recommended way to keep data stored on different
that seems to be one root cause for not seeing the
added space after recreating the targets. The latter was necessary to make
sure the disk size matches the ZVOL size
Any hints are greatly appreciated!
Thomas
Hi,
a
zfs create -V 1M pool/foo
dd if=/dev/random of=/dev/zvol/rdsk/pool/foo bs=1k count=1k
(using Nevada b94) yields
zfs get all pool/foo
NAME      PROPERTY    VALUE  SOURCE
pool/foo  used        1,09M  -
pool/foo  referenced  1,09M  -
pool/foo  volsize     1M     -
on that?
Thomas
raidz's, but does not seem to fit what I've
seen empirically. What am I missing? Note that the following is a
snapshot of time in the middle of a large streaming write, not the
initial output from zpool iostat.
Thomas
zpool iostat -v tank 1
               capacity     operations    bandwidth
in snv_96.
Thanks for finding this.
Thanks for fixing but it also happens if the snapshot directory isn't
empty as /.zfs/snapshot holds the name of the snapshot that was taken
Thomas
with OpenSolaris clients
Any hints?
Thomas
Miles
On Sat, 2 Aug 2008, Miles Nordin wrote:
tn == Thomas Nau [EMAIL PROTECTED] writes:
tn Nevertheless during the first hour of operation after onlining
tn we recognized numerous checksum errors on the formerly
tn offlined device. We decided to scrub the pool and after
tn
issues, but is definitely not cool when it
happens.
Thomas
On Wed, Aug 6, 2008 at 1:31 PM, Bryan Allen [EMAIL PROTECTED] wrote:
Good afternoon,
I have a ~600GB zpool living on older Xeons. The system has 8GB of RAM. The
pool is hanging off two LSI Logic SAS3041X-Rs (no RAID configured).
When
make a call to service?
Thanks in advance,
thomas
--
Dr. Thomas Bleek, Netzwerkadministrator
Helmholtz-Zentrum Potsdam
Deutsches GeoForschungsZentrum
Telegrafenberg G261
D-14473 Potsdam
Tel.: +49 331 288- 1818/1681 Fax.: 1730 Mobil: +49 172 1543233
E-Mail: [EMAIL PROTECTED]
Hi
has anyone attempted transfers of large volume of data with zfs
send/receive in a production environment.
I am seeing interest in zfs send/receive from people who have used rsync
and similar technologies to copy data for DR purposes... but I have no
idea of what to expect so far as
of the physical device I can see all the data but I
can't get to it... aaarrrggghhh
Has anybody successfully patched/tweaked/whatever a zpool or zfs to
recover from this?
I would be most and forever grateful if somebody could give me a hint.
Thanx,
Thomas
Following is the 'zpool status':
zpool status -xv
Are these machines 32-bit by chance? I ran into similar seemingly
unexplainable hangs, which Marc correctly diagnosed and have since not
reappeared:
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-August/049994.html
Thomas
vs disk i/o, but would love to hear
how to measure it.
Thomas
On Sat, Jan 17, 2009 at 4:07 AM, Brad bst...@aspirinsoftware.com wrote:
I'd like to track a server's ZFS pool I/O throughput over time. What's a good
data source to use for this? I like zpool iostat for this, but if I poll at
two
Hi
I took a look at the archives and I have seen a few threads about using
array block level snapshots with ZFS and how we face the old issue
that we used to see with logical volumes and unique IDs (quite
correctly) stopping the same volume being presented twice to the same
server.
IHAC
especially in SAN environments need this.
Projects own their own pools and constantly grow and *shrink* space.
And they have no downtime available for that.
give a +1 if you agree
Thomas
redundancy.
Thomas
PS: think of the day where simple operator $NAME makes a typo,
zfs destroy -r poolname, and all the data still sits on the
disk. But no one is able to bring that valuable data back,
except restoration from tape with hours of downtime.
Sorry for repeating
Just wanted to ask how we make progress with zpool shrinking?
Are there any prerequisite projects we are waiting on?
e.g. tracked by CR 4852783 reduce pool capacity
Thomas
the problem as does zfs mount -a. So far it simply worked, as
said till we updated.
Any hints?
Thomas
Miles,
Miles Nordin wrote:
tn == Thomas Nau thomas@uni-ulm.de writes:
tn After updating the machine to b114 we ran into a strange
tn problem. The pool get's imported (listed by 'zpool list') but
tn none of it's ZFS filesystems get mounted. Exporting and
tn reimporting
/home was certainly created after setting the ACLs ..
So i think actually it is not possible, but it might be possible in the future?
Or had i misunderstood your comment?
Thomas
some controllers still create jbods in the same way. A perfect example is
any of the highpoint controllers. But yeah, when we say JBOD we mean it as
it was originally intended... just a bunch of disks
On Thu, Jul 2, 2009 at 10:23 AM, Kees Nuyt k.n...@zonnet.nl wrote:
On Thu, 02 Jul 2009
i might be wrong because i'm kind of new, but i THINK you need to disable
automatic snapshots when resilvering, at least on the older versions you did.
if not, it would restart every time a new snapshot was made... but then
again, i may be wrong.
On Fri, Jul 10, 2009 at 6:18 PM, Galen
You can't replace it because this disk is still a valid member of the pool,
although it is marked faulty.
Put in a replacement disk, add this to the pool and replace the faulty one with
the new disk.
Regards,
Tom
You could offline the disk if [b]this[/b] disk (not the pool) had a replica.
Nothing wrong with the documentation. Hmm, maybe it is a little misleading here.
I walked into the same trap.
The pool is not using the disk anymore anyway, so (from the zfs point of view)
there is no need to offline
You're right, from the documentation it definitely should work. Still, it
doesn't. At least not in Solaris 10. But i am not a zfs-developer, so this
should probably be answered by them. I will give it a try with a recent
OpenSolaris-VM and check whether this works in newer implementations of zfs.
FYI:
In b117 it works as expected and stated in the documentation.
Tom
i'm pretty sure you're just looking for the zfs rollback command.
a quick google brings up a lot of information and also man zfs
check out this page
http://docs.huihoo.com/opensolaris/solaris-zfs-administration-guide/html/ch06.html
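Assuming the pool and snapshot names below (mine, purely for illustration), the rollback itself is a one-liner; -r is only needed when newer snapshots exist, and note that it destroys them:

```
# Roll tank/home back to the named snapshot, discarding later changes
# (and, with -r, any snapshots taken after it).
zfs rollback -r tank/home@before-upgrade
```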
On Sun, Jul 19, 2009 at 10:29 AM, Brian Wilson
i was under the impression it was virtualbox and it's default setting that
ignored the command, not the hard drive
On Mon, Jul 27, 2009 at 1:27 PM, Eric D. Mudama
edmud...@bounceswoosh.orgwrote:
On Sun, Jul 26 at 1:47, David Magda wrote:
On Jul 25, 2009, at 16:30, Carson Gaspar wrote:
I don't have an answer to your question exactly because i'm a noob and i'm
not using mac but i can say that on FreeBSD which i'm using atm there is a
method to name devices ahead of time so if the drive letters change you
avoid this exact problem. I'm sure opensolaris and mac have something
sometimes the disk will be busy just from being in the directory or if
something is trying to connect to it.
Again, i'm no expert so i'm going to refrain from commenting on your issue
further.
2009/7/28 Avérous Julien-Pierre no-re...@opensolaris.org
There is a little mistake :
If I do a
filesystems to
the new filesystems, but it seems like there should be a way to mirror or
replicate the pool itself rather than doing it at the filesystem level.
Thomas Walker