On 10/19/11 01:18 AM, Edward Ned Harvey wrote:
I recently put my first btrfs system into production. Here are the
similarities/differences I noticed between btrfs and zfs:
Differences:
* Obviously, one is meant for linux and the other solaris (etc)
* In btrfs, there is only raid1.
On 10/19/11 09:31 AM, Tim Cook wrote:
I had and have redundant storage; it has *NEVER* automatically fixed
it. You're the first person I've heard of who has had it automatically
fix it.
I'm another: I have had many cases of ZFS fixing corrupted data on a
number of different pools.
I have an application that iterates through snapshots sending them to
a remote host. With a Solaris 10 receiver, empty snapshots are received
in under a second, but with a Solaris 11 Express receiver, empty
snapshots are received in two to three seconds. This is becoming a real
nuisance where
On 09/30/11 05:14 AM, erik wrote:
On Thu, 29 Sep 2011 21:13:56 +1300, Ian Collins wrote:
I have an application that iterates through snapshots sending them to
a remote host. With a Solaris 10 receiver, empty snapshots are received
in under a second, but with a Solaris 11 Express receiver
On 09/30/11 08:03 AM, Bob Friesenhahn wrote:
On Fri, 30 Sep 2011, Ian Collins wrote:
Slowing down replication is not a good move!
Do you prefer pool corruption? ;-)
Probably they fixed a dire bug and this is the cost of the fix.
Could be. I think I'll raise a support case to find out why
On 09/30/11 11:59 AM, Rich Teer wrote:
Hi all,
Got a quick question: what are the latest zpool and zfs versions
supported in Solaris 10 Update 10?
In update 10: pool version 29, ZFS version 5.
--
Ian.
On 09/27/11 07:55 AM, Jesus Cea wrote:
I just upgraded to Solaris 10 Update 10, and one of the improvements
is zfs diff.
Using the birthtime of the sectors, I would expect very high
performance. The actual performance doesn't seem better than an
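For reference, zfs diff compares two snapshots of the same dataset (a minimal sketch; the dataset, snapshot names and output lines are hypothetical):

  # zfs diff tank/home@monday tank/home@tuesday
  M       /tank/home/user/report.txt
  +       /tank/home/user/new-file.txt

The leading codes mark modified (M), added (+), removed (-) and renamed (R) entries.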
On 09/27/11 10:59 AM, Tomas Forsman wrote:
On 27 September, 2011 - Ian Collins sent me these 0,8K bytes:
On 09/27/11 07:55 AM, Jesus Cea wrote:
I just upgraded to Solaris 10 Update 10, and one of the improvements
is zfs diff.
Using
On 09/22/11 04:10 PM, Raúl Valencia wrote:
Hi, everyone!
I have a beginner's question:
I must configure a small file server. It only has two disk drives, and
they are (forcibly) destined to be used in a mirrored, hot-spare
configuration.
The OS is installed and working, and rpool is
On 09/14/11 08:49 PM, Sami Ketola wrote:
On Sep 13, 2011, at 3:21 , Peter Tribble wrote:
On Tue, Sep 13, 2011 at 1:50 AM, Paul B. Henson <hen...@acm.org> wrote:
I recently saw a message posted to the sunmanagers list complaining
about installing a kernel patch and suddenly having his ACL's
On 09/13/11 09:00 PM, Peter Tribble wrote:
On Tue, Sep 13, 2011 at 9:48 AM, cephas maposah <mapo...@gmail.com> wrote:
Hello team,
I have an issue with my ZFS system: I have 5 file systems and I need to take
a daily backup of these onto tape. How best do you think I should do this?
The smallest
On 09/14/11 01:59 PM, Evgueni Martynov wrote:
It's not a good idea to hijack an existing thread!
Does anyone see long timeouts during zfs snapshot destroys?
Define long in this context.
How long does it take to destroy in your situation?
No more than a second or two.
If yes, in what
On 09/ 9/11 06:40 AM, Richard Elling wrote:
On Sep 7, 2011, at 2:05 AM, Roy Sigurd Karlsbakk wrote:
A drive retrying a single sector for two whole minutes is nonsense, even on a
desktop
or laptop, at least when it does so without logging the error to SMART or
summing up
the issues so to flag
On 09/ 2/11 11:58 PM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian Collins
I have an S11 Express system with a pool with an SSD split between ZIL
and cache. My friendly power company donated a free spike
I have an S11 Express system with a pool with an SSD split between ZIL
and cache. My friendly power company donated a free spike that killed
the SSD and one of the pool drives.
I was able to successfully zpool remove the cache slice from the pool,
but any attempt at removing the log slice
On 08/14/11 12:51 AM, Edward Ned Harvey wrote:
From: Ian Collins [mailto:i...@ianshome.com]
Have you already tested it? Anybody? Or is it still just theoretical
performance enhancement, compared to using a normal sized drive in a
normal mode?
How would you test it? I guess you would need
On 08/12/11 04:42 PM, Vikash Gupta wrote:
Hi Ian,
It's there in the subject line.
I am unable to see the zfs file system in df output.
How did you mount it, and did it fail? As I said, what commands did you
use and what errors did you get?
What is the output of zfs mount -a?
--
Ian.
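A quick way to narrow this down (a sketch; the dataset name is hypothetical):

  # zfs list -o name,mountpoint,mounted
  # zfs mount -a              # attempt to mount everything, note any errors
  # df -h /tank/data          # check the mountpoint directly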
On 08/12/11 01:35 AM, Bob Friesenhahn wrote:
On Wed, 10 Aug 2011, Nix wrote:
Yes, I have enabled the dedup on the pool.
I will turn off dedup and will try to delete the newly created snapshot.
Unfortunately, if dedup was previously enabled, the damage was already
done since dedup is baked into
On 08/12/11 08:00 AM, Ray Van Dolson wrote:
Are any of you using the Intel 320 as ZIL? It's MLC based, but I
understand its wear and performance characteristics can be bumped up
significantly by increasing the overprovisioning to 20% (dropping
usable capacity to 80%).
A log device doesn't
On 08/12/11 08:25 AM, Vikash Gupta wrote:
# uname -a
Linux testbox 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010
x86_64 x86_64 x86_64 GNU/Linux
# rpm -qa|grep zfs
zfs-test-0.5.2-1
zfs-modules-0.5.2-1_2.6.18_194.el5
zfs-0.5.2-1
zfs-modules-devel-0.5.2-1_2.6.18_194.el5
On 08/10/11 05:13 PM, Nix wrote:
Hi,
I am facing an issue with zfs destroy: it takes almost 3 hours to delete a
snapshot of size 150G.
Could you please help me resolve this issue and explain why zfs destroy
takes this much time.
Do you have dedup enabled?
--
Ian.
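Checking whether dedup is in play is straightforward (a sketch with a hypothetical pool and dataset name):

  # zfs get dedup tank/data        # per-dataset dedup setting
  # zpool get dedupratio tank      # pool-wide dedup ratio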
On 08/ 9/11 07:53 AM, marvin curlee wrote:
Is it possible to recover the rpool with only a tar/star archive of the root
filesystem? I have used the zfs send/receive methods and that works without a
problem.
What I am trying to do is recreate the rpool and underlying zfs filesystems
On 08/ 4/11 10:52 PM, Stuart James Whitefish wrote:
Ian wrote:
Put your old drive in a USB enclosure and connect it
to another system in order to read back the data.
Given that update 9 can't import the pool is this really worth trying?
I would use a newer (express maybe) system.
Most
On 08/ 6/11 10:42 AM, Orvar Korvar wrote:
Are mirrors really a realistic alternative?
To what? Some context would be helpful.
I mean, if I have to resilver a raid with 3TB disks, it can take days, I
suspect. With 4TB disks it can take a week, maybe. So, if I use mirrors and one
disk breaks,
On 08/ 6/11 11:48 AM, stuart anderson wrote:
After upgrading to zpool version 29/zfs version 5 on a S10 test system via the
kernel patch 144501-19 it will now boot only as far as the grub menu.
What is a good Solaris rescue image that I can boot that will allow me to
import this rpool
On 08/ 4/11 01:29 AM, Stuart James Whitefish wrote:
I have Solaris on Sparc boxes available if it would help to do a net install
or jumpstart. I have never done those and it looks complicated, although I
think I may be able to get to the point in the u9 installer on my Intel box
where it asks
On 07/25/11 04:21 AM, Roberto Waltman wrote:
Edward Ned Harvey wrote:
So I'm getting comparisons of write speeds for 10G files, sampling
at 100G
intervals. For a 6x performance degradation, it would be 7 sec to write
without dedup, and 40-45sec to write with dedup.
For a totally
On 07/25/11 04:17 PM, Jesus Cea wrote:
I am creating a recursive snapshot in my ZPOOL and I am getting an
unexpected error:
[root@stargate-host /]# ./z-snapshotZFS 20110725-05:56
cannot create snapshot 'datos/swap@20110725-05:56': out of space
no snapshots were created
Swap can have a huge
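A quick check of how much space the swap volume reserves (a sketch; the volume name is taken from the error above):

  # zfs get volsize,refreservation,usedbyrefreservation datos/swap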
On 07/10/11 04:04 AM, Edward Ned Harvey wrote:
There were a lot of useful details put into the thread Summary: Dedup
and L2ARC memory requirements
Please refer to that thread as necessary... After much discussion
leading up to that thread, I thought I had enough understanding to
make
On 07/23/11 08:50 AM, Edward Ned Harvey wrote:
In my new oracle server, sol11exp, it's using multipath device
names... Presently I have two disks attached: (I removed the other
10 disks for now, because these device names are so confusing. This
way I can focus on **just** the OS disks.)
On 07/13/11 12:04 AM, Ciaran Cummins wrote:
Hi, we had a server that lost connection to fiber attached disk array where
data luns were housed, due to a 3510 power fault. After the connection was
restored, a lot of the zpool status output had these permanent errors listed,
as per below. I checked the
files in
On 06/ 2/11 09:18 AM, lance wilson wrote:
At your suggestion I created a file locally and these were correct, in that
they inherited the acl that was applied to the top level.
-rwxrwxrwx+ 1 testuid testgid 0 Jun 1 21:04 localtest
user:root:rwxpdDaARWcCos:--I:allow
On 05/27/11 04:34 AM, Eugen Leitl wrote:
How bad would raidz2 do on mostly sequential writes and reads
(Athlon64 single-core, 4 GByte RAM, FreeBSD 8.2)?
The best way to go is striping mirrored pools, right?
I'm worried about losing the two wrong drives out of 8.
These are all 7200.11
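For reference, the two layouts being compared would be created roughly like this (a sketch; device names are hypothetical):

  # raidz2 across all eight drives (any two may fail):
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

  # striped mirrors (four two-way mirrors; faster resilver, but losing
  # both halves of one mirror loses the pool):
  zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7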
On 05/26/11 12:15 AM, Garrett D'Amore wrote:
You are welcome to your beliefs. There are many groups that do standards that
do not meet in public. In fact, I can't think of any standards bodies that
*do* hold open meetings.
ISO language standards committees may not hold public meetings,
On 05/26/11 04:21 AM, Richard Elling wrote:
Actually, this doesn't always work. There have been attempts to stack the deck
and force votes at IETF. One memorable meeting was more of a flashmob than a
standards meeting :-)
Is there a video :)
The key stakeholders and contributors of ZFS code
On 05/25/11 07:49 AM, Brandon High wrote:
On Tue, May 24, 2011 at 12:41 PM, Richard Elling
richard.ell...@gmail.com wrote:
There are many ZFS implementations, each evolving as the contributors desire.
Diversity and innovation is a good thing.
... unless Oracle's zpool v30 is different than
On 05/ 5/11 10:02 PM, Joerg Schilling wrote:
Ian Collins <i...@ianshome.com> wrote:
*ufsrestore works fine on ZFS filesystems (although I haven't tried it
with any POSIX ACLs on the original ufs filesystem, which would probably
simply get lost).
star -copy -no-fsync is typically 30%
On 05/ 6/11 09:53 AM, Ray Van Dolson wrote:
Have a failed drive on a ZFS pool (three RAIDZ2 vdevs, one hot spare).
The hot spare kicked in and all is well.
Is it possible to just make that hot spare disk -- already silvered
into the pool -- a permanent part of the pool? We could then throw
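Once the spare has finished resilvering, detaching the failed disk makes the spare a permanent member of the vdev (a sketch; device names are hypothetical):

  # zpool detach tank c0t3d0        # the failed disk
  # zpool add tank spare c0t9d0     # optionally add a fresh spare
  # zpool status tank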
On 05/ 4/11 01:35 AM, Joerg Schilling wrote:
Andrew Gabriel <andrew.gabr...@oracle.com> wrote:
Dan Shelton wrote:
Is anyone aware of any freeware program that can speed up copying tons
of data (2 TB) from UFS to ZFS on the same server?
I use 'ufsdump | ufsrestore'*. I would also suggest try
On 04/30/11 06:00 AM, Freddie Cash wrote:
On Fri, Apr 29, 2011 at 10:53 AM, Dan Shelton <dan.shel...@oracle.com> wrote:
Is anyone aware of any freeware program that can speed up copying tons of
data (2 TB) from UFS to ZFS on the same server?
rsync, with --whole-file --inplace (and other options),
On 04/29/11 07:44 AM, Brandon High wrote:
Is there an easy way to find out what datasets have dedup'd data in
them? Even better would be to discover which files in a particular
dataset are dedup'd.
Dedup is at the block, not file level.
--
Ian.
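Block-level dedup statistics can still be inspected pool-wide (a sketch; the pool name is hypothetical and zdb output varies by release):

  # zpool get dedupratio tank
  # zdb -DD tank        # prints DDT histograms showing how many blocks are deduplicated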
On 04/26/11 04:47 PM, Erik Trimble wrote:
On 4/25/2011 6:23 PM, Ian Collins wrote:
On 04/26/11 01:13 PM, Fred Liu wrote:
H, it seems dedup is pool-based not filesystem-based.
That's correct. Although it can be turned off and on at the filesystem
level (assuming it is enabled
On 04/26/11 01:13 PM, Fred Liu wrote:
H, it seems dedup is pool-based not filesystem-based.
That's correct. Although it can be turned off and on at the filesystem
level (assuming it is enabled for the pool).
If it can have fine-grained granularity (like per file system), that would be great!
On 04/11/11 04:01 PM, Matt Harrison wrote:
I'm running a slightly old version of OSOL, I'm sorry I can't remember
the version.
I had a de-dup dataset and tried to destroy it. The command hung and
so did anything else zfs-related. I waited half an hour or so (the
dataset was only 15G), and
On 04/10/11 05:41 AM, Chris Forgeron wrote:
I see your point, but you also have to understand that sometimes too many
helpers/opinions are a bad thing. There is a set core of ZFS developers who
make a lot of this move forward, and they are the key right now. The rest of us will just
muddy
On 04/10/11 09:25 AM, Garrett D'Amore wrote:
On Sun, 2011-04-10 at 08:56 +1200, Ian Collins wrote:
On 04/10/11 05:41 AM, Chris Forgeron wrote:
I see your point, but you also have to understand that sometimes too many
helpers/opinions are a bad thing. There is a set core of ZFS developers
On 04/ 8/11 06:30 PM, Erik Trimble wrote:
On 4/7/2011 10:25 AM, Chris Banal wrote:
While I understand everything at Oracle is top secret these days,
does anyone have any insight into a next-gen X4500 / X4540? Does some
other Oracle / Sun partner make a comparable system that is fully
On 04/ 8/11 08:08 PM, Mark Sandrock wrote:
On Apr 8, 2011, at 2:37 AM, Ian Collins <i...@ianshome.com> wrote:
On 04/ 8/11 06:30 PM, Erik Trimble wrote:
On 4/7/2011 10:25 AM, Chris Banal wrote:
While I understand everything at Oracle is top secret these days,
does anyone have any insight into
On 04/ 8/11 09:49 PM, Mark Sandrock wrote:
On Apr 8, 2011, at 3:29 AM, Ian Collins <i...@ianshome.com> wrote:
On 04/ 8/11 08:08 PM, Mark Sandrock wrote:
On Apr 8, 2011, at 2:37 AM, Ian Collins <i...@ianshome.com> wrote:
On 04/ 8/11 06:30 PM, Erik Trimble wrote:
The move seems to be to the
On 04/ 9/11 03:20 AM, Mark Sandrock wrote:
On Apr 8, 2011, at 7:50 AM, Evaldas Auryla <evaldas.aur...@edqm.eu> wrote:
On 04/ 8/11 01:14 PM, Ian Collins wrote:
You have built-in storage failover with an AR cluster;
and they do NFS, CIFS, iSCSI, HTTP and WebDav
out of the box.
And you have
On 04/ 9/11 03:53 PM, Mark Sandrock wrote:
I'm not arguing. If it were up to me,
we'd still be selling those boxes.
Maybe you could whisper in the right ear?
:)
--
Ian.
On 04/ 6/11 12:28 AM, Paul Kraus wrote:
I have a zpool with one dataset and a handful of snapshots. I
cannot delete two of the snapshots. The message I get is "dataset is
busy". Neither fuser nor lsof shows anything holding open the
.zfs/snapshot/<snapshot name> directory. What can cause this?
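Two things worth checking in this situation are user holds and clones (a sketch; dataset and snapshot names are hypothetical):

  # zfs holds tank/data@snap1                  # a user hold blocks destruction
  # zfs list -t filesystem -o name,origin      # a clone's origin points at its snapshot
  # zfs destroy -d tank/data@snap1             # defer destruction until holds/clones are gone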
On 03/29/11 02:52 AM, Paul Kraus wrote:
On Thu, Mar 24, 2011 at 7:07 AM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
When you have a backup server, which does nothing but zfs receive, that's
probably your best case scenario. Because the data is as nonvolatile
On 03/24/11 07:28 AM, David Magda wrote:
On Wed, March 23, 2011 13:31, Linder, Doug wrote:
Toby Thain wrote:
Linder, Doug wrote:
[Minor] From: v...@hostname.ourdomain.com /directoryname Time: 3/23/2011
3:02:25 AM
[ 81:84 ] /directoryname
Cannot
On 03/22/11 10:39 AM, Edward Ned Harvey wrote:
So the conclusion to draw is:
Yes, there are situations where ZFS resilver is a strength, and limited by
serial throughput. But for what I call typical usage patterns, it's a
weakness, and it's dramatically much worse than resilvering the whole
Has anyone seen a resilver longer than this for a 500G drive in a
raidz2 vdev?
scrub: resilver completed after 169h25m with 0 errors on Sun Mar 20
19:57:37 2011
c0t0d0 ONLINE 0 0 0 769G resilvered
and I told the client it would take 3 to 4 days!
:)
--
Ian.
On 03/20/11 08:57 PM, Ian Collins wrote:
Has anyone seen a resilver longer than this for a 500G drive in a
raidz2 vdev?
scrub: resilver completed after 169h25m with 0 errors on Sun Mar 20
19:57:37 2011
c0t0d0 ONLINE 0 0 0 769G resilvered
I didn't intend
On 03/21/11 12:20 PM, Richard Elling wrote:
On Mar 20, 2011, at 3:02 PM, Ian Collins wrote:
On 03/20/11 08:57 PM, Ian Collins wrote:
Has anyone seen a resilver longer than this for a 500G drive in a raidz2 vdev?
scrub: resilver completed after 169h25m with 0 errors on Sun Mar 20 19:57:37
On 03/18/11 04:46 AM, Karl Wagner wrote:
Hi all
I have only just seen this, and thought someone may be able to help.
On heavy IO activity, my Solaris 11 Express box hosting a ZFS data pool
crashes. It seems to show page faults in several things, including nfsd,
sched, zpool-tank and
On 02/19/11 01:00 PM, Rahul Deb wrote:
Hi All,
I have two text files:
1. *snap_prev.txt:* This one lists the name of the immediate previous
snapshots of the 100 zfs file systems. One per line.
2. *snap_latest.txt:* This one contains the name of the corresponding
latest snapshots of those
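A minimal sketch of the loop being described, assuming the two files are line-aligned; the receiving host and pool names here are hypothetical:

  #!/bin/sh
  # pair up line N of each file and send the increment between them
  paste snap_prev.txt snap_latest.txt | while read prev latest
  do
      zfs send -i "$prev" "$latest" | ssh backuphost zfs receive -d backuppool
  done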
On 02/16/11 09:50 AM, David Strom wrote:
Up to the moderator whether this will add anything:
I dedicated the 2nd NICs on 2 V440s to transport the 9.5TB ZFS between
SANs, configured a private subnet, and allowed rsh on the receiving V440.
command: zfs send | (rsh receiving-host zfs receive
On 02/15/11 10:14 AM, Cindy Swearingen wrote:
Hi Ian,
You are correct.
Previous Solaris releases displayed older POSIX ACL info on this
directory. It was changed to the new ACL style from the integration of
this CR:
6792884 Vista clients cannot access .zfs
Thanks Cindy. Unfortunately
While scanning filesystems looking for who has read access to files, I
see the ACL type of the .zfs/snapshot directory varies between releases
(non-ZFS in Solaris 10, ZFS in Solaris 11 Express).
Is this documented anywhere?
--
Ian.
On 02/ 7/11 03:45 PM, Matthew Angelo wrote:
I require a new high capacity 8 disk zpool. The disks I will be
purchasing (Samsung or Hitachi) have an Error Rate (non-recoverable,
bits read) of 1 in 10^14 and will be 2TB. I'm staying clear of WD
because they have the new 2048b sectors which
On 01/25/11 08:42 PM, Rahul Deb wrote:
Thanks Ian for your response.
So you are saying, if I create a recursive snapshot of the pool, it will
be able to do the incremental send/recv for the file systems created
on the fly?
I was thinking that if the file systems are created on the fly, then
On 01/26/11 09:50 AM, Lasse Osterild wrote:
I'd go with some Dell MD1200's, for us they ended up being cheaper (incl disks)
than a SuperMicro case with the same model disks, and it's way nicer than the
low-quality SuperMicro stuff.
That's an odd comment. I've used a fair bit of SuperMicro
On 01/24/11 09:13 PM, Ddl wrote:
Hi,
I have a Solaris 10 x86 server with 2 hard disks running a mirrored UFS
configuration.
Currently we are trying to implement a OS backup solution using Networker 7.6.
I can successfully backup the OS to a remote Networker server.
But now the trouble is
On 01/25/11 06:52 AM, Ashley Nicholls wrote:
Hello all,
I'm having a problem that I find difficult to diagnose.
I have an IBM x3550 M3 running nexenta core platform 3.0.1 (134f) with
7x6 disk RAIDZ2 vdevs (see listing at bottom).
Every day a disk fails with "Too many checksum errors", is
On 01/25/11 12:30 PM, Rahul Deb wrote:
There is only one pool and hundreds of zfs file systems under that
pool. New file systems are getting created on the fly.
Is it possible to automate zfs incremental send/recv in this scenario?
My assumption is negative as incremental send/recv needs a
On 01/18/11 04:00 PM, Repetski, Stephen wrote:
Hi All,
I believe this has been asked before, but I wasn’t able to find too
much information about the subject. Long story short, I was moving
data around on a storage zpool of mine and a zfs destroy filesystem
hung (or so I thought). This
On 01/18/11 05:22 PM, Repetski, Stephen wrote:
On Mon, Jan 17, 2011 at 22:08, Ian Collins <i...@ianshome.com> wrote:
On 01/18/11 04:00 PM, Repetski, Stephen wrote:
Hi All,
I believe this has been asked before, but I wasn’t able to
find
On 01/12/11 04:15 AM, David Strom wrote:
I've used several tape autoloaders during my professional life. I
recall that we can use ufsdump or tar or dd with at least some
autoloaders where the autoloader can be set to automatically eject a
tape when it's full and load the next one. Has always
On 01/11/11 11:40 AM, fred wrote:
Hello,
I'm having a weird issue with my incremental setup.
Here is the filesystem as it shows up with zfs list:
NAME        USED  AVAIL  REFER  MOUNTPOINT
Data/FS1    771M  16.1T   116M  /Data/FS1
Data/f...@05
On 12/23/10 08:44 AM, Jerry Kemp wrote:
I have a coworker, who's primary expertise is in another flavor of Unix.
This coworker lists floating point operations as one of ZFS's detriments.
I'm not really sure what he means specifically, or where he got this
reference from.
It sounds like your
On 12/21/10 08:36 AM, Alexander Lesle wrote:
Hello All
I read the thread "Resilver/scrub times?" for a few minutes
and I realised that I don't know the difference between
resilvering and scrubbing. Shame on me. :-(
Scrubbing is used to check the contents of a pool by reading the data
and
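For reference, a scrub is started by hand, while a resilver starts automatically when a device is replaced or reattached; both walk the pool and repair bad copies from redundancy. A minimal sketch (hypothetical pool name):

  # zpool scrub tank        # read and verify every block in the pool
  # zpool status tank       # shows scrub/resilver progress and any errors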
On 12/18/10 10:49 AM, Ian D wrote:
I have 159x 15K RPM SAS drives I want to build a ZFS appliance with.
75x 145G
60x 300G
24x 600G
The box has 4 CPUs, 256G of RAM, 14x 100G SLC SSDs for the cache and a mirrored
pair of 4G DDRDrive X1s for the SLOG.
My plan is to mirror all these drives and
On 12/12/10 04:48 AM, Stephan Budach wrote:
Hi,
on friday I received two of my new fc raids, that I intended to use
as my new zpool devices. These devices are from CiDesign and their
type/model is iR16FC4ER. These are fc raids, that also allow JBOD
operation, which is what I chose. So I
On 12/11/10 02:43 PM, Tony MacDoodle wrote:
The below ZFS pool:
zpool create tank mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0 spare c1t6d0
Is this a 0 then 1 (mirror of stripes)
or
1 then 0 (stripe of mirrors)
ZFS only supports stripes of mirrors.
--
Ian.
On 12/10/10 12:31 PM, Moazam Raja wrote:
Hi all, from much of the documentation I've seen, the advice is to set
readonly=on on volumes on the receiving side during send/receive
operations. Is this still a requirement?
I've been trying the send/receive while NOT setting the receiver to
readonly
On 11/23/10 03:34 AM, Harry Putnam wrote:
Are either or both of you sharing these files in the sense of storing
them on zfs and moving to whatever OS for usage or do you mean that
files are used in place... that is, the zfs fs is not just storing
files but the files are being used by other OSs
On 11/22/10 05:43 PM, Harry Putnam wrote:
I find that at least some kinds of video files, when accessed on a zfs
server from windows machines, will not work. In particular that seems
to hold for quicktime files.
When *.mov files reside on a windows host, and assuming your browser
has the right
On 11/17/10 05:45 AM, Cindy Swearingen wrote:
Hi Ian,
The pool and file system version information is available in
the ZFS Administration Guide, here:
http://docs.sun.com/app/docs/doc/821-1448/appendixa-1?l=ena=view
The OpenSolaris version pages are up-to-date now also.
Thanks Cindy!
--
On 11/15/10 10:50 PM, sridhar surampudi wrote:
Hi Andrew,
Regarding your point
-
You will not be able to access the hardware
snapshot from the system which has the original zpool mounted, because
the two zpools will have the same pool GUID (there's an RFE outstanding
on fixing
Is there an up-to-date reference following on from
http://hub.opensolaris.org/bin/view/Community+Group+zfs/24
listing what's in the zpool versions up to the current 31?
--
Ian.
On 11/16/10 07:19 PM, sridhar surampudi wrote:
Hi,
How would it help for instant recovery or point-in-time recovery, i.e.
restoring data at device/LUN level?
Why would you want to? If you are sending snapshots to another pool,
you can do instant recovery at the pool level.
Currently
On 11/14/10 12:00 PM, Kyle McDonald wrote:
Hi all,
I'd like to give my machine a little more swap.
I ran:
zfs get volsize rpool/swap
and saw it was 2G
So I ran:
zfs set volsize=4G rpool/swap
to double it. zfs get shows it took effect, but swap -l doesn't show
any change.
I ran swap -d to
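The usual Solaris procedure is to take the zvol out of swap and add it back so the new size is picked up (a sketch; it assumes the device can be released, i.e. enough other swap or memory is free):

  # swap -d /dev/zvol/dsk/rpool/swap
  # zfs set volsize=4G rpool/swap      # if not already done
  # swap -a /dev/zvol/dsk/rpool/swap
  # swap -l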
On 11/13/10 04:03 AM, Edward Ned Harvey wrote:
Since combining ZFS storage backend, via nfs or iscsi, with ESXi
heads, I’m in love. But for one thing: the interconnect between the
head and the storage.
1G Ether is so cheap, but not as fast as desired. 10G ether is fast
enough, but it’s overkill
On 11/10/10 10:29 AM, bhanu prakash wrote:
Hi ,
Currently the file system has a capacity of 50 GB. I want to reduce
that to 30 GB.
Quota or physical limit?
When I try to set the quota with
# zfs set quota=30G <file system name>
it gives an error like cannot set property for file sys
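One common cause is trying to set a quota below the space the file system already uses; worth checking first (a sketch; the dataset name is hypothetical):

  # zfs get used,quota tank/data
  # zfs set quota=30G tank/data    # fails if used space already exceeds 30G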
On 11/10/10 04:11 PM, Peter Taps wrote:
Folks,
I am trying to understand if there is a way to increase the capacity of a
root-vdev. After reading zpool man pages, the following is what I understand:
1. If you add a new disk by using zpool add, this disk gets added as a new
root-vdev. The
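For what it's worth, an existing top-level vdev can also be grown by replacing its disks with larger ones (a sketch; device names are hypothetical, and autoexpand needs a reasonably recent release):

  # zpool set autoexpand=on tank
  # zpool replace tank c1t0d0 c2t0d0   # repeat for every disk in the vdev
  # zpool list tank                    # extra capacity appears once all disks are replaced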
Oracle have deleted the best ZFS platform I know, the X4540.
Does anyone know of an equivalent system? None of the current
Oracle/Sun offerings come close.
--
Ian.
On 11/ 2/10 08:33 AM, Mark Sandrock wrote:
Hello,
I'm working with someone who replaced a failed 1TB drive (50% utilized),
on an X4540 running OS build 134, and I think something must be wrong.
Last Tuesday afternoon, zpool status reported:
scrub: resilver in progress for 306h0m, 63.87% done,
On 11/ 2/10 11:55 AM, Ross Walker wrote:
On Nov 1, 2010, at 3:33 PM, Mark Sandrock <mark.sandr...@oracle.com> wrote:
Hello,
I'm working with someone who replaced a failed 1TB drive (50% utilized),
on an X4540 running OS build 134, and I think something must be
On 10/29/10 09:40 AM, Rob Cohen wrote:
I have a couple drive enclosures:
15x 450gb 15krpm SAS
15x 600gb 15krpm SAS
I'd like to set them up like RAID10. Previously, I was using two hardware
RAID10 volumes, with the 15th drive as a hot spare, in each enclosure.
Using ZFS, it could be nice to
On 10/25/10 08:39 PM, Markus Kovero wrote:
You are asking for a world of hurt. You may luck out, and it may work
great, thus saving you money. Take my example for example ... I took the
safe approach (as far as any non-sun hardware is concerned.) I bought an
officially supported dell server,
On 10/26/10 01:38 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian Collins
Sun hardware? Then you get all your support from one vendor.
+1
Sun hardware costs more, but it's worth it, if you want
On 10/20/10 08:12 PM, sridhar surampudi wrote:
Hi Cindys,
Thank you for reply.
zfs/zpool should have the ability to access snapshot devices with a
configurable name.
As an example, if the file system stack is created as
  vxfs (/mnt1)
    |
  vxvm (lv1)
    |
  (device from an array / LUN say
On 10/21/10 07:00 AM, Jeff Bacon wrote:
So, Best Practices says use (N^2)+2 disks for your raidz2.
I wanted to use 7 disk stripes not 6, just to try to balance my risk
level vs available space.
Doing some testing on my hardware, it's hard to say there's a ton of
difference one way or the other
On 10/21/10 03:47 PM, Harry Putnam wrote:
build 133
zpool version 22
I'm getting:
zpool status:
NAME          STATE     READ WRITE CKSUM
z3            DEGRADED     0     0   167
  mirror-0    DEGRADED     0     0   334
    c5d0      DEGRADED     0     0   335  too
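Once the underlying cause has been addressed, the usual sequence is to clear the error counters and scrub to re-verify the data (a sketch using the pool name shown above):

  # zpool clear z3
  # zpool scrub z3
  # zpool status -v z3    # -v lists any files with permanent errors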
On 10/18/10 06:28 AM, Simon Breden wrote:
I would just like to confirm whether or not a vdev failure would lead to
failure of the whole pool or not.
For example, if I created a pool from two RAID-Z2 vdevs, and three drives fail
within the first vdev, is all the data within the whole pool
On 10/17/10 12:37 PM, Roy Sigurd Karlsbakk wrote:
- Original Message -
On 10/17/10 04:54 AM, Roy Sigurd Karlsbakk wrote:
Hi all
I'm seeing some rather bad resilver times for a pool of WD Green
drives (I know, bad drives, but leave that). Does resilver go
through the whole