[EMAIL PROTECTED] wrote on 10/19/2008 01:59:29 AM:
Ares Drake wrote:
Greetings.
I am currently looking into setting up a better backup solution for our
family.
I own a ZFS fileserver with a 5x500GB raidz. I want to back up data (not
the OS itself) from multiple PCs running Linux
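One common approach (a sketch only; the hostnames, dataset names, and paths below are hypothetical) is to rsync each PC into its own dataset and snapshot the dataset after every run, so the snapshots provide cheap point-in-time history:
# zfs create tank/backup/laptop1
# rsync -aHx --delete root@laptop1:/home/ /tank/backup/laptop1/
# zfs snapshot tank/backup/laptop1@`date +%Y-%m-%d`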
[EMAIL PROTECTED] wrote on 10/11/2008 09:36:02 PM:
On Oct 10, 2008, at 7:55 PM, David Magda wrote:
If someone finds themselves in this position, what advice can be
followed to minimize risks?
Can you ask for two LUNs on different physical SAN devices and have
an expectation
Any news on whether the scrub/resilver/snap reset patch will make it into the
10/08 update?
Thanks!
Wade Stuart
we are fallon
P: 612.758.2660
C: 612.877.0385
** Fallon has moved. Effective May 19, 2008 our address is 901 Marquette
Ave, Suite 2400, Minneapolis, MN 55402
[EMAIL PROTECTED] wrote on 10/07/2008 07:15:46 AM:
Hello Wade,
Monday, October 6, 2008, 8:56:12 PM, you wrote:
[EMAIL PROTECTED] wrote on 10/06/2008 01:57:10 PM:
Hi all
In another thread a short while ago... A cool little movie with some
gumballs was all we got to learn about
[EMAIL PROTECTED] wrote on 10/07/2008 10:59:06 AM:
On Tue, 7 Oct 2008, [EMAIL PROTECTED] wrote:
Wouldn't it be great if programmers could just focus on writing
code rather than having to worry about getting sued over whether
someone else is able to make a derivative program from
[EMAIL PROTECTED] wrote on 10/07/2008 01:10:51 PM:
I don't know if this is already available in S10 10/08, but in
OpenSolaris build 71 you can set the zpool failmode property; see:
http://opensolaris.org/os/community/arc/caselog/2007/567/
Available options are... The property can be
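For illustration only (pool name assumed; check zpool(1M) on your build for the exact values, which the ARC case lists as wait, continue, and panic):
# zpool set failmode=continue tank
# zpool get failmode tank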
[EMAIL PROTECTED] wrote on 10/06/2008 01:57:10 PM:
Hi all
In another thread a short while ago... A cool little movie with some
gumballs was all we got to learn about GreenBytes. The product
launched and maybe some of the people that follow this list have had a
chance to take a look at the
On Mon, Oct 6, 2008 at 3:00 PM, C. Bergström [EMAIL PROTECTED]
wrote:
Matt Aitkenhead wrote:
I see that you have wasted no time. I'm still determining if you
have a sincere interest in working with us or alternatively have an
axe to grind. The latter is shining through.
Regards,
[EMAIL PROTECTED] wrote on 09/25/2008 05:30:04 AM:
On Thu, 2008-09-25 at 12:07 +0200, [EMAIL PROTECTED] wrote:
On Thu, Sep 25, 2008 at 11:43:51AM +0200, Nils Goroll wrote:
Storage Checkpoints in Veritas software have this feature (removing
the oldest checkpoint in case of 100% filesystem
[EMAIL PROTECTED] wrote on 09/25/2008 09:16:48 AM:
On 25 Sep 2008, at 14:40, Ross wrote:
For a default setup, I would have thought a year's worth of data
would be enough, something like:
Given that this can presumably be configured to suit everyone's
particular data retention plan, for a
[EMAIL PROTECTED] wrote on 09/25/2008 10:34:41 AM:
On Thu, 2008-09-25 at 10:19 -0500, [EMAIL PROTECTED] wrote:
That snap schedule seems reasonable to me. Related to the cleanup part
of the doc linked, do you know the rationale for killing off the most
recent (15-minute and hourly)
[EMAIL PROTECTED] wrote on 09/24/2008 05:54:45 AM:
I agree, it looks like it would be perfect, but unfortunately
without Solaris drivers it's pretty much a non-starter.
That hasn't stopped me pestering Fusion-IO wherever I can though to
see if they are willing to develop Solaris drivers,
Are you doing snaps? If so, unless you have the new bits to handle the
issue, each snap restarts a scrub or resilver.
Thanks!
Wade Stuart
we are fallon
P: 612.758.2660
C: 612.877.0385
** Fallon has moved. Effective May 19, 2008 our address is 901 Marquette
Ave, Suite 2400, Minneapolis, MN
[EMAIL PROTECTED] wrote on 09/15/2008 11:32:15 PM:
Brandon High wrote:
On Fri, Sep 12, 2008 at 11:49 AM, Dale Ghent [EMAIL PROTECTED]
wrote:
Did I detect a (well-done) metaphor for shared ZFS?
Probably not. It looks like a deduplication / MAID solution.
Yeah, I think they blew
[EMAIL PROTECTED] wrote on 09/04/2008 02:19:23 AM:
Jorgen Lundman wrote:
We did ask our vendor, but we were just told that AVS does not support
x4500.
The officially supported AVS has worked on the X4500 since the X4500 came
out. But, although Jim Dunham and others will tell you otherwise, I
[EMAIL PROTECTED] wrote on 09/04/2008 03:40:46 PM:
On 4-Sep-08, at 4:52 PM, Richard Elling wrote:
Marcelo Leal wrote:
Hello all,
Any plans (or does it already exist) for a send/receive way to get the
backup transfer statistics? I mean, how much was transferred, the
time, and/or bytes/sec?
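There is no built-in statistics option that I know of from that era; a common workaround (a sketch, with hosts and dataset names made up, and pv being a third-party tool you would need installed) is to put a byte-counting pipe in the middle of the stream:
# zfs send tank/data@monday | pv | ssh backuphost "zfs receive backup/data"
pv prints running totals and throughput while the stream flows.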
[EMAIL PROTECTED] wrote on 08/28/2008 09:00:23 AM:
On 28-Aug-08, at 10:54 AM, Toby Thain wrote:
On 28-Aug-08, at 10:11 AM, Richard Elling wrote:
It is rare to see this sort of CNN Moment attributed to file
corruption.
http://www.eweek.com/c/a/IT-Infrastructure/Corrupt-File-Brought-
Does some script-usable ZFS API (if any) provide for fetching
block/file hashes (checksums) stored in the filesystem itself? In
fact, am I wrong to expect file-checksums to be readily available?
Yes. Files are not checksummed, blocks are checksummed.
-- richard
Further, even if
[EMAIL PROTECTED] wrote on 08/22/2008 04:26:35 PM:
Just my 2c: Is it possible to do an offline dedup, kind of like
snapshotting?
What I mean in practice is: we make many Solaris full-root zones.
They share a lot of data as complete files. This makes it kind of easy to
save space - make one
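ZFS clones already give a form of this for zone roots. A sketch (dataset names hypothetical): install one master zone root, snapshot it, and clone the snapshot for each new zone, so the shared files occupy space only once and blocks diverge only as each zone changes them:
# zfs snapshot tank/zones/master@gold
# zfs clone tank/zones/master@gold tank/zones/zone01
# zfs clone tank/zones/master@gold tank/zones/zone02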
As others have noted, the COW nature of ZFS means that there is a
good chance that on a mostly-empty pool, previous data is still intact
long after you might think it is gone. A utility to recover such data is
(IMHO) more likely to be in the category of forensic analysis than
a mount
[EMAIL PROTECTED] wrote on 07/22/2008 08:05:01 AM:
Hi All
Is there any hope for deduplication on ZFS?
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems
Email [EMAIL PROTECTED]
There is always hope.
Seriously though, looking at http://en.wikipedia.
[EMAIL PROTECTED] wrote on 07/22/2008 09:58:53 AM:
To do dedup properly, it seems like there would have to be some
overly complicated methodology for a sort of delayed dedup of the
data. For speed, you'd want your writes to go straight into the
cache and get flushed out as quickly as
[EMAIL PROTECTED] wrote on 07/22/2008 11:48:30 AM:
Chris Cosby wrote:
On Tue, Jul 22, 2008 at 11:19 AM, [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote on 07/22/2008 09:58:53 AM:
To do dedup properly, it
[EMAIL PROTECTED] wrote on 07/13/2008 11:29:07 PM:
ZFS co-inventor Matt Ahrens recently fixed this:
6343667 scrub/resilver has to start over when a snapshot is taken
Trust me when I tell you that solving this correctly was much harder
than you might expect. Thanks again, Matt.
Jeff
Is
Even better would be using the ZFS block checksums (assuming we are only
summing the data, not its position or time :)...
Then we could have two files that have 90% the same blocks, and still
get some dedup value... ;)
Yes, but you will need to add some sort of highly collision resistant
[EMAIL PROTECTED] wrote on 07/08/2008 03:08:26 AM:
Does anyone know a tool that can look over a dataset and give
duplication statistics? I'm not looking for something incredibly
efficient but I'd like to know how much it would actually benefit our
Check out the following blog..:
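For a rough estimate of whole-file duplication on Solaris without any special tool, a slow sketch like this can work (path assumed; it only catches identical complete files, not shared blocks, and ignores filenames with whitespace):
# find /tank/data -type f | xargs digest -a md5 | sort | uniq -c | sort -rn | head
Counts greater than 1 are groups of files with identical content.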
[EMAIL PROTECTED] wrote on 07/08/2008 01:26:15 PM:
Something else came to mind which is a negative regarding
deduplication. When zfs writes new sequential files, it should try to
allocate blocks in a way which minimizes fragmentation (disk seeks).
Disk seeks are the bane of existing storage
[EMAIL PROTECTED] wrote on 06/27/2008 03:39:41 AM:
I'm likely to be building a ZFS server to act as NFS shared storage
for a couple of VMware ESX servers. Does anybody have experience of
using ZFS with VMware like this, and can anybody confirm the best
zpool configuration?
The server
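For the NFS side, the usual pattern is roughly this (dataset name and ESX hostnames are made up; the share options should be checked against share_nfs(1M)):
# zfs create tank/vmware
# zfs set sharenfs='rw=esx1:esx2,root=esx1:esx2' tank/vmware
ESX wants root access to the export, hence the root= option in the share string.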
[EMAIL PROTECTED] wrote on 05/21/2008 10:38:10 AM:
On May 21, 2008, at 11:15 AM, Bob Friesenhahn wrote:
I encountered an issue that people using OS X systems as NFS clients
need to be aware of. While not strictly a ZFS issue, it may be
encountered most often by ZFS users since ZFS makes
[EMAIL PROTECTED] wrote on 05/08/2008 02:31:43 PM:
Luke Scharf wrote:
Dave wrote:
On 05/08/2008 08:11 AM, Ross wrote:
It may be an obvious point, but are you aware that snapshots
need to be stopped any time a disk fails? It's something to
consider if you're planning frequent
[EMAIL PROTECTED] wrote on 04/02/2008 04:54:58 PM:
On 02 April, 2008 - [EMAIL PROTECTED] sent me these 3,4K bytes:
Been googling around on this to no avail...
We're hoping to soon put into production an x4500 with a big ZFS pool,
replacing a (piece of junk) NAS head which replaced our
).
-Wade Stuart
[EMAIL PROTECTED] wrote on 03/20/2008 05:12:01 PM:
All,
I assume this issue is pretty old given the time ZFS has been
around. I have tried searching the list but could not understand
the structure of how ZFS actually takes snapshot space into account.
Snapshot space recording does not
there are some bugs in the Solaris iSCSI target system (mostly
about failover and recovery processes), but it performs quite well. The
problem is, as always, MS itself. :)
Whatever you did to create this email -- please don't. Links work fine.
Thanks!
Wade Stuart
[EMAIL PROTECTED] wrote on 01/15/2008 03:04:15 PM:
Sri
Paul Kraus wrote:
On 1/15/08, Selim Daoud [EMAIL PROTECTED] wrote:
with zfs you can compress data on disk ...that is a great advantage
when doing backup to disk
also, for DSSU you need to multiply the number of filesystems (1 fs per
[EMAIL PROTECTED] wrote on 01/10/2008 08:07:37 PM:
I finally found the cause of the error.
Since my disks are mounted in a cassette with four in each I had to
disconnect all cables to them to replace the crashed disk.
When re-attaching the cables I reversed the order of them by
accident.
If you care enough to do backups, at least care enough to be
able to restore. For my home backups, I use portable drives with
copies=2 or 3 and compression enabled. I don't fool with
incrementals, but many people do. The failure mode I'm worried
about is decay, as the drives will be
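As a sketch of that kind of setup (device and pool names hypothetical; copies only protects data written after the property is set):
# zpool create usbbackup c2t0d0
# zfs set copies=2 usbbackup
# zfs set compression=on usbbackup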
I keep getting ETOOMUCHTROLL errors thrown while reading this list, is
there a list admin that can clean up the mess? I would hope that repeated
personal attacks could be considered grounds for removal/blocking.
Wade Stuart
Fallon Worldwide
P: 612.758.2660
C: 612.877.0385
Please see below for an example.
-Wade
[EMAIL PROTECTED] wrote on 12/07/2007 03:07:29 PM:
I keep getting ETOOMUCHTROLL errors thrown while reading this list, is
there a list admin that can clean up the mess? I would hope that repeated
personal attacks could be considered grounds
Thanks Darren.
I found another link that goes into the 2003 implementation:
http://blogs.technet.com/filecab/archive/tags/Single+Instance+Store+_2800_SIS_2900_/default.aspx
It looks pretty nice, although I am not sure about the userland dedup
service design -- I would like to see it
Darren,
Do you happen to have any links for this? I have not seen anything
about NTFS and CAS/dedupe besides some of the third party apps/services
that just use NTFS as their backing store.
Thanks!
Wade Stuart
Fallon Worldwide
P: 612.758.2660
C: 612.877.0385
[EMAIL PROTECTED] wrote
[EMAIL PROTECTED] wrote on 12/06/2007 09:58:00 AM:
On Dec 6, 2007 1:13 AM, Bakul Shah [EMAIL PROTECTED] wrote:
Note that I don't wish to argue for/against zfs/billtodd but
the comment above about no *real* opensource software
alternative zfs automating checksumming and simple
Resilver and scrub are broken and restart when a snapshot is created -- the
current workaround is to disable snaps while resilvering; the ZFS team is
working on the issue for a long-term fix.
-Wade
[EMAIL PROTECTED] wrote on 11/20/2007 09:58:19 AM:
On b66:
# zpool replace tww
[EMAIL PROTECTED] wrote on 11/20/2007 10:11:50 AM:
On Tue, Nov 20, 2007 at 10:01:49AM -0600, [EMAIL PROTECTED] wrote:
Resilver and scrub are broken and restart when a snapshot is created
-- the current workaround is to disable snaps while resilvering,
the ZFS team is working on the issue
On 9-Nov-07, at 2:45 AM, can you guess? wrote:
Au contraire: I estimate its worth quite
accurately from the undetected error rates reported
in the CERN Data Integrity paper published last
April (first hit if you Google 'cern data
integrity').
While I have yet to see any checksum
It may make sense to post your code level (host and EMC) and your
topology/HBA (type and firmware level) info for the systems you are having
the issues on. EMC setups are very well known to have their reliability
linked to code level and topology -- a machine running 16 code against
backrevved Emulex +
[EMAIL PROTECTED] wrote on 10/09/2007 01:11:16 PM:
I am using an x4500 with a single 4 x (raidz2 9+2) + 2 spare pool.
I have some bad blocks on one of the disks
Oct 9 13:36:01 zeta1 scsi: [ID 107833 kern.warning] WARNING: /[EMAIL
PROTECTED],
0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL
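The usual sequence in that situation (the device name below is hypothetical; zpool status -v shows the real one) is to let a scrub repair what the redundancy can, then replace the failing disk:
# zpool scrub tank
# zpool status -v tank
# zpool replace tank c5t3d0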
Just checking status on the resilver/scrub + snap reset issue -- it is very
painful for large pools such as exist on thumpers that make heavy use of
snaps. Is this still on track for u5/pre-u5 or has it changed? Is there a
different view of these bugs with more information so I do not need to
[EMAIL PROTECTED] wrote on 09/12/2007 08:04:33 AM:
Gino wrote:
The real problem is that ZFS should stop forcing kernel panics.
I found these panics very annoying, too. And even more that the zpool
was faulted afterwards. But my problem is that when someone asks me what
ZFS
Invalidating COW filesystem patents would of course be the best.
Unfortunately those lawsuits are usually not handled in the open
and in order
to understand everything you would need to know about the
background interests
of both parties.
IANAL, but I was under the impression that it
[EMAIL PROTECTED] wrote on 09/10/2007 11:40:16 AM:
Richard Elling wrote:
There is also a long tail situation here, which is how I approached the
problem at eng.Auburn.edu. 1% of the users will use 90% of the
space. For
them, I had special places. For everyone else, they were lumped
[EMAIL PROTECTED] wrote on 09/10/2007 12:13:18 PM:
[EMAIL PROTECTED] wrote:
Very true, you could even pay people to track down heavy users
and
bonk them on the head. Why is everyone responding with alternate
routes to
a simple need?
For the simple reason that sometimes it is
This is my personal opinion and all, but even knowing that Sun
encourages open conversations on these mailing lists and blogs, it seems to
go against common sense for people from @sun.com to be commenting on this
topic. It seems like something users should be aware of, but if I were
working
Playing with patent portfolios is the modern equivalent of playing
the mutually assured destruction game with nuclear missiles. Yes,
we all appreciate how dangerous this game is and how high the
stakes are. But ... notice that a live/armed ballistic missile has
never been fired at a
[EMAIL PROTECTED] wrote on 09/06/2007 01:14:56 PM:
It really is a shot in the dark at this point, you really never know
what
will happen in court (take the example of the recent court decision that
all data in RAM be held for discovery ?!WHAT, HEAD HURTS!?). But at the
end of the day,
If that's the correct reading of the story then the story is very badly
written. Or am I misreading the story?
Hmmm, the order itself goes on and on about RAM. I think the judge
should have been clearer that the issue is the specific data, as opposed
to generic RAM contents.
(Bug is root caused)  (does not tell me anything about the status)
Description: See comments.  (What comments?)
Work Around: N/A
Thanks!
Wade Stuart
Fallon Worldwide
P: 612.758.2660
C: 612.877.0385
[EMAIL PROTECTED] wrote on 08/07/2007 10:53:28 AM:
Hello
Is there a way to limit the size of a filesystem not including snapshots?
Or even better, the size of data on a filesystem regardless of compression.
If not, is it planned?
It is hard to explain to a user that it is normal that after deleting
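Later builds grew a refquota property for exactly this (it limits the space referenced by the active filesystem, excluding snapshots and descendants). If your release has it, usage is simply (dataset name assumed):
# zfs set refquota=10G tank/home/alice
# zfs get refquota,quota tank/home/alice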
[EMAIL PROTECTED] wrote on 08/03/2007 01:35:04 PM:
The OP here is posting the Zillion dollar question ... And
apologies in advance for the verbal diarrhea.
Most of the Enterprise Level systems people here (my company) look
at ZFS and say, Wow that's really cool...but... What comes
On 01/08/2007, at 7:50 PM, Joerg Schilling wrote:
Boyd Adamson [EMAIL PROTECTED] wrote:
Or alternatively, are you comparing ZFS(Fuse) on Linux with XFS on
Linux? That doesn't seem to make sense since the userspace
implementation will always suffer.
Someone has just mentioned
[EMAIL PROTECTED] wrote on 07/13/2007 02:21:52 PM:
Peter Tribble wrote:
I've not got that far. During an import, ZFS just pokes around - there
doesn't seem to be an explicit way to tell it which particular devices
or SAN paths to use.
You can't tell it which devices to use in a
[EMAIL PROTECTED] wrote on 07/12/2007 07:28:29 AM:
Hi all,
Nevada build 67, USB flash Voyager, ...
created a zpool on one of the FDISK partitions on the flash drive; zpool
import/export works fine,
tried to take the USB stick out of the system while the pool is mounted,
..., 3
These must be examples of data that was corrupted on some other filesystem?
Bitrot certainly did a number on these.
-Wade
[EMAIL PROTECTED] wrote on 06/27/2007 06:25:47 PM:
The only thing I haven't found in ZFS yet is metadata etc. info.
The previous 'next best thing' in FS was of course ReiserFS (4). Reiser3
was quite a nice thing, fast, journaled and all that, but Reiser4
promised to bring all those
I guess my real question should have been, IF it turns out that quick
indexing and the like are really the next hot thing, would ZFS support
it (yes, from what I gathered earlier on this list).
[rant mode] Come to think of it, the biggest difference of putting this
info in the FS layer or
[EMAIL PROTECTED] wrote on 06/06/2007 11:45:48 AM:
With US patent laws the way they are, no one but a patent lawyer
could safely give you an answer.
If by some chance a patent lawyer is lurking and decided to comment,
none of the rest of us
could safely read such comments. No one
[EMAIL PROTECTED] wrote on 05/15/2007 09:01:00 AM:
Has anyone else run into this situation? Does anyone have any
solutions other than removing snapshots or increasing the quota?
I'd like to put in an RFE to reserve some space so files can be
removed when users are at their quota. Any
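A common interim workaround (just a sketch, with assumed names, not the requested RFE) is to park a reservation on an otherwise empty dataset and release it when a full pool needs emergency headroom:
# zfs create -o reservation=1G tank/slack
(later, when the pool is full and even rm is failing)
# zfs set reservation=none tank/slack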
[EMAIL PROTECTED] wrote on 05/14/2007 02:10:28 PM:
I did this on Solaris 10u3. 4 x 120GB -> 4 x 500GB drives. Replace,
resilver; repeat until all drives are replaced.
Just beware of the long resilver times -- on a 500GB x 6 raidz2 group at
70% used space a resilver takes 7+ days where snaps
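For reference, the replace-and-grow loop looks roughly like this (device names hypothetical); on builds of that vintage the pool typically only reports the larger size after an export/import once the last disk has resilvered:
# zpool replace tank c0t1d0      (repeat for each disk, waiting for resilver)
# zpool export tank
# zpool import tank
# zpool list tank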
[EMAIL PROTECTED] wrote on 05/10/2007 02:19:17 PM:
I have a scenario where I have several ORACLE databases. I'm trying to
keep system downtime to a minimum for business reasons. I've created
zpools on three devices, an internal 148 GB drive (data) and two
partitions on an HP SAN. HP
[EMAIL PROTECTED] wrote on 05/03/2007 11:35:24 AM:
with recent bits ZFS compression is now handled concurrently with
many CPUs working on different records.
So this load will burn more CPUs and achieve its results
(compression) faster.
So the observed pauses should be consistent
[EMAIL PROTECTED] wrote on 04/25/2007 10:17:50 AM:
Hello list,
After reading the _excellent_ ZFS Best Practices Guide, I've seen in the
section ZFS and Complex Storage Considerations that we should configure
the storage system to ignore commands which will flush the memory into the
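If the array cannot be told to ignore cache flushes, later Solaris builds expose a kernel tunable that stops ZFS issuing them instead. This is only safe when the array's write cache is truly nonvolatile, and the tunable name and availability should be verified for your release (a sketch):
* in /etc/system (reboot required):
set zfs:zfs_nocacheflush = 1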
Eric Schrock [EMAIL PROTECTED] wrote on 04/16/2007 05:29:05 PM:
On Mon, Apr 16, 2007 at 05:13:37PM -0500, [EMAIL PROTECTED] wrote:
Why it was considered a valid data column in its current state is
anyone's guess.
This column is precise and valid. It represents the amount of
[EMAIL PROTECTED] wrote on 04/16/2007 04:57:43 PM:
one pool is a mirror on 300GB drives and the other is raidz1 on 7 x
143GB drives.
I did make a clone of my ZFS file systems with their snaps and something is
not
right, sizes do not match... anyway here is what I have:
[17:50:32] [EMAIL
[EMAIL PROTECTED] wrote on 04/12/2007 04:47:06 PM:
Management here is worried about performance under ZFS because they had
a bad experience with Instant Image a number of years ago. When iiamd
was used, server performance was reduced to a crawl. Hence they want
proof in the form of
[EMAIL PROTECTED] wrote on 04/12/2007 04:47:46 PM:
On Thu, 2007-04-12 at 14:09 -0600, Mark Shellenbaum wrote:
Pawel Jakub Dawidek wrote:
What are your suggestions?
I am currently working on adding a number of the BSD flags into ZFS.
The existence of the FreeBSD port plus the
[EMAIL PROTECTED] wrote on 03/28/2007 06:34:12 AM:
Hi Gino,
this looks like an instance of bug 6458218 (see
http://bugs.opensolaris.org/view_bug.do?bug_id=6458218)
The fix for this bug is integrated into snv_60.
Kind regards,
Victor
I know I may be somewhat of an outsider here,
[EMAIL PROTECTED] wrote on 03/26/2007 11:00:23 AM:
Tomas Ögren wrote:
On 26 March, 2007 - Hans-Juergen Schnitzer sent me these 3,5K bytes:
I am sorry, we don't have oracle databases on zfs filesystems
(colleagues of mine are currently exploring RMAN to backup oracle
databases,
----- Forwarded by Wade Stuart/FALMSP/USA/NA/FALLON on 03/26/07 11:30 AM -----
Wade Stuart/FALMSP/USA/NA/FALLON wrote on 03/26/2007 11:02:08 AM:
[EMAIL PROTECTED] wrote on 03/26/2007 10:58:07 AM:
I have tested link performance and got these results:
with hardlinks
[EMAIL PROTECTED] wrote on 03/21/2007 11:00:43 AM:
The problem is that in order to restrict disk usage, ZFS *requires*
that you create this many filesystems. I think most in this situation
would prefer not to have to do that. The two solutions I see would
be to add user quotas to ZFS
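Until per-user quotas exist, the filesystem-per-user pattern is simply (names assumed):
# zfs create tank/home/alice
# zfs set quota=10G tank/home/alice
Filesystems are cheap to create, but as noted it still forces one per user.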
http://mail.opensolaris.org/pipermail/perf-discuss/2006-May/000540.html
Wade Stuart
Fallon Worldwide
P: 612.758.2660
C: 612.877.0385
I guess I'm not a good cook.
[EMAIL PROTECTED] wrote on 03/20/2007 11:40:15 AM:
Erast Benson wrote:
On Tue, 2007-03-20 at 09:29 -0700, Erast Benson
Folks,
Is there any update on the progress of fixing the resilver/snap/scrub
reset issues? If the bits have been pushed is there a patch for Solaris
10U3?
http://bugs.opensolaris.org/view_bug.do?bug_id=6343667
Also the scrub/resilver priority setting?
[EMAIL PROTECTED] wrote on 03/07/2007 12:31:14 PM:
So it sounds like the consensus is that I should not worry about
using slices with ZFS
and the swap best practice doesn't really apply to my situation of a
4 disk x4200.
So in summary (please confirm), this is what we are saying is a
[EMAIL PROTECTED] wrote on 03/05/2007 04:18:44 AM:
How embarrassing is that? Pete kindly pointed me to the man page
where it clearly states that I should use zpool scrub [-s] pool,
-s for stop scrubbing. Sorry folks, I just looked in the
Administration Guide where I couldn't find it.
[EMAIL PROTECTED] wrote on 03/05/2007 03:56:28 AM:
one question,
is there a way to stop the default txg push behaviour (push at a regular
timestep -- default is 5 sec) and instead push them on the fly... I
would imagine this is better in the case of an application doing big
sequential writes
[EMAIL PROTECTED] wrote on 03/01/2007 10:05:44 AM:
I am forced to reinstall s10u3 on my x4500, SP 1.1.1. I exported the zpool,
and discovered during the reinstall that the controller numbers have
changed. What used to be c5t0d0 is now c6t0d0. As it happens the exported
zpool is using only half
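Controller renumbering generally should not matter, because import identifies disks by the GUIDs in their on-disk labels rather than by cN names. A sketch (pool name assumed):
# zpool import            (scan devices and list importable pools)
# zpool import tank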
[EMAIL PROTECTED] wrote on 02/26/2007 11:36:18 AM:
Jeff Davis wrote:
Given your question are you about to come back with a
case where you are not
seeing this?
Actually, the case where I saw the bad behavior was in Linux using
the CFQ I/O scheduler. When reading the same file
[EMAIL PROTECTED] wrote on 02/23/2007 02:15:44 PM:
Is there any way to convert an existing UFS file system to ZFS?
Not live; good old tape-restore or transfer-to-new-disk methods only. You
may minimize downtime depending on workload by doing a disk-disk copy live
and then doing an rsync
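A sketch of that two-pass copy (paths are examples only):
# zfs create tank/home
# rsync -aHx /export/home/ /tank/home/            (first pass, system still live)
(quiesce applications / unshare the old filesystem)
# rsync -aHx --delete /export/home/ /tank/home/   (short final catch-up pass)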
[EMAIL PROTECTED] wrote on 02/20/2007 08:10:59 AM:
On Tue, Feb 20, 2007 at 02:07:41PM +0100, Robert Milkowski wrote:
Hello Jeremy,
Monday, February 19, 2007, 1:58:18 PM, you wrote:
Something similar was proposed here before and IIRC someone even has
a
working implementation. I
There's a fundamental problem with an undelete facility.
$ echo > FILE
$ undelete FILE
cannot undelete FILE: file exists
Why the assumption that an undelete command would be brain dead -- this IS
Unix. =) Seems like a low bar issue, if file exists and
If you run a 'zpool scrub preplica-1', then the persistent error log
will be cleaned up. In the future, we'll have a background scrubber
to make your life easier.
eric
Eric,
Great news! Are there any details about how this will be implemented
yet? I am most curious to how
[EMAIL PROTECTED] wrote on 02/13/2007 09:48:54 AM:
In the ZFS Best Practices Guide here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
It says:
``Currently, pool performance can degrade when a pool is very full
and file systems are updated
TW Am I using send/recv incorrectly or is there something else
going on here that
TW I am missing?
It's a known bug.
umount and rollback file system on host 2. You should see 0 used space
on a snapshot and then it should work.
Bug ID? Is it related to atime changes?
--
Best
Hello Wade,
Thursday, February 8, 2007, 8:00:40 PM, you wrote:
TW Am I using send/recv incorrectly or is there something else
going on here that
TW I am missing?
It's a known bug.
umount and rollback file system on host 2. You should see 0 used space
on a snapshot
[EMAIL PROTECTED] wrote on 02/02/2007 10:34:22 AM:
Thanks Darren! I got led down the wrong path by following newfs.
Now my other question is: how would you add raw storage to the vtank
(virtual filesystem) as the usage approached the current underlying
raw storage?
Would you going
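Growing the pool underneath a ZFS filesystem is just a zpool add of more vdevs (device names below are hypothetical); note that added top-level vdevs cannot be removed later, so they should match the pool's existing redundancy:
# zpool add vtank mirror c2t0d0 c2t1d0
# zpool list vtank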
[EMAIL PROTECTED] wrote on 02/02/2007 11:16:32 AM:
Hi all,
Longtime reader, first-time poster. Sorry for the lengthy intro,
and not really sure the title matches what I'm trying to get at... I
am trying to find a solution where making use of a zfs filesystem
can shorten our backup
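If the backup target can also run ZFS, incremental send/receive is the usual way to shrink the window: only blocks changed since the previous snapshot cross the wire. A sketch (hosts, datasets, and snapshot names are made up):
# zfs snapshot tank/data@2007-02-02
# zfs send -i tank/data@2007-02-01 tank/data@2007-02-02 | \
      ssh backuphost "zfs receive -F backup/data"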
[EMAIL PROTECTED] wrote on 02/01/2007 01:17:15 PM:
The ZFS On-Disk specification and other ZFS documentation describe
the labeling scheme used for the vdevs that comprise a ZFS pool. A
label entry contains, among other things, an array of uberblocks,
one of which will point to the
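You can inspect those labels on a live system with zdb (the device path below is just an example); each of the four labels carries the uberblock array described in the spec:
# zdb -l /dev/rdsk/c0t0d0s0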
One of the benefits of ZFS is that not only is head synchronization not
needed, but also block offsets do not have to be the same. For example,
in a traditional mirror, block 1 on device 1 is paired with block 1 on
device 2. In ZFS, this 1:1 mapping is not required. I believe
[EMAIL PROTECTED] wrote on 01/29/2007 03:45:58 PM:
I attempted to increase my raidz from 2 disks to 3, but it looks
like I added the drive outside of the raidz:
# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
amber  1.36T   879G
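That is likely what happened: zpool add attaches the new disk as a separate top-level vdev rather than widening the raidz, since (at least on builds of that era) an existing raidz vdev cannot be expanded and an added top-level vdev cannot be removed afterwards. The resulting layout shows up in:
# zpool status amber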
, but as a feature that we know is
missing that we really need to consider moving some of our systems from
vxvm/fs to zfs.
-Wade Stuart
--
Constantin Gonzalez                      Sun Microsystems GmbH, Germany
Platform Technology Group, Client Solutions
http://www.sun.de/
Tel.: +49 89/4 60 08
and summed (
username:100,000:/tank/home/username;/tank/departments/usersdepartment #
allow 100,000 bytes to be used in total between these two unrelated
filesystems by this user)...
I have faith that user quotas are going to come sometime; these 'how'
questions are interesting to me...
-Wade Stuart