On 18-Nov-07, at 7:30 PM, Dickon Hood wrote:
...
: If you're still referring to your incompetent alleged research,
[...]
: [...] right out of the
: same orifice from which you've pulled the rest of your crap.
It's language like that that is causing the problem. IMHO you're
being a
On 29-Nov-07, at 2:48 PM, Tom Buskey wrote:
Getting back to 'consumer' use for a moment, though,
given that something like 90% of consumers entrust
their PC data to the tender mercies of Windows, and a
large percentage of those neither back up their data,
nor use RAID to guard against media
On 29-Nov-07, at 4:09 PM, Paul Kraus wrote:
On 11/29/07, Toby Thain [EMAIL PROTECTED] wrote:
Xserve + Xserve RAID... ZFS is already in OS X 10.5.
As easy to set up and administer as any OS X system; a problem-free
and FAST network server to Macs or PCs.
That is a great theory
On 5-Dec-07, at 4:19 AM, can you guess? wrote:
On 11/7/07, can you guess?
[EMAIL PROTECTED]
wrote:
However, ZFS is not the *only* open-source
approach
which may allow that to happen, so the real
question
becomes just how it compares with equally
inexpensive
current and potential
On 4-Dec-07, at 9:35 AM, can you guess? wrote:
Your response here appears to refer to a different post in this
thread.
I never said I was a typical consumer.
Then it's unclear how your comment related to the material which
you quoted (and hence to which it was apparently responding).
On 11-Dec-07, at 9:44 PM, Robert Milkowski wrote:
Hello can,
...
What some people are also looking for, I guess, is a black-box
approach - easy to use GUI on top of Solaris/ZFS/iSCSI/etc. So they
don't have to even know it's ZFS or Solaris. Well...
Pretty soon OS X will be exactly that -
On 13-Dec-07, at 1:56 PM, Shawn Joy wrote:
What are the commands? Everything I see is c1t0d0, c1t1d0; no
slice, just the complete disk.
I have used the following HOWTO. (Markup is TWiki, FWIW.)
Device names are for a 2-drive X2100. Other machines may differ, for
example, X4100
On 13-Dec-07, at 3:54 PM, Richard Elling wrote:
[EMAIL PROTECTED] wrote:
Shawn,
Using slices for ZFS pools is generally not recommended so I think
we minimized any command examples with slices:
# zpool create tank mirror c1t0d0s0 c1t1d0s0
Cindy,
I think the term generally not
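For comparison, a minimal sketch of the usually-recommended whole-disk form (device names are illustrative; substitute your own):

# zpool create tank mirror c1t0d0 c1t1d0
# zpool status tank

Handing ZFS the whole disks lets it label them and enable the drives' write caches itself, which is the main reason slices are discouraged for ordinary data pools (root pools being the common exception).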
On 9-Jan-08, at 10:26 PM, Noël Dellofano wrote:
As I mentioned, ZFS is still BETA, so there are (and likely will be)
some issues turning up with compatibility with the upper layers of the
system if that's what you're referring to.
Two potential areas come immediately to mind - case sensitivity
On 26-Jan-08, at 2:24 AM, Joachim Pihl wrote:
Running SXDE (snv_70) for a file server, and I must admit I'm new to
Solaris and zfs. zfs does not appear to do any compression at all,
here is
what I did to set it up:
I created a four drive raidz array:
zpool create pool raidz c0d0 c0d1
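A minimal sketch of enabling and checking compression (pool name is illustrative):

# zfs set compression=on pool
# zfs get compression,compressratio pool

Compression only applies to blocks written after the property is turned on, and data that is already compressed (media files, archives) will report a ratio close to 1.00x, which can look like "no compression at all".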
On 27-Aug-08, at 1:41 PM, W. Wayne Liauh wrote:
Please read Akhilesh's answer carefully and stop
repeating
the same thing. Staroffice is to Latex/Framemaker
what a
mid-size sedan is to an 18-wheeler. To the untrained
eye,
they appear to perform similar actions, but the
actual overlap
On 27-Aug-08, at 5:47 PM, Ian Collins wrote:
Tim writes:
On Wed, Aug 27, 2008 at 3:29 PM, Ian Collins [EMAIL PROTECTED]
wrote:
Does anyone have any tuning tips for a Subversion repository on
ZFS? The
repository will mainly be storing binary files (MS Office documents).
It looks like a
On 27-Aug-08, at 7:21 PM, Ian Collins wrote:
Miles Nordin writes:
In addition, I'm repeating myself like crazy at this point, but ZFS
tools used for all pools like 'zpool status' need to not freeze
when a
single pool, or single device within a pool, is unavailable or slow,
and this
On 28-Aug-08, at 10:11 AM, Richard Elling wrote:
It is rare to see this sort of CNN Moment attributed to file
corruption.
http://www.eweek.com/c/a/IT-Infrastructure/Corrupt-File-Brought-Down-FAAs-Antiquated-IT-System/?kc=EWKNLNAV08282008STR4
two 20-year-old redundant mainframe
On 28-Aug-08, at 10:54 AM, Toby Thain wrote:
On 28-Aug-08, at 10:11 AM, Richard Elling wrote:
It is rare to see this sort of CNN Moment attributed to file
corruption.
http://www.eweek.com/c/a/IT-Infrastructure/Corrupt-File-Brought-Down-FAAs-Antiquated-IT-System/?kc=EWKNLNAV08282008STR4
On 30-Aug-08, at 2:32 AM, Todd H. Poole wrote:
Wrt. what I've experienced and read on the zfs-discuss etc. lists,
I have the
__feeling__ that we would have gotten into real trouble using
Solaris
(even the most recent one) on that system ...
So if one asks me, whether to run Solaris+ZFS on
On 4-Sep-08, at 4:52 PM, Richard Elling wrote:
Marcelo Leal wrote:
Hello all,
Any plans (or already have), a send/receive way to get the
transfer backup statistics? I mean, the how much was transfered,
time and/or bytes/sec?
I'm not aware of any plans, you should file an RFE.
And
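A common interim workaround, assuming the third-party pv(1) pipe viewer is installed, is to put it in the pipeline so you at least see bytes transferred and throughput (dataset and host names are illustrative):

# zfs send tank/fs@snap | pv | ssh remotehost zfs receive backup/fs

pv reports the running byte count, elapsed time, and rate on stderr while the stream passes through.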
On 30-Sep-08, at 6:58 AM, Ahmed Kamal wrote:
Thanks for all the answers .. Please find more questions below :)
- Good to know EMC filers do not have end2end checksums! What about
netapp ?
Bluntly - no remote storage can have it by definition. The checksum
needs to be computed as close as
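To illustrate what "end to end" means in practice, here is a hedged sketch of application-level verification using the Solaris digest(1) utility (file paths are hypothetical):

# digest -a sha256 /tank/data/report.db > /var/tmp/report.db.sha256
# digest -a sha256 /tank/data/report.db | diff - /var/tmp/report.db.sha256

The first command records the checksum at write time, the second re-checks it at read time. Any checksum computed inside the array can only cover the path from the controller to the platter; it cannot catch corruption introduced between the application and the array.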
On 30-Sep-08, at 7:50 AM, Ram Sharma wrote:
Hi,
can anyone please tell me what is the maximum number of files that
can be there in 1 folder in Solaris with the ZFS file system.
I am working on an application in which I have to support 1 million
users. In my application I am using MySQL MyISAM
On 30-Sep-08, at 6:31 PM, Tim wrote:
On Tue, Sep 30, 2008 at 5:19 PM, Erik Trimble
[EMAIL PROTECTED] wrote:
To make Will's argument more succinct (wink), with a NetApp,
undetectable (by the NetApp) errors can be introduced at the HBA
and transport layer (FC Switch, slightly damage
On 30-Sep-08, at 9:54 PM, Tim wrote:
On Tue, Sep 30, 2008 at 8:50 PM, Toby Thain
[EMAIL PROTECTED] wrote:
NetApp's block-appended checksum approach appears similar but is
in fact much stronger. Like many arrays, NetApp formats its drives
with 520-byte sectors. It then groups them
On 1-Oct-08, at 1:56 AM, Ram Sharma wrote:
Hi Guys,
Thanks for so many good comments. Perhaps I got even more than what
I asked for!
I am targeting 1 million users for my application. My DB will be on
a Solaris machine. And the reason I am making one table per user is
that it will be a
On 18-Oct-08, at 12:46 AM, Roch Bourbonnais wrote:
Leave the default recordsize. With 128K recordsize, files smaller than
128K are stored as a single record
tightly fitted to the smallest possible # of disk sectors. Reads and
writes are then managed with fewer ops.
Not tuning the recordsize
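The usual exception is a database or similar workload doing fixed-size random I/O smaller than 128K. A hedged sketch, assuming a hypothetical 16K database page size and dataset name:

# zfs create -o recordsize=16K tank/db
# zfs get recordsize tank/db

For ordinary file serving, leaving the default as described above is almost always the right call.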
On 24-Nov-08, at 10:40 AM, Scara Maccai wrote:
Why would it be assumed to be a bug in Solaris? Seems
more likely on
balance to be a problem in the error reporting path
or a controller/
firmware weakness.
Weird: they would use a controller/firmware that doesn't work? Bad
call...
Seems
On 24-Nov-08, at 3:49 PM, Miles Nordin wrote:
tt == Toby Thain [EMAIL PROTECTED] writes:
tt Why would it be assumed to be a bug in Solaris? Seems more
tt likely on balance to be a problem in the error reporting path
tt or a controller/ firmware weakness.
It's not really
On 25-Nov-08, at 5:10 AM, Ross Smith wrote:
Hey Jeff,
Good to hear there's work going on to address this.
What did you guys think of my idea of ZFS supporting a "waiting for a
response" status for disks as an interim solution that allows the pool
to continue operation while it's waiting for
On 26-Nov-08, at 10:30 AM, C. Bergström wrote:
... Also is it more efficient/better
performing to give swap a 2nd slice on the inner part of the disk
or not
care and just toss it on top of zfs?
I think the thing about swap is that if you're swapping, you probably
have more to worry
On 1-Dec-08, at 10:05 PM, Glaser, David wrote:
Hi all,
I have a Thumper (ok, actually 3) with each having one large pool,
multiple filesystems and many snapshots. They are holding rsync
copies of multiple clients, being synced every night (using
snapshots to keep ‘incremental’
On 2-Dec-08, at 8:24 AM, Glaser, David wrote:
Ok, thanks for all the responses. I'll probably do every other week
scrubs, as this is the backup data (so doesn't need to be checked
constantly).
Even that is probably more frequent than necessary. I'm sure somebody
has done the MTTDL
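A scrub is just a command, so scheduling it is a one-line cron entry. A sketch, assuming a pool named tank and a monthly cadence:

# zpool scrub tank
# zpool status tank

zpool status shows scrub progress and the completion time of the last pass. A root crontab entry along these lines would run it at 3am on the first of each month:

0 3 1 * * /usr/sbin/zpool scrub tank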
On 2-Dec-08, at 3:35 PM, Miles Nordin wrote:
r == Ross [EMAIL PROTECTED] writes:
r style before I got half way through your post :) [...status
r problems...] could be a case of oversimplifying things.
...
And yes, this is a religious argument. Just because it spans decades
of
On 6-Dec-08, at 7:10 AM, Orvar Korvar wrote:
It's not me. There are people on Linux forums that want to try out
Solaris + ZFS, and this is a concern for them. What should I tell
them? That it is not fixed? That they have to reboot every week?
Someone knows?
That it's not recommended for
On 11-Dec-08, at 12:28 PM, Robert Milkowski wrote:
Hello Anton,
Thursday, December 11, 2008, 4:17:15 AM, you wrote:
It sounds like you have access to a source of information that the
rest of us don't have access to.
ABR I think if you read the archives of this mailing list, and
ABR
On 12-Dec-08, at 3:10 PM, Miles Nordin wrote:
tt == Toby Thain t...@telegraphics.com.au writes:
mg == Mike Gerdts mger...@gmail.com writes:
tt I think we have to assume Anton was joking - otherwise his
tt measure is uselessly unscientific.
I think it's rude to talk about someone
On 12-Dec-08, at 3:38 PM, Johan Hartzenberg wrote:
...
The only bit that I understand about why HW raid might be bad is
that if ZFS had access to the disks behind a HW RAID LUN, then _IF_
it were to encounter corrupted data in a read, it would probably be
able to re-construct that data.
Maybe the format allows unlimited O(1) snapshots, but it's at best
O(1) to take them. All over the place it's probably O(n) or worse to
_have_ them: to boot with them, to scrub with them.
Why would a scrub be O(n snapshots)?
The O(n filesystems) effects reported from time to time in
On 6-Jan-09, at 1:19 PM, Bob Friesenhahn wrote:
On Tue, 6 Jan 2009, Jacob Ritorto wrote:
Is urandom nonblocking?
The OS provided random devices need to be secure and so they depend on
collecting entropy from the system so the random values are truly
random. They also execute complex
On 7-Jan-09, at 9:43 PM, JZ wrote:
ok, Scott, that sounded sincere. I am not going to do the pic thing
on you.
But do I have to spell this out to you -- some things are invented
not for
home use?
Cindy, would you want to do ZFS at home,
Why would you disrespect your personal data?
On 11-Jan-09, at 3:28 PM, Tom Bird wrote:
Bob Friesenhahn wrote:
On Sun, 11 Jan 2009, Eric D. Mudama wrote:
My impression is not that other OS's aren't interested in ZFS, they
are, it's that the licensing restrictions limit native support to
Solaris, BSD, and OS-X.
Perhaps the
On 12-Jan-09, at 3:43 PM, JZ wrote:
[having late lunch hour for beloved Orvar]
one more baby scenario for your consideration --
you can give me some ZFS based codes and I will go to china and
burn some HW
RAID ASICs to fulfill your desire?
Is that what passes for product development
On 18-Jan-09, at 6:12 PM, Nathan Kroenert wrote:
Hey, Tom -
Correct me if I'm wrong here, but it seems you are not allowing ZFS
any
sort of redundancy to manage.
Which is particularly catastrophic when one's 'content' is organized
as a monolithic file, as it is here - unless, of
On 21-Jan-09, at 9:11 PM, Brandon High wrote:
On Wed, Jan 21, 2009 at 5:40 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
Several people reported this same problem. They changed their
ethernet adaptor to an Intel ethernet interface and the performance
problem went away. It was
On 26-Jan-09, at 6:21 PM, Jakov Sosic wrote:
So I wonder now, how to fix this up? Why doesn't
scrub overwrite bad data with good data from first
disk?
ZFS doesn't know why the errors occurred, the most
likely scenario would be a
bad disk -- in which case you'd need to replace it.
I know
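The usual sequence in that case, sketched with illustrative pool and device names:

# zpool status -v tank
# zpool replace tank c1t1d0
# zpool clear tank

zpool status -v identifies the device accumulating errors and any affected files; zpool replace resilvers onto a replacement if the drive itself is suspect; zpool clear resets the error counters once the cause has been dealt with.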
On 26-Jan-09, at 8:15 PM, Miles Nordin wrote:
js == Jakov Sosic jso...@gmail.com writes:
tt == Toby Thain t...@telegraphics.com.au writes:
js Yes but that will do the complete resilvering, and I just want
js to fix the corrupted blocks... :)
tt What you are asking
On 29-Jan-09, at 2:17 PM, Ross wrote:
Yeah, breaking functionality is one of the main reasons people are
going to be trying OpenSolaris is just dumb... really, really dumb.
One thing Linux, Windows, OS/X, etc all get right is that they're
pretty easy to use right out of the box.
On 29-Jan-09, at 4:53 PM, Volker A. Brandt wrote:
Given the massive success of GNU based systems (Linux, OS X, *BSD)
Ouch! Neither OSX nor *BSD are GNU-based.
I meant, extensive GNU userland (in OS X's case).
(sorry Ian)
--Toby
They do ship with
GNU-related things but that's been a
On 4-Feb-09, at 6:19 AM, Michael McKnight wrote:
Hello everyone,
I am trying to take ZFS snapshots (i.e. zfs send) and burn them to
DVDs for offsite storage. In many cases, the snapshots greatly
exceed the 8GB I can stuff onto a single DVD-DL.
In order to make this work, I have used
On 4-Feb-09, at 2:29 PM, Bob Friesenhahn wrote:
On Wed, 4 Feb 2009, Toby Thain wrote:
In order to make this work, I have used the split utility ...
I use the following command to convert them back into a single file:
# cat mypictures.zfssnap.split.a[a-g] > testjoin
But when I compare
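For reference, a hedged end-to-end sketch of the split-and-rejoin workflow (names and sizes are illustrative; the stream must be byte-identical after rejoining or zfs receive will reject it):

# zfs send tank/pictures@offsite > /backup/mypictures.zfssnap
# split -b 4000m /backup/mypictures.zfssnap mypictures.zfssnap.split.
(burn the .split.aa, .split.ab, ... pieces to DVD)
# cat mypictures.zfssnap.split.a[a-g] > /backup/rejoined.zfssnap
# cmp /backup/mypictures.zfssnap /backup/rejoined.zfssnap
# zfs receive tank/pictures_restore < /backup/rejoined.zfssnap

Comparing the rejoined file against the original (or at least against a recorded checksum) before deleting anything is the step that catches a bad burn or a bad read.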
On 9-Feb-09, at 6:17 PM, Miles Nordin wrote:
ok == Orvar Korvar knatte_fnatte_tja...@yahoo.com writes:
ok You are not using ZFS correctly.
ok You have misunderstood how it is used. If you dont follow the
ok manual (which you havent) then any filesystem will cause
ok
On 10-Feb-09, at 1:03 PM, Charles Binford wrote:
Jeff, what do you mean by disks that "simply blow off write
ordering"?
My experience is that most enterprise disks are some flavor of
SCSI, and
host SCSI drivers almost ALWAYS use simple queue tags, implying the
target is free to re-order the
On 10-Feb-09, at 1:05 PM, Peter Schuller wrote:
YES! I recently discovered that VirtualBox apparently defaults to
ignoring flushes, which would, if true, introduce a failure mode
generally absent from real hardware (and eventually resulting in
consistency problems quite unexpected to the user
On 10-Feb-09, at 7:41 PM, Jeff Bonwick wrote:
well, if you want a write barrier, you can issue a flush-cache and
wait for a reply before releasing writes behind the barrier. You
will
get what you want by doing this for certain.
Not if the disk drive just *ignores* barrier and
On 10-Feb-09, at 10:36 PM, Frank Cusack wrote:
On February 10, 2009 4:41:35 PM -0800 Jeff Bonwick
jeff.bonw...@sun.com wrote:
Not if the disk drive just *ignores* barrier and flush-cache commands
and returns success. Some consumer drives really do exactly that.
ouch.
If it were possible
On 11-Feb-09, at 10:08 AM, David Dyer-Bennet wrote:
On Tue, February 10, 2009 23:43, Uwe Dippel wrote:
1. Can the relevant people confirm that drives might turn dead when
leaving a pool at unfortunate moments? Despite complete physical
integrity? [I'd really appreciate an answer here,
On 11-Feb-09, at 11:19 AM, Tim wrote:
...
And yes, I do keep checksums of all the data sitting on them and
periodically check it. So, for all of your ranting and raving, the
fact remains even a *crappy* filesystem like fat32 manages to
handle a hot unplug without any prior notice
On 11-Feb-09, at 5:52 PM, David Dyer-Bennet wrote:
On Wed, February 11, 2009 15:52, Bob Friesenhahn wrote:
On Wed, 11 Feb 2009, Tim wrote:
Right, except the OP stated he unmounted the filesystem in
question, and
it
was the *ONLY* one on the drive, meaning there is absolutely 0
chance
On 11-Feb-09, at 7:16 PM, Uwe Dippel wrote:
I need to disappoint you here, LED inactive for a few seconds is a
very bad indicator of pending writes. Used to experience this on a
stick on Ubuntu, which was silent until the 'umount' and then it
started to write for some 10 seconds.
On the
On 12-Feb-09, at 3:02 PM, Tim wrote:
On Thu, Feb 12, 2009 at 11:31 AM, David Dyer-Bennet d...@dd-b.net
wrote:
On Thu, February 12, 2009 10:10, Ross wrote:
Of course, that does assume that devices are being truthful when
they say
that data has been committed, but a little data loss
On 12-Feb-09, at 7:02 PM, Eric D. Mudama wrote:
On Thu, Feb 12 at 21:45, Mattias Pantzare wrote:
A read of data in the disk cache will be read from the disk cache.
You
can't tell the disk to ignore its cache and read directly from the
platter.
The only way to test this is to write and the
On 14-Feb-09, at 2:40 AM, Andras Spitzer wrote:
Damon,
Yes, we can provide simple concat inside the array (even though
today we provide RAID5 or RAID1 as our standard, and using Veritas
with concat), the question is more of if it's worth it to switch
the redundancy from the array to the
On 17-Feb-09, at 3:01 PM, Scott Lawson wrote:
Hi All,
...
I have seen other people discussing power availability on other
threads
recently. If you
want it, you can have it. You just need the business case for it. I
don't buy the comments
on UPS unreliability.
Hi,
I remarked on it. FWIW,
On 17-Feb-09, at 8:28 PM, Asif Iqbal wrote:
On Tue, Feb 17, 2009 at 5:52 PM, Robert Milkowski
mi...@task.gda.pl wrote:
Hello Asif,
Tuesday, February 17, 2009, 7:43:41 PM, you wrote:
AI Hi All
AI Does anyone have any experience on running qmail on solaris 10
with ZFS only?
AI I would
On 17-Feb-09, at 9:35 PM, Scott Lawson wrote:
Toby Thain wrote:
On 17-Feb-09, at 3:01 PM, Scott Lawson wrote:
Hi All,
...
I have seen other people discussing power availability on other
threads
recently. If you
want it, you can have it. You just need the business case for it. I
don't
On 24-Feb-09, at 1:37 PM, Mattias Pantzare wrote:
On Tue, Feb 24, 2009 at 19:18, Nicolas Williams
nicolas.willi...@sun.com wrote:
On Mon, Feb 23, 2009 at 10:05:31AM -0800, Christopher Mera wrote:
I recently read up on Scott Dickson's blog with his solution for
jumpstart/flashless cloning of
On 25-Feb-09, at 9:53 AM, Moore, Joe wrote:
Miles Nordin wrote:
that SQLite2 should be equally as tolerant of snapshot backups
as it
is of cord-yanking.
The special backup features of databases including ``performing a
checkpoint'' or whatever, are for systems incapable of snapshots,
On 4-Mar-09, at 2:07 AM, Stephen Nelson-Smith wrote:
Hi,
I recommended a ZFS-based archive solution to a client needing to have
a network-based archive of 15TB of data in a remote datacentre. I
based this on an X2200 + J4400, Solaris 10 + rsync.
This was enthusiastically received, to the
On 4-Mar-09, at 1:28 PM, Bob Friesenhahn wrote:
I don't know if anyone has noticed that the topic is google summer
of code. There is only so much that a starving college student
can accomplish from a dead-start in 1-1/2 months. The ZFS
equivalent of eliminating world hunger is not among
On 4-Mar-09, at 7:35 PM, Gary Mills wrote:
On Wed, Mar 04, 2009 at 01:20:42PM -0500, Miles Nordin wrote:
gm == Gary Mills mi...@cc.umanitoba.ca writes:
gm I suppose my RFE for two-level ZFS should be included,
Not that my opinion counts for much, but I wasn't deaf to it---I did
On 5-Mar-09, at 2:03 PM, Miles Nordin wrote:
gm == Gary Mills mi...@cc.umanitoba.ca writes:
gm There are many different components that could contribute to
gm such errors.
yes of course.
gm Since only the lower ZFS has data redundancy, only it can
gm correct the error.
um,
On 14-Mar-09, at 12:09 PM, Blake wrote:
I just thought of an enhancement to zfs that would be very helpful in
disaster recovery situations - having zfs cache device serial/model
numbers - the information we see in cfgadm -v.
+1 I haven't needed this but it sounds very sensible. I can
On 17-Mar-09, at 3:32 PM, cindy.swearin...@sun.com wrote:
Neal,
You'll need to use the text-based initial install option.
The steps for configuring a ZFS root pool during an initial
install are covered here:
http://opensolaris.org/os/community/zfs/docs/
Page 114:
Example 4–1 Initial
On 10-Apr-09, at 2:03 PM, Harry Putnam wrote:
David Magda dma...@ee.ryerson.ca writes:
On Apr 7, 2009, at 16:43, OpenSolaris Forums wrote:
if you have a snapshot of your files and rsync the same files again,
you need to use --inplace rsync option , otherwise completely new
blocks will be
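A hedged sketch of that invocation (paths are illustrative):

# rsync -a --inplace --no-whole-file /data/src/ /tank/backup/src/

--inplace makes rsync overwrite the existing file rather than building a temporary copy and renaming it, and --no-whole-file forces the delta algorithm even for local copies, so blocks that did not change are not rewritten and stay shared with the earlier snapshot.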
On 10-Apr-09, at 5:05 PM, Mark J Musante wrote:
On Fri, 10 Apr 2009, Patrick Skerrett wrote:
degradation) when these write bursts come in, and if I could
buffer them even for 60 seconds, it would make everything much
smoother.
ZFS already batches up writes into a transaction group,
On 15-Apr-09, at 8:31 PM, Frank Middleton wrote:
On 04/15/09 14:30, Bob Friesenhahn wrote:
On Wed, 15 Apr 2009, Frank Middleton wrote:
zpool status shows errors after a pkg image-update
followed by a scrub.
If a corruption occured in the main memory, the backplane, or the
disk
On 16-Apr-09, at 5:27 PM, Florian Ermisch wrote:
Uwe Dippel wrote:
Bob Friesenhahn wrote:
Since it was not reported that user data was impacted, it seems
likely that there was a read failure (or bad checksum) for ZFS
metadata which is redundantly stored.
(Maybe I am too much of a
On 17-Apr-09, at 11:49 AM, Frank Middleton wrote:
... One might argue that a machine this flaky should
be retired, but it is actually working quite well,
If it has bad memory, you won't get much useful work done on it until
the memory is replaced - unless you want to risk your data with
On 19-Apr-09, at 10:38 AM, Uwe Dippel wrote:
casper@sun.com wrote:
We are back at square one; or, at the subject line.
I did a zpool status -v, everything was hunky dory.
Next, a power failure, 2 hours later, and this is what zpool
status -v thinks:
zpool status -v
pool: rpool
state:
On 22-May-09, at 5:24 PM, Frank Middleton wrote:
There have been a number of threads here on the reliability of ZFS
in the
face of flaky hardware. ZFS certainly runs well on decent (e.g.,
SPARC)
hardware, but isn't it reasonable to expect it to run well on
something
less well engineered?
On 26-May-09, at 10:21 AM, Frank Middleton wrote:
On 05/26/09 03:23, casper@sun.com wrote:
And where exactly do you get the second good copy of the data?
From the first. And if it is already bad, as noted previously, this
is no worse than the UFS/ext3 case. If you want total freedom
On 25-May-09, at 11:16 PM, Frank Middleton wrote:
On 05/22/09 21:08, Toby Thain wrote:
Yes, the important thing is to *detect* them, no system can run
reliably
with bad memory, and that includes any system with ZFS. Doing nutty
things like calculating the checksum twice does not buy
On 10-Jun-09, at 7:25 PM, Alex Lam S.L. wrote:
On Thu, Jun 11, 2009 at 2:08 AM, Aaron Blew aaronb...@gmail.com
wrote:
That's quite a blanket statement. MANY companies (including Oracle)
purchased Xserve RAID arrays for important applications because of
their
price point and capabilities.
On 16-Jun-09, at 6:22 PM, Ray Van Dolson wrote:
On Tue, Jun 16, 2009 at 03:16:09PM -0700, milosz wrote:
yeah i pretty much agree with you on this. the fact that no one has
brought this up before is a pretty good indication of the demand.
there are about 1000 things i'd rather see
On 17-Jun-09, at 7:37 AM, Orvar Korvar wrote:
Ok, so you mean the comments are mostly FUD and bull shit? Because
there are no bug reports from the whiners? Could this be the case?
It is mostly FUD? Hmmm...?
Having read the thread, I would say without a doubt.
Slashdot was never the
On 17-Jun-09, at 5:42 PM, Miles Nordin wrote:
bmm == Bogdan M Maryniuk bogdan.maryn...@gmail.com writes:
tt == Toby Thain t...@telegraphics.com.au writes:
ok == Orvar Korvar no-re...@opensolaris.org writes:
tt Slashdot was never the place to go for accurate information
tt about ZFS
On 18-Jun-09, at 12:14 PM, Miles Nordin wrote:
bmm == Bogdan M Maryniuk bogdan.maryn...@gmail.com writes:
tt == Toby Thain t...@telegraphics.com.au writes:
...
tt /. is no person...
... you and I both know it's plausible
speculation that Apple delayed unleashing ZFS on their consumers
On 23-Jun-09, at 1:58 PM, Erik Trimble wrote:
Richard Elling wrote:
Erik Trimble wrote:
All this discussion hasn't answered one thing for me: exactly
_how_ does ZFS do resilvering? Both in the case of mirrors, and
of RAIDZ[2] ?
I've seen some mention that it goes in chronological
On 14-Jul-09, at 5:18 PM, Orvar Korvar wrote:
With dedup, will it be possible somehow to identify files that are
identical but has different names? Then I can find and remove all
duplicates. I know that with dedup, removal is not really needed
because the duplicate will just be a
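Dedup works at the block level and doesn't report file-level matches, but whole-file duplicates can be found today with ordinary userland tools. A sketch assuming GNU coreutils (sha256sum, uniq -w) are available; the path is illustrative:

# find /tank/data -type f -exec sha256sum {} + | sort | uniq -w64 --all-repeated=separate

This groups files whose SHA-256 digests (the first 64 characters of each line) are identical; what to do with each group is then a manual decision.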
On 19-Jul-09, at 7:12 AM, Russel wrote:
Guys, guys, please chill...
First, thanks for the info about the VirtualBox option to bypass the
cache (I don't suppose you can give me a reference for that info?
(I'll search the VB site :-))
I posted about that insane default, six months ago. Obviously ZFS
On 24-Jul-09, at 6:41 PM, Frank Middleton wrote:
On 07/24/09 04:35 PM, Bob Friesenhahn wrote:
Regardless, it [VirtualBox] has committed a crime.
But ZFS is a journalled file system! Any hardware can lose a flush;
No, the problematic default in VirtualBox is flushes being *ignored*,
On 25-Jul-09, at 3:32 PM, Frank Middleton wrote:
On 07/25/09 02:50 PM, David Magda wrote:
Yes, it can be affected. If the snapshot's data structure / record is
underneath the corrupted data in the tree then it won't be able to be
reached.
Can you comment on if/how mirroring or raidz
than to manage it.
Now if you were too lazy to bother to follow the instructions
properly,
we could end up with bizarre things. This is what happens when
storage
lies and re-orders writes across boundaries.
On 07/25/09 07:34 PM, Toby Thain wrote:
The problem is assumed *ordering
On 27-Jul-09, at 5:46 AM, erik.ableson wrote:
The zfs send command generates a differential file between the two
selected snapshots so you can send that to anything you'd like.
The catch of course is that then you have a collection of files on
your Linux box that are pretty much useless
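A minimal sketch of both forms (dataset and snapshot names are illustrative):

# zfs send tank/fs@monday > /mnt/linuxbox/fs-monday.zsend
# zfs send -i tank/fs@monday tank/fs@tuesday > /mnt/linuxbox/fs-mon-tue.zsend

The first is a full stream, the second an incremental between the two snapshots. The resulting files can be stored, copied, or rsynced anywhere, but as noted they are opaque until fed back through zfs receive on a system with a suitable ZFS version.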
On 27-Jul-09, at 3:44 PM, Frank Middleton wrote:
On 07/27/09 01:27 PM, Eric D. Mudama wrote:
Everyone on this list seems to blame lying hardware for ignoring
commands, but disks are relatively mature and I can't believe that
major OEMs would qualify disks or other hardware that willingly
On 31-Jul-09, at 7:15 PM, Richard Elling wrote:
wow, talk about a knee jerk reaction...
On Jul 31, 2009, at 3:23 PM, Dave Stubbs wrote:
I don't mean to be offensive Russel, but if you do
ever return to ZFS, please promise me that you will
never, ever, EVER run it virtualized on top of NTFS
On 4-Aug-09, at 9:28 AM, Roch Bourbonnais wrote:
On 26-Jul-09, at 1:34 AM, Toby Thain wrote:
On 25-Jul-09, at 3:32 PM, Frank Middleton wrote:
On 07/25/09 02:50 PM, David Magda wrote:
Yes, it can be affected. If the snapshot's data structure /
record is
underneath the corrupted data
On 14-Aug-09, at 11:14 AM, Peter Schow wrote:
On Thu, Aug 13, 2009 at 05:02:46PM -0600, Louis-Frédéric Feuillette
wrote:
I saw this question on another mailing list, and I too would like to
know. And I have a couple questions of my own.
== Paraphrased from other list ==
Does anyone have any
On 25-Sep-09, at 2:58 PM, Frank Middleton wrote:
On 09/25/09 11:08 AM, Travis Tabbal wrote:
... haven't heard if it's a known
bug or if it will be fixed in the next version...
Out of courtesy to our host, Sun makes some quite competitive
X86 hardware. I have absolutely no idea how difficult
On 26-Sep-09, at 9:56 AM, Frank Middleton wrote:
On 09/25/09 09:58 PM, David Magda wrote:
...
Similar definition for [/tmp] Linux FWIW:
Yes, but unless they fixed it recently (>= RHFC11), Linux doesn't
actually
nuke /tmp, which seems to be mapped to disk. One side effect is
that (like
On 26-Sep-09, at 2:55 PM, Frank Middleton wrote:
On 09/26/09 12:11 PM, Toby Thain wrote:
Yes, but unless they fixed it recently (>= RHFC11), Linux doesn't
actually nuke /tmp, which seems to be mapped to disk. One side
effect is that (like MSWindows) AFAIK there isn't a native tmpfs
On 30-Sep-09, at 10:48 AM, Brian Hubbleday wrote:
I had a 50MB ZFS volume that was an iSCSI target. This was mounted
into a Windows system (NTFS) and shared on the network. I used
notepad.exe on a remote system to add/remove a few bytes at the end
of a 25MB file.
I'm astonished that's
On 5-Oct-09, at 3:32 PM, Miles Nordin wrote:
bm == Brandon Mercer yourcomputer...@gmail.com writes:
I'm now starting to feel that I understand this issue,
and I didn't for quite a while. And that I understand the
risks better, and have a clearer idea of what the possible
fixes are. And I