Hello,
while testing some code changes, I managed to fail an assertion while
doing a zfs create.
My zpool is now invulnerable to destruction. :(
bash-3.00# zpool destroy -f test_undo
internal error: unexpected error 0 at line 298 of ../common/libzfs_dataset.c
bash-3.00# zpool status
pool:
Hello,
What I wanted to point out is Al's example: he wrote about damaged data. The data
were damaged by firmware, _not_ the disk surface! In such a case ZFS doesn't help.
ZFS can detect (and repair) errors on the disk surface, bad cables, etc., but it
cannot detect and repair errors in its own (ZFS) code.
What would versioning of files in ZFS buy us over a zfs snapshots +
cron solution?
I can think of one:
1. The usefulness of the ability to get the prior version of anything
at all (as richlowe puts it)
Any others?
--
Regards,
Jeremy
A couple of use cases I was considering off hand:
1. Oops, I truncated my file.
2. Oops, I saved over my file.
3. Oops, an app corrupted my file.
4. Oops, I rm -rf'd the wrong directory.
All of which can be solved by periodic snapshots, but versioning gives
us immediacy.
So is immediacy worth it to you?
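For comparison, the snapshots-plus-cron alternative is simple enough to sketch as a crontab fragment (a minimal sketch; the dataset name tank/home and the 15-minute schedule are assumptions):

```shell
# Hypothetical crontab entries: snapshot tank/home every 15 minutes,
# naming each snapshot after the current timestamp.
# (% must be escaped in a crontab command field.)
0,15,30,45 * * * * /usr/sbin/zfs snapshot tank/home@auto-$(date +\%Y\%m\%d-\%H\%M)
```

The gap, of course, is the window between snapshots -- anything changed in the last 15 minutes has no prior version, which is exactly the immediacy argument.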
Would it be worthwhile to implement heuristics to auto-tune
'recordsize', or would that not be worth the effort?
--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Heya Roch,
On 10/17/06, Roch [EMAIL PROTECTED] wrote:
-snip-
Oracle will typically create its files with 128K writes,
not recordsize ones.
Darn, that makes things difficult doesn't it? :(
Come to think of it, maybe we're approaching things from the wrong
perspective. Databases such as Oracle
Heya Anton,
On 10/17/06, Anton B. Rang [EMAIL PROTECTED] wrote:
No, the reason to try to match recordsize to the write size is so that a small
write does not turn into a large read + a large write. In configurations where
the disk is kept busy, multiplying 8K of data transfer up to 256K
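Anton's point can be made concrete with a little arithmetic (a sketch using only the figures from this thread -- a 128K record and an 8K application write):

```shell
# An 8K application write landing in a 128K ZFS record forces a
# read-modify-write of the whole record: read 128K, then write 128K.
recordsize=$((128 * 1024))
write_size=$((8 * 1024))
# Bytes actually moved per 8K write: one full-record read + one write.
moved=$((2 * recordsize))
echo "amplification: $((moved / write_size))x"
# prints: amplification: 32x
```

Matching recordsize to the database block size (e.g. 8K) avoids the read half entirely, which is why the tuning matters on busy disks.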
Kudos Eric! :)
On 10/17/06, eric kustarz [EMAIL PROTECTED] wrote:
Hi everybody,
Yesterday I putback into nevada:
PSARC 2006/288 zpool history
6343741 want to store a command history on disk
This introduces a new subcommand to zpool(1m), namely 'zpool history'.
Yes, team ZFS is tracking what
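Trying it out is just the new subcommand against an existing pool (a quick sketch; 'tank' is an assumed pool name):

```shell
# Print the persistent, on-disk log of zpool/zfs commands run against
# 'tank' (pool name assumed) -- handy for post-mortems and audits.
zpool history tank
```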
Hello all,
Isn't a large block size a simple case of prefetching? In other words,
if we possessed an intelligent prefetch implementation, would there
still be a need for large block sizes? (Thinking aloud)
:)
--
Regards,
Jeremy
Hello,
Shrinking the vdevs requires moving data. Once you move data, you've
got to either invalidate the snapshots or update them. I think that
will be one of the more difficult parts.
Updating snapshots would be non-trivial, but doable. Perhaps some sort
of reverse mapping or brute force
This is the same problem described in
6343653 : want to quickly copy a file from a snapshot.
On 10/30/06, eric kustarz [EMAIL PROTECTED] wrote:
Pavan Reddy wrote:
This is the time it took to move the file:
The machine is an Intel P4 with 512MB RAM.
bash-3.00# time mv ../share/pav.tar .
real
On 11/14/06, Bill Sommerfeld [EMAIL PROTECTED] wrote:
On Tue, 2006-11-14 at 03:50 -0600, Chris Csanady wrote:
After examining the source, it clearly wipes the vdev label during a detach.
I suppose it does this so that the machine can't get confused at a later date.
It would be nice if the
On 12/5/06, Bill Sommerfeld [EMAIL PROTECTED] wrote:
On Mon, 2006-12-04 at 13:56 -0500, Krzys wrote:
mypool2/[EMAIL PROTECTED] 34.4M - 151G -
mypool2/[EMAIL PROTECTED] 141K - 189G -
mypool2/d3 492G 254G 11.5G legacy
I am so confused with all of
The whole RAID does not fail -- we are talking about corruption
here. If you lose some inodes, your whole partition is not gone.
My ZFS pool could not be salvaged -- poof, the whole thing was gone (granted,
it was a test pool and not a raidz or mirror yet). But still, for
what happened, I cannot believe
Yes. But it's going to be a few months.
I'll presume that we will get background disk scrubbing for free once
you guys get bookmarking done. :)
--
Regards,
Jeremy
The instructions will tell you how to configure the array to ignore
SCSI cache flushes/syncs on Engenio arrays. If anyone has additional
instructions for other arrays, please let me know and I'll be happy to
add them!
Wouldn't it be more appropriate to allow the administrator to disable
ZFS
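For reference, there is a system-wide switch along those lines (a hedged sketch: the zfs_nocacheflush tunable; its availability depends on your build, and disabling flushes is only safe when the array cache is battery-backed):

```
# /etc/system fragment: stop ZFS from issuing cache-flush commands.
# Only appropriate with NVRAM-backed array caches. Requires a reboot.
set zfs:zfs_nocacheflush = 1
```

Doing it on the host rather than the array affects every pool on the system, which is why a per-zpool knob keeps coming up in this thread.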
On 12/16/06, Richard Elling [EMAIL PROTECTED] wrote:
Jason J. W. Williams wrote:
Hi Jeremy,
It would be nice if you could tell ZFS to turn off fsync() for ZIL
writes on a per-zpool basis. That being said, I'm not sure there's a
consensus on that...and I'm sure not smart enough to be a ZFS
On the issue of the ability to remove a device from a zpool: how
useful/pressing is this feature? Or is this more along the lines of a
nice-to-have?
--
Regards,
Jeremy
I'm defining zpool split as the ability to divide a pool into two
separate pools, each with identical filesystems. The typical use case would
be to split an N-disk mirrored pool into an (N-1)-disk pool and a 1-disk pool,
and then transport the 1-disk pool to another machine.
While contemplating zpool split
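The intended usage might look something like this (purely illustrative -- the subcommand, its syntax, and the pool names 'tank'/'tank2' are assumptions, since no such feature exists yet):

```
# Hypothetical: peel one disk's worth of replicas off a mirrored pool
# into a new, independent pool named tank2...
zpool split tank tank2
# ...then physically move that disk and, on the receiving machine:
zpool import tank2
```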
System specifications please?
On 1/25/07, ComCept Net GmbH Soliva [EMAIL PROTECTED] wrote:
Hello
Now I was configuring my system with RAID-Z and with spares (explained below).
I would like to test the configuration; that is, after a successful config of
ZFS I pulled out a disk from one of the
This is 6456939:
sd_send_scsi_SYNCHRONIZE_CACHE_biodone() can issue TUR which calls
biowait()and deadlock/hangs host
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6456939
(Thanks to Tpenta for digging this up)
--
Regards,
Jeremy
On 1/25/07, ComCept Net GmbH Andrea Soliva [EMAIL PROTECTED] wrote:
Hi Jeremy
Did I understand correctly that there is no workaround or patch available to
solve this situation?
Do not misunderstand me, but this issue (and it is not a small issue) dates from
September 2006?
Is this being worked on, or..?
On 1/25/07, Tim Cook [EMAIL PROTECTED] wrote:
Just want to verify: if I have, say, one 160GB disk, can I format it so that the
first, say, 40GB is my main UFS partition with the base OS install, and then make
the rest of the disk ZFS? Or, even better yet, for testing purposes make two
60GB
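Yes -- ZFS can take a disk slice rather than the whole disk, so that layout works (a sketch; the device and slice names are assumptions, created beforehand with format(1m)):

```
# Assume format(1m) carved c0t0d0 into s0 (40GB, UFS + OS) and s4
# (the remainder). ZFS happily builds a pool on the slice:
zpool create tank c0t0d0s4
# Caveat: when ZFS doesn't own the whole disk it won't enable the
# disk's write cache, since UFS on another slice can't tolerate it.
```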
Hello,
On 1/30/07, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello zfs-discuss,
I had a pool with only two disks in a mirror. I detached one disk
and later erased the first disk. Now I would really like to quickly
get the data from the second disk available again. Other than detaching
On 2/1/07, Nathan Essex [EMAIL PROTECTED] wrote:
I am trying to understand whether zfs checksums apply at a file or a block level.
We know that zfs provides end-to-end checksum integrity, and I assumed that
when I write a file to a zfs filesystem, the checksum was calculated at the file
level, as
Something similar was proposed here before and IIRC someone even has a
working implementation. I don't know what happened to it.
That would be me. AFAIK, no one really wanted it. The problem that it
solves can be solved by putting snapshots in a cronjob.
--
Regards,
Jeremy
On 2/26/07, Thomas Garner [EMAIL PROTECTED] wrote:
Since I have been unable to find the answer online, I thought I would
ask here. Is there a knob to turn to on a zfs filesystem put the .zfs
snapshot directory into all of the children directories of the
filesystem, like the .snapshot
Read the man page for zpool. Specifically, zpool attach.
On 4/10/07, Martin Girard [EMAIL PROTECTED] wrote:
Hi,
I have a zpool with only one disk. No mirror.
I have some data in the file system.
Is it possible to make my zpool redundant by adding a new disk in the pool
and making it a mirror
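Concretely, converting a single-disk pool into a mirror is a one-liner (a sketch; the pool and device names are assumptions):

```
# Attach a second disk to the existing single-disk top-level vdev;
# ZFS resilvers the existing data onto the new disk automatically.
zpool attach tank c0t0d0 c0t1d0
# Watch the resilver progress:
zpool status tank
```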