Re: [zfs-discuss] Add mirror to an existing Zpool

2007-04-10 Thread Jeremy Teo

Read the man page for zpool. Specifically, zpool attach.
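
For reference, a minimal sketch, assuming the existing disk is c0t0d0, the
new disk is c0t1d0, and the pool is called mypool (substitute your own names):

   # zpool attach mypool c0t0d0 c0t1d0
   # zpool status mypool      (watch the resilver run to completion)

Once the resilver finishes, the pool is a two-way mirror of the two disks.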

On 4/10/07, Martin Girard [EMAIL PROTECTED] wrote:

Hi,

I have a zpool with only one disk. No mirror.
I have some data in the file system.

Is it possible to make my zpool redundant by adding a new disk in the pool
and making it a mirror with the initial disk?
If yes, how?

Thanks

Martin


This message posted from opensolaris.org




--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] .zfs snapshot directory in all directories

2007-02-26 Thread Jeremy Teo

On 2/26/07, Thomas Garner [EMAIL PROTECTED] wrote:

Since I have been unable to find the answer online, I thought I would
ask here.  Is there a knob to turn on a zfs filesystem to put the .zfs
snapshot directory into all of the child directories of the
filesystem, like the .snapshot directories of NetApp systems, instead
of just the root of the filesystem?


No. File an RFE? :)


--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: [osol-help] How to recover from rm *?

2007-02-19 Thread Jeremy Teo

Something similar was proposed here before and IIRC someone even has a
working implementation. I don't know what happened to it.


That would be me. AFAIK, no one really wanted it.  The problem it
addresses can be solved by putting snapshots in a cron job.
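
For the curious, a rough sketch of that cron approach (dataset name and
schedule are placeholders; the % signs are escaped because cron treats them
specially):

   0 * * * * /usr/sbin/zfs snapshot tank/home@hourly-`date +\%Y\%m\%d\%H`

Old snapshots can then be pruned with 'zfs destroy' on whatever retention
schedule suits you.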

--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS checksums - block or file level

2007-02-01 Thread Jeremy Teo

On 2/1/07, Nathan Essex [EMAIL PROTECTED] wrote:

I am trying to understand if zfs checksums apply at a file or a block level.  
We know that zfs provides end to end checksum integrity, and I assumed that 
when I write a file to a zfs filesystem, the checksum was calculated at a file 
level, as opposed to say, a block level.


ZFS checksums are done at the block level. End-to-end checksum
integrity means that when the actual data reaches the application from
the platter, we can guarantee with very high certainty that the data
is uncorrupted. Either a block-level checksum or a file-level checksum
will suffice for that.
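
As an aside, you can watch those block checksums being exercised with a
scrub (pool name is a placeholder):

   # zpool scrub tank
   # zpool status -v tank      (the CKSUM column counts failed verifications)

On redundant configs (mirror/raidz), blocks with bad checksums are repaired
from a good copy.
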
--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] restore pool from detached disk from mirror

2007-01-30 Thread Jeremy Teo

Hello,

On 1/30/07, Robert Milkowski [EMAIL PROTECTED] wrote:

Hello zfs-discuss,

  I had a pool with only two disks in a mirror. I detached one disk
  and later erased the first disk. Now I would really like to quickly
  get the data on the second (detached) disk available again. Other
  than detaching it, nothing else was done to that disk.

  Has anyone written such a tool in-house? I guess only the vdev label
  was erased, so it should be possible to do it.


My grokking of the code confirms that the vdev labels of a detached
vdev are wiped. The vdev label contains the uberblock; without the
uberblock, we can't access the rest of the zpool/zfs data on the
detached vdev.
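
If you want to see it for yourself, zdb can dump whatever labels remain on a
device (the path below is a placeholder):

   # zdb -l /dev/dsk/c1t1d0s0

A healthy pool member prints four copies of the label, including the
uberblock array; a freshly detached disk typically just reports that it
failed to unpack each label.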


--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS brings System to panic/freeze

2007-01-25 Thread Jeremy Teo

System specifications please?

On 1/25/07, ComCept Net GmbH Soliva [EMAIL PROTECTED] wrote:

Hello

now I was configuring my system with RAID-Z and with spares (explained below).
I wanted to test the configuration, i.e. after a successful config of
ZFS I pulled out a disk from one of the RAID-Zs. Immediately the system
printed on the console:

Jan 25 09:08:46 global-one scsi: WARNING: /[EMAIL PROTECTED],4000/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd38):
Jan 25 09:08:46 global-one  disk not responding to selection

After a few seconds the console froze, and there was no chance to log in
over ssh either. In the end I powered the system off and on. Once the system
was up again (without errors) I logged in and looked at the zpool:

# zpool status -x zfs-pool
  pool: zfs-pool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zfs-pool    DEGRADED     0     0     0
          raidz1    DEGRADED     0     0     0
            c0t2d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t3d0  UNAVAIL      0     0     0  cannot open
            c3t1d0  ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
            c2t2d0  ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0
        spares
          c3t2d0    AVAIL
          c3t3d0    AVAIL

errors: No known data errors

I went through the forum a bit and found a similar issue:

http://www.opensolaris.org/jive/thread.jspa?threadID=22487&tstart=0

Is my test of pulling out a disk not a valid one? Is there a bug in ZFS like
the one mentioned in the above thread 22487?

I do not understand the reaction of the system, and I cannot find any logs
that point to an issue except the console message about the disk. Did I do
something wrong?

Just so we understand each other correctly: if I bring the disk online again,
all is fine. But it cannot be that the whole system goes down when a single
disk fails, or..?

Below the documentation what I did:

Many thanks for any help!

Andrea


 ZFS CONFIG on Solaris Sparc 11/06 (without any patch)-


system  SUNWadmr      System & Network Administration Root
system  SUNWarc       Lint Libraries (usr)
system  SUNWatfsr     AutoFS, (Root)
system  SUNWatfsu     AutoFS, (Usr)
system  SUNWbcp       SunOS 4.x Binary Compatibility
system  SUNWbip       Basic IP commands (Usr)
system  SUNWbipr      Basic IP commands (Root)
system  SUNWbtool     CCS tools bundled with SunOS
system  SUNWbzip      The bzip compression utility
system  SUNWcakr      Core Solaris Kernel Architecture (Root)
system  SUNWcar       Core Architecture, (Root)
system  SUNWcfpl      fp cfgadm plug-in library
system  SUNWcfplr     fp cfgadm plug-in library (root)
system  SUNWchxge     Chelsio N110 10GE NIC Driver
system  SUNWckr       Core Solaris Kernel (Root)
system  SUNWcnetr     Core Solaris Network Infrastructure (Root)
system  SUNWcpp       Solaris cpp
system  SUNWcsd       Core Solaris Devices
system  SUNWcsl       Core Solaris, (Shared Libs)
system  SUNWcslr      Core Solaris Libraries (Root)
system  SUNWcsr       Core Solaris, (Root)
system  SUNWcsu       Core Solaris, (Usr)
system  SUNWerid      Sun RIO 10/100 Mb Ethernet Drivers
system  SUNWesu       Extended System Utilities
system  SUNWfcip      Sun FCIP IP/ARP over FibreChannel Device Driver
system  SUNWfcp       Sun FCP SCSI Device Driver
system  SUNWfctl      Sun Fibre Channel Transport layer
system  SUNWflexruntime  Flex Lexer (Runtime Libraries)
system  SUNWgss       GSSAPI V2
system  SUNWgssc      GSSAPI CONFIG V2
system  SUNWgzip      The GNU Zip (gzip) compression utility
system  SUNWhea       SunOS Header Files
system  SUNWhmd       SunSwift Adapter Drivers
system  SUNWib        Sun InfiniBand Framework
system  SUNWinstall-patch-utils-root  Install and Patch Utilities (root)
system  SUNWipc       Interprocess Communications
system  SUNWipoib

Re: [zfs-discuss] Re: Some questions I had while testing ZFS.

2007-01-25 Thread Jeremy Teo

This is 6456939:
sd_send_scsi_SYNCHRONIZE_CACHE_biodone() can issue TUR which calls
biowait() and deadlock/hangs host

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6456939

(Thanks to Tpenta for digging this up)

--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS brings System to panic/freeze

2007-01-25 Thread Jeremy Teo

On 1/25/07, ComCept Net GmbH Andrea Soliva [EMAIL PROTECTED] wrote:

Hi Jeremy

Did I understand correctly that there is no workaround or patch available to
resolve this situation?

Do not misunderstand me, but this issue (and it is not a small one) dates
from September 2006?

Is this being worked on, or..?

From what I gathered from a Sun dev (Tpenta), this should be fixed in
S10 Update 4.

If you have a support contract, you can try asking for the patch, with
reference to the test patch (IDR125057-01). (This info is from the other
thread you posted to.)

Something else: why did you not put your message in the thread?

Forgot to cc the mailing list. :(



--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] can I use zfs on just a partition?

2007-01-25 Thread Jeremy Teo

On 1/25/07, Tim Cook [EMAIL PROTECTED] wrote:

Just want to verify: if I have, say, one 160GB disk, can I format it so that
the first 40GB or so is my main UFS partition with the base OS install, and
then make the rest of the disk zfs?  Or better yet, for testing purposes,
make two 60GB partitions out of the rest of it and make them a *mirror*?
Or do I have to give zfs the entire disk?


If you're talking about slices (partitions), yes. You can give ZFS
just a slice of the disk while giving UFS another slice. You can also
mirror across slices (though performance will be suboptimal).

Take note, though, that giving ZFS the entire disk is a possible
performance win, as ZFS will only enable the disk's write cache
if it is given the entire disk.

A caveat: giving the whole disk to a zpool results in an EFI label
being written, and some motherboard BIOSes refuse to boot from a hard
disk that has an EFI label.
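
A quick sketch of the layout you describe (slice numbers are placeholders;
s0 holds the UFS root, s4 and s5 are the two ~60GB test slices):

   # zpool create tank mirror c0d0s4 c0d0s5
   # zpool status tank

The whole-disk alternative, which is what triggers the write cache and EFI
label behaviour above, would instead be:

   # zpool create tank2 c1d0
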
--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool split

2007-01-23 Thread Jeremy Teo

I'm defining zpool split as the ability to divide a pool into two
separate pools, each with identical filesystems. The typical use case would
be to split an N-disk mirrored pool into an (N-1)-disk pool and a 1-disk
pool, and then transport the 1-disk pool to another machine.

While contemplating zpool split functionality, I wondered whether we
really want such a feature because

1) SVM allows it and admins are used to it.

or

2) We can't do what we want using zfs send | zfs recv.

Right now, it's looking mighty close to 1.

Comments appreciated as always. Thanks. :)
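
For comparison, the closest you can get today is per-filesystem send/recv
(names and host below are placeholders), which copies the data rather than
just splitting off a device:

   # zfs snapshot tank/fs@split
   # zfs send tank/fs@split | ssh otherhost zfs recv backup/fs

zpool split, as proposed, would instead hand over an already-populated disk
with no data movement.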


--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How much do we really want zpool remove?

2007-01-18 Thread Jeremy Teo

On the issue of the ability to remove a device from a zpool, how
useful/pressing is this feature? Or is it more along the lines of
"nice to have"?

--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Instructions for ignoring ZFS write cache flushing on intelligent arrays

2006-12-15 Thread Jeremy Teo

The instructions will tell you how to configure the array to ignore
SCSI cache flushes/syncs on Engenio arrays. If anyone has additional
instructions for other arrays, please let me know and I'll be happy to
add them!


Wouldn't it be more appropriate to allow the administrator to prevent
ZFS from issuing the cache flush command during a commit?
(assuming an expensive high-end battery-backed cache, etc.)
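
As a sketch of what such a knob could look like (this assumes the
zfs_nocacheflush /etc/system tunable, which is system-wide rather than
per-pool and only present in newer builds):

   * Assumes a build that has zfs_nocacheflush; only sensible for arrays
   * with battery-backed, non-volatile cache.
   set zfs:zfs_nocacheflush = 1

followed by a reboot.
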
--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Instructions for ignoring ZFS write cache flushing on intelligent arrays

2006-12-15 Thread Jeremy Teo

On 12/16/06, Richard Elling [EMAIL PROTECTED] wrote:

Jason J. W. Williams wrote:
 Hi Jeremy,

 It would be nice if you could tell ZFS to turn off fsync() for ZIL
 writes on a per-zpool basis. That being said, I'm not sure there's a
 consensus on that...and I'm sure not smart enough to be a ZFS
 contributor. :-)

 The behavior is a reality we had to deal with and workaround, so I
 posted the instructions to hopefully help others in a similar boat.

 I think this is a valuable discussion point though...at least for us. :-)

This is one of those systems engineering problems that can
be difficult to identify, as Jason discovered, and with no
perfect solution.  I hope someone here can help with the
design concept behind the cache flush on the FLX 210, as well
as other RAID arrays.  I think there is ample discussion here
already of the behaviour of ZFS.
  -- richard


I presume you feel that the current behaviour of ZFS always issuing
the write cache flush is the best compromise.

Are there actually storage arrays with battery backed cache that
*don't* allow themselves to be configured to ignore cache flush
commands?


--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] replacing a drive in a raidz vdev

2006-12-09 Thread Jeremy Teo

Yes.  But it's going to be a few months.


I'll presume that we will get background disk scrubbing for free once
you guys get bookmarking done. :)

--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Production ZFS Server Death (06/06)

2006-12-07 Thread Jeremy Teo

The whole raid does not fail -- we are talking about corruption
here.  If you lose some inodes your whole partition is not gone.

My ZFS pool would not salvage -- poof, whole thing was gone (granted
it was a test one and not a raidz or mirror yet).  But still, for
what happened, I cannot believe that 20G of data got messed up
because a 1GB cache was not correctly flushed.


Chad, I think what you're asking for is for a zpool to allow you to
salvage whatever remaining data passes its checksums.

--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] replacing a drive in a raidz vdev

2006-12-05 Thread Jeremy Teo

On 12/5/06, Bill Sommerfeld [EMAIL PROTECTED] wrote:

On Mon, 2006-12-04 at 13:56 -0500, Krzys wrote:
 mypool2/[EMAIL PROTECTED]  34.4M  -   151G  -
 mypool2/[EMAIL PROTECTED]  141K  -   189G  -
 mypool2/d3 492G   254G  11.5G  legacy

 I am so confused with all of this... Why its taking so long to replace that 
one
 bad disk?

To work around a bug where a pool traverse gets lost when the snapshot
configuration of a pool changes, both scrubs and resilvers will start
over again any time you create or delete a snapshot.

Unfortunately, this workaround has problems of its own -- If your
inter-snapshot interval is less than the time required to complete a
scrub, the resilver will never complete.

The open bug is:

6343667 scrub/resilver has to start over when a snapshot is taken

if it's not going to be fixed any time soon, perhaps we need a better
workaround:


Anyone internal working on this?
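
In the meantime, a crude guard in the snapshot cron script avoids restarting
an in-progress scrub/resilver (pool and dataset names are placeholders):

   #!/bin/sh
   # Skip snapshot creation while a scrub or resilver is running.
   if zpool status mypool2 | grep 'in progress' > /dev/null; then
       exit 0
   fi
   zfs snapshot mypool2/d3@`date +%Y%m%d%H%M`
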
--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dead drives and ZFS

2006-11-14 Thread Jeremy Teo

On 11/14/06, Bill Sommerfeld [EMAIL PROTECTED] wrote:

On Tue, 2006-11-14 at 03:50 -0600, Chris Csanady wrote:
 After examining the source, it clearly wipes the vdev label during a detach.
 I suppose it does this so that the machine can't get confused at a later date.
 It would be nice if the detach simply renamed something, rather than
 destroying the pool though.  At the very least, the manual page ought
 to reflect the destructive nature of the detach command.

rather than patch it up after the detach, why not have the filesystem do
it?

seems like the problem would be solved by something looking vaguely
like:

   zpool fork -p poolname -n newpoolname [devname ...]

   Create the new exported pool newpoolname from poolname by detaching
   one side from each mirrored vdev, starting with the
   device names listed on the command line.  Fails if the pool does not
   consist exclusively of mirror vdevs, if any device listed on the
   command line is not part of the pool, or if there is a scrub or resilver
   necessary or in progress.   Use on bootable pools not recommended.
   For best results, snapshot filesystems you care about before the fork.


I'm more inclined to split instead of fork. ;)


--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: copying a large file..

2006-10-30 Thread Jeremy Teo

This is the same problem described in
6343653 : want to quickly copy a file from a snapshot.

On 10/30/06, eric kustarz [EMAIL PROTECTED] wrote:

Pavan Reddy wrote:
 This is the time it took to move the file:

 The machine is a Intel P4 - 512MB RAM.

 bash-3.00# time mv ../share/pav.tar .

 real    1m26.334s
 user    0m0.003s
 sys     0m7.397s


 bash-3.00# ls -l pav.tar
 -rw-r--r--   1 root root 516628480 Oct 29 19:30 pav.tar


 A similar move on my Mac OS X took this much time:


 pavan-mettus-computer:~/public pavan$ time mv pav.tar.gz ./Burn\ Folder.fpbf/

 real    0m0.006s
 user    0m0.001s
 sys     0m0.004s

 pavan-mettus-computer:~/public/Burn Folder.fpbf pavan$ ls -l pav.tar.gz
 -rw-r--r--   1 pavan  pavan  347758518 Oct 29 19:09 pav.tar.gz

 NOTE: The file size here is 347MB, whereas the previous one was 516MB. But
still the time taken is huge in comparison.

 It's an x86 machine running Nevada build 51.  More info about the disk and
pool:
 bash-3.00# zpool list
 NAME      SIZE    USED   AVAIL    CAP  HEALTH  ALTROOT
 mypool   29.2G    789M   28.5G     2%  ONLINE  -
 bash-3.00# zfs list
 NAME                USED  AVAIL  REFER  MOUNTPOINT
 mypool              789M  28.0G  24.5K  /mypool
 mypool/nas          789M  28.0G  78.8M  /export/home
 mypool/nas/pavan    710M  28.0G   710M  /export/home/pavan
 mypool/nas/rajeev  24.5K  28.0G  24.5K  /export/home/rajeev
 mypool/nas/share   24.5K  28.0G  24.5K  /export/home/share

 It took a lot of time when I moved the file from /export/home/pavan/ to the
/export/home/share directory.

You're moving that file from one filesystem to another, so it will have
to copy all the data of the file in addition to just a few metadata
blocks.  If you mv it within the same filesystem it will be quick (as in
your OSX example).

eric


 I was not doing any other operation other than the move command.

 There are no files in those directories other than this one.

 It has 512MB of physical memory. The Mac machine has 1GB of RAM.

 No snapshots are taken.

 iostat and vmstat info:
 bash-3.00# iostat -x
  extended device statistics
 device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
 cmdk0     3.1    3.0  237.9  132.8  0.6  0.1  112.6   2   3
 fd0       0.0    0.0    0.0    0.0  0.0  0.0 3152.2   0   0
 sd0       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
 sd1       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
 bash-3.00# vmstat
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr cd f0 s0 s1   in   sy   cs us sy id
 0 0 0 2083196 111668 5  25 13  3 13  0 44  6  0 -1 -0  296  276  231  1  2 97


 -Pavan


 This message posted from opensolaris.org




--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Changing number of disks in a RAID-Z?

2006-10-23 Thread Jeremy Teo

Hello,


Shrinking the vdevs requires moving data.  Once you move data, you've
got to either invalidate the snapshots or update them.  I think that
will be one of the more difficult parts.



Updating snapshots would be non-trivial, but doable. Perhaps some sort
of reverse mapping or brute force search to relate snapshots to
blocks.
--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Self-tuning recordsize

2006-10-22 Thread Jeremy Teo

Hello all,

Isn't a large block size a simple case of prefetching? In other words,
if we possessed an intelligent prefetch implementation, would there
still be a need for large block sizes? (Thinking aloud)

:)

--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Self-tuning recordsize

2006-10-17 Thread Jeremy Teo

Heya Roch,

On 10/17/06, Roch [EMAIL PROTECTED] wrote:
-snip-

Oracle will typically create its files with 128K writes,
not recordsize ones.


Darn, that makes things difficult, doesn't it? :(

Come to think of it, maybe we're approaching things from the wrong
perspective. Databases such as Oracle have their own cache *anyway*.
The reason why we set recordsize is to

1) Match the block size to the write/read access size, so we don't
read/buffer too much.

But if we improved our caching to not cache blocks being accessed by a
database, wouldn't that solve the problem also?

(I'm not precisely sure where and how much performance we win from
setting an optimal recordsize)

Thanks for listening folks! :)

--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Self-tuning recordsize

2006-10-17 Thread Jeremy Teo

Heya Anton,

On 10/17/06, Anton B. Rang [EMAIL PROTECTED] wrote:

No, the reason to try to match recordsize to the write size is so that a small 
write does not turn into a large read + a large write.  In configurations where 
the disk is kept busy, multiplying 8K of data transfer up to 256K hurts.

Ah. I knew i was missing something. What COW giveth, COW taketh away...


This is really orthogonal to the cache — in fact, if we had a switch to disable 
caching, this problem would get worse instead of better (since we wouldn't 
amortize the initial large read over multiple small writes).

Agreed.

It looks to me there are only 2 ways to solve this:

1) Set recordsize manually
2) Allow the blocksize of a file to be changed even if there are multiple
blocks in the file.
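
A sketch of option 1 for an 8K-block database (dataset name is a
placeholder); note that recordsize only affects files written after the
property is set:

   # zfs set recordsize=8k tank/oradata
   # zfs get recordsize tank/oradata

Existing files keep the block size they were created with, which is exactly
why option 2 keeps coming up.
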
--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool history integrated

2006-10-17 Thread Jeremy Teo

Kudos Eric! :)
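
For anyone who wants to try it out, a minimal usage sketch (pool name is a
placeholder, and the output below is illustrative and trimmed):

   # zpool history tank
   History for 'tank':
   2006-10-17.09:15:03 zpool create tank mirror c0t0d0 c0t1d0
   2006-10-17.09:20:41 zfs create tank/home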

On 10/17/06, eric kustarz [EMAIL PROTECTED] wrote:

Hi everybody,

Yesterday I putback into nevada:
PSARC 2006/288 zpool history
6343741 want to store a command history on disk

This introduces a new subcommand to zpool(1m), namely 'zpool history'.
Yes, team ZFS is tracking what you do to our precious pools.

For more information, check out:
http://blogs.sun.com/erickustarz/entry/zpool_history

enjoy,
eric




--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Self-tuning recordsize

2006-10-13 Thread Jeremy Teo

Would it be worthwhile to implement heuristics to auto-tune
'recordsize', or would that not be worth the effort?

--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] A versioning FS

2006-10-06 Thread Jeremy Teo

A couple of use cases I was considering off hand:

1. Oops, I truncated my file.
2. Oops, I saved over my file.
3. Oops, an app corrupted my file.
4. Oops, I rm -rf'ed the wrong directory.

All of these can be solved by periodic snapshots, but versioning gives
us immediacy.

So is immediacy worth it to you folks? I'd rather not embark on writing
and finishing code for something no one wants besides me.
--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Versioning in ZFS: Do we need it?

2006-10-05 Thread Jeremy Teo

What would versioning of files in ZFS buy us over a zfs snapshots +
cron solution?

I can think of one:

1. The usefulness of the ability to get the prior version of anything
at all (as richlowe puts it)

Any others?
--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Jeremy Teo

Hello,


What I wanted to point out is Al's example: he wrote about damaged data. The
data was damaged by firmware, _not_ by the disk surface! In such a case ZFS
doesn't help. ZFS can detect (and repair) errors on the disk surface, bad
cables, etc., but it cannot detect and repair errors in its own (ZFS) code.

I am comparing firmware code to ZFS code.



Firmware doesn't do end-to-end checksumming. If the ZFS code is buggy, the
checksums won't match up anyway, so you still detect errors.

Plus it is a lot easier to debug ZFS code than firmware.

--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Borked zpool is now invulnerable

2006-05-18 Thread Jeremy Teo

Hello,

while testing some code changes, I managed to fail an assertion while
doing a zfs create.

My zpool is now invulnerable to destruction. :(

bash-3.00# zpool destroy -f test_undo
internal error: unexpected error 0 at line 298 of ../common/libzfs_dataset.c

bash-3.00# zpool status
  pool: test_undo
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        test_undo   ONLINE       0     0     0
          c0d1s1    ONLINE       0     0     0
          c0d1s0    ONLINE       0     0     0

errors: No known data errors

How can I destroy this pool so I can use the disk for a new pool? Thanks! :)
--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss