that it uses private interfaces and doing
so is not supported by any Oracle support contract you have.
--
Darren J Moffat
___
zfs-crypto-discuss mailing list
zfs-crypto-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-crypto-discuss
snapshots recv'd ?
--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Where does the 12.5% compression rule in zio_compress_data() come from?
Given that this is in the generic function for all compression
algorithms rather than in the implementation of lzjb, I wonder where the
number comes from.
Just curious.
--
Darren J Moffat
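For reference, the rule being asked about can be sketched in shell arithmetic. This is only an illustration of the 12.5% threshold, not the actual zio_compress_data() kernel code, and the sizes are made-up example values: a compressed copy of a block is kept only if it is at least one eighth smaller than the original.

```shell
# Illustration only -- the real check lives in C inside
# zio_compress_data(); sizes here are invented example values.
lsize=131072                      # logical (uncompressed) block size
csize=110000                      # size after compression

# keep the compressed copy only if it saves at least 12.5% (1/8th)
threshold=$(( lsize - lsize / 8 ))

if [ "$csize" -le "$threshold" ]; then
    echo "store compressed ($csize bytes)"
else
    echo "store uncompressed ($lsize bytes)"
fi
```

With these numbers the threshold is 114688 bytes, so the 110000-byte compressed copy is kept; a compressed size of, say, 120000 would fall back to storing the block uncompressed.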
anything other than that is IMO poor
planning; ZFS data sets are cheap!
--
Darren J Moffat
Joerg Schilling wrote:
Darren J Moffat [EMAIL PROTECTED] wrote:
There is a difference though as far as I can tell. Sometimes on Solaris
we have p? for fdisk partitioning included and sometimes we don't;
similarly we sometimes don't have t? for target. Personally I'd prefer
us
is to add a new compression method, the other is to port
ZFS to FUSE/Linux[1]
Both of these projects will need a mentor if we are to accept a student
for them.
[1] Please file your GPL vs CDDL replies in /dev/null.
--
Darren J Moffat
include it earlier.
--
Darren J Moffat
Jeremy Teo wrote:
How can I destroy this pool so I can use the disk for a new pool?
Thanks! :)
dd if=/dev/zero of=/dev/dsk/c0d1s1
dd if=/dev/zero of=/dev/dsk/c0d1s0
that should do it.
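A gentler sketch of the same idea, practised on a file-backed image rather than a real device (the image path and size are examples only): ZFS keeps four 256 KB vdev labels, two at the front of the device and two at the very end, so those are the regions the dd runs above actually need to reach.

```shell
# Sketch on a 64 MB file-backed "disk" (paths are examples only).
# ZFS keeps four 256 KB vdev labels: two at the start of the device
# and two at the very end, so clearing both regions is what really
# invalidates the pool, not just the first few blocks.
img=/tmp/fakedisk.img
dd if=/dev/zero of="$img" bs=1048576 count=64 2>/dev/null

size=$(wc -c < "$img")
front=$(( 2 * 262144 ))            # labels L0+L1: first 512 KB
back=$(( size - front ))           # labels L2+L3: last 512 KB

echo "clear bytes 0..$(( front - 1 )) and $back..$(( size - 1 ))"
rm -f "$img"
```

Zeroing the whole device as above works too, of course; it just takes a lot longer on large disks.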
--
Darren J Moffat
the every single write case and see
for yourself the massive explosion of snapshots that would occur as a
result.
--
Darren J Moffat
to request it but by
default we don't do it.
*Only* if we fail to come up with a mechanism to do this properly,
efficiently, and automagically.
Which bit did you mean ? The API itself or the policy part ?
--
Darren J Moffat
bother trying this is an MP3.
--
Darren J Moffat
we do need something like this.
This is already covered by the following CRs 6280676, 6421209.
--
Darren J Moffat
Roland Mainz wrote:
Darren J Moffat wrote:
James Dickens wrote:
I think ZFS should add the concept of ownership to a ZFS filesystem,
so if I create a filesystem for joe, he should be able to use his
space however he sees fit; if he wants to turn on compression or
take 5000 snapshots it's his
Networker or Veritas Netbackup or anything else that just walks the file
system using POSIX APIs, then keep using that. Be aware though that
they may not pick up on ZFS ACLs yet and they won't save the ZFS data set
config, i.e. compression, checksum, etc.
--
Darren J Moffat
is special we might be able to invent
additional things for doing the other operations. The harder part is
setting the options like share/checksum/compression etc.
--
Darren J Moffat
or not ?
What proof is there that the checksumming in ZFS is actually hurting any
performance ? We know that the implementation of the fletcher
algorithms can be improved on some systems (a test implementation exists
for UltraSPARC T1).
--
Darren J Moffat
there is no checksumming to detect problems, they don't
think they have problems --
the insidious effects of cancer.
Or most of the data is stored in file formats that don't get impacted too
much by the odd bit flip here and there (e.g. MPEG streams).
--
Darren J Moffat
I accidentally tried to create a ZFS file system called lost+found[1]
and zfs(1) told me that + was an invalid char for a filesystem name.
Why is that ?
[1] cd /export/projects (where that is a ufs file system)
for i in * ; do
    zfs create cube/projects/$i
done
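A hypothetical workaround for names like lost+found (the tr mapping and the '-' replacement character are my own choice here, not anything zfs(1) prescribes): translate the characters the dataset namespace rejects before creating each file system.

```shell
# Map characters that zfs dataset names reject (such as '+') to '-'
# before creating the dataset; the pool and path come from the
# example above, and the '-' substitute is an arbitrary choice.
cd /export/projects
for i in * ; do
    safe=$(printf '%s' "$i" | tr '+' '-')
    zfs create "cube/projects/$safe"
done
```

So lost+found would become the dataset cube/projects/lost-found rather than failing outright.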
--
Darren J Moffat
impacts the device name, ie
the ZFS filesystem name used instead of things like /dev/md/dsk/d50.
So I don't see how this could be a problem for KDE.
--
Darren J Moffat
Yum Yum!!
We could even build this into nightly(1) once we have user delegation to
create clones.
nightly(1) would zfs clone, zfs set reservation=, zfs set sync=deferred,
and when it is done, release the reservation, unset deferred, and snapshot.
When can we have it?
--
Darren J Moffat
obvious but it is possible that a bad peer SSH
server could exploit the client. If ssh(1) drops basic file privs after
it reads all the config it is a good layer of protection.
--
Darren J Moffat
the pain
32-bits gives you.
Are VIA processor chips 64bit capable yet ?
--
Darren J Moffat
Bill Sommerfeld wrote:
On Thu, 2006-06-22 at 13:01, Roch wrote:
Is there a sync command that targets individual FS ?
Yes. lockfs -f
Does lockfs work with ZFS ? The man page appears to indicate it is very
UFS specific.
--
Darren J Moffat
to bring up Sun business choices here.
Where that is appropriate is when Sun employees need to justify to their
manager what they are working on.
--
Darren J Moffat
used to diagnose and prove that we have a faulty router in a lab
that was causing very strange build errors. TCP/IP alone didn't catch
the problems and sometimes they showed up with SCCS simple checksums and
sometimes we had compile errors.
--
Darren J Moffat
Torrey McMahon wrote:
Darren J Moffat wrote:
So everything you are saying seems to suggest you think ZFS was a
waste of engineering time since hardware raid solves all the problems ?
I don't believe it does, but I'm no storage expert and maybe I've drunk
too much Kool-Aid. I'm software
If you want end-to-end data integrity it has to be checked on a
server.
but the checking could be done by a cooperating ZFS module and stuff in
the hardware array. That is making some of ZFS pluggable in a way that
it can be delegated to hardware.
--
Darren J Moffat
from an environment with
them removed from the limit set).
--
Darren J Moffat
space* (not the same as more VM) hence the 64
bit processor.
--
Darren J Moffat
/msg04497.html
And does so by pointing to *my* blog even, cool!
--
Darren J Moffat
if these fixes that were applicable to 32bit SPARC work
for 32bit x86.
--
Darren J Moffat
to craft a zfsbackup and zfsrestore command ?
Or fix the existing send/recv to pass the options and be able to be told
how to manage tapes in the simple way that ufsdump can.
--
Darren J Moffat
doesn't
solve ?
--
Darren J Moffat
that doesn't give you the spacing.
I now understand the ps -o reference: it is about the key=format part, not
just the ability to choose which fields and the order.
--
Darren J Moffat
,
for example:
$ zenity --list --title="ZFS File systems" --column NAME --column MOUNTPOINT --column USED --column AVAIL `zfs list -t filesystem -H -o name,mountpoint,used,avail`
Okay so that isn't tty but it solves the column width problem nicely :-)
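For a plain tty, the same column-width problem can also be tackled by feeding the tab-separated -H output through awk. The printf below stands in for a live `zfs list -t filesystem -H -o name,mountpoint,used,avail` run, and the dataset names are invented:

```shell
# Sample data standing in for `zfs list -H` output (tab separated);
# awk pads each field to a fixed width for tty display.
printf 'tank\t/tank\t1.2G\t98G\ntank/home\t/export/home\t500M\t98G\n' |
    awk -F'\t' '{ printf "%-20s %-15s %6s %6s\n", $1, $2, $3, $4 }'
```

Swap the printf for the real zfs list pipeline and adjust the field widths to taste.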
--
Darren J Moffat
one now (I wouldn't really put scrub in this category).
--
Darren J Moffat
Bill Sommerfeld wrote:
On Fri, 2006-07-14 at 07:03, Darren J Moffat wrote:
The current plan is that encryption must be turned on when the file
system is created and can't be turned on later. This means that the
zfs-crypto work depends on the RFE to set properties at file system
creation time
Where can I get the source file for the on disk spec document. I want
to update it with the changes to the structures for crypto support. I
need to do this for a customer I'm meeting with to discuss the crypto
features being added to ZFS.
--
Darren J Moffat
Mark Shellenbaum wrote:
Darren J Moffat wrote:
Bill La Forge wrote:
I like to think of delegation as being a bit different than granting
permission--in fact, as a special permission that may include counts.
For example, you might delegate to a manager the ability to grant
select permissions
that is planned.
This is covered in LSARC 2006/329
--
Darren J Moffat
performance at the moment.
So I'm looking for some advice on how best to do this. I suspect that
after much discussion some best practice advice might come out of this.
--
Darren J Moffat
Richard Elling wrote:
Darren J Moffat wrote:
So with that in mind this is my plan so far.
On the target (the V880):
Put all the 12 36G disks into a single zpool (call it iscsitpool).
Use iscsitadm to create 2 targets of 202G each.
On the initiator (the v40z):
Use iscsiadm to discover (import
Spencer Shepler wrote:
On Wed, Darren J Moffat wrote:
I have 12 36G disks (in a single D2 enclosure) connected to a V880 that
I want to share to a v40z that is on the same gigabit network switch.
I've already decided that NFS is not the answer - the performance of ON
consolidation builds over
--
Darren J Moffat
of attributes by anyone other than the actual
owner (at least by default) maybe a little confusing but I'm not too
bothered about that.
TMPFS has the same behaviour as UFS with respect to the problem around
when the chown is done.
--
Darren J Moffat
Mark Shellenbaum wrote:
Darren J Moffat wrote:
Mark Shellenbaum wrote:
Let's have another root owned file but this time one that is
world writable:
islay:pts/4$ ls -l
total 0
-rw-r--r-- 1 darrenm staff 0 Aug 7 15:34 test1
-rw-r--r-- 1 darrenm root 0 Aug 7 15:35
:-)
--
Darren J Moffat
different data replication requirements between the
accounting and engineering departments and different quota etc between
the dev and qa subgroups of engineering.
Very cool stuff, now I actually understand the real need for this!
--
Darren J Moffat
feel that strongly about this but think it might be helpful in
some cases.
--
Darren J Moffat
for
attachments supported by your mail server. It also often gives you auto
mail expiry and clean up on the server side.
--
Darren J Moffat
with live upgrade or
require you to move that stuff onto your ZFS /opt datasets.
--
Darren J Moffat
to zfs_prop_get_int(), like in the df
code, to find out what I want? Will this blow up later?
What is it that you are trying to do here ?
--
Darren J Moffat
it is off that isn't the truth either because
some of it may be encrypted.
--
Darren J Moffat
the same issue if there
was a mechanism to rewrite with the new checksums or compression settings.
--
Darren J Moffat
Neil A. Wilson wrote:
Darren J Moffat wrote:
While encryption of existing data is not in scope for the first ZFS
crypto phase I am being careful in the design to ensure that it can be
done later if such a ZFS framework becomes available.
The biggest problem I see with this is one
Frank Cusack wrote:
Sounds cool! Better than depending on an out-of-band heartbeat.
I disagree it sounds really really bad. If you want a high availability
cluster you really need a faster interconnect than spinning rust which
is probably the slowest interface we have now!
--
Darren J Moffat
/extend that to deal with the fact that ZFS doesn't use vfstab
and instead express it in terms of ZFS import/export.
--
Darren J Moffat
systems - and
whats more it has a supported and Committed interface:
zfs set mountpoint=none mypool
But that isn't the issue :-)
--
Darren J Moffat
zpool import -Rf :-)
--
Darren J Moffat
From C code use statvfs and look at f_fstr.
--
Darren J Moffat
or strategies to remove this dependency?
What do you suggest in its place?
Or better yet, exactly what is the problem with having the cache ?
--
Darren J Moffat
? bootadm update-archive does that for you.
--
Darren J Moffat
after all started on Solaris :-)
--
Darren J Moffat
machine
and you would need to have the users login to the Solaris machine once
first.
--
Darren J Moffat
them to the other X2100s and ship it out.
if clone really means make completely identical then do this:
boot off CD or network.
dd if=/dev/dsk/sourcedisk of=/dev/dsk/destdisk
Where sourcedisk and destdisk are both locally attached.
--
Darren J Moffat
Frank Cusack wrote:
On October 20, 2006 12:00:26 PM +0100 Darren J Moffat
[EMAIL PROTECTED] wrote:
msl wrote:
Ok, thanks very much for your answer. I will look at the automounter. But
about the pam module, how would it work? Running on a Linux machine, and
creating a zfs filesystem on a Solaris
. Particularly if more than a single
file system is in use and most especially if there is more than a
single host involved.
From the people that keep asking for this, how would they like to see
this managed in an ideal world?
--
Darren J Moffat
intended for read-only
mounts. I can't remember though if this is enforced or not.
--
Darren J Moffat
be an interesting and useful project.
What about porting openfiler to OpenSolaris ?
--
Darren J Moffat
? Thanks.
It might also be that where you posted it the correct people
aren't hanging out. If you haven't already, try:
The Appliances and NFS communities.
http://opensolaris.org/os/community/appliances/
http://opensolaris.org/os/community/nfs
--
Darren J Moffat
slower?
I haven't but then I haven't done much benchmarking since it was fast
enough for what I needed.
This is actually a very good pair of things to compare. It would also
be interesting to compare rsync over ssh as well.
--
Darren J Moffat
Prashanth Radhakrishnan wrote:
Hi,
Is it possible to create snapshots off ZFS clones and further clones off
those snapshots recursively?
Yes.
--
Darren J Moffat
worked on their new ZFS file systems.
I then destroyed the original SVM+UFS config and added those 6 disks
into the ZFS pool as a second 6 disk raidz group.
--
Darren J Moffat
have pointed
out that there is a pathological case that is being used in these tests.
For some idea of what NFS can do see:
http://blogs.sun.com/shepler/entry/spec_sfs_over_the_years
--
Darren J Moffat
that until
I have approval to do so, but that's the soonest we could do it, given
restrictions on the earlier builds.)
Or you could have a zfs-boot project on opensolaris before you integrate;
however, if you are that close to integrating it might just slow you down.
--
Darren J Moffat
but due to the lack of
redundancy at that ZFS layer it isn't able to correct them. In a raidz
or mirroring configuration correction is possible.
--
Darren J Moffat
of the Linux boxes and replace them with OpenSolaris
based ones ;-)
Seriously, what are you expecting OpenSolaris and ZFS/NFS in particular
to be able to do about a restriction in Linux ?
--
Darren J Moffat
with limits imposed by the Linux kernel.
brandz currently ships in a Solaris Express (or Solaris Express
Community Release) build snv_49 or later.
Another alternative is to pick an OpenSolaris based distribution that
looks and feels more like Linux. Nexenta might do that for you.
--
Darren J Moffat
- but that was a mistake because I had no hook to
decrypt it.
--
Darren J Moffat
at the hardware compatibility list:
http://www.sun.com/bigadmin/hcl/
--
Darren J Moffat
filesystems. I would have expected that exporting the pool should have
attempted to unmount all the ZFS filesystems first - which would have
failed without a -f flag because they were shared.
So IMO it is a bug or at least an RFE.
--
Darren J Moffat
have failed without a -f flag because they were shared.
So IMO it is a bug or at least an RFE.
Ok, where should I file an RFE?
http://bugs.opensolaris.org/
--
Darren J Moffat
[1] http://www.sun.com/software/solaris/trustedsolaris/ts_tech_faq/faqs/purge.xml
[2] http://csrc.nist.gov/publications/nistpubs/800-88/NISTSP800-88_rev1.pdf
[3] 09-11-06 update to [2] on page 7.
[4] http://cmrr.ucsd.edu/hughes/subpgset.htm
--
Darren J Moffat
was purposely
avoiding using erase (should have said that!).
--
Darren J Moffat
, say).
I didn't have anything per file, but exactly what you said. The policy
was when files are removed, when data sets are removed, when pools are
removed.
--
Darren J Moffat
involved in this, the whole
point of my proposal being the way it was is that it works equally for
all applications and no application code needs to or can be changed to
change this behaviour. Just like doing crypto in the filesystem vs
doing it at the application layer.
--
Darren J Moffat
in MacOS X.
Bleaching is a time-consuming task, not something I'd want to do at
system boot/halt.
--
Darren J Moffat
in redundancy decisions?
Yes because if ZFS doesn't know about it then ZFS can't use it to do
corrections when the checksums (which always work) detect problems.
--
Darren J Moffat
Torrey McMahon wrote:
Darren J Moffat wrote:
Jonathan Edwards wrote:
On Dec 19, 2006, at 07:17, Roch - PAE wrote:
Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?
why? what if the redundancy is below the pool .. should
Nicolas Williams wrote:
On Tue, Dec 19, 2006 at 02:04:37PM +, Darren J Moffat wrote:
In case it wasn't clear I am NOT proposing a UI like this:
$ zfs bleach ~/Documents/company-finance.odp
Instead ~/Documents or ~ would be a ZFS file system with a policy set
something like this:
# zfs
destroy or zpool destroy. Also on hot-sparing in a
disk - if the old disk can still be written to in some way we should do
our best to bleach it.
--
Darren J Moffat
overwritten by zeros when they were freed,
because this would permit the underlying compressed zvol to free *its*
blocks.
A very interesting observation. Particularly given that I have just
created such a configuration - with iSCSI in the middle.
--
Darren J Moffat
Matthew Ahrens wrote:
Darren J Moffat wrote:
I believe that ZFS should provide a method of bleaching a disk or part
of it that works without crypto having ever been involved.
I see two use cases here:
1. This filesystem contains sensitive information. When it is freed,
make sure it's
policies this is not enough ?
--
Darren J Moffat