and if you didn't export the pool those would now all possibly
be hot in the ARC.
--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
--
Darren J Moffat
and just do zfs send | zfs recv.
Or do I have to completely move the data out of the pool and back in again?
That is what zfs send and recv actually does.
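A minimal sketch of that workflow (pool and dataset names are hypothetical):

```shell
# Snapshot the source, then stream it into the destination pool.
# This is the "move the data out and back in" step in one pipeline.
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | zfs recv newtank/data
```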
--
Darren J Moffat
Steven Sim wrote:
Hello;
Is the ZFS dedup single instancing across the entire pool or is it only
single instance inside each filesystem and not across the entire pool?
Opting in to it is per filesystem (dataset), but duplicates are searched for and matched pool-wide.
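For example (hypothetical pool and dataset names), two datasets can opt in independently while sharing one pool-wide dedup table:

```shell
# Dedup is enabled per dataset...
zfs set dedup=on tank/a
zfs set dedup=on tank/b
# ...but identical blocks written to tank/a and tank/b are stored once,
# because duplicate matching runs against the pool-wide dedup table.
```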
--
Darren J Moffat
/history/onnv/onnv-gate/usr/src/uts/common/fs/zfs/sha256.c
Look at the date when that integrated: 31st October 2005.
In case you still doubt me look at the fix I just integrated today:
http://mail.opensolaris.org/pipermail/onnv-notify/2009-December/011090.html
--
Darren J Moffat
is set to dedup=verify.
Why ? Is it because you don't believe SHA256 (which is the default
checksum used when dedup=on is specified) is strong enough ?
--
Darren J Moffat
canmount=noauto on the source. it's a bit
ugly, since you'll get an outage if the source server reboots before you
set it back.
Or use the -u argument to 'zfs recv':
-u
File system that is associated with the received
stream is not mounted.
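So on the target, something like (hypothetical names):

```shell
# Receive without mounting, so the destination doesn't shadow the
# live filesystem until you are ready to switch over.
zfs send tank/home@snap | zfs recv -u backup/home
```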
--
Darren J Moffat
the deduplication code is never run
in that case.
--
Darren J Moffat
the compression - and may actually
use a compression algorithm that ZFS doesn't use (bzip2).
--
Darren J Moffat
Western Digital unfortunately.
Are there some settings in ZFS that can be used to compensate for this?
What is the problem you are trying to solve that makes you think you
need this or a similar feature ?
--
Darren J Moffat
now has.
The reason dedup should help here is that after the 'cat' f15 will be
made up of blocks that match the blocks of f1 f2 f3 f4 f5.
Copy-on-write isn't what helps you here it is dedup.
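The block-identity argument can be checked outside ZFS with plain shell tools: if the component files are whole multiples of the recordsize (128K by default), the concatenated file's blocks are byte-identical to the originals, which is exactly what the dedup table matches on. A sketch with hypothetical file names:

```shell
# Two 128K files of random data; their sizes are exact multiples of the
# default 128K recordsize, so block boundaries line up after cat.
dd if=/dev/urandom of=f1 bs=128k count=1 2>/dev/null
dd if=/dev/urandom of=f2 bs=128k count=1 2>/dev/null
cat f1 f2 > f15
# The first 128K block of f15 is byte-identical to f1, so its checksum
# matches and dedup would store that block only once.
head -c 131072 f15 | cmp -s - f1 && echo "block matches"
```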
--
Darren J Moffat
Bob Friesenhahn wrote:
On Thu, 3 Dec 2009, Darren J Moffat wrote:
The answer to this is likely deduplication which ZFS now has.
The reason dedup should help here is that after the 'cat' f15 will be
made up of blocks that match the blocks of f1 f2 f3 f4 f5.
Copy-on-write isn't what helps
and
others where 1K is the upper bound of the file system.
So I don't think you can say at all that "most files are 128K".
--
Darren J Moffat
draft
(ASCII is fine) man page changes. Once there is consensus from the
core ZFS developer team I'll submit it to ARC for you.
--
Darren J Moffat
to is what
is the access time/throughput of a single local 15k SCSI drive vs a GigE
iSCSI volume?
For the L2ARC it is worth a try - what have you got to lose, since you can remove the cache device really easily.
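A sketch of the try-it-and-back-out cycle (device names hypothetical):

```shell
# Add the iSCSI LUN as an L2ARC cache device to an existing pool...
zpool add tank cache c4t0d0
# ...and if it doesn't help, take it out again; cache devices can be
# removed online without affecting the pool's data.
zpool remove tank c4t0d0
```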
--
Darren J Moffat
think that would be great feature.
--
Darren J Moffat
by restricting
the number of files that can be created ?
--
Darren J Moffat
account.
Agree, creating a ZFS filesystem per account would solve my problems, but I can't use NFSv4, nor the automounter, so I can't export thousands of filesystems right now.
Or using per userquotas, eg:
# zfs set userquota@bob=1g rpool/mail
# zfs set userquota@jane=2g rpool/mail
...
--
Darren J Moffat
Jozef Hamar wrote:
Darren J Moffat wrote:
Jozef Hamar wrote:
Hi Darren,
thanks for reply.
E.g., I have mail quota implemented as per-directory quota. I know
this can be solved in another way, but still, I would have to change
many things in my system in order to make it work
/unified_storage/
--
Darren J Moffat
Scott Meilicke wrote:
I second the use of zilstat - very useful, especially if you don't want to mess
around with adding a log device and then having to destroy the pool if you
don't want the log device any longer.
log devices can be removed as of zpool version 19.
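ie, assuming a pool at version 19 or later (device name hypothetical):

```shell
# Add a separate log device to experiment with...
zpool add tank log c3t0d0
# ...and remove it again if it isn't helping; no need to destroy the pool.
zpool remove tank c3t0d0
```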
--
Darren J Moffat
Steven Sim wrote:
Hello;
Dedup on ZFS is an absolutely wonderful feature!
Is there a way to conduct dedup replication across boxes from one dedup
ZFS data set to another?
Pass the '-D' argument to 'zfs send'.
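For example (hypothetical names), to keep the stream deduplicated on the wire:

```shell
# -D builds a deduplicated send stream; recv reconstitutes it on the
# target, where the destination dataset's own dedup setting then applies.
zfs send -D tank/data@snap | ssh otherbox zfs recv pool/data
```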
--
Darren J Moffat
Miles Nordin wrote:
djm == Darren J Moffat darr...@opensolaris.org writes:
encrypted blocks is much better, even though
encrypted blocks may be subject to freeze-spray attack if the
whole computer is compromised
the idea of crypto deletion is to use many keys to encrypt
jurisdictions if the data was always encrypted on disk then
you don't need to write any patterns to erase the blocks. So ZFS Crypto
can help there.
--
Darren J Moffat
then it is a
completely separate and complementary feature to encryption.
--
Darren J Moffat
Bill Sommerfeld wrote:
On Wed, 2009-11-11 at 10:29 -0800, Darren J Moffat wrote:
Joerg Moellenkamp wrote:
Hi,
Well ... i think Darren should implement this as a part of
zfs-crypto. Secure Delete on SSD looks like quite challenge, when wear
leveling and bad block relocation kicks in ;)
No I
Bob Friesenhahn wrote:
On Wed, 11 Nov 2009, Darren J Moffat wrote:
note that eradication via overwrite makes no sense if the underlying
storage uses copy-on-write, because there's no guarantee that the newly
written block actually will overlay the freed block.
Which is why this has
the OpenSolaris
Security web page:
http://hub.opensolaris.org/bin/view/Community+Group+security/library
or directly at:
http://www.sun.com/blueprints/0206/819-5507.pdf
--
Darren J Moffat
off between IO bandwidth and CPU/memory. Sometimes dedup
will improve performance, since like compression it can reduce IO
requirements, but depending on workload the CPU/memory overhead may or
may not be worth it (same with compression).
--
Darren J Moffat
pools can be used in a Sun Cluster configuration but will only be imported into a single node of a Sun Cluster configuration at a time.
--
Darren J Moffat
Orvar Korvar wrote:
I was under the impression that you can create a new zfs dataset and turn on
the dedup functionality, and copy your data to it. Or am I wrong?
you don't even have to create a new dataset just do:
# zfs set dedup=on dataset
--
Darren J Moffat
but how much benefit you will get from it, since it is block
not file based, depends on what type of filesystem and/or application is
on the iSCSI target.
--
Darren J Moffat
Kyle McDonald wrote:
Hi Darren,
More below...
Darren J Moffat wrote:
Tristan Ball wrote:
Obviously sending it deduped is more efficient in terms of bandwidth
and CPU time on the recv side, but it may also be more complicated to
achieve?
A stream can be deduped even if the on disk format
Trevor Pretty wrote:
Darren J Moffat wrote:
Orvar Korvar wrote:
I was under the impression that you can create a new zfs dataset and turn on
the dedup functionality, and copy your data to it. Or am I wrong?
you don't even have to create a new dataset just do:
# zfs set dedup
Mike Gerdts wrote:
On Mon, Nov 2, 2009 at 7:20 AM, Jeff Bonwick jeff.bonw...@sun.com wrote:
Terrific! Can't wait to read the man pages / blogs about how to use it...
Just posted one:
http://blogs.sun.com/bonwick/en_US/entry/zfs_dedup
Enjoy, and let me know if you have any questions or
| passthrough
I'm not sure they will help you much but I was curious if you had looked
at this area for help.
--
Darren J Moffat
run on many of them (e.g., PowerPC- and
ARM-based SoCs). Though AFAIK, ReadyNAS actually runs (ran?) on SPARC
(Leon), but used Linux nonetheless.
OpenSolaris is on its way to running on ARM.
http://hub.opensolaris.org/bin/view/Project+osarm/
--
Darren J Moffat
and memory resources to enable compression.
On the other hand, if it is full of source code or ASCII text, enabling compression could potentially improve performance - depending on the read and write access patterns.
--
Darren J Moffat
, but as long as that rule is adhered to there is no problem of
legal issues.
That is my personal understanding as well, however this is not legal
advice and I am not qualified to (or even wish to) give it in any case.
Good luck with the port.
--
Darren J Moffat
want to configure one or most host groups
with stmfadm(1M). I'm not a COMSTAR expert so I suggest asking on
storage-discuss if you need more help than that.
--
Darren J Moffat
.
--
Darren J Moffat
either side of the
mirror you will get the same level of protection (maybe even better) and
better performance by setting the property copies to 2 (eg 'zfs set
copies=2 rpool').
--
Darren J Moffat
step 1 will fail).
--
Darren J Moffat
with that though is that today ZFS doesn't know that the
ZVOLs are used for swap and doesn't actually care.
--
Darren J Moffat
builds recently.
We also depend on the ZFS Fast System Attributes project and can't
integrate until that has done so.
When I can commit to more detailed dates I will do.
--
Darren J Moffat
Mike DeMarco wrote:
Any reason why ZFS would not work on a FDE (Full Data Encryption) Hard drive?
None providing the drive is available to the OS by normal means.
--
Darren J Moffat
/ as filesystems you could just use
basic UNIX tools like find/diff etc.
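For instance, because each snapshot is visible as a directory under the hidden .zfs directory, two of them can be compared with nothing more than (hypothetical paths and snapshot names):

```shell
# Recursively diff two snapshots of the same filesystem using only
# standard tools - no ZFS-specific commands needed on the reading side.
diff -r /tank/home/.zfs/snapshot/monday /tank/home/.zfs/snapshot/tuesday
```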
--
Darren J Moffat
:3b4d66984ac9d6b4
0 2048 1 ZFS plain file SHA256 uncompressed
57f1e8168c58e8cf:3b20be148f57852e:f72ee8e3358f:1bfae4ae0599577c
--
Darren J Moffat
of the files are written ???
Set it on the afx01 dataset before you do the receive and it will be
inherited.
--
Darren J Moffat
, and the property is on
the home zfs file system.
It doesn't matter if zfs01 is the top level dataset or not.
Before you do the receive do this:
zfs set checksum=sha256 zfs01
--
Darren J Moffat
Joerg Schilling wrote:
Just to prove my information: I invented fbk (which Sun now calls lofi)
Sun does NOT call your fbk by the name lofi. Lofi is a completely
different implementation of the same concept.
--
Darren J Moffat
:
Changing this property only affects newly-written data.
Therefore, set this property at file system creation
time by using the -o copies=N option.
I've filed a man page bug 6885203 to have similar text added for
checksum and compression.
--
Darren J Moffat
and live demo of ZFS Crypto at this event.
--
Darren J Moffat
?
What was the exact command line used in all three cases ?
How was the time measured ?
Were you sending a lot of snapshots as well ? cp/cpio don't know
anything about ZFS snapshots (and shouldn't).
--
Darren J Moffat
) a SLOG (Separate Log device).
Note also the recent addition of the logbias dataset property.
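eg, for a database dataset where streaming throughput matters more than per-write latency (dataset name hypothetical):

```shell
# logbias=throughput steers large synchronous writes away from the slog;
# the default, logbias=latency, favours the dedicated log device.
zfs set logbias=throughput tank/db
```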
--
Darren J Moffat
not (ie it needs to be a single disk-sized partition), I can try moving.
I'm assuming if I add a partition, I can do something like:
zpool replace datapool sda sda1
What kind of partition table is on the disks, is it EFI ? If not that
might be part of the issue.
--
Darren J Moffat
0 3 4 0
2 ActiveSolaris2 4 4559945596100
and on the other:
1 ActiveSolaris2 1 4559945599100
--
Darren J Moffat
it - but sounds like you don't.
--
Darren J Moffat
...@opensolaris.org where the pkg experts will be
happy to engage.
--
Darren J Moffat
version it just isn't possible in some cases because, while you could choose to no longer care about userquota, not caring about creation-time-only properties such as utf8only and normalization is a very difficult issue.
--
Darren J Moffat
Andre Lue wrote:
Can anyone answer if we will get zfs de-duplication before SXCE EOL? If
possible also answer the same on encryption?
Why do you care whether it happens before SXCE EOL or not ?
--
Darren J Moffat
label that 'zpool add' was going to fail
anyway like this:
# zpool add rpool c4t600144F04AA7AA68d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses mirror and new vdev is disk
--
Darren J Moffat
to rebalance as new writes come in.
If that's correct, is there a way to avoid that and get ZFS to write
sequentially on the LUNs that are part of myPool?
Why do you want to do that ? What do you actually think it gives you,
other than possibly *worse* performance ?
--
Darren J Moffat
://www.solarisinternals.com/wiki/index.php/ZFS_for_Databases
and after that if you still need performance help read this one (last!):
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
don't jump straight to the ZFS_Evil_Tuning_Guide - seriously!
--
Darren J Moffat
no older BE's left that you may wish to boot into that don't
support the pool version you will be running if you do the upgrade.
--
Darren J Moffat
That isn't actually another bug but an implementation artefact of the multiple release support in Bugster. Bug numbers beginning with 2* aren't actually real bugs but sub-CRs of the main one.
--
Darren J Moffat
Duncan Groenewald wrote:
Is there an easy way to update to snv_118 ? I am using 2009.06 (snv_111).
# pkg set-authority -O http://pkg.opensolaris.org/dev opensolaris.org
# pkg image-update
# init 6
--
Darren J Moffat
build if it does.
--
Darren J Moffat
.
--
Darren J Moffat
/source/xref/jds/zfs-snapshot/README.zfs-auto-snapshot.txt
--
Darren J Moffat
://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=681263
--
Darren J Moffat
then.
--
Darren J Moffat
Garrett D'Amore wrote:
Darren J Moffat wrote:
Dataset rename restrictions
---
On rename a dataset cannot be moved out of its wrapping key hierarchy,
ie where it inherits the keysource property from. This is best
explained by example:
# zfs get -r keysource tank
Crypto project directly but
it will benefit if it is implemented. If you can and wish to help out
please contact crypto-disc...@opensolaris.org
--
Darren J Moffat
Template Version: @(#)sac_nextcase 1.68 02/23/09 SMI
This information is Copyright 2009 Sun Microsystems
1. Introduction
1.1. Project/Component Working Name:
ZFS Crypto Updates
1.2. Name of Document Author/Supplier:
Author: Darren Moffat
1.3 Date of This Document:
if you have a hostname called inherit that is going to
be very confusing for the share* properties.
If there is an issue here I believe we should first try to resolve
it with documentation changes.
--
Darren J Moffat
integrates I hope to have time to look at a erase behind capability,
this would be a per dataset property (or maybe even a per file attribute).
--
Darren J Moffat
boot environments. Especially since the menu format of
grub2 is different to the grub 0.97 that OpenSolaris currently uses.
--
Darren J Moffat
the implementation and keeping the meaning. It is the intent that
matters to the administrator not the implementation.
--
Darren J Moffat
' is much more intuitive :-)
That would make sense though, eg:
logdest=latency|throughput
Which is why it is logbias: it isn't the destination of the log that is latency or throughput, but the bias.
--
Darren J Moffat
, that shouldn't be a
problem.
I'll gladly sponsor the ARC case for you if you are willing to code and
test this ready for integration.
--
Darren J Moffat
into the kernel.
--
Darren J Moffat
in the SS7000 systems.
--
Darren J Moffat
things by:
$ mkdir .zfs/snapshot/snap-name
That already works if you have the snapshot delegation as that user. It
even works over NFS and CIFS.
--
Darren J Moffat
--
Darren J Moffat
James Lever wrote:
Hi Darren,
On 30/07/2009, at 6:33 PM, Darren J Moffat wrote:
That already works if you have the snapshot delegation as that user.
It even works over NFS and CIFS.
Can you give us an example of how to correctly get this working?
On the host that has the ZFS datasets (ie
James Lever wrote:
On 30/07/2009, at 11:32 PM, Darren J Moffat wrote:
On the host that has the ZFS datasets (ie the NFS/CIFS server) you
need to give the user the delegation to create snapshots and to mount
them:
# zfs allow -u james snapshot,mount,destroy tank/home/james
Ahh
can work unmodified and do the right
thing when an administrator wants a policy of a separate fs per user
But I am sure that there could be other interesting uses for this.
A good use case. Another good one is shared build machine which is
similar to the home dir case.
--
Darren J Moffat
a flag (-Z?) to useradd(1M) and usermod(1M) so that if
base_dir is on ZFS, then the user's homedir is created as a new file
system (assuming -m).
Which makes me wonder: is there a programmatic way to determine if a path
is on ZFS?
st_fstype field of struct stat.
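From a script the same check can be approximated with df; the exact flag is Solaris-specific (df -n prints the mount point and filesystem type), so treat this as a sketch with a hypothetical path:

```shell
# Print "on ZFS" if the given path lives on a ZFS filesystem (Solaris df -n).
fstype=$(df -n /export/home | awk '{print $NF}')
[ "$fstype" = "zfs" ] && echo "on ZFS"
```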
--
Darren J Moffat
(2) syscall.
--
Darren J Moffat
over NFS/CIFS but still useful). I have a prototype PAM module
that uses the users login password as the ZFS dataset wrapping key and
keeps that in sync with the users login password on password change.
--
Darren J Moffat
the new
filesystems.
--
Darren J Moffat
this:
zpool export newpool
zpool import newpool oldpool
Sorry I'm being dense here, I think I sort of get it but I don't have
the whole picture.
You are very close, there is some more info in the zfs(1M) man page.
--
Darren J Moffat
/boot pool.
See 6849185 and 5097228.
--
Darren J Moffat
. In particular what type of
disks these are and how they are attached, eg: IDE, SATA, SAS, USB, FC,
iSCSI...
I'm assuming since you said basic mirroring you don't have any hot
spares configured that would have kicked in.
--
Darren J Moffat
encryption=on tank c0t0d0s0
Even if you need to boot from a filesystem in the pool you *can* still
have the swap ZVOL encrypted.
--
Darren J Moffat
more flexibility in that the source and destination filesystem
types can be different or even not a filesystem!
zfs send|recv and [g,s]tar exist for different purposes, but there are
some overlapping use cases where either could do the job.
--
Darren J Moffat
Joerg Schilling wrote:
Darren J Moffat darr...@opensolaris.org wrote:
use star from Jörg Schilling because it's dead easy :
star -copy -p -acl -sparse -dump -C old_dir . new_dir
...
star doesn't (and shouldn't) create the destination ZFS filesystem like
the zfs recv would. It also
a support
contract ? Do you actually have a support contract for OpenSolaris
2009.06 (if not then personally I'd say zero difference but I'm an
OpenSolaris developer and I'm used to living on the latest builds).
--
Darren J Moffat