Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2007-01-03 Thread james hughes


On Jan 2, 2007, at 6:48 AM, Darren Reed wrote:


Darren J Moffat wrote:


...
Of course.  I didn't mention it because I thought it was obvious  
but this would NOT break the COW or the transactional integrity of  
ZFS.


One of the possible ways that the to-be-bleached blocks could be
dealt with in the face of a crash is the same as for everything else -
they would be in the ZFS Intent Log as things to do.



Do NIST and other specifications that come into play here dictate
what should be done in these and other situations?


NIST SP 800-88
(http://csrc.nist.gov/publications/nistpubs/800-88/NISTSP800-88_rev1.pdf)
includes the statement that:
	The security goal of the overwriting process is to replace written
	data with random data.


How we achieve this is our problem. I expect that whether we meet this
goal will be a subjective analysis...



Do they say how this feature must be provided or in which situations
it is required to be covered in order to meet their criteria?


They talk about Clearing vs Purging vs Destruction... Clearing  
is the lowest level and seems to be useful when repurposing storage  
within an organization. All of these are supposed to be NSA/CSS  
Approved. I do not see the exact approval requirements in this  
document.



Or do they just document overwrite patterns, depending on the
security of the information?


The same document
(http://csrc.nist.gov/publications/nistpubs/800-88/NISTSP800-88_rev1.pdf)
states that:
	Studies have shown that most of today’s media can be effectively
	cleared by one overwrite.



Darren





Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2007-01-02 Thread Darren J Moffat

David Bustos wrote:

Quoth Darren J Moffat on Thu, Dec 21, 2006 at 03:31:59PM +:

Pawel Jakub Dawidek wrote:

I like the idea, I really do, but it will be so expensive because of
ZFS' COW model. Not only will file removal or truncation trigger bleaching,
but every single file system modification will... Heh, well, if privacy of
your data is important enough, you probably don't care too much about
performance.
I'm not sure it will be that slow; the bleaching will be done in a
separate (new) transaction group in most (probably all) cases anyway, so
it shouldn't really impact your write performance unless you are very
I/O bound and already running near the limit.  However, this is
speculation until someone tries to implement it!


Bleaching previously used blocks will corrupt files pointed to by older
uberblocks.  I think that means that you'd have to verify that the new
uberblock is readable before you proceed, since part of ZFS's fault
tolerance is falling back to the most recent good uberblock if the
latest one is corrupt.  I don't think this makes bleaching unworkable,
but the interplay will require analysis.


Of course.  I didn't mention it because I thought it was obvious but 
this would NOT break the COW or the transactional integrity of ZFS.


One of the possible ways that the to-be-bleached blocks could be dealt with
in the face of a crash is the same as for everything else - they would be in
the ZFS Intent Log as things to do.


--
Darren J Moffat


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2007-01-02 Thread Darren Reed

Darren J Moffat wrote:


...
Of course.  I didn't mention it because I thought it was obvious but 
this would NOT break the COW or the transactional integrity of ZFS.


One of the possible ways that the to-be-bleached blocks could be dealt
with in the face of a crash is the same as for everything else - they would
be in the ZFS Intent Log as things to do.



Do NIST and other specifications that come into play here dictate
what should be done in these and other situations?

Do they say how this feature must be provided or in which situations
it is required to be covered in order to meet their criteria?

Or do they just document overwrite patterns, depending on the
security of the information?

Darren



Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2007-01-01 Thread David Bustos
Quoth Darren J Moffat on Thu, Dec 21, 2006 at 03:31:59PM +:
 Pawel Jakub Dawidek wrote:
 I like the idea, I really do, but it will be so expensive because of
 ZFS' COW model. Not only will file removal or truncation trigger bleaching,
 but every single file system modification will... Heh, well, if privacy of
 your data is important enough, you probably don't care too much about
 performance.

 I'm not sure it will be that slow; the bleaching will be done in a
 separate (new) transaction group in most (probably all) cases anyway, so
 it shouldn't really impact your write performance unless you are very
 I/O bound and already running near the limit.  However, this is
 speculation until someone tries to implement it!

Bleaching previously used blocks will corrupt files pointed to by older
uberblocks.  I think that means that you'd have to verify that the new
uberblock is readable before you proceed, since part of ZFS's fault
tolerance is falling back to the most recent good uberblock if the
latest one is corrupt.  I don't think this makes bleaching unworkable,
but the interplay will require analysis.
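
A minimal user-space sketch of that ordering constraint (all names below are hypothetical; this is not actual ZFS code): the blocks freed in txg N are only bleached once the uberblock written for txg N has been re-read and verified, so falling back to an older uberblock can never land on already-bleached blocks.

    /* Hypothetical sketch, not ZFS code: only bleach blocks freed in txg N
     * once the uberblock for txg N has been re-read and verified. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct uberblock { uint64_t ub_txg; uint64_t ub_cksum; };

    /* Stand-ins for re-reading the label and verifying its checksum. */
    static bool label_reread(uint64_t txg, struct uberblock *ub) {
        ub->ub_txg = txg; ub->ub_cksum = txg ^ 0x5a5a5a5aULL; return true;
    }
    static bool label_cksum_ok(const struct uberblock *ub) {
        return ub->ub_cksum == (ub->ub_txg ^ 0x5a5a5a5aULL);
    }
    static void bleach_blocks_freed_in(uint64_t txg) {
        printf("bleaching blocks freed in txg %llu\n", (unsigned long long)txg);
    }

    /* Blocks freed in txg N stay intact (and unallocatable) until the new
     * uberblock proves readable, so a rollback would still find them. */
    void maybe_bleach_after_sync(uint64_t txg) {
        struct uberblock ub;
        if (label_reread(txg, &ub) && ub.ub_txg == txg && label_cksum_ok(&ub))
            bleach_blocks_freed_in(txg);
    }

    int main(void) { maybe_bleach_after_sync(1234); return 0; }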


David


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-27 Thread Bill Sommerfeld
On Tue, 2006-12-26 at 13:59 -0500, Torrey McMahon wrote:
  clearly you'd need to store the unbleached list persistently in the
  pool.
 
 Which could then be easily referenced to find all the blocks that were 
 recently deleted but not yet bleached? Is my paranoia running a bit too 
 high?

I think your paranoia is indeed running a bit high if the alternative is
that some blocks escape bleaching forever when they were freed shortly
before a crash.

For a portable system, the risk of theft is highest when the laptop is
unattended and idle -- and that's the point where the bleaching process
would have time to catch up; most likely, the unbleached list would be
small or empty.

- Bill



Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-27 Thread Nicolas Williams
On Wed, Dec 27, 2006 at 08:45:23AM -0500, Bill Sommerfeld wrote:
 I think your paranoia is indeed running a bit high if the alternative is
 that some blocks escape bleaching forever when they were freed shortly
 before a crash.

Lazy background bleaching of freed blocks is not enough if you're really
paranoid about deleting things that might be cloned.  (See sub-thread
about bleach(2), which is off the table.)

 For a portable system, the risk of theft is highest when the laptop is
 unattended and idle -- and that's the point where the bleaching process
 would have time to catch up; most likely, the unbleached list would be
 small or empty.

For portable systems the risk is not the loss of unbleached freed blocks
-- it's the loss of all those live blocks.  Thus you'd need encryption.

But encryption's still not enough if the system is stolen while the keys
are in memory.

Nico
-- 


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-26 Thread Victor Latushkin



Darren J Moffat wrote:

Pawel Jakub Dawidek wrote:

I like the idea, I really do, but it will be so expensive because of
ZFS' COW model. Not only will file removal or truncation trigger bleaching,
but every single file system modification will... Heh, well, if privacy of
your data is important enough, you probably don't care too much about
performance.


I'm not sure it will be that slow; the bleaching will be done in a
separate (new) transaction group in most (probably all) cases anyway, so
it shouldn't really impact your write performance unless you are very
I/O bound and already running near the limit.  However, this is
speculation until someone tries to implement it!




What happens if a fatal failure occurs after the txg which frees the blocks
has been written, but before the txg doing the bleaching has been
started/completed?



I for one would prefer encryption, which may turn out to be
much faster than bleaching and also more secure.


At least NIST, under (I believe) the guidance of the NSA, does not
consider that encryption and key destruction alone are sufficient in all
cases, which is why I'm proposing this as complementary.




True, dropping the keys leaves lots of encrypted material for a determined
cryptanalyst to analyze, so it should be bleached in some good way.


Victor


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-26 Thread Bill Sommerfeld
On Tue, 2006-12-26 at 14:01 +0300, Victor Latushkin wrote:

 What happens if a fatal failure occurs after the txg which frees the blocks
 has been written, but before the txg doing the bleaching has been
 started/completed?

clearly you'd need to store the unbleached list persistently in the
pool.

Transactions which freed blocks (by punching holes in the allocation
space map) would instead, or additionally, move them to the unbleached
list; a separate bleaching task queue would pick blocks off the
unbleached list and bleach them; only once bleaching was complete would
they be removed from the unbleached list.

In the face of a crash, some blocks might get bleached twice.
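
A toy user-space model of that design (hypothetical structures and an in-memory "disk"; the real space maps work differently): freeing an extent puts it on the unbleached list rather than the free list, and the worker only releases an extent after overwriting it, so replaying the persistent list after a crash at worst bleaches the same extent twice.

    /* Toy model of the unbleached-list idea; not ZFS code. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define DISK_BLOCKS 16
    #define BLOCK_SIZE  8

    static uint8_t disk[DISK_BLOCKS][BLOCK_SIZE];

    struct extent { uint64_t start, count; };

    /* "Persistent" lists; in ZFS these would live in the pool itself. */
    static struct extent unbleached[DISK_BLOCKS]; static int n_unbleached;
    static struct extent freelist[DISK_BLOCKS];   static int n_free;

    /* Freeing queues the extent for bleaching instead of making it allocatable. */
    static void free_blocks(uint64_t start, uint64_t count) {
        unbleached[n_unbleached++] = (struct extent){ start, count };
    }

    /* Background worker: bleach first, then release.  Running it again after
     * a crash merely overwrites the same extents a second time. */
    static void bleach_worker(void) {
        while (n_unbleached > 0) {
            struct extent e = unbleached[--n_unbleached];
            for (uint64_t b = e.start; b < e.start + e.count; b++)
                memset(disk[b], 0, BLOCK_SIZE);     /* one pass of zeros */
            freelist[n_free++] = e;                 /* now safe to reuse */
        }
    }

    int main(void) {
        memset(disk[3], 0xAB, BLOCK_SIZE);  /* pretend block 3 held file data */
        free_blocks(3, 1);
        bleach_worker();
        printf("block 3 byte 0 after bleach: %u, free extents: %d\n",
               (unsigned)disk[3][0], n_free);
        return 0;
    }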

- Bill







Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-26 Thread Torrey McMahon

Bill Sommerfeld wrote:

On Tue, 2006-12-26 at 14:01 +0300, Victor Latushkin wrote:

  
What happens if a fatal failure occurs after the txg which frees the blocks
has been written, but before the txg doing the bleaching has been
started/completed?



clearly you'd need to store the unbleached list persistently in the
pool.


Which could then be easily referenced to find all the blocks that were 
recently deleted but not yet bleached? Is my paranoia running a bit too 
high?




Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-21 Thread Darren J Moffat

Pawel Jakub Dawidek wrote:

I like the idea, I really do, but it will be so expensive because of
ZFS' COW model. Not only will file removal or truncation trigger bleaching,
but every single file system modification will... Heh, well, if privacy of
your data is important enough, you probably don't care too much about
performance.


I'm not sure it will be that slow; the bleaching will be done in a
separate (new) transaction group in most (probably all) cases anyway, so
it shouldn't really impact your write performance unless you are very
I/O bound and already running near the limit.  However, this is
speculation until someone tries to implement it!



I for one would prefer encryption, which may turn out to be
much faster than bleaching and also more secure.


At least NIST, under (I believe) the guidance of the NSA, does not
consider that encryption and key destruction alone are sufficient in all
cases, which is why I'm proposing this as complementary.


--
Darren J Moffat


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-21 Thread Darren J Moffat

Frank Hofmann wrote:
And this kind of deep bleaching would also break if you use snapshots -
how do you reliably bleach if you need to keep all of the old data
around? You could only do so once the last snapshot is gone. Kind of
defeating the idea - automatic but delayed indefinitely till operator
intervention (deleting the last snapshot).


Right, that doesn't break snapshots at all; in fact it works exactly
the way snapshots work today anyway.  With user delegation (when is this
integrating, BTW?) the file system operator might be the end user anyway.


--
Darren J Moffat


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-21 Thread Nicolas Williams
On Thu, Dec 21, 2006 at 03:31:59PM +, Darren J Moffat wrote:
 Pawel Jakub Dawidek wrote:
 I like the idea, I really do, but it will be so expensive because of
 ZFS' COW model. Not only will file removal or truncation trigger bleaching,
 but every single file system modification will... Heh, well, if privacy of
 your data is important enough, you probably don't care too much about
 performance.
 
 I'm not sure it will be that slow; the bleaching will be done in a
 separate (new) transaction group in most (probably all) cases anyway, so
 it shouldn't really impact your write performance unless you are very
 I/O bound and already running near the limit.  However, this is
 speculation until someone tries to implement it!

Yes, bleaching lazily would help with performance.  You might even delay
bleaching for very long periods of time if you want, as long as there's
an interface by which to request that all outstanding free-but-not-yet-
bleached blocks be bleached immediately and synchronously.
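
As an illustration only (the names are invented, not a proposed API), such an interface could be as simple as a blocking call that snapshots the current backlog and works through it before returning, while anything freed afterwards keeps being handled lazily:

    /* Illustrative only: a synchronous "drain" on top of lazy bleaching. */
    #include <stddef.h>
    #include <stdio.h>

    static size_t pending = 5;              /* stand-in for the pending list */

    static void bleach_one_pending(void) {  /* overwrite one queued extent */
        pending--;
        printf("bleached one extent, %zu still pending\n", pending);
    }

    /* Returns only when everything pending at the time of the call has been
     * bleached; later frees may still be waiting. */
    void bleach_drain(void) {
        size_t todo = pending;              /* snapshot of the backlog */
        while (todo-- > 0)
            bleach_one_pending();
    }

    int main(void) { bleach_drain(); return 0; }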

 I for one would prefer encryption, which may turn out to be
 much faster than bleaching and also more secure.

 At least NIST, under (I believe) the guidance of the NSA, does not
 consider that encryption and key destruction alone are sufficient in all
 cases, which is why I'm proposing this as complementary.

James makes a good argument that this scheme won't suffice for customers
who need that level of assurance.  I'm inclined to agree.  For customers
who don't need that level of assurance then encryption ought to suffice.

Nico
-- 


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-21 Thread Darren J Moffat

Nicolas Williams wrote:

James makes a good argument that this scheme won't suffice for customers
who need that level of assurance.  I'm inclined to agree.  For customers
who don't need that level of assurance then encryption ought to suffice.


Has anyone other than me actually read the current NIST guidelines on
this? [The URL was in my original email message.]


The current NIST guidelines, or at least my reading of them, say that
even if you are using encryption, and even if you are going to do
physical destruction, you still need to do something like this.


So this is complementary to encrypting the data - note that we can't, in
ZFS crypto, encrypt ALL ZFS metadata (we should be able to encrypt all the
file-system-relevant metadata); at least that's my current belief based
on my knowledge of ZFS.


Maybe doing this in ZFS isn't necessary, and what we have with format(1M)
purge/analyze is the correct user interface.


--
Darren J Moffat


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-21 Thread Nicolas Williams
On Thu, Dec 21, 2006 at 03:47:07PM +, Darren J Moffat wrote:
 Nicolas Williams wrote:
 James makes a good argument that this scheme won't suffice for customers
 who need that level of assurance.  I'm inclined to agree.  For customers
 who don't need that level of assurance then encryption ought to suffice.
 
 Has anyone other than me actually read the current NIST guidelines on
 this? [The URL was in my original email message.]

 The current NIST guidelines, or at least my reading of them, say that
 even if you are using encryption, and even if you are going to do
 physical destruction, you still need to do something like this.

I think it's a bit nuanced.

Pages 15-16 obliquely mention encryption in the description of
clearing:

... It must be resistant to keystroke recovery attempts executed from
standard input devices and from data scavenging tools.  ...

I'm not sure how to interpret that in the case of ZFS encryption.  The
actual keys used to encrypt files are not typed in by users, and data
scavenging tools could only get at them if: a) they recovered user
passwords from which master FS keys are derived, and b) they have access
to the media.

On page 4 (errata), it says that on 9-11-06 (version 10-06) text was
deleted that had once declared encryption insufficient.

So, altogether I would read this as allowing deletion of keys as a
method of clearing.

Since clearing is all we can hope to do in ZFS then I think this should
be sufficient.

Nico
-- 


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-21 Thread Darren J Moffat
One other area where this is useful is when you are in a jurisdiction
where a court order may require you to produce your encryption keys -
yes, such jurisdictions exist, and I don't want to debate the human
rights angle or social engineering aspects of this, just state that it
exists.


For such environments you may not want to use encryption, because you
could be forced to give up your key; or, even if you are using it, you want a
background method of destroying the cipher text without doing full disk
destruction.


Think of court cases between companies rather than criminal activity.

--
Darren J Moffat


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-21 Thread Darren Reed

Darren J Moffat wrote:

One other area where this is useful is when you are in a jurisdiction
where a court order may require you to produce your encryption keys -
yes, such jurisdictions exist, and I don't want to debate the human
rights angle or social engineering aspects of this, just state that it
exists.



I think in these cases you want plausible deniability, where different
encryption keys produce different views of the disk, none of which
give away that there are any other correct views of the data.

If it is possible to destroy a small piece of the ZFS metadata (key
material) and that makes it thereafter impossible to decrypt the data,
sure, but otherwise, bleaching is probably going to take a bit too
long once you hear the knock on the door...


For such environments you may not want to use encryption, because you
could be forced to give up your key; or, even if you are using it, you want a
background method of destroying the cipher text without doing full
disk destruction.


Think of court cases between companies rather than criminal activity.



For corporations there are different requirements, for example laws
that regulate data retention.  Not only this, but you also need to make
sure that the data you want to make unavailable never got backed
up, or that those backups get wiped...

Darren



Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-20 Thread Pawel Jakub Dawidek
On Tue, Dec 19, 2006 at 02:04:37PM +, Darren J Moffat wrote:
 In case it wasn't clear I am NOT proposing a UI like this:
 
 $ zfs bleach ~/Documents/company-finance.odp
 
 Instead ~/Documents or ~ would be a ZFS file system with a policy set 
 something like this:
 
 # zfs set erase=file:zero
 
 Or maybe more like this:
 
 # zfs create -o erase=file -o erasemethod=zero homepool/darrenm
 
 The goal is the same as the goal for things like compression in ZFS: no
 application change; it is free for the applications.

I like the idea, I really do, but it will be so expensive because of
ZFS' COW model. Not only will file removal or truncation trigger bleaching,
but every single file system modification will... Heh, well, if privacy of
your data is important enough, you probably don't care too much about
performance. I for one would prefer encryption, which may turn out to be
much faster than bleaching and also more secure.

-- 
Pawel Jakub Dawidek   http://www.wheel.pl
[EMAIL PROTECTED]   http://www.FreeBSD.org
FreeBSD committer Am I Evil? Yes, I Am!




Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-20 Thread James Carlson
Pawel Jakub Dawidek writes:
  The goal is the same as the goal for things like compression in ZFS: no
  application change; it is free for the applications.
 
 I like the idea, I really do, but it will be so expensive because of
 ZFS' COW model. Not only will file removal or truncation trigger bleaching,
 but every single file system modification will... Heh, well, if privacy of
 your data is important enough, you probably don't care too much about
 performance. I for one would prefer encryption, which may turn out to be
 much faster than bleaching and also more secure.

I think the idea here is that since ZFS encourages the use of lots of
small file systems (rather than one or two very big ones), you'd have
just one or two very small file systems with crucial data marked as
needing bleach, while the others would get by with the usual
complement of detergent and switch fabric softener.

Having _every_ file modification result in dozens of I/Os would
probably be bad, but perhaps not if it's not _every_ modification
that's affected.

-- 
James Carlson, KISS Network[EMAIL PROTECTED]
Sun Microsystems / 1 Network Drive 71.232W   Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757   42.496N   Fax +1 781 442 1677


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-20 Thread Frank Hofmann

On Wed, 20 Dec 2006, Pawel Jakub Dawidek wrote:


On Tue, Dec 19, 2006 at 02:04:37PM +, Darren J Moffat wrote:

In case it wasn't clear I am NOT proposing a UI like this:

$ zfs bleach ~/Documents/company-finance.odp

Instead ~/Documents or ~ would be a ZFS file system with a policy set something 
like this:

# zfs set erase=file:zero

Or maybe more like this:

# zfs create -o erase=file -o erasemethod=zero homepool/darrenm

The goal is the same as the goal for things like compression in ZFS: no application
change; it is free for the applications.


I like the idea, I really do, but it will be so expensive because of
ZFS' COW model. Not only will file removal or truncation trigger bleaching,
but every single file system modification will... Heh, well, if privacy of
your data is important enough, you probably don't care too much about
performance. I for one would prefer encryption, which may turn out to be
much faster than bleaching and also more secure.


And this kind of deep bleaching would also break if you use snapshots -
how do you reliably bleach if you need to keep all of the old data
around? You could only do so once the last snapshot is gone. Kind of
defeating the idea - automatic but delayed indefinitely till operator
intervention (deleting the last snapshot).


FrankH.


[zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Darren J Moffat

Nicolas Williams wrote:

On Mon, Dec 18, 2006 at 05:44:08PM -0500, Jeffrey Hutzelman wrote:
On Monday, December 18, 2006 11:32:37 AM -0600 Nicolas Williams 
[EMAIL PROTECTED] wrote:

  I'd say go for both, (a) and (b).  Of course, (b) may not be easy to
  implement.
Another option would be to warn the user and set a flag on the shared block 
which causes it to be bleached when the last reference goes away.  Of 
course, one still might want to give the user the option of forcing 
immediate bleaching of the shared data.


Sure, but if I want something bleached I probably want it bleached
_now_, not who knows when.


I think there are two related things here; given your comments and
suggestions for a bleach(1) command and VOP/FOP implementation, you are
thinking about a completely different usage method than I am.


How do you /usr/bin/bleach the tmp file that your editor wrote to before
it did the rename?  You can't easily do that - if at all in some cases.


I'm looking for the systemic solution here not the end user controlled one.

For comparison, what you are suggesting is like doing crypto with
encrypt(1): it works on pathnames; whereas what I'm suggesting is more
like ZFS crypto: it works inside ZFS with deep, intimate knowledge of
ZFS and requires zero change on behalf of the user or admin.


While I think having this in the VOP/FOP layer is interesting it isn't 
the problem I was trying to solve and to be completely honest I'm really 
not interested in solving this outside of ZFS - why make it easy for 
people to stay on UFS ;-)



But why set that per-file?  Why not per-dataset/volume?  "Bleach all
blocks when they are freed" automatically means bleaching blocks when
the last reference is gone (as a result of an unlink of the last file
that had some block, say).


I didn't have anything per file, but exactly what you said.  The policy 
was when files are removed, when data sets are removed, when pools are 
removed.


--
Darren J Moffat


[zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Darren J Moffat

Jeffrey Hutzelman wrote:



On Monday, December 18, 2006 05:51:14 PM -0600 Nicolas Williams 
[EMAIL PROTECTED] wrote:



On Mon, Dec 18, 2006 at 06:46:09PM -0500, Jeffrey Hutzelman wrote:



On Monday, December 18, 2006 05:16:28 PM -0600 Nicolas Williams
[EMAIL PROTECTED] wrote:

 Or an iovec-style specification.  But really, how often will one prefer
 this to truncate-and-bleach?  Also, the to-be-bleached octet ranges may
 not be meaningful in snapshots/clones.  Hmmm.  That convinces me:
 truncate-and-bleach or bleach-and-zero, but not bleach individual octet
 ranges.

Well, consider a file with some structure, like a berkeley db database.
The application may well want to bleach each record as it is deleted.


My point is those byte ranges might differ from one version of that
file to another.


That byte range contains the data the application is trying to bleach in 
any version of the file which contains the affected block(s).  Obviously 
if the file has been modified and the data moved to someplace else, then 
your bleach won't affect the version(s) of the file before the change.  
But then, there's only so much you can do.


I explicitly do NOT want the applications involved in this; the whole
point of my proposal being the way it is is that it works equally for
all applications, and no application code needs to (or can) be changed to
change this behaviour.  Just like doing crypto in the filesystem vs
doing it at the application layer.


--
Darren J Moffat


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Darren J Moffat

In case it wasn't clear I am NOT proposing a UI like this:

$ zfs bleach ~/Documents/company-finance.odp

Instead ~/Documents or ~ would be a ZFS file system with a policy set 
something like this:


# zfs set erase=file:zero

Or maybe more like this:

# zfs create -o erase=file -o erasemethod=zero homepool/darrenm

The goal is the same as the goal for things like compression in ZFS: no
application change; it is free for the applications.


All of the same reasons for doing crypto outside of a command like 
encrypt(1) apply here too - especially the temp file and rename problems.


--
Darren J Moffat


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Nicolas Williams
On Tue, Dec 19, 2006 at 02:04:37PM +, Darren J Moffat wrote:
 In case it wasn't clear I am NOT proposing a UI like this:
 
 $ zfs bleach ~/Documents/company-finance.odp
 
 Instead ~/Documents or ~ would be a ZFS file system with a policy set 
 something like this:
 
 # zfs set erase=file:zero
 
 Or maybe more like this:
 
 # zfs create -o erase=file -o erasemethod=zero homepool/darrenm

I get it.  This should be lots easier than bleach(1).  Snapshots/clones
are mostly not an issue here.  When a block is truly freed, then it is
wiped.

Clones are an issue here only if they have different settings for this
property than the FS that spawned them (so you might want to disallow
re-setting of this property).

Nico
-- 


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Darren J Moffat

Nicolas Williams wrote:

On Tue, Dec 19, 2006 at 02:04:37PM +, Darren J Moffat wrote:

In case it wasn't clear I am NOT proposing a UI like this:

$ zfs bleach ~/Documents/company-finance.odp

Instead ~/Documents or ~ would be a ZFS file system with a policy set 
something like this:


# zfs set erase=file:zero

Or maybe more like this:

# zfs create -o erase=file -o erasemethod=zero homepool/darrenm


I get it.  This should be lots easier than bleach(1).  Snapshots/clones
are mostly not an issue here.  When a block is truly freed, then it is
wiped.


Yep.


Clones are an issue here only if they have different settings for this
property than the FS that spawned them (so you might want to disallow
re-setting of this property).


I think you are saying it should have INHERIT set to YES and EDIT set
to NO.  We don't currently have any properties like that, but crypto will
need this as well - for a very similar reason with clones.



--
Darren J Moffat


Re: [zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Nicolas Williams
On Tue, Dec 19, 2006 at 04:37:36PM +, Darren J Moffat wrote:
 I think you are saying it should have INHERIT set to YES and EDIT set
 to NO.  We don't currently have any properties like that, but crypto will
 need this as well - for a very similar reason with clones.

What I mean is that if there's a block that's shared between two
filesystems then what do you do if the two filesystems have different
settings for this property?  IMO you shouldn't allow this to happen.


[zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-19 Thread Nicolas Williams
On Tue, Dec 19, 2006 at 03:09:03PM -0500, Jeffrey Hutzelman wrote:
 
 
 On Tuesday, December 19, 2006 01:54:56 PM + Darren J Moffat 
 [EMAIL PROTECTED] wrote:
 
 While I think having this in the VOP/FOP layer is interesting it isn't
 the problem I was trying to solve and to be completely honest I'm really
 not interested in solving this outside of ZFS - why make it easy for
 people to stay on UFS ;-)
 
 Because as great as ZFS is, someday someone is going to run into a problem 
 that it doesn't solve.  Having the right abstraction to begin with will 
 make that day easier when it comes.

I understand what Darren was proposing now.  He's talking about wiping
blocks as they are freed.

I initially thought he meant something like a guarantee on file deletion
that the file's data is gone -- but snapshots and clones are in conflict
with that, though not with wiping blocks as they are freed.

Now, if we had a bleach(1) operation, then we'd need a bleach(2) and a
VOP_BLEACH and fop_bleach.  But that's not what Darren is proposing.

 I didn't have anything per file, but exactly what you said.  The policy
 was when files are removed, when data sets are removed, when pools are
 removed.
 
 Well, that's great for situations where things actually get removed.  It's 
 not so great for things that get rewritten rather than removed, and it 
 seems nearly useless for vdevs.  I think there's some benefit to making the 
 functionality directly available to user-mode, but more importantly, 
 there's a definite advantage to a system in which the user knows that a 
 file was bleached when they removed it, and not decades later when someone 
 gets around to removing a stray snapshot.  That difference can have serious 
 legal and/or intelligence implications.

Yes, I think that a bleach operation that forcefully removes a file's
contents even in all snapshots and clones, could be useful.  But I'm not
sure that we can get it.

Nico
-- 


[zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-18 Thread James Dickens

On 12/18/06, Darren J Moffat [EMAIL PROTECTED] wrote:


[ This is for discussion, it doesn't mean I'm actively working on this
   functionality at this time or that I might do so in the future. ]

When we get crypto support one way to do secure delete is to destroy
the key.  This is usually a much simpler and faster task than erasing
and overwriting all the data and metadata blocks that have been used.
It also provides the ability of doing time/policy based deletion.

However for some use cases this will not be deemed sufficient [3]. It
also doesn't cover the case where crypto wasn't always used on that
physical storage.

There are userland applications like GNU shred that attempt to cover
this for a single file by opening it up and doing overwriting before
finally calling unlink(2) on it.  However shred and anything like it
that works outside of ZFS will not work with ZFS because of COW.

I'm going to use the term "bleach" here because ZFS already uses "scrub"
(which is what some other people use here), and because "bleach" doesn't
overload any of the specific terms used in this area.

I believe that ZFS should provide a method of bleaching a disk or part
of it that works without crypto having ever been involved.

Currently OpenSolaris only provides what format(1M) gives with the
analyze/purge command[1].

This doesn't help in some cases because it requires that it be run on
the whole disk.  It is also not implementing the current recommendation
from NIST as documented in NIST-800-88 [2].

I think with ZFS we need to provide a method of bleaching that is
compatible with COW, and doesn't require we do it on the whole disk,
because this is very time consuming. Ideally it should also be
compatible with hot sparing.

I think we need 5 distinct places to set the policy:

1) On file delete
This would be a per dataset policy.
The bleaching would happen in a new transaction group
created by the one that did the normal deletion, and would
run only if the original one completed.  It needs to be done in
such a way that the file blocks aren't on the free list until
after the bleaching txg is completed.

2) On ZFS data set destroy
A per pool policy and possibly per dataset with inheritance.
As above for the txg and the free blocks.

3) On demand for a pool without destroying active data.
This is similar to today's scrub, it is a background task
that we start off periodically and view the status of it
via zpool status.

4) On pool destroy (purposely breaks import -d)

5) On hotsparing, bleach the outgoing disk.

When doing all of these (but particularly 2 through 5) we need a way to
tell the admin that this has been completed:  I think zpool status is
probably sufficient initially but we might want to do more.

For case 4 zpool destroy would not return until all the data has been
bleached - this might take some time so we should provide progress.

Option 3 is needed for the case where we have no policy, i.e. 1 and 2
aren't in effect and we know that soon we need to do a disk replacement.

With option 5 we would spare in the new disk and start doing a full
media bleaching on the outgoing disk.  In this case zpool status would
show that a bleaching is in progress on that disk and that the admin
should wait until it completes before physical removal.

Instead of just implementing the current NIST algorithm it would be much
more helpful if we made this extensible, though not necessarily fully
pluggable without modifying the ZFS source. The CMRR project at UCSD has a
good paper on the security/speed tradeoffs [4]. Initially we would provide
at least the following bleaching methods:

1) N passes of zeros, default being 1.
2) Same algorithm that format(1M) uses today.
3) NIST-800-88 compliant method.
4) others to be discussed.
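
One possible shape for that extensibility, purely as an illustration (the method table and pass patterns below are placeholders, not the real format(1M) or NIST 800-88 sequences): each named method is just a list of overwrite passes applied to a block, so adding a method means adding a table entry.

    /* Illustrative only: a table of named bleaching methods, each a sequence
     * of overwrite passes.  The patterns below are placeholders. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE 8
    #define MAX_PASSES 4

    struct bleach_method {
        const char *name;
        int         npasses;
        uint8_t     pattern[MAX_PASSES];   /* byte written on each pass */
    };

    static const struct bleach_method methods[] = {
        { "zero",    1, { 0x00 } },              /* 1) N passes of zeros   */
        { "pattern", 3, { 0xAA, 0x55, 0x00 } },  /* 2) placeholder passes  */
        /* 3) a NIST-800-88 method and 4) others would slot in here */
    };

    static void bleach_block(uint8_t *block, const struct bleach_method *m) {
        for (int p = 0; p < m->npasses; p++)
            memset(block, m->pattern[p], BLOCK_SIZE);  /* one full pass */
    }

    int main(void) {
        uint8_t block[BLOCK_SIZE] = "secret!";
        bleach_block(block, &methods[1]);
        printf("after %s: first byte 0x%02x\n", methods[1].name,
               (unsigned)block[0]);
        return 0;
    }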

One of the tools from the CMRR project specifically takes advantage of
features of ATA drives; I don't think we should do that in ZFS because
we could be dealing with a pool created from a mix of different drive
types or we could be dealing with things like iSCSI targets which are
really ZVOLs on the other side.


Thoughts ?



One idea that would seem easy, for people that aren't totally paranoid, would
be to pass the group of blocks that contained the deleted file to the top
of the list of available blocks to be used for new writes, so that at least,
no matter what else happens, in a few minutes the data will be overwritten
at least once. Perhaps this should happen with all levels of secure delete,
so that even after the blocks have been overwritten 6 times with random
data, they would be next to receive a new file. This might even help
performance since the hard disk heads should still be near the blocks of
overwritten data.
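
A toy model of that allocation tweak (hypothetical; the real ZFS allocator and space maps work quite differently): just-freed blocks go to the head of the free list, so the very next writes land on top of them.

    /* Toy model only: hand out the most recently freed block first, so new
     * writes overwrite recently deleted data as soon as possible. */
    #include <stdint.h>
    #include <stdio.h>

    #define MAX_FREE 16

    static uint64_t freelist[MAX_FREE];
    static int n_free;

    static void free_block(uint64_t blk) {   /* LIFO: newest free on top */
        freelist[n_free++] = blk;
    }
    static uint64_t alloc_block(void) {
        return freelist[--n_free];
    }

    int main(void) {
        free_block(7); free_block(8);        /* some ordinary free space */
        free_block(42);                      /* block 42 just held sensitive data */
        printf("next write goes to block %llu\n",
               (unsigned long long)alloc_block());   /* 42 gets reused first */
        return 0;
    }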

Of course the converse might also be nice: if I'm a home user and I have
been known to make errors, it might be nice to have recently deleted blocks
not written over for a while; of course someone would 

[zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-18 Thread Nicolas Williams
IMO:


 - The hardest problem in the case of bleaching individual files or
   datasets is dealing with snapshots/clones:

- blocks not shared with parent/child snapshots can be bleached with
  little trouble, of course.

- But what about shared blocks?

  IMO we have two options:

  a) Warn the user that some blocks remain unbleached;
  b) Give the user an option to force bleaching shared blocks.

  (b) Would break the snapshot read-only-ness, but it's better than
  forcing the user to destroy clones and snapshots in order to
  really bleach some sensitive content.

   I'd say go for both, (a) and (b).  Of course, (b) may not be easy to
   implement.


 - Unlink is not the right operation to bleach on!

    That's because we don't have a fast dnode# -> path function (ncheck),
   and we don't have path info for the unlinked file in the snapshots
   and clones that share it (worse, they may have additional hard
   links).

   Instead I would propose a new bleach(1), bleach(2), VOP_BLEACH() and
   fop_bleach() (with emulation in the fop for UFS and filesystems that
   don't COW).

    The new system call should either appear to truncate the file or to
    overwrite it with zeros.  The latter would allow for bleaching some
    byte ranges, rather than the whole file (ZFS complexity: first COW
    the non-bleached parts of blocks, then bleach the freed blocks).
    A sketch of what such a call could look like follows below.


 - Bleaching vdevs is easier, of course, but the whole vdev has to be
   bleached as by that point we've no knowledge of which blocks have
   never been used.
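
To make the byte-range variant concrete, here is a sketch of what a bleach(2)-style call could look like -- purely hypothetical, since no such system call exists; the whence/start/len shape follows Jeffrey Hutzelman's F_SETLK-style suggestion elsewhere in this thread.

    /* Sketch only: no bleach(2) system call exists.  This just illustrates
     * the byte-range interface being discussed, with an flock-like range. */
    #include <stdint.h>
    #include <stdio.h>

    struct bleach_range {
        int     br_whence;  /* SEEK_SET / SEEK_CUR / SEEK_END, as in F_SETLK */
        int64_t br_start;   /* offset relative to whence */
        int64_t br_len;     /* 0 == to end of file */
    };

    /* Hypothetical call: the range reads back as zeros afterwards, and the
     * blocks freed or rewritten as a result are bleached. */
    int bleach_range_fd(int fd, const struct bleach_range *r) {
        printf("would bleach fd %d: whence=%d start=%lld len=%lld\n",
               fd, r->br_whence, (long long)r->br_start, (long long)r->br_len);
        return 0;
    }

    int main(void) {
        struct bleach_range whole = { SEEK_SET, 0, 0 };  /* the whole file */
        return bleach_range_fd(0, &whole);   /* any fd; the stub only prints */
    }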


Cheers,

Nico
-- 


[zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-18 Thread Nicolas Williams
On Mon, Dec 18, 2006 at 05:44:08PM -0500, Jeffrey Hutzelman wrote:
 On Monday, December 18, 2006 11:32:37 AM -0600 Nicolas Williams 
 [EMAIL PROTECTED] wrote:
I'd say go for both, (a) and (b).  Of course, (b) may not be easy to
implement.
 
 Another option would be to warn the user and set a flag on the shared block 
 which causes it to be bleached when the last reference goes away.  Of 
 course, one still might want to give the user the option of forcing 
 immediate bleaching of the shared data.

Sure, but if I want something bleached I probably want it bleached
_now_, not who knows when.

  - Unlink is not the right operation to bleach on!
 
    That's because we don't have a fast dnode# -> path function (ncheck),
and we don't have path info for the unlinked file in the snapshots
and clones that share it (worse, they may have additional hard
links).
 
 I'm not sure how this matters.  It certainly is desirable to be able to set 
 policy which causes files to be automatically bleached on unlink (or, more 
 properly, when the link count goes to zero, modulo the complexity added by 
 clones and snapshots).

But why set that per-file?  Why not per-dataset/volume?  "Bleach all
blocks when they are freed" automatically means bleaching blocks when
the last reference is gone (as a result of an unlink of the last file
that had some block, say).

The new system call should either appear to truncate the file or to
overwrite it with zeros.  The latter would allow for bleaching some
byte ranges, rather than the whole file (ZFS complexity: first COW
the non-bleached parts of blocks, then bleach the freed blocks).
 
 Perhaps the system call should act on a file descriptor, and specify a 
 range to be bleached, with a whence/start/len a la F_SETLK.  This would 
 allow for bleaching byte ranges and also help with the vdev issue (one 
 might imagine an application which knows enough about what is on the device 
 to bleach only certain blocks).

Or an iovec-style specification.  But really, how often will one prefer
this to truncate-and-bleach?  Also, the to-be-bleached octet ranges may
not be meaningful in snapshots/clones.  Hmmm.  That convinces me:
truncate-and-bleach or bleach-and-zero, but not bleach individual octet
ranges.

Nico
-- 


[zfs-discuss] Re: [security-discuss] Thoughts on ZFS Secure Delete - without using Crypto

2006-12-18 Thread Nicolas Williams
On Mon, Dec 18, 2006 at 06:46:09PM -0500, Jeffrey Hutzelman wrote:
 
 
 On Monday, December 18, 2006 05:16:28 PM -0600 Nicolas Williams 
 [EMAIL PROTECTED] wrote:
 
 Or an iovec-style specification.  But really, how often will one prefer
 this to truncate-and-bleach?  Also, the to-be-bleached octet ranges may
 not be meaningful in snapshots/clones.  Hmmm.  That convinces me:
 truncate-and-bleach or bleach-and-zero, but not bleach individual octet
 ranges.
 
 Well, consider a file with some structure, like a berkeley db database. 
 The application may well want to bleach each record as it is deleted.

My point is those byte ranges might differ from one version of that
file to another.  Bleaching byte ranges could only affect the current
FS, not any snapshots/clones.  (Of course, if we decide that snapshots
are so read-only that we can't provide a bleach facility that bleaches
across snapshots, then that's fine.)