Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Neil Horman
On Mon, Aug 09, 2010 at 08:00:55PM -0400, Miloslav Trmac wrote:
 - Herbert Xu herb...@gondor.hengli.com.au wrote:
 
  On Thu, Aug 05, 2010 at 10:17:53PM +0200, Miloslav Trmač wrote:
   Hello,
   following is a patchset providing a user-space interface to the kernel
   crypto API.  It is based on the older, BSD-compatible implementation, but
   the user-space interface is different.
  
  Thanks for the patches.
  
  Unfortunately it fails to satisfy the requirement of supporting
  all our existing kernel crypto interfaces, such as AEAD, as well
  as being flexible enough in adding new interfaces such as xor.
 Thanks for the review.
 
 I think I can add AEAD (and compression/RNG, if requested) easily enough, 
 I'll send an updated patch set.
 
 (Talking specifically about xor: xor does not seem to be a cryptographic 
 operation that should be exposed as such to userspace - and AFAICS it is not 
 integrated in the crypto API algorithm mechanism in the kernel either.)
 
 Is the proposed interface acceptable in the general approach (enums for 
 algorithms/operations, unions for parameters, session init/update/finalize)?  
 With respect to flexibility, do you have specific suggestions or areas of 
 concern?

I know we spoke about this previously, but since you're asking publicly, I'll
make my argument here so that we can have the debate in public as well.  I'm ok
with the general enumeration of operations in user space (I honestly can't see
how you would do it otherwise).  What worries me, though, is the use of ioctl for
these operations and the flexibility that it denies you in future updates.

Specifically, my concerns are twofold:

1) struct format.  By passing down a structure through an ioctl call, as you're
doing, there's no way to extend or modify that structure easily for future use.
For instance, I think the integration of AEAD may require a blocking factor to
be specified, an entry for which you have no available field in your crypt_ops
structure.  If you extend the structure in a later release of this code, you
create a potential incompatibility with user space because you are no longer
guaranteed that the size of the crypt_op structure is the same, and you need to
be prepared for a range of sizes to get passed down, at which point it becomes
difficult to differentiate between older code that's properly formatted and
newer code that's legitimately broken.  You could add a version field to
mitigate that, but it gets ugly pretty quickly.

2) Use of pointers.  That's just a pain.  You have the compat ioctl stuff in
place to handle the 32/64-bit conversions, but it would be really nice if you
could avoid having to maintain that extra code path.

That's why I had suggested the use of a netlink protocol to manage this kind of
interface.  There are other issues to manage there, but they're really not that
big a deal, comparatively speaking, and that interface allows for the easy
specification of arbitrary-length data in a fashion that doesn't cause user-space
breakage when additional algorithms/APIs are added.

That's just my opinion; I'm sure others have differing views.  I'd be interested
to hear what others think.

Regards
Neil

 
--
To unsubscribe from this list: send the line unsubscribe linux-crypto in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Steve Grubb
On Tuesday, August 10, 2010 08:46:28 am Neil Horman wrote:
 Specifically, my concerns are twofold:
 
 1) struct format.  By passing down a structure through an ioctl call, as
 you're doing, there's no way to extend or modify that structure easily for
 future use.  For instance, I think the integration of AEAD may require a
 blocking factor to be specified, an entry for which you have no available
 field in your crypt_ops structure.  If you extend the structure in a later
 release of this code, you create a potential incompatibility with user
 space because you are no longer guaranteed that the size of the crypt_op
 structure is the same, and you need to be prepared for a range of sizes to
 get passed down, at which point it becomes difficult to differentiate
 between older code that's properly formatted and newer code that's
 legitimately broken.  You could add a version field to mitigate that, but
 it gets ugly pretty quickly.

Yeah, this does call out for versioning.  But the ioctls are just for crypto 
parameter setup.  Hopefully, that doesn't change too much, since it's just to 
select an algorithm, possibly ask for a key to be wrapped and loaded, or ask 
for a key to be created and exported.  After setup, you just start using the 
device.
 

 That's why I had suggested the use of a netlink protocol to manage this kind
 of interface.  There are other issues to manage there, but they're really
 not that big a deal, comparatively speaking, and that interface allows for
 the easy specification of arbitrary-length data in a fashion that doesn't
 cause user-space breakage when additional algorithms/APIs are added.

The problem with the netlink approach is that auditing is not as good, because 
netlink is an async protocol.  The kernel can only use the credentials that 
ride in the skb with the command, since there is no guarantee the process has 
not changed credentials by the time you get the packet.  Adding more 
credentials to the netlink headers is also not good.  As it is now, the 
auditing is synchronous with the syscall, and we can get at more information 
and reliably create the audit records called out by the protection profiles or 
FIPS-140 level 2.

-Steve


Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Neil Horman
On Tue, Aug 10, 2010 at 09:24:31AM -0400, Steve Grubb wrote:
 On Tuesday, August 10, 2010 08:46:28 am Neil Horman wrote:
  Specifically, my concerns are twofold:
  
  1) struct format.  By passing down a structure through an ioctl call, as
  you're doing, there's no way to extend or modify that structure easily for
  future use.  For instance, I think the integration of AEAD may require a
  blocking factor to be specified, an entry for which you have no available
  field in your crypt_ops structure.  If you extend the structure in a later
  release of this code, you create a potential incompatibility with user
  space because you are no longer guaranteed that the size of the crypt_op
  structure is the same, and you need to be prepared for a range of sizes to
  get passed down, at which point it becomes difficult to differentiate
  between older code that's properly formatted and newer code that's
  legitimately broken.  You could add a version field to mitigate that, but
  it gets ugly pretty quickly.
 
 Yeah, this does call out for versioning.  But the ioctls are just for crypto
 parameter setup.  Hopefully, that doesn't change too much, since it's just to
 select an algorithm, possibly ask for a key to be wrapped and loaded, or ask
 for a key to be created and exported.  After setup, you just start using the
 device.
  
 
  That's why I had suggested the use of a netlink protocol to manage this kind
  of interface.  There are other issues to manage there, but they're really
  not that big a deal, comparatively speaking, and that interface allows for
  the easy specification of arbitrary-length data in a fashion that doesn't
  cause user-space breakage when additional algorithms/APIs are added.
 
 The problem with the netlink approach is that auditing is not as good,
 because netlink is an async protocol.  The kernel can only use the
 credentials that ride in the skb with the command, since there is no
 guarantee the process has not changed credentials by the time you get the
 packet.  Adding more credentials to the netlink headers is also not good.
 As it is now, the auditing is synchronous with the syscall, and we can get
 at more information and reliably create the audit records called out by the
 protection profiles or FIPS-140 level 2.
 
 -Steve

I think that's pretty easy to serialize, though.  All you need to do is enforce a
rule in the kernel that dictates any credential changes to a given context must
be serialized behind all previously submitted crypto operations.  It might make
life a bit tougher on the audit code, but netlink contains pid/sequence data in
all messages so that audit can correlate requests and responses easily.

Neil
 


Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Miloslav Trmac

- Neil Horman nhor...@redhat.com wrote:
 On Tue, Aug 10, 2010 at 09:24:31AM -0400, Steve Grubb wrote:
   That's why I had suggested the use of a netlink protocol to manage this
   kind of interface.  There are other issues to manage there, but they're
   really not that big a deal, comparatively speaking, and that interface
   allows for the easy specification of arbitrary-length data in a fashion
   that doesn't cause user-space breakage when additional algorithms/APIs
   are added.
  
  The problem with the netlink approach is that auditing is not as good,
  because netlink is an async protocol.  The kernel can only use the
  credentials that ride in the skb with the command, since there is no
  guarantee the process has not changed credentials by the time you get the
  packet.  Adding more credentials to the netlink headers is also not good.
  As it is now, the auditing is synchronous with the syscall, and we can get
  at more information and reliably create the audit records called out by
  the protection profiles or FIPS-140 level 2.
  
  -Steve
 
 I think that's pretty easy to serialize, though.  All you need to do is
 enforce a rule in the kernel that dictates any credential changes to a given
 context must be serialized behind all previously submitted crypto operations.
That would be a bit unusual from the layering/semantics aspect - why should 
credential changes care about crypto operations when they don't care about any 
other operations? - and it would require pretty widespread changes throughout 
the kernel core.
Mirek


Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Neil Horman
On Tue, Aug 10, 2010 at 10:47:14AM -0400, Miloslav Trmac wrote:
 
 - Neil Horman nhor...@redhat.com wrote:
  On Tue, Aug 10, 2010 at 09:24:31AM -0400, Steve Grubb wrote:
That's why I had suggested the use of a netlink protocol to manage this kind
of interface.  There are other issues to manage there, but they're really not
that big a deal, comparatively speaking, and that interface allows for the
easy specification of arbitrary-length data in a fashion that doesn't cause
user-space breakage when additional algorithms/APIs are added.
   
   The problem with the netlink approach is that auditing is not as good,
   because netlink is an async protocol.  The kernel can only use the
   credentials that ride in the skb with the command, since there is no
   guarantee the process has not changed credentials by the time you get the
   packet.  Adding more credentials to the netlink headers is also not good.
   As it is now, the auditing is synchronous with the syscall, and we can get
   at more information and reliably create the audit records called out by
   the protection profiles or FIPS-140 level 2.
   
   -Steve
  
  I think that's pretty easy to serialize, though.  All you need to do is
  enforce a rule in the kernel that dictates any credential changes to a given
  context must be serialized behind all previously submitted crypto operations.
 That would be a bit unusual from the layering/semantics aspect - why should 
 credential changes care about crypto operations when they don't care about 
 any other operations? - and it would require pretty widespread changes 
 throughout the kernel core.
 Mirek


I'm sorry, I thought Steve was referring to credentials in the sense of changing
keys/etc. while crypto operations were in flight.  If you're referring to users'
credentials in terms of permissions to use a device or service, then I think it's
all moot anyway.  As you say, why should credential changes care about crypto ops
when they don't care about other ops?  If you have permission to use a
device/service, changing those permissions doesn't really change the fact that
you're in the process of using that service.  We could do some modicum of
credential checking when requests are submitted and deny service based on that
check, but anything already submitted was done when you had permission to do so
and should be allowed to complete.

Neil



Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Miloslav Trmac

- Neil Horman nhor...@tuxdriver.com wrote:
 On Mon, Aug 09, 2010 at 08:00:55PM -0400, Miloslav Trmac wrote:
  Is the proposed interface acceptable in the general approach (enums
 for algorithms/operations, unions for parameters, session
 init/update/finalize)?  With respect to flexibility, do you have
 specific suggestions or areas of concern?
 
 I know we spoke about this previously, but since you're asking publicly, I'll
 make my argument here so that we can have the debate in public as well.  I'm
 ok with the general enumeration of operations in user space (I honestly can't
 see how you would do it otherwise).  What worries me, though, is the use of
 ioctl for these operations and the flexibility that it denies you in future
 updates.
 Specifically, my concerns are twofold:
 
 1) struct format.  By passing down a structure through an ioctl call, as
 you're doing, there's no way to extend or modify that structure easily for
 future use.  For instance, I think the integration of AEAD may require a
 blocking factor to be specified, an entry for which you have no available
 field in your crypt_ops structure.  If you extend the structure in a later
 release of this code, you create a potential incompatibility with user space
 because you are no longer guaranteed that the size of the crypt_op structure
 is the same, and you need to be prepared for a range of sizes to get passed
 down, at which point it becomes difficult to differentiate between older
 code that's properly formatted and newer code that's legitimately broken.
 You could add a version field to mitigate that, but it gets ugly pretty
 quickly.

I think it would be useful to separate thinking about the data format and about 
the transmission mechanism.  ioctl() can quite well be used to carry 
netlink-like packets - blobs of data with specified length and flexible 
internal structure - and on the other hand, netlink could be (and often is) 
used to carry fixed structs instead of variable-length packets.

So, any advantages netlink packets have in this respect can be provided using 
the ioctl() interface as well.  Besides, adding new ioctl commands provides a 
quite natural versioning mechanism (of course, it would be better to design the 
data format right the first time, but the option to easily extend the interface 
is available).  A versioned data structure that gets ugly pretty quickly is 
exactly what one can get with netlink packets, I think :) - and it is not 
that bad, actually.

Using a different design of the structures, perhaps appending a flexible array 
of attributes at the end of each operation structure, is certainly something 
to consider, if you think this is the way to go; the added flexibility is 
indisputable, at the cost of some encoding/decoding overhead.

As for the transmission mechanism, netlink seems to me to be one of the least 
desirable options: as described above, it does not provide any inherent 
advantages over ioctl, it has significantly higher overhead 
(sendmsg()+recvmsg(), handling netlink ACKs, matching requests and replies in 
multi-threaded programs), and it makes auditing difficult.

 2) Use of pointers.  That's just a pain.  You have the compat ioctl stuff in
 place to handle the 32/64-bit conversions, but it would be really nice if you
 could avoid having to maintain that extra code path.
Pointers are pretty much unavoidable, to allow zero-copy references to 
input/output data.  If that means maintaining 32-bit compat paths (as opposed 
to codifying, say, uint128_t for pointers in fixed-size structures), then so be 
it: the 32-bit paths will simply have to be maintained.  Once we have the 
32-bit compat paths, using pointers for other unformatted, variable-sized data 
(e.g. IVs, multiple-precision integers) seems to be easier than using 
variable-size data structures or explicit pointer arithmetic to compute data 
locations within the data packets.
Mirek


Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Miloslav Trmac
- Neil Horman nhor...@redhat.com wrote:
 On Tue, Aug 10, 2010 at 10:47:14AM -0400, Miloslav Trmac wrote:
  - Neil Horman nhor...@redhat.com wrote:
   On Tue, Aug 10, 2010 at 09:24:31AM -0400, Steve Grubb wrote:
The problem with the netlink approach is that auditing is not as good 
because 
netlink is an async protocol. The kernel can only use the credentials 
that 
ride in the skb with the command since there is no guarantee the 
process has 
not changed credentials by the time you get the packet. Adding more 
credentials to the netlink headers is also not good. As it is now, the 
auditing is synchronous with the syscall and we can get at more 
information 
and reliably create the audit records called out by the protection 
profiles or 
FIPS-140 level 2.

-Steve
   
   I think thats pretty easy to serialize though.  All you need to do is 
   enforce a
   rule in the kernel that dictates any creditial changes to a given context 
   must
   be serialized behind all previously submitted crypto operations.
  That would be a bit unusual from the layering/semantics aspect - why
 should credential changes care about crypto operations when they don't
 care about any other operations? - and it would require pretty
 widespread changes throughout the kernel core.
  Mirek
 
 
 I'm sorry, I thought steve was referring to credentials in the sense of 
 changing
 keys/etc while crypto operations were in flight.
The audited values are mainly process/thread attributes: pid, ppid, 
{,e,fs}[ug]id, session id, and the like.  Most of these are not attached to the 
netlink packet, and performing a lookup by PID at packet handling time is 
unreliable - as far as I understand, the netlink receive routine does not have 
to run in the same process context, so the PID might not even refer to the same 
process.
Mirek


Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Neil Horman
On Tue, Aug 10, 2010 at 11:36:16AM -0400, Miloslav Trmac wrote:
 
 - Neil Horman nhor...@tuxdriver.com wrote:
  On Mon, Aug 09, 2010 at 08:00:55PM -0400, Miloslav Trmac wrote:
   Is the proposed interface acceptable in the general approach (enums
  for algorithms/operations, unions for parameters, session
  init/update/finalize)?  With respect to flexibility, do you have
  specific suggestions or areas of concern?
  
  I know we spoke about this previously, but since you're asking publicly,
  I'll make my argument here so that we can have the debate in public as
  well.  I'm ok with the general enumeration of operations in user space (I
  honestly can't see how you would do it otherwise).  What worries me,
  though, is the use of ioctl for these operations and the flexibility that
  it denies you in future updates.
  Specifically, my concerns are twofold:
  
  1) struct format.  By passing down a structure through an ioctl call, as
  you're doing, there's no way to extend or modify that structure easily for
  future use.  For instance, I think the integration of AEAD may require a
  blocking factor to be specified, an entry for which you have no available
  field in your crypt_ops structure.  If you extend the structure in a later
  release of this code, you create a potential incompatibility with user
  space because you are no longer guaranteed that the size of the crypt_op
  structure is the same, and you need to be prepared for a range of sizes to
  get passed down, at which point it becomes difficult to differentiate
  between older code that's properly formatted and newer code that's
  legitimately broken.  You could add a version field to mitigate that, but
  it gets ugly pretty quickly.
 
 I think it would be useful to separate thinking about the data format and 
 about the transmission mechanism.  ioctl() can quite well be used to carry 
 netlink-like packets - blobs of data with specified length and flexible 
 internal structure - and on the other hand, netlink could be (and often is) 
 used to carry fixed structs instead of variable-length packets.

Yes, both mechanisms can be used to carry either fixed- or variable-length
payloads.  The difference is that ioctl has no built-in mechanism to do
variable-length data safely.  To do variable-length data you need to set up a
bunch of pointers to extra data that you have to copy separately.  Even then,
you're fixing the number of pointers that you have in your base structure.  To
add or remove any would break your communication protocol to user space.  This
is exactly the point I made above.

 
 So, any advantages netlink packets have in this respect can be provided using
 the ioctl() interface as well.  Besides, adding new ioctl commands provides a
 quite natural versioning mechanism (of course, it would be better to design
 the data format right the first time, but the option to easily extend the
 interface is available).  A versioned data structure that gets ugly pretty
 quickly is exactly what one can get with netlink packets, I think :) - and
 it is not that bad, actually.
No, they can't.  I just explained again why not, above.  At least not without
additional metadata and compatibility code built into the struct that you pass
in the ioctl.

Adding commands is irrelevant.  It's equally easy to do so in an enumeration
provided in a struct as it is in a netlink message.  There's no advantage
in either case.

And there is no need for versioning in the netlink packet, because the data
types are all inlined, typed, and attached to length values (at least when done
correctly; see the nlattr structure and associated macros).  You don't have
that with your ioctl.  You could certainly add it, but at that point you're
re-inventing the wheel.

 
 Using a different design of the structures, perhaps appending a flexible
 array of attributes at the end of each operation structure, is certainly
 something to consider, if you think this is the way to go; the added
 flexibility is indisputable, at the cost of some encoding/decoding overhead.

That's exactly what netlink already has built in.  Each netlink message consists
of an nlmsghdr structure, which identifies the source/dest process, etc.  That's
followed by a variable number of nlattr attributes, each of which consists of a
type, length, and array of data bytes.  We have kernel macros built to
encode/decode these attributes already.  The work is already done (see the
nlmsg_parse function in the kernel).


 
 As for the transmission mechanism, netlink seems to me to be one of the least
 desirable options: as described above, it does not provide any inherent
 advantages over ioctl, it has significantly higher overhead
 (sendmsg()+recvmsg(), handling netlink ACKs, matching requests and replies in
 multi-threaded programs), and it makes auditing difficult.

You're incorrect.  I've explained several of the advantages above and in my
previous email; you're just not seeing 

Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Neil Horman
On Tue, Aug 10, 2010 at 11:40:00AM -0400, Miloslav Trmac wrote:
 - Neil Horman nhor...@redhat.com wrote:
  On Tue, Aug 10, 2010 at 10:47:14AM -0400, Miloslav Trmac wrote:
   - Neil Horman nhor...@redhat.com wrote:
On Tue, Aug 10, 2010 at 09:24:31AM -0400, Steve Grubb wrote:
 The problem with the netlink approach is that auditing is not as good,
 because netlink is an async protocol.  The kernel can only use the
 credentials that ride in the skb with the command, since there is no
 guarantee the process has not changed credentials by the time you get the
 packet.  Adding more credentials to the netlink headers is also not good.
 As it is now, the auditing is synchronous with the syscall, and we can get
 at more information and reliably create the audit records called out by the
 protection profiles or FIPS-140 level 2.
 
 -Steve

I think that's pretty easy to serialize, though.  All you need to do is
enforce a rule in the kernel that dictates any credential changes to a given
context must be serialized behind all previously submitted crypto operations.
   That would be a bit unusual from the layering/semantics aspect - why
  should credential changes care about crypto operations when they don't
  care about any other operations? - and it would require pretty
  widespread changes throughout the kernel core.
   Mirek
  
  
  I'm sorry, I thought steve was referring to credentials in the sense of 
  changing
  keys/etc while crypto operations were in flight.
 The audited values are mainly process/thread attributes: pid, ppid,
 {,e,fs}[ug]id, session id, and the like.  Most of these are not attached to
 the netlink packet, and performing a lookup by PID at packet handling time is
 unreliable - as far as I understand, the netlink receive routine does not
 have to run in the same process context, so the PID might not even refer to
 the same process.
 Mirek

I'm not so sure I follow.  How can you receive messages on a socket in response
to requests that were sent from a different socket?  In the netlink multicast
and broadcast case, sure, but there's no need to use those.  I suppose you could
exit a process, start another process in which the pid gets reused, open up a
subsequent socket, and perhaps confuse audit that way, but you're not going to
get responses to the requests that the previous process sent in that case.  And
in the event that happens, audit should log a close event on the fd in question
between the operations.


There's also the child-process case, in which a child process might read
responses from requests sent by a parent, and vice versa, since fork inherits
file descriptors, but that's an issue inherent in any mechanism you use,
including the character interface, so I'm not sure what the problem is there.



Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Miloslav Trmac

- Neil Horman nhor...@redhat.com wrote:

 On Tue, Aug 10, 2010 at 11:40:00AM -0400, Miloslav Trmac wrote:
  - Neil Horman nhor...@redhat.com wrote:
   On Tue, Aug 10, 2010 at 10:47:14AM -0400, Miloslav Trmac wrote:
- Neil Horman nhor...@redhat.com wrote:
 On Tue, Aug 10, 2010 at 09:24:31AM -0400, Steve Grubb wrote:
  The problem with the netlink approach is that auditing is not as 
  good because
  netlink is an async protocol. The kernel can only use the 
  credentials that
  ride in the skb with the command since there is no guarantee the 
  process has
  not changed credentials by the time you get the packet. Adding more
  credentials to the netlink headers is also not good. As it is now, 
  the
  auditing is synchronous with the syscall and we can get at more 
  information
  and reliably create the audit records called out by the protection 
  profiles or
  FIPS-140 level 2.
 
  -Steve

 I think thats pretty easy to serialize though.  All you need to do is 
 enforce a
 rule in the kernel that dictates any creditial changes to a given 
 context must
 be serialized behind all previously submitted crypto operations.
That would be a bit unusual from the layering/semantics aspect - why
   should credential changes care about crypto operations when they don't
   care about any other operations? - and it would require pretty
   widespread changes throughout the kernel core.
Mirek
  
  
   I'm sorry, I thought steve was referring to credentials in the sense of 
   changing
   keys/etc while crypto operations were in flight.
 The audited values are mainly process/thread attributes: pid, ppid,
 {,e,fs}[ug]id, session id, and the like.  Most of these are not attached
 to the netlink packet, and performing a lookup by PID at packet handling
 time is unreliable - as far as I understand, the netlink receive routine
 does not have to run in the same process context, so the PID might not
 even refer to the same process.

 I'm not so sure I follow.  How can you receive messages on a socket in
 response to requests that were sent from a different socket?  In the netlink
 multicast and broadcast case, sure, but there's no need to use those.  I
 suppose you could exit a process, start another process in which the pid
 gets reused, open up a subsequent socket, and perhaps confuse audit that
 way, but you're not going to get responses to the requests that the previous
 process sent in that case.
I don't even need to open a subsequent socket - as soon as the process ID is 
reused, the audit message is incorrect, which is not really acceptable in 
itself.

 And in the event that happens, audit should log a close event on the fd in
 question between the operations.
Audit only logs explicitly requested operations, so an administrator who asks 
for crypto events does not automatically get any close events.  Besides, the 
audit record should be correct in the first place, instead of giving the admin 
a puzzle to decipher.

 There's also the child-process case, in which a child process might read
 responses from requests sent by a parent, and vice versa, since fork inherits
 file descriptors, but that's an issue inherent in any mechanism you use,
 including the character interface, so I'm not sure what the problem is there.
Actually, that's not a problem with ioctl(), because the response is written 
back _before_ ioctl() returns, so it is always provided to the original calling 
thread.

With the current design the parent and child share the key/session namespace 
after fork(), but operation results are always reliably and automatically 
provided to the thread that invoked the operation, without any user-space 
collaboration.
Mirek


Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Neil Horman
On Tue, Aug 10, 2010 at 12:57:43PM -0400, Miloslav Trmac wrote:
 
 - Neil Horman nhor...@redhat.com wrote:
 
  On Tue, Aug 10, 2010 at 11:40:00AM -0400, Miloslav Trmac wrote:
   - Neil Horman nhor...@redhat.com wrote:
On Tue, Aug 10, 2010 at 10:47:14AM -0400, Miloslav Trmac wrote:
 - Neil Horman nhor...@redhat.com wrote:
  On Tue, Aug 10, 2010 at 09:24:31AM -0400, Steve Grubb wrote:
   The problem with the netlink approach is that auditing is not as 
   good because
   netlink is an async protocol. The kernel can only use the 
   credentials that
   ride in the skb with the command since there is no guarantee the 
   process has
   not changed credentials by the time you get the packet. Adding 
   more
   credentials to the netlink headers is also not good. As it is 
   now, the
   auditing is synchronous with the syscall and we can get at more 
   information
   and reliably create the audit records called out by the 
   protection profiles or
   FIPS-140 level 2.
  
   -Steve
 
  I think that's pretty easy to serialize though.  All you need to do 
  is enforce a
  rule in the kernel that dictates any credential changes to a given 
  context must
  be serialized behind all previously submitted crypto operations.
 That would be a bit unusual from the layering/semantics aspect - why
should credential changes care about crypto operations when they don't
care about any other operations? - and it would require pretty
widespread changes throughout the kernel core.
 Mirek
   
   
I'm sorry, I thought Steve was referring to credentials in the sense of 
changing
keys/etc. while crypto operations were in flight.
   The audited values are mainly process/thread attributes: pid, ppid,
  {,e,fs}[ug]id, session id, and the like.  Most of these are attached
  to the netlink packet, and performing a lookup by PID at packet
  handling time is unreliable - as far as I understand, the netlink
  receive routine does not have to run in the same process context, so
  the PID might not even refer to the same process.
 
  I'm not so sure I follow.  How can you receive messages on a socket in
  response to requests that were sent from a different socket?  In the netlink
  multicast and broadcast case, sure, but there's no need to use those.  I
  suppose you could exit a process, start another process in which the pid gets
  reused, open up a subsequent socket, and perhaps confuse audit that way, but
  you're not going to
  get responses to the requests that the previous process sent in that case.
 I don't even need to open a subsequent socket - as soon as the process ID is 
 reused, the audit message is incorrect, which is not really acceptable in 
 itself.
 
But you do - that's my point.  If a process exits, and another process starts up
that happens to reuse the same pid, it can't just call recvmsg on the socket
descriptor that the last process used for netlink messages and expect to get any
data.  That socket descriptor won't be connected to the netlink service (or
anything) anymore, and you'll get an error from the kernel.

  And in the event that happens, Audit should log a close event on the fd in
  question between the operations.
 audit only logs explicitly requested operations, so an administrator that 
 asks for crypto events does not automatically get any close events.  Besides, 
 the audit record should be correct in the first place, instead of giving the 
 admin a puzzle to decipher.
I still don't see what's incorrect here.  If two processes wind up reusing a
process id, that's audit's problem to figure out, nothing else's.  What exactly is
the problem that you see involving netlink and audit here?  Compare whatever
problem you see a crypto netlink protocol having in regards to audit to another
netlink protocol (say rtnetlink), and explain to me why the latter doesn't have
that issue as well.


 
  Theres also the child process case, in which a child process might read
  responses from requests sent by a parent, and vice versa, since fork 
  inherits
  file descriptors, but thats an issue inherent in any mechanism you use,
  including the character interface, so I'm not sure what the problem is
  there.
 Actually that's not a problem with ioctl() because the response is written 
 back _before_ ioctl() returns, so it is always provided to the original 
 calling thread.
 
 With the current design the parent and child share the key/session namespace 
 after fork(), but operation results are always reliably and automatically 
 provided to the thread that invoked the operation, without any user-space 
 collaboration.
Is that _really_ that important?  That the kernel return data in the same call
that its made?  I don't believe for a minute that FIPS has any madate on that.
I believe that it might be easier for audit to provide records on single calls,
rather than 

Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Steve Grubb
On Tuesday, August 10, 2010 01:57:40 pm Neil Horman wrote:
   I'm not so sure I follow.  How can you receive messages on a socket in
   response to requests that were sent from a different socket?  In the
   netlink multicast and broadcast case, sure, but there's no need to use
   those.  I suppose you could exit a process, start another process in
   which the pid gets reused, open up a subsequent socket, and perhaps
   confuse audit that way, but you're not going to get responses to the
   requests that the previous process sent in that case.
  
  I don't even need to open a subsequent socket - as soon as the process ID
  is reused, the audit message is incorrect, which is not really
  acceptable in itself.
 
  
 
 But you do - that's my point.  If a process exits, and another process
 starts up that happens to reuse the same pid, it can't just call recvmsg
 on the socket descriptor that the last process used for netlink messages
 and expect to get any data.  That socket descriptor won't be connected to
 the netlink service (or anything) anymore, and you'll get an error from
 the kernel.

You are looking at it from the wrong end. Think of it from audit's perspective: 
how do you guarantee that the audit trail is correct? This has been 
discussed on the linux-audit mail list before, and the conclusion is you have very 
limited information to work with. By being synchronous with the syscall, we get 
everything in the syscall record from the process's audit context.

The audit logs require non-repudiation. It cannot be racy or stitch together 
possibly wrong events. Audit logs can and do wind up in court and we do not 
want problems with any part of the system.

   And in the event that happens, Audit should log a close event on the fd
   in question between the operations.
  
  audit only logs explicitly requested operations, so an administrator that
  asks for crypto events does not automatically get any close
  events.  Besides, the audit record should be correct in the first place,
  instead of giving the admin a puzzle to decipher.
 
 I still don't see what's incorrect here.  If two processes wind up reusing a
 process id, that's audit's problem to figure out, nothing else's

True, but that is the point of this exercise - meeting Common Criteria and 
FIPS. They both have rules about what the audit logs should present and the 
assurance that the information is correct and not racy.

 .  What exactly is the problem that you see involving netlink and audit
 here?  Compare whatever problem you see a crypto netlink protocol having
 in regards to audit to another netlink protocol (say rtnetlink), and
 explain to me why the latter doesn't have that issue as well.

That one is not security sensitive. Nowhere in any protection profile does it 
say to audit that.

-Steve


Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Miloslav Trmac
- Neil Horman nhor...@tuxdriver.com wrote:
 On Tue, Aug 10, 2010 at 12:57:43PM -0400, Miloslav Trmac wrote:
  
  - Neil Horman nhor...@redhat.com wrote:
  
   On Tue, Aug 10, 2010 at 11:40:00AM -0400, Miloslav Trmac wrote:
- Neil Horman nhor...@redhat.com wrote:
 On Tue, Aug 10, 2010 at 10:47:14AM -0400, Miloslav Trmac wrote:
  - Neil Horman nhor...@redhat.com wrote:
   On Tue, Aug 10, 2010 at 09:24:31AM -0400, Steve Grubb wrote:
The problem with the netlink approach is that auditing is not 
as good because
netlink is an async protocol. The kernel can only use the 
credentials that
ride in the skb with the command since there is no guarantee 
the process has
not changed credentials by the time you get the packet. Adding 
more
credentials to the netlink headers is also not good. As it is 
now, the
auditing is synchronous with the syscall and we can get at more 
information
and reliably create the audit records called out by the 
protection profiles or
FIPS-140 level 2.
   
-Steve
  
    I think that's pretty easy to serialize though.  All you need to 
    do is enforce a
    rule in the kernel that dictates any credential changes to a given 
    context must
    be serialized behind all previously submitted crypto operations.
  That would be a bit unusual from the layering/semantics aspect - why
 should credential changes care about crypto operations when they don't
 care about any other operations? - and it would require pretty
 widespread changes throughout the kernel core.
  Mirek


  I'm sorry, I thought Steve was referring to credentials in the sense 
  of changing
  keys/etc. while crypto operations were in flight.
The audited values are mainly process/thread attributes: pid, ppid,
   {,e,fs}[ug]id, session id, and the like.  Most of these are attached
    to the netlink packet, and performing a lookup by PID at packet
    handling time is unreliable - as far as I understand, the netlink
   receive routine does not have to run in the same process context, so
   the PID might not even refer to the same process.
  
    I'm not so sure I follow.  How can you receive messages on a socket in
    response to requests that were sent from a different socket?  In the
    netlink multicast and broadcast case, sure, but there's no need to use
    those.  I suppose you could exit a process, start another process in
    which the pid gets reused, open up a subsequent socket, and perhaps
    confuse audit that way, but you're not going to
    get responses to the requests that the previous process sent in that case.
   I don't even need to open a subsequent socket - as soon as the
  process ID is reused, the audit message is incorrect, which is not
  really acceptable in itself.
  
 But you do - that's my point.  If a process exits, and another process starts 
 up
 that happens to reuse the same pid, it can't just call recvmsg on the socket
 descriptor that the last process used for netlink messages and expect to get 
 any data.
It _doesn't matter_ that I don't receive a response.  I have caused an 
operation in the kernel and the requested audit record is incorrect.  The audit 
subsystem needs to work even if - especially if - the userspace process is 
malicious.  The process might have wanted to simply cause a misleading audit 
record; or the operation (such as importing a key from supplied data) might not 
really need the response in order to achieve its effect.

   And
   in the event that happens, Audit should log a close event on the
 fd inquestion
   between the operations.
  audit only logs explicitly requested operations, so an administrator
 that asks for crypto events does not automatically get any close
 events.  Besides, the audit record should be correct in the first
 place, instead of giving the admin a puzzle to decipher.
 I still don't see what's incorrect here.  If two processes wind up reusing a
 process id, that's audit's problem to figure out, nothing else's.
If an operation attributed to user A is recorded by the kernel as being 
performed by user B, that is not a user problem.

 What exactly is
 the problem that you see involving netlink and audit here?  Compare whatever
 problem you see a crypto netlink protocol having in regards to audit to 
 another
 netlink protocol (say rtnetlink), and explain to me why the latter doesn't 
 have
 that issue as well.
AFAIK most netlink users in the kernel are not audited, and those that are 
typically only allow operations by system administrators.  And I haven't 
claimed that the other users are correct :)  Notably audit itself does not 
record as much information about the invoking process as we'd like because it 
is not available.

   Theres also the child process case, in which a child process might read
   responses from 

RE: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Loc Ho
Hi Miloslav,

I have read, or at least glanced over, the patch from 
http://people.redhat.com/mitr/cryptodev-ncr/0002. We posted a version of 
CryptoDev over a year ago. Unfortunately, we did not get a chance to pick it 
up again. In that process, Herbert Xu provided a number of comments; you can 
search the mailing list for them. Here are my comments, based on experience 
with hardware crypto:

1. This CryptoDev (user space) interface needs to support multiple operations 
at once
2. This CryptoDev interface needs to support async operation
3. This CryptoDev interface needs to support large data operations, such as an 
entire file
4. Zero copy (I already see this in your version)
5. Avoid a lot of little kmallocs for various small structures (this one is 
from Herbert Xu.)
6. This interface needs to work with OpenSSL, which allows OpenSwan to work

Reasons for the items above:

Item #1. Multiple operations - This is needed to take advantage of the 
hardware offload. If you only submit one operation at a time, the latency of 
the software/hardware will be a problem. By allowing multiple operations to be 
submitted, you fill up the hardware buffer and keep it running. Otherwise it 
will just be idle a majority of the time, and the difference between SW and HW 
is nil.

Item #2. Async - You can argue that you can open multiple /dev/crypto sessions 
and submit to them. But this does not work for the same session, or for 
HW-based crypto. Having an async interface also benefits the user-space 
application developer, who can use the same async programming interface as 
with other I/O operations.

Item #3. Large file support - Most hardware algorithms can't support this, as 
the operation cannot be continued. Not sure how to handle this.

Item #4. Right now you are pinning the entire buffer. For small buffers, this 
may not make sense. We have not had a chance to determine the limit for this.

Item #5. Herbert Xu mentioned that we should avoid having a lot of small 
kmallocs where possible.

Item #6. You should give OpenSSL a patch and see how it works out - although 
OpenSSL does not take advantage of batch submission. Again, to really take 
advantage of the HW, you need to support batch operation.

For all other comments, search for the previous /dev/cryptodev submission and 
you will find a number of arguments about this user-space crypto API.

-Loc


CONFIDENTIALITY NOTICE: This e-mail message, including any attachments, 
is for the sole use of the intended recipient(s) and contains information 
that is confidential and proprietary to AppliedMicro Corporation or its 
subsidiaries. 
It is to be used solely for the purpose of furthering the parties' business 
relationship. 
All unauthorized review, use, disclosure or distribution is prohibited. 
If you are not the intended recipient, please contact the sender by reply 
e-mail 
and destroy all copies of the original message.




Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Miloslav Trmac
Hello,
- Loc Ho l...@apm.com wrote:
 I have read, or at least glanced over, the patch from
 http://people.redhat.com/mitr/cryptodev-ncr/0002. We posted a
 version of CryptoDev over a year ago. Unfortunately, we did not
 get a chance to pick it up again. In that process, Herbert Xu provided a
 number of comments; you can search the mailing list for them. Here are
 my comments, based on experience with hardware crypto:
Thanks for the comments.

 1. This CryptoDev (user space) interface needs to support multiple
 operations at once

I think this would be naturally solved by providing the async interface.

 Item #2. Async - You can argue that you can open multiple /dev/crypto
 sessions and submit to them. But this does not work for the same session,
 or for HW-based crypto. Having an async interface also benefits
 the user-space application developer, who can use the same async
 programming interface as with other I/O operations.

Right, that would make sense - but it can be added later (by providing an 
async op operation, a poll() handler, and a get next result operation).  I'd 
like to get the existing functionality acceptable first.

 Item #3. Large file support - Most hardware algorithms can't support
 this, as the operation cannot be continued. Not sure how to handle
 this.
The proposed interface does not have any inherent input/output size limits.

 Item #4. Right now you are pinning the entire buffer. For small buffers,
 this may not make sense. We have not had a chance to determine the
 limit for this.
Good point; this kind of performance optimization can be added later as well, 
noted for future work.

 Item #5. Herbert Xu mentioned that we should avoid having a lot of
 small kmalloc when possible.
Looking at the session code, which is most critical, I see a few opportunities 
to improve this; noted.

 Item #6. You should give OpenSSL a patch and see how it works out -
 although OpenSSL does not take advantage of batch submission. Again,
 to really take advantage of the HW, you really need to support batch
 operation.
OpenSSL support is planned, but not ready yet.

Thanks again,
Mirek


Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Miloslav Trmac
- Herbert Xu herb...@gondor.hengli.com.au wrote:
 On Tue, Aug 10, 2010 at 02:19:59PM -0400, Miloslav Trmac wrote:
 
  2) simplicity and reliability: you are downplaying the overhead and
 synchronization necessary (potentially among multiple processes!) to
 simply receive a response, but it is still enormous compared to the
 single syscall case.  Even worse, netlink(7) says netlink is not a
 reliable protocol.  ... but may drop messages.  Would you accept such
 a mechanism to transfer write data to file operations?  Compress
 data using AES is much more similar to write data to file than to
 change this aspect of kernel routing configuration - it is an
 application-level service, not a way to communicate long-term
 parameters to a pretty independent subsystem residing in the kernel.
 
 That just shows you have no idea how netlink works.
I'm learning as fast as I can by reading all available documentation :)  
Nevertheless I believe the point stands even without the reliability problem.
Mirek


RE: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Loc Ho

 1. This CryptoDev (user space) interface needs to support multiple
 operations at once

I think this would be naturally solved by providing the async interface.
[Loc Ho]
Async only supports a single operation at a time. HW is quite fast. The ability 
to submit multiple payloads in a single OS call improves performance. The 
software overhead of the submission alone can be more than a single 
encrypt/decrypt/hashing operation.

-Loc





Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Neil Horman
On Tue, Aug 10, 2010 at 02:14:24PM -0400, Steve Grubb wrote:
 On Tuesday, August 10, 2010 01:57:40 pm Neil Horman wrote:
 I'm not so sure I follow.  How can you receive messages on a socket in
 response to requests that were sent from a different socket?  In the
 netlink multicast and broadcast case, sure, but there's no need to use
 those.  I suppose you could exit a process, start another process in
 which the pid gets reused, open up a subsequent socket, and perhaps
 confuse audit that way, but you're not going to get responses to the
 requests that the previous process sent in that case.
   
    I don't even need to open a subsequent socket - as soon as the process ID
    is reused, the audit message is incorrect, which is not really
    acceptable in itself.
  
   
  
   But you do - that's my point.  If a process exits, and another process
   starts up that happens to reuse the same pid, it can't just call recvmsg
  on the socket descriptor that the last process used for netlink messages
  and expect to get any data.  That socket descriptor won't be connected to
  the netlink service (or anything) anymore, and you'll get an error from
  the kernel.
 
 You are looking at it from the wrong end. Think of it from audit's 
 perspective 
 about how do you guarantee that the audit trail is correct? This has been 
 discussed on linux-audit mail list before and the conclusion is you have very 
  limited information to work with. By being synchronous with the syscall, we get 
  everything in the syscall record from the process's audit context.
 

What information do you need in the audit record that you might lose across
two syscalls?  It sounds from previous emails that, generally speaking, you're
worried that you want the task struct that current points to in the recvmsg call
be guaranteed to be the same as the task struct that current points to in the
sendmsg call (i.e. no children (re)using the same socket descriptor, etc).  Can
this be handled by using the fact that netlink is actually synchronous under the
covers?  I.e. when you send a message to a netlink service, there is no reason
that all the relevant crypto ops in the request can't be completed in the
context of that call, as long as all your crypto operations are themselves
synchronous.  By the time you are done with the sendmsg call, you can know if
your entire crypto op is successful.  The only thing that isn't complete is the
retrieval of the completed operation's data from the kernel.  Is that enough to
make an audit log entry in the same way that an ioctl would?

 The audit logs require non-repudiation. It cannot be racy or stitch together 
 possibly wrong events. Audit logs can and do wind up in court and we do not 
 want problems with any part of the system.
 
And in the event that happens, Audit should log a close event on the fd
in question between the operations.
   
   audit only logs explicitly requested operations, so an administrator that
   asks for crypto events does not automatically get any close
   events.  Besides, the audit record should be correct in the first place,
   instead of giving the admin a puzzle to decipher.
  
  I still don't see what's incorrect here.  If two processes wind up reusing a
  process id, that's audit's problem to figure out, nothing else's
 
 True, but that is the point of this exercise - meeting Common Criteria and 
 FIPS. They both have rules about what the audit logs should present and the 
 assurance that the information is correct and not racy.
 

Can you enumerate here what FIPS and Common Criteria mandate be presented in
the audit logs? 

Neil

  .  What exactly is the problem that you see involving netlink and audit
  here?  Compare whatever problem you see a crypto netlink protocol having
  in regards to audit to another netlink protocol (say rtnetlink), and
  explain to me why the latter doesn't have that issue as well.
 
 That one is not security sensitive. Nowhere in any protection profile does it 
 say to audit that.
 
 -Steve


Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Miloslav Trmac
- Neil Horman nhor...@tuxdriver.com wrote:
 On Tue, Aug 10, 2010 at 11:36:16AM -0400, Miloslav Trmac wrote:
  I think it would be useful to separate thinking about the data
 format and about the transmission mechanism.  ioctl() can quite well
 be used to carry netlink-like packets - blobs of data with specified
 length and flexible internal structure - and on the other hand,
 netlink could be (and often is) used to carry fixed structs instead of
 variable-length packets.
 
 Yes, both mechanisms can be used to carry either fixed or variable length
 payloads.  The difference is that ioctl has no built-in mechanism to do variable
 length data safely.  To do variable length data you need to set up a bunch of
 pointers to extra data that you have to copy separately.
No, I can do exactly what netlink does:

struct operation {
        // fixed params ...
        size_t size_of_attrs;
        char attrs[];   // struct nlattr-tagged data
};

at the cost of one additional copy_from_user() (which netlink users often pay 
as well because using struct iovec is so convenient).  Or perhaps even better, 
I can make sure the userspace application won't mess up the pointer arithmetic 
and get rid of all of the macros:

struct operation {
        // fixed params ...
        size_t num_attrs;
        struct { struct nlattr header; union { ... } data; } attrs[];
};

at the same cost (and sacrificing string and variable-length parameters - but 
as argued below, pointers are still an option).

 Even then, you're
 fixing the number of pointers that you have in your base structure.  To add or
 remove any would break your communication protocol to user space.  This is
 exactly the point I made above.
The existence of a base structure does not inherently limit the number of 
extensions.

 And there is no need for versioning in the netlink packet, because the data
 types are all inlined, typed and attached to length values (at least when done
 correctly, see the nlattr structure and associated macros).  You don't have
 that with your ioctl.  You could add it for certain, but at that point you're
 re-inventing the wheel.
Right, the nlattr structure is quite nice, and I didn't know about it.  
Still...


  As for the transmission mechanism, netlink seems to me to be one of
 the least desirable options: as described above, it does not provide
 any inherent advantages to ioctl, and it has significantly higher
 overhead (sendmsg()+recvmsg(), handling netlink ACKs, matching
 requests and replies in multi-threaded programs), and it makes
 auditing difficult.
 
 You're incorrect.  I've explained several of the advantages above and in my 
 previous
 email, you're just not seeing them.  I will grant you some additional overhead
 in the use of two system calls rather than one per operation in the nominal
 case, but there is no reason that can't be mitigated by the bundling of 
 multiple
 operations into a single netlink packet. 
None of the existing crypto libraries provide such interfaces, and 
restructuring applications to take advantage of them would often be difficult - 
just restructuring the NSS _internals_ to support this would be difficult.  
Porting applications to a Linux-specific interface would make them less 
portable; besides, we have tried porting applications to a different library 
before (http://fedoraproject.org/wiki/FedoraCryptoConsolidation ), and based on 
that experience, any such new interface would have minimal, if any, impact.

 Likewise, matching requests and responses in a multi-threaded program is also 
 an
 already-solved issue, in multiple ways.  The use of multiple sockets, one 
 per
 thread, is the most obvious.
That would give each thread a separate namespace of keys/sessions, rather 
strange and a problem to fit into existing applications.

 There are also countless approaches in which a
 thread can reassemble responses to registered requests in such a way that the
 user-space portion of an application sees these calls as being synchronous.  
 It's
 really not that hard.
The overhead is still unnecessary.

   2) Use of pointers.  That's just a pain.  You have the compat ioctl stuff 
   in
   place to handle the 32/64 bit conversions, but it would be really nice if 
   you
   could avoid having to maintain that extra code path.
  Pointers are pretty much unavoidable, to allow zero-copy references
 to input/output data.  If that means maintaining 32-bit compat paths
 (as opposed to codifying say uint128_t for pointers in fixed-size
 structures), then so be it: the 32-bit paths will simply have to be
 maintained.  Once we have the 32-bit compat paths, using pointers for
 other unformatted, variable-sized data (e.g. IVs, multiple-precision
 integers) seems to be easier than using variable-size data structures
 or explicit pointer arithmetic to compute data locations within the
 data packets.
 
 No, they're not unavoidable.  That's the point I made in my last email.  If you
 use netlink, you inline all your data into a single byte array, 

Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Steve Grubb
On Tuesday, August 10, 2010 02:45:44 pm Neil Horman wrote:
 On Tue, Aug 10, 2010 at 02:14:24PM -0400, Steve Grubb wrote:
  On Tuesday, August 10, 2010 01:57:40 pm Neil Horman wrote:
 I'm not so sure I follow.  How can you receive messages on a socket
 in response to requests that were sent from a different socket?
 In the netlink multicast and broadcast case, sure, but there's no
 need to use those.  I suppose you could exit a process, start
 another process in which the pid gets reused, open up a subsequent
 socket, and perhaps confuse audit that way, but you're not going to
 get responses to the requests that the previous process sent in
 that case.

I don't even need to open a subsequent socket - as soon as the process
ID is reused, the audit message is incorrect, which is not really
acceptable in itself.
   
    But you do - that's my point.  If a process exits, and another process
    starts up that happens to reuse the same pid, it can't just call
   recvmsg on the socket descriptor that the last process used for
   netlink messages and expect to get any data.  That socket descriptor
   won't be connected to the netlink service (or anything) anymore, and
   you'll get an error from the kernel.
  
  You are looking at it from the wrong end. Think of it from audit's
  perspective about how do you guarantee that the audit trail is correct?
  This has been discussed on linux-audit mail list before and the
  conclusion is you have very limited information to work with. By being
   synchronous with the syscall, we get everything in the syscall record from
   the process's audit context.
 
 What information do you need in the audit record that you might lose
 across two syscalls?

This is easier to show than to explain. With Mirek's current patch:

type=SYSCALL msg=audit(1281013374.713:11671): arch=c03e syscall=2 
success=yes exit=3 a0=400b67 a1=2 a2=0 a3=7fff4daa1200 items=1 ppid=1352 
pid=1375 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=tty6 ses=3 comm=ncr-setkey exe=/home/mitr/cryptodev-
linux/userspace/ncr-setkey subj=unconfined_u:unconfined_r:unconfined_t:s0-
s0:c0.c1023 key=(null)
type=CRYPTO_USERSPACE_OP msg=audit(1281013374.713:11671): 
crypto_op=context_new ctx=0
type=CWD msg=audit(1281013374.713:11671):  cwd=/root
type=PATH msg=audit(1281013374.713:11671): item=0 name=/dev/crypto 
inode=12498 dev=00:05 mode=020660 ouid=0 ogid=0 rdev=0a:3a 
obj=system_u:object_r:device_t:s0
type=CRYPTO_STORAGE_KEY msg=audit(1281013374.715:11672): key_size=16


With the netlink approach, we only get the credentials that ride with the netlink 
packet:

http://lxr.linux.no/#linux+v2.6.35/include/linux/netlink.h#L159

There really is no comparison between what can be recorded synchronously vs 
async.


 It sounds from previous emails that, generally speaking, you're worried that
 you want the task struct that current points to in the recvmsg call to be
 guaranteed to be the same as the task struct that current points to in the
 sendmsg call (i.e. no children (re)using the same socket descriptor, etc). 

Not really, we are _hoping_ that the pid in the netlink credentials is the 
same pid that sent the packet in the first place. From the time we reference 
the pid out of the skb until we go hunting and locate the pid in the list of 
tasks, it could have died and been replaced with another task with different 
credentials. We can't take that risk.


 Can this be handled by using the fact that netlink is actually synchronous
 under the covers?

But it's not, or all the credentials would not be riding in the skb.


 Can you enumerate here what FIPS and Common Criteria mandate be presented
 in the audit logs?

Who did what to whom at what time and what was the outcome. In the case of 
configuration changes we need the new and old values. However, we need extra 
information to make the selective audit work right.

-Steve
--
To unsubscribe from this list: send the line unsubscribe linux-crypto in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Neil Horman
On Tue, Aug 10, 2010 at 02:19:59PM -0400, Miloslav Trmac wrote:
 - Neil Horman nhor...@tuxdriver.com wrote:
  On Tue, Aug 10, 2010 at 12:57:43PM -0400, Miloslav Trmac wrote:
   
   - Neil Horman nhor...@redhat.com wrote:
   
On Tue, Aug 10, 2010 at 11:40:00AM -0400, Miloslav Trmac wrote:
 - Neil Horman nhor...@redhat.com wrote:
  On Tue, Aug 10, 2010 at 10:47:14AM -0400, Miloslav Trmac wrote:
   - Neil Horman nhor...@redhat.com wrote:
On Tue, Aug 10, 2010 at 09:24:31AM -0400, Steve Grubb wrote:
 The problem with the netlink approach is that auditing is not 
 as good because
 netlink is an async protocol. The kernel can only use the 
 credentials that
 ride in the skb with the command since there is no guarantee 
 the process has
 not changed credentials by the time you get the packet. 
 Adding more
 credentials to the netlink headers is also not good. As it is 
 now, the
 auditing is synchronous with the syscall and we can get at 
 more information
 and reliably create the audit records called out by the 
 protection profiles or
 FIPS-140 level 2.

 -Steve
   
I think that's pretty easy to serialize though.  All you need to 
do is enforce a
rule in the kernel that dictates any credential changes to a 
given context must
be serialized behind all previously submitted crypto operations.
   That would be a bit unusual from the layering/semantics aspect - 
   why
  should credential changes care about crypto operations when they 
  don't
  care about any other operations? - and it would require pretty
  widespread changes throughout the kernel core.
   Mirek
 
 
  I'm sorry, I thought Steve was referring to credentials in the 
  sense of changing
  keys/etc while crypto operations were in flight.
 The audited values are mainly process/thread attributes: pid, ppid,
{,e,fs}[ug]id, session id, and the like.  Most of these are attached
to the netlink packet, and performing a lookup by PID at packet
handling time is unreliable - as far as I understand the netlink
receive routine does not have to run in the same process context, so
the PID might not even refer to the same process.
   
I'm not so sure I follow.  How can you receive messages on a socket in 
response
to requests that were sent from a different socket?  In the netlink 
multicast
and broadcast case, sure, but there's no need to use those.  I suppose 
you could
exit a process, start another process in which the pid gets reused, 
open up a
subsequent socket and perhaps confuse audit that way, but you're not 
going to
get responses to the requests that the previous process sent in that 
case.
   I don't even need to open a subsequent socket - as soon as the
  process ID is reused, the audit message is incorrect, which is not
  really acceptable in itself.
   
  But, you do, that's my point.  If a process exits, and another process 
  starts up
  that happens to reuse the same pid, it can't just call recvmsg on the socket
  descriptor that the last process used for netlink messages and expect to 
  get any
  data.
 It _doesn't matter_ that I don't receive a response.  I have caused an 
 operation in the kernel and the requested audit record is incorrect.  The 
 audit subsystem needs to work even if - especially if - the userspace process 
 is malicious.  The process might have wanted to simply cause a misleading 
 audit record; or the operation (such as importing a key from supplied data) 
 might not really need the response in order to achieve its effect.
 

Why is it incorrect?  What would audit have recorded in the above case?  That a
process attempted to receive data on a descriptor that it didn't have open?
That seems correct to me.

And
in the event that happens, Audit should log a close event on the
  fd in question
between the operations.
   audit only logs explicitly requested operations, so an administrator
  that asks for crypto events does not automatically get any close
  events.  Besides, the audit record should be correct in the first
  place, instead of giving the admin a puzzle to decipher.
  I still don't see what's incorrect here.  If two processes wind up reusing a
  process id, that's audit's problem to figure out, nobody else's.
 If an operation attributed to user A is recorded by the kernel as being 
 performed by user B, that is not a user problem.
 
You're right, it's also not a netlink problem, a cryptodev problem, or an ioctl
problem.  It's an audit problem.



  What exactly is
  the problem that you see involving netlink and audit here?  Compare whatever
  problem you see a crypto netlink protocol having in regards to audit to 
  another
  netlink protocol (say rtnetlink), and explain to 

Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Miloslav Trmac
- Miloslav Trmac m...@redhat.com wrote:
 - Neil Horman nhor...@tuxdriver.com wrote:
  Likewise, matching requests and responses in a multi-threaded program is 
  also an
  already solved issue in multiple ways.  The use of multiple sockets, in a 1 
  per
  thread fashion is the most obvious.
 That would give each thread a separate namespace of keys/sessions,
 rather strange and a problem to fit into existing applications.
Hm, that's actually not necessarily true, depending on the specifics, but it 
does bring up the more general problem of crypto contexts and their lifetimes.

The proposed patch treats each open(/dev/crypto) as a new context within 
which keys and sessions are allocated.  Only processes having this FD can 
access the keys and namespaces, and transferring the FD also transfers access 
to the crypto data.  On last close() the data is automatically freed.

With netlink we could associate the data with the caller's netlink address, but 
we don't get any notification that the caller has exited.  The other option is 
to have only one, per-process, context, which again runs into the problem that 
netlink message handling is, at least semantically, separate from the calling 
process.
Mirek


Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Neil Horman
On Tue, Aug 10, 2010 at 03:10:12PM -0400, Steve Grubb wrote:
 On Tuesday, August 10, 2010 02:45:44 pm Neil Horman wrote:
  On Tue, Aug 10, 2010 at 02:14:24PM -0400, Steve Grubb wrote:
   On Tuesday, August 10, 2010 01:57:40 pm Neil Horman wrote:
  I'm not so sure I follow.  How can you receive messages on a socket
  in response to requests that were sent from a different socket? 
  In the netlink multicast and broadcast case, sure, but there's no
  need to use those.  I suppose you could exit a process, start
  another process in which the pid gets reused, open up a subsequent
  socket and perhaps confuse audit that way, but you're not going to
  get responses to the requests that the previous process sent in
  that case.
 
 I don't even need to open a subsequent socket - as soon as the process
 ID is reused, the audit message is incorrect, which is not really
 acceptable in itself.

But, you do, that's my point.  If a process exits, and another process
starts up that happens to reuse the same pid, it can't just call
recvmsg on the socket descriptor that the last process used for
netlink messages and expect to get any data.  That socket descriptor
won't be connected to the netlink service (or anything) anymore, and
you'll get an error from the kernel.
   
   You are looking at it from the wrong end. Think of it from audit's
   perspective: how do you guarantee that the audit trail is correct?
   This has been discussed on linux-audit mail list before and the
   conclusion is you have very limited information to work with. By being
   synchronous with the syscall, we get everything in the syscall record from
   the process's audit context.
  
  What information do you need in the audit record that you might lose
  across two syscalls?
 
 This is easier to show than to explain. With Mirek's current patch:
 
 type=SYSCALL msg=audit(1281013374.713:11671): arch=c03e syscall=2 
 success=yes exit=3 a0=400b67 a1=2 a2=0 a3=7fff4daa1200 items=1 ppid=1352 
 pid=1375 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
 tty=tty6 ses=3 comm=ncr-setkey exe=/home/mitr/cryptodev-
 linux/userspace/ncr-setkey subj=unconfined_u:unconfined_r:unconfined_t:s0-
 s0:c0.c1023 key=(null)
 type=CRYPTO_USERSPACE_OP msg=audit(1281013374.713:11671): 
 crypto_op=context_new ctx=0
 type=CWD msg=audit(1281013374.713:11671):  cwd=/root
 type=PATH msg=audit(1281013374.713:11671): item=0 name=/dev/crypto 
 inode=12498 dev=00:05 mode=020660 ouid=0 ogid=0 rdev=0a:3a 
 obj=system_u:object_r:device_t:s0
 type=CRYPTO_STORAGE_KEY msg=audit(1281013374.715:11672): key_size=16
 
 
 With the netlink approach, we only get the credentials that ride with the netlink 
 packet:
 
 http://lxr.linux.no/#linux+v2.6.35/include/linux/netlink.h#L159
 
 There really is no comparison between what can be recorded synchronously vs 
 async.
 

Ok, so the $64 question then: do FIPS or Common Criteria require that you
log more than what's in the netlink packet?

 
  It sounds from previous emails that, generally speaking, you're worried that
  you want the task struct that current points to in the recvmsg call to be
  guaranteed to be the same as the task struct that current points to in the
  sendmsg call (i.e. no children (re)using the same socket descriptor, etc). 
 
 Not really, we are _hoping_ that the pid in the netlink credentials is the 
 same pid that sent the packet in the first place. From the time we reference 
 the pid out of the skb until we go hunting and locate the pid in the list of 
 tasks, it could have died and been replaced with another task with different 
 credentials. We can't take that risk.
 
 
  Can this be handled by using the fact that netlink is actually synchronous
  under the covers?
 
 But it's not, or all the credentials would not be riding in the skb.
 
Not true.  It's not guaranteed to be, since you can do anything you want when you
receive a netlink msg in the kernel.  But that's not to say that you can't
enforce synchronization, and in so doing obtain more information.

 
  Can you enumerate here what FIPS and Common Criteria mandate be presented
  in the audit logs?
 
 Who did what to whom at what time and what was the outcome. In the case of 
 configuration changes we need the new and old values. However, we need extra 
 information to make the selective audit work right.
 
Somehow I doubt that FIPS mandates that audit messages include who did what to
whom and what the result was :).  Seriously, what do FIPS and Common Criteria
require in an audit message?  I assume this is a partial list:
* sending process id
* requested operation
* session id
* user id
* group id
* operation result

I see lots of other items in the above log message, but I'm not sure what else
is required.  Can you elaborate?

Neil

 -Steve

Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Miloslav Trmac
- Neil Horman nhor...@redhat.com wrote:
 On Tue, Aug 10, 2010 at 02:19:59PM -0400, Miloslav Trmac wrote:
  - Neil Horman nhor...@tuxdriver.com wrote:
  It _doesn't matter_ that I don't receive a response.  I have caused
 an operation in the kernel and the requested audit record is
 incorrect.  The audit subsystem needs to work even if - especially if
 - the userspace process is malicious.  The process might have wanted
 to simply cause a misleading audit record; or the operation (such as
 importing a key from supplied data) might not really need the response
 in order to achieve its effect.
 
 Why is it incorrect?  What would audit have recorded in the above case?  That 
 a
 process attempted to receive data on a descriptor that it didn't have open?
Again, the receive side _doesn't matter_.  At least by the formal semantics of 
netlink (not relying on the AFAIK undocumented synchronous implementation), 
there's a risk that the audit record about the _sender_ is incorrect (although 
the inability to audit the identity of the recipient of the data might be a 
problem as well, I'm not sure).

 And
 in the event that happens, Audit should log a close event on the
 fd in question
 between the operations.
audit only logs explicitly requested operations, so an administrator
   that asks for crypto events does not automatically get any close
   events.  Besides, the audit record should be correct in the first
   place, instead of giving the admin a puzzle to decipher.
   I still don't see what's incorrect here.  If two processes wind up reusing
   a process id, that's audit's problem to figure out, nobody else's.
  If an operation attributed to user A is recorded by the kernel as
 being performed by user B, that is not a user problem.
  
 You're right, it's also not a netlink problem, a cryptodev problem, or an ioctl
 problem.  It's an audit problem.
The user doesn't particularly care which subsystem maintainer is supposed to 
fix what :)  The reality seems to be that audit and netlink don't play well 
together.

 There's also the child process case, in which a child process might 
 read
 responses from requests sent by a parent, and vice versa, since fork 
 inherits
 file descriptors, but that's an issue inherent in any mechanism you 
 use,
 including the character interface, so I'm not sure what the problem is
 there.
Actually that's not a problem with ioctl() because the response is
   written back _before_ ioctl() returns, so it is always provided to the
   original calling thread.

With the current design the parent and child share the key/session
   namespace after fork(), but operation results are always reliably and
   automatically provided to the thread that invoked the operation,
   without any user-space collaboration.
  
   Is that _really_ that important?  That the kernel return data in the same 
   call
   that it's made?
  Not for audit; but yes, it is important.
  
 I don't follow you here.  If it's not important for audit to have synchronous
 calls, why is it important at all?
There are two separate reasons to avoid netlink:

- Audit wants the operation to run in the process context of the caller.  This 
is not directly related to the question of whether there are one or >=2 syscalls.

- Operation invocation method.  Here one syscall is important for the reasons I 
have (performance, simplicity/reliability).  Making audit work well with 
netlink does not override these reasons in my opinion.

  1) performance: this is not ip(8), where no one cares if the
 operation runs 10ms or 1000ms.  Crypto is at least 2 operations per
 SSL record (which can be as little as 1 byte of application data), and
 users will care about the speed.  Batching more operations into a
 single request is impractical because it would require very extensive
 changes in users of the crypto libraries, and the generic crypto
 interfaces such as PKCS#11 don't natively support batching so it might
 not even be possible to express this in some libraries.
  
 
 Ok, so is the assertion here that your kernel interface is restricted by 
 other user space
 libraries?
The kernel interface is not restricted by other user-space libraries as such, 
but the only practical way for most user-space applications to use the kernel 
interface (within the next 5 years) is to modify existing user space libraries 
to use the kernel interface as a backend (there's just not enough manpower to 
port the whole world, and quite a few upstream maintainers are understandably 
reluctant to port their code to a Linux-specific interface).  In that sense, 
only the features that are _currently_ provided by user-space libraries matter 
when discussing performance and other impact on users.

 i.e. the operation has to be synchronous because any more than 1
 syscall per operation has too much impact on performance?
We need to assume that this implementation will be benchmarked against pure 
user-space 

Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Neil Horman
On Tue, Aug 10, 2010 at 02:58:01PM -0400, Miloslav Trmac wrote:
 - Neil Horman nhor...@tuxdriver.com wrote:
  On Tue, Aug 10, 2010 at 11:36:16AM -0400, Miloslav Trmac wrote:
   I think it would be useful to separate thinking about the data
  format and about the transmission mechanism.  ioctl() can quite well
  be used to carry netlink-like packets - blobs of data with specified
  length and flexible internal structure - and on the other hand,
  netlink could be (and often is) used to carry fixed structs instead of
  variable-length packets.
  
  Yes, both mechanisms can be used to carry either fixed or variable length
  payloads.  The difference is ioctl has no built in mechanism to do variable
  length data safely.  To do variable length data you need to setup a bunch of
  pointers to extra data that you have to copy separately.
 No, I can do exactly what netlink does:
 
 struct operation {
 // fixed params;
 size_t size_of_attrs;
 char attrs[]; // struct nlattr-tagged data
 };
 
 at the cost of one additional copy_from_user() (which netlink users often pay 
 as well because using struct iovec is so convenient).  Or perhaps even 
 better, I can make sure the userspace application won't mess up the pointer 
 arithmetic and get rid of all of the macros:
 
 struct operation {
 // fixed params;
 size_t num_attrs;
 struct { struct nlattr header; union { ... } data; } attrs[];
 };
 
 at the same cost (and sacrificing string and variable-length parameters - but 
 as argued below, pointers are still an option).
 
  Even then, you're
  fixing the number of pointers that you have in your base structure.  To add 
  or
  remove any would break your communication protocol to user space.  This is
  exactly the point I made above.
 The existence of a base structure does not inherently limit the number of 
 extensions.
 
  And there is no need for versioning in the netlink packet, because the data
  types are all inlined, typed and attached to length values (at least when 
  done
  correctly, see then nl_attr structure and associated macros).  You don't 
  have
  that with your ioctl.  You could add it for certain, but at that point 
  you're
  re-inventing the wheel.
 Right, the nl_attr structure is quite nice, and I didn't know about it.  
 Still...
 
 
   As for the transmission mechanism, netlink seems to me to be one of
  the least desirable options: as described above, it does not provide
  any inherent advantages over ioctl, and it has significantly higher
  overhead (sendmsg()+recvmsg(), handling netlink ACKs, matching
  requests and replies in multi-threaded programs), and it makes
  auditing difficult.
  
  You're incorrect.  I've explained several of the advantages above and in my 
  previous
  email, you're just not seeing them.  I will grant you some additional 
  overhead
  in the use of of two system calls rather than one per operation in the 
  nominal
  case, but there is no reason that can't be mitigated by the bundling of 
  multiple
  operations into a single netlink packet. 
 None of the existing crypto libraries provide such interfaces, and 
 restructuring applications to take advantage of them would often be difficult 
 - just restructuring the NSS _internals_ to support this would be difficult.  
 Porting applications to a Linux-specific interface would make them less 
 portable; besides, we have tried porting applications to a different library 
 before (http://fedoraproject.org/wiki/FedoraCryptoConsolidation ), and based 
 on that experience, any such new interface would have minimal, if any, impact.
 
  Likewise, matching requests and responses in a multi-threaded program is 
  also an
  already solved issue in multiple ways.  The use of multiple sockets, in a 1 
  per
  thread fashion is the most obvious.
 That would give each thread a separate namespace of keys/sessions, rather 
 strange and a problem to fit into existing applications.
 
  There are also countless approaches in which a
  thread can reassemble responses to registered requests in such a way that 
  the
  user space portion of an application sees these calls as being synchronous. 
   Its
  really not that hard.
 The overhead is still unnecessary.
 
2) Use of pointers.  That's just a pain.  You have the compat ioctl 
stuff in
place to handle the 32/64 bit conversions, but it would be really nice 
if you
could avoid having to maintain that extra code path.
   Pointers are pretty much unavoidable, to allow zero-copy references
  to input/output data.  If that means maintaining 32-bit compat paths
  (as opposed to codifying say uint128_t for pointers in fixed-size
  structures), then so be it: the 32-bit paths will simply have to be
  maintained.  Once we have the 32-bit compat paths, using pointers for
  other unformatted, variable-sized data (e.g. IVs, multiple-precision
  integers) seems to be easier than using variable-size data structures
  or explicit pointer arithmetic to compute data 

Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Steve Grubb
On Tuesday, August 10, 2010 03:17:57 pm Neil Horman wrote:
  There really is no comparison between what can be recorded synchronously
  vs async.
 
 Ok, so the $64 question then: do FIPS or Common Criteria require
 that you log more than what's in the netlink packet?

The TSF shall be able to include or exclude auditable events from the set
of audited events based on the following attributes:
a) object identity,
b) user identity,
c) host identity,
d) event type,
e) success of auditable security events, and
f) failure of auditable security events.

We need the ability to selectively audit based on uid, for example. The netlink 
credentials don't have that. In many cases it's the same as the loginuid, but 
it's not in the skb.

There is also a table of additional requirements. Take the case of logging in. 
It says:   

Origin of the attempt (e.g., terminal identifier, source IP address). 

So, when someone changes their password and a hash function is called, we need 
the origin information.


 
   It sounds from previous emails that, generally speaking, you're worried
   that you want the task struct that current points to in the recvmsg
   call be guaranteed to be the same as the task struct that current
   points to in the sendmsg call (i.e. no children (re)using the same
   socket descriptor, etc).
  
  Not really, we are _hoping_ that the pid in the netlink credentials is
  the same pid that sent the packet in the first place. From the time we
  reference the pid out of the skb until we go hunting and locate the pid
  in the list of tasks, it could have died and been replaced with another
  task with different credentials. We can't take that risk.
  
   Can this be handled by using the fact that netlink is actually
   synchronous under the covers?
  
  But its not or all the credentials would not be riding in the skb.
 
 Not true.  It's not guaranteed to be, since you can do anything you want when you
 receive a netlink msg in the kernel.  But that's not to say that you can't
 enforce synchronization, and in so doing obtain more information.

Racy. 

 
   Can you enumerate here what FIPS and Common Criteria mandate be
   presented in the audit logs?
  
  Who did what to whom at what time and what was the outcome. In the case
  of configuration changes we need the new and old values. However, we
  need extra information to make the selective audit work right.
 
 Somehow I doubt that FIPS mandates that audit messages include who did
 what to whom and what the result was :).  Seriously, what do FIPS and
 common criteria require in an audit message?  I assume this is a partial
 list:

The TSF shall record within each audit record at least the following 
information:

a) Date and time of the event, type of event, subject identity (if 
applicable), and the outcome (success or failure) of the event; and
additional data as required. (Usually the object)

There are other implicit requirement such as selective audit that causes more 
information to be needed as well as the additional requirements clause.

-Steve


Re: [PATCH 0/4] RFC: New /dev/crypto user-space interface

2010-08-10 Thread Miloslav Trmac
- Neil Horman nhor...@redhat.com wrote:
 Ok, well, I suppose we're just not going to agree on this.  I don't know how
 else to argue my case, you seem to be bent on re-inventing the wheel instead 
 of
 using what we have.  Good luck. 
Well, I basically spent yesterday learning about netlink and looking at how it 
can or cannot be adapted.  I still see drawbacks that are not balanced by any 
benefits that are exclusive to netlink.

As a very unscientific benchmark, I modified a simple example program to add a 
simple non-blocking recvmsg(..., MSG_PEEK) call on a netlink socket after each 
encryption operation; this is only a lower bound on the overhead because no 
copying of data between userspace and the skbuffer takes place and zero copy is 
still available.  With cbc(aes), encrypting 256 bytes per operation, the 
throughput dropped from 156 MB/s to 124 MB/s (-20%); even with 32 KB per 
operation there is still a decrease from 131 to 127 MB/s (-2.7%).

If I have to do a little more programming work to get a permanent 20% 
performance boost, why not?


How about this: The existing ioctl (1-syscall interface) remains, using a very 
minimal fixed header (probably just a handle and perhaps algorithm ID) and 
using the netlink struct nlattr for all other attributes (both input and 
output, although I don't expect many output attributes).

- This gives us exactly the same flexibility benefits as using netlink directly.
- It uses the 1-syscall, higher performance, interface.
- The crypto operations are run in process context, making it possible to 
implement zero-copy operations and reliable auditing.
- The existing netlink parsing routines (both in the kernel and in libnl) can 
be used; formatting routines will have to be rewritten, but that's the easier 
part.

This kind of partial netlink reuse already has a precedent in the kernel, see 
zlib_compress_setup().

Would everyone be happy with such an approach?
Mirek