Default system CA (X.509) Certificates [PSARC/2009/430 FastTrack timeout 08/19/2009]

2009-08-13 Thread johan...@sun.com
[Originally sent this to Darren, but forgot to CC PSARC-ext]

Hi Darren,

I got forwarded a pointer to this case that you filed.  Thanks for
taking the time to do this.

 http://sac.eng/Archives/CaseLog/arc/PSARC/2009/430/20090811_darren.moffat

I would recommend using the certificate directory approach instead of
creating a single file with all certificates.

The directory allows us to use a single PEM file per certificate instead
of having one huge PEM blob.  The single blob consumes more memory,
since the whole blob gets loaded into memory.  If the directory is used,
individual certificates are loaded into memory instead.

Delivering a single blob also has implications for package delivery.  If
we use a directory, other packages can deliver certs to a common
location, if needed.  The blob approach blocks multi-party certificate
delivery, and requires us to update the entire blob when one certificate
changes.  It would be more elegant to add/remove the affected files from
a certificate directory.

Since I had to solve this problem for pkg(5), I've already written code
that can extract the certs from mozilla's nss library, or their CVS
server, and then build a directory of certs with corresponding
hash-value named symlinks.  Feel free to use this code instead of
writing more from scratch.
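
For context, the hash-value symlink names are just what OpenSSL's
directory lookup expects: each trusted cert is located through a link
named after its subject-name hash.  Here's a minimal sketch of computing
that name with the OpenSSL C API; the file path is illustrative, and
this is a sketch rather than the pkg(5) code itself.

/*
 * Minimal sketch: given a PEM certificate on disk, print the
 * "<subject-hash>.0" symlink name that OpenSSL's hashed-directory
 * lookup expects.  The path below is illustrative only.
 */
#include <stdio.h>
#include <openssl/pem.h>
#include <openssl/x509.h>

int
main(void)
{
    FILE *fp = fopen("/etc/certs/CA/example-root.pem", "r");
    X509 *cert;

    if (fp == NULL)
        return (1);
    cert = PEM_read_X509(fp, NULL, NULL, NULL);
    (void) fclose(fp);
    if (cert == NULL)
        return (1);

    /* OpenSSL looks up trusted certs in a CApath by this hash. */
    (void) printf("%08lx.0\n", X509_subject_name_hash(cert));
    X509_free(cert);
    return (0);
}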

Thanks,

-j





Default system CA (X.509) Certificates [PSARC/2009/430 FastTrack timeout 08/19/2009]

2009-08-14 Thread johan...@sun.com
On Fri, Aug 14, 2009 at 09:24:00AM +0100, Darren J Moffat wrote:
 johansen at sun.com wrote:
 http://sac.eng/Archives/CaseLog/arc/PSARC/2009/430/20090811_darren.moffat

 I would recommend using the certificate directory approach instead of
 creating a single file with all certificates.

 This case doesn't preclude that.

It may.  There are still bugs in OpenSSL's certificate lookup mechanisms
that make it difficult to use both a CertificateFile and a
CertificateDirectory together:

Although the issuer checks are a considerable improvement over
the old technique they still suffer from limitations in the
underlying X509_LOOKUP API. One consequence of this is that
trusted certificates with matching subject name must either
appear in a file (as specified by the -CAfile option) or a
directory (as specified by -CApath).  If they occur in both then
only the certificates in the file will be recognised.

Previous versions of OpenSSL assume certificates with matching
subject name are identical and mishandled them. 
(http://www.openssl.org/docs/apps/verify.html#BUGS)

As another example, libcurl allows callers to use either the CAfile or
the CA directory approach, but not both.  (The documentation makes no
mention of this limitation.)

 The directory allows us to use a single PEM file per certificate instead
 of having one huge PEM blob.  The single blob consumes more memory,
 since the whole blob gets loaded into memory.  If the directory is used,
 individual certificates are loaded into memory instead.

 It is only 198k

The individual certificates are between 1 and 2k.  If an application
only needs one or two of those certificates, you're wasting 98-99% of
that memory.

 Delivering a single blob also has implications for package delivery.  If
 we use a directory, other packages can deliver certs to a common
 location, if needed.  The blob approach blocks multi-party certificate
 delivery, and requires us to update the entire blob when one certificate
 changes.  It would be more elegant to add/remove the affected files from
 a certificate directory.

 This case doesn't preclude other packages adding additional certs to  
 /etc/certs/  in fact other packages already do.

I don't believe that this response addresses my previous comment.
OpenSolaris should be moving towards configuration systems that are
self-assembling.  Delivering well-known certificates into a directory
facilitates this kind of self-assembly.  The directory approach can be
used by OpenSSL without the need for any assembly service, as long as
symlinks are delivered with the certificates.
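
To illustrate: with the hash symlinks in place, an application can point
stock OpenSSL at the directory directly.  A minimal sketch follows; the
/etc/certs/CA path is just an example, not something this case specifies.

/*
 * Sketch only: no assembled CAfile blob is needed; the trusted roots
 * come straight from the hashed certificate directory.
 */
#include <openssl/ssl.h>

static SSL_CTX *
init_client_ctx(void)
{
    SSL_CTX *ctx;

    (void) SSL_library_init();
    ctx = SSL_CTX_new(SSLv23_client_method());
    if (ctx == NULL)
        return (NULL);

    /* NULL CAfile; verification uses the hashed CApath directory only. */
    if (SSL_CTX_load_verify_locations(ctx, NULL, "/etc/certs/CA") != 1) {
        SSL_CTX_free(ctx);
        return (NULL);
    }
    return (ctx);
}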

You also haven't addressed the issue of handling individual
certificates.  The blob approach requires us to deliver a large file
every time any of the constituent pieces change.  It seems more
reasonable to deliver just the pieces that have changed.

 This case is about delivering the well known browser SSL certs ...

I understand that.  I'm asking that we do so in such a way that we have
maximum flexibility and minimal overhead.

 I think it is entirely appropriate to do so in a single file.  I
 believe other systems do it that way.

I disagree, and I've explained why I think there is a better approach.
I don't believe it matters what other systems do, at least in this case.

 Since I had to solve this problem for pkg(5), I've already written code
 that can extract the certs from mozilla's nss library, or their CVS
 server, and then build a directory of certs with corresponding
 hash-value named symlinks.  Feel free to use this code instead of
 writing more from scratch.

 One reason for using a single file is to avoid having to do the  
 hash-value symlinks.

This isn't a difficult problem, and I have code that already does this.
You're welcome to use / borrow / whatever, if you like.

 This case is already closed and ready to be delivered, unless you think  
 it is fundamentally broken I really don't want to re-open it.

I wouldn't have sent these comments unless I thought that they were
important.  How many posts have I made to PSARC-ext in the last few
years?  Using a directory-based approach may be a little more effort in
the short term, but it should save us a lot of headaches later on.

Thanks,

-j




Copy Reduction Interfaces [PSARC/2009/478 FastTrack timeout 09/16/2009]

2009-09-11 Thread johan...@sun.com
On Wed, Sep 09, 2009 at 04:02:15PM -0500, Rich.Brown at sun.com wrote:
  == Introduction/Background ==
 
  Zero-copy (copy avoidance) is essentially buffer sharing
  among multiple modules that pass data between the modules. 
  This proposal avoids the data copy in the READ/WRITE path 
  of filesystems, by providing a mechanism to share data buffers
  between the modules. It is intended to be used by network file
  sharing services like NFS, CIFS or others.
 
  Although the buffer sharing can be achieved through a few different
  solutions, any such solution must work with File Event Monitors
  (FEM monitors)[1] installed on the files. The solution must
  allow the underlying filesystem to maintain any existing file 
  range locking in the filesystem.
  
  The proposed solution provides extensions to the existing VOP
  interface to request and return buffers from a filesystem. The 
  buffers are then used with existing VOP_READ/VOP_WRITE calls with
  minimal changes.
 
 
  == Proposed Changes ==
...

  == Using the New VOP Interfaces for Zero-copy ==
 
  VOP_REQZCBUF()/VOP_RETZCBUF() are expected to be used in conjunction with
  VOP_READ() or VOP_WRITE() to implement zero-copy read or write. 
 
  a. Read
 
 In a normal read, the consumer allocates the data buffer and passes it to
 VOP_READ().  The provider initiates the I/O, and copies the data from its
 own cache buffer to the consumer supplied buffer.
 
 To avoid the copy (initiating a zero-copy read), the consumer
 first calls VOP_REQZCBUF() to inform the provider to prepare to
 loan out its cache buffer.  It then calls VOP_READ().  After the
 call returns, the consumer has direct access to the cache buffer
 loaned out by the provider.  After processing the data, the
 consumer calls VOP_RETZCBUF() to return the loaned cache buffer to
 the provider.
...

  b. Write
 
 In a normal write, the consumer allocates the data buffer, loads the data,
 and passes the buffer to VOP_WRITE().  The provider copies the data from
 the consumer supplied buffer to its own cache buffer, and starts the I/O.
 
 To initiate a zero-copy write, the consumer first calls VOP_REQZCBUF() to
 grab a cache buffer from the provider.  It loads the data directly to
 the loaned cache buffer, and calls VOP_WRITE().  After the call returns,
 the consumer calls VOP_RETZCBUF() to return the loaned cache buffer to
 the provider.
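
To make sure I'm reading the flow correctly, here is roughly how I'd
expect an in-kernel consumer (an NFS service thread, say) to drive the
zero-copy read path; the write path is symmetric.  The quoted text
elides the exact VOP argument lists, so the signatures and xuio handling
below are assumptions for illustration, not the case's definitions.

/*
 * Illustrative sketch only -- argument lists and xuio_t handling are
 * assumed, not taken from the case materials.
 */
#include <sys/types.h>
#include <sys/uio.h>
#include <sys/cred.h>
#include <sys/vnode.h>

static int
zc_read(vnode_t *vp, xuio_t *xuio, int ioflag, cred_t *cr,
    caller_context_t *ct)
{
    int error;

    /* Ask the provider to prepare loaned cache buffers for a read. */
    error = VOP_REQZCBUF(vp, UIO_READ, xuio, cr, ct);
    if (error != 0)
        return (error); /* caller falls back to a normal copying read */

    /* After this returns, we have direct access to the provider's cache. */
    error = VOP_READ(vp, &xuio->xu_uio, ioflag, cr, ct);

    /* ... consumer processes the data directly in the loaned buffers ... */

    /* Hand the loaned cache buffers back to the provider. */
    (void) VOP_RETZCBUF(vp, xuio, cr, ct);

    return (error);
}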

Just for clarification: this interface only affects pages mapped in the
kernel, correct?  I'm trying to understand if this is just for reducing
the number of in-kernel copies, or if this is a userland-to-kernel
zero-copy interface.


Thanks,

-j


increase number of realtime signals [PSARC/2010/062 Self Review]

2010-02-22 Thread johan...@sun.com
On Mon, Feb 22, 2010 at 10:17:34PM +0100, I. Szczesniak wrote:
 On Mon, Feb 22, 2010 at 8:37 PM, Garrett D'Amore gdamore at sun.com wrote:
  On 02/22/10 11:28 AM, Roger A. Faulkner wrote:
 
  I am sponsoring this automatic case for myself.
 
 
  +1 on the case, and on the justification for not expanding to 64.
 
 -1 for this case. I did some research on this subject and I disagree
 that increasing the number of signals to 64 breaks binary
 compatibility.

So far, all you have presented is an opinion.  Please explain, using
actual evidence and technical reasoning, how you came to your
conclusion.

 sigset_t is a limited resource but there are still 24 signals left
 *and* if there is ever the need to add more signals the number of
 realtime signals can be reduced again.

Increasing and then decreasing the number of available signals is going
to be even more disruptive.  How does your proposal ensure binary
compatibility?

 IMO the case can pass with 64 realtime signals, otherwise I request a
 derail and full case which explains why again a resource is increased
 in a half hearted manner.

That's unreasonable.  The number of signals in the sigset_t is an
implementation detail.  This has already received a superfluous amount
of attention, despite its trivial nature.

-j


More ksh93 builtins [PSARC/2010/095 FastTrack timeout 03/25/2010]

2010-03-19 Thread johan...@sun.com
On Fri, Mar 19, 2010 at 08:13:48AM -0700, Garrett D'Amore wrote:
 The fact that we have to put /usr/gnu at the head of $PATH of new
 users is a bit of a travesty, and I'm of the opinion that we should
 reexamine *that* particular decision...

This is merely one opinion.  There are compelling business and
architectural cases for having the default userland be approachable by
the majority of users of other popular Unix-like operating systems.
/usr/gnu isn't in my default path either, but it makes a lot of sense
to present a userland that's familiar to users of Linux and similar
environments.

Anyone is free to create a distro with a different default shell, or
default path.  Anyone is free to change their path as well as their
shell.  Your fixation on /usr/gnu's presence in the default path isn't
productive.  Why make it harder to get users from Linux and elsewhere to
adopt Solaris?

 ... in which case much of the motivation behind *this* case comes into
 question.  (If /usr/gnu isn't the default for most users, then there
 is little motivation to provide builtin wrappers for them.)

I disagree.  Modulo the issues about the profile shell, I see no reason
for the ARC to delve into the minutiae of shell builtins.  In general,
that's an implementation or configuration detail of the shell.

I would recommend against derailing this case in favor of one about a
grand shell [re-]architecture.  We should be making it easier to add
different shells to Solaris.  Using this case as an opportunity to rail
about GNU is just divisive.

-j


More ksh93 builtins [PSARC/2010/095 FastTrack timeout 03/25/2010]

2010-03-19 Thread johan...@sun.com
On Fri, Mar 19, 2010 at 04:08:09PM -0700, Garrett D'Amore wrote:
 I'd rather see us modernize our own tools.  I resent abdication of
 our own engineering, and the necessity of abandoning all good
 innovations (like shell builtins) because some people feel it's
 critical that the only way to achieve these goals is to provide
 these 3rd party tools.  It's more offensive to me specifically
 because there is no good reason why we can't use tools from the
 ksh93 community (who seems to be a lot more willing to work with us
 on key engineering issues than the GNU folks who are mostly fixated
 on Linux) to achieve this.

Instead of re-inventing the wheel at every opportunity, it makes more
sense to take the open source projects that have wide acceptance and
incorporate them into our product.  I think that both ksh93 and GNU fall
into this category.  It's much better for us to focus our engineering
efforts on areas where we can actually differentiate our product from
our competitors.

I don't have a problem with ksh93 or the builtins, nor am I advocating
an entirely GNU userland.  What I am suggesting, however, is that the
folks who decided to put /usr/gnu in the default path did talk to our
customers, and also took note of the fact that Linux is widely adopted
across the industry.

 I'm also of the opinion that it is a mistake to sacrifice
 familiarity for our paying Solaris 10 customers in favor of
 familiarity for people coming from Linux.

This is a false dilemma.  It should be entirely possible for customers
to configure whatever default path they desire and deploy that in their
enterprise via AI.  

 Which group do you think contributes more towards the $$ that pay our
 salaries?

I would love to argue this point with you, but it's not appropriate to
discuss it on a public mailing list.

-j