Cathy,

On Tue, 2006-08-15 at 13:30 +0800, Cathy Zhou wrote:
> Please find the draft of 20 questions document for UV (Nemo Unification and 
> vanity naming) in the attachment.

Thank you for writing this, it's excellent.  Comments in-line:


> 1. What specifically is the proposal that we are reviewing?
> 
>   This project aims to provide a consistent model to administer
>   all network interfaces (Nemo unification). In addition, it will
>   allow all network interfaces to be given administratively-chosen
>   names (vanity naming).

I'd mention up front in this section that this is part of the Clearview
project and is thus part of the umbrella case.

> - Is this a new product, or a change to a pre-existing one? If it is
>   a change, would you consider it a "major", "minor", or "micro" change?
> 
>   This project will make some changes to the existing administrative
>   command: dladm(1M). Further, while it will keep the current network
>   devices nodes under /dev, it will introduce a /dev/net namespace to
>   hold all vanity-named nodes for network interfaces.
> 
>   This project is targeted for a minor release.

When project teams state that they're targeting a minor release, the
committee always asks whether that's because of an inherent incompatible
change delivered by the project, or simply a side-effect of release
planning.  As a result, I'd try to be clear about which incompatible
changes make this project unsuitable for a patch.  For example,
"Because x and y will be incompatible changes unsuitable for a patch or
micro release of Solaris, this project is targeting a minor release."

>   * Nemo unification
> 
>     While GLDv3 provides a new network driver framework with enhanced
>     features such as link aggregation, it also introduces a confusing
>     administrative model, since only GLDv3-based devices can be managed
>     by dladm(1M). Further, legacy devices can not make use of the
>     performance enhancement or fancy features GLDv3 provides, such as 
>     the DL_CAPAB_POLL and DL_CAPAB_SOFT_RING capabilities.

Seeing as how we are fairly certain at this point that softmac over
legacy devices will not result in a performance improvement over vanilla
legacy devices, I'm not sure I'd include that last sentence.  It makes
it sound as if softmac will result in a net improvement in performance.
That's certainly not the goal.

I'd mention VLANs as a GLDv3 feature here as well.

>     This project will "unify" all the legacy devices into GLDv3, hence
>     provide a consistent administrative model and feature sets to all
>     network devices. Other than the existing feature sets GLDv3 current
>     has, there are other ongoing projects aim to make further enhancement
>     to GLDv3 (for example, the Crossbow project which is going to support
>     virtualization and resource management of network devices). With Nemo
>     unification, those enhancements can work for non-GLDv3 network devices
>     potentially.


"can potentially work for non-GLDv3 ..."


>   * Vanity naming
> 
>     Today, the network names are tied to the underlying network hardware
>     (e.g., bge0, ce0). Because configuring the system requires network
>     interface names to be referenced in various administrative tasks and
>     a wide range of configuration files, being able to give a meaningful
>     vanity name to a network interface (for example, based on its
>     functionality or its topology) will help to make network configuration

The word "topology" here might be confusing without context.  Perhaps,
"its relationship with the network topology" would be better?  Other
opinions?


> - What are the expected benefits for Sun?
> 
>   The customer will have the more consistent and flexible model to

s/the/a/

>   Specifically, customer will have the

"the customer"

>   ability to configure aggregations and VLANs on all Ethernet devices,
>   and configure vanity names to all network interfaces.

"and use vanity names with all network interfaces"


> - By what criteria will you judge its success?
> 
>   The project will be complete once the following requirements have been met:
> 
>   * Must provide a consistent model for network interface administration:
> 
>     - All network interfaces are administrated by the same set of commands.

"administered"

>     - All network interfaces support all features that specific hardware
>       can support.

I'm not sure everyone will follow this bullet item.  How about, "All
network interfaces of a given type (e.g. Ethernet) will support a
uniform set of administrative features."


>    * Must be able to configure a vanity name for any network interface:
> 
>      - The administrator are able to name network interfaces, and also are
>        able to administrate the interfaces based on their vanity names.

"Administrators are able" (no "The").
s/administrate/administer/


> 2. Describe how your project changes the user experience, upon
>    installation and during normal operation.
> 
>    There will be no changes to installation. In the future, some follow
>    up work can be done to make configuring network interface names part
>    of the installation (optionally), therefore allowing network vanity
>    naming to be used by default.

Hmm, this is something that we may want to discuss again prior to going
to PSARC.  We will undoubtedly be asked, "Why won't Solaris choose
useful vanity names by default from the start?"  It's a valid question,
and it's something that I'm sure some administrators would welcome.

>    * (re)name network interfaces. With administrative-chosen names,

"administratively-chosen"

>      network configurations will not be tied to the driver names. This
>      will easy the network administration and sometimes make former
>      impossible operation become possible (for example, replace an
>      interface with one of another driver using dynamic configuration).
>      See details in the design document.

This paragraph is a bit difficult to parse.  May I suggest:

"This will ease network administration and make some formerly impossible
administrative operations possible (for example, ..."
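
It might also be worth including a concrete (purely illustrative)
example at this point -- the final subcommand name and syntax are
still TBD:

    # Hypothetical syntax: give the physical link bge0 an
    # administratively-chosen name.  Anything keyed to "web0"
    # (IP configuration, etc.) no longer cares which driver is
    # underneath.
    dladm rename-link bge0 web0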

> - What does the user perceive when the system is upgraded from a
>   previous release?
> 
>   None.

This is an understatement. :-)  If the system had interfaces using
legacy drivers, the administrator will have some new GLDv3-related
administrative tools at his disposal to manage his network interfaces.
The perception will be that all network interfaces have the same set of
features.

> - What is its current status? Has a design review been done?  Are there 
>   multiple delivery phases?
> 
>   It is currently in developing phase. The Nemo unification support is
>   mostly done, and we just started the vanity naming support.

"development phase"

>    * libdlpi (PSARC/2006/436)
> 
>      The libdlpi library is a public library which will be used by
>      all DLPI applications. Therefore, the details of accessing DLPI nodes
>      under /dev/net will be hide from applications.

"will be hidden"

>      Same as the project, the libdlpi project is also part of Clearview and
>      both are under the same umbrella case PSARC/2005/132 (Clearview, Network
>      Interface Coherence).

I would word this, "This project and the libdlpi project are both part
of Clearview, and both ..."

>     * VLAN Observability Enhancement (PSARC/2006/358)
> 
>       The project depends on the correct behavior proposed by PSARC/2006/358
>       to receive all tagged packets when binding to ETHERTYPE_VLAN sap.
>       We are changing GLDv2 and GLDv3 to comply to PSARC/2006/358, and the
>       SSG person are aware of the issue and agreed to change the Cassini
>       driver.

"and SSG is aware of the issue and has agreed to change ..."

>       Note that obsolete of Sun Trunking and the configuration conversion
>       tool does not fall into the scope of this project.

"Note that neither the obsolescence of Sun Trunking nor the
configuration conversion tool fall into the scope of this project."


>     * GLDv3 IPoIB (IP over InfiniBand) driver
> 
>       Currently, the IPoIB driver (ibd) is written in GLDv2. Because of the
>       specialties of ibd (in particular, packets are not passed upstream

Instead of "specialties", I'd write "peculiarities".

Are we going to state that the ibd GLDv3 port will be integrated prior
to UV?  Is that the current plan?

>     * Crossbow
> 
>       Crossbow is a network virtualization project which allows effective
>       sharing of physical networking resources among multiple user. It
>       allows administrator to create multiple data devices (VNICs) to map
>       to a single physical MAC instance.
> 
>       Although Crossbow can be implemented independently from this project,
>       with Nemo unification, Crossbow will be able to support VNICs on
>       non-GLDv3 devices. Further, the design of the project will allow the
>       vanity naming support for VNICs as well.

That last sentence is awkward.  Which project?

> - How does this project's administrative mechanisms fit into Sun's system
>   administration strategies?  E.g., how does it fit under the Solaris
>   Management Console (SMC) and Web-Based Enterprise Management (WBEM), how
>   does it make use of roles, authorizations and rights profiles?
>   Additionally, how does it provide for administrative audit in support of
>   the Solaris BSM configuration?
> 
>   N/A

We may need to look deeper into this.  Specifically, is renaming a
network interface something that could be delegated separately from
other networking administrative tasks?  If so, do we need to create
special authorizations or privileges for the rename tasks?

> - How does the project handle dynamic reconfiguration (DR) events?
> 
>   Before this project, although network interfaces can be removed and
>   reinstalled using DR, an interfaces being reinstalled must be of the
>   same driver because its names is tied to its driver name. After this
>   project, there will not be such restriction: an interfaces of a
>   different driver can be installed and inherit all the configuration
>   of the interface being formerly removed using DR.

This paragraph needs some work.  Let me suggest:

"Before this project, during a DR operation, both the failed interface
and the repaired interface were required to have the same network
driver.  This is because the network interface name was tied to the
driver name.  After this project, there will be no such restriction: an
interface can be replaced with another using a different driver, and a
renaming operation will make it possible for the new interface to
inherit all of the existing configuration related to the failed
interface."

This could probably be polished somewhat as well; others should feel
free to suggest modifications.
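
A small illustrative example might help here too (the subcommand name
and syntax are hypothetical, not the final CLI):

    # The failed ce card held the vanity name "net0".  After DR swaps
    # in a bge card, moving the name over lets the new interface
    # inherit the configuration keyed to "net0":
    dladm rename-link bge0 net0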

> - Can its files be corrupted by failures?  Does it clean up any
>   locks /files after crashes?
> 
>   The file (/etc/datalink.conf) which keeps all dladm configuration
>   (including vanity naming configuration) will be automatically
>   updated and must not be directly edited. This file should not be
>   corrupted by failures but in case it does get corrupted, the system
>   and all network devices on the system should still work, under
>   the expectation that some dladm configuration might get lost.

I would not say, "This file should not be corrupted by failures".  I
don't know what that means.  Instead, I would simply say, "If this file
is corrupted, then the system ..."

> - The Solaris BSM configuration carries a Common Criteria (CC) Controlled
>   Access Protection Profile (CAPP) -- Orange Book C2 -- and a Role Based
>   Access Control Protection Profile (RBAC) -- rating, does the addition
>   of your project effect this rating?  E.g., does it introduce interfaces
>   that make access or privilege decisions that are not audited, does it
>   introduce removable media support that is not managed by the allocate
>   subsystem, does it provide administration mechanisms that are not audited?
> 
>   No.

Are dladm operations audited?

> - Include a thorough description of the security assumptions,
>   capabilities and any potential risks (possible attack points) being
>   introduced by your project.  A separate Security Questionnaire
>       http://sac.sfbay/cgi-bin/bp.cgi?NAME=Security.bp
>   is provided for more detailed guidance on the necessary information.
>   Cases are encouraged to fill out and include the Security
>   questionnaire (leveraging references to existing documentation) in the
>   case materials.
> 
>    Projects must highlight information for the following important areas:
>    - What features are newly visible on the network and how are they
>      protected from exploitation (e.g. unauthorized access, eavesdropping)

None.

> 
>    - If the project makes decisions about which users, hosts, services, ...
>      are allowed to access resources it manages, how is the requestor's
>      identity determined and what data is used to determine if the access
>      granted.  Also how this data is protected from tampering.

N/A

> 
>    - What privileges beyond what a common user (e.g. 'noaccess') can 
>      perform does this project require and why those are necessary.

PRIV_SYS_NET_CONFIG?

>    - What parts of the project are active upon default install and how it 
>      can be turned off.
> 
>    TBD.

I suppose there could be a default vanity naming scheme that could be
turned off...

> - Command line or calling syntax:  
>   What options are supported?  (please include man pages if available)
> 
>     This project will introduce several new subcommands and options to
>     dladm(1M). See details in section 4.5.1 in the one pager.

It would be extremely helpful to provide the man page changes as part of
the materials in this case, and they could be referred to directly here.
The one pager really shouldn't be the primary architectural
specification.  The design document would at least be a much better
source of information than the one pager.
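
Even a short synopsis of the proposed subcommands in the case materials
would help orient reviewers, e.g. (names and options hypothetical,
pending the design document):

    dladm show-link [link]             # list links and their vanity names
    dladm rename-link link new-name    # assign an administratively-chosen name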

> - What shared libraries does it use?  (Hint: if you have code use "ldd"
>   and "dump -Lv")? 
> 
>   Other than libdladm, liblaadm, libkstat and libdlpi, no new libraries
>   dladm will depend on. libdladm will be updated to support vanity naming.

It would be less confusing to say, "dladm will not depend on any
additional libraries.  libdladm will be ..."

> - Identify and justify the requirement for any static libraries.
> 
>   No.

s/No/None

>                       Interfaces Exported
> Interface             Classification          Comments
> ---------------
> mac_open()
> mac_close()           Consolidation Private

Why are these included as exported interfaces?

> MAC capabilities
>  - MAC_CAPAB_NOZCOPY
>  - MAC_CAPAB_TX_LOOPBACK
>  - MAC_CAPAB_NOVLAN
>  - MAC_CAPAB_PUTNEXT_TX
>  - MAC_CAPAB_MDT
>  - MAC_CAPAB_IPSEC    Consolidation Private

To orient the reviewer, it would help if you provided some text in the
comments section to say where these come from (e.g., <sys/mac.h>).

> -----------------
> net_postattach
> net_predetach         Consolidation Private
> -------------------
> softmac_create
> softmac_destroy               Consolidation Private

Same comment for these, most reviewers will have no idea what these
strings mean without some minimal context (a reference to a
specification would be appropriate).

On a related note, why are these Consolidation Private rather than
Project Private?  Are you expecting other parts of ON to call these
directly?

> ---------
> libdladm              Consolidation Private

The library itself is not something this project exports.  Are we
exporting new functions?

> ---------------
> dacf_get_dev()                Consolidation Private
> ---------------
> DL_IOC_VLAN_CAPAB     Consolidation Private

Pointers to specs will be needed.  Same comment for other exported
interfaces (i.e., design doc section numbers).

> - Is there a public namespace? (Can third parties create names in your 
>   namespace?)  How is this administered?
> 
>   Yes, the /dev/net namespace. It will be used to keep all of the
>   available interfaces on the system. Each interface will have a DLPI
>   style-1 /dev/net node with the same name as its interface name.
> 
>   Interface names will be administered using dladm(1M) utility.

Here, you should provide a pointer to the specification that describes
what ends up in /dev/net.  The semantics behind what ends up in which
subdirectories under /dev were a point of contention during the devname
review.
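
A short example of the resulting namespace would also help, e.g.
(node names purely illustrative):

    $ ls /dev/net
    net0    net1    web0    storage0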

> 15. Is the interface extensible?  How will the interface evolve?
> 
> - How is versioning handled?  

N/A, you're not providing a versioned API.

> 
> - What was the commitment level of the previous version? 

N/A

> - Can this version co-exist with existing standards and with earlier
>   and later versions or with alternative implementations (perhaps by
>   other vendors)?

N/A

> - What are the clients over which a change should be managed?

N/A

> - How is transition to a new version to be accomplished? What are the 
>   consequences to ISV's and their customers?

N/A

> 16. How do the interfaces adapt to a changing world?
> 
>   TBD.

This is a great place for you to brag about how great UV will be for
future development of MAC and link-layer features.  For example, in the
future, nifty new features will be developed (such as Crossbow VNICs),
and UV will have made it possible for these features to apply to _all_
network interfaces, not just those whose drivers were written directly
to GLDv3.

> - How will the project contribute (positively or negatively) to
>   "system load" and "perceived performance"?
> 
>   The project aims to no degrade of network performance on GLDv3 devices
>   and on the fast data-path of non-GLDv3 devices.

"aims to not degrade network performance"

> 19. Please identify any issues that you would like the ARC to address.
> 
> - Interface classification, deviations from standards, architectural
>   conflicts, release constraints...
> - Are there issues or related projects that the ARC should advise the 
>   appropriate steering committees?
> 
>   None.

If we can't come to an agreement about the default behavior of vanity
naming on a fresh install (e.g., automatic naming using net0, net1),
then perhaps the ARC can provide some guidance.

Thanks,
-Seb


