Re: [secdir] Secdir Review of draft-stjohns-sipso-05

2008-10-22 Thread Bill Sommerfeld
On Mon, 2008-10-20 at 20:44 -0500, Nicolas Williams wrote:
 But then:
 
 |In order to
 |   maintain data Sensitivity Labeling for such applications, in
 |   order to be able to implement routing and Mandatory Access
 |   Control decisions in routers and guards on a per-IP-packet basis,
 |   and for other reasons, there is a need to have a mechanism for
 |   explicitly labeling the sensitivity information for each IPv6
 |   packet.
 
 
 So if I understand correctly then this document would have an
 implementation of, say, NFSv4[0] over TCP[1] send TCP packets for the
 same TCP connection with different labels, *and* ensure that each packet
 contains parts of no more than one (exactly one) NFSv4 RPC.

You do not understand correctly.

See section 6.2.1 of that document, which reads in part:

   NOTE WELL:  A connection-oriented transport-layer protocol session
   (e.g. TCP session, SCTP session) MUST have the same DOI and
   same Sensitivity Label for the life of that connection.  The
   DOI is selected at connection initiation and MUST NOT change
   during the session.

- Bill

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: Publication track for IBE documents (Was Second Last Call...)

2008-10-22 Thread Tim Polk


Stephen,

I will concede that most of the excitement about IBE and other Weil pairing-based cryptography has been in the research community.  However, the technology has matured and products are slowly emerging.  (I am also loath to write off any technology that attempts to address our enrollment and credentialing problems, even though I see it as a simple re-ordering of the same process.  That's a philosophical rathole, though.)  Publication as Informational RFCs is worthwhile since these documents provide a basis for interoperability *if* adoption of IBE technology picks up steam.

We already have multiple non-interoperable implementations of IBE-based email (Voltage and Trend Micro).  These RFCs *won't* address the fundamental interoperability problem between Trend and Voltage, since Trend is using the Sakai-Kasahara algorithm and Voltage uses Boneh-Boyen or Boneh-Franklin.  However, if additional companies wish to join the IBE-based email market, these RFCs are a proactive step towards interoperability of future implementations.

Thanks,

Tim


Stephen Farrell wrote:

So while I don't strongly object to these as informational RFCs, I do wonder why, if only one implementation is ever likely, we need any RFC at all.  It's not like these docs describe something one couldn't easily figure out were there a need, given that the (elegant but not especially useful) crypto has been around for a while without finding any serious applications.

Stephen.

Tim Polk wrote:
 Okay, I fat fingered this one.  The S/MIME WG actually forwarded these documents with a recommendation that they be published as Informational.  I intended to respect that consensus, but for one reason or another, they ended up in the Tracker marked for Standards track.

 It is clear that the WG got this one right, and I have changed the intended status on both documents to Informational.

 Thanks,

 Tim Polk





Re: Publication track for IBE documents (Was Second Last Call...)

2008-10-22 Thread Harald Alvestrand

Stephen Farrell wrote:

So while I don't strongly object to these as informational RFCs, I do wonder why, if only one implementation is ever likely, we need any RFC at all.  It's not like these docs describe something one couldn't easily figure out were there a need, given that the (elegant but not especially useful) crypto has been around for a while without finding any serious applications.
  
My personal opinion is that Informational documents should have a low 
bar for publication.


Thus, in the absence of compelling other information (such as a claim 
that the technology is incompetently described, or can't be implemented 
from the specs), I'd favour publication.


(That said, the RFC Editor's work on these will cost the IETF a known 
amount of dollars. The bar shouldn't be TOO low.)


   Harald


Re: [secdir] Secdir Review of draft-stjohns-sipso-05

2008-10-22 Thread Nicolas Williams
On Tue, Oct 21, 2008 at 04:16:14PM -0400, Russ Housley wrote:
 Nico:
 
 So if I understand correctly then this document would have an
 implementation of, say, NFSv4[0] over TCP[1] send TCP packets for the
 same TCP connection with different labels, *and* ensure that each packet
 contains parts of no more than one (exactly one) NFSv4 RPC.
 
 I am aware of several multi-level secure implementations; none of them make any attempt to do anything like this.

Bill Sommerfeld points out that I have read much too much into the
paragraph that I quoted.  I should only have read that a solution is
desired, and that the solution could well be not to multiplex traffic
for multiple users on a single session.

Nico
-- 


Re: [p2pi] WG Review: Application-Layer Traffic Optimization (alto)

2008-10-22 Thread Nicholas Weaver

Hey, stupid thought...

Could you do proximity based on who your DNS resolver is?  Do a few name lookups: one to register YOU as a client of YOUR DNS resolver with the remote coordinator, and one to learn which other peers are using the same resolver?


An ugly, UGLY hack, but it might be interesting to think about.

Has anyone done this already?
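A minimal sketch of what the coordinator side of such a hack might look like, assuming peers can somehow be tagged with the resolver that performed their lookup (the class and method names here are invented for illustration, not from any real system):

```python
from collections import defaultdict

class ResolverCoordinator:
    """Toy coordinator that groups peers by the recursive DNS resolver
    observed performing their registration lookup (hypothetical design)."""

    def __init__(self):
        self._peers_by_resolver = defaultdict(set)

    def register(self, peer_id, resolver_ip):
        # In the real hack, resolver_ip would be seen by an authoritative
        # name server when the peer's resolver does a unique per-peer lookup.
        self._peers_by_resolver[resolver_ip].add(peer_id)

    def nearby_peers(self, peer_id, resolver_ip):
        # Peers behind the same resolver are presumed topologically close.
        return self._peers_by_resolver[resolver_ip] - {peer_id}
```

Peers sharing a resolver would then be tried first during peer selection.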



Re: [secdir] Secdir Review of draft-stjohns-sipso-05

2008-10-22 Thread Nicolas Williams
On Tue, Oct 21, 2008 at 04:57:12PM -0400, Michael StJohns wrote:
 Classified documents have this thing called paragraph marking.  Each
 paragraph within a document is marked with the highest level of data
 within the paragraph.  A page is marked with the highest level of data
 in any paragraph on that page.  The overall document is marked with
 and protected at the highest level of data within the document.
 
 For your example, what would probably happen is that the NFS processes
 on both sides would create a connection at the highest level of data
 they expect to exchange.  The NFS processes would be responsible for
 the labeling and segregation of data exchanged over that connection.
 E.g. the IP packets would ALL be labeled at the high level, even if
 some of them carried data at a level below.
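The marking rule described above amounts to taking a maximum over an ordered set of levels; a toy sketch (the level names and total ordering are illustrative, not normative - real MLS systems use a richer lattice of levels plus compartments):

```python
# Toy ordered label set; illustrative only.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def aggregate_label(part_labels):
    """A page, document, or connection is marked with the highest
    level of data found in any of its parts."""
    return max(part_labels, key=LEVELS.get)
```

So a connection expected to carry SECRET and UNCLASSIFIED data would be created, and all its packets labeled, at SECRET.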

Thanks for the clarification.

Nico
-- 


Re: [p2pi] WG Review: Application-Layer Traffic Optimization (alto)

2008-10-22 Thread Stanislav Shalunov

On Oct 13, 2008, at 5:23 AM, Pekka Savola wrote:
I believe this work could be useful and would provide an improvement over existing p2p usage and traffic management.

I also believe that an ALTO WG should be formed and would like to contribute to a solutions draft.


The current requirements and problem statement are scoped rather narrowly around a solution in mind.  This is recognized by the BoF chairs and the authors, and I would be interested in contributing to these documents so that they are more generally applicable.


A solutions draft should be a start to help think about the solution space.  The starting point for the solutions idea is described at http://www.ietf.org/mail-archive/web/p2pi/current/msg00508.html


This is intended to answer Lisa's call for example candidate solutions  
to better understand the space.  The solution will emphasize  
simplicity, privacy, and, correspondingly, clear understanding of what  
information is given to whom.


The question at hand is not consensus on the requirements or even  
problem statement, but just the charter.


I believe the charter has by now been crafted with the intention of  
covering the known cases.


One small concern that I didn't previously raise because I just  
noticed it is that the charter still says

   A request/response protocol for querying the ALTO service to obtain
   information useful for peer selection, and a format for requests and
   responses.

This starts to specify an architecture.  While the known candidate solutions seem to fit, I would prefer clarifying that it's not the request/response protocol or data format that's the point, but the information.


One way of doing so would be to rephrase as follows:

A complete mechanism that enables clients to learn from the ALTO service information useful for peer selection.


Again, this should go forward.

Thanks,  -- Stas

--
Stanislav Shalunov





Re: [dhcwg] Last Call: draft-ietf-dhc-dhcpv6-bulk-leasequery (DHCPv6 Bulk Leasequery) to Proposed Standard

2008-10-22 Thread David W. Hankins
On Wed, Oct 22, 2008 at 08:36:22AM +0300, Pekka Savola wrote:
 $ snmpwalk -m IP-FORWARD-MIB -v 2c -c foo foo-rtr 

This essentially would be identical to DHCP leasequery, minus the
bulk.  Even if transported via TCP, on the wire it would look like
a single client-server GETNEXT, waiting for a server-client reply
before sending the next one.  One PDU and one OID in it at a time.

snmpbulkwalk is a minor improvement; a single UDP reply can contain
many iterated GETNEXT's, or similarly over TCP.  There is still a
'pause' between the client's request and reply.

What the bulk leasequery methods are looking for is everything at once.  The objective is to fill the socket at maximum window size.  I am not aware of a means to do this with SNMP, considering the kinds of data in DHCP lease tables and the usual ways MIBs are constructed.

 ip.ipForward.ipCidrRouteTable.ipCidrRouteEntry.ipCidrRouteProto.128.214.46
 IP-FORWARD-MIB::ipCidrRouteProto.128.214.46.0.255.255.255.0.0.0.0.0.0 = INTEGER: netmgmt(3)
 IP-FORWARD-MIB::ipCidrRouteProto.128.214.46.254.255.255.255.255.0.0.0.0.0 = INTEGER: local(2)

This kind of underlines a qualm with SNMP data management (as opposed
to network management).

Each iterated GETNEXT or GETBULK relies upon a fixed point in the database (node n), which was present in the database and was the last PDU in the server's reply in the previous packet.  But it may not be present in the database at the time the next request is made.

This requires the database models used by the servers to be able to find a reliable sorting, where the previously-valid-now-invalid OID can still be used as an index to provide continuation; the server can still grant the next OID.
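That "reliable sorting" property can be sketched with a sorted OID list: even after the last-returned OID has been deleted, a server can still answer the next request by bisecting into the sort order (a toy model, not a real SNMP agent):

```python
import bisect

# Toy OID store, kept sorted lexicographically by OID tuple.
oids = [(1, 3, 6, 1), (1, 3, 6, 2), (1, 3, 6, 5)]
values = {(1, 3, 6, 1): "a", (1, 3, 6, 2): "b", (1, 3, 6, 5): "c"}

def get_next(last_oid):
    """Return the first (oid, value) strictly after last_oid, or None.
    Works even if last_oid itself has been removed from the store."""
    i = bisect.bisect_right(oids, last_oid)
    return (oids[i], values[oids[i]]) if i < len(oids) else None

# Delete the OID a client just received as its continuation point:
oids.remove((1, 3, 6, 2))
del values[(1, 3, 6, 2)]
```

After the deletion, get_next((1, 3, 6, 2)) still succeeds, returning the entry for (1, 3, 6, 5) - the continuation survives the vanished fixed point.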

This is essentially the problem, or at least the facility not present in some DHCP databases, that motivated us away from UDP-based bulk queries (the original bulk leasequery proposal was more similar to snmpwalk in this regard, and this was a concern raised by implementers).


In addition...

SNMP likes to present a single table of a single variable at a time.
I suppose we could overcome this by having the DHCP lease information
in a 'blob of octets' rather than in classical SNMP variable form
(INTEGER etc.), so you only have one MIB to walk.  But it seems foreign
to SNMP to do so.

The problem is that most leasequery clients are not positioned to
allocate fields of temporary memory in order to make sense of SNMP's
kind of scatter-gather approach to this kind of data transfer.

To make sense of SNMP MIBs you have to develop some strategy to
receive multiple datapoints from different locations and times.

For example, you start by walking a table of index advertisements,
where you receive an 'index number' that can be used as an index into
other MIBs to find variables associated with that database entry.

For each of these indexes you discover, you could then queue single
GET PDU's for each separate variable you were interested in (lease
state, lease expiration time, ...).

There are 'performance alternatives' from there, and they are
fantastic to entertain because so many SNMP server implementations
will outright crash if too many PDUs are queued in a single packet
(others corrupt their replies if there is more than a single PDU).

This becomes more problematic when you consider that some leasequery clients are going to want only a subset of the MIB's total contents.  The question truly is "what leases did I have in my table before I rebooted?"  Such filtration in an SNMP MIB model would, I think, be done on the client end, not on the server end, meaning the client still must traverse some entire MIB one PDU (GETNEXT or GETBULK) at a time.

This is different from the proposed bulk leasequery models, where the
server writes to the TCP socket all at once, with all data for a
given lease spatially located in the same position in the TCP stream,
and a primitive query language (by query type) to provide subsets.
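That streaming model can be sketched as length-prefixed records written back-to-back, so the client recovers each lease whole from one read loop with no per-record round trip (toy framing for illustration; this is not the actual DHCPv6 bulk-leasequery wire format):

```python
import struct

def frame_leases(leases):
    """Server side: concatenate length-prefixed lease records, as they
    might be written to a TCP socket in one burst."""
    out = bytearray()
    for lease in leases:
        payload = lease.encode("utf-8")
        out += struct.pack("!H", len(payload)) + payload
    return bytes(out)

def parse_leases(stream):
    """Client side: walk the stream, recovering each record whole,
    with all data for a given lease spatially adjacent."""
    records, i = [], 0
    while i < len(stream):
        (n,) = struct.unpack_from("!H", stream, i)
        records.append(stream[i + 2 : i + 2 + n].decode("utf-8"))
        i += 2 + n
    return records
```

The server can write frame_leases(...) to the socket all at once and let TCP's window do the pacing, which is exactly what the one-PDU-per-round-trip GETNEXT model cannot do.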


This doesn't mean a standard DHCP MIB isn't a good idea for entirely
different reasons.

-- 
Ash bugud-gul durbatuluk agh burzum-ishi krimpatul.
Why settle for the lesser evil?  https://secure.isc.org/store/t-shirt/
-- 
David W. Hankins                If you don't do it right the first time,
Software Engineer               you'll just have to do it again.
Internet Systems Consortium, Inc.       -- Jack T. Hankins




RE: WG Review: Application-Layer Traffic Optimization (alto)

2008-10-22 Thread Narayanan, Vidya
All,
We have submitted a draft explaining the overall problem of peer selection - 
http://www.ietf.org/internet-drafts/draft-saumitra-alto-multi-ps-00.txt.  

Below are my suggested revisions to the charter based on arguments the draft 
puts forth (and based on emails exchanged over the last several days). 

Thanks,
Vidya

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of IESG Secretary
 Sent: Monday, October 06, 2008 1:36 PM
 To: IETF Announcement list
 Cc: [EMAIL PROTECTED]
 Subject: WG Review: Application-Layer Traffic Optimization (alto) 
 
 A new IETF working group has been proposed in the 
 Applications Area.  The IESG has not made any determination 
 as yet.  The following draft charter was submitted, and is 
 provided for informational purposes only.  Please send your 
 comments to the IESG mailing list ([EMAIL PROTECTED]) by Monday, 
 October 13, 2008.
 
 Application-Layer Traffic Optimization (alto) 
 =
 Last Modified: 2008-09-29
 
 Current Status: Proposed Working Group
 
 Chair(s): TBD
 
 Applications Area Director(s):
 
 Lisa Dusseault (lisa at osafoundation.org) Chris Newman 
 (Chris.Newman at sun.com)
 
 Applications Area Advisor:
 
 Lisa Dusseault (lisa at osafoundation.org)
 
 Mailing List:
 
 General Discussion: p2pi at ietf.org
 To Subscribe: https://www.ietf.org/mailman/listinfo/p2pi
 Archive: http://www.ietf.org/pipermail/p2pi/
 
 Description of Working Group:
 
 A significant part of the Internet traffic today is generated 
 by peer-to-peer (P2P) applications used for file sharing, 
 real-time communications, and live media streaming.  P2P 
 applications exchange large amounts of data, often uploading 
 as much as downloading.  In contrast to client/server 
 architectures, P2P applications often have a selection of 
 peers and must choose.
 

Add: Peer selection is also a problem that has many different applications in p2p systems - e.g., identifying the best peer to download content from, identifying the best super peer to contact in a system, using the best relay for NAT traversal, or identifying the best next hop for a query based on several criteria (e.g., quality, reputation, semantic expertise).

 One of the advantages of P2P systems comes from redundancy in 
 resource availability.  This requires choosing among download 
 locations, 

s/download locations/a list of peers

 yet applications have at best incomplete 
 information about the topology of the network. 

s/incomplete information about the topology of the network/incomplete 
information to help the selection, e.g., topology of the network. 

 Applications 
 can sometimes make empirical measurements of link 
 performance, but even when this is an option it takes time.
 The application cannot always start out with an optimal 
 arrangement of peers, thus causing at least temporary reduced 
 performance and excessive cross-domain traffic.  Providing 
 more information for use in peer selection can improve P2P 
 performance and lower ISP costs.
 
 The Working Group will design and specify an 
 Application-Layer Traffic Optimization (ALTO) service that 
 will provide applications with information to perform 
 better-than-random initial peer selection.
 ALTO services may take different approaches at balancing 
 factors including

s/including/such as

 maximum bandwidth, minimum cross-domain 
 traffic, lowest cost to the user, etc.  The WG will consider 
 the needs of BitTorrent, tracker-less P2P, and other 
 applications, such as content delivery networks (CDN) and 
 mirror selection.
 
 The WG will focus on the following items:
 
 - A problem statement document providing a description of the
   problem and a common terminology.
 
 - A requirements document.  This document will list 
 requirements for  the ALTO service, identifying, for example, 
 what kind of information  P2P applications will need for 
 optimizing their choices.
 

I propose deleting "identifying, for example, what kind of information P2P applications will need for optimizing their choices."

 - A request/response protocol for querying the ALTO service 
 to obtain information useful for peer selection, and a format 
 for requests and
 responses.   

I suggest replacing this with Stanislav's suggestion: 

A complete mechanism that enables clients to learn from the ALTO service 
information useful for peer selection.

 The WG does not require intermediaries between the ALTO
 server and the peer querying it.  

s/the ALTO server and the peer querying it/the communicating ALTO endpoints. 

 If the requirements 
 analysis identifies the need to allow clients to delegate 
 third-parties to query the ALTO service on their behalf, the 
 WG will ensure that the protocol provides a mechanism to 
 assert the consent of the delegating client.
 
 - A document defining core request and response formats and 
 semantics to communicate network preferences to applications. 

Re: Publication track for IBE documents (Was Second Last Call...)

2008-10-22 Thread Doug Otis


On Oct 22, 2008, at 7:50 AM, Tim Polk wrote:



Stephen,

I will concede that most of the excitement about IBE and other Weil pairing-based cryptography has been in the research community.  However, the technology has matured and products are slowly emerging.  (I am also loath to write off any technology that attempts to address our enrollment and credentialing problems, even though I see it as a simple re-ordering of the same process.  That's a philosophical rathole, though.)  Publication as Informational RFCs is worthwhile since these documents provide a basis for interoperability *if* adoption of IBE technology picks up steam.

We already have multiple non-interoperable implementations of IBE-based email (Voltage and Trend Micro).  These RFCs *won't* address the fundamental interoperability problem between Trend and Voltage, since Trend is using the Sakai-Kasahara algorithm and Voltage uses Boneh-Boyen or Boneh-Franklin.  However, if additional companies wish to join the IBE-based email market, these RFCs are a proactive step towards interoperability of future implementations.


One motivation for adopting the Identum technology was that encryption is based upon the sender's ID, where tokens combined with recipients' IDs provide a means for many recipients to decrypt a common message body.  This approach solves a difficult problem when complying with HIPAA, GLBA, PCI DSS, and the UK Data Protection Act, which want outbound messages managed.  S/MIME encryption interferes with an ability to monitor one's outbound traffic, making compliance assurance difficult.  It is my understanding all of these solutions are encumbered, but I am not a lawyer.


-Doug




Re: [dhcwg] Last Call: draft-ietf-dhc-dhcpv6-bulk-leasequery(DHCPv6 Bulk Leasequery) to Proposed Standard

2008-10-22 Thread Randy Presuhn
Hi -

 From: David W. Hankins [EMAIL PROTECTED]
 To: DHC WG [EMAIL PROTECTED]
 Cc: ietf@ietf.org
 Sent: Wednesday, October 22, 2008 10:17 AM
 Subject: Re: [dhcwg] Last Call: draft-ietf-dhc-dhcpv6-bulk-leasequery(DHCPv6 
 Bulk Leasequery) to Proposed Standard
...

 SNMP likes to present a single table of a single variable at a time.
 I suppose we could overcome this by having the DHCP lease information
 in an 'blob of octets' rather than in classical SNMP variable form
 (INTEGER etc), so you only have one MIB to walk.  But it seems foreign
 to SNMP to do so.

Uh...  One of the useful features of tables is to organize related information into conceptual rows with common INDEX values.  I'm not sure where "a single table of a single variable at a time" comes from - GetBulk certainly has no such limitation.

...
 For example, you start by walking a table of index advertisements,
 where you receive an 'index number' that can be used into other MIBs
 to find variables associated with that database entry.

 For each of these indexes you discover, you could then queue single
 GET PDU's for each separate variable you were interested in (lease
 state, lease expiration time, ...).

That would be a spectacularly inefficient implementation strategy.
I should hope there's nothing in the SNMP RFCs that would be
read as encouraging such wasteful behaviour.

 There are 'performance alternatives' from there, and they are
 fantastic to entertain because so many SNMP server implementations
 will outright crash if too many PDUs are queued in a single packet
 (others corrupt their replies if there are more than single PDU's).

I'm not sure what you're trying to say here.  An SNMP message (which
would normally be carried in a single UDP datagram) by definition
contains exactly one SNMP PDU.

 This becomes more problematic when you consider that some leasequery
 clients are going to want only a subset of the MIB's total contents.
 The question truly is what leases did I have in my table before I
 rebooted?  Such filtration in an SNMP MIB model I think would be
 done on the client end, not on the server end, meaning the client
 still must traverse some entire MIB one PDU (GETNEXT or GETBULK) at a
 time.

This depends on the design of the MIB module in general, and the selection of the INDEX elements in particular.  Choosing INDEX elements for a MIB module is *not* the same problem as selecting indexes for a database.  The use cases for information access, such as "what leases did I have in my table during time period X", are also important.  Sometimes it makes sense to have shadow tables that do nothing but provide re-ordered access to the table with the real data - but this requires careful thought about what the high-frequency or high-value use cases are.
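A toy model of the shadow-table idea: the same conceptual rows, plus a second index that exists only to provide re-ordered access under a different key (the field names here are invented for illustration, not from any real DHCP MIB):

```python
# Primary "table": rows indexed by address.
leases = {
    ("2001:db8::1",): {"state": "bound", "expires": 300},
    ("2001:db8::2",): {"state": "bound", "expires": 100},
    ("2001:db8::3",): {"state": "released", "expires": 200},
}

# Shadow table: nothing but re-ordered access, keyed by (expires, address).
shadow = sorted((row["expires"],) + addr for addr, row in leases.items())

def leases_expiring_before(t):
    """Answer a time-scoped query by walking the shadow index in expiry
    order instead of scanning the whole primary table."""
    return [key[1:] for key in shadow if key[0] < t]
```

With the INDEX chosen to match the high-value use case, the client's walk visits only the rows it actually wants.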

Randy
