RE: RFC 6234 code

2013-06-28 Thread Dearlove, Christopher (UK)
I'd actually tried the authors, but no reply yet (only a few days). I also 
tried the RFC Editor thinking they might have e.g. XML from which extraction 
might have been easier, but also no response yet. And I had found several 
libraries, but not the RFC code. I can't see any suggestion that the library 
you indicate includes that code; it also bills itself as a C++ library, which 
the RFC code is not (and also not what I want, but that's not a subject for 
this list).

But the broader point is that if it's worth the IETF publishing the code as an 
RFC, it's worth making the code available straightforwardly.

For me, a thanks to Tony Hansen, who did the extraction for me. (That makes me 
feel a little guilty, why should he do my work I could have done?) But the 
point of posting on this list was to say that the code should be available so 
that each person wanting that code doesn't have to do that again.

Christopher

-- 
Christopher Dearlove
Senior Principal Engineer, Communications Group
Communications, Networks and Image Analysis Capability
BAE Systems Advanced Technology Centre
West Hanningfield Road, Great Baddow, Chelmsford, CM2 8HN, UK
Tel: +44 1245 242194 |  Fax: +44 1245 242124
chris.dearl...@baesystems.com | http://www.baesystems.com

BAE Systems (Operations) Limited
Registered Office: Warwick House, PO Box 87, Farnborough Aerospace Centre, 
Farnborough, Hants, GU14 6YU, UK
Registered in England & Wales No: 1996687

-Original Message-
From: Hector Santos [mailto:hsan...@isdg.net] 
Sent: 27 June 2013 20:38
To: Dearlove, Christopher (UK)
Cc: ietf@ietf.org
Subject: Re: RFC 6234 code


Ok, other than time, it should be easy to extract, clean up and cross 
your fingers that it compiles with your favorite C compiler.  But I 
would write to the authors to get the original source. Or google:

 C source crypto libraries API hashing functions

among the first hit:

http://www.cryptopp.com/


On 6/27/2013 12:33 PM, Dearlove, Christopher (UK) wrote:
 I want the specific code that is in the RFC (which happens to be in C) rather 
 than some other implementation.








Re: RFC 6234 code

2013-06-28 Thread Eggert, Lars
Hi,

On Jun 28, 2013, at 10:53, Dearlove, Christopher (UK) 
chris.dearl...@baesystems.com
 wrote:
 But the broader point is that if it's worth the IETF publishing the code as 
 an RFC, it's worth making the code available straightforwardly.

some WGs are good at this. RFC5662 for example includes the shell commands to 
extract the sources it contains.
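
The usual pattern is to bracket each embedded file with marker lines and
ship a one-liner that copies out everything in between; as a minimal
sketch of the idea (generic begin/end markers here, not necessarily the
exact mechanism RFC 5662 uses):

  awk '/CODE ENDS/{p=0} p{print} /CODE BEGINS/{p=1}' rfc.txt > extracted-source

That way the RFC itself documents how to get its own code back out.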

It would certainly be nice if other WGs did this, too, but I'm not sure we need 
to make it a requirement.

Lars

RE: RFC 6234 code

2013-06-28 Thread Dearlove, Christopher (UK)
I think we must agree to differ on whether the IETF should do this. If it takes 
twenty minutes' work for everyone who has to do it, either that stacks up, or no 
one is actually interested, so is the RFC worth it?

As for maintaining the code, I'm just suggesting the same snapshot as the RFC 
is maintained. It could even be another link off the tools page: here's the 
text, here's the XML, here's any code.

-- 
Christopher Dearlove
Senior Principal Engineer, Communications Group
Communications, Networks and Image Analysis Capability
BAE Systems Advanced Technology Centre
West Hanningfield Road, Great Baddow, Chelmsford, CM2 8HN, UK
Tel: +44 1245 242194 |  Fax: +44 1245 242124
chris.dearl...@baesystems.com | http://www.baesystems.com

BAE Systems (Operations) Limited
Registered Office: Warwick House, PO Box 87, Farnborough Aerospace Centre, 
Farnborough, Hants, GU14 6YU, UK
Registered in England & Wales No: 1996687


-Original Message-
From: Joe Abley [mailto:jab...@hopcount.ca] 
Sent: 27 June 2013 18:22
To: Dearlove, Christopher (UK)
Cc: ietf@ietf.org
Subject: Re: RFC 6234 code


On 2013-06-27, at 11:49, Dearlove, Christopher (UK) 
chris.dearl...@baesystems.com wrote:

 RFC 6234 contains, embedded in it, code to implement various functions, 
 including SHA-2.
 
 Extracting that code from the RFC is not a clean process. In addition the 
 code must have existed unembedded before being embedded.
 
 Is that code available from the IETF or elsewhere?
 
 (I have tried some approaches to finding such code before posting here, but 
 none successful.)

Turns out that extracting the code from the RFC is only twenty minutes' work. 
Probably faster just to do it than to talk about it. If you'd like a copy of 
the files I extracted from the RFC text, I can easily send you a tarball.

In general I think that maintaining code repositories is non-trivial, and also 
has been known (especially with respect to crypto/hash algorithms) to run into 
export licensing issues. Maintaining an archive of RFCs is already a core 
competency for the IETF, and (at least in some cases) exporting text is less 
problematic than exporting code.


Joe

[krill:~]% mkdir 6234
[krill:~]% cd 6234
[krill:~/6234]% curl -O 'http://tools.ietf.org/rfc/rfc6234.txt'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  231k  100  231k    0     0   190k      0  0:00:01  0:00:01 --:--:--  190k
[krill:~/6234]% vi rfc6234.txt 
[krill:~/6234]% ls
hkdf.c          sha.h           stdint-example.h
hmac.c          sha1.c          test-driver.c
rfc6234.txt     sha224-256.c    usha.c
sha-private.h   sha384-512.c
[krill:~/6234]% vi *.h *.c
10 files to edit
[krill:~/6234]% date
Thu 27 Jun 2013 13:14:53 EDT
[krill:~/6234]% 
[krill:~/6234]% cc -c hmac.c
[krill:~/6234]% cc -c sha1.c
[krill:~/6234]% cc -c sha224-256.c
[krill:~/6234]% cc -c sha384-512.c 
[krill:~/6234]% cc -c usha.c
[krill:~/6234]% cc -c test-driver.c
[krill:~/6234]% ls
hkdf.c          sha.h           sha384-512.o
hkdf.o          sha1.c          stdint-example.h
hmac.c          sha1.o          test-driver.c
hmac.o          sha224-256.c    test-driver.o
rfc6234.txt     sha224-256.o    usha.c
sha-private.h   sha384-512.c    usha.o
[krill:~/6234]% cc -o test-driver *.o
[krill:~/6234]% ./test-driver| egrep '(PASS|FAIL)'
SHA1 sha standard test 1: PASSED
SHA1 sha standard test 2: PASSED
SHA1 sha standard test 3: PASSED
SHA1 sha standard test 4: PASSED
SHA1 sha standard test 5: PASSED
SHA1 sha standard test 6: PASSED
SHA1 sha standard test 7: PASSED
SHA1 sha standard test 8: PASSED
SHA1 sha standard test 9: PASSED
SHA1 sha standard test 10: PASSED
SHA1 random test 0: PASSED
SHA1 random test 1: PASSED
SHA1 random test 2: PASSED
SHA1 random test 3: PASSED
SHA224 sha standard test 1: PASSED
SHA224 sha standard test 2: PASSED
SHA224 sha standard test 3: PASSED
SHA224 sha standard test 4: PASSED
SHA224 sha standard test 5: PASSED
SHA224 sha standard test 6: PASSED
SHA224 sha standard test 7: PASSED
SHA224 sha standard test 8: PASSED
SHA224 sha standard test 9: PASSED
SHA224 sha standard test 10: PASSED
SHA224 random test 0: PASSED
SHA224 random test 1: PASSED
SHA224 random test 2: PASSED
SHA224 random test 3: PASSED
SHA256 sha standard test 1: PASSED
SHA256 sha standard test 2: PASSED
SHA256 sha standard test 3: PASSED
SHA256 sha standard test 4: PASSED
SHA256 sha standard 

New form of remote attendance [was Re: The Nominating Committee Process: Eligibility]

2013-06-28 Thread Elwyn Davies
On Thu, 2013-06-27 at 12:44 +0100, Arturo Servin (probably did not
intend to) wrote:

 What is the rationale of the requirement to attend psychically to
 meetings?

I attend all meetings psychically, so spiritual!

Sorry.. couldn't resist.
E.



Re: Evi Nemeth

2013-06-28 Thread Jun Murai
As we have been working together as IETF community as well as univ.
community with us, I would share this loss of Evi with us and all the
students working with her at IETF.
Jun Murai


On Friday, 28 June 2013, Brian E Carpenter brian.e.carpen...@gmail.com wrote:

 Evi used to be an IETF regular. There is rather ominous news - she is
 lost at sea between New Zealand and Australia:

 http://www.nzherald.co.nz/nz/news/article.cfm?c_id=1&objectid=10893482

 http://www.nzherald.co.nz/nz/news/article.cfm?c_id=1&objectid=10893503

 Regards
Brian Carpenter





Re: RFC 6234 code

2013-06-28 Thread Tony Hansen
On 6/28/2013 4:53 AM, Dearlove, Christopher (UK) wrote:
 I'd actually tried the authors, but no reply yet (only a few days). 

 For me, a thanks to Tony Hansen, who did the extraction for me. (That makes 
 me feel a little guilty, why should he do my work I could have done?) But the 
 point of posting on this list was to say that the code should be available so 
 that each person wanting that code doesn't have to do that again.

Hmmm, as one of the authors of RFC6234, I'm not sure how to respond to
that, other than to say, "You're welcome." :-) (And no, you didn't get the
files extracted from the RFC, but instead got the files as used to
generate the RFC.)

 I also tried the RFC Editor thinking they might have e.g. XML from which 
 extraction might have been easier, but also no response yet. And I had found 
 several libraries, but not the RFC code. ...
 But the broader point is that if it's worth the IETF publishing the
 code as an RFC, it's worth making the code available straightforwardly. 

I've suggested on a couple occasions to the RFC Editor that, when an RFC
provides source code, they should allow rfc.tar or rfc.tgz to be
provided as well.

There's only a handful of RFCs that do provide source code, for whatever
reason, so this should not be an onerous additional feature.

I've CC'd the RFC Interest mailing list, where this would be more
properly discussed.

Tony Hansen






Re: RFC 6234 code

2013-06-28 Thread Riccardo Bernardini
On Fri, Jun 28, 2013 at 4:11 PM, Tony Hansen t...@att.com wrote:
 On 6/28/2013 4:53 AM, Dearlove, Christopher (UK) wrote:
 I'd actually tried the authors, but no reply yet (only a few days). 

 For me, a thanks to Tony Hansen, who did the extraction for me. (That makes 
 me feel a little guilty, why should he do my work I could have done?) But 
 the point of posting on this list was to say that the code should be 
 available so that each person wanting that code doesn't have to do that 
 again.

 Hmmm, as one of the authors of RFC6234, I'm not sure how to respond to
 that, other than to say, You're welcome. :-) (And no, you didn't get the
 files extracted from the RFC, but instead got the files as used to
 generate the RFC.)

 I also tried the RFC Editor thinking they might have e.g. XML from which 
 extraction might have been easier, but also no response yet. And I had found 
 several libraries, but not the RFC code. ...
 But the broader point is that if it's worth the IETF publishing the
 code as an RFC, it's worth making the code available straightforwardly.

 I've suggested on a couple occasions to the RFC Editor that, when an RFC
 provides source code, they should allow rfc.tar or rfc.tgz to be
 provided as well.

 There's only a handful of RFCs that do provide source code, for whatever
 reason, so this should not be an onerous additional feature.

 I've CC'd the RFC Interest mailing list, where this would be more
 properly discussed.

This came to my mind while reading: why not embed the archive in
the text with some special line header that makes automatic
extraction possible?  For example, this could be a first rough draft
of the procedure:

  1.  Create an archive of the source files with ar.  This command has
the characteristic that if the files are text files, the archive is
also pure text
  2.  Embed the archive in the RFC by adding at the beginning of each
line a '!' (or another marker).  Lines too long for an RFC can be
split and the marker changed to (say) '?'

To extract the archive, it should suffice to process it with an
awk script piped to ar to extract the files (and if you have Windows,
you install Cygwin :-).  With this approach the source code would also
remain human-readable in the RFC text.  The only drawback is the
length of the resulting RFC if the embedded code is very large.
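
A rough sketch of both sides, assuming '!' marks whole archive lines and
'?' marks continuation lines as proposed above (file names here are
placeholders, and splitting of over-long lines on the embedding side is
left out for brevity):

  # embedding: archive the sources and prefix every line with '!'
  ar rc sources.a sha.h sha1.c
  sed 's/^/!/' sources.a > archive-for-rfc.txt

  # extraction: strip the markers, rejoin split lines, hand the result to ar
  awk 'BEGIN { started = 0 }
       /^!/  { if (started) print buf; buf = substr($0, 2); started = 1 }
       /^\?/ { buf = buf substr($0, 2) }
       END   { if (started) print buf }' archive-for-rfc.txt > sources.a
  ar x sources.a

So a few lines of sed and awk around ar would indeed suffice, at least
for archives made purely of text files.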

Riccardo Bernardini


 Tony Hansen






Re: adaptive http

2013-06-28 Thread Richard Barnes
Hi Faiz,

Thanks for bringing your idea to the IETF.  The best way to propose your
idea is to write down the details in an Internet-draft.  That helps make
sure that everyone understands how your system works.  There are some tools
for writing drafts here:
http://tools.ietf.org/

Once you've written your draft, submit it here:
http://datatracker.ietf.org/submit/

--Richard



On Thursday, June 27, 2013, Faiz ul haque Zeya wrote:

 Hi all,
  I have an idea and want to share.It is about adaptive http. The protocol
 requests the interests and other parameters from the client and send the
 web page dynamically different to different users based on the interests.
 The intersts can be shared via XML. This is in contrast to configured html
 pages a different mechanism,
 regards
 faiz




Re: RFC 6234 code

2013-06-28 Thread Paul Hoffman

On Jun 28, 2013, at 1:57 AM, Eggert, Lars l...@netapp.com wrote:

 some WGs are good at this. RFC5662 for example includes the shell commands to 
 extract the sources it contains.

Earlier example: RFC 4134.

 It would certainly be nice if other WGs did this, too, but I'm not sure we 
 need to make it a requirement.

Fully agree. This is a service that the RFC Editor should provide to the 
community.

--Paul Hoffman

Part of Improving IETF Electronic Diversity [was: RFC 6234 code]

2013-06-28 Thread Hector Santos
I believe this is all part of improving the IETF Electronic Diversity 
picture. Just as we have to deal with greater personal and globalization 
diversity issues among people, there are also greater technology and 
legal diversity issues to deal with. So many tools, so many languages, 
so many OSes, so many devices and communications API platforms, where 
are the proposals for better, new IETF Global Commons?  Guidelines for 
technical writing for the new world engineers to use, etc.


For me, when I saw this RFC, these things crossed my mind:

- I have trouble with the licensing statement. The RFC describes public 
domain technology.  It requires passing this thru your lawyer(s) to see 
if it can be used in our commercial product lines.


- Far too big to distribute via an RFC.  Provide a link to some RFC site. 
Note, I'm just saying in general. I did not read in detail whether
the RFC already included links to the official source code.

- Because it was too big, it requires a stripper/parser, although a good 
power programmer can quickly macro-clean it up.  The RFCSTRIP tool, 
well, what language is that? I'm not an *nix person. So this adds to the 
complexity for the Windows shops to get at these hashing functions.  Of 
course, it's a piece of cake for a sharp programmer, but even the 
sharpest knives eventually get dull.


--
HLS


On 6/28/2013 4:53 AM, Dearlove, Christopher (UK) wrote:

I'd actually tried the authors, but no reply yet (only a few days). I also 
tried the RFC Editor thinking they might have e.g. XML from which extraction 
might have been easier, but also no response yet. And I had found several 
libraries, but not the RFC code. I can't see any suggestion that the library 
you indicate includes that code, it also bills itself as a C++ library, which 
the RFC code is not (and also not what I want, but that's not a subject for 
this list).

But the broader point is that if it's worth the IETF publishing the code as an 
RFC, it's worth making the code available straightforwardly.

For me, a thanks to Tony Hansen, who did the extraction for me. (That makes me 
feel a little guilty, why should he do my work I could have done?) But the 
point of posting on this list was to say that the code should be available so 
that each person wanting that code doesn't have to do that again.

Christopher





draft-eastlake-trill-rfc6327bis-00 was replaced by draft-ietf-trill-rfc6327bis-00

2013-06-28 Thread Donald Eastlake
Hi,

Please note that draft-eastlake-trill-rfc6327bis-00 was replaced by
draft-ietf-trill-rfc6327bis-00.

Thanks,
Donald
=
 Donald E. Eastlake 3rd   +1-508-333-2270 (cell)
 155 Beaver Street, Milford, MA 01757 USA
 d3e...@gmail.com


Experience with Online Protocol Testing

2013-06-28 Thread Hannes Tschofenig

[I posted this question a little while ago to the WG chairs mailing list and 
got no response.
Maybe my question is too trivial but I thought I should try it on the 
ietf@ietf.org list as well to 
get some feedback.]

Hi all,

when concerns about the lack of interoperability surfaced mid last year 
in the OAuth working group we (Derek, and myself) tried to figure out 
whether we should schedule a face-to-face interop and/or to develop an 
online test suite. We got in touch with Lucy Lynch (ISOC) and she helped 
us to find developers to work with us on the test software.

Roland Hedberg, one of the guys working on the project for OAuth 
testing, presented his ongoing work in the OAuth working group, see
http://www.ietf.org/proceedings/86/slides/slides-86-oauth-2.pdf

OAuth is a bit more complex since it involves more than two parties and 
we were looking for a test framework that could be re-used to develop 
the desired results more quickly. To our surprise we couldn't find 
a test framework that we could easily re-use since most test frameworks 
really focus on different types of tests. Of course, we might 
have looked in the wrong direction.

Here is how it works at the moment:
* Imagine you have developed an OAuth-based identity management server 
(that contains an OAuth 2.0 authorization server) and it runs somewhere 
on the Internet (or in your lab). You don't need to have access to the 
source code to execute the tests.
* You download the scripts that Roland & Co had developed and configure 
them. Of course you will have to create an account at your IdP as well.
* You run the test scripts against the authorization server and the 
script plays the other OAuth 2.0 parties in the exchange. The script contains
a number of test cases (around 60+ at the moment) and determines whether the
IdP responds correctly in the exchanges.
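
To give a flavor of the kind of check involved (this is not Roland's
framework; the endpoint URL and client credentials below are just
placeholders), a single test against a plain OAuth 2.0 client-credentials
grant could look like:

  # a valid client should get back an access token
  curl -s -u test-client:test-secret \
       -d 'grant_type=client_credentials' \
       https://idp.example.org/token | grep -q '"access_token"' \
    && echo PASSED || echo FAILED

  # an unknown grant type should produce an OAuth error response
  curl -s -u test-client:test-secret \
       -d 'grant_type=bogus' \
       https://idp.example.org/token | grep -q '"error"' \
    && echo PASSED || echo FAILED

The real suite does this sort of check, driven from its configuration,
for the 60+ cases mentioned above.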

I know that these ideas have come up in other working groups in the past 
already (such as in SCIM, which also has a test server up and 
running).

It would be interesting to hear what others have been doing and what 
worked for you or what didn't.

Ciao
Hannes





Re: Experience with Online Protocol Testing

2013-06-28 Thread Michael Tuexen
On Jun 28, 2013, at 6:54 PM, Hannes Tschofenig hannes.tschofe...@gmx.net 
wrote:

 
 [I posted this question a little while ago to the WG chairs mailing list and 
 got no response.
 Maybe my question is too trivial but I thought I should try it on the 
 ietf@ietf.org list as well to 
 get some feedback.]
 
 Hi all,
 
 when concerns about the lack of interoperability surfaced mid last year 
 in the OAuth working group we (Derek, and myself) tried to figure out 
 whether we should schedule a face-to-face interop and/or to develop an 
 online test suite. We got in touch with Lucy Lynch (ISOC) and she helped 
 us to find developers to work with us on the test software.
 
 Roland Hedberg, one of the guys working on the project for OAuth 
 testing, presented his ongoing work in the OAuth working group, see
 http://www.ietf.org/proceedings/86/slides/slides-86-oauth-2.pdf
 
 OAuth is a bit more complex since it involves more than two parties and 
 we were looking for a test framework that could be re-used to develop 
 the desired results more quickly. To our surprise we couldn't find 
 a test framework that we could easily re-use since most test frameworks 
 really focus on different types of tests. Of course, we might 
 have looked in the wrong direction.
 
 Here is how it works at the moment:
 * Imagine you have developed an OAuth-based identity management server 
 (that contains an OAuth 2.0 authorization server) and it runs somewhere 
 on the Internet (or in your lab). You don't need to have access to the 
 source code to execute the tests.
 * You download the scripts that Roland & Co had developed and configure 
 them. Of course you will have to create an account at your IdP as well.
 * You run the test scripts against the authorization server and the 
 script plays the other OAuth 2.0 parties in the exchange. The script contains 
 a number 
 of test cases (around 60+ at the moment) and determines whether the IdP 
 responds correctly in the exchanges.
 
 I know that these ideas have come up in other working groups in the past 
 already (such as in SCIM, which also has a test server up and 
 running).
 
 It would be interesting to hear what others have been doing and what 
 worked for you or what didn't.
So it sounds like you are doing some sort of conformance testing...

For SCTP we did a number of interoperability tests, which were
face-to-face meetings, and the people who were developing stacks
were there. These events were always very helpful not only for improving
the stacks but also for improving the IETF documents.
I also developed a test tool for conformance testing based on some
test descriptions provided by ETSI. However, it would have made sense to
specify also some tests within the IETF. That can also help to clarify
some protocol aspects and to focus on common mistakes.
Providing tests is a very good thing in my experience. Unfortunately,
at the point we did this for SCTP, the IETF position was that testing
isn't an objective. Maybe it is time to change that...

Best regards
Michael
 
 Ciao
 Hannes
 
 
 
 



Re: Experience with Online Protocol Testing

2013-06-28 Thread Hannes Tschofenig

 For SCTP we did a number of interoperability tests, which were
 face to face meetings and the people who were developing stacks
 we there. This events were always very helpful not only for improving
 the stacks but also for improving the IETF documents.
 I also developed a test tool for conformance testing based on some
 test descriptions provided by ETSI. However, it would have made sense to
 specify also some tests within the IETF. That can also help to clarify
 some protocol aspects and to focus on common mistakes.
 Providing tests is a very good thing in my experience. Unfortunately,
 at the point we did this for SCTP, the IETF position was that testing
 isn't an objective. Maybe it is time to change that...
 
 Best regards
 Michael


I guess the code you wrote for your online test cases was your own work and you 
didn't re-use someone else's test framework? 
The SCTP case is also a bit simpler than the OAuth scenario, since in SCTP you 
might have only tested client-server interactions (right?)

Ciao
Hannes



Re: Experience with Online Protocol Testing

2013-06-28 Thread Michael Tuexen
On Jun 28, 2013, at 7:14 PM, Hannes Tschofenig hannes.tschofe...@gmx.net 
wrote:

 
 For SCTP we did a number of interoperability tests, which were
 face to face meetings and the people who were developing stacks
 we there. This events were always very helpful not only for improving
 the stacks but also for improving the IETF documents.
 I also developed a test tool for conformance testing based on some
 test descriptions provided by ETSI. However, it would have made sense to
 specify also some tests within the IETF. That can also help to clarify
 some protocol aspects and to focus on common mistakes.
 Providing tests is a very good thing in my experience. Unfortunately,
 at the point we did this for SCTP, the IETF position was that testing
 isn't an objective. Maybe it is time to change that...
 
 Best regards
 Michael
 
 
 I guess the code you wrote for your online test cases was your own work and 
 you didn't re-use some else's test framework? 
Not a test framework, but a scheme interpreter (guile) was extended with the
necessary stuff for sending/receiving packets, such that tests can be 
implemented
as simple scheme functions. Doing a test is calling a scheme function which
returns whether the corresponding test was passed or failed. Some shell scripts
allow you to run a test suite.
 The SCTP case is also a bit simpler than the OAuth scenario since in SCTP 
 might have only tested client-server interactions (right?)
Well, the test tool only deals with network layer events. Taking interactions
with the application (API) into account is a different story. We also have
an API tester, which is also self-written. We currently don't have something
which takes both network and API events into account.

Best regards
Michael
 
 Ciao
 Hannes
 
 



Re: RFC 6234 code

2013-06-28 Thread John C Klensin


--On Friday, June 28, 2013 10:11 -0400 Tony Hansen
t...@att.com wrote:

 I also tried the RFC Editor thinking they might have e.g. XML
 from which extraction might have been easier, but also no
 response yet. And I had found several libraries, but not the
 RFC code. ... But the broader point is that if it's worth the
 IETF publishing the code as an RFC, it's worth making the
 code available straightforwardly. 
 
 I've suggested on a couple occasions to the RFC Editor that,
 when an RFC provides source code, they should allow
 rfc.tar or rfc.tgz to be provided as well.
 
 There's only a handful of RFCs that do provide source code,
 for whatever reason, so this should not be an onerous
 additional feature.

Folks, IANAL, but please be _very_ careful about the comment Joe
made about the potential difference between publishing a paper
or article that contains code and exporting the code itself or
making it generally available for export.   I have no reason to
believe that this particular case is a problem given how widely
the details of SHA-2 have been published but, if we want the RFC
Editor to go into the code distribution business, we should be
very sure that an attorney with the right specialties has looked
at the situation and either cleared it generally or advised on
what needs to be examined case by case.

   john





Re: RFC 6234 code

2013-06-28 Thread Paul Hoffman
On Jun 28, 2013, at 12:10 PM, John C Klensin john-i...@jck.com wrote:

 Folks, IANAL, but please be _very_ careful about the comment Joe
 made about the potential difference between publishing a paper
 or article that contains code and exporting the code itself or
 making it generally available for export.   I have no reason to
 believe that this particular case is a problem given how widely
 the details of SHA-2 has been published but, if we want the RFC
 Editor to go into the code distribution business, we should be
 very sure that an attorney with the right specialties has looked
 at the situation and either cleared it generally or advised on
 what needs to be examined case by case.

The RFC Editor is publishing code in a text file that is formatted like an RFC. 
The proposal is for the RFC Editor to publish *the exact same code* in a file 
without the RFC wrapping.

If you really think you see a legal difference in doing the second, fine; I 
propose that you are just searching for problems that do not exist.

--Paul Hoffman

Re: RFC 6234 code

2013-06-28 Thread Joe Abley

On 2013-06-28, at 15:19, Paul Hoffman paul.hoff...@vpnc.org wrote:

 The RFC Editor is publishing code in a text file that is formatted like an 
 RFC. The proposal is for the RFC Editor to publish *the exact same code* in a 
 file without the RFC wrapping.
 
 If you really think you see a legal difference in doing the second, fine; I 
 propose that you are just searching for problems that do not exist.

Quite possibly they don't, and I'm not presuming to talk for John. But the 
vague thoughts that crossed my mind were the issues that allowed Phil Zimmermann 
to publish PGP source code in book form and avoid conviction for munitions 
export without a licence.

Perhaps this is no longer an issue in the US following Bernstein vs. United 
States and Junger vs. Daley.

Just thinking aloud, really.

I will observe that far more than 20 combined person-minutes have 
been consumed by this thread already, and the approach of minimum collective 
effort is still "scrape the code from the RFC yourself and compile it".


Joe

Re: Part of Improving IETF Electronic Diversity [was: RFC 6234 code]

2013-06-28 Thread Donald Eastlake
Hi Hector,

On Fri, Jun 28, 2013 at 11:56 AM, Hector Santos hsan...@isdg.net wrote:
 I believe this is all part of improving the IETF Electronic Diversity
 picture. Just like we have to deal with greater people personal
 globalization diversity issues, there is also greater technology and legal
 diversity issues to deal with. So many tools, so many languages, so many
 OSes, so many devices and communications API platforms, where are the
 proposals for better, new IETF Global Commons?  Guidelines for technical
 writing for the new world engineers to use, etc.

I think that computational code written in C, pretty much the lowest
common denominator, is just fine. I don't think there is any greater
diversity now than there was 10 or 20 years ago at that level. In fact,
I think there is less.

 For me, when I saw this RFC, the things crossed my mind:

 - I have trouble with the licensing statement. The RFC describes public
 domain technology.  It requires passing this thru your lawyer(s) to see if
 it can used in our commercial product lines.

Sorry, the standard simplified BSD license is required for code in
RFCs. The authors couldn't change it. But if you are doing a serious
commercial product and you don't have a lawyer checking such things
anyway, you are a fool.

 - Far too big to distribute via a RFC.  Provide a link to some RFC site.
 Note, I'm just saying in general. I did not read in detail if
 the RFC already included links to the official source code.

Nonsense. An RFC should be self-contained and I don't see what's too
big about this one. The code is in a relatively compact region of the
RFC, so you can read the text and skip the code, if you want, without
undue effort.

 - Because it was too big, it requires a stripper/parser, although a good
 power programmer can quickly macro-clean it up.  The RFCSTRIP tool, well,
 what language is that? I'm not an *nix person. So this adds to the
 complexity for the Windows shops to get at these hashing functions.  Of
 course, its a piece of cake for a sharp programmer, but even the sharpest
 knives eventually get dull.

This is just nonsense. If you are going to do anything serious with
code, you need a source code editor. With that, it is no big deal to
extract and test the code. Although I would agree that making the
RFCSTRIP tool available over the web, as many other tools are, would
be reasonable.

Thanks,
Donald
=
 Donald E. Eastlake 3rd   +1-508-333-2270 (cell)
 155 Beaver Street, Milford, MA 01757 USA
 d3e...@gmail.com

 --
 HLS


 On 6/28/2013 4:53 AM, Dearlove, Christopher (UK) wrote:

 I'd actually tried the authors, but no reply yet (only a few days). I also
 tried the RFC Editor thinking they might have e.g. XML from which extraction
 might have been easier, but also no response yet. And I had found several
 libraries, but not the RFC code. I can't see any suggestion that the library
 you indicate includes that code, it also bills itself as a C++ library,
 which the RFC code is not (and also not what I want, but that's not a
 subject for this list).

 But the broader point is that if it's worth the IETF publishing the code
 as an RFC, it's worth making the code available straightforwardly.

 For me, a thanks to Tony Hansen, who did the extraction for me. (That
 makes me feel a little guilty, why should he do my work I could have done?)
 But the point of posting on this list was to say that the code should be
 available so that each person wanting that code doesn't have to do that
 again.

 Christopher




Re: RFC 6234 code

2013-06-28 Thread Martin Rex
Dearlove, Christopher (UK) wrote:

 I'd actually tried the authors, but no reply yet (only a few days).
 I also tried the RFC Editor thinking they might have e.g. XML
 from which extraction might have been easier, but also no response yet.

Extracting code from text is pretty trivial.

Use copy & paste from the output of the simple Perl script below
(which removes the page breaks):


-Martin


#!/usr/bin/perl
#
$rfcnum=6234;
$footerpattern="^Eastlake";
$headerpattern="^RFC ${rfcnum}";

$url = "http://tools.ietf.org/rfc/rfc${rfcnum}.txt";

open(IN,"curl '${url}'|")
  || die("Download of \"$url\" failed: $!\n");
@doc  = ();
$show = 2;
while($line = <IN>) {
    $line =~ tr/\r\n//d;
    if ( $line =~ m/${footerpattern}/io ) {
        $show = 0;
        while ( $doc[$#doc] eq "" ) {
            pop(@doc);
        }
    }
    if ( $show > 0 ) {
        if ( 2==$show || $line ne "" ) {
            push(@doc, $line);
            $show = 2;
        }
    }
    if ( 0==$show && $line =~ m/^${headerpattern}/ ) {
        $show = 1;
    }
}

close(IN);

print join("\n",@doc), "\n";




Re: RFC 6234 code

2013-06-28 Thread Martin Rex
Martin Rex wrote:
 Dearlove, Christopher (UK) wrote:
 
  I'd actually tried the authors, but no reply yet (only a few days).
  I also tried the RFC Editor thinking they might have e.g. XML
  from which extraction might have been easier, but also no response yet.
 
 Extracting code from text is pretty trivial.
 
 Use copypaste from the output of below simple perl script
 (which removes the pagebreaks):

Modified script (below) which also extracts the files into the
current directory.

  cc -c shatest.c sha1.c sha224-256.c sha384-512.c usha.c hmac.c hkdf.c
  cc -o shatest shatest.o sha1.o sha224-256.o sha384-512.o usha.o hmac.o hkdf.o
  ./shatest

it seems to work (for me on Linux, at least)

-Martin



#!/usr/bin/perl
#
$rfcnum=6234;
$footerpattern="^Eastlake";
$headerpattern="^RFC ${rfcnum}";

$url = "http://tools.ietf.org/rfc/rfc${rfcnum}.txt";

open(IN,"curl '${url}'|")
  || die("Download of \"$url\" failed: $!\n");
@origdoc = <IN>;
close(IN);

@doc  = ();
$show = 2;
$fname = "";
@file = ();

foreach $line ( @origdoc ) {

   $line =~ tr/\r\n//d;
   if ( $line =~ m/${footerpattern}/io ) {
      $show = 0;
      while ( $#doc >= 0 && "" eq $doc[$#doc] ) {
         pop(@doc);
      }
      while ( $#file >= 0 && "" eq $file[$#file] ) {
         pop(@file);
      }
   }
   if ( $show > 0 ) {
      if ( 2==$show || $line ne "" ) {
         push(@doc, $line);
         $show = 2;
         if ( $fname eq "" && $line =~ m#^/\*\*.* (\w.+) \*\*#io ) {
            $fname = $1;
            $fname =~ tr/a-zA-Z0-9._-//cd;
            @file  = ();
         }
         if ( "" ne $fname ) {
            if ( $line =~ m/^\d/io ) {
               $filedata = join("\n",@file) . "\n";
               $filelen  = length($filedata);
               printf("Writing file \"$fname\" (len=${filelen})\n");
               open(OUT,">$fname")
                 || die("Could not write file \"$fname\": $!\n");
               print OUT $filedata;
               close(OUT);
               $fname = "";
            } else {
               push(@file, $line);
            }
         }
      }
   }
   if ( 0==$show && $line =~ m/${headerpattern}/io ) {
      $show = 1;
   }
}

# print join("\n",@doc), "\n";



Comments For I-D: draft-moonesamy-nomcom-eligibility-00 (was Re: The Nominating Committee Process: Eligibility)

2013-06-28 Thread Abdussalam Baryun
This message is a reply to an author of a new draft under IETF discussion.
If this list is not the correct place to discuss such a matter, then the
list's responsible Chair is required to give details of where to
discuss such new work.
+

Hi Moonesamy,
(the Author of draft-moonesamy-nomcom-eligibility-00)

I think the draft still needs more details. For example, the Abstract
says to make remote contributors eligible to serve, but how many
remote members? It is not reasonable or practical to have mostly remote
members, and it is not fair or diverse to have none remote. Furthermore,
you did not mention diversity in the draft in relation to the members selected.

AB> I would prefer if you, or the discussion list chair, could refer
me to somewhere we can discuss this new draft. Please note that I was
told not to post more discussion messages on this list, so the chair or
you are required to respond on this issue related to discussing the
draft, because this may be my last post regarding this I-D.

AB> the update may need an informational draft (or a better
introduction) like what [1] is doing, so that if we know the information on
process challenges we will know the best practice. I like the [1]
draft; I think it needs to be renewed to include the possibility of
remote members.
[1]  http://tools.ietf.org/id/draft-crocker-nomcom-process-00.txt

AB> you need to define *remote contributor* in the draft. When the
authors define it, then I can amend or edit. You need to mention that
most IETF meetings per year are in one region, which leads some people
from other regions to contribute remotely.

Section 2: The section is not reasonable because you changed it with no
strong reasons. Why do you want to change it totally? I recommend adding
an idea, not changing, so as to give an opportunity to additional members
that are remote. These additional members will have a special condition.
This way you don't change the conditions of the current procedure for
selecting f2f members, and you may limit the number of remote
contributors to maybe 10% of the total members.

AB> in Section 2 I suggest not updating the text of the RFC
but adding a new rule for selecting a few remote participants.

AB> you need to add what the remote members' responsibilities are,
because they may be similar to or different from those of the other members.

my answers to your questions below,

On Fri, Jun 28, 2013 at 1:50 AM, S Moonesamy sm+i...@elandsys.com wrote:

 Hi Abdussalam,


 Thanks for explaining why you support the draft.  I am going to list some 
 questions.  Please read them as points to consider.  There isn't any 
 obligation to provide comments.

You mean the draft should consider,


  - What is your opinion about helping the pie get larger?


No we don't want things to get larger for others to eat, we want
things to get smarter for others to use, share, and develop equally.



  - What would be an acceptable way of determining whether someone
has been contributing to the IETF over a period of five meetings?

Where are the five meetings (are they f2f meetings?) and what kind of
contribution are you asking about?

  - Dave Cridland suggested that working groups provide a smallish set of
volunteers each for the selection process.  Is it okay to leave it
to the working group chair to make the decision?

I will send you discussion/answers offline.
I really want to focus on questions related to the new draft, not other
issues. Therefore, I think the draft needs to incorporate what was
discussed on the list (feedback). Updating this RFC procedure may need
more reasons than what was presented in the draft; I think it would be nice
if you added more and changed the info to renew this draft for further
discussion. Thanks.

AB


WG Review: Security Automation and Continuous Monitoring (sacm)

2013-06-28 Thread The IESG
A new IETF working group has been proposed in the Security Area. The IESG
has not made any determination yet. The following draft charter was
submitted, and is provided for informational purposes only. Please send
your comments to the IESG mailing list (iesg at ietf.org) by 2013-07-08.

Security Automation and Continuous Monitoring (sacm)

Current Status: Proposed WG

Chairs:
  Dan Romascanu droma...@avaya.com
  Kathleen Moriarty kathleen.moria...@emc.com

Assigned Area Director:
  Sean Turner turn...@ieca.com

Mailing list
  Address: s...@ietf.org
  To Subscribe: https://www.ietf.org/mailman/listinfo/sacm
  Archive: http://www.ietf.org/mail-archive/web/sacm/

Charter:

Securing information and the systems that store, process, and transmit
that information is a challenging task for organizations of all sizes,
and many security practitioners spend much of their time on manual
processes. Standardized protocols to collect, verify, and update system
security configurations would allow this process to be automated, which
would free security practitioners to focus on high priority tasks and
should improve their ability to prioritize risk based on timely
information about threats and vulnerabilities.  Because this is a broad
area of work, this working group will first address enterprise use cases
pertaining to the assessment of endpoint posture (using the definitions
of Endpoint and Posture from RFC 5209).

The working group will, whenever reasonable and possible, reuse existing
protocols, mechanisms, information and data models. Of particular
interest to this working group are existing industry standards,
preferably IETF standards, that could support automation of asset,
change, configuration, and vulnerability management.

The working group will define:

1. A set of standards to enable assessment of endpoint posture. This area
of focus provides for necessary language and data format specifications.

  An example of such an endpoint posture assessment could include,
  but is not limited to, the following general steps:

  1. Identify endpoints

  2. Determine specific endpoint elements to assess

  3. Collect actual value of elements

  4. Compare actual element values to expected element values

  5. Report results

  This approach to endpoint posture collection enables multiple policies
  to be evaluated based on a single data collection activity. Policies
  will be evaluated by comparing available endpoint posture data 
  according to rules expressed in the policy. Typically, these rules 
  will compare the actual value against an expected value or range for 
  specific posture elements. In some cases, the posture element may 
  pertain to software installation state, in which case the actual and 
  expected values may be installed or not installed. Evaluation of 
  multiple posture elements may be combined using Boolean logic.

2. A set of standards for interacting with repositories of content
related to assessment of endpoint posture.

  Repository protocols are needed to store, update, and retrieve
  configuration checks and other types of content required for posture
  assessment (see step 2 above). A content repository is expected to
  house specific versions of checklists (i.e. benchmarks) that may be
  required to satisfy different use cases (e.g. asset inventory,
  configuration settings, or vulnerabilities). In addition, content 
  repositories are expected to store up-to-date dictionaries of specific 
  enumerations, such as those used for configuration element 
  identifiers, asset classifications, vulnerability identifiers, and so 
  on.

This working group will produce the following deliverables:

- A document or documents describing the SACM Architecture. This will
include protocol requirements and their associated use cases as well as a
description of how SACM protocols fit together into a system. This may be
a single standards-track document, or may be split up as the WG sees fit.
- A standards-track document specifying the information model for
endpoint posture data.
- A standards-track document specifying a protocol and data format for
retrieving configuration and policy information for driving data
collection and analysis
- A standards-track document specifying a protocol and data format for
collecting actual endpoint posture

The working group will create an Internet-Draft documenting the existing
work in the IETF and in other organizations that might be used as-is or
as a starting point for developing solutions to the SACM requirements. 
The working group may decide to make this document an Informational
RFC, but this is not a mandatory deliverable.

The working group will work in close coordination with other WGs in the
IETF (including, but not limited to MILE and NEA) in order to create
solutions that do not overlap and can be used or re-used to meet the
goals of more than one working group.

The working group will communicate with 

WG Review: Managed Incident Lightweight Exchange (mile)

2013-06-28 Thread The IESG
The Managed Incident Lightweight Exchange (mile) working group in the
Security Area of the IETF is undergoing rechartering. The IESG has not
made any determination yet. The following draft charter was submitted,
and is provided for informational purposes only. Please send your
comments to the IESG mailing list (iesg at ietf.org) by 2013-07-08.

Managed Incident Lightweight Exchange (mile)

Current Status: Active WG

Chairs:
  Kathleen Moriarty kathleen.moria...@emc.com
  Brian Trammell tramm...@tik.ee.ethz.ch

Assigned Area Director:
  Sean Turner turn...@ieca.com

Mailing list
  Address: m...@ietf.org
  To Subscribe: https://www.ietf.org/mailman/listinfo/mile
  Archive: http://www.ietf.org/mail-archive/web/mile/

Charter:

The Managed Incident Lightweight Exchange (MILE) working group develops
standards to support computer and network security incident management;
an incident is an unplanned event that occurs in an information
technology (IT) infrastructure. An incident could be a benign
configuration issue, an IT incident, an infraction of a service level
agreement (SLA), a system compromise, a socially engineered phishing
attack, or a denial-of-service (DoS) attack, etc.  When an incident is
detected, or suspected, there may be a need for organizations to
collaborate. This collaboration effort may take several forms including
joint analysis, information dissemination, and/or a coordinated
operational response.  Examples of the response may include filing a
report, notifying the source of the incident, requesting that a third
party resolve/mitigate the incident, sharing select indicators of
compromise, or requesting that the source be located. By sharing
indicators of compromise associated with an incident or possible threat,
the information becomes a proactive defense for others that may include
mitigation options. The Incident Object Description Exchange Format
(IODEF) defines an information framework to represent computer and
network security incidents; IODEF is defined in RFC 5070 and has been
extended by RFC 5091 to support phishing reports; RFC 6484 provides a
template for defining extensions to IODEF. Real-time Inter-network
Defense (RID) defines a protocol to facilitate sharing computer and
network security incidents; RID is defined in RFC 6545, and RID over
HTTPS is defined in RFC 6546.

The MILE WG is focused on two areas: IODEF, the data format and extensions
to represent incident and indicator data, and RID, the policy and
transport for structured data.  With respect to IODEF, the working group
will:

- Revise the IODEF document to incorporate enhancements and extensions
based on operational experience. Use by Computer Security Incident
Response Teams (CSIRTs) and others has exposed the need to extend IODEF
to support industry specific extensions, use case specific content, and
representations to associate information related to represented threats
(system, threat actors, campaigns, etc.).  The value of information
sharing has been demonstrated and highlighted at an increasing rate
through the success of the Information Sharing and Analysis Centers
(ISACs) and the recent cyber security Executive Order in the US.
International groups, such as the Multinational Alliance for
Collaborative Cyber Situational Awareness (CCSA) have been running
experiments to determine what data is useful to exchange between
industries and nations to effectively mitigate threats.  These and
other groups have identified, or are working to develop, data
representations relevant to their use cases that may complement/extend
IODEF or be useful to exchange using RID and related transport protocols.

- Provide guidance on the implementation and use of IODEF to aid
implementers in developing interoperable specifications.

With respect to RID, the working group will:

- Define a resource-oriented approach to cyber security information
sharing that follows the REST architectural style. This mechanism will
allow CSIRTS to be more dynamic and agile in collaborating with a
broader, and varying constituency.

- Provide guidance on the implementation and use of RID transports based
on use cases.  The guidance document will show the relationship between
transport options (RID + RID transport and IODEF/RID + ROLIE) and may
identify the need for additional transport bindings.

- RID may require modifications to address data provenance, additional
policy options, or other changes now that there are multiple
interoperable implementations of RFC6545 and RFC6546.  With the RID
implementations in the open source community, increased use and
experimentation may demonstrate the need for a revision. 


Milestones:
  Aug 2013 - Submit a draft on the representation of Structured
Cybersecurity Information in IODEF to the IESG for publication as a
Standards Track RFC
  Aug 2013 - Submit a draft on enumeration reference formats for IODEF to
the IESG for publication as a Standards Track RFC
  

WG Action: Formed Large-Scale Measurement of Broadband Performance (lmap)

2013-06-28 Thread The IESG
A new IETF working group has been formed in the Operations and Management
Area. For additional information please contact the Area Directors or the
WG Chair.

Large-Scale Measurement of Broadband Performance (lmap)

Current Status: Proposed WG

Chair:
  Dan Romascanu droma...@avaya.com

Assigned Area Director:
  Benoit Claise bcla...@cisco.com

Mailing list
  Address: l...@ietf.org
  To Subscribe: http://www.ietf.org/mailman/listinfo/lmap
  Archive: http://www.ietf.org/mail-archive/web/lmap

Charter:

The Large-Scale Measurement of Broadband Performance (LMAP) working group
standardizes the LMAP measurement system for performance measurements of
broadband access devices such as home and enterprise edge routers,
personal computers, mobile devices, set top box, whether wired or
wireless.

Measuring portions of the Internet on a large scale is essential for
accurate characterizations of performance over time and geography, for
network diagnostic investigations by providers and their users, and for
collecting information to support public policy development. The goal is
to have the measurements (made using the same metrics and mechanisms)
 for a large number of points on the Internet, and to have the results
collected and stored in the same form.
 
The LMAP working group is chartered to specify an information model, the
associated data models, and select/extend one or more protocols for the
secure communication: 
1.  A Control Protocol, from a Controller to instruct Measurement Agents
what performance metrics to measure, when to measure them, how/when to
report the measurement results to a Collector,
2.  A Report Protocol, for a Measurement Agent to report the results to
the Collector. 
The data models should be extensible for new and additional measurements.
LMAP will consider re-use of existing data modeling languages.

A key assumption constraining the initial work is that the measurement
system is under the control of a single organization (for example, an
Internet Service Provider or a regulator). However, the components of an
LMAP measurement system can be deployed in administrative domains that
are not owned by the measuring organization. Thus, the system of
functions deployed by a single organization constitutes a single LMAP
domain which may span ownership or other administrative boundaries. 

The LMAP architecture will allow for measurements that utilize either
IPv4 or IPv6, or possibly both. Devices containing Measurement Agents may
have several interfaces using different link technologies. Multiple
address families and interfaces must be considered in the Control and
Report protocols.

It is assumed that different organizations' LMAP measurement domains can
overlap, and that active measurement packets appear along with normal
user traffic when crossing another organization's network. There is no
requirement to specify a mechanism for coordination between the LMAP
measurements in overlapping domains (for instance a home network with MAs
on the home gateway, set top box and laptop). In principle, there are no
restrictions on the type of device in which the MA function resides. 

Both active and passive measurements are in scope, although there may be
differences in their applicability to specific use cases, or in the
security measures needed according to the threats specific to each
measurement category. LMAP will not standardize performance metrics.

The LMAP WG will consider privacy as a core requirement and will ensure
that by default measurement and collection mechanisms and protocols
standardized  operate in a privacy-sensitive manner, for example,
ensuring that measurements are not personally identifying except where
permission for such has been granted by identified subjects. Note that
this does not mean that all uses of LMAP need to turn on all privacy
features but it does mean that privacy features do need to be at least
well-defined.

Standardizing control of end users' Measurement Agents is out of scope.
However, end users can obtain an MA to run measurement tasks if desired
and report their results to whomever they want, most likely the supplier
of the MA. This provides for user-initiated on-demand measurement, which
is an important component of the ISP use case.

Inter-organization communication of results is out of scope of the LMAP
charter.

The management protocol to bootstrap the MAs in measurement devices is
out of scope of the LMAP charter. 

Service parameters, such as product category, can be useful to decide
which measurements to run and how to interpret the results. These
parameters are already gathered and stored by existing operations
systems. 
Discovering the service parameters on the MAs or sharing the service
parameters between MAs is out of scope. However, if the service
parameters are available to the MAs, they could be reported with the
measurement results in the Report Protocol.

Deciding the set of 

UPDATED WG Review: Security Automation and Continuous Monitoring (sacm)

2013-06-28 Thread The IESG
A new IETF working group has been proposed in the Security Area. The IESG
has not made any determination yet. The following draft charter was
submitted, and is provided for informational purposes only. Please send
your comments to the IESG mailing list (iesg at ietf.org) by 2013-07-08.

Security Automation and Continuous Monitoring (sacm)

Current Status: Proposed WG

Chairs:
  Dan Romascanu droma...@avaya.com
  Adam Montville adam.montvi...@cisecurity.org

Assigned Area Director:
  Sean Turner turn...@ieca.com

Mailing list
  Address: s...@ietf.org
  To Subscribe: https://www.ietf.org/mailman/listinfo/sacm
  Archive: http://www.ietf.org/mail-archive/web/sacm/

Charter:

Securing information and the systems that store, process, and transmit
that information is a challenging task for organizations of all sizes,
and many security practitioners spend much of their time on manual
processes. Standardized protocols to collect, verify, and update system
security configurations would allow this process to be automated, which
would free security practitioners to focus on high priority tasks and
should improve their ability to prioritize risk based on timely
information about threats and vulnerabilities.  Because this is a broad
area of work, this working group will first address enterprise use cases
pertaining to the assessment of endpoint posture (using the definitions
of Endpoint and Posture from RFC 5209).

The working group will, whenever reasonable and possible, reuse existing
protocols, mechanisms, information and data models. Of particular
interest to this working group are existing industry standards,
preferably IETF standards, that could support automation of asset,
change, configuration, and vulnerability management.

The working group will define:

1. A set of standards to enable assessment of endpoint posture. This area
of focus provides the necessary language and data format specifications.

  An example of such an endpoint posture assessment could include,
  but is not limited to, the following general steps:

  1. Identify endpoints

  2. Determine specific endpoint elements to assess

  3. Collect actual value of elements

  4. Compare actual element values to expected element values

  5. Report results

  This approach to endpoint posture collection enables multiple policies
  to be evaluated based on a single data collection activity. Policies
  will be evaluated by comparing available endpoint posture data 
  according to rules expressed in the policy. Typically, these rules 
  will compare the actual value against an expected value or range for 
  specific posture elements. In some cases, the posture element may 
  pertain to software installation state, in which case the actual and 
  expected values may be "installed" or "not installed". Evaluation of 
  multiple posture elements may be combined using Boolean logic.
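
The evaluation step described above can be illustrated with a short,
purely hypothetical Python sketch: a single collected set of posture data
is checked against several policies, each rule comparing an actual element
value to an expected value or range, and rule results are combined with
Boolean logic. The rule and policy structures are assumptions made for
illustration only, not SACM data formats.

# Hypothetical policy evaluation over one data collection activity.
def check_rule(posture, rule):
    actual = posture.get(rule["element"])
    if "expected" in rule:                  # exact-value comparison
        return actual == rule["expected"]
    low, high = rule["range"]               # range comparison
    return actual is not None and low <= actual <= high

def evaluate_policy(posture, policy):
    results = [check_rule(posture, r) for r in policy["rules"]]
    combine = policy.get("combine", "AND")  # Boolean combination of rule results
    return all(results) if combine == "AND" else any(results)

# One collection activity, evaluated against two independent policies.
posture = {"firewall": "installed", "os-patch-level": 17}
policies = [
    {"rules": [{"element": "firewall", "expected": "installed"}]},
    {"combine": "OR",
     "rules": [{"element": "os-patch-level", "range": (15, 99)},
               {"element": "antivirus", "expected": "installed"}]},
]
for p in policies:
    print(evaluate_policy(posture, p))      # True, True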

2. A set of standards for interacting with repositories of content
related to assessment of endpoint posture.

  Repository protocols are needed to store, update, and retrieve
  configuration checks and other types of content required for posture
  assessment (see step 2 above). A content repository is expected to
  house specific versions of checklists (i.e., benchmarks), which may be
  required to satisfy different use cases (e.g., asset inventory,
  configuration settings, or vulnerabilities). In addition, content 
  repositories are expected to store up-to-date dictionaries of specific 
  enumerations, such as those used for configuration element 
  identifiers, asset classifications, vulnerability identifiers, and so 
  on.
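
Again purely as an illustration (and not a deliverable of the group), a
minimal repository of the kind described above might expose store and
retrieve operations for versioned checklists alongside updatable
enumeration dictionaries; the Python class below assumes nothing beyond
the behaviour sketched in the paragraph, and its names are hypothetical.

# Hypothetical content repository: versioned checklists plus enumeration
# dictionaries; class and method names are illustrative assumptions.
class ContentRepository:
    def __init__(self):
        self._checklists = {}    # (name, version) -> checklist content
        self._dictionaries = {}  # enumeration name -> {identifier: entry}

    def store_checklist(self, name, version, content):
        self._checklists[(name, version)] = content

    def get_checklist(self, name, version):
        return self._checklists.get((name, version))

    def update_dictionary(self, enumeration, identifier, entry):
        self._dictionaries.setdefault(enumeration, {})[identifier] = entry

    def lookup(self, enumeration, identifier):
        return self._dictionaries.get(enumeration, {}).get(identifier)

repo = ContentRepository()
repo.store_checklist("router-baseline", "1.2", {"checks": ["ssh-v2-only"]})
repo.update_dictionary("vulnerability-ids", "VULN-0001", "example entry")
print(repo.get_checklist("router-baseline", "1.2"))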

This working group will produce the following deliverables:

- A document or documents describing the SACM Architecture. This will
include protocol requirements and their associated use cases as well as a
description of how SACM protocols fit together into a system. This may be
a single standards-track document, or may be split up as the WG sees fit.
- A standards-track document specifying the information model for
endpoint posture data.
- A standards-track document specifying a protocol and data format for
retrieving configuration and policy information for driving data
collection and analysis.
- A standards-track document specifying a protocol and data format for
collecting actual endpoint posture.

The working group will create an Internet-Draft documenting the existing
work in the IETF and in other organizations that might be used as-is or
as a starting point for developing solutions to the SACM requirements. 
The working group may decide to publish this document as an Informational
RFC, but this is not a mandatory deliverable.

The working group will work in close coordination with other WGs in the
IETF (including, but not limited to, MILE and NEA) in order to create
solutions that do not overlap and can be used or re-used to meet the
goals of more than one working group.

The working group will communicate with 

New Non-WG Mailing List: vmeet -- IETF remote participation meeting services discussion

2013-06-28 Thread IETF Secretariat
A new IETF non-working group email list has been created.

List address: vm...@ietf.org
Archive: http://www.ietf.org/mail-archive/web/vmeet/current/maillist.html
To subscribe: https://www.ietf.org/mailman/listinfo/vmeet

Purpose: Explore, specify, and develop improved services for remote 
participation in IETF meetings, with a view to eventually supporting virtual 
meetings that have no physical venue. A modest form of virtual meeting is a 
small-group conference call, such as those regularly conducted by the IESG, 
IAB, and IAOC.
 
For additional information, please contact the list administrators.


RFC 6956 on Forwarding and Control Element Separation (ForCES) Logical Function Block (LFB) Library

2013-06-28 Thread rfc-editor
A new Request for Comments is now available in online RFC libraries.


RFC 6956

Title:  Forwarding and Control Element Separation 
(ForCES) Logical Function Block (LFB) Library 
Author: W. Wang, E. Haleplidis,
K. Ogawa, C. Li,
J. Halpern
Status: Standards Track
Stream: IETF
Date:   June 2013
Mailbox:wmw...@zjsu.edu.cn, 
eha...@ece.upatras.gr, 
ogawa.kent...@lab.ntt.co.jp,  
chuanhuang...@zjsu.edu.cn, 
joel.halp...@ericsson.com
Pages:  111
Characters: 236056
Updates/Obsoletes/SeeAlso:   None

I-D Tag:draft-ietf-forces-lfb-lib-12.txt

URL:http://www.rfc-editor.org/rfc/rfc6956.txt

This document defines basic classes of Logical Function Blocks (LFBs)
used in Forwarding and Control Element Separation (ForCES).  The
basic LFB classes are defined according to the ForCES Forwarding
Element (FE) model and ForCES protocol specifications; they are
scoped to meet requirements of typical router functions and are
considered the basic LFB library for ForCES.  The library includes
the descriptions of the LFBs and the XML definitions.

This document is a product of the Forwarding and Control Element Separation 
Working Group of the IETF.

This is now a Proposed Standard.

STANDARDS TRACK: This document specifies an Internet standards track
protocol for the Internet community, and requests discussion and suggestions
for improvements.  Please refer to the current edition of the Internet
Official Protocol Standards (STD 1) for the standardization state and
status of this protocol.  Distribution of this memo is unlimited.

This announcement is sent to the IETF-Announce and rfc-dist lists.
To subscribe or unsubscribe, see
  http://www.ietf.org/mailman/listinfo/ietf-announce
  http://mailman.rfc-editor.org/mailman/listinfo/rfc-dist

For searching the RFC series, see http://www.rfc-editor.org/rfcsearch.html.
For downloading RFCs, see http://www.rfc-editor.org/rfc.html.

Requests for special distribution should be addressed to either the
author of the RFC in question, or to rfc-edi...@rfc-editor.org.  Unless
specifically noted otherwise on the RFC itself, all RFCs are for
unlimited distribution.


The RFC Editor Team
Association Management Solutions, LLC




RFC 6971 on Depth-First Forwarding (DFF) in Unreliable Networks

2013-06-28 Thread rfc-editor
A new Request for Comments is now available in online RFC libraries.


RFC 6971

Title:  Depth-First Forwarding (DFF) in Unreliable 
Networks 
Author: U. Herberg, Ed.,
A. Cardenas,
T. Iwao,
M. Dow, 
S. Cespedes
Status: Experimental
Stream: IETF
Date:   June 2013 
Mailbox:ulrich.herb...@us.fujitsu.com, 
alvaro.carde...@me.com, 
smartnetpro-iwao_...@ml.css.fujitsu.com,  
m@freescale.com, 
scespe...@icesi.edu.co
Pages:  41
Characters: 95276
Updates/Obsoletes/SeeAlso:   None

I-D Tag:draft-cardenas-dff-14.txt

URL:http://www.rfc-editor.org/rfc/rfc6971.txt

This document specifies the Depth-First Forwarding (DFF) protocol for
IPv6 networks, a data-forwarding mechanism that can increase
reliability of data delivery in networks with dynamic topology and/or
lossy links.  The protocol operates entirely on the forwarding plane
but may interact with the routing plane.  DFF forwards data packets
using a mechanism similar to a depth-first search for the
destination of a packet.  The routing plane may be informed of
failures to deliver a packet or loops.  This document specifies the
DFF mechanism both for IPv6 networks (as specified in RFC 2460) and
for mesh-under Low-Power Wireless Personal Area Networks (LoWPANs),
as specified in RFC 4944.  The design of DFF assumes that the
underlying link layer provides means to detect if a packet has been
successfully delivered to the Next Hop or not.  It is applicable for
networks with little traffic and is used for unicast transmissions
only.
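
To give a rough feel for the forwarding behaviour described in the
abstract, here is a toy Python sketch of the depth-first idea only; it is
not the RFC 6971 packet format or state machine, and the send_ok callback
simply stands in for the assumed link-layer delivery confirmation.

# Toy depth-first forwarding: try candidate next hops one at a time,
# fall back to the next candidate on delivery failure, and avoid loops
# with a visited set (a stand-in for the protocol's loop detection).
def dff_forward(node, destination, topology, send_ok, visited=None):
    """Return True if the packet reaches the destination from `node`."""
    if visited is None:
        visited = set()
    visited.add(node)
    if node == destination:
        return True
    for next_hop in topology.get(node, []):
        if next_hop in visited:
            continue                       # loop avoidance
        if not send_ok(node, next_hop):
            continue                       # link layer reported a failure
        if dff_forward(next_hop, destination, topology, send_ok, visited):
            return True
    return False                           # all candidates exhausted

topology = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
lossy = lambda src, dst: not (src == "A" and dst == "B")  # A->B link is down
print(dff_forward("A", "D", topology, lossy))  # A falls back to C and succeeds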


EXPERIMENTAL: This memo defines an Experimental Protocol for the
Internet community.  It does not specify an Internet standard of any
kind. Discussion and suggestions for improvement are requested.
Distribution of this memo is unlimited.

This announcement is sent to the IETF-Announce and rfc-dist lists.
To subscribe or unsubscribe, see
  http://www.ietf.org/mailman/listinfo/ietf-announce
  http://mailman.rfc-editor.org/mailman/listinfo/rfc-dist

For searching the RFC series, see http://www.rfc-editor.org/rfcsearch.html.
For downloading RFCs, see http://www.rfc-editor.org/rfc.html.

Requests for special distribution should be addressed to either the
author of the RFC in question, or to rfc-edi...@rfc-editor.org.  Unless
specifically noted otherwise on the RFC itself, all RFCs are for
unlimited distribution.


The RFC Editor Team
Association Management Solutions, LLC