Re: No MBONE access

2004-03-04 Thread George Michaelson

Just a pov. 

I love MBONE. I see no reason not to try and have it.

For APNIC member meetings we decided to go with QuickTime RTSP: and SDP: URL
encoded streams. They worked well: one onsite, one offsite, to avoid congestion.

They scale really badly to hundreds of people. But let's be realistic here:
the IETF isn't actually the same as Kylie's underwear show. It has a small set
of people who could live off refeeds if need be.

So, while it has costs, I say: why not do both?

Also, the rtsp: and sdp: instances would have worked over multicast. MBONE is
not an encoding, it's a point of view. QuickTime over MBONE is fine for me, and
QuickTime alongside MBONE is fine for me too.
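
For concreteness, a minimal SDP description for a multicast audio session of
the kind we ran might look like this (all addresses, ports and names here are
made up, not the ones we actually used):

    v=0
    o=- 3254567890 3254567890 IN IP4 192.0.2.10
    s=APNIC member meeting, channel 1
    c=IN IP4 224.2.127.254/127
    t=0 0
    m=audio 49170 RTP/AVP 0

Any RTP-capable client can join the group straight from that description; the
rtsp: case just wraps the same session in a control channel.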

Judging from Jabber, we have 20 people who would really have meshed this way
if needed. We could have done p2p feeds if needed. There are labour costs and
effort involved, but technology-wise, it works.

-George



Re: IETF59 Network Tear Down Starts at 12:00 Noon Friday

2004-03-04 Thread Steven M. Bellovin
In message [EMAIL PROTECTED], Romascanu, Dan (Dan) writes:
> IETF 59 NOC Crew,
>
> Please allow me to THANK YOU for the excellent support that you provided
> during the whole IETF meeting. I have sometimes in the past complained about
> the network conditions during the IETF meetings. Your work was not only
> technically competent, but also open in acknowledging problems and quick in
> finding solutions.

Let me echo that.  The network was far better than we've seen at other 
recent IETFs.  My congratulations and thanks.

--Steve Bellovin, http://www.research.att.com/~smb





Re: IETF59 Network Tear Down Starts at 12:00 Noon Friday

2004-03-04 Thread Aaron Falk
Woohyong-

Thank you for such a well-run network!

One question: do you know if the free wireless in the rooms (essid: 
lottehotel) will continue after the IETF network teardown?

--aaron

On Mar 4, 2004, at 4:43 PM, Woohyong Choi wrote:

Seoul meeting participants,

We'll begin tearing down gear and wires right after 12:00 noon
on Friday. The terminal room will also be closed at 12:00.
Access points on the 1st floor will probably be the last ones to
remain active, but they still have to be removed during the afternoon.
We hope you enjoyed the network access during your stay here.

Please make a safe trip back!

Regards,
Woohyong Choi, on behalf of the entire IETF59 NOC crew




Re: Community Collaboration, versus Central Management

2004-03-04 Thread Harald Tveit Alvestrand
Dave,

I'm trying to give a constructive response near the end - and it turns out
that a lot of the things you wish for match up with the things I have tried
to get us started on. Those who wish to skip the name game can search
for #POSITIVE.

--On 4. mars 2004 16:39 +0900 Dave Crocker [EMAIL PROTECTED] wrote:

Harald,

HTA Dave,

HTA could you please quote people by name?

This will probably devolve into a yes-you-did, no-i-didn't exchange,
thereby nicely distracting us all from the focus of my posting.
However, just so no one thinks that I constructed these:

 However things have changed. That change was nicely summarized a
 couple of years ago by someone who is now an area director. He
 said that, really, working groups work for the area director. The
 IESG really makes the standards.
This was said by Jon Peterson, in the IMPP working group, before he
became an area director.
Thanks!

Frankly, I was so shocked by the statement that I wanted to go talk to the 
relevant AD and figure out where his head was at - asking the question of 
each of them in turn didn't seem optimal.

 If someone says they do not trust you because they usually disagree
 with you, then they are missing this essential point.  And, indeed,
 that appears to be a pervasive problem in the IETF today.
In a public mail thread concerning trust, you said that you do not
trust me because you disagree with me so often.
Thanks for pointing this out, and giving me the chance of quoting what I 
said at the time.

According to
http://www.alvestrand.no/pipermail/problem-statement/2003-June/002445.html,
I said:

Trust networks may be the wrong term too; while I don't trust Dave
Crocker's proposals for action that much (I disagree with him too
often!),  I do trust him to care about many of the same things I do - and
we do have  a long history together. So he's part of my network in a
way that many  others aren't - but calling it a trust network may be
simplistic. (apologies for using you as a named example again, Dave!)
Google is wonderful.

My mentioning these will probably be taken as a personal response, but
I made a point of not doing the citations in my original postings,
hoping to avoid this concern.
Thanks for coming forth with those.

I think the quote above illustrates well why I like to use quotes, and make 
source data for my assertions available where possible: I agree with a lot 
of what you say about the situation we want to have, but believe your 
statements about the current situation are incorrect. I think we're a lot 
closer to where you want to be than you claim we are.

Did you have any comments about the constructive points in my posting,
rather than the points being offered as background?
#POSITIVE

Since I have come off my adrenaline kick from seeing the first quote you 
quoted, I think I should go back to my preparations for tonight's plenary, 
but I'll attempt a response.

My own version of the urgent needs is:

   1.  Better quality work, where quality covers such things as
   utility and efficiency of the design.
Agreed. See the chartering of ICAR and the discussions we have there.

   2.  More timely work, so its consumers get it when they need it.
Agreed. See the PROTO team for one example of trying to speed things up.
But the majority of a document's IETF time is still spent in the WGs.

   3.  More accountable lines (and processes) of IETF management, so
   that things happen predictably, appropriately, and in the best
   interests of the IETF community
Agreed. However, we have chosen to build an organization that depends on 
human judgment, not mechanistic decision-making - and I think appropriately 
so. Humans are notoriously hard to predict, so we should be careful that 
our striving for predictability does not interfere with other values we
hold high. Again, the devil is in the details.

   4.  Stable funding, so that the IETF can attend to its work without
   economic distraction.
Agreed, with the proviso that we also need a clear, understandable and 
accountable structure for handling the economic side of the IETF operation; 
that's the essence of the advcomm and adminrest documents.

I do believe our goals are compatible.

  Harald




Re: MBONE access?

2004-03-04 Thread Iljitsch van Beijnum
On 4-mrt-04, at 6:14, Joel Jaeggli wrote:

> There's a reasonable cross-section of clients for most platforms that
> supports a set of mostly interoperable codecs and transports. It is
> possible to source with real/darwin streaming server/videolan a source
> that will be visible to users of quicktime/real/vlc and some other clients
> via multicast or unicast transports.

Right. And unless I'm mistaken, streaming servers will happily take 
either a unicast or a multicast feed and reflect this feed over one of 
several transports (including some that will bypass NAT).

> The transport is an issue. 500Kb/s isma mpeg-4 streams have a real cost if
> you want 200 of them...

That's 100 Mbps. There are a lot of outfits that care about the IETF 
for which 100 Mbps is small change. (The latest episode of the Steve 
Jobs Show had 60,000 people watching at 300 kbps...) And 200 x 24 kbps 
audio only is just 5 Mbps, which would even be doable from the meeting 
site. A reasonable charge to cover the costs for online participation 
wouldn't be out of the question either, IMO.
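
To make the arithmetic explicit (a two-line sketch, figures as above):

    # Aggregate bandwidth for N simultaneous unicast streams.
    def aggregate_mbps(streams, kbps):
        return streams * kbps / 1000.0

    print(aggregate_mbps(200, 500))  # 500 kb/s video streams -> 100.0 Mb/s
    print(aggregate_mbps(200, 24))   # 24 kb/s audio streams  ->   4.8 Mb/s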

> The thing I consider most unworkable frankly is low-bitrate video... I
> don't consider 40/80/100Kb/s streams terribly usable regardless of the
> codec chosen. I want to be able to read the slides, I want to be able to
> hear the speakers from someplace other than the bottom of a barrel, and I
> want to be able to discern who's standing at the mic.

Our little multi6 experiment taught me that low quality video indeed
isn't all that useful, and good quality video isn't simple. However,
workable quality audio is both simple and useful. So if good video
can't be done, forget about it altogether and do audio only. If the
speakers get in the habit of putting their slides online before a
session and either they or the jabber scribe say which slide is up,
that part is covered.

Another option would be slow scan tv: rather than stream relatively 
low quality moving video, why not send out periodic high quality 
stills? The advantage here is that there is no set rate at which those 
have to load so there is no binary good/none quality problem as with 
streaming.
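
The receiving side of such a stills scheme would be trivial - a sketch,
polling a (hypothetical) URL where the meeting side publishes the most
recent frame:

    import time
    import urllib.request

    STILL_URL = "http://example.org/ietf/room1/latest.jpg"  # hypothetical
    INTERVAL = 15  # seconds; nothing breaks if a fetch is late or skipped

    while True:
        try:
            frame = urllib.request.urlopen(STILL_URL).read()
            with open("latest.jpg", "wb") as f:
                f.write(frame)
        except OSError:
            pass  # a missed still just means a slightly older picture
        time.sleep(INTERVAL)

A late or lost fetch degrades to an older picture, which is exactly the
no-set-rate property argued for above.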




IETF59 Hotel's Wireless Network

2004-03-04 Thread Woohyong Choi
As the IETF59 NOC does not have any control over the hotel's facilities,
all we could do was ask, and what the hotel says is ...

The hotel's wireless will be free of charge until 3 PM on Friday.

We regret that we could not arrange better terms for you.

Regards,
Woohyong Choi / IETF59 NOC Team

P.S. After I announced that SMTP access to smtp.ietf59.or.kr
is available from the hotel's network, I realized later today
that one access list had not been updated. It should
work now until we shut down our servers tomorrow afternoon.



Re: Community Collaboration, versus Central Management

2004-03-04 Thread Dave Crocker
Harald,

HTA Thanks for pointing this out, and giving me the chance of quoting what I
HTA said at the time...

You might also want to consider how your posting was interpreted.

Of course, it's probably just me who has these erroneous reactions
to what you and other IESG members intend. I'm sure no one else has
such problems with over- or mis-interpretation of anything said by
folks on the IESG.


 My own version of the urgent needs is:

I am hoping that folks will take my comments as a whole, rather than
feeling the need to do a step-by-step bookkeeping process that
proves we are or are not making all the right changes.

My intent was to suggest some perspective and questions for things
being proposed.


3.  More accountable lines (and processes) of IETF management, so
that things happen predictably, appropriately, and in the best
interests of the IETF community

HTA Agreed. However, we have chosen to build an organization that depends on
HTA human judgment, not mechanistic decision-making

I cannot even guess what prompts the "however" in your response.  For
that matter, I am sorry but I did not see how the rest of your comment
related to the concern I was stating.

On the off-chance you are suggesting that the involvement of humans
means that we cannot seek to have -- and even to demand, and even
better to achieve -- a more predictable decision process, then we have
very different experiences with management processes.

/d
--
 Dave Crocker dcrocker-at-brandenburg-dot-com
 Brandenburg InternetWorking www.brandenburg.com
 Sunnyvale, CA  USA tel:+1.408.246.8253




Re: IETF59 Hotel's Wireless Network

2004-03-04 Thread Masataka Ohta
Woohyong Choi wrote:

> As the IETF59 NOC does not have any control over the hotel's facilities,
> all we could do was ask, and what the hotel says is ...
>
> The hotel's wireless will be free of charge until 3 PM on Friday.

Then, why don't you shut down the IETF WLAN service today?

The hotel one is good enough.

> We regret that we could not arrange better terms for you.

We do appreciate your effort.

Usually, free network access (hotel-provided or not) was shut down
at noon on Friday, which was inconvenient for some foreigners,
for example those with Saturday flights.
		Masataka Ohta





Re: IETF59 Hotel's Wireless Network

2004-03-04 Thread Rob Austein
At Thu, 04 Mar 2004 19:51:31 +0900, Masataka Ohta wrote:

 Then, why don't you shutdown IETF WLAN service today?
 
 Hotel one is good enough.

Ohta-san: given what I saw as part of the NOC team at IETF55, I very
much doubt that the hotel's WiFi could handle 1000+ users in one
ballroom; that is still very much the bleeding edge for 802.11.

Woohyong: thank you and the team very much for a very good IETF
network.



Re: IETF59 Network Tear Down Starts at 12:00 Noon Friday

2004-03-04 Thread JORDI PALET MARTINEZ
Hi,

I totally agree, the network has been working better than I can ever remember.

I think it would be valuable if you could document what you did, so we can take
advantage of it in future meetings. It is probably not too much extra work, maybe a
couple of pages cleaning up your notes and experiences?

By the way, I think that this is something that we can suggest for every host:
providing information about the network setup, traffic statistics, etc.

Regards,
Jordi


- Original Message - 
From: Romascanu, Dan (Dan) [EMAIL PROTECTED]
To: Woohyong Choi [EMAIL PROTECTED]; [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, March 04, 2004 4:55 PM
Subject: RE: IETF59 Network Tear Down Starts at 12:00 Noon Friday


IETF 59 NOC Crew,

Please allow me to THANK YOU for the excellent support that you provided during the 
whole IETF meeting. I have sometimes in the past complained about the network 
conditions during the IETF meetings. Your work was not only technically competent, but 
also open in acknowledging problems and quick in finding solutions. This was one of 
the many reasons that make this IETF 59 experience in Korea one of the best I ever 
had. 


Regards,

Dan



 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Woohyong Choi
 Sent: 04 March, 2004 9:43 AM
 To: [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Subject: IETF59 Network Tear Down Starts at 12:00 Noon Friday
 
 
 Seoul meeting participants,
 
 We'll begin tearing down gear and wires right after 12:00 noon
 on Friday. The terminal room will also be closed at 12:00.
 
 Access points on the 1st floor will probably be the last ones to
 remain active, but they still have to be removed during the afternoon.
 
 We hope you enjoyed the network access during your stay here.
 
 Please make a safe trip back!
 
 Regards,
 Woohyong Choi, on behalf of the entire IETF59 NOC crew
 
 






Re: IETF59 Hotel's Wireless Network

2004-03-04 Thread Masataka Ohta
Rob Austein wrote:

>> Then, why don't you shut down the IETF WLAN service today?

I mean today at midnight or so.

>> The hotel one is good enough.

> Ohta-san: given what I saw as part of the NOC team at IETF55, I very
> much doubt that the hotel's WiFi could handle 1000+ users in one
> ballroom; that is still very much the bleeding edge for 802.11.

1000+?

I've heard that only about 1/4 of the participants, which is a
lot less than 4000, are from the US, and none of them will be in
a ballroom tomorrow.
And I expect the number to decrease further tomorrow.

> Woohyong: thank you and the team very much for a very good IETF
> network.

I fully agree with you here, except that I also thank you. :-)

		Masataka Ohta




On supporting NAT, was: Re: MBONE access?

2004-03-04 Thread Iljitsch van Beijnum
On 4-mrt-04, at 2:44, Hallam-Baker, Phillip wrote:

> In case you had not noticed there are now tens of millions of NAT
> devices in use. End users are not going to pay $10 per month for
> an extra IP address when they can connect unlimited numbers of
> devices to the net using a $40 NAT box.

Sounds like a conspiracy... ISPs charging orders of magnitude more than 
cost for additional addresses forcing people to use NAT.

> The NAT war has been over for years, NAT won. The problem is that
> the IETF still has not come to terms with that fact.

I don't think anyone has won here, there are just casualties all over 
the place: more work for the IETF and vendors, less functionality for 
the users.

> The Internet was designed to be a network of networks. The core
> architecture is NOT end-to-end, that is a political shibboleth that
> has been imposed later.

Suppose for the sake of argument that the above is a valid position, 
and that we would actually want to make NAT work. What we need to do 
then is extend it such that it becomes possible to address hosts behind 
a NAT from the public internet. That should be perfectly doable, in 
essence we'd be redefining the protocol and port numbers to be part of 
the address. However, this means these must now also be put in the DNS 
and in most other places where IP addresses show up. So this adds up to 
a HUGE amount of new work.
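
(As it happens, DNS SRV records - RFC 2782 - already express exactly this
host-plus-protocol-plus-port idea for the protocols that use them. A
hypothetical zone entry:

    ; _service._proto.name TTL  class type priority weight port target
    _sip._tcp.example.com. 3600 IN    SRV  10       5      5060 gw.example.com.

The HUGE amount of work is getting every application, and every place a
bare address gets passed around, to go through such a lookup.)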

Guess what: we already did pretty much the same thing with IPv6. The 
logical conclusion here is that we can save a lot of time and effort by 
simply adding IPv6 to the mix, as it is just a hair shy of being ready 
for full deployment, while all this stuff to make NAT actually work is 
all over the place.

> In the case of H323 the problem is not just NAT, it is the deranged
> protocol which uses a block of 3000 odd TCP/IP ports to receive
> messages on. There is no way that this is consistent with good
> firewall management.

So now you are complaining because after you install a firewall, it
turns out the thing does its job? The whole idea that decent security
can be had by allowing packets with certain port numbers in them in and
not others is fatally flawed, as it just makes for an arms race between
firewall vendors that inspect deeper and deeper into packets and
firewall bypass utilities that tunnel the real protocol through more
and more layers of accepted protocols.

What we need is corporate zone alarm like functionality, where 
firewalls get to see which applications (and users) are trying to 
communicate with the outside world, rather than guess based on the port 
number in the packet. This would allow some very nice features such as 
blocking vulnerable versions of applications but allowing patched 
versions of the same application.
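
A toy sketch of that kind of policy check, with application identity and
version rather than port number as the key (application names and version
numbers here are invented):

    # Egress policy keyed on (application, version), as described above.
    VULNERABLE = {("ExampleBrowser", "9.2")}  # hypothetical known-bad build

    def allow_outbound(app, version, dst_host, dst_port):
        if (app, version) in VULNERABLE:
            return False   # known-vulnerable build: refuse, force a patch
        return True        # identified, non-vulnerable apps may connect

    print(allow_outbound("ExampleBrowser", "9.3", "192.0.2.80", 80))  # True
    print(allow_outbound("ExampleBrowser", "9.2", "192.0.2.80", 80))  # False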




My Notes from Thursday Night Plenary - IETF 59

2004-03-04 Thread Spencer Dawkins
Please contact me with any updates or corrections - thanks!

Spencer Dawkins



Thursday Plenary - Leslie Daigle

Erik Nordmark - locator/identifier split

This concept is sticking its head up from multiple holes, like a
gopher

- Want to start the entire community thinking about this concept

- One minute summary of multi-homing = sites connected to multiple
ISPs
want to improve failure resilience, load balancing, better quality
connectivity
today - addresses usually assigned by ISPs and aggregated toward
the default-free zone
provider-independent addresses can't be aggregated by ISPs -
doesn't scale to one router per site in the world
IPv6 could use IPv4 technique (but doesn't scale), multiple
addresses per site and per host (but has limitations)
transport/application paths don't survive when there's a problem -
applications have to do recovery on their own
don't know how to do address combinations - RFC 3484 is (only) a
start, because of ingress filtering

- Big questions
separate identifiers and locators?
current IP addresses would become locators - need an indirection
layer, but may not need a new name space
one approach - ID-locator shim layer, but need a protocol to set
up this mapping
works for most applications, but referrals and callbacks need help
not sure where the shim layer goes - IP routing layer, IP endpoint
layer ...
need a new namespace?
FQDN as the key, sets of locators, ephemeral ID/purpose-built key
...
stable long-term? survive ISP changes, etc.
need a managed hierarchy? as IP addresses and domain names are
today? or self-allocated? hard to build a mapping function without
hierarchy
don't know how to make these spaces scale without hierarchy -
could it be made to work in the self-managed case?
how to re-home traffic? plenty of proposals to do this in multi6
how to accommodate ingress filtering? if you can tell each ISP
about all other ISPs, this goes away, but won't happen for
consumer-class services
need to select source/destination locators - when communication
starts, when communication fails. Is there SCTP experience we can
learn from?
how to detect failures? depends on the application. transports
can give hints. routing system may have a role here. Or we could
heartbeat...
new protocols should not make Internet less secure than it is
today, should not become the weakest link
various threats exist (redirection, 3rd-party flooding - and this
includes amplifiers) don't depend on security pixie dust (PKI).
security threats that force hosts to do more work - vulnerable to
DoS attacks
Mobile IP doesn't cleanly separate out multihoming - may be OK for
sites but not for hosts
multi6 and HIP are working groups, HIP-related research group
forming in IRTF
Erik asking for help in several areas

- Comments
what about NEMO? NEMO is a consumer for multihoming, not an
inventor
does HIP need hierarchy to scale? getting into details here
thank you for reintroducing my proposal four years later...
multihoming and mobility are mostly unrelated ... ???
renewal of multi6 working group meeting in plenary
structured and unstructured identity space - birthday problem is
real, even if it's 128 bits, for unstructured identity space
separation between who and where is turning into a giant NAT - can
we remember peer-to-peer applications in use cases?
this is an architected NAT when we rewrite headers
tight presentation in a tough problem space
multihoming and mobility are almost the same ... !!!
presentation assumed new identifier space - please be skeptical
about this in your work - could have different identifiers at startup
and in the association
may also use the same mechanisms for renumbering
site multihoming, but maybe host multihoming is the interesting
problem
trying to avoid assumption of a new identifier name space, but
it's really hard
need help in understanding the requirements from people in the
room - is this damage control for an application, or a feature?
can you sketch some NON-requirements? path selection based on path
quality, for example - is this really requirement? not an integral
part of the problem
there is complex interaction between IP layer, transport layer,
application layer - solving the problem for TCP, but nothing beyond
that
don't think application layer sees a different interface whether
this is provided in IP layer or transport layer
can applications make use of locator information as well as
identifier information - remember site-local? this is a terrible idea
I think of these solutions as a routing overlay, but this requires
end hosts to participate in the routing protocol - not a bad thing
rewriting things scare me - going down a path that's not
sustainable
mobile IP gives you one identifier, but MIPv6 is working on
bootstrapping, so you're not tied to one 

IETF59 Quick Facts about IETF59 Wireless (was Re: IETF59 Hotel's Wireless Network)

2004-03-04 Thread Woohyong Choi
On Thu, Mar 04, 2004 at 08:41:09PM +0900, Masataka Ohta wrote:
> Rob Austein wrote:
>
>>> Then, why don't you shut down the IETF WLAN service today?
>
> I mean today at midnight or so.

It's basically because we wanted to provide what we had
promised.

The hotel's access points for the banquet area have been shut down
to minimize interference, and they are scheduled to be
brought back up around noon tomorrow before
we shut down the IETF59 network.

>> Ohta-san: given what I saw as part of the NOC team at IETF55, I very
>> much doubt that the hotel's WiFi could handle 1000+ users in one
>> ballroom; that is still very much the bleeding edge for 802.11.
>
> 1000+?
>
> I've heard that only about 1/4 of the participants, which is a
> lot less than 4000, are from the US, and none of them will be in
> a ballroom tomorrow.

For the record, the maximum number of concurrent 11b users was 525,
during the second afternoon session on Monday, while there was a
maximum of 46 11a users during last night's plenary (11b users topped
out around 360 during that time). The number of unique nodes seen so
far is 1295.

These are quick facts made available by the IETF59 NOC's lead
wireless geek, Masafumi OE from the WIDE project.  He is going
to make more information available later.

Regards,
Woohyong Choi / IETF59 NOC Team



Re: MBONE access?

2004-03-04 Thread Robert G. Brown
On Wed, 3 Mar 2004, Ole Jacobsen wrote:

 Paul,
 
 This is simply silly.
 
 What you are saying is that for religious reasons you are unwilling to use
 FREE and widely used tools in order to help us develop our own.
 
 Next thing you'll be telling me PDF is a bad thing.
 
 If you want the IETF to be a place where more people can participate you
 need to ditch some of this religion.
...
  the fact that realmedia and windowsmedia aren't interoperable means that
  we (this community) failed to recognize and address a common need, and
  that the world (including this community) is suffering for it.
 
  compounding this failure by adopting proprietary technology for the primary
  work of this community -- which is interior and published communications --
  would be a bad, bad (bad) thing.

This is not silly, it is just smart in a longer timeframe than you're
thinking in.  Proprietary tools that utilize a proprietary/non-open data
interface are a serious problem for a variety of very sound,
non-religious reasons (as well as for a variety of political and
economic reasons, which is what I think you're calling religious
reasons).  Free is irrelevant to the issue, unless free means open.

   1) Proprietary data formats and long term archiving of any sort of
data are fundamentally incompatible.  Ask anybody who has lived through
the last twenty or thirty years of computer evolution how many documents
they've lost or had to go in and rescue (sometimes at great expense) as
the tools they were built with have disappeared.  Sometimes along with
their vendors.  Other times the vendors simply decided to release a new
version that was sufficiently incompatible that it could no longer
manage the old documents.  I think all of us can remember multiple
instances where this has happened to us -- I personally have lived
through wordstar, pcwrite, wordperfect, word, and several income tax
programs (which are the worst, as one HAS to be able to access the
records up to seven years later, which is a real problem with operating
systems and Moore's Law). There is also the uncertainty even now
surrounding the encumbered mp3 format versus e.g.  the unencumbered
ogg format.

Formats used to encode long-term public records need to be open and
tools to manage those records need to be available from many sources.
So putting up realmedia shows is short-run glitzy and nifty and all that
(even though lots of people won't have the players and cannot play the
media) but it is long run foolish IF the production is intended as any
sort of serious historical or archival record.

   2) Using a proprietary data format that can only be accessed by using
a proprietary tool (even a free one) leaves one vulnerable to all
sorts of shenanigans and hidden costs.  For example, nothing prevents
the vendor from waiting until you have a large amount of valuable data
built up with their format that would be very expensive to convert and
then deciding to charge you.  It's their tool, they can charge you if
and when they please.  Worse, since their tool is generally a closed
source, proprietary object, there are the usual problems with libraries
and compatibility when trying to get the tool to run on the wide range
of platforms it is advertised for.  Free may just refer to the cost of
getting the program, but it may well cost quite a bit of time to install
and maintain it, and time is money.

  3) The Internet has been built on open standards from the very
beginning.  This is absolutely the key to its success and tremendous
degree of universality and functionality to this very day.  Any vendor
can build a mail tool, an ftp tool, a tool using TCP/IP as a transport
layer.  Any vendor can build a browser, an http daemon.  The
specifications for those tools are laid out in RFCs, and modifications
to the open standards proceed in a serious and systematic way.  The
Internet has RESISTED being co-opted by monopolistic vendors who have
sought to introduce their own proprietary and essential layer of
middleware on hidden protocols, although they continue trying. The DMCA
makes it quite possible that if they ever succeed it will be
tremendously expensive and damaging to the entire structure.  You can
call this a religious argument if you like, but I think it is really a
statement of both politics and economics, in this case the politics of
freedom and the economics associated with having lots of choices.

So I'm afraid that I agree with Paul 100% on this one (although I
respectfully disagree with him on others;-).  The IETF absolutely should
avoid using proprietary tools to create documents that they might wish
to archive, and should strongly encourage the development of open
standards for and open document formats (one data format, many tools
both free and non-free) for data transmission on the Internet to ensure
that the Internet NOT be co-opted by any single vendor and that records
that might be archived today can still be accessed ten or twenty years
from now without 

RE: On supporting NAT, was: Re: MBONE access?

2004-03-04 Thread Hallam-Baker, Phillip
 Sounds like a conspiracy... ISPs charging orders of magnitude more than
 cost for additional addresses forcing people to use NAT.

It's called a monopoly.

There are good reasons why ISPs are encouraging their customers
to use NAT: NAT boxes provide a weak firewall capability, and that
in turn significantly reduces exposure to being hacked, which
in turn reduces the cost of chasing zombie machines.

The next generation of cable modems my ISP will be installing will
have a NAT box built in.

  The NAT war has been over for years, NAT won. The problem is that
  the IETF still has not come to terms with that fact.
 
 I don't think anyone has won here, there are just casualties all over 
 the place: more work for the IETF and vendors, less functionality for 
 the users.

Less functionality is a deliberate, conscious choice on the part of
the IETF. Fixing the problem is utterly trivial.

Think of all the machines in my network as a single machine with a
single IP address. The requests to open and close ports to the outside
world are simply RPC requests (without the RPC syntax).
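
As a sketch - the message format here is entirely invented, no real
protocol implied - the inside host's request could be as small as:

    # Hypothetical open-a-port request from an inside host to its NAT box.
    import json, socket

    request = {
        "op": "open",              # or "close"
        "proto": "tcp",
        "inside_host": "10.0.0.7",
        "inside_port": 5060,
        "lifetime_s": 3600,        # mapping expires unless renewed
    }
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.sendto(json.dumps(request).encode(), ("10.0.0.1", 9999))  # the NAT box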


 That should be perfectly doable, in essence we'd be redefining the
 protocol and port numbers to be part of the address. However, this means
 these must now also be put in the DNS and in most other places where IP
 addresses show up. So this adds up to a HUGE amount of new work.

No, the machines do not need to be individually addressable.


 Guess what: we already did pretty much the same thing with IPv6. The
 logical conclusion here is that we can save a lot of time and effort by
 simply adding IPv6 to the mix, as it is just a hair shy of being ready
 for full deployment, while all this stuff to make NAT actually work is
 all over the place.

Simply repeating the claim that IPv6 is the solution to every
issue does not make it so, or advance the deployment of IPv6.
The problem is the intrinsic asymmetry between the value of
an IPv4 and an IPv6 address. An IPv4 address will be visible 
to the world, an IPv6 address will only be visible to other
IPv6 addresses.

The main reason IPv6 is nowhere is the refusal to deal with NAT
except by ideological reactions like the above. NAT is the
way to deploy IPv6. 

The consumer's internal network can then be a NAT'd IPv4 net
and the external network can be IPv6.


  In the case of H323 the problem is not just NAT, it is the deranged
  protocol which uses a block of 3000 odd TCP/IP ports to receive
  messages on. There is no way that this is consistent with good
  firewall management.
 
 So now you are complaining because after you install a firewall, it 
 turns out the thing does its job? 

No, I am complaining about a protocol that is not firewall friendly.

 The whole idea that decent security can be had by allowing packets with
 certain port numbers in them in and not others is fatally flawed,

Your view is not held by the computer security industry. Sure, firewalls
are not infallible. But that does not mean that they do not provide a
valuable service.

One reason everything is migrating to Web Services is that the 
Web Services stack is designed to support a new generation of
firewalls and expose exactly the right data at the perimeter.

 What we need is corporate zone alarm like functionality, where
 firewalls get to see which applications (and users) are trying to
 communicate with the outside world, rather than guess based on the port
 number in the packet. This would allow some very nice features such as
 blocking vulnerable versions of applications but allowing patched
 versions of the same application.

That is not a bad idea. In essence it would mean extending requests
to open incoming AND outgoing ports to the perimeter defense.

Hey Mr firewall, this is Internet Explorer version 9.2, please
allow me to connect up to port 80 on 23.43.2.2




Re: NAT's (was MBONE access?)

2004-03-04 Thread Noel Chiappa
 From: Hallam-Baker, Phillip [EMAIL PROTECTED]

I am generally in agreement with your comments, but I have a few quibbles:


 NAT is the big bad dog here, that is what breaks the end to end
 connectivity.

 The core architecture is NOT end-to-end, that is a political shibboleth
 that has been imposed later.

Actually, back in the dark/golden ages (i.e. before there was SPAM, viruses,
etc - not to mention lots of money), it *was* an end-end network. IP packets
flowed unmolested/unrestricted/unmodified pretty much everywhere. We fell
from that state of grace many moons ago.

It's unfair to blame the loss of end-end on NAT boxes alone. There are a
number of forces which drove against that - and I just listed some of them
above. Firewalls damage end-end - and firewalls are here to keep sites secure.
My home ISP won't let in TCP SYN's for SMTP and HTTP - because they want more
money out of me before they will let me run servers. Etc, etc, etc.

In general, there's what Clark et al called tussle, in a paper that everyone
should check out:

http://www.acm.org/sigs/sigcomm/sigcomm2002/papers/tussle.pdf

in which it turns out to not be in the interests of a number of players to
allow unrestricted end-end - and these forces will exist even without NAT
boxes.


 As for IPv6, the only feasible way to deploy it is by co-opting those
 NAT boxes.

Ah, you just correctly observed that:

 In case you had not noticed there are now tens of millions of NAT
 devices in use.
 ...
 The NAT war has been over for years, NAT won.

That's now *installed base*. The average home owner isn't interested in going
out and buying a new NAT box, or downloading and reblowing the EEPROM code.
We're stuck with the current braindamaged NAT functionality, alas.

The time to do something useful, in terms of making NAT lemonade, would have
been 5-8 years ago, when it was obvious that NAT was going to happen. Had
the IETF moved adroitly, we could have had something useful out in the field
now. However, for a variety of reasons, one of which is, as you correctly
observed:

 IETF still has not come to terms with that fact.

the IETF's NAT phobia - along with the general ludicrousness of any sentence
that includes IETF and adroit motion in it - it didn't happen.

Having done what men could, they suffered as men must. - Thucydides.

Noel



Re: Photos from IETF-59

2004-03-04 Thread Randall R. Stewart (home)
Patrik Fältström wrote:

You can find my photos from IETF-59 in Seoul here:

http://alexandria.paf.se/ietf-59

paf




Patrik:

Thanks so much for the nice pictures... I too could not
attend due to my travel schedule... it is almost like
being there (without the jet-lag :-D)
R

--
Randall R. Stewart
815-477-2127 (office)
815-342-5222 (cell phone)




Re: Perimeter security (was: MBONE access?)

2004-03-04 Thread Noel Chiappa
 From: Hallam-Baker, Phillip [EMAIL PROTECTED]

Oh, one other thing I wanted to rant about:

 I don't know of any serious security professionals who now claim that
 firewalls are bogus or that they will go away as the myth has it.
 Perimeter security is here to stay.

Perimeter security is brittle, inflexible, complex security. You have to have
an understanding of the semantics of an application at the perimeter to check
whether the operation is allowed - which is bad in so many ways I don't feel
like listing them all.

(The old security breach where people had debugging turned on in their SMTP
server is an example of this. It would have flown right through a simplistic
firewall. Yes, we've fixed that one - but imagine e.g. a bug where a field
overflow in an SMTP transaction allows a break-in. Generalize to all security
problems caused by bugs in applications. And there are lots and lots and lots
of lines of code to find bugs in. Yes, the bad guys aren't using that
technique at the moment - because they don't have to. When the easier holes
get plugged, they will.)


The CS community *was* on the right track for the real solution - about
thirty years ago, with Multics' AIM boxes. We made a bad mistake when we
decided workstations were personal machines, so we don't need any of that
security stuff.

Wrongo.

As soon as you connect your personal machine up to a network, and start
interacting in any but the most basic ways, it's not personal any more.
Hell, we should have learned that lesson from floppy viruses. If they could
spread so easily with such a lame transmission medium, how would they do with
instant communication over a network?

And don't get me started on the ignorance/cupidity/stupidity/arrogance/etc of
certain software companies who distributed applications which basically
downloaded arbitrary chunks of code from the network and ran it...

But even without that level of incompetence, bugs in applications aren't
going to go away anytime soon.

Noel



Re: MBONE access?

2004-03-04 Thread Frank Solensky
A nit, perhaps, but:

On Wed, 2004-03-03 at 20:17 -0800, Ole Jacobsen wrote:
 ..Note that Real
 Player is available for multiple platforms for free, ..

The Linux version, last I tried [8.0.3.412], didn't include support for
multicast.





RE: Perimeter security (was: MBONE access?)

2004-03-04 Thread Hallam-Baker, Phillip

 Perimeter security is brittle, inflexible, complex security. You have to
 have an understanding of the semantics of an application at the perimeter
 to check whether the operation is allowed - which is bad in so many ways
 I don't feel like listing them all.

It is only useful in my view if you have a human expert monitoring
the firewall 24x365. That is what we do as a managed service. But
you also need all the intrusion detection, patch management etc.

I would like to go deeper into the corporate nets, but the customers
rarely let this happen.

 Generalize to all security problems caused by bugs in applications. And
 there are lots and lots and lots of lines of code to find bugs in. Yes,
 the bad guys aren't using that technique at the moment - because they
 don't have to. When the easier holes get plugged, they will.)

In a conventional installation there are twin firewalls and the
mail server along with all the other external services is 
situated in the DMZ in between.

It is not foolproof of course, people keep knocking holes
in the perimeter, and don't get me started on viruses. But we
can usually detect when a machine on the internal network
has been zombified and shut it down.

To make it work well you need to have network wide information.
We combine information from all our NOCs and SOCs so that we can
be pro-active.

The firewall by itself does not provide much value.

 The CS community *was* on the right track for the real solution - about
 thirty years ago, with Multics' AIM boxes. We made a bad mistake when we
 decided workstations were personal machines, so we don't need any of
 that security stuff.

I would like to put protocol enforcement modules into hubs.
I like the idea of separating network security into a different
device from the workstation - it gives a much more secure trusted
computing base.


 As soon as you connect your personal machine up to a network, and start
 interacting in any but the most basic ways, it's not personal any more.
 Hell, we should have learned that lesson from floppy viruses.

Yep, it is really funny hearing the Mac guys smugly saying that
there are no viruses on the Mac...

 And don't get me started on the ignorance/cupidity/stupidity/arrogance/etc
 of certain software companies who distributed applications which basically
 downloaded arbitrary chunks of code from the network and ran it...

Hey they were signed chunks of code!

Actually the problems we have had from ActiveX and Java are considerably 
less than from Javascript and worst of all click to execute malicious
code in email.

If you are going to launch applications, Windows has had all the machinery
built in from day one to do it safely: you create a subprocess and
remove the privileges necessary to attack the host machine.

And just why do we allow untrusted code to modify the O/S boot path?


The spammers are not sending out viruses; they are blasting out spam
that contains a trojan. No need to bother reading address books any more!


Phill



Re: MBONE access?

2004-03-04 Thread Daniel Senie
At 09:51 PM 3/3/2004, Joel Jaeggli wrote:
> On Wed, 3 Mar 2004, Ole Jacobsen wrote:
>
>> begin naive question
>>
>> Apart from the eating our own dogfood bit ...
>>
>> Most other Internet events I attend, or follow remotely, use Real
>> audio/video, or sometimes Windows Media Player.
>>
>> Can anyone tell me if there are any TECHNICAL reasons why we can't
>> do this for the IETF meetings?
>
> how about economic and political...
>
> In point of fact it has been done, either by reflecting sources to
> unicast through real servers or by simply generating additional sources.
> The resources to do this on an ongoing basis (namely transit) haven't been
> forthcoming.

On the economic front, there have been offers, at least from me, to PAY for 
remote attendance. Let's face it, I'd have been happy to pay $500 to have 
access to all WG sessions and plenaries via Real Player or other Unicast 
mechanism in Seoul. There's just no way my company can afford the travel 
expenses for me to personally travel to Korea.

The IETF should be interested in individuals, not just large corporations, 
participating. What better way than to provide a way for individuals to 
attend on their own budgets? If that means virtual attendance, then so be it.

Multicast is just not available to many of the folks who might otherwise
attend. I don't care what unicast mechanism is chosen, provided it allows a
wider cross-section of the community access to live participation in the
meetings.






Re: On supporting NAT, was: Re: MBONE access?

2004-03-04 Thread Iljitsch van Beijnum
On 4-mrt-04, at 14:42, Hallam-Baker, Phillip wrote:

> There are good reasons why ISPs are encouraging their customers
> to use NAT: NAT boxes provide a weak firewall capability, and that
> in turn significantly reduces exposure to being hacked, which
> in turn reduces the cost of chasing zombie machines.

Hm, but apparently many ISPs don't care about their customer boxes 
being turned into zombies as they let them persist in their undead 
state for long periods of time. ISPs would probably love it if there 
were no NAT (or proxies, NAT is no more than a simple way to implement 
generic proxying), that way they could extort their customers even 
more.

>> I don't think anyone has won here, there are just casualties all over
>> the place: more work for the IETF and vendors, less functionality for
>> the users.
>
> Less functionality is a deliberate, conscious choice on the part of
> the IETF. Fixing the problem is utterly trivial.

A pointer or two to support the former would be nice. An explanation of 
the latter too, as I'm unfamiliar with this trivial fix.

Or are you referring to the issue that some client/server type 
interactions are broken when the client is behind NAT? This should 
probably be fixable in most cases (I wouldn't call updating huge 
installed bases trivial though), but that still leaves the applications 
and protocols that don't conform to the client/server model, such as 
VoIP.

> Think of all the machines in my network as a single machine with a
> single IP address. The requests to open and close ports to the outside
> world are simply RPC requests (without the RPC syntax).

What if two boxes want to have the same port open? Believe me, you are 
just making the port number part of the address. Internal address 
allocation (your RPC w/o RPC mechanism) is only one of the issues that 
needs attention in this case.

>> Guess what: we already did pretty much the same thing with IPv6. The
>> logical conclusion here is that we can save a lot of time and
>> effort by simply adding IPv6 to the mix, as it is just a hair shy of
>> being ready for full deployment, while all this stuff to make NAT
>> actually work is all over the place.
>
> Simply repeating the claim that IPv6 is the solution to every
> issue does not make it so, or advance the deployment of IPv6.

It's annoying when people complain about a problem when the solution is 
right there in their face.

> The problem is the intrinsic asymmetry between the value of
> an IPv4 and an IPv6 address. An IPv4 address will be visible
> to the world, an IPv6 address will only be visible to other
> IPv6 addresses.

In this case, you only need to be visible to the streaming server... 
Use your IPv4 address for other stuff if you like.

> The main reason IPv6 is nowhere is the refusal to deal with NAT
> except by ideological reactions like the above. NAT is the
> way to deploy IPv6.

I don't follow.

>> The whole idea that decent security can be had by allowing packets
>> with certain port numbers in them in and not others is fatally
>> flawed,
>
> Your view is not held by the computer security industry.

The idea that security is a substance with an independent existence
which underpins a security industry is also fatally flawed. (And one
of the main reasons why I chose to avoid becoming a part of said
industry.) Security is a property of all aspects of life and must be
managed as such.

>> firewalls get to see which applications (and users) are trying to
>> communicate with the outside world, rather than guess based
>> on the port number in the packet.
>
> That is not a bad idea. In essence it would mean extending requests
> to open incoming AND outgoing ports to the perimeter defense.
>
> Hey Mr firewall, this is Internet Explorer version 9.2, please
> allow me to connect up to port 80 on 23.43.2.2

Right! When the firewall considers IE 9.2 safe it may allow it to 
communicate freely, but at a later time, when a vulnerability is found 
and (hopefully) a new version is installed, IE 9.2 isn't allowed to 
connect to the rest of the world anymore, or only to trusted 
destinations. This mechanism would also be great to catch trojans and 
other unauthorized software trying to communicate over the net.




RE: On supporting NAT, was: Re: MBONE access?

2004-03-04 Thread Hallam-Baker, Phillip
 Or are you referring to the issue that some client/server type
 interactions are broken when the client is behind NAT? This should
 probably be fixable in most cases (I wouldn't call updating huge
 installed bases trivial though), but that still leaves the applications
 and protocols that don't conform to the client/server model, such as
 VoIP.

VoIP is still a client-server model when you get down to the individual
communications. For that matter so is UDP.

In any communication you have to have a listener waiting for attention
and a party that initiates the message transfer. This happens even
when you look at CSP type mechanisms where there is a symmetric 
rendezvous.

 What if two boxes want to have the same port open? 

Same thing that happens when two processes on the same box ask for
the same port: the latecomer loses. Either that or the port is
assigned on a different IP address; these could be pooled at the
ISP level.

That is not a problem if you know about this restriction when you design
the protocol to be NAT-listener friendly.

You could even build into the protocol a feature that would allow
a response of the type, 'the port requested is already in use by 
machine at address 10.2.1.1, he is accepting requests to share 
via protocol X on port Y'

 Believe me, you are just making the port number part of the address.
 Internal address allocation (your RPC w/o RPC mechanism) is only one of
 the issues that needs attention in this case.

Needs attention is not the same as 'impossible'.

It is very clear that the negative effects of NAT can be largely
eliminated if there is goodwill and an intention to succeed.

It is equally clear that the whole issue of NAT has been addressed
here with a complete determination to fail.

 It's annoying when people complain about a problem when the solution is
 right there in their face.

A 'solution' that requires action by parties completely out of
my control does not qualify as such in my opinion.

At present I don't expect much in the way of NAT deployment before
2015, and that is building in expectation of a major shakeup in 
the management structure in 2010. If things go on as at present I
don't expect IPv6 deployed before 2025.

  The main reason IPv6 is nowhere is the refusal to deal with NAT
  except by ideological reactions like the above. NAT is the
  way to deploy IPv6.
 
 I don't follow.

NAT performs address translation. It is in effect an IPv4 to IPv4
translator. Make that transparent and you can make IPv4 to IPv6
and IPv6 to IPv4 transparent in the same way.


 The idea that security is a substance with an independent existence
 which underpins a security industry is also fatally flawed. (And one
 of the main reasons why I chose to avoid becoming a part of said
 industry.) Security is a property of all aspects of life and must be
 managed as such.

Which is why the empirical observation that firewalls significantly 
reduce the number of successful penetration incidents is important.

The theoretical strength of a firewall against the NSA is irrelevant
when 99% of the attacks are from script kiddies. Filtering out
the 99% of script kiddies allows more time to focus on the remainder.


 Right! When the firewall considers IE 9.2 safe it may allow it to
 communicate freely, but at a later time, when a vulnerability is found
 and (hopefully) a new version is installed, IE 9.2 isn't allowed to
 connect to the rest of the world anymore, or only to trusted
 destinations. This mechanism would also be great to catch trojans and
 other unauthorized software trying to communicate over the net.

Yes, and sign the messages under a key protected by the trustworthy
computing base.

That is where Palladium gives real value.

Phill



Re: Multicast access

2004-03-04 Thread Kevin C. Almeroth

> As has been pointed out, this is a little more complicated than just
> the choice of client, in particular multicast is not widely available
> to the average Internet user.
>
> But I still find it ironic that I can watch a webcast from an ICANN
> meeting but I am unable to do the same for an IETF meeting (until after
> the fact). That is but one example.

Pretty standard response every time this comes up:

0.  Arguing the merits of multicast is really a separate issue, but
some facts:  (1) the MBone is long dead,  (2) multicast is a highly
successful (revenue generating) service in a surprising number of
enterprises, (3) multicast is certainly NOT ubiquitous in the
wide-area infrastructure, but people really ought to understand its
deployment by looking at measured statistics, and (4) before bashing
the MBone, make sure you understand the huge challenge that was
undertaken (compare to moving the entire Internet to IPv6) and
understand that there are a lot of non-technical challenges that
were not properly envisioned.

1.  As Joel pointed out, the single reason for using multicast is
scalability.  We simply don't have enough bandwidth to support X
(where X > 5-10) simultaneous streams of the same content from the hotel.
A very fine idea is to have an exploder or some sort of server available
off-site.  We send one stream to them and it replicates (a toy sketch of
the data path appears after the numbers below).  Volunteers?

2.  The whole multicast effort is run on a shoe-string budget.  Until
now, and maybe even still now, there seems very little willingness by
remote users to pay for even a hypothetically perfect service.  What
everyone needs to realize is that of what is currently done, almost
zero $$$ of IETF registration money goes to pay for it.  As Harald
mentioned, it is time donated by UofO (and others), it is a grant from
Cisco, and it is money from ISOC.

3.  Just some back of the envelope numbers:  if you want every session
encoded (even single camera) and available by unicast, I would estimate
this to cost about $15K per meeting plus equipment (assuming someone
is willing to do replicated service for free).  Given a replacement
time for the equipment of three years (reasonable, especially since
a lot of the equipment doesn't travel well), an estimated cost of
about $50K, and three meetings a year, the gear adds roughly $5-6K
per meeting; all told we are talking about $20K per meeting.

To recover that $20K:

$100 per remote attendee = 200 attendees
$500 per remote attendee = 40 attendees

A bit tough to support but possibly doable.
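
For concreteness, the data path of the exploder mentioned in point 1 is
tiny - one unicast stream in, one copy out per subscriber. A toy sketch
(all addresses and ports invented):

    import socket

    LISTEN = ("0.0.0.0", 5004)    # where the single inbound stream arrives
    SUBSCRIBERS = [               # hypothetical remote attendees
        ("198.51.100.7", 5004),
        ("203.0.113.9", 5004),
    ]

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN)
    while True:
        packet, _src = sock.recvfrom(2048)  # one RTP/UDP packet in...
        for dst in SUBSCRIBERS:
            sock.sendto(packet, dst)        # ...one copy per subscriber out

The hard parts are the transit bill and someone to run it, not the code.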

-

And if you are STILL reading...  as Harald sent in an email, we are
approaching the end of the grant period, so lots of opportunity for
recommendations.

-Kevin






Re: Proposed Standard and Perfection

2004-03-04 Thread Eliot Lear
Sam,

As the person who most recently complained, let me elaborate on my 
comments.  The problem I believe we all are facing is that the 
distinction between Proposed, Draft, and Internet Standard has been lost.

I agree with you 100% that...

> The point of proposed standard is to throw things out there and get
> implementation experience.

But when it comes to...

> If specs are unclear, then we're not going to get implementation
> experience; we are going to waste time.

We disagree (slightly).  In my experience one needs to actually get the 
implementation experience to recognize when things are unclear.  And my 
understanding is that this is precisely why we have PS and DS.

> I've had a lot of experience with a rather unclear spec with some
> significant problems that managed to make its way to proposed
> standard: For the past 10 years I have been dealing with problems in
> Kerberos (RFC 1510).  This leads me to believe very strongly that
> catching problems before documents reach PS is worth a fairly high
> price in time.

We come to different conclusions here.  My conclusion is that no 
standard should remain at proposed for more than 2 years unless it's 
revised.  Either it goes up, it goes away, or it gets revised and goes 
around again.

Your fundamental problem with RFC 1510 is that it is too painful for 
people to go and fix the text.  And that's a problem that should be 
addressed as well.

Thus, let the IESG have a bias towards approval for PS, and let 
implementation experience guide them on DS and full standard.  But set a 
clock.

This has impact on the WG process of course.  People want to do their 
work and go home.  We like WGs to end.  Well, what really needs to 
happen is that either the WG hangs around to push the thing forward, or 
the doc needs to be assigned some sort of standing WG, akin to an area
directorate, who will take responsibility for moving it forward or 
killing it.

And moving it forward shouldn't be that hard EITHER.  Mostly in the 
editing of clarifications, removal of functions not found to be used, or 
perhaps changing a few SHOULDs to MUSTs and vice versa.

Let's take an example: COPS-PR (RFC 3084).  How many people actually 
implement it?  If we can't find anyone who is, won't it just cause 
confusion to leave it at PS?

And I'd like to know when someone plans to do the work to get Kerberos 
to DS.  Heck, at least it's used by people.  Consider HARPOON, RFC 1496, 
on the downgrading of X.400/88 to X.400/84 with MIME.  Ya think Harald 
wants to take the time to update that one now?!  Well, why didn't it 
happen in some reasonable period of time, when perhaps it might have 
been more interesting?  Was it because nobody actually implemented it or 
was it simply because nobody felt the need to update it?

That said, I realize too much time can be spent on a review.  When
we're not sure we understand the implications of an issue well enough
to know whether it will be a problem, letting a document go to PS
and getting implementation experience can be useful.
Don't get me wrong.  Some review is definitely in order.  Inasmuch as 
reviews are going to happen, they should happen either prior to sending the 
doc to the IESG at all (remaining within the WG) or in parallel with 
IESG review.

Similarly, if the review process will never successfully conclude,
then having the review early is good.

Also, I am simply saying that waiting for complete reviews is good and
the pressure to get things out as PS faster with less review is
dangerous.
Only because today PS = Internet Standard, in reality.  And that's what 
needs to change.

Eliot




Re: MBONE access?

2004-03-04 Thread ned . freed
On the economic front, there have been offers, at least from me, to PAY for
remote attendance. Let's face it, I'd have been happy to pay $500 to have
access to all WG sessions and plenaries via Real Player or other Unicast
mechanism in Seoul. There's just no way my company can afford the travel
expenses for me to personally travel to Korea.
I too would have no problem paying for this.

The IETF should be interested in individuals, not just large corporations,
participating. What better way than to provide a way for individuals to
attend on their own budgets? If that means virtual attendance, then so be it.
Exactly right.

Multicast is just not available to many of the folks who might otherwise
attend. I don't care what unicast mechanism is chosen, provided it allows a
wider cross-section of the community access to live participation in the
meetings.
The choice of unicast technology is largely irrelevant to me as well. While I
might find one approach somewhat more advantageous than another at this
particular point in time, if something is chosen I will take the necessary
steps to get it working. The same cannot be said for multicast, since no amount
of my own futzing will make it work for me.
Ned



Re: Principles of Spam-abatement

2004-03-04 Thread Ed Gerck

grenville armitage wrote:
 
 Many  moons ago Ed Gerck wrote:
If someone sends me a message asking for my comment
because they read some other comment I wrote, do I really
care who that someone is... or who they know?
 
 You yourself have identified the criterion 'they read some other comment
 I wrote', not just they wrote something interesting.  I observe
 that the former criterion qualifies as a variant of ...who they know.

We seem to be in agreement that claiming that they know me, or that they 
know something I wrote, is indeed a variant of who they know.  My point 
was that who they know is useless as a criterion to _block_ email. 
What should matter most, in receiving email from people with no previous 
relationship to me, is the content of the message.

Thus, who you know (in whatever variant) is a bad metric for blocking 
email (even though it can be used well to accept email).  A message should 
be of higher interest to me the less I know the person.  That's one thing 
we shouldn't break in email.
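
The asymmetry Ed describes is easy to state in code.  A toy sketch,
assuming a hypothetical content classifier (content_score below is a
stand-in keyword heuristic, not a real filtering library): the
known-senders list may only ever accept a message, and blocking is
decided on content alone.

    SPAM_THRESHOLD = 0.9

    def content_score(body):
        # Stand-in for a real content classifier (e.g. a Bayesian
        # filter); here just a toy keyword heuristic.
        spammy = ("viagra", "lottery", "act now")
        hits = sum(word in body.lower() for word in spammy)
        return min(1.0, hits / 3.0)

    def disposition(sender, body, known_senders):
        if sender in known_senders:
            return "accept"      # "who you know" may accept outright...
        if content_score(body) >= SPAM_THRESHOLD:
            return "reject"      # ...but only content may block
        return "accept"          # unknown sender, clean content: let it in

    print(disposition("stranger@example.org",
                      "Act now! Lottery! Viagra!", set()))   # -> reject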



Re: Multicast access

2004-03-04 Thread JORDI PALET MARTINEZ
Regarding point 1, yes, we do it this way.

I can offer the service, for free, up to 8 Mbit/s, but only over IPv6.  We have very small 
IPv4 bandwidth (2 Mbit/s), but I'm sure we can coordinate with some universities or 
NRENs to support this without any cost.
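
For scale (the per-stream rate is an assumption, not Jordi's figure): at
roughly 250 kbit/s per audio/video viewer, that 8 Mbit/s link supports on
the order of 32 simultaneous unicast streams.

    # Rough capacity of the offer above, under an assumed stream rate.
    link_kbps = 8000       # 8 Mbit/s IPv6 offer
    stream_kbps = 250      # assumed per-viewer rate (a guess)
    print(link_kbps // stream_kbps)   # -> 32 simultaneous unicast viewers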

Regards,
Jordi

- Original Message - 
From: Kevin C. Almeroth [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, March 05, 2004 3:52 AM
Subject: Re: Multicast access


 
 As has been pointed out, this is a little more complicated than just
 the choice of client, in particular multicast is not widely available
 to the average Internet user.
 
 But I still find it ironic that I can watch a webcast from an ICANN
 meeting but I am unable to do the same for an IETF meeting (until after
 the fact). That is but one example.
 
 Pretty standard response every time this comes up:
 
 0.  Arguing the merits of multicast is really a separate issue, but
 some facts:  (1) the MBone is long dead,  (2) multicast is a highly
 successful (revenue-generating) service in a surprising number of 
 enterprises, (3) multicast is certainly NOT ubiquitous in the 
 wide-area infrastructure, but people really ought to understand its 
 deployment by looking at measured statistics, and (4) before bashing 
 the MBone, make sure you understand the huge challenge that was 
 undertaken (compare to moving the entire Internet to IPv6) and 
 understand that there are a lot of non-technical challenges that 
 were not properly envisioned.
 
 1.  As Joel pointed out, the single reason for using multicast is
 scalability.  We simply don't have enough bandwidth to support X
 (where X > 5-10) simultaneous streams of the same content from the hotel.
 A very fine idea is to have an exploder or some sort of server available
 off-site.  We send one stream to it, and it replicates the stream to
 everyone else.  Volunteers?
 
 2.  The whole multicast effort is run on a shoestring budget.  Until
 now, and maybe even still now, there seems to be very little willingness
 by remote users to pay for even a hypothetically perfect service.  What
 everyone needs to realize is that almost zero $$$ of IETF registration
 money goes to pay for what is currently done.  As Harald mentioned, it
 is time donated by UofO (and others), it is a grant from Cisco, and it
 is money from ISOC.
 
 3.  Just some back-of-the-envelope numbers: if you want every session
 encoded (even single camera) and available by unicast, I would estimate
 this to cost about $15K per meeting plus equipment (assuming someone
 is willing to do replicated service for free).  Given a replacement
 time for the equipment of three years (reasonable, especially since
 a lot of the equipment doesn't travel well) and an estimated cost
 of about $50K, that means we are talking about $20K per meeting.
 
 At $100 per remote attendee, that's 200 attendees to break even;
 at $500 per remote attendee, 40 attendees.
 
 A bit tough to support, but possibly doable.
 
 -
 
 And if you are STILL reading...  as Harald sent in an email, we are
 approaching the end of the grant period, so lots of opportunity for
 recommendations.
 
 -Kevin


**
Madrid 2003 Global IPv6 Summit
Presentations and videos on line at:
http://www.ipv6-es.com






Re: MBONE access?

2004-03-04 Thread Ole Jacobsen
Right, but multicast appears to be a large part of the problem.

I know this is heresy, but good engineers are usually able to use
available tools.  It is possible to use the handle of a screwdriver to hit
the head of a nail and drive it into the wall when you don't have a
hammer.

Now, it was a *naive* question, and others have pointed out the cost
of unicast alternatives, so I am not saying adopt commercial solutions
today, but if we are only doing multicast for religious reasons AND
it isn't reaching the large group of people who cannot participate
THEN it might be time to consider some alternatives, that's all.

Ole



Ole J. Jacobsen
Editor and Publisher,  The Internet Protocol Journal
Tel: +1 408-527-8972   GSM: +1 415-370-4628
E-mail: [EMAIL PROTECTED]  URL: http://www.cisco.com/ipj



On Thu, 4 Mar 2004, Frank Solensky wrote:

 A nit, perhaps, but:

 On Wed, 2004-03-03 at 20:17 -0800, Ole Jacobsen wrote:
  ..Note that Real
  Player is available for multiple platforms for free, ..

 The Linux version, last I tried [8.0.3.412], didn't include support for
 multicast.





IETF59 Lost a Jacket?

2004-03-04 Thread Woohyong Choi
The hotel has informed us that they are keeping a lost jacket.

It is a dark brown/khaki colored cotton jacket, and has clips of
matches from somewhere in Pennsylvania, USA.

Please contact the hotel to check if it's yours!

Regards,
Woohyong Choi / IETF59 NOC Team



Re: MBONE access

2004-03-04 Thread Keith Moore
FWIW, I tried to participate in a couple of WG meetings this week.  I
had to go to work to get multicast access - efforts to set up a tunnel
to my home failed (partially because there wasn't any obvious way to try
it out in advance of the actual meeting). 

Even when I could get multicast access, I could get video but the audio
would cut out after a few seconds.  This was with the latest QuickTime
player for the mac.  I also tried VLC for the mac but that didn't work
at all.

We've been experimenting with this stuff for longer than I can remember,
and I'd like for it to really be useful.  Here's my list of things
that I think are necessary:

1. We need to broadcast _every_ session, or at least most of them.

   That has a number of implications.  It means we need more bandwidth.
   It might be that we would have to use fewer codecs - maybe just
   H.261.  It might also mean that we need to set up some meeting rooms
   differently so that a single camera would suffice (thus removing the
   need for someone to switch between cameras).  Comments from the floor
   might have to be made from the front of the room.

2. We need the ability to access these transmissions via unicast.

   That probably means that we need willing parties on various
   continents to provide tunnel endpoints and/or proxies and/or
   reflectors.

3. The client software needed to participate must be available to
   everybody.

   That probably means using an open source tool as the baseline.  If it
   happens to also work with proprietary tools, so much the better.

4. It needs to be possible to test things well in advance of the
   meeting.

   By the time the meeting is underway, there's not enough time to debug
   tunnel setup, client problems, etc.  This probably means having live
   video feeds set up in advance, using the same codecs and tools that
   will be used at the meeting, so that people can run even a trivial
   connectivity check (see the sketch after this list).  Ideally there
   should be feeds sourced from somewhere near the meeting site's point
   of attachment to the network.

5. In my limited experience, Jabber is an acceptable way for remote 
   participants to make comments - provided there is someone willing to
   read those comments to those physically present at the meeting.  But
   if this were to be a widespread practice, we'd need to have some 
   reasonably fair way to divide time between the local participants 
   and the remote participants.

6. We should require those who use slides to make them available for
   download, in a portable format, well in advance of the meeting.
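
As a concrete instance of the pre-meeting testing in item 4, here is a
minimal sketch that joins a multicast group and reports whether anything
arrives before a timeout.  The group (from the 233.252.0.0/24 example
range) and port are placeholders, not a real IETF feed:

    # Pre-meeting multicast connectivity check: join a group, wait for
    # any packet, and report success or a timeout.
    import socket, struct

    GROUP, PORT, TIMEOUT = "233.252.0.1", 5004, 30   # placeholders

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))  # join on default interface
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(TIMEOUT)

    try:
        data, src = sock.recvfrom(2048)
        print("multicast OK: %d bytes from %s" % (len(data), src[0]))
    except socket.timeout:
        print("no traffic in %ds -- try a tunnel or reflector" % TIMEOUT)

Running something like this against a test feed in the weeks before the
meeting would catch most tunnel and client problems while there is still
time to fix them.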


I realize this is a tall order.  I'm trying to make a realistic
statement of what it takes for remote access to meetings to be useful.  
(I realize it might not seem realistic to actually _do_ this, but if 
it's not realistic, we may as well admit it.)   

I suspect the problem boils down to _money_.  Money would buy more 
bandwidth.  Money would help get local tunnel/redirector/proxies set up.
Money would help software development if open source tools needed to be
tweaked.  Money would pay for more on-site equipment and operators.

If this stuff _worked_, I would be willing to pay something close
to the full conference fee to attend the conference remotely, for those
cases when I couldn't be there in person.  Though I guess I wonder 
whether there would ever be enough remote participants to cover the
expenses.  (For that matter, if it _worked_, would lots of people stop 
travelling to meetings?)

If it is feasible it should be possible to implement this in stages:

1. Pick an open source client.  Tweak it as necessary.  Set up
   pointers in appropriate web pages so that IETFers could easily
   find and download the code.  Provide instructions for how to
   configure and use it.

2. Set up tunnel endpoints / proxies / redirectors.
   If it's necessary to limit access to IETFers, find a way to do this.

3. Set up live video/audio feeds.
   If it's necessary to limit access to IETFers, find a way to do this.

4. Encourage IETFers to download clients, test with the live feeds,
   and provide feedback.

5. Try this in one or two meeting rooms.  Experiment with a single
   microphone setup in one room.  Once it works, expand it gradually to 
   include additional meeting rooms (perhaps even during the same week).

6. Once it is demonstrated to work, start charging money :)