Re: CNN.com - WiFi activists on free Web crusade - Nov. 29, 2002 (fwd)

2002-12-04 Thread Jim Choate

On Mon, 2 Dec 2002, Eugen Leitl wrote:

 Of course it should be given a unique IP address.

Actually, there is no reason a fixed IP ever has to be used. You don't
even need a fixed hostname (above the per-connection level, at least;
you keep one for convenience).


 --


We don't see things as they are,  [EMAIL PROTECTED]
we see them as we are.   www.ssz.com
  [EMAIL PROTECTED]
Anais Nin www.open-forge.org






Re: CNN.com - WiFi activists on free Web crusade - Nov. 29, 2002 (fwd)

2002-12-04 Thread Jim Choate

On Mon, 2 Dec 2002, David Howe wrote:

 I think what I am trying to say is - given a normal internet user
 using IPv4 software who wants to connect to someone in the cloud, how
 does he identify *to his software* the machine in the cloud if that
 machine is not given a unique IP address? Few if any IPv4 packages can
 address anything more complex than an IPv4 dotted quad (or, if given a
 DNS name, will resolve it to a dotted quad).

You don't. What you'll need is an extension to the current software, which
is woefully inadequate for distributed/cloud/grid processing; it was never
designed for this sort of thing.

Plan 9 solves all of these, and you get to keep your IPv4 and IPv6 (not
that they're of any real use in that environment).








Re: CDR: Re: CNN.com - WiFi activists on free Web crusade - Nov.29, 2002 (fwd)

2002-12-02 Thread Jim Choate

On Sat, 30 Nov 2002, Dave Howe wrote:

 Jim Choate wrote:
  On Sat, 30 Nov 2002, Dave Howe wrote:
  The scaling problem is a valid one up to a point. The others are not.
  The biggest problem is people trying to do distributed computing using
  non-distributed os'es (eg *nix clones and Microsloth).
 not as such, no. the vast majority of free internet cloud users couldn't
 care less about computer resources and/or distributed computing

They don't care... Yet!

see...

Smart Mobs: The next social revolution
H. Rheingold
ISBN 0-7386-0608-3

Leonardo's Laptop: Human needs and the computing technologies
B. Shneiderman
ISBN 0-262-19476-7

As to the other points you make, they are all addressable and are in fact
being implemented now using existing technology.








Re: CNN.com - WiFi activists on free Web crusade - Nov. 29, 2002(fwd)

2002-12-02 Thread Jim Choate

On Sun, 1 Dec 2002, Tyler Durden wrote:

 Photons are bosons, so they don't interact with each other.

Generally true, but don't forget 'entanglement', which is clearly a case of
interacting with each other ;)

 Well, by interfere I meant in the detectors of course. So are you telling me
 that two WiFi receivers pointed in different directions will not receive the
 same information? I don't think WiFi (IR) is all that directional is it? If
 it is, then maybe we CAN have a new LAN segment.

It all depends on the antenna. If you use a Pringles-can kludge they are
quite directional. There are at least a couple of the Austin Wireless group
who have worked with other groups to build a phased-array assembly that
allows 802.11b to reach several miles instead of several hundred feet. It
claims to be able to handle multiple connections; I haven't had a chance to
look at it and see if it really works as advertised.
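For a rough sense of why antenna gain turns hundreds of feet into miles, here is a back-of-envelope free-space link budget. All the figures (transmit power, gains, receiver sensitivity) are assumed ballpark 802.11b numbers for illustration, not measurements of any particular rig:

```python
import math

def fspl_db(distance_m, freq_hz=2.4e9):
    """Free-space path loss (Friis), in dB, for distance in metres."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

def link_margin_db(distance_m, tx_dbm=15.0, tx_gain_dbi=2.0,
                   rx_gain_dbi=2.0, sensitivity_dbm=-90.0):
    """Received power minus receiver sensitivity; positive => link closes.
    Defaults are assumed, ballpark 802.11b figures."""
    rx_dbm = tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(distance_m)
    return rx_dbm - sensitivity_dbm

# Stock ~2 dBi omnis at 100 m, the same omnis at 8 km, and ~12 dBi
# directional antennas (can antenna / small array) on both ends at 8 km.
omni_100m = link_margin_db(100)
omni_8km = link_margin_db(8000)
cans_8km = link_margin_db(8000, tx_gain_dbi=12, rx_gain_dbi=12)
```

With these numbers the omni link has margin at 100 m but falls about 9 dB short at 8 km; roughly 10 dBi of extra gain at each end closes it, which is all a can antenna or small phased array is doing.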

Several of Hangar 18 are currently working on 'Open Air Optical Network'
serial adapters that will work with Linux, Plan 9, Winblows, etc. - just
about anything that will do SLIP or PPP over a serial port and has line
of sight for the lasers.

Our next project along these lines is to start using 900MHz radios to
increase the 'backbone' range. The idea here is to expand the current
'regular Internet' backbone for open-forge.org (two sites separated by
about six miles using ISDN, with one site using a T1 to access the regular
network). When we get this up we should have about six to eight major
'backbone' sites scattered around Austin using 900MHz to connect to the
T1.

Our current backbone project is run by several commercial entities
and individuals using non-consumer AUPs (for 'free', we use tit-for-tat:
we only interact with other 'producers', not 'consumers' - the idea is to
encourage others to handle the fan-out to a larger user community). We've
got nodes in several states. We're currently looking at setting up the
auth servers so that we can better manage resources and access. We've got
somewhere in the neighborhood of about 40 machines in the pool. We'll be
using not only the traditional DNS but also custom namespaces (accessed
through VPN Gateways). We're also building a pool of 'community
accessible' process servers (ala Plan 9).








Re: CNN.com - WiFi activists on free Web crusade - Nov. 29, 2002 (fwd)

2002-12-02 Thread David Howe
at Monday, December 02, 2002 8:42 AM, Eugen Leitl [EMAIL PROTECTED] was
seen to say:
 No, an orthogonal identifier is sufficient. In fact, DNS loc would be
 a good start.
I think what I am trying to say is - given a normal internet user
using IPv4 software who wants to connect to someone in the cloud, how
does he identify *to his software* the machine in the cloud if that
machine is not given a unique IP address? Few if any IPv4 packages can
address anything more complex than an IPv4 dotted quad (or, if given a
DNS name, will resolve it to a dotted quad).

 The system can negotiate whatever routing method it uses. If the node
 doesn't understand geographic routing, it falls back to legacy
 methods.
Odds are good that cloud nodes will be fully aware of geographic
routing. There are obviously issues there, though: given a node that is
geographically closer to the required destination but does not have a
valid path to it, purely geographic routing will fail, and fail badly; it
may also be that the optimum route is a longer but less congested (and
therefore higher-bandwidth) path than the direct one.

For a mental image, imagine a circular cloud with an H-shaped hole in
it; think about routing between the pockets at the top and bottom of the
H. Now imagine a narrow (low-bandwidth) bridge across the crossbar
(which is a high-cost path for traffic). How do you handle these two
cases?




Re: CNN.com - WiFi activists on free Web crusade - Nov. 29, 2002 (fwd)

2002-12-02 Thread Eugen Leitl
On Mon, 2 Dec 2002, David Howe wrote:

 I think what I am trying to say is  -  given a normal internet user
 using IPv4 software that wants to connect to someone in the cloud, how
 does he identify *to his software* the machine in the cloud if that
 machine is not given a unique IP address? few if any IPv4 packages can

Of course it should be given a unique IP address. IPv6 is pretty popular
with the ad hoc mesh crowd, btw. It's the only address space where you can
still get large address slices for free or nearly so. (The space is
probably large enough that one could really map WGS 84 to IPv6 and
have very few direct collisions -- if it weren't for small, well-populated
address slices, and addresses and networks with magical meaning.)

But it should also get a geographic address, preferably one refinable to
~µm scale if needed. Bits are cheap, right?
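As a sketch of the WGS 84 → IPv6 idea: quantise latitude and longitude to 32 bits each and pack them into the low 64 bits under some routing prefix. The 2001:db8::/32 documentation prefix and this particular bit layout are illustrative assumptions, not any real allocation scheme; 32 bits per axis already gives roughly mm-scale steps in latitude:

```python
import ipaddress

def geo_to_ipv6(lat, lon, prefix=0x2001_0db8):
    """Pack WGS 84 lat/lon into the low 64 bits of an IPv6 address.
    2001:db8::/32 is the documentation prefix - a stand-in only."""
    lat_q = int((lat + 90.0) / 180.0 * (2**32 - 1))
    lon_q = int((lon + 180.0) / 360.0 * (2**32 - 1))
    addr_int = (prefix << 96) | (lat_q << 32) | lon_q
    return ipaddress.IPv6Address(addr_int)

def ipv6_to_geo(addr):
    """Recover (lat, lon) from an address built by geo_to_ipv6."""
    n = int(addr)
    lat_q = (n >> 32) & 0xFFFFFFFF
    lon_q = n & 0xFFFFFFFF
    return (lat_q / (2**32 - 1) * 180.0 - 90.0,
            lon_q / (2**32 - 1) * 360.0 - 180.0)
```

A finer-grained address (the "refinable" part) would just mean spending more of the 128 bits on the coordinate fields.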

 address anything more complex than an IPv4 dotted quad (or, if given a
 DNS name, will resolve it to a dotted quad)

 odds are good that cloud nodes will be fully aware of geographic
 routing (there are obviously issues there though; given a node that is

Hopefully, 

 geographically closer to the required destination, but does not have a
 valid path to it, purely geographic routing will fail and fail badly; it

Geographic routing stands and falls with some (simple) connectivity
assumptions. These are present in dense wireless node clouds in urban
areas.

 may also be that the optimum route is a longer but less congested (and
 therefore higher bandwidth) path than the direct one.

The connectivity in a line-of-sight network is not very high, and
it is perfectly feasible to maintain a quality metric (latency, bandwidth)
for each link. Given the short range and high bandwidth within each cell,
that's not worth the trouble.
 
 For a mental image, imagine a circular cloud with a H shaped hole in
 it; think about routing between the pockets at top and bottom of the
 H, now imagine a narrow (low bandwidth) bridge across the crossbar
 (which is a high cost path for traffic). How do you handle these two
 cases?

High-dimensional networks don't block (map a high-dimensional network onto
the Earth's surface to see why). But that doesn't help much with current
networks, where no satellite clouds are available. It hurts, but for nodes
at and near the edge one would need special-case treatment
(implementing back-propagating pressure flow, so there would be less
incentive to send packets toward nodes at a wall).
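A minimal sketch of the greedy geographic forwarding under discussion, including the failure mode at a concave void like the H-shaped hole. Node names, coordinates, and the return-None-when-stuck behaviour are all illustrative; a real protocol would fall back to perimeter/face routing or legacy tables at that point:

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_route(pos, links, src, dst):
    """Greedy geographic forwarding: each hop hands the packet to the
    neighbour closest to the destination.  Returns the hop list, or
    None at a local minimum (the concave-void case), where a fallback
    routing method would have to take over."""
    path, here = [src], src
    while here != dst:
        nxt = min(links[here],
                  key=lambda n: dist(pos[n], pos[dst]),
                  default=None)
        if nxt is None or dist(pos[nxt], pos[dst]) >= dist(pos[here], pos[dst]):
            return None  # stuck: no neighbour makes geometric progress
        path.append(nxt)
        here = nxt
    return path
```

Note the only state each node needs is its neighbours' positions - no global routing table, which is the whole appeal.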




Re: CNN.com - WiFi activists on free Web crusade - Nov. 29, 2002 (fwd)

2002-12-01 Thread Eugen Leitl
On Sat, 30 Nov 2002, Dave Howe wrote:

 without routing and name services, you have what amounts to a propriatory

I believe I mentioned geographic routing (which is actually switching, and
not routing), so your packets get delivered as the crow flies. As for
name services: how often do you actually use a domain name as an end user?
Not very often. People typically use a search engine. It doesn't matter
what the URI looks like, as long as it can be clicked on, or is short
enough to be cut and pasted, or written down on a piece of paper and
entered manually, in a pinch.

So you need (distributed) searching and document (not machine) address 
spaces, which current P2P suites create the architecture for.

 NAT solution - no way to address an interior node on the cloud from the

It depends on how large the network is. Wireless is potentially a much
bigger node cloud, so the current Internet could become a 'proprietary
niche' eventually.  However, there is no reason why the nodes couldn't
have a second address, or why the IPv6 address couldn't double as a
geographic coordinate - at least during the migration.

 internet (and hence, peer to peer services or any other protocol that
 requires an inbound connection not directly understood by the nat
 translation - eg ftp on a non standard port or ssl-encrypted as ftps)

Fear not.
 
 under ipv6 you can avoid having to have a explicit naming service - the

By 'naming service' you obviously mean something other than DNS.

 cloud id of the card (possibly with a network prefix to identify the cloud
 as a whole) can *be* the unique name; routing is still an issue but that

Anything which relies on global routing tables and their refresh will
always have an issue. Which is why geographic, local-knowledge routing
will dominate global networks.

 reduces to being able to route to a unique node inside the cloud - which
 appears from a brief glance at the notes from Morlock Elloi (thanks again :)
 to have at least a workable trial solution.  if a IPv6 internet ever becomes
 a reality, clouds would fit right in.

It is a patch, not a solution. But wireless ad hoc meshes really are the
first real reason to go IPv6.

 TCP/IP tunnelling without a name service at at least one end isn't workable;
 *static* NAT/PAT is of course a name service and can't be considered, but
 SOCKS and socks aware p2p is a definite possibility.

The best solution would seem to be to leave the choice of delivery method
to the multilingual node. It would be completely transparent to the packet.




Re: CNN.com - WiFi activists on free Web crusade - Nov. 29, 2002 (fwd)

2002-12-01 Thread Eugen Leitl
On Sat, 30 Nov 2002, Morlock Elloi wrote:

 Self-routing mesh networks have potential to sidestep this. Transistors are
 small and cheap enough even today - the centralised communication
 infrastructure is there so that you can be charged, not because technology
 dictates that any more. With wireless there is a potential that everyone paves
 (and marks street number) in front of their house. The only way to subvert this
 would be to erase santa monica from minds of everyone. I don't see that
 happening. 

The cool part about wireless meshes running geographic routing is that
they're self-labelling, and create a grassroots positioning service. It can
be coarse, like a node id'ing a cell, or really fine-resolution using
relativistic ping to the end-user device (ideally, all nodes, even your
handheld, are part of the ad hoc mesh).
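'Relativistic ping' here just means timing radio round trips at close to c, so position resolution is set by timestamp granularity. A quick sanity check (the two timer resolutions are illustrative):

```python
C = 299_792_458.0  # speed of light, m/s; RF propagates at ~c in air

def ranging_resolution_m(timer_resolution_s):
    """Distance uncertainty from round-trip timing: half of c * dt,
    since the signal covers the distance twice."""
    return C * timer_resolution_s / 2

coarse = ranging_resolution_m(1e-6)  # microsecond software timestamps
fine = ranging_resolution_m(1e-9)    # nanosecond hardware timestamps
```

Microsecond timestamps resolve ~150 m (enough to id a cell); nanosecond timestamps resolve ~15 cm, which is roughly the "really fine-resolution" end of the claim.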

If your space is labelled, you can just publish a database which
allows you to annotate arbitrary 3D coordinate regions with info. Could be
proprietary, could be something like a wiki. Of course, virtual graffiti
will result in a lot of database defacement, so you have to use prestige
accounting to be able to filter out the twits.
 
 The day that I can send a packet from LAX to SFO via non-ISP-ed network will be
 the beginning of the end of telco/telecom monopolies. Or, should I say,
 directory monopolies.

The only way to make this low-latency is relativistic cut-through in the
wireless domain with some serious local bandwidth, and some long-range
links. Somebody needs to be motivated to haul boxes up the mountain
ranges. This needs permits, and dedication, and some $$$s.

High-latency low-QoS services should be dead easy, though. There goes 
SMS...
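Why cut-through (rather than per-hop store-and-forward) matters for latency over a long mesh path, as a back-of-envelope. The distance is the approximate LAX-SFO great circle; the hop count, packet size, and link rate are assumed round numbers:

```python
C = 299_792_458.0            # m/s
dist_m = 543e3               # LAX-SFO great-circle distance, roughly

prop_ms = dist_m / C * 1e3   # physical floor: ~1.8 ms one way

# Store-and-forward: every hop receives the whole packet before
# retransmitting it.  A 1500-byte packet at 11 Mbit/s (802.11b nominal):
per_hop_ms = 1500 * 8 / 11e6 * 1e3
hops = 200                   # assumed ~2.7 km average hop over the path
store_forward_ms = prop_ms + hops * per_hop_ms
```

Under these assumptions that is ~1.8 ms of physics versus ~220 ms of per-hop serialisation delay; cut-through forwarding plus a few long-range links is what closes that gap.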




Re: CNN.com - WiFi activists on free Web crusade - Nov. 29, 2002 (fwd)

2002-12-01 Thread Dave Howe
Eugen Leitl wrote:
 On Sat, 30 Nov 2002, Dave Howe wrote:
 I believe I mentioned geographic routing (which is actually
 switching, and not routing) so your packets get delivered, as the
 crow flies. The question of name services. How often do you actually
 use a domain name as an end user? Not very often. People typically
 use a search engine. It doesn't matter how the URI looks like, as
 long as it can be clicked on, or is short enough to be cut and
 pasted, or written down on a piece of paper and entered manually, in
 a pinch.
ah. Sorry, I don't think of DNS as a name service (except at one
remove) - we are talking DHCP or similar routable-address assignment.

 under ipv6 you can avoid having to have a explicit naming service -
 the
 You obviously understand under naming service something other than
 DNS.
yup - I recognise as a naming service anything that allows you to associate
a routable name with a node that otherwise has only a MAC address.

 Anything which relies on global routing tables and their refresh will
 always has an issue. Which is why geographical local-knowledge routing
 will dominate global networks.
Indeed so - but of course the current internet *does* work that way, so any
new solution that advertises itself as Free Internet access *must* fit
into the current scheme or it is worthless.

 The best solution would seem to leave the multilingual node the
 choice of means of delivery. It would be completely transparent to
 the packet.
Unfortunately, such abstraction fails unless the *sender* knows how to push
the packet in the right direction, and each hop knows how to get it a little
nearer; this more or less requires that each node be given a unique
identifier compatible with the existing system, and given that the existing
system is still IPv4, there are problems.




Re: CNN.com - WiFi activists on free Web crusade - Nov. 29, 2002 (fwd)

2002-11-30 Thread Dave Howe
 http://www.cnn.com/2002/TECH/11/21/yourtech.wifis/index.html
It's a nice idea, but unfortunately it gets easily bitten by the usual
networking bugbears:
1. large wifi networks start to hit scaling problems - they start to need
routers and name services that are relatively expensive, and IP address
ranges start to become a scarce resource.
2. no matter how large the new network becomes, it still needs a link to the
old network; almost all ISPs frown on use of home connections for sharing
beyond just the owner's machines, and many consider using even unmetered
service in a manner they didn't provision for (ie, using unmetered more than
100 hours a month at the full bandwidth limit) as abuse and end the
contracts of those who do so. What you would need would be an ISP (or large
commercial) style contract with guaranteed bandwidth and dedicated IP
addresses - which do not come cheap enough to be worth giving away.
3. unmetered is only just becoming common in England, and is still mostly on
56K modem. Broadband is often *massively* underprovisioned, and quite often
all the connections in an area feed to a single fixed-bandwidth multiplexor
at the telecoms office, so adding additional connections doesn't actually
add any bandwidth at all. The *only* end-user deal is 500kb down, 250kb up,
shared amongst *50* people in your area (the UK has a telecoms monopoly
from a recently privatised company that has already forced two would-be
competitors out of the market). Even now (given expected usage patterns) the
mere existence of a Microsoft OS service pack more than 30MB in size is
enough to throw available bandwidth per user below modem levels.




Re: CNN.com - WiFi activists on free Web crusade - Nov. 29, 2002 (fwd)

2002-11-30 Thread Dave Howe
Jim Choate wrote:
 On Sat, 30 Nov 2002, Dave Howe wrote:
 The scaling problem is a valid one up to a point. The others are not.
 The biggest problem is people trying to do distributed computing using
 non-distributed os'es (eg *nix clones and Microsloth).
not as such, no. The vast majority of free internet cloud users couldn't
care less about computer resources and/or distributed computing - they want
to access websites, ftp servers and read/send their email. With a large(ish)
number of otherwise standalone nodes, you need to worry about addressing
space, routing and (to conserve what little bandwidth you have to the
classic internet) caching. Ad-hoc routing also doesn't scale well - so you
get into issues of cells mapping to address ranges and dynamic allocation to
mobile nodes as they move from cell to cell (there are probably better ways
to do that than cells and static ranges, but self-networking swarms blow out
their bandwidth purely negotiating routing long before the amount of traffic
those nodes need becomes an issue).
It's possible I am wrong and there is a wonderful distributed-computing
method to solve these purely network routing problems, but it is news to me.
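The "swarms blow out their bandwidth negotiating routing" point can be made concrete with a deliberately crude model of proactive flooding, where every node's link-state update is re-forwarded by every other node. The update size, interval, and node counts are assumed figures:

```python
def flooding_overhead_bps(nodes, update_bytes=64, interval_s=5.0):
    """Per-node control traffic if each node forwards one update from
    every other node per interval.  Crude back-of-envelope: ignores
    duplicate suppression, aggregation, and MAC overhead."""
    return (nodes - 1) * update_bytes * 8 / interval_s

small = flooding_overhead_bps(100)        # ~10 kbit/s per node: negligible
citywide = flooding_overhead_bps(100_000)  # ~10 Mbit/s: most of an
                                           # 11 Mbit/s 802.11b channel
```

Network-wide the cost grows with the square of the node count, which is why naive flooding dies at city scale while local-knowledge (geographic) schemes do not.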

 2. no matter how large the new network becomes, it still needs a
 link to the old network;
 Granted, up to a point. That point is when this network has more
 resources than the 'old' networks. At some point the old networks
 move over and start running from the new one.
that would require that the new network be not only larger (and more cost
effective) but joined up enough (and routing-efficient enough) to see it
become the primary backbone. I am willing to imagine a world where classic
isps have a peering arrangement with such cloud networks (giving free access
to their own sites in return for free access to Cloud sites by their
customers) but there is always the prisoner's dilemma (which has been
attempted by so many ISPs lately) of refusing to peer with anyone they think
they can sell transit to instead.

 almost all ISPs frown on use of home connections for sharing
 more than just the owner's machines, and many consider using even
 unmetered in a manner they didn't provision for (ie, using unmetered
 more than 100 hours a month at the full bandwidth limit) as abuse
 and end the contracts of those who do so. what you would need would
 be an ISP (or large commercial) style contract with a guaranteeed
 bandwidth and dedicated ip addresses - which do not come cheap
 enough to be worth giving away.
 Bullshit on the too expensive to give away.
A typical commercial setup (2Mb bandwidth, no ratio, no contention (sharing),
over a dedicated line) is about 700ukp/month (say about a thousand
dollars). OK, to a large commercial operation that is about the cost of one
employee - probably even less. Assuming it came out of a PR budget,
though, that is one less staff member and/or one less campaign a year, for
the (dubious) benefit of whatever PR you could get by donating it. I didn't
say it was too expensive to give away, I said it was too expensive to be
*worth* giving away compared to cheaper PR stunts that don't have to be paid
for every year as an ongoing cost (with all the PR loss of having to shut it
down if it becomes too much of a drain).
And that is a *recent* cost - as little as two years ago you could pay that
for a 512K link.

 Irrelevant since there are plenty of commercial feeds out there that
 are not ISP's.
yes, of course there are - but they aren't cheap. the US has a history of
cheap connectivity and free local calls - the uk (along with most of the
rest of the world) doesn't.

 I keep seeing these naysaying views, yet the guerrilla networks just
 keep getting bigger...
There is a ratio thing - anyone with a home broadband connection (which is a
lot more common in London, where most of the free MAN schemes seem to be
concentrated) can afford to carry a few freeloaders on an ad-hoc basis, and
it isn't currently in the interests of the telco monopoly to crack down on
it - it doesn't cut into their core business (selling phone lines and leased
lines), and the traffic blips can be absorbed by the ISP, who has statistical
models of how much they can underprovision their total sold broadband and/or
dialup pool bandwidth by without complaints (the monopoly, who is also an
ISP, got its sums wrong a couple of years back when it first went unmetered,
and the *average* bandwidth allocation during busy times was less than
2Kbits - and that was the dialup pool only).
If the number of freeloaders became significant and, more importantly,
became predominantly home users (who want continuous high bandwidth) rather
than passing war-driving people grabbing a few Ks of download for email or
a quick website surf purely because it is cool, then it would both cut into
the bandwidth available to the person paying for it, and the higher average
load curve would alert both the ISP and the telco to take a closer look at
why that user is using so much bandwidth.

Re: CNN.com - WiFi activists on free Web crusade - Nov. 29, 2002 (fwd)

2002-11-30 Thread Morlock Elloi
  Geographic routing completely eliminates need for expensive routing
  and admin traffic. Name services? Who needs name services? Localhost
  is sufficient for a prefix to an address namespace.
 without routing and name services, you have what amounts to a proprietary
 NAT solution - no way to address an interior node on the cloud from the


The importance of geographic routing is that the cataloguing system is public.

Imagine that city streets had absolutely no signs and no house numbers. In
order to get to, say, a quality whorehouse, you need to pay someone to
guide you, and ultimately that someone may choose not to. If, however,
streets were marked, you could use maps from many sources - or even create
your own - to guide you.

Localities put up the addressing infrastructure, and it gets aggregated at
global levels in any desirable/sellable form.

Compare this to the Internet, where you essentially have to pay to get
routed via closed systems. These very characters were routed based on
decisions and policies of no more than 2-3 corporations. We all know what
the consequences are.

Self-routing mesh networks have the potential to sidestep this. Transistors
are small and cheap enough even today - the centralised communication
infrastructure is there so that you can be charged, not because technology
dictates it any more. With wireless there is the potential that everyone
paves (and marks the street number) in front of their house. The only way to
subvert this would be to erase Santa Monica from the minds of everyone. I
don't see that happening.

The day that I can send a packet from LAX to SFO via a non-ISP-ed network
will be the beginning of the end of the telco/telecom monopolies. Or, should
I say, directory monopolies.







Re: CNN.com - WiFi activists on free Web crusade - Nov. 29, 2002 (fwd)

2002-11-30 Thread Dave Howe
Eugen Leitl wrote:
 On Sat, 30 Nov 2002, Morlock Elloi wrote:

 1. large wifi networks start to hit scaling problems - they start
 to need routers and name services that are relatively expensive,
 and ip address
 Geographic routing completely eliminates need for expensive routing
 and admin traffic. Name services? Who needs name services? Localhost
  is sufficient for a prefix to an address namespace.
without routing and name services, you have what amounts to a proprietary
NAT solution - no way to address an interior node on the cloud from the
internet (and hence, peer to peer services or any other protocol that
requires an inbound connection not directly understood by the nat
translation - eg ftp on a non standard port or ssl-encrypted as ftps)

 Actually, even a MAC has enough address space to label entire Earth
 surface with ~1 address/m^2, IPv6 addresses are plenty better here.
 And of course no one forces you to use actual IP addresses. You can
 sure tunnel TCP/IP through a geographic routing protocol.
under ipv6 you can avoid having an explicit naming service - the
cloud id of the card (possibly with a network prefix to identify the cloud
as a whole) can *be* the unique name; routing is still an issue, but that
reduces to being able to route to a unique node inside the cloud - which
appears, from a brief glance at the notes from Morlock Elloi (thanks again
:), to have at least a workable trial solution.  If an IPv6 internet ever
becomes a reality, clouds would fit right in.
TCP/IP tunnelling without a name service at at least one end isn't workable;
*static* NAT/PAT is of course a name service and can't be considered, but
SOCKS and SOCKS-aware p2p is a definite possibility.




Re: CNN.com - WiFi activists on free Web crusade - Nov. 29, 2002 (fwd)

2002-11-30 Thread Dave Howe
Morlock Elloi wrote:
 Not so. Self-organising mesh networks appear to have some interesting
 properties. There are a number of open solutions and at least one
 startup I know about based on this.
snip links
fascinating - I obviously have a lot of reading to do - thank you :)




Re: CNN.com - WiFi activists on free Web crusade - Nov. 29, 2002 (fwd)

2002-11-30 Thread Morlock Elloi
 1. large wifi networks start to hit scaling problems - they start to need
 routers and name services that are relatively expensive, and ip address
 ranges start to become a scarce resource.

Not so. Self-organising mesh networks appear to have some interesting
properties. There are a number of open solutions and at least one startup I
know about based on this.

Real stuff ...

http://w3.antd.nist.gov/wctg/manet/manet_bibliog.html
http://locustworld.com/
http://www.mitre.org/tech_transfer/mobilemesh/

... and rants:

http://www.wirelessanarchy.com/
http://www.gldialtone.com/whyP2Pwireless.htm
http://slashdot.org/articles/02/10/01/2220255.shtml?tid=126







Re: CNN.com - WiFi activists on free Web crusade - Nov. 29, 2002 (fwd)

2002-11-30 Thread Tyler Durden
It's possible I am wrong and there is a wonderful distributed-computing
method to solve these purely network routing problems, but it is news to
me.

I just don't see how a single WiFi cloud will be able to scale very far. All 
the WiFi users within eyeshot of each other are always going to contend 
for bandwidth, no? It'll be just like the old half-duplex 10BaseT copper 
LANs. And I still don't understand how a WiFi router will help you... if the 
different Layer 2 LANs overlap in space at all, they'll interfere with each 
other optically even if they are on different segments. (With copper you 
didn't even have this problem.) Thus, aren't you stuck with zillions of 
little WiFi islands that must not overlap without things getting very slow?


As for service providers not wanting freeloaders, I'd point out that DSL 
cares much less... the DSL connection is mapped over ATM and is basically 
a dedicated connection to a router port, with fixed bandwidth in either 
direction. Whether that port is processing lots of freeloader packets or 
idle packets from a single dedicated user shouldn't matter much.

Uh, but now that I think of it, ATM does allow for some oversubscription, so 
in order to maximize the connection between the DSLAM and the ATM switch 
that's in front of the router (it might be in the same box as the router, I 
know!), maybe they'll discourage freeloading. BUT, DSL companies have been 
touting that they're very happy for you to put a home-based LAN on your side 
of the connection (Cable Modem providers don't normally like that).







Re: CNN.com - WiFi activists on free Web crusade - Nov. 29, 2002 (fwd)

2002-11-30 Thread Eugen Leitl
On Sat, 30 Nov 2002, Morlock Elloi wrote:

  1. large wifi networks start to hit scaling problems - they start to need
  routers and name services that are relatively expensive, and ip address

Geographic routing completely eliminates the need for expensive routing and 
admin traffic. Name services? Who needs name services? Localhost is 
sufficient for a prefix to an address namespace.

  ranges start to become a scarce resource.

Actually, even a MAC has enough address space to label the entire Earth's 
surface with ~1 address/m^2; IPv6 addresses are plenty better here. And of 
course no one forces you to use actual IP addresses. You can surely tunnel 
TCP/IP through a geographic routing protocol.
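The "~1 address/m^2" claim checks out as an order of magnitude (surface areas are the usual round figures):

```python
mac_space = 2 ** 48                 # EUI-48 (MAC) address space
earth_total_m2 = 5.10e14            # whole surface, oceans included
earth_land_m2 = 1.49e14             # land only

per_m2_total = mac_space / earth_total_m2  # ~0.55 addresses per m^2
per_m2_land = mac_space / earth_land_m2    # ~1.9 per m^2 of land
ipv6_per_m2 = 2 ** 128 / earth_total_m2    # ~7e23: 'plenty better' indeed
```

So MACs give roughly one address per square metre of land (a bit over half that counting oceans), while IPv6 leaves hundreds of sextillions per square metre to burn on hierarchy and geography bits.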
 
 Not so. Self-organising mesh networks appear to have some interesting
 properties. There are a number of open solutions and at least one startup I
 know about based on this.