From participation in other working groups, I know that discussions about encoding (the famous binary vs. text debate) have a long and unproductive history. If this is to be more than assertion and argument-by-authority, proponents of feature-restricted or special-purpose protocols have the burden of proof: they have to show, based on realistic assumptions and real-world measurements, that more general techniques don't work in an interesting set of cases. Just saying that you can construct a network where a meter wants to tell its life story every ten seconds to 1000 light bulbs equipped with 2-bit processors, all on the same PAN, isn't too helpful unless this is a realistic deployment scenario. (I'm not saying anybody has done this so far, but we've come close.) In particular, I'd like to see observation-based estimates of message and bit rates for likely deployment scenarios. Even a 20 kb/s network can shovel 150,000 bytes a minute; that's a lot of air time for meters to fill.
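
To make that concrete, here's a back-of-envelope sketch in Python; the deployment numbers (1000 meters, one 50-byte reading every 15 minutes) are assumptions of mine, not measurements:

    # Raw capacity of a slow link vs. an assumed metering load.
    link_rate_bps = 20000                  # 20 kb/s radio link
    capacity = link_rate_bps * 60 / 8      # bytes per minute
    print("capacity: %d bytes/minute" % capacity)          # 150000

    meters = 1000                          # assumed deployment size
    reading_bytes = 50                     # assumed report size
    interval_min = 15                      # assumed reporting interval
    offered = meters * reading_bytes / interval_min        # bytes per minute
    print("offered: %d bytes/minute (%.1f%% of capacity)"
          % (offered, 100.0 * offered / capacity))         # ~3333, ~2.2%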

From experience, binary and other special-purpose encodings often save only 20 to 50%, i.e., not enough to make a fundamental difference, and the processing effort is also often exaggerated. This is particularly true once you add crypto to the mix, which tends to completely dominate simple processing like embedded web servers.
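
A quick illustration of both points, using a made-up meter reading; the header and MIC sizes below are assumptions for a compressed 802.15.4/6LoWPAN/UDP frame, not measurements:

    import json, struct

    reading = {"id": 4711, "ts": 1257811200, "kwh": 1234.5, "v": 230}
    text = json.dumps(reading, separators=(",", ":")).encode()
    binary = struct.pack("!IIfH", reading["id"], reading["ts"],
                         reading["kwh"], reading["v"])
    print(len(text), len(binary))          # roughly 48 vs. 14 payload bytes

    overhead = 40 + 16                     # assumed headers + crypto MIC
    for name, payload in (("text", len(text)), ("binary", len(binary))):
        print(name, payload + overhead, "bytes on air")
    # ~104 vs. ~70 bytes on air: about a third saved per frame, not the
    # 70% the payload-only comparison suggests.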

We are unlikely to get away with protocols that ignore security or rely on non-crypto mechanisms going forward, so focusing on whether it takes 1 or 10 bytes to turn on a light bulb isn't likely to help much if the signature takes 500 bytes.
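
The arithmetic, using typical signature sizes (a raw ECDSA P-256 signature is about 64 bytes; RSA-2048 and RSA-4096 signatures are 256 and 512 bytes):

    for payload in (1, 10):                # "turn on a light bulb" command sizes
        for name, sig in (("ECDSA-P256", 64), ("RSA-2048", 256),
                          ("RSA-4096", 512)):
            total = payload + sig
            print("%2d-byte command + %-10s = %3d bytes (%d%% signature)"
                  % (payload, name, total, 100 * sig // total))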

I also agree about the deployed-base fallacy. In real life, much of that base is never upgraded, often because remote upgrades simply aren't supported. By the time you send a technician to the household to swap the EPROM, the hardware cost is in the noise compared to the cost of the truck roll.

In such cases, it's much better to run the old, insecure and inflexible protocol in parallel with the new version for more capable hardware.

Henning

On Nov 10, 2009, at 1:20 AM, Shidan wrote:

+100

On Tue, Nov 10, 2009 at 1:12 AM, Kris Pister <[email protected]> wrote:
> Abandoning the installed base just goes to reinforce the idea
> that IP isn't an appropriate technology for things.

Michael - I think that we have the same goal, but I disagree with that statement. I think that re-writing every protocol from discovery through transport to applications, from scratch, is what reinforces the idea that IP isn't an appropriate technology for things.

I realize that there are pressures from an installed base, but at this point it's a tiny fraction of the overall potential. If we let the 1% installed base dictate the path for the next 99%, we should do our best to ensure that it's the right path.

ksjp


Stuber, Michael wrote:
Life may be getting better, but that doesn't mean we have the wrong target. Abandoning the installed base just goes to reinforce the idea that IP isn't an appropriate technology for things. Qualification of parts in appliances, meters, and cars may take much longer than in other consumer electronics. There are lots of products shipping today with 802.15.4 chips that do not match the (nicer) specs you outline below. If we want to enable IP everywhere, we must acknowledge that small-footprint parts are an important part of "everywhere."

That said, I too am in favor of exploring optimized DHCP. It would provide the flexibility of living in an edge router or being centralized, and it is a well-defined, well-characterized protocol.
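
For a sense of what an optimized profile would actually be trimming: a DHCPv6 message (RFC 3315) is already just a 1-byte message type, a 3-byte transaction ID, and TLV options. A sketch of a minimal Solicit, with a dummy link-layer DUID as the client identifier:

    import os, struct

    SOLICIT, OPTION_CLIENTID = 1, 1
    duid = struct.pack("!HH", 3, 1) + bytes(6)  # DUID-LL, hw type 1, dummy EUI-48
    msg = bytes([SOLICIT]) + os.urandom(3)      # msg-type + transaction-id
    msg += struct.pack("!HH", OPTION_CLIENTID, len(duid)) + duid
    print(len(msg), "bytes before any 6lowpan-specific compression")  # 18
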
-----Original Message-----
From: [email protected] [mailto:[email protected]] On
Behalf Of Kris Pister
Sent: Monday, November 09, 2009 6:53 PM
To: Jonathan Hui
Cc: Carsten Bormann; 6lowpan; [email protected]
Subject: [6lowpan] hardware trends, new vs. existing protocols [Re: 4861
usage in LLNs]

+1 in favor of using optimized DHCP if possible (no opinion on 'if possible'), rather than inventing something new.

As I've shared with several people in private emails recently, it's pretty clear that lowpan nodes are going to get more capable moving forward, not less. Why? Radios don't scale down in area when you scale CMOS processes. Today's 15.4 single-chip nodes are made in technologies that are several (maybe five?) generations behind the cutting edge. This makes economic sense because the sales volumes don't support the need for expensive mask sets yet. When there's a volume application and someone puts a 5 mm² radio into modern CMOS, it just doesn't make sense to put 48 kB of ROM/flash and 10 kB of RAM next to it. You'll put hundreds of kB of ROM/flash and many tens of kB of RAM, and the radio will still be by far the biggest thing on the chip.
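
As a rough sanity check on the area argument (cell sizes below are ballpark assumptions for a ~65 nm process, not vendor data):

    sram_cell_um2, flash_cell_um2 = 0.5, 0.1   # assumed 6T SRAM / NOR flash cells
    radio_mm2 = 5.0

    def mem_mm2(kbytes, cell_um2):
        return kbytes * 1024 * 8 * cell_um2 / 1e6   # bits * cell area -> mm^2

    for ram_kb, flash_kb in ((10, 48), (64, 256)):
        area = mem_mm2(ram_kb, sram_cell_um2) + mem_mm2(flash_kb, flash_cell_um2)
        print("%3d kB RAM + %3d kB flash = %.2f mm^2 vs. %.1f mm^2 radio"
              % (ram_kb, flash_kb, area, radio_mm2))
    # Even the larger configuration is well under 1 mm^2; the radio dominates.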

Even the 48k/10k node from the (very nice) 6lowapp BoF presentation is not up to commercial standards; it's a five-year-old, expensive, academic platform, great for its time, but old. Single-chip nodes from Jennic, Freescale, etc. have ~200 kB ROM/flash + 128 kB RAM and a 32-bit processor, and they aren't made in cutting-edge processes yet either. Life is just going to get better. Let's try to find the smallest optimized set of *existing* protocols that serve our needs and that run on the existing new low-cost hardware (not the old workhorses). Let's invent the absolute minimum of new "optimized" protocols, because it's not at all clear to me that we are optimizing the right things at this point. The less we invent, the broader the set of applications and application programmers we address.

ksjp

Jonathan Hui wrote:

On Nov 9, 2009, at 5:50 PM, Carsten Bormann wrote:


Again, entirely getting rid of a function is always the best optimization.
Can we do that for DAD?

The *need* for DAD is the core question for me. As specified within 6lowpan-nd now, IPv6 addresses are maintained using a centralized protocol. That protocol looks and smells like DHCP - there's request/response, lease times, relays. The whiteboard may also administratively assign addresses. So in the end, it's not clear to me why we would need to *detect* duplicates when we essentially *avoid* them from the beginning.
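
The avoid-vs.-detect point in miniature; a hypothetical allocator sketch, not the 6lowpan-nd whiteboard protocol itself:

    class AddressServer:
        """Hands out addresses centrally, so duplicates never occur."""
        def __init__(self, prefix="2001:db8::"):
            self.prefix, self.leases, self.next_iid = prefix, {}, 1

        def request(self, client_id):
            if client_id in self.leases:            # renewal is idempotent
                return self.leases[client_id]
            addr = "%s%x" % (self.prefix, self.next_iid)
            self.next_iid += 1
            self.leases[client_id] = addr
            return addr

    server = AddressServer()
    assert server.request("node-a") != server.request("node-b")
    # Nothing to *detect*: duplicates were avoided by construction.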

I've voiced my comment several times over the past 1+ years and presented a draft that argues for the use of optimized DHCP in Dublin, so this is not new from my end. The fact that the current 6lowpan-nd document has evolved towards using DHCP-like mechanisms is not an accident. But if what we do is DHCP-like, it would seem to make sense to utilize existing DHCP infrastructure rather than defining something new.

--
Jonathan Hui


_______________________________________________
6lowpan mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/6lowpan

_______________________________________________
6lowapp mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/6lowapp
