On Sun, Apr 02, 2000 at 03:53:16PM -0700, Shawn T. Rutledge wrote:
> On Sun, Apr 02, 2000 at 01:26:37PM -0700, Doug Royer wrote:
> > The largest problem with WAP is that they are now concerned that
> > they have made a big mistake - they want the internet protocols
> > so that they can interoperate with the internet. So they are
> > backtracking and going with IP protocols.
> 
> Funny how the technically best solution often loses to convenience...

        Disclaimer: I don't speak for my employer; my opinions
                    are not my employer's opinions.
        (My employer thinks that WAP is *the* best thing...)

        Sorry, IMO WAP has never been "technically best".
        - WML is loosely based on XML, and borrows some ideas from HTML,
          but of course there are concepts which just can't be translated
          through nicely.
        - Instead of running TCP at the mobile appliance, they use UDP
          to the WAP gateway (I am not sure of this, though), possibly to
          be able to ignore all the myriad rules about TCP backoffs, etc.
          (And to avoid needing some 60 kB of code for a TCP
          implementation..)
        - They need a compression scheme because they think that current
          HTML-and-graphics-bloated web pages are not processable by
          limited-memory mobile appliances.  (That is a failure of web
          designers who think that they can have 1000+ kB of stuff in a
          single web page, and of users who think that pretty graphics is
          an end in itself..)
        - Instead of a general-purpose compression scheme, they use an
          XML/WML-specific tokenization system which becomes suboptimal
          the instant WML gets new attributes.
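        That last point can be sketched roughly like this: a sketch of
        WML-specific tokenization in the spirit of the WAP approach, NOT
        the actual spec -- the token table here is made up.  Known tags
        compress to one byte; anything the table does not know (say, a
        newly added attribute) falls back to a literal string, which is
        exactly where the scheme turns suboptimal.

```python
# Hypothetical token table; real WAP tokenization uses spec-defined codes.
TOKENS = {"<wml>": 0x7F, "<p>": 0x20, "</p>": 0x01, "</wml>": 0x01}

def tokenize(markup):
    """Crudely tokenize whitespace-separated markup into a byte stream."""
    out = []
    for piece in markup.split():
        if piece in TOKENS:
            out.append(TOKENS[piece])                  # known tag: 1 byte
        else:
            # unknown token: literal-string escape, no compression at all
            out.extend([0x03] + [ord(c) for c in piece] + [0x00])
    return bytes(out)
```

        Four known tags cost four bytes, but a single unknown word costs
        its full length plus framing -- new vocabulary defeats the scheme.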

        Bit transfer in a wireless environment is, in the general case,
        TERRIBLE.  An IEEE 802.11 LAN sometimes works 20 meters from the
        base station unit, but larger-scale things like digital mobile
        phones tend to have a terrible time getting through a single
        500-byte UDP message, let alone a bunch of 1500-byte TCP
        segments.  Even if the link layer (anything below the PPP
        framing) were to do "reliable transport" (e.g. V.42, a.k.a.
        LAPM), latency variation between two TCP segments would render
        TCP flows quite slow, with e.g. retransmissions crowding the
        link, and timer backoff shooting off into the wild yonder.  With
        the advent of abominations like GPRS, what little was
        guaranteeable about the message delivery delay (latency) in good
        conditions (low bit error rate) goes completely out of the
        window...  (I should know, I have done sufficiently many hours
        of GSM DATA calls to see both good and bad performance.)

        What GPRS especially needs is a proxy between the GPRS side and
        the general Internet, which hides the weird variance of packet
        transport delays on the GPRS side from the general Internet,
        thus improving TCP throughput.

        Such a proxy acks packets to the sender during the flow, and
        pushes data to the recipient with different timer rules (or
        perhaps with a tighter binding to the GPRS link layer, so that
        when there is data, it activates the flow and pushes the buffer
        out immediately).  FINs the proxy should not ack itself, but
        should let the appliance do it, once all buffered data in the
        flow has been delivered.
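        A toy model of that split-connection behaviour, purely
        illustrative (a real proxy of this kind lives inside the TCP
        stack, not in application code): data segments are acked to the
        Internet-side sender immediately, but the FIN is only acked once
        every buffered byte has been delivered to the slow GPRS side.

```python
class SplitProxy:
    """Toy split-connection proxy: early-ACK data, hold the FIN ACK."""

    def __init__(self):
        self.buffer = []         # segments accepted but not yet delivered
        self.fin_pending = False
        self.acks = []           # what we told the fast-side sender

    def recv_segment(self, data):
        self.buffer.append(data)
        self.acks.append(("ACK", data))   # early ACK hides GPRS latency

    def recv_fin(self):
        self.fin_pending = True
        self._maybe_ack_fin()

    def deliver_one(self):
        """Called when the GPRS link layer can take another segment."""
        if self.buffer:
            delivered = self.buffer.pop(0)
            self._maybe_ack_fin()
            return delivered

    def _maybe_ack_fin(self):
        # FIN is acknowledged only after the flow is fully drained.
        if self.fin_pending and not self.buffer:
            self.acks.append(("ACK", "FIN"))
```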

> kindof like computer power supplies... I've always thought it'd be nice if
> they all came with aux. DC power inputs, so that a UPS could just simply be a
> battery.  But instead we need these complicated inverter things to convert
> to 120V AC because that's all the power supply can accommodate.

        Right, and with "universal power supplies" (capable of using AC
        voltages from 110V to 250V), what would the battery voltage be?

        Take also into consideration that in a modern switching PSU
        there is a line input filter, a low-pass intended to prevent
        high-frequency interference from the flyback (or whatever the
        switch topology is) from reaching the feeder network.

        Having a parallel DC feed port to which 150V - 350V DC could be
        fed would be neat indeed, presuming some connector could be
        standardized for it.  A polarity-free DC/AC port would be better.
        Then a switcher upping a 24V battery to a 240V square wave at
        10 kHz would work just fine.
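        Back-of-envelope numbers for that switcher; the load power and
        efficiency figures below are made-up example values, not from
        any real PSU:

```python
V_BATT = 24.0    # battery voltage (V)
V_OUT  = 240.0   # square-wave output (V)
P_LOAD = 300.0   # hypothetical PSU draw (W)
EFF    = 0.85    # assumed converter efficiency

turns_ratio = V_OUT / V_BATT         # transformer step-up: 10:1
i_batt = P_LOAD / (V_BATT * EFF)     # battery-side current, ~14.7 A
```

        The interesting figure is the battery-side current: even a
        modest load means double-digit amps at 24V, which sizes the
        switcher's primary-side wiring.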

...
>  Or, like LCD monitors...  [digitality, D/A/D conversions, etc..]

        There are at least 4 digital connector specifications for that
        purpose.  Maybe one of them will catch on.
        (And of course the display cards need to get that connector too,
         so that your favourite xyz gizmo card can drive an LCD
         digitally.. .. oops, only some rare ones have a digital
         interface?)

> Anyway... yes it does seem like many Internet protocols are not optimized
> for packet operation.  SMTP is a really good example; the sender expects 
> to carry on a verbose, real-time conversation directly with the recipient,
> regardless how many hops are between, where a store-and-forward scheme
> makes more sense on just about any network, not just the most unreliable
> ones.

        When a message reaches an environment where link connectivity
        does not (in real life) allow direct end-to-end interactive
        connectivity, nothing really prevents one from gatewaying
        from general Internet rules/behaviour to hop-by-hop routing
        with e.g. static or periodically revised routing tables à la
        UUCP or BITNET.
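        A minimal sketch of such hop-by-hop forwarding with a static
        table; the domains and gateway hosts here are invented for
        illustration:

```python
# Static next-hop table, UUCP-style: one entry can cover a whole
# subtree of nodes behind a gateway.  All names are hypothetical.
ROUTES = {
    "bbsnet": "gw.bbsnet.example",
    "bitnet": "gw.bitnet.example",
}
DEFAULT_HOP = "smart-host.example"   # everything else goes here

def next_hop(address):
    """Pick where to forward a message, one hop at a time."""
    domain = address.rsplit("@", 1)[1]
    top = domain.rsplit(".", 1)[-1]          # rightmost label
    return ROUTES.get(top, DEFAULT_HOP)
```

        Each node only needs to know its immediate neighbours; the
        message crawls toward its destination one table lookup at a
        time, no end-to-end connectivity required.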

        On BITNET we had (and still have, on its leftover nodes) an
        email transfer called BSMTP3.  The gatewaying MTA put
        SMTP-session-like input (HELO+MAIL FROM+RCPT TO+DATA) into a
        file, and then sent that file as a virtual card deck to the
        recipient node via node-by-node store and forward.  At the
        recipient system the arriving message deck was processed to
        pick out all of the SMTP data.
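        The idea is simply a whole SMTP dialogue written into a file.
        A sketch of building such a batch file (addresses and the HELO
        name are invented; the exact BSMTP3 framing differed in detail):

```python
def make_bsmtp(sender, recipients, body, helo="gw.example"):
    """Serialize one message as a batch of SMTP commands in a file."""
    lines = ["HELO " + helo, "MAIL FROM:<%s>" % sender]
    lines += ["RCPT TO:<%s>" % r for r in recipients]
    lines += ["DATA"]
    lines += body.splitlines()
    lines += [".", "QUIT"]              # "." terminates DATA, as in SMTP
    return "\n".join(lines) + "\n"
```

        The receiving end just replays the deck against its local
        mailer, no live end-to-end session needed.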

        Sure, it can't supply you with feed-time error reporting, but is
        that necessarily such a loss?

>    And this is important to me for my efforts at getting an email 
> gateway going.  Local BBS operators need a gateway to which they can forward
> outgoing email.  The best solution proposed so far is to use the convention
> [EMAIL PROTECTED]; and it turns out my sendmail is capable
> of decoding that, and forwarding the mail to [EMAIL PROTECTED]  I was glad
> to find it wasn't going to be any extra trouble.  But the route has to be
> explicitly defined, how primitive.  It'd be nice if it could determine the
> route from the MX records, one hop or hop-sequence at a time.  Maybe this 
> is possible, I'm not sure.

        There are many ways to describe routing to MTAs; some have
        lots of different methods, others seemingly none (or just one).
        (A BBS is a sort of MTA in my thinking.)

> I've also grown fond of another idea I read about a couple weeks ago... a
> decentralized alternative to the web.  Each node has a stack of name/value
...
        This idea is at least 6 years old; the original inventors of the
        HTTP protocol had in mind that we need (and we do!) a way to
        identify resources (e.g. HTML pages) independent of their
        locations.

        It never gathered sufficient interest at the IETF (it is a HARD
        problem) to create the infrastructure for it, thus we are stuck
        with the abomination of using the DNS system as the object
        directory.

        How many times have you seen a movie advertisement which says
                www.xyzthemovie.com
        instead of  www.distributor.com/movies/xyz/  ???

>   _______                                     http://www.bigfoot.com/~ecloud
>  (_  | |_)  [EMAIL PROTECTED]   finger [EMAIL PROTECTED]

/Matti Aarnio <[EMAIL PROTECTED]>
