For those of us who were around shaping the protocols in the 1980s, and the
people before us (before standards like RS-232), a lot of the
"specifications" came out of "observation of implementations we managed to
get to work" rather than "implement this spec". A lot of that was due to
extreme memory constraints (in my case: a multi-tasking operating system, a
187 kbps serial protocol, and an interpreted programming language with
floating-point ops and user applications, all in 2kB RAM and 8kB EPROM) and
a general lack of information about what other people were doing, shared
experiences and so on.

And there were many "innovative" ways to squeeze just a little bit extra
out of the hardware, resulting in "hard to understand" consequences. Bit
packing was a typical one: multiple functions packed into a single byte.
Look at page 14 of https://www.nxp.com/docs/en/data-sheet/80C31_80C32.pdf
and read up on the "UART Enhanced Mode". We used this, i.e. 9 data bits, no
parity, and clever use of the address and mask registers to create a
slave-to-slave direct protocol, where the master's role was to signal which
slave "owned" the cable. Yeah, within that 8kB ROM limitation (I think the
protocol was about 1kB of ROM) and something like 150 bytes of RAM for the
comm protocol.
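[Editor's note] The address-and-mask trick the datasheet calls automatic
address recognition can be sketched in a few lines. This is only a model of
the matching rule, not the original protocol code; the register names SADDR
and SADEN are taken from that MCU family's documentation:

```python
def addr_match(received: int, saddr: int, saden: int) -> bool:
    """True if a 9-bit-mode address byte is accepted by this slave.

    SADEN acts as a mask: bits set to 1 in SADEN must match SADDR,
    bits set to 0 are "don't care". A sketch of the enhanced-UART
    automatic address recognition, done here in software.
    """
    return (received & saden) == (saddr & saden)

# Two slaves sharing the high nibble 0x5x can still be addressed
# individually or as a group, purely by the choice of mask bits:
SLAVE_A = (0x51, 0xFF)   # matches only address byte 0x51
SLAVE_B = (0x52, 0xFF)   # matches only address byte 0x52
GROUP   = (0x50, 0xF0)   # low nibble is don't-care: any 0x5x matches
```

On the 8031/32 this comparison happened in hardware, so a slave slept
through frames addressed to its peers at zero CPU cost.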

Could you implement a device compatible with this using PLC4X and modern
hardware (i.e. no 8031/32 co-processor)? Possibly, but bit-banging is
needed to support the 9 data bits (plus start and stop bits), and it costs
an awful lot of CPU cycles for something that was automatic on one of the
slowest long-lived microcontrollers ever.
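[Editor's note] Bit-banging means generating every bit of the frame in
software. A minimal sketch of the bit sequence for one such frame, assuming
standard UART conventions (LSB first, start bit low, stop bit high, with
the 9th bit carrying the address/data flag):

```python
def frame_9bit(data: int, ninth: int) -> list[int]:
    """Bit sequence for one 9-bit UART frame: start bit (0), nine data
    bits LSB-first with the 9th bit last, then a stop bit (1).
    A sketch of what a software (bit-banged) UART has to clock out."""
    bits = [0]                                   # start bit
    bits += [(data >> i) & 1 for i in range(8)]  # D0..D7, LSB first
    bits.append(ninth & 1)                       # 9th bit (addr/data flag)
    bits.append(1)                               # stop bit
    return bits
```

Each of these 11 bits has to be driven onto the line with tight timing,
which is where the "awful lot of CPU cycles" goes.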

My point was only to highlight that some of the strange things you see in
protocols today have their roots in pre-standardization days. Today no one
would go down that route, because hardware costs nothing now (8031 + 8kB
EPROM + 2kB static RAM + battery backup => ~$50 in 1983 currency) and the
longevity of software is more important.


On Sun, Apr 12, 2020 at 10:10 PM Christofer Dutz <christofer.d...@c-ware.de>
wrote:
> Hi Lukasz,
> I think it really gets tricky when using BE together with field sizes
> that aren't whole bytes ... I remember in the Firmata protocol there were
> some bitmasks and then a 10-bit uint as BE ... now it really got tricky,
> as the specs were written from the point of view of "you read 16 bits BE
> and then the first 6 bits mean XYZ" instead of describing how the bits
> actually travel over the wire.
> Chris
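[Editor's note] The two readings Chris contrasts can be made concrete. A
sketch (not Firmata's actual field layout) of the spec-speak "read 16 bits
BE, the first 6 bits mean XYZ": the "first" bits turn out to be the most
significant bits of the first byte on the wire:

```python
def first6_of_be16(wire: bytes) -> int:
    """Decode per the spec's phrasing: read 16 bits big-endian,
    then take the 'first' (most significant) 6 bits."""
    value = int.from_bytes(wire[:2], "big")
    return value >> 10   # top 6 of the 16 bits

# Equivalently, straight off the wire: the field is wire[0] >> 2 --
# which is the wire-order description the specs tended to omit.
```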
> Am 11.04.20, 01:21 schrieb "Łukasz Dywicki" <l...@code-house.org>:
>     I've made some progress on this topic by modifying mspec to allow a
>     'little endian' flag on fields. This moved me on to the next issue:
>     a whole type encoded little-endian.
>     In the ADS driver such a type is State, which has 2 bytes and uses 8
>     bits for various flags.
>     There are two cases which require different approaches: reading and
>     writing. For reading we need to swap N bytes based on the type
>     length. For writing we need to allocate a buffer of N bytes and swap
>     them before writing.
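[Editor's note] The two swap cases described above can be sketched in a few
lines. This is only a model of the idea, not PLC4X's generated code:

```python
def read_le(buf: bytes, nbytes: int) -> bytes:
    """Reading: take nbytes off the wire and reverse them, so the
    existing big-endian field parsers can run on the result."""
    return bytes(reversed(buf[:nbytes]))

def write_le(field_bytes: bytes) -> bytes:
    """Writing: serialize the fields big-endian into a scratch buffer
    of N bytes, then reverse it before it goes out on the wire."""
    return bytes(reversed(field_bytes))
```

The two operations are inverses, so a round trip leaves the bytes intact.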
>     I am stuck now with the FreeMarker templates and bit-io.
>     Cheers,
>     Łukasz
>     On 10.04.2020 17:57, Łukasz Dywicki wrote:
>     > I am doing some tests of ADS serialization.
>     >
>     > I've run into some trouble with the payload generated by the new
>     > driver. I'm not sure if that's my fault or the generated code.
>     >
>     > I compared what Wireshark shows against how the ADS structures are
>     > parsed, and I think there is a gap. For example, AMS port number
>     > 10000 is read as 4135 (0x1027).
>     >
>     > Obviously I used the wrong structures when implementing the
>     > protocol logic in the first place, but now I am uncertain of how
>     > fields are encoded. How do we mark a field as little-endian when
>     > the rest of the payload is big-endian? Do we have `uint_le`?
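[Editor's note] The 4135 above is the classic endianness symptom: the same
two wire bytes decode to 4135 when read big-endian, but to 10000 (a common
default AMS port) when read little-endian:

```python
wire = bytes([0x10, 0x27])              # the two bytes as captured on the wire
as_be = int.from_bytes(wire, "big")     # 0x1027 = 4135  (the mistaken read)
as_le = int.from_bytes(wire, "little")  # 0x2710 = 10000 (the actual port)
```

So a hypothetical `uint_le` marking on just that field, with the rest of
the payload staying big-endian, would resolve exactly this mismatch.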
>     >
>     > As far as I remember, the route-creation logic I was tracking last
>     > week used a combination of LE and BE.
>     >
>     > Best regards,
>     > Łukasz
>     >
