Maarten ter Huurne  <[EMAIL PROTECTED]>  wrote:


> What you call an 'extra' signal is just as much part of the signal
> as the data signals are. Depending on the encoding, it may not even
> be possible to say "these bits are data and these bits are timing".

I agree completely.


> Take for example the encoding I proposed on this list a while ago:
> There are 2 signal bits: D0 and D1. Flipping of D0 means "a 0 is
> transmitted". Flipping of D1 means "a 1 is transmitted".
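
Just to illustrate that encoding (a little Python simulation with
made-up names, purely hypothetical; the real thing would of course
be machine code polling the joystick port):

```python
def encode(bits, d0=0, d1=0):
    """Turn a bit sequence into successive (D0, D1) line states:
    flipping D0 transmits a 0, flipping D1 transmits a 1."""
    states = [(d0, d1)]
    for bit in bits:
        if bit:
            d1 ^= 1
        else:
            d0 ^= 1
        states.append((d0, d1))
    return states

def decode(states):
    """Recover the bits by comparing each sampled state with the
    previous one: the transitions ARE the data."""
    bits = []
    d0, d1 = states[0]
    for n0, n1 in states[1:]:
        if n0 != d0:
            bits.append(0)
        if n1 != d1:
            bits.append(1)
        d0, d1 = n0, n1
    return bits
```

Since every bit flips exactly one line, decode(encode(bits)) gives
the original bits back - provided the receiver sees every state.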

With the above example, the transmitted information depends not on 
the state of a signal at certain times, but on the TRANSITIONS of 
the signal state. The receiver can detect a transition by comparing 
the current state of a signal with a previously seen state. But 
consider this:

Suppose the sender toggles a signal, sending one bit, then toggles 
it again, sending another bit. And then suppose the receiver 
somehow misses these 2 transitions, and sees no change at all. If 
sender and receiver work at different speeds, or check the signals 
at different times, this could easily happen.

To prevent this, they might check the state of a signal against 
another signal (data/clock/changing function, doesn't matter). That 
would be a SYNCHRONOUS communication method / protocol. I assume 
that's not the idea here.

Another way might be to choose the timing such that the receiver(s) 
will ALWAYS have enough time to check the status of the signal(s). 
That would use a fixed timing when sending, without using another 
signal, and would therefore be an ASYNCHRONOUS method. I assume 
that's not meant here either.

Yet another method would be for the receiver to return a signal to 
the sender, confirming that a sent 'unit' of information (here: 
probably 1 bit) was received. In this case: using the "ACK" signal. 
That would again be a SYNCHRONOUS method (every transfer 
synchronised with the returned acknowledge). It seems that's the 
idea with JoyNet.
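
A rough sketch of that bit-by-bit handshake (again a Python
simulation; Wire, make_receiver and send are names I made up, and
on real hardware the sender would busy-wait on the ACK line instead
of calling the receiver directly):

```python
class Wire:
    """Simulated lines between two machines: 2 data lines, ACK back."""
    def __init__(self):
        self.d0 = self.d1 = self.ack = 0

def make_receiver(wire):
    received = []
    seen = {"d0": 0, "d1": 0}
    def step():
        # detect which data line flipped since the last look
        if wire.d0 != seen["d0"]:
            received.append(0); seen["d0"] = wire.d0
        if wire.d1 != seen["d1"]:
            received.append(1); seen["d1"] = wire.d1
        wire.ack ^= 1           # "got it, send the next bit now"
    return step, received

def send(wire, bits, receiver_step):
    for bit in bits:
        if bit:
            wire.d1 ^= 1        # a D1 flip transmits a 1
        else:
            wire.d0 ^= 1        # a D0 flip transmits a 0
        prev_ack = wire.ack
        receiver_step()         # on real hardware: poll here...
        assert wire.ack != prev_ack   # ...until ACK has flipped
```

Every transfer is paced by the returned ACK, so sender and receiver
speed don't matter - which is exactly what makes it synchronous in
the sense above.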


> 
> Since the timing information is part of the communicated message,
> the sender and receiver need not be synchronized.

The matter synchronous <-> asynchronous has NOTHING to do with the 
sender and receiver being synchronised somehow, if you mean they 
would be 'working' on the same thing at the same time. That of 
course need not be the case, but the protocol could be synchronous 
nevertheless.


But they ALWAYS have to be synchronised with the SIGNAL somehow. The 
receiver becomes synchronised at the point in time where it has 
determined the start of a particular unit of information. In this 
case, seeing a transition on one of these bit lines would make it 
synchronised with the bit-stream, but still not tell the receiver 
where a 'pack' of information started. That might be signalled by a 
special signal (here: not available), or by a sort of 'escape 
sequence' (like toggling the 2 data lines above simultaneously, or 
a special bit sequence that does not occur in normal data).


The sender also has to be synchronised with the receiver. For 
instance, by only sending when any receiver(s) is known to be 
ready. Or, when assuming receivers are always ready, such 
synchronisation would be immediate.

In this case, the transmitter would be synchronised with the 
receiver at the moment it receives acknowledgement that the 
receiver is ready for (a) new bit(s). Here, that is when the 
returned "ACK" is seen, indicating: "got it, send the next bits 
now". 


> I just strongly recommend a delay-insensitive asynchronous
> communication protocol, because that will give far less trouble. For
> example, it is insensitive to cable lengths and it will run at any
> clock speed.

I agree, that would make it much easier and more 'robust' to use in 
practice. Ehh, delay-insensitive -> not using some fixed timing -> 
thus a SYNCHRONOUS (sorry   ;-) method / protocol. NOT meaning the 
computers involved need to do things simultaneously...


For any application, the best protocol depends HIGHLY on what you 
use the link for:


If it's *only* data-transfer (moving the contents of a harddisk to 
another computer or such), you'd best either use as many signals 
of a joystick port as possible, allowing maximum transfer speed, OR 
use as few signals as you feel you need for programming it, to make 
the connection cable as simple as possible.

> I too think reliability is important. But you suggested 32 bit CRC
> without any proper argument why that error detection would be the
> right one. Besides, error detection depends on message size as well.
> One 32 bit CRC per kilobyte offers far better error detection than
> one 32 bit CRC per megabyte.

The message size depends on how many errors occur (I hope 'few'), 
and on the overhead that responding to an error by resending a 
block would give. For data-transfer, I would take anything 
convenient that is far bigger than the checksum itself, so that the 
overhead is small. For instance, with a CRC-32 (4 bytes), take 
blocks of >>4 bytes, like a couple of KB.
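
The arithmetic is simple enough (block sizes here picked just as an
example):

```python
CRC_SIZE = 4  # bytes, for a CRC-32

# overhead of appending one CRC-32 to blocks of various sizes
for block in (64, 1024, 4096):
    overhead = 100.0 * CRC_SIZE / (block + CRC_SIZE)
    print(f"{block:5d}-byte blocks: {overhead:.2f}% checksum overhead")
```

So at a couple of KB per block, the CRC itself costs about a tenth
of a percent of the transfer.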

When you use a simple checksum and, for instance, a few bits get 
'eaten' in transport, the result might easily give the same 
checksum, when these bits were in the same positions in different 
bytes/words. In the past (long ago, computers have been in use for 
several decades by now), there has been some scientific research on 
this, and a CRC (cyclic redundancy check) PROVED to be one of the 
better methods. Any single bit change in the data changes any 
number of bits in the CRC, with a very complex relation. And this 
complex relation helps to filter out a great deal of such 'simple' 
transmission errors, where other checksum methods wouldn't.
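
You can see this failure mode directly (Python demo; the corrupted
bytes and bit position are just an arbitrary example):

```python
import zlib

def xor_checksum(data):
    """About the simplest checksum there is: XOR of all bytes."""
    c = 0
    for b in data:
        c ^= b
    return c

original  = bytearray(b"JoyNet test block")
corrupted = bytearray(original)
corrupted[2] ^= 0x08    # flip bit 3 of byte 2...
corrupted[9] ^= 0x08    # ...and bit 3 of byte 9: same position!

print(xor_checksum(original) == xor_checksum(corrupted))  # True: missed
print(zlib.crc32(original)  == zlib.crc32(corrupted))     # False: caught
```

The two flips cancel out in the XOR checksum, but not in the CRC.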

A CRC-32 is just another step better at this than a CRC-16 or so. 
Calculating it doesn't require much extra time compared to smaller 
CRCs, because in practice it just comes down to looking up table 
elements. With a CRC-32, these table(s?) just get somewhat bigger 
(our MSX's can handle it   :-)).
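
The table approach, sketched in Python (the same shift/XOR per byte
translates directly to Z80 code, with the 256-entry table of 32-bit
values taking 1 KB):

```python
import zlib

# Build the 256-entry table once (reflected CRC-32, polynomial
# 0xEDB88320 -- the variant used by ZIP, PNG and zlib).
TABLE = []
for i in range(256):
    c = i
    for _ in range(8):
        c = (c >> 1) ^ 0xEDB88320 if c & 1 else c >> 1
    TABLE.append(c)

def crc32(data):
    """One table lookup plus a shift/XOR per byte -- that's all."""
    crc = 0xFFFFFFFF
    for b in data:
        crc = TABLE[(crc ^ b) & 0xFF] ^ (crc >> 8)
    return crc ^ 0xFFFFFFFF

print(hex(crc32(b"123456789")))   # 0xcbf43926, the standard check value
```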

So why not use that 'extra protection'?


If the application is network-gaming, it's probably better to rely 
on a robust transfer method than on error checking:
-you might detect an error, but would then need to re-transfer your 
data, taking extra time.
-the error checking itself takes extra time.

My earlier 'oversampling' suggestion still applies here. The effect 
of cables & connections on signals is this: send a 'clean' signal 
in on one end, and you get a 'polluted' signal out on the other 
end. For instance, when you have a stable state of the signals and 
then toggle 1 of them, on the other end this signal might first 
change, then make a short swing back (possibly even to the reverse 
logic level), swing back again, and so forth, a couple of times. 
This is called bouncing. Any transition(s) of signals can also 
influence the 'neighbour' signals in the cable.

This effect has a very complex relation with the type and length of 
cable, the resistances involved in driving the signals, the speed 
at which they change (the time taken to go from 1 to 0 or vice 
versa), the voltage range, etc. etc. For a connection like JoyNet, 
most of these factors are 'unknowns'; you could do some 
calculations, or recommend a certain type of cable, but there are 
no guarantees about what will be used in practice.

For other connectors, this is well defined: floppy drive cables 
have a ground wire between every 2 signals, giving a primitive 
shielding so that signals influence each other less, and a limited 
cable length. For SCSI cables, exact types of cable/connectors are 
defined, and you have those terminators to suppress such effects, 
which should sit at the outer ends of the cable and have exactly 
specified values.

For JoyNet, that won't be the case, so the best way to go about it 
would be to make the software insensitive to such effects.

If the software depends on signal TRANSITIONS, my suggestion of 
'oversampling' would come down to oversampling the transitions of 
signals:

When you saw a 0 earlier, 'watch' until you see a "1".
When you see a "1", check the signal again shortly after (you 
figure out how short after): when it went back to "0" right away, 
consider it noise.
When again "1", check it again: when "0" now, it must have been 
noise.
When still "1": that was a real transition 0 -> 1.

In this example, any transition would be checked 2 extra times. You 
might check it only 1 extra time, or 20 extra times, you decide. 
When using intervals programmed as short as possible, and not 
exaggerating the number of times, the total overhead is minimal, 
but it would still filter out the effects of bad or long cables, or 
of a very short disconnection (the kind of 'noise' you might get 
for instance when a connector is moved without being disconnected).
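
Those steps, sketched in Python over a list of sampled line states
(the function name and the sample data are just made up for the
illustration):

```python
def confirmed_one(samples, extra_checks=2):
    """Scan successive reads of one signal line for a debounced
    0 -> 1 transition. A candidate '1' is re-read `extra_checks`
    times; any bounce back to '0' discards it as noise. Returns
    the index where the transition was confirmed, or None."""
    for i in range(len(samples)):
        if samples[i] == 1:
            window = samples[i:i + extra_checks + 1]
            if len(window) == extra_checks + 1 and all(window):
                return i + extra_checks   # stable: a real transition
            # fell back to '0' within the window: just noise
    return None

# a short bounce at index 2, then a clean transition
print(confirmed_one([0, 0, 1, 0, 0, 1, 1, 1, 0]))  # 7
print(confirmed_one([0, 1, 0, 1, 0]))              # None (all noise)
```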

When programming a network-game application, I suggest simply 
weighing the amount of protection for a transmission against the 
effect a bad transmission would have on the game.

For movements of players, probably minimal, so send it once.

If a player dies (really important), send it multiple times, or: 
send it once, then send a differently coded confirmation 1 or more 
times (following the same idea as with oversampling the signal 
lines).

In any case, implement it such that any receiver could synchronise 
itself with the data sent by the other computers, starting at a 
random point in time. For instance, by using special codes/bit 
sequences that signal the start of a message and never occur 
anywhere in the 'body' of messages.
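
One well-known way to do that is byte stuffing: reserve a start
marker, and 'escape' any occurrence of it inside a message body.
A sketch (marker and escape values are my arbitrary choice here,
borrowed from serial framing conventions):

```python
START = 0x7E   # frame marker: only ever appears at a frame boundary
ESC   = 0x7D   # escape byte

def frame(payload):
    """Wrap a payload so START never occurs inside the body."""
    out = bytearray([START])
    for b in payload:
        if b in (START, ESC):
            out += bytes([ESC, b ^ 0x20])   # escape and mangle it
        else:
            out.append(b)
    return bytes(out)

def deframe(stream):
    """Recover payloads; bytes before the first START are skipped,
    so a receiver can join the stream at a random point in time."""
    frames, cur, esc, in_frame = [], bytearray(), False, False
    for b in stream:
        if b == START:
            if in_frame and cur:
                frames.append(bytes(cur))
            cur, esc, in_frame = bytearray(), False, True
        elif not in_frame:
            continue                  # garbage before synchronisation
        elif esc:
            cur.append(b ^ 0x20); esc = False
        elif b == ESC:
            esc = True
        else:
            cur.append(b)
    if in_frame and cur:
        frames.append(bytes(cur))
    return frames
```

A receiver fed `b"\x55\x33" + frame(payload)` simply discards the
two garbage bytes and resynchronises on the first START marker.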


Personally, for a network-game, I think I would use the MIDI built 
into the Philips Music Module (or a separate cartridge): just send 
a byte once, in 1 programming action, and without any further 
software intervention it gets sent, with little delay, to any 
receiver connected to the MIDI-out.

For data-transfer, I would rather make a 'dedicated' cable, 
optimised for either speed, simple construction (largely 1:1 
connections), or as few wires as possible.


BTW: I checked the 'official' JoyNet pages, and although I doubt 
the practical use of defining such a standard, especially when it 
only defines the cable (that is the simple part, anyone can do 
that), from a hardware point of view I'd say there's no problem 
with what's on those webpages now. Connecting the signals as shown 
shouldn't cause any hardware parts to burn, regardless of what 
programmers do.


Greetings,

Alwin Henseler     ([EMAIL PROTECTED])

http://huizen.dds.nl/~alwinh/msx     (MSX Tech Doc page)
http://www.twente.nl/~cce/index.htm    (Computerclub Enschede)


****
MSX Mailinglist. To unsubscribe, send an email to [EMAIL PROTECTED] and put
in the body (not subject) "unsubscribe msx [EMAIL PROTECTED]" (without the
quotes :-) Problems? contact [EMAIL PROTECTED] (www.stack.nl/~wiebe/mailinglist/)
****
