--- In [email protected], Cornelius Claussen <[EMAIL PROTECTED]> wrote:
> /* use the kernels delay routine */
> #define i2c_delay( usecs ) udelay( 5*usecs )
> 
> is it the correct way of doing this? 

Well, this is one way of doing it.

I've modified the driver so that it runs at exactly the standard I2C
speed of 100 kHz (you can verify this with a scope; screenshots of the
scope traces are also part of the document I wrote), because most
modern devices adhere to this standard (some even go up to 400 kHz,
or even 1 Mbit/s).

Have you already used a scope to look at the signals?

I've been using my driver for months now and have never had any
problems at 100 kHz.

You could indeed make the I2C speed configurable via an option, but
then you would sacrifice I2C speed for the sake of one or two devices
that are too slow: all the other devices would pay a penalty because
of a few, and that would be a pity...

What I find strange: if the device cannot keep up, it should stretch
the clock so that the next byte cannot be sent until the device has
processed the data.

Instead of changing the delay statically, I could add an option to
the IOCTL routine that, for instance, halves the speed.  You could
then switch to half speed for those slow devices and revert to full
speed once the communication is done.
That would be better than halving the speed "forever"...

I will see how I can change this, but I'm currently busy writing a
set of C++ I2C HAL classes on top of the direct IOCTL calls.  Adding
such an option is not a lot of work, so I can take it into account...

Does anyone have a better idea?

Best rgds,

--Geert
