I'm developing an embedded app under X, but the real app runs on the
Linux frame buffer.  Part of the real hardware pretends to be a USB
keyboard.  Some of the fake key events it sends need to be discarded
if they are too old by the time the app gets around to processing
them; others need to be processed no matter how late they are.  So I'm
relying on the Evas_Event_Key_Down->timestamp field that evas sends to
my callback.  The idea is to compare it to the current time and, for
the events that need age filtering, discard the event if it's too old.
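
Roughly what I have in mind in the callback is below.  AGE_LIMIT,
key_needs_age_filter() and current_time_in_timestamp_units() are
made-up names for this sketch, and the last one is exactly the bit I
can't write portably yet:

#include <Evas.h>

#define AGE_LIMIT 500 /* made up: max age, in whatever units timestamp uses */

static Eina_Bool key_needs_age_filter(const char *keyname);  /* app specific */
static unsigned int current_time_in_timestamp_units(void);   /* the hard part */

static void
_key_down_cb(void *data, Evas *e, Evas_Object *obj, void *event_info)
{
   Evas_Event_Key_Down *ev = event_info;
   unsigned int now = current_time_in_timestamp_units();

   /* drop age-filtered keys that sat in the queue too long */
   if (key_needs_age_filter(ev->keyname) && (now - ev->timestamp) > AGE_LIMIT)
     return;

   /* ... handle the key as normal ... */
}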

From what I can tell, the frame buffer engine sets timestamp to
ecore_time_get(), which is a double: the number of seconds since some
undefined point in time.  But the timestamp field is an unsigned int,
so the double gets implicitly truncated to whole seconds when it's
stored.
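
In other words the effect is something like this (illustrative, not
the actual engine code):

#include <Ecore.h>

void
timestamp_example(void)
{
   double now = ecore_time_get();        /* e.g. 12345.6789 seconds */
   unsigned int ts = (unsigned int)now;  /* becomes 12345: C truncates toward
                                            zero, the fraction is simply lost */
   (void)ts;
}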

X stuffs XKeyEvent->time into timestamp, which looks like milliseconds
as an integer, so it's orders of magnitude bigger than what
ecore_time_get() returns.  I don't know whether it's using the same
time base, though my tests so far seem to show that it is.
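
For reference, the raw Xlib side is the Time-typed member of
XKeyEvent, which the server fills in; my understanding is that it
counts milliseconds on the server's own clock:

#include <X11/Xlib.h>

static void
inspect_x_key(XEvent *xev)
{
   if (xev->type == KeyPress)
     {
        XKeyEvent *kev = &xev->xkey;
        Time when = kev->time;  /* milliseconds, on the server's time base */
        (void)when;
     }
}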

This presents two problems.  First, I need to write two versions of
"get current time", one for X and one for the frame buffer, so that
the comparison uses the right units.  The frame buffer version can use
ecore_time_get(), but I don't know where X gets its timestamp from.

I want the code to have as little special casing as possible, so I'd
prefer the X and frame buffer code paths to be identical, or to differ
as little as possible.
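
For now I'm stuck with a sketch like the one below.  HAVE_X is a
made-up build flag, and the X branch assumes the event timestamps are
milliseconds on the same clock gettimeofday() reads, which is exactly
the part I'm not sure about:

#include <Ecore.h>
#include <sys/time.h>

static unsigned int
current_time_in_timestamp_units(void)
{
#ifdef HAVE_X
   /* milliseconds; wraps into 32 bits, and may well be the wrong base */
   struct timeval tv;
   gettimeofday(&tv, NULL);
   return (unsigned int)((unsigned long long)tv.tv_sec * 1000ULL
                         + tv.tv_usec / 1000);
#else
   /* whole seconds, matching what the fb engine stuffs into timestamp */
   return (unsigned int)ecore_time_get();
#endif
}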

The second problem is that the frame buffer timestamp only has a
resolution of whole seconds, which is not good.  Millisecond
resolution would be good.

Now, I understand why the X timestamp is being used: it's best if that
timestamp is as close as possible to the time the key was actually
pressed, for exactly the reasons I need it, i.e. filtering out old key
events.  X supplies that, so it's a bit closer to the actual event
time than anything we can guess by the time the event reaches us.  I
guess the frame buffer doesn't supply a similar timestamp, so we have
to generate one when we can.  What I don't understand is why X is
milliseconds and the frame buffer is whole seconds.

Can we change the frame buffer engine to multiply ecore_time_get() by
1000 before truncating it into the int?  And the same for any other
input type that uses ecore_time_get().  That would make things more
consistent for people like me who are writing code that has to work
the same across different canvas types, and as a bonus it gives a more
useful resolution for frame buffer key event timings.
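
Written against a stand-in struct rather than the real engine source
(which I don't have in front of me), the change would be roughly:

#include <Ecore.h>

struct fake_event { unsigned int timestamp; };  /* stand-in for the real event */

static void
fill_timestamp(struct fake_event *e)
{
   /* current behaviour: whole seconds */
   /* e->timestamp = (unsigned int)ecore_time_get(); */

   /* proposed: keep millisecond resolution, closer to what X hands us */
   e->timestamp = (unsigned int)(ecore_time_get() * 1000.0);
}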

-- 
A big old stinking pile of genius that no one wants
coz there are too many silver coated monkeys in the world.
