If you want 10msec granularity, traditional Linux may not be the right 
choice.

I did quite a bit of data-gathering (see my April 14 post in this thread), 
and got 64-bit serial packets to transmit in 32usec, on average. 
Occasionally there is an outlier, on the order of milliseconds, due to 
system housekeeping tasks. THAT is what may clobber your 10msec fading.
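
If you want to reproduce that kind of measurement, here's a rough sketch 
of one way to time it with CLOCK_MONOTONIC. send_packet() is a made-up 
stand-in, not code from my clock:

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical stand-in for the 64-bit GPIO transmit routine. */
    static void send_packet(void) { /* bit-bang 64 bits here */ }

    int main(void)
    {
        struct timespec t0, t1;

        for (int i = 0; i < 10000; i++) {
            clock_gettime(CLOCK_MONOTONIC, &t0);
            send_packet();
            clock_gettime(CLOCK_MONOTONIC, &t1);

            long usec = (t1.tv_sec - t0.tv_sec) * 1000000L
                      + (t1.tv_nsec - t0.tv_nsec) / 1000L;
            printf("%ld\n", usec);  /* outliers show up as big samples */
        }
        return 0;
    }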

My code is intentionally 'bursty'. Every second, the system clock is 
queried, then the date or time display message is composed (the message 
varies: time format, day, date). Once the message is composed and 
translated into segment data, the actual 128-bit serial transmission takes 
place. It's a 'flattened' subroutine with no loops or calculations that 
hammers the GPIO pins as quickly as possible to get the data sent to the 
display boards. Since I'm running 2 boards, I send 128 bits. The last step 
is to check the PIR sensor, if that function is enabled, and decide whether 
or not to keep the display on.
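
In outline, the per-second cycle looks something like this (the names 
below are made up for illustration; the real transmit routine is unrolled, 
not a stub):

    #include <stdint.h>
    #include <time.h>
    #include <unistd.h>

    /* Hypothetical stubs -- the real code hammers the GPIO pins directly. */
    static uint64_t compose_message(time_t now) { (void)now; return 0; }
    static void shift_out_64(uint64_t bits)     { (void)bits; }
    static int  pir_active(void)                { return 1; }
    static void display_enable(int on)          { (void)on; }

    int main(void)
    {
        for (;;) {
            time_t now = time(NULL);              /* 1. query the system clock      */
            uint64_t segs = compose_message(now); /* 2. compose + translate message */
            shift_out_64(segs);                   /* 3. burst 64 bits per board...  */
            shift_out_64(segs);                   /*    ...two boards = 128 bits    */
            display_enable(pir_active());         /* 4. PIR decides display on/off  */
            sleep(1);                             /* coarse once-per-second pacing  */
        }
    }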

Last night while running a system update, the CPU was pegged at 100% for 
about half an hour. There was no degradation in the display. Apparently the 
Linux scheduler gives my clock software a large enough timeslice that it 
always finishes; I believe the default is 100msec.
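
If those housekeeping outliers ever did matter (say, for your 10msec 
fading), one thing worth trying (I haven't needed it myself) is asking the 
kernel for a real-time slice:

    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        /* Request SCHED_FIFO so housekeeping tasks can't preempt the
           display loop at an awkward moment. Needs root or CAP_SYS_NICE. */
        struct sched_param sp = { .sched_priority = 10 };
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            perror("sched_setscheduler");

        /* ... display loop goes here ... */
        return 0;
    }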

I don't have fading, but I do plan to add 'dissolving', where 
segment changes are staged in 100msec intervals and characters morph from 
one to the next.
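
One way the staging could work (I haven't written it yet, so treat this as 
a sketch): XOR the old and new segment patterns, then flip a few of the 
differing bits on each 100msec tick until the new character is fully 
formed.

    #include <stdint.h>
    #include <stdio.h>

    /* One dissolve step: move 'cur' toward 'next' by flipping up to
       'n' of the differing segment bits. Intended to run every 100msec. */
    uint64_t dissolve_step(uint64_t cur, uint64_t next, int n)
    {
        uint64_t diff = cur ^ next;            /* segments that still differ */
        for (uint64_t bit = 1; diff && n > 0; bit <<= 1) {
            if (diff & bit) {
                cur ^= bit;                    /* flip one differing segment */
                diff &= ~bit;
                n--;
            }
        }
        return cur;                            /* ship this to the boards    */
    }

    int main(void)
    {
        uint64_t cur = 0x3F, next = 0x06;      /* 7-seg '0' morphing to '1'  */
        while (cur != next) {
            cur = dissolve_step(cur, next, 2); /* 2 segments per tick        */
            printf("%02llx\n", (unsigned long long)cur);
        }
        return 0;
    }

Flipping the lowest differing bits first is deterministic; a real morph 
would probably pick the segments in random order so it looks organic.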
