Hey Harvey and Walter,

Just a quick update. Last night after our discussion I found a really good
resource on what fork() is and the different ways it can be used. With that
information in mind, along with our discussion yesterday, it seems that what
I want to do can indeed be done without using POSIX shared memory (I had
little doubt) - *and* it seems simpler.

I'd still have to use a semaphore, I think, to keep the web server callback
from stalling my CAN bus routines. But that seems fairly reasonable.

Still, I may just add semaphores to my current code to try it out, though
I'm not sure when. It's been a semi-rough day, and I'm whooped . . .
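For reference, the pattern I have in mind looks roughly like this. This is
just a sketch: the "/can_lock" name, the with_lock helper, and the error
handling are placeholders, not code from either of our programs.

```c
#include <fcntl.h>      /* O_CREAT */
#include <semaphore.h>  /* sem_open, sem_wait, sem_post */
#include <stdio.h>      /* perror */

/* Take a named semaphore shared between two processes, do the critical
 * work, then release it.  Returns 0 on success, -1 on failure. */
int with_lock(const char *name)
{
    /* O_CREAT with an initial count of 1 makes this a mutex-style lock;
     * both processes must open the SAME name to share it. */
    sem_t *lock = sem_open(name, O_CREAT, 0644, 1);
    if (lock == SEM_FAILED) {
        perror("sem_open");
        return -1;
    }
    sem_wait(lock);   /* blocks until the other process releases it */
    /* ... read or write the shared CAN statistics here ... */
    sem_post(lock);   /* let the other process proceed */
    sem_close(lock);
    sem_unlink(name); /* remove the name once we're done with it */
    return 0;
}
```

The web server would call the same thing around its read of shared memory,
so neither side can stall the other for longer than one critical section.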

On Sun, Aug 23, 2015 at 9:44 PM, William Hermans <[email protected]> wrote:

> OK have a good one, thanks for the discussion.
>
> On Sun, Aug 23, 2015 at 9:11 PM, Harvey White <[email protected]>
> wrote:
>
>> On Sun, 23 Aug 2015 20:18:26 -0700 (PDT), you wrote:
>>
>> >
>> >>
>> >> *Well, you're certainly right that the callback is messing*
>> >> * things up.  If I assume the same callback, then the callback is*
>> >> * certainly changing data.  If you can set the right breakpoint, you
>> can*
>> >> * tag the situation *if* the breakpoint also knows that the process is*
>> >> * reading from the CAN bus.*
>> >>
>> >> * Had you considered disabling that callback function until the read*
>> >> * from the CANbus is finished?  Would it be practical?  That's where
>> the*
>> >> * semaphore might help a lot.*
>> >>
>> >> * what variables could be common between the two routines?*
>> >>
>> >> * Harvey*
>> >>
>> >
>> >Well, this is where previous experience fails me. In the past I've pretty
>> >much avoided threading-related code. I do know of fork() and roughly what
>> >it is capable of, and I know about threads, but not how to implement them
>> >in C on Linux, or what can be done with them. Let's talk code for a minute.
>>
>> OK, as well as I can follow it.
>>
>> >
>> >*IPC - Server - Reads from canbus*
>> >int main(){
>> >    struct can_frame frame;
>> >    int sock = InitializeCAN("vcan0");
>> >
>> >    statistics_t *stats = NULL;
>> >
>> >    const long shm_size = sysconf(_SC_PAGESIZE);
>> >
>> >    int shm_fd = shm_open("acme", O_CREAT | O_RDWR, FILE_PERMS);
>>
>> **NOTE:  the problem may be "acme", since we know that acme products
>> are not effective against roadrunners.....
>>
>> >    if(shm_fd == -1)
>> >        HandleError(strerror(errno));
>> >
>> >    const int retval = ftruncate(shm_fd, shm_size);
>> >    if(retval == -1)
>> >        HandleError(strerror(errno));
>> >
>> >    shared_memory = InitializeShm(shm_size * sizeof(char), shm_fd);
>> >    close(shm_fd);
>> >
>> >    while(1){
>> >        frame = ReadFrame(sock);
>> >        if(frame.can_dlc == FRAME_DLC)
>> >            stats = ProcessFastpacket(frame);
>>
>> right at this point, you have no protection against access and no
>> interlocking.
>>
>> I'll have to give you pseudocode, because I don't know how to do this
>> in Linux.
>>
>>         In the init routine, before you set up either main as a
>> process (I assume you do this).  Declare a semaphore:
>>
>> semaphore_handle shared_access;   // handle accessible to both processes
>> semaphore_create(shared_access);  // create the semaphore
>>
>>
>> then modify this next section to:
>>
>>         if (stats != NULL) {
>>             if (semaphore_take(shared_access, <wait forever>))
>>             {
>>                 WriteToShm(shared_memory, stats);
>>                 semaphore_give(shared_access);
>>             }
>>             stats = NULL;
>>             printf("%s", ReadFromShm(shared_memory));
>>         }
>>         task_delay(n);
>>
>> NOTE:   Process A hangs until it can "get" the semaphore; if Process B
>> has it, B can keep it only long enough to send the packet
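In POSIX terms, the take/give pair above corresponds roughly to sem_wait and
sem_post. A sketch, where the lock, shm, and stats arguments are illustrative
stand-ins for the real shared segment and data, and a plain memcpy stands in
for WriteToShm:

```c
#include <semaphore.h>  /* sem_t, sem_wait, sem_post */
#include <stddef.h>     /* size_t */
#include <string.h>     /* memcpy */

/* Guarded write: take the semaphore (blocking "forever"), copy the
 * statistics into shared memory, then give the semaphore back. */
int write_guarded(sem_t *lock, char *shm, const char *stats, size_t n)
{
    if (sem_wait(lock) == -1)  /* semaphore_take(shared_access, forever) */
        return -1;
    memcpy(shm, stats, n);     /* safe: nothing else holds the lock now */
    sem_post(lock);            /* semaphore_give(shared_access) */
    return 0;
}
```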
>> >
>> >        if(stats != NULL){
>> >            WriteToShm(shared_memory, stats);
>> >            stats = NULL;
>> >            printf("%s", ReadFromShm(shared_memory));
>> >        }
>> >    }
>> >}/* main() */
>> >
>> >
>> >
>> >*IPC - Client / webserver*
>> >
>> >int main(void) {
>> >        struct mg_server *server = mg_create_server(NULL, ev_handler);
>> >
>> >        mg_set_option(server, "listening_port", "8000");
>> >        mg_set_option(server, "document_root", "./web");
>> >
>> >        printf("Started on port %s\n", mg_get_option(server,
>> >"listening_port"));
>> >
>> >        // POSIX IPC - shared memory
>> >        const long shm_size = sysconf(_SC_PAGESIZE);
>> >        int shm_fd = shm_open("acme", O_CREAT | O_RDWR, FILE_PERMS);
>> >        if(shm_fd == -1)
>> >                HandleError(strerror(errno));
>> >
>> >        const int retval = ftruncate(shm_fd, shm_size);
>> >        if(retval == -1)
>> >                HandleError(strerror(errno));
>> >
>> >        shared_memory = InitializeShm(shm_size * sizeof(char), shm_fd);
>> >
>> >        close(shm_fd);
>> >
>> >        char id = 0x00;
>> >        for (;;) {
>> >                mg_poll_server(server, 10);
>> >
>> then do the same here
>>
>>         if (semaphore_take(shared_access, <wait forever>))
>>         {
>>             if (shared_memory->sdata.data[19] != id) {
>>                 push_message(server, shared_memory->sdata.data);
>>                 id = shared_memory->sdata.data[19];
>>             }
>>             semaphore_give(shared_access);
>>         }
>>         task_delay(n clock ticks);
>>
>> semaphore_take gets the semaphore if and only if it's available, and it
>> does so in a thread-safe manner.  The <wait_forever> is whatever value
>> the system uses to tell the process to hang.  You don't want the
>> process to wait for a while and then just carry on anyway.
>>
>> Because each example here releases the semaphore (semaphore_give) if
>> and only if it could get it, and since giving and taking the semaphore
>> is thread safe, the two threads should be fine.
>>
>> So your "consumer" thread can't check for valid data until there's
>> something there.   When it first starts up, it has to get bad (null)
>> data and throw that away, since you can't guarantee that one thread
>> starts before the other (unless you block the thread using a suspend,
>> but that's not really the best thing to do), so you have to consider
>> that you have two parallel and independent threads.
>>
>> The consumer thread can access shared memory only when it's not been
>> actively written to.  It has to figure out if data is good and what to
>> do with it.  However, once written, that data will remain uncorrupted
>> until the consumer has read and processed it (because the consumer has
>> the semaphore and doesn't give it up until then).
>>
>> The producer thread checks to see if the data is there to send,
>> accesses shared memory by getting the semaphore (when the consumer is
>> not reading it), and then writes that shared memory.  It then releases
>> the semaphore, goes idle (because the task switcher has to have a time
>> to start up the other task unless you have multiple cores), and then
>> checks for data, and waits to see when it can write that data.
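The consumer side of that pattern, with the changed-sequence-number check
from the client code in this thread, might look like the sketch below. Byte
19 as the sequence number comes from the code above; the consume_if_new
helper and its arguments are illustrative.

```c
#include <semaphore.h>  /* sem_t, sem_wait, sem_post */

/* Consumer sketch: under the lock, act on the shared buffer only if its
 * sequence byte changed since the last poll.  Returns 1 if new data was
 * seen, 0 if not, -1 on error. */
int consume_if_new(sem_t *lock, const char *shm, char *last_id)
{
    int pushed = 0;
    if (sem_wait(lock) == -1)
        return -1;
    if (shm[19] != *last_id) {   /* byte 19 is the fastpacket sequence id */
        /* push_message(server, shm) would go here */
        *last_id = shm[19];
        pushed = 1;
    }
    sem_post(lock);              /* release before the idle delay */
    return pushed;
}
```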
>>
>> The typical task clock is either 1 ms or 10 ms, and the clock tick is
>> that (1 ms or 10 ms per tick).  You play with the values for best
>> throughput on the n delays.
>>
>>
>> >                if(shared_memory->sdata.data[19] != id){
>> >                        push_message(server, shared_memory->sdata.data);
>> >                        id = shared_memory->sdata.data[19];
>> >                }
>> >        }
>> >
>> >        mg_destroy_server(&server);
>> >        return 0;
>> >}
>> >
>> >As for what's interesting where threading is concerned: the loops in
>> >each executable here might be useful, if each of them, or even just the
>> >for loop in the IPC client, could somehow use objects in memory from the
>> >IPC server.
>>
>> That was the shared memory, right?
>>
>> >That is, let's suppose for a minute that IPC was removed entirely; then
>> >somehow I could turn off the callback in the IPC client. This is what I'm
>> >having trouble imagining. How could this be done?
>>
>> You may possibly be able to schedule *when* the callback happens.
>>
>> What causes the callback, sending a CAN message?
>>
>> > In the context of
>> >libmongoose I'm not sure. In the context of threading or using fork() I'm
>> >also not sure.
>>
>> Fork creates a separate process which can be controlled or killed as
>> needed, running as a sub-process (IIRC).
>>
>> you're dealing with creating two processes (really two programs) and
>> interprocess communication.
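A minimal fork() sketch, for the record: the child is a full copy of the
parent at the moment of the call, runs independently, and can be waited on
or killed by the parent. The exit code 42 here is arbitrary.

```c
#include <sys/types.h>  /* pid_t */
#include <sys/wait.h>   /* waitpid, WIFEXITED, WEXITSTATUS */
#include <unistd.h>     /* fork, _exit */

/* Fork a child, let it run, and reap its exit status. */
int run_child(void)
{
    pid_t pid = fork();
    if (pid == -1)
        return -1;
    if (pid == 0)        /* child: the CAN-reading loop would live here */
        _exit(42);
    int status = 0;      /* parent: wait for the child to finish */
    if (waitpid(pid, &status, 0) == -1)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```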
>>
>> >But if I could somehow use the threading context to disable the
>> >callback, I think that would be ideal. That way I could simply suspend
>> >that whole thread for a fraction of a second, and then resume it once a
>> >fastpacket is constructed.
>>
>>
>>
>> Well, synchronizing the two tasks with semaphores says that if the
>> callback happens and you can turn off that callback, then the data is
>> ok as long as you can schedule the callback.  No idea when that
>> happens.
>>
>> So you may be able to
>> 1) produce data
>> 2) keep from overwriting it
>> 3) enable the consumer to read data
>> 4) have it send data (and I assume the callback happens here)
>> 5) data is clobbered in the shared area, but we don't care since it's
>> sent already
>> 6) give the semaphore back allowing new data to be written
>> 7) that data can't be clobbered by the callback (assuming) until after
>> it's read and in the send process
>>
>> May solve the problem...
>>
>>
>> >
>> >Anyway, a little information that might be needed: SocketCAN reads data
>> >in 8-byte chunks, one per frame. Fastpackets are several frames in
>> >length, with the only one I'm currently tracking being 11 frames long,
>> >or 88 total bytes, not discounting the initial char of each frame, which
>> >is a sequence number. If there is a way, and I'm sure there is, I am all
>> >for changing from an IPC model to a threaded model. But I still have
>> >some doubts, such as: will it be fast enough to track multiple
>> >fastpackets a second? And past that, how complex will it be?
>>
>> Won't be all that complex, I think.  The processes are written as two
>> parts: one is a system call to set up the process, and the other is the
>> process itself, which looks like
>>
>> void processA(void* arguments if any)
>> {
>>         //      declarations and inits the first time through
>>         while (1)
>>         {
>>                 basic process loop;
>>         }
>> }
>>
>> not complicated at all, how to create the process ought to be well
>> documented
>>
>> you just need to make sure that the two processes have access to
>> shared memory
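If the two halves end up as threads rather than full processes, the same
skeleton is one pthread_create call. A sketch: the counter argument just
demonstrates passing state in, and a bounded loop stands in for while(1) so
it terminates.

```c
#include <pthread.h>

/* The processA skeleton above, as a pthread entry point. */
static void *processA(void *arg)
{
    int *counter = arg;           /* declarations and init, first pass */
    for (int i = 0; i < 5; i++)   /* basic process loop */
        (*counter)++;
    return NULL;
}

/* Create the thread and wait for it; returns 0 on success. */
int run_processA(int *counter)
{
    pthread_t tid;
    if (pthread_create(&tid, NULL, processA, counter) != 0)
        return -1;
    return pthread_join(tid, NULL);
}
```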
>>
>> assuming 1000 us available per process, a context switching time of 50
>> us (may be shorter, but it's a number)
>>
>> You have 950 us to send a complete message without it having a delay
>> you have that same 950 us to detect and build a message.
>>
>> that gives you 500 message cycles/second
>>
>> taking twice as long gives you 250 message cycles/second and about
>> 1950 us to compose and send a message, that's with a 2 ms clock tick.
>> All that clock tick does is control task switching.  The processor
>> clock controls the speed of operations otherwise.
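That arithmetic as code, assuming the 50 us context-switch figure above and
two ticks per full produce-and-send cycle:

```c
/* Useful time per tick, after an assumed 50 us context switch. */
int useful_us(int tick_us)      { return tick_us - 50; }

/* Full message cycles per second: one tick to build, one tick to send. */
int cycles_per_sec(int tick_us) { return 1000000 / (2 * tick_us); }
```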
>>
>> >
>> >I have given multiple approaches consideration, just having a hard time
>> >imagining how to work this out using a threading model.
>>
>> perhaps this might help
>>
>> Harvey
>>
>> (off to bed, have to be in training for 8 am classes in a week).
>>
>> --
>> For more options, visit http://beagleboard.org/discuss
>> ---
>> You received this message because you are subscribed to the Google Groups
>> "BeagleBoard" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to [email protected].
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
