My first impression when I read your e-mail was: this sounds like an application for
reflective memory.  If your front-end computers gathered the data and stored it in
reflective memory cards (one per machine), then each card would transmit the data
to receiving reflective memory cards in the one or two PCs collecting the data.
There are fiber-optic versions of the reflective memory cards that have high
transfer rates.

You would use RTLinux to read from and write to the reflective memory, to make sure
that no data is lost.
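
For concreteness, here is roughly what the sending side could look like as an
RTLinux module (using the V2/V3 pthread API); the card's physical base address,
window size, and the 1ms period are made-up placeholders - a real card reports
its window through PCI configuration space:

#include <rtl.h>
#include <rtl_sched.h>
#include <pthread.h>
#include <asm/io.h>

#define RM_PHYS 0xE0000000UL         /* placeholder: card's PCI window */
#define RM_SIZE (4*1024*1024)        /* placeholder: 4MB window */

static volatile char *rm_base;
static pthread_t writer_thread;

static void *writer(void *arg)
{
    /* run once per millisecond (placeholder period) */
    pthread_make_periodic_np(pthread_self(), gethrtime(), 1000000);
    while (1) {
        pthread_wait_np();
        /* copy freshly acquired data into the card's window here;
           the card hardware mirrors it to the receiving nodes:
           memcpy((void *)rm_base, acq_buffer, acq_len);          */
    }
    return 0;
}

int init_module(void)
{
    rm_base = (volatile char *)ioremap(RM_PHYS, RM_SIZE);
    if (rm_base == 0)
        return -1;
    return pthread_create(&writer_thread, NULL, writer, NULL);
}

void cleanup_module(void)
{
    pthread_cancel(writer_thread);
    pthread_join(writer_thread, NULL);
    iounmap((void *)rm_base);
}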

I don't know if this helps, but it at least gives you something else to consider
when deciding how to implement this project.

> ----------
> From:         Jacek M. Holeczek[SMTP:[EMAIL PROTECTED]]
> Sent:         Thursday, May 11, 2000 4:03 AM
> To:   [EMAIL PROTECTED]
> Subject:      [rtl] max size of shmem
> 
> Hi,
> I may be working on a data acquisition system (to be used in physics)
> where I expect huge amounts of data. The idea is to get 8 PCs running
> Linux (RTLinux?), each responsible for a "part" of the data. These PCs
> are then connected to a giga-switch (each PC running 100Mb/s ethernet),
> and on the "other" side there is 1 PC (or 2 PCs), again running Linux
> (RTLinux?), with 1Gb/s ethernet. This "special" PC (or 2 PCs) "collects"
> data from the 8 "acquisition" PCs.
> As I expect huge amounts of data, I would like to limit the number of
> times the data are copied in memory. So, the idea is to run a real-time
> task (driven by an interrupt) which would store data in a shared memory,
> and a Linux task which would send this data directly from this shared
> memory via ethernet (I can also consider making the "ethernet-transport"
> task real-time instead of a user process, if this gives me a performance
> improvement).
> Unfortunately, looking at the shmem HOWTO I have found that the maximum
> size of a shmem segment is limited to 4MB (1MB on older machines). Do I
> understand right that even if I have 215MB RAM I cannot have a 100MB
> shared memory segment? If yes, is there any way to overcome this
> limitation?
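
That 4MB cap is on System V shm segments. A common way around it under
RTLinux is to hide some RAM from the kernel at boot (say mem=120M on a
128MB machine - both numbers are assumptions here) and map the hidden
region from both sides: ioremap() in the real-time module, mmap() of
/dev/mem in the user process. A minimal sketch, as two separate files:

/* rt_shm.c - RT/kernel side: map the RAM hidden with mem=120M */
#include <linux/module.h>
#include <asm/io.h>

#define SHM_PHYS (120*1024*1024)     /* first byte Linux does not manage */
#define SHM_SIZE (8*1024*1024)       /* assumed buffer size */

static volatile char *shm;

int init_module(void)
{
    shm = (volatile char *)ioremap(SHM_PHYS, SHM_SIZE);
    return shm ? 0 : -1;
}

void cleanup_module(void)
{
    iounmap((void *)shm);
}

/* user_shm.c - user side: map the same physical region via /dev/mem */
#include <fcntl.h>
#include <sys/mman.h>

#define SHM_PHYS (120*1024*1024)
#define SHM_SIZE (8*1024*1024)

char *map_shm(void)
{
    int fd = open("/dev/mem", O_RDWR);
    if (fd < 0)
        return 0;
    /* caller should check the result against MAP_FAILED */
    return mmap(0, SHM_SIZE, PROT_READ | PROT_WRITE,
                MAP_SHARED, fd, SHM_PHYS);
}
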
> This reminds me of another question - what is the maximum time that I
> can spend in an interrupt (1ms, 10ms, 100ms) before the system becomes
> unstable? In principle I expect a cycle of about 4s in which I need to
> collect events as fast as possible (I mean there can be interrupt after
> interrupt coming), followed by some 4s of quiet (I can send the
> collected data in that time via ethernet).
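
I can't give you a hard number, but the usual rule is to keep the handler
itself down in the microseconds and push the copying into a real-time
thread that the handler wakes up. A sketch using the RTLinux V2/V3 API as
I recall it (IRQ_NUM and the device acknowledge are placeholders for your
VME setup):

#include <rtl.h>
#include <rtl_core.h>
#include <pthread.h>

#define IRQ_NUM 5                    /* placeholder VME interrupt line */

static pthread_t reader;

static void *reader_fn(void *arg)
{
    while (1) {
        pthread_suspend_np(pthread_self()); /* sleep until the ISR wakes us */
        /* drain the module into shared memory here */
    }
    return 0;
}

/* the handler only acknowledges the device and wakes the thread */
static unsigned int isr(unsigned int irq, struct pt_regs *regs)
{
    /* acknowledge / clear the VME module interrupt here (device-specific) */
    pthread_wakeup_np(reader);
    rtl_hard_enable_irq(irq);        /* re-enable the line before returning */
    return 0;
}

int init_module(void)
{
    if (pthread_create(&reader, NULL, reader_fn, NULL) != 0)
        return -1;
    return rtl_request_irq(IRQ_NUM, isr);
}

void cleanup_module(void)
{
    rtl_free_irq(IRQ_NUM);
    pthread_cancel(reader);
    pthread_join(reader, NULL);
}
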
> If I decide to use FIFOs - is there any size limit on what the
> real-time part can store before it is read by the user's program? I
> mean, can I "buffer" 100MB of data in the FIFO and then "start" the
> user's process to send the data via ethernet?
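
The buffer behind /dev/rtfN is allocated in kernel memory when the module
calls rtf_create(), so the practical ceiling is whatever single allocation
the kernel will grant - at 100MB I would expect shared memory to be the
better fit. The mechanics, with a 16MB size picked purely as an example:

/* RT side: create the FIFO; the real-time code then calls
   rtf_put(FIFO_NR, block, block_len) to push data */
#include <rtl.h>
#include <rtl_fifo.h>

#define FIFO_NR   0
#define FIFO_SIZE (16*1024*1024)     /* example size; one kernel allocation */

int init_module(void)
{
    return rtf_create(FIFO_NR, FIFO_SIZE);
}

void cleanup_module(void)
{
    rtf_destroy(FIFO_NR);
}

/* user side: the FIFO appears as /dev/rtf0 and is read like a file */
#include <fcntl.h>
#include <unistd.h>

int drain(char *buf, int len)
{
    int fd = open("/dev/rtf0", O_RDONLY);
    int n = read(fd, buf, len);      /* then write() it to the socket */
    close(fd);
    return n;
}
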
> How much time do I lose using a FIFO and storing, in the real-time
> part, byte after byte (in a loop - I read a byte from a VME module,
> then I store it in the FIFO), as compared to storing the same number of
> bytes in a shared memory (also in a loop)? I can't use memcpy - I need
> to build a loop which reads the acquisition modules and stores the read
> data in the "buffer" (the "ethernet-transport" task can read "blocks"
> of data, of course).
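
Whichever you pick, don't pay the per-call overhead once per byte. You can
still read the module in a loop, but accumulate into a local block and
hand the whole block over in one call - e.g. for the FIFO case, where
vme_read_byte() and acquiring() are stand-ins for your module access and
end-of-burst test, and FIFO_NR is as in the fragment above:

#define BLK 4096

static char blk[BLK];

void acquire(void)
{
    int n = 0;
    while (acquiring()) {
        blk[n++] = vme_read_byte();
        if (n == BLK) {
            rtf_put(FIFO_NR, blk, n);   /* one call per 4kB, not per byte */
            n = 0;
        }
    }
    if (n > 0)
        rtf_put(FIFO_NR, blk, n);       /* flush the partial last block */
}
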
> Any hints welcome.
> Thanks in advance,
> Jacek.
-- [rtl] ---
To unsubscribe:
echo "unsubscribe rtl" | mail [EMAIL PROTECTED] OR
echo "unsubscribe rtl <Your_email>" | mail [EMAIL PROTECTED]
---
For more information on Real-Time Linux see:
http://www.rtlinux.org/rtlinux/
