I'm writing a network driver for a USB ADSL modem. I previously wrote
the Mac OS 9 and Mac OS X drivers for this modem. The modem has a DSP that
requires code to be uploaded from the host PC. The modem does not have
enough memory to contain the entire DSP code, so throughout its various
stages of operation, it requests new pages of the DSP code from the
driver (via a USB interrupt). 

In the Windows and MacOS9 drivers, the DSP code is loaded into memory
from hex-format text files, then converted into the appropriate binary
format. On Mac OS X there was no easy way for the kernel driver to
obtain the DSP code, so to get the driver up and running I just turned
the hex-format text files into a huge .h file, essentially linking the
DSP code into the driver. That is my current approach on Linux.
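For reference, the generated header looks roughly like this (the file
and array names here are made up for illustration, not the real ones):

```c
/* dsp_firmware.h -- generated from the hex-format DSP code.
 * One array per DSP code page; names are hypothetical. */
static const unsigned char dsp_page_0[] = {
	0x02, 0x00, 0x1a, 0xff, /* ... decoded bytes ... */
};
static const unsigned int dsp_page_0_len = sizeof(dsp_page_0);
```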

The actual loading of the DSP code into the driver isn't the issue I'm
trying to resolve right now - I'm hoping by the time I need to resolve
it, you folks will have hammered out the details of this firmware
loading thread (maybe they're already hammered out). What I need to
resolve is the internal processing of the data. All the other drivers
allocate a big chunk of memory (1 megabyte), turn the hex data into a 
binary format, then free the hex data. However, when I tried that here,
my kmalloc failed. Should I be using vmalloc for this operation?
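My understanding (please correct me if I'm wrong) is that kmalloc needs
physically contiguous pages, so a 1 MB request will often fail, while
vmalloc only needs virtually contiguous pages and is fine for a big
staging buffer that only the CPU touches. Something like this sketch
(size and names hypothetical, error paths elided):

```c
#include <linux/vmalloc.h>
#include <linux/errno.h>

#define DSP_IMAGE_SIZE (1024 * 1024)	/* ~1 MB decoded DSP image, assumed */

static unsigned char *dsp_image;

static int alloc_dsp_image(void)
{
	/* vmalloc: virtually contiguous, suitable for CPU-only access;
	 * NOT suitable for direct DMA. */
	dsp_image = vmalloc(DSP_IMAGE_SIZE);
	if (!dsp_image)
		return -ENOMEM;
	/* ... convert the hex-format data into dsp_image here ... */
	return 0;
}

static void free_dsp_image(void)
{
	vfree(dsp_image);
	dsp_image = NULL;
}
```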

Does the memory that you put into the transfer_buffer pointer in a URB
need to be allocated with kmalloc, or can it be allocated with vmalloc?
The current code assumes that the binary DSP code can be transferred
as-is, so if kmalloc'ed data is required for the URB, I'll have to
allocate a temporary transfer buffer for each block and copy the data
from the vmalloc'ed block to the kmalloc'ed block.
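If the transfer_buffer is DMA'd (which I assume is why kmalloc'ed
memory would be required), the bounce-buffer approach would look
roughly like this. This is only a sketch of what I have in mind; the
endpoint number, completion handler, and function names are
hypothetical:

```c
#include <linux/usb.h>
#include <linux/slab.h>
#include <linux/string.h>

/* Send one block of DSP code to the modem. dsp_image is the big
 * vmalloc'ed copy of the decoded firmware. The URB's transfer_buffer
 * must be DMA-able, so kmalloc a small bounce buffer per block and
 * copy into it. */
static int send_dsp_block(struct usb_device *udev, struct urb *urb,
			  size_t offset, size_t len)
{
	unsigned char *buf;

	buf = kmalloc(len, GFP_KERNEL);	/* physically contiguous, DMA-safe */
	if (!buf)
		return -ENOMEM;

	/* Bounce the data out of the vmalloc'ed image. */
	memcpy(buf, dsp_image + offset, len);

	/* Endpoint 1 is a placeholder; dsp_block_complete would kfree
	 * urb->transfer_buffer when the transfer finishes. */
	usb_fill_bulk_urb(urb, udev, usb_sndbulkpipe(udev, 1),
			  buf, len, dsp_block_complete, NULL);
	return usb_submit_urb(urb, GFP_KERNEL);
}
```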

Thanks,
-Chris

_______________________________________________
[EMAIL PROTECTED]
To unsubscribe, use the last form field at:
https://lists.sourceforge.net/lists/listinfo/linux-usb-devel
