On Thu, Jul 10, 2008 at 6:57 PM, Simon Pickering
<[EMAIL PROTECTED]> wrote:
> No, I understood, I was just mentioning that there appear to be two
> heaps to choose from - presumably one is used by the DSP tasks (malloc is
> probably #defined as one of the CSL MEM* fns in the DSP Gateway task
> functions).

Maybe it is just clever/stupid enough to do the allocation
automatically. At least when I did some experiments with the DSP before,
it was allocating DARAM memory. Sure, you might want finer control to
put the most performance-critical data into DARAM, but malloc is a
standard C function and is more portable.
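For illustration only (not from the original mail): with the TI compiler a
buffer can be pinned to a specific on-chip section with a DATA_SECTION
pragma, while plain malloc just takes whatever the default heap provides.
The ".daram" section name below is an assumption and depends on the linker
command file used by the DSP task:

/* Hedged sketch: explicit placement vs. portable malloc.
 * ".daram" is a placeholder section name; the real section depends
 * on the linker command file of your DSP task. */
#include <stdlib.h>

#pragma DATA_SECTION(fast_buf, ".daram")
static short fast_buf[512];   /* placed in on-chip memory by the linker */

static short *portable_buf;   /* lands wherever the default heap lives  */

void init_buffers(void)
{
    portable_buf = (short *) malloc(512 * sizeof(short));
}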

>> Yes, accessing SDRAM memory is extremely slow. And if you access SDRAM
>> memory using 16-bit accesses instead of 32-bit accesses, the overhead
>> doubles. So if your data processing algorithm does not deal exclusively
>> with 32-bit data accesses, you are better off not running it directly on
>> data in SDRAM memory. Copying data to a temporary buffer in DARAM or
>> SARAM, processing it there and copying the results back to SDRAM would
>> be faster in this case.
>
> The X[] array data type is an int32, so even accessing 32-bit data from
> SDRAM is still slower than using a local buffer (depending on what you
> need to do with it, of course).

It depends on how many times the data is accessed. For example, if you
have some algorithm that accesses a given memory location 10 times, you
would have 2 SDRAM + 10 on-chip SRAM memory accesses using a
fetch/process/store pattern vs. 10 SDRAM memory accesses if working with
the buffer directly in SDRAM. As SDRAM is an order of magnitude slower
(decimal order, not binary), you really want to avoid touching SDRAM as
much as possible.
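To make that access-count arithmetic concrete, here is a minimal
fetch/process/store sketch in plain C. The names (process_block,
BLOCK_SIZE, scratch) are made up for illustration, and on the real DSP you
would normally use DMA for the copies and place the scratch buffer in
DARAM/SARAM rather than rely on memcpy and default placement:

/* Sketch of the fetch/process/store pattern: each SDRAM word is read
 * once and written once, while the repeated accesses of the algorithm
 * hit the on-chip scratch buffer instead of SDRAM. */
#include <string.h>
#include <stdint.h>

#define BLOCK_SIZE 256

static int32_t scratch[BLOCK_SIZE];   /* ideally placed in DARAM/SARAM */

static void process_block(int32_t *buf, int n)
{
    int i, pass;
    /* Stand-in for an algorithm that touches each element many times. */
    for (pass = 0; pass < 10; pass++)
        for (i = 0; i < n; i++)
            buf[i] += buf[i] >> 1;
}

void process_sdram(int32_t *sdram_data, int total)
{
    int off;
    for (off = 0; off < total; off += BLOCK_SIZE) {
        int n = (total - off < BLOCK_SIZE) ? (total - off) : BLOCK_SIZE;
        memcpy(scratch, sdram_data + off, n * sizeof(int32_t));  /* 1 SDRAM read per word  */
        process_block(scratch, n);                               /* 10 on-chip accesses    */
        memcpy(sdram_data + off, scratch, n * sizeof(int32_t));  /* 1 SDRAM write per word */
    }
}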