Hi Asa,

 

I'd suggest you take a look at the application note spraae7b.pdf 
(http://focus.ti.com/general/docs/techdocsabstract.tsp?abstractName=spraae7b). 
You are not so much interested in the adapter concept introduced in this app 
note; however, the companion code does show an example algorithm that's been 
'massaged' into an IMGDEC in one case and how building an 'extension package' 
would clean up the interface subsequently. You can compare the two 
implementations and see which one you prefer. Generally speaking, I think it 
is better to build an extension package whenever your algorithm differs 
significantly from the class of codec you are implementing (IMGENC in your 
case). Massaging an existing interface saves you from having to learn about 
stubs and skeletons, but results in less-than-optimal performance, so it is 
mainly suited to minor differences (e.g. adding 1-2 parameters/arguments on 
top of existing ones that still mostly apply to your algorithm). The main 
advantages of creating an extension package are:

 

-          No excess baggage from unused data structures and features supported 
by VISA and xDM. This eliminates overhead by reducing both the size of the 
messages exchanged between the application and the server and the amount of 
data that must be cache-invalidated and written back in a remote procedure call.

 

-          Enhanced readability when using your own customized API. Calling 
IMGENC_process to e.g. run some median filtering on an image may look quite 
unreadable at the application level if it doesn't use any of the 
inArgs/outArgs/creation params defined in the IMGENC interface. However, you 
can also write a façade layer to help with readability when 'massaging' an 
existing interface, as described in the application note.

 

-          Flexibility to add additional error checking when creating your own 
API/stubs/skeletons.

 

-          Re-use of the extension package for other algorithms from other 
vendors, which can potentially lead to an industry-standard interface that can 
be adopted by the mass market.

 

Hope this helps,

Vincent

 

________________________________

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Asa
Sent: Friday, April 18, 2008 8:34 AM
To: [email protected]
Subject: xDM vs xDAIS

 

Hi All,

 

I have an algorithm which I need to run on the DSP. Once the algorithm is xDAIS 
compliant, I have two options to make it callable from the ARM: 

1. Extend it to be xDM compliant and use one of the VISA interfaces (for 
example IMGENC). 

2. Build extensions (stub/skeleton) to the xDAIS-compliant algorithm (similar 
to TI's SCALE example). 

 

Now, the thing is that this algorithm is not really an encoder or decoder. 
However, it seems that I could use the VISA interface and extend the 
input/output argument structures with my own parameters. I would have to live 
with the "IMGENC_process" function name and maybe carry around some redundant 
parameters. Implementation-wise, option #1 seems faster/easier than #2.

So, my question is: are there any really good reasons for one approach vs. the 
other? When would I absolutely *have to* use the stub/skeleton approach? Is one 
approach indeed faster/easier than the other?

 

Thank you.

 

 

 

_______________________________________________
Davinci-linux-open-source mailing list
[email protected]
http://linux.davincidsp.com/mailman/listinfo/davinci-linux-open-source
