Sorry about not posting anything for the last month -- I thought I had subscribed,
but I guess not.

As for the last month, I think some very nice options have been pointed out, not
least of which are:
1) The DisplayLink 160 chip. Running uncompressed screen data is an OK solution, but of course it requires more hardware to actually decode in real time. My 3 GHz dual core can do it, but I doubt all the interested consumers have that kind of power (many do, yes, but laptops are getting more popular). Still something to consider.
2) The Fujitsu chips. These are a really nice possibility and would make our life easy. We would need a micro or an FPGA to interface them to Ethernet/USB/etc., but that is not a big deal. The only disadvantages are that they probably cost a lot and they can't be configured at all.
3) Blackfins. They are nice, but we don't need floating point. We can get 2400 FLOPS for $30, however, so it is a possibility (if that is enough with an FPGA).
4) The 900 MHz TI DSP. This looks fantastic! 7200 MMACs, only $70, etc.
5) The Ambarella chip. This looks good, is low power, etc., but I imagine it would be more useful in an application that uses both the encoding and the processing. We can easily adjust our requirements to support this, however: I linked to an Engadget article a while back in which a bunch of commenters said they would buy a PCI H.264 encoder accelerator in a second, and I don't see how an Ethernet one is any different.
6) The Broadcom chip (BCM7412). This is again a nice SoC solution.

In response to JRT:
>I find one issue.  Only fixed point is needed for video operations
>(MPEG* decode, YUV -> RGB, & scan conversion).  However OpenGL needs
>floating point.  The problem appears to be that float DSPs are not as
>fast as fixed point DSPs. I'm not clear on why other than the fact that
>it takes more hardware to do floating point.

I think it is a combination of that and the fact that there is higher demand for processors with fast fixed point (for, you guessed it, applications like ours).

>IIUC, you could use 3 fixed point DSPs running in parallel for MPEG*
>decoding.  NXP (was Philips) has chips with three DSPs on it.  OTOH,
>VLIW with > 3 ALUs should accomplish the same thing.
What do you mean by this? That there will be no performance benefit beyond
3 DSPs? I imagine that video processing scales nicely to many more than
just 3 DSPs: at the very least, you could process each run of frames between keyframes separately. You would end up with some serious latency and would probably run out of cache, but it would be possible.
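To make the keyframe-splitting idea concrete, here is a toy sketch (not a real bitstream parser; frame records, the `frame_t` type, and `decode_gop` are all made up for illustration). Keyframes stand in for I-frames and delta frames for P-frames: because a closed GOP depends only on its own keyframe, each worker (or DSP) can decode one GOP without touching any other's state.

```c
#include <stddef.h>

/* Toy frame record: a keyframe carries an absolute value, a delta frame
 * carries a difference from the previous decoded frame.  A real decoder
 * is far more involved, but the dependency structure is the same. */
typedef struct {
    int is_key;
    int value;
} frame_t;

/* Decode frames[start..end) into out[start..end), assuming frames[start]
 * is a keyframe.  Each call reads and writes only its own GOP's range,
 * which is what lets the GOPs be farmed out in parallel. */
static void decode_gop(const frame_t *frames, size_t start, size_t end,
                       int *out)
{
    int acc = 0;
    for (size_t i = start; i < end; i++) {
        acc = frames[i].is_key ? frames[i].value : acc + frames[i].value;
        out[i] = acc;
    }
}
```

Decoding the whole stream in one pass and decoding each GOP separately give identical output; the cost is exactly the latency and cache pressure mentioned above, since every in-flight GOP needs its own working state.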
>Actually, a large
>array of 8 x 8 (or 8 x 9 -- YUV <> RGB needs one binary digit to the
>left of the binary point) fixed point with only the 8 MSBs output (YUV
><> RGB is going to need overflow bits to the left of the binary point)
>multipliers would be most useful for these video operations.
Many pieces of hardware already use FPGAs/ASICs for things like system interconnect and power sequencing, so it is easier for them to use those parts for the massively parallel operations, leaving all the R&D-intensive algorithms to the custom ICs.
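JRT's point about needing bits to the left of the binary point can be made concrete with a small sketch. This is plain C using 8.8 fixed point with the standard BT.601 coefficients (assumed here, since we haven't pinned down a colorspace); coefficients like 2.018 don't fit a purely fractional format, so the intermediates need integer headroom, and the final `>> 8` keeps only the 8 MSBs, as described. A hardware multiplier array would just do these products in parallel.

```c
#include <stdint.h>

/* Clamp an intermediate result to the 8-bit output range; the overflow
 * headroom in the 32-bit intermediates is what makes this safe. */
static uint8_t clamp8(int32_t v)
{
    if (v < 0)   return 0;
    if (v > 255) return 255;
    return (uint8_t)v;
}

/* BT.601 YCbCr -> RGB in 8.8 fixed point: each coefficient is scaled by
 * 256 (e.g. 1.596 -> 409), +128 rounds, and >> 8 drops the fraction. */
static void ycbcr_to_rgb(uint8_t y, uint8_t cb, uint8_t cr,
                         uint8_t *r, uint8_t *g, uint8_t *b)
{
    int32_t c = ((int32_t)y - 16) * 298;   /* 1.164 * 256 ~= 298 */
    int32_t d = (int32_t)cb - 128;
    int32_t e = (int32_t)cr - 128;

    *r = clamp8((c + 409 * e + 128) >> 8);            /* 1.596 * 256 */
    *g = clamp8((c - 100 * d - 208 * e + 128) >> 8);  /* 0.391, 0.813 */
    *b = clamp8((c + 516 * d + 128) >> 8);            /* 2.018 * 256 */
}
```

Note that every multiply here is fixed point; nothing in the decode/scan-conversion path needs a floating-point unit, which is exactly why the fast fixed-point DSPs are attractive.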
>
>For a shader, VLIW with 4 ALUs seems to be what is needed.  The best
>hardware appears to be the AltiVec processor in POWER-PC chips. But, a
>DSP with VLIW and 4 ALUs would be OK.
I agree. We could always just support OpenGL ES, but that is not a very good solution.


My personal thought is that we should try to steer away from the integrated solutions and go with either FPGA + DSP (best) or FPGA + ASIC, since that gives us more flexibility in the future. The only advantages I can see for the ASICs and SoCs are lower power (very important, and lower by a lot) and less development time.

Did I miss anything?
Nicholas

_______________________________________________
Open-hardware-ethervideo mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-hardware-ethervideo
