Artec has built a 6-DOF NURB in an FPGA and hacked EMC2 (sim) to feed it.
An open-source FPGA solution would be fascinating to me, but the
problem may already have been solved.  What do you want to do that is
different from their solution?
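
On the EMC2 side (re Topi's question below about the software architecture):
an FPGA interface normally shows up as a HAL driver component that exports
pins and a realtime update function.  Purely as a rough, hypothetical sketch
of that shape -- the component name, pin names, and the fpga_write_reg()
stub below are all made up, only the HAL/RTAPI calls themselves are real,
and error checking is omitted -- something like:

/* fpga_stepgen.c - hypothetical skeleton of an EMC2 HAL driver for an
 * FPGA step generator.  All names (component, pins, fpga_write_reg)
 * are invented for illustration.
 */
#include "rtapi.h"      /* realtime OS API */
#include "rtapi_app.h"  /* rtapi_app_main()/rtapi_app_exit() */
#include "hal.h"        /* HAL public API */

MODULE_LICENSE("GPL");

typedef struct {
    hal_float_t *velocity_cmd;   /* pin: commanded step rate, steps/s */
    hal_bit_t   *enable;         /* pin: enable output                */
} stepgen_hal_t;

static stepgen_hal_t *hd;        /* lives in HAL shared memory */
static int comp_id;

/* Stub: replace with real SPI/GPIO writes to the FPGA register file */
static void fpga_write_reg(int reg, unsigned long val)
{
    (void)reg; (void)val;
}

/* Realtime function, meant to be added to the servo thread */
static void update(void *arg, long period)
{
    (void)arg; (void)period;
    fpga_write_reg(0, (unsigned long)(*(hd->velocity_cmd) * 65536.0));
    fpga_write_reg(1, *(hd->enable) ? 1 : 0);
}

int rtapi_app_main(void)
{
    comp_id = hal_init("fpga_stepgen");
    if (comp_id < 0) return comp_id;

    hd = hal_malloc(sizeof(stepgen_hal_t));  /* pin pointers must live in HAL shm */
    hal_pin_float_new("fpga-stepgen.0.velocity-cmd", HAL_IN, &(hd->velocity_cmd), comp_id);
    hal_pin_bit_new("fpga-stepgen.0.enable", HAL_IN, &(hd->enable), comp_id);
    hal_export_funct("fpga-stepgen.update", update, NULL, 1, 0, comp_id);

    hal_ready(comp_id);
    return 0;
}

void rtapi_app_exit(void)
{
    hal_exit(comp_id);
}

At runtime the pins would be wired up in a .hal file and the function added
to the servo thread, e.g. "addf fpga-stepgen.update servo-thread".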

On Tue, 11 Sep 2012 00:23:44 +0300, Topi Rinkinen wrote:
> Hi,
>
> I have an RPi playing Internet radio, and another one waiting for some
> fun activities.
> I have been thinking about integrating a small FPGA or CPLD with the RPi,
> targeted especially at CNC or motor control applications.
> For starters one could use Lattice's MachXO2-based evaluation kits
> (USD 30) and connect one to the RPi's GPIO connector.
>
> But I don't know anything about the EMC2 software architecture, so I
> cannot start easily.
>
> My background is in FPGA design, and I think I can quite quickly
> develop a library of the needed VHDL modules to run on the CPLD/FPGA.
> If someone else could compile a list of the activities that need the
> FPGA's realtime capabilities, plus rough specifications, I can start
> coding the modules.
>
> BR, -Topi
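
(Interjecting here: as a first cut at that "list of activities", the
realtime jobs usually pushed into an FPGA for CNC are step/dir generation,
PWM, quadrature encoder counting, and a watchdog.  A purely illustrative C
view of the register map such an RPi<->FPGA peripheral might expose -- all
names and field widths invented -- could look like this:)

/* Hypothetical register map for an RPi<->FPGA motion peripheral,
 * reachable over SPI or the GPIO bus.  Illustration only. */
#include <stdint.h>

struct fpga_motion_regs {
    uint32_t magic;            /* identify/version the bitstream        */
    uint32_t watchdog;         /* must be petted each servo period      */
    uint32_t stepgen_rate[4];  /* step frequency, fixed-point, 4 axes   */
    uint32_t stepgen_count[4]; /* feedback: steps actually issued       */
    int32_t  encoder_count[4]; /* quadrature counters                   */
    uint32_t pwm_duty[2];      /* spindle / aux PWM duty cycle          */
    uint32_t gpio_in;          /* limit/home switch inputs              */
    uint32_t gpio_out;         /* coolant, enable, etc.                 */
};

(A HAL driver like the sketch at the top of this mail would then just mirror
HAL pins into these registers once per servo period.)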
>
>
> On Mon, 2012-09-10 at 16:41 -0400, Kent A. Reed wrote:
>> On 9/10/2012 10:45 AM, Michael Haberler wrote:
>> > here are LINPACK figures for the RPi and an Intel D525.
>> >
>> > the D525 is almost a factor of 6 faster than the RPi for this benchmark
>> >
>> > (pull the test code with wget
>> > http://www.netlib.org/benchmark/linpackc.new)
>> >
>> > pi@raspberrypi ~/tests $ gcc -O linpackc.c -lm -olinpackc
>> > pi@raspberrypi ~/tests $ ./linpackc
>> > Enter array size (q to quit) [200]:
>> > Memory required:  315K.
>> >
>> >
>> > LINPACK benchmark, Double precision.
>> > Machine precision:  15 digits.
>> > Array size 200 X 200.
>> > Average rolled and unrolled performance:
>> >
>> >      Reps Time(s) DGEFA   DGESL  OVERHEAD    KFLOPS
>> > ----------------------------------------------------
>> >        16   0.66  90.91%   3.03%   6.06%  35440.860
>> >        32   1.33  88.72%   3.01%   8.27%  36021.858
>> >        64   2.64  88.26%   2.65%   9.09%  36622.222
>> >       128   5.28  89.58%   3.22%   7.20%  35874.830
>> >       256  10.56  88.92%   3.12%   7.95%  36170.096
>> >
>> >
>> > mah@atom:~/src$ gcc -O linpackc.c  -lm
>> > linpackc.c: In function 'main':
>> > linpackc.c:78: warning: ignoring return value of 'fgets',
>> > declared with attribute warn_unused_result
>> > mah@atom:~/src$ ./a.out
>> > Enter array size (q to quit) [200]:
>> > Memory required:  315K.
>> >
>> >
>> > LINPACK benchmark, Double precision.
>> > Machine precision:  15 digits.
>> > Array size 200 X 200.
>> > Average rolled and unrolled performance:
>> >
>> >      Reps Time(s) DGEFA   DGESL  OVERHEAD    KFLOPS
>> > ----------------------------------------------------
>> >       128   0.96  86.46%   3.12%  10.42%  204403.101
>> >       256   1.91  87.96%   1.05%  10.99%  206807.843
>> >       512   3.82  87.43%   2.62%   9.95%  204403.101
>> >      1024   7.63  87.68%   2.36%   9.96%  204700.631
>> >      2048  15.26  87.42%   2.82%   9.76%  204254.660
>> >
>>
>>
>> I wanted to point to http://elinux.org/Rpi_Performance earlier but I
>> couldn't remember where I had seen the numbers.
>>
>> They reported marginally (ca 15 percent) better LINPACK performance
>> with their RPi at 700MHz and managed ca 60 MFLOPS by overclocking to
>> 1000MHz.
>>
>> Your numbers also don't seem out of line with
>> http://www.ptxdist.org/development/kernel/arm-benchmarks-20100729_en.html,
>> although they tested a BeagleBoard.
>>
>> I remember feeling OK in the Spring about my BeagleBoard-xM because I
>> knew the Raspberry Pi's ARM11 CPU uses the older ARMv6 instruction
>> set*; then I felt queasy when I realized the RPi includes VFP; now I am
>> bemused to read the following (from
>> http://vanshardware.com/2010/08/mirror-the-coming-war-arm-versus-x86/,
>> which tested yet another ARM system):
>>
>> > Oddly enough, during our performance optimization experiments, Neon
>> > generated the same level of double-precision performance as the VFP,
>> > while doubling the VFP’s single-precision performance. When we asked
>> > ARM about this, company representatives replied, “NEON improves FP
>> > performance significantly. The compiler should be directed to use
>> > NEON over the VFP.”
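
(A cheap way to see that effect on a Cortex-A class board is to build one
small single-precision kernel twice, once per FPU target.  Everything below
-- file name, flags, loop -- is just an illustrative sketch, and it does not
apply to the RPi's ARM11, which has VFP but no NEON:)

/* saxpy_test.c - hypothetical micro-benchmark comparing VFP vs NEON
 * single-precision throughput on a Cortex-A8/A9 class board.
 * gcc only vectorizes floats with NEON if unsafe math is allowed,
 * because NEON is not fully IEEE-754 compliant:
 *   gcc -O2 -mfpu=vfp  -mfloat-abi=softfp             saxpy_test.c -o saxpy_vfp
 *   gcc -O3 -mfpu=neon -mfloat-abi=softfp -ffast-math saxpy_test.c -o saxpy_neon
 */
#include <stdio.h>
#include <time.h>

#define N    (1 << 20)   /* 1M single-precision elements */
#define REPS 100

static float x[N], y[N];

int main(void)
{
    int i, r;
    clock_t t0;
    double secs;

    for (i = 0; i < N; i++) { x[i] = i * 0.5f; y[i] = 1.0f; }

    t0 = clock();
    for (r = 0; r < REPS; r++)       /* y += a*x: the loop NEON can vectorize */
        for (i = 0; i < N; i++)
            y[i] += 2.0f * x[i];
    secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* 2 flops (multiply + add) per element per rep */
    printf("%.1f MFLOPS (checksum %f)\n", 2.0 * N * REPS / secs / 1e6, y[N - 1]);
    return 0;
}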
>>
>>
>> Finally, from mathtest.c comes this 2004 comment by Paul Comer (drawn
>> from notes by Fred Proctor):
>>
>> > /*
>> >    math functions are:
>> >
>> >    extern double sin(double);            used in posemath, siggen, & noncartesian kins
>> >    extern double cos(double);            used in posemath, siggen, & noncartesian kins
>> >    extern double tan(double);            not used in RT
>> >    extern double asin(double);           not used in RT
>> >    extern double acos(double);           used in posemath & noncartesian kins
>> >    extern double atan2(double, double);  used in posemath & noncartesian kins
>> >    extern double sinh(double);           not used in RT
>> >    extern double cosh(double);           not used in RT
>> >    extern double tanh(double);           not used in RT
>> >    extern double exp(double);            not used in RT
>> >    extern double log(double);            not used in RT
>> >    extern double log10(double);          not used in RT
>> >    extern double pow(double, double);    not used in RT
>> >    extern double sqrt(double);           used in tc, segmot, & noncartesian kins.
>> >    extern double ceil(double);           used in segmot & emcpid
>> >    extern double floor(double);          used by siggen & segmot
>> >    extern double fabs(double);           used a lot in RT
>> >    extern double ldexp(double, int);     not used in RT
>> >
>> >    extern double sincos(double, double *, double *); Is called at four places in
>> >                                                      posemath - None of the resulting
>> >                                                      functions are used in EMC.
>> >    Extras:
>> >
>> >    extern int isnan(double);             Not used directly in RT - But is called
>> >                                          by several (all ?) of the floating point
>> >                                          math functions.
>> > */
>>
>> Obviously, this ignores the pedestrian functions like add, subtract,
>> multiply, divide, negate, and compare, but do you suppose this is still
>> an accurate accounting of the floating-point math functions used in
>> LinuxCNC?
