Hey Jeff, that was a crazy fast follow-up, thank you so much! I've benchmarked the ISPC version against the handmade SSE2 one and, surprisingly, their performance is almost identical. I compiled the ISPC code with the avx2-i32x16 target; I would have expected ISPC to be significantly faster.

https://horugame.com/wp-content/uploads/2020/03/ISPCTest.zip

This is a benchmark project where I've set both methods up to run on the same dataset. Run build_ispc.bat to build the ISPC objects, then open the .sln in Visual Studio 2019 and run the Release/x64 configuration.
Again, thanks for your time, this is amazing.

David

On Tuesday, March 24, 2020 at 1:33:02 AM UTC-5, Jeff Rous wrote:
>
> Hey David, give this a shot. What this does is load programCount / 4
> float4s into your SIMD registers (1 for i32x4, 2 for i32x8 and 4 for
> i32x16). To handle a count that isn't divisible by that number, I left
> your original algorithm in to do single iterations. Think of this like a
> foreach using 4-wide vectors instead of 1-wide floats. It should scale up
> to AVX-512 automatically.
>
> Hopefully no bugs!
>
> https://ispc.godbolt.org/z/NVtkv4 <https://ispc.godbolt.org/z/Jrk5_Q>
>
> Jeff
>
> On Monday, March 23, 2020 at 3:43:41 PM UTC-7, David Nadaski wrote:
>>
>> Thanks Pete, that's great!
>>
>> Trying to understand the potential optimizations needed to make this
>> work with AVX2 and, in the future, AVX-512: when you say I'd have to
>> rework the math or pack 2 vectors into an mm256, would that be something
>> specific to AVX2, or would it automatically scale to AVX-512 as well?
>> In other words, will I potentially have to rework my ISPC code once
>> AVX-512 becomes more widely adopted, or can I write something now that
>> will scale automatically thanks to ispc?
>> I can handcraft both AVX2 and AVX-512 implementations, but I was hoping
>> ISPC would sort of do that for me. I understand that dealing with
>> vectors and scalars is different.
>>
>> Thanks, David
>>
>> On Monday, March 23, 2020 at 4:51:39 PM UTC-5, Pete Brubaker wrote:
>>>
>>> No problem, let us know if we can help further. I mentioned this to
>>> Jeff and he's going to link some examples.
>>>
>>> On Mon, Mar 23, 2020 at 2:25 PM David Nadaski <[email protected]> wrote:
>>>
>>>> Thanks so much Pete, this does seem to be the fastest version yet,
>>>> rivaling the manually vectorized SSE2 code.
>>>> I will try to get ispc to pack the vectors into YMM and see how it
>>>> fares. Thanks again.
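The block-plus-remainder shape Jeff describes above (process full SIMD-width chunks, then fall back to the original single-element loop for the tail) can be sketched in plain C++. The function name and the fixed `Width` are hypothetical; this is only a scalar model of the pattern, not his actual Godbolt code, which does the blocking with ISPC varying float4s.

```cpp
#include <cstddef>

// Add 1.0f to every component of an array of float4s stored as a flat
// float array (AoS). Width stands in for the number of floats one SIMD
// register holds (8 for avx2-i32x8, 16 for AVX-512).
constexpr std::size_t Width = 8;

void AddOneBlocked(float* out, const float* in, std::size_t count) {
    std::size_t n = count * 4;           // total floats in 'count' float4s
    std::size_t main = n - (n % Width);  // largest multiple of Width <= n

    // Main loop: each block maps onto one full-width register operation
    // when vectorized (by ISPC, or by an auto-vectorizing C++ compiler).
    for (std::size_t i = 0; i < main; i += Width)
        for (std::size_t j = 0; j < Width; ++j)
            out[i + j] = in[i + j] + 1.0f;

    // Remainder: the original one-element-at-a-time algorithm handles
    // counts that aren't divisible by the block size.
    for (std::size_t i = main; i < n; ++i)
        out[i] = in[i] + 1.0f;
}
```

The key point is that the remainder loop runs at most Width - 1 iterations, so its scalar cost is negligible for large arrays.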
>>>>
>>>> On Monday, March 23, 2020 at 3:18:25 PM UTC-5, Pete Brubaker wrote:
>>>>>
>>>>> Hi David,
>>>>>
>>>>> Looking over your code: unless you can reorder the data to SoA, your
>>>>> best strategy is to use the following method. This is what Jeff does
>>>>> in Unreal.
>>>>>
>>>>> https://ispc.godbolt.org/z/UdWjH-
>>>>>
>>>>> The compiler knows you're working on <4>-element vectors and will only
>>>>> use XMM registers. To get to YMM registers you'd have to rework the
>>>>> math, or do two 4-component vectors at the same time packed into a
>>>>> single 8-wide varying float.
>>>>>
>>>>> Cheers,
>>>>>
>>>>> Pete
>>>>>
>>>>> On Mon, Mar 23, 2020 at 1:05 AM David Nadaski <[email protected]> wrote:
>>>>>
>>>>>> Thank you, much appreciated!
>>>>>>
>>>>>> On Monday, March 23, 2020 at 2:25:13 AM UTC-5, Dmitry Babokin wrote:
>>>>>>>
>>>>>>> The current code is using only "uniform" data and operations, so
>>>>>>> it's expectedly using only xmms.
>>>>>>>
>>>>>>> I've asked folks who developed similar code to have a look and
>>>>>>> comment on the best strategies for expressing your code in ISPC.
>>>>>>>
>>>>>>> Dmitry.
>>>>>>>
>>>>>>> On Sun, Mar 22, 2020 at 2:42 PM David Nadaski <[email protected]> wrote:
>>>>>>>
>>>>>>>> I've also made an implementation using aos_to_soa4 / soa_to_aos4,
>>>>>>>> which does use YMM, but it's even slower than the other two:
>>>>>>>> https://ispc.godbolt.org/z/zNbbU2
>>>>>>>>
>>>>>>>> This is what I'm ultimately trying to implement in ispc, so that it
>>>>>>>> can benefit from ispc's automatic AVXization etc.:
>>>>>>>> https://godbolt.org/z/DwaTuW
>>>>>>>>
>>>>>>>> On Sunday, March 22, 2020 at 4:56:37 AM UTC-5, David Nadaski wrote:
>>>>>>>>>
>>>>>>>>> Hi Dmitry, here it is: https://ispc.godbolt.org/z/8omkHp
>>>>>>>>>
>>>>>>>>> Basically, I'm looking for the most optimized way of taking in an
>>>>>>>>> array of Vector4 from C++ in AoS form and doing calculations on
>>>>>>>>> them in ispc.
>>>>>>>>> If I use foreach, ispc complains about stores and loads and the
>>>>>>>>> generated code is slower, but it does use YMM.
>>>>>>>>> If I use the code above (on godbolt, a simple for), the code is
>>>>>>>>> faster than the foreach version but lacks YMM, so I'm guessing it
>>>>>>>>> could be made more performant.
>>>>>>>>> Thank you for your help.
>>>>>>>>>
>>>>>>>>> David
>>>>>>>>>
>>>>>>>>> On Sunday, March 22, 2020 at 3:51:36 AM UTC-5, Dmitry Babokin wrote:
>>>>>>>>>>
>>>>>>>>>> David,
>>>>>>>>>>
>>>>>>>>>> Typically, if you target AVX2 (using --target=avx2-i32x8), the
>>>>>>>>>> code should use ymm registers. If you use --target=avx2-i32x4,
>>>>>>>>>> then your data is 128 bits wide (for int and float vectors),
>>>>>>>>>> which means that xmm registers will be used.
>>>>>>>>>>
>>>>>>>>>> Another possibility is that you are using uniform float<4>, which
>>>>>>>>>> again means that you are operating on 128-bit vectors.
>>>>>>>>>>
>>>>>>>>>> If you are already using the avx2-i32x8 target, can you share the
>>>>>>>>>> code via a Compiler Explorer link, so I can see both the code and
>>>>>>>>>> the compilation flags?
>>>>>>>>>>
>>>>>>>>>> Dmitry.
>>>>>>>>>>
>>>>>>>>>> On Sat, Mar 21, 2020 at 4:17 PM David Nadaski <[email protected]> wrote:
>>>>>>>>>>
>>>>>>>>>>> Thank you, Oleh!
>>>>>>>>>>>
>>>>>>>>>>> Checking the generated assembly, I'm noticing that it's
>>>>>>>>>>> targeting AVX2 but not using YMM registers at all. Would you
>>>>>>>>>>> know why?
>>>>>>>>>>> I've compiled it using -O2.
>>>>>>>>>>>
>>>>>>>>>>> AddVec4sProper_avx2:
>>>>>>>>>>> 00007FF6F9E167E0  test     r8d,r8d
>>>>>>>>>>> 00007FF6F9E167E3  jle      AddVec4sProper_avx2+0B9h (07FF6F9E16899h)
>>>>>>>>>>> 00007FF6F9E167E9  lea      eax,[r8-1]
>>>>>>>>>>> 00007FF6F9E167ED  mov      r9d,r8d
>>>>>>>>>>> 00007FF6F9E167F0  and      r9d,3
>>>>>>>>>>> 00007FF6F9E167F4  cmp      eax,3
>>>>>>>>>>> 00007FF6F9E167F7  jae      AddVec4sProper_avx2+26h (07FF6F9E16806h)
>>>>>>>>>>> 00007FF6F9E167F9  xor      r10d,r10d
>>>>>>>>>>> 00007FF6F9E167FC  test     r9d,r9d
>>>>>>>>>>> 00007FF6F9E167FF  jne      AddVec4sProper_avx2+90h (07FF6F9E16870h)
>>>>>>>>>>> 00007FF6F9E16801  jmp      AddVec4sProper_avx2+0B9h (07FF6F9E16899h)
>>>>>>>>>>> 00007FF6F9E16806  sub      r8d,r9d
>>>>>>>>>>> 00007FF6F9E16809  mov      eax,30h
>>>>>>>>>>> 00007FF6F9E1680E  xor      r10d,r10d
>>>>>>>>>>> 00007FF6F9E16811  vmovss   xmm0,dword ptr [__real@3f800000 (07FF6FA0EAF20h)]
>>>>>>>>>>> 00007FF6F9E16819  nop      dword ptr [rax]
>>>>>>>>>>> 00007FF6F9E16820  vmovaps  xmm1,xmmword ptr [rdx+rax-30h]
>>>>>>>>>>> 00007FF6F9E16826  vaddss   xmm1,xmm1,xmm0
>>>>>>>>>>> 00007FF6F9E1682A  vmovaps  xmmword ptr [rcx+rax-30h],xmm1
>>>>>>>>>>> 00007FF6F9E16830  vmovaps  xmm1,xmmword ptr [rdx+rax-20h]
>>>>>>>>>>> 00007FF6F9E16836  vaddss   xmm1,xmm1,xmm0
>>>>>>>>>>> 00007FF6F9E1683A  vmovaps  xmmword ptr [rcx+rax-20h],xmm1
>>>>>>>>>>> 00007FF6F9E16840  vmovaps  xmm1,xmmword ptr [rdx+rax-10h]
>>>>>>>>>>> 00007FF6F9E16846  vaddss   xmm1,xmm1,xmm0
>>>>>>>>>>> 00007FF6F9E1684A  vmovaps  xmmword ptr [rcx+rax-10h],xmm1
>>>>>>>>>>> 00007FF6F9E16850  vmovaps  xmm1,xmmword ptr [rdx+rax]
>>>>>>>>>>> 00007FF6F9E16855  vaddss   xmm1,xmm1,xmm0
>>>>>>>>>>> 00007FF6F9E16859  vmovaps  xmmword ptr [rcx+rax],xmm1
>>>>>>>>>>> 00007FF6F9E1685E  add      r10,4
>>>>>>>>>>> 00007FF6F9E16862  add      rax,40h
>>>>>>>>>>> 00007FF6F9E16866  cmp      r8d,r10d
>>>>>>>>>>> 00007FF6F9E16869  jne      AddVec4sProper_avx2+40h (07FF6F9E16820h)
>>>>>>>>>>> 00007FF6F9E1686B  test     r9d,r9d
>>>>>>>>>>> 00007FF6F9E1686E  je       AddVec4sProper_avx2+0B9h (07FF6F9E16899h)
>>>>>>>>>>> 00007FF6F9E16870  shl      r10,4
>>>>>>>>>>> 00007FF6F9E16874  neg      r9d
>>>>>>>>>>> 00007FF6F9E16877  vmovss   xmm0,dword ptr [__real@3f800000 (07FF6FA0EAF20h)]
>>>>>>>>>>> 00007FF6F9E1687F  nop
>>>>>>>>>>> 00007FF6F9E16880  vmovaps  xmm1,xmmword ptr [rdx+r10]
>>>>>>>>>>> 00007FF6F9E16886  vaddss   xmm1,xmm1,xmm0
>>>>>>>>>>> 00007FF6F9E1688A  vmovaps  xmmword ptr [rcx+r10],xmm1
>>>>>>>>>>> 00007FF6F9E16890  add      r10,10h
>>>>>>>>>>> 00007FF6F9E16894  inc      r9d
>>>>>>>>>>> 00007FF6F9E16897  jne      AddVec4sProper_avx2+0A0h (07FF6F9E16880h)
>>>>>>>>>>> 00007FF6F9E16899  ret
>>>>>>>>>>> 00007FF6F9E1689A  nop      word ptr [rax+rax]
>>>>>>>>>>>
>>>>>>>>>>> On Saturday, March 21, 2020 at 12:21:43 PM UTC-5, Oleh Nechaev wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> struct Vector4SOA
>>>>>>>>>>>> {
>>>>>>>>>>>>     float<4> V;
>>>>>>>>>>>> };
>>>>>>>>>>>>
>>>>>>>>>>>> export void Test(uniform Vector4SOA outs[], uniform Vector4SOA ins[],
>>>>>>>>>>>>                  uniform int count)
>>>>>>>>>>>> {
>>>>>>>>>>>>     for (uniform int i = 0; i < count; ++i)
>>>>>>>>>>>>     {
>>>>>>>>>>>>         uniform Vector4SOA vv = ins[i];
>>>>>>>>>>>>         vv.V.x++; // built-in access by x y z w and r g b a
>>>>>>>>>>>>         outs[i] = vv;
>>>>>>>>>>>>     }
>>>>>>>>>>>> }
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> You received this message because you are subscribed to the
>>>>>>>>>>> Google Groups "Intel SPMD Program Compiler Users" group.
>>>>>>>>>>> To unsubscribe from this group and stop receiving emails from
>>>>>>>>>>> it, send an email to [email protected].
>>>>>>>>>>> To view this discussion on the web visit
>>>>>>>>>>> https://groups.google.com/d/msgid/ispc-users/cf0af12b-d84a-463c-be09-c8b35b3baf81%40googlegroups.com.
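The aos_to_soa4 / soa_to_aos4 conversion David benchmarked earlier in the thread amounts to a 4x4 transpose between memory layouts. A plain C++ model of that transpose is below; the helper names are mine, and ISPC's real implementation works on whole registers with shuffle instructions rather than scalar loops, which is where its speed (or, as David measured, its overhead) comes from.

```cpp
#include <cstddef>

// Transpose n4 groups of four float4s (16 floats per group) from AoS
// order  x0 y0 z0 w0  x1 y1 z1 w1 ...  to SoA order
//        x0 x1 x2 x3  y0 y1 y2 y3 ...  and back again.
void AosToSoa4(float* soa, const float* aos, std::size_t n4) {
    for (std::size_t g = 0; g < n4; ++g)        // group of 4 vectors
        for (std::size_t v = 0; v < 4; ++v)     // vector within the group
            for (std::size_t c = 0; c < 4; ++c) // component x/y/z/w
                soa[g * 16 + c * 4 + v] = aos[g * 16 + v * 4 + c];
}

void SoaToAos4(float* aos, const float* soa, std::size_t n4) {
    for (std::size_t g = 0; g < n4; ++g)
        for (std::size_t v = 0; v < 4; ++v)
            for (std::size_t c = 0; c < 4; ++c)
                aos[g * 16 + v * 4 + c] = soa[g * 16 + c * 4 + v];
}
```

After the forward transpose, all four x components sit contiguously, so a single full-width register can hold one component of several vectors at once; the round trip only pays off when enough math happens between the two conversions to amortize the transposes.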
--
You received this message because you are subscribed to the Google Groups "Intel SPMD Program Compiler Users" group. To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/ispc-users/3bb72400-76a5-4616-ac94-7f9ae12b05f2%40googlegroups.com.
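For readers who want to reproduce the same-dataset comparison described in the first message without downloading the zip, a harness of roughly this shape works; the function and type names here are hypothetical, not the ones in the linked project.

```cpp
#include <chrono>
#include <cstddef>
#include <vector>

// Time a kernel over many repetitions on one shared dataset, so the
// ISPC path and the hand-written SSE2 path see identical inputs.
template <typename Kernel>
double TimeMs(Kernel k, std::vector<float>& out,
              const std::vector<float>& in, int reps) {
    auto t0 = std::chrono::steady_clock::now();
    for (int r = 0; r < reps; ++r)
        k(out.data(), in.data(), in.size());
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```

Before trusting the timings, it is worth checksumming both output buffers and asserting they match, so a faster-but-wrong variant cannot win the comparison.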
