Will the 'basic' run be the standard for online reporting?  The reason I ask
is that it would be useful to gauge hardware setups I don't have access to
and see how they stack up.

Side note: I can probably get Anandtech.com to adopt this as a standard
compute benchmark if there is a reproducible, standardized test to point
them to.

Also, if we do go that route, can we use a larger (more difficult) sparse
test?  It completes in the blink of an eye right now.  It would also be nice
to run a few different matrix sizes so we can see how the sparse kernels
scale, like the other benchmarks do; a rough sketch of what I mean follows.
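
To make that concrete, here is a throwaway CPU-side sketch (plain
scipy/numpy for illustration only, not the actual ViennaCL benchmark; the
sizes and density are placeholders I picked):

    # Hypothetical sketch: time a CPU SpMV on random sparse matrices of a few
    # sizes, just to show the kind of size sweep I have in mind.
    import time
    import numpy
    import scipy.sparse

    sizes = [10000, 100000, 1000000]   # rows/cols per test matrix (placeholders)
    for n in sizes:
        # ~5 nonzeros per row, CSR format
        A = scipy.sparse.rand(n, n, density=5.0 / n, format='csr')
        x = numpy.ones(n)
        t0 = time.time()
        for _ in range(10):
            A.dot(x)
        print("n = %9d: %.3f ms per SpMV" % (n, (time.time() - t0) / 10 * 1e3))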

Thanks,
Matt
On Aug 12, 2014 4:33 AM, "Karl Rupp" <r...@iue.tuwien.ac.at> wrote:

> Hi again,
>
>
> > It's actually important to have finer-grained data for small vectors,
> > and more widely spaced points as the data grows bigger: this is why it is
> > better to choose the sizes according to an a^x law rather than an a*x one.
> > You can experiment with other values than 2 for a, if you want. If I were
> > you, I'd probably go with something like:
> > [int(1.5**x) for x in range(30,45)]
> >
> > That is, a factor-of-1.5 increment from ~190,000 to ~55,000,000
>
> Hmm, 55M elements is a bit too much for the default mode; it would
> exceed the RAM available on a number of mobile GPUs. I'd rather suggest
> a range of ~1k to ~10M elements so that the latency at small vector
> sizes is also captured.
>
> Best regards,
> Karli
>
>
>
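
Regarding the size range: keeping the 1.5**x spacing suggested above but
shifting the exponent window would roughly give the ~1k to ~10M range Karl
mentions (the exact exponents here are just my guess):

    # 1.5**17 is roughly 1,000 elements, 1.5**40 is roughly 11,000,000
    sizes = [int(1.5**x) for x in range(17, 41)]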