Hey hey,
I've already revised my position a bit :P While it will be a separate
package, I plan to maintain only two different backends:
- OpenBLAS based for CPU
- ViennaCL based for OpenCL/CUDA
Anyway, it is trivial to extend the library with additional backends. As
soon as blocked_{matrix,vector} is there, I will immediately add support for:
template<class ScalarType>
struct blocked_viennacl_backend {
    typedef blocked_matrix<ScalarType> MatrixType;
    typedef blocked_vector<ScalarType> VectorType;
    //... some other typedefs
    void axpy(...); // redirects to operator overloads
    void dot(...);  // redirects to operator overloads
    // several other BLAS1 and BLAS2 wrappers
};
This would target very large problems and would be, as far as I know, the
first open-source multi-GPU nonlinear minimization package ;) Similar
extensions can be done trivially for Eigen, MKL, cuBLAS, etc. I want to
make it very explicit that this library is meant as a front end rather
than a back end, which is one reason why it will be a separate library,
pre-shipped with some backend definitions.
Beyond that, I think this will also serve as a very intensive
correctness and performance test for the forthcoming operation packing /
micro-scheduling mechanism!
Best regards,
Philippe
2013/9/7 Karl Rupp <r...@iue.tuwien.ac.at>
> Hey hey hey,
>
> > Integrating it completely would be trivial. I have no real name for it
> > right now, though. Let's assume it is named fminpp; then:
> >
> > typedef fminpp::optimization_options optimization_options;
> > typedef fminpp::directions::cg cg;
> > //other aliases...
> >
> > template<class ScalarType, class Fun>
> > viennacl::vector<ScalarType> minimize(Fun const & fun,
> > viennacl::vector<ScalarType> & x0){
> > return fminpp::minimize<fminpp::viennacl_backend<ScalarType> >(fun,
> x0);
> > }
> >
> > I think full integration is harmful from a maintainability and
> > documentation point of view, since the lib will probably have different
> > release cycles as well
> > as its own user guide, github, and potentially different contributors.
> > Since it is generic, the end goal is to have contributors who would work
> > either with ViennaCL, Eigen, Armadillo, OpenBlas, MKL, CuBLAS, etc...
> > such that modifications for any backend should stay consistent and work
> > on every other backend as well (this is another reason why the MIT
> > License is appropriate, since it is completely unrestrictive and
> > compatible with the license of any other linear algebra library out
> > there)...
> > the risk with a full integration is that the two paths would diverge,
> > viennacl eventually using an "old" version.
>
> Either the package (fminpp) is run as a separate standalone package,
> which is not shipped with ViennaCL (there is no problem with referring
> to it in the documentation, of course), or it is only part of ViennaCL.
> Anything in between will only result in version confusion and
> maintenance troubles.
>
>
> > Therefore, I'm rather in favor of keeping them entirely separate. On the
> > other hand, the most important thing is that users should know that they
> > can minimize ViennaCL functions in one line. Maybe they will not use
> > this functionality if it is not fully integrated...
>
> This can be partially addressed via documentation. If you decide to play
> the generic library game, you have to accept that you will have a small
> share of users using different libraries rather than a large share of
> users from a single library. This, on the other hand, means that you'll
> also have to do quite a significant amount of testing. Linking with MKL
> alone can be quite some fun already ;-)
>
>
> > I think integrating it into Python is doable. I can't go into
> > specifics, however, since I don't know how function pointers and
> > functors work in Python... :P Still, I'm very enthusiastic about such a
> > project. There is already a scipy.optimize library, which relies on
> > numpy arrays. I'm not a numpy expert, but it seems to me like Python's
> > duck typing would allow any library with a numpy-like interface to work
> > seamlessly with scipy.optimize, thus avoiding unnecessary CPU/GPU
> > transfers, supposedly... Now, as far as I know, PyViennaCL and numpy
> > have a different interface, so relying on scipy.optimize may not work...
> > We can discuss this in private if you want, since this is not entirely
> > viennacl-related and since there is no mailing list yet for the library
> :P
>
> Just keep in mind that you also need to provide 'additional value' to
> users. If you only duplicate functionality which is in numpy/scipy
> without providing e.g. a significant performance benefit, nobody will
> use it. None of the current algorithms in ViennaCL is revolutionary;
> it's just that most of them provide GPU acceleration, which one would
> not easily get otherwise.
>
> Best regards,
> Karli
>
>
>
_______________________________________________
ViennaCL-devel mailing list
ViennaCL-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/viennacl-devel