Hi folks: (a purely non-commercial question)
We are looking to get feedback from informatics users and researchers on where your computational bottlenecks are. Basically, we are trying to understand which programs or algorithms are the most time- and resource-consuming. The groups we speak with frequently have their own particular needs, so I would like to make sure we are addressing the larger community of informatics users and researchers.
The goal of this effort is to explore whether acceleration of some of these codes is possible or beneficial on GPU-based systems.
Part of my reason for asking is that I see a number of dynamic programming algorithms that have been accelerated on FPGAs and other platforms, but I am not seeing widespread use of these accelerators. This suggests that the barriers to obtaining and using them may outweigh the benefit ... either because the costs are too high, or because the wrong thing is being accelerated and the bottlenecks lie elsewhere. The recently announced GPU-HMMer work does seem to have good uptake, and we want to see what else we should be looking at.
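For concreteness, the sort of dynamic programming kernel I have in mind is something like the Smith-Waterman cell recurrence. Here is a minimal serial C sketch (scoring constants are purely illustrative, not anyone's production code):

    /* Simplified Smith-Waterman local alignment score, linear gap penalty.
       MATCH, MISMATCH, GAP are placeholder scoring constants. */
    #define MATCH     2
    #define MISMATCH -1
    #define GAP      -1

    static int max4(int a, int b, int c, int d) {
        int m = a;
        if (b > m) m = b;
        if (c > m) m = c;
        if (d > m) m = d;
        return m;
    }

    /* H is an (n+1) x (m+1) score matrix, zero-initialized by the caller. */
    int smith_waterman(const char *a, int n, const char *b, int m, int *H) {
        int best = 0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= m; j++) {
                int sub = (a[i-1] == b[j-1]) ? MATCH : MISMATCH;
                int h = max4(0,
                             H[(i-1)*(m+1) + (j-1)] + sub,   /* diagonal */
                             H[(i-1)*(m+1) + j]     + GAP,   /* gap in b */
                             H[i*(m+1) + (j-1)]     + GAP);  /* gap in a */
                H[i*(m+1) + j] = h;
                if (h > best) best = h;
            }
        }
        return best;
    }

Each cell depends on its upper, left, and upper-left neighbors, so parallelism is only available along anti-diagonals, which is part of why straightforward ports to accelerators are less effective than one might hope.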
We would like to know where the computational rate-limiting steps are today, what you need to run faster, and what the benefit of additional speed would be (as well as how much additional speed is needed to positively impact your research).
Please feel free to email me offline; if there is interest, we will summarize the responses. Again, this is strictly non-commercial in nature.
Regards,

Joe

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC
email: [email protected]
web  : http://www.scalableinformatics.com
       http://jackrabbit.scalableinformatics.com
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
