I have recently been using GLPK to solve a large-scale integer program, but the iteration time is too long. I want to ask whether there is an implementation that takes advantage of GPU computing (CUDA/OpenCL/etc.).
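For context on what a GPU could accelerate: a large share of each simplex iteration is spent in the "pricing" step, which computes the reduced costs of the nonbasic columns as a dense matrix-vector product. That kind of kernel maps naturally onto CUDA/OpenCL. Below is a minimal CPU sketch of that step using NumPy as a stand-in for a GPU kernel; the function name and the tiny example data are illustrative, not part of GLPK's API.

```python
# Sketch of the simplex pricing step that a GPU port could offload.
# Computes reduced costs d = c_N - N^T y, a dense matrix-vector
# product. NumPy stands in for a CUDA/OpenCL kernel here.
import numpy as np

def reduced_costs(N, c_N, y):
    """Reduced costs of the nonbasic columns: d = c_N - N^T y."""
    return c_N - N.T @ y

# Tiny illustrative instance (not from any real model)
N = np.array([[1.0, 2.0],
              [0.0, 1.0]])   # nonbasic columns of the constraint matrix
c_N = np.array([3.0, 4.0])   # objective coefficients of nonbasic vars
y = np.array([1.0, 1.0])     # simplex multipliers (dual values)

print(reduced_costs(N, c_N, y))  # -> [2. 1.]
```

On a GPU, `N.T @ y` would become a single GEMV launch (e.g. cuBLAS or clBLAS), though for sparse LP bases the win over a tuned CPU sparse kernel is far from guaranteed.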
If the answer is no, I think I can work on it myself. Is anyone here familiar with OpenCL?

_______________________________________________
Help-glpk mailing list
[email protected]
https://lists.gnu.org/mailman/listinfo/help-glpk
