I've cobbled together an example on the wiki of running jobs using a very minimalist queueing arrangement. It still needs some work, e.g. putting back the attribution that belongs in tst.mod, but it is complete. It creates a set of data files and then runs four instances of glpsol in parallel. The queue part is just the last script, tst4; all the rest is generating test data.
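To give a flavor of what such a queue looks like, here is a minimal sketch, assuming a GMPL model tst.mod and data files tst1.dat, tst2.dat, and so on; the names and the batch-of-four logic are illustrative, not the actual tst4 script from the wiki:

    #!/bin/sh
    # Minimal queue: keep at most NJOBS copies of glpsol running.
    # tst.mod and tst*.dat are illustrative names, not the wiki scripts.
    NJOBS=4
    i=0
    for dat in tst*.dat; do
        # Solve one instance in the background, each with its own output file.
        glpsol -m tst.mod -d "$dat" -o "${dat%.dat}.out" &
        i=$((i + 1))
        # After launching a batch of NJOBS solvers, wait for all of them.
        if [ $((i % NJOBS)) -eq 0 ]; then
            wait
        fi
    done
    wait    # pick up the final, possibly partial, batch

If you have GNU xargs, much the same effect fits on one line: printf '%s\n' tst*.dat | xargs -P 4 -I{} glpsol -m tst.mod -d {} -o {}.out. The plain shell loop, though, makes the queueing explicit and runs anywhere.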
All my real work is done on a system that is not internet connected, so I sneaker-net stuff. I ran into a glitch with a flash drive that deleted my scripts, forcing me to recreate them. That was on top of a problem that cost me my entire first attempt, when I was not able to save it to the wiki.

The creation of the test data is a bit odd because Matteo's post prompted me to experiment with the effect of random input orderings (there is a toy illustration of what I mean in the postscript below).

I have done very large jobs with scripts like this that took all weekend on a cluster, and on occasion I have come in on a Monday morning to discover the job died because I had completely filled a 2 TB filesystem with the output. Poor planning on my part, but it should make clear that you can do a lot of work this way. Running thousands of permutations of a problem just requires electricity and patience; I find that patience is best supplied by going home.

If anyone has questions, please let me know and I'll try to address them in the wiki. Also, many thanks to everyone who has helped me get this far with glpk, and to Marc especially.

Have Fun!

Reg
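P.S. A toy illustration of what I mean by random input orderings, in case it helps: the real generator is in the wiki scripts, and every name below is made up. Assuming one set member per line in records.txt, GNU shuf can produce freshly permuted GMPL data files:

    #!/bin/sh
    # Hypothetical generator: write N randomly ordered data files from a
    # list of set members kept one per line in records.txt.
    N=4
    i=1
    while [ "$i" -le "$N" ]; do
        {
            echo "data;"
            echo "set S :="
            shuf records.txt     # same members, a fresh random order each pass
            echo ";"
            echo "end;"
        } > "tst$i.dat"
        i=$((i + 1))
    done

Each run of glpsol then sees identical data in a different order, which is all the experiment needs.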
