Hello,

Below is my post from a few days ago; this time I've attached the output from -log_summary.

" Until a few days ago I had only been using PETSc in debug mode, and when I switched to the optimised version (--with-debugging=0) I got a strange result regarding the solve time: it was 10-15% higher than in debug mode. I'm trying to solve a linear system in parallel with superlu_dist, and I've tested my program on a Beowulf cluster, so far only using a single node with 2 quad-core Intel processors. As far as I know, the "no debug" version should be faster, and indeed on my laptop (dual-core Intel), for the same program and even the same matrices, the optimised version solves 2 times faster; but on the cluster the optimised version is slower than the debug version. Any thoughts? "
" Best regards, Bogdan Dita -------------- next part -------------- A non-text attachment was scrubbed... Name: petsc_log_debug.pdf Type: application/pdf Size: 23620 bytes Desc: not available URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20111114/f97806ae/attachment-0002.pdf> -------------- next part -------------- A non-text attachment was scrubbed... Name: petsc_log_NOdebug.pdf Type: application/pdf Size: 23101 bytes Desc: not available URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20111114/f97806ae/attachment-0003.pdf>
