VecLoadIntoVector and double dispatch

2009-12-05 Thread Jed Brown
On Fri, 4 Dec 2009 18:29:19 -0600, Barry Smith bsmith at mcs.anl.gov wrote: This may be the crux of the current discussion. This part was actually orthogonal to the extensible double-dispatch issue, which was: it should be possible for VecView(X,V) to invoke a third-party viewer V even when …
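
A minimal sketch of the run-time double dispatch at issue, using PETSc's composed-function mechanism. The key string "VecView_C" and every function name below are illustrative, the PetscObjectComposeFunction/PetscObjectQueryFunction signatures follow recent PETSc (older releases spell them differently), and error checking is omitted:

    #include <petscvec.h>

    /* Hypothetical view routine supplied by a third-party package. */
    static PetscErrorCode VecView_Third(Vec X, PetscViewer V)
    {
      /* ... write X in the third-party format ... */
      return 0;
    }

    /* The package composes its routine onto the viewer under a string key,
       so neither Vec nor PETSc needs compile-time knowledge of the type. */
    static PetscErrorCode ThirdPartyViewerRegister(PetscViewer V)
    {
      return PetscObjectComposeFunction((PetscObject)V, "VecView_C",
                                        VecView_Third);
    }

    /* VecView(X,V) can then dispatch on the viewer at run time: */
    static PetscErrorCode VecView_Dispatch(Vec X, PetscViewer V)
    {
      PetscErrorCode (*f)(Vec, PetscViewer) = NULL;
      PetscObjectQueryFunction((PetscObject)V, "VecView_C", &f);
      if (f) return (*f)(X, V);   /* viewer-supplied method wins */
      /* ... otherwise fall back to the Vec's built-in viewers ... */
      return 0;
    }

Because the lookup is a string query against the viewer object at run time, a viewer loaded from a plugin can be found even though Vec has no compile-time knowledge of it.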

since developing object oriented software is so cumbersome in C and we are all resistant to doing it in C++

2009-12-05 Thread Matthew Knepley
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener

2009-12-05 Thread Jed Brown
This is an interesting proposal. Two thoughts: Residual and Jacobian evaluation cannot be written in Python (though it can be prototyped there). After a discretization is chosen, the physics is usually representable as a tiny kernel (Riemann solvers/pointwise operations at quadrature points), …
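
A sketch of what such a tiny kernel can look like, under an invented interface (the names and the library/user split are assumptions, not PETSc API): the user writes only the pointwise physics, while quadrature, basis evaluation, and assembly stay in compiled library code.

    /* Pointwise residual kernel for -div(kappa grad u) = f, evaluated at
       a single quadrature point; all names are illustrative. */
    typedef struct { double kappa; } DiffusionCtx;

    static void ResidualPointwise(const DiffusionCtx *ctx,
                                  double u, const double gradu[3], double f,
                                  double *r0, double r1[3])
    {
      /* r0 multiplies the test function, r1 its gradient; the library's
         quadrature/assembly loop combines them into the residual. */
      *r0 = -f;
      for (int d = 0; d < 3; d++) r1[d] = ctx->kappa * gradu[d];
      (void)u; /* unused in this linear model; kept for a uniform interface */
    }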

2009-12-05 Thread Matthew Knepley
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener

2009-12-05 Thread Jed Brown
On Sat, 5 Dec 2009 13:09:33 -0600, Matthew Knepley knepley at gmail.com wrote: Then kernels are moved to an accelerator. These kernels necessarily involve user code (physics). It's a lot to ask users to maintain two versions of their physics, one which is debuggable and another which is fast …
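
One common way around the two-version problem, sketched under the assumption of a CUDA toolchain (the PHYSICS_FN macro is invented for illustration): compile the identical pointwise function for both host and device, so the debuggable CPU build and the fast GPU build share one source.

    #if defined(__CUDACC__)
    #  define PHYSICS_FN __host__ __device__  /* callable from CPU and GPU */
    #else
    #  define PHYSICS_FN                      /* plain C when not using nvcc */
    #endif

    /* Toy upwind flux for u_t + a u_x = 0 with wave speed a > 0; the same
       source is stepped through in gdb on the CPU and launched on the GPU. */
    static PHYSICS_FN void UpwindFlux(double uL, double uR, double *flux)
    {
      const double a = 1.0;
      *flux = a * uL;
      (void)uR; /* the right state is unused for strictly positive a */
    }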

2009-12-05 Thread Brad Aagaard
As someone who has a finite-element code built upon PETSc/Sieve with the top-level code in Python, I am in favor of Barry's approach. As Matt mentions, debugging multiple languages is more complex. Unit testing helps solve some of this, because tests associated with the low-level code involve only …

2009-12-05 Thread Jed Brown
Somehow this drifted off the list; hopefully the deep citations provide sufficient context. On Sat, 5 Dec 2009 13:42:31 -0600, Matthew Knepley knepley at gmail.com wrote: On Sat, Dec 5, 2009 at 1:32 PM, Jed Brown jed at 59a2.org wrote: On Sat, 5 Dec 2009 13:20:20 -0600, Matthew Knepley …

2009-12-05 Thread Dima Karpeyev
Cython can accelerate almost any Python code nearly immediately (although it supports a somewhat restricted subset of Python), simply by converting it to equivalent C code that is compiled and run within CPython. Chunks of the Python code can then be explicitly typed, which can …
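
A rough before/after picture of why the typing matters, written in C since C is what Cython emits (the CPython calls are real API, the pairing of the two functions is illustrative, and error checks are omitted):

    #include <Python.h>

    /* Roughly what an untyped "total += item" loop translates to: every
       iteration dispatches through the CPython object protocol. */
    static PyObject *SumUntyped(PyObject *seq)
    {
      PyObject *total = PyLong_FromLong(0);
      Py_ssize_t n = PySequence_Length(seq);
      for (Py_ssize_t i = 0; i < n; i++) {
        PyObject *item = PySequence_GetItem(seq, i);
        PyObject *next = PyNumber_Add(total, item);  /* boxed arithmetic */
        Py_DECREF(item);
        Py_DECREF(total);
        total = next;
      }
      return total;
    }

    /* What the same loop becomes once the variables carry C types
       (e.g. "cdef double" in Cython): no interpreter in the loop body. */
    static double SumTyped(const double *a, long n)
    {
      double total = 0.0;
      for (long i = 0; i < n; i++) total += a[i];
      return total;
    }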

2009-12-05 Thread Matthew Knepley
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener

2009-12-05 Thread Ethan Coon
This is a very interesting issue. Suppose you write the RHSFunction in Python and pass it to SNES. Are you saying that pdb cannot stop in that method when you step over SNESSolve() in Python? That would suck. If, on the other hand, you passed it in C, I can see how you are relegated to obscure …

2009-12-05 Thread Jed Brown
On Fri, 4 Dec 2009 22:42:35 -0600, Barry Smith bsmith at mcs.anl.gov wrote: Generally we would have one Python process per compute node, and local parallelism would be handled by the low-level kernels on the cores and/or GPUs. I think one MPI process per node is fine for MPI performance on good …
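
A minimal sketch of that layout, substituting MPI plus OpenMP for the Python-driver design under discussion (the hybrid structure, not the languages, is the point): one MPI rank per node, with node-local parallelism inside the kernels.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
      int provided;
      /* Only the per-node driver thread talks to MPI; worker threads
         (OpenMP here) run the local kernels. */
      MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

      double local = 0.0;
      #pragma omp parallel for reduction(+:local)
      for (int i = 0; i < 1000000; i++) local += 1e-6;  /* stand-in kernel */

      double global;
      MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
      MPI_Finalize();
      return 0;
    }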

2009-12-05 Thread Matthew Knepley
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener

2009-12-05 Thread Jed Brown
On Sat, 5 Dec 2009 16:02:38 -0600, Matthew Knepley knepley at gmail.com wrote: I need to understand better. You are asking about the case where we have many GPUs and one CPU? If it's always one or two GPUs per CPU, I do not see the problem. Barry initially proposed one Python thread per node, …

2009-12-05 Thread Matthew Knepley
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener

2009-12-05 Thread Matthew Knepley
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener