I have an existing MPI code that builds a linear system corresponding to an 
unstructured mesh. I'm hoping to port my code to PETSc, but I'm not sure my 
domain decomposition scheme is compatible.

The big problem seems to be that my subdomains are not guaranteed to own 
contiguous ranges of global node ids. How can I explicitly specify which 
processor owns which node/vector element (for the purposes of ghost-node 
synchronization)?
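To make the situation concrete, here is a toy sketch of the kind of ownership I mean (plain Python, no MPI or PETSc; the rank count and node ids are made up for illustration): each rank owns an arbitrary, non-contiguous set of global ids, and a "ghost" node is simply an id a rank's elements touch but does not own.

```python
# Hypothetical partition: each "rank" owns an arbitrary, NON-contiguous
# set of global node ids, as produced by my mesh partitioner.
owned = {
    0: [3, 7, 12, 40],
    1: [0, 5, 13, 41],
    2: [1, 2, 8, 20],
}

# Ownership map: global node id -> owning rank.
owner = {gid: rank for rank, gids in owned.items() for gid in gids}

def ghosts_for(rank, touched_ids):
    """Ghost nodes for a rank: ids its elements touch but it doesn't own."""
    return sorted(gid for gid in touched_ids if owner[gid] != rank)

# Rank 0's local elements reference these global ids; 5 and 8 live elsewhere.
print(ghosts_for(0, [3, 7, 12, 40, 5, 8]))  # -> [5, 8]
```

This is exactly the mapping I'd need to hand to PETSc somehow, rather than having PETSc assume each process owns a contiguous block of the global vector.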

Thanks for your help,
Craig Tanis
