Thanks for the links. I had come across the first one, but I haven't found any major speedup using the --inline option on this 1D problem. Initially I thought that remeshing would be fairly simple, not unlike what our C++ code does. It wouldn't necessarily speed things up, since our code uses a constant number of points, but it would increase the accuracy, since you reduce the computational window to only the areas with non-zero concentration (in the case of a diffusion problem).
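To make the remapping concrete, here is a rough NumPy-only sketch of the coarsening step described above: keep the point count fixed, double dx, take every other sample, and zero-pad the freed-up part of the window. The array length, slice choice, and hat-shaped test profile are invented for illustration and are not taken from the actual C++ code.

```python
import numpy as np

def coarsen(phi, dx):
    """Double the spatial step: keep every other point of the current
    solution and zero-pad so the array length stays constant."""
    n = phi.size
    coarse = phi[::2]                     # every other sample -> same physical window at 2*dx
    padded = np.concatenate((coarse, np.zeros(n - coarse.size)))
    return padded, 2 * dx

# toy example: a hat profile on 2000 points
phi = np.zeros(2000)
phi[900:1100] = 1.0
phi2, dx2 = coarsen(phi, dx=0.01)
```

The exact slice (e.g. starting at index 0 or 1, or coarsening only half the window) depends on where the non-zero concentration sits; the point is only that the remap is a cheap strided copy plus padding.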
From the response by the maintainers of FiPy, it would seem that this isn't as straightforward as I thought. I'll give it another go, though, and will report back if it works. The overhead might be pretty large; let's see.

Carsten

_____________________________________
Dipl.-Phys. Carsten Langrock, Ph.D.
Senior Research Scientist
Edward L. Ginzton Laboratory, Rm. 202
Stanford University

348 Via Pueblo Mall
Stanford, CA 94305

Tel. (650) 723-0464
Fax (650) 723-2666

Ginzton Lab Shipping Address:
James and Anna Marie Spilker Engineering and Applied Sciences Building 04-040
348 Via Pueblo Mall
Stanford, CA 94305
_____________________________________

> On Jul 24, 2018, at 9:19 AM, Drew Davidson <davidson...@gmail.com> wrote:
>
> Hi Carsten,
>
> Have you looked at:
>
> https://www.ctcms.nist.gov/fipy/documentation/EFFICIENCY.html
>
> http://nbviewer.jupyter.org/github/wd15/fipy-efficiency/blob/master/notebooks/FiPy-IPython.ipynb
>
> https://www.mail-archive.com/fipy@nist.gov/msg03180.html (thread)
>
> I struggled to install weave and get --inline working. I haven't used it much yet, but my initial impression was that the speedup was modest.
>
> Remeshing seems like a great possibility for making problems such as examples.phase.anisotropy run faster (use a higher mesh density at the interface), but I am afraid that the FiPy architecture (lazy evaluation, etc.) could interfere. I hope you share what you find with remeshing.
>
> Thanks
>
> On Tue, Jul 24, 2018 at 10:56 AM Carsten Langrock <langr...@stanford.edu> wrote:
>
>> Thanks for pointing out the performance order of the solvers. I'll try to get PySparse working to compare it with the other solvers. It's also good to know that I shouldn't give up on 2D just yet ;-)
>>
>> Regards,
>> Carsten
>>
>>> On Jul 24, 2018, at 6:11 AM, Guyer, Jonathan E. Dr. (Fed) <jonathan.gu...@nist.gov> wrote:
>>>
>>> FiPy still does not support remeshing.
>>>
>>> As Dario said, the choice of solver can make a big difference. I've not used PyAMG much, but PySparse is dramatically faster than SciPy. PyTrilinos is slower than PySparse, but enables you to solve in parallel.
>>>
>>> I've also found that 2D problems solve much better than the 1D performance would lead you to believe. There's just a lot of overhead in setting up the problem and in the Python communication with the lower-level libraries.
>>>
>>>> On Jul 23, 2018, at 6:44 PM, Carsten Langrock <langr...@stanford.edu> wrote:
>>>>
>>>> Hi,
>>>>
>>>> Thanks for the help with getting FiPy running under Linux! I am trying to re-create a 1D nonlinear diffusion problem for which we have C++ code that uses the implicit Thomas algorithm based on
>>>>
>>>> J. Weickert, B. ter Haar Romeny, M. Viergever, "Efficient and Reliable Schemes for Nonlinear Diffusion Filtering", IEEE Transactions on Image Processing, vol. 7, no. 3, p. 398, March 1998.
>>>>
>>>> I have been able to get results in FiPy that match this code very closely, which was a great start. Our C++ code uses a fixed number of spatial points and a fixed time step, but re-meshes space to use the size of the array most efficiently; it increases the spatial step size by a factor of 2 whenever the concentration at a particular point reaches a set threshold. I tried implementing this in FiPy as well, but haven't had much luck so far.
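As an aside for readers following along: on a 1D grid, an implicit step of the diffusion equation reduces to a tridiagonal linear system, which the Thomas algorithm solves in O(n). Below is a minimal NumPy sketch of one implicit Euler step; the grid size, diffusivity, step sizes, and boundary handling are invented for the example and are not taken from the C++ code or the Weickert paper.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c, and right-hand side d (Thomas algorithm).
    a[0] and c[-1] are unused."""
    n = b.size
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                 # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)                       # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# one implicit Euler step of phi_t = D * phi_xx on n points
# (boundary rows are left Dirichlet-like for simplicity)
n, D, dx, dt = 200, 1.0, 0.1, 0.05
r = D * dt / dx**2
a = np.full(n, -r)
c = np.full(n, -r)
b = np.full(n, 1.0 + 2.0 * r)
a[0] = c[-1] = 0.0
phi = np.exp(-((np.arange(n) - n / 2) * dx) ** 2)   # Gaussian initial profile
phi_new = thomas(a, b, c, phi)
```

For a nonlinear diffusivity, the coefficients a, b, c would be rebuilt from the current solution each step (or each sweep), but the solve itself stays O(n).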
>>>> I saw an old mailing-list entry from 2011 where a user was told that FiPy wasn't meant to do remeshing. Is that still the case?
>>>>
>>>> I'd imagine one would somehow need to update the Grid1D object with the new 'dx', but since the CellVariable that holds the solution was initialized with that mesh object, I am not sure that such a change would propagate in a sensible fashion. I think I know how to map the value of the CellVariable to account for the change in 'dx' by
>>>>
>>>>     array_size = 2000
>>>>     phi.value = numpy.concatenate((phi.value[1:array_size // 2:2],
>>>>                                    numpy.zeros(1500)))
>>>>
>>>> for the case when the initial variable holds 2000 spatial points (note the floor division, so the slice index stays an integer under Python 3). Maybe there's a more elegant way, but I think this works in principle.
>>>>
>>>> Another question would be execution speed. Right now, even without plotting the intermediate solutions, it takes many seconds on a very powerful computer to run a simple diffusion problem. I am probably doing something really wrong. I wasn't expecting the code to perform as well as the C++ code, but I had hoped to come within an order of magnitude. Are there ways to optimize the performance? Maybe select a particularly clever solver? If someone could point me in the right direction, that would be great. In the end, I would like to expand the code into 2D, but given the poor 1D performance, I don't think that would be feasible at this point.
>>>>
>>>> Thanks,
>>>> Carsten
_______________________________________________
fipy mailing list
fipy@nist.gov
http://www.ctcms.nist.gov/fipy
[ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]