No worries. If building trilinos doesn't blindside you with something 
unexpected and unpleasant, you're not doing it right.

I have a conda recipe at 
https://github.com/guyer/conda-recipes/tree/trilinos_upgrade_11_10_2/trilinos 
that has worked for me to build 11.10.2 on both OS X and Docker (Debian?). I 
haven't tried to adjust it to 12.x, yet.


On Mar 30, 2016, at 2:42 PM, Michael Waters <waters.mik...@gmail.com> wrote:

> Hi Jon,
> 
> I was just reviewing my version of Trilinos 11.10 and discovered that there 
> is no way I actually compiled it last night after exercising: it has unsatisfied 
> dependencies on my machine. So I must apologize; I must have been more tired 
> than I thought. 
> 
> Sorry for the error!
> -Mike Waters
> 
> On 3/30/16 11:52 AM, Guyer, Jonathan E. Dr. (Fed) wrote:
>> It looked to me like steps and accuracy were the way to do it, but my runs 
>> finish in one step, so I was confused. When I change to accuracy = 10.0**-6, 
>> it takes 15 steps, but still no leak (note: the hiccup in RSS and in ELAPSED 
>> time is because I put my laptop to sleep for a while, but VSIZE is 
>> rock-steady).
>> 
>> The fact that things never (or slowly) converge for you and Trevor, in 
>> addition to the leak, makes me wonder if Trilinos seriously broke something 
>> between 11.x and 12.x. Trevor's been struggling to build 12.4. I'll try to 
>> find time to do the same.
>> 
>> In case it matters, I'm running on OS X. What's your system?
>> 
>> - Jon
>> 
>> On Mar 29, 2016, at 3:59 PM, Michael Waters <waters.mik...@gmail.com> wrote:
>> 
>>> When I did my testing and made those graphs, I ran Trilinos in serial. 
>>> Syrupy didn't seem to track the other processes memory. I watched in 
>>> real time as the parallel version ate all my ram though.
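[Editor's note: one way around an external monitor like Syrupy missing child processes is to have each process sample its own memory. A minimal standard-library sketch (POSIX-only; the solver sweep itself is stubbed out, and none of these names come from the original code):

```python
# Each process can record its own peak RSS, sidestepping the need for an
# external monitor to discover child processes. POSIX-only standard library.
import resource
import sys

def peak_rss_bytes():
    """Peak resident set size of this process, in bytes."""
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is reported in kilobytes on Linux but in bytes on OS X
    return rss if sys.platform == "darwin" else rss * 1024

samples = []
for sweep in range(3):
    # ... the real solver sweep would run here ...
    samples.append(peak_rss_bytes())

# Monotonic growth across sweeps suggests a leak; note that ru_maxrss is a
# peak value, so this series can only ever stay flat or rise.
print(samples)
```

In a parallel run, each rank could log its own samples to a per-rank file.]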
>>> 
>>> To make the program run longer without changing the memory footprint:
>>> 
>>> steps = 100  # increase this (limits the number of self-consistent 
>>> iterations)
>>> accuracy = 10.0**-5  # make this smaller (relative energy-eigenvalue 
>>> change below which the run is considered converged)
>>> initial_solver_iterations_per_step = 7  # reduce this toward 1 (number of 
>>> solver iterations per self-consistent iteration; too small and it's slow, 
>>> too high and the solutions are not stable)
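[Editor's note: a hypothetical sketch of how those three knobs typically interact in a self-consistent loop. The names mirror the snippet above, but solve_once() is a toy stand-in, not the real DFT eigensolver:

```python
# Toy self-consistent loop: 'steps' caps the outer iterations, 'accuracy' is
# the relative-change convergence test, and the inner-iteration count trades
# per-step cost against per-step progress. solve_once() is a stand-in.
steps = 100
accuracy = 10.0**-5
initial_solver_iterations_per_step = 7

def solve_once(value, iterations):
    # toy iteration that relaxes toward a fixed point at 1.0
    for _ in range(iterations):
        value = 0.5 * (value + 1.0)
    return value

e_old = 0.0
for step in range(steps):
    e_new = solve_once(e_old, initial_solver_iterations_per_step)
    if abs(e_new - e_old) < accuracy * max(abs(e_new), 1.0):
        break  # relative eigenvalue change is small enough: converged
    e_old = e_new

# fewer inner iterations or a tighter accuracy means more outer steps
print(step + 1)
```

Tightening accuracy or shrinking the inner iteration count stretches the run out over more outer steps, which is what makes a slow leak visible.]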
>>> 
>>> I did those tests on a machine with 128 GB of ram so I wasn't expecting 
>>> any swapping.
>>> 
>>> Thanks,
>>> -mike
>>> 
>>> 
>>> On 3/29/16 3:38 PM, Guyer, Jonathan E. Dr. (Fed) wrote:
>>>> I guess I spoke too soon. FWIW, I'm running Trilinos version: 11.10.2.
>>>> 
>>>> 
>>>> On Mar 29, 2016, at 3:34 PM, Guyer, Jonathan E. Dr. (Fed) 
>>>> <jonathan.gu...@nist.gov> wrote:
>>>> 
>>>>> I'm not seeing a leak. The plot below is for Trilinos. VSIZE grows to 
>>>>> about 11 MiB and saturates, and RSS saturates at around 5 MiB. VSIZE is 
>>>>> more relevant for tracking leaks, since RSS is deeply tied to your 
>>>>> system's swapping architecture and to whatever else is running. Either 
>>>>> way, neither seems to be leaking, but this problem does use a lot of 
>>>>> memory.
>>>>> 
>>>>> What do I need to do to get it to run longer?
>>>>> 
>>>>> 
>>>>> 
>>>>> On Mar 25, 2016, at 7:16 PM, Michael Waters <waters.mik...@gmail.com> 
>>>>> wrote:
>>>>> 
>>>>>> Hello,
>>>>>> 
>>>>>> I still have a large memory leak when using Trilinos. I am not sure 
>>>>>> where to start looking, so I made an example code that reproduces my 
>>>>>> problem in hopes that someone can help me.
>>>>>> 
>>>>>> But! my example is cool. I implemented Density Functional Theory in FiPy!
>>>>>> 
>>>>>> My code is slow, but runs in parallel and is simple (relative to most 
>>>>>> DFT codes). The example I have attached is just a lithium and hydrogen 
>>>>>> atom. The electrostatic boundary conditions are goofy but work well 
>>>>>> enough for demonstration purposes. If you set use_trilinos to True, the 
>>>>>> code will slowly use more memory. If not, it will try to use Pysparse.
>>>>>> 
>>>>>> Thanks,
>>>>>> -Michael Waters
>>>>>> <input.xyz><fipy-dft.py>
>>>>>> _______________________________________________
>>>>>> fipy mailing list
>>>>>> fipy@nist.gov
>>>>>> http://www.ctcms.nist.gov/fipy
>>>>>>  [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]
>>>>> 
>>>> 
>>> 
>> 
>> 
>> 
> 

