Re: [julia-users] Re: [ANN] Jumos: a package for molecular simulations in Julia

2015-01-30 Thread Amuthan A. Ramabathiran
I think a few clarifications are necessary here:

1. The linked-cell code can mimic the naive O(N^2) algorithm if you make 
the following changes to md_main(...) (in main.jl):

n_cell = [ 1   for dim = 1:DIM ]
l_cell = [ 2r_cutoff   for dim = 1:DIM ]

With this modification the code is pretty much a direct-sum algorithm. 
In fact, this would be slightly faster than using the 2 x 2 x 2 grid for 
the current benchmark problem, since with just 125 particles it is 
cheaper to do an explicit search than to do all the bookkeeping about 
which particle is in which cell. The real advantage of the linked-cell 
method would show up as you increase the domain size and the number of 
particles.

2. The linked-cell method is perfectly valid as a serial algorithm (though 
the size problems you can tackle would accordingly be restricted). The 
basic idea behind the method is that if the forces between the particles 
are short-ranged, the calculation of the inter-particle forces is best 
restricted to the set of all particles within a cut-off radius of a given 
particle. To accomplish this, the physical domain is represented as a 
disjoint union of smaller cells whose physical dimensions are of the order of 
the cut-off radius of the inter-particle potential; this ensures that 
each cell handshakes only with its nearest-neighbor cells (in the geometric 
sense). An elementary description of the method can be found 
here: http://en.wikipedia.org/wiki/Cell_lists. Note that this method would 
not work in the presence of long-range forces. Regarding parallelization, 
it is not necessary that each cell lives in a different processor (and 
moreover, that is likely to be quite inefficient). Parallelizing this 
method is essentially about exploiting the geometric layout of the cells 
and mapping groups of them onto processors so as to minimize the 
communication between cells on different processors. I'm currently 
working on this and will have something interesting to share soon, 
hopefully.
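To make the idea concrete, here is a minimal illustrative sketch of the cell-assignment step (names and layout are hypothetical, not taken from the actual code):

```julia
# Illustrative sketch of building a cell list. Each cell side is at least
# r_cutoff, so a particle only needs to handshake with its own cell and
# the nearest-neighbor cells.
function build_cell_list(positions::Matrix{Float64}, box::Vector{Float64},
                         r_cutoff::Float64)
    dim, n = size(positions)
    n_cell = [max(1, floor(Int, box[d] / r_cutoff)) for d = 1:dim]
    l_cell = [box[d] / n_cell[d] for d = 1:dim]   # cell side >= r_cutoff
    cells = Dict{Vector{Int}, Vector{Int}}()      # cell index -> particle ids
    for i = 1:n
        idx = [floor(Int, positions[d, i] / l_cell[d]) + 1 for d = 1:dim]
        push!(get!(cells, idx, Int[]), i)
    end
    return cells
end
```

With n_cell forced to [1, 1, 1] every particle lands in the same cell, which is exactly the direct-sum limit described in point 1.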

3. I had slightly modified the initial positions to make sure that all the 
particles are inside [0,10]x[0,10]x[0,10] initially (you can find the 
modified He-LJ.xyz file in the link I gave earlier). Running the code with 
the original .xyz file would likely discard the points (just 4, I 
think) outside the domain, and this could be one reason why it runs faster 
on your machine. Currently, periodic boundary conditions are enforced only 
during computation, but not during initialization. Or maybe my laptop has 
grown old :)

Cheers,
Amuthan


On Friday, January 30, 2015 at 2:51:20 AM UTC-8, Luthaf wrote:

 That's already way better than the simple and naive version ! On my 
 computer, this gives me something like 4x slower than the serial LAMMPS code.

 I'll try to integrate this into the package. It will need changes in the 
 data structures, but this is worth it ! 
 I'll have to think a bit about how I can organise the data for 
 parallelism. The basic idea is to have every cell on a different processor, 
 isn't it ?

 Bests, 
 Guillaume


 Amuthan wrote: 

 Hi Guillaume: I've added the serial version of the linked-cell MD code 
 here: https://gist.github.com/Amuthan

 The code is pretty basic and needless to say, a lot more functionality 
 needs to be added (though I should admit that I'm more interested in 
 extending this code to a parallel version than to develop it as a 
 full-fledged MD package). I've made a few changes to make it work for your 
 benchmark; I didn't spend too much time optimizing it, but the current 
 version is just 7 times slower than the LAMMPS code. I guess someone with a 
 better understanding of Julia can optimize it further.

 Let me know if you have any comments or questions or suggestions for 
 improvement. 

 Cheers,
 Amuthan

 On Wed, Jan 28, 2015 at 7:48 AM, Luthaf lut...@luthaf.fr wrote:

 Hi Amuthan, 

 The current code uses the direct sum because this was the easiest 
 algorithm to implement, and your understanding is totally correct.

 Adding other algorithms is a piece of cake: it is as easy as subtyping 
 `BaseForceComputer` (I could not come up with a better name), and providing a 
 `call` method.
 So I am planning to add at least (serial) Verlet neighbor lists for the 
 force computations, and maybe some parallel code.

 But I don't know a lot about parallel methods in MD, and I still have a 
 lot to learn about it. It would definitely be very nice to have it !
 I think that only a few changes are needed to use Julia's parallel 
 framework in the package. All the data are stored in objects of type 
 Array3D, which could be modified to use DArray instead.
 I am open to changing the internal data structure if this can improve 
 performance.

 Is your neighbor list code available somewhere ? I am interested in 
 reading it.

 Cheers, 
 Guillaume

 Amuthan A. Ramabathiran wrote: 

 Hi Guillaume: 

 This looks good -- thanks for the package! I have a 

[julia-users] Re: It would be great to see some Julia talks at OSCON 2015

2015-01-30 Thread Phil Tomson
Just a reminder: the call for proposals for OSCON closes Feb 2. You've got 
one more weekend to get your Julia talk proposals in.  It's great 
visibility for the language as well as for presenters.
http://www.oscon.com/open-source-2015/public/cfp/360 

Phil


On Tuesday, January 6, 2015 at 7:16:07 AM UTC-8, Phil Tomson wrote:

 Hello Julia users:  

 I'm on the program committee for OSCON (the O'Reilly Open Source 
 Convention) and we're always looking for interesting programming talks. I'm 
 pretty sure that there hasn't been any kind of Julia talk at OSCON in the 
 past. Given the rising visibility of the language it would be great if we 
 could get some talk proposals for OSCON 2015.  This year there is a special 
 emphasis on languages for math & science (see the Call for Proposals here: 
 http://www.oscon.com/open-source-2015/public/cfp/360 
 specifically the Solve track) and Julia really should be featured.

 Are you using Julia to solve some interesting problems? Please submit a 
 talk proposal.

 OSCON is held in beautiful Portland Oregon July 20-24 this year.  Hope to 
 see your Julia talk there.

 Phil



[julia-users] Parametric vs Abstract typed collections, design question

2015-01-30 Thread James Crist
The problem:

I have 2 types. They have the same internal structure, and both can be 
thought of as subtypes of an abstract root.

abstract Root

type Foo <: Root
...
end

type Bar <: Root
...
end

Because they have the same internal structure, I could equally define a 
parametric type `Root`, and typealias `Foo` and `Bar`:

type Root
 ...
end
typealias Foo Root{:Foo}
typealias Bar Root{:Bar}

The Question:

Given a collection of objects of these types (say a vector), where both Foo 
and Bar will be present, which way is more performant/Julian? Basically, is 
it better to have a collection of an abstract type (option 1), or of a 
parametric type with the parameters yet to be specified (option 2)?
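For concreteness, both layouts compile as written (Julia 0.3-era syntax, as in this thread; field contents are hypothetical), and in both cases a mixed vector ends up with a non-concrete element type:

```julia
# Option 1: abstract supertype (fields are illustrative).
abstract Root1

type Foo1 <: Root1
    x::Float64
end

type Bar1 <: Root1
    x::Float64
end

# Option 2: one parametric type; aliases pin the parameter.
type Root2{T}
    x::Float64
end

typealias Foo2 Root2{:Foo}
typealias Bar2 Root2{:Bar}

# Either way, a vector holding both kinds cannot have a concrete element
# type, so the compiler cannot store the elements inline:
v1 = Root1[Foo1(1.0), Bar1(2.0)]   # eltype is the abstract Root1
v2 = Root2[Foo2(1.0), Bar2(2.0)]   # eltype is the incomplete Root2
```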


[julia-users] Re: linking against external libraries built with Julia

2015-01-30 Thread Viral Shah
This is a long-pending issue: how do we make it easy for packages to link 
against Julia's libraries, especially OpenBLAS? One quick thing we could do is 
provide something like julia-config that reports the right include paths, 
ldflags, and so on.

-viral

On Friday, January 30, 2015 at 4:30:24 AM UTC+5:30, Nick Henderson wrote:

 Hello,

 I am working on a Julia package which includes some Fortran code that I 
 would like to link against openblas built with Julia.

 On my linux machine, I find the libraries in JULIA_HOME/../lib

 However, on the OSX nightly build I find them in JULIA_HOME/../lib/julia

 Is there a simple way to find the location of the Julia libraries?

 Is linking against libraries built with Julia considered an acceptable 
 practice?

 Thanks so much!

 --Nick Henderson  



[julia-users] Re: [ANN] Differential Equation solver test suite

2015-01-30 Thread Alex
Very nice! Thanks for doing this. It is great to see that DASSL is largely 
competitive. I am also happy that ode23s isn't completely useless :-) In ODE.jl 
we still have the problem that we don't use in-place RHS functions or 
Jacobians. I guess this becomes problematic for larger systems (especially in 
combination with low order methods).

I think it would be nice if your tests/plots included the version (or 
commit) of the packages. Then we could use your package to monitor performance 
improvements/regressions.

Thanks again,

Alex.

On Friday, 30 January 2015 15:03:24 UTC+1, Mauro  wrote:
 Hi all,
 
 those of you who solve initial value problems (IVP) of ordinary and
 algebraic differential equations (ODE/DAE) might be interested in:
 https://github.com/mauro3/IVPTestSuite.jl It provides several test cases
 for ODE and DAE solvers based on previous, well-known test sets.  It can
 easily be adapted to run with your solvers.
 
 I ran the tests for the three ODE-solver packages ODE.jl, DASSL.jl and
 Sundials.jl.  Results are here:
 https://github.com/mauro3/IVPTestSuite.jl/blob/master/results/results.md
 
 I found:
 - DASSL.jl seems to be as capable as the Sundials solvers, though sometimes
   slower and sometimes faster.
 - For large(-ish) systems, e.g. stemming from PDEs, DASSL.jl seems to
   be the best and fastest choice at the moment because of the patchy support
   for sparse Jacobians in Sundials (please correct me if wrong).
 - ODE.ode23s does ok too but is generally a lot slower than DASSL,
   presumably because of its lower order.
 
 However, take those results with a grain of salt as I haven't spent too
 much time optimising yet.
 
 -- Mauro



Re: [julia-users] Re: [ANN] Differential Equation solver test suite

2015-01-30 Thread Mauro
 Very nice! Thanks for doing this. It is great to see that DASSL is
 largely competitive.  I am also happy that ode23s isn't completely
 useless :-)

Yep, DASSL seems good and ode23s is fine too.

 In ODE.jl we still have the problem that we don't use in-place RHS
 functions or Jacobians. I guess this becomes problematic for larger
 systems (especially in combination with low order methods).

Yes, it would be good to have the option to use in-place functions in
ODE.jl and DASSL.jl.  Memory allocation with the Sundials solvers is way
lower.  In fact, it would be nice to be able to use both.

 I think it would be nice if your tests/plots included the version
 (or commit) of the packages. Then we could use your package to monitor
 performance improvements/regressions.

Yes, I am planning to add some way to store previous test results.  This
should include the versions of the solver, IVPTestSuite and Julia.  I could
set up an ODE speed center, just like http://speed.julialang.org/ ;-)

Glad you like it!  M

 Thanks again,

 Alex.

 On Friday, 30 January 2015 15:03:24 UTC+1, Mauro  wrote:
 Hi all,
 
 those of you who solve initial value problems (IVP) of ordinary and
 algebraic differential equations (ODE/DAE) might be interested in:
 https://github.com/mauro3/IVPTestSuite.jl It provides several test cases
 for ODE and DAE solvers based on previous, well-known test sets.  It can
 easily be adapted to run with your solvers.
 
 I ran the tests for the three ODE-solver packages ODE.jl, DASSL.jl and
 Sundials.jl.  Results are here:
 https://github.com/mauro3/IVPTestSuite.jl/blob/master/results/results.md
 
 I found:
 - DASSL.jl seems to be as capable as the Sundials solvers, though sometimes
   slower and sometimes faster.
 - For large(-ish) systems, e.g. stemming from PDEs, DASSL.jl seems to
   be the best and fastest choice at the moment because of the patchy support
   for sparse Jacobians in Sundials (please correct me if wrong).
 - ODE.ode23s does ok too but is generally a lot slower than DASSL,
   presumably because of its lower order.
 
 However, take those results with a grain of salt as I haven't spent too
 much time optimising yet.
 
 -- Mauro



Re: [julia-users] Parametric vs Abstract typed collections, design question

2015-01-30 Thread James Crist
Woops, example 2 should be:

type Root{T}
...
end

On Fri, Jan 30, 2015 at 4:53 PM, James Crist crist...@umn.edu wrote:

 The problem:

 I have 2 types. They have the same internal structure, and both can be
 thought of as subtypes of an abstract root.

 abstract Root

 type Foo <: Root
 ...
 end

 type Bar <: Root
 ...
 end

 Because they have the same internal structure, I could equally define a
 parametric type `Root`, and typealias `Foo` and `Bar`:

 type Root
  ...
 end
 typealias Foo Root{:Foo}
 typealias Bar Root{:Bar}

 The Question:

 Given a collection of objects of these types (say a vector), where both
 Foo and Bar will be present, which way is more performant/Julian?
 Basically, is it better to have a collection of an abstract type (option
 1), or of a parametric type with the parameters yet to be specified (option
 2)?



[julia-users] odeprint

2015-01-30 Thread DP
In Sundials, is there anything equivalent to Matlab's odeprint, which 
reports the solver's progress?


[julia-users] Deleted message

2015-01-30 Thread Jiahao Chen
Apologies for the mistaken post. This was a note to myself that Gmail 
suggested a recipient for and I hit send by accident.


[julia-users] Jeremy Kepner on Python vs Julia

2015-01-30 Thread Jiahao Chen
Jeremy gave a talk today on associative arrays and said this during the Q&A:

Python's mathematics is fundamentally broken; there are too many ways to
represent 2D arrays due to decisions made 15 years ago... For Python
programmers we recommend them to use Julia. We find that Julia gives Python
programmers the correct mathematics in the environment they like. Using
Julia is almost the easier solution than fixing Python.

Thanks,

Jiahao Chen
Staff Research Scientist
MIT Computer Science and Artificial Intelligence Laboratory


[julia-users] Re: short circuit list comprehension

2015-01-30 Thread Gray Calhoun
You can use a macro. I've written short-circuiting 'any' and 'all' macros 
that are available in this gist:

https://gist.github.com/grayclhn/5e70f5f61d91606ddd93

I'm sure they can be substantially improved; the usage would be

if @all [f(x) for x in 1:10]
## success
else
## failure
end

and it rewrites the list comprehension as a loop and inserts a break 
statement.
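The expansion is roughly equivalent to writing the loop by hand (an illustrative sketch, not the macro's exact output):

```julia
# Roughly what a short-circuiting `all` over a comprehension becomes:
# evaluate f element by element and break out at the first failure,
# so f is never called on the remaining elements.
function all_loop(f, xs)
    result = true
    for x in xs
        if !f(x)
            result = false
            break
        end
    end
    return result
end
```

For example, `all_loop(iseven, [2, 4, 5, 8])` stops as soon as it sees 5.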

On Friday, January 30, 2015 at 12:51:13 AM UTC-6, Wai Yip Tung wrote:

 I want to apply function f() over a range of values. f() returns true for 
 success and false for failure. Since f() is expensive, I want short circuit 
 computation, i.e. it stops after the first failure. 

 In python, I can do this in an elegant way with the all() function and 
 generator expression.

 if all(f(x) for x in values)
   # success
 else
   # failure

 From what I understand, there is no generator expression in Julia. List 
 comprehension will evaluate the full list. Even if I try to use for loop, I 
 can't use the control variable to check if the loop has run to finish or 
 not.

 i = 0
 for i in 1:length(values)
     if !f(values[i])
         break
     end
 end
 # The status is ambiguous if i==length(values) 

 My last resort is to add a flag to indicate success or failure. Is there 
 some more elegant way to do this?



[julia-users] Re: It would be great to see some Julia talks at OSCON 2015

2015-01-30 Thread Eric Forgy
It looks like there is at least one Julia talk 
http://www.oreilly.com/conferences/sample_proposals.html#tech :)

On Saturday, January 31, 2015 at 6:32:53 AM UTC+8, Phil Tomson wrote:

 Just a reminder: the call for proposals for OSCON closes Feb 2. You've got 
 one more weekend to get your Julia talk proposals in.  It's great 
 visibility for the language as well as for presenters.
 http://www.oscon.com/open-source-2015/public/cfp/360 

 Phil


 On Tuesday, January 6, 2015 at 7:16:07 AM UTC-8, Phil Tomson wrote:

 Hello Julia users:  

 I'm on the program committee for OSCON (the O'Reilly Open Source 
 Convention) and we're always looking for interesting programming talks. I'm 
 pretty sure that there hasn't been any kind of Julia talk at OSCON in the 
 past. Given the rising visibility of the language it would be great if we 
 could get some talk proposals for OSCON 2015.  This year there is a special 
 emphasis on languages for math & science (see the Call for Proposals here: 
 http://www.oscon.com/open-source-2015/public/cfp/360 
 specifically the Solve track) and Julia really should be featured.

 Are you using Julia to solve some interesting problems? Please submit a 
 talk proposal.

 OSCON is held in beautiful Portland Oregon July 20-24 this year.  Hope to 
 see your Julia talk there.

 Phil



[julia-users] What is the difference between these two functions?

2015-01-30 Thread Kirill Ignatiev
I have a newbie-type performance question.

In some of my code there is a structure that looks like this:

type FourierPoly
    periods :: Vector{Int}
    radiuses :: Vector{Float64}
    phase_offsets :: Vector{Float64}
end


and the following two functions that operate on it:

- function polyval(f::FourierPoly, t::Float64)
 33694968   s = zero(Complex128)
        0   @inbounds for k = 1:length(f.periods)
        0     s += exp(2pi*(t + f.phase_offsets[k]) * f.periods[k] * im) * f.radiuses[k]
        -   end
        0   return s::Complex128
        0 end
        0 
        0 function polyder(f::FourierPoly, t::Float64)
        0   s = zero(Complex128)
492303248   @inbounds for k = 1:length(f.periods)
        0     θ = 2pi * f.periods[k]
164100720     s += θ * im * exp((t + f.phase_offsets[k]) * θ * im) * f.radiuses[k]
   257652   end
        0   return s::Complex128
        - end


(copied from output of julia run with --track-allocation=user).

What is the difference between these two functions? polyval seems fine, but 
polyder is called at most as often as polyval from the rest of the code, 
yet its memory consumption is at least an order of magnitude higher. Can 
somebody please point out what I'm missing here?


[julia-users] Show method for types

2015-01-30 Thread i . costigan
I've got lots of multi-field types in a package I'm developing 
(https://github.com/imanuelcostigan/FinancialMarkets.jl) and would like to 
understand how best to show() them. The default method applied to some 
types prints what is effectively unreadable garbage to the REPL. Has 
someone else dealt with this successfully?
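For reference, the usual pattern is to add a method to Base.show for each type (the type and fields below are hypothetical, not from FinancialMarkets.jl):

```julia
# Hypothetical example type; extend Base.show to control REPL printing.
type Quote
    currency::ASCIIString
    rate::Float64
end

# One-line, human-readable form instead of the default field dump.
Base.show(io::IO, q::Quote) = print(io, "Quote(", q.currency, " @ ", q.rate, ")")
```

After this, the REPL (and anything else that prints a `Quote`) uses the compact form.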


[julia-users] Question on tuple size

2015-01-30 Thread Jung Soo Park
Hi,

I am generating two functions, and the output of the first function will be 
used as the input to the second function.

Like Matlab's [x,y,z] = test_function(input), I used Julia's tuple 
function to generate [x,y,z] and it worked well.
  
   function test_function(input)
       x = rand(20,5);
       y = rand(30,50);
       z = rand(5,100);
       ALL = tuple(x,y,z);
       return ALL
   end

   I sliced the output ALL with [[ ]] and saved the pieces as xx, yy, zz.
  
 ALL= test_function(best)
 xx=ALL[[1]];
 yy=ALL[[2]];
 zz=ALL[[3]];
   
   But I found that the size of the original output (say x) and of the sliced 
version (say xx) are not identical, so I cannot pass the values to the second 
function.
   size(x)
   (20,5)
   
size(xx)
`size` has no method matching size(::(Array{Float64,2},))

  while loading In[1], in expression starting on line 12

   
Q) How can I make xx have the same size as x, so that I can run my 
second function like the following?
   
function test_2(x,y,z)
smile!!
end
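A plain integer index returns the array itself, so the sizes match; ALL[[1]] instead returns the one-element tuple (x,), which is what the size error is complaining about. A self-contained sketch:

```julia
# Self-contained version of the example above.
function test_function(input)
    x = rand(20, 5)
    y = rand(30, 50)
    z = rand(5, 100)
    return (x, y, z)          # same as tuple(x, y, z)
end

ALL = test_function(0)
xx = ALL[1]                   # plain integer index: the array itself
size(xx)                      # (20, 5), matching size(x)

# Destructuring does the same thing in one line:
xx, yy, zz = test_function(0)
```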


  Thank you for your time.
 
  Jase


[julia-users] [ANN] Differential Equation solver test suite

2015-01-30 Thread Mauro
Hi all,

those of you who solve initial value problems (IVP) of ordinary and
algebraic differential equations (ODE/DAE) might be interested in:
https://github.com/mauro3/IVPTestSuite.jl It provides several test cases
for ODE and DAE solvers based on previous, well-known test sets.  It can
easily be adapted to run with your solvers.

I ran the tests for the three ODE-solver packages ODE.jl, DASSL.jl and
Sundials.jl.  Results are here:
https://github.com/mauro3/IVPTestSuite.jl/blob/master/results/results.md

I found:
- DASSL.jl seems to be as capable as the Sundials solvers, though sometimes
  slower and sometimes faster.
- For large(-ish) systems, e.g. stemming from PDEs, DASSL.jl seems to
  be the best and fastest choice at the moment because of the patchy support
  for sparse Jacobians in Sundials (please correct me if wrong).
- ODE.ode23s does ok too but is generally a lot slower than DASSL,
  presumably because of its lower order.

However, take those results with a grain of salt as I haven't spent too
much time optimising yet.

-- Mauro


Re: [julia-users] Re: Is it possible to file jld only read range ?

2015-01-30 Thread Tim Holy
Do Pkg.update() and it will start working.

--Tim

On Friday, January 30, 2015 03:20:27 AM paul analyst wrote:
 But why does this not work: dset[:,1] ?
 julia> dset[:,1]
 ERROR: `size` has no method matching size(::JldDataset, ::Int64)
 
 julia> dset[1:k,1]
 30070x1 Array{Float64,2}:



Re: [julia-users] short circuit list comprehension

2015-01-30 Thread Mike Innes
How about `all(f, values)`?

On 30 January 2015 at 06:51, Wai Yip Tung w...@tungwaiyip.info wrote:

 I want to apply function f() over a range of values. f() returns true for
 success and false for failure. Since f() is expensive, I want short circuit
 computation, i.e. it stops after the first failure.

 In python, I can do this in an elegant way with the all() function and
 generator expression.

 if all(f(x) for x in values)
   # success
 else
   # failure

 From what I understand, there is no generator expression in Julia. List
 comprehension will evaluate the full list. Even if I try to use for loop, I
 can't use the control variable to check if the loop has run to finish or
 not.

 i = 0
 for i in 1:length(values)
     if !f(values[i])
         break
     end
 end
 # The status is ambiguous if i==length(values)

 My last resort is to add a flag to indicate success or failure. Is there
 some more elegant way to do this?




Re: [julia-users] Re: [ANN] Jumos: a package for molecular simulations in Julia

2015-01-30 Thread Amuthan
Hi Guillaume: I've added the serial version of the linked-cell MD code
here: https://gist.github.com/Amuthan

The code is pretty basic and needless to say, a lot more functionality
needs to be added (though I should admit that I'm more interested in
extending this code to a parallel version than to develop it as a
full-fledged MD package). I've made a few changes to make it work for your
benchmark; I didn't spend too much time optimizing it, but the current
version is just 7 times slower than the LAMMPS code. I guess someone with a
better understanding of Julia can optimize it further.

Let me know if you have any comments or questions or suggestions for
improvement.

Cheers,
Amuthan

On Wed, Jan 28, 2015 at 7:48 AM, Luthaf lut...@luthaf.fr wrote:

 Hi Amuthan,

 The current code uses the direct sum because this was the easiest
 algorithm to implement, and your understanding is totally correct.

 Adding other algorithms is a piece of cake: it is as easy as subtyping
 `BaseForceComputer` (I could not come up with a better name), and providing a
 `call` method.
 So I am planning to add at least (serial) Verlet neighbor lists for the
 force computations, and maybe some parallel code.

 But I don't know a lot about parallel methods in MD, and I still have a
 lot to learn about it. It would definitely be very nice to have it !
 I think that only a few changes are needed to use Julia's parallel framework
 in the package. All the data are stored in objects of type Array3D, which
 could be modified to use DArray instead.
 I am open to changing the internal data structure if this can improve
 performance.

 Is your neighbor list code available somewhere ? I am interested in
 reading it.

 Cheers,
 Guillaume

 Amuthan A. Ramabathiran wrote:

 Hi Guillaume:

 This looks good -- thanks for the package! I have a basic question
 regarding the force computation: I had a quick look at the code and it
 appears to me that the current code uses an O(N^2) algorithm (direct sum)
 for this. Is my understanding correct? Software like LAMMPS uses very
 efficient algorithms for short- and long-range potentials, and this might
 possibly account for the fact that the benchmark is 80x slower.

 I had developed a serial MD code using neighbor lists last year and I'm
 currently in the process of parallelizing it to study the suitability of
 Julia for developing (reasonably) parallel research codes. Is
 parallelization something you're looking into? I would be very interested
 in hearing about this. Thanks!

 Cheers,
 Amuthan

 On Tuesday, January 27, 2015 at 2:46:59 PM UTC-8, Luthaf wrote:

 Hi julians !

 I am very happy to announce the release of Jumos, a Julia package for
 molecular simulations.

 You can find the code here: https://github.com/Luthaf/Jumos.jl, and the
 documentation is hosted by readthedocs : http://jumos.readthedocs.org/
 en/latest/.

 This package aims at being as extensible as possible. Every algorithm is
 a type that can be subtyped and changed at runtime, during the simulation.
  This makes it easy to write new algorithms, experiment with them, and
 combine algorithms in all the fancy ways you can imagine.

  Presently, only molecular dynamics is implemented, but I am planning to
 add energy minimisation, and trajectory replay from files (maybe
 Monte Carlo simulations if I find some time). A handful of tools are
 already implemented :
  - Velocity-Verlet and Verlet integrators;
  - Lennard-Jones and Harmonic potentials;
  - User-defined potentials;
  - RDF and density profile analysis;
  - Berendsen thermostat;
  - Velocity-rescale thermostat;
  - Temperature and energy computations.

  And a lot more are planned: https://github.com/Luthaf/Jumos.jl/issues.
  If you want to help, contributions are very welcome ! Just pick up an
 issue or create a new one as a feature request.
  The first benchmark (against LAMMPS) shows Jumos running about 80 times
 slower than LAMMPS. This is already great for a dynamic language, but there is
 still room for improvement !

  Any feedback on the implementation is also very welcome.

  Regards
  Guillaume

  PS: What does the ANN in the title mean ? It seems to be everywhere,
 but I don't have any clue about its meaning





Re: [julia-users] short circuit list comprehension

2015-01-30 Thread Ivar Nesje
any() is short-circuiting if the length of the array is more than 16. That 
seems like dubious semantics to me.

Ivar

fredag 30. januar 2015 09.39.28 UTC+1 skrev Mike Innes følgende:

 Ok, I now see that `all` isn't short-circuiting, which is kind of 
 surprising/annoying. Anyone know why that is?

 You can easily define your own short-circuiting `all` to do this:

 all′(f, xs) =
   isempty(xs) ? true :
   f(xs[1]) && all′(f, xs[2:end])

 The other thing you could try is to check out Lazy.jl. If you call, for 
 example,

 @time all(map(f,list(1,2,3,5)))

 The `list` construct actually takes care of short-circuiting for you, so 
 what looks like a very expensive `map` operation is actually very cheap.

 On 30 January 2015 at 08:05, Mike Innes mike.j...@gmail.com wrote:

 How about `all(f, values)`?

 On 30 January 2015 at 06:51, Wai Yip Tung w...@tungwaiyip.info wrote:

 I want to apply function f() over a range of values. f() returns true for 
 success and false for failure. Since f() is expensive, I want short circuit 
 computation, i.e. it stops after the first failure. 

 In python, I can do this in an elegant way with the all() function and 
 generator expression.

 if all(f(x) for x in values)
   # success
 else
   # failure

 From what I understand, there is no generator expression in Julia. List 
 comprehension will evaluate the full list. Even if I try to use for loop, I 
 can't use the control variable to check if the loop has run to finish or 
 not.

 i = 0
 for i in 1:length(values)
     if !f(values[i])
         break
     end
 end
 # The status is ambiguous if i==length(values) 

 My last resort is to add a flag to indicate success or failure. Is 
 there some more elegant way to do this?





Re: [julia-users] short circuit list comprehension

2015-01-30 Thread Mike Innes
Ok, I now see that `all` isn't short-circuiting, which is kind of
surprising/annoying. Anyone know why that is?

You can easily define your own short-circuiting `all` to do this:

all′(f, xs) =
  isempty(xs) ? true :
  f(xs[1]) && all′(f, xs[2:end])

The other thing you could try is to check out Lazy.jl. If you call, for
example,

@time all(map(f,list(1,2,3,5)))

The `list` construct actually takes care of short-circuiting for you, so
what looks like a very expensive `map` operation is actually very cheap.
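A quick way to check that the recursive version really stops early (same definition, ASCII name, and a hypothetical call counter):

```julia
# Short-circuiting `all` via recursion, as above.
allsc(f, xs) =
    isempty(xs) ? true :
    f(xs[1]) && allsc(f, xs[2:end])

# Count how many times the predicate actually runs.
calls = 0
probe(x) = (global calls += 1; iseven(x))

allsc(probe, [2, 4, 5, 8, 10])   # false
calls                            # 3: probe never ran on 8 or 10
```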

On 30 January 2015 at 08:05, Mike Innes mike.j.in...@gmail.com wrote:

 How about `all(f, values)`?

 On 30 January 2015 at 06:51, Wai Yip Tung w...@tungwaiyip.info wrote:

 I want to apply function f() over a range of values. f() returns true for
 success and false for failure. Since f() is expensive, I want short circuit
 computation, i.e. it stops after the first failure.

 In python, I can do this in an elegant way with the all() function and
 generator expression.

 if all(f(x) for x in values)
   # success
 else
   # failure

 From what I understand, there is no generator expression in Julia. List
 comprehension will evaluate the full list. Even if I try to use for loop, I
 can't use the control variable to check if the loop has run to finish or
 not.

 i = 0
 for i in 1:length(values)
     if !f(values[i])
         break
     end
 end
 # The status is ambiguous if i==length(values)

 My last resort is to add a flag to indicate success or failure. Is there
 some more elegant way to do this?





Re: [julia-users] LAPACKException(1) during SVD

2015-01-30 Thread Tamas Papp
I think that LAPACK error codes above 0 are about ill-conditioning
and/or non-convergence. Perhaps you could share the matrix (in a binary
format, since copy-pasting decimal floats can change the values and
thus the conditioning).
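In case it helps, Julia's built-in serializer is enough to share the matrix losslessly (JLD or HDF5 would be the more portable choice; the `using Serialization` line is only needed on newer Julia versions, where it moved out of Base):

```julia
# Sketch: dump the offending matrix in lossless binary form so others can
# reproduce the LAPACKException with the exact same bits.
using Serialization   # not needed on old Julia, where serialize was in Base

A = rand(100, 50)     # stand-in for the problematic matrix
open("badmatrix.bin", "w") do io
    serialize(io, A)
end
B = open(deserialize, "badmatrix.bin")
# B == A holds bit-for-bit, unlike a decimal copy-paste round trip
```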

Best,

Tamas

On Fri, Jan 30 2015, Steven Sagaert steven.saga...@gmail.com wrote:

 when doing an SVD of a large matrix I get
 ERROR: LAPACKException(1)
  in gesdd! at linalg/lapack.jl:1046
  in svdfact! at linalg/factorization.jl:660
  in svdfact at linalg/factorization.jl:664

 It's definitely something related to the data, because it works on different
 matrices and the code has been working for months. Any idea what that error
 is about?

 I'm working on Ubuntu 14.04, julia version 0.3.5


Re: [julia-users] Re: [ANN] Jumos: a package for molecular simulations in Julia

2015-01-30 Thread Luthaf
That's already way better than the simple and naive version ! On my 
computer, this gives me something like 4x slower than the serial LAMMPS code.


I'll try to integrate this into the package. It will need changes in the 
data structures, but this is worth it !
I'll have to think a bit about how I can organise the data for 
parallelism. The basic idea is to have every cell on a different 
processor, isn't it ?


Bests,
Guillaume


Amuthan wrote:
Hi Guillaume: I've added the serial version of the linked-cell MD code 
here: https://gist.github.com/Amuthan


The code is pretty basic and needless to say, a lot more functionality 
needs to be added (though I should admit that I'm more interested in 
extending this code to a parallel version than to develop it as a 
full-fledged MD package). I've made a few changes to make it work for 
your benchmark; I didn't spend too much time optimizing it, but the 
current version is just 7 times slower than the LAMMPS code. I guess 
someone with a better understanding of Julia can optimize it further.


Let me know if you have any comments or questions or suggestions for 
improvement.


Cheers,
Amuthan

On Wed, Jan 28, 2015 at 7:48 AM, Luthaf lut...@luthaf.fr wrote:


Hi Amuthan,

The current code uses the direct sum because this was the easiest
algorithm to implement, and your understanding is totally correct.

Adding other algorithms is a piece of cake: it is as easy as
subtyping `BaseForceComputer` (I could not come up with a better
name), and providing a `call` method.
So I am planning to add at least (serial) Verlet neighbor lists
for the forces computations, and maybe some parallel code.
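As a sketch of that extension point: everything below except the `BaseForceComputer` name is invented for illustration, and modern Julia replaces the old `call` overload with callable-struct syntax.

```julia
using LinearAlgebra

abstract type BaseForceComputer end

# A hypothetical direct-sum force computer for a harmonic pair potential.
struct HarmonicForces <: BaseForceComputer
    k::Float64    # spring constant
    r0::Float64   # equilibrium distance
end

# Making instances callable plays the role of the `call` method:
# positions is dim x N, one column per particle.
function (fc::HarmonicForces)(positions::Matrix{Float64})
    dim, n = size(positions)
    forces = zeros(dim, n)
    for i in 1:n-1, j in i+1:n
        rij = positions[:, j] .- positions[:, i]
        r = norm(rij)
        fij = -fc.k * (r - fc.r0) .* rij ./ r   # force on particle j
        forces[:, j] .+= fij
        forces[:, i] .-= fij                    # Newton's third law
    end
    return forces
end
```

Simulation code can then hold any `BaseForceComputer` and invoke it uniformly, which is exactly the runtime-swappable design described above.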

But I don't know a lot about parallel methods in MD, and I still
have a lot to learn about it. It would definitely be very nice
to have it !
I think that only a few changes are needed to use julia parallel
framework in the package. All the data are stored in objects of
type Array3D, which could be modified to use DArray instead.
I am open to change the internal data structure if this can
improve the performances.

Is your neighbor list code available somewhere ? I am interested
in reading it.

Cheers,
Guillaume

Amuthan A. Ramabathiran wrote:

Hi Guillaume:

This looks good -- thanks for the package! I have a basic
question regarding the force computation: I had a quick look at
the code and it appears to me that the current code uses an
O(N^2) algorithm (direct sum) for this. Is my understanding
correct? Software like LAMMPS uses very efficient algorithms for
short and long range potentials and this might possibly account
for the fact that the benchmark is 80x slower.

I had developed a serial MD code using neighbor lists last year
and I'm currently in the process of parallelizing it to study the
suitability of Julia for developing (reasonably) parallel
research codes. Is parallelization something you're looking into?
I would be very interested in hearing about this. Thanks!

Cheers,
Amuthan

On Tuesday, January 27, 2015 at 2:46:59 PM UTC-8, Luthaf wrote:

Hi julians !

I am very happy to announce the release of Jumos, a Julia
package for molecular simulations.

You can find the code here:
https://github.com/Luthaf/Jumos.jl, and the documentation is
hosted by readthedocs : http://jumos.readthedocs.org/en/latest/.

This package aims at being as extensible as possible. Every
algorithm is a type that can be subtyped and changed at
runtime, during the simulation.
This makes it easy to write new algorithms, experiment with
them, and combine algorithms in all the fancy ways you can
imagine.

Presently, only molecular dynamics is implemented, but I am
planning to add energy minimisation, and trajectory replay
from files (maybe Monte Carlo simulations if I find some
time). A handful of tools are already implemented:
- Velocity-Verlet and Verlet integrators;
- Lennard-Jones and Harmonic potentials;
- User-defined potentials;
- RDF and density profile analysis;
- Berendsen thermostat;
- Velocity-rescale thermostat;
- Temperature and energy computations.

And a lot more are planned:
https://github.com/Luthaf/Jumos.jl/issues.
If you want to help, contributions are very welcome ! Just
pick up an issue or create a new one as a feature request.
The first benchmark (against LAMMPS) shows it running 80 times
slower than LAMMPS. This is already great for a dynamic
language, but there is still room for improvement!

Any feedback on the implementation is also very welcome.

Regards
Guillaume

PS: What does the ANN in the title mean? It seems to be
 

[julia-users] LAPACKException(1) during SVD

2015-01-30 Thread Steven Sagaert
when doing an SVD of a large matrix I get
ERROR: LAPACKException(1)
 in gesdd! at linalg/lapack.jl:1046
 in svdfact! at linalg/factorization.jl:660
 in svdfact at linalg/factorization.jl:664

It's definitely something related to the data because it works on different 
matrices & the code has been working for months. Any idea what that error 
is about?

I'm working on Ubuntu 14.04, julia version 0.3.5


[julia-users] Is it possible to file jld only read range ?

2015-01-30 Thread paul analyst


Is it possible to read only a range from a jld file, without reading the 
entire file into memory?

julia> using HDF5, JLD

julia> load("C1o2.jld", "C1", (1,1))

julia> load("C1o2.jld", "C1")[1:1]

Both calls above read the whole file.

Paul



[julia-users] Re: Is it possible to file jld only read range ?

2015-01-30 Thread paul analyst
But why does this not work: dset[:,1] ?
julia> dset[:,1]
ERROR: `size` has no method matching size(::JldDataset, ::Int64)

julia> dset[1:k,1]
30070x1 Array{Float64,2}:




[julia-users] Re: Is it possible to file jld only read range ?

2015-01-30 Thread paul analyst
I have it, it works!

dset2 = jldopen("C1o2.jld")["C1"]
dset2[1:3,1]
Paul
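For reference, a self-contained version of the trick (dataset name and sizes here are made up to mimic the thread; it assumes the JLD package layout used above):

```julia
using HDF5, JLD

# Write a dataset once...
jldopen("C1o2.jld", "w") do file
    write(file, "C1", rand(100, 5))
end

# ...then index the JldDataset to read only the requested range;
# the rest of the file stays on disk.
slab = jldopen("C1o2.jld", "r") do file
    file["C1"][1:3, 1]
end
```

This is why indexing `jldopen(...)["C1"]` works where `load(...)[1:1]` does not: `load` materialises the whole variable first, while the dataset handle defers reading until it is indexed.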


On Friday, January 30, 2015 at 12:06:29 UTC+1, paul analyst 
wrote:

 Is it possible to read only a range from a jld file, without reading the 
 entire file into memory?

 julia> using HDF5, JLD

 julia> load("C1o2.jld", "C1", (1,1))

 julia> load("C1o2.jld", "C1")[1:1]

 Both calls above read the whole file.

 Paul