Re: [julia-users] Re: Nullable use cases / expected behavior?

2015-01-06 Thread Milan Bouchet-Valat
On Monday, January 5, 2015 at 19:16 -0800, ele...@gmail.com wrote:
 My reasoning for Nullable{T} is that it is type stable.  Taking your
 example, None and Int would be different type objects, introducing a
 type instability and potential performance penalty.  But Nullable{T}
 is always type Nullable{T} and get(Nullable{T}) is always type T.
  Allowing Nullable{T} to decay into T would re-introduce the type
 instability.
Right. But that doesn't mean `Nullable(3) == 3` shouldn't return `true`.
This operation could be allowed, provided that `Nullable{Int}() == 3`
raised a `NullException` or returned `Nullable{Bool}()`.

Regarding the original question:
 On Tuesday, January 6, 2015 12:03:24 PM UTC+10, Seth wrote:
 I'm trying to figure out how (and under what circumstances)
 one would use Nullable. That is, it seems that it might be
 valuable when you don't know whether the value/object exists
 (sort of like Python's None, I guess), but then something like
 Nullable(3) == 3 returns false, and that sort of messes up
 how I'm thinking about it.
 
 
 The code I'd imagine would be useful would be something like
 
 
 function foo(x::Int, y=Nullable{Int}())  # that is, y defaults
 to python's None but is restricted to Int
 if !isnull(y)
 return x+y  # x + get(y) works, but why must we invoke
 another method to get the value?
 else
 return 2x
 end
 end
 
 
 I'm left wondering why it wasn't reasonable to allow y to
 return get(y) if not null, else raise a NullException,
The question is how you define return. In the strict sense, if you
write `return y`, then `y` must be returned, not `get(y)`, or the Julia
language would really be a mess.

That said, with return type declarations, if `foo()::Int` allowed
stating that `foo()` always returns an `Int`, then `y` could
automatically be converted to an `Int`, raising an exception if it's
`null`. But that merely allows you to type `return y` instead of
`return get(y)`, so not a big deal.

Finally, there's the question of whether `x + y` should be allowed to
mean `x + get(y)`. That's debatable, but I think a more useful behavior
would be to make it equivalent to
`isnull(y) ? Nullable{Int}() : Nullable(x + get(y))`. That would allow
handling the possibility of missingness only when you actually want to
get an `Int` from a `Nullable{Int}`.

This has been discussed more generally for any function call at
https://github.com/JuliaLang/julia/pull/9446

 and the conclusion I'm coming to is that I don't understand
 the concept of Nullable yet. Any pointers?
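The lifted behavior described above can be sketched with a minimal stand-in type (hypothetical names `Maybe` and `liftadd`, written in current Julia syntax rather than the 0.3-era `Nullable` API):

```julia
# Minimal stand-in for Nullable, just to illustrate lifted semantics.
struct Maybe{T}
    hasvalue::Bool
    value::T
    Maybe{T}() where {T} = new(false)        # the "null" state
    Maybe{T}(v::T) where {T} = new(true, v)  # a present value
end

isnull(m::Maybe) = !m.hasvalue

# Lifted addition: a null operand yields a null result instead of
# throwing, so missingness is only handled at final extraction.
liftadd(x::Int, y::Maybe{Int}) = isnull(y) ? Maybe{Int}() : Maybe{Int}(x + y.value)
```

With this definition, `liftadd(1, Maybe{Int}(2))` holds `3`, while `liftadd(1, Maybe{Int}())` is itself null.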
 

Regards


Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Milan Bouchet-Valat
On Monday, January 5, 2015 at 20:48 -0800, Petr Krysl wrote:
 Hi guys,
 
 How does one figure out where allocation  of memory occurs?   When I
 use the @time  macro it tells me there's a lot of memory allocation
 and deallocation going on.  Just looking at the code I'm at a loss: I
 can't see the reasons for it there.
 
 So, what are the tips and tricks for the curious?  How do I debug the
 memory allocation issue?  I looked at the lint, the type check, and
 the code_typed().  Perhaps I don't know where to look, but  these
 didn't seem to be of much help.
See this:
http://docs.julialang.org/en/latest/manual/profile/#memory-allocation-analysis

(Would probably be good to backport to the 0.3 manual...)
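A rough sketch of the workflow from that manual page (flag name per the linked docs; the exact output-file naming may vary by version):

```shell
# Run the script with per-line allocation tracking of user code
julia --track-allocation=user myscript.jl

# Afterwards each source file gains a .mem companion in which every
# line is prefixed by the number of bytes allocated on that line
less myscript.jl.mem
```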


Regards



Re: [julia-users] sum of 1-element array of composite type returns reference

2015-01-06 Thread Milan Bouchet-Valat
On Monday, January 5, 2015 at 18:53 -0800, Greg Plowman wrote:
 
 The only reason I can think of is that a copy may be costly for certain
 types, and it's not needed in most cases since the summation will create
 a new value in the general case. But as you noted this is not true when
 the array only contains one element. So it looks like the most efficient
 fix would be to copy only when n == 1 in _mapreduce().
 
 
 
 I must admit I don't really understand the code, however it doesn't
 look like evaluation would be affected for n=2. 
 The extra cost would only be for 1-element arrays:
 
 
 Apparently, for 1-element arrays, zero(::MyType) needs to be defined
 For 0-element arrays, both zero(::MyType) and zero(::Type{MyType})
 need to be defined
 (strangely, for 0-element arrays, mr_empty() calls r_promote(::AddFun,
 zero(T)) which effectively calls zero(T) + zero(zero(T)), so both
 forms of zero() need to be defined)
Yes, but that's not an issue as the definitions are equivalent:
zero(x::Number) = oftype(x, 0)
zero{T<:Number}(::Type{T}) = convert(T, 0)

help?> oftype
Base.oftype(x, y)

   Convert y to the type of x (convert(typeof(x), y)).


 In any case, at the moment I guess I have 2 workarounds:
 
 
 I could define MyType as a subtype of Number and provide zero()
 functions. 
 However, I'm not sure what the side effects of subtyping are, and
 whether this is advisable?
I don't think it would be a problem, it may well make a lot of sense if
your type is similar to a number (which is apparently the case since you
can sum it).

 
 type MyType <: Number
 x::Int
 end
 
 Base.zero(::Type{MyType}) = MyType(0) # required for sum of a 0-element array
 Base.zero(::MyType) = MyType(0)   # required for sum of 0- and 1-element arrays
 
 import Base: +
 +(a::MyType, b::MyType) = MyType(a.x + b.x)
 
 
 
 
 Alternatively, I could define my own sum() functions, but then if I
 want the general functionality of all variants of sum(), this seems
 non-trivial.
Indeed. Another solution is to make a pull request with a possible fix,
it could be included quite soon in a 0.3.x minor release so that you can
use it.


Regards



[julia-users] status of reload / Autoreload

2015-01-06 Thread Andreas Lobinger
Hello colleagues,

in one of the threads here, John Myles White commented




 I think there are two schools of thought on that question: 

 * Matlab users like the automatic reloading they’re used to from Matlab 
 and find Autoreload really helpful to their productivity. 
 * Users of languages where automatic reloading doesn’t happen are more 
 likely to end their Julia session and start a new one. 

 FWIW, the second option is the only way to absolutely guarantee that your 
 code will be rendered correctly, because Julia does not currently recompile 
 functions that other functions depend upon. 


I'm somewhat in the Matlab school, but I also like Python's explicit 
reload(module). Restarting the REPL every time is not really an option 
when implementing in modules.

Can I expect a reload(module) someday?

Wishing a happy day,
 Andreas





[julia-users] string literals split to multilines?

2015-01-06 Thread Andreas Lobinger
Hello colleagues,

is there a counterpart for the string literal split to multiple lines like 
in python?

d = '09ab\
eff1\
a2a4'

Wishing a happy day,
   Andreas



Re: [julia-users] string literals split to multilines?

2015-01-06 Thread René Donner
hi,

this should work:

d = """aaa
bbb
ccc"""

Rene


On 06.01.2015 at 11:15, Andreas Lobinger lobing...@gmail.com wrote:

 Hello colleagues,
 
 is there a counterpart for the string literal split to multiple lines like in 
 python?
 
 d = '09ab\
 eff1\
 a2a4'
 
 Wishing a happy day,
Andreas
 



[julia-users] [ANN] MatrixDepot.jl v0.1 (Better Documentation)

2015-01-06 Thread Weijian Zhang
Hello,

I would like to announce the release of MatrixDepot.jl v0.1: a test matrix 
collection for Julia. 

https://github.com/weijianzhang/MatrixDepot.jl.

The major change is the documentation. It is now hosted on Read the Doc.   

http://matrixdepotjl.readthedocs.org/en/latest/ 

You can find information on all the matrices in the collection at: 
http://matrixdepotjl.readthedocs.org/en/latest/matrices.html#matrices 

Please let me know if you have any suggestions. 

Best wishes and happy new year,

Weijian




Re: [julia-users] Julia REPL segfaults non-deterministically...

2015-01-06 Thread Isaiah Norton
Looks like:
https://github.com/JuliaLang/julia/issues/8550

If you can find a semi-reproducible test case and/or gdb backtrace, that
would be great.

On Mon, Jan 5, 2015 at 7:59 PM, Tomas Lycken tomas.lyc...@gmail.com wrote:

 I just got the following in the REPL:

 julia> module Foo

 type Bar{T} end

 end

 signal (11): Segmentation fault
 unknown function (ip: -716631494)
 jl_get_binding at /opt/julia-0.4/usr/bin/../lib/libjulia.so (unknown line)
 jl_get_global at /opt/julia-0.4/usr/bin/../lib/libjulia.so (unknown line)
 jl_module_run_initializer at /opt/julia-0.4/usr/bin/../lib/libjulia.so 
 (unknown line)
 unknown function (ip: -716770303)
 unknown function (ip: -716771435)
 jl_toplevel_eval_in at /opt/julia-0.4/usr/bin/../lib/libjulia.so (unknown 
 line)
 eval_user_input at REPL.jl:54
 jlcall_eval_user_input_42363 at  (unknown line)
 jl_apply_generic at /opt/julia-0.4/usr/bin/../lib/libjulia.so (unknown line)
 anonymous at task.jl:96
 unknown function (ip: -716815279)
 unknown function (ip: 0)

 I’ve seen segfaults a couple of times since I last built Julia from
 source, but I was always too concentrated on what I was doing to try to pin
 it down - but this time I wasn’t =) I hadn’t done anything in that session
 except a few Pkg commands, but I couldn’t reproduce the problem either in a
 fresh REPL or in one where I ran exactly the same Pkg commands in the same
 sequence before doing that. Is this a known problem, or should I report it
 on github?

 julia> versioninfo()
 Julia Version 0.4.0-dev+2416
 Commit db3fa60 (2015-01-02 18:23 UTC)
 Platform Info:
   System: Linux (x86_64-linux-gnu)
   CPU: Intel(R) Core(TM) i7-4600U CPU @ 2.10GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
   LAPACK: libopenblas
   LIBM: libopenlibm
   LLVM: libLLVM-3.3

 // T
 ​



Re: [julia-users] Re: Nullable use cases / expected behavior?

2015-01-06 Thread elextr


On Tuesday, January 6, 2015 10:43:16 PM UTC+10, Milan Bouchet-Valat wrote:

 On Tuesday, January 6, 2015 at 04:38 -0800, ele...@gmail.com wrote:
  On Tuesday, January 6, 2015 7:38:00 PM UTC+10, Milan Bouchet-Valat wrote:
  On Monday, January 5, 2015 at 19:16 -0800, ele...@gmail.com wrote:
   My reasoning for Nullable{T} is that it is type stable.  Taking your
   example, None and Int would be different type objects, introducing a
   type instability and potential performance penalty.  But Nullable{T}
   is always type Nullable{T} and get(Nullable{T}) is always type T.
   Allowing Nullable{T} to decay into T would re-introduce the type
   instability.
  Right. But that doesn't mean `Nullable(3) == 3` shouldn't return `true`.
  This operation could be allowed, provided that `Nullable{Int}() == 3`
  raised a `NullException` or returned `Nullable{Bool}()`.
 
  Yeah, (==){T}(a::Nullable{T}, b::T) should be able to be defined as
  !isnull(a) && get(a) == b
 I'd consider this definition (which is different from the ones I
 suggested above) as unsafe: if `a` is `null`, then you silently get
 `false`. Better provide additional safety by either returning a
 `Nullable`, or raising an exception.


If the Nullable does not have a value then it doesn't equal any value of 
the type, so the correct answer is false.  If it returns a Bool or some sort 
of Nullable then again it's type-unstable, and it also can't be used directly 
in an if.  A user who cares about the null case can always check isnull() 
directly on the original object.

And throwing exceptions is expensive and prevents the test from being used in 
high-performance code.  In fact, I would consider it rather nasty if 
something like an equality test could throw.  It would mean the == function 
couldn't be used in any code that is a callback from C if there were any 
possibility of one of its parameters being a Nullable.

That's not to say that a user can't define their *own* version with either 
of these characteristics if it suits their use case, but the general case 
should prefer the type-stable, high-performance definition.

Cheers
Lex
 

 

Re: [julia-users] Re: Nullable use cases / expected behavior?

2015-01-06 Thread Milan Bouchet-Valat
On Tuesday, January 6, 2015 at 04:38 -0800, ele...@gmail.com wrote:
 
 
 On Tuesday, January 6, 2015 7:38:00 PM UTC+10, Milan Bouchet-Valat
 wrote:
 On Monday, January 5, 2015 at 19:16 -0800, ele...@gmail.com wrote: 
  My reasoning for Nullable{T} is that it is type stable.  Taking 
 your 
  example, None and Int would be different type objects, introducing 
 a 
  type instability and potential performance penalty.  But 
 Nullable{T} 
  is always type Nullable{T} and get(Nullable{T}) is always type T. 
   Allowing Nullable{T} to decay into T would re-introduce the type 
  instability. 
 Right. But that doesn't mean `Nullable(3) == 3` shouldn't return 
 `true`. 
 This operation could be allowed, provided that `Nullable{Int}() == 3` 
 raised a `NullException` or returned `Nullable{Bool}()`. 
 
 
 Yeah, (==){T}(a::Nullable{T}, b::T) should be able to be defined as
 !isnull(a) && get(a) == b
I'd consider this definition (which is different from the ones I
suggested above) as unsafe: if `a` is `null`, then you silently get
`false`. Better provide additional safety by either returning a
`Nullable`, or raising an exception.

 


Re: [julia-users] Re: [ANN] Blink.jl – Web-based GUIs for Julia

2015-01-06 Thread Mike Innes
Sure, if you have a window object you can call `tools(w)` from Julia to
open the dev tools. If you end up using Blink.jl I'd love to hear about it!

On 6 January 2015 at 11:21, Eric Forgy eric.fo...@gmail.com wrote:

 Hi Mike,

 This is awesome!

 Please forgive a question before doing my homework, but is there a way to
 access a javascript console in the window?

 I think I can use this for something I'm working on. I almost have a basic
 javascript/d3 version of GUIDE working together with my own homegrown data
 visualizations for building GUIs in Chrome, so this is a very welcome gift
 :)

 I was looking into node-webkit, but this looks maybe better :)

 Happy New Year!

 Best regards,
 Eric

 PS: Here is a screenshot. I usually run things in the console.


 https://lh3.googleusercontent.com/-68UfNA8pby0/VKvEd_n8hFI/AIA/KnReGRopd70/s1600/Screen%2BShot%2B2015-01-06%2Bat%2B7.17.29%2Bpm.png




 On Monday, January 5, 2015 10:30:33 PM UTC+8, Mike Innes wrote:

 Hello Julians,

 I have a shiny late Christmas present for you, complete with Julia-themed
 wrapping.

 Blink.jl https://github.com/one-more-minute/Blink.jl wraps Chrome to
 enable web-based GUIs. It's very primitive at the moment, but as a proof of
 concept it includes BlinkDisplay, which will display graphics like Gadfly
 plots in a convenient popup window (matplotlib style).

 Shashi has some great ideas for ways to control HTML from Julia, and
 hopefully in future we'll have more nice things like matrix/data frame
 explorers and other graphical tools.

 (Incidentally, I'd also appreciate any feedback on the display system
 I've made to enable this, since I'm hoping to propose it to replace Base's
 current one in future)

 Anyway, let me know if this is useful to you and/or there are any
 problems.

 – Mike




Re: [julia-users] Re: Nullable use cases / expected behavior?

2015-01-06 Thread elextr


On Tuesday, January 6, 2015 7:38:00 PM UTC+10, Milan Bouchet-Valat wrote:

 On Monday, January 5, 2015 at 19:16 -0800, ele...@gmail.com wrote:
  My reasoning for Nullable{T} is that it is type stable.  Taking your 
  example, None and Int would be different type objects, introducing a 
  type instability and potential performance penalty.  But Nullable{T} 
  is always type Nullable{T} and get(Nullable{T}) is always type T. 
   Allowing Nullable{T} to decay into T would re-introduce the type 
  instability. 
 Right. But that doesn't mean `Nullable(3) == 3` shouldn't return `true`. 
 This operation could be allowed, provided that `Nullable{Int}() == 3` 
 raised a `NullException` or returned `Nullable{Bool}()`. 


Yeah, (==){T}(a::Nullable{T}, b::T) should be able to be defined as 
!isnull(a) && get(a) == b

Cheers
Lex
 




Re: [julia-users] Setting CPU affinity? Taskset does not work

2015-01-06 Thread Isaiah Norton
I think we do want to respect taskset when it is explicit. Please file a
bug -- or PR if you can test this (second answer):

http://stackoverflow.com/questions/4592575/is-it-possible-to-prevent-children-inheriting-the-cpu-core-affinity-of-the-paren

See some discussion here:
https://github.com/JuliaLang/julia/issues/1802#issuecomment-17175787
https://github.com/JuliaLang/julia/pull/3097

On Sun, Jan 4, 2015 at 12:52 PM, Aaron Okano aaronok...@gmail.com wrote:

 I am running some code on a system with two processor sockets, and would
 prefer for the processes to be restricted to a single socket for the time
 being. I have tried to use taskset, but Julia appears to manually set its
 own affinity when it starts.

 $ taskset -c 0,1,2,3,4,5 julia -q -p 5 
 [1] 13173
 $ ps -eF | grep -e 'julia\|PSR'
 UIDPID  PPID  CSZ   RSS PSR STIME TTY  TIME CMD
 aokano   13173  4626 54 192807 108788 0 09:48 pts/200:00:02 julia -q
 -p 5
 aokano   13185 13173 27 114376 71492  6 09:49 ?00:00:01
 /home/aokano/julia/usr/bin/./julia --worker --bind-to 169.237.10.172
 aokano   13186 13173 31 131261 73616  9 09:49 ?00:00:01
 /home/aokano/julia/usr/bin/./julia --worker --bind-to 169.237.10.172
 aokano   13187 13173 30 123234 74216  8 09:49 ?00:00:01
 /home/aokano/julia/usr/bin/./julia --worker --bind-to 169.237.10.172
 aokano   13188 13173 30 123077 73700  2 09:49 ?00:00:01
 /home/aokano/julia/usr/bin/./julia --worker --bind-to 169.237.10.172
 aokano   13189 13173 29 126412 87232  5 09:49 ?00:00:01
 /home/aokano/julia/usr/bin/./julia --worker --bind-to 169.237.10.172
 aokano   13246  4626  0  1908  1008   0 09:49 pts/200:00:00 grep
 --color=auto -e julia\|PSR


 Note the processor number is shown in the PSR column.

 The offending code appears to be this segment of init.c

 int ncores = jl_cpu_cores();
 if (ncores > 1) {
 cpu_set_t cpumask;
 CPU_ZERO(&cpumask);
 for(int i=0; i < ncores; i++) {
 CPU_SET(i, &cpumask);
 }
 sched_setaffinity(0, sizeof(cpu_set_t), &cpumask);
 }


 where the function jl_cpu_cores() calls sysconf(_SC_NPROCESSORS_ONLN) to
 determine the number of available cores. According to my tests, this
 function ignores taskset,

 $ cat test.c
 #include <stdio.h>
 #include <unistd.h>

 int main() {
   printf("%ld\n", sysconf( _SC_NPROCESSORS_ONLN ) );
   return 0;
 }


 $ ./a.out
 12
 $ taskset -c 0,1,2,3,4,5 ./a.out
 12

 Long explanation aside, is there any way to set CPU affinity for Julia
 processes?



Re: [julia-users] Re: [ANN] Blink.jl – Web-based GUIs for Julia

2015-01-06 Thread Jeff Waller
Hmm, Atom, eh? I read that you're communicating via TCP, but I wonder if 
there is some sort of abstraction possible, such that it need not be process 
to process. I haven't thought it through, but I feel something is there.


[julia-users] Re: Julia vs C++-11 for random walks

2015-01-06 Thread Frank Kampas
Microsoft C code is not very fast these days.  You might want to compare 
with MinGW.

On Monday, January 5, 2015 9:56:07 AM UTC-5, lapeyre@gmail.com wrote:

 Hi, here is a comparison of Julia and C++ for simulating a random walk 
 https://github.com/jlapeyre/ranwalk-Julia-vs-Cxx.

 It is the first Julia program I wrote. I just pushed it to github.

 --John



Re: [julia-users] Re: [ANN] Blink.jl – Web-based GUIs for Julia

2015-01-06 Thread Mike Innes
Well, the abstraction is already there – you can call JS pretty directly
(https://github.com/one-more-minute/Blink.jl/blob/3baa5c6b7ea035a2bb3fa3faad95d3f74aba2f26/src/window.jl#L103-L104,
https://github.com/one-more-minute/Blink.jl/blob/3baa5c6b7ea035a2bb3fa3faad95d3f74aba2f26/src/window.jl#L68-L72)
without worrying about how the connection is made. It would be great to have more
direct (shared memory?) communication, but I don't know if that's possible.

On 6 January 2015 at 13:28, Jeff Waller truth...@gmail.com wrote:

 Hmm Atom eh? I read that you're communicating via TCP, but I wonder If
 there is some sort of abstraction possible, and it need not be process to
 process.   Have not thought it through, but feel something is there.



Re: [julia-users] Re: Nullable use cases / expected behavior?

2015-01-06 Thread Seth


On Tuesday, January 6, 2015 4:43:16 AM UTC-8, Milan Bouchet-Valat wrote:


  
  Yeah, (==){T}(a::Nullable{T}, b::T) should be able to be defined as 
  !isnull(a)  get(a) == b 
 I'd consider this definition (which is different from the ones I 
 suggested above) as unsafe: if `a` is `null`, then you silently get 
 `false`. Better provide additional safety by either returning a 
 `Nullable`, or raising an exception. 


But - if null is just another legitimate value, why wouldn't it make 
sense to define null != a for all (a != null)? Why must we treat it as 
some sort of special abstraction? We don't do this with, say, imaginary 
numbers. (Identity for null is a separate issue).


[julia-users] It would be great to see some Julia talks at OSCON 2015

2015-01-06 Thread Phil Tomson
Hello Julia users:  

I'm on the program committee for OSCON (the O'Reilly Open Source 
Convention) and we're always looking for interesting programming talks. I'm 
pretty sure that there hasn't been any kind of Julia talk at OSCON in the 
past. Given the rising visibility of the language it would be great if we 
could get some talk proposals for OSCON 2015.  This year there is a special 
emphasis on languages for math & science (see the Call for Proposals here: 
http://www.oscon.com/open-source-2015/public/cfp/360, specifically the 
Solve track) and Julia really should be featured.

Are you using Julia to solve some interesting problems? Please submit a 
talk proposal.

OSCON is held in beautiful Portland Oregon July 20-24 this year.  Hope to 
see your Julia talk there.

Phil


[julia-users] Re: Interface design advice - optional plotting?

2015-01-06 Thread Steven G. Johnson
In Julia it is usually considered bad style to have the output type of a 
function depend on the input values --- it is a good idea to get into the 
habit of writing type-stable functions whose output types depend only on 
their input types, and not on the values of their inputs, because this is 
essential for good performance if the function is ever used in an inner 
loop of a computation.

In your case, I would normally just write two functions, foo() and 
plotfoo().   The former computes the data and returns it, and plotfoo() 
calls foo() to first compute and then plot the data.   (I don't see the 
advantage of @plot foo(), your option 3, vs. plotfoo().  A macro is a lot 
more messy to write and gives you no real benefit here.)
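A minimal sketch of the two-function pattern described above, using the hypothetical names from the post (the actual plotting call is elided, since any plotting package would do):

```julia
# foo: type-stable computation -- its return type depends only on the
# argument types, never on the argument values.
foo(n::Int) = [sin(2pi * i / n) for i in 1:n]

# plotfoo: computes via foo, then plots; returns the data for reuse.
function plotfoo(n::Int)
    data = foo(n)
    # plot(data)  # plotting-package call elided in this sketch
    return data
end
```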


Re: [julia-users] Running mean/median in Julia

2015-01-06 Thread Tomas Mikoviny
Hi Kevin, 

generally I'm trying to do baseline correction on mass spectra with ~30 
bins. I've tried several algorithms to evaluate the baseline, but the ones 
working best implement a running median and mean. I've just got the mean 
sorted out via the cumsum trick, coinciding with Tim's suggestion (I found 
some MATLAB discussion on that). I'll also check Tamas' suggestion.

I'm stuck on a running median with reasonable performance, since the 
computer has to crunch a running median of an array of 200 x 30 within a 
couple of seconds (max) to manage online analysis. 
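The cumsum trick mentioned above can be sketched as follows (hypothetical helper name `runmean`; one cumulative-sum pass, so O(n) regardless of window width w):

```julia
# Running mean over a sliding window of width w: the sum of
# x[i:i+w-1] is cs[i+w-1] - cs[i-1], with cs = cumsum(x).
function runmean(x::AbstractVector, w::Int)
    cs = cumsum(x)
    n = length(x)
    out = Vector{Float64}(undef, n - w + 1)
    for i in 1:(n - w + 1)
        hi = cs[i + w - 1]
        lo = i == 1 ? zero(eltype(cs)) : cs[i - 1]
        out[i] = (hi - lo) / w
    end
    return out
end
```

For example, `runmean([1, 2, 3, 4, 5], 3)` gives `[2.0, 3.0, 4.0]`.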

On Tuesday, January 6, 2015 6:18:02 PM UTC+1, Kevin Squire wrote:

 Hi Tomas,
 I'm not aware of any (though they might exist). It might help if you gave 
 a little more context -- what kind of data are you working with?

 Cheers,
Kevin

 On Tuesday, January 6, 2015, Tomas Mikoviny tomas.m...@gmail.com wrote:

 Hi, 
 I was just wondering if anyone knows if there is a package that 
 implements *fast* running median/mean algorithms?

 Thanks a lot...
  



Re: [julia-users] Squeezing more performance out of a mandelbrot set computation

2015-01-06 Thread Craig Bosma
Hi Kevin, thanks for the suggestions. I did try wrapping the main code in a 
function, but it made no appreciable difference. I've read through the 
Performance Tips a couple of times now, but I'll review them again.

On Tuesday, January 6, 2015 11:15:36 AM UTC-6, Kevin Squire wrote:

 Hi Craig,

 If you haven't yet, you should read through the Performance Tips section 
 of the manual. 

 One easy thing you should try is to wrap the main code in a function. 
 While this will hopefully change before v1.0, Julia has a hard time 
 optimizing code in global scope. 

 Cheers,
Kevin

 On Tuesday, January 6, 2015, Craig Bosma craig...@gmail.com wrote:

 Hi,

 I've been playing with Julia lately by creating a little toy mandelbrot 
 set computation, and trying to optimize it for comparison with a C++ 
 version. I've also been trying to parallelize it to learn more about 
 Julia's parallel computing model and to compare with parallel 
 implementations in C++. My latest implementation can be found in this 
 gist. https://gist.github.com/bosmacs/eb329dc8360dbf1282b1

 On my machine, this runs in about 50 seconds, with (23622108264 bytes 
 allocated, 20.50% gc time) when executed serially. Those allocations seem 
 excessive to me, but I'm not sure what to do about it. With 8 workers on 4 
 cores, I can cut that down to about 20 seconds. On the same machine, my C++ 
 serial/parallel implementations run in ~8s/2s. I expected the difference to 
 be a less than a full order of magnitude, so I guess I'm wondering if there 
 are any obvious performance problems with my implementation. If anyone has 
 any suggestions of how to improve my code, I'd sure like to hear about it. 

 Thanks,
 Craig
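As an illustration of the wrap-it-in-a-function advice from the reply above, the escape-time kernel at the heart of such a computation can be written as a small, type-stable, allocation-free function (hypothetical name; not the code from the gist):

```julia
# Escape-time iteration for one point c of the complex plane.
function escape_time(c::Complex{Float64}, maxiter::Int)
    z = zero(c)
    for i in 1:maxiter
        z = z * z + c
        abs2(z) > 4.0 && return i   # |z| > 2 means the orbit escapes
    end
    return maxiter
end
```

Because the types of `z`, `c`, and `i` are all concrete and fixed, the compiler can specialize this loop fully, which is exactly what global-scope code prevents.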



Re: [julia-users] Running mean/median in Julia

2015-01-06 Thread Tomas Mikoviny
Tamas, thanks for the quantile reference - I'm just immersed in that paper.

As I've mentioned in reply to Kevin, I have rather bulky mass spec data and 
performance is rather crucial. I've read about the approach with limited 
distinct values but unfortunately it is not very applicable in my case. 
Let's see what I'll learn from the Greenwald paper.

Tomas


On Tuesday, January 6, 2015 6:35:06 PM UTC+1, Tamas Papp wrote:

 Hi Tomas, 

 I don't know any packages, but a running mean is trivial to 
 implement. See eg 

 http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Online_algorithm 
 — it doesn't get faster than this, and it is reasonably accurate. 
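 For reference, the online mean update from that page can be sketched in a few lines (`running_mean` is an illustrative name, not a package function):

```julia
# One-pass running mean: after n observations the mean is updated
# incrementally as m += (x - m)/n, avoiding a growing sum.
function running_mean(xs)
    m = 0.0
    out = similar(xs, Float64)
    for (n, x) in enumerate(xs)
        m += (x - m) / n     # incremental mean after n observations
        out[n] = m
    end
    return out
end

running_mean([1.0, 2.0, 3.0, 4.0])   # -> [1.0, 1.5, 2.0, 2.5]
```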

 Medians (and quantiles in general) are trickier, because you have to 
 keep track of the whole distribution or sacrifice accuracy. See 

 @inproceedings{greenwald2001space, 
   title={Space-efficient online computation of quantile summaries}, 
   author={Greenwald, Michael and Khanna, Sanjeev}, 
   booktitle={ACM SIGMOD Record}, 
   volume={30}, 
   number={2}, 
   pages={58--66}, 
   year={2001}, 
   organization={ACM} 
 } 

 You might want to tell us more about your problem and then you could get 
 more specific advice. Eg if number of observations >> number of distinct 
 values, you could simply count them in a Dict. 

 Best, 

 Tamas 

 On Tue, Jan 06 2015, Tomas Mikoviny tomas.m...@gmail.com 
 wrote: 

  Hi, 
  I was just wondering if anyone knows if there is a package that 
 implements 
  *fast* running median/mean algorithms? 
  
  Thanks a lot... 




Re: [julia-users] Running mean/median in Julia

2015-01-06 Thread Tamas Papp
Hi Tomas,

A less sophisticated approach is to just bin the values using some
regular grid and count them, and then use the midpoint of the median bin
as an approximation.

Then

abs((median bin midpoint) - (true median)) <= (median bin width)/2

which may be good enough for your application.
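A hedged sketch of this binning idea (the names and the fixed grid are assumptions of mine, not code from the thread): count values into equal-width bins and report the midpoint of the bin holding the middle observation.

```julia
# Approximate median via fixed-width binning; error <= half a bin width.
# `binned_median` is an illustrative name. Assumes values lie in [lo, hi].
function binned_median(xs, lo, hi, nbins)
    width = (hi - lo) / nbins
    counts = zeros(Int, nbins)
    for x in xs
        i = clamp(floor(Int, (x - lo) / width) + 1, 1, nbins)
        counts[i] += 1
    end
    half = (length(xs) + 1) / 2
    cum = 0
    for i in 1:nbins
        cum += counts[i]
        cum >= half && return lo + (i - 0.5) * width   # bin midpoint
    end
end

binned_median([0.1, 0.4, 0.45, 0.9], 0.0, 1.0, 10)   # approx. 0.45
```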

Best,

Tamas

On Tue, Jan 06 2015, Tomas Mikoviny tomas.mikov...@gmail.com wrote:

 Tamas, thanks for the quantile reference - I'm just immersed in that paper.

 As I've mentioned in reply to Kevin, I have rather bulky mass spec data and 
 performance is rather crucial. I've read about the approach with limited 
 distinct values but unfortunately it is not very applicable in my case. 
 Let's see what I'll learn from the Greenwald paper.

 Tomas


 On Tuesday, January 6, 2015 6:35:06 PM UTC+1, Tamas Papp wrote:

 Hi Tomas, 

 I don't know any packages, but a running mean is trivial to 
 implement. See eg 

 http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Online_algorithm 
 — it doesn't get faster than this, and it is reasonably accurate. 

 Medians (and quantiles in general) are trickier, because you have to 
 keep track of the whole distribution or sacrifice accuracy. See 

 @inproceedings{greenwald2001space, 
   title={Space-efficient online computation of quantile summaries}, 
   author={Greenwald, Michael and Khanna, Sanjeev}, 
   booktitle={ACM SIGMOD Record}, 
   volume={30}, 
   number={2}, 
   pages={58--66}, 
   year={2001}, 
   organization={ACM} 
 } 

 You might want to tell us more about your problem and then you could get 
 more specific advice. Eg if number of observations >> number of distinct 
 values, you could simply count them in a Dict. 

 Best, 

 Tamas 

 On Tue, Jan 06 2015, Tomas Mikoviny tomas.m...@gmail.com 
 wrote: 

  Hi, 
  I was just wondering if anyone knows if there is a package that 
 implements 
  *fast* running median/mean algorithms? 
  
  Thanks a lot... 






Re: [julia-users] SubArray - indexing speed bottleneck

2015-01-06 Thread Tomas Mikoviny
Indeed you're right Tim...  I've implemented it and it works as expected... 

Thank you very much for the insight!

Tomas

On Tuesday, January 6, 2015 6:36:13 PM UTC+1, Tim Holy wrote:

 Put the loop inside a function, and call @time on the function call. For 
 me f2 
 below is 3x faster on julia 0.4. 
 --Tim 

 julia> a = rand(30); 

 julia> w = 200; 

 julia> function f1(a, w) 
            local b 
            for i = 1:length(a)-w 
                b = a[i:i+w] 
            end 
            b 
        end 
 f1 (generic function with 1 method) 

 julia> function f2(a, w) 
            local b 
            for i = 1:length(a)-w 
                b = sub(a, i:i+w) 
            end 
            b 
        end 
 f2 (generic function with 1 method) 

 julia> f1(a,w); 

 julia> @time f1(a,w); 
 elapsed time: 0.990406786 seconds (614004280 bytes allocated, 82.08% gc time) 

 julia> f2(a,w); 

 julia> @time f2(a,w); 
 elapsed time: 0.106962692 seconds (55163280 bytes allocated, 57.32% gc time) 

 julia> 35976080/55163280 
 0.652174417474813 


 On Tuesday, January 06, 2015 09:17:27 AM Tomas Mikoviny wrote: 
  Tim, thanks for suggestion.. To be honest I primarily work with julia 
 0.4 
  (0.4.0-dev+2523) and do everything there first. 
  Unfortunately there is no visible performance improvement (~ same 
  performance) compared to 0.3.4 for this one-dimensional problem. 
  
  Tomas 
  
  On Tuesday, January 6, 2015 6:01:27 PM UTC+1, Tim Holy wrote: 
   SubArrays work much better in julia 0.4; on the tasks you posted, you 
 are 
   likely to see substantially better performance. 
   
   --Tim 
   
   On Tuesday, January 06, 2015 08:15:03 AM Tomas Mikoviny wrote: 
Hi, 
I'm trying to optimise julia script for spectra baseline correction 
   
   using 
   
rolling ball algorithm 
(http://linkinghub.elsevier.com/retrieve/pii/0168583X95009086, 
http://cran.r-project.org/web/packages/baseline). 
Profiling the code showed that the most time consuming part is 
 actually 
subarray pickup. 
I was just wondering if there is any other possible speedup for this 
problem? 

I've started initially with standard sub-indexing: 


a = rand(30); 
w = 200; 



 @time for i in 1:length(a)-w 
     a[i:i+w] 
 end 

 elapsed time: 0.387236571 seconds (645148344 bytes allocated, 56.46% gc 
 time) 



Then I've tried directly with subarray function and it improved the 
   
   runtime 
   
significantly: 

 @time for i in 1:length(a)-w 
     sub(a,i:i+w) 
 end 

 elapsed time: 0.10720574 seconds (86321144 bytes allocated, 32.13% gc 
 time) 
   
 With approach to internally remove and add elements I've gained yet 
   
   some 
   
extra speed-up (eliminating gc): 

subset = a[1:1+w] 

 @time for i in 2:length(a)-w 
     splice!(subset,1) 
     insert!(subset,w+1,a[i+w]) 
 end 

 elapsed time: 0.067341484 seconds (33556344 bytes allocated) 


 However, I wonder if this is the end... 

And obligatory version info: 

Julia Version 0.3.4 
Commit 3392026* (2014-12-26 10:42 UTC) 

Platform Info: 
  System: Darwin (x86_64-apple-darwin13.4.0) 
  CPU: Intel(R) Core(TM) i5-4690 CPU @ 3.50GHz 
  WORD_SIZE: 64 
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell) 
  LAPACK: libopenblas 
  LIBM: libopenlibm 
  LLVM: libLLVM-3.3 



[julia-users] Re: string literals split to multilines?

2015-01-06 Thread Steven G. Johnson


On Tuesday, January 6, 2015 5:15:13 AM UTC-5, Andreas Lobinger wrote:

 Hello colleagues,

 is there a counterpart for the string literal split to multiple lines like 
 in python?

 d = '09ab\
 eff1\
 a2a4'


You can always just concatenate:

d = "09ab" *
  "eff1" *
  "a2a4" 


Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Petr Krysl
Splendid! Thanks a lot.

P

On Tuesday, January 6, 2015 8:57:59 AM UTC-8, Tim Holy wrote:

 I also recently added some docs on how to analyze the results; that PR is 
 not 
 yet merged, but you can look at 
 https://github.com/IainNZ/Coverage.jl/pull/36/files 

 --Tim 

 On Tuesday, January 06, 2015 10:49:54 AM Milan Bouchet-Valat wrote: 
  Le lundi 05 janvier 2015 à 20:48 -0800, Petr Krysl a écrit : 
   Hi guys, 
   
   How does one figure out where allocation  of memory occurs?   When I 
   use the @time  macro it tells me there's a lot of memory allocation 
   and deallocation going on.  Just looking at the code I'm at a loss: I 
   can't see the reasons for it there. 
   
   So, what are the tips and tricks for the curious?  How do I debug the 
   memory allocation issue?  I looked at the lint, the type check, and 
   the code_typed().  Perhaps I don't know where to look, but  these 
   didn't seem to be of much help. 
  
  See this: 
  
  http://docs.julialang.org/en/latest/manual/profile/#memory-allocation-analysis 
  
  (Would probably be good to backport to the 0.3 manual...) 
  
  
  Regards 



Re: [julia-users] Squeezing more performance out of a mandelbrot set computation

2015-01-06 Thread Craig Bosma
I just tried inlining x^2+z as you and Steven suggested, and the serial 
execution time dropped to ~6 seconds!

I'm slightly disappointed to give up that flexibility, especially when the 
C++ version can use lambdas to achieve the same result without a 
significant slowdown, but at least it's good to understand the source of 
the problem. I'll keep a close eye on issue #1864.

Thanks so much,
Craig

On Tuesday, January 6, 2015 1:40:07 PM UTC-6, Jeff Bezanson wrote:

 Anonymous functions are in fact compiled, but are not inlined, and 
 calling them is a bit slow. For a hot inner loop like this, they 
 cannot currently be used. You will need to manually inline the x^2+z 
 to get performance. 

 On Tue, Jan 6, 2015 at 2:32 PM, Craig Bosma craig...@gmail.com 
  wrote: 
  Hi Kevin, thanks for suggestions. I did try wrapping the main code in a 
  function, but it made no appreciable difference. I've read through the 
  Performance Tips a couple of times now, but I'll review it again. 
  
  On Tuesday, January 6, 2015 11:15:36 AM UTC-6, Kevin Squire wrote: 
  
  Hi Craig, 
  
  If you haven't yet, you should read through the Performance Tips 
 section 
  of the manual. 
  
  One easy thing you should try is to wrap the main code in a function. 
  While this will hopefully change before v1.0, Julia has a hard time 
  optimizing code in global scope. 
  
  Cheers, 
 Kevin 
  
  On Tuesday, January 6, 2015, Craig Bosma craig...@gmail.com wrote: 
  
  Hi, 
  
  I've been playing with Julia lately by creating a little toy 
 mandelbrot 
  set computation, and trying to optimize it for comparison with a C++ 
  version. I've also been trying to parallelize it to learn more about 
 Julia's 
  parallel computing model and to compare with parallel implementations 
 in 
  C++. My latest implementation can be found in this gist. 
  
  On my machine, this runs in about 50 seconds, with (23622108264 bytes 
  allocated, 20.50% gc time) when executed serially. Those allocations 
 seem 
  excessive to me, but I'm not sure what to do about it. With 8 workers 
 on 4 
  cores, I can cut that down to about 20 seconds. On the same machine, 
 my C++ 
  serial/parallel implementations run in ~8s/2s. I expected the 
 difference to 
  be a less than a full order of magnitude, so I guess I'm wondering if 
 there 
  are any obvious performance problems with my implementation. If anyone 
 has 
  any suggestions of how to improve my code, I'd sure like to hear about 
 it. 
  
  Thanks, 
  Craig 



Re: [julia-users] Julia backslash performance vs MATLAB backslash

2015-01-06 Thread Tim Davis
Most of my code in SuiteSparse is under my copyright, not the University of
Florida.  (I'm now at Texas A&M by the way ...
http://faculty.cse.tamu.edu/davis )

Most of SuiteSparse is LGPL or GPL, but the Factorize package itself is
2-clause BSD (attached).

So you can use the Factorize package as you wish.  The Factorize does
connect to sparse Cholesky (chol in MATLAB),
sparse LU, etc, but those are different packages (and all of them are GPL
or LGPL).  The backslash polyalgorithm is in
Factorize, however, and is thus 2-clause BSD.
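For readers following along, the shape of such a backslash polyalgorithm can be sketched in modern (1.x) Julia syntax. This is an illustration of the idea only, not Davis's BSD code; `solve_poly` is a made-up name.

```julia
# Sketch of a sparse backslash polyalgorithm: try Cholesky for square
# Hermitian matrices, fall back to LU, and use QR (least squares) for
# rectangular systems.
using LinearAlgebra, SparseArrays

function solve_poly(A::SparseMatrixCSC, b)
    m, n = size(A)
    if m == n
        if ishermitian(A)
            F = cholesky(A; check = false)   # cheap attempt; not posdef -> fall through
            issuccess(F) && return F \ b
        end
        return lu(A) \ b                     # general square case (UMFPACK)
    end
    return qr(A) \ b                         # least-squares for m != n (SPQR)
end

A = sparse([2.0 1.0; 1.0 2.0])
x = solve_poly(A, [3.0, 3.0])                # solves A*x = b via Cholesky here
```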





On Mon, Jan 5, 2015 at 10:29 PM, Viral Shah vi...@mayin.org wrote:

 This is similar to the FFTW situation, where the license is held by MIT.

 -viral

  On 06-Jan-2015, at 8:14 am, Viral Shah vi...@mayin.org wrote:
 
  I believe that it is University of Florida that owns the copyright and
 they would lose licencing revenue. I would love it too if we could have
 these under the MIT licence, but it may not be a realistic expectation.
 
  Looking at the paper is the best way to go. Jiahao has already produced
 the pseudo code in the issue, and we do similar things in our dense \.
 
  -viral
 
  On 6 Jan 2015 07:31, Kevin Squire kevin.squ...@gmail.com wrote:
  Since Tim wrote the code (presumably?), couldn't he give permission to
 license it under MIT?  (Assuming he was okay with that, of course!).
 
  Cheers,
 Kevin
 
  On Mon, Jan 5, 2015 at 3:09 PM, Stefan Karpinski ste...@karpinski.org
 wrote:
  A word of legal caution: Tim, I believe some (all?) of your SuiteSparse
 code is GPL and since Julia is MIT (although not all libraries are), we can
 look at pseudocode but not copy GPL code while legally keeping the MIT
 license on Julia's standard library.
 
  Also, thanks so much for helping with this.
 
 
  On Mon, Jan 5, 2015 at 4:09 PM, Ehsan Eftekhari e.eftekh...@gmail.com
 wrote:
  Following your advice, I tried the code again, this time I also used
 MUMPS solver from https://github.com/lruthotto/MUMPS.jl
  I used a 42x43x44 grid. These are the results:
 
  MUMPS: elapsed time: 2.09091471 seconds
  lufact: elapsed time: 5.01038297 seconds (9952832 bytes allocated)
  backslash: elapsed time: 16.604061696 seconds (80189136 bytes allocated,
 0.45% gc time)
 
  and in Matlab:
  Elapsed time is 5.423656 seconds.
 
  Thanks a lot Tim and Viral for your quick and helpful comments.
 
  Kind regards,
  Ehsan
 
 
  On Monday, January 5, 2015 9:56:12 PM UTC+1, Viral Shah wrote:
  Thanks, that is great. I was wondering about the symmetry checker - we
 have the naive one currently, but I can just use the CHOLMOD one now.
 
  -viral
 
 
 
   On 06-Jan-2015, at 2:22 am, Tim Davis da...@tamu.edu wrote:
  
   oops.  Yes, your factorize function is broken.  You might try mine
 instead, in my
   factorize package.
  
   I have a symmetry-checker in CHOLMOD.  It checks if the matrix is
 symmetric and
   with positive diagonals.  I think I have a MATLAB interface for it
 too.  The code is efficient,
   since it doesn't form A transpose, and it quits early as soon as
 asymmetry is detected.
  
   It does rely on the fact that MATLAB requires its sparse matrices to
 have sorted row indices
   in each column, however.
  
   On Mon, Jan 5, 2015 at 2:43 PM, Viral Shah vi...@mayin.org wrote:
   Tim - thanks for the reference. The paper will come in handy. This is
 a longstanding issue that we just haven’t got around to addressing yet,
 but perhaps now is a good time.
  
   https://github.com/JuliaLang/julia/issues/3295
  
   We have a very simplistic factorize() for sparse matrices that must
 have been implemented as a stopgap. This is what it currently does and that
 explains everything.
  
    # placing factorize here for now. Maybe add a new file
    function factorize(A::SparseMatrixCSC)
        m, n = size(A)
        if m == n
            Ac = cholfact(A)
            Ac.c.minor == m && ishermitian(A) && return Ac
        end
        return lufact(A)
    end
  
   -viral
  
  
  
On 06-Jan-2015, at 1:57 am, Tim Davis da...@tamu.edu wrote:
   
That does sound like a glitch in the \ algorithm, rather than in
 UMFPACK.  The OpenBLAS is pretty good.
   
This is very nice in Julia:
   
F = lufact (d[M]) ; F \ d
   
That's a great idea to have a factorization object like that.  I
 have a MATLAB toolbox that does
the same thing, but it's not a built-in function inside MATLAB.
 It's written in M, so it can be slow for
small matrices.   With it, however, I can do:
   
F = factorize (A) ;% does an LU, Cholesky, QR, SVD, or
 whatever.  Uses my polyalgorithm for \.
x = F\b ;
   
I can do S = inverse(A); which returns a factorization, not an
 inverse, but with a flag
set so that S*b does A\b (and yes, S\b would do A*b, since S keeps a
 copy of A inside it, as well).
   
You can also specify the factorization, such as
   
 F=factorize(A, 'lu')
F=factorize(A,'svd') ; etc.
   
It's in SuiteSparse/MATLAB_tools/Factorize, if you're interested.
 I've suggested 

Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Petr Krysl
Thanks very much. This is useful.
P

On Tuesday, January 6, 2015 1:50:11 AM UTC-8, Milan Bouchet-Valat wrote:

 Le lundi 05 janvier 2015 à 20:48 -0800, Petr Krysl a écrit : 
  Hi guys, 
  
  How does one figure out where allocation  of memory occurs?   When I 
  use the @time  macro it tells me there's a lot of memory allocation 
  and deallocation going on.  Just looking at the code I'm at a loss: I 
  can't see the reasons for it there. 
  
  So, what are the tips and tricks for the curious?  How do I debug the 
  memory allocation issue?  I looked at the lint, the type check, and 
  the code_typed().  Perhaps I don't know where to look, but  these 
  didn't seem to be of much help. 
 See this: 

 http://docs.julialang.org/en/latest/manual/profile/#memory-allocation-analysis
  

 (Would probably be good to backport to the 0.3 manual...) 


 Regards 



Re: [julia-users] string literals split to multilines?

2015-01-06 Thread Sean Marshallsay
Rene, multiline literals should also work with just a single quote.

julia> d = "aaa
       bbb
       ccc"
"aaa\nbbb\nccc"



On Tuesday, 6 January 2015 10:19:37 UTC, René Donner wrote:

 hi, 

 this should work: 

 d = "aaa 
 bbb 
 ccc" 

 Rene 


 On 06.01.2015 at 11:15, Andreas Lobinger lobi...@gmail.com wrote: 

  Hello colleagues, 
  
  is there a counterpart for the string literal split to multiple lines 
 like in python? 
  
  d = '09ab\ 
  eff1\ 
  a2a4' 
  
  Wishing a happy day, 
 Andreas 
  



[julia-users] Re: Squeezing more performance out of a mandelbrot set computation

2015-01-06 Thread Steven G. Johnson
In addition to putting your main loops into a function, I would also just 
inline the normalized_iterations function rather than passing it as an 
itmap parameter, and inline the function f.  i.e. specialize your 
boundedorbit function to the mandelbrot case.   (Anonymous functions like x 
-> x^2 + z currently aren't even compiled, much less inlined.  See 
https://github.com/JuliaLang/julia/issues/1864)


Re: [julia-users] Squeezing more performance out of a mandelbrot set computation

2015-01-06 Thread Jeff Bezanson
Anonymous functions are in fact compiled, but are not inlined, and
calling them is a bit slow. For a hot inner loop like this, they
cannot currently be used. You will need to manually inline the x^2+z
to get performance.
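Jeff's advice amounts to writing the iteration update inline in the hot loop. A sketch with made-up names (this is not Craig's gist), where `z * z + c` is written directly instead of being passed as an anonymous function:

```julia
# Escape-time iteration with the quadratic map manually inlined:
# no anonymous-function call in the inner loop.
function escape_time(c::Complex{Float64}, maxiter::Int)
    z = zero(c)
    for k in 1:maxiter
        z = z * z + c            # manually inlined update
        abs2(z) > 4.0 && return k
    end
    return maxiter
end

escape_time(0.0 + 0.0im, 50)     # 0 never escapes -> 50
escape_time(2.0 + 0.0im, 50)     # escapes on the second iteration -> 2
```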

On Tue, Jan 6, 2015 at 2:32 PM, Craig Bosma craig.bo...@gmail.com wrote:
 Hi Kevin, thanks for suggestions. I did try wrapping the main code in a
 function, but it made no appreciable difference. I've read through the
 Performance Tips a couple of times now, but I'll review it again.

 On Tuesday, January 6, 2015 11:15:36 AM UTC-6, Kevin Squire wrote:

 Hi Craig,

 If you haven't yet, you should read through the Performance Tips section
 of the manual.

 One easy thing you should try is to wrap the main code in a function.
 While this will hopefully change before v1.0, Julia has a hard time
 optimizing code in global scope.

 Cheers,
Kevin

 On Tuesday, January 6, 2015, Craig Bosma craig...@gmail.com wrote:

 Hi,

 I've been playing with Julia lately by creating a little toy mandelbrot
 set computation, and trying to optimize it for comparison with a C++
 version. I've also been trying to parallelize it to learn more about Julia's
 parallel computing model and to compare with parallel implementations in
 C++. My latest implementation can be found in this gist.

 On my machine, this runs in about 50 seconds, with (23622108264 bytes
 allocated, 20.50% gc time) when executed serially. Those allocations seem
 excessive to me, but I'm not sure what to do about it. With 8 workers on 4
 cores, I can cut that down to about 20 seconds. On the same machine, my C++
 serial/parallel implementations run in ~8s/2s. I expected the difference to
 be a less than a full order of magnitude, so I guess I'm wondering if there
 are any obvious performance problems with my implementation. If anyone has
 any suggestions of how to improve my code, I'd sure like to hear about it.

 Thanks,
 Craig


Re: [julia-users] How to overwrite to an existing file, only range of data? HDF5 can do this ?

2015-01-06 Thread paul analyst
  2. if sum of k and l > 9, Julia can't work. Is it too big a size for HDF5 or for 

   the system (Win7 64 Home Premium)? 

  Not sure. It works for me (Kubuntu Linux 14.04). 

 I checked on Ubuntu: 

 If the sum of k and/or l is more than 7, I have problems with reading 
 vectors (cols).

 

 samsung2@samsung2:~$ julia
                _
    _       _ _(_)_     |  A fresh approach to technical computing
   (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
    _ _   _| |_  __ _   |  Type "help()" for help.
   | | | | | | |/ _` |  |
   | | |_| | | | (_| |  |  Version 0.3.3 (2014-10-21 20:18 UTC)
  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org release
 |__/                   |  i686-linux-gnu

 julia> using HDF5

 julia> hfi = h5open("F_big.h5","w"); close(hfi)

 julia> k,l = 8,8;

 julia> fid = h5open("F_big.h5","r+")
 HDF5 data file: F_big.h5

 julia> fid["mygroup/A"] = rand(2)  # unnecessary
 2-element Array{Float64,1}:
  0.318459
  0.258055

 julia> g = fid["mygroup"]
 HDF5 group: /mygroup (file: F_big.h5)

 julia> dset = d_create(g, "F", datatype(Float64), dataspace(10^k,1*10^l))
 HDF5 dataset: /mygroup/F (file: F_big.h5)

 julia> h5read("F_big.h5","mygroup/F",(:,1))
 1x1 Array{Float64,2}:
  0.0
  0.0
  0.0
  0.0
  0.0
  0.0
  0.0
  0.0
  0.0
  0.0
  ⋮  
  0.0
  0.0
  0.0
  0.0
  0.0
  0.0
  0.0
  0.0
  0.0

 julia> close(fid)

 julia> hfi = h5open("F_big.h5","w"); close(hfi)

 julia> k,l = 9,8;

 julia> fid = h5open("F_big.h5","r+")
 HDF5 data file: F_big.h5

 julia> fid["mygroup/A"] = rand(2)  # unnecessary
 2-element Array{Float64,1}:
  0.129214
  0.4785  

 julia> g = fid["mygroup"]
 HDF5 group: /mygroup (file: F_big.h5)

 julia> dset = d_create(g, "F", datatype(Float64), dataspace(10^k,1*10^l))
 HDF5 dataset: /mygroup/F (file: F_big.h5)

 julia> h5read("F_big.h5","mygroup/F",(:,1))
 ERROR: invalid Array size
  in Array at base.jl:223
  in _getindex at /home/samsung2/.julia/v0.3/HDF5/src/plain.jl:1557
  in getindex at /home/samsung2/.julia/v0.3/HDF5/src/plain.jl:1550
  in getindex at /home/samsung2/.julia/v0.3/HDF5/src/plain.jl:1620
  in h5read at /home/samsung2/.julia/v0.3/HDF5/src/plain.jl:602

 julia> close(fid)

 julia> h5read("F_big.h5","mygroup/F",(1:2,:))
 2x1 Array{Float64,2}:
  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  …  0.0  0.0  0.0  0.0  0.0  0.0  0.0
  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0     0.0  0.0  0.0  0.0  0.0  0.0  0.0

 julia> h5read("F_big.h5","mygroup/F",(:,1))
 ERROR: invalid Array size
  in Array at base.jl:223
  in _getindex at /home/samsung2/.julia/v0.3/HDF5/src/plain.jl:1557
  in getindex at /home/samsung2/.julia/v0.3/HDF5/src/plain.jl:1550
  in getindex at /home/samsung2/.julia/v0.3/HDF5/src/plain.jl:1620
  in h5read at /home/samsung2/.julia/v0.3/HDF5/src/plain.jl:602

 julia> 

 Paul 
 


Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Tim Holy
It doesn't write .mem files until you quit julia. The mem allocation is 
cumulative throughout your whole session (or since the last call to 
clear_malloc_data()).

--Tim

On Tuesday, January 06, 2015 02:15:02 PM Petr Krysl wrote:
 I did this as suggested. The code  executed as shown below, preceded by the
 command line.
 The process completes,  but there are no .mem files anywhere. Should I ask
 for them specifically?
 
 # C:\Users\pkrysl\AppData\Local\Julia-0.4.0-dev\bin\julia.exe
 --track-allocation=all memory_debugging.jl
 cd("C:/Users/pkrysl/Documents/GitHub/jfineale"); include("JFinEALE.jl");
 include("examples/acoustics/sphere_scatterer_example.jl")
 Profile.clear_malloc_data()
 include("examples/acoustics/sphere_scatterer_example.jl")
 quit()
 
 On Tuesday, January 6, 2015 1:50:11 AM UTC-8, Milan Bouchet-Valat wrote:
  Le lundi 05 janvier 2015 à 20:48 -0800, Petr Krysl a écrit :
   Hi guys,
   
   How does one figure out where allocation  of memory occurs?   When I
   use the @time  macro it tells me there's a lot of memory allocation
   and deallocation going on.  Just looking at the code I'm at a loss: I
   can't see the reasons for it there.
   
   So, what are the tips and tricks for the curious?  How do I debug the
   memory allocation issue?  I looked at the lint, the type check, and
   the code_typed().  Perhaps I don't know where to look, but  these
   didn't seem to be of much help.
  
  See this:
  
  http://docs.julialang.org/en/latest/manual/profile/#memory-allocation-analysis
  
  (Would probably be good to backport to the 0.3 manual...)
  
  
  Regards



Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Petr Krysl
Actually, correction: for 0.3.4 the _system_ *.mem files are in the julia 
folders. For _my_ source files the .mem files cannot be located.

P

On Tuesday, January 6, 2015 2:15:02 PM UTC-8, Petr Krysl wrote:

 I did this as suggested. The code  executed as shown below, preceded by 
 the command line.
 The process completes,  but there are no .mem files anywhere. Should I ask 
 for them specifically?

 # C:\Users\pkrysl\AppData\Local\Julia-0.4.0-dev\bin\julia.exe 
 --track-allocation=all memory_debugging.jl
 cd("C:/Users/pkrysl/Documents/GitHub/jfineale"); include("JFinEALE.jl");
 include("examples/acoustics/sphere_scatterer_example.jl")
 Profile.clear_malloc_data()
 include("examples/acoustics/sphere_scatterer_example.jl")
 quit()



 On Tuesday, January 6, 2015 1:50:11 AM UTC-8, Milan Bouchet-Valat wrote:

 Le lundi 05 janvier 2015 à 20:48 -0800, Petr Krysl a écrit : 
  Hi guys, 
  
  How does one figure out where allocation  of memory occurs?   When I 
  use the @time  macro it tells me there's a lot of memory allocation 
  and deallocation going on.  Just looking at the code I'm at a loss: I 
  can't see the reasons for it there. 
  
  So, what are the tips and tricks for the curious?  How do I debug the 
  memory allocation issue?  I looked at the lint, the type check, and 
  the code_typed().  Perhaps I don't know where to look, but  these 
  didn't seem to be of much help. 
 See this: 

 http://docs.julialang.org/en/latest/manual/profile/#memory-allocation-analysis
  

 (Would probably be good to backport to the 0.3 manual...) 


 Regards 



Re: [julia-users] Running mean/median in Julia

2015-01-06 Thread Stefan Karpinski
You may not need online methods at all. Sorting the rows of a 200 x 30
matrix doesn't take very long on my laptop:

julia> X = randn(200,30);

julia> @time X = sortrows(X);
elapsed time: 0.297998739 seconds (480053384 bytes allocated)




On Tue, Jan 6, 2015 at 2:33 PM, Tomas Mikoviny tomas.mikov...@gmail.com
wrote:

 Hi Kevin,

 generally I'm trying to do baseline correction on mass spectra with
 ~30 bins. I've tried several algorithms to evaluate baseline but the
 ones working best implement a running median and mean. I've just got the mean
 sorted out via the cumsum trick, in line with Tim's suggestion (found
 some MATLAB discussion on that), although I'll check Tamas' suggestion too.

 I've got stuck on a running median with reasonable
 performance, since the computer has to crunch a running median of a 200 x 30
 array within a couple of seconds (max) to manage online analysis.
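The cumsum trick Tomas mentions can be sketched as follows (`runmean` is an illustrative name): one cumulative sum gives every length-w window mean by subtraction, with no per-window inner loop.

```julia
# Running mean over windows of length w from a single cumulative sum:
# window sum = c[i+w] - c[i], where c is the prefix-sum with a leading zero.
function runmean(x, w)
    c = vcat(0.0, cumsum(x))
    return (c[w+1:end] .- c[1:end-w]) ./ w
end

runmean([1.0, 2.0, 3.0, 4.0, 5.0], 2)   # -> [1.5, 2.5, 3.5, 4.5]
```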

 On Tuesday, January 6, 2015 6:18:02 PM UTC+1, Kevin Squire wrote:

 Hi Tomas,
 I'm not aware of any (though they might exist). It might help if you gave
 a little more context--what kind of data are you working with?

 Cheers,
Kevin

 On Tuesday, January 6, 2015, Tomas Mikoviny tomas.m...@gmail.com wrote:

 Hi,
 I was just wondering if anyone knows if there is a package that
 implements *fast* running median/mean algorithms?

 Thanks a lot...





Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Rob J. Goedman
Petr,

Not sure if this helps you, but below sequence creates the .mem file.

ProjDir is set in Ex07.jl and is the directory that contains the .mem file

Regards,
Rob J. Goedman
goed...@mac.com


Robs-MacBook-Pro:~ rob$ clear; julia  --track-allocation=user

                _
    _       _ _(_)_     |  A fresh approach to technical computing
   (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
    _ _   _| |_  __ _   |  Type "help()" for help.
   | | | | | | |/ _` |  |
   | | |_| | | | (_| |  |  Version 0.3.4 (2014-12-26 10:42 UTC)
  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org release
 |__/                   |  x86_64-apple-darwin13.4.0

julia> include("/Users/rob/.julia/v0.3/FinealeBook/Examples/Fineale/Ch02/Ex07.jl")

julia> cd(ProjDir)

julia> clear_malloc_data()

julia> include("/Users/rob/.julia/v0.3/FinealeBook/Examples/Fineale/Ch02/Ex07.jl")

shell> ls
Ex07.jl Ex07.svgEx08.svgEx09.svgSection2.3.svg
Ex07.jl.mem Ex08.jl Ex09.jl Section2.3.jl   Section2.4.nb

 On Jan 6, 2015, at 2:15 PM, Petr Krysl krysl.p...@gmail.com wrote:
 
 I did this as suggested. The code  executed as shown below, preceded by the 
 command line.
 The process completes,  but there are no .mem files anywhere. Should I ask 
 for them specifically?
 
 # C:\Users\pkrysl\AppData\Local\Julia-0.4.0-dev\bin\julia.exe 
 --track-allocation=all memory_debugging.jl
 cd("C:/Users/pkrysl/Documents/GitHub/jfineale"); include("JFinEALE.jl");
 include("examples/acoustics/sphere_scatterer_example.jl")
 Profile.clear_malloc_data()
 include("examples/acoustics/sphere_scatterer_example.jl")
 quit()
 
 
 
 On Tuesday, January 6, 2015 1:50:11 AM UTC-8, Milan Bouchet-Valat wrote:
 Le lundi 05 janvier 2015 à 20:48 -0800, Petr Krysl a écrit : 
  Hi guys, 
  
  How does one figure out where allocation  of memory occurs?   When I 
  use the @time  macro it tells me there's a lot of memory allocation 
  and deallocation going on.  Just looking at the code I'm at a loss: I 
  can't see the reasons for it there. 
  
  So, what are the tips and tricks for the curious?  How do I debug the 
  memory allocation issue?  I looked at the lint, the type check, and 
  the code_typed().  Perhaps I don't know where to look, but  these 
  didn't seem to be of much help. 
 See this: 
 http://docs.julialang.org/en/latest/manual/profile/#memory-allocation-analysis 
  
 
 (Would probably be good to backport to the 0.3 manual...) 
 
 
 Regards 
 



Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Tim Holy
In particular, if all of the calls that you're making got attributed to code 
in Base, that could explain what you saw. Inlining could also be a source of 
confusion. The most accurate way to do this kind of analysis is only available 
in julia 0.4:

julia --inline=no --track-allocation=user

--Tim

On Tuesday, January 06, 2015 02:59:11 PM Petr Krysl wrote:
 Actually, correction: for 0.3.4 the _system_ *.mem files are in the julia
 folders. For _my_ source files the .mem files cannot be located.
 
 P
 
 On Tuesday, January 6, 2015 2:15:02 PM UTC-8, Petr Krysl wrote:
  I did this as suggested. The code  executed as shown below, preceded by
  the command line.
  The process completes,  but there are no .mem files anywhere. Should I ask
  for them specifically?
  
  # C:\Users\pkrysl\AppData\Local\Julia-0.4.0-dev\bin\julia.exe
  --track-allocation=all memory_debugging.jl
  cd("C:/Users/pkrysl/Documents/GitHub/jfineale"); include("JFinEALE.jl");
  include("examples/acoustics/sphere_scatterer_example.jl")
  Profile.clear_malloc_data()
  quit()
  quit()
  
  On Tuesday, January 6, 2015 1:50:11 AM UTC-8, Milan Bouchet-Valat wrote:
  Le lundi 05 janvier 2015 à 20:48 -0800, Petr Krysl a écrit :
   Hi guys,
   
   How does one figure out where allocation  of memory occurs?   When I
   use the @time  macro it tells me there's a lot of memory allocation
   and deallocation going on.  Just looking at the code I'm at a loss: I
   can't see the reasons for it there.
   
   So, what are the tips and tricks for the curious?  How do I debug the
   memory allocation issue?  I looked at the lint, the type check, and
   the code_typed().  Perhaps I don't know where to look, but  these
   didn't seem to be of much help.
  
  See this:
  
  http://docs.julialang.org/en/latest/manual/profile/#memory-allocation-ana
  lysis
  
  (Would probably be good to backport to the 0.3 manual...)
  
  
  Regards



Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Rob J. Goedman
Hi Petr,

It’s your book; I used this name for the time being while working my way 
through the first 6 or 7 chapters using Julia (and occasionally Mathematica, 
since I don’t have Matlab).

If you would prefer, I can easily change the name; I have no intention to 
ever register the package.

Just trying to figure out a good way to replace my current (Fortran) FEM/R 
program with a Julia equivalent.

Regards,
Rob J. Goedman
goed...@mac.com





 On Jan 6, 2015, at 3:01 PM, Petr Krysl krysl.p...@gmail.com wrote:
 
 Rob,
 
 Thanks. I did find some .mem files (see above). Not for my own source files 
 though.
 
 Petr
 
 PS: You have a fineale book? Interesting... I thought no one else had 
 claimed that name for a software project before...
 
 On Tuesday, January 6, 2015 2:46:26 PM UTC-8, Rob J Goedman wrote:
 Petr,
 
 Not sure if this helps you, but below sequence creates the .mem file.
 
 ProjDir is set in Ex07.jl and is the directory that contains the .mem file
 
 Regards,
 Rob J. Goedman
 goe...@mac.com
 
 
 Robs-MacBook-Pro:~ rob$ clear; julia  --track-allocation=user
 
 [Julia 0.3.4 (2014-12-26) startup banner, x86_64-apple-darwin13.4.0]
 
 julia> include("/Users/rob/.julia/v0.3/FinealeBook/Examples/Fineale/Ch02/Ex07.jl")
 
 julia> cd(ProjDir)
 
 julia> clear_malloc_data()
 
 julia> include("/Users/rob/.julia/v0.3/FinealeBook/Examples/Fineale/Ch02/Ex07.jl")
 
 shell> ls
 Ex07.jl       Ex07.svg        Ex08.svg        Ex09.svg        Section2.3.svg
 Ex07.jl.mem   Ex08.jl         Ex09.jl         Section2.3.jl   Section2.4.nb
 
 On Jan 6, 2015, at 2:15 PM, Petr Krysl krysl...@gmail.com wrote:
 
 I did this as suggested. The code  executed as shown below, preceded by the 
 command line.
 The process completes,  but there are no .mem files anywhere. Should I ask 
 for them specifically?
 
 # C:\Users\pkrysl\AppData\Local\Julia-0.4.0-dev\bin\julia.exe 
 --track-allocation=all memory_debugging.jl
 cd("C:/Users/pkrysl/Documents/GitHub/jfineale"); include("JFinEALE.jl");
 include("examples/acoustics/sphere_scatterer_example.jl")
 Profile.clear_malloc_data()
 include("examples/acoustics/sphere_scatterer_example.jl")
 quit()
 
 
 
 On Tuesday, January 6, 2015 1:50:11 AM UTC-8, Milan Bouchet-Valat wrote:
 Le lundi 05 janvier 2015 à 20:48 -0800, Petr Krysl a écrit : 
  Hi guys, 
  
  How does one figure out where allocation  of memory occurs?   When I 
  use the @time  macro it tells me there's a lot of memory allocation 
  and deallocation going on.  Just looking at the code I'm at a loss: I 
  can't see the reasons for it there. 
  
  So, what are the tips and tricks for the curious?  How do I debug the 
  memory allocation issue?  I looked at the lint, the type check, and 
  the code_typed().  Perhaps I don't know where to look, but  these 
  didn't seem to be of much help. 
 See this: 
 http://docs.julialang.org/en/latest/manual/profile/#memory-allocation-analysis
 
 (Would probably be good to backport to the 0.3 manual...) 
 
 
 Regards 
 
 



Re: [julia-users] Running mean/median in Julia

2015-01-06 Thread Tomas Mikoviny
Hi Stefan, 

you are right. But the moment I try to do this for any subset along the 
array it gets inefficient. 
Here is the simplest code for running median of one dimensional array. Time 
consuming...

function runmed(input::Array, w::Int)
    L = length(input)
    output = zeros(input)
    for i in 1:L-w
        subset = sub(input, i:i+w)          # a view into input
        output[i] = median!(copy(subset))   # copy: median! reorders its argument
    end
    return output
end

a = zeros(30);

runmed(a, 200);

@time runmed(a, 200);
elapsed time: 1.460171594 seconds (43174976 bytes allocated)
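One standard way to speed this up (my own sketch, not code from the thread) is to keep the window sorted and update it incrementally, so each step costs a binary search plus an O(w) shift instead of a full partition of the window:

```julia
# Sketch of an incremental running median: maintain a sorted copy of the
# current window; on each step delete the element leaving the window and
# insert the one entering it at its sorted position.
# For simplicity this takes the upper-middle element of the (w+1)-long
# window, which is the exact median when w is even.
function runmed_sorted(input::Vector{Float64}, w::Int)
    n = length(input)
    output = zeros(n - w)
    window = sort(input[1:w+1])
    mid = div(w, 2) + 1
    output[1] = window[mid]
    for i in 2:n-w
        # window currently holds input[i-1:i+w-1]; slide it one step right
        deleteat!(window, searchsortedfirst(window, input[i-1]))
        insert!(window, searchsortedfirst(window, input[i+w]), input[i+w])
        output[i] = window[mid]
    end
    return output
end
```

For very large windows, replacing the sorted vector with a pair of heaps or an indexable skip list brings the per-step cost down to O(log w).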


On Tuesday, January 6, 2015 10:10:39 PM UTC+1, Stefan Karpinski wrote:

 You may not need online methods at all. Sorting the rows of a 200 x 30 
 matrix doesn't take very long on my laptop:

 julia X = randn(200,30);

 julia @time X = sortrows(X);
 elapsed time: 0.297998739 seconds (480053384 bytes allocated)




 On Tue, Jan 6, 2015 at 2:33 PM, Tomas Mikoviny tomas.m...@gmail.com 
  wrote:

 Hi Kevin, 

  Generally I'm trying to do baseline correction on mass spectra with 
  ~30 bins. I've tried several algorithms to evaluate the baseline, but the 
  ones working best implement running median and mean. I've just got the mean 
  sorted out via the cumsum trick, coinciding with Tim's suggestion (found 
  some MATLAB discussion on that). Although I'll check Tamas' suggestion too.

  I've got stuck with a running median that would have reasonable 
  performance, since the computer has to crunch runmed of an array of 200 x 30 
  within a couple of seconds (max) to manage online analysis. 

 On Tuesday, January 6, 2015 6:18:02 PM UTC+1, Kevin Squire wrote:

 Hi Tomas,
  I'm not aware of any (though they might exist). It might help if you 
  gave a little more context--what kind of data are you working with?

 Cheers,
Kevin

 On Tuesday, January 6, 2015, Tomas Mikoviny tomas.m...@gmail.com 
 wrote:

 Hi, 
 I was just wondering if anyone knows if there is a package that 
 implements *fast* running median/mean algorithms?

 Thanks a lot...
  




Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Rob J. Goedman
Petr,

I ran the Poisson_FE_example_model in REPL as shown below and find the .mem 
files in the src directory and in the top-level directory.

You were running a different example though.

Rob J. Goedman
goed...@mac.com

julia> cd(Pkg.dir(homedir(), "Projects/Julia/Rob/jfineale_for_trying_out"))

julia> include("/Users/rob/Projects/Julia/Rob/jfineale_for_trying_out/Poisson_FE_example_model.jl")
Heat conduction example described by Amuthan A. Ramabathiran
http://www.codeproject.com/Articles/579983/Finite-Element-programming-in-Julia:
Unit square, with known temperature distribution along the boundary, 
and uniform heat generation rate inside.  Mesh of regular TRIANGLES,
in a grid of N x N edges. 
This version uses the JFinEALE algorithm module.

Total time elapsed = 2.8418619632720947s

julia> clear_malloc_data()

shell> ls src
AssemblyModule.jl   HeatDiffusionAlgorithmModule.jl 
MeshQuadrilateralModule.jl
FEMMBaseModule.jl   IntegRuleModule.jl  
MeshSelectionModule.jl
FEMMHeatDiffusionModule.jl  JFFoundationModule.jl   
MeshTriangleModule.jl
FENodeSetModule.jl  MaterialHeatDiffusionModule.jl  
NodalFieldModule.jl
FESetModule.jl  MeshExportModule.jl 
PropertyHeatDiffusionModule.jl
ForceIntensityModule.jl MeshModificationModule.jl

shell> ls
JFinEALE.jl Poisson_FE_example_model.jl src
Poisson_FE_Q4_example.jlREADME.md   tests
Poisson_FE_example.jl   annulus_Q4_example.jl

julia> include("/Users/rob/Projects/Julia/Rob/jfineale_for_trying_out/Poisson_FE_example_model.jl")
Heat conduction example described by Amuthan A. Ramabathiran
http://www.codeproject.com/Articles/579983/Finite-Element-programming-in-Julia:
Unit square, with known temperature distribution along the boundary, 
and uniform heat generation rate inside.  Mesh of regular TRIANGLES,
in a grid of N x N edges. 
This version uses the JFinEALE algorithm module.

Total time elapsed = 0.017609119415283203s

julia> CTRL-D



Robs-MacBook-Pro:jfineale_for_trying_out rob$ 

Robs-MacBook-Pro:~ rob$ pwd
/Users/rob
Robs-MacBook-Pro:~ rob$ cd Projects/Julia/Rob/jfineale_for_trying_out/
Robs-MacBook-Pro:jfineale_for_trying_out rob$ ls src
AssemblyModule.jl   ForceIntensityModule.jl 
MeshModificationModule.jl
AssemblyModule.jl.mem   ForceIntensityModule.jl.mem 
MeshQuadrilateralModule.jl
FEMMBaseModule.jl   HeatDiffusionAlgorithmModule.jl 
MeshSelectionModule.jl
FEMMBaseModule.jl.mem   HeatDiffusionAlgorithmModule.jl.mem 
MeshSelectionModule.jl.mem
FEMMHeatDiffusionModule.jl  IntegRuleModule.jl  
MeshTriangleModule.jl
FEMMHeatDiffusionModule.jl.mem  IntegRuleModule.jl.mem  
MeshTriangleModule.jl.mem
FENodeSetModule.jl  JFFoundationModule.jl   
NodalFieldModule.jl
FENodeSetModule.jl.mem  MaterialHeatDiffusionModule.jl  
NodalFieldModule.jl.mem
FESetModule.jl  MaterialHeatDiffusionModule.jl.mem  
PropertyHeatDiffusionModule.jl
FESetModule.jl.mem  MeshExportModule.jl 
PropertyHeatDiffusionModule.jl.mem
Robs-MacBook-Pro:jfineale_for_trying_out rob$ ls
JFinEALE.jl Poisson_FE_example_model.jl 
annulus_Q4_example.jl
Poisson_FE_Q4_example.jlPoisson_FE_example_model.jl.mem src
Poisson_FE_example.jl   README.md   tests
Robs-MacBook-Pro:jfineale_for_trying_out rob$ 




 On Jan 6, 2015, at 3:01 PM, Petr Krysl krysl.p...@gmail.com wrote:
 
 Rob,
 
 Thanks. I did find some .mem files (see above). Not for my own source files 
 though.
 
 Petr
 
 PS: You have a fineale book? Interesting... I thought no one else had 
 claimed that name for a software project before...
 
 On Tuesday, January 6, 2015 2:46:26 PM UTC-8, Rob J Goedman wrote:
 Petr,
 
 Not sure if this helps you, but below sequence creates the .mem file.
 
 ProjDir is set in Ex07.jl and is the directory that contains the .mem file
 
 Regards,
 Rob J. Goedman
  goe...@mac.com
 
 
 Robs-MacBook-Pro:~ rob$ clear; julia  --track-allocation=user
 
 [Julia 0.3.4 (2014-12-26) startup banner, x86_64-apple-darwin13.4.0]
 
  julia> include("/Users/rob/.julia/v0.3/FinealeBook/Examples/Fineale/Ch02/Ex07.jl")
  
  julia> cd(ProjDir)
  
  julia> clear_malloc_data()
 
 julia 
 

Re: [julia-users] Re: Nullable use cases / expected behavior?

2015-01-06 Thread elextr
Oops posted too soon :)

On Wednesday, January 7, 2015 10:37:01 AM UTC+10, ele...@gmail.com wrote:

 [...]

 This definition will be type-stable (it will always return a 
 Nullable{Bool}) and it will be able to signal all three possible results; 
 get(a) == b, get(a) != b and get(a) == null.


Forgot to say: Base `==` returns a Bool, so this definition makes == type-unstable.
 


 It is then messy to use == in an `if` when the return value is not a Bool. 

 In fact you still have to write the isnull() test, which is essentially the 
 same code as your definition of ==, so nothing has been gained by defining 
 ==.

 Cheers
 Lex

  


 [...]



Re: [julia-users] Re: [ANN] Blink.jl – Web-based GUIs for Julia

2015-01-06 Thread Eric Forgy
Hi Jeff,

I'd be interested in getting a Julia engine into Atom, but I would be less 
interested in using Julia for visualization when, unless I'm mistaken, at that 
point you can use d3 directly. That would be cool if true. Is it? Can we get 
Julia.eval to return a javascript array? Getting Julia and javascript working 
side by side in the same console would be pretty awesome.

Best regards,
Eric

Sent from my iPad

 On 7 Jan, 2015, at 8:31 am, Jeff Waller truth...@gmail.com wrote:
 
 Oh man, I think there might be a way!  
 
 Inspired by this because you know Atom is essentially node + chromium, I tried
 
 git clone node-julia
 and then
 bizarro% cd node-julia/
 
 bizarro% HOME=~/.atom-shell-gyp node-gyp rebuild --target=0.19.5 --arch=x64 
 --dist-url=https://gh-contractor-zcbenz.s3.amazonaws.com/atom-shell/dist
 
 that 0.19.5 value is critical and I ended up just trying the versions at 
 random
 
 linked node-julia in 
 
 pwd
 /Applications/Atom.app/Contents/Resources/app/node_modules
 bizarro% ls -l node-julia
  lrwxr-xr-x  1 jeffw  staff  32 Jan  6 18:10 node-julia -> 
  /Users/jeffw/src/atom/node-julia
 
 
 
 and then finally within the javascript REPL in Atom 
 
 var julia = require('node-julia');
 undefined
 julia.exec('rand',200);
 Array[200]
 
 
 and then (bonus)
 
 julia.eval('using Gadfly')
 JRef {getHIndex:function}__proto__: JRef
 julia.eval('plot(rand(10)');
 
 
 that last part didn't work of course but it didn't crash though and maybe 
 with a little more...  A julia engine within Atom.  Would that be useful?  
 I'm not sure what you guys are wanting to do, but maybe some collaboration?
 
 -Jeff


[julia-users] Gadfly contour plot with all contour levels in a single color

2015-01-06 Thread Tomas Lycken
I posted this question as an issue to Gadfly.jl 
https://github.com/dcjones/Gadfly.jl/issues/527, and now realized that I 
have a better chance of getting an answer, if there is one, here.

Given some arrays xs, ys and zs that hold my data set, I can do 
plot(x=xs, y=ys, z=zs, Geom.contour) to get a contour plot of the data. On 
this plot, the contour lines are colored by the function value for each 
level. Is there a way I could, manually, color them all the same?

// T


Re: [julia-users] Re: [ANN] Blink.jl – Web-based GUIs for Julia

2015-01-06 Thread Tracy Wadleigh
You mention in the readme about in the future possibly using Cxx.jl to wrap
libchromiumcontent. Might you be able to avoid the need for Cxx.jl by using
the C API of the Chromium Embedded Framework
http://code.google.com/p/chromiumembedded/?

On Tue, Jan 6, 2015 at 7:31 PM, Jeff Waller truth...@gmail.com wrote:

 Oh man, I think there might be a way!

 Inspired by this because you know Atom is essentially node + chromium, I
 tried

 git clone node-julia
 and then

 bizarro% cd node-julia/

 bizarro% HOME=~/.atom-shell-gyp node-gyp rebuild --target=0.19.5
 --arch=x64 --dist-url=
 https://gh-contractor-zcbenz.s3.amazonaws.com/atom-shell/dist

 that 0.19.5 value is critical and I ended up just trying the versions at
 random

 linked node-julia in
 pwd

 /Applications/Atom.app/Contents/Resources/app/node_modules
 bizarro% ls -l node-julia
 lrwxr-xr-x  1 jeffw  staff  32 Jan  6 18:10 node-julia -> /Users/jeffw/src
 /atom/node-julia


 and then finally within the javascript REPL in Atom
 var julia = require('node-julia');
 undefined
 julia.exec('rand',200);
 Array[200]


 and then (bonus)
 julia.eval('using Gadfly')
 JRef {getHIndex:function}__proto__: JRef
 julia.eval('plot(rand(10)');


 that last part didn't work of course but it didn't crash though and maybe
 with a little more...  A julia engine within Atom.  Would that be useful?
 I'm not sure what you guys are wanting to do, but maybe some collaboration?

 -Jeff



[julia-users] Re: Build troubles

2015-01-06 Thread Tony Kelman
Not a noob question at all. The Intel compiler build support isn't 
regularly tested like GCC/Clang, and it's susceptible to occasional 
breakage on master. Hopefully not on release-0.3, but please let us know if 
that happens too. It would be great if we could somehow get CI running with 
Intel compilers, or maybe one or two nightly buildbots using them?

Some inline assembly was added in PR #9266, which also broke the 
(barely-supported) build with MSVC. Intel should at least allow 64 bit 
inline assembly, I would think? Jameson Nash has a suggested workaround 
involving longjmp for the MSVC case, but I'm not sure whether it applies to 
you - are you trying this on OSX or Linux?

This is a legitimate build problem, please open an issue (e.g. "Build 
broken with Intel compilers") and cross-reference this thread. Include as 
much info about your system and compiler versions as you can.

-Tony


On Tuesday, January 6, 2015 1:37:17 PM UTC-8, samuel_a...@brown.edu wrote:

 Forgive me if this is a noob question but I'm having some trouble building 
 Julia. I'm on commit a318578. Running `make' gives me:

 $ make
 CC src/task.o
 task.c(352): catastrophic error: Cannot match asm operand constraint
 compilation aborted for task.c (code 1)
 make[2]: *** [task.o] Error 1
 make[1]: *** [julia-release] Error 2
 make: *** [release] Error 2

 Make.user:
 USEICC = 1
 USEIFC = 1
 USE_INTEL_MKL = 1
 USE_INTEL_MKL_FFT = 1
 USE_INTEL_LIBM = 1

 JULIA_CPU_TARGET = core2

 Weirdly, the exact same setup works fine compiling v0.3. Does anyone have 
 any ideas?

 Thanks,
 Samuel



Re: [julia-users] Re: Build troubles

2015-01-06 Thread Jameson Nash
It's failing on task.c:357 (on the current master). Can you see if it
compiles after replacing i(start_task) with (start_task)?

On Tue Jan 06 2015 at 6:33:37 PM Tony Kelman t...@kelman.net wrote:

 Not a noob question at all. The Intel compiler build support isn't
 regularly tested like GCC/Clang, and it's susceptible to occasional
 breakage on master. Hopefully not on release-0.3, but please let us know if
 that happens too. It would be great if we could somehow get CI running with
 Intel compilers, or maybe one or two nightly buildbots using them?

 Some inline assembly was added in PR #9266, which also broke the
 (barely-supported) build with MSVC. Intel should at least allow 64 bit
 inline assembly, I would think? Jameson Nash has a suggested workaround
 involving longjmp for the MSVC case, but I'm not sure whether it applies to
 you - are you trying this on OSX or Linux?

  This is a legitimate build problem, please open an issue (e.g. "Build
  broken with Intel compilers") and cross-reference this thread. Include as
 much info about your system and compiler versions as you can.

 -Tony


 On Tuesday, January 6, 2015 1:37:17 PM UTC-8, samuel_a...@brown.edu wrote:

 Forgive me if this is a noob question but I'm having some trouble
 building Julia. I'm on commit a318578. Running `make' gives me:

 $ make
 CC src/task.o
 task.c(352): catastrophic error: Cannot match asm operand constraint
 compilation aborted for task.c (code 1)
 make[2]: *** [task.o] Error 1
 make[1]: *** [julia-release] Error 2
 make: *** [release] Error 2

 Make.user:
 USEICC = 1
 USEIFC = 1
 USE_INTEL_MKL = 1
 USE_INTEL_MKL_FFT = 1
 USE_INTEL_LIBM = 1

 JULIA_CPU_TARGET = core2

 Weirdly, the exact same setup works fine compiling v0.3. Does anyone have
 any ideas?

 Thanks,
 Samuel




[julia-users] Why does string('a') work but convert(ASCIIString,'a') does not?

2015-01-06 Thread Ronald L. Rivest
Using Julia 0.3.4.  The following seems somehow inconsistent.  Is there
something about the philosophy of `convert` I am missing??

julia> convert(ASCIIString,'a')
ERROR: `convert` has no method matching convert(::Type{ASCIIString}, ::Char)
 in convert at base.jl:13

julia> string('a')
"a"

julia> typeof(ans)
ASCIIString (constructor with 2 methods)

Cheers,

Ron


[julia-users] Notice about coverage results

2015-01-06 Thread Tim Holy
To package authors & users:

Those of you who are using Coverage & Coveralls should be prepared for some 
large changes to your coverage statistics. Until now, the mechanisms we've 
been using to count coverage percentage were pretty badly broken; while we're 
still far from perfect, some recent changes in the Coverage package appear to 
have made those measurements considerably more accurate. In most cases, you'll 
see your coverage percentages drop, in some cases by a large amount. Those 
changes to Coverage were just merged and tagged, so the next time you push a 
commit, your numbers may plummet. Unless, of course, you're simply awesome 
about writing tests, or plan to become so :-).

References:
https://groups.google.com/d/msg/julia-dev/NFVvSQvd-Dk/Nyr8IXGQaTIJ
How to use Coverage to measure your coverage percentage, and find those 
functions in your package that need testing:
https://github.com/IainNZ/Coverage.jl

Best,
--Tim



Re: [julia-users] Re: [ANN] Blink.jl – Web-based GUIs for Julia

2015-01-06 Thread Jeff Waller
Oh man, I think there might be a way!  

Inspired by this because you know Atom is essentially node + chromium, I 
tried

git clone node-julia
and then

bizarro% cd node-julia/

bizarro% HOME=~/.atom-shell-gyp node-gyp rebuild --target=0.19.5 --arch=x64 
--dist-url=https://gh-contractor-zcbenz.s3.amazonaws.com/atom-shell/dist

that 0.19.5 value is critical and I ended up just trying the versions at 
random

linked node-julia in 
pwd

/Applications/Atom.app/Contents/Resources/app/node_modules
bizarro% ls -l node-julia
lrwxr-xr-x  1 jeffw  staff  32 Jan  6 18:10 node-julia -> /Users/jeffw/src/
atom/node-julia


and then finally within the javascript REPL in Atom 
var julia = require('node-julia');
undefined
julia.exec('rand',200);
Array[200]


and then (bonus)
julia.eval('using Gadfly')
JRef {getHIndex:function}__proto__: JRef
julia.eval('plot(rand(10)');


that last part didn't work of course but it didn't crash though and maybe 
with a little more...  A julia engine within Atom.  Would that be useful? 
 I'm not sure what you guys are wanting to do, but maybe some collaboration?

-Jeff


Re: [julia-users] Re: Nullable use cases / expected behavior?

2015-01-06 Thread elextr
[...]

 This definition will be type-stable (it will always return a 
 Nullable{Bool}) and it will be able to signal all three possible results; 
 get(a) == b, get(a) != b and get(a) == null.


It is then messy to use == in an `if` when the return value is not a Bool. 

In fact you still have to write the isnull() test, which is essentially the 
same code as your definition of ==, so nothing has been gained by defining 
==.
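To make the point concrete, here is my own sketch (0.4-era Julia, not code from the thread) of what branching on a three-valued `==` would look like; it assumes a `==` method returning Nullable{Bool} as proposed earlier in the thread, and uses `get(x, default)`, the Nullable accessor with a fallback:

```julia
# Assumes a (==) method that returns Nullable{Bool} has been defined.
# Nullable dates to Julia 0.4 and is no longer part of modern Base.
a = Nullable{Int}()
r = a == 3              # Nullable{Bool}() -- the "unknown" result
if get(r, false)        # the null case must still be handled explicitly
    println("equal")
else
    println("not equal, or null")
end
```

Choosing `false` as the fallback is itself an isnull-style decision, which is Lex's point: the null check does not go away, it just moves.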

Cheers
Lex

 


 [...]



Re: [julia-users] Julia REPL segfaults non-deterministically...

2015-01-06 Thread Tomas Lycken
OK, thanks for the reference. I'll keep an eye out and post there if I find 
something.

// T

On Tuesday, January 6, 2015 2:07:56 PM UTC+1, Isaiah wrote:

 Looks like:
 https://github.com/JuliaLang/julia/issues/8550

 If you can find a semi-reproducible test case and/or gdb backtrace, that 
 would be great.

 On Mon, Jan 5, 2015 at 7:59 PM, Tomas Lycken tomas@gmail.com 
 javascript: wrote:

 I just got the following in the REPL:

 julia module Foo

type Bar{T} end

end

 signal (11): Segmentation fault
 unknown function (ip: -716631494)
 jl_get_binding at /opt/julia-0.4/usr/bin/../lib/libjulia.so (unknown line)
 jl_get_global at /opt/julia-0.4/usr/bin/../lib/libjulia.so (unknown line)
 jl_module_run_initializer at /opt/julia-0.4/usr/bin/../lib/libjulia.so 
 (unknown line)
 unknown function (ip: -716770303)
 unknown function (ip: -716771435)
 jl_toplevel_eval_in at /opt/julia-0.4/usr/bin/../lib/libjulia.so (unknown 
 line)
 eval_user_input at REPL.jl:54
 jlcall_eval_user_input_42363 at  (unknown line)
 jl_apply_generic at /opt/julia-0.4/usr/bin/../lib/libjulia.so (unknown line)
 anonymous at task.jl:96
 unknown function (ip: -716815279)
 unknown function (ip: 0)

 I’ve seen segfaults a couple of times since I last built Julia from 
 source, but I was always too concentrated on what I was doing to try to pin 
 it down - but this time I wasn’t =) I hadn’t done anything in that session 
 except a few Pkg commands, but I couldn’t reproduce the problem either in a 
 fresh REPL or in one where I ran exactly the same Pkg commands in the same 
 sequence before doing that. Is this a known problem, or should I report it 
 on github?

 julia versioninfo()
 Julia Version 0.4.0-dev+2416
 Commit db3fa60 (2015-01-02 18:23 UTC)
 Platform Info:
   System: Linux (x86_64-linux-gnu)
   CPU: Intel(R) Core(TM) i7-4600U CPU @ 2.10GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
   LAPACK: libopenblas
   LIBM: libopenlibm
   LLVM: libLLVM-3.3

 // T
 ​




Re: [julia-users] Re: Nullable use cases / expected behavior?

2015-01-06 Thread Tomas Lycken
I think many of the questions raised in this thread can be answered by 
considering the history behind why Nullable{T} was introduced in the first 
place; to replace NAtype and NA, from the DataArrays package. As such, 
Nullable{T} is supposed to be used more as Milan describes than as a 
drop-in replacement for Python's None - the idea is rather to have a wrapper 
type for data that, if it is missing (which is what NA signalled), 
poisons all calculations to return a missing value instead of the result.

Thus, equality with null should better be defined as

(==){T}(a::Nullable{T}, b::T) = !isnull(a) ? Nullable(get(a) == b) : 
Nullable{Bool}()

This definition will be type-stable (it will always return a 
Nullable{Bool}) and it will be able to signal all three possible results; 
get(a) == b, get(a) != b and get(a) == null.
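For illustration only (my sketch, not code from the thread; Nullable dates to Julia 0.4), the three outcomes of such a method would look like:

```julia
# Assuming the (==){T}(a::Nullable{T}, b::T) method defined above.
Nullable(3) == 3        # Nullable(true)
Nullable(4) == 3        # Nullable(false)
Nullable{Int}() == 3    # Nullable{Bool}() -- the "unknown" outcome
```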

Now, for a sum function, it becomes a little less trivial: how do we treat 
missing data? On one hand, we could argue that if all values are not known, 
then the sum is not known either, and we should return null. On the other 
hand, it might be more useful to return the sum of all non-null values. 
Either way, we should make sure to do something that is type-stable. Naïve 
implementations could look like

# if any nulls, return null:
function sum{T}(A::Array{Nullable{T},1})
    s = zero(T)
    @inbounds for i = 1:length(A)
        if isnull(A[i])
            return Nullable{T}()
        else
            s += get(A[i])
        end
    end
    return Nullable(s)
end

# just ignore null values:
function sum{T}(A::Array{Nullable{T},1})
    s = zero(T)
    @inbounds for i = 1:length(A)
        if !isnull(A[i])
            s += get(A[i])
        end
    end
    return Nullable(s)
end

If you take a look at the DataArrays package 
(https://github.com/JuliaStats/DataArrays.jl) you'll find lots of examples 
of functions like this for NA; you'll also notice that many of them are 
*not*  type stable, which - as stated above - is the original reason for 
the Nullable{T} type in the first place.

// T

On Tuesday, January 6, 2015 2:32:07 PM UTC+1, Seth wrote:



 On Tuesday, January 6, 2015 4:43:16 AM UTC-8, Milan Bouchet-Valat wrote:


  
  Yeah, (==){T}(a::Nullable{T}, b::T) should be able to be defined as 
  !isnull(a) && get(a) == b 
 I'd consider this definition (which is different from the ones I 
 suggested above) as unsafe: if `a` is `null`, then you silently get 
 `false`. Better provide additional safety by either returning a 
 `Nullable`, or raising an exception. 


 But - if null is just another legitimate value, why wouldn't it make 
 sense to define null != a for all (a != null)? Why must we treat it as 
 some sort of special abstraction? We don't do this with, say, imaginary 
 numbers. (Identity for null is a separate issue).



[julia-users] Build troubles

2015-01-06 Thread samuel_ainsworth
Forgive me if this is a noob question but I'm having some trouble building 
Julia. I'm on commit a318578. Running `make' gives me:

$ make
CC src/task.o
task.c(352): catastrophic error: Cannot match asm operand constraint
compilation aborted for task.c (code 1)
make[2]: *** [task.o] Error 1
make[1]: *** [julia-release] Error 2
make: *** [release] Error 2

Make.user:
USEICC = 1
USEIFC = 1
USE_INTEL_MKL = 1
USE_INTEL_MKL_FFT = 1
USE_INTEL_LIBM = 1

JULIA_CPU_TARGET = core2

Weirdly, the exact same setup works fine compiling v0.3. Does anyone have 
any ideas?

Thanks,
Samuel


Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Petr Krysl
Rob,

Thanks. I did find some .mem files (see above). Not for my own source files 
though.

Petr

PS: You have a fineale book? Interesting... I thought no one else had 
claimed that name for a software project before...

On Tuesday, January 6, 2015 2:46:26 PM UTC-8, Rob J Goedman wrote:

 Petr,

 Not sure if this helps you, but below sequence creates the .mem file.

 ProjDir is set in Ex07.jl and is the directory that contains the .mem file

 Regards,
 Rob J. Goedman
 goe...@mac.com


 Robs-MacBook-Pro:~ rob$ clear; julia  --track-allocation=user

 [Julia 0.3.4 (2014-12-26) startup banner, x86_64-apple-darwin13.4.0]

 julia> include("/Users/rob/.julia/v0.3/FinealeBook/Examples/Fineale/Ch02/Ex07.jl")

 julia> cd(ProjDir)

 julia> clear_malloc_data()

 julia> include("/Users/rob/.julia/v0.3/FinealeBook/Examples/Fineale/Ch02/Ex07.jl")

 shell> ls
 Ex07.jl Ex07.svg Ex08.svg Ex09.svg Section2.3.svg
 Ex07.jl.mem Ex08.jl Ex09.jl Section2.3.jl Section2.4.nb

 On Jan 6, 2015, at 2:15 PM, Petr Krysl krysl...@gmail.com wrote:

 I did this as suggested. The code  executed as shown below, preceded by 
 the command line.
 The process completes,  but there are no .mem files anywhere. Should I ask 
 for them specifically?

 # C:\Users\pkrysl\AppData\Local\Julia-0.4.0-dev\bin\julia.exe 
 --track-allocation=all memory_debugging.jl
 cd("C:/Users/pkrysl/Documents/GitHub/jfineale"); include("JFinEALE.jl");
 include("examples/acoustics/sphere_scatterer_example.jl")
 Profile.clear_malloc_data()
 include("examples/acoustics/sphere_scatterer_example.jl")
 quit()



 On Tuesday, January 6, 2015 1:50:11 AM UTC-8, Milan Bouchet-Valat wrote:

 Le lundi 05 janvier 2015 à 20:48 -0800, Petr Krysl a écrit : 
  Hi guys, 
  
  How does one figure out where allocation  of memory occurs?   When I 
  use the @time  macro it tells me there's a lot of memory allocation 
  and deallocation going on.  Just looking at the code I'm at a loss: I 
  can't see the reasons for it there. 
  
  So, what are the tips and tricks for the curious?  How do I debug the 
  memory allocation issue?  I looked at the lint, the type check, and 
  the code_typed().  Perhaps I don't know where to look, but  these 
  didn't seem to be of much help. 
 See this: 

 http://docs.julialang.org/en/latest/manual/profile/#memory-allocation-analysis
  

 (Would probably be good to backport to the 0.3 manual...) 


 Regards 




Re: [julia-users] Running mean/median in Julia

2015-01-06 Thread Steven G. Johnson


On Tuesday, January 6, 2015 12:37:24 PM UTC-5, Tim Holy wrote:

 For running mean, cumsum gives you an easy approach, if you don't mind a 
 little floating-point error. 


Yikes, just noticed that cumsum is significantly less accurate than sum; 
basically, cumsum is no better than naive summation, whereas the intention 
was to get pairwise-summation accuracy.  This is fixed in #9650.
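For reference, pairwise summation recursively splits the array so that rounding error grows roughly as O(log n) rather than the O(n) of a left-to-right loop. A minimal sketch (my own, not the Base implementation):

```julia
# Sketch of pairwise (cascade) summation. Below a small cutoff the
# recursion overhead outweighs the accuracy gain, so fall back to a
# plain accumulation loop on small blocks.
function pairwise_sum(x::Vector{Float64}, lo::Int=1, hi::Int=length(x))
    if hi - lo < 128            # cutoff: simple loop on small blocks
        s = 0.0
        for i in lo:hi
            s += x[i]
        end
        return s
    end
    mid = (lo + hi) >>> 1       # split in half and sum each side
    return pairwise_sum(x, lo, mid) + pairwise_sum(x, mid + 1, hi)
end
```

A cumsum with the same property is harder, since every prefix must be produced, which is presumably what the referenced fix addresses.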


Re: [julia-users] Julia backslash performance vs MATLAB backslash

2015-01-06 Thread Stefan Karpinski
2-clause BSD is basically MIT-equivalent, so that works.

On Tue, Jan 6, 2015 at 2:49 PM, Tim Davis da...@tamu.edu wrote:

 Most of my code in SuiteSparse is under my copyright, not the University
 of Florida.  (I'm now at Texas A&M by the way ...
 http://faculty.cse.tamu.edu/davis )

 Most of SuiteSparse is LGPL or GPL, but the Factorize package itself is
 2-clause BSD (attached).

 So you can use the Factorize package as you wish.  The Factorize does
 connect to sparse Cholesky (chol in MATLAB),
 sparse LU, etc, but those are different packages (and all of them are GPL
 or LGPL).  The backslash polyalgorithm is in
 Factorize, however, and is thus 2-clause BSD.





 On Mon, Jan 5, 2015 at 10:29 PM, Viral Shah vi...@mayin.org wrote:

 This is similar to the FFTW situation, where the license is held by MIT.

 -viral

  On 06-Jan-2015, at 8:14 am, Viral Shah vi...@mayin.org wrote:
 
  I believe that it is University of Florida that owns the copyright and
 they would lose licencing revenue. I would love it too if we could have
 these under the MIT licence, but it may not be a realistic expectation.
 
  Looking at the paper is the best way to go. Jiahao has already produced
 the pseudo code in the issue, and we do similar things in our dense \.
 
  -viral
 
  On 6 Jan 2015 07:31, Kevin Squire kevin.squ...@gmail.com wrote:
  Since Tim wrote the code (presumably?), couldn't he give permission to
 license it under MIT?  (Assuming he was okay with that, of course!).
 
  Cheers,
 Kevin
 
  On Mon, Jan 5, 2015 at 3:09 PM, Stefan Karpinski ste...@karpinski.org
 wrote:
  A word of legal caution: Tim, I believe some (all?) of your SuiteSparse
 code is GPL and since Julia is MIT (although not all libraries are), we can
 look at pseudocode but not copy GPL code while legally keeping the MIT
 license on Julia's standard library.
 
  Also, thanks so much for helping with this.
 
 
  On Mon, Jan 5, 2015 at 4:09 PM, Ehsan Eftekhari e.eftekh...@gmail.com
 wrote:
  Following your advice, I tried the code again, this time I also used
 MUMPS solver from https://github.com/lruthotto/MUMPS.jl
  I used a 42x43x44 grid. These are the results:
 
  MUMPS: elapsed time: 2.09091471 seconds
  lufact: elapsed time: 5.01038297 seconds (9952832 bytes allocated)
  backslash: elapsed time: 16.604061696 seconds (80189136 bytes
 allocated, 0.45% gc time)
 
  and in Matlab:
  Elapsed time is 5.423656 seconds.
 
  Thanks a lot Tim and Viral for your quick and helpful comments.
 
  Kind regards,
  Ehsan
 
 
  On Monday, January 5, 2015 9:56:12 PM UTC+1, Viral Shah wrote:
  Thanks, that is great. I was wondering about the symmetry checker - we
 have the naive one currently, but I can just use the CHOLMOD one now.
 
  -viral
 
 
 
   On 06-Jan-2015, at 2:22 am, Tim Davis da...@tamu.edu wrote:
  
   oops.  Yes, your factorize function is broken.  You might try mine
 instead, in my
   factorize package.
  
   I have a symmetry-checker in CHOLMOD.  It checks if the matrix is
 symmetric and
   with positive diagonals.  I think I have a MATLAB interface for it
 too.  The code is efficient,
   since it doesn't form A transpose, and it quits early as soon as
 asymmetry is detected.
  
   It does rely on the fact that MATLAB requires its sparse matrices to
 have sorted row indices
   in each column, however.
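A minimal dense sketch of the early-quit check Tim describes (illustration only: the real CHOLMOD routine scans the sparse pattern column by column and never forms A'):

```julia
# Early-quit check: symmetric with strictly positive diagonal.
function is_sym_posdiag(A::AbstractMatrix)
    size(A, 1) == size(A, 2) || return false
    n = size(A, 1)
    for j in 1:n
        real(A[j, j]) > 0 || return false       # diagonal must be positive
        for i in (j + 1):n
            A[i, j] == A[j, i] || return false  # quit at first asymmetry
        end
    end
    return true
end
```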
  
   On Mon, Jan 5, 2015 at 2:43 PM, Viral Shah vi...@mayin.org wrote:
   Tim - thanks for the reference. The paper will come in handy. This is
  a longstanding issue that we just haven’t got around to addressing yet, 
 but perhaps now is a good time.
  
   https://github.com/JuliaLang/julia/issues/3295
  
   We have a very simplistic factorize() for sparse matrices that must
 have been implemented as a stopgap. This is what it currently does and that
 explains everything.
  
   # placing factorize here for now. Maybe add a new file
   function factorize(A::SparseMatrixCSC)
   m, n = size(A)
   if m == n
   Ac = cholfact(A)
    Ac.c.minor == m && ishermitian(A) && return Ac
   end
   return lufact(A)
   end
  
   -viral
  
  
  
On 06-Jan-2015, at 1:57 am, Tim Davis da...@tamu.edu wrote:
   
That does sound like a glitch in the \ algorithm, rather than in
 UMFPACK.  The OpenBLAS is pretty good.
   
This is very nice in Julia:
   
F = lufact (d[M]) ; F \ d
   
That's a great idea to have a factorization object like that.  I
 have a MATLAB toolbox that does
the same thing, but it's not a built-in function inside MATLAB.
 It's written in M, so it can be slow for
small matrices.   With it, however, I can do:
   
F = factorize (A) ;% does an LU, Cholesky, QR, SVD, or
 whatever.  Uses my polyalgorithm for \.
x = F\b ;
   
I can do S = inverse(A); which returns a factorization, not an
 inverse, but with a flag
set so that S*b does A\b (and yes, S\b would do A*b, since S keeps
 a copy of A inside it, as well).
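The inverse-as-factorization idea can be sketched in Julia like this (hypothetical type and function names; current stdlib spelling is used, where a 0.3-era version would call `lufact` from Base):

```julia
using LinearAlgebra  # provides lu(); in 0.3-era Julia this was Base.lufact

# S = inverse(A) stores a factorization plus A itself, so that
# S * b solves A \ b and S \ b applies A * b, as described above.
struct LazyInverse{TA,TF}
    A::TA  # original matrix, kept so S \ b can multiply by A
    F::TF  # factorization object, so S * b can solve with A
end

inverse(A) = LazyInverse(A, lu(A))

Base.:*(S::LazyInverse, b) = S.F \ b  # "multiplying by the inverse" = solve
Base.:\(S::LazyInverse, b) = S.A * b  # "dividing by the inverse" = multiply
```

Usage: `S = inverse(A); S * b` returns the same vector as `A \ b`, without ever forming an explicit inverse.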
   
You can also specify the factorization, such as
   
 

Re: [julia-users] Re: Is there a julia user group in New York area?

2015-01-06 Thread Stefan Karpinski
I'm actually planning on jump-starting this and having the first meetup in
late January (I'll be giving the first talk). I've been working on lining
up six speakers so that we have a full half year – I'm having trouble
getting anyone for February though.

On Sun, Jan 4, 2015 at 8:55 PM, jgabriele...@gmail.com wrote:

 On Saturday, January 3, 2015 11:48:48 AM UTC-5, Tony Fong wrote:

 I'm planning a trip in late Jan. It'd be nice to be able to connect.


 The [Julia Community page](http://julialang.org/community/) has a meetups
 section, which leads to http://www.meetup.com/julia-nyc/.




Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Petr Krysl
I did this as suggested. The code  executed as shown below, preceded by the 
command line.
The process completes,  but there are no .mem files anywhere. Should I ask 
for them specifically?

# C:\Users\pkrysl\AppData\Local\Julia-0.4.0-dev\bin\julia.exe 
--track-allocation=all memory_debugging.jl
cd("C:/Users/pkrysl/Documents/GitHub/jfineale"); include("JFinEALE.jl");
include("examples/acoustics/sphere_scatterer_example.jl")
Profile.clear_malloc_data()
include("examples/acoustics/sphere_scatterer_example.jl")
quit()



On Tuesday, January 6, 2015 1:50:11 AM UTC-8, Milan Bouchet-Valat wrote:

 Le lundi 05 janvier 2015 à 20:48 -0800, Petr Krysl a écrit : 
  Hi guys, 
  
  How does one figure out where allocation  of memory occurs?   When I 
  use the @time  macro it tells me there's a lot of memory allocation 
  and deallocation going on.  Just looking at the code I'm at a loss: I 
  can't see the reasons for it there. 
  
  So, what are the tips and tricks for the curious?  How do I debug the 
  memory allocation issue?  I looked at the lint, the type check, and 
  the code_typed().  Perhaps I don't know where to look, but  these 
  didn't seem to be of much help. 
 See this: 

 http://docs.julialang.org/en/latest/manual/profile/#memory-allocation-analysis
  

 (Would probably be good to backport to the 0.3 manual...) 


 Regards 



Re: [julia-users] Julia backslash performance vs MATLAB backslash

2015-01-06 Thread Tony Kelman
That's good to hear on the BSD license, and thanks for correcting our 
misunderstanding of the SuiteSparse licensing situation.

One thing I'd like us to be clearer about in our notation for Julia is when 
we say ldltfact, we only have a modified-Cholesky, diagonal-D variant of 
the LDL^T factorization hooked up. In the Factorize paper and in Matlab, 
ldl is a Bunch-Kaufman LDL^T where D is a block-diagonal matrix with either 
1x1 or 2x2 blocks. Most of the people in this thread are well aware of 
this, but for the sake of any who aren't I'd like us to make the 
distinction more clear. Bunch-Kaufman block-diagonal LDL^T can handle 
symmetric indefinite matrices, and do useful things like give you the 
inertia (number of positive, negative, and zero eigenvalues) of the matrix. 
This is mandatory in many optimization applications. Modified-Cholesky with 
diagonal D can only handle semidefinite matrices, or with extra care, some 
special classes like quasidefinite that appear in a subset (e.g. convex 
optimization) of symmetric problems.

I believe CSparse does have a Bunch-Kaufman LDL^T implementation, but it's 
not as high-performance as say Cholmod or UMFPACK. MATLAB uses HSL MA57 for 
sparse ldl, which is an excellent well-regarded piece of Fortran code but 
unfortunately is not under a redistributable license. The public-domain 
MUMPS code has this functionality and is exposed by several Julia packages, 
none of which currently meet the cross-platform build system requirements 
that Base Julia has.

-Tony


On Tuesday, January 6, 2015 1:13:55 PM UTC-8, Stefan Karpinski wrote:

 2-clause BSD is basically MIT-equivalent, so that works.

 On Tue, Jan 6, 2015 at 2:49 PM, Tim Davis da...@tamu.edu javascript: 
 wrote:

 Most of my code in SuiteSparse is under my copyright, not the University 
 of Florida.  (I'm now at Texas A&M by the way ...
 http://faculty.cse.tamu.edu/davis )

 Most of SuiteSparse is LGPL or GPL, but the Factorize package itself is 
 2-clause BSD (attached).

 So you can use the Factorize package as you wish.  The Factorize does 
 connect to sparse Cholesky (chol in MATLAB),
 sparse LU, etc, but those are different packages (and all of them are GPL 
 or LGPL).  The backslash polyalgorithm is in
 Factorize, however, and is thus 2-clause BSD.





 On Mon, Jan 5, 2015 at 10:29 PM, Viral Shah vi...@mayin.org 
 javascript: wrote:

 This is similar to the FFTW situation, where the license is held by MIT.

 -viral

  On 06-Jan-2015, at 8:14 am, Viral Shah vi...@mayin.org javascript: 
 wrote:
 
  I believe that it is University of Florida that owns the copyright and 
 they would lose licensing revenue. I would love it too if we could have 
 these under the MIT license, but it may not be a realistic expectation.
 
  Looking at the paper is the best way to go. Jiahao has already 
 produced the pseudo code in the issue, and we do similar things in our 
 dense \.
 
  -viral
 
  On 6 Jan 2015 07:31, Kevin Squire kevin@gmail.com javascript: 
 wrote:
  Since Tim wrote the code (presumably?), couldn't he give permission to 
 license it under MIT?  (Assuming he was okay with that, of course!).
 
  Cheers,
 Kevin
 
  On Mon, Jan 5, 2015 at 3:09 PM, Stefan Karpinski ste...@karpinski.org 
 javascript: wrote:
  A word of legal caution: Tim, I believe some (all?) of your 
 SuiteSparse code is GPL and since Julia is MIT (although not all libraries 
 are), we can look at pseudocode but not copy GPL code while legally keeping 
 the MIT license on Julia's standard library.
 
  Also, thanks so much for helping with this.
 
 
  On Mon, Jan 5, 2015 at 4:09 PM, Ehsan Eftekhari e.eft...@gmail.com 
 javascript: wrote:
  Following your advice, I tried the code again, this time I also used 
 MUMPS solver from https://github.com/lruthotto/MUMPS.jl
  I used a 42x43x44 grid. These are the results:
 
  MUMPS: elapsed time: 2.09091471 seconds
  lufact: elapsed time: 5.01038297 seconds (9952832 bytes allocated)
  backslash: elapsed time: 16.604061696 seconds (80189136 bytes 
 allocated, 0.45% gc time)
 
  and in Matlab:
  Elapsed time is 5.423656 seconds.
 
  Thanks a lot Tim and Viral for your quick and helpful comments.
 
  Kind regards,
  Ehsan
 
 
  On Monday, January 5, 2015 9:56:12 PM UTC+1, Viral Shah wrote:
  Thanks, that is great. I was wondering about the symmetry checker - we 
 have the naive one currently, but I can just use the CHOLMOD one now.
 
  -viral
 
 
 
   On 06-Jan-2015, at 2:22 am, Tim Davis da...@tamu.edu wrote:
  
   oops.  Yes, your factorize function is broken.  You might try mine 
 instead, in my
   factorize package.
  
   I have a symmetry-checker in CHOLMOD.  It checks if the matrix is 
 symmetric and
   with positive diagonals.  I think I have a MATLAB interface for it 
 too.  The code is efficient,
   since it doesn't form A transpose, and it quits early as soon as 
 asymmetry is detected.
  
   It does rely on the fact that MATLAB requires its sparse matrices to 
 have sorted row indices

Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Tim Holy
Sounds like a bug; I'll be curious to find out what platform you're on. But 
don't do it here: please file an issue (see 
https://github.com/JuliaLang/julia/blob/master/CONTRIBUTING.md#how-to-file-a-bug-report).

Also note that there were some recent improvements in the ability to 
distinguish user-code and base (https://github.com/JuliaLang/julia/pull/9581 
), but they're not available in 0.3 yet. So if you're using 0.3, there may be 
some inaccuracies.

--Tim

On Tuesday, January 06, 2015 02:59:11 PM Petr Krysl wrote:
 Actually, correction: for 0.3.4 the _system_ *.mem files are in the julia
 folders. For _my_ source files the .mem files cannot be located.
 
 P
 
 On Tuesday, January 6, 2015 2:15:02 PM UTC-8, Petr Krysl wrote:
  I did this as suggested. The code  executed as shown below, preceded by
  the command line.
  The process completes,  but there are no .mem files anywhere. Should I ask
  for them specifically?
  
  # C:\Users\pkrysl\AppData\Local\Julia-0.4.0-dev\bin\julia.exe
  --track-allocation=all memory_debugging.jl
  cd("C:/Users/pkrysl/Documents/GitHub/jfineale"); include("JFinEALE.jl");
  include("examples/acoustics/sphere_scatterer_example.jl")
  Profile.clear_malloc_data()
  include("examples/acoustics/sphere_scatterer_example.jl")
  quit()
  
  On Tuesday, January 6, 2015 1:50:11 AM UTC-8, Milan Bouchet-Valat wrote:
  Le lundi 05 janvier 2015 à 20:48 -0800, Petr Krysl a écrit :
   Hi guys,
   
   How does one figure out where allocation  of memory occurs?   When I
   use the @time  macro it tells me there's a lot of memory allocation
   and deallocation going on.  Just looking at the code I'm at a loss: I
   can't see the reasons for it there.
   
   So, what are the tips and tricks for the curious?  How do I debug the
   memory allocation issue?  I looked at the lint, the type check, and
   the code_typed().  Perhaps I don't know where to look, but  these
   didn't seem to be of much help.
  
  See this:
  
  http://docs.julialang.org/en/latest/manual/profile/#memory-allocation-analysis
  
  (Would probably be good to backport to the 0.3 manual...)
  
  
  Regards



Re: [julia-users] Re: [ANN] Blink.jl – Web-based GUIs for Julia

2015-01-06 Thread Mike Innes
That's very cool. You should definitely package this up if you can. The
JS-on-top approach might actually make it easier to package up a Julia app,
at least in the short term. (Also, if you don't want to call julia.eval
every time, it should be easy to hook up the Julia instance to Juno and use
it as a repl).

The Blink.jl model turns out to work quite well for us – since it's
basically a thin layer over a Julia server + browser window, it should be
easy to serve Blink.jl apps both locally and over the internet, which will
open up some interesting possibilities. It does hurt ease-of-use a little
though, so I'd be happy to see alternative approaches crop up.

On 7 January 2015 at 00:31, Jeff Waller truth...@gmail.com wrote:

 Oh man, I think there might be a way!

 Inspired by this because you know Atom is essentially node + chromium, I
 tried

 git clone node-julia
 and then

 bizarro% cd node-julia/

 bizarro% HOME=~/.atom-shell-gyp node-gyp rebuild --target=0.19.5
 --arch=x64 --dist-url=
 https://gh-contractor-zcbenz.s3.amazonaws.com/atom-shell/dist

 that 0.19.5 value is critical and I ended up just trying the versions at
 random

 linked node-julia in
 pwd

 /Applications/Atom.app/Contents/Resources/app/node_modules
 bizarro% ls -l node-julia
 lrwxr-xr-x  1 jeffw  staff  32 Jan  6 18:10 node-julia -> /Users/jeffw/src
 /atom/node-julia


 and then finally within the javascript REPL in Atom
 var julia = require('node-julia');
 undefined
 julia.exec('rand',200);
 Array[200]


 and then (bonus)
 julia.eval('using Gadfly')
 JRef {getHIndex:function}__proto__: JRef
 julia.eval('plot(rand(10)');


 that last part didn't work of course but it didn't crash though and maybe
 with a little more...  A julia engine within Atom.  Would that be useful?
 I'm not sure what you guys are wanting to do, but maybe some collaboration?

 -Jeff



Re: [julia-users] Re: [ANN] Blink.jl – Web-based GUIs for Julia

2015-01-06 Thread Jeff Waller


 I'd be interested in getting a Julia engine in Atom, but I would not be so 
 interested in Julia for visualization when, unless I'm mistaken, at that 
 point you can use d3 directly. That would be cool if true. Is it? Can we 
 get the Julia.eval to return a javascript array? Getting Julia and 
 javascript working side by side in the same console would be pretty awesome.


Hi Eric,

I haven't done much with D3 myself, but I work with a number of people who do; I'll 
ask what limitations there are, if any.  If all else fails, there's always 
https://www.npmjs.com/package/d3, so long as the JavaScript engine can be fed.

Now as for JavaScript arrays: yes, that's what Julia arrays and tuples are 
mapped to.  There are some subtleties, which are documented here 
http://node-julia.readme.io/v0.2.3/docs/datatype-mapping.  For arrays of 
primitive unboxed types, I'm planning in the next version on changing the 
datatype mapping to using JavaScript typed arrays as they are faster by at 
least an order of magnitude.  The syntax and use would be essentially the 
same though.

Yea, pretty awesome!


Re: [julia-users] How can I achieve C-like ## in Julia

2015-01-06 Thread Mike Innes
Cool. $(symbol("gen_$x")) might also be a bit more compact, now I think
about it.

On 7 January 2015 at 02:00, Chi-wei Wang cwwang...@gmail.com wrote:

 Exactly what I want! Thanks!

 Mike Innes於 2015年1月7日星期三UTC+8上午9時34分21秒寫道:

 Is $(symbol(string("gen_", x))) what you're looking for?

 On 7 January 2015 at 01:28, Chi-wei Wang cwwa...@gmail.com wrote:

 Hi, everyone.

 I'd like to do something like following C code in Julia.

 #define gen_func(x) \
 void f_##x() { \
 }

 I tried the following Julia code, but I have no idea how to get 'gen_'
 appended in front of the function name.

 macro gen_func(x)
   return esc(quote
 function $x()
 # do something
 end
   end)
 end







Re: [julia-users] Julia backslash performance vs MATLAB backslash

2015-01-06 Thread Tim Davis
CSparse doesn't have a proper Bunch-Kaufman LDL^T with proper 2-by-2
pivoting.  It just allows for 1-by-1 pivots, and it doesn't check to see if
they are small.  MA57 is the best code out there for that, but if MUMPS can
do it then that would be the best option.  My SuiteSparse is missing this
feature.

On Tue, Jan 6, 2015 at 5:00 PM, Tony Kelman t...@kelman.net wrote:

 That's good to hear on the BSD license, and thanks for correcting our
 misunderstanding of the SuiteSparse licensing situation.

 One thing I'd like us to be clearer about in our notation for Julia is
 when we say ldltfact, we only have a modified-Cholesky, diagonal-D variant
 of the LDL^T factorization hooked up. In the Factorize paper and in Matlab,
 ldl is a Bunch-Kaufman LDL^T where D is a block-diagonal matrix with either
 1x1 or 2x2 blocks. Most of the people in this thread are well aware of
 this, but for the sake of any who aren't I'd like us to make the
 distinction more clear. Bunch-Kaufman block-diagonal LDL^T can handle
 symmetric indefinite matrices, and do useful things like give you the
 inertia (number of positive, negative, and zero eigenvalues) of the matrix.
 This is mandatory in many optimization applications. Modified-Cholesky with
 diagonal D can only handle semidefinite matrices, or with extra care, some
 special classes like quasidefinite that appear in a subset (e.g. convex
 optimization) of symmetric problems.

 I believe CSparse does have a Bunch-Kaufman LDL^T implementation, but it's
 not as high-performance as say Cholmod or UMFPACK. MATLAB uses HSL MA57 for
 sparse ldl, which is an excellent well-regarded piece of Fortran code but
 unfortunately is not under a redistributable license. The public-domain
 MUMPS code has this functionality and is exposed by several Julia packages,
 none of which currently meet the cross-platform build system requirements
 that Base Julia has.

 -Tony


 On Tuesday, January 6, 2015 1:13:55 PM UTC-8, Stefan Karpinski wrote:

 2-clause BSD is basically MIT-equivalent, so that works.

 On Tue, Jan 6, 2015 at 2:49 PM, Tim Davis da...@tamu.edu wrote:

 Most of my code in SuiteSparse is under my copyright, not the University
 of Florida.  (I'm now at Texas A&M by the way ...
 http://faculty.cse.tamu.edu/davis )

 Most of SuiteSparse is LGPL or GPL, but the Factorize package itself is
 2-clause BSD (attached).

 So you can use the Factorize package as you wish.  The Factorize does
 connect to sparse Cholesky (chol in MATLAB),
 sparse LU, etc, but those are different packages (and all of them are
 GPL or LGPL).  The backslash polyalgorithm is in
 Factorize, however, and is thus 2-clause BSD.





 On Mon, Jan 5, 2015 at 10:29 PM, Viral Shah vi...@mayin.org wrote:

 This is similar to the FFTW situation, where the license is held by MIT.

 -viral

  On 06-Jan-2015, at 8:14 am, Viral Shah vi...@mayin.org wrote:
 
  I believe that it is University of Florida that owns the copyright
 and they would lose licensing revenue. I would love it too if we could have
 these under the MIT license, but it may not be a realistic expectation.
 
  Looking at the paper is the best way to go. Jiahao has already
 produced the pseudo code in the issue, and we do similar things in our
 dense \.
 
  -viral
 
  On 6 Jan 2015 07:31, Kevin Squire kevin@gmail.com wrote:
  Since Tim wrote the code (presumably?), couldn't he give permission
 to license it under MIT?  (Assuming he was okay with that, of course!).
 
  Cheers,
 Kevin
 
  On Mon, Jan 5, 2015 at 3:09 PM, Stefan Karpinski 
 ste...@karpinski.org wrote:
  A word of legal caution: Tim, I believe some (all?) of your
 SuiteSparse code is GPL and since Julia is MIT (although not all libraries
 are), we can look at pseudocode but not copy GPL code while legally keeping
 the MIT license on Julia's standard library.
 
  Also, thanks so much for helping with this.
 
 
  On Mon, Jan 5, 2015 at 4:09 PM, Ehsan Eftekhari e.eft...@gmail.com
 wrote:
  Following your advice, I tried the code again, this time I also used
 MUMPS solver from https://github.com/lruthotto/MUMPS.jl
  I used a 42x43x44 grid. These are the results:
 
  MUMPS: elapsed time: 2.09091471 seconds
  lufact: elapsed time: 5.01038297 seconds (9952832 bytes allocated)
  backslash: elapsed time: 16.604061696 seconds (80189136 bytes
 allocated, 0.45% gc time)
 
  and in Matlab:
  Elapsed time is 5.423656 seconds.
 
  Thanks a lot Tim and Viral for your quick and helpful comments.
 
  Kind regards,
  Ehsan
 
 
  On Monday, January 5, 2015 9:56:12 PM UTC+1, Viral Shah wrote:
  Thanks, that is great. I was wondering about the symmetry checker -
 we have the naive one currently, but I can just use the CHOLMOD one now.
 
  -viral
 
 
 
   On 06-Jan-2015, at 2:22 am, Tim Davis da...@tamu.edu wrote:
  
   oops.  Yes, your factorize function is broken.  You might try mine
 instead, in my
   factorize package.
  
   I have a symmetry-checker in CHOLMOD.  It checks if the matrix is
 symmetric and
   with 

Re: [julia-users] Speeding up Floating Point Operations in Julia

2015-01-06 Thread Christoph Ortner
Maybe run 

testf()

then 

tic()
testf()
toc()

so that the code is compiled first? Just a guess

   Christoph


On Wednesday, 7 January 2015 02:11:55 UTC, Rodolfo Santana wrote:

 Thanks for the replies, I really appreciate it! I don't fully understand 
 the solution though. I did

 function testf()
 My Code
 end

 Then, when I do the commands shown below, I still get 1e-3 seconds. 

 tic()
 testf()
 toc()



 On Tuesday, January 6, 2015 7:48:02 PM UTC-6, Joshua Adelman wrote:


 When I just stick this whole thing in a function (as is recommended by 
 the performance tips section of the docs), it goes from 0.03 seconds to 
 1.2e-6 seconds. Literally:

 function testf()
 your code
 end

 I’ve just started playing around with Julia myself, and I’ve definitely 
 appreciated that there is, what feels like, a very small set of rules to 
 follow to get good performance. 

 Josh

 On January 6, 2015 at 8:38:44 PM, Rodolfo Santana (santa...@gmail.com) 
 wrote:

 # Electron mass in grams
 m_e = 9.1094e-28 
 # Speed of light in cm/sec
 c = 2.9979e10 

 # Electron momentum direction in Cartesian coordinates
 v10elec = 1.0/sqrt(2.0)
 v20elec = -1.0/sqrt(3.0)
 v30elec = 1.0/sqrt(6.0)

 # Photon momentum direction in Cartesian coordinates
 omega_one_phot = -1.0/sqrt(3.0) 
 omega_two_phot = 1.0/sqrt(4.0) 
 omega_three_phot = sqrt(5.0/12.0) 

 # Dimensionless electron speed and Lorentz factor
 beta_e = 0.98 ; 
 gamma_e = 1.0/sqrt(1-beta_e*beta_e)

 # Photon energy in ergs
 E_phot_comv = 1.6022e-9

 # Angle between electron and photon momentum 
 mu = v10elec*omega_one_phot + v20elec*omega_two_phot + 
 v30elec*omega_three_phot

 # Dimensionless energy of photon in electron rest frame
 tic()
 x = (2.0*gamma_e*E_phot_comv*(1-mu*beta_e))/(m_e*c*c)
 toc()



Re: [julia-users] How can I achieve C-like ## in Julia

2015-01-06 Thread Chi-wei Wang
May I ask why $ is needed before symbol()?

Mike Innes於 2015年1月7日星期三UTC+8上午10時02分27秒寫道:

 Cool. $(symbol("gen_$x")) might also be a bit more compact, now I think 
 about it.

 On 7 January 2015 at 02:00, Chi-wei Wang cwwa...@gmail.com javascript: 
 wrote:

 Exactly what I want! Thanks!

 Mike Innes於 2015年1月7日星期三UTC+8上午9時34分21秒寫道:

 Is $(symbol(string("gen_", x))) what you're looking for?

 On 7 January 2015 at 01:28, Chi-wei Wang cwwa...@gmail.com wrote:

 Hi, everyone.

 I'd like to do something like following C code in Julia.

 #define gen_func(x) \
 void f_##x() { \
 }

 I tried the following Julia code, but I have no idea how to get 'gen_' 
 appended in front of the function name. 

 macro gen_func(x)
   return esc(quote
 function $x()
 # do something
 end
   end)
 end







Re: [julia-users] Speeding up Floating Point Operations in Julia

2015-01-06 Thread K leo
Might be slightly off-topic, but closely related.

Does anyone find the logic to run a code first just to compile it and then
do the real run afterwards somewhat flawed, or am I missing anything?

Suppose I have a code that takes a day to finish after being compiled.  So
the first run (since it is being compiled) might take say 5 days.  But
after that 5 days I have got the results and there is no need to run it the
second time.  So the supposedly fast execution after compile is not going
to be necessary anyway, and hence provides no benefits.

On Wednesday, January 7, 2015, Christoph Ortner christophortn...@gmail.com
wrote:

 Maybe run

 testf()

 then

 tic()
 testf()
 toc()

 so that the code is compiled first? Just a guess

Christoph





Re: [julia-users] Re: [ANN] Blink.jl – Web-based GUIs for Julia

2015-01-06 Thread Mike Innes
Sure, I mainly mentioned libchromiumcontent because it's what atom-shell uses, but CEF
looks good too. I actually had a go with CEF myself but with my limited C
experience it was way too fiddly – getting it working well on all platforms
would've taken me years.

Thinking about it more, Chromium uses a multi-process model anyway, so it's
possible the native api wouldn't even give us much performance benefit.
Node-webkit does some magic to make node run in the same process as the
DOM, but from what I hear it's a huge maintenance effort to keep up to date
with the latest Chrome (which is part of the reason Light Table has
switched to atom-shell as well).

On 7 January 2015 at 00:45, Tracy Wadleigh tracy.wadle...@gmail.com wrote:

 You mention in the readme about in the future possibly using Cxx.jl to
 wrap libchromiumcontent. Might you be able to avoid the need for Cxx.jl by
 using the C API of the Chromium Embedded Framework
 http://code.google.com/p/chromiumembedded/?

 On Tue, Jan 6, 2015 at 7:31 PM, Jeff Waller truth...@gmail.com wrote:

 Oh man, I think there might be a way!

 Inspired by this because you know Atom is essentially node + chromium, I
 tried

 git clone node-julia
 and then

 bizarro% cd node-julia/

 bizarro% HOME=~/.atom-shell-gyp node-gyp rebuild --target=0.19.5
 --arch=x64 --dist-url=
 https://gh-contractor-zcbenz.s3.amazonaws.com/atom-shell/dist

 that 0.19.5 value is critical and I ended up just trying the versions at
 random

 linked node-julia in
 pwd

 /Applications/Atom.app/Contents/Resources/app/node_modules
 bizarro% ls -l node-julia
 lrwxr-xr-x  1 jeffw  staff  32 Jan  6 18:10 node-julia -> /Users/jeffw/
 src/atom/node-julia


 and then finally within the javascript REPL in Atom
 var julia = require('node-julia');
 undefined
 julia.exec('rand',200);
 Array[200]


 and then (bonus)
 julia.eval('using Gadfly')
 JRef {getHIndex:function}__proto__: JRef
 julia.eval('plot(rand(10)');


 that last part didn't work of course but it didn't crash though and maybe
 with a little more...  A julia engine within Atom.  Would that be useful?
 I'm not sure what you guys are wanting to do, but maybe some collaboration?

 -Jeff





[julia-users] Speeding up Floating Point Operations in Julia

2015-01-06 Thread Rodolfo Santana
I am showing a Julia code below. The goal is just to calculate x in the end 
of the code and calculate how long it takes Julia to calculate x. In Julia, 
it takes about 1e-3 seconds. This is very surprising to me. I wrote the 
same code in Matlab and it only takes 2e-6 seconds and in Python it takes 
5e-6 seconds. Is there any way to speed up this calculation in Julia? I 
couldn't find the answer on the internet, any help would be much 
appreciated.

Sincerely,
-Rodolfo




# Electron mass in grams
m_e = 9.1094e-28 
# Speed of light in cm/sec
c = 2.9979e10 

# Electron momentum direction in Cartesian coordinates
v10elec = 1.0/sqrt(2.0)
v20elec = -1.0/sqrt(3.0)
v30elec = 1.0/sqrt(6.0)

# Photon momentum direction in Cartesian coordinates
omega_one_phot = -1.0/sqrt(3.0) 
omega_two_phot = 1.0/sqrt(4.0) 
omega_three_phot = sqrt(5.0/12.0) 

# Dimensionless electron speed and Lorentz factor
beta_e = 0.98 ; 
gamma_e = 1.0/sqrt(1-beta_e*beta_e)

# Photon energy in ergs
E_phot_comv = 1.6022e-9

# Angle between electron and photon momentum 
mu = v10elec*omega_one_phot + v20elec*omega_two_phot + 
v30elec*omega_three_phot

# Dimensionless energy of photon in electron rest frame
tic()
x = (2.0*gamma_e*E_phot_comv*(1-mu*beta_e))/(m_e*c*c)
toc()



Re: [julia-users] How can I achieve C-like ## in Julia

2015-01-06 Thread Chi-wei Wang
Exactly what I want! Thanks!

Mike Innes於 2015年1月7日星期三UTC+8上午9時34分21秒寫道:

 Is $(symbol(string("gen_", x))) what you're looking for?

 On 7 January 2015 at 01:28, Chi-wei Wang cwwa...@gmail.com javascript: 
 wrote:

 Hi, everyone.

 I'd like to do something like following C code in Julia.

 #define gen_func(x) \
 void f_##x() { \
 }

 I tried the following Julia code, but I have no idea how to get 'gen_' 
 appended in front of the function name. 

 macro gen_func(x)
   return esc(quote
 function $x()
 # do something
 end
   end)
 end






Re: [julia-users] Speeding up Floating Point Operations in Julia

2015-01-06 Thread Rodolfo Santana
Thanks for the replies, I really appreciate it! I don't fully understand 
the solution though. I did

function testf()
My Code
end

Then, when I do the commands shown below, I still get 1e-3 seconds. 

tic()
testf()
toc()
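The pattern Joshua and Christoph describe, spelled out as a hedged sketch (current Julia replaces tic()/toc() with @elapsed; the mu value below is a hypothetical stand-in for the dot product computed in the original code):

```julia
# Constants follow the code in this thread; mu = 0.1 is an illustrative value.
function testf()
    m_e = 9.1094e-28          # electron mass, g
    c = 2.9979e10             # speed of light, cm/s
    beta_e = 0.98
    gamma_e = 1.0 / sqrt(1 - beta_e^2)
    E_phot_comv = 1.6022e-9   # photon energy, erg
    mu = 0.1
    return (2.0 * gamma_e * E_phot_comv * (1 - mu * beta_e)) / (m_e * c * c)
end

testf()               # first call includes JIT compilation time
t = @elapsed testf()  # later calls measure only the computation
```

Timing the first call of a fresh function measures compilation, not the arithmetic, which is why the warm-up call matters.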



On Tuesday, January 6, 2015 7:48:02 PM UTC-6, Joshua Adelman wrote:


 When I just stick this whole thing in a function (as is recommended by the 
 performance tips section of the docs), it goes from 0.03 seconds to 1.2e-6 
 seconds. Literally:

 function testf()
 your code
 end

 I’ve just started playing around with Julia myself, and I’ve definitely 
 appreciated that there is, what feels like, a very small set of rules to 
 follow to get good performance. 

 Josh

 On January 6, 2015 at 8:38:44 PM, Rodolfo Santana (santa...@gmail.com 
 javascript:) wrote:

 # Electron mass in grams
 m_e = 9.1094e-28 
 # Speed of light in cm/sec
 c = 2.9979e10 

 # Electron momentum direction in Cartesian coordinates
 v10elec = 1.0/sqrt(2.0)
 v20elec = -1.0/sqrt(3.0)
 v30elec = 1.0/sqrt(6.0)

 # Photon momentum direction in Cartesian coordinates
 omega_one_phot = -1.0/sqrt(3.0) 
 omega_two_phot = 1.0/sqrt(4.0) 
 omega_three_phot = sqrt(5.0/12.0) 

 # Dimensionless electron speed and Lorentz factor
 beta_e = 0.98 ; 
 gamma_e = 1.0/sqrt(1-beta_e*beta_e)

 # Photon energy in ergs
 E_phot_comv = 1.6022e-9

 # Angle between electron and photon momentum 
 mu = v10elec*omega_one_phot + v20elec*omega_two_phot + 
 v30elec*omega_three_phot

 # Dimensionless energy of photon in electron rest frame
 tic()
 x = (2.0*gamma_e*E_phot_comv*(1-mu*beta_e))/(m_e*c*c)
 toc()



Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Petr Krysl
Thanks for the update.   I'm not getting the process to work myself.  I 
suspect it is the Windows platform that is somehow causing the trouble.  I 
will try later  on Linux. (I believe you were successful on the Mac?)

Petr

On Tuesday, January 6, 2015 3:59:45 PM UTC-8, Rob J Goedman wrote:

 Petr,

 I ran the Poisson_FE_example_model in REPL as shown below and find the 
 .mem files in the src directory and in the top-level directory.

 You were running a different example though.

 Rob J. Goedman
 goe...@mac.com

 julia> cd(Pkg.dir(homedir(), "Projects/Julia/Rob/jfineale_for_trying_out"))

 julia> include("/Users/rob/Projects/Julia/Rob/jfineale_for_trying_out/Poisson_FE_example_model.jl")
 Heat conduction example described by Amuthan A. Ramabathiran

 http://www.codeproject.com/Articles/579983/Finite-Element-programming-in-Julia
 :
 Unit square, with known temperature distribution along the boundary, 
 and uniform heat generation rate inside.  Mesh of regular TRIANGLES,
 in a grid of N x N edges. 
 This version uses the JFinEALE algorithm module.

 Total time elapsed = 2.8418619632720947s

 julia> clear_malloc_data()

 shell> ls src
 AssemblyModule.jl HeatDiffusionAlgorithmModule.jl 
 MeshQuadrilateralModule.jl
 FEMMBaseModule.jl IntegRuleModule.jl MeshSelectionModule.jl
 FEMMHeatDiffusionModule.jl JFFoundationModule.jl MeshTriangleModule.jl
 FENodeSetModule.jl MaterialHeatDiffusionModule.jl NodalFieldModule.jl
 FESetModule.jl MeshExportModule.jl PropertyHeatDiffusionModule.jl
 ForceIntensityModule.jl MeshModificationModule.jl

 shell> ls
 JFinEALE.jl Poisson_FE_example_model.jl src
 Poisson_FE_Q4_example.jl README.md tests
 Poisson_FE_example.jl annulus_Q4_example.jl

 julia> include("/Users/rob/Projects/Julia/Rob/jfineale_for_trying_out/Poisson_FE_example_model.jl")
 Heat conduction example described by Amuthan A. Ramabathiran

 http://www.codeproject.com/Articles/579983/Finite-Element-programming-in-Julia
 :
 Unit square, with known temperature distribution along the boundary, 
 and uniform heat generation rate inside.  Mesh of regular TRIANGLES,
 in a grid of N x N edges. 
 This version uses the JFinEALE algorithm module.

 Total time elapsed = 0.017609119415283203s

 julia> CTRL-D



 Robs-MacBook-Pro:jfineale_for_trying_out rob$ 

 Robs-MacBook-Pro:~ rob$ pwd
 /Users/rob
 Robs-MacBook-Pro:~ rob$ cd Projects/Julia/Rob/jfineale_for_trying_out/
 Robs-MacBook-Pro:jfineale_for_trying_out rob$ ls src
 AssemblyModule.jl ForceIntensityModule.jl MeshModificationModule.jl
 AssemblyModule.jl.mem ForceIntensityModule.jl.mem 
 MeshQuadrilateralModule.jl
 FEMMBaseModule.jl HeatDiffusionAlgorithmModule.jl MeshSelectionModule.jl
 FEMMBaseModule.jl.mem HeatDiffusionAlgorithmModule.jl.mem 
 MeshSelectionModule.jl.mem
 FEMMHeatDiffusionModule.jl IntegRuleModule.jl MeshTriangleModule.jl
 FEMMHeatDiffusionModule.jl.mem IntegRuleModule.jl.mem 
 MeshTriangleModule.jl.mem
 FENodeSetModule.jl JFFoundationModule.jl NodalFieldModule.jl
 FENodeSetModule.jl.mem MaterialHeatDiffusionModule.jl 
 NodalFieldModule.jl.mem
 FESetModule.jl MaterialHeatDiffusionModule.jl.mem 
 PropertyHeatDiffusionModule.jl
 FESetModule.jl.mem MeshExportModule.jl PropertyHeatDiffusionModule.jl.mem
 Robs-MacBook-Pro:jfineale_for_trying_out rob$ ls
 JFinEALE.jl Poisson_FE_example_model.jl annulus_Q4_example.jl
 Poisson_FE_Q4_example.jl Poisson_FE_example_model.jl.mem src
 Poisson_FE_example.jl README.md tests
 Robs-MacBook-Pro:jfineale_for_trying_out rob$ 



  
 On Jan 6, 2015, at 3:01 PM, Petr Krysl krysl...@gmail.com wrote:

 Rob,

 Thanks. I did find some .mem files (see above). Not for my own source 
 files though.

 Petr

 PS: You have a fineale book? Interesting... I thought no one else had 
 claimed that name for a software project before...

 On Tuesday, January 6, 2015 2:46:26 PM UTC-8, Rob J Goedman wrote:

 Petr,

 Not sure if this helps you, but below sequence creates the .mem file.

 ProjDir is set in Ex07.jl and is the directory that contains the .mem file

 Regards,
 Rob J. Goedman
 goe...@mac.com


 Robs-MacBook-Pro:~ rob$ clear; julia  --track-allocation=user

                _
    _       _ _(_)_     |  A fresh approach to technical computing
   (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
    _ _   _| |_  __ _   |  Type "help()" for help.
   | | | | | | |/ _` |  |
   | | |_| | | | (_| |  |  Version 0.3.4 (2014-12-26 10:42 UTC)
  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
 |__/                   |  x86_64-apple-darwin13.4.0

 julia> include("/Users/rob/.julia/v0.3/FinealeBook/Examples/Fineale/Ch02/Ex07.jl")

 julia> cd(ProjDir)

 julia> clear_malloc_data()

 julia> include("/Users/rob/.julia/v0.3/FinealeBook/Examples/Fineale/Ch02/Ex07.jl")

 shell> ls
 Ex07.jl Ex07.svg Ex08.svg Ex09.svg Section2.3.svg
 Ex07.jl.mem Ex08.jl Ex09.jl Section2.3.jl 

[julia-users] How can I achieve C-like ## in Julia

2015-01-06 Thread Chi-wei Wang
Hi, everyone.

I'd like to do something like following C code in Julia.

#define gen_func(x) \
void f_##x() { \
}

I tried the following Julia code, but I have no idea how to get 'gen_' 
appended in front of the function name. 

macro gen_func(x)
  return esc(quote
function $x()
# do something
end
  end)
end





Re: [julia-users] Speeding up Floating Point Operations in Julia

2015-01-06 Thread Isaiah Norton
Try putting it in a function:
http://docs.julialang.org/en/release-0.3/manual/performance-tips/


On Tue, Jan 6, 2015 at 8:38 PM, Rodolfo Santana santana9...@gmail.com
wrote:

 I am showing a Julia code below. The goal is just to calculate x in the
 end of the code and calculate how long it takes Julia to calculate x. In
 Julia, it takes about 1e-3 seconds. This is very surprising to me. I wrote
 the same code in Matlab and it only takes 2e-6 seconds and in Python it
 takes 5e-6 seconds. Is there any way to speed up this calculation in Julia?
 I couldn't find the answer on the internet, any help would be much
 appreciated.

 Sincerely,
 -Rodolfo




 # Electron mass in grams
 m_e = 9.1094e-28
 # Speed of light in cm/sec
 c = 2.9979e10

 # Electron momentum direction in Cartesian coordinates
 v10elec = 1.0/sqrt(2.0)
 v20elec = -1.0/sqrt(3.0)
 v30elec = 1.0/sqrt(6.0)

 # Photon momentum direction in Cartesian coordinates
 omega_one_phot = -1.0/sqrt(3.0)
 omega_two_phot = 1.0/sqrt(4.0)
 omega_three_phot = sqrt(5.0/12.0)

 # Dimensionless electron speed and Lorentz factor
 beta_e = 0.98 ;
 gamma_e = 1.0/sqrt(1-beta_e*beta_e)

 # Photon energy in ergs
 E_phot_comv = 1.6022e-9

 # Angle between electron and photon momentum
 mu = v10elec*omega_one_phot + v20elec*omega_two_phot +
 v30elec*omega_three_phot

 # Dimensionless energy of photon in electron rest frame
 tic()
 x = (2.0*gamma_e*E_phot_comv*(1-mu*beta_e))/(m_e*c*c)
 toc()




Re: [julia-users] Speeding up Floating Point Operations in Julia

2015-01-06 Thread Joshua Adelman

When I just stick this whole thing in a function (as is recommended by the 
performance tips section of the docs), it goes from 0.03 seconds to 1.2e-6 
seconds. Literally:

function testf()
your code
end

I’ve just started playing around with Julia myself, and I’ve definitely 
appreciated that there is, what feels like, a very small set of rules to follow 
to get good performance. 

Josh

On January 6, 2015 at 8:38:44 PM, Rodolfo Santana (santana9...@gmail.com) wrote:

# Electron mass in grams
m_e = 9.1094e-28 
# Speed of light in cm/sec
c = 2.9979e10 

# Electron momentum direction in Cartesian coordinates
v10elec = 1.0/sqrt(2.0)
v20elec = -1.0/sqrt(3.0)
v30elec = 1.0/sqrt(6.0)

# Photon momentum direction in Cartesian coordinates
omega_one_phot = -1.0/sqrt(3.0) 
omega_two_phot = 1.0/sqrt(4.0) 
omega_three_phot = sqrt(5.0/12.0) 

# Dimensionless electron speed and Lorentz factor
beta_e = 0.98 ; 
gamma_e = 1.0/sqrt(1-beta_e*beta_e)

# Photon energy in ergs
E_phot_comv = 1.6022e-9

# Angle between electron and photon momentum 
mu = v10elec*omega_one_phot + v20elec*omega_two_phot + v30elec*omega_three_phot

# Dimensionless energy of photon in electron rest frame
tic()
x = (2.0*gamma_e*E_phot_comv*(1-mu*beta_e))/(m_e*c*c)
toc()

Re: [julia-users] Speeding up Floating Point Operations in Julia

2015-01-06 Thread Joshua Adelman
Here's what I did:

function testf()
   # Electron mass in grams
   m_e = 9.1094e-28
   # Speed of light in cm/sec
   c = 2.9979e10

   # Electron momentum direction in Cartesian coordinates
   v10elec = 1.0/sqrt(2.0)
   v20elec = -1.0/sqrt(3.0)
   v30elec = 1.0/sqrt(6.0)

   # Photon momentum direction in Cartesian coordinates
   omega_one_phot = -1.0/sqrt(3.0)
   omega_two_phot = 1.0/sqrt(4.0)
   omega_three_phot = sqrt(5.0/12.0)

   # Dimensionless electron speed and Lorentz factor
   beta_e = 0.98 ;
   gamma_e = 1.0/sqrt(1-beta_e*beta_e)

   # Photon energy in ergs
   E_phot_comv = 1.6022e-9

   # Angle between electron and photon momentum
   mu = v10elec*omega_one_phot + v20elec*omega_two_phot + v30elec*
omega_three_phot

   # Dimensionless energy of photon in electron rest frame
   x = (2.0*gamma_e*E_phot_comv*(1-mu*beta_e))/(m_e*c*c)
 end


and then to time it, I used:

julia> @time testf()
elapsed time: 0.002718398 seconds (172020 bytes allocated)
0.028022594193490992


julia> @time testf()
elapsed time: 2.838e-6 seconds (96 bytes allocated)
0.028022594193490992

Notice that the first time it ran the time includes the time it takes for 
Julia to JIT the code, but then all subsequent runs are much faster since 
they're using the compiled/specialized code. 
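The warm-up pattern described here can be reduced to a minimal sketch (the function and values below are illustrative, not from the thread; `@elapsed` returns elapsed seconds as a `Float64`):

```julia
# Illustrative sketch: call once so the JIT compiles the method,
# then time subsequent calls, which measure execution only.
f(n) = sum(abs2, 1:n)

f(10)                  # first call triggers compilation
t = @elapsed f(10)     # later calls exclude compilation time
@assert f(10) == 385   # 1 + 4 + 9 + ... + 100 = 385
@assert t < 1.0        # warm call should be far below a second
```

The same idea is why benchmarks conventionally discard the first timing.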

Josh

On Tuesday, January 6, 2015 9:11:55 PM UTC-5, Rodolfo Santana wrote:

 Thanks for the replies, I really appreciate it! I don't fully understand 
 the solution though. I did

 function testf()
 My Code
 end

 Then, when I do the commands shown below, I still get 1e-3 seconds. 

 tic()
 testf()
 toc()



 On Tuesday, January 6, 2015 7:48:02 PM UTC-6, Joshua Adelman wrote:


 When I just stick this whole thing in a function (as is recommended by 
 the performance tips section of the docs), it goes from 0.03 seconds to 
 1.2e-6 seconds. Literally:

 function testf()
 your code
 end

 I’ve just started playing around with Julia myself, and I’ve definitely 
 appreciated that there is, what feels like, a very small set of rules to 
 follow to get good performance. 

 Josh

 On January 6, 2015 at 8:38:44 PM, Rodolfo Santana (santa...@gmail.com) 
 wrote:

 # Electron mass in grams
 m_e = 9.1094e-28 
 # Speed of light in cm/sec
 c = 2.9979e10 

 # Electron momentum direction in Cartesian coordinates
 v10elec = 1.0/sqrt(2.0)
 v20elec = -1.0/sqrt(3.0)
 v30elec = 1.0/sqrt(6.0)

 # Photon momentum direction in Cartesian coordinates
 omega_one_phot = -1.0/sqrt(3.0) 
 omega_two_phot = 1.0/sqrt(4.0) 
 omega_three_phot = sqrt(5.0/12.0) 

 # Dimensionless electron speed and Lorentz factor
 beta_e = 0.98 ; 
 gamma_e = 1.0/sqrt(1-beta_e*beta_e)

 # Photon energy in ergs
 E_phot_comv = 1.6022e-9

 # Angle between electron and photon momentum 
 mu = v10elec*omega_one_phot + v20elec*omega_two_phot + 
 v30elec*omega_three_phot

 # Dimensionless energy of photon in electron rest frame
 tic()
 x = (2.0*gamma_e*E_phot_comv*(1-mu*beta_e))/(m_e*c*c)
 toc()



Re: [julia-users] Re: [ANN] Blink.jl – Web-based GUIs for Julia

2015-01-06 Thread Eric Forgy
Oh man. I was drafting a note and lost it. Oh well. Maybe best. I can be long 
winded :)

I can see lots of cool apps built with either Julia on top (Blink.jl) or JS on 
top (node-julia) for hybrid apps.

My idea is a bit unorthodox (although not entirely original). I want to 
effectively turn the browser into a desktop. Whereas in the Blink Gadfly 
example, the plot gets launched into a new pop-up window, I'd like to launch 
the plot into an SVG-based figure window in the browser like the screenshot I 
sent earlier. 

Here is a demo video:

http://youtu.be/IriE1ZP-uOM

I've made a lot of progress since then and hope to have a new demo soon.

The Blink.jl sample on the Readme page has got me imagining a situation where 
you expand the window to full screen (making it like a desktop), open the 
console, which acts like a REPL and then launch d3-based visualizations into 
figure windows all within the browser all driven by Julia. This would give me 
everything I like about Matlab, but better.

Re: [julia-users] Speeding up Floating Point Operations in Julia

2015-01-06 Thread Tim Holy
On Wednesday, January 07, 2015 10:34:38 AM K leo wrote:
 Suppose I have a code that takes a day to finish after being compiled.  So
 the first run (since it is being compiled) might take say 5 days.

That will never happen. If you compare code that takes a second to run vs. 
code that takes a day to run, that's an 80,000-fold difference in run time. 
I've never seen such an example correspond to 80,000-fold more lines of 
code---long-running code simply tends to have more iterations. Compilation 
time is (loosely) related to the number of lines, not the running time. So 
your code that takes a day to run will compile within seconds or minutes. 
Still longer than anyone would like, but a trivial fraction of the total.

--Tim



Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Petr Krysl
I filed the report. Identical results with versions 0.3.4 and 0.4.  Windows 
7.

Petr

On Tuesday, January 6, 2015 3:06:25 PM UTC-8, Tim Holy wrote:

 Sounds like a bug; I'll be curious to find out what platform you're on. 
 But 
 don't do it here: please file an issue (see 

 https://github.com/JuliaLang/julia/blob/master/CONTRIBUTING.md#how-to-file-a-bug-report).
  


 Also note that there were some recent improvements in the ability to 
 distinguish user-code and base (
 https://github.com/JuliaLang/julia/pull/9581 
 ), but they're not available in 0.3 yet. So if you're using 0.3, there may 
 be 
 some inaccuracies. 

 --Tim 

 On Tuesday, January 06, 2015 02:59:11 PM Petr Krysl wrote: 
  Actually, correction: for 0.3.4 the _system_ *.mem files are in the 
 julia 
  folders. For _my_ source files the .mem files cannot be located. 
  
  P 
  
  On Tuesday, January 6, 2015 2:15:02 PM UTC-8, Petr Krysl wrote: 
   I did this as suggested. The code executed as shown below, preceded by 
   the command line. 
   The process completes, but there are no .mem files anywhere. Should I 
   ask for them specifically? 
    
   # C:\Users\pkrysl\AppData\Local\Julia-0.4.0-dev\bin\julia.exe --track-allocation=all memory_debugging.jl 
   cd("C:/Users/pkrysl/Documents/GitHub/jfineale"); include("JFinEALE.jl"); 
   include("examples/acoustics/sphere_scatterer_example.jl") 
   Profile.clear_malloc_data() 
   include("examples/acoustics/sphere_scatterer_example.jl") 
   quit() 
   
   On Tuesday, January 6, 2015 1:50:11 AM UTC-8, Milan Bouchet-Valat 
 wrote: 
   Le lundi 05 janvier 2015 à 20:48 -0800, Petr Krysl a écrit : 
Hi guys, 

How does one figure out where allocation  of memory occurs?   When 
 I 
use the @time  macro it tells me there's a lot of memory allocation 
and deallocation going on.  Just looking at the code I'm at a loss: 
 I 
can't see the reasons for it there. 

So, what are the tips and tricks for the curious?  How do I debug 
 the 
memory allocation issue?  I looked at the lint, the type check, and 
the code_typed().  Perhaps I don't know where to look, but  these 
didn't seem to be of much help. 
   
   See this: 
   
   
 http://docs.julialang.org/en/latest/manual/profile/#memory-allocation-ana 
   lysis 
   
   (Would probably be good to backport to the 0.3 manual...) 
   
   
   Regards 



[julia-users] Re: Why does string('a') work but convert(ASCIIString,'a') does not?

2015-01-06 Thread elextr


On Wednesday, January 7, 2015 10:51:54 AM UTC+10, Ronald L. Rivest wrote:

 Using Julia 0.3.4.  The following seems somehow inconsistent.  Is there
 something about the philosophy of `convert` I am missing??

 julia> convert(ASCIIString, 'a')

 ERROR: `convert` has no method matching convert(::Type{ASCIIString}, ::Char)

Char is a Unicode codepoint, so it cannot be guaranteed to be convertible to 
ASCIIString. You could argue that conversion to UTF8String (or the other 
Unicode string types) should work, but at least in 0.3.4 it doesn't.
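A minimal sketch of the asymmetry (note: `ASCIIString` is specific to the 0.3 line; on later Julia versions the `convert` attempt fails with `UndefVarError` rather than a missing-method error, so the sketch only checks that it throws):

```julia
# string() happily builds a one-character string from a Char...
s = string('a')
@assert s == "a"

# ...but convert(ASCIIString, 'a') throws (MethodError on 0.3.4;
# UndefVarError on versions where ASCIIString no longer exists).
err = try
    convert(ASCIIString, 'a')
    nothing
catch e
    e
end
@assert err !== nothing
```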

Cheers
Lex
 

  in convert at base.jl:13

 julia> string('a')

 "a"

 julia> typeof(ans)

 ASCIIString (constructor with 2 methods)

 Cheers,

 Ron



Re: [julia-users] How can I achieve C-like ## in Julia

2015-01-06 Thread Mike Innes
Is $(symbol(string("gen_", x))) what you're looking for?
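Mike's one-liner, fleshed out as a sketch (adapted for later Julia, where `Symbol(...)` replaces the 0.3-era lowercase `symbol(...)`; the name `foo` and the return value are illustrative):

```julia
# Build the name gen_<x> at macro-expansion time and splice it
# into the function definition, mimicking C's f_##x token pasting.
macro gen_func(x)
    fname = Symbol(string("gen_", x))
    return esc(quote
        function $fname()
            # do something
            return 42
        end
    end)
end

@gen_func foo            # defines gen_foo()
@assert gen_foo() == 42
```

The key point is that the concatenation happens on the `Symbol`, not on the expression, so the interpolated `$fname` lands where a function name is expected.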

On 7 January 2015 at 01:28, Chi-wei Wang cwwang...@gmail.com wrote:

 Hi, everyone.

 I'd like to do something like following C code in Julia.

 #define gen_func(x) \
 void f_##x() { \
 }

 I tried the following Julia code, but I have no idea how to get 'gen_'
 appended in front of the function name.

 macro gen_func(x)
   return esc(quote
 function $x()
 # do something
 end
   end)
 end






Re: [julia-users] Re: [ANN] Blink.jl – Web-based GUIs for Julia

2015-01-06 Thread Jeff Waller


On Tuesday, January 6, 2015 8:24:23 PM UTC-5, Mike Innes wrote:

 That's very cool. You should definitely package this up if you can. The 
 JS-on-top approach might actually make it easier to package up a Julia app, 
 at least in the short term. (Also, if you don't want to call julia.eval 
 every time, it should be easy to hook up the Julia instance to Juno and use 
 it as a repl).


julia.eval(string) is essentially what happens when someone types a string, and 
easy, you say? Yes, definitely! I read that Atom has an app database and a 
package manager (apm), but low-level nodejs stuff needs to interact more 
directly with Atom-shell, and it might be difficult to use the Atom-supplied 
extension framework. I'll certainly follow up.
 


 The Blink.jl model turns out to work quite well for us – since it's 
 basically a thin layer over a Julia server + browser window, it should be 
 easy to serve Blink.jl apps both locally and over the internet, which will 
 open up some interesting possibilities. It does hurt ease-of-use a little 
 though, so I'd be happy to see alternative approaches crop up.


Cool!  I read the src and it seems to boil down to the @js macro, which 
printlns a JSON object over a socket. Would it be as simple as sending to 
some sort of IO buffer instead?
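For what it's worth, any sink that supports `println` can stand in for the socket in Julia. A hypothetical sketch (this is not Blink's actual API, and the JSON payload here is made up; `String(take!(...))` is the later-Julia spelling of `takebuf_string`):

```julia
# Redirect a println-based wire protocol to an in-memory buffer.
buf = IOBuffer()
println(buf, """{"type":"eval","code":"1+1"}""")   # fake JSON message
msg = String(take!(buf))                           # drain the buffer
@assert msg == "{\"type\":\"eval\",\"code\":\"1+1\"}\n"
```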
 
-Jeff


Re: [julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-06 Thread Petr Krysl
Oh, in that case I am tickled pink.

Please do let me know if you find any typos or mistakes.

Best regards,

Petr

On Tuesday, January 6, 2015 3:36:34 PM UTC-8, Rob J Goedman wrote:

 Hi Petr,

 It’s your book, I used this name for the time being while working my way 
 through the first 6 or 7 chapters using Julia (and Mathematica 
 occasionally, don’t have Matlab).

 If you would prefer that, I can easily change the name, I have no 
 intention to ever register the package.

 Just trying to figure out a good way to replace my current (Fortran) FEM/R 
 program with a Julia equivalent.

 Regards,
 Rob J. Goedman
 goe...@mac.com




  
 On Jan 6, 2015, at 3:01 PM, Petr Krysl krysl...@gmail.com wrote:

 Rob,

 Thanks. I did find some .mem files (see above). Not for my own source 
 files though.

 Petr

 PS: You have a fineale book? Interesting... I thought no one else had 
 claimed that name for a software project before...

 On Tuesday, January 6, 2015 2:46:26 PM UTC-8, Rob J Goedman wrote:

 Petr,

 Not sure if this helps you, but below sequence creates the .mem file.

 ProjDir is set in Ex07.jl and is the directory that contains the .mem file

 Regards,
 Rob J. Goedman
 goe...@mac.com


 Robs-MacBook-Pro:~ rob$ clear; julia  --track-allocation=user

                _
    _       _ _(_)_     |  A fresh approach to technical computing
   (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
    _ _   _| |_  __ _   |  Type "help()" for help.
   | | | | | | |/ _` |  |
   | | |_| | | | (_| |  |  Version 0.3.4 (2014-12-26 10:42 UTC)
  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
 |__/                   |  x86_64-apple-darwin13.4.0

 julia> include("/Users/rob/.julia/v0.3/FinealeBook/Examples/Fineale/Ch02/Ex07.jl")

 julia> cd(ProjDir)

 julia> clear_malloc_data()

 julia> include("/Users/rob/.julia/v0.3/FinealeBook/Examples/Fineale/Ch02/Ex07.jl")

 shell> ls
 Ex07.jl Ex07.svg Ex08.svg Ex09.svg Section2.3.svg
 Ex07.jl.mem Ex08.jl Ex09.jl Section2.3.jl Section2.4.nb

 On Jan 6, 2015, at 2:15 PM, Petr Krysl krysl...@gmail.com wrote:

 I did this as suggested. The code  executed as shown below, preceded by 
 the command line.
 The process completes,  but there are no .mem files anywhere. Should I 
 ask for them specifically?

 # C:\Users\pkrysl\AppData\Local\Julia-0.4.0-dev\bin\julia.exe --track-allocation=all memory_debugging.jl
 cd("C:/Users/pkrysl/Documents/GitHub/jfineale"); include("JFinEALE.jl");
 include("examples/acoustics/sphere_scatterer_example.jl")
 Profile.clear_malloc_data()
 include("examples/acoustics/sphere_scatterer_example.jl")
 quit()



 On Tuesday, January 6, 2015 1:50:11 AM UTC-8, Milan Bouchet-Valat wrote:

 Le lundi 05 janvier 2015 à 20:48 -0800, Petr Krysl a écrit : 
  Hi guys, 
  
  How does one figure out where allocation  of memory occurs?   When I 
  use the @time  macro it tells me there's a lot of memory allocation 
  and deallocation going on.  Just looking at the code I'm at a loss: I 
  can't see the reasons for it there. 
  
  So, what are the tips and tricks for the curious?  How do I debug the 
  memory allocation issue?  I looked at the lint, the type check, and 
  the code_typed().  Perhaps I don't know where to look, but  these 
  didn't seem to be of much help. 
 See this: 

 http://docs.julialang.org/en/latest/manual/profile/#memory-allocation-analysis
  

 (Would probably be good to backport to the 0.3 manual...) 


 Regards 





Re: [julia-users] Speeding up Floating Point Operations in Julia

2015-01-06 Thread Isaiah Norton

 Suppose I have a code that takes a day to finish after being compiled.  So
 the first run (since it is being compiled) might take say 5 days.  But
 after that 5 days I have got the results and there is no need to run it the
 second time.  So the supposedly fast execution after compile is not going
 to be necessary anyway, and hence provides no benefits.


The only situation in which "run twice" applies is microbenchmarks, where
the compilation time exceeds the actual runtime. If the process takes a day
to run, compile time will be a rounding error.

On Tue, Jan 6, 2015 at 9:34 PM, K leo cnbiz...@gmail.com wrote:

 Might be slightly off-topic, but closely related.

 Does anyone find the logic to run a code first just to compile it and then
 do the real run afterwards somewhat flawed, or am I missing anything?

 Suppose I have a code that takes a day to finish after being compiled.  So
 the first run (since it is being compiled) might take say 5 days.  But
 after that 5 days I have got the results and there is no need to run it the
 second time.  So the supposedly fast execution after compile is not going
 to be necessary anyway, and hence provides no benefits.

 On Wednesday, January 7, 2015, Christoph Ortner 
 christophortn...@gmail.com wrote:

 Maybe run

 testf()

 then

 tic()
 testf()
 toc()

 so that the code is compiled first? Just a guess

Christoph





Re: [julia-users] status of reload / Autoreload

2015-01-06 Thread Andreas Lobinger
No?

On Tuesday, January 6, 2015 5:59:32 PM UTC+1, Tim Holy wrote:

 It already exists, if your module has the same name as a package. Read the 
 help on `reload`. 


lobi@orange4:~/juliarepo$ ../julia/julia 
               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "help()" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.3.5-pre+5 (2014-12-27 04:45 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit 6e3d0b8 (11 days old release-0.3)
|__/                   |  x86_64-linux-gnu

julia> using Rsvg

julia> reload(Rsvg)
ERROR: `reload` has no method matching reload(::Module)

help?> reload
INFO: Loading help data...
Base.reload(file::String)

   Like require, except forces loading of files regardless of
   whether they have been loaded before. Typically used when
   interactively developing libraries.

julia> 

 Or did you mean that a reload(module) exists in a package?


Re: [julia-users] Re: DArrays performance

2015-01-06 Thread Amuthan
Amit: Thanks for the suggestion. I gave it a quick try, but wasn't
successful. It appears to me that communication between the processors (to
obtain the boundary data) would require reconstructing the DArray from the
localparts at the end of each iteration. I guess I'll have to take a deeper
look into the implementation of DArrays to understand how best to implement
this.

In the meantime, I got a reasonable speedup using the Julia wrapper for MPI
(https://github.com/JuliaParallel/MPI.jl). Has anyone tried comparing the
performance of the one-sided message passing model of DArray and the
standard (2-sided) MPI model?

Amuthan

On Mon, Jan 5, 2015 at 12:53 AM, Amit Murthy amit.mur...@gmail.com wrote:

 You can have only two DArrays and use localpart() to get the local parts
 of the arrays on each worker and work off that.

 With a single iteration the network overhead will be much more than any
 gains from distributed computation - it depends on the computation of
 course.

 Currently, DArrays work best if the distributed computation can work
 solely off localparts. An efficient means of setindex! on darrays is a TODO
 at this time.

 On Mon, Jan 5, 2015 at 12:34 PM, Amuthan apar...@gmail.com wrote:

 Hi Amit: yes, the idea is to have just two DArrays, one each for the
 previous and current iterations. I had some trouble assigning values
 directly to a DArray (a setindex! error) and so had to write it like this.
 Do you know any means around this?

 Btw, the parallel code runs slower than the serial version even for just
 one iteration.

 On Sun, Jan 4, 2015 at 10:27 PM, Amit Murthy amit.mur...@gmail.com
 wrote:

 As written, this is creating a 1000 DArrays. I think you intended to
 have only 2 of them and swap values in each iteration?


 On Sunday, 4 January 2015 11:07:47 UTC+5:30, Amuthan A. Ramabathiran
 wrote:

 Hello: I recently started exploring the parallel capabilities of Julia
 and I need some help in understanding and improving the performance a very
 elementary parallel code using DArrays (I use Julia
 version 0.4.0-dev+2431). The code pasted below (based essentially on
 plife.jl) solves u''(x) = 0, x \in [0,1] with u(0) and u(1) specified,
 using the 2nd order central difference approximation. The parallel version
 of the code runs significantly slower than the serial version. It would be
 nice if someone could point out ways to improve this and/or suggest an
 alternative efficient version.

 function laplace_1D_serial(u::Array{Float64})
N = length(u) - 2
u_new = zeros(N)

for i = 1:N
   u_new[i] = 0.5(u[i] + u[i + 2])
end

u_new
 end

 function serial_iterate(u::Array{Float64})
u_new = laplace_1D_serial(u)

for i = 1:length(u_new)
   u[i + 1] = u_new[i]
end
 end

 function parallel_iterate(u::DArray)
DArray(size(u), procs(u)) do I
   J = I[1]

   if myid() == 2
  local_array = zeros(length(J) + 1)
  for i = J[1] : J[end] + 1
 local_array[i - J[1] + 1] = u[i]
  end
  append!([float(u[1])], laplace_1D_serial(local_array))

   elseif myid() == length(procs(u)) + 1
  local_array = zeros(length(J) + 1)
  for i = J[1] - 1 : J[end]
 local_array[i - J[1] + 2] = u[i]
  end
  append!(laplace_1D_serial(local_array), [float(u[end])])

   else
  local_array = zeros(length(J) + 2)
  for i = J[1] - 1 : J[end] + 1
 local_array[i - J[1] + 2] = u[i]
  end
  laplace_1D_serial(local_array)

   end
end
 end

 A sample run on my laptop with 4 processors:
  julia> u = zeros(1000); u[end] = 1.0; u_distributed = distribute(u);

  julia> @time for i = 1:1000
             serial_iterate(u)
         end
  elapsed time: 0.011452192 seconds (8300112 bytes allocated)

  julia> @time for i = 1:1000
             u_distributed = parallel_iterate(u_distributed)
         end
  elapsed time: 4.461922218 seconds (190565036 bytes allocated, 10.17% gc time)

 Thanks for your help!

 Cheers,
 Amuthan







Re: [julia-users] status of reload / Autoreload

2015-01-06 Thread Amuthan A. Ramabathiran
Hi Andreas: is this what you are having in 
mind: https://github.com/malmaud/Autoreload.jl ? (This seems to work pretty 
well for me, most of the time.)

On Tuesday, January 6, 2015 10:15:05 PM UTC-8, Andreas Lobinger wrote:

 No?

 On Tuesday, January 6, 2015 5:59:32 PM UTC+1, Tim Holy wrote:

 It already exists, if your module has the same name as a package. Read 
 the 
 help on `reload`. 


 lobi@orange4:~/juliarepo$ ../julia/julia 
                _
    _       _ _(_)_     |  A fresh approach to technical computing
   (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
    _ _   _| |_  __ _   |  Type "help()" for help.
   | | | | | | |/ _` |  |
   | | |_| | | | (_| |  |  Version 0.3.5-pre+5 (2014-12-27 04:45 UTC)
  _/ |\__'_|_|_|\__'_|  |  Commit 6e3d0b8 (11 days old release-0.3)
 |__/                   |  x86_64-linux-gnu

 julia> using Rsvg

 julia> reload(Rsvg)
 ERROR: `reload` has no method matching reload(::Module)

 help?> reload
 INFO: Loading help data...
 Base.reload(file::String)

    Like require, except forces loading of files regardless of
    whether they have been loaded before. Typically used when
    interactively developing libraries.

 julia> 

  or did you mean, a reload(module) exists in a package?



Re: [julia-users] Re: Build troubles

2015-01-06 Thread Ainsworth, Samuel
Created https://github.com/JuliaLang/julia/issues/9656 and tried
replacing i(start_task)
with (start_task).

Samuel

On Tue, Jan 6, 2015 at 7:06 PM, Jameson Nash vtjn...@gmail.com wrote:

 it's failing on task.c:357 (on the current master). Can you see if it
 compiles after replacing i(start_task) with (start_task)?


 On Tue Jan 06 2015 at 6:33:37 PM Tony Kelman t...@kelman.net wrote:

 Not a noob question at all. The Intel compiler build support isn't
 regularly tested like GCC/Clang, and it's susceptible to occasional
 breakage on master. Hopefully not on release-0.3, but please let us know if
 that happens too. It would be great if we could somehow get CI running with
 Intel compilers, or maybe one or two nightly buildbots using them?

 Some inline assembly was added in PR #9266, which also broke the
 (barely-supported) build with MSVC. Intel should at least allow 64 bit
 inline assembly, I would think? Jameson Nash has a suggested workaround
 involving longjmp for the MSVC case, but I'm not sure whether it applies to
 you - are you trying this on OSX or Linux?

 This is a legitimate build problem, please open an issue (e.g. Build
 broken with Intel compilers) and cross-reference this thread. Include as
 much info about your system and compiler versions as you can.

 -Tony


 On Tuesday, January 6, 2015 1:37:17 PM UTC-8, samuel_a...@brown.edu
 wrote:

 Forgive me if this is a noob question but I'm having some trouble
 building Julia. I'm on commit a318578. Running `make' gives me:

 $ make
 CC src/task.o
 task.c(352): catastrophic error: Cannot match asm operand constraint
 compilation aborted for task.c (code 1)
 make[2]: *** [task.o] Error 1
 make[1]: *** [julia-release] Error 2
 make: *** [release] Error 2

 Make.user:
 USEICC = 1
 USEIFC = 1
 USE_INTEL_MKL = 1
 USE_INTEL_MKL_FFT = 1
 USE_INTEL_LIBM = 1

 JULIA_CPU_TARGET = core2

 Weirdly, the exact same setup works fine compiling v0.3. Does anyone
 have any ideas?

 Thanks,
 Samuel




Re: [julia-users] Speeding up Floating Point Operations in Julia

2015-01-06 Thread Joshua Adelman
My understanding from the style guide and general recommendations is that you 
shouldn’t have a single monolithic function that gets called once. Instead it’s 
idiomatic and performant to compose complex algorithms from many concise 
functions that are called repeatedly. If one of those functions is called 
within a loop with many iterations, then that first call should be negligible 
compared to the total runtime.

As an added benefit, small functions that do one well-defined thing also make 
testing much easier.

Josh

On January 6, 2015 at 9:34:39 PM, K leo (cnbiz...@gmail.com) wrote:

Might be slightly off-topic, but closely related.

Does anyone find the logic of running code first just to compile it and then 
doing the real run afterwards somewhat flawed, or am I missing something?

Suppose I have code that takes a day to finish once compiled.  So the first 
run (since it is being compiled) might take, say, 5 days.  But after those 5 
days I have got the results and there is no need to run it a second time.  So 
the supposedly fast execution after compilation is not going to be needed 
anyway, and hence provides no benefit.

On Wednesday, January 7, 2015, Christoph Ortner christophortn...@gmail.com 
wrote:
Maybe run 

testf()

then 

tic()
testf()
toc()

so that the code is compiled first? Just a guess

   Christoph
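Christoph's tic/toc pattern can be sketched concretely; `work` here is a hypothetical stand-in for whatever function is being benchmarked:

```julia
# Time a function after a warm-up call, so JIT compilation cost is excluded.
# `work` is a hypothetical stand-in for the function being benchmarked.
work(n) = sum(i -> sqrt(i), 1:n)

work(10)                      # first call triggers compilation
t = @elapsed work(1_000_000)  # subsequent calls measure execution only
println("elapsed: ", t, " s")
```

The same warm-up happens implicitly whenever the hot function is called many times, which is Josh's point above: the one-time compilation cost amortizes away.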




Re: [julia-users] Re: Is there a julia user group in New York area?

2015-01-06 Thread Spencer Lyon
I live in NYC and will be back in a few weeks. Happy to meet up. Also 
interested in a meetup.

On Tuesday, January 6, 2015 2:38:02 PM UTC-7, Stefan Karpinski wrote:

 I'm actually planning on jump-starting this and having the first meetup in 
 late January (I'll be giving the first talk). I've been working on lining 
 up six speakers so that we have a full half year – I'm having trouble 
 getting anyone for February though.

 On Sun, Jan 4, 2015 at 8:55 PM, jgabri...@gmail.com javascript: wrote:

 On Saturday, January 3, 2015 11:48:48 AM UTC-5, Tony Fong wrote:

 I'm planning a trip in late Jan. It'd be nice to be able to connect.


 The [Julia Community page](http://julialang.org/community/) has a 
 meetups section, which leads to http://www.meetup.com/julia-nyc/.




[julia-users] Running mean/median in Julia

2015-01-06 Thread Tomas Mikoviny
Hi, 
I was just wondering if anyone knows if there is a package that implements 
*fast* running median/mean algorithms?

Thanks a lot...
 


Re: [julia-users] SubArray - indexing speed bottleneck

2015-01-06 Thread Tim Holy
SubArrays work much better in julia 0.4; on the tasks you posted, you are 
likely to see substantially better performance.

--Tim

On Tuesday, January 06, 2015 08:15:03 AM Tomas Mikoviny wrote:
 Hi,
 I'm trying to optimise a julia script for spectra baseline correction using
 the rolling ball algorithm
 (http://linkinghub.elsevier.com/retrieve/pii/0168583X95009086,
 http://cran.r-project.org/web/packages/baseline).
 Profiling the code showed that the most time consuming part is actually
 subarray pickup.
 I was just wondering if there is any other possible speedup for this
 problem?
 
 I've started initially with standard sub-indexing:
 
 
 a = rand(30);
 w = 200;
 
 
 
 @time for i in 1:length(a)-w
     a[i:i+w]
 end
 elapsed time: 0.387236571 seconds (645148344 bytes allocated, 56.46% gc time)
 
 
 
 Then I've tried directly with subarray function and it improved the runtime
 significantly:
 
 @time for i in 1:length(a)-w
     sub(a,i:i+w)
 end
 elapsed time: 0.10720574 seconds (86321144 bytes allocated, 32.13% gc time)
 
 
 
  With an approach that internally removes and adds elements I've gained some
 extra speed-up (eliminating gc):
 
 subset = a[1:1+w]
 
 @time for i in 2:length(a)-w
     splice!(subset,1)
     insert!(subset,w+1,a[i+w])
 end
 elapsed time: 0.067341484 seconds (33556344 bytes allocated)
 
 
 However, I wonder if this is the end
 
 And obligatory version info:
 
 Julia Version 0.3.4
 Commit 3392026* (2014-12-26 10:42 UTC)
 Platform Info:
   System: Darwin (x86_64-apple-darwin13.4.0)
   CPU: Intel(R) Core(TM) i5-4690 CPU @ 3.50GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
   LAPACK: libopenblas
   LIBM: libopenlibm
   LLVM: libLLVM-3.3
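For the windowed statistics behind this benchmark, one allocation-free alternative (a sketch, not from the thread; function name and sizes are illustrative) is to slide a running sum instead of materializing each subarray:

```julia
# Sliding-window means in O(1) work per step: add the entering element,
# subtract the leaving one, and never allocate a per-window copy.
function window_means(a::AbstractVector, w::Integer)
    n = length(a) - w
    means = zeros(n)
    s = sum(@view a[1:w+1])        # sum of the first (w+1)-wide window
    means[1] = s / (w + 1)
    for i in 2:n
        s += a[i+w] - a[i-1]       # slide the window one step right
        means[i] = s / (w + 1)
    end
    return means
end

a = rand(10_000)
m = window_means(a, 200)
```

This sidesteps both the copying of `a[i:i+w]` and the `sub`/`splice!` bookkeeping entirely, at the cost of some floating-point drift for very long runs.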



Re: [julia-users] Running mean/median in Julia

2015-01-06 Thread Kevin Squire
Hi Tomas,
I'm not aware of any (though they might exist). It might help if you gave a
little more context--what kind of data are you working with?

Cheers,
   Kevin

On Tuesday, January 6, 2015, Tomas Mikoviny tomas.mikov...@gmail.com
wrote:

 Hi,
 I was just wondering if anyone knows if there is a package that implements
 *fast* running median/mean algorithms?

 Thanks a lot...




[julia-users] Interface design advice - optional plotting?

2015-01-06 Thread James Crist
I'm trying to replicate the interface of some MATLAB code I use frequently. 
Because functions in MATLAB have knowledge of their output arguments, 
different behaviors can occur if an assignment is used or not (i.e. `a = 
foo()` can do something different than just `foo()`). Specifically, the 
interface I'm trying to replicate displays a plot with no output arguments, 
and otherwise returns the data. 

In Python, I'd use a kwarg `plot=False`. What's the most Julian way of 
doing this? I have a few thoughts right now:

1. Use a kwarg. `foo(...)` returns the data, `foo(..., plot=true)` creates 
a plot and returns nothing (or the plot handle?).

2. Create a `@plot` macro that sets a global `PLOT` boolean variable. This 
indicates to the functions it's configured to work with that the data 
should be plotted. The use would be: `@plot foo(...)`, which generates the 
code:
```
PLOT = true
foo(...)
PLOT = false
```
This could also be modified to return the plot handle at the end of the 
block. While I tend to like this interface more than #1, it's also more 
magical, which is bad.

3. Create a `@plot` macro that somehow knows how to plot each function (a 
registry of some kind?). Then the function needs no knowledge of how to 
plot itself, and the macro is just a fast way of writing in the plotting 
code. Use is `@plot foo(...)`, which expands to
```
data = foo(...)
plotting code here...
```

4. Create a separate function `fooplot`. `foo` returns the data. `fooplot` 
runs foo, and creates a plot.

Out of all of these, I like option #3 best, but want to use the most Julian 
way of doing things as possible. Thoughts?

--

Also, is there a good way to only load the plotting package (Gadfly, 
Winston, etc...) when it's first used? Loading these takes a long time; it 
would be nice if they could only be loaded upon the first call to `plot`.
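Option 3 might look something like this registry-based sketch (every name here is hypothetical, not an existing API):

```julia
# Hypothetical registry-based `@plot` (option 3): functions register a
# plotting callback; the macro runs the call, then applies the callback.
const PLOTTERS = Dict{Symbol,Function}()

register_plotter!(f::Symbol, p::Function) = (PLOTTERS[f] = p)

macro plot(ex)
    fname = ex.args[1]                 # assumes a plain call like foo(args...)
    quote
        data = $(esc(ex))
        PLOTTERS[$(QuoteNode(fname))](data)
    end
end

# Example wiring; the callback is a stand-in for real plotting code:
foo(n) = collect(1:n)
register_plotter!(:foo, data -> "would plot $(length(data)) points")

@plot foo(5)    # runs foo(5), then the registered callback on its result
```

The function itself stays plotting-agnostic; only the registry knows how to render each result.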


Re: [julia-users] Running mean/median in Julia

2015-01-06 Thread Tamas Papp
Hi Tomas,

I don't know of any packages, but a running mean is trivial to
implement. See e.g.
http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Online_algorithm
; it doesn't get faster than this, and it is reasonably accurate.

Medians (and quantiles in general) are trickier, because you have to
keep track of the whole distribution or sacrifice accuracy. See

@inproceedings{greenwald2001space,
  title={Space-efficient online computation of quantile summaries},
  author={Greenwald, Michael and Khanna, Sanjeev},
  booktitle={ACM SIGMOD Record},
  volume={30},
  number={2},
  pages={58--66},
  year={2001},
  organization={ACM}
}

You might want to tell us more about your problem and then you could get
more specific advice. E.g. if the number of observations >> the number of
distinct values, you could simply count them in a Dict.
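The online mean from that Wikipedia page can be sketched in a few lines (names are illustrative, current Julia syntax):

```julia
# Online (running) mean: each update is O(1) and numerically stable,
# following the incremental formula from the linked Wikipedia article.
mutable struct RunningMean
    n::Int
    mean::Float64
end
RunningMean() = RunningMean(0, 0.0)

function update!(r::RunningMean, x::Real)
    r.n += 1
    r.mean += (x - r.mean) / r.n   # mean_n = mean_{n-1} + (x - mean_{n-1})/n
    return r
end

r = RunningMean()
foreach(x -> update!(r, x), [1.0, 2.0, 3.0, 4.0])
r.mean   # 2.5
```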

Best,

Tamas

On Tue, Jan 06 2015, Tomas Mikoviny tomas.mikov...@gmail.com wrote:

 Hi, 
 I was just wondering if anyone knows if there is a package that implements 
 *fast* running median/mean algorithms?

 Thanks a lot...
  



Re: [julia-users] Interface design advice - optional plotting?

2015-01-06 Thread Milan Bouchet-Valat
Le mardi 06 janvier 2015 à 09:25 -0800, James Crist a écrit :
 I'm trying to replicate the interface of some MATLAB code I use
 frequently. Because functions in MATLAB have knowledge of their
 output arguments, different behaviors can occur if an assignment is
 used or not (i.e. `a = foo()` can do something different than just
 `foo()`). Specifically, the interface I'm trying to replicate displays
 a plot with no output arguments, and otherwise returns the data. 
 
 In Python, I'd use a kwarg `plot=False`. What's the most Julian way of
 doing this? I have a few thoughts right now:
 
 1. Use a kwarg. `foo(...)` returns the data, `foo(..., plot=true)`
 creates a plot and returns nothing (or the plot handle?).
 
 2. Create a `@plot` macro that sets a global `PLOT` boolean variable.
 This indicates to the functions it's configured to work with that the
 data should be plotted. The use would be: `@plot foo(...)`, which
 generates the code:
 ```
 PLOT = true
 foo(...)
 PLOT = false
 ```
 This could also be modified to return the plot handle at the end of
 the block. While I tend to like this interface more than #1, it's also
 more magical, which is bad.
 
 3. Create a `@plot` macro that somehow knows how to plot each function
 (a registry of some kind?). Then the function needs no knowledge of
 how to plot itself, and the macro is just a fast way of writing in the
 plotting code. Use is `@plot foo(...)`, which expands to
 ```
 data = foo(...)
 plotting code here...
 ```
 
 4. Create a separate function `fooplot`. `foo` returns the data.
 `fooplot` runs foo, and creates a plot.
 
 Out of all of these, I like option #3 best, but want to use the most
 Julian way of doing things as possible. Thoughts?
If your function can return data instead of plotting it, a possibly more
elegant design would be to make it return a custom type that would
implement a `plot` method. That way you could do either:
a = foo(...)
plot(a)

or
plot(foo(...))

which is as short as
foo(..., plot=true)
and
@plot foo(...)


Depending on the kind of data you need to return, it could be e.g. a
very simple type wrapping an array. Unfortunately, AFAIK there is no
easy way currently to inherit all methods from the array in your custom
type, but it may become easier in the future (and APIs need to be
forward-looking). See https://github.com/JuliaLang/julia/pull/3292
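A minimal sketch of this design (all names hypothetical, current Julia syntax):

```julia
# Return a lightweight wrapper type instead of plotting inside foo;
# a `plot` method on the wrapper does the plotting on demand.
struct FooResult
    data::Vector{Float64}
end

foo(n::Integer) = FooResult(collect(1.0:n))   # compute only, never plot

# Stand-in for a real plotting-backend call:
plot(r::FooResult) = "plotting $(length(r.data)) points"

r = foo(5)      # just the data
plot(r)         # plot when (and only when) asked
plot(foo(5))    # or in one expression
```

Callers who never call `plot` never pay for it, and the plotting package only needs to be loaded by the `plot` method.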


 --
 
 Also, is there a good way to only load the plotting package (Gadfly,
 Winston, etc...) when it's first used? Loading these takes a long
 time; it would be nice if they could only be loaded upon the first
 call to `plot`.
This has been discussed several times on the list, though I'm not sure
what the most up-to-date reply is.



Regards


Re: [julia-users] SubArray - indexing speed bottleneck

2015-01-06 Thread Tomas Mikoviny
Tim, thanks for suggestion.. To be honest I primarily work with julia 0.4 
(0.4.0-dev+2523) and do everything there first.
Unfortunately there is no visible performance improvement (~ same 
performance) compared to 0.3.4 for this one-dimensional problem.

Tomas

On Tuesday, January 6, 2015 6:01:27 PM UTC+1, Tim Holy wrote:

 SubArrays work much better in julia 0.4; on the tasks you posted, you are 
 likely to see substantially better performance. 

 --Tim 




Re: [julia-users] string literals split to multilines?

2015-01-06 Thread Mike Nolta
On Tue, Jan 6, 2015 at 5:19 AM, René Donner li...@donner.at wrote:
 d = "aaa
 bbb
 ccc"

I think what Andreas wanted was a string without newlines. At the
moment, you can't do that in julia.

-Mike
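One workaround (not a single literal, but close): adjacent literals can be concatenated with `*`, which joins them without embedding newlines:

```julia
# `*` is Julia's string concatenation operator, so a long literal can be
# split across source lines without newlines ending up in the value.
d = "aaa" *
    "bbb" *
    "ccc"
```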


[julia-users] Re: Package name for embedding R within Julia

2015-01-06 Thread Douglas Bates
In answer to Viral's question: the purpose of RCall.jl is to link to the R 
API directly from Julia instead of writing glue code in C.  I decided to 
do a clean-room implementation rather than build on Rif.jl because I was 
(mistakenly) concerned about the license on Rif.jl.

Some of the goals are:
 - use deps/build.jl to determine the location of libR.so and suitable 
values for various environment variables expected by R
 - reflect the internal structure of R objects in a templated Julia type.
 - provide easy access to data sets from R packages.  There is code to do 
this in DataFrames/src/RDA.jl but that depends upon reproducing the entire 
decoding mechanism for saved R objects in Julia.  I think the easier way to 
do this is to use R to decode its own save format and just pick up the 
values from R.  This will also provide an alternative to using the 
RDatasets package.
 - provide a reference implementation for the formula/data representation 
to produce ModelFrame and ModelMatrix objects.
 - allow access to R packages.

Ultimately I would like RCall to be as complete as PyCall but that would 
probably require participation by others more skilled than me.

On Tuesday, January 6, 2015 12:28:51 AM UTC-6, Randy Lai wrote:

 Hi all,


 Sorry for joining the game late, I have been very busy in the past few 
 days. 



 First of all, I am very excited to learn that RCall.jl is now alive at 
 JuliaStats. To resolve the confusion between the package names of my 
 original package and the one hosted on JuliaStats, I renamed my original 
 package and now it is called “RCalling.jl”. 


 https://github.com/randy3k/RCalling.jl



 I had a brief look at what Douglas has been doing in the new repo of 
 RCall.jl. 

 His direction of porting all R API functions from C to Julia may make 
 further development of the R/Julia interface easier.

 In contrast, in RCalling.jl, I have been using the Julia API (in C) and the R 
 API (of course, also in C) intensively, which actually makes the code 
 difficult to maintain.


 It is good to have more people playing the game.

 Please let me know how I could help in the development of the 
 package.


 Best


 Randy

 On Friday, January 2, 2015 11:59:04 AM UTC-8, Douglas Bates wrote:

 For many statistics-oriented Julia users there is a great advantage in 
 being able to piggy-back on R development and to use at least the data sets 
 from R packages.  Hence the RDatasets package and the read_rda function in 
 the DataFrames package for reading saved R data.

 Over the last couple of days I have been experimenting with running an 
 embedded R within Julia and calling R functions from Julia. This is similar 
 in scope to the Rif package except that this code is written in Julia and 
 not as a set of wrapper functions written in C. The R API is a C API and, 
 in some ways, very simple. Everything in R is represented as a symbolic 
 expression or SEXPREC and passed around as pointers to such expressions 
 (called an SEXP type).  Most functions take one or more SEXP values as 
 arguments and return an SEXP.

 I have avoided reading the code for Rif for two reasons:
  1. It is GPL3 licensed
  2. I already know a fair bit of the R API and where to find API function 
 signatures.

 Here's a simple example:
 julia> initR()
 1

 julia> globalEnv = unsafe_load(cglobal((:R_GlobalEnv,libR),SEXP),1)
 Ptr{Void} @0x08c1c388

 julia> formaldehyde = tryEval(install(:Formaldehyde))
 Ptr{Void} @0x08fd1d18

 julia> inherits(formaldehyde,"data.frame")
 true

 julia> printValue(formaldehyde)
   carb optden
 1  0.1  0.086
 2  0.3  0.269
 3  0.5  0.446
 4  0.6  0.538
 5  0.7  0.626
 6  0.9  0.782

 julia> length(formaldehyde)
 2

 julia> names(formaldehyde)
 2-element Array{ASCIIString,1}:
  "carb"
  "optden"

 julia> form1 = ccall((:VECTOR_ELT,libR),SEXP,(SEXP,Cint),formaldehyde,0)
 Ptr{Void} @0x0a5baf58

 julia> ccall((:TYPEOF,libR),Cint,(SEXP,),form1)
 14

 julia> carb = copy(pointer_to_array(ccall((:REAL,libR),Ptr{Cdouble},(SEXP,),form1),length(form1)))
 6-element Array{Float64,1}:
  0.1
  0.3
  0.5
  0.6
  0.7
  0.9

 julia> form2 = ccall((:VECTOR_ELT,libR),SEXP,(SEXP,Cint),formaldehyde,1)
 Ptr{Void} @0x0a5baef0

 julia> ccall((:TYPEOF,libR),Cint,(SEXP,),form2)
 14

 julia> optden = copy(pointer_to_array(ccall((:REAL,libR),Ptr{Cdouble},(SEXP,),form2),length(form2)))
 6-element Array{Float64,1}:
  0.086
  0.269
  0.446
  0.538
  0.626
  0.782


 A call to printValue uses the R printing mechanism.

 Questions:
  - What would be a good name for such a package?  In the spirit of PyCall 
 it could be RCall or Rcall perhaps.

  - Right now I am defining several functions that emulate the names of 
 functions in R itself or in the R API.  What is a good balance?  Obviously 
 it would not be a good idea to bring in all the names in the R base 
 namespace.  On the other hand, those who know names like `inherits` and 
 what they mean in R will find it convenient to have such names.
