[julia-users] Re: Creating function with a macro

2015-06-17 Thread Ben Ward
For the toy example you are right, a parametric function would be better - 
what I'm actually trying to do is define several `searchsortedfirst` 
functions, where the lt function is different. 

This is because I have a vector of one immutable composite type of several 
values I wish to sort and search. I could sort/search them by their first 
value, second value, or third value, and there is no canonical lt for the 
type. 

Base.sort can be provided an anonymous function to use as lt - but 
performance is absolutely critical in my use case and I need to squeeze out 
as much performance as I can. Benchmarks I have done have shown that using 
anonymous functions slows things down. However, if I copy the searchsorted 
code and modify it to contain my custom lt condition, then everything is 
faster.

Therefore I wanted to make a macro that would define several of my custom 
searchsortedfirst functions: if I provided it a name for the function, the 
types/arguments it accepts, and then the custom lt expression, it would 
return a definition of the custom function with the lt condition hard-coded 
in.
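
Something along these lines might work as a minimal sketch (all names here 
are hypothetical; escaping the whole quote side-steps hygiene, and the lt 
expression is assumed to refer to the probe element as `a` and the search 
key as `b`):

```julia
# Hypothetical sketch: generate a searchsortedfirst-style function with the
# lt condition hard-coded in. `ltexpr` is an expression in `a` (an element
# of v) and `b` (the search key), e.g. :(a.a < b.a).
macro def_searchfirst(name, eltype, ltexpr)
    esc(quote
        function $name(v::Vector{$eltype}, x::$eltype)
            lo, hi = 0, length(v) + 1
            while hi - lo > 1
                m = (lo + hi) >>> 1
                a, b = v[m], x
                if $ltexpr      # the inlined lt condition
                    lo = m
                else
                    hi = m
                end
            end
            return hi           # first index i at which x could be inserted
        end
    end)
end

# Usage (hypothetical): @def_searchfirst first_by_a CompType a.a < b.a
```

Escaping the whole block avoids per-symbol `esc` bookkeeping, at the cost of 
`a` and `b` being ordinary locals of the generated function.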


On Wednesday, June 17, 2015 at 8:07:11 PM UTC+1, Tom Breloff wrote:

 My gut reaction is that you don't want to use a macro here.  Can you use a 
 parametric definition:
 f{T}(vectype::T) = do something useful with the T

 or can you just use multiple dispatch:
 f{T<:FloatingPoint}(v::Vector{T}) = something for floats
 f{T<:Integer}(v::Vector{T}) = something for ints

 What's your use case?

 But to answer your question... I think this should work:


 julia> macro customFun(vectype::Expr, name::Symbol)
            quote
                function $(esc(name))(v::$(esc(vectype)))
                    println(typeof(v))
                end
            end
        end

 julia> @customFun Vector{Int} f
 f (generic function with 1 method)

 julia> f(Int[])
 Array{Int64,1}




 On Wednesday, June 17, 2015 at 2:13:41 PM UTC-4, Ben Ward wrote:

 Hi, I want to create a macro with which I can create a function with a 
 custom bit of code:

 In the REPL I can do a toy example:


 name = :hi
 vectype = Vector{Int}

 quote
     function ($name)(v::$vectype)
         println("hi")
     end
 end

 However if I try to put this in a macro and use it I get an error:

 macro customFun(vectype::DataType, name::Symbol)
     quote
         function ($name)(v::$vectype)
             println("hi World!")
         end
     end
 end

 @customFun(Vector{Int}, :hi)

 What am I doing wrong? I'd like to use macro arguments to provide a 
 function's name, and the datatype of the argument. I haven't used macros to 
 define functions more complex than simple one-liners.

 Thanks,
 Ben.



[julia-users] Re: Creating function with a macro

2015-06-17 Thread Kristoffer Carlsson
Also, as Mauro said, instead of manually creating the functors, packages 
like FastAnonymous.jl make it easier for you.

On Wednesday, June 17, 2015 at 10:11:22 PM UTC+2, Kristoffer Carlsson wrote:

 You could also use something called functors, which basically are types 
 that overload the call function. When you pass these as an argument, the 
 compiler can specialize the function on the type of the functor and thus 
 inline the call. See here for an example of them being used effectively for 
 a performance increase: https://github.com/JuliaLang/julia/pull/11685

 As an example I took the code for insertionsort and made it instead accept 
 an argument f which will be the functor. I then create some functors to 
 sort on the different type fields and show an example of how sort is called.


 function sort!(v::AbstractVector, f, lo::Int=1, hi::Int=length(v))
     @inbounds for i = lo+1:hi
         j = i
         x = v[i]
         while j > lo
             if f(x, v[j-1])
                 v[j] = v[j-1]
                 j -= 1
                 continue
             end
             break
         end
         v[j] = x
     end
     return v
 end

 # Some type
 immutable CompType
 a::Int
 b::Int
 c::Int
 end


 b = [CompType(1,2,3), CompType(3,2,1), CompType(2,1,3)]

 # Functors 
 immutable AFunc end
 call(::AFunc, x, y) = x.a < y.a
 immutable BFunc end
 call(::BFunc, x, y) = x.b < y.b
 immutable CFunc end
 call(::CFunc, x, y) = x.c < y.c

 # Can now sort with good performance
 sort!(b, AFunc())
 println(b)
 sort!(b, BFunc())
 println(b)
 sort!(b, CFunc())
 println(b)


 Now, it is of course not optimal to rip code out of Base like this. It 
 would be better if we could pass a functor straight to Base.sort!. 


 On Wednesday, June 17, 2015 at 8:13:41 PM UTC+2, Ben Ward wrote:




[julia-users] Hands-on Julia workshop materials

2015-06-17 Thread David P. Sanders
Dear all,

I have just finished giving a 2-day Julia workshop in Paris for scientists 
who already had some knowledge of scientific computing. The materials 
(Jupyter notebooks) are available at

https://github.com/dpsanders/hands_on_julia/

This is somewhat different from most previous materials, since I have tried 
to make it
more hands-on, in the sense that it asks the user to try things out and 
guess the syntax,
rather than just giving the answer straight away. This can be more 
frustrating, but also more
educational!

Please let me know if you find this useful; pull requests with corrections 
etc. are, of course,
more than welcome!

David.


[julia-users] Creating function with a macro

2015-06-17 Thread Ben Ward
Hi, I want to create a macro with which I can create a function with a 
custom bit of code:

In the REPL I can do a toy example:


name = :hi
vectype = Vector{Int}

quote
    function ($name)(v::$vectype)
        println("hi")
    end
end

However if I try to put this in a macro and use it I get an error:

macro customFun(vectype::DataType, name::Symbol)
    quote
        function ($name)(v::$vectype)
            println("hi World!")
        end
    end
end

@customFun(Vector{Int}, :hi)

What am I doing wrong? I'd like to use macro arguments to provide a 
function's name, and the datatype of the argument. I haven't used macros to 
define functions more complex than simple one-liners.

Thanks,
Ben.


[julia-users] Re: Creating function with a macro

2015-06-17 Thread Tom Breloff
My gut reaction is that you don't want to use a macro here.  Can you use a 
parametric definition:
f{T}(vectype::T) = do something useful with the T

or can you just use multiple dispatch:
f{T<:FloatingPoint}(v::Vector{T}) = something for floats
f{T<:Integer}(v::Vector{T}) = something for ints

What's your use case?

But to answer your question... I think this should work:


julia> macro customFun(vectype::Expr, name::Symbol)
           quote
               function $(esc(name))(v::$(esc(vectype)))
                   println(typeof(v))
               end
           end
       end

julia> @customFun Vector{Int} f
f (generic function with 1 method)

julia> f(Int[])
Array{Int64,1}




On Wednesday, June 17, 2015 at 2:13:41 PM UTC-4, Ben Ward wrote:




Re: [julia-users] Re: Creating function with a macro

2015-06-17 Thread Mauro
On Wed, 2015-06-17 at 21:23, Ben Ward axolotlfan9...@gmail.com wrote:

Sounds like you should use FastAnonymous.jl or NumericFuns.jl or
Functors:
https://github.com/JuliaLang/julia/blob/e97588db65f590d473e7fbbb127f30c01ea94995/base/functors.jl



 On Wednesday, June 17, 2015 at 8:07:11 PM UTC+1, Tom Breloff wrote:






[julia-users] Re: Creating function with a macro

2015-06-17 Thread Kristoffer Carlsson
You could also use something called functors, which basically are types that 
overload the call function. When you pass these as an argument, the compiler 
can specialize the function on the type of the functor and thus inline the 
call. See here for an example of them being used effectively for a 
performance increase: https://github.com/JuliaLang/julia/pull/11685

As an example I took the code for insertionsort and made it instead accept 
an argument f which will be the functor. I then create some functors to 
sort on the different type fields and show an example of how sort is called.


function sort!(v::AbstractVector, f, lo::Int=1, hi::Int=length(v))
    @inbounds for i = lo+1:hi
        j = i
        x = v[i]
        while j > lo
            if f(x, v[j-1])
                v[j] = v[j-1]
                j -= 1
                continue
            end
            break
        end
        v[j] = x
    end
    return v
end

# Some type
immutable CompType
a::Int
b::Int
c::Int
end


b = [CompType(1,2,3), CompType(3,2,1), CompType(2,1,3)]

# Functors 
immutable AFunc end
call(::AFunc, x, y) = x.a < y.a
immutable BFunc end
call(::BFunc, x, y) = x.b < y.b
immutable CFunc end
call(::CFunc, x, y) = x.c < y.c

# Can now sort with good performance
sort!(b, AFunc())
println(b)
sort!(b, BFunc())
println(b)
sort!(b, CFunc())
println(b)


Now, it is of course not optimal to rip code out of Base like this. It would 
be better if we could pass a functor straight to Base.sort!. 
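
As a hedged sketch of what that might look like: on 0.4, where call 
overloading exists, a functor instance can plausibly be handed straight to 
Base.sort! via the lt keyword, since lt is invoked as lt(x, y). Whether this 
specializes as well as the explicit-argument version above is an assumption 
worth benchmarking:

```julia
# Sketch (0.4-era syntax, unverified): reuse a functor as the lt keyword.
immutable CompType
    a::Int
    b::Int
    c::Int
end

immutable AFunc end
Base.call(::AFunc, x, y) = x.a < y.a

b = [CompType(1,2,3), CompType(3,2,1), CompType(2,1,3)]
sort!(b, lt=AFunc())   # Base calls lt(x, y), i.e. our call overload
```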


On Wednesday, June 17, 2015 at 8:13:41 PM UTC+2, Ben Ward wrote:




Re: [julia-users] How to manually install julia packages on a Windows system

2015-06-17 Thread Yonatan Tekleab
I think I'm moving in the right direction. I downloaded several packages 
that IJulia depends on and put them in the ~/.julia/v0.3 directory along 
with the IJulia package itself. Before I was just sticking them in the 
~/.julia directory, and I don't think Julia was seeing the packages.

When trying `using IJulia`, I get "ERROR: ZMQ not properly installed. 
Please run Pkg.build("ZMQ")". When I run that command, it tries to build 
Homebrew, WinRPM, and ZMQ, all of which have their own errors.
Homebrew: could not spawn setenv(`git rev-parse --git-dir`; 
dir=P:\\.julia\\v0.3\\Homebrew\\deps\\usr): no such file or directory 
(ENOENT)
WinRPM: update not defined
ZMQ: RPM not defined

When I try `Pkg.build("IJulia")`, it tries to build Homebrew, WinRPM, 
Nettle, ZMQ, and IJulia.  I get errors for all except IJulia.  The 
Homebrew, WinRPM and ZMQ errors are the same.  For Nettle, I get: RPM not 
defined

Now I can open an IJulia instance, but the kernel dies shortly after it 
comes up. The command window states "ERROR: ZMQ not properly installed. 
Please run Pkg.build("ZMQ")". Then it attempts to restart the kernel and 
repeats the process.


On Wednesday, June 17, 2015 at 12:31:22 AM UTC-4, Tony Kelman wrote:

 Can you do `using IJulia`, and/or `Pkg.build("IJulia")`? Note also that 
 IJulia depends on several other packages, indicated in the REQUIRE file 
 (and those packages may have other dependencies of their own).


 On Tuesday, June 16, 2015 at 3:40:14 PM UTC-7, Yonatan Tekleab wrote:

 Hi Stefan,

 I'm having the same problem.  Unfortunately the firewall I'm behind is 
 clever enough to prevent me from re-configuring git to use https, as many 
 other threads have indicated.

 I downloaded the master branch IJulia package from 
 https://github.com/JuliaLang/IJulia.jl, extracted the folder, placed it 
 inside the ~/.julia folder, then removed the .jl-master suffix.  This 
 still isn't working for me.  When I try to open IJulia from the command 
 prompt (`ipython notebook --profile julia`), it pulls up the typical 
 IPython notebook.

 Any thoughts on what I'm doing wrong?

 Thanks in advance.

 On Thursday, October 31, 2013 at 10:16:06 AM UTC-4, Stefan Karpinski 
 wrote:

 If you just make sure that the package source exists in ~/.julia, that 
 should do the trick. In fact, you don't need to mess around with the 
 package manager at all – Pkg commands will fail but loading packages should 
 work fine. Unfortunately, building packages with binary dependencies will 
 likely fail, but if you stick with pure-Julia packages, you should be ok.


 On Thu, Oct 31, 2013 at 7:51 AM, Able Mashamba amas...@gmail.com 
 wrote:

 Dear Informed,

 Is there a way to manually install julia packages on a Windows system 
 that has a proxy.pac config system with a paranoid firewall. I have 
 downloaded the packages I need and would want to install them manually as 
 it appears Internet permission settings at my institution are making all 
 Pkg.*() commands fail.

                _
    _       _ _(_)_     |  A fresh approach to technical computing
   (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
    _ _   _| |_  __ _   |  Type "help()" to list help topics
   | | | | | | |/ _` |  |
   | | |_| | | | (_| |  |  Version 0.2.0-rc2
  _/ |\__'_|_|_|\__'_|  |  Commit b372a68 2013-10-26 02:06:56 UTC
 |__/                   |  i686-w64-mingw32

 julia> Pkg.add("Distributions")
 INFO: Initializing package repository C:\Users\amashamba\.julia
 INFO: Cloning METADATA from git://github.com/JuliaLang/METADATA.jl
 fatal: unable to connect to github.com:
 github.com[0: 192.30.252.130]: errno=No error

 ERROR: failed process: Process(`git clone -q -b metadata-v2 
 git://github.com/JuliaLang/METADATA.jl METADATA`, ProcessExited(128)) [128]

 julia>








Re: [julia-users] How to manually install julia packages on a Windows system

2015-06-17 Thread Isaiah Norton
Well, if https is in fact accessible then the best bet is to troubleshoot
git directly first. After configuring the `insteadOf` git setting (per the
README) try something simple like `git clone
https://github.com/JuliaLang/julia`. There are a lot of guides on the
internet for troubleshooting this issue.
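
Concretely, the insteadOf rewrite can be set and checked from the command 
line (a sketch; the scope and exact URL patterns may need adjusting for a 
given proxy setup):

```shell
# Rewrite git:// URLs to https:// globally -- the `insteadOf` setting the
# README refers to; https often passes firewalls that block the git protocol.
git config --global url."https://".insteadOf git://

# Confirm the rewrite rule took effect (prints "git://"):
git config --global --get url.https://.insteadof
```

After that, a plain `git clone https://github.com/JuliaLang/julia` is a 
quick connectivity check before retrying the Pkg commands.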

On Wed, Jun 17, 2015 at 4:46 PM, Yonatan Tekleab ytekl...@gmail.com wrote:

 yea, i figured the same thing since I am on the same system using https
 through my browser, but for some reason that I don't understand, Julia
 won't add/update packages, even when git is configured to use https

 On Wednesday, June 17, 2015 at 3:04:26 PM UTC-4, Isaiah wrote:

 Are you using the gmail web interface from this same system? If so, then
 https:// should, in principle, be available and work for git too...

 On the other hand, if you are using a separate (windows) system for
 gmail, then you ought to be able to run Pkg.install/build on the second
 system, get all requirements you need, and then copy your
 C:/Users/USERNAME/.julia/v0.# directory onto the firewalled system. This
 is tricky/unreliable on linux, but should be quite simple on windows as
 long as both systems are same word size -- both 32-bit or 64-bit (because
 of Microsoft's ABI permanence).

 On Wed, Jun 17, 2015 at 2:31 PM, Yonatan Tekleab ytek...@gmail.com
 wrote:


[julia-users] Re: X.2=X*0.2, easy to make mistake.

2015-06-17 Thread Art Kuo
That's great that it's fixed in 0.4, but even in 0.3.X I would still label 
it inconsistent behavior, or perhaps even a bug. Why should this happen:

julia> x.2 == x*.2
true

julia> x0.2 == x*0.2
ERROR: x0 not defined

julia> x2 == x*2
ERROR: x2 not defined

It seems consistent that .2X == .2*X, 0.2X == 0.2*X, and 2X == 2*X, so it is 
fine if the number occurs before the variable, but not if the number occurs 
after. So I agree with the proposal to ban X.2, meaning it should trigger an 
error. Shouldn't this be the case for the 0.3 versions as well?


On Wednesday, June 17, 2015 at 9:14:24 AM UTC-4, Seth wrote:



 On Wednesday, June 17, 2015 at 8:04:11 AM UTC-5, Jerry Xiong wrote:

 Today I spent a lot of time finding a bug in my code. It turned out that I 
 had mistakenly written sum(X,2) as sum(X.2). No error is reported, and 
 Julia regards X.2 as X*0.2. The comma "," is quite close to the dot "." on 
 the keyboard and looks quite similar in some fonts. As no error occurs, 
 this bug is dangerous. Also, it is not intuitive that X.2 means X*0.2. I 
 think maybe it would be better to forbid syntax like X.2 and only allow 
 .2X. 


 This appears to be fixed in 0.4:

 julia> x = 100
 100

 julia> x.2
 ERROR: syntax: extra token "0.2" after end of expression

 julia> sum(x.2)
 ERROR: syntax: missing comma or ) in argument list

 julia> f(x) = x.2
 ERROR: syntax: extra token "0.2" after end of expression

 julia> f(x) = sum(x.2)
 ERROR: syntax: missing comma or ) in argument list

  



[julia-users] Re: Creating function with a macro

2015-06-17 Thread Ben Ward
I guess for Base sort!, passing a functor overriding call() as lt is 
possible, if it accepts the three arguments passed to lt in lt(o, x, v[j-1]).

On Wednesday, June 17, 2015 at 9:11:22 PM UTC+1, Kristoffer Carlsson wrote:




Re: [julia-users] Re: What's [] called? ie How do I refer to the array construction function?

2015-06-17 Thread andrew cooke

i guess i had less confidence that types come out right with vcat than i do 
with tuple.  but now that i come to justify why, i can't, so yes, sorry for 
ignoring that earlier.

On Wednesday, 17 June 2015 19:18:32 UTC-3, Jameson wrote:

 In 0.4, you can now construct vector types directly:
 Vector{T}(dims...)
 making construction of some arrays a bit clearer.

 But I think the array equivalent of `tuple` is `vcat`.
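
 A quick sketch of both points (0.4-era syntax; `Vector{Int}(3)` gives an 
 uninitialized vector):

 ```julia
 v = Vector{Int}(3)   # uninitialized 3-element Array{Int64,1} (0.4 syntax)
 w = vcat(1, 2, 3)    # the array analogue of tuple(1, 2, 3): gives [1, 2, 3]
 ```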


 On Wed, Jun 17, 2015 at 5:20 PM andrew cooke and...@acooke.org wrote:

 thanks for all the replies.

 i really wanted something that took *only* the contents of the array, 
 while getindex has the type too.  so i've defined my own array function:

 array(T) = (builder(x...) = T[x...])

 which i can use as, say, array(Any).

 (the final use case is that this is given to a parser and is used to 
 construct an array from the results, in much the same way as i can 
 currently pass in the function tuple).

 i'd be interested if there's some nicer (ie using curly brackets) way of 
 handling the type, T, or some way of defining anon functions (to avoid 
 explicitly naming builder) with interpolated args.

 cheers,
 andrew



 On Wednesday, 17 June 2015 12:06:50 UTC-3, David Gold wrote:

 Soo, in summary (and I do apologize for spamming this thread; I don't 
 usually drink coffee, and when I do it's remarkable):

 1) Mauro and Avik seem to be right about the common use case Any[1, 2, 
 3, 4] lowering to getindex -- my apologies for hastily suggesting otherwise
 2) I suspect I was wrong to say that concatenation in general doesn't go 
 through a function. It probably does, just not always through getindex 
 (sometimes vcat, typed_vcat, vect, etc).

 On Wednesday, June 17, 2015 at 10:15:00 AM UTC-4, David Gold wrote:

 However, it does look like this use pattern does resolve to 'getindex':

 julia> dump(:( Int[1, 2, 3, 4] ))
 Expr
   head: Symbol ref
   args: Array(Any,(5,))
     1: Symbol Int
     2: Int64 1
     3: Int64 2
     4: Int64 3
     5: Int64 4
   typ: Any

 julia> getindex(Int, 1, 2, 3, 4)
 4-element Array{Int64,1}:
  1
  2
  3
  4


 So now I am not so sure about my claim that most concatenation doesn't 
 go through function calls. However, I don't think that the rest of the 
 concatenating heads (:vect, :vcat, :typed_vcat, etc.) lower to getindex:

 julia> @which([1, 2, 3, 4])
 vect{T}(X::T...) at abstractarray.jl:13

 julia> @which([1; 2; 3; 4])
 vcat{T<:Number}(X::T...) at abstractarray.jl:643

 they seem instead to call 'vect', 'vcat', etc. themselves on the 
 array arguments.
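
 One can inspect those heads directly (a sketch; the exact heads differ 
 between 0.3 and 0.4):

 ```julia
 :([1, 2, 3]).head     # :vect on 0.4 (it was :vcat on 0.3)
 :([1; 2; 3]).head     # :vcat
 :(Int[1, 2, 3]).head  # :ref, which is why it lowers to getindex
 ```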



Re: [julia-users] subtracting two uint8's results in a Uint64?

2015-06-17 Thread Stefan Karpinski
After much discussion this was changed in 0.4.


 On Jun 17, 2015, at 6:33 PM, Phil Tomson philtom...@gmail.com wrote:
 
 Maybe this is expected, but it was a bit of a surprise to me:
 
  julia> function foo()
             red::Uint8 = 0x33
             blue::Uint8 = 0x36
             (red-blue)
         end
  julia> foo()
  0xfffffffffffffffd
  julia> typeof(foo())
  Uint64
 
 The fact that it overflowed wasn't surprising, but the fact that it got 
 converted to a Uint64 is a bit surprising (it ended up being a very large 
 number that got used in other calculations later which led to odd results) . 
 So it looks like all of the math operators will always promote to the largest 
 size (but keep the same signed or unsignedness).
 
  I'm wondering if it might make more sense if:
  Uint8 - Uint8 -> Uint8
  Or more generally: UintN op UintN -> UintN ?
  and:  IntN op IntN -> IntN
 
 
 
 


[julia-users] subtracting two uint8's results in a Uint64?

2015-06-17 Thread Phil Tomson
Maybe this is expected, but it was a bit of a surprise to me:

 julia> function foo()
            red::Uint8 = 0x33
            blue::Uint8 = 0x36
            (red-blue)
        end
julia> foo()
0xfffffffffffffffd
julia> typeof(foo())
Uint64

The fact that it overflowed wasn't surprising, but the fact that it got 
converted to a Uint64 is a bit surprising (it ended up being a very large 
number that got used in other calculations later which led to odd results) 
. So it looks like all of the math operators will always promote to the 
largest size (but keep the same signed or unsignedness).

I'm wondering if it might make more sense if:
Uint8 - Uint8 -> Uint8
Or more generally: UintN op UintN -> UintN ?
and:  IntN op IntN -> IntN






Re: [julia-users] subtracting two uint8's results in a Uint64?

2015-06-17 Thread Jacob Quinn
This has been changed on 0.4.

https://github.com/JuliaLang/julia/issues/3759

-Jacob

On Wed, Jun 17, 2015 at 4:33 PM, Phil Tomson philtom...@gmail.com wrote:








Re: [julia-users] Help: My parallel code 8300x slower than the serial.

2015-06-17 Thread Daniel Carrera
Sadly, this function is pretty close to the real workload. I do n-body
simulations of planetary systems. In this problem, the value of n is
small, but the simulation runs for a very long time. So this nested for
loop will be called maybe 10 billion times in the course of one simulation,
and that's where all the CPU time goes.

I already have simulation software that works well enough for this. I just
wanted to experiment with Julia to see if this could be made parallel. An
irritating problem with all the codes that solve planetary systems is that
they are all serial -- this problem is apparently hard to parallelize.


Cheers,
Daniel.



On 17 June 2015 at 17:02, Tim Holy tim.h...@gmail.com wrote:

 You're copying a lot of data between processes. Check out SharedArrays.
 But I
 still fear that if each job is tiny, you may not get as much benefit
 without
 further restructuring.

 I trust that your real workload will take more than 1ms. Otherwise, it's
 very unlikely that your experiments in parallel programming will end up
 saving
 you time :-).

 --Tim

 On Wednesday, June 17, 2015 06:37:28 AM Daniel Carrera wrote:
  Hi everyone,
 
  My adventures with parallel programming with Julia continue. Here is a
  different issue from other threads: My parallel function is 8300x slower
  than my serial function even though I am running on 4 processes on a
  multi-core machine.
 
  julia> nprocs()
  4
 
  I have Julia 0.3.8. Here is my program in its entirety (not very long).
 
  function main()
 
  nbig::Int16 = 7
  nbod::Int16 = nbig
  bod  = Float64[
  0   1  2  3  4  5  6  # x position
  0   0  0  0  0  0  0  # y position
  0   0  0  0  0  0  0  # z position
  0   0  0  0  0  0  0  # x velocity
  0   0  0  0  0  0  0  # y velocity
  0   0  0  0  0  0  0  # z velocity
  1   1  1  1  1  1  1  # Mass
  ]
 
  a = zeros(3,nbod)
 
  @time for k = 1:1000
  gravity_1!(bod, nbig, nbod, a)
  end
  println(a[1,:])
 
  @time for k = 1:1000
  gravity_2!(bod, nbig, nbod, a)
  end
  println(a[1,:])
  end
 
  function gravity_1!(bod, nbig, nbod, a)
 
  for i = 1:nbod
  a[1,i] = 0.0
  a[2,i] = 0.0
  a[3,i] = 0.0
  end
 
  @inbounds for i = 1:nbig
  for j = (i + 1):nbod
 
  dx = bod[1,j] - bod[1,i]
  dy = bod[2,j] - bod[2,i]
  dz = bod[3,j] - bod[3,i]
 
  s_1 = 1.0 / sqrt(dx*dx+dy*dy+dz*dz)
  s_3 = s_1 * s_1 * s_1
 
  tmp1 = s_3 * bod[7,i]
  tmp2 = s_3 * bod[7,j]
 
  a[1,j] = a[1,j] - tmp1*dx
  a[2,j] = a[2,j] - tmp1*dy
  a[3,j] = a[3,j] - tmp1*dz
 
  a[1,i] = a[1,i] + tmp2*dx
  a[2,i] = a[2,i] + tmp2*dy
  a[3,i] = a[3,i] + tmp2*dz
  end
  end
  return a
  end
 
  function gravity_2!(bod, nbig, nbod, a)
 
  for i = 1:nbod
  a[1,i] = 0.0
  a[2,i] = 0.0
  a[3,i] = 0.0
  end
 
  @inbounds @sync @parallel for i = 1:nbig
  for j = (i + 1):nbod
 
  dx = bod[1,j] - bod[1,i]
  dy = bod[2,j] - bod[2,i]
  dz = bod[3,j] - bod[3,i]
 
  s_1 = 1.0 / sqrt(dx*dx+dy*dy+dz*dz)
  s_3 = s_1 * s_1 * s_1
 
  tmp1 = s_3 * bod[7,i]
  tmp2 = s_3 * bod[7,j]
 
  a[1,j] = a[1,j] - tmp1*dx
  a[2,j] = a[2,j] - tmp1*dy
  a[3,j] = a[3,j] - tmp1*dz
 
  a[1,i] = a[1,i] + tmp2*dx
  a[2,i] = a[2,i] + tmp2*dy
  a[3,i] = a[3,i] + tmp2*dz
  end
  end
  return a
  end
 
 
 
  So this is a straight forward N-body gravity calculation. Yes, I realize
  that gravity_2!() is wrong, but that's fine. Right now I'm just talking
  about the CPU time. When I run this on my computer I get:
 
  julia> main()
  elapsed time: 0.000475294 seconds (0 bytes allocated)
  [1.49138889 0.4636 0.1736
  -5.551115123125783e-17 -0.17366 -0.46361112
  -1.49138889]
  elapsed time: 3.953546654 seconds (126156320 bytes allocated, 13.49% gc
  time)
  [0.0 0.0 0.0 0.0 0.0 0.0 0.0]
 
 
  So, the serial version takes 0.000475 seconds and the parallel takes 3.95
  seconds. Furthermore, the parallel version is calling the garbage
  collector. I suspect that the problem has something to do with the memory
  access. Maybe the parallel code is wasting a lot of time copying
 variables
  in memory. But whatever the reason, this is bad. The documentation says
  that @parallel is supposed to be fast, even for very small loops, but
  that's not what I'm seeing. A non-buggy implementation will be even
 slower.
 
  Have I missed something? Is there an obvious error in 
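Tim's SharedArrays suggestion can be sketched roughly as follows (a hedged, untested sketch using the 0.3-era `SharedArray` constructor, with shapes matching `bod` and `a` above; note the concurrent writes to `a` still race across workers, just as in `gravity_2!`):

```julia
# Back bod and a with SharedArrays so @parallel workers read and write
# the same memory instead of copying the arrays on every call.
nbod = 7
bod = SharedArray(Float64, (7, nbod))   # positions/velocities/masses
a   = SharedArray(Float64, (3, nbod))   # accelerations

@sync @parallel for i = 1:(nbod - 1)
    for j = (i + 1):nbod
        dx = bod[1, j] - bod[1, i]      # same body math as gravity_1!
        a[1, i] += dx                   # (force terms elided for brevity;
    end                                 #  cross-worker writes still race)
end
```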

Re: [julia-users] How to manually install julia packages on a Windows system

2015-06-17 Thread Yonatan Tekleab
yea, i figured the same thing since I am on the same system using https 
through my browser, but for some reason that I don't understand, Julia 
won't add/update packages, even when git is configured to use https

On Wednesday, June 17, 2015 at 3:04:26 PM UTC-4, Isaiah wrote:

 Are you using the gmail web interface from this same system? If so, then 
 https:// should, in principle, be available and work for git too...

 On the other hand, if you are using a separate (windows) system for gmail, 
 then you ought to be able to run Pkg.install/build on the second system, 
 get all requirements you need, and then copy your 
 C:/Users/USERNAME/.julia/v0.# directory onto the firewalled system. This 
 is tricky/unreliable on linux, but should be quite simple on windows as 
 long as both systems are same word size -- both 32-bit or 64-bit (because 
 of Microsoft's ABI permanence).

 On Wed, Jun 17, 2015 at 2:31 PM, Yonatan Tekleab ytek...@gmail.com 
 javascript: wrote:

 I think I'm moving in the right direction. I downloaded several packages 
 that IJulia depends on and put them in the ~/.julia/v0.3 directory along 
 with the IJulia package itself. Before I was just sticking them in the 
 ~/.julia directory, and I don't think Julia was seeing the packages.

 When trying `using IJulia`, I get "ERROR: ZMQ not properly installed. 
 Please run Pkg.build("ZMQ")". When I run that command, it tries to build 
 Homebrew, WinRPM, and ZMQ, all of which have their own errors.
 Homebrew: could not spawn setenv(`git rev-parse --git-dir`; 
 dir="P:\\.julia\\v0.3\\Homebrew\\deps\\usr"): no such file or directory 
 (ENOENT)
 WinRPM: update not defined
 ZMQ: RPM not defined

 When I try `Pkg.build("IJulia")`, it tries to build Homebrew, WinRPM, 
 Nettle, ZMQ, and IJulia.  I get errors for all except IJulia.  The 
 Homebrew, WinRPM and ZMQ errors are the same.  For Nettle, I get: RPM not 
 defined

 Now I can open an IJulia instance, but the kernel dies shortly after it 
 comes up. The command window states "ERROR: ZMQ not properly installed. 
 Please run Pkg.build("ZMQ")". Then it attempts to restart the kernel and 
 repeats the process.


 On Wednesday, June 17, 2015 at 12:31:22 AM UTC-4, Tony Kelman wrote:

 Can you do `using IJulia`, and/or `Pkg.build("IJulia")` ? Note also that 
 IJulia depends on several other packages, indicated in the REQUIRE file 
 (and those packages may have other dependencies of their own).


 On Tuesday, June 16, 2015 at 3:40:14 PM UTC-7, Yonatan Tekleab wrote:

 Hi Stefan,

 I'm having the same problem.  Unfortunately the firewall I'm behind is 
 clever enough to prevent me from re-configuring git to use https, as many 
 other threads have indicated.

 I downloaded the master branch IJulia package from 
 https://github.com/JuliaLang/IJulia.jl, extracted the folder, placed 
 it inside the ~/.julia folder, then removed the .jl-master suffix.  This 
 still isn't working for me.  When I try to open IJulia from the command 
 prompt (ipython notebook --profile julia), it pulls up the typical 
 IPython notebook.

 Any thoughts on what I'm doing wrong?

 Thanks in advance.

 On Thursday, October 31, 2013 at 10:16:06 AM UTC-4, Stefan Karpinski 
 wrote:

 If you just make sure that the package source exists in ~/.julia, that 
 should do the trick. In fact, you don't need to mess around with the 
 package manager at all – Pkg commands will fail but loading packages 
 should 
 work fine. Unfortunately, building packages with binary dependencies will 
 likely fail, but if you stick with pure-Julia packages, you should be ok.


 On Thu, Oct 31, 2013 at 7:51 AM, Able Mashamba amas...@gmail.com 
 wrote:

 Dear Informed,

 Is there a way to manually install julia packages on a Windows system 
 that has a proxy.pac config system with a paranoid firewall. I have 
 downloaded the packages I need and would want to install them manually 
 as 
 it appears Internet permission settings at my institution are making all 
 Pkg.*() commands fail.

                 _
     _       _ _(_)_     |  A fresh approach to technical computing
    (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
     _ _   _| |_  __ _   |  Type "help()" to list help topics
    | | | | | | |/ _` |  |
    | | |_| | | | (_| |  |  Version 0.2.0-rc2
   _/ |\__'_|_|_|\__'_|  |  Commit b372a68 2013-10-26 02:06:56 UTC
  |__/                   |  i686-w64-mingw32

  julia> Pkg.add("Distributions")
 INFO: Initializing package repository C:\Users\amashamba\.julia
 INFO: Cloning METADATA from git://github.com/JuliaLang/METADATA.jl
 fatal: unable to connect to github.com:
 github.com[0: 192.30.252.130]: errno=No error

 ERROR: failed process: Process(`git clone -q -b metadata-v2 git://
 github.com/Jul
 iaLang/METADATA.jl METADATA`, ProcessExited(128)) [128]

  julia>









Re: [julia-users] Re: What's [] called? ie How do I refer to the array construction function?

2015-06-17 Thread Jameson Nash
In 0.4, you can now construct vector types directly:
Vector{T}(dims...)
making construction of some arrays a bit clearer.

But I think the array equivalent of `tuple` is `vcat`.
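A quick illustration of that `tuple`/`vcat` analogy (a sketch; the shapes shown are the usual 0.3/0.4 ones):

```julia
# tuple collects its arguments into a tuple; vcat collects them into
# an Array, so it can be passed around as a first-class builder.
t = tuple(1, 2, 3)   # (1,2,3)
v = vcat(1, 2, 3)    # 3-element Array{Int64,1}: [1,2,3]
build = vcat         # a function value you can hand to a parser etc.
build(4, 5)          # [4,5]
```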


On Wed, Jun 17, 2015 at 5:20 PM andrew cooke and...@acooke.org wrote:

 thanks for all the replies.

 i really wanted something that took *only* the contents of the array,
 while getindex has the type too.  so i've defined my own array function:

 array(T) = (builder(x...) = T[x...])

 which i can use as, say, array(Any).

 (the final use case is that this is given to a parser and is used to
 construct an array from the results, in much the same way as i can
 currently pass in the function tuple).

 i'd be interested if there's some nicer (ie using curly brackets) way of
 handling the type, T, or some way of defining anon functions (to avoid
 explicitly naming builder) with interpolated args.

 cheers,
 andrew



 On Wednesday, 17 June 2015 12:06:50 UTC-3, David Gold wrote:

 Soo, in summary (and I do apologize for spamming this thread; I don't
 usually drink coffee, and when I do it's remarkable):

 1) Mauro and Avik seem to be right about the common use case Any[1, 2, 3,
 4] lowering to getindex -- my apologies for hastily suggesting otherwise
 2) I suspect I was wrong to say that concatenation in general doesn't go
 through a function. It probably does, just not always through getindex
 (sometimes vcat, typed_vcat, vect, etc).

 On Wednesday, June 17, 2015 at 10:15:00 AM UTC-4, David Gold wrote:

 However, it does look like this use pattern does resolve to 'getindex',

  julia> dump(:( Int[1, 2, 3, 4] ))
  Expr
    head: Symbol ref
    args: Array(Any,(5,))
      1: Symbol Int
      2: Int64 1
      3: Int64 2
      4: Int64 3
      5: Int64 4
    typ: Any

  julia> getindex(Int, 1, 2, 3, 4)
  4-element Array{Int64,1}:
   1
   2
   3
   4

  So now I am not so sure about my claim that most concatenation doesn't
  go through function calls. However, I don't think that the rest of the
  concatenating heads (:vect, :vcat, :typed_vcat, etc.) lower to getindex:

  julia> @which([1, 2, 3, 4])
  vect{T}(X::T...) at abstractarray.jl:13

  julia> @which([1; 2; 3; 4])
  vcat{T<:Number}(X::T<:Number...) at abstractarray.jl:643

  they seem instead to call the type 'vect', 'vcat', etc. itself on the
  array arguments.




Re: [julia-users] Reading from and writing to a process using a pipe

2015-06-17 Thread elextr


On Thursday, June 18, 2015 at 3:05:26 AM UTC+10, Kevin Squire wrote:

 `open(cmd, "w")` gives back a tuple.  Try using

 f, p = open(`gnuplot`, "w")
 write(f, "plot sin(x)")

 There was a bit of discussion when this change was made (I couldn't find 
 it with a quick search), 


https://github.com/JuliaLang/julia/issues/9659
 

 about this returning a tuple--it's a little unintuitive, and could be 
 `fixed` in a few different ways (easiest: returning a complex type that can 
 be written to and read from), but it's probably been off most people's 
 radar.  If you're up for it, why don't you open an issue (if one doesn't 
 exist).

 Anyway, for your particular application, you probably want `readandwrite`:

 help? readandwrite
 search: readandwrite

 Base.readandwrite(command)

Starts running a command asynchronously, and returns a tuple
(stdout,stdin,process) of the output stream and input stream of the
process, and the process object itself.

 Which *also* returns a tuple (but at least now you know).

 See also http://blog.leahhanson.us/running-shell-commands-from-julia.html, 
 which has a full rundown of reading and writing from processes.

 Cheers!
Kevin

 On Wed, Jun 17, 2015 at 9:03 AM, Miguel Bazdresch eorl...@gmail.com 
 javascript: wrote:

 Hello,

 Gaston.jl is a plotting package based on gnuplot. Gnuplot is command-line 
 tool, so I send commands to it via a pipe. I open the pipe (on Linux) with 
 a ccall to popen, and write gnuplot commands to the pipe using a ccall to 
 fputs.

 This works fine, but I'm trying to see if Julia's native pipe and stream 
 functionality can make this process more Julian and, in the process, more 
 cross-platform. The documentation is encouraging:

 "You can use [a Cmd] object to connect the command to others via pipes, 
 run it, and read or write to it." and "Julia provides a rich interface to 
 deal with streaming I/O objects such as terminals, pipes and TCP sockets." 
 Unfortunately, I just can't figure out how to use Julia's functionality for 
 this purpose. This is what I've tried (I am on Julia 0.3.9):

 First, I tried using `open` with read and write:

 julia> f = open(`gnuplot`, "r+")
 ERROR: ArgumentError("mode must be \"r\" or \"w\", not \"r+\"")

 So I tried with write only:

 julia> f = open(`gnuplot`, "w")
 (Pipe(open, 0 bytes waiting),Process(`gnuplot`, ProcessRunning))

 So far, this looks good. I can see a gnuplot process running.

 Then I try to `write` to the pipe:

 julia> write(f, "plot sin(x)")
 ERROR: `write` has no method matching write(::(Pipe,Process), 
 ::ASCIIString)

 OK, so let's try with `println`:

 julia> println(f, "plot sin(x)")
 (Pipe(open, 0 bytes waiting),Process(`gnuplot`, ProcessRunning))plot 
 sin(x)

 and no plot is produced.

 I can't figure out how to read from the pipe, either:

 julia> readbytes(f)
 ERROR: `readbytes` has no method matching readbytes(::(Pipe,Process))

 julia> readall(f)
 ERROR: `readall` has no method matching readall(::(Pipe,Process))

 I'd appreciate any pointers. Thanks!

 -- mb
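
For the archives, a hedged sketch of what the `readandwrite` approach might look like for gnuplot (untested; the stream names follow the help text quoted above, and the `sleep` is a crude stand-in for real synchronization):

```julia
# Spawn gnuplot and get handles to its stdout, stdin and process object.
(gp_out, gp_in, proc) = readandwrite(`gnuplot`)

println(gp_in, "set terminal dumb")  # ASCII output so we can read it back
println(gp_in, "plot sin(x)")
flush(gp_in)

sleep(1)                             # crude: give gnuplot time to respond
println(bytestring(readavailable(gp_out)))
close(gp_in)                         # closing stdin lets gnuplot exit
```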




Re: [julia-users] Reading from and writing to a process using a pipe

2015-06-17 Thread Miguel Bazdresch
Thanks! `readandwrite` looks exactly like what I need. I'll try it out.

I don't mind too much that this (and related) commands return a tuple,
although I'd prefer the alternative discussed in issue 9659. What I'd much
prefer is for them to be documented under "Running external programs", and
not under "Essentials", where I never thought to look for them. Once I get
a better handle for how they work, I'll try to improve the docs in this
respect.

-- mb

On Wed, Jun 17, 2015 at 1:05 PM, Kevin Squire kevin.squ...@gmail.com
wrote:

 `open(cmd, "w")` gives back a tuple.  Try using

 f, p = open(`gnuplot`, "w")
 write(f, "plot sin(x)")

 There was a bit of discussion when this change was made (I couldn't find
 it with a quick search), about this returning a tuple--it's a little
 unintuitive, and could be `fixed` in a few different ways (easiest:
 returning a complex type that can be written to and read from), but it's
 probably been off most people's radar.  If you're up for it, why don't you
 open an issue (if one doesn't exist).

 Anyway, for your particular application, you probably want `readandwrite`:

 help? readandwrite
 search: readandwrite

 Base.readandwrite(command)

Starts running a command asynchronously, and returns a tuple
(stdout,stdin,process) of the output stream and input stream of the
process, and the process object itself.

 Which *also* returns a tuple (but at least now you know).

 See also http://blog.leahhanson.us/running-shell-commands-from-julia.html,
 which has a full rundown of reading and writing from processes.

 Cheers!
Kevin

 On Wed, Jun 17, 2015 at 9:03 AM, Miguel Bazdresch eorli...@gmail.com
 wrote:

 Hello,

 Gaston.jl is a plotting package based on gnuplot. Gnuplot is command-line
 tool, so I send commands to it via a pipe. I open the pipe (on Linux) with
 a ccall to popen, and write gnuplot commands to the pipe using a ccall to
 fputs.

 This works fine, but I'm trying to see if Julia's native pipe and stream
 functionality can make this process more Julian and, in the process, more
 cross-platform. The documentation is encouraging:

 "You can use [a Cmd] object to connect the command to others via pipes,
 run it, and read or write to it." and "Julia provides a rich interface to
 deal with streaming I/O objects such as terminals, pipes and TCP sockets."
 Unfortunately, I just can't figure out how to use Julia's functionality for
 this purpose. This is what I've tried (I am on Julia 0.3.9):

 First, I tried using `open` with read and write:

 julia> f = open(`gnuplot`, "r+")
 ERROR: ArgumentError("mode must be \"r\" or \"w\", not \"r+\"")

 So I tried with write only:

 julia> f = open(`gnuplot`, "w")
 (Pipe(open, 0 bytes waiting),Process(`gnuplot`, ProcessRunning))

 So far, this looks good. I can see a gnuplot process running.

 Then I try to `write` to the pipe:

 julia> write(f, "plot sin(x)")
 ERROR: `write` has no method matching write(::(Pipe,Process),
 ::ASCIIString)

 OK, so let's try with `println`:

 julia> println(f, "plot sin(x)")
 (Pipe(open, 0 bytes waiting),Process(`gnuplot`, ProcessRunning))plot
 sin(x)

 and no plot is produced.

 I can't figure out how to read from the pipe, either:

 julia> readbytes(f)
 ERROR: `readbytes` has no method matching readbytes(::(Pipe,Process))

 julia> readall(f)
 ERROR: `readall` has no method matching readall(::(Pipe,Process))

 I'd appreciate any pointers. Thanks!

 -- mb





Re: [julia-users] Help: My parallel code 8300x slower than the serial.

2015-06-17 Thread Miguel Bazdresch
Can you arrange the problem so that you send each CPU a few seconds of
work? The overhead would become negligible.

-- mb

On Wed, Jun 17, 2015 at 4:38 PM, Daniel Carrera dcarr...@gmail.com wrote:

 Sadly, this function is pretty close to the real workload. I do n-body
 simulations of planetary systems. In this problem, the value of n is
 small, but the simulation runs for a very long time. So this nested for
 loop will be called maybe 10 billion times in the course of one simulation,
 and that's where all the CPU time goes.

 I already have simulation software that works well enough for this. I just
 wanted to experiment with Julia to see if this could be made parallel. An
 irritating problem with all the codes that solve planetary systems is that
 they are all serial -- this problem is apparently hard to parallelize.


 Cheers,
 Daniel.



 On 17 June 2015 at 17:02, Tim Holy tim.h...@gmail.com wrote:

 You're copying a lot of data between processes. Check out SharedArrays.
 But I
 still fear that if each job is tiny, you may not get as much benefit
 without
 further restructuring.

 I trust that your real workload will take more than 1ms. Otherwise, it's
 very unlikely that your experiments in parallel programming will end up
 saving
 you time :-).

 --Tim

 On Wednesday, June 17, 2015 06:37:28 AM Daniel Carrera wrote:
  Hi everyone,
 
  My adventures with parallel programming with Julia continue. Here is a
  different issue from other threads: My parallel function is 8300x slower
  than my serial function even though I am running on 4 processes on a
  multi-core machine.
 
  julia> nprocs()
  4
 
  I have Julia 0.3.8. Here is my program in its entirety (not very long).
 
  function main()
 
  nbig::Int16 = 7
  nbod::Int16 = nbig
  bod  = Float64[
  0   1  2  3  4  5  6  # x position
  0   0  0  0  0  0  0  # y position
  0   0  0  0  0  0  0  # z position
  0   0  0  0  0  0  0  # x velocity
  0   0  0  0  0  0  0  # y velocity
  0   0  0  0  0  0  0  # z velocity
  1   1  1  1  1  1  1  # Mass
  ]
 
  a = zeros(3,nbod)
 
  @time for k = 1:1000
  gravity_1!(bod, nbig, nbod, a)
  end
  println(a[1,:])
 
  @time for k = 1:1000
  gravity_2!(bod, nbig, nbod, a)
  end
  println(a[1,:])
  end
 
  function gravity_1!(bod, nbig, nbod, a)
 
  for i = 1:nbod
  a[1,i] = 0.0
  a[2,i] = 0.0
  a[3,i] = 0.0
  end
 
  @inbounds for i = 1:nbig
  for j = (i + 1):nbod
 
  dx = bod[1,j] - bod[1,i]
  dy = bod[2,j] - bod[2,i]
  dz = bod[3,j] - bod[3,i]
 
  s_1 = 1.0 / sqrt(dx*dx+dy*dy+dz*dz)
  s_3 = s_1 * s_1 * s_1
 
  tmp1 = s_3 * bod[7,i]
  tmp2 = s_3 * bod[7,j]
 
  a[1,j] = a[1,j] - tmp1*dx
  a[2,j] = a[2,j] - tmp1*dy
  a[3,j] = a[3,j] - tmp1*dz
 
  a[1,i] = a[1,i] + tmp2*dx
  a[2,i] = a[2,i] + tmp2*dy
  a[3,i] = a[3,i] + tmp2*dz
  end
  end
  return a
  end
 
  function gravity_2!(bod, nbig, nbod, a)
 
  for i = 1:nbod
  a[1,i] = 0.0
  a[2,i] = 0.0
  a[3,i] = 0.0
  end
 
  @inbounds @sync @parallel for i = 1:nbig
  for j = (i + 1):nbod
 
  dx = bod[1,j] - bod[1,i]
  dy = bod[2,j] - bod[2,i]
  dz = bod[3,j] - bod[3,i]
 
  s_1 = 1.0 / sqrt(dx*dx+dy*dy+dz*dz)
  s_3 = s_1 * s_1 * s_1
 
  tmp1 = s_3 * bod[7,i]
  tmp2 = s_3 * bod[7,j]
 
  a[1,j] = a[1,j] - tmp1*dx
  a[2,j] = a[2,j] - tmp1*dy
  a[3,j] = a[3,j] - tmp1*dz
 
  a[1,i] = a[1,i] + tmp2*dx
  a[2,i] = a[2,i] + tmp2*dy
  a[3,i] = a[3,i] + tmp2*dz
  end
  end
  return a
  end
 
 
 
  So this is a straight forward N-body gravity calculation. Yes, I realize
  that gravity_2!() is wrong, but that's fine. Right now I'm just talking
  about the CPU time. When I run this on my computer I get:
 
  julia> main()
  elapsed time: 0.000475294 seconds (0 bytes allocated)
  [1.49138889 0.4636 0.1736
  -5.551115123125783e-17 -0.17366 -0.46361112
  -1.49138889]
  elapsed time: 3.953546654 seconds (126156320 bytes allocated, 13.49% gc
  time)
  [0.0 0.0 0.0 0.0 0.0 0.0 0.0]
 
 
  So, the serial version takes 0.000475 seconds and the parallel takes
 3.95
  seconds. Furthermore, the parallel version is calling the garbage
  collector. I suspect that the problem has something to do with the
 memory
  access. Maybe the parallel code is wasting a lot of time copying
 variables
  in memory. But whatever the reason, this is bad. The documentation 

Re: [julia-users] Re: What's [] called? ie How do I refer to the array construction function?

2015-06-17 Thread andrew cooke
thanks for all the replies.

i really wanted something that took *only* the contents of the array, while 
getindex has the type too.  so i've defined my own array function:

array(T) = (builder(x...) = T[x...])

which i can use as, say, array(Any).

(the final use case is that this is given to a parser and is used to 
construct an array from the results, in much the same way as i can 
currently pass in the function tuple).

i'd be interested if there's some nicer (ie using curly brackets) way of 
handling the type, T, or some way of defining anon functions (to avoid 
explicitly naming builder) with interpolated args.

cheers,
andrew
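
A small sketch of the builder in use (the call sites here are hypothetical, added only for illustration):

```julia
# array(T) returns a closure that collects its arguments into a Vector{T}.
array(T) = (builder(x...) = T[x...])

f = array(Any)
f(1, "two", 3.0)   # 3-element Array{Any,1}
g = array(Int)
g(1, 2, 3)         # a Vector{Int} holding 1, 2, 3
```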


On Wednesday, 17 June 2015 12:06:50 UTC-3, David Gold wrote:

 Soo, in summary (and I do apologize for spamming this thread; I don't 
 usually drink coffee, and when I do it's remarkable):

 1) Mauro and Avik seem to be right about the common use case Any[1, 2, 3, 
 4] lowering to getindex -- my apologies for hastily suggesting otherwise
 2) I suspect I was wrong to say that concatenation in general doesn't go 
 through a function. It probably does, just not always through getindex 
 (sometimes vcat, typed_vcat, vect, etc).

 On Wednesday, June 17, 2015 at 10:15:00 AM UTC-4, David Gold wrote:

 However, it does look like this use pattern does resolve to 'getindex',

  julia> dump(:( Int[1, 2, 3, 4] ))
  Expr
    head: Symbol ref
    args: Array(Any,(5,))
      1: Symbol Int
      2: Int64 1
      3: Int64 2
      4: Int64 3
      5: Int64 4
    typ: Any

  julia> getindex(Int, 1, 2, 3, 4)
  4-element Array{Int64,1}:
   1
   2
   3
   4

  So now I am not so sure about my claim that most concatenation doesn't go
  through function calls. However, I don't think that the rest of the
  concatenating heads (:vect, :vcat, :typed_vcat, etc.) lower to getindex:

  julia> @which([1, 2, 3, 4])
  vect{T}(X::T...) at abstractarray.jl:13

  julia> @which([1; 2; 3; 4])
  vcat{T<:Number}(X::T<:Number...) at abstractarray.jl:643

  they seem instead to call the type 'vect', 'vcat', etc. itself on the
  array arguments.



Re: [julia-users] How to manually install julia packages on a Windows system

2015-06-17 Thread Yonatan Tekleab
Okay, at home I have a mac and so I just downloaded the packages that 
`Pkg.add("IJulia")` installed on my home machine, because I wasn't sure 
what packages IJulia depended on, and I didn't want to sift through the 
source code.  So I've now deleted the Homebrew directory.

`using WinRPM` does nothing... no errors.  It just returns the julia prompt 
again.

If it helps any, i'm attaching a screenshot of the Julia packages I have 
downloaded and placed into ~/.julia/v0.3/

I appreciate the help.


On Wednesday, June 17, 2015 at 2:51:40 PM UTC-4, Tony Kelman wrote:

 So, to explain this a little, IJulia.jl depends on ZMQ.jl for interprocess 
 communication. ZMQ.jl is a Julia wrapper around the C++ libzmq library, so 
 the Julia package needs to download an architecture-specific compiled 
 version of the library (or use a pre-existing installed copy from a package 
 manager, or build from source) before it can work. Homebrew.jl is a Julia 
 wrapper around the homebrew package manager which we use for binary package 
 installation on Macs, and WinRPM.jl is an RPM-metadata parser that 
 downloads cross-compiled Windows binaries from the OpenSUSE build service.

 If you're on a Windows machine, you should never need to use Homebrew.jl, 
 anywhere that is mentioned in the REQUIRE file should be preceded with the 
 @osx modifier which denotes that it only applies on OS X. On Windows, the 
 zmq library will come from WinRPM.jl. What happens if you just run `using 
 WinRPM` from a freshly-started Julia REPL?

 WinRPM itself will need internet access - it doesn't use the same 
 mechanism as Pkg does to download binaries, but it could easily run afoul 
 of a paranoid proxy. Let's see.


 On Wednesday, June 17, 2015 at 11:31:35 AM UTC-7, Yonatan Tekleab wrote:

 I think I'm moving in the right direction. I downloaded several packages 
 that IJulia depends on and put them in the ~/.julia/v0.3 directory along 
 with the IJulia package itself. Before I was just sticking them in the 
 ~/.julia directory, and I don't think Julia was seeing the packages.

 When trying `using IJulia`, I get "ERROR: ZMQ not properly installed. 
 Please run Pkg.build("ZMQ")". When I run that command, it tries to build 
 Homebrew, WinRPM, and ZMQ, all of which have their own errors.
 Homebrew: could not spawn setenv(`git rev-parse --git-dir`; 
 dir="P:\\.julia\\v0.3\\Homebrew\\deps\\usr"): no such file or directory 
 (ENOENT)
 WinRPM: update not defined
 ZMQ: RPM not defined

 When I try `Pkg.build("IJulia")`, it tries to build Homebrew, WinRPM, 
 Nettle, ZMQ, and IJulia.  I get errors for all except IJulia.  The 
 Homebrew, WinRPM and ZMQ errors are the same.  For Nettle, I get: RPM not 
 defined

 Now I can open an IJulia instance, but the kernel dies shortly after it 
 comes up. The command window states "ERROR: ZMQ not properly installed. 
 Please run Pkg.build("ZMQ")". Then it attempts to restart the kernel and 
 repeats the process.


 On Wednesday, June 17, 2015 at 12:31:22 AM UTC-4, Tony Kelman wrote:

 Can you do `using IJulia`, and/or `Pkg.build("IJulia")` ? Note also that 
 IJulia depends on several other packages, indicated in the REQUIRE file 
 (and those packages may have other dependencies of their own).


 On Tuesday, June 16, 2015 at 3:40:14 PM UTC-7, Yonatan Tekleab wrote:

 Hi Stefan,

 I'm having the same problem.  Unfortunately the firewall I'm behind is 
 clever enough to prevent me from re-configuring git to use https, as many 
 other threads have indicated.

 I downloaded the master branch IJulia package from 
 https://github.com/JuliaLang/IJulia.jl, extracted the folder, placed 
 it inside the ~/.julia folder, then removed the .jl-master suffix.  This 
 still isn't working for me.  When I try to open IJulia from the command 
 prompt (ipython notebook --profile julia), it pulls up the typical 
 IPython notebook.

 Any thoughts on what I'm doing wrong?

 Thanks in advance.

 On Thursday, October 31, 2013 at 10:16:06 AM UTC-4, Stefan Karpinski 
 wrote:

 If you just make sure that the package source exists in ~/.julia, that 
 should do the trick. In fact, you don't need to mess around with the 
 package manager at all – Pkg commands will fail but loading packages 
 should 
 work fine. Unfortunately, building packages with binary dependencies will 
 likely fail, but if you stick with pure-Julia packages, you should be ok.


 On Thu, Oct 31, 2013 at 7:51 AM, Able Mashamba amas...@gmail.com 
 wrote:

 Dear Informed,

 Is there a way to manually install julia packages on a Windows system 
 that has a proxy.pac config system with a paranoid firewall. I have 
 downloaded the packages I need and would want to install them manually 
 as 
 it appears Internet permission settings at my institution are making all 
 Pkg.*() commands fail.

_
_   _ _(_)_ |  A fresh approach to technical computing
   (_) | (_) (_)|  Documentation: http://docs.julialang.org
_ _   _| |_  __ _   |  Type help() to list help topics
   | | 

Re: [julia-users] Help: My parallel code 8300x slower than the serial.

2015-06-17 Thread Angel de Vicente
Hi,

Daniel Carrera dcarr...@gmail.com writes:
 I already have simulation software that works well enough for this. I
 just wanted to experiment with Julia to see if this could be made
 parallel. An irritating problem with all the codes that solve
 planetary systems is that they are all serial -- this problem is
 apparently hard to parallelize.

I was not very lucky with getting Julia in parallel to perform
efficiently, but (and this is a bit off-topic) parallelizing simple
N-body codes is quite easy. I have a demo code for this in Fortran. The
serial part just does basically the same as your Julia code (plus
calculate also the new velocities and positions for all the bodies):

,
|   DO t = 0.0, t_end, dt
|  v = v + a * dt/2
|  r = r + v * dt
| 
|  a = 0.0
|  DO i = 1,n
| DO j = i+1,n
|rji = r(j,:) - r(i,:)
|r2 = SUM(rji**2)
|r3 = r2 * SQRT(r2)
|a(i,:) = a(i,:) + m(j) * rji / r3
|a(j,:) = a(j,:) - m(i) * rji / r3
| END DO
|  END DO
|  
|  v = v + a * dt/2
| 
|  t_out = t_out + dt
|  IF (t_out >= dt_out) THEN
| DO i = 1,n
|PRINT*, r(i,:)
| END DO
| t_out = 0.0
|  END IF
| 
|   END DO
`
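
For comparison, the Fortran leapfrog above transcribes to Julia roughly as follows (a sketch, not a tuned implementation; it assumes `r`, `v`, `a` are n×3 arrays and `m` is a length-n vector, mirroring the Fortran shapes):

```julia
# One kick-drift-kick leapfrog step, transcribed from the Fortran above.
function step!(r, v, a, m, dt)
    n = size(r, 1)
    v[:] = v + a .* (dt/2)            # half kick
    r[:] = r + v .* dt                # drift
    fill!(a, 0.0)
    for i = 1:n, j = (i+1):n          # pairwise accelerations
        rji = vec(r[j, :] - r[i, :])
        r2  = sum(rji .^ 2)
        r3  = r2 * sqrt(r2)
        a[i, :] = vec(a[i, :]) + m[j] .* rji ./ r3
        a[j, :] = vec(a[j, :]) - m[i] .* rji ./ r3
    end
    v[:] = v + a .* (dt/2)            # second half kick
end
```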

The parallel version (with MPI) of this toy code is quite efficient. For
1000 bodies and 10k iterations, the serial code at my work station
measured with 'time' takes ~138 seconds. The parallel version (running
in 4 processors) takes only ~33 seconds (by the way, showing that
super-linear speedup, though unusual, is possible :-)

[angelv@comer ~/NBODY]$ time ./nbody_serial < stars_sphere.txt > 
stars_serial.out
137.414u 0.079s 2:17.83 99.7%0+0k 0+9056io 0pf+0w

[angelv@comer ~/NBODY]$ time mpirun -np 4 ./nbody_parallel < 
stars_sphere.txt > stars_parallel.out
110.891u 0.954s 0:32.80 340.9%0+0k 15128+8992io 64pf+0w

Since this doesn't involve Julia at all, if you want further details or
the code itself, perhaps we can talk off-list to avoid non-Julia noise
in the list.

Cheers,
-- 
Ángel de Vicente
http://www.iac.es/galeria/angelv/  


[julia-users] unable to update julianightlies - 16 days old master - ubuntu14.04

2015-06-17 Thread SVAKSHA
Hi,

For some strange reason the nightly build does not update/upgrade
beyond the old commit (317a4d1). Any idea why ubuntu does not pull the
updated PPA builds?

$ julia -e 'versioninfo()'
Julia Version 0.4.0-dev+5149
Commit 317a4d1 (2015-06-01 18:58 UTC)
Platform Info:
  System: Linux (x86_64-linux-gnu)
  CPU: Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz
  WORD_SIZE: 64
  BLAS: libopenblas (NO_LAPACK NO_LAPACKE DYNAMIC_ARCH NO_AFFINITY Sandybridge)
  LAPACK: liblapack.so.3
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

$ uname -a
Linux ilak 3.13.0-53-generic #89-Ubuntu SMP Wed May 20 10:34:39 UTC
2015 x86_64 x86_64 x86_64 GNU/Linux

Thanks, SVAKSHA ॥  http://about.me/svaksha  ॥


[julia-users] Re: Julia computing problem in a loop

2015-06-17 Thread colintbowers
Hi Jim,

A couple of points:

1) Maybe I'm missing something, but you appear to be calculating the same 
inverse twice on every iteration of your loop. That is, 
inv(Om_e[(j-1)*nvar+1:j*nvar,:]) gets called twice.

2) As Mauro points out, memory allocation is currently triggered when 
slicing into 2d arrays on v0.3.x. You can read more about this at the 
following StackOverflow question: 
http://stackoverflow.com/questions/28271308/avoid-memory-allocation-when-indexing-an-array-in-julia.
 
Since you are indexing with ranges, my understanding is that in v0.4 you 
should be able to avoid the allocation. In the meantime, you could try 
performing the slice once and assign it to a new variable on each 
iteration, and then use that variable in your matrix calls.

3) I'll strongly second Mauro's suggestion that you pass in to your 
function everything that is not explicitly defined as a global constant. 
This should provide a significant performance improvement. For more reading 
on this, check out the first item in the Performance Tips section of the 
official docs 
(http://julia.readthedocs.org/en/latest/manual/performance-tips/)

So taking all these things together, my version of your function would look 
something like this:

function bb_update(bbj, bbcovj, capZ, nvar, Om_e, yyy)
 for j = 1:size(yyy, 2)
currentInverse = inv(Om_e[(j-1)*nvar+1:j*nvar,:])
currentZSlice = capZ[(j-1)*nvar+1:j*nvar,:]
bbcovj += currentZSlice' * currentInverse * currentZSlice
bbj += currentZSlice' * currentInverse * yyy[:,j]
 end
 return (bbj, bbcovj)
end
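As a sketch of point 2, here is a hypothetical variant (not from the thread; `bb_update_views` is an illustrative name) that uses `view` — current-Julia syntax; 0.3-era code would use `sub` — to avoid the slice copies on `capZ` and `yyy`:

```julia
using LinearAlgebra

# Hypothetical variant using views to avoid slice copies.
# The Om_e block is still copied, since inv() allocates a new matrix anyway.
function bb_update_views(bbj, bbcovj, capZ, nvar, Om_e, yyy)
    for j in 1:size(yyy, 2)
        rows = (j-1)*nvar+1 : j*nvar
        Zj   = view(capZ, rows, :)       # no copy of the capZ block
        Oinv = inv(Om_e[rows, :])        # computed once per iteration
        bbcovj += Zj' * Oinv * Zj
        bbj    += Zj' * Oinv * view(yyy, :, j)
    end
    return bbj, bbcovj
end
```

This keeps the single-inverse-per-iteration fix from point 1 and drops two of the four slice copies per iteration.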

A final question: are you planning on implementing the Model Confidence Set 
in Julia at any time soon (I'm making the possibly incorrect assumption 
that you're the same James Nason from Hansen, Lunde, Nason (2011))? Bit of 
a co-incidence, but I was hoping to implement the Model Confidence Set in 
Julia sometime in the next few weeks as part of a forecast evaluation 
package. If you're interested, I can point you to the github source once it 
is done.

Cheers,

Colin 


On Wednesday, 17 June 2015 14:11:49 UTC+10, james...@gmail.com wrote:

 Hi All:

 I am a novice using Julia.  As a way to learn Julia, my project is to 
 convert MatLab code that estimates Bayesian vector autoregressions.  The 
 estimator uses Gibbs sampling.

 The Julia code is running, but is slower than MatLab.  Thus, I am doing 
 something very inefficiently in Julia.  

 The ProfileView package and the @time macro indicate the problem is 
 located in a loop of the function bb_update(bbj, bbcovj), which appears 
 below.  

 bb_update(.,.) represents a step in the Gibbs sampling procedure.  This 
 part of the Gibbs sampler updates the column vector of coefficients, bbj, 
 which are to be estimated and its covariance matrix, bbcovj, which is psd.  
 bb_update(bbj, bbcovj) returns bbcovj and bbj.

 bbj is (nvar*nvar*plags + nvar) x 1 and bbcovj is (nvar*nvar*plags + nvar) 
 x (nvar*nvar*plags + nvar).  

 The (global) constants are nvar = 6, number of variables (Int64) and plags 
 = 6, number of lags in the vector autoregression (Int64), which are defined 
 previously in the programs.

 The loop in bb_update(.,.) involves actual data, capZ and yyy, and a 
 different covariance matrix, Om_e, that for the purposes of the loops is 
 fixed.  capZ and Om_e are defined outside bb_update(.,.).  bbj is estimated 
 conditional on Om_e.  

 capZ = a data matrix (Array Float64, 2), which is (nvar x obs) x 
 (nvar*nvar*plags + nvar), where the (global) constant obs = 223, number of 
 observations (Int64).

 yyy is nvar x obs (Array Float64, 2).

 The covariance matrix Om_e is (nvar x obs) x nvar (Array Float64, 2).

 Prior to calling bb_update(.,.), the program sets bbj = zeros( 
 nvar*nvar*plags + nvar, 1) and bbcovj =  zeros( nvar*nvar*plags + nvar, 
 nvar*nvar*plags), respectively.  These are input into bb_update(.,.).

 The loop for j = 1:obs picks off

 1) a nvar x(nvar*nvar*plags + nvar) block of capZ to form a quadratic 
 around a nvar x nvar block of Om_e to construct bbcovj

 2) the transpose of the same block of capZ, the same block of Om_e, and a 
 nvar x 1 column of yyy are multiplied to compute bbj.

 The @time function suggests 80 to 90% of the run time of the entire Gibbs 
 sampling procedure is tied up in the loop of bb_update(.,.) and that 55% of 
 the run time of bb_update(.,.) is devoted to gc() activities.

 I am running version 0.3.8 of Julia and the OS is Xubuntu64, v14.04.

 Your help/advice is much appreciated.

 Sincerely,

 Jim Nason

 ==

 function bb_update(bbj, bbcovj)

  for j = 1:obs
  bbcovj += capZ[(j-1)*nvar+1:j*nvar,:]'*( 
 inv(Om_e[(j-1)*nvar+1:j*nvar,:]) )*capZ[(j-1)*nvar+1:j*nvar,:] ; 
  bbj += capZ[(j-1)*nvar+1:j*nvar,:]'*( 
 inv(Om_e[(j-1)*nvar+1:j*nvar,:]) )*yyy[:,j] ;
  end

  return (bbj, bbcovj)
 end



[julia-users] Re: Creating function with a macro

2015-06-17 Thread Ben Ward
Actually, looking at it again with fresh eyes - I can define a few new 
types inheriting from Ordering, and then define my own lt(o, x, v[j-1]) for 
each. It looks like the predefined orderings and lt methods in the 
sorting API - ForwardOrdering, ReverseOrdering and so on - are fast because 
their lt() methods are defined as:

lt(o::ForwardOrdering, a, b) = isless(a, b)
lt(o::ReverseOrdering, a, b) = lt(o.fwd, b, a)

Whereas when you feed in a custom function it builds a Lt Ordering type 
that contains a reference to the custom lt function - which I believe is 
the slow version.
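A minimal sketch of that approach (current-Julia `struct` syntax; the names `ByB` and `Rec` are illustrative, not from the thread):

```julia
import Base.Order: Ordering, lt

# Hypothetical ordering that compares records by their second field
struct ByB <: Ordering end
lt(::ByB, a, b) = a.b < b.b

struct Rec
    a::Int
    b::Int
end

v = [Rec(1, 3), Rec(2, 1), Rec(3, 2)]
sort!(v, order = ByB())    # dispatches to the lt method above, no closure involved
```

Because `ByB` is a concrete type, the `lt` call is resolved by dispatch rather than through a stored function reference.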

On Wednesday, June 17, 2015 at 11:20:47 PM UTC+1, Ben Ward wrote:

 I guess for Base sort! passing a functor overriding call() as lt is 
 possible, if it accepts the three arguments passed to lt in lt(o, x, v[j-1]).

 On Wednesday, June 17, 2015 at 9:11:22 PM UTC+1, Kristoffer Carlsson wrote:

 You could also use something called functors which basically are types 
 that overload the call function. When you pass these as arguments, the 
 compiler can specialize the function on the type of the functor and thus 
 inline the call. See here for example for them being used effectively for 
 performance increase: https://github.com/JuliaLang/julia/pull/11685

 As an example I took the code for insertionsort and made it instead 
 accept an argument f which will be the functor. I then create some functors 
 to sort on the different type fields and show an example how sort is called.


 function sort!(v::AbstractVector, f, lo::Int=1, hi::Int=length(v))
     @inbounds for i = lo+1:hi
         j = i
         x = v[i]
         while j > lo
             if f(x, v[j-1])
                 v[j] = v[j-1]
                 j -= 1
                 continue
             end
             break
         end
         v[j] = x
     end
     return v
 end

 # Some type
 immutable CompType
 a::Int
 b::Int
 c::Int
 end


 b = [CompType(1,2,3), CompType(3,2,1), CompType(2,1,3)]

 # Functors 
 immutable AFunc end
 call(::AFunc, x, y) = x.a < y.a
 immutable BFunc end
 call(::BFunc, x, y) = x.b < y.b
 immutable CFunc end
 call(::CFunc, x, y) = x.c < y.c

 # Can now sort with good performance
 sort!(b, AFunc())
 println(b)
 sort!(b, BFunc())
 println(b)
 sort!(b, CFunc())
 println(b)


 Now, ripping code out of Base is of course not optimal. It would be 
 better if we could pass a functor straight to Base.sort!. 
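In current Julia a functor can in fact be passed straight to Base.sort! via its lt keyword; callable structs replaced the old call() overloads. A hedged sketch (`FieldLess` is an illustrative name, not thread code):

```julia
# Callable struct parameterized on the field to compare
# (current syntax; 0.3/0.4-era code used `immutable` plus a call() overload)
struct FieldLess{S} end
(::FieldLess{S})(x, y) where {S} = getfield(x, S) < getfield(y, S)

struct CompType
    a::Int
    b::Int
    c::Int
end

b = [CompType(1, 2, 3), CompType(3, 2, 1), CompType(2, 1, 3)]
sort!(b, lt = FieldLess{:b}())   # sort by field b; the compiler specializes on the type
```

Since the comparator's type carries the field name, each `sort!` call compiles a specialized, inlinable comparison.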


 On Wednesday, June 17, 2015 at 8:13:41 PM UTC+2, Ben Ward wrote:

 Hi, I want to create a macro with which I can create a function with a 
 custom bit of code:

 In the repl I can do a toy example:


 name = :hi
 vectype = Vector{Int}

 quote
     function ($name)(v::$vectype)
         println("hi")
     end
 end

 However if I try to put this in a macro and use it I get an error:

 macro customFun(vectype::DataType, name::Symbol)
     quote
         function ($name)(v::$vectype)
             println("hi World!")
         end
     end
 end

 @customFun(Vector{Int}, :hi)

 What am I doing wrong? I'd like to use macro arguments to provide a 
 function's name, and the datatype of the argument. I haven't used macros to 
 define functions more complex than simple one liners.

 Thanks,
 Ben.
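For reference, a sketch of the macro fix (escaping the interpolated name and type so they resolve in the caller's scope, and passing the name as a bare identifier rather than :hi; the body is illustrative):

```julia
# esc() stops the macro from gensym-renaming the function name,
# and the type expression is escaped so Vector{Int} resolves for the caller
macro customFun(vectype, name::Symbol)
    quote
        function $(esc(name))(v::$(esc(vectype)))
            println("hi World!")
        end
    end
end

@customFun Vector{Int} hi
hi(Int[])        # prints "hi World!"
```

Without the esc() calls, macro hygiene renames `hi` to a gensym, which is why the naive version errors.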



[julia-users] Re: X.2=X*0.2, easy to make mistake.

2015-06-17 Thread Patrick O'Leary
Changing this would be breaking--syntax that currently works (even if you 
don't expect it to) wouldn't work anymore. If someone is actually using 
this syntax, then we'd break their code on a release which is billed as a 
minor maintenance release. That's not going to work.

There may be a Lint.jl check for this, though? It sounds familiar.

On Wednesday, June 17, 2015 at 5:12:31 PM UTC-5, Art Kuo wrote:

 That's great that it's fixed in 0.4, but even in 0.3.X I would still label 
 it inconsistent behavior, or perhaps even a bug. Why should this happen:

 julia> x.2 == x*.2
 true

 julia> x0.2 == x*0.2
 ERROR: x0 not defined

 julia> x2 == x*2
 ERROR: x2 not defined

 It seems consistent that .2X == .2*X, 0.2X == 0.2*X, 2X == 2*X, so it is 
 fine if the number occurs before the variable. But not if the number occurs 
 after, so I agree with the proposal to ban X.2, meaning trigger an error. 
 Shouldn't this be the case for 0.3 versions as well?


 On Wednesday, June 17, 2015 at 9:14:24 AM UTC-4, Seth wrote:



 On Wednesday, June 17, 2015 at 8:04:11 AM UTC-5, Jerry Xiong wrote:

 Today I spent a long time finding a bug in my code. It turned out that I 
 had mistakenly written sum(X,2) as sum(X.2). No error was reported, 
 and Julia regarded X.2 as X*0.2. The comma "," is quite close to the dot 
 "." on the keyboard and looks quite similar in some fonts. Since no error 
 occurs, this bug is dangerous. Also, it is not intuitive that X.2 means 
 X*0.2. I think it may be better to forbid syntax like X.2 and only allow 
 .2X. 


 This appears to be fixed in 0.4:

 julia> x = 100
 100

 julia> x.2
 ERROR: syntax: extra token "0.2" after end of expression

 julia> sum(x.2)
 ERROR: syntax: missing comma or ) in argument list

 julia> f(x) = x.2
 ERROR: syntax: extra token "0.2" after end of expression

 julia> f(x) = sum(x.2)
 ERROR: syntax: missing comma or ) in argument list

  



Re: [julia-users] Julia computing problem in a loop

2015-06-17 Thread jamesmnason
Dear Mauro:

Thanks for the advice.  I did not declare const nvar=6 , but will and let 
you know the result.

Yes, I have read the performance section of the manual.  Part of the 
problem is I still think in MatLab coding rules.  

Jim Nason

On Wednesday, June 17, 2015 at 3:00:03 AM UTC-4, Mauro wrote:


  The (global) constants are nvar = 6, number of variables (Int64) and 
 plags 
  = 6, number of lags in the vector autoregression (Int64), which are 
 defined 
  previously in the programs. 

 did you actually declare those const: 

 const nvar=6 

 ? 

 If not, this will impact your loop. 

  function bb_update(bbj, bbcovj) 
  
   for j = 1:obs 
   bbcovj += capZ[(j-1)*nvar+1:j*nvar,:]'*( 
  inv(Om_e[(j-1)*nvar+1:j*nvar,:]) )*capZ[(j-1)*nvar+1:j*nvar,:] ; 
   bbj += capZ[(j-1)*nvar+1:j*nvar,:]'*( 
  inv(Om_e[(j-1)*nvar+1:j*nvar,:]) )*yyy[:,j] ; 
   end 
  
   return (bbj, bbcovj) 
  end 

 Any object you do not pass in as argument needs to be const, thus maybe 
 pass them as arguments instead: 

 function bb_update(bbj, bbcovj, yyy, Om_e, capZ) 
 for j = 1:size(yyy,2) 
 jj = (j-1)*nvar+1:j*nvar 
 bbcovj += capZ[jj,:]'*(inv(Om_e[jj,:]) )*capZ[jj,:] 
 bbj += capZ[jj,:]'*(inv(Om_e[jj,:]) )*yyy[:,j] 
 end 
 return (bbj, bbcovj) 
 end 

 Also, note, Julia is fastest with unrolled loops as at the moment Julia 
 makes a copy when doing slices (this will change sometime).  However, as 
 you're doing a matrix inverse, that trick is not applicable here. 

 Have you seen the performance section in the manual? 



[julia-users] Re: Backend deployment for website

2015-06-17 Thread Matthew Krick
Great stuff, seems like I've still got a bunch of research to do, 
especially regarding ZMQ. 

Jack, I'm really interested in your approach of using your julia process as 
a separate dockerfile. Could you explain what you meant by using a ZMQ 
server when you needed more communication? 

On Tuesday, June 16, 2015 at 10:35:40 AM UTC-5, Matthew Krick wrote:

 I've read everything I could on deployment options, but my head is still a 
 mess when it comes to all the choices, especially with how fast julia is 
 moving! I have a website on a node.js server and when the user inputs a list 
 of points, I want to solve a traveling salesman problem (running time 
 between 2 and 10 minutes, multiple users). Can someone offer some advice on 
 what's worked for them or any pros/cons to each option? Least cost is 
 preferable to performance.


1. Spawn a new node.js instance and solve using node-julia (
https://www.npmjs.com/package/node-julia)
2. Use Forio's epicenter to host the code (
http://forio.com/products/epicenter/)
3. Create a julia HTTP server and make a REST API (
https://github.com/JuliaWeb/HttpServer.jl)
4. Host on Google Compute Engine (https://cloud.google.com/compute/)
5. Host on Amazon's Simple Queue (http://aws.amazon.com/sqs/)
6. Use Julia-box, if it can somehow accept inputs via an http call (
https://www.juliabox.org/)
7. ???




[julia-users] Re: Set precision when printing to file

2015-06-17 Thread Robert DJ
It certainly does -- thanks a lot!

On Monday, June 15, 2015 at 4:37:35 PM UTC+2, Huda Nassar wrote:

 julia> f = open("test2.txt","w")
 IOStream(<file test2.txt>)
 julia> @printf(f,"%0.2f",1/3)
 julia> close(f)

 This should do the job
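A related sketch, if rounding before printing is acceptable instead of @printf formatting (the `digits` keyword is current-Julia syntax, and "test3.txt" is a made-up filename):

```julia
# Round to 2 decimals before writing each value
open("test3.txt", "w") do f
    for x in (1/3, 2/3)
        println(f, round(x, digits = 2))
    end
end
```

Note this prints the shortest representation of the rounded float rather than a fixed-width field, so @printf remains the better choice for aligned columns.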

 On Monday, June 15, 2015 at 9:50:17 AM UTC-4, Robert DJ wrote:

 Hi,

 I would like to write floating point numbers to a file and limit the 
 number of digits/decimals. With e.g.

 f = open("test.txt", "w")
 println(f, 1/3)
 close(f)

 test.txt contains 0.3333333333333333 and I would like it to be only 0.33.

 Is there a way to do this?

 Thanks,

 Robert



Re: [julia-users] Adding a native function to julia's scheduler

2015-06-17 Thread Tim Holy
Not sure it will help your specific use case, but see 
http://docs.julialang.org/en/release-0.3/manual/calling-c-and-fortran-code/#thread-safety
and an example in 
https://github.com/JuliaGPU/CUDArt.jl/blob/master/src/stream.jl

--Tim


On Wednesday, June 17, 2015 12:29:18 AM yigiter.pub...@gmail.com wrote:
 I need to call waitformultipleobjects (windows equivalent of select
 call in linux) from julia using ccall. As this is a blocking function, I
 would like to call it within another coroutine (task).
 
 
 
 The problem is that Tasks in Julia only function effectively if all
 the blocking calls within them emanate from Julia's own I/O interface. They
 cannot deal with blocking calls to native C functions.
 
 
 
 As far as I can see julia is based on Libuv. I guess every time a blocking
 call (from defined I/O interface) is issued, julia internally calls a
 corresponding function from the asynchronous libuv and then waits() for a
 notify() from the libuv. I guess the entire scheduler of julia is based on
 this paradigm so that It can deal with asynch operation within a single
 thread.
 
 
 
 My question is, is it possible to extend this wait() - notify() paradigm
 to an arbitrary blocking ccall?
 
 
 
 I have tried the following solution, but it fails miserably:
 
 0) Start a task which calls a non-blocking function from the dll and then
 wait() for a notify().
 1)  (In C) Implement a dll which creates another thread to call the real
 blocking function whenever julia calls the non-blocking function in the
 previous step.
 
 2) Provide a Julia callback function to the dll which is called at the
 finalizing step of the thread by the dll.
 
 3) (In Julia) the callback function calls the notify() function.
 
 However, it turned out that the notify() function itself is not thread safe
 and Julia's response to a notify() from another thread (created in C) is
 totally random.
 
 Is it possible to make the julia’s scheduler handle the arbitrary blocking
 calls?
 
 
 
 (PS: I was previously advised a solution based on parallel processes.
 However, for several reasons, multi-process paradigm is not a suitable
 option for me right now.)



[julia-users] Re: Need help writing parallel code with @sync and @everywhere

2015-06-17 Thread David Gold
Have you tried macroexpanding the expression? Doing so yields

julia macroexpand(:( for i = 1:N 
  @sync @parallel for j = (i + 1):N  
  tmp[j] = i * j  
  end 
  end )) 

:(for i = 1:N # line 2: 
begin  # task.jl, line 342: 
Base.sync_begin() # line 343: 
#6#v = begin  # multi.jl, line 1487: 
Base.pfor($(Expr(:localize, :(()->begin  # expr.jl, 
line 113: 
begin  # multi.jl, line 1460: 
function (#7#lo::Base.Int,#8#hi::Base.Int) # multi.jl, line 
1461: 
for j = (i + 1:N)[#7#lo:#8#hi] # line 1462: 
begin  # line 3: 
tmp[j] = i * j 
end 
end 
end 
end 
end))),Base.length(i + 1:N)) 
end # line 344: 
Base.sync_end() # line 345: 
#6#v 
end 
end)


 It looks like @parallel does the work of setting up a properly formatted 
call to Base.pfor. In particular, it builds an Expr object with head 
:localize and argument a zero-arg anonymous function, and then passes the 
interpolation of that expression along with `Base.length(i + 1:N)` to 
Base.pfor. The body of the anonymous function declares another function 
with arguments `#7#lo`, `#8#hi`. The latter variables somehow annotate the 
delimiters of your inner loop, which gets reproduced inside the body of the 
declared function. I'm *guessing* that the anonymous function is used as a 
vehicle to pass the code of the annotated inner loop to Base.pfor without 
executing it beforehand. But I could be wrong.


Then @sync just wraps all the above between calls to `Base.sync_begin` and 
`Base.sync_end`.


I also should note I have zero experience with Julia's parallel machinery 
and am entirely unfamiliar with the internals of Base.pfor. I just enjoy 
trying to figure out macros.

On Wednesday, June 17, 2015 at 5:49:58 AM UTC-4, Daniel Carrera wrote:


 On Wednesday, 17 June 2015 10:28:37 UTC+2, Nils Gudat wrote:

 I haven't used @everywhere in combination with begin..end blocks, I 
 usually pair @sync with @parallel - see an example here 
 https://github.com/nilshg/LearningModels/blob/master/NHL/NHL_6_Bellman.jl, 
 where I've parallelized the entire nested loop ranging from lines 25 to 47. 



 Aha! Thanks. Copying your example I was able to produce this:

 N = 5
 tmp = SharedArray(Int, (N))
 
 for i = 1:N
 # Compute tmp in parallel #
 @sync @parallel for j = (i + 1):N
 tmp[j] = i * j
 end
 
 # Consume tmp in serial #
 for j = (i + 1):N
 println(tmp[j])
 end
 end


 This seems to work correctly and gives the same answer as the serial code. 
 Can you help me understand how it works? What does @sync @parallel do? I 
 feel like I half-understand it, but the concept is not clear in my head.

 Thanks.

 Daniel.
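A minimal runnable sketch of the pattern above (in current Julia, @parallel was renamed @distributed and lives in the Distributed stdlib; with no extra workers it simply runs on the master process):

```julia
using Distributed, SharedArrays

# @distributed splits the loop range across the available workers;
# @sync blocks until every chunk has finished, so tmp is safe to read after it
tmp = SharedArray{Int}(10)
@sync @distributed for j in 1:10
    tmp[j] = j * j
end
```

Without the @sync, the loop below it could read `tmp` entries that a worker had not written yet.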



[julia-users] Re: Help me understand @sync, @async and pmap()?

2015-06-17 Thread Daniel Carrera
Thanks. That does help. I'm still having problems making it all work, but I 
think I need to prepare a minimalist example that illustrates what I want 
to do. I'll post a new thread with an example later.

Cheers,
Daniel.


On Tuesday, 16 June 2015 22:28:22 UTC+2, Avik Sengupta wrote:

 The following is an informal description, please don't take it as ground 
 truth... 

 So @async will start a job and return, without waiting for the result of 
 that job being available. So, in the code above, the real work is being 
 done in the remotecall_fetch, which runs a function f in the remote 
 process. Due to the @async, the while true loop will continue as soon as 
 the function is sent to the remote process, without waiting for its result 
 to be passed back. 

 When the while true loop is completed, you don't then want to return out 
 of the pmap function, since the remote functions may not yet have 
 finished. At that point, you want to wait till all the jobs that you have 
 started do finish. That is what @sync does. It waits for all @async jobs to 
 have finished before continuing. Hence, you will typically (though not 
 always) see @sync/@async pairs. 

 Hope that helps. 

 Regards
 -
 Avik
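That description can be condensed into a minimal sketch (pure tasks, no remote workers, so it runs anywhere; the sleep is a stand-in for real work):

```julia
# Each @async task starts immediately and the loop moves on;
# @sync blocks until all of the enclosed tasks have finished
results = zeros(Int, 4)
@sync for i in 1:4
    @async begin
        sleep(0.01 * i)     # stand-in for a remotecall_fetch round-trip
        results[i] = i^2
    end
end
```

After the @sync block, every `results[i]` is guaranteed to be filled in, exactly as in pmap.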


 On Tuesday, 16 June 2015 20:22:07 UTC+1, Daniel Carrera wrote:

 Hello,

 I have been looking at the documentation for parallel programming, with 
 special interest in SharedArrays. But no matter how hard I try, I cannot 
 get a clear picture of what @sync and @async do. I think they are not 
 really explained anywhere. Maybe they are explained somewhere and I just 
 haven't found it. To make my question concrete, here is the implementation 
 of pmap:

 function pmap(f, lst)
 np = nprocs()  # determine the number of processes available
 n = length(lst)
 results = cell(n)
 i = 1
 # function to produce the next work item from the queue.
 # in this case it's just an index.
 nextidx() = (idx=i; i+=1; idx)
 @sync begin
 for p=1:np
 if p != myid() || np == 1
 @async begin
 while true
 idx = nextidx()
  if idx > n 
 break
 end
 results[idx] = remotecall_fetch(p, f, lst[idx])
 end
 end
 end
 end
 end
 results
 end




 Can someone help me understand how this function works? In particular, 
 what do @sync and @async do?

 Thanks for the help.

 Cheers,
 Daniel



[julia-users] Re: Need help writing parallel code with @sync and @everywhere

2015-06-17 Thread David Gold
Actually, it seems that @sync is also responsible for setting the variable 
#6#v equal to the return object of the call to Base.pfor and then returning 
#6#v after calling Base.sync_end().

On Wednesday, June 17, 2015 at 8:22:08 AM UTC-4, David Gold wrote:

 Have you tried macroexpanding the expression? Doing so yields

 julia macroexpand(:( for i = 1:N 
   @sync @parallel for j = (i + 1):N  
   tmp[j] = i * j  
   end 
   end )) 

 :(for i = 1:N # line 2: 
 begin  # task.jl, line 342: 
 Base.sync_begin() # line 343: 
 #6#v = begin  # multi.jl, line 1487: 
 Base.pfor($(Expr(:localize, :(()->begin  # expr.jl, 
 line 113: 
 begin  # multi.jl, line 1460: 
 function (#7#lo::Base.Int,#8#hi::Base.Int) # multi.jl, 
 line 1461: 
 for j = (i + 1:N)[#7#lo:#8#hi] # line 1462: 
 begin  # line 3: 
 tmp[j] = i * j 
 end 
 end 
 end 
 end 
 end))),Base.length(i + 1:N)) 
 end # line 344: 
 Base.sync_end() # line 345: 
 #6#v 
 end 
 end)


  It looks like @parallel does the work of setting up a properly formatted 
 call to Base.pfor. In particular, it builds an Expr object with head 
 :localize and argument a zero-arg anonymous function, and then passes the 
 interpolation of that expression along with `Base.length(i + 1:N)` to 
 Base.pfor. The body of the anonymous function declares another function 
 with arguments `#7#lo`, `#8#hi`. The latter variables somehow annotate the 
 delimiters of your inner loop, which gets reproduced inside the body of the 
 declared function. I'm *guessing* that the anonymous function is used as a 
 vehicle to pass the code of the annotated inner loop to Base.pfor without 
 executing it beforehand. But I could be wrong.


 Then @sync just wraps all the above between calls to `Base.sync_begin` and 
 `Base.sync_end`.


 I also should note I have zero experience with Julia's parallel machinery 
 and am entirely unfamiliar with the internals of Base.pfor. I just enjoy 
 trying to figure out macros.

 On Wednesday, June 17, 2015 at 5:49:58 AM UTC-4, Daniel Carrera wrote:


 On Wednesday, 17 June 2015 10:28:37 UTC+2, Nils Gudat wrote:

 I haven't used @everywhere in combination with begin..end blocks, I 
 usually pair @sync with @parallel - see an example here 
 https://github.com/nilshg/LearningModels/blob/master/NHL/NHL_6_Bellman.jl,
  
 where I've parallelized the entire nested loop ranging from lines 25 to 47. 



 Aha! Thanks. Copying your example I was able to produce this:

 N = 5
 tmp = SharedArray(Int, (N))
 
 for i = 1:N
 # Compute tmp in parallel #
 @sync @parallel for j = (i + 1):N
 tmp[j] = i * j
 end
 
 # Consume tmp in serial #
 for j = (i + 1):N
 println(tmp[j])
 end
 end


 This seems to work correctly and gives the same answer as the serial 
 code. Can you help me understand how it works? What does @sync @parallel 
 do? I feel like I half-understand it, but the concept is not clear in my 
 head.

 Thanks.

 Daniel.



Re: [julia-users] Julia computing problem in a loop

2015-06-17 Thread Mauro

 The (global) constants are nvar = 6, number of variables (Int64) and plags 
 = 6, number of lags in the vector autoregression (Int64), which are defined 
 previously in the programs.

did you actually declare those const:

const nvar=6

?

If not, this will impact your loop.

 function bb_update(bbj, bbcovj)

  for j = 1:obs
  bbcovj += capZ[(j-1)*nvar+1:j*nvar,:]'*( 
 inv(Om_e[(j-1)*nvar+1:j*nvar,:]) )*capZ[(j-1)*nvar+1:j*nvar,:] ; 
  bbj += capZ[(j-1)*nvar+1:j*nvar,:]'*( 
 inv(Om_e[(j-1)*nvar+1:j*nvar,:]) )*yyy[:,j] ;
  end

  return (bbj, bbcovj)
 end

Any object you do not pass in as argument needs to be const, thus maybe
pass them as arguments instead:

function bb_update(bbj, bbcovj, yyy, Om_e, capZ)
for j = 1:size(yyy,2)
jj = (j-1)*nvar+1:j*nvar
bbcovj += capZ[jj,:]'*(inv(Om_e[jj,:]) )*capZ[jj,:]
bbj += capZ[jj,:]'*(inv(Om_e[jj,:]) )*yyy[:,j] 
end
return (bbj, bbcovj)
end

Also, note, Julia is fastest with unrolled loops as at the moment Julia
makes a copy when doing slices (this will change sometime).  However, as
you're doing a matrix inverse, that trick is not applicable here.

Have you seen the performance section in the manual?


[julia-users] Re: Need help writing parallel code with @sync and @everywhere

2015-06-17 Thread Nils Gudat
I haven't used @everywhere in combination with begin..end blocks, I usually 
pair @sync with @parallel - see an example here 
https://github.com/nilshg/LearningModels/blob/master/NHL/NHL_6_Bellman.jl, 
where I've parallelized the entire nested loop ranging from lines 25 to 47. 


[julia-users] Need help writing parallel code with @sync and @everywhere

2015-06-17 Thread Daniel Carrera
Hello,

I have been having a lot of trouble figuring out how to write parallel code 
in Julia. Consider this toy example of a serial program:

N = 5
for i = 1:N
for j = (i + 1):N
println(i * j)
end
end

Now, suppose that i * j is an expensive operation, so I want to compute 
those values in parallel and print them later. Here is my (failed) attempt 
at doing that:

N = 5
tmp = SharedArray(Int, (N))

for i = 1:N
# --- #
# Compute tmp in parallel #
# --- #
@sync begin
@everywhere begin
p = myid()
np = nprocs()
for j = (i + p):np:N
tmp[j] = i * j
end
end
end

# - #
# Consume tmp in serial #
# - #
for j = (i + 1):N
println(tmp[j])
end
end


So, my idea is to have a shared array with N elements where I store some of 
the i * j calculations in parallel. Once everyone is finished, I consume 
the results and repeat the loop. However, I get an error saying that i is 
not visible inside the @everywhere block:

julia nprocs()
4

julia foo_parallel()
exception on 1: ERROR: i not defined
 in anonymous at /home/sigrid/Daniel/Science/VENUS/venus.jl:303
 in eval at /build/buildd/julia-0.3.8-docfix/base/sysimg.jl:7
 in anonymous at multi.jl:1310
 in run_work_thunk at multi.jl:621
 in run_work_thunk at multi.jl:630
 in anonymous at task.jl:6
 ... many lines ...


So, apparently the workers in the @everywhere block cannot see outside 
variables. What can I do? ... One thing I do not want to do is turn tmp 
into an N x N matrix. That would make tmp very large for large N, and in 
my real problem there are other outside variables that I want to use.

Help?

Cheers,
Daniel.



Re: [julia-users] Re: How to read any lines with stream open(file)

2015-06-17 Thread Paul Analyst

Unfortunately can't run (in Julia 0.3.6 the same):
               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "help()" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.4.0-dev+2847 (2015-01-21 18:34 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit fc61385 (146 days old master)
|__/                   |  x86_64-w64-mingw32


julia> f = open("temp.txt","w")
IOStream(<file temp.txt>)

julia> for i in 1:20 write(f, "$i\n") end

julia> close(f)


julia> readline(open("temp.txt"), 15)
ERROR: MethodError: `readline` has no method matching
readline(::IOStream, ::Int64)
SYSTEM: show(lasterr) caused an error

julia> readline(open("temp.txt"))
"1\n"

julia> readline(open("temp.txt"))
"1\n"

julia> readlines(open("temp.txt"))
20-element Array{Union(ASCIIString,UTF8String),1}:
 "1\n"
 "2\n"
 "3\n"
 "4\n"
 "5\n"
 "6\n"
 "7\n"
 "8\n"
 "9\n"
 "10\n"
 "11\n"
 "12\n"
 "13\n"
 "14\n"
 "15\n"
 "16\n"
 "17\n"
 "18\n"
 "19\n"
 "20\n"
Paul
W dniu 2015-06-16 o 19:49, Tom Breloff pisze:

You could create your own:

|
julia> Base.readline(s, i::Int) = (for (j,line) in
enumerate(eachline(s)); if j==i; return line; end; end; error("not
enough lines"))

readline (generic function with 4 methods)

julia> f = open("/tmp/tmp.txt", "w")
IOStream(<file /tmp/tmp.txt>)

julia> for i in 1:20 write(f, "$i\n") end

julia> close(f)

julia> readline(open("/tmp/tmp.txt"), 15)
"15\n"

julia> readline(open("/tmp/tmp.txt"), 25)
ERROR: not enough lines
 in readline at none:1

|



On Tuesday, June 16, 2015 at 12:17:28 PM UTC-4, paul analyst wrote:

    If o is a stream:

    o = open(file)

    how to read an arbitrary line? e.g. 15

    julia> readline(o, 15)
    ERROR: MethodError: `readline` has no method matching
    readline(::IOStream, ::Int64)
    SYSTEM: show(lasterr) caused an error

    Paul





[julia-users] Re: Please sdvise how to plot the matrix (10x1000) with different color

2015-06-17 Thread Nils Gudat
It's not clear to me from your post what you are trying to do - do you want 
to plot each of the 10 rows of vec as a line with a different color? 
You might want to read this discussion 
https://github.com/dcjones/Gadfly.jl/issues/526 suggesting that it is 
much easier to do the kind of plot you (potentially) want to do with a 
DataFrame than with an Array.
If you want to stick to the Array, use layers as described in the linked 
issue; adapted to your example you'd do:

plot(layer( x=[1:size(vec,2)], y=vec[1,:]+2, Geom.line, 
Theme(default_color=color("orange")) ),
  layer( x=[1:size(vec,2)], y=vec[2,:], Geom.line, 
Theme(default_color=color("purple"))) )

(note that I've shifted the first row up by 2 to make it easier to 
distinguish the lines); obviously you'd need 10 layers to plot your 10 
rows, or maybe write a macro if you want to plot a lot of rows.
The Gadfly way (Disclaimer: I plot almost exclusively using PyPlot, so 
take this with a grain of salt) would be to convert the Array into a 
stacked DataFrame as follows (partly lifted from this discussion 
https://github.com/dcjones/Gadfly.jl/issues/529):

df = DataFrame(y=vec[:], x=repeat([1:1000], outer=[10]), 
row=repeat(["vec"*string(i) for i = 1:10], inner=[1000]))
plot(df, color="row", x="x", y="y", Geom.line)


[julia-users] Re: Need help writing parallel code with @sync and @everywhere

2015-06-17 Thread Daniel Carrera

On Wednesday, 17 June 2015 10:28:37 UTC+2, Nils Gudat wrote:

 I haven't used @everywhere in combination with begin..end blocks, I 
 usually pair @sync with @parallel - see an example here 
 https://github.com/nilshg/LearningModels/blob/master/NHL/NHL_6_Bellman.jl, 
 where I've parallelized the entire nested loop ranging from lines 25 to 47. 



Aha! Thanks. Copying your example I was able to produce this:

N = 5
tmp = SharedArray(Int, (N))

for i = 1:N
# Compute tmp in parallel #
@sync @parallel for j = (i + 1):N
tmp[j] = i * j
end

# Consume tmp in serial #
for j = (i + 1):N
println(tmp[j])
end
end


This seems to work correctly and gives the same answer as the serial code. 
Can you help me understand how it works? What does @sync @parallel do? I 
feel like I half-understand it, but the concept is not clear in my head.

Thanks.

Daniel.


Re: [julia-users] Re: How to deploy Julia

2015-06-17 Thread Charles Novaes de Santana
Did I hear AstroJulia? :)

Best,

Charles

On 16 June 2015 at 14:38, Daniel Carrera dcarr...@gmail.com wrote:

 I would love to see the paper when it comes out. I cannot use your code
 directly because I need to do a direct NBody rather than a tree code (I am
 modelling planetary systems). But it's nice to see another astronomer using
 Julia.

 Cheers,
 Daniel.


 On 16 June 2015 at 08:28, Ariel Keselman skar...@gmail.com wrote:

 FYI just since NBody was mentioned -- I wrote a gravitational NBody tree
 code with parallel shared memory execution. I run it with many millions of
 particles and I'm very pleased with the results. They are comparable (or
 even faster) than some very popular C codes (e.g. Gadget2). I'm working on
 a paper currently, will publish a package once everything is cleaned up,
 documented, etc.




 --
 When an engineer says that something can't be done, it's a code phrase
 that means it's not fun to do.




-- 
Um axé! :)

--
Charles Novaes de Santana, PhD
http://www.imedea.uib-csic.es/~charles


[julia-users] Re: code style: indentation?

2015-06-17 Thread Scott Jones
The lack of a formal grammar (and the ability to have one - it's not just 
that one hasn't been written yet, I'm not sure that one *could* be written 
- the grammar is basically defined only by the parser itself) is one of the 
harder problems with Julia.
I had enough problems with a simple language that has a formal grammar but 
is an LL(1) (left-recursive) grammar... you need a tool like ANTLR to parse 
it (if you want to use a parsing tool); lex/yacc can't handle it since it's 
not LALR(1).

On Wednesday, June 17, 2015 at 1:16:19 AM UTC-4, Andreas Lobinger wrote:

 Hello colleague,

 On Tuesday, June 16, 2015 at 11:16:18 PM UTC+2, Scott Jones wrote:

 Not whitespace sensitive?  I thought julia was very whitespace 
 sensitive... blanks and newlines can make a big difference...
 Did you just mean that julia is not sensitive to the number of 
 tabs/blanks (1 or more)


 try Python as an example of whitespace sensitivity... 
 In Julia you can move code around with whitespace quite a lot before you 
 get complaints from the parser. Newlines seem to be a part of the syntax 
 somewhere, but without a formal grammar that's hard to test. 
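A concrete example of the kind of whitespace sensitivity being discussed: inside array literals, whether `-` binds as a sign or as subtraction depends on the spaces around it.

```julia
# Inside [ ... ], space-separated values are horizontally concatenated,
# so "1 -2" is two elements while "1 - 2" is a subtraction.
a = [1 - 2]   # subtraction: a 1-element vector containing -1
b = [1 -2]    # hcat: a 1x2 row matrix containing 1 and -2
@assert length(a) == 1 && a[1] == -1
@assert size(b) == (1, 2)
```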



Re: [julia-users] Re: What's [] called? ie How do I refer to the array construction function?

2015-06-17 Thread Avik Sengupta
Indeed

julia getindex(Any, (1,2,3))
1-element Array{Any,1}:
 (1,2,3)


Basically, any code with X[y...] gets lowered to getindex, and then the 
corresponding method takes over to do the right thing. Makes the lowering 
relatively simple, I imagine. 
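A quick check of that lowering: calling `getindex` with the element type splatted over the values reproduces the `T[...]` literal.

```julia
# T[a, b, c] lowers to getindex(T, a, b, c), which builds a Vector{T}.
v1 = Int[1, 2, 3]
v2 = getindex(Int, 1, 2, 3)
@assert v1 == v2
@assert isa(v2, Vector{Int})
```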



On Wednesday, 17 June 2015 13:54:11 UTC+1, Mauro wrote:

 I think it just uses getindex (a bit of a hack...): 

 julia @which Int[3] 
 getindex(T::Union(DataType,UnionType,TypeConstructor),vals...) at 
 array.jl:119 


 On Wed, 2015-06-17 at 14:50, andrew cooke and...@acooke.org 
 wrote: 
  Oh, I think the call() thing is just me being confused.  That's *only* a 
  mechanism to allow non-functions to look like functions?  I guess my 
  misunderstanding is more about how apply is defined (it mentions call), 
  which really isn't important to me right now, so feel free to ignore 
 that 
  part of my question.  Sorry. 
  
  
  On Wednesday, 17 June 2015 09:45:46 UTC-3, andrew cooke wrote: 
  
  
  If I want to pass the function that constructs an array of Any, given 
 some 
  values, to another function, what do I use? 
  
  Here's an example that might make things clearer: 
  
  julia f(x...) = Any[x...] 
  f (generic function with 1 method) 
  
  julia apply(f, 1,2,3) 
  3-element Array{Any,1}: 
   1 
   2 
   3 
  
  julia apply(Any[], 1,2,3) 
  ERROR: MethodError: `call` has no method matching call(::Array{Any,1}, 
 :: 
  Int64, ::Int64, ::Int64) 
  Closest candidates are: 
BoundsError(::Any...) 
TypeVar(::Any...) 
TypeConstructor(::Any...) 
... 
   in apply at deprecated.jl:116 
  
  where I am looking for what the built-in equivalent of f() is. 
  
  I may be even more confused, because I also don't understand why this 
  fails: 
  
  julia call(f, 1, 2) 
  ERROR: MethodError: `call` has no method matching call(::Function, 
 ::Int64 
  , ::Int64) 
  Closest candidates are: 
BoundsError(::Any...) 
TypeVar(::Any...) 
TypeConstructor(::Any...) 
... 
  
  So any guidance appreciated. 
  
  Thanks, 
  Andrew 
  



Re: [julia-users] Re: How to read any lines with stream open(file)

2015-06-17 Thread Seth
You can also use chomp(), which is specific to newlines (strip() removes all 
whitespace):

Base.chomp(string)

   Remove a trailing newline from a string

julia a = "asd \n"
"asd \n"

julia chomp(a)
"asd "

julia strip(a)
"asd"



On Wednesday, June 17, 2015 at 4:41:45 AM UTC-5, René Donner wrote:

 Hi, 

 you can use strip() for that. This and some other very handy functions 
 (e.g. lstrip / rstrip) are listed in 
 http://docs.julialang.org/en/release-0.3/stdlib/strings/?highlight=strip#strings
  

 cheers, 

 rene 




 On 17.06.2015 at 11:38, Paul Analyst paul.a...@mail.com wrote: 

  Is [1:end-1] another way to drop the \n at the end of the line? 
  
  julia readline(open("temp.txt"))[1:end] 
  "1\n" 
  
  julia readline(open("temp.txt"))[1:end-1] 
  "1" 
  Paul 
  
  On 2015-06-17 at 11:33, Paul Analyst wrote: 
  Ok, sorry, I didn't see line 1 :) 
  Base.readline(s, i::Int) = (for (j,line) in enumerate(eachline(s)); if 
 j==i; return line; end; end; error("not enough lines")) 
  
  It's OK, big thx 
  Paul 
  
  On 2015-06-17 at 11:27, Paul Analyst wrote: 
  Unfortunately can't run (in Julia 0.3.6 the same): 
                 _
     _       _ _(_)_     |  A fresh approach to technical computing
    (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
     _ _   _| |_  __ _   |  Type "help()" for help.
    | | | | | | |/ _` |  |
    | | |_| | | | (_| |  |  Version 0.4.0-dev+2847 (2015-01-21 18:34 UTC)
   _/ |\__'_|_|_|\__'_|  |  Commit fc61385 (146 days old master)
  |__/                   |  x86_64-w64-mingw32
  
  
  julia f = open("temp.txt","w") 
  IOStream(<file temp.txt>) 
  
  julia for i in 1:20 write(f, "$i\n") end 
  
  julia close(f) 
  
  
  julia readline(open("temp.txt"), 15) 
  ERROR: MethodError: `readline` has no method matching 
 readline(::IOStream, ::Int64)SYSTEM: show(lasterr) caused an 
   error 
  julia readline(open("temp.txt")) 
  "1\n" 
  
  julia readline(open("temp.txt")) 
  "1\n" 
  
  julia readlines(open("temp.txt")) 
  20-element Array{Union(ASCIIString,UTF8String),1}: 
   "1\n" 
   "2\n" 
   "3\n" 
   "4\n" 
   "5\n" 
   "6\n" 
   "7\n" 
   "8\n" 
   "9\n" 
   "10\n" 
   "11\n" 
   "12\n" 
   "13\n" 
   "14\n" 
   "15\n" 
   "16\n" 
   "17\n" 
   "18\n" 
   "19\n" 
   "20\n" 
  Paul 
  On 2015-06-16 at 19:49, Tom Breloff wrote: 
  You could create your own: 
  
  julia Base.readline(s, i::Int) = (for (j,line) in 
 enumerate(eachline(s)); if j==i; return line; end; end; error("not enough 
 lines")) 
  readline (generic function with 4 methods) 
  
  julia f = open("/tmp/tmp.txt", "w") 
  IOStream(<file /tmp/tmp.txt>) 
  
  julia for i in 1:20 write(f, "$i\n") end 
  
  julia close(f) 
  
  julia readline(open("/tmp/tmp.txt"), 15) 
  "15\n" 
  
  julia readline(open("/tmp/tmp.txt"), 25) 
  ERROR: not enough lines 
   in readline at none:1 
  
  
  
  
  On Tuesday, June 16, 2015 at 12:17:28 PM UTC-4, paul analyst wrote: 
  If o is stream 
  o=open(file) 
  
  how to read any line  ?e.g. 15 
  
  julia readline(o,15) 
  ERROR: MethodError: `readline` has no method matching 
 readline(::IOStream, ::Int64)SYSTEM: show(lasterr) caused 
   error 
  
  Paul 
  
  
  

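For completeness, the helper Tom suggests above can also be written as a free-standing function rather than a new `readline` method. A sketch with a hypothetical name `nthline`, assuming a current Julia where `eachline` strips the trailing newline by default:

```julia
# Hypothetical helper: return line i of a stream, or raise an error.
# Note: on current Julia, eachline() yields lines WITHOUT the trailing \n.
function nthline(io::IO, i::Integer)
    for (j, line) in enumerate(eachline(io))
        j == i && return line
    end
    error("not enough lines")
end

path, f = mktemp()          # scratch file for the demo
for k in 1:20
    write(f, "$k\n")
end
close(f)
println(nthline(open(path), 15))   # → 15
```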


[julia-users] Help: My parallel code 8300x slower than the serial.

2015-06-17 Thread Daniel Carrera
Hi everyone,

My adventures with parallel programming with Julia continue. Here is a 
different issue from other threads: My parallel function is 8300x slower 
than my serial function even though I am running on 4 processes on a 
multi-core machine.

julia nprocs()
4

I have Julia 0.3.8. Here is my program in its entirety (not very long).

function main()

nbig::Int16 = 7
nbod::Int16 = nbig
bod  = Float64[
0   1  2  3  4  5  6  # x position
0   0  0  0  0  0  0  # y position
0   0  0  0  0  0  0  # z position
0   0  0  0  0  0  0  # x velocity
0   0  0  0  0  0  0  # y velocity
0   0  0  0  0  0  0  # z velocity
1   1  1  1  1  1  1  # Mass
]

a = zeros(3,nbod)

@time for k = 1:1000
gravity_1!(bod, nbig, nbod, a)
end
println(a[1,:])

@time for k = 1:1000
gravity_2!(bod, nbig, nbod, a)
end
println(a[1,:])
end

function gravity_1!(bod, nbig, nbod, a)

for i = 1:nbod
a[1,i] = 0.0
a[2,i] = 0.0
a[3,i] = 0.0
end

@inbounds for i = 1:nbig
for j = (i + 1):nbod

dx = bod[1,j] - bod[1,i]
dy = bod[2,j] - bod[2,i]
dz = bod[3,j] - bod[3,i]

s_1 = 1.0 / sqrt(dx*dx+dy*dy+dz*dz)
s_3 = s_1 * s_1 * s_1

tmp1 = s_3 * bod[7,i]
tmp2 = s_3 * bod[7,j]

a[1,j] = a[1,j] - tmp1*dx
a[2,j] = a[2,j] - tmp1*dy
a[3,j] = a[3,j] - tmp1*dz

a[1,i] = a[1,i] + tmp2*dx
a[2,i] = a[2,i] + tmp2*dy
a[3,i] = a[3,i] + tmp2*dz
end
end
return a
end

function gravity_2!(bod, nbig, nbod, a)

for i = 1:nbod
a[1,i] = 0.0
a[2,i] = 0.0
a[3,i] = 0.0
end

@inbounds @sync @parallel for i = 1:nbig
for j = (i + 1):nbod

dx = bod[1,j] - bod[1,i]
dy = bod[2,j] - bod[2,i]
dz = bod[3,j] - bod[3,i]

s_1 = 1.0 / sqrt(dx*dx+dy*dy+dz*dz)
s_3 = s_1 * s_1 * s_1

tmp1 = s_3 * bod[7,i]
tmp2 = s_3 * bod[7,j]

a[1,j] = a[1,j] - tmp1*dx
a[2,j] = a[2,j] - tmp1*dy
a[3,j] = a[3,j] - tmp1*dz

a[1,i] = a[1,i] + tmp2*dx
a[2,i] = a[2,i] + tmp2*dy
a[3,i] = a[3,i] + tmp2*dz
end
end
return a
end



So this is a straightforward N-body gravity calculation. Yes, I realize 
that gravity_2!() is wrong, but that's fine. Right now I'm just talking 
about the CPU time. When I run this on my computer I get:

julia main()
elapsed time: 0.000475294 seconds (0 bytes allocated)
[1.49138889 0.4636 0.1736 
-5.551115123125783e-17 -0.17366 -0.46361112 
-1.49138889]
elapsed time: 3.953546654 seconds (126156320 bytes allocated, 13.49% gc 
time)
[0.0 0.0 0.0 0.0 0.0 0.0 0.0]


So, the serial version takes 0.000475 seconds and the parallel takes 3.95 
seconds. Furthermore, the parallel version is calling the garbage 
collector. I suspect that the problem has something to do with the memory 
access. Maybe the parallel code is wasting a lot of time copying variables 
in memory. But whatever the reason, this is bad. The documentation says 
that @parallel is supposed to be fast, even for very small loops, but 
that's not what I'm seeing. A non-buggy implementation will be even slower.

Have I missed something? Is there an obvious error in how I'm using the 
parallel constructs?

I would appreciate any guidance you may offer.

Cheers,
Daniel.



[julia-users] Re: What's [] called? ie How do I refer to the array construction function?

2015-06-17 Thread andrew cooke

Oh, I think the call() thing is just me being confused.  That's *only* a 
mechanism to allow non-functions to look like functions?  I guess my 
misunderstanding is more about how apply is defined (it mentions call), 
which really isn't important to me right now, so feel free to ignore that 
part of my question.  Sorry.


On Wednesday, 17 June 2015 09:45:46 UTC-3, andrew cooke wrote:


 If I want to pass the function that constructs an array of Any, given some 
 values, to another function, what do I use?

 Here's an example that might make things clearer:

 julia f(x...) = Any[x...]
 f (generic function with 1 method)

 julia apply(f, 1,2,3)
 3-element Array{Any,1}:
  1
  2
  3

 julia apply(Any[], 1,2,3)
 ERROR: MethodError: `call` has no method matching call(::Array{Any,1}, ::
 Int64, ::Int64, ::Int64)
 Closest candidates are:
   BoundsError(::Any...)
   TypeVar(::Any...)
   TypeConstructor(::Any...)
   ...
  in apply at deprecated.jl:116

 where I am looking for what the built-in equivalent of f() is.

 I may be even more confused, because I also don't understand why this 
 fails:

 julia call(f, 1, 2)
 ERROR: MethodError: `call` has no method matching call(::Function, ::Int64
 , ::Int64)
 Closest candidates are:
   BoundsError(::Any...)
   TypeVar(::Any...)
   TypeConstructor(::Any...)
   ...

 So any guidance appreciated.

 Thanks,
 Andrew



[julia-users] Re: Need help writing parallel code with @sync and @everywhere

2015-06-17 Thread Avik Sengupta

So again, this is an informal description... in particular, my nomenclature 
is not precise... 

Basically, @parallel is a construct which takes the work to be done in each 
iteration of a for loop and farms it out to the available remote 
processors, all at once. This happens asynchronously, which means that all 
these jobs are started without waiting for any of them to finish. You then 
want to wait for all the jobs to complete before going on to the "Consume 
tmp" stage. Hence you put an @sync around this, to wait for all the 
parallel tasks to complete. 

Hope this makes it a little more understandable. I realise this does not 
help in designing a parallel system from scratch, but that is a much longer 
story. 

Note that with tmp being a SharedArray, this code will work only when all 
julia processes are on a single physical machine. 

Also, the @parallel construct is most useful when you combine a reduction 
operator with the for loop. 

Hope this helps
-
Avik
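Avik's last point about reduction operators deserves a sketch: with a reducer, the parallel loop collects one value per iteration and folds them together, so no SharedArray is needed at all. In the 0.3/0.4 code in this thread the macro is spelled `@parallel (+) for ...`; in later Julia it lives in the `Distributed` stdlib as `@distributed`, which is what the runnable sketch below uses.

```julia
using Distributed

# A parallel for-loop with a reduction operator: each iteration
# contributes i^2, and (+) folds the per-worker partial sums into
# a single result returned to the caller.
total = @distributed (+) for i = 1:100
    i^2
end
@assert total == sum(k^2 for k = 1:100)
println(total)   # → 338350
```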

On Wednesday, 17 June 2015 10:49:58 UTC+1, Daniel Carrera wrote:


 On Wednesday, 17 June 2015 10:28:37 UTC+2, Nils Gudat wrote:

 I haven't used @everywhere in combination with begin..end blocks, I 
 usually pair @sync with @parallel - see an example here 
 https://github.com/nilshg/LearningModels/blob/master/NHL/NHL_6_Bellman.jl, 
 where I've parallelized the entire nested loop ranging from lines 25 to 47. 



 Aha! Thanks. Copying your example I was able to produce this:

 N = 5
 tmp = SharedArray(Int, (N))
 
 for i = 1:N
 # Compute tmp in parallel #
 @sync @parallel for j = (i + 1):N
 tmp[j] = i * j
 end
 
 # Consume tmp in serial #
 for j = (i + 1):N
 println(tmp[j])
 end
 end


 This seems to work correctly and gives the same answer as the serial code. 
 Can you help me understand how it works? What does @sync @parallel do? I 
 feel like I half-understand it, but the concept is not clear in my head.

 Thanks.

 Daniel.



Re: [julia-users] Re: Need help writing parallel code with @sync and @everywhere

2015-06-17 Thread Daniel Carrera
Wait, #6#v is the name of a variable? How is that possible?

On 17 June 2015 at 14:25, David Gold david.gol...@gmail.com wrote:

 Actually, it seems that @sync is also responsible for setting the variable
 #6#v equal to the return object of the call to Base.pfor and then returning
 #6#v after calling Base.sync_end().

 On Wednesday, June 17, 2015 at 8:22:08 AM UTC-4, David Gold wrote:

 Have you tried macroexpanding the expression? Doing so yields

 julia macroexpand(:( for i = 1:N
   @sync @parallel for j = (i + 1):N
   tmp[j] = i * j
   end
   end ))

 :(for i = 1:N # line 2:
 begin  # task.jl, line 342:
 Base.sync_begin() # line 343:
 #6#v = begin  # multi.jl, line 1487:
 Base.pfor($(Expr(:localize, :(()->begin  # expr.jl,
 line 113:
 begin  # multi.jl, line 1460:
 function (#7#lo::Base.Int,#8#hi::Base.Int) # multi.jl,
 line 1461:
 for j = (i + 1:N)[#7#lo:#8#hi] # line 1462:
 begin  # line 3:
 tmp[j] = i * j
 end
 end
 end
 end
 end))),Base.length(i + 1:N))
 end # line 344:
 Base.sync_end() # line 345:
 #6#v
 end
 end)


  It looks like @parallel does the work of setting up a properly
 formatted call to Base.pfor. In particular, it builds an Expr object with
 head :localize and argument a zero-arg anonymous function, and then passes
 the interpolation of that expression along with `Base.length(i + 1:N)` to
 Base.pfor. The body of the anonymous function declares another function
 with arguments `#7#lo`, `#8#hi`. The latter variables somehow annotate the
 delimiters of your inner loop, which gets reproduced inside the body of the
 declared function. I'm *guessing* that the anonymous function is used as a
 vehicle to pass the code of the annotated inner loop to Base.pfor without
 executing it beforehand. But I could be wrong.


 Then @sync just wraps all the above between calls to `Base.sync_begin`
 and `Base.sync_end`.


 I also should note I have zero experience with Julia's parallel machinery
 and am entirely unfamiliar with the internals of Base.pfor. I just enjoy
 trying to figure out macros.

 On Wednesday, June 17, 2015 at 5:49:58 AM UTC-4, Daniel Carrera wrote:


 On Wednesday, 17 June 2015 10:28:37 UTC+2, Nils Gudat wrote:

 I haven't used @everywhere in combination with begin..end blocks, I
 usually pair @sync with @parallel - see an example here
 https://github.com/nilshg/LearningModels/blob/master/NHL/NHL_6_Bellman.jl,
 where I've parallelized the entire nested loop ranging from lines 25 to 47.



 Aha! Thanks. Copying your example I was able to produce this:

 N = 5
 tmp = SharedArray(Int, (N))

 for i = 1:N
 # Compute tmp in parallel #
 @sync @parallel for j = (i + 1):N
 tmp[j] = i * j
 end

 # Consume tmp in serial #
 for j = (i + 1):N
 println(tmp[j])
 end
 end


 This seems to work correctly and gives the same answer as the serial
 code. Can you help me understand how it works? What does @sync @parallel
 do? I feel like I half-understand it, but the concept is not clear in my
 head.

 Thanks.

 Daniel.




-- 
When an engineer says that something can't be done, it's a code phrase that
means it's not fun to do.


Re: [julia-users] Re: What's [] called? ie How do I refer to the array construction function?

2015-06-17 Thread Mauro
I think it just uses getindex (a bit of a hack...):

julia @which Int[3]
getindex(T::Union(DataType,UnionType,TypeConstructor),vals...) at array.jl:119


On Wed, 2015-06-17 at 14:50, andrew cooke and...@acooke.org wrote:
 Oh, I think the call() thing is just me being confused.  That's *only* a 
 mechanism to allow non-functions to look like functions?  I guess my 
 misunderstanding is more about how apply is defined (it mentions call), 
 which really isn't important to me right now, so feel free to ignore that 
 part of my question.  Sorry.


 On Wednesday, 17 June 2015 09:45:46 UTC-3, andrew cooke wrote:


 If I want to pass the function that constructs an array of Any, given some 
 values, to another function, what do I use?

 Here's an example that might make things clearer:

 julia f(x...) = Any[x...]
 f (generic function with 1 method)

 julia apply(f, 1,2,3)
 3-element Array{Any,1}:
  1
  2
  3

 julia apply(Any[], 1,2,3)
 ERROR: MethodError: `call` has no method matching call(::Array{Any,1}, ::
 Int64, ::Int64, ::Int64)
 Closest candidates are:
   BoundsError(::Any...)
   TypeVar(::Any...)
   TypeConstructor(::Any...)
   ...
  in apply at deprecated.jl:116

 where I am looking for what the built-in equivalent of f() is.

 I may be even more confused, because I also don't understand why this 
 fails:

 julia call(f, 1, 2)
 ERROR: MethodError: `call` has no method matching call(::Function, ::Int64
 , ::Int64)
 Closest candidates are:
   BoundsError(::Any...)
   TypeVar(::Any...)
   TypeConstructor(::Any...)
   ...

 So any guidance appreciated.

 Thanks,
 Andrew




[julia-users] X.2=X*0.2, easy to make mistake.

2015-06-17 Thread Jerry Xiong
Today I spent a lot of time finding a bug in my code. It turned out that I 
had mistakenly written sum(X,2) as sum(X.2). No error is reported, and 
Julia parsed X.2 as X*0.2. The comma "," is quite close to the dot "." on 
the keyboard, and they look quite similar in some fonts. Since no error 
occurs, this bug is dangerous. Also, it is not intuitive that X.2 means 
X*0.2. I think maybe it would be better to forbid syntax like X.2 and only 
allow .2X. 


[julia-users] Re: X.2=X*0.2, easy to make mistake.

2015-06-17 Thread Jerry Xiong
Great, looking forward to the next version!
My Julia is 0.3.9 so far.


On Wednesday, June 17, 2015 at 3:14:24 PM UTC+2, Seth wrote:



 On Wednesday, June 17, 2015 at 8:04:11 AM UTC-5, Jerry Xiong wrote:

 Today I spend many time to find a bug in my code. It is turn out that I 
 mistakenly wrote sum(X,2) as sum(X.2). No any error information is reported 
 and Julia regarded X.2 as X*0.2. The comma , is quite close to dot . in 
 the keyboard and looks quite similar in some fonts. As there is no any 
 error occur, this bug will be dangerous. Also, it is not intuitive to 
 understand X.2 is X*0.2. I think maybe it is better to forbid syntax like 
 X.2 but only allowed .2X. 


 This appears to be fixed in 0.4:

 julia x = 100
 100

 julia x.2
 ERROR: syntax: extra token 0.2 after end of expression

 julia sum(x.2)
 ERROR: syntax: missing comma or ) in argument list

 julia f(x) = x.2
 ERROR: syntax: extra token 0.2 after end of expression

 julia f(x) = sum(x.2)
 ERROR: syntax: missing comma or ) in argument list

  



Re: [julia-users] Re: Need help writing parallel code with @sync and @everywhere

2015-06-17 Thread Daniel Carrera
Thanks. I didn't know about macroexpand(). To me macros often feel like
black magic.
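macroexpand is indeed the standard way to demystify a macro: it returns the expression the macro would produce, without running it. (On later Julia versions it additionally takes the module to expand in, as the sketch below assumes.)

```julia
# Expand a simple macro call into the code it generates.
# On current Julia, macroexpand takes the target module first.
ex = macroexpand(Main, :( @assert 1 + 1 == 2 ))
@assert isa(ex, Expr)   # the macro produced ordinary code...
println(ex.head)        # ...whose top-level head we can inspect
```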

On 17 June 2015 at 14:22, David Gold david.gol...@gmail.com wrote:

 Have you tried macroexpanding the expression? Doing so yields

 julia macroexpand(:( for i = 1:N
   @sync @parallel for j = (i + 1):N
   tmp[j] = i * j
   end
   end ))

 :(for i = 1:N # line 2:
 begin  # task.jl, line 342:
 Base.sync_begin() # line 343:
 #6#v = begin  # multi.jl, line 1487:
 Base.pfor($(Expr(:localize, :(()->begin  # expr.jl,
 line 113:
 begin  # multi.jl, line 1460:
 function (#7#lo::Base.Int,#8#hi::Base.Int) # multi.jl,
 line 1461:
 for j = (i + 1:N)[#7#lo:#8#hi] # line 1462:
 begin  # line 3:
 tmp[j] = i * j
 end
 end
 end
 end
 end))),Base.length(i + 1:N))
 end # line 344:
 Base.sync_end() # line 345:
 #6#v
 end
 end)


  It looks like @parallel does the work of setting up a properly formatted
 call to Base.pfor. In particular, it builds an Expr object with head
 :localize and argument a zero-arg anonymous function, and then passes the
 interpolation of that expression along with `Base.length(i + 1:N)` to
 Base.pfor. The body of the anonymous function declares another function
 with arguments `#7#lo`, `#8#hi`. The latter variables somehow annotate the
 delimiters of your inner loop, which gets reproduced inside the body of the
 declared function. I'm *guessing* that the anonymous function is used as a
 vehicle to pass the code of the annotated inner loop to Base.pfor without
 executing it beforehand. But I could be wrong.


 Then @sync just wraps all the above between calls to `Base.sync_begin` and
 `Base.sync_end`.


 I also should note I have zero experience with Julia's parallel machinery
 and am entirely unfamiliar with the internals of Base.pfor. I just enjoy
 trying to figure out macros.

 On Wednesday, June 17, 2015 at 5:49:58 AM UTC-4, Daniel Carrera wrote:


 On Wednesday, 17 June 2015 10:28:37 UTC+2, Nils Gudat wrote:

 I haven't used @everywhere in combination with begin..end blocks, I
 usually pair @sync with @parallel - see an example here
 https://github.com/nilshg/LearningModels/blob/master/NHL/NHL_6_Bellman.jl,
 where I've parallelized the entire nested loop ranging from lines 25 to 47.



 Aha! Thanks. Copying your example I was able to produce this:

 N = 5
 tmp = SharedArray(Int, (N))

 for i = 1:N
 # Compute tmp in parallel #
 @sync @parallel for j = (i + 1):N
 tmp[j] = i * j
 end

 # Consume tmp in serial #
 for j = (i + 1):N
 println(tmp[j])
 end
 end


 This seems to work correctly and gives the same answer as the serial
 code. Can you help me understand how it works? What does @sync @parallel
 do? I feel like I half-understand it, but the concept is not clear in my
 head.

 Thanks.

 Daniel.




-- 
When an engineer says that something can't be done, it's a code phrase that
means it's not fun to do.


Re: [julia-users] Re: What's [] called? ie How do I refer to the array construction function?

2015-06-17 Thread David Gold
The `getindex` explanation is not quite correct. Note, for instance, that 
Avik's example returns a 1-element array containing the tuple (1, 2, 3), 
not the 3-element array [1, 2, 3].  

As far as *callable* objects are concerned, I think you probably want 
`vcat` (or `hcat`, depending on your case):

julia apply(vcat, 1, 2, 3)
WARNING: apply(f, x) is deprecated, use `f(x...)` instead
 in depwarn at ./deprecated.jl:62
 in apply at deprecated.jl:115
while loading no file, in expression starting on line 0
3-element Array{Int64,1}:
 1
 2
 3

HOWEVER, it's worth noting that most array construction actually does not 
go through a callable object, but rather is the result of parsing an 
expression with a special head. Dumping expressions containing common array 
construction patterns will be more revealing here than looking at methods:

julia dump( :( [1, 2, 3, 4] ))
Expr 
  head: Symbol vect
  args: Array(Any,(4,))
    1: Int64 1
    2: Int64 2
    3: Int64 3
    4: Int64 4
  typ: Any

julia dump( :( [1 2 3 4] ))
Expr 
  head: Symbol hcat
  args: Array(Any,(4,))
    1: Int64 1
    2: Int64 2
    3: Int64 3
    4: Int64 4
  typ: Any

julia dump( :( Int[1; 2; 3; 4] ))
Expr 
  head: Symbol typed_vcat
  args: Array(Any,(5,))
    1: Symbol Int
    2: Int64 1
    3: Int64 2
    4: Int64 3
    5: Int64 4
  typ: Any

julia eval(Expr(:vcat, 1, 2, 3, 4))
4-element Array{Int64,1}:
 1
 2
 3
 4

On Wednesday, June 17, 2015 at 8:54:11 AM UTC-4, Mauro wrote:

 I think it just uses getindex (a bit of a hack...): 

 julia @which Int[3] 
 getindex(T::Union(DataType,UnionType,TypeConstructor),vals...) at 
 array.jl:119 


 On Wed, 2015-06-17 at 14:50, andrew cooke and...@acooke.org 
 wrote: 
  Oh, I think the call() thing is just me being confused.  That's *only* a 
  mechanism to allow non-functions to look like functions?  I guess my 
  misunderstanding is more about how apply is defined (it mentions call), 
  which really isn't important to me right now, so feel free to ignore 
 that 
  part of my question.  Sorry. 
  
  
  On Wednesday, 17 June 2015 09:45:46 UTC-3, andrew cooke wrote: 
  
  
  If I want to pass the function that constructs an array of Any, given 
 some 
  values, to another function, what do I use? 
  
  Here's an example that might make things clearer: 
  
  julia f(x...) = Any[x...] 
  f (generic function with 1 method) 
  
  julia apply(f, 1,2,3) 
  3-element Array{Any,1}: 
   1 
   2 
   3 
  
  julia apply(Any[], 1,2,3) 
  ERROR: MethodError: `call` has no method matching call(::Array{Any,1}, 
 :: 
  Int64, ::Int64, ::Int64) 
  Closest candidates are: 
BoundsError(::Any...) 
TypeVar(::Any...) 
TypeConstructor(::Any...) 
... 
   in apply at deprecated.jl:116 
  
  where I am looking for what the built-in equivalent of f() is. 
  
  I may be even more confused, because I also don't understand why this 
  fails: 
  
  julia call(f, 1, 2) 
  ERROR: MethodError: `call` has no method matching call(::Function, 
 ::Int64 
  , ::Int64) 
  Closest candidates are: 
BoundsError(::Any...) 
TypeVar(::Any...) 
TypeConstructor(::Any...) 
... 
  
  So any guidance appreciated. 
  
  Thanks, 
  Andrew 
  



[julia-users] What's [] called? ie How do I refer to the array construction function?

2015-06-17 Thread andrew cooke

If I want to pass the function that constructs an array of Any, given some 
values, to another function, what do I use?

Here's an example that might make things clearer:

julia f(x...) = Any[x...]
f (generic function with 1 method)

julia apply(f, 1,2,3)
3-element Array{Any,1}:
 1
 2
 3

julia apply(Any[], 1,2,3)
ERROR: MethodError: `call` has no method matching call(::Array{Any,1}, ::
Int64, ::Int64, ::Int64)
Closest candidates are:
  BoundsError(::Any...)
  TypeVar(::Any...)
  TypeConstructor(::Any...)
  ...
 in apply at deprecated.jl:116

where I am looking for what the built-in equivalent of f() is.

I may be even more confused, because I also don't understand why this fails:

julia call(f, 1, 2)
ERROR: MethodError: `call` has no method matching call(::Function, ::Int64, 
::Int64)
Closest candidates are:
  BoundsError(::Any...)
  TypeVar(::Any...)
  TypeConstructor(::Any...)
  ...

So any guidance appreciated.

Thanks,
Andrew


Re: [julia-users] Re: What's [] called? ie How do I refer to the array construction function?

2015-06-17 Thread David Gold
However, it does look like this usage pattern does resolve to `getindex`:

julia dump(:( Int[1, 2, 3, 4] ))
Expr 
  head: Symbol ref
  args: Array(Any,(5,))
    1: Symbol Int
    2: Int64 1
    3: Int64 2
    4: Int64 3
    5: Int64 4
  typ: Any

julia getindex(Int, 1, 2, 3, 4)
4-element Array{Int64,1}:
 1
 2
 3
 4

So now I am not so sure about my claim that most concatenation doesn't go 
through function calls. However, I don't think that the rest of the 
concatenating heads (:vect, :vcat, :typed_vcat, etc.) lower to getindex:

julia @which([1, 2, 3, 4])
vect{T}(X::T...) at abstractarray.jl:13

julia @which([1; 2; 3; 4])
vcat{T<:Number}(X::T<:Number...) at abstractarray.jl:643

they seem instead to call the function `vect`, `vcat`, etc. itself on the 
array arguments.
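Those heads can be exercised directly: `Base.vect` is the function behind the plain `[a, b, c]` literal, so calling it by hand reproduces the literal.

```julia
# [1, 2, 3] parses with head :vect, which calls Base.vect on the values.
v = Base.vect(1, 2, 3)
@assert v == [1, 2, 3]
@assert isa(v, Vector{Int})   # elements are promoted to a common type
```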


Re: [julia-users] Please sdvise how to plot the matrix (10x1000) with different color

2015-06-17 Thread Miguel Bazdresch
Note that in most plotting packages, `plot(vec)` is interpreted as 1000
series of 10 elements each. That is, when given a matrix as an argument,
they plot the columns.

-- mb

On Tue, Jun 16, 2015 at 8:10 PM, Nelson Mok laishun@gmail.com wrote:

 HI,

 Please comment, how to plot the matrix (10x1000) with different color,
 thank you

 ==
 using Gadfly

 ## number of vectors
 n = 10

 ## number of elements
 n_elm = 1000

 vec = randn(n, n_elm)
 plot(vec)   # it doesn't work

 Regards,
 Nelson.



[julia-users] Adding a native function to julia's scheduler

2015-06-17 Thread yigiter . public



I need to call WaitForMultipleObjects (the Windows equivalent of the select 
call on Linux) from Julia using ccall. As this is a blocking function, I 
would like to call it within another coroutine (task).

The problem is that Tasks in Julia only function effectively if all the 
blocking calls within them come from Julia's own I/O interface. They cannot 
deal with a blocking call into a native C function.

As far as I can see, Julia is based on libuv. I guess that every time a 
blocking call (from the defined I/O interface) is issued, Julia internally 
calls a corresponding function from the asynchronous libuv API and then 
wait()s for a notify() from libuv. I guess the entire scheduler of Julia is 
based on this paradigm, so that it can deal with async operation within a 
single thread.

My question is: is it possible to extend this wait()/notify() paradigm to 
an arbitrary blocking ccall?

I have tried the following solution, but it fails miserably:

0) Start a task which calls a non-blocking function from the DLL and then 
wait()s for a notify().
1) (In C) Implement a DLL which creates another thread to call the real 
blocking function whenever Julia calls the non-blocking function from the 
previous step.
2) Provide a Julia callback function to the DLL, which is called at the 
finalizing step of that thread.
3) (In Julia) the callback function calls notify().

However, it turned out that notify() itself is not thread safe, and Julia's 
response to a notify() from another thread (created in C) is totally 
random.

Is it possible to make Julia's scheduler handle arbitrary blocking calls?

(PS: I was previously advised a solution based on parallel processes. 
However, for several reasons, the multi-process paradigm is not a suitable 
option for me right now.)
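For reference: the thread-safe notify being asked for here was later added to Base as `AsyncCondition` (Julia 0.5+), which wraps a libuv async handle. libuv documents `uv_async_send` as safe to call from any thread, and it wakes a task that is `wait`ing on the condition. A minimal single-threaded sketch on current Julia (a real use would make the `uv_async_send` call from the C worker thread's completion callback):

```julia
# Base.AsyncCondition wraps a uv_async_t handle; signalling it via
# uv_async_send is the thread-safe counterpart of notify().
cond = Base.AsyncCondition()

# Here we signal from the same thread just to show the mechanics; a C
# worker thread would make this exact call when the blocking call returns.
ccall(:uv_async_send, Cint, (Ptr{Cvoid},), cond.handle)

wait(cond)   # returns once the pending async event is delivered
println("woken up")
```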


Re: [julia-users] Please sdvise how to plot the matrix (10x1000) with different color

2015-06-17 Thread Nils Gudat
^ True; I assumed in my post above that the rows were the objects of 
interest here, just because it seems more natural to plot 10 data series of 
1000 points each than 1000 data series with 10 observations each...

On Wednesday, June 17, 2015 at 3:35:18 PM UTC+1, Miguel Bazdresch wrote:

 Note that in most plotting packages, `plot(vec)` is interpreted as 1000 
 functions of 10 elements each. That is, when given a matrix as argument, 
 they plot the columns.

 -- mb

 On Tue, Jun 16, 2015 at 8:10 PM, Nelson Mok laish...@gmail.com wrote:

 Hi,

 Please advise how to plot the matrix (10x1000) with different colors, 
 thank you

 ==
 using Gadfly

 ## number of vectors
 n = 10

 ## number of elements
 n_elm = 1000

 vec = randn(n, n_elm)
 plot(vec)   # it doesn't work

 Regards,
 Nelson.




Re: [julia-users] Help: My parallel code 8300x slower than the serial.

2015-06-17 Thread Tim Holy
You're copying a lot of data between processes. Check out SharedArrays. But I 
still fear that if each job is tiny, you may not get as much benefit without 
further restructuring.

I trust that your real workload will take more than 1ms. Otherwise, it's 
very unlikely that your experiments in parallel programming will end up saving 
you time :-).

--Tim

On Wednesday, June 17, 2015 06:37:28 AM Daniel Carrera wrote:
 Hi everyone,
 
 My adventures with parallel programming with Julia continue. Here is a
 different issue from other threads: My parallel function is 8300x slower
 than my serial function even though I am running on 4 processes on a
 multi-core machine.
 
 julia> nprocs()
 4
 
 I have Julia 0.3.8. Here is my program in its entirety (not very long).
 
 function main()
 
 nbig::Int16 = 7
 nbod::Int16 = nbig
 bod  = Float64[
 0   1  2  3  4  5  6  # x position
 0   0  0  0  0  0  0  # y position
 0   0  0  0  0  0  0  # z position
 0   0  0  0  0  0  0  # x velocity
 0   0  0  0  0  0  0  # y velocity
 0   0  0  0  0  0  0  # z velocity
 1   1  1  1  1  1  1  # Mass
 ]
 
 a = zeros(3,nbod)
 
 @time for k = 1:1000
 gravity_1!(bod, nbig, nbod, a)
 end
 println(a[1,:])
 
 @time for k = 1:1000
 gravity_2!(bod, nbig, nbod, a)
 end
 println(a[1,:])
 end
 
 function gravity_1!(bod, nbig, nbod, a)
 
 for i = 1:nbod
 a[1,i] = 0.0
 a[2,i] = 0.0
 a[3,i] = 0.0
 end
 
 @inbounds for i = 1:nbig
 for j = (i + 1):nbod
 
 dx = bod[1,j] - bod[1,i]
 dy = bod[2,j] - bod[2,i]
 dz = bod[3,j] - bod[3,i]
 
 s_1 = 1.0 / sqrt(dx*dx+dy*dy+dz*dz)
 s_3 = s_1 * s_1 * s_1
 
 tmp1 = s_3 * bod[7,i]
 tmp2 = s_3 * bod[7,j]
 
 a[1,j] = a[1,j] - tmp1*dx
 a[2,j] = a[2,j] - tmp1*dy
 a[3,j] = a[3,j] - tmp1*dz
 
 a[1,i] = a[1,i] + tmp2*dx
 a[2,i] = a[2,i] + tmp2*dy
 a[3,i] = a[3,i] + tmp2*dz
 end
 end
 return a
 end
 
 function gravity_2!(bod, nbig, nbod, a)
 
 for i = 1:nbod
 a[1,i] = 0.0
 a[2,i] = 0.0
 a[3,i] = 0.0
 end
 
 @inbounds @sync @parallel for i = 1:nbig
 for j = (i + 1):nbod
 
 dx = bod[1,j] - bod[1,i]
 dy = bod[2,j] - bod[2,i]
 dz = bod[3,j] - bod[3,i]
 
 s_1 = 1.0 / sqrt(dx*dx+dy*dy+dz*dz)
 s_3 = s_1 * s_1 * s_1
 
 tmp1 = s_3 * bod[7,i]
 tmp2 = s_3 * bod[7,j]
 
 a[1,j] = a[1,j] - tmp1*dx
 a[2,j] = a[2,j] - tmp1*dy
 a[3,j] = a[3,j] - tmp1*dz
 
 a[1,i] = a[1,i] + tmp2*dx
 a[2,i] = a[2,i] + tmp2*dy
 a[3,i] = a[3,i] + tmp2*dz
 end
 end
 return a
 end
 
 
 
 So this is a straightforward N-body gravity calculation. Yes, I realize
 that gravity_2!() is wrong, but that's fine. Right now I'm just talking
 about the CPU time. When I run this on my computer I get:
 
 julia> main()
 elapsed time: 0.000475294 seconds (0 bytes allocated)
 [1.49138889 0.4636 0.1736
 -5.551115123125783e-17 -0.17366 -0.46361112
 -1.49138889]
 elapsed time: 3.953546654 seconds (126156320 bytes allocated, 13.49% gc
 time)
 [0.0 0.0 0.0 0.0 0.0 0.0 0.0]
 
 
 So, the serial version takes 0.000475 seconds and the parallel takes 3.95
 seconds. Furthermore, the parallel version is calling the garbage
 collector. I suspect that the problem has something to do with the memory
 access. Maybe the parallel code is wasting a lot of time copying variables
 in memory. But whatever the reason, this is bad. The documentation says
 that @parallel is supposed to be fast, even for very small loops, but
 that's not what I'm seeing. A non-buggy implementation will be even slower.
 
 Have I missed something? Is there an obvious error in how I'm using the
 parallel constructs?
 
 I would appreciate any guidance you may offer.
 
 Cheers,
 Daniel.
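
A minimal sketch of what Tim's SharedArrays suggestion might look like for 
this code (untested, Julia 0.3-era API; note that the accumulation into `a` 
still races between workers, exactly as in the original gravity_2!, so this 
only addresses the copying cost, not correctness):

```julia
# Put the inputs and outputs in shared memory so @parallel workers
# operate on the same storage instead of receiving private copies
# of `bod` and `a` with every iteration.
nbod = 7
bod  = SharedArray(Float64, (7, nbod))
bod[7, :] = 1.0                      # masses; other rows left at zero here
a    = SharedArray(Float64, (3, nbod))

@sync @parallel for i = 1:(nbod - 1)
    for j = (i + 1):nbod
        dx = bod[1,j] - bod[1,i]
        # ... same force terms as in gravity_1! ...
        a[1,i] += dx                 # writes land in shared memory
    end
end
```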



Re: [julia-users] Re: How to deploy Julia

2015-06-17 Thread Kyle Barbary
... and we're always looking for new contributors to JuliaAstro
http://juliaastro.github.io/ for commonly-used functionality!

- Kyle

On Wed, Jun 17, 2015 at 4:56 AM, Tim Holy tim.h...@gmail.com wrote:

 On Wednesday, June 17, 2015 12:00:02 PM Charles Novaes de Santana wrote:
  Did I hear AstroJulia? :)

 There's already the reverse:
 https://github.com/JuliaAstro

 
  Best,
 
  Charles
 
  On 16 June 2015 at 14:38, Daniel Carrera dcarr...@gmail.com wrote:
   I would love to see the paper when it comes out. I cannot use your code
   directly because I need to do a direct NBody rather than a tree code
 (I am
   modelling planetary systems). But it's nice to see another astronomer
   using
   Julia.
  
   Cheers,
   Daniel.
  
    On 16 June 2015 at 08:28, Ariel Keselman skar...@gmail.com wrote:
    FYI, just since NBody was mentioned -- I wrote a gravitational NBody
    tree code with parallel shared-memory execution. I run it with many
    millions of particles and I'm very pleased with the results. They are
    comparable to (or even faster than) some very popular C codes (e.g.
    Gadget2). I'm working on a paper currently, and will publish a package
    once everything is cleaned up, documented, etc.
  
   --
   When an engineer says that something can't be done, it's a code phrase
   that means it's not fun to do.




Re: [julia-users] Please advise how to plot the matrix (10x1000) with different color

2015-06-17 Thread Tom Breloff
But it's so colorful!!

http://imgur.com/F7PqMMZ


On Wednesday, June 17, 2015 at 11:09:26 AM UTC-4, Nils Gudat wrote:

 ^ True, I assumed in my post above that the rows were the object of 
 interest here, just because it seems more natural to plot 10 1000-point 
 data series than 1000 data series with 10 observations each...

 On Wednesday, June 17, 2015 at 3:35:18 PM UTC+1, Miguel Bazdresch wrote:

 Note that in most plotting packages, `plot(vec)` is interpreted as 1000 
 functions of 10 elements each. That is, when given a matrix as argument, 
 they plot the columns.

 -- mb

 On Tue, Jun 16, 2015 at 8:10 PM, Nelson Mok laish...@gmail.com wrote:

 Hi,

 Please advise how to plot the matrix (10x1000) with different colors, 
 thank you

 ==
 using Gadfly

 ## number of vectors
 n = 10

 ## number of elements
 n_elm = 1000

 vec = randn(n, n_elm)
 plot(vec)   # it doesn't work

 Regards,
 Nelson.




Re: [julia-users] map() vs list comprehension - any preference?

2015-06-17 Thread Seth
Gah. I'm sorry. I can't reproduce my original results! I don't know why, 
but the same tests I ran two days ago are not giving me the same timing. I 
need to go back to the drawing board here.

On Wednesday, June 17, 2015 at 11:37:52 AM UTC-5, Josh Langsfeld wrote:

 For me, map is 100x slower:

 julia> function f1(g,a)
            [Pair(x,g) for x in a]
        end
 f1 (generic function with 1 method)


 julia> function f2(g,a)
            map(x->Pair(x,g), a)
        end
 f2 (generic function with 1 method)


 julia> @time f1(2,ones(1_000_000));
   25.158 milliseconds (28491 allocations: 24736 KB, 12.69% gc time)


 julia> @time f1(2,ones(1_000_000));
    6.866 milliseconds (8 allocations: 23438 KB, 37.10% gc time)


 julia> @time f1(2,ones(1_000_000));
    6.126 milliseconds (8 allocations: 23438 KB, 25.99% gc time)


 julia> @time f2(2,ones(1_000_000));
  684.994 milliseconds (2057 k allocations: 72842 KB, 1.72% gc time)


 julia> @time f2(2,ones(1_000_000));
  647.267 milliseconds (2000 k allocations: 70313 KB, 3.64% gc time)


 julia> @time f2(2,ones(1_000_000));
  633.149 milliseconds (2000 k allocations: 70313 KB, 0.91% gc time)


 On Wednesday, June 17, 2015 at 12:04:52 PM UTC-4, Seth wrote:

 Sorry - it's part of a function:

 in_edges(g::AbstractGraph, v::Int) = [Edge(x,v) for x in badj(g,v)]

 vs

 in_edges(g::AbstractGraph, v::Int) = map(x->Edge(x,v), badj(g,v))




 On Wednesday, June 17, 2015 at 10:51:22 AM UTC-5, Mauro wrote:

 Note that inside a module is also global scope as each module has its 
 own global scope.  Best move it into a function.  M 

 On Wed, 2015-06-17 at 17:22, Seth catc...@bromberger.com wrote: 
  The speedups are both via the REPL (global scope?) and inside a 
 module. I 
  did a code_native on both - results are 
  here: https://gist.github.com/sbromberger/b5656189bcece492ffd9. 
  
  
  
  On Wednesday, June 17, 2015 at 9:56:22 AM UTC-5, Stefan Karpinski 
 wrote: 
  
  I would have expected the comprehension to be faster. Is this in 
 global 
  scope? If so you may want to try the speed comparison again where 
 each of 
  these occur in a function body and only depend on function arguments. 
  
  On Tue, Jun 16, 2015 at 10:12 AM, Seth catc...@bromberger.com 
  javascript: wrote: 
  
  I have been using list comprehensions of the form 
  bar(g, a) = [Pair(x, g) for x in a] and [foo(x) for x in a] 
  
  but recently evaluated bar(g, a) = map(x-Pair(x, g),a) and 
  map(x-foo(x),a)as substitutes. 
  
  It seems from some limited testing that map is slightly faster than 
 the 
  list comprehension, but it's on the order of 3-4% so it may just be 
 noise. 
  Allocations and gc time are roughly equal (380M allocations, 
 ~27000MB, ~6% 
  gc). 
  
  Should I prefer one approach over the other (and if so, why)? 
  
  Thanks! 
  
  
  

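
For context on why the gap can be this large on 0.3: the comprehension body 
is compiled in place, while `map` receives the anonymous function as a 
plain, unspecialized `Function` and pays a dynamic call per element. A 
hedged, untested sketch of the usual workaround from that era -- write the 
loop out by hand, which is roughly what the comprehension lowers to anyway 
(the name `f3` is made up for illustration):

```julia
# 0.3-style sketch: avoid the per-element anonymous-function call
# by filling the output array in an explicit loop.
function f3(g, a)
    out = Array(Pair{eltype(a),typeof(g)}, length(a))
    for i = 1:length(a)
        out[i] = Pair(a[i], g)   # direct call, no closure dispatch
    end
    out
end
```

FastAnonymous.jl and, later, call-overloaded "functor" types were the other 
common ways to get a specialized callable.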


Re: [julia-users] Reading from and writing to a process using a pipe

2015-06-17 Thread Kevin Squire
`open(cmd, "w")` gives back a tuple.  Try using

f, p = open(`gnuplot`, "w")
write(f, "plot sin(x)")

There was a bit of discussion when this change was made (I couldn't find it
with a quick search) about this returning a tuple -- it's a little
unintuitive, and could be "fixed" in a few different ways (easiest:
returning a compound type that can be written to and read from), but it's
probably been off most people's radar.  If you're up for it, why don't you
open an issue (if one doesn't already exist)?

Anyway, for your particular application, you probably want `readandwrite`:

help?> readandwrite
search: readandwrite

Base.readandwrite(command)

   Starts running a command asynchronously, and returns a tuple
   (stdout,stdin,process) of the output stream and input stream of the
   process, and the process object itself.

Which *also* returns a tuple (but at least now you know).

See also http://blog.leahhanson.us/running-shell-commands-from-julia.html,
which has a full rundown of reading and writing from processes.

Cheers!
   Kevin
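
A hedged sketch of how `readandwrite` fits Gaston's use case (untested, 
0.3-era API; the `set term dumb` choice is just to make gnuplot's reply 
visible as text):

```julia
# Talk to gnuplot over both ends of its stdio.
sout, sin, proc = readandwrite(`gnuplot`)

println(sin, "set term dumb")    # commands go to gnuplot's stdin
println(sin, "plot sin(x)")
flush(sin)

sleep(1)                         # crude: give gnuplot time to answer
print(readavailable(sout))       # ASCII-art plot, if any
```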

On Wed, Jun 17, 2015 at 9:03 AM, Miguel Bazdresch eorli...@gmail.com
wrote:

 Hello,

 Gaston.jl is a plotting package based on gnuplot. Gnuplot is command-line
 tool, so I send commands to it via a pipe. I open the pipe (on Linux) with
 a ccall to popen, and write gnuplot commands to the pipe using a ccall to
 fputs.

 This works fine, but I'm trying to see if Julia's native pipe and stream
 functionality can make this process more Julian and, in the process, more
 cross-platform. The documentation is encouraging:

 "You can use [a Cmd] object to connect the command to others via pipes,
 run it, and read or write to it." and "Julia provides a rich interface to
 deal with streaming I/O objects such as terminals, pipes and TCP sockets."
 Unfortunately, I just can't figure out how to use Julia's functionality for
 this purpose. This is what I've tried (I am on Julia 0.3.9):

 First, I tried using `open` with read and write:

 julia> f=open(`gnuplot`,"r+")
 ERROR: ArgumentError("mode must be \"r\" or \"w\", not \"r+\"")

 So I tried with write only:

 julia> f=open(`gnuplot`,"w")
 (Pipe(open, 0 bytes waiting),Process(`gnuplot`, ProcessRunning))

 So far, this looks good. I can see a gnuplot process running.

 Then I try to `write` to the pipe:

 julia> write(f,"plot sin(x)")
 ERROR: `write` has no method matching write(::(Pipe,Process),
 ::ASCIIString)

 OK, so let's try with `println`:

 julia> println(f,"plot sin(x)")
 (Pipe(open, 0 bytes waiting),Process(`gnuplot`, ProcessRunning))plot
 sin(x)

 and no plot is produced.

 I can't figure out how to read from the pipe, either:

 julia> readbytes(f)
 ERROR: `readbytes` has no method matching readbytes(::(Pipe,Process))

 julia> readall(f)
 ERROR: `readall` has no method matching readall(::(Pipe,Process))

 I'd appreciate any pointers. Thanks!

 -- mb




Re: [julia-users] map() vs list comprehension - any preference?

2015-06-17 Thread Stefan Karpinski
I would have expected the comprehension to be faster. Is this in global
scope? If so you may want to try the speed comparison again where each of
these occur in a function body and only depend on function arguments.

On Tue, Jun 16, 2015 at 10:12 AM, Seth catch...@bromberger.com wrote:

 I have been using list comprehensions of the form
 bar(g, a) = [Pair(x, g) for x in a] and [foo(x) for x in a]

 but recently evaluated bar(g, a) = map(x->Pair(x, g), a) and
 map(x->foo(x), a) as substitutes.

 It seems from some limited testing that map is slightly faster than the
 list comprehension, but it's on the order of 3-4% so it may just be noise.
 Allocations and gc time are roughly equal (380M allocations, ~27000MB, ~6%
 gc).

 Should I prefer one approach over the other (and if so, why)?

 Thanks!



Re: [julia-users] Re: What's [] called? ie How do I refer to the array construction function?

2015-06-17 Thread David Gold
Soo, in summary (and I do apologize for spamming this thread; I don't 
usually drink coffee, and when I do it's remarkable):

1) Mauro and Avik seem to be right that the common use case Any[1, 2, 3, 
4] lowers to getindex -- my apologies for hastily suggesting otherwise.
2) I suspect I was wrong to say that concatenation in general doesn't go 
through a function. It probably does, just not always through getindex 
(sometimes vcat, typed_vcat, vect, etc.).

On Wednesday, June 17, 2015 at 10:15:00 AM UTC-4, David Gold wrote:

 However, it does look like this use pattern does resolve to `getindex`:

 julia> dump(:( Int[1, 2, 3, 4] ))
 Expr
   head: Symbol ref
   args: Array(Any,(5,))
 1: Symbol Int
 2: Int64 1
 3: Int64 2
 4: Int64 3
 5: Int64 4
   typ: Any

 julia> getindex(Int, 1, 2, 3, 4)
 4-element Array{Int64,1}:
  1
  2
  3
  4


 So now I am not so sure about my claim that most concatenation doesn't go 
 through function calls. However, I don't think that the rest of the 
 concatenating heads (:vect, :vcat, :typed_vcat, etc.) lower to getindex:

 julia> @which([1, 2, 3, 4])
 vect{T}(X::T...) at abstractarray.jl:13

 julia> @which([1; 2; 3; 4])
 vcat{T<:Number}(X::T<:Number...) at abstractarray.jl:643

 they seem instead to call 'vect', 'vcat', etc. directly on the 
 array arguments.



Re: [julia-users] map() vs list comprehension - any preference?

2015-06-17 Thread Seth
The speedups are both via the REPL (global scope?) and inside a module. I 
did a code_native on both - results are 
here: https://gist.github.com/sbromberger/b5656189bcece492ffd9.



On Wednesday, June 17, 2015 at 9:56:22 AM UTC-5, Stefan Karpinski wrote:

 I would have expected the comprehension to be faster. Is this in global 
 scope? If so you may want to try the speed comparison again where each of 
 these occur in a function body and only depend on function arguments.

 On Tue, Jun 16, 2015 at 10:12 AM, Seth catc...@bromberger.com 
 javascript: wrote:

 I have been using list comprehensions of the form
 bar(g, a) = [Pair(x, g) for x in a] and [foo(x) for x in a]

  but recently evaluated bar(g, a) = map(x->Pair(x, g), a) and 
  map(x->foo(x), a) as substitutes.

 It seems from some limited testing that map is slightly faster than the 
 list comprehension, but it's on the order of 3-4% so it may just be noise. 
 Allocations and gc time are roughly equal (380M allocations, ~27000MB, ~6% 
 gc).

 Should I prefer one approach over the other (and if so, why)?

 Thanks!




Re: [julia-users] map() vs list comprehension - any preference?

2015-06-17 Thread Josh Langsfeld
For me, map is 100x slower:

julia> function f1(g,a)
           [Pair(x,g) for x in a]
       end
f1 (generic function with 1 method)


julia> function f2(g,a)
           map(x->Pair(x,g), a)
       end
f2 (generic function with 1 method)


julia> @time f1(2,ones(1_000_000));
  25.158 milliseconds (28491 allocations: 24736 KB, 12.69% gc time)


julia> @time f1(2,ones(1_000_000));
   6.866 milliseconds (8 allocations: 23438 KB, 37.10% gc time)


julia> @time f1(2,ones(1_000_000));
   6.126 milliseconds (8 allocations: 23438 KB, 25.99% gc time)


julia> @time f2(2,ones(1_000_000));
 684.994 milliseconds (2057 k allocations: 72842 KB, 1.72% gc time)


julia> @time f2(2,ones(1_000_000));
 647.267 milliseconds (2000 k allocations: 70313 KB, 3.64% gc time)


julia> @time f2(2,ones(1_000_000));
 633.149 milliseconds (2000 k allocations: 70313 KB, 0.91% gc time)


On Wednesday, June 17, 2015 at 12:04:52 PM UTC-4, Seth wrote:

 Sorry - it's part of a function:

 in_edges(g::AbstractGraph, v::Int) = [Edge(x,v) for x in badj(g,v)]

 vs

 in_edges(g::AbstractGraph, v::Int) = map(x->Edge(x,v), badj(g,v))




 On Wednesday, June 17, 2015 at 10:51:22 AM UTC-5, Mauro wrote:

 Note that inside a module is also global scope as each module has its 
 own global scope.  Best move it into a function.  M 

 On Wed, 2015-06-17 at 17:22, Seth catc...@bromberger.com wrote: 
  The speedups are both via the REPL (global scope?) and inside a module. 
 I 
  did a code_native on both - results are 
  here: https://gist.github.com/sbromberger/b5656189bcece492ffd9. 
  
  
  
  On Wednesday, June 17, 2015 at 9:56:22 AM UTC-5, Stefan Karpinski 
 wrote: 
  
  I would have expected the comprehension to be faster. Is this in 
 global 
  scope? If so you may want to try the speed comparison again where each 
 of 
  these occur in a function body and only depend on function arguments. 
  
  On Tue, Jun 16, 2015 at 10:12 AM, Seth catc...@bromberger.com 
  javascript: wrote: 
  
  I have been using list comprehensions of the form 
  bar(g, a) = [Pair(x, g) for x in a] and [foo(x) for x in a] 
  
  but recently evaluated bar(g, a) = map(x-Pair(x, g),a) and 
  map(x-foo(x),a)as substitutes. 
  
  It seems from some limited testing that map is slightly faster than 
 the 
  list comprehension, but it's on the order of 3-4% so it may just be 
 noise. 
  Allocations and gc time are roughly equal (380M allocations, 
 ~27000MB, ~6% 
  gc). 
  
  Should I prefer one approach over the other (and if so, why)? 
  
  Thanks! 
  
  
  



Re: [julia-users] map() vs list comprehension - any preference?

2015-06-17 Thread Mauro
Note that inside a module is also global scope as each module has its
own global scope.  Best move it into a function.  M

On Wed, 2015-06-17 at 17:22, Seth catch...@bromberger.com wrote:
 The speedups are both via the REPL (global scope?) and inside a module. I 
 did a code_native on both - results are 
 here: https://gist.github.com/sbromberger/b5656189bcece492ffd9.



 On Wednesday, June 17, 2015 at 9:56:22 AM UTC-5, Stefan Karpinski wrote:

 I would have expected the comprehension to be faster. Is this in global 
 scope? If so you may want to try the speed comparison again where each of 
 these occur in a function body and only depend on function arguments.

 On Tue, Jun 16, 2015 at 10:12 AM, Seth catc...@bromberger.com 
 javascript: wrote:

 I have been using list comprehensions of the form
 bar(g, a) = [Pair(x, g) for x in a] and [foo(x) for x in a]

  but recently evaluated bar(g, a) = map(x->Pair(x, g), a) and 
  map(x->foo(x), a) as substitutes.

 It seems from some limited testing that map is slightly faster than the 
 list comprehension, but it's on the order of 3-4% so it may just be noise. 
 Allocations and gc time are roughly equal (380M allocations, ~27000MB, ~6% 
 gc).

 Should I prefer one approach over the other (and if so, why)?

 Thanks!






[julia-users] Re: Backend deployment for website

2015-06-17 Thread Jack Minardi
We've put Julia and our package in a docker container and built a Ruby on 
Rails front end that spins up containers when needed. When we've needed to 
do a little more communication with the julia instance, we've run a ZMQ 
server in Julia and fed in the required data. (after poking open the ports 
through Docker) This has worked very well in that it allows Julia to do 
what it does best (numerical computation) and lets Rails and some other 
packages handle all the networking. Using this architecture we've been able 
to develop our julia package just as a command line app.
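
A minimal sketch of the ZMQ half of that setup (untested; assumes the 
ZMQ.jl package, and the port, message format, and `do_numerics` entry point 
are made up for illustration):

```julia
using ZMQ

# Reply socket: the Rails front end connects as a REQ client and
# feeds in the required data through the Docker-exposed port.
ctx    = Context(1)
socket = Socket(ctx, REP)
ZMQ.bind(socket, "tcp://*:5555")

while true
    msg = bytestring(ZMQ.recv(socket))   # request from the front end
    result = do_numerics(msg)            # hypothetical package entry point
    ZMQ.send(socket, result)             # reply goes back to Rails
end
```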

On Tuesday, June 16, 2015 at 11:35:40 AM UTC-4, Matthew Krick wrote:

 I've read everything I could on deployment options, but my head is still a 
 mess when it comes to all the choices, especially with how fast Julia is 
 moving! I have a website on a node.js server; when the user inputs a list 
 of points, I want to solve a traveling salesman problem (running time 
 between 2 and 10 minutes, multiple users). Can someone offer some advice on 
 what's worked for them, or any pros/cons of each option? Low cost is 
 preferable to performance.


1. Spawn a new node.js instance  solve using node-julia (
https://www.npmjs.com/package/node-julia)
2. Use Forio's epicenter to host the code (
http://forio.com/products/epicenter/)
3. Create a julia HTTP server  make a REST API (
https://github.com/JuliaWeb/HttpServer.jl)
4. Host on Google Compute Engine (https://cloud.google.com/compute/)
5. Host on Amazon's Simple Queue (http://aws.amazon.com/sqs/)
6. Use Julia-box, if it can somehow accept inputs via an http call (
https://www.juliabox.org/)
7. ???




[julia-users] Reading from and writing to a process using a pipe

2015-06-17 Thread Miguel Bazdresch
Hello,

Gaston.jl is a plotting package based on gnuplot. Gnuplot is a command-line
tool, so I send commands to it via a pipe. I open the pipe (on Linux) with
a ccall to popen, and write gnuplot commands to the pipe using a ccall to
fputs.

This works fine, but I'm trying to see if Julia's native pipe and stream
functionality can make this process more Julian and, in the process, more
cross-platform. The documentation is encouraging:

"You can use [a Cmd] object to connect the command to others via pipes, run
it, and read or write to it." and "Julia provides a rich interface to deal
with streaming I/O objects such as terminals, pipes and TCP sockets."
Unfortunately, I just can't figure out how to use Julia's functionality for
this purpose. This is what I've tried (I am on Julia 0.3.9):

First, I tried using `open` with read and write:

julia> f=open(`gnuplot`,"r+")
ERROR: ArgumentError("mode must be \"r\" or \"w\", not \"r+\"")

So I tried with write only:

julia> f=open(`gnuplot`,"w")
(Pipe(open, 0 bytes waiting),Process(`gnuplot`, ProcessRunning))

So far, this looks good. I can see a gnuplot process running.

Then I try to `write` to the pipe:

julia> write(f,"plot sin(x)")
ERROR: `write` has no method matching write(::(Pipe,Process),
::ASCIIString)

OK, so let's try with `println`:

julia> println(f,"plot sin(x)")
(Pipe(open, 0 bytes waiting),Process(`gnuplot`, ProcessRunning))plot
sin(x)

and no plot is produced.

I can't figure out how to read from the pipe, either:

julia> readbytes(f)
ERROR: `readbytes` has no method matching readbytes(::(Pipe,Process))

julia> readall(f)
ERROR: `readall` has no method matching readall(::(Pipe,Process))

I'd appreciate any pointers. Thanks!

-- mb


Re: [julia-users] map() vs list comprehension - any preference?

2015-06-17 Thread Seth
Sorry - it's part of a function:

in_edges(g::AbstractGraph, v::Int) = [Edge(x,v) for x in badj(g,v)]

vs

in_edges(g::AbstractGraph, v::Int) = map(x-Edge(x,v), badj(g,v))




On Wednesday, June 17, 2015 at 10:51:22 AM UTC-5, Mauro wrote:

 Note that inside a module is also global scope as each module has its 
 own global scope.  Best move it into a function.  M 

 On Wed, 2015-06-17 at 17:22, Seth catc...@bromberger.com javascript: 
 wrote: 
  The speedups are both via the REPL (global scope?) and inside a module. 
 I 
  did a code_native on both - results are 
  here: https://gist.github.com/sbromberger/b5656189bcece492ffd9. 
  
  
  
  On Wednesday, June 17, 2015 at 9:56:22 AM UTC-5, Stefan Karpinski wrote: 
  
  I would have expected the comprehension to be faster. Is this in global 
  scope? If so you may want to try the speed comparison again where each 
 of 
  these occur in a function body and only depend on function arguments. 
  
  On Tue, Jun 16, 2015 at 10:12 AM, Seth catc...@bromberger.com 
  javascript: wrote: 
  
  I have been using list comprehensions of the form 
  bar(g, a) = [Pair(x, g) for x in a] and [foo(x) for x in a] 
  
  but recently evaluated bar(g, a) = map(x-Pair(x, g),a) and 
  map(x-foo(x),a)as substitutes. 
  
  It seems from some limited testing that map is slightly faster than 
 the 
  list comprehension, but it's on the order of 3-4% so it may just be 
 noise. 
  Allocations and gc time are roughly equal (380M allocations, ~27000MB, 
 ~6% 
  gc). 
  
  Should I prefer one approach over the other (and if so, why)? 
  
  Thanks!