[julia-users] ANN: Errorfree Transforms for Arithmetic, Faster MultiDirected Rounding

2016-06-24 Thread Jeffrey Sarnoff
These require v0.5-; some may well work correctly with v0.4.6+ (I don't 
have that handy).

FastDirectedRounding.jl was written for work with floating point intervals, 
where there is much switching of rounding direction.
The use of errorfree transformations to speed rounding is my own approach 
(afaik, and I have looked).
It works with AdjacentFloat.jl (also available, see below) which provides 
faster versions of nextfloat, prevfloat that are value equivalent for 
normal (not subnormal) floats.

It works with ErrorfreeArithmetic.jl (also available, see below) which 
provides an additional, low-order float value along with the usual result of 
float arithmetic.
(In function names, __as2 means the function returns the two most significant 
parts of something that has more than two parts at full significance, and 
__GTE(a,b) means |a| >= |b|.)
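For readers new to the idea, the building block behind packages like this is the classic two-sum error-free transformation: the ordinary rounded sum plus an exact low-order correction. A minimal sketch of the concept (not code from the packages above):

```julia
# Knuth-style two-sum: hi + lo recovers a + b exactly,
# where hi is the usual rounded sum (a sketch of the idea only)
function two_sum(a::Float64, b::Float64)
    hi = a + b
    v  = hi - a
    lo = (a - (hi - v)) + (b - v)
    return hi, lo
end

two_sum(0.1, 0.2)   # (0.30000000000000004, -2.7755575615628914e-17)
```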

I'm announcing these so they are available for the hackathon at JuliaCon. 
They will live at this distribution location temporarily.


https://github.com/jsarnoff-juliacon/AdjacentFloat.jl
https://github.com/jsarnoff-juliacon/FastDirectedRounding.jl
https://github.com/jsarnoff-juliacon/ErrorfreeArithmetic.jl



[julia-users] Tutorial on the Julia language in Brazilian Portuguese

2016-06-24 Thread jmarcellopereira
Hello everyone

I am writing a Julia language tutorial in Brazilian Portuguese (over 200 pages):

https://github.com/jmarcellopereira/juliatutorialbr


[julia-users] Tensorflow like dataflow graphs for Julia?

2016-06-24 Thread Gabriel Goh
I'm wondering if a library exists in Julia where I can specify dataflow 
graphs that can be compiled and differentiated, similar to what TensorFlow 
does. Thanks a lot!


Re: [julia-users] A question of style: type constructors vs methods

2016-06-24 Thread Gabriel Gellner
Really good point about dispatch. Man, I need to think this through. It might 
just be my brain not fully getting multiple dispatch versus OO inheritance 
for this kind of design.

On Friday, June 24, 2016 at 12:29:48 PM UTC-7, Toivo Henningsson wrote:
>
> If you call the function Bar, then users might expect to be able to do 
> dispatch and type assertions using ::Bar, so I'd say it's safer to use bar 
> in this case. 
>
> Also consider whether you're sure that Foo will always return a Foo in the 
> future; if you want to keep the flexibility to change that, then I think 
> you should go with foo as well. 
>


[julia-users] Re: Wrap fortran 90 interface code for DDE

2016-06-24 Thread Dupont
Thank you for your suggestion, I will take a look.

Best regards


Re: [julia-users] ODE.jl - backwards (negative in time) integration

2016-06-24 Thread Chris
Submitted https://github.com/JuliaLang/ODE.jl/issues/104 

Thanks.

On Friday, June 24, 2016 at 3:50:42 PM UTC-4, Mauro wrote:
>
> This is a bug, could you file an issue? Thanks!  Note that some solvers, 
> even some of the Runge-Kutta family work, for instance ode78, ode45_fe, 
> ode21 or ode23s. 
>
> On Fri, 2016-06-24 at 20:48, Chris <7hunde...@gmail.com > 
> wrote: 
> > Hello, 
> > 
> > I came across this issue (recreated here using example code): 
> > 
> > julia> using ODE 
> > 
> > julia> function f(t, y) 
> ># Extract the components of the y vector 
> >(x, v) = y 
> > 
> ># Our system of differential equations 
> >x_prime = v 
> >v_prime = -x 
> > 
> ># Return the derivatives as a vector 
> >[x_prime; v_prime] 
> >end; 
> > 
> > julia> # Initial conditions -- x(0) and x'(0) 
> >const start = [0.0; 0.1] 
> > WARNING: redefining constant start 
> > 2-element Array{Float64,1}: 
> >  0.0 
> >  0.1 
> > 
> > julia> # Time vector going from 0 to 2*PI in 0.01 increments 
> >time = 0.:-.1:-4pi; 
> > 
> > julia> t, y = ode45(f, start, time); 
> > 
> > This hangs for a long time (I've waited up to a minute before killing 
> it). 
> > Is this a bug, or something I'm doing wrong? I do notice that reversing 
> the 
> > time vector (-4pi:.1:0.0) yields a result. 
> > 
> > Thanks, 
> > Chris 
>


Re: [julia-users] ODE.jl - backwards (negative in time) integration

2016-06-24 Thread Mauro
This is a bug, could you file an issue? Thanks!  Note that some solvers,
even some of the Runge-Kutta family work, for instance ode78, ode45_fe,
ode21 or ode23s.
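For example, swapping one of those solvers into the snippet quoted below might look like this (a quick sketch using the same call signature as the report; untested here):

```julia
using ODE

f(t, y) = [y[2]; -y[1]]           # the same harmonic oscillator as below
start   = [0.0; 0.1]
tspan   = 0.0:-0.1:-4pi           # decreasing (backwards) time span

t, y = ode23s(f, start, tspan)    # ode78 or ode23 should accept the same arguments
```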

On Fri, 2016-06-24 at 20:48, Chris <7hunderstr...@gmail.com> wrote:
> Hello,
>
> I came across this issue (recreated here using example code):
>
> julia> using ODE
>
> julia> function f(t, y)
># Extract the components of the y vector
>(x, v) = y
>
># Our system of differential equations
>x_prime = v
>v_prime = -x
>
># Return the derivatives as a vector
>[x_prime; v_prime]
>end;
>
> julia> # Initial conditions -- x(0) and x'(0)
>const start = [0.0; 0.1]
> WARNING: redefining constant start
> 2-element Array{Float64,1}:
>  0.0
>  0.1
>
> julia> # Time vector going from 0 to 2*PI in 0.01 increments
>time = 0.:-.1:-4pi;
>
> julia> t, y = ode45(f, start, time);
>
> This hangs for a long time (I've waited up to a minute before killing it).
> Is this a bug, or something I'm doing wrong? I do notice that reversing the
> time vector (-4pi:.1:0.0) yields a result.
>
> Thanks,
> Chris


Re: [julia-users] A question of style: type constructors vs methods

2016-06-24 Thread Toivo Henningsson
If you call the function Bar, then users might expect to be able to do dispatch 
and type assertions using ::Bar, so I'd say it's safer to use bar in this case. 

Also consider whether you're sure that Foo will always return a Foo in the future; 
if you want to keep the flexibility to change that, then I think you 
should go with foo as well. 
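A tiny sketch of the trade-off (Foo/Bar/bar are the thread's placeholders; this is illustrative only, not code from the discussion):

```julia
# Julia 0.4/0.5 syntax (`struct` in later versions)
immutable Bar
    x::Int
end

# Style 1: constructor-like. Users will expect Bar(...)::Bar and `::Bar` dispatch.
Bar(s::AbstractString) = Bar(parse(Int, s))

# Style 2: lowercase factory. Free to return something other than a Bar later.
bar(s::AbstractString) = Bar(parse(Int, s))

Bar("42")   # Bar(42)
bar("42")   # also Bar(42), but the name promises nothing about the return type
```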

[julia-users] Fixing JavaCall for 0.5: generating methods which call ccall

2016-06-24 Thread Eric Davies
I have an intuition that this should be possible because the ccall depends 
on the input types, but I can't figure out how to make it work. I've tried 
a few things and they all seem to result in an UndefVarError or 
UndefRefError.

I want to turn this:
for (x, y, z) in [ (:jboolean, :(jnifunc.CallBooleanMethodA), :(jnifunc.CallStaticBooleanMethodA)),
                   (:jchar,    :(jnifunc.CallCharMethodA),    :(jnifunc.CallStaticCharMethodA)),
                   (:jbyte,    :(jnifunc.CallByteMethodA),    :(jnifunc.CallStaticByteMethodA)),
                   (:jshort,   :(jnifunc.CallShortMethodA),   :(jnifunc.CallStaticShortMethodA)),
                   (:jint,     :(jnifunc.CallIntMethodA),     :(jnifunc.CallStaticIntMethodA)),
                   (:jlong,    :(jnifunc.CallLongMethodA),    :(jnifunc.CallStaticLongMethodA)),
                   (:jfloat,   :(jnifunc.CallFloatMethodA),   :(jnifunc.CallStaticFloatMethodA)),
                   (:jdouble,  :(jnifunc.CallDoubleMethodA),  :(jnifunc.CallStaticDoubleMethodA)),
                   (:Void,     :(jnifunc.CallVoidMethodA),    :(jnifunc.CallStaticVoidMethodA)) ]
    m = quote
        function _jcall(obj, jmethodId::Ptr{Void}, callmethod::Ptr{Void},
                        rettype::Type{$(x)}, argtypes::Tuple, args...)
            if callmethod == C_NULL #!
                callmethod = ifelse(typeof(obj) <: JavaObject, $y, $z)
            end
            @assert callmethod != C_NULL
            @assert jmethodId != C_NULL
            if(isnull(obj)); error("Attempt to call method on Java NULL"); end
            savedArgs, convertedArgs = convert_args(argtypes, args...)
            result = ccall(callmethod, $x, (Ptr{JNIEnv}, Ptr{Void}, Ptr{Void}, Ptr{Void}),
                           penv, obj.ptr, jmethodId, convertedArgs)
            if result == C_NULL; geterror(); end
            if result == nothing; return; end
            return convert_result(rettype, result)
        end
    end
    eval(m)
end

Into something that works on 0.5. Code located here: 
https://github.com/invenia/JavaCall.jl/blob/compat-0.5/src/core.jl#L196

I've learned tricks to deal with the 0.4 function eval pattern, but they 
don't seem to work with ccall, which is a special feature that requires 
some arguments to be static and known at compile time.
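For what it's worth, the bare @eval-in-a-loop pattern (interpolating the type directly into the method body and keeping the function pointer as a runtime argument) does still work on 0.5 for ccall through a pointer. Here is a self-contained toy version, purely to illustrate the pattern (this is not the JavaCall fix itself; cfunction stands in for the JNI pointers):

```julia
# one generated method per return type, calling through a runtime function pointer
for T in (Cdouble, Cfloat)
    @eval function call_unary(fptr::Ptr{Void}, ::Type{$T}, x)
        # $T is spliced in as a literal type, which is what ccall requires
        ccall(fptr, $T, ($T,), x)
    end
end

p = cfunction(cos, Cdouble, (Cdouble,))   # a runtime Ptr{Void}, analogous to the jnifunc fields
call_unary(p, Cdouble, 1.0)               # ≈ 0.5403023058681398
```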

Anyone have any tips?

Thanks,
Eric


[julia-users] ODE.jl - backwards (negative in time) integration

2016-06-24 Thread Chris
Hello,

I came across this issue (recreated here using example code):

julia> using ODE

julia> function f(t, y)
   # Extract the components of the y vector
   (x, v) = y

   # Our system of differential equations
   x_prime = v
   v_prime = -x

   # Return the derivatives as a vector
   [x_prime; v_prime]
   end;

julia> # Initial conditions -- x(0) and x'(0)
   const start = [0.0; 0.1]
WARNING: redefining constant start
2-element Array{Float64,1}:
 0.0
 0.1

julia> # Time vector going from 0 to 2*PI in 0.01 increments
   time = 0.:-.1:-4pi;

julia> t, y = ode45(f, start, time);

This hangs for a long time (I've waited up to a minute before killing it). 
Is this a bug, or something I'm doing wrong? I do notice that reversing the 
time vector (-4pi:.1:0.0) yields a result.

Thanks,
Chris


[julia-users] Re: Plots with Plots

2016-06-24 Thread jw3126
Looks pretty awesome!

On Friday, June 24, 2016 at 5:37:15 PM UTC+2, Tom Breloff wrote:
>
> I just uploaded the IJulia notebook which was my JuliaCon workshop: 
> https://github.com/tbreloff/ExamplePlots.jl/blob/master/notebooks/plotswithplots.ipynb
> .
>
> You'll need to be on master or dev to follow along until I tag 0.7.3.  The 
> documentation is getting more and more complete, so I recommend giving it a 
> quick read if you're curious about Plots: 
> http://plots.readthedocs.io/en/latest/
>
> -Tom
>


[julia-users] Re: Drop element from tuple

2016-06-24 Thread jw3126
The following is faster, though it does not scale very well for large 
indices because of ridiculous if chains...

```
using Base.Cartesian   # for @nif

@generated function droptup{N,T,i}(x::NTuple{N,T}, ::Type{Val{i}})
    @assert 1 <= i <= N
    args = [:(x[$j]) for j in deleteat!([1:N...], i)]
    Expr(:tuple, args...)
end

@generated function droptup{N,T}(x::NTuple{N,T}, i::Int)
    quote
        @nif $N d->(i==d) d->droptup(x, Val{d})
    end
end

using BenchmarkTools
x = (10,20,30,40)

@benchmark droptup($x, 4)
```

BenchmarkTools.Trial: 
  samples:  1
  evals/sample: 1000
  time tolerance:   5.00%
  memory tolerance: 1.00%
  memory estimate:  0.00 bytes
  allocs estimate:  0
  minimum time: 6.00 ns (0.00% GC)
  median time:  6.00 ns (0.00% GC)
  mean time:6.11 ns (0.00% GC)
  maximum time: 70.00 ns (0.00% GC)



On Friday, June 24, 2016 at 4:50:59 PM UTC+2, jw3126 wrote:
>
> Okay thanks, it works! However it has extremely poor performance. I would 
> love to do this stack allocated. 
>
> ```
> using BenchmarkTools
> function subtuple(t::Tuple,i::Integer)
> idx = 1:length(t)
> idx = setdiff(idx,i)
> t[idx]
> end
>
> @benchmark subtuple($(1,2,3,4), $1)
> ```
>
> BenchmarkTools.Trial: 
>   samples:  1
>   evals/sample: 10
>   time tolerance:   5.00%
>   memory tolerance: 1.00%
>   memory estimate:  1.33 kb
>   allocs estimate:  22
>   minimum time: 1.52 μs (0.00% GC)
>   median time:  1.69 μs (0.00% GC)
>   mean time:1.96 μs (9.07% GC)
>   maximum time: 323.58 μs (98.21% GC)
>
>
> On Friday, June 24, 2016 at 4:42:17 PM UTC+2, STAR0SS wrote:
>>
>> You can do something like that:
>>
>> t = tuple(1,2,3,4)
>>
>> function subtuple(t::Tuple,i::Integer)
>> idx = 1:length(t)
>> idx = setdiff(idx,i)
>> t[idx]
>> end
>>
>> subtuple(t,3)
>>
>> (1,2,4)
>>
>

[julia-users] Re: Architecture for solving largish LASSO/elasticnet problem

2016-06-24 Thread Matthew Pearce
Thanks for the suggestions so far, I'll be investigating these options :)


[julia-users] mkpath() race condition on cluster?

2016-06-24 Thread David van Leeuwen
Hello, 

I am experiencing strange errors related to code that tries to ensure a 
directory is there:

dir = dirname(dest)
if isfile(dir)
    warn("Directory is file: ", dir)
    continue
else
    mkpath(dir)
end


However, while running this script in parallel on a cluster, I regularly get 
the error:

ERROR: LoadError: SystemError: mkdir: File exists
 in mkdir at file.jl:42
 in mkpath at file.jl:50
 [inlined code] from /home/...


Maybe it is NFS that is caching too aggressively, or lying about the 
existence of concurrently created directories.  I don't know. 


Would there be a way to make `mkpath()` behave more atomically?  At the 
moment I have to replace the `mkpath()` above with `ensuredir()` below:


function ensuredir(d::AbstractString)
    if !isdir(d)
        try
            mkpath(d)
        catch e
            isdir(d) || throw(e)
        end
    end
end

Thanks, 

---david

P.S. (If this is a re-post, it is because I couldn't find the original in 
Google's somewhat buggy Groups interface.)



[julia-users] Plots with Plots

2016-06-24 Thread Tom Breloff
I just uploaded the IJulia notebook which was my JuliaCon workshop:
https://github.com/tbreloff/ExamplePlots.jl/blob/master/notebooks/plotswithplots.ipynb
.

You'll need to be on master or dev to follow along until I tag 0.7.3.  The
documentation is getting more and more complete, so I recommend giving it a
quick read if you're curious about Plots:
http://plots.readthedocs.io/en/latest/

-Tom


[julia-users] Re: Wrap fortran 90 interface code for DDE

2016-06-24 Thread Helge Eichhorn
Oops, sent the message by accident while editing.

The code I posted is an example for a C interface from Dopri.jl.

On Wednesday, June 8, 2016 at 04:46:40 UTC-4, Dupont wrote:
>
> Dear users,
>
> I would like to wrap the code 
> to solve delay 
> differential equations. I have been wrapping C code in the past, but I 
> don't know much about fortran. 
>
> In the file *dde_solver_m.f90*, it is said that one must call the fortran 
> function *DDE_SOLVER *through an interface*. *Hence, I compiled the code 
> as a library 
>
> gfortran -Wall -shared -o libdde_solver.dylib -lm -fPIC dde_solver_m.f90
>
> but when I looked at
>
> nm -a libdde_solver.dylib
>
> I could not find the function DDE_SOLVER. I know that this may seem more 
> like a Fortran question than a Julia one, but I would be grateful if one 
> could give me a hint on how to call this library from Julia,
>
> Thank you for your help,
>
> Best regards
>
>
>
>
>

[julia-users] Re: Wrap fortran 90 interface code for DDE

2016-06-24 Thread Helge Eichhorn
You might want to have a look at my Dopri.jl package. In my experience the best way 
to wrap modern Fortran (90+) in Julia is to implement a C interface to the 
Fortran code via the ISO_C_BINDING intrinsic module. You can then call the 
interface easily from Julia.

subroutine c_dop853(n, cfcn, x, y, xend, rtol, atol,&
    itol, csolout, iout, work, lwork, iwork,&
    liwork, tnk, idid) bind(c)
    integer(c_int), intent(in) :: n
    type(c_funptr), intent(in), value :: cfcn
    real(c_double), intent(inout) :: x
    real(c_double), dimension(n), intent(inout) :: y
    real(c_double), intent(in) :: xend
    real(c_double), dimension(n), intent(in) :: rtol
    real(c_double), dimension(n), intent(in) :: atol
    integer(c_int), intent(in) :: itol
    type(c_funptr), intent(in), value :: csolout
    integer(c_int), intent(in) :: iout
    real(c_double), dimension(lwork), intent(inout) :: work
    integer(c_int), intent(in) :: lwork
    integer(c_int), dimension(liwork), intent(inout) :: iwork
    integer(c_int), intent(in) :: liwork
    type(c_ptr), intent(in) :: tnk
    integer(c_int), intent(out) :: idid
    procedure(c_fcn), pointer :: fcn
    procedure(c_solout), pointer :: solout

    call c_f_procpointer(cfcn, fcn)
    call c_f_procpointer(csolout, solout)
    call dop853(n, fcn, x, y, xend, rtol, atol,&
        itol, solout, iout, work, lwork, iwork,&
        liwork, tnk, idid)
end subroutine c_dop853

On Wednesday, June 8, 2016 at 04:46:40 UTC-4, Dupont wrote:
>
> Dear users,
>
> I would like to wrap the code 
> to solve delay 
> differential equations. I have been wrapping C code in the past, but I 
> don't know much about fortran. 
>
> In the file *dde_solver_m.f90*, it is said that one must call the fortran 
> function *DDE_SOLVER *through an interface*. *Hence, I compiled the code 
> as a library 
>
> gfortran -Wall -shared -o libdde_solver.dylib -lm -fPIC dde_solver_m.f90
>
> but when I looked at
>
> nm -a libdde_solver.dylib
>
> I could not find the function DDE_SOLVER. I know that this may seem more 
> like a Fortran question than a Julia one, but I would be grateful if one 
> could give me a hint on how to call this library from Julia,
>
> Thank you for your help,
>
> Best regards
>
>
>
>
>

Re: [julia-users] Kernel dead

2016-06-24 Thread Kevin Squire
Hello, Marco,

Something you're doing is crashing Julia. This is bad, but without more
information, it's impossible to tell what's going on or where the problem
is--it could be a bug in Julia, ZMQ.jl, IJulia.jl (maybe), or somewhere
else entirely.

Can you post the code you're trying to run (or preferably, a short snippet
that causes the problem), along with the output of versioninfo()?

You also might look through closed IJulia.jl issues.

Cheers,
  Kevin

On Friday, June 24, 2016, Marco Forti  wrote:

> Hi all,
>
> I am facing a problem: every time I try to run code, this message
> appears:
>
> "The kernel has died, and the automatic restart has failed. It is
> possible the kernel cannot be restarted. If you are not able to restart the
> kernel, you will still be able to save the notebook, but running code will
> no longer work until the notebook is reopened."
>
> Any clue about how to solve it?
>
> Thanks,
> Marco
>


Re: [julia-users] complex int ? A gcc-ism?

2016-06-24 Thread Stefan Karpinski
Yes, you can comment that bit out. Would you also mind filing an issue – we
should #ifdef that bit out on non-gcc compilers.

On Fri, Jun 24, 2016 at 6:06 AM, Victor Eijkhout 
wrote:

> My attempt to install Julia with the Intel compilers flounders on
>
> make[2]: Entering directory `/work/00434/eijkhout/julia/julia-master/src'
>
> CC usr/lib/libccalltest.so
>
> /work/00434/eijkhout/julia/julia-master/src/ccalltest.c(295): error:
> _Complex can only be used with float, double, or long double types
>
>   complex int r1;
>
>   ^
>
>
> /work/00434/eijkhout/julia/julia-master/src/ccalltest.c(296): error:
> _Complex can only be used with float, double, or long double types
>
>   complex int r2;
>
>   ^
>
>
> From all the googling that I've done, "complex int" is not legal C, but
> seems a gcc extension. True?
>
>
> This looks like a test file, and the only place it ever occurs. Shall I
> just edit it out and call my installation complete?
>
>
> Victor.
>


[julia-users] Re: Drop element from tuple

2016-06-24 Thread jw3126
Okay thanks, it works! However it has extremely poor performance. I would 
love to do this stack allocated. 

```
using BenchmarkTools
function subtuple(t::Tuple,i::Integer)
idx = 1:length(t)
idx = setdiff(idx,i)
t[idx]
end

@benchmark subtuple($(1,2,3,4), $1)
```

BenchmarkTools.Trial: 
  samples:  1
  evals/sample: 10
  time tolerance:   5.00%
  memory tolerance: 1.00%
  memory estimate:  1.33 kb
  allocs estimate:  22
  minimum time: 1.52 μs (0.00% GC)
  median time:  1.69 μs (0.00% GC)
  mean time:1.96 μs (9.07% GC)
  maximum time: 323.58 μs (98.21% GC)


On Friday, June 24, 2016 at 4:42:17 PM UTC+2, STAR0SS wrote:
>
> You can do something like that:
>
> t = tuple(1,2,3,4)
>
> function subtuple(t::Tuple,i::Integer)
> idx = 1:length(t)
> idx = setdiff(idx,i)
> t[idx]
> end
>
> subtuple(t,3)
>
> (1,2,4)
>


[julia-users] Kernel dead

2016-06-24 Thread Marco Forti
Hi all,

I am facing a problem: every time I try to run code, this message 
appears:

"The kernel has died, and the automatic restart has failed. It is possible 
the kernel cannot be restarted. If you are not able to restart the kernel, 
you will still be able to save the notebook, but running code will no 
longer work until the notebook is reopened."

Any clue about how to solve it?

Thanks,
Marco


[julia-users] Re: Drop element from tuple

2016-06-24 Thread STAR0SS
You can do something like that:

t = tuple(1,2,3,4)

function subtuple(t::Tuple,i::Integer)
idx = 1:length(t)
idx = setdiff(idx,i)
t[idx]
end

subtuple(t,3)

(1,2,4)


Re: [julia-users] How to make a Matrix of Matrices?

2016-06-24 Thread Patrick Kofod Mogensen
https://github.com/JuliaLang/julia/pull/17089 should fix it!

On Friday, June 24, 2016 at 9:38:04 AM UTC+2, Lutfullah Tomak wrote:
>
> While experimenting with this, I don't know if it is intentional, but [[M] [M];] 
> makes a sparse matrix of matrices. :)
>
> On Friday, June 24, 2016 at 5:39:51 AM UTC+3, Dan wrote:
>>
>> [[M] [M]] works.
>> And is the same as Matrix{Float64}[[M] [M]]
>>
>> But concur it is unintuitive.
>>
>> On Thursday, June 23, 2016 at 10:28:39 PM UTC-4, Sheehan Olver wrote:
>>>
>>>
>>> [M,M]  will do a Vector of Matrices in 0.5.   But [M M] still 
>>> does concatenation.  So the question is how to do Matrices of Matrices 
>>> without concatenating. 
>>>
>>>
>>>
>>>
>>> > On 24 Jun 2016, at 12:05 PM, Lutfullah Tomak  
>>> wrote: 
>>> > 
>>> > By default [M M] (without a delimiter like , or ;) means concatenation 
>>> so it throws an error. But I think in julia 0.5 [M, M] should do Matrix of 
>>> Matrices. 
>>>
>>>

[julia-users] Drop element from tuple

2016-06-24 Thread jw3126
I have a Tuple and I want to drop its ith element (e.g. construct a new 
tuple with the same elements, except the ith is missing). For example

(1,2,3,4) , 1 --> (2,3,4)
(1,2,3,4) , 3 --> (1,2,4)
(1,2,3,4) , 4 --> (1,2,3)

How to do this?


Re: [julia-users] How to make a Matrix of Matrices?

2016-06-24 Thread Patrick Kofod Mogensen
Not intended, thanks for noticing!

On Friday, June 24, 2016 at 9:38:04 AM UTC+2, Lutfullah Tomak wrote:
>
> While experimenting with this, I don't know if it is intentional, but [[M] [M];] 
> makes a sparse matrix of matrices. :)
>
> On Friday, June 24, 2016 at 5:39:51 AM UTC+3, Dan wrote:
>>
>> [[M] [M]] works.
>> And is the same as Matrix{Float64}[[M] [M]]
>>
>> But concur it is unintuitive.
>>
>> On Thursday, June 23, 2016 at 10:28:39 PM UTC-4, Sheehan Olver wrote:
>>>
>>>
>>> [M,M]  will do a Vector of Matrices in 0.5.   But [M M] still 
>>> does concatenation.  So the question is how to do Matrices of Matrices 
>>> without concatenating. 
>>>
>>>
>>>
>>>
>>> > On 24 Jun 2016, at 12:05 PM, Lutfullah Tomak  
>>> wrote: 
>>> > 
>>> > By default [M M] (without a delimiter like , or ;) means concatenation 
>>> so it throws an error. But I think in julia 0.5 [M, M] should do Matrix of 
>>> Matrices. 
>>>
>>>

[julia-users] Re: Architecture for solving largish LASSO/elasticnet problem

2016-06-24 Thread Josh Day
I'm working on https://github.com/joshday/SparseRegression.jl for penalized 
regression problems.  I'm still optimizing the code, but a test set of that 
size is not a problem.  

julia> n, p = 1000, 262144; x = randn(n, p); y = x*randn(p) + randn(n);

julia> @time o = SparseReg(x, y, ElasticNetPenalty(.1), Fista(tol = 1e-4, 
step = .1), lambda = [.5])
 22.356062 seconds (1.69 k allocations: 408.851 MB, 0.16% gc time)
■ SparseReg
  >  Model:  SparseRegression.LinearRegression()
  >Penalty:  ElasticNetPenalty (α = 0.1)
  >  Intercept:  true
  > nλ:  1
  >  Algorithm:  Fista


Re: [julia-users] Skylake support

2016-06-24 Thread felipenoris
Thanks! I'll check the hardware.


Re: [julia-users] Achitecture for solving largish LASSO/elasticnet problem

2016-06-24 Thread Tom Breloff
You could consider streaming data to multiple instances of OnlineStats.jl
in parallel. There should be no problem with memory usage as long as you
don't explicitly load your whole data set at once.

On Friday, June 24, 2016, Matthew Pearce  wrote:

> Hello
>
> I'm trying to solve a largish elasticnet type problem (convex
> optimisation).
>
>- The LARS.jl package produces Out of Memory errors for a test (1000,
>262144) problem. /proc/meminfo suggests I have 17x this array size free so
>not sure what's going on there.
>- I have access to multiple GPUs and nodes.
>- I would potentially need to solve problems of the above sort of size
>or bigger (10k, 200k) many, many times.
>
> Looking for thoughts on the appropriate way to go about tackling this:
>
>- Rewrap an existing glmnet library for Julia (e.g. this CUDA enabled
>one https://github.com/jeffwong/cudaglmnet or
>http://www-hsc.usc.edu/~garykche/gpulasso.pdf)
>- Go back to basics and use an optimisation package on the objective
>function (https://github.com/JuliaOpt), but which one? Would this be
>inefficient compared to specific glmnet solvers which do some kind of
>coordinate descent?
>- Rewrite some CUDA library from scratch (OK - probably a bad idea).
>
> Thoughts on the back of a postcard would be gratefully received.
>
>
> Cheers
>
>
> Matthew
>


[julia-users] polynomial terms in formula specification

2016-06-24 Thread Michael Borregaard
Sorry for asking a question that should be super-basic, but I have looked 
all over the internet for this for an hour now: How does one specify 
polynomial terms in glm models?

In R, I would:

y ~ x + I(x^2)

Thanks!


Michael
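(Not from the thread, but for reference: one way this is commonly handled with DataFrames.jl/GLM.jl of this vintage is to precompute the quadratic term as its own column; the column names below are made up.)

```julia
using DataFrames, GLM

df = DataFrame(x = randn(100), y = randn(100))   # stand-in data
df[:x2] = df[:x].^2                              # explicit polynomial term

lm(y ~ x + x2, df)    # or glm(y ~ x + x2, df, Normal(), IdentityLink())
```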


Re: [julia-users] pyplot how to change bground color ?

2016-06-24 Thread Tom Breloff
When you say "doesn't work for pyplot", what have you tried?

On Friday, June 24, 2016, Henri Girard  wrote:

> Hi,
> I didn't find anything to modify the background in pyplot; it's so easy in
> Plots, but that doesn't work for pyplot, not even matplotlib commands?
> Any help ?
> HG
>


[julia-users] pyplot how to change bground color ?

2016-06-24 Thread Henri Girard
Hi,
I didn't find anything to modify the background in pyplot; it's so easy in 
Plots, but that doesn't work for pyplot, not even matplotlib commands?
Any help ?
HG
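(For reference, a sketch of how this is usually done when driving matplotlib directly through PyPlot.jl; the property names are matplotlib's own, and set_axis_bgcolor is the pre-2.0 spelling.)

```julia
using PyPlot

fig = figure(facecolor = "lightgray")       # figure (outer) background
ax  = gca()
ax[:set_axis_bgcolor]("whitesmoke")         # axes (plotting area) background
plot(1:10, rand(10))
```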


[julia-users] Architecture for solving largish LASSO/elasticnet problem

2016-06-24 Thread Matthew Pearce
Hello

I'm trying to solve a largish elasticnet type problem (convex 
optimisation). 

   - The LARS.jl package produces Out of Memory errors for a test (1000, 
   262144) problem. /proc/meminfo suggests I have 17x this array size free so 
   not sure what's going on there. 
   - I have access to multiple GPUs and nodes.
   - I would potentially need to solve problems of the above sort of size 
   or bigger (10k, 200k) many, many times.

Looking for thoughts on the appropriate way to go about tackling this:

   - Rewrap an existing glmnet library for Julia (e.g. this CUDA enabled 
   one https://github.com/jeffwong/cudaglmnet or 
   http://www-hsc.usc.edu/~garykche/gpulasso.pdf)
   - Go back to basics and use an optimisation package on the objective 
   function (https://github.com/JuliaOpt), but which one? Would this be 
   inefficient compared to specific glmnet solvers which do some kind of 
   coordinate descent?
   - Rewrite some CUDA library from scratch (OK - probably a bad idea).

Thoughts on the back of a postcard would be gratefully received.


Cheers


Matthew
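(To make the "go back to basics" option concrete, here is roughly what the objective looks like written with Convex.jl plus a conic solver. This is only a sketch at a toy size; whether such a formulation scales to 262144 columns is exactly the open question.)

```julia
using Convex, SCS

n, p = 1000, 2000                  # toy size, not the real problem
X, y = randn(n, p), randn(n)
lambda1, lambda2 = 0.1, 0.1

beta = Variable(p)
problem = minimize(0.5 * sumsquares(X * beta - y) +
                   lambda1 * norm(beta, 1) + lambda2 * sumsquares(beta))
solve!(problem, SCSSolver())

beta.value                         # fitted coefficients
```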


[julia-users] Re: Parallel computing: SharedArrays not updating on cluster

2016-06-24 Thread Matthew Pearce
As the others have said, it won't work like that. I found a few options:

   1. DistributedArrays. Message passing handled in the background. Some 
   limitations, but I've not used much.
   2. SharedArrays on each machine. You can share memory between all the 
   pids on a single machine, and then pass messages between one process on 
   each machine to keep the machines in sync.
   3. Regular Arrays on each machine. Swap messages between all processes.
   
Which one works for you will depend on how big your arrays are and the 
access patterns of the code you're trying to run on them.
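(A minimal sketch of option 1, assuming DistributedArrays.jl of this era; start Julia with workers, e.g. `julia -p 4`.)

```julia
@everywhere using DistributedArrays

A  = randn(10000, 8)
dA = distribute(A)        # chunks of A now live on the worker processes
sum(dA)                   # reductions run across the workers
localpart(dA)             # whatever piece (possibly none) this process holds
```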





[julia-users] Re: Found T Pass in Stata

2016-06-24 Thread wookyoung noh
oh. it's mine. thanks!

On Wednesday, June 22, 2016 at 11:10:15 PM UTC-4, Xiangxi Gao wrote:
>
> Picked up a one week T pass valid till 6/28 this afternoon and handed it 
> to Edelman. The pass was purchased at South station on 6/21 at 0721.



[julia-users] complex int ? A gcc-ism?

2016-06-24 Thread Victor Eijkhout
My attempt to install Julia with the Intel compilers flounders on

make[2]: Entering directory `/work/00434/eijkhout/julia/julia-master/src'

CC usr/lib/libccalltest.so

/work/00434/eijkhout/julia/julia-master/src/ccalltest.c(295): error: 
_Complex can only be used with float, double, or long double types

  complex int r1;

  ^


/work/00434/eijkhout/julia/julia-master/src/ccalltest.c(296): error: 
_Complex can only be used with float, double, or long double types

  complex int r2;

  ^


From all the googling that I've done, "complex int" is not legal C, but 
seems a gcc extension. True?


This looks like a test file, and the only place it ever occurs. Shall I 
just edit it out and call my installation complete?


Victor.


[julia-users] Re: Installing with Intel compiler

2016-06-24 Thread Victor Eijkhout
I wish I could supply some general help. Here's the story as I understand 
it: the latest Intel compilers rely on gcc for full C++11 and higher 
support. That means you have to specify some tricky combination of Intel & 
gcc setup. On my system this was done by someone else, so I don't know the 
details. Here's the script that loads C++ support:

prepend_path("PATH","/opt/apps/gcc/4.9.1/bin")

prepend_path("LD_LIBRARY_PATH","/opt/apps/gcc/4.9.1/lib")

prepend_path("LD_LIBRARY_PATH","/opt/apps/gcc/4.9.1/lib64")


(that's lua, btw, but the meaning is obvious)


If you know what to ask me I'll be happy to dig further, but right now I 
can't explicitly tell you what Intel icpc uses from gcc to obtain C++1y 
functionality.


Victor.


[julia-users] Re: how to save array to text file "correctly"?

2016-06-24 Thread Pieterjan Robbe


f = open("myfile.csv","w")

for i in 1:length(data)

write(f,@sprintf("%20.16f\n",data[i]))

end

close(f)


shell> cat myfile.csv
 -0.5000000000000000
  0.0000000000000000
 -0.0000218199000000
  0.0000153967000000
 -0.0000178990000000
  0.0000126717000000
 -0.0000022432700000
  0.0000016008700000
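
An equivalent one-liner (same `data` as above) is to format each value first and let writedlm handle the file:

```julia
writedlm("myfile.csv", [@sprintf("%20.16f", x) for x in data])
```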

On Friday, June 24, 2016 at 04:55:37 UTC+2, Hoang-Ngan Nguyen wrote:
>
> Hi,
>
> I have the following array
> data = [
>  -0.5 
>  0.0 
>  -2.18199e-5
>  1.53967e-5
>  -1.7899e-5 
>  1.26717e-5
>  -2.24327e-6
>  1.60087e-6]
>
>
> When I save it using either 
>
> writecsv("filename.csv",data)
>
> or
>
> writedlm("filename.csv",data,",")
>
>
> I get this
> -.5
> 0
> -21819881018654233e-21
> 153966589305464e-19
> -17898976869144106e-21
> 12671715235247999e-21
> -22432716786997375e-22
> 16008706220269127e-22
>
> Is there anyway for me to, instead, get the following:
> -.5
> 0
> -.21819881018654233
> .153966589305464
> -.17898976869144106
> .12671715235247999
> -.022432716786997375
> .016008706220269127
>
> Thanks,
> Ngan
>
>

[julia-users] Re: indexing over `zip(collection1, collection2)`

2016-06-24 Thread Jussi Piitulainen
module Proof
import Base: Zip, Zip2, getindex
getindex(o::Zip, k::Int) = (o.a[k], o.z[k]...)
getindex(o::Zip2, k::Int) = (o.a[k], o.b[k])
end
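
Quick check of the above (assuming the 0.4/0.5 field layout of Base.Zip2):

```julia
z = zip([10, 20, 30], "abc")
z[2]          # (20, 'b') once the definitions above are loaded
```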





Re: [julia-users] How to make a Matrix of Matrices?

2016-06-24 Thread Lutfullah Tomak
While experimenting with this, I don't know if it is intentional, but [[M] [M];] 
makes a sparse matrix of matrices. :)

On Friday, June 24, 2016 at 5:39:51 AM UTC+3, Dan wrote:
>
> [[M] [M]] works.
> And is the same as Matrix{Float64}[[M] [M]]
>
> But concur it is unintuitive.
>
> On Thursday, June 23, 2016 at 10:28:39 PM UTC-4, Sheehan Olver wrote:
>>
>>
>> [M,M]  will do a Vector of Matrices in 0.5.   But [M M] still 
>> does concatenation.  So the question is how to do Matrices of Matrices 
>> without concatenating. 
>>
>>
>>
>>
>> > On 24 Jun 2016, at 12:05 PM, Lutfullah Tomak  
>> wrote: 
>> > 
>> > By default [M M] (without a delimiter like , or ;) means concatenation so 
>> it throws an error. But I think in julia 0.5 [M, M] should do Matrix of 
>> Matrices. 
>>
>>

Re: [julia-users] indexing over `zip(collection1, collection2)`

2016-06-24 Thread Mauro
On Fri, 2016-06-24 at 05:26, Rafael Fourquet  wrote:
>> My recollection is that part of the indexing interface in Julia (just by
>> convention) is that indexing should be of O(1) (or close to that)
>> complexity.
>
> As the OP suggested, this could still be the case, the zip object
> would simply forward the indexing to the zipped collections, which
> would fail if one of then is not indexable (in O(1)). Alternatively
> there could be a trait for that.
> I've also wanted that for a long time, and even setindex!, to be able
> to sort an array2 according to the order defined in array1:
> sort!(zip(array1, array2)).

Yes, you're right.


[julia-users] Re: indexing over `zip(collection1, collection2)`

2016-06-24 Thread Jussi Piitulainen


> Yes, I meant python 2.x
>

There are no zip objects in Python 2. You can easily get the exact same 
effect (pull the full result into memory at once, allow indexing, slicing, 
sorting in place, everything) in Julia with collect(zip(...)).


But in Julia, unlike Python, you should be able to implement getindex for 
zip objects yourself, at least if you are happy to allocate a new tuple at 
every call. They seem to have types Base.Zip (with fields a, z containing 
the data) and Base.Zip2 (with fields a, b containing the data); see 
dump(zip("foo", "bar", "whatever")).

That should actually be a nice exercise. I suppose indexing would just fail 
when the underlying data is not indexable.