Re: [julia-users] Signal / slot (or publish-subscribe) libraries in Julia

2016-06-03 Thread Penn Taylor
Femto Trader,

I came up with a way to get emit to work the way you wanted: 
https://gist.github.com/penntaylor/a4dd1ed268b2401e54a72705954180c5


[julia-users] Re: parametrized type with inner constructor fails

2016-06-03 Thread Eric Forgy
I've been bitten by this many times and see so many others being bitten. I 
know the docs explain this as well as I could, but it would be great if 
someone could come up with some educational magic to improve the docs a bit 
more.

On Saturday, June 4, 2016 at 10:17:16 AM UTC+8, David P. Sanders wrote:
>
>
>
> El viernes, 3 de junio de 2016, 22:06:20 (UTC-4), xdavidliu escribió:
>>
>> with
>>
>>
>> type foo
>> x::Int
>> foo(x) = x > 0 ? new(x) : new(-x)
>> end
>>
>>
>> type bar{T<:Integer}
>> x::T
>> end
>>
>>
>> type baz{T<:Integer}
>> x::T
>> baz(x) = x > 0 ? new(x) : new(-x)
>> end
>>
>>
>>
>> "foo(-5).x" gives 5, "bar(-5).x" gives -5, but "baz(-5).x" gives a 
>> "MethodError: 'convert' has no method matching..." error. 
>>
>> It seems the relevant section in the manual is this, 
>> but I only have a single field (as opposed to the examples in the link in 
>> which there are almost always two or more fields), so there should be no 
>> type disagreement or ambiguity here. Is this intended behavior?
>>
>
> This is rather subtle.
> The inner constructor defines *only* the parametrised constructor:
>
> julia> type baz{T<:Integer}
>   x::T
>   baz(x) = x > zero(x) ? new(x) : new(-x)
>   end
>
> julia> methods(baz)
> 2-element Array{Any,1}:
>  call{T}(::Type{T}, arg) at essentials.jl:56
>  call{T}(::Type{T}, args...) at essentials.jl:57
>
> julia> baz{Int}(-5)
> baz{Int64}(5)
>
> If you want to use a non-parametrized constructor like baz(-5), you need 
> to explicitly define it:
>
> julia> baz{T}(x::T) = baz{T}(x)
>
> Note that on the left, this means "for each T, define a function baz(x) 
> for x of that type"; on the right it tells you to call the parametric 
> constructor with *that particular* type T:
>
> julia> baz(-5)
> baz{Int64}(5)
>  
>


[julia-users] Re: parametrized type with inner constructor fails

2016-06-03 Thread David P. Sanders


El viernes, 3 de junio de 2016, 22:06:20 (UTC-4), xdavidliu escribió:
>
> with
>
>
> type foo
> x::Int
> foo(x) = x > 0 ? new(x) : new(-x)
> end
>
>
> type bar{T<:Integer}
> x::T
> end
>
>
> type baz{T<:Integer}
> x::T
> baz(x) = x > 0 ? new(x) : new(-x)
> end
>
>
>
> "foo(-5).x" gives 5, "bar(-5).x" gives -5, but "baz(-5).x" gives a 
> "MethodError: 'convert' has no method matching..." error. 
>
> It seems the relevant section in the manual is this, 
> but I only have a single field (as opposed to the examples in the link in 
> which there are almost always two or more fields), so there should be no 
> type disagreement or ambiguity here. Is this intended behavior?
>

This is rather subtle.
The inner constructor defines *only* the parametrised constructor:

julia> type baz{T<:Integer}
  x::T
  baz(x) = x > zero(x) ? new(x) : new(-x)
  end

julia> methods(baz)
2-element Array{Any,1}:
 call{T}(::Type{T}, arg) at essentials.jl:56
 call{T}(::Type{T}, args...) at essentials.jl:57

julia> baz{Int}(-5)
baz{Int64}(5)

If you want to use a non-parametrized constructor like baz(-5), you need to 
explicitly define it:

julia> baz{T}(x::T) = baz{T}(x)

Note that on the left, this means "for each T, define a function baz(x) for 
x of that type"; on the right it tells you to call the parametric 
constructor with *that particular* type T:

julia> baz(-5)
baz{Int64}(5)
 


Re: [julia-users] parametrized type with inner constructor fails

2016-06-03 Thread Yichao Yu
On Fri, Jun 3, 2016 at 10:06 PM, xdavidliu  wrote:
> with
>
>
> type foo
> x::Int
> foo(x) = x > 0 ? new(x) : new(-x)
> end
>
>
> type bar{T<:Integer}
> x::T
> end
>
>
> type baz{T<:Integer}
> x::T
> baz(x) = x > 0 ? new(x) : new(-x)
> end
>
>
>
> "foo(-5).x" gives 5, "bar(-5).x" gives -5, but "baz(-5).x" gives a
> "MethodError: 'convert' has no method matching..." error.
>
> It seems the relevant section in the manual is this, but I only have a
> single field (as opposed to the examples in the link in which there are
> almost always two or more fields), so there should be no type disagreement
> or ambiguity here. Is this intended behavior?

Yes, this is intended.
This has nothing to do with ambiguity. It is all about `baz{Int}` and
`baz` being completely different types (even though one is an
instantiation and subtype of the other) with completely different
sets of methods defined. The default constructor is always disabled
when custom inner constructors are defined.
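
In Julia 0.4-era syntax, the distinction can be seen in a few lines (a consolidated sketch of the behavior discussed above; the positivity logic mirrors the original example, and this will not run on later Julia versions, where inner-constructor syntax changed):

```julia
# Julia 0.4 syntax, matching the thread.
type Baz{T<:Integer}
    x::T
    Baz(x) = x > 0 ? new(x) : new(-x)  # inner constructor: defines only Baz{T}(x)
end

Baz{Int}(-5)              # works: the parametric constructor exists
# Baz(-5)                 # MethodError: no non-parametric method was generated

Baz{T}(x::T) = Baz{T}(x)  # explicit outer constructor
Baz(-5)                   # now works, giving Baz{Int64}(5)
```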


[julia-users] parametrized type with inner constructor fails

2016-06-03 Thread xdavidliu
with


type foo
x::Int
foo(x) = x > 0 ? new(x) : new(-x)
end


type bar{T<:Integer}
x::T
end


type baz{T<:Integer}
x::T
baz(x) = x > 0 ? new(x) : new(-x)
end



"foo(-5).x" gives 5, "bar(-5).x" gives -5, but "baz(-5).x" gives a 
"MethodError: 'convert' has no method matching..." error. 

It seems the relevant section in the manual is this, 
but I only have a single field (as opposed to the examples in the link in 
which there are almost always two or more fields), so there should be no 
type disagreement or ambiguity here. Is this intended behavior?


Re: [julia-users] errors using packages in parallel

2016-06-03 Thread Ethan Anderes
Hi Tim:

I just checked and the package versions are the same (v0.1.8). However, the 
Julia version on my laptop is slightly different from what is on the 
server (Version 0.4.6-pre+37 vs. Version 0.4.6-pre+36). Is that a problem? 

Ethan

On Friday, June 3, 2016 at 1:48:37 PM UTC-7, Tim Holy wrote:
>
> Do you have different versions of the package installed on the different 
> machines? 
>
> --Tim 
>
> On Friday, June 3, 2016 8:56:39 AM CDT Ethan Anderes wrote: 
> > I still get an error (see below). Even if it did work, it would still be 
> > strange that one would need a different syntax when the workers are on 
> my 
> > local machine vs. connected to servers with ssh tunnel. 
> > 
> > julia> addprocs( 
> >machines, 
> >tunnel=true, 
> >dir="/home/anderes/", 
> >exename="/usr/local/bin/julia", 
> >topology=:master_slave, 
> >) 
> > 2-element Array{Int64,1}: 
> >  2 
> >  3 
> > 
> > julia> @everywhere using  Dierckx 
> > WARNING: node state is inconsistent: node 2 failed to load cache from 
> > /Users/ethananderes/.julia/lib/v0.4/Dierckx.ji WARNING: node state is 
> > inconsistent: node 3 failed to load cache from 
> > /Users/ethananderes/.julia/lib/v0.4/Dierckx.ji WARNING: deserialization 
> > checks failed while attempting to load cache from 
> > /Users/ethananderes/.julia/lib/v0.4/Dierckx.ji WARNING: deserialization 
> > checks failed while attempting to load cache from 
> > /Users/ethananderes/.julia/lib/v0.4/Dierckx.ji WARNING: deserialization 
> > checks failed while attempting to load cache from 
> > /Users/ethananderes/.julia/lib/v0.4/Compat.ji WARNING: deserialization 
> > checks failed while attempting to load cache from 
> > /Users/ethananderes/.julia/lib/v0.4/Compat.ji ERROR: On worker 2: 
> > LoadError: InitError: Dierckx not properly installed. Run 
> > Pkg.build("Dierckx") in __init__ at 
> > /Users/ethananderes/.julia/v0.4/Dierckx/src/Dierckx.jl:27 in 
> include_string 
> > at loading.jl:282 
> >  in include_from_node1 at ./loading.jl:323 
> >  in require at ./loading.jl:259 
> >  in eval at ./sysimg.jl:14 
> >  in anonymous at multi.jl:1394 
> >  in anonymous at multi.jl:923 
> >  in run_work_thunk at multi.jl:661 
> >  [inlined code] from multi.jl:923 
> >  in anonymous at task.jl:63 
> > during initialization of module Dierckx 
> > while loading /Users/ethananderes/.julia/v0.4/Dierckx/src/Dierckx.jl, in 
> > expression starting on line 714 in remotecall_fetch at multi.jl:747 
> >  in remotecall_fetch at multi.jl:750 
> >  in anonymous at multi.jl:1396 
> > 
> > ...and 1 other exceptions. 
> > 
> >  in sync_end at ./task.jl:413 
> >  in anonymous at multi.jl:1405 
> > 
> > Just to convince you that it’s not a problem with Dierckx on the remote 
> > machine… everything works fine (even without @everywhere before using 
> > Dierckx) when the master node is on server (rather than on my laptop) 
> > 
> > $ ssh xxx...@xxx.xxx.edu  
> > 
> > (xxx)-~$ julia 
> >_ 
> >_   _ _(_)_ |  A fresh approach to technical computing 
> >   (_) | (_) (_)|  Documentation: http://docs.julialang.org 
> >_ _   _| |_  __ _   |  Type "?help" for help. 
> > 
> >   | | | | | | |/ _` |  | 
> >   | | | 
> >   | | |_| | | | (_| |  |  Version 0.4.6-pre+37 (2016-05-27 22:56 UTC) 
> > 
> >  _/ |\__'_|_|_|\__'_|  |  Commit 430601c (6 days old release-0.4) 
> > 
> > |__/   |  x86_64-redhat-linux 
> > 
> > julia> addprocs(2, topology=:master_slave) 
> > 2-element Array{Int64,1}: 
> >  2 
> >  3 
> > 
> > julia> using  Dierckx 
> > 
> > julia> @everywhere spl = Dierckx.Spline1D([1., 2., 3.], [1., 2., 3.], 
> k=2) 
> > 
> > julia> 
> > 
> > 
> > I did find this old issue on github which seems to have a similar error 
> ( 
> > https://github.com/JuliaLang/julia/issues/12381). Should I file an 
> issue, 
> > or do you think it’s a problem on my end? 
> > 
> > 
> > 
> > 
> > 
> > On Friday, June 3, 2016 at 7:31:35 AM UTC-7, Isaiah wrote: 
> > 
> > Try `@everywhere using Dierckx` 
> > 
> > > ​ 
>
>
>

Re: [julia-users] Re: bug on the mac with large matrices

2016-06-03 Thread Stefan Karpinski
What BLAS library are you using? I.e. what is the output of versioninfo()?

On Fri, Jun 3, 2016 at 5:17 PM, Steven G. Johnson 
wrote:

>
>
> On Friday, June 3, 2016 at 3:16:17 PM UTC-4, Jeremy Kozdon wrote:
>>
>> On my mac (and the several other macs I have tried) the following command
>> in Julia 0.4.5
>>
>>eig(rand(3000,3000))
>>
>> causes Julia to crash. It seems to run fine on my linux machines and
>> JuliaBox.
>
>
> Works fine for me on my Mac with both Julia 0.4.3 and 0.4.5
>
> julia> @time eig(rand(3000,3000))
>  69.368845 seconds (668.10 k allocations: 780.731 MB, 0.30% gc time)
>
>


[julia-users] Re: row or col span for subplots, using PyPlot

2016-06-03 Thread jda
doh! thanks 

[julia-users] Re: Thanks Julia

2016-06-03 Thread M. Zahirul ALAM
Thanks. Maybe we will continue to look forward to it for a while... I think 
the idea might have been oversold in the '80s and '90s. There remains a 
tremendous amount of engineering and scientific challenges. But I think the 
era of the hybrid processor might have started. Please check the following 
paper; it was an incredible piece of work.

http://www.nature.com/nature/journal/v528/n7583/full/nature16454.html

On Friday, 3 June 2016 10:47:11 UTC-4, Eric Forgy wrote:
>
> The Abstract looks awesome. All-optical processors is something I've been 
> looking forward to since nearly going to University of Arizona to study the 
> topic back in 1994 :)
>
> On Friday, June 3, 2016 at 10:37:54 PM UTC+8, M. Zahirul ALAM wrote:
>>
>> I started using Julia roughly two summers ago. I have been visiting 
>> this forum ever since, looking for answers and tips. You guys have been 
>> incredible. I did 80 percent of the calculations for the 
>> following paper using Julia. Sadly, during the final revision, we had to 
>> take the Julia reference (and a few more) out of the paper due to the 
>> journal's policy on a maximum number of references. But a big thanks is in 
>> order.   THANK YOU ALL !!
>>
>> Here is a link: http://science.sciencemag.org/content/352/6287/795
>>
>

[julia-users] Re: How to build a range of -Inf to Inf

2016-06-03 Thread Steven G. Johnson


On Thursday, June 2, 2016 at 11:42:32 PM UTC-4, colint...@gmail.com wrote:
>
> function Base.filter!{T}(x::AbstractVector{T}, r::BasicInterval{T})
> for n = length(x):-1:1
> !in(x[n], r) && deleteat!(x, n)
> end
> return(x)
> end
>

I'm pretty sure this implementation has O(n^2) complexity, because 
deleteat! for an array has to actually move all of the elements to fill the 
hole.

To get an O(n) algorithm, you could do something like:

j = 0
for i = 1:length(x)
if x[i] in r
x[j += 1] = x[i]
end
end
return resize!(x, j)

Or you could just do filter!(x -> x in r, x), which is fast in Julia 0.5
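
For completeness, here is the O(n) pattern above wrapped into a self-contained function (a sketch; `filter_inplace!` is an illustrative name, and any `r` supporting `in`, such as a range, works in place of the interval type):

```julia
# Keep only the elements of x that lie in r, compacting in place: O(n).
function filter_inplace!(x::AbstractVector, r)
    j = 0
    for i = 1:length(x)
        if x[i] in r
            j += 1
            x[j] = x[i]     # move each kept element into the next free slot
        end
    end
    return resize!(x, j)    # truncate the leftover tail once at the end
end

filter_inplace!([3, -1, 7, 2, 10], 1:5)  # → [3, 2]
```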


[julia-users] Re: row or col span for subplots, using PyPlot

2016-06-03 Thread Steven G. Johnson
On Friday, June 3, 2016 at 3:58:19 PM UTC-4, jda wrote:
>
> I am looking for a way for subplots to have column or row span in julia 
> using PyPlot.  In other words, I want to find out how to translate this:
> http://matplotlib.org/users/gridspec.html
>

 The exact same syntax works fine for me:

using PyPlot
subplot2grid((3,3), (1,0), colspan=2)
plot(rand(10))
subplot2grid((3,3), (1, 2), rowspan=2)
plot(rand(100))



[julia-users] Re: bug on the mac with large matrices

2016-06-03 Thread Steven G. Johnson


On Friday, June 3, 2016 at 3:16:17 PM UTC-4, Jeremy Kozdon wrote:
>
> On my mac (and the several other macs I have tried) the following command 
> in Julia 0.4.5 
>
>eig(rand(3000,3000)) 
>
> causes Julia to crash. It seems to run fine on my linux machines and 
> JuliaBox. 


Works fine for me on my Mac with both Julia 0.4.3 and 0.4.5

julia> @time eig(rand(3000,3000))
 69.368845 seconds (668.10 k allocations: 780.731 MB, 0.30% gc time) 



[julia-users] Is HPTCDL workshop happening this year?

2016-06-03 Thread Jong Wook Kim
I'm wondering if HPTCDL(High Performance Technical Computing in Dynamic 
Languages) workshop is being planned this year.

I could only find the proceedings of the first workshop (2014), and only a 
call for papers for the 2015 workshop.

Does anyone know the details of the HPTCDL workshop in 2015 and 2016?

Thanks,
Jong Wook


Re: [julia-users] errors using packages in parallel

2016-06-03 Thread Tim Holy
Do you have different versions of the package installed on the different 
machines?

--Tim

On Friday, June 3, 2016 8:56:39 AM CDT Ethan Anderes wrote:
> I still get an error (see below). Even if it did work, it would still be
> strange that one would need a different syntax when the workers are on my
> local machine vs. connected to servers with ssh tunnel.
> 
> julia> addprocs(
>machines,
>tunnel=true,
>dir="/home/anderes/",
>exename="/usr/local/bin/julia",
>topology=:master_slave,
>)
> 2-element Array{Int64,1}:
>  2
>  3
> 
> julia> @everywhere using  Dierckx
> WARNING: node state is inconsistent: node 2 failed to load cache from
> /Users/ethananderes/.julia/lib/v0.4/Dierckx.ji WARNING: node state is
> inconsistent: node 3 failed to load cache from
> /Users/ethananderes/.julia/lib/v0.4/Dierckx.ji WARNING: deserialization
> checks failed while attempting to load cache from
> /Users/ethananderes/.julia/lib/v0.4/Dierckx.ji WARNING: deserialization
> checks failed while attempting to load cache from
> /Users/ethananderes/.julia/lib/v0.4/Dierckx.ji WARNING: deserialization
> checks failed while attempting to load cache from
> /Users/ethananderes/.julia/lib/v0.4/Compat.ji WARNING: deserialization
> checks failed while attempting to load cache from
> /Users/ethananderes/.julia/lib/v0.4/Compat.ji ERROR: On worker 2:
> LoadError: InitError: Dierckx not properly installed. Run
> Pkg.build("Dierckx") in __init__ at
> /Users/ethananderes/.julia/v0.4/Dierckx/src/Dierckx.jl:27 in include_string
> at loading.jl:282
>  in include_from_node1 at ./loading.jl:323
>  in require at ./loading.jl:259
>  in eval at ./sysimg.jl:14
>  in anonymous at multi.jl:1394
>  in anonymous at multi.jl:923
>  in run_work_thunk at multi.jl:661
>  [inlined code] from multi.jl:923
>  in anonymous at task.jl:63
> during initialization of module Dierckx
> while loading /Users/ethananderes/.julia/v0.4/Dierckx/src/Dierckx.jl, in
> expression starting on line 714 in remotecall_fetch at multi.jl:747
>  in remotecall_fetch at multi.jl:750
>  in anonymous at multi.jl:1396
> 
> ...and 1 other exceptions.
> 
>  in sync_end at ./task.jl:413
>  in anonymous at multi.jl:1405
> 
> Just to convince you that it’s not a problem with Dierckx on the remote
> machine… everything works fine (even without @everywhere before using
> Dierckx) when the master node is on server (rather than on my laptop)
> 
> $ ssh xxx...@xxx.xxx.edu
> 
> (xxx)-~$ julia
>_
>_   _ _(_)_ |  A fresh approach to technical computing
>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>_ _   _| |_  __ _   |  Type "?help" for help.
> 
>   | | | | | | |/ _` |  |
>   | | |
>   | | |_| | | | (_| |  |  Version 0.4.6-pre+37 (2016-05-27 22:56 UTC)
> 
>  _/ |\__'_|_|_|\__'_|  |  Commit 430601c (6 days old release-0.4)
> 
> |__/   |  x86_64-redhat-linux
> 
> julia> addprocs(2, topology=:master_slave)
> 2-element Array{Int64,1}:
>  2
>  3
> 
> julia> using  Dierckx
> 
> julia> @everywhere spl = Dierckx.Spline1D([1., 2., 3.], [1., 2., 3.], k=2)
> 
> julia>
> 
> 
> I did find this old issue on github which seems to have a similar error (
> https://github.com/JuliaLang/julia/issues/12381). Should I file an issue,
> or do you think it’s a problem on my end?
> 
> 
> 
> 
> 
> On Friday, June 3, 2016 at 7:31:35 AM UTC-7, Isaiah wrote:
> 
> Try `@everywhere using Dierckx`
> 
> > ​




[julia-users] ANN: PkgSearch - a REPL utility for package discovery

2016-06-03 Thread Adrian Salceanu
Hi, 

I have released PkgSearch, a small REPL utility for package discovery. 

Package discovery seemed to be a recurring issue, with many related 
questions - and I can still remember how difficult it was for me too, when 
I started. So it might be a useful tool. 
I've been using it for a few days and it's kind of neat, being able to 
quickly search through all the publicly available packages without leaving 
the REPL :) I hope you'll enjoy it! 

It works in conjunction with an API which powers the actual search. On the 
server side, a full text search is performed against the README files. It 
covers both official packages and unofficial ones, searching for them on 
GitHub (not in real time, the data is imported regularly). This GitHub 
search is a bit naive still, so false positives might come up. 

More details in the README, at https://github.com/essenciary/PkgSearch

===

On a related note, the API providing the search results and all the tooling 
for importing and processing the data is done with Genie (formerly Jinnie) 
- the Julia web framework I've been working on for many months now. It's 
not ready for prime time yet but this is definitely a major milestone! 

On this occasion I've also added a very comprehensive README to give you 
an idea about what it does, how it works and where it's heading. 

You can find it here https://github.com/essenciary/genie - and if you like 
it, please star it :) 

Cheers,
Adrian


[julia-users] hash highlighting for atom or julia-vim

2016-06-03 Thread Ritchie Lee
I've been using LightTable for developing Julia and I really enjoy the June 
Night theme (https://github.com/JuliaIDE/June-LT) with its hash 
highlighting and font/colors.  I am considering switching to Atom and/or 
julia-vim, but can't seem to recreate the hash highlighting feature.  Am I 
missing something?

Thanks!


Re: [julia-users] Uniform syntax

2016-06-03 Thread Stefan Karpinski
In Julia 0.5, [1, 2, 3] and [1; 2; 3] do not mean the same thing – the
former does array construction while the latter does array concatenation.
These happen to produce the same result for scalars. When the elements are
collections, however, they do not:

julia> [1:3, 4:6, 7:9]
3-element Array{UnitRange{Int64},1}:
 1:3
 4:6
 7:9

julia> [1:3; 4:6; 7:9]
9-element Array{Int64,1}:
 1
 2
 3
 4
 5
 6
 7
 8
 9
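
The concatenating bracket form is sugar for vcat, which can make the distinction easier to remember (a small supplementary example; `a` and `b` are illustrative names):

```julia
a = [1:3; 4:6]        # concatenation: a 6-element integer vector
b = vcat(1:3, 4:6)    # the same operation, spelled out
a == b                # → true
```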


Eliminating the `for i=1:10` syntax has been discussed repeatedly, but
there are situations like array comprehensions where the additional three
characters required to write `for i in 1:10` seems annoyingly verbose:

julia> [1/(i+j-1) for i=1:3, j=1:3]
3×3 Array{Float64,2}:
 1.0   0.5   0.33
 0.5   0.33  0.25
 0.33  0.25  0.2

julia> [1/(i+j-1) for i in 1:3, j in 1:3]
3×3 Array{Float64,2}:
 1.0   0.5   0.33
 0.5   0.33  0.25
 0.33  0.25  0.2


Regularizing this syntax just doesn't seem like a pressing matter – if you
don't like the = syntax, don't use it.

Syntax 3 is not available since it already means something:

julia> a, b, c = rand(3), rand(3), rand(3);

julia> (a, b, c) = a + b + c
3-element Array{Float64,1}:
 1.55292
 1.89841
 1.72875


As a general comment, "armchair programming language design" is not very
effective or helpful. If you want to gain credibility when it comes to
influencing the design of Julia, you should make some pull requests that
fix or improve things first. Get a feel for the language and how it is
built. By working with it, you will come to understand why things work the
way they do – there is usually a reason. If you'd attempted to change
syntaxes 1 or 3, for example, you would have quickly found that they are
not actually possible to change without significantly altering the language
and breaking lots of code.


On Fri, Jun 3, 2016 at 4:05 AM, Ford Ox  wrote:

> I think this deserves an own topic.
>
> *You* should post here syntax that looks like duplicate or you think it
> could use already existing keyword. (mark with* # identical *or *#
> replacement*)
> Rule of thumb - *the less special syntax the better*.
>
> # identical
> # replace ' , ' with ' ; ' in arrays ?
> [1, 2, 3, 4]
> [1; 2; 3; 4]
>
>
>
> # identical
> # replace ' = ' with ' in ' in for loops ?
> for i = 1:10
> for i in 1:10
>
>
>
>
> # replacement
> # replace ' -> ' with ' = ' in anonymous functions ?
> (a, b, c) -> a + b + c
> (a, b, c) = a + b + c
>


Re: [julia-users] Uniform syntax

2016-06-03 Thread Mauro
On Fri, 2016-06-03 at 21:23, Siyi Deng  wrote:
> I am neutral about 1 and 2. But I like suggestion 3.

Note, suggestion 3 is currently valid syntax which is unlikely to go away:

julia> (a, b, c) = [1,2,3]
3-element Array{Int64,1}:
 1
 2
 3

julia> a
1

There is a reason for `->`.


[julia-users] row or col span for subplots, using PyPlot

2016-06-03 Thread jda
Hi,

I am looking for a way for subplots to have column or row span in julia 
using PyPlot.  In other words, I want to find out how to translate this:
http://matplotlib.org/users/gridspec.html
into julia language.  I tried some naive things such 
as subplot(311,rowspan=2) or subplot(3,1,[1:2]).  Ultimately I don't know 
the right syntax for translating from matplotlib to julia.  Any resources 
on the translation process would be appreciated, but at the moment the 
correct syntax for the row or column span is what I'm looking for.


Re: [julia-users] Signal / slot (or publish-subscribe) libraries in Julia

2016-06-03 Thread Penn Taylor
I commented on https://github.com/JuliaLang/Reactive.jl/issues/99 with some 
information about slots not being executed twice unless there's a delay. 

The disconnect problem is a type error on your first parameter. I think you 
meant to have this:
function disconnect(signal::Reactive.Signal, slot::Slot)
  ...
end

To get `disconnect` to work correctly, I think you need to keep a 
collection of the Signal objects returned from `connect` so that you can 
call `unpreserve` and `close` on those objects.

The same collection of Signal objects needed to handle `disconnect` will 
also provide an obvious way to handle `is_connected`.

I'm not sure how (or even if) `emit` could be changed to allow it to pass 
positional and keyword arguments using Reactive. My understanding of 
Reactive is that it operates on single values (which also includes Tuples, 
Dicts, user-defined types, etc.). I think this is pointing at a fundamental 
difference between treating signals as a declarative graph (the Reactive 
approach) and treating them as in-process RPC (the Qt approach). You could 
still get somewhat similar behavior out of Reactive by having all slots 
accept a single tuple of this form:
(arg1,arg2,arg3,Dict())

...but then each slot would have to manually destructure the tuple to use 
it. All slots would have the same signature as well, which breaks part of 
the beauty of "slots are just normal functions" that Qt and Boost allow. 
That said, there might be some obvious solution I'm not seeing, such as a 
wrapper object holding the slot function, or something like that.

On Friday, June 3, 2016 at 12:34:37 AM UTC-5, Femto Trader wrote:
>
> This implementation has some issues:
>
> emit is not able to pass arguments (positional arguments and keyword 
> arguments)
> I don't know how to implement is_connected
> Removing the comments for disconnect raises an error.
> If there isn't enough delay between 2 emit calls... slots are not executed 
> twice as they should be!
>
> Any idea to fix these issues ?
>
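
A minimal sketch of the signal/slot pattern on top of Reactive.jl, in the spirit of the suggestions above (0.4-era API; `Signal`, `map`, `push!`, and `close` are real Reactive entry points, but the wiring and the names `sig`, `slot`, and `conn` are illustrative assumptions, not Qt-equivalent semantics):

```julia
using Reactive

sig = Signal(0)                     # the "signal", carrying a single value
slot(x) = println("slot got ", x)   # a slot is just a normal function
conn = map(slot, sig)               # "connect": keep `conn` to manage the link
push!(sig, 42)                      # "emit": propagates asynchronously
Reactive.run_till_now()             # drain pending updates before continuing
close(conn)                         # "disconnect": later pushes no longer reach slot
```

Keeping the `conn` objects in a collection is what makes `disconnect` and `is_connected` implementable.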


[julia-users] Uniform syntax

2016-06-03 Thread Siyi Deng
I am neutral about 1 and 2. But I like suggestion 3.

[julia-users] bug on the mac with large matrices

2016-06-03 Thread Jeremy Edward Kozdon
On my mac (and the several other macs I have tried) the following command in 
Julia 0.4.5

    eig(rand(3000,3000))

causes Julia to crash. It seems to run fine on my linux machines and JuliaBox.

Anyone run into this and know of a work around?

Julia 0.5.0 on the mac seems not to have this issue.

[julia-users] Re: How to build a range of -Inf to Inf

2016-06-03 Thread DNF
Your in-method is a bit odd:

Base.in{T}(x::T, r::BasicInterval{T}) = (r.start <= x <= r.stop) ? true : 
false

Why don't you just write

Base.in{T}(x::T, r::BasicInterval{T}) = (r.start <= x <= r.stop)
?

The extra stuff is redundant.


Re: [julia-users] How to launch a Julia cluster under Torque scheduler

2016-06-03 Thread Erik Schnetter
David

One way to go would be to use the MPI package, and to use mpirun to start
the Julia processes. In this way, all processes are started automatically,
and you do not need to run addprocs. The example "06-cman-transport.jl" in
the MPI package shows how to use this setup.

-erik

On Fri, Jun 3, 2016 at 2:53 PM, David Parks  wrote:

> I've been reading various discussions on here about launching a Julia
> cluster.
> It's hard to tell how current they are, and the documentation is a bit
> lacking on how to launch clusters under environments such as Torque. I'm
> using Julia 0.4.5 (i.e., no cookies)
>
> The primary approach that is documented is for the master Julia process to
> `addprocs` and ssh to the workers in the cluster.
>
> But this seems to conflict with the Torque model, where Torque launches a
> script under each assigned host. When those scripts exit, the worker processes
> are considered finished (as far as I understand).
>
> So it would make most sense to let torque launch the Julia workers, and
> have those workers connect back to a designated master.
>
> Is that straightforward to do with 0.4.5?
>
> I see --worker, and I thought --bind-to would allow me to specify the
> master host, but I don't see a way to tell the master that it should listen
> for workers. This is about where the documentation ends.
>
>
>


-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/
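
As a hedged sketch of that route under Torque (the example filename comes from the MPI.jl examples directory; the resource request, paths, and process count below are illustrative assumptions):

```shell
#!/bin/bash
#PBS -l nodes=2:ppn=2      # illustrative Torque resource request
#PBS -N julia-mpi
cd "$PBS_O_WORKDIR"
# mpirun starts every Julia process up front; no addprocs/ssh step is needed.
mpirun -np 4 julia 06-cman-transport.jl
```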


[julia-users] How to launch a Julia cluster under Torque scheduler

2016-06-03 Thread David Parks
I've been reading various discussions on here about launching a Julia 
cluster.  
It's hard to tell how current they are, and the documentation is a bit 
lacking on how to launch clusters under environments such as Torque. I'm 
using Julia 0.4.5 (i.e., no cookies).

The primary approach that is documented is for the master Julia process to 
`addprocs` and ssh to the workers in the cluster.

But this seems to conflict with the Torque model, where Torque launches a 
script under each assigned host. When those scripts exit, the worker 
processes are considered finished (as far as I understand).

So it would make most sense to let torque launch the Julia workers, and 
have those workers connect back to a designated master.

Is that straightforward to do with 0.4.5? 

I see --worker, and I thought --bind-to would allow me to specify the 
master host, but I don't see a way to tell the master that it should listen 
for workers. This is about where the documentation ends.




Re: [julia-users] Is the master algorithm on the roadmap?

2016-06-03 Thread Kevin Liu
https://github.com/tbreloff/OnlineAI.jl/issues/5

On Friday, June 3, 2016 at 11:17:28 AM UTC-3, Kevin Liu wrote:
>
> I plan to write Julia for the rest of my life... given it remains 
> suitable. I am still reading all of Colah's material on nets. I ran 
> Mocha.jl a couple weeks ago and was very happy to see it work. Thanks for 
> jumping in and telling me about OnlineAI.jl, I will look into it once I am 
> ready. From a quick look, perhaps I could help and learn by building a very 
> clear documentation of it. Would really like to see Julia a leap ahead of 
> other languages, and plan to contribute heavily to it, but at the moment am 
> still getting introduced to CS, programming, and nets at the basic level. 
>
> On Friday, June 3, 2016 at 10:48:15 AM UTC-3, Tom Breloff wrote:
>>
>> Kevin: computers that program themselves is a concept which is much 
>> closer to reality than most would believe, but julia-users isn't really the 
>> best place for this speculation. If you're actually interested in writing 
>> code, I'm happy to discuss in OnlineAI.jl. I was thinking about how we 
>> might tackle code generation using a neural framework I'm working on. 
>>
>> On Friday, June 3, 2016, Kevin Liu  wrote:
>>
>>> If Andrew Ng who cited Gates, and Gates who cited Domingos (who did not 
>>> lecture at Google with a TensorFlow question in the end), were unsuccessful 
>>> penny traders, Julia was a language for web design, and the tribes in the 
>>> video didn't actually solve problems, perhaps this would be a wildly 
>>> off-topic, speculative discussion. But these statements couldn't be farther 
>>> from the truth. In fact, if I had known about this video some months ago I 
>>> would've understood better on how to solve a problem I was working on.  
>>>
>>> For the founders of Julia: I understand your tribe is mainly CS. This 
>>> master algorithm, as you are aware, would require collaboration with other 
>>> tribes. Just citing the obvious. 
>>>
>>> On Friday, June 3, 2016 at 10:21:25 AM UTC-3, Kevin Liu wrote:

 There could be parts missing as Domingos mentions, but induction, 
 backpropagation, genetic programming, probabilistic inference, and SVMs 
 working together-- what's speculative about the improved versions of 
 these? 

 Julia was made for AI. Isn't it time for a consolidated view on how to 
 reach it? 

 On Thursday, June 2, 2016 at 11:20:35 PM UTC-3, Isaiah wrote:
>
> This is not a forum for wildly off-topic, speculative discussion.
>
> Take this to Reddit, Hacker News, etc.
>
>
> On Thu, Jun 2, 2016 at 10:01 PM, Kevin Liu  wrote:
>
>> I am wondering how Julia fits in with the unified tribes
>>
>> mashable.com/2016/06/01/bill-gates-ai-code-conference/#8VmBFjIiYOqJ
>>
>> https://www.youtube.com/watch?v=B8J4uefCQMc
>>
>
>

Re: [julia-users] Re: Using Julia for real time astronomy

2016-06-03 Thread Páll Haraldsson
On Thursday, June 2, 2016 at 7:55:03 AM UTC, John leger wrote:
>
> Páll: don't worry about the project failing because of YOU ;) in any 
> case we wanted to try Julia and see if we could get help/tips from the 
> community.
>

Still, feel free to ask me anytime. I just do not want to give bad 
professional advice or oversell Julia.
 

> About the nogc, I wonder if activating it will also prevent the core of 
> Julia from being garbage collected? If yes, for a long run it's a bad 
> idea to disable it for too long.
>

Not really,* see below.
 

> For now the only options are C/C++ and Julia, sorry no D or Lisp 
> :) Why would you not recommend C for this kind of task?
> And I said 1000 images/sec but the camera may be able to go up to 10 000 
> images/sec so I think we can define it as hard real time.
>

Not really. There's a hard and fast definition of hard real-time (and 
real-time in general): it's not about speed, it's about timely actions. That 
said, 10 000 images/sec is a lot: roughly 9 GiB of uncompressed data per 
second, assuming gray-scale, byte-per-pixel, megapixel resolution. You would 
fill up the 2 TB SSD I've seen advertised [I don't know about 
radiation-hardening those; I guess anything is possible. Do you know anything 
about the potential hardware used?] in about three and a half minutes.
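A quick back-of-envelope check of that data-rate claim (all assumptions mine, not from the thread: 1 megapixel frames, grayscale, 1 byte per pixel, no compression):

```julia
# Assumptions (mine): 1 megapixel frames, grayscale, 1 byte per pixel,
# no compression.
frames_per_sec  = 10_000
bytes_per_frame = 1_000_000                    # 1 MP x 1 byte
rate = frames_per_sec * bytes_per_frame        # bytes per second
ssd  = 2 * 10^12                               # a 2 TB SSD

println(rate / 2^30, " GiB/s")                 # ~9.3 GiB/s
println(ssd / rate / 60, " minutes to fill")   # ~3.3 minutes
```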

How fast are the down-links on these satellites? Would you get all the 
[processed] data down to Earth? If you cannot, do you pick and choose the 
framerate and/or which period of time to "download"? Since I'm sure you 
want lossless compression, it seems http://flif.info/ might be of interest 
to you. [FLIF should really be wrapped as a Julia library. There's also a 
native executable that could do, invoked as a separate process, though 
maybe not suitable for you/real-time.] FLIF was GPL licensed; that 
shouldn't be a problem for government work, and should be even more of a 
non-issue now [for anybody].


You can see from here:

https://github.com/JuliaLang/julia/pull/12915#issuecomment-137114298

that soft real-time was proposed for the NEWS section, and even that 
proposal was shot down. That may have been overly cautious for the 
incremental GC: I've seen audio (which is more latency-sensitive than 
video, at the usual frame rates..) talked about as working in some 
thread, and software-defined radio discussed as a Julia project.


* "About the nogc", if you meant the function to disable the GC, then it 
doesn't block allocations (my proposal did), only postpones 
deallocations. There is no @nogc macro; my proposal for @nogc to block 
allocations was only a proposal, and rethinking it, not really too 
helpful. It was a fail-fast debugging proposal, but since @time does show 
allocations (or their absence), not just GC activity, it should suffice 
for debugging. I did a test:

[Note, this has to be in a function, not in the global scope:]

julia> function test()
 @time a=[1 2 3]
 @time a[1] = 2
   end
test (generic function with 1 method)

julia> test()
  0.01 seconds (1 allocation: 96 bytes)
  0.00 seconds

You want to see something like the latter result, not the former, not even 
with "1 allocation". It seems innocent enough, as there is no GC activity 
(then there would be more text), but that is just an accident. When garbage 
accumulates, even one allocation can trigger a GC and lots of 
deallocations, and take an unbounded amount of time in naive GC 
implementations. Incremental means it's not that bad, but still 
theoretically unbounded time, I think.

I've seen disabling the GC periodically recommended, such as in games with 
Lua, re-enabling it after each drawn frame ("vblank"). That scenario is 
superficially similar to yours. I'm however skeptical of that approach as 
a general idea, if you do not minimize allocations. Note that in games the 
heavy lifting is done by game engines, almost exclusively written in C++. 
As they do not use GC (while GC IS optional in C++ and C), Lua handles the 
game logic with probably much less memory allocated, so it works ok there, 
postponing deallocations, while taking [potentially] MORE cumulative time 
later, at the convenient time.

Why do I say more? Running out of RAM because of garbage isn't the only 
issue. NOT deallocating early prevents reusing memory that is currently 
covered by the cache, and reusing it would have helped for cache purposes.
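The alternative to juggling the GC, as hinted above, is to make the per-frame hot loop allocation-free. A minimal sketch of my own (the per-pixel operation is a placeholder, not from the thread):

```julia
# Preallocate the output buffer once and mutate it in place, so the hot
# loop allocates nothing and the GC has no per-frame garbage to collect.
function process!(out::Vector{Float64}, frame::Vector{Float64})
    @inbounds for i in eachindex(frame)
        out[i] = 2.0 * frame[i]   # placeholder per-pixel operation
    end
    return out
end

frame = rand(1_000_000)
out   = similar(frame)     # allocated once, reused for every frame
process!(out, frame)       # allocation-free after the first (compiling) call
```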

By recommending FLIF, I've actually recommended using C++ indirectly, and 
reusing C++ (or C) code isn't bad, all things equal. It's just that for new 
code I recommend against C for many reasons, such as safety, and against 
C++ as it's a complex language, too easy to "blow your whole leg off" with, 
to quote its designer.. and in both cases there are better languages, with 
some rare exceptions that do not apply here (except one reason, reusing 
existing code, that MAY apply here).

I believe lossy compression such as JPEG (even MPEG etc., at least on 
average) has a consistent performance profile.

Re: [julia-users] errors using packages in parallel

2016-06-03 Thread Ethan Anderes


I still get an error (see below). Even if it did work, it would still be 
strange that one would need a different syntax when the workers are on my 
local machine vs. connected to servers through an ssh tunnel. 

julia> addprocs(
   machines,
   tunnel=true,
   dir="/home/anderes/",
   exename="/usr/local/bin/julia",
   topology=:master_slave,
   )
2-element Array{Int64,1}:
 2
 3

julia> @everywhere using  Dierckx
WARNING: node state is inconsistent: node 2 failed to load cache from 
/Users/ethananderes/.julia/lib/v0.4/Dierckx.ji
WARNING: node state is inconsistent: node 3 failed to load cache from 
/Users/ethananderes/.julia/lib/v0.4/Dierckx.ji
WARNING: deserialization checks failed while attempting to load cache from 
/Users/ethananderes/.julia/lib/v0.4/Dierckx.ji
WARNING: deserialization checks failed while attempting to load cache from 
/Users/ethananderes/.julia/lib/v0.4/Dierckx.ji
WARNING: deserialization checks failed while attempting to load cache from 
/Users/ethananderes/.julia/lib/v0.4/Compat.ji
WARNING: deserialization checks failed while attempting to load cache from 
/Users/ethananderes/.julia/lib/v0.4/Compat.ji
ERROR: On worker 2:
LoadError: InitError: Dierckx not properly installed. Run Pkg.build("Dierckx")
 in __init__ at /Users/ethananderes/.julia/v0.4/Dierckx/src/Dierckx.jl:27
 in include_string at loading.jl:282
 in include_from_node1 at ./loading.jl:323
 in require at ./loading.jl:259
 in eval at ./sysimg.jl:14
 in anonymous at multi.jl:1394
 in anonymous at multi.jl:923
 in run_work_thunk at multi.jl:661
 [inlined code] from multi.jl:923
 in anonymous at task.jl:63
during initialization of module Dierckx
while loading /Users/ethananderes/.julia/v0.4/Dierckx/src/Dierckx.jl, in 
expression starting on line 714
 in remotecall_fetch at multi.jl:747
 in remotecall_fetch at multi.jl:750
 in anonymous at multi.jl:1396

...and 1 other exceptions.

 in sync_end at ./task.jl:413
 in anonymous at multi.jl:1405

Just to convince you that it’s not a problem with Dierckx on the remote 
machine… everything works fine (even without @everywhere before using 
Dierckx) when the master node is on the server (rather than on my laptop):

$ ssh xxx...@xxx.xxx.edu

(xxx)-~$ julia
   _
   _   _ _(_)_ |  A fresh approach to technical computing
  (_) | (_) (_)|  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.4.6-pre+37 (2016-05-27 22:56 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit 430601c (6 days old release-0.4)
|__/   |  x86_64-redhat-linux

julia> addprocs(2, topology=:master_slave)
2-element Array{Int64,1}:
 2
 3

julia> using  Dierckx

julia> @everywhere spl = Dierckx.Spline1D([1., 2., 3.], [1., 2., 3.], k=2)

julia>


I did find this old issue on github which seems to have a similar error (
https://github.com/JuliaLang/julia/issues/12381). Should I file an issue, 
or do you think it’s a problem on my end?





On Friday, June 3, 2016 at 7:31:35 AM UTC-7, Isaiah wrote:

> Try `@everywhere using Dierckx`


Re: [julia-users] Is Julia slow with large arrays?

2016-06-03 Thread Lutfullah Tomak
I think it is on by default in julia 0.5, but it does not help. Though it 
produces some simd instructions for moving memory, 
it still uses scalar float instructions for this loop in the @simd version 
(similar for just @inbounds, and for julia -O on 0.4):

movsd (%r15,%r11,8), %xmm0# xmm0 = mem[0],zero
movsd (%rbx,%r11,8), %xmm2# xmm2 = mem[0],zero
subsd (%rax), %xmm0
mulsd %xmm0, %xmm0
addsd %xmm1, %xmm0
subsd 8(%rax), %xmm2
mulsd %xmm2, %xmm2
addsd %xmm0, %xmm2


On Friday, June 3, 2016 at 6:15:50 PM UTC+3, Stefan Karpinski wrote:
>
> Julia also has an `-O3` option – you could try that.
>
> On Fri, Jun 3, 2016 at 10:54 AM, Angel de Vicente  > wrote:
>
>> Lutfullah Tomak > writes:
>> > It may be because ifort uses proper simd instructions. Eriks's 
>> suggestion for @simd
>> > does not use simd instructions in my laptop.
>>
>> I don't see any improvement in Julia by using @simd either.
>>
>> On the other hand, vectorization with the Intel compiler definitely
>> helps, but even with vectorization off, somehow it manages to go faster
>>
>> [angelv@duna TESTS]$ ifort -no-vec -O3 -o test_ifort test.F90
>> [angelv@duna TESTS]$ ./test_ifort
>>   0.065636 seconds
>>9363171.53179644
>> [angelv@duna TESTS]$
>>
>> --
>> Ángel de Vicente
>> http://www.iac.es/galeria/angelv/
>>
>
>

Re: [julia-users] Handling signals/ctrl-c in Julia

2016-06-03 Thread Yichao Yu
On Fri, Jun 3, 2016 at 4:51 AM, Ulf Worsoe  wrote:
> I am developing Mosek.jl.
>
> That library works by creating a task object, adding data to it and calling
> a solve function. When a user in interactive mode hits ctrl-c, calls to
> native functions are terminated, and that leaves the task object in an
> inconsistent state, meaning that even calling the destructor may cause it to
> segfault. This is why I am trying to take over the ctrl-c signal. I do
> something like this
> (https://github.com/JuliaOpt/Mosek.jl/blob/sigint/src/msk8_functions.jl#L2291-L2308):
> msk_global_break = false
> function msk_global_sigint_handler(sig::Int32)
>println("msk_global_sigint_handler")
>global msk_global_break = true
>nothing
> end
>
> function optimize(task_:: MSKtask)
>  trmcode_ = Array(Int32,(1,))
>  SIGINT=2
>  old_sighandler =
> ccall((:signal,"libc"),Ptr{Void},(Cint,Ptr{Void}),SIGINT,cfunction(msk_global_sigint_handler,
> Void, (Cint,)))
>  res = @msk_ccall(
> "optimizetrm",Int32,(Ptr{Void},Ptr{Int32}),task_.task,trmcode_)
>  ccall((:signal,"libc"),Void,(Cint,Ptr{Void}),SIGINT,old_sighandler)
>  global msk_global_break
>  if msk_global_break
>  global msk_global_break = false
>  throw(InterruptException())
>  end
>
>   if res != MSK_RES_OK
>msg = getlasterror(task_)
>throw(MosekError(res,msg))
>  end
>  (convert(Int32,trmcode_[1]))
> end
>
> During long calls the msk_global_break flag is checked, and if set, the
> operation terminates in a controlled manner.

This will not work on 0.5 at least; not sure about 0.4.
SIGINT is blocked on all threads, and there is always more than one
thread, one of them unmanaged and unable to run any julia code.
You should be able to just use `disable_sigint` to do this on both 0.4
and 0.5. It is not necessary on 0.5 if the ccall doesn't call back into
julia.
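A minimal sketch of the `disable_sigint` approach suggested above (the `sleep` ccall is a stand-in of mine for the long-running Mosek solve call):

```julia
# Ctrl-C delivery is deferred while the do-block runs, so the native
# call cannot be interrupted mid-way; any InterruptException is raised
# only after the block returns.
function safe_native_call()
    disable_sigint() do
        # stand-in for the long-running solver ccall
        ccall(:sleep, Cuint, (Cuint,), 0)
    end
end

safe_native_call()   # returns 0 (sleep's return value)
```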

>
> I have seen older answers to similar questions, saying that this could cause
> problems, but have been told that threading may be handled different now. Is
> this something I can expect to work in Julia 0.4 and/or 0.5? In particular,
> if the native optimizetrm function creates multiple threads, will this still
> work?


Re: [julia-users] ^ operator cannot compute correctly on "modified" range

2016-06-03 Thread Stefan Karpinski
This problem boils down to this:

julia> (-10.0)^2.2
ERROR: DomainError:
 in nan_dom_err at ./math.jl:134 [inlined]
 in ^(::Float64, ::Float64) at ./math.jl:293
 in eval(::Module, ::Any) at ./boot.jl:225
 in macro expansion at ./REPL.jl:92 [inlined]
 in (::Base.REPL.##1#2{Base.REPL.REPLBackend})() at ./event.jl:46


The problem is that you're raising a negative value to a fractional power,
which has a complex result, but the power function gives real results for
real values, so that result cannot be represented. If you convert the
argument to complex first, it works:

julia> Complex(-10.0)^2.2
128.22055269702062 + 93.15768449873806im


Applying this to your original problem:

julia> map(Complex,bb).^2.2
100-element Array{Complex{Float64},1}:
  128.221+93.1577im
  101.693+73.8843im
  78.4794+57.0186im
 ⋮
  18490.2+0.0im
  18961.0+0.0im
  19438.3+0.0im


On Fri, Jun 3, 2016 at 5:25 AM, Technet  wrote:

> Look at this code:
>
> x = 0.1:100
> m = 10.1
> bb = x-m
> display(typeof(x)) # FloatRange{Float64}
> display(typeof(bb)) # FloatRange{Float64}
>
> # x.^2.1 #correctly done
> bb.^2.2 # error -> why ?
>
>
> The last line throw this error:
> LoadError: DomainError:
> while loading In[13], in expression starting on line 6
>
>  in .^ at range.jl:653
>
> x and bb are the same type of data.
> Why can't the "bb" range be raised to a float exponent?
>


[julia-users] ^ operator cannot compute correctly on "modified" range

2016-06-03 Thread Steven G. Johnson
bb has negative elements, and if you exponentiate these to a fractional 
power you get a complex result. To get a complex result, one of the arguments 
needs to be complex, e.g. try bb.^(2.2+0im)

(Automatically switching to a complex result for negative inputs would be 
type-unstable and kill performance. See the "type stability" section in the 
manual.)
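The type-stability point above can be checked directly (a small example of my own; only the real-vs-complex branching matters):

```julia
x = -10.0

# An integral-valued exponent of a negative base stays real:
x^2.0                      # 100.0, no error

# A fractional exponent of a negative real base throws DomainError...
caught = try; x^2.2; catch e; e; end

# ...while opting into complex arithmetic gives the complex branch:
z = Complex(x)^2.2         # nonzero imaginary part
```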

Re: [julia-users] Is Julia slow with large arrays?

2016-06-03 Thread Stefan Karpinski
Julia also has an `-O3` option – you could try that.

On Fri, Jun 3, 2016 at 10:54 AM, Angel de Vicente <
angel.vicente.garr...@gmail.com> wrote:

> Lutfullah Tomak  writes:
> > It may be because ifort uses proper simd instructions. Eriks's
> suggestion for @simd
> > does not use simd instructions in my laptop.
>
> I don't see any improvement in Julia by using @simd either.
>
> On the other hand, vectorization with the Intel compiler definitely
> helps, but even with vectorization off, somehow it manages to go faster
>
> [angelv@duna TESTS]$ ifort -no-vec -O3 -o test_ifort test.F90
> [angelv@duna TESTS]$ ./test_ifort
>   0.065636 seconds
>9363171.53179644
> [angelv@duna TESTS]$
>
> --
> Ángel de Vicente
> http://www.iac.es/galeria/angelv/
>


[julia-users] ^ operator cannot compute correctly on "modified" range

2016-06-03 Thread Technet
Look at this code:

x = 0.1:100
m = 10.1
bb = x-m
display(typeof(x)) # FloatRange{Float64}
display(typeof(bb)) # FloatRange{Float64}

# x.^2.1 #correctly done
bb.^2.2 # error -> why ? 


The last line throws this error:
LoadError: DomainError:
while loading In[13], in expression starting on line 6

 in .^ at range.jl:653

x and bb are the same type of data.
Why can't the "bb" range be raised to a float exponent?


[julia-users] Handling signals/ctrl-c in Julia

2016-06-03 Thread Ulf Worsoe
I am developing Mosek.jl. 

That library works by creating a task object, adding data to it and calling 
a solve function. When a user in interactive mode hits ctrl-c, calls to 
native functions are terminated, and that leaves the task object in an 
inconsistent state, meaning that even calling the destructor may cause it 
to segfault. This is why I am trying to take over the ctrl-c signal. I do 
something like this 
(https://github.com/JuliaOpt/Mosek.jl/blob/sigint/src/msk8_functions.jl#L2291-L2308):
msk_global_break = false
function msk_global_sigint_handler(sig::Int32)
    println("msk_global_sigint_handler")
    global msk_global_break = true
    nothing
end

function optimize(task_::MSKtask)
    trmcode_ = Array(Int32, (1,))
    SIGINT = 2
    old_sighandler = ccall((:signal, "libc"), Ptr{Void}, (Cint, Ptr{Void}), SIGINT,
                           cfunction(msk_global_sigint_handler, Void, (Cint,)))
    res = @msk_ccall("optimizetrm", Int32, (Ptr{Void}, Ptr{Int32}), task_.task,
                     trmcode_)
    ccall((:signal, "libc"), Void, (Cint, Ptr{Void}), SIGINT, old_sighandler)
    global msk_global_break
    if msk_global_break
        global msk_global_break = false
        throw(InterruptException())
    end

    if res != MSK_RES_OK
        msg = getlasterror(task_)
        throw(MosekError(res, msg))
    end
    (convert(Int32, trmcode_[1]))
end

During long calls the msk_global_break flag is checked, and if set, the 
operation terminates in a controlled manner.

I have seen older answers to similar questions saying that this could 
cause problems, but have been told that threading may be handled differently 
now. Is this something I can expect to work in Julia 0.4 and/or 0.5? In 
particular, if the native optimizetrm function creates multiple threads, 
will this still work?


Re: [julia-users] Is Julia slow with large arrays?

2016-06-03 Thread Angel de Vicente
Lutfullah Tomak  writes:
> It may be because ifort uses proper simd instructions. Erik's suggestion 
> for @simd does not use simd instructions on my laptop.

I don't see any improvement in Julia by using @simd either.

On the other hand, vectorization with the Intel compiler definitely
helps, but even with vectorization off, somehow it manages to go faster

[angelv@duna TESTS]$ ifort -no-vec -O3 -o test_ifort test.F90
[angelv@duna TESTS]$ ./test_ifort 
  0.065636 seconds
   9363171.53179644
[angelv@duna TESTS]$ 

-- 
Ángel de Vicente
http://www.iac.es/galeria/angelv/  


[julia-users] Re: Thanks Julia

2016-06-03 Thread Eric Forgy
The Abstract looks awesome. All-optical processors are something I've been 
looking forward to since nearly going to the University of Arizona to study 
the topic back in 1994 :)

On Friday, June 3, 2016 at 10:37:54 PM UTC+8, M. Zahirul ALAM wrote:
>
> I have started using Julia roughly two summers ago. I have been visiting 
> this forum ever since looking for answers and tips. You guys have been 
> incredible. I have used Julia for 80 percent of the calculations in the 
> following paper. Sadly, during the final revision, we had to 
> take the Julia reference (and a few more) out of the paper due to the 
> journal's policy on a maximum number of references. But a big thanks is in 
> order.   THANK YOU ALL !!
>
> Here is a link: http://science.sciencemag.org/content/352/6287/795
>


Re: [julia-users] Is Julia slow with large arrays?

2016-06-03 Thread Lutfullah Tomak
It may be because ifort uses proper simd instructions. Erik's suggestion 
for @simd does not use simd instructions on my laptop.
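For reference, the @simd pattern under discussion looks like the sketch below (a generic sum-of-squares loop of mine, not the exact loop from the thread):

```julia
# @simd asks LLVM to vectorize the loop; @inbounds removes the bounds
# checks that would otherwise block vectorization.
function sumsq(a::Vector{Float64})
    s = 0.0
    @inbounds @simd for i in eachindex(a)
        s += a[i] * a[i]
    end
    return s
end

sumsq(ones(10))   # 10.0
```

Whether the compiler actually emits packed instructions can be checked with `@code_native sumsq(ones(10))`, as done above.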


[julia-users] Thanks Julia

2016-06-03 Thread M. Zahirul ALAM
I have started using Julia roughly two summers ago. I have been visiting 
this forum ever since looking for answers and tips. You guys have been 
incredible. I have used Julia for 80 percent of the calculations in the 
following paper. Sadly, during the final revision, we had to 
take the Julia reference (and a few more) out of the paper due to the 
journal's policy on a maximum number of references. But a big thanks is in 
order.   THANK YOU ALL !!

Here is a link: http://science.sciencemag.org/content/352/6287/795


Re: [julia-users] errors using packages in parallel

2016-06-03 Thread Isaiah Norton
Try `@everywhere using Dierckx`

On Thu, Jun 2, 2016 at 3:49 PM, Ethan Anderes 
wrote:

> I’m looking for help setting up a parallel job spread across different
> servers. I would like to use my laptop as the master node. I’m getting
> errors when using packages and I’m not sure what I’m doing wrong. Any
> help would be appreciated
>
>_
>_   _ _(_)_ |  A fresh approach to technical computing
>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>_ _   _| |_  __ _   |  Type "?help" for help.
>   | | | | | | |/ _` |  |
>   | | |_| | | | (_| |  |  Version 0.4.6-pre+36 (2016-05-19 19:11 UTC)
>  _/ |\__'_|_|_|\__'_|  |  Commit 1e3e941 (14 days old release-0.4)
> |__/   |  x86_64-apple-darwin15.5.0
>
> julia> machines = ["ande...@xxx.xxx.edu", "ande...@yyy.yyy.edu"]
> 2-element Array{ASCIIString,1}:
>  "ande...@xxx.xxx.edu"
>  "ande...@yyy.yyy.edu"
>
> julia> addprocs(
>machines,
>tunnel=true,
>dir="/home/anderes/",
>exename="/usr/local/bin/julia",
>topology=:master_slave,
>)
> 2-element Array{Int64,1}:
>  2
>  3
>
> julia> using  Dierckx
> WARNING: node state is inconsistent: node 2 failed to load cache from 
> /Users/ethananderes/.julia/lib/v0.4/Dierckx.ji
> WARNING: node state is inconsistent: node 3 failed to load cache from 
> /Users/ethananderes/.julia/lib/v0.4/Dierckx.ji
>
> julia> @everywhere spl = Dierckx.Spline1D([1., 2., 3.], [1., 2., 3.], k=2)
> ERROR: On worker 2:
> ERROR: On worker 2:
> UndefVarError: Dierckx not defined
>  in eval at ./sysimg.jl:14
>  in anonymous at multi.jl:1394
>  in anonymous at multi.jl:923
>  in run_work_thunk at multi.jl:661
>  [inlined code] from multi.jl:923
>  in anonymous at task.jl:63
>  in remotecall_fetch at multi.jl:747
>  in remotecall_fetch at multi.jl:750
>  in anonymous at multi.jl:1396
>
> ...and 1 other exceptions.
>
>  in sync_end at ./task.jl:413
>  in anonymous at multi.jl:1405
>
> Note: I’ve got Dierckx installed and working on the remote servers (not
> even sure if that is needed). Also, as you can see below, I can get the
> code to run fine if I have the workers on my local machine.
>
> julia> addprocs(2, topology=:master_slave)
> 2-element Array{Int64,1}:
>  2
>  3
>
> julia> using  Dierckx
>
> julia> @everywhere spl = Dierckx.Spline1D([1., 2., 3.], [1., 2., 3.], k=2)
>
> Thanks!
> ​
>


Re: [julia-users] Re: Private Forks of Julia Repositories

2016-06-03 Thread Chris Rackauckas
I did something like this and it worked. I have private repos for github 
(student) so I used that. I made a private repo, made a branch in my local 
git repo, pushed that to the new private repo, used this 

 to 
set the default remotes properly, and can merge/rebase between them on my 
own local repo. After setting this up, it even seems to work with the GUI 
which is nice. Thanks for the idea!

On Friday, June 3, 2016 at 12:55:38 AM UTC-7, Mauro wrote:
>
> On Fri, 2016-06-03 at 08:45, Chris Rackauckas  > wrote: 
> > I can keep a private local branch, but then it's only on one computer 
> and I 
> > can't develop/test on any other computer (/hpc). 
>
> You can have several remotes for one repository.  So you can have your 
> private branch on a different remote.  Easiest is to just clone the 
> package on github and set that up as a remote.  If you need actual 
> privacy then you need to pay github to make the fork a private repo. 
> Alternatively use bitbucket.org to host it, they have free private 
> repos. 
>
> I think something like this should work with bitbucket (untested): 
>
> First, on bitbucket create a private repo. 
>
> git clone g...@github.com:/xuser/YPkg.jl.git YPkg-myfork 
> cd YPkg-myfork 
>
> git remote rename origin upstream  # upstream is the traditional name 
>
> git remote add origin g...@bitbucket.org:myusername/ypkg.git # origin is 
> traditional for where you usually push to 
>
> git push origin # pushes the repo to bitbucket 
>
> I would then keep master in sync with upstream (git fetch upstream; git 
> rebase upstream master) and do my personal changes on a feature branch. 
>
> Once the time comes for a pull request, also fork the package repo on 
> github.  Add the github-fork as a remote: 
>
> git remote add github ... 
>
> git checkout my-feature-branch 
>
> git push github 
>
> Make PR on github. 
>
> I hope this helps. 
>
> > On Thursday, June 2, 2016 at 11:18:09 PM UTC-7, Mauro wrote: 
> >> 
> >> On Fri, 2016-06-03 at 07:58, Chris Rackauckas  >> > wrote: 
> >> > I think I will need both versions available, since the majority of 
> the 
> >> work 
> >> > is public, while the private work will tend to sit around longer 
> (i.e. 
> >> > waiting to hear back from reviewers). So I'd want to be able to 
> easily 
> >> work 
> >> > with the public repository, but basically switch over to a private 
> >> branch 
> >> > every once in awhile. 
> >> 
> >> Well, then just checkout the branch you need at the time, easy. 
> >> 
> >> Alternatively, have two different folders for the two branches and set 
> >> the LOAD_PATH depending on what you want to do.  If the REQUIREments 
> are 
> >> different, in particular, different versions, it might be more tricky. 
> >> I think then you'd need two ~/.julia/v0.* folders. 
> >> 
> >> > On Thursday, June 2, 2016 at 9:21:13 PM UTC-7, Curtis Vogt wrote: 
> >> >> 
> >> >> If you don't need to have both versions of the package available at 
> the 
> >> >> same time then I would recommend using a single Git repo with 
> multiple 
> >> >> remotes. With this setup you can push to your private remote for 
> >> >> experiments and later push to the public remote when your ready to 
> >> share 
> >> >> your work. 
> >> >> 
> >> >> Some reading material on Git remotes: 
> >> >> https://git-scm.com/book/en/v2/Git-Basics-Working-with-Remotes 
> >> >> 
> >> 
>


Re: [julia-users] Is Julia slow with large arrays?

2016-06-03 Thread Angel de Vicente
Hi,

Mosè Giordano  writes:
> I'm working on a Fortran77 code, but I'd much prefer to translate it
> to Julia. However, I've the impression that when working with large
> arrays (of the order of tens of thousands elements) Julia is way
> slower than the equivalent Fortran code.

I'm late into the conversation, but I wanted to have a go at
this. First, I rewrote the Fortran77 code to use modern Fortran, so I
end up with:

JULIA:
,
| function foo()
|  array1 = rand(70, 1000)
|  array2 = rand(70, 1000)
|  array3 = rand(2, 70, 20, 20)
|  bar = 0.0
|  @time @inbounds for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
| bar = bar +
| (array1[i, l] - array3[1, i, j, k])^2 +
| (array2[i, l] - array3[2, i, j, k])^2
|  end
| 
|  println("$bar")
| 
| end
| foo()
`

FORTRAN:
,
| PROGRAM main
|   IMPLICIT NONE
| 
|   DOUBLE PRECISION, DIMENSION(70,1000) :: array1, array2
|   DOUBLE PRECISION, DIMENSION(2,70,20,20) :: array3
|   DOUBLE PRECISION :: bar, start, finish
|   INTEGER :: i,j,k,l
| 
|   CALL RANDOM_NUMBER(array1) ; CALL RANDOM_NUMBER(array2) ; CALL  
RANDOM_NUMBER(array3)
| 
|   bar = 0.0D0
| 
|   CALL CPU_TIME(start)
|   DO CONCURRENT (l=1:1000,k=1:20,j=1:20,i=1:70)
|  bar = bar + (array1(i,l) - array3(1,i,j,k))**2 + &
|   (array2(i,l) - array3(2,i,j,k))**2
|   END DO
| 
|   CALL CPU_TIME(finish)
|   WRITE(*,'(f10.6,a)') finish-start, " seconds"
|   PRINT*, bar
| END PROGRAM main
`

The timings on my laptop for the Julia code and the Fortran code
compiled with gfortran are basically identical, but on a machine where I
have access to the Intel compiler, Fortran again takes the lead (though I
understand that it is not a very fair comparison). [Fortran code
compiled with -O3].


[angelv@duna TESTS]$ julia -f test.jl
elapsed time: 0.085587272 seconds (0 bytes allocated)
9.35894273977078e6

[angelv@duna TESTS]$ ./test_gfortran 
  0.131018 seconds
   9316684.8268513940

[angelv@duna TESTS]$ ./test_ifort 
  0.032021 seconds
   9363171.53179595
[angelv@duna TESTS]$ 

-- 
Ángel de Vicente
http://www.iac.es/galeria/angelv/  


Re: [julia-users] Is the master algorithm on the roadmap?

2016-06-03 Thread Kevin Liu
I plan to write Julia for the rest of my life... given it remains suitable. 
I am still reading all of Colah's material on nets. I ran Mocha.jl a couple 
weeks ago and was very happy to see it work. Thanks for jumping in and 
telling me about OnlineAI.jl, I will look into it once I am ready. From a 
quick look, perhaps I could help and learn by building a very clear 
documentation of it. Would really like to see Julia a leap ahead of other 
languages, and plan to contribute heavily to it, but at the moment am still 
getting introduced to CS, programming, and nets at the basic level. 

On Friday, June 3, 2016 at 10:48:15 AM UTC-3, Tom Breloff wrote:
>
> Kevin: computers that program themselves is a concept which is much closer 
> to reality than most would believe, but julia-users isn't really the best 
> place for this speculation. If you're actually interested in writing code, 
> I'm happy to discuss in OnlineAI.jl. I was thinking about how we might 
> tackle code generation using a neural framework I'm working on. 
>
> On Friday, June 3, 2016, Kevin Liu > wrote:
>
>> If Andrew Ng who cited Gates, and Gates who cited Domingos (who did not 
>> lecture at Google with a TensorFlow question in the end), were unsuccessful 
>> penny traders, Julia was a language for web design, and the tribes in the 
>> video didn't actually solve problems, perhaps this would be a wildly 
>> off-topic, speculative discussion. But these statements couldn't be farther 
>> from the truth. In fact, if I had known about this video some months ago I 
>> would've understood better on how to solve a problem I was working on.  
>>
>> For the founders of Julia: I understand your tribe is mainly CS. This 
>> master algorithm, as you are aware, would require collaboration with other 
>> tribes. Just citing the obvious. 
>>
>> On Friday, June 3, 2016 at 10:21:25 AM UTC-3, Kevin Liu wrote:
>>>
>>> There could be parts missing as Domingos mentions, but induction, 
>>> backpropagation, genetic programming, probabilistic inference, and SVMs 
>>> working together-- what's speculative about the improved versions of these? 
>>>
>>> Julia was made for AI. Isn't it time for a consolidated view on how to 
>>> reach it? 
>>>
>>> On Thursday, June 2, 2016 at 11:20:35 PM UTC-3, Isaiah wrote:

 This is not a forum for wildly off-topic, speculative discussion.

 Take this to Reddit, Hacker News, etc.


 On Thu, Jun 2, 2016 at 10:01 PM, Kevin Liu  wrote:

> I am wondering how Julia fits in with the unified tribes
>
> mashable.com/2016/06/01/bill-gates-ai-code-conference/#8VmBFjIiYOqJ
>
> https://www.youtube.com/watch?v=B8J4uefCQMc
>




[julia-users] Re: Arithmetic with TypePar

2016-06-03 Thread Robert DJ
I've been thinking more about your suggestion with Array{Any, D}. The type 
of the entries (Vector{Float64} in my case) is not inferred. Is there a 
way to inform Julia that 'Any' is actually Vector{Float64} for all entries, 
i.e., something like Array{Vector{Float64}, D}?
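That field type can indeed be spelled directly; a minimal sketch of my own (written with `struct`, the current spelling of the 0.4-era `type`):

```julia
# Parametrize the field on the concrete element type, so entries infer
# as Vector{Float64} rather than Any; D is still the dimensionality.
struct Foo{D}
    bar::Array{Vector{Float64}, D}
end

f = Foo(reshape([rand(3) for _ in 1:4], 2, 2))   # D == 2 inferred
eltype(f.bar)    # Vector{Float64}, so element access is type-stable
```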

On Thursday, June 2, 2016 at 12:04:51 AM UTC+2, Cedric St-Jean wrote:
>
> I really doubt that it can be expressed this way, because Julia will do 
> pattern matching/unification on the type of `bar`, and it would have to 
> know that -1 is the inverse of +1 to unify D+1 with the type of the input 
> array. Can you give more context about what you're trying to do? Why can't 
> you have `bar::Array{Any, D}`?
>
> You can also put D inside the constructor
>
> type Foo{E}
> bar::Array{Any, E}
> end
> Foo(D::Int) = Foo(Array{Any, D+1}())
>
> Foo(1)
>
> or use typealias
>
> On Wednesday, June 1, 2016 at 2:56:28 PM UTC-4, Robert DJ wrote:
>>
>> I have a custom type with a TypePar denoting a dimension and would like 
>> to define the following:
>>
>> type Foo{D}
>> bar::Array{D+1}
>> end
>>
>> However, this does not work. As D is only 1 or 2, it would be OK with
>>
>> type Foo{1}
>> bar::Matrix
>> end
>>
>> type Foo{2}
>> bar::Array{3}
>> end
>>
>> but unfortunately this isn't working, either. 
>>
>> Can this problem be solved?
>>
>> Thanks!
>>
>
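For completeness, the typealias suggestion above could look like the following; a sketch only (Julia 0.4-era syntax), under the assumption that D is restricted to 1 or 2:

```julia
# Parametrize on the stored array's dimension E = D + 1 directly,
# then give the two D values readable names.
type Foo{E}
    bar::Array{Any, E}
end

typealias Foo1 Foo{2}    # D = 1 stores a matrix
typealias Foo2 Foo{3}    # D = 2 stores a 3-d array
```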

Re: [julia-users] Is the master algorithm on the roadmap?

2016-06-03 Thread Tom Breloff
Kevin: computers that program themselves is a concept which is much closer
to reality than most would believe, but julia-users isn't really the best
place for this speculation. If you're actually interested in writing code,
I'm happy to discuss in OnlineAI.jl. I was thinking about how we might
tackle code generation using a neural framework I'm working on.

On Friday, June 3, 2016, Kevin Liu  wrote:

> If Andrew Ng who cited Gates, and Gates who cited Domingos (who did not
> lecture at Google with a TensorFlow question in the end), were unsuccessful
> penny traders, Julia was a language for web design, and the tribes in the
> video didn't actually solve problems, perhaps this would be a wildly
> off-topic, speculative discussion. But these statements couldn't be farther
> from the truth. In fact, if I had known about this video some months ago I
> would've understood better how to solve a problem I was working on.
>
> For the founders of Julia: I understand your tribe is mainly CS. This
> master algorithm, as you are aware, would require collaboration with other
> tribes. Just citing the obvious.
>
> On Friday, June 3, 2016 at 10:21:25 AM UTC-3, Kevin Liu wrote:
>>
>> There could be parts missing as Domingos mentions, but induction,
>> backpropagation, genetic programming, probabilistic inference, and SVMs
>> working together-- what's speculative about the improved versions of these?
>>
>> Julia was made for AI. Isn't it time for a consolidated view on how to
>> reach it?
>>
>> On Thursday, June 2, 2016 at 11:20:35 PM UTC-3, Isaiah wrote:
>>>
>>> This is not a forum for wildly off-topic, speculative discussion.
>>>
>>> Take this to Reddit, Hacker News, etc.
>>>
>>>
>>> On Thu, Jun 2, 2016 at 10:01 PM, Kevin Liu  wrote:
>>>
 I am wondering how Julia fits in with the unified tribes

 mashable.com/2016/06/01/bill-gates-ai-code-conference/#8VmBFjIiYOqJ

 https://www.youtube.com/watch?v=B8J4uefCQMc

>>>
>>>


Re: [julia-users] Is the master algorithm on the roadmap?

2016-06-03 Thread Kevin Liu
Correction: Founders: tribe is mainly of Symbolists?

On Friday, June 3, 2016 at 10:36:01 AM UTC-3, Kevin Liu wrote:
>
> If Andrew Ng who cited Gates, and Gates who cited Domingos (who did not 
> lecture at Google with a TensorFlow question in the end), were unsuccessful 
> penny traders, Julia was a language for web design, and the tribes in the 
> video didn't actually solve problems, perhaps this would be a wildly 
> off-topic, speculative discussion. But these statements couldn't be farther 
> from the truth. In fact, if I had known about this video some months ago I 
> would've understood better how to solve a problem I was working on.  
>
> For the founders of Julia: I understand your tribe is mainly CS. This 
> master algorithm, as you are aware, would require collaboration with other 
> tribes. Just citing the obvious. 
>
> On Friday, June 3, 2016 at 10:21:25 AM UTC-3, Kevin Liu wrote:
>>
>> There could be parts missing as Domingos mentions, but induction, 
>> backpropagation, genetic programming, probabilistic inference, and SVMs 
>> working together-- what's speculative about the improved versions of these? 
>>
>> Julia was made for AI. Isn't it time for a consolidated view on how to 
>> reach it? 
>>
>> On Thursday, June 2, 2016 at 11:20:35 PM UTC-3, Isaiah wrote:
>>>
>>> This is not a forum for wildly off-topic, speculative discussion.
>>>
>>> Take this to Reddit, Hacker News, etc.
>>>
>>>
>>> On Thu, Jun 2, 2016 at 10:01 PM, Kevin Liu  wrote:
>>>
 I am wondering how Julia fits in with the unified tribes

 mashable.com/2016/06/01/bill-gates-ai-code-conference/#8VmBFjIiYOqJ

 https://www.youtube.com/watch?v=B8J4uefCQMc

>>>
>>>

Re: [julia-users] Is the master algorithm on the roadmap?

2016-06-03 Thread Kevin Liu
If Andrew Ng who cited Gates, and Gates who cited Domingos (who did not 
lecture at Google with a TensorFlow question in the end), were unsuccessful 
penny traders, Julia was a language for web design, and the tribes in the 
video didn't actually solve problems, perhaps this would be a wildly 
off-topic, speculative discussion. But these statements couldn't be farther 
from the truth. In fact, if I had known about this video some months ago I 
would've understood better how to solve a problem I was working on.  

For the founders of Julia: I understand your tribe is mainly CS. This 
master algorithm, as you are aware, would require collaboration with other 
tribes. Just citing the obvious. 

On Friday, June 3, 2016 at 10:21:25 AM UTC-3, Kevin Liu wrote:
>
> There could be parts missing as Domingos mentions, but induction, 
> backpropagation, genetic programming, probabilistic inference, and SVMs 
> working together-- what's speculative about the improved versions of these? 
>
> Julia was made for AI. Isn't it time for a consolidated view on how to 
> reach it? 
>
> On Thursday, June 2, 2016 at 11:20:35 PM UTC-3, Isaiah wrote:
>>
>> This is not a forum for wildly off-topic, speculative discussion.
>>
>> Take this to Reddit, Hacker News, etc.
>>
>>
>> On Thu, Jun 2, 2016 at 10:01 PM, Kevin Liu  wrote:
>>
>>> I am wondering how Julia fits in with the unified tribes
>>>
>>> mashable.com/2016/06/01/bill-gates-ai-code-conference/#8VmBFjIiYOqJ
>>>
>>> https://www.youtube.com/watch?v=B8J4uefCQMc
>>>
>>
>>

Re: [julia-users] Is the master algorithm on the roadmap?

2016-06-03 Thread Kevin Liu
There could be parts missing as Domingos mentions, but induction, 
backpropagation, genetic programming, probabilistic inference, and SVMs 
working together-- what's speculative about the improved versions of these? 

Julia was made for AI. Isn't it time for a consolidated view on how to 
reach it? 

On Thursday, June 2, 2016 at 11:20:35 PM UTC-3, Isaiah wrote:
>
> This is not a forum for wildly off-topic, speculative discussion.
>
> Take this to Reddit, Hacker News, etc.
>
>
> On Thu, Jun 2, 2016 at 10:01 PM, Kevin Liu wrote:
>
>> I am wondering how Julia fits in with the unified tribes
>>
>> mashable.com/2016/06/01/bill-gates-ai-code-conference/#8VmBFjIiYOqJ
>>
>> https://www.youtube.com/watch?v=B8J4uefCQMc
>>
>
>

[julia-users] Re: How to build a range of -Inf to Inf

2016-06-03 Thread colintbowers
Very true. For boring reasons, I actually prefer it that way for my own 
work (I want errors if the types don't exactly match as it means other 
parts of my code are doing something unexpected - I like cheap redundant 
error checks). But I agree that for general use it should work as you 
suggest.

Cheers,

Colin

On Friday, 3 June 2016 18:22:21 UTC+10, Jutho wrote:
>
> Looks ok, but I think you could generically have less restricted types in 
> your functions: e.g. 
> 4 in BasicInterval(3.1,4.9)
> won't work, nor will you be able to construct an interval 
> BasicInterval(3,4.5)



Re: [julia-users] Re: Private Forks of Julia Repositories

2016-06-03 Thread Curtis Vogt
Maybe Playground.jl will fit your use case. It works similarly to Python's 
virtualenv, allowing multiple versions of the same package to be 
installed at the same time.

 https://github.com/Rory-Finnegan/Playground.jl

[julia-users] Re: How to produce the ≉ symbol?

2016-06-03 Thread jw3126
Thanks, guys it works! 

On Friday, June 3, 2016 at 12:16:00 PM UTC+2, Avik Sengupta wrote:
>
> The canonical list of completions is in the latex_symbols.jl file within 
> the julia source
>
> https://github.com/JuliaLang/julia/blob/master/base/latex_symbols.jl#L546
>
> On Friday, 3 June 2016 10:45:16 UTC+1, Pablo Zubieta wrote:
>>
>> Use `\napprox`
>>
>

[julia-users] Re: How to produce the ≉ symbol?

2016-06-03 Thread Avik Sengupta
The canonical list of completions is in the latex_symbols.jl file within 
the julia source

https://github.com/JuliaLang/julia/blob/master/base/latex_symbols.jl#L546

On Friday, 3 June 2016 10:45:16 UTC+1, Pablo Zubieta wrote:
>
> Use `\napprox`
>


[julia-users] Re: How to produce the ≉ symbol?

2016-06-03 Thread Pablo Zubieta
Use `\napprox`
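A small illustration of what `\napprox` + TAB produces; a hedged sketch, assuming a Julia version where both `≈` (isapprox) and its negation `≉` are defined:

```julia
# ≉ is the negation of the isapprox operator ≈.
a = 1.0
b = 1.0 + 1e-3
a ≈ b  # false: 1e-3 is far above the default relative tolerance
a ≉ b  # true
```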


[julia-users] Re: How to produce the ≉ symbol?

2016-06-03 Thread jw3126
Okay, thanks. I tried

`\approx + tab + go 1 back + \not + tab`

It gives me the same thing as

`\not + tab + \approx + tab`

on Ubuntu with both Atom and Jupyter. It gives me something that looks 
roughly, but not exactly, like ≉. I think it is two symbols on top of each 
other, a slash thingy and ≈. Anyway, Julia does not like it:

`
LoadError: syntax: invalid character "̸"
while loading In[38], in expression starting on line 1
`



On Friday, June 3, 2016 at 10:16:21 AM UTC+2, Jutho wrote:
>
> (on Mac): what seems to work is to first use \approx + tab to get ≈, and 
> then go back one character and type \not + tab. If you first use \not + 
> tab, and then start typing \approx, the not acts only on the \ . If you 
> write \not\approx + tab, the \not is not being substituted.



[julia-users] Re: How to build a range of -Inf to Inf

2016-06-03 Thread Jutho
Looks ok, but I think you could generically have less restricted types in your 
functions: e.g. 
4 in BasicInterval(3.1,4.9)
won't work, nor will you be able to construct an interval BasicInterval(3,4.5)
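One way to lift both restrictions mentioned above; a hedged sketch (Julia 0.4-era syntax; `BasicInterval` follows the name used in the thread, the field names are assumptions), which promotes mixed argument types and accepts any Real in containment tests:

```julia
immutable BasicInterval{T<:Real}
    lo::T
    hi::T
end

# Outer constructor: promote mixed types so BasicInterval(3, 4.5) works.
BasicInterval(lo, hi) = BasicInterval(promote(lo, hi)...)

# Containment against any Real, so `4 in BasicInterval(3.1, 4.9)` works.
Base.in(x::Real, i::BasicInterval) = i.lo <= x <= i.hi

4 in BasicInterval(3.1, 4.9)   # true: an Int tested against a Float64 interval
BasicInterval(3, 4.5)          # promotes to BasicInterval{Float64}
```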

[julia-users] How to produce the ≉ symbol?

2016-06-03 Thread Jutho
(on Mac): what seems to work is to first use \approx + tab to get ≈, and then 
go back one character and type \not + tab. If you first use \not + tab, and 
then start typing \approx, the not acts only on the \ . If you write 
\not\approx + tab, the \not is not being substituted.

[julia-users] Uniform syntax

2016-06-03 Thread Ford Ox
I think this deserves its own topic.

*You* should post here syntax that looks like a duplicate, or that you 
think could reuse an already existing keyword (mark with *# identical *or *# 
replacement*).
Rule of thumb: *the less special syntax, the better*.

# identical
# replace ' , ' with ' ; ' in arrays ?
[1, 2, 3, 4]
[1; 2; 3; 4]



# identical
# replace ' = ' with ' in ' in for loops ?
for i = 1:10
for i in 1:10




# replacement
# replace ' -> ' with ' = ' in anonymous functions ?
(a, b, c) -> a + b + c
(a, b, c) = a + b + c
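One caveat on the last pair: in Julia as it stands, these two spellings are not interchangeable. A small illustration of current behavior (not of the proposal):

```julia
# ' -> ' builds an anonymous function; ' = ' is the short form of a
# *named* function definition, so the two are distinct syntaxes today.
g = (a, b, c) -> a + b + c   # anonymous function, bound to g
h(a, b, c) = a + b + c       # short-form named definition

g(1, 2, 3) == h(1, 2, 3)     # both evaluate to 6
```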


Re: [julia-users] Re: Private Forks of Julia Repositories

2016-06-03 Thread Mauro
On Fri, 2016-06-03 at 08:45, Chris Rackauckas  wrote:
> I can keep a private local branch, but then it's only on one computer and I
> can't develop/test on any other computer (/hpc).

You can have several remotes for one repository.  So you can have your
private branch on a different remote.  Easiest is to just clone the
package on github and set that up as a remote.  If you need actual
privacy then you need to pay github to make the fork a private repo.
Alternatively use bitbucket.org to host it, they have free private
repos.

I think something like this should work with bitbucket (untested):

First, on bitbucket create a private repo.

git clone git@github.com:xuser/YPkg.jl.git YPkg-myfork
cd YPkg-myfork

git remote rename origin upstream  # upstream is the traditional name

git remote add origin git@bitbucket.org:myusername/ypkg.git # origin is 
traditional for where you usually push to

git push origin # pushes the repo to Bitbucket

I would then keep master in sync with upstream (git fetch upstream; git
rebase upstream master) and do my personal changes on a feature branch.

Once the time comes for a pull request, also fork the package repo on
github.  Add the github-fork as a remote:

git remote add github ...

git checkout my-feature-branch

git push github

Make PR on github.

I hope this helps.

> On Thursday, June 2, 2016 at 11:18:09 PM UTC-7, Mauro wrote:
>>
>> On Fri, 2016-06-03 at 07:58, Chris Rackauckas wrote:
>> > I think I will need both versions available, since the majority of the
>> work
>> > is public, while the private work will tend to sit around longer (i.e.
>> > waiting to hear back from reviewers). So I'd want to be able to easily
>> work
>> > with the public repository, but basically switch over to a private
>> branch
>> > every once in awhile.
>>
>> Well, then just checkout the branch you need at the time, easy.
>>
>> Alternatively, have two different folders for the two branches and set
>> the LOAD_PATH depending on what you want to do.  If the REQUIREments are
>> different, in particular, different versions, it might be more tricky.
>> I think then you'd need two ~/.julia/v0.* folders.
>>
>> > On Thursday, June 2, 2016 at 9:21:13 PM UTC-7, Curtis Vogt wrote:
>> >>
>> >> If you don't need to have both versions of the package available at the
>> >> same time then I would recommend using a single Git repo with multiple
>> >> remotes. With this setup you can push to your private remote for
>> >> experiments and later push to the public remote when you're ready to
>> share
>> >> your work.
>> >>
>> >> Some reading material on Git remotes:
>> >> https://git-scm.com/book/en/v2/Git-Basics-Working-with-Remotes
>> >>
>>


[julia-users] How to produce the ≉ symbol?

2016-06-03 Thread jw3126
How do I produce the `≉` character in Julia? I tried detexify and it told 
me `\not\isapprox`, but that does not work. More generally, if I see some 
cool UTF-8 character, how do I find out its LaTeX name?