[julia-users] Re: Avoiding building LLVM with every Julia release

2015-10-14 Thread Tomas Lycken
I have multiple git trees for my Julia installations, one for each "major" 
(i.e. semver minor) version that I'm interested in (currently running 0.3.x 
and 0.4.x, but not worrying about 0.5-dev until it stabilizes a little...). 
This works well; I only need to rebuild large dependencies when they 
actually update on one of the branches, which isn't very often, although 
it's admittedly a bit space-consuming.
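A minimal sketch of that setup, assuming one clone per minor version and the 
usual release-0.x branch names:

git clone https://github.com/JuliaLang/julia.git julia-0.3
cd julia-0.3 && git checkout release-0.3 && make

git clone https://github.com/JuliaLang/julia.git julia-0.4
cd ../julia-0.4 && git checkout release-0.4 && make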

// T

On Wednesday, October 14, 2015 at 4:54:16 PM UTC+2, milktrader wrote:
>
> I like this solution and I've been using git from the beginning until I 
> decided I needed to have multiple versions of Julia around at the same 
> time, with the ability to open each version whenever I choose.
>
> My still rough implementation of this is to rename *julia* to other names 
> based on version numbers
>
> [julia (master)] 
> ✈  chefs
>
> 0.3.11  .  kevin
> 0.4-rc1 .  wanda
> 0.4-rc4 .  frida
> 0.4-0   .  julius
>
> So this is straightforward to do the hard way (which is how I'm doing it 
> now) by simply building each julia version from scratch.
>
> How would this work with using git versus tar files?
>
> On Wednesday, October 14, 2015 at 10:37:06 AM UTC-4, Tero Frondelius wrote:
>>
>> git clone https://github.com/JuliaLang/julia.git
>> cd julia
>> make
>>
>>
>> after update in julia folder:
>> git fetch
>> git branch v0.4.1
>> make
>>
>>
>> Maybe some fine tuning in commands, but basically drop the method of 
>> downloading tar and start using git. 
>>
>>
>>
>> On Wednesday, October 14, 2015 at 5:21:36 PM UTC+3, milktrader wrote:
>>>
>>> I'm downloading full tar files for each new Julia version and of course 
>>> it comes with LLVM. I'd like to avoid building LLVM every single time and 
>>> have it compiled once, and available for all the new Julia releases (that 
>>> use that LLVM version of course).
>>>
>>> Any pointers?
>>>
>>> Dan
>>>
>>

Re: [julia-users] Re: Avoiding building LLVM with every Julia release

2015-10-14 Thread milktrader
Thanks for the Make.user suggestion. I have an idea on how to make that 
work. Presumably (and this is not part of the original post) you 
can put linear algebra libraries somewhere too?
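For reference, a hedged sketch of what such a Make.user might look like; the 
USE_SYSTEM_BLAS/USE_SYSTEM_LAPACK names are assumptions on my part, so check 
deps/Makefile for the exact flags your Julia version supports:

USE_SYSTEM_LLVM=1
LLVM_CONFIG=/path/to/llvm-config
USE_SYSTEM_BLAS=1
USE_SYSTEM_LAPACK=1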


On Wednesday, October 14, 2015 at 11:11:56 AM UTC-4, Isaiah wrote:
>
> It should be possible to use an existing LLVM build by setting the 
> following in Make.user:
>
> USE_SYSTEM_LLVM=1
> LLVM_CONFIG=/path/to/llvm-config
>
> If you build LLVM manually you will need to be sure to apply the patches 
> specified in `deps/Makefile`.
>
> On Wed, Oct 14, 2015 at 10:54 AM, milktrader  > wrote:
>
>> I like this solution and I've been using git from the beginning until I 
>> decided I needed to have multiple versions of Julia around at the same 
>> time, with the ability to open each version whenever I choose.
>>
>> My still rough implementation of this is to rename *julia* to other names 
>> based on version numbers
>>
>> [julia (master)] 
>> ✈  chefs
>>
>> 0.3.11  .  kevin
>> 0.4-rc1 .  wanda
>> 0.4-rc4 .  frida
>> 0.4-0   .  julius
>>
>> So this is straightforward to do the hard way (which is how I'm doing it 
>> now) by simply building each julia version from scratch.
>>
>> How would this work with using git versus tar files?
>>
>> On Wednesday, October 14, 2015 at 10:37:06 AM UTC-4, Tero Frondelius 
>> wrote:
>>>
>>> git clone https://github.com/JuliaLang/julia.git
>>> cd julia
>>> make
>>>
>>>
>>> after update in julia folder:
>>> git fetch
>>> git branch v0.4.1
>>> make
>>>
>>>
>>> Maybe some fine tuning in commands, but basically drop the method of 
>>> downloading tar and start using git. 
>>>
>>>
>>>
>>> On Wednesday, October 14, 2015 at 5:21:36 PM UTC+3, milktrader wrote:

 I'm downloading full tar files for each new Julia version and of course 
 it comes with LLVM. I'd like to avoid building LLVM every single time and 
 have it compiled once, and available for all the new Julia releases (that 
 use that LLVM version of course).

 Any pointers?

 Dan

>>>
>

Re: [julia-users] Re: Markdown.parse question

2015-10-14 Thread j verzani
I found that I can get double dollar signs with a minor change:

$$~
latex code here
~$$

A bit hacky, but \[ and \] don't work as expected, $$\n doesn't work, I can 
search and replace easily, and I can't bring myself to have any text after 
my opening $$. Consistency with Pandoc would be a good thing. I'm not so 
crazy about `` as it takes some work to distinguish from ```, though I'm 
sure I could get used to it.

On Wednesday, October 14, 2015 at 10:53:38 AM UTC-4, Stefan Karpinski wrote:
>
> Scholarly Markdown uses double backticks for inline math:
>
> http://scholarlymarkdown.com/Scholarly-Markdown-Guide.html#math
>
> Has two nice properties:
>
>1. doesn't conflict with our string interpolation syntax;
>2. gracefully degrades to code markup in other Markdowns (e.g. GitHub)
>
>
> On Wed, Oct 14, 2015 at 7:34 PM, Steven G. Johnson  > wrote:
>
>> I wish it would use the same equation syntax as pandoc and Jupyter. You 
>> need a darn good reason to be different from the dominant implementation of 
>> equations in Markdown.
>>
>> (And "$$ is deprecated in LaTeX is not a good enough reason. Markdown 
>> isn't LaTeX.)
>
>
>

Re: [julia-users] julia newb seeks critique of a "Defstruct" clone macro

2015-10-14 Thread Mauro
> You might consider supporting full type declaration syntax, i.e.:
>
> @defwithvals type emp
>   age=0
>   salary=1
> end
>
> (Though perhaps this is what Mauro's Parameters.jl package does?)

Yes (and extra bits)
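For reference, a hedged sketch of the Parameters.jl usage (the @with_kw macro 
and 0.4-era `type` syntax are assumed here; see the package README for the 
authoritative form):

using Parameters

@with_kw type Emp
    age::Int = 0
    salary::Int = 1
end

Emp()            # Emp(0, 1)
Emp(salary = 2)  # keyword overrides: Emp(0, 2)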


[julia-users] Re: Avoiding building LLVM with every Julia release

2015-10-14 Thread Tero Frondelius
git clone https://github.com/JuliaLang/julia.git
cd julia
make


after update in julia folder:
git fetch
git branch v0.4.1
make


Maybe some fine tuning in commands, but basically drop the method of 
downloading tar and start using git. 
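The fine tuning would presumably be checking out the release tag rather than 
creating a branch with that name; a minimal sketch (tag name taken from the 
commands above):

git fetch --tags
git checkout v0.4.1
make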



On Wednesday, October 14, 2015 at 5:21:36 PM UTC+3, milktrader wrote:
>
> I'm downloading full tar files for each new Julia version and of course it 
> comes with LLVM. I'd like to avoid building LLVM every single time and have 
> it compiled once, and available for all the new Julia releases (that use 
> that LLVM version of course).
>
> Any pointers?
>
> Dan
>


[julia-users] Re: Function push! adds objects where it should not

2015-10-14 Thread Tomas Lycken


I’m a bit surprised that this code was working in 0.3.11

There’s actually no way that code worked exactly as is under 0.3.11, since 
the Tuple{...} syntax and the AbstractString type, to name a couple of 
things, weren’t introduced until 0.4. So you probably changed something 
else too, when you ported it :)

(In fact, I copied the code you posted into a file and tried to include it 
in Julia 0.3.11, and immediately got ERROR: AbstractString not defined…)
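A minimal sketch of the underlying behaviour, just for illustration: fill! 
reuses the single array you pass it, while a comprehension builds a fresh 
array per slot.

X = Array(Vector{Int}, 5); fill!(X, Int[])   # every slot holds the *same* empty array
push!(X[1], 11)                              # so all five slots now show [11]

Y = [Int[] for i in 1:5]                     # five independent arrays
push!(Y[1], 11)                              # only Y[1] becomes [11]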

// T

On Wednesday, October 14, 2015 at 4:32:11 PM UTC+2, Josh Langsfeld wrote:

There must have been some other subtle change you made when converting to 
> 0.4. I see the same fill! behavior in 0.3.
>
> julia> VERSION
> v"0.3.11"
>
> julia> X = Array(Vector{Int},5); fill!(X, Int[]);
>
> julia> push!(X[1], 11); @show X;
> X => [[11],[11],[11],[11],[11]]
>
>
> On Tuesday, October 13, 2015 at 6:05:07 PM UTC-4, ami...@gmail.com wrote:
>>
>> Thank you to both of you, this makes sense. However I'm a bit surprised 
>> that this code was working in 0.3.11, I'm certain of that, so the fill! 
>> function was using different objects each time? I'd have thought that such 
>> an important thing as evaluation of function arguments would not change 
>> from one version to another. But maybe I missed something?
>>
> ​


Re: [julia-users] Re: Markdown.parse question

2015-10-14 Thread Stefan Karpinski
Scholarly Markdown uses double backticks for inline math:

http://scholarlymarkdown.com/Scholarly-Markdown-Guide.html#math

Has two nice properties:

   1. doesn't conflict with our string interpolation syntax;
   2. gracefully degrades to code markup in other Markdowns (e.g. GitHub)


On Wed, Oct 14, 2015 at 7:34 PM, Steven G. Johnson 
wrote:

> I wish it would use the same equation syntax as pandoc and Jupyter. You
> need a darn good reason to be different from the dominant implementation of
> equations in Markdown.
>
> (And "$$ is deprecated in LaTeX is not a good enough reason. Markdown
> isn't LaTeX.)


[julia-users] Re: Avoiding building LLVM with every Julia release

2015-10-14 Thread milktrader
I like this solution and I've been using git from the beginning until I 
decided I needed to have multiple versions of Julia around at the same 
time, with the ability to open each version whenever I choose.

My still rough implementation of this is to rename *julia* to other names 
based on version numbers

[julia (master)] 
✈  chefs

0.3.11  .  kevin
0.4-rc1 .  wanda
0.4-rc4 .  frida
0.4-0   .  julius

So this is straightforward to do the hard way (which is how I'm doing it 
now) by simply building each julia version from scratch.

How would this work with using git versus tar files?

On Wednesday, October 14, 2015 at 10:37:06 AM UTC-4, Tero Frondelius wrote:
>
> git clone https://github.com/JuliaLang/julia.git
> cd julia
> make
>
>
> after update in julia folder:
> git fetch
> git branch v0.4.1
> make
>
>
> Maybe some fine tuning in commands, but basically drop the method of 
> downloading tar and start using git. 
>
>
>
> On Wednesday, October 14, 2015 at 5:21:36 PM UTC+3, milktrader wrote:
>>
>> I'm downloading full tar files for each new Julia version and of course 
>> it comes with LLVM. I'd like to avoid building LLVM every single time and 
>> have it compiled once, and available for all the new Julia releases (that 
>> use that LLVM version of course).
>>
>> Any pointers?
>>
>> Dan
>>
>

Re: [julia-users] Strange performance problem with Float32*Bool inside @simd loop

2015-10-14 Thread Yichao Yu
On Wed, Oct 14, 2015 at 10:57 AM, Damien  wrote:
> Hi all,
>
> I'm noticing a strange performance issue with expressions such as this one:
>
> n = 10
> a = zeros(Float32, n)
> b = rand(Float32, n)
> c = rand(Float32, n)
>
> function test(a, b, c)
> @simd for i in 1:length(a)
> @inbounds a[i] += b[i] * c[i] * (c[i] < b[i]) * (c[i] > b[i]) *
> (c[i] <= b[i]) * (c[i] >= b[i])
> end
> end
>
>
> The problem depends on the number of statements in the expression and
> whether the comparisons are explicitly cast to Float32.
>
> In Julia 0.4-rc4, I get the following:
> @inbounds a[i] += b[i] * c[i] * (c[i] < b[i]) * (c[i] > b[i]) *
> (c[i] <= b[i]) * (c[i] >= b[i])
>
>> test(a, b, c)
>> @time test(a, b, c)
>
> 0.000143 seconds (4 allocations: 160 bytes)
>
>
>
>
> @inbounds a[i] += b[i] * (c[i] < b[i]) * (c[i] < b[i]) * (c[i] < b[i])
>
>> test(a, b, c)
>> @time test(a, b, c)
> 0.04 seconds (4 allocations: 160 bytes)
>
>
> Four or more, loop is NOT vectorised: @inbounds a[i] += b[i] * (c[i] < b[i])
> * (c[i] < b[i]) * (c[i] < b[i]) * (c[i] < b[i])
>
>
>> test(a, b, c)
>> @time test(a, b, c)
> 0.21 seconds (204 allocations: 3.281 KB)
>
>
> Explicit casts, loop is vectorised again: @inbounds a[i] += b[i] *
> Float32(c[i] < b[i]) * Float32(c[i] < b[i]) * Float32(c[i] < b[i]) *
> Float32(c[i] < b[i])
>
>> test(a, b, c)
>> @time test(a, b, c)
>
> 0.03 seconds (4 allocations: 160 bytes)
>
>
>
> Julia Version 0.5.0-dev+769
> Commit d9f7c21* (2015-10-14 12:03 UTC)
> Platform Info:
>   System: Darwin (x86_64-apple-darwin13.4.0)
>   CPU: Intel(R) Core(TM) i7-2635QM CPU @ 2.00GHz
>   WORD_SIZE: 64
>   BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Sandybridge)
>   LAPACK: libopenblas
>   LIBM: libopenlibm
>   LLVM: libLLVM-3.3
>

The inlining is a little too fragile, so you should check with
@code_llvm whether all the functions are inlined.
I've also noticed that the SHA you gave doesn't seem to be a valid
commit on JuliaLang/julia, so I couldn't check whether the inlining fix is
included.
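A minimal sketch of that check, assuming the arrays from the original post are
in scope:

test(a, b, c)              # compile first
@code_llvm test(a, b, c)   # then inspect the generated IR for vector instructions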

>
>
>


[julia-users] Re: Function push! adds objects where it should not

2015-10-14 Thread Josh Langsfeld
There must have been some other subtle change you made when converting to 
0.4. I see the same fill! behavior in 0.3.

julia> VERSION
v"0.3.11"

julia> X = Array(Vector{Int},5); fill!(X, Int[]);

julia> push!(X[1], 11); @show X;
X => [[11],[11],[11],[11],[11]]


On Tuesday, October 13, 2015 at 6:05:07 PM UTC-4, ami...@gmail.com wrote:
>
> Thank you to both of you, this makes sense. However I'm a bit surprised 
> that this code was working in 0.3.11, I'm certain of that, so the fill! 
> function was using different objects each time? I'd have thought that such 
> an important thing as evaluation of function arguments would not change 
> from one version to another. But maybe I missed something?
>


[julia-users] Strange performance problem with expression length inside @simd loop

2015-10-14 Thread Damien


Hi all,

I'm noticing a strange performance issue with expressions such as this one:

n = 10
a = zeros(Float32, n)
b = rand(Float32, n)
c = rand(Float32, n)

function test(a, b, c)
   @simd for i in 1:length(a)
   @inbounds a[i] += b[i] * c[i] * (c[i] < b[i]) * (c[i] > b[i]) * 
(c[i] <= b[i]) * (c[i] >= b[i])
   end
end

The problem is that performance and successful vectorisation depend on the 
number of comparison statements in the expression and whether the 
comparisons are explicitly cast to Float32.

In Julia 0.4-rc4, I get the following:

@inbounds a[i] += b[i] * c[i] * (c[i] < b[i]) * (c[i] > b[i]) * (c[i] <= 
b[i])
> test(a, b, c)
> @time test(a, b, c) 
0.000169 seconds (4 allocations: 160 bytes)

@inbounds a[i] += b[i] * c[i] * (c[i] < b[i]) * (c[i] > b[i]) * (c[i] <= 
b[i]) * (c[i] >= b[i])
> test(a, b, c)
> @time test(a, b, c)
0.007258 seconds (200.00 k allocations: 3.052 MB, 47.59% gc time)

@inbounds a[i] += b[i] * c[i] * Float32(c[i] < b[i]) * Float32(c[i] > b[i]) 
* Float32(c[i] <= b[i]) * Float32(c[i] <= b[i])
> test(a, b, c)
> @time test(a, b, c)
0.000137 seconds (4 allocations: 160 bytes)

I get a similar behavior in the current 0.5 HEAD (Commit d9f7c21* with the 
fix for issue #13553) but the threshold for the number of comparisons is 
slightly different.

(a) Is it meant to be OK to use expressions like a[i] * (c[i] < b[i]), or 
should I always cast explicitly (see the sketch below)? I really like the 
implicit version, because it is very readable and a natural translation of 
equations involving cases.

(b) What is causing the vectorisation threshold observed here?

Best,
Damien
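For what it's worth, a hedged sketch of two explicit forms of that loop body 
(illustrations only; the ifelse variant is an alternative added here, not 
something from the post):

# inside the @simd loop from test() above
@inbounds a[i] += b[i] * c[i] * Float32(c[i] < b[i]) * Float32(c[i] > b[i]) *
                  Float32(c[i] <= b[i]) * Float32(c[i] >= b[i])

# or: branch-free selection instead of Bool arithmetic
@inbounds a[i] += ifelse((c[i] < b[i]) & (c[i] > b[i]) & (c[i] <= b[i]) & (c[i] >= b[i]),
                         b[i] * c[i], 0.0f0)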



[julia-users] Re: Which public key is Julia 0.4.0 Pkg.Git trying to use on Windows?

2015-10-14 Thread Tony Kelman
I'm not sure. When you run in git-bash, is the environment variable HOME 
set? If you set it in Julia, does it change anything?
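A minimal sketch of that check from the Julia prompt (the path below is only 
an illustration):

julia> get(ENV, "HOME", "not set")

julia> ENV["HOME"] = "C:\\Users\\Tomas Lycken"   # hypothetical value; then retry Pkg.update()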


On Wednesday, October 14, 2015 at 2:10:48 AM UTC-7, Tomas Lycken wrote:
>
> Thanks! That was a useful pointer, and it got me someways down the road, 
> but I still see really weird things...
>
> Without me (knowingly) changing anything, I went into the Julia install 
> folder, into Git, and double-clicked git-bash.cmd. That opened a bash 
> shell, in which I could cd to e.g. the METADATA.jl package directory and do 
> git pull. It asked me to accept the server's fingerprint, but otherwise 
> didn't complain. git pull worked then without error.
>
> After doing this, I started a new Julia 0.4.0 instance, and now 
> Pkg.update() works - once. The second time, it borked on a couple of 
> repositories which I have forked, but where my fork is not the "main 
> source", with the message "error: could not fetch tlycken" (which is the 
> name of my fork in the output of `git remote -v`; I still have a remote 
> called origin). Manually going into those package directories using 
> git-bash, and manually saying `git fetch tlycken`, completes without error.
>
> If Julia is using the same git as the git-bash from Julia's installation 
> folder, why can one fetch without problem, while the other is denied 
> permission?
>
> // T
>
>
> On Wednesday, October 14, 2015 at 10:34:00 AM UTC+2, Tony Kelman wrote:
>>
>> The Win and Mac binaries bundle their own git, rather than relying on 
>> having it manually installed and on the path. Check the Git folder under 
>> the Julia install, run the git-bash there to try getting keys working.
>
>

[julia-users] CUDArt main example does not run smoothly

2015-10-14 Thread Joaquim Masset Lacombe Dias Garcia
I could not run the main example of CUDArt from 
https://github.com/JuliaGPU/CUDArt.jl

The code and error follow below.

running:
#MyCudaModule previously defined as 
in: https://github.com/JuliaGPU/CUDArt.jl
using CUDArt, MyCudaModule

A = rand(10,5)

result = devices(dev->capability(dev)[1]>=2) do devlist
MyCudaModule.init(devlist) do
function1(CudaArray(A))
end
end

error output:

wrong number of arguments
while loading In[9], in expression starting on line 5

 in anonymous at In[9]:7
 in init at In[8]:27
 in anonymous at In[9]:6
 in devices at C:\Users\joaquimgarcia\.julia\v0.3\CUDArt\src\device.jl:57



I tried in both Julia 0.3 and 0.4.

I also tried Pkg.test("CUDArt") and all tests passed.



[julia-users] CUDArt example

2015-10-14 Thread Joaquim Masset Lacombe Dias Garcia
Hi, I am trying to use the CUDArt library but I could not run the example in 
the git repository.
I ran Pkg.test("CUDArt") and it seems to be installed correctly.

Does anyone have a simple example? Something like summing two vectors with 
vadd.cu (from the CUDA.jl git repository) would be awesome!

Thanks!

Re: [julia-users] julia newb seeks critique of a "Defstruct" clone macro

2015-10-14 Thread Isaiah Norton
>
> Q1) where to find julia macro tutorials (beyond the standard manual)?


I'm not aware of any tutorials, but there were several JuliaCon talks that
are related to some of the nittier details if you are interested [1]. If
anything in the manual is unclear or could use more explanation, please do
make suggestions. (we're trying to make it accessible for people with no
macro background as well as advanced Lispers)

Q2) ... improvements


You might consider supporting full type declaration syntax, i.e.:

@defwithvals type emp
  age=0
  salary=1
end

(Though perhaps this is what Mauro's Parameters.jl package does?)

Eventually this will be supported without needing a macro, see:
https://github.com/JuliaLang/julia/issues/10146

[1]
https://www.youtube.com/watch?v=RYZkHudRTvI&list=PLP8iPy9hna6Sdx4soiGrSefrmOPdUWixM&index=16
https://www.youtube.com/watch?v=xUP3cSKb8sI&list=PLP8iPy9hna6Sdx4soiGrSefrmOPdUWixM&index=62
https://www.youtube.com/watch?v=KAN8zbM659o&list=PLP8iPy9hna6Sdx4soiGrSefrmOPdUWixM&index=55


On Tue, Oct 13, 2015 at 5:18 PM, Tim Menzies  wrote:

> this is day3 of julia so pardon dumb questions
>
> i had some trouble  finding tutorials on julia macros. i've read the
> textbooks but i suspect there is somewhere else to look (i say that since i
> would have thought that the following would exist in standard Julia, but i
> could not find it). so, two questions
>
> Q1) where to find julia macro tutorials (beyond the standard manual)?
>
> Q2) here's my first julia macro to do something like LISP's defstruct
> where i can define a type and its init contents all at one go. comments?
> traps for the unwary? improvements?
>
> # e.g. @def emp age=0 salary=1
>
> # autocreates a constructor emp0 that returns and emp
>
> # initialized with 0,1
>
> macro has(typename, pairs...)
> name = esc(symbol(string(typename,0)))
> tmp  = esc(symbol("tmp"))
> ones = [x.args[1] for x in pairs]
> twos = [x.args[2] for x in pairs]
> :(type $(typename)
>  $(ones...)
>   end;
>   function $(name)()
>  $(typename)($(twos...))
>   end)
> end
>
>
> thanks!
> tiim menzies
>
>


Re: [julia-users] Strange performance problem with Float32*Bool inside @simd loop

2015-10-14 Thread Damien
Thanks for your answer!
I deleted the post you quoted and re-posted a complete version because I 
posted it prematurely by accident, sorry about that.
The commit in question is d9f7c2125831a16c2386888904f303846a1ced95

Best,
Damien

On Wednesday, 14 October 2015 17:09:52 UTC+2, Yichao Yu wrote:
>
> On Wed, Oct 14, 2015 at 10:57 AM, Damien  
> wrote: 
> > Hi all, 
> > 
> > I'm noticing a strange performance issue with expressions such as this 
> one: 
> > 
> > n = 10 
> > a = zeros(Float32, n) 
> > b = rand(Float32, n) 
> > c = rand(Float32, n) 
> > 
> > function test(a, b, c) 
> > @simd for i in 1:length(a) 
> > @inbounds a[i] += b[i] * c[i] * (c[i] < b[i]) * (c[i] > b[i]) * 
> > (c[i] <= b[i]) * (c[i] >= b[i]) 
> > end 
> > end 
> > 
> > 
> > The problem depends on the number of statements in the expression and 
> > whether the comparisons are explicitly cast to Float32. 
> > 
> > In Julia 0.4-rc4, I get the following: 
> > @inbounds a[i] += b[i] * c[i] * (c[i] < b[i]) * (c[i] > b[i]) * 
> > (c[i] <= b[i]) * (c[i] >= b[i]) 
> > 
> >> test(a, b, c) 
> >> @time test(a, b, c) 
> > 
> > 0.000143 seconds (4 allocations: 160 bytes) 
> > 
> > 
> > 
> > 
> > @inbounds a[i] += b[i] * (c[i] < b[i]) * (c[i] < b[i]) * (c[i] < b[i]) 
> > 
> >> test(a, b, c) 
> >> @time test(a, b, c) 
> > 0.04 seconds (4 allocations: 160 bytes) 
> > 
> > 
> > Four or more, loop is NOT vectorised: @inbounds a[i] += b[i] * (c[i] < 
> b[i]) 
> > * (c[i] < b[i]) * (c[i] < b[i]) * (c[i] < b[i]) 
> > 
> > 
> >> test(a, b, c) 
> >> @time test(a, b, c) 
> > 0.21 seconds (204 allocations: 3.281 KB) 
> > 
> > 
> > Explicit casts, loop is vectorised again: @inbounds a[i] += b[i] * 
> > Float32(c[i] < b[i]) * Float32(c[i] < b[i]) * Float32(c[i] < b[i]) * 
> > Float32(c[i] < b[i]) 
> > 
> >> test(a, b, c) 
> >> @time test(a, b, c) 
> > 
> > 0.03 seconds (4 allocations: 160 bytes) 
> > 
> > 
> > 
> > Julia Version 0.5.0-dev+769 
> > Commit d9f7c21* (2015-10-14 12:03 UTC) 
> > Platform Info: 
> >   System: Darwin (x86_64-apple-darwin13.4.0) 
> >   CPU: Intel(R) Core(TM) i7-2635QM CPU @ 2.00GHz 
> >   WORD_SIZE: 64 
> >   BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Sandybridge) 
> >   LAPACK: libopenblas 
> >   LIBM: libopenlibm 
> >   LLVM: libLLVM-3.3 
> > 
>
> The inlining is a little too fragile, so you should check with 
> @code_llvm whether all the functions are inlined. 
> I've also noticed that the SHA you gave doesn't seem to be a valid 
> commit on JuliaLang/julia, so I couldn't check whether the inlining fix is 
> included. 
>
> > 
> > 
> > 
>


[julia-users] copy!() vs element-by-element copying

2015-10-14 Thread Alan Crawford
I have written the following test to see the difference between copying 
element-by-element and using copy!(x,y):

function test1(x,y)
for i in eachindex(x)
@inbounds x[i] = y[i]
end
end

function test2(x,y)
copy!(x,y)
end

function test()
NumObs = 1000
x = rand(NumObs)
y = rand(NumObs)
test1(x,y)
test2(x,y)
println("Test 1") 
@time(for z in 1:1e5 test1(x,y) end)
println("Test 2")
@time(for z in 1:1e5 test2(x,y) end)
end
test()


I get the following timings:

Test 1

  0.031750 seconds

Test 2

  0.009360 seconds

So it seems copy!() is quite a bit faster ... 

I ran this test because I would like to copy an element of a vector y into 
an element of vector x, but x and y are not the same everywhere - so i 
can’t do copy!(x,y). Moreover, since copy!() only works on arrays, I can’t 
do copy!(x[i],y[i]). 

So my question is whether there is a way to get the speed of copy!(), but 
for copying a Float to a Float? It seems like I am probably missing something 
fairly simple...

Thanks
Alan 
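For a single element, plain indexing is already as fast as it gets, and for 
contiguous blocks there is, I believe, a range form of copy!; a hedged sketch, 
assuming the five-argument method copy!(dest, dstart, src, sstart, n) is 
available in this Julia version:

x[i] = y[j]            # one Float64: plain assignment, nothing faster to reach for
copy!(x, i, y, j, n)   # copy y[j:j+n-1] into x[i:i+n-1] without a temporary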



Re: [julia-users] copy!() vs element-by-element copying

2015-10-14 Thread Stefan Karpinski
copy! uses memmove, which uses hand-coded assembly for optimal data-copy
performance.

On Wed, Oct 14, 2015 at 9:08 PM, Alan Crawford 
wrote:

> I have written the following test to see the difference between copying
> element-by-element and using copy!(x,y):
>
> function test1(x,y)
> for i in eachindex(x)
> @inbounds x[i] = y[i]
> end
> end
>
> function test2(x,y)
> copy!(x,y)
> end
>
> function test()
> NumObs = 1000
> x = rand(NumObs)
> y = rand(NumObs)
> test1(x,y)
> test2(x,y)
> println("Test 1")
> @time(for z in 1:1e5 test1(x,y) end)
> println("Test 2")
> @time(for z in 1:1e5 test2(x,y) end)
> end
> test()
>
>
> I get the following timings:
>
> Test 1
>
>   0.031750 seconds
>
> Test 2
>
>   0.009360 seconds
>
> So it seems copy!() is quite a bit faster ...
>
> I ran this test because I would like to copy an element of a vector y into
> an element of vector x, but x and y are not the same everywhere - so i
> can’t do copy!(x,y). Moreover, since copy!() only works on arrays, I can’t
> do copy!(x[i],y[i]).
>
> So my question is whether there is a way to get the speed of copy!() , but
> for copying a Float to a Float? Seems like i am probably missing something
> fairly simple...
>
> Thanks
> Alan
>
>


[julia-users] Avoiding building LLVM with every Julia release

2015-10-14 Thread milktrader
I'm downloading full tar files for each new Julia version and of course it 
comes with LLVM. I'd like to avoid building LLVM every single time and have 
it compiled once, and available for all the new Julia releases (that use 
that LLVM version of course).

Any pointers?

Dan


Re: [julia-users] Re: Avoiding building LLVM with every Julia release

2015-10-14 Thread Isaiah Norton
It should be possible to use an existing LLVM build by setting the
following in Make.user:

USE_SYSTEM_LLVM=1
LLVM_CONFIG=/path/to/llvm-config

If you build LLVM manually you will need to be sure to apply the patches
specified in `deps/Makefile`.

On Wed, Oct 14, 2015 at 10:54 AM, milktrader  wrote:

> I like this solution and I've been using git from the beginning until I
> decided I needed to have multiple versions of Julia around at the same
> time, with the ability to open each version whenever I choose.
>
> My still rough implementation of this is to rename *julia* to other names
> based on version numbers
>
> [julia (master)]
> ✈  chefs
>
> 0.3.11  .  kevin
> 0.4-rc1 .  wanda
> 0.4-rc4 .  frida
> 0.4-0   .  julius
>
> So this is straightforward to do the hard way (which is how I'm doing it
> now) by simply building each julia version from scratch.
>
> How would this work with using git versus tar files?
>
> On Wednesday, October 14, 2015 at 10:37:06 AM UTC-4, Tero Frondelius wrote:
>>
>> git clone https://github.com/JuliaLang/julia.git
>> cd julia
>> make
>>
>>
>> after update in julia folder:
>> git fetch
>> git branch v0.4.1
>> make
>>
>>
>> Maybe some fine tuning in commands, but basically drop the method of
>> downloading tar and start using git.
>>
>>
>>
>> On Wednesday, October 14, 2015 at 5:21:36 PM UTC+3, milktrader wrote:
>>>
>>> I'm downloading full tar files for each new Julia version and of course
>>> it comes with LLVM. I'd like to avoid building LLVM every single time and
>>> have it compiled once, and available for all the new Julia releases (that
>>> use that LLVM version of course).
>>>
>>> Any pointers?
>>>
>>> Dan
>>>
>>


[julia-users] Strange performance problem with Float32*Bool inside @simd loop

2015-10-14 Thread Damien
Hi all,

I'm noticing a strange performance issue with expressions such as this one:

n = 10
a = zeros(Float32, n)
b = rand(Float32, n)
c = rand(Float32, n)

function test(a, b, c)
@simd for i in 1:length(a)
@inbounds a[i] += b[i] * c[i] * (c[i] < b[i]) * (c[i] > b[i]) * 
(c[i] <= b[i]) * (c[i] >= b[i])
end
end


The problem depends on the number of statements in the expression and 
whether the comparisons are explicitly cast to Float32.

In Julia 0.4-rc4, I get the following:
@inbounds a[i] += b[i] * c[i] * (c[i] < b[i]) * (c[i] > b[i]) * 
(c[i] <= b[i]) * (c[i] >= b[i])

> test(a, b, c)
> @time test(a, b, c)

0.000143 seconds (4 allocations: 160 bytes)




@inbounds a[i] += b[i] * (c[i] < b[i]) * (c[i] < b[i]) * (c[i] < b[i])

> test(a, b, c)
> @time test(a, b, c)
0.04 seconds (4 allocations: 160 bytes)


Four or more, loop is NOT vectorised: @inbounds a[i] += b[i] * (c[i] < 
b[i]) * (c[i] < b[i]) * (c[i] < b[i]) * (c[i] < b[i])
 

> test(a, b, c)
> @time test(a, b, c)
0.21 seconds (204 allocations: 3.281 KB)


Explicit casts, loop is vectorised again: @inbounds a[i] += b[i] * 
Float32(c[i] < b[i]) * Float32(c[i] < b[i]) * Float32(c[i] < b[i]) * 
Float32(c[i] < b[i])

> test(a, b, c)
> @time test(a, b, c)

0.03 seconds (4 allocations: 160 bytes)



Julia Version 0.5.0-dev+769
Commit d9f7c21* (2015-10-14 12:03 UTC)
Platform Info:
  System: Darwin (x86_64-apple-darwin13.4.0)
  CPU: Intel(R) Core(TM) i7-2635QM CPU @ 2.00GHz
  WORD_SIZE: 64
  BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Sandybridge)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3






[julia-users] ANN: PiMath - Simple arithmetic and nice axis labels in Gadfly when working with multiples of π

2015-10-14 Thread Fabian Gans
Hi all, 

I want to share some code which gives Gadfly nice formatting when values 
are given as multiples of π. The link to the code is here:

https://github.com/meggart/PiMath.jl

and this notebook should be quite self-explanatory:

http://nbviewer.ipython.org/github/meggart/PiMath.jl/blob/1e730ffd12d68cd9ba36ad4df9fc927f84e59a34/NoteBook/PiMath.ipynb

Maybe this is useful to some. 

Fabian


Re: [julia-users] copy!() vs element-by-element copying

2015-10-14 Thread Kristoffer Carlsson
For this element type you should be memory-bound, so the two should perform 
roughly equally. I increased the length of the arrays a bit to reduce noise and got:

Test 1
  0.307454 seconds
Test 2
  0.293684 seconds



On Wednesday, October 14, 2015 at 5:53:58 PM UTC+2, Stefan Karpinski wrote:
>
> Copy uses memmove which uses hand-coded assembly for optimal data copy 
> performance.
>
> On Wed, Oct 14, 2015 at 9:08 PM, Alan Crawford  > wrote:
>
>> I have written the following test to see the difference between copying 
>> element-by-element and using copy!(x,y):
>>
>> function test1(x,y)
>> for i in eachindex(x)
>> @inbounds x[i] = y[i]
>> end
>> end
>>
>> function test2(x,y)
>> copy!(x,y)
>> end
>>
>> function test()
>> NumObs = 1000
>> x = rand(NumObs)
>> y = rand(NumObs)
>> test1(x,y)
>> test2(x,y)
>> println("Test 1") 
>> @time(for z in 1:1e5 test1(x,y) end)
>> println("Test 2")
>> @time(for z in 1:1e5 test2(x,y) end)
>> end
>> test()
>>
>>
>> I get the following timings:
>>
>> Test 1
>>
>>   0.031750 seconds
>>
>> Test 2
>>
>>   0.009360 seconds
>>
>> So it seems copy!() is quite a bit faster ... 
>>
>> I ran this test because I would like to copy an element of a vector y 
>> into an element of vector x, but x and y are not the same everywhere - so i 
>> can’t do copy!(x,y). Moreover, since copy!() only works on arrays, I can’t 
>> do copy!(x[i],y[i]). 
>>
>> So my question is whether there is a way to get the speed of copy!() , 
>> but for copying a Float to a Float? Seems like i am probably missing 
>> something fairly simple...
>>
>> Thanks
>> Alan 
>>
>>
>

Re: [julia-users] Escher image update

2015-10-14 Thread Shashi Gowda
This is browser caching at work

one way to resolve this is to just imread the image instead of loading it
using image(), which just creates the equivalent of an <img> HTML tag...

e.g.

using Images

run(`convert a.jpg  assets/a.jpg`)
imread("assets/a.jpg")


On Wed, Oct 14, 2015 at 3:52 AM, Yakir Gagnon <12.ya...@gmail.com> wrote:

>
>
> I have some widgets that run imagemagick updating images in `assets/`, and
> the only way I can get those to update with `image` is to change their
> names...
> So if I just do this:
> ```julia
> run(`convert a.jpg  assets/a.jpg`)
> image("assets/a.jpg")
> ```
> the image doesn't get updated. But this craziness works:
> ```julia
> name = rand(Int)
> run(`convert a.jpg  assets/$name.jpg`)
> image("assets/$name.jpg")
> ```
> Isn't there a better way?
>


Re: [julia-users] Escher image update

2015-10-14 Thread Shashi Gowda
Sorry the problem is actually not even browser caching... what happens is
Escher thinks there is nothing to update because the URL attribute remains
the same...

imread is a good solution although slower. Another good solution is the one
you yourself gave if you can figure out a way to clean up all the temporary
files it creates.
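A minimal sketch of that cleanup, assuming the generated files all live in 
assets/ and have purely numeric names (both assumptions, not anything Escher 
provides):

name = rand(Int)
run(`convert a.jpg assets/$name.jpg`)
ui = image("assets/$name.jpg")

# remove earlier generated copies, keeping the one just written
for f in readdir("assets")
    if ismatch(r"^-?\d+\.jpg$", f) && f != "$name.jpg"
        rm(joinpath("assets", f))
    end
end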

On Wed, Oct 14, 2015 at 10:19 PM, Shashi Gowda 
wrote:

> This is browser caching at work
>
> one way to resolve this is to just imread the image instead of loading it
> using image(), which just creates the equivalent of an <img> HTML tag...
>
> e.g.
>
> using Images
>
> run(`convert a.jpg  assets/a.jpg`)
> imread("assets/a.jpg")
>
>
> On Wed, Oct 14, 2015 at 3:52 AM, Yakir Gagnon <12.ya...@gmail.com> wrote:
>
>>
>>
>> I have some widgets that run imagemagick updating images in `assets/`,
>> and the only way I can get those to update with `image` is to change their
>> names...
>> So if I just do this:
>> ```julia
>> run(`convert a.jpg  assets/a.jpg`)
>> image("assets/a.jpg")
>> ```
>> the image doesn't get updated. But this craziness works:
>> ```julia
>> name = rand(Int)
>> run(`convert a.jpg  assets/$name.jpg`)
>> image("assets/$name.jpg")
>> ```
>> Isn't there a better way?
>>
>
>


Re: [julia-users] Prevent Escher from evaluating signals on start

2015-10-14 Thread Shashi Gowda
You can pass an init=empty keyword argument to lift / consume, and the
initial value of the signal will be an empty UI. The actual value will be
computed the next time the input signal updates. You should also provide a
typ=Any kwarg to lift / consume so that if you replace empty with something
that is not Empty, you do not get a type conversion error. Something like:

selected_tab = Input(1)
vbox(
tabs(["a", "b"]) >>> selected_tab,
lift(selected_tab, typ=Any, init=empty) do page_no
# compute page at page_no
end
)

may be what you need. To avoid recomputing when you switch to tab 1 and
then to tab 2 and then back to tab 1, you will need to memoize the function
you pass to lift...
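A minimal memoization sketch (the cache and the page_for helper are 
hypothetical, not part of Escher):

const page_cache = Dict{Int,Any}()

function page_for(page_no)
    get!(page_cache, page_no) do
        # compute page at page_no; the expensive work runs only once per tab
    end
end

lift(page_for, selected_tab, typ=Any, init=empty)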


On Wed, Oct 14, 2015 at 6:06 PM, Yakir Gagnon <12.ya...@gmail.com> wrote:

> Title...
> I have a bunch of tabs and it all takes time to load. I don't need the
> functions to evaluate before the user presses on something. Any easy way i
> can prevent all the signals from running their functions when the pages
> load up?


[julia-users] Call by name

2015-10-14 Thread juliatylors
Hi,

I was wondering whether there is a syntax for call-by-name parameter 
passing?

Thanks.


Re: [julia-users] copy!() vs element-by-element copying

2015-10-14 Thread Tim Holy
Make sure you run your function once before you time anything.

--Tim

On Wednesday, October 14, 2015 08:38:55 AM Alan Crawford wrote:
> I have written the following test to see the difference between copying
> element-by-element and using copy!(x,y):
> 
> function test1(x,y)
> for i in eachindex(x)
> @inbounds x[i] = y[i]
> end
> end
> 
> function test2(x,y)
> copy!(x,y)
> end
> 
> function test()
> NumObs = 1000
> x = rand(NumObs)
> y = rand(NumObs)
> test1(x,y)
> test2(x,y)
> println("Test 1")
> @time(for z in 1:1e5 test1(x,y) end)
> println("Test 2")
> @time(for z in 1:1e5 test2(x,y) end)
> end
> test()
> 
> 
> I get the following timings:
> 
> Test 1
> 
>   0.031750 seconds
> 
> Test 2
> 
>   0.009360 seconds
> 
> So it seems copy!() is quite a bit faster ...
> 
> I ran this test because I would like to copy an element of a vector y into
> an element of vector x, but x and y are not the same everywhere - so i
> can’t do copy!(x,y). Moreover, since copy!() only works on arrays, I can’t
> do copy!(x[i],y[i]).
> 
> So my question is whether there is a way to get the speed of copy!() , but
> for copying a Float to a Float? Seems like i am probably missing something
> fairly simple...
> 
> Thanks
> Alan



Re: [julia-users] 900mb csv loading in Julia failed: memory comparison vs python pandas and R

2015-10-14 Thread Grey Marsh
Done with the testing in the cloud instance.
It works and the timings in my case

58.346345 seconds (694.00 M allocations: 12.775 GB, 2.63% gc time)

result of "*top*" command:  VIRT: 11.651g RES: 3.579g

~13 GB of memory for a 900 MB file!
Thanks to Jacob, at least I was able to check that the process works.


On Wednesday, October 14, 2015 at 12:10:02 PM UTC+5:30, bernhard wrote:
>
> Jacob
>
> I do run into the same issue as Grey. the step
> ds = DataStreams.DataTable(f);
> gets stuck.
> I also tried this with a smaller file (150MB) which I have. This file is 
> read by readtable in 15s. But the DataTable function freezes. I use 0.4 on 
> Windows 7.
>
> I note that your code did work on a tiny file though (40 lines or so).
> I do get a dataframe, but when I show it (by simply typing df, or 
> dump(df)) Julia crashes...
>
> Bernhard
>
>
> Am Mittwoch, 14. Oktober 2015 06:54:16 UTC+2 schrieb Grey Marsh:
>>
>> I am using Julia 0.4 for this purpose, if that's what is meant by "0.4 
>> only". 
>>
>> On Wednesday, October 14, 2015 at 9:53:09 AM UTC+5:30, Jacob Quinn wrote:
>>>
>>> Oh yes, I forgot to mention that the CSV/DataStreams code is 0.4 only. 
>>> Definitely interested to hear about any results/experiences though.
>>>
>>> -Jacob
>>>
>>> On Tue, Oct 13, 2015 at 10:11 PM, Yichao Yu  wrote:
>>>
 On Wed, Oct 14, 2015 at 12:02 AM, Grey Marsh  wrote:
 > @Jacob, I tried your approach. Somehow it got stuck in the "@time ds =
 > DataStreams.DataTable(f)" line. After 15 minutes running, julia is 
 using
 > ~500mb and 1 cpu core with no sign of end. The memory use has been 
 almost
 > same for the whole duration of 15 minutes. I'm letting it run, hoping 
 that
 > it finishes after some time.
 >
 > From your run, I can see it needs 12gb memory which is higher than my
 > machine memory of 8gb. could it be the problem?

 12GB is the total number of memory ever allocated during the timing. A
 lot of them might be intermediate results that are freed by the GC.
 Also, from the output of @time, it looks like 0.4.

 >
 > On Wednesday, October 14, 2015 at 2:28:09 AM UTC+5:30, Jacob Quinn 
 wrote:
 >>
 >> I'm hesitant to suggest, but if you're in a bind, I have an 
 experimental
 >> package for fast CSV reading. The API has stabilized somewhat over 
 the last
 >> week and I'm planning a more broad release soon, but I'd still 
 consider it
 >> alpha mode. That said, if anyone's willing to give it a drive, you 
 just need
 >> to
 >>
 >> Pkg.add("Libz")
 >> Pkg.add("NullableArrays")
 >> Pkg.clone("https://github.com/quinnj/DataStreams.jl")
 >> Pkg.clone("https://github.com/quinnj/CSV.jl")
 >>
 >> With the original file referenced here I get:
 >>
 >> julia> reload("CSV")
 >>
 >> julia> f = 
 CSV.Source("/Users/jacobquinn/Downloads/train.csv";null="NA")
 >> CSV.Source: "/Users/jacobquinn/Downloads/train.csv"
 >> delim: ','
 >> quotechar: '"'
 >> escapechar: '\\'
 >> null: "NA"
 >> schema:
 >> 
 DataStreams.Schema(UTF8String["ID","VAR_0001","VAR_0002","VAR_0003","VAR_0004","VAR_0005","VAR_0006","VAR_0007","VAR_0008","VAR_0009"
 >> …
 >> 
 "VAR_1926","VAR_1927","VAR_1928","VAR_1929","VAR_1930","VAR_1931","VAR_1932","VAR_1933","VAR_1934","target"],[Int64,DataStreams.PointerString,Int64,Int64,Int64,DataStreams.PointerString,Int64,Int64,DataStreams.PointerString,DataStreams.PointerString
 >> …
 >> 
 Int64,Int64,Int64,Int64,Int64,Int64,Int64,Int64,DataStreams.PointerString,Int64],145231,1934)
 >> dateformat: Base.Dates.DateFormat(Base.Dates.Slot[],"","english")
 >>
 >>
 >> julia> @time ds = DataStreams.DataTable(f)
 >>  43.513800 seconds (694.00 M allocations: 12.775 GB, 2.55% gc time)
 >>
 >>
 >> You can convert the result to a DataFrame with:
 >>
 >> function DataFrames.DataFrame(dt::DataStreams.DataTable)
 >> cols = dt.schema.cols
 >> data = Array(Any,cols)
 >> types = DataStreams.types(dt)
 >> for i = 1:cols
 >> data[i] = DataStreams.column(dt,i,types[i])
 >> end
 >> return DataFrame(data,Symbol[symbol(x) for x in 
 dt.schema.header])
 >> end
 >>
 >>
 >> -Jacob
 >>
 >> On Tue, Oct 13, 2015 at 2:40 PM, feza  wrote:
 >>>
 >>> Finally was able to load it, but the process   consumes a ton of 
 memory.
 >>> julia> @time train = readtable("./test.csv");
 >>> 124.575362 seconds (376.11 M allocations: 13.438 GB, 10.77% gc time)
 >>>
 >>>
 >>>
 >>> On Tuesday, October 13, 2015 at 4:34:05 PM UTC-4, feza wrote:
 
  Same here on a 12gb ram machine
 
 _
 _   _ _(_)_ |  A fresh approach to technical computing
    (_) | (_) (_)|  

Re: [julia-users] 900mb csv loading in Julia failed: memory comparison vs python pandas and R

2015-10-14 Thread bernhard
Jacob

I do run into the same issue as Grey. the step
ds = DataStreams.DataTable(f);
gets stuck.
I also tried this with a smaller file (150MB) which I have. This file is 
read by readtable in 15s. But the DataTable function freezes. I use 0.4 on 
Windows 7.

I note that your code did work on a tiny file though (40 lines or so).
I do get a dataframe, but when I show it (by simply typing df, or dump(df)) 
Julia crashes...

Bernhard


Am Mittwoch, 14. Oktober 2015 06:54:16 UTC+2 schrieb Grey Marsh:
>
> I am using Julia 0.4 for this purpose, if that's what is meant by "0.4 
> only". 
>
> On Wednesday, October 14, 2015 at 9:53:09 AM UTC+5:30, Jacob Quinn wrote:
>>
>> Oh yes, I forgot to mention that the CSV/DataStreams code is 0.4 only. 
>> Definitely interested to hear about any results/experiences though.
>>
>> -Jacob
>>
>> On Tue, Oct 13, 2015 at 10:11 PM, Yichao Yu  wrote:
>>
>>> On Wed, Oct 14, 2015 at 12:02 AM, Grey Marsh  wrote:
>>> > @Jacob, I tried your approach. Somehow it got stuck in the "@time ds =
>>> > DataStreams.DataTable(f)" line. After 15 minutes running, julia is 
>>> using
>>> > ~500mb and 1 cpu core with no sign of end. The memory use has been 
>>> almost
>>> > same for the whole duration of 15 minutes. I'm letting it run, hoping 
>>> that
>>> > it finishes after some time.
>>> >
>>> > From your run, I can see it needs 12gb memory which is higher than my
>>> > machine memory of 8gb. could it be the problem?
>>>
>>> 12GB is the total number of memory ever allocated during the timing. A
>>> lot of them might be intermediate results that are freed by the GC.
>>> Also, from the output of @time, it looks like 0.4.
>>>
>>> >
>>> > On Wednesday, October 14, 2015 at 2:28:09 AM UTC+5:30, Jacob Quinn 
>>> wrote:
>>> >>
>>> >> I'm hesitant to suggest, but if you're in a bind, I have an 
>>> experimental
>>> >> package for fast CSV reading. The API has stabilized somewhat over 
>>> the last
>>> >> week and I'm planning a more broad release soon, but I'd still 
>>> consider it
>>> >> alpha mode. That said, if anyone's willing to give it a drive, you 
>>> just need
>>> >> to
>>> >>
>>> >> Pkg.add("Libz")
>>> >> Pkg.add("NullableArrays")
>>> >> Pkg.clone("https://github.com/quinnj/DataStreams.jl")
>>> >> Pkg.clone("https://github.com/quinnj/CSV.jl")
>>> >>
>>> >> With the original file referenced here I get:
>>> >>
>>> >> julia> reload("CSV")
>>> >>
>>> >> julia> f = 
>>> CSV.Source("/Users/jacobquinn/Downloads/train.csv";null="NA")
>>> >> CSV.Source: "/Users/jacobquinn/Downloads/train.csv"
>>> >> delim: ','
>>> >> quotechar: '"'
>>> >> escapechar: '\\'
>>> >> null: "NA"
>>> >> schema:
>>> >> 
>>> DataStreams.Schema(UTF8String["ID","VAR_0001","VAR_0002","VAR_0003","VAR_0004","VAR_0005","VAR_0006","VAR_0007","VAR_0008","VAR_0009"
>>> >> …
>>> >> 
>>> "VAR_1926","VAR_1927","VAR_1928","VAR_1929","VAR_1930","VAR_1931","VAR_1932","VAR_1933","VAR_1934","target"],[Int64,DataStreams.PointerString,Int64,Int64,Int64,DataStreams.PointerString,Int64,Int64,DataStreams.PointerString,DataStreams.PointerString
>>> >> …
>>> >> 
>>> Int64,Int64,Int64,Int64,Int64,Int64,Int64,Int64,DataStreams.PointerString,Int64],145231,1934)
>>> >> dateformat: Base.Dates.DateFormat(Base.Dates.Slot[],"","english")
>>> >>
>>> >>
>>> >> julia> @time ds = DataStreams.DataTable(f)
>>> >>  43.513800 seconds (694.00 M allocations: 12.775 GB, 2.55% gc time)
>>> >>
>>> >>
>>> >> You can convert the result to a DataFrame with:
>>> >>
>>> >> function DataFrames.DataFrame(dt::DataStreams.DataTable)
>>> >> cols = dt.schema.cols
>>> >> data = Array(Any,cols)
>>> >> types = DataStreams.types(dt)
>>> >> for i = 1:cols
>>> >> data[i] = DataStreams.column(dt,i,types[i])
>>> >> end
>>> >> return DataFrame(data,Symbol[symbol(x) for x in dt.schema.header])
>>> >> end
>>> >>
>>> >>
>>> >> -Jacob
>>> >>
>>> >> On Tue, Oct 13, 2015 at 2:40 PM, feza  wrote:
>>> >>>
>>> >>> Finally was able to load it, but the process   consumes a ton of 
>>> memory.
>>> >>> julia> @time train = readtable("./test.csv");
>>> >>> 124.575362 seconds (376.11 M allocations: 13.438 GB, 10.77% gc time)
>>> >>>
>>> >>>
>>> >>>
>>> >>> On Tuesday, October 13, 2015 at 4:34:05 PM UTC-4, feza wrote:
>>> 
>>>  Same here on a 12gb ram machine
>>> 
>>> _
>>> _   _ _(_)_ |  A fresh approach to technical computing
>>>    (_) | (_) (_)|  Documentation: http://docs.julialang.org
>>> _ _   _| |_  __ _   |  Type "?help" for help.
>>>    | | | | | | |/ _` |  |
>>>    | | |_| | | | (_| |  |  Version 0.5.0-dev+429 (2015-09-29 09:47 
>>> UTC)
>>>   _/ |\__'_|_|_|\__'_|  |  Commit f71e449 (14 days old master)
>>>  |__/   |  x86_64-w64-mingw32
>>> 
>>>  julia> using DataFrames
>>> 
>>>  julia> train = readtable("./test.csv");
>>>  ERROR: OutOfMemoryError()
>>>   in resize! at array.jl:452
>>>  

[julia-users] Re: Which public key is Julia 0.4.0 Pkg.Git trying to use on Windows?

2015-10-14 Thread Tony Kelman
The Win and Mac binaries bundle their own git, rather than relying on having it 
manually installed and on the path. Check the Git folder under the Julia 
install, run the git-bash there to try getting keys working.

[julia-users] Re: Which public key is Julia 0.4.0 Pkg.Git trying to use on Windows?

2015-10-14 Thread Tomas Lycken
Thanks! That was a useful pointer, and it got me someways down the road, 
but I still see really weird things...

Without me (knowingly) changing anything, I went into the Julia install 
folder, into Git, and double-clicked git-bash.cmd. That opened a bash 
shell, in which I could cd to e.g. the METADATA.jl package directory and do 
git pull. It asked me to accept the server's fingerprint, but otherwise 
didn't complain. git pull worked then without error.

After doing this, I started a new Julia 0.4.0 instance, and now 
Pkg.update() works - once. The second time, it borked on a couple of 
repositories which I have forked, but where my fork is not the "main 
source", with the message "error: could not fetch tlycken" (which is the 
name of my fork in the output of `git remote -v`; I still have a remote 
called origin). Manually going into those package directories using 
git-bash, and manually saying `git fetch tlycken`, completes without error.

If Julia is using the same git as the git-bash from Julia's installation 
folder, why can one fetch without problem, while the other is denied 
permission?

// T


On Wednesday, October 14, 2015 at 10:34:00 AM UTC+2, Tony Kelman wrote:
>
> The Win and Mac binaries bundle their own git, rather than relying on 
> having it manually installed and on the path. Check the Git folder under 
> the Julia install, run the git-bash there to try getting keys working.



Re: [julia-users] 900mb csv loading in Julia failed: memory comparison vs python pandas and R

2015-10-14 Thread Milan Bouchet-Valat
Le mercredi 14 octobre 2015 à 00:15 -0700, Grey Marsh a écrit :
> Done with the testing in the cloud instance.
> It works and the timings in my case
> 
> 58.346345 seconds (694.00 M allocations: 12.775 GB, 2.63% gc time)
> 
> result of "top" command:  VIRT: 11.651g RES: 3.579g
> 
> ~13gb memory for a 900mb file!
> Thanks to Jacob atleast I was able check that the process works.
As Yichao noted, at no point in the import did Julia use 13GB of RAM.
That's the total amount of memory that was allocated and freed by
pieces (694M of them). You'd need to watch the Julia process while
working to see what's the maximum value of RES when importing.


Regards

> On Wednesday, October 14, 2015 at 12:10:02 PM UTC+5:30, bernhard
> wrote:
> > Jacob
> > 
> > I do run into the same issue as Grey. the step
> > ds = DataStreams.DataTable(f);
> > gets stuck.
> > I also tried this with a smaller file (150MB) which I have. This
> > file is read by readtable in 15s. But the DataTable function
> > freezes. I use 0.4 on Windows 7.
> > 
> > I note that your code did work on a tiny file though (40 lines or
> > so).
> > I do get a dataframe, but when I show it (by simply typing df, or
> > dump(df)) Julia crashes...
> > 
> > Bernhard
> > 
> > 
> > Am Mittwoch, 14. Oktober 2015 06:54:16 UTC+2 schrieb Grey Marsh:
> > > I am using Julia 0.4 for this purpose, if that's what is meant by
> > > "0.4 only". 
> > > 
> > > On Wednesday, October 14, 2015 at 9:53:09 AM UTC+5:30, Jacob
> > > Quinn wrote:
> > > > Oh yes, I forgot to mention that the CSV/DataStreams code is
> > > > 0.4 only. Definitely interested to hear about any
> > > > results/experiences though.
> > > > 
> > > > -Jacob
> > > > 
> > > > On Tue, Oct 13, 2015 at 10:11 PM, Yichao Yu 
> > > > wrote:
> > > > > On Wed, Oct 14, 2015 at 12:02 AM, Grey Marsh <
> > > > > kd.k...@gmail.com> wrote:
> > > > > > @Jacob, I tried your approach. Somehow it got stuck in the
> > > > > "@time ds =
> > > > > > DataStreams.DataTable(f)" line. After 15 minutes running,
> > > > > julia is using
> > > > > > ~500mb and 1 cpu core with no sign of end. The memory use
> > > > > has been almost
> > > > > > same for the whole duration of 15 minutes. I'm letting it
> > > > > run, hoping that
> > > > > > it finishes after some time.
> > > > > >
> > > > > > From your run, I can see it needs 12gb memory which is
> > > > > higher than my
> > > > > > machine memory of 8gb. could it be the problem?
> > > > > 
> > > > > 12GB is the total number of memory ever allocated during the
> > > > > timing. A
> > > > > lot of them might be intermediate results that are freed by
> > > > > the GC.
> > > > > Also, from the output of @time, it looks like 0.4.
> > > > > 
> > > > > >
> > > > > > On Wednesday, October 14, 2015 at 2:28:09 AM UTC+5:30,
> > > > > Jacob Quinn wrote:
> > > > > >>
> > > > > >> I'm hesitant to suggest, but if you're in a bind, I have
> > > > > an experimental
> > > > > >> package for fast CSV reading. The API has stabilized
> > > > > somewhat over the last
> > > > > >> week and I'm planning a more broad release soon, but I'd
> > > > > still consider it
> > > > > >> alpha mode. That said, if anyone's willing to give it a
> > > > > drive, you just need
> > > > > >> to
> > > > > >>
> > > > > >> Pkg.add("Libz")
> > > > > >> Pkg.add("NullableArrays")
> > > > > >> Pkg.clone("https://github.com/quinnj/DataStreams.jl")
> > > > > >> Pkg.clone("https://github.com/quinnj/CSV.jl")
> > > > > >>
> > > > > >> With the original file referenced here I get:
> > > > > >>
> > > > > >> julia> reload("CSV")
> > > > > >>
> > > > > >> julia> f =
> > > > > CSV.Source("/Users/jacobquinn/Downloads/train.csv";null="NA")
> > > > > >> CSV.Source: "/Users/jacobquinn/Downloads/train.csv"
> > > > > >> delim: ','
> > > > > >> quotechar: '"'
> > > > > >> escapechar: '\\'
> > > > > >> null: "NA"
> > > > > >> schema:
> > > > > >>
> > > > > DataStreams.Schema(UTF8String["ID","VAR_0001","VAR_0002","VAR
> > > > > _0003","VAR_0004","VAR_0005","VAR_0006","VAR_0007","VAR_0008"
> > > > > ,"VAR_0009"
> > > > > >> …
> > > > > >>
> > > > > "VAR_1926","VAR_1927","VAR_1928","VAR_1929","VAR_1930","VAR_1
> > > > > 931","VAR_1932","VAR_1933","VAR_1934","target"],[Int64,DataSt
> > > > > reams.PointerString,Int64,Int64,Int64,DataStreams.PointerStri
> > > > > ng,Int64,Int64,DataStreams.PointerString,DataStreams.PointerS
> > > > > tring
> > > > > >> …
> > > > > >>
> > > > > Int64,Int64,Int64,Int64,Int64,Int64,Int64,Int64,DataStreams.P
> > > > > ointerString,Int64],145231,1934)
> > > > > >> dateformat:
> > > > > Base.Dates.DateFormat(Base.Dates.Slot[],"","english")
> > > > > >>
> > > > > >>
> > > > > >> julia> @time ds = DataStreams.DataTable(f)
> > > > > >>  43.513800 seconds (694.00 M allocations: 12.775 GB, 2.55%
> > > > > gc time)
> > > > > >>
> > > > > >>
> > > > > >> You can convert the result to a DataFrame with:
> > > > > >>
> > > > > >> function DataFrames.DataFrame(dt::DataStreams.DataTable)
> > > > > >> cols = dt.schema.cols
> > > 

Re: [julia-users] julia newb seeks critique of a "Defstruct" clone macro

2015-10-14 Thread Mauro
I don't think there is much documentation on Julia macros around, but if
you know LISP then you're ahead in the game anyway.  So it's mostly
learning by looking at other folks' macros:

My package https://github.com/mauro3/Parameters.jl does something
similar to your example.  So you could have a look at that and get some
inspiration.

Also xdump and Meta.show_sexpr are your friends in dissecting Expr.

On Tue, 2015-10-13 at 23:18, Tim Menzies  wrote:
> this is day3 of julia so pardon dumb questions
>
> i had some trouble  finding tutorials on julia macros. i've read the
> textbooks but i suspect there is somewhere else to look (i say that since i
> would have thought that the following would exist in standard Julia, but i
> could not find it). so, two questions
>
> Q1) where to find julia macro tutorials (beyond the standard manual)?
>
> Q2) here's my first julia macro to do something like LISP's defstruct where
> i can define a type and its init contents all at one go. comments? traps
> for the unwary? improvements?
>
> # e.g. @def emp age=0 salary=1
>
> # autocreates a constructor emp0 that returns and emp
>
> # initialized with 0,1
>
> macro has(typename, pairs...)
> name = esc(symbol(string(typename,0)))
> tmp  = esc(symbol("tmp"))
> ones = [x.args[1] for x in pairs]
> twos = [x.args[2] for x in pairs]
> :(type $(typename)
>  $(ones...)
>   end;
>   function $(name)()
>  $(typename)($(twos...))
>   end)
> end
>
>
> thanks!
> tiim menzies


[julia-users] Re: Which public key is Julia 0.4.0 Pkg.Git trying to use on Windows?

2015-10-14 Thread Tomas Lycken


Now, in fact the rc4 has also stopped working :(

So yes, I assume this is a configuration error on my part rather than a 
regression in Julia. But I still don’t know how to fix it… :P

This is what my configuration looks like:

PS C:\Users\Tomas Lycken\.ssh> ls

Directory: C:\Users\Tomas Lycken\.ssh

ModeLastWriteTime Length Name
- -- 
-a---2015-10-13 15:28138 config
-a---2015-03-31 15:48   1679 github_rsa
-a---2015-03-31 15:48400 github_rsa.pub

PS C:\Users\Tomas Lycken\.ssh> cat .\config
Host github.com
HostName github.com
PreferredAuthentications publickey
IdentityFile "/c/Users/Tomas Lycken/.ssh/github_rsa"

And as I said, using git from powershell is no problem, so Julia is 
apparently using a different git than vanilla PS.

// T


On Tuesday, October 13, 2015 at 7:51:26 PM UTC+2, Tony Kelman wrote:

Does it still work if you try with an rc now? Check your github settings to 
> see if all your keys are still current and valid. Sometimes github revokes 
> things if they've been inactive for a while.

​


[julia-users] Re: julia newb seeks critique of a "Defstruct" clone macro

2015-10-14 Thread Tomas Lycken


Nice! I would comment on the following:

   - Your macro lacks input validation, which means users with typos or 
     misunderstandings will probably get weird error messages.
   - I'd create the returned expression using a quote block, rather than :(), 
     to make it clear to source readers that it's more than one statement 
     (this is a convention I've observed, but by no means a rule).

Others will probably react to different aspects of the implementation.

// T

On Wednesday, October 14, 2015 at 3:55:52 AM UTC+2, Tim Menzies wrote:

this is day3 of julia so pardon dumb questions
>
> i had some trouble  finding tutorials on julia macros. i've read the 
> textbooks but i suspect there is somewhere else to look (i say that since i 
> would have thought that the following would exist in standard Julia, but i 
> could not find it). so, two questions
>
> Q1) where to find julia macro tutorials (beyond the standard manual)?
>
> Q2) here's my first julia macro to do something like LISP's defstruct 
> where i can define a type and its init contents all at one go. comments? 
> traps for the unwary? improvements?
>
> # e.g. @def emp age=0 salary=1
>
> # autocreates a constructor emp0 that returns and emp
>
> # initialized with 0,1
>
> macro has(typename, pairs...)
> name = esc(symbol(string(typename,0)))
> tmp  = esc(symbol("tmp"))
> ones = [x.args[1] for x in pairs]
> twos = [x.args[2] for x in pairs]
> :(type $(typename)
>  $(ones...)
>   end;
>   function $(name)()
>  $(typename)($(twos...))
>   end)
> end
>
>
> thanks!
> tiim menzies
>
> ​


Re: [julia-users] 900mb csv loading in Julia failed: memory comparison vs python pandas and R

2015-10-14 Thread bernhard
With readtable the Julia process goes up to 6.3 GB and stays there. It 
takes 95 seconds (@time shows "373 M allocations: 13 GB, 7% gc time").
I will try Jacob's approach again.


Am Mittwoch, 14. Oktober 2015 10:59:06 UTC+2 schrieb Milan Bouchet-Valat:
>
> Le mercredi 14 octobre 2015 à 00:15 -0700, Grey Marsh a écrit : 
> > Done with the testing in the cloud instance. 
> > It works and the timings in my case 
> > 
> > 58.346345 seconds (694.00 M allocations: 12.775 GB, 2.63% gc time) 
> > 
> > result of "top" command:  VIRT: 11.651g RES: 3.579g 
> > 
> > ~13gb memory for a 900mb file! 
> > Thanks to Jacob atleast I was able check that the process works. 
> As Yichao noted, at no point in the import did Julia use 13GB of RAM. 
> That's the total amount of memory that was allocated and freed by 
> pieces (694M of them). You'd need to watch the Julia process while 
> working to see what's the maximum value of RES when importing. 
>
>
> Regards 
>
> > On Wednesday, October 14, 2015 at 12:10:02 PM UTC+5:30, bernhard 
> > wrote: 
> > > Jacob 
> > > 
> > > I do run into the same issue as Grey. the step 
> > > ds = DataStreams.DataTable(f); 
> > > gets stuck. 
> > > I also tried this with a smaller file (150MB) which I have. This 
> > > file is read by readtable in 15s. But the DataTable function 
> > > freezes. I use 0.4 on Windows 7. 
> > > 
> > > I note that your code did work on a tiny file though (40 lines or 
> > > so). 
> > > I do get a dataframe, but when I show it (by simply typing df, or 
> > > dump(df)) Julia crashes... 
> > > 
> > > Bernhard 
> > > 
> > > 
> > > On Wednesday, October 14, 2015 at 06:54:16 UTC+2, Grey Marsh wrote: 
> > > > I am using Julia 0.4 for this purpose, if that's what is meant by 
> > > > "0.4 only". 
> > > > 
> > > > On Wednesday, October 14, 2015 at 9:53:09 AM UTC+5:30, Jacob 
> > > > Quinn wrote: 
> > > > > Oh yes, I forgot to mention that the CSV/DataStreams code is 
> > > > > 0.4 only. Definitely interested to hear about any 
> > > > > results/experiences though. 
> > > > > 
> > > > > -Jacob 
> > > > > 
> > > > > On Tue, Oct 13, 2015 at 10:11 PM, Yichao Yu  
> > > > > wrote: 
> > > > > > On Wed, Oct 14, 2015 at 12:02 AM, Grey Marsh < 
> > > > > > kd.k...@gmail.com> wrote: 
> > > > > > > @Jacob, I tried your approach. Somehow it got stuck in the 
> > > > > > "@time ds = 
> > > > > > > DataStreams.DataTable(f)" line. After 15 minutes running, 
> > > > > > julia is using 
> > > > > > > ~500mb and 1 cpu core with no sign of end. The memory use 
> > > > > > has been almost 
> > > > > > > same for the whole duration of 15 minutes. I'm letting it 
> > > > > > run, hoping that 
> > > > > > > it finishes after some time. 
> > > > > > > 
> > > > > > > From your run, I can see it needs 12gb memory which is 
> > > > > > higher than my 
> > > > > > > machine memory of 8gb. could it be the problem? 
> > > > > > 
> > > > > > 12GB is the total number of memory ever allocated during the 
> > > > > > timing. A 
> > > > > > lot of them might be intermediate results that are freed by 
> > > > > > the GC. 
> > > > > > Also, from the output of @time, it looks like 0.4. 
> > > > > > 
> > > > > > > 
> > > > > > > On Wednesday, October 14, 2015 at 2:28:09 AM UTC+5:30, 
> > > > > > Jacob Quinn wrote: 
> > > > > > >> 
> > > > > > >> I'm hesitant to suggest, but if you're in a bind, I have 
> > > > > > an experimental 
> > > > > > >> package for fast CSV reading. The API has stabilized 
> > > > > > somewhat over the last 
> > > > > > >> week and I'm planning a more broad release soon, but I'd 
> > > > > > still consider it 
> > > > > > >> alpha mode. That said, if anyone's willing to give it a 
> > > > > > drive, you just need 
> > > > > > >> to 
> > > > > > >> 
> > > > > > >> Pkg.add("Libz") 
> > > > > > >> Pkg.add("NullableArrays") 
> > > > > > >> Pkg.clone("https://github.com/quinnj/DataStreams.jl") 
> > > > > > >> Pkg.clone("https://github.com/quinnj/CSV.jl") 
> > > > > > >> 
> > > > > > >> With the original file referenced here I get: 
> > > > > > >> 
> > > > > > >> julia> reload("CSV") 
> > > > > > >> 
> > > > > > >> julia> f = 
> > > > > > CSV.Source("/Users/jacobquinn/Downloads/train.csv";null="NA") 
> > > > > > >> CSV.Source: "/Users/jacobquinn/Downloads/train.csv" 
> > > > > > >> delim: ',' 
> > > > > > >> quotechar: '"' 
> > > > > > >> escapechar: '\\' 
> > > > > > >> null: "NA" 
> > > > > > >> schema: 
> > > > > > >> 
> > > > > > >> DataStreams.Schema(UTF8String["ID","VAR_0001","VAR_0002","VAR_0003","VAR_0004","VAR_0005","VAR_0006","VAR_0007","VAR_0008","VAR_0009" 
> > > > > > >> … 
> > > > > > >> "VAR_1926","VAR_1927","VAR_1928","VAR_1929","VAR_1930","VAR_1931","VAR_1932","VAR_1933","VAR_1934","target"],[Int64,DataStreams.PointerString,Int64,Int64,Int64,DataStreams.PointerString,Int64,Int64,DataStreams.PointerString,DataStreams.PointerString 
> > > > > > >> … 
> > > > > > >> 
> > > 

Re: [julia-users] Method fieldnames not defined

2015-10-14 Thread Milan Bouchet-Valat
On Wednesday, October 14, 2015 at 11:44 -0700, Martin Maechler wrote:
> 
> 
> > On Monday, April 13, 2015 at 17:43:54 UTC+2, Stefan Karpinski wrote:
> > You are probably on Julia 0.3.x in which this function was called
> > `names`. You can either use `names` or use the Compat package which
> > lets you use names from the future.
> > 
> So, if we have code (in ESS https://github.com/emacs/ESS/ ) which
> needs to work with both julia 0.3 and 0.4   we should use names() and
>  the 'Compat' package.
> @Stefan Karpinski :  Can you paste julia code for using 'Compat' 
>  such that it also works in 0.3 ?
You simply need to do:
@Compat.compat fieldnames(x)

or 

using Compat
@compat fieldnames(x)


Don't forget to add Compat to the REQUIRE file of your package.

Regards

> > 
> > On Mon, Apr 13, 2015 at 11:37 AM, 'Antoine Messager' via julia
> > -users  wrote:
> > > Hi,
> > > 
> > > I would like to find the field of SolverResults{Float64}. So
> > > according to the manual, I just need to use the method fieldnames
> > > but unfortunately it is not defined. 
> > > What can I do?
> > > 
> > > Thank you,
> > > Antoine
> > > 
> > 


Re: [julia-users] Call by name

2015-10-14 Thread Stefan Karpinski
Do you mean the evaluation strategy? Or keyword arguments?

On Wed, Oct 14, 2015 at 11:17 PM,  wrote:

> Hi,
>
> I was wondering whether there is a syntax for call by name parameter
> passing ?
>
> Thanks.
>


Re: [julia-users] Re: Avoiding building LLVM with ever Julia release

2015-10-14 Thread Tony Kelman
On master you can now do out-of-tree builds, though that's not really the 
most helpful thing in the world if you only want to switch versions of a 
single dependency (since the rest of the dependencies will also usually 
have to rebuild in each build tree).

In a git checkout without using out-of-tree builds, you can checkout 
different release, master, or other branches and reuse the built copies of 
deps. Due to the out of tree builds, master and release-0.4 have different 
directory structures and can't reuse everything all that nicely. It's 
possible to use system copies of various libraries, though you won't always 
get the exact same functionality or bugfixes as when you build deps from 
source with Julia's makefiles. You can see what .travis.yml does, or the 
way 
that 
https://github.com/JuliaLang/julia/blob/master/contrib/windows/msys_build.sh 
extracts a Windows nightly and uses many libraries from it on AppVeyor, but 
neither is a particularly clean way to do things. The CI builds are 
optimized for build speed, not organizational sanity.

I'd recommend Tomas' approach right now of having one git clone per release 
series you want to build. To keep built copies around without having to 
clean them out when you change branches, you can do `make install` and the 
julia-$sha installed copy will stick around even if you move around to 
different commits.


On Wednesday, October 14, 2015 at 9:28:39 AM UTC-7, Isaiah wrote:
>
> Presumably ... you can put linear algebra libraries somewhere too?
>
>
> Well, yes and maybe. There are a number of USE_SYSTEM_* overrides 
> available, but I'm not sure if they all provide a way to indicate which 
> specific one to use, as there is with the LLVM_CONFIG. (perhaps Tony will 
> clarify. and I'm sure a PR to this effect would be welcomed if it's not 
> available now)
>
> On Wed, Oct 14, 2015 at 11:55 AM, milktrader  > wrote:
>
>> Thanks for the Make.user suggestion. I have an idea on how to make that 
>> work. Presumably (and this is not part of the original post) you 
>> can put linear algebra libraries somewhere too?
>>
>>
>> On Wednesday, October 14, 2015 at 11:11:56 AM UTC-4, Isaiah wrote:
>>>
>>> It should be possible to use an existing LLVM build by setting the 
>>> following in Make.user:
>>>
>>> USE_SYSTEM_LLVM=1
>>> LLVM_CONFIG=/path/to/llvm-config
>>>
>>> If you build LLVM manually you will need to be sure to apply the patches 
>>> specified in `deps/Makefile`.
>>>
>>> On Wed, Oct 14, 2015 at 10:54 AM, milktrader  wrote:
>>>
 I like this solution and I've been using git from the beginning until I 
 decided I needed to have multiple versions of Julia around at the same 
 time, with the ability to open each version whenever I choose.

 My still rough implementation of this is to rename *julia* to other 
 names based on version numbers

 [julia (master)] 
 ✈  chefs

 0.3.11  .  kevin
 0.4-rc1 .  wanda
 0.4-rc4 .  frida
 0.4-0   .  julius

 So this is straightforward to do the hard way (which is how I'm doing 
 it now) by simply building each julia version from scratch.

 How would this work with using git versus tar files?

 On Wednesday, October 14, 2015 at 10:37:06 AM UTC-4, Tero Frondelius 
 wrote:
>
> git clone https://github.com/JuliaLang/julia.git
> cd julia
> make
>
>
> after update in julia folder:
> git fetch
> git branch v0.4.1
> make
>
>
> Maybe some fine tuning in commands, but basically drop the method of 
> downloading tar and start using git. 
>
>
>
> On Wednesday, October 14, 2015 at 5:21:36 PM UTC+3, milktrader wrote:
>>
>> I'm downloading full tar files for each new Julia version and of 
>> course it comes with LLVM. I'd like to avoid building LLVM every single 
>> time and have it compiled once, and available for all the new Julia 
>> releases (that use that LLVM version of course).
>>
>> Any pointers?
>>
>> Dan
>>
>
>>>
>

[julia-users] Re: Install Old Version of Julia (0.3)

2015-10-14 Thread Kristoffer Carlsson
http://julialang.org/downloads/oldreleases maybe?

On Wednesday, October 14, 2015 at 9:19:45 PM UTC+2, Tim Wheeler wrote:
>
> Hello Julia Users,
>
> I am running Ubuntu 14.04 and had the standard julia ppa. Ubuntu 
> automatically updated me to Julia 0.4. Unfortunately I have a paper due 
> soon and the update messed up my modules. Is it possible to install the old 
> version (0.3)?
> I was on the julia releases page 
>  
> but the 0.3.11-trusty2 file is shown to be broken for Intel processors. 
> Is there a way to get an older version that compiles for Ubuntu 14.04 
> (trusty) on Intel chips?
>
> I also checked `apt-cache policy julia`, which gives me:
>
> >>apt-cache policy julia
> julia:
>   Installed: (none)
>   Candidate: 0.4.0-trusty3
>   Version table:
>  0.4.0-trusty3 0
> 500 http://ppa.launchpad.net/staticfloat/juliareleases/ubuntu/ 
> trusty/main amd64 Packages
> 100 /var/lib/dpkg/status
>  0.2.1+dfsg-2 0
> 500 http://us.archive.ubuntu.com/ubuntu/ trusty/universe amd64 
> Packages
>
>
> Is 0.3 no longer an option?
>
> Thank you
>


[julia-users] Re: Install Old Version of Julia (0.3)

2015-10-14 Thread Tim Wheeler
Okay, I figured it out.
Apparently amd64 doesn't have anything to do with AMD vs. Intel.
I installed it and it works!

On Wednesday, October 14, 2015 at 12:19:45 PM UTC-7, Tim Wheeler wrote:
>
> Hello Julia Users,
>
> I am running Ubuntu 14.04 and had the standard julia ppa. Ubuntu 
> automatically updated me to Julia 0.4. Unfortunately I have a paper due 
> soon and the update messed up my modules. Is it possible to install the old 
> version (0.3)?
> I was on the julia releases page 
>  
> but the 0.3.11-trusty2 file is shown to be broken for Intel processors. 
> Is there a way to get an older version that compiles for Ubuntu 14.04 
> (trusty) on Intel chips?
>
> I also checked `apt-cache policy julia`, which gives me:
>
> >>apt-cache policy julia
> julia:
>   Installed: (none)
>   Candidate: 0.4.0-trusty3
>   Version table:
>  0.4.0-trusty3 0
> 500 http://ppa.launchpad.net/staticfloat/juliareleases/ubuntu/ 
> trusty/main amd64 Packages
> 100 /var/lib/dpkg/status
>  0.2.1+dfsg-2 0
> 500 http://us.archive.ubuntu.com/ubuntu/ trusty/universe amd64 
> Packages
>
>
> Is 0.3 no longer an option?
>
> Thank you
>


[julia-users] Re: What features are interesting in a VS Code plug-in?

2015-10-14 Thread Burning Legion
Just took a look at the new basic language support. Importing a textmate 
language definition to start it off is quite nice. It would be nicer if 
macros were highlighted (can't see why they aren't as they are defined) but 
it is a start!

On Thursday, October 8, 2015 at 9:12:29 AM UTC+2, Burning Legion wrote:
>
> -Autocomplete(Intellisense), with help for a function showing up on hover
> -Linter support
> -Debugger (this would be awhile of course for both vscode and gallium)
>
> Other than the basics, some nice commands would be:
> -Create a package
>
> This is just off the top of my head.
>
> What would be great is some form of automatic syntax checks as well. 
> Checking that you are using the correct types for a function and such would 
> be nice to know when writing instead of when running. That is a bit 
> ambitious for now.
>
> Do post when you get started! If I have any free time in the foreseeable 
> future I would love to help.
>
> On Thursday, October 8, 2015 at 8:04:51 AM UTC+2, Tomas Lycken wrote:
>
>> I've just been invited to an early look at a plug-in SDK for Visual 
>> Studio Code, and I want to start experimenting with it in a plug-in for 
>> Julia development. 
>>
>> What features are interesting in such a plug-in? 
>>
>> I have no idea what the SDK will support, so your favorite feature might 
>> not be possible to implement (at least not at the moment) but I'd still 
>> love to learn what features you all would find the most useful. 
>>
>> Hit me! :) 
>>
>> // T 
>>
>>

Re: [julia-users] How to correctly do pivoted QR in Julia 0.4+?

2015-10-14 Thread Victor Minden
That worked perfectly -- I was not familiar with the syntax of ';' versus 
',' in the documentation.   

Thanks, Yichao!
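
For anyone who finds this thread later, the call that works on 0.4 looks 
roughly like this (a minimal sketch):

A = rand(6, 3)
Q, R, p = qr(A, Val{true})        # positional Val{true} requests pivoting
maximum(abs(A[:, p] - Q*R))       # ~1e-15, i.e. A[:,p] equals Q*R up to roundoff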

On Tuesday, October 13, 2015 at 8:06:58 PM UTC-7, Yichao Yu wrote:
>
> On Tue, Oct 13, 2015 at 5:48 PM, Victor Minden  > wrote: 
> > I posted this originally as an issue on the GitHub page but was 
> redirected 
> > here, which seems like a great resource.  Also, seems like this is more 
> just 
> > my misunderstanding than an issue with the code, so this should be more 
> > appropriate. 
> > 
> > With Julia 0.4+, it seems that using 
> > F = qr(A,pivot=true); 
> > 
> > is no longer the correct calling syntax, but I do not understand the new 
> > syntax. The documentation gives, 
> > 
> > help?> qr 
> > 
> > search: qr qrfact qrfact! sqrt sqrtm isqrt require QuickSort 
> > PartialQuickSort 
> > 
> >   qr(A [,pivot=Val{false}][;thin=true]) -> Q, R, [p] 
> > 
> >   Compute the (pivoted) QR factorization of A such that either A = Q*R 
> or 
> > A[:,p] = Q*R. Also see qrfact. The default 
> >   is to compute a thin factorization. Note that R is not extended with 
> zeros 
> > when the full Q is requested. 
> > 
> > 
> > but it seems that "pivot" is not a keyword argument.  I must just not 
> > understand the meaning of the function signature.  Can someone give me 
> an 
> > example of how to call QR with pivoting?  I was directed to 
> > https://github.com/JuliaLang/julia/blob/release-0.4/NEWS.md, but I 
> cannot 
> > find anything in the 0.4 changes that seems to indicate the correct way 
> to 
> > do this. 
>
> It's an optional argument, what follows `;` (i.e. `thin=true`) are 
> keyword arguments. Just call it with e.g. qr(A, Val{true}) 
>
> > 
> > Thanks! 
>


Re: [julia-users] Method fieldnames not defined

2015-10-14 Thread Josh Langsfeld
Actually Compat doesn't rewrite the code to call 'names' but just defines 
'fieldnames' directly. So either

import Compat
Compat.fieldnames(x)

or 

using Compat
fieldnames(x)

is sufficient.

On Wednesday, October 14, 2015 at 4:47:36 PM UTC-4, Milan Bouchet-Valat 
wrote:
>
> On Wednesday, October 14, 2015 at 11:44 -0700, Martin Maechler wrote: 
> > 
> > 
> > > On Monday, April 13, 2015 at 17:43:54 UTC+2, Stefan Karpinski wrote: 
> > > You are probably on Julia 0.3.x in which this function was called 
> > > `names`. You can either use `names` or use the Compat package which 
> > > lets you use names from the future. 
> > > 
> > So, if we have code (in ESS https://github.com/emacs/ESS/ ) which 
> > needs to work with both julia 0.3 and 0.4   we should use names() and 
> >  the 'Compat' package. 
> > @Stefan Karpinski :  Can you paste julia code for using 'Compat' 
> >  such that it also works in 0.3 ? 
> You simply need to do: 
> @Compat.compat fieldnames(x) 
>
> or 
>
> using Compat 
> @compat fieldnames(x) 
>
>
> Don't forget to add Compat to the REQUIRE file of your package. 
>
> Regards 
>
> > > 
> > > On Mon, Apr 13, 2015 at 11:37 AM, 'Antoine Messager' via julia 
> > > -users  wrote: 
> > > > Hi, 
> > > > 
> > > > I would like to find the field of SolverResults{Float64}. So 
> > > > according to the manual, I just need to use the method fieldnames 
> > > > but unfortunately it is not defined. 
> > > > What can I do? 
> > > > 
> > > > Thank you, 
> > > > Antoine 
> > > > 
> > > 
>


Re: [julia-users] CUDArt main example does not run smoothly

2015-10-14 Thread Tim Holy
Sorry you had trouble. I've updated the README, so you might want to try 
again. I also tagged a new version of the package, so you'll have access to 
the newer features described in the README; you'll probably want to do 
Pkg.update().
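i.e. roughly:

Pkg.update()              # pull the newly tagged CUDArt release
Pkg.installed("CUDArt")   # check which version is now installed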

Best,
--Tim

On Wednesday, October 14, 2015 08:27:06 AM Joaquim Masset Lacombe Dias Garcia 
wrote:
> I could not run the main example of CUDArt from
> 
> : https://github.com/JuliaGPU/CUDArt.jl
> 
> The code and error follow bellow.
> 
> running:
> #MyCudaModule previously defined as
> in: https://github.com/JuliaGPU/CUDArt.jl
> using CUDArt, MyCudaModule
> 
> A = rand(10,5)
> 
> result = devices(dev->capability(dev)[1]>=2) do devlist
> MyCudaModule.init(devlist) do
> function1(CudaArray(A))
> end
> end
> 
> error output:
> 
> wrong number of arguments
> while loading In[9], in expression starting on line 5
> 
>  in anonymous at In[9]:7
>  in init at In[8]:27
>  in anonymous at In[9]:6
>  in devices at C:\Users\joaquimgarcia\.julia\v0.3\CUDArt\src\device.jl:57
> 
> 
> 
> I tried in both julia 0.3 and 0.4
> 
> 
> also i tried Pkg.test("CUDArt")
> 
> and al tests passed



Re: [julia-users] Method fieldnames not defined

2015-10-14 Thread Zack L.-B.
Seems to be in reference to this pull request 
.

On Wednesday, October 14, 2015 at 11:44:02 AM UTC-7, Martin Maechler wrote:
>
>
>
> On Monday, April 13, 2015 at 17:43:54 UTC+2, Stefan Karpinski wrote:
>>
>> You are probably on Julia 0.3.x in which this function was called 
>> `names`. You can either use `names` or use the Compat package which lets 
>> you use names from the future.
>>
>
> So, if we have code (in ESS https://github.com/emacs/ESS/ ) which needs 
> to work with both julia 0.3 and 0.4   we should use names() and  the 
> 'Compat' package.
> @Stefan Karpinski :  Can you paste julia code for using 'Compat'  such 
> that it also works in 0.3 ?
>  
>
>>
>> On Mon, Apr 13, 2015 at 11:37 AM, 'Antoine Messager' via julia-users <
>> julia...@googlegroups.com> wrote:
>>
>>> Hi,
>>>
>>> I would like to find the field of SolverResults{Float64}. So according 
>>> to the manual , I 
>>> just need to use the method fieldnames but unfortunately it is not defined. 
>>> What can I do?
>>>
>>> Thank you,
>>> Antoine
>>>
>>
>>

Re: [julia-users] Re: Julia T-shirt and Sticker

2015-10-14 Thread Ben Arthur
we really really need a t-shirt to celebrate the release of 0.4.  stickers 
would be great too.  i'd even be willing to pay an exorbitant amount for 
these things if the proceeds went to juliaComputing.


[julia-users] Re: Azure interface for Julia

2015-10-14 Thread cdm

while i have not given this a try yet, it seems that Juju
(https://jujucharms.com/docs/stable/getting-started)
has been integrated fairly well into the Azure
ecosystem ...

perhaps a Julia wrapper over the Juju API would
be feasible:

   https://godoc.org/github.com/juju/juju/api


good luck !!!




On Tuesday, October 13, 2015 at 6:55:52 PM UTC-7, Raphael Ribeiro wrote:
>
> Hello, 
>
> Is there any Azure interface for Julia like following interface for EC2?
>
> https://github.com/amitmurthy/AWS.jl
>
> Anyways, we appretiated any help on how we could develop an Azure 
> interface for Julia.
>
> Thanks!
>
> -- 
> *Raphael Pereira Ribeiro*
> *Instituto de Computação - IC/UFAL*
> *Graduando em Ciências da Computação*
> *http://lattes.cnpq.br/9969641216207080 
> *
>


RE: [julia-users] Re: What features are interesting in a VS Code plug-in?

2015-10-14 Thread David Anthoff
Would it make sense to have a git repo with a Julia extension for VS code 
somewhere? If you would just push your julia folder from your 
.vscode\extensions, that would be great!

 

From: julia-users@googlegroups.com [mailto:julia-users@googlegroups.com] On 
Behalf Of Burning Legion
Sent: Wednesday, October 14, 2015 1:38 PM
To: julia-users 
Subject: [julia-users] Re: What features are interesting in a VS Code plug-in?

 

Just took a look at the new basic language support. Importing a textmate 
language definition to start it off is quite nice. It would be nicer if macros 
were highlighted (can't see why they aren't as they are defined) but it is a 
start!

On Thursday, October 8, 2015 at 9:12:29 AM UTC+2, Burning Legion wrote:

-Autocomplete(Intellisense), with help for a function showing up on hover

-Linter support

-Debugger (this would be awhile of course for both vscode and gallium)

 

Other than the basics, some nice commands would be:
-Create a package

 

This is just off the top of my head.

 

What would be great is some form of automatic syntax checks as well. Checking 
that you are using the correct types for a function and such would be nice to 
know when writing instead of when running. That is a bit ambitious for now.

 

Do post when you get started! If I have any free time in the foreseeable future 
I would love to help.

On Thursday, October 8, 2015 at 8:04:51 AM UTC+2, Tomas Lycken wrote:

I've just been invited to an early look at a plug-in SDK for Visual Studio 
Code, and I want to start experimenting with it in a plug-in for Julia 
development. 

What features are interesting in such a plug-in? 

I have no idea what the SDK will support, so your favorite feature might not be 
possible to implement (at least not at the moment) but I'd still love to learn 
what features you all would find the most useful. 

Hit me! :) 

// T 



[julia-users] Prevent Escher from evaluating signals on start

2015-10-14 Thread Yakir Gagnon
Title... 
I have a bunch of tabs and it all takes time to load. I don't need the 
functions to evaluate before the user presses on something. Any easy way i can 
prevent all the signals from running their functions when the pages load up? 

[julia-users] Re: Markdown.parse question

2015-10-14 Thread j verzani
Thanks for that, it is helpful. I don't really like the heuristic, but it 
is something that can be worked with.

On Tuesday, October 13, 2015 at 11:26:44 PM UTC-4, andy hayden wrote:
>
> Whether it renders as $ or $$ is inferred from the position, if it's 
> inline it uses $ if it's a block $$.
>
> julia> Markdown.latex(Markdown.parse("""\$\\sin(x)\$"""))
> "\$\$\\sin(x)\$\$"
>
>
> julia> Markdown.latex(Markdown.parse("""inline \$\\sin(x)\$"""))
> "inline \$\\sin(x)\$\n"
>
>
>
> https://github.com/JuliaLang/julia/blob/6e4c9f164832b5743b13c36c58d6bdd63ebbf1b8/base/markdown/IPython/IPython.jl#L28-L32
>
> On Tuesday, 13 October 2015 19:30:48 UTC-7, j verzani wrote:
>>
>> With v0.4, is there a way to have Markdown parse latex and tell the 
>> difference between inline math and display math? In particular, this yields 
>> two identical pieces:
>>
>> ```
>>
>> julia> macro L_mstr(x) x end
>>
>> julia> Markdown.parse(L"""
>>
>>$\sin(x)$
>>
>>$$\sin(x)$$""", flavor=:julia).content
>>
>> 2-element Array{Any,1}:
>>
>>  Base.Markdown.LaTeX("\\sin(x)")
>>
>>  Base.Markdown.LaTeX("\\sin(x)")
>>
>> ```
>>
>>
>> I tried other flavors (github, common), but they don't identify the LaTeX
>>
>

Re: [julia-users] Prevent Escher from evaluating signals on start

2015-10-14 Thread Yakir Gagnon
Thanks Shashi,
But while that prevents stuff in another tab from running before that other
tab is in focus, stuff runs twice when that tab gets focused...!?
Maybe that's what you meant with "you will need to memorize the function
you pass to lift..." but in that case I don't understand what you mean.

Thanks again for the awesome work!


Yakir Gagnon
The Queensland Brain Institute (Building #79)
The University of Queensland
Brisbane QLD 4072
Australia

cell +61 (0)424 393 332
work +61 (0)733 654 089

On Thu, Oct 15, 2015 at 2:46 AM, Shashi Gowda 
wrote:

> You can pass in an init=empty keyword argument to lift / consume, and the
> initial value of the signal will be an empty UI. The actual value will be
> computed next time the input signal udpates. You should also provide a
> typ=Any kwarg to lift / consume so that if you replace empty with something
> that is not Empty, you do not get a type conversion error. something like:
>
> selected_tab = Input(1)
> vbox(
> tabs(["a", "b"]) >>> selected_tab,
> lift(selected_tab, typ=Any, init=empty) do page_no
> // compute page at page_no
> end
> )
>
> may be what you need. To avoid recomputing when you switch to tab 1 and
> then to tab 2 and then back to tab 1, you will need to memoize the function
> you pass to lift...
>
>
> On Wed, Oct 14, 2015 at 6:06 PM, Yakir Gagnon <12.ya...@gmail.com> wrote:
>
>> Title...
>> I have a bunch of tabs and it all takes time to load. I don't need the
>> functions to evaluate before the user presses on something. Any easy way i
>> can prevent all the signals from running their functions when the pages
>> load up?
>
>
>


Re: [julia-users] Escher image update

2015-10-14 Thread Yakir Gagnon
Thanks, yea I just asynchronously clean everything *.jpg in assets/ before
I continue with the rest of the actions in the lift.


Yakir Gagnon
The Queensland Brain Institute (Building #79)
The University of Queensland
Brisbane QLD 4072
Australia

cell +61 (0)424 393 332
work +61 (0)733 654 089

On Thu, Oct 15, 2015 at 2:55 AM, Shashi Gowda 
wrote:

> Sorry the problem is actually not even browser caching... what happens is
> Escher thinks there is nothing to update because the URL attribute remains
> the same...
>
> imread is a good solution although slower. Another good solution is the
> one you yourself gave if you can figure out a way to clean up all the
> temporary files it creates.
>
> On Wed, Oct 14, 2015 at 10:19 PM, Shashi Gowda 
> wrote:
>
>> This is browser caching at work
>>
>> one way to resolve this is to just imread the image instead of loading it
>> using image() which just creates the equivalent of an  HTML tag...
>>
>> e.g.
>>
>> using Images
>>
>> run(`convert a.jpg  assets/a.jpg`)
>> imread("assets/a.jpg")
>>
>>
>> On Wed, Oct 14, 2015 at 3:52 AM, Yakir Gagnon <12.ya...@gmail.com> wrote:
>>
>>>
>>>
>>> I have some widgets that run imagemagick updating images in `assets/`,
>>> and the only way I can get those to update with `image` is to change their
>>> names...
>>> So if I just do this:
>>> ```julia
>>> run(`convert a.jpg  assets/a.jpg`)
>>> image("assets/a.jpg")
>>> ```
>>> the image doesn't get updated. But this craziness works:
>>> ```julia
>>> name = rand(Int)
>>> run(`convert a.jpg  assets/$name.jpg`)
>>> image("assets/$name.jpg")
>>> ```
>>> Isn't there a better way?
>>>
>>
>>
>


Re: [julia-users] Prevent Escher from evaluating signals on start

2015-10-14 Thread Shashi Gowda
Memoization means storing the result of a function and then using the
stored value when the function is called with the same arguments.

See https://github.com/simonster/Memoize.jl
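
With that package, the earlier example might look roughly like this (an
untested sketch; render_page is a made-up name for whatever builds a tab):

using Memoize

# With @memoize, the expensive work runs only the first time a given
# page_no is requested; later requests reuse the stored result.
@memoize function render_page(page_no)
    sleep(2)                            # stand-in for the slow part
    string("contents of tab ", page_no)
end

# then, reusing the lift from the earlier snippet:
# lift(selected_tab, typ=Any, init=empty) do page_no
#     render_page(page_no)
# end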

On Thu, Oct 15, 2015 at 9:16 AM, Yakir Gagnon <12.ya...@gmail.com> wrote:

> Thanks Shashi,
> But while that prevents stuff in another tab from running before that
> other tab is in focus, stuff runs twice when that tab gets focused...!?
> Maybe that's what you meant with "you will need to memorize the function
> you pass to lift..." but in that case I don't understand what you mean.
>
> Thanks again for the awesome work!
>
>
> Yakir Gagnon
> The Queensland Brain Institute (Building #79)
> The University of Queensland
> Brisbane QLD 4072
> Australia
>
> cell +61 (0)424 393 332
> work +61 (0)733 654 089
>
> On Thu, Oct 15, 2015 at 2:46 AM, Shashi Gowda 
> wrote:
>
>> You can pass in an init=empty keyword argument to lift / consume, and
>> the initial value of the signal will be an empty UI. The actual value will
>> be computed next time the input signal udpates. You should also provide a
>> typ=Any kwarg to lift / consume so that if you replace empty with something
>> that is not Empty, you do not get a type conversion error. something like:
>>
>> selected_tab = Input(1)
>> vbox(
>> tabs(["a", "b"]) >>> selected_tab,
>> lift(selected_tab, typ=Any, init=empty) do page_no
>> // compute page at page_no
>> end
>> )
>>
>> may be what you need. To avoid recomputing when you switch to tab 1 and
>> then to tab 2 and then back to tab 1, you will need to memoize the function
>> you pass to lift...
>>
>>
>> On Wed, Oct 14, 2015 at 6:06 PM, Yakir Gagnon <12.ya...@gmail.com> wrote:
>>
>>> Title...
>>> I have a bunch of tabs and it all takes time to load. I don't need the
>>> functions to evaluate before the user presses on something. Any easy way i
>>> can prevent all the signals from running their functions when the pages
>>> load up?
>>
>>
>>
>


Re: [julia-users] Prevent Escher from evaluating signals on start

2015-10-14 Thread Yakir Gagnon
*amazing...


Yakir Gagnon
The Queensland Brain Institute (Building #79)
The University of Queensland
Brisbane QLD 4072
Australia

cell +61 (0)424 393 332
work +61 (0)733 654 089

On Thu, Oct 15, 2015 at 2:29 PM, Yakir Gagnon <12.ya...@gmail.com> wrote:

> OMG amaing. Thanks again ...!!!
>
>
> Yakir Gagnon
> The Queensland Brain Institute (Building #79)
> The University of Queensland
> Brisbane QLD 4072
> Australia
>
> cell +61 (0)424 393 332
> work +61 (0)733 654 089
>
> On Thu, Oct 15, 2015 at 2:17 PM, Shashi Gowda 
> wrote:
>
>> Memoization means storing the result of a function and then using the
>> stored value when the function is called with the same arguments.
>>
>> See https://github.com/simonster/Memoize.jl
>>
>> On Thu, Oct 15, 2015 at 9:16 AM, Yakir Gagnon <12.ya...@gmail.com> wrote:
>>
>>> Thanks Shashi,
>>> But while that prevents stuff in another tab from running before that
>>> other tab is in focus, stuff runs twice when that tab gets focused...!?
>>> Maybe that's what you meant with "you will need to memorize the function
>>> you pass to lift..." but in that case I don't understand what you mean.
>>>
>>> Thanks again for the awesome work!
>>>
>>>
>>> Yakir Gagnon
>>> The Queensland Brain Institute (Building #79)
>>> The University of Queensland
>>> Brisbane QLD 4072
>>> Australia
>>>
>>> cell +61 (0)424 393 332
>>> work +61 (0)733 654 089
>>>
>>> On Thu, Oct 15, 2015 at 2:46 AM, Shashi Gowda 
>>> wrote:
>>>
 You can pass in an init=empty keyword argument to lift / consume, and
 the initial value of the signal will be an empty UI. The actual value will
 be computed next time the input signal udpates. You should also provide a
 typ=Any kwarg to lift / consume so that if you replace empty with something
 that is not Empty, you do not get a type conversion error. something like:

 selected_tab = Input(1)
 vbox(
 tabs(["a", "b"]) >>> selected_tab,
 lift(selected_tab, typ=Any, init=empty) do page_no
 // compute page at page_no
 end
 )

 may be what you need. To avoid recomputing when you switch to tab 1 and
 then to tab 2 and then back to tab 1, you will need to memoize the function
 you pass to lift...


 On Wed, Oct 14, 2015 at 6:06 PM, Yakir Gagnon <12.ya...@gmail.com>
 wrote:

> Title...
> I have a bunch of tabs and it all takes time to load. I don't need the
> functions to evaluate before the user presses on something. Any easy way i
> can prevent all the signals from running their functions when the pages
> load up?



>>>
>>
>


Re: [julia-users] Prevent Escher from evaluating signals on start

2015-10-14 Thread Yakir Gagnon
OMG amaing. Thanks again ...!!!


Yakir Gagnon
The Queensland Brain Institute (Building #79)
The University of Queensland
Brisbane QLD 4072
Australia

cell +61 (0)424 393 332
work +61 (0)733 654 089

On Thu, Oct 15, 2015 at 2:17 PM, Shashi Gowda 
wrote:

> Memoization means storing the result of a function and then using the
> stored value when the function is called with the same arguments.
>
> See https://github.com/simonster/Memoize.jl
>
> On Thu, Oct 15, 2015 at 9:16 AM, Yakir Gagnon <12.ya...@gmail.com> wrote:
>
>> Thanks Shashi,
>> But while that prevents stuff in another tab from running before that
>> other tab is in focus, stuff runs twice when that tab gets focused...!?
>> Maybe that's what you meant with "you will need to memorize the function
>> you pass to lift..." but in that case I don't understand what you mean.
>>
>> Thanks again for the awesome work!
>>
>>
>> Yakir Gagnon
>> The Queensland Brain Institute (Building #79)
>> The University of Queensland
>> Brisbane QLD 4072
>> Australia
>>
>> cell +61 (0)424 393 332
>> work +61 (0)733 654 089
>>
>> On Thu, Oct 15, 2015 at 2:46 AM, Shashi Gowda 
>> wrote:
>>
>>> You can pass in an init=empty keyword argument to lift / consume, and
>>> the initial value of the signal will be an empty UI. The actual value will
>>> be computed next time the input signal udpates. You should also provide a
>>> typ=Any kwarg to lift / consume so that if you replace empty with something
>>> that is not Empty, you do not get a type conversion error. something like:
>>>
>>> selected_tab = Input(1)
>>> vbox(
>>> tabs(["a", "b"]) >>> selected_tab,
>>> lift(selected_tab, typ=Any, init=empty) do page_no
>>> // compute page at page_no
>>> end
>>> )
>>>
>>> may be what you need. To avoid recomputing when you switch to tab 1 and
>>> then to tab 2 and then back to tab 1, you will need to memoize the function
>>> you pass to lift...
>>>
>>>
>>> On Wed, Oct 14, 2015 at 6:06 PM, Yakir Gagnon <12.ya...@gmail.com>
>>> wrote:
>>>
 Title...
 I have a bunch of tabs and it all takes time to load. I don't need the
 functions to evaluate before the user presses on something. Any easy way i
 can prevent all the signals from running their functions when the pages
 load up?
>>>
>>>
>>>
>>
>


Re: [julia-users] Re: Avoiding building LLVM with ever Julia release

2015-10-14 Thread Isaiah Norton
>
> Presumably ... you can put linear algebra libraries somewhere too?


Well, yes and maybe. There are a number of USE_SYSTEM_* overrides
available, but I'm not sure if they all provide a way to indicate which
specific one to use, as there is with the LLVM_CONFIG. (perhaps Tony will
clarify. and I'm sure a PR to this effect would be welcomed if it's not
available now)

On Wed, Oct 14, 2015 at 11:55 AM, milktrader  wrote:

> Thanks for the Make.user suggestion. I have an idea on how to make that
> work. Presumably (and this is not part of the original post) you
> can put linear algebra libraries somewhere too?
>
>
> On Wednesday, October 14, 2015 at 11:11:56 AM UTC-4, Isaiah wrote:
>>
>> It should be possible to use an existing LLVM build by setting the
>> following in Make.user:
>>
>> USE_SYSTEM_LLVM=1
>> LLVM_CONFIG=/path/to/llvm-config
>>
>> If you build LLVM manually you will need to be sure to apply the patches
>> specified in `deps/Makefile`.
>>
>> On Wed, Oct 14, 2015 at 10:54 AM, milktrader  wrote:
>>
>>> I like this solution and I've been using git from the beginning until I
>>> decided I needed to have multiple versions of Julia around at the same
>>> time, with the ability to open each version whenever I choose.
>>>
>>> My still rough implementation of this is to rename *julia* to other
>>> names based on version numbers
>>>
>>> [julia (master)]
>>> ✈  chefs
>>>
>>> 0.3.11  .  kevin
>>> 0.4-rc1 .  wanda
>>> 0.4-rc4 .  frida
>>> 0.4-0   .  julius
>>>
>>> So this is straightforward to do the hard way (which is how I'm doing it
>>> now) by simply building each julia version from scratch.
>>>
>>> How would this work with using git versus tar files?
>>>
>>> On Wednesday, October 14, 2015 at 10:37:06 AM UTC-4, Tero Frondelius
>>> wrote:

 git clone https://github.com/JuliaLang/julia.git
 cd julia
 make


 after update in julia folder:
 git fetch
 git branch v0.4.1
 make


 Maybe some fine tuning in commands, but basically drop the method of
 downloading tar and start using git.



 On Wednesday, October 14, 2015 at 5:21:36 PM UTC+3, milktrader wrote:
>
> I'm downloading full tar files for each new Julia version and of
> course it comes with LLVM. I'd like to avoid building LLVM every single
> time and have it compiled once, and available for all the new Julia
> releases (that use that LLVM version of course).
>
> Any pointers?
>
> Dan
>

>>


Re: [julia-users] Strange performance problem with expression length inside @simd loop

2015-10-14 Thread Tim Holy
Try putting some extraneous parentheses around some of your operations, and 
you'll get good performance again. It's an inlining thing.

Please do report this as an issue: 
https://github.com/JuliaLang/julia/issues/new

--Tim

On Wednesday, October 14, 2015 08:07:11 AM Damien wrote:
> Hi all,
> 
> I'm noticing a strange performance issue with expressions such as this one:
> 
> n = 10
> a = zeros(Float32, n)
> b = rand(Float32, n)
> c = rand(Float32, n)
> 
> function test(a, b, c)
>@simd for i in 1:length(a)
>@inbounds a[i] += b[i] * c[i] * (c[i] < b[i]) * (c[i] > b[i]) *
> (c[i] <= b[i]) * (c[i] >= b[i])
>end
> end
> 
> The problem is that performance and successful vectorisation depend on the
> number of comparison statements in the expression and whether the
> comparisons are explicitely cast to Float32.
> 
> In Julia 0.4-rc4, I get the following:
> 
> @inbounds a[i] += b[i] * c[i] * (c[i] < b[i]) * (c[i] > b[i]) * (c[i] <=
> b[i])
> 
> > test(a, b, c)
> > @time test(a, b, c)
> 
> 0.000169 seconds (4 allocations: 160 bytes)
> 
> @inbounds a[i] += b[i] * c[i] * (c[i] < b[i]) * (c[i] > b[i]) * (c[i] <=
> b[i]) * (c[i] >= b[i])
> 
> > test(a, b, c)
> > @time test(a, b, c)
> 
> 0.007258 seconds (200.00 k allocations: 3.052 MB, 47.59% gc time)
> 
> @inbounds a[i] += b[i] * c[i] * Float32(c[i] < b[i]) * Float32(c[i] > b[i])
> * Float32(c[i] <= b[i]) * Float32(c[i] <= b[i])
> 
> > test(a, b, c)
> > @time test(a, b, c)
> 
> 0.000137 seconds (4 allocations: 160 bytes)
> 
> I get a similar behavior in the current 0.5 HEAD (Commit d9f7c21* with the
> fix for issue #13553) but the threshold for the number of comparisons is
> slightly different.
> 
> (a) Is meant to be OK to use expressions like a[i] * (c[i] < b[i]) or
> should I always cast explicitely? I really like the implicit version,
> because it is very readable and a natural translation of equations
> involving cases.
> 
> (b) What is causing the vectorisation threshold observed here?
> 
> Best,
> Damien



Re: [julia-users] Method fieldnames not defined

2015-10-14 Thread Martin Maechler


On Monday, April 13, 2015 at 17:43:54 UTC+2, Stefan Karpinski wrote:
>
> You are probably on Julia 0.3.x in which this function was called `names`. 
> You can either use `names` or use the Compat package which lets you use 
> names from the future.
>

So, if we have code (in ESS https://github.com/emacs/ESS/ ) which needs to 
work with both julia 0.3 and 0.4   we should use names() and  the 'Compat' 
package.
@Stefan Karpinski :  Can you paste julia code for using 'Compat'  such that 
it also works in 0.3 ?
 

>
> On Mon, Apr 13, 2015 at 11:37 AM, 'Antoine Messager' via julia-users <
> julia...@googlegroups.com > wrote:
>
>> Hi,
>>
>> I would like to find the field of SolverResults{Float64}. So according 
>> to the manual , I 
>> just need to use the method fieldnames but unfortunately it is not defined. 
>> What can I do?
>>
>> Thank you,
>> Antoine
>>
>
>

[julia-users] Semicolon behaviour

2015-10-14 Thread Cedric St-Jean
I keep running into cases where I expect the semi-colon to remove the 
output (return nothing), but something is returned. Am I misunderstanding 
the semi-colon's role as a separator, or is that a parser bug?

function whatev(fun)
fun()
end
u = whatev() do
2;
end
@show u
> u = 2




Re: [julia-users] Semicolon behaviour

2015-10-14 Thread Yichao Yu
On Wed, Oct 14, 2015 at 9:05 AM, Cedric St-Jean  wrote:
> I keep running into cases where I expect the semi-colon to remove the output
> (return nothing), but something is returned. Am I misunderstanding the
> semi-colon's role as a separator, or is that a parser bug?

The only thing that the semi-colon suppresses is the printing of the result
in the REPL (and IJulia). Nothing about the return value will change.

http://julia.readthedocs.org/en/latest/manual/getting-started/#getting-started

>
> function whatev(fun)
> fun()
> end
> u = whatev() do
> 2;
> end
> @show u
>> u = 2
>
>


[julia-users] Re: Semicolon behaviour

2015-10-14 Thread Tomas Lycken


To start with, semicolons act differently when you’re in the REPL or IJulia 
compared to in a script. In the REPL/IJulia, the semicolon suppresses 
output, so that there’s a difference between

julia> 3;

and

julia> 3
3

but in a script no output is produced without an explicit print statement, 
regardless of semicolons.

I think your confusion might stem from thinking that 2; would actually 
parse as begin 2; nothing end, but that is - as you noticed - not the case.

To suppress the return value in your code, you probably want this:

function whatev(fun)
fun()
end
u = whatev() do
2
nothing
end
@show u

// T

On Wednesday, October 14, 2015 at 3:05:30 PM UTC+2, Cedric St-Jean wrote:

I keep running into cases where I expect the semi-colon to remove the 
> output (return nothing), but something is returned. Am I misunderstanding 
> the semi-colon's role as a separator, or is that a parser bug?
>
> function whatev(fun)
> fun()
> end
> u = whatev() do
> 2;
> end
> @show u
> > u = 2
>
>
> ​


Re: [julia-users] Re: [ANN] JuliaIO and FileIO

2015-10-14 Thread MA Laforge
Hi all,

I myself have met a similar need for a file object.  That way, I can use 
the dispatch engine to overload the open function:
f = File(format"test.jpg")
s = open(f)

instead of what I consider to be less attractive solutions:
f = "test.jpg"
s = open_jpeg(f)
#or
s = MyModule.open(f) #My module's JPG reader

I like the idea of the FileIO module, but I am less a fan of having to 
register new filetypes with the module.  ...Yet I must admit there is 
something nice about how FileIO appears to automate type creation (Sorry: I 
do not fully understand the module yet).

I played a bit with FileIO, but something seems a little off about how I 
use/generate objects (I'm not sure why that is, though).

Anyways, I tried getting similar functionality (attempts at solving my own 
problems dispatching read/open on different file types) by relying more 
heavily on the Julia type system:
https://github.com/ma-laforge/FileIO2.jl

Comments:

   - FileIO2 does not have as many bells & whistles as FileIO - but I think 
   it has potential for that.
   - FileIO2 even has facilities to dispatch on different file encodings 
   (ex: binary, UTF8, ASCII, ...) if one would desire such a thing (Though I 
   prefer not to do so, in most cases).

Sample Code:
#Generate a reference to a text file:
file = File{TextFmt}("generictextfile.txt")

#Easily display the entire contents of an ASCII file:
typealias ASCIIFile File{ASCIIFmt} #For convenience
print(read(ASCIIFile(@__FILE__)))

...So I figured I would just put this out there in case the FileIO group 
finds something of value in the FileIO2 solution.

MA


Re: [julia-users] julia newb seeks critique of a "Defstruct" clone macro

2015-10-14 Thread Tim Menzies
On Wednesday, October 14, 2015 at 5:53:47 AM UTC-4, Mauro wrote:
>
> My package https://github.com/mauro3/Parameters.jl does something 
> similar to your example.  So you could have a look at that and get some 
> inspiration. 


 
Parameters.jl is amazing... so detailed

But, i fear, a bridge too far for a newb like me.  I promise to return to 
it in small N months to see if I "grok" it then.

-- @timmenzies




>
> Also xdump and Meta.show_sexpr are your friends in dissecting Expr. 
>
> On Tue, 2015-10-13 at 23:18, Tim Menzies  
> wrote: 
> > this is day3 of julia so pardon dumb questions 
> > 
> > i had some trouble  finding tutorials on julia macros. i've read the 
> > textbooks but i suspect there is somewhere else to look (i say that 
> since i 
> > would have thought that the following would exist in standard Julia, but 
> i 
> > could not find it). so, two questions 
> > 
> > Q1) where to find julia macro tutorials (beyond the standard manual)? 
> > 
> > Q2) here's my first julia macro to do something like LISP's defstruct 
> where 
> > i can define a type and its init contents all at one go. comments? traps 
> > for the unwary? improvements? 
> > 
> > # e.g. @def emp age=0 salary=1 
> > 
> > # autocreates a constructor emp0 that returns and emp 
> > 
> > # initialized with 0,1 
> > 
> > macro has(typename, pairs...) 
> > name = esc(symbol(string(typename,0))) 
> > tmp  = esc(symbol("tmp")) 
> > ones = [x.args[1] for x in pairs] 
> > twos = [x.args[2] for x in pairs] 
> > :(type $(typename) 
> >  $(ones...) 
> >   end; 
> >   function $(name)() 
> >  $(typename)($(twos...)) 
> >   end) 
> > end 
> > 
> > 
> > thanks! 
> > tiim menzies 
>


Re: [julia-users] julia newb seeks critique of a "Defstruct" clone macro

2015-10-14 Thread Tim Menzies
On Wednesday, October 14, 2015 at 11:40:19 AM UTC-4, Isaiah wrote:
>
> Q1) where to find julia macro tutorials (beyond the standard manual)?
>
>
> I'm not aware of any tutorials, but there were several JuliaCon talks that 
> are related to some of the nittier details if you are interested [1]. If 
> anything in the manual is unclear or could use more explanation, please do 
> make suggestions. (we're trying to make it accessible for people with no 
> macro background as well as advanced Lispers)
>

one specific thing that had me lost for a while was how to splat in a list 
of values (the old ,@ LISP operator)

for a while, this looked like the best option

ex = quote end
# in some loop:
push!(ex.args, someExpr)   # push each generated expression into the quote block

then i found the wonder of ellipses. e.g. in my code

type $(typename)
 $(ones...)
  end;

 
so I'd add that to the doco.


> Q2) ... improvements
>
>

two specific improvements would be

1)  "your top 10 most useful macros" into which I'd include @task

2) some macro examples in the doco whose complexities are a little more 
complex that @assert and @memo and a little less complex that 
Parameters.jl. 

just one idea on the second point. the other day for my SE modeling class I 
wrote a little python DSL for compartmental models. For full details, 
see https://github.com/txt/mase/blob/master/src/dsl101.md. But here's what 
a subclass of a Model class has to offer to make the compartments flow:

  def have(i):
    return o(C = S(100), D = S(0),
             q = F(0),  r = F(8), s = F(0))

  def step(i,dt,t,u,v):
    def saturday(x): return int(x) % 7 == 6
    v.C +=  dt*(u.q - u.r)
    v.D +=  dt*(u.r - u.s)
    v.q  =  70  if saturday(t) else 0
    v.s  =  u.D if saturday(t) else 0
    if t == 27: # special case (the day i forget)
      v.s = 0

what would be the **simplest** macros with which you could do something like 
this in a Julia DSL for compartmental models?

-- @timmenzies

>
>

[julia-users] Macro definition & call environments

2015-10-14 Thread juliatylors
Hi,

can someone explain what the macro definition environment and the macro call 
environment are?

- I was reading the following page: 
http://docs.julialang.org/en/release-0.4/manual/metaprogramming/#man-metaprogramming
  - it says in the Hygiene section: "Local variables are then renamed to be 
unique (using the gensym() function, which generates new symbols), and global 
variables are resolved within the macro definition environment."


In a later passage:

Thanks.


[julia-users] Call by name

2015-10-14 Thread Stefan Karpinski
No, Julia only supports one evaluation strategy: strict evaluation with
pass-by-sharing semantics.
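
If the goal is just to delay evaluation of an argument, the usual workaround
(a sketch, not a language feature) is to pass a zero-argument anonymous
function and call it only when the value is needed:

# `msg` is a thunk: it is evaluated only if `enabled` is true.
function maybe_log(enabled, msg)
    enabled && println(msg())
    nothing
end

expensive() = (sleep(1); "computed the hard way")

maybe_log(false, () -> expensive())   # expensive() never runs
maybe_log(true,  () -> expensive())   # runs only here

Macros are the other common way to get this effect, since they receive their
arguments unevaluated.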

On Thursday, October 15, 2015, > wrote:

> I mean evaluation strategy.
>
> for example, in scala you use :=> syntax for call by name evaluation
> strategy. is there a similar syntax in Julia?
> Thanks
>
> On Wednesday, October 14, 2015 at 1:01:45 PM UTC-7, Stefan Karpinski wrote:
>>
>> Do you mean the evaluation strategy? Or keyword arguments?
>>
>> On Wed, Oct 14, 2015 at 11:17 PM,  wrote:
>>
>>> Hi,
>>>
>>> I was wondering whether there is a syntax for call by name parameter
>>> passing ?
>>>
>>> Thanks.
>>>
>>
>>


Re: [julia-users] Re: Julia T-shirt and Sticker

2015-10-14 Thread Timothée Poisot
Hex stickers would be cool http://hexb.in/ -- and stickermule prints
them quite well, or so I've heard.

I'd be willing to chip in as well if the proceeds went to
supporting julia in any way.

On Wed, 14 Oct 2015 16:55:26 -0700 (PDT)
Ben Arthur  wrote:

> we really really need a t-shirt to celebrate the release of 0.4.
> stickers would be great too.  i'd even be willing to pay an
> exorbitant amount for these things if the proceeds went to
> juliaComputing.



-- 
Timothée Poisot, PhD

Professeur adjoint
Département des sciences biologiques
Université de Montréal

phone  : 514 343-7691
web: http://poisotlab.io
twitter: @PoisotLab
meeting: https://tpoisot.youcanbook.me/



Re: [julia-users] Julia T-shirt and Sticker

2015-10-14 Thread Stefan Karpinski
Those are cool stickers. Will have to work on a design.

On Thursday, October 15, 2015, Timothée Poisot  wrote:

> Hex stickers would be cool http://hexb.in/ -- and stickermule prints
> them quite well, or so I've heard.
>
> I'd be willing to chip in as well if the proceeds went to
> supporting julia in any way.
>
> On Wed, 14 Oct 2015 16:55:26 -0700 (PDT)
> Ben Arthur > wrote:
>
> > we really really need a t-shirt to celebrate the release of 0.4.
> > stickers would be great too.  i'd even be willing to pay an
> > exorbitant amount for these things if the proceeds went to
> > juliaComputing.
>
>
>
> --
> Timothée Poisot, PhD
>
> Professeur adjoint
> Département des sciences biologiques
> Université de Montréal
>
> phone  : 514 343-7691
> web: http://poisotlab.io
> twitter: @PoisotLab
> meeting: https://tpoisot.youcanbook.me/
>
>


Re: [julia-users] Julia T-shirt and Sticker

2015-10-14 Thread Stefan Karpinski
I've also wanted shirts for a long time so we should design something for
that. But I want them to be nice – something I'll wear around regularly.

On Thursday, October 15, 2015, Stefan Karpinski 
wrote:

> Those are cool stickers. Will have to work on a design.
>
> On Thursday, October 15, 2015, Timothée Poisot  > wrote:
>
>> Hex stickers would be cool http://hexb.in/ -- and stickermule prints
>> them quite well, or so I've heard.
>>
>> I'd be willing to chip in as well if the proceeds went to
>> supporting julia in any way.
>>
>> On Wed, 14 Oct 2015 16:55:26 -0700 (PDT)
>> Ben Arthur  wrote:
>>
>> > we really really need a t-shirt to celebrate the release of 0.4.
>> > stickers would be great too.  i'd even be willing to pay an
>> > exorbitant amount for these things if the proceeds went to
>> > juliaComputing.
>>
>>
>>
>> --
>> Timothée Poisot, PhD
>>
>> Professeur adjoint
>> Département des sciences biologiques
>> Université de Montréal
>>
>> phone  : 514 343-7691
>> web: http://poisotlab.io
>> twitter: @PoisotLab
>> meeting: https://tpoisot.youcanbook.me/
>>
>>


[julia-users] Macro definition and call environments

2015-10-14 Thread juliatylors
Hi,

can someone explain what the macro definition environment and the macro call 
environment are?

- I was reading the following page: 
http://docs.julialang.org/en/release-0.4/manual/metaprogramming/#man-metaprogramming
  - it says in the Hygiene section: "Local variables are then renamed to be 
unique (using the gensym() function, which generates new symbols), and global 
variables are resolved within the macro definition environment."
  - Do these global variables also refer to function calls like time(), 
println() within a macro?

In a later passage:
  - Here the user expression ex is a call to time, but not the same time 
function that the macro uses. It clearly refers to MyModule.time. "Therefore 
we must arrange for the code in ex to be resolved in the macro call 
environment."

I am kind of confused about how to think of the hygiene of a macro:
  - Should I be thinking in terms of "what is returned as an expr from the 
macro should not conflict with the code context where it is returned"?
  - Or should I be thinking in terms of "what is supplied to the macro 
should not conflict with what is inside the macro"?

Thanks


[julia-users] Re: Markdown.parse question

2015-10-14 Thread Steven G. Johnson
I wish it would use the same equation syntax as pandoc and Jupyter. You need a 
darn good reason to be different from the dominant implementation of equations 
in Markdown. 

(And "$$ is deprecated in LaTeX is not a good enough reason. Markdown isn't 
LaTeX.)

Re: [julia-users] Call by name

2015-10-14 Thread juliatylors
I mean evaluation strategy.

for example, in scala you use :=> syntax for call by name evaluation 
strategy. is there a similar syntax in Julia?
Thanks

On Wednesday, October 14, 2015 at 1:01:45 PM UTC-7, Stefan Karpinski wrote:
>
> Do you mean the evaluation strategy? Or keyword arguments?
>
> On Wed, Oct 14, 2015 at 11:17 PM,  
> wrote:
>
>> Hi,
>>
>> I was wondering whether there is a syntax for call by name parameter 
>> passing ?
>>
>> Thanks.
>>
>
>

Re: [julia-users] Prevent Escher from evaluating signals on start

2015-10-14 Thread Yakir Gagnon
Shashi,
In case you're still there:
How do I access results from a calculation done inside one consume in another
consume? I think I'm doing this wrong. I basically have some inputs, and
when they change (via one of your widgets) I run some functions on the
values of those inputs. These functions produce some results, and then I want
to access those results in another tab in combination with other inputs and
widgets etc...
Sorry for all the questions..


Yakir Gagnon
The Queensland Brain Institute (Building #79)
The University of Queensland
Brisbane QLD 4072
Australia

cell +61 (0)424 393 332
work +61 (0)733 654 089

On Thu, Oct 15, 2015 at 2:29 PM, Yakir Gagnon <12.ya...@gmail.com> wrote:

> *amazing...
>
>
> Yakir Gagnon
> The Queensland Brain Institute (Building #79)
> The University of Queensland
> Brisbane QLD 4072
> Australia
>
> cell +61 (0)424 393 332
> work +61 (0)733 654 089
>
> On Thu, Oct 15, 2015 at 2:29 PM, Yakir Gagnon <12.ya...@gmail.com> wrote:
>
>> OMG amaing. Thanks again ...!!!
>>
>>
>> Yakir Gagnon
>> The Queensland Brain Institute (Building #79)
>> The University of Queensland
>> Brisbane QLD 4072
>> Australia
>>
>> cell +61 (0)424 393 332
>> work +61 (0)733 654 089
>>
>> On Thu, Oct 15, 2015 at 2:17 PM, Shashi Gowda 
>> wrote:
>>
>>> Memoization means storing the result of a function and then using the
>>> stored value when the function is called with the same arguments.
>>>
>>> See https://github.com/simonster/Memoize.jl
>>>
>>> On Thu, Oct 15, 2015 at 9:16 AM, Yakir Gagnon <12.ya...@gmail.com>
>>> wrote:
>>>
 Thanks Shashi,
 But while that prevents stuff in another tab from running before that
 other tab is in focus, stuff runs twice when that tab gets focused...!?
 Maybe that's what you meant with "you will need to memorize the
 function you pass to lift..." but in that case I don't understand what you
 mean.

 Thanks again for the awesome work!


 Yakir Gagnon
 The Queensland Brain Institute (Building #79)
 The University of Queensland
 Brisbane QLD 4072
 Australia

 cell +61 (0)424 393 332
 work +61 (0)733 654 089

 On Thu, Oct 15, 2015 at 2:46 AM, Shashi Gowda 
 wrote:

> You can pass in an init=empty keyword argument to lift / consume, and
> the initial value of the signal will be an empty UI. The actual value will
> be computed next time the input signal udpates. You should also provide a
> typ=Any kwarg to lift / consume so that if you replace empty with 
> something
> that is not Empty, you do not get a type conversion error. something like:
>
> selected_tab = Input(1)
> vbox(
> tabs(["a", "b"]) >>> selected_tab,
> lift(selected_tab, typ=Any, init=empty) do page_no
> // compute page at page_no
> end
> )
>
> may be what you need. To avoid recomputing when you switch to tab 1
> and then to tab 2 and then back to tab 1, you will need to memoize the
> function you pass to lift...
>
>
> On Wed, Oct 14, 2015 at 6:06 PM, Yakir Gagnon <12.ya...@gmail.com>
> wrote:
>
>> Title...
>> I have a bunch of tabs and it all takes time to load. I don't need
>> the functions to evaluate before the user presses on something. Any easy
>> way i can prevent all the signals from running their functions when the
>> pages load up?
>
>
>

>>>
>>
>


Re: [julia-users] Prevent Escher from evaluating signals on start

2015-10-14 Thread Yakir Gagnon


Solved it!
All I needed to do was assign the result from the first consume to a variable, 
which is then accessible anywhere else. So:

A = consume( . . . ) do o
    # ... compute ...
    result
end

consume( . . . ) do o
    # ... A is available here ...
end