Re: [julia-users] Cleaner way of using Val types?

2016-03-30 Thread Yichao Yu
On Wed, Mar 30, 2016 at 10:51 PM, Chris <7hunderstr...@gmail.com> wrote:
> Here's my current dilemma, (hopefully) explained by this simple example:
>
> I have a composite type that has a bool field:
>
> type A
>   x::Float64
>   b::Bool
> end
>
> I have a function with different behavior based on the value of A.b. The
> manual suggests the following solution:
>
> function dothing(a::A, ::Type{Val{false}})
>...
> end
>
> function dothing(a::A, ::Type{Val{true}})
>   ...
> end
>
> That's fine, but now I have to call the function as dothing(a, Val{a.b}),

Don't do this, just use a branch. Never construct a type with a type
parameter whose value is only determined at runtime.
The docs should be made very clear to discourage this!

> which just strikes me as slightly redundant/verbose. Is there some way to
> make this more compact, i.e. just dothing(a), while still avoiding the check
> of a.b inside the function? Perhaps parameterizing the type A itself?
>
> Hopefully I made myself relatively clear. Thanks in advance.
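A minimal sketch of the branch-based approach (using the names from the example above; the method bodies are placeholders): branch once on the runtime value, so each `Val` method is still compiled separately, but no type is ever constructed from a runtime value at the call site.

```julia
type A            # Julia 0.4 syntax; `mutable struct` in 1.0+
    x::Float64
    b::Bool
end

dothing(a::A, ::Type{Val{true}})  = a.x + 1.0   # placeholder body
dothing(a::A, ::Type{Val{false}}) = a.x - 1.0   # placeholder body

# Branch explicitly instead of writing dothing(a, Val{a.b}):
dothing(a::A) = a.b ? dothing(a, Val{true}) : dothing(a, Val{false})
```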


[julia-users] Cleaner way of using Val types?

2016-03-30 Thread Chris
Here's my current dilemma, (hopefully) explained by this simple example:

I have a composite type that has a bool field:

type A
  x::Float64
  b::Bool
end

I have a function with different behavior based on the value of A.b. The 
manual suggests the following solution:

function dothing(a::A, ::Type{Val{false}})
   ...
end

function dothing(a::A, ::Type{Val{true}})
  ...
end

That's fine, but now I have to call the function as dothing(a, Val{a.b}), 
which just strikes me as slightly redundant/verbose. Is there some way to 
make this more compact, i.e. just dothing(a), while still avoiding the 
check of a.b inside the function? Perhaps parameterizing the type A itself?

Hopefully I made myself relatively clear. Thanks in advance.


[julia-users] Re: Error: Kernel dead in Jupyter

2016-03-30 Thread Zahirul ALAM
Yea. This has fixed it. Thanks a lot for looking into it promptly. 



On Tuesday, 29 March 2016 06:56:46 UTC-4, Tony Kelman wrote:
>
> This should be resolved upstream by 
> https://build.opensuse.org/package/rdiff/windows:mingw:win64/mingw64-runtime?linkrev=base=40
> In a few hours, when everything is finished rebuilding, this should start 
> working again with just a WinRPM.update() and maybe also 
> WinRPM.install("zeromq")



Re: [julia-users] Access stack frame address?

2016-03-30 Thread Yichao Yu
On Mar 30, 2016 6:22 PM, "Yichao Yu"  wrote:
>
>
> On Mar 30, 2016 6:21 PM, "Laurent Bartholdi" 
wrote:
> >
> > Hi,
> > Is there a way to obtain the address of the current stack frame (the
ebp register on x86 processors)?
> >
> > In GCC, there's the builtin primitive __builtin_frame_address() that
does precisely that.
>
> Why do you want this?
>

It's possible but should not be done in general.

> >
> > Many thanks in advance, Laurent


Re: [julia-users] Access stack frame address?

2016-03-30 Thread Yichao Yu
On Mar 30, 2016 6:21 PM, "Laurent Bartholdi" 
wrote:
>
> Hi,
> Is there a way to obtain the address of the current stack frame (the ebp
register on x86 processors)?
>
> In GCC, there's the builtin primitive __builtin_frame_address() that does
precisely that.

Why do you want this?

>
> Many thanks in advance, Laurent


[julia-users] Access stack frame address?

2016-03-30 Thread Laurent Bartholdi
Hi,
Is there a way to obtain the address of the current stack frame (the ebp 
register on x86 processors)?

In GCC, there's the builtin primitive __builtin_frame_address() that does 
precisely that.

Many thanks in advance, Laurent


Re: [julia-users] Re: regression from 0.43 to 0.5dev, and back to 0.43 on fedora23

2016-03-30 Thread Johannes Wagner


On Wednesday, March 30, 2016 at 1:58:23 PM UTC+2, Milan Bouchet-Valat wrote:
>
> On Wednesday, 30 March 2016 at 04:43 -0700, Johannes Wagner wrote: 
> > Sorry for not having expressed myself clearly, I meant the latest 
> > version of fedora to work fine (24 development). I always used the 
> > latest julia nightly available on the copr nalimilan repo. Right now 
> > that is: 0.5.0-dev+3292, Commit 9d527c5*, all use 
> > LLVM: libLLVM-3.7.1 (ORCJIT, haswell) 
> > 
> > peakflops on all machines (hardware identical) is ~1.2..1.5e11.   
> > 
> > Fedora 22 & 23 with julia 0.5 are ~50% slower than 0.4; only on Fedora 
> > 24 is julia 0.5 faster compared to julia 0.4. 
> Could you try to find a simple code to reproduce the problem? In 
> particular, it would be useful to check whether this comes from 
> OpenBLAS differences or whether it also happens with pure Julia code 
> (typical operations which depend on BLAS are matrix multiplication, as 
> well as most of linear algebra). Normally, 0.4 and 0.5 should use the 
> same BLAS, but who knows... 
>

Well, that's what I did, and the 3 simple calls inside the loop are more or 
less the same speed; only the whole loop seems slower. See my code sample from 
my answer of March 8th (the code gets proportionally faster when exp(im .* 
dotprods) is replaced by cis(dotprods)). 
So I don't know what else I can do... 

> Can you also confirm that all versioninfo() fields are the same for all 
> three machines, both for 0.4 and 0.5? We must envision the possibility 
> that the differences actually come from 0.4. 


Oh, right! I just noticed that my Fedora 24 machine is an Ivy Bridge, which 
works fast:

Julia Version 0.5.0-dev+3292
Commit 9d527c5* (2016-03-28 06:55 UTC)
Platform Info:
  System: Linux (x86_64-redhat-linux)
  CPU: Intel(R) Core(TM) i5-3550 CPU @ 3.30GHz
  WORD_SIZE: 64
  BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Sandybridge)
  LAPACK: libopenblasp.so.0
  LIBM: libopenlibm
  LLVM: libLLVM-3.7.1 (ORCJIT, ivybridge)

and the other ones with Fedora 22/23 are Haswell, which run slow:

Julia Version 0.5.0-dev+3292
Commit 9d527c5* (2016-03-28 06:55 UTC)
Platform Info:
  System: Linux (x86_64-redhat-linux)
  CPU: Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz
  WORD_SIZE: 64
  BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Haswell)
  LAPACK: libopenblasp.so.0
  LIBM: libopenlibm
  LLVM: libLLVM-3.7.1 (ORCJIT, haswell)

I just booted a Fedora 23 on the Ivy Bridge machine and it's also fast. 
 
Now if I use Julia 0.4.5 on both architectures:

Julia Version 0.4.5
Commit 2ac304d* (2016-03-18 00:58 UTC)
Platform Info:
  System: Linux (x86_64-redhat-linux)
  CPU: Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz
  WORD_SIZE: 64
  BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Haswell)
  LAPACK: libopenblasp.so.0
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

and:

Julia Version 0.4.5
Commit 2ac304d* (2016-03-18 00:58 UTC)
Platform Info:
  System: Linux (x86_64-redhat-linux)
  CPU: Intel(R) Core(TM) i5-3550 CPU @ 3.30GHz
  WORD_SIZE: 64
  BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Sandybridge)
  LAPACK: libopenblasp.so.0
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

there is no speed difference, apart from the ~10% or so from the faster 
Haswell machine. So perhaps it is specific to the Haswell hardware target, 
with the change from LLVM 3.3 to 3.7.1? Is there anything else I could provide?

Best, Johannes

 Regards 


>
> > > On Wednesday, 16 March 2016 at 09:25 -0700, Johannes Wagner wrote:  
> > > > just a little update. Tested some other fedoras: Fedora 22 with 
> llvm  
> > > > 3.8 is also slow with julia 0.5, whereas a fedora 24 branch with 
> llvm  
> > > > 3.7 is faster on julia 0.5 compared to julia 0.4, as it should be  
> > > > (speedup from inner loop parts translated into speedup to whole  
> > > > function).  
> > > >  
> > > > don't know if anyone cares about that... At least the latest 
> version  
> > > > seems to work fine, hope it stays like this into the final fedora 
> 24  
> > > What's the "latest version"? git built from source or RPM nightlies?  
> > > With which LLVM version for each?  
> > > 
> > > If from the RPMs, I've switched them to LLVM 3.8 for a few days, and  
> > > went back to 3.7 because of a build failure. So that might explain 
> the  
> > > difference. You can install the last version which built with LLVM 
> 3.8  
> > > manually from here:  
> > > 
> https://copr-be.cloud.fedoraproject.org/results/nalimilan/julia-nightlies/fedora-23-x86_64/00167549-julia/
>   
>
> > > 
> > > It would be interesting to compare it with the latest nightly with 
> 3.7.  
> > > 
> > > 
> > > Regards  
> > > 
> > > 
> > > 
> > > > > hey guys,  
> > > > > I just experienced something weird. I have some code that runs 
> fine  
> > > > > on 0.43, then I updated to 0.5dev to test the new Arrays, run 
> same  
> > > > > code and noticed it got about ~50% slower. Then I downgraded back  
> > > > > to 0.43, ran the old code, but speed remained slow. I noticed 
> while  
> > > > > reinstalling 0.43, 

Re: [julia-users] git protocol packages

2016-03-30 Thread Tony Kelman
libgit2 can use the OS X native TLS library on Mac, so we shouldn't need 
OpenSSL there. Not sure why OpenSSH is getting confused. I'm pretty sure there 
are HTTPS keychain helpers out there, if it's entering your password for 
private repos that you're worried about.

Re: [julia-users] git protocol packages

2016-03-30 Thread Isaiah Norton
The problem is likely that libgit2 detects libssh2 with CMake's
PKG_CHECK_MODULES. This uses `pkg-config`, so it finds whatever is already
on your system rather than the built-from-source version. There may be some
CMake magic to override that behavior, but probably the easiest way around
is to pass the required variables to CMake directly when configuring
libgit2 from the Julia Makefile. For the variables, see:

https://github.com/libgit2/libgit2/blob/2f0450f4d635358f6da5d174c128b9ed1059bbf8/CMakeLists.txt#L344-L360

If LIBSSH2_FOUND is set before PKG_CHECK_MODULES, then that macro will
short-circuit without setting any variables.
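Under that reading, the override from the Julia Makefile might look roughly like this; the variable names are taken from the linked CMakeLists.txt, but the paths and the exact invocation are hypothetical:

```shell
# Pre-seed the libssh2 variables so PKG_CHECK_MODULES short-circuits
# and the build-tree libssh2 is used instead of the system one.
cmake path/to/srccache/libgit2 \
    -DLIBSSH2_FOUND=TRUE \
    -DLIBSSH2_INCLUDE_DIRS=$PREFIX/include \
    -DLIBSSH2_LIBRARY_DIRS=$PREFIX/lib \
    -DLIBSSH2_LIBRARIES=ssh2
```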

On Wed, Mar 30, 2016 at 3:21 PM, Erik Schnetter  wrote:

> I tried installing libssh2 automatically, but failed
> due to "use of undeclared identifier 'LIBSSH2_KNOWNHOST_KEY_UNKNOWN'".
> Apparently, the build process picks up a system include directory that
> has another, older libssh2 installed, while detecting where my OpenSSL
> library is installed. This looks messy, so I gave up for the time
> being. Maybe the solution is to distribute OpenSSL as well. Or to
> disable OpenSSL, and use MbedTLS instead (that we also distribute).
>
> If you are using a key chain, then the ssh agent should start
> automatically when you log in. This works out of the box for me on OS
> X, presumably using the OS X keychain.
>
> Thanks for the pointers.
>
> -erik
>
>
> On Wed, Mar 30, 2016 at 9:54 AM, Isaiah Norton 
> wrote:
> > I'm not sure if this is supposed to be officially supported yet, but I
> was
> > able to get ssh:// to work on OS X:
> >
> > 1. `brew install libssh2`
> > 2. from julia root dir: `cd deps && make configure-libgit2 VERBOSE=1`
> > 3. copy the cmake command printed by above, and re-run it manually. For
> some
> > reason PKG_CONFIG_MODULE didn't detect libgit2 the first time
> (discovered by
> > trial-and-error, verified by `make distclean-libgit2` and doing the
> process
> > again).
> >
> >   it should look something like:
> >
> >  `cmake {HOME}/git/jl71/deps/srccache/libgit2/
> > -DCMAKE_INSTALL_PREFIX:PATH={HOME}/git/jl71/usr
> -DCMAKE_VERBOSE_MAKEFILE=ON
> > -DCMAKE_C_COMPILER="clang" -DCMAKE_C_COMPILER_ARG1="-m64 "
> > -DCMAKE_CXX_COMPILER="clang++" -DCMAKE_CXX_COMPILER_ARG1="-m64 "
> > -DTHREADSAFE=ON -DCMAKE_BUILD_TYPE=Release`
> >
> >
> > 4. in julia root directory: `make clean && make`
> > 5. start ssh-agent. in bash: "$ eval `ssh-agent`"
> > 6. run something that causes ssh-agent to unlock the key, for example
> > regular command line git clone'ing a repository via ssh.
> >
> > After those steps, the following works:
> >
> > julia> Pkg.clone("ssh://g...@github.com/johnmyleswhite/ASCIIPlots.jl.git")
> >
> > If I neglect step 6, then the callback ("credentials_cb") gets called
> > indefinitely (noted via print debugging), so it seems that we are missing
> > some step to make ssh-agent unlock the key pair (which happens via system
> > prompt on OS X).
> >
> > So: it looks like this is almost-supported, but we need to fix build
> issues
> > and teach the libgit2 wrapper to set up ssh-agent credentials correctly
> on
> > its own (at least on OS X). Presumably the situation is the same or
> better
> > on Linux. On Windows, building against libssh2 is explicitly disabled by
> our
> > Makefile.
> >
> >
> > On Mon, Mar 28, 2016 at 4:01 PM, Blake Johnson  >
> > wrote:
> >>
> >> Is there a way to still support git protocol (as opposed to https)
> >> packages with the new libgit2 based package system? I have a fair
> number of
> >> private packages on a local server, and it sure would be nice to be
> able to
> >> fetch those with SSH key authentication.
> >
> >
>
>
>
> --
> Erik Schnetter 
> http://www.perimeterinstitute.ca/personal/eschnetter/
>


Re: [julia-users] git protocol packages

2016-03-30 Thread Erik Schnetter
I tried installing libssh2 automatically, but failed
due to "use of undeclared identifier 'LIBSSH2_KNOWNHOST_KEY_UNKNOWN'".
Apparently, the build process picks up a system include directory that
has another, older libssh2 installed, while detecting where my OpenSSL
library is installed. This looks messy, so I gave up for the time
being. Maybe the solution is to distribute OpenSSL as well. Or to
disable OpenSSL, and use MbedTLS instead (that we also distribute).

If you are using a key chain, then the ssh agent should start
automatically when you log in. This works out of the box for me on OS
X, presumably using the OS X keychain.

Thanks for the pointers.

-erik


On Wed, Mar 30, 2016 at 9:54 AM, Isaiah Norton  wrote:
> I'm not sure if this is supposed to be officially supported yet, but I was
> able to get ssh:// to work on OS X:
>
> 1. `brew install libssh2`
> 2. from julia root dir: `cd deps && make configure-libgit2 VERBOSE=1`
> 3. copy the cmake command printed by above, and re-run it manually. For some
> reason PKG_CONFIG_MODULE didn't detect libgit2 the first time (discovered by
> trial-and-error, verified by `make distclean-libgit2` and doing the process
> again).
>
>   it should look something like:
>
>  `cmake {HOME}/git/jl71/deps/srccache/libgit2/
> -DCMAKE_INSTALL_PREFIX:PATH={HOME}/git/jl71/usr -DCMAKE_VERBOSE_MAKEFILE=ON
> -DCMAKE_C_COMPILER="clang" -DCMAKE_C_COMPILER_ARG1="-m64 "
> -DCMAKE_CXX_COMPILER="clang++" -DCMAKE_CXX_COMPILER_ARG1="-m64 "
> -DTHREADSAFE=ON -DCMAKE_BUILD_TYPE=Release`
>
>
> 4. in julia root directory: `make clean && make`
> 5. start ssh-agent. in bash: "$ eval `ssh-agent`"
> 6. run something that causes ssh-agent to unlock the key, for example
> regular command line git clone'ing a repository via ssh.
>
> After those steps, the following works:
>
> julia> Pkg.clone("ssh://g...@github.com/johnmyleswhite/ASCIIPlots.jl.git")
>
> If I neglect step 6, then the callback ("credentials_cb") gets called
> indefinitely (noted via print debugging), so it seems that we are missing
> some step to make ssh-agent unlock the key pair (which happens via system
> prompt on OS X).
>
> So: it looks like this is almost-supported, but we need to fix build issues
> and teach the libgit2 wrapper to set up ssh-agent credentials correctly on
> its own (at least on OS X). Presumably the situation is the same or better
> on Linux. On Windows, building against libssh2 is explicitly disabled by our
> Makefile.
>
>
> On Mon, Mar 28, 2016 at 4:01 PM, Blake Johnson 
> wrote:
>>
>> Is there a way to still support git protocol (as opposed to https)
>> packages with the new libgit2 based package system? I have a fair number of
>> private packages on a local server, and it sure would be nice to be able to
>> fetch those with SSH key authentication.
>
>



-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Artigos para ler

2016-03-30 Thread Abel Siqueira
Hello,

you sent this e-mail to the Julia list. I think it was a mistake.

Regards,

Abel Soares Siqueira

On 30 March 2016 at 09:47, Eduardo Lenz 
wrote:

> Hi... I'm putting a quasi-Newton method into our program here,
> and I enlarged the database by superimposing random noise.
>
> Let's see whether it's worth splitting the database into
> training/validation sets...
>
> Now on to the papers!
>
> Look at this one!
> http://icsl.gatech.edu/aa/images/7/72/ASOC_Journal.pdf
>
> http://www.cs.cornell.edu/selman/papers/pdf/03.comp-elec-agri.neural.pdf
>
> http://ieeexplore.ieee.org/stamp/stamp.jsp?tp==5692335
>
> http://www.ncbi.nlm.nih.gov/pubmed/8573804
>
>
> https://www.researchgate.net/publication/252044043_Application_of_neural_networks_for_failure_detection_on_wind_turbines
>
>
> --
>
>  Prof. Dr. Eduardo Lenz Cardoso
>  Associate Professor
>  Coordinator of the Graduate Program in Mechanical Engineering - UDESC /
> CCT
>  (Coordenador do Programa de Pós-Graduação em Engenharia Mecânica -
> UDESC/CCT)
>  +55 47 84072162
>  UDESC - Departamento de Engenharia Mecanica
>  Campus Universitario Prof. Avelino Marcante, 200
>  Bom Retiro - Joinville - SC - Brasil
>  CEP: 89223-100
>  E-mail: eduardobarpl...@gmail.com
>  eduardo.card...@udesc.br
> 
>


[julia-users] Julia command line usage: tool versus library

2016-03-30 Thread Cameron McBride
Simple question (I think): Is there an easy or idiomatic way to
differentiate if a julia file is being run directly from when it's included?

I'm looking for the equivalent of this in ruby:

if $0 == __FILE__
  puts "Running file"
else
  puts "Included file"
end

Thanks.

Cameron
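There was no exact equivalent in Julia 0.4, but in later Julia versions the usual idiom compares `PROGRAM_FILE` (the path of the script being executed; empty at the REPL) with `@__FILE__`:

```julia
if abspath(PROGRAM_FILE) == @__FILE__
    println("Running file")
else
    println("Included file")
end
```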


[julia-users] Re: Load error:UdefVarError : Model not defined in include boot.jl:261

2016-03-30 Thread tannirind
Yes, I got it. Thank you. 
BR

On Wednesday, March 30, 2016 at 5:42:54 PM UTC+2, Iain Dunning wrote:
>
> You need to use the correct case: using JuMP
>
> On Wednesday, March 30, 2016 at 9:19:10 AM UTC-4, tann...@gmail.com wrote:
>>
>> Thank you Kelman,
>>
>> I was trying with this code; here "using JuMp" is already mentioned
>>
>> using jump
>> using Ipopt
>> m=Model(solver=IpoptSolver())
>> @defVar(m, x, start = 0.0)
>> @defVar(m, y, start = 0.0)
>> @setNLObjective(m, Min, (1-x)^2 + 100(y-x^2)^2)
>>
>> solve(m)
>>
>> println("x = ", getValue(x), "y = ", getValue(y))
>>
>> BR
>>
>>
>>
>> On Wednesday, March 30, 2016 at 2:11:39 PM UTC+2, Tony Kelman wrote:
>>>
>>> You need
>>> using JuMP
>>
>>
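For reference, the snippet from this thread with the module name's case fixed (JuMP 0.x macro names, as in the original post):

```julia
using JuMP    # module names are case-sensitive: "jump" and "JuMp" won't load
using Ipopt

m = Model(solver=IpoptSolver())
@defVar(m, x, start = 0.0)
@defVar(m, y, start = 0.0)
@setNLObjective(m, Min, (1 - x)^2 + 100(y - x^2)^2)

solve(m)

println("x = ", getValue(x), " y = ", getValue(y))
```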

[julia-users] Re: Does the Julia global RNG have a name?

2016-03-30 Thread Tim Wheeler
Ha! It is Base.GLOBAL_RNG

On Wednesday, March 30, 2016 at 9:42:32 AM UTC-7, Tim Wheeler wrote:
>
> Hello Julia Users,
>
> Does the Julia global RNG have a name? Is it possible to do something as 
> follows?
>
> function sample_batch!(batch, dataset, rng::AbstractRNG = JULIA_GLOBAL_RNG
> )
> ...
> end
>
> Do I have to do something like this?
>
> function sample_batch!(rng::AbstractRNG, batch, dataset)
> ...
> end
>
> function sample_batch!(batch, dataset)
> ...
> end
>
> Thanks!
> -Tim
>
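With that name, the default-argument version from the question works directly on 0.4/0.5 (in modern Julia the global RNG moved to the Random stdlib, e.g. `Random.default_rng()`). The body below is a hypothetical placeholder:

```julia
function sample_batch!(batch, dataset, rng::AbstractRNG = Base.GLOBAL_RNG)
    # hypothetical body: fill batch with randomly chosen dataset entries
    for i in eachindex(batch)
        batch[i] = dataset[rand(rng, 1:length(dataset))]
    end
    batch
end
```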


[julia-users] Capturing Error messages as strings

2016-03-30 Thread Matthew Pearce

Anyone know how to capture error messages as strings? This is for debugging 
code run on remote machines, as the trace on the master node appears to 
be incomplete.

Note I am *not* asking about the atexit() behaviour. I am asking about 
capturing non-fatal errors like sqrt(-1) rather than segfaults etc.

Much appreciated

Matthew
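One approach (not from this thread; a sketch using `sprint` and the three-argument `showerror`, both of which exist in 0.4 and later): wrap the call in try/catch and render the exception, together with the backtrace, into a String.

```julia
function error_to_string(f)
    try
        f()
        ""                                        # no error occurred
    catch err
        sprint(showerror, err, catch_backtrace()) # message plus stack trace
    end
end

msg = error_to_string(() -> sqrt(-1.0))   # DomainError rendered as a String
```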


[julia-users] Does the Julia global RNG have a name?

2016-03-30 Thread Tim Wheeler
Hello Julia Users,

Does the Julia global RNG have a name? Is it possible to do something as 
follows?

function sample_batch!(batch, dataset, rng::AbstractRNG = JULIA_GLOBAL_RNG)
...
end

Do I have to do something like this?

function sample_batch!(rng::AbstractRNG, batch, dataset)
...
end

function sample_batch!(batch, dataset)
...
end

Thanks!
-Tim


[julia-users] Re: Load error:UdefVarError : Model not defined in include boot.jl:261

2016-03-30 Thread Iain Dunning
You need to use the correct case: using JuMP

On Wednesday, March 30, 2016 at 9:19:10 AM UTC-4, tann...@gmail.com wrote:
>
> Thank you Kelman,
>
> I was trying with this code; here "using JuMp" is already mentioned
>
> using jump
> using Ipopt
> m=Model(solver=IpoptSolver())
> @defVar(m, x, start = 0.0)
> @defVar(m, y, start = 0.0)
> @setNLObjective(m, Min, (1-x)^2 + 100(y-x^2)^2)
>
> solve(m)
>
> println("x = ", getValue(x), "y = ", getValue(y))
>
> BR
>
>
>
> On Wednesday, March 30, 2016 at 2:11:39 PM UTC+2, Tony Kelman wrote:
>>
>> You need
>> using JuMP
>
>

Re: [julia-users] Re: dispatch on type of tuple from ...

2016-03-30 Thread Tamas Papp
Hi Bill,

It works for a single argument, but not for multiple ones. I have a
self-contained example:

--8<---cut here---start->8---
module Foos

type Foo{T <: Tuple} 
end 

m1{T}(f::Foo{Tuple{T}}, index::T...) = 42 # suggested by Bill Hart

m2{T}(f::Foo{T}, index::T...) = 42 # what I thought would work

_m3{T}(f::Foo{T}, index::T) = 42 # indirect solution
m3{T}(f::Foo{T}, index...) = _m3(f, index)

end

f1 = Foos.Foo{Tuple{Int}}()

Foos.m1(f1, 9)  # works
Foos.m2(f1, 9)  # won't work
Foos.m3(f1, 9)  # works

f2 = Foos.Foo{Tuple{Int,Symbol}}()

Foos.m1(f2, 9, :a)   # won't work
Foos.m2(f2, 9, :a)   # won't work
Foos.m3(f2, 9, :a)   # indirect, works
--8<---cut here---end--->8---

For more than one argument, only the indirect solution works.

Best,

Tamas

On Wed, Mar 23 2016, via julia-users wrote:

> The following seems to work, but I'm not sure whether it was what you 
> wanted:
>
> import Base.getindex 
>
> type Foo{T <: Tuple} 
> end 
>
> getindex{T}(f::Foo{Tuple{T}}, index::T...) = 42
>
> Foo{Tuple{Int}}()[9]
>
> Bill.
>
> On Wednesday, 23 March 2016 14:38:20 UTC+1, Tamas Papp wrote:
>>
>> Hi, 
>>
>> My understanding was that ... in a method argument forms a tuple, but I 
>> don't know how to dispatch on that. Self-contained example: 
>>
>> import Base.getindex 
>>
>> type Foo{T <: Tuple} 
>> end 
>>
>> getindex{T}(f::Foo{T}, index::T...) = 42 
>>
>> Foo{Tuple{Int}}()[9] ## won't match 
>>
>> ## workaround with wrapper: 
>>
>> _mygetindex{T}(f::Foo{T}, index::T) = 42 
>> mygetindex{T}(f::Foo{T}, index...) = _mygetindex(f, index) 
>>
>> mygetindex(Foo{Tuple{Int}}(), 9) 
>>
>> Is it possible to make it work without a wrapper in current Julia? 
>>
>> Best, 
>>
>> Tamas 
>>
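A likely explanation for why `m2` fails: in `m2{T}(f::Foo{T}, index::T...)`, the annotation `::T...` constrains each individual trailing argument to have type `T` (here a whole tuple type such as `Tuple{Int,Symbol}`), not the collected tuple of arguments. The indirect wrapper works because it first collects the varargs and then dispatches on the resulting tuple's type:

```julia
type Foo{T <: Tuple}
end

# The wrapper collects the varargs into a tuple; the inner method can then
# match that tuple against the type parameter directly:
_m{T}(f::Foo{T}, index::T) = 42
m{T}(f::Foo{T}, index...)  = _m(f, index)

m(Foo{Tuple{Int,Symbol}}(), 9, :a)   # (9, :a) isa Tuple{Int,Symbol}, so it matches
```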



[julia-users] Re: Almost at 500 packages!

2016-03-30 Thread Adrian Salceanu
James, that's great. 

I'd say the most efficient way of doing this is if I finish the API and you 
do the REPL package for querying the API and displaying the results. We can 
discuss the data and the structure of the responses and I can provide you 
with mock responses, so you won't have to wait while I develop the API. 

We can catch up over email, I'll follow up with more details. 

On Wednesday, 30 March 2016 at 14:19:24 UTC+2, James Fairbanks wrote:
>
> I am interested in this project and have some time on my hands over the 
> next few weeks.
>
> On Wednesday, March 30, 2016 at 5:55:16 AM UTC-4, Adrian Salceanu wrote:
>>
>> I begun working on such a tool a few weeks ago. 
>>
>> A) It goes over the METADATA (https://github.com/JuliaLang/METADATA.jl) 
>> for all the registered packages and then B) uses the GitHub API to get the 
>> README and additional stats (contributions, stars, followers, etc). 
>> Planning on C) exposing this as a REST-ful API and D) building a package 
>> that can be used from the REPL to search for packages using the API. 
>>
>> A and B are done (though not entirely production ready yet) - C and D are 
>> yet to come. 
>>
>> I'm building this as an application of a bigger project I'm working on - 
>> a full stack web framework. This is going to take more time (I have the ORM 
>> at 90% with basic controllers support and routing and serving via Mux) but 
>> I can extract just the requirements for this and make it available in a few 
>> weeks. 
>>
>> If anybody wants to contribute with ideas or dev time, I'd be happy to 
>> set up a repo ASAP. 
>>
>>
>> On Tuesday, 20 January 2015 at 16:32:45 UTC+1, Iain Dunning wrote:
>>>
>>> Just noticed on http://pkg.julialang.org/pulse.html that we are at 499 
>>> registered packages with at least one version tagged that are Julia 0.4-dev 
>>> compatible (493 on Julia 0.3).
>>>
>>> Thanks to all the package developers for their efforts in growing the 
>>> Julia package ecosystem!
>>>
>>>

Re: [julia-users] The Arrays are hard to pre-allocate for me, are they possible to be pre-allocated?

2016-03-30 Thread Lutfullah Tomak
Surely not the main issue here, but doing `length( ϕ[:, 1] )` allocates a 
whole column for nothing. Use `size(ϕ, 1)` instead.
Also, `for j1 in [1:n1-1; n2+1:ns]` can be split into 2 for loops to 
avoid allocations and memory loads. 
j1 is off by 1 in the indexing, so just decrease the ranges by 1, to `0:n1-2` 
and `n2:ns-1`, and use j1 instead of `j1-1` (the same for the previous loop).
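A sketch of both suggestions, using the variable names from the original code (loop bodies elided):

```julia
ns = size(ϕ, 1)    # same value as length(ϕ[:, 1]), but allocates no column

# Instead of materializing the concatenated array [1:n1-1; n2+1:ns],
# iterate the two ranges separately:
for j1 in 1:n1-1
    # ... loop body ...
end
for j1 in n2+1:ns
    # ... same body ...
end
```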

On Wednesday, March 30, 2016 at 8:58:55 AM UTC+3, 博陈 wrote:
>
> I rewrote my code to manually loop over 1:n, and the pre-allocation problem 
> is solved. I also added some parentheses as you suggested; that helps, but 
> not very much. 
>
> On Wednesday, 30 March 2016 at 1:56:47 AM UTC+8, Yichao Yu wrote:
>>
>>
>>
>> On Tue, Mar 29, 2016 at 1:50 PM, 博陈  wrote:
>>
>>> Additionally, the allocation problem is not solved. I guess this 
>>> http://julia-programming-language.2336112.n4.nabble.com/How-to-avoid-temporary-arrays-being-created-in-a-function-td14492.html
>>>  might 
>>> be helpful, but I just don't know how to change my code.
>>>
>>>
>> The only place you create temporary arrays, according to your profile, is 
>> the `sum(A[1:n])`, and you just need to loop over 1:n manually instead of 
>> creating a subarray
>>  
>>
>>>
>>>
>>> On Wednesday, 30 March 2016 at 1:15:07 AM UTC+8, Yichao Yu wrote:
>>>


 On Tue, Mar 29, 2016 at 12:43 PM, 博陈  wrote:

> I tried the built-in profiler, and found that the problem lies in the lines 
> I marked with **; the result is shown below.
> That proved my guess. How can I pre-allocate these arrays, if I don't 
> want to divide this code into several parts that calculate these arrays 
> separately? 
>

 I don't understand what you mean by `divide this code into several 
 parts that calculate these arrays separately`
  

> | lines | backtrace |
> |   169 |      9011 | ***
> |   173 |      1552 |
> |   175 |      2604 |
> |   179 |      2906 |
> |   181 |      1541 |
> |   192 |      4458 |
> |   211 |     13332 |
> |   214 |      8431 |
> |   218 |     15871 | ***
> |   221 |      2538 |
>
>
> On Tuesday, 29 March 2016 at 9:27:27 PM UTC+8, Stefan Karpinski wrote:
>>
>> Have you tried:
>>
>> (a) calling @code_warntype on your function
>> (b) using the built-in profiler?
>>
>>
>> On Tue, Mar 29, 2016 at 9:23 AM, 博陈  wrote:
>>
>>> First of all, have a look at the result.
>>>
>>>
>>> 
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> My code calculates the evolution of 1-d 2-electron system in the 
>>> electric field, some variables are calculated during the evolution.
>>> According to the result of @time evolution, my code must have a 
>>> pre-allocation problem. Before you see the long code, i suggest that 
>>> the 
>>> hotspot might be around the Arrays prop_e, \phio, pp. I have learnt 
>>> that I 
>>> can use m = Array(Float64, 1) outside a "for" loop and empty!(m) and 
>>> push!(m, new_m) inside the loop to pre-allocate the variable m, but in 
>>> my 
>>> situations, I don't know how to pre-allocate these arrays.
>>>
>>> Below is the script (precisely, the main function) itself.
>>>
>>> function evolution(ϕ::Array{Complex{Float64}, 2},
>>>ele::Array{Float64, 1}, dx::Float64, dt::Float64,
>>>flags::Tuple{Int64, Int64, Int64, Int64})
>>> ϕg = copy(ϕ)
>>> FFTW.set_num_threads(8)
>>> ns = length( ϕ[:, 1] )
>>> x = get_x(ns, dx)
>>> p = get_p(ns, dx)
>>> if flags[4] == 1
>>> pp = similar(p)
>>> A = -cumsum(ele) * dt
>>> A² = A.*A
>>> # splitting
>>> r_sp = 150.0
>>> δ_sp = 5.0
>>> splitter = Array(Float64, ns, ns)
>>> end
>>> nt = length( ele )
>>>
>>> # # Pre-allocate result and temporary arrays
>>> #if flags[1] == 1
>>> σ = zeros(Complex128, nt)
>>> #end
>>> #if flags[2] == 1
>>> a = zeros(Float64, nt)
>>> #end
>>> #if flags[3] == 1
>>> r_ionization = 20.0
>>> n1 = round(Int, ns/2 - r_ionization/dx)
>>> n2 = round(Int, ns/2 + r_ionization/dx)
>>> ip = zeros(Float64, nt)
>>> #end
>>>
>>> # FFT plan
>>> p_fft! = plan_fft!( similar(ϕ), flags=FFTW.MEASURE )
>>>
>>> prop_x = similar( ϕ )
>>> prop_p = similar( prop_x )
>>> prop_e = similar( prop_x )
>>> # this two versions just cost the same time
>>> xplusy = Array(Float64, ns, ns)
>>> #xplusy = 

Re: [julia-users] documentation of print() and println()

2016-03-30 Thread Stefan Karpinski
Yes, this is definitely a documentation oversight. I've opened an issue for
it: https://github.com/JuliaLang/julia/issues/15693.

On Wed, Mar 30, 2016 at 9:01 AM,  wrote:

> I believe that print (and println) can be used as follows:
>
> >print(io,"Highway ",61," Revisited")
>
> However, ?print demonstrates only the print(x) usage. Same for the manual.
> No mentioning of the io or additional arguments. Did I miss something?
>
> /Paul S
>
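For reference, a small example of the `io`-first form (on 0.4 the last line would be `takebuf_string(io)`; `String(take!(io))` is the later spelling):

```julia
io = IOBuffer()
print(io, "Highway ", 61, " Revisited")  # writes each argument in turn
String(take!(io))                        # "Highway 61 Revisited"
```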


Re: [julia-users] git protocol packages

2016-03-30 Thread Isaiah Norton
I'm not sure if this is supposed to be officially supported yet, but I was
able to get ssh:// to work on OS X:

1. `brew install libssh2`
2. from julia root dir: `cd deps && make configure-libgit2 VERBOSE=1`
3. copy the cmake command printed by above, and re-run it manually. For
some reason PKG_CONFIG_MODULE didn't detect libgit2 the first time
(discovered by trial-and-error, verified by `make distclean-libgit2` and
doing the process again).

  it should look something like:

 `cmake {HOME}/git/jl71/deps/srccache/libgit2/
-DCMAKE_INSTALL_PREFIX:PATH={HOME}/git/jl71/usr -DCMAKE_VERBOSE_MAKEFILE=ON
-DCMAKE_C_COMPILER="clang" -DCMAKE_C_COMPILER_ARG1="-m64 "
-DCMAKE_CXX_COMPILER="clang++" -DCMAKE_CXX_COMPILER_ARG1="-m64 "
-DTHREADSAFE=ON -DCMAKE_BUILD_TYPE=Release`


4. in julia root directory: `make clean && make`
5. start ssh-agent. in bash: "$ eval `ssh-agent`"
6. run something that causes ssh-agent to unlock the key, for example
regular command line git clone'ing a repository via ssh.

After those steps, the following works:

julia> Pkg.clone("ssh://g...@github.com/johnmyleswhite/ASCIIPlots.jl.git")

If I neglect step 6, then the callback ("credentials_cb") gets called
indefinitely (noted via print debugging), so it seems that we are missing
some step to make ssh-agent unlock the key pair (which happens via system
prompt on OS X).

So: it looks like this is almost-supported, but we need to fix build issues
and teach the libgit2 wrapper to set up ssh-agent credentials correctly on
its own (at least on OS X). Presumably the situation is the same or better
on Linux. On Windows, building against libssh2 is explicitly disabled by
our Makefile.


On Mon, Mar 28, 2016 at 4:01 PM, Blake Johnson 
wrote:

> Is there a way to still support git protocol (as opposed to https)
> packages with the new libgit2 based package system? I have a fair number of
> private packages on a local server, and it sure would be nice to be able to
> fetch those with SSH key authentication.
>


[julia-users] double-dash cmdline syntax doesn't work ?

2016-03-30 Thread Didier Verna

  Hello,

I've installed Julia 0.4.5 on my Mac and I'm reading the user manual...

The Getting Started section mentions the double-dash cmdline syntax, but
it doesn't seem to work. With the script given as example that just
shows the value of ARGS, I get this:

didier(s000)% julia --color=yes -- /tmp/test.jl foo bar
ERROR: could not open file /Users/didier/--
[...]

However, it works without the double-dash:
didier(s000)% julia --color=yes /tmp/test.jl foo bar
foo
bar

This behavior seems in contradiction with Julia's synopsis:
didier(s000)% julia --help
julia [switches] -- [programfile] [args...]


Thanks.

-- 
Resistance is futile. You will be jazzimilated.

Lisp, Jazz, Aïkido: http://www.didierverna.info


[julia-users] Re: Load error:UdefVarError : Model not defined in include boot.jl:261

2016-03-30 Thread tannirind
Thank you, Kelman.

I was trying this code, in which "using JuMp" is already mentioned:

using jump
using Ipopt
m=Model(solver=IpoptSolver())
@defVar(m, x, start = 0.0)
@defVar(m, y, start = 0.0)
@setNLObjective(m, Min, (1-x)^2 + 100(y-x^2)^2)

solve(m)

println("x = ", getValue(x), "y = ", getValue(y))

BR



On Wednesday, March 30, 2016 at 2:11:39 PM UTC+2, Tony Kelman wrote:
>
> You need
> using JuMP
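Incidentally, Julia identifiers (including module names) are case-sensitive, so the `using jump` line in the snippet above would itself fail: the package must be loaded as `using JuMP`. A minimal stdlib-only sketch of the same pitfall (hypothetical `Demo` module, no JuMP required):

```julia
# Module names are ordinary identifiers: `Demo` and `demo` are unrelated names.
module Demo
export double
double(x) = 2x
end

using .Demo        # works: exact case match
double(21)         # 42
# `using .demo` would fail, since lowercase `demo` is never defined.
```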



Re: [julia-users] Re: Is there a performance penalty for defining functions inside functions?

2016-03-30 Thread Mauro
No: `include` includes in global scope.

On Wed, 2016-03-30 at 15:02, FANG Colin  wrote:
> What about include in a function?
>
> function mainFunc()
> include("helper.jl")
>
> call helper() and do stuff
> return something
> end
>
>
> inside helper.jl
>
> function helper()
> do stuff
> return something
> end
>
>
>
> On Wednesday, March 30, 2016 at 1:26:22 PM UTC+1, Christopher Fisher wrote:
>>
>> There might be some cases where defining functions within functions can
>> improve speed. As Mauro noted, this may not be true in .4 but will be fixed
>> in .5. See the following for examples:
>>
>>
>> https://groups.google.com/forum/#!searchin/julia-users/Passing$20data$20through$20Optim/julia-users/a_81sxvb-3c/9q6RvjfkBwAJ
>>
>> On Tuesday, March 29, 2016 at 12:31:42 PM UTC-4, Evan Fields wrote:
>>>
>>> To keep namespaces clear and to help with code readability, I might like
>>> to do this:
>>>
>>> function mainFunc()
>>> function helper()
>>> do stuff
>>> return something
>>> end
>>>
>>> call helper() and do stuff
>>> return something
>>> end
>>>
>>> That way the helper function is only visible to the function that needs
>>> it and when reading the code it's obvious that the helper "belongs to" the
>>> main function. Is there any performance penalty for doing this? Or is this
>>> bad practice for some reason I don't know?
>>>
>>
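A minimal runnable sketch of the pattern under discussion (and note Mauro's point: `include` always evaluates in global scope, so the nested-`include` variant does not create a local helper):

```julia
# `helper` is visible only inside `mainfunc`. In Julia 0.4 such inner
# functions could be slower; from 0.5 on they compile like ordinary
# functions when type-stable.
function mainfunc(xs)
    helper(x) = 2x              # local to mainfunc
    s = zero(eltype(xs))
    for x in xs
        s += helper(x)
    end
    return s
end

mainfunc(1:3)  # 2 + 4 + 6 = 12
```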


[julia-users] Re: raytracing in julia

2016-03-30 Thread jw3126
It seems that the official FireRays repo currently can't be built on my 
system (Ubuntu 14.04), though there is already an issue about this.

If there is only a C++ API, what kind of trouble does that mean for trying 
to wrap it with julia? Is it impossible? Very hacky? Need to look into 
llvm? Bad performance?

Also, what about your GeometryTypes package? I saw a type 'Particle' there; 
are there any plans to support ray tracing in that package or on top of it?

On Tuesday, March 29, 2016 at 12:30:14 PM UTC+2, Simon Danisch wrote:
>
> Oh, contrary to FireRender, FireRay seems to only have a C++ API...
> There's also https://github.com/JuliaGeometry/GeometricalPredicates.jl
> and https://github.com/JuliaGeometry/TriangleIntersect.jl, though!
>
>
> Am Freitag, 25. März 2016 14:54:01 UTC+1 schrieb jw3126:
>>
>> I need to do things like building a simple geometry which consists of a 
>> few polygons, cylinders, spheres and calculate if/where rays hit these 
>> objects.
>>
>> Is there some julia library which does this already? Or some easy to wrap 
>> C/Fortran library? Any suggestions?
>> I would prefer a solution, which does not depend on vectorization, but 
>> can be called efficiently as part of a loop, one ray at a time.
>>
>
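Whatever happens with the wrappers, the core test asked about here (if/where a ray hits an object) is straightforward in pure Julia and can be called one ray at a time inside a loop. A sketch of the standard Möller–Trumbore ray/triangle intersection (illustrative only, not taken from either package):

```julia
using LinearAlgebra  # for dot and cross

# Möller–Trumbore ray/triangle intersection.
# Returns the distance t along the ray to the hit point, or `nothing` on a miss.
function ray_triangle(orig, dir, v0, v1, v2; tol=1e-9)
    e1 = v1 .- v0
    e2 = v2 .- v0
    p  = cross(dir, e2)
    det = dot(e1, p)
    abs(det) < tol && return nothing        # ray parallel to triangle plane
    inv_det = 1 / det
    s = orig .- v0
    u = dot(s, p) * inv_det                 # first barycentric coordinate
    (u < 0 || u > 1) && return nothing
    q = cross(s, e1)
    v = dot(dir, q) * inv_det               # second barycentric coordinate
    (v < 0 || u + v > 1) && return nothing
    t = dot(e2, q) * inv_det                # distance along the ray
    return t > tol ? t : nothing
end

ray_triangle([0.0, 0, -1], [0.0, 0, 1],
             [-1.0, -1, 0], [1.0, -1, 0], [0.0, 1, 0])  # 1.0
```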

[julia-users] Re: Is there a performance penalty for defining functions inside functions?

2016-03-30 Thread FANG Colin
What about include in a function?

function mainFunc()
include("helper.jl")

call helper() and do stuff
return something
end


inside helper.jl

function helper()
do stuff
return something
end



On Wednesday, March 30, 2016 at 1:26:22 PM UTC+1, Christopher Fisher wrote:
>
> There might be some cases where defining functions within functions can 
> improve speed. As Mauro noted, this may not be true in .4 but will be fixed 
> in .5. See the following for examples:
>
>
> https://groups.google.com/forum/#!searchin/julia-users/Passing$20data$20through$20Optim/julia-users/a_81sxvb-3c/9q6RvjfkBwAJ
>
> On Tuesday, March 29, 2016 at 12:31:42 PM UTC-4, Evan Fields wrote:
>>
>> To keep namespaces clear and to help with code readability, I might like 
>> to do this:
>>
>> function mainFunc()
>> function helper()
>> do stuff
>> return something
>> end
>>
>> call helper() and do stuff
>> return something
>> end
>>
>> That way the helper function is only visible to the function that needs 
>> it and when reading the code it's obvious that the helper "belongs to" the 
>> main function. Is there any performance penalty for doing this? Or is this 
>> bad practice for some reason I don't know?
>>
>

[julia-users] documentation of print() and println()

2016-03-30 Thread paul . soederlind
I believe that print (and println) can be used as follows: 

>print(io,"Highway ",61," Revisited") 

However, ?print demonstrates only the print(x) usage. Same for the manual: 
no mention of the io form or additional arguments. Did I miss something?

/Paul S
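The io-first form does work; a minimal check using `sprint`, which passes a temporary in-memory IO as the first argument and returns what was written:

```julia
# print(io, xs...) writes each argument to io in sequence, with no separators.
s = sprint(print, "Highway ", 61, " Revisited")
s == "Highway 61 Revisited"  # true
```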


[julia-users] Artigos para ler (Articles to read)

2016-03-30 Thread Eduardo Lenz
Hi... I'm adding a quasi-Newton method to our program here,
and I augmented the database by superimposing random noise.

Let's see whether it's worth splitting the database into
training/validation sets...

Now on to the articles!

Look at this one!
http://icsl.gatech.edu/aa/images/7/72/ASOC_Journal.pdf

http://www.cs.cornell.edu/selman/papers/pdf/03.comp-elec-agri.neural.pdf

http://ieeexplore.ieee.org/stamp/stamp.jsp?tp==5692335

http://www.ncbi.nlm.nih.gov/pubmed/8573804

https://www.researchgate.net/publication/252044043_Application_of_neural_networks_for_failure_detection_on_wind_turbines


-- 

 Prof. Dr. Eduardo Lenz Cardoso
 Associate Professor
 Coordinator of the Graduate Program in Mechanical Engineering - UDESC / CCT
 +55 47 84072162
 UDESC - Departamento de Engenharia Mecanica
 Campus Universitario Prof. Avelino Marcante, 200
 Bom Retiro - Joinville - SC - Brasil
 CEP: 89223-100
 E-mail: eduardobarpl...@gmail.com
 eduardo.card...@udesc.br



[julia-users] Re: Is there a performance penalty for defining functions inside functions?

2016-03-30 Thread Christopher Fisher
There might be some cases where defining functions within functions can 
improve speed. As Mauro noted, this may not be true in .4 but will be fixed 
in .5. See the following for examples:

https://groups.google.com/forum/#!searchin/julia-users/Passing$20data$20through$20Optim/julia-users/a_81sxvb-3c/9q6RvjfkBwAJ

On Tuesday, March 29, 2016 at 12:31:42 PM UTC-4, Evan Fields wrote:
>
> To keep namespaces clear and to help with code readability, I might like 
> to do this:
>
> function mainFunc()
> function helper()
> do stuff
> return something
> end
>
> call helper() and do stuff
> return something
> end
>
> That way the helper function is only visible to the function that needs it 
> and when reading the code it's obvious that the helper "belongs to" the 
> main function. Is there any performance penalty for doing this? Or is this 
> bad practice for some reason I don't know?
>


[julia-users] Re: Almost at 500 packages!

2016-03-30 Thread James Fairbanks
I am interested in this project and have some time on my hands over the 
next few weeks.

On Wednesday, March 30, 2016 at 5:55:16 AM UTC-4, Adrian Salceanu wrote:
>
> I began working on such a tool a few weeks ago. 
>
> A) It goes over the METADATA (https://github.com/JuliaLang/METADATA.jl) 
> for all the registered packages and then B) uses the GitHub API to get the 
> README and additional stats (contributions, stars, followers, etc). 
> Planning on C) exposing this as a REST-ful API and D) building a package 
> that can be used from the REPL to search for packages using the API. 
>
> A and B are done (though not entirely production ready yet) - C and D are 
> yet to come. 
>
> I'm building this as an application of a bigger project I'm working on - a 
> full stack web framework. This is going to take more time (I have the ORM 
> at 90% with basic controllers support and routing and serving via Mux) but 
> I can extract just the requirements for this and make it available in a few 
> weeks. 
>
> If anybody wants to contribute with ideas or dev time, I'd be happy to set 
> up a repo ASAP. 
>
>
> On Tuesday, January 20, 2015, at 16:32:45 UTC+1, Iain Dunning wrote:
>>
>> Just noticed on http://pkg.julialang.org/pulse.html that we are at 499 
>> registered packages with at least one version tagged that are Julia 0.4-dev 
>> compatible (493 on Julia 0.3).
>>
>> Thanks to all the package developers for their efforts in growing the 
>> Julia package ecosystem!
>>
>>

[julia-users] Load error:UdefVarError : Model not defined in include boot.jl:261

2016-03-30 Thread Tony Kelman
You need
using JuMP

Re: [julia-users] parametric type question: N,T <: NTuple{N}

2016-03-30 Thread Tim Holy
That kind of type specification has been on the wish list for a long time. 
Maybe when #8974 lands (certainly not in julia 0.5).

--Tim

On Wednesday, March 30, 2016 01:24:54 PM Tamas Papp wrote:
> Hi,
> 
> I am working on a mini-library that makes tabulation of arbitrary data
> easier. The design that works well at the moment is just wrapping a
> dictionary, which maps a Tuple of given type (and thus length) to a
> subtype of Real. The Tuple type gives the allowed key types.
> 
> It could be defined like this:
> 
> immutable DynamicNamedArray{Tv <: Real, Tk <: Tuple}
> elements::Dict{Tk,Tv}
> ordering
> function DynamicNamedArray{Tk,Tv}(elements::Dict{Tk,Tv}, ordering::NTuple)
> @assert nfields(Tk) == length(ordering)
> new(elements, ordering)
> end
> end
> 
> where ordering is another tuple, of functions, which is used for display
> and conversion to NamedArray.
> 
> But suppose I want to incorporate the length of the key tuple Tk into
> the type. I could use
> 
> immutable DynamicNamedArray{Tv <: Real, Tk <: Tuple, N}
> elements::Dict{Tk,Tv}
> ordering::NTuple{N,Function}
> function DynamicNamedArray{Tk,Tv,N}(elements::Dict{Tk,Tv}, ordering::NTuple{N})
> @assert nfields(Tk) == N
> new(elements, ordering)
> end
> end
> 
> I can then make it a subtype of AbstractSparseArray{Tv,N}. But the
> constructor is still used to enforce nfields(Tk) == N. I am wondering if
> there is a way to do something like (mock code follows)
> 
> immutable DynamicNamedArray{N, Tv <: Real, Tk <: NTuple{N}}
>   ...
> end
> 
> so that the order would be enforced in the type.
> 
> If I am trying to do something nonsensical and better solutions exist,
> please tell me.
> 
> Best,
> 
> Tamas
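For the record, here is a sketch of how the desired constraint can be written once type parameters may appear in later parameters' bounds; this relies on the post-#8974 type system (Julia 0.6+/1.x `struct` syntax) and does not work on the 0.4/0.5 of this thread:

```julia
# N appears in the bound of Tk, so Tk must be a tuple type of length N;
# a length mismatch is rejected by the type system rather than an @assert.
struct DynamicNamedArray{N, Tv<:Real, Tk<:NTuple{N,Any}}
    elements::Dict{Tk,Tv}
    ordering::NTuple{N,Function}
end

d = DynamicNamedArray{2, Int, Tuple{Symbol,Int}}(
    Dict((:a, 1) => 3), (string, identity))
length(d.ordering)  # 2
```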



Re: [julia-users] Re: regression from 0.43 to 0.5dev, and back to 0.43 on fedora23

2016-03-30 Thread Milan Bouchet-Valat
Le mercredi 30 mars 2016 à 04:43 -0700, Johannes Wagner a écrit :
> Sorry for not having expressed myself clearly: I meant that the latest
> version of fedora (24, in development) works fine. I always used the
> latest julia nightly available on the copr nalimilan repo. Right now
> that is: 0.5.0-dev+3292, Commit 9d527c5*, all use
> LLVM: libLLVM-3.7.1 (ORCJIT, haswell)
> 
> peakflops on all machines (hardware identical) is ~1.2..1.5e11.  
> 
> Fedora 22&23 with julia 0.5 is ~50% slower than 0.4; only on fedora
> 24 is julia 0.5 faster compared to julia 0.4.
Could you try to find a simple code to reproduce the problem? In
particular, it would be useful to check whether this comes from
OpenBLAS differences or whether it also happens with pure Julia code
(typical operations which depend on BLAS are matrix multiplication, as
well as most of linear algebra). Normally, 0.4 and 0.5 should use the
same BLAS, but who knows...

Can you also confirm that all versioninfo() fields are the same for all
three machines, both for 0.4 and 0.5? We must envision the possibility
that the differences actually come from 0.4.


Regards


> > Le mercredi 16 mars 2016 à 09:25 -0700, Johannes Wagner a écrit : 
> > > just a little update. Tested some other fedoras: Fedora 22 with llvm 
> > > 3.8 is also slow with julia 0.5, whereas a fedora 24 branch with llvm 
> > > 3.7 is faster on julia 0.5 compared to julia 0.4, as it should be 
> > > (speedup from inner loop parts translated into speedup to whole 
> > > function). 
> > > 
> > > don't know if anyone cares about that... At least the latest version 
> > > seems to work fine, hope it stays like this into the final fedora 24 
> > What's the "latest version"? git built from source or RPM nightlies? 
> > With which LLVM version for each? 
> > 
> > If from the RPMs, I've switched them to LLVM 3.8 for a few days, and 
> > went back to 3.7 because of a build failure. So that might explain the 
> > difference. You can install the last version which built with LLVM 3.8 
> > manually from here: 
> > https://copr-be.cloud.fedoraproject.org/results/nalimilan/julia-nightlies/fedora-23-x86_64/00167549-julia/
> >  
> > 
> > It would be interesting to compare it with the latest nightly with 3.7. 
> > 
> > 
> > Regards 
> > 
> > 
> > 
> > > > hey guys, 
> > > > I just experienced something weird. I have some code that runs fine 
> > > > on 0.43, then I updated to 0.5dev to test the new Arrays, run same 
> > > > code and noticed it got about ~50% slower. Then I downgraded back 
> > > > to 0.43, ran the old code, but speed remained slow. I noticed while 
> > > > reinstalling 0.43, openblas-threads didn't get installed along with 
> > > > it. So I manually installed it, but no change.  
> > > > Does anyone has an idea what could be going on? LLVM on fedora23 is 
> > > > 3.7 
> > > > 
> > > > Cheers, Johannes 
> > > > 


Re: [julia-users] Re: regression from 0.43 to 0.5dev, and back to 0.43 on fedora23

2016-03-30 Thread Johannes Wagner
Sorry for not having expressed myself clearly: I meant that the latest version 
of fedora (24, in development) works fine. I always used the latest julia 
nightly available on the copr nalimilan repo. Right now that is: 
0.5.0-dev+3292, Commit 9d527c5*, all use
LLVM: libLLVM-3.7.1 (ORCJIT, haswell)

peakflops on all machines (hardware identical) is ~1.2..1.5e11.  

Fedora 22&23 with julia 0.5 is ~50% slower than 0.4; only on fedora 24 
is julia 0.5 faster compared to julia 0.4.


On Wednesday, March 16, 2016 at 7:34:28 PM UTC+1, Milan Bouchet-Valat wrote:
>
> Le mercredi 16 mars 2016 à 09:25 -0700, Johannes Wagner a écrit : 
> > just a little update. Tested some other fedoras: Fedora 22 with llvm 
> > 3.8 is also slow with julia 0.5, whereas a fedora 24 branch with llvm 
> > 3.7 is faster on julia 0.5 compared to julia 0.4, as it should be 
> > (speedup from inner loop parts translated into speedup to whole 
> > function). 
> > 
> > don't know if anyone cares about that... At least the latest version 
> > seems to work fine, hope it stays like this into the final fedora 24 
> What's the "latest version"? git built from source or RPM nightlies? 
> With which LLVM version for each? 
>
> If from the RPMs, I've switched them to LLVM 3.8 for a few days, and 
> went back to 3.7 because of a build failure. So that might explain the 
> difference. You can install the last version which built with LLVM 3.8 
> manually from here: 
>
> https://copr-be.cloud.fedoraproject.org/results/nalimilan/julia-nightlies/fedora-23-x86_64/00167549-julia/
>  
>
> It would be interesting to compare it with the latest nightly with 3.7. 
>
>
> Regards 
>
>
>
> > > hey guys, 
> > > I just experienced something weird. I have some code that runs fine 
> > > on 0.43, then I updated to 0.5dev to test the new Arrays, run same 
> > > code and noticed it got about ~50% slower. Then I downgraded back 
> > > to 0.43, ran the old code, but speed remained slow. I noticed while 
> > > reinstalling 0.43, openblas-threads didn't get installed along with 
> > > it. So I manually installed it, but no change.  
> > > Does anyone has an idea what could be going on? LLVM on fedora23 is 
> > > 3.7 
> > > 
> > > Cheers, Johannes 
> > > 
>
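A note on reproducing the peakflops comparison above: `peakflops` times a dense Float64 matrix multiply, so it mostly benchmarks the linked BLAS rather than Julia's code generation (in Julia 0.7+ it moved from Base into the LinearAlgebra stdlib):

```julia
using LinearAlgebra  # peakflops lives here from Julia 0.7 on

# peakflops(n) times an n×n Float64 matrix multiply and returns FLOP/s.
gflops = LinearAlgebra.peakflops(500) / 1e9
```

Comparing this figure between the 0.4 and 0.5 installs helps separate a BLAS difference from a compiler regression, as suggested elsewhere in the thread.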


[julia-users] parametric type question: N,T <: NTuple{N}

2016-03-30 Thread Tamas Papp
Hi,

I am working on a mini-library that makes tabulation of arbitrary data
easier. The design that works well at the moment is just wrapping a
dictionary, which maps a Tuple of given type (and thus length) to a
subtype of Real. The Tuple type gives the allowed key types.

It could be defined like this:

immutable DynamicNamedArray{Tv <: Real, Tk <: Tuple}
elements::Dict{Tk,Tv}
ordering
function DynamicNamedArray{Tk,Tv}(elements::Dict{Tk,Tv}, ordering::NTuple)
@assert nfields(Tk) == length(ordering)
new(elements, ordering)
end
end

where ordering is another tuple, of functions, which is used for display
and conversion to NamedArray.

But suppose I want to incorporate the length of the key tuple Tk into
the type. I could use

immutable DynamicNamedArray{Tv <: Real, Tk <: Tuple, N}
elements::Dict{Tk,Tv}
ordering::NTuple{N,Function}
function DynamicNamedArray{Tk,Tv,N}(elements::Dict{Tk,Tv}, ordering::NTuple{N})
@assert nfields(Tk) == N
new(elements, ordering)
end
end

I can then make it a subtype of AbstractSparseArray{Tv,N}. But the
constructor is still used to enforce nfields(Tk) == N. I am wondering if
there is a way to do something like (mock code follows)

immutable DynamicNamedArray{N, Tv <: Real, Tk <: NTuple{N}}
  ...
end

so that the order would be enforced in the type.

If I am trying to do something nonsensical and better solutions exist,
please tell me.

Best,

Tamas


[julia-users] Load error:UdefVarError : Model not defined in include boot.jl:261

2016-03-30 Thread tannirind
Hello everyone,

I am solving simple nonlinear examples using JuMP with IpoptSolver. When I 
include the file in Julia, the following error comes up. I updated my packages 
as well.

LoadError: UndefVarError: Model not defined
 in include at boot.jl:261
 in include_from_node1 at loading.jl:304

Anyone have any idea about this? Thank you for your time.

Best Regards,
Tanveer Iqbal


[julia-users] Re: Almost at 500 packages!

2016-03-30 Thread Adrian Salceanu
I began working on such a tool a few weeks ago. 

A) It goes over the METADATA (https://github.com/JuliaLang/METADATA.jl) for 
all the registered packages and then B) uses the GitHub API to get the 
README and additional stats (contributions, stars, followers, etc). 
Planning on C) exposing this as a REST-ful API and D) building a package 
that can be used from the REPL to search for packages using the API. 

A and B are done (though not entirely production ready yet) - C and D are 
yet to come. 

I'm building this as an application of a bigger project I'm working on - a 
full stack web framework. This is going to take more time (I have the ORM 
at 90% with basic controllers support and routing and serving via Mux) but 
I can extract just the requirements for this and make it available in a few 
weeks. 

If anybody wants to contribute with ideas or dev time, I'd be happy to set 
up a repo ASAP. 


On Tuesday, January 20, 2015, at 16:32:45 UTC+1, Iain Dunning wrote:
>
> Just noticed on http://pkg.julialang.org/pulse.html that we are at 499 
> registered packages with at least one version tagged that are Julia 0.4-dev 
> compatible (493 on Julia 0.3).
>
> Thanks to all the package developers for their efforts in growing the 
> Julia package ecosystem!
>
>

Re: [julia-users] The Arrays are hard to pre-allocate for me, are they possible to be pre-allocated?

2016-03-30 Thread 博陈
I tried @code_warntype, and the result shows that the red alerts appear only 
in the IO part. Maybe it's not a type-stability issue.
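Related to the `sum(A[1:n])` allocation discussed in this thread: slicing with `A[1:n]` materializes a copy before summing, while a manual loop (or a view) does not. A minimal sketch:

```julia
# Sum the first n elements without allocating the slice A[1:n].
# (sum(@view A[1:n]) is an alternative on Julia 0.5+.)
function partial_sum(A, n)
    s = zero(eltype(A))
    @inbounds for i in 1:n
        s += A[i]
    end
    return s
end

partial_sum([1.0, 2.0, 3.0, 4.0], 2)  # 3.0
```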

On Wednesday, March 30, 2016 at 3:18:24 AM UTC+8, Tim Holy wrote:
>
> I haven't look at this myself, but have you tried Stefan's suggestion to 
> look 
> at `@code_warntype`? This might not be a preallocation issue, it might be 
> a 
> type-stability issue. 
>
> --Tim 
>
> On Tuesday, March 29, 2016 01:56:21 PM Yichao Yu wrote: 
> > On Tue, Mar 29, 2016 at 1:50 PM, 博陈 wrote:
> > > Additionally, the allocation problem is not solved. I guess this
> > > http://julia-programming-language.2336112.n4.nabble.com/How-to-avoid-temporary-arrays-being-created-in-a-function-td14492.html
> > > might be helpful, but I just don't know how to change my code.
> > 
> > The only place you create temporary arrays, according to your profile, is
> > the `sum(A[1:n])`; you just need to loop over 1:n manually instead of
> > creating a subarray.
> > 
> > > On Wednesday, March 30, 2016 at 1:15:07 AM UTC+8, Yichao Yu wrote:
> > > 
> > >> On Tue, Mar 29, 2016 at 12:43 PM, 博陈  wrote: 
> > >>> I tried the built-in profiler, and found that the problem lies in the
> > >>> lines I end with **; the result is shown below. That proved my guess.
> > >>> How can I pre-allocate these arrays, if I don't want to divide this
> > >>> code into several parts that calculate these arrays separately?
> > >> 
> > >> I don't understand what you mean by `divide this code into several
> > >> parts that calculate these arrays separately`.
> > >> 
> > >>> | lines | backtrace |
> > >>> |   169 |      9011 | ***
> > >>> |   173 |      1552 |
> > >>> |   175 |      2604 |
> > >>> |   179 |      2906 |
> > >>> |   181 |      1541 |
> > >>> |   192 |      4458 |
> > >>> |   211 |     13332 |
> > >>> |   214 |      8431 |
> > >>> |   218 |     15871 | ***
> > >>> |   221 |      2538 |
> > >>> 
> > >>> On Tuesday, March 29, 2016 at 9:27:27 PM UTC+8, Stefan Karpinski wrote:
> > >>> 
> >  Have you tried: 
> >  
> >  (a) calling @code_warntype on your function 
> >  (b) using the built-in profiler? 
> >  
> >  On Tue, Mar 29, 2016 at 9:23 AM, 博陈  wrote: 
> > > First of all, have a look at the result. 
> > > 
> > > 
> > > [image: screenshot of the @time output]
> > >
> > > My code calculates the evolution of a 1-d 2-electron system in an
> > > electric field; some variables are calculated during the evolution.
> > > According to the result of @time evolution, my code must have a
> > > pre-allocation problem. Before you see the long code, I suggest that
> > > the hotspot might be around the arrays prop_e, ϕo, and pp. I have
> > > learnt that I can use m = Array(Float64, 1) outside a "for" loop and
> > > empty!(m) and push!(m, new_m) inside the loop to pre-allocate the
> > > variable m, but in my situation I don't know how to pre-allocate
> > > these arrays.
> > > 
> > > Below is the script (precisely, the main function) itself. 
> > > 
> > > function evolution(ϕ::Array{Complex{Float64}, 2},
> > >                    ele::Array{Float64, 1}, dx::Float64, dt::Float64,
> > >                    flags::Tuple{Int64, Int64, Int64, Int64})
> > > 
> > > ϕg = copy(ϕ) 
> > > FFTW.set_num_threads(8) 
> > > ns = length( ϕ[:, 1] ) 
> > > x = get_x(ns, dx) 
> > > p = get_p(ns, dx) 
> > > if flags[4] == 1 
> > > 
> > > pp = similar(p) 
> > > A = -cumsum(ele) * dt 
> > > A² = A.*A 
> > > # splitting 
> > > r_sp = 150.0 
> > > δ_sp = 5.0 
> > > splitter = Array(Float64, ns, ns) 
> > > 
> > > end 
> > > nt = length( ele ) 
> > > 
> > > # # Pre-allocate result and temporary arrays 
> > > #if flags[1] == 1 
> > > σ = zeros(Complex128, nt) 
> > > #end 
> > > #if flags[2] == 1 
> > > a = zeros(Float64, nt) 
> > > #end 
> > > #if flags[3] == 1 
> > > r_ionization = 20.0 
> > > n1 = round(Int, ns/2 - r_ionization/dx) 
> > > n2 = round(Int, ns/2 + r_ionization/dx) 
> > > ip = zeros(Float64, nt) 
> > > #end 
> > > 
> > > # FFT plan 
> > > p_fft! = plan_fft!( similar(ϕ), flags=FFTW.MEASURE ) 
> > > 
> > > prop_x =