[julia-users] Re: ANN: JuMP 0.12 released

2016-03-08 Thread Tony Kelman
I was also recently seeing NLopt hang during some of JuMP's tests, so I 
don't think it's just you.


On Tuesday, March 8, 2016 at 9:52:38 PM UTC-8, Evan Fields wrote:
>
> Great to hear. Two minor questions which aren't clear (to me) from the 
> documentation:
> - Once a user defined function has been defined and registered, can it be 
> incorporated into NL expressions via @defNLExpr?
> - The documentation references both ForwardDiff.jl and 
> ReverseDiffSparse.jl. Which is used where? What are the tradeoffs users 
> should be aware of?
>
> Semi-unrelated: two days ago I was using JuMP 0.12 and NLopt to solve what 
> should have been a very simple (2 variable) nonlinear problem. When I fed 
> the optimal solution as the starting values for the variables, the 
> solve(model) command (or NLopt) hung indefinitely. Perturbing my starting 
> point by .0001 fixed that - solve returned a solution instantaneously, at 
> least by human perception. Am I doing something dumb?
>


Re: [julia-users] Julia on Android (and/or the web) - a scientific calculator on steroids.. good for tablets

2016-03-08 Thread Viral Shah
How about making this a GSOC project? Could you perhaps submit a PR to the GSOC 
project page on julialang.org with these ideas, since you seem to have made the 
most progress here?

-viral



> On 09-Mar-2016, at 11:47 AM, Lutfullah Tomak  wrote:
> 
> I think the most critical piece that the toolchain misses is (lib)gfortran 
> support. I managed to build gfortran for cross compiling, but I think I am 
> missing a pure hard-float libgfortran to compile libopenblas with. For Android, 
> libopenblas expects hard-float libraries. In the Google-provided toolchain, the 
> hard-float libraries are in .../armv7-a/hard, but I don't have an .../armv7-a/hard 
> directory in my personal build of the toolchain. Nevertheless, I can build 
> openblas with LAPACK support, but the netlib-provided tests do not work well. 
> BLAS passes its tests.
> Also, for some dependencies, including LLVM, there is an app called Termux. 
> They have a source package build system available at 
> https://github.com/termux/termux-packages . It can be helpful to look at. 
> They have listed Julia on their package suggestion page for some time: 
> https://termux.com/package-suggestions.html .



[julia-users] Re: Julia on Android (and/or the web) - a scientific calculator on steroids.. good for tablets

2016-03-08 Thread Lutfullah Tomak
I think the most critical piece that the toolchain misses is (lib)gfortran support. 
I managed to build gfortran for cross compiling, but I think I am missing a pure 
hard-float libgfortran to compile libopenblas with. For Android, libopenblas 
expects hard-float libraries. In the Google-provided toolchain, the hard-float 
libraries are in .../armv7-a/hard, but I don't have an .../armv7-a/hard directory 
in my personal build of the toolchain. Nevertheless, I can build openblas with 
LAPACK support, but the netlib-provided tests do not work well. BLAS passes its 
tests.
Also, for some dependencies, including LLVM, there is an app called Termux. They 
have a source package build system available at 
https://github.com/termux/termux-packages . It can be helpful to look at. They 
have listed Julia on their package suggestion page for some time: 
https://termux.com/package-suggestions.html .

[julia-users] Re: ANN: JuMP 0.12 released

2016-03-08 Thread Evan Fields
Great to hear. Two minor questions which aren't clear (to me) from the 
documentation:
- Once a user defined function has been defined and registered, can it be 
incorporated into NL expressions via @defNLExpr?
- The documentation references both ForwardDiff.jl and 
ReverseDiffSparse.jl. Which is used where? What are the tradeoffs users 
should be aware of?

Semi-unrelated: two days ago I was using JuMP 0.12 and NLopt to solve what 
should have been a very simple (2 variable) nonlinear problem. When I fed 
the optimal solution as the starting values for the variables, the 
solve(model) command (or NLopt) hung indefinitely. Perturbing my starting 
point by .0001 fixed that - solve returned a solution instantaneously, at 
least by human perception. Am I doing something dumb?
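
A rough sketch of the user-defined-function feature being asked about, in the 
JuMP 0.12-era style; the register signature, macro names, and NLoptSolver usage 
below are recalled from that era's documentation and should be treated as 
assumptions rather than the definitive API:

using JuMP, NLopt

myf(x) = (x - 2.0)^2 + 1.0
JuMP.register(:myf, 1, myf, autodiff=true)   # assumed 0.12-era signature

m = Model(solver=NLoptSolver(algorithm=:LD_MMA))
@defVar(m, y >= 0.1)
@setNLObjective(m, Min, myf(y))
solve(m)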


[julia-users] Re: Eigenvalues of symmetric dense n=10^6 matrix (ScaLAPACK.jl?)

2016-03-08 Thread Jack Poulson
Scratch that; I misread your request as I was forwarded a link to this with 
the title "Big SVD".

You should look into wrapping Elemental's HermitianEig and/or the 
appropriate routine from ELPA, unless you only want a small number of 
modes, in which case a Krylov subspace technique would likely be preferred.

Jack
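
A small, self-contained sketch of the Krylov route for a handful of extreme 
eigenpairs, using the ARPACK-backed eigs that ships in Base on Julia 0.4. The 
matrix below is a sparse stand-in chosen for illustration; the actual n=10^6 
dense case would need a distributed matrix-vector product behind it:

n = 10000                                 # stand-in size for illustration
B = sprandn(n, n, 1e-4)
A = B + B'                                # make the stand-in symmetric
vals, vecs = eigs(A; nev=6, which=:LM)    # six largest-magnitude eigenpairs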

On Tuesday, March 8, 2016 at 8:20:10 PM UTC-8, Jack Poulson wrote:
>
> There were several recent extensions to Elemental's SVD support, as 
> detailed here:
> https://github.com/elemental/Elemental/issues/125
>
> In particular, FULL_SVD, THIN_SVD, COMPACT_SVD, and PRODUCT_SVD are now 
> all supported, where the latter should be used if you do not need small 
> triplets as it uses a more parallelizable reduction to tridiagonal form of 
> A^H A rather than a reduction to bidiagonal form of A (as well as a more 
> parallelized MRRR Hermitian tridiagonal eigensolver instead of a QR 
> algorithm for the bidiagonal SVD).
>
> With that said, it's worth asking if your friend only needs the top k 
> triplets for modest k, as something like Jiahao and Andreas's TSVD 
> implementation on top of a distributed dense matrix-vector product would 
> likely be the way to go (otherwise, a wrapper to SLEPc would be warranted).
>
> Jack
>
> On Tuesday, March 8, 2016 at 12:19:07 PM UTC-8, Tony Kelman wrote:
>>
>> May also want to look into Elemental. Last I checked Elemental uses 
>> Scalapack in a handful of places but I don't think it would be the case 
>> here.
>>
>>
>> On Tuesday, March 8, 2016 at 8:19:15 AM UTC-8, Erik Schnetter wrote:
>>>
>>> A colleague mentioned to me that he needs to diagonalize (find 
>>> eigenvalues and eigenvectors) of a symmetric dense matrix with n=10^6 
>>> (i.e. the matrix has 10^12 entries). ScaLapack seems to be the way to 
>>> go for this. 
>>>
>>> I'm happy to see there is ScaLAPACK.jl. Saying its 
>>> documentation is "Spartan" is an understatement. There is a file 
>>> "test/test.jl" that seems to not be included from "runtests.jl", and 
>>> which thus might be intended as example. 
>>>
>>> The package doesn't pass its tests. If you know more about whether 
>>> this just needs a brush-up or whether major surgery is required I'd 
>>> appreciate feedback. 
>>>
>>> -erik 
>>>
>>> -- 
>>> Erik Schnetter  
>>> http://www.perimeterinstitute.ca/personal/eschnetter/ 
>>>
>>

[julia-users] Re: Eigenvalues of symmetric dense n=10^6 matrix (ScaLAPACK.jl?)

2016-03-08 Thread Jack Poulson
There were several recent extensions to Elemental's SVD support, as 
detailed here:
https://github.com/elemental/Elemental/issues/125

In particular, FULL_SVD, THIN_SVD, COMPACT_SVD, and PRODUCT_SVD are now all 
supported, where the latter should be used if you do not need small triplets 
as it uses a more parallelizable reduction to tridiagonal form of A^H A 
rather than a reduction to bidiagonal form of A (as well as a more 
parallelized MRRR Hermitian tridiagonal eigensolver instead of a QR 
algorithm for the bidiagonal SVD).

With that said, it's worth asking if your friend only needs the top k 
triplets for modest k, as something like Jiahao and Andreas's TSVD 
implementation on top of a distributed dense matrix-vector product would 
likely be the way to go (otherwise, a wrapper to SLEPc would be warranted).

Jack

On Tuesday, March 8, 2016 at 12:19:07 PM UTC-8, Tony Kelman wrote:
>
> May also want to look into Elemental. Last I checked Elemental uses 
> Scalapack in a handful of places but I don't think it would be the case 
> here.
>
>
> On Tuesday, March 8, 2016 at 8:19:15 AM UTC-8, Erik Schnetter wrote:
>>
>> A colleague mentioned to me that he needs to diagonalize (find 
>> eigenvalues and eigenvectors) of a symmetric dense matrix with n=10^6 
>> (i.e. the matrix has 10^12 entries). ScaLapack seems to be the way to 
>> go for this. 
>>
>> I'm happy to see there is ScaLAPACK.jl. Saying its 
>> documentation is "Spartan" is an understatement. There is a file 
>> "test/test.jl" that seems to not be included from "runtests.jl", and 
>> which thus might be intended as example. 
>>
>> The package doesn't pass its tests. If you know more about whether 
>> this just needs a brush-up or whether major surgery is required I'd 
>> appreciate feedback. 
>>
>> -erik 
>>
>> -- 
>> Erik Schnetter  
>> http://www.perimeterinstitute.ca/personal/eschnetter/ 
>>
>

[julia-users] Re: Julia on Android (and/or the web) - a scientific calculator on steroids.. good for tablets

2016-03-08 Thread Viral Shah
I took a hard look at porting Julia to Android with the help of a friend who 
is a core Android developer at Google. The Android toolchain was the most 
daunting part, and I got stuck at cross compilation of Julia's dependencies.

I personally would love to help get Julia on Android, as I have heard many 
such requests myself, but am not very familiar with the Android toolchain. 
I can help out with the Julia side. With x86 support, the right way forward 
would be to find a powerful Android device where one can build Julia from 
source instead of cross compiling - if such a thing is possible.

-viral

On Wednesday, March 9, 2016 at 9:28:06 AM UTC+5:30, Michael G wrote:
>
> I am maintaining the SL4A project and we are getting requests to add Julia 
> to the repo. Is anyone interested in helping out so we can run julia on 
> SL4A??? I would need a little help on implementation since I am unfamiliar 
> with the language
>
> -Michael 
>
> On Thursday, May 28, 2015 at 10:17:08 AM UTC-4, Páll Haraldsson wrote:
>>
>>
>> I've noticed: "I guess we can announce alpha support for arm in 0.4 as 
>> well." (and the other thread on Julia on ARM).
>>
>> Now, Android runs on x86 (already covered, then if you have that kind of 
>> device, no need to wait for ARM support), ARM, and MIPS (actually do not 
>> know of a single device that uses it..).
>>
>>
>> I would like to know the most promising way to support Android and..
>>
>> A. For Firefox OS and the web in general, and hybrid apps, compiling to 
>> JavaScript (or Dart and then to JavaScript) would be a possibility, with 
>> asm.js/Emscripten.
>>
>> B. Just making native Android apps is probably easier. Assuming the ARM 
>> CPU is solved, it seems easier. And iOS would be very similar.. But would 
>> not work for Firefox OS - not a priority for now, but the web in general 
>> would be nice..
>>
>>
>> B. seems more promising except for the tiny/non-existent MIPS "problem".. 
>> Also better long term, for full Android framework support and full Julia 
>> support (concurrency/BLAS etc. that JavaScript would not handle).
>>
>>
>> 1. Just getting Julia to work on Android is the first step. Just the 
>> REPL, wouldn't have to be Juno IDE etc. or GUI stuff.
>>
>> 2. You could do a lot with just the REPL and a real keyboard, or just an 
>> alternative programmer's virtual keyboard.. However, graphing would be nice, 
>> and what would be needed? What are the most promising GUI libraries already 
>> supported by Julia (or not..)? Say Qt, supported by Julia and Android. 
>> Would it just work?
>>
>> 3. Long term, making apps, even standalone (Julia "supports" that) with 
>> Julia. If GUIs work for graphing, is then really anything possible? I know 
>> Android/Java has a huge framework. Google is already supporting Android 
>> with Go (without any Java) as of version 1.4 and with Dart (for hybrid 
>> apps). For Go they have a "framework problem" going to support games at 
>> first. Some people are sceptical about Julia and games because of GC (I'm 
>> not so much). I note Go also has GC..
>>
>> JavaCall.jl only works for JVM not Dalvik or ART. Would it be best to 
>> just use the native C support on Android or somehow go through Go? Anyone 
>> already tried to call Go from Julia? Rust is possible, but doesn't have GC. 
>> Go should be possible, just as Java, but have similar problems..
>>
>> Do/could macros somehow help with supporting the full Android framework? 
>> Julia already has "no overhead" calling; could you generate bindings 
>> automatically from some metadata and/or on the fly?
>>
>>
>> This could be a cool pet project - anyone else working along these lines?
>>
>> Any reason plan B couldn't succeed relatively quickly? There are some 
>> ways to make apps *on* Android already, I think all crappy, Julia wouldn't 
>> be..?
>>
>>
>> Thanks in advance,
>> -- 
>> Palli.
>>
>>

Re: [julia-users] overhead of accessing rows vs columns in SparseMatrixCSC format

2016-03-08 Thread Viral Shah
Also, the general view - which everyone agrees with - is that new sparse 
matrix formats should live in packages rather than Base. This would lead to 
faster development, and Base can be modified to make it easy to add new 
sparse formats.

-viral

On Wednesday, March 9, 2016 at 9:30:31 AM UTC+5:30, Viral Shah wrote:
>
> Nobody is working on it at the moment, that I know of. We almost had a 
> working version then, but it was deemed too complex to include in Base at 
> that time.
>
> -viral
>
> On Wednesday, March 9, 2016 at 6:28:26 AM UTC+5:30, Anonymous wrote:
>>
>> Thank you for the response, I read the 2015 discussion on here about CSR 
>> sparse matrices, do you happen to know the implementation status on when 
>> this will be included?
>>
>> On Monday, March 7, 2016 at 4:59:40 PM UTC-8, Tim Holy wrote:
>>>
>>> Likely yes to both. Best to just test yourself, of course. 
>>>
>>> I'm sure you know this, but you'll want to access them through the 
>>> return 
>>> values of findnz or equivalent, not using S[i,j]. 
>>>
>>> --Tim 
>>>
>>> On Monday, March 07, 2016 10:50:58 AM Anonymous wrote: 
>>> > So I have a sparse matrix which doesn't get modified but which I would 
>>> > like to access by rows. Is there significantly more overhead in accessing 
>>> > rows vs columns?  If so, would it be more efficient to instead access the 
>>> > columns of its transpose? 
>>>
>>>

Re: [julia-users] overhead of accessing rows vs columns in SparseMatrixCSC format

2016-03-08 Thread Viral Shah
Nobody is working on it at the moment, that I know of. We almost had a 
working version then, but it was deemed too complex to include in Base at 
that time.

-viral

On Wednesday, March 9, 2016 at 6:28:26 AM UTC+5:30, Anonymous wrote:
>
> Thank you for the response, I read the 2015 discussion on here about CSR 
> sparse matrices, do you happen to know the implementation status on when 
> this will be included?
>
> On Monday, March 7, 2016 at 4:59:40 PM UTC-8, Tim Holy wrote:
>>
>> Likely yes to both. Best to just test yourself, of course. 
>>
>> I'm sure you know this, but you'll want to access them through the return 
>> values of findnz or equivalent, not using S[i,j]. 
>>
>> --Tim 
>>
>> On Monday, March 07, 2016 10:50:58 AM Anonymous wrote: 
>> > So I have a sparse matrix which doesn't get modified but which I would 
>> > like to access by rows. Is there significantly more overhead in accessing 
>> > rows vs columns?  If so, would it be more efficient to instead access the 
>> > columns of its transpose? 
>>
>>

[julia-users] Re: Julia on Android (and/or the web) - a scientific calculator on steroids.. good for tablets

2016-03-08 Thread Michael G
I am maintaining the SL4A project and we are getting requests to add Julia 
to the repo. Is anyone interested in helping out so we can run Julia on 
SL4A? I would need a little help on the implementation since I am unfamiliar 
with the language.

-Michael 

On Thursday, May 28, 2015 at 10:17:08 AM UTC-4, Páll Haraldsson wrote:
>
>
> I've noticed: "I guess we can announce alpha support for arm in 0.4 as 
> well." (and the other thread on Julia on ARM).
>
> Now, Android runs on x86 (already covered, then if you have that kind of 
> device, no need to wait for ARM support), ARM, and MIPS (actually do not 
> know of a single device that uses it..).
>
>
> I would like to know the most promising way to support Android and..
>
> A. For Firefox OS and the web in general, and hybrid apps, compiling to 
> JavaScript (or Dart and then to JavaScript) would be a possibility, with 
> asm.js/Emscripten.
>
> B. Just making native Android apps is probably easier. Assuming the ARM 
> CPU is solved, it seems easier. And iOS would be very similar.. But would 
> not work for Firefox OS - not a priority for now, but the web in general 
> would be nice..
>
>
> B. seems more promising except for the tiny/non-existent MIPS "problem".. 
> Also better long term, for full Android framework support and full Julia 
> support (concurrency/BLAS etc. that JavaScript would not handle).
>
>
> 1. Just getting Julia to work on Android is the first step. Just the REPL, 
> wouldn't have to be Juno IDE etc. or GUI stuff.
>
> 2. You could do a lot with just the REPL and a real keyboard, or just an 
> alternative programmer's virtual keyboard.. However, graphing would be nice, 
> and what would be needed? What are the most promising GUI libraries already 
> supported by Julia (or not..)? Say Qt, supported by Julia and Android. 
> Would it just work?
>
> 3. Long term, making apps, even standalone (Julia "supports" that) with 
> Julia. If GUIs work for graphing, is then really anything possible? I know 
> Android/Java has a huge framework. Google is already supporting Android 
> with Go (without any Java) as of version 1.4 and with Dart (for hybrid 
> apps). For Go they have a "framework problem" going to support games at 
> first. Some people are sceptical about Julia and games because of GC (I'm 
> not so much). I note Go also has GC..
>
> JavaCall.jl only works for JVM not Dalvik or ART. Would it be best to just 
> use the native C support on Android or somehow go through Go? Anyone 
> already tried to call Go from Julia? Rust is possible, but doesn't have GC. 
> Go should be possible, just as Java, but have similar problems..
>
> Do/could macros somehow help with supporting the full Android framework? 
> Julia already has "no overhead" calling; could you generate bindings 
> automatically from some metadata and/or on the fly?
>
>
> This could be a cool pet project - anyone else working along these lines?
>
> Any reason plan B couldn't succeed relatively quickly? There are some ways 
> to make apps *on* Android already, I think all crappy, Julia wouldn't be..?
>
>
> Thanks in advance,
> -- 
> Palli.
>
>

[julia-users] Is it slow to load packages inside function ?

2016-03-08 Thread Fei Ma
Hello everyone:

I am working on a signal processing project, and I have found something 
strange.

When I use the "plot" function from PyPlot and "matopen" from the MAT 
package, sometimes the function runs very fast, but sometimes it is very 
slow.

I don't know why.


Re: [julia-users] Documentation on Base.uncompressed_ast/ Core.Inference.typeinf_uncached

2016-03-08 Thread Yichao Yu
On Tue, Mar 8, 2016 at 9:18 PM, Julia Tylors  wrote:
> I wasn't able to find any documentation about these.
> Can you point me in the right direction?

These are internal (and unexported) functions that are not meant to be
called directly. You shouldn't be writing code that relies on these
functions since they might break at any time.

Of course you are welcome to learn about Julia internals. The best way
to learn these would be reading the code yourself (which I believe is
the main doc, especially for code like this that is changing all the
time).

I believe typeinf_uncached is (one of) the main entry points to the
type inference system and should return the type-inferred AST (there
are a few other arguments whose meanings should be relatively easy to
guess).

uncompressed_ast is used to turn the AST from a "compressed" form to a
normal AST tree. The compression was done in order to minimize the
number of live objects and make it more GC friendly. (This shouldn't
be a big problem anymore with the generational GC).
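
A small sketch of the public, supported way to look at lowered and inferred 
ASTs on Julia 0.4, rather than calling the internals above directly (the 
function here is made up for illustration):

f(x) = x + 1
code_lowered(f, (Int,))   # lowered, un-inferred AST
code_typed(f, (Int,))     # AST after type inference has run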


>
> Thanks


[julia-users] Documentation on Base.uncompressed_ast/ Core.Inference.typeinf_uncached

2016-03-08 Thread Julia Tylors
I wasn't able to find any documentation about these.
Can you point me in the right direction?

Thanks


Re: [julia-users] overhead of accessing rows vs columns in SparseMatrixCSC format

2016-03-08 Thread Anonymous
Thank you for the response, I read the 2015 discussion on here about CSR 
sparse matrices, do you happen to know the implementation status on when 
this will be included?

On Monday, March 7, 2016 at 4:59:40 PM UTC-8, Tim Holy wrote:
>
> Likely yes to both. Best to just test yourself, of course. 
>
> I'm sure you know this, but you'll want to access them through the return 
> values of findnz or equivalent, not using S[i,j]. 
>
> --Tim 
>
> On Monday, March 07, 2016 10:50:58 AM Anonymous wrote: 
> > So I have a sparse matrix which doesn't get modified but which I would 
> > like to access by rows. Is there significantly more overhead in accessing 
> > rows vs columns?  If so, would it be more efficient to instead access the 
> > columns of its transpose? 
>
>
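
A minimal sketch of both suggestions above, using the 0.4-era Base sparse 
API; the sizes and variable names are made up for illustration:

S = sprand(1000, 1000, 0.01)

# iterate the stored entries via findnz instead of indexing S[i, j]
rows, cols, vals = findnz(S)
rowsums = zeros(size(S, 1))
for k in eachindex(vals)
    rowsums[rows[k]] += vals[k]
end

# or pay for the transpose once, so row access becomes cheap CSC column access
St = S'             # SparseMatrixCSC transpose, built once
row42 = St[:, 42]   # column 42 of St is row 42 of S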

Re: [julia-users] PyPlot color scatter plots

2016-03-08 Thread Tom Breloff
If you run it in the REPL, it should bring up an interactive window
automatically. In IJulia, calling 'gui()' should do the same. (Let me know
if that doesn't work)

Spencer Lyon and I were working on integrating PlotlyJS just today, which
will give great interaction to inline IJulia plots... Stay tuned.

On Tuesday, March 8, 2016,  wrote:

> Sorry about asking so many questions for you, but thank you very much.
> Plots was the only package that proved to be efficient in my case!
>
> I would just like to know whether, in these 3D examples using PyPlot as the
> backend, we can rotate the plot.
>
> Thanks
>
> On Tuesday, March 8, 2016 at 10:24:01 AM UTC-6, Tom Breloff wrote:
>>
>> This will work as expected if you call "scatter3d" instead.  The 3D
>> interface needs a bit of an overhaul... I agree that something like
>> "scatter" on x/y/z should properly produce a 3D scatter.  It's on my radar.
>>
>> On Tue, Mar 8, 2016 at 11:16 AM,  wrote:
>>
>>> Is it possible to do this color plot for a 3D example?. In this case the
>>> color of the markers will depend on another expression:
>>>
>>> using Plots
>>> pyplot(leg = false, size = (400,300))
>>> x = linspace(0,20,100)
>>> y = sin(x)
>>> z = cos(x)
>>> scatter(x,y,z, zcolor=20*x + y, marker = (:o, stroke(1)))
>>>
>>>
>>>
>>> On Tuesday, March 8, 2016 at 9:49:36 AM UTC-6, Tom Breloff wrote:

 It's possible you have an old version... try doing
 Pkg.checkout("Plots").  I'll get around to tagging a new version soon.

 On Tue, Mar 8, 2016 at 10:47 AM,  wrote:

>
>
> On Tuesday, March 8, 2016 at 9:41:14 AM UTC-6, mauri...@gmail.com
> wrote:
>>
>> Thank you very much Tom! However, my plot is not showing the colors,
>> it's just black, do you know what can be happening?
>>
>> On Tuesday, March 8, 2016 at 9:33:39 AM UTC-6, Tom Breloff wrote:
>>>
>>> You can almost copy that verbatim with PyPlot.jl, or here's the same
>>> in Plots:
>>>
>>> [image: Inline image 1]
>>>
>>>
>>> On Tue, Mar 8, 2016 at 9:51 AM,  wrote:
>>>
 Hello Julia Users,

 PyPlot has some modules related to color plots. However the
 documentation in Julia doesn't address the following application (in 
 Python):

 import numpy as np
 import matplotlib.pyplot as plt

 x = np.linspace(0, 20, 100)
 y = np.sin(x)
 z = x + 20 * y

 scaled_z = (z - z.min()) / z.ptp()
 colors = plt.cm.coolwarm(scaled_z)

 plt.scatter(x, y, marker='+', edgecolors=colors, s=150, linewidths=4)
 plt.show()




 Does anyone know how can I define the "colors" as above and plot in
 Julia the same example?

 Any information is really helpful.

 Thank you very much.

>>>
>>>

>>
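
A small sketch of one way to translate the matplotlib snippet quoted above 
with PyPlot.jl, passing the color values through scatter's c/cmap keywords 
(which PyPlot forwards to matplotlib unchanged) instead of building an 
explicit colors array; treat it as a sketch rather than the thread's answer:

using PyPlot

x = collect(linspace(0, 20, 100))
y = sin(x)
z = x + 20 * y

scatter(x, y; c=z, cmap="coolwarm", marker="+", s=150, linewidths=4)
colorbar()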


Re: [julia-users] Algorithm performance comparison with python

2016-03-08 Thread Stefan Karpinski
Yes, this is a good example – I reopened the issue because we should do it.

On Tue, Mar 8, 2016 at 6:35 PM, Jérémy Béjanin 
wrote:

> On Tuesday, March 8, 2016 at 5:49:05 PM UTC-5, Stefan Karpinski wrote:
>
>>
>> You're writing the same matrix location each time so the results are the
>> same.
>>
>
> Once again you are correct, the problem was with the python code and I was
> confused because I had assumed it was right.
>
> Back to the original problem:
> I understand that this issue would not occur too often, considering that a
> serious algorithm would always use a sparse matrix (that's my next step!).
> Are there actual use cases where it might be important to use the same
> memory trick as Numpy?
>


Re: [julia-users] PyPlot color scatter plots

2016-03-08 Thread mauriciodeq
Sorry about asking you so many questions, but thank you very much. 
Plots was the only package that proved to be efficient in my case! 

I would just like to know whether, in these 3D examples using PyPlot as the 
backend, we can rotate the plot. 

Thanks

On Tuesday, March 8, 2016 at 10:24:01 AM UTC-6, Tom Breloff wrote:
>
> This will work as expected if you call "scatter3d" instead.  The 3D 
> interface needs a bit of an overhaul... I agree that something like 
> "scatter" on x/y/z should properly produce a 3D scatter.  It's on my radar.
>
> On Tue, Mar 8, 2016 at 11:16 AM,  wrote:
>
>> Is it possible to do this color plot for a 3D example?. In this case the 
>> color of the markers will depend on another expression: 
>>
>> using Plots 
>> pyplot(leg = false, size = (400,300))
>> x = linspace(0,20,100) 
>> y = sin(x) 
>> z = cos(x)
>> scatter(x,y,z, zcolor=20*x + y, marker = (:o, stroke(1)))
>>
>>
>>
>> On Tuesday, March 8, 2016 at 9:49:36 AM UTC-6, Tom Breloff wrote:
>>>
>>> It's possible you have an old version... try doing 
>>> Pkg.checkout("Plots").  I'll get around to tagging a new version soon.
>>>
>>> On Tue, Mar 8, 2016 at 10:47 AM,  wrote:
>>>


 On Tuesday, March 8, 2016 at 9:41:14 AM UTC-6, mauri...@gmail.com 
 wrote:
>
> Thank you very much Tom! However, my plot is not showing the colors, 
> it's just black, do you know what can be happening? 
>
> On Tuesday, March 8, 2016 at 9:33:39 AM UTC-6, Tom Breloff wrote:
>>
>> You can almost copy that verbatim with PyPlot.jl, or here's the same 
>> in Plots:
>>
>> [image: Inline image 1]
>>
>>
>> On Tue, Mar 8, 2016 at 9:51 AM,  wrote:
>>
>>> Hello Julia Users, 
>>>
>>> PyPlot has some modules related to color plots. However the 
>>> documentation in Julia doesn't address the following application (in 
>>> Python): 
>>>
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>>
>>> x = np.linspace(0, 20, 100)
>>> y = np.sin(x)
>>> z = x + 20 * y
>>>
>>> scaled_z = (z - z.min()) / z.ptp()
>>> colors = plt.cm.coolwarm(scaled_z)
>>>
>>> plt.scatter(x, y, marker='+', edgecolors=colors, s=150, linewidths=4)
>>> plt.show()
>>>
>>>
>>>
>>>
>>> Does anyone know how can I define the "colors" as above and plot in 
>>> Julia the same example? 
>>>
>>> Any information is really helpful. 
>>>
>>> Thank you very much.
>>>
>>
>>
>>>
>

Re: [julia-users] Algorithm performance comparison with python

2016-03-08 Thread Stefan Karpinski
On Tue, Mar 8, 2016 at 5:30 PM, Jérémy Béjanin 
wrote:

>
> Yep, that was my mistake, thanks (strange I thought I had compared the
> outputs and they were the same, oh well...).
>

You're writing the same matrix location each time so the results are the
same.


[julia-users] Re: Julia and Atom

2016-03-08 Thread randmstring
Hi, 

if you are using Juno (and therefore Ink), this is a known issue. It will 
most probably be fixed in the next release (also see the corresponding PR).

Regards,
Sebastian

Am Dienstag, 8. März 2016 19:35:36 UTC+1 schrieb Uwe Fechner:
>
> Hello,
>
> I gave Atom a try.
>
> One question: When I close Atom and restart it, it doesn't remember the 
> window size.
>
> Is it possible to convince Atom to start with a wider window?
>
> Best regards:
>
> Uwe
>


Re: [julia-users] Algorithm performance comparison with python

2016-03-08 Thread Jérémy Béjanin

>
> We currently allocate an uninitialized array and then call the bzero 
> function to fill it with zeros – this takes a long time for large arrays. 
> We should instead mmap /dev/zero since that allows the kernel to lazily 
> allocate pages. I suspect this is what NumPy is doing, which in this 
> particular benchmark is an increasingly large benefit since huge chunks of 
> this matrix remain completely zeroed out and can all share a single actual 
> memory page. There's a very old (closed) issue about this that I've 
> reopened: #130 .
>

Thanks for looking into this.
 

> Also: I don't think this matters much but in the Julia version, the line
>
> ham[bra+1,bra+1] = diag_term
>
>
> is outside the `for s in 0:N-2` loop whereas in the Python version, the 
> corresponding line is inside the loop. That favors the Julia version, 
> however.
>

Yep, that was my mistake, thanks (strange I thought I had compared the 
outputs and they were the same, oh well...).
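
A minimal sketch of the allocation cost described above (size made up for 
illustration): zeros() currently allocates and then eagerly zero-fills, 
whereas an uninitialized Array only allocates, so the difference shows up 
directly under @time (0.4-era constructor syntax):

n = 2^13                         # 8192 x 8192 Float64, roughly 0.5 GB
@time A = zeros(n, n)            # allocate + zero-fill
@time B = Array(Float64, n, n)   # allocate only; contents are undefined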


Re: [julia-users] Algorithm performance comparison with python

2016-03-08 Thread Jérémy Béjanin
I have. I don't see anything in particular that would help me here...
>
>

Re: [julia-users] Re: git help needed

2016-03-08 Thread Douglas Bates


On Tuesday, March 8, 2016 at 3:55:03 PM UTC-6, Andreas Noack wrote:
>
> No problem. When you have made a change like this, i.e. a hard reset or a 
> rebase and want to push the changes to an existing branch you'll have to
>
> git push --force origin glmms
>
> because git is trying to prevent you from making a non-recoverable change 
> to your repo. The reason is that when you have made this push, you cannot 
> recover the commit you are trying to remove from the git history.
>
 
Thanks. I think things are settled now. 


Re: [julia-users] Re: git help needed

2016-03-08 Thread Andreas Noack
No problem. When you have made a change like this, i.e. a hard reset or a
rebase and want to push the changes to an existing branch you'll have to

git push --force origin glmms

because git is trying to prevent you from making a non-recoverable change
to your repo. The reason is that when you have made this push, you cannot
recover the commit you are trying to remove from the git history.

On Tue, Mar 8, 2016 at 4:46 PM, Douglas Bates  wrote:

> Thanks, Andreas.  Unfortunately, I am still in trouble.  Sorry to be such
> a PITA.
>
> When I apply those changes that you describe it is okay until I try to
> push the changes.  The response  I get is that I can't push to origin/glmms
> because the branches have diverged.
>
> $ git status
> On branch glmms
> Your branch and 'origin/glmms' have diverged,
> and have 11 and 6 different commits each, respectively.
>   (use "git pull" to merge the remote branch into yours)
> nothing to commit, working directory clean
> bates@thin40:~/.julia/v0.4/MixedModels$ git pull
> Auto-merging src/paramlowertriangular.jl
> CONFLICT (content): Merge conflict in src/paramlowertriangular.jl
> Auto-merging src/logdet.jl
> CONFLICT (content): Merge conflict in src/logdet.jl
> Auto-merging src/linalg.jl
> CONFLICT (content): Merge conflict in src/linalg.jl
> Auto-merging src/inject.jl
> CONFLICT (content): Merge conflict in src/inject.jl
> Auto-merging src/inflate.jl
> CONFLICT (content): Merge conflict in src/inflate.jl
> Auto-merging src/cfactor.jl
> CONFLICT (content): Merge conflict in src/cfactor.jl
> Auto-merging src/bootstrap.jl
> CONFLICT (content): Merge conflict in src/bootstrap.jl
> Auto-merging src/MixedModels.jl
> CONFLICT (content): Merge conflict in src/MixedModels.jl
> Auto-merging src/GLMM/PIRLS.jl
> CONFLICT (content): Merge conflict in src/GLMM/PIRLS.jl
> Automatic merge failed; fix conflicts and then commit the result.
>
>
>
>
> On Tuesday, March 8, 2016 at 2:29:08 PM UTC-6, Andreas Noack wrote:
>>
>> Provided that you have pushed all your changes, I think the easiest
>> solution is to "remove" the commit in which you add the chksqr fix. You can
>> do that with
>>
>> git reset --hard c0b5c41d136013a8e2cd57f5bedd8c96f5d2e3c6 # the commit
>> right before the chksqr changes
>>
>> git cherry-pick b7564f59ac0a8b72a4a86ccc73cb805520d820b7
>>
>> git cherry-pick 021f766768b54686e60a75c023cb00ab0b38a5dc # the two
>> commits after the chksqr
>>
>> git rebase origin/master # now without conflicts
>>
>> You can also just reset hard to
>> https://github.com/andreasnoack/MixedModels.jl/tree/anj/glmms where I've
>> made exactly these changes or I can open a PR with my version.
>>
>> On Tuesday, March 8, 2016 at 1:46:08 PM UTC-5, Douglas Bates wrote:
>>>
>>> Having looked closer (nothing like a public post to cause you to read
>>> again and discover you were wrong) I see that I did change all those files
>>> in the glmms branch that Andreas changed in his pull request.  I had fixed
>>> the issue that Andreas addressed but in a different way and our changes
>>> were in conflict.
>>>
>>> However, I am still in the position that I can't reconcile the glmms
>>> branch and the master branch.
>>>
>>


Re: [julia-users] git help needed

2016-03-08 Thread Kevin Squire
Hi Douglas,

You should just do `git push -f origin glmms`.

This is a force push, and will replace origin/glmms with your local glmms
branch.

Cheers,
   Kevin

On Tuesday, March 8, 2016, Douglas Bates  wrote:

> Thanks, Andreas.  Unfortunately, I am still in trouble.  Sorry to be such
> a PITA.
>
> When I apply those changes that you describe it is okay until I try to
> push the changes.  The response  I get is that I can't push to origin/glmms
> because the branches have diverged.
>
> $ git status
> On branch glmms
> Your branch and 'origin/glmms' have diverged,
> and have 11 and 6 different commits each, respectively.
>   (use "git pull" to merge the remote branch into yours)
> nothing to commit, working directory clean
> bates@thin40:~/.julia/v0.4/MixedModels$ git pull
> Auto-merging src/paramlowertriangular.jl
> CONFLICT (content): Merge conflict in src/paramlowertriangular.jl
> Auto-merging src/logdet.jl
> CONFLICT (content): Merge conflict in src/logdet.jl
> Auto-merging src/linalg.jl
> CONFLICT (content): Merge conflict in src/linalg.jl
> Auto-merging src/inject.jl
> CONFLICT (content): Merge conflict in src/inject.jl
> Auto-merging src/inflate.jl
> CONFLICT (content): Merge conflict in src/inflate.jl
> Auto-merging src/cfactor.jl
> CONFLICT (content): Merge conflict in src/cfactor.jl
> Auto-merging src/bootstrap.jl
> CONFLICT (content): Merge conflict in src/bootstrap.jl
> Auto-merging src/MixedModels.jl
> CONFLICT (content): Merge conflict in src/MixedModels.jl
> Auto-merging src/GLMM/PIRLS.jl
> CONFLICT (content): Merge conflict in src/GLMM/PIRLS.jl
> Automatic merge failed; fix conflicts and then commit the result.
>
>
>
>
> On Tuesday, March 8, 2016 at 2:29:08 PM UTC-6, Andreas Noack wrote:
>>
>> Provided that you have pushed all your changes, I think the easiest
>> solution is to "remove" the commit in which you add the chksqr fix. You can
>> do that with
>>
>> git reset --hard c0b5c41d136013a8e2cd57f5bedd8c96f5d2e3c6 # the commit
>> right before the chksqr changes
>>
>> git cherry-pick b7564f59ac0a8b72a4a86ccc73cb805520d820b7
>>
>> git cherry-pick 021f766768b54686e60a75c023cb00ab0b38a5dc # the two
>> commits after the chksqr
>>
>> git rebase origin/master # now without conflicts
>>
>> You can also just reset hard to
>> https://github.com/andreasnoack/MixedModels.jl/tree/anj/glmms where I've
>> made exactly these changes or I can open a PR with my version.
>>
>> On Tuesday, March 8, 2016 at 1:46:08 PM UTC-5, Douglas Bates wrote:
>>>
>>> Having looked closer (nothing like a public post to cause you to read
>>> again and discover you were wrong) I see that I did change all those files
>>> in the glmms branch that Andreas changed in his pull request.  I had fixed
>>> the issue that Andreas addressed but in a different way and our changes
>>> were in conflict.
>>>
>>> However, I am still in the position that I can't reconcile the glmms
>>> branch and the master branch.
>>>
>>


[julia-users] Re: git help needed

2016-03-08 Thread Douglas Bates
Thanks, Andreas.  Unfortunately, I am still in trouble.  Sorry to be such a 
PITA.

When I apply those changes that you describe it is okay until I try to push 
the changes.  The response  I get is that I can't push to origin/glmms 
because the branches have diverged.

$ git status
On branch glmms
Your branch and 'origin/glmms' have diverged,
and have 11 and 6 different commits each, respectively.
  (use "git pull" to merge the remote branch into yours)
nothing to commit, working directory clean
bates@thin40:~/.julia/v0.4/MixedModels$ git pull
Auto-merging src/paramlowertriangular.jl
CONFLICT (content): Merge conflict in src/paramlowertriangular.jl
Auto-merging src/logdet.jl
CONFLICT (content): Merge conflict in src/logdet.jl
Auto-merging src/linalg.jl
CONFLICT (content): Merge conflict in src/linalg.jl
Auto-merging src/inject.jl
CONFLICT (content): Merge conflict in src/inject.jl
Auto-merging src/inflate.jl
CONFLICT (content): Merge conflict in src/inflate.jl
Auto-merging src/cfactor.jl
CONFLICT (content): Merge conflict in src/cfactor.jl
Auto-merging src/bootstrap.jl
CONFLICT (content): Merge conflict in src/bootstrap.jl
Auto-merging src/MixedModels.jl
CONFLICT (content): Merge conflict in src/MixedModels.jl
Auto-merging src/GLMM/PIRLS.jl
CONFLICT (content): Merge conflict in src/GLMM/PIRLS.jl
Automatic merge failed; fix conflicts and then commit the result.




On Tuesday, March 8, 2016 at 2:29:08 PM UTC-6, Andreas Noack wrote:
>
> Provided that you have pushed all your changes, I think the easiest 
> solution is to "remove" the commit in which you add the chksqr fix. You can 
> do that with
>
> git reset --hard c0b5c41d136013a8e2cd57f5bedd8c96f5d2e3c6 # the commit 
> right before the chksqr changes
>
> git cherry-pick b7564f59ac0a8b72a4a86ccc73cb805520d820b7
>
> git cherry-pick 021f766768b54686e60a75c023cb00ab0b38a5dc # the two commits 
> after the chksqr
>
> git rebase origin/master # now without conflicts
>
> You can also just reset hard to 
> https://github.com/andreasnoack/MixedModels.jl/tree/anj/glmms where I've 
> made exactly these changes or I can open a PR with my version.
>
> On Tuesday, March 8, 2016 at 1:46:08 PM UTC-5, Douglas Bates wrote:
>>
>> Having looked closer (nothing like a public post to cause you to read 
>> again and discover you were wrong) I see that I did change all those files 
>> in the glmms branch that Andreas changed in his pull request.  I had fixed 
>> the issue that Andreas addressed but in a different way and our changes 
>> were in conflict.
>>
>> However, I am still in the position that I can't reconcile the glmms 
>> branch and the master branch.
>>
>

Re: [julia-users] Algorithm performance comparison with python

2016-03-08 Thread Kevin Squire
Hi Jérémy,

Have you had the chance to read through the Julia performance tips yet?

http://docs.julialang.org/en/release-0.4/manual/performance-tips/

Cheers,
  Kevin

On Tuesday, March 8, 2016, Jérémy Béjanin  wrote:

> Hello,
>
> Out of curiosity, I have translated a simple algorithm from Python to Julia in
> order to compare their performance (gist here
> ). I am still a
> relative newcomer and so am not sure why I am seeing worse performance from
> julia.
>
> The code was not optimized for Python (or Julia, for that matter).
>
> The code generates a Hamiltonian for a 1D transverse field ising model,
> and I am varying the number of sites which makes the size of the matrix
> grow exponentially. Here are the timings I get:
>
>
> 
>
> As can be seen, Julia is faster for lower number of sites, but that
> changes at N=13. Also interesting is the CPU and memory use during those
> runs. It seems that Julia uses memory much more aggressively:
>
>
> 
>


[julia-users] Algorithm performance comparison with python

2016-03-08 Thread Jérémy Béjanin
Hello,

Out of curiosity, I have translated a simple algorithm from Python to Julia in 
order to compare their performance (gist here). I am still a relative newcomer 
and so am not sure why I am seeing worse performance from Julia.

The code was not optimized for Python (or Julia, for that matter).

The code generates a Hamiltonian for a 1D transverse field Ising model, and 
I am varying the number of sites, which makes the size of the matrix grow 
exponentially. Here are the timings I get:



As can be seen, Julia is faster for a lower number of sites, but that changes 
at N=13. Also interesting is the CPU and memory use during those runs. It 
seems that Julia uses memory much more aggressively:




[julia-users] Re: git help needed

2016-03-08 Thread Andreas Noack
Provided that you have pushed all your changes, I think the easiest 
solution is to "remove" the commit in which you add the chksqr fix. You can 
do that with

git reset --hard c0b5c41d136013a8e2cd57f5bedd8c96f5d2e3c6 # the commit 
right before the chksqr changes

git cherry-pick b7564f59ac0a8b72a4a86ccc73cb805520d820b7

git cherry-pick 021f766768b54686e60a75c023cb00ab0b38a5dc # the two commits 
after the chksqr

git rebase origin/master # now without conflicts

You can also just reset hard 
to https://github.com/andreasnoack/MixedModels.jl/tree/anj/glmms where I've 
made exactly these changes or I can open a PR with my version.

On Tuesday, March 8, 2016 at 1:46:08 PM UTC-5, Douglas Bates wrote:
>
> Having looked closer (nothing like a public post to cause you to read 
> again and discover you were wrong) I see that I did change all those files 
> in the glmms branch that Andreas changed in his pull request.  I had fixed 
> the issue that Andreas addressed but in a different way and our changes 
> were in conflict.
>
> However, I am still in the position that I can't reconcile the glmms 
> branch and the master branch.
>


[julia-users] Re: a excellent Julia IDE, JuliaDT

2016-03-08 Thread J Luis
Ok, thanks ... but I will wait for a simpler thing to use.

On Tuesday, 8 March 2016 at 19:32:28 UTC, Avik Sengupta wrote:
>
> Use the "Eclipse IDE for Java Developers". That should give you the least 
> amount of cruft. 
>
> Regards
> -
> Avik
>
> On Tuesday, 8 March 2016 19:00:22 UTC, J Luis wrote:
>>
>> One quick question first: which Eclipse version from the (many) 
>> available options should we install (Java is of no interest to me)?
>>
>> http://www.eclipse.org/downloads/
>>
>> Thanks
>>
>>> On Tuesday, 8 March 2016 at 13:56:39 UTC, Liye zhang wrote:
>>>
>>> If you are trying to find an IDE for Julia which is as convenient as 
>>> PyDev for Python or RStudio for R, you can test JuliaDT. Thanks to the 
>>> authors for their excellent work!
>>>
>>> https://github.com/JuliaComputing/JuliaDT/releases/tag/v0.0.1 
>>>
>>> More about this software,
>>> http://juliacomputing.com/blog/2016/02/06/Eclipse-JuliaDT.html
>>>
>>

[julia-users] Re: Eigenvalues of symmetric dense n=10^6 matrix (ScaLAPACK.jl?)

2016-03-08 Thread Tony Kelman
May also want to look into Elemental. Last I checked, Elemental uses 
ScaLAPACK in a handful of places, but I don't think that would be the case 
here.


On Tuesday, March 8, 2016 at 8:19:15 AM UTC-8, Erik Schnetter wrote:
>
> A colleague mentioned to me that he needs to diagonalize (find 
> eigenvalues and eigenvectors) of a symmetric dense matrix with n=10^6 
> (i.e. the matrix has 10^12 entries). ScaLapack seems to be the way to 
> go for this. 
>
> I'm happy to see there is ScaLAPACK.jl. Saying its 
> documentation is "Spartan" is an understatement. There is a file 
> "test/test.jl" that seems to not be included from "runtests.jl", and 
> which thus might be intended as example. 
>
> The package doesn't pass its tests. If you know more about whether 
> this just needs a brush-up or whether major surgery is required I'd 
> appreciate feedback. 
>
> -erik 
>
> -- 
> Erik Schnetter  
> http://www.perimeterinstitute.ca/personal/eschnetter/ 
>


Re: [julia-dev] Re: [julia-users] Pkg.submit() not working on Julia v0.5

2016-03-08 Thread Ayush Pandey
The GitHub issue is still open, and I guess the solution to the above
problem hasn't been found yet.
I had to switch to a direct connection to be able to run PkgDev.submit().

Yours Sincerely,
Ayush Pandey

On Tue, Mar 8, 2016 at 5:20 AM, Yichao Yu  wrote:

> On Mon, Mar 7, 2016 at 6:46 PM, Ayush Pandey 
> wrote:
> > Pkg.add() error was solved using the above thread.
> > I have already setup my Julia as per the instructions given in the thread
> > still the PkgDev.submit() error persist.
>
> Ahh, in which case it might be related to
> https://github.com/JuliaLang/julia/issues/15381 ?
>
> >
> >
> > Yours Sincerely,
> > Ayush Pandey
> > LinkedIn Profile
> > GitHub
> >
> > On Tue, Mar 8, 2016 at 4:13 AM, Yichao Yu  wrote:
> >>
> >> On Mon, Mar 7, 2016 at 5:32 PM, Ayush Pandey  >
> >> wrote:
> >> > Thanks a lot. It did help :)
> >> > But now I seem to be running into a newer problem.
> >> >
> >> > PkgDev.submit("Convex")
> >> > INFO: Forking JuliaOpt/Convex.jl to Ayush-iitkgp
> >> > Enter host password for user 'Ayush-iitkgp':
> >> >
> >> > ERROR: Unknown value
> >> > Line: 0
> >> > Around: ...HTTP/1.1 201 Created ...
> >> >^
> >> >
> >> >  in error(::ASCIIString) at ./error.jl:21
> >> >  [inlined code] from ./strings/types.jl:185
> >> >  in _error(::ASCIIString, ::JSON.Parser.ParserState{ASCIIString}) at
> >> > /home/hduser/.julia/v0.5/JSON/src/Parser.jl:61
> >> >  in parse_value(::JSON.Parser.ParserState{ASCIIString}, ::Type{T}) at
> >> > /home/hduser/.julia/v0.5/JSON/src/Parser.jl:218
> >> >  in #parse#1(::Type{Dict{K,V}}, ::Any, ::ASCIIString) at
> >> > /home/hduser/.julia/v0.5/JSON/src/Parser.jl:308
> >> >  in token(::ASCIIString) at
> >> > /home/hduser/.julia/v0.5/PkgDev/src/github.jl:78
> >> >  [inlined code] from /home/hduser/.julia/v0.5/PkgDev/src/github.jl:56
> >> >  in req(::ASCIIString, ::Void, ::Cmd) at
> >> > /home/hduser/.julia/v0.5/PkgDev/src/github.jl:88
> >> >  in POST(::ASCIIString) at
> >> > /home/hduser/.julia/v0.5/PkgDev/src/github.jl:101
> >> >  in fork(::SubString{UTF8String}, ::SubString{UTF8String}) at
> >> > /home/hduser/.julia/v0.5/PkgDev/src/github.jl:114
> >> >  in (::PkgDev.Entry.##2#3)(::Base.LibGit2.GitRepo) at
> >> > /home/hduser/.julia/v0.5/PkgDev/src/entry.jl:32
> >> >  in with(::PkgDev.Entry.##2#3, ::Base.LibGit2.GitRepo) at
> >> > ./libgit2/types.jl:481
> >> >  in #pull_request#1(::ASCIIString, ::ASCIIString, ::ASCIIString,
> ::Any,
> >> > ::ASCIIString) at /home/hduser/.julia/v0.5/PkgDev/src/entry.jl:13
> >> >  [inlined code] from ./boot.jl:331
> >> >  in submit(::ASCIIString, ::ASCIIString) at
> >> > /home/hduser/.julia/v0.5/PkgDev/src/entry.jl:45
> >> >  [inlined code] from /home/hduser/.julia/v0.5/PkgDev/src/entry.jl:43
> >> >  in
> >> >
> >> >
> (::Base.Pkg.Dir.##2#3{Array{Any,1},PkgDev.Entry.#submit,Tuple{ASCIIString}})()
> >> > at ./pkg/dir.jl:31
> >> >  in
> >> >
> >> >
> cd(::Base.Pkg.Dir.##2#3{Array{Any,1},PkgDev.Entry.#submit,Tuple{ASCIIString}},
> >> > ::UTF8String) at ./file.jl:47
> >> >  in #cd#1(::Array{Any,1}, ::Any, ::Any, ::ASCIIString,
> >> > ::Vararg{ASCIIString}) at ./pkg/dir.jl:31
> >> >  [inlined code] from ./boot.jl:331
> >> >  in submit(::ASCIIString) at
> >> > /home/hduser/.julia/v0.5/PkgDev/src/PkgDev.jl:44
> >> >  in eval(::Module, ::Any) at ./boot.jl:267
> >> >
> >> > I am behind a corporate proxy.
> >> >
> >> >
> >> > When I run  ssh -T g...@github.com
> >> > Hi Ayush-iitkgp! You've successfully authenticated, but GitHub does
> not
> >> > provide shell access.
> >> >
> >> > It is giving the expected result.
> >> >
> >> > I am using Corkscrew and my ~/.ssh/config file has the following
> >> > configuration
> >> > User git
> >> >   Hostname ssh.github.com
> >> >   Port 443
> >> >   ProxyCommand corkscrew 10.3.100.207 8080 %h %p
> >> >   IdentityFile /home/hduser/.ssh/id_rsa
> >> >
> >> > Any help would be appreciated :)
> >>
> >> See https://github.com/JuliaLang/julia/issues/13472
> >>
> >> >
> >> >
> >> > Yours Sincerely,
> >> > Ayush Pandey
> >> > LinkedIn Profile
> >> > GitHub
> >> >
> >> > On Tue, Mar 8, 2016 at 2:57 AM, Tim Holy  wrote:
> >> >>
> >> >> The error message could be more explicit, but I suspect it should be
> >> >> `PkgDev.submit`, not `Pkg.submit`. You have to say `using PkgDev`
> >> >> before
> >> >> it
> >> >> will work.
> >> >>
> >> >> Best,
> >> >> --Tim
> >> >>
> >> >> On Tuesday, March 08, 2016 01:07:56 AM Ayush Pandey wrote:
> >> >> > Hi,
> >> >> >
> >> >> > I am trying to submit a patch to the Convex.jl package following the
> >> >> > instructions from
> >> >> > http://docs.julialang.org/en/release-0.4/manual/packages/
> >> >> >
> >> >> > The error comes when I run
> >> >> > Pkg.submit("Convex")
> >> >> >
> >> >> > The following error comes:
> 

[julia-users] Re: git help needed

2016-03-08 Thread Douglas Bates
Having looked closer (nothing like a public post to cause you to read again 
and discover you were wrong) I see that I did change all those files in the 
glmms branch that Andreas changed in his pull request.  I  hadfixed the 
issue that Andreas addressed but in a different way and our changes were in 
conflict.

However, I am still in the position that I can't reconcile the glmms branch 
and the master branch.


[julia-users] Julia and Atom

2016-03-08 Thread Uwe Fechner
Hello,

I gave Atom a try.

One question: When I close Atom and restart it, it doesn't remember the 
window size.

Is it possible to convince Atom to start with a wider window?

Best regards:

Uwe


Re: [julia-users] Fixed Size Memory Allocation When Randomly Reading Array

2016-03-08 Thread Jared Crean

D'oh.  Sometimes it's the simple things.

Thanks Yichao

Jared Crean



On 03/08/2016 11:34 AM, Yichao Yu wrote:



On Mar 8, 2016 11:32 AM, "Yichao Yu" wrote:

>
> On Tue, Mar 8, 2016 at 11:17 AM, Jared Crean wrote:

> > Hello,
> > I did a micro-benchmark about using Ints or UInt8s as indices for
> > randomly reading an array, and discovered that when using Ints (but not
> > UInt8s), there is an unexpected memory allocation. More interestingly, the
> > size of the memory allocation does not depend on the size of the array (n
> > in the code below):
> >
> > function sumjc(val_arr, idx_arr)
> > # random read sum
> >   sum = 0.0
> >   for i=1:length(idx_arr)
> > idx = idx_arr[i]
> > sum += val_arr[idx]
> >   end
> >
> >   return sum
> > end
> >
> > function sumjc2(val_arr)
> > # sequential read sum
> >   sum = 0.0
> > for i=1:length(val_arr)
> >   sum += val_arr[i]
> > end
> >
> >   return sum
> > end
> >
> > n = 200
> > a = round(Int, n*rand(n)) + 1  # Int indices
> > b = UInt8[ i for i in a]  # UInt8 indices
> > vals = rand(n + 1)
> >
> > # warm up
> > sumjc(vals, a)
> > sumjc(vals, b)
> > sumjc2(vals)
> >
> > # results
> > @time sumjc(vals, a)
> > println("Int random read @time printed above")
> > @time sumjc(vals, b)
> > println("UInt8 random read @time printed above")
> > @time sumjc2(vals)
> > println("standard sum @time printed above")
> >
> > The output I get is:
> >4.083 microseconds (156 allocations: 10861 bytes) # ???
>
> Functions used by @time itself need to be compiled and cause memory
> allocation.

And also other initialization is needed when you call it the first time 
in global scope.


>
> > Int random read @time printed above
> >2.716 microseconds (5 allocations: 160 bytes)
> > UInt8 random read @time printed above
> >1.715 microseconds (5 allocations: 160 bytes)
> > standard sum @time printed above
> >
> >
> > Is this a bug that should be reported on Github, or is this expected
> > behavior?  I am using the 0.4 release version of Julia.
> >
> >   Jared Crean





Re: [julia-users] Fixed Size Memory Allocation When Randomly Reading Array

2016-03-08 Thread Yichao Yu
On Mar 8, 2016 11:32 AM, "Yichao Yu"  wrote:
>
> On Tue, Mar 8, 2016 at 11:17 AM, Jared Crean  wrote:
> > Hello,
> > I did a micro-benchmark about using Ints or UInt8s as indices for
> > randomly reading an array, and discovered that when using Ints (but not
> > UInt8s), there is an unexpected memory allocation.  More interestingly, the
> > size of the memory allocation does not depend on the size of the array (n
> > in the code below):
> >
> > function sumjc(val_arr, idx_arr)
> > # random read sum
> >   sum = 0.0
> >   for i=1:length(idx_arr)
> > idx = idx_arr[i]
> > sum += val_arr[idx]
> >   end
> >
> >   return sum
> > end
> >
> > function sumjc2(val_arr)
> > # sequential read sum
> >   sum = 0.0
> > for i=1:length(val_arr)
> >   sum += val_arr[i]
> > end
> >
> >   return sum
> > end
> >
> > n = 200
> > a = round(Int, n*rand(n)) + 1  # Int indices
> > b = UInt8[ i for i in a]  # UInt8 indices
> > vals = rand(n + 1)
> >
> > # warm up
> > sumjc(vals, a)
> > sumjc(vals, b)
> > sumjc2(vals)
> >
> > # results
> > @time sumjc(vals, a)
> > println("Int random read @time printed above")
> > @time sumjc(vals, b)
> > println("UInt8 random read @time printed above")
> > @time sumjc2(vals)
> > println("standard sum @time printed above")
> >
> > The output I get is:
> >4.083 microseconds (156 allocations: 10861 bytes)  # ???
>
> Functions used by @time itself need to be compiled and cause memory
> allocation.

And also other initialization is needed when you call it the first time in
global scope.

>
> > Int random read @time printed above
> >2.716 microseconds (5 allocations: 160 bytes)
> > UInt8 random read @time printed above
> >1.715 microseconds (5 allocations: 160 bytes)
> > standard sum @time printed above
> >
> >
> > Is this a bug that should be reported on Github, or is this expected
> > behavior?  I am using the 0.4 release version of Julia.
> >
> >   Jared Crean


Re: [julia-users] Fixed Size Memory Allocation When Randomly Reading Array

2016-03-08 Thread Yichao Yu
On Tue, Mar 8, 2016 at 11:17 AM, Jared Crean  wrote:
> Hello,
> I did a micro-benchmark about using Ints or UInt8s as indices for
> randomly reading an array, and discovered that when using Ints (but not
> UInt8s), there is an unexpected memory allocation.  More interestingly, the
> size of the memory allocation does not depend on the size of the array (n in
> the code below):
>
> function sumjc(val_arr, idx_arr)
> # random read sum
>   sum = 0.0
>   for i=1:length(idx_arr)
> idx = idx_arr[i]
> sum += val_arr[idx]
>   end
>
>   return sum
> end
>
> function sumjc2(val_arr)
> # sequential read sum
>   sum = 0.0
> for i=1:length(val_arr)
>   sum += val_arr[i]
> end
>
>   return sum
> end
>
> n = 200
> a = round(Int, n*rand(n)) + 1  # Int indices
> b = UInt8[ i for i in a]  # UInt8 indices
> vals = rand(n + 1)
>
> # warm up
> sumjc(vals, a)
> sumjc(vals, b)
> sumjc2(vals)
>
> # results
> @time sumjc(vals, a)
> println("Int random read @time printed above")
> @time sumjc(vals, b)
> println("UInt8 random read @time printed above")
> @time sumjc2(vals)
> println("standard sum @time printed above")
>
> The output I get is:
>4.083 microseconds (156 allocations: 10861 bytes)  # ???

Functions used by @time itself need to be compiled and cause memory
allocation.

> Int random read @time printed above
>2.716 microseconds (5 allocations: 160 bytes)
> UInt8 random read @time printed above
>1.715 microseconds (5 allocations: 160 bytes)
> standard sum @time printed above
>
>
> Is this a bug that should be reported on Github, or is this expected
> behavior?  I am using the 0.4 release version of Julia.
>
>   Jared Crean
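
A minimal sketch of the point above (names made up for illustration): the 
first @time call in global scope also compiles @time's own machinery, so the 
reported allocations only become meaningful from the second call on:

f(x) = sum(x)
v = rand(1000)
f(v)            # warm up f itself
@time f(v)      # first @time: extra allocations from compiling @time's machinery
@time f(v)      # second @time: reflects f's actual allocation behavior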


Re: [julia-users] PyPlot color scatter plots

2016-03-08 Thread Tom Breloff
This will work as expected if you call "scatter3d" instead.  The 3D
interface needs a bit of an overhaul... I agree that something like
"scatter" on x/y/z should properly produce a 3D scatter.  It's on my radar.

On Tue, Mar 8, 2016 at 11:16 AM,  wrote:

> Is it possible to do this color plot for a 3D example?. In this case the
> color of the markers will depend on another expression:
>
> using Plots
> pyplot(leg = false, size = (400,300))
> x = linspace(0,20,100)
> y = sin(x)
> z = cos(x)
> scatter(x,y,z, zcolor=20*x + y, marker = (:o, stroke(1)))
>
>
>
> On Tuesday, March 8, 2016 at 9:49:36 AM UTC-6, Tom Breloff wrote:
>>
>> It's possible you have an old version... try doing
>> Pkg.checkout("Plots").  I'll get around to tagging a new version soon.
>>
>> On Tue, Mar 8, 2016 at 10:47 AM,  wrote:
>>
>>>
>>>
>>> On Tuesday, March 8, 2016 at 9:41:14 AM UTC-6, mauri...@gmail.com wrote:

 Thank you very much Tom! However, my plot is not showing the colors,
 it's just black, do you know what can be happening?

 On Tuesday, March 8, 2016 at 9:33:39 AM UTC-6, Tom Breloff wrote:
>
> You can almost copy that verbatim with PyPlot.jl, or here's the same
> in Plots:
>
> [image: Inline image 1]
>
>
> On Tue, Mar 8, 2016 at 9:51 AM,  wrote:
>
>> Hello Julia Users,
>>
>> PyPlot has some modules related to color plots. However the
>> documentation in Julia doesn't address the following application (in 
>> Python):
>>
>> import numpy as np
>> import matplotlib.pyplot as plt
>>
>> x = np.linspace(0, 20, 100)
>> y = np.sin(x)
>> z = x + 20 * y
>>
>> scaled_z = (z - z.min()) / z.ptp()
>> colors = plt.cm.coolwarm(scaled_z)
>>
>> plt.scatter(x, y, marker='+', edgecolors=colors, s=150, linewidths=4)
>> plt.show()
>>
>>
>>
>>
>> Does anyone know how can I define the "colors" as above and plot in
>> Julia the same example?
>>
>> Any information is really helpful.
>>
>> Thank you very much.
>>
>
>
>>


[julia-users] Eigenvalues of symmetric dense n=10^6 matrix (ScaLAPACK.jl?)

2016-03-08 Thread Erik Schnetter
A colleague mentioned to me that he needs to diagonalize (find
eigenvalues and eigenvectors) of a symmetric dense matrix with n=10^6
(i.e. the matrix has 10^12 entries). ScaLapack seems to be the way to
go for this.

I'm happy to see there is a ScaLAPACK.jl package. Saying its
documentation is "Spartan" is an understatement. There is a file
"test/test.jl" that seems to not be included from "runtests.jl", and
which thus might be intended as an example.

The package doesn't pass its tests. If you know more about whether
this just needs a brush-up or whether major surgery is required I'd
appreciate feedback.

-erik

-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Data Structure / Pair question

2016-03-08 Thread Christopher Alexander
In case anyone is interested, I actually just decided to use a Matrix; when 
I needed it, I would call sortrows and put each column into a values vector 
and a weights vector (for further calculations), respectively. I thought this 
was easier, and I noticed that sortrows keeps the two columns "together" (in 
the sense that, upon sorting, the values in columns A & B stay paired).

You can see the result 
here: https://github.com/pazzo83/QuantLib.jl/blob/master/src/math/statistics.jl
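
A minimal sketch of the idea (the sample numbers are made up):

data = [3.0 0.2;
        1.0 0.5;
        2.0 0.3]          # column 1 = values, column 2 = weights
sorted = sortrows(data)   # rows stay together, so each weight keeps its value
vals = sorted[:, 1]
wts  = sorted[:, 2]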

If anyone has a possibly more efficient way of doing this, please share!

- Chris

On Monday, March 7, 2016 at 7:39:45 PM UTC-5, Christopher Alexander wrote:
>
> Many thanks, these are very helpful!  Yes, the two vectors are going to be 
> of the same type.  The StructsOfArrays package is interesting too, as let's 
> say you have a construction like this:
>
> *arr = StructOfArrays(Pair{Float64, Float64}, 100)*
>
> You can go ahead, and populate this as you need.  If you need all the 
> firsts or seconds of the pairs, you can access the object's array param 
> (arr.arrays), and all the firsts are in the first array, and all the 
> seconds are in the second array.  I noticed that at a certain size, the 
> sort algo must change, and I needed to override the resize! method for the 
> StructOfArrays type.  I will compare the speed vs some of these other 
> options.
>
> Thanks!!
>
> On Monday, March 7, 2016 at 6:13:22 PM UTC-5, tshort wrote:
>>
>> There are several options to "keep things together", particularly with 
>> vectors of the same type:
>>
>> - DataFrame columns -- watch how you use columns to keep type stability
>>
>> - Nx2 Array
>>
>> - Nx2 NamedArray:
>> https://github.com/davidavdav/NamedArrays.jl
>>
>> - AxisArrays:
>> https://github.com/mbauman/AxisArrays.jl
>>
>>
>> On Mon, Mar 7, 2016 at 5:11 PM, Christopher Alexander  
>> wrote:
>>
>>> Yea, I was thinking about two different vectors, but then if I did any 
>>> sorting, the value vector and weight vector would be out-of-sync.  I'll 
>>> check out this StructsOfArrays package
>>>
>>> Thanks!
>>>
>>> Chris
>>>
>>> On Monday, March 7, 2016 at 5:03:14 PM UTC-5, tshort wrote:

 It depends on what "various weighted statistical calculations" 
 involves. I'd start with two vectors, `x` and `w`. If you really need them 
 to be coupled tightly, you could define an immutable type to hold the 
 value 
 and the weight, but the two separate vectors can be faster for some 
 operations. Also, see:

 https://github.com/simonster/StructsOfArrays.jl

 On Mon, Mar 7, 2016 at 4:50 PM, Christopher Alexander <
 uvap...@gmail.com> wrote:

> Hello all, I need to create a structure where I keep track of pairs of 
> value => weight so that I can do various weighted statistical 
> calculations.
>
> I know that StatsBase has a weights vector, which I plan on using, but 
> the way that is set up is that it is disassociated from each of the 
> values 
> to which the weights are to be applied.
>
> I need the mapping that "Pair" provides, but I've noticed that there 
> is no easy way, if I have an array of pairs, to grab all the first values 
> or all the second values (like you can do with a dict in grabbing keys or 
> values).
>
> I've tried to do something like map(first, my_array_of_pairs), but 
> this is about 10x slower than if you have a dictionary of value => weight 
> and just asked for the keys.  I actually tried to use a dict at first, 
> but 
> ran into issues with duplicate values (they were overwriting each other 
> because the value was the key).
>
> Any suggestions, or any better way to manipulate an array of Pairs?
>
> Thanks!
>
> Chris
>


>>

[julia-users] Fixed Size Memory Allocation When Randomly Reading Array

2016-03-08 Thread Jared Crean
Hello,
I did a micro-benchmark about using Ints or UInt8s as indices for 
randomly reading an array, and discovered that when using Ints (but not 
UInt8s), there is an unexpected memory allocation.  More interestingly, the 
size of the memory allocation does not depend on the size of the array (n 
in the code below):

function sumjc(val_arr, idx_arr)
# random read sum
  sum = 0.0
  for i=1:length(idx_arr)
idx = idx_arr[i]
sum += val_arr[idx]
  end

  return sum
end

function sumjc2(val_arr)
# sequential read sum
  sum = 0.0
for i=1:length(val_arr)
  sum += val_arr[i]
end

  return sum
end

n = 200
a = round(Int, n*rand(n)) + 1  # Int indices
b = UInt8[ i for i in a]  # UInt8 indices
vals = rand(n + 1)

# warm up
sumjc(vals, a)
sumjc(vals, b)
sumjc2(vals)

# results
@time sumjc(vals, a)
println("Int random read @time printed above")
@time sumjc(vals, b)
println("UInt8 random read @time printed above")
@time sumjc2(vals)
println("standard sum @time printed above")

The output I get is:
   4.083 microseconds (156 allocations: 10861 bytes)  # ???
Int random read @time printed above
   2.716 microseconds (5 allocations: 160 bytes)
UInt8 random read @time printed above
   1.715 microseconds (5 allocations: 160 bytes)
standard sum @time printed above


Is this a bug that should be reported on Github, or is this expected 
behavior?  I am using the 0.4 release version of Julia.

  Jared Crean


Re: [julia-users] PyPlot color scatter plots

2016-03-08 Thread mauriciodeq
Is it possible to do this color plot for a 3D example? In this case the 
color of the markers will depend on another expression: 

using Plots 
pyplot(leg = false, size = (400,300))
x = linspace(0,20,100) 
y = sin(x) 
z = cos(x)
scatter(x,y,z, zcolor=20*x + y, marker = (:o, stroke(1)))



On Tuesday, March 8, 2016 at 9:49:36 AM UTC-6, Tom Breloff wrote:
>
> It's possible you have an old version... try doing Pkg.checkout("Plots").  
> I'll get around to tagging a new version soon.
>
> On Tue, Mar 8, 2016 at 10:47 AM,  wrote:
>
>>
>>
>> On Tuesday, March 8, 2016 at 9:41:14 AM UTC-6, mauri...@gmail.com wrote:
>>>
>>> Thank you very much Tom! However, my plot is not showing the colors, 
>>> it's just black, do you know what can be happening? 
>>>
>>> On Tuesday, March 8, 2016 at 9:33:39 AM UTC-6, Tom Breloff wrote:

 You can almost copy that verbatim with PyPlot.jl, or here's the same in 
 Plots:

 [image: Inline image 1]


 On Tue, Mar 8, 2016 at 9:51 AM,  wrote:

> Hello Julia Users, 
>
> PyPlot has some modules related to color plots. However the 
> documentation in Julia don't address the following application (in 
> Python): 
>
> import numpy as npimport matplotlib.pyplot as plt
>
> x = np.linspace(0, 20, 100)
> y = np.sin(x)
> z = x + 20 * y
>
> scaled_z = (z - z.min()) / z.ptp()
> colors = plt.cm.coolwarm(scaled_z)
>
> plt.scatter(x, y, marker='+', edgecolors=colors, s=150, linewidths=4)
> plt.show()
>
>
>
>
> Does anyone know how can I define the "colors" as above and plot in 
> Julia the same example? 
>
> Any information is really helpful. 
>
> Thank you very much.
>


>

Re: [julia-users] How about "array(foo)" or "int(foo)" to convert "foo" to an array or int?

2016-03-08 Thread Stefan Karpinski
Nice. My next branch will be called LoTS

On Tue, Mar 8, 2016 at 10:58 AM, Tom Breloff  wrote:

> There should only remain String (concrete) and AbstractString in Base.
>
>
> One string to rule them all...
>
> [image: Inline image 1]
>
> On Tue, Mar 8, 2016 at 10:14 AM, Milan Bouchet-Valat 
> wrote:
>
>> Le mardi 08 mars 2016 à 16:08 +0100, Daniel Carrera a écrit :
>> >
>> > On 8 March 2016 at 15:52, Milan Bouchet-Valat 
>> > wrote:
>> > > > Array("hello")
>> > > This case is tricky since Array{Int}(1) creates a vector with one
>> > > element, not an array containing 1. So for consistency we have to
>> raise
>> > > an error for non-integer arguments.
>> >
>> > Array(1) fails, and Array{Int}(1) gives me 140180935446712 ??!!!
>> >
>> > julia> Array{Int}(1)
>> > 1-element Array{Int64,1}:
>> >  140180935446712
>> This is a one-element array with uninitialized memory. There's a
>> discussion going on about whether to change this to return 0 or not:
>> https://github.com/JuliaLang/julia/issues/9147
>>
>> > > > String(10)
>> > > String isn't a concrete type currently in Julia, that's the old name
>> > > for AbstractString. But the plans is to move to a single string type,
>> > > so this could work. I agree that it would be more logical than writing
>> > > string() in small case as currently.
>> >
>> > Yeah. Since `string()` works, String() could just be made to do what
>> > `string()` does today.
>> >
>> > Can you tell me about the plans to move to a single string type? Does
>> > that mean that the proliferation of string types (AbstractString,
>> > ASCIIString, UTF8String, etc) is going to end?
>> See https://github.com/JuliaLang/julia/pull/14383
>>
>> There should only remain String (concrete) and AbstractString in Base.
>> Packages will still be able to provide custom string types.
>>
>> > > Lower-case functions have been deprecated as much as possible. See
>> > > above about string vs. String. so I don't think we're going to add new
>> > > ones.
>> > Ok. What's the issue with lower-case functions?
>> That they are redundant, inconsistent or both with those starting with
>> an upper-case?
>>
>>
>> Regards
>>
>
>


Re: [julia-users] How about "array(foo)" or "int(foo)" to convert "foo" to an array or int?

2016-03-08 Thread Tom Breloff
>
> There should only remain String (concrete) and AbstractString in Base.


One string to rule them all...

[image: Inline image 1]

On Tue, Mar 8, 2016 at 10:14 AM, Milan Bouchet-Valat 
wrote:

> Le mardi 08 mars 2016 à 16:08 +0100, Daniel Carrera a écrit :
> >
> > On 8 March 2016 at 15:52, Milan Bouchet-Valat 
> > wrote:
> > > > Array("hello")
> > > This case is tricky since Array{Int}(1) creates a vector with one
> > > element, not an array containing 1. So for consistency we have to raise
> > > an error for non-integer arguments.
> >
> > Array(1) fails, and Array{Int}(1) gives me 140180935446712 ??!!!
> >
> > julia> Array{Int}(1)
> > 1-element Array{Int64,1}:
> >  140180935446712
> This is a one-element array with uninitialized memory. There's a
> discussion going on about whether to change this to return 0 or not:
> https://github.com/JuliaLang/julia/issues/9147
>
> > > > String(10)
> > > String isn't a concrete type currently in Julia, that's the old name
> > > for AbstractString. But the plans is to move to a single string type,
> > > so this could work. I agree that it would be more logical than writing
> > > string() in small case as currently.
> >
> > Yeah. Since `string()` works, String() could just be made to do what
> > `string()` does today.
> >
> > Can you tell me about the plans to move to a single string type? Does
> > that mean that the proliferation of string types (AbstractString,
> > ASCIIString, UTF8String, etc) is going to end?
> See https://github.com/JuliaLang/julia/pull/14383
>
> There should only remain String (concrete) and AbstractString in Base.
> Packages will still be able to provide custom string types.
>
> > > Lower-case functions have been deprecated as much as possible. See
> > > above about string vs. String. so I don't think we're going to add new
> > > ones.
> > Ok. What's the issue with lower-case functions?
> That they are redundant, inconsistent or both with those starting with
> an upper-case?
>
>
> Regards
>


Re: [julia-users] Re: how to paste png into ipython julia notebook?

2016-03-08 Thread Steven G. Johnson
Jupyter just added a feature to drag-and-drop or paste images into markdown 
cells that should make it a lot easier to create notebooks with technical 
illustrations: https://github.com/jupyter/notebook/pull/621


Re: [julia-users] PyPlot color scatter plots

2016-03-08 Thread Tom Breloff
It's possible you have an old version... try doing Pkg.checkout("Plots").
I'll get around to tagging a new version soon.

On Tue, Mar 8, 2016 at 10:47 AM,  wrote:

>
>
> On Tuesday, March 8, 2016 at 9:41:14 AM UTC-6, mauri...@gmail.com wrote:
>>
>> Thank you very much Tom! However, my plot is not showing the colors, it's
>> just black, do you know what can be happening?
>>
>> On Tuesday, March 8, 2016 at 9:33:39 AM UTC-6, Tom Breloff wrote:
>>>
>>> You can almost copy that verbatim with PyPlot.jl, or here's the same in
>>> Plots:
>>>
>>> [image: Inline image 1]
>>>
>>>
>>> On Tue, Mar 8, 2016 at 9:51 AM,  wrote:
>>>
 Hello Julia Users,

 PyPlot has some modules related to color plots. However the
 documentation in Julia don't address the following application (in Python):

 import numpy as npimport matplotlib.pyplot as plt

 x = np.linspace(0, 20, 100)
 y = np.sin(x)
 z = x + 20 * y

 scaled_z = (z - z.min()) / z.ptp()
 colors = plt.cm.coolwarm(scaled_z)

 plt.scatter(x, y, marker='+', edgecolors=colors, s=150, linewidths=4)
 plt.show()




 Does anyone know how can I define the "colors" as above and plot in
 Julia the same example?

 Any information is really helpful.

 Thank you very much.

>>>
>>>


Re: [julia-users] PyPlot color scatter plots

2016-03-08 Thread mauriciodeq


On Tuesday, March 8, 2016 at 9:41:14 AM UTC-6, mauri...@gmail.com wrote:
>
> Thank you very much Tom! However, my plot is not showing the colors, it's 
> just black, do you know what can be happening? 
>
> On Tuesday, March 8, 2016 at 9:33:39 AM UTC-6, Tom Breloff wrote:
>>
>> You can almost copy that verbatim with PyPlot.jl, or here's the same in 
>> Plots:
>>
>> [image: Inline image 1]
>>
>>
>> On Tue, Mar 8, 2016 at 9:51 AM,  wrote:
>>
>>> Hello Julia Users, 
>>>
>>> PyPlot has some modules related to color plots. However the 
>>> documentation in Julia don't address the following application (in Python): 
>>>
>>> import numpy as npimport matplotlib.pyplot as plt
>>>
>>> x = np.linspace(0, 20, 100)
>>> y = np.sin(x)
>>> z = x + 20 * y
>>>
>>> scaled_z = (z - z.min()) / z.ptp()
>>> colors = plt.cm.coolwarm(scaled_z)
>>>
>>> plt.scatter(x, y, marker='+', edgecolors=colors, s=150, linewidths=4)
>>> plt.show()
>>>
>>>
>>>
>>>
>>> Does anyone know how can I define the "colors" as above and plot in 
>>> Julia the same example? 
>>>
>>> Any information is really helpful. 
>>>
>>> Thank you very much.
>>>
>>
>>

Re: [julia-users] How about "array(foo)" or "int(foo)" to convert "foo" to an array or int?

2016-03-08 Thread Daniel Carrera
On 8 March 2016 at 16:14, Milan Bouchet-Valat  wrote:

> > julia> Array{Int}(1)
> > 1-element Array{Int64,1}:
> >  140180935446712
> This is a one-element array with uninitialized memory. There's a
> discussion going on about whether to change this to return 0 or not:
> https://github.com/JuliaLang/julia/issues/9147
>
>
Ok... So I was thinking that there could be a function called array() or
Array() that does what collect() does right now. Just for consistency.



> > Can you tell me about the plans to move to a single string type? Does
> > that mean that the proliferation of string types (AbstractString,
> > ASCIIString, UTF8String, etc) is going to end?
> See https://github.com/JuliaLang/julia/pull/14383
>
> There should only remain String (concrete) and AbstractString in Base.
> Packages will still be able to provide custom string types.
>


That is really good news. Then it would make sense to un-deprecate
Base.String and make it do something similar to what `string()` does today.

 Cheers,
Daniel.


Re: [julia-users] PyPlot color scatter plots

2016-03-08 Thread mauriciodeq
Thank you very much Tom! 

On Tuesday, March 8, 2016 at 9:33:39 AM UTC-6, Tom Breloff wrote:
>
> You can almost copy that verbatim with PyPlot.jl, or here's the same in 
> Plots:
>
> [image: Inline image 1]
>
>
> On Tue, Mar 8, 2016 at 9:51 AM,  wrote:
>
>> Hello Julia Users, 
>>
>> PyPlot has some modules related to color plots. However the documentation 
>> in Julia don't address the following application (in Python): 
>>
>> import numpy as npimport matplotlib.pyplot as plt
>>
>> x = np.linspace(0, 20, 100)
>> y = np.sin(x)
>> z = x + 20 * y
>>
>> scaled_z = (z - z.min()) / z.ptp()
>> colors = plt.cm.coolwarm(scaled_z)
>>
>> plt.scatter(x, y, marker='+', edgecolors=colors, s=150, linewidths=4)
>> plt.show()
>>
>>
>>
>>
>> Does anyone know how can I define the "colors" as above and plot in Julia 
>> the same example? 
>>
>> Any information is really helpful. 
>>
>> Thank you very much.
>>
>
>

Re: [julia-users] PyPlot color scatter plots

2016-03-08 Thread Tom Breloff
You can almost copy that verbatim with PyPlot.jl, or here's the same in
Plots:

[image: Inline image 1]


On Tue, Mar 8, 2016 at 9:51 AM,  wrote:

> Hello Julia Users,
>
> PyPlot has some modules related to color plots. However the documentation
> in Julia don't address the following application (in Python):
>
> import numpy as npimport matplotlib.pyplot as plt
>
> x = np.linspace(0, 20, 100)
> y = np.sin(x)
> z = x + 20 * y
>
> scaled_z = (z - z.min()) / z.ptp()
> colors = plt.cm.coolwarm(scaled_z)
>
> plt.scatter(x, y, marker='+', edgecolors=colors, s=150, linewidths=4)
> plt.show()
>
>
>
>
> Does anyone know how can I define the "colors" as above and plot in Julia
> the same example?
>
> Any information is really helpful.
>
> Thank you very much.
>


Re: [julia-users] How about "array(foo)" or "int(foo)" to convert "foo" to an array or int?

2016-03-08 Thread Milan Bouchet-Valat
On Tuesday, March 8, 2016 at 16:08 +0100, Daniel Carrera wrote:
> 
> On 8 March 2016 at 15:52, Milan Bouchet-Valat 
> wrote:
> > > Array("hello")
> > This case is tricky since Array{Int}(1) creates a vector with one
> > element, not an array containing 1. So for consistency we have to raise
> > an error for non-integer arguments.
> 
> Array(1) fails, and Array{Int}(1) gives me 140180935446712 ??!!!
> 
> julia> Array{Int}(1)
> 1-element Array{Int64,1}:
>  140180935446712
This is a one-element array with uninitialized memory. There's a
discussion going on about whether to change this to return 0 or not:
https://github.com/JuliaLang/julia/issues/9147

> > > String(10)
> > String isn't a concrete type currently in Julia, that's the old name
> > for AbstractString. But the plans is to move to a single string type,
> > so this could work. I agree that it would be more logical than writing
> > string() in small case as currently.
> 
> Yeah. Since `string()` works, String() could just be made to do what
> `string()` does today.
> 
> Can you tell me about the plans to move to a single string type? Does
> that mean that the proliferation of string types (AbstractString,
> ASCIIString, UTF8String, etc) is going to end?
See https://github.com/JuliaLang/julia/pull/14383

There should only remain String (concrete) and AbstractString in Base.
Packages will still be able to provide custom string types.

> > Lower-case functions have been deprecated as much as possible. See
> > above about string vs. String. so I don't think we're going to add new
> > ones.
> Ok. What's the issue with lower-case functions?
That they are redundant, inconsistent, or both, compared with those starting
with an upper-case letter?


Regards


Re: [julia-users] Handling optional/keyword arguments

2016-03-08 Thread Milan Bouchet-Valat
On Tuesday, March 8, 2016 at 05:39 -0800, 'Tim Loderhose' via julia-users
wrote:
> Hello everyone,
> I started using Julia recently and am very pleased with the many
> options I am given to write code.
> Now I want to translate a toolbox I wrote in MATLAB to Julia (It's
> about MRI reconstruction).
> 
> There are often lots of options in my function arguments, something
> which I handled in a quite ugly way in Matlab, for example:
> A function which loads an image takes in a filename and varargin,
> where varargin can be certain counters I need to construct a matrix
> from binary data.
> If no varargin is supplied, the function calls another function which
> will load these counters from a seperate header file.
> If they are supplied, it expects them to be correct and in the right
> order to function.
> 
> Now in Julia I have the option to specify all the variables needed in
> the arguments, and give them default values.
> The default, however, depends on the file which I am loading - if the
> extra arguments are not supplied, I would call the other function
> which gives me the defaults.
> 
> Does anyone know an elegant way to handle this in Julia?
> I thought of making keyword arguments which I could all set to 0.
> Then I can check that at the beginning of the function and load the
> defaults if necessary. But that seems ugly to me.
> 
> I put some example code here: 
> https://gist.github.com/timlod/7bb82e4796dd6404ccfa
The code in that gist uses optional positional arguments, not keyword
arguments. The difference is that keyword arguments are specified using
their names, so that you may e.g. skip the first one but still provide
the second one.

Using 0 as a special value is fine. Even though it looks a bit weird,
users do not necessarily need to see this technical detail: they can
either supply the argument or not, without ever passing zero.

Tom's solution is another possibility, which might look cleaner, but
which has the drawback that you need to check yourself that all passed
keyword arguments are valid and of the correct type. So personally I
would only use it when you don't know in advance the list of possible
arguments, or when there are too many of them.


Regards

> If this is too vague, I'll try to concretise it more :)
> Thanks,
> Tim
> 
> 


Re: [julia-users] How about "array(foo)" or "int(foo)" to convert "foo" to an array or int?

2016-03-08 Thread Yichao Yu
On Tue, Mar 8, 2016 at 10:08 AM, Daniel Carrera  wrote:
>
> On 8 March 2016 at 15:52, Milan Bouchet-Valat  wrote:
>>
>> > Array("hello")
>> This case is tricky since Array{Int}(1) creates a vector with one
>> element, not an array containing 1. So for consistency we have to raise
>> an error for non-integer arguments.
>
>
>
> Array(1) fails, and Array{Int}(1) gives me 140180935446712 ??!!!
>
> julia> Array{Int}(1)
> 1-element Array{Int64,1}:
>  140180935446712

isbits types are not initialized. Use zeros if you want to zero initialize them.
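
A minimal illustration (the uninitialized value shown will differ from run to
run):

zeros(Int, 1)    # 1-element Array{Int64,1} containing 0
Array{Int}(1)    # 1-element array holding whatever bits happened to be in memory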

>
>
>> > String(10)
>> String isn't a concrete type currently in Julia, that's the old name
>> for AbstractString. But the plans is to move to a single string type,
>> so this could work. I agree that it would be more logical than writing
>> string() in small case as currently.
>
>
>
> Yeah. Since `string()` works, String() could just be made to do what
> `string()` does today.
>
> Can you tell me about the plans to move to a single string type? Does that
> mean that the proliferation of string types (AbstractString, ASCIIString,
> UTF8String, etc) is going to end?
>
>
>>
>> Lower-case functions have been deprecated as much as possible. See
>> above about string vs. String. so I don't think we're going to add new
>> ones.
>
>
> Ok. What's the issue with lower-case functions?
>
> Cheers,
> Daniel.


Re: [julia-users] How about "array(foo)" or "int(foo)" to convert "foo" to an array or int?

2016-03-08 Thread Daniel Carrera
On 8 March 2016 at 15:52, Milan Bouchet-Valat  wrote:

> > Array("hello")
> This case is tricky since Array{Int}(1) creates a vector with one
> element, not an array containing 1. So for consistency we have to raise
> an error for non-integer arguments.
>


Array(1) fails, and Array{Int}(1) gives me 140180935446712 ??!!!

julia> Array{Int}(1)
1-element Array{Int64,1}:
 140180935446712


> String(10)
> String isn't a concrete type currently in Julia, that's the old name
> for AbstractString. But the plans is to move to a single string type,
> so this could work. I agree that it would be more logical than writing
> string() in small case as currently.
>


Yeah. Since `string()` works, String() could just be made to do what
`string()` does today.

Can you tell me about the plans to move to a single string type? Does that
mean that the proliferation of string types (AbstractString, ASCIIString,
UTF8String, etc) is going to end?



> Lower-case functions have been deprecated as much as possible. See
> above about string vs. String. so I don't think we're going to add new
> ones.
>

Ok. What's the issue with lower-case functions?

Cheers,
Daniel.


Re: [julia-users] How about "array(foo)" or "int(foo)" to convert "foo" to an array or int?

2016-03-08 Thread Daniel Carrera
I don't think Julia is UN-reasonable. I just think it could look
cleaner and more consistent than it does today.

Cheers,
Daniel.

On 8 March 2016 at 14:40, Tim Holy  wrote:

> julia> Int(3.0)
> 3
>
> julia> Int(3.2)
> ERROR: InexactError()
>  in call at essentials.jl:56
>
> That's pretty reasonable, no?
>
> --Tim
>
> On Tuesday, March 08, 2016 04:12:34 AM Daniel Carrera wrote:
> > Hello,
> >
> > In Julia 0.4 the "int(foo)" syntax was deprecated:
> >
> > julia> int(3.2)
> > WARNING: int(x::AbstractFloat) is deprecated, use round(Int,x) instead.
> > ...
> > 3
> >
> > I am happy with `round(Int,x)` but recently I noticed that Python
> > consistently uses the name of the a type as the name of the function to
> > convert values into that type. Compare:
> >
> > Python:
> >
> > list("hello") # Creates a `list` type.
> > str(10) # Creates a `str` type.
> > int(3.2) # Creates an `int` type.
> > set("hello") # Creates a `set` type.
> >
> >
> > Julia:
> >
> > collect("hello") # Creates an Array
> > string(10) # Creates an ASCIIString
> > round(Int,3.2) # Creates an Int
> > Set("hello") # Creates a Set.
> >
> > I think the Python guys are onto something. It is easy to remember that
> the
> > name of the function is the name of the type. Do you think there is any
> > merit in Julia copying that idea? In Julia the equivalent might be
> > something like this:
> >
> > Array("hello")
> > String(10)
> > Int(3.2)
> > Set("hello")
> >
> > Currently only the last one works. The others give errors, and String is
> in
> > fact deprecated. We could try the lower case versions:
> >
> > array("hello")
> > string(10)
> > int(3.2)
> > set("hello")
> >
> > Now string(10) works, but int(3.2) is deprecated. The others don't exist
> > but could exist:
> >
> > set(x) = Set(x)
> > array(x) = collect(x)
> >
> > I think it would be nice for Julia add this extra bit of consistency in
> the
> > language. What do you think?
> >
> > Cheers,
> > Daniel.
>
>


Re: [julia-users] Meshgrid function

2016-03-08 Thread mauriciodeq
Thank you very much, it was really helpful!

On Tuesday, March 8, 2016 at 1:00:53 AM UTC-6, Tomas Lycken wrote:
>
> The thing with meshgrid, is that it’s terribly inefficient compared to 
> the equivalent idioms in Julia. Instead of implementing it and keeping with 
> inefficient habits, embrace the fact that there are better ways to do 
> things :)
>
> Since this question is recurring, I think the problem might rather be 
> documentation. As a user looking for a meshgrid function, where did you 
> look? What did you look for, more specifically? Would it have helped to 
> have some examples along the lines of “when you want to do this, using 
> meshgrid, in MATLAB, here’s how to do the same thing without meshgrid in 
> Julia”? I could turn the following writeup into a FAQ entry or something, 
> but it would be nice to know where to place it to actually have it make a 
> difference.
> Making do without meshgrid (and getting out ahead!) 
>
> For starters, if what you are after is a straight-up port, meshgrid is 
> *very* easy to implement for yourself, so that you don’t have to think 
> about the rest. *Mind you, this is not the most performant way to do 
> this!*
>
> meshgrid(xs, ys) = [xs[i] for i in 1:length(xs), j in length(ys)], [ys[j] for 
> i in 1:length(xs), j in 1:length(ys)]
>
> Try it out with X, Y = meshgrid(1:15, 1:8). (If you’re not happy with the 
> orientation of the axes, just switch i and j in the meshgrid 
> implementation.)
>
> However, most of the uses of meshgrid in MATLAB is for stuff where you 
> want to do a bunch of calculations for each point on a grid, and where each 
> calculation is dependent on the coordinates of just one point. Usually 
> something along the lines of
>
> X, Y = meshgrid(xs, ys)
> Z = sin(X) + 2 * Y.^2 ./ cos(X + Y)
>
> In Julia, this is better done with a list comprehension altogether, 
> eliminating the need for allocating X and Y:
>
> Z = [sin(x) + 2 * y^2 / cos(x+y) for x in xs, y in ys]
>
> If you think it’s now become too difficult to read what the actual 
> transformation is, just refactor it out into a function:
>
> foo(x,y) = sin(x) + 2 * y^2 / cos(x+y)
> Z = [foo(x,y) for x in xs, y in ys]
>
> If you do this a lot, you might want to avoid writing the list 
> comprehension altogether; that’s easy too. Just create a function that does 
> that for you. With a little clever interface design, you’ll see that this 
> is even more useful than what you could do with meshgrid. 
>
> inflate(f, xs, ys) = [f(x,y) for x in xs, y in ys]
>
> # call version 1: with a named function
> inflate(foo, xs, ys) 
>
> # call version 2: anonymous function syntax
> inflate((x,y) -> sin(x) + 2 * y^2 / cos(x,y), xs, ys)
>
> # call version 3: with a do block
> inflate(xs, ys) do x, y
> sin(x) + 2 * y^2 / cos(x,y)
> end
>
> Note that the last version of this call is very flexible; in fact, it’s 
> just as flexible as having a named function. For example, you can do 
> everything in multiple steps, allowing for intermediate results that don’t 
> have to allocate large arrays (as they would have, had you used meshgrid), 
> making it easier to be explicit about whatever algorithm you’re using.
>
> However, the two last versions of these calls are not the most performant 
> as of yet, since they’re using anonymous functions. This is changing, 
> though, and in 0.5 this won’t be an issue anymore. Until then, for large 
> arrays I’d use the first call version, with a named function, to squeeze as 
> much as possible out of this.
>
> I did a small benchmark of this using a straight list comprehension (i.e. 
> without the inflate function) vs using meshgrid. Here are the results:
>
> julia> foo(x,y) = sin(x) + 2y.^2 ./ cos(x+y);
>
> julia> function bench_meshgrid(N)
>X, Y = meshgrid(1:N, 1:N)
>foo(X, Y)
>end;
>
> julia> function bench_comprehension(N)
>[foo(x,y) for x in 1:N, y in 1:N]
>end;
>
> # warmup omitted
>
> julia> @time bench_comprehension(1_000);
>   0.033284 seconds (6 allocations: 7.630 MB)
>
> julia> @time bench_meshgrid(1_000);
>   0.071993 seconds (70 allocations: 68.666 MB, 5.12% gc time)
>
> julia> @time bench_comprehension(10_000);
>   3.547387 seconds (6 allocations: 762.940 MB, 0.06% gc time)
>
> julia> @time bench_meshgrid(10_000);
> ERROR: OutOfMemoryError()
>  in .* at arraymath.jl:118
>  in foo at none:1
>  in bench_meshgrid at none:3
>
> Note the memory usage: for the list comprehension, it’s entirely dominated 
> by the result (10_000^2 * 8 / 1024^2 ~= 762.939 MB), while meshgrid eats 
> almost ten times as much memory (the exact ratio will vary very much 
> depending on the function foo and how many intermediate arrays it 
> creates). For large inputs, meshgrid doesn't even make it.
>
> // T
>
> On Monday, March 7, 2016 at 5:35:13 PM UTC+1, Christoph Ortner wrote:
>
> Have a look at 
>>   http://matlabcompat.github.io
>>
>> If it doesn't have a mesh grid function, then maybe mesh grid from the 
>> examples 

Re: [julia-users] How about "array(foo)" or "int(foo)" to convert "foo" to an array or int?

2016-03-08 Thread Milan Bouchet-Valat
On Tuesday, March 8, 2016 at 04:12 -0800, Daniel Carrera wrote:
> Hello,
> 
> In Julia 0.4 the "int(foo)" syntax was deprecated:
> 
> julia> int(3.2)
> WARNING: int(x::AbstractFloat) is deprecated, use round(Int,x)
> instead.
> ...
> 3
> 
> I am happy with `round(Int,x)` but recently I noticed that Python
> consistently uses the name of the a type as the name of the function
> to convert values into that type. Compare:
> 
> Python:
> 
> list("hello") # Creates a `list` type.
> str(10) # Creates a `str` type.
> int(3.2) # Creates an `int` type.
> set("hello") # Creates a `set` type.
> 
> 
> Julia:
> 
> collect("hello") # Creates an Array
> string(10) # Creates an ASCIIString
> round(Int,3.2) # Creates an Int
> Set("hello") # Creates a Set.
> 
> I think the Python guys are onto something. It is easy to remember
> that the name of the function is the name of the type. Do you think
> there is any merit in Julia copying that idea? In Julia the
> equivalent might be something like this:
> 
> Array("hello")
This case is tricky since Array{Int}(1) creates a vector with one
element, not an array containing 1. So for consistency we have to raise
an error for non-integer arguments.

> String(10)
String isn't a concrete type currently in Julia, that's the old name
for AbstractString. But the plan is to move to a single string type,
so this could work. I agree that it would be more logical than writing
string() in lower case as is done currently.

> Int(3.2)
As Tim noted, this has to fail for correctness (and to protect
ourselves from potentially dangerous bugs).

> Set("hello")
> 
> Currently only the last one works. The others give errors, and String
> is in fact deprecated. We could try the lower case versions:
> 
> array("hello")
> string(10)
> int(3.2)
> set("hello")
> 
> Now string(10) works, but int(3.2) is deprecated. The others don't
> exist but could exist:
> 
> set(x) = Set(x)
> array(x) = collect(x)
Lower-case functions have been deprecated as much as possible. See
above about string vs. String, so I don't think we're going to add new
ones.


Regards

> I think it would be nice for Julia add this extra bit of consistency
> in the language. What do you think?
> 
> Cheers,
> Daniel.


[julia-users] PyPlot color scatter plots

2016-03-08 Thread mauriciodeq
Hello Julia Users, 

PyPlot has some modules related to color plots. However, the documentation 
in Julia doesn't address the following application (in Python): 

import numpy as npimport matplotlib.pyplot as plt

x = np.linspace(0, 20, 100)
y = np.sin(x)
z = x + 20 * y

scaled_z = (z - z.min()) / z.ptp()
colors = plt.cm.coolwarm(scaled_z)

plt.scatter(x, y, marker='+', edgecolors=colors, s=150, linewidths=4)
plt.show()




Does anyone know how I can define the "colors" as above and plot the same 
example in Julia? 

Any information is really helpful. 

Thank you very much.


[julia-users] Re: GSOC2016

2016-03-08 Thread Miles Lubin
Hi Rohan,

Head over to the julia-opt list and open a discussion there.

Miles

On Tuesday, March 8, 2016 at 8:56:39 AM UTC-5, Rohan Goel wrote:
>
> Hello everyone,
> I want to contribute to the idea "Support for complex numbers within 
> Convex.jl".
> Can someone please help me getting started what all things should I start 
> studying and setup my machine.
> Thankyou
>


[julia-users] Re: ANN: JuMP 0.12 released

2016-03-08 Thread Miles Lubin
This is a bug, I've opened an issue 
here: https://github.com/JuliaOpt/JuMP.jl/issues/695

As a workaround, if you replace sqrt(y0) with 0.0 then the NaNs go away. 
Clearly it shouldn't affect the result since y0 is a constant.
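
For reference, a sketch of what that workaround looks like in the model below;
only this one expression changes, and since y0 = 0.0 the constant sqrt(y0) is
simply replaced by 0.0:

@defNLExpr(nlp, f1,  s1 /(0.0 + sqrt(y[1])))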

On Tuesday, March 8, 2016 at 2:46:12 AM UTC-5, JP Dussault wrote:
>
> Hi,
>
> I experience NaN in second derivatives with JuMp 0.12  which  were not 
> present with the previous version. I scaled down one example, a 
> Brachistochrone model (see 
> http://link.springer.com/article/10.1007%2Fs11081-013-9244-4)  to n=4 
> yielding 6 variables and the hessian clearly shows many NaN entries. I 
> double checked with an "equivalent" AMPL version and the hessian is clean 
> (no NaN).
>
> Here is the model as I ran it within the MPB Newton solver notebook.
>
> n=4
> nlp = Model(solver=NewtonSolver())
> eps = 1e-8
>
> xn = 5.0
> yn = 1.0 
> x0 = 0.0
> y0 = 0.0
>
> @defVar(nlp, x[j=1:n-1], start = xn*(j/n))
> @defVar(nlp, y[j=1:n-1], start = (j/n))
> 
> @defNLExpr(nlp, dx1,  (x[1] - x0))
> @defNLExpr(nlp, dxn,  (xn - x[n-1])) 
> @defNLExpr(nlp, dx[j=2:n-1],  (x[j] - x[j-1]))
>
> @defNLExpr(nlp, dy[j=2:n-1],  (y[j] - y[j-1]))
> @defNLExpr(nlp, dy1,  (y[1] - y0))
> @defNLExpr(nlp, dyn,  (yn - y[n-1]))
>
> @defNLExpr(nlp, s[j=2:n-1],  sqrt(dx[j]^2 + dy[j]^2))
> @defNLExpr(nlp, s1,  sqrt(dx1^2 + dy1^2))
> @defNLExpr(nlp, sn,  sqrt(dxn^2 + dyn^2))
>
> @defNLExpr(nlp, f[j=2:n-1],  s[j] /(sqrt(y[j-1])+sqrt(y[j])))
> @defNLExpr(nlp, f1,  s1 /(sqrt(y0) + sqrt(y[1])))
> @defNLExpr(nlp, fn,  sn /(sqrt(y[n-1])+sqrt(yn)))
>
> 
> @setNLObjective(
> nlp,
> Min,
> sum{f[i], i = 2:n-1} + f1 + fn
> )
>
>
> status = solve(nlp);
>
> Thx, 
>
>
> JPD
>
>
> Le samedi 27 février 2016 23:14:12 UTC+1, Miles Lubin a écrit :
>>
>> The JuMP team is happy to announce the release of JuMP 0.12.
>>
>> This release features a complete rewrite of JuMP's automatic 
>> differentiation functionality, which is the largest change in JuMP's 
>> nonlinear optimization support since JuMP 0.5. Most of the changes are 
>> under the hood, but as previously announced 
>>  there 
>> are a couple of syntax changes:
>> - The first parameter to @defNLExpr *and* @defExpr should now be the 
>> model object. All linear and nonlinear subexpressions are now attached to 
>> their corresponding model.
>> - If solving a sequence of nonlinear models, you should now use nonlinear 
>> parameters instead of Julia's variable binding rules.
>>
>> Many nonlinear models should see performance improvements in JuMP 0.12, 
>> let us know if you observe otherwise.
>>
>> We also now support user-defined functions 
>>  
>> and *automatic differentiation of user-defined functions*. This is quite 
>> a significant new feature which allows users to integrate (almost) 
>> arbitrary code as a nonlinear function within JuMP expressions, thanks to 
>> ForwardDiff.jl . We're 
>> looking forward to seeing how this feature can be used in practice; please 
>> give us feedback on the syntax and any rough edges you run into.
>>
>> Other changes include:
>> - Changed syntax for iterating over AffExpr objects
>> - Stopping the solver from within a callback now causes the solver to 
>> return :UserLimit instead of throwing an error.
>> - getDual() now works for conic problems (thanks to Emre Yamangil)
>>
>> Given the large number of changes, bugs are possible. Please let us know 
>> of any incorrect or confusing behavior.
>>
>> Miles, Iain, and Joey
>>
>

[julia-users] Handling optional/keyword arguments

2016-03-08 Thread 'Tim Loderhose' via julia-users
Hello everyone,
I started using Julia recently and am very pleased with the many options I 
am given to write code.
Now I want to translate a toolbox I wrote in MATLAB to Julia (It's about 
MRI reconstruction).

There are often lots of options in my function arguments, something which I 
handled in a quite ugly way in Matlab, for example:
A function which loads an image takes in a filename and varargin, where 
varargin can be certain counters I need to construct a matrix from binary 
data.
If no varargin is supplied, the function calls another function which will 
load these counters from a separate header file.
If they are supplied, it expects them to be correct and in the right order 
to function.

Now in Julia I have the option to specify all the variables needed in the 
arguments, and give them default values.
The default, however, depends on the file which I am loading - if the extra 
arguments are not supplied, I would call the other function which gives me 
the defaults.

Does anyone know an elegant way to handle this in Julia?
I thought of making keyword arguments which I could all set to 0. Then I 
can check that at the beginning of the function and load the defaults if 
necessary. But that seems ugly to me.

I put some example code here: 
https://gist.github.com/timlod/7bb82e4796dd6404ccfa

If this is too vague, I'll try to concretise it more :)
Thanks,
Tim




[julia-users] Re: regression from 0.43 to 0.5dev, and back to 0.43 on fedora23

2016-03-08 Thread Johannes Wagner
as an example, the data looks like this:

using TimeIt
v = rand(3)
r = rand(6000,3)
x = linspace(1.0, 2.0, 300) * (v./sqrt(sumabs2(v)))'

*# Julia 0.4 function*

function s04(xl, rl)
result = zeros(size(xl,1))
for i = 1:size(xl,1)
dotprods = rl * xl[i,:]'   
  #1 loops, best of 3: 17.66 µs per loop
imexp  = exp(im .* dotprods)   
#1000 loops, best of 3: 172.33 µs per loop
sumprod  = sum(imexp) * sum(conj(imexp))   #1 
loops, best of 3: 21.04 µs per loop
result[i] = sumprod
end
return result
end

and using @timeit s04(x,r) gives 
10 loops, best of 3: *67.52 ms* per loop
where most of the time is spent in the exp() calls. Now in 0.5dev, the individual 
parts have similar or actually better timings, like the dot product:

*# Julia 0.5 function*

function s05(xl, rl)
result = zeros(size(xl,1))
for i = 1:size(xl,1)
dotprods = rl * xl[i,:] 
   #1 loops, best of 3: 10.99 µs per loop
imexp  = exp(im .* dotprods)  #1000 
loops, best of 3: 158.50 µs per loop
sumprod  = sum(imexp) * sum(conj(imexp))  #1 loops, 
best of 3: 21.81 µs per loop
result[i] = sumprod
end
return result
end

but @timeit s05(x,r) always gives something ~70% worse runtime:
10 loops, best of 3: *113.80 ms* per loop

And it is always the same on my Fedora 23 workstation: individual calls inside the 
function have slightly better performance in 0.5dev, but the whole function 
is slower. But oddly enough, only on my Fedora workstation! On an OS X 
laptop, those 0.5dev speedups from the parts inside the loop translate into 
the expected speedup for the whole function!
So that puzzles me. Could someone perhaps reproduce this with the above 
function and input on a Linux system, preferably also Fedora?

cheers, Johannes

On Friday, February 26, 2016 at 4:28:05 PM UTC+1, Kristoffer Carlsson wrote:
>
> What code and where is it spending time? You talk about openblas, does it 
> mean that blas got slower for you? How about peakflops() on the different 
> versions?
>
> On Friday, February 26, 2016 at 4:08:06 PM UTC+1, Johannes Wagner wrote:
>>
>> hey guys,
>> I just experienced something weird. I have some code that runs fine on 
>> 0.43, then I updated to 0.5dev to test the new Arrays, run same code and 
>> noticed it got about ~50% slower. Then I downgraded back to 0.43, ran the 
>> old code, but speed remained slow. I noticed while reinstalling 0.43, 
>> openblas-threads didn't get isntalled along with it. So I manually 
>> installed it, but no change. 
>> Does anyone has an idea what could be going on? LLVM on fedora23 is 3.7
>>
>> Cheers, Johannes
>>
>

[julia-users] an excellent Julia IDE, JuliaDT

2016-03-08 Thread Liye zhang
If you are trying to find an IDE for Julia that is as convenient as PyDev 
for Python or RStudio for R, you can test JuliaDT. Thanks to the authors for 
their excellent work!

https://github.com/JuliaComputing/JuliaDT/releases/tag/v0.0.1 

More about this software,
http://juliacomputing.com/blog/2016/02/06/Eclipse-JuliaDT.html


[julia-users] GSOC2016

2016-03-08 Thread Rohan Goel
Hello everyone,
I want to contribute to the idea "Support for complex numbers within 
Convex.jl".
Can someone please help me get started: what should I start studying, and 
how should I set up my machine?
Thank you


Re: [julia-users] How about "array(foo)" or "int(foo)" to convert "foo" to an array or int?

2016-03-08 Thread Tim Holy
julia> Int(3.0)
3

julia> Int(3.2)
ERROR: InexactError()
 in call at essentials.jl:56

That's pretty reasonable, no?

--Tim

On Tuesday, March 08, 2016 04:12:34 AM Daniel Carrera wrote:
> Hello,
> 
> In Julia 0.4 the "int(foo)" syntax was deprecated:
> 
> julia> int(3.2)
> WARNING: int(x::AbstractFloat) is deprecated, use round(Int,x) instead.
> ...
> 3
> 
> I am happy with `round(Int,x)` but recently I noticed that Python
> consistently uses the name of the a type as the name of the function to
> convert values into that type. Compare:
> 
> Python:
> 
> list("hello") # Creates a `list` type.
> str(10) # Creates a `str` type.
> int(3.2) # Creates an `int` type.
> set("hello") # Creates a `set` type.
> 
> 
> Julia:
> 
> collect("hello") # Creates an Array
> string(10) # Creates an ASCIIString
> round(Int,3.2) # Creates an Int
> Set("hello") # Creates a Set.
> 
> I think the Python guys are onto something. It is easy to remember that the
> name of the function is the name of the type. Do you think there is any
> merit in Julia copying that idea? In Julia the equivalent might be
> something like this:
> 
> Array("hello")
> String(10)
> Int(3.2)
> Set("hello")
> 
> Currently only the last one works. The others give errors, and String is in
> fact deprecated. We could try the lower case versions:
> 
> array("hello")
> string(10)
> int(3.2)
> set("hello")
> 
> Now string(10) works, but int(3.2) is deprecated. The others don't exist
> but could exist:
> 
> set(x) = Set(x)
> array(x) = collect(x)
> 
> I think it would be nice for Julia add this extra bit of consistency in the
> language. What do you think?
> 
> Cheers,
> Daniel.



Re: [julia-users] Re: Array slices and functions that modify inputs.

2016-03-08 Thread Tom Breloff
You could certainly improve your contextual communication; however, I'm
guessing you're pointing out that brackets are treated as slices when on
the left hand side of assignment?  This is true, and good, and really the
only behavior that makes any sense.  But you are right that it isn't
currently consistent.

On Mon, Mar 7, 2016 at 9:49 PM, Daniel Carrera  wrote:

> That is not literally true:
>
> vals[2:3] = vals[3:4]
>
>
> On Tuesday, 8 March 2016 02:37:26 UTC+1, John Myles White wrote:
>>
>> Array indexing produces a brand new array that has literally no
>> relationship with the source array.
>>
>>  -- John
>>
>> On Monday, March 7, 2016 at 5:21:34 PM UTC-8, Daniel Carrera wrote:
>>>
>>> Hello,
>>>
>>> Some Julia functions act on their inputs. For example:
>>>
>>> julia> vals = [6,5,4,3]
>>> 4-element Array{Int64,1}:
>>>  6
>>>  5
>>>  4
>>>  3
>>>
>>> julia> sort!(vals);
>>>
>>> julia> vals
>>> 4-element Array{Int64,1}:
>>>  3
>>>  4
>>>  5
>>>  6
>>>
>>>
>>> However, it looks like these functions do not modify array slices:
>>>
>>> julia> vals = [6,5,4,3]
>>> 4-element Array{Int64,1}:
>>>  6
>>>  5
>>>  4
>>>  3
>>>
>>> julia> sort!(vals[2:end])
>>> 3-element Array{Int64,1}:
>>>  3
>>>  4
>>>  5
>>>
>>> julia> vals
>>> 4-element Array{Int64,1}:
>>>  6
>>>  5
>>>  4
>>>  3
>>>
>>>
>>> Can anyone explain to me why this happens? Is this a language feature?
>>> Is it at all possible to make a destructive function that acts on slices?
>>>
>>> Cheers,
>>> Daniel.
>>>
>>>


[julia-users] Using PyPlot to work with obj files, noob question on arrays

2016-03-08 Thread kleinsplash
Hi,

I want to just load a mesh and plot it along with a bunch of other data on 
one plot. Just simple plotting. I have a way that's working but it's just 
wrong: 


mesh = load(obj)

#mesh = HomogenousMesh(normals: 420xGeometryTypes.Normal{3,Float32}, 
vertices: 420xFixedSizeArrays.Point{3,Float32}, faces: 
840xGeometryTypes.Face{3,UInt32,-1}, )
#typeof(mesh) = 
GeometryTypes.HomogenousMesh{FixedSizeArrays.Point{3,Float32},GeometryTypes.Face{3,UInt32,-1},GeometryTypes.Normal{3,Float32},Void,Void,Void,Void}

# inefficient way of finding the number
x=0
for v in mesh.vertices
x+=1
end 
# set up array with verticies 
vert = Array{Float32}(x,3)
# map to new array 
for i = 1:x
vert[i,:] = [mesh.vertices[i][1], mesh.vertices[i][2], mesh.vertices[i][
3]]
end 
#plot with PyPlot
plot_wireframe(vert[:,1], vert[:,2], vert[:,3])

length(mesh.verticies) doesn't work, and I haven't figured out how to 
instantiate an empty Array without having the dimensions. 


Any help would be much appreciated. 

Thx


Re: [julia-users] Re: Array slices and functions that modify inputs.

2016-03-08 Thread Erik Schnetter
Assigning to a slice and taking a slice are completely different
operations; they merely have similar syntax. After parsing, these two
operations are lowered to two different function calls, `setindex!`
and `getindex`. The former modifies the array, while the latter
creates a new array.
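
A minimal illustration of the difference:

vals = [6, 5, 4, 3]
vals[2:3] = [9, 9]   # lowered to setindex!(vals, [9, 9], 2:3); modifies vals in place
chunk = vals[2:3]    # lowered to getindex(vals, 2:3); allocates a brand-new array
chunk[1] = 0         # changes only the copy; vals is untouched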

There has been a recent discussion whether taking a slice should
return a subarray or should return a new array. The outcome is still
very much in the open; people are currently collecting data (i.e.
studying real-world code examples) to see which variant is "better",
in the sense of "more likely to avoid programming errors" and "runs
faster in general".

As is, if the difference between these two isn't already highlighted
in the documentation, it definitely should be.

-erik

On Tue, Mar 8, 2016 at 6:11 AM, Daniel Carrera  wrote:
> Hello,
>
> Is there any difference besides the obvious one that assigning to a slice
> modifies the array? What I mean is, is there a difference in the internals
> or how it works that I should be aware of? Up until this thread I had been
> under the impression that a slice was a type of link/alias to the original
> array, and that's still my mental picture of how assignment to a slice
> works.
>
> Thanks.
>
>
> On 8 March 2016 at 04:02, Stefan Karpinski  wrote:
>>
>> Assigning to slice is quite different from taking a slice.
>>
>> On Mon, Mar 7, 2016 at 9:49 PM, Daniel Carrera  wrote:
>>>
>>> That is not literally true:
>>>
>>> vals[2:3] = vals[3:4]
>>>
>>>
>>> On Tuesday, 8 March 2016 02:37:26 UTC+1, John Myles White wrote:

 Array indexing produces a brand new array that has literally no
 relationship with the source array.

  -- John

 On Monday, March 7, 2016 at 5:21:34 PM UTC-8, Daniel Carrera wrote:
>
> Hello,
>
> Some Julia functions act on their inputs. For example:
>
> julia> vals = [6,5,4,3]
> 4-element Array{Int64,1}:
>  6
>  5
>  4
>  3
>
> julia> sort!(vals);
>
> julia> vals
> 4-element Array{Int64,1}:
>  3
>  4
>  5
>  6
>
>
> However, it looks like these functions do not modify array slices:
>
> julia> vals = [6,5,4,3]
> 4-element Array{Int64,1}:
>  6
>  5
>  4
>  3
>
> julia> sort!(vals[2:end])
> 3-element Array{Int64,1}:
>  3
>  4
>  5
>
> julia> vals
> 4-element Array{Int64,1}:
>  6
>  5
>  4
>  3
>
>
> Can anyone explain to me why this happens? Is this a language feature?
> Is it at all possible to make a destructive function that acts on slices?
>
> Cheers,
> Daniel.
>
>>
>



-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


[julia-users] How about "array(foo)" or "int(foo)" to convert "foo" to an array or int?

2016-03-08 Thread Daniel Carrera
Hello,

In Julia 0.4 the "int(foo)" syntax was deprecated:

julia> int(3.2)
WARNING: int(x::AbstractFloat) is deprecated, use round(Int,x) instead.
...
3

I am happy with `round(Int,x)` but recently I noticed that Python 
consistently uses the name of the type as the name of the function to 
convert values into that type. Compare:

Python:

list("hello") # Creates a `list` type.
str(10) # Creates a `str` type.
int(3.2) # Creates an `int` type.
set("hello") # Creates a `set` type.


Julia:

collect("hello") # Creates an Array
string(10) # Creates an ASCIIString
round(Int,3.2) # Creates an Int
Set("hello") # Creates a Set.

I think the Python guys are onto something. It is easy to remember that the 
name of the function is the name of the type. Do you think there is any 
merit in Julia copying that idea? In Julia the equivalent might be 
something like this:

Array("hello")
String(10)
Int(3.2)
Set("hello")

Currently only the last one works. The others give errors, and String is in 
fact deprecated. We could try the lower case versions:

array("hello")
string(10)
int(3.2)
set("hello")

Now string(10) works, but int(3.2) is deprecated. The others don't exist 
but could exist:

set(x) = Set(x)
array(x) = collect(x)

I think it would be nice for Julia to add this extra bit of consistency in the 
language. What do you think?

Cheers,
Daniel.


Re: [julia-users] Re: Array slices and functions that modify inputs.

2016-03-08 Thread Daniel Carrera
Hello,

Is there any difference besides the obvious one that assigning to a slice
modifies the array? What I mean is, is there a difference in the internals
or how it works that I should be aware of? Up until this thread I had been
under the impression that a slice was a type of link/alias to the original
array, and that's still my mental picture of how assignment to a slice
works.

Thanks.


On 8 March 2016 at 04:02, Stefan Karpinski  wrote:

> Assigning to slice is quite different from taking a slice.
>
> On Mon, Mar 7, 2016 at 9:49 PM, Daniel Carrera  wrote:
>
>> That is not literally true:
>>
>> vals[2:3] = vals[3:4]
>>
>>
>> On Tuesday, 8 March 2016 02:37:26 UTC+1, John Myles White wrote:
>>>
>>> Array indexing produces a brand new array that has literally no
>>> relationship with the source array.
>>>
>>>  -- John
>>>
>>> On Monday, March 7, 2016 at 5:21:34 PM UTC-8, Daniel Carrera wrote:

 Hello,

 Some Julia functions act on their inputs. For example:

 julia> vals = [6,5,4,3]
 4-element Array{Int64,1}:
  6
  5
  4
  3

 julia> sort!(vals);

 julia> vals
 4-element Array{Int64,1}:
  3
  4
  5
  6


 However, it looks like these functions do not modify array slices:

 julia> vals = [6,5,4,3]
 4-element Array{Int64,1}:
  6
  5
  4
  3

 julia> sort!(vals[2:end])
 3-element Array{Int64,1}:
  3
  4
  5

 julia> vals
 4-element Array{Int64,1}:
  6
  5
  4
  3


 Can anyone explain to me why this happens? Is this a language feature?
 Is it at all possible to make a destructive function that acts on slices?

 Cheers,
 Daniel.


>


[julia-users] Re: Concurrently install two versions of Julia

2016-03-08 Thread Gunnar Farnebäck
The environment variable JULIA_PKGDIR may be of use.
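
A hedged sketch of how it could be used (the paths are examples only): point
each Julia install at its own package directory, either by exporting
JULIA_PKGDIR in the shell before launching Julia, or from within Julia before
touching Pkg:

ENV["JULIA_PKGDIR"] = joinpath(homedir(), ".julia-juno")  # hypothetical per-install path
Pkg.init()   # creates the per-version directory tree on first use
Pkg.dir()    # should now point inside ~/.julia-juno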

On Monday, March 7, 2016 at 05:44:29 UTC+1, Vishnu Raj wrote:
>
> In my system I have Juno with 0.4.2 and julia with 0.4.3. Every time I 
> start one, all my packages are getting recompiled. They are all under same 
> v0.4 folder and I think that's why this is happening.
> Is there a way to avoid this?
>
> On Sunday, March 6, 2016 at 3:28:40 PM UTC+5:30, Andreas Lobinger wrote:
>>
>> Depends on your definition of installed and what system you use. I use 
>> since 0.2 (and the 0.3dev) a local build -on a linux system- and this quite 
>> nicely encapsulated in a single directory inside my home so they live in 
>> parallel. The package directory which is inside .julia is has versioning 
>> (v0.3/v0.4/v0.5), too.
>>
>> On Saturday, March 5, 2016 at 11:48:37 PM UTC+1, Pulkit Agarwal wrote:
>>>
>>> Hi,
>>>
>>> Is there a way to have the stable version of Julia as the global Julia 
>>> (i.e. something which can be accessed by calling `julia` from the command 
>>> line) and the development version of Julia (which will be stored in some 
>>> other folder).
>>>
>>> Thanks,
>>> Pulkit
>>>
>>

Re: [julia-users] Meshgrid function

2016-03-08 Thread elextr
Neat.

Wherever it goes it needs a pointer 
from 
http://docs.julialang.org/en/latest/manual/noteworthy-differences/#noteworthy-differences-from-matlab
 
but it's a bit long to actually go there.

On Tuesday, 8 March 2016 17:00:53 UTC+10, Tomas Lycken wrote:
>
> The thing with meshgrid, is that it’s terribly inefficient compared to 
> the equivalent idioms in Julia. Instead of implementing it and keeping with 
> inefficient habits, embrace the fact that there are better ways to do 
> things :)
>
> Since this question is recurring, I think the problem might rather be 
> documentation. As a user looking for a meshgrid function, where did you 
> look? What did you look for, more specifically? Would it have helped to 
> have some examples along the lines of “when you want to do this, using 
> meshgrid, in MATLAB, here’s how to do the same thing without meshgrid in 
> Julia”? I could turn the following writeup into a FAQ entry or something, 
> but it would be nice to know where to place it to actually have it make a 
> difference.
> Making do without meshgrid (and getting out ahead!) 
>
> For starters, if what you are after is a straight-up port, meshgrid is 
> *very* easy to implement for yourself, so that you don’t have to think 
> about the rest. *Mind you, this is not the most performant way to do 
> this!*
>
> meshgrid(xs, ys) = [xs[i] for i in 1:length(xs), j in 1:length(ys)], [ys[j] for 
> i in 1:length(xs), j in 1:length(ys)]
>
> Try it out with X, Y = meshgrid(1:15, 1:8). (If you’re not happy with the 
> orientation of the axes, just switch i and j in the meshgrid 
> implementation.)
>
> However, most uses of meshgrid in MATLAB are for stuff where you 
> want to do a bunch of calculations for each point on a grid, and where each 
> calculation depends on the coordinates of just one point. Usually 
> something along the lines of
>
> X, Y = meshgrid(xs, ys)
> Z = sin(X) + 2 * Y.^2 ./ cos(X + Y)
>
> In Julia, this is better done with a list comprehension altogether, 
> eliminating the need for allocating X and Y:
>
> Z = [sin(x) + 2 * y^2 / cos(x+y) for x in xs, y in ys]
>
> If you think it’s now become too difficult to read what the actual 
> transformation is, just refactor it out into a function:
>
> foo(x,y) = sin(x) + 2 * y^2 / cos(x+y)
> Z = [foo(x,y) for x in xs, y in ys]
>
> If you do this a lot, you might want to avoid writing the list 
> comprehension altogether; that’s easy too. Just create a function that does 
> that for you. With a little clever interface design, you’ll see that this 
> is even more useful than what you could do with meshgrid. 
>
> inflate(f, xs, ys) = [f(x,y) for x in xs, y in ys]
>
> # call version 1: with a named function
> inflate(foo, xs, ys) 
>
> # call version 2: anonymous function syntax
> inflate((x,y) -> sin(x) + 2 * y^2 / cos(x+y), xs, ys)
>
> # call version 3: with a do block
> inflate(xs, ys) do x, y
>     sin(x) + 2 * y^2 / cos(x+y)
> end
>
> Note that the last version of this call is very flexible; in fact, it’s 
> just as flexible as having a named function. For example, you can do 
> everything in multiple steps, allowing for intermediate results that don’t 
> have to allocate large arrays (as they would have, had you used meshgrid), 
> making it easier to be explicit about whatever algorithm you’re using.
>
> However, the two last versions of these calls are not the most performant 
> as of yet, since they’re using anonymous functions. This is changing, 
> though, and in 0.5 this won’t be an issue anymore. Until then, for large 
> arrays I’d use the first call version, with a named function, to squeeze as 
> much as possible out of this.
>
> I did a small benchmark of this using a straight list comprehension (i.e. 
> without the inflate function) vs using meshgrid. Here are the results:
>
> julia> foo(x,y) = sin(x) + 2y.^2 ./ cos(x+y);
>
> julia> function bench_meshgrid(N)
>            X, Y = meshgrid(1:N, 1:N)
>            foo(X, Y)
>        end;
>
> julia> function bench_comprehension(N)
>            [foo(x,y) for x in 1:N, y in 1:N]
>        end;
>
> # warmup omitted
>
> julia> @time bench_comprehension(1_000);
>   0.033284 seconds (6 allocations: 7.630 MB)
>
> julia> @time bench_meshgrid(1_000);
>   0.071993 seconds (70 allocations: 68.666 MB, 5.12% gc time)
>
> julia> @time bench_comprehension(10_000);
>   3.547387 seconds (6 allocations: 762.940 MB, 0.06% gc time)
>
> julia> @time bench_meshgrid(10_000);
> ERROR: OutOfMemoryError()
>  in .* at arraymath.jl:118
>  in foo at none:1
>  in bench_meshgrid at none:3
>
> Note the memory usage: for the list comprehension, it’s entirely dominated 
> by the result (10_000^2 * 8 / 1024^2 ~= 762.939 MB), while meshgrid eats 
> almost ten times as much memory (the exact ratio will vary very much 
> depending on the function foo and how many intermediate arrays it 
> creates). For large inputs, meshgrid doesn't even make it.
>
> // T
>
> On Monday, March 7, 2016 at 5:35:13 PM UTC+1, Christoph Ortner wrote: