Re: [julia-users] Advice on (perhaps) chunking to HDF5

2016-09-13 Thread Ralph Smith
I have better luck with 

inds = fill(:,3)
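
For instance (a toy sketch, with a plain array standing in for the HDF5 
dataset):

```Julia
# fill(:, n) builds a Vector of n Colon objects.
inds = fill(:, 2)

A = zeros(4, 4, 10)   # stand-in dataset; last dimension indexes samples
slice = ones(4, 4)

# Splatting the colons makes A[inds..., 3] equivalent to A[:, :, 3].
A[inds..., 3] = slice
```

The same splat works for any rank with `inds = fill(:, ndims(A) - 1)`.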

By the way, if anyone appropriate is watching, can we have a sticky post 
about how to format Julia code here?
And is the comprehension form of a one-line "for" loop considered good 
style? I don't see it in the manual anywhere.

On Tuesday, September 13, 2016 at 9:36:58 PM UTC-4, sparrowhawker wrote:
>
> Cool! The colons approach makes sense to me, followed by splatting.
>
> I'm unfamiliar with the syntax here but when I try to create a tuple in 
> the REPL using
>
> inds = ((:) for i in 1:3)
>
> I get 
> ERROR: syntax: missing separator in tuple
>
>
>
> On 13 September 2016 at 17:27, Erik Schnetter wrote:
>
>> If you have a varying rank, then you should probably use something like 
>> `CartesianIndex` and `CartesianRange` to represent the indices, or possibly 
>> tuples of integers. You would then use the splatting operator to create the 
>> indexing instructions:
>>
>> ```Julia
>> indrange = CartesianRange(xyz)
>> dset[indrange..., i] = slicedim
>> ```
>>
>> I don't know whether the expression `indrange...` works as-is, or whether 
>> you have to manually create a tuple of `UnitRange`s.
>>
>> If you want to use colons, then you'd write
>>
>> ```Julia
>> inds = ((:) for i in 1:rank)
>> dset[inds..., i] = xyz
>> ```
>>
>> -erik
>>
>>
>>
>>
>> On Tue, Sep 13, 2016 at 5:08 PM, Anandaroop Ray wrote:
>>
>>> Many thanks for your comprehensive recommendations. I think HDF5 views 
>>> are probably what I need to go with - will read up more and then ask.
>>>
>>> What I mean about dimension is rank, really. The shape is always the 
>>> same for all samples. One slice for storage, i.e., one sample, could be 
>>> chunked as dset[:,:,i] or dset[:,:,:,:,i] but always of the form, 
>>> dset[:,...,:,i], depending on input to the code at runtime.
>>>
>>> Thanks
>>>
>>> On 13 September 2016 at 14:47, Erik Schnetter wrote:
>>>
 On Tue, Sep 13, 2016 at 11:27 AM, sparrowhawker  wrote:

> Hi,
>
> I'm new to Julia, and have been able to accomplish a lot of what I 
> used to do in Matlab/Fortran, in very little time since I started using 
> Julia in the last three months. Here's my newest stumbling block.
>
> I have a process which creates nsamples within a loop. Each sample 
> takes a long time to compute as there are expensive finite difference 
> operations, which ultimately lead to a sample, say 1 to 10 seconds. I 
> have 
> to store each of the nsamples, and I know the size and dimensions of each 
> of the nsamples (all samples have the same size and dimensions). However, 
> depending on the run time parameters, each sample may be a 32x32 image or 
> perhaps a 64x64x64 voxset with 3 attributes, i.e., a 64x64x64x3 
> hyper-rectangle. To be clear, each sample can be an arbitrary dimension 
> hyper-rectangle, specified at run time.
>
> Obviously, since I don't want to lose computation and want to see 
> incremental progress, I'd like to do incremental saves of these samples 
> on 
> disk, instead of waiting to collect all nsamples at the end. For 
> instance, 
> if I had to store 1000 samples of size 64x64, I thought perhaps I could 
> chunk and save 64x64 slices to an HDF5 file 1000 times. Is this the right 
> approach? If so, here's a prototype program to do so, but it depends on 
> my 
> knowing the number of dimensions of the slice, which is not known until 
> runtime,
>
> using HDF5
>
> filename = "test.h5"
> # open file
> fmode ="w"
> # get a file object
> fid = h5open(filename, fmode)
> # matrix to write in chunks
> B = rand(64,64,1000)
> # figure out its dimensions
> sizeTuple = size(B)
> Ndims = length(sizeTuple)
> # set up to write in chunks of sizeArray
> sizeArray = ones(Int, Ndims)
> [sizeArray[i] = sizeTuple[i] for i in 1:(Ndims-1)] # last value of 
> size array is :...:,1
> # create a dataset models within root
> dset = d_create(fid, "models", datatype(Float64), dataspace(size(B)), 
> "chunk", sizeArray)
> [dset[:,:,i] = slicedim(B, Ndims, i) for i in 1:size(B, Ndims)]
> close(fid)
>
> This works, but the second last line, dset[:,:,i] requires syntax 
> specific to writing a slice of a dimension 3 array - but I don't know the 
> dimensions until run time. Of course I could just write to a flat binary 
> file incrementally, but HDF5.jl could make my life so much simpler!
>

 HDF5 supports "extensible datasets", which were created for use cases 
 such as this one. I don't recall the exact syntax, but if I recall 
 correctly, you can specify one dimension (the first one in C, the last one 
 in Julia) to be extensible, and then you can add more data as you go. You 
 will probably need to specify a chunk size, which 
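
Pulling the suggestions in this thread together, here is a sketch of a 
rank-generic version of the slice-writing loop. A plain array stands in for 
the HDF5 dataset so the indexing logic is self-contained; in the real 
program `dset` would come from `d_create` as in the quoted prototype:

```Julia
B = rand(64, 64, 1000)   # toy data; the last dimension indexes samples
dset = zeros(size(B))    # stand-in for the chunked HDF5 dataset

# One Colon per leading dimension, built without knowing the rank in advance.
inds = ntuple(i -> Colon(), ndims(B) - 1)

for i in 1:size(B, ndims(B))
    dset[inds..., i] = B[inds..., i]   # rank-generic slice write
end
```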

[julia-users] Re: julialang home page screenshot's notebook source

2016-09-13 Thread Steven G. Johnson


On Tuesday, September 13, 2016 at 6:48:11 PM UTC-4, Ken Massey wrote:
>
> Sorry for the crazy question ...
>
> On the julialang.org homepage, you'll find:
>   "Here is a screenshot of a web-based interactive IJulia Notebook 
>  session, using Gadfly 
> ."
>
> It seems like I had once actually visited the notebook from which this 
> screenshot was taken:
>
>http://julialang.org/images/ijulia.png
>

If you look at

https://github.com/JuliaLang/julialang.github.com/tree/master/images

you'll see that this image was created by Stefan in 2013 
(https://github.com/JuliaLang/julialang.github.com/commit/ca3645931ec15642f99c186592e3e20edaaa9ded).
It should really be updated...


[julia-users] Re: IJulia documentation example produces ReferenceError: guide_background_mouseover is not defined

2016-09-13 Thread Steven G. Johnson


On Tuesday, September 13, 2016 at 8:14:11 PM UTC-4, Ken Massey wrote:
>
> Do you happen to remember which documentation you were looking at for that 
> example?  I see that code screenshotted on the julialang.org homepage, 
> but can't find the original notebook anywhere. 
>

 This is a pretty ancient thread; almost any nontrivial code example from 
2014 would have to be updated nowadays.


[julia-users] Re: Priority queue - peek versus dequeue!

2016-09-13 Thread David P. Sanders


El martes, 13 de septiembre de 2016, 17:16:19 (UTC-4), Júlio Hoffimann 
escribió:
>
> Hi,
>
> Could you explain why "peek" returns the pair (key,value) whereas 
> "dequeue!" only returns the key?
>
> using Base.Collections
> pq = PriorityQueue()
> enqueue!(pq, key, value)
> key, value = Collections.peek(pq)
> key = dequeue!(pq)
>
> I wanted to have a single line in which I retrieve both (key,value) and at 
> the same time remove the pair from the collection:
>
> key, value = dequeue!(pq)
>
> Should I open an issue on GitHub?
>


I also wondered that at some point. Yes please to an issue.
It also seems to me that for consistency, this should be called pop! rather 
than dequeue!, and enqueue! should be push!
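
In the meantime, a wrapper can combine the two existing calls; the name 
`dequeue_pair!` here is hypothetical, not part of the Base.Collections API 
quoted above:

```Julia
# Return the highest-priority (key, value) pair and remove it, by combining
# peek (non-destructive) with dequeue! (which returns only the key).
function dequeue_pair!(pq)
    key, value = peek(pq)
    dequeue!(pq)
    return key, value
end
```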

 

>
> -Júlio
>


Re: [julia-users] Advice on (perhaps) chunking to HDF5

2016-09-13 Thread Anandaroop Ray
Cool! The colons approach makes sense to me, followed by splatting.

I'm unfamiliar with the syntax here but when I try to create a tuple in the
REPL using

inds = ((:) for i in 1:3)

I get
ERROR: syntax: missing separator in tuple
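
The "missing separator in tuple" error is what pre-0.5 Julia reports for a 
generator expression (generators were only added in 0.5). A sketch of a 
version-independent alternative using `ntuple`; the array `B` is a toy 
stand-in:

```Julia
rank = 3
# ntuple calls the function once per index, giving an NTuple of Colons.
inds = ntuple(i -> Colon(), rank)

B = rand(2, 2, 2, 5)   # toy array; the last dimension indexes samples
# With three colons splatted, this is the same as B[:, :, :, 4].
slice = B[ntuple(i -> Colon(), ndims(B) - 1)..., 4]
```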



On 13 September 2016 at 17:27, Erik Schnetter  wrote:

> If you have a varying rank, then you should probably use something like
> `CartesianIndex` and `CartesianRange` to represent the indices, or possibly
> tuples of integers. You would then use the splatting operator to create the
> indexing instructions:
>
> ```Julia
> indrange = CartesianRange(xyz)
> dset[indrange..., i] = slicedim
> ```
>
> I don't know whether the expression `indrange...` works as-is, or whether
> you have to manually create a tuple of `UnitRange`s.
>
> If you want to use colons, then you'd write
>
> ```Julia
> inds = ((:) for i in 1:rank)
> dset[inds..., i] = xyz
> ```
>
> -erik
>
>
>
>
> On Tue, Sep 13, 2016 at 5:08 PM, Anandaroop Ray 
> wrote:
>
>> Many thanks for your comprehensive recommendations. I think HDF5 views
>> are probably what I need to go with - will read up more and then ask.
>>
>> What I mean about dimension is rank, really. The shape is always the same
>> for all samples. One slice for storage, i.e., one sample, could be chunked
>> as dset[:,:,i] or dset[:,:,:,:,i] but always of the form, dset[:,...,:,i],
>> depending on input to the code at runtime.
>>
>> Thanks
>>
>> On 13 September 2016 at 14:47, Erik Schnetter 
>> wrote:
>>
>>> On Tue, Sep 13, 2016 at 11:27 AM, sparrowhawker wrote:
>>>
 Hi,

 I'm new to Julia, and have been able to accomplish a lot of what I used
 to do in Matlab/Fortran, in very little time since I started using Julia in
 the last three months. Here's my newest stumbling block.

 I have a process which creates nsamples within a loop. Each sample
 takes a long time to compute as there are expensive finite difference
 operations, which ultimately lead to a sample, say 1 to 10 seconds. I have
 to store each of the nsamples, and I know the size and dimensions of each
 of the nsamples (all samples have the same size and dimensions). However,
 depending on the run time parameters, each sample may be a 32x32 image or
 perhaps a 64x64x64 voxset with 3 attributes, i.e., a 64x64x64x3
 hyper-rectangle. To be clear, each sample can be an arbitrary dimension
 hyper-rectangle, specified at run time.

 Obviously, since I don't want to lose computation and want to see
 incremental progress, I'd like to do incremental saves of these samples on
 disk, instead of waiting to collect all nsamples at the end. For instance,
 if I had to store 1000 samples of size 64x64, I thought perhaps I could
 chunk and save 64x64 slices to an HDF5 file 1000 times. Is this the right
 approach? If so, here's a prototype program to do so, but it depends on my
 knowing the number of dimensions of the slice, which is not known until
 runtime,

 using HDF5

 filename = "test.h5"
 # open file
 fmode ="w"
 # get a file object
 fid = h5open(filename, fmode)
 # matrix to write in chunks
 B = rand(64,64,1000)
 # figure out its dimensions
 sizeTuple = size(B)
 Ndims = length(sizeTuple)
 # set up to write in chunks of sizeArray
 sizeArray = ones(Int, Ndims)
 [sizeArray[i] = sizeTuple[i] for i in 1:(Ndims-1)] # last value of size
 array is :...:,1
 # create a dataset models within root
 dset = d_create(fid, "models", datatype(Float64), dataspace(size(B)),
 "chunk", sizeArray)
 [dset[:,:,i] = slicedim(B, Ndims, i) for i in 1:size(B, Ndims)]
 close(fid)

 This works, but the second last line, dset[:,:,i] requires syntax
 specific to writing a slice of a dimension 3 array - but I don't know the
 dimensions until run time. Of course I could just write to a flat binary
 file incrementally, but HDF5.jl could make my life so much simpler!

>>>
>>> HDF5 supports "extensible datasets", which were created for use cases
>>> such as this one. I don't recall the exact syntax, but if I recall
>>> correctly, you can specify one dimension (the first one in C, the last one
>>> in Julia) to be extensible, and then you can add more data as you go. You
>>> will probably need to specify a chunk size, which could be the size of the
>>> increment in your case. Given file system speeds, a chunk size smaller than
>>> a few megabytes probably doesn't make much sense (i.e. it will slow things down).
>>>
>>> If you want to monitor the HDF5 file as it is being written, look at the
>>> SWMR feature. This requires HDF5 1.10; unfortunately, Julia will by
>>> default often still install version 1.8.
>>>
>>> If you want to protect against crashes of your code so that you don't
>>> lose progress, then HDF5 is probably not right for you. Once an HDF5 

[julia-users] Julia for Data Science book (Technics Publications)

2016-09-13 Thread Zacharias Voulgaris


Hi everyone,

 

I’m fairly new in this group but I’ve been an avid Julia user since Ver. 
0.2. About a year ago I decided to take the next step and start using Julia 
professionally, namely for data science projects (even though at that time I 
was a PM at Microsoft). Shortly afterwards, I started writing a book about 
it, focusing on how we can use this wonderful tool for data science 
projects. My aim was to make it easy for everyone to apply their Julia 
know-how to data science projects, but also to help more experienced data 
scientists do what they usually do with a more elegant tool (Julia) instead 
of Python / R / Scala. I'm writing this post 
because this book is finally a reality.

 

In this book, which is titled Julia for Data Science and published by 
Technics Publications, I cover various data science topics. These include 
data engineering, supervised and unsupervised machine learning, 
statistics, and some graph analysis. Also, since the focus is on 
applications rather than digging into the deeper layers of the language, I 
make use of IJulia instead of Juno, while I also refrain from delving into 
custom types and meta-programming. Yet, in this book I make use of tools 
and metrics that are rarely, if ever, mentioned in other data science books 
(e.g. the t-SNE method, some variants of Jaccard similarity, some 
alternative error averages for regression, and more). Also, I try to keep 
assumptions about the reader’s knowledge to a minimum, so there are plenty 
of links to references for the various concepts used in the book, from 
reliable sources. Finally, the book includes plenty of reference sections 
at the end, so you don’t need to remember all the packages introduced, or 
all the places where you can learn more about the language. Each chapter is 
accompanied by a series of questions and some exercises, to help you make 
sure you comprehend everything you’ve read, while at the end I include a 
full project for you to practice on (answers to all the exercises and the 
project itself are in an appendix). All the code used in the book is 
available as Jupyter notebook files, and the data files are available in .csv 
and text format.

 

The book is available in both paperback and eBook format (PDF, Kindle, and 
Safari) at the publisher’s website: https://technicspub.com/analytics

 

Please note that for some reason the Packt publishing house, which has had 
the monopoly on Julia books up until now, decided to follow suit, which is 
why it is releasing a book with the same title next month (clearly, 
imagination is not their strongest suit!). So, please make sure that you 
don’t confuse the two books. My goal is not to make a quick buck through 
this book (which is why I’m not publishing it via Packt); instead, I aim to 
make Julia more well-known to the data science world while at the same time 
make data science more accessible to all Julia users. 

 

Thanks,

 

Zack


[julia-users] Re: IJulia documentation example produces ReferenceError: guide_background_mouseover is not defined

2016-09-13 Thread Ken Massey

Do you happen to remember which documentation you were looking at for that 
example?  I see that code screenshotted on the julialang.org homepage, but 
can't find the original notebook anywhere.

On Thursday, April 3, 2014 at 3:29:48 PM UTC-4, vfclists wrote:
>
> I am trying the documentation example using Julia 0.2.1 and IJulia 0.1.5
>
> srand(37)
> plot(x=[1:500], y=cumsum(randn(500)), Geom.line, Guide.XLabel("time"), 
> Guide.YLabel("random.walk"))
>
> The example above produces the output below in Firefox 28.0 
>
> Javascript error adding output!
> ReferenceError: guide_background_mouseover is not defined
> See your browser Javascript console for more details
>
> From the errors I have been obtaining, it seems that quite a few things are 
> missing.
>
> Is there a way of debugging to check that the dependencies of each 
> component are all present?
>
>

[julia-users] julialang home page screenshot's notebook source

2016-09-13 Thread Ken Massey
Sorry for the crazy question ...

On the julialang.org homepage, you'll find:
  "Here is a screenshot of a web-based interactive IJulia Notebook 
 session, using Gadfly 
."

It seems like I had once actually visited the notebook from which this 
screenshot was taken:

   http://julialang.org/images/ijulia.png

But now I cannot find it.  Does anybody know where that screenshot was 
taken from ?

Thanks.





[julia-users] Re: Vector Field operators (gradient, divergence, curl) in Julia

2016-09-13 Thread Steven G. Johnson
Both ForwardDiff and ReverseDiffSource solve a different problem (taking 
the derivative of a user-supplied function f(x)).  The Matlab and NumPy 
gradient functions, instead, take an array (not a function) and compute 
differences of adjacent elements of the array, returning a new array.
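
For readers looking for that behavior, here is a minimal central-difference 
sketch of what the Matlab/NumPy `gradient` functions do for a 1-D array 
(`gradient1d` is my own illustration, not an existing Julia function):

```Julia
# Numerical gradient of samples y taken on a uniform grid with spacing h:
# one-sided differences at the edges, central differences in the interior.
function gradient1d(y::AbstractVector, h::Real = 1.0)
    n = length(y)
    g = similar(y, Float64)
    g[1] = (y[2] - y[1]) / h              # forward difference at left edge
    g[n] = (y[n] - y[n-1]) / h            # backward difference at right edge
    for i in 2:n-1
        g[i] = (y[i+1] - y[i-1]) / (2h)   # central difference inside
    end
    return g
end
```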


[julia-users] Re: show in 0.5

2016-09-13 Thread Steven G. Johnson


On Tuesday, September 13, 2016 at 5:00:48 PM UTC-4, Sheehan Olver wrote:
>
> I'm confused by the following in 0.5.  In 0.4 I would override 
> Base.show(::IO,::MyType) to create the default output.  What should be done 
> in 0.5?
>

The REPL calls show(io, MIME("text/plain"), x) [formerly writemime in 0.4]. 
 By default, this calls show(io, x).

So, if you define your own type, adding a show(io, x) method is all you 
need.  However, if you want multiline output in the REPL to be different 
from the single-line output of things like show and print, then you can 
additionally define a show(io::IO, ::MIME"text/plain", x) that does not 
call show(io, x).

This is what is happening for arrays.  Base defines both a show(io, 
text/plain, x) and a show(io, x) method for arrays.  The former gives 
multi-line output limited by the terminal size, and is used for REPL output 
(and IJulia).  The latter gives single-line output of the whole array 
(regardless of size), and is used for things like println(x) and string(x).
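
A sketch of the two-method pattern described above, for a toy type (written 
with post-0.5 `struct` syntax; in 0.5 itself this would be `immutable`):

```Julia
struct Point
    x::Float64
    y::Float64
end

# Single-line form, used by print, string, and inside containers.
Base.show(io::IO, p::Point) = print(io, "Point(", p.x, ", ", p.y, ")")

# Multi-line form, used by the REPL (and IJulia) for standalone display.
function Base.show(io::IO, ::MIME"text/plain", p::Point)
    println(io, "Point:")
    print(io, "  x = ", p.x, "\n  y = ", p.y)
end
```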
 


Re: [julia-users] Advice on (perhaps) chunking to HDF5

2016-09-13 Thread Erik Schnetter
If you have a varying rank, then you should probably use something like
`CartesianIndex` and `CartesianRange` to represent the indices, or possibly
tuples of integers. You would then use the splatting operator to create the
indexing instructions:

```Julia
indrange = CartesianRange(xyz)
dset[indrange..., i] = slicedim
```

I don't know whether the expression `indrange...` works as-is, or whether
you have to manually create a tuple of `UnitRange`s.

If you want to use colons, then you'd write

```Julia
inds = ((:) for i in 1:rank)
dset[inds..., i] = xyz
```

-erik




On Tue, Sep 13, 2016 at 5:08 PM, Anandaroop Ray 
wrote:

> Many thanks for your comprehensive recommendations. I think HDF5 views are
> probably what I need to go with - will read up more and then ask.
>
> What I mean about dimension is rank, really. The shape is always the same
> for all samples. One slice for storage, i.e., one sample, could be chunked
> as dset[:,:,i] or dset[:,:,:,:,i] but always of the form, dset[:,...,:,i],
> depending on input to the code at runtime.
>
> Thanks
>
> On 13 September 2016 at 14:47, Erik Schnetter  wrote:
>
>> On Tue, Sep 13, 2016 at 11:27 AM, sparrowhawker 
>> wrote:
>>
>>> Hi,
>>>
>>> I'm new to Julia, and have been able to accomplish a lot of what I used
>>> to do in Matlab/Fortran, in very little time since I started using Julia in
>>> the last three months. Here's my newest stumbling block.
>>>
>>> I have a process which creates nsamples within a loop. Each sample takes
>>> a long time to compute as there are expensive finite difference operations,
>>> which ultimately lead to a sample, say 1 to 10 seconds. I have to store
>>> each of the nsamples, and I know the size and dimensions of each of the
>>> nsamples (all samples have the same size and dimensions). However,
>>> depending on the run time parameters, each sample may be a 32x32 image or
>>> perhaps a 64x64x64 voxset with 3 attributes, i.e., a 64x64x64x3
>>> hyper-rectangle. To be clear, each sample can be an arbitrary dimension
>>> hyper-rectangle, specified at run time.
>>>
>>> Obviously, since I don't want to lose computation and want to see
>>> incremental progress, I'd like to do incremental saves of these samples on
>>> disk, instead of waiting to collect all nsamples at the end. For instance,
>>> if I had to store 1000 samples of size 64x64, I thought perhaps I could
>>> chunk and save 64x64 slices to an HDF5 file 1000 times. Is this the right
>>> approach? If so, here's a prototype program to do so, but it depends on my
>>> knowing the number of dimensions of the slice, which is not known until
>>> runtime,
>>>
>>> using HDF5
>>>
>>> filename = "test.h5"
>>> # open file
>>> fmode ="w"
>>> # get a file object
>>> fid = h5open(filename, fmode)
>>> # matrix to write in chunks
>>> B = rand(64,64,1000)
>>> # figure out its dimensions
>>> sizeTuple = size(B)
>>> Ndims = length(sizeTuple)
>>> # set up to write in chunks of sizeArray
>>> sizeArray = ones(Int, Ndims)
>>> [sizeArray[i] = sizeTuple[i] for i in 1:(Ndims-1)] # last value of size
>>> array is :...:,1
>>> # create a dataset models within root
>>> dset = d_create(fid, "models", datatype(Float64), dataspace(size(B)),
>>> "chunk", sizeArray)
>>> [dset[:,:,i] = slicedim(B, Ndims, i) for i in 1:size(B, Ndims)]
>>> close(fid)
>>>
>>> This works, but the second last line, dset[:,:,i] requires syntax
>>> specific to writing a slice of a dimension 3 array - but I don't know the
>>> dimensions until run time. Of course I could just write to a flat binary
>>> file incrementally, but HDF5.jl could make my life so much simpler!
>>>
>>
>> HDF5 supports "extensible datasets", which were created for use cases
>> such as this one. I don't recall the exact syntax, but if I recall
>> correctly, you can specify one dimension (the first one in C, the last one
>> in Julia) to be extensible, and then you can add more data as you go. You
>> will probably need to specify a chunk size, which could be the size of the
>> increment in your case. Given file system speeds, a chunk size smaller than
>> a few megabytes probably doesn't make much sense (i.e. it will slow things down).
>>
>> If you want to monitor the HDF5 file as it is being written, look at the
>> SWMR feature. This requires HDF5 1.10; unfortunately, Julia will by
>> default often still install version 1.8.
>>
>> If you want to protect against crashes of your code so that you don't
>> lose progress, then HDF5 is probably not right for you. Once an HDF5 file
>> is open for writing, the on-disk state might be inconsistent, so that you
>> can lose all data when your code crashes. In this case, you might want to
>> write data into different files, one per increment. HDF5 1.10 offers
>> "views", which are umbrella files that stitch together datasets stored in
>> other files.
>>
>> If you are looking for generic advice for setting up things with HDF5,
>> then I recommend their documentation. 

[julia-users] Re: Help on building Julia with Intel MKL on Windows?

2016-09-13 Thread Zhong Pan
The "Can't open perl script "scripts/config.pl": No such file or directory" 
error has been solved. See: https://github.com/ARMmbed/mbedtls/issues/541
What I did was to open the CMakeList.txt file under 
"C:\Users\zpan\Documents\GitHub\julia\deps\srccache\mbedtls-2.3.0-gpl", and 
replaced ${PERL_EXECUTABLE} with ${CMAKE_CURRENT_SOURCE_DIR} on line 34.
 
Trying to solve the "Could not create symbolic link" issue.



[julia-users] Priority queue - peek versus dequeue!

2016-09-13 Thread Júlio Hoffimann
Hi,

Could you explain why "peek" returns the pair (key,value) whereas 
"dequeue!" only returns the key?

using Base.Collections
pq = PriorityQueue()
enqueue!(pq, key, value)
key, value = Collections.peek(pq)
key = dequeue!(pq)

I wanted to have a single line in which I retrieve both (key,value) and at 
the same time remove the pair from the collection:

key, value = dequeue!(pq)

Should I open an issue on GitHub?

-Júlio


Re: [julia-users] Advice on (perhaps) chunking to HDF5

2016-09-13 Thread Anandaroop Ray
Many thanks for your comprehensive recommendations. I think HDF5 views are
probably what I need to go with - will read up more and then ask.

What I mean about dimension is rank, really. The shape is always the same
for all samples. One slice for storage, i.e., one sample, could be chunked
as dset[:,:,i] or dset[:,:,:,:,i] but always of the form, dset[:,...,:,i],
depending on input to the code at runtime.

Thanks

On 13 September 2016 at 14:47, Erik Schnetter  wrote:

> On Tue, Sep 13, 2016 at 11:27 AM, sparrowhawker 
> wrote:
>
>> Hi,
>>
>> I'm new to Julia, and have been able to accomplish a lot of what I used
>> to do in Matlab/Fortran, in very little time since I started using Julia in
>> the last three months. Here's my newest stumbling block.
>>
>> I have a process which creates nsamples within a loop. Each sample takes
>> a long time to compute as there are expensive finite difference operations,
>> which ultimately lead to a sample, say 1 to 10 seconds. I have to store
>> each of the nsamples, and I know the size and dimensions of each of the
>> nsamples (all samples have the same size and dimensions). However,
>> depending on the run time parameters, each sample may be a 32x32 image or
>> perhaps a 64x64x64 voxset with 3 attributes, i.e., a 64x64x64x3
>> hyper-rectangle. To be clear, each sample can be an arbitrary dimension
>> hyper-rectangle, specified at run time.
>>
>> Obviously, since I don't want to lose computation and want to see
>> incremental progress, I'd like to do incremental saves of these samples on
>> disk, instead of waiting to collect all nsamples at the end. For instance,
>> if I had to store 1000 samples of size 64x64, I thought perhaps I could
>> chunk and save 64x64 slices to an HDF5 file 1000 times. Is this the right
>> approach? If so, here's a prototype program to do so, but it depends on my
>> knowing the number of dimensions of the slice, which is not known until
>> runtime,
>>
>> using HDF5
>>
>> filename = "test.h5"
>> # open file
>> fmode ="w"
>> # get a file object
>> fid = h5open(filename, fmode)
>> # matrix to write in chunks
>> B = rand(64,64,1000)
>> # figure out its dimensions
>> sizeTuple = size(B)
>> Ndims = length(sizeTuple)
>> # set up to write in chunks of sizeArray
>> sizeArray = ones(Int, Ndims)
>> [sizeArray[i] = sizeTuple[i] for i in 1:(Ndims-1)] # last value of size
>> array is :...:,1
>> # create a dataset models within root
>> dset = d_create(fid, "models", datatype(Float64), dataspace(size(B)),
>> "chunk", sizeArray)
>> [dset[:,:,i] = slicedim(B, Ndims, i) for i in 1:size(B, Ndims)]
>> close(fid)
>>
>> This works, but the second last line, dset[:,:,i] requires syntax
>> specific to writing a slice of a dimension 3 array - but I don't know the
>> dimensions until run time. Of course I could just write to a flat binary
>> file incrementally, but HDF5.jl could make my life so much simpler!
>>
>
> HDF5 supports "extensible datasets", which were created for use cases such
> as this one. I don't recall the exact syntax, but if I recall correctly,
> you can specify one dimension (the first one in C, the last one in Julia)
> to be extensible, and then you can add more data as you go. You will
> probably need to specify a chunk size, which could be the size of the
> increment in your case. Given file system speeds, a chunk size smaller than
> a few megabytes probably doesn't make much sense (i.e. it will slow things down).
>
> If you want to monitor the HDF5 file as it is being written, look at the
> SWMR feature. This requires HDF5 1.10; unfortunately, Julia will by
> default often still install version 1.8.
>
> If you want to protect against crashes of your code so that you don't lose
> progress, then HDF5 is probably not right for you. Once an HDF5 file is
> open for writing, the on-disk state might be inconsistent, so that you can
> lose all data when your code crashes. In this case, you might want to write
> data into different files, one per increment. HDF5 1.10 offers "views",
> which are umbrella files that stitch together datasets stored in other
> files.
>
> If you are looking for generic advice for setting up things with HDF5,
> then I recommend their documentation. If you are looking for how to access
> these features in Julia, or if you notice a feature that is not available
> in Julia, then we'll be happy to explain or correct things.
>
> What do you mean by "dimension only known at run time" -- do you mean what
> Julia calls "size" (shape) or what Julia calls "dim" (rank)?
>
> Do all datasets have the same size, or do they differ? If they differ,
> then putting them into the same dataset might not make sense; in this case,
> I would write them into different datasets.
>
> -erik
>
> --
> Erik Schnetter  http://www.perimeterinstitute.ca/personal/eschnetter/
>


[julia-users] show in 0.5

2016-09-13 Thread Sheehan Olver


I'm confused by the following in 0.5.  In 0.4 I would override 
Base.show(::IO,::MyType) to create the default output.  What should be done 
in 0.5?


julia> rand(5,5)
5x5 Array{Float64,2}:
 0.448531   0.570789  0.698399    0.718604  0.118253
 0.0953516  0.856834  0.0730664   0.382955  0.488855
 0.639358   0.412943  0.00413064  0.419452  0.163792
 0.120035   0.288662  0.910086    0.274534  0.659739
 0.747583   0.557842  0.856889    0.827823  0.745782

julia> show(rand(5,5))
[0.5767891818849664 0.3885100700605799 0.10250219144345829 0.2495010507697788 0.6707546989111066
 0.09179095440919793 0.028439204878678126 0.8240619293530689 0.002545380069679526 0.5774220661835723
 0.2984153686262503 0.9321235989807006 0.5320629425846966 0.004958467541819278 0.3948259997744583
 0.40079126615269467 0.6635437065727794 0.6437420924760648 0.14495502173543695 0.9819191423352878
 0.5244432897905984 0.1099014439888335 0.5799843584483502 0.7900314911801023 0.0008403147121169852]




[julia-users] Re: Help on building Julia with Intel MKL on Windows?

2016-09-13 Thread Zhong Pan
Tony,

Thanks for your reply. I did solve the CMAKE_C_COMPILER issue - see my 
previous post (just 1 min after yours). However, it's not by appending to 
$PATH though - I had tried adding the gcc path to $PATH and the same error 
still showed up.

Thanks for the hint about adding "lib" prefix. I don't have .dll files 
though - the MKL library consists of 17 .lib files (the file list I gave 
missed a few letters at the end for some files so they become .l or .li - 
my apologies). 

I will try adding the prefix anyway... After other road blocks are removed. 
Please see my previous post for details.

On Tuesday, September 13, 2016 at 3:45:16 PM UTC-5, Tony Kelman wrote:
>
> maybe better to just temporarily add gcc's location to your working path 
> for the duration of the make (don't leave it there longer though).
>
> I think that linker error is from arpack. Try making a copy of mkl_rt.dll 
> called libmkl_rt.dll and see if that helps. libtool isn't able to link 
> against dlls that don't have a lib prefix unless you have a corresponding 
> .dll.a import library (maybe copying and renaming one of the .lib files to 
> libmkl_rt.dll.a could also work?) because being annoying is what libtool 
> does.
>


[julia-users] Re: Help on building Julia with Intel MKL on Windows?

2016-09-13 Thread Zhong Pan
Tony,

Thanks for your reply. I did solve the CMAKE_C_COMPILER issue - see my 
previous post (just 1 min after yours). However, it's not by appending to 
$PATH though - I had tried adding the gcc path to $PATH and the same error 
still showed up.

Thanks for the hint about adding "lib" prefix. I don't have .dll files 
though - the MKL library consists of 17 .lib files. I will try adding the 
prefix anyway... After other road blocks are removed. Please see my 
previous post for details.


On Tuesday, September 13, 2016 at 3:45:16 PM UTC-5, Tony Kelman wrote:
>
> maybe better to just temporarily add gcc's location to your working path 
> for the duration of the make (don't leave it there longer though).
>
> I think that linker error is from arpack. Try making a copy of mkl_rt.dll 
> called libmkl_rt.dll and see if that helps. libtool isn't able to link 
> against dlls that don't have a lib prefix unless you have a corresponding 
> .dll.a import library (maybe copying and renaming one of the .lib files to 
> libmkl_rt.dll.a could also work?) because being annoying is what libtool 
> does.
>


[julia-users] Re: Help on building Julia with Intel MKL on Windows?

2016-09-13 Thread Zhong Pan
I realize I may be documenting a bunch of build issues that a more 
experienced programmer would overcome more easily. Anyway, since I started, 
please bear with me. :-)

So I solved the problem of incorrect CMAKE_C_COMPILER. What I did was I 
found the make file that was causing Error 1:
/c/users/zpan/documents/github/julia/deps/mbedtls.mk 
I opened it, scrolled to line 40, and inserted a new line just before it:
MBEDTLS_OPTS += 
-DCMAKE_C_COMPILER="/C/users/zpan/documents/github/julia/usr/x86_64-w64-mingw32/sys-root/mingw/bin/gcc.exe"
 

After saving this and trying "make install" again, the compiler seems to be 
found, but another issue popped up, which I cannot solve yet. 

The error message:

zpan@WSPWork MSYS /c/users/zpan/documents/github/julia
$ make install
-- The C compiler identification is GNU 6.2.0
-- Check for working C compiler: 
C:/users/zpan/documents/github/julia/usr/x86_64-w64-mingw32/sys-root/mingw/bin/gcc.exe
-- Check for working C compiler: 
C:/users/zpan/documents/github/julia/usr/x86_64-w64-mingw32/sys-root/mingw/bin/gcc.exe
 
-- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Found Perl: C:/msys64/usr/bin/perl.exe (found version "5.22.1")
Can't open perl script "scripts/config.pl": No such file or directory
CMake Error at tests/CMakeLists.txt:122 (message):
  Could not create symbolic link for:
  
C:/Users/zpan/Documents/GitHub/julia/deps/srccache/mbedtls-2.3.0-gpl/tests/data_files
  --> Invalid switch - "Users".



-- Configuring incomplete, errors occurred!
See also 
"C:/Users/zpan/Documents/GitHub/julia/deps/build/mbedtls-2.3.0/CMakeFiles/CMakeOutput.log".
make[2]: *** [/c/users/zpan/documents/github/julia/deps/mbedtls.mk:41: 
build/mbedtls-2.3.0/Makefile] Error 1
make[1]: *** [Makefile:81: julia-deps] Error 2
make: *** [Makefile:331: install] Error 2




[julia-users] Re: Help on building Julia with Intel MKL on Windows?

2016-09-13 Thread Tony Kelman
maybe better to just temporarily add gcc's location to your working path for 
the duration of the make (don't leave it there longer though).

I think that linker error is from arpack. Try making a copy of mkl_rt.dll 
called libmkl_rt.dll and see if that helps. libtool isn't able to link against 
dlls that don't have a lib prefix unless you have a corresponding .dll.a import 
library (maybe copying and renaming one of the .lib files to libmkl_rt.dll.a 
could also work?) because being annoying is what libtool does.

[julia-users] Re: Pkg.add() works fine while Pkg.update() doesn't over https instead of git

2016-09-13 Thread Tony Kelman
Better to use a different branch name, not metadata-v2

[julia-users] Re: Vector Field operators (gradient, divergence, curl) in Julia

2016-09-13 Thread Christoph Ortner
Fast to implement, but only moderately fast in execution; I switched to 
ReverseDiffSource.


[julia-users] Re: Help on building Julia with Intel MKL on Windows?

2016-09-13 Thread Zhong Pan
OK, my non-parallel build bumped into a different problem. It seems the 
CMAKE_C_COMPILER path was not understood. 
I am not sure which one is correct, so I tried a few variations:
C:/users/zpan/documents/github/julia/usr/x86_64-w64-mingw32/sys-root/mingw/bin/gcc.exe
/C/users/zpan/documents/github/julia/usr/x86_64-w64-mingw32/sys-root/mingw/bin/gcc
/C/users/zpan/documents/github/julia/usr/x86_64-w64-mingw32/sys-root/mingw/bin/gcc.exe

For example, I tried modifying CMakeCache.txt by modifying this line:
CMAKE_C_COMPILER:UNINITIALIZED=C:/users/zpan/documents/github/julia/usr/x86_64-w64-mingw32/sys-root/mingw/bin/gcc
into
CMAKE_C_COMPILER:UNINITIALIZED=C:/users/zpan/documents/github/julia/usr/x86_64-w64-mingw32/sys-root/mingw/bin/gcc.exe
(and also tried the other 2 variations)

Or creating a new line:
CMAKE_C_COMPILER=C:/users/zpan/documents/github/julia/usr/x86_64-w64-mingw32/sys-root/mingw/bin/gcc.exe
(and also tried the other 2 variations)

The problem is that every time I run "make" or "make install", CMakeCache.txt is 
overwritten.

I also tried the environment variable route:
$ export 
CC=/C/Users/zpan/Documents/GitHub/julia/usr/x86_64-w64-mingw32/sys-root/mingw/bin/gcc
(and also tried the other 2 variations)

None of the solutions worked. So if I do "make" or "make install" I am 
stuck at this error message. 

I am not experienced in building projects with MinGW, so maybe the problem 
is a trivial one. But for me it's tough. :-) Any help appreciated!


See full error message below:

zpan@WSPWork MSYS /c/users/zpan/documents/github/julia
$ make
-- The C compiler identification is GNU 6.2.0
CMake Error at CMakeLists.txt:2 (project):
  The CMAKE_C_COMPILER:


C:/users/zpan/documents/github/julia/usr/x86_64-w64-mingw32/sys-root/mingw/bin/gcc

  is not a full path to an existing compiler tool.

  Tell CMake where to find the compiler by setting either the environment
  variable "CC" or the CMake cache entry CMAKE_C_COMPILER to the full path 
to
  the compiler, or to the compiler name if it is in the PATH.


-- Configuring incomplete, errors occurred!
See also 
"C:/Users/zpan/Documents/GitHub/julia/deps/build/mbedtls-2.3.0/CMakeFiles/CMakeOutput.log".
make[1]: *** [/c/users/zpan/documents/github/julia/deps/mbedtls.mk:40: 
build/mbedtls-2.3.0/Makefile] Error 1
make: *** [Makefile:81: julia-deps] Error 2





On Tuesday, September 13, 2016 at 2:47:07 PM UTC-5, Zhong Pan wrote:
>
> Thank you Tony for the information. I still ran into some problems.
>
> I tried building Julia (release-0.5) using MinGW following the 
> instructions here: 
> https://github.com/JuliaLang/julia/blob/master/README.windows.md
>
> The only difference is, after creating the Make.user file, I edited it 
> using a text editor and added the options to use MKL, so the complete 
> Make.user now looks like:
> override CMAKE=/C/CMake/bin/cmake.exe
> USE_INTEL_MKL = 1
> USE_INTEL_MKL_FFT = 1
> USE_INTEL_LIBM = 1
>
> The addition of the above lines is per build instructions about MKL at 
> https://github.com/JuliaLang/julia, except that I removed "USEICC = 1" 
> and "USEIFC = 1" since I am not using the Intel compilers. But I do have Intel 
> MKL with the free Community License installed on my computer at "C:\Program 
> Files (x86)\IntelSWTools\compilers_and_libraries_2017.0.109\windows\mkl"
>
> After I started the build with "make -j 3", the build would run for a 
> while then stop with 
> make: *** [Makefile:81: julia-deps] Error 2
>
> I went through the messages and found this to be the most suspicious for a 
> root cause:
> c:/users/zpan/documents/github/julia/usr/x86_64-w64-mingw32/sys-root/mingw/bin/../lib/gcc/x86_64-w64-mingw32/6.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe:
>  
> cannot find -lmkl_rt
>
> So I realized that MinGW was not able to find the MKL libraries. I tried 
> the following 2:
>
> (1) Copied all the 17 .lib files from "C:\Program Files 
> (x86)\IntelSWTools\compilers_and_libraries_2017.0.109\windows\mkl\lib\intel64_win"
>  
> to "C:\mkllib" (so the path does not contain a space), then I did
> $ export MKLLIB=/C/mkllib
>
> (2) Copied all the 17 .lib files to 
> "C:\Users\zpan\Documents\GitHub\julia\usr\lib"
>
> But I was still getting the same Error 2 in the end, and I could still 
> find the error message saying "cannot find -lmkl_rt".  Well, mkl_rt.lib was 
> indeed part of the 17 files, which are:
>
> mkl_blas95_ilp64.lib
> mkl_blas95_lp64.lib
> mkl_core.lib
> mkl_core_dll.lib
> mkl_intel_ilp64.lib
> mkl_intel_ilp64_dll.li
> mkl_intel_lp64.lib
> mkl_intel_lp64_dll.lib
> mkl_intel_thread.lib
> mkl_intel_thread_dll.l
> mkl_lapack95_ilp64.lib
> mkl_lapack95_lp64.lib
> mkl_rt.lib
> mkl_sequential.lib
> mkl_sequential_dll.lib
> mkl_tbb_thread.lib
> mkl_tbb_thread_dll.lib
>
> I am currently trying the command "make install" to build it without 
> parallel option - this is very slow but I can see the messages in proper 
> order.
>
> Did I miss something to let the linker see the .lib files? Any help 
> greatly appreciated! 
>
>
>
> On 

Re: [julia-users] Re: Julia Users and First Customers

2016-09-13 Thread Chris Rackauckas
Thanks for weighing in. I'll stick to your lingo: solid and usable, but not 
with a guarantee of long term support until v1.0. That's what I've been 
meaning by alpha (I mean I use Julia every day because it's solid and 
usable), but I won't continue to use that term because it seems to carry a 
worse connotation than what I meant it to have.

Though I will say that when I talk of "Julia", I also am referring to the 
package ecosystem (like how Python means Python + NumPy etc.). FWIW, most 
of the issues I run into tend to be due to packages rather than base. 

On Tuesday, September 13, 2016 at 1:00:10 PM UTC-7, Stefan Karpinski wrote:
>
> +1 to everything David said. And yes, we should put out an official 
> statement on this subject. Julia has absolutely been released on four and 
> very nearly five occasions: 0.1, 0.2, 0.3, 0.4 and any day now, 0.5. These 
> are not alpha releases, nor are they unstable. They are very solid and 
> usable. The pre-1.0 numbering of these releases (see, that's the word) is 
> simply an indication that we *will* change APIs and break code in the 
> relatively near future – before 1.0 is out. We could have numbered these 
> releases 1.0, 2.0, 3.0 and 4.0, but (to me at least), there's an 
> expectation that a 1.0 release will be supported for *years* into the 
> future, which would leave us supporting four-five releases concurrently, 
> which simply isn't realistic at this point in time. Once Julia 1.0 is 
> released we *will* support it for the long term, while we continue work 
> towards 2.0 and beyond. But at that point the Julia community will be 
> larger, we will have more resources, and releases will be further apart.
>
> Aside: Swift seems to have taken the "let's release major versions very 
> often" approach, since they are already on a prerelease of major version 3.0 (and 
> maybe that's better marketing), but to me having three major releases in 
> two years seems a bit crazy. Of course, Swift has a very large, effectively 
> captive audience. Julia is likely to follow the Rust approach, albeit with 
> a *lot* less breakage between pre-1.0 versions.
>
> Issues you may have had with Juno don't really have anything to do with 
> Julia, although there's an argument to be made that pre-1.0 numbering of 
> the language is a good indicator that the ecosystem may be a little rough 
> around the edges. Once Julia 0.5 is out, there will be a stable version of 
> Juno released to go with it.
>
> On Tue, Sep 13, 2016 at 1:19 PM, Chris Rackauckas  > wrote:
>
>> I agree that there are some qualifiers that have to go with it in order 
>> to get the connotation right, but the term "unstable but usable 
>> pre-release" is just a phrase for alpha. It's not misleading to 
>> characterize Julia as not having been released since the core dev group is 
>> calling v1.0 the release, and so saying pre-1.0 is just a synonym for 
>> "pre-release". Being alpha doesn't mean it's buggy, it just means that it's 
>> alpha, as in not the final version and can change. You can rename it to 
>> another synonym, but it's still true.
>>
>> Whether it's in a state to be used in production, that's a call for an 
>> experienced Julia user who knows the specifics of the application they are 
>> looking to build. But throwing it in with the more stable languages, so that 
>> someone in a meeting treats it like a stand-in ("hey guys, we can try Julia. 
>> It should be faster than those other options, and shouldn't be hard to learn, 
>> what do you think?"), has a decent chance of leading to trouble.
>>
>> And I think not being honest with people is a good way to put a bad taste 
>> in people's mouths. Even if lots of Julia were stable, the package 
>> ecosystem isn't. Just the other day I was doing damage control since there 
>> was a version of Juno that was tagged which broke "most users'" systems; by 
>> "most users" I mean there was an "obvious" fix: look at the error message, 
>> manually clone this random new repository, maybe check out master on a few 
>> packages.
>> You want to use the plot pane? Oh just checkout master, no biggie. 
>>  If someone is 
>> trying to use Julia without knowing any Git/Github, it's possible, but 
>> we're still at a point where it could lead to some trouble, or it's 
>> confusing since some basic features are seemingly missing (though just not 
>> tagged). 
>>
>> When you're talking about a drop-in replacement for MATLAB/Python/R, 
>> we're talking about people less familiar with all of this software 
>> development stuff who have a tendency to confuse warnings (like deprecation 
>> warnings) as errors, not realize their install is Julia v0.3 that they 
>> played with one day a long time ago, which is why the documentation 
>> isn't working, trying to use a 

Re: [julia-users] debugging tips for GC?

2016-09-13 Thread Yichao Yu
On Tue, Sep 13, 2016 at 4:11 PM, Yichao Yu  wrote:

> I'm able to reproduce it in rr and found the issue.
>
> TL;DR the issue is at
> https://github.com/JuliaGraphics/Cairo.jl/blame/master/src/Cairo.jl#L625,
> where it passes the ownership of a cairo pointer to julia, causing a double
> free.
>

Which comes from your commit a year ago ;-p
https://github.com/JuliaGraphics/Cairo.jl/commit/23681dc1270882c964059d23863a88d4276f6fd8


>
> Here's the rough process of my debugging, I'm not really sure how to
> summarize it though
>
> 1. It aborts in cairo's `cairo_path_destroy`, so I first compiled cairo with
> debug symbols to make my life easier. (the function is pretty short so
> reading the disasm would have worked too)
> 2. It is free'ing `path->data` so I added a watchpoint on it `watch -l
> path->data` and reverse-continue to find the point of assignment.
> 3. Assignment happens in cairo from a valid malloc so path->data isn't
> corrupted.
> 4. Now it takes some guessing to figure out exactly what's wrong. I'm not
> sure how glibc stores its malloc metadata (it would help to know that) so I
> tried the naive thing and watch the intptr_t before the malloc result
> (that's how julia stores the gc metadata) and run forward. None of the
> assignment to this location looks suspicious (they are all in glibc and the
> first hit isn't free'ing this value)
> 5. So now I tried the brute-force way: the pointer (`path->data`) I see is
> `0x3746950` so I simply did a conditional breakpoint to see when it's
> free'd with `br free if $rdi == 0x3746950`. I use rdi to get the first
> argument since the glibc I installed doesn't have that detailed debug info.
> 6. After a long run (conditional breakpoint is really slow which is why I
> didn't use it first) it hits a breakpoint in the julia GC when free'ing an
> array. The array has a data pointer the same as the one in question and
> that's before the pointer is free'd by cairo, so something is wrong with the
> creation of the array. Now simply watch the `a->data` and go back again.
> I'm lucky this time, if this didn't work, the next thing to try would be
> trying to reduce the code/ run GC more often so that I can afford looking
> at the code more carefully instead of just catching events in the debugger.
> 7. As expected, it hits `jl_ptr_to_array` and going up a frame it seems
> that the caller is supplying a cairo pointer and transferring the ownership,
> which is wrong.
>
>
>
> On Tue, Sep 13, 2016 at 3:36 PM, Yichao Yu  wrote:
>
>>
>>
>> On Tue, Sep 13, 2016 at 3:31 PM, Andreas Lobinger 
>> wrote:
>>
>>> Hello colleague,
>>>
>>> On Tuesday, September 13, 2016 at 7:25:38 PM UTC+2, Yichao Yu wrote:


 On Tue, Sep 13, 2016 at 12:49 PM, Andreas Lobinger 
 wrote:

> Hello colleagues,
>
> i'm trying to find out, why this
> ...
>
 fails miserably. I guess, but cannot track it down right now: There is
> something wrong in memory management of Cairo.jl that only shows up for
> objects that could have been freed long ago and julia and libcairo have
> different concepts of invalidation.
>
> Any blog/receipe/issue that deals with GC debugging?
>

 It's not too different from debugging a memory issue in any other program.
 It usually helps (a lot) to reproduce under rr[1]

>>>
>>> Many thanks for pointing to this. I was aware it exists but wasn't aware
>>> of their progress.
>>>
>>>
 Other than that, it strongly depends on the kind of error, and I've seen
 it happen due to almost all parts of the runtime; it's really hard to
 summarize.

>>>
>>> What do you mean with "happens due to almost all parts of the runtime" ?
>>>
>>
>> The general procedure is basically to catch the failure and try to figure
>> out why it got into this state. This means that you generally need to
>> trace back where a certain value is generated, which also usually means that
>> you need to trace back through a few layers of code, and they might be
>> scattered all over the place.
>>
>>
>


Re: [julia-users] debugging tips for GC?

2016-09-13 Thread Yichao Yu
I'm able to reproduce it in rr and found the issue.

TL;DR the issue is at
https://github.com/JuliaGraphics/Cairo.jl/blame/master/src/Cairo.jl#L625,
where it passes the ownership of a cairo pointer to julia, causing a double
free.

Here's the rough process of my debugging, I'm not really sure how to
summarize it though

1. It aborts in cairo's `cairo_path_destroy`, so I first compiled cairo with
debug symbols to make my life easier. (the function is pretty short so
reading the disasm would have worked too)
2. It is free'ing `path->data` so I added a watchpoint on it `watch -l
path->data` and reverse-continue to find the point of assignment.
3. Assignment happens in cairo from a valid malloc so path->data isn't
corrupted.
4. Now it takes some guessing to figure out exactly what's wrong. I'm not
sure how glibc stores its malloc metadata (it would help to know that) so I
tried the naive thing and watch the intptr_t before the malloc result
(that's how julia stores the gc metadata) and run forward. None of the
assignment to this location looks suspicious (they are all in glibc and the
first hit isn't free'ing this value)
5. So now I tried the brute-force way: the pointer (`path->data`) I see is
`0x3746950` so I simply did a conditional breakpoint to see when it's
free'd with `br free if $rdi == 0x3746950`. I use rdi to get the first
argument since the glibc I installed doesn't have that detailed debug info.
6. After a long run (conditional breakpoint is really slow which is why I
didn't use it first) it hits a breakpoint in the julia GC when free'ing an
array. The array has a data pointer the same as the one in question and
that's before the pointer is free'd by cairo, so something is wrong with the
creation of the array. Now simply watch the `a->data` and go back again.
I'm lucky this time, if this didn't work, the next thing to try would be
trying to reduce the code/ run GC more often so that I can afford looking
at the code more carefully instead of just catching events in the debugger.
7. As expected, it hits `jl_ptr_to_array` and going up a frame it seems
that the caller is supplying a cairo pointer and transferring the ownership,
which is wrong.
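Condensed into commands, the session above looks roughly like this (a reconstruction, not a verbatim transcript; the watched expression and the `0x3746950` address are specific to that run, and `$rdi` holds the first argument in the x86-64 calling convention):

```
$ rr record julia repro.jl              # record the failing run deterministically
$ rr replay                             # replay it under gdb
(gdb) continue                          # run forward to the abort in cairo
(gdb) watch -l path->data               # hardware watchpoint on the freed field
(gdb) reverse-continue                  # travel back to the last write
(gdb) break free if $rdi == 0x3746950   # catch the free of that exact pointer
```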


On Tue, Sep 13, 2016 at 3:36 PM, Yichao Yu  wrote:

>
>
> On Tue, Sep 13, 2016 at 3:31 PM, Andreas Lobinger 
> wrote:
>
>> Hello colleague,
>>
>> On Tuesday, September 13, 2016 at 7:25:38 PM UTC+2, Yichao Yu wrote:
>>>
>>>
>>> On Tue, Sep 13, 2016 at 12:49 PM, Andreas Lobinger 
>>> wrote:
>>>
 Hello colleagues,

 i'm trying to find out, why this
 ...

>>> fails miserably. I guess, but cannot track it down right now: There is
 something wrong in memory management of Cairo.jl that only shows up for
 objects that could have been freed long ago and julia and libcairo have
 different concepts of invalidation.

 Any blog/receipe/issue that deals with GC debugging?

>>>
>>> It's not too different from debugging a memory issue in any other program.
>>> It usually helps (a lot) to reproduce under rr[1]
>>>
>>
>> Many thanks for pointing to this. I was aware it exists but wasn't aware
>> of their progress.
>>
>>
>>> Other than that, it strongly depends on the kind of error, and I've seen
>>> it happen due to almost all parts of the runtime; it's really hard to
>>> summarize.
>>>
>>
>> What do you mean with "happens due to almost all parts of the runtime" ?
>>
>
> The general procedure is basically to catch the failure and try to figure out
> why it got into this state. This means that you generally need to trace
> back where a certain value is generated, which also usually means that you
> need to trace back through a few layers of code, and they might be scattered
> all over the place.
>
>


Re: [julia-users] Re: Julia Users and First Customers

2016-09-13 Thread Stefan Karpinski
+1 to everything David said. And yes, we should put out an official
statement on this subject. Julia has absolutely been released on four and
very nearly five occasions: 0.1, 0.2, 0.3, 0.4 and any day now, 0.5. These
are not alpha releases, nor are they unstable. They are very solid and
usable. The pre-1.0 numbering of these releases (see, that's the word) is
simply an indication that we *will* change APIs and break code in the
relatively near future – before 1.0 is out. We could have numbered these
releases 1.0, 2.0, 3.0 and 4.0, but (to me at least), there's an
expectation that a 1.0 release will be supported for *years* into the
future, which would leave us supporting four-five releases concurrently,
which simply isn't realistic at this point in time. Once Julia 1.0 is
released we *will* support it for the long term, while we continue work
towards 2.0 and beyond. But at that point the Julia community will be
larger, we will have more resources, and releases will be further apart.

Aside: Swift seems to have taken the "let's release major versions very
often" approach, since they are already on a prerelease of major version 3.0 (and
maybe that's better marketing), but to me having three major releases in
two years seems a bit crazy. Of course, Swift has a very large, effectively
captive audience. Julia is likely to follow the Rust approach, albeit with
a *lot* less breakage between pre-1.0 versions.

Issues you may have had with Juno don't really have anything to do with
Julia, although there's an argument to be made that pre-1.0 numbering of
the language is a good indicator that the ecosystem may be a little rough
around the edges. Once Julia 0.5 is out, there will be a stable version of
Juno released to go with it.

On Tue, Sep 13, 2016 at 1:19 PM, Chris Rackauckas 
wrote:

> I agree that there are some qualifiers that have to go with it in order to
> get the connotation right, but the term "unstable but usable pre-release"
> is just a phrase for alpha. It's not misleading to characterize Julia as
> not having been released since the core dev group is calling v1.0 the
> release, and so saying pre-1.0 is just a synonym for "pre-release". Being
> alpha doesn't mean it's buggy, it just means that it's alpha, as in not the
> final version and can change. You can rename it to another synonym, but
> it's still true.
>
> Whether it's in a state to be used in production, that's a call for an
> experienced Julia user who knows the specifics of the application they are
> looking to build. But throwing it in with the more stable languages, so that
> someone in a meeting treats it like a stand-in ("hey guys, we can try Julia.
> It should be faster than those other options, and shouldn't be hard to learn,
> what do you think?"), has a decent chance of leading to trouble.
>
> And I think not being honest with people is a good way to put a bad taste
> in people's mouths. Even if lots of Julia were stable, the package
> ecosystem isn't. Just the other day I was doing damage control since there
> was a version of Juno that was tagged which broke "most users'" systems; by
> "most users" I mean there was an "obvious" fix: look at the error message,
> manually clone this random new repository, maybe check out master on a few
> packages.
> You want to use the plot pane? Oh just checkout master, no biggie.
>  If someone is
> trying to use Julia without knowing any Git/Github, it's possible, but
> we're still at a point where it could lead to some trouble, or it's
> confusing since some basic features are seemingly missing (though just not
> tagged).
>
> When you're talking about a drop-in replacement for MATLAB/Python/R, we're
> talking about people less familiar with all of this software development
> stuff who have a tendency to confuse warnings (like deprecation warnings)
> as errors, not realize their install is Julia v0.3 that they played with
> one day a long time ago, which is why the documentation isn't working,
> trying to use a feature they see mentioned on a Github issue which requires
> checking out a development branch but still on the release, wondering where
> the documentation is for things like threads (and being told to just check
> the source code), accidentally using deprecated / no longer developed
> packages because they were pointed to it by an old StackOverflow post.
> These aren't hypothetical, these are all things I encountered in teaching a
> Julia workshop and beginning to spread it around my lab / department. None
> of these problems are difficult to overcome, but if you say Julia is as
> easy to use right now as those other languages, 
> non-software-development-oriented
> users will quickly encounter these simple problems and it may leave a bad
> taste in their mouths.
>
> "I tried Julia but I 

Re: [julia-users] Advice on (perhaps) chunking to HDF5

2016-09-13 Thread Erik Schnetter
On Tue, Sep 13, 2016 at 11:27 AM, sparrowhawker 
wrote:

> Hi,
>
> I'm new to Julia, and have been able to accomplish a lot of what I used to
> do in Matlab/Fortran, in very little time since I started using Julia in
> the last three months. Here's my newest stumbling block.
>
> I have a process which creates nsamples within a loop. Each sample takes a
> long time to compute as there are expensive finite difference operations,
> which ultimately lead to a sample after, say, 1 to 10 seconds. I have to store
> each of the nsamples, and I know the size and dimensions of each of the
> nsamples (all samples have the same size and dimensions). However,
> depending on the run time parameters, each sample may be a 32x32 image or
> perhaps a 64x64x64 voxset with 3 attributes, i.e., a 64x64x64x3
> hyper-rectangle. To be clear, each sample can be an arbitrary dimension
> hyper-rectangle, specified at run time.
>
> Obviously, since I don't want to lose computation and want to see
> incremental progress, I'd like to do incremental saves of these samples on
> disk, instead of waiting to collect all nsamples at the end. For instance,
> if I had to store 1000 samples of size 64x64, I thought perhaps I could
> chunk and save 64x64 slices to an HDF5 file 1000 times. Is this the right
> approach? If so, here's a prototype program to do so, but it depends on my
> knowing the number of dimensions of the slice, which is not known until
> runtime,
>
> using HDF5
>
> filename = "test.h5"
> # open file
> fmode ="w"
> # get a file object
> fid = h5open(filename, fmode)
> # matrix to write in chunks
> B = rand(64,64,1000)
> # figure out its dimensions
> sizeTuple = size(B)
> Ndims = length(sizeTuple)
> # set up to write in chunks of sizeArray
> sizeArray = ones(Int, Ndims)
> [sizeArray[i] = sizeTuple[i] for i in 1:(Ndims-1)] # last value of size
> array is :...:,1
> # create a dataset models within root
> dset = d_create(fid, "models", datatype(Float64), dataspace(size(B)),
> "chunk", sizeArray)
> [dset[:,:,i] = slicedim(B, Ndims, i) for i in 1:size(B, Ndims)]
> close(fid)
>
> This works, but the second last line, dset[:,:,i] requires syntax
> specific to writing a slice of a dimension 3 array - but I don't know the
> dimensions until run time. Of course I could just write to a flat binary
> file incrementally, but HDF5.jl could make my life so much simpler!
>

HDF5 supports "extensible datasets", which were created for use cases such
as this one. I don't recall the exact syntax, but if I recall correctly,
you can specify one dimension (the first one in C, the last one in Julia)
to be extensible, and then you can add more data as you go. You will
probably need to specify a chunk size, which could be the size of the
increment in your case. Given file system speeds, a chunk size smaller than
a few megabytes probably doesn't make much sense (i.e. it will slow things down).
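Erik's extensible-dataset suggestion might look roughly like this with HDF5.jl; a sketch only, assuming the `d_create`/`dataspace(...; max_dims=...)`/`set_dims!` names (they may differ between HDF5.jl versions), with the rank fixed at 3 for clarity:

```julia
using HDF5

fid = h5open("test.h5", "w")
# Start with zero samples; -1 in max_dims marks the last dimension as
# unlimited, so the dataset can grow as samples are computed.
dset = d_create(fid, "models", datatype(Float64),
                dataspace((64, 64, 0), max_dims=(64, 64, -1)),
                "chunk", (64, 64, 1))      # one sample per chunk
for i in 1:1000
    sample = rand(64, 64)                  # stand-in for the expensive step
    set_dims!(dset, (64, 64, i))           # extend by one sample
    dset[:, :, i] = sample                 # incremental save
end
close(fid)
```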

If you want to monitor the HDF5 file as it is being written, look at the
SWMR (single-writer, multiple-reader) feature. This requires HDF5 1.10;
unfortunately, Julia will by default often still install version 1.8.

If you want to protect against crashes of your code so that you don't lose
progress, then HDF5 is probably not right for you. Once an HDF5 file is
open for writing, the on-disk state might be inconsistent, so that you can
lose all data when your code crashes. In this case, you might want to write
data into different files, one per increment. HDF5 1.10 offers "views",
which are umbrella files that stitch together datasets stored in other
files.

If you are looking for generic advice for setting up things with HDF5, then
I recommend their documentation. If you are looking for how to access these
features in Julia, or if you notice a feature that is not available in
Julia, then we'll be happy to explain or correct things.

What do you mean by "dimension only known at run time" -- do you mean what
Julia calls "size" (shape) or what Julia calls "ndims" (rank)?

Do all datasets have the same size, or do they differ? If they differ, then
putting them into the same dataset might not make sense; in this case, I
would write them into different datasets.
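On the rank question: one rank-generic way to write each slice (a sketch, assuming the rank is only known at run time) is to build a tuple of `Colon()`s and splat it into the indexing expression:

```julia
rank = 2                            # e.g. each sample is a 64x64 image
inds = ntuple(_ -> Colon(), rank)   # (:, :) -- one colon per sample dimension

A = zeros(64, 64, 10)               # stand-in for the on-disk dataset
sample = rand(64, 64)
A[inds..., 3] = sample              # equivalent to A[:, :, 3] = sample
```

`fill(:, rank)` works the same way, at the cost of allocating a vector instead of a tuple.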

-erik

-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


[julia-users] Re: Help on building Julia with Intel MKL on Windows?

2016-09-13 Thread Zhong Pan
Thank you Tony for the information. I still ran into some problems.

I tried building Julia (release-0.5) using MinGW following the instructions 
here: 
https://github.com/JuliaLang/julia/blob/master/README.windows.md

The only difference is, after creating the Make.user file, I edited it 
using a text editor and added the options to use MKL, so the complete 
Make.user now looks like:
override CMAKE=/C/CMake/bin/cmake.exe
USE_INTEL_MKL = 1
USE_INTEL_MKL_FFT = 1
USE_INTEL_LIBM = 1

The addition of the above lines is per build instructions about MKL 
at https://github.com/JuliaLang/julia, except that I removed "USEICC = 1" 
and "USEIFC = 1" since I am not using the Intel compilers. But I do have Intel 
MKL with the free Community License installed on my computer at "C:\Program 
Files (x86)\IntelSWTools\compilers_and_libraries_2017.0.109\windows\mkl"

After I started the build with "make -j 3", the build would run for a while 
then stop with 
make: *** [Makefile:81: julia-deps] Error 2

I went through the messages and found this to be the most suspicious for a 
root cause:
c:/users/zpan/documents/github/julia/usr/x86_64-w64-mingw32/sys-root/mingw/bin/../lib/gcc/x86_64-w64-mingw32/6.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe:
 
cannot find -lmkl_rt

So I realized that MinGW was not able to find the MKL libraries. I tried 
the following 2:

(1) Copied all the 17 .lib files from "C:\Program Files 
(x86)\IntelSWTools\compilers_and_libraries_2017.0.109\windows\mkl\lib\intel64_win"
 
to "C:\mkllib" (so the path does not contain a space), then I did
$ export MKLLIB=/C/mkllib

(2) Copied all the 17 .lib files to 
"C:\Users\zpan\Documents\GitHub\julia\usr\lib"

But I was still getting the same Error 2 in the end, and I could still find 
the error message saying "cannot find -lmkl_rt".  Well, mkl_rt.lib was 
indeed part of the 17 files, which are:

mkl_blas95_ilp64.lib
mkl_blas95_lp64.lib
mkl_core.lib
mkl_core_dll.lib
mkl_intel_ilp64.lib
mkl_intel_ilp64_dll.li
mkl_intel_lp64.lib
mkl_intel_lp64_dll.lib
mkl_intel_thread.lib
mkl_intel_thread_dll.l
mkl_lapack95_ilp64.lib
mkl_lapack95_lp64.lib
mkl_rt.lib
mkl_sequential.lib
mkl_sequential_dll.lib
mkl_tbb_thread.lib
mkl_tbb_thread_dll.lib

I am currently trying the command "make install" to build without the 
parallel option - this is very slow, but I can see the messages in the 
proper order.

Did I miss something to let the linker see the .lib files? Any help greatly 
appreciated! 
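For what it's worth, one direction to try (an untested sketch -- the `MKLROOT` variable name and the `LDFLAGS` handling are assumptions that should be checked against `Make.inc` in your checkout): point the build system at the space-free copy of the libraries from inside Make.user itself, rather than via an exported shell variable, e.g.:

```make
# Hypothetical Make.user additions; verify the variable names against
# Make.inc before relying on them.
override CMAKE=/C/CMake/bin/cmake.exe
USE_INTEL_MKL = 1
USE_INTEL_MKL_FFT = 1
USE_INTEL_LIBM = 1
# Point the toolchain at the space-free MKL copy:
MKLROOT = /c/mkllib
LDFLAGS += -L/c/mkllib
```

The idea is that make variables set in the shell environment are not always picked up by sub-makes, whereas anything in Make.user is.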



On Monday, September 12, 2016 at 3:59:19 PM UTC-5, Tony Kelman wrote:
>
> Intel compilers on windows are MSVC style, which our build system is not 
> really set up to handle. There is experimental partial support (search for 
> "MSVC support tracking issue" if you're interested) but it would really 
> require rewriting the build system to use cmake to work smoothly.
>
> You can build Julia from source with mingw but direct the build system to 
> use an mkl library instead of building openblas from source. At one point 
> mkl on windows didn't export gfortran style naming conventions for blas and 
> lapack functions, but recent versions do, last I checked. You might even 
> be able to switch an existing Julia binary over by rebuilding the system 
> image.
>


Re: [julia-users] Julia Users and First Customers

2016-09-13 Thread Stefan Karpinski
On Tue, Sep 13, 2016 at 10:21 AM, Adeel Malik 
wrote:

> I would like to ask a few questions about Julia that I could not find on
> the internet.
>
> 1) Who were the very first few users of Julia ?
>

Julia was developed by Jeff Bezanson, Viral Shah and myself, funded (in
part) by Alan Edelman; and Alan was the first user, unless you consider the
three initial developers to be users. He also had an MIT class use Julia
for numerical stuff before Julia 1.0, and they were probably the next round
of early users after Alan. Jameson Nash was one of those students, so at
least one of them liked it :)

2) Who were the industrial customers of Julia when it was first released?
> Who are those industrial customers now?
>

Intel, BlackRock, the FAA (ACAS X), Conning, Invenia, Voxel8 and a number
of other companies have publicly talked about using Julia and have
sponsored JuliaCons. There are many others too but some are not public and
it's nearly impossible to track this. Julia just broke into the top 50
programming languages on the Tiobe Index – there are a lot of users these
days.

3) How did Julia find more users?
>

Word of mouth, talks at conferences, and sites like Hacker News and Reddit,
and the occasional article in magazines like Wired.

4) How did Julia survive against Python and R at first release?
>

The initial selling point is that it's much faster. But people come for the
speed and stay for the features and productivity (and speed). Or they don't
– some people get frustrated by the less mature ecosystem and go back to
using what they were before, but those people are far outnumbered by the
people who keep on using Julia, or the user base wouldn't be growing so
fast.


Re: [julia-users] debugging tips for GC?

2016-09-13 Thread Yichao Yu
On Tue, Sep 13, 2016 at 3:31 PM, Andreas Lobinger 
wrote:

> Hello colleague,
>
> On Tuesday, September 13, 2016 at 7:25:38 PM UTC+2, Yichao Yu wrote:
>>
>>
>> On Tue, Sep 13, 2016 at 12:49 PM, Andreas Lobinger 
>> wrote:
>>
>>> Hello colleagues,
>>>
>>> i'm trying to find out, why this
>>> ...
>>>
>> fails miserably. I guess, but cannot track it down right now: There is
>>> something wrong in memory management of Cairo.jl that only shows up for
>>> objects that could have been freed long ago and julia and libcairo have
>>> different concepts of invalidation.
>>>
>>> Any blog/recipe/issue that deals with GC debugging?
>>>
>>
>> It's not too different from debugging memory issues in any other program.
>> It usually helps (a lot) to reproduce under rr[1]
>>
>
> Many thanks for pointing to this. I was aware it exists but wasn't aware
> of their progress.
>
>
>> Other than that, it strongly depends on the kind of error, and I've seen
>> it happen due to almost all parts of the runtime; it's really hard to
>> summarize.
>>
>
> What do you mean by "happen due to almost all parts of the runtime"?
>

The general procedure is basically to catch the failure and try to figure
out how it got into this state. That usually means tracing back to where a
certain value was generated, which in turn often means tracing back through
a few layers of code that may be scattered all over the place.


Re: [julia-users] debugging tips for GC?

2016-09-13 Thread Andreas Lobinger
Hello colleague,

On Tuesday, September 13, 2016 at 7:25:38 PM UTC+2, Yichao Yu wrote:
>
>
> On Tue, Sep 13, 2016 at 12:49 PM, Andreas Lobinger  > wrote:
>
>> Hello colleagues,
>>
>> i'm trying to find out, why this 
>> ...
>>
> fails miserably. I guess, but cannot track it down right now: There is 
>> something wrong in memory management of Cairo.jl that only shows up for 
>> objects that could have been freed long ago and julia and libcairo have 
>> different concepts of invalidation.
>>
>> Any blog/recipe/issue that deals with GC debugging?
>>
>
> It's not too different from debugging memory issues in any other program.
> It usually helps (a lot) to reproduce under rr[1]
>

Many thanks for pointing to this. I was aware it exists but wasn't aware of 
their progress.
 

> Other than that, it strongly depends on the kind of error, and I've seen 
> it happen due to almost all parts of the runtime; it's really hard to 
> summarize.
>

What do you mean by "happen due to almost all parts of the runtime"?


[julia-users] Re: Advice on (perhaps) chunking to HDF5

2016-09-13 Thread sparrowhawker
I only found chunking information in the docs on HDF5, not JLD. 

Could you be more specific? Do you mean using custom serialization in JLD 
files? And will that get around my problem that the rank of the data to be 
chunked is specified only at runtime?

On Tuesday, September 13, 2016 at 1:30:31 PM UTC-5, Kristoffer Carlsson 
wrote:
>
> How about using JLD.jl: https://github.com/JuliaIO/JLD.jl
>
> On Tuesday, September 13, 2016 at 7:00:25 PM UTC+2, sparrowhawker wrote:
>>
>> Hi,
>>
>> I'm new to Julia, and have been able to accomplish a lot of what I used 
>> to do in Matlab/Fortran, in very little time since I started using Julia in 
>> the last three months. Here's my newest stumbling block.
>>
>> I have a process which creates nsamples within a loop. Each sample takes 
>> a long time to compute as there are expensive finite difference operations, 
>> which ultimately lead to a sample, say 1 to 10 seconds. I have to store 
>> each of the nsamples, and I know the size and dimensions of each of the 
>> nsamples (all samples have the same size and dimensions). However, 
>> depending on the run time parameters, each sample may be a 32x32 image or 
>> perhaps a 64x64x64 voxset with 3 attributes, i.e., a 64x64x64x3 
>> hyper-rectangle. To be clear, each sample can be an arbitrary dimension 
>> hyper-rectangle, specified at run time.
>>
>> Obviously, since I don't want to lose computation and want to see 
>> incremental progress, I'd like to do incremental saves of these samples on 
>> disk, instead of waiting to collect all nsamples at the end. For instance, 
>> if I had to store 1000 samples of size 64x64, I thought perhaps I could 
>> chunk and save 64x64 slices to an HDF5 file 1000 times. Is this the right 
>> approach? If so, here's a prototype program to do so, but it depends on my 
>> knowing the number of dimensions of the slice, which is not known until 
>> runtime,
>>
>> using HDF5
>>
>> filename = "test.h5"
>> # open file
>> fmode ="w"
>> # get a file object
>> fid = h5open(filename, fmode)
>> # matrix to write in chunks
>> B = rand(64,64,1000)
>> # figure out its dimensions
>> sizeTuple = size(B)
>> Ndims = length(sizeTuple)
>> # set up to write in chunks of sizeArray
>> sizeArray = ones(Int, Ndims)
>> [sizeArray[i] = sizeTuple[i] for i in 1:(Ndims-1)] # last value of size 
>> array is :...:,1
>> # create a dataset models within root
>> dset = d_create(fid, "models", datatype(Float64), dataspace(size(B)), 
>> "chunk", sizeArray)
>> [dset[:,:,i] = slicedim(B, Ndims, i) for i in 1:size(B, Ndims)]
>> close(fid)
>>
>> This works, but the second last line, dset[:,:,i] requires syntax 
>> specific to writing a slice of a dimension 3 array - but I don't know the 
>> dimensions until run time. Of course I could just write to a flat binary 
>> file incrementally, but HDF5.jl could make my life so much simpler!
>>
>> Many thanks for any pointers.
>>
>
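(For later readers: the rank-agnostic slicing the replies converge on can be sketched with a splatted tuple of colons. The plain-array version below is runnable as-is; the assumption is that the same indexing works on an HDF5 dataset handle.)

```julia
# Build an index tuple of colons whose length matches the sample rank,
# known only at runtime, then splat it into the indexing expression.
B = rand(4, 4, 10)                   # stand-in for 10 samples of size 4x4
n = ndims(B)
inds = ntuple(_ -> Colon(), n - 1)   # (:, :) for rank 3
slices = [B[inds..., i] for i in 1:size(B, n)]
@assert length(slices) == 10 && size(slices[1]) == (4, 4)
```

With an HDF5 dataset, the write direction would be `dset[inds..., i] = sample` inside the sampling loop.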

Re: [julia-users] hashing floating point zeroes

2016-09-13 Thread Stefan Karpinski
Hashing -0.0 and 0.0 the same has been discussed, but it has an
unfortunate interaction with sorting. See #9381 and #18485, which I just
opened.

On Tue, Sep 13, 2016 at 10:41 AM, Steven G. Johnson 
wrote:

>
>
> On Tuesday, September 13, 2016 at 10:07:49 AM UTC-4, Fengyang Wang wrote:
>>
>> This is an intuitive explanation, but the mathematics of IEEE floating
>> points seem to be designed so that 0.0 represents a "really small positive
>> number" and -0.0 represents "exact zero" or at least "an even smaller
>> really small negative number"; hence -0.0 + 0.0 = 0.0. I never understood
>> this either.
>>
>
> For one thing, the signed zero preserves 1/(1/x) == x, even when x is +Inf
> or -Inf, since 1/-Inf is -0.0 and 1/-0.0 is -Inf.   More generally, when
> there is underflow (numbers get so small they can no longer be
> represented), you lose the value but you don't lose the sign.   Also, the
> sign of zero is useful in evaluating complex-valued functions that have a
> branch cut along the real axis, so that you know which side of the branch
> you are on (see the classic paper Much Ado About Nothing's Sign Bit by
> Kahan).
>
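The identities Steven mentions are easy to check at the REPL (plain Julia, nothing assumed beyond IEEE semantics):

```julia
# Signed zero preserves 1/(1/x) == x even at the infinities:
@assert 1 / (1 / -Inf) == -Inf   # since 1/-Inf == -0.0 and 1/-0.0 == -Inf
@assert 1 / (1 / Inf) == Inf

# -0.0 and 0.0 compare equal under ==, yet remain distinguishable values,
# which is what the hashing discussion above is about:
@assert -0.0 == 0.0
@assert !isequal(-0.0, 0.0)
@assert signbit(-0.0) && !signbit(0.0)
```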


Re: [julia-users] Re: Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Stefan Karpinski
Great. It seems like there's some low-hanging fruit to speed it up even
more :)

On Tue, Sep 13, 2016 at 2:35 PM, Neal Becker  wrote:

> Stefan Karpinski wrote:
>
> > On Tue, Sep 13, 2016 at 1:23 PM, Neal Becker
> >  wrote:
> >
> >>
> >> Thanks for the ideas.  Here, though, the generated values need to be
> >> Uniform([0...2^N]), where N could be any number.  For example [0...2^3].
> >> So the output array itself would be Array{Int64} for example, but the
> >> values
> >> in the array are [0 ... 7].  Do you know a better way to do this?
> >
> >
> > Is this the kind of thing you're looking for?
> >
> > julia> @time rand(0x0:0x7, 10^5);
> >   0.001795 seconds (10 allocations: 97.953 KB)
> >
> >
> > Produces a 10^5-element array of random UInt8 values between 0 and 7.
>
> Yes, that is the sort of thing I want!  I guess the type of the returned
> array is determined by the type of the argument passed.
>
> BTW, after the first fix to my PnGen, the time for the julia code is about
> equal to the python/c++ code.  Not bad I suppose for a 1st try, although
> this code is pretty trivial.
>
>
>


[julia-users] Re: Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Neal Becker
Stefan Karpinski wrote:

> On Tue, Sep 13, 2016 at 1:23 PM, Neal Becker
>  wrote:
> 
>>
>> Thanks for the ideas.  Here, though, the generated values need to be
>> Uniform([0...2^N]), where N could be any number.  For example [0...2^3].
>> So the output array itself would be Array{Int64} for example, but the
>> values
>> in the array are [0 ... 7].  Do you know a better way to do this?
> 
> 
> Is this the kind of thing you're looking for?
> 
> julia> @time rand(0x0:0x7, 10^5);
>   0.001795 seconds (10 allocations: 97.953 KB)
> 
> 
> Produces a 10^5-element array of random UInt8 values between 0 and 7.

Yes, that is the sort of thing I want!  I guess the type of the returned 
array is determined by the type of the argument passed.

BTW, after the first fix to my PnGen, the time for the julia code is about 
equal to the python/c++ code.  Not bad I suppose for a 1st try, although 
this code is pretty trivial.




Re: [julia-users] Re: Julia Low Pass Filter much slower than identical code in Python ??

2016-09-13 Thread Stefan Karpinski
As an aside, most JIT runtimes actually don't compile code until after it
has been run a few thousand times and seems hot – before that they
interpret it. Julia is unusual in that it fully JIT compiles almost all
code (except for some top-level expressions which are interpreted) before
running it at all. That's technically what JIT originally meant, but not
what most systems do since they often *need* profile information from
execution in order to compile the code, whereas Julia can infer this
information in advance.
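The first-call compilation cost is easy to observe with any freshly defined method (absolute times vary by machine; the point is the gap between the two calls):

```julia
f(x) = sum(abs2, x)        # a brand-new method, not compiled yet
x = rand(10^4)

t1 = @elapsed f(x)         # first call: JIT compilation + execution
t2 = @elapsed f(x)         # second call: execution only
@assert t2 < t1            # compilation dominates the first timing
```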

On Tue, Sep 13, 2016 at 3:09 AM, Matjaz Licer 
wrote:

> Makes sense, thanks :-)
>
> On 12 September 2016 at 17:17, Stefan Karpinski 
> wrote:
>
>> JIT = Just In Time, i.e. the first time you use the code.
>>
>> On Mon, Sep 12, 2016 at 6:52 AM, MLicer  wrote:
>>
>>> Indeed it does! I thought JIT compilation takes place prior to execution
>>> of the script. Thanks so much, this makes sense now!
>>>
>>> Output:
>>> first call:   0.804573 seconds (1.18 M allocations: 53.183 MB, 1.43% gc
>>> time)
>>> repeated call:  0.000472 seconds (217 allocations: 402.938 KB)
>>>
>>> Thanks again,
>>>
>>> Cheers!
>>>
>>>
>>> On Monday, September 12, 2016 at 12:48:30 PM UTC+2, randm...@gmail.com
>>> wrote:

 The Julia code takes 0.000535 seconds for me on the second run --
 during the first run, Julia has to compile the method you're timing. Have a
 look at the performance tips for a more in-depth explanation.

 Am Montag, 12. September 2016 11:53:01 UTC+2 schrieb MLicer:
>
> Dear all,
>
> i've written a low-pass filter in Julia and Python and the code in
> Julia seems to be much slower (*0.800 sec in Julia vs 0.000 sec in
> Python*). I *must* be coding inefficiently; can anyone comment on
> the two codes below?
>
> *Julia:*
>
>
> 
> using PyPlot, DSP
>
> # generate data:
> x = linspace(0,30,1e4)
> sin_noise(arr) = sin(arr) + rand(length(arr))
>
> # create filter:
> designmethod = Butterworth(5)
> ff = digitalfilter(Lowpass(0.02),designmethod)
> @time yl = filtfilt(ff, sin_noise(x))
>
> Python:
>
> from scipy import signal
> import numpy as np
> import cProfile, pstats
>
> def sin_noise(arr):
> return np.sin(arr) + np.random.rand(len(arr))
>
> def filterSignal(b,a,x):
> return signal.filtfilt(b, a, x, axis=-1)
>
> def main():
> # generate data:
> x = np.linspace(0,30,1e4)
> y = sin_noise(x)
> b, a = signal.butter(5, 0.02, "lowpass", analog=False)
> ff = filterSignal(b,a,y)
>
> cProfile.runctx('filterSignal(b,a,y)',globals(),{'b':b,'a':a,'y':y
> },filename='profileStatistics')
>
> p = pstats.Stats('profileStatistics')
> printFirstN = 5
> p.sort_stats('cumtime').print_stats(printFirstN)
>
> if __name__=="__main__":
> main()
>
>
> Thanks very much for any replies!
>

>>
>


Re: [julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Stefan Karpinski
On Tue, Sep 13, 2016 at 1:23 PM, Neal Becker  wrote:

>
> Thanks for the ideas.  Here, though, the generated values need to be
> Uniform([0...2^N]), where N could be any number.  For example [0...2^3].
> So the output array itself would be Array{Int64} for example, but the
> values
> in the array are [0 ... 7].  Do you know a better way to do this?


Is this the kind of thing you're looking for?

julia> @time rand(0x0:0x7, 10^5);
  0.001795 seconds (10 allocations: 97.953 KB)


Produces a 10^5-element array of random UInt8 values between 0 and 7.


[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Chris Rackauckas
A range is a type that essentially acts as an array, but really is only 
three numbers: start, end, and the step length. I.e. a=0:2^N would make a 
range where a[1]=0, a[i]==i-1, and a[end]=2^N. I haven't looked at the 
whole code, but if you're using rand([0...2^N]), then each time you do that 
it has to make the array, whereas rand(1:2^N) or things like that using 
ranges won't. So if you find yourself making arrays like [0...2^N], they 
should probably be ranges.
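A small sketch of the difference (the `0:2^N` range follows Chris's description; `0:2^N-1` gives the `[0...2^N)` values from the original question):

```julia
N = 3
r = 0:2^N                 # a range stores only start, step, and stop
@assert r[1] == 0 && r[5] == 4 && r[end] == 2^N   # r[i] == i - 1

# rand with a range allocates only the output array, not the candidate set:
a = rand(0:2^N - 1, 10^5)
@assert all(x -> 0 <= x < 2^N, a)
@assert eltype(a) == Int  # element type follows the range's element type
```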

On Tuesday, September 13, 2016 at 10:43:39 AM UTC-7, Neal Becker wrote:
>
> I'm not following you here. IIUC a range is a single scalar value?  Are 
> you 
> suggesting I want an Array{range}? 
>
> Chris Rackauckas wrote: 
>
> > Do you need to use an array? That sounds better suited for a range. 
> > 
> > On Tuesday, September 13, 2016 at 10:24:15 AM UTC-7, Neal Becker wrote: 
> >> 
> >> Steven G. Johnson wrote: 
> >> 
> >> > 
> >> > 
> >> > 
> >> > On Monday, September 12, 2016 at 7:32:48 AM UTC-4, Neal Becker wrote: 
> >> >> 
> >> >> PnSeq.jl calls rand() to get a Int64, caching the result and then 
> >> >> providing 
> >> >> N bits at a time to fill an Array.  It's supposed to be a fast way 
> to 
> >> get 
> >> >> an 
> >> >> Array of small-width random integers. 
> >> >> 
> >> > 
> >> > rand(T, n) already does this for small integer types T.  (In fact, it 
> >> > generates 128 random bits at a time.)  See base/random.jl 
> >> > 
> >> < 
> >> 
>
> https://github.com/JuliaLang/julia/blob/d0e7684dd0ce867e1add2b88bb91f1c4574100e0/base/random.jl#L507-L515>
>  
>
> >> 
> >> > for how it does it. 
> >> > 
> >> > In a quick test, rand(UInt16, 10^6) was more than 6x faster than 
> >> > pnseq(16)(10^6, UInt16). 
> >> 
> >> Thanks for the ideas.  Here, though, the generated values need to be 
> >> Uniform([0...2^N]), where N could be any number.  For example 
> [0...2^3]. 
> >> So the output array itself would be Array{Int64} for example, but the 
> >> values 
> >> in the array are [0 ... 7].  Do you know a better way to do this? 
> >> 
> >> > 
> >> > (In a performance-critical situation where you are calling this lots 
> of 
> >> > times to generate random arrays, I would pre-allocate the array A and 
> >> call 
> >> > rand!(A) instead to fill it with random numbers in-place.) 
> >> 
> >> 
> >> 
>
>
>

[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Neal Becker
I'm not following you here. IIUC a range is a single scalar value?  Are you 
suggesting I want an Array{range}?

Chris Rackauckas wrote:

> Do you need to use an array? That sounds better suited for a range.
> 
> On Tuesday, September 13, 2016 at 10:24:15 AM UTC-7, Neal Becker wrote:
>>
>> Steven G. Johnson wrote:
>>
>> > 
>> > 
>> > 
>> > On Monday, September 12, 2016 at 7:32:48 AM UTC-4, Neal Becker wrote:
>> >> 
>> >> PnSeq.jl calls rand() to get a Int64, caching the result and then
>> >> providing
>> >> N bits at a time to fill an Array.  It's supposed to be a fast way to
>> get
>> >> an
>> >> Array of small-width random integers.
>> >> 
>> > 
>> > rand(T, n) already does this for small integer types T.  (In fact, it
>> > generates 128 random bits at a time.)  See base/random.jl
>> > 
>> <
>> 
https://github.com/JuliaLang/julia/blob/d0e7684dd0ce867e1add2b88bb91f1c4574100e0/base/random.jl#L507-L515>
>>
>> > for how it does it.
>> > 
>> > In a quick test, rand(UInt16, 10^6) was more than 6x faster than
>> > pnseq(16)(10^6, UInt16).
>>
>> Thanks for the ideas.  Here, though, the generated values need to be
>> Uniform([0...2^N]), where N could be any number.  For example [0...2^3].
>> So the output array itself would be Array{Int64} for example, but the
>> values
>> in the array are [0 ... 7].  Do you know a better way to do this?
>>
>> > 
>> > (In a performance-critical situation where you are calling this lots of
>> > times to generate random arrays, I would pre-allocate the array A and
>> call
>> > rand!(A) instead to fill it with random numbers in-place.)
>>
>>
>>




[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Chris Rackauckas
Do you need to use an array? That sounds better suited for a range.

On Tuesday, September 13, 2016 at 10:24:15 AM UTC-7, Neal Becker wrote:
>
> Steven G. Johnson wrote: 
>
> > 
> > 
> > 
> > On Monday, September 12, 2016 at 7:32:48 AM UTC-4, Neal Becker wrote: 
> >> 
> >> PnSeq.jl calls rand() to get a Int64, caching the result and then 
> >> providing 
> >> N bits at a time to fill an Array.  It's supposed to be a fast way to 
> get 
> >> an 
> >> Array of small-width random integers. 
> >> 
> > 
> > rand(T, n) already does this for small integer types T.  (In fact, it 
> > generates 128 random bits at a time.)  See base/random.jl 
> > 
> <
> https://github.com/JuliaLang/julia/blob/d0e7684dd0ce867e1add2b88bb91f1c4574100e0/base/random.jl#L507-L515>
>  
>
> > for how it does it. 
> > 
> > In a quick test, rand(UInt16, 10^6) was more than 6x faster than 
> > pnseq(16)(10^6, UInt16). 
>
> Thanks for the ideas.  Here, though, the generated values need to be 
> Uniform([0...2^N]), where N could be any number.  For example [0...2^3]. 
> So the output array itself would be Array{Int64} for example, but the 
> values 
> in the array are [0 ... 7].  Do you know a better way to do this? 
>
> > 
> > (In a performance-critical situation where you are calling this lots of 
> > times to generate random arrays, I would pre-allocate the array A and 
> call 
> > rand!(A) instead to fill it with random numbers in-place.) 
>
>
>

[julia-users] electron framework / javascript / LLVM / Julia numerics?

2016-09-13 Thread Perrin Meyer
The github "electron" cross platform app framework looks pretty slick upon 
first inspection (chrome / v8 / node.js /  javascript, llvm)  

However, last time I checked, the JavaScript numerical libraries I've 
looked at were alpha quality at best.

Since Julia is also LLVM-based, would it be possible to create numeric 
libraries / code in Julia and "export" it (via emscripten?) as "pure" 
asm.js JavaScript code that could be linked against code in the Electron 
framework? That could be an easy way to create cross-platform apps (Linux, 
Mac, Windows, Android) with high-quality numerics. I would be more 
interested in correctness than raw speed, although I've been impressed by 
the V8 / asm.js benchmarks I've seen.

Thanks

perrin



Re: [julia-users] debugging tips for GC?

2016-09-13 Thread Yichao Yu
On Tue, Sep 13, 2016 at 12:49 PM, Andreas Lobinger 
wrote:

> Hello colleagues,
>
> i'm trying to find out, why this
>
>_
>_   _ _(_)_ |  A fresh approach to technical computing
>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>_ _   _| |_  __ _   |  Type "?help" for help.
>   | | | | | | |/ _` |  |
>   | | |_| | | | (_| |  |  Version 0.5.0-rc4+0 (2016-09-09 01:43 UTC)
>  _/ |\__'_|_|_|\__'_|  |
> |__/   |  x86_64-linux-gnu
>
> julia> Pkg.test("Cairo")
> INFO: Testing Cairo
> INFO: Cairo tests passed
>
> julia> Pkg.test("Cairo")
> INFO: Testing Cairo
> INFO: Cairo tests passed
>
> julia> Pkg.test("Cairo")
> INFO: Testing Cairo
> INFO: Cairo tests passed
>
> julia> Pkg.test("Cairo")
> INFO: Testing Cairo
> INFO: Cairo tests passed
>
> julia> Pkg.test("Cairo")
> INFO: Testing Cairo
> INFO: Cairo tests passed
>
> julia> Pkg.test("Cairo")
> INFO: Testing Cairo
> *** Error in `/home/lobi/julia05/usr/bin/julia': free(): invalid pointer:
> 0x04013e80 ***
>
> signal (6): Aborted
> while loading /home/lobi/.julia/v0.5/Cairo/samples/sample_imagepattern.jl,
> in expression starting on line 29
> raise at /build/eglibc-3GlaMS/eglibc-2.19/signal/../nptl/sysdeps/
> unix/sysv/linux/raise.c:56
> abort at /build/eglibc-3GlaMS/eglibc-2.19/stdlib/abort.c:89
> __libc_message at /build/eglibc-3GlaMS/eglibc-2.19/libio/../sysdeps/posix/
> libc_fatal.c:175
> malloc_printerr at /build/eglibc-3GlaMS/eglibc-2.19/malloc/malloc.c:4996
> [inlined]
> _int_free at /build/eglibc-3GlaMS/eglibc-2.19/malloc/malloc.c:3840
> jl_gc_free_array at /home/lobi/julia05/src/gc.c:744 [inlined]
> sweep_malloced_arrays at /home/lobi/julia05/src/gc.c:765 [inlined]
> gc_sweep_other at /home/lobi/julia05/src/gc.c:1032 [inlined]
> _jl_gc_collect at /home/lobi/julia05/src/gc.c:1792 [inlined]
> jl_gc_collect at /home/lobi/julia05/src/gc.c:1858
> jl_gc_pool_alloc at /home/lobi/julia05/src/gc.c:828
> jl_gc_alloc_ at /home/lobi/julia05/src/julia_internal.h:148 [inlined]
> jl_gc_alloc at /home/lobi/julia05/src/gc.c:1881
> jl_alloc_svec_uninit at /home/lobi/julia05/src/simplevector.c:47
> jl_alloc_svec at /home/lobi/julia05/src/simplevector.c:56
> intersect_tuple at /home/lobi/julia05/src/jltypes.c:601
> jl_type_intersect at /home/lobi/julia05/src/jltypes.c:992
> jl_type_intersection_matching at /home/lobi/julia05/src/jltypes.c:1510
> jl_lookup_match at /home/lobi/julia05/src/typemap.c:376 [inlined]
> jl_typemap_intersection_node_visitor at /home/lobi/julia05/src/
> typemap.c:503
> jl_typemap_intersection_visitor at /home/lobi/julia05/src/typemap.c:565
> jl_typemap_intersection_visitor at /home/lobi/julia05/src/typemap.c:556
> ml_matches at /home/lobi/julia05/src/gf.c:2266 [inlined]
> jl_matching_methods at /home/lobi/julia05/src/gf.c:2287
> _methods_by_ftype at ./reflection.jl:223
> unknown function (ip: 0x7f00077b1f22)
> jl_call_method_internal at /home/lobi/julia05/src/julia_internal.h:189
> [inlined]
> jl_apply_generic at /home/lobi/julia05/src/gf.c:1929
> inlineable at ./inference.jl:2496
> unknown function (ip: 0x7f00077c4c92)
>
>
> fails miserably. I guess, but cannot track it down right now: There is
> something wrong in memory management of Cairo.jl that only shows up for
> objects that could have been freed long ago and julia and libcairo have
> different concepts of invalidation.
>
> Any blog/recipe/issue that deals with GC debugging?
>

It's not too different from debugging memory issues in any other program.
It usually helps (a lot) to reproduce under rr[1]
Other than that, it strongly depends on the kind of error, and I've seen
it happen due to almost all parts of the runtime; it's really hard to
summarize.

[1] github.com/mozilla/rr


>
> Wishing a happy day,
> Andreas
>


[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Neal Becker
Steven G. Johnson wrote:

> 
> 
> 
> On Monday, September 12, 2016 at 7:32:48 AM UTC-4, Neal Becker wrote:
>>
>> PnSeq.jl calls rand() to get a Int64, caching the result and then
>> providing
>> N bits at a time to fill an Array.  It's supposed to be a fast way to get
>> an
>> Array of small-width random integers.
>>
> 
> rand(T, n) already does this for small integer types T.  (In fact, it
> generates 128 random bits at a time.)  See base/random.jl
> for how it does it.
> 
> In a quick test, rand(UInt16, 10^6) was more than 6x faster than
> pnseq(16)(10^6, UInt16).

Thanks for the ideas.  Here, though, the generated values need to be
Uniform([0...2^N]), where N could be any number.  For example [0...2^3].
So the output array itself would be Array{Int64} for example, but the values 
in the array are [0 ... 7].  Do you know a better way to do this?

> 
> (In a performance-critical situation where you are calling this lots of
> times to generate random arrays, I would pre-allocate the array A and call
> rand!(A) instead to fill it with random numbers in-place.)
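Steven's preallocation suggestion can be sketched like this (modern syntax; on 0.5-era Julia, `rand!` lived in Base and the constructor took no `undef`):

```julia
using Random

# Allocate the output once, then refill it in place on each iteration --
# no fresh array allocation per call.
A = Vector{UInt16}(undef, 10^5)
rand!(A)             # uniform over all UInt16 values
rand!(A, 0x0:0x7)    # or restrict to a range, still in place
@assert all(x -> x <= 0x7, A)
```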




Re: [julia-users] Re: Julia Users and First Customers

2016-09-13 Thread Chris Rackauckas
I agree that there are some qualifiers that have to go with it in order to 
get the connotation right, but the term "unstable but usable pre-release" 
is just a phrase for alpha. It's not misleading to characterize Julia as 
not having been released since the core dev group is calling v1.0 the 
release, and so saying pre-1.0 is just a synonym for "pre-release". Being 
alpha doesn't mean it's buggy, it just means that it's alpha, as in not the 
final version and can change. You can rename it to another synonym, but 
it's still true.

Whether it's in a state to be used in production is a call for an 
experienced Julia user who knows the specifics of the application they are 
looking to build. But throwing it in with the more stable languages so that 
someone in a meeting pitches it as a stand-in ("hey guys, we can try Julia. 
It should be faster than those other options, and shouldn't be hard to 
learn, what do you think?") has a decent chance of leading to trouble.

And I think not being honest with people is a good way to put a bad taste 
in people's mouths. Even if lots of Julia itself were stable, the package 
ecosystem isn't. Just the other day I was doing damage control since a 
version of Juno was tagged that broke "most users'" systems; by "most 
users" I mean there was an "obvious" fix: look at the error message, 
manually clone this random new repository, maybe check out master on a few 
packages. You want to use the plot pane? Oh, just check out master, no 
biggie. If someone is trying to use Julia without knowing any Git/GitHub, 
it's possible, but we're still at a point where it could lead to some 
trouble, or it's confusing since some basic features are seemingly missing 
(though just not tagged). 

When you're talking about a drop-in replacement for MATLAB/Python/R, we're 
talking about people less familiar with all of this software development 
stuff who have a tendency to confuse warnings (like deprecation warnings) 
as errors, not realize that their install is the Julia v0.3 they played 
with one day a long time ago (which is why the documentation isn't 
working), trying to use a feature they saw mentioned in a GitHub issue 
that requires checking out a development branch while still on the 
release, wondering where 
the documentation is for things like threads (and being told to just check 
the source code), accidentally using deprecated / no longer developed 
packages because they were pointed to it by an old StackOverflow post. 
These aren't hypothetical; they are all things I encountered in teaching a 
Julia workshop and beginning to spread it around my lab / department. None 
of these problems are difficult to overcome, but if you say Julia is as 
easy to use right now as those other languages, 
non-software-development-oriented users will quickly encounter these simple 
problems and it may leave a bad taste in their mouths.

"I tried Julia but I couldn't get Juno to install" 
"Did you set the julia path either as an environment variable or inside the 
julia-client package?"
"No, I don't know what the julia path is. Anyways, let me know when it 
actually has an 'auto-install' since I want to be able to use an IDE, but 
have it simple to setup"

That's too common right now. People think a "developed" language means a 
1-click-install IDE and a language where they can use the same script a 
year from now without any errors or warnings, and right now that's not 
offered. Don't get me wrong: I love Julia and use it for everything now 
because it is high quality, not buggy, and works well. But I would still 
say it's alpha.

I too would like to hear the core devs weigh in. I presented my side, but 
am willing to toe the party line if there is one.

On Tuesday, September 13, 2016 at 9:39:09 AM UTC-7, David Anthoff wrote:
>
> I find this characterization of julia as “not released” and “alpha” really 
> not helpful. To the casual reader these terms signal low quality and lots 
> of bugs, and in my experience (after 2.5 years of heavy use) those are the 
> last two things that come to mind with respect to julia. On the contrary, I 
> think the quality of the julia builds at this point can easily compete with 
> things like Python or R (in terms of bugs).
>
>  
>
> I think the correct characterization of the pre-1.0 builds is that julia 
> hasn’t reached a stable API, i.e. you need to be willing to live with 
> breaking changes coming your way. That is a VERY different thing than a 
> buggy alpha release!
>
>  
>
> There is a large group of julia users out there that use julia for “real 
> world” work. It is really not helpful for us if julia gets an undeserved 
> reputation of being a pre-release, buggy thing that shouldn’t be used for 
> “real work”. Such a reputation would question the 

[julia-users] Advice on (perhaps) chunking to HDF5

2016-09-13 Thread sparrowhawker
Hi,

I'm new to Julia, and have been able to accomplish a lot of what I used to 
do in Matlab/Fortran, in very little time since I started using Julia in 
the last three months. Here's my newest stumbling block.

I have a process which creates nsamples within a loop. Each sample takes a 
long time to compute (say 1 to 10 seconds) because of expensive finite 
difference operations. I have to store 
each of the nsamples, and I know the size and dimensions of each of the 
nsamples (all samples have the same size and dimensions). However, 
depending on the run time parameters, each sample may be a 32x32 image or 
perhaps a 64x64x64 voxset with 3 attributes, i.e., a 64x64x64x3 
hyper-rectangle. To be clear, each sample can be an arbitrary dimension 
hyper-rectangle, specified at run time.

Obviously, since I don't want to lose computation and want to see 
incremental progress, I'd like to do incremental saves of these samples on 
disk, instead of waiting to collect all nsamples at the end. For instance, 
if I had to store 1000 samples of size 64x64, I thought perhaps I could 
chunk and save 64x64 slices to an HDF5 file 1000 times. Is this the right 
approach? If so, here's a prototype program to do so, but it depends on my 
knowing the number of dimensions of the slice, which is not known until 
runtime,

using HDF5

filename = "test.h5"
# open file
fmode = "w"
# get a file object
fid = h5open(filename, fmode)
# matrix to write in chunks
B = rand(64,64,1000)
# figure out its dimensions
sizeTuple = size(B)
Ndims = length(sizeTuple)
# set up to write in chunks of sizeArray
sizeArray = ones(Int, Ndims)
[sizeArray[i] = sizeTuple[i] for i in 1:(Ndims-1)] # chunk shape: sample dims, trailing 1
# create a dataset "models" within the root group
dset = d_create(fid, "models", datatype(Float64), dataspace(size(B)), 
"chunk", sizeArray)
[dset[:,:,i] = slicedim(B, Ndims, i) for i in 1:size(B, Ndims)]
close(fid)

This works, but the second-to-last line, dset[:,:,i] = ..., requires syntax 
specific to writing a slice of a three-dimensional array - and I don't know 
the number of dimensions until run time. Of course I could just write to a 
flat binary file incrementally, but HDF5.jl could make my life so much simpler!
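For what it's worth, one dimension-agnostic way to write the slice assignment
is to build a tuple of Colons at runtime and splat it. This is only a sketch;
that an HDF5.jl dataset accepts splatted colon indexing like this is an
assumption:

```Julia
# Sketch: write slice i along the last dimension without hard-coding the rank.
# `colons` is a tuple of (Ndims - 1) Colon objects, splatted into the indexing.
Ndims = length(size(B))
colons = ntuple(d -> Colon(), Ndims - 1)
for i in 1:size(B, Ndims)
    dset[colons..., i] = slicedim(B, Ndims, i)
end
```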

Many thanks for any pointers.


Re: [julia-users] Hard-to-debug deep inlining

2016-09-13 Thread Tim Wheeler
Yes, Julia 0.4
Great to hear that this is easier in 0.5!

-Other Tim

On Tuesday, September 13, 2016 at 9:49:38 AM UTC-7, Tim Holy wrote:
>
> Which version of julia? If you're not using 0.5, try it and you might be 
> pleased. 
>
> You can also launch `julia --inline=no`, which occasionally still remains 
> useful. 
>
> --Tim 
>
> On Tuesday, September 13, 2016 8:55:58 AM CDT Tim Wheeler wrote: 
> > Hi Julia Users, 
> > 
> > So I was looking at ConjugatePriors.jl and trying to resolve its 
> problems 
> > with respect to the latest Distributions.jl. As discussed in issue 11, 
> > testing ConjugatePriors after removing the REQUIRE bounds results in: 
> > 
> > MethodError: no method matching 
> > 
> _rand!(::Distributions.MvNormalCanon{PDMats.PDMat{Float64,Array{Float64,2}}, 
>
> > Array{Float64,1}}, 
> > ::Array{Float64,1}) on line 52 of conjugates_mvnormal.jl 
> > 
> > <https://github.com/JuliaStats/ConjugatePriors.jl/blob/master/test/conjugates_mvnormal.jl#L52> 
> > and line 25 of fallbacks.jl 
> > 
> > If you check that line you find the following: 
> > 
> > posterior_randmodel(pri, G::IncompleteFormulation, x) = complete(G, pri, 
> > posterior_rand(pri, G, x)) 
> > 
> > Okay, the problem isn't really there. The call to posterior_rand is 
> inlined 
> > (I assume), so it doesn't show up in the test stack trace. So we 
> manually 
> > go to: 
> > 
> > posterior_rand(pri, G::IncompleteFormulation, x) = 
> Base.rand(posterior_canon 
> > (pri, G, x)) 
> > 
> > 
> > This also isn't the problem, at least not directly. 
> > 
> > In fact, the also inlined call to posterior_canon(pri, G, x) works fine. 
> It 
> > returns an MvNormalCanon object and then Base.rand is called. 
> > 
> > This calls some inlined functions, which eventually call 
> > Base._rand!(MvNormalCanon, x::Vector), which leads to the problem, 
> namely 
> > that _rand!(MvNormalCanon, x::Matrix) is all that is defined. 
> > 
> > But why was that so hard to discover? Why does only line 25 of 
> fallbacks.jl 
> > show up in the error stack trace? Was there a better way to debug this? 
>
>
>

Re: [julia-users] Hard-to-debug deep inlining

2016-09-13 Thread Tim Holy
Which version of julia? If you're not using 0.5, try it and you might be 
pleased.

You can also launch `julia --inline=no`, which occasionally still remains 
useful.

--Tim

On Tuesday, September 13, 2016 8:55:58 AM CDT Tim Wheeler wrote:
> Hi Julia Users,
> 
> So I was looking at ConjugatePriors.jl and trying to resolve its problems
> with respect to the latest Distributions.jl. As discussed in issue 11, testing
> ConjugatePriors after removing the REQUIRE bounds results in:
> 
> MethodError: no method matching
> _rand!(::Distributions.MvNormalCanon{PDMats.PDMat{Float64,Array{Float64,2}},
> Array{Float64,1}},
> ::Array{Float64,1}) on line 52 of conjugates_mvnormal.jl
> 
> <https://github.com/JuliaStats/ConjugatePriors.jl/blob/master/test/conjugates_mvnormal.jl#L52> and line 25 of fallbacks.jl
> 
> If you check that line you find the following:
> 
> posterior_randmodel(pri, G::IncompleteFormulation, x) = complete(G, pri,
> posterior_rand(pri, G, x))
> 
> Okay, the problem isn't really there. The call to posterior_rand is inlined
> (I assume), so it doesn't show up in the test stack trace. So we manually
> go to:
> 
> posterior_rand(pri, G::IncompleteFormulation, x) = Base.rand(posterior_canon
> (pri, G, x))
> 
> 
> This also isn't the problem, at least not directly.
> 
> In fact, the also inlined call to posterior_canon(pri, G, x) works fine. It
> returns an MvNormalCanon object and then Base.rand is called.
> 
> This calls some inlined functions, which eventually call
> Base._rand!(MvNormalCanon, x::Vector), which leads to the problem, namely
> > that _rand!(MvNormalCanon, x::Matrix) is all that is defined.
> 
> > But why was that so hard to discover? Why does only line 25 of fallbacks.jl
> show up in the error stack trace? Was there a better way to debug this?




[julia-users] debugging tips for GC?

2016-09-13 Thread Andreas Lobinger
Hello colleagues,

i'm trying to find out, why this 

   _
   _   _ _(_)_ |  A fresh approach to technical computing
  (_) | (_) (_)|  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.5.0-rc4+0 (2016-09-09 01:43 UTC)
 _/ |\__'_|_|_|\__'_|  |  
|__/   |  x86_64-linux-gnu

julia> Pkg.test("Cairo")
INFO: Testing Cairo
INFO: Cairo tests passed

julia> Pkg.test("Cairo")
INFO: Testing Cairo
INFO: Cairo tests passed

julia> Pkg.test("Cairo")
INFO: Testing Cairo
INFO: Cairo tests passed

julia> Pkg.test("Cairo")
INFO: Testing Cairo
INFO: Cairo tests passed

julia> Pkg.test("Cairo")
INFO: Testing Cairo
INFO: Cairo tests passed

julia> Pkg.test("Cairo")
INFO: Testing Cairo
*** Error in `/home/lobi/julia05/usr/bin/julia': free(): invalid pointer: 
0x04013e80 ***

signal (6): Aborted
while loading /home/lobi/.julia/v0.5/Cairo/samples/sample_imagepattern.jl, 
in expression starting on line 29
raise at 
/build/eglibc-3GlaMS/eglibc-2.19/signal/../nptl/sysdeps/unix/sysv/linux/raise.c:56
abort at /build/eglibc-3GlaMS/eglibc-2.19/stdlib/abort.c:89
__libc_message at 
/build/eglibc-3GlaMS/eglibc-2.19/libio/../sysdeps/posix/libc_fatal.c:175
malloc_printerr at /build/eglibc-3GlaMS/eglibc-2.19/malloc/malloc.c:4996 
[inlined]
_int_free at /build/eglibc-3GlaMS/eglibc-2.19/malloc/malloc.c:3840
jl_gc_free_array at /home/lobi/julia05/src/gc.c:744 [inlined]
sweep_malloced_arrays at /home/lobi/julia05/src/gc.c:765 [inlined]
gc_sweep_other at /home/lobi/julia05/src/gc.c:1032 [inlined]
_jl_gc_collect at /home/lobi/julia05/src/gc.c:1792 [inlined]
jl_gc_collect at /home/lobi/julia05/src/gc.c:1858
jl_gc_pool_alloc at /home/lobi/julia05/src/gc.c:828
jl_gc_alloc_ at /home/lobi/julia05/src/julia_internal.h:148 [inlined]
jl_gc_alloc at /home/lobi/julia05/src/gc.c:1881
jl_alloc_svec_uninit at /home/lobi/julia05/src/simplevector.c:47
jl_alloc_svec at /home/lobi/julia05/src/simplevector.c:56
intersect_tuple at /home/lobi/julia05/src/jltypes.c:601
jl_type_intersect at /home/lobi/julia05/src/jltypes.c:992
jl_type_intersection_matching at /home/lobi/julia05/src/jltypes.c:1510
jl_lookup_match at /home/lobi/julia05/src/typemap.c:376 [inlined]
jl_typemap_intersection_node_visitor at /home/lobi/julia05/src/typemap.c:503
jl_typemap_intersection_visitor at /home/lobi/julia05/src/typemap.c:565
jl_typemap_intersection_visitor at /home/lobi/julia05/src/typemap.c:556
ml_matches at /home/lobi/julia05/src/gf.c:2266 [inlined]
jl_matching_methods at /home/lobi/julia05/src/gf.c:2287
_methods_by_ftype at ./reflection.jl:223
unknown function (ip: 0x7f00077b1f22)
jl_call_method_internal at /home/lobi/julia05/src/julia_internal.h:189 
[inlined]
jl_apply_generic at /home/lobi/julia05/src/gf.c:1929
inlineable at ./inference.jl:2496
unknown function (ip: 0x7f00077c4c92)


fails miserably. My guess, though I cannot track it down right now: there is 
something wrong in the memory management of Cairo.jl that only shows up for 
objects that could have been freed long ago, and julia and libcairo have 
different notions of invalidation.

Any blog/recipe/issue that deals with GC debugging?

Wishing a happy day,
Andreas


RE: [julia-users] Re: Julia Users and First Customers

2016-09-13 Thread David Anthoff
I find this characterization of julia as “not released” and “alpha” really not 
helpful. To the casual reader these terms signal low quality and lots of bugs, 
and in my experience (after 2.5 years of heavy use) those are the last two 
things that come to mind with respect to julia. On the contrary, I think the 
quality of the julia builds at this point can easily compete with things like 
Python or R (in terms of bugs).

 

I think the correct characterization of the pre-1.0 builds is that julia hasn’t 
reached a stable API, i.e. you need to be willing to live with breaking changes 
coming your way. That is a VERY different thing than a buggy alpha release!

 

There is a large group of julia users out there that use julia for “real world” 
work. It is really not helpful for us if julia gets an undeserved reputation of 
being a pre-release, buggy thing that shouldn’t be used for “real work”. Such a 
reputation would question the validity of our results, whereas a reputation as 
“hasn’t reached a stable API” is completely harmless.

 

Also, keep in mind that there is julia computing out there, which is feeding 
the core dev group. They have customers that pay them (I hope) for supported 
versions of julia, so it seems highly misleading to characterize julia as not 
released and not ready for production. Heck, you can buy a support contract for 
the current released version, so in my mind that seems very much released!

 

I think it would be a good idea if the core julia group would actually put a 
definitive statement out on the website for this topic. There are a couple of 
devs that at least from the outside seem close to the core group that have made 
statements like the one below, that to any sloppy reader will just sound like 
“stay away from julia if you don’t want a bug riddled system”, and I think that 
really doesn’t square with the message that e.g. julia computing needs to put 
out there or with the state of the language. I think a good official position 
would be: “Current julia releases are of high quality and are ready to be used 
for ‘real world’ work. Pre-1.0 releases will introduce breaking API changes 
between 0.x versions, which might require extra work on the users part when 
updating to new julia versions.” Or something like that.

 

Cheers,

David

 

--

David Anthoff

University of California, Berkeley

 

  http://www.david-anthoff.com

 

 

From: julia-users@googlegroups.com [mailto:julia-users@googlegroups.com] On 
Behalf Of Chris Rackauckas
Sent: Tuesday, September 13, 2016 9:05 AM
To: julia-users 
Subject: [julia-users] Re: Julia Users and First Customers

 

1. Jeff Bezanson and Stefan Karpinski. I kid (though that's true). It's the 
group of MIT students who made it. You can track the early issues and kind of 
see who's involved. Very early on that's probably a good indicator of who was 
joining in when, but that only makes sense for very early Julia, when using 
Julia also meant contributing to some extent.

 

2. Julia hasn't been released. Putting it in large scale production and 
treating it like it has been released is a bad idea.

 

3. The results advertise for themselves. You go learn Julia and come back to your 
lab with faster code that was quicker to write than their MATLAB/Python/R code, 
and then everyone wants you to teach a workshop. Also packages seem to play a 
big role: a lot of people come to these forums for the first time to use things 
like JuMP.

 

4. Julia hasn't had its first release. 

 

Keep in mind Julia is still in its alpha. It doesn't even have a beta for v1.0 
yet. That doesn't mean it's not generally usable yet, quite the contrary: any 
hacker willing to play with it will find that you can get some extraordinary 
productivity and performance gains at this point. But just because Julia is 
doing well does not mean that it has suddenly been released. This misconception 
can lead to issues like this blog post. Of course they had issues with Julia 
updating and breaking syntax: it's 
explicitly stated that Base syntax may change and many things may break because 
Julia is still in its alpha, and so there's no reason to slow down / stifle 
development for the few who jumped the gun and wanted a stable release 
yesterday. It always ends up like people complaining that the alpha/beta for a 
video game is buggy: of course it is, that's what you signed up for. Remember 
that?

 

Making sure people are aware of this fact is a good thing for Julia. If 
someone's not really a programmer and doesn't want to read Github issues / deal 
with changing documentation (a lot of mathematicians / scientists), there's 
still no reason to push Julia onto them because Julia, as an unreleased alpha 
program, will change and so you will need to 

[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Steven G. Johnson


On Monday, September 12, 2016 at 8:16:52 AM UTC-4, Steven G. Johnson wrote:
>
>
>
> On Monday, September 12, 2016 at 7:59:33 AM UTC-4, DNF wrote:
>>
>> function(p::pnseq)(n,T=Int64)
>>
>>>
> Note that the real problem with this function declaration is that the type 
> T is known only at runtime, not at compile-time. It would be better to 
> do
>
>  function (p::pnseq){T}(n, ::Type{T}=Int64)
>

Note that you have the same problem in several places, e.g. in 
Constellation.jl.

(I don't really understand what that file is doing, but it seems to be 
constructing lots of little arrays that would be better off constructed 
implicitly as part of other data-processing operations.) 
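To illustrate the runtime vs. compile-time type argument point quoted above,
here is a minimal sketch with a placeholder type (all names illustrative, not
the actual PnSeq code):

```Julia
# Placeholder standing in for pnseq.
immutable Seq end

# T arrives as an ordinary runtime value: the method is not specialized on it.
make_runtime(p::Seq, n, T=Int64) = zeros(T, n)

# T is a method type parameter: known at compile time, so the call specializes.
make_static{T}(p::Seq, n, ::Type{T}=Int64) = zeros(T, n)

make_runtime(Seq(), 4, UInt16)  # works, but T is only known at runtime
make_static(Seq(), 4, UInt16)   # a UInt16-specialized method is compiled
```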

There are lots of micro-optimizations I can spot in Constellation.jl, e.g. 
let k = 2*pi/N; [cis(k*x) for x in 0:N-1]; end is almost 2x faster than 
[exp(im*2*pi/N*x) for x in 0:N-1] on my machine, but as usual one would 
expect that the biggest benefits would arise by re-arranging your code to 
avoid multiple passes over multiple arrays and instead do a single pass 
over one (or zero) arrays.
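Concretely, the two formulations compared above (the speedup factor will of
course vary by machine):

```Julia
N = 64
# original: complex exp, recomputing 2*pi/N for every element
slow = [exp(im*2*pi/N*x) for x in 0:N-1]
# hoist the constant and use cis(t) == exp(im*t), which is cheaper
fast = let k = 2*pi/N; [cis(k*x) for x in 0:N-1]; end
# the two arrays agree to floating-point roundoff
```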

(I also have no idea how well-optimized the DSP.jl FIRFilters routines are; 
maybe Tim Holy knows, since he's been optimizing filters for 
Images.jl)


[julia-users] Re: Pkg.add() works fine while Pkg.update() doesn't over https instead of git

2016-09-13 Thread Chris Rackauckas
You modified your METADATA. Did you make a package or something? The 
easiest thing to do would be to go to the v0.4/METADATA folder and use `git 
stash`. However, this will stash any of the changes you made. Did you make 
these changes for a reason, like you want to publish a new release for a 
package? Then you should commit and push that to your metadata-v2 branch on 
your fork. See http://docs.julialang.org/en/release-0.4/manual/packages/

On Tuesday, September 13, 2016 at 7:54:43 AM UTC-7, Rahul Mourya wrote:
>
> Hi,
> I'm using Julia-0.4.6. My machine is behind a firewall, thus configured 
> git to use https: git config --global url."https://".insteadOf git://.
> Under this setting, I'm able to install packages using Pkg.add(), however, 
> when I use Pkg.update(), I get following error:
>
> INFO: Updating METADATA...
> Cannot pull with rebase: You have unstaged changes.
> Please commit or stash them.
> ERROR: failed process: Process(`git pull --rebase -q`, ProcessExited(1)) 
> [1]
>  in pipeline_error at process.jl:555
>  in run at process.jl:531
>  in anonymous at pkg/entry.jl:283
>  in withenv at env.jl:160
>  in anonymous at pkg/entry.jl:282
>  in cd at ./file.jl:22
>  in update at ./pkg/entry.jl:272
>  in anonymous at pkg/dir.jl:31
>  in cd at file.jl:22
>  in cd at pkg/dir.jl:31
>  in update at ./pkg.jl:45
>
> what could be the reason? Any workaround this?
>
> Thanks!
>
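The `git stash` step can be illustrated in a throwaway repository (paths and
file names below are made up for the demo; in the real case you would run
`git stash` inside ~/.julia/v0.4/METADATA):

```shell
# Reproduce the "unstaged changes" state in a scratch repo, then clear it
# with `git stash` so that `git pull --rebase` would succeed again.
tmp=$(mktemp -d)
cd "$tmp"
git init -q metadata-demo && cd metadata-demo
git config user.email demo@example.com
git config user.name demo
echo "Example v0.1" > Example.txt
git add Example.txt && git commit -qm "initial"
echo "local edit" >> Example.txt   # the unstaged change blocking the rebase
git stash -q                       # set it aside; the worktree is clean again
git status --porcelain             # prints nothing now
```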


[julia-users] Re: Julia Users and First Customers

2016-09-13 Thread Chris Rackauckas
1. Jeff Bezanson and Stefan Karpinski. I kid (though that's true). It's the 
group of MIT students who made it. You can track the early issues and kind 
of see who's involved. Very early on that's probably a good indicator of who 
was joining in when, but that only makes sense for very early Julia, when 
using Julia also meant contributing to some extent.

2. Julia hasn't been released. Putting it in large scale production and 
treating it like it has been released is a bad idea.

3. The results advertise for themselves. You go learn Julia and come back to 
your lab with faster code that was quicker to write than their 
MATLAB/Python/R code, and then everyone wants you to teach a workshop. Also 
packages seem to play a big role: a lot of people come to these forums for 
the first time to use things like JuMP.

4. Julia hasn't had its first release. 

Keep in mind Julia is still in its alpha. It doesn't even have a beta for 
v1.0 yet. That doesn't mean it's not generally usable yet, quite the 
contrary: any hacker willing to play with it will find that you can get 
some extraordinary productivity and performance gains at this point. But 
just because Julia is doing well does not mean that it has suddenly been 
released. This misconception can lead to issues like this blog post. 
Of course they had issues with Julia updating and breaking syntax: it's 
explicitly stated that Base syntax may change and many things may break 
because Julia is still in its alpha, and so there's no reason to slow down 
/ stifle development for the few who jumped the gun and wanted a stable 
release yesterday. It always ends up like people complaining that the 
alpha/beta for a video game is buggy: of course it is, that's what you 
signed up for. Remember that?

Making sure people are aware of this fact is a good thing for Julia. If 
someone's not really a programmer and doesn't want to read Github issues / 
deal with changing documentation (a lot of mathematicians / scientists), 
there's still no reason to push Julia onto them because Julia, as an 
unreleased alpha program, will change and so you will need to keep 
up-to-date with the changes. Disregarding that fact will only lead to 
misconceptions about Julia when people inevitably run into problems here. 

On Tuesday, September 13, 2016 at 7:55:51 AM UTC-7, Adeel Malik wrote:
>
> I would Like to ask few questions about Julia that I could not find it on 
> the internet. 
>
> 1) Who were the very first few users of Julia ?
>
> 2) Who were the industrial customers of Julia when it was first released? 
> Who are those industrial customers now?
>
> 3) How Julia found more users?
>
> 4) How Julia survived against Python and R at first release?
>
> It's not a homework. Anyone can answer these questions or put me at the 
> right direction that would be perfect.
>
> Thanks in advance
>
> Regards,
> Adeel
>


[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Steven G. Johnson


On Monday, September 12, 2016 at 7:32:48 AM UTC-4, Neal Becker wrote:
>
> PnSeq.jl calls rand() to get a Int64, caching the result and then 
> providing 
> N bits at a time to fill an Array.  It's supposed to be a fast way to get 
> an 
> Array of small-width random integers. 
>

rand(T, n) already does this for small integer types T.  (In fact, it 
generates 128 random bits at a time.)  See base/random.jl for how it does it.

In a quick test, rand(UInt16, 10^6) was more than 6x faster than 
pnseq(16)(10^6, UInt16).

(In a performance-critical situation where you are calling this lots of 
times to generate random arrays, I would pre-allocate the array A and call 
rand!(A) instead to fill it with random numbers in-place.)
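The preallocation pattern mentioned in the last paragraph, sketched (sizes and
the loop body are illustrative):

```Julia
# Allocate the buffer once, then refill it in place on every iteration
# instead of allocating a fresh array each time.
A = Array(UInt16, 10^6)
for trial in 1:100
    rand!(A)        # fills A with random UInt16 values, no new allocation
    # ... process A ...
end
```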


[julia-users] Hard-to-debug deep inlining

2016-09-13 Thread Tim Wheeler
Hi Julia Users,

So I was looking at ConjugatePriors.jl and trying to resolve its problems 
with respect to the latest Distributions.jl. As discussed in issue 11, testing 
ConjugatePriors after removing the REQUIRE bounds results in:

MethodError: no method matching 
_rand!(::Distributions.MvNormalCanon{PDMats.PDMat{Float64,Array{Float64,2}},Array{Float64,1}},
 
::Array{Float64,1}) on line 52 of conjugates_mvnormal.jl and line 25 of 
fallbacks.jl

If you check that line you find the following:

posterior_randmodel(pri, G::IncompleteFormulation, x) = complete(G, pri, 
posterior_rand(pri, G, x))

Okay, the problem isn't really there. The call to posterior_rand is inlined 
(I assume), so it doesn't show up in the test stack trace. So we manually 
go to:

posterior_rand(pri, G::IncompleteFormulation, x) = Base.rand(posterior_canon
(pri, G, x))


This also isn't the problem, at least not directly.

In fact, the also inlined call to posterior_canon(pri, G, x) works fine. It 
returns an MvNormalCanon object and then Base.rand is called.

This calls some inlined functions, which eventually call 
Base._rand!(MvNormalCanon, x::Vector), which leads to the problem, namely 
that _rand!(MvNormalCanon, x::Matrix) is all that is defined.

But why was that so hard to discover? Why does only line 25 of fallbacks.jl 
show up in the error stack trace? Was there a better way to debug this?



Re: [julia-users] Re: Tutorial Julia language brazilian portuguese

2016-09-13 Thread Phelipe Wesley
I created the group on Slack; to join, just visit the URL and an invite will
be sent to your email.

https://still-dawn-96640.herokuapp.com/

On 12 September 2016 at 18:59,  wrote:

> Hello Felipe. It's a good idea. If you create it, I'll share it with the
> people at UnB.
>
> On Monday, 12 September 2016 at 17:54:50 UTC-3, Phelipe Wesley wrote:
>>
>> What do you think about creating a Julia-Brasil group on Slack?
>>
>


Re: [julia-users] How to make generic constructors with no arguments work?

2016-09-13 Thread Tom Breloff
julia> type A{T}
         item::Nullable{T}
         function A()
           new(Nullable{T}())
         end
       end

julia> A{Int}()
A{Int64}(Nullable{Int64}())


You only need the name, and it uses the parameters from the type definition.

On Tue, Sep 13, 2016 at 9:28 AM, SZubik  wrote:

> Hi,
>
> For some reason I can't figure out the following, maybe someone
> encountered this before.
> For example I have the following type
>
> type A{T}
>   item::Nullable{T}
> end
>
> and
>
> A{Int64}(2)
>
> returns A{Int64}(Nullable(2)) as expected.
>
> However I want to introduce a constructor with no parameters (so by
> default item is null):
>
> type A{T}
>   item::Nullable{T}
>   function A{T}()
> new{T}(Nullable{T}())
>   end
> end
>
> And I get
>
> WARNING: static parameter T does not occur in signature for call at line 4
> The method will not be callable.
>
>
> As expected, I get some error when try to call it:
>
> A{Int64}()
>
> Error:
>
> LoadError: MethodError: `convert` has no method matching 
> convert(::Type{A{Int64}})
> This may have arisen from a call to the constructor A{Int64}(...),
> since type constructors fall back to convert methods.
>
>
> If someone could help me make sense out of this, it would be much appreciated 
> :)
>
> Thanks
>
>
>


[julia-users] Re: Idea: Julia Standard Libraries and Distributions

2016-09-13 Thread Steven Sagaert
For me a distribution is more than just a cobbled-together bunch of disparate 
packages: ideally it should have a common style and work with common data 
structures for input/output (between methods) to exchange data. That's the 
real crux of the problem, not whether you need to manually import packages.

On Tuesday, September 13, 2016 at 4:49:50 PM UTC+2, Randy Zwitch wrote:
>
> "Also, there's a good reason to ask "why fuss with distributions when 
> anyone could just add the packages and add the import statements to their 
> .juliarc?" (though its target audience is for people who don't know details 
> like the .juliarc, but also want Julia to work seamlessly like MATLAB)."
>
> I feel like if you're using MATLAB, it should be a really small step to 
> teach about the .juliarc file, as opposed to the amount of 
> maintenance/fragmentation that comes along with multiple distributions.
>
> Point #1 makes sense for me, if only because that's a use case that can't 
> be accomplished through simple textfile manipulation
>
> On Tuesday, September 13, 2016 at 4:39:15 AM UTC-4, Chris Rackauckas wrote:
>>
>> I think one major point of contention when talking about what should be 
>> included in Base due to competing factors:
>>
>>
>>1. Some people would like a "lean Base" for things like embedded 
>>installs or other low memory applications
>>2. Some people want a MATLAB-like "bells and whistles" approach. This 
>>way all the functions they use are just there: no extra packages to 
>>find/import.
>>3. Some people like having things in Base because it "standardizes" 
>>things. 
>>4. Putting things in Base constrains their release schedule. 
>>5. Putting things in packages outside of JuliaLang helps free up 
>>Travis.
>>
>>
>> The last two concerns have been why things like JuliaMath have sprung up 
>> to move things out of Base. However, I think there is some credibility to 
>> having some form of standardization. I think this can be achieved through 
>> some kind of standard library. This would entail a set of packages which 
>> are installed when Julia is installed, and a set of packages which add 
>> their using statement to the .juliarc. To most users this would be 
>> seamless: they would install automatically, and every time you open Julia, 
>> they would import automatically. There are a few issues there:
>>
>>
>>1.  This wouldn't work with building from source. This idea works 
>>better for binaries (this is no biggie since these users are likely more 
>>experienced anyways)
>>2. Julia would have to pick winners and losers.
>>
>> That second part is big: what goes into the standard library? Would all 
>> of the MATLAB functions like linspace, find, etc. go there? Would the 
>> sparse matrix library be included?
>>
>> I think one way to circumvent the second issue would be to allow for 
>> Julia Distributions. A distribution would be defined by:
>>
>>
>>1. A Julia version
>>2. A List of packages to install (with versions?)
>>3. A build script
>>4. A .juliarc
>>
>> The ideal would be for one to be able to make an executable from those 
>> parts which would install the Julia version with the specified packages, 
>> build the packages (and maybe modify some environment variables / 
>> defaults), and add a .juliarc that would automatically import some packages 
>> / maybe define some constants or checkout branches. JuliaLang could then 
>> provide a lean distribution and a "standard distribution" where the 
>> standard distribution is a more curated library which people can fight 
>> about, but it's not as big of a deal if anyone can make their own. This has 
>> many upsides:
>>
>>
>>1. Julia wouldn't have to come with what you don't want.
>>2. Other than some edge cases where the advantages of Base come into 
>>play (I don't know of a good example, but I know there are some things 
>>which can't be defined outside of Base really well, like BigFloats? I'm 
>> not 
>>the expert on this.), most things could spawn out to packages without the 
>>standard user ever noticing.
>>3. There would still be a large set of standard functions you can 
>>assume most people will have.
>>4. You can share Julia setups: for example, with my lab I would share 
>>a distribution that would have all of the JuliaDiffEq packages installed, 
>>along with Plots.jl and some backends, so that way it would be "in the 
>> box 
>>solve differential equations and plot" setup like what MATLAB provides. I 
>>could pick packages/versions that I know work well together, 
>>and guarantee their install will work. 
>>5. You could write tutorials / run workshops which use a 
>>distribution, knowing that a given set of packages will be available.
>>6. Anyone could make their setup match yours by looking at the 
>>distribution setup scripts (maybe just make a base function which runs 
>> that 
>>

[julia-users] Julia Users and First Customers

2016-09-13 Thread Adeel Malik
I would Like to ask few questions about Julia that I could not find it on 
the internet. 

1) Who were the very first few users of Julia ?

2) Who were the industrial customers of Julia when it was first released? 
Who are those industrial customers now?

3) How Julia found more users?

4) How Julia survived against Python and R at first release?

It's not a homework. Anyone can answer these questions or put me at the 
right direction that would be perfect.

Thanks in advance

Regards,
Adeel


[julia-users] How to make generic constructors with no arguments work?

2016-09-13 Thread SZubik
Hi,

For some reason I can't figure out the following, maybe someone encountered 
this before.
For example I have the following type

type A{T}
  item::Nullable{T}
end

and 

A{Int64}(2) 

returns A{Int64}(Nullable(2)) as expected.

However I want to introduce a constructor with no parameters (so by default 
item is null):

type A{T}
  item::Nullable{T}
  function A{T}()
new{T}(Nullable{T}())
  end
end

And I get

WARNING: static parameter T does not occur in signature for call at line 4
The method will not be callable.


As expected, I get some error when try to call it:

A{Int64}()

Error:

LoadError: MethodError: `convert` has no method matching 
convert(::Type{A{Int64}})
This may have arisen from a call to the constructor A{Int64}(...),
since type constructors fall back to convert methods.


If someone could help me make sense out of this, it would be much appreciated :)

Thanks




[julia-users] Cannot add Instruments.jl

2016-09-13 Thread FelixFischer
Hello everybody!

I hope someone here can help me. I'm a Julia-beginner and I have to connect
several devices in the lab with Julia (Serial/GPIB). I searched for packages
and found two: Instruments and SerialPorts. But while adding those, I just
get errors. I tried to Pkg.build("Instruments") but this is the error:

julia> Pkg.build("Instruments")
INFO: Building Instruments
==========================[ ERROR: Instruments ]==========================

LoadError: None of the selected providers can install dependency visa.
Use BinDeps.debug(package_name) to see available providers
while loading /home/iwsatlas1/ffischer/.julia/linux-ubuntu-14.04-x86_64/v0.4/Instruments/deps/build.jl, in expression starting on line 6

============================[ BUILD ERRORS ]==============================

WARNING: Instruments had build errors.

 - packages with build errors remain installed in
   /home/iwsatlas1/ffischer/.julia/linux-ubuntu-14.04-x86_64/v0.4
 - build the package(s) and all dependencies with `Pkg.build("Instruments")`
 - build a single package by running its `deps/build.jl` script

==========================================================================

Any ideas what I can do? Or other ways to control the devices? 

Greetings
Felix F
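
As the error text itself suggests, the next diagnostic step would be BinDeps' 
debug output, which lists the providers it considered for the missing `visa` 
dependency (sketch, using the package name from the post):

```julia
using BinDeps
BinDeps.debug("Instruments")   # shows available providers for each dependency
```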



--
View this message in context: 
http://julia-programming-language.2336112.n4.nabble.com/Cannot-add-Instruments-jl-tp47020.html
Sent from the Julia Users mailing list archive at Nabble.com.


[julia-users] Pkg.add() works fine while Pkg.update() doesn't over https instead of git

2016-09-13 Thread Rahul Mourya
Hi,
I'm using Julia-0.4.6. My machine is behind a firewall, so I configured git 
to use https: git config --global url."https://".insteadOf git://.
Under this setting I'm able to install packages using Pkg.add(); however, 
when I use Pkg.update(), I get the following error:

INFO: Updating METADATA...
Cannot pull with rebase: You have unstaged changes.
Please commit or stash them.
ERROR: failed process: Process(`git pull --rebase -q`, ProcessExited(1)) [1]
 in pipeline_error at process.jl:555
 in run at process.jl:531
 in anonymous at pkg/entry.jl:283
 in withenv at env.jl:160
 in anonymous at pkg/entry.jl:282
 in cd at ./file.jl:22
 in update at ./pkg/entry.jl:272
 in anonymous at pkg/dir.jl:31
 in cd at file.jl:22
 in cd at pkg/dir.jl:31
 in update at ./pkg.jl:45

What could be the reason? Is there any workaround?

Thanks!
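
The message means the local METADATA checkout has uncommitted changes, which 
blocks `git pull --rebase`. Assuming those local changes can be stashed, one 
workaround sketch is:

```julia
# Stash the unstaged changes in METADATA, then retry the update.
cd(Pkg.dir("METADATA")) do
    run(`git stash`)
end
Pkg.update()
```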


[julia-users] Re: Idea: Julia Standard Libraries and Distributions

2016-09-13 Thread Randy Zwitch
"Also, there's a good reason to ask "why fuss with distributions when 
anyone could just add the packages and add the import statements to their 
.juliarc?" (though its target audience is for people who don't know details 
like the .juliarc, but also want Julia to work seamlessly like MATLAB)."

I feel like if you're using MATLAB, it should be a really small step to 
teach about the .juliarc file, as opposed to the amount of 
maintenance/fragmentation that comes along with multiple distributions.

Point #1 makes sense to me, if only because that's a use case that can't 
be accomplished through simple text-file manipulation.

On Tuesday, September 13, 2016 at 4:39:15 AM UTC-4, Chris Rackauckas wrote:
>
> I think one major point of contention when talking about what should be 
> included in Base is due to competing factors:
>
>
>1. Some people would like a "lean Base" for things like embedded 
>installs or other low memory applications
>2. Some people want a MATLAB-like "bells and whistles" approach. This 
>way all the functions they use are just there: no extra packages to 
>find/import.
>3. Some people like having things in Base because it "standardizes" 
>things. 
>4. Putting things in Base constrains their release schedule. 
>5. Putting things in packages outside of JuliaLang helps free up 
>Travis.
>
>
> The last two concerns have been why things like JuliaMath have sprung up 
> to move things out of Base. However, I think there is some credibility to 
> having some form of standardization. I think this can be achieved through 
> some kind of standard library. This would entail a set of packages which 
> are installed when Julia is installed, and a set of packages which add 
> their using statement to the .juliarc. To most users this would be 
> seamless: they would install automatically, and every time you open Julia, 
> they would import automatically. There are a few issues there:
>
>
>1.  This wouldn't work with building from source. This idea works 
>better for binaries (this is no biggie since these users are likely more 
>experienced anyways)
>2. Julia would have to pick winners and losers.
>
> That second part is big: what goes into the standard library? Would all of 
> the MATLAB functions like linspace, find, etc. go there? Would the sparse 
> matrix library be included?
>
> I think one way to circumvent the second issue would be to allow for Julia 
> Distributions. A distribution would be defined by:
>
>
>1. A Julia version
>2. A List of packages to install (with versions?)
>3. A build script
>4. A .juliarc
>
> The ideal would be for one to be able to make an executable from those 
> parts which would install the Julia version with the specified packages, 
> build the packages (and maybe modify some environment variables / 
> defaults), and add a .juliarc that would automatically import some packages 
> / maybe define some constants or checkout branches. JuliaLang could then 
> provide a lean distribution and a "standard distribution" where the 
> standard distribution is a more curated library which people can fight 
> about, but it's not as big of a deal if anyone can make their own. This has 
> many upsides:
>
>
>1. Julia wouldn't have to come with what you don't want.
>2. Other than some edge cases where the advantages of Base come into 
>play (I don't know of a good example, but I know there are some things 
>which can't be defined outside of Base really well, like BigFloats? I'm 
> not 
>the expert on this.), most things could spawn out to packages without the 
>standard user ever noticing.
>3. There would still be a large set of standard functions you can 
>assume most people will have.
>4. You can share Julia setups: for example, with my lab I would share 
>a distribution that would have all of the JuliaDiffEq packages installed, 
>along with Plots.jl and some backends, so that way it would be "in the box 
>solve differential equations and plot" setup like what MATLAB provides. I 
>could pick packages/versions that I know work well together, 
>and guarantee their install will work. 
>5. You could write tutorials / run workshops which use a distribution, 
>knowing that a given set of packages will be available.
>6. Anyone could make their setup match yours by looking at the 
>distribution setup scripts (maybe just make a base function which runs 
> that 
>install since it would all be in Julia). This would be nice for some work 
>in progress projects which require checking out master on 3 different 
>packages, and getting some weird branch for another 5. I would give you a 
>succinct and standardized way to specify an install to get there.
>
>
> Side notes:
>
> [An interesting distribution would be that JuliaGPU could provide a full 
> distribution for which CUDAnative works (since it requires a different 
> Julia install)]
>
> [A 

Re: [julia-users] hashing floating point zeroes

2016-09-13 Thread Steven G. Johnson


On Tuesday, September 13, 2016 at 10:07:49 AM UTC-4, Fengyang Wang wrote:
>
> This is an intuitive explanation, but the mathematics of IEEE floating 
> points seem to be designed so that 0.0 represents a "really small positive 
> number" and -0.0 represents "exact zero" or at least "an even smaller 
> really small negative number"; hence -0.0 + 0.0 = 0.0. I never understood 
> this either.
>

For one thing, the signed zero preserves 1/(1/x) == x, even when x is +Inf 
or -Inf, since 1/-Inf is -0.0 and 1/-0.0 is -Inf.   More generally, when 
there is underflow (numbers get so small they can no longer be 
represented), you lose the value but you don't lose the sign.   Also, the 
sign of zero is useful in evaluating complex-valued functions that have a 
branch cut along the real axis, so that you know which side of the branch 
you are on (see the classic paper Much Ado About Nothing's Sign Bit by Kahan).
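
The identities above can be checked directly at the REPL:

```julia
x = -Inf
1 / (1 / x) == x   # true: 1/-Inf gives -0.0, and 1/-0.0 gives -Inf back
1 / -0.0           # -Inf
1 / 0.0            # Inf
-0.0 == 0.0        # true under ==, even though the bit patterns differ
```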


[julia-users] Re: Vector Field operators (gradient, divergence, curl) in Julia

2016-09-13 Thread Chris Rackauckas
For gradients, check out ForwardDiff. It'll give you really fast 
calculations.

On Tuesday, September 13, 2016 at 4:29:59 AM UTC-7, MLicer wrote:
>
> Dear all,
>
> i am wondering if there exists Julia N-dimensional equivalents to Numpy 
> vector field operators like gradient, divergence and curl, for example:
>
> np.gradient(x)
>
> Thanks so much,
>
> Cheers!
>
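
A minimal sketch of a ForwardDiff gradient, assuming the package is installed 
(the function f here is just an illustration):

```julia
using ForwardDiff

f(x) = sum(abs2, x)   # f(x) = sum of squares

# Exact gradient via forward-mode automatic differentiation.
g = ForwardDiff.gradient(f, [1.0, 2.0, 3.0])   # [2.0, 4.0, 6.0]
```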


Re: [julia-users] code design question – best idiomatic way to define nested types?

2016-09-13 Thread Michael Krabbe Borregaard
"An hour", pfhh, I only wish I was that efficient. I really like it, also
looking very much forward to reading the comments people will make on this.

On Tue, Sep 13, 2016 at 4:16 PM, Tom Breloff  wrote:

> I stole an hour this morning and implemented this: https://github.com/
> tbreloff/ConcreteAbstractions.jl
>
> It's pretty faithful to my earlier description, except that the macros
> names are `@base` and `@extend`.  Comments/criticisms welcome!
>
> On Tue, Sep 13, 2016 at 9:57 AM, Michael Krabbe Borregaard <
> mkborrega...@gmail.com> wrote:
>
>> First, apologies, I didn't mean that you wanted to 'subvert' (
>> http://www.merriam-webster.com/dictionary/subvert) julia. As a
>> non-native speaker the finer nuances of English sometimes slip. The word I
>> was looking for was perhaps 'sidestep'.
>> Also I see your point that as long as the Abstract type cannot be
>> instantiated, the problem with putting supertypes in an array should not be
>> relevant. So perhaps this is actually a nice general solution to the issue.
>>
>
>


Re: [julia-users] code design question – best idiomatic way to define nested types?

2016-09-13 Thread Tom Breloff
I stole an hour this morning and implemented this:
https://github.com/tbreloff/ConcreteAbstractions.jl

It's pretty faithful to my earlier description, except that the macros
names are `@base` and `@extend`.  Comments/criticisms welcome!

On Tue, Sep 13, 2016 at 9:57 AM, Michael Krabbe Borregaard <
mkborrega...@gmail.com> wrote:

> First, apologies, I didn't mean that you wanted to 'subvert' (
> http://www.merriam-webster.com/dictionary/subvert) julia. As a non-native
> speaker the finer nuances of English sometimes slip. The word I was looking
> for was perhaps 'sidestep'.
> Also I see your point that as long as the Abstract type cannot be
> instantiated, the problem with putting supertypes in an array should not be
> relevant. So perhaps this is actually a nice general solution to the issue.
>


Re: [julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Páll Haraldsson


On Monday, September 12, 2016 at 7:01:05 PM UTC, Yichao Yu wrote:
>
> On Sep 12, 2016 2:52 PM, "Páll Haraldsson"  > wrote:
> >
> > On Monday, September 12, 2016 at 11:32:48 AM UTC, Neal Becker wrote:
> >>
> >> Anyone care to make suggestions on this code, how to make it faster, or 
> more 
> >> idiomatic Julia?
> >
> >  
> >
> > It may not matter, but this function:
> >
> > function coef_from_func(func, delta, size)
> >center = float(size-1)/2
> >return [func((i - center)*delta) for i in 0:size-1]
> > end
> >
> > returns Array{Any,1} while this could be better:
> >
> > function coef_from_func(func, delta, size)
> >center = float(size-1)/2
> >return Float64[func((i - center)*delta) for i in 0:size-1]
> > end
> >
> > returns Array{Float64,1} (if not, maybe helpful to know elsewhere).
> >
>
> Not applicable on 0.5
>

Good to know (and confirmed); I guess that means 0.4 is slower (but gives 
correct results) with the former. Not with the latter, but then you are less 
generic. It seems Compat.jl would not get you out of that dilemma.



Re: [julia-users] hashing floating point zeroes

2016-09-13 Thread Fengyang Wang
This is an intuitive explanation, but the mathematics of IEEE floating 
points seem to be designed so that 0.0 represents a "really small positive 
number" and -0.0 represents "exact zero" or at least "an even smaller 
really small negative number"; hence -0.0 + 0.0 = 0.0. I never understood 
this either.

On Saturday, July 9, 2016 at 7:40:58 AM UTC-4, Tom Breloff wrote:
>
> Yes. They are different numbers. In a way, negative zero represents "a 
> really small negative number" that can't be represented exactly using 
> floating point. 
>
> On Saturday, July 9, 2016, Davide Lasagna  > wrote:
>
>> Hi, 
>>
>> I have just been bitten by a function hashing a custom type containing a 
>> vector of floats. It turns out that hashing positive and negative floating 
>> point zeros returns different hashes. 
>>
>> Demo:
>> julia> hash(-0.0)
>> 0x3be7d0f7780de548
>>
>> julia> hash(0.0)
>> 0x77cfa1eef01bca90
>>
>> julia> hash(0)
>> 0x77cfa1eef01bca90
>>
>> Is this expected behaviour?
>>
>>

[julia-users] Re: Idea: Julia Standard Libraries and Distributions

2016-09-13 Thread Páll Haraldsson
On Tuesday, September 13, 2016 at 8:39:15 AM UTC, Chris Rackauckas wrote:
>
>
> This could be a terrible idea, I don't know.
>
 
I don't think so.
 
On the download page there is already a choice of "distributions", if you 
will (two extra, if JuliaBox is counted as one):
"We provide several ways for you to run Julia:

   - In the terminal using the built-in Julia command line.
   - The Juno  integrated development environment 
   (IDE).
   - In the browser on JuliaBox.com" 

That is, Juno includes Julia (from memory), so you could have a 
combinatorial explosion: core packages in a distribution x possible IDEs 
x possible debuggers.

I'm all for a MATLAB-like distribution that can compete, with Juno I guess 
(JuliaComputing promotes Eclipse IDE integration). I hope people can 
agree on something and avoid a combinatorial explosion.

For myself, I really like that julia-lite is available (unofficially).

-- 
Palli.




Re: [julia-users] code design question – best idiomatic way to define nested types?

2016-09-13 Thread Michael Krabbe Borregaard
First, apologies, I didn't mean that you wanted to 'subvert' (
http://www.merriam-webster.com/dictionary/subvert) julia. As a non-native
speaker the finer nuances of English sometimes slip. The word I was looking
for was perhaps 'sidestep'.
Also I see your point that as long as the Abstract type cannot be
instantiated, the problem with putting supertypes in an array should not be
relevant. So perhaps this is actually a nice general solution to the issue.


[julia-users] Re: Idea: Julia Standard Libraries and Distributions

2016-09-13 Thread mmh
What about building a sysimage for these packages outside of Base?

On Tuesday, September 13, 2016 at 4:39:15 AM UTC-4, Chris Rackauckas wrote:
>
> I think one major point of contention when talking about what should be 
> included in Base is due to competing factors:
>
>
>1. Some people would like a "lean Base" for things like embedded 
>installs or other low memory applications
>2. Some people want a MATLAB-like "bells and whistles" approach. This 
>way all the functions they use are just there: no extra packages to 
>find/import.
>3. Some people like having things in Base because it "standardizes" 
>things. 
>4. Putting things in Base constrains their release schedule. 
>5. Putting things in packages outside of JuliaLang helps free up 
>Travis.
>
>
> The last two concerns have been why things like JuliaMath have sprung up 
> to move things out of Base. However, I think there is some credibility to 
> having some form of standardization. I think this can be achieved through 
> some kind of standard library. This would entail a set of packages which 
> are installed when Julia is installed, and a set of packages which add 
> their using statement to the .juliarc. To most users this would be 
> seamless: they would install automatically, and every time you open Julia, 
> they would import automatically. There are a few issues there:
>
>
>1.  This wouldn't work with building from source. This idea works 
>better for binaries (this is no biggie since these users are likely more 
>experienced anyways)
>2. Julia would have to pick winners and losers.
>
> That second part is big: what goes into the standard library? Would all of 
> the MATLAB functions like linspace, find, etc. go there? Would the sparse 
> matrix library be included?
>
> I think one way to circumvent the second issue would be to allow for Julia 
> Distributions. A distribution would be defined by:
>
>
>1. A Julia version
>2. A List of packages to install (with versions?)
>3. A build script
>4. A .juliarc
>
> The ideal would be for one to be able to make an executable from those 
> parts which would install the Julia version with the specified packages, 
> build the packages (and maybe modify some environment variables / 
> defaults), and add a .juliarc that would automatically import some packages 
> / maybe define some constants or checkout branches. JuliaLang could then 
> provide a lean distribution and a "standard distribution" where the 
> standard distribution is a more curated library which people can fight 
> about, but it's not as big of a deal if anyone can make their own. This has 
> many upsides:
>
>
>1. Julia wouldn't have to come with what you don't want.
>2. Other than some edge cases where the advantages of Base come into 
>play (I don't know of a good example, but I know there are some things 
>which can't be defined outside of Base really well, like BigFloats? I'm 
> not 
>the expert on this.), most things could spawn out to packages without the 
>standard user ever noticing.
>3. There would still be a large set of standard functions you can 
>assume most people will have.
>4. You can share Julia setups: for example, with my lab I would share 
>a distribution that would have all of the JuliaDiffEq packages installed, 
>along with Plots.jl and some backends, so that way it would be "in the box 
>solve differential equations and plot" setup like what MATLAB provides. I 
>could pick packages/versions that I know work well together, 
>and guarantee their install will work. 
>5. You could write tutorials / run workshops which use a distribution, 
>knowing that a given set of packages will be available.
>6. Anyone could make their setup match yours by looking at the 
>distribution setup scripts (maybe just make a base function which runs 
> that 
>install since it would all be in Julia). This would be nice for some work 
>in progress projects which require checking out master on 3 different 
>packages, and getting some weird branch for another 5. I would give you a 
>succinct and standardized way to specify an install to get there.
>
>
> Side notes:
>
> [An interesting distribution would be that JuliaGPU could provide a full 
> distribution for which CUDAnative works (since it requires a different 
> Julia install)]
>
> [A "Data Science Distribution" would be a cool idea: you'd never want to 
> include all of the plotting and statistical things inside of Base, but 
> someone pointing out what all of the "good" packages are that play nice 
> with each other would be very helpful.]
>
> [What if the build script could specify a library path, so that way it can 
> install a setup which doesn't interfere with a standard Julia install?]
>
> This is not without downsides. Indeed, one place where you can look is 
> Python. Python has distributions, but one problem with them is that 
> packages don't 

[julia-users] walkdir

2016-09-13 Thread adrian_lewis
Both the functions pwd() and walkdir() are defined in the filesystem 
package, but:

julia> VERSION
v"0.4.6"

julia> pwd()
"/Users/aidy"

julia> walkdir(".")
ERROR: UndefVarError: walkdir not defined

Aidy
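
For reference, walkdir was only added in Julia 0.5, which is why it is 
undefined on 0.4.6. On 0.5 and later, a typical use looks like:

```julia
# Recursively list every file under the current directory.
for (root, dirs, files) in walkdir(".")
    for file in files
        println(joinpath(root, file))
    end
end
```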


Re: [julia-users] Re: equivalent of numpy newaxis?

2016-09-13 Thread DNF
Oh, yeah. I forgot that .+, .-, etc. are not along for the ride yet. I 
understand it is pretty much ready for inclusion in 0.6.

There is also one extra allocation, namely y.', which would not be 
necessary in a loop. But this is hardly worse than numpy, right?


On Tuesday, September 13, 2016 at 2:42:35 PM UTC+2, Mauro wrote:
>
> On Tue, 2016-09-13 at 14:26, Neal Becker  
> wrote: 
> > So you're saying that abs2.(x .- y.') will not allocate a 2d array and 
> then 
> > pass to abs2?  That's great!  But how would I know that? 
>
> The operators do not do the fusing yet, check right at the bottom of the 
> linked manual section.  I think you can work around it by using their 
> functional form: 
>
> x .+ y # not fused 
> (+).(x,y) # fused 
>
> So: 
>
> out .= abs2.((-).(x, y.')) 
>
> > DNF wrote: 
> > 
> >> For your particular example, it looks like what you want is (and I am 
> just 
> >> guessing what mag_sqr means): 
> >> dist = abs2.(x .- y.') 
> >> The performance should be the similar to a hand-written loop on version 
> >> 0.5. 
> >> 
> >> You can read about it here: 
> >> 
> http://docs.julialang.org/en/release-0.5/manual/functions/#dot-syntax-for-vectorizing-functions
>  
> >> 
> >> 
> >> On Monday, September 12, 2016 at 9:29:15 PM UTC+2, Neal Becker wrote: 
> >>> 
> >>> Some time ago I asked this question 
> >>> 
> >>> 
> http://stackoverflow.com/questions/25486506/julia-broadcasting-equivalent-of-numpy-newaxis
>  
> >>> 
> >>> As a more interesting example, here is some real python code I use: 
> >>> dist = mag_sqr (demod_out[:,np.newaxis] - const.map[np.newaxis,:]) 
> >>> 
> >>> where demod_out, const.map are each vectors, mag_sqr performs 
> >>> element-wise euclidean distance, and the result is a 2D array whose 
> 1st 
> >>> axis matches the 
> >>> 1st axis of demod_out, and the 2nd axis matches the 2nd axis of 
> >>> const.map. 
> >>> 
> >>> 
> >>> From the answers I've seen, julia doesn't really have an equivalent 
> >>> functionality.  The idea here is, without allocating a new array, 
> >>> manipulate 
> >>> the strides to cause broadcasting. 
> >>> 
> >>> AFAICT, the best for Julia would be just forget the vectorized code, 
> and 
> >>> explicitly write out loops to perform the computation.  OK, I guess, 
> but 
> >>> maybe not as readable. 
> >>> 
> >>> Is there any news on this front? 
> >>> 
> >>> 
>
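
Putting the pieces above together, a 0.5-era sketch (the vectors x, y and the 
preallocated out are illustrative):

```julia
x = [1.0, 2.0]
y = [3.0, 4.0, 5.0]
out = Array(Float64, length(x), length(y))   # 0.5-era constructor

# Functional forms fuse on 0.5: one loop fills out, no 2-d temporary.
out .= abs2.((-).(x, y.'))
```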


[julia-users] Julia TCP server and connection

2016-09-13 Thread Karli Kund
Hi,

I posted a question on Stack Overflow that got no answers: 
http://stackoverflow.com/questions/39448808/julia-tcp-server-and-connection 
Maybe someone here can help.


[julia-users] Re: equivalent of numpy newaxis?

2016-09-13 Thread Neal Becker
So you're saying that abs2.(x .- y.') will not allocate a 2d array and then 
pass to abs2?  That's great!  But how would I know that?

DNF wrote:

> For your particular example, it looks like what you want is (and I am just
> guessing what mag_sqr means):
> dist = abs2.(x .- y.')
> The performance should be the similar to a hand-written loop on version
> 0.5.
> 
> You can read about it here:
> http://docs.julialang.org/en/release-0.5/manual/functions/#dot-syntax-for-vectorizing-functions
> 
> 
> On Monday, September 12, 2016 at 9:29:15 PM UTC+2, Neal Becker wrote:
>>
>> Some time ago I asked this question
>>
>> http://stackoverflow.com/questions/25486506/julia-broadcasting-equivalent-of-numpy-newaxis
>>
>> As a more interesting example, here is some real python code I use:
>> dist = mag_sqr (demod_out[:,np.newaxis] - const.map[np.newaxis,:])
>>
>> where demod_out, const.map are each vectors, mag_sqr performs
>> element-wise euclidean distance, and the result is a 2D array whose 1st
>> axis matches the
>> 1st axis of demod_out, and the 2nd axis matches the 2nd axis of
>> const.map.
>>
>>
>> From the answers I've seen, julia doesn't really have an equivalent
>> functionality.  The idea here is, without allocating a new array,
>> manipulate
>> the strides to cause broadcasting.
>>
>> AFAICT, the best for Julia would be just forget the vectorized code, and
>> explicitly write out loops to perform the computation.  OK, I guess, but
>> maybe not as readable.
>>
>> Is there any news on this front?
>>
>>




Re: [julia-users] code design question – best idiomatic way to define nested types?

2016-09-13 Thread Tom Breloff
To be clear...I'm not trying to subvert anything! Just to make it easier
and more natural to choose the best option. :)  if it wasn't clear from my
example, AbstractFoo would NOT be a concrete type, and couldn't be
constructed.

On Tuesday, September 13, 2016, Michael Borregaard 
wrote:

> Thanks for the enlightening discussion. The emerging consensus is to use
> example #2, but perhaps use macros to make the syntax easier to read and
> maintain. Alternatively, it looks like my idea with having a FoobarData
> object as a field would do the job (but would require
> foobar.foobardata.bazbaz syntax for accessing fields, of course).
>
> It is also interesting to see that there are divergent views. It seems to
> me, for example, that Tom Breloff's macro syntax would subvert the
> inheritance design decision that Stefan Karpinski described, by combining
> the abstract type with the concrete type?
>


[julia-users] Re: Vector Field operators (gradient, divergence, curl) in Julia

2016-09-13 Thread Steven G. Johnson
See https://github.com/JuliaLang/julia/issues/16113


[julia-users] Vector Field operators (gradient, divergence, curl) in Julia

2016-09-13 Thread MLicer
Dear all,

i am wondering if there exists Julia N-dimensional equivalents to Numpy 
vector field operators like gradient, divergence and curl, for example:

np.gradient(x)

Thanks so much,

Cheers!


[julia-users] Re: Idea: Julia Standard Libraries and Distributions

2016-09-13 Thread Steven Sagaert
I'm in favor of this. In fact, I asked for the same thing 
in https://groups.google.com/forum/#!topic/julia-users/3g8zXaXfQqk, although 
in a more cryptic way :)

BTW: Java already has something like this: next to the two big standard 
distributions, Java SE and Java EE (there's also a third, specialized Java ME, 
but that one is paid and incompatible), there are now more fine-grained 
distributions called "profiles". In Java 9, with modules, it will be even 
easier to create more profiles/distributions.


On Tuesday, September 13, 2016 at 10:39:15 AM UTC+2, Chris Rackauckas wrote:
>
> I think one major point of contention when talking about what should be 
> included in Base is due to competing factors:
>
>
>1. Some people would like a "lean Base" for things like embedded 
>installs or other low memory applications
>2. Some people want a MATLAB-like "bells and whistles" approach. This 
>way all the functions they use are just there: no extra packages to 
>find/import.
>3. Some people like having things in Base because it "standardizes" 
>things. 
>4. Putting things in Base constrains their release schedule. 
>5. Putting things in packages outside of JuliaLang helps free up 
>Travis.
>
>
> The last two concerns have been why things like JuliaMath have sprung up 
> to move things out of Base. However, I think there is some credibility to 
> having some form of standardization. I think this can be achieved through 
> some kind of standard library. This would entail a set of packages which 
> are installed when Julia is installed, and a set of packages which add 
> their using statement to the .juliarc. To most users this would be 
> seamless: they would install automatically, and every time you open Julia, 
> they would import automatically. There are a few issues there:
>
>
>1.  This wouldn't work with building from source. This idea works 
>better for binaries (this is no biggie since these users are likely more 
>experienced anyways)
>2. Julia would have to pick winners and losers.
>
> That second part is big: what goes into the standard library? Would all of 
> the MATLAB functions like linspace, find, etc. go there? Would the sparse 
> matrix library be included?
>
> I think one way to circumvent the second issue would be to allow for Julia 
> Distributions. A distribution would be defined by:
>
>
>1. A Julia version
>2. A List of packages to install (with versions?)
>3. A build script
>4. A .juliarc
>
> The ideal would be for one to be able to make an executable from those 
> parts which would install the Julia version with the specified packages, 
> build the packages (and maybe modify some environment variables / 
> defaults), and add a .juliarc that would automatically import some packages 
> / maybe define some constants or checkout branches. JuliaLang could then 
> provide a lean distribution and a "standard distribution" where the 
> standard distribution is a more curated library which people can fight 
> about, but it's not as big of a deal if anyone can make their own. This has 
> many upsides:
>
>
>1. Julia wouldn't have to come with what you don't want.
>2. Other than some edge cases where the advantages of Base come into 
>play (I don't know of a good example, but I know there are some things 
>which can't be defined outside of Base really well, like BigFloats? I'm 
> not 
>the expert on this.), most things could spawn out to packages without the 
>standard user ever noticing.
>3. There would still be a large set of standard functions you can 
>assume most people will have.
>4. You can share Julia setups: for example, with my lab I would share 
>a distribution that would have all of the JuliaDiffEq packages installed, 
>along with Plots.jl and some backends, so that way it would be "in the box 
>solve differential equations and plot" setup like what MATLAB provides. I 
>could pick packages/versions that I know work well together, 
>and guarantee their install will work. 
>5. You could write tutorials / run workshops which use a distribution, 
>knowing that a given set of packages will be available.
>6. Anyone could make their setup match yours by looking at the 
>distribution setup scripts (maybe just make a base function which runs 
> that 
>install since it would all be in Julia). This would be nice for some work 
>in progress projects which require checking out master on 3 different 
>packages, and getting some weird branch for another 5. I would give you a 
>succinct and standardized way to specify an install to get there.
>
>
> Side notes:
>
> [An interesting distribution would be that JuliaGPU could provide a full 
> distribution for which CUDAnative works (since it requires a different 
> Julia install)]
>
> [A "Data Science Distribution" would be a cool idea: you'd never want to 
> include all of the plotting and statistical things inside of 

[julia-users] Idea: Julia Standard Libraries and Distributions

2016-09-13 Thread Chris Rackauckas
I think one major point of contention when talking about what should be 
included in Base is due to competing factors:


   1. Some people would like a "lean Base" for things like embedded 
   installs or other low memory applications
   2. Some people want a MATLAB-like "bells and whistles" approach. This 
   way all the functions they use are just there: no extra packages to 
   find/import.
   3. Some people like having things in Base because it "standardizes" 
   things. 
   4. Putting things in Base constrains their release schedule. 
   5. Putting things in packages outside of JuliaLang helps free up Travis.


The last two concerns have been why things like JuliaMath have sprung up to 
move things out of Base. However, I think there is some credibility to 
having some form of standardization. I think this can be achieved through 
some kind of standard library. This would entail a set of packages which 
are installed when Julia is installed, and a set of packages which add 
their using statement to the .juliarc. To most users this would be 
seamless: they would install automatically, and every time you open Julia, 
they would import automatically. There are a few issues there:


   1.  This wouldn't work with building from source. This idea works better 
   for binaries (this is no biggie since these users are likely more 
   experienced anyways)
   2. Julia would have to pick winners and losers.

That second part is big: what goes into the standard library? Would all of 
the MATLAB functions like linspace, find, etc. go there? Would the sparse 
matrix library be included?

I think one way to circumvent the second issue would be to allow for Julia 
Distributions. A distribution would be defined by:


   1. A Julia version
   2. A List of packages to install (with versions?)
   3. A build script
   4. A .juliarc

The ideal would be for one to be able to make an executable from those 
parts which would install the Julia version with the specified packages, 
build the packages (and maybe modify some environment variables / 
defaults), and add a .juliarc that would automatically import some packages 
/ maybe define some constants or checkout branches. JuliaLang could then 
provide a lean distribution and a "standard distribution" where the 
standard distribution is a more curated library which people can fight 
about, but it's not as big of a deal if anyone can make their own. This has 
many upsides:


   1. Julia wouldn't have to come with what you don't want.
   2. Other than some edge cases where the advantages of Base come into 
   play (I don't have a good example, but some things are hard to define 
   well outside of Base; BigFloat, perhaps? I'm not the expert on this), 
   most things could move out to packages without the standard user ever 
   noticing.
   3. There would still be a large set of standard functions you can assume 
   most people will have.
   4. You can share Julia setups: for example, with my lab I would share a 
   distribution that has all of the JuliaDiffEq packages installed, along 
   with Plots.jl and some backends, giving an out-of-the-box "solve 
   differential equations and plot" setup like what MATLAB provides. I 
   could pick packages/versions that I know work well together, and 
   guarantee their install will work. 
   5. You could write tutorials / run workshops which use a distribution, 
   knowing that a given set of packages will be available.
   6. Anyone could make their setup match yours by looking at the 
   distribution setup scripts (maybe just add a base function which runs 
   that install, since it would all be in Julia). This would be nice for 
   work-in-progress projects which require checking out master on 3 
   different packages and getting some weird branch for another 5: it 
   would give you a succinct and standardized way to specify an install to 
   get there.
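
To make the idea concrete, a distribution spec could be plain Julia data 
plus an install function. Everything below (the type, its fields, the 
package versions) is hypothetical; no such API exists:

```julia
# Hypothetical distribution spec; none of these names exist today.
immutable JuliaDistribution
    julia_version::VersionNumber
    packages::Vector{Tuple{String,VersionNumber}}  # (name, version) pairs
    build_script::String   # path to a build script, run after install
    juliarc::String        # path to a .juliarc installed for the user
end

diffeq_dist = JuliaDistribution(
    v"0.5.0",
    [("DifferentialEquations", v"0.4.0"), ("Plots", v"0.9.0")],
    "build.jl",
    "juliarc.jl")

# Installing could then be a plain Julia function anyone can read/audit:
function install(d::JuliaDistribution)
    for (name, ver) in d.packages
        Pkg.add(name)
        Pkg.pin(name, ver)   # pin to the version the curator tested
    end
    include(d.build_script)
end
```

Because the spec is ordinary Julia, "making your setup match mine" reduces 
to running one function on one value, which anyone can inspect first.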


Side notes:

[An interesting distribution would be that JuliaGPU could provide a full 
distribution for which CUDAnative works (since it requires a different 
Julia install)]

[A "Data Science Distribution" would be a cool idea: you'd never want to 
include all of the plotting and statistical things inside of Base, but 
someone pointing out what all of the "good" packages are that play nice 
with each other would be very helpful.]

[What if the build script could specify a library path, so that way it can 
install a setup which doesn't interfere with a standard Julia install?]

This is not without downsides. Indeed, one place to look is Python. Python 
has distributions, but one problem is that packages don't tend to play 
nice with all of them, which leads to fragmentation in the package sphere. 
Also, since it's not explicit where the packages come from, it may be 
harder to find documentation (however, maybe Documenter.jl automatically 
adding links to the documentation in docstrings could fix this?).

Re: [julia-users] code design question – best idiomatic way to define nested types?

2016-09-13 Thread Michael Borregaard
Thanks for the enlightening discussion. The emerging consensus is to use 
example #2, but perhaps use macros to make the syntax easier to read and 
maintain. Alternatively, it looks like my idea of having a FoobarData 
object as a field would do the job (though it would require 
foobar.foobardata.bazbaz syntax for accessing fields, of course).

It is also interesting to see that there are divergent views. It seems to 
me, for example, that Tom Breloff's macro syntax would subvert the 
inheritance design decision that Stefan Karpinski described, by combining 
the abstract type with the concrete type?


[julia-users] Re: Workflow question - reloading modules

2016-09-13 Thread Michael Borregaard
Maybe this is useful: 
https://github.com/JunoLab/atom-julia-client/blob/master/manual/workflow.md


[julia-users] Re: Slow Performance Compared to MATLAB

2016-09-13 Thread Michael Borregaard
That is a pretty massive script to ask people to look at for performance 
:-) Try profiling it, identify the most expensive code, and post just 
that; it will be much easier to give feedback on.
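
For instance, a profiling session might look like this. This is a minimal 
sketch: `hot_loop` is a hypothetical stand-in for the expensive part of 
the script, and on Julia 0.5 the profiler lives in `Base` rather than a 
separate `Profile` package:

```julia
using Profile   # on Julia 0.5, `@profile` is available from Base directly

# Hypothetical stand-in for the expensive part of the script:
function hot_loop(n)
    s = 0.0
    for i in 1:n
        s += sqrt(i) * sin(i)
    end
    s
end

hot_loop(10)             # run once so compilation isn't what gets profiled
@profile hot_loop(10^7)  # collect samples while the real work runs
Profile.print()          # the deepest, hottest frames are the lines to post
```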


Re: [julia-users] Strange behavior of push! and pop! for an array of array elements

2016-09-13 Thread Tamas Papp
No need, you were not rude in any way.

On Tue, Sep 13 2016, Michele Zaffalon wrote:

> Apologies, I did not want to insult or be rude. Thank you again for the
> clear explanation.
>
> On Tue, Sep 13, 2016 at 8:38 AM, Tamas Papp  wrote:
>
>> Please don't put words in my mouth, I did not say that. In general, I
>> find "use case" an elusive concept. I prefer simple building blocks with
>> clear semantics that I can combine easily to solve problems.
>>
>> Also, whether something "makes sense" is also somewhat subjective and
>> depends on your expectations and prior experience. Coming from, say,
>> Common Lisp, Julia's semantics in this case make perfect sense. Coming
>> from other languages, you may find it surprising, but that's always part
>> of learning a new language. My own preference is to write quite a bit of
>> code in a language before commenting on whether certain features "make
>> sense", but YMMV.
>>
>> On Tue, Sep 13 2016, Michele Zaffalon wrote:
>>
>> > Thank you for your explanation.
>> >
>> > In practice you are saying that consistency has led to this consequence
>> > even though there is no use case, and therefore it makes little sense? I
>> am
>> > not trying to provoke, it is that I find it easier to internalize the
>> > concept, once I know the reason behind that concept.
>> >
>> >
>> > On Tue, Sep 13, 2016 at 7:24 AM, Tamas Papp  wrote:
>> >
>> >> Fill behaves this way not because of a specific design choice based on a
>> >> compelling use case, but because of consistency with other language
>> >> features. fill does not copy, and arrays are passed by reference in
>> >> Julia, consequently you have the behavior described below.
>> >>
>> >> IMO it is best to learn about this and internalize the fact that arrays
>> >> and structures are passed by reference. The alternative would be some
>> >> DWIM-style solution where fill tries to figure out whether to copy its
>> >> first argument or not, which would be a mess.
>> >>
>> >> On Tue, Sep 13 2016, Michele Zaffalon wrote:
>> >>
>> >> > I have been bitten by this myself. Is there a use case for having
>> >> > an array filled with references to the same object? Why would one
>> >> > want this behaviour?
>> >> >
>> >> > On Tue, Sep 13, 2016 at 4:45 AM, Yichao Yu  wrote:
>> >> >
>> >> >>
>> >> >>
>> >> >> On Mon, Sep 12, 2016 at 10:33 PM, Zhilong Liu <
>> lzl200102...@gmail.com>
>> >> >> wrote:
>> >> >>
>> >> >>> Hello all,
>> >> >>>
>> >> >>> I am pretty new to Julia, and I am trying to perform push and pop
>> >> inside
>> >> >>> an array of 1D array elements. For example, I created the following
>> >> array
>> >> >>> with 1000 empty arrays.
>> >> >>>
>> >> >>> julia> vring = fill([], 1000)
>> >> >>>
>> >> >>
>> >> >>
>> >> >> This creates an array with 1000 identical objects; if you want to
>> >> >> make them different (but initially equal) objects, you can use
>> >> >> `[[] for i in 1:1000]`
>> >> >>
>> >> >>>
>> >> >>> Then, when I push an element to vring[2],
>> >> >>>
>> >> >>>
>> >> >>> julia> push!(vring[2],1)
>> >> >>>
>> >> >>>
>> >> >>> I got the following result. Every array element inside vring gets
>> the
>> >> >>> value 1. But I only want the 1 to be pushed to the 2nd array element
>> >> >>> inside vring. Anybody knows how to do that efficiently?
>> >> >>>
>> >> >>>
>> >> >>> julia> vring
>> >> >>>
>> >> >>> 1000x1 Array{Array{Any,1},2}:
>> >> >>>
>> >> >>>  Any[1]
>> >> >>>
>> >> >>>  Any[1]
>> >> >>>
>> >> >>>  Any[1]
>> >> >>>
>> >> >>>  Any[1]
>> >> >>>
>> >> >>>  Any[1]
>> >> >>>
>> >> >>>  Any[1]
>> >> >>>
>> >> >>>  Any[1]
>> >> >>>
>> >> >>>  Any[1]
>> >> >>>
>> >> >>>  Any[1]
>> >> >>>
>> >> >>>  Any[1]
>> >> >>>
>> >> >>>  ⋮
>> >> >>>
>> >> >>>  Any[1]
>> >> >>>
>> >> >>>  Any[1]
>> >> >>>
>> >> >>>  Any[1]
>> >> >>>
>> >> >>>  Any[1]
>> >> >>>
>> >> >>>  Any[1]
>> >> >>>
>> >> >>>  Any[1]
>> >> >>>
>> >> >>>  Any[1]
>> >> >>>
>> >> >>>  Any[1]
>> >> >>>
>> >> >>>  Any[1]
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>> Thanks!
>> >> >>>
>> >> >>> Zhilong Liu
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>
>> >>
>> >>
>>
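
The aliasing that Yichao describes, and the comprehension fix, can be seen 
directly in a few lines (a minimal sketch; `vring2` is an illustrative 
name):

```julia
# fill reuses one array object for every slot:
vring = fill([], 1000)
push!(vring[2], 1)
vring[1] == Any[1]    # true: all 1000 slots alias the same array

# a comprehension builds an independent (initially equal) array per slot:
vring2 = [[] for _ in 1:1000]
push!(vring2[2], 1)
vring2[1] == Any[]    # true: only slot 2 changed
```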


Re: [julia-users] Re: Julia Low Pass Filter much slower than identical code in Python ??

2016-09-13 Thread Matjaz Licer
Makes sense, thanks :-)

On 12 September 2016 at 17:17, Stefan Karpinski 
wrote:

> JIT = Just In Time, i.e. the first time you use the code.
>
> On Mon, Sep 12, 2016 at 6:52 AM, MLicer  wrote:
>
>> Indeed it does! I thought JIT compilation takes place prior to execution
>> of the script. Thanks so much, this makes sense now!
>>
>> Output:
>> first call:   0.804573 seconds (1.18 M allocations: 53.183 MB, 1.43% gc
>> time)
>> repeated call:  0.000472 seconds (217 allocations: 402.938 KB)
>>
>> Thanks again,
>>
>> Cheers!
>>
>>
>> On Monday, September 12, 2016 at 12:48:30 PM UTC+2, randm...@gmail.com
>> wrote:
>>>
>>> The Julia code takes 0.000535 seconds for me on the second run -- during
>>> the first run, Julia has to compile the method you're timing. Have a look
>>> at the performance tips
>>> 
>>> for a more in depth explanation.
>>>
>>> Am Montag, 12. September 2016 11:53:01 UTC+2 schrieb MLicer:

 Dear all,

 I've written a low-pass filter in Julia and Python, and the code in
 Julia seems to be much slower (*0.800 sec in Julia vs 0.000 sec in
 Python*). I *must* be coding inefficiently; can anyone comment on the
 two codes below?

 *Julia:*


 
 using PyPlot, DSP

 # generate data:
 x = linspace(0,30,1e4)
 sin_noise(arr) = sin(arr) + rand(length(arr))

 # create filter:
 designmethod = Butterworth(5)
 ff = digitalfilter(Lowpass(0.02),designmethod)
 @time yl = filtfilt(ff, sin_noise(x))

 Python:

 from scipy import signal
 import numpy as np
 import cProfile, pstats

 def sin_noise(arr):
     return np.sin(arr) + np.random.rand(len(arr))

 def filterSignal(b, a, x):
     return signal.filtfilt(b, a, x, axis=-1)

 def main():
     # generate data:
     x = np.linspace(0, 30, 1e4)
     y = sin_noise(x)
     b, a = signal.butter(5, 0.02, "lowpass", analog=False)
     ff = filterSignal(b, a, y)

     cProfile.runctx('filterSignal(b,a,y)', globals(), {'b': b, 'a': a, 'y': y},
                     filename='profileStatistics')

     p = pstats.Stats('profileStatistics')
     printFirstN = 5
     p.sort_stats('cumtime').print_stats(printFirstN)

 if __name__ == "__main__":
     main()


 Thanks very much for any replies!

>>>
>
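
The compile-on-first-call behavior discussed above is easy to see by 
timing the same call twice (a minimal sketch; `f` is a hypothetical 
stand-in for the `filtfilt` call being timed):

```julia
# Timing the same call twice separates compilation cost from run cost:
f(x) = sum(sin, x)   # hypothetical stand-in for the expensive call
x = rand(10^4)
@time f(x)   # first call: includes JIT compilation, allocations are high
@time f(x)   # second call: measures only execution
```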


Re: [julia-users] Strange behavior of push! and pop! for an array of array elements

2016-09-13 Thread Michele Zaffalon
Apologies, I did not want to insult or be rude. Thank you again for the
clear explanation.

On Tue, Sep 13, 2016 at 8:38 AM, Tamas Papp  wrote:

> Please don't put words in my mouth, I did not say that. In general, I
> find "use case" an elusive concept. I prefer simple building blocks with
> clear semantics that I can combine easily to solve problems.
>
> Also, whether something "makes sense" is also somewhat subjective and
> depends on your expectations and prior experience. Coming from, say,
> Common Lisp, Julia's semantics in this case make perfect sense. Coming
> from other languages, you may find it surprising, but that's always part
> of learning a new language. My own preference is to write quite a bit of
> code in a language before commenting on whether certain features "make
> sense", but YMMV.


Re: [julia-users] Strange behavior of push! and pop! for an array of array elements

2016-09-13 Thread Tamas Papp
Please don't put words in my mouth, I did not say that. In general, I
find "use case" an elusive concept. I prefer simple building blocks with
clear semantics that I can combine easily to solve problems.

Also, whether something "makes sense" is also somewhat subjective and
depends on your expectations and prior experience. Coming from, say,
Common Lisp, Julia's semantics in this case make perfect sense. Coming
from other languages, you may find it surprising, but that's always part
of learning a new language. My own preference is to write quite a bit of
code in a language before commenting on whether certain features "make
sense", but YMMV.

On Tue, Sep 13 2016, Michele Zaffalon wrote:

> Thank you for your explanation.
>
> In practice you are saying that consistency has led to this consequence
> even though there is no use case, and therefore it makes little sense? I am
> not trying to provoke, it is that I find it easier to internalize the
> concept, once I know the reason behind that concept.


Re: [julia-users] Strange behavior of push! and pop! for an array of array elements

2016-09-13 Thread Michele Zaffalon
Thank you for your explanation.

In practice you are saying that consistency has led to this consequence
even though there is no use case, and therefore it makes little sense? I am
not trying to provoke, it is that I find it easier to internalize the
concept, once I know the reason behind that concept.


On Tue, Sep 13, 2016 at 7:24 AM, Tamas Papp  wrote:

> Fill behaves this way not because of a specific design choice based on a
> compelling use case, but because of consistency with other language
> features. fill does not copy, and arrays are passed by reference in
> Julia, consequently you have the behavior described below.
>
> IMO it is best to learn about this and internalize the fact that arrays
> and structures are passed by reference. The alternative would be some
> DWIM-style solution where fill tries to figure out whether to copy its
> first argument or not, which would be a mess.