Re: [julia-users] errorbar plot in Plots.jl

2016-10-14 Thread franckhertz16
Beautiful, thank you!

On Saturday, October 15, 2016 at 12:06:39 AM UTC+2, Tom Breloff wrote:
>
> Already implemented as attributes: yerror and xerror
>
> On Friday, October 14, 2016,  wrote:
>
>> I have just discovered Plots, which I use mostly with the PlotlyJS 
>> backend because of the inline interactivity in Juliabox, and I have been 
>> spoiled by it: thank you Tom and Spencer.
>>
>> I need an errorbar plot (something like 
>> http://matplotlib.org/examples/statistics/errorbar_demo_features.html): 
>> is there any plan to add it? I could use the OHLC plot, but I cannot figure 
>> out how to remove the horizontal bars.
>>
>>

[julia-users] Re: Parallel file access

2016-10-14 Thread Ralph Smith
There are good synchronization primitives for Tasks, and a bit for threads, 
but not much for parallel processes. (One could use named system semaphores 
on Linux and Windows, but there's no Julia wrapper yet AFAIK.)
I also found management of parallel processes confusing, and good 
nontrivial examples are not obvious.  So here's my humble offering:
https://gist.github.com/RalphAS/2a37b1c631923efa30ac4a7f02a2ee9d

(I just happen to be working on some synchronization problems this month; 
if a real expert has a better solution, let's hear it.)
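
For what it's worth, here is a minimal sketch (not from the gist above; names like `makelock`/`acquire`/`release` are illustrative) of a cross-process lock built on a RemoteChannel: a channel of capacity one holds a single token, `take!` blocks until the token is free, and `put!` returns it.

```julia
# Sketch: a one-token RemoteChannel acting as a cross-process lock.
# Assumes worker processes were added with addprocs() beforehand.
function makelock()
    lk = RemoteChannel(() -> Channel{Bool}(1))
    put!(lk, true)           # the lock starts out free
    return lk
end

acquire(lk) = take!(lk)      # blocks until the token is available
release(lk) = put!(lk, true)

# On any worker:
# acquire(lk)
# try
#     # ... update the shared file ...
# finally
#     release(lk)
# end
```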

On Friday, October 14, 2016 at 3:19:25 PM UTC-4, Zachary Roth wrote:
>
> Thanks for the reply and suggestion, Ralph.  I tried to get this working 
> with semaphores/mutexes/locks/etc.  But I've not been having any luck.
>
> Here's a simplified, incomplete version of what I'm trying to do.  I'm 
> hoping that someone can offer a suggestion if they see some sample code.  
>
> function localfunction()
> files = listfiles()
> locks = [Threads.SpinLock() for _ in files]
> ranges = getindexranges(length(files))
>
> pmap(pairs(ranges)) do rows_and_cols
> rows, cols = rows_and_cols
> workerfunction(files, locks, rows, cols)
> end
> end
>
> function workerfunction(files, locks, rows, cols)
> data = kindofexpensive(...)
> idxpairs = pairs(rows, cols)
> 
> @sync for idx in unique([rows; cols])
> @async begin
> lock(locks[idx])
> try
> updatefile(files[idx], data[idx])
> finally
> unlock(locks[idx])
> end
> end
> end
> end
>
> This (obviously) does not work.  I think that the problem is that the 
> locks are being copied when the function is spawned on each process.  I've 
> tried wrapping the locks/semaphores in Futures/RemoteChannels, but that 
> also hasn't worked for me.
>
> I found that I could do the sort of coordination that I need by starting 
> Tasks on the local process.  More specifically, each file would have an 
> associated Task to handle the coordination between processes.  But this 
> only worked for me in a simplified situation with the Tasks being declared 
> globally.  When I tried to implement this coordination within localfunction 
> above, I got an error (really a bunch of errors) that said that a running 
> Task cannot be serialized.
>
> Sorry for the long post, but I'm really hoping that someone can help me 
> out.  I have a feeling that I'm missing something pretty simple.
>
> ---Zachary
>
>
>
>
> On Tuesday, October 11, 2016 at 10:15:06 AM UTC-4, Ralph Smith wrote:
>>
>> You can do it with 2 (e.g. integer) channels per worker (requests and 
>> replies) and a task for each pair in the main process. That's so ugly I'd 
>> be tempted to write an
>>  interface to named system semaphores. Or just use a separate file for 
>> each worker.
>>
>> On Monday, October 10, 2016 at 11:09:39 AM UTC-4, Zachary Roth wrote:
>>>
>>> Hi, everyone,
>>>
>>> I'm trying to save to a single file from multiple worker processes, but 
>>> don't know of a nice way to coordinate this.  When I don't coordinate, 
>>> saving works fine much of the time.  But I sometimes get errors with 
>>> reading/writing of files, which I'm assuming is happening because multiple 
>>> processes are trying to use the same file simultaneously.
>>>
>>> I tried to coordinate this with a queue/channel of `Condition`s managed 
>>> by a task running in process 1, but this isn't working for me.  I've tried 
>>> to simplify this to track down the problem.  At least part of the issue 
>>> seems to be writing to the channel from process 2.  Specifically, when I 
>>> `put!` something onto a channel (or `push!` onto an array) from process 2, 
>>> the channel/array is still empty back on process 1.  I feel like I'm 
>>> missing something simple.  Is there an easier way to go about coordinating 
>>> multiple processes that are trying to access the same file?  If not, does 
>>> anyone have any tips?
>>>
>>> Thanks for any help you can offer.
>>>
>>> Cheers,
>>> ---Zachary
>>>
>>

[julia-users] Dataframe without column and row names ?

2016-10-14 Thread Henri Girard
Hi,
Is it possible to have a table with only the result?
I don't want row/column names.

using DataFrames
function iain_magic(n::Int)
M = zeros(Int, n, n)
for I = 1:n, J = 1:n
@inbounds M[I,J] = n*((I+J-1+(n >> 1))%n)+((I+2J-2)%n) + 1
end
return M
end
mm=iain_magic(3)
df=DataFrame(mm)
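
One hedged suggestion: the row numbers and column names come from the DataFrame wrapper, so skipping the wrapper (or writing the matrix as delimited text) gives a bare table.

```julia
# The raw matrix already displays without row/column names:
mm = iain_magic(3)
display(mm)

# Or emit it as plain whitespace-delimited text (Julia 0.5-era API):
writedlm(STDOUT, mm, ' ')
```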


[julia-users] Re: eachline() work with pmap() is slow

2016-10-14 Thread lovebufan
I have changed the code to parallelize over files rather than lines; the code 
is available here 
 if 
anyone is interested.
However, the speed is still not satisfactory (total processing speed is 
approx. 10 MB/s; ideally it should be 100 MB/s, the network speed). 
The CPU is not saturated, IO is not saturated, and I cannot find the bottleneck...

@Jeremy, thanks for the reply. The bottleneck is IO: it takes days just to 
stream all the files at full speed, so waiting to load a whole file before 
processing wastes a lot of time. Ideally, by the time the data has been 
streamed once, the processing would also be done, with no extra time.
@Páll, do you mean that pmap will first do a ``collect`` operation and then 
process? So even if you give pmap an iterator, it will not benefit from it? 
That would be sad. 
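
A hedged sketch of one way around this: let pmap distribute only the (short) vector of file names and stream each file line-by-line on the worker, so no global line iterator is ever collected. `filenames` and the per-line work are placeholders.

```julia
# Each worker streams its own file; pmap only ships the path strings.
@everywhere function process_file(path)
    n = 0
    open(path) do io
        for line in eachline(io)
            n += 1           # replace with the real per-line processing
        end
    end
    return n
end

counts = pmap(process_file, filenames)
```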



Re: [julia-users] FactCheck.jl bundled with Julia?

2016-10-14 Thread Yichao Yu
On Fri, Oct 14, 2016 at 9:28 PM, Júlio Hoffimann 
wrote:

> Ok, I am not switching to FactCheck then; I didn't know it was being
> deprecated, in a sense.
>

You can switch to BaseTestNext if you need 0.4 compatibility.
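
For reference, a minimal sketch of the Base test style (the same API that BaseTestNext backports to 0.4):

```julia
using Base.Test    # on 0.4: using BaseTestNext

@testset "arithmetic" begin
    @test 1 + 1 == 2
    @test_throws DomainError sqrt(-1.0)
end
```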


>
> Thank you,
> -Júlio
>
> 2016-10-14 18:05 GMT-07:00 Yichao Yu :
>
>> On Oct 14, 2016 8:52 PM, "Júlio Hoffimann" 
>> wrote:
>> >
>> > Oh really? I'm not following it closely. Please let me know why that is
>> the case, I was planning to switch to FactCheck.
>>
>> Afaict the new test in base is an improved version of FactCheck.
>>
>> >
>> > -Júlio
>>
>
>


Re: [julia-users] FactCheck.jl bundled with Julia?

2016-10-14 Thread Júlio Hoffimann
Ok, I am not switching to FactCheck then; I didn't know it was being
deprecated, in a sense.

Thank you,
-Júlio

2016-10-14 18:05 GMT-07:00 Yichao Yu :

> On Oct 14, 2016 8:52 PM, "Júlio Hoffimann" 
> wrote:
> >
> > Oh really? I'm not following it closely. Please let me know why that is
> the case, I was planning to switch to FactCheck.
>
> Afaict the new test in base is an improved version of FactCheck.
>
> >
> > -Júlio
>


Re: [julia-users] FactCheck.jl bundled with Julia?

2016-10-14 Thread Yichao Yu
On Oct 14, 2016 8:52 PM, "Júlio Hoffimann" 
wrote:
>
> Oh really? I'm not following it closely. Please let me know why that is
the case, I was planning to switch to FactCheck.

Afaict the new test in base is an improved version of FactCheck.

>
> -Júlio


Re: [julia-users] Does julia -L work with plots?

2016-10-14 Thread Isaiah Norton
Call `show()` at the end of the script. See explanation here:

https://github.com/JuliaPy/PyPlot.jl#non-interactive-plotting
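
A minimal sketch of the pattern (the plotted function here is invented, not taken from the attachment): end the script with a blocking `show()` so the figure window actually appears and stays open when the script is run non-interactively.

```julia
# file.jl -- sketch; replace the plot with the actual PDE solution
using PyPlot

x = linspace(0, 2pi, 200)
plot(x, sin.(x))

show()   # block so the figure window appears and stays open
```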

On Fri, Oct 14, 2016 at 1:17 PM, Stefan Rigger  wrote:

>
>
> Hello everyone,
>
> I'm using julia to numerically solve partial differential equations and
> use PyPlot to plot the solutions. I'd like to be able to calculate a
> solution and plot entering something like
> julia -L file.jl
>
> in Terminal but unfortunately, this doesn't seem to work (there is no
> window opening where one can view the solution). I've attached a small .jl
> file if you want to test it yourself. Is there any way one can get this to
> work? Thank you for your time,
>
> Best Regards,
> Stefan Rigger
>


Re: [julia-users] FactCheck.jl bundled with Julia?

2016-10-14 Thread Júlio Hoffimann
Oh really? I'm not following it closely. Please let me know why that is the
case, I was planning to switch to FactCheck.

-Júlio


Re: [julia-users] Embedding Julia in C++ - Determining returned array types

2016-10-14 Thread Isaiah Norton
On Fri, Oct 14, 2016 at 2:28 PM, Kyle Kotowick  wrote:
>
>
> After determining that an array was returned, how would you determine what
> the inner type of the array is (i.e. the type of the objects it contains)?
>

`jl_array_eltype`


>
> And furthermore, if it returns an array of type "Any", would there be any
> way to tell what the type is of any arbitrary element in that array?
>

`jl_typeof`, after retrieving the element (which will be boxed)


>
> Thanks!
>


Re: [julia-users] linspace question; bug?

2016-10-14 Thread Páll Haraldsson
2016-10-14 22:58 GMT+00:00 Stefan Karpinski :

> I'll answer your question with a riddle: How can you have a length-one
> collection whose first and last value are different?
>

I know that; yes, it's a degenerate case, and as I said, it "may not matter
too much".

[Nobody makes a one-element linspace intentionally, but would it be bad to
allow it? I'm not sure it would come up usefully anywhere, but if it does,
then your code errors out; that may be intentional.]

I wouldn't have posted if not for seeing "Real"; that is the real question
(bug?).
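
For illustration (hedged; this is my reading of the 0.5 behavior): the length-1 case is only contradictory when the endpoints differ, and a plain one-element array sidesteps the check entirely.

```julia
linspace(1.0, 1.0, 1)   # endpoints equal, so a length-1 range is coherent
grid = [1.5]            # a single-point "grid" at an arbitrary value
```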


Re: [julia-users] FactCheck.jl bundled with Julia?

2016-10-14 Thread Tom Breloff
Actually that's not true. AFAIK, people are switching back to the new
testing in Base, and FactCheck will be deprecated eventually. (Unless I got
the wrong memo?)

On Friday, October 14, 2016, Júlio Hoffimann 
wrote:

> Hi,
>
> It seems that FactCheck.jl has become the de facto standard for writing
> tests in Julia packages. Wouldn't it be a good idea to have it bundled with
> Julia? Any reason to keep the current test framework?
>
> -Júlio
>


[julia-users] FactCheck.jl bundled with Julia?

2016-10-14 Thread Júlio Hoffimann
Hi,

It seems that FactCheck.jl has become the de facto standard for writing 
tests in Julia packages. Wouldn't it be a good idea to have it bundled with 
Julia? Any reason to keep the current test framework?

-Júlio


Re: [julia-users] linspace question; bug?

2016-10-14 Thread Stefan Karpinski
I'll answer your question with a riddle: How can you have a length-one
collection whose first and last value are different?

On Fri, Oct 14, 2016 at 6:25 PM, Páll Haraldsson 
wrote:

>
> I mistook third param for step, and got confusing error:
>
>
> julia> linspace(1, 2, 1)
> ERROR: linspace(1.0, 2.0, 1.0): endpoints differ
>  in linspace(::Float64, ::Float64, ::Float64) at ./range.jl:212
>  in linspace(::Float64, ::Float64, ::Int64) at ./range.jl:251
>  in linspace(::Int64, ::Int64, ::Int64) at ./range.jl:253
>
> julia> linspace(1.0, 2.0, 1)
> ERROR: linspace(1.0, 2.0, 1.0): endpoints differ
>  in linspace(::Float64, ::Float64, ::Float64) at ./range.jl:212
>  in linspace(::Float64, ::Float64, ::Int64) at ./range.jl:251
>
>
> It may not matter too much to get this to work (or give helpful error); I
> went to debug and found (should num/len be Integer? see inline comments):
>
>
> immutable LinSpace{T<:AbstractFloat} <: Range{T}
> start::T
> stop::T
> len::T # len::Integer, only countable..
> divisor::T
> end
>
> function linspace{T<:AbstractFloat}(start::T, stop::T, len::T)
>
> #long function omitted
>
> function linspace{T<:AbstractFloat}(start::T, stop::T, len::Real) #
> change to len::Integer, is this for type stability reasons, or to handle
> Rationals somehow?
> T_len = convert(T, len)
> T_len == len || throw(InexactError())
> linspace(start, stop, T_len)
> end
> linspace(start::Real, stop::Real, len::Real=50) =  # change to
> len::Integer=50
> linspace(promote(AbstractFloat(start), AbstractFloat(stop))..., len)
>
>


Re: [julia-users] Most effective way to build a large string?

2016-10-14 Thread Páll Haraldsson


On Friday, October 14, 2016 at 10:44:47 PM UTC, Páll Haraldsson wrote:
>
> On Friday, October 14, 2016 at 5:17:45 PM UTC, Diego Javier Zea wrote:
>>
>> Hi!
>> I have a function that uses `IOBuffer` for this creating one `String` 
>> like the example. 
>> Is it needed or recommended `close` the IOBuffer after `takebuf_string`?
>>
>
> I find it unlikely.
>
>  help?> takebuf_string
> search: takebuf_string
>
>   takebuf_string(b::IOBuffer)
>
>   Obtain the contents of an IOBuffer as a string, without copying. 
> Afterwards, the IOBuffer is reset to its initial state.
>
> reset means they take action, and could have closed if needed; IOBuffer is 
> an in-memory thing, even if freeing memory was the issue, then garbage 
> collection should take care of that.
>

Note that an IOBuffer lives in RAM; it is not like a file in non-volatile 
storage.

>
>
> Since this thread was necromanced:
>
> @Karpinski: "The takebuf_string function really needs a new name."
>
> I do not see clearly that that has happened, shouldn't 
>
> help?> takebuf_string
>
> show then?
>
> What would be a good name? Changing and/or documenting the above could be 
> an "up-for-grabs" issue.
>

@Steven: "Further, in this case, the "takebuf_string" function (or 
takebuf_array) isn't just conversion, it is mutation because it empties the 
buffer.  So, arguably it should follow the Julia convention and append a ! 
to the name."



> New function would just call the old function..
>
>

Re: [julia-users] Most effective way to build a large string?

2016-10-14 Thread Páll Haraldsson
On Friday, October 14, 2016 at 5:17:45 PM UTC, Diego Javier Zea wrote:
>
> Hi!
> I have a function that uses `IOBuffer` for this creating one `String` like 
> the example. 
> Is it needed or recommended `close` the IOBuffer after `takebuf_string`?
>

I find it unlikely.

 help?> takebuf_string
search: takebuf_string

  takebuf_string(b::IOBuffer)

  Obtain the contents of an IOBuffer as a string, without copying. 
Afterwards, the IOBuffer is reset to its initial state.

reset means they take action, and could have closed if needed; IOBuffer is 
an in-memory thing, even if freeing memory was the issue, then garbage 
collection should take care of that.
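
A minimal sketch of the pattern under discussion (0.4/0.5-era API):

```julia
io = IOBuffer()
for word in ["build", "a", "large", "string"]
    print(io, word, ' ')
end
s = takebuf_string(io)   # returns the contents and resets the buffer;
                         # no explicit close is needed
```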


Since this thread was necromanced:

@Karpinski: "The takebuf_string function really needs a new name."

I do not see that this has happened; shouldn't

help?> takebuf_string

show it, then?

What would be a good name? Changing and/or documenting the above could be 
an "up-for-grabs" issue.

The new function would just call the old one.



[julia-users] linspace question; bug?

2016-10-14 Thread Páll Haraldsson

I mistook third param for step, and got confusing error:


julia> linspace(1, 2, 1)
ERROR: linspace(1.0, 2.0, 1.0): endpoints differ
 in linspace(::Float64, ::Float64, ::Float64) at ./range.jl:212
 in linspace(::Float64, ::Float64, ::Int64) at ./range.jl:251
 in linspace(::Int64, ::Int64, ::Int64) at ./range.jl:253

julia> linspace(1.0, 2.0, 1)
ERROR: linspace(1.0, 2.0, 1.0): endpoints differ
 in linspace(::Float64, ::Float64, ::Float64) at ./range.jl:212
 in linspace(::Float64, ::Float64, ::Int64) at ./range.jl:251


It may not matter too much to get this to work (or give a helpful error); I 
went to debug and found (should num/len be an Integer? see inline comments):


immutable LinSpace{T<:AbstractFloat} <: Range{T}
start::T
stop::T
len::T # len::Integer, only countable..
divisor::T
end

function linspace{T<:AbstractFloat}(start::T, stop::T, len::T)

#long function omitted

function linspace{T<:AbstractFloat}(start::T, stop::T, len::Real) # change 
to len::Integer, is this for type stability reasons, or to handle Rationals 
somehow?
T_len = convert(T, len)
T_len == len || throw(InexactError())
linspace(start, stop, T_len)
end
linspace(start::Real, stop::Real, len::Real=50) =  # change to 
len::Integer=50
linspace(promote(AbstractFloat(start), AbstractFloat(stop))..., len)



Re: [julia-users] errorbar plot in Plots.jl

2016-10-14 Thread Tom Breloff
Already implemented as attributes: yerror and xerror
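
A minimal sketch (values invented for illustration):

```julia
using Plots
plotlyjs()                     # the PlotlyJS backend, as in the question

x = 1:10
y = sin.(x)
err = 0.1 * ones(length(x))    # per-point error bars
plot(x, y, yerror = err)       # xerror works the same way
```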

On Friday, October 14, 2016,  wrote:

> I have just discovered Plots, which I use mostly with the PlotlyJS backend
> because of the inline interactivity in Juliabox, and I have been spoiled by
> it: thank you Tom and Spencer.
>
> I need an errorbar plot (something like http://matplotlib.org/
> examples/statistics/errorbar_demo_features.html): is there any plan to
> add it? I could use the OHLC plot, but I cannot figure out how to remove
> the horizontal bars.
>
>


[julia-users] Embedding Julia in C++ - Determining returned array types

2016-10-14 Thread Kyle Kotowick
I'm trying to embed Julia code in a C++ app I'm building. I have it all 
working and can do some basic calls; all good so far.

One thing I'm struggling with, though, is determining what type of array is 
returned by a Julia evaluation. The only available documentation seems to 
be this page , 
which is far from comprehensive and doesn't show how to do this. Let's say 
I call some abstract Julia code (dynamic, unknown at compile time) like 
this:

jl_value_t *ret = jl_eval_string(some_code_string);
if(jl_is_array(ret)) {

// we now know that the returned value is an array, but how do we know 
what the inner type is? (e.g. Float64, Int64, etc.)

}

After determining that an array was returned, how would you determine what 
the inner type of the array is (i.e. the type of the objects it contains)?

And furthermore, if it returns an array of type "Any", would there be any 
way to tell what the type is of any arbitrary element in that array?

Thanks!


[julia-users] errorbar plot in Plots.jl

2016-10-14 Thread franckhertz16
I have just discovered Plots, which I use mostly with the PlotlyJS backend 
because of the inline interactivity in Juliabox, and I have been spoiled by 
it: thank you Tom and Spencer.

I need an errorbar plot (something like 
http://matplotlib.org/examples/statistics/errorbar_demo_features.html): is 
there any plan to add it? I could use the OHLC plot, but I cannot figure 
out how to remove the horizontal bars.



Re: [julia-users] Re: Julia-i18n logo proposal

2016-10-14 Thread Waldir Pimenta
With the final detail ironed out 
, I'm happy to announce 
that Julia-i18n has officially adopted the J version 
 as its logo!

Who said design by committee didn't work? ;) Thanks all for the useful 
comments and suggestions.

--Waldir

On Thursday, October 6, 2016 at 6:36:01 AM UTC+1, Sébastien Celles wrote:
>
> +1 for http://imgh.us/julia-i18n-j.svg
>
> Le jeudi 6 octobre 2016 00:22:13 UTC+2, John Gibson a écrit :
>>
>> I also prefer the "J" one (http://imgh.us/julia-i18n-j.svg). It's more 
>> balanced. The Hindi "ja" in particular fits the circle better and looks 
>> more balanced without the diacritic "uu" (the loopy thing underneath).
>>
>> Nice job!
>>
>> John
>>
>> On Wednesday, October 5, 2016 at 4:35:10 PM UTC-4, Islam Badreldin wrote:
>>>
>>>
>>> +1 for the logo with the 'J' sound.
>>>
>>>   -Islam
>>> _
>>> From: Stefan Karpinski 
>>> Sent: Wednesday, October 5, 2016 10:13 AM
>>> Subject: Re: [julia-users] Re: Julia-i18n logo proposal
>>> To: Julia Users 
>>>
>>>
>>> On Wed, Oct 5, 2016 at 9:29 AM, Waldir Pimenta  
>>> wrote:
>>>
 Oops, meant to link to julia-i18n-j.svg 
  in the previous message, rather than 
 twice to the -ju variant.

>>>
>>> I like this one  – it looks nicely 
>>> balanced. The letters from the three scripts are really nice.
>>>
>>>
>>>

[julia-users] Re: Parallel file access

2016-10-14 Thread Zachary Roth
Thanks for the reply and suggestion, Ralph.  I tried to get this working 
with semaphores/mutexes/locks/etc.  But I've not been having any luck.

Here's a simplified, incomplete version of what I'm trying to do.  I'm 
hoping that someone can offer a suggestion if they see some sample code.  

function localfunction()
files = listfiles()
locks = [Threads.SpinLock() for _ in files]
ranges = getindexranges(length(files))

pmap(pairs(ranges)) do rows_and_cols
rows, cols = rows_and_cols
workerfunction(files, locks, rows, cols)
end
end

function workerfunction(files, locks, rows, cols)
data = kindofexpensive(...)
idxpairs = pairs(rows, cols)  # renamed: assigning to `pairs` would shadow the function being called

@sync for idx in unique([rows; cols])
@async begin
lock(locks[idx])
try
updatefile(files[idx], data[idx])
finally
unlock(locks[idx])
end
end
end
end

This (obviously) does not work.  I think that the problem is that the locks 
are being copied when the function is spawned on each process.  I've tried 
wrapping the locks/semaphores in Futures/RemoteChannels, but that also 
hasn't worked for me.

I found that I could do the sort of coordination that I need by starting 
Tasks on the local process.  More specifically, each file would have an 
associated Task to handle the coordination between processes.  But this 
only worked for me in a simplified situation with the Tasks being declared 
globally.  When I tried to implement this coordination within localfunction 
above, I got an error (really a bunch of errors) that said that a running 
Task cannot be serialized.

Sorry for the long post, but I'm really hoping that someone can help me 
out.  I have a feeling that I'm missing something pretty simple.

---Zachary




On Tuesday, October 11, 2016 at 10:15:06 AM UTC-4, Ralph Smith wrote:
>
> You can do it with 2 (e.g. integer) channels per worker (requests and 
> replies) and a task for each pair in the main process. That's so ugly I'd 
> be tempted to write an
>  interface to named system semaphores. Or just use a separate file for 
> each worker.
>
> On Monday, October 10, 2016 at 11:09:39 AM UTC-4, Zachary Roth wrote:
>>
>> Hi, everyone,
>>
>> I'm trying to save to a single file from multiple worker processes, but 
>> don't know of a nice way to coordinate this.  When I don't coordinate, 
>> saving works fine much of the time.  But I sometimes get errors with 
>> reading/writing of files, which I'm assuming is happening because multiple 
>> processes are trying to use the same file simultaneously.
>>
>> I tried to coordinate this with a queue/channel of `Condition`s managed 
>> by a task running in process 1, but this isn't working for me.  I've tried 
>> to simplify this to track down the problem.  At least part of the issue 
>> seems to be writing to the channel from process 2.  Specifically, when I 
>> `put!` something onto a channel (or `push!` onto an array) from process 2, 
>> the channel/array is still empty back on process 1.  I feel like I'm 
>> missing something simple.  Is there an easier way to go about coordinating 
>> multiple processes that are trying to access the same file?  If not, does 
>> anyone have any tips?
>>
>> Thanks for any help you can offer.
>>
>> Cheers,
>> ---Zachary
>>
>

[julia-users] Re: Julia and the Tower of Babel

2016-10-14 Thread Jeffrey Sarnoff
Just clarifying: for a two-part package name that begins with an acronym
and ends in a word,

the present guidance:
 the acronym is to be uppercased and the second word is to be
capitalized, no separator.
 so: CSSScripts, HTMLLinks

the desired guidance (from 24 hrs of feedback):
 the acronym is to be titlecased and the second word is to be
capitalized, no separator.
 so: CssScripts, HtmlLinks

What is behind the present guidance?


On Saturday, October 8, 2016 at 8:42:05 AM UTC-4, Jeffrey Sarnoff wrote:
>
> I have created a new Organization on github: *JuliaPraxis.*
> Everyone who has added to this thread will get an invitation to join, and 
> so contribute.
> I will set up the site and let you know how do include your wor(l)d views.
>
> Anyone else is welcome to post to this thread, and I will send an 
> invitation.
>
>
>
> On Saturday, October 8, 2016 at 6:59:51 AM UTC-4, Chris Rackauckas wrote:
>>
>> Conventions would have to be arrived at before this is possible.
>>
>> On Saturday, October 8, 2016 at 3:39:55 AM UTC-7, Traktor Toni wrote:
>>>
>>> In my opinion the solutions to this are very clear, or would be:
>>>
>>> 1. make a mandatory linter for all julia code
>>> 2. julia IDEs should offer good intellisense
>>>
>>> Am Freitag, 7. Oktober 2016 17:35:46 UTC+2 schrieb Gabriel Gellner:

 Something that I have been noticing, as I convert more of my research 
 code over to Julia, is how the super easy to use package manager (which I 
 love), coupled with the talent base of the Julia community seems to have a 
 detrimental effect on the API consistency of the many “micro” packages 
 that 
 cover what I would consider the de-facto standard library.

 What I mean is that whereas a commercial package like 
 Matlab/Mathematica etc., being written under one large umbrella, will 
 largely (clearly not always) choose consistent names for similar API 
 keyword arguments, and have similar calling conventions for master 
 function 
 like tools (`optimize` versus `lbfgs`, etc), which I am starting to 
 realize 
 is one of the great selling points of these packages as an end user. I can 
 usually guess what a keyword will be in Mathematica, whereas even after a 
 year of using Julia almost exclusively I find I have to look at the 
 documentation (or the source code depending on the documentation ...) to 
 figure out the keyword names in many common packages.

 Similarly, in my experience with open source tools, due to the 
 complexity of the package management, we get large “batteries included” 
 distributions that cover a lot of the standard stuff for doing science, 
 like python’s numpy + scipy combination. Whereas in Julia the equivalent 
 of 
 scipy is split over many, separately developed packages (Base, Optim.jl, 
 NLopt.jl, Roots.jl, NLsolve.jl, ODE.jl/DifferentialEquations.jl). Many of 
 these packages are stupid awesome, but they can have dramatically 
 different 
 naming conventions and calling behavior, for essential equivalent 
 behavior. 
 Recently I noticed that tolerances, for example, are named as `atol/rtol` 
 versus `abstol/reltol` versus `abs_tol/rel_tol`, which means it is extremely 
 easy to have a piece of scientific code that will need to use all three 
 conventions across different calls to seemingly similar libraries. 

 Having brought this up, I find that the community is largely sympathetic 
 and, in general, would support a common convention. The issue, I have 
 slowly realized, is that it is rarely that straightforward. In the above example 
 the abstol/reltol versus abs_tol/rel_tol seems like an easy example of 
 what 
 can be tidied up, but the latter underscored name is consistent with 
 similar naming conventions from Optim.jl for other tolerances, so that 
 community is reluctant to change the convention. Similarly, I think there 
 would be little interest in changing abstol/reltol to the underscored 
 version in packages like Base, ODE.jl etc as this feels consistent with 
 each of these code bases. Hence I have started to think that the problem 
 is 
 the micro-packaging. It is much easier to look for consistency within a 
 package then across similar packages, and since Julia seems to distribute 
 so many of the essential tools in very narrow boundaries of functionality 
 I 
 am not sure that this kind of naming convention will ever be able to reach 
 something like a Scipy, or the even higher standard of commercial packages 
 like Matlab/Mathematica. (I am sure there are many more examples like 
 using 
 maxiter, versus iterations for describing stopping criteria in iterative 
 solvers ...)

 Even further I have noticed that even when packages try to find 
 consistency across packages, for example 

Re: [julia-users] Re: Any 0.5 performance tips?

2016-10-14 Thread Mauro
Good detective work, please post your code in the issue!

On Fri, 2016-10-14 at 19:51, Andrew  wrote:
> I've found the main problem. I have a function which repeatedly accesses a
> 6-dimensional array in a loop. This function took no time in 0.4, but is
> actually very slow in 0.5. My problem looks very similar to issue 18774.
> Here's an example:
>
> A3 = rand(10, 10, 10);
> function test3(A, nx1, nx2, nx3)
>   for i = 1:10_000_000
> A[nx1, nx2, nx3]
>   end
> end
>
> A5 = rand(10, 10, 10, 10, 10);
> function test5(A, nx1, nx2, nx3, nx4, nx5)
>   for i = 1:10_000_000
> A[nx1, nx2, nx3, nx4, nx5]
>   end
> end
>
> A6 = rand(10, 10, 10, 10, 10, 10);
> function test6(A, nx1, nx2, nx3, nx4, nx5, nx6)
>   for i = 1:10_000_000
> A[nx1, nx2, nx3, nx4, nx5, nx6]
>   end
> end
> function test6_fast(A, nx1, nx2, nx3, nx4, nx5, nx6)
>   Asize = size(A)
>   for i = 1:10_000_000
> A[sub2ind(Asize, nx1, nx2, nx3, nx4, nx5, nx6 )]
>   end
> end
> @time test3(A3, 1, 1, 1)
> @time test5(A5, 1, 1, 1, 1, 1)
> @time test6(A6, 1, 1, 1, 1, 1, 1)
> @time test6_fast(A6, 1, 1, 1, 1, 1, 1)
>
>
> test6 takes 0.01s in 0.4 and takes 15s in 0.5. Using a linear index fixes
> the problem.
>
> On Friday, September 30, 2016 at 2:30:02 AM UTC-4, Mauro wrote:
>>
>> On Fri, 2016-09-30 at 03:45, Andrew 
>> wrote:
>> > I checked, and my objective function is evaluated exactly as many times
>> > under 0.4 as it is under 0.5. The number of iterations must be the same.
>> >
>> > I also looked at the times more precisely. For one particular function
>> call
>> > in the code, I have:
>> >
>> > 0.4 with old code: 6.7s 18.5M allocations
>> > 0.4 with 0.5 style code(regular anonymous functions) 11.6s, 141M
>> > allocations
>> > 0.5: 36.2s, 189M allocations
>> >
>> > Surprisingly, 0.4 is still much faster even without the fast anonymous
>> > functions trick. It doesn't look like 0.5 is generating many more
>> > allocations than 0.4 on the same code, the time is just a lot slower.
>>
>> Sounds like your not far off a minimal, working example.  Post it and
>> I'm sure it will be dissected in no time. (And an issue can be filed).
>>
>> > On Thursday, September 29, 2016 at 3:36:46 PM UTC-4, Tim Holy wrote:
>> >>
>> >> No real clue about what's happening, but my immediate thought was that
>> if
>> >> your algorithm is iterative and uses some kind of threshold to decide
>> >> convergence, then it seems possible that a change in the accuracy of
>> some
>> >> computation might lead to it getting "stuck" occasionally due to
>> roundoff
>> >> error. That's probably more likely to happen because of some kind of
>> >> worsening rather than some improvement, but either is conceivable.
>> >>
>> >> If that's even a possible explanation, I'd check for unusually-large
>> >> numbers of iterations and then print some kind of convergence info.
>> >>
>> >> Best,
>> >> --Tim
>> >>
>> >> On Thu, Sep 29, 2016 at 1:21 PM, Andrew > >
>> >> wrote:
>> >>
>> >>> In the 0.4 version the above times are pretty consistent. I never
>> observe
>> >>> any several thousand allocation calls. I wonder if compilation is
>> occurring
>> >>> repeatedly.
>> >>>
>> >>> This isn't terribly pressing for me since I'm not currently working on
>> >>> this project, but if there's an easy fix it would be useful for future
>> work.
>> >>>
>> >>> (sorry I didn't mean to post twice. For some reason hitting spacebar
>> was
>> >>> interpreted as the post command?)
>> >>>
>> >>>
>> >>> On Thursday, September 29, 2016 at 2:15:35 PM UTC-4, Andrew wrote:
>> 
>>  I've used @code_warntype everywhere I can think to and I've only
>> found
>>  one Core.box. The @code_warntype looks like this
>> 
>>  Variables:
>>    #self#::#innerloop#3133{#bellman_obj}
>>    state::State{IdioState,AggState}
>>    EVspline::Dierckx.Spline1D
>>    model::Model{CRRA_Family,AggState}
>>    policy::PolicyFunctions{Array{Float64,6},Array{Int64,6}}
>>    OO::NW
>> 
>> 
>> #3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
>>
>> 
>>  Body:
>>    begin
>> 
>> 
>> #3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
>>
>>  = $(Expr(:new,
>> 
>> ##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj},
>>
>>  :(state), :(EVspline), :(model), :(policy), :(OO),
>>  :((Core.getfield)(#self#,:bellman_obj)::#bellman_obj)))
>>    SSAValue(0) =
>> 
>> #3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
>>
>> 
>> 
>> 

Re: [julia-users] Re: Any 0.5 performance tips?

2016-10-14 Thread Andrew
I've found the main problem. I have a function which repeatedly accesses a 
6-dimensional array in a loop. This function took no time in 0.4, but is 
actually very slow in 0.5. My problem looks very similar to issue 18774.
Here's an example:

A3 = rand(10, 10, 10);
function test3(A, nx1, nx2, nx3)
    for i = 1:10_000_000
        A[nx1, nx2, nx3]
    end
end

A5 = rand(10, 10, 10, 10, 10);
function test5(A, nx1, nx2, nx3, nx4, nx5)
    for i = 1:10_000_000
        A[nx1, nx2, nx3, nx4, nx5]
    end
end

A6 = rand(10, 10, 10, 10, 10, 10);
function test6(A, nx1, nx2, nx3, nx4, nx5, nx6)
    for i = 1:10_000_000
        A[nx1, nx2, nx3, nx4, nx5, nx6]
    end
end

function test6_fast(A, nx1, nx2, nx3, nx4, nx5, nx6)
    Asize = size(A)
    for i = 1:10_000_000
        A[sub2ind(Asize, nx1, nx2, nx3, nx4, nx5, nx6)]
    end
end
@time test3(A3, 1, 1, 1)
@time test5(A5, 1, 1, 1, 1, 1)
@time test6(A6, 1, 1, 1, 1, 1, 1)
@time test6_fast(A6, 1, 1, 1, 1, 1, 1)


test6 takes 0.01s in 0.4 and takes 15s in 0.5. Using a linear index fixes 
the problem.

On Friday, September 30, 2016 at 2:30:02 AM UTC-4, Mauro wrote:
>
> On Fri, 2016-09-30 at 03:45, Andrew  
> wrote: 
> > I checked, and my objective function is evaluated exactly as many times 
> > under 0.4 as it is under 0.5. The number of iterations must be the same. 
> > 
> > I also looked at the times more precisely. For one particular function 
> call 
> > in the code, I have: 
> > 
> > 0.4 with old code: 6.7s 18.5M allocations 
> > 0.4 with 0.5 style code(regular anonymous functions) 11.6s, 141M 
> > allocations 
> > 0.5: 36.2s, 189M allocations 
> > 
> > Surprisingly, 0.4 is still much faster even without the fast anonymous 
> > functions trick. It doesn't look like 0.5 is generating many more 
> > allocations than 0.4 on the same code, the time is just a lot slower. 
>
> Sounds like you're not far off a minimal working example.  Post it and 
> I'm sure it will be dissected in no time. (And an issue can be filed). 
>
> > On Thursday, September 29, 2016 at 3:36:46 PM UTC-4, Tim Holy wrote: 
> >> 
> >> No real clue about what's happening, but my immediate thought was that 
> if 
> >> your algorithm is iterative and uses some kind of threshold to decide 
> >> convergence, then it seems possible that a change in the accuracy of 
> some 
> >> computation might lead to it getting "stuck" occasionally due to 
> roundoff 
> >> error. That's probably more likely to happen because of some kind of 
> >> worsening rather than some improvement, but either is conceivable. 
> >> 
> >> If that's even a possible explanation, I'd check for unusually-large 
> >> numbers of iterations and then print some kind of convergence info. 
> >> 
> >> Best, 
> >> --Tim 
> >> 
> >> On Thu, Sep 29, 2016 at 1:21 PM, Andrew  > 
> >> wrote: 
> >> 
> >>> In the 0.4 version the above times are pretty consistent. I never 
> observe 
> >>> any several thousand allocation calls. I wonder if compilation is 
> occurring 
> >>> repeatedly. 
> >>> 
> >>> This isn't terribly pressing for me since I'm not currently working on 
> >>> this project, but if there's an easy fix it would be useful for future 
> work. 
> >>> 
> >>> (sorry I didn't mean to post twice. For some reason hitting spacebar 
> was 
> >>> interpreted as the post command?) 
> >>> 
> >>> 
> >>> On Thursday, September 29, 2016 at 2:15:35 PM UTC-4, Andrew wrote: 
>  
>  I've used @code_warntype everywhere I can think to and I've only 
> found 
>  one Core.box. The @code_warntype looks like this 
>  
>  Variables: 
>    #self#::#innerloop#3133{#bellman_obj} 
>    state::State{IdioState,AggState} 
>    EVspline::Dierckx.Spline1D 
>    model::Model{CRRA_Family,AggState} 
>    policy::PolicyFunctions{Array{Float64,6},Array{Int64,6}} 
>    OO::NW 
>  
>  
> #3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
>  
>
>  
>  Body: 
>    begin 
>  
>  
> #3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
>  
>
>  = $(Expr(:new, 
>  
> ##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj},
>  
>
>  :(state), :(EVspline), :(model), :(policy), :(OO), 
>  :((Core.getfield)(#self#,:bellman_obj)::#bellman_obj))) 
>    SSAValue(0) = 
>  
> #3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
>  
>
>  
>  
> 

[julia-users] Re: eachline() work with pmap() is slow

2016-10-14 Thread Páll Haraldsson
On Friday, October 14, 2016 at 3:45:36 AM UTC, love...@gmail.com wrote:
>
> I want to process each line of a large text file (100G) in parallel using 
> the following code
>
> pmap(process_fun, eachline(the_file))
>
> however, it seems that pmap is slow. following is a dummy experiment:
>
>  

> the goal is to process those files (300+) as fast as possible. and maybe 
> there are better ways to call pmap?
>

I'm not sure there's much gain in processing *each* file in parallel on top 
of parallelizing over these many files (at least if they are of similar 
size, and no single one is much bigger).

help?> pmap
[..]
  By default, pmap distributes the computation over all specified workers.
[..]

I'm not sure how this works; since lines in a file may not each be the same 
length, I THINK you need to read the file serially (there are probably 
workarounds, but pmap wouldn't be responsible for that).

The computations would, however, be distributed. If they take a long time 
(compared to the I/O, i.e. the read; else distributed=false might be a 
win?) and are independent (I guess pmap requires that), then pmap could be 
a win, but see above. Note also that parameters such as batch_size=1 seem 
to me to be tuning parameters.


That's some big file... I'm kind of interested in big [1D] arrays (see other 
thread); it seems to me this is streaming work, and while the file is 
bigger, [each process] doesn't need more than 2 GB (a limit I'm interested in).


 


Re: [julia-users] Most effective way to build a large string?

2016-10-14 Thread Diego Javier Zea
Hi!
I have a function that uses `IOBuffer` like this to create one `String`, as 
in the example. 
Is it needed, or recommended, to `close` the IOBuffer after `takebuf_string`?
Best!

On Tuesday, February 17, 2015 at 1:47:08 PM UTC-3, Stefan Karpinski wrote:
>
> IOBuffer is what you're looking for:
>
> buf = IOBuffer()
> for i = 1:100
>println(buf, i)
> end
> takebuf_string(buf) # => returns everything that's been written to buf.
>
> The takebuf_string function really needs a new name.
>
> On Tue, Feb 17, 2015 at 9:06 AM, Maurice Diamantini <
> maurice.d...@gmail.com > wrote:
>
>> Hi,
>>
>> In Ruby, String is mutable which allows to build large strings like this:
>> txt = ""
>> for ...
>> txt << "yet another line\n"
>> end
>> # do something with txt
>>
>> The Julia the (bad) way I use is to do:
>> txt = ""
>> for ...
>> txt *=  "yet another line\n"
>> end
>> # do something with txt
>>
>> Which is very slow for a big string (because it build a new more and more 
>> string at each iteration).
>>
>> So is there another way to do it (in standard Julia)?
>> Or is there another type which could be used (something like a Buffer 
>> type or Array type)?
>>
>> Thank,
>> -- Maurice
>>
>
>
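
For reference, a minimal sketch of the pattern (an addition for the archive, 
not from the original posters): `takebuf_string` empties the buffer, so it 
can be reused without an explicit `close`; since an `IOBuffer` is just 
GC-managed memory, closing it is not strictly required. In later Julia 
versions the same operation is spelled `String(take!(buf))`.

```julia
# Build a large string incrementally with an IOBuffer instead of
# repeated string concatenation.
function build_text(n)
    buf = IOBuffer()
    for i = 1:n
        println(buf, "line ", i)
    end
    # take! empties the buffer (so it could be reused); no close needed.
    return String(take!(buf))   # takebuf_string(buf) on Julia 0.5
end

txt = build_text(3)
```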

[julia-users] Does julia -L work with plots?

2016-10-14 Thread Stefan Rigger


Hello everyone, 

I'm using julia to numerically solve partial differential equations and use 
PyPlot to plot the solutions. I'd like to be able to calculate a solution 
and plot it by entering something like 
julia -L file.jl

in Terminal but unfortunately, this doesn't seem to work (there is no 
window opening where one can view the solution). I've attached a small .jl 
file if you want to test it yourself. Is there any way one can get this to 
work? Thank you for your time,

Best Regards,
Stefan Rigger


test.jl
Description: Binary data


[julia-users] Re: Performance of release 0.5.0 v 0.4.6.0

2016-10-14 Thread Jeremy Cavanagh
Thanks Guys, that's very helpful. I shall now go away and play with the 
performance tests. 


[julia-users] Re: What is really "big data" for Julia (or otherwise), 1D or multi-dimensional?

2016-10-14 Thread Páll Haraldsson
On Thursday, October 13, 2016 at 7:49:51 PM UTC, cdm wrote:
>
> from CloudArray.jl:
>
> "If you are dealing with big data, i.e., your RAM memory is not enough to 
> store your data, you can create a CloudArray from a file."
>
>
> https://github.com/gsd-ufal/CloudArray.jl#creating-a-cloudarray-from-a-file
>

Good to know, and it seems cool (like CatViews.jl). Indexes could need to be 
bigger than 32-bit this way, even for 2D.

But has anyone worked with more than 70 terabyte arrays, that would 
otherwise have been a limitation?

Anyone know biggest (or just big over 2 GB) one-dimensional array people 
are working with?



[julia-users] Re: eachline() work with pmap() is slow

2016-10-14 Thread Jeremy McNees
I need to run something similar due to a large number of text files that I 
have. They are too large to load into memory at one-time, let alone 
multiple files at the same time. I find that pmap() works very well here. 

First, you should wrap your for loop in a function. In general you should 
block your code with functions in Julia. Second, can you provide a 
delimiter to the split function? 

Third, you may not need 32 procs for this job. There's overhead associated 
with parallel processing. 

This stackoverflow post has some more information that might be useful: 
http://stackoverflow.com/questions/21890893/reading-csv-in-julia-is-slow-compared-to-python/35120894?noredirect=1#comment66827279_35120894


On Thursday, October 13, 2016 at 11:45:36 PM UTC-4, love...@gmail.com wrote:
>
> I want to process each line of a large text file (100G) in parallel using 
> the following code
>
> pmap(process_fun, eachline(the_file))
>
> however, it seems that pmap is slow. following is a dummy experiment:
>
> julia> writedlm("tmp.txt",rand(10,100)) # produce a large file
> julia> @time for l in eachline("tmp.txt")
>   split(l)
>   end
>   5.678517 seconds (11.00 M allocations: 732.637 MB, 40.67% gc time)
>
> julia> addprocs() # 32 core
>
> julia> @time map(split, eachline("tmp.txt"));
>   4.834571 seconds (11.00 M allocations: 734.638 MB, 32.84% gc time)
>
> julia> @time pmap(split, eachline("tmp.txt"));
> 112.275411 seconds (227.06 M allocations: 10.024 GB, 50.72% gc time)
>
> the goal is to process those files (300+) as fast as possible. and maybe 
> there are better ways to call pmap?
>
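
Building on that advice, a rough sketch (an editorial addition, not tested 
on the poster's data) of per-file rather than per-line parallelism: each 
task reads one whole file serially and returns only a small summary, so 
very little data travels between processes. `process_file` is a 
hypothetical stand-in for the real per-line work.

```julia
using Distributed  # pmap/@everywhere live in Base on Julia 0.5

# One task per file: the worker does its own I/O and returns a summary.
@everywhere function process_file(path)
    n = 0
    for line in eachline(path)
        n += length(split(line))  # stand-in for the real per-line work
    end
    return n  # ship back a small result, not the lines themselves
end

# demo with two small temporary files
files = map(1:2) do i
    path, io = mktemp()
    println(io, "a b c")
    close(io)
    path
end

results = pmap(process_file, files)  # one whole file per task
```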


Re: [julia-users] ANN: RawArray.jl

2016-10-14 Thread Páll Haraldsson
On Thursday, October 13, 2016 at 6:00:30 PM UTC, Tim Holy wrote:
>
> If you just want a raw dump of memory, you can get that, and if it's big 
> it uses `Mmap.mmap` when it reads the data back in. So you can read 
> terabyte-sized arrays.
>

[Not clear on mmap... is it just a possibility, or kind of a requirement 
when arrays are this big?]

Good to know, I have another thread on array sizes (and 2 GB limit).

You mean you could read terabyte-sized arrays, not that it's common or that 
you know of it being done for 1D arrays?

[Would that be arrays of big structs? Fewer than 2G, e.g. 32-bit index 
would do?]

I'm not at all worried for 2D (or more dimensions).



[julia-users] Re: return Bool [and casting to Bool]

2016-10-14 Thread Steven G. Johnson
https://github.com/JuliaLang/julia/issues/18367 


[julia-users] return Bool [and casting to Bool]

2016-10-14 Thread Páll Haraldsson

It's great to see explicit return types in the language as of 0.5.

About return [or its implicit nature; I only read about half, as it is very 
long..]:

https://groups.google.com/forum/#!topic/julia-users/4RVR8qQDrUg


Disallowing implicit return would be a breaking change.


Is there some room for adding special handling of [return] true or [return] 
false to the language?

I'm kind of worried that if anyone changes the code, you get a Union.

Since Bool[eans] are so fundamental to computing, it seems you should be 
returning them with no possibility of error.


In general for Bool, and especially for return, it seems not good that 0.0, 
0, etc. get cast to it (at least casting e.g. Strings fails).

[In the unlikely situation, that is really wanted.. then the new return 
type opens up that possibility of an explicit Union return type.]


[Trivia: the Go language has a kind of Union of the error code and the 
intended return type, instead of C's error model or exception handling. I'm 
deeply skeptical of this, but a Union would almost allow that.. or a 
GoTypeError type..]
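
To make the casting concern concrete, a small sketch of how a declared 
`::Bool` return type behaves (checked against a recent Julia; 0.5 semantics 
may differ slightly):

```julia
# A declared return type inserts a convert: exact 0.0/1.0 silently become
# Bool, while other numbers raise InexactError rather than a type error.
f()::Bool = 1.0   # converts cleanly to true
g()::Bool = 2.0   # conversion fails when called

ok = f()          # true

threw = try
    g()
    false
catch err
    err isa InexactError
end
```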



[julia-users] Layered subplot does not work with categorical variables

2016-10-14 Thread Christopher Fisher
Hi all-

I'm having trouble creating a subplot with multiple layers when the 
variables are categorical.  Here is a stripped down version of my code.

df = DataFrame(Condition = repeat(["Condition 1","Condition 2"], inner = 
3), Block = repeat(["Block 1","Block 2", "Block 3"],outer = 2),y1 = 
rand(6), y2 = rand(6))


plot(
    Geom.subplot_grid(
        layer(df, x = :Block, y = :y1, xgroup = :Condition,
              Geom.line,
              Theme(default_color=colorant"red")),
        layer(df, x = :Block, y = :y2, xgroup = :Condition,
              Geom.line,
              Theme(default_color=colorant"blue"))))



This generates the following error:

MethodError: no method matching isfinite(::String)
Closest candidates are:
isfinite(!Matched::Float16) at float16.jl:119
isfinite(!Matched::BigFloat) at mpfr.jl:799
isfinite(!Matched::DataArrays.NAtype) at 
/home/dfish/.julia/v0.5/DataArrays/src/predicates.jl:9


However, if I use Block = repeat([1,2,3], outer = 2), the code works. Is 
there any way around this? 

Thanks,

Chris 




Re: [julia-users] ERROR: Target architecture mismatch

2016-10-14 Thread ABB
Ok - thanks for the clarification.  I will try to compile on the compute 
node, not the login node.  

I will submit a ticket to TACC and ask about cmake too.  The version on the 
compute node is 2.8.11.  

On Friday, October 14, 2016 at 10:42:51 AM UTC-5, Erik Schnetter wrote:
>
> Julia runs some of the code it generates as part of its bootstrapping 
> procedure. That is, traditional cross-compiling won't work. I think there's 
> a way around it, but it's not trivial. I would avoid this in the beginning.
>
> -erik
>
> On Fri, Oct 14, 2016 at 11:28 AM, ABB  
> wrote:
>
>> I was building on the (Haswell) front end.  From some of the other issues 
>> I looked at it appeared that I could specify the architecture even if I was 
>> not actually building on that kind of system.  But that could be totally 
>> wrong, so I can try it on the KNL node if that's required.
>>
>> When I put  "LLVM_VER := svn" and tried this morning (again on the front 
>> end) the error I got was:
>>
>> JULIA_CPU_TARGET = knl
>>
>>
>>  lib/CodeGen/SelectionDAG/DAGCombiner.cpp | 13 +
>>
>>  test/CodeGen/X86/negate-shift.ll | 16 
>>
>>  2 files changed, 17 insertions(+), 12 deletions(-)
>>
>> CMake Error at CMakeLists.txt:3 (cmake_minimum_required):
>>
>>   CMake 3.4.3 or higher is required.  You are running version 2.8.11
>>
>>
>>
>> -- Configuring incomplete, errors occurred!
>>
>> make[1]: *** [build/llvm-svn/build_Release/CMakeCache.txt] Error 1
>>
>> make: *** [julia-deps] Error 2
>>
>>
>>
>>
>> On Friday, October 14, 2016 at 9:51:56 AM UTC-5, Erik Schnetter wrote:
>>>
>>> Were you building on a KNL node or on the frontend? What architecture 
>>> did you specify?
>>>
>>> -erik
>>>
>>> On Thu, Oct 13, 2016 at 9:38 PM, Valentin Churavy  
>>> wrote:
>>>
 Since KNL is just a new platform the default version of the LLVM 
 compiler that Julia is based on does not support it properly.
 During our testing at MIT we found that we needed to switch to the 
 current upstream of LLVM (or if anybody reads this at a later time LLVM 
 4.0)
 You can do that by putting
 LLVM_VER:=svn
 into your Make.user.

 - Valentin

 On Friday, 14 October 2016 09:55:16 UTC+9, ABB wrote:
>
> Sigh... build failed.  I'm including the last part that worked and the 
> error message which followed:
>
> JULIA usr/lib/julia/inference.ji
> essentials.jl
> generator.jl
> reflection.jl
> options.jl
> promotion.jl
> tuple.jl
> range.jl
> expr.jl
> error.jl
> bool.jl
> number.jl
> int.jl
>
> signal (4): Illegal instruction
> while loading int.jl, in expression starting on line 193
> ! at ./bool.jl:16
> jl_call_method_internal at 
> /home1/04179/abean/julia/src/julia_internal.h:189
> jl_apply_generic at /home1/04179/abean/julia/src/gf.c:1942
> anonymous at ./ (unknown line)
> jl_call_method_internal at 
> /home1/04179/abean/julia/src/julia_internal.h:189
> jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:569
> jl_parse_eval_all at /home1/04179/abean/julia/src/ast.c:717
> jl_load at /home1/04179/abean/julia/src/toplevel.c:596
> jl_load_ at /home1/04179/abean/julia/src/toplevel.c:605
> include at ./boot.jl:231
> jl_call_method_internal at 
> /home1/04179/abean/julia/src/julia_internal.h:189
> jl_apply_generic at /home1/04179/abean/julia/src/gf.c:1942
> do_call at /home1/04179/abean/julia/src/interpreter.c:66
> eval at /home1/04179/abean/julia/src/interpreter.c:190
> jl_interpret_toplevel_expr at 
> /home1/04179/abean/julia/src/interpreter.c:31
> jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:558
> jl_eval_module_expr at /home1/04179/abean/julia/src/toplevel.c:196
> jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:465
> jl_toplevel_eval at /home1/04179/abean/julia/src/toplevel.c:580
> jl_toplevel_eval_in_warn at /home1/04179/abean/julia/src/builtins.c:590
> jl_toplevel_eval_in at /home1/04179/abean/julia/src/builtins.c:556
> eval at ./boot.jl:234
> jl_call_method_internal at 
> /home1/04179/abean/julia/src/julia_internal.h:189
> jl_apply_generic at /home1/04179/abean/julia/src/gf.c:1942
> do_call at /home1/04179/abean/julia/src/interpreter.c:66
> eval at /home1/04179/abean/julia/src/interpreter.c:190
> jl_interpret_toplevel_expr at 
> /home1/04179/abean/julia/src/interpreter.c:31
> jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:558
> jl_parse_eval_all at /home1/04179/abean/julia/src/ast.c:717
> jl_load at /home1/04179/abean/julia/src/toplevel.c:596
> exec_program at /home1/04179/abean/julia/ui/repl.c:66
> true_main at /home1/04179/abean/julia/ui/repl.c:119
> main at /home1/04179/abean/julia/ui/repl.c:232
> __libc_start_main at /usr/lib64/libc.so.6 

[julia-users] help with @generated function call please?

2016-10-14 Thread Florian Oswald
hi all, 

I want to evaluate a function at each index of an array. There is an 
N-dimensional function, and I want to map it onto an N-dimensional array:

fpoly(x::Array{Real,5}) = x[1] + x[2]^2 + x[3] + x[4]^2 + x[5] 

want to do 

a = rand(2,2,2,2,2);
b = similar(a)

for i1 in indices(a,1)
    for i2 in indices(a,2)
        ...
            b[i1,i2,i3,i4,i5] = fpoly(a[i1,i2,i3,i4,i5])
    end
end
...

I tried:
# actually want to do it in place
@generated function set_poly!{T,N}(a::Array{T,N})
    quote
        @nloops $N i a begin
            @nref $N a i = @ncall $N fpoly i->a[i]
        end
    end
end

but that fails. I don't get further than:

macroexpand(:(@nloops 3 j a begin
x = @ncall 3 fpoly i->a[j]
end))

quote  # cartesian.jl, line 62:
    for j_3 = indices(a,3) # cartesian.jl, line 63:
        nothing # cartesian.jl, line 64:
        begin  # cartesian.jl, line 62:
            for j_2 = indices(a,2) # cartesian.jl, line 63:
                nothing # cartesian.jl, line 64:
                begin  # cartesian.jl, line 62:
                    for j_1 = indices(a,1) # cartesian.jl, line 63:
                        nothing # cartesian.jl, line 64:
                        begin  # REPL[145], line 2:
                            x = fpoly(a[j],a[j],a[j])
                        end # cartesian.jl, line 65:
                        nothing
                    end
                end # cartesian.jl, line 65:
                nothing
            end
        end # cartesian.jl, line 65:
        nothing
    end
end

which is a start, but how can I get the LHS right (the indices of a)?
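
For what it's worth, here is a sketch of the two Base.Cartesian gotchas 
involved (with a hypothetical scalar function standing in for `fpoly`, 
since the intended signature is ambiguous in the post): the LHS `@nref` 
must be parenthesized so the `=` parses as an assignment rather than a 
macro keyword argument, and the loop variables are `i_1`, `i_2`, ... inside 
the body. Shown with a literal 3; in the `@generated` version the same 
parenthesization applies with `$N` in place of the literal.

```julia
using Base.Cartesian

fscalar(x) = 2x + 1  # hypothetical stand-in for fpoly

function set_poly3!(b, a)
    @nloops 3 i a begin
        # Parentheses are required: `@nref 3 b i = ...` would hand the
        # assignment to the macro as a keyword argument.
        (@nref 3 b i) = fscalar(@nref 3 a i)
    end
    return b
end

a = reshape(collect(1.0:8.0), 2, 2, 2)
b = similar(a)
set_poly3!(b, a)   # elementwise: b[i,j,k] = 2a[i,j,k] + 1
```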




Re: [julia-users] ERROR: Target architecture mismatch

2016-10-14 Thread Erik Schnetter
Julia runs some of the code it generates as part of its bootstrapping
procedure. That is, traditional cross-compiling won't work. I think there's
a way around it, but it's not trivial. I would avoid this in the beginning.

-erik

On Fri, Oct 14, 2016 at 11:28 AM, ABB  wrote:

> I was building on the (Haswell) front end.  From some of the other issues
> I looked at it appeared that I could specify the architecture even if I was
> not actually building on that kind of system.  But that could be totally
> wrong, so I can try it on the KNL node if that's required.
>
> When I put  "LLVM_VER := svn" and tried this morning (again on the front
> end) the error I got was:
>
> JULIA_CPU_TARGET = knl
>
>
>  lib/CodeGen/SelectionDAG/DAGCombiner.cpp | 13 +
>
>  test/CodeGen/X86/negate-shift.ll | 16 
>
>  2 files changed, 17 insertions(+), 12 deletions(-)
>
> CMake Error at CMakeLists.txt:3 (cmake_minimum_required):
>
>   CMake 3.4.3 or higher is required.  You are running version 2.8.11
>
>
>
> -- Configuring incomplete, errors occurred!
>
> make[1]: *** [build/llvm-svn/build_Release/CMakeCache.txt] Error 1
>
> make: *** [julia-deps] Error 2
>
>
>
>
> On Friday, October 14, 2016 at 9:51:56 AM UTC-5, Erik Schnetter wrote:
>>
>> Were you building on a KNL node or on the frontend? What architecture did
>> you specify?
>>
>> -erik
>>
>> On Thu, Oct 13, 2016 at 9:38 PM, Valentin Churavy 
>> wrote:
>>
>>> Since KNL is just a new platform the default version of the LLVM
>>> compiler that Julia is based on does not support it properly.
>>> During our testing at MIT we found that we needed to switch to the
>>> current upstream of LLVM (or if anybody reads this at a later time LLVM 4.0)
>>> You can do that by putting
>>> LLVM_VER:=svn
>>> into your Make.user.
>>>
>>> - Valentin
>>>
>>> On Friday, 14 October 2016 09:55:16 UTC+9, ABB wrote:

 Sigh... build failed.  I'm including the last part that worked and the
 error message which followed:

 JULIA usr/lib/julia/inference.ji
 essentials.jl
 generator.jl
 reflection.jl
 options.jl
 promotion.jl
 tuple.jl
 range.jl
 expr.jl
 error.jl
 bool.jl
 number.jl
 int.jl

 signal (4): Illegal instruction
 while loading int.jl, in expression starting on line 193
 ! at ./bool.jl:16
 jl_call_method_internal at /home1/04179/abean/julia/src/j
 ulia_internal.h:189
 jl_apply_generic at /home1/04179/abean/julia/src/gf.c:1942
 anonymous at ./ (unknown line)
 jl_call_method_internal at /home1/04179/abean/julia/src/j
 ulia_internal.h:189
 jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:569
 jl_parse_eval_all at /home1/04179/abean/julia/src/ast.c:717
 jl_load at /home1/04179/abean/julia/src/toplevel.c:596
 jl_load_ at /home1/04179/abean/julia/src/toplevel.c:605
 include at ./boot.jl:231
 jl_call_method_internal at /home1/04179/abean/julia/src/j
 ulia_internal.h:189
 jl_apply_generic at /home1/04179/abean/julia/src/gf.c:1942
 do_call at /home1/04179/abean/julia/src/interpreter.c:66
 eval at /home1/04179/abean/julia/src/interpreter.c:190
 jl_interpret_toplevel_expr at /home1/04179/abean/julia/src/i
 nterpreter.c:31
 jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:558
 jl_eval_module_expr at /home1/04179/abean/julia/src/toplevel.c:196
 jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:465
 jl_toplevel_eval at /home1/04179/abean/julia/src/toplevel.c:580
 jl_toplevel_eval_in_warn at /home1/04179/abean/julia/src/builtins.c:590
 jl_toplevel_eval_in at /home1/04179/abean/julia/src/builtins.c:556
 eval at ./boot.jl:234
 jl_call_method_internal at /home1/04179/abean/julia/src/j
 ulia_internal.h:189
 jl_apply_generic at /home1/04179/abean/julia/src/gf.c:1942
 do_call at /home1/04179/abean/julia/src/interpreter.c:66
 eval at /home1/04179/abean/julia/src/interpreter.c:190
 jl_interpret_toplevel_expr at /home1/04179/abean/julia/src/i
 nterpreter.c:31
 jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:558
 jl_parse_eval_all at /home1/04179/abean/julia/src/ast.c:717
 jl_load at /home1/04179/abean/julia/src/toplevel.c:596
 exec_program at /home1/04179/abean/julia/ui/repl.c:66
 true_main at /home1/04179/abean/julia/ui/repl.c:119
 main at /home1/04179/abean/julia/ui/repl.c:232
 __libc_start_main at /usr/lib64/libc.so.6 (unknown line)
 unknown function (ip: 0x401928)
 Allocations: 100373 (Pool: 100371; Big: 2); GC: 0
 /bin/sh: line 1: 15078 Illegal instruction
 /home1/04179/abean/julia/usr/bin/julia-debug -C knl --output-ji
 /home1/04179/abean/julia/usr/lib/julia/inference.ji --startup-file=no
 coreimg.jl
 make[1]: *** [/home1/04179/abean/julia/usr/lib/julia/inference.ji]
 Error 132
 make: *** 

Re: [julia-users] ERROR: Target architecture mismatch

2016-10-14 Thread ABB
I was building on the (Haswell) front end.  From some of the other issues I 
looked at it appeared that I could specify the architecture even if I was 
not actually building on that kind of system.  But that could be totally 
wrong, so I can try it on the KNL node if that's required.

When I put  "LLVM_VER := svn" and tried this morning (again on the front 
end) the error I got was:

JULIA_CPU_TARGET = knl


 lib/CodeGen/SelectionDAG/DAGCombiner.cpp | 13 +

 test/CodeGen/X86/negate-shift.ll | 16 

 2 files changed, 17 insertions(+), 12 deletions(-)

CMake Error at CMakeLists.txt:3 (cmake_minimum_required):

  CMake 3.4.3 or higher is required.  You are running version 2.8.11



-- Configuring incomplete, errors occurred!

make[1]: *** [build/llvm-svn/build_Release/CMakeCache.txt] Error 1

make: *** [julia-deps] Error 2




On Friday, October 14, 2016 at 9:51:56 AM UTC-5, Erik Schnetter wrote:
>
> Were you building on a KNL node or on the frontend? What architecture did 
> you specify?
>
> -erik
>
> On Thu, Oct 13, 2016 at 9:38 PM, Valentin Churavy  > wrote:
>
>> Since KNL is just a new platform the default version of the LLVM compiler 
>> that Julia is based on does not support it properly.
>> During our testing at MIT we found that we needed to switch to the 
>> current upstream of LLVM (or if anybody reads this at a later time LLVM 4.0)
>> You can do that by putting
>> LLVM_VER:=svn
>> into your Make.user.
>>
>> - Valentin
>>
>> On Friday, 14 October 2016 09:55:16 UTC+9, ABB wrote:
>>>
>>> Sigh... build failed.  I'm including the last part that worked and the 
>>> error message which followed:
>>>
>>> JULIA usr/lib/julia/inference.ji
>>> essentials.jl
>>> generator.jl
>>> reflection.jl
>>> options.jl
>>> promotion.jl
>>> tuple.jl
>>> range.jl
>>> expr.jl
>>> error.jl
>>> bool.jl
>>> number.jl
>>> int.jl
>>>
>>> signal (4): Illegal instruction
>>> while loading int.jl, in expression starting on line 193
>>> ! at ./bool.jl:16
>>> jl_call_method_internal at 
>>> /home1/04179/abean/julia/src/julia_internal.h:189
>>> jl_apply_generic at /home1/04179/abean/julia/src/gf.c:1942
>>> anonymous at ./ (unknown line)
>>> jl_call_method_internal at 
>>> /home1/04179/abean/julia/src/julia_internal.h:189
>>> jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:569
>>> jl_parse_eval_all at /home1/04179/abean/julia/src/ast.c:717
>>> jl_load at /home1/04179/abean/julia/src/toplevel.c:596
>>> jl_load_ at /home1/04179/abean/julia/src/toplevel.c:605
>>> include at ./boot.jl:231
>>> jl_call_method_internal at 
>>> /home1/04179/abean/julia/src/julia_internal.h:189
>>> jl_apply_generic at /home1/04179/abean/julia/src/gf.c:1942
>>> do_call at /home1/04179/abean/julia/src/interpreter.c:66
>>> eval at /home1/04179/abean/julia/src/interpreter.c:190
>>> jl_interpret_toplevel_expr at 
>>> /home1/04179/abean/julia/src/interpreter.c:31
>>> jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:558
>>> jl_eval_module_expr at /home1/04179/abean/julia/src/toplevel.c:196
>>> jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:465
>>> jl_toplevel_eval at /home1/04179/abean/julia/src/toplevel.c:580
>>> jl_toplevel_eval_in_warn at /home1/04179/abean/julia/src/builtins.c:590
>>> jl_toplevel_eval_in at /home1/04179/abean/julia/src/builtins.c:556
>>> eval at ./boot.jl:234
>>> jl_call_method_internal at 
>>> /home1/04179/abean/julia/src/julia_internal.h:189
>>> jl_apply_generic at /home1/04179/abean/julia/src/gf.c:1942
>>> do_call at /home1/04179/abean/julia/src/interpreter.c:66
>>> eval at /home1/04179/abean/julia/src/interpreter.c:190
>>> jl_interpret_toplevel_expr at 
>>> /home1/04179/abean/julia/src/interpreter.c:31
>>> jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:558
>>> jl_parse_eval_all at /home1/04179/abean/julia/src/ast.c:717
>>> jl_load at /home1/04179/abean/julia/src/toplevel.c:596
>>> exec_program at /home1/04179/abean/julia/ui/repl.c:66
>>> true_main at /home1/04179/abean/julia/ui/repl.c:119
>>> main at /home1/04179/abean/julia/ui/repl.c:232
>>> __libc_start_main at /usr/lib64/libc.so.6 (unknown line)
>>> unknown function (ip: 0x401928)
>>> Allocations: 100373 (Pool: 100371; Big: 2); GC: 0
>>> /bin/sh: line 1: 15078 Illegal instruction 
>>> /home1/04179/abean/julia/usr/bin/julia-debug -C knl --output-ji 
>>> /home1/04179/abean/julia/usr/lib/julia/inference.ji --startup-file=no 
>>> coreimg.jl
>>> make[1]: *** [/home1/04179/abean/julia/usr/lib/julia/inference.ji] Error 
>>> 132
>>> make: *** [julia-inference] Error 2
>>>
>>>
>>>
>>> Any advice for debugging that?  I don't find any previous issues which 
>>> are helpful.
>>>
>>> Thanks - 
>>>
>>> Austin
>>>
>>> On Thursday, October 13, 2016 at 1:49:24 PM UTC-5, ABB wrote:

 Awesome.  Thanks.  I'll try it again then.  I appreciate the help.

 (Austin is also my name.  I save space in my memory by going to school 
 at, living in and being a guy with 

Re: [julia-users] ERROR: Target architecture mismatch

2016-10-14 Thread Erik Schnetter
Were you building on a KNL node or on the frontend? What architecture did
you specify?

-erik

On Thu, Oct 13, 2016 at 9:38 PM, Valentin Churavy 
wrote:

> Since KNL is just a new platform the default version of the LLVM compiler
> that Julia is based on does not support it properly.
> During our testing at MIT we found that we needed to switch to the current
> upstream of LLVM (or if anybody reads this at a later time LLVM 4.0)
> You can do that by putting
> LLVM_VER:=svn
> into your Make.user.
>
> - Valentin
>
> On Friday, 14 October 2016 09:55:16 UTC+9, ABB wrote:
>>
>> Sigh... build failed.  I'm including the last part that worked and the
>> error message which followed:
>>
>> JULIA usr/lib/julia/inference.ji
>> essentials.jl
>> generator.jl
>> reflection.jl
>> options.jl
>> promotion.jl
>> tuple.jl
>> range.jl
>> expr.jl
>> error.jl
>> bool.jl
>> number.jl
>> int.jl
>>
>> signal (4): Illegal instruction
>> while loading int.jl, in expression starting on line 193
>> ! at ./bool.jl:16
>> jl_call_method_internal at /home1/04179/abean/julia/src/j
>> ulia_internal.h:189
>> jl_apply_generic at /home1/04179/abean/julia/src/gf.c:1942
>> anonymous at ./ (unknown line)
>> jl_call_method_internal at /home1/04179/abean/julia/src/j
>> ulia_internal.h:189
>> jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:569
>> jl_parse_eval_all at /home1/04179/abean/julia/src/ast.c:717
>> jl_load at /home1/04179/abean/julia/src/toplevel.c:596
>> jl_load_ at /home1/04179/abean/julia/src/toplevel.c:605
>> include at ./boot.jl:231
>> jl_call_method_internal at /home1/04179/abean/julia/src/j
>> ulia_internal.h:189
>> jl_apply_generic at /home1/04179/abean/julia/src/gf.c:1942
>> do_call at /home1/04179/abean/julia/src/interpreter.c:66
>> eval at /home1/04179/abean/julia/src/interpreter.c:190
>> jl_interpret_toplevel_expr at /home1/04179/abean/julia/src/i
>> nterpreter.c:31
>> jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:558
>> jl_eval_module_expr at /home1/04179/abean/julia/src/toplevel.c:196
>> jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:465
>> jl_toplevel_eval at /home1/04179/abean/julia/src/toplevel.c:580
>> jl_toplevel_eval_in_warn at /home1/04179/abean/julia/src/builtins.c:590
>> jl_toplevel_eval_in at /home1/04179/abean/julia/src/builtins.c:556
>> eval at ./boot.jl:234
>> jl_call_method_internal at /home1/04179/abean/julia/src/j
>> ulia_internal.h:189
>> jl_apply_generic at /home1/04179/abean/julia/src/gf.c:1942
>> do_call at /home1/04179/abean/julia/src/interpreter.c:66
>> eval at /home1/04179/abean/julia/src/interpreter.c:190
>> jl_interpret_toplevel_expr at /home1/04179/abean/julia/src/i
>> nterpreter.c:31
>> jl_toplevel_eval_flex at /home1/04179/abean/julia/src/toplevel.c:558
>> jl_parse_eval_all at /home1/04179/abean/julia/src/ast.c:717
>> jl_load at /home1/04179/abean/julia/src/toplevel.c:596
>> exec_program at /home1/04179/abean/julia/ui/repl.c:66
>> true_main at /home1/04179/abean/julia/ui/repl.c:119
>> main at /home1/04179/abean/julia/ui/repl.c:232
>> __libc_start_main at /usr/lib64/libc.so.6 (unknown line)
>> unknown function (ip: 0x401928)
>> Allocations: 100373 (Pool: 100371; Big: 2); GC: 0
>> /bin/sh: line 1: 15078 Illegal instruction
>> /home1/04179/abean/julia/usr/bin/julia-debug -C knl --output-ji
>> /home1/04179/abean/julia/usr/lib/julia/inference.ji --startup-file=no
>> coreimg.jl
>> make[1]: *** [/home1/04179/abean/julia/usr/lib/julia/inference.ji] Error
>> 132
>> make: *** [julia-inference] Error 2
>>
>>
>>
>> Any advice for debugging that?  I don't find any previous issues which
>> are helpful.
>>
>> Thanks -
>>
>> Austin
>>
>> On Thursday, October 13, 2016 at 1:49:24 PM UTC-5, ABB wrote:
>>>
>>> Awesome.  Thanks.  I'll try it again then.  I appreciate the help.
>>>
>>> (Austin is also my name.  I save space in my memory by going to school
>>> at, living in and being a guy with the same name.)
>>>
>>> On Thursday, October 13, 2016 at 1:40:09 PM UTC-5, Erik Schnetter wrote:

 AB

 You're speaking of Stampede, if I might guess from the "austin" prefix
 in your email address. I would treat the old and the new section of the
 machines as separate, since they are not binary compatible. If you are
 really interested in the KNL part, then I'd concentrate on these, and use
 the development mode to always log in, build on, and run on the KNL nodes,
 and ignore everything else. Mixing different architectures in a single
 Julia environment is something I'd tackle much later, if at all.

 Alternatively you can use "haswell" as CPU architecture (instead of
 "core2" above), which should work both on the front end as well as the KNL
 nodes. However, this way you will lose speed on the KNL nodes, except for
 linear algebra operations.
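
If you go the haswell route, a Make.user sketch might look like the following. Treat this as an untested sketch: the variable names follow the Julia 0.5-era build system and should be checked against the build README, and it folds in Valentin's LLVM note from earlier in the thread.

```make
# Sketch of a Make.user targeting both the Stampede front end and KNL nodes.
# MARCH sets the -march used to compile Julia itself; JULIA_CPU_TARGET sets
# the target architecture for the generated system image.
MARCH := haswell
JULIA_CPU_TARGET := haswell
# KNL needs a newer LLVM than the one bundled with Julia 0.5:
LLVM_VER := svn
```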

 -erik


 On Thu, Oct 13, 2016 at 2:26 PM, ABB  wrote:

> This is great - thanks for getting back to 

Re: [julia-users] why do we have Base.isless(a, ::NAtype) but not Base.isless(a, ::Nullable)?

2016-10-14 Thread Milan Bouchet-Valat
Le jeudi 13 octobre 2016 à 15:40 +0200, Florian Oswald a écrit :
> i'm trying to understand why we don't have something similar in terms
> of comparison for Nullable as we have for DataArrays NAtype (below).
> point me to the relevant github conversation, if any, is fine. 
Such a method already exists in NullableArrays and in Julia 0.6. See
https://github.com/JuliaLang/julia/pull/18304


Regards

> How would I implement methods to find the maximum of an
> Array{Nullable{Float64}}? like so?
> 
> Base.isless(a::Any, x::Nullable{Float64}) = isnull(x) ? true :
> Base.isless(a,get(x))
> 
> 
> ~/.julia/v0.5/DataArrays/src/operators.jl:502
> 
> #
> # Comparison operators
> #
> 
> Base.isequal(::NAtype, ::NAtype) = true
> Base.isequal(::NAtype, b) = false
> Base.isequal(a, ::NAtype) = false
> Base.isless(::NAtype, ::NAtype) = false
> Base.isless(::NAtype, b) = false
> Base.isless(a, ::NAtype) = true
> 


Re: [julia-users] Re: What's the status of SIMD instructions from a user's perspective in v0.5?

2016-10-14 Thread Milan Bouchet-Valat
Le jeudi 13 octobre 2016 à 07:27 -0700, Florian Oswald a écrit :
> 
> Hi Erik,
> 
> that's great thanks. I may have a hot inner loop where this could be
> very helpful. I'll have a closer look and come back with any
> questions later on if that's ok. 
Maybe I'm stating the obvious, but you don't need to manually use SIMD
types to get SIMD instructions in simple/common cases. For example, the
following high-level generic code uses SIMD instructions on my machine
when passed standard vectors :

function add!(x::AbstractArray, y::AbstractArray)
    @inbounds for i in eachindex(x, y)
        x[i] += y[i]
    end
end
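
One quick way to confirm that such a loop was vectorized (a sketch; the exact IR varies by CPU and Julia build, and Julia may need to be started with -O3) is to look for vector types in the generated code:

```julia
function add!(x::AbstractArray, y::AbstractArray)
    @inbounds for i in eachindex(x, y)
        x[i] += y[i]
    end
end

x, y = rand(1000), rand(1000)
add!(x, y)

# Vectorized code contains SIMD types such as <4 x double> in the LLVM IR:
@code_llvm add!(x, y)
```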


Regards

> 
> cheers
> florian
> 
> > 
> > If you want to use the SIMD package, then you need to manually
> > vectorized the code. That is, all (most of) the local variables
> > you're using will have a SIMD `Vec` type. For convenience, your
> > input and output arrays will likely still hold scalar values, and
> > the `vload` and vstore` functions access scalar arrays,
> > reading/writing SIMD vectors. The function you quote above (from
> > the SIMD examples) does just this.
> > 
> > What vector length `N` is best depends on the particular machine.
> > Usually, you would look at the CPU instruction set and choose the
> > largest SIMD vector size that the CPU supports, but sometimes twice
> > that size or half that size might also work well. Note that using a
> > larger SIMD vector size roughly corresponds to loop unrolling,
> > which might be beneficial if the compiler isn't clever enough to do
> > this automatically.
> > 
> > There's additional complication if the array size is not a multiple
> > of the vector size. In this case, extending the array via dummy
> > elements if often the easiest way to go.
> > 
> > Note that SIMD vectorization is purely a performance improvement.
> > It does not make sense to make such changes without measuring
> > performance before and after. Given the low-level nature if the
> > changes, looking at the generated assembler code via `@code_native`
> > is usually also insightful.
> > 
> > I'll be happy to help if you have a specific problem on which
> > you're working.
> > 
> > -erik
> > 
> > 
> > On Thu, Oct 13, 2016 at 9:51 AM, Florian Oswald wrote:
> > > 
> > > ok thanks! and so I should define my SIMD-able function like
> > > 
> > > function vadd!{N,T}(xs::Vector{T}, ys::Vector{T},
> > > ::Type{Vec{N,T}})
> > > @assert length(ys) == length(xs)
> > > @assert length(xs) % N == 0
> > > @inbounds for i in 1:N:length(xs)
> > > xv = vload(Vec{N,T}, xs, i)
> > > yv = vload(Vec{N,T}, ys, i)
> > > xv += yv
> > > vstore(xv, xs, i)
> > > end
> > > end
> > > i.e. using vload() and vstore() methods?
> > > 
> > > > 
> > > > If you want explicit simd the best way right now is the great
> > > > SIMD.jl package https://github.com/eschnett/SIMD.jl  it is
> > > > builds on top of VecElement.
> > > > 
> > > > In many cases we can perform automatic vectorisation, but you
> > > > have to start Julia with -O3
> > > > 
> > > > > 
> > > > > i see on the docs http://docs.julialang.org/en/release-0.5/st
> > > > > dlib/simd-types/?highlight=SIMD that there is a vecElement
> > > > > that is build for SIMD support. I don't understand if as a
> > > > > user I should construct vecElement arrays and hope for some
> > > > > SIMD optimization? thanks.
> > > > > 
> > > > > 
> > > > 
> > > 
> > 
> > 
> > 
> > -- 
> > Erik Schnetter  http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: why do we have Base.isless(a, ::NAtype) but not Base.isless(a, ::Nullable)?

2016-10-14 Thread Milan Bouchet-Valat
Le jeudi 13 octobre 2016 à 06:45 -0700, Florian Oswald a écrit :
> I mean, do I have to cycle through the array and basically clean it
> of #NULL before finding the maximum, or is there another way?
Currently you have two solutions:
julia> using NullableArrays

julia> x = NullableArray([1, 2, 3, Nullable()])
4-element NullableArrays.NullableArray{Int64,1}:
 1
 2
 3
 #NULL

julia> minimum(x, skipnull=true)
Nullable{Int64}(1)


Or:

julia> minimum(dropnull(x))
1


Regards

> > i'm trying to understand why we don't have something similar in
> > terms of comparison for Nullable as we have for DataArrays NAtype
> > (below). point me to the relevant github conversation, if any, is
> > fine. 
> > 
> > How would I implement methods to find the maximum of an
> > Array{Nullable{Float64}}? like so?
> > 
> > Base.isless(a::Any, x::Nullable{Float64}) = isnull(x) ? true :
> > Base.isless(a,get(x))
> > 
> > 
> > ~/.julia/v0.5/DataArrays/src/operators.jl:502
> > 
> > #
> > # Comparison operators
> > #
> > 
> > Base.isequal(::NAtype, ::NAtype) = true
> > Base.isequal(::NAtype, b) = false
> > Base.isequal(a, ::NAtype) = false
> > Base.isless(::NAtype, ::NAtype) = false
> > Base.isless(::NAtype, b) = false
> > Base.isless(a, ::NAtype) = true
> > 
> > 


[julia-users] Re: redefining Base method for `show`

2016-10-14 Thread lapeyre . math122a
I'm thinking of symbolic mathematics (Symata.jl). Another example is 
`SymPy.jl`, which prints rationals the same way I want to, like this: 
"2/3". But in SymPy.jl, rationals are not `Rational`'s, but rather wrapped 
python objects, so the problem with printing does not arise. If I wrapped 
`Rational`'s it would introduce a lot of complexity.

So far, walking expression trees to wrap objects of certain types just 
before printing seems to be working well.
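
A minimal sketch of that wrap-on-output idea, with hypothetical names (Julia 0.5 syntax):

```julia
# Wrapper type introduced only at print time; the rest of the code never sees it.
immutable RationalWrap
    r::Rational
end

# Print "2/3" instead of Base's "2//3".
Base.show(io::IO, w::RationalWrap) = print(io, w.r.num, "/", w.r.den)

# wrapout walks a value and wraps only the types whose printing should change;
# everything else falls through untouched.
wrapout(x::Rational) = RationalWrap(x)
wrapout(a::AbstractArray) = map(wrapout, a)
wrapout(x) = x

println(wrapout(2//3))   # prints: 2/3
```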

On Friday, October 14, 2016 at 4:23:27 AM UTC+2, Jeffrey Sarnoff wrote:
>
> I assume you meant x::T in type A{T}.  Why do you want to do this:
>
>> Every time a Rational or Symbol or Bool is encountered on any level, I 
>> want it to print differently than Base.show does it.
>>
> Do you want to adorn it (like "3//5" -> "{3//5}") or alter it (like "3//5" 
> -> "2//5")?
>
> Also, I think you are approaching solving your problem in way more suited 
> to another language.  But I really have no idea what your motivation is.
>
> On Tuesday, October 11, 2016 at 7:21:35 PM UTC-4, lapeyre@gmail.com 
> wrote:
>>
>> To make it concrete, I have
>>
>> type A{T}
>>x
>>a::Array{Any,1}
>> end
>>
>> The elements of the array a are numbers, Symbols, strings, etc., as well 
>> as more instances of type A{T}.  They
>> may be nested to arbitrary depth. If I call show on an instance of A{T}, 
>> then show will be called recursively
>> on all parts of the tree. Every time a Rational or Symbol or Bool is 
>> encountered on any level, I want it to print differently than Base.show 
>> does it.
>>
>>
>> On Tuesday, October 11, 2016 at 11:48:46 PM UTC+2, Jeffrey Sarnoff wrote:
>>>
>>> Are you saying  a and b and c and d?
>>>
>>> (a) that you have an outer type which has a Rational field and has 
>>> another field of a type that has a field which is typed Rational or is 
>>> typed e.g. Vector{Rational}   
>>>
>>> (b) and displaying a value of the outer type includes displaying the 
>>> Rationals from withiin the field of the inner type
>>>
>>> (c) and when displaying that value, you want to present the outer type's 
>>> Rational field a special way
>>>
>>> (d) and when displaying that value, you want to present the Rational 
>>> fields of the inner type in the usual way
>>>
>>>
>>> On Tuesday, October 11, 2016 at 1:23:37 PM UTC-4, lapeyre@gmail.com 
>>> wrote:

 I think I understand what you are saying (not sure).  A problem that 
 arises is that if I call show or print on an object, then show or print 
 may 
 be called many times on fields and fields of fields, etc., including from 
 within Base code before the call returns. I don't know how to tell the 
 builtin julia code my preference for printing rationals. The only way I 
 know to get a redefinition of show eg M.show to work in all situations, is 
 to copy all the code that might be called. Maybe I'm missing something, 
 but 
 I can't see a way around this.

 I'm not familiar with the idea of a fencing module.

 On Monday, October 10, 2016 at 11:30:18 PM UTC+2, Jeffrey Sarnoff wrote:
>
> You could wrap your redefinitions in a module M without exporting show 
> explicitly.   
> `using M`  and accessing the your variation as `M.show` may give the 
> localization you want.
> Should it not, then doing that within some outer working context, a 
> fencing module, may add enough flexibility.
>
>
> On Monday, October 10, 2016 at 4:18:52 PM UTC-4, lapeyre@gmail.com 
> wrote:
>>
>> For the record, a workable solution, at least for this particular 
>> code: I pass all output through wrapout() at the outermost output call. 
>> The 
>> object to be printed is traversed recursively. All types fall through 
>> except for the handful that I want to change. Each of these is wrapped 
>> in a new type. I extend Base.show for each of these wrapper types. This 
>> seems pretty economical and  robust and works across versions. The 
>> wrapper 
>> types are only introduced upon output, the rest of the code never sees 
>> them.
>>
>> This works because the code uses a customization of the REPL, and 
>> several instances of print, warn, string, etc. I make a new top-level 
>> output function for the REPL that uses `wrapout`. I also  generate 
>> wrappers 
>> for each of print, etc. that map `wrapout` over all arguments. A 
>> developer 
>> is expected to use the interface provided rather than `print` etc. 
>> directly.  The user doesn't even have a choice. There are very few types 
>> in 
>> the package, but a lot of nested instances. So there is very little code 
>> needed to traverse these instances.  This might be more difficult in a 
>> different situation if many methods for wrapout() were required.
>>
>> On Monday, October 10, 2016 at 12:20:50 AM UTC+2, 
>> lapeyre@gmail.com wrote:
>>>

[julia-users] Re: Merge functions from different headers (Matrix vs. Vector)

2016-10-14 Thread DNF
On Friday, October 14, 2016 at 3:16:31 PM UTC+2, Martin Florek wrote:
>
> Thank you very much. This is a very elegant way; I think it solves my 
> problem.
>

You're welcome. If you are looking to improve performance further, you 
could add @inbounds and @simd macro calls, as seen 
here: 
http://docs.julialang.org/en/release-0.5/manual/performance-tips/#performance-annotations


Re: [julia-users] GC rooting for embedding: what is safe and unsafe?

2016-10-14 Thread Yichao Yu
On Fri, Oct 14, 2016 at 7:03 AM, Bart Janssens 
wrote:

> Hi,
>
> Replies below, to the best of my understanding of the Julia C interface:
>
> On Fri, Oct 14, 2016 at 11:47 AM Gunnar Farnebäck 
> wrote:
>
>> Reading through the threads and issues on gc rooting for embedded code,
>> as well as the code comments above the JL_GC_PUSH* macros in julia.h, I'm
>> still uncertain about how defensive it's necessary to be and best
>> practices. I'll structure this into a couple of cases with questions.
>>
>> 1. One of the use cases in examples/embedding.c is written:
>>
>> jl_function_t *func = jl_get_function(jl_base_module, "sqrt");
>> jl_value_t* argument = jl_box_float64(2.0);
>> jl_value_t* ret = jl_call1(func, argument);
>>
>> if (jl_is_float64(ret)) {
>> double retDouble = jl_unbox_float64(ret);
>> printf("sqrt(2.0) in C: %e\n", retDouble);
>> }
>>
>>
>>
> Is this missing gc rooting for argument during the call to jl_call1 or is
>> it safe without it?
>> Would ret need gc rooting to be safe during the calls to jl_is_float64
>> and/or jl_unbox_float64?
>>
>
> The jl_call argument must be rooted since func may allocate. I don't think
> the operations on ret allocate, but if you're unsure it's better to root
> it. Also, as your code evolves you may decide to perform extra operations
> on ret and then it's easy to forget the GC rooting at that point, so I'd
> root ret here.
>

jl_call1 (and the other jl_call* functions) are special in the sense that they
root their arguments, so you don't have to root `argument`.
The ret doesn't have to be rooted if this is all you are doing with it.
The only one that should in principle be rooted is actually `func`.
However, since it is known to be a global constant, you won't get into any
trouble without rooting it. (If it's a non-const global then you have to.)


>
>
>>
>> 2.
>> jl_value_t *a = jl_box_float64(1.0);
>> jl_value_t *b = jl_box_float64(2.0);
>> JL_GC_PUSH2(&a, &b);
>>
>> Is this unsafe since a is not yet rooted during the second call to
>> jl_box_float64 and must instead be written like below?
>>
>> jl_value_t *a = 0;
>> jl_value_t *b = 0;
>> JL_GC_PUSH2(&a, &b);
>> a = jl_box_float64(1.0);
>> b = jl_box_float64(2.0);
>>
>> For a single variable it's just fine to do like this though?
>> jl_value_t *a = jl_box_float64(1.0);
>> JL_GC_PUSH1(&a);
>>
>>
> Yes, since jl_box will allocate.
>

This is correct.


>
>
>> 3. Are
>> jl_function_t *func = jl_get_function(jl_base_module, "println");
>> jl_value_t *a = 0;
>> jl_value_t *b = 0;
> JL_GC_PUSH2(&a, &b);
>> a = jl_box_float64(1.0);
>> b = jl_box_float64(2.0);
>> jl_call2(func, a, b);
>> JL_GC_POP();
>>
>> and
>>
>> jl_function_t *func = jl_get_function(jl_base_module, "println");
>> jl_value_t **args;
>> JL_GC_PUSHARGS(args, 2);
>> args[0] = jl_box_float64(1.0);
>> args[1] = jl_box_float64(2.0);
>> jl_call(func, args, 2);
>> JL_GC_POP();
>>
>> equivalent and both safe? Are there any reasons to choose one over the
>> other, apart from personal preferences?
>>
>
> They are equivalent, looking at the code for the macro it seems that the
> JL_GC_PUSHARGS variant heap-allocates the array of pointers to root, so
> that might be slightly slower. I'd only use the JL_GC_PUSHARGS version if
> the number of arguments comes from a variable or a parameter or similar.
>

No, JL_GC_PUSHARGS does **NOT** heap-allocate the array.
The difference between the two is pretty small, and the performance impact
should be almost unnoticeable unless you are doing a lot of things with the
boxed variables.


>
>
>>
>> 4. Can any kind of exception checking be done safely without rooting the
>> return value?
>> jl_value_t *ret = jl_call1(...);
>> if (jl_exception_occurred())
>> printf("%s \n", jl_typeof_str(jl_exception_occurred()));
>> else
>> d = jl_unbox_float64(ret);
>>
>>
This is currently safe. It's a little close to the edge of what I think we
can always guarantee in the future


> 5. What kind of costs, other than increased code size, can be expected
>> from overzealous gc rooting?
>>
>
>
It leaks local variable addresses to globally visible memory, so the compiler
cannot reason about their values as well. This is the "difference" I've mentioned above
that you won't notice "unless you are doing a lot of things with the boxed
variables"


> These I leave for the experts :)
>
>
>>
>> 6. Is it always safe not to gc root the function pointer returned from
>> jl_get_function?
>>
>>
> A function should be bound to the symbol you use to look it up, so it is
> already rooted.
>

No. It is only safe to not root the value if it is a const global (well, or
you know that it never changes) and is not an immutable.
You can also not root any value if it is a singleton, which happens to be
true for most functions.


>
> Cheers,
>
> Bart
>


Re: [julia-users] Memory allocation issue when using external packages in module

2016-10-14 Thread Yichao Yu
On Fri, Oct 14, 2016 at 4:14 AM, Jan  wrote:

> Thanks for the hint. That seems to be the reason!
>
> I have a couple of follow up questions so that I learn more about Julia.
> Would be nice if someone takes a couple of minutes to educate me.
>
> I found a simple example reproducing my issue:
>
> module FooBar
>
> # Including BlackBoxOptim causes type instability for exponents
>
> using BlackBoxOptim
>
> # Multiplying instead of exponentiating works fine, even when using
> BlackBoxOptim
>
> f(a::Float64, b::Float64) = a^b
>
> end
>
> If I run @code_warntype with and without BlackBoxOptim for the code above,
> it is clear that it causes the type instability.
>
> With BlackBoxOptim:
>
> [screenshot of @code_warntype output not included]
>
> Without BlackBoxOptim:
>
> [screenshot of @code_warntype output not included]
>
> For my real code, the @code_warntype macro produces identical results
> whether I use BlackBoxOptim or not even though one is type unstable.
>
> *Question 1 + related ones :) :* Is there an alternative way to check
> type instability which is more detailed but still halfway easily readable?
> Would it be possible to detect the issue with @code_llvm or similar? I
> tried to write @code_llvm to a file since the output is very long, but
> never succeeded. Is this possible?
>

code_warntype is as much detail as you can get about type instability. This
is a compiler bug, which is likely why it might look non-obvious from the
output.
I strongly recommend **against** using code_llvm to check type stability
unless you are really familiar with llvm IR. It indeed gives you more
(lower level) information but rarely (never) about why type instability
happens.


>
> *Question 2:* When the issue 18465 is fixed, will that end up in Julia
> 0.5.1 or is a fixed version available already before that somewhere?
>

The source with the fix will be on the release-0.5 branch so you can
compile it yourself. Ideally you don't have to wait too long before it is
released though.


>
> Many thanks for any help. I really like to work with Julia so some input
> would be highly appreciated.
>
> Jan
>
>
>


Re: [julia-users] list all methods on certain datatype

2016-10-14 Thread Mauro
Try:

help?> methodswith
search: methodswith

  methodswith(typ[, module or function][, showparents])

  Return an array of methods with an argument of type typ.

  The optional second argument restricts the search to a particular module or 
function (the default is all modules, starting from Main).

  If optional showparents is true, also return arguments with a parent type of 
typ, excluding type Any.
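
For example (the result list depends on the Julia version and the loaded packages):

```julia
# All methods anywhere with a Rational argument:
methodswith(Rational)

# Restrict the search to module Base:
methodswith(Rational, Base)
```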


On Fri, 2016-10-14 at 13:20, Paulito Palmes  wrote:
> is there a function or keyboard shortcut to list all functions operating on
> certain datatype?
>
> for example, in a typical object-based approach, you can type the object in
> REPL with a dot and it completes all methods operating on that object. in
> Julia, we can only list all the behavior of a certain function like
> methods(+) but in most cases, you are more interested not on the function +
> itself but on the list of functions operating on a given datatype. in many
> cases, you forgot all the names of the functions but you know it operates
> on a certain object only so it's easy to recall it if you can list the
> names of these functions operating on a certain object only. currently, if
> you do methods(+), you get an endless list. It would be nice if you could do
> functionlists(object).


Re: [julia-users] Type stability of function in composite types

2016-10-14 Thread Mauro
For Julia to infer types it needs to know the exact type of the
function.  Try this:

type Foo{F<:Function}
  f::F
  y::Array{Float64, 1}
  x::Array{Float64, 2}
end
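
Putting that together with the original example, as a sketch: once the concrete function type is a parameter of `Foo`, the call through `g.f` becomes inferrable.

```julia
type Foo{F<:Function}
    f::F
    y::Vector{Float64}
    x::Matrix{Float64}
end

(g::Foo)(theta) = g.f(theta, g.y, g.x)

f(theta, y, x) = x'*(y - x*theta)
foo = Foo(f, randn(100), randn(100, 4))

foo([0.1, 0.1, 0.1, 0.1])
# @code_warntype foo([0.1, 0.1, 0.1, 0.1]) should now report a concrete
# return type instead of Any.
```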


On Fri, 2016-10-14 at 13:26, Giuseppe Ragusa  wrote:
> I am failing to understand why the following code produces type instability
> (caveat: the code is a reduction of larger and more complex code, but the
> features are the same).
>
> ```
> type Foo
>   f::Function
>   y::Array{Float64, 1}
>   x::Array{Float64, 2}
> end
>
> type Bar{T}
>   b::T
> end
>
> type A{T}
>   a::T
> end
>
> x = randn(100,4)
> y = randn(100)
>
> f(theta, y, x) = x'*(y-x*theta)
>
> (g::Foo)(x) = g.f(x, g.y, g.x)
>
> a = A(Bar(Foo(f, y, x)))
>
> m(a::A) = a.a.b([.1,.1,.1,.1])
> ```
>
> The output of @code_warntype is below:
>
> ```
> @code_warntype m(a)
>
> Variables:
>   #self#::#m
>   a::A{Bar{Foo}}
>
> Body:
>   begin
>   SSAValue(1) =
> (Core.getfield)((Core.getfield)(a::A{Bar{Foo}},:a)::Bar{Foo},:b)::Foo
>   SSAValue(0) = $(Expr(:invoke, LambdaInfo for vect(::Float64,
> ::Vararg{Float64,N}), :(Base.vect), 0.1, 0.1, 0.1, 0.1))
>   return
> ((Core.getfield)(SSAValue(1),:f)::F)(SSAValue(0),(Core.getfield)(SSAValue(1),:y)::Array{Float64,1},(Core.getfield)(SSAValue(1),:x)::Array{Float64,2})::Any
>   end::Any
> ```
>
> I thought v0.5 would solve the problem with functions not being
> correctly inferred. Maybe I am simply doing something patently stupid.


[julia-users] Re: Nested Grouping in Gadfly

2016-10-14 Thread Christopher Fisher
I guess one approach is to use subplots with the grid lines turned off. But 
if someone has a better solution, I would like to know. 
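
A sketch of that subplot idea with Gadfly's `Geom.subplot_grid` (the column names here are made up, and this is untested against any particular Gadfly version):

```julia
using Gadfly, DataFrames

df = DataFrame(condition = [1, 1, 1, 2, 2, 2],
               block     = [1, 2, 3, 1, 2, 3],
               score     = rand(6))

# One panel per condition; blocks 1 to 3 along the x axis within each panel.
plot(df, xgroup = :condition, x = :block, y = :score,
     Geom.subplot_grid(Geom.point, Geom.line))
```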

On Friday, October 14, 2016 at 6:18:35 AM UTC-4, Christopher Fisher wrote:
>
> Hi all-
>
> I want to create a plot with nested grouping. The data contain a condition 
> grouping variable (1 and 2) and a block grouping variable (1,2 and 3). So I 
> would like to be able to group the data by condition and display the blocks 
> (in order from 1 to 3) within each condition. Ultimately, it would contain 
> two lines (one for each condition) with three points each and a label for 
> the conditions. Is this possible?
>
> Thank you,
>
> Chris
>


[julia-users] Type stability of function in composite types

2016-10-14 Thread Giuseppe Ragusa
I am failing to understand why the following code produces type instability 
(caveat: the code is a reduction of larger and more complex code, but the 
features are the same). 

```
type Foo
  f::Function
  y::Array{Float64, 1}
  x::Array{Float64, 2}  
end

type Bar{T}
  b::T
end

type A{T}
  a::T
end

x = randn(100,4)
y = randn(100)

f(theta, y, x) = x'*(y-x*theta)

(g::Foo)(x) = g.f(x, g.y, g.x)

a = A(Bar(Foo(f, y, x)))

m(a::A) = a.a.b([.1,.1,.1,.1])
```

The output of @code_warntype is below:

```
@code_warntype m(a)

Variables:
  #self#::#m
  a::A{Bar{Foo}}

Body:
  begin
  SSAValue(1) = 
(Core.getfield)((Core.getfield)(a::A{Bar{Foo}},:a)::Bar{Foo},:b)::Foo
  SSAValue(0) = $(Expr(:invoke, LambdaInfo for vect(::Float64, 
::Vararg{Float64,N}), :(Base.vect), 0.1, 0.1, 0.1, 0.1))
  return 
((Core.getfield)(SSAValue(1),:f)::F)(SSAValue(0),(Core.getfield)(SSAValue(1),:y)::Array{Float64,1},(Core.getfield)(SSAValue(1),:x)::Array{Float64,2})::Any
  end::Any
```

I thought v0.5 would solve the problem with functions not being 
correctly inferred. Maybe I am simply doing something patently stupid. 







[julia-users] list all methods on certain datatype

2016-10-14 Thread Paulito Palmes
Is there a function or keyboard shortcut to list all functions operating on 
a certain datatype?

For example, in a typical object-based approach, you can type the object in 
the REPL with a dot and it completes all methods operating on that object. In 
Julia, we can only list all the behavior of a certain function, like 
methods(+), but in most cases you are more interested not in the function + 
itself but in the list of functions operating on a given datatype. In many 
cases you have forgotten the names of the functions, but you know they operate 
on a certain object only, so it is easy to recall them if you can list the 
names of the functions operating on that object. Currently, if you do 
methods(+), you get an endless list. It would be nice if you could do 
functionlists(object).


Re: [julia-users] Re: Julia and the Tower of Babel

2016-10-14 Thread Jeffrey Sarnoff
first pass at naming guidelines https://github.com/JuliaPraxis/Naming

On Thursday, October 13, 2016 at 8:07:18 AM UTC-4, Páll Haraldsson wrote:
>
> On Sunday, October 9, 2016 at 9:59:12 AM UTC, Michael Borregaard wrote:
>>
>>
>> So when I came to julia I was struck by how structured the package 
>> ecosystem appears to be, yet, in spite of the micropackaging. [..] I think 
>> there are a number of reasons for this difference, but I also believe that 
>> a primary reason is the reliance on github for developing the package 
>> ecosystem from the bottom up, and the use of organizations.
>>
>
> Could be; my feeling is that Julia allows for better
>
> https://en.wikipedia.org/wiki/Separation_of_concerns [term "was probably 
> coined by Edsger W. Dijkstra 
>  in his 1974 paper "On 
> the role of scientific thought" "; synonym for "modularity"?]
>
> than other languages; OO (and information hiding) has been credited as 
> helping, but my feeling is that multiple dispatch is even better for it.
>
>
> That is, leads to low:
>
> https://en.wikipedia.org/wiki/Coupling_(computer_programming)
> "Coupling is usually contrasted with cohesion. Low coupling 
>  often correlates with high 
> cohesion, and vice versa. Low coupling is often a sign of a well-structured 
> computer 
> system  and a good design"
>
>
> https://en.wikipedia.org/wiki/Cohesion_(computer_science)
>
> Now, as an outsider looking in, e.g. on:
>
> https://en.wikipedia.org/wiki/Automatic_differentiation
>
> There seems to be lots of redundant packages with e.g.
>
> https://github.com/denizyuret/AutoGrad.jl
>
>
> Maybe it's just my limited math skills showing: are there subtle 
> differences explaining or requiring all these packages?
>
> Do you expect some/many packages to just die?
>
> One solution to many similar packages is a:
>
> https://en.wikipedia.org/wiki/Facade_pattern
>
> e.g. Plots.jl and then backends (you may care less about(?)).
>
>
> Not sure when you use all these similar (or complementary?) packages 
> together.. if it applies.
>
>
> In my other answer I misquoted (to be clear, the original user's comment is 
> quoting):
>
> Style Insensitive?
> https://github.com/nim-lang/Nim/issues/521
> >Nimrod is a style-insensitive language. This means that it is not 
> case-sensitive and even underscores are ignored: type is a reserved word, 
> and so is TYPE or T_Y_P_E. The idea behind this is that this allows 
> programmers to use their own preferred spelling style and libraries written 
> by different programmers cannot use incompatible conventions. [..]
>
> Please *rethink* about that or at least give us an option to disable 
> both: case insensitive and also underscore ignored
>
> [another user]:
>
> Also a consistent style for code bases is VASTLY overrated, in fact I 
> almost never had the luxury of it and yet it was never a problem."
>


Re: [julia-users] GC rooting for embedding: what is safe and unsafe?

2016-10-14 Thread Bart Janssens
Hi,

Replies below, to the best of my understanding of the Julia C interface:

On Fri, Oct 14, 2016 at 11:47 AM Gunnar Farnebäck 
wrote:

> Reading through the threads and issues on gc rooting for embedded code, as
> well as the code comments above the JL_GC_PUSH* macros in julia.h, I'm
> still uncertain about how defensive it's necessary to be and best
> practices. I'll structure this into a couple of cases with questions.
>
> 1. One of the use cases in examples/embedding.c is written:
>
> jl_function_t *func = jl_get_function(jl_base_module, "sqrt");
> jl_value_t* argument = jl_box_float64(2.0);
> jl_value_t* ret = jl_call1(func, argument);
>
> if (jl_is_float64(ret)) {
> double retDouble = jl_unbox_float64(ret);
> printf("sqrt(2.0) in C: %e\n", retDouble);
> }
>
>
>
Is this missing gc rooting for argument during the call to jl_call1 or is
> it safe without it?
> Would ret need gc rooting to be safe during the calls to jl_is_float64
> and/or jl_unbox_float64?
>

The jl_call argument must be rooted since func may allocate. I don't think
the operations on ret allocate, but if you're unsure it's better to root
it. Also, as your code evolves you may decide to perform extra operations
on ret and then it's easy to forget the GC rooting at that point, so I'd
root ret here.


>
> 2.
> jl_value_t *a = jl_box_float64(1.0);
> jl_value_t *b = jl_box_float64(2.0);
> JL_GC_PUSH2(&a, &b);
>
> Is this unsafe since a is not yet rooted during the second call to
> jl_box_float64 and must instead be written like below?
>
> jl_value_t *a = 0;
> jl_value_t *b = 0;
> JL_GC_PUSH2(&a, &b);
> a = jl_box_float64(1.0);
> b = jl_box_float64(2.0);
>
> For a single variable it's just fine to do like this though?
> jl_value_t *a = jl_box_float64(1.0);
> JL_GC_PUSH1(&a);
>
>
Yes, since jl_box will allocate.


> 3. Are
> jl_function_t *func = jl_get_function(jl_base_module, "println");
> jl_value_t *a = 0;
> jl_value_t *b = 0;
> JL_GC_PUSH2(&a, &b);
> a = jl_box_float64(1.0);
> b = jl_box_float64(2.0);
> jl_call2(func, a, b);
> JL_GC_POP();
>
> and
>
> jl_function_t *func = jl_get_function(jl_base_module, "println");
> jl_value_t **args;
> JL_GC_PUSHARGS(args, 2);
> args[0] = jl_box_float64(1.0);
> args[1] = jl_box_float64(2.0);
> jl_call(func, args, 2);
> JL_GC_POP();
>
> equivalent and both safe? Are there any reasons to choose one over the
> other, apart from personal preferences?
>

They are equivalent, looking at the code for the macro it seems that the
JL_GC_PUSHARGS variant heap-allocates the array of pointers to root, so
that might be slightly slower. I'd only use the JL_GC_PUSHARGS version if
the number of arguments comes from a variable or a parameter or similar.


>
> 4. Can any kind of exception checking be done safely without rooting the
> return value?
> jl_value_t *ret = jl_call1(...);
> if (jl_exception_occurred())
> printf("%s \n", jl_typeof_str(jl_exception_occurred()));
> else
> d = jl_unbox_float64(ret);
>
> 5. What kind of costs, other than increased code size, can be expected
> from overzealous gc rooting?
>

These I leave for the experts :)


>
> 6. Is it always safe not to gc root the function pointer returned from
> jl_get_function?
>
>
A function should be bound to the symbol you use to look it up, so it is
already rooted.

Cheers,

Bart


[julia-users] Nested Grouping in Gadfly

2016-10-14 Thread Christopher Fisher
Hi all-

I want to create a plot with nested grouping. The data contain a condition 
grouping variable (1 and 2) and a block grouping variable (1,2 and 3). So I 
would like to be able to group the data by condition and display the blocks 
(in order from 1 to 3) within each condition. Ultimately, it would contain 
two lines (one for each condition) with three points each and a label for 
the conditions. Is this possible?

Thank you,

Chris
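
One possible approach, sketched under assumptions: Gadfly's `color` aesthetic both groups and labels series, so mapping the condition to `color` and the block to `x` should give one line per condition with the blocks ordered along the x axis. The DataFrame column names (:condition, :block, :y) and the sample values below are made up for illustration:

```julia
using Gadfly, DataFrames

# Hypothetical data: 2 conditions x 3 blocks
df = DataFrame(condition = repeat(["1", "2"], inner = 3),
               block     = repeat([1, 2, 3], outer = 2),
               y         = [0.4, 0.5, 0.7, 0.3, 0.6, 0.8])

# One line per condition, blocks 1:3 along the x axis;
# `color` both groups the lines and puts a condition label in the legend.
plot(df, x = :block, y = :y, color = :condition, Geom.point, Geom.line)
```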


[julia-users] GC rooting for embedding: what is safe and unsafe?

2016-10-14 Thread Gunnar Farnebäck
Reading through the threads and issues on gc rooting for embedded code, as 
well as the code comments above the JL_GC_PUSH* macros in julia.h, I'm 
still uncertain about how defensive it's necessary to be and best 
practices. I'll structure this into a couple of cases with questions.

1. One of the use cases in examples/embedding.c is written:

jl_function_t *func = jl_get_function(jl_base_module, "sqrt");
jl_value_t* argument = jl_box_float64(2.0);
jl_value_t* ret = jl_call1(func, argument);

if (jl_is_float64(ret)) {
double retDouble = jl_unbox_float64(ret);
printf("sqrt(2.0) in C: %e\n", retDouble);
}

Is this missing gc rooting for argument during the call to jl_call1 or is 
it safe without it?
Would ret need gc rooting to be safe during the calls to jl_is_float64 
and/or jl_unbox_float64?

2. 
jl_value_t *a = jl_box_float64(1.0);
jl_value_t *b = jl_box_float64(2.0);
JL_GC_PUSH2(&a, &b);

Is this unsafe since a is not yet rooted during the second call to 
jl_box_float64 and must instead be written like below?

jl_value_t *a = 0;
jl_value_t *b = 0;
JL_GC_PUSH2(&a, &b);
a = jl_box_float64(1.0);
b = jl_box_float64(2.0);

For a single variable it's just fine to do like this though?
jl_value_t *a = jl_box_float64(1.0);
JL_GC_PUSH1(&a);

3. Are
jl_function_t *func = jl_get_function(jl_base_module, "println");
jl_value_t *a = 0;
jl_value_t *b = 0;
JL_GC_PUSH2(&a, &b);
a = jl_box_float64(1.0);
b = jl_box_float64(2.0);
jl_call2(func, a, b);
JL_GC_POP();

and

jl_function_t *func = jl_get_function(jl_base_module, "println");
jl_value_t **args;
JL_GC_PUSHARGS(args, 2);
args[0] = jl_box_float64(1.0);
args[1] = jl_box_float64(2.0);
jl_call(func, args, 2);
JL_GC_POP();

equivalent and both safe? Are there any reasons to choose one over the 
other, apart from personal preferences?

4. Can any kind of exception checking be done safely without rooting the 
return value?
jl_value_t *ret = jl_call1(...);
if (jl_exception_occurred())
printf("%s \n", jl_typeof_str(jl_exception_occurred()));
else
d = jl_unbox_float64(ret);

5. What kind of costs, other than increased code size, can be expected from 
overzealous gc rooting?

6. Is it always safe not to gc root the function pointer returned from 
jl_get_function?



[julia-users] Re: Merge functions from different headers (Matrix vs. Vector)

2016-10-14 Thread DNF
As a proposal, this is what I would do, given your requirements:

function _scaleRestore!(Z, Zout, shift, stretch)
for j in 1:size(Z, 2), i in 1:size(Z, 1)
Zout[i, j] = Z[i, j] * stretch[j] + shift[j]
end
return Zout
end
scaleRestore!(Z::Vector, shift::Number, stretch::Number) = _scaleRestore!(Z, 
Z, shift, stretch)
scaleRestore!(Z::Matrix, shift::Vector, stretch::Vector) = _scaleRestore!(Z, 
Z, shift, stretch)

scaleRestore(Z::Vector, shift::Number, stretch::Number) = _scaleRestore!(Z, 
similar(Z), shift, stretch)
scaleRestore(Z::Matrix, shift::Vector, stretch::Vector) = _scaleRestore!(Z, 
similar(Z), shift, stretch)

I put in both mutating and non-mutating versions, just in case. Single 
signature definition I cannot help you with, I'm afraid.
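
To illustrate, a small usage sketch of the methods above (assuming the definitions from this message are in scope; the input values are made up):

```julia
# Matrix case: per-column shift/stretch vectors, mutating in place
Z = [1.0 2.0; 3.0 4.0]
scaleRestore!(Z, [10.0, 20.0], [2.0, 3.0])
# Z is now [12.0 26.0; 16.0 32.0]

# Vector case: scalar shift/stretch, non-mutating variant
v = [1.0, 2.0]
w = scaleRestore(v, 5.0, 2.0)
# w == [7.0, 9.0]; v is unchanged
```

Note the scalar case works because in Julia a scalar can be indexed as `s[1]`, so the shared `_scaleRestore!` loop covers both signatures.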


[julia-users] Re: Merge functions from different headers (Matrix vs. Vector)

2016-10-14 Thread DNF
I don't know of any way to accomplish what you want with a single method 
signature. I don't see how Union can help you, because you would not be 
able to disallow (Vector x Vector x Vector) input, for example. You 
normally achieve this in Julia by writing two separate methods, which is 
what you already did. Do you have a very good reason to merge those two 
methods?

With respect to your memory constraints, you should not use transpose 
anyway, because that creates a copy. The following uses minimal memory, 
scales Z in-place, and works for both your input cases (it is allowed to 
index into a scalar), but doesn't solve your type signature problem:

function scaleRestore!(Z, shift, stretch)
for j in 1:size(Z, 2), i in 1:size(Z, 1)
Z[i, j] = Z[i, j] * stretch[j] + shift[j]
end
return Z
end

If you don't want in-place, just create an output array first.


On Friday, October 14, 2016 at 10:35:56 AM UTC+2, Martin Florek wrote:
>
> Thanks DNF,
> but I want to merge only versions 1 and 2, restricted to one vector - two 
> scalars OR one matrix - two vectors, into a single definition. The exact 
> form of f_scaleRestore does not matter; it can vary. I also have memory 
> requirements, so I use broadcast(). So the question is how to write a 
> function that accepts only the two desired kinds of input. The solution 
> is probably Union{}, but how would I implement it?
>
> On Friday, 14 October 2016 10:03:15 UTC+2, DNF wrote:
>>
>> Hmm. I slightly misread the way you had set up your code. I thought you 
>> wanted your code to cover three cases: all scalar, one vector - two 
>> scalars, and one matrix - two vectors.
>>
>> So to be clear: defining the function:
>>
>> scaleRestore(a, b, c) = a .* b' .+ c'
>>
>> covers both your cases and then some others. Unless you have some 
>> particular reason to constrain the inputs in some way, I wouldn't add all 
>> the type declarations, but just leave the function generic.
>>
>>
>> On Friday, October 14, 2016 at 9:55:21 AM UTC+2, DNF wrote:
>>>
>>> This should work for the three cases you have set up:
>>>
>>> f_scaleRestore(a, b, c) = a .* b' .+ c'
>>>
>>>
>>> On Friday, October 14, 2016 at 9:12:55 AM UTC+2, Martin Florek wrote:

 Hi all,

 I have the following two functions and I want to sensibly merge them 
 into one. How can I merge the headers and body of the function for Matrix and Vector?
 - 
 f_scaleRestore(a::Float64, b::Float64, c::Float64) = a * b + c;
 -  
 - # version 1
 - function scaleRestore(Z::Matrix{Float64}, shift::Vector{Float64}, 
 stretch::Vector{Float64})
 -   broadcast(f_scaleRestore, Z, stretch', shift')
 - end
 -  
 - # version 2
 - function scaleRestore(Z::Vector{Float64}, shift::Float64, stretch::
 Float64)
 -   broadcast(f_scaleRestore, Z, stretch, shift)
 - end


 Thanks in advance,
 Martin

>>>

Re: [julia-users] Memory allocation issue when using external packages in module

2016-10-14 Thread Jan
Thanks for the hint. That seems to be the reason!

I have a couple of follow up questions so that I learn more about Julia. 
Would be nice if someone takes a couple of minutes to educate me.

I found a simple example reproducing my issue:

module FooBar

# Including BlackBoxOptim causes type instability for exponents

using BlackBoxOptim

# Multiplying instead of exponentiating works fine, even when using 
BlackBoxOptim

f(a::Float64, b::Float64) = a^b

end

If I run @code_warntype with and without BlackBoxOptim for the code above, 
it is clear that it causes the type instability.

With BlackBoxOptim: (the @code_warntype screenshot attached here was not 
preserved in the text archive)

Without BlackBoxOptim: (the @code_warntype screenshot attached here was not 
preserved in the text archive)

For my real code, the @code_warntype macro produces identical results 
whether I use BlackBoxOptim or not even though one is type unstable.

*Question 1 + related ones :) :* Is there an alternative way to check type 
instability which is more detailed but still halfway easily readable? Would 
it be possible to detect the issue with @code_llvm or similar? I tried to 
write @code_llvm to a file since the output is very long, but never 
succeeded. Is this possible?
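
On the last point: the function forms of these tools accept an IO stream as their first argument, so the output can be redirected to a file. A sketch, reusing the toy `f` from the module above (file names are arbitrary):

```julia
f(a::Float64, b::Float64) = a^b

# code_llvm(io, f, types) writes the LLVM IR to the given stream
open("llvm_dump.txt", "w") do io
    code_llvm(io, f, (Float64, Float64))
end

# The same pattern works for type-instability checking
open("warntype_dump.txt", "w") do io
    code_warntype(io, f, (Float64, Float64))
end
```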

*Question 2:* When the issue 18465 is fixed, will that end up in Julia 
0.5.1 or is a fixed version available already before that somewhere?

Many thanks for any help. I really like to work with Julia so some input 
would be highly appreciated.

Jan




[julia-users] Re: Merge functions from different headers (Matrix vs. Vector)

2016-10-14 Thread DNF
Hmm. I slightly misread the way you had set up your code. I thought you 
wanted your code to cover three cases: all scalar, one vector - two 
scalars, and one matrix - two vectors.

So to be clear: defining the function:

scaleRestore(a, b, c) = a .* b' .+ c'

covers both your cases and then some others. Unless you have some 
particular reason to constrain the inputs in some way, I wouldn't add all 
the type declarations, but just leave the function generic.


On Friday, October 14, 2016 at 9:55:21 AM UTC+2, DNF wrote:
>
> This should work for the three cases you have set up:
>
> f_scaleRestore(a, b, c) = a .* b' .+ c'
>
>
> On Friday, October 14, 2016 at 9:12:55 AM UTC+2, Martin Florek wrote:
>>
>> Hi all,
>>
>> I have the following two functions and I want to sensibly merge them 
> into one. How can I merge the headers and body of the function for Matrix and Vector?
>> - 
>> f_scaleRestore(a::Float64, b::Float64, c::Float64) = a * b + c;
>> -  
>> - # version 1
>> - function scaleRestore(Z::Matrix{Float64}, shift::Vector{Float64}, 
>> stretch::Vector{Float64})
>> -   broadcast(f_scaleRestore, Z, stretch', shift')
>> - end
>> -  
>> - # version 2
>> - function scaleRestore(Z::Vector{Float64}, shift::Float64, stretch::
>> Float64)
>> -   broadcast(f_scaleRestore, Z, stretch, shift)
>> - end
>>
>>
>> Thanks in advance,
>> Martin
>>
>

Re: [julia-users] shared array of user defined type

2016-10-14 Thread Mauro
On Fri, 2016-10-14 at 09:41, Alexandros Fakos  wrote:
> Thank you Mauro. If I understand correctly, this means that in SharedArrays
> I cannot use immutable types whose fields contain arrays. Is that right?
> Thanks a lot,
> Alex

Yes. You can check with `isbits`.
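
For example, a minimal sketch (Julia 0.5 syntax; the `Point` and `Curve` types are made up for illustration):

```julia
# An immutable whose fields are all plain bits types is isbits:
immutable Point
    x::Float64
    y::Float64
end

isbits(Point)                 # true: eligible for SharedArray storage
S = SharedArray(Point, (4,))  # works

# An immutable holding an array is NOT isbits, because
# Vector{Float64} is a heap-allocated mutable object:
immutable Curve
    pts::Vector{Float64}
end

isbits(Curve)                 # false: cannot go in a SharedArray
```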

> On Friday, October 14, 2016 at 1:57:42 AM UTC-5, Mauro wrote:
>>
>> On Fri, 2016-10-14 at 00:02, Alexandros Fakos > > wrote:
>> > Hi,
>> >
>> > Is there a way to create a shared array of a user defined composite
>> > type?
>>
>> Yes, but only for isbits types, i.e. immutables which do not contain
>> non-immutable fields.  Otherwise you could look into the new &
>> experimental threads support in 0.5.
>>
>> > I want to parallelize using pmap() on a function f() which has as
>> > arguments user defined composite types.
>>


[julia-users] Re: Merge functions from different headers (Matrix vs. Vector)

2016-10-14 Thread DNF
This should work for the three cases you have set up:

f_scaleRestore(a, b, c) = a .* b' .+ c'


On Friday, October 14, 2016 at 9:12:55 AM UTC+2, Martin Florek wrote:
>
> Hi all,
>
> I have the following two functions and I want to sensibly merge them into 
> one. How can I merge the headers and body of the function for Matrix and Vector?
> - 
> f_scaleRestore(a::Float64, b::Float64, c::Float64) = a * b + c;
> -  
> - # version 1
> - function scaleRestore(Z::Matrix{Float64}, shift::Vector{Float64}, 
> stretch::Vector{Float64})
> -   broadcast(f_scaleRestore, Z, stretch', shift')
> - end
> -  
> - # version 2
> - function scaleRestore(Z::Vector{Float64}, shift::Float64, stretch::
> Float64)
> -   broadcast(f_scaleRestore, Z, stretch, shift)
> - end
>
>
> Thanks in advance,
> Martin
>


Re: [julia-users] shared array of user defined type

2016-10-14 Thread Alexandros Fakos
Thank you Mauro. If I understand correctly, this means that in SharedArrays 
I cannot use immutable types whose fields contain arrays. Is that right?
Thanks a lot,
Alex

On Friday, October 14, 2016 at 1:57:42 AM UTC-5, Mauro wrote:
>
> On Fri, 2016-10-14 at 00:02, Alexandros Fakos  > wrote: 
> > Hi, 
> > 
> > Is there a way to create a shared array of a user defined composite 
> > type? 
>
> Yes, but only for isbits types, i.e. immutables which do not contain 
> non-immutable fields.  Otherwise you could look into the new & 
> experimental threads support in 0.5. 
>
> > I want to parallelize using pmap() on a function f() which has as 
> > arguments user defined composite types. 
>


[julia-users] Merge functions from different headers (Matrix vs. Vector)

2016-10-14 Thread Martin Florek
Hi all,

I have the following two functions and I want to sensibly merge them into 
one. How can I merge the headers and body of the function for Matrix and Vector?
- 
f_scaleRestore(a::Float64, b::Float64, c::Float64) = a * b + c;
-  
- # version 1
- function scaleRestore(Z::Matrix{Float64}, shift::Vector{Float64}, 
stretch::Vector{Float64})
-   broadcast(f_scaleRestore, Z, stretch', shift')
- end
-  
- # version 2
- function scaleRestore(Z::Vector{Float64}, shift::Float64, stretch::Float64
)
-   broadcast(f_scaleRestore, Z, stretch, shift)
- end


Thanks in advance,
Martin


Re: [julia-users] shared array of user defined type

2016-10-14 Thread Mauro
On Fri, 2016-10-14 at 00:02, Alexandros Fakos  wrote:
> Hi,
>
> Is there a way to create a shared array of a user defined composite
> type?

Yes, but only for isbits types, i.e. immutables which do not contain
non-immutable fields.  Otherwise you could look into the new &
experimental threads support in 0.5.

> I want to parallelize using pmap() on a function f() which has as
> arguments user defined composite types.