[julia-users] Re: Representation of a material conditional (implication)

2016-10-07 Thread Fengyang Wang
As Jussi Piitulainen noted, the ^ operator takes its arguments in the 
reverse order (b^a evaluates the implication a → b), so you need to wrap 
it in a function if you want the natural argument order.
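For reference, a small wrapper that restores the natural argument order (the name `implies` is mine, following Jussi's suggestion below, not anything in Base):

```julia
# Material conditional p → q via Bool comparison: since false < true,
# p <= q is false only when p = true and q = false — exactly the
# truth table of implication.
implies(p::Bool, q::Bool) = p <= q

implies(true, false)   # false — the only falsifying row
implies(false, true)   # true  — vacuously true antecedent
```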

On Friday, October 7, 2016 at 10:05:34 AM UTC-4, Kevin Liu wrote:
>
> *julia> **@code_native(b^a)*
>
>.section__TEXT,__text,regular,pure_instructions
>
> Filename: bool.jl
>
> Source line: 39
>
>pushq   %rbp
>
>movq%rsp, %rbp
>
> Source line: 39
>
>xorb$1, %sil
>
>orb %dil, %sil
>
>movb%sil, %al
>
>popq%rbp
>
>ret
>
> *julia> **@code_native(a<=b)*
>
>.section__TEXT,__text,regular,pure_instructions
>
> Filename: bool.jl
>
> Source line: 29
>
>pushq   %rbp
>
>movq%rsp, %rbp
>
> Source line: 29
>
>xorb$1, %dil
>
>orb %sil, %dil
>
>movb%dil, %al
>
>popq%rbp
>
>ret
>
> *julia> **@code_native(ifelse(a,b,true))*
>
>.section__TEXT,__text,regular,pure_instructions
>
> Filename: operators.jl
>
> Source line: 48
>
>pushq   %rbp
>
>movq%rsp, %rbp
>
>testb   $1, %dil
>
> Source line: 48
>
>jne L17
>
>movb%dl, %sil
>
> L17:   movb%sil, %al
>
>popq%rbp
>
>ret
>
>
>
> On Friday, October 7, 2016 at 10:58:34 AM UTC-3, Kevin Liu wrote:
>>
>> *julia> **@code_llvm(b^a)*
>>
>>
>> define i1 @"julia_^_21646"(i1, i1) {
>>
>> top:
>>
>>   %2 = xor i1 %1, true
>>
>>   %3 = or i1 %0, %2
>>
>>   ret i1 %3
>>
>> }
>>
>> On Friday, October 7, 2016 at 10:56:26 AM UTC-3, Kevin Liu wrote:
>>>
>>> Sorry, no need, I got this
>>>
>>> *julia> **@code_llvm(a<=b)*
>>>
>>>
>>> define i1 @"julia_<=_21637"(i1, i1) {
>>>
>>> top:
>>>
>>>   %2 = xor i1 %0, true
>>>
>>>   %3 = or i1 %1, %2
>>>
>>>   ret i1 %3
>>>
>>> }
>>>
>>>
>>> *julia> **@code_llvm(ifelse(a,b,true))*
>>>
>>>
>>> define i1 @julia_ifelse_21636(i1, i1, i1) {
>>>
>>> top:
>>>
>>>   %3 = select i1 %0, i1 %1, i1 %2
>>>
>>>   ret i1 %3
>>>
>>> }
>>>
>>>
>>> How do you read this output?
>>>
>>> On Friday, October 7, 2016 at 10:50:57 AM UTC-3, Kevin Liu wrote:

 Jeffrey, can you show the expression you put inside @code_llvm() and 
 @code_native() for evaluation? 

 On Friday, October 7, 2016 at 2:26:56 AM UTC-3, Jeffrey Sarnoff wrote:
>
> Hi Jussi,
>
> Your version compiles down more neatly than the ifelse version. On my 
> system, BenchmarkTools gives nearly identical results; I don't know why, 
> but the ifelse version is consistently a smidge faster (~2%, relative 
> speed). Here is the LLVM code and local native code for each; your version 
> looks tidier.
>
>
> ```
> implies(p::Bool, q::Bool) = (p <= q)
>
> # llvm
>   %2 = xor i8 %0, 1
>   %3 = or i8 %2, %1
>   ret i8 %3
>
> # native with some common code removed
>   xorb   $1, %dil
>   orb    %sil, %dil
>   movb   %dil, %al
>   popq   %rbp
>   retq
>
> implies(p::Bool, q::Bool) = ifelse( p, q, true )
>
> # llvm
>   %2 = and i8 %0, 1
>   %3 = icmp eq i8 %2, 0
>   %4 = select i1 %3, i8 1, i8 %1
>   ret i8 %4
>
> # native with some common code removed
>   testb  $1, %dil
>   movb   $1, %al
>   je     L15
>   movb   %sil, %al
> L15:
>   popq   %rbp
>   retq
> ```
>
>
>
>
> On Friday, October 7, 2016 at 12:22:23 AM UTC-4, Jussi Piitulainen 
> wrote:
>>
>>
>> implies(p::Bool, q::Bool) = p <= q
>>
>>
>>
>> torstai 6. lokakuuta 2016 19.10.51 UTC+3 Kevin Liu kirjoitti:
>>>
>>> How is an implication represented in Julia? 
>>>
>>>
>>> https://en.wikipedia.org/wiki/Material_conditional#Definitions_of_the_material_conditional
>>>
>>

Re: [julia-users] How does one read and write a sizeable amount of data (megabytes) from a process?

2016-10-07 Thread Eric Davies
Thanks, I will try both of those. I had tried making smaller pieces before 
but I'll try with `Base.process_events(false)`. 


Re: [julia-users] How does one read and write a sizeable amount of data (megabytes) from a process?

2016-10-07 Thread Keno Fischer
Hi Eric,

The problem here is that the kernel is blocking the process because nobody
is reading on the other side (the writing process is blocked, so it can't
do any reading itself). We can probably do some things to improve the
situation, but for now I'd recommend the following:
- Start reading before writing, e.g. using
t = @async read(pout)
...
# Later
wait(t)
- Break up the write into smaller pieces. If you want to be absolutely sure
that the reading happened,
you can also call `Base.process_events(false)` after doing a write.
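Combining the two suggestions above into one sketch (this uses the 0.5-era `readandwrite` API from this thread; the function name and the 64 KiB chunk size are arbitrary choices of mine, and it's a pattern sketch rather than a tested fix):

```julia
function write_then_read(cmd::Cmd, data)
    pout, pin, p = readandwrite(cmd)
    t = @async read(pout)              # start draining stdout before writing
    # break the write into smaller pieces so the pipe buffer never fills up
    # while nobody is reading
    chunk = 65536
    for i in 1:chunk:length(data)
        write(pin, data[i:min(i + chunk - 1, end)])
        Base.process_events(false)     # let the reader task make progress
    end
    close(pin)
    wait(t)                            # in 0.5, wait(t) returns the task's result
end
```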


On Fri, Oct 7, 2016 at 2:56 PM, Eric Davies  wrote:

> I'm trying to write a bunch of data to a process, then read out the
> result. Replaced the process here with `cat -` because that's close
> enough.
>
> function bloop(size)
>   data = "f" ^ size
>
>   print("start")
>   (pout, pin, p) = readandwrite(`cat -`)
>   print("write")
>   write(pin, data)
>   print("flush")
>   flush(pin)
>   print("close")
>   close(pin)
>   print("read")
>   output = read(pout)
>   print("close")
>   close(p)
>
>   println()
>
>   output
> end
>
> If `size <= 146944`, this succeeds. If `146944 < size <= 147456`, Julia
> blocks forever at the `read` call in a `kevent` system call. If `size >
> 147456`, Julia blocks forever at the `write` call in a `write` system call.
>
> How can I make this work?
>


Re: [julia-users] Threads.@threads and throw ErrorException interaction

2016-10-07 Thread Diego Javier Zea
Thanks Yichao, I found the related issue: 
https://github.com/JuliaLang/julia/issues/17532
Also I found useful your comment about task and IO in this issue: 
https://github.com/JuliaLang/julia/issues/14494
Thanks!


[julia-users] Trick to use plotlyjs in Atom

2016-10-07 Thread romain dupont
Hi,

I would like to share the following trick to use plotlyjs in Atom:

using Plots
plotlyjs()

plot(rand(10))
gui()


Cheers




Re: [julia-users] help with syntax for Function-like objects for parametric types

2016-10-07 Thread Chris Stook
That makes sense.  I should have seen that.

Thank you!



[julia-users] Re: travis with dependency on scipy?

2016-10-07 Thread Steven G. Johnson


On Friday, October 7, 2016 at 9:10:44 AM UTC-4, David van Leeuwen wrote:
>
> Hello, 
>
> For a tiny package  that depends 
> on PyCall and python's scipy.spatial I am trying to engineer a 
> `.travis.yml`,
>

You can always do ENV["PYTHON"]="" to force PyCall to install its own 
Python distro (via Conda), and do pyimport_conda("scipy.spatial", "scipy") 
to make Conda install scipy for you as needed.

Steven

PS. As explained in the PyCall README, I don't recommend using @pyimport in 
modules. Instead, do
   const spatial = PyNULL()
and then, in your __init__ function, do copy!(spatial, 
pyimport_conda("scipy.spatial", "scipy")). This way, you can put 
__precompile__(true) at the top of your module and safely precompile it.
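Put together, the pattern looks roughly like this (the module name `SpatialThing` is a placeholder; assumes a PyCall version with `PyNULL` and `pyimport_conda`):

```julia
__precompile__(true)
module SpatialThing  # hypothetical module name

using PyCall

# module-level constant, filled in at load time rather than precompile time
const spatial = PyNULL()

function __init__()
    # resolves scipy.spatial at runtime; installs scipy via Conda if
    # PyCall's Python distribution lacks it
    copy!(spatial, pyimport_conda("scipy.spatial", "scipy"))
end

# then call spatial[:cKDTree](...) etc. inside the module's functions

end
```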

PPS. I see from your source code that you had some confusion about 
Base.show vs. Base.display.  Never override Base.display for this sort of 
thing.   See the new manual section for more info:

 http://docs.julialang.org/en/latest/manual/types/#custom-pretty-printing


Re: [julia-users] help with syntax for Function-like objects for parametric types

2016-10-07 Thread Yichao Yu
On Fri, Oct 7, 2016 at 4:05 PM, Chris Stook  wrote:
> Calling objects of type T is only valid if N arguments are provided.  What
> is the correct syntax for this?
>
> immutable T{N}
>   t :: NTuple{N,Any}
> end
>
> (x::T)(args...) = error("wrong number of arguments")

function (x::T{N}){N}(args::Vararg{Any,N})

The parameter always goes after the callee.

>
> syntax: invalid function name "{N}(x::T{N})"
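Putting that correction together with the original snippet (Julia 0.5 syntax; note also that the type is `Vararg`, not `Varargs`):

```julia
immutable T{N}
    t::NTuple{N,Any}
end

# fallback for any argument count that doesn't match N
(x::T)(args...) = error("wrong number of arguments")

# the type parameter list goes after the callee, not before it
function (x::T{N}){N}(args::Vararg{Any,N})
    println("do stuff here")
end
```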


Re: [julia-users] Threads.@threads and throw ErrorException interaction

2016-10-07 Thread Yichao Yu
On Fri, Oct 7, 2016 at 4:02 PM, Diego Javier Zea  wrote:
> Hi,
> I was starting to play with Threads.@threads and I noticed a strange
> behaviour when the macro is used together with throw and ErrorException:
>
>   | | |_| | | | (_| |  |  Version 0.5.0 (2016-09-19 18:14 UTC)
>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
> |__/   |  x86_64-pc-linux-gnu
>
> julia> a = zeros(10);
>
> julia> Threads.@threads for i = 1:10
>a[i] = Threads.threadid()
>if 4 <= i <= 8
>throw(ErrorException("My Error"))
>end
>end
>
> julia> a
> 10-element Array{Float64,1}:
>  1.0
>  1.0
>  1.0
>  2.0
>  0.0
>  0.0
>  3.0
>  0.0
>  4.0
>  4.0
>
> julia> a = zeros(10);
>
> julia> Threads.@threads for i = 1:10
>if 4 <= i <= 8
>throw(ErrorException("My Error"))
>end
>a[i] = Threads.threadid()
>end
>
> julia> a
> 10-element Array{Float64,1}:
>  1.0
>  1.0
>  1.0
>  0.0
>  0.0
>  0.0
>  0.0
>  0.0
>  4.0
>  4.0
>
> The loop isn't terminated by the error, and no error message is printed.
> When the throw is at the end of the loop body, the result looks
> strange and unpredictable.
> Is this behaviour expected?


It's known and we have an issue for it already.

> Do I simply need to avoid using throw and threads together?

For now, yes. And unless you are working on threading support in Base
Julia, you should avoid using threads in general.
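If you do experiment with threads anyway, one workaround I'd sketch (my own pattern, not anything endorsed by Base) is to trap exceptions inside the loop body so nothing ever escapes a thread, and inspect the records afterwards:

```julia
a = zeros(10)
errs = Any[nothing for _ in 1:10]   # one slot per iteration

Threads.@threads for i = 1:10
    try
        if 4 <= i <= 8
            throw(ErrorException("My Error"))
        end
        a[i] = Threads.threadid()
    catch err
        errs[i] = err               # record instead of letting it escape
    end
end

# afterwards, report (or rethrow) whatever was caught
for (i, e) in enumerate(errs)
    e === nothing || println("iteration $i failed: ", e)
end
```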

>
> Best,
>


[julia-users] help with syntax for Function-like objects for parametric types

2016-10-07 Thread Chris Stook
Calling objects of type T is only valid if N arguments are provided.  What 
is the correct syntax for this?

immutable T{N}
  t :: NTuple{N,Any}
end

(x::T)(args...) = error("wrong number of arguments")
function {N}(x::T{N})(args::Varargs{Any,N})
  print("do stuff here")
end

syntax: invalid function name "{N}(x::T{N})"


[julia-users] Threads.@threads and throw ErrorException interaction

2016-10-07 Thread Diego Javier Zea
Hi, 
I was starting to play with* Threads.@threads *and I noticed a strange 
behaviour when the macro is used together with *throw* and *ErrorException*:

  | | |_| | | | (_| |  |  Version 0.5.0 (2016-09-19 18:14 UTC)
 _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
|__/   |  x86_64-pc-linux-gnu

julia> a = zeros(10);

julia> Threads.@threads for i = 1:10
   a[i] = Threads.threadid()
   if 4 <= i <= 8
   throw(ErrorException("My Error"))
   end
   end

julia> a
10-element Array{Float64,1}:
 1.0
 1.0
 1.0
 2.0
 0.0
 0.0
 3.0
 0.0
 4.0
 4.0

julia> a = zeros(10);

julia> Threads.@threads for i = 1:10
   if 4 <= i <= 8
   throw(ErrorException("My Error"))
   end
   a[i] = Threads.threadid()
   end

julia> a
10-element Array{Float64,1}:
 1.0
 1.0
 1.0
 0.0
 0.0
 0.0
 0.0
 0.0
 4.0
 4.0

The loop isn't terminated with the error and there is not error message 
printed. When the *throw* is at the end of the loop body, the result looks 
strange and unpredictable.
*Is this behaviour expected?  *
*Do I simply need to avoid using throw and threads together?*

*Best,*



Re: [julia-users] Re: multiple dispatch for operators

2016-10-07 Thread Stefan Karpinski
I think you're asking on the wrong list :P

On Fri, Oct 7, 2016 at 1:56 PM, Gabriel Gellner 
wrote:

> Any reason to not just use a function? (like np.dot etc)
>
> My understanding is that in python '*' means elementwise multiplication,
> so even if you could monkeypatch numpys __mul__ method to do the right
> thing wouldn't you be changing the semantics?
>
> Gabriel
>
> On Friday, October 7, 2016 at 3:51:11 AM UTC-6, Sisyphuss wrote:
>
>> In Julia, we can do multiple dispatch for operators, that is the
>> interpreter can identify:
>> float + integer
>> integer + integer
>> integer + float
>> float + float
>> as well as *user-defined* data structure.
>>
>> Recently, I am working on Python (I have no choice because Spark hasn't
>> yet a Julia binding). I intended to do the same thing -- multiplication --
>> between a Numpy matrix and self-defined Low-rank matrix. Of course, I
>> defined the `__rmul__ ` method for Low-rank matrix. However, it seems to me
>> that the Numpy matrix intercepts the `*` operator as its `__mul__` method,
>> which expects the argument on the right side of `*` to be a scalar.
>>
>> I would like to know if there is any way around this?
>>
>>


[julia-users] How does one read and write a sizeable amount of data (megabytes) from a process?

2016-10-07 Thread Eric Davies
I'm trying to write a bunch of data to a process, then read out the result. 
Replaced the process here with `cat -` because that's close enough.

function bloop(size)
  data = "f" ^ size

  print("start")
  (pout, pin, p) = readandwrite(`cat -`)
  print("write")
  write(pin, data)
  print("flush")
  flush(pin)
  print("close")
  close(pin)
  print("read")
  output = read(pout)
  print("close")
  close(p)

  println()

  output
end

If `size <= 146944`, this succeeds. If `146944 < size <= 147456`, Julia 
blocks forever at the `read` call in a `kevent` system call. If `size > 
147456`, Julia blocks forever at the `write` call in a `write` system call.

How can I make this work?


[julia-users] Re: StaticArrays vs FixedSizeArrays

2016-10-07 Thread Christoph Ortner
I've used FixedSizeArrays in the past but switched to StaticArrays; it seems 
more convenient in a few ways, though for 90% of use cases the two are 
comparable. To protect myself against a possible move to yet another 
package, I added a layer of `typealias`.
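For instance (Julia 0.5 syntax; the alias names are my own), the insulation layer can be as thin as:

```julia
using StaticArrays

# all downstream code uses these aliases, so a future switch to another
# fixed-size-array package only touches this one file
typealias Vec3 SVector{3,Float64}
typealias Mat3 SMatrix{3,3,Float64}
```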


Re: [julia-users] Indexing an Float64 scalar

2016-10-07 Thread Yichao Yu
On Fri, Oct 7, 2016 at 1:59 PM, Eduardo Lenz  wrote:
> Hi.
>
> What exactly is the point of a construction like
>
> "
> julia> a = 33.33
> 33.33
>
> julia> a[1]
> 33.33
> "
> be valid? I think it should raise an error, but there is a
> getindex(::Float64, ::Int64) defined in the core language (v0.5).

https://github.com/JuliaLang/julia/issues/7903 and if you make it
iterable it should also be indexable. (or both should be gone at the
same time if we agree on that)

>
>
>


[julia-users] Indexing an Float64 scalar

2016-10-07 Thread Eduardo Lenz
Hi.

What exactly is the point of a construction like 

"
julia> a = 33.33
33.33

julia> a[1]
33.33
"
be valid? I think it should raise an error, but there is a 
getindex(::Float64, ::Int64) defined in the core language (v0.5).





[julia-users] Re: multiple dispatch for operators

2016-10-07 Thread Gabriel Gellner
Any reason to not just use a function? (like np.dot etc)

My understanding is that in python '*' means elementwise multiplication, so 
even if you could monkeypatch numpys __mul__ method to do the right thing 
wouldn't you be changing the semantics?

Gabriel

On Friday, October 7, 2016 at 3:51:11 AM UTC-6, Sisyphuss wrote:

> In Julia, we can do multiple dispatch for operators, that is the 
> interpreter can identify:
> float + integer
> integer + integer
> integer + float
> float + float
> as well as *user-defined* data structure.
>
> Recently, I am working on Python (I have no choice because Spark hasn't 
> yet a Julia binding). I intended to do the same thing -- multiplication -- 
> between a Numpy matrix and self-defined Low-rank matrix. Of course, I 
> defined the `__rmul__ ` method for Low-rank matrix. However, it seems to me 
> that the Numpy matrix intercepts the `*` operator as its `__mul__` method, 
> which expects the argument on the right side of `*` to be a scalar.
>
> I would like to know if there is any way around this?
>
>

[julia-users] Re: Julia and the Tower of Babel

2016-10-07 Thread Gabriel Gellner
Yeah, the R system is probably the best guide, as it also has a pretty easy 
to use package manager ... hence so, so many packages ;) I think Python 
works without a single BDFL (for science at least) since the core packages 
are monolithic, so the consistency is immediately apparent, and I find 
programmers, as a rule, dislike inconsistent APIs within a given project 
(while seemingly worrying less about consistency across packages).

In response to Andreas and Tom: I don't mean to sound like I don't want to 
collaborate. Rather, starting this discussion about the conventions in Base 
and Optim.jl, for example, made me realize that the solution is not just a 
matter of simple discussion; each group would need to sacrifice a certain 
level of API consistency if it adopted the other's convention ... and like 
John says, that kind of decision usually requires a command from on high, 
which a loose package system doesn't always facilitate. But we shall see; 
maybe it doesn't matter in the long run.

I just find it stressful, when writing my own package, to decide which 
convention to follow ... every choice feels like a severe tradeoff (do I 
use reltol to be like Base, which will be less and less of a guide as 
packages are moved out of Base ..., or do I use rel_tol because my package 
will commonly be used in conjunction with Optim.jl ...).

Thanks for the response though; this is something I noticed but wasn't 
sure how others felt.

all the best.

On Friday, October 7, 2016 at 10:49:47 AM UTC-6, John Myles White wrote:

> I don't really see how you can solve this without a single dictator who 
> controls the package ecosystem. I'm not enough of an expert in Python to 
> say how well things work there, but the R ecosystem is vastly less 
> organized than the Julia ecosystem. Insofar as it's getting better, it's 
> because the community has agreed to make Hadley Wickham their benevolent 
> dictator.
>
>  --John
>
> On Friday, October 7, 2016 at 8:35:46 AM UTC-7, Gabriel Gellner wrote:
>>
>> Something that I have been noticing, as I convert more of my research 
>> code over to Julia, is how the super easy to use package manager (which I 
>> love), coupled with the talent base of the Julia community seems to have a 
>> detrimental effect on the API consistency of the many “micro” packages that 
>> cover what I would consider the de-facto standard library.
>>
>> What I mean is that whereas a commercial package like Matlab/Mathematica 
>> etc., being written under one large umbrella, will largely (clearly not 
>> always) choose consistent names for similar API keyword arguments, and have 
>> similar calling conventions for master function like tools (`optimize` 
>> versus `lbfgs`, etc), which I am starting to realize is one of the great 
>> selling points of these packages as an end user. I can usually guess what a 
>> keyword will be in Mathematica, whereas even after a year of using Julia 
>> almost exclusively I find I have to look at the documentation (or the 
>> source code depending on the documentation ...) to figure out the keyword 
>> names in many common packages.
>>
>> Similarly, in my experience with open source tools, due to the complexity 
>> of the package management, we get large “batteries included” distributions 
>> that cover a lot of the standard stuff for doing science, like python’s 
>> numpy + scipy combination. Whereas in Julia the equivalent of scipy is 
>> split over many, separately developed packages (Base, Optim.jl, NLopt.jl, 
>> Roots.jl, NLsolve.jl, ODE.jl/DifferentialEquations.jl). Many of these 
>> packages are stupid awesome, but they can have dramatically different 
>> naming conventions and calling behavior, for essentially equivalent behavior. 
>> Recently I noticed that tolerances, for example, are named as `atol/rtol` 
>> versus `abstol/reltol` versus `abs_tol/rel_tol`, which means it is extremely 
>> easy to have a piece of scientific code that will need to use all three 
>> conventions across different calls to seemingly similar libraries. 
>>
>> Having brought this up I find that the community is largely sympathetic 
>> and, in general, would support a common convention, the issue I have slowly 
>> realized is that it is rarely that straightforward. In the above example 
>> the abstol/reltol versus abs_tol/rel_tol seems like an easy example of what 
>> can be tidied up, but the latter underscored name is consistent with 
>> similar naming conventions from Optim.jl for other tolerances, so that 
>> community is reluctant to change the convention. Similarly, I think there 
>> would be little interest in changing abstol/reltol to the underscored 
>> version in packages like Base, ODE.jl etc as this feels consistent with 
>> each of these code bases. Hence I have started to think that the problem is 
>> the micro-packaging. It is much easier to look for consistency within a 
>> package than across similar packages, and since Julia seems to distribute 
>> so many 

[julia-users] Re: multiple dispatch for operators

2016-10-07 Thread Jeffrey Sarnoff


Is there a way around the nonjuliance of Python, within Python?


Regards, Jeffrey



On Friday, October 7, 2016 at 5:51:11 AM UTC-4, Sisyphuss wrote:
>
> In Julia, we can do multiple dispatch for operators, that is the 
> interpreter can identify:
> float + integer
> integer + integer
> integer + float
> float + float
> as well as *user-defined* data structure.
>
> Recently, I am working on Python (I have no choice because Spark hasn't 
> yet a Julia binding). I intended to do the same thing -- multiplication -- 
> between a Numpy matrix and self-defined Low-rank matrix. Of course, I 
> defined the `__rmul__ ` method for Low-rank matrix. However, it seems to me 
> that the Numpy matrix intercepts the `*` operator as its `__mul__` method, 
> which expects the argument on the right side of `*` to be a scalar.
>
> I would like to know if there is any way around this?
>
>

[julia-users] Re: Julia and the Tower of Babel

2016-10-07 Thread John Myles White
I don't really see how you can solve this without a single dictator who 
controls the package ecosystem. I'm not enough of an expert in Python to 
say how well things work there, but the R ecosystem is vastly less 
organized than the Julia ecosystem. Insofar as it's getting better, it's 
because the community has agreed to make Hadley Wickham their benevolent 
dictator.

 --John

On Friday, October 7, 2016 at 8:35:46 AM UTC-7, Gabriel Gellner wrote:
>
> Something that I have been noticing, as I convert more of my research code 
> over to Julia, is how the super easy to use package manager (which I love), 
> coupled with the talent base of the Julia community seems to have a 
> detrimental effect on the API consistency of the many “micro” packages that 
> cover what I would consider the de-facto standard library.
>
> What I mean is that whereas a commercial package like Matlab/Mathematica 
> etc., being written under one large umbrella, will largely (clearly not 
> always) choose consistent names for similar API keyword arguments, and have 
> similar calling conventions for master function like tools (`optimize` 
> versus `lbfgs`, etc), which I am starting to realize is one of the great 
> selling points of these packages as an end user. I can usually guess what a 
> keyword will be in Mathematica, whereas even after a year of using Julia 
> almost exclusively I find I have to look at the documentation (or the 
> source code depending on the documentation ...) to figure out the keyword 
> names in many common packages.
>
> Similarly, in my experience with open source tools, due to the complexity 
> of the package management, we get large “batteries included” distributions 
> that cover a lot of the standard stuff for doing science, like python’s 
> numpy + scipy combination. Whereas in Julia the equivalent of scipy is 
> split over many, separately developed packages (Base, Optim.jl, NLopt.jl, 
> Roots.jl, NLsolve.jl, ODE.jl/DifferentialEquations.jl). Many of these 
> packages are stupid awesome, but they can have dramatically different 
> naming conventions and calling behavior, for essentially equivalent behavior. 
> Recently I noticed that tolerances, for example, are named as `atol/rtol` 
> versus `abstol/reltol` versus `abs_tol/rel_tol`, which means it is extremely 
> easy to have a piece of scientific code that will need to use all three 
> conventions across different calls to seemingly similar libraries. 
>
> Having brought this up I find that the community is largely sympathetic 
> and, in general, would support a common convention, the issue I have slowly 
> realized is that it is rarely that straightforward. In the above example 
> the abstol/reltol versus abs_tol/rel_tol seems like an easy example of what 
> can be tidied up, but the latter underscored name is consistent with 
> similar naming conventions from Optim.jl for other tolerances, so that 
> community is reluctant to change the convention. Similarly, I think there 
> would be little interest in changing abstol/reltol to the underscored 
> version in packages like Base, ODE.jl etc as this feels consistent with 
> each of these code bases. Hence I have started to think that the problem is 
> the micro-packaging. It is much easier to look for consistency within a 
> package than across similar packages, and since Julia seems to distribute 
> so many of the essential tools in very narrow boundaries of functionality I 
> am not sure that this kind of naming convention will ever be able to reach 
> something like a Scipy, or the even higher standard of commercial packages 
> like Matlab/Mathematica. (I am sure there are many more examples like using 
> maxiter, versus iterations for describing stopping criteria in iterative 
> solvers ...)
>
> Even further I have noticed that even when packages try to find 
> consistency across packages, for example Optim.jl <-> Roots.jl <-> 
> NLsolve.jl, when one package changes how they do things (Optim.jl moving to 
> delegation on types for method choice) then again the consistency fractures 
> quickly, where we now have a common divide of using either Typed dispatch 
> keywords versus :method symbol names across the previous packages (not to 
> mention the whole inplace versus not-inplace for function arguments …)
>
> Do people, with more experience in scientific packages ecosystems, feel 
> this is solvable? Or do micro distributions just lead to many, many varying 
> degrees of API conventions that need to be learned by end users? Is this 
> common in communities that use C++ etc? I ask as I wonder how much this 
> kind of thing can be worried about when making small packages is so easy.
>


[julia-users] Re: Julia and the Tower of Babel

2016-10-07 Thread Andreas Lobinger
Hello colleague,

On Friday, October 7, 2016 at 5:35:46 PM UTC+2, Gabriel Gellner wrote:
>
> Something that I have been noticing, as I convert more of my research code 
> over to Julia, is how the super easy to use package manager (which I love), 
> coupled with the talent base of the Julia community seems to have a 
> detrimental effect on the API consistency of the many “micro” packages that 
> cover what I would consider the de-facto standard library. 
>
 well, you consider 'this' the de-facto standard library and others 
consider 'that' a reasonable standard library and others ...

If you see the need for standardisation of interfaces, just volunteer to 
write a style guide and open issues and PRs on the respective packages. All 
this is open source and the development process is transparent on github. 
For exactly that reason: collaboration.

I'm contributing to the ecosystem and it has been really a pleasure to be 
part of the story.

Wishing a happy day,
Andreas


Re: [julia-users] Calling DLL function

2016-10-07 Thread Yichao Yu
On Fri, Oct 7, 2016 at 12:10 PM, Jérémy Béjanin
 wrote:
> The format is fairly complicated, but I need to take 4 bytes from that
> buffer and extract two 12-bit numbers from that 32-bit string (along with
> some reversing operations, as you can see in the function). I cannot seem to
> find bit operations that address/concatenate individual bits without going
> through a string format and parsing. Would you have any guidance as to where
> in the manual I should look?

I didn't follow that way-too-complicated string handling, but something
along the lines of
a & 0xfff
and
(a >> 16) & 0xfff
should do it.
You need to adjust to the actual bits location of course.

If you have never done anything with bits before, this might be useful
in general. https://en.wikipedia.org/wiki/Bitwise_operation
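A concrete sketch of that, assuming a hypothetical layout (low sample in bits 0–11 and high sample in bits 16–27 of a little-endian 32-bit word — adjust the shifts to the real format):

```julia
# assemble a 32-bit word from four raw bytes (little-endian byte order)
function word32(b1::UInt8, b2::UInt8, b3::UInt8, b4::UInt8)
    UInt32(b1) | UInt32(b2) << 8 | UInt32(b3) << 16 | UInt32(b4) << 24
end

# pull the two 12-bit samples out with shifts and masks — no strings needed
function unpack12(w::UInt32)
    lo = w & 0xfff
    hi = (w >> 16) & 0xfff
    Int32(lo), Int32(hi)
end

w = word32(0xEF, 0x0D, 0xBC, 0x0A)   # == 0x0abc0def
unpack12(w)                          # → (Int32(0xdef), Int32(0xabc))
```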

>
> On Thursday, October 6, 2016 at 8:15:41 PM UTC-4, Yichao Yu wrote:
>>
>> On Thu, Oct 6, 2016 at 5:40 PM, Jérémy Béjanin 
>> wrote:
>> > Thanks! this works perfectly.
>> >
>> > The pointer_to_array() function seems to be deprecated, however, do you
>> > know
>> > how to use the suggested replacement, unsafe_wrap()?
>>
>> Yes.
>>
>> >
>> > And if it's not too much to ask, I am wondering how I can do the
>> > conversion
>> > from this UInt8 buffer more efficiently. This is what I am currently
>> > doing,
>> > where blocks is the buffer:
>> >
>> > # We now have all the blocks in the blocks array, we need to extract the
>> > # 12-bit samples from that array. The data in the blocks is organized in
>> > 4
>> > # bytes / 32-bit words, and each word contains two 12-bit samples.
>> > # Calculate the number of 32-bit words in the blocks array
>> > numwords = div(numblocks*DIG_BLOCK_SIZE,4)
>> > # Initialize array to store the samples
>> > data = Array{Int32}(2*numwords)
>> > for n=1:numwords
>> > # Convert next four bytes to 32-bit string
>> > s =
>> >
>> > reverse(bits(blocks[(n-1)*4+1]))*reverse(bits(blocks[(n-1)*4+2]))*reverse(bits(blocks[(n-1)*4+3]))*reverse(bits(blocks[(n-1)*4+4]))
>> > # Parse the first 12 bits as the first sample and the 12 bits 4 bits
>> > later as the second sample
>> > data[(n-1)*2+1] = parse(Int,reverse(s[1:12]),2)
>> > data[(n-1)*2+2] = parse(Int,reverse(s[17:28]),2)
>> > end
>> >
>> > This is pretty slow, I assume due to the translation between numbers and
>> > strings. Is there a better way to do this?
>>
>> I don't know what exactly the format is but you should not go though
>> string and you should be able to do what you want with simple
>> interger/bits operations.
>>
>> >
>> > Thanks,
>> > Jeremy
>> >


Re: [julia-users] Julia and the Tower of Babel

2016-10-07 Thread Tom Breloff
This is something that I've spent a lot of time and energy thinking and
discussing, as part of both Plots and JuliaML.  I think the situation can
be improved in a big way, but this is not something with a "magic
solution".  It takes time, effort, and a constant desire to collaborate and
design with care for the greater community.  As soon as people get lazy, it
starts to get unwieldy.  So I think the "solution" is just to keep at it...
keep trying to collaborate... keep trying to agree on common conventions...
and always look to find common ground.  Use Base as a guide, whenever
possible, and if there are different conventions in place across packages,
then spend the time to agree on shared conventions.  And if people refuse
to collaborate, give them crap about it.

On Fri, Oct 7, 2016 at 12:02 PM, David Anthoff  wrote:

> I don’t have a solution, but I completely agree with the problem
> description.
>
>
>
> I guess one small step would be that package authors should follow the
> patterns in base, if there are any.
>
>
>
> *From:* julia-users@googlegroups.com [mailto:julia-users@googlegroups.com]
> *On Behalf Of *Gabriel Gellner
> *Sent:* Friday, October 7, 2016 8:36 AM
> *To:* julia-users 
> *Subject:* [julia-users] Julia and the Tower of Babel
>
>
>
> Something that I have been noticing, as I convert more of my research code
> over to Julia, is how the super easy to use package manager (which I love),
> coupled with the talent base of the Julia community seems to have a
> detrimental effect on the API consistency of the many “micro” packages that
> cover what I would consider the de-facto standard library.
>
> What I mean is that whereas a commercial package like Matlab/Mathematica
> etc., being written under one large umbrella, will largely (clearly not
> always) choose consistent names for similar API keyword arguments, and have
> similar calling conventions for master function like tools (`optimize`
> versus `lbfgs`, etc), which I am starting to realize is one of the great
> selling points of these packages as an end user. I can usually guess what a
> keyword will be in Mathematica, whereas even after a year of using Julia
> almost exclusively I find I have to look at the documentation (or the
> source code depending on the documentation ...) to figure out the keyword
> names in many common packages.
>
> Similarly, in my experience with open source tools, due to the complexity
> of the package management, we get large “batteries included” distributions
> that cover a lot of the standard stuff for doing science, like python’s
> numpy + scipy combination. Whereas in Julia the equivalent of scipy is
> split over many, separately developed packages (Base, Optim.jl, NLopt.jl,
> Roots.jl, NLsolve.jl, ODE.jl/DifferentialEquations.jl). Many of these
> packages are stupid awesome, but they can have dramatically different
> naming conventions and calling behavior for essentially equivalent functionality.
> Recently I noticed that tolerances, for example, are named as `atol/rtol`
> versus `abstol/reltol` versus `abs_tol/rel_tol`, which means it is extremely
> easy to have a piece of scientific code that will need to use all three
> conventions across different calls to seemingly similar libraries.
>
> Having brought this up I find that the community is largely sympathetic
> and, in general, would support a common convention, the issue I have slowly
> realized is that it is rarely that straightforward. In the above example
> the abstol/reltol versus abs_tol/rel_tol seems like an easy example of what
> can be tidied up, but the latter underscored name is consistent with
> similar naming conventions from Optim.jl for other tolerances, so that
> community is reluctant to change the convention. Similarly, I think there
> would be little interest in changing abstol/reltol to the underscored
> version in packages like Base, ODE.jl etc as this feels consistent with
> each of these code bases. Hence I have started to think that the problem is
> the micro-packaging. It is much easier to look for consistency within a
> package than across similar packages, and since Julia seems to distribute
> so many of the essential tools in very narrow boundaries of functionality I
> am not sure that this kind of naming convention will ever be able to reach
> something like a Scipy, or the even higher standard of commercial packages
> like Matlab/Mathematica. (I am sure there are many more examples like using
> maxiter, versus iterations for describing stopping criteria in iterative
> solvers ...)
>
> Even further I have noticed that even when packages try to find
> consistency across packages, for example Optim.jl <-> Roots.jl <->
> NLsolve.jl, when one package changes how they do things (Optim.jl moving to
> delegation on types for method choice) then again the consistency fractures
> quickly, where we now have a common divide of using either Typed dispatch
> keywords versus :method symbol 

Re: [julia-users] Calling DLL function

2016-10-07 Thread Jérémy Béjanin
The format is fairly complicated, but I need to take 4 bytes from that 
buffer and extract two 12-bit numbers from that 32-bit word (along with 
some reversing operations, as you can see in the function). I cannot seem 
to find bit operations that address/concatenate individual bits without 
going through a string format and parsing. Would you have any guidance as 
to where in the manual I should look?
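
For concreteness, here is the kind of pure bit-twiddling loop in question (a sketch only: the layout assumed below — sample 1 in bits 0-11 and sample 2 in bits 16-27 of a little-endian word — is a guess and would need to be adjusted to the real device format):

```julia
# Extract two 12-bit samples per 32-bit word with integer ops only.
# ASSUMED layout: little-endian word; sample 1 in bits 0-11,
# sample 2 in bits 16-27. Adjust the shifts/masks to the real format.
function extract_samples(blocks::Vector{UInt8})
    numwords = div(length(blocks), 4)
    data = Vector{Int32}(undef, 2 * numwords)  # on 0.5: Array{Int32}(2*numwords)
    for n in 1:numwords
        # assemble the 32-bit word from four consecutive bytes
        w = UInt32(blocks[4n-3]) |
            (UInt32(blocks[4n-2]) << 8) |
            (UInt32(blocks[4n-1]) << 16) |
            (UInt32(blocks[4n]) << 24)
        data[2n-1] = Int32(w & 0x0fff)          # low 12 bits
        data[2n]   = Int32((w >> 16) & 0x0fff)  # bits 16-27
    end
    return data
end
```

No strings are involved, so this should be orders of magnitude faster than the bits()/parse() version.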

On Thursday, October 6, 2016 at 8:15:41 PM UTC-4, Yichao Yu wrote:
>
> On Thu, Oct 6, 2016 at 5:40 PM, Jérémy Béjanin  > wrote: 
> > Thanks! this works perfectly. 
> > 
> > The pointer_to_array() function seems to be deprecated, however, do you 
> know 
> > how to use the suggested replacement, unsafe_wrap()? 
>
> Yes. 
>
> > 
> > And if it's not too much to ask, I am wondering how I can do the 
> conversion 
> > from this UInt8 buffer more efficiently. This is what I am currently 
> doing, 
> > where blocks is the buffer: 
> > 
> > # We now have all the blocks in the blocks array, we need to extract the 
> > # 12-bit samples from that array. The data in the blocks is organized in 
> 4 
> > # bytes / 32-bit words, and each word contains two 12-bit samples. 
> > # Calculate the number of 32-bit words in the blocks array 
> > numwords = div(numblocks*DIG_BLOCK_SIZE,4) 
> > # Initialize array to store the samples 
> > data = Array{Int32}(2*numwords) 
> > for n=1:numwords 
> > # Convert next four bytes to 32-bit string 
> > s = 
> > 
> reverse(bits(blocks[(n-1)*4+1]))*reverse(bits(blocks[(n-1)*4+2]))*reverse(bits(blocks[(n-1)*4+3]))*reverse(bits(blocks[(n-1)*4+4]))
>  
>
> > # Parse the first 12 bits as the first sample and the 12 bits 4 bits 
> > later as the second sample 
> > data[(n-1)*2+1] = parse(Int,reverse(s[1:12]),2) 
> > data[(n-1)*2+2] = parse(Int,reverse(s[17:28]),2) 
> > end 
> > 
> > This is pretty slow, I assume due to the translation between numbers and 
> > strings. Is there a better way to do this? 
>
> I don't know what exactly the format is, but you should not go through 
> strings; you should be able to do what you want with simple 
> integer/bit operations. 
>
> > 
> > Thanks, 
> > Jeremy 
> > 
>


RE: [julia-users] Julia and the Tower of Babel

2016-10-07 Thread David Anthoff
I don’t have a solution, but I completely agree with the problem description.

 

I guess one small step would be that package authors should follow the patterns 
in base, if there are any. 

 

From: julia-users@googlegroups.com [mailto:julia-users@googlegroups.com] On 
Behalf Of Gabriel Gellner
Sent: Friday, October 7, 2016 8:36 AM
To: julia-users 
Subject: [julia-users] Julia and the Tower of Babel

 

Something that I have been noticing, as I convert more of my research code over 
to Julia, is how the super easy to use package manager (which I love), coupled 
with the talent base of the Julia community seems to have a detrimental effect 
on the API consistency of the many “micro” packages that cover what I would 
consider the de-facto standard library.

What I mean is that whereas a commercial package like Matlab/Mathematica etc., 
being written under one large umbrella, will largely (clearly not always) 
choose consistent names for similar API keyword arguments, and have similar 
calling conventions for master function like tools (`optimize` versus `lbfgs`, 
etc), which I am starting to realize is one of the great selling points of 
these packages as an end user. I can usually guess what a keyword will be in 
Mathematica, whereas even after a year of using Julia almost exclusively I find 
I have to look at the documentation (or the source code depending on the 
documentation ...) to figure out the keyword names in many common packages.

Similarly, in my experience with open source tools, due to the complexity of 
the package management, we get large “batteries included” distributions that 
cover a lot of the standard stuff for doing science, like python’s numpy + 
scipy combination. Whereas in Julia the equivalent of scipy is split over many, 
separately developed packages (Base, Optim.jl, NLopt.jl, Roots.jl, NLsolve.jl, 
ODE.jl/DifferentialEquations.jl). Many of these packages are stupid awesome, 
but they can have dramatically different naming conventions and calling 
behavior for essentially equivalent functionality. Recently I noticed that 
tolerances, for example, are named as `atol/rtol` versus `abstol/reltol` versus 
`abs_tol/rel_tol`, which means it is extremely easy to have a piece of scientific 
code that will need to use all three conventions across different calls to 
seemingly similar libraries. 

Having brought this up I find that the community is largely sympathetic and, in 
general, would support a common convention; the issue I have slowly realized is 
that it is rarely that straightforward. In the above example the abstol/reltol 
versus abs_tol/rel_tol seems like an easy example of what can be tidied up, but 
the latter underscored name is consistent with similar naming conventions from 
Optim.jl for other tolerances, so that community is reluctant to change the 
convention. Similarly, I think there would be little interest in changing 
abstol/reltol to the underscored version in packages like Base, ODE.jl etc as 
this feels consistent with each of these code bases. Hence I have started to 
think that the problem is the micro-packaging. It is much easier to look for 
consistency within a package than across similar packages, and since Julia 
seems to distribute so many of the essential tools in very narrow boundaries of 
functionality I am not sure that this kind of naming convention will ever be 
able to reach something like a Scipy, or the even higher standard of commercial 
packages like Matlab/Mathematica. (I am sure there are many more examples like 
using maxiter, versus iterations for describing stopping criteria in iterative 
solvers ...)

Even further I have noticed that even when packages try to find consistency 
across packages, for example Optim.jl <-> Roots.jl <-> NLsolve.jl, when one 
package changes how they do things (Optim.jl moving to delegation on types for 
method choice) then again the consistency fractures quickly, where we now have 
a common divide of using either Typed dispatch keywords versus :method symbol 
names across the previous packages (not to mention the whole inplace versus 
not-inplace for function arguments …)

Do people, with more experience in scientific packages ecosystems, feel this is 
solvable? Or do micro distributions just lead to many, many varying degrees of 
API conventions that need to be learned by end users? Is this common in 
communities that use C++ etc? I ask as I wonder how much this kind of thing can 
be worried about when making small packages is so easy.



[julia-users] Julia and the Tower of Babel

2016-10-07 Thread Gabriel Gellner
 

Something that I have been noticing, as I convert more of my research code 
over to Julia, is how the super easy to use package manager (which I love), 
coupled with the talent base of the Julia community seems to have a 
detrimental effect on the API consistency of the many “micro” packages that 
cover what I would consider the de-facto standard library.

What I mean is that whereas a commercial package like Matlab/Mathematica 
etc., being written under one large umbrella, will largely (clearly not 
always) choose consistent names for similar API keyword arguments, and have 
similar calling conventions for master function like tools (`optimize` 
versus `lbfgs`, etc), which I am starting to realize is one of the great 
selling points of these packages as an end user. I can usually guess what a 
keyword will be in Mathematica, whereas even after a year of using Julia 
almost exclusively I find I have to look at the documentation (or the 
source code depending on the documentation ...) to figure out the keyword 
names in many common packages.

Similarly, in my experience with open source tools, due to the complexity 
of the package management, we get large “batteries included” distributions 
that cover a lot of the standard stuff for doing science, like python’s 
numpy + scipy combination. Whereas in Julia the equivalent of scipy is 
split over many, separately developed packages (Base, Optim.jl, NLopt.jl, 
Roots.jl, NLsolve.jl, ODE.jl/DifferentialEquations.jl). Many of these 
packages are stupid awesome, but they can have dramatically different 
naming conventions and calling behavior for essentially equivalent functionality. 
Recently I noticed that tolerances, for example, are named as `atol/rtol` 
versus `abstol/reltol` versus `abs_tol/rel_tol`, which means it is extremely 
easy to have a piece of scientific code that will need to use all three 
conventions across different calls to seemingly similar libraries. 

Having brought this up I find that the community is largely sympathetic 
and, in general, would support a common convention; the issue I have slowly 
realized is that it is rarely that straightforward. In the above example 
the abstol/reltol versus abs_tol/rel_tol seems like an easy example of what 
can be tidied up, but the latter underscored name is consistent with 
similar naming conventions from Optim.jl for other tolerances, so that 
community is reluctant to change the convention. Similarly, I think there 
would be little interest in changing abstol/reltol to the underscored 
version in packages like Base, ODE.jl etc as this feels consistent with 
each of these code bases. Hence I have started to think that the problem is 
the micro-packaging. It is much easier to look for consistency within a 
package then across similar packages, and since Julia seems to distribute 
so many of the essential tools in very narrow boundaries of functionality I 
am not sure that this kind of naming convention will ever be able to reach 
something like a Scipy, or the even higher standard of commercial packages 
like Matlab/Mathematica. (I am sure there are many more examples like using 
maxiter, versus iterations for describing stopping criteria in iterative 
solvers ...)

Even further I have noticed that even when packages try to find consistency 
across packages, for example Optim.jl <-> Roots.jl <-> NLsolve.jl, when one 
package changes how they do things (Optim.jl moving to delegation on types 
for method choice) then again the consistency fractures quickly, where we 
now have a common divide of using either Typed dispatch keywords versus 
:method symbol names across the previous packages (not to mention the whole 
inplace versus not-inplace for function arguments …)

Do people, with more experience in scientific packages ecosystems, feel 
this is solvable? Or do micro distributions just lead to many, many varying 
degrees of API conventions that need to be learned by end users? Is this 
common in communities that use C++ etc? I ask as I wonder how much this 
kind of thing can be worried about when making small packages is so easy.


Re: [julia-users] Re: read binary file with endianness

2016-10-07 Thread Michele Zaffalon
Thank you.


On Thu, Oct 6, 2016 at 8:42 PM, Steven G. Johnson 
wrote:

> Yes, just read in the first byte, and then read in the rest of the data,
> calling ntoh or hton as needed on each datum.
>


[julia-users] Getting variable names of function though the AST?

2016-10-07 Thread Jon Norberg
Hi, I asked in a thread about a year ago how to get the parameters and variables 
used in a function. I got some amazing help from the always very helpful 
community (Thanks Mauro and more, 
https://groups.google.com/forum/m/#!search/Jon$20norberg/julia-users/bV4VZxbzZyk).
 However, as already hinted at in that thread, this trick would cease to work 
in 0.5. So my question now is: How can I do this in 0.5?

I have come this far:

```julia
ast = Base.uncompressed_ast(methods(growthV).mt.defs.func.lambda_template)
```

which gives me the code of the function. Previously one could also get at all 
the parameters used in the function (see the thread linked above), but I am at 
a loss as to how to get there now in Julia 0.5.

[julia-users] Re: Representation of a material conditional (implication)

2016-10-07 Thread Kevin Liu


*julia> **@code_native(b^a)*

   .section__TEXT,__text,regular,pure_instructions

Filename: bool.jl

Source line: 39

   pushq   %rbp

   movq%rsp, %rbp

Source line: 39

   xorb$1, %sil

   orb %dil, %sil

   movb%sil, %al

   popq%rbp

   ret

*julia> **@code_native(a<=b)*

   .section__TEXT,__text,regular,pure_instructions

Filename: bool.jl

Source line: 29

   pushq   %rbp

   movq%rsp, %rbp

Source line: 29

   xorb$1, %dil

   orb %sil, %dil

   movb%dil, %al

   popq%rbp

   ret

*julia> **@code_native(ifelse(a,b,true))*

   .section__TEXT,__text,regular,pure_instructions

Filename: operators.jl

Source line: 48

   pushq   %rbp

   movq%rsp, %rbp

   testb   $1, %dil

Source line: 48

   jne L17

   movb%dl, %sil

L17:   movb%sil, %al

   popq%rbp

   ret



On Friday, October 7, 2016 at 10:58:34 AM UTC-3, Kevin Liu wrote:
>
> *julia> **@code_llvm(b^a)*
>
>
> define i1 @"julia_^_21646"(i1, i1) {
>
> top:
>
>   %2 = xor i1 %1, true
>
>   %3 = or i1 %0, %2
>
>   ret i1 %3
>
> }
>
> On Friday, October 7, 2016 at 10:56:26 AM UTC-3, Kevin Liu wrote:
>>
>> Sorry, no need, I got this
>>
>> *julia> **@code_llvm(a<=b)*
>>
>>
>> define i1 @"julia_<=_21637"(i1, i1) {
>>
>> top:
>>
>>   %2 = xor i1 %0, true
>>
>>   %3 = or i1 %1, %2
>>
>>   ret i1 %3
>>
>> }
>>
>>
>> *julia> **@code_llvm(ifelse(a,b,true))*
>>
>>
>> define i1 @julia_ifelse_21636(i1, i1, i1) {
>>
>> top:
>>
>>   %3 = select i1 %0, i1 %1, i1 %2
>>
>>   ret i1 %3
>>
>> }
>>
>>
>> How do you read this output?
>>
>> On Friday, October 7, 2016 at 10:50:57 AM UTC-3, Kevin Liu wrote:
>>>
>>> Jeffrey, can you show the expression you put inside @code_llvm() and 
>>> @code_native() for evaluation? 
>>>
>>> On Friday, October 7, 2016 at 2:26:56 AM UTC-3, Jeffrey Sarnoff wrote:

 Hi Jussi,

 Your version compiles down more neatly than the ifelse version. On my 
 system, BenchmarkTools gives nearly identical results; I don't know why, 
 but the ifelse version is consistently a smidge faster (~2%, relative 
 speed). Here is the llvm code and local native code for each, your version 
 looks more tidy.  


 ```
 implies(p::Bool, q::Bool) = (p <= q)       implies(p::Bool, q::Bool) = ifelse(p, q, true)

 # llvm

   %2 = xor i8 %0, 1                        %2 = and i8 %0, 1
   %3 = or i8 %2, %1                        %3 = icmp eq i8 %2, 0
   ret i8 %3                                %4 = select i1 %3, i8 1, i8 %1
                                            ret i8 %3

 # native with some common code removed

 xorb   $1, %dil                            testb  $1, %dil
 orb    %sil, %dil                          movb   $1, %al
 movb   %dil, %al                           je     L15
 popq   %rbp                                movb   %sil, %al
 retq                                       L15:   popq   %rbp
                                            retq
 ```




 On Friday, October 7, 2016 at 12:22:23 AM UTC-4, Jussi Piitulainen 
 wrote:
>
>
> implies(p::Bool, q::Bool) = p <= q
>
>
>
> torstai 6. lokakuuta 2016 19.10.51 UTC+3 Kevin Liu kirjoitti:
>>
>> How is an implication represented in Julia? 
>>
>>
>> https://en.wikipedia.org/wiki/Material_conditional#Definitions_of_the_material_conditional
>>
>

[julia-users] Re: Representation of a material conditional (implication)

2016-10-07 Thread Kevin Liu
 

*julia> **@code_llvm(b^a)*


define i1 @"julia_^_21646"(i1, i1) {

top:

  %2 = xor i1 %1, true

  %3 = or i1 %0, %2

  ret i1 %3

}

On Friday, October 7, 2016 at 10:56:26 AM UTC-3, Kevin Liu wrote:
>
> Sorry, no need, I got this
>
> *julia> **@code_llvm(a<=b)*
>
>
> define i1 @"julia_<=_21637"(i1, i1) {
>
> top:
>
>   %2 = xor i1 %0, true
>
>   %3 = or i1 %1, %2
>
>   ret i1 %3
>
> }
>
>
> *julia> **@code_llvm(ifelse(a,b,true))*
>
>
> define i1 @julia_ifelse_21636(i1, i1, i1) {
>
> top:
>
>   %3 = select i1 %0, i1 %1, i1 %2
>
>   ret i1 %3
>
> }
>
>
> How do you read this output?
>
> On Friday, October 7, 2016 at 10:50:57 AM UTC-3, Kevin Liu wrote:
>>
>> Jeffrey, can you show the expression you put inside @code_llvm() and 
>> @code_native() for evaluation? 
>>
>> On Friday, October 7, 2016 at 2:26:56 AM UTC-3, Jeffrey Sarnoff wrote:
>>>
>>> Hi Jussi,
>>>
>>> Your version compiles down more neatly than the ifelse version. On my 
>>> system, BenchmarkTools gives nearly identical results; I don't know why, 
>>> but the ifelse version is consistently a smidge faster (~2%, relative 
>>> speed). Here is the llvm code and local native code for each, your version 
>>> looks more tidy.  
>>>
>>>
>>> ```
>>> implies(p::Bool, q::Bool) = (p <= q)       implies(p::Bool, q::Bool) = ifelse(p, q, true)
>>>
>>> # llvm
>>>
>>>   %2 = xor i8 %0, 1                        %2 = and i8 %0, 1
>>>   %3 = or i8 %2, %1                        %3 = icmp eq i8 %2, 0
>>>   ret i8 %3                                %4 = select i1 %3, i8 1, i8 %1
>>>                                            ret i8 %3
>>>
>>> # native with some common code removed
>>>
>>> xorb   $1, %dil                            testb  $1, %dil
>>> orb    %sil, %dil                          movb   $1, %al
>>> movb   %dil, %al                           je     L15
>>> popq   %rbp                                movb   %sil, %al
>>> retq                                       L15:   popq   %rbp
>>>                                            retq
>>> ```
>>>
>>>
>>>
>>>
>>> On Friday, October 7, 2016 at 12:22:23 AM UTC-4, Jussi Piitulainen wrote:


 implies(p::Bool, q::Bool) = p <= q



 torstai 6. lokakuuta 2016 19.10.51 UTC+3 Kevin Liu kirjoitti:
>
> How is an implication represented in Julia? 
>
>
> https://en.wikipedia.org/wiki/Material_conditional#Definitions_of_the_material_conditional
>


[julia-users] Re: Representation of a material conditional (implication)

2016-10-07 Thread Kevin Liu
Sorry, no need, I got this

*julia> **@code_llvm(a<=b)*


define i1 @"julia_<=_21637"(i1, i1) {

top:

  %2 = xor i1 %0, true

  %3 = or i1 %1, %2

  ret i1 %3

}


*julia> **@code_llvm(ifelse(a,b,true))*


define i1 @julia_ifelse_21636(i1, i1, i1) {

top:

  %3 = select i1 %0, i1 %1, i1 %2

  ret i1 %3

}


How do you read this output?
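
One way to read the IR above (a sketch: `%0` and `%1` are the two arguments `a` and `b`; `i1` is a 1-bit integer, i.e. a Bool):

```llvm
; @code_llvm(a <= b): a arrives as %0, b as %1
define i1 @"julia_<=_21637"(i1, i1) {
top:
  %2 = xor i1 %0, true   ; %2 = !a  (xor with true flips the bit)
  %3 = or i1 %1, %2      ; %3 = b | !a  -- exactly "a implies b"
  ret i1 %3              ; return the Bool result
}
```

The `ifelse` version reads similarly: `select i1 %0, i1 %1, i1 %2` yields `%1` (b) if the condition `%0` (a) is true, otherwise `%2` (the constant `true`).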

On Friday, October 7, 2016 at 10:50:57 AM UTC-3, Kevin Liu wrote:
>
> Jeffrey, can you show the expression you put inside @code_llvm() and 
> @code_native() for evaluation? 
>
> On Friday, October 7, 2016 at 2:26:56 AM UTC-3, Jeffrey Sarnoff wrote:
>>
>> Hi Jussi,
>>
>> Your version compiles down more neatly than the ifelse version. On my 
>> system, BenchmarkTools gives nearly identical results; I don't know why, 
>> but the ifelse version is consistently a smidge faster (~2%, relative 
>> speed). Here is the llvm code and local native code for each, your version 
>> looks more tidy.  
>>
>>
>> ```
>> implies(p::Bool, q::Bool) = (p <= q)       implies(p::Bool, q::Bool) = ifelse(p, q, true)
>>
>> # llvm
>>
>>   %2 = xor i8 %0, 1                        %2 = and i8 %0, 1
>>   %3 = or i8 %2, %1                        %3 = icmp eq i8 %2, 0
>>   ret i8 %3                                %4 = select i1 %3, i8 1, i8 %1
>>                                            ret i8 %3
>>
>> # native with some common code removed
>>
>> xorb   $1, %dil                            testb  $1, %dil
>> orb    %sil, %dil                          movb   $1, %al
>> movb   %dil, %al                           je     L15
>> popq   %rbp                                movb   %sil, %al
>> retq                                       L15:   popq   %rbp
>>                                            retq
>> ```
>>
>>
>>
>>
>> On Friday, October 7, 2016 at 12:22:23 AM UTC-4, Jussi Piitulainen wrote:
>>>
>>>
>>> implies(p::Bool, q::Bool) = p <= q
>>>
>>>
>>>
>>> torstai 6. lokakuuta 2016 19.10.51 UTC+3 Kevin Liu kirjoitti:

 How is an implication represented in Julia? 


 https://en.wikipedia.org/wiki/Material_conditional#Definitions_of_the_material_conditional

>>>

[julia-users] Re: Representation of a material conditional (implication)

2016-10-07 Thread Kevin Liu
Jeffrey, can you show the expression you put inside @code_llvm() and 
@code_native() for evaluation? 

On Friday, October 7, 2016 at 2:26:56 AM UTC-3, Jeffrey Sarnoff wrote:
>
> Hi Jussi,
>
> Your version compiles down more neatly than the ifelse version. On my 
> system, BenchmarkTools gives nearly identical results; I don't know why, 
> but the ifelse version is consistently a smidge faster (~2%, relative 
> speed). Here is the llvm code and local native code for each, your version 
> looks more tidy.  
>
>
> ```
> implies(p::Bool, q::Bool) = (p <= q)       implies(p::Bool, q::Bool) = ifelse(p, q, true)
>
> # llvm
>
>   %2 = xor i8 %0, 1                        %2 = and i8 %0, 1
>   %3 = or i8 %2, %1                        %3 = icmp eq i8 %2, 0
>   ret i8 %3                                %4 = select i1 %3, i8 1, i8 %1
>                                            ret i8 %3
>
> # native with some common code removed
>
> xorb   $1, %dil                            testb  $1, %dil
> orb    %sil, %dil                          movb   $1, %al
> movb   %dil, %al                           je     L15
> popq   %rbp                                movb   %sil, %al
> retq                                       L15:   popq   %rbp
>                                            retq
> ```
>
>
>
>
> On Friday, October 7, 2016 at 12:22:23 AM UTC-4, Jussi Piitulainen wrote:
>>
>>
>> implies(p::Bool, q::Bool) = p <= q
>>
>>
>>
>> torstai 6. lokakuuta 2016 19.10.51 UTC+3 Kevin Liu kirjoitti:
>>>
>>> How is an implication represented in Julia? 
>>>
>>>
>>> https://en.wikipedia.org/wiki/Material_conditional#Definitions_of_the_material_conditional
>>>
>>

[julia-users] How to print an array with the correct shape in 0.5

2016-10-07 Thread spaceLem
In Julia 0.4.6 I could print or @show a 2d array, and it would give me a 
nicely formatted 2d array
println(reshape(1:4,2,2))
[1 3
 2 4]

However in Julia 0.5 I instead get:
println(reshape(1:4,2,2))
[1 3; 2 4]

Is it still possible to print 2d arrays without flattening them? And if so, 
how?

Regards
Jamie
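
One approach that keeps the multi-line form (a sketch): the multi-line matrix layout is the rich "text/plain" rendering, which `display(A)` uses at the REPL; `print` deliberately uses the compact parseable form in 0.5. You can request the rich rendering explicitly:

```julia
A = reshape(1:4, 2, 2)
# Ask for the "text/plain" MIME rendering instead of print()'s compact form.
s = sprint(io -> show(io, MIME"text/plain"(), A))
println(s)
```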


[julia-users] travis with dependency on scipy?

2016-10-07 Thread David van Leeuwen
Hello, 

For a tiny package  that depends on 
PyCall and python's scipy.spatial I am trying to engineer a `.travis.yml`, 
but I get an error in travis 
 that 
`spatial.ConvexHull` is not known.  This is the travis file:

language: julia
julia:
 - 0.4
 - 0.5
 - nightly
before_install:
 - sudo apt-get -qq update
 - sudo apt-get install -y python-scipy


I did not go for the `sudo: false` option because I figured I needed the 
`apt-get install`. 

I've tried to reproduce this in docker with the `julia` image, but there 
everything works fine. 

Does anybody know how to debug .travis.yml files?

Thanks, 

---david


[julia-users] Re: StaticArrays vs FixedSizeArrays

2016-10-07 Thread Simon Danisch
You might enjoy: GeometryTypes.jl 

Especially the Face type should be convenient: 
https://github.com/JuliaGeometry/GeometryTypes.jl/blob/master/src/faces.jl

It's currently based on FixedSizeArrays, but I want to port it to 
StaticArrays whenever it's sensible (and I have time :P ).
So in the ideal case, you won't notice any change in the low-level 
implementation.
I've made sure that most operations needed in Graphics are fast and 
supported in FixedSizeArrays. This doesn't seem to be fully the case for 
StaticArrays yet.
On the other hand, StaticArrays is more fit for general fixed array code.

I'm tracking porting GeometryTypes to StaticArrays in this issue:
https://github.com/JuliaArrays/StaticArrays.jl/issues/45

Best,
Simon


Am Freitag, 7. Oktober 2016 08:10:56 UTC+2 schrieb Petr Hlavenka:
>
> Hi,
> With Julia 0.5 I'm a little bit confused about what the best approach is for high 
> speed array access. I'm writing a simulation where particles move in the 
> field interpolated from an irregular mesh of tetrahedrons in 3D.
> So I basically have a *points* array of MyPoint (= 3xFloat64, stands for 
> x,y,z) and *elements *array of MyElement(= 4xUInt32,  stands for indexes 
> to points forming a tetrahedron) and some lookup arrays.
> I often access the points by direct indexing - defining my own 
> getindex(::MyPoint, ::Integer) for the custom types, as in this example: 
>
> points[elements[tetrahedron_id][vertex_id]][axis_ix] 
>
> or in loops where I prefer treating the arrays as types, such as 
>
> for el in elements
>   point1 = el[1]
>   point2 = el[2]
>   r = (point1[1]+point2[1])/2 + (point1[2]+point2[2])/2
> end
>
>
> or even calling specialized functions on MyElement type
>
> for el in elements
>   myfunc(el)
> end
>
> I'm considering StaticArrays.jl and FixedSizeArrays to use for the 
> definition of MyPoint and MyElement, but I'm a little bit confused about 
> the actual difference between the two implementations. Could someone please 
> explain to me what the difference between these two packages is? Is one of 
> them superseding the other one on 0.5+? What is the difference in memory 
> layout and performance consequences?
>
> Thank you very much.
> Cheers 
> Petr
>
>  
>
>

[julia-users] multiple dispatch for operators

2016-10-07 Thread Sisyphuss
In Julia, we can do multiple dispatch for operators; that is, the 
language can distinguish:
float + integer
integer + integer
integer + float
float + float
as well as *user-defined* data structures.

Recently, I have been working in Python (I have no choice, because Spark 
doesn't yet have a Julia binding). I intended to do the same thing -- 
multiplication -- between a NumPy matrix and a self-defined low-rank matrix. 
Of course, I defined the `__rmul__` method for the low-rank matrix. However, 
it seems that the NumPy matrix intercepts the `*` operator via its `__mul__` 
method, which expects the argument on the right side of `*` to be a scalar.

I would like to know if there is any way around this.
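
One workaround I know of (a sketch, not Spark-specific): NumPy's `__array_priority__` hook makes ndarray binary operators return NotImplemented for a right-hand operand whose priority is higher, so Python then falls back to that operand's `__rmul__`. The class and factorization below are made up for illustration:

```python
import numpy as np

class LowRank:
    # A higher __array_priority__ than ndarray's (0.0) makes NumPy's
    # operators defer, so our __rmul__ runs instead of NumPy treating
    # the object as a scalar.
    __array_priority__ = 1000.0

    def __init__(self, u, v):
        # represents the matrix u @ v.T without ever forming it
        self.u, self.v = u, v

    def __rmul__(self, m):
        # m @ (u v^T) == (m @ u) @ v^T, evaluated cheaply left to right
        return (m @ self.u) @ self.v.T

m = np.arange(6.0).reshape(2, 3)
lr = LowRank(np.ones((3, 1)), np.ones((4, 1)))
res = m * lr  # dispatches to LowRank.__rmul__, not elementwise __mul__
print(res.shape)
```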



[julia-users] Re: I have two data collection for mapreduce() vs. generator, which is correct/better implementation? (Julia 0.5)

2016-10-07 Thread Martin Florek
Thanks, Andrew, for the answer. 
I also have experience that eachindex() is slightly faster. In the Performance 
tips I found macros, e.g. @simd. Do you have any experience with them?
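
For reference, a loop version where `@simd` can apply (a sketch; whether it actually helps depends on whether the division and `abs` vectorize on the target CPU, so benchmark it):

```julia
# MAPE as an explicit loop: @inbounds drops bounds checks, and @simd
# allows the reduction to be reordered so the compiler may vectorize it.
function mape_simd(a, p)
    s = 0.0
    @inbounds @simd for i in eachindex(a)
        s += abs((a[i] - p[i]) / a[i])
    end
    return 100 * s / length(a)
end
```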

On Thursday, 6 October 2016 16:13:22 UTC+2, Martin Florek wrote:
>
>
> Hi All,
>
> I'm new to Julia and I need to decide which of two implementations is more 
> correct/better. I have implemented mean absolute 
> percentage error (MAPE) for *Generator Expressions* (Comprehensions 
> without brackets):
>
> a = rand(10_000_000)
> p = rand(10_000_000)
>
> err(actual, predicted) = (actual - predicted) / actual
>
> f(a, p) = 100 * sumabs(err(a[i], p[i]) for i in eachindex(a)) /length(a)
>
> and one with the *mapreduce()* function. 
>
> function mapre(a, p)
> s = mapreduce(t -> begin b,c=t; abs((b - c) / b) end, +, zip(a, p))
> s * 100/length(a)
> end
>
> When compare *@time f(a,p) *I get:
>
> 0.026515 seconds (11 allocations: 304 bytes) 797.1301337918511
>
> and *@time mapre(a, p):*
>
> 0.079932 seconds (9 allocations: 272 bytes) 797.1301337918511
>
>
> Thanks in advance,
> Martin
>


Re: [julia-users] Re: julia-i18n: Translators and reviewer needed!

2016-10-07 Thread Mosè Giordano
Hi Ismael,

I can confirm the problems in
http://julialang.org/downloads/platform.html are now fixed and I added
the missing string.  Thank you!

However I just noticed that the description of Gadfly in
http://julialang.org/downloads/plotting.html is back in English (this
happens for both Italian and Spanish, for the same strings; this was
also the case with the platform page).

Bye,
Mosè

2016-10-05 17:21 GMT+02:00 Ismael Venegas Castelló:
> I have re applied the translations, just so everybody knows, translations
> are never, ever deleted, even if the original strings are somehow marked as
> ignored, even if the website endpoint is deleted, translations always stay
> stored. Please check out that page now, it should be fixed:
>
>
> In the image above, you can see that this particular string changed: there
> is no mention of Snow Leopard anymore. When an update like this happens,
> you will see it as "untranslated", but if you pay more attention you'll see
> it has a suggestion that matches almost fully. In this case I just reused
> your translated string but removed the Snow Leopard mention from the
> translation and then clicked "save" again.
>
>
>
>
> In this other image, you can see that you have a new string; this doesn't
> have any prior suggestions, so it must be a new string that was added to
> this page, your current progress without this string translated is 97%, you
> just need to translate it and you'll have 100% in this page again, if you
> have any more doubts, suggestion or you want to report anything else, please
> do not hesitate to do so. I have been very busy, which is why I was late to
> answer your concern. Have a very good day!


[julia-users] Re: StaticArrays vs FixedSizeArrays

2016-10-07 Thread Kristoffer Carlsson
See https://github.com/SimonDanisch/FixedSizeArrays.jl/issues/159

On Friday, October 7, 2016 at 8:10:56 AM UTC+2, Petr Hlavenka wrote:
>
> Hi,
> With Julia 0.5 I'm a little bit confused about what the best approach is for high 
> speed array access. I'm writing a simulation where particles move in the 
> field interpolated from an irregular mesh of tetrahedrons in 3D.
> So I basically have a *points* array of MyPoint (= 3xFloat64, stands for 
> x,y,z) and *elements *array of MyElement(= 4xUInt32,  stands for indexes 
> to points forming a tetrahedron) and some lookup arrays.
> I often access the points by direct indexing - defining my own 
> getindex(::MyPoint, ::Integer) for the custom types, as in this example: 
>
> points[elements[tetrahedron_id][vertex_id]][axis_ix] 
>
> or in loops where I prefer treating the arrays as types, such as 
>
> for el in elements
>   point1 = el[1]
>   point2 = el[2]
>   r = (point1[1]+point2[1])/2 + (point1[2]+point2[2])/2
> end
>
>
> or even calling specialized functions on MyElement type
>
> for el in elements
>   myfunc(el)
> end
>
> I'm considering StaticArrays.jl and FixedSizeArrays to use for the 
> definition of MyPoint and MyElement, but I'm a little bit confused about 
> the actual difference between the two implementations. Could someone please 
> explain to me what the difference between these two packages is? Is one of 
> them superseding the other one on 0.5+? What is the difference in memory 
> layout and performance consequences?
>
> Thank you very much.
> Cheers 
> Petr
>
>  
>
>

[julia-users] Re: Representation of a material conditional (implication)

2016-10-07 Thread Jussi Piitulainen
Indeed.

implies(p::Bool, q::Bool) = q^p




perjantai 7. lokakuuta 2016 10.25.06 UTC+3 Fengyang Wang kirjoitti:
>
> The ^ operator can be used for implication. Due to Curry-Howard, 
> implication is isomorphic to function abstraction, which combinatorially 
> can be counted using the exponentiation function.
>
> On Thursday, October 6, 2016 at 12:10:51 PM UTC-4, Kevin Liu wrote:
>>
>> How is an implication represented in Julia? 
>>
>>
>> https://en.wikipedia.org/wiki/Material_conditional#Definitions_of_the_material_conditional
>>
>

[julia-users] Re: Any examples of using Julia with online judge systems

2016-10-07 Thread Fengyang Wang
HackerRank supports Julia in its Algorithms domain and related contests. I 
sent a support request to extend this to the Mathematics domain.

On Monday, June 27, 2016 at 10:39:52 PM UTC-4, Андрей Логунов wrote:
>
> Are there any projects/initiatives to apply Julia with online judge 
> systems (https://en.wikipedia.org/wiki/Online_judge) like spoj, codechef, 
> codeforces for use in education?
>


[julia-users] Re: Representation of a material conditional (implication)

2016-10-07 Thread Fengyang Wang
The ^ operator can be used for implication. Due to Curry-Howard, 
implication is isomorphic to function abstraction, which combinatorially 
can be counted using the exponentiation function.
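Concretely, Julia's bool.jl defines `^(x::Bool, y::Bool) = x | !y`, so `p^q` evaluates the converse `q → p`; the arguments must be swapped to get `p → q`. A quick check:

```julia
p, q = true, false

p^q   # true:  this is q → p, the converse
q^p   # false: swapped order gives p → q
```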

On Thursday, October 6, 2016 at 12:10:51 PM UTC-4, Kevin Liu wrote:
>
> How is an implication represented in Julia? 
>
>
> https://en.wikipedia.org/wiki/Material_conditional#Definitions_of_the_material_conditional
>


[julia-users] Re: Unexpected results with typeof(DataFrame)

2016-10-07 Thread Fengyang Wang
I think you have misunderstood Kristoffer. It's possible the Int values 
themselves are of distinct types; note for instance

julia> Array{Int, Int32(1)}
Array{Int64,1}

julia> Array{Int, 1}
Array{Int64,1}

julia> Array{Int, Int32(1)} == Array{Int, 1}
false
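One way to make the hidden difference visible is to inspect the type parameters directly (a sketch; `parameters` is an internal field of `DataType`, not public API). The dimension parameters have different types even though both print as `1`:

```julia
julia> T1 = Array{Int, Int32(1)}; T2 = Array{Int, 1};

julia> typeof(T1.parameters[2]), typeof(T2.parameters[2])
(Int32,Int64)
```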




On Thursday, October 6, 2016 at 10:52:24 PM UTC-4, r5823 wrote:
>
> All elements of temp and DF are of type ParType.Parjm, so I don't think 
> there is any mixing going on in terms of different types within :A of 
> each variable.
>
>
> On Monday, October 3, 2016 at 5:45:55 AM UTC-4, Kristoffer Carlsson wrote:
>>
>> The int value "1" is an Int32 in one case and an Int64 in another? 
>
>

[julia-users] Re: ANN: A potential new Discourse-based Julia forum

2016-10-07 Thread Fengyang Wang
I would also personally prefer a Discourse-based forum to this Google 
Groups mailing list.

On Saturday, September 19, 2015 at 8:16:36 PM UTC-4, Jonathan Malmaud wrote:
>
> Hi all,
> There's been some chatter about maybe switching to a new, more modern 
> forum platform for Julia that could potentially subsume julia-users, 
> julia-dev, julia-stats, julia-gpu, and julia-jobs.   I created 
> http://julia.malmaud.com for us to try one out and see if we like it. 
> Please check it out and leave feedback. All the old posts from julia-users 
> have already been imported to it.
>
> It is using Discourse, the same forum software used for the forums of 
> Rust, BoingBoing, and some other big sites. Benefits over Google Groups 
> include better support for topic tagging, community moderation features, 
> Markdown (and hence syntax highlighting) in messages, inline previews of 
> linked-to GitHub issues, better mobile support, and more options for 
> controlling when and what you get emailed. The Discourse website does a 
> better job of summarizing the advantages than I could.
>
> To get things started, Mike Innes suggested having a topic on what we 
> plan on working on this coming week. 
> I think that's a great idea.
>
> Just to be clear, this isn't "official" in any sense - it's just to 
> kickstart the discussion. 
>
> -Jon
>
>
>

[julia-users] StaticArrays vs FixedSizeArrays

2016-10-07 Thread Petr Hlavenka
Hi,
With Julia 0.5 I'm a little confused about the best approach for high-speed 
array access. I'm writing a simulation where particles move in the field 
interpolated from an irregular mesh of tetrahedra in 3D.
So I basically have a *points* array of MyPoint (= 3x Float64, standing for 
x, y, z), an *elements* array of MyElement (= 4x UInt32, holding indices of 
the points forming a tetrahedron), and some lookup arrays.
I often access the points by direct indexing, defining my own 
getindex(::MyPoint, ::Integer) for the custom types, as in this example: 

points[elements[tetrahedron_id][vertex_id]][axis_ix] 

or in loops where I prefer treating the arrays as types, such as 

for el in elements
  point1 = el[1]
  point2 = el[2]
  r = (point1[1]+point2[1])/2 + (point1[2]+point2[2])/2
end


or even calling specialized functions on MyElement type

for el in elements
  myfunc(el)
end

I'm considering StaticArrays.jl and FixedSizeArrays.jl for the definitions 
of MyPoint and MyElement, but I'm a little confused about the actual 
difference between the two implementations. Could someone please explain 
the difference between these two packages? Is one of them superseding the 
other on 0.5+? And what are the differences in memory layout and their 
performance consequences?

Thank you very much.
Cheers 
Petr