Re: [julia-users] How to convert Char to Float64, what's wrong?

2016-11-25 Thread Milan Bouchet-Valat
On Friday, 25 November 2016 at 00:35 -0800, programista...@gmail.com wrote:
> How to convert Char to Float? What's wrong?
Use parse(Float64, string(x)), with x the Char.
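For example (a quick sketch at the REPL; note that in your output below
the element is actually a UTF8String, which parse() handles directly):

julia> parse(Float64, string('5'))
5.0

julia> parse(Float64, "-.097")
-0.097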


Regards


> julia> eltype(sort(unique(dane[:,4]))[3])
> Char
> 
> julia> (sort(unique(dane[:,4]))[3])
> "-.097"
> 
> julia> convert(Float64(sort(unique(dane[:,4]))[3]))
> ERROR: MethodError: `convert` has no method matching
> convert(::Type{Float64}, ::UTF8String)
> This may have arisen from a call to the constructor Float64(...),
> since type constructors fall back to convert methods.
> Closest candidates are:
>   call{T}(::Type{T}, ::Any)
>   convert(::Type{Float64}, ::Int8)
>   convert(::Type{Float64}, ::Int16)
>   ...
>  in call at essentials.jl:56
> 
> paul


Re: [julia-users] Good way to organize constants?

2016-11-16 Thread Milan Bouchet-Valat
On Wednesday, 16 November 2016 at 13:38, FANG Colin wrote:
> Is there going to be overhead if I create constant group modules and
> use them via module_name.abc?
Not that I know of.


Regards

> On 16 November 2016 at 13:31, Milan Bouchet-Valat <nalimi...@club.fr>
> wrote:
> > On Wednesday, 16 November 2016 at 04:18 -0800, FANG Colin wrote:
> > > Say, I have a few constants
> > >
> > > const VTYPE_BINARY = 'B'
> > > const VTYPE_INTEGER = 'I'
> > > const VTYPE_CONTINUOUS = 'C'
> > >
> > > What's a good way to have a namespace on it?
> > >
> > > So that I can use Vtype.BINARY, Vtype.INTEGER, Vtype.CONTINUOUS
> > >
> > > Should I put those in a separate module? Or create a type and
> > get an
> > > instance of it?
> > >
> > > module Vtype
> > >     const 
> > > end
> > >
> > > immutable Vtype
> > >     BINARY::Char
> > >     INTEGER::Char
> > >     CONTINUOUS::Char
> > > end
> > >
> > > const vtype = Vtype('B','I','C')?
> > Putting it in a module is fine if you want to regroup these under
> > the
> > same namespace. That's the standard practice with enums too (when
> > constants correspond to integer codes, which isn't the case here).
> > 
> > 
> > Regards
> > 
> 


Re: [julia-users] Good way to organize constants?

2016-11-16 Thread Milan Bouchet-Valat
On Wednesday, 16 November 2016 at 04:18 -0800, FANG Colin wrote:
> Say, I have a few constants
> 
> const VTYPE_BINARY = 'B'
> const VTYPE_INTEGER = 'I'
> const VTYPE_CONTINUOUS = 'C'
> 
> What's a good way to have a namespace on it?
> 
> So that I can use Vtype.BINARY, Vtype.INTEGER, Vtype.CONTINUOUS
> 
> Should I put those in a separate module? Or create a type and get an
> instance of it?
> 
> module Vtype
>     const 
> end
> 
> immutable Vtype
>     BINARY::Char
>     INTEGER::Char
>     CONTINUOUS::Char
> end
> 
> const vtype = Vtype('B','I','C')?
Putting it in a module is fine if you want to regroup these under the
same namespace. That's the standard practice with enums too (when
constants correspond to integer codes, which isn't the case here).
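For instance, a minimal sketch of the module approach:

module Vtype
    const BINARY = 'B'
    const INTEGER = 'I'
    const CONTINUOUS = 'C'
end

Vtype.BINARY  # 'B'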


Regards


Re: [julia-users] Sharing experience on packages

2016-11-16 Thread Milan Bouchet-Valat
On Tuesday, 15 November 2016 at 02:02 -0800, Jérôme Collet wrote:
> Hi all,
> 
> I am new to Julia, I used to use R. And using R packages, the main
> difficulty for me is the choice of a package for a given task. Most
> of the time, there are many packages solving the same problem, so we
> have to choose.
> A first possibility could be the TaskViews, but it is easy to see
> that around a third of all packages are listed in a TaskView, so being
> listed does not say anything about quality.
There already exists a similar tool for Julia:
http://svaksha.github.io/Julia.jl/

I guess we could advertise it more.

Regards

> Thanks to Rstudio, it is possible to know how many times a given
> package was downloaded. It is an indication about package quality.
> But this indication is difficult to obtain, and not very reliable.
> I heard that in Matlab, it is easy to know the experience about a
> package, or even a function of a package, compared to other similar
> functions in other packages.
> So, are there any plans to collect, store and share the experience on
> packages? 


Re: [julia-users] R's update(Update and Re-fit a Model) in Julia?

2016-11-15 Thread Milan Bouchet-Valat
On Monday, 14 November 2016 at 14:18 -0800, Hongwei Liu wrote:
> Hi guys,
> 
> I am new to Julia and I am having trouble finding a function in
> Julia equivalent to R's "update".
> 
> For example, set formula = y ~ x1 + x2
> 
> In R, I can use update(formula, D  ~ . ) to change the formula from y
> ~ x1 + x2 to D ~ x1 + x2
> 
> In Julia, the formula's type is DataFrames.Formula and I have
> searched online and Dataframes document for a long time but still
> couldn't find the answer.
> 
> So my question is:
> 
> Is there such a function in Julia? If not, is there a way to
> modify a formula directly?
I don't think we provide such a function yet, but you can easily do
that manually.

Use dump() to see what the formula object consists of:
julia> dump(y ~ x1 + x2)
DataFrames.Formula
  lhs: Symbol y
  rhs: Expr
head: Symbol call
args: Array{Any}((3,))
  1: Symbol +
  2: Symbol x1
  3: Symbol x2
typ: Any

Here, you can just change the lhs (left-hand side) field:
julia> f = y ~ x1 + x2
Formula: y ~ x1 + x2

julia> f.lhs = :D
:D

julia> f
Formula: D ~ x1 + x2


Regards

> Thanks a lot!!
> 
> Hongwei


Re: [julia-users] Iterating over non-zero entries of a sparse matrix

2016-11-09 Thread Milan Bouchet-Valat
On Wednesday, 9 November 2016 at 05:37 -0800, Christoph Ortner wrote:
> Is there as iterator implemented that allows me to iterate over all
> non-zero entries of a sparse matrix or vector? E.g. 
> 
> for (i, j, z) in nonzeros(A) 
> 
> 
> (I realise that nonzeros does something else!)
As the docs for nonzeros() say, have a look at nzrange().
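Something along these lines (a sketch based on the pattern the nzrange()
docs describe):

rows = rowvals(A)
vals = nonzeros(A)
for j in 1:size(A, 2)        # column index
    for k in nzrange(A, j)
        i = rows[k]          # row index
        v = vals[k]          # stored (structurally non-zero) value
    end
end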


Regards


Re: [julia-users] Re: Immutable type modification of SparseMatrixCSC

2016-11-07 Thread Milan Bouchet-Valat
On Sunday, 6 November 2016 at 15:31 -0800, Kristoffer Carlsson wrote:
> 
> 
> > For reference, here's the long discussion that happened before making 
> > that change: 
> > https://github.com/JuliaLang/julia/pull/16371 
> > 
> > Indeed I think Tony was right that this has undesirable consequences in 
> > terms of usability. Not being able to use the same in-place API for 
> > dense and sparse matrices is a no-go IMHO. 
> > 
> 
> Please elaborate. You cannot resize dense matrices in-place either.
>
> Just create a new matrix from the fields of the old one if you want
> to change the size. How does this limit usability?
Right, I was being sloppy. That remark applies to sparse vectors, though.


Regards

 
> > 
> > Regards 
> > 
> > > > On Saturday, 5 November 2016 at 20:11 -0700, vav...@uwaterloo.ca wrote:
> > > Just to clarify, you don't actually need to allocate the memory for 
> > > two large sparse matrices.  You can change colptr, rowval, and nzval 
> > > in place (as you have already done) and then create a new sparse 
> > > matrix using these modified arrays.  In this case, the arrays won't 
> > > be copied over; instead the new sparse matrix will get a pointer to 
> > > the already-allocated arrays.  This technique could lead to some 
> > > subtle program bugs (because if two sparse matrices share an nzval 
> > > array, then any change to one affects the other), but you can guard 
> > > against this by making the old matrix's name inaccessible, for 
> > > instance, by giving the new matrix the same name as the old one. 
> > > 
> > > -- Steve Vavasis 
> > > 
> > > 
> > > > I see. returning a new, smaller instance of the matrix was my 
> > > > temporary solution. Bummer that it's necessary... Is it really so 
> > > > strange to want to change the size of a  
> > > > 
> > > > Another alternative which occurs to me is to make some child class 
> > > > of SparseCSCMatrix with two more arrays, each with one element, 
> > > > which track the sizes? Since I am making the matrix smaller, not 
> > > > larger, I can't imagine that this would be an issue. However, if I 
> > > > need to allocate two large sparse matrices just to remove a row/col 
> > > > pair, then I can suddenly use half the memory I could before... 
> > > > 
> > > > 
> > > > > When a type is immutable, that means you can't modify the data in 
> > > > > the type.  If a field of a type is mutable, a reference to it is 
> > > > > stored in the type, and you cannot modify that reference.  In the 
> > > > > case of SparseMatrixCSC, that means you can't make A.colptr point 
> > > > > to a different array, for example.  You are still free to change 
> > > > > the values in A.colptr, because that does not require modifying 
> > > > > the reference.  If the field is immutable, then the value itself 
> > > > > is stored in the type, not a reference to the value, and you 
> > > > > cannot modify the value.  That's why you get the error in the 
> > > > > example. 
> > > > > 
> > > > > The best solution I can think of for your rmCol example is to 
> > > > > create a new SparseMatrixCSC that uses the same arrays as the old 
> > > > > one (to avoid increasing memory usage) and return it.  Replacing 
> > > > > the last line of rmCol with: 
> > > > > 
> > > > >   SparseMatrixCSC(m, n-1, A.colptr, A.rowval, A.nzval) 
> > > > > 
> > > > >   should do the trick.  The only sticky point is that the 
> > > > > original A now contains invalid data, so you'll have to make sure 
> > > > > only the matrix returned from rmCol is used in the future, 
> > > > > not the original A. 
> > > > > 
> > > > >   Jared Crean 
> > > > > 
> > > > > 
> > > > > > The SparseMatrixCSC type is immutable, which I understand to 
> > > > > > mean that I must respect the types of the attributes of 
> > > > > > SparseMatrixCSC types, as well as not changing the attributes 
> > > > > > themselves. 
> > > > > > 
> > > > > > The following gives a loadError (type is immutable).  I am 
> > > > > > confused that I can modify the colptr and rowval attributes of 
> > > > > > A, but not the A.n attribute. Why is this? I am trying to 
> > > > > > change the size of a sparse matrix after removing the data from 
> > > > > > a row and column. 
> > > > > > 
> > > > > > I have checked that both A.n and A.n - 1 are type Int. 
> > > > > > 
> > > > > > function rmCol(A::SparseMatrixCSC, rmCol::Int) 
> > > > > >     colRmRange = A.colptr[rmCol]:(A.colptr[rmCol+1]-1) 
> > > > > >     filter!(e -> e != rmCol, A.colptr) 
> > > > > >      
> > > > > >     deleteat!(A.rowval, rowval[colRmRange]) 
> > > > > >     deleteat!(A.nzval, colRmRange) 
> > > > > >     A.n -= 1 # inclusion of this line throws error 
> > > > > >      
> > > > > > end 
> > > > > > 
> > > > > 
> > > > 


Re: [julia-users] Re: converting binary string into integer

2016-11-06 Thread Milan Bouchet-Valat
On Sunday, 6 November 2016 at 10:13 -0800, Alberto Barradas wrote:
> Hi guys,
> Now that `parseint()` got removed for version 0.5, Is `parse()` the
> only way to do this?
>  How could I parse binary into a BigInt? More specifically, I want to
> see the integer number of the arecibo message. (73x23 so 1679 binary
> digits into a big int)
parse() is meant to be called on Julia expressions.
Use parse(Int, ...) or parse(BigInt, ...) depending on your needs.
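For example (a quick sketch; on 0.5 the base is passed as a positional
third argument):

julia> parse(Int, "101101", 2)
45

julia> parse(BigInt, "1"^70, 2)
1180591620717411303423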



Regards


Re: [julia-users] Re: [Announcement] Moving to Discourse (Statement of Intent)

2016-11-06 Thread Milan Bouchet-Valat
On Sunday, 6 November 2016 at 01:49 -0800, Andreas Lobinger wrote:
> Hello colleague,
> 
> > The Julia community has been growing rapidly over the last few
> > years and discussions are happening at many different places: there
> > are several Google Groups (julia-users, julia-dev, ...), IRC,
> > Gitter, and a few other places. Sometimes packages or organisations
> > also have their own forums and chat rooms.
> > 
> > In the past, Discourse has been brought up as an alternative
> > platform that we could use instead of Google Groups and that would
> > allow us to invite the entire Julia community into one space.
> > 
> 
> What problem with the julia-users mailing-list and the google groups
> web interface is solved by using discourse?
You can have a look at the previous thread about this:
https://groups.google.com/d/msg/julia-users/4oDqW-QxyVA/lw71uqNGBQAJ

I also encourage you to give Discourse a try (you can do so on their
test instance without even creating an account).

> Why do you think (can you prove?) more centralisation will happen
> with discourse?
Centralization will be made possible by allowing for sub-forums
dedicated to each topic (stats, optimization, data...) inside Discourse
itself, instead of creating totally separate mailing lists as is
currently done. Of course people will still be free to use something
else, but that's quite unlikely.

> > We would like to solicit feedback from the broader Julia community
> > about moving julia-users to Discourse as well, and potentially
> > other mailing lists like julia-stats.
> > 
> 
> Please define 'We'. 
"We" meant "Julia core developers".

> > If you have feedback or comments, please post them
> > at http://discourse.julialang.org/t/migration-of-google-groups-to-
> > discourse or in this thread.
> > 
> 
> In some parts of the world, asking for feedback on a topic via a
> different medium is seen as an unfriendly act ... but still there is
> this thread.
The idea was that we would like to see how well it works by having
people use Discourse for this discussion. But as you noted there's this
thread for people who don't want to do that.


Regards

> Wishing a happy day,
>  Andreas


Re: [julia-users] Re: Immutable type modification of SparseMatrixCSC

2016-11-06 Thread Milan Bouchet-Valat
For reference, here's the long discussion that happened before making
that change:
https://github.com/JuliaLang/julia/pull/16371

Indeed I think Tony was right that this has undesirable consequences in
terms of usability. Not being able to use the same in-place API for
dense and sparse matrices is a no-go IMHO.


Regards

On Saturday, 5 November 2016 at 20:11 -0700, vava...@uwaterloo.ca wrote:
> Just to clarify, you don't actually need to allocate the memory for
> two large sparse matrices.  You can change colptr, rowval, and nzval
> in place (as you have already done) and then create a new sparse
> matrix using these modified arrays.  In this case, the arrays won't
> be copied over; instead the new sparse matrix will get a pointer to
> the already-allocated arrays.  This technique could lead to some
> subtle program bugs (because if two sparse matrices share an nzval
> array, then any change to one affects the other), but you can guard
> against this by making the old matrix's name inaccessible, for
> instance, by giving the new matrix the same name as the old one.
> 
> -- Steve Vavasis
> 
> 
> > I see. returning a new, smaller instance of the matrix was my
> > temporary solution. Bummer that it's necessary... Is it really so
> > strange to want to change the size of a 
> > 
> > Another alternative which occurs to me is to make some child class
> > of SparseCSCMatrix with two more arrays, each with one element,
> > which track the sizes? Since I am making the matrix smaller, not
> > larger, I can't imagine that this would be an issue. However, if I
> > need to allocate two large sparse matrices just to remove a row/col
> > pair, then I can suddenly use half the memory I could before...
> > 
> > 
> > > When a type is immutable, that means you can't modify the data in
> > > the type.  If a field of a type is mutable, a reference to it is
> > > stored in the type, and you cannot modify that reference.  In the
> > > case of SparseMatrixCSC, that means you can't make A.colptr point
> > > to a different array, for example.  You are still free to change
> > > the values in A.colptr, because that does not require modifying
> > > the reference.  If the field is immutable, then the value itself
> > > is stored in the type, not a reference to the value, and you
> > > cannot modify the value.  That's why you get the error in the
> > > example.
> > > 
> > > The best solution I can think of for your rmCol example is to
> > > create a new SparseMatrixCSC that uses the same arrays as the old
> > > one (to avoid increasing memory usage) and return it.  Replacing
> > > the last line of rmCol with:
> > > 
> > >   SparseMatrixCSC(m, n-1, A.colptr, A.rowval, A.nzval)
> > > 
> > >   should do the trick.  The only sticky point is that the
> > > original A now contains invalid data, so you'll have to make sure
> > > only the matrix returned from rmCol is used in the future,
> > > not the original A.
> > > 
> > >   Jared Crean
> > > 
> > > 
> > > > The SparseMatrixCSC type is immutable, which I understand to
> > > > mean that I must respect the types of the attributes of
> > > > SparseMatrixCSC types, as well as not changing the attributes
> > > > themselves.
> > > > 
> > > > The following gives a loadError (type is immutable).  I am
> > > > confused that I can modify the colptr and rowval attributes of
> > > > A, but not the A.n attribute. Why is this? I am trying to
> > > > change the size of a sparse matrix after removing the data from
> > > > a row and column.
> > > > 
> > > > I have checked that both A.n and A.n - 1 are type Int.
> > > > 
> > > > function rmCol(A::SparseMatrixCSC, rmCol::Int)
> > > >     colRmRange = A.colptr[rmCol]:(A.colptr[rmCol+1]-1)
> > > >     filter!(e -> e != rmCol, A.colptr)
> > > >     
> > > >     deleteat!(A.rowval, rowval[colRmRange])
> > > >     deleteat!(A.nzval, colRmRange)
> > > >     A.n -= 1 # inclusion of this line throws error
> > > >     
> > > > end
> > > > 
> > > 
> > 


Re: [julia-users] Question: Forcing readtable to create string type on import

2016-11-03 Thread Milan Bouchet-Valat
On Thursday, 3 November 2016 at 13:35 -0700, LeAnthony Mathews wrote:
> Thanks Michael,
>   I've been thinking about this all day.  Yes, basically I am going to
> have to create a macro CSVreadtable that mimics the readtable
> command, but in the expansion uses CSV.read.  The macro will manually
> construct a similar, readtable-sized dataframe array, but use the
> column types I specify or inherit from the original readtable
> command.  The macro can use the current CSV.read parameters.
> 
> So this would work.
> df1_CSVreadtable = CSVreadtable("$df1_path"; types=Dict(1=>String))  
> 
> so a:
> eltypes(df1_CSVreadtable)
> 3-element Array{Type,1}:
>  Int32   
>  String
>  String
> 
> 
>   Anyway, I was looking for a quick fix, but at least I will learn
> some Julia.
If you don't have missing values and just want a Vector{String}, you
can pass nullable=false to CSV.read().
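Something like this (an untested sketch, reusing the names from your
example):

df1 = CSV.read(df1_path; types=Dict(1=>String), nullable=false)
eltypes(df1)   # plain String/Int columns instead of Nullable ones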


Regards

> 
> 
> > DataFrames is currently undergoing a very major change. Looks like
> > CSV creates the new type of DataFrames. I hope someone can help you
> > with using that. As a workaround, on the normal DataFrames version,
> > I have generally just replaced with a string representation:
> > ```
> > df[:account_numbers] = ["$account_number" for account_number in
> > df[:account_numbers]]
> > 
> > On Thu, Nov 3, 2016 at 3:05 PM, LeAnthony Mathews wrote:
> > > Sure, so I need col #1 in my CSV to be a string in my data frame.
> > >   
> > > 
> > > So as a test  I tried to load the file 3 different ways:
> > > 
> > > df1_CSV = CSV.read("$df1_path"; types=Dict(1=>String))  #forcing
> > > the column to stay a string
> > > df1_readtable = readtable("$df1_path")  #Do not know how to force
> > > the column to stay a string
> > > df1_convertDF = convert(DataFrame, df1_CSV)
> > > 
> > > Here is the output:  If they are all dataframes then showcols
> > > should work on all three df1:
> > > 
> > > julia> names(df1_CSV)
> > > 3-element Array{Symbol,1}:
> > >  :account_number
> > >  Symbol("Discharge Date")
> > >  :site
> > > 
> > > julia> names(df1_readtable)
> > > 3-element Array{Symbol,1}:
> > >  :account_number
> > >  :Discharge_Date
> > >  :site
> > > 
> > > julia> names(df1_convertDF)
> > > 3-element Array{Symbol,1}:
> > >  :account_number
> > >  Symbol("Discharge Date")
> > >  :site
> > > 
> > > 
> > > julia> eltypes(df1_CSV)
> > > 3-element Array{Type,1}:
> > >  Nullable{String}
> > >  Nullable{WeakRefString{UInt8}}
> > >  Nullable{WeakRefString{UInt8}}
> > > 
> > > julia> eltypes(df1_readtable)
> > > 3-element Array{Type,1}:
> > >  Int32   #Do not know how to force the column to stay a string
> > >  String
> > >  String
> > > 
> > > julia> eltypes(df1_convertDF)
> > > 3-element Array{Type,1}:
> > >  Nullable{String}
> > >  Nullable{WeakRefString{UInt8}}
> > >  Nullable{WeakRefString{UInt8}}
> > > 
> > > julia> showcols(df1_convertDF)
> > > 1565x3 DataFrames.DataFrame
> > > ERROR: MethodError: no method matching
> > > countna(::NullableArrays.NullableArray{St
> > > ring,1})
> > > Closest candidates are:
> > >   countna(::Array{T,N}) at
> > > C:\Users\lmathews\.julia\v0.5\DataFrames\src\other\ut
> > > ils.jl:115
> > >   countna(::DataArrays.DataArray{T,N}) at
> > > C:\Users\lmathews\.julia\v0.5\DataFram
> > > es\src\other\utils.jl:128
> > >   countna(::DataArrays.PooledDataArray{T,R<:Integer,N}) at
> > > C:\Users\lmathews\.ju
> > > lia\v0.5\DataFrames\src\other\utils.jl:143
> > >  in colmissing(::DataFrames.DataFrame) at
> > > C:\Users\lmathews\.julia\v0.5\DataFram
> > > es\src\abstractdataframe\abstractdataframe.jl:657
> > >  in showcols(::Base.TTY, ::DataFrames.DataFrame) at
> > > C:\Users\lmathews\.julia\v0.
> > > 5\DataFrames\src\abstractdataframe\show.jl:574
> > >  in showcols(::DataFrames.DataFrame) at
> > > C:\Users\lmathews\.julia\v0.5\DataFrames
> > > \src\abstractdataframe\show.jl:581
> > > 
> > > julia> showcols(df1_readtable)
> > > 1565x3 DataFrames.DataFrame
> > > │ Col # │ Name           │ Eltype │ Missing │
> > > ├───────┼────────────────┼────────┼─────────┤
> > > │ 1     │ account_number │ Int32  │ 0       │
> > > │ 2     │ Discharge_Date │ String │ 0       │
> > > │ 3     │ site           │ String │ 0       │
> > > 
> > > julia> showcols(df1_CSV)
> > > 1565x3 DataFrames.DataFrame
> > > ERROR: MethodError: no method matching
> > > countna(::NullableArrays.NullableArray{St
> > > ring,1})
> > > Closest candidates are:
> > >   countna(::Array{T,N}) at
> > > C:\Users\lmathews\.julia\v0.5\DataFrames\src\other\ut
> > > ils.jl:115
> > >   countna(::DataArrays.DataArray{T,N}) at
> > > C:\Users\lmathews\.julia\v0.5\DataFram
> > > es\src\other\utils.jl:128
> > >   countna(::DataArrays.PooledDataArray{T,R<:Integer,N}) at
> > > C:\Users\lmathews\.ju
> > > lia\v0.5\DataFrames\src\other\utils.jl:143
> > >  in colmissing(::DataFrames.DataFrame) at
> > > C:\Users\lmathews\.julia\v0.5\DataFram
> > > es\src\abstractdataframe\abstractdataframe.jl:657
> > >  in showcols(::Base.TTY, ::DataFrames.DataFrame) at
> > > 

Re: [julia-users] package reproducibility

2016-10-31 Thread Milan Bouchet-Valat
On Friday, 28 October 2016 at 00:24 -0700, Kevin Kunzmann wrote:
> Hey,
> 
> I was just wondering whether Julia has a checkpoint-like
> functionality (R checkpoint-package) for using a specific checkpoint
> of the package ecosystem. With quick development happening this would
> improve reproducibility drastically.
FWIW, this is one of the main features planned for the next version of
the package management system (codenamed Pkg3). These would be called
"environments".


Regards


Re: [julia-users] parse Unicode string to Float64

2016-10-25 Thread Milan Bouchet-Valat
On Monday, 24 October 2016 at 21:44 -0700, Chris Stook wrote:
> I'm trying to parse a text file which contains some floating point
> numbers.  The number 2.5 is represented by the string
> "\x002\0.\x005\0".  Parse will not convert this to a Float64.  Print
> works (prints "2.5") in Atom and Jupyter, but not in the REPL.
> 
> _
> _       _ _(_)_     |  A fresh approach to technical computing
> (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
> _ _   _| |_  __ _   |  Type "?help" for help.
> | | | | | | |/ _` |  |
> | | |_| | | | (_| |  |  Version 0.5.0 (2016-09-19 18:14 UTC)
> _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
> |__/                   |  x86_64-w64-mingw32
> 
> julia> print("\x002\0.\x005\0")
> �2�.�5�
> julia> parse(Float64,"\x002\0.\x005\0")
> ERROR: ArgumentError: invalid number format "\x002\0.\x005\0" for
> Float64
> in parse(::Type{Float64}, ::String) at .\parse.jl:167
> 
> julia>
> 
> I am not familiar with Unicode.  Is the Unicode valid?  How should I
> convert this to a Float?  I do not have control over the input file.
"Unicode" doesn't refer to a specific encoding. Julia expects UTF-8,
but this appears to be UTF-16BE (which is often loosely called
"Unicode" in the Windows world).

You can use my StringEncodings package to decode the file to a Julia
string (see the README):

julia> using StringEncodings

julia> parse(Float64, decode("\x002\0.\x005\0".data, "UTF-16BE"))
2.5


Regards


Re: [julia-users] Named for loops?

2016-10-24 Thread Milan Bouchet-Valat
On Monday, 24 October 2016 at 08:05 -0400, Isaiah Norton wrote:
> 
> 
> On Monday, October 24, 2016, Angel de Vicente wrote:
> > Hi,
> > 
> > I don't see it in the documentation, but I'm wondering if there is
> > a way
> > to have named nested loops, so that one can specify those names to
> > break
> > and continue, in order to have more control.
> > 
> > In the following examples, continue always skips the remaining loop
> > for
> > j, while break terminates the j loop in the first example and
> > terminates
> > the single outer loop in the second (I guess it can be a bit
> > confusing
> > at first, but I can understand the design).
> > 
> > ,
> > | println("nested")
> > | for i in 1:3
> > |     for j in 1:6
> > |         if 2 < j < 4 continue end
> > |         if j > 5 break end
> > |         @printf("i: %i j: %i \n",i,j)
> > |     end
> > | end
> > |
> > | println("single outer loop")
> > | for i in 1:3, j in 1:6
> > |     if 2 < j < 4 continue end
> > |     if j > 5 break end
> > |     @printf("i: %i j: %i \n",i,j)
> > | end
> > `
> > 
> > But I'm used to Fortran, where one can do things like this (CYCLE
> > ==
> > continue ; EXIT == break), and you specify the loop that the
> > instruction
> > applies to.
> > 
> > ,
> > |   INA: DO a = 1,1000
> > |     INB: DO b=a+1,1000
> > |         IF (a+b .GT. 1000) CYCLE INA
> > |         INC: DO c=b+1,1000
> > |            IF (a+b+c .GT. 1000) CYCLE INB
> > |            IF (a+b+c .EQ. 1000 .AND. a**2 + b**2 .EQ. c**2) THEN
> > |               PRINT*, a*b*c
> > |               EXIT INA
> > |            END IF
> > |         END DO INC
> > |      END DO INB
> > |   END DO INA
> > `
> > 
> > 
> > Is it possible now (or in the near future) to have named nested
> > loops in
> > Julia?
> 
> No, and I'm not aware of any plan to support this. But you can create
> custom control flow with '@goto', so it could probably be done
> with a macro.
A way to get out of multiple loops has actually been discussed, and the
consensus appears to be that it's a good idea:
https://github.com/JuliaLang/julia/issues/5334

Though it's not a high-priority feature since @goto works quite well.
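For the record, a sketch of the @goto workaround (wrapped in a function
here):

function nested_demo()
    for i in 1:3
        for j in 1:6
            if i + j > 6
                @goto done       # jumps out of both loops at once
            end
            println("i: $i j: $j")
        end
    end
    @label done
    return nothing
end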


Regards

> > 
> > Thanks,
> > --
> > Ángel de Vicente
> > http://www.iac.es/galeria/angelv/
> > 


Re: [julia-users] Selecting rows from a DataFrame using a filter or predicate

2016-10-20 Thread Milan Bouchet-Valat
On Wednesday, 19 October 2016 at 13:51 -0700, Dean Schulze wrote:
> I have a DataFrame
> 
> julia> df
> 252931×2 DataFrames.DataFrame
> │ Row    │ x    │ y       │
> ├────────┼──────┼─────────┤
> │ 1      │ 0    │ 3   │
> │ 2      │ 0    │ 6   │
> │ 3      │ 0    │ 124800  │
> │ 4      │ 0    │ 19  │
> │ 5      │ 0    │ 20  │
> │ 6      │ 0    │ 204800  │
> │ 7      │ 0    │ 224800  │
> │ 8      │ 0    │ 234800  │
> ⋮
> │ 252923 │ 4999 │ 3364800 │
> │ 252924 │ 4999 │ 3374800 │
> │ 252925 │ 4999 │ 339 │
> │ 252926 │ 4999 │ 3434800 │
> │ 252927 │ 4999 │ 3464800 │
> │ 252928 │ 4999 │ 349 │
> │ 252929 │ 4999 │ 351 │
> │ 252930 │ 4999 │ 3534800 │
> │ 252931 │ 4999 │ 354 │
> 
> 
> I need to work with it in sub-DataFrames due to its size.  I tried
> filtering using this syntax
> 
> df_missing_timestamps_normalized[1:1, :(y < 10)]
> 
> df_missing_timestamps_normalized[1:1, :(y -> y < 10)]
> 
> but they both throw errors.
> 
> How do I select sub-DataFrames based on a predicate?
Try with df[df[:y] .< 100, :].

You can also have a look at the DataFramesMeta.jl, Query.jl and
StructuredQueries.jl packages for nicer ways of working with data sets.
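For example, with DataFramesMeta (assuming its @where macro fits your
case):

using DataFramesMeta
sub = @where(df, :y .< 100)   # rows of df where y < 100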


Regards


Re: [julia-users] UTF-8, how to process text data

2016-10-19 Thread Milan Bouchet-Valat
On Wednesday, 19 October 2016 at 06:02 -0700, programista...@gmail.com wrote:
> Version 0.3.12, update to 0.5?
Yes. 0.3.x versions have been unsupported for some time now.


Regards

> > On Wednesday, 19 October 2016 at 04:46 -0700, program...@gmail.com wrote:
> > > The data file is encoded in UTF-8 but I can't process this data in Julia?
> > > What's wrong?
> > > 
> > > io=open("data.txt") 
> > > 
> > > julia> temp=readline(io) 
> > > "3699778,13,2,gdbiehz jablej gupując szybgi Injehnej dg 26 
> > > paździehniga,1\n" 
> > > 
> > > julia> temp[61:65] 
> > > "aźdz" 
> > > 
> > > julia> findin(temp[61:65],"d") 
> > > ERROR: invalid UTF-8 character index 
> > >  in next at utf8.jl:64 
> > >  in findin at array.jl:1179 
> > You didn't say what version of Julia you're using. The bug seems
> > to 
> > happen on 0.4.7, but not on 0.5.0, so I'd encourage you to
> > upgrade. 
> > 
> > (Note that in general you shouldn't index into strings with
> > arbitrary 
> > integers: only values referring to the beginning of a Unicode code 
> > point are valid.) 
> > 
> > 
> > Regards 
>  


Re: [julia-users] matrix multiplications

2016-10-19 Thread Milan Bouchet-Valat
On Tuesday, 18 October 2016 at 15:28 -0700, Steven G. Johnson wrote:
> 
> 
> > Since it uses the in-place assignment operator .= it could be made
> > to work as desired, but there's some designing to do.
> > 
> 
> The problem is that it doesn't know that * is a matrix multiplication
> until compile-time.
> 
> In any case, I don't think there's a huge amount to be gained from
> special syntax here.  Unlike broadcast operations, matrix
> multiplications cannot in general be fused with other operations.  
> So you might as well do A_mul_B!.
I think the biggest gain would be discoverability and consistency with
other in-place operations. A_mul_B! isn't the most Julian of our APIs
(to say the least)...
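For reference, a minimal sketch of the current in-place spelling:

A = rand(3, 3); B = rand(3, 3)
C = zeros(3, 3)          # preallocated output
A_mul_B!(C, A, B)        # computes A*B and stores the result in C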


Regards


Re: [julia-users] UTF-8, how to process text data

2016-10-19 Thread Milan Bouchet-Valat
On Wednesday, 19 October 2016 at 04:46 -0700, programista...@gmail.com wrote:
> The data file is encoded in UTF-8 but I can't process this data in Julia?
> What's wrong?
> 
> io=open("data.txt")
> 
> julia> temp=readline(io)
> "3699778,13,2,gdbiehz jablej gupując szybgi Injehnej dg 26
> paździehniga,1\n"
> 
> julia> temp[61:65]
> "aźdz"
> 
> julia> findin(temp[61:65],"d")
> ERROR: invalid UTF-8 character index
>  in next at utf8.jl:64
>  in findin at array.jl:1179
You didn't say what version of Julia you're using. The bug seems to
happen on 0.4.7, but not on 0.5.0, so I'd encourage you to upgrade.

(Note that in general you shouldn't index into strings with arbitrary
integers: only values referring to the beginning of a Unicode code
point are valid.)
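For example, a sketch of index-safe access:

for i in eachindex(temp)    # yields only valid character indices
    c = temp[i]
end

search(temp, 'd')           # index of the first 'd', or 0 if absent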


Regards


Re: [julia-users] Dataframe without column and row names ?

2016-10-17 Thread Milan Bouchet-Valat
On Monday, 17 October 2016 at 06:32 +0200, henri.gir...@gmail.com wrote:
> In fact I don't know how to make the nice border but I noticed
> dataframe 
> does it ...
> 
> I only need the nice border for the matrix, but I don't know how to
> do it ?
> 
> Maybe I should ask how to make a nice border in a matrix ?
DataFrames' main feature isn't to provide nice borders... You can try
overriding the show() method for Matrix if you want borders.


Regards

> Regards
> 
> Henri
> 
> 
> On 15/10/2016 at 22:32, Milan Bouchet-Valat wrote:
> > 
> > On Friday, 14 October 2016 at 19:59 -0700, Henri Girard wrote:
> > > 
> > > Hi,
> > > Is it possible to have a table with only the result ?
> > > I don't want row /column names.
> > So why do you create a data frame? Isn't a Matrix enough?
> > 
> > 
> > Regards
> > 
> > > 
> > > using DataFrames
> > > function iain_magic(n::Int)
> > >  M = zeros(Int, n, n)
> > >  for I = 1:n, J = 1:n
> > >  @inbounds M[I,J] = n*((I+J-1+(n >> 1))%n)+((I+2J-2)%n) +
> > > 1
> > >  end
> > >  return M
> > > end
> > > mm=iain_magic(3)
> > > df=DataFrame(mm)


Re: [julia-users] How to determine which functions to overload, or, who is at the bottom of the function chain?

2016-10-16 Thread Milan Bouchet-Valat
On Saturday, 15 October 2016 at 20:36 -0700, colintbow...@gmail.com wrote:
> Hi all,
> 
> Twice now I've thought I had overloaded the appropriate functions for
> a new type, only to observe apparent inconsistencies in the way the
> new type behaves. Of course, there were no inconsistencies. Instead,
> the observed behaviour stemmed from overloading a function that is
> not at the bottom of the function chain. The two examples where I
> stuffed up were:
> 
> 1) overloading Base.< instead of overloading Base.isless, and
In this case, the help is quite explicit:
help?> <
search: < <= << <: .< .<= .<<

  <(x, y)

  Less-than comparison operator. New numeric types should implement this
  function for two arguments of the new type. Because of the behavior of
  floating-point NaN values, < implements a partial order. Types with a
  canonical partial order should implement <, and types with a canonical total
  order should implement isless.
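So for a new scalar-like type with a canonical total order, a minimal
sketch would be:

immutable MyVal
    x::Int
end
Base.isless(a::MyVal, b::MyVal) = isless(a.x, b.x)  # <, sort(), min() etc. fall back to this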

> 2) overloading Base.string(x) instead of overloading Base.show(io,
> x).
This one is a bit trickier, since the printing code is complex, and not
completely stabilized yet. Though the help still gives some hints:

help?> string
search: string String stringmime Cstring Cwstring RevString RepString
readstring

  string(xs...)

  Create a string from any values using the print function.

So the more fundamental function to override is print(). The help for
print() says it falls back to show() if there's no print() method for a
given type. So if you don't have a special need for print(), override
show().
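A minimal sketch of that:

immutable Point
    x::Float64
    y::Float64
end
Base.show(io::IO, p::Point) = print(io, "Point(", p.x, ", ", p.y, ")")
# string(), print() and the REPL display all end up calling this show() method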

> My question is this: What is the communities best solution/resource
> for knowing which functions are at the bottom of the chain and thus
> are the ones that need to be overloaded for a new type?
In general, look at the help for a function. If there's no answer
(which is most likely a gap in the documentation that should be
reported), look for it in the manual. The latter can always be useful,
even if the help already gives a reply.

But documentation is perfectible, so do not hesitate to ask questions
and suggest enhancements (ideally via pull requests when you have found
out how it works).


Regards


> Cheers and thanks in advance to all responders,
> 
> Colin


Re: [julia-users] Dataframe without column and row names ?

2016-10-16 Thread Milan Bouchet-Valat
On Friday, 14 October 2016 at 19:59 -0700, Henri Girard wrote:
> Hi,
> Is it possible to have a table with only the result ?
> I don't want row /column names.
So why do you create a data frame? Isn't a Matrix enough?


Regards

> using DataFrames
> function iain_magic(n::Int)
>     M = zeros(Int, n, n)
>     for I = 1:n, J = 1:n
>     @inbounds M[I,J] = n*((I+J-1+(n >> 1))%n)+((I+2J-2)%n) + 1
>     end
>     return M
> end
> mm=iain_magic(3)
> df=DataFrame(mm) 


Re: [julia-users] why do we have Base.isless(a, ::NAtype) but not Base.isless(a, ::Nullable)?

2016-10-14 Thread Milan Bouchet-Valat
On Thursday, 13 October 2016 at 15:40 +0200, Florian Oswald wrote:
> I'm trying to understand why we don't have something similar in terms
> of comparison for Nullable as we have for DataArrays NAtype (below).
> Pointing me to the relevant GitHub conversation, if any, is fine.
Such a method already exists in NullableArrays and in Julia 0.6. See
https://github.com/JuliaLang/julia/pull/18304


Regards

> How would I implement methods to find the maximum of an
> Array{Nullable{Float64}}? Like so?
> 
> Base.isless(a::Any, x::Nullable{Float64}) = isnull(x) ? true :
> Base.isless(a,get(x))
> 
> 
> ~/.julia/v0.5/DataArrays/src/operators.jl:502
> 
> #
> # Comparison operators
> #
> 
> Base.isequal(::NAtype, ::NAtype) = true
> Base.isequal(::NAtype, b) = false
> Base.isequal(a, ::NAtype) = false
> Base.isless(::NAtype, ::NAtype) = false
> Base.isless(::NAtype, b) = false
> Base.isless(a, ::NAtype) = true
> 


Re: [julia-users] Re: What's the status of SIMD instructions from a user's perspective in v0.5?

2016-10-14 Thread Milan Bouchet-Valat
On Thursday, 13 October 2016 at 07:27 -0700, Florian Oswald wrote:
> 
> Hi Erik,
> 
> that's great thanks. I may have a hot inner loop where this could be
> very helpful. I'll have a closer look and come back with any
> questions later on if that's ok. 
Maybe I'm stating the obvious, but you don't need to manually use SIMD
types to get SIMD instructions in simple/common cases. For example, the
following high-level generic code uses SIMD instructions on my machine
when passed standard vectors:

function add!(x::AbstractArray, y::AbstractArray)
    @inbounds for i in eachindex(x, y)
        x[i] += y[i]
    end
end


Regards

> 
> cheers
> florian
> 
> > 
> > If you want to use the SIMD package, then you need to manually
> > vectorize the code. That is, all (most of) the local variables
> > you're using will have a SIMD `Vec` type. For convenience, your
> > input and output arrays will likely still hold scalar values, and
> > the `vload` and vstore` functions access scalar arrays,
> > reading/writing SIMD vectors. The function you quote above (from
> > the SIMD examples) does just this.
> > 
> > What vector length `N` is best depends on the particular machine.
> > Usually, you would look at the CPU instruction set and choose the
> > largest SIMD vector size that the CPU supports, but sometimes twice
> > that size or half that size might also work well. Note that using a
> > larger SIMD vector size roughly corresponds to loop unrolling,
> > which might be beneficial if the compiler isn't clever enough to do
> > this automatically.
> > 
> > There's additional complication if the array size is not a multiple
> > of the vector size. In this case, extending the array via dummy
> > elements is often the easiest way to go.
> > 
> > Note that SIMD vectorization is purely a performance improvement.
> > It does not make sense to make such changes without measuring
> > performance before and after. Given the low-level nature of the
> > changes, looking at the generated assembler code via `@code_native`
> > is usually also insightful.
> > 
> > I'll be happy to help if you have a specific problem on which
> > you're working.
> > 
> > -erik
> > 
> > 
> > > On Thu, Oct 13, 2016 at 9:51 AM, Florian Oswald wrote:
> > > 
> > > ok thanks! and so I should define my SIMD-able function like
> > > 
> > > function vadd!{N,T}(xs::Vector{T}, ys::Vector{T}, ::Type{Vec{N,T}})
> > >     @assert length(ys) == length(xs)
> > >     @assert length(xs) % N == 0
> > >     @inbounds for i in 1:N:length(xs)
> > >         xv = vload(Vec{N,T}, xs, i)
> > >         yv = vload(Vec{N,T}, ys, i)
> > >         xv += yv
> > >         vstore(xv, xs, i)
> > >     end
> > > end
> > > i.e. using vload() and vstore() methods?
> > > 
> > > > 
> > > > If you want explicit simd the best way right now is the great
> > > > SIMD.jl package https://github.com/eschnett/SIMD.jl  it is
> > > > builds on top of VecElement.
> > > > 
> > > > In many cases we can perform automatic vectorisation, but you
> > > > have to start Julia with -O3
> > > > 
> > > > > 
> > > > > I see in the docs (http://docs.julialang.org/en/release-0.5/stdlib/simd-types/?highlight=SIMD)
> > > > > that there is a VecElement type that is built for SIMD support.
> > > > > I don't understand if, as a user, I should construct VecElement
> > > > > arrays and hope for some SIMD optimization? Thanks.
> > > > > 
> > > > > 
> > > > 
> > > 
> > 
> > 
> > 
> > -- 
> > Erik Schnetter  http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: why do we have Base.isless(a, ::NAtype) but not Base.isless(a, ::Nullable)?

2016-10-14 Thread Milan Bouchet-Valat
On Thursday, 13 October 2016 at 06:45 -0700, Florian Oswald wrote:
> I mean, do I have to cycle through the array and basically clean it
> of #NULL before finding the maximum or is there another way?
Currently you have two solutions:
julia> using NullableArrays

julia> x = NullableArray([1, 2, 3, Nullable()])
4-element NullableArrays.NullableArray{Int64,1}:
 1
 2
 3
 #NULL

julia> minimum(x, skipnull=true)
Nullable{Int64}(1)


Or:

julia> minimum(dropnull(x))
1


Regards

> > i'm trying to understand why we don't have something similar in
> > terms of comparison for Nullable as we have for DataArrays NAtype
> > (below). Pointing me to the relevant GitHub conversation, if any, is
> > fine. 
> > 
> > How would I implement methods to find the maximum of an
> > Array{Nullable{Float64}}? Like so?
> > 
> > Base.isless(a::Any, x::Nullable{Float64}) = isnull(x) ? true :
> > Base.isless(a,get(x))
> > 
> > 
> > ~/.julia/v0.5/DataArrays/src/operators.jl:502
> > 
> > #
> > # Comparison operators
> > #
> > 
> > Base.isequal(::NAtype, ::NAtype) = true
> > Base.isequal(::NAtype, b) = false
> > Base.isequal(a, ::NAtype) = false
> > Base.isless(::NAtype, ::NAtype) = false
> > Base.isless(::NAtype, b) = false
> > Base.isless(a, ::NAtype) = true
> > 
> > 


Re: [julia-users] Re: translating julia

2016-10-10 Thread Milan Bouchet-Valat
(Sorry, this message was apparently blocked since Saturday. Sending it
again.)

On Saturday, 8 October 2016 at 07:59 -0700, Ismael Venegas Castelló wrote:
> 
> I got a mail today and I published a response, but I'm not sure why
> it doesn't show up here, so I'll paste it verbatim:
OK, I had kept it private to avoid sounding too negative and risking
discouraging contributors. But since you made it public... :-)

> 
> Hi Milan! 
> 
> Thank you for stating your concern, 
> 
> > 
> >  but I'm worried that we completely destroy the professional aspect
> of the website
> 
> I agree with you but, I think that we need a banner in the julia
> website stating that the translations are not done by professionals
> but by volunteers and also inviting them to join and improve the
> translations in their own languages. 
> 
> I'll activate another option that is more strict, in which only
> reviewed translations will be able to be shown in the website (as
> opposed to only translated but un reviewed ones), but anyway, since
> this is a collaborative effort we need to work as a team. For example
> Transifex doesn't prohibit someone of reviewing his/her own
> translations, even when I've tried to make clear that this is not a
> good practice, there is no way to enforce it.
OK, it's great that you can require reviews before publishing. That
should really improve the quality of translations, even if people can
cheat (in general, I think we can assume good faith from contributors).

> 
> However, please join the French team and un review, comment and
> correct the strings in order to improve the quality, there is no
> other way (other than paying for professional translations).
> 
> The team is small but growing, and the project has just started, if
> we state publicly that the translations are crowd sourced by the
> community and an ongoing progress, then I'm sure no one will expect
> them to be of professional grade yet, and no reputation will be
> affected.
> 
> Expecting professional translation from the get-go is unrealistic
> without investing money into the project. And waiting for all the
> crowd sourced translations to become of professional grade, would
> kill the motivation of the contributors, as this has already
> happened, the project was stagnant for 1+ year (with the
> infrastructure already being ready and tested). If the project had
> continued, maybe today we would already have much more complete
> translations of near professional grade.
> 
> So I think the best way to approach this is not to be conservative,
> but to be open and transparent, so we can get more help and others
> won't be disappointed by the current status, and the reputation of
> all, not only of the Julia project, but also of the contributors that
> are willing to translate for us, remains intact and even become more
> positive.
Sure, I don't expect a professional level from the start. But I think
we'd better validate the translations slowly so that they are only
online when teams are confident about them, than rush to publish poor
quality translations. As you note, teams need some time to be set up.

I've just joined the French team and made a suggestion, but I'm not
sure I have the permissions to review other's translations. For
example, who's in charge of deciding whether my suggestion is better
than the existing one?


Keep up the good work. I'm sure in the end it will be great -- I'd just
like to ensure we don't have to go through a too messy period with
broken translations everywhere.



Regards

> 
> Think synergy!
> 
> Regards,
> Ismael Venegas Castelló
> 
> 
> 2016-10-08 8:46 GMT-05:00 Milan Bouchet-Valat <>:
> > 
> > Hi!
> > 
> > I really appreciate the progress of translations of the website.
> > But
> > I've just realized the French version of the site contains lots of
> > mistakes, including incorrect translations, typos, and case issues.
> > In
> > some cases one cannot understand what the sentence means.
> > 
> > Can I recommend extra care when translating Julia? Typically,
> > translations shouldn't be done by a single person, and should only
> > be
> > published after having been reviewed by another contributor. The
> > rule
> > should be that it's better to have an English sentence than a
> > broken
> > approximately translated one. Translations can do more harm than
> > good
> > without a lot of care.
> > 
> > Sorry for sounding too negative, but I'm worried that we completely
> > destroy the professional aspect of the website by having random
> > people
> > do weird things in each language no core developer understands.
> > It's
> > very hard to keep control over th

Re: [julia-users] Linux distributions with Julia >= 0.5.0

2016-10-08 Thread Milan Bouchet-Valat
On Saturday, 8 October 2016 at 03:23 -0700, Femto Trader wrote:
> Hello,
> 
> my main development environment is under Mac OS X
> but I'm looking for a Linux distribution (that I will run under
> VirtualBox)
> that have Julia 0.5.0 support (out of the box)
> 
> Even Debian Sid is 0.4.7 (October 8th, 2016)
> https://packages.debian.org/fr/sid/julia
> 
> So what Linux distribution should I use to simply test my packages
> with Julia >=0.5.0 ?
Fedora 25 (Beta to be released soon) includes Julia 0.5.0.

But you can also use the Copr repository to install that version on
Fedora 23 and 24, as well as on RHEL/Centos 7. And of course generic
Linux binaries are quite easy to install too.


Regards


Re: [julia-users] Re: Julia and the Tower of Babel

2016-10-08 Thread Milan Bouchet-Valat
On Saturday, 8 October 2016 at 01:47 -0700, jonathan.bie...@alumni.epfl.ch wrote:
> Maybe an "easy" first step would be to have a page (a github repo)
> containing domain specific naming conventions (atol/abstol) that
> package
> developers can look up. Even though existing packages might not adopt
> them, at least newly created ones would have a chance
> to be more consistent. You could even do a small tool that parses your
> files and warn you about improper naming.
Creating a web page like this sounds like a good idea.

As regards automatic checking, note that there's already Lint.jl, to
which a list of "nonstandard" names could be added, together with
recommendations.


Regards


Re: [julia-users] Subset a DataFrame without a #NULL value in a column

2016-10-04 Thread Milan Bouchet-Valat
On Monday, 3 October 2016 at 18:14 -0700, Min-Woong Sohn wrote:
> Previously, under DataArray, I could do
> 
> df2 = df[!isna(df[:somvar]), :]
> 
> Is there a NullableArray equivalent to isna()? I've tried isnull(),
> which is not defined.
isnull() is defined in Julia Base. But it's not a vectorized operation:
isnull(df[:somevar]) will return false when you seem to be looking for
a boolean vector. What you're looking for should work when written as
isnull.(df[:somevar]), but currently there's a bug that makes it fail.
That's on our TODO list:
https://github.com/JuliaStats/NullableArrays.jl/issues/153
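In the meantime, a comprehension can serve as a workaround (untested
sketch):

df2 = df[[!isnull(x) for x in df[:somvar]], :]   # keep rows where :somvar is not null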


Regards


Re: [julia-users] Is there a way to use values in a DataFrame directly in computation?

2016-10-03 Thread Milan Bouchet-Valat
On Monday, 3 October 2016 at 08:21 -0700, Min-Woong Sohn wrote:
> I am using DataFrames from master branch (with NullableArrays as the default) 
> and was wondering how the following should be done:
> 
> df = DataFrame()
> df[:A] = NullableArray([1,2,3])
> 
> The following are not allowed or return wrong values:
> 
> df[1,:A] == 1   # false
> df[1,:A] > 1     # MethodError: no method matching isless(::Int64, 
> ::Nullable{Int64})
> df[3,:A] + 1     # MethodError: no method matching +(::Nullable{Int64}, 
> ::Int64)
> 
> How should I get around these issues? Does anybody know if there is a
> plan to support these kinds of computations directly?
These operations currently work (after loading NullableArrays) if you
rewrite 1 as Nullable(1), eg. df[1, :A] == Nullable(1). But the two
first return a Nullable{Bool}, so you need to call get() on the result
if you want to use them e.g. with an if. As an alternative, you can use
isequal().
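For example (a sketch, with NullableArrays loaded as above):

isequal(df[1, :A], Nullable(1))    # plain Bool
get(df[1, :A] > Nullable(1))       # unwrap the Nullable{Bool}
get(df[3, :A] + Nullable(1))       # unwrap the Nullable{Int64}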

There are discussions as regards whether mixing Nullable and scalars
should be allowed, as well as whether these operations should be moved
into Julia Base. See in particular
https://github.com/JuliaStats/NullableArrays.jl/pull/85
https://github.com/JuliaLang/julia/pull/16988

Anyway, the best approach to work with data frames is probably to use
frameworks like AbstractQuery.jl and Query.jl, which are not yet
completely ready to handle Nullable, but should make this easier.


Regards


Re: [julia-users] Re: "both DataArrays and StatsBase export "rle"; uses of it in module DataFrames must be qualified"

2016-09-28 Thread Milan Bouchet-Valat
On Wednesday, 28 September 2016 at 04:44 -0700, K leo wrote:
> Thanks for the reply.  Then this is an issue in DataFrames.
Yes, but one that is already fixed in master by removing the dependency
on DataArrays.


Regards

> > As the error says, they both export a function called rle, so it is
> > not possible to know which one you're trying to call, if you don't
> > qualify them. Qualifying means writing "package name dot" and then
> > the function, as seen below
> > 
> > module A
> > export f
> > f(x::Int64) = x
> > end
> > 
> > module B
> > export f
> > f(x::Int64) = x+1
> > end
> > 
> > using A, B
> > 
> > f(3) # error
> > A.f(3) # returns x = 3
> > B.f(3) # returns x + 1 = 3 + 1
> > 
> > 
> > 
> > > I get a few warning messages like this often.  Does it mean that
> > > the DataFrames package needs to be updated, or that I need to do
> > > something in my user code?
> > > 
> > 


Re: [julia-users] Re: LightXML Ubuntu 16.04 julia 0.4.5

2016-09-25 Thread Milan Bouchet-Valat
On Saturday, 24 September 2016 at 06:17 -0700, Ján Adamčák wrote:
> Thanks,
> 
> after installing 
> 
> sudo apt-get install libxml2-dev
> 
> LightXML is fully working.
Could you file an issue against LightXML.jl? It should be able to
install the package automatically, or could even work without it.


Regards


> I have another question: why wasn't this dependency resolved
> automatically by Pkg.add("LightXML")?
> 
> 
> 
> > You might need the -dev version to get a plain "libxml2.so" in
> > addition to the version with an soname in it. I thought Julia
> > should be able to find the soname versions too, but maybe not?
> > 
> > 
> > > I tried sudo apt-get install libxml2, but I got this answer from
> > > Ubuntu:
> > > 
> > > libxml2 is already the newest version (2.9.3+dfsg 1-1ubuntu0.1).
> > > libxml2 is tagged as manually installed.
> > > 
> > > But from julia I got same answer:
> > > 
> > > ERROR: error compiling call: could not load library "libxml2"
> > > 
> > > 
> > > 
> > > > Try
> > > > 
> > > > sudo apt install libxml2
> > > > 
> > > > 
> > > > 
> > > > > Hi Guys,
> > > > > 
> > > > > I tried to use LightXML on Ubuntu 16.04, but I got an error: 
> > > > > 
> > > > > ERROR: error compiling call: could not load library "libxml2"
> > > > > 
> > > > > Can You help me?
> > > > > 
> > > > > Thanks.
> > > > > 
> > > > > Log:
> > > > > 
> > > > >                  _
> > > > >   _       _ _(_)_     |  A fresh approach to technical
> > > > > computing
> > > > >  (_)     | (_) (_)    |  Documentation: http://docs.julialang
> > > > > .org
> > > > >   _ _   _| |_  __ _   |  Type "?help" for help.
> > > > >  | | | | | | |/ _` |  |
> > > > >  | | |_| | | | (_| |  |  Version 0.4.5 (2016-03-18 00:58 UTC)
> > > > > _/ |\__'_|_|_|\__'_|  |  
> > > > > |__/                   |  x86_64-linux-gnu
> > > > > 
> > > > > julia> Pkg.status()
> > > > > No packages installed
> > > > > 
> > > > > julia> Pkg.add("LightXML")
> > > > > INFO: Cloning cache of Compat from
> > > > > git://github.com/JuliaLang/Compat.jl.git
> > > > > INFO: Cloning cache of LightXML from
> > > > > git://github.com/JuliaIO/LightXML.jl.git
> > > > > INFO: Installing Compat v0.9.2
> > > > > INFO: Installing LightXML v0.4.0
> > > > > INFO: Building LightXML
> > > > > INFO: Package database updated
> > > > > INFO: METADATA is out-of-date — you may not have the latest
> > > > > version of LightXML
> > > > > INFO: Use `Pkg.update()` to get the latest versions of your
> > > > > packages
> > > > > 
> > > > > julia> Pkg.update()
> > > > > INFO: Updating METADATA...
> > > > > INFO: Computing changes...
> > > > > INFO: No packages to install, update or remove
> > > > > 
> > > > > julia> using LightXML
> > > > > INFO: Precompiling module LightXML...
> > > > > 
> > > > > julia> # create an empty XML document
> > > > >       xdoc = XMLDocument()
> > > > > ERROR: error compiling call: could not load library "libxml2"
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > 
> > > 
> > 


Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-23 Thread Milan Bouchet-Valat
On Thursday, 22 September 2016 at 14:54 -0700, Tsur Herman wrote:
> By the way my test3 function is super fast
> 
>  @time test3(r)
>   0.32 seconds (4 allocations: 160 bytes)
Beware, if you don't return 'total' from the function, LLVM optimizes
away the whole loop and turns the function into a no-op (have a look
at @code_llvm or @code_native).
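That is, something like (a sketch):

function test3(r)
    total = zero(eltype(r))
    for t in r
        total += t
    end
    return total   # returning the accumulator keeps the loop from being optimized away
end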


Regards


> > 
> > On my side both functions perform equally, although test2 had to be
> > timed twice to get to the same performance.
> > 
> > julia> test2(x)= sum( [t^2 for t in x] )
> > 
> > julia> @time test2(r)
> >   0.017423 seconds (13.22 k allocations: 1.339 MB)
> > 
> > julia> @time test2(r)
> >   0.000332 seconds (9 allocations: 781.531 KB)
> > 
> > I think the discrepancy  comes from the JITing process because if I
> > time it without using  the macro @time, it works from the first
> > run.
> > 
> > julia> test2(x)= sum( [t^2 for t in x] )
> > WARNING: Method definition test2(Any) in module Main at REPL[68]:1
> > overwritten at REPL[71]:1.
> > test2 (generic function with 1 method)
> > 
> > julia> tic();for i=1:10000 ; test2(r);end;toc()/10000
> > elapsed time: 3.090764498 seconds
> > 0.0003090764498
> > 
> > About the memory footprint -> test2 first constructs the inner
> > vector then calls sum.
> > 
> > > since the type was not inferred the zero-element could not be
> > > created.
> > The sum of an empty set or vector is undefined it is not zero.
> > you can rewrite it in a more explicit way
> > 
> > test3(r) = begin 
> >     total = Float64(0);
> >  for t in r total+=t ;end;end
> > 
> > 
> > 
> > 
> > 
> > 
> > > I've seen the same, and the answer I got at the JuliaLang gitter
> > > channel was that it could not be inferred because r could be of
> > > length 0, and in that case, the return type could not be
> > > inferred. My Julia-fu is too weak to then explain why the
> > > comprehension would be able to infer the return type.
> > > 
> > > > I see the same, yet:
> > > > 
> > > > julia> r = rand(10^5);
> > > > 
> > > > julia> @time test1(r)
> > > >   0.000246 seconds (7 allocations: 208 bytes)
> > > > 33375.54531253989
> > > > 
> > > > julia> @time test2(r)
> > > >   0.001029 seconds (7 allocations: 781.500 KB)
> > > > 33375.54531253966
> > > > 
> > > > So test1 is efficient, despite the codewarn output. Not sure
> > > > what's up.
> > > > 
> > > > On Thu, Sep 22, 2016 at 2:21 PM, Christoph Ortner wrote:
> > > > > I hope that there is something I am missing, or making a
> > > > > mistake in the following example: 
> > > > > 
> > > > > r = rand(10)
> > > > > test1(r) = sum( t^2 for t in r )
> > > > > test2(r)= sum( [t^2 for t in r] )
> > > > > @code_warntype test1(r)   # return type Any is inferred
> > > > > @code_warntype test2(r)   # return type Float64 is inferred
> > > > > 
> > > > > 
> > > > > This caused a problem for me, beyond execution speed: I used
> > > > > a generator to create the elements for a comprehension, since
> > > > > the type was not inferred the zero-element could not be
> > > > > created.
> > > > > 
> > > > > Is this a known issue?
> > > > > 
> > > > 
> > > > 
> > > 
> > 


Re: [julia-users] Re: Is FMA/Muladd Working Here?

2016-09-21 Thread Milan Bouchet-Valat
Le mercredi 21 septembre 2016 à 12:15 -0700, Chris Rackauckas a écrit :
> I see. So what I am getting is that, in my codes, 
> 
> 1. I will need to add @fastmath anywhere I want these optimizations
> to show up. That should be easy since I can just add it at the
> beginnings of loops where I have @inbounds which already covers every
> major inner loop I have. Easy find/replace fix. 
> 
> 2. For my own setup, I am going to need to build from source to get
> all the optimizations? I would've though the point of using the Linux
> repositories instead of the generic binaries is that they would be
> setup to build for your system. That's just a non-expert's
> misconception I guess? I think that should be highlighted somewhere.
No, the point of using Linux packages is to integrate easily with the
rest of the system (e.g. automated updates, installation in path
without manual tweaking), and to use your distribution's libraries to
avoid duplication.

That's just how it works for any software in a distribution. You need
to use Gentoo if you want software to be tuned at build time to your
particular system.


Regards

> > Le mercredi 21 septembre 2016 à 11:36 -0700, Chris Rackauckas a
> > écrit : 
> > > The Windows one is using the pre-built binary. The Linux one uses
> > the 
> > > COPR nightly (I assume that should build with all the goodies?) 
> > The Copr RPMs are subject to the same constraint as official
> > binaries: 
> > we need them to work on most machines. So they don't enable FMA
> > (nor 
> > e.g. AVX) either. 
> > 
> > It would be nice to find a way to ship with several pre-built
> > sysimg 
> > files and using the highest instruction set supported by your CPU. 
> > 
> > 
> > Regards 
> > 
> > > > > Hi, 
> > > > >   First of all, does LLVM essentially fma or muladd
> > expressions 
> > > > > like `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that
> > one 
> > > > > explicitly use `muladd` and `fma` on these types of
> > instructions 
> > > > > (is there a macro for making this easier)? 
> > > > > 
> > > > 
> > > > You will generally need to use muladd, unless you use
> > @fastmath. 
> > > > 
> > > >   
> > > > >   Secondly, I am wondering if my setup is not applying these 
> > > > > operations correctly. Here's my test code: 
> > > > > 
> > > > 
> > > > If you're using the prebuilt downloads (as opposed to building
> > from 
> > > > source), you will need to rebuild the sysimg (look in 
> > > > contrib/build_sysimg.jl) as we build for the lowest-common 
> > > > architecture. 
> > > > 
> > > > -Simon 
> > > > 


Re: [julia-users] Re: Is FMA/Muladd Working Here?

2016-09-21 Thread Milan Bouchet-Valat
Le mercredi 21 septembre 2016 à 11:36 -0700, Chris Rackauckas a écrit :
> The Windows one is using the pre-built binary. The Linux one uses the
> COPR nightly (I assume that should build with all the goodies?)
The Copr RPMs are subject to the same constraint as official binaries:
we need them to work on most machines. So they don't enable FMA (nor
e.g. AVX) either.

It would be nice to find a way to ship with several pre-built sysimg
files and using the highest instruction set supported by your CPU.


Regards

> > > Hi,
> > >   First of all, does LLVM essentially fma or muladd expressions
> > > like `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one
> > > explicitly use `muladd` and `fma` on these types of instructions
> > > (is there a macro for making this easier)?
> > > 
> > 
> > You will generally need to use muladd, unless you use @fastmath.
> > 
> >  
> > >   Secondly, I am wondering if my setup is not applying these
> > > operations correctly. Here's my test code:
> > > 
> > 
> > If you're using the prebuilt downloads (as opposed to building from
> > source), you will need to rebuild the sysimg (look in
> > contrib/build_sysimg.jl) as we build for the lowest-common
> > architecture.
> > 
> > -Simon
> > 


Re: [julia-users] Configuring incomplete (julia-deps error 2)

2016-09-11 Thread Milan Bouchet-Valat
Le dimanche 11 septembre 2016 à 11:05 -0700, Davide a écrit :
> Hi,
> 
> I am trying to compile v0.5rc04 and I get the error reported below
> during configuration.
> My system: Debian GNU/Linux 64bit on an Intel i7 machine.
> I checked for the required external dependencies and all are
> installed.
> I have MARCH=native in Make.user.
> 
> The last lines of the output of the make command (make -j 4) are:
> 
> CMake Error at /usr/share/cmake-3.0/Modules/FindOpenSSL.cmake:293
> (list):
>   list GET given empty list
> Call Stack (most recent call first):
>   CMakeLists.txt:277 (FIND_PACKAGE)
> 
> CMake Error at /usr/share/cmake-3.0/Modules/FindOpenSSL.cmake:294
> (list):
>   list GET given empty list
> Call Stack (most recent call first):
>   CMakeLists.txt:277 (FIND_PACKAGE)
> 
> CMake Error at /usr/share/cmake-3.0/Modules/FindOpenSSL.cmake:296
> (list):
>   list GET given empty list
> Call Stack (most recent call first):
>   CMakeLists.txt:277 (FIND_PACKAGE)
> 
> CMake Error at /usr/share/cmake-3.0/Modules/FindOpenSSL.cmake:298
> (list):
>   list GET given empty list
> Call Stack (most recent call first):
>   CMakeLists.txt:277 (FIND_PACKAGE)
> 
> -- Configuring incomplete, errors occurred!
> 
>    Thanks for help,
> 
>    Davide
Please call make without -j, and post a longer excerpt of the log (e.g.
in a GitHub gist).


Regards


Re: [julia-users] correct typing for unsafe_wrap

2016-08-21 Thread Milan Bouchet-Valat
Le dimanche 21 août 2016 à 01:36 -0700, Andreas Lobinger a écrit :
> Hello colleagues,
> 
> i'm trying to use unsafe_wrap from a pointer from an external call
> (cfunction) to an array access.
> Looks like i have type problems:
> 
> 
> 
> function read_from_stream_callback(s::IO, buf::Ptr{UInt8},
> len::UInt32)
> 
>     #b1 = zeros(UInt8,len)
>     #nb = readbytes!(s,b1,len)
>     
>     #for i=1:len
>     #    unsafe_store!(buf,b1[i],i)
>     #end
> 
>     b1 = zeros(UInt8,len)
>     unsafe_wrap(b1,buf,len)
>     nb = readbytes!(s,b1,len)
> 
>     @compat(Int32(0))
> end
> 
>    _
>    _   _ _(_)_ |  A fresh approach to technical computing
>   (_) | (_) (_)    |  Documentation: http://docs.julialang.org
>    _ _   _| |_  __ _   |  Type "?help" for help.
>   | | | | | | |/ _` |  |
>   | | |_| | | | (_| |  |  Version 0.5.0-rc2+0 (2016-08-12 11:25 UTC)
>  _/ |\__'_|_|_|\__'_|  |  
> |__/   |  x86_64-linux-gnu
> 
> julia> include("png_stream.jl")
> ERROR: LoadError: MethodError: no method matching
> unsafe_wrap(::Array{UInt8,1}, ::Ptr{UInt8}, ::UInt32)
> Closest candidates are:
>   unsafe_wrap(::Type{String}, ::Union{Ptr{Int8},Ptr{UInt8}},
> ::Integer) at pointer.jl:86
>   unsafe_wrap(::Type{String}, ::Union{Ptr{Int8},Ptr{UInt8}},
> ::Integer, ::Bool) at pointer.jl:86
>  
> unsafe_wrap{T}(::Union{Type{Array{T,1}},Type{Array{T,N}},Type{Array}}
> , ::Ptr{T}, ::Integer) at pointer.jl:57
>   ...
>  in
> read_from_stream_callback(::Base.AbstractIOBuffer{Array{UInt8,1}},
> ::Ptr{UInt8}, ::UInt32) at /home/lobi/juliarepo/png_stream.jl:16
>  in read_png_from_stream(::Base.AbstractIOBuffer{Array{UInt8,1}}) at
> /home/lobi/juliarepo/png_stream.jl:26
>  in include_from_node1(::String) at ./loading.jl:426
> while loading /home/lobi/juliarepo/png_stream.jl, in expression
> starting on line 55
> 
> to which i somehow disagree (as long as UInt32 is an Integer).
> 
> the commented code runs correctly (copying to buf as a UInt8 sequence)
> 
> A better explaination of the error?
The problem is with the first argument: it should be a type, not an
array. You should simply pass 'Array' instead of 'b1': no need to
allocate a buffer, as this function uses the original memory referred
to by the pointer (without making a copy).
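
For instance, a sketch of the same callback with only that change (everything
else kept as in your snippet):

function read_from_stream_callback(s::IO, buf::Ptr{UInt8}, len::UInt32)
    b1 = unsafe_wrap(Array, buf, len)   # wraps the memory behind buf, no copy
    nb = readbytes!(s, b1, len)
    return Int32(0)
end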


Regards

> Wishing a happy day,
>     Andreas
> 
> 


Re: [julia-users] How to install julia-0.5.0-rc0 on Ubuntu Linux

2016-07-28 Thread Milan Bouchet-Valat
Le jeudi 28 juillet 2016 à 01:58 -0700, Uwe Fechner a écrit :
> Hello,
> 
> I am trying to install julia-0.5.0-rc0 .
> 
> I downloaded it from Github, unpacked it and executed
> make -j4
> 
> The building failed after 10 minutes with the following
> message:
> 
> 
>  cblas_zhpr2  PASSED THE COMPUTATIONAL TESTS (   577 CALLS)
>  cblas_zhpr2  PASSED THE COMPUTATIONAL TESTS (   577 CALLS)
> 
>  END OF TESTS
> OK.
> 
>  OpenBLAS build complete. (BLAS CBLAS LAPACK LAPACKE)
> 
>   OS               ... Linux             
>   Architecture     ... x86_64               
>   BINARY           ... 64bit                 
>   Use 64 bits int    (equivalent to "-i8" in Fortran)      
>   C compiler       ... GCC  (command line : gcc -m64)
>   Fortran compiler ... GFORTRAN  (command line : gfortran -m64)
>   Library Name     ... libopenblas64_p-r0.2.18.a (Multi threaded; Max
> num-threads is 16)
> 
> To install the library, you can run "make
> PREFIX=/path/to/your/installation install".
> 
> make: *** [julia-deps] Error 2
> ufechner@TUD277255:~/00Software/julia-0.5.0-rc0$
> 
> 
> How can I fix this?
> 
> I am using Ubuntu 14.04.
Can you run it without -j4 and post the last messages? Those reported
here don't seem related to the failure.


Regards


Re: [julia-users] splice! inconsistency with Vector{Vector{UInt8}}

2016-07-25 Thread Milan Bouchet-Valat
Le samedi 23 juillet 2016 à 11:37 -0700, 'George Marrows' via julia-
users a écrit :
> Hi. As per the docs on splice! and n:(n-1) ranges, the following
> works fine to insert a value before location 1 in a Vector:
> x = [1,2,3]
> splice!(x, 1:0, 23)
> print(x)  # => [23,1,2,3]
This example works, but only as a special-case of '23' being taken as
equivalent to '[23]'.

> But the following behaves strangely and seems to corrupt the array in
> the process:
> a = Vector{UInt8}[UInt8[0xf3,0x32,0x37]]
> splice!(a, 1:0, UInt8[7,3])   # =>  MethodError: `convert` has no
> method matching convert(::Type{Array{UInt8,1}}, ::UInt8)
> print(a) # => [#undef,#undef,UInt8[0xf3,0x32,0x37]]
This example fails because 'a' is a vector of vectors, so you need to
pass a vector of vectors to splice!() too. Since you pass it a vector
of UInt8, it tries to insert UInt8 values into a, which isn't possible.
The failure comes from 'convert(Vector{UInt8}, 7)'.

Try with
splice!(a, 1:0, Vector{UInt8}[UInt8[7,3]])

The array corruption is a different issue which is hard to fix:
checking whether conversion will succeed before resizing the array
hurts performance a lot, so currently if conversion fails the array is
resized anyway. This might be fixed at some point if a good solution is
found. See https://github.com/JuliaLang/julia/issues/15868
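
For instance (a quick check of the fixed call):

a = Vector{UInt8}[UInt8[0xf3, 0x32, 0x37]]
splice!(a, 1:0, Vector{UInt8}[UInt8[7, 3]])
# a == Vector{UInt8}[UInt8[0x07, 0x03], UInt8[0xf3, 0x32, 0x37]]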


Regards

> Is this a bug, or have I simply not read the docs correctly?



Re: [julia-users] Additional dataframes library

2016-07-10 Thread Milan Bouchet-Valat
Le dimanche 10 juillet 2016 à 09:02 -0700, Brandon Taylor a écrit :
> I'm looking to add an additional dataframes library. I was going to
> pick up all the additional functions that are in tidyr and dplyr that
> aren't covered in dataframes.jl and dataframesmeta.jl but I wanted to
> check whether something like this existed first.
No that I know of, but I'd encourage you to coordinate your work with
Julia Stats developers. For example, you could open issues or pull
requests against DataFramesMeta if your additions would fit well with
the goal of the package. Or if you're not sure, write a message to the
julia-stats mailing list to start a discussion (I won't be able to
reply immediately, but others might).

In particular, it will be probably be relevant to your work to be aware
of the current plan to switch from DataArrays to NullableArrays:
https://github.com/JuliaStats/DataFrames.jl/pull/1008
https://github.com/JuliaStats/DataArrays.jl/issues/73
(see also referenced issues)


Regards


Re: [julia-users] Error using inner constructors and type parameters

2016-07-09 Thread Milan Bouchet-Valat
Le vendredi 08 juillet 2016 à 18:20 -0700, James Noeckel a écrit :
> type LinkedMesh{RT<:Real}
>   faces::LinkedList{LinkedFace}
>   vertices::Array{Point{3, RT}, 1}
>   LinkedMesh(points::Array{Point{3, RT}, 1}) = new(nil(LinkedFace),
> points)
> end
> 
> 
> When I pass the below value to the above constructor, I get an error:
> points::AbstractArray{Point{3, FT}, 1} #the value isn't relevant
> 
> mesh = LinkedMesh(points)
> 
> 
> ERROR: MethodError: `convert` has no method matching
> convert(::Type{Meshing.LinkedMesh{RT<:Real}},
> ::Array{FixedSizeArrays.Point{3,Float64},1})
> This may have arisen from a call to the constructor
> Meshing.LinkedMesh{RT<:Real}(...),
> since type constructors fall back to convert methods.
> Closest candidates are:
>   call{T}(::Type{T}, ::Any)
>   convert{T}(::Type{T}, ::T)
> 
> 
> This suggests that it isn't using my constructor at all, possibly
> because of the type parameter. But Float64 is a subtype of Real, so I
> don't see why this isn't working.
That's because you defined an inner constructor. In that case, the
default one-argument outer constructor isn't created automatically. 

The inner constructor needs to be passed the type parameter, like this:
LinkedMesh{FT}(points)

You can create the outer constructor with:
LinkedMesh{RT}(points::Array{Point{3, RT}, 1}) = LinkedMesh{RT}(points)

But if your example corresponds to what you actually need, you can just
transform the inner constructor into an outer constructor. Inner
constructors should be used only when really needed, i.e. when you have
to check the passed arguments to prevent creating invalid objects. See
the manual for more details.
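
As a stripped-down sketch of that pattern (using a plain vector field instead
of the Point/LinkedList types from your code):

type Mesh{RT<:Real}
    vertices::Vector{RT}
    Mesh(points::Vector{RT}) = new(points)            # inner constructor
end

# Outer constructor, so the parameter is deduced from the argument:
Mesh{RT<:Real}(points::Vector{RT}) = Mesh{RT}(points)

Mesh([1.0, 2.0, 3.0])   # works without writing Mesh{Float64}(...) explicitly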


Regards


Re: [julia-users] Re: A naive benchmark

2016-07-02 Thread Milan Bouchet-Valat
Le vendredi 01 juillet 2016 à 16:11 -0700, baillot maxime a écrit :
> @Tim Holy : Thank you for the web page. I didn't know it. Now I
> understand a lot of thing :)
> 
> @Kristoffer and Patrick: I just read about that in the link that Tim
> gave me. I did change the code and the time just past from 0.348052
> seconds to  0.037768 seconds.
> 
> Thanks to you all. Now I understand a lot of things and why it was
> slower than matlab.
> 
> So now I understand why a lot of people was speaking
> about Devectorizing matrix calculus. But I think it's sad, because if
> I want to do this I will use C or C++ .  Not a matrixial language
> like Julia or Matlab.
Note that work is going on to allow vectorized syntax to be (almost) as
efficient as devectorized loops. See
https://github.com/JuliaLang/julia/issues/16285
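
Until that lands, the gain from devectorizing mostly comes from avoiding the
temporaries that each vectorized operation allocates; a minimal sketch of what
such a mul could look like:

function mul!(out, A, B)
    @inbounds for i in eachindex(A)
        out[i] = A[i] * B[i]
    end
    return out
end

# C = A .* B       # allocates a fresh array for the result
# mul!(C, A, B)    # fills a preallocated array instead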


Regards

> Anyway! So if I'm not mistaking... It's better for me to create a
> "mul()" function than use the ".*" ?


Re: [julia-users] Julia 0.4.6 doesn't work in Windows x86 environment

2016-06-30 Thread Milan Bouchet-Valat
Le jeudi 30 juin 2016 à 02:24 -0700, Stefan Schnell a écrit :
> 
> 
> Hello community,
> 
> I installed the new Julia release 0.4.6 on a Windows x86 environment
> but it doesn't work. I get the following message:
> Activation context generation failed for "C:\Dummy\Julia-
> 0.4.6\bin\julia.exe". Dependent Assembly
> "Microsoft.VC90.CRT,processorArchitecture="amd64",publicKeyToken="1fc
> 8b3b9a1e18e3b",type="win32",version="9.0.21022.8"" could not be
> found. Please use sxstrace.exe for detailed diagnosis.
> As far as I can see the executable search for the processor
> architecture amd64, but it should search for x86.
> I checked the 32-bit version of Julia in a x64 Windows environment
> and it works perfect.
Are you sure you downloaded the 32-bit version ? i.e. the one from
https://s3.amazonaws.com/julialang/bin/winnt/x86/0.4/julia-0.4.6-win32.exe

What Windows version are you using ?


Regards


Re: [julia-users] Re: How to install 0.4.5 on Ubuntu?

2016-06-16 Thread Milan Bouchet-Valat
Le jeudi 16 juin 2016 à 01:34 -0700, Nils Gudat a écrit :
> Apologies for decreasing quality of questions, but how do I actually
> start julia after building from source on Linux? After having
> successfully (I believe) built the latest 0.4.6-pre master, I can't
> find an exectuable to start julia. I tried grepping through the
> folder (find | grep '\.exe$'), but to no avail.
If the build succeeded, you only have to run ./julia (.exe files are a
Windows format).


Regards


Re: [julia-users] Re: parse.(Int64, x)

2016-06-16 Thread Milan Bouchet-Valat
Le mercredi 15 juin 2016 à 17:28 -0700, Tony Kelman a écrit :
> Try parse.([Int64], x)
> note that the output will be an Array{Any} because issue #4883 hasn't
> been fixed yet. The issue here is that broadcast doesn't treat types
> as "scalar-like."
Is the latter a separate bug? Should we open an issue for that?


> > map of course works, but it is quite verbose. I’ve been working a
> > group of new julia users lately, many of them from other languages
> > like R, Python etc., and they roll their eyes when something that
> > simple takes
> >  
> > df[:x] = map(q->parse(Int64,q), df[:x])
> >  
> > It just is quite complicated for something pretty simple… Maybe
> > there are other simple constructs for this?
> >  
> > Thanks,
> > David
> >  
> > From: julia...@googlegroups.com [mailto:julia...@googlegroups.com]
> > On Behalf Of John Myles White
> > Sent: Wednesday, June 15, 2016 3:53 PM
> > To: julia-users 
> > Subject: [julia-users] Re: parse.(Int64, x)
> >  
> > I would be careful combining element-wise function application with
> > partial function application. Why not use map instead?
> > 
> > On Wednesday, June 15, 2016 at 3:47:05 PM UTC-7, David Anthoff
> > wrote:
> > I just tried to use the new dot syntax for vectorising function
> > calls in order to convert an array of strings into an array of
> > Int64. For example, if this would work, it would be very, very
> > handy:
> >  
> > x = [“1”, “2”, “3”]
> > parse.(Int64, x)
> >  
> > Right now I get an error, but I wonder whether this could be
> > enabled somehow in this new framework? If this would work for all
> > sorts of parsing, type conversions etc. it would just be fantastic.
> > Especially when working DataFrames and one is in the first phase of
> > cleaning up data types of columns etc. this would make for a very
> > nice and short notation.
> >  
> > Thanks,
> > David
> >  
> > --
> > David Anthoff
> > University of California, Berkeley
> >  
> > http://www.david-anthoff.com
> >  
> > 


Re: [julia-users] Re: function! vs. function

2016-06-12 Thread Milan Bouchet-Valat
Le dimanche 12 juin 2016 à 10:13 -0700, digxx a écrit :
> the function I defined above? Or what do you mean?
> function f!(a,farr)
> for i=1:n
>   for j=1:n
>   farr[j,i] = (a[j,:]*a[:,i] - A[j,:]*a[:,i])[1] - k2[j,i]
>   end
> end
> end
That method requires two arguments, you cannot call it with f!(x). So
what method did you call?


Regards


Re: [julia-users] I can enter the same keyword argument twice, and the second over-rules the first.

2016-06-12 Thread Milan Bouchet-Valat
Le samedi 11 juin 2016 à 19:46 -0700, colintbow...@gmail.com a écrit :
> I can enter the same keyword argument twice, and the second entry is
> the one that gets used. A short example follows:
> 
> f(x::Int ; kw::Int=0) = x * kw
> f(2)
> f(2, kw=3) #evaluates to 6
> f(2, kw=3, kw=4) #evaluates to 8
> 
> Is this desired behaviour or is it a bug? Based on a quick scan, I
> can't quite tell if this is the same bug as issue 9535
> (https://github.com/JuliaLang/julia/issues/9535), so thought I would
> post here before filing anything. Also, I'm on v0.4, so I don't want
> to file if this is already taken care of in v0.5.
It also happens on 0.5. This sounds like a different (and simpler) bug
than #9535, so filing a new issue would likely be useful. Even if it
was done on purpose, the manual doesn't seem to mention it.


Regards


Re: [julia-users] Constructors for types with Nullable fields

2016-06-10 Thread Milan Bouchet-Valat
Le vendredi 10 juin 2016 à 00:56 -0700, Helge Eichhorn a écrit :
> Hi,
> 
> let's say I have the following type with two Nullable fields:
> 
> type WithNulls 
>     a::Nullable{Float64} 
>     b::Nullable{Float64}
> end
> 
> I now want the user to be able to create an instance of this type
> without caring about Nullables. For this I use a constructor with
> keyword arguments.
> 
> function WithNulls(;
>     a = Nullable{Float64}(),
>     b = Nullable{Float64}(),
> )
>     WithNulls(a, b)
> end
> 
> This works for Float64 but not for the other leaf types of Real.
> 
> # Works
> WithNulls(a=3.0)
> 
> # Does not work
> WithNulls(a=pi)
> 
> This can be fixed by adding the following methods to convert:
> 
> Base.convert{T<:Real}(::Type{Nullable{T}}, v::T) = Nullable{T}(v)
> Base.convert{T<:Real,S<:Real}(::Type{Nullable{T}}, v::S) =
> Nullable{T}(convert(T,v))
> 
> Finally the question:
> Should the above convert methods not be part of Base? I think
> converting between different Nullable{T<:Real} values might be a
> common use case. Is there a more elegant way to do this?
Yes. These methods have been added in the 0.5 development version, so
your example works directly there.


Regards


Re: [julia-users] Re: Reconstruct a string from a Clang.jl generated looong type

2016-06-06 Thread Milan Bouchet-Valat
Le lundi 06 juin 2016 à 09:56 -0700, J Luis a écrit :
> 
> > What exactly are you after? 
> > 
>  What I'm after is simple. To be able to access the members
> ``x_units`` and so on of
> 
> http://gmt.soest.hawaii.edu/doc/latest/GMT_API.html#gmt-grids
> 
> with the wrapper 
> 
> https://github.com/joa-quim/GMT.jl/blob/master/src/libgmt_h.jl#L1236
> 
> (note, the thing is working in its bulk but I'm having issues with
> those strings as I mentioned before)
> 
> 
> > There's not much more to it than using tuples, which should be well
> > documented.
> > 
> tuples yes, but when I search for ``NTuples`` in the docs there is
> barely any hit and all that come show it being used as a type
> definition but found no mention of what it is. Yes, a ``ntuple``
> function is documented but I guess it's not the same thing.
The docs could be improved, but basically NTuple is a standard tuple
with all elements being of the same type:
julia> NTuple{3,UInt8}
Tuple{UInt8,UInt8,UInt8}

julia> isa((1,2), NTuple{2,Int})
true
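
For the string question itself, a rough sketch (assuming the field really is
an NTuple of UInt8; it cuts at the first NUL so the padding never ends up in
the string):

t = (0x68, 0x69, 0x00, 0x00)        # stands in for e.g. hdr.x_units
bytes = collect(t)                   # NTuple{N,UInt8} -> Vector{UInt8}
n = findfirst(bytes, 0x00)
s = bytestring(bytes[1:(n == 0 ? length(bytes) : n - 1)])   # "hi"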


Regards

> > We wrapped Vulkan with this Clang version: https://github.com/Julia
> > GPU/VulkanCore.jl/blob/master/gen/api/vk_common_1.0.0.jl#L1367
> > 
> Thanks. I will look.
> 
> Joaquim
>  
> > 
> > Best,
> > Simon
> > 
> > > Hi,
> > > 
> > > I have one of those types generated from a C struct with Clang.jl
> > > that turns a stack variable into a lng list of members (for
> > > example (but I have longer ones))
> > > 
> > > https://github.com/joa-quim/GMT.jl/blob/master/src/libgmt_h.jl#L1
> > > 246
> > > 
> > > (an in interlude: isn't yet any better way of representing a C
> > > "char str[256];"?)
> > > 
> > > when executed I get (example)
> > > 
> > > julia> hdr.x_units
> > > GMT.Array_80_Uint8(0x6c,0x6f,0x6e,0x67,0x69,0x74,0x75,0x64,0x65,0
> > > x20,0x5b,0x64,0x65,0x67,0x72,0x65,0x65,0x73,0x5f,0x65,0x61,0x73,0
> > > x74,0x5d,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0
> > > x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0
> > > x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0
> > > x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0
> > > x00,0x00,0x00,0x00,0x00,0x00)
> > > 
> > > but I need to transform it into a string. After some suffering I
> > > came out with this solution
> > > 
> > > julia> join([Char(hdr.x_units.(n)) for n=1:sizeof(hdr.x_units)])
> > > "longitude
> > > [degrees_east]\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
> > > 0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
> > > 
> > > well, it works but it's kind of ugly. Is there a better way of
> > > achieving this? Namely, how could I avoid creating a string with
> > > all of those \0's? I know I can remove them after, but what about
> > > not copying them on first place?
> > > 
> > > Thanks
> > > 
> > > 


Re: [julia-users] good practice with dictionary pop!?

2016-06-06 Thread Milan Bouchet-Valat
Le lundi 06 juin 2016 à 04:08 -0700, FANG Colin a écrit :
> I find myself often writing code like this when dealing pop! with a
> Dict{A, B}
> 
> 
> if !haskey(my_dict, key)
>     do_something...
> end
> 
> value = pop!(my_dict, key)
> 
> do_something_else...
> 
> 
> The code above has an issue: it looks up the key in a dict twice
> unnecessary.
> 
> I know I can opt for the alternative pattern:
> 
> 
> v = pop!(my_dict, key, nothing)
> 
> if v == nothing
>     do_something...
> end
> 
> value::B = v
> 
> do_something_else...
> 
> Note this piece of code is also a bit annoying, as v is no longer
> type stable. I have to think of another name "value" and apply an
> assignment with type assert. (naming variables is hard)
> 
> 
> Probably I can extend `pop!` with pop!(f, key, v) so that I can do:
> 
> value = pop!(my_dict, key) do
>    do_something...
> end
> 
> do_something_else...
> 
> But using an anonymous function here seems an overkill with overhead?
> 
> What do people do in such case?
I would say the do block syntax sounds like a good solution, and it
parallels the existing one for get() and get!().

Another solution would be to add a function which would always return a
Nullable. This has been discussed at
https://github.com/JuliaLang/julia/issues/13055
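
In the meantime you can define the do-block method yourself; a naive sketch
(not part of Base, and it still does two lookups -- a Base implementation
could reuse the internal slot index to avoid that):

function Base.pop!(f::Function, d::Associative, key)
    haskey(d, key) ? pop!(d, key) : f()
end

value = pop!(my_dict, key) do
    # runs only when `key` is absent; stands in for do_something above
    default_value
end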



Regards


Re: [julia-users] Re: julia equivalent of python [] (Part II)

2016-06-01 Thread Milan Bouchet-Valat
Le mercredi 01 juin 2016 à 02:35 -0700, Lutfullah Tomak a écrit :
> julia> b_prime = ["8",9,10,c]
> 
> This works with Any.
> 
> julia> Any["3", 4, 14, c]
> 4-element Array{Any,1}:
>    "3"        
>   4           
>  14           
>    Any[10,"c"]
>
Yes, for now you need to use that syntax. In 0.5, the OP's code works
fine too since the deprecation path has been removed.


Regards


Re: [julia-users] Re: collect([1 2]) in 0.5.0-

2016-05-27 Thread Milan Bouchet-Valat
Le jeudi 26 mai 2016 à 18:41 -0400, Jeff Bezanson a écrit :
> 
> Yes, it would be good to deprecate `collect` to `Vector` (so it
> always
> returns a vector like it used to), and call `Array` where we want the
> result to have the same shape as the iterator. The main wart is that
> Array{T}(tuple) already has a meaning where the tuple specifies the
> size, but here the tuple would be the iterator giving the data. I
> think it would be best to base everything on iterators, e.g.
> 
> zeros(n) = Vector(take(repeated(0), n))
> zeros(n,m) = Array(reshape(repeated(0), (n,m)) or
> zeros(n,m) = Array(repeated(0), (n,m))
> 
> # half seriously
> Array(T::Type, n) = Array{T}(take(Undefined(), n))
> 
> I can just barely imagine a deprecation path here. The main question
> is what to do about all the constructor calls that currently make
> uninitialized arrays. For numeric types it would be ok to deprecate
> them to `zeros(T, n, m)`, since we will probably move to zero filling
> anyway, but it's less clear what to do for general T.
How about this ?

@deprecate Array{T}(dims::Dims) Array{T}(dims...)

Then the only ambiguous case is Array{T}(x::Integer), which could
either mean collect(x) or Array{T}((x,)). Maybe that's another argument
in favor of deprecating iteration over scalars, which would allow
choosing the latter meaning.


Regards

> 
> On Thu, May 26, 2016 at 4:14 PM,   wrote:
> > 
> > 
> > I've also discovered that collect(1) will now generate a 0-
> > dimensional
> > Array, presumably as part of the same update, while collect([1])
> > still
> > generates a Vector. Is this intended too? If so, is there any way
> > of
> > generating a Vector from either a Vector or a scalar - I was using
> > collect
> > to do this up to now with 0.[34].x, 0.5-dev...?
> > 
> > Cheers,
> > 
> > 
> > Richard.
> > 
> > 
> > PS If it's a permanent change, the docs should be updated too:
> > 
> > Return an array of type Array{element_type,1} of all items in a
> > collection.
> > 
> > 
> > On Wednesday, 25 May 2016 20:41:05 UTC+1, paul.so...@gmail.com
> > wrote:
> > > 
> > > 
> > > 
> > > Hi,
> > > 
> > > I am not sure if this is the right place to post questions about
> > > version
> > > 0.5.0-, but I'll give it a try anyhow.
> > > 
> > > In 0.4.5, collect([1 2]) gives Array{Int64,1}, like a column
> > > vector.
> > > 
> > > In 0.5.0_ (as of 25 May, Win64), collect([1 2]) gives
> > > Array{Int64,2}, like
> > > a row vector.
> > > 
> > > Is this intended? (...just struggling to prepare for the next
> > > release)
> > > 
> > > /Paul S


Re: [julia-users] collect([1 2]) in 0.5.0-

2016-05-26 Thread Milan Bouchet-Valat
Le jeudi 26 mai 2016 à 09:03 -0400, Stefan Karpinski a écrit :
> Perhaps these should be called Vector and Array? As in Vector(f(x)
> for x in A) and Array(f(x) for x in A).
Cf. https://github.com/JuliaLang/julia/issues/16029


> On Wed, May 25, 2016 at 7:07 PM, Jeffrey Sarnoff
>  wrote:
> > I hope Julia is not ready to drop the immediacy of clarity when it
> > is new-found and current use adjacent (e.g. "shape-preserving
> > f(g(x) for x in A)").
> > It is reasonable that `collect` become this better version of its
> > prior self; and, if desired, a vector-only version would have a new
> > name or way of indication.
> > `collectvec( __ )` might do `reshape( (__), prod(size(__)) )`   
> > 
> > 
> > 
> > > Yes, so far this is intended. We want a shape-preserving
> > > `collect` for 
> > > implementing comprehensions, for example `collect(2x for x in
> > > A)`. 
> > > However a case could be made that `collect` should continue to
> > > return 
> > > only vectors, and the shape-preserving version should have a new
> > > name. 
> > > 
> > > On Wed, May 25, 2016 at 3:41 PM,   wrote: 
> > > > Hi, 
> > > > 
> > > > I am not sure if this is the right place to post questions
> > > about version 
> > > > 0.5.0-, but I'll give it a try anyhow. 
> > > > 
> > > > In 0.4.5, collect([1 2]) gives Array{Int64,1}, like a column
> > > vector. 
> > > > 
> > > > In 0.5.0_ (as of 25 May, Win64), collect([1 2]) gives
> > > Array{Int64,2}, like a 
> > > > row vector. 
> > > > 
> > > > Is this intended? (...just struggling to prepare for the next
> > > release) 
> > > > 
> > > > /Paul S 
> > > 


Re: [julia-users] DateTime conversion in DataFrames

2016-05-26 Thread Milan Bouchet-Valat
Le jeudi 26 mai 2016 à 06:15 -0700, akrun a écrit :
> I am using the DataFrames package.  I find it difficult to convert to
> DateTime. 
> 
>       println(DateTime("4/5/2002 04:20", "m/d/y H:M")) 
> gives output
> 
>       2002-04-05T04:20:00
> 
> However, if I try
> 
>   df1 = DataFrame(V1 = ["4/5/2002 04:20", "4/5/2002 04:25"])
>   println(DateTime(df1[:V1])) 
> gives
> 
>  ArgumentError: Delimiter mismatch. Couldn't find first
> delimiter, "-", in date string
>  in parse at dates/io.jl:152
>  
> 
> Is there any workaround?
This isn't specific to data frames. You also get this with
V1 = ["4/5/2002 04:20", "4/5/2002 04:25"]
DateTime(V1)

Anyway, you need to pass the format as in your first example:
DateTime(V1, "m/d/y H:M")
DateTime(df1[:V1], "m/d/y H:M")
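
And to keep the parsed values, you can assign the result back to the column
(a small sketch):

df1[:V1] = DateTime(df1[:V1], "m/d/y H:M")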


Regards


Re: [julia-users] julia equivalent of python [] (aka list)

2016-05-25 Thread Milan Bouchet-Valat
Le mercredi 25 mai 2016 à 02:50 -0700, DNF a écrit :
> Is ::Array{Any, 1} the correct annotation?
> >> hello(v::Vector{Any}) = println("Hello")
> >> hello([2,'a']) 
> Hello
> >> hello([2,2]) 
> ERROR: MethodError: no method matching hello(::Array{Int64,1}) 
>  in eval(::Module, ::Any) at
> /usr/local/Cellar/julia/HEAD/lib/julia/sys.dylib:-1
> 
> It only works for vectors that are specifically of type Vector{Any}.
> Vector{Int64} is not a subtype of Vector{Any}.
> 
> This works, however, even though Vector is not a subtype of
> Vector{Any}:
> >> goodbye(v::Vector) = println("bye, bye")
> goodbye (generic function with 1 method) 
> >> goodbye([2,'a'])
> bye, bye 
> >>> goodbye([2,2]) 
> bye, bye
You're hitting type invariance. See
http://docs.julialang.org/en/latest/manual/types/#parametric-composite-types
and look for this term in the mailing list archives.
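
In short (a couple of illustrative checks):

julia> Vector{Int} <: Vector{Any}   # invariance: why hello([2,2]) fails
false

julia> Vector{Int} <: Vector        # parameter left free: why goodbye works
true

julia> hello{T}(v::Vector{T}) = println("Hello")   # accepts any element type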


Regards

> > You are mixing up the constructor and the type syntax. Just use 
> > Vector{Any} in the type definition. 
> > 
> > On Tue, May 24 2016, Andreas Lobinger wrote: 
> > 
> > > I tend to agree with you, however... 
> > > 
> > > julia> d = Any[] 
> > > 0-element Array{Any,1} 
> > > 
> > > julia> type d1 
> > >        name::AbstractString 
> > >        content::Any[] 
> > >        end 
> > > ERROR: TypeError: d1: in type definition, expected Type{T}, got
> > Array{Any,1} 
> > > 
> > > On Tuesday, May 24, 2016 at 7:11:50 PM UTC+2, Stefan Karpinski
> > wrote: 
> > > 
> > >  Since Julia 0.4 [] is what you're looking for. 
> > > 
> > >  On Tue, May 24, 2016 at 1:06 PM, Andreas Lobinger wrote: 
> > > 
> > >  Hello colleagues, 
> > > 
> > >  it really feels strange to ask this, but what is the julia
> > equivalent of python's list? 
> > > 
> > >  So. 
> > > 
> > >  1 can by initialised empty 
> > >  2 subject of append and extend 
> > >  3 accepting Any entry 
> > >  4 foolproof usage in type definition... (my real problem seems
> > to be here) 
> > > 
> > >  Wishing a happy day, 
> > > 
> > >          Andreas 
> > 


Re: [julia-users] Re: Lack of an explicit return in Julia, heartache or happiness?

2016-05-25 Thread Milan Bouchet-Valat
Le mardi 24 mai 2016 à 23:00 -0400, Tom Breloff a écrit :
> > if g() returns `nothing` then this code is fine; if g() returns a
> > value, then we are accidentally returning it.
> 
> This is the frustrating part for me.   I very frequently have methods
> which "do something and then pass control to another method".  So by
> putting g() at the end of f, I'm purposefully (not accidentally)
> saying that I want to return whatever g() returns.  Adding "return"
> before that doesn't change the function, or the intention.  If
> someone doesn't care or want what is returned, then they don't use
> it.  I still don't see any benefit of a change, but I do see a real
> annoyance.
The problem isn't so much functions returning values that are not
useful, but functions which inadvertently return  a value just because
it's what the last expression happens to evaluate to. Then callers may
start relying on it, and the day you refactor the function a bit it
returns something different and their code breaks.

To avoid this, you need to add explicit "return nothing" or "nothing"
at the end of functions, which is annoying (and can easily be forgotten
when you never experienced the problem).
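
A tiny sketch of the failure mode (g is hypothetical here):

function f(x)
    g(x)            # last expression: f silently returns whatever g returns
end

function f_safe(x)
    g(x)
    return nothing  # callers can never start relying on g's return value
end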


Regards

> Is there an actual performance hit if there's a value returned but
> not used?  I assume that the compiler sees that the return value is
> unused and discards it as if nothing was returned.  Am I wrong with
> that assumption?
> 
> On Tue, May 24, 2016 at 6:29 PM, Stefan Karpinski wrote:
> > On Tue, May 24, 2016 at 5:08 PM, Steven G. Johnson wrote:
> > > As Jeff says, the current behavior is consistent in that a block
> > > like begin...end has a value equal to its last expression in any
> > > other context, so it would be odd if begin...end in a function
> > > body did not have the same behavior. I mostly like the style of
> > > explicit "return" statements in functions, but I think this
> > > should be more of a linting thing.
> > > 
> > What I proposed doesn't change anything about what any expression
> > evaluates to. The only thing it changes is that
> > 
> > function f(args...)
> >     # stuff
> > end
> > 
> > would implicitly mean
> > 
> > function f(args...)
> >     # stuff
> >     return
> > end
> > 
> > That way unless you put an explicit return in a long form function,
> > it automatically returns nothing, avoiding accidentally returning
> > the value of the last expression.
> > In order to adjust for this change one would at most need to add a
> > single return at the end of long-form functions which are
> > *intended* to return a value. Short-form functions would not have
> > to change and long-form functions with explicit returns would not
> > have to change.
> > 
> > The problem with making this "a linting thing" is that we don't
> > statically know which expressions return nothing. When you see code
> > like this:
> > 
> > function f(args...)
> >     # stuff
> >     g()
> > end
> > 
> > should it be a lint warning or not? It depends on what g() returns:
> > if g() returns `nothing` then this code is fine; if g() returns a
> > value, then we are accidentally returning it. Whether the code
> > passes linting depends on the unknowable behavior of g(). Even if
> > we did dynamic instrumentation, there's a chance that g() could
> > return `nothing` as a meaningful value (see match). With the change
> > I proposed, we can statically tell the difference between an
> > accidental return and an intentional one.
> > 


Re: [julia-users] Gaston package up-and-running in Windows 8/10 with gnuplot utility v5.0 (!!)

2016-05-23 Thread Milan Bouchet-Valat
Le dimanche 22 mai 2016 à 22:19 -0700, Андрей Логунов a écrit :
> I've got a Gaston package version up-and-running with Julia 0.4.5
> under Windows 8/10 with gnuplot utility v5.0. And it does run in
> Linux/Ubuntu 16.4, too.
> A short instruction in the Gaston.jl file.  Just do not know, how to
> "pull request" it.
You'll have to learn a bit of git to do that. See
http://docs.julialang.org/en/stable/manual/packages/#making-changes-to-an-existing-package


Regards


Re: [julia-users] Re: Coefficient of determination/R2/r-squared of model and accuracy of R2 estimate

2016-05-21 Thread Milan Bouchet-Valat
Le vendredi 20 mai 2016 à 14:24 -0700, Kevin Liu a écrit :
> Pkg.update() updated all packages, did the job, thanks. So suppose I
> have a dataset called train. R2(train), rˆ2(train), r2(train) didn't
> work. 
The R² only makes sense for a fitted model. See GLM.jl docs about how
to fit it, and call the function on the resulting object.
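
For instance (a rough sketch; the column names are just illustrative, assuming
`train` is a DataFrame with an outcome Y and a predictor X):

using GLM, StatsBase
model = lm(Y ~ X, train)   # fit first (or glm(Y ~ X, train, Normal(), IdentityLink()))
R2(model)                  # ...then compute R² on the fitted model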

> I meant predictive accuracy. Does it apply to GLM? 
Sure, but there are many possible indicators of this, so you'll have to
be more specific.


Regards

> > Le vendredi 20 mai 2016 à 08:59 -0700, Kevin Liu a écrit : 
> > > I think accuracy doesn't make sense for a linear model whose
> > purpose 
> > > isn't to predict. Do you agree?  
> > Sorry, I don't know what you mean by "accuracy". Anyway, only
> > users 
> > know the purpose of their models. All we can do is provide the
> > support 
> > for indicators and let them choose the appropriate ones. 
> > 
> > 
> > Regards 
> > 
> > > > Pkg.update("GLM") 
> > > > ERROR: MethodError: `update` has no method matching 
> > > > update(::ASCIIString) 
> > > > 
> > > > 
> > > > 
> > > > > Le jeudi 19 mai 2016 à 19:08 -0700, Kevin Liu a écrit :  
> > > > > > Thanks. I might need some help if I encounter problems on
> > this 
> > > > > pseudo  
> > > > > > version.   
> > > > > I've just tagged a new 0.5.2 release, so this shouldn't be 
> > > > > necessary  
> > > > > now (just run Pkg.update()).  
> > > > > 
> > > > > 
> > > > > Regards  
> > > > > 
> > > > > > > Le jeudi 19 mai 2016 à 09:30 -0700, Kevin Liu a
> > écrit :   
> > > > > > > > It seems the pkg owners are still deciding   
> > > > > > > >   
> > > > > > > > Funcs to evaluate fit                                  
> >     
> > > > >         
> > > > > > > >        https://github.com/JuliaStats/GLM.jl/issues/74  
> >  
> > > > > > > > Add fit statistics functions and document existing ones
> >  ht 
> > > > > tps://gith   
> > > > > > > > ub.com/JuliaStats/StatsBase.jl/pull/146   
> > > > > > > > Implement fit statistics functions                    
> >     
> > > > >         
> > > > > > > >   https://github.com/JuliaStats/GLM.jl/pull/115   
> > > > > > > These PRs have been merged, so we just need to tag a new 
> > > > > release. Until   
> > > > > > > then, you can use Pkg.checkout() to use the development 
> > > > > version   
> > > > > > > (function is called R² or R2).   
> > > > > > >  
> > > > > > >  
> > > > > > > Regards   
> > > > > > >  
> > > > > > > >   
> > > > > > > > > I looked in GLM.jl but couldn't find a function for 
> > > > > calculating the   
> > > > > > > > > R2 or the accuracy of the R2 estimate.   
> > > > > > > > >   
> > > > > > > > > My understanding is that both should appear with the 
> > > > > glm()   
> > > > > > > > > function. Help would be appreciated.
> > > > > > > > >   
> > > > > > > > > Kevin   
> > > > > > > > >   


Re: [julia-users] Re: Coefficient of determination/R2/r-squared of model and accuracy of R2 estimate

2016-05-20 Thread Milan Bouchet-Valat
Le vendredi 20 mai 2016 à 08:37 -0700, Kevin Liu a écrit :
> Pkg.update("GLM")
> ERROR: MethodError: `update` has no method matching
> update(::ASCIIString)
Try Pkg.update().


> > Le jeudi 19 mai 2016 à 19:08 -0700, Kevin Liu a écrit : 
> > > Thanks. I might need some help if I encounter problems on this
> > pseudo 
> > > version.  
> > I've just tagged a new 0.5.2 release, so this shouldn't be
> > necessary 
> > now (just run Pkg.update()). 
> > 
> > 
> > Regards 
> > 
> > > > Le jeudi 19 mai 2016 à 09:30 -0700, Kevin Liu a écrit :  
> > > > > It seems the pkg owners are still deciding  
> > > > >  
> > > > > Funcs to evaluate fit                                        
> >          
> > > > >        https://github.com/JuliaStats/GLM.jl/issues/74  
> > > > > Add fit statistics functions and document existing ones  http
> > s://gith  
> > > > > ub.com/JuliaStats/StatsBase.jl/pull/146  
> > > > > Implement fit statistics functions                          
> >          
> > > > >   https://github.com/JuliaStats/GLM.jl/pull/115  
> > > > These PRs have been merged, so we just need to tag a new
> > release. Until  
> > > > then, you can use Pkg.checkout() to use the development
> > version  
> > > > (function is called R² or R2).  
> > > > 
> > > > 
> > > > Regards  
> > > > 
> > > > >  
> > > > > > I looked in GLM.jl but couldn't find a function for
> > calculating the  
> > > > > > R2 or the accuracy of the R2 estimate.  
> > > > > >  
> > > > > > My understanding is that both should appear with the
> > glm()  
> > > > > > function. Help would be appreciated.   
> > > > > >  
> > > > > > Kevin  
> > > > > >  


Re: [julia-users] Re: Coefficient of determination/R2/r-squared of model and accuracy of R2 estimate

2016-05-20 Thread Milan Bouchet-Valat
Le vendredi 20 mai 2016 à 08:59 -0700, Kevin Liu a écrit :
> I think accuracy doesn't make sense for a linear model whose purpose
> isn't to predict. Do you agree? 
Sorry, I don't know what you mean by "accuracy". Anyway, only users
know the purpose of their models. All we can do is provide the support
for indicators and let them choose the appropriate ones.


Regards

> > Pkg.update("GLM")
> > ERROR: MethodError: `update` has no method matching
> > update(::ASCIIString)
> > 
> > 
> > 
> > > Le jeudi 19 mai 2016 à 19:08 -0700, Kevin Liu a écrit : 
> > > > Thanks. I might need some help if I encounter problems on this
> > > pseudo 
> > > > version.  
> > > I've just tagged a new 0.5.2 release, so this shouldn't be
> > > necessary 
> > > now (just run Pkg.update()). 
> > > 
> > > 
> > > Regards 
> > > 
> > > > > Le jeudi 19 mai 2016 à 09:30 -0700, Kevin Liu a écrit :  
> > > > > > It seems the pkg owners are still deciding  
> > > > > >  
> > > > > > Funcs to evaluate fit                                      
> > >            
> > > > > >        https://github.com/JuliaStats/GLM.jl/issues/74  
> > > > > > Add fit statistics functions and document existing ones  ht
> > > tps://gith  
> > > > > > ub.com/JuliaStats/StatsBase.jl/pull/146  
> > > > > > Implement fit statistics functions                        
> > >            
> > > > > >   https://github.com/JuliaStats/GLM.jl/pull/115  
> > > > > These PRs have been merged, so we just need to tag a new
> > > release. Until  
> > > > > then, you can use Pkg.checkout() to use the development
> > > version  
> > > > > (function is called R² or R2).  
> > > > > 
> > > > > 
> > > > > Regards  
> > > > > 
> > > > > >  
> > > > > > > I looked in GLM.jl but couldn't find a function for
> > > calculating the  
> > > > > > > R2 or the accuracy of the R2 estimate.  
> > > > > > >  
> > > > > > > My understanding is that both should appear with the
> > > glm()  
> > > > > > > function. Help would be appreciated.   
> > > > > > >  
> > > > > > > Kevin  
> > > > > > >  


Re: [julia-users] Re: Pkg.publish() : ERROR: There are no METADATA changes to publish

2016-05-20 Thread Milan Bouchet-Valat
Le vendredi 20 mai 2016 à 07:14 -0700, Evan Fields a écrit :
> On further inspection I think this is because I managed to name the
> package repository TravelingSalesmanHeuristics.jl (rather than just
> TravelingSalesmanHeuristics). Indeed I just ran
> Pkg.add("TravelingSalesmanHeuristics") and now have in my .julia/v0.4
> both TravelingSalesmanHeuristics and TravelingSalesmanHeuristics.jl
> 
> When I published v0.0.1 Pkg.publish failed (different error, can't
> remember) and I had to create the pull request manually; presumably I
> now have to do that going forward?
You should be able to move TravelingSalesmanHeuristics.jl to another
folder and work with TravelingSalesmanHeuristics. Doesn't that work?


Regards


Re: [julia-users] Pkg.publish() : ERROR: There are no METADATA changes to publish

2016-05-20 Thread Milan Bouchet-Valat
Le vendredi 20 mai 2016 à 07:04 -0700, Evan Fields a écrit :
> Presumably I'm doing something dumb, but I'm at a loss. I'm trying to
> tag version 0.0.2 of TravelingSalesmanHeuristics in METADATA. With my
> local machine up to date with the remote github repository, I run
> Pkg.update() and Pkg.tag("TravelingSalesmanHeuristics.j", v"0.0.2")
> and get no errors. 
I think the command should be
Pkg.tag("TravelingSalesmanHeuristics", v"0.0.2")

(The error reporting might be broken.)


Regards

> Then I run Pkg.publish() and get the following output:
> 
> julia> Pkg.publish()
> ERROR: There are no METADATA changes to publish
>  in publish at pkg/entry.jl:348
>  in anonymous at pkg/dir.jl:31
>  in cd at file.jl:32
>  in cd at pkg/dir.jl:31
>  in publish at pkg.jl:61
> 
> 
> Indeed, when I to git log -n 10 in the local copy of METADATA I see
> no commits about tagging TravelingSalesmanHeuristics. So it seems the
> Pkg.publish error is not wrong.
> 
> What am I doing wrong? Thanks!


Re: [julia-users] Sharing methods across modules

2016-05-20 Thread Milan Bouchet-Valat
Le vendredi 20 mai 2016 à 05:53 -0700, Christoph Ortner a écrit :
> I want to understand how to share methods across modules who don't
> know of one another.  I understand that this is discussed in various
> places; I tried to go through may issues, but in the end I didn't get
> a good picture of what I should do now. Long post - my question in
> the end is: is creating an interface module the only (best) way to
> achieve this right now, or are there better / more elegant / ...
> constructs?
Yes, I think that's the recommended solution, and I don't know of plans
to change this. It can be a bit annoying, but it has the advantage of
forcing collaboration to define common interfaces across
modules/packages. You should be able to find threads about this in the
archives where the arguments against automatically merging methods from
different modules are developed.


Regards

> Consider this situation:
> 
> module A
>     export TA, blib
>     type TA end
>     blib(::TA) = "A"
> end
> 
> module B
>     export blib, dosomething
>     blib(::Any) = "any"
>     dosomething(x) = blib(x)
> end
> 
> using A, B
> dosomething(TA())
> 
> #  "any"
> 
> I want the output to be "A", but it will be "any", because module B
> doesn't know about A.blib. The standard solution is this (?yes?):
> 
> module B
>     export blib, dosomething
>     blib(::Any) = "any"
>     dosomething(x) = blib(x)
> end
>     
> module A
>     import B
>     export TA
>     type TA end
>     B.blib(::TA) = "A"
> end
> 
> using A, B
> dosomething(TA())
> 
> #  "A"
> 
> However, this requires A to know about B (or vice-versa). Suppose now
> that I want to be able to use functionality across modules that don't
> know of one another, so this sort of import is not an option. The
> *only* solution I see at the moment is to define a "super-module"
> that specifies the interface
> 
> module Super
>    export blib
>    blib() = error("this is just an interface")
> end
> 
> module A
>     import Super
>     export TA
>     type TA end
>     Super.blib(::TA) = "A"
> end
> 
> module B
>     using Super
>     export TB, dosomething
>     Super.blib(::Any) = "any"
>     dosomething(x) = blib(x)
> end
> 
> using B, A
> dosomething(TA())
> 
> To repeat my question: is this the recommended way to achieve that A
> and B should not have to know of each other?
> 
> Long-term: how is this going to evolve, which are the most relevant
> issues to follow?
> 
> Any comments would be appreciated. 
> 
> Christoph
> 


Re: [julia-users] Re: Coefficient of determination/R2/r-squared of model and accuracy of R2 estimate

2016-05-20 Thread Milan Bouchet-Valat
Le jeudi 19 mai 2016 à 19:08 -0700, Kevin Liu a écrit :
> Thanks. I might need some help if I encounter problems on this pseudo
> version. 
I've just tagged a new 0.5.2 release, so this shouldn't be necessary
now (just run Pkg.update()).


Regards

> > Le jeudi 19 mai 2016 à 09:30 -0700, Kevin Liu a écrit : 
> > > It seems the pkg owners are still deciding 
> > > 
> > > Funcs to evaluate fit                                                 
> > >        https://github.com/JuliaStats/GLM.jl/issues/74 
> > > Add fit statistics functions and document existing ones  https://gith 
> > > ub.com/JuliaStats/StatsBase.jl/pull/146 
> > > Implement fit statistics functions                                   
> > >   https://github.com/JuliaStats/GLM.jl/pull/115 
> > These PRs have been merged, so we just need to tag a new release. Until 
> > then, you can use Pkg.checkout() to use the development version 
> > (function is called R² or R2). 
> > 
> > 
> > Regards 
> > 
> > > 
> > > > I looked in GLM.jl but couldn't find a function for calculating the 
> > > > R2 or the accuracy of the R2 estimate. 
> > > > 
> > > > My understanding is that both should appear with the glm() 
> > > > function. Help would be appreciated.  
> > > > 
> > > > Kevin 
> > > > 


Re: [julia-users] Re: Coefficient of determination/R2/r-squared of model and accuracy of R2 estimate

2016-05-19 Thread Milan Bouchet-Valat
Le jeudi 19 mai 2016 à 09:30 -0700, Kevin Liu a écrit :
> It seems the pkg owners are still deciding
> 
> Funcs to evaluate fit                                                
>        https://github.com/JuliaStats/GLM.jl/issues/74
> Add fit statistics functions and document existing ones  https://gith
> ub.com/JuliaStats/StatsBase.jl/pull/146
> Implement fit statistics functions                                  
>   https://github.com/JuliaStats/GLM.jl/pull/115
These PRs have been merged, so we just need to tag a new release. Until
then, you can use Pkg.checkout() to use the development version
(function is called R² or R2).


Regards

> 
> > I looked in GLM.jl but couldn't find a function for calculating the
> > R2 or the accuracy of the R2 estimate.
> > 
> > My understanding is that both should appear with the glm()
> > function. Help would be appreciated. 
> > 
> > Kevin
> > 


Re: [julia-users] Re: Slow reading of file

2016-05-15 Thread Milan Bouchet-Valat
Le dimanche 15 mai 2016 à 08:08 -0700, Ford Ox a écrit :
> Thanks Yu and Quinn.
> 
> Now lets go one step further. Lets say I don't want to use any
> default parse function. I will make my own.
> 
> type Buffer{T} x::T end
> 
> function store!(b::Buffer{String}, c::Char) b.x = "$(b.x)$x" end
> function store!(b::Buffer{Int}, c::Char, d::Int) b.x += (c -
> '0')*10^d end #d is number of digits
> 
> 
> Usage
> get_rid_of_leading_spaces()
> while !isspace((x = read_next_char()) store(buffer, x) end
> 
> But I can see potential problems:
> "$(b.x)$x" is probably not much effective, maybe I should use arrays
> of chars with fixed size, but how do I convert it to string?
> string(char_array...) doesn't work with '\0' and I don't want to
> create new array by calling char_array[1:size].
> is b.x += (c - '0')*10^d really the fastest implementation possible?
See IOBuffer and takebuf_string.
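
For example, a minimal sketch of accumulating characters that way (no
"$(b.x)$x" reallocations, and no trailing NULs to strip afterwards):

buf = IOBuffer()
for c in "42 17"             # stands in for characters read one at a time
    isspace(c) && break
    write(buf, c)
end
token = takebuf_string(buf)  # "42"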


Regards

> As a side note, is there anything like show_method_body(function)? I
> often want to see how is function base.xy implemented, but I have to
> manually search all the files which is quite exhausting.
> 
>   


Re: [julia-users] Slow reading of file

2016-05-14 Thread Milan Bouchet-Valat
Le samedi 14 mai 2016 à 05:01 -0700, Ford Ox a écrit :
> Fixed. Julia now takes 11 seconds to finish
> type Tokenizer
>     tokens::Array{AbstractString, 1}
>     index::Int
>     Tokenizer(s::AbstractString) = new(split(strip(s)), 0)
> end
>
> type Buffer
>     stream::IOStream
>     tokenizer::Tokenizer
>     Buffer(stream) = new(stream, Tokenizer(""))
> end
AbstractString is still not a concrete type. Use
UTF8String/ASCIIString, or do this instead:

type Tokenizer{T<:AbstractString}
     tokens::Array{T, 1}
     index::Int
     Tokenizer(s::AbstractString) = new(split(strip(s)), 0)
end

type Buffer{T<:AbstractString}
    stream::IOStream
    tokenizer::Tokenizer{T}
    Buffer(stream) = new(stream, Tokenizer(""))
end

(Note that "" will create an ASCIIString, use UTF8String("") if you need to 
support non-ASCII chars.)


Regards

> 
> 
> > Your types have totally untyped fields – the compiler has to emit
> > very pessimistic code about this. Rule of thumb: locations (fields,
> > collections) should be as concretely typed as possible; parameters
> > don't need to be.
> > 
> > On Sat, May 14, 2016 at 1:36 PM, Ford Ox  wrote:
> > > I have written exact same code in java and julia for reading
> > > integers from file. 
> > > Julia code was A LOT slower. (12 seconds vs 1.16 seconds)
> > > 
> > > import Base.isempty, Base.close
> > > 
> > > ##    Tokenizer ##
> > > 
> > > type Tokenizer
> > >     tokens
> > >     index
> > >     Tokenizer(s::AbstractString) = new(split(strip(s)), 0)
> > > end
> > > 
> > > isempty(t::Tokenizer) = length(t.tokens) == t.index
> > > 
> > > function next!(t::Tokenizer)
> > >     t.index += 1
> > >     t.tokens[t.index]
> > > end
> > > 
> > > ## Buffer ##
> > > 
> > > type Buffer
> > >     stream
> > >     tokenizer
> > >     Buffer(stream) = new(stream, [])
> > > end
> > > 
> > > function next!(b::Buffer)
> > >     if isempty(b.tokenizer)
> > >         b.tokenizer = Tokenizer(readline(b.stream))
> > >     end
> > >     next!(b.tokenizer)
> > > end
> > > 
> > > close!(b::Buffer) = close(b.stream)
> > > nexttype!(t, b::Buffer) = parse(t, next!(b))
> > > nextint!(b::Buffer) = nexttype!(Int, b)
> > > 
> > > cd("pathToMyFile")
> > > b = Buffer(open("File"))
> > > 
> > > function readall!(b::Buffer)
> > >     for _ in 1:nextint!(b)
> > >         nextint!(b)
> > >     end
> > >     close!(b)
> > > end
> > > 
> > > @time readall!(b)
> > > 
> > > 
> > > > 12.314114 seconds (84.84 M allocations: 3.793 GB, 11.47% gc
> > > > time)
> > > package alg;
> > > 
> > > import java.io.*;
> > > import java.util.StringTokenizer;
> > > 
> > > public class Try {
> > >     StringTokenizer tokenizer;
> > >     BufferedReader reader;
> > > 
> > >     public static void main(String[] args) throws IOException {
> > >         String name = "fileName";
> > >         Try reader = new Try(new File(name));
> > > 
> > >         long itime = System.nanoTime();
> > >         int N = reader.nextInt();
> > >         for(int n=0; n < N; n++)
> > >             reader.nextInt();
> > >         System.out.println((double) (System.nanoTime() - itime) /
> > > 10);
> > > 
> > >     }
> > > 
> > >     Try(File f) throws FileNotFoundException {
> > >         tokenizer = new StringTokenizer("");
> > >         reader = new BufferedReader(new FileReader(f));
> > >     }
> > > 
> > >     String next() throws IOException {
> > >         if(!tokenizer.hasMoreTokens()) tokenize();
> > >         return tokenizer.nextToken();
> > >     }
> > > 
> > >     void tokenize() throws IOException {
> > >         tokenizer = new StringTokenizer(reader.readLine());
> > >     }
> > > 
> > >     int nextInt() throws IOException {
> > >         return Integer.parseInt(next());
> > >     }
> > > }
> > > >  1.169884868
> > >  
> > > The file has 7 068 650 lines. On each line is one integer that is
> > > not bigger than 2^16.
> > > 
> > 


Re: [julia-users] Re: NA vs NaN in DataFrames

2016-05-14 Thread Milan Bouchet-Valat
Le samedi 14 mai 2016 à 17:49 +1000, Андрей Логунов a écrit :
> using RDatasets, DataFrames
> mlmf = dataset("mlmRev","Gcsemv");
> 
> Produces
> 
> │ 1    │ "20920" │ "16"    │ "M"    │ 23.0    │ NaN    │
> │ 2    │ "20920" │ "25"    │ "F"    │ NaN     │ 71.2   │
> │ 3    │ "20920" │ "27"    │ "F"    │ 39.0    │ 76.8   │
> │ 4    │ "20920" │ "31"    │ "F"    │ 36.0    │ 87.9   │
> │ 5    │ "20920" │ "42"    │ "M"    │ 16.0    │ 44.4   │
> │ 6    │ "20920" │ "62"    │ "F"    │ 36.0    │ NaN    │
> │ 7    │ "20920" │ "101"   │ "F"    │ 49.0    │ 89.8   │
> │ 8    │ "20920" │ "113"   │ "M"    │ 25.0    │ 17.5   │
> │ 9    │ "20920" │ "146"   │ "M"    │ NaN     │ 32.4   │
> │ 10   │ "22520" │ "1"     │ "F"    │ 48.0    │ 84.2   │
> │ 11   │ "22520" │ "7"     │ "M"    │ 46.0    │ 66.6   │
> 
> 
> Should be NAs but here we have NaNs
Thanks for the reproducible example. Please file an issue on GitHub
against RDatasets.


Regards

> 2016-05-14 17:22 GMT+10:00 Tamas Papp :
> > Again: please provide a self-contained example.
> > 
> > On Sat, May 14 2016, Андрей Логунов wrote:
> > 
> > > The misuse of the word is all mine.
> > > But the problem persists. RDatasets in Win10 produce NaN-values
> > for
> > > unavailable values (NAs) as compared to Unices.
> > > So the funcs dropna() and complete_cases() 'do not work' as
> > needed? no
> > > filtering done. As I understand Complete_cases() uses a bitarray.
> > But is
> > > there a shortcut?
> > >
> > >
> > >
> > > суббота, 14 мая 2016 г., 16:26:01 UTC+10 пользователь Tamas Papp
> > написал:
> > >>
> > >> On Sat, May 14 2016, Андрей Логунов wrote:
> > >>
> > >> > To add, fiddling with array comprehensions as per the problem
> > with NaNs
> > >> > found a buggy thing.
> > >> > The following code does not work:
> > >> >
> > >> > [x for x in filter(!isnan, convert(Array,dataframe[:fld]))]
> > >>
> > >> This is not a bug. ! does not operate on functions, only on
> > concrete
> > >> values (Bitarray, Bool, etc).
> > >>
> > >> Also, even if you find a bug, "does not work" is unlikely to get
> > you any
> > >> help without an error message and preferably a self-contained
> > example.
> > >>
> > >> Best,
> > >>
> > >> Tamas
> > >>
> > 
> > 


Re: [julia-users] Splatting speed

2016-05-13 Thread Milan Bouchet-Valat
Le vendredi 13 mai 2016 à 14:54 -0700, Brandon Taylor a écrit :
> I was wondering why the following code is so slow:
> 
> @time broadcast( (x...) -> +(x...), [1:1000], [1001:2000] )
> 
> In comparison to
> 
> @time broadcast(+, [1:1000], [1001:2000] )
> 
> and what would be a faster way to define an anonymous function with a
> variable number of arguments? Note, I can't use zip because I can't
> guarantee that the things being broadcast over are going to be the
> same size.
Splatting on a (strongly) variable number of arguments is generally not
a good idea, as Julia needs to compile specific methods for each
combination of arguments. But that's not the problem you're seeing
here, since you only have two arguments.

Judging from your use of [1:1000] instead of collect(1:1000), I guess
you're on Julia 0.4, where anonymous functions are slow. In 0.5, and
when wrapping the code in a function, I don't see any clear difference.
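For instance, this is what I mean by wrapping the calls in functions (0.5 sketch):

f(a, b) = broadcast((x...) -> +(x...), a, b)   # anonymous varargs function
g(a, b) = broadcast(+, a, b)

a = collect(1:1000); b = collect(1001:2000)
f(a, b); g(a, b)   # compile first
@time f(a, b)
@time g(a, b)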


Regards


Re: [julia-users] Match package minor bug, it seems

2016-05-13 Thread Milan Bouchet-Valat
Le vendredi 13 mai 2016 à 00:52 -0700, Андрей Логунов a écrit :
> Match package raises a deprecation warning regarding String.
> Upon inspection the problem is in the following passage in
> matchmacro.jl
> 
> # Regex strings (r"[a-z]*")
>     elseif isexpr(expr, :macrocall) && expr.args[1] ==
> symbol("@r_str")
>     append!(info.tests, [:(isa($val, String)),
> :(Match.ismatch($expr, $val))])
>     info
> 
> Replacing String with AbstractString here with the following
> Pkg.build("Match") seem to solve the problem.
Good catch. Please make a pull request against Match.jl on GitHub.
Though you'll have to use the Compat package to keep supporting Julia
0.3 (or drop support for that release if the maintainer agrees).


Regards

> B.R.,
> Win10 and JL 0.4.5 user


Re: [julia-users] Re: how long until vectorized code runs fast?

2016-05-12 Thread Milan Bouchet-Valat
Le mercredi 11 mai 2016 à 23:03 -0700, Anonymous a écrit :
> In response to both Kristoffer and Keno's timely responses,
> 
> Originally I just did a simple @time test of the form
> Matrix .* horizontal vector
> 
> and then tested the same thing with for loops, and the for loops were
> way faster (and used way less memory)
> 
> However I just devectorized one of my algorithms and ran an @time
> comparison and the vectorized version was actually twice as fast as
> the devectorized version, however the vectorized version used way
> more memory.  Clearly I don't really understand the specifics of what
> makes code slow, and in particular how vectorized code compares to
> devectorized code.  Vectorized code does seem to use a lot more
> memory, but clearly for my algorithm it nevertheless runs faster than
> the devectorized version.  Is there a reference I could look at that
> explains this to someone with a background in math but not much
> knowledge of computer architecture?
I don't know about a reference, but I suspect this is due to BLAS.
Vectorized versions of linear algebra operations like matrix
multiplication are highly optimized, and run several threads in
parallel. OTOH, your devectorized code isn't carefully tuned for a
specific processor model, and uses a single CPU core (soon Julia will
support using several threads, and see [1]).

So depending on the particular operations you're running, the
vectorized form can be faster even though it allocates more memory. In
general, it will likely be faster to use BLAS for expensive operations
on large matrices. OTOH, it's better to devectorize code if you
successively perform several simple operations on an array, because
each operation currently allocates a copy of the array (this may well
change with [2]).
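
To make that concrete, a small sketch (each vectorized operation below
allocates a temporary array, while the loop makes a single pass):

x = rand(10^6); y = rand(10^6)

# vectorized: several temporaries are allocated
vec_version(x, y) = exp(-x) .* y + x .^ 2

# devectorized: one output array, one pass over the data
function devec_version(x, y)
    out = similar(x)
    for i in eachindex(x)
        out[i] = exp(-x[i]) * y[i] + x[i]^2
    end
    out
end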


Regards


1: http://julialang.org/blog/2016/03/parallelaccelerator
2: https://github.com/JuliaLang/julia/issues/16285

> > There seems to be a myth going around that vectorized code in Julia
> > is 
> > slow. That's not really the case. Often times it's just that 
> > devectorized code is faster because one can manually perform 
> > operations such as loop fusion, which the compiler cannot
> > currently 
> > reason about (and most C compilers can't either). In some other 
> > languages those benefits get drowned out by language overhead, but
> > in 
> > julia those kinds of constructs are generally fast. The cases
> > where 
> > julia can be slower is when there is excessive memory allocation in
> > a 
> > tight inner loop, but those cases can usually be rewritten fairly 
> > easily without losing the vectorized look of the code. 
> > 
> > On Thu, May 12, 2016 at 1:35 AM, Kristoffer Carlsson 
> >  wrote: 
> > > It is always easier to discuss if there is a piece of code to
> > look at. Could 
> > > you perhaps post a few code examples that does not run as fast as
> > you want? 
> > > 
> > > Also, make sure to look at : 
> > > https://github.com/IntelLabs/ParallelAccelerator.jl. They have a
> > quite 
> > > sophisticated compiler that does loop fusions and parallelization
> > and other 
> > > cool stuff. 
> > > 
> > > 
> > > 
> > > On Thursday, May 12, 2016 at 7:22:24 AM UTC+2, Anonymous wrote: 
> > >> 
> > >> This remains one of the main drawbacks of Julia, and the
> > devectorize 
> > >> package is basically useless as it doesn't support some really
> > crucial 
> > >> vectorized operations.  I'd really prefer not to rewrite all my
> > vectorized 
> > >> code into nested loops if at all possible, but I really need
> > more speed, can 
> > >> anyone tell me the timeline and future plans for making
> > vectorized code run 
> > >> at C speed? 


Re: [julia-users] how long until vectorized code runs fast?

2016-05-12 Thread Milan Bouchet-Valat
Le mercredi 11 mai 2016 à 22:22 -0700, Anonymous a écrit :
> This remains one of the main drawbacks of Julia, and the devectorize
> package is basically useless as it doesn't support some really
> crucial vectorized operations.  I'd really prefer not to rewrite all
> my vectorized code into nested loops if at all possible, but I really
> need more speed, can anyone tell me the timeline and future plans for
> making vectorized code run at C speed?
Some major improvements are coming in 0.5, and more are currently being
worked on/discussed. See
https://github.com/JuliaLang/julia/issues/16285


Regards


Re: [julia-users] using subscripts \_1, \_2, ... as field names

2016-05-10 Thread Milan Bouchet-Valat
Le mardi 10 mai 2016 à 01:56 -0700, Davide Lasagna a écrit :
> Hi, 
> 
> I have a custom type representing a bordered matrix (a big square
> matrix, bordered by two vectors and a scalar in the bottom right
> corner), where the four blocks are stored in separated chunks of
> memory. I would like to call the fields of my type using subscripts
> \_1\_1, \_1\_2, ... so that I can nicely access them as A.11, A.12,
> ... However, it seems that subscripts (and superscripts) with digits
> can't be used for variable names, resulting in a syntax error.
>  Subscripts with letters work fine. 
> 
> Any thoughts on this?
All identifiers must start with a letter. So you could call them A.c₁₁
or A.c11, etc.

Maybe this rule could be loosened a bit, since subscripts are never
parsed as actual numbers. OTOH, I'm not sure using subscripts is really
better than using standard digits with a letter prefix.
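
For instance (purely an illustrative sketch, the names are up to you):

immutable BorderedMatrix{T}
    c₁₁::Matrix{T}   # big square block
    c₁₂::Vector{T}   # right border
    c₂₁::Vector{T}   # bottom border
    c₂₂::T           # bottom-right scalar
end

A = BorderedMatrix(rand(3, 3), rand(3), rand(3), 1.0)
A.c₁₂   # works, since the identifier starts with a letter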


Regards


Re: [julia-users] Re: calling sort on a range with rev=true

2016-05-05 Thread Milan Bouchet-Valat
Le dimanche 01 mai 2016 à 19:11 -0700, 'Greg Plowman' via julia-users a
écrit :
> 
> Extending/overwriting sort in range.jl (line 686)
> 
> sort(r::Range) = issorted(r) ? r : reverse(r)
> 
> with the following worked for me.
> 
> function Base.sort(r::Range; rev::Bool=false)
>     if rev
>         issorted(r) ? reverse(r) : r
>     else
>         issorted(r) ? r : reverse(r)
>     end
> end
Makes sense, please file an issue or a pull request (if the latter, be
sure to also add tests to prevent regressions).


Regards


Re: [julia-users] Re: Trouble with parametric parts of a type def

2016-04-29 Thread Milan Bouchet-Valat
Le vendredi 29 avril 2016 à 00:06 -0700, DNF a écrit :
> I was going to suggest:
> 
>  type MyType{A<:AbstractArray, T<:Unsigned} 
>      data::A{T}
>  end
> 
> which does not work. The error is:
> ERROR: TypeError: Type{...} expression: expected Type{T}, got TypeVar
> 
> Does anyone know whether this should work in an ideal world, or does
> the type definition simply not make sense?
This is called triangular dispatch, and will likely be supported at
some point. See https://github.com/JuliaLang/julia/issues/3766 and the
issues linked there.
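
Until then, one possible workaround (untested sketch) is to parameterize on the
container type only and check the element type at construction:

type MyType{A<:AbstractVector}
    data::A
    function MyType(data)
        eltype(A) <: Unsigned || throw(ArgumentError("element type must be <: Unsigned"))
        new(data)
    end
end

# outer constructor so that the parameter does not have to be spelled out
MyType{A<:AbstractVector}(data::A) = MyType{A}(data)

MyType(UInt8[1, 2, 3])        # ok
MyType(UInt16(1):UInt16(3))   # ok, ranges are AbstractVectors too
MyType([1.0, 2.0])            # throws an ArgumentError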


Regards

> > Hi, I'm trying to create a parametric type which can hold a variable
> > which could be any kind of vector, and whatever the vector is, it
> > can store one of the UInts. So it could have:
> > 
> > UInt8[1,2,3]
> > UInt64[1,2,3]
> > UnitRange(1, 3)
> > UnitRange(UInt16(1), UInt16(3))
> > ... and so on.
> > 
> > But I'm having trouble deciding what needs to go in the parametric
> > part of it's definition: e.g.
> > 
> > MyType{T <: (unsure of this bit)}
> >     val::T
> > end
> > 
> > 
> > I thought that AbstractVector{Unsigned} might be the way to go, but
> > Vector{UInt8} does not inherit from AbstractVector{Unsigned}.
> > 
> > Thanks,
> > Ben.
> > 


Re: [julia-users] Re: Evaluation of boolean expression fails

2016-04-26 Thread Milan Bouchet-Valat
Le mardi 26 avril 2016 à 13:22 -0400, Yichao Yu a écrit :
> On Tue, Apr 26, 2016 at 12:58 PM, Milan Bouchet-Valat  wrote:
> > 
> > Le mardi 26 avril 2016 à 12:52 -0400, Yichao Yu a écrit :
> > > 
> > > On Tue, Apr 26, 2016 at 12:09 PM, Ali Rezaee <arv.ka...@gmail.com> wrote:
> > > > 
> > > > 
> > > > 
> > > > Thanks for your replies.
> > > > My objective is exactly what the code shows. I have a list of Boolean
> > > > expressions similar to the examples in the code, and I need to evaluate 
> > > > them
> > > > one by one based on x values.
> > > > So writing a macro would be the only solution.
> > > Just to be clear, a macro can't help here. You need to eval in global
> > > scope if you want to evaluate arbitrary expressions.
> > Well, a macro could replace "x" with the name of the first argument,
> > create a function from that and call it.
> Well, I assume the string is runtime value so there's nothing a macro
> can do. If it is compile time value, then you might as well write the
> code directly instead of storing it in strings...
I assume these expressions come from an external file. We've seen a
similar scenario recently on this list.


Regards

> > 
> > Though if possible creating anonymous functions is clearly a cleaner
> > solution.
> > 
> > 
> > Regards
> > 
> > > 
> > > > 
> > > > 
> > > > 
> > > > Best regards
> > > > 
> > > > On Tuesday, April 26, 2016 at 5:38:21 PM UTC+2, Ali Rezaee wrote:
> > > > > 
> > > > > 
> > > > > 
> > > > > Hi everyone,
> > > > > 
> > > > > I am trying to run the code below. When I try the code outside of a
> > > > > function and in REPL, it runs successfully. However when I run it 
> > > > > using a
> > > > > function it throw an error.
> > > > > Why do I get the error? and how can I solve this problem?
> > > > > 
> > > > > Thanks in advance for your help.
> > > > > 
> > > > > rules = ["(x[1] && x[2])", "(x[3] || x[4])"]; # a list of boolean
> > > > > expressions
> > > > > boolList = [false, true, false, true]; # a boolean vector for every x 
> > > > > in
> > > > > rules
> > > > > 
> > > > > function evaluate(rules, boolList)
> > > > >   x = boolList
> > > > >   result = Array{Bool}(length(rules))
> > > > >   for (i, rule) in enumerate(rules)
> > > > > result[i] = eval(parse(rule))
> > > > >   end
> > > > >   return result
> > > > > end
> > > > > 
> > > > > evaluate(rules, boolList)
> > > > > # ERROR: UndefVarError: x not defined
> > > > > 
> > > > > # but This will work:
> > > > > x = boolList
> > > > > result = Array{Bool}(length(rules))
> > > > > for (i, rule) in enumerate(rules)
> > > > >   result[i] = eval(parse(rule))
> > > > > end
> > > > > 
> > > > > result
> > > > > # 2-element Array{Bool,1}: false true
> > > > > 
> > > > > 


Re: [julia-users] changing the package name

2016-04-26 Thread Milan Bouchet-Valat
Le mardi 26 avril 2016 à 09:17 -0700, Chang Kwon a écrit :
> I was wondering what is the proper way to change the package name
> under development. This is often the case when I submit a package and
> then was recommended to revise the package name. Well, without much
> knowledge of git, I always mess up...
> 
> When I'm ready to develop a package, I generate a package:
> Pkg.generate("TestPkg", "MIT")
> 
> and keep developing. When I'm ready to publish it, I do:
> 
> Pkg.register("TestPkg")
> Pkg.tag("TestPkg")
> Pkg.publish()
> 
> Then, I make a pull request on GitHub.
> 
> Suppose then I need to change the package name to "NewPackageName".
> What do I do now? 
> Things to do might be:
> 
> - change name of the GitHub.com repository
> - change the folder name from .julia/v0.4/TestPkg to
> .julia/v0.4/NewPackageName
> - changing the remote repository address to https://github.com/chkwon/NewPackageName.jl.git
> 
> Then I usually have some complaints and errors from Pkg.update() and
> Pkg.publish() saying that TestPkg does not exist, etc. At this
> moment, I'm messed up with .julia/v0.4 folder and all other
> repositories. What is the proper order of making changes? 
I've followed this process recently, and I don't remember errors like
that. As long as there isn't any mention of TestPkg under .julia, you
should be fine (and grep -R can tell you that).

Could you post the exact errors you get?


Regards


Re: [julia-users] Re: Evaluation of boolean expression fails

2016-04-26 Thread Milan Bouchet-Valat
Le mardi 26 avril 2016 à 12:52 -0400, Yichao Yu a écrit :
> On Tue, Apr 26, 2016 at 12:09 PM, Ali Rezaee  wrote:
> > 
> > 
> > Thanks for your replies.
> > My objective is exactly what the code shows. I have a list of Boolean
> > expressions similar to the examples in the code, and I need to evaluate them
> > one by one based on x values.
> > So writing a macro would be the only solution.
> Just to be clear, a macro can't help here. You need to eval in global
> scope if you want to evaluate arbitrary expressions.
Well, a macro could replace "x" with the name of the first argument,
create a function from that and call it.

Though if possible creating anonymous functions is clearly a cleaner
solution.
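
For instance, something along these lines (a sketch of the anonymous-function
approach; eval() still runs at global scope, but only once per rule):

rules = ["(x[1] && x[2])", "(x[3] || x[4])"]

# turn each rule into an anonymous function of x
compiled = [eval(Expr(:->, :x, parse(rule))) for rule in rules]

evaluate(funcs, boolList) = Bool[f(boolList) for f in funcs]

evaluate(compiled, [false, true, false, true])   # => [false, true]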


Regards

> > 
> > 
> > Best regards
> > 
> > On Tuesday, April 26, 2016 at 5:38:21 PM UTC+2, Ali Rezaee wrote:
> > > 
> > > 
> > > Hi everyone,
> > > 
> > > I am trying to run the code below. When I try the code outside of a
> > > function and in REPL, it runs successfully. However when I run it using a
> > > function it throw an error.
> > > Why do I get the error? and how can I solve this problem?
> > > 
> > > Thanks in advance for your help.
> > > 
> > > rules = ["(x[1] && x[2])", "(x[3] || x[4])"]; # a list of boolean
> > > expressions
> > > boolList = [false, true, false, true]; # a boolean vector for every x in
> > > rules
> > > 
> > > function evaluate(rules, boolList)
> > >   x = boolList
> > >   result = Array{Bool}(length(rules))
> > >   for (i, rule) in enumerate(rules)
> > > result[i] = eval(parse(rule))
> > >   end
> > >   return result
> > > end
> > > 
> > > evaluate(rules, boolList)
> > > # ERROR: UndefVarError: x not defined
> > > 
> > > # but This will work:
> > > x = boolList
> > > result = Array{Bool}(length(rules))
> > > for (i, rule) in enumerate(rules)
> > >   result[i] = eval(parse(rule))
> > > end
> > > 
> > > result
> > > # 2-element Array{Bool,1}: false true
> > > 
> > > 


Re: [julia-users] Why Ubuntu 16.04 version runs only on one CPU?

2016-04-24 Thread Milan Bouchet-Valat
Le dimanche 24 avril 2016 à 09:43 -0700, K leo a écrit :
> Looking through the linear standard functions list in the
> documentation, I think perhaps the only function used is linreg.  I
> don't directly use LAPACK.
> 
Hmm, doesn't sound like the best candidate. I've made a few attempts,
and I couldn't create a vector big enough so that linreg() took more
than one or two seconds. So either you have a lot of RAM, or this isn't
the function which makes your CPU go to 200% for a significant time. 

Did you consider operators when looking for linear algebra functions?
Anyway, the only way to be certain a function exhibits the problem is
to run it in isolation in both Julia versions.


Regards

> > Le dimanche 24 avril 2016 à 08:44 -0700, K leo a écrit : 
> > > It is hard to know how to describe my code.  So I tried to run
> > it 
> > > with the generic version which does use 200% of CPU.  So there
> > seems 
> > > something different with the Ubuntu version of Julia. 
> > OK. Could you at least make a list of linear algebra functions that
> > you 
> > are using? It could be that Ubuntu's OpenBLAS doesn't include
> > optimized 
> > LAPACK functions. 
> > 
> > 
> > Regards 
> > 
> > > I installed under my home folder.  I ran it in the following way 
> > > hoping that the shared libs are not mixed with the system ones. 
> > > 
> > > $ ~/Software/julia-2ac304dfba/bin/julia 
> > >                _ 
> > >    _       _ _(_)_     |  A fresh approach to technical
> > computing 
> > >   (_)     | (_) (_)    |  Documentation: http://docs.julialang.or
> > g 
> > >    _ _   _| |_  __ _   |  Type "?help" for help. 
> > >   | | | | | | |/ _` |  | 
> > >   | | |_| | | | (_| |  |  Version 0.4.5 (2016-03-18 00:58 UTC) 
> > >  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release 
> > > |__/                   |  x86_64-unknown-linux-gnu 
> > > 
> > > 
> > > > Le dimanche 24 avril 2016 à 05:38 -0700, K leo a écrit :  
> > > > >  
> > > > > See below.  
> > > > >  
> > > > > > Le samedi 23 avril 2016 à 18:10 -0700, K leo a écrit :   
> > > > > > > > Le samedi 23 avril 2016 à 04:52 -0700, K leo a
> > écrit :
> > > > > > > > Anyway,
> > > > > > > > the Ubuntu PPA is no longer maintained. The
> > recommended 
> > > > solution is to
> > > > > > > > use generic Linux binaries from the Julia website.
> > > > > > > >   
> > > > > > > 
> > > > > > > On Ubuntu 16.04, Julia 0.4.5 is available on multiverse 
> > > > repository   
> > > > > > > for some reason.  Not sure who maintains it.   
> > > > > > Ah, OK, I thought you were using the PPA package. Anyway, 
> > > > Ubuntu 16.04   
> > > > > > includes the latest OpenBLAS and the Julia package depends
> > on 
> > > > it, so it   
> > > > > > should be good. That said, maybe that OpenBLAS library
> > doesn't 
> > > > use   
> > > > > > threading.   
> > > > > >  
> > > > > > Can you confirm that running
> > > > > > x=rand(1,1);   
> > > > > > x*x   
> > > > > > does not make Julia use more than 100% CPU?   
> > > > > It actually uses close to 200% of CPU.    
> > > > OK, so no problem in this area.  
> > > > 
> > > > I think you'll have to give more details about the code you 
> > > > mentioned  
> > > > in your first post for us to be able to help. Also, please
> > confirm 
> > > > that  
> > > > you see the same problem (i.e. code using only one core) with
> > the  
> > > > generic Linux binaries.  
> > > > 
> > > > 
> > > > Regards  
> > > > 
> > > > > > If not, could you post the output of versioninfo()?
> > Finally, 
> > > > please run   
> > > > > > 'ls -l /usr/lib/julia/', 'ls -l /usr/lib64/julia' from a
> > shell 
> > > > and copy   
> > > > > > the result.   
> > > > >  julia> versioninfo()  
> > > > > Julia Version 0.4.5  
> > > > > Commit 2ac304d (2016-03-18 00:58 UTC)  
> > > > > Platform Info:  
> > > > >   System: Linux (x86_64-linux-gnu)  
> > > > >   CPU: Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz  
> > > > >   WORD_SIZE: 64  
> > > > >   BLAS: libopenblas (NO_LAPACKE DYNAMIC_ARCH NO_AFFINITY 
> > > > Haswell)  
> > > > >   LAPACK: libopenblas  
> > > > >   LIBM: libopenlibm  
> > > > >   LLVM: libLLVM-3.8  
> > > > >  
> > > > > $ ls -l /usr/lib/julia/  
> > > > > ls: cannot access '/usr/lib/julia/': No such file or
> > directory  
> > > > >  
> > > > > $ ls -l /usr/lib/x86_64-linux-gnu/julia  
> > > > > total 26440  
> > > > > lrwxrwxrwx 1 root root       20 Apr 18 17:45 libarpack.so -> 
> > > > ../../libarpack.so.2  
> > > > > -rw-r--r-- 1 root root    15608 Apr 18 17:45
> > libccalltest.so  
> > > > > lrwxrwxrwx 1 root root       22 Apr 18 17:45 libcholmod.so
> > -> 
> > > > ../libcholmod.so.3.0.6  
> > > > > lrwxrwxrwx 1 root root       22 Apr 18 17:45 libdSFMT.so -> 
> > > > ../libdSFMT-19937.so.1  
> > > > > lrwxrwxrwx 1 root root       25 Apr 18 17:45
> > libfftw3f_threads.so 
> > > > -> ../libfftw3f_threads.so.3  
> > > > > lrwxrwxrwx 1 root root       24 Apr 18 17:45
> > libfftw3_threads.so 
> > > > -> 

Re: [julia-users] Why Ubuntu 16.04 version runs only on one CPU?

2016-04-24 Thread Milan Bouchet-Valat
Le dimanche 24 avril 2016 à 08:44 -0700, K leo a écrit :
> It is hard to know how to describe my code.  So I tried to run it
> with the generic version which does use 200% of CPU.  So there seems
> something different with the Ubuntu version of Julia.
OK. Could you at least make a list of linear algebra functions that you
are using? It could be that Ubuntu's OpenBLAS doesn't include optimized
LAPACK functions.


Regards

> I installed under my home folder.  I ran it in the following way
> hoping that the shared libs are not mixed with the system ones.
> 
> $ ~/Software/julia-2ac304dfba/bin/julia
>                _
>    _       _ _(_)_     |  A fresh approach to technical computing
>   (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
>    _ _   _| |_  __ _   |  Type "?help" for help.
>   | | | | | | |/ _` |  |
>   | | |_| | | | (_| |  |  Version 0.4.5 (2016-03-18 00:58 UTC)
>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
> |__/                   |  x86_64-unknown-linux-gnu
> 
> 
> > Le dimanche 24 avril 2016 à 05:38 -0700, K leo a écrit : 
> > > 
> > > See below. 
> > > 
> > > > Le samedi 23 avril 2016 à 18:10 -0700, K leo a écrit :  
> > > > > > Le samedi 23 avril 2016 à 04:52 -0700, K leo a écrit :   
> > > > > > Anyway,   
> > > > > > the Ubuntu PPA is no longer maintained. The recommended
> > solution is to   
> > > > > > use generic Linux binaries from the Julia website.   
> > > > > >  
> > > > >    
> > > > > On Ubuntu 16.04, Julia 0.4.5 is available on multiverse
> > repository  
> > > > > for some reason.  Not sure who maintains it.  
> > > > Ah, OK, I thought you were using the PPA package. Anyway,
> > Ubuntu 16.04  
> > > > includes the latest OpenBLAS and the Julia package depends on
> > it, so it  
> > > > should be good. That said, maybe that OpenBLAS library doesn't
> > use  
> > > > threading.  
> > > > 
> > > > Can you confirm that running   
> > > > x=rand(1,1);  
> > > > x*x  
> > > > does not make Julia use more than 100% CPU?  
> > > It actually uses close to 200% of CPU.   
> > OK, so no problem in this area. 
> > 
> > I think you'll have to give more details about the code you
> > mentioned 
> > in your first post for us to be able to help. Also, please confirm
> > that 
> > you see the same problem (i.e. code using only one core) with the 
> > generic Linux binaries. 
> > 
> > 
> > Regards 
> > 
> > > > If not, could you post the output of versioninfo()? Finally,
> > please run  
> > > > 'ls -l /usr/lib/julia/', 'ls -l /usr/lib64/julia' from a shell
> > and copy  
> > > > the result.  
> > >  julia> versioninfo() 
> > > Julia Version 0.4.5 
> > > Commit 2ac304d (2016-03-18 00:58 UTC) 
> > > Platform Info: 
> > >   System: Linux (x86_64-linux-gnu) 
> > >   CPU: Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz 
> > >   WORD_SIZE: 64 
> > >   BLAS: libopenblas (NO_LAPACKE DYNAMIC_ARCH NO_AFFINITY
> > Haswell) 
> > >   LAPACK: libopenblas 
> > >   LIBM: libopenlibm 
> > >   LLVM: libLLVM-3.8 
> > > 
> > > $ ls -l /usr/lib/julia/ 
> > > ls: cannot access '/usr/lib/julia/': No such file or directory 
> > > 
> > > $ ls -l /usr/lib/x86_64-linux-gnu/julia 
> > > total 26440 
> > > lrwxrwxrwx 1 root root       20 Apr 18 17:45 libarpack.so ->
> > ../../libarpack.so.2 
> > > -rw-r--r-- 1 root root    15608 Apr 18 17:45 libccalltest.so 
> > > lrwxrwxrwx 1 root root       22 Apr 18 17:45 libcholmod.so ->
> > ../libcholmod.so.3.0.6 
> > > lrwxrwxrwx 1 root root       22 Apr 18 17:45 libdSFMT.so ->
> > ../libdSFMT-19937.so.1 
> > > lrwxrwxrwx 1 root root       25 Apr 18 17:45 libfftw3f_threads.so
> > -> ../libfftw3f_threads.so.3 
> > > lrwxrwxrwx 1 root root       24 Apr 18 17:45 libfftw3_threads.so
> > -> ../libfftw3_threads.so.3 
> > > lrwxrwxrwx 1 root root       15 Apr 18 17:45 libgmp.so ->
> > ../libgmp.so.10 
> > > -rw-r--r-- 1 root root  1175608 Apr 18 17:45 libjulia.so 
> > > lrwxrwxrwx 1 root root       15 Apr 18 17:45 libmpfr.so ->
> > ../libmpfr.so.4 
> > > lrwxrwxrwx 1 root root       22 Apr 18 17:45 libopenblas.so ->
> > ../../libopenblas.so.0 
> > > lrwxrwxrwx 1 root root       19 Apr 18 17:45 libopenlibm.so ->
> > ../libopenlibm.so.2 
> > > lrwxrwxrwx 1 root root       22 Apr 18 17:45 libopenspecfun.so ->
> > ../libopenspecfun.so.1 
> > > lrwxrwxrwx 1 root root       18 Apr 18 17:45 libpcre2-8.so ->
> > ../libpcre2-8.so.0 
> > > -rw-r--r-- 1 root root   203048 Apr 18 17:45 libRmath-julia.so 
> > > lrwxrwxrwx 1 root root       19 Apr 18 17:45 libspqr.so ->
> > ../libspqr.so.2.0.2 
> > > lrwxrwxrwx 1 root root       32 Apr 18 17:45
> > libsuitesparseconfig.so -> ../libsuitesparseconfig.so.4.4.6 
> > > -rw-r--r-- 1 root root     6000 Apr 18 17:45
> > libsuitesparse_wrapper.so 
> > > lrwxrwxrwx 1 root root       22 Apr 18 17:45 libumfpack.so ->
> > ../libumfpack.so.5.7.1 
> > > -rw-r--r-- 1 root root 25664120 Apr 18 17:45 sys.so 
> > > 
> > > $ ls -l /usr/lib64/julia 
> > > ls: cannot access '/usr/lib64/julia': No such file or directory 
> > > 
> > > > 

Re: [julia-users] Re: implications of using the precompiled "generic linux binary" vs building julia myself?

2016-04-24 Thread Milan Bouchet-Valat
Le dimanche 24 avril 2016 à 05:26 -0700, K leo a écrit :
> I tried that and it seems when I have other versions (the PPA version
> for instance) of Julia installed in the system, the shared libs were
> mixed up.  When I removed the PPA version, it complained some shared
> libs were not found.
Please copy all the error messages you got then.


Regards

> > Le samedi 23 avril 2016 à 18:23 -0700, K leo a écrit : 
> > > I also would like to know about what to do with the lib folder.
> >  Can 
> > > someone explain?  There is no README with Linux generic version. 
> > You just need to extract the whole contents of the archive
> > somewhere, 
> > and run bin/julia. No need to look at the other files, but they
> > need to 
> > be present. 
> > 
> > 
> > Regards 
> > 
> > > > What did you do with the generic build? 
> > > > 
> > > > I can take the bin/julia and run it, but I am confused as to
> > what 
> > > > to do with the rest of the folders, especially all the files
> > under 
> > > > lib/julia . Do I have to copy it somewhere? The other folders
> > seem 
> > > > easy to get, such as man being man pages, etc ... Thanks! 
> > > > 
> > > > 


Re: [julia-users] Why Ubuntu 16.04 version runs only on one CPU?

2016-04-24 Thread Milan Bouchet-Valat
Le dimanche 24 avril 2016 à 05:38 -0700, K leo a écrit :
> 
> See below.
> 
> > Le samedi 23 avril 2016 à 18:10 -0700, K leo a écrit : 
> > > > Le samedi 23 avril 2016 à 04:52 -0700, K leo a écrit :  
> > > > Anyway,  
> > > > the Ubuntu PPA is no longer maintained. The recommended solution is to  
> > > > use generic Linux binaries from the Julia website.  
> > > > 
> > >   
> > > On Ubuntu 16.04, Julia 0.4.5 is available on multiverse repository 
> > > for some reason.  Not sure who maintains it. 
> > Ah, OK, I thought you were using the PPA package. Anyway, Ubuntu 16.04 
> > includes the latest OpenBLAS and the Julia package depends on it, so it 
> > should be good. That said, maybe that OpenBLAS library doesn't use 
> > threading. 
> > 
> > Can you confirm that running  
> > x=rand(1,1); 
> > x*x 
> > does not make Julia use more than 100% CPU? 
> It actually uses close to 200% of CPU.  
OK, so no problem in this area.

I think you'll have to give more details about the code you mentioned
in your first post for us to be able to help. Also, please confirm that
you see the same problem (i.e. code using only one core) with the
generic Linux binaries.


Regards

> > If not, could you post the output of versioninfo()? Finally, please run 
> > 'ls -l /usr/lib/julia/', 'ls -l /usr/lib64/julia' from a shell and copy 
> > the result. 
>  julia> versioninfo()
> Julia Version 0.4.5
> Commit 2ac304d (2016-03-18 00:58 UTC)
> Platform Info:
>   System: Linux (x86_64-linux-gnu)
>   CPU: Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz
>   WORD_SIZE: 64
>   BLAS: libopenblas (NO_LAPACKE DYNAMIC_ARCH NO_AFFINITY Haswell)
>   LAPACK: libopenblas
>   LIBM: libopenlibm
>   LLVM: libLLVM-3.8
> 
> $ ls -l /usr/lib/julia/
> ls: cannot access '/usr/lib/julia/': No such file or directory
> 
> $ ls -l /usr/lib/x86_64-linux-gnu/julia
> total 26440
> lrwxrwxrwx 1 root root       20 Apr 18 17:45 libarpack.so -> 
> ../../libarpack.so.2
> -rw-r--r-- 1 root root    15608 Apr 18 17:45 libccalltest.so
> lrwxrwxrwx 1 root root       22 Apr 18 17:45 libcholmod.so -> 
> ../libcholmod.so.3.0.6
> lrwxrwxrwx 1 root root       22 Apr 18 17:45 libdSFMT.so -> 
> ../libdSFMT-19937.so.1
> lrwxrwxrwx 1 root root       25 Apr 18 17:45 libfftw3f_threads.so -> 
> ../libfftw3f_threads.so.3
> lrwxrwxrwx 1 root root       24 Apr 18 17:45 libfftw3_threads.so -> 
> ../libfftw3_threads.so.3
> lrwxrwxrwx 1 root root       15 Apr 18 17:45 libgmp.so -> ../libgmp.so.10
> -rw-r--r-- 1 root root  1175608 Apr 18 17:45 libjulia.so
> lrwxrwxrwx 1 root root       15 Apr 18 17:45 libmpfr.so -> ../libmpfr.so.4
> lrwxrwxrwx 1 root root       22 Apr 18 17:45 libopenblas.so -> 
> ../../libopenblas.so.0
> lrwxrwxrwx 1 root root       19 Apr 18 17:45 libopenlibm.so -> 
> ../libopenlibm.so.2
> lrwxrwxrwx 1 root root       22 Apr 18 17:45 libopenspecfun.so -> 
> ../libopenspecfun.so.1
> lrwxrwxrwx 1 root root       18 Apr 18 17:45 libpcre2-8.so -> 
> ../libpcre2-8.so.0
> -rw-r--r-- 1 root root   203048 Apr 18 17:45 libRmath-julia.so
> lrwxrwxrwx 1 root root       19 Apr 18 17:45 libspqr.so -> ../libspqr.so.2.0.2
> lrwxrwxrwx 1 root root       32 Apr 18 17:45 libsuitesparseconfig.so -> 
> ../libsuitesparseconfig.so.4.4.6
> -rw-r--r-- 1 root root     6000 Apr 18 17:45 libsuitesparse_wrapper.so
> lrwxrwxrwx 1 root root       22 Apr 18 17:45 libumfpack.so -> 
> ../libumfpack.so.5.7.1
> -rw-r--r-- 1 root root 25664120 Apr 18 17:45 sys.so
> 
> $ ls -l /usr/lib64/julia
> ls: cannot access '/usr/lib64/julia': No such file or directory
> 
> > 
> > I'm Ccing Graham Inggs, who maintains the Debian package on which the 
> > Ubuntu package is based. 
> > 
> > 
> > Regards 


Re: [julia-users] Re: implications of using the precompiled "generic linux binary" vs building julia myself?

2016-04-24 Thread Milan Bouchet-Valat
Le samedi 23 avril 2016 à 18:23 -0700, K leo a écrit :
> I also would like to know about what to do with the lib folder.  Can
> someone explain?  There is no README with Linux generic version.
You just need to extract the whole contents of the archive somewhere,
and run bin/julia. No need to look at the other files, but they need to
be present.


Regards

> > What did you do with the generic build?
> > 
> > I can take the bin/julia and run it, but I am confused as to what
> > to do with the rest of the folders, especially all the files under
> > lib/julia . Do I have to copy it somewhere? The other folders seem
> > easy to get, such as man being man pages, etc ... Thanks!
> > 
> > 


Re: [julia-users] Why Ubuntu 16.04 version runs only on one CPU?

2016-04-24 Thread Milan Bouchet-Valat
Le samedi 23 avril 2016 à 18:10 -0700, K leo a écrit :
> > Le samedi 23 avril 2016 à 04:52 -0700, K leo a écrit : 
> > Anyway, 
> > the Ubuntu PPA is no longer maintained. The recommended solution is to 
> > use generic Linux binaries from the Julia website. 
> > 
>  
> On Ubuntu 16.04, Julia 0.4.5 is available on multiverse repository
> for some reason.  Not sure who maintains it.
Ah, OK, I thought you were using the PPA package. Anyway, Ubuntu 16.04
includes the latest OpenBLAS and the Julia package depends on it, so it
should be good. That said, maybe that OpenBLAS library doesn't use
threading.

Can you confirm that running 
x=rand(1,1);
x*x
does not make Julia use more than 100% CPU?

If not, could you post the output of versioninfo()? Finally, please run
'ls -l /usr/lib/julia/', 'ls -l /usr/lib64/julia' from a shell and copy
the result.

I'm Ccing Graham Inggs, who maintains the Debian package on which the
Ubuntu package is based.


Regards


Re: [julia-users] Why Ubuntu 16.04 version runs only on one CPU?

2016-04-23 Thread Milan Bouchet-Valat
Le samedi 23 avril 2016 à 04:52 -0700, K leo a écrit :
> I did some timing measures running the same code.  On Ubuntu 15.10
> where Julia uses 200% of CPU, the code runs in 2500 seconds.  On
> Ubuntu 16.04 where Julia takes 100% CPU, the code runs in 2700
> seconds.  I don't know what causes Julia to use less CPU on 16.04,
> whether being the Linux version or the Julia version.  So given the
> little difference in time, perhaps the conclusion is Ubuntu 16.04 is
> more efficient using hardware resources.  Any comments?
This is probably due to a newer OpenBLAS being more efficient. Anyway,
the Ubuntu PPA is no longer maintained. The recommended solution is to
use generic Linux binaries from the Julia website.


Regards

> > 
> > 
> > > On Thu, Apr 21, 2016 at 10:39 AM, K leo 
> > > wrote: 
> > > > Prior to running Ubuntu 16.04, I get Julia from the PPA, and
> > > run it simply 
> > > > like: 
> > > > 
> > > >> julia 
> > > > 
> > > > Then when I run julia code, "top" shows CPU usage of Julia as
> > > something like 
> > > > 200% (I have two cores). 
> > > 
> > > What code are you running. Any pre-build version should run julia
> > > code 
> > > only on one thread. Any other thread are created by libraries
> > > like 
> > > fftw or openblas. 
> >  
> > I run my own code which does not directly require fftw or openblas.
> >  The same code runs on the version on Ubuntu 15.10, it uses 200%
> > CPU, but on the version on 16.04, it only uses 100%.  Note the
> > versions of Julia on the two systems are likely from different
> > builds.
> > 
> > > 
> > > > 
> > > > Now on 16.04, julia only runs upto 100% of CPU.  The version of
> > > julia is 
> > > > said to maintained by "Ubuntu Developers 
> > > > " 
> > > > 
> > > > What is happening? 


Re: [julia-users] Re: Function check without @assert

2016-04-22 Thread Milan Bouchet-Valat
Le vendredi 22 avril 2016 à 09:06 -0700, Kristoffer Carlsson a écrit :
> It doesn't. You need a quite new master for them. 
Actually, these macros don't even exist on git master. No PR has been
merged at this stage, and it's not clear what's going to be decided.


Regards


Re: [julia-users] Inferrable way of getting element types from tuple of vectors

2016-04-22 Thread Milan Bouchet-Valat
Le vendredi 22 avril 2016 à 09:15 -0400, Yichao Yu a écrit :
> On Fri, Apr 22, 2016 at 9:02 AM, Milan Bouchet-Valat  wrote:
> > 
> > Hi! Yet more explorations regarding inference and varargs/tuple
> > arguments.
> > 
> > I have a function taking a tuple of vectors, and I need a way to
> > extract the element types of each of them, in order to create a Dict.
> > The following code works fine, but inference is not able to compute T
> > at compile time:
> > function f(x::Tuple)
> > T = Tuple{map(eltype, x)...}
> > Dict{T,Int}()
> > end
> > 
> > @code_warntype f(([1,2], [1.,2.],))
> > 
> > The same happens with varargs, i.e. f(x...).
> > 
> > Is there any solution to this (other than generated functions)? Am I
> > missing an existing bug report again?
> This is the case `@pure` is replacing `@generated` on 0.5-dev (More
> flexible type computation without non-trivial code generation)
> 
> ```
> julia> Base.@pure f(x) = Tuple{map(eltype, x.parameters)...}
> f (generic function with 1 method)
> 
> julia> function g(x::Tuple)
>    Dict{f(typeof(x)),Int}()
>    end
> g (generic function with 1 method)
> 
> julia> @code_warntype g(([1, 2], [1., 2.]))
> Variables:
>   #self#::#g
>   x::Tuple{Array{Int64,1},Array{Float64,1}}
> 
> Body:
>   begin  # REPL[2], line 2:
>   return 
> (Dict{Tuple{Int64,Float64},Int64})()::Dict{Tuple{Int64,Float64},Int64}
>   end::Dict{Tuple{Int64,Float64},Int64}
> ```
Great, thanks! I could have thought about this myself. Should I file an
issue about achieving this optimization automatically, though?


While I'm at it, I'm having problems with the next step:
function f(x::Tuple)
for (i, el) in enumerate(zip(x...))
@show i, el
end
end

@code_warntype f((1:2,))

Where does Union{Int64,Tuple{Int64}} come from? I've fixed zip() some
time ago to ensure it always returns a tuple...


Regards



[julia-users] Inferrable way of getting element types from tuple of vectors

2016-04-22 Thread Milan Bouchet-Valat
Hi! Yet more explorations regarding inference and varargs/tuple
arguments.

I have a function taking a tuple of vectors, and I need a way to
extract the element types of each of them, in order to create a Dict.
The following code works fine, but inference is not able to compute T
at compile time:
function f(x::Tuple)
T = Tuple{map(eltype, x)...}
Dict{T,Int}()
end

@code_warntype f(([1,2], [1.,2.],))

The same happens with varargs, i.e. f(x...).

Is there any solution to this (other than generated functions)? Am I
missing an existing bug report again?

Thanks!


Re: [julia-users] Function check without @assert

2016-04-22 Thread Milan Bouchet-Valat
Le jeudi 21 avril 2016 à 22:11 -0700, Robert DJ a écrit :
> I've become fond of using the @assert macro for checking input to
> functions, e.g.
> 
> @assert size(A) == size(B)
> @assert x > 0
> 
> Now I've read discussions like https://github.com/JuliaLang/julia/iss
> ues/10614 where I see that this not recommended and that @assert is
> slow.
> What is then the "right" Julian way that isn't slowing things down?
> Something like
> 
> size(A) != size(B) && throw(ArgumentError())
> x <= 0 && throw(DomainError())
This is the recommended solution, which is not faster but has the
advantage of raising specific exceptions which the caller can check.
Though currently you need to write the error message by hand, which in
the two examples you showed could be done automatically (hence the idea
of @require).
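
In practice that looks something like this (sketch):

function foo(A, B, x)
    size(A) == size(B) || throw(ArgumentError("A and B must have the same size"))
    x > 0 || throw(DomainError())
    # ... actual work ...
end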

OTOH, @assert should be reserved for actual bugs which are never
supposed to happen (e.g. inconsistent internal state), not for
incorrect arguments.


Regards


Re: [julia-users] Re: performance of two different array allocations

2016-04-22 Thread Milan Bouchet-Valat
Le jeudi 21 avril 2016 à 13:24 -0700, Andrew a écrit :
> I don't get it, but it doesn't like something about concatenating the
> number and the array. If you convert the 1 to an array first, it
> works. I thought it was because 1 is an integer and you're joining it
> with an array of float 0's, but using 1. doesn't work either.
> 
> function version1(N)
>          b = [[1]; zeros(N-1)]
>          println(typeof(b))
>          for k = 1:N
>            for j = 1:N
>              b[j] += k
>            end
>          end
>        end
> 
> This has no type issues.
This looks like
https://github.com/JuliaLang/julia/issues/13665


Regards


> > Thanks. I figured it was something along these lines. I'd forgotten
> > about the "@code_warntype" macro
> >  
> > > It looks like the type inference is failing in version1. If you
> > > do "@code_warntype version1(1000)", it shows that it is inferring
> > > the type of b as Any.
> > > 
> > > > In a class I'm teaching the students are using Julia and I
> > > > couldn't for the life of me figure out why one of my students
> > > > codes was allocating a lot of memory.
> > > > 
> > > > I finally paired it down the following example that I don't
> > > > understand:
> > > > 
> > > > function version1(N)
> > > >   b = [1;zeros(N-1)]
> > > >   println(typeof(b))
> > > >   for k = 1:N
> > > >     for j = 1:N
> > > >       b[j] += k
> > > >     end
> > > >   end
> > > > end
> > > > 
> > > > 
> > > > function version2(N)
> > > >   b = zeros(N)
> > > >   b[1] = 1
> > > >   println(typeof(b))
> > > >   for k = 1:N
> > > >     for j = 1:N
> > > >       b[j] += k
> > > >     end
> > > >   end
> > > > end
> > > > 
> > > > N = 1000
> > > > println("compiling..")
> > > > @time version1(N)
> > > > version2(N)
> > > > println()
> > > > println()
> > > > 
> > > > println("Version 1")
> > > > @time version1(N)
> > > > println()
> > > > 
> > > > println("Version 2")
> > > > @time version2(N)
> > > > 
> > > > The output of this (without the compiling output) in v0.4.5 is:
> > > > 
> > > > Version 1
> > > > Array{Float64,1}
> > > >   0.092473 seconds (3.47 M allocations: 52.920 MB, 3.24% gc
> > > > time)
> > > > 
> > > > Version 2
> > > > Array{Float64,1}
> > > >   0.001195 seconds (27 allocations: 8.828 KB)
> > > > 
> > > > Both version produce the same type for Array b, but in version1
> > > > every time through the loop allocation happens and in the
> > > > version2 the only allocation is of the initial array.
> > > > 
> > > > I've not run into this one before (because I would never do
> > > > version1), but as all of us that teach know students will
> > > > always surprise you with their approaches.
> > > > 
> > > > Any help understanding what's going on would be appreciated.
> > > > 


Re: [julia-users] promotion vs. specialization

2016-04-21 Thread Milan Bouchet-Valat
Le jeudi 21 avril 2016 à 11:00 -0400, Yichao Yu a écrit :
> On Thu, Apr 21, 2016 at 10:55 AM, Stefan Karpinski  wrote:
> > 
> > This is probably more of a julia-dev topic, but my gut reaction is that the
> > combination of multiple dispatch and implicit conversion would be chaos.
> > Following method calls can be tricky enough (much easier with Gallium,
> > however) with just dispatch in the mix. With implicit conversion too, it
> > seems like it would be nearly impossible to know what might or might not be
> > called. I think it would be too easy to accidentally invoke a method that
> > wasn't intended.
> I think the proposal was to add an automatic conversion on top of the
> dispatch. so
> 
> f(a::Integer as Int) = ... will be effectively translated to
> f(_a::Integer) = (a = convert(Int, _a)::Int; ...)
For reference, this recently came up in a PR regarding 'as':
https://github.com/JuliaLang/julia/pull/15818#issuecomment-207922230


Regards


> > On Thu, Apr 21, 2016 at 9:05 AM, Didier Verna 
> > wrote:
> > > 
> > > 
> > > 
> > >   This is just an idea from the top of my head, probably wild and maybe
> > >   silly. I haven't given it any serious thought.
> > > 
> > > Given the existence of the general promotion system (which I like a lot,
> > > along with other things in Julia, such as the functor capabilities), I'm
> > > wondering about automatic specialization.
> > > 
> > > What I mean is this: suppose you have a type Foo which can be converted
> > > to an Int. Suppose as well that you have a function bar that only works
> > > on Ints. You cannot currently call bar with a Foo, but since Foo is
> > > convertible to an Int, it could make sense that bar() suddenly becomes
> > > an applicable method, with implicit conversion...
> > > 
> > > --
> > > ELS'16 registration open! http://www.european-lisp-symposium.org
> > > 
> > > Lisp, Jazz, Aïkido: http://www.didierverna.info
> > 


Re: [julia-users] Creating symbols

2016-04-21 Thread Milan Bouchet-Valat
Le jeudi 21 avril 2016 à 03:40 -0700, Jeffrey Sarnoff a écrit :
> I understand making a pull request.  I don't know how to set up code
> to deprecate something in that context.
That's pretty easy. Have a look at the comments at the top of
base/deprecated.jl.
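
The typical pattern is a one-line entry based on the @deprecate macro, e.g.
something like this for symbol (sketch, exact signatures to be checked):

# in base/deprecated.jl
@deprecate symbol(s::AbstractString) Symbol(s)
@deprecate symbol(args...) Symbol(args...)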


Regards

> > I meant AST analysis not sed (simple 6am typo).
> > 
> > > I am advocating for gathering the known misfits and fitting them
> > > aright yesterday or today (maybe you are reading this tomorrow).
> > > 
> > > The "massively breakingness" of symbol/Symbol or any others does
> > > not phase me, In part, the clean workings of Julia's deprecation
> > > mechanism does smooth the change-over and allows catch-up-ing for
> > >  packages in use with v0.5-tobe.  And it seems quite possible for
> > > someone who knows how to write something that, for simple
> > > spelling substitutions at least, catches the deprecation notice
> > > and offers to sed? the change -- it seems so, anyway.  If I were
> > > loading a package and were asked "Would you like to replace the
> > > deprecated uses of 'symbol' as a function with 'Symbol', which is
> > > current best practice?" I would respond 'y'.  (and a better
> > > question would be "Do you want to replace deprecated symbols,
> > > spellings, and uses with current best practice wherever that can
> > > be done safely?")
> > > 
> > > 
> > > > I am all for changing this, but in the specific case of
> > > > symbol/Symbol this is going to be massively breaking, and even
> > > > if the fix is pretty simple (applying s/symbol\(/Symbol\(/
> > > > probably fixes 99% of the code) the timing needs to be right.
> > > > What other cases are there of such functions that should be
> > > > merged into constructors? Is there an issue to track them?
> > > > Also, when is a good time to introduce such a change?  Should
> > > > they be tackled at the same time, or at a different one?
> > > > I'm willing to pitch in some time to help here, if the timing
> > > > works out.
> > > > // T
> > > > 


Re: [julia-users] Creating symbols

2016-04-21 Thread Milan Bouchet-Valat
Le mercredi 20 avril 2016 à 22:57 -0700, Tomas Lycken a écrit :
> I am all for changing this, but in the specific case of symbol/Symbol
> this is going to be massively breaking, and even if the fix is pretty
> simple (applying s/symbol\(/Symbol\(/ probably fixes 99% of the code)
> the timing needs to be right. 
Well, this wouldn't be breaking, only very annoying (lots of
deprecation warnings). But we've done this in the past with
int -> Int or Uint -> UInt, which were arguably even more common.

> What other cases are there of such functions that should be merged
> into constructors? Is there an issue to track them? 
>
> Also, when is a good time to introduce such a change?  Should they be
> tackled at the same time, or at a different one? 
I'm not sure there remain other cases like this, but these can be done
one at a time without too much harm. Anyway the affected parts of the
code are likely to be distinct.

There's been some discussion as regards lower-case functions creating
iterators, but I think for now the conclusion is that we want to keep
them:
https://github.com/JuliaLang/julia/issues/10162#issuecomment-74304109

> I'm willing to pitch in some time to help here, if the timing works
> out.
I would say the timing is fine as people have not yet ported all of
their packages to 0.5, so why not make a pull request to deprecate
symbol()?


Regards


Re: [julia-users] Re: Ambiguous method definitions

2016-04-20 Thread Milan Bouchet-Valat
Le mercredi 20 avril 2016 à 08:50 -0700, Robert Gates a écrit :
> I wonder if this should be an issue in julia itself. Perhaps it would
> be good to require at least one argument?
There has been some discussion about syntax to specify a minimum and
maximum number of arguments. But nothing has been decided yet. See
https://github.com/JuliaLang/julia/pull/10691#issuecomment-88509128


Regards

> > Yes I would say this is dangerous.  Assuming there must be at least
> > one input, the signature should probably be:
> > 
> > > f(firstval::T, rest::T...) = <...>
> > though it certainly doesn't look as pretty. 
> > 
> > On Wed, Apr 20, 2016 at 10:54 AM, Robert Gates  > > wrote:
> > > Okay, perfect, that answered my question! I thought that at least
> > > one of the vararg arguments is mandatory. A philosophical
> > > thought: isn't this use-case kind of dangerous when overloading
> > > Base functions in packages? In my case, MultiPoly and Lazy both
> > > overload Base.+ with a varargs function (like the one above).
> > > 
> > > 
> > > > I was wondering how this can happen:
> > > > 
> > > > julia> type T1; end
> > > > 
> > > > julia> type T2; end
> > > > 
> > > > julia> f(a::T1...) = ()
> > > > f (generic function with 1 method)
> > > > 
> > > > julia> f(a::T2...) = ()
> > > > WARNING: New definition 
> > > >     f(Main.T2...) at none:1
> > > > is ambiguous with: 
> > > >     f(Main.T1...) at none:1.
> > > > To fix, define 
> > > >     f()
> > > > before the new definition.
> > > > f (generic function with 2 methods)
> > > > 
> > > > This is a fresh julia 0.4.5 session. I actually encountered
> > > > this when using two packages and then built this MWE.
> > > > 
> > > > 
> > 


Re: [julia-users] Re: macros design

2016-04-20 Thread Milan Bouchet-Valat
Le mercredi 20 avril 2016 à 16:22 +0100, Didier Verna a écrit :
> Milan Bouchet-Valat <nalimi...@club.fr> wrote:
> 
> > 
> > OTOH, short-circuit operators are in limited number (&& and ||).
> > Package authors cannot create new ones without the user knowing
>   Do you mean it's possible to create new short-circuit operators ?
No, precisely that it's not possible, so we don't need a special syntax
like @.

Regards

> > 
> > Yes. For example, DataFrames.jl and DataFramesMeta.jl provide
> > functions like where(), by() and aggregate() both in function and
> > macro forms.  The former takes a function, while the latter offers
> > a
> > convenience syntax which creates a function under the hood.
> > 
> > See in particular this section:
> > https://github.com/JuliaStats/DataFramesMeta.jl#operations-on-groupeddataframes
>   Thanks.
> 
> 
> > 
> > I don't think that's possible, as the short-circuit behavior of &&
> > means it does not evaluate its second operand if the first one is
> > false. So it cannot be a standard function.
>   Sure. It would have to be built-in.
> 


Re: [julia-users] Re: macros design

2016-04-20 Thread Milan Bouchet-Valat
Le mercredi 20 avril 2016 à 15:34 +0100, Didier Verna a écrit :
> Matt Bauman  wrote:
> 
> > 
> > It's nice for both humans (it's obvious that there could be some
> > non-standard evaluation semantics or other such funniness)
>   Maybe for /some/ humans ;-), but I don't like this. It exposes
>   implementation details to the programmer. It breaks the functor
>   syntactic uniformity (consider that a single expression could
>   otherwise have different semantics in different contexts which gives a
>   lot of expressiveness) and makes Julia much less convenient for DSLs
>   design for instance. Also, it's not coherent with the rest of the
>   language. Short-circuit operators[1] and some built-in constructs also
>   have non-standard evaluation semantics, but they don't have a funny
>   calling syntax.
The fact that you're calling a macro and not a function is certainly
not an implementation detail. The @ is here to warn you that the call
might have any kind of side-effects, or at least does not behave like
functions.

OTOH, short-circuit operators are in limited number (&& and ||).
Package authors cannot create new ones without the user knowing, so
the potential for confusion is much lower.

> > and the parser (it knows exactly which symbols it needs to resolve in
> > order to expand the macros since nothing else can contain the @
> > character).  
>   Making life easier to the internals should never be a valid argument
>   to corrupt the externals :-) Besides, I don't see how it would be
>   difficult to figure out if a symbol refers to a macro, or to a regular
>   function. In fact, Julia already does something similar, according to
>   whether a function or a functor is called. Well, I guess that macros
>   are not first-class enough...
> 
> > 
> > Are you talking about the optional parentheses?  That is: `@m(x,y)` vs
> > `@m x y`?
>   Yes, sorry.
> 
> > 
> >  It's very convenient to have the parentheses-less version for macros
> > like @inline and @simd which annotate or transform existing
> > structures.  And parentheses with comma-separated arguments can be
> > nice in cases where the distinction between several arguments might
> > get fuzzy otherwise.
>   OK. Not much convinced here either. Not sure the convenience was worth
>   the syntactic trouble it causes in the rest of the language
>   (e.g. deprecating the foo() syntax).
> 
> 
>   One final remark: a corollary of this specific calling syntax is that
>   eponymous functions and macros may co-exist (they seem to live in
>   separate namespaces). Anyone ever saw a use for this?
Yes. For example, DataFrames.jl and DataFramesMeta.jl provide functions
like where(), by() and aggregate() both in function and macro forms.
The former takes a function, while the latter offers a convenience
syntax which creates a function under the hood.

See in particular this section:
https://github.com/JuliaStats/DataFramesMeta.jl#operations-on-groupeddataframes
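As a generic illustration of such coexistence (a toy sketch, not the DataFramesMeta API): a function and a macro may share a name because macros live in their own namespace.

twice(f, x) = f(x) + f(x)          # function form: takes a function

macro twice(ex)                    # macro form: takes an expression
    :( $(esc(ex)) + $(esc(ex)) )
end

twice(sin, 1.0)     # calls the function
@twice sin(1.0)     # expands to sin(1.0) + sin(1.0)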

> Footnotes: 
> [1]  BTW, is there a functional equivalent to && and friends? I mean, a
> logical AND short-circuit function usable in prefix notation? I was
> surprised that I can do +(1,2) but not &&(true,false).
I don't think that's possible, as the short-circuit behavior of &&
means it does not evaluate its second operand if the first one is
false. So it cannot be a standard function.
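The closest equivalent is to delay the second operand by hand, for example by passing it as a zero-argument function (a sketch, not something provided by Base; the bitwise & is an ordinary function, but it evaluates both of its arguments, so it is not a true equivalent).

lazyand(x::Bool, y) = x ? y() : false           # y is called only when x is true

lazyand(false, () -> error("not evaluated"))    # returns false
lazyand(true,  () -> 1 > 0)                     # returns true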


Regards


Re: [julia-users] Re: Rounding to zero from positive or negative numbers results in positive or negative zero.

2016-04-20 Thread Milan Bouchet-Valat
Le mardi 19 avril 2016 à 22:10 -0700, Jeffrey Sarnoff a écrit :
> Hi,
> 
> You have discovered that IEEE standard floating point numbers have
> two distinct zeros: 0.0 and -0.0.  They compare `==` even though they
> are not `===`.  If you want to consider +0.0 and -0.0 to be the same,
> use `==` or `!=` not `===`  or `!==` when testing floating point
> values (the other comparisons <=, <, >=, > treat the two zeros as a
> single value).
There's actually an open issue about what to do with -0.0 and NaN in
Dicts: https://github.com/JuliaLang/julia/issues/9381

It turns out it's very hard to find a good solution.
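For the Dict behaviour mentioned there, the distinction comes from Dict keying on isequal/hash rather than ==. Below is a sketch of checking this, and of normalizing keys by adding +0.0 (IEEE arithmetic gives -0.0 + 0.0 == +0.0 under the default rounding mode); normalize_zero is just a hypothetical helper name.

isequal(0.0, -0.0)              # false, so Dict treats them as distinct keys
hash(0.0) == hash(-0.0)         # false

normalize_zero(x) = x + 0.0     # folds -0.0 into 0.0, leaves other values unchanged
normalize_zero(-0.0) === 0.0    # true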


Regards

> > Hello everyone!
> > I was wondering if the following behavior of round() has an special
> > purpouse:
> > 
> > a = round(0.1)
> > 0.0
> > 
> > b = round(-0.1)
> > -0.0
> > 
> > a == b
> > true
> > 
> > a === b
> > false
> > 
> > bits(a)
> > ""
> > 
> > 
> > bits(b)
> > "1000"
> > 
> > So the sign stays around...
> > 
> > I am using this rounded numbers as keys in a dictionary and julia
> > can tell the difference. 
> > 
> > For example, I expected something like this:
> > dict = [i => exp(i) for i in [a,b]]
> > Dict{Any,Any} with 1 entry:
> >  0.0 => 1.0
> > 
> > but got this:
> > dict = [i => exp(i) for i in [a,b]]
> > Dict{Any,Any} with 2 entries:
> >   0.0  => 1.0
> >   -0.0 => 1.0
> > 
> > It is not a big problem really but I would like to know where can
> > this behaviour come handy.
> > 
> > Cheers!
> > 


Re: [julia-users] Re: Type inference on the length of varargs

2016-04-19 Thread Milan Bouchet-Valat
Le mardi 19 avril 2016 à 06:58 -0700, Matt Bauman a écrit :
> That's https://github.com/JuliaLang/julia/pull/11242.  Another common
> workaround is a generated function (but the wrapper function is
> better if you don't need other generated functionality):
> 
> @generated function f(x...)
>     N = length(x)
>     :(Array{Int, $N})
> end
Perfect, thanks. I must say I never really understood what that giant
PR was about, except that it sounded interesting.


Regards


> > Hi! 
> > 
> > I'm looking for the recommended way of getting type inference to 
> > determine the number of elements passed via varargs. 
> > 
> > I guess a code snippet is better than a thousand words: in the 
> > following function, the type of a isn't inferred correctly. 
> > 
> > function f(x...) 
> >     N = length(x) 
> >     a = Array{Int, N}() 
> >     # ... 
> > end 
> > 
> > Using a wrapper function fixes the type instability: 
> > 
> > g(x...) = g(x) 
> > function g{N}(x::NTuple{N}) 
> >     a = Array{Int, N}() 
> >     # 
> > ... 
> > end 
> > 
> > Is there a better solution than this workaround? 
> > 
> > 
> > Regards 


[julia-users] Type inference on the length of varargs

2016-04-19 Thread Milan Bouchet-Valat
Hi!

I'm looking for the recommended way of getting type inference to
determine the number of elements passed via varargs.

I guess a code snippet is better than a thousand words: in the
following function, the type of a isn't inferred correctly.

function f(x...)
    N = length(x)
    a = Array{Int, N}()
    # ...
end

Using a wrapper function fixes the type instability:

g(x...) = g(x)
function g{N}(x::NTuple{N})
    a = Array{Int, N}()
    #
...
end

Is there a better solution than this workaround?


Regards

