[julia-users] Re: multiple dispatch for operators

2016-10-07 Thread Gabriel Gellner
Any reason to not just use a function? (like np.dot etc)

My understanding is that in python '*' means elementwise multiplication 
(for numpy arrays, at least), so even if you could monkeypatch numpy's 
__mul__ method to do the right thing, wouldn't you be changing the 
semantics?

Gabriel

On Friday, October 7, 2016 at 3:51:11 AM UTC-6, Sisyphuss wrote:

> In Julia, we can do multiple dispatch for operators, that is the 
> interpreter can identify:
> float + integer
> integer + integer
> integer + float
> float + float
> as well as *user-defined* data structure.
>
> Recently, I have been working on Python (I have no choice because Spark 
> doesn't yet have a Julia binding). I intended to do the same thing -- 
> multiplication -- 
> between a Numpy matrix and self-defined Low-rank matrix. Of course, I 
> defined the `__rmul__ ` method for Low-rank matrix. However, it seems to me 
> that the Numpy matrix intercepts the `*` operator as its `__mul__` method, 
> which expects the argument on the right side of `*` to be a scalar.
>
> I would like to know if there is any way around this?
>
>
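
For what it's worth, numpy does provide an escape hatch here: if your class 
opts out of numpy's ufunc machinery, `ndarray * x` returns NotImplemented 
and Python falls back to your `__rmul__`. A minimal sketch (the `LowRank` 
class and its factors are made up for illustration, not from the original 
post):

```python
import numpy as np

class LowRank:
    # Opting out of numpy's ufunc handling makes `ndarray * LowRank`
    # return NotImplemented, so Python falls back to our __rmul__.
    # (On numpy versions before 1.13, setting a high __array_priority__
    # was the way to get the same deferral.)
    __array_ufunc__ = None

    def __init__(self, u, v):
        self.u = np.asarray(u, dtype=float)  # shape (n, k)
        self.v = np.asarray(v, dtype=float)  # shape (k, m)

    def __rmul__(self, a):
        # a * (U @ V) computed as (a @ U) @ V, without ever forming U @ V
        return (a @ self.u) @ self.v

a = 2.0 * np.eye(3)
lr = LowRank(np.ones((3, 1)), np.ones((1, 3)))
result = a * lr  # dispatches to LowRank.__rmul__, not elementwise __mul__
```

Gabriel's semantic point still stands, though: for plain ndarrays `*` is 
elementwise, so hijacking it for matrix multiplication is arguably 
surprising; a named function (or the `@` operator) keeps the meaning clear.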

[julia-users] Re: Julia and the Tower of Babel

2016-10-07 Thread Gabriel Gellner
Yeah, the R system is probably the best guide, as it also has a pretty easy 
to use package manager ... hence so, so many packages ;) I think python 
works without a single BDFL (for science at least) since the core packages 
are monolithic, so the consistency is immediately apparent, and I find 
programmers, as a rule, dislike inconsistent APIs within a given project 
(while seemingly being less worried across packages).

In response to Andreas and Tom, I don't mean to sound like I don't want to 
collaborate. Rather, starting this discussion about the conventions in Base 
and Optim.jl, for example, made me realize that the solution is not just a 
matter of simple discussion: each group would need to sacrifice a certain 
level of API consistency if it adopted the other's convention ... and like 
John says, usually that kind of decision requires someone to issue a 
command from on high, which a loose package system doesn't always 
facilitate. But we shall see, maybe it doesn't matter in the long run.

I just find it stressful, when I am making my own package, to decide on the 
best convention to follow ... every choice feels like a severe tradeoff (do 
I use reltol to be like Base, which will be less and less of a guide as 
packages are moved out of Base ..., or do I use rel_tol because my package 
will commonly be used in conjunction with Optim.jl ...).

Thanks for the responses though; this is something I noticed, but I wasn't 
sure what others felt.

all the best.

On Friday, October 7, 2016 at 10:49:47 AM UTC-6, John Myles White wrote:

> I don't really see how you can solve this without a single dictator who 
> controls the package ecosystem. I'm not enough of an expert in Python to 
> say how well things work there, but the R ecosystem is vastly less 
> organized than the Julia ecosystem. Insofar as it's getting better, it's 
> because the community has agreed to make Hadley Wickham their benevolent 
> dictator.
>
>  --John
>
> On Friday, October 7, 2016 at 8:35:46 AM UTC-7, Gabriel Gellner wrote:
>>
>> [Gabriel's original post appears in full as the next message in this 
>> digest; long quote snipped.]

[julia-users] Julia and the Tower of Babel

2016-10-07 Thread Gabriel Gellner
 

Something that I have been noticing, as I convert more of my research code 
over to Julia, is how the super easy to use package manager (which I love), 
coupled with the talent base of the Julia community seems to have a 
detrimental effect on the API consistency of the many “micro” packages that 
cover what I would consider the de-facto standard library.

What I mean is that a commercial package like Matlab/Mathematica etc., 
being written under one large umbrella, will largely (clearly not always) 
choose consistent names for similar API keyword arguments, and have similar 
calling conventions for master-function-like tools (`optimize` versus 
`lbfgs`, etc), which I am starting to realize is one of the great selling 
points of these packages as an end user. I can usually guess what a keyword 
will be in Mathematica, whereas even after a year of using Julia almost 
exclusively I find I have to look at the documentation (or the source code, 
depending on the documentation ...) to figure out the keyword names in many 
common packages.

Similarly, in my experience with open source tools, due to the complexity 
of the package management, we get large “batteries included” distributions 
that cover a lot of the standard stuff for doing science, like python’s 
numpy + scipy combination. Whereas in Julia the equivalent of scipy is 
split over many, separately developed packages (Base, Optim.jl, NLopt.jl, 
Roots.jl, NLsolve.jl, ODE.jl/DifferentialEquations.jl). Many of these 
packages are stupid awesome, but they can have dramatically different 
naming conventions and calling behavior for essentially equivalent 
functionality. Recently I noticed that tolerances, for example, are named 
`atol/rtol` versus `abstol/reltol` versus `abs_tol/rel_tol`, which means it 
is extremely easy to have a piece of scientific code that needs to use all 
three conventions across different calls to seemingly similar libraries. 

Having brought this up, I find that the community is largely sympathetic 
and, in general, would support a common convention; the issue, I have 
slowly realized, is that it is rarely that straightforward. In the above 
example, abstol/reltol versus abs_tol/rel_tol seems like an easy thing to 
tidy up, but the latter underscored name is consistent with similar naming 
conventions from Optim.jl for other tolerances, so that community is 
reluctant to change the convention. Similarly, I think there 
would be little interest in changing abstol/reltol to the underscored 
version in packages like Base, ODE.jl etc as this feels consistent with 
each of these code bases. Hence I have started to think that the problem is 
the micro-packaging. It is much easier to look for consistency within a 
package than across similar packages, and since Julia seems to distribute 
so many of the essential tools in very narrow boundaries of functionality I 
am not sure that this kind of naming convention will ever be able to reach 
something like a Scipy, or the even higher standard of commercial packages 
like Matlab/Mathematica. (I am sure there are many more examples like using 
maxiter, versus iterations for describing stopping criteria in iterative 
solvers ...)

Even further, I have noticed that even when packages try to be consistent 
across packages, for example Optim.jl <-> Roots.jl <-> NLsolve.jl, when one 
package changes how it does things (Optim.jl moving to delegation on types 
for method choice) the consistency again fractures quickly: we now have a 
common divide between using typed dispatch keywords versus :method symbol 
names across the previous packages (not to mention the whole in-place 
versus not-in-place question for function arguments …)

Do people with more experience in scientific package ecosystems feel this 
is solvable? Or do micro distributions just lead to many, many varying 
degrees of API conventions that need to be learned by end users? Is this 
common in communities that use C++ etc? I ask as I wonder how much this 
kind of thing can be worried about when making small packages is so easy.


Re: [julia-users] Re: When can the broadcast dot (in 0.5) be used?

2016-08-18 Thread Gabriel Gellner


On Thursday, August 18, 2016 at 10:52:42 AM UTC-7, Michael Borregaard wrote:
>
> That sounds right. I wonder if this is not a bug?
>

From my reading of all the related issues, it is a known "issue" ;) It 
looks like a good solution is being ironed out, but I don't think this will 
be added to the 0.5.x timeline; rather it will be a 0.6.x feature (along 
with a lot of other amazing .() features!).

For your example, you can use the workaround that, at the moment, map deals 
with this case in the expected manner:

map(x->is(x, "b"), ["a", "b"])

will give the array answer (though likely not what you wanted ...)

Gabriel


[julia-users] Re: When can the broadcast dot (in 0.5) be used?

2016-08-18 Thread Gabriel Gellner
My understanding (which can easily be wrong!) is that this stems from 
https://github.com/JuliaLang/julia/issues/16966.
Namely, size("b") throws a method error, whereas your size(2) gives the 
empty tuple (). So in general broadcast (which is what the .() syntax is 
sugar for) requires that the objects it vectorizes over have a method 
defined for size.

In general I find Julia's behavior with strings/chars hard to understand. I 
usually just use python when I need to do a lot of string processing.

On Thursday, August 18, 2016 at 7:09:19 AM UTC-7, Michael Borregaard wrote:

> I have a hard time figuring out what functions and types accept the 
> broadcast dot in 0.5. E.g.,
>
> is(1, 2)# false
> is.([1, 2], 2)  # Bool[false, true]
>
> is("a", "b")#false
> is.(["a","b"], "b")#MethodError: no method matching size(::String)
>
> Can anyone give me a hint?
>


[julia-users] Re: `abs` has no method matching abs(::Array{Any,1})

2016-07-21 Thread Gabriel Gellner
Can you just cast the array to Float64, or whatever numeric type you need 
the column to be?

On Thursday, July 21, 2016 at 9:50:14 AM UTC-7, Ping Hou wrote:
>
> Hi,
>
> I encountered a problem when I running my code. 
>
> LoadError: MethodError: `abs` has no method matching abs(::Array{Any,1})
> while loading In[21], in expression starting on line 12
>
>
> Could anybody help me to fix it?
>
> Best,
> Ping
>


Re: [julia-users] findpeaks

2016-07-20 Thread Gabriel Gellner
Yay! thanks so much. Under my nose the whole time.

On Wednesday, July 20, 2016 at 4:30:44 PM UTC-7, Tim Holy wrote:
>
> On Wednesday, July 20, 2016 4:01:00 PM CDT Gabriel Gellner wrote: 
> > Is there a standard (Base or common package) that implements something 
> akin 
> > to matlab's `findpeaks`. It is easy enough to make something like this, 
> but 
> > I imagined that this would be something that already exists but that I 
> have 
> > just missed? 
>
> julia> using Images 
>
> help?> Images 
> search: Images nimages Image ImageCmap OverlayImage AbstractImage 
> AbstractImageDirect AbstractImageIndexed imaverage integral_image 
>
>   Images is a package for representing and processing images. 
>
>   Constructors, conversions, and traits: 
>
>   - Construction: `Image`, `ImageCmap`, `grayim`, `colorim`, `convert`, 
> `copyproperties`, `shareproperties` 
>   - Traits: `colordim`, `colorspace`, `coords_spatial`, `data`, 
> `isdirect`, 
> `isxfirst`, `isyfirst`, `pixelspacing`, `properties`, `sdims`, 
> `spacedirections`, `spatialorder`, `storageorder`, `timedim` 
>   - Size-related traits: `height`, `nchannels`, `ncolorelem`, `nimages`, 
> `size_spatial`, `width`, `widthheight` 
>   - Trait assertions: `assert_2d`, `assert_scalar_color`, 
> `assert_timedim_last`, `assert_xfirst`, `assert_yfirst` 
>   - Indexing operations: `getindexim`, `sliceim`, `subim` 
>   - Conversions: `convert`, `raw`, `reinterpret`, `separate` 
>
>   Contrast/coloration: 
>
>   - `MapInfo`: `MapNone`, `BitShift`, `ClampMinMax`, `ScaleMinMax`, 
> `ScaleAutoMinMax`, etc. 
>   - `imadjustintensity`, `sc`, `imstretch`, `imcomplement` 
>
>   Algorithms: 
>
>   - Reductions: `maxfinite`, `maxabsfinite`, `minfinite`, `meanfinite`, 
> `sad`, 
> `ssd`, `integral_image` 
>   - Resizing: `restrict`, `imresize` (not yet exported) 
>   - Filtering: `imfilter`, `imfilter_fft`, `imfilter_gaussian`, 
> `imfilter_LoG`, 
> `imROF`, `ncc`, `padarray` 
>   - Filtering kernels: `ando[345]`, `guassian2d`, `imaverage`, `imdog`, 
> `imlaplacian`, `prewitt`, `sobel` 
>   - Exposure : `imhist`, `histeq` 
>   - Gradients: `backdiffx`, `backdiffy`, `forwarddiffx`, `forwarddiffy`, 
> `imgradients` 
>   - Edge detection: `imedge`, `imgradients`, `thin_edges`, `magnitude`, 
> `phase`, `magnitudephase`, `orientation`, `canny` 
>   - Corner detection: `imcorner` 
>   - Blob detection: `blob_LoG`, `findlocalmaxima`, `findlocalminima` 
>   - Morphological operations: `dilate`, `erode`, `closing`, `opening`, 
> `tophat`, `bothat`, `morphogradient`, `morpholaplace` 
>   - Connected components: `label_components`, `component_boxes`, 
> `component_lengths`, `component_indices`, `component_subscripts`, 
> `component_centroids` 
>
>   Test images and phantoms (see also TestImages.jl): 
>
>   - `shepp_logan` 
>
> help?> findlocalmaxima 
> search: findlocalmaxima findlocalminima 
>
>   findlocalmaxima(img, [region, edges]) -> Vector{Tuple} 
>
>   Returns the coordinates of elements whose value is larger than all of 
> their 
> immediate neighbors. region is a list of dimensions to consider. edges is 
> a 
> boolean specifying whether to 
>   include the first and last elements of each dimension. 
>
> (I think it would be great if most packages would provide an API summary 
> with 
> ?PkgName. Not that I always remember myself) 
>
> --Tim 
>
>

[julia-users] findpeaks

2016-07-20 Thread Gabriel Gellner
Is there a standard (Base or common package) that implements something akin 
to matlab's `findpeaks`. It is easy enough to make something like this, but 
I imagined that this would be something that already exists but that I have 
just missed?


Re: [julia-users] Re: Converting from ColorTypes to Tuple

2016-07-07 Thread Gabriel Gellner
I use the following as a utility to have PyPlot.jl/PyCall.jl automatically 
convert RGB types into tuples:

using PyCall
using Colors

function PyCall.PyObject(t::Color)
    trgb = convert(RGB, t)
    ctup = map(float, (red(trgb), green(trgb), blue(trgb)))
    return PyObject(ctup)
end

I'm sure it can be tweaked to be more general. But it works so far when I 
am doing quick and dirty plotting :) Good luck!

On Thursday, July 7, 2016 at 6:09:41 PM UTC-7, Tim Holy wrote:

> On Thursday, July 7, 2016 4:41:10 PM CDT Islam Badreldin wrote: 
> > Maybe 
> > this means PyPlot.jl needs to add better support for ColorTypes? 
>
> That sounds like a very reasonable solution. I don't really know PyPlot at 
> all, so I don't have any advice to offer, but given how well you seem to 
> understand things already it seems that matters are in excellent hands 
> :-). 
>
> Best, 
> --Tim 
>
>

Re: [julia-users] Re: I have done few examples where can I put them at disposition ?

2016-07-01 Thread Gabriel Gellner
Looks great!

Don't forget to use a license of some kind (I recommend MIT). It is easy to 
add from the github web interface. Check out:
https://help.github.com/articles/open-source-licensing/

On Thursday, June 30, 2016 at 10:41:55 PM UTC-7, Henri Girard wrote:
>
> Yes I didn't see the commit button now it's there 
>
> Le 30/06/2016 20:52, Gabriel Gellner a écrit :
>
> You could also just make a repo on GitHub. 
> See for example:
> https://github.com/dpsanders/intermediate_julia
>
>
> On Thursday, June 30, 2016 at 8:28:25 AM UTC-7, Henri Girard wrote: 
>>
>> I have done some examples in julia (in french) and I think it could be 
>> interesting for beginners. Juliabox or similar web site could host them ?
>>
>>
>

[julia-users] Re: I have done few examples where can I put them at disposition ?

2016-06-30 Thread Gabriel Gellner
You could also just make a repo on GitHub.
See for example:
https://github.com/dpsanders/intermediate_julia


On Thursday, June 30, 2016 at 8:28:25 AM UTC-7, Henri Girard wrote:
>
> I have done some examples in julia (in french) and I think it could be 
> interesting for beginners. Juliabox or similar web site could host them ?
>
>

Re: [julia-users] A question of style: type constructors vs methods

2016-06-24 Thread Gabriel Gellner
Really good point about dispatch. Man, I need to think this through. Might 
just be my brain not fully getting multiple dispatch versus OO inheritance 
for this kind of design.

On Friday, June 24, 2016 at 12:29:48 PM UTC-7, Toivo Henningsson wrote:
>
> If you call the function Bar, then users might expect to be able to do 
> dispatch and type assertions using ::Bar, so I'd say it's safer to use bar 
> in this case. 
>
> Also consider if you're sure that Foo will always return a Foo in the 
> future; if you want to keep the flexibility to change that then I think 
> you should go with foo as well. 
>


[julia-users] Re: VS code extension

2016-06-22 Thread Gabriel Gellner
Are there benefits to using this over atom? Why are people moving over? 
Pros, Cons?

On Tuesday, June 21, 2016 at 3:26:52 PM UTC-7, David Anthoff wrote:
>
> Hi all,
>
>  
>
> I’ve created a new github repo for a VS code extension 
> https://github.com/davidanthoff/julia-vscode and it is published here 
> https://marketplace.visualstudio.com/items?itemName=julialang.language-julia. 
> Be5invis will delete the old julia extension from the marketplace because 
> he is no longer maintaining it. At this point my extension has zero 
> additional features over his old one, except that there is a public github 
> repo where in theory people could contribute.
>
>  
>
> Two questions:
>
> 1) could we move the github repo under some official julia organization? I 
> know most other editor plugins are under julialang, but I also remember 
> talk about creating something like a juliaeditor org or something like 
> that? In any case, I would like to move the repo to something more 
> official. I’m happy to maintain it for the foreseeable future, so no fear 
> on that front.
>
> 2) I used the julia logo. I hope that is ok from a copyright point of 
> view? I took it from the repo for the julia homepage, and that whole repo 
> seemed to be under an MIT license, but just wanted to be sure.
>
>  
>
> And if anyone wants to add stuff to the extension, PRs are welcome! 
> Especially a debug adapter would of course be fantastic.
>
>  
>
> Cheers,
>
> David 
>
>  
>
> --
>
> David Anthoff
>
> University of California, Berkeley
>
>  
>
> http://www.david-anthoff.com
>
>  
>


Re: [julia-users] Re: JuliaCon schedule announced

2016-06-22 Thread Gabriel Gellner
For future conferences I would be super stoked to pay some fee to have 
early access if that would help at all. Super stoked to see so many of 
these sweet talks!

On Wednesday, June 22, 2016 at 6:49:43 AM UTC-7, Viral Shah wrote:
>
> Yes they will be and hopefully much sooner than last year.
>
> -viral
> On Jun 22, 2016 7:31 AM, "nuffe"  wrote:
>
>> Will all the talks be posted on youtube, like last year? If so, do you 
>> know when? Thank you (overseas enthusiast) 
>>
>> On Thursday, June 9, 2016 at 11:34:18 PM UTC+2, Viral Shah wrote:
>>>
>>> The JuliaCon talks and workshop schedule has now been announced.
>>>
>>> http://juliacon.org/schedule.html
>>>
>>> Please buy your tickets if you have been procrastinating. We have seen 
>>> tickets going much faster this year, and waiting until the day before is 
>>> unlikely to work this year. Please also spread the message to your friends 
>>> and colleagues and relevant mailing lists. Here's the conference poster for 
>>> emailing and printing:
>>>
>>> http://juliacon.org/pdf/juliacon2016poster3.pdf
>>>
>>> -viral
>>>
>>

Re: [julia-users] A question of style: type constructors vs methods

2016-06-21 Thread Gabriel Gellner
Ugh keyboard error.

So lets say I have the type

type Foo
field1
field2
end

So I have my nice constructors dealing with all the ways I want to create 
Foo types, but then I want a specialized Foo that has field2 set to some 
special value that has a special sub meaning from my default Foo. Say I 
have a default constructor for Foo like

Foo(;field1=, field2=)

But I want to have a different default, so I call this
Bar(;field1=, field2=) = Foo(field1, field2)

My question! Really this is an abuse of the naming guide, as Bar is really 
a method, not a type constructor with the same name as the type, but I 
guess I am thinking about this as a kind of specialized type of Foo, hence 
the uppercase, but should I instead do

bar(...)

when I am returning these kind of specialized versions of Foo?

On Tuesday, June 21, 2016 at 11:44:07 AM UTC-7, Gabriel Gellner wrote:
>
> So going deeper into using my own types, and coding style in julia.
>
> Lets say I have the container type
>
> type Foo
>
>
> On Monday, June 20, 2016 at 7:10:15 AM UTC-7, Gabriel Gellner wrote:
>>
>> That was what I was feeling. That this was a legacy issue for lowercase 
>> type "constructors". Thanks so much.
>>
>> On Monday, June 20, 2016 at 1:11:41 AM UTC-7, Mauro wrote:
>>>
>>> I think you should use the constructors if the user expects to construct 
>>> a certain type, e.g. `Dict()`.  Conversely if the user cares about the 
>>> action then use a function, e.g.: 
>>>
>>> julia> keys(Dict()) 
>>> Base.KeyIterator for a Dict{Any,Any} with 0 entries 
>>>
>>> here I don't care about the type, I just want to iterate the keys. 
>>>
>>> But of course, it's not that clear-cut: `linspace` has history, so does 
>>> `zeros`.  So, for a new container type I'd use the constructor `FooBar`. 
>>>
>>> On Sun, 2016-06-19 at 23:12, Gabriel Gellner <gabriel...@gmail.com> 
>>> wrote: 
>>> > I am currently making some container like types, so I am using the 
>>> > convention of studly caps for  the types ie `FooBar`. For usage I am 
>>> > confused on what the julian convention is for having expressive type 
>>> > constructors like for `Dict` and `DataFrame`, versus using methods 
>>> like 
>>> > `linspace`. Clearly I could use either, but it is not clear to me when 
>>> I 
>>> > should use one convention over the other. 
>>> > 
>>> > Clearly I can have my api be like: 
>>> > 
>>> > f = FooBar(...) 
>>> > 
>>> > or 
>>> > 
>>> > f = foobar() 
>>> > 
>>> > but is one preferred over the other? Is it just random when to use one 
>>> or 
>>> > the other when making container like types? 
>>>
>>

Re: [julia-users] A question of style: type constructors vs methods

2016-06-21 Thread Gabriel Gellner
So going deeper into using my own types, and coding style in julia.

Lets say I have the container type

type Foo


On Monday, June 20, 2016 at 7:10:15 AM UTC-7, Gabriel Gellner wrote:
>
> That was what I was feeling. That this was a legacy issue for lowercase 
> type "constructors". Thanks so much.
>
> On Monday, June 20, 2016 at 1:11:41 AM UTC-7, Mauro wrote:
>>
>> I think you should use the constructors if the user expects to construct 
>> a certain type, e.g. `Dict()`.  Conversely if the user cares about the 
>> action then use a function, e.g.: 
>>
>> julia> keys(Dict()) 
>> Base.KeyIterator for a Dict{Any,Any} with 0 entries 
>>
>> here I don't care about the type, I just want to iterate the keys. 
>>
>> But of course, it's not that clear-cut: `linspace` has history, so does 
>> `zeros`.  So, for a new container type I'd use the constructor `FooBar`. 
>>
>> On Sun, 2016-06-19 at 23:12, Gabriel Gellner <gabriel...@gmail.com> 
>> wrote: 
>> > I am currently making some container like types, so I am using the 
>> > convention of studly caps for  the types ie `FooBar`. For usage I am 
>> > confused on what the julian convention is for having expressive type 
>> > constructors like for `Dict` and `DataFrame`, versus using methods like 
>> > `linspace`. Clearly I could use either, but it is not clear to me when 
>> I 
>> > should use one convention over the other. 
>> > 
>> > Clearly I can have my api be like: 
>> > 
>> > f = FooBar(...) 
>> > 
>> > or 
>> > 
>> > f = foobar() 
>> > 
>> > but is one preferred over the other? Is it just random when to use one 
>> or 
>> > the other when making container like types? 
>>
>

[julia-users] Re: integrate.odeint(f,t)

2016-06-20 Thread Gabriel Gellner
Welcome by the way! I used to use Python a lot myself. Julia is great, but 
it does require a fair amount of extra learning due to its youth. I hope 
you get past the rocky start :)

```
using PyPlot
using PyCall
@pyimport scipy.integrate as integrate

function f(y, t)
if t < 25
return [y[2], -8*y[1] + 0.1*y[2]]
elseif 25 < t < 45
return [y[2], -8*y[1]]
else
return [y[2], -8*y[1] - 0.1*y[2]]
end
end

t = linspace(0, 35, 1000)
y = [0.01, 0]
y1 = integrate.odeint(f, y, t)
plot(t, y1)
xlim(0, 35)
ylim(-0.2, 0.2)
```

This is what I would do to make your code work. One of the issues was that 
you were using 0-based indexing (C-like, python etc), so I 
changed all the lines that had things like y[0] -> y[1] and y[1] -> y[2].
Also, you should be careful about following python idioms too closely in 
Julia. Doing things like
plt = PyPlot
is likely not the correct idea. I have shown what I consider idiomatic use 
of PyPlot in this case.

Final question. Are you using the python integrators because you want the 
LSODE codes specifically? You could of course use the pure Julia package 
ODE.jl if you can deal with using a Runge-Kutta algorithm instead (Adams 
methods are coming soon!)

Here is your code using that package instead just for comparison (with some 
naming changes to be more awesome in my mind ;)

```
using ODE

function f(t, y)
if t < 25
return [y[2], -8*y[1] + 0.1*y[2]]
elseif 25 < t < 45
return [y[2], -8*y[1]]
else
return [y[2], -8*y[1] - 0.1*y[2]]
end
end

tspan = linspace(0, 35, 1000)
y0 = [0.01, 0]
t, y = ode45(f, y0, tspan)
plot(t, hcat(y...)')  # ode45 returns a Vector of state vectors; stack for plotting
xlim(0, 35)
ylim(-0.2, 0.2)
```

Good luck!

On Monday, June 20, 2016 at 1:41:41 PM UTC-7, Henri Girard wrote:
>
> y1 (the odeint array) has nothing inside, that's why there is an error, 
> but from what?
>
> Le lundi 20 juin 2016 17:32:07 UTC+2, Henri Girard a écrit :
>>
>> I am trying to convert from python to julia, but I don't know how to use 
>> y1=integrate.odeint(f,t); apparently I should have added a derivative?
>>
>> --python--
>> from scipy.integrate import odeint
>> def f(y,t):
>> if t<25:
>> return [y[1],-8*y[0]+0.1*y[1]]
>> elif 25<t<45:
>> return [y[1],-8*y[0]]
>> else:
>> return [y[1],-8*y[0]-0.1*y[1]]
>> t=np.linspace(0,35,1000)
>> # start from  y=0.01, y’=0
>> y1=odeint(f,[0.01,0],t)
>> plt.plot(t,y1[:,0])
>> plt.title("Evolution temporelle")
>> plt.xlabel("Temps t (s)")
>> plt.ylabel("Intensite i (A)")
>> plt.plot(0,0.01,"ro")
>> #plt.savefig("RLC-demarrage.eps")
>> plt.show()
>> --ijulia
>> @pyimport scipy.integrate as odeint
>> function f(y,t)
>>   if t<25
>> return [y[1],-8*y[0]+0.1*y[1]]
>>   elseif 25<t<45
>> return [y[1],-8*y[0]]
>>   else
>> return [y[1],-8*y[0]-0.1*y[1]]
>>   end
>> end;
>> t=linspace(0,35,1000)
>> y=linspace(0.01,0)
>> y1=integrate.odeint(f,y,t)
>>
>
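
For reference, here is a runnable version of what I take the original 
Python to be, reconstructed from the quoted fragments (the `25<t<45` 
boundary in the `elif` is my reading of the mangled line; the plotting 
calls are left out so the script runs headless):

```python
import numpy as np
from scipy.integrate import odeint

def f(y, t):
    # Oscillator whose damping term switches sign at t=25 and t=45
    if t < 25:
        return [y[1], -8*y[0] + 0.1*y[1]]
    elif 25 <= t < 45:
        return [y[1], -8*y[0]]
    else:
        return [y[1], -8*y[0] - 0.1*y[1]]

t = np.linspace(0, 35, 1000)
# Start from y=0.01, y'=0 -- note the initial condition is the pair
# [0.01, 0], not linspace(0.01, 0) as in the quoted ijulia attempt.
y1 = odeint(f, [0.01, 0], t)  # y1 has shape (1000, 2)
```

Plot with `plt.plot(t, y1[:, 0])` as in the original script.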

[julia-users] Re: @pyimport error

2016-06-20 Thread Gabriel Gellner
This should also work with python2 if you have the packages installed (I 
don't have a python 3 install on my machine and the import works).

On Monday, June 20, 2016 at 6:30:36 AM UTC-7, Henri Girard wrote:
>
> Answering : I try with python3 and everything works
>
> Le lundi 20 juin 2016 15:14:21 UTC+2, Henri Girard a écrit :
>>
>> when I try scipy I can't get it working ?
>>
>>
>> @pyimport scipy.integrate as integrate
>> ERROR: PyError (:PyImport_ImportModule) 
>> ImportError('No module named scipy.integrate',)
>>
>>  [inlined code] from /home/pi/.julia/v0.4/PyCall/src/exception.jl:81
>>  in pyimport at /home/pi/.julia/v0.4/PyCall/src/PyCall.jl:341
>>
>> Any help
>>
>

Re: [julia-users] A question of style: type constructors vs methods

2016-06-20 Thread Gabriel Gellner
That was what I was feeling. That this was a legacy issue for lowercase 
type "constructors". Thanks so much.

On Monday, June 20, 2016 at 1:11:41 AM UTC-7, Mauro wrote:
>
> I think you should use the constructors if the user expects to construct 
> a certain type, e.g. `Dict()`.  Conversely if the user cares about the 
> action then use a function, e.g.: 
>
> julia> keys(Dict()) 
> Base.KeyIterator for a Dict{Any,Any} with 0 entries 
>
> here I don't care about the type, I just want to iterate the keys. 
>
> But of course, it's not that clear-cut: `linspace` has history, so does 
> `zeros`.  So, for a new container type I'd use the constructor `FooBar`. 
>
> On Sun, 2016-06-19 at 23:12, Gabriel Gellner <gabriel...@gmail.com 
> > wrote: 
> > I am currently making some container like types, so I am using the 
> > convention of studly caps for  the types ie `FooBar`. For usage I am 
> > confused on what the julian convention is for having expressive type 
> > constructors like for `Dict` and `DataFrame`, versus using methods like 
> > `linspace`. Clearly I could use either, but it is not clear to me when I 
> > should use one convention over the other. 
> > 
> > Clearly I can have my api be like: 
> > 
> > f = FooBar(...) 
> > 
> > or 
> > 
> > f = foobar() 
> > 
> > but is one preferred over the other? Is it just random when to use one 
> or 
> > the other when making container like types? 
>


[julia-users] Re: Unexpected error on 1st order ODE using ODE package

2016-06-19 Thread Gabriel Gellner
Fair enough. I guess if the urge is to support strangely generic code then 
the inputs should never be expected to be promoted. But I just don't see 
what you are describing as being a "standard" in Julia. ODE.jl and your 
project maybe try to attain this level of genericity, but most of Base 
seems to do this kind of promotion, and doesn't always return the input 
types, or give an inexact error, when you provide certain inputs (like 
returning Rational if you input Rational). Optim.jl, Roots.jl, 
Distributions.jl don't do this, quadgk doesn't give back rationals, etc. 
These are the standard libraries I use the most often, I guess.

A user who is told that type-in type-out is the standard in Julia, and 
that strange error messages like the OP's are not bugs, would have good 
reason to be skeptical that this is actually a contract that is commonly 
followed. Maybe this is the way Julia is going. I hope that handling the 
common case will never get too annoying (like throwing a type error just so I 
can handle the unlikely case that I use Rational inputs for an ode solver 
;), and that all my code won't end up filled with eltypes and 
InexactErrors. We shall see.
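The `eltype(f(y0))` heuristic discussed in this thread can be sketched in a few lines (just a sketch: whether a real solver should pay for a trial application of `f`, and how to handle in-place RHS functions, is exactly what is under debate; `f` here is the OP's right-hand side):

```julia
f(t, y) = (2 - y) / 5              # the OP's RHS: returns Float64 even for Int y

y0 = 3
T_in  = typeof(y0)                 # Int64: what keying on the input alone gives
T_out = typeof(f(0.0, y0))         # Float64: one trial application of f
T     = promote_type(T_in, T_out)  # Float64: a safer state type for the solver

ys = T[]                           # workspace keyed on T instead of typeof(y0)
push!(ys, y0)                      # the Int 3 promotes cleanly to 3.0
```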

On Sunday, June 19, 2016 at 7:32:55 PM UTC-7, Chris Rackauckas wrote:
>
> But I gave an example where the output type can change depending on the 
> chosen timespan (eltype(y0)==Rational{Int}), so "eltype(f(y0))" can really 
> only be what the time integration algorithm actually produces, which means 
> you have to just run it and see what happens. It becomes a type-stability 
> issue because the internals of the functions are using similar(y0) to make 
> all the arrays (well, with some promotions before doing so), and so each 
> time you go through the loop you're banking on getting the same type. The 
> other way to handle this would be to make the arrays something like a Float 
> which just always kind of work, so this "bug" was likely introduced when 
> the integrator were upgraded to allow for output to be any type (That's 
> what introduced this issue in DifferentialEquations.jl, so I assume that's 
> what happened in ODE.jl as well).
>
> The examples you gave are cases where the output is fixed (Int/Int returns 
> a Float no matter what), or where the type promotion isn't too difficult 
> (quadgk is a linear combination of values from a to b, so just check the 
> element type of the intermediate values that you're using and promote 
> everything to that). But if you don't want to do the first, and you're 
> dealing with a case where type inference is essentially undecidable, you 
> will have this issue.
>
> That said, using the heuristic of "eltype(f(y0))" will do better than it 
> currently does (it would catch the case where all inputs are an Int but one 
> application makes things floats, which is probably the most common problem).
>
> On Monday, June 20, 2016 at 3:13:44 AM UTC+1, Gabriel Gellner wrote:
>>
>> I will admit my understanding of type stability in a jit compiler is 
>> shaky, but I don't see 0.5//2 as a type instability; rather, it is a method 
>> error (the issue is that a float does not convert to an int, but having an int 
>> become a float is a fair promotion, so I can't see why requiring y0 to be a 
>> float is at all similar -- it is very Julian to promote it). I understand 
>> type instability to mean that the output of function is not uniquely 
>> determined by its inputs. In this sense there is no type instability. The 
>> issue as I understand it is that we do eltype(y0) for the output type, when 
>> maybe it makes more sense to do eltype(f(y0)) which would then be type 
>> stable, since f is. This would even work if you did y0 = 3f0, etc.
>>
>> Again I could be missing something simple, but requiring the inputs 
>> to agree with the output of the input function feels like something far 
>> beyond any kind of Julia difference from other languages, and more a 
>> feature of it being a dynamic language and not purely static. Suggesting 
>> users, especially new users, to worry about these situations doesn't 
>> jive with my understanding of Julia. It feels like premature optimization 
>> to me.
>>  
>>
>> On Sunday, June 19, 2016 at 6:38:37 PM UTC-7, Chris Rackauckas wrote:
>>
>>> I mean, it's the same type instability that you get if you try things 
>>> like .5//2. Many Julia functions take ints and give a float, but not 
>>> all do. If any function doesn't work (like convert will always fail if you 
>>> somehow got a float but initialized an array to be similar(arrInt)), then 
>>> you get this error.
>>>
>>> This can probably be masked a little by pre-processing. I know that 
>>> ODE.jl mak

[julia-users] Re: Unexpected error on 1st order ODE using ODE package

2016-06-19 Thread Gabriel Gellner
I will admit my understanding of type stability in a jit compiler is 
shaky, but I don't see 0.5//2 as a type instability; rather, it is a method 
error (the issue is that a float does not convert to an int, but having an int 
become a float is a fair promotion, so I can't see why requiring y0 to be a 
float is at all similar -- it is very Julian to promote it). I understand 
type instability to mean that the output of function is not uniquely 
determined by its inputs. In this sense there is no type instability. The 
issue as I understand it is that we do eltype(y0) for the output type, when 
maybe it makes more sense to do eltype(f(y0)) which would then be type 
stable, since f is. This would even work if you did y0 = 3f0, etc.

Again I could be missing something simple, but requiring the inputs 
to agree with the output of the input function feels like something far 
beyond any kind of Julia difference from other languages, and more a 
feature of it being a dynamic language and not purely static. Suggesting 
users, especially new users, to worry about these situations doesn't 
jive with my understanding of Julia. It feels like premature optimization 
to me.
 

On Sunday, June 19, 2016 at 6:38:37 PM UTC-7, Chris Rackauckas wrote:

> I mean, it's the same type instability that you get if you try things like 
> .5//2. Many Julia functions take ints and give a float, but not all 
> do. If any function doesn't work (like convert will always fail if you 
> somehow got a float but initialized an array to be similar(arrInt)), then 
> you get this error.
>
> This can probably be masked a little by pre-processing. I know that 
> ODE.jl makes the types compatible to start, but that doesn't mean they will 
> be after one step. For example, run an Euler step with try-catch and then 
> have it up the types to whatever is compatible, then solve the ODE. And 
> this has almost no performance penalty in most cases (and would be easy to 
> switch off). But I don't know if this goes into a low level "this uses this 
> method to solve the ODE" that ODE.jl implements. But even if you do this, 
> you won't catch all of the type errors. For example, if you want to use 
> Rational{Int}, it can take quite a few steps to overflow the numerator or 
> denominator, but once it does, you get an InexactError (and the solution is 
> to use Rational{BigInt}). 
>
> You can use some try-catch phrases in the main solver, or put the solver 
> in a try-catch and have it fix types and re-run, but these are all things 
> that would be non-intuitive behavior and would have to be off by default. 
> But at that point, the user would probably know to just fix the type 
> problem.
>
> So honestly I don't think that there's a good way to make this "silent". 
> But this is the fundamental trade off in Julia that makes it fast, and it's 
> not something that is just encountered here, so users will need to learn 
> about it pretty quick or else they will see lots of other 
> functions/packages break.
>
> On Monday, June 20, 2016 at 2:07:30 AM UTC+1, Gabriel Gellner wrote:
>>
>> Is this truly a type instability?
>>
>> The function f has no type stability issues from my understanding of the 
>> concept. No matter the input types you always get a Float output so there 
>> is no type instability. Many of Julia's functions work this way, including 
>> division 1/2 -> float even though the inputs are ints.
>>
>> The real issue is that ode23 infers the type of the output from y0 which 
>> in this case is an int, but I don't see how this is the correct inference. 
>> Maybe it is desired, but I hardly see this as normal Julia behavior. I can 
>> happily mix input types to arguments in many Julia constructs, without 
>> forcing me to have to use the same input vs output type. matrix mult, sin, 
>> sqrt, etc etc. Isn't this exactly what convert functions are for?
>>
>> Hell, the developer docs say that literals in expressions should be ints 
>> so that conversions can be better; that is, they say I should write 2*x, not 
>> 2.0*x, so that type promotions can work correctly. The issue in this case is 
>> that an implementation detail is being exposed to the user, that eltype(y0) 
>> is determining the output of the function. I don't see that this is 
>> standard Julian practice, though it might be desired in this case. For 
>> example I can use quadgk(f, 1, 2) and not have an error because of the 
>> integer a, b. And that is a very similar style function Base method.
>>
>> Maybe I am missing something simple, but I worry about being too harsh 
>> about types when it feels unnecessary.
>>
>>
>> On Sunday, June 19, 2016 at 5:28:39 PM UTC-7, Chris Rackauckas wrote:
>>
>>>

[julia-users] A question of style: type constructors vs methods

2016-06-19 Thread Gabriel Gellner
I am currently making some container like types, so I am using the 
convention of studly caps for  the types ie `FooBar`. For usage I am 
confused on what the julian convention is for having expressive type 
constructors like for `Dict` and `DataFrame`, versus using methods like 
`linspace`. Clearly I could use either, but it is not clear to me when I 
should use one convention over the other.

Clearly I can have my api be like:

f = FooBar(...)

or

f = foobar()

but is one preferred over the other? Is it just random when to use one or 
the other when making container like types?


[julia-users] Re: Unexpected error on 1st order ODE using ODE package

2016-06-19 Thread Gabriel Gellner
You are passing in the initial condition `start` as an integer, but ode23 
needs this to be a float. Change it to `const start = 3.0` and you are 
golden. This does feel like a bug, though; you should file an issue at the 
GitHub page.
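The failure mode is easy to reproduce outside the solver; this sketch mimics what an adaptive stepper does internally when it allocates workspace with `similar` keyed on an `Int` initial condition:

```julia
y0 = [3]                       # Int initial condition, as in the OP's code
work = similar(y0)             # Int workspace, like the solver's internal arrays

err = try
    work[1] = 2.95             # a fractional step value won't fit in an Int slot
    nothing
catch e
    e                          # InexactError, the same error the OP saw
end

work_ok = similar(float(y0))   # promoting y0 up front avoids the problem
work_ok[1] = 2.95              # fine: Float64 workspace
```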

On Sunday, June 19, 2016 at 11:49:55 AM UTC-7, Joungmin Lee wrote:
>
> Hi,
>
> I am making simple examples of the ODE package in Julia, but I cannot make 
> a code without error for 1st order ODE.
>
> Here is my code:
>
> using ODE;
>>
>> function f(t, y)
>> x = y
>> dx_dt = (2-x)/5
>> 
>> dx_dt
>> end
>>
>> const start = 3;
>> time = 0:0.1:30;
>>
>> t, y = ode23(f, start, time);
>>
>
> It finally gives:
> LoadError: InexactError()
> while loading In[14], in expression starting on line 1
>
> in copy! at abstractarray.jl:310
> in setindex! at array.jl:313
> in oderk_adapt at C:\Users\user\.julia\v0.4\ODE\src\runge_kutta.jl:279
> in oderk_adapt at C:\Users\user\.julia\v0.4\ODE\src\runge_kutta.jl:220
> in ode23 at C:\Users\user\.julia\v0.4\ODE\src\runge_kutta.jl:210 
>
> The example of 2nd order ODE at the GitHub works fine.
>
> How should I edit the code?
>


[julia-users] Re: Tips for optimizing this short code snippet

2016-06-18 Thread Gabriel Gellner
What integration library are you using with Cython/Fortran? Is it using the 
same algorithm as quadgk? Your code seems so simple I imagine this is just 
comparing the quadrature implementations :)
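If the profile really does show quadgk dominating, one cheap experiment is to loosen the tolerance on the inner integral, since it is re-evaluated at every outer quadrature node. A sketch (assumes Julia 1.x, where quadgk lives in the QuadGK package and the keyword is `rtol`; in the 0.4 era of this thread it was in Base with `reltol`; the tolerance values are illustrative):

```julia
using QuadGK

const a = 1.0

# inner integral, with an adjustable relative tolerance
g(y; rtol = 1e-8) =
    quadgk(x -> x^2 * sqrt(x^2 * y^2 + a) / (exp(sqrt(x^2 + y^2)) + a),
           0, Inf; rtol = rtol)[1]

# outer integral; each outer node triggers a full inner quadrature
f(x; rtol = 1e-8) =
    quadgk(y -> 1 / g(y; rtol = rtol), 0, x; rtol = rtol)[1]

@time f(1.0)                 # default tolerances
@time f(1.0; rtol = 1e-4)    # looser: often much faster, at reduced accuracy
```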

On Saturday, June 18, 2016 at 5:53:57 AM UTC-7, Marius Millea wrote:
>
> Hi all, I'm sort of just starting out with Julia. I'm trying to get a gauge 
> of how fast I can make some code of which I have Cython and Fortran 
> versions, to see if I should continue down the path of converting more of my 
> stuff to Julia (which in general I'd very much like to, if I can get it 
> fast enough). I thought maybe I'd post the code in question here to see if 
> I could get any tips. I've stripped down the original thing to what I think 
> are the important parts, a nested integration with an inner function 
> closure and some global variables. 
>
> module test
>
> const a = 1.
>
> function f(x)
> quadgk(y->1/g(y),0,x)[1]  # <=== outer integral
> end
>
> function g(y)
> integrand(x) = x^2*sqrt(x^2*y^2+a)/(exp(sqrt(x^2+y^2))+a)
> quadgk(integrand,0,Inf)[1]   # <=== inner integral
> end
>
> end
>
>
> > @timeit test.f(1.)
> 100 loops, best of 3: 3.10 ms per loop
>
>
>
>
> Does anyone have any tips that squeezes a little more out of this code? I 
> have run ProfileView on it, and although I'm not sure I fully understand 
> how to read its output, I think it's saying the majority of runtime is 
> spent in quadgk itself. So perhaps I should look into using a different 
> integration library? 
>
> Thanks for any help. 
>
>

Re: [julia-users] Re: "None" in Python and "nothing" in Julia for feature matching function

2016-06-17 Thread Gabriel Gellner
Hmmm. Must be something different causing the problem.

When I check:
```jl
using PyCall
PyObject(nothing)
```
I get the expected `PyObject None` back.

On Friday, June 17, 2016 at 1:06:49 PM UTC-7, Stefan Karpinski wrote:
>
> Converting between Python/None <=> Julia/nothing seems like something that 
> PyCall should maybe do since they are pretty much equivalent. Maybe open an 
> issue here: https://github.com/stevengj/PyCall.jl?
>
> On Fri, Jun 17, 2016 at 3:58 PM, Gabriel Gellner <gabriel...@gmail.com 
> > wrote:
>
>> Have you tried passing `:none` in the argument list. I find that PyCall 
>> does the correct conversion on the symbol.
>>
>>
>> On Friday, June 17, 2016 at 11:42:32 AM UTC-7, I Ce wrote:
>>>
>>> I am using PyCalland @pyimport cv2 to implement an OpenCV 
>>> feature-matching program in Julia.
>>>
>>>
>>> I have an example of the code I want to use in *Python* (see *Brute-Force 
>>> Matching with SIFT Descriptors and Ratio Test* in this link: 
>>> http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.html
>>>  for 
>>> the full Python code.)
>>>
>>>
>>> Everything up to the point of drawMatchesKnn() works fine, but I have 
>>> issues with the outImg argument when converting to Julia.
>>>
>>>
>>> Documentation for drawMatchesKnn() is pasted below: 
>>> (and can also be found here: 
>>>
>>> http://docs.opencv.org/3.0-beta/modules/features2d/doc/drawing_function_of_keypoints_and_matches.html#drawmatches
>>>
>>>
>>> Python: cv2.drawMatchesKnn(img1, keypoints1, img2, keypoints2, 
>>> matches1to2[, outImg[, matchColor[, singlePointColor[, matchesMask[, 
>>> flags]) → outImg
>>>
>>> Parameter description:
>>>
>>>- img1 – First source image.
>>>- keypoints1 – Keypoints from the first source image.
>>>- img2 – Second source image.
>>>- keypoints2 – Keypoints from the second source image.
>>>- matches1to2 – Matches from the first image to the second one, 
>>>which means that keypoints1[i] has a corresponding point in 
>>>keypoints2[matches[i]] .
>>>- outImg – Output image. Its content depends on the flags value 
>>>defining what is drawn in the output image. See possible flags bit 
>>> values 
>>>below.
>>>- matchColor – Color of matches (lines and connected keypoints). If 
>>>matchColor==Scalar::all(-1) , the color is generated randomly.
>>>- singlePointColor – Color of single keypoints (circles), which 
>>>means that keypoints do not have the matches. If 
>>>singlePointColor==Scalar::all(-1) , the color is generated randomly.
>>>- matchesMask – Mask determining which matches are drawn. If the 
>>>mask is empty, all matches are drawn.
>>>- flags – Flags setting drawing features. Possible flags bit values 
>>>are defined by DrawMatchesFlags.
>>>
>>> As you can see from the sample program, the drawMatchesKnn() line in 
>>> Python would look like this:
>>> img3 = cv2.drawMatchesKnn(train,kp1,query,kp2,good,None,flags=2) (the 
>>> key argument is argument 6, specified as "None")
>>>
>>>
>>> I'm having problems because I don't really know what an equivalent, 
>>> working example in Julia would be.
>>>
>>>
>>>
>>> I tried this:
>>> img3 = cv2.drawMatchesKnn(train,kp1,query,kp2,good)
>>>
>>> And got this error: (so arg6 is required)
>>>
>>> LoadError: PyError (:PyObject_Call) 
>>> TypeError("Required argument 'outImg' (pos 6) not found",)
>>>
>>>
>>>
>>> This: (passing the scalar value 0, which worked for the method 
>>> drawKeyPoints() in another program)
>>> img3 = cv2.drawMatchesKnn(train,kp1,query,kp2,good,0,flags=2)
>>>
>>> and got this error:
>>>
>>> LoadError: PyError (:PyObject_Call) 
>>> SystemError('NULL result without error in PyObject_Call',)
>>>
>>>
>>>
>>> and this:
>>> img3 = cv2.drawMatchesKnn(train,kp1,query,kp2,good,nothing,flags=2)
>>>
>>> and got this error:
>>>
>>> LoadError: PyError (:PyObject_Call) 
>>> SystemError('NULL result without error in PyObject_Call',)
>>>
>>>
>>> Seems tricky to me because None in Python and nothing in Julia do not 
>>> appear to behave the same way.
>>>
>>>
>>> Anything else I could try? What could the problem be, and how can I fix 
>>> it?
>>>
>>>
>>> Thanks for reading!
>>> Any help is much appreciated.
>>>
>>
>

[julia-users] Re: "None" in Python and "nothing" in Julia for feature matching function

2016-06-17 Thread Gabriel Gellner
Have you tried passing `:none` in the argument list. I find that PyCall 
does the correct conversion on the symbol.

On Friday, June 17, 2016 at 11:42:32 AM UTC-7, I Ce wrote:
>
> I am using PyCalland @pyimport cv2 to implement an OpenCV 
> feature-matching program in Julia.
>
>
> I have an example of the code I want to use in *Python* (see *Brute-Force 
> Matching with SIFT Descriptors and Ratio Test* in this link: 
> http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.html
>  for 
> the full Python code.)
>
>
> Everything up to the point of drawMatchesKnn() works fine, but I have 
> issues with the outImg argument when converting to Julia.
>
>
> Documentation for drawMatchesKnn() is pasted below: 
> (and can also be found here: 
>
> http://docs.opencv.org/3.0-beta/modules/features2d/doc/drawing_function_of_keypoints_and_matches.html#drawmatches
>
>
> Python: cv2.drawMatchesKnn(img1, keypoints1, img2, keypoints2, 
> matches1to2[, outImg[, matchColor[, singlePointColor[, matchesMask[, 
> flags]) → outImg
>
> Parameter description:
>
>- img1 – First source image.
>- keypoints1 – Keypoints from the first source image.
>- img2 – Second source image.
>- keypoints2 – Keypoints from the second source image.
>- matches1to2 – Matches from the first image to the second one, which 
>means that keypoints1[i] has a corresponding point in 
>keypoints2[matches[i]] .
>- outImg – Output image. Its content depends on the flags value 
>defining what is drawn in the output image. See possible flags bit values 
>below.
>- matchColor – Color of matches (lines and connected keypoints). If 
>matchColor==Scalar::all(-1) , the color is generated randomly.
>- singlePointColor – Color of single keypoints (circles), which means 
>that keypoints do not have the matches. If 
>singlePointColor==Scalar::all(-1) , the color is generated randomly.
>- matchesMask – Mask determining which matches are drawn. If the mask 
>is empty, all matches are drawn.
>- flags – Flags setting drawing features. Possible flags bit values 
>are defined by DrawMatchesFlags.
>
> As you can see from the sample program, the drawMatchesKnn() line in 
> Python would look like this:
> img3 = cv2.drawMatchesKnn(train,kp1,query,kp2,good,None,flags=2) (the key 
> argument is argument 6, specified as "None")
>
>
> I'm having problems because I don't really know what an equivalent, 
> working example in Julia would be.
>
>
>
> I tried this:
> img3 = cv2.drawMatchesKnn(train,kp1,query,kp2,good)
>
> And got this error: (so arg6 is required)
>
> LoadError: PyError (:PyObject_Call) 
> TypeError("Required argument 'outImg' (pos 6) not found",)
>
>
>
> This: (passing the scalar value 0, which worked for the method 
> drawKeyPoints() in another program)
> img3 = cv2.drawMatchesKnn(train,kp1,query,kp2,good,0,flags=2)
>
> and got this error:
>
> LoadError: PyError (:PyObject_Call) 
> SystemError('NULL result without error in PyObject_Call',)
>
>
>
> and this:
> img3 = cv2.drawMatchesKnn(train,kp1,query,kp2,good,nothing,flags=2)
>
> and got this error:
>
> LoadError: PyError (:PyObject_Call) 
> SystemError('NULL result without error in PyObject_Call',)
>
>
> Seems tricky to me because None in Python and nothing in Julia do not 
> appear to behave the same way.
>
>
> Anything else I could try? What could the problem be, and how can I fix it?
>
>
> Thanks for reading!
> Any help is much appreciated.
>


[julia-users] Re: translation python/ijulia

2016-06-15 Thread Gabriel Gellner
I imagine you are using something like sympy that is doing a Mathematica-like 
plot over omega (hence the var statements).
In which case you need to change the line

x = linspace(0, 2pi)

to

omega = linspace(0, 2*w) # which seems to be the true range from the python 
code, not 2pi

and then you need to change your 

y = ...

bit to use elementwise multiplication (.*) and elementwise division (./) to 
get the two arrays needed for the plot (omega, y) to be the same length.
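Putting those two fixes together (a sketch in current Julia syntax; in the 0.4-era code of this thread `range` would be `linspace`, and the `@.` broadcast macro did not exist yet):

```julia
R = 100.0; L = 10.0; C = 10e-6
w = 1 / sqrt(L * C)                    # resonant angular frequency, 100 rad/s

omega = range(0, 2w, length = 200)     # sweep 0..2w, matching the Python plot
y = @. abs(1 / (1 + im * omega * R * C - omega^2 * L * C))  # elementwise gain
```

Now `omega` and `y` have the same length, so `plt.plot(omega, y)` no longer raises the dimension-mismatch error.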

On Tuesday, June 14, 2016 at 11:52:30 PM UTC-7, Henri Girard wrote:

> Hi,
>
> I would like to transform this python working example into a ijulia 
> program ,
> Any help
> HG
>
> var("w")
> u=1/pi*360;s=1/(pi*360)
> R=100;L=10;C=10E-6
> w=1/(sqrt(L*C));var('omega')
>
> p=plot(abs(1/(1+i*omega*R*C-omega*omega*L*C)),omega,0,2*w,figsize=4,frame=True,
> fontsize=8,axes=False,color="green") ;show(p)
> print"Pulsation angulaire = ",round(w)," Rad/s"
>
> I tried this but got errors :
> R=100;L=10;C=10E-6;w=1/(sqrt(L*C));omega=0;
> x=linspace(0,2pi)
> y=(abs(1/(1+im*omega*R*C-omega*omega*L*C)));
> fig = figure("Angle")
> plt.plot(x, y);
> plt.ylabel("z");
> plt.xlabel("w");
> ax = plt.gca() # get current axes
> ax[:set_xlim]((0,2*w));
> ax[:set_ylim]((0,10));
> plt.grid("on")
> plt.title("Pulsation");
>
>
> LoadError: PyError (:PyObject_Call) 
> ValueError(u'x and y must have same first dimension',)
>   File 
> "/home/pi/.julia/v0.4/Conda/deps/usr/lib/python2.7/site-packages/matplotlib/pyplot.py",
>  line 3154, in plot
> ret = ax.plot(*args, **kwargs)
>   File 
> "/home/pi/.julia/v0.4/Conda/deps/usr/lib/python2.7/site-packages/matplotlib/__init__.py",
>  line 1812, in inner
> return func(ax, *args, **kwargs)
>   File 
> "/home/pi/.julia/v0.4/Conda/deps/usr/lib/python2.7/site-packages/matplotlib/axes/_axes.py",
>  line 1424, in plot
> for line in self._get_lines(*args, **kwargs):
>   File 
> "/home/pi/.julia/v0.4/Conda/deps/usr/lib/python2.7/site-packages/matplotlib/axes/_base.py",
>  line 386, in _grab_next_args
> for seg in self._plot_args(remaining, kwargs):
>   File 
> "/home/pi/.julia/v0.4/Conda/deps/usr/lib/python2.7/site-packages/matplotlib/axes/_base.py",
>  line 364, in _plot_args
> x, y = self._xy_from_xy(x, y)
>   File 
> "/home/pi/.julia/v0.4/Conda/deps/usr/lib/python2.7/site-packages/matplotlib/axes/_base.py",
>  line 223, in _xy_from_xy
> raise ValueError("x and y must have same first dimension")
>
> while loading In[38], in expression starting on line 2
>
>  [inlined code] from /home/pi/.julia/v0.4/PyCall/src/exception.jl:81
>  in pycall at /home/pi/.julia/v0.4/PyCall/src/PyCall.jl:502
>
>
>

Re: [julia-users] Differential Equations Package

2016-06-08 Thread Gabriel Gellner
Thank you so much for the feedback.

I just want to end with how much I love this community. I know this kind of 
"bike shedding" discussion can be annoying. Especially with all the work 
volunteers put in.

Also, I love your work, especially Mauro's :)

Anyway back to lurking.

Gabriel


Re: [julia-users] Differential Equations Package

2016-06-08 Thread Gabriel Gellner
I absolutely agree in general. I am terrified when Julia code gets too fancy 
:) I am largely a bear of little brain... (the ODE.jl choice to send back an 
array of arrays kind of breaks this feeling of simplicity ... having to 
learn to do hcat(sol...)' for the common case, versus letting this work for 
some high-dimensional vector field case that was discussed, feels like 
leaning on the side of complexity ;)

> 
> > sol = dsolve(RK54(func, y0), trange, ) 
>
> maybe better: 
>
> sol = dsolve(func, y0, trange, RK54() ) 
>
>
> This would more follow the way that Optim.jl has gone (sort of they seem 
to separate some of the general solver options, from the specific options. 
I haven't followed enough to know why this separation was desired).

The problem I see with this is that it overly relies on separating the func 
and array from the model/problem type. So again you lose the 
easy-to-understand pattern below (for larger problems it is also likely more 
efficient, as you don't have to reallocate the temp arrays, though the naive 
user need not understand this):

prob = RK45(func, y0)

sol = dsolve(prob, tspan, )

sol2 = dsolve(prob, tspan2, )

since you will likely need to know the dimension of y0 to allocate all the 
underlying solver workspace arrays, so RK45() cannot be 
entirely decoupled from the "model" definition.
 

> Well, either is fine.  I think it is important that the high-level 
> interface is free or mostly free from custom types.  A casual user 
> shouldn't need to learn about special types and be able to just use 
> arrays and functions.  (I find that with Julia's awesome type system it 
> is easy to overdo it on the types.) 
>
I guess the key difference I see is that the Matlab way of doing things 
encodes the "type" of the solver into the name of the function (ode45, 
ode23, etc.),
whereas if you have a wrapper type (or some function that returns such a 
type) then you just do

desolve(solver_type, ...)

which doesn't feel in any way more complex. The end user need not know 
anything about the type if they don't want to; they just need to know that if 
you want the ode45-like behavior you do

desolve(RK45(func, y0), tspan)

vs

ode45(func, y0, tspan)

hell if the length is an issue you could have

ode(RK45(func, y0), tspan)

for only an extra 3 characters ;) with the super nice behavior of not 
having an entirely different api for when you need finer control (you just 
need extra knowledge of the special types).

That is why I feel that following the Matlab way of doing things so closely 
(i.e. requiring a pure array + callback version) limits the API without 
making it simpler for the naive end user.

I guess I see the above way as being no different that using something like 
Distributions.jl, which I see as one of the nicest uses of types from an 
api perspective.
Getting to do

rand(disttype, options)

is super easy to understand versus having a billion specialty functions 
that take the raw options and dist parameters.

Just my thoughts as an end user. Those who are working on this will 
decide, but the distaste for types in this simple case feels overly 
pessimistic.
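A toy version of the API being argued for, runnable end to end (everything here is hypothetical: `RK45` is a stand-in problem type, `dsolve` a stand-in solver entry point, and the integrator body is a plain fixed-step Euler loop, not a real RK45 method):

```julia
struct RK45{F,Y}                   # hypothetical problem type: func + y0
    func::F
    y0::Y
end

function dsolve(prob::RK45, tspan; n = 1000)
    t0, t1 = tspan
    h = (t1 - t0) / n
    y = float(prob.y0)             # promote once, when the problem is set up
    t = t0
    for _ in 1:n
        y += h * prob.func(t, y)   # Euler step standing in for RK45
        t += h
    end
    return t, y
end

prob = RK45((t, y) -> (2 - y) / 5, 3)   # build the problem once...
t1, y1 = dsolve(prob, (0.0, 30.0))      # ...and reuse it across tspans
t2, y2 = dsolve(prob, (0.0, 60.0))
```

The point of the sketch: the workspace and promotion logic live with the problem type, so the naive user writes one call, and the two-line "save the problem, solve it twice" pattern falls out for free.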


Re: [julia-users] Differential Equations Package

2016-06-08 Thread Gabriel Gellner
So much great news in this thread! I am crazy happy that ODE.jl is not 
dead. As an interested outsider it seemed like the community got gridlocked 
on the API discussion. It is nice that this is not the case.

A quick question: why is the Matlab-style interface better for small 
problems? I don't really get that.

Suppose we have some kind of OdeProblem type, and then specific 
types for each solver that inherit from it, say a `RK54` (don't worry about 
the ugly names). 
Then wouldn't it be just as easy from an end user's perspective if you had 
something like:

sol = dsolve(RK54(func, y0), trange, )

as

sol = ode45(func, y0, trange, )

but with the above you get the ability to save the OdeProblem separately, 
have special versions that do in-place updating, etc. I don't really see why 
having the callback version, versus the callback-in-a-type version, needs to 
be drastically different from the user's perspective. Furthermore, if 
something like the above is used then if the user switches between the high 
level and low level interface there is far less of a pattern switch in the 
calling function. Rather you would just do it on separate lines and save 
the problem type.

I have read much of the old discussion on the ODE.jl GitHub issues, but it 
largely seemed that this was being looked at as a small- vs large-problem 
question and not really as an API-usability question.

Thanks for any insight. I look forward to seeing what is in the works with 
the GSOC project.

Gabriel


Re: [julia-users] Differential Equations Package

2016-06-07 Thread Gabriel Gellner
I think the name makes a tonne of sense given the scope, and fits in line 
with many standard packages: Calculus, Optim, Distributions, etc.
There is no reason that if a great Bifurcation suite grows it couldn't be 
part of DifferentialEquations (though that feels weird to me personally).

Also I for one feel we should all be stupid thankful that someone is 
pushing on this issue. I find diffeq solvers are one of the weakest areas 
of Julia currently. Graphs, Stats, and Optimization all feel at a level where 
they can comfortably replace general packages like Matlab for me (to a first 
approximation), but man, solving these kinds of equations in Julia feels 
barely passable. It is a herculean task to get this support in, and Chris 
seems to be doing it with crazy determination. He could call the package 
the OneTrueSolutionToSolvingEverything and I would support it!

It will kill me that it will be camelCase mind you ... Julia needs a PEP8 
linter bad ;)

On Tuesday, June 7, 2016 at 1:48:51 AM UTC-7, jonatha...@alumni.epfl.ch 
wrote:
>
> Your package is about computing numerical solutions of differential 
> equations. Differential equations are something a bit more general, for 
> example a package for bifurcation analysis could also want to call itself 
> "DifferentialEquations". I don't really have a better name... 
> NSolveDiffEqus ? That said I don't think you really need to rename it.
>


Re: [julia-users] Re: reductions in 0.5 and dropping dimensions

2016-05-25 Thread Gabriel Gellner
My mind has just been blown ... I didn't realize you could do element-wise 
operations on arrays of different shapes ... Do any other languages do this? 
I know that the dimension reduction was labelled APL-like ... how did APL 
deal with this kind of reduction operation? Off to the docs ... my mind ...
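Tim's normalization example, spelled out in current syntax (`sum(p, 1)` in this 0.5-era thread is `sum(p, dims = 1)` today; keeping the summed dimension is exactly what makes the broadcast line up):

```julia
p = [1.0 2.0; 3.0 4.0]

colsums = sum(p, dims = 1)         # 1×2 matrix: the summed dimension is kept...
pnorm = p ./ colsums               # ...so broadcasting lines up column-wise
sum(pnorm, dims = 1)               # each column of pnorm now sums to 1

v = dropdims(colsums, dims = 1)    # 2-element Vector, if you do want it dropped
```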

On Wednesday, May 25, 2016 at 7:03:03 PM UTC-7, Tim Holy wrote:
>
> Since you asked...from my perspective, the easiest argument against it is 
>
> pnormalized = p ./ sum(p, 1) 
>
> If you drop the summed dimension, that won't work anymore. One can write 
>
> pnormalized = p ./ reshape(sum(p, 1), 1, size(p, 2)) 
>
> but that's a little uglier than the converse, calling `squeeze` when you'd 
> rather get rid of the dimension. 
>
> That said, I recognize that your point has validity. I offer the example 
> just 
> to make sure it's known. 
>
> Best, 
> --Tim 
>
> On Wednesday, May 25, 2016 06:49:53 PM a. kramer wrote: 
> > That's a solution, yes, but my feeling is that the "default" slicing and 
> > reduction behavior should play nice with one another.  In 0.5 it seems 
> > dropping dimensions with A[1,:] is default, which I do prefer.  It seems 
> > natural for sum(A,1) to be equivalent to A[1,:] + A[2,:] + A[3,:] + ... 
> > 
> > In my workflow (primarily data analysis), I often mix these slicing and 
> > reduction operations, but it would be unusual for a situation to appear 
> > where I would want one behavior for slicing and a different one for 
> array 
> > reduction.  In situations where array dimensions correspond to, say, 
> > repeated observations, it is common to compare slices to means or maxima 
> > across dimensions.  However, I would be interested to hear arguments 
> > against. 
> > 
> > On Wednesday, May 25, 2016 at 9:05:04 PM UTC-4, Gabriel Gellner wrote: 
> > > Does it bother you to do the A[[1], :] to keep the old behavior? I 
> haven't 
> > > thought enough about the role of sum, mean etc. 
> > > 
> > > On Wednesday, May 25, 2016 at 5:11:20 PM UTC-7, a. kramer wrote: 
> > >> Apologies in advance if this is something that has been discussed at 
> > >> length already, I wasn't able to find it. 
> > >> 
> > >> In Julia 0.5, if A is a 5x5 matrix, the behavior of A[1,:] will be 
> > >> changed to return a 5-element array instead of a 1x5 array.  However, 
> at 
> > >> least in the current build, sum(A,1) still gives a 1x5 array as it 
> does 
> > >> in 
> > >> earlier versions, and similarly for other array reductions like mean, 
> > >> maximum, etc. 
> > >> 
> > >> I understand that these functions and slicing are fundamentally 
> different 
> > >> things, but I find this a little counter intuitive (at the very least 
> > >> different from numpy's behavior).  I often find myself interested in 
> > >> quantities such as A[1,:] ./ mean(A,1) (the first row of a matrix 
> > >> normalized by the average of each column's entries).  In 0.5, this 
> gives 
> > >> something quite different from what I'm expecting (in fact it gives a 
> 5x5 
> > >> matrix). 
> > >> 
> > >> So my questions are: Is there a discussion of the rationale behind 
> doing 
> > >> things this way?  Is this something that may be changed in the 
> future? 
> > >> If 
> > >> not, is there an alternative to the standard sum, mean, etc. 
> functions 
> > >> that 
> > >> is recommended for this?  Just a liberal use of squeeze()? 
>
>
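[Editor's note] To make the shape question in this thread concrete, here is a small pure-Python sketch (no NumPy or Julia semantics involved) of the quantity being discussed -- a row of a matrix normalized by each column's mean. The helper `col_means` and the sample matrix are my own illustration, not from the thread:

```python
# Pure-Python sketch of "the first row normalized by each column's mean".
# Shapes are the crux of the thread: the column means must line up
# entry-by-entry with the row being normalized.

def col_means(A):
    """Mean of each column of a matrix given as a list of rows."""
    nrows = len(A)
    return [sum(row[j] for row in A) / nrows for j in range(len(A[0]))]

A = [[1.0, 2.0, 3.0],
     [3.0, 4.0, 5.0],
     [5.0, 6.0, 7.0]]

means = col_means(A)                       # [3.0, 4.0, 5.0]
normalized = [x / m for x, m in zip(A[0], means)]
print(normalized)                          # [0.333..., 0.5, 0.6]
```

The Julia/NumPy debate is about whether the reduction keeps a dimension so this zip-style pairing happens automatically under broadcasting; the list version sidesteps that by pairing explicitly.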

[julia-users] Re: reductions in 0.5 and dropping dimensions

2016-05-25 Thread Gabriel Gellner
Does it bother you to do the A[[1], :] to keep the old behavior? I haven't 
thought enough about the role of sum, mean etc.

On Wednesday, May 25, 2016 at 5:11:20 PM UTC-7, a. kramer wrote:
>
> Apologies in advance if this is something that has been discussed at 
> length already, I wasn't able to find it.
>
> In Julia 0.5, if A is a 5x5 matrix, the behavior of A[1,:] will be changed 
> to return a 5-element array instead of a 1x5 array.  However, at least in 
> the current build, sum(A,1) still gives a 1x5 array as it does in earlier 
> versions, and similarly for other array reductions like mean, maximum, 
> etc.  
>
> I understand that these functions and slicing are fundamentally different 
> things, but I find this a little counter intuitive (at the very least 
> different from numpy's behavior).  I often find myself interested in 
> quantities such as A[1,:] ./ mean(A,1) (the first row of a matrix 
> normalized by the average of each column's entries).  In 0.5, this gives 
> something quite different from what I'm expecting (in fact it gives a 5x5 
> matrix).
>
> So my questions are: Is there a discussion of the rationale behind doing 
> things this way?  Is this something that may be changed in the future?  If 
> not, is there an alternative to the standard sum, mean, etc. functions that 
> is recommended for this?  Just a liberal use of squeeze()?
>


[julia-users] Re: ccall closures and the wizardry of S. G. Johnson

2016-05-25 Thread Gabriel Gellner
Perfect. I was hoping that this would be the solution. I am now reading the 
sections on getting this behavior more carefully. Thanks for the 
clarification.

On Tuesday, May 24, 2016 at 2:04:22 PM UTC-7, Steven G. Johnson wrote:
>
>
>
> On Tuesday, May 24, 2016 at 3:01:12 PM UTC-4, Gabriel Gellner wrote:
>>
>> Working thru the incredible guide:
>> http://julialang.org/blog/2013/05/callback
>>
>> I am stuck on understanding if there are any work arounds for being able 
>> to use julia anonymous functions in `ccall` callback functions. 
>> Specifically I am interested in the common case of wanting to use function 
>> parameters without having to do the C convention of passing around a void 
>> *param pointer as the last argument of the callback.
>> (ie instead of having func(x, y, param) I could do (x, y) -> func(x, y, 
>> a, b) to set the values for the parameters of a julia callback function 
>> that is passed to a ccall routine)
>>
>
> As explained in the guide (see the qsort_r example), you only need for the 
> C API to have a void* pointer that you pass around.On the Julia side, 
> this can be completely hidden from the caller.  (e.g. in the qsort_r 
> example, the Julia lessthan function passed to qsort! does not need a 
> "param" argument.)
>
> (This is how NLopt.jl, Cubature.jl, etcetera work: the C API uses void* 
> pointers, but this is hidden from the Julia callers exactly as described in 
> the blog post.)
>
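[Editor's note] The void*-free callback pattern Steven describes maps onto any language with closures. Here is a plain-Python sketch of the idea only; `func`, `make_callback`, and `library_routine` are illustrative stand-ins, not the NLopt/Cubature or ccall API:

```python
# A library expects a callback f(x, y); we have func(x, y, a, b) with extra
# parameters. A closure captures a and b, so the caller-facing callback
# needs no "void *param"-style trailing argument.

def func(x, y, a, b):
    return a * x + b * y

def make_callback(a, b):
    # The returned function closes over a and b.
    return lambda x, y: func(x, y, a, b)

def library_routine(callback):
    # Stand-in for a C routine that only knows how to call f(x, y).
    return callback(2.0, 3.0)

cb = make_callback(10.0, 1.0)
print(library_routine(cb))  # 10*2 + 1*3 = 23.0
```

In the qsort_r example the same trick is done on the Julia side: the void* slot exists in the C API, but the wrapper hides it so the user-facing comparison function carries no param argument.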


Re: [julia-users] I feel that on the syntax level, Julia sacrificed too much elegancy trying to be compatible with textbook math notations

2016-05-24 Thread Gabriel Gellner
I don't think those discussion points will make Siyi happy ;) We are 
getting even more required use of dotted "broadcasting", not less as he 
wants.
I much, much prefer being explicit with the dots, but like all syntax 
discussions this one seems to get heated :)

On Tuesday, May 24, 2016 at 2:31:29 PM UTC-7, Isaiah wrote:
>
> This has been a point of (extremely) lengthy discussion. See the following 
> issues for recent improvements and plans, some of which will be in the next 
> release:
>
> https://github.com/JuliaLang/julia/pull/15032
> https://github.com/JuliaLang/julia/issues/16285
>
> On Tue, May 24, 2016 at 5:23 PM, Siyi Deng  > wrote:
>
>>
>> numpy arrays operate element-wise by default, and have broadcasting 
>> (binary singleton expansion) behaviors by default. in julia you have to do 
>> (.> , .<, .==) all the time.
>>
>>
>

[julia-users] Re: linreg strangely picky about inputs

2016-05-23 Thread Gabriel Gellner
Okay, so I have completely hand-coded the cov/var calculation to avoid any 
overhead, and now I get around the 20x speedup you mentioned. I really 
don't see any significant benefit to returning a tuple: allocating the 
2-element array has an insignificant overhead, even for Float64 vectors of 
size 10 inside hot loops. I really don't like the ugly output, and the lack 
of arithmetic definitions that returning a tuple forces, for no clear 
benefit in my mind. If a tuple is really wanted I would much rather have a 
linreg! method, so that the default behavior is not affected by this 
optimization. I will of course change this if the will of the people is to 
have a tuple, but I am -1.

Another quick question: do I make a new issue/pull request for this 
behavior, as it is no longer the trivial change of my current pull request? 
Also, should this function be moved out of linalg.jl, since it no longer 
really uses the linear algebra routines?

The code I have come up with is:

function linreg5{S <: Number, T <: Number}(x::AbstractArray{S},
                                           y::AbstractArray{T})
    n = length(x)
    @assert n == length(y)
    mx = mean(x) # needed at every step, so calculate it up front
    tot_dx = 0.0
    tot_y = 0.0
    tot_xy = 0.0
    for i in 1:n
        tot_dx += (x[i] - mx)^2
        tot_y += y[i]
        tot_xy += x[i]*y[i]
    end
    b1 = (tot_xy - mx*tot_y)/tot_dx # a 1/n cancels in the top and bottom
    b0 = tot_y/n - b1*mx
    return [b0, b1]
end

Any comments on what could be made better are welcome.

Thanks,
Gabriel
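[Editor's note] For readers who want to check the algebra, the same one-pass least-squares calculation can be transcribed into plain Python. This is a sketch only -- my transliteration of the Julia draft above, not Base's linreg:

```python
def linreg(x, y):
    """Least-squares fit y ~ b0 + b1*x, mirroring the one-pass Julia sketch."""
    n = len(x)
    assert n == len(y)
    mx = sum(x) / n
    tot_dx = 0.0   # running sum of (x_i - mx)^2
    tot_y = 0.0    # running sum of y_i
    tot_xy = 0.0   # running sum of x_i * y_i
    for xi, yi in zip(x, y):
        tot_dx += (xi - mx) ** 2
        tot_y += yi
        tot_xy += xi * yi
    b1 = (tot_xy - mx * tot_y) / tot_dx  # the 1/n cancels top and bottom
    b0 = tot_y / n - b1 * mx
    return b0, b1

# An exact line y = 2 + 3x should be recovered.
b0, b1 = linreg([0.0, 1.0, 2.0, 3.0], [2.0, 5.0, 8.0, 11.0])
print(b0, b1)  # 2.0 3.0
```

The identity used is that sum(x*y) - mx*sum(y) equals the covariance sum, so a single pass over the data (after computing the mean) suffices, which is where the speedup over the matrix-division formulation comes from.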

On Monday, May 23, 2016 at 5:58:37 AM UTC-7, Andreas Noack wrote:
>
> Almost, but return a tuple instead of an array to avoid array allocation. 
> Tight now, cov allocates temporary arrays for the demeaned vectors but that 
> should probably be changed later anyway.
>
> On Monday, May 23, 2016 at 2:50:34 AM UTC-4, Gabriel Gellner wrote:
>>
>> So I am not able to get such a dramatic speed up. Do you mean something 
>> beyond:
>>
>> function linreg2{S <: Number, T <: Number}(x::AbstractArray{S}, 
>> y::AbstractArray{T})
>> mx = mean(x)
>> my = mean(y)
>> b1 = Base.covm(x, mx, y, my)/varm(x, mx)
>> b0 = my - b1*mx
>> return [b0, b1]
>> end
>>
>> Which I find speeds up around 3x, or do you mean writing a custom cov 
>> function that is smarter about memory? (I am returning an array as I like 
>> to be able to do vector math on the coefficients ... but if I return a 
>> tuple it isn't much faster for me)
>>
>> On Sunday, May 22, 2016 at 8:37:43 PM UTC-7, Andreas Noack wrote:
>>>
>>> I don't think that linreg has received much attention over the years. 
>>> Most often it is almost as simple to use \. It you take a look at 
>>> linreg then I'd suggest that you try to write in terms of cov and var and 
>>> return a tuple instead of an Array. That will speed up the computation 
>>> already now and with an unallocating cov, I see 20x speed up over linreg 
>>> for n=1 and Float64 element on 0.4.
>>>
>>> On Saturday, May 21, 2016 at 2:33:20 PM UTC-4, Gabriel Gellner wrote:
>>>>
>>>>
>>>>
>>>> I think it's an error.  The method definition should probably just be:
>>>>>
>>>>> linreg{T<:Number,S<:Number}(X::AbstractVector{T}, 
>>>>> y::AbstractVector{S}) = [ones(T, size(X,1)) X] \ y
>>>>>
>>>>> which will allow it to work for X and y of different types.
>>>>>
>>>>> Is using the more specific typing (<: Number) the current best 
>>>> practices? I notice a lot of the methods in linalg/generics.jl just check 
>>>> for AbstractVectors etc without requiring numeric types even though that 
>>>> would be the more correct check.
>>>>  
>>>>
>>>>> A PR (pull request) with a patch and a test case would be most welcome.
>>>>>
>>>> On it! Taking me a bit of sweat to figure out the build process and 
>>>> github stuff but once I have that all sorted a PR will be submitted. 
>>>> Thanks 
>>>> for the quick response. 
>>>>
>>>

[julia-users] Re: linreg strangely picky about inputs

2016-05-23 Thread Gabriel Gellner
So I am not able to get such a dramatic speed up. Do you mean something 
beyond:

function linreg2{S <: Number, T <: Number}(x::AbstractArray{S},
                                           y::AbstractArray{T})
    mx = mean(x)
    my = mean(y)
    b1 = Base.covm(x, mx, y, my)/varm(x, mx)
    b0 = my - b1*mx
    return [b0, b1]
end

Which I find speeds up around 3x, or do you mean writing a custom cov 
function that is smarter about memory? (I am returning an array as I like 
to be able to do vector math on the coefficients ... but if I return a 
tuple it isn't much faster for me)

On Sunday, May 22, 2016 at 8:37:43 PM UTC-7, Andreas Noack wrote:
>
> I don't think that linreg has received much attention over the years. Most 
> often it is almost as simple to use \. It you take a look at linreg then 
> I'd suggest that you try to write in terms of cov and var and return a 
> tuple instead of an Array. That will speed up the computation already now 
> and with an unallocating cov, I see 20x speed up over linreg for n=1 
> and Float64 element on 0.4.
>
> On Saturday, May 21, 2016 at 2:33:20 PM UTC-4, Gabriel Gellner wrote:
>>
>>
>>
>> I think it's an error.  The method definition should probably just be:
>>>
>>> linreg{T<:Number,S<:Number}(X::AbstractVector{T}, y::AbstractVector{S}) 
>>> = [ones(T, size(X,1)) X] \ y
>>>
>>> which will allow it to work for X and y of different types.
>>>
>>> Is using the more specific typing (<: Number) the current best 
>> practices? I notice a lot of the methods in linalg/generics.jl just check 
>> for AbstractVectors etc without requiring numeric types even though that 
>> would be the more correct check.
>>  
>>
>>> A PR (pull request) with a patch and a test case would be most welcome.
>>>
>> On it! Taking me a bit of sweat to figure out the build process and 
>> github stuff but once I have that all sorted a PR will be submitted. Thanks 
>> for the quick response. 
>>
>

[julia-users] Re: linreg strangely picky about inputs

2016-05-22 Thread Gabriel Gellner
On Sunday, May 22, 2016 at 8:37:43 PM UTC-7, Andreas Noack wrote:
>
> I don't think that linreg has received much attention over the years. Most 
> often it is almost as simple to use \. It you take a look at linreg then 
> I'd suggest that you try to write in terms of cov and var and return a 
> tuple instead of an Array. That will speed up the computation already now 
> and with an unallocating cov, I see 20x speed up over linreg for n=1 
> and Float64 element on 0.4.
>
I have made a PR that just does the trivial change outlined with some 
extra tests. But I will look into what you describe. Really, the only 
benefit I see of the current function is that it saves the [ones(x) x] call 
;) (as well as making the intention of the code more transparent).

Gabriel

 


[julia-users] Re: linreg strangely picky about inputs

2016-05-22 Thread Gabriel Gellner
So I have a PR
https://github.com/JuliaLang/julia/pull/16523

I get an error on the AppVeyor build which I don't understand. Any tips or 
missed documentation would be greatly appreciated! My GitHub kung-fu is 
fledgling at best.

On Saturday, May 21, 2016 at 11:33:20 AM UTC-7, Gabriel Gellner wrote:
>
>
>
> I think it's an error.  The method definition should probably just be:
>>
>> linreg{T<:Number,S<:Number}(X::AbstractVector{T}, y::AbstractVector{S}) = 
>> [ones(T, size(X,1)) X] \ y
>>
>> which will allow it to work for X and y of different types.
>>
>> Is using the more specific typing (<: Number) the current best practices? 
> I notice a lot of the methods in linalg/generics.jl just check for 
> AbstractVectors etc without requiring numeric types even though that would 
> be the more correct check.
>  
>
>> A PR (pull request) with a patch and a test case would be most welcome.
>>
> On it! Taking me a bit of sweat to figure out the build process and github 
> stuff but once I have that all sorted a PR will be submitted. Thanks for 
> the quick response. 
>


[julia-users] Re: linreg strangely picky about inputs

2016-05-21 Thread Gabriel Gellner


I think it's an error.  The method definition should probably just be:
>
> linreg{T<:Number,S<:Number}(X::AbstractVector{T}, y::AbstractVector{S}) = 
> [ones(T, size(X,1)) X] \ y
>
> which will allow it to work for X and y of different types.
>
Is using the more specific typing (<: Number) the current best practice? 
I notice a lot of the methods in linalg/generics.jl just check for 
AbstractVectors etc. without requiring numeric types, even though that 
would be the more correct check.
 

> A PR (pull request) with a patch and a test case would be most welcome.
>
On it! It is taking me a bit of sweat to figure out the build process and 
the GitHub workflow, but once I have that all sorted a PR will be 
submitted. Thanks for the quick response. 


[julia-users] linreg strangely picky about inputs

2016-05-20 Thread Gabriel Gellner
Is it intentional that the builtin linreg is so picky about its inputs? It 
seems to me that the code is a one-liner:

linreg{T<:Number}(X::AbstractVector{T}, y::AbstractVector{T}) =
    [ones(T, size(X,1)) X] \ y

but forcing the types to be the same for both seems strange to me, as it 
throws errors for UnitRanges, LinRanges, etc.

Even the tests seem to do contortions to get around this, using calls like 
x = [float(1:10);]

would there be any downside to just having

linreg(X::AbstractVector, y::AbstractVector) = [ones(size(X,1)) X] \ y

which would simply let the `\` operator throw the error if x and y weren't 
compatible.

Or is there a way to say that they are both abstract arrays of numbers, 
but not necessarily of the same element type?

I could try to figure out how to implement this one-line change, if the 
current behavior is not the desired one, and learn how to do a pull 
request.

If it is a design choice, can someone explain why it isn't just a pain in 
the ass?

Thanks so much.


[julia-users] Re: Julia is a great idea, but it disqualifies itself for a big portion of its potential consumers

2016-04-05 Thread Gabriel Gellner
lol,
thanks for the Python 3 update! I need to get using that one of these 
days ...

Yeah I didn't see your comment as very critical, I just wanted to say we 
can all coexist in general :)

I find the holy wars on basic stuff like this always a bit mind-blowing. 
Julia continually impresses me with the care with which syntax and 
semantics are added or changed in the language. My own complaints about 
syntax when I was first starting were carefully addressed by some of the 
core devs, when maybe a studded paddle would have been more appropriate...

On Tuesday, April 5, 2016 at 7:01:00 AM UTC-7, DNF wrote:
>
>
> On Tuesday, April 5, 2016 at 3:36:09 PM UTC+2, Gabriel Gellner wrote:
>>
>> you can do the python example as:
>>
>> a[[1, 4] + range(7, 17, 2)]
>> (ignoring the issues that this is not the same range as julia since 
>> python uses 0-based indices ...)
>>
>
> Thanks. I'm getting an error, though:
>
> TypeError: can only concatenate list (not "range") to list
>
> I assume you are using Python 2.7, while I'm at 3.5. But still, this 
> helps. It seems I can do:
>
> a[[1, 4] + list(range(7, 17, 2))]
>
> which is a real improvement.
>
> As to the overall topic I don't think it is fair to have to poo-poo python 
>> to also feel that julia does it well. I really like both. 
>>
>
> I really don't like poo-pooing of python, but I think it's a response to 
> some pretty unreasonable criticism from the OP. In my newbie opinion, array 
> indexing is an area where Julia shines, much more so than NumPy.
>


Re: [julia-users] Re: Julia is a great idea, but it disqualifies itself for a big portion of its potential consumers

2016-04-05 Thread Gabriel Gellner
Sweetness. Thanks for the clarification!

On Tuesday, April 5, 2016 at 7:42:02 AM UTC-7, Cristóvão Duarte Sousa wrote:
>
>
> For the julia example isn't matlab like concatenation using ';' being 
>> removed from julia?
>>
>
> No. Just the concatenation using ','
> (AFAIK)
>  
>


[julia-users] Re: Julia is a great idea, but it disqualifies itself for a big portion of its potential consumers

2016-04-05 Thread Gabriel Gellner
you can do the python example as:

a[[1, 4] + range(7, 17, 2)]
(ignoring the issues that this is not the same range as julia since python 
uses 0-based indices ...)

you don't need to index with an ndarray, and that way you can get the nice 
use of the + operator for concatenate.

Lacking the : for range does make some code more verbose, but you get used 
to it.

For the Julia example, isn't MATLAB-like concatenation using ';' being 
removed from Julia?

As to the overall topic, I don't think it is fair to have to poo-poo 
Python to also feel that Julia does it well. I really like both. Arguing 
about 0- vs 1-based indexing is like emacs vs vi ... it leads to nothing 
but madness ...

R, MATLAB, Julia, Fortran, and Mathematica all use 1-based indexing, and 
the world hasn't ended.

Sometimes 0-based is nice, but you get used to not having it, just like 
you get used to having it.

As for truncating integer division ... I do hate that in Python, and 
always do "from __future__ import division" ;)

On Monday, April 4, 2016 at 11:55:40 PM UTC-7, DNF wrote:
>
> Typo, I meant to type:
>
> Python 3.5
> a[i*len(a)//n:(i+1)*len(a)//n]
>
> Julia:
> a[1+i*end÷n:(i+1)end÷n]
>
> I'm just learning Python, and must say I find indexing in Python to be 
> very awkward compared to Julia or Matlab. Do you have any suggestion for 
> how I should do this in Python?
> a[[1; 4; 7:2:15]]
> So far I've got:
> a[np.concatenate(([1,4], np.arange(7,17,2)))]
>
>
>
>
> On Tuesday, April 5, 2016 at 8:46:57 AM UTC+2, DNF wrote:
>>
>>
>> On Saturday, April 2, 2016 at 1:55:55 PM UTC+2, Spiritus Pap wrote:
>>>
>>> A simple example why it makes my *life hard*: Assume there is an array 
>>> of size 100, and i want to take the i_th portion of it out of n. This is a 
>>> common scenario for research-code, at least for me and my friends.
>>> In python:
>>> a[i*100/n:(i+1)*100/n]
>>> In julia:
>>> a[1+div(i*100,n):div((i+1)*100,n)]
>>>
>>> A lot more cumbersome in julia, and it is annoying and unattractive. 
>>> This is just a simple example.
>>>
>>
>> Python 3.5
>> a[i*len(a)//n:(i+1)*len(a)//n]
>>
>> Julia:
>> a[1+i*end÷5:(i+1)end÷5] 
>>
>>
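[Editor's note] The two Python idioms debated in this thread, written out runnably in plain Python 3 (no NumPy; scattered-index selection is done with a comprehension, since plain lists don't support fancy indexing):

```python
a = list(range(100))
n, i = 4, 1

# The i-th portion of n: Python 3's / is true division, so use floor
# division // to keep the slice bounds integral.
portion = a[i * len(a) // n : (i + 1) * len(a) // n]
print(portion[0], portion[-1])  # 25 49

# Selecting scattered indices: concatenate plain lists with +, then index.
idx = [1, 4] + list(range(7, 17, 2))   # [1, 4, 7, 9, 11, 13, 15]
picked = [a[j] for j in idx]
print(picked)  # [1, 4, 7, 9, 11, 13, 15]
```

The `list(range(...))` wrapper is the Python 3 fix DNF arrives at above: in Python 3, `range` is a lazy object and cannot be concatenated to a list directly.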

[julia-users] Re: Proposal: NoveltyColors.jl

2015-11-24 Thread Gabriel Gellner
As an end user that would love this, I would prefer a single package. Put 
all them tasty, wacky colors in one place!

On Tuesday, 24 November 2015 14:08:35 UTC-8, Randy Zwitch wrote:
>
> Since the Julia ecosystem is getting bigger, I figured I'd propose this 
> here first and see what people think is the right way forward (instead of 
> wasting people's time at METADATA)
>
> In the R community, they've created two packages of novelty color schemes: 
> Wes 
> Anderson  and Beyonce 
> . While humorous, these color palettes 
> are interesting to me and I'd like to make them available in Vega.jl (and 
> Julia more broadly). Should I:
>
> 1) Not do it at allbecause this is a serious, scientific community!
> 2) Do two separate packages, mimicking R
> 3) Create a single NoveltyColors.jl package, in case there are other 
> palettes that come up in the future
> 4) Make a feature request at Colors.jl (really not my favorite choice, 
> since there is so much cited research behind the palettes)
>
> I neglected to mention ColorBrewer.jl (which Vega.jl uses), since 
> ColorBrewer is a known entity in the plotting community.
>
> What do people think? Note, I'm not looking for anyone to do the work 
> (I'll do it), just looking for packaging input.
>


[julia-users] Julia REPL color background in command line windows

2015-11-14 Thread Gabriel Gellner
Julia messes up the background colors in the Windows console -- that is, 
it forces them to black. This boxed black background persists after Julia 
exits. Is there somewhere to look for why this is happening? Or a fix?

Thanks so much.


[julia-users] Re: Anaconda Python

2015-11-02 Thread Gabriel Gellner
Oh man ... please keep supporting Anaconda. As a Windows user, being stuck 
with easy_install haunts my dreams. Anaconda may be forkish, but at least 
Windows is a first-class citizen there. So many open source projects make 
using Windows beyond painful.

On Monday, 2 November 2015 13:00:51 UTC-8, le...@neilson-levin.org wrote:
>
> 
> I don't think you should support Anaconda Python.  I realize it is 
> convenient.  Providing a sort of private copy of Python and its packages 
> makes sense.  It simplifies installation and maintenance of key Julia 
> dependencies for users.  I just don't think you should use Anaconda to do 
> it. 
>
> Anaconda is fork of Python, its package management, its primary package 
> repository, and many of the packages themselves.  Forks are BAD.  It 
> borders on a commercial lock-in or, at a minimum, a technical lock-in to 
> Anaconda.
>
> I am a commercial software guy by experience.  I made a living from 
> commercial software and find that to be completely honorable.  This is not 
> an anti-commercial rant.  It IS, on the other hand, an anti-fork rant.
>
> Python is a vibrant community.  Julia is a vibrant community on a very 
> nice trajectory.  May they both continue.  Rather than a philosophical 
> discussion of Continuum and various open source license types, lets think 
> about this from the standpoint of Julia.
>
> Would you like it if someone came along and forked all of Julia, 
> especially Pkg, and created forks of every package?   To do so would be 
> entirely compliant with the MIT open source license.  So, it would be legal 
> (not that license enforcement is common in the open source world).  But, 
> would it be DESIRABLE?  You've done a fine thing to rely largely on git and 
> github.
>
> Probably not.  Is it possible that someone proposing enhancements found 
> that their suggestions were rejected?  Well, that can happen.  Perhaps that 
> would lead to a fork.  But, if there was community endorsement of the 
> suggestions from some reasonable plurality of members and enhancements 
> could be made without injury to those preferring some other code path, it 
> would be reasonable to accommodate particularly if the proposers backed 
> their suggestions with effort--working code that could be integrated under 
> the conditions mentioned.  I depict this in a somewhat negative way, but my 
> point is to confirm that *contributing is better than forking*.
>
> Typically, it is easier--and less negative--than the scenario I depicted. 
>  There is a community.  Some leaders are very technically adept and have a 
> vision (e.g., Julia is not C, Python, R, or Java so it won't do things just 
> like those or other languages...) so they have some sway over final 
> inclusion decisions.  And these technical leaders do care what the 
> community suggests; are open to suggestions and contributions; occasionally 
> reject some input with transparent reasons (transparency may not convince 
> the proposer, but it is good for everyone to see the dialog and decisions); 
> and often accept suggestions--implementing the suggestions themselves or 
> accepting pull requests.  But, realistically the core team makes most of 
> the commits and carries most of the work.
>
> This is probably how we want it to work.  We probably don't want a fork of 
> Julia and hope to avoid it and we will see Julia grow and be enhanced--most 
> often on the path and vision of the founders and sometimes with the 
> contributions of others. In the spirt of "do unto others...", let's not 
> encourage a fork of Python.
>
> This would mean using Python releases of the Python Software Foundation 
> (PSF) and its package repository PyPI.  There will be some inconvenience. 
>  Perhaps not all of the Python "cousins" are enamored of Julia and aren't 
> eager to be helpful.  Or perhaps they are merely neutral and busy.  But, it 
> supports their community to endorse it.  Do unto others...   As a serious 
> practical matter, the communities are not distinct.  Many user-developers 
> do use both Julia and Python.  We'd like both communities to thrive.  I 
> think Continuum would probably concur with the broad sentiment, though not 
> with my personal opinion about using Anaconda as a Julia dependency.  
>
> This requires some deep thought.  Using Anaconda is certainly a near-term 
> convenience.   On Windows, it is possible to get most of the same benefits 
> from the less commercially oriented release WinPython.  On Mac, ...from 
> Homebrew, which is also quite non-commercial.  On Linux, ...well, that is 
> another kettle of fragmentation--and probably better to rely on PSF than a 
> bunch of package repositories.  Consider:  how do you want the Julia 
> community to develop?  How does the Julia community overlap with the Python 
> community (and to a lesser extent the R community)?  How do choices affect 
> the healthy, long-term evolution of an open source community?
> 
>
> I've left out discussion of how open source 

[julia-users] Re: Sundials question

2015-11-02 Thread Gabriel Gellner
If you use the C interface and not the simplified interface you can pass an 
array of parameters to your lhs function. I found the best way to figure 
out how is to look at the code for how they made the `cvode` function (in 
Sundials.jl) to see how the C functions are being called and then look at 
the C manual for Sundials to see how to pass the arrays, it is very 
similar. Without more info it is hard to tell you exactly how to do it with 
your given community type.
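[Editor's note] The underlying idea -- bind the extra Community/Parameters-style arguments so the solver only ever sees rhs(t, N) -- is language-neutral, and can be sketched in plain Python with functools.partial. Everything here (dNdt, euler_step, the logistic right-hand side) is an illustrative stand-in, not Sundials API:

```python
from functools import partial

def dNdt(t, N, r, K):
    # Logistic growth as a placeholder right-hand side with parameters r, K.
    return [r * n * (1.0 - n / K) for n in N]

def euler_step(rhs, t, N, dt):
    # Stand-in for a solver step that only accepts rhs(t, N).
    return [n + dt * dn for n, dn in zip(N, rhs(t, N))]

rhs = partial(dNdt, r=0.5, K=10.0)   # rhs(t, N) now carries the parameters
print(euler_step(rhs, 0.0, [1.0], 0.1))  # approximately [1.045]
```

A closure (`lambda t, N: dNdt(t, N, r, K)`) achieves the same binding; the point is that no user-data pointer needs to travel through the solver's signature.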


On Monday, 2 November 2015 09:14:12 UTC-8, Timothée Poisot wrote:
>
> Hi list 
>
> I have a question related to SunDials.jl 
>
> We are building a model for which we currently have a function that 
> takes in two objects of custom types (c::Community and p::Parameters). 
> We'd like to move the code to Sundials, but we're not sure about the 
> best way to do it. 
>
> Currently, we have a function dNdt(c::Community, p::Parameters), 
> returning an array of Float64 with the derivative, for every population 
> size (stored in c.N[1], c.N[2], ...). 
>
> Is there a way to pass additional arguments to the functions used in 
> Sundial? 
>
> t 
>
> -- 
> Timothée Poisot, PhD 
>
> Professeur adjoint 
> Département des sciences biologiques 
> Université de Montréal 
>
> phone  : 514 343-7691 
> web: http://poisotlab.io 
> twitter: @PoisotLab 
> meeting: https://tpoisot.youcanbook.me/ 
>
>

[julia-users] Help my Row Major brain!

2015-10-27 Thread Gabriel Gellner
I am trying to figure out the Julian way to create a table of values 
(a matrix) from a function that returns multiple values. Since this really 
means thinking about the problem as a function that generates the rows of 
the table, it feels super awkward to do in Julia currently. For example, 
say I have a function of the form:

function exact_solution(t)



[julia-users] Re: Help my Row Major brain!

2015-10-27 Thread Gabriel Gellner
Okay sorry tab seems to send ...

I am trying to figure out the Julian way to create a table of values 
(a matrix) from a function that returns multiple values. Since this really 
means thinking about the problem as a function that generates the rows of 
the table, it feels super awkward to do in Julia currently. For example, 
say I have a function of the form:

function exact(t)
    yout = zeros(2)
    yout[1] = 3.0*exp(t) - 2.0*exp(t)
    yout[2] = exp(t) + 2.0*exp(t)
    yout
end

then what I want is a matrix of these solutions, so my first thought is to 
do

esol = [exact(t) for t in linspace(0, 10, 100)]
hcat(esol...)'

Is this the idiomatic solution?

Is there a better way to do this? How do people generally deal with Arrays 
of Arrays? It feels weird to me currently.

Gabriel
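[Editor's note] The collect-rows-then-transpose pattern being discussed can be sketched in plain Python for comparison; the formulas in `exact` are placeholders of my own (chosen so the two components differ), not the thread's solution:

```python
import math

def exact(t):
    # Two-component "solution" evaluated at time t (illustrative formulas).
    return [3.0 * math.exp(t) - 2.0 * math.exp(-t),
            math.exp(t) + 2.0 * math.exp(-t)]

ts = [0.0, 0.5, 1.0]
rows = [exact(t) for t in ts]      # list of rows: the natural row-major view
table = list(zip(*rows))           # transpose, when columns are wanted

print(rows[0])                     # [1.0, 3.0]
print(len(rows), len(rows[0]))     # 3 2
```

In row-major-friendly languages the list of rows is already the table; Julia's column-major arrays are why the thread reaches for hcat plus a transpose (or mapreduce with hcat) to assemble the same thing.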


On Tuesday, 27 October 2015 09:31:22 UTC-7, Gabriel Gellner wrote:
>
> I am trying my to figure out the Julian way to create a table of values 
> (matrix) from a function that returns multiple values. As this is really 
> thinking about the problem as a function that generates the rows of the 
> table it feels super awkward to do this in Julia currently. For example, 
> lets say I have a function of the form:
>
> function exact_solution(t)
>
>

[julia-users] Re: Help my Row Major brain!

2015-10-27 Thread Gabriel Gellner
Sadly that makes a matrix of two long rows ;) just like the hcat(...), so 
it needs the transpose as well ... maybe this is the way?
Thanks for opening my eyes to mapreduce, though!

On Tuesday, 27 October 2015 09:43:59 UTC-7, Glen O wrote:
>
> One relatively neat way to do this is
>
> mapreduce(exact,hcat,linspace(0,10,100))
>
> On Wednesday, 28 October 2015 02:38:56 UTC+10, Gabriel Gellner wrote:
>>
>> Okay sorry tab seems to send ...
>>
>> I am trying my to figure out the Julian way to create a table of values 
>> (matrix) from a function that returns multiple values. As this is really 
>> thinking about the problem as a function that generates the rows of the 
>> table it feels super awkward to do this in Julia currently. For example, 
>> lets say I have a function of the form:
>>
>> function exact(t)
>> yout = zeros(2)
>> yout[1] = 3.0*exp(t) - 2.0*exp(t)
>> yout[2] = exp(t) + 2.0*exp(t)
>> yout
>> end
>>
>> then what i want is a matrix of these solutions so my first thought is to 
>> do
>>
>> esol = [exact(t) for t in linspace(0, 10, 100)]
>> hcat(esol...)'
>>
>> is this the idiomatic solution?
>>
>> Is there a better way to do this? How do people generally deal with Array 
>> or Arrays. Feels weird to me currently.
>>
>> Gabriel
>>
>>
>> On Tuesday, 27 October 2015 09:31:22 UTC-7, Gabriel Gellner wrote:
>>>
>>> I am trying my to figure out the Julian way to create a table of values 
>>> (matrix) from a function that returns multiple values. As this is really 
>>> thinking about the problem as a function that generates the rows of the 
>>> table it feels super awkward to do this in Julia currently. For example, 
>>> lets say I have a function of the form:
>>>
>>> function exact_solution(t)
>>>
>>>

Re: [julia-users] A grateful scientist

2015-10-26 Thread Gabriel Gellner
Seriously +9000 to this sentiment.
I am new to Julia, but man, what this community has made is incredible. 
The beauty of this project staggers me. Not having to mess around with C 
for a large chunk of my new code's inner loops feels like magic every time.

Thank you all so much.

Gabriel

On Monday, 26 October 2015 06:16:05 UTC-7, Scott T wrote:
>
> (Oh and Yakir, your work sounds like one of the coolest interdisciplinary 
> mixes I could possibly think of.)
>
> On Monday, 26 October 2015 13:11:47 UTC, Scott T wrote:
>>
>> I'll add my voice to say thanks as well! I made the switch from Python to 
>> Julia for some astrophysical models in my PhD, because I suck at C and 
>> Fortran and didn't like the messiness of dealing with something like 
>> Cython. Julia has been great for this and has introduced me gently to the 
>> usefulness of types and how to program for speed without throwing me in the 
>> deep end. I look forward to the day I can introduce people to it without 
>> having to preface the introduction with "You probably won't care about what 
>> I'm saying until the language and packages are more stable, but..."
>>
>> Scott T
>>
>> On Monday, 26 October 2015 12:47:47 UTC, Jon Norberg wrote:
>>>
>>> Utterly seconding that. Amazing community and beautiful language. 
>>>
>>> Thanks all!
>>>
>>> Jon Norberg 
>>>
>>>

Re: [julia-users] Re: Re: are array slices views in 0.4?

2015-10-26 Thread Gabriel Gellner


On Monday, 26 October 2015 11:17:58 UTC-7, Christoph Ortner wrote:
>
> Fabian - Many thanks for your comments. This was very helpful.
>
> (c) if I want to write code now that shouldn't break with 0.5, what should 
>> I do?
>>
>
> I think when you need a copy, just surround your getindex with a copy 
> function. (e.g. copy(x[:,10]) instead of x[:,10]). 
>
> But this would lead me to make two copies. I was more interested in seeing 
> whether there is a guideline on how to write code now so it doesn't have to 
> be rewritten for 0.5.
>
>
> you could do copy(sub(x, :, 10)) to avoid two copies. Not as pretty ;)
 

> My own scepticism comes from the idea that using immutable objects 
> throughout prevents bugs and one should only use mutable objects sparingly 
> (primarily for performance - but I thought it shouldn't be the default)
>
The all-immutable idea has its merits, but it is really not how Julia 
already works. If I pass an array to a function I get a 
reference, not a copy; at the moment it is only a slice of that same array 
that gets copied, which feels inconsistent to me. If a function accepts an 
array and mutates it in the body of the function, I really don't 
like that

func(a) 
func(b[1:2, 3:4])

has different behavior if I am mutating the passed in array. This will be a 
source of far more bugs in my mind than simply having it so that if you 
mutate the passed in arguments to a function you are changing the 
parameters. 

Also I would hate to have to do

func(ref(a))

or some such thing when I have no plans to change a in my function (which 
for me is the far more common case).

Ultimately I find that Julia treats us like consenting adults when it comes 
to passing objects to functions ... and the new array views behavior simply 
adds consistency.
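For the record, this is roughly how the semantics settled out; a minimal sketch in post-1.0 syntax (where `view` replaced 0.4's `sub`, and `[]`-slices stayed copies rather than becoming views as proposed for 0.5):

```julia
A = [1.0 2.0; 3.0 4.0]

s = A[:, 1]          # slicing with [] allocates a copy...
s[1] = 99.0          # ...so mutating s leaves A untouched

v = view(A, :, 1)    # an explicit view (0.4 spelled this sub(A, :, 1))
v[1] = -7.0          # ...which writes through to A
```

So the "consenting adults" model above is available either way: pass `A` for reference semantics, or wrap in `view`/`copy` to be explicit.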


Re: [julia-users] Re: A question of Style: Iterators into regular Arrays

2015-10-25 Thread Gabriel Gellner


On Sunday, 25 October 2015 07:05:03 UTC-7, Christoph Ortner wrote:
>
>
> very nice example - thank you - but I notice that you use linrange :).
>
> Thanks, Christoph
>

Man I wish they had called it linrange ;) with type LinRange. Still I am at 
peace with the new way it works. Hopefully it will get faster in the 
future, and with the upcoming printing changes I think it will become a 
minor headache in corner cases at worst (in which case using collect will 
be no problem). I don't think the python range/xrange is a good comparison 
as that is a function that was almost always used for looping in that 
language so of course the iterator is the best idea. linspace is often used 
for array creation in languages like Matlab.

Gabriel 


Re: [julia-users] A question of Style: Iterators into regular Arrays

2015-10-25 Thread Gabriel Gellner

On Sunday, 25 October 2015 11:06:14 UTC-7, Stefan Karpinski wrote:
>
> This is still an option but I'm yet to be convinced that we want to have 
> that many things exported: linrange, LinRange, and linspace which just does 
> collect on linrange? Seems like one too many. 

  


Personally I think that `linspace` should be deprecated and not exported. 
Just have `linrange`, which returns the current `LinRange` (updated name 
...), and have the user call collect(linrange) when needed. I just don't 
like the name linspace anymore, as it makes me think of the function in 
Matlab/NumPy which caused my original discontent. (I even tried to make a 
quick package for this, but sadly linrange is a deprecated name, so I 
abandoned the attempt.) Really, the name linspace feels like Matlab 
baggage; it is not a name like rand or ones, which are a perfect match to 
the idea regardless of the implementation. Instead it is a strange name for 
returning an equally spaced, dense, float array. linrange is really more 
descriptive in my mind.

I also prefer having the nice naming consistency with UnitRange, 
FloatRange. It makes me feel like these are a suite of types (fitting, since 
linspace itself is defined in "ranges.jl" ;).

If this is done I think logspace -> logrange as well even though it returns 
an array at the moment.
Similarly, if a special name is really wanted for collect(linrange), I think 
it should be alinrange; you would then also want aunitrange and 
afloatrange -- to me this would be similar to the speye-like exports for 
getting different sparse types. That being said, I currently wouldn't vote 
for this as I have seen the light on AbstractArray and the current linrange 
behavior. Outside of being a little slower, at the moment, in vectorized 
cases, I like having this, and it seems to work fine for what I use it for 
most often (parameters for time-series-like functions or ODE solvers).

But this is pure bike shedding aesthetics. I have used R and Matlab enough 
to learn to deal with strange inconsistencies like this.



Re: [julia-users] Re: A question of Style: Iterators into regular Arrays

2015-10-22 Thread Gabriel Gellner

>
> A related discussion is about a special Ones type representing an array 
> of 1, which would allow efficient generic implementations of 
> (non-)weighted statistical functions: 
> https://github.com/JuliaStats/StatsBase.jl/issues/135 
>
> But regarding zeros(), there might not be any compelling use case to 
> return a special type. Anyway, if arrays are changed to initialize to 
> zero [1], that function could go away entirely


lol never thought of this kind of special case. You could simply have a 
"SameVector" object that just stores the value and the length. * + - ^ 
would be easy to define and the space/(# of operations) savings could be 
massive ;). With this you would have no need to special case zeros(n) 
either just have it return a SameVector with value 0. We could even 
generalize it to "BlockVector" for sequences of same values storing start 
and stop locations.
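A hedged sketch of what that hypothetical SameVector could look like (all names here are invented for illustration; the FillArrays.jl package later built a polished version of the same idea as Fill/Zeros/Ones):

```julia
# A constant vector stored as just (value, length): O(1) memory.
struct SameVector{T} <: AbstractVector{T}
    value::T
    len::Int
end

Base.size(v::SameVector) = (v.len,)
Base.getindex(v::SameVector, i::Int) = (checkbounds(v, i); v.value)

# Arithmetic stays O(1) instead of O(n), as suggested above.
Base.:*(v::SameVector, s::Number) = SameVector(v.value * s, v.len)
Base.:+(a::SameVector, b::SameVector) =
    (a.len == b.len || throw(DimensionMismatch()); SameVector(a.value + b.value, a.len))

# zeros(n) without allocating n floats (hypothetical replacement).
myzeros(n) = SameVector(0.0, n)
```

Because it subtypes AbstractVector, generic code (sum, iteration, indexing) works on it unchanged.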


[julia-users] Re: A question of Style: Iterators into regular Arrays

2015-10-21 Thread Gabriel Gellner
No that is a good point. Often you can use an iterator where an explicit 
array would also work. The issue I guess is that this puts the burden on 
the developer to always write generic code, so that wherever you want to 
accept an Array you also need to accept an iterator like LinSpace. 

Maybe this is easier to do than I currently understand? If not for regular 
scientists like myself I find that this would force me to make my functions 
far more complex than I generally would do in practice. I would often just 
write a signature like func(x::Vector) when I want my code to accept a 
Vector, but if I pass a LinSpace object to this I get a type error. I like 
this type of coding as it fits my model of what I want to do. The 
contortions I would need to do for something as basic as having a nice way 
to get a linear array of floating point numbers to a simple function that 
wants to accept a Vector seems like a wart to me. Am I missing something 
simple?

For the builtins clearly this is not the case which is nice, but it only 
masks the issue of how this would be used by regular users in my opinion.

On Wednesday, 21 October 2015 05:59:21 UTC-7, Jonathan Malmaud wrote:
>
> Gabriel, I rewrote your code to not ever explicitly convert ranges to 
> arrays and it still works fine. Maybe I'm not quite understanding the issue?
>
> function Jakes_Flat( fd, Ts, Ns, t0 = 0, E0 = 1, phi_N = 0 )
> # Inputs:
> #
> # Outputs:
>   N0  = 8;  # As suggested by Jakes
>   N   = 4*N0+2; # An accurate approximation
>   wd  = 2*pi*fd;# Maximum Doppler frequency
>   t   = t0 + (0:Ns-1)*Ts;
>   tf  = t[end] + Ts;
>   coswt = [ sqrt(2)*cos(wd*t'); 2*cos(wd*cos(2*pi/N*(1:N0))*t') ]
>   temp = zeros(1,N0+1)
>   temp[1,2:end] = pi/(N0+1)*(1:N0)'
>   temp[1,1] = phi_N
>   h = E0/sqrt(2*N0+1)*exp(im*temp ) * coswt
>   return h, tf;
> end
>
> On Wednesday, October 21, 2015 at 3:11:44 AM UTC-4, Gabriel Gellner wrote:
>>
>> I find the way that you need to use `linspace` and `range` objects a bit 
>> jarring for when you want to write vectorized code, or when I want to pass 
>> an array to a function that requires an Array. I get how nice the iterators 
>> are when writing loops and that you can use `collect(iter)` to get a array 
>> (and that it is possible to write polymorphic code that takes LinSpace 
>> types and uses them like Arrays … but this hurts my small brain). But I 
>> find I that I often want to write code that uses an actual array and having 
>> to use `collect` all the time seems like a serious wart for an otherwise 
>> stunning language for science. (
>> https://github.com/JuliaLang/julia/issues/9637 gives the evolution I 
>> think of making these iterators)
>>
>>  
>>
>> For example recently the following code was posted/refined on this 
>> mailing list:
>>
>>  
>>
>> function Jakes_Flat( fd, Ts, Ns, t0 = 0, E0 = 1, phi_N = 0 )
>>
>> # Inputs:
>>
>> #
>>
>> # Outputs:
>>
>>   N0  = 8;  # As suggested by Jakes
>>
>>   N   = 4*N0+2; # An accurate approximation
>>
>>   wd  = 2*pi*fd;# Maximum Doppler frequency
>>
>>   t   = t0 + [0:Ns-1;]*Ts;
>>
>>   tf  = t[end] + Ts;
>>
>>   coswt = [ sqrt(2)*cos(wd*t'); 2*cos(wd*cos(2*pi/N*[1:N0;])*t') ]
>>
>>   temp = zeros(1,N0+1)
>>
>>   temp[1,2:end] = pi/(N0+1)*[1:N0;]'
>>
>>   temp[1,1] = phi_N
>>
>>   h = E0/sqrt(2*N0+1)*exp(im*temp ) * coswt
>>
>>   return h, tf;
>>
>> end
>>
>>  
>>
>> From <https://groups.google.com/forum/#!topic/julia-users/_lIVpV0e_WI> 
>>
>>  
>>
>> Notice all the horrible [;] notations to make these arrays … and it 
>> seems like the devs want to get rid of this notation as well (which they 
>> should it is way too subtle in my opinion). So imagine the above code with 
>> `collect` statements. Is this the way people work? I find the `collect` 
>> statements in mathematical expressions to really break me out of the 
>> abstraction (that I am just writing math).
>>
>>  
>>
>> I get that this could be written as an explicit loop, and this would 
>> likely make it faster as well (man I love looping in Julia). That being 
>> said in this case I don't find the vectorized version a performance issue, 
>> rather I prefer how this reads as it feels closer to the math to me. 
>>
>>  
>>
>> So my question: what is the Julian way of making explicit arrays using 
>> either `range (:)` or `linspace`? Is it to pollute everything with 
>> `collect`? Would it be worth having versions of linspace that return an 
>> actual array? (something like alinspace or whatnot)
>>
>>
>> Thanks for any tips, comments etc
>>
>

Re: [julia-users] Nicer syntax collect(linspace(0,1,n))?

2015-10-21 Thread Gabriel Gellner
I agree with this downvote so much it hurts. The linspace/logspace split is 
painfully ugly. linrange is the right name in my mind for the iterator 
version.

On Wednesday, 30 September 2015 10:31:55 UTC-7, Alex Ames wrote:
>
> Another downvote on linspace returning a range object. It seems odd for 
> linspace and logspace to return different types, and linrange provides the 
> low-memory option where needed. Numpy's `linspace` also returns an array 
> object.
>  I ran into errors when trying to plot a function over a linspace of x 
> values, since plotting libs currently expect vectors as arguments, not 
> range objects. Easily fixed if you know Julia well, but Matlab/Python 
> converts may be stymied.
>
> On Wednesday, September 30, 2015 at 12:19:22 PM UTC-5, J Luis wrote:
>>
>> I want to add my voice to the dislikers. Those are the type of surprises 
>> that are not welcome mainly for matlab users. 
>>
>> quarta-feira, 30 de Setembro de 2015 às 16:53:57 UTC+1, Christoph Ortner 
>> escreveu:
>>>
>>> I also strongly dislike the `linspace` change; I like the idea though of 
>>> having `linspace` and `linrange`, where the former should give the array.
>>> Christoph
>>>
>>>
>>> On Wednesday, 30 September 2015 10:21:36 UTC+1, Michele Zaffalon wrote:

 I just realize that the thread is about 0.3.11 and I am showing output 
 for 0.4.0-rc2. Sorry for the noise.

 On Wed, Sep 30, 2015 at 11:17 AM, Michele Zaffalon <
 michele@gmail.com> wrote:

>
> On Wed, Sep 30, 2015 at 9:50 AM, Milan Bouchet-Valat  
> wrote:
>
>> Le mercredi 30 septembre 2015 à 08:55 +0200, Michele Zaffalon a écrit 
>> :
>> > Just curious: linspace returns a Range object, but logspace returns 
>> a
>> > vector because there is no much use case for a LogRange object?
>> >
>> > @feza: I have also seen the deprecation warning going away after a
>> > couple of calls, but I am not sure why. If you restart Julia, the
>> > deprecations reappear.
>> Deprecation warnings are only printed once for each call place. The
>> idea is that once you're aware of it, there's no point in nagging you.
>>
>> Anyway, that warning is most probably not related to linspace at all,
>> but rather to the array concatenation syntax resulting in an effect
>> equivalent to collect(). If you show us a piece of code that prints 
>> the
>> warning, we can give you more details.
>>
>>
>> Regards
>>
>
> Sorry, you are right, I was referring to the concatenation.
> It prints it exactly twice if I type it in the REPL, it always prints 
> it if I define it within a function e.g. a() = [1:3].
>
> C:\Users\michele.zaffalon>julia
>_
>_   _ _(_)_ |  A fresh approach to technical computing
>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>_ _   _| |_  __ _   |  Type "?help" for help.
>   | | | | | | |/ _` |  |
>   | | |_| | | | (_| |  |  Version 0.4.0-rc2 (2015-09-18 17:51 UTC)
>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
> |__/   |  x86_64-w64-mingw32
>
> julia> [1:3]
> WARNING: [a] concatenation is deprecated; use collect(a) instead
>  in depwarn at deprecated.jl:73
>  in oldstyle_vcat_warning at abstractarray.jl:29
>  in vect at abstractarray.jl:32
> while loading no file, in expression starting on line 0
> 3-element Array{Int64,1}:
>  1
>  2
>  3
>
> julia> [1:3]
> WARNING: [a] concatenation is deprecated; use collect(a) instead
>  in depwarn at deprecated.jl:73
>  in oldstyle_vcat_warning at abstractarray.jl:29
>  in vect at abstractarray.jl:32
> while loading no file, in expression starting on line 0
> 3-element Array{Int64,1}:
>  1
>  2
>  3
>
> julia> [1:3]
> 3-element Array{Int64,1}:
>  1
>  2
>  3
>
> julia> a() = [1:3]
> a (generic function with 1 method)
>
> julia> a()
> WARNING: [a] concatenation is deprecated; use collect(a) instead
>  in depwarn at deprecated.jl:73
>  in oldstyle_vcat_warning at abstractarray.jl:29
>  in a at none:1
> while loading no file, in expression starting on line 0
> 3-element Array{Int64,1}:
>  1
>  2
>  3
>
> julia> a()
> WARNING: [a] concatenation is deprecated; use collect(a) instead
>  in depwarn at deprecated.jl:73
>  in oldstyle_vcat_warning at abstractarray.jl:29
>  in a at none:1
> while loading no file, in expression starting on line 0
> 3-element Array{Int64,1}:
>  1
>  2
>  3
>
> julia> a()
> WARNING: [a] concatenation is deprecated; use collect(a) instead
>  in depwarn at deprecated.jl:73
>  in oldstyle_vcat_warning at abstractarray.jl:29
>  in a at none:1
> while loading no file, in expression starting on line 0

[julia-users] Re: A question of Style: Iterators into regular Arrays

2015-10-21 Thread Gabriel Gellner
Oh man thanks for this link. Makes me feel better that I am not alone in 
feeling this pain. This is really the first *wart* I have felt in the 
language decisions in Julia. Forcing an iterator as the default for such a 
common array-generation function feels so needless. Calling any code that 
asks for a concrete array instead of an AbstractArray also feels 
dangerous... especially when, at first approximation, almost all the 
builtin basic array-generation functions return concrete arrays.

On Wednesday, 21 October 2015 05:14:36 UTC-7, Christoph Ortner wrote:
>
> Nice to see this is coming up again. (hopefully many  more times). There 
> seems to be a small group of people (me included) who are really bothered 
> by this. Here is the link to the last discussion I remember:
>
>
> https://groups.google.com/forum/#!searchin/julia-users/linspace/julia-users/qPqgJS-usrU/Hu40_tOlDQAJ
>


Re: [julia-users] Re: A question of Style: Iterators into regular Arrays

2015-10-21 Thread Gabriel Gellner
Wow! Thanks everyone for all the advice!!! Super helpful. I now see that it 
is super easy to deal with the LinSpace objects. 

That being said I guess I get scared when the docs tell me to use concrete 
types for performance ;) Most of the code I write for myself is working 
with Float64 arrays a la Matlab. I am comfortable with duck typing and what 
not but one of the things that drew me to Julia (vs lets say Mathematica) 
is how in general the type system feels easy to reason about. All the 
layers of indirection can scare me as it makes it harder for me to 
understand the kinds of performance problems I might face. I like sending 
concrete array types for my own code as it often catches bugs (when I maybe 
used an int literal and this accidentally created an int array when I really 
wanted a float, potentially leading to horrible performance cascades as all 
my generic code starts doing the wrong thing...).

I guess really what this comes down to is the point made by DNF that 
changing linspace to an iterator when that name means an array in Matlab 
and python is not the path of least surprise. It feels to me like this is a 
strange inconsistency, as we could just as easily have other common 
functions, like `rand`, return an AbstractArray-like type, and that would be 
just as awkward as it is in this case. I guess it all comes down to wishing that, 
like DNF suggested, we had a proper name for linspace and the iterator 
version was a new name. I guess my way forward will likely be to define my 
own linarray() function, sadly this will make my code a bit confusing to my 
matlab friends ;)

Again thanks though. I really have learned a crazy bunch about the generic 
type system.

On Wednesday, 21 October 2015 08:50:32 UTC-7, Spencer Russell wrote:
>
> On Wed, Oct 21, 2015, at 11:38 AM, Gabriel Gellner wrote:
>
> No that is a good point. Often you can use an iterator where an explicit 
> array would also work. The issue I guess is that this puts the burden on 
> the developer to always write generic code that when you would want to 
> accept an Array you also need to accept a iterator like LinSpace. 
>  
> Maybe this is easier to do than I currently understand?
>
>  
> In general you can just make your function accept an AbstractVector (or 
> more generally AbstractArray) and things will just work. If there are 
> places where Ranges, LinSpaces, etc. aren't behaving as proper arrays 
> then I think those are generally good candidates for Issues (or even 
> better, PRs).
>  
> As a related aside:
> For efficiency you'll still want to use a concrete Vector field if you're 
> defining your own types, but luckily there seems to be a convert method 
> defined so you can do:
>  
> type MyArr{T}
> x::Vector{T}
> end
>  
> a = MyArr{Int64}(1:4)
>  
> And it will auto-convert the range to an array for storage in your type.
>  
> -s
>


[julia-users] Re: A question of Style: Iterators into regular Arrays

2015-10-21 Thread Gabriel Gellner
Continuing to think about all the ideas presented in this thread. It seems 
that the general advice is that almost all functions should, at first pass, 
use "Abstract" or untyped (duck-typed) signatures. If this is the case, why 
is the abstract type not the default meaning of Array? Is this just a 
historical issue? It feels like the language design is fighting this advice; 
instead, Array could have meant AbstractArray, with something like 
ConcreteArray for the concrete type, so that the incentive/most natural way 
to add types points toward genericity. Similarly for Vector, Matrix, etc.
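The "generic first" advice in practice can be sketched like this (post-1.0 syntax; `sumsq` is an invented example function): a single method written against AbstractVector serves concrete arrays and lazy ranges alike, with no collect() anywhere.

```julia
# One generic method covers every vector-like input.
sumsq(v::AbstractVector{<:Real}) = sum(x -> x^2, v)

sumsq([1.0, 2.0, 3.0])        # a concrete Vector{Float64}
sumsq(0:3)                    # a UnitRange, no allocation
sumsq(range(0, 1, length=5))  # a lazy range (the old LinSpace role)
```

Nothing about the method body cares which concrete type arrives; dispatch and specialization keep each call fast.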

I guess I find this idea that full genericity is the correct way to do 
things to be a bit at odds with how the language coaxes you to do things 
(and the general discussion of performance in Julia). Is this a more recent 
feeling? Did Julia start out being more about concrete types and template 
like generic types? This would explain the linspace vs logspace and all 
other basic array creating functions (ones, zeros, rand etc) and the 
default names for many types vs the "Abstract" prefixed ones.

Thanks for all the insight.

On Wednesday, 21 October 2015 00:11:44 UTC-7, Gabriel Gellner wrote:
>
> I find the way that you need to use `linspace` and `range` objects a bit 
> jarring for when you want to write vectorized code, or when I want to pass 
> an array to a function that requires an Array. I get how nice the iterators 
> are when writing loops and that you can use `collect(iter)` to get a array 
> (and that it is possible to write polymorphic code that takes LinSpace 
> types and uses them like Arrays … but this hurts my small brain). But I 
> find I that I often want to write code that uses an actual array and having 
> to use `collect` all the time seems like a serious wart for an otherwise 
> stunning language for science. (
> https://github.com/JuliaLang/julia/issues/9637 gives the evolution I 
> think of making these iterators)
>
>  
>
> For example recently the following code was posted/refined on this mailing 
> list:
>
>  
>
> function Jakes_Flat( fd, Ts, Ns, t0 = 0, E0 = 1, phi_N = 0 )
>
> # Inputs:
>
> #
>
> # Outputs:
>
>   N0  = 8;  # As suggested by Jakes
>
>   N   = 4*N0+2; # An accurate approximation
>
>   wd  = 2*pi*fd;# Maximum Doppler frequency
>
>   t   = t0 + [0:Ns-1;]*Ts;
>
>   tf  = t[end] + Ts;
>
>   coswt = [ sqrt(2)*cos(wd*t'); 2*cos(wd*cos(2*pi/N*[1:N0;])*t') ]
>
>   temp = zeros(1,N0+1)
>
>   temp[1,2:end] = pi/(N0+1)*[1:N0;]'
>
>   temp[1,1] = phi_N
>
>   h = E0/sqrt(2*N0+1)*exp(im*temp ) * coswt
>
>   return h, tf;
>
> end
>
>  
>
> From <https://groups.google.com/forum/#!topic/julia-users/_lIVpV0e_WI> 
>
>  
>
> Notice all the horrible [;] notations to make these arrays … and it 
> seems like the devs want to get rid of this notation as well (which they 
> should it is way too subtle in my opinion). So imagine the above code with 
> `collect` statements. Is this the way people work? I find the `collect` 
> statements in mathematical expressions to really break me out of the 
> abstraction (that I am just writing math).
>
>  
>
> I get that this could be written as an explicit loop, and this would 
> likely make it faster as well (man I love looping in Julia). That being 
> said in this case I don't find the vectorized version a performance issue, 
> rather I prefer how this reads as it feels closer to the math to me. 
>
>  
>
> So my question: what is the Julian way of making explicit arrays using 
> either `range (:)` or `linspace`? Is it to pollute everything with 
> `collect`? Would it be worth having versions of linspace that return an 
> actual array? (something like alinspace or whatnot)
>
>
> Thanks for any tips, comments etc
>


Re: [julia-users] Re: A question of Style: Iterators into regular Arrays

2015-10-21 Thread Gabriel Gellner
I have no issue with the LinSpace object; I simply do not see why it is the 
special case for this kind of object (I imagine the choice was made since 
it was seen to be used mainly for looping vs being used for array creation 
like similar functions logspace, zeros, ones, etc). If the iterator version 
is so good I don't see why all Vectors are not returned as this type for 
all the reasons you mention. In the current state where only linspace 
returns this kind of special polymorphic type it simply breaks any feeling 
of consistency in the language. I so something like x = zeros(10) and I get 
an array great. the next line I do y = linspace(0, 5, 10) I get a new 
fangled iterator object. They work the same but how do I get an iterator 
version of zeros? etc. It is a pedantic point but so is special casing this 
super common function to mean something that is not consistent with all 
other languages that use it. Which would be fine if when I did something 
like sin(linspace(0, 5, 100)) I got back an iterator but I don't. This 
abstraction is not percolated thru other functions, further giving the 
feeling of a needless special case in the language writ large. They simply 
get converted to concrete types for many common functions, when if this is 
done will vary from function to function, with little semantic reasoning. 
My feeling is that if what people say is true than why is the default 
numeric array not have the iterator semantics. As it is linspace is made 
special for no reason other than it is assumed it will be used for looping.

People want to argue that it is more elegant and takes advantage of what 
makes Julia powerful, which I get, but then why not go all in on this? 
Mathematica does this: everything is basically a List, with nice behavior 
for vectors, matrices, etc. I have no issue with this kind of elegance, but 
it is rough when the abstraction is inconsistent (the type is not truly 
preserved in most functions). As people have mentioned, clearly logspace 
should be a special object ... but when does this stop? I dislike that this 
feels so arbitrary ... and as a result it is jarring when it turns up. The 
fact that it is polymorphic is only one kind of consistency.
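For what it's worth, the "percolation" question can be checked empirically in later Julia versions (a hedged sketch using post-1.0 names, where `linspace` became `range` and LinSpace became LinRange/StepRangeLen): shifts and scales stay lazy because the result is still an arithmetic progression, while sin must materialize.

```julia
r = range(0.0, 5.0, length=11)   # lazy: stores endpoints and length only

r .+ 1       # still a range: adding a constant preserves equal spacing
2 .* r       # still a range, for the same reason
sin.(r)      # a Vector{Float64}: the result is no longer equally spaced
```

So the abstraction percolates exactly as far as the "equally spaced" invariant holds, which is the semantic reasoning behind when it is dropped.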

On Wednesday, 21 October 2015 12:09:29 UTC-7, Jonathan Malmaud wrote:
>
> It’s still hard for me to understanding what the value of returning an 
> array is by default. 
>
> By getting a structured LinSpace object, it enables things like having the 
> REPL print it in a special way, to optimize arithmetic operations on it (so 
> that adding a scalar to a LinSpace is O(1) instead of O(N) in the length of 
> the space), to inspect its properties by accessing its fields, etc. And on 
> top of that, it can be used transparently in virtually all expressions 
> where a concrete array can be used. It’s not like Python where iterators 
> are generally going to be much slower and clunkier to work with than a 
> reified Numpy array. 
>
> The only downside really is if your arguments are explicitly and 
> unnecessarily typed to expect an Array, which is not a great habit to get 
> into no matter what linspace returns.
>
> Not trying to be argumentative or dismissive here- just trying to 
> understand. I would think that if one of your motivations for getting into 
> Julia is the rich type system compared to Matlab, you’d be happy that Julia 
> isn’t forced to discard semantic information from operations like linspace 
> as a result of only only raw numeric vectors being first-class constructs 
> in the language (as in Matlab and Numpy).
>
> On Oct 21, 2015, at 2:38 PM, Gabriel Gellner <gabriel...@gmail.com 
> > wrote:
>
> Wow! Thanks everyone for all the advice!!! Super helpful. I now see that 
> it is super easy to deal with the LinSpace objects. 
>
> That being said I guess I get scared when the docs tell me to use concrete 
> types for performance ;) Most of the code I write for myself is working 
> with Float64 arrays a la Matlab. I am comfortable with duck typing and what 
> not but one of the things that drew me to Julia (vs lets say Mathematica) 
> is how in general the type system feels easy to reason about. All the 
> layers of indirection can scare me as it makes it harder for me to 
> understand the kinds of performance problems I might face. I like sending 
> concrete array types for my own code as it often catches bugs (when I maybe 
> used an int literal and this accidentally created an int array when I really 
> wanted a float, potentially leading to horrible performance cascades as all 
> my generic code starts doing the wrong thing...).
>
> I guess really what this comes down to is the point made by DNF that 
> changing linspace to an iterator when that name means an array in Matlab 
> and python is not the path of least surprise. It feels to me like this is a 
> strange

Re: [julia-users] Re: A question of Style: Iterators into regular Arrays

2015-10-21 Thread Gabriel Gellner
That doesn't feel like a reason that they can't be iterators, rather that 
they might be slow ;) a la python. My point is not about speed but the 
consistency of the language. Are there many cases in Julia where there is a 
special type like this because it is convenient/elegant to implement? This 
feels like a recipe for madness; my guess is that such cases would be crazy rare.

People wondered why people might mind that we get a LinSpace object vs an 
Array. For me it is this strange feeling that I am getting a special case 
that doesn't feel well motivated other than there is a nice way to 
implement it (and that people, again, assumed that it would largely be used 
for looping). If not all things can be made consistently iterators when 
they are vector-like then why not have a special function that returns this 
special type (like your aforementioned linrange)? The fact that I lose my 
iterator when I use certain functions but not others is a way that this 
polymorphism that everyone is describing doesn't feel as nice to me, since 
it will not compose in cases where it likely should, outside of 
implementation details.

On Wednesday, 21 October 2015 12:55:35 UTC-7, DNF wrote:
>
> The reason why not all arrays can be iterators is that in general arrays 
> can not be 'compressed' like that. A linear range can be compressed to: a 
> start value, an increment, and a length, making it incredibly lightweight. 
> Doing this for sin() is not that easy. Doing it for rand() is simply 
> Impossible.
>
> On Wednesday, October 21, 2015 at 9:38:07 PM UTC+2, Gabriel Gellner wrote:
>>
>> I have no issue with the LinSpace object, I simply do not see why it is 
>> the special case for this kind of object (I imagine the choice was made 
>> since it was seen to be used mainly for looping vs being used for array 
>> creation like similar functions logspace, zeros, ones, etc). If the 
>> iterator version is so good I don't see why all Vectors are not returned as 
>> this type for all the reasons you mention. In the current state where only 
>> linspace returns this kind of special polymorphic type it simply breaks any 
>> feeling of consistency in the language. I do something like x = zeros(10) 
>> and I get an array great. the next line I do y = linspace(0, 5, 10) I get a 
>> new fangled iterator object. They work the same but how do I get an 
>> iterator version of zeros? etc. It is a pedantic point but so is special 
>> casing this super common function to mean something that is not consistent 
>> with all other languages that use it. Which would be fine if when I did 
>> something like sin(linspace(0, 5, 100)) I got back an iterator but I don't. 
>> This abstraction is not percolated thru other functions, further giving the 
>> feeling of a needless special case in the language writ large. They simply 
>> get converted to concrete types for many common functions, when if this is 
>> done will vary from function to function, with little semantic reasoning. 
>> My feeling is that if what people say is true, then why does the default 
>> numeric array not have the iterator semantics? As it is linspace is made 
>> special for no reason other than it is assumed it will be used for looping.
>>
>> People want to argue that it is more elegant and takes advantage of what 
>> makes Julia powerful, which I get, but then why not go all in on this? 
>> Mathematica does this. Everything is basically a List, with nice behavior 
>> for vectors, matrices, etc. I have no issue with this kind of elegance, but 
>> it is rough when the abstraction is inconsistent (the type is not truly 
>> preserved in most functions). As people have mentioned, clearly logspace 
>> should be a special object ... but when does this stop? I dislike that this 
>> feels so arbitrary ... and as a result it is jarring when it turns up. The 
>> fact that it is polymorphic is only one kind of consistency.
>>
>
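The asymmetry described above can be sketched in current Julia syntax (a hedged illustration: the thread predates 1.0, where `linspace(a, b, n)` became `range(a, b, length = n)`):

```julia
# `zeros` returns a concrete Array, while `range` returns a lazy object;
# mapping a function over the lazy object materializes an Array.
x = zeros(10)                     # concrete Vector{Float64}
y = range(0, 5, length = 10)      # lazy range object, not a Vector
z = sin.(y)                       # broadcasting materializes a Vector{Float64}
```

Both `x` and `y` behave as `AbstractVector`s, which is the polymorphism being debated; the complaint is that only one of the two constructors hands back the lazy form.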

Re: [julia-users] Re: A question of Style: Iterators into regular Arrays

2015-10-21 Thread Gabriel Gellner
Okay, so I am starting to see the light. I see that if LinSpace becomes 
fully replaceable with an Array it is fine.

Final Questions:

* I can't use LinSpace in matrix mult A * b::LinSpace, is this simply a 
Bug/Missing Feature? Or intentional? In general if basic builtin functions 
that operate on Array{Float64} types don't accept LinSpace objects is this 
something to be fixed? Or do we make special assumptions about the 
meaningfulness of such operations?
* LinSpace objects seem much slower when used in things like elementwise 
multiplication: A .* b::LinSpace is much, much slower than A .* b::Array. Is 
this to be expected (the cost of the extra abstraction / lack of guaranteed 
denseness) or simply a side effect of missing optimization? 
* If I make my code accept AbstractArray{Float64} in general, should I 
expect a performance penalty when calling the function with 
Array{Float64} arguments? 

thanks
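(A sketch of the pattern the replies in this thread converge on, in current syntax: take AbstractVector, and both concrete Arrays and lazy ranges work, with Julia compiling a specialized method for each concrete argument type.)

```julia
# Hypothetical helper for illustration; `sumsq` is not a Base function.
sumsq(v::AbstractVector{<:Real}) = sum(x -> x^2, v)

a = collect(range(0.0, 1.0, length = 5))   # Vector{Float64}
r = range(0.0, 1.0, length = 5)            # lazy range, same values
sumsq(a) ≈ sumsq(r)                        # true: one generic method serves both
```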


On Wednesday, 21 October 2015 14:25:17 UTC-7, Stefan Karpinski wrote:
>
> On Wed, Oct 21, 2015 at 5:13 PM, Gabriel Gellner <gabriel...@gmail.com 
> > wrote:
>
>> Maybe all this is just transitional that soon LinSpace objects will 
>> always work like Arrays in all code I might use as an end user. Currently 
>> as a new user I have not had this experience. I have noticed that LinSpaces 
>> where returned, and had to learn what they were and at times run `collect` 
>> to make them into what I wanted. I have not felt this abstraction bleed yet 
>> in other areas of Julia.
>
>
> I think this is exactly what's happening. The LinSpace type is hitting 
> code that should be abstracted in a lot of places but hasn't yet been. 
> You'd get the exact same issues if you'd passed a FloatRange object into 
> those functions. So the LinSpace type is really doing us all a favor by 
> forcing more Julia libraries to the right level of abstraction.
>


Re: [julia-users] Re: A question of Style: Iterators into regular Arrays

2015-10-21 Thread Gabriel Gellner
Sweetness. Thank you. AbstractArray it is for me!

On Wednesday, 21 October 2015 16:29:28 UTC-7, Tim Holy wrote:
>
> On Wednesday, October 21, 2015 03:32:04 PM Gabriel Gellner wrote: 
> > * I can't use LinSpace in matrix mult A * b::LinSpace, is this simply a 
> > Bug/Missing Feature? 
>
> Yes 
>
> > * LinSpace objects seem much slower when used in things like element 
> > multiplication A .* b::LinSpace is much much Slower than A .* b::Array, 
> is 
> > this to be expected (the cost of the extra abstraction lack of required 
> > denseness) or simply a side effect of lack of optimization? 
>
> It's because we can't yet eliminate bounds-checks with @inbounds on 
> anything 
> except Arrays. 
>
> > * If I make my code accept AbstractArray{Float64} in general, should I 
> > expect a performance penalty when calling the function with 
> > Array{Float64} arguments? 
>
> None whatsoever. The only place that's a bad idea is in declarations of 
> type- 
> fields, see 
> http://docs.julialang.org/en/latest/manual/performance-tips/#avoid-fields-with-abstract-type.
>  
>
>
> --Tim 
>
>
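A minimal sketch of the performance tip Tim links to, in current (post-1.0) syntax: abstract types are free in argument positions, but abstract *field* types defeat the compiler. The struct names here are made up for illustration.

```julia
struct SlowBox
    data::AbstractVector{Float64}        # abstract field: accesses are not type-stable
end

struct FastBox{T<:AbstractVector{Float64}}
    data::T                              # parametric field: concrete at construction
end

s = SlowBox(zeros(3))
f = FastBox(zeros(3))
sum(s.data) == sum(f.data)               # same answer; FastBox compiles tighter code
```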

Re: [julia-users] Re: A question of Style: Iterators into regular Arrays

2015-10-21 Thread Gabriel Gellner
Isn't the situation actually similar to functions like `eye`, which returns 
a dense array (which doesn't feel intuitive for what an identity or diagonal 
matrix actually is), whereas we have to call the special version `speye` to 
get a sparse array? For most users linspace is like `eye`, by default 
suggesting the dense version (i.e. from reasoning about other languages), 
whereas the other names I have heard proposed would be like `speye`, i.e. 
linrange. What we currently lack is top-level functions that let us choose 
which version we want, like what we have for the sparse vs dense arrays. The 
current situation actually exposes less of the type diversity you mention. It 
feels like, following the dense vs sparse array approach, we would want a 
similar symmetry for LinSpace vs Array creation.
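The dense/sparse pairing described above can be sketched in current Julia, where the 0.4-era `eye`/`speye` pair has since been replaced but the symmetry survives:

```julia
using LinearAlgebra, SparseArrays

d = Matrix{Float64}(I, 3, 3)     # dense identity, roughly the old eye(3)
s = spdiagm(0 => ones(3))        # sparse identity, roughly the old speye(3)
Matrix(s) == d                   # true: same values, different representations
```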

On Wednesday, 21 October 2015 14:20:25 UTC-7, Stefan Karpinski wrote:
>
> The whole notion of always using a single dense array type is simply a 
> non-starter. Do we just scrap the whole sparse array thing? There goes half 
> of the stuff you might want to do in linear algebra or machine learning. 
> Special matrix types like Diagonal or UpperTriangular, etc.? Toss those out 
> too. Various types of ranges are just a different form of compactly 
> representing arrays with special structure. How come no one is complaining 
> that 1:n is not a dense vector?
>
> On Wed, Oct 21, 2015 at 4:23 PM, Jonathan Malmaud <mal...@gmail.com 
> > wrote:
>
>> You're making good points for sure - logspace and linspace are 
>> inconsistent with respect to return types.
>>
>> But I just having trouble seeing how it impacts you as a user of the 
>> language; it's essentially an implementation detail that allows for some 
>> optimizations when performing arithmetic on array-like values known to have 
>> a certain exploitable structure (eg, uniform element spacing). 
>>
>> Your own code will virtually never make explicit references to Vector or 
>> LinSpace types (except perhaps in specifying the types of fields in new 
>> types you define). If tomorrow logspace was changed to return a special 
>> type or linspace to return a plain array, your code would remain identical 
>> if function arguments were typed as AbstractVector instead of Vector.
>>
>> So I can see how it could bother someone's aesthetic knowing that under 
>> the hood these inconsistencies exist (it bothers me a bit). And I can see 
>> how if you're counting on certain optimizations happening, like scalar 
>> multiplication on spaces being O(1), it would be frustrating that it will 
>> depend on if the space was generated by linspace or logspace. But is that 
>> really leading to 'madness'? 
>>
>> There are other examples of these light-weight wrapper types being used, 
>> especially in the linear algebra routines. The matrix factorization 
>> functions, like qrfact, return special types, for one thing.
>>
>> On Wed, Oct 21, 2015 at 4:07 PM, Gabriel Gellner <gabriel...@gmail.com 
>> > wrote:
>>
>>> That doesn't feel like a reason that they can't be iterators, rather 
>>> that they might be slow ;) a la python. My point is not about speed but the 
>>> consistency of the language. Are there many cases in Julia where there is a 
>>> special type like this because it is convenient/elegant to implement? This 
>>> feels like a recipe for madness, my guess is that this would be crazy rare.
>>>
>>> People wondered why people might mind that we get a LinSpace object vs 
>>> an Array. For me it is this strange feeling that I am getting a special 
>>> case that doesn't feel well motivated other than there is a nice way to 
>>> implement it (and that people, again, assumed that it would largely be used 
>>> for looping). If not all things can be made consistently iterators when 
>>> they are vector-like then why not have a special function that returns this 
>>> special type (like your aforementioned linrange)? The fact that I lose my 
>>> iterator when I use certain functions but not others is a way that this 
>>> polymorphism that everyone is describing doesn't feel as nice to me, since 
>>> it will not compose in cases where it likely should, outside of 
>>> implementation details.
>>>
>>>
>>> On Wednesday, 21 October 2015 12:55:35 UTC-7, DNF wrote:
>>>>
>>>> The reason why not all arrays can be iterators is that in general 
>>>> arrays can not be 'compressed' like that. A linear range can be compressed 
>>>> to: a start value, an increment, and a length, making it incredibly 
>>>> lightweight. Doing this for sin() is not that easy. Doing it for rand() is 
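DNF's point about 'compression' can be made concrete with a toy type (names here, like `MyLinRange`, are hypothetical, not Base API): a linear range stores only a start, a step, and a length, so iteration needs no backing vector.

```julia
struct MyLinRange
    start::Float64
    step::Float64
    len::Int
end

Base.length(r::MyLinRange) = r.len
Base.eltype(::Type{MyLinRange}) = Float64
Base.iterate(r::MyLinRange, i = 1) =
    i > r.len ? nothing : (r.start + (i - 1) * r.step, i + 1)

r = MyLinRange(0.0, 0.5, 5)                 # stands for 0.0, 0.5, 1.0, 1.5, 2.0
collect(r) == [0.0, 0.5, 1.0, 1.5, 2.0]     # true
```

No such three-field trick exists for the result of `sin.(x)` or `rand(n)`, which is why those must be concrete arrays.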

[julia-users] A question of Style: Iterators into regular Arrays

2015-10-21 Thread Gabriel Gellner


I find the way that you need to use `linspace` and `range` objects a bit 
jarring for when you want to write vectorized code, or when I want to pass 
an array to a function that requires an Array. I get how nice the iterators 
are when writing loops and that you can use `collect(iter)` to get an array 
(and that it is possible to write polymorphic code that takes LinSpace 
types and uses them like Arrays … but this hurts my small brain). But I 
find that I often want to write code that uses an actual array, and having 
to use `collect` all the time seems like a serious wart for an otherwise 
stunning language for science. (
https://github.com/JuliaLang/julia/issues/9637 gives the evolution I think 
of making these iterators)

 

For example recently the following code was posted/refined on this mailing 
list:

 

function Jakes_Flat( fd, Ts, Ns, t0 = 0, E0 = 1, phi_N = 0 )
  # Inputs:
  # Outputs:
  N0  = 8;          # As suggested by Jakes
  N   = 4*N0+2;     # An accurate approximation
  wd  = 2*pi*fd;    # Maximum Doppler frequency
  t   = t0 + [0:Ns-1;]*Ts;
  tf  = t[end] + Ts;
  coswt = [ sqrt(2)*cos(wd*t'); 2*cos(wd*cos(2*pi/N*[1:N0;])*t') ]
  temp = zeros(1,N0+1)
  temp[1,2:end] = pi/(N0+1)*[1:N0;]'
  temp[1,1] = phi_N
  h = E0/sqrt(2*N0+1)*exp(im*temp) * coswt
  return h, tf;
end

 

From 

 

Notice all the horrible [;] notations to make these arrays … and it 
seems like the devs want to get rid of this notation as well (which they 
should; it is way too subtle in my opinion). So imagine the above code with 
`collect` statements. Is this the way people work? I find the `collect` 
statements in mathematical expressions to really break me out of the 
abstraction (that I am just writing math).

 

I get that this could be written as an explicit loop, and this would likely 
make it faster as well (man, I love looping in Julia). That being said, in 
this case I don't find the vectorized version a performance issue; rather, I 
prefer how it reads, as it feels closer to the math to me. 

 

So my question: what is the Julian way of making explicit arrays using 
either `range` (`:`) or `linspace`? Is it to pollute everything with 
`collect`? Would it be worth having versions of linspace that return an 
actual array? (something like alinspace or whatnot)
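One possible convention, sketched in current syntax (where `linspace` has since become `range`): stay lazy by default and `collect` only at the boundary where a real Array is required.

```julia
t = range(0.0, 1.0, length = 4)   # lazy; fine for loops and broadcasting
v = collect(t)                    # explicit Vector{Float64} when an API demands one
w = sin.(t)                       # broadcasting a function already yields a Vector
```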


Thanks for any tips, comments etc