[julia-users] Re: [ANN] GLVisualize

2016-02-26 Thread Jeffrey Sarnoff
Last night I was wondering what software to use to make 
visualizations; there was nothing for that within Julia at the time. You must 
work very fast.  Thank you.



On Friday, February 26, 2016 at 5:11:02 PM UTC-5, Simon Danisch wrote:
>
> Hi
>
> this is the first release of GLVisualize.jl, a 2D/3D visualization 
> library written completely in Julia and OpenGL.
>
> You can find some crude documentation at glvisualize.com.
> I hope to improve the examples and the documentation in the coming weeks.
> The biggest problem for most people right now will be a slightly flaky 
> camera and missing guides and labels.
> This is being worked on! If someone beats me to the guide/axis creation, 
> I'd be very happy. This could be a fun project to get started with 
> GLVisualize.
> Please feel free to open issues concerning missing documentation, 
> discrepancies, and bugs!
>
> Relation to GLPlot:
> GLPlot is now a thin wrapper for GLVisualize with a focus on plotting. 
> Since I concentrated mostly on finishing GLVisualize, it's a really thin 
> wrapper: it basically just forwards all calls to GLVisualize and adds a 
> bounding box around the objects.
> In the future, it should offer some basic UI, automatic creation of 
> axes/labels, screenshots, and an alternative API that is more familiar to 
> people coming from other plotting libraries (e.g. functions like surf, 
> contourf, patches).
> If anyone has specific plans on what this could look like, don't hesitate 
> to open issues and PRs!
>
> Outlook:
> I'd like to make GLVisualize more independent of the rendering backend by 
> using a backend-agnostic geometry representation.
> This will make it easier to integrate backends like FireRender, WebGL, 
> Vulkan, or text-based backends like PDF/SVG.
>
> Furthermore, I'd like to improve the performance and interaction 
> possibilities.
>
> I have to thank the Julia Group for supporting me :) It's a pleasure to be 
> a part of the Julia community!
>
> I'm looking forward to the great visualizations you'll create!
>
> Best,
> Simon Danisch
>


Re: [julia-users] Pass contents of array as arguments to function

2016-02-26 Thread Erik Schnetter
Try
```julia
func([(1, 2), (3, 4)]...)
```
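
Applied to the original zip question, a minimal usage sketch (hypothetical data; collecting the iterator to make the result concrete):

```julia
# Splatting with `...` passes each tuple in the array as a separate
# argument, so zip(pairs...) is equivalent to zip((1, 2), (3, 4)).
pairs = [(1, 2), (3, 4)]
collect(zip(pairs...))   # => [(1, 3), (2, 4)]
```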

-erik


On Fri, Feb 26, 2016 at 9:51 PM,   wrote:
> I have an array of tuples and I want to pass the tuples to the zip function
> without manually typing out the indices. Is there a way to do this?



-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


[julia-users] map losing type information?

2016-02-26 Thread Evan Fields
I notice that when I use map with a Set collection, I end up with a result 
of element type Any, even though the equivalent code with arrays doesn't 
lose the type information. Am I doing something wrong? Check it out:

               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.4.3 (2016-01-12 21:37 UTC)
 _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
|__/                   |  x86_64-w64-mingw32

julia> xSet = Set{Int}([1,2,3])
Set([2,3,1])

julia> xLst = [1,2,3]
3-element Array{Int64,1}:
 1
 2
 3

julia> typeof(xSet)
Set{Int64}

julia> typeof(xLst)
Array{Int64,1}

julia> f(x) = x+1
f (generic function with 1 method)

julia> typeof(map(f, xSet))
Array{Any,1}

julia> typeof(map(f, xLst))
Array{Int64,1}

julia>
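
For later readers, a possible workaround (a sketch, not necessarily the idiomatic fix, using the `f` and `xSet` from above): map over a typed array obtained from the set, which lets `map` keep the element type, and rebuild a Set if one is needed.

```julia
f(x) = x + 1
xSet = Set{Int}([1, 2, 3])

# collect produces a Vector{Int}, so map can infer a typed result
ys = map(f, collect(xSet))   # a Vector{Int}, not Array{Any,1}

# rebuild a set from the mapped values if a Set is actually wanted
ySet = Set(ys)
```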


[julia-users] Pass contents of array as arguments to function

2016-02-26 Thread maloolsegev
I have an array of tuples and I want to pass the tuples to the zip function 
without manually typing out the indices. Is there a way to do this?


[julia-users] Re: Checking a call whether it belongs to a specific method

2016-02-26 Thread Greg Plowman
Not sure whether method_exists() is helpful, but it is sort of related...

julia> function Foo(x::Float64,y::Int64) # let's call this f1
           # some code
       end
Foo (generic function with 1 method)

julia> function Foo(x::Int64,y::Float64) # let's call this f2
           # some code
       end
Foo (generic function with 2 methods)

julia> method_exists(Foo, (Float64, Int64))
true

julia> method_exists(Foo, (Int64, Float64))
true

julia> method_exists(Foo, (Float64, Float64))
false




Re: [julia-users] Re: Issue with macros for creating types

2016-02-26 Thread Yichao Yu
On Fri, Feb 26, 2016 at 9:35 PM, Yichao Yu  wrote:
> On Fri, Feb 26, 2016 at 9:34 PM, Ben Ward  wrote:
>> Ohhh I see it, yep. Can't give a number in a parametric type definition -
>> ugh, well that's tiredness!
>
> Admittedly the macro expansion probably should have swallowed the error.

And I obviously meant `shouldn't have` 

>
>>
>>
>> On Saturday, February 27, 2016 at 2:12:43 AM UTC, Ben Ward wrote:
>>>
>>> Hi,
>>>
>>> I'm trying to create a macro for the BioJulia project which will allow
>>> easy creation of biological alphabets, which will work with the new
>>> BioSequence type we are designing as a major improvement to the Seq module.
>>> As it will be creating types and functions, I expect hygiene will be an
>>> issue, but before I even get to that, let's start simple. Just a quote, and
>>> some interpolation:
>>>
>>> macro create_alphabet(alph_name, n_bits)
>>>     quote
>>>         immutable $(alph_name){$(n_bits)} <: Alphabet end
>>>     end
>>> end
>>>
>>>
>>> @create_alphabet (macro with 1 method)
>>>
>>>
>>> julia> abstract Alphabet
>>>
>>>
>>> julia> @create_alphabet hi 1
>>>
>>>
>>> ERROR: syntax: malformed expression
>>>
>>>
>>>  in eval(::Module, ::Any) at ./boot.jl:267
>>>
>>>
>>>
>>>
>>> For the life of me, I really cannot see what I've done wrong - it's one
>>> line with two interpolated words! What massively obvious thing am I being a
>>> moron about at 2 am?
>>>
>>> Thanks,
>>> Ben.


Re: [julia-users] Re: Issue with macros for creating types

2016-02-26 Thread Yichao Yu
On Fri, Feb 26, 2016 at 9:34 PM, Ben Ward  wrote:
> Ohhh I see it, yep. Can't give a number in a parametric type definition -
> ugh, well that's tiredness!

Admittedly the macro expansion probably should have swallowed the error.

>
>
> On Saturday, February 27, 2016 at 2:12:43 AM UTC, Ben Ward wrote:
>>
>> Hi,
>>
>> I'm trying to create a macro for the BioJulia project which will allow
>> easy creation of biological alphabets, which will work with the new
>> BioSequence type we are designing as a major improvement to the Seq module.
>> As it will be creating types and functions, I expect hygiene will be an
>> issue, but before I even get to that, let's start simple. Just a quote, and
>> some interpolation:
>>
>> macro create_alphabet(alph_name, n_bits)
>>     quote
>>         immutable $(alph_name){$(n_bits)} <: Alphabet end
>>     end
>> end
>>
>>
>> @create_alphabet (macro with 1 method)
>>
>>
>> julia> abstract Alphabet
>>
>>
>> julia> @create_alphabet hi 1
>>
>>
>> ERROR: syntax: malformed expression
>>
>>
>>  in eval(::Module, ::Any) at ./boot.jl:267
>>
>>
>>
>>
>> For the life of me, I really cannot see what I've done wrong - it's one
>> line with two interpolated words! What massively obvious thing am I being a
>> moron about at 2 am?
>>
>> Thanks,
>> Ben.


[julia-users] Re: Issue with macros for creating types

2016-02-26 Thread Ben Ward
Ohhh I see it, yep. Can't give a number in a parametric type definition - 
ugh, well that's tiredness!

On Saturday, February 27, 2016 at 2:12:43 AM UTC, Ben Ward wrote:
>
> Hi,
>
> I'm trying to create a macro for the BioJulia project which will allow 
> easy creation of biological alphabets, which will work with the new 
> BioSequence type we are designing as a major improvement to the Seq module. 
> As it will be creating types and functions, I expect hygiene will be an 
> issue, but before I even get to that, let's start simple. Just a quote, and 
> some interpolation:
>
> macro create_alphabet(alph_name, n_bits)
>     quote
>         immutable $(alph_name){$(n_bits)} <: Alphabet end
>     end
> end
>
> @create_alphabet (macro with 1 method)
>
> julia> abstract Alphabet
>
> julia> @create_alphabet hi 1
>
> ERROR: syntax: malformed expression
>  in eval(::Module, ::Any) at ./boot.jl:267
>
>
>
> For the life of me, I really cannot see what I've done wrong - it's one 
> line with two interpolated words! What massively obvious thing am I being a 
> moron about at 2 am?
>
> Thanks,
> Ben.
>


[julia-users] Issue with macros for creating types

2016-02-26 Thread Ben Ward
Hi,

I'm trying to create a macro for the BioJulia project which will allow easy 
creation of biological alphabets, which will work with the new BioSequence 
type we are designing as a major improvement to the Seq module. As it will 
be creating types and functions, I expect hygiene will be an issue, but 
before I even get to that, let's start simple. Just a quote, and some 
interpolation:

macro create_alphabet(alph_name, n_bits)
    quote
        immutable $(alph_name){$(n_bits)} <: Alphabet end
    end
end

@create_alphabet (macro with 1 method)

julia> abstract Alphabet

julia> @create_alphabet hi 1

ERROR: syntax: malformed expression
 in eval(::Module, ::Any) at ./boot.jl:267



For the life of me, I really cannot see what I've done wrong - it's one 
line with two interpolated words! What massively obvious thing am I being a 
moron about at 2 am?

Thanks,
Ben.
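
For later readers: the `malformed expression` comes from interpolating a literal number into a type-parameter position; `immutable hi{1} <: Alphabet end` is not valid, because parameters in a type definition must be names. A hedged sketch of one workaround (written in later `struct`/`abstract type` syntax; the 0.4-era keywords were `immutable`/`abstract`, and `bits_per_symbol` is a hypothetical name) stores the bit count as a method instead of a type parameter:

```julia
abstract type Alphabet end

# Hypothetical macro: escape the whole quote so the generated names land
# in the caller's scope (hygiene is deliberately side-stepped here).
macro create_alphabet(alph_name, nbits)
    esc(quote
        struct $alph_name <: Alphabet end
        # expose the bit count through a method rather than a parameter
        bits_per_symbol(::Type{$alph_name}) = $nbits
    end)
end

@create_alphabet DNAAlphabet 2

bits_per_symbol(DNAAlphabet)   # => 2
DNAAlphabet <: Alphabet        # => true
```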


Re: [julia-users] Comparing functions

2016-02-26 Thread Keno Fischer
It depends on what the exact predicate is you're trying to define (if you
tell us, we might be able to suggest a way). However, I would caution
against this. It sounds like you're encoding properties into ASTs that may
be better off in a proper data structure.

On Fri, Feb 26, 2016 at 6:21 PM, Julia Tylors  wrote:

> How about an unofficial way? If you were to do it, how would you do it?
>
> Thanks
>
> On Fri, Feb 26, 2016 at 1:44 PM, Stefan Karpinski 
> wrote:
>
>> This application is a little more plausible since it's an optimization
>> and you can tolerate false negatives (functions that are actually the same
>> but appear different). There's no official way to do this but maybe Tom's
>> function helps.
>>
>> On Fri, Feb 26, 2016 at 4:42 PM, Tom Short 
>> wrote:
>>
>>> I'm interested in something like that, too. I only want to rerun parts
>>> of calculations if functions change or if input data has changed. Here's
>>> what I came up with to check functions:
>>>
>>> https://github.com/tshort/Make.jl/blob/master/src/Make.jl#L43-L49
>>>
>>> It's probably not right but it's working reasonably well for me, at
>>> least for anonymous functions. Changes in Julia v0.5 will probably change
>>> this. I'm currently wrestling with how to maintain some state between Julia
>>> sessions.
>>>
>>>
>>>
>>>
>>> On Fri, Feb 26, 2016 at 3:18 PM, Julia Tylors 
>>> wrote:
>>>
 to check whether they are equal so that i don't need to make them go
 through another of my operations to save time. That way it will be cached.

 On Friday, February 26, 2016 at 12:16:18 PM UTC-8, Stefan Karpinski
 wrote:
>
> Why?
>
> On Fri, Feb 26, 2016 at 3:14 PM, Julia Tylors 
> wrote:
>
>> This was a nice question,
>> i think i am trying to figure out a way to check if 2 functions
>> (partial possibly) are  at the same syntactic location in the AST and 
>> their
>> free variables refer to the equal/same data
>>
>>
>>
>> On Friday, February 26, 2016 at 12:02:30 PM UTC-8, Stefan Karpinski
>> wrote:
>>>
>>> What are you trying to discover about these functions?
>>>
>>> On Fri, Feb 26, 2016 at 2:50 PM, Julia Tylors 
>>> wrote:
>>>
 Isn't there a trick like i can serialize a partial function and
 then check their equality in the serialized form?

 On Friday, February 26, 2016 at 11:22:37 AM UTC-8, Stefan Karpinski
 wrote:
>
> Functions are compared by identity – they are equal if they are
> the same function, and not otherwise. Comparing functions 
> syntactically is
> shallow and nearly useless. Comparing functions by what they compute 
> is
> undecidable. So identity is essentially the only useful way to compare
> functions.
>
> On Fri, Feb 26, 2016 at 2:09 PM, Julia Tylors 
> wrote:
>
>> Hello,
>>
>> I have a simple question. I like to compare functions. For
>> example:
>>
>> function some_func(x::Target, y::Config, z::Int64)
>>#some code here
>> end
>>
>> #some partialization here
>> f1 = (x::Target,y::Config) -> some_func(x,y,5)
>> f2 = (x::Target,y::Config) -> some_func(x,y,4)
>>
>>
>> I want to evaluate the following expression:
>>
>> f1 == f2
>>
>>
>> Thanks
>>
>
>
>>>
>
>>>
>>
>


Re: [julia-users] [ANN] GLVisualize

2016-02-26 Thread Tom Breloff
How exciting... congrats!  I've been following the progress... I hope I
find time to get my hands dirty with it soon.

On Fri, Feb 26, 2016 at 5:11 PM, Simon Danisch  wrote:

> Hi
>
> this is the first release of GLVisualize.jl, a 2D/3D visualization 
> library written completely in Julia and OpenGL.
>
> You can find some crude documentation at glvisualize.com.
> I hope to improve the examples and the documentation in the coming weeks.
> The biggest problem for most people right now will be a slightly flaky 
> camera and missing guides and labels.
> This is being worked on! If someone beats me to the guide/axis creation, 
> I'd be very happy. This could be a fun project to get started with 
> GLVisualize.
> Please feel free to open issues concerning missing documentation, 
> discrepancies, and bugs!
>
> Relation to GLPlot:
> GLPlot is now a thin wrapper for GLVisualize with a focus on plotting. 
> Since I concentrated mostly on finishing GLVisualize, it's a really thin 
> wrapper: it basically just forwards all calls to GLVisualize and adds a 
> bounding box around the objects.
> In the future, it should offer some basic UI, automatic creation of 
> axes/labels, screenshots, and an alternative API that is more familiar to 
> people coming from other plotting libraries (e.g. functions like surf, 
> contourf, patches).
> If anyone has specific plans on what this could look like, don't hesitate 
> to open issues and PRs!
>
> Outlook:
> I'd like to make GLVisualize more independent of the rendering backend by 
> using a backend-agnostic geometry representation.
> This will make it easier to integrate backends like FireRender, WebGL, 
> Vulkan, or text-based backends like PDF/SVG.
>
> Furthermore, I'd like to improve the performance and interaction
> possibilities.
>
> I have to thank the Julia Group for supporting me :) It's a pleasure to be
> a part of the Julia community!
>
> I'm looking forward to the great visualizations you'll create!
>
> Best,
> Simon Danisch
>


Re: [julia-users] Comparing functions

2016-02-26 Thread Julia Tylors
How about an unofficial way? If you were to do it, how would you do it?

Thanks

On Fri, Feb 26, 2016 at 1:44 PM, Stefan Karpinski 
wrote:

> This application is a little more plausible since it's an optimization and
> you can tolerate false negatives (functions that are actually the same but
> appear different). There's no official way to do this but maybe Tom's
> function helps.
>
> On Fri, Feb 26, 2016 at 4:42 PM, Tom Short 
> wrote:
>
>> I'm interested in something like that, too. I only want to rerun parts of
>> calculations if functions change or if input data has changed. Here's what
>> I came up with to check functions:
>>
>> https://github.com/tshort/Make.jl/blob/master/src/Make.jl#L43-L49
>>
>> It's probably not right but it's working reasonably well for me, at least
>> for anonymous functions. Changes in Julia v0.5 will probably change this.
>> I'm currently wrestling with how to maintain some state between Julia
>> sessions.
>>
>>
>>
>>
>> On Fri, Feb 26, 2016 at 3:18 PM, Julia Tylors 
>> wrote:
>>
>>> to check whether they are equal so that i don't need to make them go
>>> through another of my operations to save time. That way it will be cached.
>>>
>>> On Friday, February 26, 2016 at 12:16:18 PM UTC-8, Stefan Karpinski
>>> wrote:

 Why?

 On Fri, Feb 26, 2016 at 3:14 PM, Julia Tylors 
 wrote:

> This was a nice question,
> i think i am trying to figure out a way to check if 2 functions
> (partial possibly) are  at the same syntactic location in the AST and 
> their
> free variables refer to the equal/same data
>
>
>
> On Friday, February 26, 2016 at 12:02:30 PM UTC-8, Stefan Karpinski
> wrote:
>>
>> What are you trying to discover about these functions?
>>
>> On Fri, Feb 26, 2016 at 2:50 PM, Julia Tylors 
>> wrote:
>>
>>> Isn't there a trick like i can serialize a partial function and then
>>> check their equality in the serialized form?
>>>
>>> On Friday, February 26, 2016 at 11:22:37 AM UTC-8, Stefan Karpinski
>>> wrote:

 Functions are compared by identity – they are equal if they are the
 same function, and not otherwise. Comparing functions syntactically is
 shallow and nearly useless. Comparing functions by what they compute is
 undecidable. So identity is essentially the only useful way to compare
 functions.

 On Fri, Feb 26, 2016 at 2:09 PM, Julia Tylors 
 wrote:

> Hello,
>
> I have a simple question. I like to compare functions. For example:
>
> function some_func(x::Target, y::Config, z::Int64)
>#some code here
> end
>
> #some partialization here
> f1 = (x::Target,y::Config) -> some_func(x,y,5)
> f2 = (x::Target,y::Config) -> some_func(x,y,4)
>
>
> I want to evaluate the following expression:
>
> f1 == f2
>
>
> Thanks
>


>>

>>
>


[julia-users] Re: Using push! with vector-elements of a matrix ?

2016-02-26 Thread colintbowers
Almost exactly this question just popped up on StackOverflow the other day. 
Follow the link for an explanation of what is happening under the hood:

http://stackoverflow.com/questions/35623326/assignment-to-multidimensional-array-in-julia

On Friday, 26 February 2016 13:50:32 UTC+11, Ilya Orson wrote:
>
> Ok, thanks!



[julia-users] [ANN] GLVisualize

2016-02-26 Thread Simon Danisch
Hi

this is the first release of GLVisualize.jl, a 2D/3D visualization library 
written completely in Julia and OpenGL.

You can find some crude documentation at glvisualize.com.
I hope to improve the examples and the documentation in the coming weeks.
The biggest problem for most people right now will be a slightly flaky 
camera and missing guides and labels.
This is being worked on! If someone beats me to the guide/axis creation, 
I'd be very happy. This could be a fun project to get started with 
GLVisualize.
Please feel free to open issues concerning missing documentation, 
discrepancies, and bugs!

Relation to GLPlot:
GLPlot is now a thin wrapper for GLVisualize with a focus on plotting. 
Since I concentrated mostly on finishing GLVisualize, it's a really thin 
wrapper: it basically just forwards all calls to GLVisualize and adds a 
bounding box around the objects.
In the future, it should offer some basic UI, automatic creation of 
axes/labels, screenshots, and an alternative API that is more familiar to 
people coming from other plotting libraries (e.g. functions like surf, 
contourf, patches).
If anyone has specific plans on what this could look like, don't hesitate 
to open issues and PRs!

Outlook:
I'd like to make GLVisualize more independent of the rendering backend by 
using a backend-agnostic geometry representation.
This will make it easier to integrate backends like FireRender, WebGL, 
Vulkan, or text-based backends like PDF/SVG.

Furthermore, I'd like to improve the performance and interaction 
possibilities.

I have to thank the Julia Group for supporting me :) It's a pleasure to be 
a part of the Julia community!

I'm looking forward to the great visualizations you'll create!

Best,
Simon Danisch


Re: [julia-users] Comparing functions

2016-02-26 Thread Stefan Karpinski
This application is a little more plausible since it's an optimization and
you can tolerate false negatives (functions that are actually the same but
appear different). There's no official way to do this but maybe Tom's
function helps.

On Fri, Feb 26, 2016 at 4:42 PM, Tom Short  wrote:

> I'm interested in something like that, too. I only want to rerun parts of
> calculations if functions change or if input data has changed. Here's what
> I came up with to check functions:
>
> https://github.com/tshort/Make.jl/blob/master/src/Make.jl#L43-L49
>
> It's probably not right but it's working reasonably well for me, at least
> for anonymous functions. Changes in Julia v0.5 will probably change this.
> I'm currently wrestling with how to maintain some state between Julia
> sessions.
>
>
>
>
> On Fri, Feb 26, 2016 at 3:18 PM, Julia Tylors 
> wrote:
>
>> to check whether they are equal so that i don't need to make them go
>> through another of my operations to save time. That way it will be cached.
>>
>> On Friday, February 26, 2016 at 12:16:18 PM UTC-8, Stefan Karpinski wrote:
>>>
>>> Why?
>>>
>>> On Fri, Feb 26, 2016 at 3:14 PM, Julia Tylors 
>>> wrote:
>>>
 This was a nice question,
 i think i am trying to figure out a way to check if 2 functions
 (partial possibly) are  at the same syntactic location in the AST and their
 free variables refer to the equal/same data



 On Friday, February 26, 2016 at 12:02:30 PM UTC-8, Stefan Karpinski
 wrote:
>
> What are you trying to discover about these functions?
>
> On Fri, Feb 26, 2016 at 2:50 PM, Julia Tylors 
> wrote:
>
>> Isn't there a trick like i can serialize a partial function and then
>> check their equality in the serialized form?
>>
>> On Friday, February 26, 2016 at 11:22:37 AM UTC-8, Stefan Karpinski
>> wrote:
>>>
>>> Functions are compared by identity – they are equal if they are the
>>> same function, and not otherwise. Comparing functions syntactically is
>>> shallow and nearly useless. Comparing functions by what they compute is
>>> undecidable. So identity is essentially the only useful way to compare
>>> functions.
>>>
>>> On Fri, Feb 26, 2016 at 2:09 PM, Julia Tylors 
>>> wrote:
>>>
 Hello,

 I have a simple question. I like to compare functions. For example:

 function some_func(x::Target, y::Config, z::Int64)
#some code here
 end

 #some partialization here
 f1 = (x::Target,y::Config) -> some_func(x,y,5)
 f2 = (x::Target,y::Config) -> some_func(x,y,4)


 I want to evaluate the following expression:

 f1 == f2


 Thanks

>>>
>>>
>
>>>
>


Re: [julia-users] Comparing functions

2016-02-26 Thread Tom Short
I'm interested in something like that, too. I only want to rerun parts of
calculations if functions change or if input data has changed. Here's what
I came up with to check functions:

https://github.com/tshort/Make.jl/blob/master/src/Make.jl#L43-L49

It's probably not right but it's working reasonably well for me, at least
for anonymous functions. Changes in Julia v0.5 will probably change this.
I'm currently wrestling with how to maintain some state between Julia
sessions.
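
For later readers, a rough sketch of the general idea (not Tom's actual Make.jl code; it assumes the `code_lowered` reflection API and tolerates false negatives, since equivalent functions can still compare unequal, which is acceptable for cache invalidation):

```julia
# Compare two functions by the printed form of their lowered code for a
# given argument-type signature. Different line numbers or bodies make
# the strings differ, so false negatives are possible by design.
same_lowered(f, g, sig) =
    string(code_lowered(f, sig)) == string(code_lowered(g, sig))

f1 = x -> x + 1
f2 = x -> x * 2

same_lowered(f1, f1, (Int,))   # true: identical lowered code
same_lowered(f1, f2, (Int,))   # false: different bodies
```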




On Fri, Feb 26, 2016 at 3:18 PM, Julia Tylors  wrote:

> to check whether they are equal so that i don't need to make them go
> through another of my operations to save time. That way it will be cached.
>
> On Friday, February 26, 2016 at 12:16:18 PM UTC-8, Stefan Karpinski wrote:
>>
>> Why?
>>
>> On Fri, Feb 26, 2016 at 3:14 PM, Julia Tylors  wrote:
>>
>>> This was a nice question,
>>> i think i am trying to figure out a way to check if 2 functions (partial
>>> possibly) are  at the same syntactic location in the AST and their free
>>> variables refer to the equal/same data
>>>
>>>
>>>
>>> On Friday, February 26, 2016 at 12:02:30 PM UTC-8, Stefan Karpinski
>>> wrote:

 What are you trying to discover about these functions?

 On Fri, Feb 26, 2016 at 2:50 PM, Julia Tylors 
 wrote:

> Isn't there a trick like i can serialize a partial function and then
> check their equality in the serialized form?
>
> On Friday, February 26, 2016 at 11:22:37 AM UTC-8, Stefan Karpinski
> wrote:
>>
>> Functions are compared by identity – they are equal if they are the
>> same function, and not otherwise. Comparing functions syntactically is
>> shallow and nearly useless. Comparing functions by what they compute is
>> undecidable. So identity is essentially the only useful way to compare
>> functions.
>>
>> On Fri, Feb 26, 2016 at 2:09 PM, Julia Tylors 
>> wrote:
>>
>>> Hello,
>>>
>>> I have a simple question. I like to compare functions. For example:
>>>
>>> function some_func(x::Target, y::Config, z::Int64)
>>>#some code here
>>> end
>>>
>>> #some partialization here
>>> f1 = (x::Target,y::Config) -> some_func(x,y,5)
>>> f2 = (x::Target,y::Config) -> some_func(x,y,4)
>>>
>>>
>>> I want to evaluate the following expression:
>>>
>>> f1 == f2
>>>
>>>
>>> Thanks
>>>
>>
>>

>>


[julia-users] Re: Pkg.tag "fatal: Not a valid commit name" error

2016-02-26 Thread SundaraRaman R
Hi! Sorry I wasn't able to reply sooner on this during the work week. Looks 
like you got it successfully published anyway 
(https://github.com/codles/EEG.jl/releases), cheers!

For the sake of posterity (and anyone searching these forums), could you describe 
what you had to do to solve this `publish` issue? Was doing a manual 
publish of the METADATA as described in 
http://docs.julialang.org/en/latest/manual/packages/#man-manual-publish 
sufficient? Or did it involve more magic? 

On Monday, February 22, 2016 at 12:56:50 AM UTC+5:30, Rob wrote:
>
> Thanks for the help I managed to retag all versions with the command 
>
> Pkg.tag("EEG", v"0.0.2", "eb110a6c7d11ec72ea06e9db9292d33be6248c95", force
> =true)
>
> But now when running `Pkg.publish()` I get the following error
>
> ERROR: EEG v0.0.1 SHA1 changed in METADATA – refusing to publish
>
>
> Any tips on how to solve this?
>
>
>
> On Saturday, 20 February 2016 17:39:54 UTC+1, SundaraRaman R wrote:
>>
>>
>>
>> On Saturday, February 20, 2016 at 9:52:25 PM UTC+5:30, SundaraRaman R 
>> wrote:
>>>
>>> you can do a force tagging via Pkg.tag("EEG", v"0.0.3", force=true) 
>>>
>> force isn't a keyword arg, so this should've been just Pkg.tag("EEG", 
>> v"0.0.3", true) 
>>  
>>
>>> However, it may be better to go back and tag each version from 0.0.1 (or 
>>> whichever your first version was) with the correct commit ID, as seen with 
>>> *git log --tags* now. 
>>>
>>
>> Just for completion's sake, this would be done as Pkg.tag("EEG", 
>> v"0.0.1", true, "<commit-sha1>"), where <commit-sha1> is the sha1 commit 
>> ID of tag 0.0.1 as emitted by the *git log* command (and similarly for 
>> v"0.0.2"). 
>>
>>
>>

Re: [julia-users] Comparing functions

2016-02-26 Thread Julia Tylors
To check whether they are equal, so that I don't need to make them go 
through another of my operations, to save time. That way the result will be cached.

On Friday, February 26, 2016 at 12:16:18 PM UTC-8, Stefan Karpinski wrote:
>
> Why?
>
> On Fri, Feb 26, 2016 at 3:14 PM, Julia Tylors  > wrote:
>
>> This was a nice question, 
>> i think i am trying to figure out a way to check if 2 functions (partial 
>> possibly) are  at the same syntactic location in the AST and their free 
>> variables refer to the equal/same data
>>
>>
>>
>> On Friday, February 26, 2016 at 12:02:30 PM UTC-8, Stefan Karpinski wrote:
>>>
>>> What are you trying to discover about these functions?
>>>
>>> On Fri, Feb 26, 2016 at 2:50 PM, Julia Tylors  
>>> wrote:
>>>
 Isn't there a trick like i can serialize a partial function and then 
 check their equality in the serialized form?

 On Friday, February 26, 2016 at 11:22:37 AM UTC-8, Stefan Karpinski 
 wrote:
>
> Functions are compared by identity – they are equal if they are the 
> same function, and not otherwise. Comparing functions syntactically is 
> shallow and nearly useless. Comparing functions by what they compute is 
> undecidable. So identity is essentially the only useful way to compare 
> functions.
>
> On Fri, Feb 26, 2016 at 2:09 PM, Julia Tylors  
> wrote:
>
>> Hello,
>>
>> I have a simple question. I like to compare functions. For example:
>>
>> function some_func(x::Target, y::Config, z::Int64)
>>#some code here
>> end
>>
>> #some partialization here
>> f1 = (x::Target,y::Config) -> some_func(x,y,5)
>> f2 = (x::Target,y::Config) -> some_func(x,y,4)
>>
>>
>> I want to evaluate the following expression:
>>
>> f1 == f2 
>>
>>
>> Thanks
>>
>
>
>>>
>

Re: [julia-users] Comparing functions

2016-02-26 Thread Stefan Karpinski
Why?

On Fri, Feb 26, 2016 at 3:14 PM, Julia Tylors  wrote:

> This was a nice question,
> i think i am trying to figure out a way to check if 2 functions (partial
> possibly) are  at the same syntactic location in the AST and their free
> variables refer to the equal/same data
>
>
>
> On Friday, February 26, 2016 at 12:02:30 PM UTC-8, Stefan Karpinski wrote:
>>
>> What are you trying to discover about these functions?
>>
>> On Fri, Feb 26, 2016 at 2:50 PM, Julia Tylors  wrote:
>>
>>> Isn't there a trick like i can serialize a partial function and then
>>> check their equality in the serialized form?
>>>
>>> On Friday, February 26, 2016 at 11:22:37 AM UTC-8, Stefan Karpinski
>>> wrote:

 Functions are compared by identity – they are equal if they are the
 same function, and not otherwise. Comparing functions syntactically is
 shallow and nearly useless. Comparing functions by what they compute is
 undecidable. So identity is essentially the only useful way to compare
 functions.

 On Fri, Feb 26, 2016 at 2:09 PM, Julia Tylors 
 wrote:

> Hello,
>
> I have a simple question. I like to compare functions. For example:
>
> function some_func(x::Target, y::Config, z::Int64)
>#some code here
> end
>
> #some partialization here
> f1 = (x::Target,y::Config) -> some_func(x,y,5)
> f2 = (x::Target,y::Config) -> some_func(x,y,4)
>
>
> I want to evaluate the following expression:
>
> f1 == f2
>
>
> Thanks
>


>>


Re: [julia-users] Comparing functions

2016-02-26 Thread Julia Tylors
This was a nice question. 
I think I am trying to figure out a way to check whether 2 functions 
(partial, possibly) are at the same syntactic location in the AST and their 
free variables refer to the equal/same data.



On Friday, February 26, 2016 at 12:02:30 PM UTC-8, Stefan Karpinski wrote:
>
> What are you trying to discover about these functions?
>
> On Fri, Feb 26, 2016 at 2:50 PM, Julia Tylors  > wrote:
>
>> Isn't there a trick like i can serialize a partial function and then 
>> check their equality in the serialized form?
>>
>> On Friday, February 26, 2016 at 11:22:37 AM UTC-8, Stefan Karpinski wrote:
>>>
>>> Functions are compared by identity – they are equal if they are the same 
>>> function, and not otherwise. Comparing functions syntactically is shallow 
>>> and nearly useless. Comparing functions by what they compute is 
>>> undecidable. So identity is essentially the only useful way to compare 
>>> functions.
>>>
>>> On Fri, Feb 26, 2016 at 2:09 PM, Julia Tylors  
>>> wrote:
>>>
 Hello,

 I have a simple question. I like to compare functions. For example:

 function some_func(x::Target, y::Config, z::Int64)
#some code here
 end

 #some partialization here
 f1 = (x::Target,y::Config) -> some_func(x,y,5)
 f2 = (x::Target,y::Config) -> some_func(x,y,4)


 I want to evaluate the following expression:

 f1 == f2 


 Thanks

>>>
>>>
>

Re: [julia-users] Comparing functions

2016-02-26 Thread Stefan Karpinski
What are you trying to discover about these functions?

On Fri, Feb 26, 2016 at 2:50 PM, Julia Tylors  wrote:

> Isn't there a trick like i can serialize a partial function and then check
> their equality in the serialized form?
>
> On Friday, February 26, 2016 at 11:22:37 AM UTC-8, Stefan Karpinski wrote:
>>
>> Functions are compared by identity – they are equal if they are the same
>> function, and not otherwise. Comparing functions syntactically is shallow
>> and nearly useless. Comparing functions by what they compute is
>> undecidable. So identity is essentially the only useful way to compare
>> functions.
>>
>> On Fri, Feb 26, 2016 at 2:09 PM, Julia Tylors  wrote:
>>
>>> Hello,
>>>
>>> I have a simple question. I like to compare functions. For example:
>>>
>>> function some_func(x::Target, y::Config, z::Int64)
>>>#some code here
>>> end
>>>
>>> #some partialization here
>>> f1 = (x::Target,y::Config) -> some_func(x,y,5)
>>> f2 = (x::Target,y::Config) -> some_func(x,y,4)
>>>
>>>
>>> I want to evaluate the following expression:
>>>
>>> f1 == f2
>>>
>>>
>>> Thanks
>>>
>>
>>


Re: [julia-users] Checking a call whether it belongs to a specific method

2016-02-26 Thread Julia Tylors
Nope, I want to return the specific method from the function, to be used
later on.
Thanks

On Friday, February 26, 2016 at 11:47:07 AM UTC-8, Yichao Yu wrote:
>
> On Fri, Feb 26, 2016 at 2:39 PM, Tom Breloff  > wrote: 
> > So you want to figure out which version of a function will be called 
> during 
> > dispatch?  This sounds like you're approaching your problem in the wrong 
> > way.  Can you give us what you're really trying to achieve?  (as opposed 
> to 
> > asking about a solution to a sub-problem) 
> > 
> > On Fri, Feb 26, 2016 at 2:27 PM, Julia Tylors  > wrote: 
> >> 
> >> Hi everyone, 
> >> 
> >> I was wondering whether i can check a call belongs to specific method. 
> For 
> >> example: 
> >> 
> >> function Foo(x::Float64,y::Int64) # lets call this f1 
> >> #some code 
> >> end 
> >> 
> >> function Foo(x::Int64,y::Float64) # lets call this f2 
> >> #some code 
> >> end 
> >> 
> >> method =@belongs Foo(3.0,2) 
>
> FWIW, if this is just for debugging/testing you are just looking for 
> @which. 
>
> >> 
> >> I want method to be the function f1? 
> >> 
> >> How can i do this? 
> >> 
> >> Thanks 
> > 
> > 
>


Re: [julia-users] Checking a call whether it belongs to a specific method

2016-02-26 Thread Julia Tylors
This is exactly what I am trying to achieve, in essence.
However, for the bigger picture:

I am trying to test whether a call is going to execute the method (the
version of the function) that I specifically want, because I need to decide
whether to create a partialized version of the function.
Since you brought it up, can this be done via generated functions?
Thanks

On Fri, Feb 26, 2016 at 11:39 AM, Tom Breloff  wrote:

> So you want to figure out which version of a function will be called
> during dispatch?  This sounds like you're approaching your problem in the
> wrong way.  Can you give us what you're really trying to achieve?  (as
> opposed to asking about a solution to a sub-problem)
>
> On Fri, Feb 26, 2016 at 2:27 PM, Julia Tylors 
> wrote:
>
>> Hi everyone,
>>
>> I was wondering whether i can check a call belongs to specific method.
>> For example:
>>
>> function Foo(x::Float64,y::Int64) # lets call this f1
>> #some code
>> end
>>
>> function Foo(x::Int64,y::Float64) # lets call this f2
>> #some code
>> end
>>
>> method =@belongs Foo(3.0,2)
>>
>> I want method to be the function f1?
>>
>> How can i do this?
>>
>> Thanks
>>
>
>


Re: [julia-users] Comparing functions

2016-02-26 Thread Julia Tylors
Isn't there a trick whereby I can serialize a partial function and then check
equality in the serialized form?
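
For what it's worth, a minimal sketch of the serialization idea (assuming the
Julia 0.4-era `serialize` and `takebuf_array` APIs). Note that this compares
the serialized *representation*, not what the functions compute:

```julia
f1 = x -> x + 5
f2 = x -> x + 4

# Serialize a function to raw bytes.
bytes(f) = (io = IOBuffer(); serialize(io, f); takebuf_array(io))

# Equal bytes would mean identical code and captured data -- but two
# separately constructed closures generally serialize differently even
# when their bodies are identical, so this is a very brittle test.
bytes(f1) == bytes(f2)
```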

On Friday, February 26, 2016 at 11:22:37 AM UTC-8, Stefan Karpinski wrote:
>
> Functions are compared by identity – they are equal if they are the same 
> function, and not otherwise. Comparing functions syntactically is shallow 
> and nearly useless. Comparing functions by what they compute is 
> undecidable. So identity is essentially the only useful way to compare 
> functions.
>
> On Fri, Feb 26, 2016 at 2:09 PM, Julia Tylors  > wrote:
>
>> Hello,
>>
>> I have a simple question. I like to compare functions. For example:
>>
>> function some_func(x::Target, y::Config, z::Int64)
>>#some code here
>> end
>>
>> #some partialization here
>> f1 = (x::Target,y::Config) -> some_func(x,y,5)
>> f2 = (x::Target,y::Config) -> some_func(x,y,4)
>>
>>
>> I want to evaluate the following expression:
>>
>> f1 == f2 
>>
>>
>> Thanks
>>
>
>

Re: [julia-users] Checking a call whether it belongs to a specific method

2016-02-26 Thread Yichao Yu
On Fri, Feb 26, 2016 at 2:39 PM, Tom Breloff  wrote:
> So you want to figure out which version of a function will be called during
> dispatch?  This sounds like you're approaching your problem in the wrong
> way.  Can you give us what you're really trying to achieve?  (as opposed to
> asking about a solution to a sub-problem)
>
> On Fri, Feb 26, 2016 at 2:27 PM, Julia Tylors  wrote:
>>
>> Hi everyone,
>>
>> I was wondering whether i can check a call belongs to specific method. For
>> example:
>>
>> function Foo(x::Float64,y::Int64) # lets call this f1
>> #some code
>> end
>>
>> function Foo(x::Int64,y::Float64) # lets call this f2
>> #some code
>> end
>>
>> method =@belongs Foo(3.0,2)

FWIW, if this is just for debugging/testing you are just looking for @which.
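
To make that concrete, a small sketch of `@which` (and its functional form
`which`), which return the `Method` object that dispatch would select for a
given call:

```julia
Foo(x::Float64, y::Int64) = 1   # "f1"
Foo(x::Int64, y::Float64) = 2   # "f2"

m = @which Foo(3.0, 2)               # the Method for Foo(::Float64, ::Int64)
m2 = which(Foo, Tuple{Float64, Int64})  # same Method, functional form
```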

>>
>> I want method to be the function f1?
>>
>> How can i do this?
>>
>> Thanks
>
>


Re: [julia-users] Checking a call whether it belongs to a specific method

2016-02-26 Thread Tom Breloff
So you want to figure out which version of a function will be called during
dispatch?  This sounds like you're approaching your problem in the wrong
way.  Can you give us what you're really trying to achieve?  (as opposed to
asking about a solution to a sub-problem)

On Fri, Feb 26, 2016 at 2:27 PM, Julia Tylors  wrote:

> Hi everyone,
>
> I was wondering whether i can check a call belongs to specific method. For
> example:
>
> function Foo(x::Float64,y::Int64) # lets call this f1
> #some code
> end
>
> function Foo(x::Int64,y::Float64) # lets call this f2
> #some code
> end
>
> method =@belongs Foo(3.0,2)
>
> I want method to be the function f1?
>
> How can i do this?
>
> Thanks
>


Re: [julia-users] Incrementally creating a tuple for.

2016-02-26 Thread Kristoffer Carlsson
The non-inlined version is 5 times slower on 0.5 than on 0.4 as well.

[julia-users] Checking a call whether it belongs to a specific method

2016-02-26 Thread Julia Tylors
Hi everyone,

I was wondering whether I can check whether a call belongs to a specific
method. For example:

function Foo(x::Float64, y::Int64) # let's call this f1
    # some code
end

function Foo(x::Int64, y::Float64) # let's call this f2
    # some code
end

method = @belongs Foo(3.0, 2)

I want `method` to be the method f1.

How can I do this?

Thanks


Re: [julia-users] Comparing functions

2016-02-26 Thread Stefan Karpinski
Functions are compared by identity – they are equal if they are the same
function, and not otherwise. Comparing functions syntactically is shallow
and nearly useless. Comparing functions by what they compute is
undecidable. So identity is essentially the only useful way to compare
functions.
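
To make the identity semantics concrete (behavior as in Julia at the time):

```julia
f = x -> x + 1
g = x -> x + 1   # identical body, but a distinct function object

f == g   # false: different objects
f == f   # true
h = f
h == f   # true: h and f are the same object
```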

On Fri, Feb 26, 2016 at 2:09 PM, Julia Tylors  wrote:

> Hello,
>
> I have a simple question. I like to compare functions. For example:
>
> function some_func(x::Target, y::Config, z::Int64)
>#some code here
> end
>
> #some partialization here
> f1 = (x::Target,y::Config) -> some_func(x,y,5)
> f2 = (x::Target,y::Config) -> some_func(x,y,4)
>
>
> I want to evaluate the following expression:
>
> f1 == f2
>
>
> Thanks
>


Re: [julia-users] Incrementally creating a tuple for.

2016-02-26 Thread Kristoffer Carlsson
I started some benchmarks here, if you are interested:

https://github.com/KristofferC/TupleBenchmarks.jl

Interestingly, putting an @inbounds on the loopdot function makes it 3 times
slower.

[julia-users] Comparing functions

2016-02-26 Thread Julia Tylors
Hello,

I have a simple question: I'd like to compare functions. For example:

function some_func(x::Target, y::Config, z::Int64)
    # some code here
end

# some partialization here
f1 = (x::Target, y::Config) -> some_func(x, y, 5)
f2 = (x::Target, y::Config) -> some_func(x, y, 4)


I want to evaluate the following expression:

f1 == f2


Thanks


Re: [julia-users] Incrementally creating a tuple for.

2016-02-26 Thread Tim Holy
On Friday, February 26, 2016 09:48:38 AM Kristoffer Carlsson wrote:
> It is possible that micro benchmarks might be misleading since in real code
> you might trash the instruction cache if you load a fat unrolled monster
> function into it.

The nice thing is that the compiler knows what K is and will generate different 
code for each choice of K---if LLVM is smart about its unrolling, presumably 
it will unroll for small K but not for large K.

Of course, it would be even nicer if you could say "use the default fallback 
for any K > 8". But I don't think we can do that right now.

Best,
--Tim



Re: [julia-users] Incrementally creating a tuple for.

2016-02-26 Thread Kristoffer Carlsson
Thanks for your suggestion, Tim. I will try some benchmarks over the weekend
and report back.

It is possible that micro-benchmarks might be misleading, since in real code
you might thrash the instruction cache if you load a fat unrolled monster
function into it.

Re: [julia-users] Incrementally creating a tuple for.

2016-02-26 Thread Erik Schnetter
If you represent a matrix as a vector of vectors, then you have one
call to `ntuple` (probably unrolled 9 times), each calling another
`ntuple` (probably also unrolled 9 times), which would give you 18
invocations of the function Tim just showed.

It's not clear whether the unrolling is good or bad, since it will
avoid many index calculations, and will enable generating SIMD
instructions.

-erik

On Fri, Feb 26, 2016 at 9:57 AM, Tim Holy  wrote:
> I'm not sure there's any reason you couldn't use a loop for the "K" axis in
> "M-by-K * K-by-N", but as you say the problem is that when using tuples you
> have to create the whole thing at once. So you'd need 81 calls to a function
> like
>
> @noinline function loopdot(A::Mat{M,K}, B::Mat{K,N}, Arow, Bcol)
>     s = zero(eltype(A)) * zero(eltype(B))
>     for k = 1:K
>         s += A[Arow, k] * B[k, Bcol]
>     end
>     s
> end
>
> (The @noinline will help guarantee that less code is generated.) Of course,
> LLVM might unroll that loop for you.
>
> Best,
> --Tim
>
> On Friday, February 26, 2016 12:17:53 AM Kristoffer Carlsson wrote:
>> Hello everyone.
>>
>> To avoid the XY problem I will first describe my actual use case. I am
>> writing a library for tensors to be used in continuum mechanics. These are
>> in dimensions 1,2,3 and typically of order 1,2,4.
>> Since tensor expressions are usually chained together like (A ⊗ B) * C ⋅ a
>> I want to make these stack allocated so I can get good performance without
>> making a bunch of mutating functions and having to preallocate everything
>> etc.
>>
>> The way you stack allocate vector like objects in julia right now seems to
>> almost exclusively be with an immutable of a tuple. So my data structure
>> right now looks similar to:
>>
>> immutable FourthOrderTensor{T} <: AbstractArray{T, 4}
>> data::NTuple{81, T}
>> end
>>
>> Base.size(::FourthOrderTensor) = (3,3,3,3)
>> Base.linearindexing{T}(::Type{FourthOrderTensor{T}}) = Base.LinearFast()
>> Base.getindex(S::FourthOrderTensor, i::Int) = S.data[i]
>>
>> If you'd like, you can think of the NTuple{81, T} as actually represent a
>> 9x9 matrix.
>>
>>
>> The largest operations I will need to do is a double contraction between
>> two fourth order tensors which is equivalent to a 9x9 matrix multiplied
>> with another 9x9 matrix.
>>
>> My actual question is, is there a good way to do this without generating a
>> huge amount of code?
>>
>> I have looked at for example the FixedSizeArrays package but there they
>> seem to simply unroll every operation. For a 9x9 * 9x9 matrix
>> multiplication this unrolling creates a huge amount of code and doesn't
>> feel right.
>>
>> I guess what I want is pretty
>> much https://github.com/JuliaLang/julia/issues/11902 but is there any
>> workaround as of now?
>>
>> Thanks!
>>
>> // Kristoffer
>



-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


[julia-users] [General Interest] An update to John Gustafson's Unums: "Unums 2.0"

2016-02-26 Thread Job van der Zwan


Originally found on: D-lang forum discussion, forwarded to me by Robbert van
Dalen.


The previous julia-users discussion about unums provoked a lot of responses
(as did the previous discussions on /r/compsci and Hacker News), so I figured
I'd post this as an update for the people who were interested back then. For
those of you who haven't heard of this before, John Gustafson has been
working on an alternative number encoding to IEEE floating point. He just
presented an updated version of this unum proposal at Multicore 2016. It's
very different from before:

For those of you who think you have already seen unums, this is a different 
approach. Every one of the slides here is completely new and has not been 
presented before the Multicore 2016 conference.

The original unums had sign, exponent, and fraction fields just like IEEE 
floats and obeying most of the same rules; those unums had three metadata 
fields, the “utag”, that described the exact-inexact state, the exponent 
size, and the fraction size, and they therefore had variable length. This 
new approach makes a complete break from IEEE float compatibility and 
redesigns the way we represent the infinite space of real numbers on a 
computer.

They are just as mathematically rigorous as before, but they clear up the 
remaining clunkiness of the IEEE format. They are so terse and so fast that 
you can think about solving very difficult equations by trying the entire 
real number line, overlooking nothing.

You can find all the slides here:

PDF: A Radical Approach to Computation with Real Numbers 


PPTX: A Radical Approach to Computation with Real Numbers 



I recommend the PowerPoint, because it has speaker notes (which is where I
took the above quotations from), clarifying a lot.

Gustafson also gave two replies to questions on the D-lang discussion board:


What I don't get is: is there an exponent anymore? I don't see any mention of
it.
> That's part of the breakthrough: the separation of bit fields that is so 
> deeply built into IEEE floats has far more drawbacks than advantages. That's 
> where all the problems with gradual underflow, wasted bit patterns on NaNs, 
> hidden bits, and wasted bit patterns in general stem from. It is possible to 
> extract the exponent, in the traditional sense, by an integer divide if the 
> lattice is one chosen to be self-similar as the dynamic range increases. The 
> integer divide need NOT be by a power of 2, nor do you need to do the divide 
> very often at all... think differently about the way we represent reals with 
> bit strings!


The basic idea for unums seems to be that you get an estimate of the bounds
and then recompute using higher precision or a better algorithm when
necessary. With regular floats you just get a noisy value, and you need much
heavier machinery to know whether you need to recompute using a better
algorithm or higher precision.
> Not quite. What you describe is a very old idea. When unums lose accuracy, 
> they become multiple unums in a row, or in a multidimensional volume 
> (uboxes). The next calculation starts not from the "interval" described by 
> the largest and smallest unum, but from each unum in the set; the results are 
> then combined as a set union, which leads to bounds that grow linearly 
> instead of exponentially.


PS: Robbert has opened up a Google Group for general Unum computing
discussion.

Re: [julia-users] Re: regression from 0.43 to 0.5dev, and back to 0.43 on fedora23

2016-02-26 Thread Yichao Yu
On Fri, Feb 26, 2016 at 10:28 AM, Kristoffer Carlsson
 wrote:
> What code and where is it spending time? You talk about openblas, does it
> mean that blas got slower for you? How about peakflops() on the different
> versions?
>
>
> On Friday, February 26, 2016 at 4:08:06 PM UTC+1, Johannes Wagner wrote:
>>
>> hey guys,
>> I just experienced something weird. I have some code that runs fine on
>> 0.43, then I updated to 0.5dev to test the new Arrays, run same code and
>> noticed it got about ~50% slower. Then I downgraded back to 0.43, ran the
>> old code, but speed remained slow. I noticed while reinstalling 0.43,
>> openblas-threads didn't get isntalled along with it. So I manually installed
>> it, but no change.
>> Does anyone has an idea what could be going on? LLVM on fedora23 is 3.7

Also, how did you install/compile the two versions.

>>
>> Cheers, Johannes


[julia-users] Re: regression from 0.43 to 0.5dev, and back to 0.43 on fedora23

2016-02-26 Thread Kristoffer Carlsson
What code is it, and where is it spending its time? You mention OpenBLAS;
does that mean BLAS got slower for you? How does peakflops() compare on the
different versions?
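
For reference, Base's `peakflops` times a double-precision matrix multiply
and reports the measured floating-point rate, which makes it a quick way to
compare raw BLAS throughput between two installs:

```julia
# Runs an n-by-n dgemm and returns the measured flop rate; run this on
# both installs and compare the numbers.
peakflops(2000)
```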

On Friday, February 26, 2016 at 4:08:06 PM UTC+1, Johannes Wagner wrote:
>
> hey guys,
> I just experienced something weird. I have some code that runs fine on 
> 0.43, then I updated to 0.5dev to test the new Arrays, run same code and 
> noticed it got about ~50% slower. Then I downgraded back to 0.43, ran the 
> old code, but speed remained slow. I noticed while reinstalling 0.43, 
> openblas-threads didn't get isntalled along with it. So I manually 
> installed it, but no change. 
> Does anyone has an idea what could be going on? LLVM on fedora23 is 3.7
>
> Cheers, Johannes
>


[julia-users] Re: Google Summer of Code 2016 - Ideas Page

2016-02-26 Thread Nilabhra Roy Chowdhury
Hi,

I just started with Julia a few days back. I found the ideas page and
thought I could contribute. Any pointers on where I should start? Maybe
fix some bugs?

On Thursday, February 11, 2016 at 9:19:19 AM UTC+5:30, Shashi Gowda wrote:
>
> Hi all,
>
> I have merged the previous ideas pages (2015, 2014) into a canonical one 
> https://julialang.org/soc/ideas-page 
> 
>  
> (and set up the appropriate redirects)
>
> Please add your Summer of Code ideas and edit previous ones here 
> https://github.com/JuliaLang/julialang.github.com/edit/master/soc/ideas-page.md
>
> Let us also try and keep this page updated all year round so that ideas 
> get carried over to the next summer.
>
> Julia will be applying for GSoC 2016. The organization application 
> deadline is on 19th, it will be nice to have a high quality ideas page by 
> then.
>
> Thank you
>


Re: [julia-users] Re: [julia-dev] Re: Google Summer of Code 2016 - Ideas Page

2016-02-26 Thread perez . hz
Hi Mauro,

I would like to submit a proposal to work on the ODE.jl package for GSoC.
From my undergraduate and master's theses I have experience with the Taylor
method for solving ODEs (i.e., based on Taylor series expansions). This is a
variable-order, variable-step-size method which uses automatic
differentiation techniques to reach high-order integration methods (30th,
40th order), enabling machine-epsilon precision at very competitive speeds.
I think the Taylor method is important to include in the ODE.jl package, as
it is very versatile and precise.

Besides the utility of the Taylor method for ODE integration, a DAE solver
could also be implemented using the Taylor models framework.
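
A minimal, hypothetical sketch of the recurrence behind the Taylor method,
for the scalar test problem x' = x (whose Taylor coefficients satisfy
c_k = c_{k-1}/k), just to illustrate why very high orders are cheap:

```julia
# One Taylor step of order n for x' = x, starting from x0, with step h.
function taylor_step(x0, h, n)
    coeffs = Array(Float64, n + 1)
    coeffs[1] = x0
    for k = 1:n
        coeffs[k + 1] = coeffs[k] / k   # c_k = c_{k-1}/k, from x' = x
    end
    s = 0.0
    for k = n:-1:0                      # Horner evaluation of the series
        s = s * h + coeffs[k + 1]
    end
    s
end

taylor_step(1.0, 1.0, 30)   # ≈ e, accurate to roughly machine epsilon
```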

I would be very happy to contribute to the ODE.jl package!

Best regards,

On Thursday, February 11, 2016 at 7:56:45 AM UTC-6, Mauro wrote:
>
>
> It is desirable to have ode-solvers which are pure Julia.  Both to cut 
> down on dependencies and to allow easy hacking and development. 
> Further, Sundials.jl will not work with generic Julia datatypes (e.g. I 
> think Julia sparse matrices are not supported for Jacobians).  Thus, 
> ODE.jl is to stay and to be improved on. 
>
> The currently ongoing work of which I'm aware is: 
> https://github.com/JuliaLang/ODE.jl/pull/49 
> https://github.com/JuliaLang/ODE.jl/pull/72 
>
> Needed work is: 
> - more solvers 
> - a unified code structure/API 
> - parallelism(?) 
>
> I'll try and update the GSoC description. 
>


[julia-users] regression from 0.43 to 0.5dev, and back to 0.43 on fedora23

2016-02-26 Thread Johannes Wagner
hey guys,
I just experienced something weird. I have some code that runs fine on
0.4.3; then I updated to 0.5-dev to test the new Arrays, ran the same code,
and noticed it got about ~50% slower. Then I downgraded back to 0.4.3 and
ran the old code, but speed remained slow. I noticed that while reinstalling
0.4.3, openblas-threads didn't get installed along with it, so I manually
installed it, but no change.
Does anyone have an idea what could be going on? LLVM on Fedora 23 is 3.7.

Cheers, Johannes


Re: [julia-users] Incrementally creating a tuple for.

2016-02-26 Thread Tim Holy
I'm not sure there's any reason you couldn't use a loop for the "K" axis in 
"M-by-K * K-by-N", but as you say the problem is that when using tuples you 
have to create the whole thing at once. So you'd need 81 calls to a function 
like

@noinline function loopdot(A::Mat{M,K}, B::Mat{K,N}, Arow, Bcol)
    s = zero(eltype(A)) * zero(eltype(B))
    for k = 1:K
        s += A[Arow, k] * B[k, Bcol]
    end
    s
end

(The @noinline will help guarantee that less code is generated.) Of course, 
LLVM might unroll that loop for you.

Best,
--Tim

On Friday, February 26, 2016 12:17:53 AM Kristoffer Carlsson wrote:
> Hello everyone.
> 
> To avoid the XY problem I will first describe my actual use case. I am
> writing a library for tensors to be used in continuum mechanics. These are
> in dimensions 1,2,3 and typically of order 1,2,4.
> Since tensor expressions are usually chained together like (A ⊗ B) * C ⋅ a
> I want to make these stack allocated so I can get good performance without
> making a bunch of mutating functions and having to preallocate everything
> etc.
> 
> The way you stack allocate vector like objects in julia right now seems to
> almost exclusively be with an immutable of a tuple. So my data structure
> right now looks similar to:
> 
> immutable FourthOrderTensor{T} <: AbstractArray{T, 4}
> data::NTuple{81, T}
> end
> 
> Base.size(::FourthOrderTensor) = (3,3,3,3)
> Base.linearindexing{T}(::Type{FourthOrderTensor{T}}) = Base.LinearFast()
> Base.getindex(S::FourthOrderTensor, i::Int) = S.data[i]
> 
> If you'd like, you can think of the NTuple{81, T} as actually represent a
> 9x9 matrix.
> 
> 
> The largest operations I will need to do is a double contraction between
> two fourth order tensors which is equivalent to a 9x9 matrix multiplied
> with another 9x9 matrix.
> 
> My actual question is, is there a good way to do this without generating a
> huge amount of code?
> 
> I have looked at for example the FixedSizeArrays package but there they
> seem to simply unroll every operation. For a 9x9 * 9x9 matrix
> multiplication this unrolling creates a huge amount of code and doesn't
> feel right.
> 
> I guess what I want is pretty
> much https://github.com/JuliaLang/julia/issues/11902 but is there any
> workaround as of now?
> 
> Thanks!
> 
> // Kristoffer



Re: [julia-users] Re: Incrementally creating a tuple for.

2016-02-26 Thread Kristoffer Carlsson
Thanks for your reply, Erik. You always have good suggestions.

On Friday, February 26, 2016 at 2:10:48 PM UTC+1, Erik Schnetter wrote:
>
> Kristoffer 
>
> You could try some simple "placeholder" operations, such as e.g. 
> creating a new tensor from an old tensor by only replacing a few 
> indices (or by adding them). Then you can compare speed between tuples 
> and arrays. Unless you see a speedup, this might not be worth 
> pursuing.  
>

I looked a bit at https://github.com/StephenVavasis/Modifyfield.jl and it 
seems the compiler is quite good at removing redundant writes.


> If the bottleneck in your code is allocating the tensors, then using 
> (or: automatically generating?) operations that re-use existing 
> tensors might be the answer. 
>

Yes, the bottleneck is in allocation. Regarding mutating functions, it is 
definitely a possibility, but then you need to pass in a buffer, and the 
code gets a lot uglier because you are basically prohibiting infix operators.

Compare (exaggerated for effect)

(A ⊗ B) * C ⋅ a 

and 

dot!(ABCA_buff, dcontract!(ABC_buffer, C, otimes!(AB_buffer, A, B)), a)


It is of course possible to create a dict to hold temporary arrays of the 
correct shape and type; however, you need to be careful so you don't get 
wrong results due to aliasing. Also, I have found that caching arrays in a 
dict is often quite slow.

Maybe this whole thing would be solved 
by https://github.com/JuliaLang/julia/pull/8134 combined 
with https://github.com/JuliaLang/julia/pull/12205.
 

>
> To reduce code duplication, you can try using the `ntuple` function, 
> and passing an actual function as argument. If this function isn't 
> inlined, then every operation on a 9x9 tensor is unrolled to "only" 81 
> operations, not to 81x9 operations. (Or maybe ntuple actually uses a 
> loop internally?) 
>
> If you then represent a matrix as a 9-tuple of 9-tuples, you might 
> reduce unrolling to 9 operations per function. 
>

Yes, maybe computing one column at a time by using a nested tuple is the way 
to go. Like you say, it would give 81 operations and not the 700 I 
currently get. I think this is actually how FixedSizeArrays does it.
 

>
> Finally -- you can use `llvmcall` to generate low-level operations, 
> and in these low-level operations you can explicitly program loops. Of 
> course, this is likely a tad more low-level than you'd want. 
>
> I'm not aware of another way. 
>
> -erik 
>
>
> On Fri, Feb 26, 2016 at 3:25 AM, Kristoffer Carlsson 
> > wrote: 
> > Botched the title a bit. Oh well. 
> > 
> > 
> > On Friday, February 26, 2016 at 9:17:54 AM UTC+1, Kristoffer Carlsson 
> wrote: 
> >> 
> >> Hello everyone. 
> >> 
> >> To avoid the XY problem I will first describe my actual use case. I am 
> >> writing a library for tensors to be used in continuum mechanics. These 
> are 
> >> in dimensions 1,2,3 and typically of order 1,2,4. 
> >> Since tensor expressions are usually chained together like (A ⊗ B) * C 
> ⋅ a 
> >> I want to make these stack allocated so I can get good performance 
> without 
> >> making a bunch of mutating functions and having to preallocate 
> everything 
> >> etc. 
> >> 
> >> The way you stack allocate vector like objects in julia right now seems 
> to 
> >> almost exclusively be with an immutable of a tuple. So my data 
> structure 
> >> right now looks similar to: 
> >> 
> >> immutable FourthOrderTensor{T} <: AbstractArray{T, 4} 
> >> data::NTuple{81, T} 
> >> end 
> >> 
> >> Base.size(::FourthOrderTensor) = (3,3,3,3) 
> >> Base.linearindexing{T}(::Type{FourthOrderTensor{T}}) = 
> Base.LinearFast() 
> >> Base.getindex(S::FourthOrderTensor, i::Int) = S.data[i] 
> >> 
> >> If you'd like, you can think of the NTuple{81, T} as actually represent 
> a 
> >> 9x9 matrix. 
> >> 
> >> 
> >> The largest operations I will need to do is a double contraction 
> between 
> >> two fourth order tensors which is equivalent to a 9x9 matrix multiplied 
> with 
> >> another 9x9 matrix. 
> >> 
> >> My actual question is, is there a good way to do this without 
> generating a 
> >> huge amount of code? 
> >> 
> >> I have looked at for example the FixedSizeArrays package but there they 
> >> seem to simply unroll every operation. For a 9x9 * 9x9 matrix 
> multiplication 
> >> this unrolling creates a huge amount of code and doesn't feel right. 
> >> 
> >> I guess what I want is pretty much 
> >> https://github.com/JuliaLang/julia/issues/11902 but is there any 
> workaround 
> >> as of now? 
> >> 
> >> Thanks! 
> >> 
> >> // Kristoffer 
>
>
>
> -- 
> Erik Schnetter > 
> http://www.perimeterinstitute.ca/personal/eschnetter/ 
>


Re: [julia-users] Re: Incrementally creating a tuple for.

2016-02-26 Thread Erik Schnetter
Kristoffer

You could try some simple "placeholder" operations, such as e.g.
creating a new tensor from an old tensor by only replacing a few
indices (or by adding them). Then you can compare speed between tuples
and arrays. Unless you see a speedup, this might not be worth
pursuing.

If the bottleneck in your code is allocating the tensors, then using
(or: automatically generating?) operations that re-use existing
tensors might be the answer.

To reduce code duplication, you can try using the `ntuple` function,
and passing an actual function as argument. If this function isn't
inlined, then every operation on a 9x9 tensor is unrolled to "only" 81
operations, not to 81x9 operations. (Or maybe ntuple actually uses a
loop internally?)

If you then represent a matrix as a 9-tuple of 9-tuples, you might
reduce unrolling to 9 operations per function.
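
A rough sketch of that nested-tuple layout (hypothetical `Mat9`, `getel`,
`entry`, and `matmul` names; assumes Julia 0.4-era `immutable` and the
`ntuple(f, n)` calling convention):

```julia
# A 9x9 matrix stored as a 9-tuple of 9-tuple columns.
immutable Mat9{T}
    cols::NTuple{9, NTuple{9, T}}   # column-major: cols[j][i] == A[i,j]
end

getel(A::Mat9, i, j) = A.cols[j][i]

# One entry of A*B: a 9-term dot product. @noinline keeps the 81 call
# sites below from each expanding into 9 inlined operations.
@noinline entry(A::Mat9, B::Mat9, i, j) =
    sum(ntuple(k -> getel(A, i, k) * getel(B, k, j), 9))

# The full product: 81 calls to `entry`, one per result entry.
matmul(A::Mat9, B::Mat9) =
    Mat9(ntuple(j -> ntuple(i -> entry(A, B, i, j), 9), 9))
```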

Finally -- you can use `llvmcall` to generate low-level operations,
and in these low-level operations you can explicitly program loops. Of
course, this is likely a tad more low-level than you'd want.

I'm not aware of another way.

-erik


On Fri, Feb 26, 2016 at 3:25 AM, Kristoffer Carlsson
 wrote:
> Botched the title a bit. Oh well.
>
>
> On Friday, February 26, 2016 at 9:17:54 AM UTC+1, Kristoffer Carlsson wrote:
>>
>> Hello everyone.
>>
>> To avoid the XY problem I will first describe my actual use case. I am
>> writing a library for tensors to be used in continuum mechanics. These are
>> in dimensions 1,2,3 and typically of order 1,2,4.
>> Since tensor expressions are usually chained together like (A ⊗ B) * C ⋅ a
>> I want to make these stack allocated so I can get good performance without
>> making a bunch of mutating functions and having to preallocate everything
>> etc.
>>
>> The way you stack allocate vector like objects in julia right now seems to
>> almost exclusively be with an immutable of a tuple. So my data structure
>> right now looks similar to:
>>
>> immutable FourthOrderTensor{T} <: AbstractArray{T, 4}
>> data::NTuple{81, T}
>> end
>>
>> Base.size(::FourthOrderTensor) = (3,3,3,3)
>> Base.linearindexing{T}(::Type{FourthOrderTensor{T}}) = Base.LinearFast()
>> Base.getindex(S::FourthOrderTensor, i::Int) = S.data[i]
>>
>> If you'd like, you can think of the NTuple{81, T} as actually represent a
>> 9x9 matrix.
>>
>>
>> The largest operations I will need to do is a double contraction between
>> two fourth order tensors which is equivalent to a 9x9 matrix multiplied with
>> another 9x9 matrix.
>>
>> My actual question is, is there a good way to do this without generating a
>> huge amount of code?
>>
>> I have looked at for example the FixedSizeArrays package but there they
>> seem to simply unroll every operation. For a 9x9 * 9x9 matrix multiplication
>> this unrolling creates a huge amount of code and doesn't feel right.
>>
>> I guess what I want is pretty much
>> https://github.com/JuliaLang/julia/issues/11902 but is there any workaround
>> as of now?
>>
>> Thanks!
>>
>> // Kristoffer



-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: deepcopy_internal and arrays

2016-02-26 Thread 'Bill Hart' via julia-users
Thanks. I appreciate the reply.

On 26 February 2016 at 09:32, Mauro  wrote:

> > However, could you please explain to me what is involved in updating
> dict?
> > I understand an ObjectIdDict is a hash table whose keys are object ID's.
> > But the documentation doesn't tell me how to generate such a key for my
> > object, nor what value to insert in the dict when overloading
> > deepcopy_internal. I presume the object itself is used as the key? But
> what
> > value should be inserted?
>
> Looking at the code in Base:
>
> julia> @less Base.deepcopy_internal([1,2], ObjectIdDict())
>
> suggests to me that the key is the object to be copied and the value is
> the copy.
>
> Aside: the function should be renamed to deepcopy_internal! as it
> modifies the dict.
>
> > On 25 February 2016 at 21:01, Yichao Yu  wrote:
> >
> >> On Thu, Feb 25, 2016 at 2:57 PM, Toivo Henningsson  >
> >> wrote:
> >> > It seems very reasonable that you should be able to overload deepcopy
> >> for a given type, and that if that has to be done on a specific way, it
> >> should be mentioned in the documentation for deepcopy. Open an issue?
> >>
> >> Please read the doc first.
> >>
> >> overloading deepcopy is supported and documented
> >>
> >>
> >>
> http://julia.readthedocs.org/en/latest/stdlib/base/?highlight=deepcopy#Base.deepcopy
> >>
>


Re: [julia-users] Re: deepcopy_internal and arrays

2016-02-26 Thread Mauro
> However, could you please explain to me what is involved in updating dict?
> I understand an ObjectIdDict is a hash table whose keys are object ID's.
> But the documentation doesn't tell me how to generate such a key for my
> object, nor what value to insert in the dict when overloading
> deepcopy_internal. I presume the object itself is used as the key? But what
> value should be inserted?

Looking at the code in Base:

julia> @less Base.deepcopy_internal([1,2], ObjectIdDict())

suggests to me that the key is the object to be copied and the value is
the copy.

Aside: the function should be renamed to deepcopy_internal! as it
modifies the dict.
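
To make that concrete, here is a minimal sketch of such an overload (the
type `Wrap` is hypothetical, and the pattern follows the convention inferred
from the Base source above, not documented behavior):

```julia
# Hypothetical container type used only to demonstrate the overload.
type Wrap
    data::Vector{Int}
end

function Base.deepcopy_internal(x::Wrap, dict::ObjectIdDict)
    # If this object was already copied during the current deepcopy,
    # return the existing copy so shared references stay shared.
    haskey(dict, x) && return dict[x]
    y = Wrap(Base.deepcopy_internal(x.data, dict))
    dict[x] = y    # key: the original object, value: its copy
    return y
end
```

With this in place, `deepcopy(w)` on a `Wrap` goes through the overload and
yields an independent copy of the wrapped vector.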

> On 25 February 2016 at 21:01, Yichao Yu  wrote:
>
>> On Thu, Feb 25, 2016 at 2:57 PM, Toivo Henningsson 
>> wrote:
>> > It seems very reasonable that you should be able to overload deepcopy
>> for a given type, and that if that has to be done in a specific way, it
>> should be mentioned in the documentation for deepcopy. Open an issue?
>>
>> Please read the doc first.
>>
>> overloading deepcopy is supported and documented
>>
>>
>> http://julia.readthedocs.org/en/latest/stdlib/base/?highlight=deepcopy#Base.deepcopy
>>


[julia-users] Re: Incrementally creating a tuple for.

2016-02-26 Thread Kristoffer Carlsson
Botched the title a bit. Oh well.

On Friday, February 26, 2016 at 9:17:54 AM UTC+1, Kristoffer Carlsson wrote:
>
> Hello everyone.
>
> To avoid the XY problem, I will first describe my actual use case. I am 
> writing a library for tensors to be used in continuum mechanics. These are 
> in dimensions 1, 2, 3 and typically of order 1, 2, 4. 
> Since tensor expressions are usually chained together like (A ⊗ B) * C ⋅ a, 
> I want to make these stack-allocated so I can get good performance without 
> writing a bunch of mutating functions and having to preallocate everything, 
> etc.
>
> The way you stack-allocate vector-like objects in Julia right now seems to 
> be almost exclusively with an immutable wrapping a tuple. So my data 
> structure right now looks similar to:
>
> immutable FourthOrderTensor{T} <: AbstractArray{T, 4}
> data::NTuple{81, T}
> end
>
> Base.size(::FourthOrderTensor) = (3,3,3,3)
> Base.linearindexing{T}(::Type{FourthOrderTensor{T}}) = Base.LinearFast()
> Base.getindex(S::FourthOrderTensor, i::Int) = S.data[i]
>
> If you'd like, you can think of the NTuple{81, T} as actually representing 
> a 9x9 matrix.
>
>
> The largest operation I will need to do is a double contraction between 
> two fourth-order tensors, which is equivalent to multiplying a 9x9 matrix 
> with another 9x9 matrix.
>
> My actual question is: is there a good way to do this without generating a 
> huge amount of code?
>
> I have looked at, for example, the FixedSizeArrays package, but there they 
> seem to simply unroll every operation. For a 9x9 * 9x9 matrix 
> multiplication this unrolling creates a huge amount of code and doesn't 
> feel right.
>
> I guess what I want is pretty much 
> https://github.com/JuliaLang/julia/issues/11902 but is there any 
> workaround as of now?
>
> Thanks!
>
> // Kristoffer
>


[julia-users] Incrementally creating a tuple for.

2016-02-26 Thread Kristoffer Carlsson
Hello everyone.

To avoid the XY problem, I will first describe my actual use case. I am 
writing a library for tensors to be used in continuum mechanics. These are 
in dimensions 1, 2, 3 and typically of order 1, 2, 4. 
Since tensor expressions are usually chained together like (A ⊗ B) * C ⋅ a, 
I want to make these stack-allocated so I can get good performance without 
writing a bunch of mutating functions and having to preallocate everything, 
etc.

The way you stack-allocate vector-like objects in Julia right now seems to 
be almost exclusively with an immutable wrapping a tuple. So my data 
structure right now looks similar to:

immutable FourthOrderTensor{T} <: AbstractArray{T, 4}
    data::NTuple{81, T}
end

Base.size(::FourthOrderTensor) = (3,3,3,3)
Base.linearindexing{T}(::Type{FourthOrderTensor{T}}) = Base.LinearFast()
Base.getindex(S::FourthOrderTensor, i::Int) = S.data[i]

If you'd like, you can think of the NTuple{81, T} as actually representing 
a 9x9 matrix.


The largest operation I will need to do is a double contraction between 
two fourth-order tensors, which is equivalent to multiplying a 9x9 matrix 
with another 9x9 matrix.
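
For concreteness, that operation can be written as a plain triple loop over 
the tuple data (a sketch only; the name `dcontract` and the column-major 
9x9 interpretation of the NTuple{81} are my assumptions):

```julia
# Double contraction of two fourth-order tensors, viewing each
# NTuple{81} as a 9x9 matrix stored column-major: C[i,j] = sum_k A[i,k]*B[k,j].
# No unrolling: three nested loops plus one temporary array.
function dcontract(A::NTuple{81,Float64}, B::NTuple{81,Float64})
    C = zeros(Float64, 81)
    for j in 1:9, i in 1:9
        s = 0.0
        for k in 1:9
            s += A[i + 9*(k - 1)] * B[k + 9*(j - 1)]
        end
        C[i + 9*(j - 1)] = s
    end
    return tuple(C...)   # heap-allocated temporary; avoiding this is the hard part
end
```

The temporary array and the final splat are exactly the kind of overhead a 
stack-allocated tuple is supposed to avoid, which is what motivates the 
question below.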

My actual question is: is there a good way to do this without generating a 
huge amount of code?

I have looked at, for example, the FixedSizeArrays package, but there they 
seem to simply unroll every operation. For a 9x9 * 9x9 matrix 
multiplication this unrolling creates a huge amount of code and doesn't 
feel right.

I guess what I want is pretty much 
https://github.com/JuliaLang/julia/issues/11902, but is there any 
workaround as of now?

Thanks!

// Kristoffer