[julia-users] Re: kernel restarts after running code

2015-04-24 Thread David P. Sanders


On Friday, April 24, 2015 at 21:44:00 (UTC-5), Pooya wrote:
>
> I have been using julia in Mac Terminal and IJulia notebook for a while 
> and they have been fine. But now I am getting this error: "The kernel 
> appears to have died. It will restart automatically." in IJulia, and the 
> following in Terminal after I run one of my codes. It restarts the kernel 
> in both cases! Can anyone help?
>
> julia(8724,0x7fff7315e310) malloc: *** error for object 0x7feed745df10: 
> pointer being realloc'd was not allocated
>
> *** set a breakpoint in malloc_error_break to debug
>
> signal (6): Abort trap: 6
>
> __pthread_kill at /usr/lib/system/libsystem_kernel.dylib (unknown line)
>
> Abort trap: 6
>

My usual approach to these kinds of problems (i.e. where it's unclear 
what the problem is and things were previously working!) is the 
following sequence, testing after each step whether it works again:

(i) Pkg.update()   

(ii) Pkg.build("IJulia")

(iii) Remove the entire .julia directory and reinstall IJulia with 
Pkg.add("IJulia").  (This is overkill compared with just getting the 
latest version of all dependencies, but it can't do any harm...)

Of course it might be an actual bug, in which case these steps won't help 
at all...


[julia-users] Re: Online regression algorithms

2015-04-24 Thread Iain Dunning
I think the closest thing is
https://github.com/johnmyleswhite/StreamStats.jl
Now, whether those operations should live in there or be built on top of it, 
I'm not sure. But definitely open an issue there to get a discussion going.

Cheers,
Iain

On Friday, April 24, 2015 at 5:13:15 PM UTC-4, Tom Breloff wrote:
>
> I'm considering writing packages for the following online (i.e. updating 
> models on the fly as new data arrives) techniques, but this functionality 
> might exist already, or there might be a package that I should contribute 
> to instead of writing my own:
>
>- Online PCA (such as "Candid covariance-free incremental principal 
>component analysis")
>- Online flexible least squares (time-varying regression weights)
>- Online support vector machines/regressions
>
> Are there any packages that might have this functionality, or even a good 
> framework that I could/should add to?  Does anyone else have a need for 
> these algorithms?
>
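For readers wondering what "streaming" statistics look like in practice, here is a minimal sketch (not StreamStats.jl's actual API; the type and function names here are made up for illustration) of Welford's online mean/variance update, the kind of constant-memory, one-pass computation such a package maintains:

```julia
# Welford's algorithm: update mean and variance one observation at a
# time, never storing the data. Names are illustrative, not any
# package's real API.
mutable struct OnlineVar
    n::Int
    mean::Float64
    m2::Float64      # sum of squared deviations from the running mean
end
OnlineVar() = OnlineVar(0, 0.0, 0.0)

function update!(o::OnlineVar, x::Real)
    o.n += 1
    d = x - o.mean
    o.mean += d / o.n
    o.m2 += d * (x - o.mean)
    return o
end

variance(o::OnlineVar) = o.m2 / (o.n - 1)   # sample variance

o = OnlineVar()
for x in (1.0, 2.0, 3.0, 4.0)
    update!(o, x)
end
(o.mean, variance(o))   # (2.5, ≈1.667)
```

The same sufficient-statistics idea generalizes to the regression and PCA variants Tom lists.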


Re: [julia-users] Re: Defining a function in different modules

2015-04-24 Thread elextr


On Saturday, April 25, 2015 at 12:55:39 PM UTC+10, Michael Francis wrote:
>
> the resolution of that issue seems odd -  If I have two completely 
> unrelated libraries. Say DataFrames and one of my own. I export value( 
> ::MyType) I'm happily using it. Some time later I Pkg.update(), unbeknownst 
> to me the DataFrames dev team have added an export of value( ::DataFrame, 
> ...) suddenly all my code which imports both breaks and I have to go 
> through the entire stack qualifying the calls, as do other users of my 
> module? That doesn't seem right, there is no ambiguity I can see and the 
> multiple dispatch should continue to work correctly. 
>
> Fundamentally I want the two value() functions to collapse and not have to 
> qualify them. If there is a dispatch ambiguity then game over, but if there 
> isn't I don't see any advantage (and lots of negatives) to preventing the 
> import. 
>
> I'd argue the same is true with overloading methods in Base. Why would we 
> locally mask get if there is no dispatch ambiguity even if I don't 
> importall Base. 
>
> Qualifying names seems like an anti pattern in a multiple dispatch world. 
> Except for those edge cases where there is an ambiguity of dispatch. 
>
> Am I missing something? Perhaps I don't understand multiple dispatch well 
> enough?


IIUC the problem is not where you have two distinct and totally separate 
types to dispatch on; it's when one module also defines methods on a common 
parent type (think ::Any).  That module expects the concrete types it 
defines methods for to dispatch to those methods, and all other types to 
dispatch to the method defined for the abstract parent type.

But if methods from another module were combined, that behaviour would 
change silently: some types would dispatch to the second module's methods.

But nothing says these two functions do the same thing just because they 
have the same name.  The example I usually use is that the `bark()` 
function from the `Tree` module is likely to be different from the `bark()` 
function in the `Dog` module.  So if functions are combined by name, mixing 
modules can silently change the behaviour of existing code, and that's "not 
a good thing".

To extend an existing function in another module you can do so explicitly, 
i.e. by naming `Module1.funct()`, so that the behaviour of the function can 
be extended by methods for other types; by explicitly naming the function, 
you are guaranteeing that the methods have acceptable semantics to combine.
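The Tree/Dog example can be made concrete. A minimal sketch (using current `struct` syntax rather than the `type` keyword of 2015-era Julia; the module, type, and return values are all invented for illustration):

```julia
# Two unrelated modules that happen to export the same function name.
module Tree
bark() = "rough outer layer"   # bark, the noun
end

module Dog
bark() = "woof!"               # bark, the verb
end

# The two functions never merge; each use is qualified:
Tree.bark()
Dog.bark()

# Extending Dog's function is an explicit, qualified act, so the
# programmer is vouching that the semantics are compatible:
struct Puppy end
Dog.bark(::Puppy) = "yip!"
Dog.bark(Puppy())
```

Had the two `bark` functions been silently combined on name, loading `Tree` could change which method a `Dog` caller hits, which is exactly the silent behaviour change described above.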
 


Re: [julia-users] Re: Defining a function in different modules

2015-04-24 Thread Michael Francis
The resolution of that issue seems odd.  Suppose I have two completely 
unrelated libraries, say DataFrames and one of my own.  I export 
value( ::MyType ) and am happily using it.  Some time later I Pkg.update(), 
and unbeknownst to me the DataFrames dev team has added an export of 
value( ::DataFrame, ... ).  Suddenly all my code which imports both breaks, 
and I have to go through the entire stack qualifying the calls, as do other 
users of my module?  That doesn't seem right: there is no ambiguity that I 
can see, and multiple dispatch should continue to work correctly.

Fundamentally I want the two value() functions to collapse into one and not 
have to qualify them.  If there is a dispatch ambiguity then game over, but 
if there isn't, I don't see any advantage (and lots of negatives) to 
preventing the import.

I'd argue the same is true of overloading methods in Base.  Why should get 
be locally masked when there is no dispatch ambiguity, even if I don't 
importall Base?

Qualifying names seems like an anti-pattern in a multiple-dispatch world, 
except for those edge cases where there is a genuine ambiguity of dispatch.

Am I missing something? Perhaps I don't understand multiple dispatch well 
enough? 

[julia-users] kernel restarts after running code

2015-04-24 Thread Pooya
 

I have been using julia in Mac Terminal and IJulia notebook for a while and 
they have been fine. But now I am getting this error: "The kernel appears 
to have died. It will restart automatically." in IJulia, and the following 
in Terminal after I run one of my codes. It restarts the kernel in both 
cases! Can anyone help?

julia(8724,0x7fff7315e310) malloc: *** error for object 0x7feed745df10: 
pointer being realloc'd was not allocated

*** set a breakpoint in malloc_error_break to debug

signal (6): Abort trap: 6

__pthread_kill at /usr/lib/system/libsystem_kernel.dylib (unknown line)

Abort trap: 6


Re: [julia-users] Re: Defining a function in different modules

2015-04-24 Thread elextr


On Saturday, April 25, 2015 at 4:56:58 AM UTC+10, Stefan Karpinski wrote:
>
> For anyone who isn't following changes to Julia master closely, Jeff 
> closed #4345  yesterday, 
> which addresses one major concern of "programming in the large".
>
> I think the other concern about preventing people from intentionally or 
> accidentally monkey-patching is very legitimate as well, but it's way less 
> clear what to do about it. I've contemplated the idea of not allowing a 
> module to add methods to a generic function unless it "owns" the function 
> or one of the argument types, but that feels like such a fussy rule, I 
> don't think it's the right solution. But I haven't come up with anything 
> better either.
>

I would have thought stopping intentional behaviour is non-Julian, but 
accidental errors should indeed be limited.  Perhaps adding methods to 
other modules' functions needs to be explicit.
 

>
> On Wed, Apr 22, 2015 at 6:19 PM, Jeff Bezanson wrote:
>
>> I think it's reasonable to adopt a convention in some code of not using 
>> `using`.
>>
>> Another way to look at this is that a library author could affect name
>> visibility in somebody else's code by adjusting the signature of a
>> method. That doesn't seem like a desirable interaction to me. Often
>> somebody might initially define foo(::Image), and then later realize
>> it's actually applicable to any array, and change it to
>> foo(::AbstractArray). Doing that shouldn't cause any major fuss.
>>
>> On Wed, Apr 22, 2015 at 5:58 PM, Michael Francis wrote:
>> > You are correct it is restrictive, though I will take some convincing 
>> that
>> > this is a bad thing, as systems get larger in Julia it is going to be
>> > increasingly important to manage code reuse and prevent accidental 
>> masking
>> > of types. Multiple dispatch is a wonderful tool for supporting these 
>> goals.
>> > Unfortunately allowing people the ability to export get() et al 
>> to
>> > the users scope seems like a bad idea. This is already happening from
>> > modules today. Perhaps the middle ground is to force an explicit 
>> import, so
>> > using only imports functions which have types defined in the module.  
>> The
>> > person defining the module exports all the functions they want but only
>> > those that are 'safe' e.g. follow my original rule are implicitly 
>> imported.
>> > Hence you would have something like the following code. This is not far
>> > different from the importall today, except that the exports are
>> > automatically restricted.
>> >
>> > using MyMath # Imports only those functions which include types
>> > defined in MyMath
>> > import MyMath.*  # Imports all other  functions  defined in MyMath
>> > import MyMath.afunc  # Imports  one  function
>> > import MyOther.afunc # Fails collides with MyMath.afunc
>> >
>> >
>> >
>> >
>> >
>> >
>> > On Wednesday, April 22, 2015 at 4:40:03 PM UTC-4, Jeff Bezanson wrote:
>> >>
>> >> That rule seems extremely restrictive to me. It would be very common,
>> >> for example, to create a library of functions that operate on standard
>> >> data types like numbers and arrays. I don't see that we can exclude
>> >> that kind of use.
>> >>
>> >> Also, printing a warning is not the key part of #4345. The important
>> >> part is that you'd have to qualify names in that case, which is the
>> >> same thing that would happen if `export`ing the names were disallowed.
>> >>
>> >>
>> >> On Wed, Apr 22, 2015 at 8:47 AM, Michael Francis 
>> >> wrote:
>> >> > I read through the issues / threads ( and some others )
>> >> >
>> >> > https://github.com/JuliaLang/julia/issues/2327
>> >> > https://github.com/JuliaLang/julia/issues/4345
>> >> >
>> >> > I'm not sure that the either the SuperSecretBase or the warning are 
>> the
>> >> > correct approach. I'd like to propose a counter which is a very 
>> simple
>> >> > rule.
>> >> >
>> >> > "You can only export functions from a module where they reference at
>> >> > least
>> >> > one type defined in the module."
>> >> >
>> >> > There may have to be a slight tweak for Base, though it is not hard 
>> to
>> >> > argue
>> >> > that the primitive types are defined in Base.
>> >> >
>> >> > so
>> >> >
>> >> > module Module1
>> >> > type Bar end
>> >> > my( b::Bar ) = 1
>> >> >
>> >> > export my   # fine exports to the global space
>> >> > end
>> >> >
>> >> > module Module2
>> >> > type Foo end
>> >> > my() = 1
>> >> >
>> >> > export my   # ERROR exporting function which does not reference
>> >> > local
>> >> > type
>> >> > end
>> >> >
>> >> > module Module3
>> >> > type Wow end
>> >> > my( w::Wow ) = 1
>> >> > my() = 1
>> >> > end
>> >> > export my   # Is an ERROR I can not export a function which does 
>> not
>> >> > reference a local type
>> >> > end
>> >> >
>> >> > So in the example provided my Mike above, multiple dispatch would do 
>> the
>> >> > right thing. If I also want to define a function for value in my 
>> module
>>

Re: [julia-users] in-place fn in a higher order fn

2015-04-24 Thread elextr


On Friday, April 24, 2015 at 8:11:52 PM UTC+10, Mauro wrote:
>
> >> Well it seems Julia should know that nothing is used from fn!, without 
> >> knowing anything about fn!.  That is at least what @code_warntype 
> >> suggest (with julia --inline=no).  For 
> >> 
> >> function f(ar) 
> >> for i=1:n 
> >> hh!(ar, i) 
> >> end 
> >> end 
> >> 
> >> the loop gives: 
> >> 
> >>   GenSym(1) = $(Expr(:call1, :(top(next)), GenSym(0), 
> :(#s1::Int64))) 
> >>   i = $(Expr(:call1, :(top(getfield)), GenSym(1), 1)) 
> >>   #s1 = $(Expr(:call1, :(top(getfield)), GenSym(1), 2)) # line 71: 
> >>   $(Expr(:call1, :hh!, :(ar::Array{Float64,1}), :(i::Int64))) 
> >>   3: 
> >>   unless $(Expr(:call1, :(top(!)), :($(Expr(:call1, :(top(!)), 
> >> :($(Expr(:call1, :(top(done)), GenSym(0), :(#s1::Int64) goto 2 
> >> 
> >> For 
> >> function g(fn!,ar) 
> >> a = 0 
> >> for i=1:n 
> >> fn!(ar, i) 
> >> end 
> >> a 
> >> end 
> >> 
> >> the loop gives: 
> >>   2: 
> >>   GenSym(1) = $(Expr(:call1, :(top(next)), GenSym(0), 
> :(#s1::Int64))) 
> >>   i = $(Expr(:call1, :(top(getfield)), GenSym(1), 1)) 
> >>   #s1 = $(Expr(:call1, :(top(getfield)), GenSym(1), 2)) # line 78: 
> >>   (fn!::F)(ar::Array{Float64,1},i::Int64)::Any 
> >> 
> > 
> > Wouldn't it (or fn!) need to allocate for this Any here ^ 
> > 
> > IIUC its fn! that decides if it returns something, and even if the 
> caller 
> > doesn't need it, the return value still has to be stored somewhere. 
>
> I think this optimisation should work irrespective of what fn! returns 
> by the fact that the value is not used.  This and more seems to happen 
> in the first-order function.  Here a version of first-order 
> function which calls a function which returns an inferred Any: 
>
> const aa = Any[1] 
> hh_ret!(ar,i) = (ar[i] = hh(ar[i]); aa[1]) 
>
> function f_ret(ar) 
> a = aa[1] 
> for i=1:n 
> a=hh_ret!(ar, i) 
> end 
> a 
> end 
> julia> @time f_ret(a); 
> elapsed time: 0.259893608 seconds (160 bytes allocated) 
>
> It's still fast and doesn't allocate, even though it uses the value! 
>

The `a = aa[1]` is unused and can be removed (well, if n >= 1).

Then I would have thought that hh_ret() is first inlined; the loop body is 
then visible to the optimiser, and the unused value is removed as the whole 
thing is optimised to ar[n].

But if hh_ret() were passed to f_ret() it could only be inlined if f_ret() 
were re-compiled for each call with a new function parameter.  That's not 
impossible in a dynamic language like Julia.  But it still may not work if 
the function being passed as a parameter is the result of an expression 
that can't be resolved at compile time.

As Tim says, it's all optimisations that haven't yet been written :)

 

>
> > Maybe fn! does the allocating, but it still happens. 
>
> It's a different story if there is actual allocation in fn!. 
>
> >>   3: 
> >>   unless $(Expr(:call1, :(top(!)), :($(Expr(:call1, :(top(!)), 
> >> :($(Expr(:call1, :(top(done)), GenSym(0), :(#s1::Int64) goto 2 
> >> 
> >> So, at least from my naive viewpoint, it looks like there is no need 
> for 
> >> an allocation in this case.  Or is there ever a case when the return 
> >> value of fn! would be used? 
> >> 
> >> I think this is quite different from the case when the return value of 
> >> fn! is used because then, as long as Julia cannot do type inference on 
> >> the value of fn!, it cannot know what the type is. 
> >> 
> > 
> > Which is why its ::Any above I guess and my understanding is Any means 
> > boxed and allocated on the heap? 
>
> Thanks for indulging me! 
>


[julia-users] using ctrl-p to navigate history

2015-04-24 Thread Christian Peel
I'd like to use ^p and ^n to navigate the REPL history the way that the up
and down arrows do now in the latest v0.4 builds.  That is, when I type
"a" at the REPL and then type ctrl-p, I'd like to see the most recent
command I executed that started with "a" (see
https://github.com/JuliaLang/julia/issues/6377 ).  Here is the code that I
put in my .juliarc.jl file; it doesn't seem to work on either 0.3.7 or a
build from today.  Any suggestions?
--
import Base: LineEdit, REPL
if VERSION >= v"0.3.9"
    const mykeys = Dict{Any,Any}(
        "^P" => (s, o...) -> (LineEdit.edit_move_up(s) ||
                              LineEdit.history_prev(s, LineEdit.mode(s).hist)),
        "^N" => (s, o...) -> (LineEdit.edit_move_down(s) ||
                              LineEdit.history_next(s, LineEdit.mode(s).hist))
    )
    function customize_keys(repl)
        repl.interface = REPL.setup_interface(repl; extra_repl_keymap = mykeys)
    end
    atreplinit(customize_keys)
else
    const mykeys = {
        "^P" => (s, o...) -> (LineEdit.edit_move_up(s) ||
                              LineEdit.history_prev(s, LineEdit.mode(s).hist)),
        "^N" => (s, o...) -> (LineEdit.edit_move_down(s) ||
                              LineEdit.history_next(s, LineEdit.mode(s).hist)),
    }
    Base.active_repl.interface =
        REPL.setup_interface(Base.active_repl; extra_repl_keymap = mykeys)
end
--
FYI  I'm using OS X, I have the caps and ctrl swapped, and I'm running
Julia from Terminal.


-- 
chris.p...@ieee.org


Re: [julia-users] Re: Is there a plotting package that works for a current 0.4 build?

2015-04-24 Thread Tim Holy
I have commit access, and I went ahead and hit the merge button there.

--Tim

On Friday, April 24, 2015 12:38:06 PM Tony Kelman wrote:
> I have a PR open on Winston that at least makes its tests pass, though I
> didn't try much beyond that. I don't know whether anyone other than Mike
> Nolta has commit access there, and we haven't seen him on github for a
> month or two. Hopefully everything is alright and he's just been busy.
> 
> -Tony
> 
> On Friday, April 24, 2015 at 3:58:55 AM UTC-7, Tomas Lycken wrote:
> > I git pull-ed and built Julia this morning, and I can't get any of PyPlot,
> > Winston or Gadfly or TextPlots to show me anything (either loading the
> > package, or running something equivalent to plot(rand(15)), borked).
> > 
> > Is there any plotting tool that's had time to adjust to the tuplocalypse
> > and other recent breaking changes?
> > 
> > // T



[julia-users] Qwt plotting

2015-04-24 Thread Tom Breloff
As part of a larger (private) codebase, I have a module that uses PyCall to 
call into PyQwt5.  I like Qwt a lot for fast, simple, interactive plotting. 
 I have a python wrapper that sets up zooming/panning functionality and 
some other basics.  The python code is mostly from another private 
codebase, and I decided to wrap julia around that instead of calling Qwt 
directly, because I'm lazy.

Some sample commands:

x = rand(100)
Y = rand(100,3)
plot(Y)                                    # simple line plot of 3 lines
subplot(x, Y; linetype=:dots, titles=["plot1","plot2","plot3"])
                                           # 3 scatterplots in one window: x vs Y[:,i]
scatter(x, x+randn(100); color=:red)       # scatterplot with red dots
p = plot(x); oplot(p, x*50; axis=:right)   # 2-axis plot

My question... should I go to the trouble of releasing this as a standalone 
package?   Would anyone use it?   Also does anyone know of any licensing 
issues I need to be aware of with Qwt/PyQwt?


Re: [julia-users] Online regression algorithms

2015-04-24 Thread Kevin Squire
I would be interested in these.  You might see if/how they fit in, e.g.,
with Regression.jl.

Cheers,
   Kevin

On Fri, Apr 24, 2015 at 2:09 PM, Tom Breloff  wrote:

> I'm considering writing packages for the following online (i.e. updating
> models on the fly as new data arrives) techniques, but this functionality
> might exist already, or there might be a package that I should contribute
> to instead of writing my own:
>
>- Online PCA (such as "Candid covariance-free incremental principal
>component analysis")
>- Online flexible least squares (time-varying regression weights)
>- Online support vector machines/regressions
>
> Are there any packages that might have this functionality, or even a good
> framework that I could/should add to?  Does anyone else have a need for
> these algorithms?
>


[julia-users] Online regression algorithms

2015-04-24 Thread Tom Breloff
I'm considering writing packages for the following online (i.e. updating 
models on the fly as new data arrives) techniques, but this functionality 
might exist already, or there might be a package that I should contribute 
to instead of writing my own:

   - Online PCA (such as "Candid covariance-free incremental principal 
   component analysis")
   - Online flexible least squares (time-varying regression weights)
   - Online support vector machines/regressions

Are there any packages that might have this functionality, or even a good 
framework that I could/should add to?  Does anyone else have a need for 
these algorithms?
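As a concrete illustration of "updating models on the fly as new data arrives", here is a minimal sketch of online simple linear regression via running sufficient statistics. This is not any existing package's API; the type and function names are hypothetical:

```julia
# Online simple linear regression y ≈ a + b*x. Only five running sums
# are stored, so each new observation is O(1) to absorb.
mutable struct OnlineLinReg
    n::Int
    sx::Float64
    sy::Float64
    sxx::Float64
    sxy::Float64
end
OnlineLinReg() = OnlineLinReg(0, 0.0, 0.0, 0.0, 0.0)

function fit!(o::OnlineLinReg, x::Real, y::Real)
    o.n  += 1
    o.sx += x;    o.sy  += y
    o.sxx += x*x; o.sxy += x*y
    return o
end

# Recover intercept and slope from the statistics at any time.
function coefs(o::OnlineLinReg)
    den = o.n * o.sxx - o.sx^2
    b = (o.n * o.sxy - o.sx * o.sy) / den
    a = (o.sy - b * o.sx) / o.n
    return a, b
end

o = OnlineLinReg()
for x in 1:5
    fit!(o, x, 2x + 1)      # stream in points from y = 2x + 1
end
coefs(o)                    # ≈ (1.0, 2.0)
```

The flexible least squares and online PCA variants mentioned above replace these plain sums with forgetting factors or incremental eigen-updates, but the streaming shape of the API is the same.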


Re: [julia-users] Is there a REPL shortcut for a subscript numeral?

2015-04-24 Thread Jiahao Chen
The manual has a list of supported tab completions:

http://julia.readthedocs.org/en/release-0.3/manual/unicode-input/

http://julia.readthedocs.org/en/latest/manual/unicode-input/
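For subscript numerals specifically: per those tables, typing `\_1` followed by TAB at the REPL inserts `₁`, and the resulting subscript characters are ordinary identifier characters, e.g.:

```julia
# These identifiers contain the Unicode subscript digits U+2081/U+2082,
# entered at the REPL via \_1<TAB> and \_2<TAB>.
x₁ = 3.0
x₂ = 4.0
hypot₁₂ = sqrt(x₁^2 + x₂^2)   # 5.0
```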


[julia-users] Re: Help to optimize a loop through a list

2015-04-24 Thread Duane Wilson
Ah, I see.  Yeah, this probably depends a lot on how you are representing 
the bits and how you split the objectives.  If each objective is an 
ASCIIString of bits and they're held separately (i.e. objectives is a 
Vector{ASCIIString}), then what I provided should work, because the isless 
function works on strings.  It probably isn't the most efficient way to 
compare them, however.

On Friday, April 24, 2015 at 1:34:25 PM UTC-3, Ronan Chagas wrote:
>
> Hi Duane Wilson,
>
> On Friday, April 24, 2015 at 11:52:08 AM UTC-3, Duane Wilson wrote:
>>
>> Actually I was able to get the go ahead from my boss to create a gist of 
>> the code I've produced. I've modified it a little bit, as some of the 
>> things in there are really relevant.
>>
>> http://nbviewer.ipython.org/gist/dwil/5cc31d1fea141740cf96
>>
>> Any comments for optimizing this a little bit more would be appreciated :)
>>
>>  
> Thanks very much, it will help me and, if I can contribute, I will tell 
> you :)
>
> The problem is that for each generation of this evolutionary algorithm I 
> need to check whether n candidate points belong to the Pareto frontier, 
> where n is the number of bits in the string.
> Thus, I need a really fast algorithm for this kind of operation.  As I am 
> doing it now (I will post the code on github), it works fine until the 
> frontier has a large number of elements.
> Just one example: suppose that I'm using n = 16 and I have 6,000 elements 
> in the frontier.  If I run 1,000 generations, then I will need to check 
> whether 16,000 points are in the frontier.
> That leads to 96,000,000 comparisons, assuming that no point is ever 
> added to the frontier.
>
> Thanks,
> Ronan
>
>
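For reference, the per-candidate check Ronan describes can be sketched as a dominance test. The helper names are hypothetical, and this assumes the objectives have been decoded into numeric vectors, all minimized:

```julia
# a dominates b (minimization) if a is no worse in every objective and
# strictly better in at least one.
dominates(a, b) = all(a .<= b) && any(a .< b)

# A candidate stays on the frontier only if no frontier member
# dominates it. This is the O(|frontier|) inner loop whose repetition
# produces the 96,000,000 comparisons mentioned above.
in_frontier(cand, frontier) = !any(p -> dominates(p, cand), frontier)

frontier = [[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]]
in_frontier([3.0, 3.0], frontier)   # false: dominated by [2.0, 2.0]
in_frontier([0.5, 5.0], frontier)   # true: no member dominates it
```

Data structures that sort or index the frontier (e.g. by the first objective) can cut the number of dominance tests per candidate well below the full scan shown here.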

Re: [julia-users] Is there a plotting package that works for a current 0.4 build?

2015-04-24 Thread Miguel Bazdresch
I haven't tried Gaston on master, but it _may_ work, if you're on Linux or
MacOS.

-- mb

On Fri, Apr 24, 2015 at 6:58 AM, Tomas Lycken 
wrote:

> I git pull-ed and built Julia this morning, and I can't get any of PyPlot,
> Winston or Gadfly or TextPlots to show me anything (either loading the
> package, or running something equivalent to plot(rand(15)), borked).
>
> Is there any plotting tool that's had time to adjust to the tuplocalypse
> and other recent breaking changes?
>
> // T
>


[julia-users] Re: how to display the whole model

2015-04-24 Thread Tony Kelman
Joey's got you covered, luckily, but just a heads-up that it's a good idea 
to provide some context that your question is about the JuMP package in 
posts like this, since "Model" could mean different things in different 
applications/packages.


On Friday, April 24, 2015 at 11:02:59 AM UTC-7, Michela Di Lullo wrote:
>
> Hello everyone,
>
> I'm new to julia and I was wondering how to display the whole model. 
> I tried with: 
>
> mod=Model(...)
> ...
>
> display(mod)
>
>
> Feasibility problem with:
>
>  * 144 linear constraints
>
>  * 2822 variables: 1514 binary
>
> Solver set to Gurobi
>
>
> but it only says the *number* of variables and constraints in the model.. 
> while I want to see the constraints/objective in its/their expanded forms.  
>
>
> Any idea about how to make it? 
>
> Thank you all for any suggestion :)
>


Re: [julia-users] Re: .jl to .exe

2015-04-24 Thread Tony Kelman
Right, we don't have any user-visible way to cross-compile from Julia 
source to a library for other platforms right now. We don't yet have a 
user-visible way to natively compile Julia modules into libraries except 
via userimg.jl, but there is a PR open for this. Once that functionality is 
integrated and working for native compilation, figuring out how to extend 
it to cross-compilation is an interesting long-term target. Significant 
work would need to be done to get there though.


On Friday, April 24, 2015 at 7:54:37 AM UTC-7, Stefan Karpinski wrote:
>
> I think the question was regarding cross-compiling a .exe from a .jl 
> script, which, as you say, doesn't work yet.
>
> On Fri, Apr 24, 2015 at 10:51 AM, Isaiah Norton wrote:
>
>> How to do so?
>>
>>
>> If this refers to cross-compiling Julia (rather than cross-compiling an 
>> exe from Julia, which is not currently possible), please see:
>> https://github.com/JuliaLang/julia/blob/master/README.windows.md
>>
>> On Fri, Apr 24, 2015 at 10:27 AM, Paul D wrote:
>>
>>> How to do so?
>>>
>>> On Fri, Apr 24, 2015 at 4:20 AM, Tony Kelman wrote:
>>> > It is possible to cross-compile a Windows exe of Julia from Linux 
>>> right now,
>>> > so this could probably be made to work.
>>> >
>>> >
>>> > On Thursday, April 23, 2015 at 8:15:46 AM UTC-7, pauld11718 wrote:
>>> >>
>>> >> Will it be possible to cross-compile?
>>> >> Do all the coding on linux(64 bit) and generate the exe for windows(32
>>> >> bit)?
>>>
>>
>>
>

[julia-users] Re: Is there a plotting package that works for a current 0.4 build?

2015-04-24 Thread Tony Kelman
I have a PR open on Winston that at least makes its tests pass, though I 
didn't try much beyond that. I don't know whether anyone other than Mike 
Nolta has commit access there, and we haven't seen him on github for a 
month or two. Hopefully everything is alright and he's just been busy.

-Tony


On Friday, April 24, 2015 at 3:58:55 AM UTC-7, Tomas Lycken wrote:
>
> I git pull-ed and built Julia this morning, and I can't get any of PyPlot, 
> Winston or Gadfly or TextPlots to show me anything (either loading the 
> package, or running something equivalent to plot(rand(15)), borked).
>
> Is there any plotting tool that's had time to adjust to the tuplocalypse 
> and other recent breaking changes?
>
> // T
>


[julia-users] Re: Official docker images

2015-04-24 Thread Tony Kelman
That's pretty cool to see. As far as I can tell that image doesn't have git 
installed yet, so to use the package manager you'll probably want to 
run(`apt-get install git`) first thing. Maybe we should open an issue 
asking for git to be pre-installed? It would make the images larger but 
save time/hassle for people who want to use them.


On Thursday, April 23, 2015 at 5:26:36 AM UTC-7, Viral Shah wrote:
>
> It seems that Docker has an official julia image: 
>
> https://registry.hub.docker.com/_/julia/ 
>
> -viral 
>
>
>
>

Re: [julia-users] Re: Defining a function in different modules

2015-04-24 Thread Stefan Karpinski
For anyone who isn't following changes to Julia master closely, Jeff closed
#4345  yesterday, which
addresses one major concern of "programming in the large".

I think the other concern about preventing people from intentionally or
accidentally monkey-patching is very legitimate as well, but it's way less
clear what to do about it. I've contemplated the idea of not allowing a
module to add methods to a generic function unless it "owns" the function
or one of the argument types, but that feels like such a fussy rule, I
don't think it's the right solution. But I haven't come up with anything
better either.

On Wed, Apr 22, 2015 at 6:19 PM, Jeff Bezanson 
wrote:

> I think it's reasonable to adopt a convention in some code of not using
> `using`.
>
> Another way to look at this is that a library author could affect name
> visibility in somebody else's code by adjusting the signature of a
> method. That doesn't seem like a desirable interaction to me. Often
> somebody might initially define foo(::Image), and then later realize
> it's actually applicable to any array, and change it to
> foo(::AbstractArray). Doing that shouldn't cause any major fuss.
>
> On Wed, Apr 22, 2015 at 5:58 PM, Michael Francis 
> wrote:
> > You are correct it is restrictive, though I will take some convincing
> that
> > this is a bad thing, as systems get larger in Julia it is going to be
> > increasingly important to manage code reuse and prevent accidental
> masking
> > of types. Multiple dispatch is a wonderful tool for supporting these
> goals.
> > Unfortunately allowing people the ability to export get() et al
> to
> > the users scope seems like a bad idea. This is already happening from
> > modules today. Perhaps the middle ground is to force an explicit import,
> so
> > using only imports functions which have types defined in the module.  The
> > person defining the module exports all the functions they want but only
> > those that are 'safe' e.g. follow my original rule are implicitly
> imported.
> > Hence you would have something like the following code. This is not far
> > different from the importall today, except that the exports are
> > automatically restricted.
> >
> > using MyMath # Imports only those functions which include types
> > defined in MyMath
> > import MyMath.*  # Imports all other  functions  defined in MyMath
> > import MyMath.afunc  # Imports  one  function
> > import MyOther.afunc # Fails collides with MyMath.afunc
> >
> >
> >
> >
> >
> >
> > On Wednesday, April 22, 2015 at 4:40:03 PM UTC-4, Jeff Bezanson wrote:
> >>
> >> That rule seems extremely restrictive to me. It would be very common,
> >> for example, to create a library of functions that operate on standard
> >> data types like numbers and arrays. I don't see that we can exclude
> >> that kind of use.
> >>
> >> Also, printing a warning is not the key part of #4345. The important
> >> part is that you'd have to qualify names in that case, which is the
> >> same thing that would happen if `export`ing the names were disallowed.
> >>
> >>
> >> On Wed, Apr 22, 2015 at 8:47 AM, Michael Francis 
> >> wrote:
> >> > I read through the issues / threads ( and some others )
> >> >
> >> > https://github.com/JuliaLang/julia/issues/2327
> >> > https://github.com/JuliaLang/julia/issues/4345
> >> >
> >> > I'm not sure that the either the SuperSecretBase or the warning are
> the
> >> > correct approach. I'd like to propose a counter which is a very simple
> >> > rule.
> >> >
> >> > "You can only export functions from a module where they reference at
> >> > least
> >> > one type defined in the module."
> >> >
> >> > There may have to be a slight tweak for Base, though it is not hard to
> >> > argue
> >> > that the primitive types are defined in Base.
> >> >
> >> > so
> >> >
> >> > module Module1
> >> > type Bar end
> >> > my( b::Bar ) = 1
> >> >
> >> > export my   # fine exports to the global space
> >> > end
> >> >
> >> > module Module2
> >> > type Foo end
> >> > my() = 1
> >> >
> >> > export my   # ERROR exporting function which does not reference
> >> > local
> >> > type
> >> > end
> >> >
> >> > module Module3
> >> > type Wow end
> >> > my( w::Wow ) = 1
> >> > my() = 1
> >> > end
> >> > export my   # Is an ERROR I can not export a function which does
> not
> >> > reference a local type
> >> > end
> >> >
> >> > So in the example provided my Mike above, multiple dispatch would do
> the
> >> > right thing. If I also want to define a function for value in my
> module
> >> > it
> >> > would work consistently against the types I define. We don't have to
> >> > perform
> >> > recursive exports and import usage should be reduced.
> >> >
> >> > If you want to define an empty function, you can do so with a default
> >> > arg:
> >> > module Module4
> >> > type Zee end
> >> > my( ::Type{Zee} = Zee ) = 1
> >> > export my   # Works, but I can select against it using multiple
> >> > dispatch
> >> > by providing
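
The name-collision scenario motivating this proposal can be reproduced in a few lines (a sketch in current Julia syntax, using the module and function names from the discussion; in 0.3-era syntax `struct` would be `type`). With two modules exporting the same function name, the unqualified name becomes ambiguous and calls must be qualified:

```julia
module Module1
    struct Bar end
    my(::Bar) = 1
    export my
end

module Module2
    struct Foo end
    my(::Foo) = 2
    export my
end

using .Module1, .Module2

# Both modules export `my`, so the unqualified name is ambiguous in Main;
# each method has to be reached with a qualified call:
@show Module1.my(Module1.Bar())   # 1
@show Module2.my(Module2.Foo())   # 2
```

Under the proposed rule both exports would be allowed, since each `my` method references a type defined in its own module.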

[julia-users] Re: how to display the whole model

2015-04-24 Thread Joey Huchette


print(mod)

On Friday, April 24, 2015 at 2:02:59 PM UTC-4, Michela Di Lullo wrote:

Hello everyone,
>
> I'm new to julia and I was wondering how to display the whole model. 
> I tried with: 
>
> mod=Model(...)
> ...
>
> display(mod)
>
>
> Feasibility problem with:
>
>  * 144 linear constraints
>
>  * 2822 variables: 1514 binary
>
> Solver set to Gurobi
>
>
> but it only says the *number* of variables and constraints in the model, 
> while I want to see the constraints/objective in their expanded forms.  
>
>
> Any idea about how to do it? 
>
> Thank you all for any suggestion :)
>


Re: [julia-users] Parallel computing and multiple outputs

2015-04-24 Thread Archibald Pontier
Thank you very much, that's exactly what I need!

On 24 April 2015 at 19:41, Jameson Nash  wrote:
> Julia's parallel constructs assume that globals are available on all
> processors, but will copy any locals used to every processor. So one way to
> fix your example is:
>
> julia> A = let t3=t3; pmap(i -> test2(t3[i], t, t2), 1:3); end
>
> On Fri, Apr 24, 2015 at 9:57 AM Archibald Pontier
>  wrote:
>>
>> Hi everyone,
>>
>> I have the following problem which originates from the fact that
>> SharedArray doesn't seem to accept composite types as argument. Consider the
>> following sequence (I started julia with -p 2):
>>
>> julia> @everywhere type T
>>t::Array{Float64, 1}
>>end
>>
>> julia> @everywhere type T2
>>t::Array{Float64, 1}
>>end
>>
>> julia> @everywhere type T3
>>t::Array{Float64, 1}
>>end
>>
>> julia> @everywhere function test(t::T, t2::T2, i)
>>return T3([t.t[i], t2.t[i]]);
>>end
>>
>> julia> @everywhere t = T(rand(3))
>>
>> julia> @everywhere t2 = T2(rand(3))
>>
>> julia> t3 = pmap(i -> test(t, t2, i), 1:3)
>> 3-element Array{Any,1}:
>>  T3([0.521706,0.0155359])
>>  T3([0.112277,0.59876])
>>  T3([0.0399843,0.373688])
>>
>> julia> @everywhere function test2(t3::T3, t::T, t2::T2)
>>t.t[1] += t3.t[1];
>>t2.t[2] += t3.t[2];
>>return (t, t2);
>>end
>>
>> julia> A = pmap(i -> test2(t3[i], t, t2), 1:3)
>> exception on exception on 2: 3: ERROR: t3 not defined
>>  in anonymous at none:1
>>  in anonymous at multi.jl:855
>>  in run_work_thunk at multi.jl:621
>>  in anonymous at task.jl:855
>> ERROR: t3 not defined
>>  in anonymous at none:1
>>  in anonymous at multi.jl:855
>>  in run_work_thunk at multi.jl:621
>>  in anonymous at task.jl:855
>> 2-element Array{Any,1}:
>>  UndefVarError(:t3)
>>  UndefVarError(:t3)
>>
>> The problem solves itself if I do @everywhere t3 = pmap(...); however, from
>> my understanding, this would just lead to every operation being done on
>> every process, which defeats the purpose of doing things in parallel in the
>> first place. Am I wrong in this conclusion?
>>
>> Regards,
>> Archibald


[julia-users] how to display the whole model

2015-04-24 Thread Michela Di Lullo
Hello everyone,

I'm new to julia and I was wondering how to display the whole model. 
I tried with: 

mod=Model(...)
...

display(mod)


Feasibility problem with:

 * 144 linear constraints

 * 2822 variables: 1514 binary

Solver set to Gurobi


but it only says the *number* of variables and constraints in the model, 
while I want to see the constraints/objective in their expanded forms.  


Any idea about how to do it? 

Thank you all for any suggestion :)


Re: [julia-users] Parallel computing and multiple outputs

2015-04-24 Thread Jameson Nash
Julia's parallel constructs assume that globals are available on all
processors, but will copy any locals used to every processor. So one way to
fix your example is:

julia> A = let t3=t3; pmap(i -> test2(t3[i], t, t2), 1:3); end

On Fri, Apr 24, 2015 at 9:57 AM Archibald Pontier <
archibald.pont...@gmail.com> wrote:

> Hi everyone,
>
> I have the following problem which originates from the fact that
> SharedArray doesn't seem to accept composite types as argument. Consider
> the following sequence (I started julia with -p 2):
>
> julia> @everywhere type T
>t::Array{Float64, 1}
>end
>
> julia> @everywhere type T2
>t::Array{Float64, 1}
>end
>
> julia> @everywhere type T3
>t::Array{Float64, 1}
>end
>
> julia> @everywhere function test(t::T, t2::T2, i)
>return T3([t.t[i], t2.t[i]]);
>end
>
> julia> @everywhere t = T(rand(3))
>
> julia> @everywhere t2 = T2(rand(3))
>
> julia> t3 = pmap(i -> test(t, t2, i), 1:3)
> 3-element Array{Any,1}:
>  T3([0.521706,0.0155359])
>  T3([0.112277,0.59876])
>  T3([0.0399843,0.373688])
>
> julia> @everywhere function test2(t3::T3, t::T, t2::T2)
>t.t[1] += t3.t[1];
>t2.t[2] += t3.t[2];
>return (t, t2);
>end
>
> julia> A = pmap(i -> test2(t3[i], t, t2), 1:3)
> exception on exception on 2: 3: ERROR: t3 not defined
>  in anonymous at none:1
>  in anonymous at multi.jl:855
>  in run_work_thunk at multi.jl:621
>  in anonymous at task.jl:855
> ERROR: t3 not defined
>  in anonymous at none:1
>  in anonymous at multi.jl:855
>  in run_work_thunk at multi.jl:621
>  in anonymous at task.jl:855
> 2-element Array{Any,1}:
>  UndefVarError(:t3)
>  UndefVarError(:t3)
>
> The problem solves itself if I do @everywhere t3 = pmap(...); however, from
> my understanding, this would just lead to every operation being done on
> every process, which defeats the purpose of doing things in parallel in the
> first place. Am I wrong in this conclusion?
>
> Regards,
> Archibald
>
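
For later readers, the `let` trick can be shown in a self-contained sketch (current Julia syntax, where the parallel primitives live in the `Distributed` standard library rather than Base; `double` is a made-up stand-in for the thread's `test2`):

```julia
using Distributed

addprocs(2)                       # start two worker processes

# Define the worker function on every process.
@everywhere double(x) = 2x

t3 = [1.0, 2.0, 3.0]              # a global binding on the master process

# Wrapping the call in `let` rebinds t3 as a *local*, so the closure
# captures its value and ships it to the workers along with the work,
# instead of assuming a global `t3` exists on each worker.
A = let t3 = t3
    pmap(i -> double(t3[i]), 1:3)
end
println(A)                        # [2.0, 4.0, 6.0]
```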


Re: [julia-users] Auto warn for unstable types

2015-04-24 Thread Iain Dunning
I think something I'd use would be a @strict macro that annotates a function 
as something I'm willing to get plenty of warnings from; that'd be quite nice. 
We have the ability to add such compiler flags now, right?

On Friday, April 24, 2015 at 12:21:06 PM UTC-4, Peter Brady wrote:
>
> Other startup flags have a user/all option which would conceivably solve 
> the problem of getting too many warnings on startup.  My views are likely 
> colored by my use case - solving systems of PDEs.  Since my work is 
> strictly numerical, I've never met an ::Any that served a useful purpose.   
>
> On Friday, April 24, 2015 at 9:50:17 AM UTC-6, Tim Holy wrote:
>>
>> Related ongoing discussion in 
>> https://github.com/JuliaLang/julia/issues/10980 
>>
>> But I don't think it's practical or desirable to warn about all type 
>> instability; there are plenty of cases where it's either a useful or 
>> unavoidable property. The goal of optimization should be to eliminate 
>> those 
>> cases that actually matter for performance, and not worry about the ones 
>> that 
>> don't. If you run your code (or just, start julia) and see 100 warnings 
>> scroll 
>> past, you won't know where to begin. 
>>
>> --Tim 
>>
>> On Friday, April 24, 2015 11:12:57 AM Stefan Karpinski wrote: 
>> > Yes, I'd like to add exactly this kind of thing. 
>> > 
>> > On Fri, Apr 24, 2015 at 10:54 AM, Peter Brady  
>> wrote: 
>> > > Tim Holy introduced me to the wonders of @code_warntype in this 
>> discussion 
>> > > https://groups.google.com/forum/#!topic/julia-users/sq5gj-3TdQU. 
>>  I've 
>> > > since been using it to track down other instabilities in my code 
>> since it 
>> > > turns out that I'm very good at writing poor julia code.  Are there 
>> any 
>> > > plans to incorporate automatic warnings about type unstable functions 
>> when 
>> > > they are compiled?  Maybe even via a startup flag like `-Wunstable`? 
>>  I 
>> > > would prefer that it's on by default.  This would go a long 
>> towards 
>> > > helping me write much better code and probably help new users get 
>> more of 
>> > > the performance they were expecting. 
>>
>>
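
For readers finding this thread later, a minimal illustration of the kind of instability `@code_warntype` flags (current syntax; the `unstable`/`stable` names are made up for the example). The macro itself prints annotated IR in the REPL, while `Base.return_types` exposes the same inference result programmatically:

```julia
# Type-unstable: the return type depends on the *value* of `flag`,
# so inference can only conclude Union{Float64, Int64}.
unstable(flag::Bool) = flag ? 1 : 2.0

# Type-stable: both branches return Float64.
stable(flag::Bool) = flag ? 1.0 : 2.0

# In the REPL, `@code_warntype unstable(true)` highlights the Union
# return type; programmatically:
println(Base.return_types(unstable, (Bool,)))   # contains Union{Float64, Int64}
println(Base.return_types(stable, (Bool,)))     # contains only Float64
```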

[julia-users] Re: Help to optimize a loop through a list

2015-04-24 Thread Ronan Chagas
Hi Duane Wilson,

On Friday, April 24, 2015 at 11:52:08 UTC-3, Duane Wilson wrote:
>
> Actually I was able to get the go ahead from my boss to create a gist of 
> the code I've produced. I've modified it a little bit, as some of the 
> things in there aren't really relevant.
>
> http://nbviewer.ipython.org/gist/dwil/5cc31d1fea141740cf96
>
> Any comments for optimizing this a little bit more would be appreciated :)
>
>  
Thanks very much, it will help me and, if I can contribute, I will tell you 
:)

The problem is that for each generation of this evolutionary algorithm I 
will need to check if n candidate points belong to the Pareto frontier, 
where n is the number of bits in the string.
Thus, I need a really fast algorithm for this kind of operation. As I am 
doing now (I will post the code on GitHub), it works fine until the 
frontier has a large number of elements.
Just one example, suppose that I'm using n = 16 and I have 6,000 elements 
in the frontier. If I generate 1000 generations, then I will need to check 
if 16,000 points are in the frontier.
It will lead to 96,000,000 comparisons considering that no point will be 
added to the frontier.

Thanks,
Ronan



[julia-users] Re: Help to optimize a loop through a list

2015-04-24 Thread Ronan Chagas
Thanks very much guys!

I will post the entire module on GitHub (maybe this weekend if I have time).
It will be slow, but then you can help me :)

 @Mauro

The sizes of vars and f are fixed for each problem, thus I don't think it 
can be immutable.

The dummy example is to find the Pareto frontier of

f = [ x^2; (x-2)^2 ]
given that -10 < x < 10

Thus, the size of f will be 2 and the size of vars will be 1.
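
A dominance test for such a frontier can be sketched in a few lines (current `struct` syntax rather than the thread's 0.3-era `type`; `push_candidate!` is a hypothetical helper illustrating the per-candidate linear scan being discussed, not code from the MGEO module):

```julia
struct ParetoPoint
    vars::Vector{Float64}   # design variables
    f::Vector{Float64}      # objective function values
end

# `a` dominates `b` if it is no worse in every objective and
# strictly better in at least one.
dominates(a::ParetoPoint, b::ParetoPoint) =
    all(a.f .<= b.f) && any(a.f .< b.f)

# Add a candidate, keeping the frontier non-dominated. This is an O(n)
# pass per candidate, which is exactly what makes a 6,000-element
# frontier expensive over thousands of generations.
function push_candidate!(frontier::Vector{ParetoPoint}, c::ParetoPoint)
    any(p -> dominates(p, c), frontier) && return frontier
    filter!(p -> !dominates(c, p), frontier)
    push!(frontier, c)
end
```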


Re: [julia-users] Auto warn for unstable types

2015-04-24 Thread Peter Brady
Other startup flags have a user/all option which would conceivably solve 
the problem of getting too many warnings on startup.  My views are likely 
colored by my use case - solving systems of PDEs.  Since my work is 
strictly numerical, I've never met an ::Any that served a useful purpose.   

On Friday, April 24, 2015 at 9:50:17 AM UTC-6, Tim Holy wrote:
>
> Related ongoing discussion in 
> https://github.com/JuliaLang/julia/issues/10980 
>
> But I don't think it's practical or desirable to warn about all type 
> instability; there are plenty of cases where it's either a useful or 
> unavoidable property. The goal of optimization should be to eliminate 
> those 
> cases that actually matter for performance, and not worry about the ones 
> that 
> don't. If you run your code (or just, start julia) and see 100 warnings 
> scroll 
> past, you won't know where to begin. 
>
> --Tim 
>
> On Friday, April 24, 2015 11:12:57 AM Stefan Karpinski wrote: 
> > Yes, I'd like to add exactly this kind of thing. 
> > 
> > On Fri, Apr 24, 2015 at 10:54 AM, Peter Brady  > wrote: 
> > > Tim Holy introduced me to the wonders of @code_warntype in this 
> discussion 
> > > https://groups.google.com/forum/#!topic/julia-users/sq5gj-3TdQU. 
>  I've 
> > > since been using it to track down other instabilities in my code since 
> it 
> > > turns out that I'm very good at writing poor julia code.  Are there 
> any 
> > > plans to incorporate automatic warnings about type unstable functions 
> when 
> > > they are compiled?  Maybe even via a startup flag like `-Wunstable`? 
>  I 
> > > would prefer that it's on by default.  This would go a long way towards 
> > > helping me write much better code and probably help new users get more 
> of 
> > > the performance they were expecting. 
>
>

Re: [julia-users] Float32() or float32() ?

2015-04-24 Thread Patrick O'Leary
It's fine to play with, and there are definitely worthwhile improvements, 
just want to be clear that there's still a fair number of impactful changes 
being made, and bugs to be squished.

On Friday, April 24, 2015 at 10:49:38 AM UTC-5, Sisyphuss wrote:
>
> Thanks, Patrick !
>
> Frightened by the word "unstable", I do not dare to use it anymore. 
> Expecting the version 0.4 to be released soon! 
>
> On Fri, Apr 24, 2015 at 3:13 PM, Patrick O'Leary  > wrote:
>
>> The master branch of the git repository is currently version 0.4-dev, 
>> which is in an unstable development phase. The relevant downloads are at 
>> the bottom of http://julialang.org/downloads/ under "Nightly Builds".
>>
>>
>> On Friday, April 24, 2015 at 5:47:37 AM UTC-5, Sisyphuss wrote:
>>>
>>> Thanks Tomas!
>>>
>>> By the way, I built Julia from the source, and I got the version 0.3. Do 
>>> you know how I can get the version 0.4?
>>>
>>>
>>>
>>> On Friday, April 24, 2015 at 12:21:17 PM UTC+2, Tomas Lycken wrote:

 To be concise: Yes, and yes :)

 Instead of `float32(x)` and the like, 0.4 uses constructor methods 
 (`Float32(x)` returns a `Float32`, just as `Foo(x)` returns a `Foo`...).

 // T

 On Friday, April 24, 2015 at 11:40:02 AM UTC+2, Sisyphuss wrote:
>
> I mean is there a syntax change from version 0.3 to version 0.4?
>
> Is the lowercase "float32()"-style conversion going to be deprecated?
>
>
>
> On Friday, April 24, 2015 at 9:58:18 AM UTC+2, Tim Holy wrote:
>>
>> I'm not sure what your question is, but the documentation is correct 
>> in both 
>> cases. You can also use the Compat package, which allows you to write 
>> x = @compat Float32(y) 
>> even on julia 0.3. 
>>
>> --Tim 
>>
>> On Friday, April 24, 2015 12:35:01 AM Sisyphuss wrote: 
>> > To convert a number to the type Float32, 
>> > 
>> > In 0.3 doc: float32() 
>> > In 0.4 doc: Float32() 
>>
>>
>

Re: [julia-users] Finite field / Gaulois field implementation

2015-04-24 Thread Valentin Churavy
Hej Andreas,

thanks, that looks like what I was searching for.



On Friday, 24 April 2015 19:59:11 UTC+9, Andreas Noack wrote:
>
> Hej Valentin
>
> There are a couple of simple examples. At least this
>
> http://acooke.org/cute/FiniteFiel1.html
>
> and I did one in this
>
> http://andreasnoack.github.io/talks/2015AprilStanford_AndreasNoack.ipynb
>
> notebook. The arithmetic definitions are simpler for GF(2), but should be 
> simple modifications to the definitions in the notebook.
>
> 2015-04-24 2:50 GMT-04:00 Valentin Churavy  >:
>
>> Hej,
>>
>> Did anybody try to implement finite field arithmetic in Julia? I would 
>> be particularly interested in GF(2), e.g. base-2 arithmetic modulo 2.
>>
>> Best,
>> Valentin
>>
>
>
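
As a minimal illustration of the approach discussed (a sketch in current Julia syntax, not code from either linked notebook), GF(2) can be wrapped in a small number type where addition is xor and multiplication is logical and:

```julia
# GF(2): the field {0, 1} with addition mod 2 (xor) and
# multiplication mod 2 (and).
struct GF2 <: Number
    x::Bool
end

Base.:+(a::GF2, b::GF2) = GF2(a.x ⊻ b.x)   # 1 + 1 == 0
Base.:-(a::GF2, b::GF2) = a + b            # every element is its own additive inverse
Base.:*(a::GF2, b::GF2) = GF2(a.x & b.x)
Base.zero(::Type{GF2}) = GF2(false)
Base.one(::Type{GF2})  = GF2(true)

println(GF2(true) + GF2(true))   # GF2(false): 1 + 1 = 0 in GF(2)
```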

Re: [julia-users] Auto warn for unstable types

2015-04-24 Thread Tim Holy
Related ongoing discussion in https://github.com/JuliaLang/julia/issues/10980

But I don't think it's practical or desirable to warn about all type 
instability; there are plenty of cases where it's either a useful or 
unavoidable property. The goal of optimization should be to eliminate those 
cases that actually matter for performance, and not worry about the ones that 
don't. If you run your code (or just, start julia) and see 100 warnings scroll 
past, you won't know where to begin.

--Tim

On Friday, April 24, 2015 11:12:57 AM Stefan Karpinski wrote:
> Yes, I'd like to add exactly this kind of thing.
> 
> On Fri, Apr 24, 2015 at 10:54 AM, Peter Brady  wrote:
> > Tim Holy introduced me to the wonders of @code_warntype in this discussion
> > https://groups.google.com/forum/#!topic/julia-users/sq5gj-3TdQU.  I've
> > since been using it to track down other instabilities in my code since it
> > turns out that I'm very good at writing poor julia code.  Are there any
> > plans to incorporate automatic warnings about type unstable functions when
> > they are compiled?  Maybe even via a startup flag like `-Wunstable`?  I
> > would prefer that it's on by default.  This would go a long way towards
> > helping me write much better code and probably help new users get more of
> > the performance they were expecting.



Re: [julia-users] Float32() or float32() ?

2015-04-24 Thread Zheng Wendell
Thanks, Patrick !

Frightened by the word "unstable", I do not dare to use it anymore.
Expecting the version 0.4 to be released soon!

On Fri, Apr 24, 2015 at 3:13 PM, Patrick O'Leary 
wrote:

> The master branch of the git repository is currently version 0.4-dev,
> which is in an unstable development phase. The relevant downloads are at
> the bottom of http://julialang.org/downloads/ under "Nightly Builds".
>
>
> On Friday, April 24, 2015 at 5:47:37 AM UTC-5, Sisyphuss wrote:
>>
>> Thanks Tomas!
>>
>> By the way, I built Julia from the source, and I got the version 0.3. Do
>> you know how I can get the version 0.4?
>>
>>
>>
>> On Friday, April 24, 2015 at 12:21:17 PM UTC+2, Tomas Lycken wrote:
>>>
>>> To be concise: Yes, and yes :)
>>>
>>> Instead of `float32(x)` and the like, 0.4 uses constructor methods
>>> (`Float32(x)` returns a `Float32`, just as `Foo(x)` returns a `Foo`...).
>>>
>>> // T
>>>
>>> On Friday, April 24, 2015 at 11:40:02 AM UTC+2, Sisyphuss wrote:

 I mean is there a syntax change from version 0.3 to version 0.4?

 Is the lowercase "float32()"-style conversion going to be deprecated?



 On Friday, April 24, 2015 at 9:58:18 AM UTC+2, Tim Holy wrote:
>
> I'm not sure what your question is, but the documentation is correct
> in both
> cases. You can also use the Compat package, which allows you to write
> x = @compat Float32(y)
> even on julia 0.3.
>
> --Tim
>
> On Friday, April 24, 2015 12:35:01 AM Sisyphuss wrote:
> > To convert a number to the type Float32,
> >
> > In 0.3 doc: float32()
> > In 0.4 doc: Float32()
>
>


[julia-users] Re: susceptance matrix

2015-04-24 Thread Michela Di Lullo
Thank you very much Patrick and ST. I'm really grateful to you for your 
help. 

I finally have my susceptance matrix. 

Best regards,

Michela 

On Thursday, April 23, 2015 at 13:51:05 UTC+2, Michela Di Lullo 
wrote:
>
> Hello everyone, 
>
> I'm pretty new to the julia programming language and I'm having issues 
> while trying to declare a susceptance matrix in a unit commitment problem.
>
> I have the following elements: 
>
> BUS=[1:14]
>
> LINES=[1:20]
>
> NODE_FROM=[1 1 2 2 2 3 4 4 4 5 6 6 6 7 7 9 9 10 12 13]
>
> NODE_TO=[2 5 3 4 5 4 5 7 9 6 11 12 13 8 9 10 14 11 13 14]
>
> BRANCH=[(LINES[l], NODE_FROM[l], NODE_TO[l]) for l=1:length(LINES)]
>
> s1=[(i,i) for i in BUS]
>
> s2=[(m,k) for (l,m,k) in BRANCH]
>
> s3=[(k,m) for (l,m,k) in BRANCH]
>
> Y_BUS=union(s1,s2,s3)
>
> branch_x=[0.05917, 0.22304, 0.19797, 0.17632, 0.17388, 0.17103, 0.04211, 
> 0.20912, 0.55618, 0.25202, 0.1989, 0.25581, 0.13027, 0.17615, 0.11001, 
> 0.0845, 0.27038, 0.19207, 0.19988, 0.34802]
>
> and I need to declare the B susceptance matrix defined, in AMPL, as:
>
> param B{(k,m) in YBUS} := if(k == m)  then sum{(l,k,i) in BRANCH}  
> 1/branch_x[l,k,i] 
>  
> +sum{(l,i,k) in BRANCH}  1/branch_x[l,i,k]
>   else if(k != m) then 
> 
> sum{(l,k,m) in BRANCH}  1/branch_x[l,k,m]
>   
> +sum{(l,m,k) in BRANCH}  1/branch_x[l,m,k];
>
> I'm trying to make it but it's not working because of the indexes. I don't 
> know how to declare the parameter branch_x indexed by (n,b_from,b_to).
>
> Any idea about how to declare the B matrix correctly? 
>
> Thank you for any suggestion, 
>
> Michela
>
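
One way that AMPL definition might translate to Julia, using the data from the question (a hedged sketch, not verified against the AMPL model; note the AMPL snippet above applies the same positive sum of `1/branch_x` to diagonal and off-diagonal entries, so this sketch does too, although DC power-flow susceptance matrices conventionally negate the off-diagonals):

```julia
NODE_FROM = [1, 1, 2, 2, 2, 3, 4, 4, 4, 5, 6, 6, 6, 7, 7, 9, 9, 10, 12, 13]
NODE_TO   = [2, 5, 3, 4, 5, 4, 5, 7, 9, 6, 11, 12, 13, 8, 9, 10, 14, 11, 13, 14]
branch_x  = [0.05917, 0.22304, 0.19797, 0.17632, 0.17388, 0.17103, 0.04211,
             0.20912, 0.55618, 0.25202, 0.1989, 0.25581, 0.13027, 0.17615,
             0.11001, 0.0845, 0.27038, 0.19207, 0.19988, 0.34802]

nbus = 14
B = zeros(nbus, nbus)
for l in eachindex(branch_x)
    k, m = NODE_FROM[l], NODE_TO[l]
    b = 1 / branch_x[l]
    # Diagonal: sum of 1/x over branches incident to each bus.
    B[k, k] += b
    B[m, m] += b
    # Off-diagonal: sum of 1/x over branches joining the two buses,
    # mirroring the (sign-free) AMPL definition quoted above.
    B[k, m] += b
    B[m, k] += b
end
```

Indexing branches by `l` alone sidesteps the `(n, b_from, b_to)` indexing problem, since `NODE_FROM[l]` and `NODE_TO[l]` already carry the endpoints.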


Re: [julia-users] Re: Segfault / Undefined reference

2015-04-24 Thread Stefan Karpinski
Thank you!

On Fri, Apr 24, 2015 at 11:14 AM, Archibald Pontier <
archibald.pont...@gmail.com> wrote:

> I'll try to make a short example of the crash (although I'm having
> trouble doing so) and filing an issue as soon as possible.
>
> On 24 April 2015 at 16:28, Stefan Karpinski  wrote:
> > Any segfault is always a bug. Would you mind filing an issue with a
> > self-contained example that can produce the segfault (plus Julia version
> > info)?
> >
> > On Fri, Apr 24, 2015 at 5:26 AM, Archibald Pontier
> >  wrote:
> >>
> >> Hi again,
> >>
> >> I found a way to prevent my code from crashing, and it's really
> >> unexpected. I modified a global variable which only affects functions
> >> that are called by my emloop function, and somehow it works. I have no
> >> idea why this affected my code, though, and especially what made it
> >> segfault instead of just returning an error. So now the serial version
> >> works; I'm still struggling with pmap, with plenty of "result shape
> >> unspecified" errors, but that's ok.
> >>
> >> All the best,
> >> Archibald
> >>
> >>
> >> On Friday, 24 April 2015 10:36:15 UTC+2, Archibald Pontier wrote:
> >>>
> >>> Hi everyone,
> >>>
> >>> Julia is (sometimes) segfaulting when running this function :
> >>>
> >>> function emloop(img, Sigma, mu, t, model, imgc, bg, FullTs)
> >>>   EMiter = 1;
> >>>   Check = typemax(Float64);
> >>> GModels, Genes,
> >>>   (x, y) = size(img);
> >>>   D = zeros(x, y); E = zeros(x, y); B = zeros(x, y);
> >>>
> >>>   # And we start the EM loop
> >>>   while EMiter <= maxEM && Check > ϵ
> >>> # Calculate the betas & find the optimal transformation for the
> >>> current iteration
> >>> (D, E, B) = fastBetas(model, [], mu, Sigma, img);
> >>> nt = findTransform(model, B, Gray((bg.r + bg.g + bg.b)/3), FullTs);
> >>>
> >>> # Now calculate the optimal parameters for the appearance model
> >>> OldMu = mu;  # For termination purposes
> >>> (mu, Sigma) = calcTheta(img, D, E, B);
> >>>
> >>> # Apply the transformation found to the image
> >>> img = RigidTransform(img, nt, bg, true, true);
> >>>
> >>> # Deal with the transformation
> >>> Oldt = t;
> >>> t += nt;
> >>> t.tx %= x;
> >>> t.ty %= y;
> >>>
> >>> # Termination criteria
> >>> NormOldTT = sqrt((Oldt.tx - t.tx)^2 + (Oldt.ty - t.ty)^2 + (Oldt.θ
> -
> >>> t.θ)^2);
> >>> CheckMu = norm(OldMu.D - mu.D) + norm(OldMu.E - mu.E) +
> norm(OldMu.B
> >>> - mu.B)
> >>> Check = CheckMu + NormOldTT # Giving lots of weight to the
> >>> transformation
> >>>
> >>> EMiter += 1;
> >>>   end
> >>>
> >>>   return emdata(D, E, B, img, θ(Sigma, mu), t);
> >>> end
> >>>
> >>> Some context: img is and Image{RGB{Float64}}, Sigma and mu are custom
> >>> types :
> >>> type Σ
> >>>   D::Array{Float64, 2}
> >>>   E::Array{Float64, 2}
> >>>   B::Array{Float64, 2}
> >>> end
> >>>
> >>> type μ
> >>>   D::Array{Float64, 1}
> >>>   E::Array{Float64, 1}
> >>>   B::Array{Float64, 1}
> >>> end
> >>>
> >>> t is another custom type to represent a rigid body transformation
> >>> (translation_x, translation_y, rotation_angle), bg is, for example,
> RGB(0.0,
> >>> 0.0, 0.0) (black), and FullTs is an array of transformations, namely
> all the
> >>> transformations we consider.
> >>>
> >>> It's weird for multiple reasons: first of all, this code was running
> fine
> >>> when not in a separate function returning a custom type (the custom
> type in
> >>> the return is to ease the access since I want to call this function
> for an
> >>> array of images with pmap and output all that which is not practical).
> >>> Furthermore, sometimes it crashes with a segfault, sometimes with an
> >>> undefined reference on this line, not always after the same EM
> iteration:
> >>> CheckMu = norm(OldMu.D - mu.D) + norm(OldMu.E - mu.E) + norm(OldMu.B -
> >>> mu.B)
> >>>
> >>> Thanks in advance for the help,
> >>> Archibald
> >
> >
>


Re: [julia-users] Re: Segfault / Undefined reference

2015-04-24 Thread Archibald Pontier
I'll try to make a short example of the crash (although I'm having
trouble doing so) and filing an issue as soon as possible.

On 24 April 2015 at 16:28, Stefan Karpinski  wrote:
> Any segfault is always a bug. Would you mind filing an issue with a
> self-contained example that can produce the segfault (plus Julia version
> info)?
>
> On Fri, Apr 24, 2015 at 5:26 AM, Archibald Pontier
>  wrote:
>>
>> Hi again,
>>
>> I found a way to prevent my code from crashing, and it's really unexpected.
>> I modified a global variable which only affects functions that are called
>> by my emloop function, and somehow it works. I have no idea why this
>> affected my code, though, and especially what made it segfault instead of
>> just returning an error. So now the serial version works; I'm still
>> struggling with pmap, with plenty of "result shape unspecified" errors,
>> but that's ok.
>>
>> All the best,
>> Archibald
>>
>>
>> On Friday, 24 April 2015 10:36:15 UTC+2, Archibald Pontier wrote:
>>>
>>> Hi everyone,
>>>
>>> Julia is (sometimes) segfaulting when running this function :
>>>
>>> function emloop(img, Sigma, mu, t, model, imgc, bg, FullTs)
>>>   EMiter = 1;
>>>   Check = typemax(Float64);
>>> GModels, Genes,
>>>   (x, y) = size(img);
>>>   D = zeros(x, y); E = zeros(x, y); B = zeros(x, y);
>>>
>>>   # And we start the EM loop
>>>   while EMiter <= maxEM && Check > ϵ
>>> # Calculate the betas & find the optimal transformation for the
>>> current iteration
>>> (D, E, B) = fastBetas(model, [], mu, Sigma, img);
>>> nt = findTransform(model, B, Gray((bg.r + bg.g + bg.b)/3), FullTs);
>>>
>>> # Now calculate the optimal parameters for the appearance model
>>> OldMu = mu;  # For termination purposes
>>> (mu, Sigma) = calcTheta(img, D, E, B);
>>>
>>> # Apply the transformation found to the image
>>> img = RigidTransform(img, nt, bg, true, true);
>>>
>>> # Deal with the transformation
>>> Oldt = t;
>>> t += nt;
>>> t.tx %= x;
>>> t.ty %= y;
>>>
>>> # Termination criteria
>>> NormOldTT = sqrt((Oldt.tx - t.tx)^2 + (Oldt.ty - t.ty)^2 + (Oldt.θ -
>>> t.θ)^2);
>>> CheckMu = norm(OldMu.D - mu.D) + norm(OldMu.E - mu.E) + norm(OldMu.B
>>> - mu.B)
>>> Check = CheckMu + NormOldTT # Giving lots of weight to the
>>> transformation
>>>
>>> EMiter += 1;
>>>   end
>>>
>>>   return emdata(D, E, B, img, θ(Sigma, mu), t);
>>> end
>>>
>>> Some context: img is and Image{RGB{Float64}}, Sigma and mu are custom
>>> types :
>>> type Σ
>>>   D::Array{Float64, 2}
>>>   E::Array{Float64, 2}
>>>   B::Array{Float64, 2}
>>> end
>>>
>>> type μ
>>>   D::Array{Float64, 1}
>>>   E::Array{Float64, 1}
>>>   B::Array{Float64, 1}
>>> end
>>>
>>> t is another custom type to represent a rigid body transformation
>>> (translation_x, translation_y, rotation_angle), bg is, for example, RGB(0.0,
>>> 0.0, 0.0) (black), and FullTs is an array of transformations, namely all the
>>> transformations we consider.
>>>
>>> It's weird for multiple reasons: first of all, this code was running fine
>>> when not in a separate function returning a custom type (the custom type in
>>> the return is to ease the access since I want to call this function for an
>>> array of images with pmap and output all that which is not practical).
>>> Furthermore, sometimes it crashes with a segfault, sometimes with an
>>> undefined reference on this line, not always after the same EM iteration:
>>> CheckMu = norm(OldMu.D - mu.D) + norm(OldMu.E - mu.E) + norm(OldMu.B -
>>> mu.B)
>>>
>>> Thanks in advance for the help,
>>> Archibald
>
>


Re: [julia-users] Auto warn for unstable types

2015-04-24 Thread Stefan Karpinski
Yes, I'd like to add exactly this kind of thing.

On Fri, Apr 24, 2015 at 10:54 AM, Peter Brady  wrote:

> Tim Holy introduced me to the wonders of @code_warntype in this discussion
> https://groups.google.com/forum/#!topic/julia-users/sq5gj-3TdQU.  I've
> since been using it to track down other instabilities in my code since it
> turns out that I'm very good at writing poor julia code.  Are there any
> plans to incorporate automatic warnings about type unstable functions when
> they are compiled?  Maybe even via a startup flag like `-Wunstable`?  I
> would prefer that it's on by default.  This would go a long way towards
> helping me write much better code and probably help new users get more of
> the performance they were expecting.
>


[julia-users] Auto warn for unstable types

2015-04-24 Thread Peter Brady
Tim Holy introduced me to the wonders of @code_warntype in this 
discussion https://groups.google.com/forum/#!topic/julia-users/sq5gj-3TdQU. 
 I've since been using it to track down other instabilities in my code 
since it turns out that I'm very good at writing poor julia code.  Are 
there any plans to incorporate automatic warnings about type unstable 
functions when they are compiled?  Maybe even via a startup flag like 
`-Wunstable`?  I would prefer that it's on by default.  This would go a long 
way towards helping me write much better code and probably help new users 
get more of the performance they were expecting.


Re: [julia-users] Re: .jl to .exe

2015-04-24 Thread Stefan Karpinski
I think the question was regarding cross-compiling a .exe from a .jl
script, which, as you say, doesn't work yet.

On Fri, Apr 24, 2015 at 10:51 AM, Isaiah Norton 
wrote:

> How to do so?
>
>
> If this refers to cross-compiling Julia (rather than cross-compiling an
> exe from Julia, which is not currently possible), please see:
> https://github.com/JuliaLang/julia/blob/master/README.windows.md
>
> On Fri, Apr 24, 2015 at 10:27 AM, Paul D  wrote:
>
>> How to do so?
>>
>> On Fri, Apr 24, 2015 at 4:20 AM, Tony Kelman  wrote:
>> > It is possible to cross-compile a Windows exe of Julia from Linux right
>> now,
>> > so this could probably be made to work.
>> >
>> >
>> > On Thursday, April 23, 2015 at 8:15:46 AM UTC-7, pauld11718 wrote:
>> >>
>> >> Will it be possible to cross-compile?
>> >> Do all the coding on linux(64 bit) and generate the exe for windows(32
>> >> bit)?
>>
>
>


Re: [julia-users] Re: .jl to .exe

2015-04-24 Thread Isaiah Norton
>
> How to do so?


If this refers to cross-compiling Julia (rather than cross-compiling an exe
from Julia, which is not currently possible), please see:
https://github.com/JuliaLang/julia/blob/master/README.windows.md

On Fri, Apr 24, 2015 at 10:27 AM, Paul D  wrote:

> How to do so?
>
> On Fri, Apr 24, 2015 at 4:20 AM, Tony Kelman  wrote:
> > It is possible to cross-compile a Windows exe of Julia from Linux right
> now,
> > so this could probably be made to work.
> >
> >
> > On Thursday, April 23, 2015 at 8:15:46 AM UTC-7, pauld11718 wrote:
> >>
> >> Will it be possible to cross-compile?
> >> Do all the coding on linux(64 bit) and generate the exe for windows(32
> >> bit)?
>


[julia-users] Re: Help to optimize a loop through a list

2015-04-24 Thread Duane Wilson
Actually I was able to get the go ahead from my boss to create a gist of 
the code I've produced. I've modified it a little bit, as some of the 
things in there aren't really relevant.

http://nbviewer.ipython.org/gist/dwil/5cc31d1fea141740cf96

Any comments for optimizing this a little bit more would be appreciated :)

On Friday, April 24, 2015 at 11:32:09 AM UTC-3, Duane Wilson wrote:
>
> It will definitely help to have the code you're using for this. I've done, 
> and am currently doing, a lot of work in Julia on non-dominated sorting, 
> so I would be happy to give any help I can. I have a relatively performant 
> sorting algorithm I've written in Julia and getting 6000+ points down to 
> near 0.5s should be possible. 
>
>
>
>
> On Friday, April 24, 2015 at 10:27:08 AM UTC-3, Patrick O'Leary wrote:
>>
>> It's helpful if you can post the full code; gist.github.com is a good 
>> place to drop snippets.
>>
>> From what I can see here, there are two things:
>>
>> (1) No idea if this is in global scope. If so, that's a problem.
>>
>> (2) push!() will grow and allocate as needed. It does overallocate so you 
>> won't get a new allocation on every single push, but if you know how big 
>> the final result will be, you can use the sizehint() method to avoid the 
>> repeated reallocations.
>>
>> Please read the Performance Tips page for a longer description of (1) and 
>> other information:
>> http://julia.readthedocs.org/en/latest/manual/performance-tips/
>>
>>
>> On Friday, April 24, 2015 at 7:47:38 AM UTC-5, Ronan Chagas wrote:
>>>
>>> Hi guys!
>>>
>>> I am coding MGEO algorithm into a Julia module.
>>> MGEO is a multiobjective evolutionary algorithm that was proposed by a 
>>> researcher at my institution.
>>>
>>> I have already a version of MGEO in C++ (
>>> https://github.com/ronisbr/mgeocpp).
>>>
>>> The problem is that Julia version is very slow compared to C++.
>>> When I tried a dummy example, it took 0.6s in C++ and 60s in Julia.
>>>
>>> After some analysis, I realized that the problem is the loop through a 
>>> list.
>>> The algorithm store a list of points (the Pareto frontier) and for every 
>>> iteration the algorithm must go through every point in this list comparing 
>>> each one with the new candidate to be at the frontier.
>>> The problem is that, for this dummy example, the process is repeated 
>>> 128,000 times and the number of points in the frontier at the end is 6,000+.
>>>
>>> Each point in the list is an instance of this type:
>>>
>>> type ParetoPoint
>>> # Design variables.
>>> vars::Array{Float64,1}
>>> # Objective functions.
>>> f::Array{Float64, 1}
>>> end
>>>
>>> I am creating the list (the Pareto frontier) as follows
>>>
>>> paretoFrontier = ParetoPoint[]
>>>
>>> and pushing new points using
>>>
>>> push!(paretoFrontier, candidatePoint)
>>>
>>> in which candidatePoint is an instance of ParetoPoint.
>>>
>>> Can anyone help me to optimize this code?
>>>
>>> Thanks,
>>> Ronan
>>>
>>

Re: [julia-users] Re: .jl to .exe

2015-04-24 Thread Stefan Karpinski
"this could probably be made to work" indicates that it doesn't yet work
out of the box and requires some time and effort to be put into making it
work and documenting it. If you'd like to help, that would be great.

On Fri, Apr 24, 2015 at 10:27 AM, Paul D  wrote:

> How to do so?
>
> On Fri, Apr 24, 2015 at 4:20 AM, Tony Kelman  wrote:
> > It is possible to cross-compile a Windows exe of Julia from Linux right
> now,
> > so this could probably be made to work.
> >
> >
> > On Thursday, April 23, 2015 at 8:15:46 AM UTC-7, pauld11718 wrote:
> >>
> >> Will it be possible to cross-compile?
> >> Do all the coding on linux(64 bit) and generate the exe for windows(32
> >> bit)?
>


[julia-users] Re: Help to optimize a loop through a list

2015-04-24 Thread Duane Wilson
It will definitely help to have the code you're using for this. I have 
done, and am currently doing, a lot of work in Julia on non-dominated 
sorting, so I would be happy to give any help I can. I have a relatively 
performant sorting algorithm I've written in Julia, and getting 6000+ 
points down to near 0.5s should be possible. 




On Friday, April 24, 2015 at 10:27:08 AM UTC-3, Patrick O'Leary wrote:
>
> It's helpful if you can post the full code; gist.github.com is a good 
> place to drop snippets.
>
> From what I can see here, there are two things:
>
> (1) No idea if this is in global scope. If so, that's a problem.
>
> (2) push!() will grow and allocate as needed. It does overallocate so you 
> won't get a new allocation on every single push, but if you know how big 
> the final result will be, you can use the sizehint() method to avoid the 
> repeated reallocations.
>
> Please read the Performance Tips page for a longer description of (1) and 
> other information:
> http://julia.readthedocs.org/en/latest/manual/performance-tips/
>
>
> On Friday, April 24, 2015 at 7:47:38 AM UTC-5, Ronan Chagas wrote:
>>
>> Hi guys!
>>
>> I am coding MGEO algorithm into a Julia module.
>> MGEO is a multiobjective evolutionary algorithm that was proposed by a 
>> researcher at my institution.
>>
>> I have already a version of MGEO in C++ (
>> https://github.com/ronisbr/mgeocpp).
>>
>> The problem is that the Julia version is very slow compared to C++.
>> When I tried a dummy example, it took 0.6s in C++ and 60s in Julia.
>>
>> After some analysis, I realized that the problem is the loop through a 
>> list.
>> The algorithm stores a list of points (the Pareto frontier) and for every 
>> iteration the algorithm must go through every point in this list comparing 
>> each one with the new candidate to be at the frontier.
>> The problem is that, for this dummy example, the process is repeated 
>> 128,000 times and the number of points in the frontier at the end is 6,000+.
>>
>> Each point in the list is an instance of this type:
>>
>> type ParetoPoint
>> # Design variables.
>> vars::Array{Float64,1}
>> # Objective functions.
>> f::Array{Float64, 1}
>> end
>>
>> I am creating the list (the Pareto frontier) as follows
>>
>> paretoFrontier = ParetoPoint[]
>>
>> and pushing new points using
>>
>> push!(paretoFrontier, candidatePoint)
>>
>> in which candidatePoint is an instance of ParetoPoint.
>>
>> Can anyone help me to optimize this code?
>>
>> Thanks,
>> Ronan
>>
>

Re: [julia-users] code review: my first outer constructor :)

2015-04-24 Thread Gabriel Mitchell
@Mauro- thanks for the correction. @Thomas- thanks for the explanation!

On Friday, April 24, 2015 at 4:51:49 AM UTC-4, Tomas Lycken wrote:
>
> It might be worthwhile to point out *why* the last example acts in a 
> seemingly unintuitive way: all assignments return their right-hand sides, 
> so both n = 5.0 and n::Int = 5.0 return 5.0 - the latter will assign 5 to 
> the value of n, but it is not n that is returned in either case.
>
> I very rarely find a need for the ::T syntax other than to provide 
> methods to multiple dispatch; if I need to make sure that I assign n an 
> integer, I use n = convert(Int, 5.0) instead. It’s slightly more verbose, 
> but its return value will also be an integer (or throw an InexactError, 
> in which case I need to decide *how* to convert to integer and use a 
> function from the round/ceil/floor family instead…).
>
> // T
>
> On Friday, April 24, 2015 at 10:40:40 AM UTC+2, Mauro wrote:
>
> > (3) Cool given knowledge about P. However the ::T syntax is not for 
>> > declarations. Only annotations/assertions. If I try to run 
>> > 
>> > P = [1,2,3,4] 
>> > N::Int = sqrt(length(P)) 
>> > 
>> > I get a variable not defined error (julia 0.3.7). If I write instead 
>> > 
>> > P = [1,2,3,4] 
>> > N = sqrt(length(P))::Int 
>> > 
>> > the assertion fails, as sqrt returns a floating point type. 
>>
>> No, it works as Andrei coded it.  This is because :: behaves differently 
>> depending on whether it is in a value or variable context and whether 
>> it's at the REPL or in a local scope: 
>> http://docs.julialang.org/en/latest/manual/types/#type-declarations 
>>
>> julia> n::Int = 5.0 
>> ERROR: UndefVarError: n not defined 
>>  in eval at no file 
>>
>> julia> f() = (n::Int = 5.0; n) 
>> f (generic function with 1 method) 
>>
>> julia> f() 
>> 5 
>>
>> with this gotcha: 
>>
>> julia> f() = n::Int = 5.0 
>> f (generic function with 1 method) 
>>
>> julia> f() 
>> 5.0 
>>
>>
>>
>> > (4) Got it, I think. As I can't quite see why the particular branching 
>> > sequence is the right one though (could there, in principle, be fewer 
>> > elseif statements?) I would again suggest that perhaps a pattern 
>> matching 
>> > syntax could help clarify whats going on here. 
>> > 
>> > Yes I was referring to the gauge parameter. I can't quite remember 
>> > specifically what my offhand comment was about now. Although I think 
>> the 
>> > problem is still interesting! 
>> > 
>> > Anyway, my reply only has a bit of substance this time, but hopefully 
>> it 
>> > can help. 
>> > 
>> > On Monday, April 13, 2015 at 9:23:42 AM UTC-4, Andrei Berceanu wrote: 
>> >> 
>> >> Hi Gabriel, thanks a lot for your reply! 
>> >> 
>> >> (1) I use the WaveFunction type inside the Spectrum type, which as you 
>> can 
>> >> see is a collection of wavefunctions. For a typical usage case, have a 
>> look 
>> >> at 
>> >> 
>> >> 
>> http://nbviewer.ipython.org/github/berceanu/topo-photon/blob/master/anc/exploratory.ipynb
>>  
>> >> I am not quite sure what you mean with defining a methods for the 
>> "int" 
>> >> attribute. 
>> >> (2) As you can see, the gauge is an attribute of the Spectrum type, 
>> which 
>> >> is what I use in practice so I didn't see much point in storing it 
>> inside 
>> >> every wavefunction. Do you have a suggestion for a better 
>> implementation? 
>> >> In fact there are only these 2 gauge choices, landau and symmetric. 
>> >> (3) P should be a Vector of length N^2, and I thought that declaring N 
>> to 
>> >> be an int means implicit conversion as well - is that not so? 
>> >> (4) genspmat generates the (sparse) matrix of the Hamiltonian, so 
>> >> countnonzeros() simply counts beforehand how many nonzero elements 
>> there 
>> >> will be in this sparse matrix. If you think of an NxN 2D lattice, 
>> >> countnonzeros() simply counts the number of neighbours of each site (4 
>> for 
>> >> a center site, 3 for an edge site and 2 for a corner site). 
>> >> 
>> >> What do you mean by irrelevant in this case? Are you refering to the 
>> gauge 
>> >> parameter? 
>> >> 
>> >> On Sunday, April 12, 2015 at 11:28:48 PM UTC+2, Gabriel Mitchell 
>> wrote: 
>> >>> 
>> >>> Hi Andrei. I am not really Julia expert, but I do have a couple of 
>> high 
>> >>> level questions about your code, which might help anyone that feels 
>> >>> inclined to contribute a review. 
>> >>> 
>> >>> (1) Why have you made the WaveFunction type in the first place? In 
>> >>> particular, is there any reason that you just don't use alias 
>> Vector{Complex{Float64}} 
>> >>> and define a methods for the "int" attribute? 
>> >>> (2) the outer construction seems to allow for the option for 
>> different 
>> >>> gauges (which, in my unsophisticated mindset, I think of as 
>> alternative 
>> >>> parameterizations). Since different gauge choices in some sense imply 
>> >>> different semantics for the values in Psi (although presumably not 
>> other 
>> >>> function invariant to the gauge choice), it would seem that the user 
>> (or a 
>>

Re: [julia-users] Re: Segfault / Undefined reference

2015-04-24 Thread Stefan Karpinski
Any segfault is always a bug. Would you mind filing an issue with a
self-contained example that can produce the segfault (plus Julia version
info)?

On Fri, Apr 24, 2015 at 5:26 AM, Archibald Pontier <
archibald.pont...@gmail.com> wrote:

> Hi again,
>
> I found a way to prevent my code from crashing, and it's really unexpected.
> I modified a global variable which only affects functions that are
> called by my emloop function, and somehow it works. I have no idea why
> this affected my code, though, and especially why it made it segfault
> instead of just returning an error. So now the serial version works; I'm
> still struggling with pmap, with plenty of "result shape unspecified"
> errors, but that's ok.
>
> All the best,
> Archibald
>
>
> On Friday, 24 April 2015 10:36:15 UTC+2, Archibald Pontier wrote:
>>
>> Hi everyone,
>>
>> Julia is (sometimes) segfaulting when running this function:
>>
>> function emloop(img, Sigma, mu, t, model, imgc, bg, FullTs)
>>   EMiter = 1;
>>   Check = typemax(Float64);
>>   (x, y) = size(img);
>>   D = zeros(x, y); E = zeros(x, y); B = zeros(x, y);
>>
>>   # And we start the EM loop
>>   while EMiter <= maxEM && Check > ϵ
>> # Calculate the betas & find the optimal transformation for the
>> current iteration
>> (D, E, B) = fastBetas(model, [], mu, Sigma, img);
>> nt = findTransform(model, B, Gray((bg.r + bg.g + bg.b)/3), FullTs);
>>
>> # Now calculate the optimal parameters for the appearance model
>> OldMu = mu;  # For termination purposes
>> (mu, Sigma) = calcTheta(img, D, E, B);
>>
>> # Apply the transformation found to the image
>> img = RigidTransform(img, nt, bg, true, true);
>>
>> # Deal with the transformation
>> Oldt = t;
>> t += nt;
>> t.tx %= x;
>> t.ty %= y;
>>
>> # Termination criteria
>> NormOldTT = sqrt((Oldt.tx - t.tx)^2 + (Oldt.ty - t.ty)^2 + (Oldt.θ -
>> t.θ)^2);
>> CheckMu = norm(OldMu.D - mu.D) + norm(OldMu.E - mu.E) + norm(OldMu.B
>> - mu.B)
>> Check = CheckMu + NormOldTT # Giving lots of weight to the
>> transformation
>>
>> EMiter += 1;
>>   end
>>
>>   return emdata(D, E, B, img, θ(Sigma, mu), t);
>> end
>>
>> Some context: img is an Image{RGB{Float64}}, Sigma and mu are custom
>> types :
>> type Σ
>>   D::Array{Float64, 2}
>>   E::Array{Float64, 2}
>>   B::Array{Float64, 2}
>> end
>>
>> type μ
>>   D::Array{Float64, 1}
>>   E::Array{Float64, 1}
>>   B::Array{Float64, 1}
>> end
>>
>> t is another custom type to represent a rigid body transformation
>> (translation_x, translation_y, rotation_angle), bg is, for example,
>> RGB(0.0, 0.0, 0.0) (black), and FullTs is an array of transformations,
>> namely all the transformations we consider.
>>
>> It's weird for multiple reasons: first of all, this code was running fine
>> when not in a separate function returning a custom type (the custom type in
>> the return is to ease the access since I want to call this function for an
>> array of images with pmap and output all that which is not practical).
>> Furthermore, sometimes it crashes with a segfault, sometimes with an
>> undefined reference on this line, not always after the same EM iteration:
>> CheckMu = norm(OldMu.D - mu.D) + norm(OldMu.E - mu.E) + norm(OldMu.B -
>> mu.B)
>>
>> Thanks in advance for the help,
>> Archibald
>>
>


Re: [julia-users] Re: .jl to .exe

2015-04-24 Thread Paul D
How to do so?

On Fri, Apr 24, 2015 at 4:20 AM, Tony Kelman  wrote:
> It is possible to cross-compile a Windows exe of Julia from Linux right now,
> so this could probably be made to work.
>
>
> On Thursday, April 23, 2015 at 8:15:46 AM UTC-7, pauld11718 wrote:
>>
>> Will it be possible to cross-compile?
>> Do all the coding on linux(64 bit) and generate the exe for windows(32
>> bit)?


Re: [julia-users] Optimization of simple code

2015-04-24 Thread Stéphane Mottelet

Thank you, Julia is really worth using!

S.

Le 24/04/2015 02:36, Andreas Noack a écrit :
Try running it twice. It spends time compiling the function the first 
time. I get


julia> include("../../Downloads/test.jl")
elapsed time: 0.666072953 seconds (42 MB allocated, 1.12% gc time in 2 
pauses with 0 full sweep)


julia> @time test()
elapsed time: 0.014324694 seconds (25 MB allocated, 28.88% gc time in 
1 pauses with 0 full sweep)
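The advice above, as a tiny self-contained sketch (with a hypothetical function standing in for the attached test()):

```julia
# Measure after the first call, so JIT compilation time is excluded.
f(n) = sum(i -> i * 0.5, 1:n)   # stand-in for the attached test()
f(10)           # warm-up call: triggers compilation
@time f(10^7)   # subsequent calls measure the actual runtime
```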


2015-04-23 19:09 GMT-04:00 Stéphane Mottelet 
mailto:stephane.motte...@gmail.com>>:


Hello,

I am trying to improve the speed of code like this:

M1_v=[v[17]
v[104]
v[149]
-(v[18]+v[63]+v[103])
v[17]
v[104]
v[149]
...
-(v[39]+v[41]+v[124])
v[38]
v[125]
v[127]
-(v[39]+v[41]+v[124])];

The attached file (with 1000 repetitions) runs very slowly (0.71s)
compared to Scilab where it takes only 0.42 s on my machine. Did I
miss something ?

Thanks for your help

S.






[julia-users] Re: susceptance matrix

2015-04-24 Thread ST
On Thursday, April 23, 2015 at 12:51:05 PM UTC+1, Michela Di Lullo wrote:
>
> [...] and I need to declare the B susceptance matrix defined, in AMPL, as:
>
> param B{(k,m) in YBUS} := if(k == m)  then sum{(l,k,i) in BRANCH}  
> 1/branch_x[l,k,i] 
>  
> +sum{(l,i,k) in BRANCH}  1/branch_x[l,i,k]
>   else if(k != m) then 
> 
> sum{(l,k,m) in BRANCH}  1/branch_x[l,k,m]
>   
> +sum{(l,m,k) in BRANCH}  1/branch_x[l,m,k];
>
> I'm trying to make it but it's not working because of the indexes. I don't 
> know how to declare the parameter branch_x indexed by (n,b_from,b_to).
>

branch_x_dict = {BRANCH[idx]=>branch_x[idx] for idx=1:20}

then you can access the elements as you would expect: branch_x_dict[l,k,m]

 

> Any idea about how to declare the B matrix correctly? 
>

If I understand correctly, the loops inside the sums are only over the 
indices of variables not previously specified, so e.g. sum{(l,k,i) in 
BRANCH} would iterate over the elements of BRANCH for which the second 
value is equal to the k from the outer loop (over YBUS). It doesn't look as 
neat (maybe someone can improve this), but I think the following does what 
you want.

B = Dict()
for (k, m) in YBUS
v = 0.0
if k == m
for (l,k_,i) in BRANCH
k == k_ || continue
v += 1/branch_x_dict[l,k,i]
end
for (l,i,k_) in BRANCH
k == k_ || continue
v += 1/branch_x_dict[l,i,k]
end
else
for (l,k_,m_) in BRANCH
k == k_ || continue
m == m_ || continue
v += 1/branch_x_dict[l,k,m]
end
for (l,m_,k_) in BRANCH
k == k_ || continue
m == m_ || continue
v += 1/branch_x_dict[l,m,k]
end
end
B[k,m] = v
end

This is a bit cumbersome but does what you want, I think. Maybe someone 
else has a good idea how to simplify it.

I've used a Dict() for B; you can also just define B as a matrix and the 
code should run just the same:

N_NODE = maximum([NODE_FROM,NODE_TO]) # largest node number = size of B 
matrix
B = spzeros(N_NODE, N_NODE)

(or just zeros if you want a full, not a sparse matrix.)

Alternatively, when B is a Dict, you can convert it into a matrix by 
iterating over the elements:

N_NODE = maximum([NODE_FROM,NODE_TO]) # maximum number of nodes = size of B 
matrix
Bmat = spzeros(N_NODE, N_NODE)
for ((k, m), v) in B
Bmat[k,m] = v
end

For one-dimensional sparse matrices there's sparsevec() which does this, 
but there doesn't seem to be an equivalent for two-dimensional sparse 
matrices/Dicts?...
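One option for the two-dimensional case (my own sketch, with a toy stand-in for the Dict built above) is to collect the Dict's keys and values into index/value vectors and call sparse(I, J, V, m, n):

```julia
using SparseArrays   # stdlib module in current Julia; sparse() was exported by default in 2015-era Julia

# A toy stand-in for the B Dict built earlier in the thread:
B = Dict((1, 2) => 3.0, (2, 2) => 1.0)
N_NODE = 2

# Collect keys/values into index and value vectors, then build the sparse matrix.
Is = Int[]; Js = Int[]; Vs = Float64[]
for ((k, m), v) in B
    push!(Is, k); push!(Js, m); push!(Vs, v)
end
Bmat = sparse(Is, Js, Vs, N_NODE, N_NODE)
```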

Hope this helps.
best regards,
ST


[julia-users] Parallel computing and multiple outputs

2015-04-24 Thread Archibald Pontier
Hi everyone,

I have the following problem which originates from the fact that 
SharedArray doesn't seem to accept composite types as an argument. Consider 
the following sequence (I started julia with -p 2):

julia> @everywhere type T
   t::Array{Float64, 1}
   end

julia> @everywhere type T2
   t::Array{Float64, 1}
   end

julia> @everywhere type T3
   t::Array{Float64, 1}
   end

julia> @everywhere function test(t::T, t2::T2, i)
   return T3([t.t[i], t2.t[i]]);
   end

julia> @everywhere t = T(rand(3))

julia> @everywhere t2 = T2(rand(3))

julia> t3 = pmap(i -> test(t, t2, i), 1:3)
3-element Array{Any,1}:
 T3([0.521706,0.0155359])
 T3([0.112277,0.59876])  
 T3([0.0399843,0.373688])

julia> @everywhere function test2(t3::T3, t::T, t2::T2)
   t.t[1] += t3.t[1];
   t2.t[2] += t3.t[2];
   return (t, t2);
   end

julia> A = pmap(i -> test2(t3[i], t, t2), 1:3)
exception on exception on 2: 3: ERROR: t3 not defined
 in anonymous at none:1
 in anonymous at multi.jl:855
 in run_work_thunk at multi.jl:621
 in anonymous at task.jl:855
ERROR: t3 not defined
 in anonymous at none:1
 in anonymous at multi.jl:855
 in run_work_thunk at multi.jl:621
 in anonymous at task.jl:855
2-element Array{Any,1}:
 UndefVarError(:t3)
 UndefVarError(:t3)

The problem solves itself if I do @everywhere t3 = pmap(...); however, from 
my understanding this would just lead to every operation being done on every 
process, which defeats the purpose of doing things in parallel in the first 
place. Am I wrong with this conclusion?

Regards,
Archibald


[julia-users] Re: Help to optimize a loop through a list

2015-04-24 Thread Patrick O'Leary
It's helpful if you can post the full code; gist.github.com is a good place 
to drop snippets.

From what I can see here, there are two things:

(1) No idea if this is in global scope. If so, that's a problem.

(2) push!() will grow and allocate as needed. It does overallocate so you 
won't get a new allocation on every single push, but if you know how big 
the final result will be, you can use the sizehint() method to avoid the 
repeated reallocations.

Please read the Performance Tips page for a longer description of (1) and 
other information:
http://julia.readthedocs.org/en/latest/manual/performance-tips/
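Point (2) might look like the following sketch (hypothetical sizes; note the function was spelled sizehint() in Julia 0.3 and renamed sizehint! later):

```julia
# Reserve capacity up front so push! does not trigger repeated reallocations.
v = Float64[]
sizehint!(v, 10_000)          # spelled sizehint(v, 10_000) in Julia 0.3
for i in 1:10_000
    push!(v, i * 0.5)
end
```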


On Friday, April 24, 2015 at 7:47:38 AM UTC-5, Ronan Chagas wrote:
>
> Hi guys!
>
> I am coding MGEO algorithm into a Julia module.
> MGEO is a multiobjective evolutionary algorithm that was proposed by a 
> researcher at my institution.
>
> I have already a version of MGEO in C++ (
> https://github.com/ronisbr/mgeocpp).
>
> The problem is that the Julia version is very slow compared to C++.
> When I tried a dummy example, it took 0.6s in C++ and 60s in Julia.
>
> After some analysis, I realized that the problem is the loop through a 
> list.
> The algorithm stores a list of points (the Pareto frontier) and for every 
> iteration the algorithm must go through every point in this list comparing 
> each one with the new candidate to be at the frontier.
> The problem is that, for this dummy example, the process is repeated 
> 128,000 times and the number of points in the frontier at the end is 6,000+.
>
> Each point in the list is an instance of this type:
>
> type ParetoPoint
> # Design variables.
> vars::Array{Float64,1}
> # Objective functions.
> f::Array{Float64, 1}
> end
>
> I am creating the list (the Pareto frontier) as follows
>
> paretoFrontier = ParetoPoint[]
>
> and pushing new points using
>
> push!(paretoFrontier, candidatePoint)
>
> in which candidatePoint is an instance of ParetoPoint.
>
> Can anyone help me to optimize this code?
>
> Thanks,
> Ronan
>


Re: [julia-users] Help to optimize a loop through a list

2015-04-24 Thread Mauro
> type ParetoPoint
> # Design variables.
> vars::Array{Float64,1}
> # Objective functions.
> f::Array{Float64, 1}
> end

How large are the vectors in vars and f?  Are they always the same size?
If so, an immutable-isbits datatype might help.
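As an illustration of that suggestion (a sketch with made-up field counts, not the poster's actual data): if each point always holds, say, two design variables and two objectives, scalar fields make the type isbits, so a Vector of them stores the points inline instead of as heap-allocated boxes.

```julia
# Spelled `immutable` in 2015-era Julia; `struct` (immutable by default) today.
struct ParetoPoint4
    v1::Float64   # design variables
    v2::Float64
    f1::Float64   # objective values
    f2::Float64
end

frontier = ParetoPoint4[]
sizehint!(frontier, 8_000)    # expected frontier size (made up here)
push!(frontier, ParetoPoint4(0.1, 0.2, 1.0, 2.0))
```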

> I am creating the list (the Pareto frontier) as follows
>
> paretoFrontier = ParetoPoint[]
>
> and pushing new points using
>
> push!(paretoFrontier, candidatePoint)
>
> in which candidatePoint is an instance of ParetoPoint.
>
> Can anyone help me to optimize this code?
>
> Thanks,
> Ronan



Re: [julia-users] Float32() or float32() ?

2015-04-24 Thread Patrick O'Leary
The master branch of the git repository is currently version 0.4-dev, which 
is in an unstable development phase. The relevant downloads are at the 
bottom of http://julialang.org/downloads/ under "Nightly Builds".

On Friday, April 24, 2015 at 5:47:37 AM UTC-5, Sisyphuss wrote:
>
> Thanks Tomas!
>
> By the way, I built Julia from the source, and I got the version 0.3. Do 
> you know how I can get the version 0.4?
>
>
>
> On Friday, April 24, 2015 at 12:21:17 PM UTC+2, Tomas Lycken wrote:
>>
>> To be concise: Yes, and yes :)
>>
>> Instead of `float32(x)` and the like, 0.4 uses constructor methods 
>> (`Float32(x)` returns a `Float32`, just as `Foo(x)` returns a `Foo`...).
>>
>> // T
>>
>> On Friday, April 24, 2015 at 11:40:02 AM UTC+2, Sisyphuss wrote:
>>>
>>> I mean is there a syntax change from version 0.3 to version 0.4?
>>>
>>> Is "float32()"-like minuscule conversion going to be deprecated?
>>>
>>>
>>>
>>> On Friday, April 24, 2015 at 9:58:18 AM UTC+2, Tim Holy wrote:

 I'm not sure what your question is, but the documentation is correct in 
 both 
 cases. You can also use the Compat package, which allows you to write 
 x = @compat Float32(y) 
 even on julia 0.3. 

 --Tim 

 On Friday, April 24, 2015 12:35:01 AM Sisyphuss wrote: 
 > To convert a number to the type Float32, 
 > 
 > In 0.3 doc: float32() 
 > In 0.4 doc: Float32() 



[julia-users] Re: Macro with varargs

2015-04-24 Thread Patrick O'Leary
On Friday, April 24, 2015 at 2:59:01 AM UTC-5, Kuba Roth wrote:
>
> Thanks, 
> So in order to create the quote/end block you use:
>   newxpr = Expr(:block)
> And the newxpr.args is the array field of the QuoteNode which stores each 
> expression?
>

Yeah, I think there might be an easier way to do it, but I couldn't think 
of it at the time :D The main goal was to introduce the macroexpand() 
function, which I hope you find useful!

Patrick
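As a concrete sketch of the pattern discussed above (a hypothetical macro, not from the original thread): collect the varargs into a block expression by pushing onto Expr(:block).args, then inspect the result with macroexpand() if curious.

```julia
# Build a quote/end block from macro varargs via Expr(:block).
macro printall(exprs...)
    blk = Expr(:block)
    for ex in exprs
        # Interpolate the source text and the (escaped) runtime value.
        push!(blk.args, :(println($(string(ex)), " = ", $(esc(ex)))))
    end
    blk
end

x = 2
@printall(x, x + 1)   # prints "x = 2" then "x + 1 = 3"
```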


[julia-users] Help to optimize a loop through a list

2015-04-24 Thread Ronan Chagas
Hi guys!

I am coding MGEO algorithm into a Julia module.
MGEO is a multiobjective evolutionary algorithm that was proposed by a 
researcher at my institution.

I have already a version of MGEO in C++ 
(https://github.com/ronisbr/mgeocpp).

The problem is that the Julia version is very slow compared to C++.
When I tried a dummy example, it took 0.6s in C++ and 60s in Julia.

After some analysis, I realized that the problem is the loop through a list.
The algorithm stores a list of points (the Pareto frontier) and for every 
iteration the algorithm must go through every point in this list comparing 
each one with the new candidate to be at the frontier.
The problem is that, for this dummy example, the process is repeated 
128,000 times and the number of points in the frontier at the end is 6,000+.

Each point in the list is an instance of this type:

type ParetoPoint
# Design variables.
vars::Array{Float64,1}
# Objective functions.
f::Array{Float64, 1}
end

I am creating the list (the Pareto frontier) as follows

paretoFrontier = ParetoPoint[]

and pushing new points using

push!(paretoFrontier, candidatePoint)

in which candidatePoint is an instance of ParetoPoint.

Can anyone help me to optimize this code?

Thanks,
Ronan
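Since the inner comparison is the hot spot, here is a hedged sketch (my own code, assuming minimization; not taken from the linked repository) of an allocation-free dominance test over the f vectors:

```julia
# true if f1 dominates f2: f1 <= f2 in every objective, strictly < in at least one.
function dominates(f1::Vector{Float64}, f2::Vector{Float64})
    strict = false
    @inbounds for i in 1:length(f1)
        f1[i] > f2[i] && return false
        strict |= f1[i] < f2[i]
    end
    strict
end

dominates([1.0, 2.0], [2.0, 3.0])   # true
dominates([1.0, 2.0], [1.0, 2.0])   # false (equal, not strictly better)
```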


[julia-users] Problem with extend_trace in Optim

2015-04-24 Thread Christopher Fisher
Hi all-
 
I am having trouble using the extended_trace option in Optim. My code and 
the resulting error message are located below. Everything works fine until I 
add extended_trace = true. Any help would be greatly appreciated.

optimum = optimize(fun,Start,method=:nelder_mead,store_trace = 
true,extended_trace = true,iterations = 40)

 

x not defined

while loading In[123], in expression starting on line 21

 

 in nelder_mead at /.julia/v0.3/Optim/src/nelder_mead.jl:28

 in optimize at /.julia/v0.3/Optim/src/optimize.jl:423

 in anonymous at no file:31


[julia-users] Re: Is there a plotting package that works for a current 0.4 build?

2015-04-24 Thread Simon Danisch
For Gadfly I found out that if you pull the master branch of most of the 
involved packages, it works.
I think it was Compose, Gadfly, DataFrames...


Am Freitag, 24. April 2015 12:58:55 UTC+2 schrieb Tomas Lycken:
>
> I git pull-ed and built Julia this morning, and I can't get any of PyPlot, 
> Winston, Gadfly, or TextPlots to show me anything (either loading the 
> package, or running something equivalent to plot(rand(15)), is borked).
>
> Is there any plotting tool that's had time to adjust to the tuplocalypse 
> and other recent breaking changes?
>
> // T
>


Re: [julia-users] in-place fn in a higher order fn

2015-04-24 Thread Tim Holy
I don't think there's any fundamental reason things can't work as you're 
hoping, I just think they all count as optimizations that have not yet been 
implemented.

--Tim

On Friday, April 24, 2015 11:19:14 AM Mauro wrote:
> >> Well it seems Julia should know that nothing is used from fn!, without
> >> knowing anything about fn!.  That is at least what @code_warntype
> >> suggest (with julia --inline=no).  For
> >> 
> >> function f(ar)
> >> 
> >> for i=1:n
> >> 
> >> hh!(ar, i)
> >> 
> >> end
> >> 
> >> end
> >> 
> >> the loop gives:
> >>   GenSym(1) = $(Expr(:call1, :(top(next)), GenSym(0), :(#s1::Int64)))
> >>   i = $(Expr(:call1, :(top(getfield)), GenSym(1), 1))
> >>   #s1 = $(Expr(:call1, :(top(getfield)), GenSym(1), 2)) # line 71:
> >>   $(Expr(:call1, :hh!, :(ar::Array{Float64,1}), :(i::Int64)))
> >>   3:
> >>   unless $(Expr(:call1, :(top(!)), :($(Expr(:call1, :(top(!)),
> >> :
> >> :($(Expr(:call1, :(top(done)), GenSym(0), :(#s1::Int64) goto 2
> >> 
> >> For
> >> function g(fn!,ar)
> >> 
> >> a = 0
> >> for i=1:n
> >> 
> >> fn!(ar, i)
> >> 
> >> end
> >> a
> >> 
> >> end
> >> 
> >> the loop gives:
> >>   2:
> >>   GenSym(1) = $(Expr(:call1, :(top(next)), GenSym(0), :(#s1::Int64)))
> >>   i = $(Expr(:call1, :(top(getfield)), GenSym(1), 1))
> >>   #s1 = $(Expr(:call1, :(top(getfield)), GenSym(1), 2)) # line 78:
> >>   (fn!::F)(ar::Array{Float64,1},i::Int64)::Any
> > 
> > Wouldn't it (or fn!) need to allocate for this Any here ^
> > 
> > IIUC its fn! that decides if it returns something, and even if the caller
> > doesn't need it, the return value still has to be stored somewhere.
> 
> I think this optimisation should work irrespective of what fn! returns
> by the fact that the value is not used.  This and more seems to happen
> in the first-order function.  Here a version of first-order
> function which calls a function which returns an inferred Any:
> 
> const aa = Any[1]
> hh_ret!(ar,i) = (ar[i] = hh(ar[i]); aa[1])
> 
> function f_ret(ar)
> a = aa[1]
> for i=1:n
> a=hh_ret!(ar, i)
> end
> a
> end
> julia> @time f_ret(a);
> elapsed time: 0.259893608 seconds (160 bytes allocated)
> 
> It's still fast and doesn't allocate, even though it uses the value!
> 
> > Maybe fn! does the allocating, but it still happens.
> 
> It's a different story if there is actual allocation in fn!.
> 
> >>   3:
> >>   unless $(Expr(:call1, :(top(!)), :($(Expr(:call1, :(top(!)),
> >> :
> >> :($(Expr(:call1, :(top(done)), GenSym(0), :(#s1::Int64) goto 2
> >> 
> >> So, at least from my naive viewpoint, it looks like there is no need for
> >> an allocation in this case.  Or is there ever a case when the return
> >> value of fn! would be used?
> >> 
> >> I think this is quite different from the case when the return value of
> >> fn! is used because then, as long as Julia cannot do type inference on
> >> the value of fn!, it cannot know what the type is.
> > 
> > Which is why its ::Any above I guess and my understanding is Any means
> > boxed and allocated on the heap?
> 
> Thanks for indulging me!



Re: [julia-users] Is there a canonical method name for first and last index of a collection?

2015-04-24 Thread Mauro
There is `first`
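Worth noting that `first` returns the first *element*, not the first index. For the index bounds themselves (shown here in later-Julia spelling, which is my addition; 0.3 only had endof() for the end):

```julia
v = [10, 20, 30]

first(v), last(v)              # (10, 30): the elements, not the indices
firstindex(v), lastindex(v)    # (1, 3): added in later Julia; 0.3 spelled the end as endof(v)
```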

On Fri, 2015-04-24 at 12:36, Tim Holy  wrote:
> There's endof(), but I don't know of a corresponding one for the beginning.
>
> --Tim
>
> On Friday, April 24, 2015 03:14:26 AM Tomas Lycken wrote:
>> I'm implementing a collection of types that implement indexing, but where
>> the indexes aren't necessarily bounded by `[1, size(collection, dim)]`.
>> Some of them will have these bounds, but others will be indexable e.g. in
>> `[0.5, size(collection, dim) + .5]` and yet others may have completely
>> arbitrary bounds.
>> 
>> Is there a canonical name for methods that would return (the upper/lower)
>> bounds for indexing? I am thinking along the lines of
>> 
>> ```julia
>> lowerbound(v::Vector) = 1
>> upperbound(v::Vector) = length(v)
>> bounds(v::AbstractVector) = (lowerbound(v), upperbound(v))
>> 
>> lowerbound(A::Array, d::Int) = 1
>> upperbound(A::Array, d::Int) = size(A, d)
>> bounds(A::AbstractArray, d::Int) = (lowerbound(A, d), upperbound(A, d))
>> 
>> #etc...
>> ```
>> 
>> but I'd rather add methods to an existing function, if there is one, than
>> just make up my own.
>> 
>> (I did try a few searches in the docs, but everything I could find
>> pertained to finding elements in collections...)
>> 
>> // T



Re: [julia-users] Ode solver thought, numerical recipes

2015-04-24 Thread Mauro
> - In the current implementation of Julia and Python/Scipy, you need to give 
> all the points t1,...,tn where you want to know y at the very beginning. If 
> you want to solve the differential equation on [ta,tb] and you want to find a 
> t such that y(t)=0, you are stuck because you'll most likely do a Newton 
> method and your evaluation point at iteration n will depend on the value of y 
> at iteration n-1. Mathematica solved this problem by returning an "interpolation 
> object" instead of some values. This is extremely useful, especially with 
> dense methods.
> - If you want to find a t such that y(t)=0 and you don't even know an upper 
> bound for t (such as tb) you are stuck, even with Mathematica.

I think you should be able to use [0., Inf] in the ODE.jl solvers (or
[0, 1e308]).  Of course currently, without event detection, they will
not terminate.

> I propose the following solution. If you want to 
>
> solver = odesolver(f, ta, ya; method = "Euler", deltat = 0.01)
> y = value(solver, t)
>
> solver = odesolver(t, ta, ya; method = "DenseGraggBulirschStoer", relerror = 
> 1.0e-6)
> y = value(solver, t)
>
> The type of solver would depend on the method. For an Euler method, it would 
> just contain f, ta, ya the last t evaluated and the last value y computed. 
> For the DenseGraggBulirschStoer method, it would contain more information: 
> all the values t and the corresponding y computed, and some polynomial 
> coefficients for evaluation in between them.

Did you see DASSL.jl?  It has an iterator method built in.  Maybe there
you can use tspan=[0,Inf]

> I have a few question to implement this idea.
> - The interface makes the function odesolver not type stable as the
> type of solver depends upon the method. Will it prevent static
> compilation when it is released? Is there any workaround?

This is what Alex suggested, in an example:

type ODEmeth{Name}
...
end
method_euler = ODEmeth{:euler}(...)

solver = odesolver(f, ta, ya, method_euler, deltat = 0.01)

This will dispatch on method_euler.  Note, you cannot use keyword args
for dispatch.

> - For the type of y, it could by anything living in a vector space. I
> am thinking of Float64, Array{Float64, 1}, Array{Float64, 2} maybe a
> FixedSizedArray if such a thing exists. Is there a way to enforce
> that? Is there a way to specify "any type that lives in a vector
> space" ?

Traits.jl might work, or you could just use a Holy trait...
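
A minimal Holy-trait sketch of "accept anything that lives in a vector
space" (the names and the classification rules are illustrative):

```julia
# Trait values: two singleton types standing for yes/no.
struct IsVectorSpace end
struct NotVectorSpace end

# Classify types; numbers and numeric arrays opt in, everything else is out.
vectortrait(::Type{<:Number}) = IsVectorSpace()
vectortrait(::Type{<:AbstractArray{<:Number}}) = IsVectorSpace()
vectortrait(::Type) = NotVectorSpace()

# The entry point re-dispatches on the trait value.
checkstate(y) = checkstate(vectortrait(typeof(y)), y)
checkstate(::IsVectorSpace, y) = y
checkstate(::NotVectorSpace, y) =
    error("state type $(typeof(y)) must live in a vector space")
```

New types can opt in by adding one `vectortrait` method, without any
common abstract supertype.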

> - For the function f, I am thinking of preallocating dy_dt and calling
> f!(dy_dt, t, y), or calling dy_dt = f(t, y). The current ODE package uses
> the second method. Doesn't it kill the performance with many heap
> allocations? Will FixedSizeArray solve this problem? Is there a
> metaprogramming trick to transform the second function into the first
> one? Also, the first function is subject to aliasing between dy_dt and
> y. Is there a way to tell Julia there is no aliasing?

I think/hope that is what ODE.jl will do in the long run as well.  You can
easily wrap f! to get f, but wrapping f to get f! is not necessarily as
performant.
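
A sketch of the two calling conventions and the wrappers between them
(function names are illustrative):

```julia
# Allocating form: returns a freshly allocated dy_dt each call.
f(t, y) = -y

# In-place form wrapping f: signature is right, but the inner f(t, y)
# call still allocates, so nothing is gained.
f!(dy, t, y) = (dy .= f(t, y); nothing)

# Genuinely in-place form:
g!(dy, t, y) = (dy .= 2 .* y; nothing)

# Allocating wrapper around g!: pays one allocation per call.
g(t, y) = (dy = similar(y); g!(dy, t, y); dy)
```

So a solver written against f! can serve both styles, while one written
against f forces an allocation per right-hand-side evaluation.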

> - Some implementations might come from Numerical Recipes, even though they also 
> exist in other books (not as code, but as algorithms). I've seen people break 
> the copyright of Numerical Recipes easily. To what extent should the code be 
> modified so that it does not break the license? Is an adaptation from one 
> language to another enough to say that the license does not apply?
>
> Thanks



Re: [julia-users] Finite field / Gaulois field implementation

2015-04-24 Thread Andreas Noack
Hej Valentin

There are a couple of simple examples. At least this

http://acooke.org/cute/FiniteFiel1.html

and I did one in this

http://andreasnoack.github.io/talks/2015AprilStanford_AndreasNoack.ipynb

notebook. The arithmetic definitions are simpler for GF(2), but they should be
simple modifications of the definitions in the notebook.
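
A minimal GF(2) sketch along those lines (the definitions are
illustrative, written in current syntax, and not taken from the linked
notebook):

```julia
# GF(2): the field with two elements; addition is XOR, multiplication is AND.
struct GF2 <: Number
    x::Bool
end
GF2(n::Integer) = GF2(isodd(n))   # reduce any integer mod 2

Base.:+(a::GF2, b::GF2) = GF2(xor(a.x, b.x))
Base.:-(a::GF2, b::GF2) = a + b   # subtraction equals addition in GF(2)
Base.:*(a::GF2, b::GF2) = GF2(a.x & b.x)
Base.inv(a::GF2) = a.x ? a : error("0 has no inverse in GF(2)")
Base.zero(::Type{GF2}) = GF2(false)
Base.one(::Type{GF2})  = GF2(true)
```

With zero/one defined, generic code such as sums over GF2 vectors works
without further changes.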

2015-04-24 2:50 GMT-04:00 Valentin Churavy :

> Hej,
>
> Did anybody try to implement finite field arithmetic in Julia? I would
> be particularly interested in GF(2), i.e. arithmetic modulo 2.
>
> Best,
> Valentin
>


[julia-users] Is there a plotting package that works for a current 0.4 build?

2015-04-24 Thread Tomas Lycken
I git pull-ed and built Julia this morning, and I can't get any of PyPlot, 
Winston, Gadfly, or TextPlots to show me anything (either loading the 
package or running something equivalent to plot(rand(15)) is borked).

Is there any plotting tool that's had time to adjust to the tuplocalypse 
and other recent breaking changes?

// T


Re: [julia-users] Re: Ode solver thought, numerical recipes

2015-04-24 Thread Mauro
It's a tricky business: https://en.wikipedia.org/wiki/Clean_room_design ...

On Fri, 2015-04-24 at 10:28, François Fayard  wrote:
> I'll make sure I won't use a single line of their code and only use the 
> papers they refer to.



Re: [julia-users] Float32() or float32() ?

2015-04-24 Thread Sisyphuss
Thanks Tomas!

By the way, I built Julia from source and got version 0.3. Do 
you know how I can get version 0.4?



On Friday, April 24, 2015 at 12:21:17 PM UTC+2, Tomas Lycken wrote:
>
> To be concise: Yes, and yes :)
>
> Instead of `float32(x)` and the like, 0.4 uses constructor methods 
> (`Float32(x)` returns a `Float32`, just as `Foo(x)` returns a `Foo`...).
>
> // T
>
> On Friday, April 24, 2015 at 11:40:02 AM UTC+2, Sisyphuss wrote:
>>
>> I mean is there a syntax change from version 0.3 to version 0.4?
>>
>> Is the lowercase "float32()"-style conversion going to be deprecated?
>>
>>
>>
>> On Friday, April 24, 2015 at 9:58:18 AM UTC+2, Tim Holy wrote:
>>>
>>> I'm not sure what your question is, but the documentation is correct in 
>>> both 
>>> cases. You can also use the Compat package, which allows you to write 
>>> x = @compat Float32(y) 
>>> even on julia 0.3. 
>>>
>>> --Tim 
>>>
>>> On Friday, April 24, 2015 12:35:01 AM Sisyphuss wrote: 
>>> > To convert a number to the type Float32, 
>>> > 
>>> > In 0.3 doc: float32() 
>>> > In 0.4 doc: Float32() 
>>>
>>>

Re: [julia-users] Is there a canonical method name for first and last index of a collection?

2015-04-24 Thread Tim Holy
There's endof(), but I don't know of a corresponding one for the beginning.
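
(As an aside for readers finding this later: current Julia settled on
exactly this pair, Base.firstindex and Base.lastindex, with endof renamed
to lastindex. A sketch of a type with shifted bounds, using illustrative
names:)

```julia
# A vector whose valid indices are offset+1 .. offset+length(data).
struct OffsetVec{T}
    data::Vector{T}
    offset::Int
end

Base.firstindex(v::OffsetVec) = 1 + v.offset
Base.lastindex(v::OffsetVec)  = length(v.data) + v.offset
Base.getindex(v::OffsetVec, i::Int) = v.data[i - v.offset]

v = OffsetVec([10, 20, 30], 5)
v[firstindex(v)]   # → 10
v[end]             # lowered to v[lastindex(v)] → 30
```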

--Tim

On Friday, April 24, 2015 03:14:26 AM Tomas Lycken wrote:
> I'm implementing a collection of types that implement indexing, but where
> the indexes aren't necessarily bounded by `[1, size(collection, dim)]`.
> Some of them will have these bounds, but others will be indexable e.g. in
> `[0.5, size(collection, dim) + .5]` and yet others may have completely
> arbitrary bounds.
> 
> Is there a canonical name for methods that would return (the upper/lower)
> bounds for indexing? I am thinking along the lines of
> 
> ```julia
> lowerbound(v::Vector) = 1
> upperbound(v::Vector) = length(v)
> bounds(v::AbstractVector) = (lowerbound(v), upperbound(v))
> 
> lowerbound(A::Array, d::Int) = 1
> upperbound(A::Array, d::Int) = size(A, d)
> bounds(A::AbstractArray, d::Int) = (lowerbound(A, d), upperbound(A, d))
> 
> #etc...
> ```
> 
> but I'd rather add methods to an existing function, if there is one, than
> just make up my own.
> 
> (I did try a few searches in the docs, but everything I could find
> pertained to finding elements in collections...)
> 
> // T



Re: [julia-users] Re: Ode solver thought, numerical recipes

2015-04-24 Thread Tim Holy
On Friday, April 24, 2015 01:28:58 AM François Fayard wrote:
> I'll make sure I won't use a single line of their code and only use the
> papers they refer to.

Good. I couldn't tell exactly what your intentions were from your comments, so 
I wanted to make doubly-sure you didn't end up getting into a difficult 
situation after having invested a lot of effort.

Good luck!

--Tim



Re: [julia-users] Float32() or float32() ?

2015-04-24 Thread Tomas Lycken
To be concise: Yes, and yes :)

Instead of `float32(x)` and the like, 0.4 uses constructor methods 
(`Float32(x)` returns a `Float32`, just as `Foo(x)` returns a `Foo`...).

// T

On Friday, April 24, 2015 at 11:40:02 AM UTC+2, Sisyphuss wrote:
>
> I mean is there a syntax change from version 0.3 to version 0.4?
>
> Is the lowercase "float32()"-style conversion going to be deprecated?
>
>
>
> On Friday, April 24, 2015 at 9:58:18 AM UTC+2, Tim Holy wrote:
>>
>> I'm not sure what your question is, but the documentation is correct in 
>> both 
>> cases. You can also use the Compat package, which allows you to write 
>> x = @compat Float32(y) 
>> even on julia 0.3. 
>>
>> --Tim 
>>
>> On Friday, April 24, 2015 12:35:01 AM Sisyphuss wrote: 
>> > To convert a number to the type Float32, 
>> > 
>> > In 0.3 doc: float32() 
>> > In 0.4 doc: Float32() 
>>
>>

[julia-users] Is there a canonical method name for first and last index of a collection?

2015-04-24 Thread Tomas Lycken
I'm implementing a collection of types that implement indexing, but where 
the indexes aren't necessarily bounded by `[1, size(collection, dim)]`. 
Some of them will have these bounds, but others will be indexable e.g. in 
`[0.5, size(collection, dim) + .5]` and yet others may have completely 
arbitrary bounds.

Is there a canonical name for methods that would return (the upper/lower) 
bounds for indexing? I am thinking along the lines of

```julia
lowerbound(v::Vector) = 1
upperbound(v::Vector) = length(v)
bounds(v::AbstractVector) = (lowerbound(v), upperbound(v))

lowerbound(A::Array, d::Int) = 1
upperbound(A::Array, d::Int) = size(A, d)
bounds(A::AbstractArray, d::Int) = (lowerbound(A, d), upperbound(A, d))

#etc...
```

but I'd rather add methods to an existing function, if there is one, than 
just make up my own.

(I did try a few searches in the docs, but everything I could find 
pertained to finding elements in collections...)

// T


Re: [julia-users] in-place fn in a higher order fn

2015-04-24 Thread Mauro
>> Well it seems Julia should know that nothing is used from fn!, without 
>> knowing anything about fn!.  That is at least what @code_warntype 
>> suggest (with julia --inline=no).  For 
>>
>> function f(ar) 
>> for i=1:n 
>> hh!(ar, i) 
>> end 
>> end 
>>
>> the loop gives: 
>>
>>   GenSym(1) = $(Expr(:call1, :(top(next)), GenSym(0), :(#s1::Int64))) 
>>   i = $(Expr(:call1, :(top(getfield)), GenSym(1), 1)) 
>>   #s1 = $(Expr(:call1, :(top(getfield)), GenSym(1), 2)) # line 71: 
>>   $(Expr(:call1, :hh!, :(ar::Array{Float64,1}), :(i::Int64))) 
>>   3: 
>>   unless $(Expr(:call1, :(top(!)), :($(Expr(:call1, :(top(!)), 
>> :($(Expr(:call1, :(top(done)), GenSym(0), :(#s1::Int64) goto 2 
>>
>> For 
>> function g(fn!,ar) 
>> a = 0 
>> for i=1:n 
>> fn!(ar, i) 
>> end 
>> a 
>> end 
>>
>> the loop gives: 
>>   2: 
>>   GenSym(1) = $(Expr(:call1, :(top(next)), GenSym(0), :(#s1::Int64))) 
>>   i = $(Expr(:call1, :(top(getfield)), GenSym(1), 1)) 
>>   #s1 = $(Expr(:call1, :(top(getfield)), GenSym(1), 2)) # line 78: 
>>   (fn!::F)(ar::Array{Float64,1},i::Int64)::Any 
>>
>
> Wouldn't it (or fn!) need to allocate for this Any here ^
>
> IIUC its fn! that decides if it returns something, and even if the caller 
> doesn't need it, the return value still has to be stored somewhere.

I think this optimisation should work irrespective of what fn! returns
by the fact that the value is not used.  This and more seems to happen
in the first-order function.  Here is a version of the first-order
function which calls a function that returns an inferred Any:

const aa = Any[1]
hh_ret!(ar,i) = (ar[i] = hh(ar[i]); aa[1])

function f_ret(ar)
a = aa[1]
for i=1:n
a=hh_ret!(ar, i)
end
a
end
julia> @time f_ret(a);
elapsed time: 0.259893608 seconds (160 bytes allocated)

It's still fast and doesn't allocate, even though it uses the value!

> Maybe fn! does the allocating, but it still happens.

It's a different story if there is actual allocation in fn!.

>>   3: 
>>   unless $(Expr(:call1, :(top(!)), :($(Expr(:call1, :(top(!)), 
>> :($(Expr(:call1, :(top(done)), GenSym(0), :(#s1::Int64) goto 2 
>>
>> So, at least from my naive viewpoint, it looks like there is no need for 
>> an allocation in this case.  Or is there ever a case when the return 
>> value of fn! would be used? 
>>
>> I think this is quite different from the case when the return value of 
>> fn! is used because then, as long as Julia cannot do type inference on 
>> the value of fn!, it cannot know what the type is. 
>>
>
> Which is why its ::Any above I guess and my understanding is Any means 
> boxed and allocated on the heap?

Thanks for indulging me!


Re: [julia-users] Float32() or float32() ?

2015-04-24 Thread Sisyphuss
I mean is there a syntax change from version 0.3 to version 0.4?

Is the lowercase "float32()"-style conversion going to be deprecated?



On Friday, April 24, 2015 at 9:58:18 AM UTC+2, Tim Holy wrote:
>
> I'm not sure what your question is, but the documentation is correct in 
> both 
> cases. You can also use the Compat package, which allows you to write 
> x = @compat Float32(y) 
> even on julia 0.3. 
>
> --Tim 
>
> On Friday, April 24, 2015 12:35:01 AM Sisyphuss wrote: 
> > To convert a number to the type Float32, 
> > 
> > In 0.3 doc: float32() 
> > In 0.4 doc: Float32() 
>
>

[julia-users] Re: Segfault / Undefined reference

2015-04-24 Thread Archibald Pontier
Hi again,

I found a way to prevent my code from crashing, and it's really unexpected. 
I modified a global variable that only affects functions called by my 
emloop function, and somehow it works. I have no idea why this affected my 
code, though, and especially why it made it segfault instead of just 
returning an error. So now the serial version works; I'm still struggling 
with pmap and plenty of "result shape unspecified" errors, but that's ok.

All the best,
Archibald

On Friday, 24 April 2015 10:36:15 UTC+2, Archibald Pontier wrote:
>
> Hi everyone,
>
> Julia is (sometimes) segfaulting when running this function :
>
> function emloop(img, Sigma, mu, t, model, imgc, bg, FullTs)
>   EMiter = 1;
>   Check = typemax(Float64);
> GModels, Genes,
>   (x, y) = size(img);
>   D = zeros(x, y); E = zeros(x, y); B = zeros(x, y);
>
>   # And we start the EM loop
>   while EMiter <= maxEM && Check > ϵ
> # Calculate the betas & find the optimal transformation for the 
> current iteration
> (D, E, B) = fastBetas(model, [], mu, Sigma, img);
> nt = findTransform(model, B, Gray((bg.r + bg.g + bg.b)/3), FullTs);
>
> # Now calculate the optimal parameters for the appearance model
> OldMu = mu;  # For termination purposes
> (mu, Sigma) = calcTheta(img, D, E, B);
>
> # Apply the transformation found to the image
> img = RigidTransform(img, nt, bg, true, true);
>
> # Deal with the transformation
> Oldt = t;
> t += nt;
> t.tx %= x;
> t.ty %= y;
>
> # Termination criteria
> NormOldTT = sqrt((Oldt.tx - t.tx)^2 + (Oldt.ty - t.ty)^2 + (Oldt.θ - 
> t.θ)^2);
> CheckMu = norm(OldMu.D - mu.D) + norm(OldMu.E - mu.E) + norm(OldMu.B - 
> mu.B)
> Check = CheckMu + NormOldTT # Giving lots of weight to the 
> transformation
>
> EMiter += 1;
>   end
>
>   return emdata(D, E, B, img, θ(Sigma, mu), t);
> end
>
> Some context: img is an Image{RGB{Float64}}, Sigma and mu are custom 
> types :
> type Σ
>   D::Array{Float64, 2}
>   E::Array{Float64, 2}
>   B::Array{Float64, 2}
> end
>
> type μ
>   D::Array{Float64, 1}
>   E::Array{Float64, 1}
>   B::Array{Float64, 1}
> end
>
> t is another custom type to represent a rigid body transformation 
> (translation_x, translation_y, rotation_angle), bg is, for example, 
> RGB(0.0, 0.0, 0.0) (black), and FullTs is an array of transformations, 
> namely all the transformations we consider.
>
> It's weird for multiple reasons: first of all, this code was running fine 
> when not in a separate function returning a custom type (the custom type in 
> the return is to ease the access since I want to call this function for an 
> array of images with pmap and output all that which is not practical). 
> Furthermore, sometimes it crashes with a segfault, sometimes with an 
> undefined reference on this line, not always after the same EM iteration:
> CheckMu = norm(OldMu.D - mu.D) + norm(OldMu.E - mu.E) + norm(OldMu.B - 
> mu.B)
>
> Thanks in advance for the help,
> Archibald
>


Re: [julia-users] in-place fn in a higher order fn

2015-04-24 Thread elextr


On Friday, April 24, 2015 at 6:33:08 PM UTC+10, Mauro wrote:
>
> >> >> >> function f(fn!,ar) 
> >> >> >> for i=1:n 
> >> >> >> fn!(ar, i) # fn! updates ar[i] somehow, returns nothing 
> >> >> >> nothing# to make sure output of f is discarded 
> >> >> >> end 
> >> >> >> end 
> > 
> > 
> > I'm curious how you would see it optimised? IIUC Julia doesn't know fn! 
> at 
> > compile time, so it doesn't know if it returns something, or not, so it 
> has 
> > to allow for a return value even if its to throw it away immediately. 
>
> Well it seems Julia should know that nothing is used from fn!, without 
> knowing anything about fn!.  That is at least what @code_warntype 
> suggest (with julia --inline=no).  For 
>
> function f(ar) 
> for i=1:n 
> hh!(ar, i) 
> end 
> end 
>
> the loop gives: 
>
>   GenSym(1) = $(Expr(:call1, :(top(next)), GenSym(0), :(#s1::Int64))) 
>   i = $(Expr(:call1, :(top(getfield)), GenSym(1), 1)) 
>   #s1 = $(Expr(:call1, :(top(getfield)), GenSym(1), 2)) # line 71: 
>   $(Expr(:call1, :hh!, :(ar::Array{Float64,1}), :(i::Int64))) 
>   3: 
>   unless $(Expr(:call1, :(top(!)), :($(Expr(:call1, :(top(!)), 
> :($(Expr(:call1, :(top(done)), GenSym(0), :(#s1::Int64) goto 2 
>
> For 
> function g(fn!,ar) 
> a = 0 
> for i=1:n 
> fn!(ar, i) 
> end 
> a 
> end 
>
> the loop gives: 
>   2: 
>   GenSym(1) = $(Expr(:call1, :(top(next)), GenSym(0), :(#s1::Int64))) 
>   i = $(Expr(:call1, :(top(getfield)), GenSym(1), 1)) 
>   #s1 = $(Expr(:call1, :(top(getfield)), GenSym(1), 2)) # line 78: 
>   (fn!::F)(ar::Array{Float64,1},i::Int64)::Any 
>

Wouldn't it (or fn!) need to allocate for this Any here ^

IIUC its fn! that decides if it returns something, and even if the caller 
doesn't need it, the return value still has to be stored somewhere.  Maybe 
fn! does the allocating, but it still happens.
 

>   3: 
>   unless $(Expr(:call1, :(top(!)), :($(Expr(:call1, :(top(!)), 
> :($(Expr(:call1, :(top(done)), GenSym(0), :(#s1::Int64) goto 2 
>
> So, at least from my naive viewpoint, it looks like there is no need for 
> an allocation in this case.  Or is there ever a case when the return 
> value of fn! would be used? 
>
> I think this is quite different from the case when the return value of 
> fn! is used because then, as long as Julia cannot do type inference on 
> the value of fn!, it cannot know what the type is. 
>

Which is why its ::Any above I guess and my understanding is Any means 
boxed and allocated on the heap?


Re: [julia-users] code review: my first outer constructor :)

2015-04-24 Thread Tomas Lycken


It might be worthwhile to point out *why* the last example acts in a 
seemingly unintuitive way: all assignments return their right-hand sides, 
so both n = 5.0 and n::Int = 5.0 return 5.0. The latter will assign 5 to 
n, but it is not n that is returned in either case.

I very rarely find a need for the ::T syntax other than to provide methods 
to multiple dispatch; if I need to make sure that I assign n an integer, I 
use n = convert(Int, 5.0) instead. It’s slightly more verbose, but its 
return value will also be an integer (or throw an InexactError, in which 
case I need to decide *how* to convert to integer and use a function from 
the round/ceil/floor family instead…).
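
Concretely, a small sketch of the convert-based approach described above:

```julia
# convert succeeds only when the value is exactly representable.
n = convert(Int, 4.0)      # 4.0 is exact → n is the Int 4

# A non-exact value throws InexactError instead of silently rounding.
err = try
    convert(Int, 4.5)
    nothing
catch e
    e
end

# So the caller must choose a rounding rule explicitly:
m = floor(Int, 4.5)        # → 4
r = round(Int, 4.5)        # banker's rounding → 4
```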

// T

On Friday, April 24, 2015 at 10:40:40 AM UTC+2, Mauro wrote:

> (3) Cool given knowledge about P. However the ::T syntax is not for 
> > declarations. Only annotations/assertions. If I try to run 
> > 
> > P = [1,2,3,4] 
> > N::Int = sqrt(length(P)) 
> > 
> > I get a variable not defined error (julia 0.3.7). If I write instead 
> > 
> > P = [1,2,3,4] 
> > N = sqrt(length(P))::Int 
> > 
> > the assertion fails, as sqrt returns a floating point type. 
>
> No, it works as Andrei coded it.  This is because :: behaves differently 
> depending on whether it is in a value or variable context and whether 
> it's at the REPL or in a local scope: 
> http://docs.julialang.org/en/latest/manual/types/#type-declarations 
>
> julia> n::Int = 5.0 
> ERROR: UndefVarError: n not defined 
>  in eval at no file 
>
> julia> f() = (n::Int = 5.0; n) 
> f (generic function with 1 method) 
>
> julia> f() 
> 5 
>
> with this gotcha: 
>
> julia> f() = n::Int = 5.0 
> f (generic function with 1 method) 
>
> julia> f() 
> 5.0 
>
>
>
> > (4) Got it, I think. As I can't quite see why the particular branching 
> > sequence is the right one though (could there, in principle, be fewer 
> > elseif statements?) I would again suggest that perhaps a pattern 
> matching 
> > syntax could help clarify whats going on here. 
> > 
> > Yes I was referring to the gauge parameter. I can't quite remember 
> > specifically what my offhand comment was about now. Although I think the 
> > problem is still interesting! 
> > 
> > Anyway, my reply only has a bit of substance this time, but hopefully it 
> > can help. 
> > 
> > On Monday, April 13, 2015 at 9:23:42 AM UTC-4, Andrei Berceanu wrote: 
> >> 
> >> Hi Gabriel, thanks a lot for your reply! 
> >> 
> >> (1) I use the WaveFunction type inside the Spectrum type, which as you 
> can 
> >> see is a collection of wavefunctions. For a typical usage case, have a 
> look 
> >> at 
> >> 
> >> 
> http://nbviewer.ipython.org/github/berceanu/topo-photon/blob/master/anc/exploratory.ipynb
>  
> >> I am not quite sure what you mean with defining a methods for the "int" 
> >> attribute. 
> >> (2) As you can see, the gauge is an attribute of the Spectrum type, 
> which 
> >> is what I use in practice so I didn't see much point in storing it 
> inside 
> >> every wavefunction. Do you have a suggestion for a better 
> implementation? 
> >> In fact there are only these 2 gauge choices, landau and symmetric. 
> >> (3) P should be a Vector of length N^2, and I thought that declaring N 
> to 
> >> be an int means implicit conversion as well - is that not so? 
> >> (4) genspmat generates the (sparse) matrix of the Hamiltonian, so 
> >> countnonzeros() simply counts beforehand how many nonzero elements 
> there 
> >> will be in this sparse matrix. If you think of an NxN 2D lattice, 
> >> countnonzeros() simply counts the number of neighbours of each site (4 
> for 
> >> a center site, 3 for an edge site and 2 for a corner site). 
> >> 
> >> What do you mean by irrelevant in this case? Are you refering to the 
> gauge 
> >> parameter? 
> >> 
> >> On Sunday, April 12, 2015 at 11:28:48 PM UTC+2, Gabriel Mitchell wrote: 
> >>> 
> >>> Hi Andrei. I am not really Julia expert, but I do have a couple of 
> high 
> >>> level questions about your code, which might help anyone that feels 
> >>> inclined to contribute a review. 
> >>> 
> >>> (1) Why have you made the WaveFunction type in the first place? In 
> >>> particular, is there any reason that you just don't use alias 
> Vector{Complex{Float64}} 
> >>> and define a methods for the "int" attribute? 
> >>> (2) the outer construction seems to allow for the option for different 
> >>> gauges (which, in my unsophisticated mindset, I think of as 
> alternative 
> >>> parameterizations). Since different gauge choices in some sense imply 
> >>> different semantics for the values in Psi (although presumably not 
> other 
> >>> function invariant to the gauge choice), it would seem that the user 
> (or a 
> >>> function naive to the gauge choice) would want a means to inspect this 
> >>> change in the object. In other words why is the gauge not an attribute 
> of 
> >>> the WaveFunction object, assuming you actually want this type? A 
> related 
> >>> question would be how many different gauge choices does one expect the 
> >>> user to want to use. Just these two? These two plus a few more? All of 
> >>> them?

Re: [julia-users] code review: my first outer constructor :)

2015-04-24 Thread Mauro
> (3) Cool given knowledge about P. However the ::T syntax is not for 
> declarations. Only annotations/assertions. If I try to run
>
> P = [1,2,3,4]
> N::Int = sqrt(length(P))
>
> I get a variable not defined error (julia 0.3.7). If I write instead
>
> P = [1,2,3,4]
> N = sqrt(length(P))::Int
>
> the assertion fails, as sqrt returns a floating point type. 

No, it works as Andrei coded it.  This is because :: behaves differently
depending on whether it is in a value or variable context and whether
it's at the REPL or in a local scope:
http://docs.julialang.org/en/latest/manual/types/#type-declarations

julia> n::Int = 5.0
ERROR: UndefVarError: n not defined
 in eval at no file

julia> f() = (n::Int = 5.0; n)
f (generic function with 1 method)

julia> f()
5

with this gotcha:

julia> f() = n::Int = 5.0
f (generic function with 1 method)

julia> f()
5.0



> (4) Got it, I think. As I can't quite see why the particular branching 
> sequence is the right one though (could there, in principle, be fewer 
> elseif statements?) I would again suggest that perhaps a pattern matching 
> syntax could help clarify whats going on here.
>
> Yes I was referring to the gauge parameter. I can't quite remember 
> specifically what my offhand comment was about now. Although I think the 
> problem is still interesting!
>
> Anyway, my reply only has a bit of substance this time, but hopefully it 
> can help.
>
> On Monday, April 13, 2015 at 9:23:42 AM UTC-4, Andrei Berceanu wrote:
>>
>> Hi Gabriel, thanks a lot for your reply!
>>
>> (1) I use the WaveFunction type inside the Spectrum type, which as you can 
>> see is a collection of wavefunctions. For a typical usage case, have a look 
>> at
>>
>> http://nbviewer.ipython.org/github/berceanu/topo-photon/blob/master/anc/exploratory.ipynb
>> I am not quite sure what you mean with defining a methods for the "int" 
>> attribute.
>> (2) As you can see, the gauge is an attribute of the Spectrum type, which 
>> is what I use in practice so I didn't see much point in storing it inside 
>> every wavefunction. Do you have a suggestion for a better implementation? 
>> In fact there are only these 2 gauge choices, landau and symmetric.
>> (3) P should be a Vector of length N^2, and I thought that declaring N to 
>> be an int means implicit conversion as well - is that not so?
>> (4) genspmat generates the (sparse) matrix of the Hamiltonian, so 
>> countnonzeros() simply counts beforehand how many nonzero elements there 
>> will be in this sparse matrix. If you think of an NxN 2D lattice, 
>> countnonzeros() simply counts the number of neighbours of each site (4 for 
>> a center site, 3 for an edge site and 2 for a corner site). 
>>
>> What do you mean by irrelevant in this case? Are you refering to the gauge 
>> parameter?
>>
>> On Sunday, April 12, 2015 at 11:28:48 PM UTC+2, Gabriel Mitchell wrote:
>>>
>>> Hi Andrei. I am not really Julia expert, but I do have a couple of high 
>>> level questions about your code, which might help anyone that feels 
>>> inclined to contribute a review. 
>>>
>>> (1) Why have you made the WaveFunction type in the first place? In 
>>> particular, is there any reason that you just don't use alias 
>>> Vector{Complex{Float64}} 
>>> and define a methods for the "int" attribute?
>>> (2) the outer construction seems to allow for the option for different 
>>> gauges (which, in my unsophisticated mindset, I think of as alternative 
>>> parameterizations). Since different gauge choices in some sense imply 
>>> different semantics for the values in Psi (although presumably not other 
>>> function invariant to the gauge choice), it would seem that the user (or a 
>>> function naive to the gauge choice) would want a means to inspect this 
>>> change in the object. In other words why is the gauge not an attribute of 
>>> the WaveFunction object, assuming you actually want this type? A related 
>>> question would be how many different gauge choices does one expect the 
>>> user to want to use. Just these two? These two plus a few more? All of them?
>>> (3) Line 17 asserts that N is an integer, but sqrt(length(P)) could be 
>>> non-integral.
>>> (4) I don't really understand what is going on with countnonzeros, but 
>>> maybe a pattern matching syntax ala Match.jl could help to make a more 
>>> declarative version of this function?
>>>
>>> As a side note, I think the problem of how to describe and take advantage 
>>> of irrelevant degrees of freedom in numerical computations is pretty 
>>> interesting and certainly has applications in all kids of fields, so it 
>>> would be cool if you had some ideas about how to systematically approach 
>>> this problem.
>>>  
>>>
>>> On Sunday, April 12, 2015 at 10:06:10 PM UTC+2, Andrei Berceanu wrote:

 Hi Mauro, I realised after posting this that I should have been much 
 more specific. 
 I apologise for that!

 Anyway, thanks for you reply. Not sure what you mean by OO programming, 
 as I thought Julia u

[julia-users] Segfault / Undefined reference

2015-04-24 Thread Archibald Pontier
Hi everyone,

Julia is (sometimes) segfaulting when running this function :

function emloop(img, Sigma, mu, t, model, imgc, bg, FullTs)
  EMiter = 1;
  Check = typemax(Float64);
GModels, Genes,
  (x, y) = size(img);
  D = zeros(x, y); E = zeros(x, y); B = zeros(x, y);

  # And we start the EM loop
  while EMiter <= maxEM && Check > ϵ
# Calculate the betas & find the optimal transformation for the current 
iteration
(D, E, B) = fastBetas(model, [], mu, Sigma, img);
nt = findTransform(model, B, Gray((bg.r + bg.g + bg.b)/3), FullTs);

# Now calculate the optimal parameters for the appearance model
OldMu = mu;  # For termination purposes
(mu, Sigma) = calcTheta(img, D, E, B);

# Apply the transformation found to the image
img = RigidTransform(img, nt, bg, true, true);

# Deal with the transformation
Oldt = t;
t += nt;
t.tx %= x;
t.ty %= y;

# Termination criteria
NormOldTT = sqrt((Oldt.tx - t.tx)^2 + (Oldt.ty - t.ty)^2 + (Oldt.θ - 
t.θ)^2);
CheckMu = norm(OldMu.D - mu.D) + norm(OldMu.E - mu.E) + norm(OldMu.B - 
mu.B)
Check = CheckMu + NormOldTT # Giving lots of weight to the 
transformation

EMiter += 1;
  end

  return emdata(D, E, B, img, θ(Sigma, mu), t);
end

Some context: img is an Image{RGB{Float64}}, Sigma and mu are custom types 
:
type Σ
  D::Array{Float64, 2}
  E::Array{Float64, 2}
  B::Array{Float64, 2}
end

type μ
  D::Array{Float64, 1}
  E::Array{Float64, 1}
  B::Array{Float64, 1}
end

t is another custom type to represent a rigid body transformation 
(translation_x, translation_y, rotation_angle), bg is, for example, 
RGB(0.0, 0.0, 0.0) (black), and FullTs is an array of transformations, 
namely all the transformations we consider.

It's weird for multiple reasons: first of all, this code was running fine 
when not in a separate function returning a custom type (the custom type in 
the return is to ease the access since I want to call this function for an 
array of images with pmap and output all that which is not practical). 
Furthermore, sometimes it crashes with a segfault, sometimes with an 
undefined reference on this line, not always after the same EM iteration:
CheckMu = norm(OldMu.D - mu.D) + norm(OldMu.E - mu.E) + norm(OldMu.B - mu.B)

Thanks in advance for the help,
Archibald


Re: [julia-users] in-place fn in a higher order fn

2015-04-24 Thread Mauro
>> >> >> function f(fn!,ar) 
>> >> >> for i=1:n 
>> >> >> fn!(ar, i) # fn! updates ar[i] somehow, returns nothing 
>> >> >> nothing# to make sure output of f is discarded 
>> >> >> end 
>> >> >> end 
>
>
> I'm curious how you would see it optimised? IIUC Julia doesn't know fn! at 
> compile time, so it doesn't know if it returns something, or not, so it has 
> to allow for a return value even if its to throw it away immediately.

Well it seems Julia should know that nothing is used from fn!, without
knowing anything about fn!.  That is at least what @code_warntype
suggest (with julia --inline=no).  For

function f(ar)
for i=1:n
hh!(ar, i)
end
end

the loop gives:

  GenSym(1) = $(Expr(:call1, :(top(next)), GenSym(0), :(#s1::Int64)))
  i = $(Expr(:call1, :(top(getfield)), GenSym(1), 1))
  #s1 = $(Expr(:call1, :(top(getfield)), GenSym(1), 2)) # line 71:
  $(Expr(:call1, :hh!, :(ar::Array{Float64,1}), :(i::Int64)))
  3: 
  unless $(Expr(:call1, :(top(!)), :($(Expr(:call1, :(top(!)), 
:($(Expr(:call1, :(top(done)), GenSym(0), :(#s1::Int64) goto 2

For
function g(fn!,ar)
a = 0
for i=1:n
fn!(ar, i)
end
a
end

the loop gives:
  2: 
  GenSym(1) = $(Expr(:call1, :(top(next)), GenSym(0), :(#s1::Int64)))
  i = $(Expr(:call1, :(top(getfield)), GenSym(1), 1))
  #s1 = $(Expr(:call1, :(top(getfield)), GenSym(1), 2)) # line 78:
  (fn!::F)(ar::Array{Float64,1},i::Int64)::Any
  3: 
  unless $(Expr(:call1, :(top(!)), :($(Expr(:call1, :(top(!)), 
:($(Expr(:call1, :(top(done)), GenSym(0), :(#s1::Int64) goto 2

So, at least from my naive viewpoint, it looks like there is no need for
an allocation in this case.  Or is there ever a case when the return
value of fn! would be used?

I think this is quite different from the case when the return value of
fn! is used because then, as long as Julia cannot do type inference on
the value of fn!, it cannot know what the type is.


Re: [julia-users] Re: Ode solver thought, numerical recipes

2015-04-24 Thread François Fayard
I'll make sure I won't use a single line of their code and only use the papers 
they refer to.

Re: [julia-users] Re: Ode solver thought, numerical recipes

2015-04-24 Thread François Fayard
Hi Tim. I understand your point. The movie "Romeo+Juliet" uses the same lines 
as the original, so it's not the filmmakers' own story. But if I write a 
modernization of the story, with different people and different places but 
still the same kind of background, it becomes mine. Nobody has a copyright on 
"a story about two young people who can't live their romance".

The Gragg-Bulirsch-Stoer algorithm has been described in books long before NR, 
and they don't own it. Moreover, I am allowed to study their code and make a 
modernization out of it.

Re: [julia-users] Float32() or float32() ?

2015-04-24 Thread Tim Holy
I'm not sure what your question is, but the documentation is correct in both 
cases. You can also use the Compat package, which allows you to write
x = @compat Float32(y)
even on julia 0.3.

--Tim

On Friday, April 24, 2015 12:35:01 AM Sisyphuss wrote:
> To convert a number to the type Float32,
> 
> In 0.3 doc: float32()
> In 0.4 doc: Float32()
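For reference, a minimal sketch of the two spellings on either side of the 0.3/0.4 boundary (the lowercase form existed only on 0.3 and was removed later; the Compat lines assume the Compat package is installed):

```julia
# Julia 0.4 and later: the type name itself acts as the conversion function.
x = Float32(2.5)
typeof(x)               # Float32

# Julia 0.3 only (removed in later versions):
# x = float32(2.5)

# With the Compat package, the 0.4 spelling also works on 0.3:
# using Compat
# x = @compat Float32(2.5)
```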



Re: [julia-users] Re: Ode solver thought, numerical recipes

2015-04-24 Thread Tim Holy
If I write a book in English that is a modernization of Romeo & Juliet (a 
"known algorithm," er, story), and you translate it into French, it's still 
not "your" book: you can't distribute it without paying me money :-).

--Tim

On Thursday, April 23, 2015 03:03:29 PM François Fayard wrote:
> I'll have to reimplement the algorithm using my own "methods". Numerical 
> Recipes is just a collection of implementations of known algorithms. But it's 
> true that they are not fully documented, and they use many tricks that make 
> the code less than clear. I'll rework the code.
> 
> Any advice on the "Julia" questions ?



[julia-users] Re: Macro with varargs

2015-04-24 Thread Kuba Roth
Thanks, 
So in order to create the quote/end block you use:
  newxpr = Expr(:block)
And the newxpr.args is the array field of the QuoteNode which stores each 
expression?

Edit:
I can see now that this is actually explained a bit in the advanced macro 
example in the documentation.



On Thursday, April 23, 2015 at 1:10:15 PM UTC-7, Patrick O'Leary wrote:
>
> On Thursday, April 23, 2015 at 2:36:45 PM UTC-5, Kuba Roth wrote:
>>
>> This is my first time writing macros in Julia. I've read the related docs 
>> but could not find an example that works with an arbitrary number of 
>> arguments. 
>> So in my example below, args... works correctly with string literals, 
>> but for passed variables it returns their names and not their values.
>
>
> Here's the result of the last thing you called (note that I don't even 
> have testB and testC defined!)
>
> julia> macroexpand(:(@echo testB testC))
> :(for #6#x = (:testB,:testC) # line 3:
> print(#6#x," ")
> end)
>
> What ends up in `args` is the argument tuple to the macro. Typically, you 
> wouldn't process that in the final output--otherwise you could just use a 
> function! Instead, you'd splice each argument individually (`$(args[1])`, 
> `$(args[2])`, etc.) using a loop in the macro body, with each element of 
> the loop emitting more code, then gluing the pieces together at the end.
>
> Style notes: Typically, no space between function/macro name and formal 
> arguments list. Multiline expressions are easier to read in `quote`/`end` 
> blocks.
>
> Anyways, here's one way to do sort of what you want in a way that requires 
> a macro (though I still wouldn't use one for this! Didactic purposes only!):
>
> macro unrolled_echo(args...)
>     newxpr = Expr(:block) # empty block to hold multiple statements
>     # the arguments to the :block node are a list of Exprs
>     append!(newxpr.args, [:(print($arg, " ")) for arg in args])
>     newxpr # return the constructed expression
> end
>
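Here is a sketch of how that unrolling macro runs on a recent Julia. The `esc` call is an addition not in the quoted code: modern macro hygiene requires it so that each spliced name resolves to the caller's variables:

```julia
macro unrolled_echo(args...)
    newxpr = Expr(:block)    # empty block to hold multiple statements
    # Splice each macro argument into its own print call;
    # esc() makes the name resolve in the caller's scope.
    append!(newxpr.args, [:(print($(esc(arg)), " ")) for arg in args])
    newxpr                   # return the constructed expression
end

testB, testC = "B", "C"
@unrolled_echo testB testC   # prints: B C
```

Unlike the original @echo, the expansion here contains no loop at all: it is a plain :block holding one print call per argument, which is what "unrolled" means.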


[julia-users] Float32() or float32() ?

2015-04-24 Thread Sisyphuss
To convert a number to the type Float32,

In 0.3 doc: float32()
In 0.4 doc: Float32()