[julia-users] Re: Need help/direction on how to optimize some Julia code

2015-04-23 Thread Seth
The use of Requests.jl makes this very hard to benchmark accurately since 
it introduces (non-measurable) dependencies on network resources.

If you @profile the function, can you tell where it's spending most of its 
time?
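A minimal sketch of what that profiling looks like (the `work` function below is just a hypothetical stand-in for the poster's `processOneFile`; on Julia 0.3 `@profile` was built in, while on modern Julia it lives in the `Profile` stdlib):

```julia
using Profile   # stdlib on modern Julia; built in on 0.3

# Stand-in workload (hypothetical; replace with the real processOneFile call).
function work()
    s = 0.0
    for i in 1:10^6
        s += sin(i)
    end
    return s
end

work()            # run once first so compilation time is not profiled
@profile work()
Profile.print()   # tree view of where the samples landed
```

The first untimed call matters: otherwise the JIT compilation of `work` dominates the sample counts.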

On Tuesday, April 21, 2015 at 2:12:52 PM UTC-7, Harry B wrote:

 Hello,

 I had the need to take a text file with several million lines, construct a 
 URL with parameters picked from the tab-delimited file, and fire them one 
 after the other. After I read about Julia, I decided to try this in Julia. 
 However, my initial implementation turned out to be slow and I was getting 
 close to my deadline. I then set the Julia implementation aside and wrote 
 the same thing in Go, my other favorite language. The Go version is at least 
 twice as fast as the Julia version. Now that the task/deadline is over, I am 
 coming back to the Julia version to see what I did wrong.

 The Go and Julia versions are not written alike. In Go, I have just one main 
 thread reading a file and 5 goroutines waiting on a channel; one of 
 them will get the 'line/job', fire off the URL, wait for a response, 
 parse the JSON, look for an id in a specific place, and go back to waiting 
 for more items from the channel. 

 Julia code is very similar to the one discussed in the thread quoted 
 below. I invoke Julia with -p 5 and then have *each* process open the file 
 and read all lines. However each process is only processing 1/5th of the 
 lines and skipping others. It is a slight modification of what was 
 discussed in this thread 
 https://groups.google.com/d/msg/julia-users/Kr8vGwdXcJA/8ynOghlYaGgJ
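The "each process reads the whole file but handles only every 5th line" scheme described above can be sketched as follows (names here are illustrative, not the gist's actual code; the real version would fire the HTTP request where the sketch collects lines):

```julia
# Each of np workers reads the whole file but handles only every np-th
# line, offset by its worker index myidx in 1:np (illustrative sketch).
function process_shard(filename, myidx, np)
    handled = String[]
    for (lineno, line) in enumerate(eachline(filename))
        if (lineno - 1) % np == myidx - 1
            push!(handled, line)   # the real code fires the URL request here
        end
    end
    return handled
end
```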

 Julia code (no server URL or source for that though ) : 
 https://github.com/harikb/scratchpad1/tree/master/julia2
 Server could be anything that returns a static JSON.

 Considering that the files will fit entirely in the filesystem cache, that I am 
 running this on a fairly large system (procinfo says 24 cores, 100G RAM, 
 ~50G free even after subtracting cache), and that the input file is only 875K, I 
 should ideally be able to read the files several times in any programming 
 language and not skip a beat. wc -l on the file takes only 0m0.002s. Any 
 log/output is written to a fusion-io based flash disk. All fairly high end.

 https://github.com/harikb/scratchpad1/tree/master/julia2

 At this point, considering the machine is reasonably good, the only 
 bottleneck should be the time the URL firing takes (it is a GET request, but 
 the other side has some processing to do) or the subsequent JSON parsing.

 Where do I go from here? (a) How do I find out whether HTTP connections are 
 being re-used by the underlying library? I am using this library: 
 https://github.com/JuliaWeb/Requests.jl
 If they are not, that could explain the difference. (b) How do I profile this 
 code? I am using Julia 0.3.7 (since Requests.jl does not work with the 0.4 nightly).

 Any help is appreciated.
 Thanks
 --
 Harry



Re: [julia-users] in-place fn in a higher order fn

2015-04-23 Thread Mauro
Thanks!  In that case, I'll file an issue then to get this noted.  Also,
I think there is no (general) issue on the bad performance of higher
order functions.  Should I file that too?

On Thu, 2015-04-23 at 15:52, Jameson Nash vtjn...@gmail.com wrote:
 The short answer is that there is a certain set of optimizations that have
 been implemented in Julia, but still a considerable set that has not been
 implemented. This falls into the category of optimizations that have not
 been implemented. Pull requests are always welcome (although I do not
 recommend this one as a good beginner / up-for-grabs issue).

 On Thu, Apr 23, 2015 at 9:18 AM Mauro mauro...@runbox.com wrote:

 It is well known that Julia struggles with type inference in higher order
 functions.  This usually leads to slow code and memory allocations.
 There are a few hacks to work around this.  Anyway, the question I have
 is: why can't Julia do better with in-place functions?

 In short, a higher-order function like this:

 function f(fn!,ar)
 for i=1:n
 fn!(ar, i) # fn! updates ar[i] somehow, returns nothing
 nothing# to make sure output of f is discarded
 end
 end

 has almost as bad a performance (runtime and allocation-wise) as

 function g(fn,ar)
 for i=1:n
 ar[i] = fn(ar[i])
 end
 end
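For reference, a self-contained version of the pair above that actually runs (a sketch: the snippet's undefined `n` is replaced by `length(ar)`, and a concrete `fn!` is supplied, which is not in the original gist):

```julia
function f(fn!, ar)
    for i = 1:length(ar)     # the snippet's `n`, made explicit
        fn!(ar, i)           # fn! updates ar[i] in place, returns nothing
    end
    nothing                  # make sure f's output is discarded
end

function g(fn, ar)
    for i = 1:length(ar)
        ar[i] = fn(ar[i])
    end
end

sq!(a, i) = (a[i] = a[i]^2; nothing)   # example in-place fn!

ar = [1.0, 2.0, 3.0]
f(sq!, ar)        # ar is now [1.0, 4.0, 9.0]
```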

 An in-depth, ready-to-run example is here:
 https://gist.github.com/mauro3/f17da10247b0bad96f1a
 Including output of @code_warntype.

 So, why is Julia allocating memory when running f?  Nothing of f gets
 assigned to anything.

 Would this be something which is fixable more easily than the whole of
 the higher-order performance issues?  If so, is there an issue for this?

 Having good in-place higher order functions would go a long way with
 numerical computations.  Thanks!




[julia-users] Re: Jiahao's talk on Free probability, random matrices and disorder in organic semiconductors

2015-04-23 Thread Luis Benet
Thanks for sharing! Great talk


Re: [julia-users] in-place fn in a higher order fn

2015-04-23 Thread Mauro
 It's part of #3440, the compiler optimization metabug: function-valued 
 argument inlining

 https://github.com/JuliaLang/julia/issues/3440

Thanks, and yes, I'm aware of that one.  For me it's a bit hard to tell:
will function-valued argument inlining solve all the higher order
function problems?  And isn't the problem described below quite a bit
easier to solve?

 On Thursday, April 23, 2015 at 9:34:48 AM UTC-5, Mauro wrote:

 Thanks!  In that case, I'll file an issue then to get this noted.  Also, 
 I think there is no (general) issue on the bad performance of higher 
 order functions.  Should I file that too? 

 On Thu, 2015-04-23 at 15:52, Jameson Nash vtjn...@gmail.com wrote: 
  The short answer is that there is a certain set of optimizations that have 
  been implemented in Julia, but still a considerable set that has not been 
  implemented. This falls into the category of optimizations that have not 
  been implemented. Pull requests are always welcome (although I do not 
  recommend this one as a good beginner / up-for-grabs issue). 
  
  On Thu, Apr 23, 2015 at 9:18 AM Mauro mauro...@runbox.com wrote: 
  
  It is well known that Julia struggles with type inference in higher order 
  functions.  This usually leads to slow code and memory allocations. 
  There are a few hacks to work around this.  Anyway, the question I have 
  is: Why can't Julia do better with in-place functions? 
  
  In short, a higher-order function like this: 
  
  function f(fn!,ar) 
  for i=1:n 
  fn!(ar, i) # fn! updates ar[i] somehow, returns nothing 
  nothing# to make sure output of f is discarded 
  end 
  end 
  
  has almost as bad a performance (runtime and allocation-wise) as 
  
  function g(fn,ar) 
  for i=1:n 
  ar[i] = fn(ar[i]) 
  end 
  end 
  
  An in-depth, ready-to-run example is here: 
  https://gist.github.com/mauro3/f17da10247b0bad96f1a 
  Including output of @code_warntype. 
  
  So, why is Julia allocating memory when running f?  Nothing of f gets 
  assigned to anything. 
  
  Would this be something which is fixable more easily than the whole of 
  the higher-order performance issues?  If so, is there an issue for this? 
  
  Having good in-place higher order functions would go a long way with 
  numerical computations.  Thanks! 
  





Re: [julia-users] in-place fn in a higher order fn

2015-04-23 Thread Patrick O'Leary
It's part of #3440, the compiler optimization metabug: function-valued 
argument inlining

https://github.com/JuliaLang/julia/issues/3440

On Thursday, April 23, 2015 at 9:34:48 AM UTC-5, Mauro wrote:

 Thanks!  In that case, I'll file an issue then to get this noted.  Also, 
 I think there is no (general) issue on the bad performance of higher 
 order functions.  Should I file that too? 

 On Thu, 2015-04-23 at 15:52, Jameson Nash vtjn...@gmail.com wrote: 
  The short answer is that there is a certain set of optimizations that have 
  been implemented in Julia, but still a considerable set that has not been 
  implemented. This falls into the category of optimizations that have not 
  been implemented. Pull requests are always welcome (although I do not 
  recommend this one as a good beginner / up-for-grabs issue). 
  
  On Thu, Apr 23, 2015 at 9:18 AM Mauro mauro...@runbox.com wrote: 
  
  It is well known that Julia struggles with type inference in higher order 
  functions.  This usually leads to slow code and memory allocations. 
  There are a few hacks to work around this.  Anyway, the question I have 
  is: Why can't Julia do better with in-place functions? 
  
  In short, a higher-order function like this: 
  
  function f(fn!,ar) 
  for i=1:n 
  fn!(ar, i) # fn! updates ar[i] somehow, returns nothing 
  nothing# to make sure output of f is discarded 
  end 
  end 
  
  has almost as bad a performance (runtime and allocation-wise) as 
  
  function g(fn,ar) 
  for i=1:n 
  ar[i] = fn(ar[i]) 
  end 
  end 
  
  An in-depth, ready-to-run example is here: 
  https://gist.github.com/mauro3/f17da10247b0bad96f1a 
  Including output of @code_warntype. 
  
  So, why is Julia allocating memory when running f?  Nothing of f gets 
  assigned to anything. 
  
  Would this be something which is fixable more easily than the whole of 
  the higher-order performance issues?  If so, is there an issue for this? 
  
  Having good in-place higher order functions would go a long way with 
  numerical computations.  Thanks! 
  



[julia-users] Re: .jl to .exe

2015-04-23 Thread pauld11718
Will it be possible to cross-compile?
Do all the coding on Linux (64-bit) and generate the exe for Windows (32-bit)?


Re: [julia-users] Re: SAVE THE DATE: JuliaCon 2015, June 24 - 28

2015-04-23 Thread Mauro
I'd like to chime in: it would be great to hear about acceptance
and such soon, so we can book travel.

On Thu, 2015-04-23 at 19:35, Carlo di Celico carlo.dicel...@gmail.com wrote:
 Are we there yet? :D

 On Tuesday, February 24, 2015 at 11:50:10 AM UTC-5, Hunter Owens wrote:

 Yup! We're shooting for roughly ~March 15th, but we will post on 
 julia-users when registration is available. 

 On Monday, February 23, 2015 at 6:02:42 PM UTC-6, Carlo di Celico wrote:

 Looking forward to this! I'm guessing registration will open through 
 Juliacon.org sometime soon?

 On Tuesday, January 27, 2015 at 12:18:36 AM UTC-5, Jiahao Chen wrote:

 On behalf of the organizing committee, it is my pleasure to announce 
 that JuliaCon 2015 will be held at the MIT Stata Center during the dates 
 of 
 Wednesday, June 24 through Sunday, June 28.

 More details forthcoming. We look forward to seeing you in Cambridge, 
 Massachusetts six months from now, sans blustering bombogenetic blizzard.





[julia-users] Re: SAVE THE DATE: JuliaCon 2015, June 24 - 28

2015-04-23 Thread Carlo di Celico
Are we there yet? :D

On Tuesday, February 24, 2015 at 11:50:10 AM UTC-5, Hunter Owens wrote:

 Yup! We're shooting for roughly ~March 15th, but we will post on 
 julia-users when registration is available. 

 On Monday, February 23, 2015 at 6:02:42 PM UTC-6, Carlo di Celico wrote:

 Looking forward to this! I'm guessing registration will open through 
 Juliacon.org sometime soon?

 On Tuesday, January 27, 2015 at 12:18:36 AM UTC-5, Jiahao Chen wrote:

 On behalf of the organizing committee, it is my pleasure to announce 
 that JuliaCon 2015 will be held at the MIT Stata Center during the dates of 
 Wednesday, June 24 through Sunday, June 28.

 More details forthcoming. We look forward to seeing you in Cambridge, 
 Massachusetts six months from now, sans blustering bombogenetic blizzard.



Re: [julia-users] Re: Argument Placeholder

2015-04-23 Thread Stefan Karpinski
What's the risk?

On Thu, Apr 23, 2015 at 4:24 PM, Ryan Li lyzh...@gmail.com wrote:

 The underscore is actually used as a variable name, which could be
 dangerous in the code.
 I am not sure if this is worth the risk.


 On Wednesday, April 22, 2015 at 6:00:00 PM UTC-7, Darwin Darakananda wrote:

 You can use underscore _ as a placeholder, for example:

 _, _, x = (1, 2, 3)


 On Wednesday, April 22, 2015 at 5:09:16 PM UTC-7, Ryan Li wrote:

 Hi,
 I was wondering if there is any argument placeholder in Julia.
 In Matlab, if we do not require one return argument, we just replace it
 with ~, e.g., [~,R]=qr(A).
 Is there any similar behavior in Julia?
 Best,
 Yingzhou




[julia-users] Re: Ode solver thought, numerical recipes

2015-04-23 Thread Alex
Hi Francois,

Have you seen the ODE.jl package [1]? It doesn't allow you to do the things you 
described, but some of the GitHub issues touch on features like iterator versions 
of solvers or event detection. Maybe you'll find the discussions interesting. 
Comments, suggestions and/or PRs are always welcome :-)

Best,

Alex.


[1] https://github.com/JuliaLang/ODE.jl



On Thursday, 23 April 2015 21:47:27 UTC+2, François Fayard  wrote:
 Hi,
 
 I've been looking at Julia for a long time, hoping to get some time so I can 
 dive in. As I am already into Fortran, C++ and Mathematica, a free dynamic 
 language looks like something fun to learn. Julia seems much more fun to me 
 than Python.
 
 I would like to implement an ode solver in pure Julia so we can use any type 
 available to Julia. What strikes me with current implementations (in 
 Julia/ODE, Python/Scipy, Mathematica/NDSolve) is the API that I find useless 
 for a program I just wrote. Let's suppose you have a Cauchy problem: y(ta)=ya 
 and y'(t)=f(t,y(t)).
 
 - In the current implementation of Julia and Python/Scipy, you need to give 
 all the points t1,...,tn where you want to know y at the very beginning. If 
 you want to solve the differential equation on [ta,tb] and you want to find a 
 t such that y(t)=0, you are stuck because you'll most likely do a Newton 
 method and your evaluation point at iteration n will depend on the value of y 
 at iteration n-1. Mathematica solved this problem by returning an interpolation 
 object instead of some values. This is extremely useful, especially with 
 dense methods.
 - If you want to find a t such that y(t)=0 and you don't even know an upper 
 bound for t (such as tb) you are stuck, even with Mathematica.
 
 I propose the following solution. If you want to 
 
 solver = odesolver(f, ta, ya; method = Euler, deltat = 0.01)
 y = value(solver, t)
 
 solver = odesolver(t, ta, ya; method = DenseGraggBulirschStoer, relerror = 
 1.0e-6)
 y = value(solver, t)
 
 The type of solver would depend on the method. For an Euler method, it would 
 just contain f, ta, ya the last t evaluated and the last value y computed. 
 For the DenseGraggBulirschStoer method, it would contain more information: 
 all the values t and the corresponding y computed, and some polynomial 
 coefficients for evaluation in between them.
 
 I have a few questions to implement this idea. 
 - The interface makes the function odesolver not type stable, as the type of 
 solver depends upon the method. Will it prevent static compilation when it is 
 released? Is there any workaround? 
 - For the type of y, it could be anything living in a vector space. I am 
 thinking of Float64, Array{Float64, 1}, Array{Float64, 2}, maybe a 
 FixedSizedArray if such a thing exists. Is there a way to enforce that? Is 
 there a way to specify any type that lives in a vector space? 
 - For the function f, I am thinking of preallocating dy_dt and calling f!(dy_dt, 
 t, y), or calling dy_dt = f(t, y). The current ODE package uses the second method. 
 Doesn't it kill the performance with many heap allocations? Will 
 FixedSizeArray solve this problem? Is there a metaprogramming trick to 
 transform the second function into the first one? Also, the first function is 
 subject to aliasing between dy_dt and y. Is there a way to tell Julia there 
 is no aliasing? 
 - Some implementations might come from Numerical Recipes, even though they also 
 exist in other books (not as code, but as algorithms). I've seen people breaking 
 the copyright of Numerical Recipes easily. To what extent should the code be 
 modified so it does not break the license? Is an adaptation from one language 
 to another enough to say that the license does not apply? 
 
 Thanks


[julia-users] Macro with varargs

2015-04-23 Thread Kuba Roth
This is my first time writing macros in Julia. I've read the related docs but 
could not find an example which works with an arbitrary number of arguments.
So in my example below, args... works correctly with string literals, but for 
passed variables it returns their names and not their values. I'm pretty sure 
this is a newbie mistake, and I'd much appreciate any comments.
Thank you.

macro echo (args...)
 :(for x in $args
   print(x, " ")
   end)
end

julia> @echo "AAA" "VVV" 1
AAA VVV 1

julia> testB = "BBB"
"BBB"
julia> testC = "CCC"
"CCC"
julia> @echo testB testC
testB testC

[julia-users] Re: Macro with varargs

2015-04-23 Thread Patrick O'Leary
On Thursday, April 23, 2015 at 2:36:45 PM UTC-5, Kuba Roth wrote:

 This is my first  time writing macros in Julia. I've read related docs but 
 could not find an example which works with the arbitrary number of 
 arguments. 
 So in my example below the args... works correctly with string literals 
 but for the passed variables it returns their names and not the values.


Here's the result of the last thing you called (note that I don't even have 
testB and testC defined!):

julia> macroexpand(:(@echo testB testC))
:(for #6#x = (:testB,:testC) # line 3:
print(#6#x, " ")
end)

What ends up in `args` is the argument tuple to the macro. Typically, you 
wouldn't process that in the final output--otherwise you could just use a 
function! Instead, you'd splice each argument individually (`$(args[1])`, 
`$(args[2])`, etc.) using a loop in the macro body, with each element of 
the loop emitting more code, then gluing the pieces together at the end.

Style notes: Typically, no space between function/macro name and formal 
arguments list. Multiline expressions are easier to read in `quote`/`end` 
blocks.

Anyways, here's one way to do sort of what you want in a way that requires 
a macro (though I still wouldn't use one for this! Didactic purposes only!):

macro unrolled_echo(args...)
newxpr = Expr(:block) # empty block to hold multiple statements
append!(newxpr.args, [:(print($arg, " ")) for arg in args]) # the arguments to the :block node are a list of Exprs
newxpr # return the constructed expression
end


[julia-users] Ode solver thought, numerical recipes

2015-04-23 Thread François Fayard
Hi,

I've been looking at Julia for a long time, hoping to get some time so I can 
dive in. As I am already into Fortran, C++ and Mathematica, a free dynamic 
language looks like something fun to learn. Julia seems much more fun to me 
than Python.

I would like to implement an ode solver in pure Julia so we can use any type 
available to Julia. What strikes me with current implementations (in Julia/ODE, 
Python/Scipy, Mathematica/NDSolve) is the API that I find useless for a program 
I just wrote. Let's suppose you have a Cauchy problem: y(ta)=ya and 
y'(t)=f(t,y(t)).

- In the current implementation of Julia and Python/Scipy, you need to give all 
the points t1,...,tn where you want to know y at the very beginning. If you 
want to solve the differential equation on [ta,tb] and you want to find a t 
such that y(t)=0, you are stuck because you'll most likely do a Newton method 
and your evaluation point at iteration n will depend on the value of y at 
iteration n-1. Mathematica solved this problem by returning an interpolation 
object instead of some values. This is extremely useful, especially with dense 
methods.
- If you want to find a t such that y(t)=0 and you don't even know an upper 
bound for t (such as tb) you are stuck, even with Mathematica.

I propose the following solution. If you want to 

solver = odesolver(f, ta, ya; method = Euler, deltat = 0.01)
y = value(solver, t)

solver = odesolver(t, ta, ya; method = DenseGraggBulirschStoer, relerror = 
1.0e-6)
y = value(solver, t)

The type of solver would depend on the method. For an Euler method, it would 
just contain f, ta, ya the last t evaluated and the last value y computed. For 
the DenseGraggBulirschStoer method, it would contain more information: all the 
values t and the corresponding y computed, and some polynomial coefficients for 
evaluation in between them.
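A minimal sketch of the proposed interface for the Euler case, in modern Julia syntax (the names odesolver/value are the proposal's; the implementation is mine, a sketch rather than a definitive design):

```julia
abstract type AbstractOdeSolver end

# Lazy Euler solver: stores f, the last t reached, and the last y computed.
mutable struct EulerSolver{F} <: AbstractOdeSolver
    f::F
    t::Float64
    y::Float64
    dt::Float64
end

odesolver(f, ta, ya; dt = 0.01) = EulerSolver(f, float(ta), float(ya), dt)

# value advances the solver only as far as the requested t, so repeated
# calls with increasing t (e.g. from a Newton iteration) reuse prior work.
function value(s::EulerSolver, t)
    while s.t < t
        h = min(s.dt, t - s.t)
        s.y += h * s.f(s.t, s.y)
        s.t += h
    end
    return s.y
end
```

For y' = y with y(0) = 1 and dt = 0.001, value(solver, 1.0) comes out within about 0.3% of e.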

I have a few questions to implement this idea.
- The interface makes the function odesolver not type stable, as the type of 
solver depends upon the method. Will it prevent static compilation when it is 
released? Is there any workaround?
- For the type of y, it could be anything living in a vector space. I am 
thinking of Float64, Array{Float64, 1}, Array{Float64, 2}, maybe a 
FixedSizedArray if such a thing exists. Is there a way to enforce that? Is 
there a way to specify any type that lives in a vector space?
- For the function f, I am thinking of preallocating dy_dt and calling f!(dy_dt, 
t, y), or calling dy_dt = f(t, y). The current ODE package uses the second method. 
Doesn't it kill the performance with many heap allocations? Will FixedSizeArray 
solve this problem? Is there a metaprogramming trick to transform the second 
function into the first one? Also, the first function is subject to aliasing 
between dy_dt and y. Is there a way to tell Julia there is no aliasing?
- Some implementations might come from Numerical Recipes, even though they also 
exist in other books (not as code, but as algorithms). I've seen people breaking 
the copyright of Numerical Recipes easily. To what extent should the code be 
modified so it does not break the license? Is an adaptation from one language 
to another enough to say that the license does not apply?
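On the type-stability question in particular, the usual workaround is a "function barrier": pay the instability once at construction, and let the hot code specialize on the concrete solver type. A toy sketch (all names here are invented for illustration):

```julia
# Two toy solver types; which one is built depends on a runtime value,
# so `make` is type-unstable.
struct ToyEuler{F}; f::F; t::Float64; y::Float64; end
struct ToyRK{F};    f::F; t::Float64; y::Float64; end

step(s::ToyEuler) = s.y + 0.1 * s.f(s.t, s.y)
step(s::ToyRK)    = s.y   # placeholder

make(meth, f, t, y) = meth == :euler ? ToyEuler(f, t, y) : ToyRK(f, t, y)

# Function barrier: _run is compiled separately for each concrete solver
# type, so everything inside it is fully typed despite the unstable `make`.
_run(s) = step(s)
run_solver(meth, f, t, y) = _run(make(meth, f, t, y))
```

Only the single dynamic dispatch into `_run` remains unoptimized; the work inside is as fast as if the type had been known statically.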

Thanks


[julia-users] Re: Argument Placeholder

2015-04-23 Thread Ryan Li
The underscore is actually used as a variable name, which could be 
dangerous in the code.
I am not sure if this is worth the risk.

On Wednesday, April 22, 2015 at 6:00:00 PM UTC-7, Darwin Darakananda wrote:

 You can use underscore _ as a placeholder, for example:

 _, _, x = (1, 2, 3)


 On Wednesday, April 22, 2015 at 5:09:16 PM UTC-7, Ryan Li wrote:

 Hi,
 I was wondering if there is any argument placeholder in Julia.
 In Matlab, if we do not require one return argument, we just replace it 
 with ~, e.g., [~,R]=qr(A).
 Is there any similar behavior in Julia?
 Best,
 Yingzhou
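For the qr case specifically, the underscore pattern works as follows on Julia 1.0 and later, where qr returns a factorization object that destructures to Q and R (the 0.3-era qr in this thread returned a tuple directly, so the same line worked there too):

```julia
using LinearAlgebra

A = [2.0 1.0; 1.0 3.0]
_, R = qr(A)    # discard Q, keep the upper-triangular factor
```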



Re: [julia-users] Re: Ode solver thought, numerical recipes

2015-04-23 Thread François Fayard
I'll have to reimplement the algorithms using my own methods. Numerical 
Recipes is just an implementation of known algorithms. But it's true that they 
are not fully documented and use many tricks that make the code not that 
clear. I'll rework the code.

Any advice on the Julia questions ?

[julia-users] Re: Ode solver thought, numerical recipes

2015-04-23 Thread François Fayard
Hi Alex,

I've looked at ODE.jl, which is still a small package. I don't like the API at 
all, as it rules out way too many applications, as discussed in the issues: you 
can't get dense output and you can't stop at a given event. An ODE solver 
library really needs an object that contains its state, which you use when you 
ask for a value at a given point.

I would like to start from scratch in order to get it right.

[julia-users] Re: Ode solver thought, numerical recipes

2015-04-23 Thread François Fayard
Hi Alex,

I have those books, and Numerical Recipes just implements the algorithms they 
give. I'll go ahead and start to code. I am sure I'll get help when I have 
something that works.



[julia-users] Re: .jl to .exe

2015-04-23 Thread Tony Kelman
It is possible to cross-compile a Windows exe of Julia from Linux right 
now, so this could probably be made to work.


On Thursday, April 23, 2015 at 8:15:46 AM UTC-7, pauld11718 wrote:

 Will it be possible to cross-compile?
 Do all the coding on linux(64 bit) and generate the exe for windows(32 
 bit)?



Re: [julia-users] Re: Ode solver thought, numerical recipes

2015-04-23 Thread Tim Holy
If you use the Numerical Recipes implementation as the basis for your code, 
I'm not sure you can legally distribute your code to anyone else.

--Tim

On Thursday, April 23, 2015 02:46:55 PM François Fayard wrote:
 Hi Alex,
 
 I have those books and numerical recipes just implement the algorithm they
 give. I'll give and start to code. I am sure I'll get help when I have
 something that works.



[julia-users] Re: ANN: EmpiricalRisks, Regression, SGDOptim

2015-04-23 Thread Jutho
Regression.jl seems to have a sweet implementation of gradient-based 
optimization algorithms. How does this compare to the work in Optim.jl? 
Would it be useful to join these efforts?

Op donderdag 23 april 2015 11:12:58 UTC+2 schreef Dahua Lin:

 Hi, 

 I am happy to announce three packages related to empirical risk 
 minimization

 EmpiricalRisks https://github.com/lindahua/EmpiricalRisks.jl

 This Julia package provides a collection of predictors and loss functions, 
 as well as the efficient computation of gradients, mainly to support the 
 implementation of (regularized) empirical risk minimization methods.

 Predictors:

- linear prediction
- affine prediction
- multivariate linear prediction
- multivariate affine prediction

 Loss functions:

- squared loss
- absolute loss
- quantile loss
- huber loss
- hinge loss
- smoothed hinge loss
- logistic loss
- sum squared loss (for multivariate prediction)
- multinomial logistic loss

 Regularizers:

- squared L2 regularization
- L1 regularization
- elastic net (L1 + squared L2)
- evaluation of proximal operators, w.r.t. these regularizers.


 Regression https://github.com/lindahua/Regression.jl

 This package was dead before, and I revived it recently. It is based on 
 EmpiricalRisks, and provides methods for regression analysis (for moderate 
 size problems, i.e. the data can be loaded entirely to memory). It supports 
 the following problems:

- Linear regression
- Ridge regression
- LASSO
- Logistic regression
- Multinomial Logistic regression
- Problems with customized loss and regularizers

 It also provides a variety of solvers:

- Analytical solution (for linear & ridge regression)
- Gradient descent
- BFGS
- L-BFGS
- Proximal gradient descent (recommended for LASSO & sparse regression)
- Accelerated gradient descent (experimental)
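As a point of reference for the "Analytical solution" entry above: the closed form in question is w = (X'X + λI) \ (X'y). A sketch (the `ridge` helper is mine for illustration, not the package's API):

```julia
using LinearAlgebra

# Closed-form ridge estimate; λ = 0 reduces to ordinary least squares.
ridge(X, y, λ) = (X'X + λ * I) \ (X'y)

X = [1.0 0.0; 0.0 1.0; 1.0 1.0]
y = [1.0, 2.0, 4.0]
w_ols   = ridge(X, y, 0.0)
w_ridge = ridge(X, y, 10.0)   # shrunk toward zero
```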


 SGDOptim https://github.com/lindahua/SGDOptim.jl

 I announced this a couple of weeks ago. Now this package has been fundamentally 
 refactored, and it is based on EmpiricalRisks. It aims to provide 
 stochastic algorithms (e.g. SGD) for solving large-scale regression problems.


 Cheers,
 Dahua








[julia-users] Problem building packages with dependencies

2015-04-23 Thread Peter Simon
I'm running Julia 0.3.7 and am attempting to regenerate my ~/.julia/v0.3 
directory (after carefully saving the old one). I'm having a problem with 
building several packages that have dependencies.  The problem packages 
include Cairo, HDF5, Nettle, and Images.  I'm working as a user without 
root privileges on CentOS 6.5.  Julia hangs when building these packages, 
apparently in the step where Yum is being invoked to install a dependency. 
 Our company's security posture recently became much stricter, and the 
system administrators recently locked down the computer, so that no 
downloads are allowed from anywhere (I'm using local copies, obtained 
with special permission, of all Julia registered packages to do the package 
installations).  When I look into one of my old package directories in my 
old saved ~/.julia/v0.3 directory, I see that the deps subdirectory of 
these older installed packages also contains an automatically generated 
file named deps.jl that shows that the package manager was able to find 
local versions of the dependencies from directories included in my 
LD_LIBRARY_PATH environment variable.  This deps.jl file is not present in 
any of the problematic package deps directories.   Has something changed 
recently so that Julia no longer attempts to find local files to satisfy 
dependencies, and instead immediately tries to download them?  Is there a 
way to force Julia to use the locally available dependencies?  Am I asking 
the right question here?

Thanks very much for any help with this.

--Peter


Re: [julia-users] Re: Jiahao's talk on Free probability, random matrices and disorder in organic semiconductors

2015-04-23 Thread Michele Zaffalon
Will the slides and the notebook be made available at some point? The video
is of fairly low resolution.

On Thu, Apr 23, 2015 at 4:52 PM, Luis Benet luis.bene...@gmail.com wrote:

 Thanks for sharing! Great talk




Re: [julia-users] in-place fn in a higher order fn

2015-04-23 Thread elextr


On Friday, April 24, 2015 at 2:03:07 AM UTC+10, Mauro wrote:

  It's part of #3440, the compiler optimization metabug: function-valued 
  argument inlining 
  
  https://github.com/JuliaLang/julia/issues/3440 

 Thanks, and yes, I'm aware of that one.  For me it's a bit hard to tell, 
 will the function-valued argument inlining solve all the higher order 
 function problems?  And is the problem described below, not quite a bit 
 easier to solve? 

  On Thursday, April 23, 2015 at 9:34:48 AM UTC-5, Mauro wrote: 
  
   Thanks!  In that case, I'll file an issue then to get this noted.  Also, 
   I think there is no (general) issue on the bad performance of higher 
   order functions.  Should I file that too? 
  
   On Thu, 2015-04-23 at 15:52, Jameson Nash vtj...@gmail.com wrote: 
   The short answer is that there is a certain set of optimizations that have 
   been implemented in Julia, but still a considerable set that has not been 
   implemented. This falls into the category of optimizations that have not 
   been implemented. Pull requests are always welcome (although I do not 
   recommend this one as a good beginner / up-for-grabs issue). 
   
    On Thu, Apr 23, 2015 at 9:18 AM Mauro maur...@runbox.com wrote: 
   
    It is well known that Julia struggles with type inference in higher order 
    functions.  This usually leads to slow code and memory allocations. 
    There are a few hacks to work around this.  Anyway, the question I have 
    is: Why can't Julia do better with in-place functions? 
   
   In short, a higher-order function like this: 
   
   function f(fn!,ar) 
   for i=1:n 
   fn!(ar, i) # fn! updates ar[i] somehow, returns nothing 
   nothing# to make sure output of f is discarded 
   end 
   end 


I'm curious how you would see it optimised? IIUC, Julia doesn't know fn! at 
compile time, so it doesn't know whether it returns something or not, so it has 
to allow for a return value even if it's to throw it away immediately.

 


   
   has almost as bad a performance (runtime and allocation-wise) as 
   
   function g(fn,ar) 
   for i=1:n 
   ar[i] = fn(ar[i]) 
   end 
   end 
   
    An in-depth, ready-to-run example is here: 
   https://gist.github.com/mauro3/f17da10247b0bad96f1a 
   Including output of @code_warntype. 
   
    So, why is Julia allocating memory when running f?  Nothing of f gets 
    assigned to anything. 
   
    Would this be something which is fixable more easily than the whole of 
    the higher-order performance issues?  If so, is there an issue for this? 
   
   Having good in-place higher order functions would go a long way with 
   numerical computations.  Thanks! 
   
  
  



[julia-users] Re: Need help/direction on how to optimize some Julia code

2015-04-23 Thread Harry B

I am trying to profile this code; here is what I have so far. I added 
the following code to the path taken for single-process mode.
I didn't bother with the multi-process one, since I didn't know how to deal 
with @profile and remotecall_wait.

@profile processOneFile(3085, 35649, filename)
bt, lidict = Profile.retrieve()
println("Profiling done")
for (k, v) in lidict
    println(v)
end

Output is here: 
https://github.com/harikb/scratchpad1/blob/master/julia2/run1.txt (ran 
with julia 0.3.7)
another run: 
https://github.com/harikb/scratchpad1/blob/master/julia2/run2.txt (ran 
with julia-debug 0.3.7), in case it gave better results.

However, there are quite a few lines marked without line or file info.

On Wednesday, April 22, 2015 at 2:44:13 AM UTC-7, Yuuki Soho wrote:

If I understand correctly now you are doing only 5 requests at the same 
time? It seems to me you could do much more. 

But that hides the inefficiency, at whatever level it exists. The Go program 
also uses only 5 parallel connections.

On Wednesday, April 22, 2015 at 1:15:20 PM UTC-7, Stefan Karpinski wrote:

Honestly, I'm pretty pleased with that performance. This kind of thing 
is Go's bread and butter – being within a factor of 2 of Go at something 
like this is really good. That said, if you do figure out anything that's a 
bottleneck here, please file issues – there's no fundamental reason Julia 
can't be just as fast or faster than any other language at this.

Stefan, yes, it is about 2x if I subtract the 10 seconds or so (as it 
appears to me) of startup time. I am running Julia 0.3.7 on a box with 
a deprecated GnuTLS (RHEL). The deprecation warning msg comes about 8 
seconds into the run and I wait another 2 seconds before I see the first 
print statement from my code (the "Started N processes" message). My 
calculations already exclude these 10 seconds. 
I wonder if I would get better startup time with 0.4, but Requests.jl is 
not compatible with it (nor do I find any other library for 0.4). I will 
try 0.4 again and see if I can fix Requests.jl.

Any help is appreciated on further analysis of the profile output.

Thanks
--
Harry

On Thursday, April 23, 2015 at 7:21:11 AM UTC-7, Seth wrote:

 The use of Requests.jl makes this very hard to benchmark accurately since 
 it introduces (non-measurable) dependencies on network resources.

 If you @profile the function, can you tell where it's spending most of its 
 time?

 On Tuesday, April 21, 2015 at 2:12:52 PM UTC-7, Harry B wrote:

 Hello,

 I had the need to take a text file with several million lines, construct 
 a URL with parameters picked from the tab-delimited file, and fire them one 
 after the other. After I read about Julia, I decided to try this in Julia. 
 However my initial implementation turned out to be slow and I was getting 
 close to my deadline. I then kept the Julia implementation aside and wrote 
 the same thing in Go, my other favorite language. Go version is twice (at 
 least) as fast as the Julia version. Now the task/deadline is over, I am 
 coming back to the Julia version to see what I did wrong.

 Go and Julia version are not written alike. In Go, I have just one main 
 thread reading a file and 5 go-routines waiting in a channel and one of 
 them will get the 'line/job' and fire off the url, wait for a response, 
 parse the JSON, and look for an id in a specific place, and go back to wait 
 for more items from the channel. 

 Julia code is very similar to the one discussed in the thread quoted 
 below. I invoke Julia with -p 5 and then have *each* process open the file 
 and read all lines. However each process is only processing 1/5th of the 
 lines and skipping others. It is a slight modification of what was 
 discussed in this thread 
 https://groups.google.com/d/msg/julia-users/Kr8vGwdXcJA/8ynOghlYaGgJ

 Julia code (no server URL or source for that though ) : 
 https://github.com/harikb/scratchpad1/tree/master/julia2
 Server could be anything that returns a static JSON.

 Considering the files will entirely fit in filesystem cache and I am 
 running this on a fairly large system (procinfo says 24 cores, 100G ram, 
 50G of it free even after dropping caches). The input file is only 875K. This 
 should ideally mean I can read the files several times in any programming 
 language and not skip a beat. wc -l on the file takes only 0m0.002s . Any 
 log/output is written to a fusion-io based flash disk. All fairly high end.

 https://github.com/harikb/scratchpad1/tree/master/julia2

 At this point, considering the machine is reasonably good, the only 
 bottleneck should be the time URL firing takes (it is a GET request, but 
 the other side has some processing to do) or the subsequent JSON parsing.

 Where do I go from here? How do I find out (a) are HTTP connections being 
 re-used by the underlying library? I am using this library 
 https://github.com/JuliaWeb/Requests.jl
 If not, that could answer this difference. How do 

[julia-users] Re: ANN: EmpiricalRisks, Regression, SGDOptim

2015-04-23 Thread Dahua Lin
Regression.jl does not aim to replace or provide an alternative to Optim.jl. Its 
primary purpose is regression, and the optimization algorithms are encapsulated 
as implementation details. 

However, there are certain aspects of Optim.jl that make it not very suitable 
for Regression.jl at this point. For example, we need to work with 2D/3D 
solutions directly, and need support for proximal operators, etc. Also, it is 
desirable to work with functors instead of Functions (which come with less 
overhead). 

I think eventually some of the design in Regression.jl can be merged into 
Optim.jl, and Regression.jl can be made into a package that builds on top of 
both EmpiricalRisks.jl and Optim.jl. I may open an issue at Optim.jl to propose 
some refactoring to begin with.

Dahua



Re: [julia-users] Re: Need help/direction on how to optimize some Julia code

2015-04-23 Thread Tim Holy
I think it's fair to say that Profile.print() will be quite a lot more 
informative---all you're getting is the list of lines visited, not anything 
about how much time each one takes.

--Tim
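Tim's suggestion, as a minimal sketch (a stand-in workload replaces processOneFile from the thread; on recent Julia you would first need `using Profile`):

```julia
# Profile.print() aggregates the samples into a time-weighted view,
# which is far more informative than dumping the raw lidict entries.
function workload()            # stand-in for processOneFile
    s = 0.0
    for i in 1:10^6
        s += sin(i)
    end
    s
end

Profile.clear()
@profile workload()
Profile.print()                # tree view: sample counts nested by call path
Profile.print(format=:flat)    # flat view: sample counts per source line
```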

On Thursday, April 23, 2015 04:19:08 PM Harry B wrote:
 I am trying to profile this code, so here is what I have so far. I added
 the following code to the path taken for the single-process mode.
 I didn't bother with the multi-process once since I didn't know how to deal
 with @profile and remotecall_wait
 
 @profile processOneFile(3085, 35649, filename)
 bt, lidict = Profile.retrieve()
 println("Profiling done")
 for (k,v) in lidict
 println(v)
 end
 
 Output is here
 https://github.com/harikb/scratchpad1/blob/master/julia2/run1.txt   (Ran
 with julia 0.3.7)
 another run
 https://github.com/harikb/scratchpad1/blob/master/julia2/run2.txt  (Ran
 with julia-debug 0.3.7) - in case it gave better results.
 
 However, there is quite a few lines marked without line or file info.
 
 On Wednesday, April 22, 2015 at 2:44:13 AM UTC-7, Yuuki Soho wrote:
 
 If I understand correctly now you are doing only 5 requests at the same
 time? It seems to me you could do much more.
 
 But that hides the inefficiency, whatever level it exists. The Go program
 also uses only 5 parallel connections.
 
 On Wednesday, April 22, 2015 at 1:15:20 PM UTC-7, Stefan Karpinski wrote:
 
 Honestly, I'm pretty pleased with that performance. This kind of thing
 is Go's bread and butter – being within a factor of 2 of Go at something
 like this is really good. That said, if you do figure out anything that's a
 bottleneck here, please file issues – there's no fundamental reason Julia
 can't be just as fast or faster than any other language at this.
 
 Stefan, yes, it is about 2x if I subtract the 10 seconds or so (whatever it
 appears to me) as the startup time. I am running Julia 0.3.7 on a box with
 a deprecated GnuTLS (RHEL). The deprecation warning msg comes about 8
 seconds into the run and I wait another 2 seconds before I see the first
 print statement from my code (Started N processes message). My
 calculations already exclude these 10 seconds.
 I wonder if I would get better startup time with 0.4, but Requests.jl is
 not compatible with it (nor do I find any other library for 0.4). I will
 try 0.4 again and see I can fix Requests.jl
 
 Any help is appreciated on further analysis of the profile output.
 
 Thanks
 --
 Harry
 
 On Thursday, April 23, 2015 at 7:21:11 AM UTC-7, Seth wrote:
  The use of Requests.jl makes this very hard to benchmark accurately since
  it introduces (non-measurable) dependencies on network resources.
  
  If you @profile the function, can you tell where it's spending most of its
  time?
  
  On Tuesday, April 21, 2015 at 2:12:52 PM UTC-7, Harry B wrote:
  Hello,
  
  I had the need to take a text file with several million lines, construct
  a URL with parameters picked from the tab limited file, and fire them one
  after the other. After I read about Julia, I decided to try this in
  Julia.
  However my initial implementation turned out to be slow and I was getting
  close to my deadline. I then kept the Julia implementation aside and
  wrote
  the same thing in Go, my other favorite language. Go version is twice (at
  least) as fast as the Julia version. Now the task/deadline is over, I am
  coming back to the Julia version to see what I did wrong.
  
  Go and Julia version are not written alike. In Go, I have just one main
  thread reading a file and 5 go-routines waiting in a channel and one of
  them will get the 'line/job' and fire off the url, wait for a response,
  parse the JSON, and look for an id in a specific place, and go back to
  wait
  for more items from the channel.
  
  Julia code is very similar to the one discussed in the thread quoted
  below. I invoke Julia with -p 5 and then have *each* process open the
  file
  and read all lines. However each process is only processing 1/5th of the
  lines and skipping others. It is a slight modification of what was
  discussed in this thread
  https://groups.google.com/d/msg/julia-users/Kr8vGwdXcJA/8ynOghlYaGgJ
  
  Julia code (no server URL or source for that though ) :
  https://github.com/harikb/scratchpad1/tree/master/julia2
  Server could be anything that returns a static JSON.
  
  Considering the files will entirely fit in filesystem cache and I am
  running this on a fairly large system (procinfo says 24 cores, 100G ram,
  50G or free even after removing cached). The input file is only 875K.
  This
  should ideally mean I can read the files several times in any programming
  language and not skip a beat. wc -l on the file takes only 0m0.002s . Any
  log/output is written to a fusion-io based flash disk. All fairly high
  end.
  
  https://github.com/harikb/scratchpad1/tree/master/julia2
  
  At this point, considering the machine is reasonably good, the only
  bottleneck should be the time URL firing takes (it is a GET 

[julia-users] Optimization of simple code

2015-04-23 Thread Stéphane Mottelet

Hello,

I am trying to improve the speed of code like this:

M1_v=[v[17]
v[104]
v[149]
-(v[18]+v[63]+v[103])
v[17]
v[104]
v[149]
...
-(v[39]+v[41]+v[124])
v[38]
v[125]
v[127]
-(v[39]+v[41]+v[124])];

The attached file (with 1000 repetitions) runs very slowly (0.71s) 
compared to Scilab, where it takes only 0.42s on my machine. Did I miss 
something?


Thanks for your help

S.
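One common fix for this pattern, offered as a sketch rather than a measured benchmark: the vector literal allocates a fresh M1_v on every loop iteration, so preallocating the output once and filling it in place removes 1000 allocations. Only the first four entries are shown; the length and the remaining entries are illustrative:

```julia
function test_prealloc()
    v = rand(172)
    M1_v = zeros(172)    # allocated once, reused on every iteration
    for i = 1:1000
        M1_v[1] = v[17]
        M1_v[2] = v[104]
        M1_v[3] = v[149]
        M1_v[4] = -(v[18] + v[63] + v[103])
        # ... remaining entries filled the same way
    end
    M1_v
end
```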

function test()
v=rand(172,1);
for i=1:1000
M1_v=[v[17]
v[104]
v[149]
-(v[18]+v[63]+v[103])
v[17]
v[104]
v[149]
-(v[18]+v[63]+v[103])
v[17]
v[104]
v[149]
-(v[18]+v[63]+v[103])
v[17]
v[104]
v[149]
-(v[18]+v[63]+v[103])
v[17]
v[104]
v[149]
-(v[18]+v[63]+v[103])
v[38]
v[125]
v[127]
-(v[39]+v[41]+v[124])
v[38]
v[125]
v[127]
-(v[39]+v[41]+v[124])
v[38]
v[125]
v[127]
-(v[39]+v[41]+v[124])
v[38]
v[125]
v[127]
-(v[39]+v[41]+v[124])
v[38]
v[125]
v[127]
-(v[39]+v[41]+v[124])
v[15]
v[102]
v[147]
-(v[16]+v[61]+v[85]+v[101])
v[15]
v[102]
v[147]
-(v[16]+v[61]+v[85]+v[101])
v[36]
-(v[37]+v[122])
v[36]
-(v[37]+v[122])
v[36]
-(v[37]+v[122])
v[67]
-(v[68]+v[153])
v[67]
-(v[68]+v[153])
v[67]
-(v[68]+v[153])
v[67]
-(v[68]+v[153])
v[67]
-(v[68]+v[153])
v[67]
-(v[68]+v[153])
v[82]
-(v[83]+v[168])
v[82]
-(v[83]+v[168])
v[82]
-(v[83]+v[168])
v[82]
-(v[83]+v[168])
v[70]
-(v[71]+v[156])
v[70]
-(v[71]+v[156])
v[70]
-(v[71]+v[156])
v[70]
-(v[71]+v[156])
v[63]
v[150]
-(v[64]+v[149])
v[63]
v[150]
-(v[64]+v[149])
v[63]
v[150]
-(v[64]+v[149])
v[63]
v[150]
-(v[64]+v[149])
v[63]
v[150]
-(v[64]+v[149])
v[61]
v[127]
-(v[41]+v[62]+v[147])
v[61]
v[127]
-(v[41]+v[62]+v[147])
v[43]
v[135]
-(v[49]+v[129])
v[43]
v[135]
-(v[49]+v[129])
v[43]
v[135]
-(v[49]+v[129])
v[43]
v[135]
-(v[49]+v[129])
v[69]
v[156]
v[160]
v[161]
v[164]
v[166]
v[168]
-(v[70]+v[74]+v[75]+v[77]+v[78]+v[80]+v[82]+v[155])
v[69]
v[156]
v[160]
v[161]
v[164]
v[166]
v[168]
-(v[70]+v[74]+v[75]+v[77]+v[78]+v[80]+v[82]+v[155])
v[69]
v[156]
v[160]
v[161]
v[164]
v[166]
v[168]
-(v[70]+v[74]+v[75]+v[77]+v[78]+v[80]+v[82]+v[155])
v[69]
v[156]
v[160]
v[161]
v[164]
v[166]
v[168]
-(v[70]+v[74]+v[75]+v[77]+v[78]+v[80]+v[82]+v[155])
v[48]
v[135]
v[136]
-(v[49]+v[50]+v[57]+v[134])
v[48]
v[135]
v[136]
-(v[49]+v[50]+v[57]+v[134])
v[48]
v[135]
v[136]
-(v[49]+v[50]+v[57]+v[134])
v[26]
v[114]
-(v[27]+v[28]+v[112])
v[26]
v[114]
-(v[27]+v[28]+v[112])
v[26]
v[114]
-(v[27]+v[28]+v[112])
v[35]
v[122]
v[124]
v[124]
v[158]
v[160]
v[161]
-(v[36]+v[38]+v[38]+v[72]+v[74]+v[75]+v[121])
v[35]
v[122]
v[124]
v[124]
v[158]
v[160]
v[161]
-(v[36]+v[38]+v[38]+v[72]+v[74]+v[75]+v[121])
v[35]
v[122]
v[124]
v[124]
v[158]
v[160]
v[161]
-(v[36]+v[38]+v[38]+v[72]+v[74]+v[75]+v[121])
v[44]
v[131]
v[144]
-(v[45]+v[47]+v[58]+v[130])
v[44]
v[131]
v[144]
-(v[45]+v[47]+v[58]+v[130])
v[44]
v[131]
v[144]
-(v[45]+v[47]+v[58]+v[130])
v[44]
v[131]
v[144]
-(v[45]+v[47]+v[58]+v[130])
v[44]
v[131]
v[144]
-(v[45]+v[47]+v[58]+v[130])
v[10]
v[15]
v[17]
v[18]
v[23]
v[38]
v[41]
v[51]
v[52]
v[54]
v[55]
v[60]
v[72]
v[74]
v[75]
v[108]
v[153]
-(v[22]+v[67]+v[84]+v[96]+v[101]+v[103]+v[104]+v[109]+v[124]+v[127]+v[137]+v[138]+v[140]+v[141]+v[146]+v[158]+v[160]+v[161])
v[50]
v[137]
v[138]
v[140]
v[141]
v[144]
-(v[51]+v[52]+v[54]+v[55]+v[58]+v[136])
v[50]
v[137]
v[138]
v[140]
v[141]
v[144]
-(v[51]+v[52]+v[54]+v[55]+v[58]+v[136])
v[50]
v[137]
v[138]
v[140]
v[141]
v[144]
-(v[51]+v[52]+v[54]+v[55]+v[58]+v[136])
v[50]
v[137]
v[138]
v[140]
v[141]
v[144]
-(v[51]+v[52]+v[54]+v[55]+v[58]+v[136])
v[50]
v[137]
v[138]
v[140]
v[141]
v[144]
-(v[51]+v[52]+v[54]+v[55]+v[58]+v[136])
v[50]
v[137]
v[138]
v[140]
v[141]
v[144]
-(v[51]+v[52]+v[54]+v[55]+v[58]+v[136])
v[50]
v[137]
v[138]
v[140]
v[141]
v[144]
-(v[51]+v[52]+v[54]+v[55]+v[58]+v[136])
v[50]
v[137]
v[138]
v[140]
v[141]
v[144]
-(v[51]+v[52]+v[54]+v[55]+v[58]+v[136])
v[50]
v[137]
v[138]
v[140]
v[141]
v[144]
-(v[51]+v[52]+v[54]+v[55]+v[58]+v[136])
v[50]
v[137]
v[138]
v[140]
v[141]
v[144]
-(v[51]+v[52]+v[54]+v[55]+v[58]+v[136])
v[30]
-(v[31]+v[116])
v[30]
-(v[31]+v[116])
v[30]
-(v[31]+v[116])
v[49]
v[136]
-(v[50]+v[135])
v[49]
v[136]
-(v[50]+v[135])
v[49]
v[136]
-(v[50]+v[135])
v[49]
v[136]
-(v[50]+v[135])
v[49]
v[136]
-(v[50]+v[135])
v[49]
v[136]
-(v[50]+v[135])
v[49]
v[136]
-(v[50]+v[135])
v[12]
v[100]
v[129]
-(v[14]+v[43]+v[98])
v[12]
v[100]
v[129]
-(v[14]+v[43]+v[98])
v[12]
v[100]
v[129]
-(v[14]+v[43]+v[98])
v[12]
v[100]
v[129]
-(v[14]+v[43]+v[98])
v[32]
v[131]
v[166]
-(v[45]+v[80]+v[86]+v[118])
v[3]
v[12]
v[14]
v[90]
-(v[4]+v[25]+v[89]+v[98]+v[100])
v[3]
v[12]
v[14]
v[90]
-(v[4]+v[25]+v[89]+v[98]+v[100])
v[3]
v[12]
v[14]
v[90]
-(v[4]+v[25]+v[89]+v[98]+v[100])
v[3]
v[12]
v[14]
v[90]
-(v[4]+v[25]+v[89]+v[98]+v[100])
v[3]
v[12]
v[14]
v[90]
-(v[4]+v[25]+v[89]+v[98]+v[100])
v[3]
v[12]
v[14]
v[90]
-(v[4]+v[25]+v[89]+v[98]+v[100])
v[4]
v[91]
-(v[5]+v[90])
v[4]
v[91]
-(v[5]+v[90])
v[4]
v[91]
-(v[5]+v[90])
v[4]
v[91]
-(v[5]+v[90])
v[4]
v[91]
-(v[5]+v[90])
v[4]
v[91]
-(v[5]+v[90])
v[5]
v[5]
v[11]
v[13]
v[14]
v[92]
v[98]
-(v[6]+v[12]+v[34]+v[91]+v[91]+v[97]+v[99]+v[100])
v[5]
v[5]
v[11]
v[13]
v[14]
v[92]
v[98]

[julia-users] ANN: QuDirac.jl

2015-04-23 Thread Jarrett Revels
I'm happy to say that I've finally released QuDirac.jl 
https://github.com/JuliaQuantum/QuDirac.jl!

This package is for performing common quantum mechanical operations using 
Dirac notation.

Feature list:

- Implementations of state types (`Ket`,`Bra`), and a variety of operator 
types (`OpSum`,`OuterProduct`)
- Treat states and operators as map-like data structures, enabling 
label-based analysis for spectroscopy purposes
- Implementation of common operations like partial trace (`ptrace`) and 
partial transpose (`ptranspose`)
- Support for abstract/undefined inner products
- User-definable custom inner product rules
- Subspace selection/transformation via functions on state labels and 
coefficients:
- `xsubspace` allows easy selection of excitation subspaces of states 
and operators
- `permute` and `switch` allows generic permutation of factor labels 
for states
- `filter`/`filter!` are supported on both the labels and coefficients 
of operators/states
- Mapping functions (`map`/`maplabels`/`mapcoeffs`) for applying 
arbitrary functions to labels and coefficients
- Functional generation of operators using `@def_op` and `@rep_op`
- `d" ... "` string literals for natural Dirac notation input syntax

-- Jarrett


[julia-users] Re: Jiahao's talk on Free probability, random matrices and disorder in organic semiconductors

2015-04-23 Thread Sheehan Olver
Great talk!

On Thursday, April 23, 2015 at 4:52:57 AM UTC+10, Nick Henderson wrote:

 Hi All,

 Just wanted to announce that we posted Jiahao's recent talk on our youtube 
 page:

   https://www.youtube.com/watch?v=68yy33jOkOs

 Thanks again to both Jiahao and Andreas for the great talks!

 Abstract:

   Free probability, random matrices and disorder in organic semiconductors
   Jiahao Chen, MIT CSAIL

 Random matrix theory has long been used to study the spectral
 properties of physical systems, and has led to a rich interplay
 between probability theory and physics [1].  Historically, random
 matrices have been used to model physical systems with random
 fluctuations, or systems whose eigenproblems were too difficult to
 solve numerically.  This talk explores applications of RMT to the
 physics of disorder in organic semiconductors [2,3].  Revisiting the
 old problem of Anderson localization [4] has shed new light on the
 emerging field of free probability theory [5].  I will discuss the
 implications of free probabilistic ideas for finite-dimensional random
 matrices [6], as well as some hypotheses about eigenvector locality.
 Algorithms are available in the RandomMatrices.jl package [7] written
 for the Julia programming language.

 [1] M. L. Mehta.  Random matrices, 3/e, Academic Press, 2000.
 [2] J. Chen, E. Hontz, J. Moix, M. Welborn, T. Van Voorhis, A. Suarez,
 R. Movassagh, and A. Edelman.  Error analysis of free probability
 approximations to the density of states of disordered systems.
 Phys. Rev. Lett. (2012) 109:36403.
 [3] M. Welborn, J. Chen, and T. Van Voorhis.  Densities of states for
 disordered systems from free probability.  Phys. Rev. B (2013) 
 88:205113.
 [4] P. W. Anderson.  Absence of diffusion in certain random lattices.
 Phys. Rev. (1958) 109:1492--1505.
 [5] D. Voiculescu.  Addition of certain non-commuting random variables.
 J. Functional Anal. (1986) 66:323--346.
 [6] J. Chen, T. Van Voorhis, and A. Edelman.  Partial freeness of random
 matrices.  arXiv:1204.2257
 [7] https://github.com/jiahao/RandomMatrices.jl



[julia-users] New packages in time-series statistics

2015-04-23 Thread colintbowers
Hi all,

I've written three new packages for Julia, and am interested in getting 
some feedback/comments from the community as well as determining whether 
there is sufficient interest to register them officially. The packages are:

[DependentBootstrap](https://github.com/colintbowers/DependentBootstrap.jl)

[KernelStat](https://github.com/colintbowers/KernelStat.jl)

[RARIMA](https://github.com/colintbowers/RARIMA.jl)

and can be pulled using Pkg.clone("URL_HERE"). I don't have any problems 
compiling them on v0.3, but would be very interested in hearing of any 
problems compiling on v0.4 (or v0.3 for that matter).

The first package, DependentBootstrap, implements the iid bootstrap, 
stationary bootstrap, circular block bootstrap, moving block bootstrap, 
tapered block bootstrap, and nonoverlapping block bootstrap, as well as the 
block length selection procedures in Politis and White (2004) (including 
the correction provided in Patton, Politis and White (2009)), and 
Paparoditis and Politis (2002). The main thing it doesn't do (yet) is work 
with multivariate data. So just 1-dimensional time-series for now. This 
package is implemented entirely in Julia.
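For readers unfamiliar with these resampling schemes, here is a minimal moving block bootstrap in plain Julia; this is an illustration of the idea only, not the DependentBootstrap API:

```julia
# Resample overlapping blocks of length b from x and concatenate them
# until the resampled series reaches the original length, preserving
# short-range dependence within each block.
function moving_block_resample(x::Vector{Float64}, b::Int)
    n = length(x)
    out = Float64[]
    while length(out) < n
        s = rand(1:(n - b + 1))      # random block start
        append!(out, x[s:s+b-1])
    end
    out[1:n]                         # trim to the original length
end

x = randn(100)
resampled = moving_block_resample(x, 10)   # one bootstrap replicate
```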

The second package, KernelStat, just implements some kernel functions from 
statistics, and includes three bandwidth estimation procedures, including 
the adaptive choice method discussed in Politis (2003). The main purpose of 
this package for now is to provide the bandwidth estimates needed by the 
block length selection procedures in the DependentBootstrap package, but in 
the future it could be merged with other packages to provide a general 
package for kernel-based nonparametric statistics. This package is 
implemented entirely in Julia.

The third package, RARIMA, implements ARIMA estimation, forecast and 
simulation. Unfortunately, I didn't have time to implement all the 
functions in this package in Julia. To be honest, it is probably a task 
better suited to someone more knowledgeable about the ins-and-outs of ARIMA 
models and state space representations than I. So instead, what I've done 
with this package is use the Julia package RCall to wrap the ARIMA 
functionality in R, hence the package name RARIMA. Currently, the 
simulation functions in RARIMA are implemented in Julia, there is a version 
of the forecast functions implemented in Julia  (but they are not capable 
of including confidence intervals), and a version of the forecast functions 
that wrap R functionality (these do provide confidence intervals). Finally, 
all estimation functions wrap R routines.

I would welcome any comments, feedback, recommendations, pull requests, 
etc. I would be particularly interested in any suggestions to improve the 
performance of the functions in the DependentBootstrap or KernelStat 
package.

Cheers,

Colin


[julia-users] Re: absolute paths in show(io::IO,mt::MethodTable)

2015-04-23 Thread Kristoffer Carlsson
I am not sure if this helps you but with the @edit macro you get taken 
directly to the source code of the method.

For example

@edit +(1,1)

@edit sin(pi)
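Besides @edit, the path can also be recovered programmatically; a sketch (the /usr/share/julia/base prefix comes from Tamas's Debian setup and is an assumption about that layout):

```julia
# functionloc returns the (file, line) pair that `methods` prints;
# prepend the base source directory when the path is relative.
file, line = functionloc(methods, (Function,))
path = isabspath(string(file)) ? string(file) :
       joinpath("/usr/share/julia/base", string(file))
println(path, ":", line)
```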



On Thursday, April 23, 2015 at 1:29:06 PM UTC+2, Tamas Papp wrote:

 Hi, 

 I am using the Debian nighly package for Julia, which includes the 
 sources. 

 methods (actually, show(io::IO,m::Method)) prints relative paths for 
 Base methods, eg 

 julia methods(methods) 
 # 4 methods for generic function methods: 
 methods(f::Function,t::ANY) at reflection.jl:104 
 methods(f::Function) at reflection.jl:133 
 methods(f::ANY,t::ANY) at reflection.jl:105 
 methods(x::ANY) at reflection.jl:139 

 I wonder if there is a way to print absolute paths, ie somehow tell 
 Julia that its base directory is /usr/share/julia/base/, and let show 
 prepend that to the relative paths. 

 I would find this useful since in Emacs/ESS I can open the referenced 
 file easily, so this would allow me to study sources quickly. 

 Best, 

 Tamas 



[julia-users] Re: susceptance matrix

2015-04-23 Thread Patrick O'Leary
On Thursday, April 23, 2015 at 6:51:05 AM UTC-5, Michela Di Lullo wrote:

 I'm trying to make it but it's not working because of the indexes. I don't 
 know how to declare the parameter branch_x indexed by (n,b_from,b_to).


I'm not sure what this indexing expression is supposed to do; branch_x is 
defined as a vector so providing three arguments doesn't make sense; each 
indexing argument indexes along a dimension of the array, and branch_x is 
one dimensional.

We might be missing better documentation of indexing expressions; all I can 
find right now is 
http://julia.readthedocs.org/en/release-0.3/stdlib/arrays/#indexing-assignment-and-concatenation: 
getindex(X, ...) is the desugared form of X[...].

It looks like you might be trying to extract a range, in which case you can 
construct it with colon:

branch_x[b_from:b_to]

but that throws n away...if n is supposed to be a stride, then

branch_x[b_from:n:b_to]

...but without understanding the AMPL expression, I'm just stabbing 
possible answers at you.

(If someone who understands AMPL comes along, they might be able to help 
better/faster.) 
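To make the two candidate readings concrete (the names come from the thread; the values are made up):

```julia
branch_x = collect(1.0:10.0)
b_from, b_to, n = 2, 8, 2

branch_x[b_from:b_to]    # contiguous slice: elements 2 through 8
branch_x[b_from:n:b_to]  # strided slice: elements 2, 4, 6, 8
```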


[julia-users] ANN: EmpiricalRisks, Regression, SGDOptim

2015-04-23 Thread Dahua Lin
Hi, 

I am happy to announce three packages related to empirical risk minimization

EmpiricalRisks https://github.com/lindahua/EmpiricalRisks.jl

This Julia package provides a collection of predictors and loss functions, 
as well as the efficient computation of gradients, mainly to support the 
implementation of (regularized) empirical risk minimization methods.

Predictors:

   - linear prediction
   - affine prediction
   - multivariate linear prediction
   - multivariate affine prediction
   
Loss functions:

   - squared loss
   - absolute loss
   - quantile loss
   - huber loss
   - hinge loss
   - smoothed hinge loss
   - logistic loss
   - sum squared loss (for multivariate prediction)
   - multinomial logistic loss
   
Regularizers:

   - squared L2 regularization
   - L1 regularization
   - elastic net (L1 + squared L2)
   - evaluation of proximal operators, w.r.t. these regularizers.
   

Regression https://github.com/lindahua/Regression.jl

This package was dead before, and I revived it recently. It is based on 
EmpiricalRisks, and provides methods for regression analysis (for moderate 
size problems, i.e. the data can be loaded entirely to memory). It supports 
the following problems:

   - Linear regression
   - Ridge regression
   - LASSO
   - Logistic regression
   - Multinomial Logistic regression
   - Problems with customized loss and regularizers
   
It also provides a variety of solvers:

   - Analytical solution (for linear & ridge regression)
   - Gradient descent
   - BFGS
   - L-BFGS
   - Proximal gradient descent (recommended for LASSO & sparse regression)
   - Accelerated gradient descent (experimental)


SGDOptim https://github.com/lindahua/SGDOptim.jl

I announced this a couple of weeks ago. The package has since been fundamentally 
refactored, and it is now based on EmpiricalRisks. It aims to provide 
stochastic algorithms (e.g. SGD) for solving large-scale regression problems.


Cheers,
Dahua








[julia-users] Re: How should let this work?

2015-04-23 Thread Tomas Lycken
For backwards compatibility, you can use Compat.jl. 

// T 

[julia-users] Re: Help understanding ASCIIString vs String

2015-04-23 Thread Test This
Thank you, Avik. I appreciate your help. 

On Wednesday, April 22, 2015 at 1:34:51 PM UTC-4, Avik Sengupta wrote:


 The answer to your first question is easy. subtypes() only display the 
 direct subtypes. ASCIIString is a subtype of DirectIndexString, which in 
 turn is a subtype of String

 julia> subtypes(String)
 8-element Array{Any,1}:
  Base.GenericString  
  DirectIndexString   
  RepString   
  RevString{T<:AbstractString}
  RopeString  
  SubString{T<:AbstractString}
  UTF16String 
  UTF8String  

 julia> subtypes(DirectIndexString)
 2-element Array{Any,1}:
  ASCIIString
  UTF32String

 The answer to your second question is more contentious, though a valid 
 answer might be that this is simply a design choice. A generic discussion 
 is here: 
 http://en.wikipedia.org/wiki/Covariance_and_contravariance_%28computer_science%29
  
 and there are various discussions in the mailing list about why this choice 
 has been made in Julia. 

 Regards
 -
 Avik

 On Wednesday, 22 April 2015 18:20:46 UTC+1, Test This wrote:

 Avik and Patrick,

 Thanks to both of you for clarifying this and for the alternative. 

 I have a couple of related questions:

 - Why is ASCIIString not listed when I do subtypes(String)?
 - Why is Dict{ASCIIString, Int} not a subtype of Dict{String, Int}, if 
 ASCIIString is a subtype of String?

 Anyhow, for now, your answers and links are very helpful.




 On Wednesday, April 22, 2015 at 10:38:27 AM UTC-4, Patrick O'Leary wrote:

 (The crusade continues)

 Never fear though, this doesn't mean you have to write more code! Julia 
 supports the use of type variables to express generics. So in your case, 
 instead of:

 function func(a::Params, b::String, c::Dict{String, Array{Int, 1}}, 
 d::Dict{String, Array{Int, 1}})
   ...
 end

 which has the aforementioned invariance problem, you can write

 function func{T<:String}(a::Params, b::T, c::Dict{T, Array{Int, 1}}, 
 d::Dict{T, Array{Int, 1}})
   ...
 end

 which defines a family of methods for any subtype of String. These 
 methods are called, in Julia terms, parametric methods, and are discussed 
 in the manual here: 
 http://docs.julialang.org/en/release-0.3/manual/methods/#parametric-methods

 Hope this helps,
 Patrick
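A compact, runnable demonstration of both the failure and the parametric fix (0.3-era syntax, matching the thread):

```julia
# Invariance: Dict{ASCIIString,Int} is NOT a subtype of Dict{String,Int},
# so the concrete signature never matches, while the parametric one does.
h(x::Dict{String, Int})        = "invariant signature"
hp{T<:String}(x::Dict{T, Int}) = "parametric signature"

d = Dict{ASCIIString, Int}("abc" => 1)
# h(d)  # would raise MethodError: no method matching h(::Dict{ASCIIString,Int64})
hp(d)   # matches, with T = ASCIIString
```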


 On Wednesday, April 22, 2015 at 8:54:12 AM UTC-5, Avik Sengupta wrote:

 So a one-line answer to this is: Julia container types are invariant. 

 Lets take this step by step

 julia> f(x::String) = "I am $x"
 f (generic function with 2 methods)

 julia> f("abc")
 "I am abc"

 julia> g(x::Dict) = "I am a dict of type: $(typeof(x))"
 g (generic function with 1 method)

 julia> g(Dict("abc"=>1))
 "I am a dict of type: Dict{ASCIIString,Int64}"

 julia> h(x::Dict{String, Int}) = "I am a dict of {String=>Int}"
 h (generic function with 1 method)

 julia> h(Dict("abc"=>1))
 ERROR: MethodError: `h` has no method matching 
 h(::Dict{ASCIIString,Int64})


 Basically, while an ASCIIString is a subtype of the abstract type 
 String , a Dict{ASCIIString, Int} is not a subtype of Dict{String, Int}

 See here for more: 
 http://docs.julialang.org/en/release-0.3/manual/types/?highlight=contravariant#man-parametric-types

 Regards
 -
 Avik

 On Wednesday, 22 April 2015 14:14:06 UTC+1, Test This wrote:


 I defined a function 

 function func(a::Params, b::String, c::Dict{String, Array{Int, 1}}, 
 d::Dict{String, Array{Int, 1}})
   ...
 end

 When I run the program, calling this function with func(paramvalue, 
 "H", d1, d2), I get an error saying func has no method matching
 (::Params, ::ASCIIString, ::Dict{ASCIIString,Array{Int64,1}}, 
 ::Dict{ASCIIString,Array{Int64,1}})

 The code works if I change String to ASCIIString in the function 
 definition. But I thought (and the REPL seems to agree) that 
 any value of type ASCIIString is also of type String. Then why am I 
 getting this error? 

 Thanks in advance for your help.



[julia-users] Official docker images

2015-04-23 Thread Viral Shah
It seems that Docker has an official julia image:

https://registry.hub.docker.com/_/julia/

-viral





[julia-users] in-place fn in a higher order fn

2015-04-23 Thread Mauro
It is well known that Julia struggles with type inference in higher order
functions.  This usually leads to slow code and memory allocations.
There are a few hacks to work around this.  Anyway, the question I have
is: Why can't Julia do better with in-place functions?

In short, a higher-order function like this:

function f(fn!, ar)
    for i = 1:n
        fn!(ar, i)   # fn! updates ar[i] somehow, returns nothing
    end
    nothing          # to make sure output of f is discarded
end

has almost as bad a performance (runtime and allocation-wise) as

function g(fn, ar)
    for i = 1:n
        ar[i] = fn(ar[i])
    end
end

An in-depth, ready-to-run example is here:
https://gist.github.com/mauro3/f17da10247b0bad96f1a
including the output of @code_warntype.

So, why is Julia allocating memory when running f?  Nothing of f gets
assigned to anything.

Would this be something which is fixable more easily than the whole of
the higher-order performance issues?  If so, is there an issue for this?

Having good in-place higher order functions would go a long way with
numerical computations.  Thanks!
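One of the hacks Mauro alludes to is the functor trick: pass a dedicated type instead of a Function, so the higher-order function specializes on it. A sketch in 0.3/0.4-era syntax (the names Doubler, apply!, and f_functor are illustrative, not from the gist):

```julia
immutable Doubler end
apply!(::Doubler, ar, i) = (ar[i] *= 2; nothing)

# Parameterizing on F makes the inner call concretely typed, so the
# compiler can see that nothing is returned and avoid boxing.
function f_functor{F}(fn!::F, ar)
    for i = 1:length(ar)
        apply!(fn!, ar, i)
    end
    nothing
end

ar = ones(10)
f_functor(Doubler(), ar)   # ar is now filled with 2.0
```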


Re: [julia-users] in-place fn in a higher order fn

2015-04-23 Thread Jameson Nash
The short answer is that there is a certain set of optimizations that have
been implemented in Julia, but still a considerable set that has not been
implemented. This falls into the category of optimizations that have not
been implemented. Pull requests are always welcome (although I do not
recommend this one as a good beginner / up-for-grabs issue).
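[Editorial note: one of the known 0.3-era workarounds Mauro alludes to can be sketched like this (hypothetical names). Instead of passing a Function, pass an instance of a dedicated functor type, so the higher-order function specializes on it and the inner call is statically dispatched:]

```julia
# Functor type standing in for the in-place update; the `!` in the
# name just follows the mutating-function convention.
immutable Square! end
apply!(::Square!, ar, i) = (ar[i] = ar[i]^2; nothing)

# Parameterizing on F makes Julia compile a specialized version of
# f for each functor type, avoiding the generic-Function call.
function f{F}(fn!::F, ar)
    for i = 1:length(ar)
        apply!(fn!, ar, i)
    end
    ar
end

a = [1.0, 2.0, 3.0]
f(Square!(), a)   # a is now [1.0, 4.0, 9.0]
```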

On Thu, Apr 23, 2015 at 9:18 AM Mauro mauro...@runbox.com wrote:




[julia-users] absolute paths in show(io::IO,mt::MethodTable)

2015-04-23 Thread Tamas Papp
Hi,

I am using the Debian nightly package for Julia, which includes the
sources.

methods (actually, show(io::IO,m::Method)) prints relative paths for
Base methods, eg

julia> methods(methods)
# 4 methods for generic function methods:
methods(f::Function,t::ANY) at reflection.jl:104
methods(f::Function) at reflection.jl:133
methods(f::ANY,t::ANY) at reflection.jl:105
methods(x::ANY) at reflection.jl:139

I wonder if there is a way to print absolute paths, i.e. somehow tell
Julia that its base directory is /usr/share/julia/base/, and let show
prepend that to the relative paths.

I would find this useful since in Emacs/ESS I can open the referenced
file easily, so this would allow me to study sources quickly.
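[Editorial note: one hedged sketch of a workaround (the helper name is made up, and the 0.3-era Method internals such as m.func.code.file may differ between versions):]

```julia
# Assumed base source directory for the Debian package.
const BASEDIR = "/usr/share/julia/base"

# Print each method's source location, prepending BASEDIR to
# relative paths so Emacs/ESS can jump straight to the file.
function absmethods(f)
    for m in methods(f)
        file = string(m.func.code.file)   # internal fields; may differ by version
        line = m.func.code.line
        println(isabspath(file) ? file : joinpath(BASEDIR, file), ":", line)
    end
end

absmethods(methods)   # prints absolute paths for the methods of `methods`
```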

Best,

Tamas


[julia-users] Re: New packages in time-series statistics

2015-04-23 Thread Iain Dunning
Those all look really neat!

I think RARIMA would be one of the first packages dependent on RCall.

I'm ignorant about kernel statistics, but my only query is: would this 
functionality be able to fit into an existing package? If so, that could be 
good, as it can help to reduce maintenance effort. If it's a 
square-peg-round-hole situation, that's fine.

Either way, it would be great to get these packages registered!

Thanks,
Iain


On Thursday, April 23, 2015 at 2:45:57 AM UTC-4, colint...@gmail.com wrote:

 Hi all,

 I've written three new packages for Julia, and am interested in getting 
 some feedback/comments from the community as well as determining whether 
 there is sufficient interest to register them officially. The packages are:

 [DependentBootstrap](https://github.com/colintbowers/DependentBootstrap.jl
 )

 [KernelStat](https://github.com/colintbowers/KernelStat.jl)

 [RARIMA](https://github.com/colintbowers/RARIMA.jl)

 and can be pulled using Pkg.clone(URL_HERE). I don't have any problems 
 compiling them on v0.3, but would be very interested in hearing of any 
 problems compiling on v0.4 (or v0.3 for that matter).

 The first package, DependentBootstrap, implements the iid bootstrap, 
 stationary bootstrap, circular block bootstrap, moving block bootstrap, 
 tapered block bootstrap, and nonoverlapping block bootstrap, as well as the 
 block length selection procedures in Politis and White (2004) (including 
 the correction provided in Patton, Politis and White (2009)), and 
 Paparoditis and Politis (2002). The main thing it doesn't do (yet) is work 
 with multivariate data. So just 1-dimensional time-series for now. This 
 package is implemented entirely in Julia.

 The second package, KernelStat, just implements some kernel functions from 
 statistics, and includes three bandwidth estimation procedures, including 
 the adaptive choice method discussed in Politis (2003). The main purpose of 
 this package for now is to provide the bandwidth estimates needed by the 
 block length selection procedures in the DependentBootstrap package, but in 
 the future it could be merged with other packages to provide a general 
 package for kernel-based nonparametric statistics. This package is 
 implemented entirely in Julia.

 The third package, RARIMA, implements ARIMA estimation, forecast and 
 simulation. Unfortunately, I didn't have time to implement all the 
 functions in this package in Julia. To be honest, it is probably a task 
 better suited to someone more knowledgeable about the ins-and-outs of ARIMA 
 models and state space representations than I. So instead, what I've done 
 with this package is use the Julia package RCall to wrap the ARIMA 
 functionality in R, hence the package name RARIMA. Currently, the 
 simulation functions in RARIMA are implemented in Julia, there is a version 
 of the forecast functions implemented in Julia  (but they are not capable 
 of including confidence intervals), and a version of the forecast functions 
 that wrap R functionality (these do provide confidence intervals). Finally, 
 all estimation functions wrap R routines.

 I would welcome any comments, feedback, recommendations, pull requests, 
 etc. I would be particularly interested in any suggestions to improve the 
 performance of the functions in the DependentBootstrap or KernelStat 
 package.

 Cheers,

 Colin



Re: [julia-users] Re: Ode solver thought, numerical recipes

2015-04-23 Thread Alex
On Friday, 24 April 2015 00:03:29 UTC+2, François Fayard wrote:
 I'll have to reimplement the algorithm using my own methods. Numerical 
 Recipes are just implementations of known algorithms. But it's true that they 
 are not fully documented and they use many tricks that make the code less 
 clear. I'll rework the code.

I am no lawyer, but as Tim says, this doesn't sound encouraging [1]:

You want to translate some (or all) the Numerical Recipes routines to a 
different computer language, and then redistribute the resulting translations.
If you are a licensed NR user, you can make any personal use you want on a 
licensed screen, including translating NR to another computer language. 
However, you can't redistribute the translations in any manner, since they are 
derivative works and still subject to our copyright. If you'd like your 
translations to be included, with attribution to you, in the next version of 
the Numerical Recipes Multi-Language Code CDROM, we're usually happy to oblige; 
but, alas, we are not able to offer you any financial compensation for this 
contribution. (The CDROM now has Modula 2, BASIC, and Lisp versions contributed 
in this manner.)

 
 Any advice on the Julia questions ?

1) type-unstable odesolver: I think it wouldn't prevent creating an executable; 
it might just not be the most efficient. Why don't you use different method types 
to create your solution object? Either you only use the respective type 
constructors, or you make odesolve take the *type* of the respective method 
instead of a string/symbol.
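[Editorial note: a minimal sketch of that suggestion (hypothetical names, 0.3-era syntax):]

```julia
# Dispatch on the method *type* instead of a string, keeping the
# solver type-stable.
abstract ODEMethod
immutable Euler <: ODEMethod end
immutable RK4   <: ODEMethod end

take_step(::Type{Euler}, f, t, y, h) = y + h*f(t, y)
# a take_step method for RK4 would be defined analogously

function odesolve{M<:ODEMethod}(::Type{M}, f, y0, ts)
    ys = [y0]
    for i = 2:length(ts)
        push!(ys, take_step(M, f, ts[i-1], ys[end], ts[i] - ts[i-1]))
    end
    ys
end

odesolve(Euler, (t, y) -> -y, 1.0, linspace(0, 1, 11))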

2) Interfaces for types can be defined implicitly by the set of functions that 
the types need to support. For example, ODE.jl (currently) works for all types 
that support `norm`, `+`, `-`, and multiplication with/division by scalars. Maybe 
the Traits.jl package is also interesting in this context [2]. Anyway, for a 
start, using ::AbstractVector might cover most of your wishes.

3) ODE.jl uses non-mutating rhs functions as you say, but mostly for historical 
reasons. You definitely want to allow mutating rhs. One way to do this is by 
introducing a problem type, which can be created by providing either a 
mutating or a non-mutating rhs function. Internally you always use the mutating 
version. I think we discussed this at some point, but I can't find the issue 
right now (maybe [3]).
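[Editorial note: that idea can be sketched as follows (hypothetical names, 0.3-era syntax):]

```julia
# Problem type that always stores the mutating rhs internally.
type ODEProblem
    f!::Function    # mutating rhs: f!(dy, t, y) writes into dy
end

# Wrap a non-mutating rhs f(t, y) -> dy into the mutating form.
wrap_rhs(f) = ODEProblem((dy, t, y) -> copy!(dy, f(t, y)))

p = wrap_rhs((t, y) -> -y)
dy = zeros(2)
p.f!(dy, 0.0, [1.0, 2.0])   # dy == [-1.0, -2.0]
```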

Best,

Alex.

[1] http://www.nr.com/licenses/redistribute.html
[2] https://github.com/mauro3/Traits.jl
[3] https://github.com/JuliaLang/ODE.jl/pull/33

[julia-users] Return within `open() do`

2015-04-23 Thread Julia User
Example:

function example(txt; indent=true)
    open(file, "w") do f
        result = somefunc(f, txt; indent=indent)
        return result
    end
end

Just a question: is it valid in Julia to return from within a do block, or 
does this cause problems with closing the file?

Thanks






[julia-users] Re: New packages in time-series statistics

2015-04-23 Thread colintbowers
After a quick browse of registered packages I realised what a sensible 
question this is. Currently registered are KernelDensity, KernelEstimator, 
KernSmooth, and SmoothingKernels. Clearly the last thing we need is another 
package with the word Kernel in the title. I just had an (admittedly 
quick) look over the source code in these packages and there appears to be 
some cross-over, and a lot of potential for merging. My best guess is there 
could potentially be one package where currently there are four (five if you 
count mine).

Unfortunately fitting mine into one of these packages is not entirely 
straightforward. I followed the example set in the Distances and 
Distributions packages and made every kernel function its own type, which 
can then be evaluated via multiple dispatch using one function (evaluate). 
However, these other packages appear to be writing a new function name for 
each kernel. I personally prefer the idea of using multiple dispatch and 
the type system, but each to their own.
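[Editorial note: for readers unfamiliar with the Distances/Distributions-style design described here, a minimal sketch (hypothetical kernel names, 0.3-era syntax):]

```julia
# Each kernel is its own type, evaluated through one generic
# function via multiple dispatch.
abstract KernelFunction
immutable UniformKernel    <: KernelFunction end
immutable TriangularKernel <: KernelFunction end

evaluate(::UniformKernel, x)    = abs(x) <= 1 ? 0.5 : 0.0
evaluate(::TriangularKernel, x) = abs(x) <= 1 ? 1.0 - abs(x) : 0.0

evaluate(TriangularKernel(), 0.25)   # 0.75
```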

I think for now, the best thing I can do is to put all the functionality in 
my KernelStat package into the dependent bootstrap package (there isn't 
really that much code in the KernelStat package). If/when a kernel 
statistics package emerges as the dominant paradigm, I can add my bandwidth 
methods to it at that point and add it as a dependency of the 
DependentBootstrap package.

Cheers, I'm glad you like the look of it all.

Colin

On Friday, 24 April 2015 10:56:17 UTC+10, Iain Dunning wrote:




Re: [julia-users] Return within `open() do`

2015-04-23 Thread Jameson Nash
A `do` block introduces an anonymous function, so a return is fine
(although note that it'll return from the do block, not the outer function).
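[Editorial note: a minimal sketch of the behaviour (hypothetical function name):]

```julia
# `return` exits only the anonymous function created by the do
# block; open closes the handle in a finally clause either way, and
# the do block's value becomes the value of the open(...) call.
function firstline(path)
    open(path, "r") do f
        return readline(f)   # returns from the do-block closure
    end                      # file is closed here regardless
end
```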

On Thu, Apr 23, 2015 at 10:07 PM Julia User julia.user1...@gmail.com
wrote:
