[julia-users] How do I get the 'head' of a continuous stream of data?

2015-01-03 Thread C. Bryan Daniels
In Julia, if a function returns a continuous stream of data, is it possible 
to 'take' the head of the stream before the stream has terminated? I am 
used to Clojure, which has many mechanisms to do exactly this; I assume that 
is due to Clojure's inherent laziness.

I made a previous post regarding a situation in which I was attempting to 
use 'HTTPClient' for a GET to a service that returns a continuous reply of 
json objects. I thought the following might work, but the fetch(rr) still 
blocks:

rr = get(url, RequestOptions(blocking=false, ...))
r = fetch(rr)
readbytes(r)
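
For the general question of taking the head of a never-ending stream, a 
minimal sketch using Julia Tasks (produce/consume) -- illustrative only, 
and not tied to HTTPClient:

function naturals()
    @task begin
        i = 1
        while true
            produce(i)   # yield one value and suspend until consumed
            i += 1
        end
    end
end

t = naturals()
head = [consume(t) for k in 1:5]   # lazily take the first 5 values
println(head)                      # [1, 2, 3, 4, 5]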

Thanks.


Re: [julia-users] Micro-benchmarking Vandermonde matrix generation

2015-01-03 Thread Jiahao Chen
That's an interesting observation.

Vandermonde matrices are actually quite an interesting test of unusual
floating-point computations. Constructing Vandermonde matrices explicitly
very quickly produces matrix elements that overflow to +/-Inf or
underflow toward 0, unless your starting vector has entries that are all
exactly of magnitude 1. Hence the larger problems you are computing on
[1:N] end up testing largely the speed of multiplying finite numbers by
infinity. Conversely, had your tests used rand(N) (uniform random numbers
between 0 and 1) as input, you would have tested largely the speed of
multiplying denormalized floats and zeros.
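
(For readers without the notebook: a hypothetical naive constructor in the
spirit of vander2, showing why the entries blow up -- column k holds
x.^(k-1), built by repeated multiplication.)

function vander_naive(x::Vector{Float64})
    n = length(x)
    V = ones(n, n)
    for k = 2:n, i = 1:n
        # |x[i]| > 1 overflows toward +/-Inf, |x[i]| < 1 underflows toward 0
        V[i, k] = V[i, k-1] * x[i]
    end
    V
end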

On my machine, the speed penalty from denormalized floats turns out to be
significant, but not the +Infs.

julia> gc()

julia> x=[(-1.0)^n for n=1:10^4]; @time vander2(x); #Lots of
multiplications involving +1.0 and -1.0
elapsed time: 1.216966083 seconds (160400 bytes allocated, 6.44% gc
time)

julia> gc()

julia> x=[1.0:10^4]; @time vander2(x); #Lots of multiplications
involving +Inf
elapsed time: 1.206602925 seconds (160400 bytes allocated, 6.04% gc
time)

julia> gc()

julia> x=rand(10^4); @time vander2(x); #Lots of multiplications involving
0s and denormalized floats
elapsed time: 2.952932852 seconds (160400 bytes allocated, 2.82% gc
time)

On my machine, the equivalent computations using numpy are significantly
slower:

>>> x=[(-1.0)**n for n in range(1,10001)]; start=time.clock();
np.vander(x); time.clock()-start
...
3.9613525
>>> x=[n for n in range(1,10001)]; start=time.clock(); np.vander(x);
time.clock()-start
...
5.0367129
>>> x=np.random.rand(10**4); start=time.clock(); np.vander(x);
time.clock()-start
...
5.4627212


[julia-users] Constructing Expr from s-expressions

2015-01-03 Thread Darwin Darakananda
Hi everyone,

Is there currently a function that converts s-expressions into Expr 
structures (basically the reverse of Base.Meta.show_sexpr)?

I just started playing around with metaprogramming in Julia, so I'm 
probably doing things wrong.  But there are a lot of times where I end up 
creating highly nested Expr objects, ending up with code that is 
indecipherable.  I've found that the output from show_sexpr is sometimes 
easier to read than that of dump or xdump (which only seems to display the 
first couple of levels).  So I'm curious whether the reverse direction 
(s-expression -> Expr) is available as well.

If this function does not exist, would it be something that people would 
find useful?
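
For what it's worth, a rough sketch of what such a converter could look 
like, assuming the s-expression is written as nested tuples with the head 
first (the form show_sexpr prints):

sexpr_to_expr(x) = x                      # literals and symbols pass through
function sexpr_to_expr(t::Tuple)
    head = t[1]
    args = [sexpr_to_expr(a) for a in t[2:end]]
    Expr(head, args...)
end

# e.g. sexpr_to_expr((:call, :+, 1, (:call, :*, 2, :x)))  ==>  :(1 + 2x)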

Thanks and Happy New Years!

Darwin


[julia-users] DArrays performance

2015-01-03 Thread Amuthan A. Ramabathiran
Hello: I recently started exploring the parallel capabilities of Julia and 
I need some help in understanding and improving the performance of a very 
elementary parallel code using DArrays (I use Julia 
version 0.4.0-dev+2431). The code pasted below (based essentially on 
plife.jl) solves u''(x) = 0, x \in [0,1] with u(0) and u(1) specified, 
using the 2nd-order central difference approximation. The parallel version 
of the code runs significantly slower than the serial version. It would be 
nice if someone could point out ways to improve this and/or suggest an 
alternative, more efficient version.

function laplace_1D_serial(u::Array{Float64})
    N = length(u) - 2
    u_new = zeros(N)

    for i = 1:N
        u_new[i] = 0.5(u[i] + u[i + 2])
    end

    u_new
end

function serial_iterate(u::Array{Float64})
    u_new = laplace_1D_serial(u)

    for i = 1:length(u_new)
        u[i + 1] = u_new[i]
    end
end

function parallel_iterate(u::DArray)
    DArray(size(u), procs(u)) do I
        J = I[1]

        if myid() == 2
            local_array = zeros(length(J) + 1)
            for i = J[1] : J[end] + 1
                local_array[i - J[1] + 1] = u[i]
            end
            append!([float(u[1])], laplace_1D_serial(local_array))

        elseif myid() == length(procs(u)) + 1
            local_array = zeros(length(J) + 1)
            for i = J[1] - 1 : J[end]
                local_array[i - J[1] + 2] = u[i]
            end
            append!(laplace_1D_serial(local_array), [float(u[end])])

        else
            local_array = zeros(length(J) + 2)
            for i = J[1] - 1 : J[end] + 1
                local_array[i - J[1] + 2] = u[i]
            end
            laplace_1D_serial(local_array)

        end
    end
end

A sample run on my laptop with 4 processors:
julia> u = zeros(1000); u[end] = 1.0; u_distributed = distribute(u);

julia> @time for i = 1:1000
 serial_iterate(u)
   end
elapsed time: 0.011452192 seconds (8300112 bytes allocated)

julia> @time for i = 1:1000
 u_distributed = parallel_iterate(u_distributed)
   end
elapsed time: 4.461922218 seconds (190565036 bytes allocated, 10.17% gc 
time)

Thanks for your help!

Cheers,
Amuthan
 



Re: [julia-users] Micro-benchmarking Vandermonde matrix generation

2015-01-03 Thread Joshua Adelman


On Saturday, January 3, 2015 11:56:20 PM UTC-5, Jiahao Chen wrote:
>
>
> On Sat, Jan 3, 2015 at 11:27 PM, Joshua Adelman wrote:
>
>> PS - maybe it's my experience with numpy, but I wonder if any thought has 
>> been given to following more of a numpy-style convention of always 
>> returning a view of an array when possible rather than a copy? 
>
>
> Search the Github issues for ArrayViews.
>
> All 3 versions of your Julia code have assignments of the form
>
> v2[:, k] = p
>
> which could be faster if you explicitly devectorize.
>
> Thanks,
>
> Jiahao Chen
> Staff Research Scientist
> MIT Computer Science and Artificial Intelligence Laboratory
>

Hi Jiahao,

Actually, when I devectorized vander2 as suggested (explicitly looping over 
the first dimension as the inner-most loop), cases where N < 100 are indeed 
faster, but for N > 100 the devectorized code is slower.

Josh 


Re: [julia-users] DomainError

2015-01-03 Thread Jiahao Chen
It is difficult to tell without seeing the code, but this error is probably
being triggered because you calculated sin(x) or x^y somewhere where the
inputs were not NaN but the answer was NaN.
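
For example (illustrative, not taken from the original script), both of 
these raise a DomainError on 0.3:

julia> (-2.0)^0.5      # negative base with a non-integer exponent
ERROR: DomainError
 ...

julia> sin(Inf)        # non-finite argument to sin
ERROR: DomainError
 ...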

Thanks,

Jiahao Chen
Staff Research Scientist
MIT Computer Science and Artificial Intelligence Laboratory


Re: [julia-users] Micro-benchmarking Vandermonde matrix generation

2015-01-03 Thread Jiahao Chen
On Sat, Jan 3, 2015 at 11:27 PM, Joshua Adelman 
wrote:

> PS - maybe it's my experience with numpy, but I wonder if any thought has
> been given to following more of a numpy-style convention of always
> returning a view of an array when possible rather than a copy?


Search the Github issues for ArrayViews.

All 3 versions of your Julia code have assignments of the form

v2[:, k] = p

which could be faster if you explicitly devectorize.
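
i.e., replace the vectorized column assignment with an explicit loop, 
something along these lines (a sketch only; v2, p and the surrounding 
k loop are the names from the notebook):

for i = 1:size(v2, 1)
    @inbounds v2[i, k] = p[i]
end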

Thanks,

Jiahao Chen
Staff Research Scientist
MIT Computer Science and Artificial Intelligence Laboratory


Re: [julia-users] Scope of Arrays

2015-01-03 Thread Jiahao Chen
On Sat, Jan 3, 2015 at 10:07 AM, Howard  wrote:

> a = test


You may find John Myles White's blog post on this topic helpful:

http://www.johnmyleswhite.com/notebook/2014/09/06/values-vs-bindings-the-map-is-not-the-territory/
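
In short: `a = test` binds a second name to the same array, so mutating 
through `a` shows up in `test`. A minimal illustration (not from the 
original post):

test = [1]
a = test          # no copy: a and test are two bindings to one array
a[1] = 99
println(test[1])  # prints 99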

Thanks,

Jiahao Chen
Staff Research Scientist
MIT Computer Science and Artificial Intelligence Laboratory


[julia-users] DomainError

2015-01-03 Thread jspark

Hi,

I am running one page of script but I get the following DomainError:

ERROR: DomainError in sin at math.jl:122 in ^ at math.jl:255

Is the error on my side or in Julia?

Thanks.




[julia-users] Micro-benchmarking Vandermonde matrix generation

2015-01-03 Thread Joshua Adelman
I'm a long time Python/Numpy user and am starting to play around with Julia 
a bit. To get a handle on the language and how to write fast code, I've 
been implementing some simple functions and then trying to performance tune 
them. One such experiment involved generating Vandermonde matrices and then 
comparing timings vs the method that numpy supplies (np.vander). The code 
for two simple implementations that I wrote, plus the method supplied by 
MatrixDepot.jl are in this notebook along with timings for each and the 
corresponding timing for the numpy method on the same machine. 

http://nbviewer.ipython.org/gist/synapticarbors/26910166ab775c04c47b

Generally Julia fares pretty well against numpy here, but does not 
necessarily match or beat it over all array sizes. The methods I wrote are 
similar to the numpy implementation and are typically faster than what's in 
MatrixDepot.jl, but I was hoping someone with a bit more experience in 
Julia might have some further tricks that would be educational to see. I've 
already looked at the Performance Tips section of the documentation, and I 
think I'm following best practices. 

Suggestions and feedback are appreciated as always.

Josh

PS - maybe it's my experience with numpy, but I wonder if any thought has 
been given to following more of a numpy-style convention of always 
returning a view of an array when possible rather than a copy? This is 
often a source of confusion for new numpy users, but once you realize that 
you generally get a view whenever the slice or transformation can be 
represented by a fixed stride through the data, it's pretty intuitive.


[julia-users] Scope of Arrays

2015-01-03 Thread Howard
I have two snippets of code which I think should behave exactly the same, 
but they do not.  In the first snippet, the variable "test" only changes in 
the "i" loop.  In the second snippet, the variable "test" changes in the j 
loop which makes no sense to me.  The only difference in the two sets of 
code is the second and fifth lines in each set.

Any help would be much appreciated.

##  Snippet 1: Correct
for i in 1:4
  test = i 
  for j in 1:3
a = test
a=j
println("j loop: a $a; test $test")
  end
  println("i loop: test $test")
end

## Snippet 2: Incorrect??
for i in 1:4
  test = [i] 
  for j in 1:3
a = test
a[1]=j
println("j loop: a $a; test $test")
  end
  println("i loop: test $test")
end


Thanks,


Re: [julia-users] Memory Allocation Question

2015-01-03 Thread Christoph Ortner
Dear Tim,
This will definitely fix it - thank you. I did not realise it could be done 
this way.
As usual - thank you for all your help. 
Christoph

On Sunday, 4 January 2015 01:56:40 UTC, Tim Holy wrote:
>
> The most julian way of doing this is to use the dimensionality (often 
> called 
> N) as a parameter, either for the type or functions of the type: 
>
> type MyContainer{T,N} 
> data::Array{T,N} 
> end 
>
> There are a ton of examples of this kind of trick in base; reading through 
> those files is a great resource. 
>
> --Tim 
>
> On Saturday, January 03, 2015 02:19:34 PM Christoph Ortner wrote: 
> > Dear All, 
> > 
> > Thank you so much for the various hints and tips. After re-reading the 
> > performance Tips I've now found all the problems with the code: in the 
> end 
> > there were two type-instabilities, one that was easy to resolve (declare 
> an 
> > array to be Int instead of Integer) but the other is still a problem for 
> me 
> > so I'd love to get more feedback on this. (Initially I was looking for 
> the 
> > wrong thing as I hadn't realised that type-instability can cause 
> unneeded 
> > memory allocation.) 
> > 
> > So my remaining issue is this. To get the code to run efficiently I had 
> to 
> > tell the compile that a certain array is 2-dimensional. 
> > 
> >I = geom.I::Array{Int, 2} 
> > 
> > In the type for geom, it is only declared as I::Array{Int}, because it 
> > could in fact be 1-, 2- or 3-dimensional. At the moment, I see two ways 
> to 
> > fix this: 
> >  * write three functions, and a wrapper which checks for the correct 
> > dimension. (Or possibly wrap my head around meta-programming and do it 
> this 
> > way. This would probably have the advantage that I could also unroll the 
> > inner loops and get some additional factors) 
> >  * replace I with a one-dimensional array and resolve the 
> multi-dimensional 
> > indexing manually, probably some slow-down because of checking the 
> dimension 
> > 
> > BUT, is there a quicker/easier/more readable fix? 
> > 
> > Many thanks, 
> > Christoph 
> > 
> > 
> > 
> > function evalSiteFunction!(sp::SiteLinear, geom::tbgeom, 
> >Y::Array{Float64,2}, 
> >Frc::Array{Float64,2}) 
> > # extract dimension information (d1, d2 \in \{2, 3\}) 
> > I = geom.I::Array{Int, 2} 
> > d1, d2, nneig = size(sp.coeffs) 
> > # assert type of the index set over which we are looping 
> > for jX in geom["iMM"]::Array{Int, 1} 
> > # initialise force to 0 
> > Frc[:,jX] = 0.0 
> > # loop over neighbouring sites 
> > @inbounds for n in 1:nneig 
> > # get the X-index of the current neighbour 
> > kX = getI(I, geom.X2I, jX, sp.stencil, n) 
> > # evaluate the term (devectorized for performance) 
> > for i = 1:d1 
> > Frc[i, jX] = sp.coeffs[i, 1, n] * Y[1, kX] 
> > for j = 2:d2 
> > Frc[i,jX] += sp.coeffs[i, j, n] * Y[j, kX] 
> > end 
> > end 
> > end 
> > end 
> > end 
>
>

[julia-users] Distances colwise issue, broadcasting question

2015-01-03 Thread AVF
On Friday afternoon, this code was working:

using Distances

a = rand(10,2)
b = rand(10,2)

colwise(Euclidean(), a', b')

Tonight it's not:

`colwise` has no method matching colwise(::Euclidean, ::Array{Float64,2}, 
::Array{Float64,2})


 I did run Pkg.update() in between, so maybe something changed?

Also, is there a way yet to do broadcasting? I.e. comparing a point pt = 
rand(1, 2) to an array a = rand(10, 2)? Thanks...
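
For the broadcasting part, one option -- a sketch assuming the Distances.jl 
convention of observations stored in columns -- is pairwise, which compares 
every column of one matrix against every column of another:

using Distances

a  = rand(10, 2)                    # 10 points, one per row
pt = rand(1, 2)                     # a single point

d = pairwise(Euclidean(), pt', a')  # 1x10 matrix: distance from pt to each point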


Re: [julia-users] Memory Allocation Question

2015-01-03 Thread Tim Holy
The most julian way of doing this is to use the dimensionality (often called 
N) as a parameter, either for the type or functions of the type:

type MyContainer{T,N}
data::Array{T,N}
end

There are a ton of examples of this kind of trick in base; reading through 
those files is a great resource.
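
For instance, a minimal usage sketch (illustrative only; everything besides
the MyContainer definition above is made up):

type MyContainer{T,N}
    data::Array{T,N}
end

# methods parameterized on N are compiled separately for each
# dimensionality, so everything inside is concretely typed
mysum{T,N}(c::MyContainer{T,N}) = sum(c.data)

c1 = MyContainer(rand(5))      # MyContainer{Float64,1}
c2 = MyContainer(rand(3, 3))   # MyContainer{Float64,2}
mysum(c1); mysum(c2)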

--Tim

On Saturday, January 03, 2015 02:19:34 PM Christoph Ortner wrote:
> Dear All,
> 
> Thank you so much for the various hints and tips. After re-reading the
> performance Tips I've now found all the problems with the code: in the end
> there were two type-instabilities, one that was easy to resolve (declare an
> array to be Int instead of Integer) but the other is still a problem for me
> so I'd love to get more feedback on this. (Initially I was looking for the
> wrong thing as I hadn't realised that type-instability can cause unneeded
> memory allocation.)
> 
> So my remaining issue is this. To get the code to run efficiently I had to
> tell the compile that a certain array is 2-dimensional.
> 
>I = geom.I::Array{Int, 2}
> 
> In the type for geom, it is only declared as I::Array{Int}, because it
> could in fact be 1-, 2- or 3-dimensional. At the moment, I see two ways to
> fix this:
>  * write three functions, and a wrapper which checks for the correct
> dimension. (Or possibly wrap my head around meta-programming and do it this
> way. This would probably have the advantage that I could also unroll the
> inner loops and get some additional factors)
>  * replace I with a one-dimensional array and resolve the multi-dimensional
> indexing manually, probably some slow-down because of checking the dimension
> 
> BUT, is there a quicker/easier/more readable fix?
> 
> Many thanks,
> Christoph
> 
> 
> 
> function evalSiteFunction!(sp::SiteLinear, geom::tbgeom,
>Y::Array{Float64,2},
>Frc::Array{Float64,2})
> # extract dimension information (d1, d2 \in \{2, 3\})
> I = geom.I::Array{Int, 2}
> d1, d2, nneig = size(sp.coeffs)
> # assert type of the index set over which we are looping
> for jX in geom["iMM"]::Array{Int, 1}
> # initialise force to 0
> Frc[:,jX] = 0.0
> # loop over neighbouring sites
> @inbounds for n in 1:nneig
> # get the X-index of the current neighbour
> kX = getI(I, geom.X2I, jX, sp.stencil, n)
> # evaluate the term (devectorized for performance)
> for i = 1:d1
> Frc[i, jX] = sp.coeffs[i, 1, n] * Y[1, kX]
> for j = 2:d2
> Frc[i,jX] += sp.coeffs[i, j, n] * Y[j, kX]
> end
> end
> end
> end
> end



[julia-users] Re: Julia for Enterprise?

2015-01-03 Thread i . costigan
Totally agree with Tobias. Personally, I wouldn't recommend using Julia in 
my corporate environment...it's too bleeding edge at the moment...but I'm 
loving the features, esp. less dev time without a significant performance hit 
relative to other languages like C(++). I'd say 1.0 might be where it becomes 
self-recommending...

On Thursday, 1 January 2015 20:04:11 UTC+11, Tobias Knopp wrote:
>
> Eric (and Keno),
>
> My statement that Julia "is from and for researchers" has been made in a 
> certain context where I wanted to explain why Julia has a different 
> development model than a programming language that is development within 
> Google.
>
> My personal opinion is that Julia is a great general purpose language that 
> will be very interesting beyond researchers. I have worked in companies and 
> believe that Julia has a great potential for
> - reducing development time
> - generating maintainable code
>
> Because I believe in this I have worked on embedding Julia in C/C++ which 
> also could be an option for your business (see the embedding chapter in the 
> docs).
>
> A better statement might be "Julia is currently developed by many 
> researcher and used by many researcher but is absolutely not limited to 
> research"
>
> Cheers,
>
> Tobi
>
>
>
> Am Donnerstag, 1. Januar 2015 07:13:24 UTC+1 schrieb Eric Forgy:
>>
>> Hi everyone,
>>
>> Happy New Year!
>>
>> I briefly introduced myself and what I'm trying to do here.
>>
>> I saw that Stefan gave a nice answer to the question "Is Julia ready for 
>> production use?"
>> over on Quora. However, being ready for production is one thing and being 
>> ready for use in an enterprise application for large conservative financial 
>> institutions that undergo audits by regulators, etc., might be another. 
>>
>> A comment in this group was made yesterday, "Julia is from and for 
>> researchers."
>> I notice there are quite a number of researchers developing Julia, but 
>> naturally there is a much smaller team of core developers that seem to work 
>> very well together. If this small team disintegrated for some reason, e.g. 
>> find jobs, etc., I'm not sure Julia would have the escape velocity to 
>> develop into a mature enough language for the kind of applications I have 
>> in mind.
>>
>> I am bootstrapping a startup so I need to be careful how I allocate my 
>> time and resources. I don't mind being a little cutting edge, but I would 
>> have to consider the likelihood that Julia reaches at least a "first 
>> version" 1.0.
>>
>> So can I ask for some honest advice? With the obvious caveats understood, 
>> how far away is a "1.0"? How long can the core team continue its dedication 
>> to the development of Julia? Will Julia remain "from and for researchers" 
>> indefinitely? Can you envision Julia being used in large enterprise 
>> financial applications?
>>
>> Thank you for any words of wisdom.
>>
>> Best regards,
>> Eric
>>
>

Re: [julia-users] printf float64 as hex/raw

2015-01-03 Thread Jameson Nash
Julia uses MOTION_HINT / gdk_event_request_motions as recommended by the
Gtk/Gdk documentation (
https://developer.gnome.org/gdk2/stable/gdk2-Events.html#gdk-event-request-motions)
by default. Not sure why this would cause issues for you however.

I could try against other machines, if you would like. I tested on a 64-bit
mac with all Pkg dependencies installed from MacPorts.


On Sat Jan 03 2015 at 1:11:21 PM Andreas Lobinger 
wrote:

> Stranger.
>
> i now moved my C example to gtk3 and get also something like:
>
> 100,064026 102,490265
>> 100,064026 101,490265
>> 100,064026 100,490265
>> 100,064026 99,490265
>> 100,064026 98,490265
>> 100,064026 97,490265
>> 100,064026 96,490265
>> 100,064026 95,490265
>> 101,064026 94,490265
>> 101,064026 91,490265
>> 102,064026 88,490265
>> 103,064026 83,490265
>> 103,064026 79,490265
>> 104,064026 76,490265
>> 104,064026 75,490265
>> 105,064026 74,490265
>> 105,064026 73,490265
>>
>>
> I'll try to scan the gtk3 docs to see if that's expected behaviour. The scribble
> example looks like the recommended way is to work with MOTION_HINT only and
> then use gdk_window_get_pointer.
>
>>


Re: [julia-users] Re: Disabling type instability in non-global scope

2015-01-03 Thread Jameson Nash
I agree. I've made improvements to the inliner to the point where doing the
proposed call-site splitting is straightforward. I've also been working on
an on-stack allocator to remove the cost of boxing the types.

Additionally, there are many cases (like macros, deserialize, and Expr
objects) where type uncertainty ahead of time is generally both
necessary and desirable.
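
As a concrete illustration of the kind of instability being discussed and
the usual function-barrier workaround (a sketch, not code from this thread):

function unstable()
    x = rand() < 0.5 ? 1 : 1.0    # x is Union(Int,Float64): type-unstable
    s = x
    for i = 1:10^6
        s += x                    # loop is compiled for the uncertain type
    end
    s
end

function kernel(x, n)             # barrier: specialized on typeof(x)
    s = x
    for i = 1:n
        s += x
    end
    s
end
stable() = kernel(rand() < 0.5 ? 1 : 1.0, 10^6)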

On Sat Jan 03 2015 at 12:02:44 PM Simon Kornblith 
wrote:

> This may be a bit greedy, but I'd rather that type instability were less
> of a performance bottleneck. There are some optimizations that could be
> done to address this:
>
>    - Specialize loops inside functions (#5654). This would solve most
>      cases where there is type instability due to reading from files.
>    - Better optimization of union types. In particular, try not to box
>      them and call functions directly with a branch instead of via
>      jl_apply_generic. I discussed this a bit in the Nullable thread, and
>      Jameson implemented call-site inlining at some point but found much
>      of the overhead was due to boxing.
>    - Henchmen unrolling, although I'm not sure how much of a performance
>      boost this would provide on top of better optimization of union types.
>    - Inline caching for cases that aren't addressed by the above
>      optimizations. Since there will still be boxing involved, this might
>      need better GC to be useful.
> These are all large projects and may not happen any time soon. OTOH,
> modern JS engines implement many of them, so they don't seem impossible.
>
>
> Simon
>
>
> On Friday, January 2, 2015 2:25:36 PM UTC-5, Ariel Keselman wrote:
>>
>> Hi, I just want to discuss this idea: type instability in functions is a
>> source of slowness, and in fact there are several tools to catch instances
>> of it. I would even say that using type instability in functions is
>> considered bad style.
>>
>> The most important use case for type instability seems to be allowing a
>> good interactive experience in the REPL. Now since work in the REPL is
>> always in global scope, I think disabling type instability in functions
>> would not change this interactive experience. Then it would give us several
>> serious advantages:
>>
>> 1. No type instability slowness to chase, a few less tools to maintain
>>
>> 2. Since types in functions are stable, they could be statically type
>> checked just before compilation (not definition). So if you try to run a
>> function that calls some non-existent method, you'll get an error at
>> compile time.
>>
>> I don't like to call julia dynamic, I prefer interactive. And I realise
>> there are many subtleties here and this is really not that easy to
>> implement, but maybe julia could be the first interactive and statically
>> typed language. Hope I'm not being too greedy ;)
>>
>> Also look at the crystal language, they use some techniques similar to
>> those used in julia to do global type inference. They achieve fast compiled
>> programs without ever having to type a thing. And of course you can still
>> get type errors at compile time and some good tooling like statically typed
>> languages. They miss though the interactivity at global scope.
>>
>> Thoughts?
>
>


Re: [julia-users] Memory Allocation Question

2015-01-03 Thread Christoph Ortner
Tim: thank you, this fixed the allocation test!
Christoph

> As an aside: for the fast version, @time claims only 13888 bytes have been
> allocated, but track-allocation claims that one particular line allocated
> 2480888 bytes???

My guess is you're being fooled by allocation that occurs during 
compilation. 
Try this sequence: 

#load your code 
using BlahBlah 
# Use your code 
# Here I'm assuming this initializes the object A and runs mytest(A), 
# perhaps among other things 
include("runtests.jl") 

clear_malloc_data() 
mytest(A) 

exit() 

The clear_malloc_data erases any allocation that occurs previously, so that 
when you quit (which triggers the writing of the *.mem files), it only 
counts 
things that occurred when you ran the JIT-compiled `mytest`. 


[julia-users] Re: How quickly subtract the two large arrays ?

2015-01-03 Thread elextr


On Sunday, January 4, 2015 4:28:06 AM UTC+10, paul analyst wrote:
>
> THX
> I have not :/ but I can make it in parts!
>

If the arrays won't fit in memory it probably doesn't matter what Julia 
does, the IO or paging time will dominate.

Cheers
Lex

 

>
> How can I simply use parallel for this? I have 8 procs, but only 1 is working.
> Paul
>
>
>
>
> W dniu piątek, 15 sierpnia 2014 11:53:54 UTC+2 użytkownik Billou Bielour 
> napisał:
>>
>> This might be a bit faster:
>>
>> function sub!(A,B,C)
>> for j=1:size(A,2)
>> for i=1:size(A,1)
>> @inbounds C[i,j] = A[i,j] - B[i,j]
>> end
>> end
>> end
>>
>> C = zeros(size(A));
>> sub!(A,B,C)
>>
>> Do you have enough RAM to store these matrices though ? 10^5 * 10^5 
>> Float64 seems rather large.
>>
>>
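
For the parallel question quoted above, a rough sketch assuming workers
added via addprocs and SharedArrays (so the data is visible to all
processes):

# addprocs(7)   # start 7 extra worker processes first

function psub!(A::SharedArray, B::SharedArray, C::SharedArray)
    @sync @parallel for j = 1:size(A, 2)   # columns split across workers
        for i = 1:size(A, 1)
            @inbounds C[i, j] = A[i, j] - B[i, j]
        end
    end
    C
end

# A = SharedArray(Float64, (10^4, 10^4))
# B = SharedArray(Float64, size(A)); C = SharedArray(Float64, size(A))
# psub!(A, B, C)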

Re: [julia-users] Memory Allocation Question

2015-01-03 Thread Christoph Ortner
Dear All,

Thank you so much for the various hints and tips. After re-reading the 
performance Tips I've now found all the problems with the code: in the end 
there were two type-instabilities, one that was easy to resolve (declare an 
array to be Int instead of Integer) but the other is still a problem for me 
so I'd love to get more feedback on this. (Initially I was looking for the 
wrong thing as I hadn't realised that type-instability can cause unneeded 
memory allocation.)

So my remaining issue is this. To get the code to run efficiently I had to 
tell the compiler that a certain array is 2-dimensional.

   I = geom.I::Array{Int, 2}

In the type for geom, it is only declared as I::Array{Int}, because it 
could in fact be 1-, 2- or 3-dimensional. At the moment, I see two ways to 
fix this:
 * write three functions, and a wrapper which checks for the correct 
dimension. (Or possibly wrap my head around meta-programming and do it this 
way. This would probably have the advantage that I could also unroll the 
inner loops and get some additional factors)
 * replace I with a one-dimensional array and resolve the multi-dimensional 
indexing manually, probably some slow-down because of checking the dimension

BUT, is there a quicker/easier/more readable fix?

Many thanks,
Christoph



function evalSiteFunction!(sp::SiteLinear, geom::tbgeom,
                           Y::Array{Float64,2},
                           Frc::Array{Float64,2})
    # extract dimension information (d1, d2 \in \{2, 3\})
    I = geom.I::Array{Int, 2}
    d1, d2, nneig = size(sp.coeffs)
    # assert type of the index set over which we are looping
    for jX in geom["iMM"]::Array{Int, 1}
        # initialise force to 0
        Frc[:,jX] = 0.0
        # loop over neighbouring sites
        @inbounds for n in 1:nneig
            # get the X-index of the current neighbour
            kX = getI(I, geom.X2I, jX, sp.stencil, n)
            # evaluate the term (devectorized for performance)
            for i = 1:d1
                Frc[i, jX] = sp.coeffs[i, 1, n] * Y[1, kX]
                for j = 2:d2
                    Frc[i,jX] += sp.coeffs[i, j, n] * Y[j, kX]
                end
            end
        end
    end
end




Re: [julia-users] Re: How to heapify an array?

2015-01-03 Thread Rodolfo Santana
Thanks a lot Viral, that was very helpful!!!

I quit Julia and restarted it and Collections.heapify! is working for me 
now. I am not sure what happened.
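
For reference, a minimal usage sketch of the Collections heap functions on 0.3:

julia> x = rand(10);

julia> Collections.heapify!(x);   # reorders x in place into min-heap order

julia> Collections.heappop!(x)    # removes and returns the smallest element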

Tim, I downloaded Julia from here:
http://julialang.org/downloads/

Thanks a lot to everyone for replying, I really appreciate it!

-Rodolfo



On Saturday, January 3, 2015 9:11:04 AM UTC-6, Tim Holy wrote:
>
> It seems that you have a borked build of julia. How did you get it? If you 
> built from source, what happens when you try `make testall`? 
>
> --Tim 
>
> On Saturday, January 03, 2015 04:14:55 AM Rodolfo Santana wrote: 
> > Hi Tim, 
> > 
> > Thanks for the reply! Yes, I realize this is what 'heapify!' does. 
> > 
> > I first do x= rand(10). Then, I try Collections.heapify!(x) and I get 
> the 
> > error: 
> > 
> > ERROR: heapify! not defined 
> > 
> > Maybe someone can post a screenshot of how to use this function in 
> Julia? I 
> > couldn't find an example on the web using this function. 
> > 
> > Thanks, 
> > 
> > -Rodolfo 
> > 
> > On Saturday, January 3, 2015 5:40:22 AM UTC-6, Tim Holy wrote: 
> > > Also post any error messages, etc. 
> > > 
> > > You realize that `heapify!` returns an array, just with elements in a 
> > > different 
> > > order? The smallest element will be first. 
> > > 
> > > --Tim 
> > > 
> > > On Saturday, January 03, 2015 03:12:39 AM Rodolfo Santana wrote: 
> > > > Here it is: 
> > > > 
> > > > Julia Version 0.3.4 
> > > > 
> > > > Commit 3392026* (2014-12-26 10:42 UTC) 
> > > > 
> > > > Platform Info: 
> > > >   System: Darwin (x86_64-apple-darwin13.4.0) 
> > > >   
> > > >   CPU: Intel(R) Core(TM) i7 CPU   M 620  @ 2.67GHz 
> > > >   
> > > >   WORD_SIZE: 64 
> > > >   
> > > >   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Nehalem) 
> > > >   
> > > >   LAPACK: libopenblas 
> > > >   
> > > >   LIBM: libopenlibm 
> > > >   
> > > >   LLVM: libLLVM-3.3 
> > > > 
> > > > On Saturday, January 3, 2015 5:06:42 AM UTC-6, ele...@gmail.com 
> wrote: 
> > > > > Strange, works for me, maybe post the whole versioninfo() output 
> that 
> > > 
> > > the 
> > > 
> > > > > experts can look at. 
> > > > > 
> > > > > Cheers 
> > > > > Lex 
> > > > > 
> > > > > On Saturday, January 3, 2015 8:58:21 PM UTC+10, Rodolfo Santana 
> wrote: 
> > > > >> Hi Lex, 
> > > > >> 
> > > > >> I am using Version 0.3.4 
> > > > >> 
> > > > >> On Saturday, January 3, 2015 4:55:06 AM UTC-6, ele...@gmail.com 
> > > 
> > > wrote: 
> > > > >>> Whats your versioninfo() 
> > > > >>> 
> > > > >>> On Saturday, January 3, 2015 8:53:30 PM UTC+10, Rodolfo Santana 
> > > 
> > > wrote: 
> > > >  Thanks for the reply! I have tried that, but I get: 
> > > >  
> > > >  ERROR: heapify! not defined 
> > > >  
> > > >  On Saturday, January 3, 2015 4:48:27 AM UTC-6, Ivar Nesje 
> wrote: 
> > > > > *Collections.heapify!(x)* 
> > > > > 
> > > > > kl. 11:22:24 UTC+1 lørdag 3. januar 2015 skrev Rodolfo Santana 
> > > > > 
> > > > > følgende: 
> > > > >> Let's say I have an array x=rand(10) . How do I use the 
> heapify! 
> > > > >> function to heapify x? 
> > > > >> 
> > > > >> Thanks! 
> > > > >> -Rodolfo 
>
>

Re: [julia-users] docs for switchers and students

2015-01-03 Thread ivo welch
thank you, isaiah.  I had skipped the video since I come from R, not
from python.  I started watching it.  it's indeed pretty good.  (I am
old school---I usually google-and-read rather than watch-video.)
[thanks, david.]

I tried contributors() in julialang, but that didn't tell me who "our"
core team is.  it wasn't anywhere obvious on the website.  the reason
is that I wanted to know who of the four authors Bruce Tate, Fred
Daoud, Jack Moffitt, Ian Dees wrote the julia chapter.  eventually, I
found it on the book website.  so, I sent Jack Moffitt an email to ask
him whether they can sell just the julia chapter of their book to our
students separately.  I can't ask my non-CS students to purchase a
book about many programming languages.

everyone, wish me luck ;-)

regards,

/iaw



Ivo Welch (ivo.we...@gmail.com)
http://www.ivo-welch.info/
J. Fred Weston Distinguished Professor of Finance
Anderson School at UCLA, C519
Director, UCLA Anderson Fink Center for Finance and Investments
Free Finance Textbook, http://book.ivo-welch.info/
Exec Editor, Critical Finance Review, http://www.critical-finance-review.org/
Editor and Publisher, FAMe, http://www.fame-jagazine.com/


On Sat, Jan 3, 2015 at 9:29 AM, Isaiah Norton  wrote:
> There are a number of resources listed here:
>
> http://julialang.org/learning/
>
> In particular, David Sanders' tutorial videos are excellent (aimed at Python
> users but very accessible).
>
> Regarding migration guides, probably the main resource is the "Noteworthy
> differences" (from Mat/Py/R) section of the manual.
>
> On Sat, Jan 3, 2015 at 12:05 PM, ivo welch  wrote:
>>
>>
>> dear julia experts---I am about to start teaching my MFE class.  Mostly,
>> my students will do programming with data.  My transitioning plan is as
>> follows: This year, I am planning to allow using julia.  (R is the
>> standard.)  Next year, I am planning to encourage julia equally with R.  In
>> 2 years, I am planning to switch over to julia (0.5?).  I am expecting rough
>> edges esp on the debugging side. I plan to recommend a lot of
>> print-and-recompile statements.
>>
>> Now, my students already have backgrounds in different computer languages,
>> which could be anything from VBE to python to R to Matlab to whatever.  I
>> know I can point them to the pretty good docs on the julia website.
>>
>> * if there are teaching/learning resources for new students above and
>> beyond the standard julia docs on the web that you would recommend, could
>> you please let me know?
>> * if there are language migration guides that you would recommend, could
>> you please let me know?
>> * if there are quick-reference guides that you would recommend, could you
>> please let me know?
>>
>> (I also suggested having the help system inside julia help with
>> transitioning R by "?R.lm", but way too recently to make it in-time.  And
>> ?python.xxx.  and ?matlab.xxx.)
>>
>> regards, /iaw
>>
>


Re: [julia-users] Memory Allocation Question

2015-01-03 Thread Tim Holy
On Saturday, January 03, 2015 11:03:06 AM Christoph Ortner wrote:
> I then tried to use the @code_warntype  /  @code_typewarn  macro to see if
> this would detect something, but it seems this macro does not exist in my
> Julia installation?  Does this only come with 0.4?

Yes, only julia 0.4.

> As an aside: for the fast version, @time claims only 13888 bytes have been
> allocated, but track-allocation claims that one particular line allocated
> 2480888 bytes???

My guess is you're being fooled by allocation that occurs during compilation. 
Try this sequence:

#load your code
using BlahBlah
# Use your code
# Here I'm assuming this initializes the object A and runs mytest(A),
# perhaps among other things
include("runtests.jl")

clear_malloc_data()
mytest(A)

exit()

The clear_malloc_data erases any allocation that occurs previously, so that 
when you quit (which triggers the writing of the *.mem files), it only counts 
things that occurred when you ran the JIT-compiled `mytest`.

--Tim


[julia-users] Re: DataFrame vcat stack overflow

2015-01-03 Thread Guillaume Guy
Perfect. Thanks! 

On Friday, January 2, 2015 8:17:52 PM UTC-5, Sean Garborg wrote:
>
> Thanks for reporting -- it is a bug. Having a Array or DataArray with 
> NAtype as its eltype is a little awkward. Here's why it's causing you 
> trouble, and a couple alternatives:
>
> using DataFrames
> nrows = 3
> a = DataFrame(A = 1:nrows)
>
> # Column :A is all NA for all of these cases
> b1 = DataFrame(A = fill(NA, nrows))
> b2 = DataFrame(A = DataArray(Int, nrows))
> b3 = DataFrame(A = DataArray(None, nrows))
>
> vcat(a, b1) # ERROR: no method matching convert(::Type{Int64}, 
> ::DataArrays.NAtype)
> vcat(a, b2) # okay
> vcat(a, b3) # okay
>
> It should probably work as is (if not, I guess the promotion rules should 
> change, and the result should be of type Any or there should be a more 
> informative error).
>
> I opened an issue: https://github.com/JuliaStats/DataArrays.jl/issues/134, 
> but given that most interested developers are focused on coming up with an 
> replacement for DataArrays and NAtype, it may not get attention at the 
> moment, so I'd avoid creating that ambiguous array if possible for now.
>
>
>
> For your other question, conversion of columns, you'll generally use 
> functions from Base Julia or DataArrays.jl to transform data however you 
> like.
>
> Categorical variables are (for the moment) represented using 
> PooledDataArrays, so:
> pdata(abstract_array) or convert(PooledDataArray, abstract_array)
>
> And for strings:
> map(string, abstract_array) or convert(some_string_type, abstract_array)
>
>
> On Friday, January 2, 2015 3:05:31 PM UTC-7, Guillaume Guy wrote:
>>
>> Sean:
>>
>> I found the problem. Not sure if that is a "bug" per se.
>>
>> Looking at one element of the Array (which is subsequently vcat-ed):
>>
>> [inline screenshot of the DataFrame element omitted]
>>
>> Note the NA in the equipment column. When running my function 
>> (intermediary_point) on each row of my input dataframe, equipment (which is 
>> a String column) becomes NA of NAType. Then, the resulting dataframe (see 
>> above) has an equipment column type which is now NAtype.
>>
>> Anyway ... You end up with dfs that has some elements looking like that:
>>
>> 7-element Array{Type{T<:Top},1}:
>>  UTF8String
>>  NAtype
>>  UTF8String
>>  UTF8String
>>  Int64 
>>  Float64   
>>  Float64
>>
>>
>> and some elements with the correct type. The vcat returns a convert error 
>> trying to convert the NAtype into String.
>>
>>
>> Is it a bug? Shouldn't the vcat convert the NAType into String?  
>>
>>
>> Another question I have is about how to convert a column type within an 
>> existing dataframe. I'm looking for a Julia equivalent of R's *as.factor* 
>> or *as.string*. Alternatively, when running DataFrame(A=1:20,B=1:20), is 
>> there a way to specify what A and B should be? 
>>
>>
>> Thx! 
>>
>>
>>
>> On Wednesday, December 31, 2014 10:42:30 PM UTC-5, Sean Garborg wrote:
>>>
>>> If you Pkg.update() and try again, you should be fine. DataFrames was 
>>> overdue for a tagged release -- you'll get v0.6.0 which includes some 
>>> updates to vcat. As a gut check, this works just fine:
>>>
>>> using DataFrames
>>> dfs = [DataFrame(Float64, 15, 15) for _=1:200_000]
>>> vcat(dfs)
>>>
>>> (If it doesn't for you, definitely file an issue.)
>>>
>>> Happy New Year,
>>> Sean
>>>
>>> On Thursday, December 25, 2014 5:06:23 PM UTC-7, Guillaume Guy wrote:

 Hi David:

 That is where the stack overflow error is thrown.

 I attached the code + the data in my first post for your reference.


 On Thursday, December 25, 2014 6:59:57 PM UTC-5, David van Leeuwen 
 wrote:
>
> Hello Guillome, 
>
> On Monday, December 22, 2014 9:09:16 PM UTC+1, Guillaume Guy wrote:
>>
>> Dear Julia users:
>>
>> Coming from an R background, I like to work with lists of dataframes 
>> which I can reduce by doing do.call('rbind', list_of_df).
>>
>> After ~10 years of using R, I only recently learned of do.call(). 
>
> In Julia, you would say:
>
> vcat(dfs...)
>
> ---david
>  
>
>> In Julia, I attempted to use vcat for this purpose but I ran into 
>> trouble:
>>
>> "
>>
>> stack overflow
>> while loading In[29], in expression starting on line 1
>>
>> "
>>
>>
>> This operation is basically the vcat of a large vector v consisting 
>> of 68K small (11X7) dataframes. The code is attached.
>>
>> Thanks for your help! 
>>
>

[julia-users] Re: Memory Allocation Question

2015-01-03 Thread Michael Hatherly


I only caught that by wrapping all the testing code at the bottom in a 
function and running that instead. Is there any improvement from trying 
that?

— Mike


On Saturday, 3 January 2015 22:29:06 UTC+2, Christoph Ortner wrote:
>
> Hi Mike,
>
> What an embarrassing typo. Thank you for pointing this out. But 
> unfortunately this still doesn't solve my problem. I will delete my 
> previous posts to not overflow this discussion. When I've isolated the 
> problem with my original code I will post again.
>
> Thank you,
> Christoph
>


[julia-users] Re: Memory Allocation Question

2015-01-03 Thread Christoph Ortner
Hi Mike,

What an embarrassing typo. Thank you for pointing this out. But 
unfortunately this still doesn't solve my problem. I will delete my 
previous posts to not overflow this discussion. When I've isolated the 
problem with my original code I will post again.

Thank you,
Christoph


[julia-users] Re: Memory Allocation Question

2015-01-03 Thread Christoph Ortner
Hi Mike,

What an embarrassing typo. Thank you for pointing this out. But 
unfortunately this still doesn't solve my problem. I will delete my 
previous posts to not overflow this discussion. When I've isolated the 
problem with my original code I will post again.

Thank you,



On Saturday, 3 January 2015 20:22:24 UTC, Michael Hatherly wrote:
>
> coeffs in your second function is a global variable (used in the first 
> line of that function). If I pass that through in a similar way to the 
> first function then the timings are roughly the same for me.
>
> — Mike
> ​
>
>
> On Saturday, 3 January 2015 22:12:51 UTC+2, Christoph Ortner wrote:
>>
>> Dear Tim and Viral  (and others),
>>
>> The above code snippet is a slightly simplified version of my original 
>> code, supplied with random data. I wrote two versions: one with basic 
>> arrays and a second with custom types that contain these arrays. The result:
>>
>>
>> @time evalSiteFunction_without_type!(coeffs, stencil, J, Y, Frc)
>>
>> elapsed time: 0.01496518 seconds (13888 bytes allocated)
>>
>> @time evalSiteFunction!(sp, J, Y, Frc)
>>
>> elapsed time: 1.178802132 seconds (320878944 bytes allocated, 11.28% gc 
>> time)
>>
>> Is this a bug? Or am I missing something else? I wanted to try 
>> @code_warntype, but it seems to not exist in my Julia installation. Is it 
>> 0.4?
>>
>> Although I can probably create a temporary workaround for this, any 
>> further advise would be greatly appreciated.
>>
>> All the best,
>> Christoph
>>
>>

Re: [julia-users] Re: Memory Allocation Question

2015-01-03 Thread Christoph Ortner
> @code_warntype is in 0.4 master, I believe it got only introduced a few
> days ago.
>
> rene

ah - thanks for clarifying.  Anyhow, I now wrote another version, which to 
me definitely indicates a Julia bug - or if not a bug then at least very 
strange behaviour. Namely, if I simply create aliases of the arrays in the 
custom type, then it is again fast.

I would be curious to learn what the problem is?

 Christoph



evalSiteFunction_fix!(sp, J, Y, Frc)
@time evalSiteFunction_fix!(sp, J, Y, Frc)

elapsed time: 0.006182797 seconds (2088 bytes allocated)


function evalSiteFunction_fix!(sp::SiteLinear,
   J::Array{Int, 1},
   Y::Array{Float64,2},
   Frc::Array{Float64,2})
# extract dimension information (d1, d2 \in \{2, 3\})
locstencil = sp.stencil::Array{Int, 1}
loccoeffs = sp.coeffs::Array{Float64, 3}
d1, d2, nneig = size(loccoeffs)
# allocate some variables
kX = 0::Int; jX = 0::Int; n = 0::Int
# loop over sites
for jX in J# length(J) = 11268
# initialise force to 0
Frc[:,jX] = 0.0
# loop over neighbouring sites
for n in 1:nneig#  nneig = 37
# get the X-index of the current neighbour (highly simplified)
kX = jX + locstencil[n]
# evaluate the term (devectorized and unrolled for performance)
for i = 1:d1, j = 1:d2
  Frc[i,jX] += loccoeffs[i,j,n]*Y[j,kX]
end
end
end
end


[julia-users] Re: Memory Allocation Question

2015-01-03 Thread Michael Hatherly


coeffs in your second function is a global variable (used in the first line 
of that function). If I pass that through in a similar way to the first 
function then the timings are roughly the same for me.

— Mike


On Saturday, 3 January 2015 22:12:51 UTC+2, Christoph Ortner wrote:
>
> Dear Tim and Viral  (and others),
>
> The above code snippet is a slightly simplified version of my original 
> code, supplied with random data. I wrote two versions: one with basic 
> arrays and a second with custom types that contain these arrays. The result:
>
>
> @time evalSiteFunction_without_type!(coeffs, stencil, J, Y, Frc)
>
> elapsed time: 0.01496518 seconds (13888 bytes allocated)
>
> @time evalSiteFunction!(sp, J, Y, Frc)
>
> elapsed time: 1.178802132 seconds (320878944 bytes allocated, 11.28% gc 
> time)
>
> Is this a bug? Or am I missing something else? I wanted to try 
> @code_warntype, but it seems to not exist in my Julia installation. Is it 
> 0.4?
>
> Although I can probably create a temporary workaround for this, any 
> further advise would be greatly appreciated.
>
> All the best,
> Christoph
>
>

Re: [julia-users] Re: Memory Allocation Question

2015-01-03 Thread René Donner
Hi,

@code_warntype is in 0.4 master, I believe it got only introduced a few days 
ago.

rene


On 03.01.2015 at 21:12, Christoph Ortner wrote:

> Dear Tim and Viral  (and others),
> 
> The above code snippet is a slightly simplified version of my original code, 
> supplied with random data. I wrote two versions: one with basic arrays and a 
> second with custom types that contain these arrays. The result:
> 
>
> @time evalSiteFunction_without_type!(coeffs, stencil, J, Y, Frc)
> elapsed time: 0.01496518 seconds (13888 bytes allocated)
> 
> @time evalSiteFunction!(sp, J, Y, Frc)
> elapsed time: 1.178802132 seconds (320878944 bytes allocated, 11.28% gc time)
> 
> Is this a bug? Or am I missing something else? I wanted to try 
> @code_warntype, but it seems to not exist in my Julia installation. Is it 0.4?
> 
> Although I can probably create a temporary workaround for this, any further 
> advise would be greatly appreciated.
> 
> All the best,
> Christoph
> 



[julia-users] Re: Memory Allocation Question

2015-01-03 Thread Christoph Ortner
As an aside: I also tried to run with `track-allocation=user`

The strange thing is that it claims `@time 
evalSiteFunction_without_type!(coeffs, 
stencil, J, Y, Frc)` allocated 2480888 bytes instead of the 13888 bytes 
that @time shows. Bug, or just something I don't understand?

Best wishes,
 Christoph

-
- type SiteLinear
- stencil::Array{Int, 1}
- coeffs::Array{Float64, 3}
- end
-
-
- function evalSiteFunction_without_type!(coeffs::Array{Float64, 3},
-stencil::Array{Int, 1},
-J::Array{Int, 1},
-Y::Array{Float64,2},
-Frc::Array{Float64,2})
- # extract dimension information (d1, d2 \in \{2, 3\})
  2480888 d1, d2, nneig = size(coeffs)
- # allocate some variables
0 kX = 0::Int; jX = 0::Int; n = 0::Int
- # loop over sites
0 for jX in J# length(J) = 11268
- # initialise force to 0
0 Frc[:,jX] = 0.0
- # loop over neighbouring sites
0 for n in 1:nneig#  nneig = 37
- # get the X-index of the current neighbour (highly 
simplified)
0 kX = jX + stencil[n]
- # evaluate the term (devectorized and unrolled for 
performance)
- #  f[:] += slice(sp.coeffs, :, :, n) * Y[:, kX]
0 for i = 1:d1, j = 1:d2
0   Frc[i,jX] += coeffs[i,j,n]*Y[j,kX]
- end
- end
- end
- end
-
-
-
- function evalSiteFunction!(sp::SiteLinear,
-J::Array{Int, 1},
-Y::Array{Float64,2},
-Frc::Array{Float64,2})
- # extract dimension information (d1, d2 \in \{2, 3\})
 15841976 d1, d2, nneig = size(coeffs)
- # allocate some variables
0 kX = 0::Int; jX = 0::Int; n = 0::Int
- # loop over sites
  2492256 for jX in J# length(J) = 11268
- # initialise force to 0
0 Frc[:,jX] = 0.0
- # loop over neighbouring sites
116952768 for n in 1:nneig#  nneig = 37
- # get the X-index of the current neighbour (highly 
simplified)
 25515200 kX = jX + sp.stencil[n]
- # evaluate the term (devectorized and unrolled for 
performance)
- #  f[:] += slice(sp.coeffs, :, :, n) * Y[:, kX]
496803016 for i = 1:d1, j = 1:d2
  224   Frc[i,jX] += sp.coeffs[i,j,n]*Y[j,kX]
- end
- end
- end
- end
-
-
- # problem size
- nneigs = 18
- nsites = 11268
- dim = 2
-
- # data to be passed to function (simplified)
- coeffs = rand(dim,dim,2*nneigs+1)
- stencil = int(linspace(-nneigs, nneigs, 2*nneigs+1))
- Y = rand(2, nsites + 2*nneigs)
- Frc = zeros(2, nsites + 2*nneigs)
- J = int(linspace(nneigs+1, nsites+nneigs, nsites))
-
-
- evalSiteFunction_without_type!(coeffs, stencil, J, Y, Frc)
- @time evalSiteFunction_without_type!(coeffs, stencil, J, Y, Frc)
-
-
- # now try same with types
- sp = SiteLinear(stencil, coeffs)
- evalSiteFunction!(sp, J, Y, Frc)
- @time evalSiteFunction!(sp, J, Y, Frc)
-
 


[julia-users] Re: Memory Allocation Question

2015-01-03 Thread Christoph Ortner
Dear Tim and Viral  (and others),

The above code snippet is a slightly simplified version of my original 
code, supplied with random data. I wrote two versions: one with basic 
arrays and a second with custom types that contain these arrays. The result:

   
@time evalSiteFunction_without_type!(coeffs, stencil, J, Y, Frc)

elapsed time: 0.01496518 seconds (13888 bytes allocated)

@time evalSiteFunction!(sp, J, Y, Frc)

elapsed time: 1.178802132 seconds (320878944 bytes allocated, 11.28% gc 
time)

Is this a bug? Or am I missing something else? I wanted to try 
@code_warntype, but it seems to not exist in my Julia installation. Is it 
0.4?

Although I can probably create a temporary workaround for this, any further 
advise would be greatly appreciated.

All the best,
Christoph



[julia-users] Re: Memory Allocation Question

2015-01-03 Thread Christoph Ortner
# code for last post


type SiteLinear
stencil::Array{Int, 1}
coeffs::Array{Float64, 3}
end


function evalSiteFunction_without_type!(coeffs::Array{Float64, 3},
   stencil::Array{Int, 1},
   J::Array{Int, 1},
   Y::Array{Float64,2},
   Frc::Array{Float64,2})
# extract dimension information (d1, d2 \in \{2, 3\})
d1, d2, nneig = size(coeffs)
# allocate some variables
kX = 0::Int; jX = 0::Int; n = 0::Int
# loop over sites
for jX in J# length(J) = 11268
# initialise force to 0
Frc[:,jX] = 0.0
# loop over neighbouring sites
for n in 1:nneig#  nneig = 37
kX = jX + stencil[n]
for i = 1:d1, j = 1:d2
  Frc[i,jX] += coeffs[i,j,n]*Y[j,kX]
end
end
end
end


function evalSiteFunction!(sp::SiteLinear,
   J::Array{Int, 1},
   Y::Array{Float64,2},
   Frc::Array{Float64,2})
# extract dimension information (d1, d2 \in \{2, 3\})
d1, d2, nneig = size(coeffs)
# allocate some variables
kX = 0::Int; jX = 0::Int; n = 0::Int
# loop over sites
for jX in J# length(J) = 11268
# initialise output
Frc[:,jX] = 0.0
for n in 1:nneig#  nneig = 37
kX = jX + sp.stencil[n]
for i = 1:d1, j = 1:d2
  Frc[i,jX] += sp.coeffs[i,j,n]*Y[j,kX]
end
end
end
end


# problem size
nneigs = 18
nsites = 11268
dim = 2

# data to be passed to function (simplified)
coeffs = rand(dim,dim,2*nneigs+1)
stencil = int(linspace(-nneigs, nneigs, 2*nneigs+1))
Y = rand(2, nsites + 2*nneigs)
Frc = zeros(2, nsites + 2*nneigs)
J = int(linspace(nneigs+1, nsites+nneigs, nsites))

# timing without types
evalSiteFunction_without_type!(coeffs, stencil, J, Y, Frc)
@time evalSiteFunction_without_type!(coeffs, stencil, J, Y, Frc)

# now try same with types
sp = SiteLinear(stencil, coeffs)
evalSiteFunction!(sp, J, Y, Frc)
@time evalSiteFunction!(sp, J, Y, Frc)
 


[julia-users] Re: Memory Allocation Question

2015-01-03 Thread Christoph Ortner
# Code for last post. Christoph



type SiteLinear
stencil::Array{Int, 1}
coeffs::Array{Float64, 3}
end


function evalSiteFunction_without_type!(coeffs::Array{Float64, 3},
   stencil::Array{Int, 1},
   J::Array{Int, 1},
   Y::Array{Float64,2},
   Frc::Array{Float64,2})
# extract dimension information (d1, d2 \in \{2, 3\})
d1, d2, nneig = size(coeffs)
# allocate some variables
kX = 0::Int; jX = 0::Int; n = 0::Int
# loop over sites
for jX in J# length(J) = 11268
# initialise force to 0
Frc[:,jX] = 0.0
# loop over neighbouring sites
for n in 1:nneig#  nneig = 37
# get the X-index of the current neighbour (highly simplified)
kX = jX + stencil[n]
# evaluate the term (devectorized and unrolled for performance)
#  f[:] += slice(sp.coeffs, :, :, n) * Y[:, kX]
for i = 1:d1, j = 1:d2
  Frc[i,jX] += coeffs[i,j,n]*Y[j,kX]
end
end
end
end



function evalSiteFunction!(sp::SiteLinear,
   J::Array{Int, 1},
   Y::Array{Float64,2},
   Frc::Array{Float64,2})
# extract dimension information (d1, d2 \in \{2, 3\})
d1, d2, nneig = size(coeffs)
# allocate some variables
kX = 0::Int; jX = 0::Int; n = 0::Int
# loop over sites
for jX in J# length(J) = 11268
# initialise force to 0
Frc[:,jX] = 0.0
# loop over neighbouring sites
for n in 1:nneig#  nneig = 37
# get the X-index of the current neighbour (highly simplified)
kX = jX + sp.stencil[n]
# evaluate the term (devectorized and unrolled for performance)
#  f[:] += slice(sp.coeffs, :, :, n) * Y[:, kX]
for i = 1:d1, j = 1:d2
  Frc[i,jX] += sp.coeffs[i,j,n]*Y[j,kX]
end
end
end
end


# problem size
nneigs = 18
nsites = 11268
dim = 2

# data to be passed to function (simplified)
coeffs = rand(dim,dim,2*nneigs+1)
stencil = int(linspace(-nneigs, nneigs, 2*nneigs+1))
Y = rand(2, nsites + 2*nneigs)
Frc = zeros(2, nsites + 2*nneigs)
J = int(linspace(nneigs+1, nsites+nneigs, nsites))


evalSiteFunction_without_type!(coeffs, stencil, J, Y, Frc)
@time evalSiteFunction_without_type!(coeffs, stencil, J, Y, Frc)


# now try same with 
sp = SiteLinear(stencil, coeffs)
evalSiteFunction!(sp, J, Y, Frc)
@time evalSiteFunction!(sp, J, Y, Frc)
 









Re: [julia-users] Memory Allocation Question

2015-01-03 Thread Christoph Ortner
Dear Tim and Viral (and others),

The allocation check just showed that memory is being allocated all over 
the place (seemingly for no reason?), so I went ahead and simplified the 
function a bit to supply it with some random data. This is printed below. 

It turns out that if I pass standard arrays instead of custom types, then 
there is no problem at all; I get the performance I expect.

Without custom types:

elapsed time: 0.01496518 seconds (13888 bytes allocated)


With custom types:

elapsed time: 1.178802132 seconds (320878944 bytes allocated, 11.28% gc 
time)

I then tried to use the @code_warntype  /  @code_typewarn  macro to see if 
this would detect something, but it seems this macro does not exist in my 
Julia installation?  Does this only come with 0.4?

As an aside: for the fast version, @time claims only 13888 bytes have been 
allocated, but track-allocation claims that one particular line allocated 
2480888 bytes??? 

The .mem output is printed below. I will post a clean version in a separate 
post. I can work around this issue for now, but is it possible this is a 
bug? Or am I missing something else here? Many thanks for your help so far,
   Christoph



- type SiteLinear
- stencil::Array{Int, 1}
- coeffs::Array{Float64, 3}
- end
- 
- 
- function evalSiteFunction_without_type!(coeffs::Array{Float64, 3},
-stencil::Array{Int, 1},
-J::Array{Int, 1},
-Y::Array{Float64,2},
-Frc::Array{Float64,2})
- # extract dimension information (d1, d2 \in \{2, 3\})
  2480888 d1, d2, nneig = size(coeffs)
- # allocate some variables
0 kX = 0::Int; jX = 0::Int; n = 0::Int
- # loop over sites
0 for jX in J# length(J) = 11268
- # initialise force to 0
0 Frc[:,jX] = 0.0
- # loop over neighbouring sites
0 for n in 1:nneig#  nneig = 37
- # get the X-index of the current neighbour (highly 
simplified)
0 kX = jX + stencil[n]
- # evaluate the term (devectorized and unrolled for 
performance)
- #  f[:] += slice(sp.coeffs, :, :, n) * Y[:, kX]
0 for i = 1:d1, j = 1:d2
0   Frc[i,jX] += coeffs[i,j,n]*Y[j,kX]
- end
- end
- end
- end
- 
- 
- 
- function evalSiteFunction!(sp::SiteLinear,
-J::Array{Int, 1},
-Y::Array{Float64,2},
-Frc::Array{Float64,2})
- # extract dimension information (d1, d2 \in \{2, 3\})
 15841976 d1, d2, nneig = size(coeffs)
- # allocate some variables
0 kX = 0::Int; jX = 0::Int; n = 0::Int
- # loop over sites
  2492256 for jX in J# length(J) = 11268
- # initialise force to 0
0 Frc[:,jX] = 0.0
- # loop over neighbouring sites
116952768 for n in 1:nneig#  nneig = 37
- # get the X-index of the current neighbour (highly 
simplified)
 25515200 kX = jX + sp.stencil[n]
- # evaluate the term (devectorized and unrolled for 
performance)
- #  f[:] += slice(sp.coeffs, :, :, n) * Y[:, kX]
496803016 for i = 1:d1, j = 1:d2
  224   Frc[i,jX] += sp.coeffs[i,j,n]*Y[j,kX]
- end
- end
- end
- end
- 
- 
- # problem size
- nneigs = 18
- nsites = 11268
- dim = 2
- 
- # data to be passed to function (simplified)
- coeffs = rand(dim,dim,2*nneigs+1)
- stencil = int(linspace(-nneigs, nneigs, 2*nneigs+1))
- Y = rand(2, nsites + 2*nneigs)
- Frc = zeros(2, nsites + 2*nneigs)
- J = int(linspace(nneigs+1, nsites+nneigs, nsites))
- 
- 
- evalSiteFunction_without_type!(coeffs, stencil, J, Y, Frc)
- @time evalSiteFunction_without_type!(coeffs, stencil, J, Y, Frc)
- 
- 
- # now try same with types
- sp = SiteLinear(stencil, coeffs)
- evalSiteFunction!(sp, J, Y, Frc)
- @time evalSiteFunction!(sp, J, Y, Frc)


 


Re: [julia-users] Re: Package name for embedding R within Julia

2015-01-03 Thread lgautier
I agree.
RCall does provide consistency, although at the possible slight cost of 
boring conformity, and seems a better choice than RStats.

On Saturday, January 3, 2015 8:31:42 AM UTC-5, Viral Shah wrote:
>
> I prefer Rcall.jl, for consistency with ccall, PyCall, JavaCall, etc. 
> Also, once in JuliaStats, it will probably also be well advertised and 
> documented - so finding it should not be a challenge, IMO.
>
> -viral
>
> On Saturday, January 3, 2015 10:16:51 AM UTC+5:30, Ismael VC wrote:
>>
>> +1 for RStats.jl, I also think it's more search-friendly but not only for 
>> people coming from R.
>>
>> On Fri, Jan 2, 2015 at 9:50 PM, Gray Calhoun wrote:
>>
>>> That sounds great! Rcall.jl or RCall.jl are fine names; RStats.jl may be 
>>> slightly more search-friendly for people coming from R, since they may not 
>>> know about PyCall.
>>>
>>>
>>> On Friday, January 2, 2015 1:59:04 PM UTC-6, Douglas Bates wrote:

 For many statistics-oriented Julia users there is a great advantage in 
 being able to piggy-back on R development and to use at least the data 
 sets 
 from R packages.  Hence the RDatasets package and the read_rda function in 
 the DataFrames package for reading saved R data.

 Over the last couple of days I have been experimenting with running an 
 embedded R within Julia and calling R functions from Julia. This is 
 similar 
 in scope to the Rif package except that this code is written in Julia and 
 not as a set of wrapper functions written in C. The R API is a C API and, 
 in some ways, very simple. Everything in R is represented as a "symbolic 
 expression" or SEXPREC and passed around as pointers to such expressions 
 (called an SEXP type).  Most functions take one or more SEXP values as 
 arguments and return an SEXP.

 I have avoided reading the code for Rif for two reasons:
  1. It is GPL3 licensed
  2. I already know a fair bit of the R API and where to find API 
 function signatures.

 Here's a simple example
 julia> initR()
 1

 julia> globalEnv = unsafe_load(cglobal((:R_GlobalEnv,libR),SEXP),1)
 Ptr{Void} @0x08c1c388

 julia> formaldehyde = tryEval(install(:Formaldehyde))
 Ptr{Void} @0x08fd1d18

 julia> inherits(formaldehyde,"data.frame")
 true

 julia> printValue(formaldehyde)
   carb optden
 1  0.1  0.086
 2  0.3  0.269
 3  0.5  0.446
 4  0.6  0.538
 5  0.7  0.626
 6  0.9  0.782

 julia> length(formaldehyde)
 2

 julia> names(formaldehyde)
 2-element Array{ASCIIString,1}:
  "carb"  
  "optden"

 julia> form1 = ccall((:VECTOR_ELT,libR),SEXP,
 (SEXP,Cint),formaldehyde,0)
 Ptr{Void} @0x0a5baf58

 julia> ccall((:TYPEOF,libR),Cint,(SEXP,),form1)
 14

 julia> carb = copy(pointer_to_array(ccall((:
 REAL,libR),Ptr{Cdouble},(SEXP,),form1),length(form1)))
 6-element Array{Float64,1}:
  0.1
  0.3
  0.5
  0.6
  0.7
  0.9

 julia> form2 = ccall((:VECTOR_ELT,libR),SEXP,
 (SEXP,Cint),formaldehyde,1)
 Ptr{Void} @0x0a5baef0

 julia> ccall((:TYPEOF,libR),Cint,(SEXP,),form2)
 14

 julia> optden = copy(pointer_to_array(ccall((:
 REAL,libR),Ptr{Cdouble},(SEXP,),form2),length(form2)))
 6-element Array{Float64,1}:
  0.086
  0.269
  0.446
  0.538
  0.626
  0.782


 A call to printValue uses the R printing mechanism.

 Questions:
  - What would be a good name for such a package?  In the spirit of 
 PyCall it could be RCall or Rcall perhaps.

  - Right now I am defining several functions that emulate the names of 
 functions in R itself or in the R API.  What is a good balance?  Obviously 
 it would not be a good idea to bring in all the names in the R base 
 namespace.  On the other hand, those who know names like "inherits" and 
 what it means in R will find it convenient to have such names in such a 
 package.

 - Should I move the discussion to the julia-stats list?


>>

Re: [julia-users] Re: bug in jacobisymbol Combinatorics.jl

2015-01-03 Thread Jiahao Chen
Yes, I've already included the change:

https://github.com/jiahao/Combinatorics.jl/commit/5e3d1b34adac5c7dfc2bd45d5515b34c87d6c9b0

A new version with this fix has been tagged as v0.1.2.


Re: [julia-users] Re: What does (1,2,(3,4)...) mean?

2015-01-03 Thread René Donner
I see, thanks a lot!

On 03.01.2015 at 19:23, Ivar Nesje wrote:

> See https://github.com/JuliaLang/julia/issues/4869
> 
> On Saturday, January 3, 2015 at 19:19:26 UTC+1, René Donner wrote:
> Hi, 
> 
> I wanted to append the tuple (3,4) to the tuple (1,2), expecting (1,2,3,4) as 
> output: 
> 
>   julia> a = (1,2,(3,4)...) 
>   (1,2,(3,4)...) 
> 
> It turns out I have to use tuple(1,2,(3,4)...) == (1,2,3,4). 
> 
> I understand (1,2,3,4) and (1,2,(3,4)), but what does the output 
> "(1,2,(3,4)...)", which has type (Int64,Int64,DataType), actually mean and 
> what is it used for? 
> 
> Thanks! 
> 
> Rene 
> 
> 
> 



[julia-users] Re: How quickly subtract the two large arrays ?

2015-01-03 Thread paul analyst
Thanks!
I don't :/ but I can do it in parts!

How can I simply use parallel for this? I have 8 processors, but only 1 is working.
Paul




On Friday, August 15, 2014 at 11:53:54 UTC+2, Billou Bielour wrote:
>
> This might be a bit faster:
>
> function sub!(A,B,C)
> for j=1:size(A,2)
> for i=1:size(A,1)
> @inbounds C[i,j] = A[i,j] - B[i,j]
> end
> end
> end
>
> C = zeros(size(A));
> sub!(A,B,C)
>
> Do you have enough RAM to store these matrices though ? 10^5 * 10^5 
> Float64 seems rather large.
>
>
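
One rough way to spread that loop over the 8 processes (an untested sketch 
using SharedArrays, so every worker sees the same memory; the size n is 
illustrative):

addprocs(7)                        # master process plus 7 workers

n = 10^4
A = SharedArray(Float64, (n, n))
B = SharedArray(Float64, (n, n))
C = SharedArray(Float64, (n, n))
# ... fill A and B here ...

@sync @parallel for j = 1:size(A, 2)
    for i = 1:size(A, 1)
        @inbounds C[i, j] = A[i, j] - B[i, j]
    end
end

@parallel splits the column range across the workers, and @sync waits until 
they are all done.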

Re: [julia-users] Re: bug in jacobisymbol Combinatorics.jl

2015-01-03 Thread lapeyre . math122a
Great, thanks. Cint makes more sense. Also, the same change needs to be 
made for legendresymbol.


On Saturday, January 3, 2015 7:11:19 PM UTC+1, Jiahao Chen wrote:
>
> Thanks for the report and the fix.
>
> I've updated the code and strengthened the test; the only change was to 
> use Cint instead of Int32 for consistency with our C calling code:
>
>
> http://docs.julialang.org/en/release-0.3/manual/calling-c-and-fortran-code/#type-correspondences
>
> I've also enabled the issue tracker; didn't realize it was off.
>
> On Sat Jan 03 2015 at 12:52:41 PM > 
> wrote:
>
>> Changing the line that calls libgmp like this ( Int is replaced with 
>> Int32 )
>>
>>  return convert(Int,ccall((:__gmpz_jacobi, :libgmp), Int32,
>>
>> gives correct results. The C header and code says the return type is 
>> 'int'.
>> All values that should be -1, come out 4294967295.  This may be system
>> dependent, but I only have one laptop available at the moment.
>>
>>
>>
>> On Saturday, January 3, 2015 6:44:40 PM UTC+1, lapeyre@gmail.com 
>> wrote:
>>>
>>> This is not correct
>>>
>>> julia> jacobisymbol(10,7)
>>> 4294967295
>>>
>>> This happens in v0.3 and v0.4
>>> I can send more information, and have a possible fix. I tried to find a 
>>> way to make a comment or issue or something at
>>> https://github.com/jiahao/Combinatorics.jl,
>>> but was unable to find a button for it.  Better to talk before issuing a 
>>> PR.
>>>
>>> --John
>>>
>>>

[julia-users] Re: What does (1,2,(3,4)...) mean?

2015-01-03 Thread Ivar Nesje
See https://github.com/JuliaLang/julia/issues/4869

On Saturday, January 3, 2015 at 19:19:26 UTC+1, René Donner wrote:
>
> Hi, 
>
> I wanted to append the tuple (3,4) to the tuple (1,2), expecting (1,2,3,4) 
> as output: 
>
>   julia> a = (1,2,(3,4)...) 
>   (1,2,(3,4)...) 
>
> It turns out I have to use tuple(1,2,(3,4)...) == (1,2,3,4). 
>
> I understand (1,2,3,4) and (1,2,(3,4)), but what does the output 
> "(1,2,(3,4)...)", which has type (Int64,Int64,DataType), actually mean and 
> what is it used for? 
>
> Thanks! 
>
> Rene 
>
>
>
>

Re: [julia-users] Disabling type instability in non-global scope

2015-01-03 Thread Mike Innes
Right, so you have the high-level, dynamic part of the language for basic
interactive use and the harder to use, fully typed part for the "real"
coding.

Except that already exists – you can just use Python and C. The split
between high and low-level is exactly the problem that Julia was designed
to solve.

Bear in mind that this goes well beyond a few type hints. If you wanted
this, you'd have to enforce containers always having concrete elements, for
example. If it ever made sense to have a container hold multiple types
(e.g. in expression trees, markdown etc.), you'd have to write a container
type like Nullable for every possible combination, at which point you're
just emulating dynamic typing in a ridiculously fiddly way – and you've
lost any extensibility you had to boot.

I actually can't think of a single language that enforces perfect type
stability (and static typing is not the same thing). Haskell comes close,
but it's not exactly renowned for its ease of use, which is a priority in
Julia's design.

If you really believe in this, I encourage you to have a go at implementing
something like Markdown.jl in a 100% type-stable way. If you can do it
without tripling the code size, halving the flexibility and gaining only a
marginal performance benefit, I'll relent, but right now I don't see it.

On 3 January 2015 at 14:17, Ariel Keselman  wrote:

>
> There is type-instability only at global scope, that's the point. So when
> you are using the repl (or just using the global scope) you won't have to
> write types. My suggestion is only to disallow type-instability at inner
> scopes so they become fully statically typed; the next step would be to
> statically check functions before compilation, a real static check just
> like you would get in say Java or or Haskell.
>
> The advantages are way more than just linting. You would catch whole
> categories of problems at copile time, before your code actually runs.
> Imagine finding a problem after your code runs for hrs, or a web server
> crash in production due to some untested code these things happen. This
> would allow way better tooling: the tooling that statically typed languages
> have (think Java, C#) is wa better than that of dynamically typed
> languages. And I say this even after considering PyCharm. The tools for
> statically typed languages allow for safe refactorings, find usages,
> definitions, automated documentation which is really sync'd with the code
> and much more.
>
> above I was just trying to demonstrate how some problems that currently
> use dynamic typing could easily be converted to be type stable, and hence
> statically typed. Yes, with the solution that I demonstrated above, when
> using a file name known only at runtime inside a function, you would have
> to write down the expected types, which doesn't seem like too much effort:
>
> myvec = load_vector_from_hdf5_file(filename, vecname, Float64)
>
> But now you get the advantage that the following code would fail at
> compile time:
>
> for i, v in enumerate(myvec)
> println("element number "*string(i)*" is "*myvec[i])
> end
>
> Of course if you're using the repl you could just use the macro, or if you
> want to do more automation at global scope just write a macro instead of a
> function.
>
> Seems to me that the advantages of type stable inner scopes easily
> outweighs the inconvenience of having to write a few rare types. Julia can
> be really be the 1st interactive language with statically typed guarantees
> and tooling. BTW With good enough tooling the IDE could suggest the HDF5
> vector type so in the long run this would represent really little cognitive
> effort from the programmer.
>
>
>


Re: [julia-users] Re: bug in jacobisymbol Combinatorics.jl

2015-01-03 Thread Ivar Nesje
The issue tracker is off for Github forks. If the package has moved and is 
maintained at @jiahao's fork, he should break the fork relation.

On Saturday, January 3, 2015 at 19:11:19 UTC+1, Jiahao Chen wrote:
>
> Thanks for the report and the fix.
>
> I've updated the code and strengthened the test; the only change was to 
> use Cint instead of Int32 for consistency with our C calling code:
>
>
> http://docs.julialang.org/en/release-0.3/manual/calling-c-and-fortran-code/#type-correspondences
>
> I've also enabled the issue tracker; didn't realize it was off.
>
> On Sat Jan 03 2015 at 12:52:41 PM > 
> wrote:
>
>> Changing the line that calls libgmp like this ( Int is replaced with 
>> Int32 )
>>
>>  return convert(Int,ccall((:__gmpz_jacobi, :libgmp), Int32,
>>
>> gives correct results. The C header and code says the return type is 
>> 'int'.
>> All values that should be -1, come out 4294967295.  This may be system
>> dependent, but I only have one laptop available at the moment.
>>
>>
>>
>> On Saturday, January 3, 2015 6:44:40 PM UTC+1, lapeyre@gmail.com 
>> wrote:
>>>
>>> This is not correct
>>>
>>> julia> jacobisymbol(10,7)
>>> 4294967295
>>>
>>> This happens in v0.3 and v0.4
>>> I can send more information, and have a possible fix. I tried to find a 
>>> way to make a comment or issue or something at
>>> https://github.com/jiahao/Combinatorics.jl,
>>> but was unable to find a button for it.  Better to talk before issuing a 
>>> PR.
>>>
>>> --John
>>>
>>>

[julia-users] What does (1,2,(3,4)...) mean?

2015-01-03 Thread René Donner
Hi, 

I wanted to append the tuple (3,4) to the tuple (1,2), expecting (1,2,3,4) as 
output:

  julia> a = (1,2,(3,4)...)
  (1,2,(3,4)...)

It turns out I have to use tuple(1,2,(3,4)...) == (1,2,3,4).

I understand (1,2,3,4) and (1,2,(3,4)), but what does the output 
"(1,2,(3,4)...)", which has type (Int64,Int64,DataType), actually mean and what 
is it used for?

Thanks!

Rene





[julia-users] Re: I don't understand this `convert` error.

2015-01-03 Thread Ismael VC
Thank you very much Milan!

On Saturday, January 3, 2015 at 10:29:08 UTC-6, Ismael VC wrote:
>
> I'm trying to port this Python class:
>
> class Env(dict):
> "An environment: a dict of {'var':val} pairs, with an outer Env."
> def __init__(self, parms=(), args=(), outer=None):
> self.update(zip(parms, args))
> self.outer = outer
> def find(self, var):
> "Find the innermost Env where var appears."
> return self if (var in self) else self.outer.find(var)
>
> Haven't even got to the `find` method yet! :( ...this is what I've done:
>
> Environment type:
>
> julia> type Env
>data::Dict
>outer::Nullable{Env}
>Env() = new(Dict(), Nullable{Env}())
>end
>
>
> Outer constructor:
>
> julia> function Env(parms::Tuple, args::Tuple, outer::Env)
>Env(Dict(zip(parms, args)), Nullable(outer))
>end
> Env
>
>
> So far so good:
>
> julia> parms = (:foo, :bar, :baz);
>  
> julia> args = (1, 2, 3);
>  
> julia> outer = Env()
> Env(Dict{Any,Any}(),Nullable{Env}())
>
>
> I've spent a lot of time trying to understand this, without success:
>
> julia> inner = Env(parms, args, outer)
> ERROR: `convert` has no method matching convert(::Type{Env}, 
> ::(Symbol,Symbol,Symbol), ::(Int32,Int32,Int32), ::Env)
>  in call at base.jl:35
>
>
> I'll have to go to work before soon, I know this'll be itching my head all 
> day long! :P
>
> What am I doing wrong? or How many things am I doing wrong? :O
>
> Thanks in advance!
>


Re: [julia-users] printf float64 as hex/raw

2015-01-03 Thread Andreas Lobinger
Stranger.

i now moved my C example to gtk3 and get also something like:

100,064026 102,490265
> 100,064026 101,490265
> 100,064026 100,490265
> 100,064026 99,490265
> 100,064026 98,490265
> 100,064026 97,490265
> 100,064026 96,490265
> 100,064026 95,490265
> 101,064026 94,490265
> 101,064026 91,490265
> 102,064026 88,490265
> 103,064026 83,490265
> 103,064026 79,490265
> 104,064026 76,490265
> 104,064026 75,490265
> 105,064026 74,490265
> 105,064026 73,490265
>
>
I'll try to scan the gtk3 docs to see if that's expected behaviour. The scribble 
example looks like the recommended way is to work with MOTION_HINT only and 
then use gdk_window_get_pointer.

>

[julia-users] Re: I don't understand this `convert` error.

2015-01-03 Thread Ismael VC
If forgot to mention I'm using this version:

julia> versioninfo()
Julia Version 0.4.0-dev+2403
Commit dae10ab (2015-01-02 13:56 UTC)
Platform Info:
  System: Linux (i686-pc-linux-gnu)
  CPU: Intel(R) Atom(TM) CPU N570   @ 1.66GHz
  WORD_SIZE: 32
  BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Atom)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3




On Saturday, January 3, 2015 at 10:29:08 UTC-6, Ismael VC wrote:
>
> I'm trying to port this Python class:
>
> class Env(dict):
> "An environment: a dict of {'var':val} pairs, with an outer Env."
> def __init__(self, parms=(), args=(), outer=None):
> self.update(zip(parms, args))
> self.outer = outer
> def find(self, var):
> "Find the innermost Env where var appears."
> return self if (var in self) else self.outer.find(var)
>
> Haven't even got to the `find` method yet! :( ...this is what I've done:
>
> Environment type:
>
> julia> type Env
>data::Dict
>outer::Nullable{Env}
>Env() = new(Dict(), Nullable{Env}())
>end
>
>
> Outer constructor:
>
> julia> function Env(parms::Tuple, args::Tuple, outer::Env)
>Env(Dict(zip(parms, args)), Nullable(outer))
>end
> Env
>
>
> So far so good:
>
> julia> parms = (:foo, :bar, :baz);
>  
> julia> args = (1, 2, 3);
>  
> julia> outer = Env()
> Env(Dict{Any,Any}(),Nullable{Env}())
>
>
> I've spent a lot of time trying to understand this, without success:
>
> julia> inner = Env(parms, args, outer)
> ERROR: `convert` has no method matching convert(::Type{Env}, 
> ::(Symbol,Symbol,Symbol), ::(Int32,Int32,Int32), ::Env)
>  in call at base.jl:35
>
>
> I'll have to go to work before soon, I know this'll be itching my head all 
> day long! :P
>
> What am I doing wrong? or How many things am I doing wrong? :O
>
> Thanks in advance!
>


Re: [julia-users] Re: bug in jacobisymbol Combinatorics.jl

2015-01-03 Thread Jiahao Chen
Thanks for the report and the fix.

I've updated the code and strengthened the test; the only change was to use
Cint instead of Int32 for consistency with our C calling code:

http://docs.julialang.org/en/release-0.3/manual/calling-c-and-fortran-code/#type-correspondences

I've also enabled the issue tracker; didn't realize it was off.

On Sat Jan 03 2015 at 12:52:41 PM  wrote:

> Changing the line that calls libgmp like this ( Int is replaced with Int32
> )
>
>  return convert(Int,ccall((:__gmpz_jacobi, :libgmp), Int32,
>
> gives correct results. The C header and code says the return type is 'int'.
> All values that should be -1, come out 4294967295.  This may be system
> dependent, but I only have one laptop available at the moment.
>
>
>
> On Saturday, January 3, 2015 6:44:40 PM UTC+1, lapeyre@gmail.com
> wrote:
>>
>> This is not correct
>>
>> julia> jacobisymbol(10,7)
>> 4294967295
>>
>> This happens in v0.3 and v0.4
>> I can send more information, and have a possible fix. I tried to find a
>> way to make a comment or issue or something at
>> https://github.com/jiahao/Combinatorics.jl,
>> but was unable to find a button for it.  Better to talk before issuing a
>> PR.
>>
>> --John
>>
>>


Re: [julia-users] printf float64 as hex/raw

2015-01-03 Thread Andreas Lobinger
Let's say I didn't doubt Julia here, but rather suspect another library 
confusion issue (I already had one on my 'old' system). What system do you 
run, esp. which gtk, gdk, and glib versions?

On Saturday, January 3, 2015 6:41:41 PM UTC+1, Jameson wrote:
>
> Strange. The code you posted above works for me on Julia0.2 and Julia0.3
>
> You can use `reinterpret` to cast the bits of one numeric type into 
> another.
>
> On Sat Jan 03 2015 at 10:09:44 AM Andreas Lobinger  > wrote:
>
>> Hello colleagues,
>>
>> most probably i overlooked it, how can i output a float (float64, double 
>> etc) in hex form (somehow raw)?
>>
>> (longer story ...
>> I'm doubting some output of a Gtk callback function which should report 
>> screen coordinates in float64, but values as integers e.g. 2.0 3.0 etc. A 
>> Gtk programm in C reports via printf exactly this, while in julia Gtk i get
>>
>> m 0.588928 3.045471
>> m 1.588928 3.045471
>> m 2.588928 3.045471
>> m 2.588928 2.045471
>> m 3.588928 2.045471
>> m 4.588928 2.045471
>> m 5.588928 2.045471
>> m 5.588928 3.045471
>> m 6.588928 7.045471
>> m 8.588928 9.045471
>> m 8.588928 10.045471)
>>
>> with
>>
>> julia> using Gtk
>>
>> julia> c = Gtk.@GtkCanvas(200,200);
>>
>> julia> w = Gtk.@GtkWindow(c);
>>
>> julia> show(c);
>>
>> julia> c.mouse.motion = (w,e) -> begin
>>@printf("m %f %f\n", e.x, e.y)
>>end
>>
>>
>>
>>
>> Wishing a happy day,
>>Andreas
>>
>>

[julia-users] Re: bug in jacobisymbol Combinatorics.jl

2015-01-03 Thread lapeyre . math122a
Changing the line that calls libgmp like this ( Int is replaced with Int32 )
   
 return convert(Int,ccall((:__gmpz_jacobi, :libgmp), Int32,

gives correct results. The C header and code says the return type is 'int'.
All values that should be -1, come out 4294967295.  This may be system
dependent, but I only have one laptop available at the moment.
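
Spelled out, the fixed call would look roughly like the sketch below. Note that 
this is hypothetical: the argument list is guessed from how Base wraps GMP's 
mpz_t as BigInt, not copied from Combinatorics.jl; only the Cint return type is 
the actual fix.

function jacobisymbol(a::Integer, n::Integer)
    ba, bn = big(a), big(n)
    return convert(Int, ccall((:__gmpz_jacobi, :libgmp), Cint,
                              (Ptr{BigInt}, Ptr{BigInt}), &ba, &bn))
end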


On Saturday, January 3, 2015 6:44:40 PM UTC+1, lapeyre@gmail.com wrote:
>
> This is not correct
>
> julia> jacobisymbol(10,7)
> 4294967295
>
> This happens in v0.3 and v0.4
> I can send more information, and have a possible fix. I tried to find a 
> way to make a comment or issue or something at
> https://github.com/jiahao/Combinatorics.jl,
> but was unable to find a button for it.  Better to talk before issuing a 
> PR.
>
> --John
>
>

[julia-users] bug in jacobisymbol Combinatorics.jl

2015-01-03 Thread lapeyre . math122a
This is not correct

julia> jacobisymbol(10,7)
4294967295

This happens in v0.3 and v0.4
I can send more information, and have a possible fix. I tried to find a way 
to make a comment or issue or something at
https://github.com/jiahao/Combinatorics.jl,
but was unable to find a button for it.  Better to talk before issuing a PR.

--John



Re: [julia-users] printf float64 as hex/raw

2015-01-03 Thread Jameson Nash
Strange. The code you posted above works for me on Julia0.2 and Julia0.3

You can use `reinterpret` to cast the bits of one numeric type into another.
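
For example, a quick sketch (0.3 spelling, where the unsigned type is still 
called Uint64):

julia> x = 3.045471;

julia> hex(reinterpret(Uint64, x), 16)   # raw IEEE 754 pattern as 16 hex digits

julia> bits(x)                           # same pattern as a 64-character bit string

and in the callback one could print the raw pattern directly, e.g.
@printf("m %016x %016x\n", reinterpret(Uint64, e.x), reinterpret(Uint64, e.y)).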

On Sat Jan 03 2015 at 10:09:44 AM Andreas Lobinger 
wrote:

> Hello colleagues,
>
> most probably i overlooked it, how can i output a float (float64, double
> etc) in hex form (somehow raw)?
>
> (longer story ...
> I'm doubting some output of a Gtk callback function which should report
> screen coordinates in float64, but values as integers e.g. 2.0 3.0 etc. A
> Gtk programm in C reports via printf exactly this, while in julia Gtk i get
>
> m 0.588928 3.045471
> m 1.588928 3.045471
> m 2.588928 3.045471
> m 2.588928 2.045471
> m 3.588928 2.045471
> m 4.588928 2.045471
> m 5.588928 2.045471
> m 5.588928 3.045471
> m 6.588928 7.045471
> m 8.588928 9.045471
> m 8.588928 10.045471)
>
> with
>
> julia> using Gtk
>
> julia> c = Gtk.@GtkCanvas(200,200);
>
> julia> w = Gtk.@GtkWindow(c);
>
> julia> show(c);
>
> julia> c.mouse.motion = (w,e) -> begin
>@printf("m %f %f\n", e.x, e.y)
>end
>
>
>
>
> Wishing a happy day,
>Andreas
>
>


Re: [julia-users] Memory Allocation Question

2015-01-03 Thread Christoph Ortner
Dear both - many thanks for suggestions. 

This code snippet is a small piece taken from about 2000 lines of code, so 
it will take me a little while to create a self-sufficient code snippet. 
I'll try Tim's suggestion first then try to extract the relevant pieces.

Thanks again,
Christoph




Re: [julia-users] docs for switchers and students

2015-01-03 Thread Isaiah Norton
There are a number of resources listed here:

http://julialang.org/learning/

In particular, David Sanders' tutorial videos are excellent (aimed at
Python users but very accessible).

Regarding migration guides, probably the main resource is the "Noteworthy
differences" (from Mat/Py/R) section of the manual.

On Sat, Jan 3, 2015 at 12:05 PM, ivo welch  wrote:

>
> dear julia experts---I am about to start teaching my MFE class.  Mostly,
> my students will do programming with data.  My transitioning plan is as
> follows: This year, I am planning to allow using julia.  (R is the
> standard.)  Next year, I am planning to encourage julia equally with R.  In
> 2 years, I am planning to switch over to julia (0.5?).  I am expecting
> rough edges esp on the debugging side. I plan to recommend a lot of
> print-and-recompile statements.
>
> Now, my students already have backgrounds in different computer languages,
> which could be anything from VBE to python to R to Matlab to whatever.  I
> know I can point them to the pretty good docs on the julia website.
>
> * if there are teaching/learning resources for new students above and
> beyond the standard julia docs on the web that you would recommend, could
> you please let me know?
> * if there are language migration guides that you would recommend, could
> you please let me know?
> * if there are quick-reference guides that you would recommend, could you
> please let me know?
>
> (I also suggested having the help system inside julia help with
> transitioning R by "?R.lm", but way too recently to make it in-time.  And
> ?python.xxx.  and ?matlab.xxx.)
>
> regards, /iaw
>
>


[julia-users] docs for switchers and students

2015-01-03 Thread ivo welch

dear julia experts---I am about to start teaching my MFE class.  Mostly, my 
students will do programming with data.  My transitioning plan is as 
follows: This year, I am planning to allow using julia.  (R is the 
standard.)  Next year, I am planning to encourage julia equally with R.  In 
2 years, I am planning to switch over to julia (0.5?).  I am expecting 
rough edges esp on the debugging side. I plan to recommend a lot of 
print-and-recompile statements.

Now, my students already have backgrounds in different computer languages, 
which could be anything from VBE to python to R to Matlab to whatever.  I 
know I can point them to the pretty good docs on the julia website.

* if there are teaching/learning resources for new students above and 
beyond the standard julia docs on the web that you would recommend, could 
you please let me know?
* if there are language migration guides that you would recommend, could 
you please let me know?
* if there are quick-reference guides that you would recommend, could you 
please let me know?

(I also suggested having the help system inside julia help with 
transitioning R by "?R.lm", but way too recently to make it in-time.  And 
?python.xxx.  and ?matlab.xxx.)

regards, /iaw



[julia-users] Re: Disabling type instability in non-global scope

2015-01-03 Thread Simon Kornblith
This may be a bit greedy, but I'd rather that type instability were less of 
a performance bottleneck. There are some optimizations that could be done 
to address this:

   - Specialize loops inside functions (#5654). This would solve most 
   cases where there is type instability due to reading from files.
   - Better optimization of union types. In particular, try not to box them 
   and call functions directly with a branch instead of via jl_apply_generic. 
   I discussed this a bit in the Nullable thread, and at some point Jameson 
   implemented call site inlining but found much of the overhead was due to 
   boxing.
   - Henchmen unrolling, although I'm not sure how much of a performance 
   boost this would provide on top of better optimization of union types.
   - Inline caching for cases that aren't addressed by the above 
   optimizations. Since there will still be boxing involved, this might need 
   better GC to be useful.

These are all large projects and may not happen any time soon. OTOH, modern 
JS engines implement many of them, so they don't seem impossible.

Simon

On Friday, January 2, 2015 2:25:36 PM UTC-5, Ariel Keselman wrote:
>
> Hi, I Just want to discuss this idea: type instability in functions is a 
> source of slowness, and in fact there are several tools to catch instances 
> of it. I would even say that using type instability in functions is 
> considered bad style. 
>   
> the most important use case for type instability seems to be allowing a 
> good interactive experience in the  Repl. Now since work in the repl is 
> always in global scope I think disabling type instability in functions 
> would not change this interactive experience. Then it would give us several 
> serious advantages: 
>
> 1. No type instability slowness to chase, a few less tools to maintain 
>
> 2. Since types in functions are stable, They could be statically type 
> checked just before compilation (not definition). So ifnyou try to run a 
> function that calls some non existent method you'll get an error on compile 
> time 
>
> I don't like to call julia dynamic, I prefer interactive. And I realise 
> there are many subtleties here and this is really not that easy to 
> implement, but maybe julia could be the first interactive and statically 
> typed language. Hope I'm not being too greedy ;) 
>
> Also look at the crystal language, they use some techniques similar to 
> those used in julia to do global type inference. They achieve fast compiled 
> programs without ever having to type a thing. And of course you can still 
> get type errors at compile time and some good tooling like statically typed 
> languages. They miss though the interactivity at global scope. 
>
> Thoughts?



[julia-users] Is there a julia user group in New York area?

2015-01-03 Thread Tony Fong
I'm planning a trip in late Jan. It'd be nice to be able to connect.

Tony


Re: [julia-users] Disabling type instability in non-global scope

2015-01-03 Thread Tim Holy
Not saying it's impossible, but these might make interesting reading:

https://github.com/timholy/HDF5.jl/issues/88
https://github.com/timholy/HDF5.jl/issues/83
https://github.com/timholy/HDF5.jl/issues/37

If I have "file1.jld" and "file2.jld", both containing the variable "x" but the 
types are different, what exactly does the implementation of

function read_x_in_files(filenames)
for fn in filenames
x = read(fn, "x")
# now do something with x
end
end

read_x_in_files(("file1.jld", "file2.jld"))

look like if I'm not allowed to have type instability?

--Tim

On Saturday, January 03, 2015 08:02:51 AM Tobias Knopp wrote:
> I cannot judge on your proposal as I have not much experience with macros
> (nor with staged functions). Jameson surely can because he has a clear
> opinion when to use macros and when not.
> 
> I just wanted to make clear that defining the type can be a serious issue.
> 
> Further I don't really see whats wrong with the following
> 
> function a()
>   data = read_data_from_file()  # I am highly type unstable
>   do_some_hard_computation!(data)   # I am type stable
> end
> 
> What would I gain if the function a would be typestable. The hard work is
> done in the inner function.
> 
> Where you absolutely have a point is that it is currently hard to be sure
> if the inner function is type stable.
> Maybe it would be cool if one could annotated functions to be stable and
> raise an error if not?
> 
> Cheers
> 
> Tobi
> 
> On Saturday, January 3, 2015 at 16:23:22 UTC+1, Ariel Keselman wrote:
> > But using a function for this is wrong because macros allow you to load
> > the vector at runtime w/o specifying types and without inner scope
> > type-instability. So why insisting using a function instead of a macro?
> > The
> > only scenario I can think of is if the name of the file is only known at
> > runtime and only inside some type-stable inner scope. But then some
> > type-stable macro could help getting rid of the mess. I just don't see
> > this
> > mess happening in julia where you can use staged functions and macros; The
> > solution would look much different than in C++. It could be coded as some
> > DSL in a library which only exposes a nice API where you list the expected
> > types and structure and that's it. Not a single 'if' would leak ;)
> > 
> > There are no free lunches of course, and yes, you could run into some mess
> > but that seems would rarely happen. I would argue taht un that case no
> > other language either can help you (they cant help to get this both fast
> > clean) -- Julia, that's nothing to be ashemed of ;)
> > 
> > Now if macros are somehow too constrained for such uses we could have a
> > tag for functions as "type-unstable" in a form that prevents the type
> > instability from leaking outside, but of course I'm looking for solutions
> > within the current Julia state



Re: [julia-users] I don't understand this `convert` error.

2015-01-03 Thread Milan Bouchet-Valat
On Saturday, January 3, 2015 at 08:29 -0800, Ismael VC wrote:
> I'm trying to port this Python class:
> 
> class Env(dict):
> "An environment: a dict of {'var':val} pairs, with an outer Env."
> def __init__(self, parms=(), args=(), outer=None):
> self.update(zip(parms, args))
> self.outer = outer
> def find(self, var):
> "Find the innermost Env where var appears."
> return self if (var in self) else self.outer.find(var)
> 
> Haven't even got to the `find` method yet! :( ...this is what I've
> done:
> 
> Environment type:
> 
> julia> type Env
> 
>data::Dict
> 
>outer::Nullable{Env}
> 
>Env() = new(Dict(), Nullable{Env}())
> 
>end
> 
> 
> Outer constructor:
> 
> julia> function Env(parms::Tuple, args::Tuple, outer::Env)
> 
>Env(Dict(zip(parms, args)), Nullable(outer))
> 
>end
> 
> Env
> 
> 
> So far so good:
> 
> julia> parms = (:foo, :bar, :baz);
> 
>  
> 
> julia> args = (1, 2, 3);
> 
>  
> 
> julia> outer = Env()
> 
> Env(Dict{Any,Any}(),Nullable{Env}())
> 
> 
> I've spent a lot of time trying to understand this, without success:
> 
> julia> inner = Env(parms, args, outer)
> 
> ERROR: `convert` has no method matching
> convert(::Type{Env}, ::(Symbol,Symbol,Symbol), ::(Int32,Int32,Int32), ::Env)
> 
>  in call at base.jl:35
> 
> 
> I'll have to go to work before soon, I know this'll be itching my head
> all day long! :P
> 
> What am I doing wrong? or How many things am I doing wrong? :O
This is the error I get with lastest master:
julia> inner = Env(parms, args, outer)
ERROR: `convert` has no method matching
 convert(::Type{Env}, ::Dict{Symbol,Int64}, ::Nullable{Env})
 in call at none:2

Actually the error comes from the second call to Env():
Env(Dict(zip(parms, args)), Nullable(outer))

I think this is because you have overridden the inner constructor
without providing a two-argument version. It works if you move it to an
outer constructor:
   type Env
   data::Dict
   outer::Nullable{Env}
   end
   
   Env() = Env(Dict(), Nullable{Env}())
   
   function Env(parms::Tuple, args::Tuple, outer::Env)
   Env(Dict(zip(parms, args)), Nullable(outer))
   end
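
With those definitions in place, the call from the original post should go 
through (untested sketch):

   julia> inner = Env((:foo, :bar, :baz), (1, 2, 3), Env())

instead of hitting the `convert` error.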


Regards


[julia-users] I don't understand this `convert` error.

2015-01-03 Thread Ismael VC
I'm trying to port this Python class:

class Env(dict):
"An environment: a dict of {'var':val} pairs, with an outer Env."
def __init__(self, parms=(), args=(), outer=None):
self.update(zip(parms, args))
self.outer = outer
def find(self, var):
"Find the innermost Env where var appears."
return self if (var in self) else self.outer.find(var)

Haven't even got to the `find` method yet! :( ...this is what I've done:

Environment type:

julia> type Env
   data::Dict
   outer::Nullable{Env}
   Env() = new(Dict(), Nullable{Env}())
   end


Outer constructor:

julia> function Env(parms::Tuple, args::Tuple, outer::Env)
   Env(Dict(zip(parms, args)), Nullable(outer))
   end
Env


So far so good:

julia> parms = (:foo, :bar, :baz);
 
julia> args = (1, 2, 3);
 
julia> outer = Env()
Env(Dict{Any,Any}(),Nullable{Env}())


I've spent a lot of time trying to understand this, without success:

julia> inner = Env(parms, args, outer)
ERROR: `convert` has no method matching convert(::Type{Env}, 
::(Symbol,Symbol,Symbol), ::(Int32,Int32,Int32), ::Env)
 in call at base.jl:35


I'll have to go to work before soon, I know this'll be itching my head all 
day long! :P

What am I doing wrong? or How many things am I doing wrong? :O

Thanks in advance!


Re: [julia-users] Finding the maximum location of a vector: does Julia have a parallel intrinsic function for the job?

2015-01-03 Thread Tim Holy
You're looking for indmax.

Almost none of the algorithms in Base use parallelism, unless you create 
special data structures (SharedArrays, DArrays).
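
For example (a rough sketch; the parallel variant just reduces (value, index) 
pairs over whatever workers are available, and for large data a SharedArray 
would avoid copying v to every process):

julia> v = [1, 25, 89, 6, 8, 63, 7, 569];

julia> indmax(v)        # serial: index of the largest element
8

julia> @everywhere maxpair(a, b) = a[1] >= b[1] ? a : b

julia> vmax, imax = @parallel (maxpair) for i = 1:length(v)
           (v[i], i)
       end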

--Tim

On Saturday, January 03, 2015 07:47:59 AM Pileas wrote:
> This may be a very trivial question, but I was wondering if there is an
> intrinsic Julia function that finds the location of the maximum in a
> vector, something similar to maxloc. And if a function like that exists, is
> it made with parallel programming in mind?
> 
> For example find the maximum of v = [1, 25, 89, 6, 8, 63, 7, 569, 8, 55,
>  4, 5, 64, 545, 5, 45, 5, 5, 55, 45, 454, 55, 578, 78, 7]
> but do it in parallel mode, that is divide the vector in sub-vectors, find
> the maximum location in each and eventually return the universal maximum
> (because this parallel procedure is faster than searching in the whole
> vector per se).
> 
> Thanks



Re: [julia-users] Memory Allocation Question

2015-01-03 Thread Tim Holy
You may want to try
julia --track-allocation=user
to see where the allocation is coming from. (See the Performance Tips page in 
the manual.)

In case the results are confusing, see also 
https://github.com/JuliaLang/julia/pull/9581
(that may cause allocations to be attributed to their corresponding lines in 
Base, which is probably not what you want with --track-allocation=user.)
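
The workflow is roughly (a sketch; the file name is illustrative):

julia --track-allocation=user
julia> include("runme.jl")           # first run is dominated by compilation
julia> Profile.clear_malloc_data()   # discard the compilation allocations
julia> include("runme.jl")           # second run collects the real numbers
julia> quit()

The per-line counts then land in *.mem files next to the source files.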

--Tim

On Saturday, January 03, 2015 04:32:45 AM Christoph Ortner wrote:
> Hi Tim,
> Many thanks for the suggestion, unfortunately it did not fix the issue; see
> below updated code + timing/allocation.
>All the best, Christoph
> 
> 
> 
> function evalSiteFunction!(sp::SiteLinear, geom::tbgeom,
>  Y::Array{Float64,2}, Frc::Array{Float64,2})
> # extract dimension information (d1, d2 \in \{2, 3\})
> d1, d2, nneig = size(sp.coeffs)
> # allocate some variables
> kX = 0::Int; jX = 0::Int
> # assert type of the index set over which we are looping
> J = geom["iMM"]::Array{Int, 1}
> for jX in J
> # initialise force to 0
> Frc[:,jX] = 0.0
> # loop over neighbouring sites
> for n in 1:nneig
> # get the X-index of the current neighbour
> kX = geom.I[ geom.X2I[1,jX] + sp.stencil[1,n], geom.X2I[2,jX] +
> sp.stencil[2,n] ]
> # evaluate the term (devectorized for performance)
> #  f[:] += slice(sp.coeffs, :, :, n) * Y[:, kX]
> for i = 1:d1, j = 1:d2
> Frc[i,jX] += sp.coeffs[i, j, n] * Y[j, kX]
> end
> end
> end
> end
> 
> @time TB.evalSiteFunction!(frcMM, geom, Y,  Frc)
> 
> elapsed time: 0.549490861 seconds (166123912 bytes allocated, 21.85% gc
> time)
> 
> 
> 
> 1   abstractarray.jl; checkbounds; line: 62
> 1   array.jl; getindex; line: 247
> 420 task.jl; anonymous; line: 340
>  420 .../IJulia/src/IJulia.jl; eventloop; line: 123
>   420 ...rc/execute_request.jl; execute_request_0x535c5df2; line: 140
>420 loading.jl; include_string; line: 97
> 420 profile.jl; anonymous; line: 14
>  45  ...qmmm/TBmultiscale.jl; evalSiteFunction!; line: 198
>   1 .../lib/julia/sys.dylib; +; (unknown line)
>   1 array.jl; getindex; line: 247
>  1   ...qmmm/TBmultiscale.jl; evalSiteFunction!; line: 202
>  372 ...qmmm/TBmultiscale.jl; evalSiteFunction!; line: 203
>   3 array.jl; getindex; line: 247
>   8 array.jl; setindex!; line: 308



Re: [julia-users] Disabling type instability in non-global scope

2015-01-03 Thread Tobias Knopp
I cannot judge your proposal as I do not have much experience with macros 
(nor with staged functions). Jameson surely can, because he has a clear 
opinion on when to use macros and when not.

I just wanted to make clear that defining the type can be a serious issue.

Further, I don't really see what's wrong with the following

function a()
  data = read_data_from_file()  # I am highly type unstable
  do_some_hard_computation!(data)   # I am type stable
end

What would I gain if the function a were type stable? The hard work is 
done in the inner function.

Where you absolutely have a point is that it is currently hard to be sure 
if the inner function is type stable.
Maybe it would be cool if one could annotate functions as stable and 
raise an error if not?

Cheers

Tobi

On Saturday, January 3, 2015 at 16:23:22 UTC+1, Ariel Keselman wrote:
>
> But using a function for this is wrong because macros allow you to load 
> the vector at runtime w/o specifying types and without inner scope 
> type-instability. So why insisting using a function instead of a macro? The 
> only scenario I can think of is if the name of the file is only known at 
> runtime and only inside some type-stable inner scope. But then some 
> type-stable macro could help getting rid of the mess. I just don't see this 
> mess happening in julia where you can use staged functions and macros; The 
> solution would look much different than in C++. It could be coded as some 
> DSL in a library which only exposes a nice API where you list the expected 
> types and structure and that's it. Not a single 'if' would leak ;)
>
> There are no free lunches of course, and yes, you could run into some mess 
> but that seems would rarely happen. I would argue taht un that case no 
> other language either can help you (they cant help to get this both fast 
> clean) -- Julia, that's nothing to be ashemed of ;)
>
> Now if macros are somehow too constrained for such uses we could have a 
> tag for functions as "type-unstable" in a form that prevents the type 
> instability from leaking outside, but of course I'm looking for solutions 
> within the current Julia state
>
>
>

[julia-users] Finding the maximum location of a vector: does Julia have a parallel intrinsic function for the job?

2015-01-03 Thread Pileas
This may be a very trivial question, but I was wondering if there is an 
intrinsic Julia function that finds the location of the maximum in a 
vector, something similar to maxloc. And if a function like that exists, is 
it made with parallel programming in mind?

For example find the maximum of v = [1, 25, 89, 6, 8, 63, 7, 569, 8, 55, 
 4, 5, 64, 545, 5, 45, 5, 5, 55, 45, 454, 55, 578, 78, 7]
but do it in parallel mode, that is divide the vector in sub-vectors, find 
the maximum location in each and eventually return the universal maximum 
(because this parallel procedure is faster than searching in the whole 
vector per se).

Thanks


Re: [julia-users] Disabling type instability in non-global scope

2015-01-03 Thread Ariel Keselman
But using a function for this is wrong because macros allow you to load the 
vector at runtime w/o specifying types and without inner scope 
type-instability. So why insisting using a function instead of a macro? The 
only scenario I can think of is if the name of the file is only known at 
runtime and only inside some type-stable inner scope. But then some 
type-stable macro could help getting rid of the mess. I just don't see this 
mess happening in julia where you can use staged functions and macros; The 
solution would look much different than in C++. It could be coded as some 
DSL in a library which only exposes a nice API where you list the expected 
types and structure and that's it. Not a single 'if' would leak ;)

There are no free lunches of course, and yes, you could run into some mess, 
but that seems like it would rarely happen. I would argue that in that case no 
other language can help you either (they can't help you get this both fast and 
clean) -- Julia, that's nothing to be ashamed of ;)

Now if macros are somehow too constrained for such uses we could have a tag 
for functions as "type-unstable" in a form that prevents the type 
instability from leaking outside, but of course I'm looking for solutions 
within the current Julia state




Re: [julia-users] Re: How to heapify and array?

2015-01-03 Thread Tim Holy
It seems that you have a borked build of julia. How did you get it? If you 
built from source, what happens when you try `make testall`?
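
For reference, on a working 0.3 install the call goes through like this 
(a quick sketch):

julia> x = rand(10);

julia> Collections.heapify!(x);

julia> Collections.isheap(x)
true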

--Tim

On Saturday, January 03, 2015 04:14:55 AM Rodolfo Santana wrote:
> Hi Tim,
> 
> Thanks for the reply! Yes, I realize this is what 'heapify!' does.
> 
> I first do x= rand(10). Then, I try Collections.heapify!(x) and I get the
> error:
> 
> ERROR: heapify! not defined
> 
> Maybe someone can post a screenshot of how to use this function in Julia? I
> couldn't find an example on the web using this function.
> 
> Thanks,
> 
> -Rodolfo
> 
> On Saturday, January 3, 2015 5:40:22 AM UTC-6, Tim Holy wrote:
> > Also post any error messages, etc.
> > 
> > You realize that `heapify!` returns an array, just with elements in a
> > different
> > order? The smallest element will be first.
> > 
> > --Tim
> > 
> > On Saturday, January 03, 2015 03:12:39 AM Rodolfo Santana wrote:
> > > Here it is:
> > > 
> > > Julia Version 0.3.4
> > > 
> > > Commit 3392026* (2014-12-26 10:42 UTC)
> > > 
> > > Platform Info:
> > >   System: Darwin (x86_64-apple-darwin13.4.0)
> > >   
> > >   CPU: Intel(R) Core(TM) i7 CPU   M 620  @ 2.67GHz
> > >   
> > >   WORD_SIZE: 64
> > >   
> > >   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Nehalem)
> > >   
> > >   LAPACK: libopenblas
> > >   
> > >   LIBM: libopenlibm
> > >   
> > >   LLVM: libLLVM-3.3
> > > 
> > > On Saturday, January 3, 2015 5:06:42 AM UTC-6, ele...@gmail.com wrote:
> > > > Strange, works for me, maybe post the whole versioninfo() output that
> > 
> > the
> > 
> > > > experts can look at.
> > > > 
> > > > Cheers
> > > > Lex
> > > > 
> > > > On Saturday, January 3, 2015 8:58:21 PM UTC+10, Rodolfo Santana wrote:
> > > >> Hi Lex,
> > > >> 
> > > >> I am using Version 0.3.4
> > > >> 
> > > >> On Saturday, January 3, 2015 4:55:06 AM UTC-6, ele...@gmail.com
> > 
> > wrote:
> > > >>> Whats your versioninfo()
> > > >>> 
> > > >>> On Saturday, January 3, 2015 8:53:30 PM UTC+10, Rodolfo Santana
> > 
> > wrote:
> > >  Thanks for the reply! I have tried that, but I get:
> > >  
> > >  ERROR: heapify! not defined
> > >  
> > >  On Saturday, January 3, 2015 4:48:27 AM UTC-6, Ivar Nesje wrote:
> > > > *Collections.heapify!(x)*
> > > > 
> > > > On Saturday, January 3, 2015 at 11:22:24 UTC+1, Rodolfo Santana
> > > > wrote:
> > > >> Let's say I have an array x=rand(10) . How do I use the heapify!
> > > >> function to heapify x?
> > > >> 
> > > >> Thanks!
> > > >> -Rodolfo



[julia-users] printf float64 as hex/raw

2015-01-03 Thread Andreas Lobinger
Hello colleagues,

Most probably I overlooked it: how can I output a float (Float64, double, 
etc.) in hex form (somehow raw)?

(longer story ...
I'm doubting some output of a Gtk callback function which should report 
screen coordinates as Float64, but the values come out as integers, e.g. 
2.0, 3.0, etc. A Gtk program in C reports exactly this via printf, while in 
Julia Gtk I get

m 0.588928 3.045471
m 1.588928 3.045471
m 2.588928 3.045471
m 2.588928 2.045471
m 3.588928 2.045471
m 4.588928 2.045471
m 5.588928 2.045471
m 5.588928 3.045471
m 6.588928 7.045471
m 8.588928 9.045471
m 8.588928 10.045471)

with

julia> using Gtk

julia> c = Gtk.@GtkCanvas(200,200);

julia> w = Gtk.@GtkWindow(c);

julia> show(c);

julia> c.mouse.motion = (w,e) -> begin
   @printf("m %f %f\n", e.x, e.y)
   end




Wishing a happy day,
   Andreas



[julia-users] Re: Warning: imported binding for transpose overwritten in module __anon__

2015-01-03 Thread Steven G. Johnson
You can safely ignore it.  @pyimport creates a module __anon__ (which is 
assigned to plt in this case) that has definitions for the Python functions 
in the Python module.   The warning is telling you that this module creates 
its own "transpose" function instead of extending Base.transpose.  (It is a 
warning because in many cases a module author would have intended to add a 
new method to Base.transpose instead.)

This is fine.  transpose in other modules still refers to Base.transpose, 
and plt.transpose refers to the pylab one (== numpy transpose).

--SGJ

PS. By the way, I would normally import just pyplot and not pylab.  The 
pylab module is useful in Python because it imports numpy too, and without 
that you wouldn't have a lot of basic array functionality.  But in Julia 
you already have the equivalent of numpy built in to Julia Base.   Also, I 
would tend to recommend the Julia PyPlot module over manually importing 
pyplot.  The PyPlot module adds some niceties like IJulia inline plots and 
interactive GUI plots, whereas pylab is imported by default in 
non-interactive mode.
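
In other words, the simplest route is usually just (a sketch):

using PyPlot
plot(rand(10))      # shows up inline in IJulia, or in a GUI window at the REPL
title("quick check")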


Re: [julia-users] Disabling type instability in non-global scope

2015-01-03 Thread Tobias Knopp
It absolutely can be a lot of effort to specify the type when reading an 
HDF5 file, simply because it may not be known at compile time.

By the way, in various applications the array dimension is also a dynamic 
parameter not known at compile time. I once had to dispatch these 
uncertainties to a template-based C++ (image reconstruction) library. It 
became a scary encapsulated if/else construct that increased the compile 
time by a factor of 10.

Julia gives me a lot of flexibility here, and once the file is read, the 
actual calculation is fast due to the type-stable functions that are called.


On Saturday, January 3, 2015 at 15:17:14 UTC+1, Ariel Keselman wrote:
>
>
> There is type-instability only at global scope, that's the point. So when 
> you are using the repl (or just using the global scope) you won't have to 
> write types. My suggestion is only to disallow type-instability at inner 
> scopes so they become fully statically typed; the next step would be to 
> statically check functions before compilation, a real static check just 
> like you would get in say Java or or Haskell. 
>
> The advantages are way more than just linting. You would catch whole 
> categories of problems at copile time, before your code actually runs. 
> Imagine finding a problem after your code runs for hrs, or a web server 
> crash in production due to some untested code these things happen. This 
> would allow way better tooling: the tooling that statically typed languages 
> have (think Java, C#) is wa better than that of dynamically typed 
> languages. And I say this even after considering PyCharm. The tools for 
> statically typed languages allow for safe refactorings, find usages, 
> definitions, automated documentation which is really sync'd with the code 
> and much more.
>
> above I was just trying to demonstrate how some problems that currently 
> use dynamic typing could easily be converted to be type stable, and hence 
> statically typed. Yes, with the solution that I demonstrated above, when 
> using a file name known only at runtime inside a function, you would have 
> to write down the expected types, which doesn't seem like too much effort:
>
> myvec = load_vector_from_hdf5_file(filename, vecname, Float64)
>
> But now you get the advantage that the following code would fail at 
> compile time:
>
> for i, v in enumerate(myvec)
> println("element number "*string(i)*" is "*myvec[i])
> end
>
> Of course if you're using the repl you could just use the macro, or if you 
> want to do more automation at global scope just write a macro instead of a 
> function.
>
> Seems to me that the advantages of type stable inner scopes easily 
> outweighs the inconvenience of having to write a few rare types. Julia can 
> be really be the 1st interactive language with statically typed guarantees 
> and tooling. BTW With good enough tooling the IDE could suggest the HDF5 
> vector type so in the long run this would represent really little cognitive 
> effort from the programmer.
>
>
>

Re: [julia-users] Disabling type instability in non-global scope

2015-01-03 Thread Ariel Keselman

There is type-instability only at global scope, that's the point. So when 
you are using the repl (or just using the global scope) you won't have to 
write types. My suggestion is only to disallow type-instability at inner 
scopes so they become fully statically typed; the next step would be to 
statically check functions before compilation, a real static check just 
like you would get in, say, Java or Haskell. 

The advantages are way more than just linting. You would catch whole 
categories of problems at compile time, before your code actually runs. 
Imagine finding a problem after your code runs for hours, or a web server 
crash in production due to some untested code; these things happen. This 
would allow way better tooling: the tooling that statically typed languages 
have (think Java, C#) is way better than that of dynamically typed 
languages. And I say this even after considering PyCharm. The tools for 
statically typed languages allow for safe refactorings, finding usages and 
definitions, automated documentation which is really synced with the code, 
and much more.

above I was just trying to demonstrate how some problems that currently use 
dynamic typing could easily be converted to be type stable, and hence 
statically typed. Yes, with the solution that I demonstrated above, when 
using a file name known only at runtime inside a function, you would have 
to write down the expected types, which doesn't seem like too much effort:

myvec = load_vector_from_hdf5_file(filename, vecname, Float64)

But now you get the advantage that the following code would fail at compile 
time:

for i, v in enumerate(myvec)
println("element number "*string(i)*" is "*myvec[i])
end

Of course if you're using the repl you could just use the macro, or if you 
want to do more automation at global scope just write a macro instead of a 
function.

Seems to me that the advantages of type stable inner scopes easily 
outweigh the inconvenience of having to write a few rare types. Julia could 
really be the first interactive language with statically typed guarantees 
and tooling. BTW, with good enough tooling the IDE could suggest the HDF5 
vector type, so in the long run this would represent really little cognitive 
effort from the programmer.
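
One possible shape for such a loader, leaning on HDF5.jl's h5read (an untested 
sketch; the file name below is illustrative):

using HDF5

function load_vector_from_hdf5_file{T}(filename, vecname, ::Type{T})
    # the assertion is what keeps the caller type stable: whatever comes out
    # of the file, the result is a Vector{T} or an error is thrown
    return convert(Vector{T}, h5read(filename, vecname))::Vector{T}
end

myvec = load_vector_from_hdf5_file("data.h5", "myvec", Float64)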




Re: [julia-users] Memory Allocation Question

2015-01-03 Thread Viral Shah
It seems that you are doing all the right things in the inner loops. Can 
you post a self-sufficient code snippet that one can run?

-viral

On Saturday, January 3, 2015 6:02:45 PM UTC+5:30, Christoph Ortner wrote:
>
> Hi Tim,
> Many thanks for the suggestion, unfortunately it did not fix the issue; 
> see below updated code + timing/allocation.
>All the best, Christoph
>
>
>
> function evalSiteFunction!(sp::SiteLinear, geom::tbgeom, 
>  Y::Array{Float64,2}, Frc::Array{Float64,2})
> # extract dimension information (d1, d2 \in \{2, 3\})
> d1, d2, nneig = size(sp.coeffs)
> # allocate some variables
> kX = 0::Int; jX = 0::Int
> # assert type of the index set over which we are looping
> J = geom["iMM"]::Array{Int, 1}
> for jX in J
> # initialise force to 0
> Frc[:,jX] = 0.0
> # loop over neighbouring sites
> for n in 1:nneig
> # get the X-index of the current neighbour
> kX = geom.I[ geom.X2I[1,jX] + sp.stencil[1,n], geom.X2I[2,jX] 
> + sp.stencil[2,n] ]
> # evaluate the term (devectorized for performance)
> #  f[:] += slice(sp.coeffs, :, :, n) * Y[:, kX]
> for i = 1:d1, j = 1:d2
> Frc[i,jX] += sp.coeffs[i, j, n] * Y[j, kX]
> end
> end
> end
> end
>
> @time TB.evalSiteFunction!(frcMM, geom, Y,  Frc)
>
> elapsed time: 0.549490861 seconds (166123912 bytes allocated, 21.85% gc time)
>
>
>
> 1   abstractarray.jl; checkbounds; line: 62
> 1   array.jl; getindex; line: 247
> 420 task.jl; anonymous; line: 340
>  420 .../IJulia/src/IJulia.jl; eventloop; line: 123
>   420 ...rc/execute_request.jl; execute_request_0x535c5df2; line: 140
>420 loading.jl; include_string; line: 97
> 420 profile.jl; anonymous; line: 14
>  45  ...qmmm/TBmultiscale.jl; evalSiteFunction!; line: 198
>   1 .../lib/julia/sys.dylib; +; (unknown line)
>   1 array.jl; getindex; line: 247
>  1   ...qmmm/TBmultiscale.jl; evalSiteFunction!; line: 202
>  372 ...qmmm/TBmultiscale.jl; evalSiteFunction!; line: 203
>   3 array.jl; getindex; line: 247
>   8 array.jl; setindex!; line: 308
>
>

Re: [julia-users] Disabling type instability in non-global scope

2015-01-03 Thread Mike Innes
There is type instability here, actually – the macro doesn't change
anything with regard to types.

coltype = get_vector_coltype_from_file(filename)::?
load_vector_from_hdf5_file(filename, vecname, coltype)::?

get_vector... is not type stable, which means that the entire block isn't –
there's no way to know the type of the result of this expression without
runtime values. This program would therefore be rejected by a type-safe
compiler.

The only ways around this are to (a) specify types by hand or (b) load
types from the file and generate specialised code at compile time. One is
inconvenient and the other is incomplete, which is why dynamic typing is
compelling.

On 3 January 2015 at 13:24, Ariel Keselman  wrote:

> it should still work if the file name is known only at runtime, consider
> the following script:
>
>
> filename = "myrandomfile"*string(rand(1:5))*".hdf"
> vecname = "myvec"
> @load_vector_from_hdf5_file(filename, vecname)
>
>
> w/o specifying types, w/o type instability, and file name decided at
> runtime.
>
> I have yet seen a convincing argumet for allowing type instabilities
> inside compiled functions and having this "feature" comes with a high
> cost
>


Re: [julia-users] Re: Package name for embedding R within Julia

2015-01-03 Thread Viral Shah
I prefer Rcall.jl, for consistency with ccall, PyCall, JavaCall, etc. Also, 
once in JuliaStats, it will probably also be well advertised and documented 
- so finding it should not be a challenge, IMO.

-viral

On Saturday, January 3, 2015 10:16:51 AM UTC+5:30, Ismael VC wrote:
>
> +1 for RStats.jl, I also think it's more search-friendly but not only for 
> people coming from R.
>
> On Fri, Jan 2, 2015 at 9:50 PM, Gray Calhoun  wrote:
>
>> That sounds great! Rcall.jl or RCall.jl are fine names; RStats.jl may be 
>> slightly more search-friendly for people coming from R, since they may not 
>> know about PyCall.
>>
>>
>> On Friday, January 2, 2015 1:59:04 PM UTC-6, Douglas Bates wrote:
>>>
>>> For many statistics-oriented Julia users there is a great advantage in 
>>> being able to piggy-back on R development and to use at least the data sets 
>>> from R packages.  Hence the RDatasets package and the read_rda function in 
>>> the DataFrames package for reading saved R data.
>>>
>>> Over the last couple of days I have been experimenting with running an 
>>> embedded R within Julia and calling R functions from Julia. This is similar 
>>> in scope to the Rif package except that this code is written in Julia and 
>>> not as a set of wrapper functions written in C. The R API is a C API and, 
>>> in some ways, very simple. Everything in R is represented as a "symbolic 
>>> expression" or SEXPREC and passed around as pointers to such expressions 
>>> (called an SEXP type).  Most functions take one or more SEXP values as 
>>> arguments and return an SEXP.
>>>
>>> I have avoided reading the code for Rif for two reasons:
>>>  1. It is GPL3 licensed
>>>  2. I already know a fair bit of the R API and where to find API 
>>> function signatures.
>>>
>>> Here's a simple example
>>> julia> initR()
>>> 1
>>>
>>> julia> globalEnv = unsafe_load(cglobal((:R_GlobalEnv,libR),SEXP),1)
>>> Ptr{Void} @0x08c1c388
>>>
>>> julia> formaldehyde = tryEval(install(:Formaldehyde))
>>> Ptr{Void} @0x08fd1d18
>>>
>>> julia> inherits(formaldehyde,"data.frame")
>>> true
>>>
>>> julia> printValue(formaldehyde)
>>>   carb optden
>>> 1  0.1  0.086
>>> 2  0.3  0.269
>>> 3  0.5  0.446
>>> 4  0.6  0.538
>>> 5  0.7  0.626
>>> 6  0.9  0.782
>>>
>>> julia> length(formaldehyde)
>>> 2
>>>
>>> julia> names(formaldehyde)
>>> 2-element Array{ASCIIString,1}:
>>>  "carb"  
>>>  "optden"
>>>
>>> julia> form1 = ccall((:VECTOR_ELT,libR),SEXP,(SEXP,Cint),formaldehyde,0)
>>> Ptr{Void} @0x0a5baf58
>>>
>>> julia> ccall((:TYPEOF,libR),Cint,(SEXP,),form1)
>>> 14
>>>
>>> julia> carb = copy(pointer_to_array(ccall((:
>>> REAL,libR),Ptr{Cdouble},(SEXP,),form1),length(form1)))
>>> 6-element Array{Float64,1}:
>>>  0.1
>>>  0.3
>>>  0.5
>>>  0.6
>>>  0.7
>>>  0.9
>>>
>>> julia> form2 = ccall((:VECTOR_ELT,libR),SEXP,(SEXP,Cint),formaldehyde,1)
>>> Ptr{Void} @0x0a5baef0
>>>
>>> julia> ccall((:TYPEOF,libR),Cint,(SEXP,),form2)
>>> 14
>>>
>>> julia> optden = copy(pointer_to_array(ccall((:
>>> REAL,libR),Ptr{Cdouble},(SEXP,),form2),length(form2)))
>>> 6-element Array{Float64,1}:
>>>  0.086
>>>  0.269
>>>  0.446
>>>  0.538
>>>  0.626
>>>  0.782
>>>
>>>
>>> A call to printValue uses the R printing mechanism.
>>>
>>> Questions:
>>>  - What would be a good name for such a package?  In the spirit of 
>>> PyCall it could be RCall or Rcall perhaps.
>>>
>>>  - Right now I am defining several functions that emulate the names of 
>>> functions in R itself or in the R API.  What is a good balance?  Obviously 
>>> it would not be a good idea to bring in all the names in the R base 
>>> namespace.  On the other hand, those who know names like "inherits" and 
>>> what it means in R will find it convenient to have such names in such a 
>>> package.
>>>
>>> - Should I move the discussion to the julia-stats list?
>>>
>>>
>
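
As an aside on the API shape described in the quoted message: since everything
is passed around as a SEXP, most wrappers end up being one-line ccalls. A
hedged sketch of what an `inherits` wrapper might look like, assuming the
`libR` handle and the SEXP-as-Ptr{Void} convention from Douglas's session:

rinherits(s::Ptr{Void}, cls) =
    ccall((:Rf_inherits, libR), Cint, (Ptr{Void}, Ptr{Uint8}), s, cls) != 0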

[julia-users] Re: Package name for embedding R within Julia

2015-01-03 Thread lgautier


Hi Doug,


> I have avoided reading the code for Rif for two reasons:
> 1. It is GPL3 licensed
>  2. I already know a fair bit of the R API and where to find API function signatures.

1. As stated earlier Rif is now GPLv2+ (and so is R)
2. We are probably several in that situation (I am behind the Python-R 
interface rpy2, so I could probably add a track record of building such 
interfaces to the mix).

That said, wanting to scratch a coding itch is a perfectly valid reason in my 
book. Maybe the two reasons above are not necessary?

PS: The package name "RCall" has been suggested (and is already used) by 
Randy Lai (https://github.com/randy3k/RCall.jl/ - I would have thought 
people on this thread were following the thread on GitHub about the names).


On Friday, January 2, 2015 2:59:04 PM UTC-5, Douglas Bates wrote:
>
> For many statistics-oriented Julia users there is a great advantage in 
> being able to piggy-back on R development and to use at least the data sets 
> from R packages.  Hence the RDatasets package and the read_rda function in 
> the DataFrames package for reading saved R data.
>
> Over the last couple of days I have been experimenting with running an 
> embedded R within Julia and calling R functions from Julia. This is similar 
> in scope to the Rif package except that this code is written in Julia and 
> not as a set of wrapper functions written in C. The R API is a C API and, 
> in some ways, very simple. Everything in R is represented as a "symbolic 
> expression" or SEXPREC and passed around as pointers to such expressions 
> (called an SEXP type).  Most functions take one or more SEXP values as 
> arguments and return an SEXP.
>
> I have avoided reading the code for Rif for two reasons:
>  1. It is GPL3 licensed
>  2. I already know a fair bit of the R API and where to find API function 
> signatures.
>
> Here's a simple example
> julia> initR()
> 1
>
> julia> globalEnv = unsafe_load(cglobal((:R_GlobalEnv,libR),SEXP),1)
> Ptr{Void} @0x08c1c388
>
> julia> formaldehyde = tryEval(install(:Formaldehyde))
> Ptr{Void} @0x08fd1d18
>
> julia> inherits(formaldehyde,"data.frame")
> true
>
> julia> printValue(formaldehyde)
>   carb optden
> 1  0.1  0.086
> 2  0.3  0.269
> 3  0.5  0.446
> 4  0.6  0.538
> 5  0.7  0.626
> 6  0.9  0.782
>
> julia> length(formaldehyde)
> 2
>
> julia> names(formaldehyde)
> 2-element Array{ASCIIString,1}:
>  "carb"  
>  "optden"
>
> julia> form1 = ccall((:VECTOR_ELT,libR),SEXP,(SEXP,Cint),formaldehyde,0)
> Ptr{Void} @0x0a5baf58
>
> julia> ccall((:TYPEOF,libR),Cint,(SEXP,),form1)
> 14
>
> julia> carb = 
> copy(pointer_to_array(ccall((:REAL,libR),Ptr{Cdouble},(SEXP,),form1),length(form1)))
> 6-element Array{Float64,1}:
>  0.1
>  0.3
>  0.5
>  0.6
>  0.7
>  0.9
>
> julia> form2 = ccall((:VECTOR_ELT,libR),SEXP,(SEXP,Cint),formaldehyde,1)
> Ptr{Void} @0x0a5baef0
>
> julia> ccall((:TYPEOF,libR),Cint,(SEXP,),form2)
> 14
>
> julia> optden = 
> copy(pointer_to_array(ccall((:REAL,libR),Ptr{Cdouble},(SEXP,),form2),length(form2)))
> 6-element Array{Float64,1}:
>  0.086
>  0.269
>  0.446
>  0.538
>  0.626
>  0.782
>
>
> A call to printValue uses the R printing mechanism.
>
> Questions:
>  - What would be a good name for such a package?  In the spirit of PyCall 
> it could be RCall or Rcall perhaps.
>
>  - Right now I am defining several functions that emulate the names of 
> functions in R itself or in the R API.  What is a good balance?  Obviously 
> it would not be a good idea to bring in all the names in the R base 
> namespace.  On the other hand, those who know names like "inherits" and 
> what it means in R will find it convenient to have such names in such a 
> package.
>
> - Should I move the discussion to the julia-stats list?
>
>

Re: [julia-users] Re: How to heapify an array?

2015-01-03 Thread Viral Shah
This works for me on 0.3.4 on mac:


  _
   _   _ _(_)_ |  A fresh approach to technical computing
  (_) | (_) (_)|  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "help()" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.3.4 (2014-12-26 10:42 UTC)
 _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
|__/   |  x86_64-apple-darwin13.4.0


julia> a = rand(10);


julia> Collections.heapify!(a)
10-element Array{Float64,1}:
 0.0610402
 0.318859 
 0.1811   
 0.494483 
 0.412013 
 0.643721 
 0.65164  
 0.536059 
 0.867843 
 0.97967  



-viral

On Saturday, January 3, 2015 5:44:55 PM UTC+5:30, Rodolfo Santana wrote:
>
> Hi Tim,
>
> Thanks for the reply! Yes, I realize this is what 'heapify!' does. 
>
> I first do x= rand(10). Then, I try Collections.heapify!(x) and I get the 
> error:
>
> ERROR: heapify! not defined
>
> Maybe someone can post a screenshot of how to use this function in Julia? 
> I couldn't find an example on the web using this function.
>
> Thanks, 
>
> -Rodolfo
>
> On Saturday, January 3, 2015 5:40:22 AM UTC-6, Tim Holy wrote:
>>
>> Also post any error messages, etc. 
>>
>> You realize that `heapify!` returns an array, just with elements in a 
>> different 
>> order? The smallest element will be first. 
>>
>> --Tim 
>>
>> On Saturday, January 03, 2015 03:12:39 AM Rodolfo Santana wrote: 
>> > Here it is: 
>> > 
>> > Julia Version 0.3.4 
>> > 
>> > Commit 3392026* (2014-12-26 10:42 UTC) 
>> > 
>> > Platform Info: 
>> > 
>> >   System: Darwin (x86_64-apple-darwin13.4.0) 
>> > 
>> >   CPU: Intel(R) Core(TM) i7 CPU   M 620  @ 2.67GHz 
>> > 
>> >   WORD_SIZE: 64 
>> > 
>> >   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Nehalem) 
>> > 
>> >   LAPACK: libopenblas 
>> > 
>> >   LIBM: libopenlibm 
>> > 
>> >   LLVM: libLLVM-3.3 
>> > 
>> > On Saturday, January 3, 2015 5:06:42 AM UTC-6, ele...@gmail.com wrote: 
>> > > Strange, works for me, maybe post the whole versioninfo() output that 
>> the 
>> > > experts can look at. 
>> > > 
>> > > Cheers 
>> > > Lex 
>> > > 
>> > > On Saturday, January 3, 2015 8:58:21 PM UTC+10, Rodolfo Santana 
>> wrote: 
>> > >> Hi Lex, 
>> > >> 
>> > >> I am using Version 0.3.4 
>> > >> 
>> > >> On Saturday, January 3, 2015 4:55:06 AM UTC-6, ele...@gmail.com 
>> wrote: 
>> > >>> Whats your versioninfo() 
>> > >>> 
>> > >>> On Saturday, January 3, 2015 8:53:30 PM UTC+10, Rodolfo Santana 
>> wrote: 
>> >  Thanks for the reply! I have tried that, but I get: 
>> >  
>> >  ERROR: heapify! not defined 
>> >  
>> >  On Saturday, January 3, 2015 4:48:27 AM UTC-6, Ivar Nesje wrote: 
>> > > *Collections.heapify!(x)* 
>> > > 
>> > > kl. 11:22:24 UTC+1 lørdag 3. januar 2015 skrev Rodolfo Santana 
>> > > 
>> > > følgende: 
>> > >> Let's say I have an array x=rand(10) . How do I use the heapify! 
>> > >> function to heapify x? 
>> > >> 
>> > >> Thanks! 
>> > >> -Rodolfo 
>>
>>

Re: [julia-users] Disabling type instability in non-global scope

2015-01-03 Thread Ariel Keselman
it should still work if the file name is known only at runtime; consider 
the following script:


filename = "myrandomfile"*string(rand(1:5))*".hdf"
vecname = "myvec"
@load_vector_from_hdf5_file(filename, vecname)


w/o specifying types, w/o type instability, and file name decided at 
runtime.

I have yet to see a convincing argument for allowing type instabilities inside 
compiled functions, and having this "feature" comes with a high cost


Re: [julia-users] Disabling type instability in non-global scope

2015-01-03 Thread Mike Innes
What if the file name is only known at run time?

I'm all for better lint and warning tools to help those who want to remove
type instability, but bear in mind that not everyone does. I for one really
like the convenience of dynamic typing for non-performance-sensitive code.

On 3 January 2015 at 12:57, Ariel Keselman  wrote:

>
> well, I've just opened a pull request where qrfact is type-stable (using
> staged functions of course) see here:
>
> https://github.com/JuliaLang/julia/pull/9575
>
> The same could be done for the other functions you mentioned (e.g.
> factorize, sqrtm, etc.) and I'll do so if the current one is positively
> received.
>
> staged functions can help with many of these cases, and macros can help in
> others. For example, loading a vector from an HDF5 file (or a table from a
> CSV file) could be done in a type-stable way by pushing the instability into
> the macro expansion stage (so we have type stability once again, in the
> sense that at compile time the compiler would see a type-stable expanded
> function)
>
> imagine something along the following lines, I think this could be
> achieved with current Julia:
>
> function load_vector_from_hdf5_file(filename, vecname, coltype)
>     # read the vector using the given column type, this is type-stable
> end
>
> macro load_vector_from_hdf5_file(filename, vecname)
>     quote
>         coltype = get_vector_coltype_from_file($filename)
>         load_vector_from_hdf5_file($filename, $vecname, coltype)
>     end
> end
>
> this could be used to load hdf5/csv files w/o specifying types in a
> type-stable manner.
>
>
>
>
>


Re: [julia-users] Disabling type instability in non-global scope

2015-01-03 Thread Ariel Keselman

well, I've just opened a pull request where qrfact is type-stable (using 
staged functions of course) see here:

https://github.com/JuliaLang/julia/pull/9575

The same could be done for the other functions you mentioned (e.g. 
factorize, sqrtm, etc.) and I'll do so if the current one is positively 
received.

staged functions can help with many of these cases, and macros can help in 
others. For example, loading a vector from an HDF5 file (or a table from a 
CSV file) could be done in a type-stable way by pushing the instability into 
the macro expansion stage (so we have type stability once again, in the 
sense that at compile time the compiler would see a type-stable expanded 
function)

imagine something along the following lines, I think this could be achieved 
with current Julia:

function load_vector_from_hdf5_file(filename, vecname, coltype)
    # read the vector using the given column type, this is type-stable
end

macro load_vector_from_hdf5_file(filename, vecname)
    quote
        coltype = get_vector_coltype_from_file($filename)
        load_vector_from_hdf5_file($filename, $vecname, coltype)
    end
end

this could be used to load hdf5/csv files w/o specifying types in a 
type-stable manner.






Re: [julia-users] Memory Allocation Question

2015-01-03 Thread Christoph Ortner
Hi Tim,
Many thanks for the suggestion, unfortunately it did not fix the issue; see 
below updated code + timing/allocation.
   All the best, Christoph



function evalSiteFunction!(sp::SiteLinear, geom::tbgeom,
                           Y::Array{Float64,2}, Frc::Array{Float64,2})
    # extract dimension information (d1, d2 \in \{2, 3\})
    d1, d2, nneig = size(sp.coeffs)
    # allocate some variables
    kX = 0::Int; jX = 0::Int
    # assert type of the index set over which we are looping
    J = geom["iMM"]::Array{Int, 1}
    for jX in J
        # initialise force to 0
        Frc[:,jX] = 0.0
        # loop over neighbouring sites
        for n in 1:nneig
            # get the X-index of the current neighbour
            kX = geom.I[ geom.X2I[1,jX] + sp.stencil[1,n], geom.X2I[2,jX] + sp.stencil[2,n] ]
            # evaluate the term (devectorized for performance)
            #  f[:] += slice(sp.coeffs, :, :, n) * Y[:, kX]
            for i = 1:d1, j = 1:d2
                Frc[i,jX] += sp.coeffs[i, j, n] * Y[j, kX]
            end
        end
    end
end

@time TB.evalSiteFunction!(frcMM, geom, Y,  Frc)

elapsed time: 0.549490861 seconds (166123912 bytes allocated, 21.85% gc time)



1   abstractarray.jl; checkbounds; line: 62
1   array.jl; getindex; line: 247
420 task.jl; anonymous; line: 340
 420 .../IJulia/src/IJulia.jl; eventloop; line: 123
  420 ...rc/execute_request.jl; execute_request_0x535c5df2; line: 140
   420 loading.jl; include_string; line: 97
420 profile.jl; anonymous; line: 14
 45  ...qmmm/TBmultiscale.jl; evalSiteFunction!; line: 198
  1 .../lib/julia/sys.dylib; +; (unknown line)
  1 array.jl; getindex; line: 247
 1   ...qmmm/TBmultiscale.jl; evalSiteFunction!; line: 202
 372 ...qmmm/TBmultiscale.jl; evalSiteFunction!; line: 203
  3 array.jl; getindex; line: 247
  8 array.jl; setindex!; line: 308



Re: [julia-users] Re: How to heapify an array?

2015-01-03 Thread Rodolfo Santana
Hi Tim,

Thanks for the reply! Yes, I realize this is what 'heapify!' does. 

I first do x= rand(10). Then, I try Collections.heapify!(x) and I get the 
error:

ERROR: heapify! not defined

Maybe someone can post a screenshot of how to use this function in Julia? I 
couldn't find an example on the web using this function.

Thanks, 

-Rodolfo

On Saturday, January 3, 2015 5:40:22 AM UTC-6, Tim Holy wrote:
>
> Also post any error messages, etc. 
>
> You realize that `heapify!` returns an array, just with elements in a 
> different 
> order? The smallest element will be first. 
>
> --Tim 
>
> On Saturday, January 03, 2015 03:12:39 AM Rodolfo Santana wrote: 
> > Here it is: 
> > 
> > Julia Version 0.3.4 
> > 
> > Commit 3392026* (2014-12-26 10:42 UTC) 
> > 
> > Platform Info: 
> > 
> >   System: Darwin (x86_64-apple-darwin13.4.0) 
> > 
> >   CPU: Intel(R) Core(TM) i7 CPU   M 620  @ 2.67GHz 
> > 
> >   WORD_SIZE: 64 
> > 
> >   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Nehalem) 
> > 
> >   LAPACK: libopenblas 
> > 
> >   LIBM: libopenlibm 
> > 
> >   LLVM: libLLVM-3.3 
> > 
> > On Saturday, January 3, 2015 5:06:42 AM UTC-6, ele...@gmail.com wrote: 
> > > Strange, works for me, maybe post the whole versioninfo() output that 
> the 
> > > experts can look at. 
> > > 
> > > Cheers 
> > > Lex 
> > > 
> > > On Saturday, January 3, 2015 8:58:21 PM UTC+10, Rodolfo Santana wrote: 
> > >> Hi Lex, 
> > >> 
> > >> I am using Version 0.3.4 
> > >> 
> > >> On Saturday, January 3, 2015 4:55:06 AM UTC-6, ele...@gmail.com 
> wrote: 
> > >>> Whats your versioninfo() 
> > >>> 
> > >>> On Saturday, January 3, 2015 8:53:30 PM UTC+10, Rodolfo Santana 
> wrote: 
> >  Thanks for the reply! I have tried that, but I get: 
> >  
> >  ERROR: heapify! not defined 
> >  
> >  On Saturday, January 3, 2015 4:48:27 AM UTC-6, Ivar Nesje wrote: 
> > > *Collections.heapify!(x)* 
> > > 
> > > kl. 11:22:24 UTC+1 lørdag 3. januar 2015 skrev Rodolfo Santana 
> > > 
> > > følgende: 
> > >> Let's say I have an array x=rand(10) . How do I use the heapify! 
> > >> function to heapify x? 
> > >> 
> > >> Thanks! 
> > >> -Rodolfo 
>
>

Re: [julia-users] Re: How to heapify an array?

2015-01-03 Thread Tim Holy
Also post any error messages, etc.

You realize that `heapify!` returns an array, just with elements in a different 
order? The smallest element will be first.
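
To make that concrete, a small sketch (assuming 0.3's Base.Collections): after 
heapify! the first element is the minimum, and repeatedly calling heappop! 
drains the array in ascending order.

x = rand(10)
Collections.heapify!(x)                          # in place; x[1] is now the minimum
popped = [Collections.heappop!(x) for i in 1:10]
issorted(popped)                                 # -> true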

--Tim

On Saturday, January 03, 2015 03:12:39 AM Rodolfo Santana wrote:
> Here it is:
> 
> Julia Version 0.3.4
> 
> Commit 3392026* (2014-12-26 10:42 UTC)
> 
> Platform Info:
> 
>   System: Darwin (x86_64-apple-darwin13.4.0)
> 
>   CPU: Intel(R) Core(TM) i7 CPU   M 620  @ 2.67GHz
> 
>   WORD_SIZE: 64
> 
>   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Nehalem)
> 
>   LAPACK: libopenblas
> 
>   LIBM: libopenlibm
> 
>   LLVM: libLLVM-3.3
> 
> On Saturday, January 3, 2015 5:06:42 AM UTC-6, ele...@gmail.com wrote:
> > Strange, works for me, maybe post the whole versioninfo() output that the
> > experts can look at.
> > 
> > Cheers
> > Lex
> > 
> > On Saturday, January 3, 2015 8:58:21 PM UTC+10, Rodolfo Santana wrote:
> >> Hi Lex,
> >> 
> >> I am using Version 0.3.4
> >> 
> >> On Saturday, January 3, 2015 4:55:06 AM UTC-6, ele...@gmail.com wrote:
> >>> Whats your versioninfo()
> >>> 
> >>> On Saturday, January 3, 2015 8:53:30 PM UTC+10, Rodolfo Santana wrote:
>  Thanks for the reply! I have tried that, but I get:
>  
>  ERROR: heapify! not defined
>  
>  On Saturday, January 3, 2015 4:48:27 AM UTC-6, Ivar Nesje wrote:
> > *Collections.heapify!(x)*
> > 
> > kl. 11:22:24 UTC+1 lørdag 3. januar 2015 skrev Rodolfo Santana
> > 
> > følgende:
> >> Let's say I have an array x=rand(10) . How do I use the heapify!
> >> function to heapify x?
> >> 
> >> Thanks!
> >> -Rodolfo



[julia-users] Re: How to heapify an array?

2015-01-03 Thread Rodolfo Santana
Here it is:

Julia Version 0.3.4

Commit 3392026* (2014-12-26 10:42 UTC)

Platform Info:

  System: Darwin (x86_64-apple-darwin13.4.0)

  CPU: Intel(R) Core(TM) i7 CPU   M 620  @ 2.67GHz

  WORD_SIZE: 64

  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Nehalem)

  LAPACK: libopenblas

  LIBM: libopenlibm

  LLVM: libLLVM-3.3

On Saturday, January 3, 2015 5:06:42 AM UTC-6, ele...@gmail.com wrote:
>
> Strange, works for me, maybe post the whole versioninfo() output that the 
> experts can look at.
>
> Cheers
> Lex
>
> On Saturday, January 3, 2015 8:58:21 PM UTC+10, Rodolfo Santana wrote:
>>
>> Hi Lex,
>>
>> I am using Version 0.3.4
>>
>> On Saturday, January 3, 2015 4:55:06 AM UTC-6, ele...@gmail.com wrote:
>>>
>>> Whats your versioninfo()
>>>
>>>
>>> On Saturday, January 3, 2015 8:53:30 PM UTC+10, Rodolfo Santana wrote:

 Thanks for the reply! I have tried that, but I get:

 ERROR: heapify! not defined

 On Saturday, January 3, 2015 4:48:27 AM UTC-6, Ivar Nesje wrote:
>
> *Collections.heapify!(x)*
>
> kl. 11:22:24 UTC+1 lørdag 3. januar 2015 skrev Rodolfo Santana 
> følgende:
>>
>> Let's say I have an array x=rand(10) . How do I use the heapify! 
>> function to heapify x?
>>
>> Thanks!
>> -Rodolfo
>>
>>
>>

[julia-users] Re: How to heapify an array?

2015-01-03 Thread elextr
Strange, works for me, maybe post the whole versioninfo() output that the 
experts can look at.

Cheers
Lex

On Saturday, January 3, 2015 8:58:21 PM UTC+10, Rodolfo Santana wrote:
>
> Hi Lex,
>
> I am using Version 0.3.4
>
> On Saturday, January 3, 2015 4:55:06 AM UTC-6, ele...@gmail.com wrote:
>>
>> Whats your versioninfo()
>>
>>
>> On Saturday, January 3, 2015 8:53:30 PM UTC+10, Rodolfo Santana wrote:
>>>
>>> Thanks for the reply! I have tried that, but I get:
>>>
>>> ERROR: heapify! not defined
>>>
>>> On Saturday, January 3, 2015 4:48:27 AM UTC-6, Ivar Nesje wrote:

 *Collections.heapify!(x)*

 kl. 11:22:24 UTC+1 lørdag 3. januar 2015 skrev Rodolfo Santana følgende:
>
> Let's say I have an array x=rand(10) . How do I use the heapify! 
> function to heapify x?
>
> Thanks!
> -Rodolfo
>
>
>

[julia-users] Re: How to heapify an array?

2015-01-03 Thread Rodolfo Santana
Hi Lex,

I am using Version 0.3.4

On Saturday, January 3, 2015 4:55:06 AM UTC-6, ele...@gmail.com wrote:
>
> Whats your versioninfo()
>
>
> On Saturday, January 3, 2015 8:53:30 PM UTC+10, Rodolfo Santana wrote:
>>
>> Thanks for the reply! I have tried that, but I get:
>>
>> ERROR: heapify! not defined
>>
>> On Saturday, January 3, 2015 4:48:27 AM UTC-6, Ivar Nesje wrote:
>>>
>>> *Collections.heapify!(x)*
>>>
>>> kl. 11:22:24 UTC+1 lørdag 3. januar 2015 skrev Rodolfo Santana følgende:

 Let's say I have an array x=rand(10) . How do I use the heapify! 
 function to heapify x?

 Thanks!
 -Rodolfo




[julia-users] Re: How to heapify an array?

2015-01-03 Thread elextr
Whats your versioninfo()


On Saturday, January 3, 2015 8:53:30 PM UTC+10, Rodolfo Santana wrote:
>
> Thanks for the reply! I have tried that, but I get:
>
> ERROR: heapify! not defined
>
> On Saturday, January 3, 2015 4:48:27 AM UTC-6, Ivar Nesje wrote:
>>
>> *Collections.heapify!(x)*
>>
>> kl. 11:22:24 UTC+1 lørdag 3. januar 2015 skrev Rodolfo Santana følgende:
>>>
>>> Let's say I have an array x=rand(10) . How do I use the heapify! 
>>> function to heapify x?
>>>
>>> Thanks!
>>> -Rodolfo
>>>
>>>
>>>

[julia-users] Re: How to heapify an array?

2015-01-03 Thread Rodolfo Santana
Thanks for the reply! I have tried that, but I get:

ERROR: heapify! not defined

On Saturday, January 3, 2015 4:48:27 AM UTC-6, Ivar Nesje wrote:
>
> *Collections.heapify!(x)*
>
> kl. 11:22:24 UTC+1 lørdag 3. januar 2015 skrev Rodolfo Santana følgende:
>>
>> Let's say I have an array x=rand(10) . How do I use the heapify! function 
>> to heapify x?
>>
>> Thanks!
>> -Rodolfo
>>
>>
>>

[julia-users] Re: How to heapify an array?

2015-01-03 Thread elextr


On Saturday, January 3, 2015 8:22:24 PM UTC+10, Rodolfo Santana wrote:
>
> Let's say I have an array x=rand(10) . How do I use the heapify! function 
> to heapify x?
>


Collections.heapify!(x)

Cheers
Lex
 

>
> Thanks!
> -Rodolfo
>
>
>

[julia-users] Re: How to heapify an array?

2015-01-03 Thread Ivar Nesje
 

*Collections.heapify!(x)*

kl. 11:22:24 UTC+1 lørdag 3. januar 2015 skrev Rodolfo Santana følgende:
>
> Let's say I have an array x=rand(10) . How do I use the heapify! function 
> to heapify x?
>
> Thanks!
> -Rodolfo
>
>
>

[julia-users] How to heapify an array?

2015-01-03 Thread Rodolfo Santana
Let's say I have an array x=rand(10) . How do I use the heapify! function 
to heapify x?

Thanks!
-Rodolfo




[julia-users] Re: Warning: imported binding for transpose overwritten in module __anon__

2015-01-03 Thread Xin Jin
I also encountered it. Same question as the OP asked.

On Sunday, February 23, 2014 6:47:55 AM UTC-8, Uwe Fechner wrote:
>
> Hello,
>
> if I enter:
>
> using PyCall
> @pyimport pylab as plt
>
> I get the warning:
> Warning: imported binding for transpose overwritten in module __anon__
>
> I am using Julia 0.21 on Ubuntu 12.04 64 bit with numpy 1.8.0 and
> matplotlib 1.3.1.
>
> Can I safely ignore this warning, or could this cause any problems later 
> on?
>
> Plotting currently works for me (on one of my 5 machines) as long as I 
> stick to the TkAgg backend.
>
> Best regards:
>
> Uwe Fechner
>
>

[julia-users] Package HTTPClient 'get' method ostream issue

2015-01-03 Thread C. Bryan Daniels


I am using the 'get' method from the 'HTTPClient' package, but am having 
trouble properly configuring the output stream. Specifically, the API of a 
particular service responds to a 'get' call with a stream of json objects. 
The code snippets below work as expected by returning a continuous stream of 
json objects; I can terminate the stream with Ctrl-C. What I really want is 
to be able to get a specific number of json objects. I've played around with 
the options ostream="some-file", ostream=IOBuffer() and blocking=false. This 
is probably a basic question, but any help in solving it would be 
appreciated. Thanks for any advice.



options_get = HTTPClient.HTTPC.RequestOptions(headers=headers, content_type="application/json", ostream=STDOUT)


function get(lb::LittleBit)
    HTTPClient.HTTPC.get(lb.url_get, lb.options_get)
end
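
One generic fallback, independent of HTTPClient's exact options (and assuming 
the service delimits its json objects with newlines): point ostream at an IO 
you can also read from, and stop after a fixed number of lines. I am not sure 
whether HTTPClient flushes into ostream incrementally in non-blocking mode, so 
treat the helper below as a sketch of the idea rather than tested HTTPClient 
usage.

# collect at most n newline-delimited json objects from any readable stream
function take_json_lines(io::IO, n::Integer)
    objs = String[]
    while length(objs) < n && !eof(io)
        push!(objs, chomp(readline(io)))
    end
    return objs
end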