Re: [julia-users] Re: new packages: PrimeSieve, ZChop, DeepConvert, PermutationsA

2014-12-23 Thread Hans W Borchers
> No, you've got a typo. But be careful fixing it,
> it may eat up all your memory.

A typo from copying. The problem is that the memory error occurs on the first
call, and on a second call it returns an empty array.

julia> genprimes(1841378967856, 18500)
ERROR: MemoryError()
 in primescopy at 
/home/hwb/.julia/v0.3/PrimeSieve/src/primesieve_c.jl:40
 in genprimes at /home/hwb/.julia/v0.3/PrimeSieve/src/primesieve_c.jl:60

julia> genprimes(1841378967856, 18500)
0-element Array{Int64,1}

Well, I now see that you mention this in the "Bugs" section of the README 
file.

>> > Also, I don't believe this result:
> I'm always for freedom of conscience!

Of course, it depends on the definition. For me [7, 11, 13, 17, 19, 23] is the
prototype, and then the next 6-tuples are:

[97,101,103,107,109,113]
[16057,16061,16063,16067,16069,16073]
[19417,19421,19423,19427,19429,19433]
[43777,43781,43783,43787,43789,43793]
[1091257,1091261,1091263,1091267,1091269,1091273]
[1615837,1615841,1615843,1615847,1615849,1615853]
[1954357,1954361,1954363,1954367,1954369,1954373]

and indeed there are none in the interval [10, 20].
I wonder what the library means by tuplets. Is that documented somewhere?

>> Octets of prime numbers are very interesting, even in theory,
>> but the function 'countprimes' does not allow searching for them:
> I didn't write the library, I just wrapped it.
> I suspect it's a bit of work to go to 8.

Don't worry. I took an old script of mine that computed only the first 10-20
prime octets (in R or Python) and converted it to Julia (utilizing Julia's
'isprime'). Overnight it computed *all* prime octets up to 10^12,
and there are hundreds of them, the last one being

[99452940701, 99452940703, 99452940707, 99452940709, 
 99452940731, 99452940733, 99452940737, 99452940739]

As you said, the 'isprime' function in Julia is really fast.
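
A minimal sketch of that kind of brute-force search (not Hans's actual script;
the offset pattern and the name find_tuples are mine, taken from the octet
quoted above, and other octet patterns exist):

# using Primes   # needed on current Julia; in 0.3 'isprime' lives in Base
const OCTET_OFFSETS = [0, 2, 6, 8, 30, 32, 36, 38]

function find_tuples(lo, hi, offsets=OCTET_OFFSETS)
    found = Vector{Int}[]              # each hit is the full tuple of primes
    p = lo | 1                         # start at an odd candidate
    while p + offsets[end] <= hi
        if all(d -> isprime(p + d), offsets)
            push!(found, [p + d for d in offsets])
        end
        p += 2                         # an even p can never start such a tuple
    end
    return found
end

# e.g. find_tuples(10^6, 2*10^6); the first octet Hans mentions lies shortly above 10^6.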



[julia-users] convert for parametric type unions

2014-12-23 Thread Andreas Noack
convert fails for parametric type unions, e.g.

julia> immutable TC{T} end

julia> immutable TD{T} end

julia> typealias TCD{T} Union(TC{T}, TD{T})
Union(TC{T},TD{T})

julia> convert{T}(::Type{TCD{T}}, x::TC) = "Andreas"
convert (generic function with 493 methods)

julia> convert(TCD{Float64}, TC{Float64}())
TC{Float64}()

julia> convert(TCD{Float64}, TC{Float32}())
ERROR: `convert` has no method matching
convert(::Type{Union(TD{Float64},TC{Float64})}, ::TC{Float32})
Closest candidates are:
  convert(::Type{T}, ::T)
  convert(::Type{Nullable{T}}, ::T)
  convert(::Type{Union(TC{T},TD{T})}, ::TC{T})


but I can define convert methods for non-parametric type unions, e.g.

julia> immutable TA end

julia> immutable TB end

julia> typealias TAB Union(TA, TB)
Union(TA,TB)

julia> import Base.convert

julia> convert(::Type{TAB}, x::TA) = "Andreas"
convert (generic function with 493 methods)

julia> convert(TAB, TA())
"Andreas"

and it also works fine for abstract types

julia> abstract TEF{T}

julia> immutable TE{T} <: TEF{T} end

julia> immutable TF{T} <: TEF{T} end

julia> convert{T}(::Type{TEF{T}}, TE) = "Andreas"
convert (generic function with 494 methods)

julia> convert(TEF{Float64}, TE{Float32}())
"Andreas"

Is this a limitation of parametric type unions, or is it a bug?


[julia-users] Re: How can I create a simple Graph using Graphs.jl?

2014-12-23 Thread Todd Leo

>
> Then you can use the methods described at
> http://graphsjl-docs.readthedocs.org/en/latest/graphs.html#graph
> to extract/operate on edges and vertices and so on.


Any ideas on how to extract a particular vertex/edge by its index, other than 
extracting all vertices/edges via vertices()/edges() and checking the index? The 
latter approach takes O(n) time.

On Sunday, June 1, 2014 10:30:41 PM UTC+8, Alex wrote:
>
> Hi Paulo,
>
> you can get your graph as follows
> julia> using Graphs
>
> julia> g = simple_graph(4)
> Directed Graph (4 vertices, 0 edges)
>
> julia> add_edge!(g, 1, 2)
> edge [1]: 1 -- 2
>
> julia> add_edge!(g, 2, 4)
> edge [2]: 2 -- 4
>
> julia> add_edge!(g, 4, 3)
> edge [3]: 4 -- 3
>
> julia> add_edge!(g, 3, 1)
> edge [4]: 3 -- 1
>
> julia> add_edge!(g, 1, 4)
> edge [5]: 1 -- 4
>
> julia> add_edge!(g, 3, 2)
> edge [6]: 3 -- 2
>
> julia> g
> Directed Graph (4 vertices, 6 edges)
>
> Then you can use the methods described at 
> http://graphsjl-docs.readthedocs.org/en/latest/graphs.html#graph to 
> extract/operate on edges and vertices and so on. 
>
> Hope that helps,
>
> Alex.
>
> On Sunday, 1 June 2014 16:17:02 UTC+2, Paulo Castro wrote:
>>
>> Hi guys,
>>
>> Sorry about making this kind of question, but even after reading the 
>> documentation, I don't know how to create the simplest graph object using 
>> Graphs.jl. For example, I want to create the following graph:
>>
>>
>> 
>>
>> Can someone give me the directions to start?
>>
>

Re: [julia-users] C code in a Julia package

2014-12-23 Thread Elliot Saba
Hey John,

If you can explain a bit about what you want to do with your libraries,
that would help.  What I understand from you so far is that:

A) You have two C++ libraries that you are able to download and compile

B) You want to write a wrapper that sits between Julia and the C++, written
in C but used by Julia

Is that correct?
-E

On Tue, Dec 23, 2014 at 5:43 PM,  wrote:

> Can someone point me to examples of C code in Julia packages ? I am using
> BinDeps to download two C++ libraries. After a lot of blind trial and error
> I have a build.jl that does what I need. But I need to write a C wrapper
> for the C++ libraries.  One idea is to have BinDeps handle the build, if I
> can fool it into thinking it is an external library. But, I don't know how
> to do that. BinDeps seems to want to download something. I find it rather
> opaque. Otherwise I need to find a way to write most of what BinDeps does
> by hand for my C code: find paths to top level of the package and various
> sub dirs; do change dirs, making, copying, linking at run time, etc. I
> spent quite a bit of time searching julia-user, and -dev, grepping and
> finding in package trees, reading docs and source code,  etc. no luck.
> Thanks.
>
> --John
>
>


Re: [julia-users] Re: convert Matlab code into Julia

2014-12-23 Thread Jameson Nash
There is a package available for reading matlab .mat files (
https://github.com/simonster/MAT.jl)

For basic `dir()` functionality, Julia's `readdir()` may be sufficient for
you. For more complicated usages, Glob.jl pattern matching may come in
handy (https://github.com/vtjnash/Glob.jl)
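
For example, a rough sketch along those lines (not from this thread; the path
and the eight-file limit just mirror the question, and MAT.jl's matread returns
a Dict of variable name => value):

using MAT                                       # Pkg.add("MAT")

loadpath = "E:\\USA\\Data\\Data111"             # backslashes must be escaped in Julia strings
cd(loadpath)

matfiles = filter(f -> endswith(f, ".mat"), readdir())   # replaces Matlab's dir("*.mat")

for fn in matfiles[1:min(8, length(matfiles))]
    data = matread(fn)                          # Dict with the variables from the .mat file
    # "run additional codes" on `data` here
end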

On Tue Dec 23 2014 at 10:41:36 PM DP  wrote:

> Regarding the .mat files, one thing I can suggest is:
> write a Matlab script to export the .mat data to a .csv file,
> then read the .csv file from your Julia code.
>
>
> On Wednesday, December 24, 2014 8:00:35 AM UTC+5:30, jspark wrote:
>>
>>
>> Hi,
>>
>> I am a *two days* old beginner with *Julia* and it seemed I could translate
>> *Matlab* code into Julia in a second (?), but the reality is always
>> different...
>> The goal of the original *Matlab* code was to select eight time-series data
>> files (Matlab format) in a folder (directory) and run the data analysis on
>> them one by one with a "for loop".
>> It now looks like *Julia* code but it is not yet working, because 1) it
>> still uses *Matlab* functions like dir() and 2) it needs to read data files
>> in *.mat* format.
>> I get an error at line 5 (dirData = ...); could you help me out from there?
>>
>> #set file path
>>
>> loadPath="E:\USA\Data\Data111" ;
>>
>> #select only Matlab files
>>
>> cd(loadPath)
>>
>> dirData = dir("*.mat"); ## Selected mat files
>> fileNames = {dirData.name}; ## named the list of the files as
>> fileNames
>>
>> #run loop
>>
>> for s=1:8
>>
>> FN=fileNames(s) # select one file
>> DATA=load(char(FN))
>>
>> "run additional codes"
>>
>> end;
>>
>>
>


[julia-users] Re: convert Matlab code into Julia

2014-12-23 Thread DP
Regarding the .mat files, one thing I can suggest is:
write a Matlab script to export the .mat data to a .csv file,
then read the .csv file from your Julia code.

On Wednesday, December 24, 2014 8:00:35 AM UTC+5:30, jspark wrote:
>
>
> Hi,
>
> I am a *two days* old beginner with *Julia* and it seemed I could translate
> *Matlab* code into Julia in a second (?), but the reality is always
> different...
> The goal of the original *Matlab* code was to select eight time-series data
> files (Matlab format) in a folder (directory) and run the data analysis on
> them one by one with a "for loop".
> It now looks like *Julia* code but it is not yet working, because 1) it
> still uses *Matlab* functions like dir() and 2) it needs to read data files
> in *.mat* format.
> I get an error at line 5 (dirData = ...); could you help me out from there?
>
> #set file path
>
> loadPath="E:\USA\Data\Data111" ;
>
> #select only Matlab files
>
> cd(loadPath)
>
> dirData = dir("*.mat"); ## Selected mat files
> fileNames = {dirData.name}; ## named the list of the files as fileNames
>
> #run loop 
>
> for s=1:8 
>   
> FN=fileNames(s) # select one file 
> DATA=load(char(FN))
>
> "run additional codes"
>
> end;
>
>


[julia-users] convert Matlab code into Julia

2014-12-23 Thread jspark

Hi,

I am a *two days* old beginner with *Julia* and it seemed I could translate
*Matlab* code into Julia in a second (?), but the reality is always
different...
The goal of the original *Matlab* code was to select eight time-series data
files (Matlab format) in a folder (directory) and run the data analysis on
them one by one with a "for loop".
It now looks like *Julia* code but it is not yet working, because 1) it
still uses *Matlab* functions like dir() and 2) it needs to read data files in
*.mat* format.
I get an error at line 5 (dirData = ...); could you help me out from there?

#set file path

loadPath="E:\USA\Data\Data111" ;

#select only Matlab files

cd(loadPath)

dirData = dir("*.mat"); ## Selected mat files
fileNames = {dirData.name}; ## named the list of the files as fileNames

#run loop 

for s=1:8 
  
FN=fileNames(s) # select one file 
DATA=load(char(FN))

"run additional codes"

end;
   


[julia-users] C code in a Julia package

2014-12-23 Thread lapeyre . math122a
Can someone point me to examples of C code in Julia packages ? I am using 
BinDeps to download two C++ libraries. After a lot of blind trial and error 
I have a build.jl that does what I need. But I need to write a C wrapper 
for the C++ libraries.  One idea is to have BinDeps handle the build, if I 
can fool it into thinking it is an external library. But, I don't know how 
to do that. BinDeps seems to want to download something. I find it rather 
opaque. Otherwise I need to find a way to write most of what BinDeps does 
by hand for my C code: find paths to top level of the package and various 
sub dirs; do change dirs, making, copying, linking at run time, etc. I 
spent quite a bit of time searching julia-user, and -dev, grepping and 
finding in package trees, reading docs and source code,  etc. no luck. 
Thanks.

--John
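
A rough sketch (not from this thread) of one way to let BinDeps drive a local
build through a SimpleBuild provider, so the wrapper does not have to be
"downloaded" at all; the names libwrapper and deps/src/wrapper are hypothetical,
and the exact provider/install syntax may differ between BinDeps versions:

using BinDeps
@BinDeps.setup

libwrapper = library_dependency("libwrapper")

prefix = joinpath(BinDeps.depsdir(libwrapper), "usr")    # deps/usr of this package
srcdir = joinpath(BinDeps.depsdir(libwrapper), "src", "wrapper")

provides(SimpleBuild,
    (@build_steps begin
        CreateDirectory(joinpath(prefix, "lib"))
        @build_steps begin
            ChangeDirectory(srcdir)     # cd into the C wrapper sources
            MakeTargets()               # run `make` there (Makefile installs into prefix/lib)
        end
    end), libwrapper)

@BinDeps.install [:libwrapper => :libwrapper]   # 0.3-era syntax; newer BinDeps uses Dict(...)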



[julia-users] Supporting @inbounds with custom array type?

2014-12-23 Thread Sheehan Olver
I'm making a custom "BandedMatrix" data type that overrides setindex!, and 
I want to use @inbounds:

@inbounds A[k,j]=4


The issue is that @inbounds doesn't seem to propagate "inside" the setindex! 
call.  Any ideas on how to get @inbounds to work here?  My workaround is to add 
another function, ibsetindex!, but this makes the assignment syntax uglier.  
Code is below.




immutable BandedMatrix{T}
data::Matrix{T}
a::Int
b::Int
n::Int #Number of rows
end
function BandedMatrix{T}(data::Matrix{T},a,b,n)
@assert size(data,1)==b-a+1
BandedMatrix{T}(data,a,b,n)
end


bazeros{T}(::Type{T},a::Integer,b,n,m)=BandedMatrix(zeros(T,b-a+1,m),a,b,n)
bazeros{T}(::Type{T},a::Integer,b,n)=bazeros(T,a,b,n,n)
bazeros(a::Integer,b,n,m)=bazeros(Float64,a,b,n,m)
bazeros(a::Integer,b,n)=bazeros(Float64,a,b,n,n)

Base.size(A::BandedMatrix,k)=ifelse(k==1,A.n,size(A.data,2))
Base.size(A::BandedMatrix)=A.n,size(A.data,2)

usgetindex(A::BandedMatrix,k,j::Integer)=A.data[k-j+A.b+1,j]
getindex{T}(A::BandedMatrix{T},k::Integer,j::Integer)=(A.a≤j-k≤A.b)?usgetindex(A,k,j):(k≤A.n?zero(T):throw(BoundsError()))
getindex(A::BandedMatrix,kr::Range,j::Integer)=A.a≤j-kr[end]≤j-kr[1]≤A.b?usgetindex(A,kr,j):[A[k,j] for k=kr]
getindex(A::BandedMatrix,k::Integer,jr::Range)=[A[k,j] for j=jr]
getindex(A::BandedMatrix,kr::Range,jr::Range)=[A[k,j] for k=kr,j=jr]
Base.full(A::BandedMatrix)=A[1:size(A,1),1:size(A,2)]


ibsetindex!(A::BandedMatrix,v,k,j::Integer)=(@inbounds A.data[k-j+A.b+1,j]=v)
setindex!(A::BandedMatrix,v,k,j::Integer)=(A.data[k-j+A.b+1,j]=v)

function setindex!(A::BandedMatrix,v,kr::Range,jr::Range)
for j in jr
A[kr,j]=slice(v,:,j)
end
end
function setindex!(A::BandedMatrix,v,k::Integer,jr::Range)
for j in jr
A[k,j]=v[j]
end
end
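
(Not from the thread: on later Julia versions, 0.5 and up, the intended pattern
for exactly this situation is an inlined setindex! whose bounds check is marked
with @boundscheck, so that @inbounds at the call site elides it and no separate
ibsetindex! is needed. A sketch, reusing the field names above:)

Base.@propagate_inbounds function Base.setindex!(A::BandedMatrix, v, k::Integer, j::Integer)
    @boundscheck (1 ≤ k ≤ A.n && A.a ≤ j-k ≤ A.b) || throw(BoundsError(A, (k, j)))
    A.data[k-j+A.b+1, j] = v             # write into the banded storage
end

# A[k,j] = 4 performs the check; @inbounds A[k,j] = 4 skips it.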




Re: [julia-users] Re: Half precision math operations

2014-12-23 Thread Jiahao Chen
Related issue: https://github.com/JuliaLang/julia/issues/5942


Re: [julia-users] Re: new packages: PrimeSieve, ZChop, DeepConvert, PermutationsA

2014-12-23 Thread John Lapeyre

On 12/22/2014 07:19 PM, Hans W Borchers wrote:

> This approach to counting prime numbers based on tables is very
> useful (for me), thanks.
>
> There are more than 305 million primes in the interval
> [1841378967856, 18500]. What am I doing wrong:
>
> julia> genprimes(1841378967856, 18500]
> 0-element Array{Int64,1}
>
> Or is the sieve getting too large for numbers > 10^12 ?

  No, you've got a typo. But be careful fixing it,
  it may eat up all your memory.




> Also, I don't believe this result:
>
> julia> countprimes(10, 20, tuplet = 6)
> 0


I'm always for freedom of conscience!




> Octets of prime numbers are very interesting, even in theory,
> but the function 'countprimes' does not allow searching for them:
>
> julia> countprimes(100, 200, tuplet = 8)
> ERROR: tuplet must be between 1 and 6
>  in countprimes at 
/home/hwb/.julia/v0.3/PrimeSieve/src/wrappers.jl:37

>
> Could you allow at least 'tuplet=8'?
> I know that the first octett is shortly above 1 million.

I didn't write the library, I just wrapped it. I suspect it's
a bit of work to go to 8.

-- John



Re: [julia-users] Optional functionality in submodule

2014-12-23 Thread Avik Sengupta
> Thanks, so the code is only executed when the module was previously loaded
> right?

No, I believe that if the module is loaded in the future, the code is 
executed then. The code is executed immediately if the module has been 
previously loaded. See Mike's comments at that issue: 
https://github.com/JuliaLang/julia/issues/2025#issuecomment-67733391

Regards
-
Avik

On Tuesday, 23 December 2014 18:57:37 UTC, Tobias Knopp wrote:
>
> Thanks, so the code is only executed when the module was previously loaded 
> right?
>
On Tuesday, 23 December 2014 16:10:55 UTC+1, tshort wrote:
>>
>> Not quite what you're asking, but see some discussions in this issue:
>>
>> https://github.com/JuliaLang/julia/issues/2025
>>
>> Particularly, see the note on Mike Innes's require macro. Here is an 
>> example in action for supporting multiple plotting mechanisms:
>>
>>
>> https://github.com/one-more-minute/Jewel.jl/blob/b0e8c184f57e8e60c83e1b9ef49511b08c88f16f/src/LightTable/display/objects.jl#L168-L170
>>
>>
>>
>> On Tue, Dec 23, 2014 at 9:42 AM, Tobias Knopp  
>> wrote:
>>
>>> Sorry if this has already been answered. It's about optional plotting 
>>> functionality in a package. More precisely I want to have some Winston / 
>>> Gtk based plotting things and some PyPlot plotting routines.
>>>
>>> Is there a possibility to have submodules in a package so that the main 
>>> module can be used without the subfunctionality?
>>>
>>> e.g.
>>>
>>> using Foo 
>>>
>>> works and does not require Winston/Gtk/PyPlot
>>>
>>> using Foo, FooGUI
>>>
>>> does require Winston and Gtk and
>>>
>>> using Foo, FooMyBeautifulPlots
>>>
>>> requires PyPlot?
>>>
>>> Thanks
>>>
>>> Tobi
>>>
>>
>>

Re: [julia-users] Re: Half precision math operations

2014-12-23 Thread Tobias Knopp
Interesting, but it seems that the common SIMD instructions are indeed only 
available for Float32 and larger. That would have been a factor of 2 that could 
potentially have been reached.

On Tuesday, 23 December 2014 19:10:50 UTC+1, Erik Schnetter wrote:
>
> Doing computation with Float16 is slow -- most CPUs only have Float32 
> operations implemented in hardware, as well as conversion operations 
> between Float16 and Float32. The only efficient way is to keep the 
> values as Float32 for as long as possible, and only convert to Float16 
> when storing back to memory. 
>
> With a macro such as `@fastmath`, Julia could automatically convert 
> pure Float16 operations to Float32 operations. Otherwise, if Julia 
> offered Float16 operations, it would be required to round in between 
> each operation if the operations are performed as Float32. Or maybe 
> there is language in the standard that allows higher precision 
> operations? I believe this is the case for 64-bit and 80-bit 
> operations. If so, Julia could offer efficient Float16 operations, 
> with probably very surprising semantics: If one manually introduces a 
> temporary variable of type Float16, the result would change... 
>
> -erik 
>
>
> On Tue, Dec 23, 2014 at 9:56 AM, Stefan Karpinski 
> > wrote: 
> > Doing computation with Float16s is not really reasonable - the IEEE standard 
> > describes this as a format that is only for storage. 
> > 
> > 
> > On Dec 23, 2014, at 9:23 AM, Tobias Knopp  > 
> > wrote: 
> > 
> > I suppose that the fft limitation is due to fftw supporting only float32 
> > and float64. 
> > 
> > I am not sure if simd supports float16. If not you should not expect any 
> > speed gains. 
> > 
> > Cheers 
> > 
> > Tobi 
> > 
> > On Tuesday, 23 December 2014 11:56:46 UTC+1, Mark B wrote: 
> >> 
> >> I was wondering how Julia supports half precision operations? It seems it 
> >> does (more or less) but I'm not sure if there's a lot of type conversion 
> >> going on behind the scenes with associated overhead. Would it be more 
> >> efficient to store and crunch on Float32s? 
> >> 
> >> julia> rand(Float16,2,2) * rand(Float16,2,2) 
> >> 2x2 Array{Float16,2}: 
> >>  0.58301  1.0508 
> >>  0.48145  0.73438 
> >> 
> >> julia> sparse(rand(Float16,2,2)) 
> >> 2x2 sparse matrix with 4 Float16 entries: 
> >> [1, 1]  =  0.448 
> >> [2, 1]  =  0.15771 
> >> [1, 2]  =  0.79932 
> >> [2, 2]  =  0.50928 
> >> 
> >> julia> fft(rand(Float16,2,2)) 
> >> 2x2 Array{Complex{Float64},2}: 
> >>1.76245+0.0im  -0.0603027+0.0im 
> >>  -0.129639+0.0im   -0.390869+0.0im 
> >> 
> >> Oops for the last one - is fft always double precision? 
> >> 
> > 
>
>
>
> -- 
> Erik Schnetter > 
> http://www.perimeterinstitute.ca/personal/eschnetter/ 
>


Re: [julia-users] Optional functionality in submodule

2014-12-23 Thread Tobias Knopp
Thanks, so the code is only executed when the module was previously loaded 
right?

On Tuesday, 23 December 2014 16:10:55 UTC+1, tshort wrote:
>
> Not quite what you're asking, but see some discussions in this issue:
>
> https://github.com/JuliaLang/julia/issues/2025
>
> Particularly, see the note on Mike Innes's require macro. Here is an 
> example in action for supporting multiple plotting mechanisms:
>
>
> https://github.com/one-more-minute/Jewel.jl/blob/b0e8c184f57e8e60c83e1b9ef49511b08c88f16f/src/LightTable/display/objects.jl#L168-L170
>
>
>
> On Tue, Dec 23, 2014 at 9:42 AM, Tobias Knopp  > wrote:
>
>> Sorry if this has already been answered. It's about optional plotting 
>> functionality in a package. More precisely I want to have some Winston / 
>> Gtk based plotting things and some PyPlot plotting routines.
>>
>> Is there a possibility to have submodules in a package so that the main 
>> module can be used without the subfunctionality?
>>
>> e.g.
>>
>> using Foo 
>>
>> works and does not require Winston/Gtk/PyPlot
>>
>> using Foo, FooGUI
>>
>> does require Winston and Gtk and
>>
>> using Foo, FooMyBeautifulPlots
>>
>> requires PyPlot?
>>
>> Thanks
>>
>> Tobi
>>
>
>

[julia-users] Re: make errors and git version issues

2014-12-23 Thread SVAKSHA
Solved. Apparently this was not a git issue at all. Thanks for the
pointer Ivar - you were right that julia was running from another
directory, in this case the ppa nightlies by Elliot. Thanks for the
pointer and sorry about the noise.
SVAKSHA ॥  http://about.me/svaksha  ॥



On Tue, Dec 23, 2014 at 1:23 PM, SVAKSHA  wrote:
> On Tue, Dec 23, 2014 at 11:55 AM, SVAKSHA  wrote:
>> On Tue, Dec 23, 2014 at 9:10 AM, Ivar Nesje  wrote:
>>> I'm really at a loss for what to try here. I'm upgrading to git 2.0, but the
>>> release notes don't seem to have changes that would break anything.
>>
>> I don't think git should be able to break anything, as julia works as
>> expected with the exception that it does not compile anything that is
>> pulled after Friday. So could you try compiling julia after upgrading
>> git.
>
> Oops, forgot to mention that I'm on Ubuntu-LTS-14.04 and I updated the
> git via a ppa: git-core-ppa-trusty.list
> If it helps, https://launchpad.net/~git-core/+archive/ubuntu/ppa
>
> SVAKSHA ॥  http://about.me/svaksha  ॥


Re: [julia-users] Re: Half precision math operations

2014-12-23 Thread Erik Schnetter
Doing computation with Float16 is slow -- most CPUs only have Float32
operations implemented in hardware, as well as conversion operations
between Float16 and Float32. The only efficient way is to keep the
values as Float32 for as long as possible, and only convert to Float16
when storing back to memory.
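
A tiny illustration of that pattern (not from this message; scale16! is a
made-up name):

function scale16!(store::Vector{Float16}, a::Float16)
    a32 = convert(Float32, a)                     # widen the scalar once
    for i in 1:length(store)
        x = convert(Float32, store[i])            # do the arithmetic in Float32 ...
        store[i] = convert(Float16, x * a32)      # ... and round back only on the store
    end
    return store
end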

With a macro such as `@fastmath`, Julia could automatically convert
pure Float16 operations to Float32 operations. Otherwise, if Julia
offered Float16 operations, it would be required to round in between
each operation if the operations are performed as Float32. Or maybe
there is language in the standard that allows higher precision
operations? I believe this is the case for 64-bit and 80-bit
operations. If so, Julia could offer efficient Float16 operations,
with probably very surprising semantics: If one manually introduces a
temporary variable of type Float16, the result would change...

-erik


On Tue, Dec 23, 2014 at 9:56 AM, Stefan Karpinski
 wrote:
> Doing computation with Float16s is not really reasonable - the IEEE standard
> describes this as a format that is only for storage.
>
>
> On Dec 23, 2014, at 9:23 AM, Tobias Knopp 
> wrote:
>
> I suppose that the fft limitation is due to fftw supporting only float32 and
> float64.
>
> I am not sure if simd supports float16. If not you should not expect any
> speed gains.
>
> Cheers
>
> Tobi
>
> On Tuesday, 23 December 2014 11:56:46 UTC+1, Mark B wrote:
>>
>> I was wondering how Julia supports half precision operations? It seems it
>> does (more or less) but I'm not sure if there's a lot of type conversion
>> going on behind the scenes with associated overhead. Would it be more
>> efficient to store and crunch on Float32s?
>>
>> julia> rand(Float16,2,2) * rand(Float16,2,2)
>> 2x2 Array{Float16,2}:
>>  0.58301  1.0508
>>  0.48145  0.73438
>>
>> julia> sparse(rand(Float16,2,2))
>> 2x2 sparse matrix with 4 Float16 entries:
>> [1, 1]  =  0.448
>> [2, 1]  =  0.15771
>> [1, 2]  =  0.79932
>> [2, 2]  =  0.50928
>>
>> julia> fft(rand(Float16,2,2))
>> 2x2 Array{Complex{Float64},2}:
>>1.76245+0.0im  -0.0603027+0.0im
>>  -0.129639+0.0im   -0.390869+0.0im
>>
>> Oops for the last one - is fft always double precision?
>>
>



-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


[julia-users] How to convert TimeArrays to a typical Julia Array (Matrix)?

2014-12-23 Thread paul analyst
How do you convert TimeArrays to a typical Julia Array (Matrix)?
Paul
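
(Not from the thread: assuming a TimeArray from the TimeSeries.jl package, the
numeric data and the time index can be pulled out separately; the exact accessor
names may differ across TimeSeries.jl versions.)

using TimeSeries, MarketData      # MarketData only provides a sample TimeArray

ta = cl                           # sample TimeArray of closing prices
m  = values(ta)                   # plain Array/Matrix with the numeric columns
t  = timestamp(ta)                # the corresponding Date/DateTime vector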


Re: [julia-users] Optional functionality in submodule

2014-12-23 Thread Tom Short
Not quite what you're asking, but see some discussions in this issue:

https://github.com/JuliaLang/julia/issues/2025

Particularly, see the note on Mike Innes's require macro. Here is an
example in action for supporting multiple plotting mechanisms:

https://github.com/one-more-minute/Jewel.jl/blob/b0e8c184f57e8e60c83e1b9ef49511b08c88f16f/src/LightTable/display/objects.jl#L168-L170



On Tue, Dec 23, 2014 at 9:42 AM, Tobias Knopp 
wrote:

> Sorry if this has already been answered. It's about optional plotting
> functionality in a package. More precisely I want to have some Winston /
> Gtk based plotting things and some PyPlot plotting routines.
>
> Is there a possibility to have submodules in a package so that the main
> module can be used without the subfunctionality?
>
> e.g.
>
> using Foo
>
> works and does not require Winston/Gtk/PyPlot
>
> using Foo, FooGUI
>
> does require Winston and Gtk and
>
> using Foo, FooMyBeautifulPlots
>
> requires PyPlot?
>
> Thanks
>
> Tobi
>


Re: [julia-users] how to properly dis-connect from a TcpSocket

2014-12-23 Thread Isaiah Norton
To cleanly close a socket, use `close(::TCPSocket)`.
Also, you might want to check `isopen(::TCPSocket)` as the loop condition.

(the applicable version of `close` in the method table is
`close(::AsyncStream)`, because TCPSocket <: AsyncStream)
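
A small sketch applying both suggestions (not from this message; only the
close/isopen handling differs from the code quoted below):

# sending side: shut the socket down explicitly instead of just letting the process exit
sock = connect(2001)
println(sock, "foo")
close(sock)

# receiving side: stop reading once the peer has gone away
@async while isopen(sock) && !eof(sock)
    println(strftime(time()) * ": " * readline(sock))
end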

On Tue, Dec 23, 2014 at 7:52 AM, Ben Arthur  wrote:

> i'm able to set up a connection and pass messages; the problem is that when
> the sender is another julia process and that process exits, the receiver
> is spammed with an infinite number of empty lines. how does one exit
> cleanly? is there a disconnect() command?  code below. thanks
>
> == first do this in one julia process ==
> julia> @async begin
>   server = listen(2001)
>   while true
> sock = accept(server)
> @async while true
>   println(strftime(time())*": "*readline(sock))
> end
>   end
> end
>
> Task (waiting) @0x7fb9e7803480
>
> julia> Tue Dec 23 07:43:28 2014: foo
> Tue Dec 23 07:43:29 2014:
> Tue Dec 23 07:43:35 2014:
> Tue Dec 23 07:43:35 2014:
> Tue Dec 23 07:43:35 2014:
> Tue Dec 23 07:43:35 2014:
>
>
>
> == then do this in a second julia process ==
> julia> sock = connect(2001)
> TCPSocket(open, 0 bytes waiting)
>
> julia> println(sock,"foo")
>
> julia>   # note all the output in the process above following this
>


Re: [julia-users] Re: Half precision math operations

2014-12-23 Thread Stefan Karpinski
Doing computation with Float16s is not really reasonable – the IEEE standard 
describes this as a format that is only for storage.


> On Dec 23, 2014, at 9:23 AM, Tobias Knopp  wrote:
> 
> I suppose that the fft limitation is due to fftw supporting only float32 and 
> float64.
> 
> I am not sure if simd supports float16. If not you should not expect any 
> speed gains.
> 
> Cheers
> 
> Tobi
> 
> On Tuesday, 23 December 2014 11:56:46 UTC+1, Mark B wrote:
>> 
>> I was wondering how Julia supports half precision operations? It seems it 
>> does (more or less) but I'm not sure if there's a lot of type conversion 
>> going on behind the scenes with associated overhead. Would it be more 
>> efficient to store and crunch on Float32s?
>> 
>> julia> rand(Float16,2,2) * rand(Float16,2,2)
>> 2x2 Array{Float16,2}:
>>  0.58301  1.0508
>>  0.48145  0.73438
>> 
>> julia> sparse(rand(Float16,2,2))
>> 2x2 sparse matrix with 4 Float16 entries:
>> [1, 1]  =  0.448
>> [2, 1]  =  0.15771
>> [1, 2]  =  0.79932
>> [2, 2]  =  0.50928
>> 
>> julia> fft(rand(Float16,2,2))
>> 2x2 Array{Complex{Float64},2}:
>>1.76245+0.0im  -0.0603027+0.0im
>>  -0.129639+0.0im   -0.390869+0.0im
>> 
>> Oops for the last one - is fft always double precision?
>> 


[julia-users] Optional functionality in submodule

2014-12-23 Thread Tobias Knopp
Sorry if this has already been answered. It's about optional plotting 
functionality in a package. More precisely I want to have some Winston / 
Gtk based plotting things and some PyPlot plotting routines.

Is there a possibility to have submodules in a package so that the main 
module can be used without the subfunctionality?

e.g.

using Foo 

works and does not require Winston/Gtk/PyPlot

using Foo, FooGUI

does require Winston and Gtk and

using Foo, FooMyBeautifulPlots

requires PyPlot?

Thanks

Tobi


[julia-users] Re: Half precision math operations

2014-12-23 Thread Tobias Knopp
I suppose that the fft limitation is due to fftw supporting only float32 and 
float64.

I am not sure if simd supports float16. If not you should not expect any 
speed gains.

Cheers

Tobi

On Tuesday, 23 December 2014 11:56:46 UTC+1, Mark B wrote:
>
> I was wondering how Julia supports half precision operations? It seems it 
> does (more or less) but I'm not sure if there's a lot of type conversion 
> going on behind the scenes with associated overhead. Would it be more 
> efficient to store and crunch on Float32s?
>
> julia> rand(Float16,2,2) * rand(Float16,2,2)
> 2x2 Array{Float16,2}:
>  0.58301  1.0508
>  0.48145  0.73438
>
> julia> sparse(rand(Float16,2,2))
> 2x2 sparse matrix with 4 Float16 entries:
> [1, 1]  =  0.448
> [2, 1]  =  0.15771
> [1, 2]  =  0.79932
> [2, 2]  =  0.50928
>
> julia> fft(rand(Float16,2,2))
> 2x2 Array{Complex{Float64},2}:
>1.76245+0.0im  -0.0603027+0.0im
>  -0.129639+0.0im   -0.390869+0.0im
>
> Oops for the last one - is fft always double precision?
>
>

[julia-users] Re: make errors and git version issues

2014-12-23 Thread SVAKSHA
On Tue, Dec 23, 2014 at 11:55 AM, SVAKSHA  wrote:
> On Tue, Dec 23, 2014 at 9:10 AM, Ivar Nesje  wrote:
>> I'm really at a loss for what to try here. I'm upgrading to git 2.0, but the
>> release notes don't seem to have changes that would break anything.
>
> I don't think git should be able to break anything, as julia works as
> expected with the exception that it does not compile anything that is
> pulled after Friday. So could you try compiling julia after upgrading
> git.

Oops, forgot to mention that I'm on Ubuntu-LTS-14.04 and I updated the
git via a ppa: git-core-ppa-trusty.list
If it helps, https://launchpad.net/~git-core/+archive/ubuntu/ppa

SVAKSHA ॥  http://about.me/svaksha  ॥


[julia-users] how to properly dis-connect from a TcpSocket

2014-12-23 Thread Ben Arthur
i'm able to set up a connection and pass messages; the problem is that when 
the sender is another julia process and that process exits, the receiver 
is spammed with an infinite number of empty lines. how does one exit 
cleanly? is there a disconnect() command?  code below. thanks

== first do this in one julia process ==
julia> @async begin 
  server = listen(2001) 
  while true 
sock = accept(server) 
@async while true 
  println(strftime(time())*": "*readline(sock)) 
end 
  end 
end 

Task (waiting) @0x7fb9e7803480 

julia> Tue Dec 23 07:43:28 2014: foo 
Tue Dec 23 07:43:29 2014: 
Tue Dec 23 07:43:35 2014: 
Tue Dec 23 07:43:35 2014: 
Tue Dec 23 07:43:35 2014: 
Tue Dec 23 07:43:35 2014: 



== then do this in a second julia process ==
julia> sock = connect(2001) 
TCPSocket(open, 0 bytes waiting) 

julia> println(sock,"foo") 

julia>   # note all the output in the process above following this


[julia-users] Re: make errors and git version issues

2014-12-23 Thread SVAKSHA
On Tue, Dec 23, 2014 at 9:10 AM, Ivar Nesje  wrote:
> I'm really at a loss for what to try here. I'm upgrading to git 2.0, but the
> release notes don't seem to have changes that would break anything.

I don't think git should be able to break anything, as julia works as
expected with the exception that it does not compile anything that is
pulled after Friday. So could you try compiling julia after upgrading
git.

SVAKSHA ॥  http://about.me/svaksha  ॥


[julia-users] Half precision math operations

2014-12-23 Thread Mark B
I was wondering how Julia supports half precision operations? It seems it 
does (more or less) but I'm not sure if there's a lot of type conversion 
going on behind the scenes with associated overhead. Would it be more 
efficient to store and crunch on Float32s?

julia> rand(Float16,2,2) * rand(Float16,2,2)
2x2 Array{Float16,2}:
 0.58301  1.0508
 0.48145  0.73438

julia> sparse(rand(Float16,2,2))
2x2 sparse matrix with 4 Float16 entries:
[1, 1]  =  0.448
[2, 1]  =  0.15771
[1, 2]  =  0.79932
[2, 2]  =  0.50928

julia> fft(rand(Float16,2,2))
2x2 Array{Complex{Float64},2}:
   1.76245+0.0im  -0.0603027+0.0im
 -0.129639+0.0im   -0.390869+0.0im

Oops for the last one - is fft always double precision?
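
(Not from this message: one quick way to see the conversion overhead being asked
about is to time the same multiply in both element types; the 1000x1000 size is
arbitrary.)

A16 = rand(Float16, 1000, 1000); B16 = rand(Float16, 1000, 1000)
A32 = convert(Matrix{Float32}, A16); B32 = convert(Matrix{Float32}, B16)

@time A16 * B16     # generic matmul, with per-element Float16 <-> Float32 conversion
@time A32 * B32     # dispatches to the optimized Float32 BLAS routine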



[julia-users] Re: make errors and git version issues

2014-12-23 Thread Ivar Nesje
I'm really at a loss for what to try here. I'm upgrading to git 2.0, but the 
release notes don't seem to have changes that would break anything.

On Tuesday, 23 December 2014 08:22:01 UTC+1, Svaksha wrote:
>
> On Tue, Dec 23, 2014 at 7:09 AM, SVAKSHA > 
> wrote: 
> > On Mon, Dec 22, 2014 at 8:34 PM, Ivar Nesje  > wrote: 
> >> 3 days old version: This looks really weird, and I can't really see how this 
> >> can happen. The first thing I would check is that you are actually running the 
> >> julia you just built, rather than a 3 days old julia from another 
> > 
> > Well, the make errors came after a fresh compile from the 
> > 20a5c3d888dd9317065390b8bbd98417addeb65c commit. 
> > 
> > Fwiw, I had deleted the old julia repo and .julia directory, cloned 
> > julia afresh and compiled it again because the earlier commit was 
> > showing a 3-day old master.  Despite running make after doing a fresh 
> > pull**[0] today morning it still says "Version 0.4.0-dev+2200 
> > (2014-12-18 17:30 UTC) Commit 79d473c (4 days old master)". Really 
> > odd. 
> > 
> > 
> >> directory. The next point on the list is to post git status and the last 
> >> line of sh base/version_git.sh so that we can see if it gives us a clue. 
> > 
> > After a git pull since last night, 
> > ~/julia$ git status 
> > On branch master 
> > Your branch is ahead of 'origin/master' by 17 commits. **[0] 
> >   (use "git push" to publish your local commits) 
> > nothing to commit, working directory clean 
> > 
> > **[0] NB: this is odd, when tig shows this: 
> > https://gist.github.com/svaksha/76716563322d9065e672 
>
> Fwiw, I use git version 2.2.1 and I'm wondering if the problem is 
> because I updated git after the security bug issue and don't use the OS 
> packaged version of git? It shouldn't be related, but I cannot put my 
> finger on the weird git behaviour. Just thinking out loud. 
>
> SVAKSHA ॥  http://about.me/svaksha  ॥ 
>


Re: [julia-users] Re: Article on `@simd`

2014-12-23 Thread Valentin Churavy
There is a recent and ongoing discussion on the LLVM mailing list about exposing 
scatter and load operations as LLVM intrinsics: 
http://thread.gmane.org/gmane.comp.compilers.llvm.devel/79936

- Valentin

On Monday, 1 December 2014 17:43:11 UTC+1, John Myles White wrote:
>
> This is great. Thanks, Jacob.
>
>  -- John
>
> On Dec 1, 2014, at 8:32 AM, Jacob Quinn > 
> wrote:
>
> For all the vectorization fans out there, I stumbled across this LLVM blog 
> post: http://blog.llvm.org/2014/11/loop-vectorization-diagnostics-and.html
>
> -Jacob
>
> On Wed, Oct 29, 2014 at 3:48 AM, Uwe Fechner  > wrote:
>
>> Great news!
>>
>> On Tuesday, October 28, 2014 5:06:18 PM UTC+1, Arch Robison wrote:
>>>
>>> Update: The recent Julia 0.3.2 release supports vectorization of 
>>> Float64.
>>>
>>
>
>