I am a bit confused when trying to add a precompile statement for svds in
userimg.jl, since svds takes nsv as a keyword argument:

function svds{S}(X::S; nsv::Int = 6, ritzvec::Bool = true, tol::Float64 = 0.0, maxiter::Int = 1000)

hence a call to svds may look like svds(X, nsv = 2), where X is a sparse
matrix. In this case, does the following precompile statement do what I
intend?

precompile(svds, (SparseMatrixCSC{Float64,Int64}, Int64))
Surprisingly, I get no speed boost after rebuilding julia with that
precompile statement in place, so I guess I must have missed something. In
addition, I tried to precompile the sprand function, and that did speed
things up quite a lot.
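
In case it is relevant, here is a minimal sketch of the two userimg.jl
entries I have in mind. I am assuming that the tuple passed to precompile
lists only the types of the positional arguments, and that keyword arguments
such as nsv cannot appear in it at all, which may be exactly where my
confusion lies:

# sketch of userimg.jl entries; the type parameters are my guess at the call signature
# svds has a single positional argument, so (I assume) only its type goes in the tuple;
# the keyword arguments (nsv, ritzvec, tol, maxiter) are not listed at all
precompile(svds, (SparseMatrixCSC{Float64,Int64},))

# sprand(m, n, density) takes three positional arguments, so all three types appear
precompile(sprand, (Int64, Int64, Float64))
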
Cheers,
Todd Leo
On Wednesday, February 25, 2015 at 6:02:52 PM UTC+8, Tim Holy wrote:
>
> Keep in mind that compilation involves multiple stages:
> - parsing
> - lowering
> - type inference
> - generation of LLVM IR
> - generation of machine native code
>
> Just because it gets built into julia does not mean it goes through all of
> those steps for all possible data types (in fact, that's not even possible
> in principle). The first two steps are the primary reason why "using Gadfly"
> is slow. The last 3 are the reasons why calling sprand the second time is
> faster than the first.
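>
> (Just as an illustration, and not specific to svds: the standard reflection
> functions expose roughly those later stages for a given tuple of argument
> types; sprand here is only an example.)
>
> code_lowered(sprand, (Int, Int, Float64))  # lowered AST
> code_typed(sprand, (Int, Int, Float64))    # after type inference
> code_llvm(sprand, (Int, Int, Float64))     # LLVM IR
> code_native(sprand, (Int, Int, Float64))   # native machine code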
>
> Like I said, look at base/precompile.jl. All of those functions are built
> into julia, so if there were no point in issuing those precompile
> statements, why would we bother having that file?
>
> You could make your own userimg.jl file with
>
> precompile(svds, (Matrix{Float64},))
>
> rebuild julia, and then I predict you'll get much faster response on the
> first use. But if you use matrices of different element types (other than
> Float64), you'll need to include those, too---the type-inferred code, LLVM
> IR, and native machine code are specific to the element type.
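>
> For instance, purely as a sketch (which element types you actually need
> depends on your own code):
>
> precompile(svds, (Matrix{Float64},))
> precompile(svds, (Matrix{Float32},))
> precompile(svds, (SparseMatrixCSC{Float64,Int},))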
>
> --Tim
>
>
>
> On Wednesday, February 25, 2015 01:14:53 AM Todd Leo wrote:
> > That's exactly how I understand it, and I'm using 0.4, so Tim's suggestion
> > above won't work... I presume this is not related to the implementation of
> > svds, since other built-in functions also take much more time and memory
> > on the first run:
> >
> > julia> @time sprand(5,5,0.1)
> > elapsed time: 0.314905039 seconds (11 MB allocated)
> > julia> @time sprand(5,5,0.1)
> > elapsed time: 3.0776e-5 seconds (976 bytes allocated)
> >
> >
> > Cheers,
> > Todd Leo
> >
> > On Wednesday, February 25, 2015 at 5:00:21 PM UTC+8, Viral Shah wrote:
> > > No need to precompile. svds was recently added to base - but it won't
> > > exist in 0.3.
> > >
> > > -viral
> > >
> > > On Wednesday, February 25, 2015 at 12:32:09 PM UTC+5:30, Todd Leo wrote:
> > >> Isn't svd/svds already included in base? Do I need to precompile them?
> > >>
> > >> On Saturday, February 7, 2015 at 3:48:50 AM UTC+8, Tim Holy wrote:
> > >>> Since it allocates 120MB on the first call and only 9kB on the second,
> > >>> all that memory is just due to compilation.
> > >>>
> > >>> The only way I know to fix that is to precompile some of the functions
> > >>> when you build julia. If this is a big deal to you, consider adding
> > >>> your own custom userimg.jl (and see base/precompile.jl for a model).
> > >>>
> > >>>
> > >>>
> > >>> http://docs.julialang.org/en/latest/manual/modules/?highlight=userimg#module-initialization-and-precompilation
> > >>>
> > >>> --Tim
> > >>>
> > >>> On Friday, February 06, 2015 11:23:52 AM Viral Shah wrote:
> > >>> > This is because the internal operations with svds are allocating new
> > >>> > vectors every iteration. This PR should address it, but needs to be
> > >>> > completed:
> > >>> >
> > >>> > https://github.com/JuliaLang/julia/pull/7907
> > >>> >
> > >>> > Also, array indexing giving views by default will also help here.
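> > >>> >
> > >>> > Roughly the difference, using a plain dense matrix just as an
> > >>> > illustration:
> > >>> >
> > >>> > A = rand(100, 100)
> > >>> > col = A[:, 1]                   # copies the column, allocating a new vector
> > >>> > col = sub(A, 1:size(A,1), 1)    # creates a view of the column, with no copy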
> > >>> >
> > >>> > -viral
> > >>> >
> > >>> > On Friday, February 6, 2015 at 7:50:24 PM UTC+5:30, Todd Leo wrote:
> > >>> > > Hi Julians,
> > >>> > >
> > >>> > > I am trying to apply svd to a very large sparse matrix using svds.
> > >>> > > When warming up the svds function, I used a very small random sparse
> > >>> > > matrix, with the number of singular values (nsv) set to one. However,
> > >>> > > this simple setup results in a considerably high time cost, and as
> > >>> > > much as 120MB of memory is allocated. To me, it doesn't make sense,
> > >>> > > and one can reproduce it by:
> > >>> > >
> > >>> > > julia> @time svds(sprand(3,3,0.1), nsv = 1)
> > >>> > >
> > >>> > >> elapsed time: 2.640233094 seconds (117 MB allocated, 1.06% gc time
> > >>> > >> in 5 pauses with 0 full sweep)
> > >>> > >
> > >>> > > Regards,
> > >>> > > Todd
>
>