> 1×1 sparse matrix with 1 Int64 nonzero entries:
> [1, 1] = -123456789
>
> julia> A.nzval[A.nzval .== flagval] = 0
> 0
>
> julia> A
> 1×1 sparse matrix with 1 Int64 nonzero entries:
> [1, 1] = 0
>
> On Wednesday, August 24, 2016 at 11:24:29 PM UTC-7, Gabriel Goh wrote:
Say I want a 1x1 matrix with some structural zeros. Julia 0.4.* gives
julia> sparse([1],[1], 0)
1x1 sparse matrix with 0 Int64 entries:
while Julia 0.5 does
julia> sparse([1],[1],0)
1×1 sparse matrix with 1 Int64 nonzero entries:
[1, 1] = 0
The latter behavior is what I prefer; is
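For reference in current Julia (SparseArrays stdlib, post-1.0): explicitly stored zeros are kept, as in 0.5, and `dropzeros!` recovers the 0.4-style result on demand. A minimal sketch:

```julia
using SparseArrays

# Julia 0.5+ keeps explicitly stored zeros (the behavior shown above);
# dropzeros! removes them in place, giving the old 0.4-style result.
A = sparse([1], [1], [0])
n_before = nnz(A)     # the zero counts as a stored (structural) entry
dropzeros!(A)
n_after = nnz(A)      # stored zero removed
```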
Hey,
I have a sparse matrix
A = [ C D
E F ]
Now if I want to update F with another matrix with the exact same sparsity
structure, what would be the easiest way to do it efficiently? Ideally, I'd
just update the relevant values in rowval directly, but it seems messy to
extract which
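A hedged sketch of one way to do this in current Julia, assuming the new values come as a sparse matrix Fnew with exactly the same pattern as the F block, and that F occupies rows r0+1:end and columns c0+1:end (`update_block!` is a hypothetical helper name):

```julia
using SparseArrays

# Hypothetical helper: overwrite the stored values of the lower-right block
# of A (rows r0+1:end, cols c0+1:end) with those of Fnew, assuming Fnew has
# exactly the same sparsity pattern as that block. Only nonzeros(A) is
# touched; the pattern (colptr/rowval) is untouched, so nothing reallocates.
function update_block!(A::SparseMatrixCSC, Fnew::SparseMatrixCSC, r0::Int, c0::Int)
    rows, vals = rowvals(A), nonzeros(A)
    newvals = nonzeros(Fnew)
    for j in 1:size(Fnew, 2)
        k = first(nzrange(Fnew, j))   # next stored entry in Fnew's column j
        for idx in nzrange(A, c0 + j)
            if rows[idx] > r0         # entry lies inside the F block
                vals[idx] = newvals[k]
                k += 1
            end
        end
    end
    return A
end

# Toy usage: A = [C D; E F] with 2x2 diagonal blocks.
C = sparse([1, 2], [1, 2], [1.0, 1.0])
D = spzeros(2, 2); E = spzeros(2, 2)
F = sparse([1, 2], [1, 2], [2.0, 3.0])
A = [C D; E F]
Fnew = sparse([1, 2], [1, 2], [5.0, 6.0])
update_block!(A, Fnew, 2, 2)
```

This works because, within each column, stored entries are sorted by row, so the entries with row > r0 line up one-to-one (in order) with the stored entries of the corresponding Fnew column.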
Found the solution. The code I needed was
PyPlot.svg(true)
Sweet plots, here I come.
On Wednesday, August 3, 2016 at 4:19:43 PM UTC-7, Gabriel Goh wrote:
>
> I can improve the output quality of IPython quite dramatically by using
> the following commands
>
> %matplotlib inline
I can improve the output quality of IPython quite dramatically by using the
following commands
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
import matplotlib.pyplot as plt
plt.plot([1,2],[1,2])
Is there something equivalent in Julia? I know magics don't work in IJulia,
but
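For later readers, the closest equivalent (as found elsewhere in this thread) is PyPlot's svg switch. A minimal sketch, assuming the PyPlot package is installed:

```julia
# Assumes PyPlot is installed; run in an IJulia notebook cell.
using PyPlot
PyPlot.svg(true)           # render subsequent figures as SVG instead of PNG
plot([1, 2], [1, 2])
```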
af/base/sparse/cholmod.jl#L1260-L1272
> .
> On Friday, July 8, 2016 at 4:40:45 AM UTC-4, Gabriel Goh wrote:
>>
>> Hey,
>>
>> I have a sequence of sparse matrix factorizations I need to do, each one
>> a different matrix but with the same sparsity structure. Is
include a patch file detailing the makefile changes and
> the klu_simple.jl file.
>
> Would love to hear if you put the KLU library to use and what kind of
> speedup you get. Also, if you found a different solution it would be
> interesting to hear about it.
>
> On Sunday, July 10,
; details of my tinkering.
>
> DG
>
> On Friday, July 8, 2016 at 4:40:45 AM UTC-4, Gabriel Goh wrote:
>>
>> Hey,
>>
>> I have a sequence of sparse matrix factorizations I need to do, each one
>> a different matrix but with the same sparsity structure. Is
There is also
https://github.com/mlubin/ReverseDiffSparse.jl
I've never used it myself, but I thought I'd throw it out there.
On Saturday, July 9, 2016 at 12:27:33 AM UTC-7, Gabriel Goh wrote:
>
> Forward differentiation has a bad complexity for functions of the form R^n
> ->
Forward differentiation scales with the input dimension, so it has bad
complexity for functions of the form R^n -> R. Try using
ReverseDiffSource.jl instead.
This blog post describes positive results using ReverseDiffSource.jl on an
autoencoder
Hey,
I have a sequence of sparse matrix factorizations I need to do, each one a
different matrix but with the same sparsity structure. Is there a way I can
save the AMD (or any other) ordering that SuiteSparse returns, so it does
not need to be recomputed each time?
Thanks a lot for the help!
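In current Julia this is what `cholesky`/`cholesky!` give you: the first call performs the symbolic analysis (including the fill-reducing ordering) and `cholesky!` refactors a same-pattern matrix numerically, reusing it. A hedged sketch on a toy SPD pair:

```julia
using SparseArrays, LinearAlgebra

# Two SPD matrices with the same sparsity pattern but different values.
A = sparse([1, 2, 1, 2], [1, 2, 2, 1], [4.0, 4.0, 1.0, 1.0])
B = sparse([1, 2, 1, 2], [1, 2, 2, 1], [5.0, 5.0, 2.0, 2.0])

F = cholesky(A)          # symbolic analysis (ordering) + numeric factor
x = F \ [1.0, 1.0]

cholesky!(F, B)          # reuses the symbolic analysis; numeric phase only
y = F \ [1.0, 1.0]
```

The same pattern applies to `lu`/`lu!` via UMFPACK when the matrix is not positive definite.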
I'm wondering if a library exists in Julia where I can specify dataflow
graphs which can be compiled and differentiated, similar to what TensorFlow
does. Thanks a lot!
>> difficult, you will need to use a library other than Cholmod - Pardiso,
>> Mumps, or HSL MA57 are all decent choices.
>>
>>
>> On Friday, June 10, 2016 at 2:37:08 PM UTC-7, Gabriel Goh wrote:
>>>
> >>> The following code doesn't make a lot of sense
Is there Windows support? I have a pretty beefy gaming PC.
On Thursday, June 9, 2016 at 10:08:42 PM UTC-7, ran...@juliacomputing.com
wrote:
>
> Hello,
>
> We are pleased to announce ArrayFire.jl, a library for GPU and
> heterogeneous computing in Julia: (
>
The following code doesn't make a lot of sense. Is there a reason why the
backslash doesn't pivot for sparse matrices?
julia> A = [1.0 0.0 1.0
            0.0 0.0 1.0
            1.0 1.0 0.0]
3x3 Array{Float64,2}:
 1.0  0.0  1.0
 0.0  0.0  1.0
 1.0  1.0  0.0
julia> inv(A)
3x3
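For what it's worth, a sketch of what is likely going on: the matrix is symmetric but indefinite, so the sparse `\` heuristic that tries a Cholesky-type factorization hits a zero pivot; forcing the (pivoting) sparse LU succeeds. In current Julia:

```julia
using SparseArrays, LinearAlgebra

# Symmetric but indefinite: after eliminating the first pivot, the (2,2)
# position is zero, so an unpivoted LDLt fails. UMFPACK's LU pivots.
A = sparse([1.0 0.0 1.0;
            0.0 0.0 1.0;
            1.0 1.0 0.0])
b = [2.0, 1.0, 2.0]
x = lu(A) \ b            # pivoting sparse LU, solves fine
```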
Shit, you hit the nail on the head. They were allocated from a Julia
buffer, which was probably GC'ed.
Thanks!
On Thursday, February 11, 2016 at 1:14:52 PM UTC-8, Yichao Yu wrote:
>
> On Thu, Feb 11, 2016 at 1:59 PM, Gabriel Goh <gabg...@gmail.com
> > wrote:
> > Let's
Let's say I have a program which returns
data = ccall(getcfunction())
data is a pointer to a C-struct,
dataload = unsafe_load(data)
which in turn contains a field a which is a 2D array, a pointer to some
pointers of size 4, say.
julia> pointer_to_array(dataload.a,4)
Ptr{Node} @0x3b8f66c0
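A minimal illustration of the failure mode in current Julia (hypothetical buffer, not the original ccall): the Julia object backing a raw pointer must be kept alive, e.g. with `GC.@preserve`, for as long as the pointer is in use.

```julia
# Hypothetical sketch: any Julia buffer whose pointer is handed to or
# received from C stays valid only while the buffer itself is kept alive.
buf = collect(1.0:4.0)            # stands in for the Julia-allocated buffer
p = pointer(buf)

GC.@preserve buf begin
    # buf cannot be collected inside this block, so p is safe to use.
    a = unsafe_wrap(Array, p, 4)  # no-copy view over the buffer
    a[1] = 42.0
end
```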
My tests run smoothly in the current build of Julia 0.5 beta. Is everything
good, or are there breaking changes to come?
> >> You can preallocate these three vectors in advance.
> >> However, doing it this way is more cumbersome because you need to have a
> >> good estimate of the number of entries and to explicitly calculate the
> >> index for all entries.
> >>
> >
Thanks! This triplet solution was a miracle.
On Monday, February 1, 2016 at 1:46:27 AM UTC-8, Kristoffer Carlsson wrote:
>
> At computer now.
>
> Something like this:
>
> function f(k)
>     I, J, V = Int[], Int[], Float64[]
>     for i = 1:k
>         idxs = (i-1)*2 + 1:i*2
>         for i in
Generating a sparse matrix from scratch seems to be quite memory-intensive
and slow. Say I wish to create a large block-diagonal matrix with 2x2 block
entries.
Doing it naively is quite slow:
function f(k)
    M = spzeros(2*k, 2*k)
    for i = 1:k
        D = (i-1)*2 + 1:i*2
        M[D,D] = randn(2,2)
    end
    return M
end
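The triplet approach the quoted reply was sketching, written out in full for current Julia (`block_diag` is a hypothetical name; the 2x2 blocks are random, as in the question):

```julia
using SparseArrays

# Triplet (I, J, V) construction: collect coordinates and values, then call
# sparse() once at the end, instead of growing the matrix in place.
function block_diag(k)
    Is, Js, Vs = Int[], Int[], Float64[]
    for i in 1:k
        idxs = (i - 1) * 2 .+ (1:2)      # rows/cols of the i-th 2x2 block
        B = randn(2, 2)
        for (cj, j) in enumerate(idxs), (ci, r) in enumerate(idxs)
            push!(Is, r); push!(Js, j); push!(Vs, B[ci, cj])
        end
    end
    return sparse(Is, Js, Vs, 2k, 2k)
end

M = block_diag(3)
```

This avoids the repeated pattern reallocation that makes in-place insertion into a SparseMatrixCSC slow.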
This is the error I am getting when I run
import ProfileView
ERROR: LoadError: BoundsError: attempt to access 157-element Array{Any,1}:
Symbol
Int8
UInt8
Int16
UInt16
Int32
UInt32
Int64
UInt64
Int128
⋮
24
25
26
27
28
Hey all, your help is appreciated in this problem!
This behavior is what I expect - when writing to a sparse matrix, no new
memory is allocated if there are no changes to the sparsity structure
julia> A = spdiagm(ones(1));
julia> x = randn(1,1);
julia> @time A[1,1:1] = x;
On Tuesday, June 30, 2015 at 9:21:42 PM UTC-4, Gabriel Goh wrote:
Thanks for the info. I'm on a windows machine, if it makes a difference.
On Tuesday, June 30, 2015 at 6:18:54 PM UTC-7, Miguel Bazdresch wrote:
Works for me on Linux, with julia v0.3.10, and IJulia master.
-- mb
On Tue, Jun 30
Hey everyone,
I've upgraded to Julia v0.3.10 and ran the requisite Pkg.build("IJulia")
command. After launching an IJulia notebook, the kernel keeps timing out
(reported as dead in IPython).
Downgrading to Julia v0.3.9 and re-running Pkg.build("IJulia") fixes the
problem. Any suggestions?
Gabe
Do there exist anonymous objects, in the same way anonymous functions exist?
For example, I'd like to return an object A without going through the
hoops of making an explicit type, which you could do with made-up syntax:
function createMatrix()
# create an anonymous object A
return A
end
A = createMatrix()
A*x
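For later readers: NamedTuples, added in Julia 0.7 (so after this thread), are probably the closest thing to anonymous objects today. A hedged sketch:

```julia
# NamedTuples: a record of named fields with no explicit type declaration.
function create_matrix()
    data = [1.0 2.0; 3.0 4.0]
    return (mat = data, mul = x -> data * x)   # anonymous "object"
end

A = create_matrix()
y = A.mul([1.0, 1.0])    # calls the captured closure
```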
sub! Just what I was looking for! It works like a charm.
On Saturday, May 30, 2015 at 3:35:50 PM UTC-7, Tim Holy wrote:
Not tested, but
xsub = sub(x, 10:20)
Base.LinAlg.axpy!(a, y, xsub)
should work just fine.
--Tim
On Saturday, May 30, 2015 02:35:01 PM Gabriel Goh wrote:
Hey
Hey All,
I'm wondering if its easy to do an in place assignment, say
x[10:20] = x[10:20] + a*y
using the axpy! library. I want to avoid the use of any temporary variables
if possible!
Thanks!
Gabe
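In current Julia, `sub` has become `view`; the same in-place update looks like this (`axpy!` from the LinearAlgebra stdlib):

```julia
using LinearAlgebra

x = zeros(30)
y = ones(11)
a = 2.0

# view(x, 10:20) is the modern sub(x, 10:20); axpy! then computes
# x[10:20] .+= a .* y in place, with no temporary arrays.
xsub = view(x, 10:20)
axpy!(a, y, xsub)
```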
Thanks for the help guys! The pointer to the discussion helped!
It seems the most Julia-esque way to do this is to drop the modules
altogether. All the solutions suggested didn't feel very appealing.
Gabe
Hey all,
I'm wondering if you guys could educate me on the correct way to overload
functions in modules. The code here doesn't work:
# **
module A
type apple
end
export apple,f
f(x::apple) = println("I am apple");
end
#
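A sketch of the underlying rule in current syntax (`struct` has replaced `type`; the Fruits/MoreFruits names are made up): to overload a function owned by another module you must `import` it, otherwise you create a new function that shadows it.

```julia
module Fruits
export Apple, describe
struct Apple end
describe(::Apple) = "I am an apple"
end

module MoreFruits
import ..Fruits: describe    # import to extend, not shadow
struct Banana end
describe(::Banana) = "I am a banana"
end

using .Fruits
s1 = describe(Fruits.Apple())
s2 = describe(MoreFruits.Banana())   # same generic function, new method
```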