I searched the forum and it seems that I did not announce this package
when it was initially created.
So here it is:
https://github.com/JuliaSparse/SparseVectors.jl
Simply put, it provides sparse vectors (not sparse matrices with a
single column) and a series of methods operating on them.
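As a quick illustration, here is a minimal usage sketch; the `SparseVector(n, nzind, nzval)` constructor and `nnz` are assumed from the package README:

```julia
# Sketch assuming SparseVectors.jl exports SparseVector, constructed from
# a length, sorted nonzero indices, and the corresponding nonzero values.
using SparseVectors

x = SparseVector(8, [2, 5], [1.25, -0.75])  # length 8, nonzeros at 2 and 5
x[5]      # -0.75
x[3]      # 0.0 (not stored)
nnz(x)    # 2 stored entries
```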
Great stuff.
Can you make a PR to Distributions.jl? I'd be happy to review the code and
merge it when it is ready.
Dahua
On Friday, July 24, 2015 at 8:29:27 PM UTC+8, lapeyre@gmail.com wrote:
Here is a generalized gamma distribution.
https://github.com/jlapeyre/GenGammaDist.jl
The package has been moved under the JuliaSparse organization:
https://github.com/JuliaSparse/SparseVectors.jl
Best,
Dahua
On Sunday, May 17, 2015 at 10:24:57 PM UTC+8, Dahua Lin wrote:
Dear all,
I am pleased to announce a new package SparseVectors:
https://github.com/lindahua/SparseVectors.jl
Hi Tracy,
I am a regular Julia user and will be attending CVPR, feel free to find
me at the conference.
(Also, I have seen Dahua Lin at previous CVPRs, but I am not sure he will
attend this year.)
Best,
Sebastian Nowozin
On Saturday, 16 May 2015 12:19:30 UTC+1
Dear all,
I am pleased to announce a new package SparseVectors:
https://github.com/lindahua/SparseVectors.jl
Sparse data has become increasingly common in machine learning and related
areas. For example, in document analysis, each document is often
represented as a sparse vector, where each
Regression.jl does not aim to replace or provide an alternative to Optim.jl. Its
primary purpose is regression, and optimization algorithms are encapsulated as
implementation details.
However, there are certain aspects in Optim.jl that make it not very suitable
for Regression.jl at this point. For example,
Hi,
I am happy to announce three packages related to empirical risk minimization
EmpiricalRisks https://github.com/lindahua/EmpiricalRisks.jl
This Julia package provides a collection of predictors and loss functions,
as well as the efficient computation of gradients, mainly to support the
The latest version of ArrayViews (v0.6.0) now provides unsafe views (that
maintain raw pointers instead of the parent array). See
https://github.com/JuliaLang/ArrayViews.jl#view-types
You may see whether this makes your code more performant. Be careful: you
should make sure that unsafe views
My benchmarks show that element indexing is as fast as it can be for
array views (or SubArrays in Julia 0.4).
Now the problem is actually the construction of views/subarrays. To
reduce the overhead of this part, the compiler may need additional
optimizations.
Dahua
On
Thanks for the great work!
Dahua
On Monday, April 20, 2015 at 9:47:13 AM UTC+8, Tim Holy wrote:
For those of you wanting to write code that will perform well on different
AbstractArray types, starting with Julia 0.4 it will be recommended that
you typically write
for i in
You can do `NTuple{N, Int}`.
Dahua
On Sunday, April 19, 2015 at 8:05:25 AM UTC+8, Sheehan Olver wrote:
Cool! Is there a way to specify exactly the number? I actually
know the number of args as a templated variable, so would want to do:
immutable Foo{n}
Constructing a view causes memory allocation. The compiler has not
been able to completely optimize this out.
I have been considering introducing something like a ``local_view`` that
caches the base pointer. The concern is that such views are dangerous if
passed around -- it does not
Dear all,
The parts about conjugate priors have been separated out of
Distributions.jl and migrated to ConjugatePriors.jl
https://github.com/JuliaStats/ConjugatePriors.jl
With recent updates, Distributions.jl now works with both Julia 0.3 and
0.4; in particular, it works with the new tuple
Great job!
Dahua
On Saturday, December 20, 2014 5:54:02 PM UTC+8, Chiyuan Zhang wrote:
Mocha https://github.com/pluskid/Mocha.jl is a Deep Learning framework
for Julia. Two major changes in v0.0.5:
- CUDA backend now available on Mac
- Mocha now handles general ND-tensors
There are several non-central distributions provided by Distributions.jl
- non-central F
- non-central T
- non-central Chi square
- non-central Hypergeometric
They are not documented or tested yet. There is some risk of bugs --
but the core functions are delegated to Rmath, so the risk
NumericExtensions.jl was a stopgap.
Now that we have callable syntax and a much more optimized map function in
Julia Base (v0.4), using that package is no longer recommended.
Dahua
On Saturday, December 20, 2014 7:07:10 AM UTC+8, Michael Wojnowicz wrote:
Hi Ivar,
Thanks for your thoughts
:06 PM UTC-5, Dahua Lin wrote:
This is something related to the vertex indexing mechanism. Please file
an issue on Graphs.jl. We may discuss how to solve this over there.
Dahua
On Saturday, November 22, 2014 6:39:47 AM UTC+8, Richard Futrell wrote:
Hi all,
Is this expected behavior? It was surprising to me.
The current implementation of sum works in this way:
For arrays longer than a certain threshold (1024? 2048?), it recursively
divides the array into two halves and sums each half separately.
When the part to be summed gets short enough, it falls back to four-way
sequential summing.
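The divide-and-conquer strategy described above can be sketched as follows (the cutoff of 1024 and the plain sequential base case are illustrative; the actual Base code also unrolls the base case four ways):

```julia
# Illustrative sketch of pairwise (divide-and-conquer) summation;
# 1024 is a guess at the threshold, not Base's exact value.
function pairwise_sum(x::AbstractVector, lo::Int, hi::Int)
    if hi - lo < 1024
        s = zero(eltype(x))
        for i in lo:hi
            s += x[i]       # short ranges: plain sequential summing
        end
        return s
    end
    mid = (lo + hi) >>> 1   # split in half and recurse on each part
    return pairwise_sum(x, lo, mid) + pairwise_sum(x, mid + 1, hi)
end

pairwise_sum(collect(1.0:10000.0), 1, 10000)   # 5.0005e7
```

Besides cache friendliness, the pairwise scheme also reduces floating-point rounding error compared with a single left-to-right loop.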
sumsq internally invokes BLAS.dot.
When the length of x is small, there can be large overhead. Also,
NumericExtensions is not recommended for now. Important functions have been
moved to Julia Base. You should use sumabs2 in Julia Base instead,
which has a more sophisticated strategy, i.e.
I have been observing an interesting difference between people coming from
stats and machine learning.
Stats people tend to favor the approach that allows one to directly use the
category names to index the table, e.g. A[apple]. This tendency is
clearly reflected in the design of R, where one
, November 10, 2014 8:35:32 AM UTC+8, Dahua Lin wrote:
You said that size(x) is (1000, 60), so sumsq needs to work over 60 values.
The problem is not that 60 is too small, but that you are doing this
along rows instead of columns.
If you have an m x n matrix, and you do the computation per column like the
following:
```julia
for j = 1:n
    r[j] = sumsq(view(x, :, j))   # sum of squares over column j
end
```
The output type of a list comprehension is often difficult to infer (by a
human), as it depends on a complex compile-time type inference mechanism.
This, combined with the fact that its behavior is inconsistent with that of
the function map, is a recipe for confusion.
We should try to make things
DimensionalityReduction has been superseded by MultivariateStats. We
purposely restricted the version range of DimensionalityReduction in order to
prevent people from using it in the future.
- Dahua
On Monday, October 20, 2014 12:37:14 AM UTC+8, John Myles White wrote:
That package is abandoned.
Fixed.
Please check out the latest version (v0.4.8), which works for both Julia 0.3
and 0.4.
Dahua.
On Tuesday, October 28, 2014 12:48:37 PM UTC+8, Dahua Lin wrote:
I will try to fix this today or tomorrow.
I have been overly occupied by a grant proposal and setting up a new lab in
the past month. Really
I just made a fresh clone of Julia 0.4, and when I ran make, I got the
following error:
Note: The following floating-point exceptions are signalling: IEEE_DENORMAL
make[4]: /Users/dhlin/julia-0.4/deps/objconv/objconv: No such file or
directory
make[4]: *** [../libopenblasp-r0.2.12.a.renamed]
Recently, a series of packages failed under Julia 0.4, many of which are
under JuliaStats.
I just updated those packages and now all of them are working under both
Julia 0.3 and 0.4. Here is the list of updated packages and their latest
version:
- ArrayViews (0.4.8)
- StatsBase (0.6.9)
Can you tell me if you have a deps/objconv folder, and if so what's in it?
Does clang++ -o objconv -O2 *.cpp in that folder work?
On Tuesday, October 28, 2014 3:53:57 AM UTC-7, Dahua Lin wrote:
I just made a fresh clone of Julia 0.4, and when I ran make, I got the
following error:
Note
*From:* Dahua Lin
*Sent:* Tuesday, October 28, 2014 4:58 AM
*To:* julia...@googlegroups.com
*Subject:* [julia-users] Re: Failure of installing Julia 0.4 (latest
master) on Mac OS X 10.10
deps/objconv is not there.
On Tuesday, October 28, 2014 7:14:11
It builds successfully after I ran make -C deps install-objconv. However,
I think the make procedure should do that automatically ...
On Wednesday, October 29, 2014 8:32:32 AM UTC+8, Dahua Lin wrote:
This is a completely fresh clone (directly from GitHub) -- so there is
nothing left there
I will try to fix this today or tomorrow.
I have been overly occupied by a grant proposal and setting up a new lab in
the past month. Really sorry about the package.
Dahua
On Friday, October 24, 2014 2:33:10 AM UTC+8, Tim Holy wrote:
On Thursday, October 23, 2014 10:50:19 AM Douglas Bates wrote:
What
If A is not a global variable (i.e., it is within a function), @devec would be
much faster (comparable to sumabs).
Dahua
On Monday, August 25, 2014 4:26:22 AM UTC+8, Adam Smith wrote:
I've run into this a few times (and a few hundred times in python), so I
made an @iterize macro. Not sure how
Following the Julia package naming convention, the package Distance has been
renamed to Distances.
New package page: https://github.com/JuliaStats/Distances.jl
All materials in Distance.jl have been migrated. New issues and PRs should
go to Distances.jl.
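For reference, a small sketch of the package's core API (`evaluate` for one pair, `pairwise` for many; each column is treated as a point):

```julia
using Distances

# distance between two points
x = [0.0, 0.0]
y = [3.0, 4.0]
evaluate(Euclidean(), x, y)       # 5.0

# pairwise distances: each *column* of X and Y is a point
X = rand(2, 10)
Y = rand(2, 5)
D = pairwise(Euclidean(), X, Y)   # a 10 x 5 distance matrix
```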
Thanks,
Dahua
around?
— John
On Aug 25, 2014, at 10:07 PM, Dahua Lin lind...@gmail.com
wrote:
Following Julia package naming convention, the package Distance was
renamed to Distances.
New package page: https://github.com/JuliaStats/Distances.jl
All materials in Distance.jl have
I did some cleanup, reimplemented some algorithms, and set up Sphinx docs for
the Clustering.jl package.
Please check it out: http://clusteringjl.readthedocs.org/en/latest/
Best,
Dahua
than starting from scratch then Resampling.jl could
be a good one (https://github.com/johnmyleswhite/Resampling.jl)
On Sunday, August 10, 2014 12:58:48 PM UTC-4, Dahua Lin wrote:
I did some cleanup, reimplemented some algorithms, and set up Sphinx docs for
the Clustering.jl package.
Please
You may have to check which is the bottleneck: getindex or matrix
multiplication.
Dahua
On Tuesday, July 29, 2014 4:22:32 PM UTC-5, Florian Oswald wrote:
Hi all,
I've got an algorithm that hinges critically on fast matrix
multiplication. I put up the function on this gist
You may call gemv directly.
Dahua
On Tuesday, July 29, 2014 5:56:28 PM UTC-5, Florian Oswald wrote:
From the profile output it looks like a lot of time is spent in getindex.
I suppose that is bad news? Not sure how I could avoid any of the index
lookups.
On Tuesday, 29 July 2014, Dahua
Look more carefully into the pattern of your code.
It seems that you may be able to reshape v1 into a matrix of size (m, n)
and v0 into a matrix of size (n, m), and then do a single matrix-matrix
multiplication instead of looping over multiple strided vectors.
Also, if you are using ArrayViews,
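The reshaping suggestion can be sketched as follows (m and n are placeholder sizes; the point is that a single gemm call replaces many strided dot/gemv calls):

```julia
m, n = 200, 50
v1 = rand(m * n)          # data currently addressed as strided vectors
v0 = rand(n * m)

A = reshape(v1, m, n)     # no copy: reinterprets v1 as an (m, n) matrix
B = reshape(v0, n, m)     # likewise, (n, m)
C = A * B                 # one matrix-matrix multiply (BLAS gemm)
size(C)                   # (200, 200)
```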
We should probably incorporate (approximate) nearest neighbor search into
Clustering.jl at some point.
Dahua
On Monday, July 28, 2014 10:09:59 AM UTC-5, John Myles White wrote:
FWIW, there’s a KD-tree implementation in NearestNeighbors.jl
— John
On Jul 28, 2014, at 7:27 AM, Jacob Quinn
I think Regression is not deprecated.
Instead, it will be revived at some point in the future as a metapackage
that includes a few regression-related packages and a higher-level API.
Dahua
On Sunday, July 27, 2014 11:19:26 PM UTC-5, Iain Dunning wrote:
Hi all,
I've been experimenting with
In the long run, we will probably need a common infrastructure for data
streams -- not only for large data frames, but also for streams of
images/column vectors/whatever objects ... from any source (a single huge
file, multiple smaller files, the network, ...)
Dahua
On Monday, July 28, 2014
Hi Art,
Thanks very much for doing this. The package is excellent!
I would really appreciate it if you would consider moving the package to
JuliaStats for broader participation (I think it is perfectly ready).
Dahua
On Saturday, July 26, 2014 9:38:03 AM UTC-5, wil...@gmail.com wrote:
After
probably discuss in the roadmap issue about what infrastructure
we need to support large-scale distributed machine learning problems.
-viral
On Monday, July 21, 2014 4:08:14 AM UTC+5:30, Dahua Lin wrote:
Please see https://github.com/JuliaStats/MLBase.jl/blob/master/NEWS.md
for recent
Please see https://github.com/JuliaStats/MLBase.jl/blob/master/NEWS.md for
recent updates.
Also the documentation is moved from Readme to a Sphinx doc
http://mlbasejl.readthedocs.org/en/latest/
Now we already have quite a few packages for various machine learning tasks:
MLBase.jl
Linear Least Squares and Ridge Regression are also included now (see
http://multivariatestatsjl.readthedocs.org/en/latest/lreg.html).
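A brief sketch of the linear regression API from the linked docs (`llsq` and `ridge`, with the `bias` keyword as documented):

```julia
using MultivariateStats

X = rand(100, 3)                       # 100 observations, 3 features
a = [1.0, -2.0, 0.5]
y = X * a + 0.01 * randn(100)          # noisy linear responses

sol  = llsq(X, y; bias=false)          # least-squares coefficients (length 3)
solr = ridge(X, y, 0.1; bias=false)    # ridge with regularization 0.1
```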
Dahua
On Friday, July 18, 2014 8:13:31 PM UTC-5, Dahua Lin wrote:
Recently, I developed a new package for Julia (under JuliaStats):
MultivariateStats https
This should be fixed now.
Dahua
On Saturday, July 19, 2014 1:44:28 AM UTC-5, Jeff Waller wrote:
An update:
Brute forcing out Sampling from 0.4.8 and 0.4.9 fixes the problem, but why
would these even be
consulted?
, 2014, at 1:40 PM, Dahua Lin lind...@gmail.com
wrote:
Linear Least Squares and Ridge Regression are also included now (see
http://multivariatestatsjl.readthedocs.org/en/latest/lreg.html).
Dahua
On Friday, July 18, 2014 8:13:31 PM UTC-5, Dahua Lin wrote:
Recently, I developed
Recently, I developed a new package for Julia (under JuliaStats):
MultivariateStats https://github.com/JuliaStats/MultivariateStats.jl, for
multivariate statistical analysis.
Currently, the following functionalities have been implemented:
- Data Whitening
- Principal Component Analysis
pretty unreliable.
-- John
On Jul 18, 2014, at 9:13 PM, Dahua Lin lind...@gmail.com
wrote:
Recently, I developed a new package for Julia (under JuliaStats):
MultivariateStats https://github.com/JuliaStats/MultivariateStats.jl,
for multivariate statistical analysis
With the latest Julia, you can do this by
sumabs2(x)
Dahua
On Wednesday, July 16, 2014 9:57:54 AM UTC-5, Neal Becker wrote:
As a first exercise, I wanted to code the magnitude squared of a complex
1-D array. Here is what I did:
mag_sqr{T} (x::Array{Complex{T},1}) =
I will address the warning about add! today.
Dahua
On Monday, July 14, 2014 3:43:58 PM UTC-5, Kevin Squire wrote:
Although annoying, these warnings won't actually cause any problems.
The best bet to remove the warnings is to file an issue with
NumericExtensions.jl, preferably (but not
Hi, Charles,
Looks like you are doing sampling based on given/computed probabilities.
You might want to check out the sampling facilities provided in StatsBase
(see http://statsbasejl.readthedocs.org/en/latest/sampling.html for
details).
That package provides a series of optimized sampling
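For illustration, a sketch of weighted sampling with that API (`WeightVec` was the weights type at the time; later versions renamed it `Weights`):

```julia
using StatsBase

items = ["a", "b", "c"]
w = WeightVec([0.1, 0.3, 0.6])     # sampling probabilities (up to scale)
draws = sample(items, w, 1000)     # 1000 weighted draws with replacement
length(draws)                      # 1000
```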
code_typed shows that when it is written as `a = [.5, 2]`, the type of a is not
successfully inferred within the function.
Dahua
On Sunday, June 22, 2014 8:08:08 AM UTC-5, a. kramer wrote:
You're right, it is creating a 1x2 array in this case but it doesn't
affect execution time in either case.
Motivated by the benchmarking needs in StatsBase and Distributions, I
developed a new lightweight package for comparative benchmarking:
BenchmarkLite.jl https://github.com/lindahua/BenchmarkLite.jl
It is designed specifically for the purpose of comparing the performance of
multiple algorithms/procedures
Not every distribution comes with a default constructor (i.e., one requiring no
arguments).
Here is the document about how you may construct a Multinomial distribution:
http://distributionsjl.readthedocs.org/en/latest/multivariate.html#multinomial-distribution
The no method error means that there
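A minimal sketch of constructing and sampling a Multinomial, following the linked documentation (n trials with a probability vector; there is no zero-argument constructor):

```julia
using Distributions

d = Multinomial(10, [0.2, 0.3, 0.5])   # 10 trials over 3 categories
x = rand(d)                            # one draw: a count vector summing to 10
sum(x)                                 # 10
```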
The cumsum / cummax / cummin / cumprod, etc. currently have suboptimal
performance -- they are about 20x slower than sum/prod etc. (which we spent
a lot of effort optimizing and tuning).
Please open an issue on GitHub, and we will try to address this problem
later.
Dahua
On Friday, June
, Charles Novaes de Santana
charles...@gmail.com wrote:
Thank you, Dahua!
I will open an issue on GitHub as you suggested. In the meanwhile, I will
see if I can get better performance by using sum.
Best,
Charles
On Fri, Jun 20, 2014 at 5:54 PM, Dahua Lin lind...@gmail.com
Good to know. Looks like we should consider improving the performance of
openlibm, especially for commonly used functions such as exp, log, etc.
On Tuesday, June 17, 2014 7:23:37 PM UTC-5, Alireza Nejati wrote:
Dahua: On my setup, most of the time is spent in the log function.
On
for the Profile code and the ProfileView package and of
Dahua Lin for the NumericExtensions and NumericFuns in particular. These
are incredible tools.
It is so easy to forget that you should profile before you attempt to
optimize your code. I just learned that again. I was getting very
Perhaps we should first profile the code and see which part constitutes
the bottleneck?
Dahua
On Tuesday, June 17, 2014 5:23:24 PM UTC-5, Alireza Nejati wrote:
But for fastest transcendental function performance, I assume that one
must use the micro-coded versions built into the
First, I agree with John that you don't have to declare the types in
general, as you would in a compiled language. It seems that Julia would be
able to infer the types of most variables in your code.
There are several ways that your code's efficiency may be improved:
(1) You can use @inbounds to
Probably, it would be easier to simply write a loop
u = trues(length(x))
u[[5, 7]] = false     # exclude indices 5 and 7
ir = 0
vr = -Inf
for i = 1:length(x)
    if u[i] && x[i] > vr
        ir = i
        vr = x[i]
    end
end
# ir would be what you want
If your numbers are all positive, you can write this in a
See the document
here: https://github.com/JuliaStats/Distance.jl#computing-pairwise-distances
On Wednesday, June 4, 2014 8:15:51 AM UTC-5, Dahua Lin wrote:
The Distance package assumes that *each column is a point*.
Since Julia uses column-major layout for arrays, using column-major ways
I opened a GitHub issue here: https://github.com/JuliaOpt/Optim.jl/issues/59
Dahua
On Wednesday, June 4, 2014 10:43:56 AM UTC-5, John Myles White wrote:
Exposing the option to control the initial approximate Hessian would be a
good idea. If you don’t mind, I’d encourage you to put a little
I don't think Julia allows pushing a row/column to a matrix.
Arrays of order 2 or higher are non-resizable.
On Wednesday, May 28, 2014 12:52:13 PM UTC-5, Patrick O'Leary wrote:
You should be able to use push!() for this. Are you getting a segfault
from some Julia code? If so, that shouldn't
Declared as immutable, PDMat is not supposed to be updated.
However, I understand that there are cases where efficiency is important
and you want to do the in-place update anyway. For such situations, I
think you can just directly modify p.mat and p.chol, and wrap these
operations into a
As a side note, I just cleaned up the Devectorize.jl package (tagged as
v0.4). Now it works well under Julia v0.3.
I am now working on a major upgrade to this package. This may lead to a
more transparent extensible code generator and the support of arrays of
arbitrary dimensions (with the help
I am sympathetic to the need to be able to delete vertices or edges from
a graph. However, Graphs.jl (like many other packages) is still very young,
and it takes some time to provide a complete set of functionalities
(especially when one has to make a tradeoff between efficiency,
Whereas it might seem straightforward for people in open-source circles
to use GitHub to file issues or make pull requests, general users may find
these unfamiliar or even daunting tasks.
It would be useful to provide relevant instructions somewhere so that
people (especially those who are
Hi, Tim and Jake,
I have created the JuliaGPU organization and added you as owners (you
should have received emails about that). I have also moved CUDA.jl over
there.
I would really appreciate it if you could move other relevant packages over
there.
I think future efforts based on CUDA/OpenCL
Hi, Laszlo,
Thanks for your interest in using the CUDA package.
Here are the brief answers to your questions:
(1) libcuda is the shared library that implements the CUDA driver API, which
the CUDA.jl package relies on to function.
(2) most CUDA driver functions are from this library (see
Laszlo,
I just added notes to the readme to briefly explain the OSes under which
the package has been tested, and what libcuda is.
A lot of Julia packages wrap C libraries. However, it is not common
practice to list all the C functions being wrapped in the
documentation.
Currently, there have been some independent efforts to develop Julia
packages for GPU computing, which particularly include:
- CUDA.jl https://github.com/lindahua/CUDA.jl
- CUDArt.jl https://github.com/timholy/CUDArt.jl
- OpenCL.jl https://github.com/jakebolewski/OpenCL.jl
- CUFFT.jl
Hi, Siddhant
Thanks for working on this. I took a brief look at the source code. Here
are several comments:
1. It is the convention for Julia packages to have the suffix *.jl* in the repo
name. For example, you can name your GitHub repo ``UCLMLRepo.jl``.
2. It seems that the blocks within
don't think there's much code out there that relies on that.
With Yeppp there may be additional concerns since the functions are not as
accurate as openlibm.
Simon
On Friday, February 28, 2014 12:30:21 PM UTC-5, Dahua Lin wrote:
While VML is generally much faster for big arrays
This is very nice.
Now that we have several back-ends for vectorized computation (VML,
Yeppp, Julia's builtin functions, as well as the @simd-ized versions), I am
considering whether there is a way to switch back-ends without affecting
client code.
- Dahua
On Thursday, February 27, 2014
to export the same API, you could, in principle, just
switch `using VML` to `using Yeppp`.
My question: are we finally conceding that add! and co. is probably worth
having?
— John
On Feb 28, 2014, at 7:10 AM, Dahua Lin lind...@gmail.com
wrote:
This is very nice.
Now
Yes, you can directly use the format string as the first argument.
Just note that if the same format string is used repeatedly, it is more
efficient to first compile it into an instance of FormatExpr and use this
instance instead.
Best,
Dahua
On Wednesday, February 26, 2014 9:39:01 AM
I'm happy to announce that I just released a new package:
Formatting.jl https://github.com/lindahua/Formatting.jl
The package allows Python-like formatting. For example:
printfmt("{1} + {2} = {3}", 100, 200, 300)   # prints "100 + 200 = 300"
printfmt("{1:4s} + {2:.2f}", "abc", 12)      # prints " abc + 12.00"
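When the same format string is reused many times, it can be compiled once into a FormatExpr and reused; a small sketch (assuming `printfmtln` and `format` are exported as in the README):

```julia
using Formatting

fe = FormatExpr("{1} + {2} = {3}")   # compile the format string once
for (a, b) in [(1, 2), (10, 20)]
    printfmtln(fe, a, b, a + b)      # reuse the compiled instance
end
format(fe, 100, 200, 300)            # returns the string "100 + 200 = 300"
```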
OpenCL is definitely more open (without vendor lock-in).
However, in practice, there are several aspects that make CUDA more
appealing for scientific computing:
- A number of mature libraries for various computation purposes: cuBLAS,
cuFFT, cuRand, CULA, Magma, etc.
- CUDA LLVM
Dear folks,
I'm happy to announce a new package: NMF.jl https://github.com/lindahua/NMF.jl
This package provides a variety of algorithms for nonnegative matrix
factorization. Here is a list of algorithms that are already available:
*Initialization:*
- Random Initialization
- NNDSVD
With many of its original functions moved to StatsBase, the development of
MLBase.jl has been stopped for a while.
Recently, I revived this package and added a series of functionalities.
Now, it is a Swiss army knife for machine learning.
This package does not implement specific machine learning
Thanks, John.
I would also want to note that MLBase.jl is going to be moved to JuliaStats
too, which is going to provide basic support of machine learning.
All these moves are for the purpose of coordinating the efforts of
developing machine learning tools (see
Is it possible to update the requirements of previously tagged versions?
On Friday, January 31, 2014 5:13:21 PM UTC-6, Ivar Nesje wrote:
It seems like you are using the 0.2.0 version of Julia, and some package
authors have not correctly marked new versions of their package to require
Agree.
People with a programming background won't expect x and x + 0 (x * 1, x ^ 1,
etc.) to be the same thing. (They are equal, but definitely not the same
copy.) Even in MATLAB or Python/NumPy, when you modify y = x + 0, the
values in x are not affected.
- Dahua
On Monday, January 27, 2014
For 1D vectors, even if you use @inbounds, it will fetch the base address
(pointer) each time you access an element (since the compiler does not know
whether the array has been resized and thus the memory reallocated). If you
use unsafe views, it is assumed that the base address is fixed, and
You are right. A rule of thumb: never write something like
a[subs...] in loops if performance is important.
- Dahua
On Friday, January 3, 2014 10:11:37 AM UTC-6, Milan Bouchet-Valat wrote:
Le vendredi 03 janvier 2014 à 13:14 +0100, Milan Bouchet-Valat a écrit :
Le jeudi 26
I think the reason is that the compiler fails to infer the type of a for
test1 and test2. I guess if you write a::Vector{Int} = zeros(Int, dims...),
the situation would be quite different.
- Dahua
On Saturday, January 4, 2014 3:44:19 PM UTC-6, Milan Bouchet-Valat wrote:
Hi!
I'd like propose
John,
Thanks for doing this. I agree with you that consistent code styles have
lots of benefits.
I read through your draft, and agree with many of the points there, except
the following:
(6) Never place more than 80 characters on a line.
I agree that overly long lines hurt readability.
I think this is the shortest (and my favorite) way:
w = ["col$i" for i = 1:100_000]
- Dahua
On Wednesday, January 1, 2014 9:22:12 AM UTC-6, John Myles White wrote:
Since we’re throwing out ways to do this, here’s another one:
w = Array(UTF8String, 100_000)
for i in 1:100_000
    w[i] = "col$i"
end