Public service announcement. It's actually on that Wikipedia page under the
other section:
https://en.m.wikipedia.org/wiki/Public_service_announcement
Josh
On Aug 8, 2015, at 8:15 AM, Uwe Fechner uwe.fechner@gmail.com wrote:
Hello,
what does PSA in the title mean?
Wikipedia didn't help:
On Monday, January 26, 2015 at 12:09:40 PM UTC-5, Joshua Adelman wrote:
On Monday, January 26, 2015 at 12:02:10 PM UTC-5, Wai Yip Tung wrote:
I'm trying to construct a list of lists and do some operations on it. In
Python I would do
In [7]: ll = [[1,2],[1,2,3],[7]]
Say I want to sort them by the length of the list; many functions accept a
`key` parameter. In
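In Julia, the counterpart of Python's `key` argument is the `by` keyword on `sort` (a minimal sketch, not from the original reply):

```julia
# Sort a vector of vectors by the length of each inner vector,
# analogous to Python's sorted(ll, key=len).
ll = [[1, 2], [1, 2, 3], [7]]
sorted_ll = sort(ll, by=length)
# sorted_ll == [[7], [1, 2], [1, 2, 3]]
```

`by` takes any function of one element, so e.g. `sort(ll, by=sum)` works the same way.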
Personally (and I am definitely biased by experience), I really like the model
that is used pretty widely in the scientific Python community. A lot of people
use Continuum's Anaconda Python distribution, which includes about 200 packages
(not all of them are actually Python, though). It's tested
Hi Steven,
I added your version (vander3) to my benchmark and updated the IJulia
notebook:
http://nbviewer.ipython.org/gist/synapticarbors/26910166ab775c04c47b
As you mentioned it's a lot faster than the other version I wrote and evens
out the underperformance vs numpy for the larger arrays
numpy.dot is calling BLAS under the hood so it's calling fast code and I
wouldn't expect Julia to shine against it. Try calling numpy methods that
aren't thin wrappers around C and you should see a bigger difference. Or
implement a larger more complex algorithm. Here's a simple micro-benchmark
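The benchmark itself is cut off in this snippet. As a rough illustration of the point (a sketch of my own, not the author's code), a hand-written loop is the kind of workload where Julia can compete, since no BLAS call is involved:

```julia
# Hypothetical micro-benchmark: column sums via an explicit loop.
# NumPy would need a C-backed call for this; a plain Python loop would
# be slow, while in Julia the loop compiles to fast native code.
function colsums(A)
    s = zeros(eltype(A), size(A, 2))
    for j in 1:size(A, 2)
        for i in 1:size(A, 1)
            s[j] += A[i, j]
        end
    end
    return s
end

A = ones(2000, 2000)
colsums(A)        # warm-up call (triggers compilation)
@time colsums(A)  # timed run
```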
Hi Jiahao,
Just a small note - based on your comments in the thread you mentioned, I ended
up changing my test to just multiply ones to avoid over/underflow. Those are
the results that are now in that notebook, so that shouldn't be an issue in the
plotted timings. On the python side, I'm using
, though, I may not be understanding;
it looks as if Julia is slower for larger matrices. Is this true, or am I
just going crazy and not able to properly read graphs anymore?
On Thursday, January 8, 2015 at 12:46:29 PM UTC-6, Joshua Adelman wrote:
numpy.dot is calling BLAS under the hood so
When I just stick this whole thing in a function (as is recommended by the
performance tips section of the docs), it goes from 0.03 seconds to 1.2e-6
seconds. Literally:
function testf()
your code
end
I’ve just started playing around with Julia myself, and I’ve definitely
appreciated that
don't fully understand
the solution though. I did
function testf()
My Code
end
Then, when I do the commands shown below, I still get 1e-3 seconds.
tic()
testf()
toc()
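A likely explanation (my guess, not stated in this snippet): the first call to a function includes JIT compilation, so the timed call should not be the first one. A sketch, with an illustrative body standing in for "My Code":

```julia
# Illustrative function standing in for "My Code".
function testf()
    s = 0.0
    for i in 1:10^6
        s += i
    end
    return s
end

testf()        # first call: includes compilation time
@time testf()  # second call: measures only execution
```

`tic()`/`toc()` measure wall time the same way, so the same warm-up rule applies to them.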
On Tuesday, January 6, 2015 7:48:02 PM UTC-6, Joshua Adelman wrote:
When I just stick this whole thing in a function
My understanding from the style guide and general recommendations is that you
shouldn’t have a single monolithic function that gets called once. Instead it’s
idiomatic and performant to compose complex algorithms from many concise
functions that are called repeatedly. If one of those functions
I'm a long time Python/Numpy user and am starting to play around with Julia
a bit. To get a handle on the language and how to write fast code, I've
been implementing some simple functions and then trying to performance tune
them. One such experiment involved generating Vandermonde matrices and
On Saturday, January 3, 2015 11:56:20 PM UTC-5, Jiahao Chen wrote:
On Sat, Jan 3, 2015 at 11:27 PM, Joshua Adelman joshua@gmail.com wrote:
PS - maybe it's my experience with numpy, but I wonder if any thought has
been given to following more of a numpy-style convention
I'm attempting to do some dynamic module loading and subsequent processing
and can't quite figure something out. I think I have a reasonable (albeit
maybe not idiomatic) mechanism for dynamically importing a set of modules.
See my stackoverflow question and the subsequent self answer:
```
(mod00)).(symbol(func))()
hello
```
I'm not sure what the overall goal is here, but just beware that eval
should only be used like this for testing purposes or else your code will
be slow.
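One eval-free alternative (a sketch under my own names; `Mod00` and `greet` are illustrative, and newer Julia spells `symbol` as `Symbol`): resolve the function once with `getfield`, then call it directly:

```julia
# Look up a function in a module by name once, then call it directly;
# this keeps eval out of any performance-sensitive code path.
module Mod00
greet() = "hello"
end

funcname = "greet"
f = getfield(Mod00, Symbol(funcname))  # resolve once
f()                                    # fast direct call thereafter
```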
On Mon, Dec 29, 2014 at 10:45 PM, Joshua Adelman joshua@gmail.com wrote:
I'm
On Sunday, December 21, 2014 10:36:46 PM UTC-5, Ronald L. Rivest wrote:
I find the following behavior of zip extremely confusing;
either I am misunderstanding something, or perhaps there
is a bug in the implementation of zip(?) [I would expect
the first and last expressions to give the same result.]
It has not escaped notice that this is less than ideal :-).
--Tim
On Friday, November 21, 2014 11:57:10 AM Joshua Adelman wrote:
I'm playing around with Julia for the first time in an attempt to see if
I
can replace a Python + Cython component of a system I'm building.
Basically
I have a file of bytes representing a numpy structured/recarray
On Saturday, November 22, 2014 6:31:27 PM UTC-5, Patrick O'Leary wrote:
On Saturday, November 22, 2014 9:57:51 AM UTC-6, Isaiah wrote:
Have you looked at StrPack.jl? It may have a packed option. Julia uses
the platform ABI padding rules for easy interop with C.
Yes, you can use the
I'm playing around with Julia for the first time in an attempt to see if I
can replace a Python + Cython component of a system I'm building. Basically
I have a file of bytes representing a numpy structured/recarray (in memory
this is an array of structs). This gets memory mapped into a numpy
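A minimal sketch of the struct-array round trip in Julia (modern syntax; the field names and layout are illustrative and assume the on-disk record has no padding):

```julia
# Interpret raw bytes as an array of fixed-layout records, the Julia
# analogue of viewing a file as a NumPy structured array.
struct Record
    id::Int64
    value::Float64
end

# Simulate a file of bytes: write two records, then reinterpret them back.
buf = IOBuffer()
write(buf, [Record(1, 2.5), Record(2, 4.5)])
bytes = take!(buf)

records = reinterpret(Record, bytes)
records[2].value  # 4.5
```

For a real file, the same `reinterpret` works on bytes obtained from `read` or `Mmap.mmap`.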