On Tue, Oct 21, 2008 at 5:01 PM, Bruce Southey [EMAIL PROTECTED] wrote:
I think you are on your own here as it is a huge chunk to chew!
What you really mean by linear models is also part of that
(the Wikipedia entry is amusing). Most people probably look to stats
applications like lm
On Wed, Aug 13, 2008 at 4:01 PM, Robert Kern [EMAIL PROTECTED] wrote:
On Wed, Aug 13, 2008 at 14:37, Joe Harrington [EMAIL PROTECTED] wrote:
On Tue, Aug 12, 2008 at 19:28, Charles R Harris
[EMAIL PROTECTED] wrote:
On Tue, Aug 12, 2008 at 5:13 PM, Andrew Dalke [EMAIL PROTECTED] wrote:
On Tue, Aug 12, 2008 at 1:46 AM, Andrew Dalke [EMAIL PROTECTED] wrote:
Here's the implementation, from lib/function_base.py
def nanmin(a, axis=None):
    """Find the minimum over the given axis, ignoring NaNs."""
    y = array(a, subok=True)
    if not issubclass(y.dtype.type, _nx.integer):
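The quoted implementation is cut off above; as a hedged illustration of the behaviour it implements (using the modern top-level numpy function rather than the internals shown):

```python
import numpy as np

# nanmin skips NaN entries; the plain min propagates them instead
a = np.array([3.0, np.nan, 1.0, 5.0])
print(np.nanmin(a))  # 1.0, the NaN is ignored
print(np.min(a))     # nan, the NaN propagates
```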
On Thu, Jul 31, 2008 at 10:14 AM, Gael Varoquaux
[EMAIL PROTECTED] wrote:
On Thu, Jul 31, 2008 at 12:43:17PM +0200, Andrew Dalke wrote:
Startup performance has not been a numpy concern. It is a concern for
me, and it has been (for other packages) a concern for some of my
clients.
I am
On Wed, Jul 30, 2008 at 9:25 PM, Dave Peterson [EMAIL PROTECTED] wrote:
Hello,
I am very pleased to announce that Traits 3.0 has just been released!
All of the URLs on PyPi to Enthought seem to be broken (e.g.,
http://code.enthought.com/traits). Can you give an example showing how
traits
On Sun, Jun 22, 2008 at 3:58 PM, Andreas Klöckner [EMAIL PROTECTED] wrote:
PyCuda is based on the driver API. CUBLAS uses the high-level API. One
*can*
violate this rule without crashing immediately. But sketchy stuff does
happen. Instead, for BLAS-1 operations, PyCuda comes with a class
After poking around for a bit, I was wondering if there was a faster method
for the following:
# Array of index values 0..n
items = numpy.array([0, 3, 2, 1, 4, 2], dtype=int)
# Count the number of occurrences of each index
counts = numpy.zeros(5, dtype=int)
for i in items:
    counts[i] += 1
In my real
On Thu, May 22, 2008 at 12:08 PM, Keith Goodman [EMAIL PROTECTED] wrote:
How big is n? If it is much smaller than a million then loop over that
instead.
n is always relatively small, but I'd rather not do:
for i in range(n):
    counts[i] = (items == i).sum()
If that was the best alternative,
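For what it's worth, this counting pattern is what `numpy.bincount` implements in C, avoiding both the Python loop over `items` and the per-value `(items == i).sum()` scan; a minimal sketch:

```python
import numpy as np

items = np.array([0, 3, 2, 1, 4, 2], dtype=int)

# One C-level pass tallies every non-negative integer value;
# minlength pads the result so indices 0..4 are always present
counts = np.bincount(items, minlength=5)
print(counts)  # [1 1 2 1 1]
```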
I know I'm off topic and maybe a day late, but I'm pained by the naming of
ddof.
It is simply not intuitive for anyone other than the person who thought it
up (and, from my recollection, maybe not even for him). For one, most
stats folk use 'df' as the abbreviation for 'degrees of freedom'.
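Whatever it ends up being called, a short sketch of what the `ddof` parameter actually controls in `numpy.std`:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

# ddof ("delta degrees of freedom") is subtracted from N in the divisor:
# variance = sum((x - mean)**2) / (N - ddof)
print(np.std(x, ddof=0))  # population std, divides by N
print(np.std(x, ddof=1))  # sample std, divides by N - 1
```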
On 1/8/08, Matthieu Brucher [EMAIL PROTECTED] wrote:
I have AMD processor so I guess I should use ACML somehow instead.
However, first I would prefer my code to be platform-independent, and
second, unfortunately I haven't found anything in the numpy documentation
(on the scipy.org website and
Hi Bruce,
I have to add that I don't know the answer to your question either, but I do
know that it is solvable and that once the list experts return,
enlightenment will soon follow. My confidence comes from knowing the Python
internals for how left and right multiplication are performed. As
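Those internals can be sketched in a few lines: when the left operand's `__mul__` returns `NotImplemented`, Python falls back to the right operand's `__rmul__` (the class names here are illustrative, not from the thread):

```python
class A:
    def __mul__(self, other):
        if isinstance(other, A):
            return "A.__mul__"
        return NotImplemented  # decline; let the other operand try

class B:
    def __rmul__(self, other):
        return "B.__rmul__"

print(A() * A())  # A.__mul__ handles it
print(A() * B())  # A declines, so Python calls B.__rmul__
```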
On 7/20/07, Anne Archibald [EMAIL PROTECTED] wrote:
On 20/07/07, Nils Wagner [EMAIL PROTECTED] wrote:
lorenzo bolla wrote:
hi all.
is there a function in numpy to compute the exp of a matrix, similar
to expm in matlab?
for example:
expm([[0,0],[0,0]]) = eye(2)
Numpy doesn't provide
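For the record, `scipy.linalg.expm` is the usual answer to this question; a numpy-only sketch of the underlying idea for diagonalizable input (`expm_eig` is a hypothetical helper name, not a library function):

```python
import numpy as np

def expm_eig(a):
    """Matrix exponential via eigendecomposition.

    Valid only for diagonalizable matrices; scipy.linalg.expm is
    the robust general-purpose routine.
    """
    w, v = np.linalg.eig(a)
    return np.dot(v * np.exp(w), np.linalg.inv(v))

# exp of the zero matrix is the identity, matching MATLAB's expm
print(expm_eig(np.zeros((2, 2))).real)
```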
On 7/20/07, Nils Wagner [EMAIL PROTECTED] wrote:
Your sqrtm_eig(x) function won't work if x is defective.
See test_defective.py for details.
I am aware, though at least on my system, the SVD-based method is by far the
fastest and most robust (and can be made more robust by the addition of a
On 7/20/07, Nils Wagner [EMAIL PROTECTED] wrote:
Your sqrtm_eig(x) function won't work if x is defective.
See test_defective.py for details.
I've added several defective matrices to my test cases and the SVD method
doesn't work as well as I'd thought (which is obvious in retrospect). I'm
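For context, a hedged sketch of what an eigendecomposition-based `sqrtm_eig` looks like and why it needs diagonalizable input (the thread's actual implementation isn't quoted here; `scipy.linalg.sqrtm` uses a Schur decomposition and handles defective matrices):

```python
import numpy as np

def sqrtm_eig(x):
    """Matrix square root via eigendecomposition.

    Breaks down when x is defective (not diagonalizable), the
    failure mode discussed in this thread.
    """
    w, v = np.linalg.eig(x)
    # cast to complex so negative eigenvalues don't produce NaNs
    return np.dot(v * np.sqrt(w.astype(complex)), np.linalg.inv(v))

a = np.array([[4.0, 0.0], [0.0, 9.0]])
s = sqrtm_eig(a)
print(np.dot(s, s).real)  # recovers a
```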
On 7/20/07, Charles R Harris [EMAIL PROTECTED] wrote:
I expect using sqrt(x) will be faster than x**.5.
I did test this at one point and was also surprised that sqrt(x) seemed
slower than **.5. However I found out otherwise while preparing a timeit
script to demonstrate this observation.
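A timeit sketch of the comparison being described (array size and repeat count here are arbitrary choices, not from the thread):

```python
import timeit

setup = "import numpy as np; x = np.random.rand(100000)"
# Time both spellings of the elementwise square root
t_sqrt = timeit.timeit("np.sqrt(x)", setup=setup, number=200)
t_pow = timeit.timeit("x ** 0.5", setup=setup, number=200)
print("np.sqrt(x):", t_sqrt)
print("x ** 0.5:  ", t_pow)
```

The two spellings are numerically equivalent, so only speed distinguishes them.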
, -43.21448471 +1.42983144e-07j]])
This certainly is a slightly unexpected behaviour.
Hanno
Kevin Jacobs [EMAIL PROTECTED] said:
I had to poke around before finding it too:
bmat( [[K,G],[G.T, zeros(nc)]] )
On 4/1/07, Bill Baxter [EMAIL PROTECTED] wrote:
What's the best way of assembling a big matrix from parts?
I'm using lagrange multipliers to enforce constraints and this kind of
matrix comes up a lot:
[[ K, G],
[
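A runnable sketch of that assembly with small placeholder blocks (K and G here are hypothetical stand-ins, not the poster's matrices; note the zero block must be nc-by-nc for the shapes to match):

```python
import numpy as np

K = np.eye(3)            # placeholder "stiffness" block, 3x3
G = np.ones((3, 2))      # placeholder constraint block, 3x2
nc = G.shape[1]          # number of constraints

# Assemble the saddle-point matrix [[K, G], [G.T, 0]]
A = np.bmat([[K, G], [G.T, np.zeros((nc, nc))]])
print(A.shape)  # (5, 5)
```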