2014-04-08 14:52 GMT+02:00 Nathaniel Smith <n...@pobox.com>:
> On Tue, Apr 8, 2014 at 9:58 AM, Björn Lindqvist <bjou...@gmail.com> wrote:
>> 2014-04-07 3:41 GMT+02:00 Nathaniel Smith <n...@pobox.com>:
>>> So, I guess as far as I'm concerned, this is ready to go. Feedback welcome:
>>>   http://legacy.python.org/dev/peps/pep-0465/
>>
>> Couldn't you please have made your motivation example actually runnable?
>>
>> import numpy as np
>> from numpy.linalg import inv, solve
>>
>> # Using dot function:
>> S = np.dot((np.dot(H, beta) - r).T,
>>            np.dot(inv(np.dot(np.dot(H, V), H.T)), np.dot(H, beta) - r))
>>
>> # Using dot method:
>> S = (H.dot(beta) - r).T.dot(inv(H.dot(V).dot(H.T))).dot(H.dot(beta) - r)
>>
>> Don't keep your reader hanging! Tell us what the magical variables H,
>> beta, r and V are. And why import solve when you aren't using it?
>> Curious readers that aren't very good at matrix math, like me, should
>> still be able to follow your logic. Even if it is just random data,
>> it's better than nothing!
>
> There's a footnote that explains the math in more detail and links to
> the real code this was adapted from. And solve is used further down in
> the section. But if running it is really what you want, just insert:
>
> beta = np.random.randn(10)
> H = np.random.randn(2, 10)
> r = np.random.randn(2)
> V = np.random.randn(10, 10)
>
> Does that help? ;-)
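
Putting Nathaniel's suggested random data together with the PEP's snippet gives a complete runnable sketch (the shapes are the ones from the reply above; a fixed seed is added here so the run is reproducible, which the original did not specify):

```python
import numpy as np
from numpy.linalg import inv

np.random.seed(0)  # seed is an addition for reproducibility

# Random stand-ins for the regression quantities in the PEP's example:
beta = np.random.randn(10)     # coefficient vector
H = np.random.randn(2, 10)     # constraint matrix
r = np.random.randn(2)         # constraint values
V = np.random.randn(10, 10)    # covariance-like matrix

# The PEP 465 motivating expression, spelled with np.dot:
S = np.dot((np.dot(H, beta) - r).T,
           np.dot(inv(np.dot(np.dot(H, V), H.T)), np.dot(H, beta) - r))

# H @ beta has shape (2,), the inverse factor is (2, 2), so S is a scalar.
print(np.ndim(S))  # 0
```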

Thanks! Yes, it does help. Now I can see that this expression:

  np.dot(H, beta) - r

evaluates to a vector, and a vector transposed is the vector itself.
So the .T in the expression (np.dot(H, beta) - r).T is
unnecessary, isn't it?
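
That observation checks out in NumPy: .T reverses an array's axes, and a 1-D array has only one axis, so transposing it returns the same array. A quick sketch (the shapes follow the example above):

```python
import numpy as np

H = np.random.randn(2, 10)
beta = np.random.randn(10)
r = np.random.randn(2)

v = np.dot(H, beta) - r   # a 1-D array of shape (2,)

# For a 1-D array, .T is a no-op: there is no second axis to swap.
assert v.T.shape == v.shape == (2,)
assert np.array_equal(v.T, v)
```

Note that this only holds for 1-D arrays; for a 2-D column or row vector of shape (n, 1) or (1, n), .T does swap the axes.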


-- 
mvh/best regards Björn Lindqvist
_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev