Re: numpy (matrix solver) - python vs. matlab

2012-05-04 Thread someone

On 05/04/2012 05:52 AM, Steven D'Aprano wrote:

On Thu, 03 May 2012 19:30:35 +0200, someone wrote:



So how do you explain that the natural frequencies from FEM (with
condition number ~1e6) generally correlates really good with real
measurements (within approx. 5%), at least for the first 3-4 natural
frequencies?


I would counter your hand-waving ("correlates really good", within
approx 5% of *what*?) with hand-waving of my own:


Within 5% of experiments of course.
There is not much else to compare with.


Sure, that's exactly what I would expect!

*wink*

By the way, if I didn't say so earlier, I'll say so now: the
interpretation of how bad the condition number is will depend on the
underlying physics and/or mathematics of the situation. The
interpretation of "loss of digits of precision" is a general rule of thumb 
that holds in many diverse situations, not a rule of physics that cannot
be broken in this universe.

If you have found a scenario where another interpretation of condition
number applies, good for you. That doesn't change the fact that, under
normal circumstances when trying to solve systems of linear equations, a
condition number of 1e6 is likely to blow away *all* the accuracy in your
measured data. (Very few physical measurements are accurate to more than
six digits.)


Not true, IMHO.

Eigenfrequencies (I think that is a very typical physical measurement 
and I cannot think of something that is more typical) don't need to be 
accurate to 6 digits. I'm happy with below 5% error. So if an 
eigenfrequency is measured to 100 Hz, I'm happy if the numerical model 
gives a result in the 5%-range of 95-105 Hz. This I got with a condition 
number of approx. 1e6 and it's good enough for me. I don't think anyone 
expects 6-digit accuracy with eigenfrequencies.




--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-04 Thread someone

On 05/04/2012 06:15 AM, Russ P. wrote:

On May 3, 4:59 pm, someone newsbo...@gmail.com wrote:

On 05/04/2012 12:58 AM, Russ P. wrote:
Ok, but I just don't understand what's in the empirical category, sorry...


I didn't look it up, but as far as I know, empirical just means based
on experiment, which means based on measured data. Unless I am


FEM based on measurement data? Still, I don't understand it, sorry.


mistaken, a finite element analysis is not based on measured data.


I'm probably a bit narrow-thinking because I just worked with this small 
FEM-program (in Matlab), but can you please give an example of a 
matrix-problem that is based on measurement data?



Yes, the results can be *compared* with measured data and perhaps
calibrated with measured data, but those are not the same thing.


Exactly. That's why I don't understand what solving a matrix system 
using measurement/empirical data, could typically be an example of...?



I agree with Steven D's comment above, and I will reiterate that a
condition number of 1e6 would not inspire confidence in me. If I had a
condition number like that, I would look for a better model. But
that's just a gut reaction, not a hard scientific rule.


I don't have any better model and don't know anything better. I still 
think that 5% accuracy is good enough and that nobody needs 6-digit 
precision for practical/engineering/empirical work... Maybe quantum 
physicists need more than 6 digits of accuracy, but most 
practical/engineering problems are ok with an accuracy of 5%, I think, 
IMHO... Please tell me if I'm wrong.



--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-03 Thread someone

On 05/02/2012 11:45 PM, Russ P. wrote:

On May 2, 1:29 pm, someone newsbo...@gmail.com wrote:


If your data starts off with only 1 or 2 digits of accuracy, as in your
example, then the result is meaningless -- the accuracy will be 2-2
digits, or 0 -- *no* digits in the answer can be trusted to be accurate.


I just solved a FEM eigenvalue problem where the condition number of the
mass and stiffness matrices was something like 1e6... Result looked good
to me... So I don't understand what you're saying about 10 = 1 or 2
digits. I think my problem was accurate enough, though I don't know what
error with 1e6 in condition number, I should expect. How did you arrive
at 1 or 2 digits for cond(A)=10, if I may ask ?


As Steven pointed out earlier, it all depends on the precision you are
dealing with. If you are just doing pure mathematical or numerical
work with no real-world measurement error, then a condition number of
1e6 may be fine. But you had better be using double precision (64-
bit) floating point numbers (which are the default in Python, of
course). Those have approximately 15-16 digits of precision, so you are
in good shape. Single-precision floats only have 6 or 7 digits of
precision, so you'd be in trouble there.
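[Editor's aside: a quick numpy sketch of that point, using a 6x6 Hilbert matrix as a stand-in ill-conditioned system; its condition number is around 1.5e7, the same ballpark as the 1e6 discussed here.]

```python
import numpy as np

# 6x6 Hilbert matrix: a classic ill-conditioned test matrix, cond ~ 1.5e7.
n = 6
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true

x64 = np.linalg.solve(A, b)                                        # double precision
x32 = np.linalg.solve(A.astype(np.float32), b.astype(np.float32))  # single precision

err64 = np.max(np.abs(x64 - x_true))
err32 = np.max(np.abs(x32 - x_true))
print(np.linalg.cond(A))   # ~1.5e7
print(err64, err32)        # doubles keep several digits; singles lose almost all
```

With ~16 digits to spend, doubles survive losing ~7 of them to the conditioning; with only ~7 digits, singles do not.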

For any practical engineering or scientific work, I'd say that a
condition number of 1e6 is very likely to be completely unacceptable.


So how do you explain that the natural frequencies from FEM (with 
condition number ~1e6) generally correlates really good with real 
measurements (within approx. 5%), at least for the first 3-4 natural 
frequencies?


I would say that the problem lies with the highest natural frequencies, 
they for sure cannot be verified - there's too little energy in them. 
But the lowest frequencies (the most important ones) are good, I think - 
even for high cond number.



--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-03 Thread Russ P.
On May 3, 10:30 am, someone newsbo...@gmail.com wrote:
 On 05/02/2012 11:45 PM, Russ P. wrote:



  On May 2, 1:29 pm, someone newsbo...@gmail.com wrote:

  If your data starts off with only 1 or 2 digits of accuracy, as in your
  example, then the result is meaningless -- the accuracy will be 2-2
  digits, or 0 -- *no* digits in the answer can be trusted to be accurate.

  I just solved a FEM eigenvalue problem where the condition number of the
  mass and stiffness matrices was something like 1e6... Result looked good
  to me... So I don't understand what you're saying about 10 = 1 or 2
  digits. I think my problem was accurate enough, though I don't know what
  error with 1e6 in condition number, I should expect. How did you arrive
  at 1 or 2 digits for cond(A)=10, if I may ask ?

  As Steven pointed out earlier, it all depends on the precision you are
  dealing with. If you are just doing pure mathematical or numerical
  work with no real-world measurement error, then a condition number of
  1e6 may be fine. But you had better be using double precision (64-
  bit) floating point numbers (which are the default in Python, of
  course). Those have approximately 15-16 digits of precision, so you are
  in good shape. Single-precision floats only have 6 or 7 digits of
  precision, so you'd be in trouble there.

  For any practical engineering or scientific work, I'd say that a
  condition number of 1e6 is very likely to be completely unacceptable.

 So how do you explain that the natural frequencies from FEM (with
 condition number ~1e6) generally correlates really good with real
 measurements (within approx. 5%), at least for the first 3-4 natural
 frequencies?

 I would say that the problem lies with the highest natural frequencies,
 they for sure cannot be verified - there's too little energy in them.
 But the lowest frequencies (the most important ones) are good, I think -
 even for high cond number.

Did you mention earlier what FEM stands for? If so, I missed it. Is
it finite-element modeling? Whatever the case, note that I said, "If
you are just doing pure mathematical or numerical work with no real-
world measurement error, then a condition number of 1e6 may be fine."
I forgot much more than I know about finite-element modeling, but isn't
it a purely numerical method of analysis? If that is the case, then my
comment above is relevant.

By the way, I didn't mean to patronize you with my earlier explanation
of orthogonal transformations. They are fundamental to understanding
the SVD, and I thought it might be interesting to anyone who is not
familiar with the concept.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-03 Thread someone

On 05/03/2012 07:55 PM, Russ P. wrote:

On May 3, 10:30 am, someone newsbo...@gmail.com wrote:

On 05/02/2012 11:45 PM, Russ P. wrote:



For any practical engineering or scientific work, I'd say that a
condition number of 1e6 is very likely to be completely unacceptable.


So how do you explain that the natural frequencies from FEM (with
condition number ~1e6) generally correlates really good with real
measurements (within approx. 5%), at least for the first 3-4 natural
frequencies?

I would say that the problem lies with the highest natural frequencies,
they for sure cannot be verified - there's too little energy in them.
But the lowest frequencies (the most important ones) are good, I think -
even for high cond number.


Did you mention earlier what FEM stands for? If so, I missed it. Is
it finite-element modeling? Whatever the case, note that I said, If


Sorry, yes: Finite Element Model.


you are just doing pure mathematical or numerical work with no real-
world measurement error, then a condition number of
1e6 may be fine. I forgot much more than I know about finite-element
modeling, but isn't it a purely numerical method of analysis? If that


I'm not sure exactly what the definition of a purely numerical 
method of analysis is? I would guess that the answer is yes, it's a purely 
numerical method? But I also think it's practical engineering or 
scientific work...



is the case, then my comment above is relevant.


Uh, I just don't understand the difference:

1) For any practical engineering or scientific work, I'd say that a 
condition number of 1e6 is very likely to be completely unacceptable.


vs.

2) If you are just doing pure mathematical or numerical work with no 
real-world measurement error, then a condition number of 1e6 may be fine.


I would think that FEM is a practical engineering work and also pure 
numerical work... Or something...



By the way, I didn't mean to patronize you with my earlier explanation
of orthogonal transformations. They are fundamental to understanding
the SVD, and I thought it might be interesting to anyone who is not
familiar with the concept.


Don't worry, I think it was really good and I don't think anyone 
patronized me; on the contrary, people were/are very helpful. SVD isn't my 
strongest side and maybe I should've thought a bit more about this 
singular matrix and perhaps realized what some people here already 
explained, a bit earlier (maybe before I asked). Anyway, it's been good 
to hear/read what you (and others) have written.


Yesterday and earlier today I was at work during the day so 
answering/replying took a bit longer than I like, considering the huge 
flow of posts in the matlab group. But now I'm home most of the time 
for the next 3 days and will check for followup posts quite frequently, I 
think...


--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-03 Thread Russ P.
Yeah, I realized that I should rephrase my previous statement to
something like this:

For any *empirical* engineering or scientific work, I'd say that a
condition number of 1e6 is likely to be unacceptable.

I'd put finite elements into the category of theoretical and numerical
rather than empirical. Still, a condition number of 1e6 would bother
me, but maybe that's just me.

--Russ P.


On May 3, 3:21 pm, someone newsbo...@gmail.com wrote:
 On 05/03/2012 07:55 PM, Russ P. wrote:



  On May 3, 10:30 am, someone newsbo...@gmail.com wrote:
  On 05/02/2012 11:45 PM, Russ P. wrote:
  For any practical engineering or scientific work, I'd say that a
  condition number of 1e6 is very likely to be completely unacceptable.

  So how do you explain that the natural frequencies from FEM (with
  condition number ~1e6) generally correlates really good with real
  measurements (within approx. 5%), at least for the first 3-4 natural
  frequencies?

  I would say that the problem lies with the highest natural frequencies,
  they for sure cannot be verified - there's too little energy in them.
  But the lowest frequencies (the most important ones) are good, I think -
  even for high cond number.

  Did you mention earlier what FEM stands for? If so, I missed it. Is
  it finite-element modeling? Whatever the case, note that I said, If

 Sorry, yes: Finite Element Model.

  you are just doing pure mathematical or numerical work with no real-
  world measurement error, then a condition number of
  1e6 may be fine. I forgot much more than I know about finite-element
  modeling, but isn't it a purely numerical method of analysis? If that

 I'm not sure exactly, what is the definition of a purely numerical
 method of analysis? I would guess that the answer is yes, it's a purely
 numerical method? But I also think it's a practical engineering or
 scientific work...

  is the case, then my comment above is relevant.

 Uh, I just don't understand the difference:

 1) For any practical engineering or scientific work, I'd say that a
 condition number of 1e6 is very likely to be completely unacceptable.

 vs.

 2) If you are just doing pure mathematical or numerical work with no
 real-world measurement error, then a condition number of 1e6 may be fine.

 I would think that FEM is a practical engineering work and also pure
 numerical work... Or something...

  By the way, I didn't mean to patronize you with my earlier explanation
  of orthogonal transformations. They are fundamental to understanding
  the SVD, and I thought it might be interesting to anyone who is not
  familiar with the concept.

 Don't worry, I think it was really good and I don't think anyone
 patronized me; on the contrary, people were/are very helpful. SVD isn't my
 strongest side and maybe I should've thought a bit more about this
 singular matrix and perhaps realized what some people here already
 explained, a bit earlier (maybe before I asked). Anyway, it's been good
 to hear/read what you (and others) have written.

 Yesterday and earlier today I was at work during the day so
 answering/replying took a bit longer than I like, considering the huge
 flow of posts in the matlab group. But now I'm home most of the time,
 for the next 3 days and will check for followup posts quite frequently, I
 think...

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-03 Thread someone

On 05/04/2012 12:58 AM, Russ P. wrote:

Yeah, I realized that I should rephrase my previous statement to
something like this:

For any *empirical* engineering or scientific work, I'd say that a
condition number of 1e6 is likely to be unacceptable.


Still, I don't understand it. Do you have an example of this kind of 
work, if it's not FEM?



I'd put finite elements into the category of theoretical and numerical
rather than empirical. Still, a condition number of 1e6 would bother
me, but maybe that's just me.


Ok, but I just don't understand what's in the empirical category, sorry...

Maybe the conclusion is just that if cond(A) > 1e15 or 1e16, then that 
problem shouldn't be solved, and maybe this is also approx. where matlab 
has its warning threshold (maybe, I'm just guessing here)... So, maybe 
I could perhaps use that limit in my future python program (when I find 
out how to get the condition number etc., but I assume this can be 
googled for with no problems)...
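[Editor's aside: getting the condition number in numpy is indeed a one-liner. A small sketch follows; the exact threshold MATLAB uses for its warning is a guess here, not something verified.]

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

c = np.linalg.cond(A)   # 2-norm condition number, computed via the SVD
print(c)                # ~15 for this matrix

# Rough check in the spirit of MATLAB's warning: a matrix is numerically
# singular at double precision once cond approaches 1/eps ~ 4.5e15.
if c > 1.0 / np.finfo(float).eps:
    print("warning: matrix is close to singular or badly scaled")
```

`np.linalg.cond` also accepts other norms (1, inf, 'fro') via its second argument.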


--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-03 Thread Steven D'Aprano
On Thu, 03 May 2012 19:30:35 +0200, someone wrote:

 On 05/02/2012 11:45 PM, Russ P. wrote:
 On May 2, 1:29 pm, someone newsbo...@gmail.com wrote:

 If your data starts off with only 1 or 2 digits of accuracy, as in
 your example, then the result is meaningless -- the accuracy will be
 2-2 digits, or 0 -- *no* digits in the answer can be trusted to be
 accurate.

 I just solved a FEM eigenvalue problem where the condition number of
 the mass and stiffness matrices was something like 1e6... Result
 looked good to me... So I don't understand what you're saying about 10
 = 1 or 2 digits. I think my problem was accurate enough, though I
 don't know what error with 1e6 in condition number, I should expect.
 How did you arrive at 1 or 2 digits for cond(A)=10, if I may ask ?

 As Steven pointed out earlier, it all depends on the precision you are
 dealing with. If you are just doing pure mathematical or numerical work
 with no real-world measurement error, then a condition number of 1e6
 may be fine. But you had better be using double precision (64- bit)
 floating point numbers (which are the default in Python, of course).
 Those have approximately 15-16 digits of precision, so you are in good
 shape. Single-precision floats only have 6 or 7 digits of precision, so
 you'd be in trouble there.

 For any practical engineering or scientific work, I'd say that a
 condition number of 1e6 is very likely to be completely unacceptable.
 
 So how do you explain that the natural frequencies from FEM (with
 condition number ~1e6) generally correlates really good with real
 measurements (within approx. 5%), at least for the first 3-4 natural
 frequencies?

I would counter your hand-waving ("correlates really good", within 
approx 5% of *what*?) with hand-waving of my own:

Sure, that's exactly what I would expect!

*wink*

By the way, if I didn't say so earlier, I'll say so now: the 
interpretation of how bad the condition number is will depend on the 
underlying physics and/or mathematics of the situation. The 
interpretation of "loss of digits of precision" is a general rule of thumb 
that holds in many diverse situations, not a rule of physics that cannot 
be broken in this universe.

If you have found a scenario where another interpretation of condition 
number applies, good for you. That doesn't change the fact that, under 
normal circumstances when trying to solve systems of linear equations, a 
condition number of 1e6 is likely to blow away *all* the accuracy in your 
measured data. (Very few physical measurements are accurate to more than 
six digits.)



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-03 Thread Russ P.
On May 3, 4:59 pm, someone newsbo...@gmail.com wrote:
 On 05/04/2012 12:58 AM, Russ P. wrote:

  Yeah, I realized that I should rephrase my previous statement to
  something like this:

  For any *empirical* engineering or scientific work, I'd say that a
  condition number of 1e6 is likely to be unacceptable.

 Still, I don't understand it. Do you have an example of this kind of
 work, if it's not FEM?

  I'd put finite elements into the category of theoretical and numerical
  rather than empirical. Still, a condition number of 1e6 would bother
  me, but maybe that's just me.

 Ok, but I just don't understand what's in the empirical category, sorry...

I didn't look it up, but as far as I know, empirical just means based
on experiment, which means based on measured data. Unless I am
mistaken, a finite element analysis is not based on measured data.
Yes, the results can be *compared* with measured data and perhaps
calibrated with measured data, but those are not the same thing.

I agree with Steven D's comment above, and I will reiterate that a
condition number of 1e6 would not inspire confidence in me. If I had a
condition number like that, I would look for a better model. But
that's just a gut reaction, not a hard scientific rule.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-02 Thread someone

On 05/02/2012 01:05 AM, Paul Rubin wrote:

someone newsbo...@gmail.com writes:

Actually I know some... I just didn't think so much about this, before
writing the question, as I should. I know there's also something
like singular value decomposition that I think can help solve
otherwise ill-posed problems,


You will probably get better advice if you are able to describe what
problem (ill-posed or otherwise) you are actually trying to solve.  SVD


I don't understand what else I should write. I gave the singular matrix 
and that's it. Nothing more is to say about this problem, except it 
would be nice to learn some things for future use (for instance 
understanding SVD more - perhaps someone geometrically can explain SVD, 
that'll be really nice, I hope)...



just separates out the orthogonal and scaling parts of the
transformation induced by a matrix.  Whether that is of any use to you
is unclear since you don't say what you're trying to do.


Still, I don't think I completely understand SVD. SVD (at least in 
Matlab) returns 3 matrices, one is a diagonal matrix I think. I think I 
would better understand it with geometric examples, if one would be so 
kind to maybe write something about that... I can plot 3D vectors in 
matlab very easily, so maybe I'd better understand SVD if I hear/read the 
geometric explanation (references to textbooks/similar are also appreciated).

--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-02 Thread someone

On 05/02/2012 01:38 AM, Russ P. wrote:

On May 1, 4:05 pm, Paul Rubin no.em...@nospam.invalid wrote:

someone newsbo...@gmail.com writes:

Actually I know some... I just didn't think so much about this, before
writing the question, as I should. I know there's also something
like singular value decomposition that I think can help solve
otherwise ill-posed problems,


You will probably get better advice if you are able to describe what
problem (ill-posed or otherwise) you are actually trying to solve.  SVD
just separates out the orthogonal and scaling parts of the
transformation induced by a matrix.  Whether that is of any use to you
is unclear since you don't say what you're trying to do.


I agree with the first sentence, but I take slight issue with the word
"just" in the second. The orthogonal part of the transformation is
non-distorting, but the scaling part essentially distorts the space.
At least that's how I think about it. The larger the ratio between the
largest and smallest singular value, the more distortion there is. SVD
may or may not be the best choice for the final algorithm, but it is
useful for visualizing the transformation you are applying. It can
provide clues about the quality of the selection of independent
variables, state variables, or inputs.


Me would like to hear more! :-)

I would really appreciate it if anyone could maybe post a simple SVD 
example and tell what the vectors from the SVD represent geometrically 
/ visually, because I don't understand it well enough and I'm sure it's 
very important when it comes to solving matrix systems...
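[Editor's aside: a minimal numpy sketch of the geometric picture being asked for, with an invented 2x2 matrix. A matrix maps the unit circle to an ellipse; the singular values are the lengths of that ellipse's semi-axes, and plotting `circle` and `ellipse` shows it directly.]

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])   # arbitrary example matrix

U, s, Vt = np.linalg.svd(A)

# Push the unit circle through A; the image is an ellipse whose semi-axis
# lengths are the singular values s[0] >= s[1], in directions U[:,0], U[:,1].
t = np.linspace(0.0, 2.0 * np.pi, 2000)
circle = np.vstack([np.cos(t), np.sin(t)])
ellipse = A @ circle

radii = np.linalg.norm(ellipse, axis=0)
print(radii.max(), s[0])    # longest stretch ~ largest singular value
print(radii.min(), s[1])    # shortest stretch ~ smallest singular value
```

The more eccentric the ellipse (large s[0]/s[1]), the worse conditioned the matrix.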


--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-02 Thread Russ P.
On May 1, 11:03 pm, someone newsbo...@gmail.com wrote:
 On 05/02/2012 01:38 AM, Russ P. wrote:


  On May 1, 4:05 pm, Paul Rubin no.em...@nospam.invalid wrote:
  someone newsbo...@gmail.com writes:
  Actually I know some... I just didn't think so much about this, before
  writing the question, as I should. I know there's also something
  like singular value decomposition that I think can help solve
  otherwise ill-posed problems,

  You will probably get better advice if you are able to describe what
  problem (ill-posed or otherwise) you are actually trying to solve.  SVD
  just separates out the orthogonal and scaling parts of the
  transformation induced by a matrix.  Whether that is of any use to you
  is unclear since you don't say what you're trying to do.

  I agree with the first sentence, but I take slight issue with the word
  "just" in the second. The orthogonal part of the transformation is
  non-distorting, but the scaling part essentially distorts the space.
  At least that's how I think about it. The larger the ratio between the
  largest and smallest singular value, the more distortion there is. SVD
  may or may not be the best choice for the final algorithm, but it is
  useful for visualizing the transformation you are applying. It can
  provide clues about the quality of the selection of independent
  variables, state variables, or inputs.

 Me would like to hear more! :-)

 I would really appreciate it if anyone could maybe post a simple SVD
 example and tell what the vectors from the SVD represent geometrically
 / visually, because I don't understand it well enough and I'm sure it's
 very important when it comes to solving matrix systems...

SVD is perhaps the ultimate matrix decomposition and the ultimate tool
for linear analysis. Google it and take a look at the excellent
Wikipedia page on it. I would be wasting my time if I tried to compete
with that.

To really appreciate the SVD, you need some background in linear
algebra. In particular, you need to understand orthogonal
transformations. Think about a standard 3D Cartesian coordinate frame.
A rotation of the coordinate frame is an orthogonal transformation of
coordinates. The original frame and the new frame are both orthogonal.
A vector in one frame is converted to the other frame by multiplying
by an orthogonal matrix. The main feature of an orthogonal matrix is
that its transpose is its inverse (hence the inverse is trivial to
compute).

The SVD can be thought of as factoring any linear transformation into
a rotation, then a scaling, followed by another rotation. The scaling
is represented by the middle matrix of the transformation, which is a
diagonal matrix of the same dimensions as the original matrix. The
singular values can be read off of the diagonal. If any of them are
zero, then the original matrix is singular. If the ratio of the
largest to smallest singular value is large, then the original matrix
is said to be poorly conditioned.

Standard Cartesian coordinate frames are orthogonal. Imagine an x-y
coordinate frame in which the axes are not orthogonal. Such a
coordinate frame is possible, but they are rarely used. If the axes
are parallel, the coordinate frame will be singular and will basically
reduce to one dimension. If the x and y axes are nearly parallel,
the coordinate frame could still be used in theory, but it will be
poorly conditioned. You will need large numbers to represent points
fairly close to the origin, and small deviations will translate into
large changes in coordinate values. That can lead to problems due to
numerical roundoff errors and other kinds of errors.
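[Editor's aside: the rotation-scaling-rotation description above maps directly onto what numpy returns; a small sketch with a made-up matrix.]

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])   # arbitrary example

U, s, Vt = np.linalg.svd(A)

# U and Vt are orthogonal: the transpose is the (trivial) inverse.
print(np.allclose(U.T @ U, np.eye(2)), np.allclose(Vt @ Vt.T, np.eye(2)))

# A factors as rotation/reflection (Vt), axis scaling (diag(s)), rotation (U).
print(np.allclose(U @ np.diag(s) @ Vt, A))

# Ratio of largest to smallest singular value = the 2-norm condition number.
print(s[0] / s[-1], np.linalg.cond(A))
```

A zero entry in `s` would mean the original matrix is singular, exactly as described above.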

--Russ P.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-02 Thread Jussi Piitulainen
someone writes:

 except it would be nice to learn some things for future use (for
 instance understanding SVD more - perhaps someone geometrically can
 explain SVD, that'll be really nice, I hope)...

The Wikipedia article looks promising to me:
http://en.wikipedia.org/wiki/Singular_value_decomposition

Lots in it, with annotated links to more, including a song.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-02 Thread Paul Rubin
Russ P. russ.paie...@gmail.com writes:
 The SVD can be thought of as factoring any linear transformation into
 a rotation, then a scaling, followed by another rotation. 

Ah yes, my description was backwards, sorry.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-02 Thread Paul Rubin
someone newsbo...@gmail.com writes:
 You will probably get better advice if you are able to describe what
 problem (ill-posed or otherwise) you are actually trying to solve.  SVD
 I don't understand what else I should write. I gave the singular
 matrix and that's it. Nothing more is to say about this problem,

You could write what your application is.  What do the numbers in that
matrix actually represent?  

 except it would be nice to learn some things for future use (for
 instance understanding SVD more 

The Wikipedia article about SVD is pretty good.  

 Still, I don't think I completely understand SVD. SVD (at least in
 Matlab) returns 3 matrices, one is a diagonal matrix I think.

Yes.  Two unitary matrices and a diagonal matrix.  The diagonal matrix
holds the singular values, which for a normal matrix are the magnitudes
of its eigenvalues.  A matrix (for simplicity consider it to have nonzero
determinant) represents a linear transformation in space.  Think of a
bunch of vectors as arrows sticking out of the origin in different
directions, and consider what happens to them if you use the matrix to
act on all of them at once.  You could break the transformation into 3
steps: 1) scale the axes by suitable amounts, so all the vectors stretch
or shrink depending on their directions.  2) Rotate all the vectors
without changing their lengths, through some combination of angles; 3)
undo the scaling transformation so that the vectors get their original
lengths back.  Each of these 3 steps can be described by a matrix and
the 3 matrices are what SVD calculates.

I still don't have any idea whether this info is going to do you any
good.  There are some quite knowledgeable numerics folks here (I'm not
one) who can probably be of more help, but frankly your questions don't
make a whole lot of sense.  

You might get the book Numerical Recipes which is old but instructive.
Maybe someone else here has other recommendations.
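[Editor's aside: one footnote, sketched below with an invented matrix. In general the singular values are the square roots of the eigenvalues of A^T A; they equal the magnitudes of A's own eigenvalues only for normal matrices.]

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 1.0]])   # a non-normal example matrix

U, s, Vh = np.linalg.svd(A)   # numpy's names for the three SVD factors

# Singular values = sqrt of the eigenvalues of A^T A ...
ev = np.linalg.eigvalsh(A.T @ A)            # ascending order
print(np.allclose(np.sort(s), np.sqrt(ev)))

# ... which here differ from |eigenvalues of A| (2 and 1), since A is not normal.
print(s, np.abs(np.linalg.eigvals(A)))
```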
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-02 Thread Kiuhnm

On 5/2/2012 8:00, someone wrote:

Still, I dont think I completely understand SVD. SVD (at least in
Matlab) returns 3 matrices, one is a diagonal matrix I think. I think I
would better understand it with geometric examples, if one would be so
kind to maybe write something about that... I can plot 3D vectors in
matlab, very easily so maybe I better understand SVD if I hear/read the
geometric explanation (references to textbook/similar is also appreciated).


Russ's post is a very good starting point. I hope you read it.

Kiuhnm
--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-02 Thread Steven D'Aprano
On Wed, 02 May 2012 08:00:44 +0200, someone wrote:

 On 05/02/2012 01:05 AM, Paul Rubin wrote:
 someone newsbo...@gmail.com writes:
 Actually I know some... I just didn't think so much about this, before
 writing the question, as I should. I know there's also something
 like singular value decomposition that I think can help solve
 otherwise ill-posed problems,

 You will probably get better advice if you are able to describe what
 problem (ill-posed or otherwise) you are actually trying to solve.  SVD
 
 I don't understand what else I should write. I gave the singular matrix
 and that's it.

You can't judge what an acceptable condition number is unless you know 
what your data is.

http://mathworld.wolfram.com/ConditionNumber.html
http://en.wikipedia.org/wiki/Condition_number

If your condition number is ten, then you should expect to lose one digit 
of accuracy in your solution, over and above whatever loss of accuracy 
comes from the numeric algorithm. A condition number of 64 will lose six 
bits, or about 1.8 decimal digits, of accuracy.

If your data starts off with only 1 or 2 digits of accuracy, as in your 
example, then the result is meaningless -- the accuracy will be 2-2 
digits, or 0 -- *no* digits in the answer can be trusted to be accurate.
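[Editor's aside: the rule of thumb above is easy to compute: digits lost is roughly log10 of the condition number. A minimal sketch with an invented nearly singular matrix.]

```python
import numpy as np

# Nearly linearly dependent rows make an ill-conditioned matrix.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])

c = np.linalg.cond(A)
digits_lost = np.log10(c)   # rule of thumb: decimal digits of accuracy lost
print(c, digits_lost)       # cond ~ 4e4, so roughly 4-5 digits lost
```

With only 1-2 digits of accuracy in the data, this matrix would indeed leave nothing trustworthy in the answer.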



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-02 Thread Steven_Lord



Russ P. russ.paie...@gmail.com wrote in message 
news:2275231f-405f-4ee3-966a-40c821b7c...@2g2000yqp.googlegroups.com...

On May 1, 11:52 am, someone newsbo...@gmail.com wrote:

On 04/30/2012 03:35 AM, Nasser M. Abbasi wrote:

 On 04/29/2012 07:59 PM, someone wrote:
I do not use python much myself, but a quick google showed that python
 scipy has API for linalg, so use, which is from the documentation, the
 following code example

 X = scipy.linalg.solve(A, B)

 But you still need to check the cond(). If it is too large, not good.
 How large and all that, depends on the problem itself. But the rule of
thumb, the lower the better. Less than 100 can be good in general, but I

 really can't give you a fixed number to use, as I am not an expert in
this subject; others who know more about it might have better
 recommendations.

Ok, that's a number...

Anyone wants to participate and do I hear something better than less
than 100 can be good in general ?

If I don't hear anything better, the limit is now 100...

What's the limit in matlab (on the condition number of the matrices), by
the way, before it comes up with a warning ???


I'm not going to answer, and the reason why is that saying the limit is X 
may lead you to believe that as long as my condition number is less than X, 
I'm safe. That's not the case. The threshold above which MATLAB warns is 
the fence that separates the tourists from the edge of the cliff of 
singularity -- just because you haven't crossed the fence doesn't mean 
you're not in danger of tripping and falling into the ravine.



The threshold of acceptability really depends on the problem you are
trying to solve.


I agree with this statement, although you generally don't want it to be too 
big. The definition of too big is somewhat fluid, though.



 I haven't solved linear equations for a long time,
but off hand, I would say that a condition number over 10 is
questionable.


That seems pretty low as a general bound IMO.


A high condition number suggests that the selection of independent
variables for the linear function you are trying to fit is not quite
right. For a poorly conditioned matrix, your modeling function will be
very sensitive to measurement noise and other sources of error, if
applicable. If the condition number is 100, then any input on one
particular axis gets magnified 100 times more than other inputs.
Unless your inputs are very precise, that is probably not what you
want.

Or something like that.


Russ, you and the OP (and others) may be interested in one of the books that 
Cleve Moler has written and made freely available on the MathWorks website:


http://www.mathworks.com/moler/

The chapter Linear Equations in Numerical Computing with MATLAB includes a 
section (section 2.9, starting on page 14 if I remember correctly) that 
discusses norm and condition number and gives a more formal statement of 
what you described. The code included in the book is written in MATLAB, but 
even if you don't use MATLAB (since I know this has been cross-posted to 
comp.lang.python) there's plenty of good, crunchy mathematics in that 
section.


--
Steve Lord
sl...@mathworks.com
To contact Technical Support use the Contact Us link on 
http://www.mathworks.com 


--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-02 Thread someone

On 05/02/2012 01:03 PM, Kiuhnm wrote:

On 5/2/2012 8:00, someone wrote:

Still, I don't think I completely understand SVD. SVD (at least in
Matlab) returns 3 matrices, one of which is a diagonal matrix, I think. I
would understand it better with geometric examples, if someone would be so
kind as to write something about that... I can plot 3D vectors in
Matlab very easily, so maybe I'd understand SVD better if I hear/read the
geometric explanation (references to a textbook or similar are also
appreciated).


Russ's post is a very good starting point. I hope you read it.


Of course I do.


--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-02 Thread someone

On 05/02/2012 08:36 AM, Russ P. wrote:

On May 1, 11:03 pm, someone newsbo...@gmail.com  wrote:

On 05/02/2012 01:38 AM, Russ P. wrote:

..

On May 1, 4:05 pm, Paul Rubinno.em...@nospam.invalidwrote:

It would really appreciate if anyone could maybe post a simple SVD
example and tell what the vectors from the SVD represents geometrically
/ visually, because I don't understand it good enough and I'm sure it's
very important, when it comes to solving matrix systems...


SVD is perhaps the ultimate matrix decomposition and the ultimate tool
for linear analysis. Google it and take a look at the excellent
Wikipedia page on it. I would be wasting my time if I tried to compete
with that.


Ok.


To really appreciate the SVD, you need some background in linear
algebra. In particular, you need to understand orthogonal
transformations. Think about a standard 3D Cartesian coordinate frame.
A rotation of the coordinate frame is an orthogonal transformation of
coordinates. The original frame and the new frame are both orthogonal.


Yep.


A vector in one frame is converted to the other frame by multiplying
by an orthogonal matrix. The main feature of an orthogonal matrix is
that its transpose is its inverse (hence the inverse is trivial to
compute).


As far as I know, you have to replace orthogonal with orthonormal. That 
much I at least think I know without even going to Wikipedia first...



The SVD can be thought of as factoring any linear transformation into
a rotation, then a scaling, followed by another rotation. The scaling
is represented by the middle matrix of the transformation, which is a
diagonal matrix of the same dimensions as the original matrix. The
singular values can be read off of the diagonal. If any of them are
zero, then the original matrix is singular. If the ratio of the
largest to smallest singular value is large, then the original matrix
is said to be poorly conditioned.


Aah, thank you very much. I can easily recognize some of this...


Standard Cartesian coordinate frames are orthogonal. Imagine an x-y
coordinate frame in which the axes are not orthogonal. Such a
coordinate frame is possible, but they are rarely used. If the axes
are parallel, the coordinate frame will be singular and will basically
reduce to one-dimensional. If the x and y axes are nearly parallel,
the coordinate frame could still be used in theory, but it will be
poorly conditioned. You will need large numbers to represent points
fairly close to the origin, and small deviations will translate into
large changes in coordinate values. That can lead to problems due to
numerical roundoff errors and other kinds of errors.


Thank you very much for your time. It always helps to get the same 
explanation from different people with slightly different ways of 
explaining it. Thanks!
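The rotation/scaling/rotation picture described above can also be checked directly in numpy (a sketch in modern Python 3 syntax, using the thread's singular matrix and numpy's standard np.linalg.svd routine):

```python
import numpy as np

# The singular 3x3 matrix from this thread.
A = np.array([[ 1.0,  2.0,  3.0],
              [11.0, 12.0, 13.0],
              [21.0, 22.0, 23.0]])

# SVD: rotation * scaling * rotation, i.e. A = U @ diag(s) @ Vt.
U, s, Vt = np.linalg.svd(A)
print(s)                 # the last singular value is ~0 -> A is singular

# U and Vt are orthonormal: the transpose is the inverse.
print(np.allclose(U.T @ U, np.eye(3)))          # True

# The three factors rebuild A up to rounding error.
print(np.allclose(U @ np.diag(s) @ Vt, A))      # True
```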

--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-02 Thread someone

On 05/02/2012 01:52 PM, Steven D'Aprano wrote:

On Wed, 02 May 2012 08:00:44 +0200, someone wrote:


On 05/02/2012 01:05 AM, Paul Rubin wrote:

someone newsbo...@gmail.com   writes:

Actually I know some... I just didn't think as much about this as I
should have before writing the question. I know there's also something
like singular value decomposition that I think can help solve
otherwise ill-posed problems,


You will probably get better advice if you are able to describe what
problem (ill-posed or otherwise) you are actually trying to solve.  SVD


I don't understand what else I should write. I gave the singular matrix
and that's it.


You can't judge what an acceptable condition number is unless you know
what your data is.

http://mathworld.wolfram.com/ConditionNumber.html
http://en.wikipedia.org/wiki/Condition_number

If your condition number is ten, then you should expect to lose one digit
of accuracy in your solution, over and above whatever loss of accuracy
comes from the numeric algorithm. A condition number of 64 will lose six
bits, or about 1.8 decimal digits, of accuracy.

If your data starts off with only 1 or 2 digits of accuracy, as in your
example, then the result is meaningless -- the accuracy will be 2 - 2
digits, i.e. 0 -- *no* digits in the answer can be trusted to be accurate.


I just solved a FEM eigenvalue problem where the condition number of the 
mass and stiffness matrices was something like 1e6... Result looked good 
to me... So I don't understand what you're saying about 10 = 1 or 2 
digits. I think my problem was accurate enough, though I don't know what 
error with 1e6 in condition number, I should expect. How did you arrive 
at 1 or 2 digits for cond(A)=10, if I may ask ?




--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-02 Thread someone

On 05/02/2012 04:47 PM, Steven_Lord wrote:


Russ, you and the OP (and others) may be interested in one of the books
that Cleve Moler has written and made freely available on the MathWorks
website:

http://www.mathworks.com/moler/

The chapter Linear Equations in Numerical Computing with MATLAB
includes a section (section 2.9, starting on page 14 if I remember
correctly) that discusses norm and condition number and gives a more
formal statement of what you described. The code included in the book is
written in MATLAB, but even if you don't use MATLAB (since I know this
has been cross-posted to comp.lang.python) there's plenty of good,
crunchy mathematics in that section.


I use Matlab more than Python. I just want to learn Python, which seems 
extremely powerful and easy, but I'm still a Python beginner.


Thank you very much for the reference, Mr. Lord. I'll look closely at 
that Moler book in about half an hour or so. Looking forward to learning 
more about this...

--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-02 Thread Russ P.
On May 2, 1:29 pm, someone newsbo...@gmail.com wrote:

  If your data starts off with only 1 or 2 digits of accuracy, as in your
  example, then the result is meaningless -- the accuracy will be 2-2
  digits, or 0 -- *no* digits in the answer can be trusted to be accurate.

 I just solved a FEM eigenvalue problem where the condition number of the
 mass and stiffness matrices was something like 1e6... Result looked good
 to me... So I don't understand what you're saying about 10 = 1 or 2
 digits. I think my problem was accurate enough, though I don't know what
 error with 1e6 in condition number, I should expect. How did you arrive
 at 1 or 2 digits for cond(A)=10, if I may ask ?

As Steven pointed out earlier, it all depends on the precision you are
dealing with. If you are just doing pure mathematical or numerical
work with no real-world measurement error, then a condition number of
1e6 may be fine. But you had better be using double precision (64-
bit) floating point numbers (which are the default in Python, of
course). Those have about 15 decimal digits of precision, so you are
in good shape. Single-precision floats only have 6 or 7 digits of
precision, so you'd be in trouble there.

For any practical engineering or scientific work, I'd say that a
condition number of 1e6 is very likely to be completely unacceptable.
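The single- vs double-precision point above is easy to demonstrate. Below is a sketch using a Hilbert matrix as a hypothetical stand-in for an ill-conditioned FEM matrix (its condition number happens to be ~1e7 for n = 6, comparable to the 1e6 discussed here):

```python
import numpy as np

# Hilbert matrix H[i, j] = 1/(i + j + 1): a classic ill-conditioned example.
n = 6
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true                    # right-hand side with known solution

print(np.linalg.cond(A))          # roughly 1e7

# Double precision keeps plenty of correct digits...
err64 = np.max(np.abs(np.linalg.solve(A, b) - x_true))

# ...while single precision spends most of its ~7 digits on the conditioning.
A32, b32 = A.astype(np.float32), b.astype(np.float32)
err32 = np.max(np.abs(np.linalg.solve(A32, b32) - x_true))

print(err64, err32)               # err32 is many orders of magnitude larger
```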
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-01 Thread Russ P.
On Apr 29, 5:17 pm, someone newsbo...@gmail.com wrote:
 On 04/30/2012 12:39 AM, Kiuhnm wrote:

  So Matlab at least warns about Matrix is close to singular or badly
  scaled, which python (and I guess most other languages) does not...

  A is not just close to singular: it's singular!

 Ok. When do you define it to be singular, btw?

  Which is the most accurate/best, even for such a bad matrix? Is it
  possible to say something about that? Looks like python has a lot more
  digits but maybe that's just a random result... I mean Element 1,1 =
  2.81e14 in Python, but something like 3e14 in Matlab and so forth -
  there's a small difference in the results...

  Both results are *wrong*: no inverse exists.

 What's the best solution of the two wrong ones? Best least-squares
 solution or whatever?

  With python, I would also kindly ask about how to avoid this problem in
  the future, I mean, this maybe means that I have to check the condition
  number at all times before doing anything at all ? How to do that?

  If cond(A) is high, you're trying to solve your problem the wrong way.

 So you're saying that in another language (python) I should check the
 condition number, before solving anything?

  You should try to avoid matrix inversion altogether if that's the case.
  For instance you shouldn't invert a matrix just to solve a linear system.

 What then?

 Cramer's rule?

If you really want to know just about everything there is to know
about a matrix, take a look at its Singular Value Decomposition (SVD).
I've never used numpy, but I assume it can compute an SVD.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-01 Thread Eelco
There is linalg.pinv, which computes a pseudoinverse based on SVD that
works on all matrices, regardless of the rank of the matrix. It merely
approximates A*A.I = I as well as A permits though, rather than being
a true inverse, which may not exist.

Anyway, there are no general answers for this kind of thing. In all
non-textbook problems I can think of, the properties of your matrix
are highly constrained by the problem you are working on; which
additional tests are required to check for corner cases thus depends
on the problem. Often, if you have found an elegant solution to your
problem, no such corner cases exist. In that case, MATLAB is just
wasting your time with its automated checks.
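A short sketch of the linalg.pinv behaviour described above, using the singular matrix from this thread (Python 3 / numpy syntax):

```python
import numpy as np

# The thread's singular matrix again.
A = np.array([[ 1.0,  2.0,  3.0],
              [11.0, 12.0, 13.0],
              [21.0, 22.0, 23.0]])

A_pinv = np.linalg.pinv(A)       # works even though inv(A) is meaningless

# A @ A_pinv is NOT the identity: no true inverse exists...
print(np.allclose(A @ A_pinv, np.eye(3)))    # False

# ...but the Moore-Penrose condition A @ A_pinv @ A == A still holds,
# i.e. pinv approximates an inverse "as well as A permits".
print(np.allclose(A @ A_pinv @ A, A))        # True
```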
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-01 Thread someone

On 04/30/2012 02:57 AM, Paul Rubin wrote:

someone newsbo...@gmail.com  writes:

A is not just close to singular: it's singular!

Ok. When do you define it to be singular, btw?


Singular means the determinant is zero, i.e. the rows or columns
are not linearly independent.  Let's give names to the three rows:

   a = [1 2 3]; b = [11 12 13]; c = [21 22 23].

Then notice that c = 2*b - a.  So c is linearly dependent on a and b.
Geometrically this means the three vectors are in the same plane,
so the matrix doesn't have an inverse.


Oh, thank you very much for a good explanation.


Which is the most accurate/best, even for such a bad matrix?


What are you trying to do?  If you are trying to calculate stuff
with matrices, you really should know some basic linear algebra.


Actually I know some... I just didn't think as much about this as I 
should have before writing the question. I know there's also something like 
singular value decomposition that I think can help solve otherwise 
ill-posed problems, although I'm not an expert like others in this forum, 
that I know for sure :-)
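The linear dependence pointed out above (c = 2*b - a) is easy to verify numerically; a quick numpy check in Python 3 syntax:

```python
import numpy as np

a = np.array([ 1,  2,  3])
b = np.array([11, 12, 13])
c = np.array([21, 22, 23])

# The dependence pointed out above: c = 2*b - a.
print(np.array_equal(c, 2*b - a))            # True

# Hence the stacked matrix has rank 2, not 3: it is singular.
A = np.vstack([a, b, c]).astype(float)
print(np.linalg.matrix_rank(A))              # 2
```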

--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-01 Thread someone

On 05/01/2012 08:56 AM, Russ P. wrote:

On Apr 29, 5:17 pm, someone newsbo...@gmail.com  wrote:

On 04/30/2012 12:39 AM, Kiuhnm wrote:

You should try to avoid matrix inversion altogether if that's the case.
For instance you shouldn't invert a matrix just to solve a linear system.


What then?

Cramer's rule?


If you really want to know just about everything there is to know
about a matrix, take a look at its Singular Value Decomposition (SVD).


I know a bit about SVD - I used it for a short period of time in Matlab, 
though I'm definitely not an expert in it, and I don't understand the 
whole theory of orthogonality behind what makes it work as elegantly as 
it does.



I've never used numpy, but I assume it can compute an SVD.


I'm making my first steps now with numpy, so there's a lot I don't know 
and haven't tried with numpy...



--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-01 Thread someone

On 04/30/2012 03:35 AM, Nasser M. Abbasi wrote:

On 04/29/2012 07:59 PM, someone wrote:
I do not use python much myself, but a quick google showed that python
scipy has API for linalg, so use, which is from the documentation, the
following code example

X = scipy.linalg.solve(A, B)

But you still need to check the cond(). If it is too large, not good.
How large and all that, depends on the problem itself. But the rule of
thumb, the lower the better. Less than 100 can be good in general, but I
really can't give you a fixed number to use, as I am not an expert in
this subject; others who know more about it might have better
recommendations.


Ok, that's a number...

Anyone want to participate - do I hear something better than less 
than 100 can be good in general?


If I don't hear anything better, the limit is now 100...

What's the limit in matlab (on the condition number of the matrices), by 
the way, before it comes up with a warning ???




--
http://mail.python.org/mailman/listinfo/python-list


RE: numpy (matrix solver) - python vs. matlab

2012-05-01 Thread Prasad, Ramit
I'm making my first steps now with numpy, so there's a lot I don't know 
and haven't tried with numpy...

An excellent reason to subscribe to the numpy mailing list and
talk on there :)

Ramit


Ramit Prasad | JPMorgan Chase Investment Bank | Currencies Technology
712 Main Street | Houston, TX 77002
work phone: 713 - 216 - 5423

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-01 Thread Colin J. Williams

On 01/05/2012 2:43 PM, someone wrote:
[snip]

a = [1 2 3]; b = [11 12 13]; c = [21 22 23].

Then notice that c = 2*b - a. So c is linearly dependent on a and b.
Geometrically this means the three vectors are in the same plane,
so the matrix doesn't have an inverse.




Does it not mean that there are three parallel planes?

Consider the example in two dimensional space.

Colin W.
[snip]
--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-01 Thread Russ P.
On May 1, 11:52 am, someone newsbo...@gmail.com wrote:
 On 04/30/2012 03:35 AM, Nasser M. Abbasi wrote:

  On 04/29/2012 07:59 PM, someone wrote:
  I do not use python much myself, but a quick google showed that python
  scipy has API for linalg, so use, which is from the documentation, the
  following code example

  X = scipy.linalg.solve(A, B)

  But you still need to check the cond(). If it is too large, not good.
  How large and all that, depends on the problem itself. But the rule of
  thumb, the lower the better. Less than 100 can be good in general, but I
  really can't give you a fixed number to use, as I am not an expert in
  this subject; others who know more about it might have better
  recommendations.

 Ok, that's a number...

 Anyone wants to participate and do I hear something better than less
 than 100 can be good in general ?

 If I don't hear anything better, the limit is now 100...

 What's the limit in matlab (on the condition number of the matrices), by
 the way, before it comes up with a warning ???

The threshold of acceptability really depends on the problem you are
trying to solve. I haven't solved linear equations for a long time,
but off hand, I would say that a condition number over 10 is
questionable.

A high condition number suggests that the selection of independent
variables for the linear function you are trying to fit is not quite
right. For a poorly conditioned matrix, your modeling function will be
very sensitive to measurement noise and other sources of error, if
applicable. If the condition number is 100, then any input on one
particular axis gets magnified 100 times more than other inputs.
Unless your inputs are very precise, that is probably not what you
want.

Or something like that.
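The magnification claim above can be made concrete with a toy diagonal matrix whose condition number is 100 by construction (a hedged sketch, not a proof; the full cond(A) amplification only occurs for particular directions of b and of the noise):

```python
import numpy as np

# A matrix with condition number exactly 100 by construction:
# it stretches one axis 100x more than the other.
A = np.diag([100.0, 1.0])
print(np.linalg.cond(A))         # 100.0

# Pick b along the strongly stretched axis, noise along the weak one.
b  = np.array([1.0, 0.0])
db = np.array([0.0, 0.01])       # 1% relative error in b

x  = np.linalg.solve(A, b)
dx = np.linalg.solve(A, b + db) - x

# In this worst-case direction, the relative error in x is cond(A)
# times the relative error in b.
amplification = (np.linalg.norm(dx) / np.linalg.norm(x)) / \
                (np.linalg.norm(db) / np.linalg.norm(b))
print(amplification)             # ~100, the condition number
```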
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-01 Thread someone

On 05/01/2012 09:59 PM, Colin J. Williams wrote:

On 01/05/2012 2:43 PM, someone wrote:
[snip]

a = [1 2 3]; b = [11 12 13]; c = [21 22 23].

Then notice that c = 2*b - a. So c is linearly dependent on a and b.
Geometrically this means the three vectors are in the same plane,
so the matrix doesn't have an inverse.




Does it not mean that there are three parallel planes?

Consider the example in two dimensional space.


I actually drew it and saw it... It means that you can construct a 2D 
plane such that all 3 vectors lie in this 2D plane...

--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-01 Thread someone

On 05/01/2012 10:54 PM, Russ P. wrote:

On May 1, 11:52 am, someone newsbo...@gmail.com  wrote:

On 04/30/2012 03:35 AM, Nasser M. Abbasi wrote:



What's the limit in matlab (on the condition number of the matrices), by
the way, before it comes up with a warning ???


The threshold of acceptability really depends on the problem you are
trying to solve. I haven't solved linear equations for a long time,
but off hand, I would say that a condition number over 10 is
questionable.


Does anyone know the threshold Matlab uses for warning when solving x=A\b? 
I tried edit slash but this seems to be built-in, so I cannot see what 
criterion the warning is based upon...



A high condition number suggests that the selection of independent
variables for the linear function you are trying to fit is not quite
right. For a poorly conditioned matrix, your modeling function will be
very sensitive to measurement noise and other sources of error, if
applicable. If the condition number is 100, then any input on one
particular axis gets magnified 100 times more than other inputs.
Unless your inputs are very precise, that is probably not what you
want.

Or something like that.


Ok. So it's like a frequency-response-function, output divided by input...
--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-01 Thread Robert Kern

On 5/1/12 10:21 PM, someone wrote:

On 05/01/2012 10:54 PM, Russ P. wrote:

On May 1, 11:52 am, someone newsbo...@gmail.com wrote:

On 04/30/2012 03:35 AM, Nasser M. Abbasi wrote:



What's the limit in matlab (on the condition number of the matrices), by
the way, before it comes up with a warning ???


The threshold of acceptability really depends on the problem you are
trying to solve. I haven't solved linear equations for a long time,
but off hand, I would say that a condition number over 10 is
questionable.


Anyone knows the threshold for Matlab for warning when solving x=A\b ? I tried
edit slash but this seems to be internal so I cannot see what criteria the
warning is based upon...


The documentation for that operator is here:

  http://www.mathworks.co.uk/help/techdoc/ref/mldivide.html

--
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco

--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-01 Thread Kiuhnm

On 5/1/2012 21:59, Colin J. Williams wrote:

On 01/05/2012 2:43 PM, someone wrote:
[snip]

a = [1 2 3]; b = [11 12 13]; c = [21 22 23].

Then notice that c = 2*b - a. So c is linearly dependent on a and b.
Geometrically this means the three vectors are in the same plane,
so the matrix doesn't have an inverse.




Does it not mean that there are three parallel planes?


They're not parallel because our matrix has rank 2, not 1.

Anyway, have a look at this:
http://en.wikipedia.org/wiki/Parallelepiped#Volume
It follows that our matrix A whose rows are a, b and c represents a 
parallelepiped. If our vectors are collinear or coplanar, the 
parallelepiped is degenerate, i.e. has volume 0. The converse is also true.
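The degenerate-parallelepiped claim above is easy to check numerically (a numpy sketch in Python 3 syntax; |det| of the matrix with rows a, b, c is the parallelepiped's volume):

```python
import numpy as np

# Rows a, b, c of the thread's matrix as edge vectors of a parallelepiped.
A = np.array([[ 1.0,  2.0,  3.0],
              [11.0, 12.0, 13.0],
              [21.0, 22.0, 23.0]])

# |det(A)| is the volume of that parallelepiped.
volume = abs(np.linalg.det(A))
print(volume)     # ~0: the edges are coplanar, so the solid is flat
```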


Kiuhnm
--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-01 Thread Paul Rubin
someone newsbo...@gmail.com writes:
 Actually I know some... I just didn't think as much about this as I
 should have before writing the question. I know there's also something
 like singular value decomposition that I think can help solve
 otherwise ill-posed problems,

You will probably get better advice if you are able to describe what
problem (ill-posed or otherwise) you are actually trying to solve.  SVD
just separates out the orthogonal and scaling parts of the
transformation induced by a matrix.  Whether that is of any use to you
is unclear since you don't say what you're trying to do.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-05-01 Thread Russ P.
On May 1, 4:05 pm, Paul Rubin no.em...@nospam.invalid wrote:
 someone newsbo...@gmail.com writes:
  Actually I know some... I just didn't think as much about this as I
  should have before writing the question. I know there's also something
  like singular value decomposition that I think can help solve
  otherwise ill-posed problems,

 You will probably get better advice if you are able to describe what
 problem (ill-posed or otherwise) you are actually trying to solve.  SVD
 just separates out the orthogonal and scaling parts of the
 transformation induced by a matrix.  Whether that is of any use to you
 is unclear since you don't say what you're trying to do.

I agree with the first sentence, but I take slight issue with the word
just in the second. The orthogonal part of the transformation is
non-distorting, but the scaling part essentially distorts the space.
At least that's how I think about it. The larger the ratio between the
largest and smallest singular value, the more distortion there is. SVD
may or may not be the best choice for the final algorithm, but it is
useful for visualizing the transformation you are applying. It can
provide clues about the quality of the selection of independent
variables, state variables, or inputs.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-04-30 Thread Kiuhnm

On 4/30/2012 2:17, someone wrote:

On 04/30/2012 12:39 AM, Kiuhnm wrote:


So Matlab at least warns about "Matrix is close to singular or badly
scaled", which python (and I guess most other languages) does not...


A is not just close to singular: it's singular!


Ok. When do you define it to be singular, btw?


Which is the most accurate/best, even for such a bad matrix? Is it
possible to say something about that? Looks like python has a lot more
digits but maybe that's just a random result... I mean Element 1,1 =
2.81e14 in Python, but something like 3e14 in Matlab and so forth -
there's a small difference in the results...


Both results are *wrong*: no inverse exists.


What's the best solution of the two wrong ones? Best least-squares
solution or whatever?


Trust me. They're both so wrong that it doesn't matter.
Have a look at A*inv(A) and inv(A)*A and you'll see by yourself.


With python, I would also kindly ask about how to avoid this problem in
the future, I mean, this maybe means that I have to check the condition
number at all times before doing anything at all ? How to do that?


If cond(A) is high, you're trying to solve your problem the wrong way.


So you're saying that in another language (python) I should check the
condition number, before solving anything?


Yes, unless you already know that it will always be low by design.


You should try to avoid matrix inversion altogether if that's the case.
For instance you shouldn't invert a matrix just to solve a linear system.


What then?


Look at the documentation of the library you're using.


Cramer's rule?


Surprisingly, yes. That's an option. See
"A condensation-based application of Cramer's rule for solving 
large-scale linear systems".
Popular linear solvers are based on Gaussian elimination or some iterative 
method, though.


Kiuhnm
--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-04-30 Thread Kiuhnm

On 4/30/2012 3:35, Nasser M. Abbasi wrote:

But you still need to check the cond(). If it is too large, not good.
How large and all that, depends on the problem itself. But the rule of
thumb, the lower the better. Less than 100 can be good in general, but I
really can't give you a fixed number to use, as I am not an expert in
this subjects, others who know more about it might have better
recommendations.


Alas, there's no fixed number and as if that wasn't enough, there are 
many condition numbers, each one with different properties. For 
instance, the Skeel condition number is scale-invariant and it's useful 
when a matrix is ill-conditioned just because its rows are out of scale.


Kiuhnm
--
http://mail.python.org/mailman/listinfo/python-list


numpy (matrix solver) - python vs. matlab

2012-04-29 Thread someone

Hi,

Notice the cross-post; I hope you'll bear with me for doing that (and I 
imagine that some of you in the matlab group also like python, like 
myself)...


--
Python vs. Matlab:
--

Python:

from numpy import matrix
from numpy import linalg
A = matrix( [[1,2,3],[11,12,13],[21,22,23]] )
print "A="
print A
print "A.I (inverse of A)="
print A.I

A.I (inverse of A)=
[[  2.81466387e+14  -5.62932774e+14   2.81466387e+14]
 [ -5.62932774e+14   1.12586555e+15  -5.62932774e+14]
 [  2.81466387e+14  -5.62932774e+14   2.81466387e+14]]


Matlab:

>> A=[1 2 3; 11 12 13; 21 22 23]

A =

     1     2     3
    11    12    13
    21    22    23

>> inv(A)
Warning: Matrix is close to singular or badly scaled.
         Results may be inaccurate. RCOND = 1.067522e-17.

ans =

   1.0e+15 *

    0.3002   -0.6005    0.3002
   -0.6005    1.2010   -0.6005
    0.3002   -0.6005    0.3002

--
Python vs. Matlab:
--

So Matlab at least warns about "Matrix is close to singular or badly 
scaled", which python (and I guess most other languages) does not...


Which is the most accurate/best, even for such a bad matrix? Is it 
possible to say something about that? Looks like python has a lot more 
digits but maybe that's just a random result... I mean Element 1,1 = 
2.81e14 in Python, but something like 3e14 in Matlab and so forth - 
there's a small difference in the results...


With python, I would also kindly ask about how to avoid this problem in 
the future, I mean, this maybe means that I have to check the condition 
number at all times before doing anything at all ? How to do that?


I hope you matlabticians like this topic; at least I myself find it 
interesting, and many of you probably also program in some other language, 
so maybe you'll find this worthwhile to read about.
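On the "how to check the condition number" question: numpy won't warn on its own, but a check along the lines below is one possibility (a hedged sketch in modern Python 3 / numpy; the 1/eps threshold mirrors the idea behind MATLAB's RCOND warning, it is not an official numpy rule):

```python
import numpy as np

A = np.array([[ 1.0,  2.0,  3.0],
              [11.0, 12.0, 13.0],
              [21.0, 22.0, 23.0]])

# numpy itself won't warn, but you can check the conditioning yourself.
cond = np.linalg.cond(A)
print(cond)       # astronomically large for this matrix

# Rough analogue of MATLAB's RCOND test: a condition number comparable to
# 1/eps means the matrix is singular to working precision.
if cond > 1.0 / np.finfo(A.dtype).eps:
    print("Matrix is singular to working precision - don't trust inv/solve")
```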

--
http://mail.python.org/mailman/listinfo/python-list


Re: numpy (matrix solver) - python vs. matlab

2012-04-29 Thread Kiuhnm

On 4/30/2012 0:17, someone wrote:

Hi,

Notice the cross-post; I hope you'll bear with me for doing that (and I
imagine that some of you in the matlab group also like python, like
myself)...

--
Python vs. Matlab:
--

Python:

from numpy import matrix
from numpy import linalg
A = matrix( [[1,2,3],[11,12,13],[21,22,23]] )
print "A="
print A
print "A.I (inverse of A)="
print A.I

A.I (inverse of A)=
[[ 2.81466387e+14 -5.62932774e+14 2.81466387e+14]
[ -5.62932774e+14 1.12586555e+15 -5.62932774e+14]
[ 2.81466387e+14 -5.62932774e+14 2.81466387e+14]]


Matlab:

>> A=[1 2 3; 11 12 13; 21 22 23]

A =

     1     2     3
    11    12    13
    21    22    23

>> inv(A)
Warning: Matrix is close to singular or badly scaled.
Results may be inaccurate. RCOND = 1.067522e-17.

ans =

   1.0e+15 *

    0.3002   -0.6005    0.3002
   -0.6005    1.2010   -0.6005
    0.3002   -0.6005    0.3002

--
Python vs. Matlab:
--

So Matlab at least warns that the "Matrix is close to singular or badly
scaled", which Python (and I guess most other languages) does not...


A is not just close to singular: it's singular!


Which is the most accurate/best, even for such a bad matrix? Is it
possible to say something about that? Looks like python has a lot more
digits but maybe that's just a random result... I mean Element 1,1 =
2.81e14 in Python, but something like 3e14 in Matlab and so forth -
there's a small difference in the results...


Both results are *wrong*: no inverse exists.


With python, I would also kindly ask about how to avoid this problem in
the future, I mean, this maybe means that I have to check the condition
number at all times before doing anything at all ? How to do that?


If cond(A) is high, you're trying to solve your problem the wrong way. 
You should try to avoid matrix inversion altogether if that's the case. 
For instance you shouldn't invert a matrix just to solve a linear system.
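
To illustrate that last point with a well-conditioned system (the 
singular matrix from this thread is deliberately avoided here), 
numpy.linalg.solve factorizes A instead of forming the inverse:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.linalg.solve(A, b)                # LU factorization + substitution
x_via_inv = np.dot(np.linalg.inv(A), b)  # explicit inverse: slower, less accurate

print(np.allclose(np.dot(A, x), b))      # the direct solve satisfies A x = b
```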


Kiuhnm


Re: numpy (matrix solver) - python vs. matlab

2012-04-29 Thread someone

On 04/30/2012 12:39 AM, Kiuhnm wrote:


So Matlab at least warns that the "Matrix is close to singular or badly
scaled", which Python (and I guess most other languages) does not...


A is not just close to singular: it's singular!


Ok. When do you define it to be singular, btw?


Which is the most accurate/best, even for such a bad matrix? Is it
possible to say something about that? Looks like python has a lot more
digits but maybe that's just a random result... I mean Element 1,1 =
2.81e14 in Python, but something like 3e14 in Matlab and so forth -
there's a small difference in the results...


Both results are *wrong*: no inverse exists.


What's the best solution of the two wrong ones? Best least-squares 
solution or whatever?
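
For what it is worth, one concrete notion of "best" for a singular 
system is the minimum-norm least-squares answer, which numpy exposes as 
numpy.linalg.lstsq (the right-hand side below is just an illustrative 
choice, not from the thread):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [11, 12, 13],
              [21, 22, 23]], dtype=float)
b = np.array([1.0, 2.0, 3.0])  # illustrative right-hand side

# lstsq returns the minimum-norm x minimizing ||A x - b||, even when
# A is singular; it also reports the effective rank (2 here, not 3).
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(rank)
```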



With python, I would also kindly ask about how to avoid this problem in
the future, I mean, this maybe means that I have to check the condition
number at all times before doing anything at all ? How to do that?


If cond(A) is high, you're trying to solve your problem the wrong way.


So you're saying that in another language (Python) I should check the 
condition number before solving anything?



You should try to avoid matrix inversion altogether if that's the case.
For instance you shouldn't invert a matrix just to solve a linear system.


What then?

Cramer's rule?


Re: numpy (matrix solver) - python vs. matlab

2012-04-29 Thread Nasser M. Abbasi

On 04/29/2012 05:17 PM, someone wrote:


I would also kindly ask about how to avoid this problem in
the future, I mean, this maybe means that I have to check the condition
number at all times before doing anything at all ? How to do that?



I hope you'll check the condition number all the time.

You could be designing a building where people will live.

If you do not check the condition number, you'll end up with a building 
that will fall down when a small wind hits it, and many people will die, 
all because you did not bother to check the condition number when you 
solved the equations you used in your design.


Also, as was said, do not use INV(A) directly to solve equations.

--Nasser


Re: numpy (matrix solver) - python vs. matlab

2012-04-29 Thread Paul Rubin
someone newsbo...@gmail.com writes:
 A is not just close to singular: it's singular!
 Ok. When do you define it to be singular, btw?

Singular means the determinant is zero, i.e. the rows or columns
are not linearly independent.  Let's give names to the three rows:

  a = [1 2 3]; b = [11 12 13]; c = [21 22 23].

Then notice that c = 2*b - a.  So c is linearly dependent on a and b.
Geometrically this means the three vectors are in the same plane,
so the matrix doesn't have an inverse.
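
That dependency is easy to confirm numerically; a small check, using 
numpy.linalg.matrix_rank rather than the determinant (which only comes 
out as a meaningless tiny float for a numerically singular matrix):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([11.0, 12.0, 13.0])
c = np.array([21.0, 22.0, 23.0])

print(np.array_equal(c, 2 * b - a))   # True: the rows are linearly dependent
A = np.vstack([a, b, c])
print(np.linalg.matrix_rank(A))       # rank 2 < 3, so A is singular
```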

 Which is the most accurate/best, even for such a bad matrix? 

What are you trying to do?  If you are trying to calculate stuff
with matrices, you really should know some basic linear algebra.


Re: numpy (matrix solver) - python vs. matlab

2012-04-29 Thread someone

On 04/30/2012 02:38 AM, Nasser M. Abbasi wrote:

On 04/29/2012 05:17 PM, someone wrote:


I would also kindly ask about how to avoid this problem in
the future, I mean, this maybe means that I have to check the condition
number at all times before doing anything at all ? How to do that?



I hope you'll check the condition number all the time.


So how big can it (the condition number) be before I should do something 
else? And what should I do then? Cramer's rule, the pseudoinverse, or 
something else?



You could be designing a building where people will live in it.

If you do not check the condition number, you'll end up with a building that
will fall down when a small wind hits it and many people will die all
because you did not bother to check the condition number when you solved
the equations you used in your design.

Also, as was said, do not use INV(A) directly to solve equations.


In Matlab I used x=A\b.

I used inv(A) in Python. Should I use some kind of pseudo-inverse, or 
what do you suggest?






Re: numpy (matrix solver) - python vs. matlab

2012-04-29 Thread Nasser M. Abbasi

On 04/29/2012 07:59 PM, someone wrote:



 Also, as was said, do not use INV(A) directly to solve equations.


In Matlab I used x=A\b.



good.


I used inv(A) in python. Should I use some kind of pseudo-inverse or
what do you suggest?



I do not use Python much myself, but a quick google showed that Python's 
scipy has an API for linalg, so use the following code example, which is 
from the documentation:


  X = scipy.linalg.solve(A, B)

But you still need to check cond(). If it is too large, that is not 
good. How large is too large depends on the problem itself, but as a 
rule of thumb, the lower the better. Less than 100 can be good in 
general, but I really can't give you a fixed number to use, as I am not 
an expert in this subject; others who know more about it might have 
better recommendations.
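
A sketch of combining the two pieces of advice, using numpy's solver 
since scipy.linalg.solve has the same calling convention (the helper 
name safe_solve and the 1e8 limit are made up here, per the caveat 
above):

```python
import numpy as np

def safe_solve(A, b, cond_limit=1e8):
    """Solve A x = b, refusing when the condition number says the
    answer would be garbage (cond_limit is an arbitrary choice)."""
    c = np.linalg.cond(A)
    if c > cond_limit:
        raise ValueError("ill-conditioned system, cond = %g" % c)
    return np.linalg.solve(A, b)

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(safe_solve(A, b))   # fine: cond(A) is small here
```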


--Nasser





Re: numpy (matrix solver) - python vs. matlab

2012-04-29 Thread Nasser M. Abbasi

On 04/29/2012 07:17 PM, someone wrote:


Ok. When do you define it to be singular, btw?



There are things you can see right away about a matrix A being singular 
without doing any computation, just by looking at it.


For example, If you see a column (or row) being a linear combination of 
other column(s) (or row(s)) then this is a no no.


In your case you have

     1  2  3
    11 12 13
    21 22 23

You can see right away that if you multiply the second row by 2 and 
subtract the first row, you obtain the third row.


Hence the third row is a linear combination of the first and second 
rows. No good.


When a row (or column) is a linear combination of other rows (or 
columns), the matrix is singular.


--Nasser