Re: [Numpy-discussion] How can I constrain linear_least_squares to integer solutions?

2007-11-28 Thread Stefan van der Walt
On Tue, Nov 27, 2007 at 11:07:30PM -0700, Charles R Harris wrote:
 This is not a trivial problem, as you can see by googling mixed integer least
 squares (MILS). Much will depend on the nature of the parameters, the number
 of variables you are using in the fit, and how exact the solution needs to
 be. One approach would be to start by rounding the coefficients that must be
 integer and improve the solution using annealing or genetic algorithms to jig
 the integer coefficients while fitting the remainder in the usual least
 squares way, but that wouldn't have the elegance of some of the specific
 methods used for this sort of problem. However, I don't know of a package in
 scipy that implements those more sophisticated algorithms; perhaps someone
 else on this list who knows more about these things than I do can point you
 in the right direction.

Would this be a good candidate for a genetic algorithm?  I haven't
used GA before, so I don't know the typical rate of convergence or its
applicability to optimization problems.

Regards
Stéfan


Re: [Numpy-discussion] Appending a numpy array to existing text file

2007-11-28 Thread Andy Cheesman
It does indeed work.
Thanks for the help.

Andy

LB wrote:
 If you just want to add your matrix to an existing ascii file, you can
 open this file in append mode and give the file handle to
 numpy.savetxt:
 
 f_handle = file('my_file.dat', 'a')
 savetxt(f_handle, my_matrix)
 f_handle.close()
 
 HTH
 
 --
 LB
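
For reference, the same append in a more current idiom (my_matrix here is
just placeholder data; numpy.savetxt accepts an already-open file handle):

import numpy as np

my_matrix = np.arange(6.0).reshape(2, 3)    # placeholder data for the example
with open('my_file.dat', 'a') as f_handle:  # append mode, as above
    np.savetxt(f_handle, my_matrix)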


[Numpy-discussion] documentation generator based on pyparsing

2007-11-28 Thread Robert Cimrman
Hi,

At http://scipy.org/Generate_Documentation you can find a very small
documentation generator for NumPy/SciPy modules based on the pyparsing
package (by Paul McGuire). I am not sure if this belongs where I put it, so
feel free to (re)move the page as needed. I hope it might be interesting for
you.

r.



Re: [Numpy-discussion] How can I constrain linear_least_squares to integer solutions?

2007-11-28 Thread Timothy Hochberg
On Nov 28, 2007 12:59 AM, Stefan van der Walt [EMAIL PROTECTED] wrote:

 On Tue, Nov 27, 2007 at 11:07:30PM -0700, Charles R Harris wrote:
  This is not a trivial problem, as you can see by googling mixed integer
  least squares (MILS). Much will depend on the nature of the parameters, the
  number of variables you are using in the fit, and how exact the solution
  needs to be. One approach would be to start by rounding the coefficients
  that must be integer and improve the solution using annealing or genetic
  algorithms to jig the integer coefficients while fitting the remainder in
  the usual least squares way, but that wouldn't have the elegance of some of
  the specific methods used for this sort of problem. However, I don't know
  of a package in scipy that implements those more sophisticated algorithms;
  perhaps someone else on this list who knows more about these things than I
  do can point you in the right direction.

 Would this be a good candidate for a genetic algorithm?  I haven't
 used GA before, so I don't know the typical rate of convergence or its
 applicability to optimization problems.

 Regards
 Stéfan




If the number of terms is not huge and the function is well behaved, it might
be worth trying the following simple and stupid approach (a rough sketch
follows below):

   1. Find the floating point minimum.
   2. For each possible set of integer coefficients near the FP minimum:
      1. Solve for the floating point coefficients with the integer
         coefficients fixed.
      2. If the minimum is the best so far, stash it somewhere for later.
   3. Return the best set of coefficients.
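
A minimal sketch of that recipe, assuming a design matrix A, data b, and a
list int_idx of the columns constrained to integers (all names are invented
for the example, and at least one coefficient is assumed to stay free):

import itertools
import numpy as np

def brute_force_mils(A, b, int_idx, radius=1):
    n = A.shape[1]
    free_idx = [j for j in range(n) if j not in int_idx]
    # 1. Find the floating point minimum and round its integer part.
    x_fp, *_ = np.linalg.lstsq(A, b, rcond=None)
    centers = np.rint(x_fp[int_idx]).astype(int)
    best_err, best_x = np.inf, None
    # 2. Visit every integer point within `radius` of the rounded FP minimum.
    for offsets in itertools.product(range(-radius, radius + 1),
                                     repeat=len(int_idx)):
        z = centers + np.array(offsets)
        # 2.1 Solve for the floating point coefficients with the integers fixed.
        rhs = b - A[:, int_idx] @ z
        x_free, *_ = np.linalg.lstsq(A[:, free_idx], rhs, rcond=None)
        x = np.empty(n)
        x[int_idx], x[free_idx] = z, x_free
        # 2.2 If this candidate is the best so far, stash it.
        err = np.linalg.norm(A @ x - b)
        if err < best_err:
            best_err, best_x = err, x
    # 3. Return the best set of coefficients and its residual.
    return best_x, best_err

With k integer coefficients this visits (2*radius + 1)**k candidates, which
is exactly the combinatorial blow-up discussed later in the thread.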





[Numpy-discussion] Converting char array to float

2007-11-28 Thread Sameer DCosta
I'm trying to convert a character array to a floating point array. I'm
using one of the recent svn builds. It is surprising that astype does
not do the job. However if I first convert the char array to an array
and then use astype everything works fine. Is this a bug?

import numpy as N
print N.__version__  # Output = '1.0.5.dev4426'
a = N.char.array(['123.45', '234.56'])
b = N.array(a).astype(N.float)  # This works.
print b
b = a.astype(N.float)   # This does not work and raises an exception

ValueError: Can only create a chararray from string data.


Thanks.
Sameer


Re: [Numpy-discussion] Converting char array to float

2007-11-28 Thread Pierre GM
Sameer,

I can't tell whether it's a bug or a feature, but I can give you some
explanation: when you call .astype on your chararray, you call the
__array_finalize__ of the chararray, which requires the dtype to be
string-like. Obviously, that won't work in your case. Transforming the
chararray to a regular array of strings bypasses this problem. That's what
you're doing with the N.array(a) statement in your example.

Two comments, however:
* Try to use N.asarray() instead, as you won't copy the data (or use
N.array(a, copy=False)).
* You can also view your chararray as a regular ndarray, and then use the
astype method:
a.view(N.ndarray).astype(N.float_)
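
A small self-contained illustration of both workarounds (exact print
formatting will vary with the numpy version):

import numpy as N

a = N.char.array(['123.45', '234.56'])

# Workaround 1: get a plain ndarray of strings without copying, then convert.
b1 = N.asarray(a).astype(float)

# Workaround 2: view the chararray as a plain ndarray and convert that.
b2 = a.view(N.ndarray).astype(float)

print(b1, b2)   # both hold the floats 123.45 and 234.56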


Re: [Numpy-discussion] Converting char array to float

2007-11-28 Thread Matthieu Brucher
a does not seem to be an array, so it is not surprising that you need to
convert it to an array first.

Matthieu

2007/11/28, Sameer DCosta [EMAIL PROTECTED]:

 I'm trying to convert a character array to a floating point array. I'm
 using one of the recent svn builds. It is surprising that astype does
 not do the job. However if I first convert the char array to an array
 and then use astype everything works fine. Is this a bug?

 import numpy as N
 print N.__version__  # Output = '1.0.5.dev4426'
 a = N.char.array(['123.45', '234.56'])
 b = N.array(a).astype(N.float)  # This works.
 print b
 b = a.astype(N.float)   # This does not work and raises an exception

 ValueError: Can only create a chararray from string data.


 Thanks.
 Sameer




-- 
French PhD student
Website : http://miles.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] Converting char array to float

2007-11-28 Thread Pierre GM
On Wednesday 28 November 2007 13:39:45 Matthieu Brucher wrote:
 a does not seem to be an array, so it is not surprising that you need to
 convert it to an array first.

Well, a *IS* a regular chararray, and therefore an instance of a subclass of
ndarray (try isinstance). The problem isn't there; it's that the subclass
doesn't have its own .astype() method. Instead, we use the standard
ndarray.astype, which calls the __array_finalize__ method of the subclass,
and that one requires a chararray. Maybe we could implement a simple method:

def astype(self, newdtype):
    return self.view(N.ndarray).astype(newdtype)

But wouldn't that break something down the line?
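
A tiny sketch of what that could look like, purely for illustration
(FloatableCharArray is invented for this example and is not part of numpy):

import numpy as np

class FloatableCharArray(np.char.chararray):
    # Route astype through a plain-ndarray view so that chararray's
    # __array_finalize__ is never asked to build a non-string chararray.
    def astype(self, newdtype):
        return self.view(np.ndarray).astype(newdtype)

a = np.char.array(['123.45', '234.56']).view(FloatableCharArray)
print(a.astype(float))   # a plain float ndarray, roughly [123.45 234.56]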


Re: [Numpy-discussion] documentation generator based on pyparsing

2007-11-28 Thread Nils Wagner
On Wed, 28 Nov 2007 11:29:20 +0100
  Robert Cimrman [EMAIL PROTECTED] wrote:
 Hi,
 
 At http://scipy.org/Generate_Documentation you can find a very small
 documentation generator for NumPy/SciPy modules based on the pyparsing
 package (by Paul McGuire). I am not sure if this belongs where I put it, so
 feel free to (re)move the page as needed. I hope it might be interesting for
 you.
 
 r.
 

Hi Robert,

  
The output of

./gendocs.py -m 'scipy.linsolve.umfpack'

differs from your example output (available at
http://scipy.org/Generate_Documentation)

./gendocs.py -m 'scipy.linsolve.umfpack'
generating docs for scipy.linsolve.umfpack...
output LaTeX source file: ./scipy.linsolve.umfpack.tex
['Contains']
['Description', '-']
['Installation', '--']
['Examples', '--']
['Arguments of UmfpackContext solution methods', '--']
['Setting control parameters', '']
['Author']
['Other contributors']
['UmfpackContext class']
This is pdfeTeX, Version 3.141592-1.21a-2.2 (Web2C 7.5.4)
entering extended mode
(./scipy.linsolve.umfpack.tex
LaTeX2e 2003/12/01
Babel v3.8d and hyphenation patterns for american, french, german, ngerman,
bahasa, basque, bulgarian, catalan, croatian, czech, danish, dutch, esperanto,
estonian, finnish, greek, icelandic, irish, italian, latin, magyar, norsk,
polish, portuges, romanian, russian, serbian, slovak, slovene, spanish,
swedish, turkish, ukrainian, nohyphenation, loaded.
(/usr/share/texmf/tex/latex/base/article.cls
Document Class: article 2004/02/16 v1.4f Standard LaTeX document class
(/usr/share/texmf/tex/latex/base/size10.clo))
(/usr/share/texmf/tex/latex/tools/bm.sty)
(/usr/share/texmf/tex/latex/a4wide/a4wide.sty
(/usr/share/texmf/tex/latex/ntgclass/a4.sty))
! Undefined control sequence.
l.12 \set

(/usr/share/texmf/tex/latex/graphics/graphicx.sty
(/usr/share/texmf/tex/latex/graphics/keyval.sty)
(/usr/share/texmf/tex/latex/graphics/graphics.sty
(/usr/share/texmf/tex/latex/graphics/trig.sty)
(/usr/share/texmf/tex/latex/graphics/graphics.cfg)
(/usr/share/texmf/tex/latex/graphics/pdftex.def)))
(./scipy.linsolve.umfpack.aux) [1{/var/lib/texmf/fonts/map/pdftex/updmap/pdftex.map}]
(./scipy.linsolve.umfpack.toc) [1]
Overfull \hbox (32.25606pt too wide) in paragraph at lines 66--69
\OT1/cmr/m/n/10 A. Davis. All Rights Re-served. UMF-PACK home-page:
http://www.cise.ufl.edu/research/sparse/umfpack

Overfull \hbox (68.30923pt too wide) in paragraph at lines 84--88
[]\OT1/cmr/m/n/10 [umfpack] li-brary[]dirs = dir/UMFPACK/UMFPACK/Lib
in-clude[]dirs = dir/UMFPACK/UMFPACK/Include

Overfull \hbox (90.36482pt too wide) in paragraph at lines 91--95
[]\OT1/cmr/m/n/10 [amd] li-brary[]dirs = dir/UFsparse/AMD/Lib
in-clude[]dirs = dir/UFsparse/AMD/Include, dir/UFsparse/UFconfig

Overfull \hbox (49.97585pt too wide) in paragraph at lines 96--100
[]\OT1/cmr/m/n/10 [umfpack] li-brary[]dirs = dir/UFsparse/UMFPACK/Lib
in-clude[]dirs = dir/UFsparse/UMFPACK/Include,
! You can't use `macro parameter character #' in vertical mode.
l.109 #
         Contruct the solver.
! You can't use `macro parameter character #' in horizontal mode.
l.110 umfpack = um.UmfpackContext() #
                                      Use default 'di' family of UMFPACK rou...

! You can't use `macro parameter character #' in vertical mode.
l.112 #
         One-shot solution.
! You can't use `macro parameter character #' in horizontal mode.
l.114 #
         same as:
! You can't use `macro parameter character #' in vertical mode.
l.119 #
         Make LU decomposition.
! You can't use `macro parameter character #' in horizontal mode.
l.122 #
         Use already LU-decomposed matrix.
! You can't use `macro parameter character #' in horizontal mode.
l.125 #
         same as:

Overfull \hbox (6.97574pt too wide) in paragraph at lines 119--128
\OT1/cmr/m/n/10 umf-pack( um.UMFPACK[]A, mtx, rhs1, au-to-Trans-pose = True )
sol2 = umf-pack( um.UMFPACK[]A,
! You can't use `macro parameter character #' in vertical mode.
l.131 #
         Make symbolic decomposition.
! You can't use `macro parameter character #' in horizontal mode.
l.133 #
         Print statistics.

Overfull \hbox (14.16289pt too wide) in paragraph at lines 131--135
[]\OT1/cmr/m/n/10 Make sym-bolic de-com-po-si-tion. umf-pack.symbolic( mtx0 )
Print statis-tics. umf-pack.report[]symbolic()
! You can't use `macro parameter character #' in vertical mode.
l.138 #
         Make LU decomposition of mtx1 which has same structure as mtx0.
! You can't use `macro parameter character #' in horizontal mode.
l.140 #
         Print statistics.
! You can't use `macro parameter character #' in vertical mode.
l.143 #
         Use already LU-decomposed matrix.
! You can't use `macro parameter 

Re: [Numpy-discussion] Converting char array to float

2007-11-28 Thread Travis E. Oliphant
Sameer DCosta wrote:
 I'm trying to convert a character array to a floating point array. I'm
 using one of the recent svn builds. It is surprising that astype does
 not do the job. However if I first convert the char array to an array
 and then use astype everything works fine. Is this a bug?

 import numpy as N
 print N.__version__  # Output = '1.0.5.dev4426'
 a = N.char.array(['123.45', '234.56'])
 b = N.array(a).astype(N.float)  # This works.
 print b
 b = a.astype(N.float)   # This does not work and raises an exception

 ValueError: Can only create a chararray from string data.

   

The problem is that astype for a chararray will by default try to create an
array of class chararray.  This sub-class only allows string data, so the
base-class astype(float) will fail.

The astype method could be over-ridden in order to support automatic
conversion to other kinds of arrays, but I lean towards asking "why?",
because explicit is better than implicit (although it is admittedly arguable
which is which in this case...)

-Travis


 Thanks.
 Sameer

   



Re: [Numpy-discussion] How can I constrain linear_least_squares to integer solutions?

2007-11-28 Thread Charles R Harris
On Nov 28, 2007 12:59 AM, Stefan van der Walt [EMAIL PROTECTED] wrote:

 On Tue, Nov 27, 2007 at 11:07:30PM -0700, Charles R Harris wrote:
  This is not a trivial problem, as you can see by googling mixed integer
  least squares (MILS). Much will depend on the nature of the parameters, the
  number of variables you are using in the fit, and how exact the solution
  needs to be. One approach would be to start by rounding the coefficients
  that must be integer and improve the solution using annealing or genetic
  algorithms to jig the integer coefficients while fitting the remainder in
  the usual least squares way, but that wouldn't have the elegance of some of
  the specific methods used for this sort of problem. However, I don't know
  of a package in scipy that implements those more sophisticated algorithms;
  perhaps someone else on this list who knows more about these things than I
  do can point you in the right direction.

 Would this be a good candidate for a genetic algorithm?  I haven't
 used GA before, so I don't know the typical rate of convergence or its
 applicability to optimization problems.


It depends. Just to show the sort of problems involved, suppose you have 32
integer variables and are looking for the last bit of optimization. If the
floating point optimum is at (.5, .5, ..., .5) and the error is symmetrical,
then each vertex of the surrounding integer cube is a solution and there are
2**32 of them. If the error isn't symmetrical, and with that many variables
it is probably very far from symmetrical, then you have to search an even
larger region. That's a lot of points. The more sophisticated algorithms try
to eliminate whole regions of points and keep narrowing things down, but even
so the problem can easily get out of hand. If you just need a good solution,
a genetic algorithm is a good bet to find one without too much hassle. I had
a similar problem in designing a digital min/max FIR filter where I needed
15-bit integer coefficients for hardware implementation. There was a narrow
high rejection band in the filter, and simply rounding the coefficients left
spikes in the response through that band. With a GA I was able to eliminate
the spikes in about 30 minutes of evolution using a python/Numeric program.
In that case the performance of annealing was quite dependent on choosing the
right parameters for cooling rate, etc., while the GA was quite robust and
straightforward. There was no guarantee that I ended up with the best
solution, but what I got was good enough.

Chuck
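
A bare-bones sketch of that kind of GA (all names invented for the example;
refit holds the integer coefficients fixed and refits the free ones by least
squares, the integers themselves are what the GA evolves, and at least one
coefficient is assumed to stay free):

import numpy as np

rng = np.random.default_rng(0)

def refit(A, b, z, int_idx, free_idx):
    # Fix the integer coefficients z, solve for the free coefficients,
    # and return (residual norm, full coefficient vector).
    rhs = b - A[:, int_idx] @ z
    x_free, *_ = np.linalg.lstsq(A[:, free_idx], rhs, rcond=None)
    x = np.empty(A.shape[1])
    x[int_idx], x[free_idx] = z, x_free
    return np.linalg.norm(A @ x - b), x

def mils_ga(A, b, int_idx, pop=40, gens=200, mut=0.2):
    free_idx = np.setdiff1d(np.arange(A.shape[1]), int_idx)
    x0, *_ = np.linalg.lstsq(A, b, rcond=None)            # unconstrained fit
    z0 = np.rint(x0[int_idx]).astype(int)
    # Initial population: the rounded solution plus random +/-1 jitter.
    P = z0 + rng.integers(-1, 2, size=(pop, len(int_idx)))
    best_err, best_z = np.inf, None
    for _ in range(gens):
        errs = np.array([refit(A, b, z, int_idx, free_idx)[0] for z in P])
        order = np.argsort(errs)
        if errs[order[0]] < best_err:
            best_err, best_z = errs[order[0]], P[order[0]].copy()
        parents = P[order[: pop // 2]]                     # truncation selection
        # Uniform crossover between randomly chosen pairs of parents.
        pair = rng.integers(0, len(parents), size=(pop, 2))
        mask = rng.random((pop, len(int_idx))) < 0.5
        P = np.where(mask, parents[pair[:, 0]], parents[pair[:, 1]])
        # Mutation: bump a coefficient by +/-1 with small probability.
        bump = rng.integers(-1, 2, size=P.shape)
        P = P + np.where(rng.random(P.shape) < mut, bump, 0)
    return refit(A, b, best_z, int_idx, free_idx)

Truncation selection plus uniform crossover is about the simplest GA one can
write; the point, as above, is that it needs little tuning compared to
picking an annealing schedule.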


[Numpy-discussion] Building NumPy on Mac OS X without Apple GCC

2007-11-28 Thread Joshua Lippai
I updated my GCC to a more recent version a day ago, since Apple's
Xcode Tools only provide GCC 4.0 and the current release of GNU's GCC
is 4.2. I successfully achieved this, but now I run into a problem
when trying to build NumPy:

gcc: unrecognized option '-no-cpp-precomp'
cc1: error: unrecognized command line option -arch
cc1: error: unrecognized command line option -arch
cc1: error: unrecognized command line option -Wno-long-double
gcc: unrecognized option '-no-cpp-precomp'
cc1: error: unrecognized command line option -arch
cc1: error: unrecognized command line option -arch
cc1: error: unrecognized command line option -Wno-long-double


Upon investigation into the matter, I found out that these options
(no-cpp-precomp and Wno-long-double) are only valid in Apple's GCC and
not the regular GNU release. Yet it seems NumPy automatically assumes
Apple's GCC is being used when it realizes the target is OS X. Is
there a way around this, or at least some way to specify Apple's GCC?
NumPy is the only package I've tried building so far that has a
problem with this.


Re: [Numpy-discussion] Building NumPy on Mac OS X without Apple GCC

2007-11-28 Thread Robert Kern
Joshua Lippai wrote:
 I updated my GCC to a more recent version a day ago, since Apple's
 Xcode Tools only provide GCC 4.0 and the current release of GNU's GCC
 is 4.2. I successfully achieved this, but now I run into a problem
 when trying to build NumPy:
 
 gcc: unrecognized option '-no-cpp-precomp'
 cc1: error: unrecognized command line option -arch
 cc1: error: unrecognized command line option -arch
 cc1: error: unrecognized command line option -Wno-long-double
 gcc: unrecognized option '-no-cpp-precomp'
 cc1: error: unrecognized command line option -arch
 cc1: error: unrecognized command line option -arch
 cc1: error: unrecognized command line option -Wno-long-double
 
 
 Upon investigation into the matter, I found out that these options
 (no-cpp-precomp and Wno-long-double) are only valid in Apple's GCC and
 not the regular GNU release. Yet it seems NumPy automatically assumes
 Apple's GCC is being used when it realizes the target is OS X. Is
 there a way around this, or at least some way to specify Apple's GCC?
 NumPy is the only package I've tried building so far that has a
 problem with this.

I'm surprised that you've built other Python extension modules because numpy
does not add these flags; Python does. Python extensions should be built with
the same compiler that Python itself was built with. If you are using the
binary distribution from www.python.org, you should use Apple's gcc, not a
different one.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Building NumPy on Mac OS X without Apple GCC

2007-11-28 Thread Robert Kern
Joshua Lippai wrote:

 Thanks for the reply. Well, I built my Python stuff, including NumPy
 previously, before I changed to the higher GCC version. Do you know if
 there's an option I can toggle that will specify Apple's GCC to be
 used?

$ CC=/usr/bin/gcc python setup.py build

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Building NumPy on Mac OS X without Apple GCC

2007-11-28 Thread Joshua Lippai
 Joshua Lippai wrote:
  I updated my GCC to a more recent version a day ago, since Apple's
  Xcode Tools only provide GCC 4.0 and the current release of GNU's GCC
  is 4.2. I successfully achieved this, but now I run into a problem
  when trying to build NumPy:
 
  gcc: unrecognized option '-no-cpp-precomp'
  cc1: error: unrecognized command line option -arch
  cc1: error: unrecognized command line option -arch
  cc1: error: unrecognized command line option -Wno-long-double
  gcc: unrecognized option '-no-cpp-precomp'
  cc1: error: unrecognized command line option -arch
  cc1: error: unrecognized command line option -arch
  cc1: error: unrecognized command line option -Wno-long-double
 
 
  Upon investigation into the matter, I found out that these options
  (no-cpp-precomp and Wno-long-double) are only valid in Apple's GCC and
  not the regular GNU release. Yet it seems NumPy automatically assumes
  Apple's GCC is being used when it realizes the target is OS X. Is
  there a way around this, or at least some way to specify Apple's GCC?
  NumPy is the only package I've tried building so far that has a
  problem with this.

 I'm surprised that you've built other Python extension modules because numpy
 does not add these flags; Python does. Python extensions should be built with
 the same compiler that Python itself was built with. If you are using the
 binary distribution from www.python.org, you should use Apple's gcc, not a
 different one.


Thanks for the reply. Well, I built my Python stuff, including NumPy
previously, before I changed to the higher GCC version. Do you know if
there's an option I can toggle that will specify Apple's GCC to be
used?