[Numpy-discussion] eigenface image too dark

2008-03-19 Thread royG
hi
while trying to make an eigenface image from a numpy array of floats i
tried this

from numpy import array
import Image

imagesize=(200,200)
def makeimage(inputarray,imagename):
  inputarray.shape=(-1,)
  newimg=Image.new('L', imagesize)
  newimg.putdata(inputarray)
  newimg.save(imagename)

since i am using images of 200x200 size,
i use an array with 40000 elements  like
[ -92.35294118  -81.88235294  -67.58823529 ...,   -3.47058824
   -13.23529412   -9.76470588]
the problem is, i get an image that is too dark. it looks like a face
but is so dark that even different arrays will create images that all
look alike!
is there a way to 'tone it down' so that i can generate an eigenface
that can be displayed better?

thanks
RG
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] eigenface image too dark

2008-03-19 Thread Nadav Horesh
Easy solution:
   Use pylab's imshow(inputarray).
   In general ipython + matplotlib are very handy for this kind of analysis.

Longer solution:
   Scale your array:
 a_min = inputarray.min()
 a_max = inputarray.max()
 disp_array = ((inputarray-a_min)* 255/(a_max - a_min)).astype('uint8')\
  .
  .
  .
 newimg.putdata(disp_array)
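Spelled out as a self-contained helper (a sketch; the function name is mine, not from the thread), the scaling step is:

```python
import numpy as np

def scale_to_uint8(inputarray):
    # Linearly map the array's own min..max range onto 0..255, then cast
    # so PIL's 'L' (8-bit grayscale) mode gets the integer data it expects.
    a_min = inputarray.min()
    a_max = inputarray.max()
    return ((inputarray - a_min) * 255 / (a_max - a_min)).astype('uint8')
```

The cast matters because the eigenface arrays contain negative values, which an 8-bit grayscale image cannot represent directly, so most pixels likely ended up black before scaling.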


Nadav.



[Numpy-discussion] documentation for masked arrays?

2008-03-19 Thread Chris Withers
Hi All,

Where can I find docs for masked arrays?
The paid-for book doesn't even contain the phrase masked_where :-(

cheers,

Chris

-- 
Simplistix - Content Management, Zope  Python Consulting
- http://www.simplistix.co.uk


[Numpy-discussion] bug with fill_values in masked arrays?

2008-03-19 Thread Chris Withers
OK, my specific problem with masked arrays is as follows:

 >>> a = numpy.array([1,numpy.nan,2])
 >>> aa = numpy.ma.masked_where(numpy.isnan(a),a)
 >>> aa
array(data =
  [  1.e+00   1.e+20   2.e+00],
   mask =
  [False  True False],
   fill_value=1e+020)

 >>> numpy.ma.set_fill_value(aa,0)
 >>> aa
array(data =
  [ 1.  0.  2.],
   mask =
  [False  True False],
   fill_value=0)

OK, so this looks like I want it to, however:

 >>> [v for v in aa]
[1.0, array(data =
  99,
   mask =
  True,
   fill_value=99)
, 2.0]

Two questions:

1. why am I not getting my NaN's back?

2. why is the wrong fill value being used here?

cheers,

Chris

-- 
Simplistix - Content Management, Zope  Python Consulting
- http://www.simplistix.co.uk


[Numpy-discussion] Can't add user defined complex types

2008-03-19 Thread Neal Becker
In arrayobject.c, various complex functions (e.g., array_imag_get) use
PyArray_ISCOMPLEX -> PyTypeNum_ISCOMPLEX,
which is hard-coded to 2 predefined types :(

If PyArray_ISCOMPLEX allowed user-defined types, I'm guessing functions such
as array_imag_get would just work?



[Numpy-discussion] Unable to file bug

2008-03-19 Thread Neal Becker
http://scipy.org/scipy/numpy/newticket#preview is giving me:

Internal Server Error
The server encountered an internal error or misconfiguration and was unable
to complete your request.
Please contact the server administrator, [EMAIL PROTECTED] and inform them
of the time the error occurred, and anything you might have done that may
have caused the error.
More information about this error may be available in the server error log.



Re: [Numpy-discussion] eigenface image too dark

2008-03-19 Thread royG



 Longer solution:
Scale your array:
  a_min = inputarray.min()
  a_max = inputarray.max()
  disp_array = ((inputarray-a_min)* 255/(a_max - a_min)).astype('uint8')\
   .

thanx Nadav..the scaling works..and makes clear images

but why .astype('uint8')? can't i use the array of floats as it is?
even without changing the type to uint8 the code makes clear images
when i use
disp_array = ((inputarray-a_min)* 255/(a_max - a_min))

thanks again
RG


Re: [Numpy-discussion] documentation for masked arrays?

2008-03-19 Thread Bill Spotz
I have found that any search on that document containing an  
underscore will turn up zero matches.  Substitute a space instead.

On Mar 19, 2008, at 5:02 AM, Chris Withers wrote:

 Where can I find docs for masked arrays?
 The paid for book doesn't even contain the phrase masked_where :-(

** Bill Spotz  **
** Sandia National Laboratories  Voice: (505)845-0170  **
** P.O. Box 5800 Fax:   (505)284-0154  **
** Albuquerque, NM 87185-0370Email: [EMAIL PROTECTED] **






[Numpy-discussion] C++ class encapsulating ctypes-numpy array?

2008-03-19 Thread Joris De Ridder
Hi,

I'm passing (possibly non-contiguous) numpy arrays (data + shape +  
strides + ndim) with ctypes to my C++ function (with external C to  
make ctypes happy). Has anyone made a C++ class derived from a ctypes- 
numpy-array with an overloaded [] operator to allow easy indexing  
(e.g. x[0][2][5] for a 3D array) so that you don't have to worry about  
strides? I guess I'm not the first one thinking about this...
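One way to hand the pieces over from the Python side (a sketch; the helper name is mine, and the receiving C++ code is assumed to interpret strides in bytes, as NumPy reports them):

```python
import ctypes
import numpy as np

def as_ctypes_args(a):
    # Package a (possibly non-contiguous) array as (data, shape, strides, ndim)
    # for a ctypes-wrapped C/C++ function; strides are in bytes.
    data = a.ctypes.data_as(ctypes.c_void_p)
    shape = (ctypes.c_long * a.ndim)(*a.shape)
    strides = (ctypes.c_long * a.ndim)(*a.strides)
    return data, shape, strides, a.ndim
```

On the C++ side, element (i, j, k) then lives at data + i*strides[0] + j*strides[1] + k*strides[2], which is exactly what an overloaded indexing operator would compute.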

Cheers,
Joris



Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] C++ class encapsulating ctypes-numpy array?

2008-03-19 Thread Matthieu Brucher
Hi,

On my blog, I spoke about the class we used. It is not derived from a Numpy
array; it is implemented in terms of a Numpy array (
http://matt.eifelle.com/item/5)

Matthieu




-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] documentation for masked arrays?

2008-03-19 Thread Chris Withers
Bill Spotz wrote:
 I have found that any search on that document containing an  
 underscore will turn up zero matches.  Substitute a space instead.

That's not been my experience. I found the *one* mention of fill_value
just fine; the coverage of masked arrays is woeful :-(

Chris

-- 
Simplistix - Content Management, Zope  Python Consulting
- http://www.simplistix.co.uk


Re: [Numpy-discussion] Help needed with numpy 10.5 release blockers

2008-03-19 Thread Matthieu Brucher
For the non-blocker bugs, I think that #420 should be closed: float32 is
the C float type, isn't it?

Matthieu

2008/3/13, Jarrod Millman [EMAIL PROTECTED]:

 Hello,

 I am sure that everyone has noticed that 1.0.5 hasn't been released
 yet.  The main issue is that when I was getting ready to tag the
 release I noticed that the buildbot had a few failing tests:
 http://buildbot.scipy.org/waterfall?show_events=false

 Stefan van der Walt added tickets for the failures:
 http://projects.scipy.org/scipy/numpy/ticket/683
 http://projects.scipy.org/scipy/numpy/ticket/684
 http://projects.scipy.org/scipy/numpy/ticket/686
 And Chuck Harris fixed ticket #683 within minutes (thanks!).  The
 others are still open.

 Stefan and I also triaged the remaining tickets--closing several and
 turning others in to release blockers:

 http://scipy.org/scipy/numpy/query?status=new&severity=blocker&milestone=1.0.5&order=priority

 I think that it is especially important that we spend some time trying
 to make the 1.0.5 release rock solid.  There are several important
 changes in the trunk so I really hope we can get these tickets
 resolved ASAP.  I need everyone's help getting this release out.  If
 you can help work on any of the open release blockers, please try to
 close them over the weekend.  If you have any ideas about the tickets
 but aren't exactly sure how to resolve them please post a message to
 the list or add a comment to the ticket.

 I will be traveling over the weekend, so I may be off-line until Monday.

 Thanks,

 --
 Jarrod Millman
 Computational Infrastructure for Research Labs
 10 Giannini Hall, UC Berkeley
 phone: 510.643.4014
 http://cirl.berkeley.edu/




-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] Can't add user defined complex types

2008-03-19 Thread Travis E. Oliphant
Neal Becker wrote:
 In arrayobject.c, various complex functions (e.g., array_imag_get) use:
 PyArray_ISCOMPLEX -> PyTypeNum_ISCOMPLEX,
 which is hard coded to 2 predefined types :(

 If PyArray_ISCOMPLEX allowed user-defined types, I'm guessing functions such
 as array_imag_get would just work?
   
I don't think that is true.   There would need to be some kind of idea
of complex-ness that is tested.   One way this could work is if your
corresponding scalar inherited from the generic complex scalar type and
then that was tested for.

-Travis O.




Re: [Numpy-discussion] Can't add user defined complex types

2008-03-19 Thread Neal Becker
Travis E. Oliphant wrote:

 Neal Becker wrote:
 In arrayobject.c, various complex functions (e.g., array_imag_get) use:
 PyArray_ISCOMPLEX -> PyTypeNum_ISCOMPLEX,
 which is hard coded to 2 predefined types :(

 If PyArray_ISCOMPLEX allowed user-defined types, I'm guessing functions
 such as array_imag_get would just work?
   
 I don't think that is true.   There would need to be some kind of idea
 of complex-ness that is tested.   One way this could work is if your
 corresponding scalar inherited from the generic complex scalar type and
 then that was tested for.
 
 -Travis O.

You don't think which is true?

Suppose along with registering a type, I can mark whether it is complex. 
Then we change PyArray_ISCOMPLEX to look at that mark for user-defined
types.

I believe get_part will just work.  I more-or-less copied the code, and made
my own functions 'get_real, get_imag', and they work just fine on my types.



Re: [Numpy-discussion] Can't add user defined complex types

2008-03-19 Thread Charles R Harris
On Wed, Mar 19, 2008 at 10:42 AM, Travis E. Oliphant [EMAIL PROTECTED]
wrote:

 Neal Becker wrote:
  In arrayobject.c, various complex functions (e.g., array_imag_get) use:
  PyArray_ISCOMPLEX -> PyTypeNum_ISCOMPLEX,
  which is hard coded to 2 predefined types :(
 
  If PyArray_ISCOMPLEX allowed user-defined types, I'm guessing functions
 such
  as array_imag_get would just work?
 
 I don't think that is true.   There would need to be some kind of idea
 of complex-ness that is tested.   One way this could work is if your
 corresponding scalar inherited from the generic complex scalar type and
 then that was tested for.


That brings up a question I have. In looking to introduce float16, I noted
that the type numbers are tightly packed at the low end. There is space for
user-defined types >= 128, IIRC, but float16 and cfloat16 really belong down
with the other numeric types. There are also several other types in the IEEE
pipeline. So I am wondering if we can't spread the type numbers out a bit more.

Chuck


Re: [Numpy-discussion] SVD error in Numpy. NumPy Update reversed?

2008-03-19 Thread Lou Pecora
I recently had a personal email reply from Damian
Menscher who originally found the error in 2002.  He
states:

--

I explained the solution in a followup to my own post:
http://mail.python.org/pipermail/python-list/2002-August/161395.html
-- in short, find the dlasd4_ routine (for the current
1.0.4 version
it's at numpy/linalg/dlapack_lite.c:21902) and change
the max
iteration count from 20 to 100 or higher.

The basic problem was that they use an iterative
method to converge on
the solution, and they had a cutoff of the max number
of iterations
before giving up (to guard against an infinite loop or
cases where an
unlucky matrix would require an excessive number of
iterations and
therefore CPU).  The fix I used was simply to increase
the max
iteration count (from 20 to 100 -- 50 was enough to
solve my problem
but I went for overkill just to be sure I wouldn't see
it again).  It
*may* be reasonable to just leave this as an infinite
loop, or to
increase the count to 1000 or higher.  A lot depends
on your preferred
failure mode:
  - count too low -> low cpu usage, but "SVD did not converge" errors
somewhat common
  - very high count -> some matrices will result in high cpu usage,
non-convergence still possible
  - infinite loop -> it will always converge, but may take forever

NumPy was supposedly updated also (from 20 to 100, but
you may want to
go higher) in bug 601052.  They said the fix made it
into CVS, but
apparently it got lost or reverted when they did a
release (the oldest
release I can find is v1.0 from 2006 and has it set to
20).  I just
filed another bug (copy/paste of the previous one) in
hopes they'll
fix it for real this time:
http://scipy.org/scipy/numpy/ticket/706

Damian



I looked at line 21902 of dlapack_lite.c; it is:

for (niter = iter; niter <= 20; ++niter) {

Indeed the upper limit for iterations in the linalg.svd code is set to 20.
For now I will go with my method (from an earlier post) of squaring the
matrix and then doing svd when the original try on the original matrix
throws linalg.linalg.LinAlgError.  I do not claim that this is a cure-all.
But it seems to work fast and keeps the original code from thrashing around
in a long iteration.

I would suggest this be made explicit in the NumPy documentation and that
the user be given the option to reset the limit on the number of iterations.
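In present-day NumPy syntax, the try-then-square workaround can be sketched like this (the function name is mine; the eigh-based fallback recovers V and the squared singular values from the symmetric product and assumes no zero singular values):

```python
import numpy as np

def safe_svd(m):
    # First try the LAPACK-backed SVD directly.
    try:
        return np.linalg.svd(m, full_matrices=False)
    except np.linalg.LinAlgError:
        # Fallback: eigendecompose the symmetric m.T @ m to get V and the
        # squared singular values, then rebuild U = m V / s.
        w, v = np.linalg.eigh(m.T @ m)
        order = np.argsort(w)[::-1]
        s = np.sqrt(np.clip(w[order], 0.0, None))
        vt = v[:, order].T
        u = (m @ vt.T) / s        # assumes all singular values are nonzero
        return u, s, vt
```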



-- Lou Pecora,   my views are my own.


  



Re: [Numpy-discussion] SVD error in Numpy. NumPy Update reversed?

2008-03-19 Thread Charles R Harris
On Wed, Mar 19, 2008 at 11:30 AM, Lou Pecora [EMAIL PROTECTED] wrote:


 I looked at line 21902  of dlapack_lite.c, it is,

for (niter = iter; niter <= 20; ++niter) {

 Indeed the upper limit for iterations in the
 linalg.svd code is set for 20.  For now I will go with
 my method (on earlier post) of squaring the matrix and
 then doing svd when the original try on the original
 matrix throws the linalg.linalg.LinAlgError.  I do not
 claim that this is a cure-all.  But it seems to work
 fast and avoids the original code from thrashing
 around in a long iteration.

 I would suggest this be made explicit in the NumPy
 documentation and then the user be given the option to
 reset the limit on the number of iterations.

 Well, it certainly shouldn't be hardwired in as 20. At minimum it should
be a #define, and ideally it should be passed in with the function call, but
I don't know if the interface allows that.

Chuck


[Numpy-discussion] Correlate with small arrays

2008-03-19 Thread Peter Creasey
Hi,

I'm trying to do a PDE style calculation with numpy arrays

y = a * x[:-2] + b * x[1:-1] + c * x[2:]

with a,b,c constants. I realise I could use correlate for this, i.e.

y = numpy.correlate(x, array((a, b, c)))

however the performance doesn't seem as good (I suspect correlate is
optimised for both arguments being long arrays). Is the first thing I
wrote probably the best? Or is there a better numpy function for this
case?

Regards,
Peter


[Numpy-discussion] NumPy 1.0.5 almost ready

2008-03-19 Thread Jarrod Millman
Hello,

Thanks to everyone who has been working on getting the 1.0.5 release
of NumPy out the door.  Since my last email at least 12 bug tickets
have been closed.  There are a few remaining issues with the trunk,
but we are fast approaching a release.

One additional issue that I would like to see more progress made on
before tagging the next release is improved documentation especially
of the new maskedarray implementation.  I know that Pierre has spent a
lot of time developing the new implementation and has other pressing
issues, so ideally others will be able to pitch in.  Given that I want
to get the release out ASAP, I have decided to have a Doc Day this
Friday, March 21st.  I will send out an official announcement later
tonight.

This release promises to bring a number of important improvements and
should represent a very stable and mature release in the 1.0 series of
NumPy.  After this release I hope to start planning for a new
major development series leading to a 1.1 release.  So if you have any
time to help close tickets or improve documentation, please take the
time over the next few days to do so.

Thanks,

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/


Re: [Numpy-discussion] Correlate with small arrays

2008-03-19 Thread Robert Kern
On Wed, Mar 19, 2008 at 12:57 PM, Peter Creasey
[EMAIL PROTECTED] wrote:
 Hi,

  I'm trying to do a PDE style calculation with numpy arrays

  y = a * x[:-2] + b * x[1:-1] + c * x[2:]

  with a,b,c constants. I realise I could use correlate for this, i.e

  y = numpy.correlate(x, array((a, b, c)))

  however the performance doesn't seem as good (I suspect correlate is
  optimised for both arguments being long arrays). Is the first thing I
  wrote probably the best? Or is there a better numpy function for this
  case?

The relative performance seems to vary depending on the size, but it
seems to me that correlate usually beats the manual implementation,
particularly if you don't measure the array() part, too. len(x)=1000
is the only size where the manual version seems to beat correlate on
my machine.
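For what it's worth, the two formulations agree when correlate is used in 'valid' mode, which matches the slicing; a quick check (arbitrary coefficients):

```python
import numpy as np

a, b, c = 0.5, -1.0, 0.5
x = np.linspace(0.0, 1.0, 100) ** 2

manual = a * x[:-2] + b * x[1:-1] + c * x[2:]
# 'valid' keeps only the fully overlapping positions, which matches the
# slicing; correlate does not reverse the kernel, so the weights line up.
stencil = np.correlate(x, np.array([a, b, c]), mode='valid')
print(np.allclose(manual, stencil))
```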

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


Re: [Numpy-discussion] bug with fill_values in masked arrays?

2008-03-19 Thread Matt Knox
 
 OK, my specific problem with masked arrays is as follows:
 
   a = numpy.array([1,numpy.nan,2])
   aa = numpy.ma.masked_where(numpy.isnan(a),a)
   aa
 array(data =
   [  1.e+00   1.e+20   2.e+00],
mask =
   [False  True False],
fill_value=1e+020)
 
   numpy.ma.set_fill_value(aa,0)
   aa
 array(data =
   [ 1.  0.  2.],
mask =
   [False  True False],
fill_value=0)
 
 OK, so this looks like I want it to, however:
 
   [v for v in aa]
 [1.0, array(data =
   99,
mask =
   True,
fill_value=99)
 , 2.0]
 
 Two questions:
 
 1. why am I not getting my NaN's back?

when iterating over a masked array, you get the ma.masked constant for
elements that were masked (same as what you would get if you indexed the
masked array at that element). If you are referring specifically to the .data
portion of the array, it looks like the latest version of the numpy.ma
sub-module preserves nan's in the data portion of the masked array, but the
old version perhaps doesn't, based on the output you are showing.

 
 2. why is the wrong fill value being used here?

the second element in the array iteration here is actually the numpy.ma.masked
constant, which always has the same fill value (which I guess is 99). This
is independent of the fill value for your specific array.

- Matt
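A short session illustrating both points with the current numpy.ma API (a sketch; variable names follow the thread):

```python
import numpy as np
import numpy.ma as ma

a = np.array([1.0, np.nan, 2.0])
aa = ma.masked_where(np.isnan(a), a)
ma.set_fill_value(aa, 0)

# Iterating (or indexing) a masked element yields the shared ma.masked
# constant, which carries its own fill value -- it is not the array's
# per-instance fill_value.
vals = [v for v in aa]
print(vals[1] is ma.masked)

# To materialise concrete numbers, fill explicitly; this does use the
# array's own fill_value (0 here).
print(aa.filled())
```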



[Numpy-discussion] JOB: Short-term programming (consultant) work

2008-03-19 Thread Rob Clewley
Dear NumPy users,

The developers of the PyDSTool dynamical systems software project have
money to hire a Python programmer on a short-term, per-task basis as a
technical consultant. The work can be done remotely and will be paid after
the completion of project milestones. The work must be completed by
July, when the current funds expire. Prospective consultants could be
professionals or students and will have proven experience and interest in
working with NumPy/SciPy, scientific computation in general, and
interfacing Python with C and Fortran codes.

Detailed work plan, schedule, and project specs are negotiable (if you
are talented and experienced we would like your input). The rate of
pay is commensurate with experience, and may be up to $45/hr or $1000
per project milestone (no fringe benefits), according to an agreed
measure of satisfactory product performance. There is a strong possibility
of longer term work depending on progress and funding availability.

PyDSTool (pydstool.sourceforge.net) is a multi-platform, open-source
environment offering a range of library tools and utils for research
in dynamical systems modeling for scientists and engineers.  As a
research project, it presently contains prototype code that we would
like to improve and better integrate into our long-term vision and
with other emerging (open-source) software tools.

Depending on interest and experience, current projects might include:
 * Conversion and pythonification of old Matlab code for model analysis
 * Improved interface for legacy C and Fortran code (numerical
integrators) via some combination of SWIG, Scons, automake
 * Overhaul of support for symbolic processing (probably by an
interface to SymPy)

For more details please contact Dr. Rob Clewley (rclewley) at (@) the
Department of Mathematics, Georgia State University (gsu.edu).

-- 
Robert H. Clewley, Ph. D.
Assistant Professor
Department of Mathematics and Statistics
Georgia State University
720 COE, 30 Pryor St
Atlanta, GA 30303, USA

tel: 404-413-6420 fax: 404-651-2246
http://www.mathstat.gsu.edu/~matrhc
http://brainsbehavior.gsu.edu/


Re: [Numpy-discussion] Numpy and OpenMP

2008-03-19 Thread David Cournapeau
Charles R Harris wrote:


 Image processing may be a special case in that in many cases it is almost
 embarrassingly parallel. Perhaps some special libraries for that sort
 of application could be put together and just bits of C code be run on
 different processors. Not that I know much about parallel processing,
 but that would be my first take.

For me, the basic problem is that there is no support for this kind of
thing in numpy right now (loading a specific implementation at runtime). I
think it would be a worthwhile goal for 1.1: the ability to load different
implementations at runtime (for example: load a multi-core BLAS on a
multi-core CPU); instead of linking atlas/mkl, they would be used as
plug-ins. This would require significant work, though.

cheers,

David


Re: [Numpy-discussion] eigenface image too dark

2008-03-19 Thread Nadav Horesh
I never used the putdata interface, only fromstring. It is likely that
putdata is more flexible. However, I urge you to use matplotlib: plotting
with imshow followed by colorbar() enables you to inspect the true pixel
values, add grids, zoom, etc.

   Nadav.
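In matplotlib terms the suggestion looks like this (a sketch; the Agg backend and the random stand-in data are mine, not from the thread):

```python
import matplotlib
matplotlib.use('Agg')              # headless backend; drop this for interactive use
import matplotlib.pyplot as plt
import numpy as np

face = np.random.randn(200, 200)   # stand-in for the real eigenface floats

plt.imshow(face, cmap='gray')      # imshow autoscales float data to the colormap
plt.colorbar()                     # lets you read off the true pixel values
plt.savefig('eigenface.png')
```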


