Re: [Numpy-discussion] User Guide
Hi,

I see the desire for stylistic improvement by removing the awkward parens, but your correction has incorrect grammar. One cannot have arrays of Python, nor are NumPy objects a subset of Python (because Python is not a set) -- both of which are what your sentence technically states. I.e., the commas are in the wrong place. You could say

    The exception: one can have arrays of python objects (including those
    from numpy), thereby allowing for arrays of different sized elements.

but I think it is even clearer to just unpack this a bit more with

    The exception: one can have arrays of python objects, including numpy
    objects, which allows arrays to contain different sized elements.

In my experience, attempting to be extremely concise in technical writing is a common cause of awkward grammar problems like this. I do it all the time :)

-Rob

On Thu, Jul 18, 2013 at 9:18 AM, Colin J. Williams cjwilliam...@gmail.com wrote:
> Returning to numpy after a while away, I'm impressed with the style and
> content of the User Guide and the Reference.
>
> This is to offer a Guide correction - I couldn't figure out how to offer
> the correction on-line.
>
> What is Numpy?
>
> Suggest changing:
>
>     The exception: one can have arrays of (Python, including NumPy)
>     objects, thereby allowing for arrays of different sized elements.
>
> to:
>
>     The exception: one can have arrays of Python, including NumPy
>     objects, thereby allowing for arrays of different sized elements.
>
> Colin W.

--
Robert Clewley, Ph.D.
Assistant Professor
Neuroscience Institute and
Department of Mathematics and Statistics
Georgia State University
PO Box 5030
Atlanta, GA 30302, USA
tel: 404-413-6420 fax: 404-413-5446
http://neuroscience.gsu.edu/rclewley.html

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
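[Editorial note: the behavior both wordings describe -- object arrays holding differently sized elements -- can be checked directly. A minimal illustrative sketch (variable names are mine, not from the Guide):]

```python
import numpy as np

# Pre-allocating with dtype=object is the most robust way to build an
# array of arbitrary Python objects, including NumPy arrays themselves.
a = np.empty(3, dtype=object)
a[0] = 42                # an int
a[1] = "hello"           # a string
a[2] = np.arange(5)      # a NumPy array of 5 elements

print(a.dtype)           # object
print(type(a[1]))        # <class 'str'>
print(a[2].shape)        # (5,)
```

The array stores references to the objects, so the elements need not share a size or even a type.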
[Numpy-discussion] [ANN] PyDSTool 0.88 -- dynamical systems modeling tools
A new release of the dynamical systems modeling toolbox PyDSTool is available from Sourceforge: http://www.sourceforge.net/projects/pydstool/

Highlights from the release notes:

* Cleanup of global imports; in particular, the entire numpy.random and linalg namespaces are no longer imported by default
* Added support for 'min' and 'max' keywords in functional specifications (for ODE right-hand sides, for instance)
* Optimization tools from third-party genericOpt (included with permission) and improved parameter estimation examples making use of this code
* Numerical phase-response calculations now possible in the PRC toolbox
* Fully-fledged DSSRT toolbox for neural modeling (see wiki page)
* New tests/demonstrations in PyDSTool/tests
* Major improvements to the intelligent expr2func (symbolic -> python function conversion)
* Improved compatibility with cross-platform use and with recent python versions and associated libraries
* Added many minor features (see timeline on Trac: http://jay.cam.cornell.edu/pydstool/timeline)
* Fixed many bugs and quirks (see timeline on Trac: http://jay.cam.cornell.edu/pydstool/timeline)

This is mainly a bugfix release in preparation for a substantial upgrade at version 0.90, which will have a proper installer, unit testing, symbolic expression support via SymPy, and greatly improved interfacing to legacy ODE integrators. These features are being actively developed in 2009/2010.

For installation and setting up, please carefully read the GettingStarted page at our wiki for platform-specific details: http://pydstool.sourceforge.net

Please use the bug tracker and user discussion list at Sourceforge to report bugs or provide feedback. Code and documentation contributions are always welcome.

Regards,
Rob Clewley
Re: [Numpy-discussion] Numpy float precision vs Python list float issue
David,

I'm confused about your reply. I don't think Ruben was only asking why you'd ever get non-zero error after the forward and inverse transform, but why his implementation using lists gives zero error while using arrays he gets something of order 1e-15.

On Mon, Apr 20, 2009 at 9:47 AM, David Cournapeau da...@ar.media.kyoto-u.ac.jp wrote:
>> I'm using Scipy/Numpy to do image wavelet transforms via the lifting
>> scheme. I grabbed some code implementing the transforms with Python
>> lists (float type). This code works perfectly, but is slow for my
>> needs (I'll be doing some genetic algorithms to evolve coefficients
>> of the filters, and the forward and inverse transforms will be done
>> many times). It's just implemented by looping over the lists and
>> making computations that way. The reconstructed image after doing a
>> forward and inverse transform is perfect, that is, the difference
>> between the original and reconstructed images is 0.
>>
>> With Scipy/Numpy float array slicing this code is much faster, as you
>> know. But the reconstructed image is not perfect. The image
>> difference maximum and minimum values are:
>>     maximum difference = 3.5527136788e-15
>>     minimum difference = -3.5527136788e-15
>>
>> Is this behavior expected?
>
> Yes, it is expected; it is inherent to how floating point works. By
> default, the precision for a floating point array is double precision,
> for which, in normal settings, a == a + 1e-17.

I don't think it's expected in this sense. The question is why the exact same sequence of arithmetic ops on lists yields zero error but on arrays yields 3.6e-15 error. This doesn't seem to be about lists not showing the full precision of the values, because the differences are zero even when extracted from the lists.

>> Because it seems sooo weird to me.
>
> It shouldn't :) The usual answer is that you should read this:
> http://docs.sun.com/app/docs/doc/800-7895

This doesn't help! This is a python question, methinks.
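[Editorial note: the order-dependence of floating point arithmetic that David alludes to is easy to demonstrate. This is a generic illustration, not Ruben's actual code:]

```python
# The same three values summed in a different grouping give different
# results, because each intermediate result is rounded to double precision.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c    # (1e16 - 1e16) + 1.0  ->  1.0
right = a + (b + c)   # -1e16 + 1.0 rounds back to -1e16, so -> 0.0

print(left, right)    # 1.0 0.0
```

So two implementations of "the same" algorithm can disagree at the 1e-15 level simply by associating operations differently.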
Best,
Rob
Re: [Numpy-discussion] Numpy float precision vs Python list float issue
On Mon, Apr 20, 2009 at 10:48 AM, David Cournapeau da...@ar.media.kyoto-u.ac.jp wrote:
> Rob Clewley wrote:
>> David, I'm confused about your reply. I don't think Ruben was only
>> asking why you'd ever get non-zero error after the forward and
>> inverse transform, but why his implementation using lists gives zero
>> error but using arrays he gets something of order 1e-15.
>
> That's more likely just an accident. Forward + inverse = id is the
> surprising thing, actually. In any numerical package, if you do
> ifft(fft(a)), you will not recover a exactly for any non-trivial size.
> For example, with floating point numbers, the order in which you do
> operations matters, so:
>
> [SNIP ARITHMETIC]
>
> will give you different values for d and c, even if, on paper, those
> are exactly the same. For those reasons, it is virtually impossible to
> have exactly the same values for two different implementations of the
> same algorithm. As long as the difference is small (if the
> reconstruction error falls in the 1e-15 range, it is most likely the
> case), it should not matter.

I understand the numerical mathematics behind this very well, but my point is that his two algorithms appear to be identical (same operations, same order); he simply uses lists in one and arrays in the other. It's not like he used vectorization or other array-related operations - he uses for loops in both cases. Of course I agree that 1e-15 error should be acceptable, but that's not the point. I think there is legitimate curiosity in wondering why there is any difference between using the two data types in exactly the same algorithm.

Maybe Ruben could re-code the example using a single function that accepts either a list or an array, to really demonstrate that there is no mistake in any of the decimal literals or any subtle difference in the ordering of operations that I didn't spot. Would that convince you, David, that there is something interesting that remains to be explained in this example?!
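[Editorial note: the single-function test proposed here might look like the following sketch (my own illustrative code, not Ruben's actual transform). If the sequence of operations really is identical, list and float64-array inputs should agree bit-for-bit, since both use IEEE double arithmetic:]

```python
import numpy as np

def lift_step(data):
    """One toy 'lifting'-style pass; works identically on lists and 1-D arrays."""
    out = [0.0] * len(data)
    for i in range(len(data)):
        prev = data[i - 1] if i > 0 else 0.0
        # the same scalar ops are applied whichever container holds the data
        out[i] = 0.75 * data[i] - 0.25 * prev
    return out

xs = [0.1, 0.2, 0.3, 0.4]
res_list = lift_step(xs)              # Python floats throughout
res_arr = lift_step(np.array(xs))     # np.float64 elements throughout

# identical IEEE double operations -> identical results, bit for bit
print(max(abs(a - b) for a, b in zip(res_list, res_arr)))  # 0.0
```

Any discrepancy that shows up with such a harness would have to come from a genuine difference in the operations, not from the container type itself.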
Best,
Rob
[Numpy-discussion] [JOB] Short-term python programming consultant - funds expire soon!
Dear Pythonistas,

Our open-source software project (PyDSTool) has money to hire an experienced Python programmer on a short-term, per-task basis as a technical consultant (i.e., no fringe benefits offered). The work can be done remotely and will be paid after the satisfactory completion of the objectives. The work must be completed by the end of April, when the current funds expire.

The basic work plan and design documents are already laid out from previous work on these tasks, but the finer details will be negotiable. We plan to pay approximately $2-3k per task, depending on the exact code design and amount of time required. Prospective consultants could be professionals or students, but must have proven experience with SWIG and both python and numpy distutils, and be willing to write a short document about the completed work for future maintenance purposes. We have a template for a simple contract, and invoices can be relatively coarse-grained. As an open-source project, all contributed code will be BSD licensed as part of our project, although it will retain attribution of your authorship.

We have two objectives for this work, which could be satisfied by two individual consultants but more likely by one:

(1) Complete the implementation of automated compilation of C code into DLLs. These DLLs are dynamically created from a user's specification in python, and can be updated and reloaded if the user changes specifications at the python level. This functionality is crucial to providing fast solving of differential equations using legacy solvers written in C and Fortran. It is relatively independent of the inner workings of our project, so there should be minimal overhead in completing this task. We need to complete the integration of an existing code idea for this objective with the main trunk of our project. The existing code works as a stand-alone test for our C legacy solver but is not completed for our Fortran legacy solver (for which numpy's distutils needs to be used instead of python distutils) and needs to be integrated into the current SVN trunk. The design document and implementation for the C solver should be a helpful template for the Fortran solver.

(2) We need a setup.py package installer for our project that automatically compiles the static parts of the legacy differential equation solvers during installation, according to the directory structure and SWIG/distutils implementation to be completed in objective (1). If the consultant is experienced with writing python package installers, he/she may wish to negotiate working on a more advanced system such as an egg installer.

PyDSTool (pydstool.sourceforge.net) is a multi-platform, open-source environment offering a range of library tools and utilities for research in dynamical systems modeling for scientists and engineers.

Please contact Dr. Rob Clewley (rclewley) at (@) the Department of Mathematics, Georgia State University (gsu.edu) for more information.

--
Robert H. Clewley, Ph.D.
Assistant Professor
Department of Mathematics and Statistics and Neuroscience Institute
Georgia State University
720 COE, 30 Pryor St
Atlanta, GA 30303, USA
tel: 404-413-6420 fax: 404-413-6403
http://www2.gsu.edu/~matrhc
http://brainsbehavior.gsu.edu/
[Numpy-discussion] ANN: PyDSTool 0.87 released
Dear Scipy and Numpy user lists,

The latest update to the open-source python dynamical systems modeling toolbox, PyDSTool 0.87, has been released on Sourceforge: http://www.sourceforge.net/projects/pydstool/

Major highlights are:

* Implemented a more natural hybrid model specification format
* Supports quadratic interpolation of data points in Trajectory objects (courtesy of Anne Archibald's poly_int class)
* Supports more sophisticated data-driven model inference
* Improved efficiency of ODE solvers
* Various bug fixes and other API improvements
* New demonstration scripts and more commenting for existing scripts in PyDSTool/tests/
* New wiki tutorial (courtesy of Daniel Martí)

This is a modest update in preparation for a substantial upgrade at version 0.90, which will move symbolic expression support over to SymPy and greatly improve the implementation of C-based ODE integrators. We are also trying to incorporate basic boundary-value problem solving, and we aim to further improve the parameter estimation / model inference tools to work effectively with OpenOpt.

For installation and setting up, see the GettingStarted page at our wiki: http://pydstool.sourceforge.net

The download contains full API documentation, BSD license information, and further details of recent code changes. Further documentation is on the wiki. As ever, all feedback is welcome as we try to find time to improve our code base. If you would like to contribute effort to improving the tutorial and wiki documentation, or to the code itself, please contact me.

-Rob Clewley
Re: [Numpy-discussion] ANN: I wrote some Numpy + SWIG + MinGW simple examples
On Thu, Nov 13, 2008 at 6:32 PM, Egor Zindy [EMAIL PROTECTED] wrote:
> To get my head round the numpy.i interface for SWIG, I wrote some
> simple examples and documented them as much as possible. The result is
> here:

Awesome. That will be very helpful to me, and I'm sure to others too. I know some don't seem to consider SWIG so useful these days, but I rely on it a lot, and it does its job pretty well. Apart from some of the details concerning windows paths and mingw, it looks like your information is equally useful for non-windows SWIG usage too. I'll put up some links to it in due course, FWIW.

Thanks,
Rob
[Numpy-discussion] Automatic differentiation (was Re: second-order gradient)
> Maybe we should focus on writing a decent 'deriv' function then. I
> know Konrad Hinsen's Scientific had a derivatives package
> (Scientific.Functions.Derivatives) that implemented automatic
> differentiation:
> http://en.wikipedia.org/wiki/Automatic_differentiation

That would be great, but wouldn't that be best suited as a utility requiring Sympy? You'll want to take advantage of all sorts of symbolic classes, especially for any source code transformation approach. IMO Hinsen's implementation isn't a very efficient or attractive solution to AD given the great existing C/C++ codes out there. Maybe we should be looking to provide a python interface to an existing open source package such as ADOL-C, but I'm all in favour of a new pure python approach too. What would be perfect is to have a single interface to a python AD package that would support a faster implementation if the user wished to install a C/C++ package, and otherwise would default to a pure python equivalent.

-Rob
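[Editorial note: for concreteness, the core idea of forward-mode automatic differentiation can be sketched in a few lines of pure python via dual numbers. This minimal Dual class is an illustration only, not the API of Scientific.Functions.Derivatives or ADOL-C:]

```python
class Dual:
    """A number carrying a value and its derivative, propagated by the chain rule."""
    def __init__(self, val, dval=0.0):
        self.val, self.dval = val, dval

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # sum rule: (u + v)' = u' + v'
        return Dual(self.val + other.val, self.dval + other.dval)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (u * v)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dval * other.val + self.val * other.dval)
    __rmul__ = __mul__

def deriv(f, x):
    """Exact (to machine precision) derivative of f at x, in one forward pass."""
    return f(Dual(x, 1.0)).dval

# f(x) = 3x^2 + 2x + 1, so f'(x) = 6x + 2 and f'(2) = 14
print(deriv(lambda x: 3 * x * x + 2 * x + 1, 2.0))  # 14.0
```

A full package would add the remaining operators and elementary functions (sin, exp, ...), but the chain-rule bookkeeping above is the whole trick, with no symbolic machinery required.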
Re: [Numpy-discussion] any interest in including asecond-ordergradient?
2008/10/29 Fernando Perez [EMAIL PROTECTED]:
> I think it's fine to ask for functions that compute higher order
> derivatives of n-d arrays: we already have diff(), which operates on a
> single direction, and a hessian could make sense (with the caveats
> David points out). But with higher order derivatives there are many
> more combinations to worry about, and I really think it's a bad idea
> to lump those issues into the definition of gradient, which was a
> perfectly unambiguous object up until this point.

I'm basically in favour of Fernando's suggestion to keep gradient simple and add a hessian function. Higher numerical derivatives from data aren't very reliable anyway. You're much better off interpolating with a polynomial and then differentiating that.

> Maybe we should focus on writing a decent 'deriv' function then. I
> know Konrad Hinsen's Scientific had a derivatives package
> (Scientific.Functions.Derivatives) that implemented automatic
> differentiation:

Improving the support for a gradient of array data is an entirely independent project in my mind - but I like this idea, and I replied in a new thread.

-Rob
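[Editorial note: the interpolate-then-differentiate approach suggested above can be sketched with numpy's polynomial routines. The function, degree, and sample points here are illustrative choices only:]

```python
import numpy as np

# sample data from a known function so the answer can be checked
x = np.linspace(0.0, 2 * np.pi, 50)
y = np.sin(x)

coeffs = np.polyfit(x, y, deg=9)   # fit a degree-9 polynomial to the data
d2 = np.polyder(coeffs, m=2)       # differentiate the polynomial twice, exactly

# the second derivative of sin is -sin, so at x = pi/2 we expect about -1
approx = np.polyval(d2, np.pi / 2)
print(approx)                      # close to -1
```

Differentiating the fitted polynomial is far better conditioned than applying finite differences twice to the raw data, which amplifies noise at each pass.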
Re: [Numpy-discussion] Automatic differentiation (was Re: second-order gradient)
> In your experience, is this functionality enough to start a separate
> package, or should we try to include it somewhere else? Otherwise we
> could think of a new SciKit.

I confess to knowing no details about scikits, so I don't know what the difference really is between a new package and a scikit. To do this properly you'd end up with a sizable body of code, and the potential dependency on Sympy would also suggest making it somewhat separate. I'd defer to others on that point, although I don't really see what other package it would naturally fit with, because AD has multiple applications.

-Rob
Re: [Numpy-discussion] Minimum distance between 2 paths in 3D
Hi Andrea,

> I was wondering if someone had any suggestions/references/snippets of
> code on how to find the minimum distance between 2 paths in 3D.
> Basically, for every path I have a series of points (x, y, z) and I
> would like to know if there is a better way, other than comparing
> point by point the 2 paths, to find the minimum distance between them.

In 2D there would be a few tricks you could use, but in higher dimensions anything smart that you could attempt might cost you more computation time than just comparing the points (unless N is huge). At least make sure to put the looping over points into a vectorized form to avoid python for loops. E.g., for two curves given by 3xN arrays c and d:

from numpy import concatenate, argmin
from numpy.linalg import norm

# all N**2 distance vectors
distvec = concatenate([c[:,i] - d.T for i in range(N)])
# all N**2 norms of the distance vectors
ns = [norm(a) for a in distvec]
# find the index of the minimum norm in [0 .. N**2) and decode which
# pair of points it corresponds to
cix, dix = divmod(argmin(ns), N)

The minimum is between the points c[:,cix] and d[:,dix]. A numpy wonk might be able to squeeze a bit more optimization out of this, but I think this code works OK.

Unfortunately, unless you know more mathematical properties of your curves in advance (info about their maximum curvature, for instance) you'll end up needing to check every pair of points. If N is really big, like over 10**4 maybe, it might be worth trying to break the curves up into pieces contained in bounding boxes, which you can then eliminate from a full search if they don't intersect.

-Rob
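[Editorial note: the remaining python loop over i in the snippet above can itself be removed with broadcasting. A sketch assuming the same 3xN layout for the two curves; the random test data and array names are mine:]

```python
import numpy as np

N = 100
rng = np.random.default_rng(0)
c = rng.standard_normal((3, N))   # first curve, 3xN
d = rng.standard_normal((3, N))   # second curve, 3xN

# diff[i, j, :] is the vector c[:, i] - d[:, j], built in one shot
diff = c.T[:, None, :] - d.T[None, :, :]       # shape (N, N, 3)
dists = np.sqrt((diff ** 2).sum(axis=-1))      # all N**2 distances
cix, dix = np.unravel_index(np.argmin(dists), dists.shape)

# the closest pair is c[:, cix] and d[:, dix]
print(dists[cix, dix], np.linalg.norm(c[:, cix] - d[:, dix]))  # these agree
```

The trade-off is memory: the (N, N, 3) intermediate is fine for N in the thousands but argues for the bounding-box chunking mentioned above once N gets much larger.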
[Numpy-discussion] JOB: Short-term programming (consultant) work
Dear NumPy users,

The developers of the PyDSTool dynamical systems software project have money to hire a Python programmer on a short-term, per-task basis as a technical consultant. The work can be done remotely and will be paid after the completion of project milestones. The work must be completed by July, when the current funds expire.

Prospective consultants could be professionals or students, and will have proven experience and interest in working with NumPy/SciPy, scientific computation in general, and interfacing Python with C and Fortran codes. Detailed work plan, schedule, and project specs are negotiable (if you are talented and experienced we would like your input). The rate of pay is commensurate with experience, and may be up to $45/hr or $1000 per project milestone (no fringe benefits), according to an agreed measure of satisfactory product performance. There is a strong possibility of longer term work depending on progress and funding availability.

PyDSTool (pydstool.sourceforge.net) is a multi-platform, open-source environment offering a range of library tools and utils for research in dynamical systems modeling for scientists and engineers. As a research project, it presently contains prototype code that we would like to improve and better integrate into our long-term vision and with other emerging (open-source) software tools. Depending on interest and experience, current projects might include:

* Conversion and pythonification of old Matlab code for model analysis
* Improved interface for legacy C and Fortran code (numerical integrators) via some combination of SWIG, Scons, automake
* Overhaul of support for symbolic processing (probably by an interface to SymPy)

For more details please contact Dr. Rob Clewley (rclewley) at (@) the Department of Mathematics, Georgia State University (gsu.edu).

--
Robert H. Clewley, Ph.D.
Assistant Professor
Department of Mathematics and Statistics
Georgia State University
720 COE, 30 Pryor St
Atlanta, GA 30303, USA
tel: 404-413-6420 fax: 404-651-2246
http://www.mathstat.gsu.edu/~matrhc
http://brainsbehavior.gsu.edu/
Re: [Numpy-discussion] PCA - Principal Component Analysis
IMO the Modular toolkit for Data Processing (MDP) has a fairly good and straightforward PCA implementation, among other good tools: mdp-toolkit.sourceforge.net/

I have no idea what apt-get is, though, so I don't know if this will be helpful or not!

-Rob

On 21/06/07, Alex Torquato S. Carneiro [EMAIL PROTECTED] wrote:
> I'm doing some projects in Python (system GNU / Linux - Ubuntu 7.0)
> about image processing. I need an implementation of PCA, and would
> prefer a library installable via apt-get. Thanks.
>
> Alex.
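[Editorial note: if a library dependency is unwanted, a bare-bones PCA can also be written directly with numpy's SVD. A minimal sketch (this is not MDP's API; function and variable names are illustrative):]

```python
import numpy as np

def pca(data, n_components):
    """Project rows of `data` (samples x features) onto the top principal components."""
    centered = data - data.mean(axis=0)
    # rows of vt are the principal directions, ordered by singular value
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

rng = np.random.default_rng(1)
x = rng.standard_normal((200, 5))   # 200 samples, 5 features
proj = pca(x, 2)
print(proj.shape)                   # (200, 2)
```

The columns of the result are uncorrelated and ordered by decreasing variance, which is all many image-processing pipelines need from a PCA step.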