[Numpy-discussion] new OpenOpt Suite release 0.54

2014-06-15 Thread Dmitrey
I'm glad to inform you about the new OpenOpt Suite release 0.54:
    * Some changes for PyPy compatibility
    * FuncDesigner translator() can handle sparse derivatives from automatic differentiation
    * New interalg parameter rTol (relative tolerance, default 10^-8)
    * Bugfixes and improvements for categorical variables
    * Some more changes and improvements

Regards, D. 


[Numpy-discussion] [ANN] OpenOpt Suite release 0.53: Stochastic programming addon now is BSD-licensed

2014-03-15 Thread Dmitrey
Hi all,
I'm glad to inform you about the new OpenOpt Suite release 0.53:

    The stochastic programming addon is now available for free
    Some minor changes

-- 
Regards, D. 
http://openopt.org/Dmitrey 


[Numpy-discussion] [ANN] OpenOpt suite v 0.52

2013-12-15 Thread Dmitrey
Hi all, 
I'm glad to inform you about the new OpenOpt Suite release 0.52 (2013-Dec-15):
    Minor interalg speedup
    oofun expression
    MATLAB solvers fmincon and fsolve have been connected
    Several MATLAB ODE solvers have been connected
    New ODE solvers, parameters abstol and reltol
    New GLP solver: direct
    Some minor bugfixes and improvements

Regards, D.


Re: [Numpy-discussion] [ANN] MATLAB ODE solvers - now available in Python (Dmitrey)

2013-10-07 Thread Dmitrey
FYI, the scipy ODE solvers vode, dopri5 and dop853 have also been connected to OpenOpt,
possibly with automatic differentiation by FuncDesigner (although dopri5 and dop853
don't use derivatives).

-- 
Regards, D. http://openopt.org/Dmitrey 

--- Original message ---
From: David Goldsmith  d.l.goldsm...@gmail.com
Date: October 7, 2013, 07:16:33

On Sun, Oct 6, 2013 at 10:00 AM,  numpy-discussion-requ...@scipy.org  wrote: 
Message: 2 
Date: Sat, 05 Oct 2013 21:36:48 +0300 
From: Dmitrey  tm...@ukr.net  
Subject: Re: [Numpy-discussion] [ANN] MATLAB ODE solvers - now available in Python
To: Discussion of Numerical Python  numpy-discussion@scipy.org  
Cc: numpy-discussion@scipy.org 
Message-ID:  1380997576.559804301.aoyna...@frv43.ukr.net  
Content-Type: text/plain; charset=utf-8 
  Seems like using the MATLAB solvers with MCR requires my wrappers, contained in
several files, to be compiled with the MATLAB Compiler first. I have no MATLAB
license, so I may have problems if I do that and distribute it with the OpenOpt
suite code; also, binary files are incompatible with the BSD license.

Darn, knew it was too good to be true.
On the other hand, IIRC slightly older MATLAB versions (I don't think the
difference is essential) have more liberal licenses.
As for MATLAB solver examples, I have already mentioned them on the mailing list;
you can see them at http://openopt.org/ODE (just replace the solver name
scipy_lsoda with ode23s or any other), http://openopt.org/NLP and
http://openopt.org/SNLE
Oooops, so sorry. :-o  
DG  
-- 
Regards, D. http://openopt.org/Dmitrey 


Re: [Numpy-discussion] [ANN] MATLAB ODE solvers - now available in Python

2013-10-05 Thread Dmitrey
It requires MATLAB or the MATLAB Component Runtime
(http://www.mathworks.com/products/compiler/mcr/).
I'm not a regular subscriber of this mailing list, so you'd better ask on the
OpenOpt forum.

-- 
Regards, D. http://openopt.org/Dmitrey 
--- Original message ---
From: Eric Carlson  ecarl...@eng.ua.edu
Date: October 5, 2013, 01:19:28

Hello,
Does this require a MATLAB install, or are these equivalent routines?

Thanks,
Eric



Re: [Numpy-discussion] [ANN] MATLAB ODE solvers - now available in Python

2013-10-05 Thread Dmitrey
  

--- Original message ---
From: David Goldsmith  d.l.goldsm...@gmail.com
Date: October 5, 2013, 20:15:38

MCR stands for MATLAB Compiler Runtime and if that's all it requires, that's 
great, 'cause that's free.  Look forward to giving this a try; does the 
distribution come w/ examples? 


Seems like using the MATLAB solvers with MCR requires my wrappers, contained in
several files, to be compiled with the MATLAB Compiler first. I have no MATLAB
license, so I may have problems if I do that and distribute it with the OpenOpt
suite code; also, binary files are incompatible with the BSD license.
On the other hand, IIRC slightly older MATLAB versions (I don't think the
difference is essential) have more liberal licenses.
As for MATLAB solver examples, I have already mentioned them on the mailing list;
you can see them at http://openopt.org/ODE (just replace the solver name
scipy_lsoda with ode23s or any other), http://openopt.org/NLP and
http://openopt.org/SNLE

-- 
Regards, D. http://openopt.org/Dmitrey 


[Numpy-discussion] [ANN] MATLAB ODE solvers - now available in Python

2013-10-04 Thread Dmitrey
Several MATLAB ODE (dy/dt = f(y,t)) solvers (ode15s, ode23, ode113, ode23t,
ode23tb, ode45, ode23s) have been connected to the free OpenOpt Suite package
(possibly with FuncDesigner automatic differentiation), in addition to
scipy_lsoda (scipy.integrate.odeint); see the example.
Currently only the reltol parameter is available; future plans may include abstol,
Python 3 and PyPy support, the solver ode15i for solving f(dy/dt, y, t) = 0, and the
possibility of using a single MATLAB session for several ODE problems. Sparse matrix
handling is implemented for fmincon and fsolve but not for the ODE solvers yet.
-- 
Regards, D. http://openopt.org/Dmitrey 


[Numpy-discussion] MATLAB fsolve now available in Python

2013-10-02 Thread Dmitrey
Hi all,
A new solver for systems of nonlinear equations (SNLE) has been connected to
the free Python framework OpenOpt: fsolve from the MATLAB Optimization Toolbox;
it was uploaded to PyPI in v. 0.5112.

As with fmincon, it is currently available for Python 2 only.
Unlike scipy.optimize.fsolve, it can handle sparse derivatives, either user-supplied
or obtained from FuncDesigner automatic differentiation.
See the example with 15000 equations.
To keep the discussion in a single place, please use the OpenOpt forum thread.
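For readers unfamiliar with the interface, here is a minimal sketch of the usual OpenOpt/FuncDesigner SNLE pattern (not the 15000-equation example from the post; the registered solver name 'matlab_fsolve' and the result-access field are assumptions, check the OpenOpt docs for the exact names):

from FuncDesigner import oovar
from openopt import SNLE

# two unknowns and two equations: x + y = 3, x*y = 2
x, y = oovar('x'), oovar('y')
equations = (x + y - 3, x * y - 2)
startPoint = {x: 0.5, y: 0.5}

p = SNLE(equations, startPoint)
r = p.solve('matlab_fsolve')   # solver name is a placeholder; another
                               # registered SNLE solver could be used instead
print(r.xf)                    # r.xf is the usual OpenOpt result field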
Regards, D. 


[Numpy-discussion] [ANN] MATLAB fmincon now available in Python2

2013-09-30 Thread Dmitrey
Hi all,
The current state of Python-to-MATLAB connection software doesn't allow passing
function handles; however, a workaround has been implemented via some tricks,
so the MATLAB function fmincon is now available in the Python-written OpenOpt and
FuncDesigner frameworks (with the possibility of automatic differentiation;
example).
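By way of illustration, a hedged sketch of the usual OpenOpt calling pattern into which such a solver plugs; the registered name of the fmincon connector is not given in the post, so the comment below marks it as a placeholder, and 'ralg' is OpenOpt's own NLP solver:

from numpy import ones
from openopt import NLP

# minimize a simple smooth objective from a starting point of ten ones
p = NLP(lambda x: (x ** 2).sum(), ones(10))
r = p.solve('ralg')    # substitute the MATLAB fmincon connector's name here
print(r.xf, r.ff)      # final point and objective value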

Future plans include MATLAB fsolve, ode23, ode45 (unlike scipy fsolve and ode,
they can handle sparse matrices), fgoalattain, and maybe the Global Optimization
Toolbox solvers.

I intend to post this message to several forums, so to keep the discussion in a
single place use the OpenOpt forum thread
http://forum.openopt.org/viewtopic.php?id=769

Regards, D. 


[Numpy-discussion] [ANN] OpenOpt suite v 0.51

2013-09-15 Thread Dmitrey
Hi all, 
the new OpenOpt suite v 0.51 has been released:
    * Some improvements for FuncDesigner automatic differentiation and QP
    * FuncDesigner now can model sparse (MI)(QC)QP
    * Octave QP solver has been connected
    * MATLAB solvers linprog (LP), quadprog (QP), lsqlin (LLSP), bintprog (MILP)
    * New NLP solver: knitro
    * Some elements of 2nd-order interval analysis, mostly for interalg
    * Some interalg improvements
    * interalg can directly handle (MI)LP and (possibly nonconvex) (MI)(QC)QP
    * New classes: knapsack problem (KSP), bin packing problem (BPP), dominating set problem (DSP)
    * FuncDesigner can model SOCP
    * SpaceFuncs has been adjusted for recent versions of Python and NumPy
Visit http://openopt.org for more details.
Regards, D. 


[Numpy-discussion] numpy bug with negative int pow

2013-08-13 Thread Dmitrey
Python 3.3.1 (default, Apr 17 2013, 22:32:14)
[GCC 4.7.3] on linux

>>> import numpy
>>> numpy.__version__
'1.8.0.dev-d62f11d'

>>> numpy.array((1,2,3)) / 2
array([ 0.5,  1. ,  1.5])

# ok: but since division of integer arrays has been converted to float,
# pow with a negative exponent is expected to behave the same way, and it doesn't:

>>> numpy.array((1,2,3)) ** -1
array([1, 0, 0], dtype=int32)

>>> numpy.array((1,2,3)) ** -2
array([1, 0, 0], dtype=int32)

>>> 3**-1
0.3333333333333333
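A possible workaround until the promotion rule changes (a sketch, not an officially recommended fix): cast the array to a float dtype, or use a float exponent, so the result is not truncated back to integers.

import numpy as np

a = np.array((1, 2, 3))
print(a.astype(float) ** -1)   # [ 1.   0.5  0.33333333]
print(a ** -1.0)               # a float exponent also forces a float result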

D. 



[Numpy-discussion] new free software for knapsack problem

2013-07-12 Thread Dmitrey
Hi all,
FYI, new free software for the knapsack problem
(http://en.wikipedia.org/wiki/Knapsack_problem) has been made available (written in
Python); it can solve possibly constrained, possibly (with interalg) nonlinear and
multiobjective problems with specifiable accuracy. Along with interalg, many MILP
solvers can be used.
See http://openopt.org/KSP for details.
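For readers new to the problem class, here is a minimal dynamic-programming sketch of the classic 0/1 knapsack (plain Python, unrelated to the OpenOpt KSP API; it only illustrates what is being solved):

def knapsack(values, weights, capacity):
    # best[c] = maximum total value achievable with capacity c
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacities downwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220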
Regards, Dmitrey.


[Numpy-discussion] [ANN] Using some MATLAB optimization solvers from Python (OpenOpt/FuncDesigner)

2013-06-25 Thread Dmitrey



Hi all,






FYI, some MATLAB solvers can now be used from OpenOpt or FuncDesigner:

* LP: linprog
* QP: quadprog
* LLSP: lsqlin
* MILP: bintprog

Sparsity handling is supported.






You should have:
* MATLAB (or the MATLAB Component Runtime)
* mlabwrap

Unfortunately, it will hardly work out of the box; you have to adjust
some paths and some environment variables.
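Once the paths are set up, a quick sanity check that mlabwrap can reach a MATLAB/MCR session might look like the sketch below; mlabwrap proxies attribute access to MATLAB functions, and the exact shape of the returned value is an assumption (MATLAB results come back as 2-D arrays):

from mlabwrap import mlab
import numpy as np

x = np.array([3.0, 1.0, 2.0])
print(mlab.sort(x))    # expect something like [[ 1.  2.  3.]] from MATLAB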






As for nonlinear solvers, e.g. fmincon, they could probably be connected
by involving C MEX files, but that is not possible with the current state of
mlabwrap yet.

Read the MATLAB entry for details.






Regards, D.




[Numpy-discussion] new OpenOpt Suite release 0.50

2013-06-15 Thread Dmitrey



Hi all,



I'm glad to inform you about the new OpenOpt Suite release 0.50
(2013-June-15):






* interalg (solver with specifiable accuracy) now works many times
(sometimes orders of magnitude) faster on (possibly multidimensional) integration
problems (IP) and on some optimization problems
* Added modeling of dense (MI)(QC)QP in FuncDesigner (alpha version; rendering
may still work slowly)
* Bugfix for the cplex wrapper
* Some improvements for FuncDesigner interval analysis (and thus
interalg)
* Added FuncDesigner interval analysis for tan in the range (-pi/2, pi/2)
* Some other bugfixes and improvements
* The (proprietary) FuncDesigner stochastic addon is now available as a
standalone pyc file, and is now available for Python 3 as well






Regards, Dmitrey.




Re: [Numpy-discussion] OpenOpt Suite release 0.45

2013-04-10 Thread Dmitrey



 --- Original message ---
From: Robert Kern robert.k...@gmail.com
Date: April 9, 2013, 14:29:43

On Tue, Apr 9, 2013 at 4:15 PM, Dmitrey   tm...@ukr.net   wrote:


 --- Original message ---
 From: Robert Kern   robert.k...@gmail.com
 Date: March 16, 2013, 22:15:07

 On Sat, Mar 16, 2013 at 6:19 PM, Dmitrey   tm...@ukr.net   wrote:


 --- Original message ---
 From: Robert Kern   robert.k...@gmail.com
 Date: March 16, 2013, 19:54:51

 On Sat, Mar 16, 2013 at 10:39 AM, Matthieu Brucher
   matthieu.bruc...@gmail.com   wrote:
 Even if they have different hashes, they can be stored in the same
 underlying list before they are retrieved. Then, an actual comparison is
 done to check if the given key (i.e. object instance, not hash) is the
 same
 as one of the stored keys.

 Right. And the rule is that if two objects compare equal, then they
 must also hash equal. Unfortunately, it looks like `oofun` objects do
 not obey this property. oofun.__eq__() seems to return a Constraint
 rather than a bool, so oofun objects should simply not be used as
 dictionary keys.

 It is one of several base features FuncDesigner is built on, and it is used
 extremely often and widely; otherwise the whole FuncDesigner would work
 incorrectly, while it is used intensively and solves many problems better than
 its competitors.

 I understand. It just means that you can't use oofun objects as dictionary
 keys. Adding a __hash__() method is not enough to make that work.

 No, it just means I had mapped, have mapped, map and will map oofun objects
 as Python dict keys.

Well, it's your software. You are free to make it as buggy as you wish, I guess.

Yes, and that's why each time I get a bug report I immediately start
working on it, so usually I have zero open bugs, as now. That somewhat
differs from your bug tracker, which has tens of open bugs, about half
of them hanging for years (also, half of them are marked as high or
highest priority). But it's definitely your right to keep it as buggy
as you wish, as well!
D.


 --
Robert Kern


Re: [Numpy-discussion] OpenOpt Suite release 0.45

2013-04-10 Thread Dmitrey



--- Original message ---
From: Alan G Isaac alan.is...@gmail.com
Date: April 10, 2013, 15:12:07

On 4/10/2013 3:31 AM, Robert Kern wrote:
 You cannot use objects that do not have a valid __eq__() (as in,
 returns boolean True if and only if they are to be considered
 equivalent for the purpose of dictionary lookup, otherwise returns
 False) as dictionary keys. Your oofun object still violates this
 principle. As dictionary keys, you want them to use their `id`
 attributes to distinguish them, but their __eq__() method still just
 returns another oofun with the default object.__nonzero__()
 implementation. This means that bool(some_oofun == other_oofun) is
 always True regardless of the `id` attributes. You have been
 unfortunate enough to not run into cases where this causes a problem
 yet, but the bug is still there, lurking, waiting for a chance hash
 collision to silently give you wrong results. That is the worst kind
 of bug.
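The failure mode described above can be reproduced with a few lines of plain Python; the sketch below is a minimal stand-in, not FuncDesigner's actual oofun class:

class Key:
    # stand-in for an object whose __eq__ returns a truthy non-bool object,
    # the way oofun.__eq__ returns a Constraint
    def __init__(self, ident):
        self.ident = ident
    def __hash__(self):
        return 0            # force a hash collision between distinct keys
    def __eq__(self, other):
        return object()     # always truthy, never a real True/False

a, b = Key(1), Key(2)
d = {a: 'value for a'}
d[b] = 'value for b'   # collides with a; bool(a == b) is True, so the dict
                       # silently treats b as the same key and overwrites
print(len(d))          # 1, not 2
print(d[a])            # 'value for b' -- the wrong result

With a genuine hash collision, the always-truthy comparison makes the dict silently merge two distinct keys, which is exactly the kind of silent wrong result described in the quoted message.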



Hi Dmitrey,

Robert and Sebastien have taken their time to carefully explain
to you why your design is flawed.  Your response has been only
that you rely on this design flaw and it has not bitten you yet.

It had bitten me a few times until I understood the bug's source, but as I
had mentioned, I have fixed all those parts of the code.


 I trust you can see that this is truly not a response.

The right response is to explore how you can refactor to
eliminate this lurking bug, or to prove that it can *never*
bite due to another design feature.  You have done neither,
and the second looks impossible.  So you have work to do.  


 You say that you *must* use oofuns as dict keys.  This is probably
false, but you clearly want to retain this aspect of your design.
But this choice has an implication for the design of oofuns,
as carefully explained in this thread.  So you will have to
change the design, even though that may prove painful.

Refactoring is nearly impossible; the user API and thousands of lines of the
whole FuncDesigner kernel rely heavily on oofuns as dict keys. Also, I
don't see any alternative that is as convenient and fast as the approach
involved.
As for new features, I just keep this in mind while implementing them, and
now it's quite simple.

 No smaller step is adequate to the quality of software you
aspire to.

One last thing.  When someone like Robert or Sebastien take their
time to explain a problem to you, the right response is thank you,
even if their news is unwelcome.  Don't shoot the messenger.

I understand your opinion, but I'm not the kind of person who gives thanks for
responses like "Well, it's your software. You are free to make it as
buggy as you wish" (Robert has apologised, though).
Also, I haven't thanked Sebastien because I was AFK.
Thanks for all who participated in the thread.
D.


Re: [Numpy-discussion] OpenOpt Suite release 0.45

2013-04-09 Thread Dmitrey



--- Original message ---
From: Robert Kern robert.k...@gmail.com
Date: March 16, 2013, 22:15:07

On Sat, Mar 16, 2013 at 6:19 PM, Dmitrey   tm...@ukr.net   wrote:


 --- Original message ---
 From: Robert Kern   robert.k...@gmail.com
 Date: March 16, 2013, 19:54:51

 On Sat, Mar 16, 2013 at 10:39 AM, Matthieu Brucher
   matthieu.bruc...@gmail.com   wrote:
 Even if they have different hashes, they can be stored in the same
 underlying list before they are retrieved. Then, an actual comparison is
 done to check if the given key (i.e. object instance, not hash) is the
 same
 as one of the stored keys.

 Right. And the rule is that if two objects compare equal, then they
 must also hash equal. Unfortunately, it looks like `oofun` objects do
 not obey this property. oofun.__eq__() seems to return a Constraint
 rather than a bool, so oofun objects should simply not be used as
 dictionary keys.

 It is one of several base features FuncDesigner is built on, and it is used
 extremely often and widely; otherwise the whole FuncDesigner would work
 incorrectly, while it is used intensively and solves many problems better than
 its competitors.

I understand. It just means that you can't use oofun objects as dictionary
keys. Adding a __hash__() method is not enough to make that work.

No, it just means I had mapped, have mapped, map and will map oofun
objects as Python dict keys.
As for the bug, I have found and fixed its source (I used some info from a
sorted list of free variables and some other info from a non-sorted dict
of oofun sizes).
D.


Re: [Numpy-discussion] OpenOpt Suite release 0.45

2013-03-16 Thread Dmitrey



--- Original message ---
From: Alan G Isaac alan.is...@gmail.com
Date: March 15, 2013, 22:54:21

On 3/15/2013 3:34 PM, Dmitrey wrote:
 the suspected bugs are not documented yet


I'm going to guess that the state of the F_i changes
when you use them as keys (i.e., when you call __le__).

no, their state doesn't change for operations like __le__. AFAIK
searching a Python dict doesn't call __le__ on the key objects at all; it
uses the .__hash__() method, and the latter returns fixed integer
numbers assigned to the objects earlier (at least in my case).


 It is very hard to imagine that this is a Python or NumPy bug.

Cheers,
Alan



Re: [Numpy-discussion] OpenOpt Suite release 0.45

2013-03-16 Thread Dmitrey



--- Original message ---
From: Matthieu Brucher matthieu.bruc...@gmail.com
Date: March 16, 2013, 11:33:39

Hi,

Different objects can have the same hash, so it compares them to find the
actual correct object.
Usually when you store something in a dict and later you can't find
it anymore, it is because the internal state changed and the hash
is not the same anymore.


my objects (oofuns) definitely have different __hash__() results; they are
just the integers 1, 2, 3, etc. assigned to the oofuns (stored in the oofun._id
field) when they are created.

D.



Matthieu


2013/3/16 Dmitrey tm...@ukr.net



--- Original message ---
From: Alan G Isaac alan.is...@gmail.com
Date: March 15, 2013, 22:54:21

On 3/15/2013 3:34 PM, Dmitrey wrote:
 the suspected bugs are not documented yet


I'm going to guess that the state of the F_i changes
when you use them as keys (i.e., when you call __le__).

no, their state doesn't change for operations like __le__. AFAIK
searching a Python dict doesn't call __le__ on the key objects at
all; it uses the .__hash__() method, and the latter returns
fixed integer numbers assigned to the objects earlier (at least
in my case).


 It is very hard to imagine that this is a Python or NumPy bug.

Cheers,
Alan





--
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/



Re: [Numpy-discussion] OpenOpt Suite release 0.45

2013-03-16 Thread Dmitrey



--- Original message ---
From: Matthieu Brucher matthieu.bruc...@gmail.com
Date: March 16, 2013, 12:39:07

Even if they have different hashes, they can be stored in the same
underlying list before they are retrieved. Then, an actual comparison
is done to check if the given key (i.e. object instance, not hash) is
the same as one of the stored keys.



but, as I have already mentioned, comparison of oofuns via __le__,
__eq__, etc. doesn't change their inner state (although the methods can create
additional oofuns).
I have checked via the debugger: my methods __le__, __eq__, __lt__, __gt__,
__ge__ are not called from the buggy place in the code, only __hash__ is
called from there. Python could also check key object equivalence via id(),
but I don't see any possible bug source from using id().
D.




2013/3/16 Dmitrey tm...@ukr.net



--- Original message ---
From: Matthieu Brucher matthieu.bruc...@gmail.com
Date: March 16, 2013, 11:33:39

Hi,

Different objects can have the same hash, so it compares them to
find the actual correct object.
Usually when you store something in a dict and later you
can't find it anymore, it is because the internal state changed
and the hash is not the same anymore.


my objects (oofuns) definitely have different __hash__() results;
they are just the integers 1, 2, 3, etc. assigned to the oofuns (stored in
the oofun._id field) when they are created.

D.



Matthieu


2013/3/16 Dmitrey tm...@ukr.net



--- Original message ---
From: Alan G Isaac alan.is...@gmail.com
Date: March 15, 2013, 22:54:21

On 3/15/2013 3:34 PM, Dmitrey wrote:
 the suspected bugs are not documented yet


I'm going to guess that the state of the F_i changes
when you use them as keys (i.e., when you call __le__).

no, their state doesn't change for operations like __le__.
AFAIK searching a Python dict doesn't call __le__ on the
key objects at all; it uses the .__hash__() method,
and the latter returns fixed integer numbers assigned to the
objects earlier (at least in my case).


 It is very hard to imagine that this is a Python or NumPy bug.

Cheers,
Alan





--
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/





--
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/


Re: [Numpy-discussion] OpenOpt Suite release 0.45

2013-03-16 Thread Dmitrey



--- Original message ---
From: Robert Kern robert.k...@gmail.com
Date: March 16, 2013, 19:54:51

On Sat, Mar 16, 2013 at 10:39 AM, Matthieu Brucher
  matthieu.bruc...@gmail.com   wrote:
 Even if they have different hashes, they can be stored in the same
 underlying list before they are retrieved. Then, an actual comparison is
 done to check if the given key (i.e. object instance, not hash) is the same
 as one of the stored keys.

Right. And the rule is that if two objects compare equal, then they
must also hash equal. Unfortunately, it looks like `oofun` objects do
not obey this property. oofun.__eq__() seems to return a Constraint
rather than a bool, so oofun objects should simply not be used as
dictionary keys. 

It is one of several base features FuncDesigner is built on, and it is used
extremely often and widely; otherwise the whole FuncDesigner would work
incorrectly, while it is used intensively and solves many problems better than
its competitors.


That's quite possibly the source of the bug. Or at
least, that's a bug that needs to get fixed first before attempting to
debug anything else or attribute bugs to Python or numpy. Also, the
lack of a bool-returning __eq__() will prevent proper sorting, which
also seems to be used in the code snippet that Dmitrey showed.

as I have already mentioned, I verified via the debugger that my __eq__,
__le__, etc. are not invoked from the buggy place in the code; only
__hash__ is invoked from there.


 --
Robert Kern


[Numpy-discussion] OpenOpt Suite release 0.45

2013-03-15 Thread Dmitrey



Hi all,



I'm glad to inform you about the new OpenOpt Suite release 0.45
(2013-March-15):
* Essential improvements for FuncDesigner interval analysis (thus affecting
interalg)
* Temporary workaround for a serious bug in the FuncDesigner automatic
differentiation kernel due to a bug in some versions of Python or NumPy;
may affect optimization problems, including (MI)LP, (MI)NLP, TSP etc.
* Some other minor bugfixes and improvements






---



Regards, D.

 http://openopt.org/Dmitrey


Re: [Numpy-discussion] OpenOpt Suite release 0.45

2013-03-15 Thread Dmitrey



--- Original message ---

From: Alan G Isaac alan.is...@gmail.com
Date: March 15, 2013, 20:38:38

On 3/15/2013 9:21 AM, Dmitrey wrote:
 Temporary workaround for a serious bug in the FuncDesigner automatic
 differentiation kernel due to a bug in some versions of Python or NumPy,


Are the suspected bugs documented somewhere?

the suspected bugs are not documented yet; I guess they will be fixed in
future versions of Python or numpy.
The bug is hard to locate and isolate; it looks like this:

derivative_items = list(pointDerivative.items())

# temporary workaround for a bug in Python or numpy
derivative_items.sort(key=lambda elem: elem[0])
##

for key, val in derivative_items:
    indexes = oovarsIndDict[key]

    # this line is not reached in the buggy case involved
    if not involveSparse and isspmatrix(val): val = val.A

    if r.ndim == 1:
        r[indexes[0]:indexes[1]] = val.flatten() if type(val) == ndarray else val
    else:
        # this line is not reached in the buggy case involved
        r[:, indexes[0]:indexes[1]] = val if val.shape == r.shape else \
            val.reshape((funcLen, prod(val.shape)/funcLen))

So, pointDerivative is a Python dict of pairs (F_i, N_i), where the F_i are
hashable objects; even in the case when the N_i are ordinary scalars
(they can be numpy arrays or scipy sparse matrices), the results of this code
differ depending on whether derivative_items.sort() was performed or not; the
total number of nonzero elements is the same in both cases. oovarsIndDict is a
dict of pairs (F_i, (n_start_i, n_end_i)), and in the case where the N_i are all
scalars, for all i n_end_i = n_start_i - 1.
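For context, one reason an explicit sort can change the outcome at all: in the Python versions current at the time (before 3.7), dict iteration order was arbitrary, so any code whose result depends on the order of .items() has to sort explicitly, e.g.:

d = {'b': 2, 'a': 1, 'c': 3}
items = list(d.items())                      # order not guaranteed pre-3.7
items.sort(key=lambda elem: elem[0])         # deterministic order by key
print(items)                                 # [('a', 1), ('b', 2), ('c', 3)]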


 Alan
PS The word 'banausic' is very rare in English.
Perhaps you meant 'unsophisticated'?


Google Translate tells me 'banausic' is a more appropriate translation than
'unsophisticated' for the sense I meant (those frameworks are aimed at
modelling only numerical optimization problems, while FuncDesigner is
suitable for modelling systems of linear and nonlinear equations, ordinary
differential equations, eigenvalue problems, interval analysis and much
more).
D.


[Numpy-discussion] [ANN] Stochastic programming and optimization addon for FuncDesigner v. 0.421

2012-11-23 Thread Dmitrey



hi all,
I'm glad to inform you that the stochastic programming and optimization addon
for FuncDesigner v. 0.421 has been released.

Now you can use gradient-based solvers for numerical optimization, such
as ALGENCAN, IPOPT, ralg, gsubg etc. Usually they work faster than
derivative-free (such as scipy_cobyla, BOBYQA) or global (GLP) solvers;
e.g. on this example the ALGENCAN time elapsed is less than 1 second while
scipy_cobyla spends ~20 sec. However, pay attention that having the function
P() in your problem may bring nonconvexity or some other issues to the
solver's optimization trajectory, so sometimes you'll have to use
derivative-free or GLP solvers (e.g. de) instead.

FuncDesigner is free (BSD license), cross-platform software written in Python,
while its stochastic programming and optimization addon, written by the same
authors, is free for small-scale problems with educational or research
purposes only.

For more details visit our website http://openopt.org

-
Regards, D.
http://openopt.org/Dmitrey


[Numpy-discussion] [ANN] OpenOpt Suite release 0.42

2012-09-15 Thread Dmitrey



Hi all,


I'm glad to inform you about the new OpenOpt Suite release 0.42
(2012-Sept-15). Main changes:


*  Some improvements for the solver interalg, including handling of
categorical variables
*  Some parameters for the solver gsubg
*  Speedup of the objective function for de and pswarm on FuncDesigner models
*  New global (GLP) solver: asa (adaptive simulated annealing)
*  Some new classes for network problems: TSP (traveling salesman
problem), STAB (maximum graph stable set), MCP (maximum clique problem)
*  Improvements for FD XOR (and now it can handle many inputs)
*  Solver de has a parameter seed; also, it now works with PyPy
*  Function sign is now available in FuncDesigner
*  FuncDesigner interval analysis (and thus solver interalg) now can
handle non-monotone splines of 1st order
*  FuncDesigner now can handle the parameter fixedVars as a Python dict
*  Now scipy InterpolatedUnivariateSpline is used in the FuncDesigner
interpolator() instead of UnivariateSpline. This creates a backward
incompatibility: you can no longer pass the smoothing parameter (s) to
interpolator() (see the short scipy illustration after this list).
*  SpaceFuncs: add Point weight, Disk, Ball and method contains(), a bugfix
for importing Sphere, some new examples
*  Some improvements (essential speedup, new parameter interpolate for
P()) for our (currently commercial) FuncDesigner Stochastic Programming
addon
*  Some bugfixes
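The scipy side of the interpolator() change, for reference (plain scipy, independent of FuncDesigner): UnivariateSpline accepts a smoothing factor s, while InterpolatedUnivariateSpline always interpolates the data exactly and takes no s argument.

import numpy as np
from scipy.interpolate import UnivariateSpline, InterpolatedUnivariateSpline

x = np.linspace(0, 2 * np.pi, 20)
y = np.sin(x) + 0.05 * np.random.randn(20)

smoothing = UnivariateSpline(x, y, s=0.5)             # smoothing spline, s allowed
interpolating = InterpolatedUnivariateSpline(x, y)    # exact interpolation, no s
print(smoothing(1.0), interpolating(1.0))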


On our website (http://openopt.org) you can vote for the most required
OpenOpt Suite development direction(s) (the poll has been renewed; previous
results are here).


Regards, D.


Re: [Numpy-discussion] [SciPy-User] [ANN] New free tool for TSP solving

2012-09-03 Thread Dmitrey



--- Original message ---
From: Niki Spahiev niki.spah...@gmail.com
To: scipy-u...@scipy.org
Date: September 3, 2012, 13:57:49
Subject: Re: [SciPy-User] [ANN] New free tool for TSP solving






 New free tool for TSP solving is available (for downloading as well) -
 OpenOpt TSP class: TSP (traveling salesman problem).

Hello Dmitrey,

Can this tool solve ATSP problems?

Thanks,
Niki
 

Hi,
yes - asymmetric (see examples with networkx DiGraph), including
multigraphs (networkx MultiDiGraph) as well.








[Numpy-discussion] [ANN] New free tool for TSP solving

2012-09-02 Thread Dmitrey



Hi all, 


A new free tool for TSP solving is available (for downloading as well): the
OpenOpt TSP class (traveling salesman problem).

It is written in Python, takes NetworkX graphs as input (another
BSD-licensed Python library, the de-facto standard graph library for Python
programmers), can connect to MILP solvers like glpk, cplex and
lpsolve, and has a couple of other solvers: sa (simulated annealing, Python
code by John Montgomery) and interalg.

If someone is interested, I could implement something from (or beyond)
its future plans before the next OpenOpt stable release 0.41, which will be 2
weeks from now (Sept-15).


Regards, D.


[Numpy-discussion] routine for linear least norms problems with specifiable accuracy

2012-07-16 Thread Dmitrey



hi all,


I have written a routine to solve dense / sparse problems

min { alpha1*||A1 x - b1||_1 + alpha2*||A2 x - b2||^2 + beta1*||x||_1 + beta2*||x||^2 }

with specifiable accuracy fTol > 0: abs(f - f*) <= fTol (this parameter is
handled by the solvers gsubg and maybe amsg2p; the latter requires a known good
enough fOpt estimate). Constraints (box-bound, linear, quadratic) can also
be connected easily.

This problem is encountered very often in many areas, e.g. machine
learning and sparse approximation; see for example the elastic-net model at
http://scikit-learn.org/stable/modules/


First of all, the large-scale solver gsubg is recommended. Some hand-tuning of
its parameters can also essentially speed up the solver. You may also be
interested in other OpenOpt NSP solvers, ralg and amsg2p (although they are
medium-scale).

You can see the source of the routine and its demo result here.


You shouldn't expect gsubg to always solve your problem and report the
obtained result with the specified accuracy; for some very difficult, e.g.
extremely ill-conditioned, problems it may
* fail to solve the QP subproblem (the default QP solver is cvxopt; you may
use another one, e.g. the commercial or free-for-educational cplex)
* exit with another stop criterion, e.g. maxIter has been reached, or
maxShoots has been exceeded (usually the latter means you have reached the
solution, but that cannot be guaranteed in this case)

First of all, I created the routine to demonstrate gsubg's abilities; I
haven't decided yet whether to commit the routine to OpenOpt, with or
without a special class for this problem. In either case you can very
easily create problems like this one in FuncDesigner (without having to
write a routine for derivatives) and solve them with gsubg or another NSP
solver; however, IIRC FuncDesigner dot() doesn't work with sparse
matrices yet.
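For reference, the objective being minimized, written out in plain NumPy (a sketch of the problem statement only, not the author's routine or the FuncDesigner model):

import numpy as np

def composite_objective(x, A1, b1, A2, b2, alpha1, alpha2, beta1, beta2):
    # f(x) = alpha1*||A1 x - b1||_1 + alpha2*||A2 x - b2||_2^2
    #        + beta1*||x||_1 + beta2*||x||_2^2
    return (alpha1 * np.abs(np.dot(A1, x) - b1).sum()
            + alpha2 * ((np.dot(A2, x) - b2) ** 2).sum()
            + beta1 * np.abs(x).sum()
            + beta2 * (x ** 2).sum())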


Re: [Numpy-discussion] routine for linear least norms problems with specifiable accuracy

2012-07-16 Thread Dmitrey



gsubg uses N. Zhurbenko's (http://openopt.org/NikolayZhurbenko)
epsilon-subgradient method; ralg and amsg2p use other algorithms.


--- Original message ---
From: Henry Gomersall h...@cantab.net
To: Discussion of Numerical Python numpy-discussion@scipy.org
Date: July 16, 2012, 21:47:47
Subject: Re: [Numpy-discussion] routine for linear least norms problems with
specifiable accuracy





On Mon, 2012-07-16 at 20:35 +0300, Dmitrey wrote:
 I have written a routine to solve dense / sparse problems
 min {alpha1*||A1 x - b1||_1 + alpha2*||A2 x - b2||^2 + beta1 * ||x||_1
 + beta2 * ||x||^2}
 with specifiable accuracy fTol > 0: abs(f-f*) <= fTol (this parameter
 is handled by solvers gsubg and maybe amsg2p; the latter requires a known
 good enough fOpt estimate). Constraints (box-bound, linear,
 quadratic) can also be easily connected.
 
Interesting. What algorithm are you using?

Henry



[Numpy-discussion] New Python tool for searching maximum stable set of a graph

2012-07-14 Thread Dmitrey



Hi all,

In the OpenOpt software (BSD-licensed, http://openopt.org) we have
implemented a new class, STAB, for searching for a maximum stable set of a
graph.

networkx graphs are used as input arguments. Unlike networkx
maximum_independent_set(), we focus on searching for the exact solution (this
is an NP-hard problem).

interalg or OpenOpt MILP solvers are used; some GUI features and stop
criteria (e.g. maxTime, maxCPUTime, fEnough) can be used. Optional
arguments are includedNodes and excludedNodes: nodes that have to be
present/absent in the solution.


See http://openopt.org/STAB for details.


Future plans (though probably very long-term) include TSP and some
other graph problems.


-

Regards, Dmitrey.


[Numpy-discussion] [ANN] Stochastic programming and optimization addon for FuncDesigner

2012-07-07 Thread Dmitrey



hi all,


you may be interested in stochastic programming and optimization with the
free Python module FuncDesigner.

We have written a Stochastic addon for FuncDesigner, but (at least for
several years) it will be commercial (currently it's free for some
small-scale problems only and for noncommercial research / educational
purposes only). However, we will try to keep our prices several times
lower than our competitors'. Also, we will provide some discounts,
including region-based ones, and the first 15 customers will also get a
discount.

For further information, documentation, some examples etc. read more at
http://openopt.org/StochasticProgramming


Regards, D.


[Numpy-discussion] numpy bug with ndarray subclassing

2012-06-25 Thread Dmitrey



I will use a workaround, but I think you'd better fix the numpy bug:

from numpy import ndarray, float64, asanyarray, array

class asdf(ndarray):
    __array_priority__ = 10
    def __new__(self, vals1, vals2):
        obj = asanyarray(vals1).view(self)
        obj.vals2 = vals2
        return obj
    def __add__(self, other):
        print('add')
        assert not isinstance(other, asdf), 'unimplemented'
        return asdf(self.view(ndarray) + other, self.vals2)
    def __radd__(self, other):
        print('radd')
        assert not isinstance(other, asdf), 'unimplemented'
        return asdf(self.view(ndarray) + other, self.vals2)

a = asdf(array((1, 2, 3)), array((10, 20, 30)))
z = float64(1.0)

print(a.__array_priority__) # 10
print(z.__array_priority__) # -100.0

r2 = a + z
print(r2.vals2) # ok, prints 'add' and (10, 20, 30)

r1 = z + a
print(r1.vals2)
# doesn't print 'radd' (i.e. doesn't enter the asdf.__radd__ function at all)
# raises AttributeError:
# 'asdf' object has no attribute 'vals2'

tried in Python 2 + numpy 1.6.1 and Python 3 + numpy 1.7.0 dev


[Numpy-discussion] [ANN] OpenOpt / FuncDesigner release 0.39

2012-06-15 Thread Dmitrey



Hi all,


I'm glad to inform you about the new OpenOpt Suite release 0.39
(2012-June-15):
interalg: add categorical variables and general logical constraints, many
other improvements
Some improvements for automatic differentiation
DerApproximator and some OpenOpt / FuncDesigner functionality now works
with PyPy
New solver lsmr for dense / sparse LLSP
oovar constructors now can handle parameters lb and ub, e.g. a =
oovar('a', lb=-1, ub=[1,2,3]) (this oovar should have size 3) or x =
oovars(10, lb=-1, ub=1)
New FuncDesigner function hstack, similar syntax to numpy.hstack, e.g. f
= hstack((a,b,c,d))
Some bugfixes


I have made some progress toward solving linear DAE (differential algebraic
equations; example) and Stochastic Optimization (example) in FuncDesigner,
but this is too premature to be released yet; there is a 60-70% probability
it will be properly implemented in the next OpenOpt release.


On our website you can vote for the most required OpenOpt Suite development
direction(s).




Regards, D.

http://openopt.org/Dmitrey


[Numpy-discussion] Some numpy funcs for PyPy

2012-05-24 Thread Dmitrey



hi all,
maybe you're aware of numpypy, the numpy port for PyPy (pypy.org), a Python
implementation with dynamic (JIT) compilation.

Unfortunately, numpypy development is very slow due to strict quality
standards and some other issues, so for my purposes I have provided some
missing numpypy funcs, in particular

* atleast_1d, atleast_2d, hstack, vstack, cumsum, isscalar, asscalar,
asfarray, flatnonzero, tile, zeros_like, ones_like, empty_like,
where, searchsorted

* with an axis parameter: nan(arg)min, nan(arg)max, all, any

and have got some OpenOpt / FuncDesigner functionality working faster
than in CPython.

The file with these functions you can get here.

Also you may be interested in some info at http://openopt.org/PyPy
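To give a flavour of what such app-level (pure Python) replacements look like, here is one possible implementation of one of the listed helpers; this is only an illustration, not the author's file:

def flatnonzero(a):
    # indices of the nonzero elements of the flattened array; works with any
    # array object that provides ravel() and nonzero()
    return a.ravel().nonzero()[0]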
Regards, Dmitrey.


Re: [Numpy-discussion] Some numpy funcs for PyPy

2012-05-24 Thread Dmitrey





On your website you wrote:

 From my (Dmitrey) point of view numpypy development is
 very unfriendly for newcomers - PyPy developers say provide
 code, preferably in interpreter level instead of AppLevel,
 provide whole test coverage for all possible corner cases,
 provide hg diff for code, and then, maybe, it will be committed.
 Probably this is the reason why so insufficient number of
 developers work on numpypy.

I assume that is paraphrased with a little hyperbole, but it
isn't so different from numpy (other than using git), or many
other open source projects. 
 

Of course, many open-source projects do it like that, but in the case of
numpypy IMHO things are especially bad.






Unit tests are important, and
taking patches without them is risky.
 

Yes, but first, the things required from numpypy newcomers are TOO
complicated, and no guarantee is provided that the effort spent will
not be just a waste of time; second, the high quality standards are
especially cynical when compared with their own code quality, e.g.
numpypy.all(True) doesn't work yet, despite hanging in the bug tracker for a
long time; a[a > 0] = b[b > 0] works incorrectly, etc.
These are the reasons that forced me to write some missing funcs required for
my purposes and some bug workarounds (like the one for
numpypy.all and any).






 I've been subscribed to the pypy-dev list for a while, 
 

I had been subscribed IIRC for a couple of months






but I
don't recall seeing you posting there.
 

I had made some posts; see my pypy activity here






 Have you tried to submit
any of your work to PyPy yet?
 

yes: I had spent lots of time on concatenate() (pypy developers said
no one was working on it), and finally they committed code for this func
from another trunk. Similar things happened with some other code I proposed
for PyPy and all the days spent on it.






 Perhaps you should have
sent this message to pypy-dev instead?
 

I had explained my point of view to them on the mailing list and the IRC
channel; their answer was like "don't bother the horses, why are you in a
hurry? All will be done within several months", but I can see that it (porting
the whole of numpy) definitely won't be done in that term. IIRC during ~2 months
only ~10 new items were added to numpypy; also, lots of numpypy items, when
called, e.g. searchsorted, just raise "NotImplementedError: waiting for
interplevel routine", or don't work with high-dimensional arrays and/or
some other corner cases.

numpypy developers go (rather slowly) their own way, while I just propose a
temporary alternative until a proper PyPy numpy implementation exists.

regards, D.


[Numpy-discussion] [ANN] Optimization with categorical variables, disjunctive (and other logical) constraints

2012-04-24 Thread Dmitrey



hi all,
the free solver interalg for global nonlinear optimization with specifiable
accuracy now can handle categorical variables and disjunctive (and other
logical) constraints, thus making it possible to solve GDP, possibly in
multiobjective form.
There are ~2 months till the next OpenOpt release, but I guess someone may
find it useful for his purposes right now.
See here for more details.
Regards, D.


[Numpy-discussion] [ANN] new release 0.38 of OpenOpt, FuncDesigner, SpaceFuncs, DerApproximator

2012-03-15 Thread Dmitrey



Hi,
I'm glad to inform you about the new release 0.38 (2012-March-15):


OpenOpt:

interalg can handle discrete variables (see MINLP for examples)
interalg can handle multiobjective problems (MOP)
interalg can handle problems with parameters fixedVars/freeVars
Many interalg improvements and some bugfixes
Add another EIG solver: numpy.linalg.eig
New LLSP solver pymls with box bounds handling

FuncDesigner:

Some improvements for sum()
Add funcs tanh, arctanh, arcsinh, arccosh
Can solve EIG built from derivatives of several functions, obtained by
automatic differentiation by FuncDesigner

SpaceFuncs:

Add method point.symmetry(Point|Line|Plane)
Add method LineSegment.middle
Add method Point.rotate(Center, angle)

DerApproximator:

Minor changes


See http://openopt.org for more details.


Regards, D.


[Numpy-discussion] memory leak in numpy.take

2012-03-09 Thread Dmitrey



A memory leak was observed in numpy versions 1.5.1 and the latest git trunk:

from numpy import *

for i in range(10):
    if i % 100 == 0:
        print(i)
    a = empty(1, object)
    for j in range(1):
        a[j] = array(1)
    a = take(a, range(9000), out=a[:9000])


[Numpy-discussion] [ANN] new solver for multiobjective optimization problems

2012-02-10 Thread Dmitrey



hi,
I'm glad to inform you about a new Python solver for multiobjective
optimization (MOP).

Some changes committed to the solver interalg have made it capable of handling
global nonlinear constrained multiobjective problems (MOP); see the page
for more details.






Using interalg you can be 100% sure your result covers the whole Pareto front
according to the required tolerances on the objective functions.

Available features include real-time or final graphical output, the
possibility of involving parallel calculations, handling both continuous
and discrete variables, and export of results to xls files.






Regards, D.




[Numpy-discussion] [ANN] global constrained solver with discrete variables

2012-01-16 Thread Dmitrey



hi all,
I've added support for discrete variables to interalg, a free (BSD-licensed)
solver with specifiable accuracy; you can take a look at an example
here.

It is written in Python + NumPy, and I hope its speed will be
essentially increased when PyPy (Python with dynamic compilation) support
for NumPy is done (some parts of the code are not vectorized and still
use CPython loops). Also, NumPy funcs like vstack or append produce only a
copy of the data, and that also slows the solver down very much (for mature
problems).

Maybe some bugs are still present somewhere (the interalg code has already
become very long), but since it already works, you could be interested in
trying it right now.

Regards, D.


[Numpy-discussion] Ann: OpenOpt and FuncDesigner 0.37

2011-12-15 Thread Dmitrey



Hi all,
I'm glad to inform you about the new release 0.37 (2011-Dec-15) of our free
software:



OpenOpt (numerical optimization):



IPOPT initialization time gap (time till first iteration) for
FuncDesigner models has been decreased
Some improvements and bugfixes for interalg, especially for the "search all
SNLE solutions" mode (Systems of Non-Linear Equations)
Eigenvalue problems (EIG) (in both OpenOpt and FuncDesigner)
Equality constraints for GLP (global) solver de
Some changes for goldenSection ftol stop criterion
GUI func manage: now the button Enough works in Python 3, but Run/Pause
does not yet (probably something with threading, and it will be fixed in Python
itself instead)



FuncDesigner:
Major sparse automatic differentiation improvements for badly vectorized
or unvectorized problems with lots of constraints (except for box bounds);
some problems now work many times or orders of magnitude faster (of course not
faster than vectorized problems with an insufficient number of variable
arrays). It is recommended to retest your large-scale problems with useSparse =
'auto' | True | False



Two new methods for splines to check their quality: plot and residual
Solving ODE dy/dt = f(t) with specifiable accuracy by interalg
Speedup for solving 1-dimensional IP by  interalg



SpaceFuncs and DerApproximator:



Some code cleanup






You can follow OpenOpt development information via our recently created
Twitter and Facebook accounts; see http://openopt.org for details.






See also: FuturePlans, this release announcement in OpenOpt forum






Regards, D.




[Numpy-discussion] [ANN] Multifactor analysis tool for experiment planning

2011-10-24 Thread Dmitrey
Hi all,

a new OpenOpt feature is available: a multifactor analysis tool for
experiment planning (in physics, chemistry, biology etc). It is based
on the numerical optimization solver BOBYQA, released in 2009 by Michael
J.D. Powell, and has an easy and convenient GUI frontend, written in
Python + tkinter. Maybe other (alternative) engines will be available
in the future.

See its webpage for details.

Regards, Dmitrey.

   


[Numpy-discussion] [ANN] ODE dy/dt = f(t) solver with guaranteed speficiable accuracy

2011-09-26 Thread Dmitrey
hi all,

the free solver interalg from the OpenOpt framework (based on interval
analysis) can now solve ODE dy/dt = f(t) with guaranteed, specifiable
accuracy.

See the ODE webpage for more details; there is an example of
comparison with scipy.integrate.odeint, where the latter fails to solve a
problem.

Future plans include solving some general ODE systems dy/dt = f(y, t).

Regards, D.


[Numpy-discussion] [ANN] OpenOpt, FuncDesigner, DerApproximator, SpaceFuncs release 0.36

2011-09-15 Thread Dmitrey
 Hi all,
a new release of our free software (OpenOpt, FuncDesigner, DerApproximator,
SpaceFuncs) v. 0.36 is out:

OpenOpt:

 * Now the solver interalg can handle all types of constraints and
   integration problems
 * Some minor improvements and code cleanup

FuncDesigner:

 * Interval analysis now can involve min, max and 1-d monotone
   splines R -> R of 1st and 3rd order
 * Some bugfixes and improvements

SpaceFuncs:

 * Some minor changes

DerApproximator:

 * Some improvements for obtaining derivatives at points from R^n
   where the left or right derivative for a variable is absent,
   especially for stencil > 1

See http://openopt.org for more details.

Regards, D.


[Numpy-discussion] [ANN] Constrained optimization solver with guaranteed precision

2011-08-15 Thread Dmitrey
Hi all,
I'm glad to inform you that general constraints handling for interalg
(a free solver with guaranteed user-defined precision) is now available.
Although it is very premature and requires lots of improvements, it is
already capable of outperforming the commercial BARON (example:
http://openopt.org/interalg_bench#Test_4), and thus you could be
interested in trying it right now (the next OpenOpt release will be no
sooner than 1 month from now).

interalg can be especially more effective than BARON (and some other
competitors) on problems with a huge or absent Lipschitz constant, for
example on funcs like sqrt(x), log(x), 1/x, x**alpha with alpha < 1, when the
domain of x is something like [small_positive_value, another_value].

Let me also remind you that interalg can search for all solutions of
nonlinear equations / systems of them where local solvers like
scipy.optimize fsolve cannot find any, and compute single/multiple
integrals with guaranteed user-defined precision (the speed of integration
is intended to be enhanced in the future).
However, only FuncDesigner models are handled (read the interalg webpage
for more details).

Regards, D.


Re: [Numpy-discussion] [ANN] Constrained optimization solver with guaranteed precision

2011-08-15 Thread Dmitrey
Hi Andrea,
I believe benchmarks should be done the way Hans Mittelmann does them
(http://plato.asu.edu/bench.html), and of course the number of function
evaluations matters when slow Python code is tested against compiled code, but
my current work doesn't allow me to spend that much time on OpenOpt
development, let alone on auxiliary work such as benchmarking
(and doing it properly like that). Also, benchmarks of someone's own
software are usually not very trustworthy, especially on his own problems.

BTW, please don't reply to my posts in the scipy mailing lists; I use them
only to post announcements like this and can miss a reply.

Regards, D.

   --- Original message ---
   From: Andrea Gavana andrea.gav...@gmail.com
   To: Discussion of Numerical Python numpy-discussion@scipy.org
   Date: August 15, 2011, 23:01:05
   Subject: Re: [Numpy-discussion] [ANN] Constrained optimization solver
   with guaranteed precision



 Hi Dmitrey,
 
 2011/8/15 Dmitrey  tm...@ukr.net :
  Hi all,
  I'm glad to inform you that general constraints handling for interalg 
(free
  solver with guaranteed user-defined precision) now is available. Despite 
it
  is very premature and requires lots of improvements, it is already 
capable
  of outperforming commercial BARON (example:
   http://openopt.org/interalg_bench#Test_4 )  and thus you could 
be interested
  in trying it right now (next OpenOpt release will be no sooner than 1
  month).
 
  interalg can be especially more effective than BARON (and some other
  competitors) on problems with huge or absent Lipschitz constant, for 
example
  on funcs like sqrt(x), log(x), 1/x, x**alpha, alpha1, when domain of x 
is
  something like [small_positive_value, another_value].
 
  Let me also remember you that interalg can search for all solutions of
  nonlinear equations / systems of them where local solvers like
  scipy.optimize fsolve cannot find anyone, and search single/multiple
  integral with guaranteed user-defined precision (speed of integration is
  intended to be enhanced in future).
  However, only FuncDesigner models are handled (read interalg webpage for
  more details).
 
 Thank you for these new improvements. I am one of those who use OpenOpt
 in real life problems, and if I can advance a suggestion (for the
 second time), when you post a benchmark of various optimization
 methods, please do not consider elapsed time the only
 meaningful variable for measuring the success or failure of an algorithm.
 
 Some (most?) of real life problems require intensive and time
 consuming simulations for every *function evaluation*; the time spent
 by the solver itself doing its calculations simply disappears in front
 of the real process simulation. I know it because our simulations take
 between 2 and 48 hours to run, so what's 300 seconds more or less in
 the solver calculations? If you talk about synthetic problems (such as
 the ones defined by a formula), I can see your point. For everything
 else, I believe the number of function evaluations is a more direct
 way to assess the quality of an optimization algorithm.
 
 Just my 2c.
 
 Andrea.
 
   
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] bug with latest numpy git snapshot build with Python3

2011-08-11 Thread Dmitrey
 A bug in KUBUNTU 11.04 with the latest numpy git snapshot built with Python 3:

   >>> import numpy
   Traceback (most recent call last):
     File "<stdin>", line 1, in <module>
     File "/usr/local/lib/python3.2/dist-packages/numpy/__init__.py", line 137, in <module>
       from . import add_newdocs
     File "/usr/local/lib/python3.2/dist-packages/numpy/add_newdocs.py", line 9, in <module>
       from numpy.lib import add_newdoc
     File "/usr/local/lib/python3.2/dist-packages/numpy/lib/__init__.py", line 4, in <module>
       from .type_check import *
     File "/usr/local/lib/python3.2/dist-packages/numpy/lib/type_check.py", line 8, in <module>
       import numpy.core.numeric as _nx
     File "/usr/local/lib/python3.2/dist-packages/numpy/core/__init__.py", line 10, in <module>
       from .numeric import *
     File "/usr/local/lib/python3.2/dist-packages/numpy/core/numeric.py", line 27, in <module>
       import multiarray
   ImportError: No module named multiarray
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] [ANN] Numerical integration with guaranteed precision by interalg

2011-06-24 Thread Dmitrey
 Hi all,
   some ideas implemented in the solver interalg (INTERval ALGorithm),
   which already turn out to be more effective than its competitors in
   numerical optimization (benchmark), appear to be extremely effective
   in numerical integration with guaranteed precision.
   Here are some examples where interalg works perfectly while the
   scipy.integrate solvers fail to solve the problems and misreport the
   obtained residual:

 * 1-D (vs scipy.integrate quad)

  * 2-D (vs scipy.integrate dblquad)

  * 3-D (vs scipy.integrate tplquad)

 

   see http://openopt.org/IP for more details.
   Regards, D.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] [ANN] OpenOpt suite 0.34

2011-06-16 Thread Dmitrey
 Hi all,
   I'm glad to inform you about new quarterly release 0.34 of the OOSuite
   package software (OpenOpt, FuncDesigner, SpaceFuncs, DerApproximator)
   .

   Main changes:
   * Python 3 compatibility
   * Lots of improvements and speedup for interval calculations
   * Now interalg can obtain all solutions of a nonlinear equation (example)
   or systems of them (example) in the involved box lb_i <= x_i <= ub_i
   (bounds can be very large), possibly constrained (e.g. sin(x) +
   cos(y+x) < 0.5).
   * Many other improvements and speedup for interalg.

   See http://forum.openopt.org/viewtopic.php?id=425 for more details.

   Regards, D.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [SciPy-User] [ANN] Guaranteed solution of nonlinear equation(s)

2011-05-25 Thread Dmitrey
   --- Original message ---
   From: Yosef Meller yosef...@post.tau.ac.il
   To: scipy-u...@scipy.org
   Date: 25 May 2011, 08:54:16
   Subject: Re: [SciPy-User] [ANN] Guaranteed solution of nonlinear
   equation(s)



 On 24 May 2011 13:22:47, Dmitrey wrote:
   Hi all,
 I have made my free solver interalg ( http://openopt.org/interalg 
) be
 capable of solving nonlinear equations and systems of them. Unlike
 scipy optimize fsolve it doesn't matter which functions are involved -
 convex, nonconvex, multiextremum etc. Even some discontinuous funcs
 can be handled. If no solution exists, interalg determines it rather
 quickly.
  
 For more info see  http://forum.openopt.org/viewtopic.php?id=423  
   
 Interesting. Is there any description of the actual algorithm? I tried 
looking 
 for it in the link and the openopt site, but couldn't find it.
   

   The algorithm belongs to the family of interval methods. These, along
   with Lipschitz methods, are capable of doing the task (searching for an
   extremum with guaranteed precision), but they require extremely large amounts of
   computational time and memory and thus are very rarely used. Some ideas
   I created and programmed increased the speed by many orders of magnitude (see the
   benchmark vs Direct, intsolver and the commercial BARON); memory
   consumption also isn't very large.
   As for solving systems of equations, currently |f1| + |f2| + |f3| + ... +
   |f_k| is minimized (| . | means abs( . )). I know a better way of
   handling nonlinear systems with interalg, but it would take much time to
   write those enhancements (maybe some weeks or even more), and my current
   work doesn't allow me to spend that much time on interalg and other
   OpenOpt development.
   Regards, D.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] [ANN] Guaranteed solution of nonlinear equation(s)

2011-05-24 Thread Dmitrey
 Hi all,
   I have made my free solver interalg (http://openopt.org/interalg)
   capable of solving nonlinear equations and systems of them. Unlike with
   scipy.optimize fsolve, it doesn't matter which functions are involved -
   convex, nonconvex, multiextremum etc. Even some discontinuous functions
   can be handled. If no solution exists, interalg determines that rather
   quickly.

   For more info see http://forum.openopt.org/viewtopic.php?id=423

   Regards, D.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] bug with numpy nanargmax/nanargmin

2011-04-05 Thread Dmitrey
  >>> from numpy import *
  >>> nanargmax([nan, nan])
  nan # ok
  >>> nanargmax([nan, nan], 0)
  nan # ok
  >>> nanargmax([[1, nan], [1, nan]], 0)
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "/usr/local/lib/python2.6/site-packages/numpy/lib/function_base.py", line 1606, in nanargmax
      return _nanop(np.argmax, -np.inf, a, axis)
    File "/usr/local/lib/python2.6/site-packages/numpy/lib/function_base.py", line 1346, in _nanop
      res[mask_all_along_axis] = np.nan
  ValueError: cannot convert float NaN to integer
  >>> __version__
  '2.0.0.dev-1fe8136'
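
  [Editor's note: a possible workaround sketch (not the eventual numpy fix) is to
  call nanargmax only on columns that contain at least one non-NaN value:]

  import numpy as np

  a = np.array([[1.0, np.nan],
                [1.0, np.nan]])
  ok = ~np.isnan(a).all(axis=0)              # columns with at least one finite entry
  idx = np.zeros(a.shape[1], dtype=int)
  idx[ok] = np.nanargmax(a[:, ok], axis=0)   # nanargmax is well defined here
  print(ok, idx)                             # column 1 is all-NaN, so idx[1] stays a dummy 0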

   D.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] when numpy in Linux apt will be updated? It's still 1.3.0 with many bugs

2011-03-24 Thread Dmitrey
 hi,
   when will numpy in the Linux apt repositories be updated? It's still 1.3.0,
   with many bugs.

   I tried to install numpy from PyPI, where 1.5.1 seems to be present,
   but somehow it pulls in 1.3.0 instead:

   $ sudo easy_install numpy
   install_dir /usr/local/lib/python2.6/dist-packages/
   Searching for numpy
   Best match: numpy 1.3.0
   Adding numpy 1.3.0 to easy-install.pth file

   only after "aptitude remove python-numpy" does easy_install pick up
   version 1.5.1, but then it fails:

   $ sudo easy_install numpy
   
   Adding numpy 1.5.1 to easy-install.pth file
   Installing f2py script to /usr/local/bin

   Installed
   /usr/local/lib/python2.6/dist-packages/numpy-1.5.1-py2.6-linux-x86_64.egg
   Processing dependencies for numpy
   Finished processing dependencies for numpy
   /tmp/easy_install-QF6uJM/numpy-1.5.1/numpy/distutils/misc_util.py:251:
   RuntimeWarning: Parent module 'numpy.distutils' not found while
   handling absolute import
   Error in atexit._run_exitfuncs:
   Traceback (most recent call last):
     File "/usr/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
       func(*targs, **kargs)
     File "/tmp/easy_install-QF6uJM/numpy-1.5.1/numpy/distutils/misc_util.py", line 251, in clean_up_temporary_directory
   ImportError: No module named numpy.distutils
   Error in sys.exitfunc:
   Traceback (most recent call last):
     File "/usr/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
       func(*targs, **kargs)
     File "/tmp/easy_install-QF6uJM/numpy-1.5.1/numpy/distutils/misc_util.py", line 251, in clean_up_temporary_directory
   ImportError: No module named numpy.distutils

   I have Linux KUBUNTU 10.10

   D.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] should get rid of the annoying numpy STDERR output

2011-03-24 Thread Dmitrey
  >>> from numpy import inf, array
  >>> inf*0
  nan

  (ok)

  >>> array(inf) * 0.0
  StdErr: Warning: invalid value encountered in multiply
  nan

  My looped calculations produce this warning thousands of times, slowing the
  computations and making the text output completely unreadable.

>>> from numpy import __version__
>>> __version__
'2.0.0.dev-1fe8136'

   D.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] should get rid of the annoying numpy STDERR output

2011-03-24 Thread Dmitrey
 Hi

 2011/3/24 Dmitrey tm...@ukr.net

from numpy import inf, array
inf*0
   nan

   (ok)

array(inf) * 0.0
   StdErr: Warning: invalid value encountered in multiply
   nan

   My cycled calculations yields this thousands times slowing
   computations and making text output completely non-readable.

 Would old= seterr(invalid= 'ignore') be sufficient for you?


   yes, for me, but I'm not sure about all the users who use my software.
   Maybe it will hide some bugs in their objective functions and
   nonlinear constraints in numerical optimization and nonlinear equation
   systems.
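
   [Editor's note: for users who do want to silence the warning locally,
   np.errstate scopes the change so the global settings - and whatever bugs they
   might reveal - stay untouched. A minimal sketch:]

   import numpy as np

   a = np.array(np.inf)
   with np.errstate(invalid='ignore'):     # suppressed only inside this block
       print(a * 0.0)                      # nan, no "invalid value encountered" warning
   print(np.geterr()['invalid'])           # the global setting is unchanged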

   D.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] when numpy in Linux apt will be updated? It's still 1.3.0 with many bugs

2011-03-24 Thread Dmitrey
  Isn't [K]Ubuntu updated every 6 months?
 
 
 2011/3/24 Dmitrey 

 tm...@ukr.net :
  hi,
  when numpy in Linux apt will be updated? It's still 1.3.0 with many bugs
 
 There will always be bugs, but numpy 1.3 is a stable release - unless
 there is a bug that affects what you're doing right now?
 
 If you find a bug that prevents you from from your specific work,
 better report that bug,
 if you haven't already.
 
 
  I tried to install numpy from PYPI where 1.5.1 seesm to be present, but
  somehow it involves 1.3.0 instead:
 
  $ sudo easy_install numpy
  install_dir /usr/local/lib/python2.6/dist-packages/
  Searching for numpy
  Best match: numpy 1.3.0
  Adding numpy 1.3.0 to easy-install.pth file
 
  only after aptitude remove python-numpy version 1.5.1. is involved by
  easy_install, but it fails:
 
  $ sudo easy_install numpy
  
  Adding numpy 1.5.1 to easy-install.pth file
  Installing f2py script to /usr/local/bin
 
  Installed
  /usr/local/lib/python2.6/dist-packages/numpy-1.5.1-py2.6-linux-x86_64.egg
  Processing dependencies for numpy
  Finished processing dependencies for numpy
  /tmp/easy_install-QF6uJM/numpy-1.5.1/numpy/distutils/misc_util.py:251:
  RuntimeWarning: Parent module 'numpy.distutils' not found while handling
  absolute import
 
 I see an *absolute* import, maybe easy_install is different from where
 ubuntu expects numpy to be installed?
 
 i think the folder distutils is specific for ubuntu ?
 
 
 try and remove manually the egg
 in 
/usr/local/lib/python2.6/dist-packages/numpy-1.5.1-py2.6-linux-x86_64.egg
 
 then do
 
 $ sudo easy_install -U numpy
 
 -U is for update, maybe you have to run without -U first?
 
  Error in atexit._run_exitfuncs:
  Traceback (most recent call last):
File /usr/lib/python2.6/atexit.py, line 24, in _run_exitfuncs
  func(*targs, **kargs)
File 
/tmp/easy_install-QF6uJM/numpy-1.5.1/numpy/distutils/misc_util.py,
  line 251, in clean_up_temporary_directory
  ImportError: No module named numpy.distutils
  Error in sys.exitfunc:
  Traceback (most recent call last):
File /usr/lib/python2.6/atexit.py, line 24, in _run_exitfuncs
  func(*targs, **kargs)
File 
/tmp/easy_install-QF6uJM/numpy-1.5.1/numpy/distutils/misc_util.py,
  line 251, in clean_up_temporary_directory
  ImportError: No module named numpy.distutils
 
  I have Linux KUBUNTU 10.10
 
  D.
 
 Good question.. ?
 
 easy_install should be distribution non-specific(as far as i know)
 
 see also this thread, even though its old. (late 2009)
 
http://old.nabble.com/numpy-1.3.0-eggs-with-python2.6-seem-broken-on-osx,-and-linux-td26551531.html
 
 so maybe then download only the egg(with easy_install), then enter the
 folder and do sudo python2.6 setup.py install
 Dont you have python 2.7 also on ubuntu?
 
 
 regards mic
 ___
 NumPy-Discussion mailing list NumPy-Discussion@scipy.org  
http://mail.scipy.org/mailman/listinfo/numpy-discussion

   Thanks for all those instructions; however, personally I don't need
   them, I have succeeded with my own manipulations, and even if I
   hadn't, I can always build numpy/scipy from sources.
   I merely care about the quality and ease of numpy installation for
   ordinary non-skilled users. They may just try the installation, see that
   it's buggy and move away to other software.

   D.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] argmin and argmax without nan

2011-03-24 Thread Dmitrey
 hi,
   is there any way to get the argmin and argmax of an array w/o nans?
   Currently I have
   >>> from numpy import *
   >>> argmax([10,nan,100])
   1
   >>> argmin([10,nan,100])
   1
   But these are not the values I would like to get.

   The workaround I use: get all indices of the nans, replace them by -inf,
   get argmax, then replace them by inf and get argmin.
   Is there any better way? (BTW, I invoke argmin/argmax along a
   chosen axis.)
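
   [Editor's note: the workaround described above, written out in numpy; it also
   works along a chosen axis:]

   import numpy as np

   a = np.array([10.0, np.nan, 100.0])
   print(np.argmax(np.where(np.isnan(a), -np.inf, a)))   # 2
   print(np.argmin(np.where(np.isnan(a),  np.inf, a)))   # 0
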
   D.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] argmin and argmax without nan

2011-03-24 Thread Dmitrey
  2011/3/24 Dmitrey 

 tm...@ukr.net :
  hi,
  is there any way to get argmin and argmax of an array w/o nans?
  Currently I have
  from numpy import *
  argmax([10,nan,100])
  1
  argmin([10,nan,100])
  1
  But it's not the values I would like to get.
 
  The walkaround I use: get all indeces of nans, replace them by -inf, get
  argmax, replace them by inf, get argmin.
  Is there any better way? (BTW, I invoke argmin/argmax along of a chosen
  axis)
  D.
 
 In [3]: np.nanargmax([10, np.nan, 100])
 Out[3]: 2
 
 In [4]: np.nanargmin([10, np.nan, 100])
 Out[4]: 0
 
 Ralf


   Thanks, I thought something like np.argnanmin should do that, but functions
   under that name were absent.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] argmin and argmax without nan

2011-03-24 Thread Dmitrey
  On Thu, Mar 24, 2011 at 6:19 AM, Ralf Gommers
 

 ralf.gomm...@googlemail.com  wrote:
  2011/3/24 Dmitrey  tm...@ukr.net :
  hi,
  is there any way to get argmin and argmax of an array w/o nans?
  Currently I have
  from numpy import *
  argmax([10,nan,100])
  1
  argmin([10,nan,100])
  1
  But it's not the values I would like to get.
 
  The walkaround I use: get all indeces of nans, replace them by -inf, get
  argmax, replace them by inf, get argmin.
  Is there any better way? (BTW, I invoke argmin/argmax along of a chosen
  axis)
  D.
 
  In [3]: np.nanargmax([10, np.nan, 100])
  Out[3]: 2
 
  In [4]: np.nanargmin([10, np.nan, 100])
  Out[4]: 0
 
 And if speed is an issue (it usually isn't) you can use the nanargmax
 from Bottleneck:
 
  a = np.random.rand(1)
  a[a  0.5] = np.nan
  timeit np.nanargmax(a)
 1 loops, best of 3: 127 us per loop
  import bottleneck as bn
  timeit bn.nanargmax(a)
 10 loops, best of 3: 12.4 us per loop


   For some problems this could yield a speedup of some percent.
   Are there any plans to merge bottleneck into numpy?
   Also, are those benchmarks valid only for ordinary numpy, or also for numpy
   with MKL/ACML, or doesn't it matter?
   If I have huge arrays and a multicore CPU, will numpy with MKL/ACML or
   something else run numpy functions like
   amin, amax, argmin, nanargmin etc. in parallel?
   D.

 

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] numpy log2 has bug

2011-03-23 Thread Dmitrey
  >>> from numpy import log2, __version__
  >>> log2(2**63)
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  AttributeError: log2
  >>> __version__
  '2.0.0.dev-1fe8136'
  (doesn't work with 1.3.0 as well)
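
  [Editor's note: a workaround sketch - 2**63 does not fit into int64, so numpy
  presumably falls back to an object array and looks for a .log2() method, hence
  the AttributeError; converting to a float first (2**63 is an exact power of
  two, so no precision is lost) sidesteps the problem:]

  import numpy as np

  print(np.log2(float(2**63)))        # 63.0
  print(np.log2(np.float64(2**63)))   # 63.0
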
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] bug with numpy 2 ** N

2011-03-23 Thread Dmitrey
  >>> 2**64
  18446744073709551616L
  >>> 2**array(64)
  -9223372036854775808
  >>> 2**100
  1267650600228229401496703205376L
  >>> 2**array(100)
  -9223372036854775808
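
  [Editor's note: the array results above wrap around in int64; a sketch of ways
  to avoid that:]

  import numpy as np

  print(2 ** np.array(64, dtype=object))   # exact Python integers: 18446744073709551616
  print(2.0 ** np.array(64))               # float64: 1.8446744073709552e+19
  print(np.exp2(100))                      # 1.2676506002282294e+30
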
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] BUG: ndarray subclass calls __mul__ when ** (pow) is involved

2011-03-19 Thread Dmitrey
 I have an ndarray subclass and its instance x, and I use
   r = x**2

   I expected it to call, for each array element,
   elem.__pow__(2)
   but it calls
   elem.__mul__(elem)
   instead.

   In lots of cases this substantially (by tens of times or even more) decreases
   my calculation speed.

   >>> numpy.__version__
   '2.0.0.dev-1fe8136'
   (taken some days ago from git)
   It doesn't work with 1.3.0 either.
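
   [Editor's note: a minimal probe with plain object elements (not the actual
   subclass from this report), just to observe which special method numpy
   dispatches to on a given numpy version:]

   import numpy as np

   class Probe(object):
       # Each operator just reports that it was called.
       def __mul__(self, other):
           print('__mul__ called'); return self
       def __pow__(self, other):
           print('__pow__ called'); return self

   x = np.array([Probe(), Probe()], dtype=object)
   r = x ** 2    # watch whether the elements see __pow__(2) or __mul__(elem)
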
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] OpenOpt Suite release 0.33

2011-03-16 Thread Dmitrey
 Hi all,
   I'm glad to inform you about new release 0.33 of our completely free
   (license: BSD) cross-platform software:

   OpenOpt:



 * cplex has been connected

  * New global solver interalg with guaranteed precision, a competitor
   to LGO, BARON, MATLAB's intsolver and Direct (it can also work in
   inexact mode)

  * New solver amsg2p for unconstrained medium-scaled NLP and NSP

 

   FuncDesigner:



 * Substantial speedup for automatic differentiation when
   vector variables are involved, for both dense and sparse cases

  * Solving MINLP became available

  * Add uncertainty analysis

  * Add interval analysis

  * Now you can solve systems of equations with automatic
   determination of whether the system is linear or nonlinear (subject to
   the given set of free or fixed variables)

  * FD funcs min and max can work on lists of oofuns

  * Bugfix for sparse SLEs (systems of linear equations) that slowed
   down computation and demanded more memory

  * New oofuns angle, cross

  * Using OpenOpt result(oovars) is available; also, start points
   with oovars() can now be assigned more easily

 

   SpaceFuncs (2D, 3D, N-dimensional geometric package with abilities for
   parametrized calculations, solving systems of geometric equations and
   numerical optimization with automatic differentiation):



 * Some bugfixes

 

   DerApproximator:



 * Adjusted with some changes in FuncDesigner

 

   For more details visit our site http://openopt.org.

   

   Regards, Dmitrey.

   
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Inplace remove some array rows

2011-03-12 Thread Dmitrey
 hi all,
   currently I use
   a = array(m,n)
   ...
   a = delete(a, indices, 0) # delete some rows

   Can I somehow perform the operation in-place, without creating an
   auxiliary array?
   If I use

   numpy.compress(condition, a, axis=0, out=a),
   or
   numpy.take(a, indices, axis=0, out=a)

   will the operation be in-place?
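
   [Editor's note: as far as I know, ndarrays have a fixed size, so a genuinely
   in-place deletion is not possible; delete/compress/take all produce new arrays,
   and out= must already have the (smaller) output shape. A boolean-mask sketch of
   the usual approach:]

   import numpy as np

   a = np.arange(12.0).reshape(4, 3)
   indices = [1, 3]                      # rows to drop

   keep = np.ones(a.shape[0], dtype=bool)
   keep[indices] = False
   a = a[keep]                           # allocates a new, smaller array
   print(a.shape)                        # (2, 3)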

   Thank you in advance,
   D.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] [ANN] New package: SpaceFuncs (2D, 3D, ND geometric modeling, optimization, solving)

2011-02-16 Thread Dmitrey
 Hi all,
   I'm glad to inform you about new, 4th OpenOpt Suite module:

   SpaceFuncs - a tool for 2D, 3D, N-dimensional geometric modeling with
   possibilities of parametrized calculations, numerical optimization and
   solving systems of geometrical equations with automatic
   differentiation.

   The module is written in Python + NumPy, requires FuncDesigner (and
   OpenOpt, DerApproximator for some operations). It has completely free
   license: BSD.

   For details see its home page
   http://openopt.org/SpaceFuncs
   and documentation
   http://openopt.org/SpaceFuncsDoc

   Also, you can try it online via our Sage server (it sometimes hangs due
   to high load, though)
   http://sage.openopt.org/welcome

   Regards,
   Dmitrey.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Is numpy/scipy linux apt or PYPI installation linked with ACML?

2011-01-23 Thread Dmitrey
 Hi all,
   I have an AMD processor and I would like to know the easiest
   way to install numpy/scipy linked with ACML.
   Is it possible to have a Linux apt or PyPI installation linked with
   ACML?
   An answer to the same question about MKL would also be useful; however,
   AFAIK it has a commercial license and thus can't be handled in those ways.
   Thank you in advance, D.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is numpy/scipy linux apt or PYPI installation linked with ACML?

2011-01-23 Thread Dmitrey
 Are the free EPD distributions linked with MKL and ACML?
   Does anyone know whether Sage or Python(x,y) is already linked with ACML or MKL?

   Thanks, D.

   --- Original message ---
   From: David Cournapeau courn...@gmail.com
   To: Discussion of Numerical Python numpy-discussion@scipy.org
   Date: 23 January 2011, 12:07:29
   Subject: Re: [Numpy-discussion] Is numpy/scipy linux apt or PYPI
   installation linked with ACML?



 2011/1/23 Dmitrey  tm...@ukr.net :
  Hi all,
  I have AMD processor and I would like to get to know what's the easiest 
way
  to install numpy/scipy linked with ACML.
  Is it possible to link linux apt or PYPI installation linked with ACML?
  Answer for the same question about MKL also would be useful, however, 
AFAIK
  it has commercial license and thus can't be handled in the ways.
 
 For the MKL, the easiest solution is to get EPD, or to build
 numpy/scipy by yourself, although the later is not that easy. For
 ACML, I don't know how difficult it is, but I would be surprised if it
 worked out of the box.
 
 cheers,
 
 David
 ___
 NumPy-Discussion mailing list NumPy-Discussion@scipy.org  
http://mail.scipy.org/mailman/listinfo/numpy-discussion
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] new quarterly OpenOpt/FuncDesigner release 0.32

2010-12-15 Thread Dmitrey
 Hi all,
   I'm glad to inform you about new quarterly OpenOpt/FuncDesigner
   release (0.32):

   OpenOpt:
   * New class: LCP (and related solver)
   * New QP solver: qlcp
   * New NLP solver: sqlcp
   * New large-scale NSP (nonsmooth) solver gsubg. Currently it still
   requires lots of improvements (especially for constraints - their
   handling is still very premature and often fails), but since the solver
   sometimes already works better than ipopt, algencan and the other
   competitors it was tried against, I decided to include it in the
   release.
   * Now SOCP can handle Ax = b constraints (and a bugfix for handling
   lb <= x <= ub has been committed)
   * Some other fixes and improvements

   

   FuncDesigner:
   * Add new function removeAttachedConstraints
   * Add new oofuns min and max (their capabilities are quite restricted
   yet)
   * Systems of nonlinear equations: possibility to assign personal
   tolerance for an equation
   * Some fixes and improvements

   


   

   For more details see our forum entry

   

   http://forum.openopt.org/viewtopic.php?id=325

   


   

   Regards, D.

   
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] ANN: OpenOpt 0.31, FuncDesigner 0.21, DerApproximator 0.21

2010-09-15 Thread Dmitrey
 Hi all,

   I'm glad to inform you about new releases:
   OpenOpt 0.31, FuncDesigner 0.21, DerApproximator 0.21

   For details see
   http://forum.openopt.org/viewtopic.php?id=299
   or visit our homepage
   http://openopt.org

   Regards,
   Dmitrey
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] how to use ldexp?

2009-05-21 Thread dmitrey
hi all,
I have tried the example from numpy/add_newdocs.py

np.ldexp(5., 2)
but instead of the 20 declared there it yields
TypeError: function not supported for these types, and can't coerce
safely to supported types

I have tried arrays, but it yields the same error:
>>> np.ldexp(np.array([5., 2.]), np.array([2, 1]))
Traceback (innermost last):
  File "<stdin>", line 1, in <module>
TypeError: function not supported for these types, and can't coerce
safely to supported types

So, how can I use ldexp?
np.__version__ = '1.4.0.dev6972'
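
[Editor's note: for what it's worth, on recent numpy versions the scalar call
works, and for arrays the exponent just has to be an integer array (a sketch;
the exact dtype requirements depended on the numpy version):]

import numpy as np

print(np.ldexp(5.0, 2))                            # 20.0  (5 * 2**2)
print(np.ldexp(np.array([5.0, 2.0]),
               np.array([2, 1], dtype=np.int32)))  # [20.  4.]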

Thank you in advance,
D.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] numpy ufuncs and COREPY - any info?

2009-05-21 Thread dmitrey
hi all,
has anyone already tried to compare using an ordinary numpy ufunc vs
that one from corepy, first of all I mean the project
http://socghop.appspot.com/student_project/show/google/gsoc2009/python/t124024628235

It would be interesting to know what is speedup for (eg) vec ** 0.5 or
(if it's possible - it isn't pure ufunc) numpy.dot(Matrix, vec). Or
any another example.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] how to use ldexp?

2009-05-21 Thread dmitrey
On May 21, 11:29 am, David Cournapeau da...@ar.media.kyoto-u.ac.jp
wrote:
 dmitrey wrote:
  I have updated numpy to latest '1.4.0.dev7008', but the bug still
  remains.
  I use KUBUNTU 9.04, compilers - gcc (using build-essential), gfortran.
  D.

 Can you post the build output (after having removed the build directory
 : rm -rf build  python setup.py build  build.log) ?

 David

ok, it's here
http://pastebin.com/mb021e11
D.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] where are the benefits of ldexp and/or array times 2?

2009-05-21 Thread dmitrey
Hi all,
I expected to get some speedup by using ldexp or by multiplying an
array by a power of 2 (doesn't that just have to adjust the floating-point
exponent?), but I don't see one.

Have I done something wrong? See the code below.

from scipy import rand
from numpy import dot, ones, zeros, array, ldexp
from time import time
N = 1500
A = rand(N, N)

b = rand(N)
b2 = 2*ones(A.shape, 'int32')

I = 100
t = time()

for i in xrange(I):
    dot(A, b)      # N^2 multiplications + some sum operations
    #A * 2.1       # N^2 multiplications, so it should consume no greater than 1st line time
    #ldexp(A, b2)  # it should consume no greater than prev line time, shouldn't it?

print 'time elapsed:', time() - t

# 1st case: 0.62811088562
# 2nd case: 2.00850605965
# 3rd case: 6.79027700424

# Let me also note -
# 1) using b = 2 * ones(N) or b = zeros(N) doesn't yield any speedup vs b = rand()
# 2) using A * 2.0 (or mere 2) instead of 2.1 doesn't yield any speedup,
#    despite 2.0 being an exact integer power of 2.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] binary shift for ndarray

2009-05-20 Thread dmitrey
hi all,

suppose I have A, a numpy ndarray of floats with shape n x n.

I want to obtain dot(A, b), where b is a vector of length n with norm(b)=1, but
instead of the exact multiplication I want to approximate b as a vector
[± 2^m0, ± 2^m1, ± 2^m2, ..., ± 2^m_n], where the m_i are integers, and then
invoke left_shift(vector_m) on the rows of A.

So, what is the simplest way to do it, without loops of course? Or can it
not be implemented without loops in the current numpy version?

Thank you in advance, D.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] binary shift for ndarray

2009-05-20 Thread dmitrey
On May 20, 10:34 pm, Robert Kern robert.k...@gmail.com wrote:
 On Wed, May 20, 2009 at 14:24, dmitrey dmitrey.kros...@scipy.org wrote:
  hi all,

  suppose I have A that is numpy ndarray of floats, with shape n x n.

  I want to obtain dot(A, b), b is vector of length n and norm(b)=1, but
  instead of exact multiplication I want to approximate b as a vector
  [+/- 2^m0, ± 2^m1, ± 2^m2 ,,, ± 2^m_n], m_i are integers, and then
  invoke left_shift(vector_m) for rows of A.

 You don't shift floats. You only shift integers. For floats,
 multiplying by an integer power of 2 should be fast because of the
 floating point representation (the exponent just gets incremented or
 decremented), so just do the multiplication.

  So, what is the simplest way to do it, without cycles of course? Or it
  cannot be implemented w/o cycles with current numpy version?

 It might help if you showed us an example of an actual b vector
 decomposed the way you describe. Your description is ambiguous.

 --
 Robert Kern

For the task involved (I intend to try using it to speed up the ralg
solver) it doesn't matter much (using ceil, floor or round),
but for example let m_i be
floor(log2(b_i)) for b_i > 1e-15,
ceil(log2(-b_i)) for b_i < -1e-15,
and for -1e-15 <= b_i <= 1e-15 don't modify the elements of A related
to that b_i at all.
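
[Editor's note: a vectorized sketch of that idea, using floor for both signs for
brevity (an assumption); np.ldexp does the scale-by-2**m step without loops:]

import numpy as np

def approx_dot(A, b, tol=1e-15):
    # Replace b by signed powers of two and apply them to the columns of A
    # via ldexp, i.e. dot(A, b) ~ sum_j s_j * 2**m_j * A[:, j].
    b = np.asarray(b, dtype=float)
    s = np.sign(b)
    big = np.abs(b) > tol
    m = np.zeros(b.shape, dtype=np.int32)
    m[big] = np.floor(np.log2(np.abs(b[big]))).astype(np.int32)
    s[~big] = 0.0                      # negligible components are dropped entirely
    return (np.ldexp(A, m) * s).sum(axis=1)

A = np.random.rand(4, 4)
b = np.random.rand(4) - 0.5
print(approx_dot(A, b))
print(A.dot(b))                        # for comparison with the exact product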

D.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] error building numpy: no file refecount.c

2009-05-05 Thread dmitrey
Hi all,
I got this error while building numpy from the latest svn snapshot -
any ideas?
D.

...
executing numpy/core/code_generators/generate_numpy_api.py
  adding 'build/src.linux-x86_64-2.6/numpy/core/include/numpy/
__multiarray_api.h' to sources.
numpy.core - nothing done with h_files = ['build/src.linux-x86_64-2.6/
numpy/core/include/numpy/config.h', 'build/src.linux-x86_64-2.6/numpy/
core/include/numpy/numpyconfig.h', 'build/src.linux-x86_64-2.6/numpy/
core/include/numpy/__multiarray_api.h']
building extension numpy.core.multiarray sources
error: src/multiarray/refecount.c: No such file or directory
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] does numpy/scipy have solver for Ax=b, L_inf (Chebyshev norm)?

2009-04-18 Thread dmitrey
Hi all,
does numpy/scipy, or maybe a wrapper for a LAPACK routine, have a solver
for Ax=b in L_inf (the Chebyshev norm, i.e. max |Ax-b| -> min)? If there are
several, which ones are most suitable for large-scale, possibly ill-
conditioned problems?
Thank you in advance, D.
P.S. Currently I'm not interested in translating the problem to LP
here, I'm looking for more specialized solvers.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] best way to get vector representation in a basis?

2009-04-16 Thread dmitrey
Hi all,

I have an orthonormal set of vectors B = [b_0, b_1,..., b_k-1],
b_i from R^n (k may be less than n), and a vector a from R^n.

What is the most efficient way in numpy to get r from R^n and c_0, ...,
c_k-1 from R such that
a = c_0*b_0+...+c_k-1*b_k-1 + r
(r is the residual)?
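
[Editor's note: a sketch, assuming B is stored as a (k, n) array whose rows are
the orthonormal b_i (if they are columns, drop the transposes):]

import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))   # 5x3 with orthonormal columns
B = Q.T                                            # rows of B are the orthonormal b_i
a = rng.standard_normal(5)

c = B.dot(a)             # c_i = <b_i, a>, thanks to orthonormality
r = a - B.T.dot(c)       # the part of a outside span(b_0, ..., b_{k-1})
print(np.allclose(a, B.T.dot(c) + r))   # True
print(np.allclose(B.dot(r), 0.0))       # r is orthogonal to every b_i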

Thank you in advance, D.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] n-dimensional array indexing question

2008-11-08 Thread dmitrey
hi all,
I have an array A with A.ndim = n, and a 1-dimensional array B of length n.
How can I get the element of A with coordinates B[0],...,B[n-1]?
i.e. A[B[0], B[1], ..., B[n-1]]

A, B, n are not known until execution time, and can have unpredictable
lengths (still, n is usually small, no more than 4-5).

I have tried via ix_ but haven't succeeded yet.
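
[Editor's note: converting B to a tuple gives exactly that multidimensional
index (indexing with the array itself would trigger fancy indexing instead). A
sketch:]

import numpy as np

A = np.arange(2 * 3 * 4).reshape(2, 3, 4)
B = np.array([1, 2, 3])
print(A[tuple(B)])    # same element as A[1, 2, 3]
print(A[1, 2, 3])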

Thx in advance, D.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ParallelProgramming wiki page

2008-10-27 Thread dmitrey
Did you mean this one
http://www.netlib.org/scalapack/pblas_qref.html
?

As for the ParallelProgramming wiki page, there are some words in the
section "Use parallel primitives" about numpy.dot, but I still can't
tell from the section: if I get numpy from sources and compile it
(via python setup.py build) on my AMD X2, will numpy.dot use the 2nd CPU
or not?

Regards, D.

Frédéric Bastien wrote:
 Hi,

 There exist open source versions of parallel BLAS libraries. I modified
 the section "Use parallel primitives" to say so. But my English is
 bad, so if someone can check it, it would be nice.

 Fred

 On Mon, Oct 27, 2008 at 4:24 PM, Robert Kern [EMAIL PROTECTED] wrote:
   
 On Mon, Oct 27, 2008 at 15:20, Sebastien Binet
 [EMAIL PROTECTED] wrote:
 
 On Monday 27 October 2008 12:56:56 Robin wrote:
   
 Hi,

 I made some changes to the ParallelProgramming wiki page to outline
 use of the (multi)processing module as well as the threading module.

 I'm very much not an expert on this - just researched it for myself,
 so please feel free to correct/ extend/ delete as appropriate.
 
 I would mention the backport of multiprocessing for python-2.{4,5}:
  http://code.google.com/p/python-multiprocessing
 so the amount of editing when one switches from 2.{4,5} to 2.6 is minimal :)
   
 Go for it. The wiki is open to editing.

 --
 Robert Kern

 I have come to believe that the whole world is an enigma, a harmless
 enigma that is made terrible by our own mad attempt to interpret it as
 though it had an underlying truth.
  -- Umberto Eco
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

 
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion



   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] asscalar(number) - why yields error, why can't return the number?!

2008-10-05 Thread dmitrey
hi all,
I wonder why numpy.asscalar(1.5) yields an error - why can't it just return
1.5? Is this ever intended to be changed?

>>> numpy.__version__
'1.3.0.dev5864'
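
[Editor's note: for pulling a Python scalar out of a 0-d array or numpy scalar,
.item() is the reliable route (asscalar was deprecated in later numpy versions).
A quick sketch:]

import numpy as np

print(np.array(1.5).item())    # 1.5 as a plain Python float
print(np.float64(1.5).item())  # also 1.5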

D.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] why type(array(1).tolist()) is int?

2008-10-01 Thread dmitrey
hi all,
why does array(1).tolist() return 1? I expected to get [1] instead.
D.

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] why type(array(1).tolist()) is int?

2008-10-01 Thread dmitrey
let me also note that list(array((1))) returns

Traceback (innermost last):
  File "<stdin>", line 1, in <module>
TypeError: iteration over a 0-d array

D.

dmitrey wrote:
 hi all,
 why array(1).tolist() returns 1? I expected to get [1] instead.
 D.

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion



   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] will array(Python set) be ever implemented as cast method?

2008-10-01 Thread dmitrey
hi all,
will array(Python set) (and asarray, asfarray etc.) ever be implemented
as a cast method?

Now it just puts the whole set into a single element:

>>> asarray(set([11, 12, 13, 14]))
array(set([11, 12, 13, 14]), dtype=object)

>>> array(set([11, 12, 13, 14]))
array(set([11, 12, 13, 14]), dtype=object)

Currently I use array(list(my_set)) instead.
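
[Editor's note: two other sketches that avoid the intermediate list, for what
they are worth (iteration order of a set is arbitrary, hence the sorted
variant):]

import numpy as np

s = set([11, 12, 13, 14])
print(np.fromiter(s, dtype=int))   # 1-D array, arbitrary order
print(np.array(sorted(s)))         # [11 12 13 14], deterministic order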

D.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] why type(array(1).tolist()) is int?

2008-10-01 Thread dmitrey
Alan G Isaac wrote:
 On 10/1/2008 9:04 AM dmitrey apparently wrote:
   
 why array(1).tolist() returns 1? I expected to get [1] instead.
 

 I guess I would expect it not to work at all.
 Given that it does work, this seems the best result.
 What list shape matches the shape of a 0-d array?

 What is the use case that makes this seem wrong?
   
Because I just expect something.tolist() to return type list, not type
int.  The tolist documentation says "Return the array as a list or
nested lists" and says nothing about the possibility of returning anything else.

As for my situation, I store the list in my data field and then call
for item in prob.my_list: do_something()

D.
 Alan Isaac

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion



   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] does numpy have funcs like isanynan() or isallfinite()?

2008-09-06 Thread dmitrey
hi all,

does numpy have funcs like isanynan(array) or isallfinite(array)?

I very often use any(isnan(my_array)) or all(isfinite(my_array)); I
guess a single triggering element would be enough here to skip any
further checks.
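
[Editor's note: no such functions exist in numpy; a block-wise sketch that stops
early once a NaN is found, whereas np.isnan(a).any() always scans and allocates
for the whole array:]

import numpy as np

def has_nan(a, blocksize=2**16):
    # Scan in chunks so a NaN near the beginning avoids touching the rest.
    a = np.asarray(a).ravel()
    for start in range(0, a.size, blocksize):
        if np.isnan(a[start:start + blocksize]).any():
            return True
    return False

x = np.zeros(10**6)
x[5] = np.nan
print(has_nan(x))    # True, after inspecting only the first block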

Regards, D.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] sort documentation

2008-08-31 Thread dmitrey
As for me, I can't understand the general rule: when do numpy funcs return
a copy and when a reference?

For example, why does x.fill() return None (doing in-place modification) while
x.ravel() and x.flatten() return copies? Why don't the latter do in-place
modification, as one might expect?
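
[Editor's note: a small session illustrating the current distinctions (a sketch,
not an official rule): fill and ndarray.sort work in place and return None,
np.sort returns a sorted copy, ravel returns a view when it can, and flatten
always copies.]

import numpy as np

x = np.arange(6).reshape(2, 3)
v = x.ravel()          # a view whenever possible
v[0] = 99
print(x[0, 0])         # 99 -- the original was affected

f = x.flatten()        # always a copy
f[1] = -1
print(x[0, 1])         # unchanged

a = np.array([3, 1, 2])
print(np.sort(a), a)   # sorted copy; a itself is unchanged
print(a.sort(), a)     # None, but a is now sorted in place (like fill)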

D.

Alan G Isaac wrote:
 I find this confusing:

  numpy.sort(a, axis=-1, kind='quicksort', order=None)

  Return copy of 'a' sorted along the given axis.

  Perform an inplace sort along the given axis using the algorithm
  specified by the kind keyword.

 I suppose the last bit is supposed to refer to the ``sort``
 method rather than the function, but I do not see any signal
 that this is the case.

 Cheers,
 Alan Isaac


 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion



   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] isn't it a bug in array.fill()?

2008-08-29 Thread dmitrey
hi all,
isn't it a bug
(latest numpy from svn, as well as my older version)

from numpy import array
print array((1,2,3)).fill(10)
None

Regards, D.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] isn't it a bug in array.fill()

2008-08-29 Thread dmitrey
sorry, it isn't a bug, it's my fault: fill() returns None and does the
modification in place.
D.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] isn't it a bug in array.fill()?

2008-08-29 Thread dmitrey

Keith Goodman wrote:
 Yeah, I do stuff like that too. fill works in place so it returns None.

   
 x = np.array([1,2])
 x.fill(10)
 x
   
array([10, 10])
   
 x = x.fill(10)  # -- Danger!
 print x
   
 None
   
Since the result None is never used, it would be better to return a reference
to the modified array; it would decrease the number of bugs. The latter
expression bites only very seldom, in untested cases; I discovered one
of these recently in my code:

if some_seldom_cond:
   r = empty(n, bool).fill(True)
else:
   r = None

So, as you see, r was always None
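
[Editor's note: the patterns that do bind the filled array to a name (a sketch):]

import numpy as np

n = 5
r = np.empty(n, dtype=bool)
r.fill(True)                  # fill in place; the call itself returns None
print(r)

r2 = np.ones(n, dtype=bool)   # one-liner that yields the filled array directly
print(r2)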

D.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] preparing to tag NumPy 1.0.5 on Wednesday

2008-03-03 Thread dmitrey
Also, it would be very good if asfarray() didn't drop float128 down to
float64.
D.

Alan G Isaac wrote:
 I never got a response to this:
 URL:http://projects.scipy.org/pipermail/scipy-dev/2008-February/008424.html
 (Two different types claim to be numpy.int32.)

 Cheers,
 Alan



 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion



   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Matching 0-d arrays and NumPy scalars

2008-02-20 Thread dmitrey
Travis E. Oliphant wrote:
 Hi everybody,

 In writing some generic code, I've encountered situations where it would 
 reduce code complexity to allow NumPy scalars to be indexed in the 
 same number of limited ways, that 0-d arrays support.

 For example, 0-d arrays can be indexed with

 * Boolean masks
 * Ellipses x[...]  and x[..., newaxis]
 * Empty tuple x[()]

 I think that numpy scalars should also be indexable in these particular 
 cases as well (read-only of course,  i.e. no setting of the value would 
 be possible).

 This is an easy change to implement, and I don't think it would cause 
 any backward compatibility issues.

 Any opinions from the list?


 Best regards,

 -Travis O.

   
As for me, I would be glad to see the same behavior for numbers as for arrays
in general, like it's implemented in MATLAB, i.e.
>> a=80
>> disp(a)
80
>> disp(a(1,1))
80
OK, for numpy, at least having the possibility to use
a=array(80)
print a[0]
would be very convenient. Now atleast_1d(a) is required very often, and
sometimes errors occur only some time later, already during execution
of user-installed code, when the user usually passes multi-variable arrays
and some time later a single-variable array is suddenly encountered.
I guess it could be implemented via a simple check:
if the user asks for a[0] and a is an array of shape () (i.e. like
a=array(80)), then return a[()].
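
[Editor's note: for completeness, the accessors that already work on 0-d arrays
(a sketch):]

import numpy as np

a = np.array(80)
print(a[()])                  # 80 -- empty-tuple indexing on a 0-d array
print(a.item())               # 80 as a plain Python int
print(np.atleast_1d(a)[0])    # 80 -- the explicit-upcast route mentioned above
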
D.

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] MLPY - Machine Learning Py - Python/NumPy based package for machine learning

2008-02-14 Thread dmitrey
isn't MLPY a new name for PyML?
http://mloss.org/software/view/28/

if not, I guess you'd better add a link to your software to
http://mloss.org/software/
(mloss is machine learning open source software)
Regards, D.

Davide Albanese wrote:
 *Machine Learning Py* (MLPY) is a *Python/NumPy* based package for 
 machine learning.
 The package now includes:

  * *Support Vector Machines* (linear, gaussian, polynomial,
    terminated ramps) for 2-class problems
 * *Fisher Discriminant Analysis* for 2-class problems
 * *Iterative Relief* for feature weighting for 2-class problems
 * *Feature Ranking* methods based on Recursive Feature Elimination
   (rfe, onerfe, erfe, bisrfe, sqrtrfe) and Recursive Forward
   Selection (rfs)
 * *Input Data* functions
 * *Confidence Interval* functions

  Requires Python http://www.python.org/ >= 2.4 and NumPy 
  http://www.scipy.org/ >= 1.0.3.*
 MLPY* is a project of MPBA Group http://mpa.fbk.eu/ (mpa.fbk.eu) at 
 Fondazione Bruno Kessler (www.fbk.eu). http://www.fbk.eu/*
 MLPY* is free software. It is licensed under the GNU General Public 
 License (GPL) version 3 http://www.gnu.org/licenses/gpl-3.0.html.

 HomePage: mlpy.fbk.eu
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion



   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] asfarray() drops precision (float128-float64) - is it correct?

2008-02-12 Thread dmitrey
As for me, this causes lots of inconvenience (lots of my code has to be
rewritten, since I didn't know about it before):

>>> from numpy import *
>>> a = array((1.0, 2.0), float128)
>>> b = asfarray(a)
>>> type(a[0])
<type 'numpy.float128'>
>>> type(b[0])
<type 'numpy.float64'>
>>> __version__
'1.0.5.dev4767'

Shouldn't it be changed? (I.e., let's keep float128.)
As for me, I use asfarray() very often, since I don't know whether the user
provides arrays as a numpy ndarray, a matrix, or a Python list/tuple.
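
[Editor's note: a dtype-preserving sketch of the desired behavior (assumption:
any existing floating dtype, including longdouble/float128, should be kept;
everything else is cast to float64):]

import numpy as np

def as_float_array(x):
    x = np.asanyarray(x)
    if np.issubdtype(x.dtype, np.floating):
        return x                    # keep float32/float64/longdouble as-is
    return x.astype(np.float64)     # lists, tuples, ints -> float64

print(as_float_array(np.array((1.0, 2.0), np.longdouble)).dtype)   # longdouble preserved
print(as_float_array([1, 2, 3]).dtype)                             # float64
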
D.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] numerical noise for simple calcululations

2008-02-10 Thread dmitrey
hi all,
I need a good estimate of the noise level for simple calculations.

I.e. when I calculate something like sin(15)+cos(80) I get a solution
with a precision of, for example, 1e-11.

I guess the precision depends on the system architecture, doesn't it?

So what's the best way to estimate the value?

I guess it should be something like 10*numpy.machine_precision, shouldn't it?
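
[Editor's note: the quantities numpy exposes for this (a sketch; the actual
noise of an expression also depends on how the errors propagate through it):]

import numpy as np

eps = np.finfo(np.float64).eps    # ~2.22e-16, relative machine precision
print(eps)
print(np.spacing(1.0))            # gap between 1.0 and the next float (equals eps)
print(10 * eps)                   # a "K * machine_precision" style noise estimate
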
Regards, D.


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numerical noise for simple calcululations

2008-02-10 Thread dmitrey
I just need a single number on average.
I have committed some changes to the NLP/NSP ralg solver from
scikits.openopt; for non-noisy funcs it works better, but for noisy
funcs it's vice versa, hence my examples/nssolveVSfsolve.py no longer works
as it should, so I need to implement a noise parameter and assign a
default value to it.

So, the question is: what should the default value be? I was thinking
of either 0 or something like K*numpy.machine_precision, where K is
something like 1...10...100.
Regards, D.

Timothy Hochberg wrote:


 On Sun, Feb 10, 2008 at 4:23 AM, dmitrey [EMAIL PROTECTED] 
 mailto:[EMAIL PROTECTED] wrote:

 hi all,
 I need a good estimation of noise value for simple calculations.

 I.e. when I calculate something like sin(15)+cos(80) I get a solution
 with precision, for example, 1e-11.

 I guess the precision depends on system arch, isn't it?

 So what's the best way to estimate the value?

 I guess here should be something like 10*numpy.machine_precision,
 isn't it?


 This is a complicated subject, which I'm really not qualified to 
 comment on, but I'm not going to let that stop me. I believe that you 
 want to know how accurate something like the above is given exact 
 inputs. That is a somewhat artificial problem, but I'll answer it to 
 the best of my ability.

 Functions like sin, cos, +, etc. can in theory compute their result to
 within one ULP, or maybe half an ULP (I can't recall exactly). An ULP
 is a Unit in the Last Place. To explain an ULP, let's pretend that we
 were using decimal floating point with 3 digits of precision and look
 at a couple of numbers:

 1.03e-03  -- 1 ULP = 1e-5
 3.05e+02 -- 1 ULP = 1

 We're obviously not using decimal floating point, we're using binary 
 floating point, but the basic idea is the same. The result is that the 
 accuracy is going to totally depend on the magnitude of the result. If 
 the result is small, in general the result will be more accurate in an 
 absolute sense, although not generally in a relative sense.

 In practice, this is drastically oversimplified since the inputs are 
 generally of finite accuracy. Different functions will either magnify 
 or shrink the input error depending on both the function and the value 
 of the input. If you can find an easy to read introduction to 
 numerical analysis, it would probably help. Unfortunately, I don't 
 know of a good one to recommend; the text I have is a pretty hard slog.

 To complicate this further, functions don't always compute their
 results to the maximum theoretical accuracy, presumably in the interest of
 reasonable performance.

 So, in the end the answer is; it depends. In practice the only useful, 
 simple advice I've seen to get a handle on accuracy is to compute 
 results using at least two different precisions and verify that things 
 are converging sensibly. And compare to known results wherever possible.


 -- 
 .  __
 .   |-\
 .
 .  [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
 

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion
   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] isn't it a bug? (matrix multiplication)

2008-02-07 Thread dmitrey
from numpy import array
a = array((1.0, 2.0))

b = c = 15
b = b*a  # ok
c *= a   # ok

d = array(15)
e = array(15)
d = d*a  # this works ok
e *= a   # this is intended to be the same as the previous line, but yields an error:
Traceback (innermost last):
  File "<stdin>", line 1, in <module>
ValueError: invalid return array shape
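
[Editor's note: for clarity, the two statements are not equivalent: d = d*a
builds a new shape-(2,) float array and rebinds the name, while e *= a asks
numpy to store that shape-(2,) float result into the existing 0-d integer array
e, which cannot work. A sketch:]

import numpy as np

a = np.array((1.0, 2.0))
e = np.array(15)
print((e * a).shape)              # (2,) -- a brand-new array
try:
    np.multiply(e, a, out=e)      # roughly what "e *= a" requests under the hood
except Exception as err:
    print(type(err).__name__, err)
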
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] matrix - ndarray bug

2008-01-31 Thread dmitrey
I don't know, maybe it's already fixed in more recent versions?

>>> from numpy import *
>>> a = mat('1  2')
>>> b = asfarray(a).flatten()
>>> print b[0]
[[ 1.  2.]]

# ^^ I expected to get 1.0 here

>>> numpy.version.version
'1.0.3'
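
[Editor's note: two workarounds that give a plain 1-D result regardless of the
matrix subclass (a sketch):]

import numpy as np

a = np.mat('1  2')
print(np.asarray(a, dtype=float).ravel()[0])   # 1.0 -- plain 1-D ndarray, no matrix quirks
print(a.A1)                                    # [1 2] -- .A1 is the matrix's flattened 1-D ndarray
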
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [ANN] numscons 0.3.0 release

2008-01-25 Thread dmitrey
Hi all,
I don't know much about what these scons are; if it's something
essential (as it seems to be from the amount of mailing-list traffic), why
can't it just be merged into numpy, without making any additional branches?

Regards, D.

David Cournapeau wrote:
  
   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion

