Sturla Molden wrote:
> On 4/26/2007 2:19 PM, Steve Lianoglou wrote:
>
>>> Besides proper programming paradigms, Python easily scales to large-
>>> scale number crunching: you can run large-matrix calculations
>>> with about 1/2 to 1/4 of the memory consumption of Matlab.
>>
>> Is that really true? (The large-matrix number crunching, not the
>> proper programming paradigm ;-)
>>
>> By no scientific means of evaluation, I was under the impression that
>> the opposite was true to a smaller degree.
>
> Matlab has pass-by-value semantics, so you have to copy your data in
> and copy your data out for every function call. You can achieve the same
> result in Python by pickling and unpickling arguments and return values,
> e.g. using this function decorator:
>
>
> import cPickle as pickle
>
> def Matlab_Semantics(f):
>
>     '''
>     Emulates Matlab's pass-by-value semantics;
>     objects are serialized in and serialized out.
>
>     Example:
>
>     @Matlab_Semantics
>     def foo(bar):
>         pass
>     '''
>
>     func = f
>
>     def wrapper(*args, **kwargs):
>         args_in = pickle.loads(pickle.dumps(args))
>         kwargs_in = {}
>         for k in kwargs:
>             kwargs_in[k] = pickle.loads(pickle.dumps(kwargs[k]))
>         args_out = func(*args_in, **kwargs_in)
>         args_out = pickle.loads(pickle.dumps(args_out))
>         return args_out
>
>     return wrapper
>
> Imagine using these horrible semantics in several layers of function
> calls. That is exactly what Matlab does. Granted, Matlab optimizes
> function calls by using copy-on-write, so it will be efficient in some
> cases, but excessive cycles of copy-in and copy-out are usually what
> you get.

That's interesting. How did you find this information?
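For illustration, here is a small hypothetical usage sketch of Sturla's decorator (ported to Python 3, where cPickle is just pickle), showing the observable effect of pass-by-value semantics: mutations made inside the decorated function do not reach the caller's object. The function `scale` is invented for the example.

```python
import pickle

def Matlab_Semantics(f):
    """Emulate Matlab's pass-by-value semantics: arguments and
    return values are deep-copied via pickle round-trips."""
    def wrapper(*args, **kwargs):
        # Copy-in: serialize and deserialize every argument.
        args_in = pickle.loads(pickle.dumps(args))
        kwargs_in = {k: pickle.loads(pickle.dumps(v))
                     for k, v in kwargs.items()}
        result = f(*args_in, **kwargs_in)
        # Copy-out: the return value is copied as well.
        return pickle.loads(pickle.dumps(result))
    return wrapper

@Matlab_Semantics
def scale(values, factor):
    # Mutates its argument in place, as sloppy Matlab-style code might.
    for i in range(len(values)):
        values[i] *= factor
    return values

data = [1, 2, 3]
result = scale(data, 10)
print(result)  # [10, 20, 30]
print(data)    # [1, 2, 3] -- the caller's list is untouched
```

Without the decorator, `data` would come back as [10, 20, 30]; with it, every call pays the full serialization cost on the way in and out, which is the overhead being described.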
_______________________________________________
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion