Here's a seed for your function:

import numpy as np

s = 'ThesampletextthatcouldbereadedthesameinbothordersArozaupalanalapuazorA'
f = np.frombuffer(s.encode('ascii'), dtype=np.int8).astype(float)
f -= f.mean()
# the self-convolution of the zero-mean signal peaks at twice the palindrome's center
maybe_here = np.argmax(np.convolve(f, f)) // 2
magic = 10
print(s[maybe_here - magic:maybe_here + magic + 1])
Let us know how to
There are various ways to repack the pair of arrays into one array.
The most universal is probably to use structured array (can repack more
than a pair):
x = np.array(list(zip(a, b)), dtype=[('a', int), ('b', int)])
After repacking you can use unique and other numpy methods:
xu = np.unique(x)
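A minimal, self-contained sketch of the repacking above (the arrays `a` and `b` are made up for illustration):

```python
import numpy as np

# hypothetical sample pair of arrays with one duplicate pair
a = np.array([1, 2, 1, 3])
b = np.array([4, 5, 4, 6])

# repack into one structured array (list(zip(...)) is needed on Python 3)
x = np.array(list(zip(a, b)), dtype=[('a', int), ('b', int)])

# numpy methods now treat each (a, b) pair as a single element
xu = np.unique(x)
print(xu)          # the duplicate pair (1, 4) appears only once
print(xu['a'])     # fields remain accessible by name
```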
+ length_data] /= factor, and that for
every start_point and length_data.
How to do this fast?
Cheers
Wolfgang
On 2012-05-31, at 1:43 AM, Val Kalatsky wrote:
What do you mean by normalized it?
Could you give the output of your procedure for the sample input data.
Val
On Thu, May 31, 2012
Confirmed on Ubuntu, np.__version__ 1.5.1 and 1.6.1 (backtraces are
below).
Something seems to be broken before it comes to memcpy
and/or _aligned_contig_to_strided_size1.
Val
-
np.__version__ 1.6.1
Program received signal SIGSEGV,
On Thu, May 31, 2012 at 12:36 AM, Wolfgang Kerzendorf wkerzend...@gmail.com
wrote:
Dear all,
I have an ndarray which consists of many arrays stacked behind each other
(only
You'll need some patience to get non-zeros, especially for k=1e-5
In [84]: np.sum(np.random.gamma(1e-5, size=1000000) != 0.0)
Out[84]: 7259
that's less than 1%. For k=1e-4 it's ~7%
Val
On Mon, May 28, 2012 at 10:33 PM, Uri Laserson uri.laser...@gmail.com wrote:
I am trying to sample from a
Hi Tom,
Would you consider bundling the quaternion dtype with your package?
I think everybody wins: your package would become stronger and
Martin's dtype would become easily available.
Thanks
Val
On Sat, May 5, 2012 at 6:27 AM, Tom Aldcroft
aldcr...@head.cfa.harvard.edu wrote:
On Fri, May 4,
The only slicing short-cut I can think of is the Ellipsis object, but it's
not going to help you much here.
The alternatives that come to my mind are (1) manipulation of shape
directly and (2) building a string and running eval on it.
Your solution is better than (1), and (2) is a horrible hack,
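For reference, a small sketch of what the Ellipsis object does do, plus the usual index-tuple trick for slicing along an arbitrary axis (the array and values here are made up):

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)

# Ellipsis stands for "as many full slices as needed":
# a[..., 0] means a[:, :, 0] no matter how many dimensions a has
print(a[..., 0].shape)    # (2, 3)

# for slicing along an arbitrary axis, building an index tuple
# programmatically avoids both reshape tricks and eval:
axis, i = 1, 2
index = (slice(None),) * axis + (i,)
print(np.array_equal(a[index], a[:, 2, :]))  # True
```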
:53 PM, Matthew Brett matthew.br...@gmail.com wrote:
Hi,
On Mon, Apr 2, 2012 at 5:38 PM, Val Kalatsky kalat...@gmail.com wrote:
Both results are correct.
There are 2 factors that make the results look different:
1) The order: the 2nd eigenvector of the numpy solution corresponds to the
1st eigenvector of your solution,
note that the vectors are written in columns.
2) The phase: an eigenvector can be multiplied by an arbitrary phase factor
(a complex number of modulus 1). The phase factor in my 1st
email should be negated:
0.99887305445887753+0.047461785427773337j
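A quick sketch of the phase point above, using a made-up Hermitian matrix: any column of the eigenvector matrix can be multiplied by a unit-modulus phase and remains a valid eigenvector.

```python
import numpy as np

# made-up Hermitian matrix for illustration
A = np.array([[2.0, 1j], [-1j, 3.0]])
w, v = np.linalg.eigh(A)

# eigenvectors are the COLUMNS of v; multiply one by an
# arbitrary unit-modulus phase and check it still satisfies A u = w u
phase = np.exp(0.7j)
u = v[:, 0] * phase
print(np.allclose(A @ u, w[0] * u))  # True
```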
I just happened to have an xp64 VM running:
My version of numpy (1.6.1) does not have float128 (see more below what I
get in ipython session).
If you need to test something else please let me know.
Val
---
Enthought Python Distribution -- www.enthought.com
Python 2.7.2 |EPD 7.2-2 (64-bit)|
[truncated np.finfo(np.float64) output: max = 1.7976931348623157e+308, nexp = 11, min = -max]
-
On Thu, Mar 15, 2012 at 11:38 PM, Matthew Brett matthew.br...@gmail.com wrote:
Hi,
On Thu, Mar 15, 2012 at 9:33 PM, Val Kalatsky kalat...@gmail.com wrote:
I just happened to have an xp64
Can you?
The question should be: why doesn't sympy have Fresnel integrals?
On Sun, Mar 11, 2012 at 1:06 AM, aa telukp...@gmail.com wrote:
why sympy cannot integrate sin(x**2)??
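For context, the antiderivative of sin(x**2) is the Fresnel integral S, which SciPy does provide. A sketch relating the two conventions (scipy.special.fresnel integrates sin(pi*t**2/2), so a substitution is needed; the value of x is made up):

```python
import numpy as np
from scipy.special import fresnel
from scipy.integrate import quad

x = 1.3
# scipy's convention: S(z) = integral of sin(pi*t**2/2) dt from 0 to z.
# Substituting t = sqrt(pi/2)*u gives:
#   integral_0^x sin(t**2) dt = sqrt(pi/2) * S(x * sqrt(2/pi))
S, C = fresnel(x * np.sqrt(2 / np.pi))
closed_form = np.sqrt(np.pi / 2) * S

# cross-check against direct numerical quadrature
numeric, _ = quad(lambda t: np.sin(t**2), 0, x)
print(np.isclose(closed_form, numeric))  # True
```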
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
26 org.python.python 0x0001000b0286 Py_Main + 2718
27 org.python.python.app 0x00010e6c start + 52
On 08.03.2012 at 02:36, Val Kalatsky wrote:
Tried it on my Ubuntu 10.10 box, no problem:
1) Saved as spampub.c
2) Compiled with (setup.py attached): python
Seeing the backtrace would be helpful.
Can you do whatever leads to the segfault
from python run from gdb?
Val
On Wed, Mar 7, 2012 at 7:04 PM, Christoph Gohle
christoph.go...@mpq.mpg.de wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hi,
I have been struggling for quite some time now.
Out[4]: 'liter'
Viewness is in the eyes of the beholder.
You have to use indirect methods to figure it out.
Probably the most robust approach is to go up the base chain until you get
None.
In [71]: c1=np.arange(16)
In [72]: c2=c1[::2]
In [73]: c4=c2[::2]
In [74]: c8=c4[::2]
In [75]: id(c8.base)==id(c4)
Out[75]:
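The "go up the base chain until you get None" idea can be sketched as a small helper. (Note for later readers: NumPy 1.7+ collapses base chains, so `.base` of a view of a view already points at the owning array; the helper works either way.)

```python
import numpy as np

def ultimate_base(a):
    """Follow .base links until None, returning the array that owns the data."""
    while a.base is not None:
        a = a.base
    return a

c1 = np.arange(16)
c8 = c1[::2][::2][::2]
print(ultimate_base(c8) is c1)   # True: c8 is ultimately a view of c1
print(c8.flags.owndata)          # False: c8 does not own its data
print(c1.flags.owndata)          # True
```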
Hi Slava,
Since your k is only 10, here is a quickie:

import numpy as np

n, p, k = 1000, 5, 10   # example values: n items, draw p of them, k times
arr = np.arange(n)
for i in range(k):
    np.random.shuffle(arr)
    print(np.sort(arr[:p]))

If you ever get non-unique entries in a set of k=10 for your n and p,
consider yourself lucky :)
Val
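For what it's worth, NumPy later grew a direct way to draw p unique items out of n: choice with replace=False (and the Generator API). A sketch with made-up sizes:

```python
import numpy as np

n, p, k = 1000, 5, 10              # made-up sizes for illustration
rng = np.random.default_rng(0)

# k draws of p distinct indices from range(n), no shuffle needed
samples = [np.sort(rng.choice(n, size=p, replace=False)) for _ in range(k)]
for s in samples:
    print(s)
```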
On Mon, Feb 20, 2012 at
Hi Bill,
Looks like you are running a very fresh version of numpy.
Without knowing the build version and what's going on in the extension
module I can't tell you much.
The usual suspects would be:
1) Numpy bug, not too likely.
2) Incorrect use of PyArray_FromObject, you'll need to send more info.
Aronne made good suggestions.
Here is another weapon for your arsenal:
1) I assume that the shape of your array is irrelevant (reshape if needed)
2) Depending on the structure of your data np.unique can be handy:
arr_unique, idx = np.unique(arr1d, return_inverse=True)
then search arr_unique
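A sketch of the np.unique trick on made-up data: search the small, sorted unique table, and map back through the inverse index when needed.

```python
import numpy as np

arr1d = np.array([7, 3, 7, 1, 3, 7])        # made-up data with repeats
arr_unique, idx = np.unique(arr1d, return_inverse=True)

# arr_unique is sorted, so membership tests can use binary search
pos = np.searchsorted(arr_unique, 7)
print(arr_unique[pos] == 7)                 # True: 7 is present

# idx reconstructs the original array from the unique table
print(np.array_equal(arr_unique[idx], arr1d))  # True
```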
To avoid all the hassle I suggest getting EPD:
http://enthought.com/products/epd.php
You'd get way more than just NumPy, which may or may not be what you need.
I have installed various NumPy's on Linux only, and from source only, which
did require compilation (gcc), so I am not much help for your
I believe there are no provisions made for that in ndarray.
But you can subclass ndarray.
Val
On Wed, Jan 25, 2012 at 12:10 PM, Emmanuel Mayssat emays...@gmail.comwrote:
Is there a way to store metadata for an array?
For example, date the samples were collected, name of the operator, etc.
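Subclassing ndarray to carry metadata follows a standard pattern (__array_finalize__ propagates the attribute through views and slices); a minimal sketch with a made-up `meta` dict:

```python
import numpy as np

class MetaArray(np.ndarray):
    """ndarray subclass carrying a metadata dict."""
    def __new__(cls, input_array, meta=None):
        obj = np.asarray(input_array).view(cls)
        obj.meta = meta if meta is not None else {}
        return obj

    def __array_finalize__(self, obj):
        # called for views and slices too; inherit metadata from the parent
        if obj is not None:
            self.meta = getattr(obj, 'meta', {})

a = MetaArray([1.0, 2.0, 3.0],
              meta={'operator': 'Emmanuel', 'date': '2012-01-25'})
print(a[1:].meta['operator'])   # slices keep the metadata
```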
Just what Bruce said.
You can run the following to confirm:
np.mean(data - data.mean())
If for some reason you do not want to convert to float64 you can add the
result of the previous line to the bad mean:
bad_mean = data.mean()
good_mean = bad_mean + np.mean(data - bad_mean)
Val
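The correction above is easy to demonstrate on made-up float32 data with a large offset, where limited precision pollutes the naive mean:

```python
import numpy as np

rng = np.random.default_rng(0)
# made-up data: small spread around a large offset, stored as float32
data = (100000.0 + rng.standard_normal(10**6) * 0.01).astype(np.float32)

bad_mean = data.mean()
# the residual mean of (data - bad_mean) exposes the float32 error
residual = np.mean(data - bad_mean)
good_mean = float(bad_mean) + float(residual)

exact = data.astype(np.float64).mean()
print(abs(good_mean - exact) <= abs(float(bad_mean) - exact))  # True
```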
On Tue, Jan
or not. This has to
be evaluated quite a lot.
Oh well ... and 1.3.0 is pretty old :-)
cheers,
Samuel
On 31.12.2011, at 07:48, Val Kalatsky wrote:
Hi folks,
First post, may not follow the standards, please bear with me.
Need to define a ufunc that takes care of various types.
Fixed
OnFail: the resolution took place and did not succeed, the user is given a
chance to fix it.
In most cases these callbacks are NULL.
I could patch numpy with a generic method that does it, but it's a shame
not to use the good ufunc machinery.
Thanks for tips and suggestions.
Val Kalatsky
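At the Python level, one way to get a ufunc over assorted input types without touching the C machinery is np.frompyfunc (a sketch; the op is made up, and the result is an object array, trading speed for generality):

```python
import numpy as np

def careful_add(a, b):
    """Made-up example op that works for assorted input types."""
    return a + b

# wrap it as a ufunc: 2 inputs, 1 output, with full broadcasting support
u_add = np.frompyfunc(careful_add, 2, 1)

out = u_add(np.arange(3), 10)
print(out)                         # [10 11 12], dtype=object
print(u_add([1.5, 2.5], [0.5, 0.5]))
```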