On Wed, Sep 30, 2015 at 6:57 AM, Nathan Goldbaum
wrote:
> Note however that with the current version of the code, not having OpenMP
>> will severely limit the performance, especially on quad-core machines,
>> since an important factor in the speed comes from the parallel processing
>> of the inde
On Wed, Sep 30, 2015 at 12:32 AM, Sebastian Berg wrote:
> > >> Plus we hope that many use cases for object arrays will soon be
> > >> supplanted by better dtype support, so now may not be the best time to
> > >> invest heavily in making object arrays complicated and powerful.
> Well, what I mean i
On Tue, Sep 29, 2015 at 6:35 PM, Charles R Harris wrote:
> For this, and other use-cases, special casing Numpy arrays stored in
>>> object arrays does make sense:
>>>
>>> "If this is a Numpy array, pass the operation through."
>>>
>>
>> Because we now (development) use rich compare, the result
On Tue, Sep 29, 2015 at 7:32 AM, Travis Oliphant
wrote:
> I'm in a situation now where at least for 6 months or so I can help with
> NumPy more than I have been able to for 7 years.
>
great news!
> 1) 1 year of inactivity to be removed from the council is too little for a
> long-running projec
One of the use cases that has sneaked in during the last few numpy versions
is that object arrays contain numerical arrays where the shapes don't add
up to a rectangular array.
I think that's the wrong way to solve that problem -- we really should have a
"proper" ragged array implementation. But is
This sounds pretty cool -- and I've had a use case. So it would be
nice to get into Numpy.
But: I doubt we'd want OpenMP dependence in Numpy itself.
But maybe a pure Cython non-MP version?
Are we putting Cython in Numpy now? I lost track.
-CHB
Sent from my iPhone
> On Sep 29, 2015, at 7:35 AM
On Wed, Sep 23, 2015 at 3:04 PM, Nathan Goldbaum
wrote:
> If you are going to do work at a terminal, I'd suggest using a library
> like doitlive (http://doitlive.readthedocs.org/en/latest/) so you can't
> make mistakes while still making it look like you are actually typing
> everything at a term
On Wed, Sep 23, 2015 at 2:31 PM, Stefan van der Walt
wrote:
> > b) a suggestion that we discuss it further personally, taking advantage
> of
> > the fact that we happen to be physically close.
>
> Sure, I'm happy to discuss the personal side of this offline.
>
Hey! I want to have a beer with you
On Wed, Sep 23, 2015 at 2:21 PM, Travis Oliphant
wrote:
> As the original author of NumPy, I would like to be on the seed council as
> long as it is larger than 7 people. That is my proposal.
>
Or the seed council could invite Travis to join as its first order of
business :-)
Actually, mayb
On Wed, Sep 23, 2015 at 12:40 PM, Nathaniel Smith wrote:
> > However, that had been contentious enough that it would probably be a
> > good idea to hash out some guidelines about the council membership.
>
> We actually do have some of those already -- dunno if they're perfect,
> but they exist :-
> But discussing who is great community leader, etc. is frankly not obvious to
> me related to numpy governance.
Thank you Sebastian.
Could we please try to get back to the governance issues, without
naming names? There are some specific questions on the table that need
to get hashed out.
Nump
On Sun, Sep 20, 2015 at 11:20 AM, Travis Oliphant
wrote:
> After long conversations at BIDS this weekend and after reading the entire
> governance document, I realized that the steering council is very large
>
How large are we talking? I think there were 8 people named -- and I'm not
sure all 8
Travis,
I'm sure you appreciate that this might all look a bit scary, given the
recent discussion about numpy governance.
But it's an open-source project, and I, at least, fully understand that
going through a big process is NOT the way to get a new idea tried out and
implemented. So I think thin
On Sat, Aug 29, 2015 at 12:55 AM, Phil Elson wrote:
> Biggus also has such a function:
> https://github.com/SciTools/biggus/blob/master/biggus/__init__.py#L2878
> It handles newaxis outside of that function in:
> https://github.com/SciTools/biggus/blob/master/biggus/__init__.py#L537.
>
> Again, i
1) I very much agree that governance can make or break a project. However,
the actual governance approach often ends up making less difference than
the people involved.
2) While the FreeBSD and XFree examples do point to some real problems with
the "core" model it seems that there are many other p
> Googling for a way to print UTC out of the box, the best thing I could
> find is:
>
> In [40]: [str(i.item()) for i in np.array([t], dtype="datetime64[s]")]
> Out[40]: ['2015-08-26 11:52:10']
>
> Now, is there a better way to specify that I want the datetimes printed
> always in UTC?
>
maybe, bu
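A runnable sketch of the `.item()` trick quoted above -- datetime64 stores the timestamp internally as UTC, and converting each element to a Python datetime renders it without a local-time offset:

```python
# Sketch of the quoted approach: render datetime64 values as plain
# UTC strings via .item(), which yields datetime.datetime objects.
import numpy as np

t = np.datetime64("2015-08-26T11:52:10")
arr = np.array([t], dtype="datetime64[s]")
print([str(v.item()) for v in arr])  # ['2015-08-26 11:52:10']
```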
On Fri, Aug 14, 2015 at 10:15 AM, Stefan van der Walt
wrote:
> Perhaps mpl_toolkits should think of
> becoming mpl_3d, mpl_basemaps, etc.?
>
namespace packages are a fine idea, but the implementation(s) are just one
big kludge...
I think so, but we're getting off-topic here.
numpy doesn't us
On Fri, Aug 14, 2015 at 10:08 AM, Benjamin Root wrote:
> I used to be a huge advocate for the "develop" mode, but not anymore. I
> have run into way too many Heisenbugs that would clear up if I nuked my
> source tree and re-clone.
>
well, you do need to remember to clean out once in a while, whe
On Thu, Aug 13, 2015 at 11:25 AM, Stefan van der Walt
wrote:
> >(for
> > example "python setup.py develop", although suggested by
> > setup.py itself, claims that "develop" is not a command).
develop is a command provided by setuptools, not distutils itself.
I find it absolutely invaluable --
On Mon, Aug 3, 2015 at 11:05 AM, Sturla Molden
wrote:
> On 03/08/15 18:25, Chris Barker wrote:
>
> > [NOTE: is there a C long dtype? I can't find it at the moment...]
>
> There is, it is called np.int.
well, IIUC, np.int is the python integer type, which
On Sun, Aug 2, 2015 at 1:46 PM, Sturla Molden
wrote:
> On 02/08/15 22:28, Bryan Van de Ven wrote:
> > And to eliminate the order kwarg, use functools.partial to patch the
> zeros function (or any others, as needed):
>
> This will probably break code that depends on NumPy, like SciPy and
> scikit-
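A minimal sketch of the `functools.partial` idea quoted above -- pre-binding NumPy's `order` kwarg in a new name, rather than monkey-patching `np.zeros` itself (which, as the reply notes, would break downstream code):

```python
# Pre-bind order="F" (Fortran memory layout) without touching np.zeros.
import functools
import numpy as np

fzeros = functools.partial(np.zeros, order="F")
a = fzeros((2, 3))
print(a.flags["F_CONTIGUOUS"])  # True
```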
On Sun, Aug 2, 2015 at 5:13 AM, Sturla Molden
wrote:
> > A long is only machine word wide on posix, in windows its not.
>
> Actually it is the opposite. A pointer is 64 bit on AMD64, but the
> native integer and pointer offset is only 32 bit. But it does not matter
> because it is int that should
>> Turns out I was passing in numpy arrays that I had typed as "np.int".
>> It worked OK two years ago when I was testing only on 32 bit pythons,
>> but today I got a bunch of failed tests on 64 bit OS-X -- a np.int is
>> now a C long!
>
> It has always been C long. It is the C long that varies bet
(I got a run-time error).
And Yeah to me for having at least some basic tests.
But Boo to numpy for a very easy to confuse type API.
-Chris
Sent from my iPhone
> On Jul 31, 2015, at 10:45 AM, Sturla Molden wrote:
>
> Chris Barker wrote:
>
>> What about Fortan -- I've
On Thu, Jul 30, 2015 at 11:24 PM, Jason Newton wrote:
> This really needs changing though. scientific researchers don't catch
> this subtlety and expect it to be just like the c and matlab types they
> know a little about.
>
well, C types are a %&$ nightmare as well! In fact, one of the biggest
On Sun, Jul 26, 2015 at 11:19 AM, Sturla Molden
wrote:
> > we get away with np.float, because every OS/compiler that gets any
> regular
> > use has np.float == a c double, which is always 64 bit.
>
> Not if we are passing an array of np.float to a C routine that expects
> float*, e.g. in OpenGL,
On Fri, Jul 24, 2015 at 10:03 AM, Sturla Molden
wrote:
> > I don't see the issue. They are just aliases so how is np.float worse
> > than just float?
>
> I have burned my fingers on it.
>
I must have too -- but I don't recall, because I am VERY careful about not
using np.float, np.int, etc... bu
On Thu, Jul 2, 2015 at 6:18 PM, wrote:
> round, floor, ceil don't produce integers.
>
True -- in a dynamic language, they probably should, but that's legacy that
won't change.
It's annoying, but you do need to do:
int(round(want_it_to_be_an_index))
but as they say, explicit is better than im
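The explicit-conversion idiom above, as a runnable sketch:

```python
# round() on a float gives a value you should still convert explicitly
# before using it as an index.
import numpy as np

x = np.float64(3.6)
idx = int(round(x))  # explicit int for indexing
print(idx)  # 4
print([10, 20, 30, 40, 50][idx])  # 50
```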
Sent from my iPhone
>
> The disadvantage I see is, that some weirder calculations would possible
> work most of the times, but not always,
> not sure if you can define a "tolerance"
> reasonable here unless it is exact.
You could use a relative tolerance, but you'd still have to set that.
Bett
I _may_ be able to join -- but don't go setting up an alternative
conferencing system just for me.
But I'm planning on being in Austin Tues in any case.
-Chris
Sent from my iPhone
> On Jun 29, 2015, at 9:59 PM, Nathaniel Smith wrote:
>
>> On Fri, Jun 26, 2015 at 2:32 AM, Nathaniel Smith wrote
On Wed, Jun 17, 2015 at 11:13 PM, Nathaniel Smith wrote:
> there's some
> argument that in Python, doing explicit type checks like this is
> usually a sign that one is doing something awkward,
I tend to agree with that.
On the other hand, numpy itself is kind-of sort-of statically typed. But
On Tue, Jun 9, 2015 at 4:48 PM, Charles R Harris
wrote:
> Thanks for looking into this. It is depressing that Windows is so
> difficult to support.
>
yes, thanks!
You might try posting on python-dev -- there is at least one person on that
list trying to help get Windows builds working better!
On Sun, May 17, 2015 at 9:23 PM, Matthew Brett
wrote:
> I believe OpenBLAS does run-time selection too.
very cool! then an excellent option if we can get it to work (make that you
can get it to work, I'm not doing squat in this effort other than
nudging...)
I think we discussed before having a
On Sun, May 17, 2015 at 3:06 AM, Ralf Gommers
wrote:
> Binaries which crash for ~1% of users (which ATLAS-SSE2 would result in)
> are still not acceptable I think.
>
what instruction set would an OpenBLAS build support? wouldn't we still
need to select a lowest common denominator instructions se
On Sun, May 17, 2015 at 12:11 PM, Robert Kern wrote:
> I don't think permission from Intel is the blocking issue for putting
> these binaries up on PyPI. Even with Intel's permission, we would be
> putting up proprietary binaries on a page that is explicitly claiming that
> the files linked there
On Fri, May 15, 2015 at 6:56 PM, wrote:
> Unrelated to the pip/wheel discussion.
>
> In my experience by far the easiest to get something running to play with
> is using Winpython. Download and unzip (and maybe add to system path) and
> most of the data analysis stack is available.
>
Sure -- if
On Fri, May 15, 2015 at 1:07 PM, Chris Barker wrote:
>> Hi folks,
>>
>> I did a little "intro to scipy" session as part of a larger Python class the
> other day, and was dismayed to find that "pip install numpy" still doesn't
>> work on Windows
Hi folks,
I did a little "intro to scipy" session as part of a larger Python class
the other day, and was dismayed to find that "pip install numpy" still
doesn't work on Windows.
Thanks mostly to Matthew Brett's work, the whole scipy stack is
pip-installable on OS-X, it would be really nice if we
On Mon, Apr 13, 2015 at 5:02 AM, Neil Girdhar wrote:
> Can I suggest that we instead add the P-square algorithm for the dynamic
> calculation of histograms? (
> http://pierrechainais.ec-lille.fr/Centrale/Option_DAD/IMPACT_files/Dynamic%20quantiles%20calcultation%20-%20P2%20Algorythm.pdf
> )
>
T
On Thu, Mar 5, 2015 at 5:07 PM, Charles R Harris
wrote:
> Do line endings in the scripts matter?
>
I have no idea if powershell cares about line endings, but if you are using
git, then you'll want to make sure that your repo is properly configured to
normalize line endings -- then there should b
On Thu, Mar 5, 2015 at 8:42 AM, Benjamin Root wrote:
> dare I say... datetime64/timedelta64 support?
>
well, the precision of those is 64 bits, yes? so if you asked for less than
that, you'd still get a dt64. If you asked for 64 bits, you'd get it, if
you asked for datetime128 -- what would you
>> Are there plans to write a vectorized version for NumPy? :)
>
> np.isclose isn't identical, but IIRC the only difference is the defaults.
There are subtle differences in the algorithm as well. But not enough
that it makes sense to change the numpy one.
The results will be similar in most cases
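A small sketch of where the defaults diverge (`math.isclose` is symmetric with `rel_tol=1e-9, abs_tol=0`; `np.isclose` is asymmetric with `rtol=1e-5, atol=1e-8`):

```python
# At small magnitudes, np.isclose's nonzero atol default dominates,
# while math.isclose's abs_tol defaults to zero.
import math
import numpy as np

a, b = 1e-9, 2e-9
print(math.isclose(a, b))  # False: |a-b| > rel_tol * max(|a|, |b|)
print(np.isclose(a, b))    # True:  |a-b| <= atol + rtol*|b|
```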
On Tue, Feb 10, 2015 at 12:28 AM, Todd wrote:
> >> So maybe the better way would be not to add warnings to broadcasting
> operations,
> >> but to overhaul the matrix class
> >> to make it more attractive for numerical linear algebra(?)
>
> What about splitting it off into a scikit, or at least
On Mon, Feb 9, 2015 at 4:02 PM, cjw wrote:
> to overhaul the matrix class
> to make it more attractive for numerical linear algebra(?)
>
> +1
>
Sure -- though I don't know that this actually has anything to do with
broadcasting -- unless the idea is that Matrices would be broadcastable?
But any
On Sun, Feb 8, 2015 at 2:17 PM, Stefan Reiterer wrote:
> Till now the only way out of the misery
> is to make proper unit tests,
>
That's the only way out of the misery of software bugs in general --
nothing special here ;-)
Python is a dynamically typed language -- EVERYTHING could do someth
>
> > I was thinking elapsed time. Nanoseconds can be rather crude for that
> > depending on the measurement.
>
> Wouldn't the user just keep elapsed time as a
> count, or floating point number, in whatever units the instrument spits
> out? Why does it need to be treated in a different way from
Sorry not to notice this for a while -- I've been distracted by
python-ideas. (Nathaniel knows what I'm talking about ;-) )
I do like the idea of prototyping some DateTime stuff -- it really isn't
clear what's needed or how to do it at this point. Though we did more or
less settle on a reasonable
Though if you do have a work-around for when the step is not returned, it
will likely break if suddenly zero or NaN is returned, as well. So I'm not
sure there is a fully backward compatible fix.
-CHB
> For some reason, I don't think people would appreciate that.
>
> Be
On Tue, Jan 13, 2015 at 7:23 AM, Benjamin Root wrote:
> Oh, wow. I never noticed that before. Yeah, if I state that retstep=True,
> then I am coding my handling to expect two values to be returned, not 1. I
> think it should be nan, but I could also agree with zero. It should
> definitely remain
On Mon, Jan 12, 2015 at 7:23 PM, Alexander Belopolsky
wrote:
>
> On Mon, Jan 12, 2015 at 8:48 PM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
> > I've often fantasized getting rid of the long type altogether ;) So it
> isn't exactly intended, but there is a reason...
>
>
> It is also
On Mon, Dec 22, 2014 at 10:15 AM, Maniteja Nandana <
maniteja.modesty...@gmail.com> wrote:
> As of now, I have now deleted that branch and created a new branch,
> taking care of the git add option. But I couldn't find a way to make the
> previous pull request use this branch. So, it was closed an
On Mon, Dec 22, 2014 at 8:22 AM, Maniteja Nandana <
maniteja.modesty...@gmail.com> wrote:
>
> I have tried to solve Issue #5354 in a branch. Now if I try to compare my
> branch with numpy master, there were some auto generated files in my branch.
>
no time to take a look right now -- but you don't
On Mon, Dec 22, 2014 at 7:40 AM, Maniteja Nandana <
maniteja.modesty...@gmail.com> wrote:
>
> Thank you for the suggestion. I will remember this as a viable option. As
> of now, I have a virtual machine of ubuntu running on Windows. So, I wanted
> to have least overhead while running the VM.
>
As
On Wed, Dec 10, 2014 at 8:40 PM, Sturla Molden
wrote:
> > I haven't managed to trigger a segfault yet but it sure looks like I
> > could...
>
> You can also trigger random errors. If the array is small, Python's memory
> manager might keep the memory in the heap for reuse by PyMem_Malloc. And
> t
On Wed, Dec 10, 2014 at 11:44 AM, Andrea Gavana
wrote:
> The argument is not check_refs, but refcheck.
>
thanks -- yup, that works.
Useful -- but dangerous!
I haven't managed to trigger a segfault yet but it sure looks like I
could...
-CHB
--
Christopher Barker, Ph.D.
Oceanographer
Eme
On Tue, Dec 9, 2014 at 11:03 PM, Sturla Molden
wrote:
> Nathaniel Smith wrote:
>
> > @contextmanager
> > def tmp_zeros(*args, **kwargs):
> > arr = np.zeros(*args, **kwargs)
> > try:
> > yield arr
> > finally:
> > arr.resize((0,), check_refs=False)
>
> That one is inte
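A runnable version of the context manager quoted above -- note the actual `ndarray.resize` keyword is `refcheck` (as pointed out elsewhere in the thread), not `check_refs`:

```python
# Free an array's memory on exit by resizing it to zero length,
# skipping the reference-count check.
from contextlib import contextmanager
import numpy as np

@contextmanager
def tmp_zeros(*args, **kwargs):
    arr = np.zeros(*args, **kwargs)
    try:
        yield arr
    finally:
        arr.resize((0,), refcheck=False)

with tmp_zeros(5) as x:
    print(x.shape)  # (5,)
print(x.shape)      # (0,) -- memory released
```

This is exactly the "dangerous" part discussed: any view of `x` taken inside the block will point at freed memory afterward.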
On Tue, Dec 9, 2014 at 7:01 AM, Sturla Molden
wrote:
>
> I wonder if ndarray should be a context manager so we can write
> something like this:
>
>
> with np.zeros(n) as x:
>[...]
>
>
> The difference should be that __exit__ should free the memory in x (if
> owned by x) and make x a z
Getting a bit OT here, but...
On Mon, Dec 1, 2014 at 10:46 AM, Fabien wrote:
> thanks for the hint! I am trying to gather information about how to make
> to make an interactive website with a python model running on a
> webserver and making plots. There's a bunch of tools out there, it's
> hard
On Mon, Dec 1, 2014 at 6:17 AM, Fabien wrote:
> Just out of curiosity: is Bokeh also able to provide interactive charts
> as Brython does?
>
> Example:
>
> http://www.brython.info/gallery/highcharts/examples/area-stacked/index_py.htm
not really your question, but that example is using Brython t
On Sat, Nov 1, 2014 at 7:31 AM, Warren Weckesser wrote:
> (2) Multiple arrays in a single file:
>
> ...
> The file contains multiple arrays. Each array is
> preceded by a line containing the number of rows
> and columns in that array. The `max_rows` argument
> would make it easy to read this fi
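A sketch of reading the multi-array file layout described above. Rather than assuming the proposed `max_rows` behavior, this splits the text by the per-array header lines and feeds each chunk to `np.loadtxt`:

```python
# Each array is preceded by a "nrows ncols" header line.
import io
import numpy as np

text = """2 3
1 2 3
4 5 6
1 3
7 8 9
"""
lines = text.splitlines()
arrays, i = [], 0
while i < len(lines):
    nrows, ncols = (int(v) for v in lines[i].split())
    chunk = "\n".join(lines[i + 1:i + 1 + nrows])
    arr = np.loadtxt(io.StringIO(chunk), ndmin=2)
    assert arr.shape == (nrows, ncols)  # header must match the data
    arrays.append(arr)
    i += 1 + nrows
print([a.shape for a in arrays])  # [(2, 3), (1, 3)]
```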
On Tue, Oct 28, 2014 at 1:24 PM, Nathaniel Smith wrote:
> > Memory efficiency -- something like my growable array is not all that
> hard to implement and pretty darn quick -- you just do the usual trick:
> over-allocate a bit of memory, and when it gets full re-allocate a larger
> chunk.
>
> Can'
A few thoughts:
1) yes, a faster, more memory efficient text file parser would be great.
Yeah, if your workflow relies on parsing lots of huge text files, you
probably need another workflow. But it's a really really common thing to
need to do -- why not do it fast?
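The over-allocation trick quoted above, as a minimal sketch (`GrowableArray` is a hypothetical name, not a NumPy API):

```python
# Doubling capacity on overflow gives amortized O(1) appends.
import numpy as np

class GrowableArray:
    def __init__(self, dtype=float):
        self._buf = np.empty(8, dtype=dtype)
        self._n = 0

    def append(self, value):
        if self._n == len(self._buf):
            # full: re-allocate a larger chunk and copy the old data over
            new = np.empty(2 * len(self._buf), dtype=self._buf.dtype)
            new[:self._n] = self._buf[:self._n]
            self._buf = new
        self._buf[self._n] = value
        self._n += 1

    def to_array(self):
        return self._buf[:self._n].copy()

g = GrowableArray()
for v in range(100):
    g.append(v)
print(g.to_array().shape)  # (100,)
```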
2) """you are describing a speci
Sorry about SWIG -- maybe a chance to move on ;-)
I'd go with Cython -- this is pretty straightforward, and it handles the
buffer protocol for you under the hood.
And with XDress, you can get numpy wrapped std::vector out of the box, I
think:
https://s3.amazonaws.com/xdress/index.html
if you RE
On Tue, Oct 14, 2014 at 11:30 AM, Fadzil Mnor
wrote:
> I've been trying to install IRIS on my laptop (OS X) for months. Errors
> everywhere.
> I'll look at that IRIS again, and other links.
>
IRIS has been an install challenge, but has gotten better.
And you may even find a conda package for it if y
On Tue, Sep 23, 2014 at 4:40 AM, Eric Moore wrote:
> Improving the dtype system requires working on c code.
>
yes -- it sure does. But I think that is a bit of a Red Herring. I'm barely
competent in C, and don't like it much, but the real barrier to entry for
me is not that it's in C, but that
On Thu, Sep 18, 2014 at 10:44 AM, Petr Viktorin wrote:
> > 2) the use-cases of the math lib and numpy are different, so they maybe
> > _should_ have different handling of this kind of thing.
>
> If you have a reason for the difference, I'd like to hear it.
For one, numpy does array operations,
Well,
First of all, numpy and the python math module have a number of differences
when it comes to handling these kind of special cases -- and I think that:
1) numpy needs to do what makes the most sense for numpy and NOT mirror the
math lib.
2) the use-cases of the math lib and numpy are differ
On Wed, Aug 6, 2014 at 8:32 AM, Charles R Harris
wrote:
> Should also mention that we don't have the ability to operate on stacked
> vectors because they can't be identified by dimension info. One workaround
> is to add dummy dimensions where needed, another is to add two flags, row
> and col, an
one more note:
>
>> If I download the zip file and try to use setup.py, I get messages like
>>
>>
>>
>> “No module named msvccompiler in numpy.distutils: trying from distutils
>>
>> error: unable to find vcvarsall.bat”
>>
>>
>>
>> I have no idea what this means or what to do about it.
>>
>
It mean
On Wed, Jul 30, 2014 at 1:36 PM, Jeffrey Ken Smith
wrote:
> I have been unable to install on my Windows 7 desktop computer, which is
> a Dell – I had no problems installing it on my new laptop, which is also a
> Dell. When I try to run the superpack .exe file, I get a message claiming
> that Pyt
On Fri, Jul 18, 2014 at 1:15 PM, Joseph Martinot-Lagarde <
joseph.martinot-laga...@m4x.org> wrote:
> In addition,
> you have to use AltGr on some keyboards to get the brackets.
If it's hard to type square brackets -- you're kind of dead in the water
with Python anyway -- this is not going to hel
On Fri, Jul 18, 2014 at 12:52 PM, Andrew Collette wrote:
> > What it would do is push the problem from the HDF5<->numpy interface to
> the
> > python<->numpy interface.
> >
> > I'm not sure that's a good trade off.
>
> Maybe I'm being too paranoid about the truncation issue.
Actually, I agree a
On Fri, Jul 18, 2014 at 12:43 PM, Pauli Virtanen wrote:
> 18.07.2014 22:13, Chris Barker kirjoitti:
> [clip]
> > but an appropriate rtol would work there too. If only zero testing is
> > needed, then atol=0 makes sense as a default. (or maybe atol=eps)
>
> There's
On Fri, Jul 18, 2014 at 9:59 AM, Nathaniel Smith wrote:
> IMO the extra characters aren't the most compelling argument for
> latin1 over ascii. Latin1 gives the nice assurance that if some jerk
> *does* give me an "ascii" file that somewhere has some byte with the
> 8th bit set, then I can still
On Fri, Jul 18, 2014 at 10:29 AM, Andrew Collette wrote:
> The root of the issue is that HDF5 provides a limited set of
> fixed-storage-width string types, and a fixed-storage-width NumPy type
> of the same size using Latin-1 can't map to any of them without losing
> data. For example, if "a10"
On Fri, Jul 18, 2014 at 9:59 AM, Nathaniel Smith wrote:
> IMO the extra characters aren't the most compelling argument for
> latin1 over ascii. Latin1 gives the nice assurance that if some jerk
> *does* give me an "ascii" file that somewhere has some byte with the
> 8th bit set, then I can still
On Fri, Jul 18, 2014 at 11:47 AM, Pauli Virtanen wrote:
> Using allclose in non-test code without specifying both tolerances
> explicitly is IMHO a sign of sloppiness, as the default tolerances are
> both pretty big (and atol != 0 is not scale-free).
>
using it without specifying tolerances is s
On Fri, Jul 18, 2014 at 11:49 AM, Nathaniel Smith wrote:
> Going through np.mat also fails on the meta-goal, which is to remove
> reasons for people to prefer np.matrix to np.ndarray, so that eventually we
> can deprecate the former without harm.
>
> As far as this goal goes, it's all very well f
number of people with
skills, time, and inclination to work on the core numpy code. Exactly one
of them (thanks Chuck!) was there for the sprints this year. If there were
a way to put together a stand-alone numpy sprint at some point, that would
be really great!
In particular, Chris Barker brought up
On Fri, Jul 18, 2014 at 6:18 AM, Charles R Harris wrote:
> I've toyed some with the idea of adding a flag bit for transpose of 1-d
> arrays. It would flip with every transpose and be ignored for non 1-d
> arrays. A bit of a hack, but would allow for a column/row vector
> distinction.
>
very cool
On Fri, Jul 18, 2014 at 9:53 AM, Nathaniel Smith wrote:
> > I don't know what the usecases for np.allclose are since I don't have
> any.
>
I use it all the time -- sometimes you want to check something, but not
raise an assertion -- and I use it like:
assert np.allclose()
with pytest, because
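The pattern described, as a sketch (`scale` here is a hypothetical function under test):

```python
# A bare assert with np.allclose: pytest reports a plain assertion
# failure rather than numpy.testing's verbose comparison output.
import numpy as np

def scale(x, factor):
    return np.asarray(x, dtype=float) * factor

result = scale([1.0, 2.0, 3.0], 2.0)
assert np.allclose(result, [2.0, 4.0, 6.0])
print("ok")
```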
On Fri, Jul 18, 2014 at 9:32 AM, Andrew Collette
wrote:
> >> A Latin-1 based 'a' type
> >> would have similar problems.
> >
> > Maybe not -- latin1 is fixed width.
>
> Yes, Latin-1 is fixed width, but the issue is that when writing to a
> fixed-width UTF8 string in HDF5, it will expand, possibly
On Fri, Jul 18, 2014 at 9:07 AM, Pauli Virtanen wrote:
> Another approach would be to add a new 1-byte unicode
you can't do unicode in 1-byte -- so what does this mean, exactly?
> This also is not perfect, since array(['foo']) on Py2 should for
> backward compatibility continue returning dtyp
On Fri, Jul 18, 2014 at 3:33 AM, Nathaniel Smith wrote:
> > 2) A bytes types -- almost the current 'S' type
> > - A bytes type would map to/from py3 bytes objects (and py2 bytes
> > objects, which are the same as py2strings)
> > - one way is would differ from a py2str is that there would
On Thu, Jul 17, 2014 at 8:48 AM, Nathaniel Smith wrote:
> I'd be very concerned about backcompat for existing code that uses
> e.g. "S128" as a dtype to mean "128 arbitrary bytes".
yup -- 'S' matches the py2 string well, which is BOTH text and bytes. That
should not change -- at least in py2.
On Wed, Jul 16, 2014 at 3:48 AM, Todd wrote:
> On Jul 16, 2014 11:43 AM, "Chris Barker" wrote:
> > So numpy should have dtypes to match these. We're a bit stuck, however,
> because 'S' mapped to the py2 string type, which no longer exists in py3.
> Sorry
On Tue, Jul 15, 2014 at 4:26 AM, Sebastian Berg
wrote:
> Just wondering, couldn't we have a type which actually has an
> (arbitrary, python supported) encoding (and "bytes" might even just be a
> special case of no encoding)?
well, then we're back to the core issue here:
numpy dtypes need to
> But HDF5
> additionally has a fixed-storage-width UTF8 type, so we could map to a
> NumPy fixed-storage-width type trivially.
Sure -- this is why *nix uses utf-8 for filenames -- it can just be a
char*. But that just punts the problem to client code.
I think a UTF-8 string type does not match t
On Mon, Jul 14, 2014 at 10:39 AM, Andrew Collette wrote:
> For storing data in HDF5 (PyTables or h5py), it would be somewhat
> cleaner if either ASCII or UTF-8 are used, as these are the only two
> charsets officially supported by the library.
good argument for ASCII, but utf-8 is a bad idea,
On Sat, Jul 12, 2014 at 10:17 AM, Charles R Harris <
charlesr.har...@gmail.com> wrote:
> As previous posts have pointed out, Numpy's `S` type is currently treated
> as a byte string, which leads to more complicated code in python3.
>
Also, a byte string in py3 is not, in fact the same as the py2
On Jul 7, 2014, at 7:28 AM, Sebastian Berg wrote:
> not sure that many use np.r_ or np.c_
I actually really like those ;-)
-Chris
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
If you are going to introduce this functionality, please don't call it
np.arr.
I agree, but...
I would suggest calling it something like np.array_simple or
np.array_from_string, but the best choice IMO, would be
np.ndarray.from_string (a static constructor method).
Except the entire point of h
2014 16:59, Chris Barker wrote:
> HI Folks,
>
> I will be hosting a "Teaching the SciPy Stack" BoF at SciPy this year:
>
> https://conference.scipy.org/scipy2014/schedule/presentation/1762/
>
> (Actually, I proposed it for the conference, but would be more than ha
HI Folks,
I will be hosting a "Teaching the SciPy Stack" BoF at SciPy this year:
https://conference.scipy.org/scipy2014/schedule/presentation/1762/
(Actually, I proposed it for the conference, but would be more than happy
to have other folks join me in facilitating, hosting, etc.)
I've put up
On Wed, Jul 2, 2014 at 10:36 AM, Julian Taylor <
jtaylor.deb...@googlemail.com> wrote:
we recently fixed a float32/float64 issue in histogram.
> https://github.com/numpy/numpy/issues/4799
It's a good idea to keep the edges in the same dtype as the input data, it
will make for fewer surprises, bu
On Wed, Jul 2, 2014 at 6:36 AM, Matthew Brett
wrote:
>
> Having a noSSE channel would make sense.
>
>
Indeed -- the default (i.e. what you get with pip install numpy) should be
SSE2 -- I'd much rather have a few folks with old hardware go through some
hoops than have most people get som
On Wed, Jul 2, 2014 at 3:37 AM, Matthew Brett
wrote:
> It looks like
> 99% of Windows users do have SSE2 though [1]. So I think what is
> required is
>
> * Build the wheels for 32-bit (easy)
> * Patch the wheels to check and give helpful error in absence of SSE2
> (fairly easy)
> * Get agreemen
On Wed, Jul 2, 2014 at 7:57 AM, Mark Szepieniec wrote:
> Looks this could be a float32 vs float64 problem:
>
that would explain it.
> I guess users always be very careful when mixing floating point types, but
> should numpy prevent (or warn) the user from doing so in this case?
>
I don't thin
On Wed, Jul 2, 2014 at 3:46 AM, Julian Taylor wrote:
> numpy does not directly support irregular shaped arrays (or ragged arrays).
> If you look at the result of your example you will see this:
> In [5]: b
> Out[5]: array([array([ 1., 2., 3.]), array([-2., 4.]), array([
> 5.])], dtype=object)
A few thoughts:
1) don't use arange() for floating point numbers, use linspace().
2) histogram1d is a floating point function, and you shouldn't expect exact
results for floating point -- in particular, values exactly at the bin
boundaries are likely to be "uncertain" -- not quite the right word,
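Points 1) and 2) above, as a short sketch:

```python
# linspace computes evenly spaced edges and sets the endpoints exactly;
# arange builds them by repeated addition of a step like 0.1, which is
# not exactly representable, so values near bin boundaries can drift.
import numpy as np

edges = np.linspace(0.0, 1.0, 11)  # 11 exact bin edges for 10 bins
print(edges[0] == 0.0, edges[-1] == 1.0)  # True True
```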