Re: Seeking a small group to package Apache Arrow (was: Bug#970021: RFP: apache-arrow -- cross-language development platform for in-memory analytics)

2024-03-30 Thread Diane Trout
Hi Julian,

On Sat, 2024-03-30 at 20:22 +, Julian Gilbey wrote:
> Lovely to hear from you, and oh wow, that's amazing, thank you!
> 
> I can't speak for anyone else, but I suggest that pushing your
> updates
> to the science-team package would be very sensible; it would be silly
> for someone else to have to redo your work.
> 
> What more is needed for it to be ready for unstable?


The things I think are kind of broken are:

We've got 7.0.0 and upstream's current version is 15.0.2.

The pyarrow 7.0.0 tests fail because they depend on a Python test
library that breaks with pytest 8.0. Either I need to disable the
Python tests or upgrade to a newer upstream version.
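As a rough sketch of the short-term option, something like this in
debian/rules would do it (the -k filter expression is hypothetical; the
actual failing tests would need to be identified first):

  # Skip the tests that need the abandoned lazy-fixture plugin...
  export PYBUILD_TEST_ARGS=-k "not lazy_fixture"
  # ...or disable the Python test suite entirely until the upgrade lands:
  # export PYBUILD_DISABLE=test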

My upgrade didn't go smoothly because uscan also found upstream's
debian/watch file, which is too loose and matches some other tarballs on
their distribution site.

(Though I don't know why uscan keeps looking for watch files after
finding one in debian/watch.)

And you were probably right that arrow needs to be a team effort, because
I have no idea how to get the other language interfaces packaged.

Oh, and I probably need to get pyarrow installed somewhere; since the
build was stopping at the tests, I hadn't run into the dh_missing errors yet.

Diane



Re: Seeking a small group to package Apache Arrow (was: Bug#970021: RFP: apache-arrow -- cross-language development platform for in-memory analytics)

2024-03-29 Thread Diane Trout
On Mon, 2024-03-25 at 18:17 +, Julian Gilbey wrote:
> 
> 
> So this is a plea for anyone looking for something really helpful to
> do: it would be great to have a group of developers finally package
> this!  There was some initial work done (see the RFP bug report for
> details: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=970021),
> but that is fairly old now.  As Apache Arrow supports numerous
> languages, it may well benefit from having a group of developers with
> different areas of expertise to build it.  (Or perhaps it would make
> more sense to split the upstream source into a collection of
> different
> Debian source packages for the different supported languages.  I
> don't
> know.)  Unfortunately I don't have the capacity to devote any time to
> it myself.
> 
> Thanks in advance for anyone who can step forward for this!

I've been maintaining dask and anndata and saw that Apache Arrow was
getting increasingly popular.

I took the science team's current preliminary 7.0.0 packaging and
managed to get it to build through a combination of patches and
turning off features.

I even mostly managed to get pyarrow to build. (Though some tests fail
due to pytest-lazy-fixture being abandoned.)

I pushed my current work in progress to:

https://salsa.debian.org/diane/arrow.git

Was anyone else planning on working on it or should I push my updates
to the science-team package?

Diane



Re: dask.distributed RC bug #1042135

2023-08-11 Thread Diane Trout
> > 
> 
> Thanks so much!  I see you've already started on dask :)
> 
> I took at quick look at arrow - yikes!  There is potentially work
> afoot on this though:
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=970021
> 

Dask & dask.distributed 2023.8.0 were easier to update than some of the
other versions they had between 2022.12 and now.

Dask would still benefit from pyarrow, but I added enough
pytest.importorskip calls to avoid triggering the tests that depend on
pyarrow.
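For reference, the guard pattern is just this (the module below is a
hypothetical example, not one of the real dask tests):

  import pytest

  # Skips every test in this module when pyarrow isn't installed,
  # instead of failing at import time.
  pa = pytest.importorskip("pyarrow")

  def test_table_roundtrip():
      table = pa.table({"x": [1, 2, 3]})
      assert table.num_rows == 3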

It also looks like it builds for me and on the Debian builders, so I
closed #1042135.

Hopefully that helps. (And it looks like it's got some code for pandas
2.0, so hopefully that'll help Rebecca Palmer.)

Diane



Re: Updating python3-xlrd for pandas 1.5 compatibility

2023-02-24 Thread Diane Trout
On Fri, 2023-02-24 at 19:33 +0100, Paul Gevers wrote:
> Hi Diane,
> 
> On 23-02-2023 08:12, Diane Trout wrote:
> > the version of python3-xlrd 1.2.0-3 in unstable/testing is too old
> > to
> > be used with pandas 1.5.3. (See Bug #1031701).
> 
> Do I understand correctly that this isn't an issue from the point of 
> python3-xlrd and that only pandas is effected? While investigating
> for 
> this reply I noticed src:pandas doesn't even have a dependency in any
> of 
> its binaries.

It looks like the xlrd dependency was commented out because the Debian
version is too old, though apparently that was done 7 months ago.

https://salsa.debian.org/science-team/pandas/-/blob/main/debian/control#L45

Here's the pandas module that conditionally uses xlrd if it's
available.

https://salsa.debian.org/science-team/pandas/-/blob/main/pandas/io/excel/_xlrd.py

> 
> > As it is a really common
> > workflow to use pandas to read excel files, it'd be nice if the
> > version
> > of xlrd in bookworm was compatible.
> 
> As the maintainer of pandas, do you consider it an RC issue that
> pandas 
> can't convert it? I guess not because you say "it'd be nice" and you 
> don't even have the required dependency. How severe do you consider
> this 
> issue for pandas? pandas has a quite extensive autopkgtest, doesn't
> it 
> cover this use case? Apparently you knew this earlier, why do you
> bring 
> this up now?

The issue is somewhere between a minor and a normal bug; it breaks a
small component of the library.

I wouldn't claim to be a maintainer of pandas; I feel Rebecca Palmer
has been doing the vast majority of the work keeping pandas updated in
Debian.

I started investigating this after my coworker ran into it while trying
to process an .xls file, and when I looked I saw someone else had also
recently filed the same bug report.

> 
> > Because of the freeze I wanted to check if it was appropriate to
> > upload
> > the new version,
> 
> I'd hope that the "rules" are clear: 
> https://release.debian.org/testing/freeze_policy.html#soft. You can 
> contact the Release Team if you need further clarification.
> 
> > and what kind of warning I should give to the other
> > developers.
> 
> It depends. I'm worried about what you write below.

That's fair.

The counter argument is that xlrd's support for handling the XML-based
.xlsx files has been unsafe since Python 3.9, and for a while the
recommendation has been to switch to another package like openpyxl to
handle xlsx files.

(The xlrd release announcement thread mentions the removal and then
goes on to discuss the security issues:)
https://groups.google.com/g/python-excel/c/IRa8IWq_4zk/m/Af8-hrRnAgAJ

The reason the issue doesn't show up much is that .xls files are deprecated
by nearly everyone; it only shows up when you're reading old data or data
generated by old software.

The reason this is likely a minor issue is that there's a simple
workaround: convert your .xls file to an .xlsx file.
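For example, something along these lines (file names are just
placeholders) converts a legacy workbook once, after which openpyxl
handles it:

  import pandas as pd

  # Reading the old-format .xls still goes through xlrd; writing the
  # .xlsx copy uses openpyxl, which is the supported path going forward.
  df = pd.read_excel("legacy.xls")
  df.to_excel("converted.xlsx", index=False)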

Here's the pandas discussion about deprecating xlrd for xlsx files:
https://github.com/pandas-dev/pandas/issues/28547

 
> > Here's the list of packages I found that have any relationship to
> > python-xlrd, if it looked like the autopkgtests actually tested
> > using
> > the xlrd library and what the level of declared dependency is.
> > (none
> > means the package lacks autopackage tests)
> > 
> > > nemo | none | Recommends    |
> > > odoo-14  | none | Depends   |
> > > ofxstatement-plugins | none | Depends   |
> > > psychopy | unlikely | Depends   |
> > > python3-agateexcel   | yes  | Depends   |
> > > python3-canmatrix    | no   | Recommends    |
> > > python3-drslib   | no   | Recommends    |
> > > python3-glue | yes  | Depends   |
> > > python3-pyspectral   | probably | Suggests  |
> > > python3-rows | unlikely | Recommends    |
> > > python3-tablib   | unlikely | Depends   |
> > > visidata | none | Build-Depends |
> > > vistrails    | none | Build-Depends |
> > > python-xrt   | none | Build-Depends |
> > > pyutilib | none | Build-Depends |
> 
> If I read everything correctly, it seems like you're too late with
> this 
> change.


With a bit more wakefulness, I looked through the packages that have
any dependency on xlrd.

I think odoo-14 is the package most likely to have issues. They use
xlrd and seem to expect to be able to read and write xls & xlsx files
using xlrd. Needless to say, updating xlrd would then break the ability
to process xlsx files. Tho

Updating python3-xlrd for pandas 1.5 compatibility

2023-02-22 Thread Diane Trout
Hi,

The version of python3-xlrd (1.2.0-3) in unstable/testing is too old to
be used with pandas 1.5.3 (see Bug #1031701). As it is a really common
workflow to use pandas to read Excel files, it'd be nice if the version
of xlrd in bookworm were compatible.

Because of the freeze I wanted to check if it was appropriate to upload
the new version, and what kind of warning I should give to the other
developers.

The xlrd changelog says the biggest change in going from 1.2 to 2.0 was
that they removed the ability to read the newer XML-based Excel files
(.xlsx) from xlrd in favor of using openpyxl.

I updated the source package python-xlrd to 2.0.1 and sent it through
experimental, where there were no issues detected by packages that had
CI tests.

Unfortunately there are packages without tests.

Here's the list of packages I found that have any relationship to
python-xlrd, whether the autopkgtests looked like they actually exercise
the xlrd library, and the level of declared dependency ("none" means the
package lacks autopkgtests).

| nemo | none | Recommends|
| odoo-14  | none | Depends   |
| ofxstatement-plugins | none | Depends   |
| psychopy | unlikely | Depends   |
| python3-agateexcel   | yes  | Depends   |
| python3-canmatrix| no   | Recommends|
| python3-drslib   | no   | Recommends|
| python3-glue | yes  | Depends   |
| python3-pyspectral   | probably | Suggests  |
| python3-rows | unlikely | Recommends|
| python3-tablib   | unlikely | Depends   |
| visidata | none | Build-Depends |
| vistrails| none | Build-Depends |
| python-xrt   | none | Build-Depends |
| pyutilib | none | Build-Depends |

Thanks
Diane



Re: Bug#1030096: Any ideas Re: #1030096 dask.distributed intermittent autopkgtest fail ?

2023-02-06 Thread Diane Trout
On Mon, 2023-02-06 at 21:39 +, Rebecca N. Palmer wrote:
> I agree that xfailing the tests *may* be a reasonable solution.  I'm 
> only saying that it should be done by someone with more idea than me
> of 
> whether these particular tests are important, because blindly
> xfailing 
> everything that fails is effectively not having tests.
> 
> If we do choose that approach, at least test_balance_expensive_tasks 
> needs to be an outright xfail/skip not just a flaky, because when it 
> fails it fails repeatedly.

So my efforts at debugging are made harder by the fact that it works for
me. I'm using

a9771f68a28dfc65cae3ac6acf70451c264f3227

from Debian HEAD.

= 2745 passed, 93 skipped, 216 deselected, 18 xfailed, 8 xpassed in
1992.20s (0:33:12) =  

I looked at the last log on ci.debian.org for dask.distributed
https://ci.debian.net/data/autopkgtest/unstable/amd64/d/dask.distributed/31090863/log.gz

And it looks like several of those errors are networking related.

CI with the previously released 2022.12.1+ds.1-1 version is failing
with these tests:

test_defaults 
test_hostport 
test_file 
test_default_client_server_ipv6[tornado] 
test_default_client_server_ipv6[asyncio] 
test_tcp_client_server_ipv6[tornado] 
test_tcp_client_server_ipv6[asyncio] 
test_only_local_access 
test_remote_access 
test_adapt_then_manual 
test_local_tls[True] 
test_local_tls[False] 
test_run_spec 
test_balance_expensive_tasks[enough work to steal] 

I think several of those may depend on a proper network. The host I'm
using actually has both IPv4 and IPv6 working. I'm using sbuild,
automatically running autopkgtests, on an oldish 8-core (2x4) Xeon server
with ~24 GB of RAM.

What's your test environment like?

I don't think HEAD is hugely different from what was released in -1.

From the diff, it looks like Andreas adjusted the dask dependency version,
configured a Salsa CI run, and added some upstream metadata files.

He had problems with a Salsa build failure, but that was with i386; I'm
currently setting up i386 to see if I can replicate the Salsa failure.

Diane



Re: Any ideas Re: #1030096 dask.distributed intermittent autopkgtest fail ?

2023-02-06 Thread Diane Trout
On Mon, 2023-02-06 at 11:13 +0100, Andreas Tille wrote:
> Hi Rebecca,
> 
> Am Mon, Feb 06, 2023 at 07:59:17AM + schrieb Rebecca N. Palmer:
> > (Background: the pandas + dask transition broke dask.distributed
> > and it was
> > hence removed from testing; I didn't notice at the time that if we
> > don't get
> > it back in we lose Spyder.)
> 
> as far as I know Diane has put quite some effort into dask and I
> understood that dask and dask.distributed are closely interconnected.
>  

Hi,

My fragments of time were spent fighting with numba, and I didn't have
the energy to be thinking about dask.distributed.

Numba should be in a better place right now, so I can set my build
machine to building it and see where we are with it.

The most important thing about dask / dask.distributed is that they really
should be at about the same upstream version. I'm not 100% sure how to
mark that in the d/control file. Also, upstream might have some ability
to do minor releases independently.
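One rough way to express the coupling in d/control is a versioned
dependency like this (a sketch only; the version window is hypothetical
and I haven't checked how upstream numbers their point releases):

  Package: python3-distributed
  Depends: python3-dask (>= 2023.8), python3-dask (<< 2023.9),
   ${misc:Depends}, ${python3:Depends}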

But if we do a new upstream release of dask, it needs to be paired with
a new upstream release of dask.distributed. And in my experience
dask.distributed is the one that's harder to get to work right.

Diane




Bug#1013092: ITP: python3-sphinx-autosummary-accessors -- sphinx autosummary extension to pandas or xarray accessors

2022-06-16 Thread Diane Trout
Package: wnpp
Owner: Diane Trout 
Severity: wishlist

* Package name: python3-sphinx-autosummary-accessors
  Version : 2022.4.0-1
  Upstream Author : Justus Magin 
* URL or Web page : 
https://github.com/xarray-contrib/sphinx-autosummary-accessors
* License : MIT
  Description : sphinx autosummary extension to pandas or xarray accessors

This is a new dependency for building the documentation for dask.

One confusing issue is the project is marked as being MIT licensed, but
includes the pandas BSD-3 license because some of this project was
derived from pandas.

Unfortunately there's nothing that says what files were derived from
pandas.

So my copyright file marks everything as MIT / Expat, but includes the
pandas BSD license block though I don't know what to attach it to.

I was planning on adding this to the debian python team.

Diane Trout



Re: Bug#982417: Python louvain packages naming confusion.

2021-02-10 Thread Diane Trout
On Wed, 2021-02-10 at 18:35 -0500, Sandro Tosi wrote:
> +Steffen explicitly, given the team is not in Maintainer nor
> Uploaders
> 
> > How about renaming the current python3-louvain package to
> > python3-community-louvain using a normal transition package.
> 
> that's incorrect: src:python-louvain builds a module called
> `community` (that includes also a cli tool), so the resulting binary
> package should either be `python3-community` or `community` where the
> cli is the main product and the module is installed in /usr/share/ to
> support it.
> 

It is bending policy, but I liked python3-community-louvain because the
package name "python3-community" is just an exceptionally vague name. I
think it's clearer if the name of the algorithm is in the package name.

Also while looking through scanpy's louvain function I learned of yet
more implementations of louvain. (Look through this function for
different "flavors")
https://github.com/theislab/scanpy/blob/550b82fdb53f35890e60343b826dd19454600bdb/scanpy/tools/_louvain.py#L23

Apparently what I'd previously called "igraph", scanpy calls "vtraag"
because the vtraag version builds on the louvain implementation
in python-igraph.

There's also yet another version in nvidia's rapids library.

> > Then I can submit the other louvain package using a binary package
> > name
> > of python3-louvain-igraph.
> 
> this is incorrect too: (perspective) src:louvain (or
> src:louvain-igraph, as the upstream called their repo) builds a
> module
> called `louvain` so the resulting binary package should be
> `python3-louvain` eventually conflicting with the existing package
> (<<
> current-version-in-sid)
> 
> src:python-louvain is in a pretty bad shape: it received a single
> upload in late 2018, it has an RC bug since a *day* after that
> upload,
> and it has never been in testing: tbh i dont consider this package to
> be maintained/targeting any stable release, so i believe you can
> "take
> over" the namespace given you seem to show interest in maintaining
> https://github.com/vtraag/louvain-igraph

I wasn't sure how much effort to put into saving the previous Debian
louvain package since it didn't look like it was really usable by
anyone.

Libraries.io makes it look like the python-louvain "import community"
version is a bit more popular and more actively developed. The "vtraag"
version has a comment in its README saying the developer deprecated it
in favor of leidenalg (an improved method).

On the other hand, scanpy thinks the vtraag version is the most
feature-complete implementation of louvain and uses it as the default.

Diane



Re: Python louvain packages naming confusion.

2021-02-10 Thread Diane Trout
> 
> In the short term I recommend fixing this by adding a file to the
> Debian python-louvain package named "debian/tests/autopkgtest-pkg-
> python.conf" with the contents "import_name = community"
> 

How about renaming the current python3-louvain package to 
python3-community-louvain using a normal transition package.

(I got the new name from wRAR pointing out that upstream even suggests
"import community as community_louvain")

Then I can submit the other louvain package using a binary package name
of python3-louvain-igraph.

Eventually we can just drop the python3-louvain package in favor of the
more specific names.

Diane




Re: Bug#982417: Python louvain packages naming confusion.

2021-02-10 Thread Diane Trout
On Wed, 2021-02-10 at 10:29 +0100, Michael R. Crusoe wrote:
> 
> In the short term I recommend fixing this by adding a file to the
> Debian python-louvain package named "debian/tests/autopkgtest-pkg-
> python.conf" with the contents "import_name = community"
> 

Thank you!

I had a hunch there was an override option, but I couldn't find it.

Diane




Re: Python louvain packages naming confusion.

2021-02-09 Thread Diane Trout
On Wed, 2021-02-10 at 01:49 +, Paul Wise wrote:
> On Tue, Feb 9, 2021 at 10:21 PM Diane Trout wrote:
> 
> > The fairly popular (in the world of bioinformatics) ScanPy package
> > uses
> > a Python version of the louvain clustering algorithm implemented
> > by:
> ...
> > However currently in the Debian archive there's a different louvain
> > package
> 
> I think this is something that the two upstream projects should
> discuss and come to an agreement on the right outcome, since the
> current set of names is confusing and overlapping. Perhaps the two
> projects will end up getting merged into one project, or one of them
> deprecated or one or both of them renamed.

From the perspective of PyPI:

One is "louvain" (which installs into "louvain" and one is "python-
louvain", which installs into "community".

If you're using pip you can easily install both of them if you want.

> 
> > I was wondering if the python3-louvain's binary package should be
> > renamed to python3-community to match the python package name, and
> > then
> > the other louvain-igraph package could provide a bin package named
> > python3-louvain which would match the package name.
> 
> There are no reverse dependencies in Debian, but this is going to be
> tricky for users who previously installed python3-louvain.deb from
> python-louvain upstream and then after upgrading they suddenly get
> python3-louvain.deb from louvain-igraph with presumably an
> incompatible API etc.
> 

Cleaning it up seemed hard.

Currently the version of python3-louvain in unstable, based on
python-louvain, is 0.0+20181013git3fc1f575, while the current upstream
version is 0.15.

For the louvain-igraph package, the current upstream version is 0.7.0.

At the very least, the current python3-louvain package needs to be
renamed to python3-community to meet python policy and to make the CI
autodep8 import test work.

So that suggests making a new release of python-louvain using the
0.0git convention, with a transition package that depends on a new
python3-community package.
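A rough d/control sketch of that shape (descriptions and everything
beyond the package names discussed above are placeholders):

  Package: python3-community
  Architecture: all
  Depends: ${misc:Depends}, ${python3:Depends}
  Description: Louvain community detection (import community)

  Package: python3-louvain
  Architecture: all
  Section: oldlibs
  Depends: python3-community, ${misc:Depends}
  Description: transitional package for python3-community
   This dummy package can be removed once nothing depends on it.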

And then leave that alone for "a while".

At some point then the new python3-louvain package based on the louvain
igraph module could have a check in the maintainer script to tell the
user to switch to python3-community if it's the older python-louvain 
version.

Once the packages are renamed, the python-louvain version could
switch from the 0.0git convention to the PyPI version. (Upstream didn't
bother to put tags on GitHub, so the only version numbers come from
PyPI.)

I have no idea how long the transition package should sit around for
though.

Diane



Python louvain packages naming confusion.

2021-02-09 Thread Diane Trout
Hello,

The fairly popular (in the world of bioinformatics) ScanPy package uses
a Python version of the louvain clustering algorithm implemented by:

https://github.com/vtraag/louvain-igraph
https://pypi.org/project/louvain/

which installs into the "louvain" dist-packages directory.
(from debc)
./usr/lib/python3/dist-packages/louvain/

I have it mostly packaged 

However currently in the Debian archive there's a different louvain
package 

https://github.com/taynaud/python-louvain
https://pypi.org/project/python-louvain/
https://salsa.debian.org/python-team/packages/python-louvain

It installs into (according to debc)
./usr/lib/python3/dist-packages/community/

Unfortunately for this package we now automatically run autodep8 which
fails because the import name is community and not louvain.

autopkgtest [13:29:03]: test autodep8-python3: set -e ; for py in
$(py3versions -r 2>/dev/null) ; do cd "$AUTOPKGTEST_TMP" ; echo
"Testing with $py:" ; $py -c "import louvain; print(louvain)" ; done
autopkgtest [13:29:03]: test autodep8-python3: [---
Testing with python3.9:
Traceback (most recent call last):
  File "", line 1, in 
ModuleNotFoundError: No module named 'louvain'
autopkgtest [13:29:03]: test autodep8-python3: ---]
autopkgtest [13:29:03]: test autodep8-python3:  - - - - - - - - - -
results - -


I think having a python3-louvain-igraph package which installs into
louvain, while there is a separate python3-louvain package which
installs into community is really confusing.

I was wondering if the python3-louvain's binary package should be
renamed to python3-community to match the python package name, and then
the other louvain-igraph package could provide a bin package named
python3-louvain which would match the package name.

But this is clearly a thing that needs to be discussed.

Diane





Re: gotchas when running tests via pybuild?

2021-01-22 Thread Diane Trout
Hello,

On Fri, 2021-01-22 at 14:06 -0300, Antonio Terceiro wrote:
> Does anybody have an insight on cases like this? Are there any
> details
> that I'm missing?

I occasionally have tests behave differently between the buildd and
autopkgtest runner.

Things that I have encountered.

The pybuild tests are run after the package is built, in
${src}/.pybuild; the autopkgtests are run on the installed package. So
if the tests depend on scripts being installed, they might fail in
the builder.

There might also be different arguments to the test runner (or even
different test runners) between what pybuild decided to do and what's
done in debian/tests/control.

Maybe those ideas might help?
Diane





Python 3.9 & Numba

2021-01-18 Thread Diane Trout
Hello,

I was trying to update dask & umap-learn, both of which led to a numba
dependency. Numba is currently broken in Debian unstable because we've
switched to Python 3.9, and the latest release of numba only supports
Python 3.8.
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=972246

However there's a pull request that was merged upstream that updates
numba to support Python 3.9, 
https://github.com/numba/numba/pull/6579
Unfortunately upstream doesn't feel like there's been enough testing
yet of it for an official release.

(Upstream's full comments
https://github.com/numba/numba/issues/6345#issuecomment-762413345 )

The pull request applied onto the last released version of numba, 0.52.0,
with just a bit of patch fuzz and a conflict in setup.py over the
version of llvmlite. (Debian has numba 0.51.2 in unstable.)

I've run all of numba's test cases using the 3.9 compatibility patch,
all the tests cases for an updated version of python-sparse 0.11.2 [1],
and all the tests cases for dask & dask.distributed  without problem.

There's a bunch of packages that depend on numba and I'm assuming we'd
like to see them in bullseye.

I pushed the work I did to a fork of the numba repository.
https://salsa.debian.org/diane/numba

I'm wondering what we'd like to do next?

Does Mo Zhou want to review the commits to numba? Or should I push them
to the main numba packaging repository? (I'm in the python, science,
and med teams)

I was guessing the most likely path would be to make an experimental
release of numba 0.52.0 with the compatibility patch and then see how
pandas, astro team packages do with it.

But it's a complicated package capable of strange side effects and I
thought we should talk it over first.

Thank you,
Diane Trout

[1] As an aside, we also have an out-of-date version of python-sparse
because we have an out-of-date upstream URL. Discussion:
https://github.com/dask/dask/issues/7078
and pypi does list the same location the dask developer told me.
https://github.com/pydata/sparse/
This might be relevant to xarray as they patched around the old version
of sparse 
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=960380



Re: Questions about including tests/ directory into package

2020-03-19 Thread Diane Trout
On Thu, 2020-03-19 at 15:22 +0900, Sao I Kuan wrote:
> Hi,
> 
> I'm newcomer to Debian packaging, and trying to add the autopkgtest
> test script into python-tinyalign[1].
> 
> [1] https://salsa.debian.org/med-team/python-tinyalign
> 
> And now I'm facing a (maybe simple) problem.
> 
> The upstream test files are located in tests/ directory, but seems
> this directory is excluded during packaging.
> I have no idea how to include this tests/ directory.

The way I solved that problem in a similar package was this

Test-Command: set -e
 ; for py in $(py3versions -r 2>/dev/null)
 ; do cd tests
 ; echo "Testing with $py in $(pwd):"
 ; http_proxy= $py -m pytest -v --pyargs 
 ; done
Depends: @, python3-all, python3-pytest

The goal of autopkgtests is to test the installed package, when
autopkgtest starts running it defaults to starting in the extracted
copy of the source package.

The goal is to make sure you test the copy in /usr/lib/python*/dist-
packages. So I assumed that if I cd-ed into the tests directory, the
package's source module wouldn't be available to the test runner.

I did make the assumption that none of the tests messed with the python
path  to look in the parent directory...

If that's a problem, maybe copy the contents of tests/ to
$AUTOPKGTEST_TMP and run the tests there.
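An untested sketch of that variant of the snippet above:

Test-Command: set -e
 ; cp -r tests "$AUTOPKGTEST_TMP"
 ; for py in $(py3versions -r 2>/dev/null)
 ; do cd "$AUTOPKGTEST_TMP/tests"
 ; echo "Testing with $py in $(pwd):"
 ; http_proxy= $py -m pytest -v
 ; done
Depends: @, python3-all, python3-pytest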

Hope that helps,
Diane



Re: Should python-cloudpickle get a py2keep tag?

2019-09-05 Thread Diane Trout
On Thu, 2019-09-05 at 20:14 +0200, Matthias Klose wrote:
> 
> you are asking about the least preferred option, without telling why
> you can't 
> convert to python3, or why you can't remove the affected
> packages.  If you have 
> both spyder and spyder3 (Python3?), then why not drop spyder?  I
> didn't look at 
> skimage, but maybe look there first if it needs to be Python2.
> 

I was assuming that py2keep was a temporary flag that would be removed
after the leaf packages get fixed.

Since spyder and skimage are under the science team I wasn't sure how
long it would take for them to get around to removing the python2
packages.

Both the spyder and skimage packages have python3 versions. spyder does
install into /usr/bin/spyder, so its removal will have a bigger impact
on end users than something that's just a python package.

I did upload a new version of cloudpickle last night (1.2.1-1) with
Python 2 removed. I'm trying to figure out whether I should let the
dependencies just live with the previous python-cloudpickle 0.8.0
package or upload a new version of 1.2.1 with the Python 2 packaging
built.

Diane



Should python-cloudpickle get a py2keep tag?

2019-09-05 Thread Diane Trout
Hi,

The py2removal bug says to discuss the py2keep tag first.

src:cloudpickle is a dependency of src:spyder and src:skimage.

python-skimage has a popcon inst score of 469
spyder has a popcon inst score of 1385
spyder3 has a popcon inst score of 1069

The removal bug says the popcon threshold for py2keep is >= 300. So add
it?

Diane




updating python-stdeb

2019-07-31 Thread Diane Trout
Hi,

Reading all this talk about removing leaf Python 2 packages left
me wondering.

Could we change py2dsc's default python interpreter to python3? 

And that led me to check the tracker: there are 15 open bugs, and the
last release was in 2015.

Does anyone else use py2dsc?

I was wondering if I should be offering to volunteer to fix a few
issues...

Diane



Re: Updating nbconvert : problems with privacy breaches

2018-09-14 Thread Diane Trout

> But the other files... it's about changing the exporters and
> templates 
> during document generation ; and the resulting files might then get
> used 
> on non-Debian systems. In short : if I tamper with them to use
> Debian 
> local packages, that basically means nbconvert in Debian will
> produce 
> broken documents.
> 
> Perhaps patching customizing.html after it was built (using sed 
> magic...) is an acceptable solution?
> 
> Since I don't feel sure about my course of action, I thought it would
> be 
> better to ask for advice and ideas here.

I wonder if it'd be possible to get upstream to support picking where
to load resources from?

Bokeh has some options to choose between loading resources locally or from
the CDN: https://bokeh.pydata.org/en/latest/docs/reference/resources.html
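For example, the resource selection looks roughly like this (a sketch;
see the resources page above for the full set of modes):

  from bokeh.plotting import figure, output_file, save

  # mode="inline" embeds BokehJS in the generated HTML instead of
  # pulling it from the CDN, so the result works offline.
  output_file("plot.html", mode="inline")
  p = figure(title="example")
  p.line([1, 2, 3], [4, 6, 5])
  save(p)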




Re: NMU Advice: pybluez

2018-09-11 Thread Diane Trout

> I think the general rule-of-thumb is to wait around 2 weeks for a
> reply.
> However, this package is team-maintained, so I would get in touch
> with
> the team first and ask them to sponsor the upload.  This way, you
> won't
> need to wait that much and can do a team upload.
> 

Ah good idea.

Thank you for the suggestion.

Diane




NMU Advice: pybluez

2018-09-10 Thread Diane Trout
Hi,

I was trying to do something with home-assistant and needed a Python 3
version of pybluez.

Unfortunately I found this bug:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=839100
python-bluez: debian pybluez package version 0.22-1 uses upstream
source code version 0.18

Which also blocks #787850 (Compile for python 3.x)

I fixed the problems for myself and tried emailing the maintainer and
everyone who'd commented on the bug, but it's been a couple of days and I
haven't heard back from anyone.

This seems like the kind of situation that would warrant doing a NMU
upload to the delayed queue?

But I wanted to check first.
Diane




Re: Dask sourceless javascript passed by me.

2018-06-08 Thread Diane Trout
On Fri, 2018-06-08 at 10:05 +0800, Paul Wise wrote:
> On Fri, Jun 8, 2018 at 5:59 AM, Diane Trout wrote:
> 
> > How do I replace the .orig.tar.gz that I already uploaded?
> 
> You will need a new upstream version, typically people use
> 0.1.2+dfsg1
> (for DFSG issues) or 0.1.2+ds1 (for other repack reasons) in these
> sort of situations.

Ah I see, if I adjust the changelog to +dfsg1-1, I can upload
a new .orig.tar.gz... 

> 
> > I was planning on replacing the plots with a screen shot until I
> > have a
> > solution to actually build the plots from source.
> 
> Screenshots are derivative works of the UI they are from, so
> theoretically you need to care about the source for screenshots too.
> In practice I guess everyone ignores this.

Wait a second... The pages that I want to replace with a screenshot are
in a BSD-3-clause licensed repository. Aren't I allowed to make
derivative works?

Using a screen shot is just to deal with our build from source rule and
to avoid a privacy leak from loading a remote resource.

Diane



Re: Dask sourceless javascript passed by me.

2018-06-07 Thread Diane Trout
> > I would suggest talking to upstream about fixing this properly (no
> > prebuilt files or embedded code copies in the VCS and tarballs).
> 
> And in the meantime repacking the existing tarball to remove the
> sourceless files.
> 

I was suspecting that was going to be the answer... 

How do I replace the .orig.tar.gz that I already uploaded?


The offending .html files are linked into the diagnostics-local.html
page as embedded iframes. You can see what they look like as "Profile
Results" a little below this anchor link.

https://dask.pydata.org/en/latest/diagnostics-local.html#example

I was planning on replacing the plots with a screen shot until I have a
solution to actually build the plots from source.

Diane



Dask sourceless javascript passed by me.

2018-06-05 Thread Diane Trout
Hi,

I discovered a mistake I made with packaging dask.

There are two static HTML files which embed some Bokeh-generated
JavaScript plot code in dask 0.17.5, and I uploaded that to Debian.

There doesn't appear to be source to build the files.

Bokeh is free software (BSD-3-Clause), but depends on a bunch of
javascript libraries so isn't available in Debian.

I should have repacked the archive to remove them, but I'm working on a
-2 release, and there's no new upstream release yet for me to use as a
base for repack.

I was planning on patching the references to the .html files out and
removing them in the debian/rules files.

But is that enough?

Diane




Bug#900535: Binary Independent build seems to have FTBFS

2018-05-31 Thread Diane Trout
Package: src:python3-stdlib-extensions
Version: 3.6.4-4
Severity: serious
X-Debbugs-CC: debian-python@lists.debian.org


Hello,

python3-distutils is currently unavailable in unstable. It appears that
python3-stdlib-extensions was intended to provide it, but the
architecture-all package is failing to build because of a missing
dependency on distutils.

https://buildd.debian.org/status/fetch.php?pkg=python3-stdlib-extensions&arch=all&ver=3.6.5-4&stamp=1527781937&raw=0

Thank you,
Diane



What to do about packages that depend on distutils being not installable

2018-05-31 Thread Diane Trout
Hi,

I was trying to rebuild dask, and discovered that several dependencies
aren't installable because python3-distutils appears to have been
merged into python3-stdlib-extensions.

For example in an unstable chroot
 python3-sphinx : Depends: python3-lib2to3 but it is not installable
  Depends: python3-distutils but it is not installable

Is this a problem with the python3-stdlib-extensions packaging or does
anything that currently depends on python3-distutils need to be
updated?

Diane



Re: hangups

2018-05-24 Thread Diane Trout

> > I wasn't sure this should go into Debian python modules, or Debian
> > python applications?
> 
> Since it has a public module, DPMT is a better bet.
> 

Ok, thank you.

That also has the advantage that I already have permissions to
create repositories in DPMT.

Diane



hangups

2018-05-23 Thread Diane Trout
Hi,

I'd packaged a hangouts client called hangups for myself a while ago,
and I thought I'd upload it to Debian.

https://hangups.readthedocs.io/en/latest/

I built the package so most of the python code is in a python3-hangups
module but the executable is in its own package.

(one could use the API for other hangouts applications)

I wasn't sure this should go into Debian python modules, or Debian
python applications?

Diane



Re: GitLab CI on salsa.debian.org

2018-03-21 Thread Diane Trout
Can you trigger test on dependencies changing?

Does CI run on architectures other than amd64?

(I was thinking of complex packages with many dependencies like dask,
or with fiddly bit manipulation like pandas)

So this would get tests on each commit instead of the current
autopkgtests which run on each build?

On Wed, 2018-03-21 at 15:27 +0100, Hans-Christoph Steiner wrote:
> Since I got only crickets on this email, let me elaborate:  gitlab-ci
> lets you run whatever you want as root via Docker images.  That means
> its easy to run full builds, installs, and tests in gitlab-ci.  It
> also
> makes it easy to add CI tests for various releases, like to support
> backports.



jupyter-notebook trickier copyright.

2017-10-25 Thread Diane Trout
Hi,

I was looking for copyrighted files and found

jupyter-notebook/docs/sphinxext/ which lists an "All rights reserved"
copyright.

Adapted from bitbucket example here:
https://bitbucket.org/birkenfeld/sphinx-contrib/src/tip/bitbucket/sphinxcontrib/bitbucket.py

"""
#
# Original Copyright (c) 2010 Doug Hellmann.  All rights reserved.
#

Also the source url doesn't work.

I think I found the bitbucket version at

https://bitbucket.org/dhellmann/sphinxcontrib-bitbucket

at this URL
https://bitbucket.org/dhellmann/sphinxcontrib-bitbucket/src/36be4abe62e594ad7d6673d62983c23c541d401d/sphinxcontrib/bitbucket.py?at=default&fileviewer=file-view-default

The bitbucket repository does have a BSD-2-Clause license file, and the
notebook github.py file looks like it's a derived work from the
bitbucket.py file.

My guess would be to add a block for the bitbucket license.txt to
d/copyright, and then list the copyright holders as the original 2010
Doug Hellmann & The Jupyter Development Team.
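Something like this d/copyright stanza is what I have in mind (the file
path and the second copyright line are my guesses):

  Files: docs/sphinxext/github.py
  Copyright: 2010 Doug Hellmann
             The Jupyter Development Team
  License: BSD-2-clause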

There's the issue that the original file is 2-clause and the modified
file is 3-clause... but I suppose since the 2-clause license is a
subset, and is included it does technically meet the license
conditions.

Diane





Re: RFS: jupyter components

2017-10-25 Thread Diane Trout
Could you update the timestamp on jupyter-console as well?

That one also has a timewarp-standards-version warning.

Thanks,
Diane



Re: RFS: jupyter components

2017-10-25 Thread Diane Trout
On Wed, 2017-10-25 at 22:15 +0200, Gordon Ball wrote:
> 
> I would normally not update the timestamp while the suite is
> UNRELEASED,
> and expect whoever ultimately makes the upload to `dch -r` and tag
> the
> release, but maybe it would be less ambiguous to update it each time
> d/changelog gets edited.

In my own packages I update the changelog timestamp when creating a new
entry, and then do one final update for the release. I'm not sure there's
clearly a right answer in this case.

The thing that made me want to have one of you to update the changelog
was Section 4.4 of the debian policy which says:

The maintainer name and email address used in the changelog should be
the details of the person who prepared this release of the package.
They are not necessarily those of the uploader or usual package
maintainer. 

> 
> I think I have DM upload rights for jupyter-notebook but not the
> others
> listed.

I don't mind doing a review & uploading. But if you want to upload go
ahead

I just uploaded nbconvert instead of going around in email a couple
more times.

Diane




Re: RFS: jupyter components

2017-10-25 Thread Diane Trout

> I have just uploaded the current RFS packages (ipython,
> jupyter-notebook, jupyter-console, nbconvert) to mentors.d.n


I just reviewed nbconvert

I got one lintian warning

nbconvert source: timewarp-standards-version (2017-09-03 < 2017-09-27)

The source package refers to a Standards-Version that was released
after the date of the most recent debian/changelog entry. Perhaps you
forgot to update the timestamp in debian/changelog before building the
package?

Would anyone who worked on preparing the release like to update the
changelog timestamp?

Also, it's probably a good idea not to push the debian/5.3.1-1 tag for the
release until it's actually accepted. There are a couple of commits after
the tag which are in the package on mentors.

Will go look at the other packages in a bit.

Diane



Re: doc-central

2017-10-13 Thread Diane Trout
On Fri, 2017-10-13 at 08:41 +0100, Simon McVittie wrote:
> On Fri, 13 Oct 2017 at 08:13:08 +0800, Paul Wise wrote:
> > On Fri, Oct 13, 2017 at 3:27 AM, Diane Trout wrote:
> > 
> > > Being able to find all your documentation in one place would
> > > really be
> > > convenient.
> > 
> > I don't think doc-base/doc-central will ever be the answer to this
> > as
> > it is very specific to Debian and thus not available on other
> > distros.
> > Eventually the Freedesktop folks will come up with something
> > cross-distro and cross-desktop and we will have to replace doc-base
> > with it, just like we had to do with the Debian-specific menu
> > system.
> 
> ... unless someone from Debian with an interest in documentation goes
> upstream and comes up with something cross-distro, cross-desktop and
> suspiciously similar to doc-base. Relevant people to talk to would

FWIW, rarian is the closest thing to a freedesktop documentation
standard. It derives from scrollkeeper, and I thought scrollkeeper was
originally from Debian.


> As far as I understand it, yelp and devhelp are separate apps as a
> deliberate design choice, because they have different audiences and
> requirements. Whether you agree with it or not, understanding the
> reasoning behind that design choice seems likely to be valuable.

My first guess is that separating them makes it easier to simplify
search. Users will find user docs and developers will find developer
docs, and they can't accidentally be confused.

Which makes me wonder a bit about all the sphinx docs where there's a
user chapter and then API docs.

Diane



doc-central

2017-10-11 Thread Diane Trout
Hi,

I wanted to be able to browse documentation locally, and the Python
viewer doc-central is abandoned.

What I have so far is a Python 3 version using CGI scripts. What I'd
like is something that uses WSGI and can run with Python's built-in
wsgiref server instead of requiring a full web server (maybe Flask?).
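A minimal sketch of that wsgiref direction (everything here is
hypothetical; the real app would render the index from the installed
doc-base control files):

  from wsgiref.simple_server import make_server

  def application(environ, start_response):
      # A real implementation would build this page from doc-base data
      # instead of returning a static placeholder.
      start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
      return [b"<html><body>Documentation index goes here</body></html>"]

  if __name__ == "__main__":
      httpd = make_server("127.0.0.1", 8080, application)
      httpd.serve_forever()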

Pending a better repository my changes are currently at:
https://github.com/detrout/doc-central

I did reformat the code according to current pep 8 conventions. (There
were so many tabs...)

Also doc-central had its own code using the long deprecated rfc822
library for reading the package doc-base files. I dumped that and used
the Deb822 reader in python3-debian.

I forgot to look for a svn repository so I built a history with gbp
import-dsc. (Though the original svn repository is also pretty small,
and lacks tags).

Anyone want to review the changes? Should I release a new version?

Does anyone have a good way of migrating the old svn repository? Is it
even necessary? Where should a new git repository go?

Diane



pandas arm & mips FTBFS #877754

2017-10-04 Thread Diane Trout
Hi,

It's pretty difficult to run just one pandas unittest.

I managed to extract a couple of examples from the build logs and
replicate the failures on an arm64 porterbox.

Debian Bug is https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=877754
Forwarded to https://github.com/pandas-dev/pandas/issues/17792

Diane



Re: pycharm package in debian

2017-10-04 Thread Diane Trout

> > who says that a "lagging behind" package doesn't have any security
> > issues? If
> > the package is lagging behind, how do you know that security
> > updates aren't
> > lagging behind either...
> 
> As this is Debian, I do expect that at least, I can read the security
> tracker to see the current status. For a snap package, I wouldn't
> know
> how to audit it.

I do wish that these third party app systems like conda, snappy or
flatpak would include metadata like AppStream or DOAP.

It would be really convenient to have one tool that could audit a
system for out of date packages, and for our bug reporting tools to be
able to direct users to the responsible party for a third party
installation.

I can see that GNOME software has some visibility into what valve's
steam is doing so it's at least theoretically possible.

Diane





Re: Python 3 Statsmodels & Pandas

2017-09-30 Thread Diane Trout
On Sat, 2017-09-30 at 12:26 +0300, Dmitry Shachnev wrote:
> 
> 
> > I wonder if it's better to filter sphinxdoc out of the dh line,
> > install
> > sphinx-common, or just always install python3-sphinx?
> 
> Adding sphinx-common to B-D and keeping python3-sphinx in B-D-Indep
> is
> probably the easiest solution. Also you can try not relying on --with
> at all and manually call dh_sphinxdoc when the build is arch-indep.
> 

It took me two tries, but statsmodels 0.8.0-6 builds everywhere pandas
builds now.

I am curious if Rebecca is using pandas on a non-intel architecture
though (was wondering how she noticed pandas hadn't built)

Diane



Re: Python 3 Statsmodels & Pandas

2017-09-27 Thread Diane Trout
On Wed, 2017-09-27 at 08:34 +0200, Andreas Tille wrote:
> 
> > 
> > https://ghic.org/~diane/debian/statsmodels.datasets.README.txt
> 
> I think regarding formatting and context its perfectly fine.
>  
> > Does it go in README.source? or in upstream/metadata? or something
> > else?
> 
> I think there is no "right" place to do it - but ftpmaster is used
> to read README.source in these cases.  So why not using it?
> 

I added the readme to README.source with a bit of explanatory text.

all changes are pushed.


> The bad news is that when I tried to build the branch + your patch
> (as I said, please push to enable more easy testing for others) I'm
> running into:
> 

Unfortunately that happened to me once (out of 4 or 5 builds)

and searching for the error message led me to

https://github.com/jupyter/jupyter_client/issues/154

There are also a few other issues that involve the kernel-dying error
message.

So it seems like a sporadic upstream bug?

I need to sleep now.

Try again? hopefully it'll work. Probably the stack trace and version
numbers of the jupyter components need to be filed with upstream.

Diane



Re: Python 3 Statsmodels & Pandas

2017-09-26 Thread Diane Trout

> Since it is accepted for the R packages and the data are refering
> to R data I do not see any reason why this should not be accepted.

I traced back from Rdatasets to the original R packages.

Every one of the packages is licensed as some combination of GPL-2 and
GPL-3.

However it's likely that some of the datasets are not copyrightable.

Here are my badly formatted notes with the name of each dataset,
the link to the R package documentation, and which GPL license it's
available under.

How should this be formatted to be shipped with Debian?

https://ghic.org/~diane/debian/statsmodels.datasets.README.txt

Does it go in README.source? or in upstream/metadata? or something
else?

I could also ask one of the campus librarians to help review the
datasets to determine if they should or shouldn't be copyrightable.

My current WIP patch for the documentation is here:
The Debian copyright file still needs to be updated, and the citations
need to be listed.

https://ghic.org/~diane/debian/statsmodels-0001-add-patch-use-cached-datasets-and-cache-all-of-the-g.patch

Diane



Re: Python 3 Statsmodels & Pandas

2017-09-26 Thread Diane Trout

> While I have not tried to build the current status I wonder what you
> think about #873512.  I'm perfectly fine with your solution to
> exclude
> some tests - I just wanted to give a hint that there is a potential
> upstream patch.

I think I looked at the upstream commits that fixed it,

https://github.com/statsmodels/statsmodels/pull/3402/files

and discovered that it required too much back porting to easily apply
to 0.8.0

Since the error is just that you can't pickle (save object state for) a
few data types, I was assuming a user could just avoid the missing
functionality until there's a new upstream release.

I was more concerned about remembering to re-enable the tests
after it's fixed, in case something else goes wrong later.
Diane



Re: Python 3 Statsmodels & Pandas

2017-09-25 Thread Diane Trout
On Mon, 2017-09-25 at 09:44 +0200, Andreas Tille wrote:
> Hi Diane,
> 
> On Sun, Sep 24, 2017 at 11:45:43PM -0700, Diane Trout wrote:
> > The remaining issues are:
> > 
> > * Some of the doc pages call get_rdataset, and there's no network
> > access in the builder so those calls fail. (ugliest error)
> 
> Can you pre-fetch the data and provide it in debian/datasets?

I made the changes and cached the downloaded zip files, and then
realized: isn't this redistributing the datasets?

Don't we need to verify the license before uploading?

Below is what I've found so far (before getting tired of licensing
issues).

Any thoughts about how to handle this?

Here's a list of the file names from the include-binaries file I
created via caching.

datasets.csv.zip
csv,HistData,Guerry.csv.zip 
doc,HistData,rst,Guerry.rst.zip 
csv,COUNT,medpar.csv.zip
doc,COUNT,rst,medpar.rst.zip
csv,car,Duncan.csv.zip  
doc,car,rst,Duncan.rst.zip  
csv,robustbase,starsCYG.csv.zip 
doc,robustbase,rst,starsCYG.rst.zip 
doc,car,rst,Moore.rst.zip   
csv,vcd,Arthritis.csv.zip   
doc,vcd,rst,Arthritis.rst.zip   
csv,MASS,epil.csv.zip   
doc,MASS,rst,epil.rst.zip   
csv,geepack,dietox.csv.zip  
doc,geepack,rst,dietox.rst.zip  

The files are being downloaded from this github repository.

https://github.com/vincentarelbundock/Rdatasets
A useful index of the datasets is:
http://vincentarelbundock.github.com/Rdatasets/datasets.html

Guerry.csv is probably safe as it's from "Essay on the Moral Statistics
of France", published in 1833.

medpar 2016's license is here:
https://www.healthdata.gov/dataset/medpar-limited-data-set-lds-hospital-national
and is listed as "Open Data Commons Open Database License"
https://opendatacommons.org/licenses/odbl/1.0/

Duncan is the Duncan's Occupational Prestige Data from 1950.
Couldn't find a license

starsCYG is Data for the Hertzsprung-Russell Diagram of the star
cluster CYG OB1
http://ugrad.stat.ubc.ca/R/library/rrcov/html/stars.html
Couldn't find a license

Moore is from Moore, J. C., Jr. and Krupat, E. (1971) 
Relationship between source status, authoritarianism and conformity in
a social setting.
Couldn't find a license


Diane



Re: Python 3 Statsmodels & Pandas

2017-09-25 Thread Diane Trout
On Mon, 2017-09-25 at 09:44 +0200, Andreas Tille wrote:
> 
> > * Some of the doc pages call get_rdataset, and there's no network
> > access in the builder so those calls fail. (ugliest error)
> 
> Can you pre-fetch the data and provide it in debian/datasets?

Looks like it'll take a bit of patching. There is a caching mechanism,
but it's off by default.
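The mechanism I mean is roughly this (a sketch; working out where the
cache directory should live during a package build is the part that
needs patching):

  import statsmodels.api as sm

  # With cache enabled, get_rdataset reuses a previously downloaded copy
  # instead of fetching from GitHub on every documentation build.
  guerry = sm.datasets.get_rdataset("Guerry", "HistData", cache=True)
  print(guerry.data.head())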

> > * there's one intersphinx reference that's not in debian.
> 
> If it is not very important I would probably exclude the piece of
> documentation which is affected.  What exact reference is this?  Can
> we
> help by packaging something else that might be needed later not only
> inside the docs but also in code?

Intersphinx is used for cross-references, so there are two places in the
local docs which show plain text where the official site shows hyperlinks.

If you look at:

http://www.statsmodels.org/stable/dev/git_notes.html#merging-vs-rebasing

in the sentence "One great place to start learning about rebase is
rebasing without tears." the phrase "rebasing without tears" is a link
to the pydagogue site, and in my local package it's just plain text.

I'm planning on ignoring it for now.

Diane



Re: Python 3 Statsmodels & Pandas

2017-09-25 Thread Diane Trout
On Sun, 2017-09-24 at 11:24 -0700, Diane Trout wrote:
> Status with statsmodels almost done
> 
> Trying to deal with jquery.
> 
> leaving command
> 
>   -rm ./build/html/_static/jquery.js
> 
> causes a build failure now.
> leaving it in causes a lintain privacy error.

I found the problem: I started building the notebooks, and those
templates refer to jQuery and RequireJS in addition to MathJax.
(So I need to replace those with local copies)

In my first try I misunderstood what the Build-Depends-Indep field was
for, so I rewrote history to fix one of my commits. So everything I've
done is now on detrout-python3-try2, and hopefully the other branch can
get deleted. (Sorry about this.)

The remaining issues are:

* Some of the doc pages call get_rdataset, and there's no network
access in the builder so those calls fail. (ugliest error)
* There's a lintian warning about no bindnow.
* there's one intersphinx reference that's not in debian.
* I couldn't get override_dh_auto_build-indep to actually build the
docs. I shoved the doc building into override_dh_installdocs

Diane



Re: Python 3 Statsmodels & Pandas

2017-09-24 Thread Diane Trout
If the jquery.js file is deleted, something later errors out. My guess is
it's listed in one of the .install files, but I haven't looked yet.

On September 24, 2017 3:51:30 PM PDT, Yaroslav Halchenko 
<deb...@onerussian.com> wrote:
>
>On Sun, 24 Sep 2017, Andreas Tille wrote:
>
>> On Sun, Sep 24, 2017 at 11:24:10AM -0700, Diane Trout wrote:
>> > Status with statsmodels almost done
>
>> > Trying to deal with jquery.
>
>> > leaving command
>
>> >-rm ./build/html/_static/jquery.js
>
>> > causes a build failure now.
>
>> Without checking the source:  I'm usually doing something like
>
>> override_dh_install:
>>  dh_install
>>  find debian -name jquery.js -delete
>
>> This should avoid build failures.
>
>FTR  not sure how that could have caused a build failure since leading
>'-'
>in makefile means "ignore the failure"



Re: Python 3 Statsmodels & Pandas

2017-09-24 Thread Diane Trout
Status with statsmodels almost done

Trying to deal with jquery.

Leaving the command

	-rm ./build/html/_static/jquery.js

in debian/rules causes a build failure now, while leaving jquery.js in
place causes a lintian privacy error.

There are also lintian warnings about a missing hardening flag and no
doc-base registration.

At some point tests in ./statsmodels/imputation/tests/test_mice.py were
failing, but I'm not sure if it's still having trouble.

Kid really wants to go to the park right now, so I won't be able to
touch it for several hours.

I also need to make good patches for sending to alioth, so here's a
snapshot (with the jquery deletion rule commented out).

I think you should get a package if you build this.

http://chaos.caltech.edu/~diane/debian/statsmodels_0.8.0-3.1.dsc

Diane



Re: subliminal

2017-09-23 Thread Diane Trout
On Thu, 2017-09-14 at 18:35 -0300, drebs wrote:
> Hi, I am interested in updating the subliminal[2] package[1]. Is
> there
> someone else already working on that? If not, what would be the
> process
> for having it uploaded? (i am not a dm or dd) Should i send the
> source
> package to this list? Would someone sponsor it? :)

You might want to check to see what's going on with the previous
uploaders.

If they aren't active, then yes: once you have it updated, you'd probably
submit it to mentors.debian.net and ask either on this list or in the
#debian-python IRC channel to find a sponsor.
Diane



Re: Python 3 Statsmodels & Pandas

2017-09-22 Thread Diane Trout
On Fri, 2017-09-22 at 10:57 +0200, Piotr Ożarowski wrote:
> [Diane Trout, 2017-09-21]
> > I made larger changes to statsmodels, by using pybuild instead of
> > the
> > previous multiple targets in debian/rules.
> 
> you can simplify it even further by using pybuild's --ext-dest-dir:
> (I didn't test as this branch FTBFS for me)

Ooh, I didn't know about PYBUILD_EXT_DEST_DIR_python3; that's useful.

Where should that option be documented?
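For reference, my guess at the debian/rules usage, based only on the
option name (the destination path is hypothetical and untested):

  export PYBUILD_NAME=statsmodels
  # Guess: move the built python3 extension modules into this directory
  # (see pybuild's --ext-dest-dir) instead of relocating them by hand.
  export PYBUILD_EXT_DEST_DIR_python3=debian/python3-statsmodels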

Diane



Re: Python 3 Statsmodels & Pandas

2017-09-21 Thread Diane Trout
I managed to merge the more important doc changes.

I have a patch to switch doc building to using Python 3 components, as
there's a goal of removing the Python 2 components at some point.

Then there's the patch for the nodoc/nocheck build profiles, as well as
one adding a bunch of other dependencies for doc building.
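The profile-guarded build dependencies look roughly like this in
d/control (a sketch; the real list is longer):

  Build-Depends-Indep: python3-sphinx <!nodoc>,
                       python3-statsmodels <!nocheck>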

After more testing I'll try posting them, but it probably won't be until
tomorrow.

diff -Nru pandas-0.20.3/debian/changelog pandas-
> 0.20.3/debian/changelog
> --- pandas-0.20.3/debian/changelog  2017-07-10 20:00:59.0
> -0400
> +++ pandas-0.20.3/debian/changelog  2017-09-21 16:11:29.0
> -0400
> @@ -1,3 +1,14 @@
> +pandas (0.20.3-2) unstable; urgency=medium
> +
> +  * debian/control
> +- boosted policy to 4.0.0 (I think we should be ok)
> +- drop statsmodels from build-depends to altogether avoid the
> circular

While I was looking at the policy updating checklist in 4.0.1 they
renamed XS-Testsuite to Testsuite. I think that's the only change I
noticed.

The version of debhelper and debian/compat is also pretty old.


> 
> I could also add you to pkg-exppsy team under which we still have the
> "active"
> vcs for pandas.

Thank you that sounds useful.

Diane


> > 



Re: Python 3 Statsmodels & Pandas

2017-09-21 Thread Diane Trout
On Thu, 2017-09-21 at 17:56 -0400, Yaroslav Halchenko wrote:
> If you could allow to review would be great.
> Thanks for all the work.
> I was btw also trying to build with the patch you shared yesterday

Once I have all the changes for pandas would you like me to put them on
a branch on alioth? Or should I send them via format-patch somewhere?

Also, it looked to me like you were changing debian/changelog with each
change? (Some people just use the git log and then apply all the
updates to d/changelog in one go with git-buildpackage.)
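
e.g. something like this right before an upload, if I understand that
workflow correctly (version string made up):

gbp dch --release --auto
git commit -m "finalize changelog for 0.20.3-2" debian/changelog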

Diane



Re: Python 3 Statsmodels & Pandas

2017-09-21 Thread Diane Trout

> If my poor opinion counts:  For the moment we should run those tests
> in the build process that can easily be run.  Everything else should
> probably be sorted out later (in autopkgtest or another later upload
> if somebody has a clue how we can solve the circular dependencies).
> 
> We somehow need to get some working spatstats to continue with other
> packages.
> 

Status:

[X] Pandas builds with nocheck, nodoc
[X] Statsmodels builds with Python 3 using above pandas
[X] Pandas tests pass with statsmodels for Python 2 & 3 installed.
[ ] Pandas builds docs with statsmodels installed

My most recent build error was about pandoc not being available.

Unfortunately the tests take long enough that I can write this
email before I know whether adding pandoc fixed the problem.

dh_auto_test runs the Python 2.7, 3.5, and 3.6 tests, and then autopkgtest
runs them again.

I posted the larger fixes to pandas I've done to the appropriate bugs

#875807 python3-pandas FTBFS: 3 timezone unit tests fail
#875805 python3-pandas: Please break circular dependency

There are a few more minor patches on my laptop that I haven't attached
to a bug for pandas.

* Updating standards version
* using debhelper 10
* switching sphinx doc build to use python3
* and deleting a few more build files in dh_clean target.

I made larger changes to statsmodels, by using pybuild instead of the
previous multiple targets in debian/rules.

All of those changes are currently on alioth in detrout-python3.

When all these tests pass, shall I add myself to uploaders and release,
or does someone else want to review first?

Diane



Re: Python 3 Statsmodels & Pandas

2017-09-18 Thread Diane Trout

> the biggest downside with this approach is that you *completely* skip
> any
> testing on other architectures than amd64.  Is that what you really
> want? Dear
> porters, have fun where to search for bugs in packages without
> testsuites!

OK, you convinced me. dh_auto_test stays.

Is there anything that can streamline rebuilding when there are cycles?
Do the build servers know to take advantage of the profiles to break
dependency chains?
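
For a local rebuild at least, the profiles can be activated by hand,
something like:

DEB_BUILD_OPTIONS=nocheck DEB_BUILD_PROFILES="nocheck nodoc" \
    dpkg-buildpackage -us -uc

but I don't know what, if anything, the official buildds do with them.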

Diane



Re: Python 3 Statsmodels & Pandas

2017-09-17 Thread Diane Trout

> On Saturday, 16 September 2017 14:18:10 AEST Diane Trout wrote:
> > My solution was to use build-profiles to flag the test dependency
> > with
> > !nocheck
> 
> this is, of course, a very elegant solution and exactly what build
> profiles 
> are for...
> 
> I wonder though, is it really sustainable given this problem keeps
> coming up?

I just did it that way because it was the least disruptive change I
could make that would let me build and test the package.

Also I wasn't sure how much magic was happening on the Debian build
servers.

> 
> Is it feaible to completely break this circular dependency? If it is
> only 
> needed for tests, would be possible to disable the build-time tests
> and rely 

In my experience I'm much more likely to notice a build-time test
failure than one from the CI system. Though I did adjust my pbuilder
config to fail on test failures in the autopkgtests, so I've already solved
my own concern...

What do other people think? If there are autopkgtests, should you still
let dh_auto_test run tests?

Diane



Re: Python 3 Statsmodels & Pandas

2017-09-16 Thread Diane Trout
On Sat, 2017-09-16 at 22:59 +0200, Yuri D'Elia wrote:
> On Sat, Sep 16 2017, Diane Trout wrote:
> > python3-pandas: Pandas is not installable
> > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=875723
> 
> I would have expected the rebuild of python packages affected by the
> fpectl extension with a transition, but it doesn't seem to be the
> case?
> 
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=874253
> 
> More Breaks where added to {python,python3}-stdlib itself, but there
> are
> still packages which didn't rebuild.

I was assuming it's because there's a cyclic dependency between pandas
and statsmodels (needed for pandas unit tests), and statsmodels was
also broken by the fpectl problem.

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=875805

My solution was to use build-profiles to flag the test dependency with
!nocheck

(The diff of the control file changes is below, though I forgot to add
python3-statsmodels.)

I have a few rules changes too, but I need to separate the changes into
different commits.

--- a/debian/control
+++ b/debian/control
@@ -3,7 +3,7 @@ Section: python
 Priority: optional
 Maintainer: NeuroDebian Team <t...@neuro.debian.net>
 Uploaders: Yaroslav Halchenko <deb...@onerussian.com>, Michael Hanke <
michael.hanke@g
mail.com>
-Build-Depends: debhelper (>= 7.0.50),
+Build-Depends: debhelper (>= 10),
python-all-dev (>= 2.5),
python-setuptools,
cython,
@@ -11,12 +11,11 @@ Build-Depends: debhelper (>= 7.0.50),
python-scipy,
python-tz,
python-tables [!m68k !sh4 !x32],
-   python-sphinx (>= 1.0~),
-   python-nbsphinx,
-   python-nose,
-   python-pytest,
+   python-sphinx (>= 1.0~) <!nodoc>,
+   python-nbsphinx <!nodoc>,
+   python-nose <!nocheck>,
+   python-pytest <!nocheck>,
python-matplotlib [!hurd-i386],
-   python-tk,
python-openpyxl, python-xlwt, python-xlrd,
python-bs4,
python-html5lib,
@@ -29,11 +28,10 @@ Build-Depends: debhelper (>= 7.0.50),
python3-numpy (>= 1:1.7~), python3-dateutil,
python3-scipy,
python3-tz,
-   python3-sphinx (>= 1.0~),
-   python3-nose,
-   python3-pytest,
+   python3-sphinx (>= 1.0~) <!nodoc>,
+   python3-nose <!nocheck>,
+   python3-pytest <!nocheck>,
python3-matplotlib [!hurd-i386]| python-matplotlib (<<
1.2.0~) [!hurd-i386],
-   python3-tk,
python3-bs4,
python3-six,
python3-lxml,
@@ -42,15 +40,15 @@ Build-Depends: debhelper (>= 7.0.50),
xvfb, xauth, xclip,
 Build-Depends-Indep:
  ipython (>= 0.12) | ipython2x | ipython1x,
- python-statsmodels [!arm64 !ppc64el !sparc64 !mips64el !ppc64
!sparc64 !sh4],
+ python-statsmodels [!arm64 !ppc64el !sparc64 !mips64el !ppc64
!sparc64 !sh4] <!nocheck>,



Python 3 Statsmodels & Pandas

2017-09-16 Thread Diane Trout
Hi,

Just wanted to give a progress report

I was able to build a Python 3 version of statsmodels; however, I wasn't
able to build it against the version of pandas in sid because pandas
can't be installed.

python3-pandas: Pandas is not installable
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=875723

I tried rebuilding to get around the cython issue, but then I had some
timezone related unit test failures.

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=875807

Unfortunately upstream's test environments don't replicate it
https://github.com/pandas-dev/pandas/issues/17533

Looks like the tests pass with conda packages but not Debian packages.
I don't know why yet.

Diane



Re: Bug#729956: Forwarded upstream

2017-09-08 Thread Diane Trout
Hi

I pushed my work to alioth on the branch detrout-python3

I modified the statsmodels build recipe to at least partially use
pybuild, and the documentation build uses python 3 components instead
of python 2.

I skipped the 4 tests that failed for me and that have an upstream bug
report; when a new version comes out those skips need to be removed.

I tried rebasing on Yuri's latest commit but there were conflicts over
which version of ipython to use that I'm too tired to resolve right now.

It builds Python 2 and Python 3 statsmodels and statsmodels-lib as well
as the documentation (including the example notebooks).

There were also some changes because there were new temp files that
dh_auto_clean needed to remove.

I tried to commit changes in logical chunks for easier review.

There are still problems with dependencies in sid, so all my testing was
against my unclean testing laptop.

Someone should probably debdiff the package against a previous build to
see if anything important got lost.
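
e.g. (file names made up; use whatever the previous and new builds actually
produced):

debdiff python-statsmodels_0.8.0-1_amd64.changes \
    python-statsmodels_0.8.0-2_amd64.changes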

If it looks good we can discuss if I need to rebase, or if you just
want to merge this work, but right now I need to go to sleep.

Diane



Re: Bug#729956: Forwarded upstream

2017-09-06 Thread Diane Trout
On Thu, 2017-09-07 at 06:20 +0200, Andreas Tille wrote:
> Hi Diane,
> 
> On Wed, Sep 06, 2017 at 02:45:14PM -0700, Diane Trout wrote:
> > 
> > > but the build failed (for other reasons).  I'd willing to work on
> > > this
> > > but I definitely need help since I'm lacking the needed Python
> > > knowledge.
> > 
> > Hi,
> > 
> > I saw your debian-python3 branch for statsmodels.
> > 
> > The dependencies added in the package should probably be added as
> > build-dependencies. and not package dependencies.
> > 
> > I believe python-zmq should be a binary dependency. 
> 
> Thanks for the hints.
>  
> > I was trying to build it right now but I'm getting a dependency
> > error.
> > 
> >  libpython2.7-stdlib : Breaks: python-pandas-lib (<= 0.20.3-1) but
> > 0.20.3-1 is to be installed
> 
> I also get an error in the Python 2.7 test suite so I have no idea
> where to continue with the Python3 stuff.

Were the test failures you were seeing 4 instances like what's shown below?

I found a match upstream at:
https://github.com/statsmodels/statsmodels/issues/3401

I've gotten failures with 2.7 and 3.6.

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/diane/src/debian/statsmodels/.pybuild/pythonX.Y_2.7/build/statsmodels/tsa/statespace/tests/test_save.py", line 65, in test_varmax
    res.save('test_save_varmax.p')
  File "/home/diane/src/debian/statsmodels/.pybuild/pythonX.Y_2.7/build/statsmodels/base/wrapper.py", line 72, in save
    save_pickle(self, fname)
  File "/home/diane/src/debian/statsmodels/.pybuild/pythonX.Y_2.7/build/statsmodels/iolib/smpickle.py", line 15, in save_pickle
    cPickle.dump(obj, fout, protocol=-1)
  File "stringsource", line 2, in statsmodels.tsa.statespace._statespace.zStatespace.__reduce_cython__
TypeError: self._design,self._initial_state,self._initial_state_cov,self._obs,self._obs_cov,self._obs_intercept,self._selected_state_cov,self._selection,self._state_cov,self._state_intercept,self._transition cannot be converted to a Python object for pickling



Re: Bug#729956: Forwarded upstream

2017-09-06 Thread Diane Trout

> but the build failed (for other reasons).  I'd willing to work on
> this
> but I definitely need help since I'm lacking the needed Python
> knowledge.

Hi,

I saw your debian-python3 branch for statsmodels.

The dependencies added in the package should probably be added as
build-dependencies, not package dependencies.

I believe python-zmq should be a binary dependency. 

I was trying to build it right now but I'm getting a dependency error.

 libpython2.7-stdlib : Breaks: python-pandas-lib (<= 0.20.3-1) but
0.20.3-1 is to be installed

Should I see if I can get a python3 build working? And who should I
send any progress to?

Diane



Re: Scaling back for now

2017-08-28 Thread Diane Trout
I just wanted to say thank you for all you've done for Debian & Python.

Diane



Re: Bug#872183: RFP: importmagic -- Python library for finding unresolved symbols and managing imports

2017-08-14 Thread Diane Trout
On Mon, 2017-08-14 at 19:22 -0400, Nicholas D Steeves wrote:
> Package: wnpp
> Severity: wishlist
> 
> * Package name: importmagic
>   Version : 0.1.7
>   Upstream Author : Alec Thomas 
> * URL : https://github.com/alecthomas/importmagic
> * License : BSD-2-Clause
>   Programming Lang: Python
>   Description : Python library for finding unresolved symbols and
> corresponding imports
> 

Does anyone else want to work on this?

I've been using elpy via emacs packages for a while, and would
appreciate the extra functionality.

(Also, thank you for packaging elpy; I was going to look into it someday
but hadn't gotten around to it.)

Diane

> 



Re: MBF for deprecating Python2 usage

2017-08-07 Thread Diane Trout
On Tue, 2017-08-08 at 13:24 +1000, Ben Finney wrote:
> 
> Those people, not party ot this conversation, have reasonable
> expectation that such breakage will not happen without very good
> reason.
> Good reason would entail, as an example, that there is no better
> alternative.
> 

Why not ask?

I know people in training groups like Software Carpentry. Does anyone
know of sites with large Python deployments?

I'd love to know what Apple is going to do about this issue.

Also if we do come up with a migration it should happen at a release
and be clearly reported in the release notes.

Diane



Re: MBF for deprecating Python2 usage

2017-08-07 Thread Diane Trout

> What I am opposing is the suggestion to install, in the near to
> medium
> term, a command of exactly the same name that has subtly similar but
> incompatible behaviour, when that behaviour *already* has a command –
> ‘python3’ – that is widely used by those who need it.
> 

My problem with that plan is all of the printed documentation saying that to
learn Python, you type "python".

At the very least there needs to be a /usr/bin/python that prints
instructions about what you should run.



Re: MBF for deprecating Python2 usage

2017-08-07 Thread Diane Trout

> What tearing need is there to change what the command ‘python’ does,
> in
> a backward-incompatible way?

Personally, I'm ready for python to point to python3 now.

I'm tired of writing Python 2/3 compatible code because someone _might_
launch a script with "python my_python3_script.py" instead of
./my_python3_script.py

Python has been remarkably good at avoiding breakage, but I've seen
other scripting languages have serious incompatibilities with far less
warning.

For example R packages need to be recompiled for point release updates.
Some perl scripts we inherited broke somewhere between perl 5.8 and
5.12.

Users have tools like virtualenv or conda to deal with the
incompatibilities.

It might be useful to add an option to the interpreter where, if a
Python script is launched without a /usr/bin/python2 or /usr/bin/python3
shebang, it reports a deprecation warning (either to console or syslog)
so it's easier to find programs that still need updating.

Diane



Re: MBF for deprecating Python2 usage

2017-08-07 Thread Diane Trout

> I disagree, it's a bad idea to actively take steps to make the same
> command invoke *incompatible* programs depending on the time and
> host.

My suggestion was "the startup banner should print what command to run
to get Python 2."

I was thinking of the case of the end-user trying to follow a Python
tutorial. They'd still need to exit and run the python2 command if they
wanted 2.

Diane



Re: a few quick questions on gbp pq workflow

2017-08-07 Thread Diane Trout

> 
> Why would you need to repack a tarball just because it contains
> prebuilt docs (non-DFSG-free licensed documentation aside)? I'm all

I've occasionally repacked a tarball because upstream included minified
jquery or mathjax.
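
These days Files-Excluded in debian/copyright plus uscan/mk-origtargz can, I
believe, do the repacking automatically.  A rough sketch, with made-up paths:

Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Files-Excluded: docs/_static/jquery.min.js
                docs/_static/mathjax/*

uscan then strips those files when it builds the orig tarball.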

Diane



Re: MBF for deprecating Python2 usage

2017-08-07 Thread Diane Trout

> * Plan for a date at which /usr/bin/python will point to Python 3.  I
> know that’s the most controversial bit, but I do think that as time
> goes on and we’re past 2020, it will be the choice that gives our
> users the best experience.

I agree the default should change.

Perhaps when launching via the command "python" the interpreter could
hint python2 was available? Something like:

$ python
Python 3.5.4rc1 (default, Jul 25 2017, 08:53:34) 
[GCC 6.4.0 20170704] on linux
Type "python2" for Python 2.7.13
Type "help", "copyright", "credits" or "license" for more information.
>>> 

Diane



Packaging Bokeh

2015-09-05 Thread Diane Trout
Hi,

I've made some limited progress trying to package Bokeh (BSD-3-Clause) 
upstream: http://bokeh.pydata.org/en/latest/
my packaging: https://github.com/detrout/python-bokeh

I managed to get the version 0.9.1 from pypi installable. (Though since it was 
my own experiments I didn't remove the jquery / bootstrap libraries.)

The most proper packaging would require grunt to be able to rebuild bokeh.js. 
I was wondering if releasing the pypi version would be good enough. (The 
package does at least contain a non-minimized version of bokeh.js)
(And it looks like someone may have signed up to try and deal with the ITP for 
bokeh #756017)

I recently tried to update to bokeh 0.9.3 and their unit tests flushed out a 
few more dependencies.

I just packaged abstract_rendering (BSD-3-Clause) 
upstream: https://github.com/ContinuumIO/abstract_rendering/
my packaging: https://github.com/detrout/python-abstract-rendering

Bokeh's unit tests also appear to depend on blaze, and it looks like that
has several missing dependencies.

Currently I built the packaging following the debian-qt team's common
standards (source-less), though it looks like the python-modules team is just
about to transition to git-dpm.

Should I go ahead and submit abstract_rendering? Should I work on getting 
blaze submitted?

Diane



Re: Packaging Bokeh

2015-09-05 Thread Diane Trout
> > The most proper packaging would require grunt to be able to rebuild
> > bokeh.js. I was wondering if releasing the pypi version would be good
> > enough. (The package does at least contain a non-minimized version of
> > bokeh.js)
> I'm not sure about this, but it looks like the Bokeh source is a
> huge directory of coffeescript files, while the resulting
> bokeh.js is not the source code. So build is: 1. coffee -> js
> 2. concat all js. Maybe its possible without grunt, just like
> Antonio did with jQuery?

That's a good idea. I'll investigate.
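
Something along these lines maybe; a very rough sketch, since I haven't
checked the actual file layout or what order the concatenation needs to
happen in:

# compile the coffeescript sources, then glue the output together
coffee --compile --output build/js bokehjs/src/coffee
cat build/js/*.js > bokeh.js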


> 
> Yes, but please fix the long description. It starts with
> "Abstract Rendering takes the opposite approach:" which confused
> me :~)

Good point

> 
> > Should I work on getting
> > blaze submitted?
> 
> If blaze is only needed for the tests, I suggest to postpone it.
> (What is blaze anyway?)

As far as I could tell, blaze is an implementation of a subset of the
numpy/pandas API for talking to things like databases and CSV files.

http://blaze.pydata.org/en/latest/

There are several things that spun off and became dependencies of blaze.
Probably one of the cooler ones is Dask, which provides multi-core/out-of-memory
numpy arrays & pandas dataframes with some kind of remote execution scheduler.
http://dask.pydata.org/en/latest/#



Re: Packaging Bokeh

2015-09-05 Thread Diane Trout
On Saturday, September 05, 2015 09:28:48 Diane Trout wrote:
> > > The most proper packaging would require grunt to be able to rebuild
> > > bokeh.js. I was wondering if releasing the pypi version would be good
> > > enough. (The package does at least contain a non-minimized version of
> > > bokeh.js)
> > 
> > I'm not sure about this, but it looks like the Bokeh source is a
> > huge directory of coffeescript files, while the resulting
> > bokeh.js is not the source code. So build is: 1. coffee -> js
> > 2. concat all js. Maybe its possible without grunt, just like
> > Antonio did with jQuery?
> 
> That's a good idea. I'll investigate.

It looks like there's something complicated going on with how BokehJS is
built. grunt loads some browserify plugins, one for templates and one for
handling CoffeeScript files.

It looks like browserify is providing something like import semantics.

Faking grunt with a makefile may be pretty challenging.

Diane



recommended numpy dependency ranges?

2014-03-31 Thread Diane Trout
Hi,

I have a small package that depends on numpy and it recently stopped working.

 Traceback (most recent call last):
   File "/usr/local/lib/R/site-library/DEXSeq/python_scripts/dexseq_prepare_annotation.py", line 33, in <module>
     import HTSeq
   File "/usr/lib/python2.7/dist-packages/HTSeq/__init__.py", line 9, in <module>
     from _HTSeq import *
   File "numpy.pxd", line 155, in init HTSeq._HTSeq (src/_HTSeq.c:33074)
 ValueError: numpy.dtype has the wrong size, try recompiling


I'm pretty sure a recompile will fix it; the question I have is how often
numpy breaks binary compatibility.

Should you set your numpy dependencies to something like:

python-numpy (>= 1.8, << 1.9)

Diane





Re: recommended numpy dependency ranges?

2014-03-31 Thread Diane Trout

  I'm pretty sure a recompile will fix it, the question I have is how often
  does numpy break binary compatibility?
  
  Should you set your numpy dependencies to something like:
  
  python-numpy (>= 1.8, << 1.9)
 
 do not hard code -- add calls to dh_numpy (dh_numpy3) to your
 rules and make sure you have ${python:Depends} in your Depends

I didn't know about dh_numpy; is there a recommended dh target to use?

Its use doesn't seem very consistent on codesearch.d.o.
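
From what I can tell the common pattern is just tacking it onto the
dh_python overrides, something like this (no idea if that's the recommended
target):

override_dh_python2:
	dh_python2
	dh_numpy

override_dh_python3:
	dh_python3
	dh_numpy3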

Diane





Re: Python 3 as default

2013-12-04 Thread Diane Trout

 Instead, I mean, what would it take for the basic Debian system to install
 Python 3 only by default, and have any system scripts that depend on Python
 be Python 3.

Nothing. 

I just did a default, no-tasks-selected Debian wheezy install and no version of
Python was installed.

Using cowbuilder wheezy & sid chroots I decided to see what Python the tasks
from tasksel install.

(e.g. apt-get -s install task-web-server | grep python)

Tasks:
task-desktop (with --no-install-recommends): no Python

  task-gnome-desktop: (wheezy Python 2.7) (sid both Python 2.7 & Python 3.3)
  task-kde-desktop: (wheezy no Python) (sid Python 2.7)
  task-lxde-desktop: Python 2.7
  task-xfce-desktop: no Python.

task-web-server: no Python
task-print-server: no Python
task-database-server: no Python
task-dns-server: (wheezy no Python) (sid Python 2.7)
task-file-server: Python 2.7
task-mail-server: no Python
task-ssh-server: no Python
task-laptop: no Python

Looking at the lists of packages suggested by apt-get, it seemed like only
GNOME wanted lots of Python packages.

Diane



Re: Packaging the new upstream release of ipython (i.e. 1.1.0)

2013-10-06 Thread Diane Trout

So there is a list of things that need doing for ipython. 

I'd built my own not-redistributable version of 1.1.0 (progress at
https://github.com/detrout/debian-ipython).

I'll see if I can help with some of the work listed below.

Diane

On Thursday, October 03, 2013 22:29:41 Jean-Christophe Jaskula wrote:
 Hey,
 
 Thanks Thomas and Julian for your replies. Don't apologize for that, there
 is no problem taking time to release something clean. BTW, I realize that I
 was doing a package way simpler and dirtier than yours.
 
 I don't want to walk on anyone's toes or do unnecessary/redundant work. I'll
 be happy to help and have a look at one of these libraries' license. I
 can't promise to have a deep look quickly (i.e. next weekend) tho.
 
 Thanks,
 JC
 
 On 3 October 2013, at 13:24, Julian Taylor wrote:
  Hi,
  sorry I am working very very slowly on ipython, a reason is that there is
  no stable release on the horizon, so I have been slacking off a bit.
  Now that ubuntu 13.10 is almost out I have no excuse anymore :)
  
  The issue with ipython 1.1 is the large number of third-party javascript
  libraries it embeds.
  These are the libraries it needs (from bower.json) and their status in
  debian:
 bootstrap: ~2.3,
  
  in debian (libjs-twitter-bootstrap) but too old
  this is also hard to handle with packages so I currently plan to embed it.
  If you want to help out, please do a license/dfsg review of the embedded
  stuff and send me a debian/copyright file.
  
 codemirror: ~3.15,
  
  in debian but far too old and in a hardly usable state.
  ipython currently embeds so I am willing to continue with that. But also
  needs a new license review for 1.0.
  
 font-awesome: ~3.1,
  
  debian package was not suitable, but was fixed recently, should be fine.
  Debian package needs adapting to use it.
  
 jquery: ~2.0,
  
  2.0 not in debian, hopefully will be soon. Maybe ipython also works with
  1.7, didn't test that much yet.
  
 jquery-ui: ~1.10,
  
  available in debian, packaging needs adapting to use it as 0.13.2 embeds
  a fork of it, fork not needed with 1.0 as far as I know.
  
 less.js: ~1.3,
  
  in debian, package needs adapting to use it (via fab)
  
 marked: ~0.2.8,
  
  packaged for debian recently, ipython package needs adapting to use it.
  
 requirejs: ~2.1
  
  not in debian and looks like something that should be in debian (unlike
  bootstrap or codemirror it looks stable).
  It's not my area of expertise, help wanted.
  If no one shows up I'll probably go with embedding as I can't maintain a
  proper package of this. In this case needs a license review.
  
  On 03.10.2013 18:55, Jean-Christophe Jaskula wrote:
  Hey,
  
  I haven't heard from any of you and I'm still a bit curious of the status
  of the package. FYI, I continued patching and rearranging to fit with
  the upstream sources. I got something which starts looking good to me.
  
  Hope to hear from you soon :-)
  
  Cheers,
  JC
  
  On 29 September 2013, at 16:31, Jean-Christophe Jaskula wrote:
  Hi,
  
  The Ipython team has released a couple of major releases during the last
  months but I haven't seen any discussions about packaging them in
  debian. Just for curiosity, I decided to start trying to update the
  debian package (v0.13.2) to the latest release (v.1.1.0). When doing
  it, I realized there is a lot of changes to do and I do understand why
  it is not in sid yet (putting aside you might be also very busy with
  other things).
  
  I didn't plan to propose this work for a NMU but I'm wondering if
  someone was working on it. So far, my work is still in progress but I
  don't mind keeping working on it if it helps you or dropping it if a
  debian package is going to be uploaded soon. I have adapted most of the
  debian patches to the upstream release. I'm still working on the
  Mathjax patch to avoid ugly hack. I started from the source that one
  can get at: https://github.com/ipython/ipython/archive/rel-1.1.0.tar.gz
  . However, everything isn't shipped in this archive and I had to add
  static components that I took from:
  https://github.com/ipython/ipython/releases/download/rel-1.1.0/ipython-1.1.0.tar.gz .
   I put these files altogether in the same .orig.tar.gz
  archive and started packaging from it.
  
  If you need help on this packaging, I'll be very happy to contribute.
  
  Cheers,
  JC
  
  PS: I'm attaching the debian folder for your information.
  ipython_1.1.0-0+nmu1.debian.tar.gz
  --
  Jean-Christophe Jaskula
 
 --
 Jean-Christophe Jaskula
