Re: Python C-library import paths
On 2022-04-02 16:04 +0100, Simon McVittie wrote:
> On Sat, 02 Apr 2022 at 12:55:37 +0100, Wookey wrote:
> > On 2022-04-01 00:30 -0400, M. Zhou wrote:
> > > They have written their own ffi loader, so I think it is an
> > > upstream bug. The upstream should detect and add multiarch
> > > directory to the paths.
> >
> > A correct implementation really should use the full ldconfig set of
> > search paths.
>
> I think what they should actually be doing on Linux (at least in distro
> packages) is taking a step back from all these attempts to reproduce
> the system's search path for public shared libraries, and instead doing
> this in https://github.com/apache/tvm/blob/main/python/tvm/_ffi/base.py:
>
>     ctypes.CDLL('libtvm.so.0')
>
> which will (ask glibc to) do the correct path search, in something like
> 99% less code.

Aha. That sounds much more like the answer to the query in my original
mail: 'or is there a way to turn on "just use the system paths" mode?'.

This does indeed work to load the library without a lot of manual
search-path generation, but it's a bit tricky to use in the existing
codebase, which wants the loader function to return both a name and a
path, and with this magic loading I don't know the path.

    _LIB, _LIB_NAME = _load_lib()

The second parameter only seems to be used to determine whether libtvm
or libtvm_runtime was loaded. I think I can work round that. OK.
This patch appears to basically work:

--- tvm-0.8.0.orig/python/tvm/_ffi/base.py
+++ tvm-0.8.0/python/tvm/_ffi/base.py
@@ -48,15 +48,21 @@ else:

 def _load_lib():
     """Load libary by searching possible path."""
-    lib_path = libinfo.find_lib_path()
     # The dll search path need to be added explicitly in
     # windows after python 3.8
     if sys.platform.startswith("win32") and sys.version_info >= (3, 8):
         for path in libinfo.get_dll_directories():
             os.add_dll_directory(path)
-    lib = ctypes.CDLL(lib_path[0], ctypes.RTLD_GLOBAL)
+
+    use_runtime = os.environ.get("TVM_USE_RUNTIME_LIB", False)
+    if use_runtime:
+        lib = ctypes.CDLL('libtvm_runtime.so.0', ctypes.RTLD_GLOBAL)
+        sys.stderr.write("Loading runtime library %s... exec only\n" % lib._name)
+        sys.stderr.flush()
+    else:
+        lib = ctypes.CDLL('libtvm.so.0', ctypes.RTLD_GLOBAL)
     lib.TVMGetLastError.restype = ctypes.c_char_p
-    return lib, os.path.basename(lib_path[0])
+    return lib

 try:
@@ -68,10 +74,10 @@ except ImportError:
 # version number
 __version__ = libinfo.__version__
 # library instance
-_LIB, _LIB_NAME = _load_lib()
+_LIB = _load_lib()
 # Whether we are runtime only
-_RUNTIME_ONLY = "runtime" in _LIB_NAME
+_RUNTIME_ONLY = "runtime" in _LIB._name

Yay! _Unless_ you ask to use the runtime version - then it blows up.

$ TVM_USE_RUNTIME_LIB=1 tvmc
...
  File "/usr/lib/python3/dist-packages/tvm/auto_scheduler/cost_model/cost_model.py", line 37, in __init__
    self.__init_handle_by_constructor__(_ffi_api.RandomModel)
AttributeError: module 'tvm.auto_scheduler._ffi_api' has no attribute 'RandomModel'

I've not looked into that yet.

Back to the issue of getting the path of the loaded library. Which I no
longer obviously _need_, but I would like to know how to do it.
There is ctypes.util.find_library(name), which says it returns a path,
but ctypes.util.find_library('tvm') just returns 'libtvm.so.0'.

I can't see an attribute in the returned lib object to get the path:

>>> lib = ctypes.CDLL('libtvm.so.0')
>>> print(lib)
>>> print(lib.__dir__())
['_name', '_FuncPtr', '_handle', '__module__', '__doc__', '_func_flags_',
 '_func_restype_', '__init__', '__repr__', '__getattr__', '__getitem__',
 '__dict__', '__weakref__', '__hash__', '__str__', '__getattribute__',
 '__setattr__', '__delattr__', '__lt__', '__le__', '__eq__', '__ne__',
 '__gt__', '__ge__', '__new__', '__reduce_ex__', '__reduce__',
 '__subclasshook__', '__init_subclass__', '__format__', '__sizeof__',
 '__dir__', '__class__']

But _something_ knows, because if I ask for an incorrect thing it prints
it out as an AttributeError:

>>> print(lib.wibble)
Traceback (most recent call last):
  File "", line 1, in
  File "/usr/lib/python3.9/ctypes/__init__.py", line 387, in __getattr__
    func = self.__getitem__(name)
  File "/usr/lib/python3.9/ctypes/__init__.py", line 3
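For what it's worth, ctypes itself never stores the resolved path, only the name it was given. One Linux-only workaround (my own suggestion, not anything TVM does) is to ask the process which file actually got mapped, via /proc/self/maps; glibc's dlinfo() is the other route. A sketch, using libc since it is always mapped:

```python
import ctypes
import ctypes.util
import os


def loaded_lib_path(lib):
    """Best-effort absolute path of a loaded library (Linux-only sketch).

    Scans /proc/self/maps for a mapping whose basename starts with the
    stem of the name the library was opened under.
    """
    stem = os.path.basename(lib._name).split(".so")[0]
    with open("/proc/self/maps") as maps:
        for line in maps:
            fields = line.split()
            # pathname is the 6th field on lines that have one
            if len(fields) >= 6 and os.path.basename(fields[5]).startswith(stem):
                return fields[5]
    return None


libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")
print(loaded_lib_path(libc))  # e.g. /lib/x86_64-linux-gnu/libc.so.6
```

This is a heuristic (a different library with the same stem could match first), but it answers the "where did that actually come from?" question well enough for debugging.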
Re: Python C-library import paths
On 2022-04-01 00:30 -0400, M. Zhou wrote:
> On Fri, 2022-04-01 at 02:32 +0100, Wookey wrote:
> >
> > So it tries quite hard to find it, but doesn't know about multiarch
> > and thus fails to look in the right place:
> > /usr/lib/<triplet>/ (/usr/lib/x86_64-linux-gnu/ on this box)
>
> dlopen should know the multiarch triplet on debian. They have written
> their own ffi loader, so I think it is an upstream bug. The upstream
> should detect and add multiarch directory to the paths.

Agreed. I also don't think it should use the $PATH paths for finding
libraries (but maybe upstream have some reason for doing that).

I made this patch, but it's debian-specific, using dpkg-architecture:

@@ -48,7 +49,8 @@ def get_dll_directories():
     # $PREFIX/lib/python3.6/site-packages/tvm/_ffi
     ffi_dir = os.path.dirname(os.path.realpath(os.path.expanduser(__file__)))
     source_dir = os.path.join(ffi_dir, "..", "..", "..")
-    install_lib_dir = os.path.join(ffi_dir, "..", "..", "..", "..")
+    multiarch_name = subprocess.run(['dpkg-architecture', '-q', 'DEB_HOST_MULTIARCH'], stdout=subprocess.PIPE).stdout.decode('utf-8').rstrip()
+    install_lib_dir = os.path.join(ffi_dir, "..", "..", "..", "..", multiarch_name)

(and it took me _ages_ to work out that subprocess.run without that
.rstrip() leaves the trailing newline in the string, which stops it
working!)

A correct implementation really should use the full ldconfig set of
search paths.

> > OK, but that mostly reveals a second issue: it's looking for
> > libtvm.so, but that unversioned link is only provided in the dev
> > package libtvm-dev. The library package has the versioned filenames
> > /usr/lib/x86_64-linux-gnu/libtvm.so.0
> > /usr/lib/x86_64-linux-gnu/libtvm_runtime.so.0
>
> I think it is fine to let it dlopen the libtvm.so, as it says
> itself as some sort of "compiler".
>
> Take pytorch as example, python3-torch has some functionalities
> for extending itself with C++. As a result, libtorch-dev is
> a dependency of python3-torch.

OK.
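The trailing-newline gotcha deserves a note: subprocess.run() hands back stdout exactly as the command wrote it, final newline included, and os.path.join will happily embed that newline mid-path. A stand-in sketch (using the Python interpreter instead of dpkg-architecture, which may not be installed):

```python
import subprocess
import sys

# Stand-in for ['dpkg-architecture', '-q', 'DEB_HOST_MULTIARCH'];
# any command's stdout ends with a newline.
cmd = [sys.executable, "-c", "print('x86_64-linux-gnu')"]

raw = subprocess.run(cmd, stdout=subprocess.PIPE).stdout.decode("utf-8")
assert raw == "x86_64-linux-gnu\n"  # newline still attached

# Hence the .rstrip() in the patch above.  An alternative spelling is
# text=True (str output) plus .strip():
triplet = subprocess.run(cmd, capture_output=True, text=True).stdout.strip()
assert triplet == "x86_64-linux-gnu"
```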
I see there is also a find_include_path in libinfo.py, so I guess if it
needs the headers too then depending on the -dev package is indeed
correct. I've reverted the change to look for libtvm.so.0.

> > What it should actually be adding is what's in /etc/ld.so.conf.d/*
> > That can be listed with
> > /sbin/ldconfig -v 2>/dev/null | grep -v ^$'\t' | cut -d: -f1
> > (yuk? is there really no better way?)

OK. I tried this, and given that I don't know any python it went better
than I expected. So this code makes an array of paths (as strings) from
ldconfig -v output. However I fell at the last hurdle of joining the
lib_search_dirs array to the dll_paths list such that I get one list of
all the paths, not a list where the first entry still has multiple
entries.

My reading of the docs says that using extend() instead of append()
should merge the lists, but it isn't for some reason. I made them both
strings, rather than one lot of byte arrays and one lot of strings, but
it still doesn't work. I'm sure this is trivial to fix for someone who
actually knows some python, hence this mail.
So I get this nice set of paths:

search_dirs [['/usr/lib/x86_64-linux-gnu/libfakeroot:', '/usr/local/lib:',
'/lib/x86_64-linux-gnu:', '/usr/lib/x86_64-linux-gnu:', '/lib:', '/usr/lib:']]

which is combined with the other paths to get this incorrect data
structure:

dll_path: [['/usr/lib/x86_64-linux-gnu/libfakeroot:', '/usr/local/lib:',
'/lib/x86_64-linux-gnu:', '/usr/lib/x86_64-linux-gnu:', '/lib:', '/usr/lib:'],
'/usr/lib/python3/dist-packages/tvm/_ffi/..',
'/usr/lib/python3/dist-packages/tvm/_ffi/../../../build',
'/usr/lib/python3/dist-packages/tvm/_ffi/../../../build/Release',
'/usr/lib/python3/dist-packages/tvm/_ffi/../../../lib']

Here is the code:

def get_lib_search_dirs():
    """Get unix library search path from ldconfig -v"""
    # loads of output, only lines starting with / are relevant
    output = subprocess.run(["/sbin/ldconfig", "-v"], capture_output=True)
    paths = output.stdout.split(b'\n')
    filtered = []
    for path in paths:
        if path[:1] == b'/':
            filtered.append(path.split()[0].decode())
    return [filtered]

def get_dll_directories():
    """Get the possible dll directories"""
    # NB: This will either be the source directory (if TVM is run
    # inplace) or the install directory (if TVM
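The culprit is visible in the code above: get_lib_search_dirs() ends with `return [filtered]`, a one-element list wrapping the real list, so the nesting exists before append() or extend() ever runs. A sketch against canned ldconfig-style output (so it runs without /sbin/ldconfig), also stripping the trailing ':' that ldconfig prints:

```python
# Two directory lines of fake 'ldconfig -v' output; the real thing would
# come from subprocess.run(["/sbin/ldconfig", "-v"], capture_output=True).stdout
sample = (b"/usr/local/lib:\n"
          b"\tlibfoo.so.1 -> libfoo.so.1.0\n"
          b"/usr/lib/x86_64-linux-gnu: (from /etc/ld.so.conf.d/x86_64-linux-gnu.conf)\n")


def get_lib_search_dirs(output):
    filtered = []
    for line in output.split(b"\n"):
        if line[:1] == b"/":
            # keep the directory; drop the ':' and any '(from ...)' note
            filtered.append(line.split()[0].decode().rstrip(":"))
    return filtered  # plain list - NOT [filtered]


dll_path = ["/usr/lib/python3/dist-packages/tvm/_ffi/.."]
dll_path.extend(get_lib_search_dirs(sample))  # extend now merges flat
print(dll_path)
# ['/usr/lib/python3/dist-packages/tvm/_ffi/..', '/usr/local/lib', '/usr/lib/x86_64-linux-gnu']
```

With the wrapping removed, extend() behaves exactly as the docs promise; append() would instead have added the whole list as a single element.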
Python C-library import paths
I am packaging apache tvm. It builds a C-library libtvm.so (and
libtvm_runtime.so). It also has a python interface, which is how most
people use it, so I've built that into python3-tvm.

It has a /usr/bin/tvmc which fails if you run it, due to not being able
to find the installed C-libraries. I have no idea how the python
c-library-finding mechanism constructs its path list, so I'm not sure
where to prod this to make it work. There is presumably a right place to
add a path to look in, or maybe to enable the 'it's in the standard
debian system path - just do what /etc/ld.so.conf.d/* says'
functionality. Currently I get this:

$ tvmc
Traceback (most recent call last):
  File "/usr/bin/tvmc", line 33, in
    sys.exit(load_entry_point('tvm==0.8.0', 'console_scripts', 'tvmc')())
  File "/usr/bin/tvmc", line 25, in importlib_load_entry_point
    return next(matches).load()
  File "/usr/lib/python3.9/importlib/metadata.py", line 77, in load
    module = import_module(match.group('module'))
  File "/usr/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  [many frozen importlib frames]
  File "/usr/lib/python3/dist-packages/tvm/__init__.py", line 26, in
    from ._ffi.base import TVMError, __version__, _RUNTIME_ONLY
  File "/usr/lib/python3/dist-packages/tvm/_ffi/__init__.py", line 28, in
    from .base import register_error
  File "/usr/lib/python3/dist-packages/tvm/_ffi/base.py", line 71, in
    _LIB, _LIB_NAME = _load_lib()
  File "/usr/lib/python3/dist-packages/tvm/_ffi/base.py", line 51, in _load_lib
    lib_path = libinfo.find_lib_path()
  File "/usr/lib/python3/dist-packages/tvm/_ffi/libinfo.py", line 146, in find_lib_path
    raise RuntimeError(message)
RuntimeError: Cannot find the files. List of candidates:
/home/wookey/bin/libtvm.so
/usr/local/bin/libtvm.so
/usr/bin/libtvm.so
/bin/libtvm.so
/usr/local/games/libtvm.so
/usr/games/libtvm.so
/usr/lib/python3/dist-packages/tvm/libtvm.so
/usr/lib/libtvm.so
/home/wookey/bin/libtvm_runtime.so
/usr/local/bin/libtvm_runtime.so
/usr/bin/libtvm_runtime.so
/bin/libtvm_runtime.so
/usr/local/games/libtvm_runtime.so
/usr/games/libtvm_runtime.so
/usr/lib/python3/dist-packages/tvm/libtvm_runtime.so
/usr/lib/libtvm_runtime.so

So it tries quite hard to find it, but doesn't know about multiarch and
thus fails to look in the right place: /usr/lib/<triplet>/
(/usr/lib/x86_64-linux-gnu/ on this box). Also, does python really think
that /usr/local/games/ should be checked before /usr/lib/? That just
seems wrong. Clues about where to prod gratefully received.

I see that /usr/lib/python3/dist-packages/tvm/_ffi/libinfo.py contains a
function 'get_dll_directories' which looks promising and adds
TVM_LIBRARY_PATH to the search list, and if I run tvmc like this:

TVM_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu/ tvmc

then that path is at the top of the list. OK, but that mostly reveals a
second issue: it's looking for libtvm.so, but that unversioned link is
only provided in the dev package libtvm-dev. The library package has the
versioned filenames
/usr/lib/x86_64-linux-gnu/libtvm.so.0
/usr/lib/x86_64-linux-gnu/libtvm_runtime.so.0

So I also have to persuade it to look for libtvm.so.0, not libtvm.so.
Where does that info live?
OK, a bit more research shows that that is in
/usr/lib/python3/dist-packages/tvm/_ffi/libinfo.py, which is in the
source as python/tvm/_ffi/libinfo.py, in find_lib_path, and that's easy
to fix, and probably even the right place to fix it?

The paths are harder though. get_dll_directories in
python/tvm/_ffi/libinfo.py adds $PATH after $LD_LIBRARY_PATH to make its
search list. Is searching $PATH for libraries ever right?

What it should actually be adding is what's in /etc/ld.so.conf.d/*
That can be listed with
/sbin/ldconfig -v 2>/dev/null | grep -v ^$'\t' | cut -d: -f1
(yuk? is there really no better way?)

How does one do that in python to get that set of paths added in the
libinfo.py function?
https://github.com/apache/tvm/blob/main/python/tvm/_ffi/libinfo.py

Am I barking up the right tree here, or is there a better way?

Wookey
--
Principal hats: Debian, Wookware, ARM
http://wookware.org/
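One alternative to scraping ldconfig -v output is reading the ld.so.conf drop-ins directly. A sketch (my own, not TVM code), exercised on a temporary directory since the contents of the real /etc/ld.so.conf.d vary by system; note it deliberately ignores `include` directives and hwcap suffixes:

```python
import glob
import os
import tempfile


def ld_so_conf_dirs(conf_dir="/etc/ld.so.conf.d"):
    """Collect search directories from *.conf drop-ins.

    Simplified: no handling of 'include' lines or hwcap annotations.
    """
    dirs = []
    for conf in sorted(glob.glob(os.path.join(conf_dir, "*.conf"))):
        with open(conf) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    dirs.append(line)
    return dirs


# demonstrate on a throwaway directory with one fabricated drop-in
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "x86_64-linux-gnu.conf"), "w") as f:
        f.write("# Multiarch support\n"
                "/usr/local/lib/x86_64-linux-gnu\n"
                "/lib/x86_64-linux-gnu\n")
    print(ld_so_conf_dirs(d))
    # ['/usr/local/lib/x86_64-linux-gnu', '/lib/x86_64-linux-gnu']
```

Whether upstream would accept either this or the ldconfig-scraping approach is a separate question; glibc itself treats ld.so.conf as authoritative, so this at least matches what the dynamic linker does.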
Re: RFP: mapillary-tools -- Useful tools and scripts related to Mapillary
[cc:ing debian-python in case anyone happens to know enough about
python3-construct to provide some clues]

OK, so it turns out that there are problems packaging pymp4. It depends
on construct, a (nice) library for parsing binary formats. However said
library seems to have little interest in maintaining any sort of stable
API, so there have been significant changes between 2.8, 2.9 and 2.10
(and in fact pymp4 needs 2.8.8 quite specifically, and doesn't even work
with 2.8.16). 2.8.8 is from October 2016, and Debian now has 2.10.x in
stable, testing and unstable.

There is a bug in construct 2.8.8 which means that pymp4 fails 5 out of
30-odd tests even with that. A python class moved, so that is trivially
fixed with:

--- construct-2.8.8.orig/construct/core.py
+++ construct-2.8.8/construct/core.py
@@ -997,7 +997,7 @@ class Range(Subconstruct):
         max = self.max(context) if callable(self.max) else self.max
         if not 0 <= min <= max <= sys.maxsize:
             raise RangeError("unsane min %s and max %s" % (min, max))
-        if not isinstance(obj, collections.Sequence):
+        if not isinstance(obj, collections.abc.Sequence):
             raise RangeError("expected sequence type, found %s" % type(obj))
         if not min <= len(obj) <= max:
             raise RangeError("expected from %d to %d elements, found %d" % (min, max, len(obj)))

But as no-one cares about 2.8.x anyway this fix doesn't help much.
What's really needed is updating pymp4 to use construct 2.10 (or
switching to a more stable library if such a thing exists ('Kaitai
Struct' was mentioned)). There is an upstream issue for this:
https://github.com/beardypig/pymp4/issues/3
which I've just added some info to.

2.9 changes the way Strings work: an encoding is always required, and
explicit flavours of padding (left/right, specifiable padding char) have
been removed. pymp4 uses these padding options (specifying spaces and
right-padding), but may still work fine with the remaining default
behaviour of 'PaddedString' (nulls and right-padding).
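The one-line fix above reflects a real deprecation cycle: the container ABCs have lived in collections.abc since Python 3.3, and the old aliases under plain collections were removed in 3.10, which is exactly what breaks construct 2.8.8 on modern interpreters. A quick check:

```python
import collections
import collections.abc
import sys

# The collections.abc spelling works on every Python >= 3.3 ...
assert isinstance([1, 2], collections.abc.Sequence)
assert isinstance("ab", collections.abc.Sequence)
assert not isinstance({"a": 1}, collections.abc.Sequence)

# ... while the bare collections.Sequence alias is gone from 3.10 on,
# so old code raises AttributeError there instead of doing the check.
if sys.version_info >= (3, 10):
    assert not hasattr(collections, "Sequence")
```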
I don't know enough about either the MP4 format or the codebase to be
sure. I did knock up a patch to update to the new string API.

2.9 also loses Embedded bitwise structs. And 2.10 loses 'Embedded'
completely. I have not really managed to work out exactly what
'Embedded' does. I can't really work out what the difference is between
putting a bitwise struct in the middle of a struct and putting an
Embedded bitwise struct in the middle of a struct. Mostly this is
because the online docs only cover 2.10, not 2.8, so I don't know what
the old definition was. I spent a couple of hours trying to work it out.
It's made more complicated by the fact that construct also switched from
'bits by default' to 'bytes by default' for efficiency reasons.

I've put a half-arsed patch in the github issue which is probably OK for
the strings part and almost certainly wrong for the Embedded part. For
example, I have no idea how to deal with this, which selects one struct
depending on a string:

Box = PrefixedIncludingSize(Int32ub, Struct(
    "offset" / TellMinusSizeOf(Int32ub),
    "type" / Peek(String(4, padchar=b" ", paddir="right")),
    Embedded(Switch(this.type, {
        b"ftyp": FileTypeBox,
        b"styp": SegmentTypeBox,
        b"mvhd": MovieHeaderBox,
        b"moov": ContainerBoxLazy,
        ...
        b'afrt': HDSFragmentRunBox
    }, default=RawBox)),
    "end" / Tell
))

Changing
-    "type" / Peek(String(4, padchar=b" ", paddir="right")),
to
+    "type" / Peek(PaddedString(4, "ascii")),
is probably equivalent, but what is the equivalent syntax for choosing
the right struct for the 'type' field according to the first 4 bytes of
it, without using 'Embedded'?

This was where I decided it was bedtime and admitted defeat for the time
being. If someone familiar with construct 2.8 to 2.10 upgrades wanted to
take a look at this, that would be very helpful. For some things we
might need to know something about the mp4 format too. I'm not sure to
what degree we need to understand the format, or if we can more or less
mechanically update the syntax.
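For the record, the usual construct 2.10 migration for a removed Embedded(Switch(...)) is to give the Switch a field name of its own, which nests the parsed box body under that name instead of flattening its fields into the parent. An untested sketch of what that would look like here — the field name "data" is my invention, the box types are pymp4's, and note that PaddedString parses to str, so the Switch keys change from bytes to str:

```python
# Hypothetical construct>=2.10 spelling (untested).  Every consumer of
# these boxes would then access box.data.some_field rather than
# box.some_field, which is the behavioural cost of losing Embedded.
Box = PrefixedIncludingSize(Int32ub, Struct(
    "offset" / TellMinusSizeOf(Int32ub),
    "type" / Peek(PaddedString(4, "ascii")),
    "data" / Switch(this.type, {
        "ftyp": FileTypeBox,
        "styp": SegmentTypeBox,
        "mvhd": MovieHeaderBox,
        "moov": ContainerBoxLazy,
        # ...
        "afrt": HDSFragmentRunBox,
    }, default=RawBox),
    "end" / Tell,
))
```

This is a schema fragment, not runnable on its own (it assumes the pymp4 box definitions and construct 2.10 are importable), and whether the extra nesting is acceptable to pymp4's callers is exactly the open question in the upstream issue.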
So, for now there is no mapillary-tools in Debian, and without a response from upstream or some help I'm stuck. Wookey -- Principal hats: Linaro, Debian, Wookware, ARM http://wookware.org/ signature.asc Description: PGP signature
Re: Multiple package versions in PPA
On 2021-09-13 16:00 +, Francis Murtagh wrote:
> Hi,
>
> I've added a python package called python3-pyarmnn to Debian
> (https://tracker.debian.org/pkg/armnn) but also have it pushed to a
> Ubuntu Launchpad PPA
> (https://launchpad.net/~armnn/+archive/ubuntu/ppa/+packages).
> When I push new versions of the packaging it seems to overwrite the
> previous, I'm assuming this is by design as the archive should only
> have one version of the software at a time.

Correct.

> However, this python3-pyarmnn package is just a wrapper for a C++
> library libarmnn26, 26 being its major version.
> So from the PPA the user can apt install libarmnn26 or libarmnn25 etc,
> but if they install python3-pyarmnn it's always the latest and so that
> drags in the latest libarmnn i.e 26.

Right. Normally in debian (and Ubuntu) we allow multiple versions of
libraries so that things linked against the older library still work
until everything is upgraded and nothing is using the old library (then
it can be removed, and often is automatically). But only one version of
higher-level apps which use those libraries, normally linked against the
latest library version available.

What do users gain from being able to install a python3-pyarmnn linked
against an older version of the library? You could do it by having
python3-pyarmnn25 and python3-pyarmnn26 etc, but normally there is not
enough benefit from this for it to be worth the effort. Do you just want
to be able to do it for test purposes? If so you could just make
multiple PPAs, and install python3-pyarmnn from the appropriate one?

Wookey
--
Principal hats: Linaro, Debian, Wookware, ARM
http://wookware.org/
Re: packaging django-rest-framework-filters
On 2018-11-22 08:59 +0100, Thomas Goirand wrote:
> On 11/22/18 6:02 AM, Wookey wrote:
> > Also related: I've updated drf-extensions to 0.4 (from the current
> > 0.3.1), as that is needed for lava, and fixes
> > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=865851
> >
> > What I'm not quite sure about is if there is any reason _not_ to
> > update this package. It has no reverse dependencies so I presume this
> > is a good idea and I should just get on with it? Would a 10-day NMU
> > be appropriate?
>
> If you join the team, I see no reason why you couldn't do the upgrade
> yourself indeed, especially if you do a 10-day NMU on it.

OK. Can you/someone sign me up to the team please? (Asking here seems to
be the mechanism, according to
https://wiki.debian.org/Teams/PythonModulesTeam/HowToJoin)

I have read and accept:
https://salsa.debian.org/python-team/tools/python-modules/blob/master/policy.rst
(although this maintaining-packages-with-git malarkey is entirely new to
me - I'll try and get it right). It looks very complicated in comparison
to making a debian package and uploading it.

Then I'll do this drf-extensions update as a team upload.

Wookey
--
Principal hats: Linaro, Debian, Wookware, ARM
http://wookware.org/
Re: packaging django-rest-framework-filters
On 2018-11-28 08:58 +, Neil Williams wrote:
> On Mon, 26 Nov 2018 04:05:05 +
> Wookey wrote:
>
> > On 2018-11-22 08:59 +0100, Thomas Goirand wrote:
> > > On 11/22/18 6:02 AM, Wookey wrote:
> >
> > I guess djangorestframework-filters is clearer, and closer to
> > upstream so people can find it. I'll go with that unless someone
> > says the drf-* names are a better plan.
> >
> > OK. I've made a new package (djangorestframework-filters), which
> > seems OK. (actually I've made 2 (see below))
> >
> > I'm not a member of the modules team, so can't follow the
> > instructions to make a salsa project under the python-team banner
> > (https://wiki.debian.org/Python/GitPackaging)
> >
> > I'll put it under wookey for now.
> >
> > Also, the latest release is 0.10.0.post0, which says it's compatible
> > with:
> > * **Python**: 2.7 or 3.3+
> > * **Django**: 1.8, 1.9, 1.10, 1.11
> > * **DRF**: 3.5, 3.6
>
> djangorestframework lists "Breaking changes" in the release notes for
> 3.8, so this looks like an incompatibility with what's already in
> Debian.
>
> https://github.com/encode/django-rest-framework/releases/tag/3.8.0
>
> It might be worth testing whether 0.10.0.post0 or another release of
> django-rest-framework-filters between that and 1.0.0dev0 is actually
> fine with djangorestframework 3.8 - at least at a unit test level.

Milosz and I are doing just that.

It would be great if you could check whether these packages work OK with
lava or not, as that's at least one datapoint:
http://wookware.org/software/repo/pool/main/d/drf-extensions/python3-djangorestframework-extensions_0.4.0-1_all.deb
http://wookware.org/software/repo/pool/main/d/djangorestframework-filters/python3-djangorestframework-filters_0.10.2.post0-1_all.deb

> > However the version of DRF in testing is now 3.8, and python 3.6, so
> > perhaps it's better to upload the upcoming v1.0.0.dev0:
> > * **Python**: 3.4, 3.5, 3.6
> > * **Django**: 1.11, 2.0, 2.1b1
> > * **DRF**: 3.8
> > * **django-filter**: 2.0
> >
> > But the version of django-filter in debian is 1.1.0, so at first
> > glance neither of these versions will work with the components
> > available.
> >
> > I'd normally upload the last released version, i.e. 0.10.2.post0,
> > but I'm not sure how these interactions with versions of
> > djangorestframework and django-filter work. Any advice or shall I
> > work this out with upstreams?
>
> django-filters has a new upstream 2.0 but a migration guide has been
> published for that:
>
> https://django-filter.readthedocs.io/en/master/guide/migration.html#migrating-to-2-0

Cheers for that link.

> So it's likely that at least some reverse dependencies would be broken
> by django-filters version 2.0.

As you say - if we have to go to django-filters 2.0, then things
probably get complicated (too complicated for buster).

Wookey
--
Principal hats: Linaro, Debian, Wookware, ARM
http://wookware.org/
Re: packaging django-rest-framework-filters
On 2018-11-22 08:59 +0100, Thomas Goirand wrote:
> On 11/22/18 6:02 AM, Wookey wrote:

I guess djangorestframework-filters is clearer, and closer to upstream
so people can find it. I'll go with that unless someone says the drf-*
names are a better plan.

OK. I've made a new package (djangorestframework-filters), which seems
OK. (actually I've made 2 (see below))

I'm not a member of the modules team, so can't follow the instructions
to make a salsa project under the python-team banner
(https://wiki.debian.org/Python/GitPackaging)

I'll put it under wookey for now.

Also, the latest release is 0.10.0.post0, which says it's compatible
with:
* **Python**: 2.7 or 3.3+
* **Django**: 1.8, 1.9, 1.10, 1.11
* **DRF**: 3.5, 3.6

However the version of DRF in testing is now 3.8, and python 3.6, so
perhaps it's better to upload the upcoming v1.0.0.dev0:
* **Python**: 3.4, 3.5, 3.6
* **Django**: 1.11, 2.0, 2.1b1
* **DRF**: 3.8
* **django-filter**: 2.0

But the version of django-filter in debian is 1.1.0, so at first glance
neither of these versions will work with the components available.

I'd normally upload the last released version, i.e. 0.10.2.post0, but
I'm not sure how these interactions with versions of
djangorestframework and django-filter work. Any advice or shall I work
this out with upstreams?

Wookey
--
Principal hats: Linaro, Debian, Wookware, ARM
http://wookware.org/
Re: packaging django-rest-framework-filters
On 2018-11-22 08:59 +0100, Thomas Goirand wrote:
> On 11/22/18 6:02 AM, Wookey wrote:

No advice on the best way to start a python packaging template?

> > And it looks like it should be called src:drf-filters
> > binary:python3-djangorestframework-filters to fit in with naming
> > conventions of related packages/python team (even though upstream is
> > 'django-rest-framework-filters'). Right?
>
> The binary package name is right, though there's no convention for the
> source package naming. "drf-filters" doesn't feel very descriptive to
> me though.

I agree, but I took it from:
drf-extensions -> python3-djangorestframework-extensions
drf-generators -> python3-djangorestframework-generators
drf-haystack -> python3-djangorestframework-haystack
on the other hand there is:
djangorestframework-gis -> python3-djangorestframework-gis

I don't care which is used. I guess djangorestframework-filters is
clearer, and closer to upstream so people can find it. I'll go with that
unless someone says the drf-* names are a better plan.

> > Also related: I've updated drf-extensions to 0.4 (from the current
> > 0.3.1), as that is needed for lava, and fixes
> > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=865851
> >
> > What I'm not quite sure about is if there is any reason _not_ to
> > update this package. It has no reverse dependencies so I presume this
> > is a good idea and I should just get on with it? Would a 10-day NMU
> > be appropriate?
>
> If you join the team, I see no reason why you couldn't do the upgrade
> yourself indeed, especially if you do a 10-day NMU on it.

So I need to join the team to do anything in the salsa repos? I don't
claim any real python expertise (I can just about read it - I try to
avoid writing it as I'm a bash, perl and C person). I would prefer to
sort this out to a decent standard and then hand it over to the python
team for maintenance, but I can join up as a minor contributor if that
works better.
It occurs to me that you probably prefer me to do this as a branch in the drf-extensions salsa repo, rather than a simple NMU? I prefer the latter as I know how to do that (sbuild+dupload), but if I have the permissions I can have a go at this newfangled git-pq + salsa stuff. I presume that'll be less work for you guys and I suppose I'll learn something :-) Wookey -- Principal hats: Linaro, Debian, Wookware, ARM http://wookware.org/ signature.asc Description: PGP signature
packaging django-rest-framework-filters
Hello python-people,

I need to package django-rest-framework-filters in order to make lava
(hardware test framework) work nicely in debian.
https://www.linaro.org/engineering/projects/lava/

I found https://wiki.debian.org/Python/GitPackaging which seems very
helpful. That seems to say to just start a project on salsa, but I
thought I'd better ask here if that was right, as this seems to imply
that the debian python modules team would then be adopting this?

I've done very little python packaging, so advice on the best approach
is very welcome. Is there a dh_make-alike for python, or should I base
the packaging on something related, like djangorestframework-*/drf-*?

And it looks like it should be called src:drf-filters
binary:python3-djangorestframework-filters to fit in with the naming
conventions of related packages/python team (even though upstream is
'django-rest-framework-filters'). Right?

Also related: I've updated drf-extensions to 0.4 (from the current
0.3.1), as that is needed for lava, and fixes
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=865851

What I'm not quite sure about is if there is any reason _not_ to update
this package. It has no reverse dependencies, so I presume this is a
good idea and I should just get on with it? Would a 10-day NMU be
appropriate?

I have another mostly-written mail with details of what I did, which
I'll send shortly, but it's pretty uncontroversial.

Cheers for any pointers.

Wookey
--
Principal hats: Linaro, Debian, Wookware, ARM
http://wookware.org/
Re: Bug#743583: python3-gi fails to install on arm64 (struct.error from py3compile)
+++ Jakub Wilk [2014-04-04 01:21 +0200]:
> * Emilio Pozuelo Monfort , 2014-04-04, 00:49:
> >> Setting up python3-gi (3.10.2-2) ...
> >> Traceback (most recent call last):
> >>   File "/usr/bin/py3compile", line 290, in
> >>     main()
> >>   File "/usr/bin/py3compile", line 270, in main
> >>     options.force, options.optimize, e_patterns)
> >>   File "/usr/bin/py3compile", line 172, in compile
> >>     interpreter.magic_number(version), mtime)
> >> struct.error: 'l' format requires -2147483648 <= number <= 2147483647
>
> Probably something to do with bogus timestamps in the deb:
>
> drwxr-xr-x root/root 0 2063-05-01 14:32 ./

Well spotted. Some repacking has just proved that that is indeed the
problem. I did have a load of stuff built in '2021' earlier in this
project (due to the hwclocks not being settable on the early arm64
hardware/kernels). I hadn't realised I had some things from '2063' too.

What's somewhat odd is that this package worked fine for quite some time
(on the 1st machine), and this issue only came up on a new machine.
Possibly because it's due to a time difference, not absolute time.

Anyway, glad to have that mystery cleared up. And it unbungs a pile of
builds. Thanks.

> Of course, it would be better if dh_python3 was Y2038-compliant. :)

Yes. Is there any point filing a wishlist bug for that, to remind
someone to do something about it before it's a real issue?

Wookey
--
Principal hats: Linaro, Emdebian, Wookware, Balloonboard, ARM
http://wookware.org/
--
To UNSUBSCRIBE, email to debian-python-requ...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: https://lists.debian.org/20140404000204.gw10...@stoneboat.aleph1.co.uk
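The arithmetic behind the failure is easy to demonstrate: a 2063 mtime is around 2.9 billion seconds since the epoch, which overflows a 32-bit signed slot (the 'l' format the traceback shows; the exact format string py3compile uses may differ):

```python
import calendar
import struct

# mtime of a file stamped 2063-05-01 14:32, as in the dpkg listing above
mtime = calendar.timegm((2063, 5, 1, 14, 32, 0, 0, 0, 0))
assert mtime > 2**31 - 1  # past the signed 32-bit limit: the Y2038 wall

try:
    struct.pack("<l", mtime)  # 32-bit signed, as in the traceback
except struct.error as e:
    print("struct.error:", e)

# A 64-bit field has no such problem:
assert len(struct.pack("<q", mtime)) == 8
```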
Re: Solving the multiarch triplet once and for all
+++ Paul Wise [2012-12-19 01:24 +0800]:
> On Wed, Dec 19, 2012 at 12:29 AM, Dmitrijs Ledkovs wrote:
>
> > cross-compiling the archive / cross-bootstrapping the archive for a
> > new architecture.
>
> I suppose cross-compiling will be useful but I didn't think python was
> part of the build-essential set that must be cross-compilable, is that
> actually the case?

It's not just build-essential that has to be cross-compilable. It's
everything you need to build the packages in a debootstrappable (inc
build-essential) image. That's actually at least 254 packages. And quite
a lot of those need perl or python to be multiarch-aware in order to
build.

The set could probably be trimmed down further by adding more
DEB_BUILD_PROFILE info to make reduced builds, but there is approx no
chance of getting rid of python IMHO.

The current best guess at the list is on http://wiki.debian.org/Arm64Port
(that's actually derived from Quantal - unstable does not really have
enough multiarch info in packages to get any useful answers currently).

Wookey
--
Principal hats: Linaro, Emdebian, Wookware, Balloonboard, ARM
http://wookware.org/
--
To UNSUBSCRIBE, email to debian-python-requ...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20121218174553.gk9...@stoneboat.aleph1.co.uk