thanks for the patch :)
BUT python3-qt4 -> python3-pyqt4
I will upload spyder 2.3.7 today.
Thanks
Fred
ok, when I demangle the symbol, I get this
c++filt _ZN5Tango11DeviceProxy14get_corba_nameB5cxx11Eb
Tango::DeviceProxy::get_corba_name[abi:cxx11](bool)
so it seems that this FTBFS is about a cxx11 ABI change.
During this build, the C++ code compiled in pytango (boost python) is no more
Hello Doko,
libtool: link: g++ -g -O2 -fstack-protector-strong -Wformat
-Werror=format-security -std=c++11 -D_REENTRANT -DOMNI_UNLOADABLE_STUBS -Wl,-z
-Wl,relro -o .libs/notifd2db notifd2db.o -L../../lib/cpp/server
/scratch/packages/tmp/tango-8.1.2c+dfsg/build/lib/cpp/server/.libs/libtango.so
I am working on it with the upstream.
Once fixed, I will upload a tango with the v5 extension. Then I will ask for a
transition,
right ?
any libstdc++6 follow-up transition is waived. you can just upload to
unstable.
ok, I will try to fix this issue next week.
thanks
Ok, with the new tango, I get another symbol problem
ImportError: /«PKGBUILDDIR»/build/lib.linux-i686-2.7/PyTango/_PyTango.so:
undefined symbol: _ZN5Tango17ranges_type2constIsE3strE
Tango::ranges_type2const::str
so once again a problem with a string ???
I started a thread about this on debian-python mailing list.
https://lists.debian.org/debian-python/2015/09/msg00028.html
ok, I just uploaded a tango package which fixes the FTBFS with gcc5.
I also made a libstdc++6 transition for tango.
so now I think that after tango is accepted into unstable, a simple binNMU
should fix this issue.
ok, so the missing symbols come from
attribute.o and wattribute.o
:/usr/lib/i386-linux-gnu$ nm -D libtango.so.8.1.2 | grep ranges_type | c++filt
0045f258 D Tango::ranges_type2const::enu
00460dfc B Tango::ranges_type2const::str[abi:cxx11]
0045f280 D Tango::ranges_type2const::enu
00460fdc B Tango::ranges_type2const::str[abi:cxx11]
0045f27c D
Here is also a discussion about the problem on the gcc mailing list:
https://gcc.gnu.org/ml/gcc-help/2015-09/msg00057.html
It seems that an abi_tag attribute should be added in tango to the problematic
symbols in order to help gcc5 decide which ABI is expected.
#ifdef _GLIBCXX_USE_CXX11_ABI
#define
python-scientific is for now not compatible with numpy 1.9.
Hello Paul
> Officially, no, because the documentation says: "If files exist in both
> data and scripts, they will both be executed in an unspecified order."
> However, the current behavior of dbconfig-common is to first run the
> script and then run the admin code and then run the user code. So
> Ehm, yes. :)
so I just tested an upgrade from jessie to sid of tango-db and it works :)))
Now I have only one concern about the dump.
Since we had a failure with the dump when it ran as user, we discovered that
our procedures were wrong and require the dbadmin grants in order to work.
Hello, during the packaging I get this error message from the tests
==
ERROR: spyderlib.widgets.tests.test_array_builder
(unittest.loader.ModuleImportFailure)
--
Hello Andreas, what is strange is this
https://piuparts.debian.org/sid/state-successfully-tested.html#pymca-doc
Is there a problem with piuparts ?
Hello Luca
> This is very unfortunate, but as explained on the mailing list, this
> behaviour was an unintentional internal side effect. I didn't quite
> realise it was there, and so most other devs.
I understand, I just wanted to point out that the synchrotron community invests
a lot of effort in
Hello,
I just opened a bug for tango
https://github.com/tango-controls/cppTango/issues/312
What is the deadline by which we can decide whether or not to upload zeromq
4.2.0 into Debian testing?
This will also leave some time to check that this 4.2.0 does not have other
side effects of
Looking at the upstream repository, it seems that there are plenty of py3 fixes
since the last release 0.9.0,
so maybe it would be better not to run the unit tests for python3 for now.
Another solution is to take the HEAD of python-jedi, as explained by the
upstream [1], and see if it passes the unit
I just applied this patch and the import test PASSES.
I took only a part of the upstream patch
but now I get this
I: pybuild base:184: cd /<>/.pybuild/pythonX.Y_3.5/build;
python3.5 -m pytest test
= test session starts ==
platform linux --
We are in the middle of the ipython transition and we are waiting for
ipykernel to be uploaded.
I needed to upload spyder in order to fix sardana for the ipython transition.
Cheers
Frederic
The problem was in scipy,
#840264
Now it is fixed.
Hello Andreas,
> In jessie, tango-db used mysql-server-5.5 (via mysql-server).
> The upgrade of tango-db was performed after mysql-server had been upgraded
> to mariadb-server-10.0 (via default-mysql-server) and was started again.
do you know if the mariadb-server was running during the upgrade?
Hello, any news about this issue?
Cheers
Fred
Yes, I am working on this with the upstream :))
So don't worry I will tell you when it is ok.
Cheers
Fred
I uploaded tango 9.2.5~rc3+dfsg1-1 into Debian unstable.
I think that once it has migrated into testing it will be ok to close this bug.
Thanks
Fred
Hello Paul
> I really hope I can upload this weekend. I have code that I believe does
> what I want. I am in the process of testing it.
thanks a lot.
> [...]
> What I meant,
> instead of the mysql code that runs as user, run a script for the
> upgrade (they are run with database
Hello Paul,
> Once I fixed 850190,
Do you think that you will fix this bug before next week, in order to leave me
enough time to fix tango and upload it?
> I believe that ought to work, although that is
> still a hack. I was thinking of doing the "DROP PROCEDURE IF EXISTS *"
> calls with the
Hello,
I discussed with the tango-db upstream and he found that
this one line fixed the problem, before doing the tango-db upgrade:
UPDATE mysql.proc SET Definer='tango@localhost' where Db='tango';
Ideally it should be something like
UPDATE mysql.proc SET Definer='xxx' where Db='yyy';
where
Hello, I would like to discuss this bug [1].
I tried to reproduce the piuparts scenario in a virtual machine (gnome-boxes),
installed in 3 steps:
jessie base system
mysql-server (I need a working database)
tango-db (daemon)
It works ok, I have a running tango-db daemon (ps aux |
Thanks to Reynald
1) On Jessie
with the tango account
mysql> use tango;
mysql> show create procedure class_att_prop\G
I got "Create Procedure": NULL
But if I use the root account (mysqladmin)
CREATE DEFINER=`root`@`localhost` PROCEDURE `class_att_prop` (IN class_name
VARCHAR(255), INOUT
Hello,
> I am suspecting that this commit may be related to the current behavior:
> https://anonscm.debian.org/cgit/collab-maint/dbconfig-common.git/commit/?id=acdb99d61abfff54630c4cfba6e4452357a83fb9
> I believe I implemented there that the drop of the database is performed
> with the user
> I am not sure that I follow what you are doing, but if you need the code
> to be run with the dbadmin privileges, you should put the code in:
>/usr/share/dbconfig-common/data/PACKAGE/upgrade-dbadmin/DBTYPE/VERSION
> instead of in:
>
No, I do not have access to my computer until 3 January.
If you want to NMU, go ahead.
Cheers
From: Adrian Bunk [b...@stusta.de]
Sent: Wednesday 21 December 2016 16:57
To: 811...@bugs.debian.org; Picca Frédéric-Emmanuel
Subject: Re: Bug#811973 closed by
It seems that the fix is not enough;
this test fails at the flush:
import nxs
f = nxs.open("/tmp/foo.h5", "w5")
f.makegroup('entry', 'NXentry')
f.opengroup('entry')
f.makegroup('g', 'NXcollection')
f.opengroup('g', 'NXcollection')
f.makedata('d', 'float64', shape=(1,))
f.opendata('d')
here is the error message:
~/Debian/nexus/bugs$ ./bug.py
Traceback (most recent call last):
File "./bug.py", line 15, in
f.flush()
File "/usr/lib/python2.7/dist-packages/nxs/napi.py", line 397, in flush
raise NeXusError, "Could not flush NeXus file %s"%(self.filename)
Let's instrument the code
print filename, mode, _ref(self.handle)
status = nxlib.nxiopen_(filename,mode,_ref(self.handle))
print status
$ python bug.py
filenamenxs.h5 5
0
Hello,
here is the napi code which causes some trouble.
# Convert open mode from string to integer and check it is valid
if mode in _nxopen_mode: mode = _nxopen_mode[mode]
if mode not in _nxopen_mode.values():
raise ValueError, "Invalid open mode %s",str(mode)
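As an aside, the `raise ValueError, "…"` form above is Python 2-only syntax, and the trailing comma means the mode is never actually interpolated into the message. A Python 3 sketch of the same check (the mode table values here are illustrative, not the real NeXus constants):

```python
# Illustrative mode table; the real one lives in nxs.napi as _nxopen_mode.
_nxopen_mode = {"r": 1, "rw": 2, "w": 3, "w4": 4, "w5": 5}

def check_mode(mode):
    # Convert open mode from string to integer and check it is valid
    if mode in _nxopen_mode:
        mode = _nxopen_mode[mode]
    if mode not in _nxopen_mode.values():
        raise ValueError("Invalid open mode %s" % mode)
    return mode
```

With this form, an invalid mode raises a ValueError whose message actually contains the offending value.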
Here is the code of this method:
/**/
static NXstatus NXinternalopen(CONSTCHAR *userfilename, NXaccess am,
pFileStack fileStack);
/*--*/
NXstatus
Activating the NXError reporting, we get:
filenamenxs.h5 5
ERROR: cannot open file: filenamenxs.h5
0
and looking for this error message,
we find it in the napi5.c file:
NXstatus NX5open(CONSTCHAR *filename, NXaccess am,
NXhandle* pHandle)
{
hid_t
In the napi.h file we see this:
#define CONCAT(__a,__b) __a##__b /* token concatenation */
#ifdef __VMS
#define MANGLE(__arg) __arg
#else
#define MANGLE(__arg) CONCAT(__arg,_)
#endif
#define NXopen MANGLE(nxiopen)
/**
* Open a NeXus
Here, after rebuilding hdf5 in debug mode:
:~/Debian/nexus$ ./bug.py
H5get_libversion(majnum=0xbf8a5b04, minnum=0xbf8a5b08, relnum=0xbf8a5b0c) =
SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5open() = SUCCEED;
H5Pcreate(cls=8 (genprop class)) = 18 (genprop
This problem was due to this
python-fabio (0.5.0+dfsg-2) unstable; urgency=medium
* d/control
- python-qt4 -> python3-pyqt4-dbg (Closes: #876288)
Now that python-fabio was solved, it is ok to close this bug
thanks
Frederic
Looking at the Fedora project, they renamed async -> async_:
https://koji.fedoraproject.org/koji/buildinfo?buildID=1097515
In code search I found another package affected by this problem,
which seems to embed pyOpenGL:
https://codesearch.debian.net/search?q=OpenGL.raw.GL.SGIX.async=1
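For context, the rename is forced by the language itself: `async` became a reserved keyword in Python 3.7, so an attribute like `OpenGL.raw.GL.SGIX.async` no longer parses. A quick check:

```python
import keyword

# 'async' is a reserved keyword since Python 3.7, so it can no longer be
# used as an attribute or module name; 'async_' is still a valid identifier,
# hence Fedora's async -> async_ rename.
print(keyword.iskeyword("async"))   # True on Python >= 3.7
print(keyword.iskeyword("async_"))  # False
```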
Cheers
Hello Matthias,
I do not understand this bug report.
I use pybuild, so fabio should be built for all python3 versions.
It now FTBFS due to a problem with the cython package, already reported:
#903909
Cheers
Frederic
> your autopkg tests loops over all *supported* python versions, but you only
> build the extension for the *default* python3 version. Try build-depending on
> python3-all-dev instead and see that you have extensions built for both 3.6
> and
> 3.7. Building in unstable, of course.
But , I
Hello Adrian
If I look at the current boost1.67, I find this in the
boost python package
https://packages.debian.org/sid/amd64/libboost-python1.67.0/filelist
and
https://packages.debian.org/sid/amd64/libboost-python1.67-dev/filelist
We can find these
thanks a lot to both of you, I could not manage to find enough time these days
for this package...
Cheers
Fred
The upstream just packaged the latest taurus.
So I think that you can defer your upload now.
thanks a lot for your help.
Frederic
Hello, this is a problem due to a bug in python-numpy which is already solved
in python-numpy 1.6.5:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=933056
Cheers
From: debian-science-maintainers
not better
test cpp engine for medfilt2d ... ok
testOpenCLMedFilt2d (silx.image.test.test_medianfilter.TestMedianFilterEngines)
test cpp engine for medfilt2d ... pocl error: lt_dlopen("(null)") or lt_dlsym()
failed with 'can't close resident module'.
note: missing symbols in the kernel binary
It seems that this test does not PASS
@unittest.skipUnless(ocl, "PyOpenCl is missing")
def testOpenCLMedFilt2d(self):
    """test cpp engine for medfilt2d"""
    res = medianfilter.medfilt2d(
        image=TestMedianFilterEngines.IMG,
> I didn't notice it, so wasn't planning to add it. spyder_kernels
> imports without complaining, and spyder seems to start fine anyway.
> Where does it come to notice?
I do not know, but on Windows it is optional.
So maybe this is not a big issue.
Fred
It seems that wurlitzer, which is a dependency of spyder-kernel, is missing.
Do you plan to add it?
cheers
Hello
>Package: sardana
>Version: 3.0.0a+3.f4f89e+dfsg-1
>Severity: serious
>The release team have decreed that non-buildd binaries cannot migrate to
>testing. Please make a source-only upload so your package can migrate.
ok, but this package comes from NEW.
So it would be nice if the
Using salsa-ci, python-qtconsole FTBFS due to pyzmq:
https://salsa.debian.org/python-team/modules/python-qtconsole/-/jobs/435758
Hello
> Hi Frédéric, I prepared spyder (and spyder-kernels) for python2 removal.
> The removal of cloudpickle forces us to do it earlier than we otherwise
> might have.
no problem for me :), the faster we get rid of Python2, the better.
> With spyder, it made sense to me to keep spyder as the
Hello Andreas, in fact we were waiting for the packaging of ipywidget 7.x;
the jupyter-sphinx extension expected by lmfit-py requires a newer version of
ipywidget.
So maybe the best solution for now is not to produce the documentation until
this dependency is ok.
cheers
Frederic
-lists.debian.net]
on behalf of Andreas Tille [andr...@an3as.eu]
Sent: Sunday 22 December 2019 10:48
To: PICCA Frederic-Emmanuel
Cc: 943...@bugs.debian.org; MARIE Alexandre
Subject: Bug#943786: lmfit-py: failing tests with python3.8
On Sun, Dec 22, 2019 at 07:54:23AM +, PICCA Frederic-Emmanuel
With the silx.io import I have this
(sid_amd64-dchroot)picca@barriere:~$
PYTHONPATH=silx-0.11.0+dfsg/.pybuild/cpython3_3.7_silx/build python3 test.py
pocl error: lt_dlopen("(null)") or lt_dlsym() failed with 'can't close resident
module'.
note: missing symbols in the kernel binary might be
I decided to concentrate on one OpenCL test (addition),
so I deactivated all the other tests by commenting them out in
silx/opencl/__init__.py.
If I do not import silx.io, this test works:
(sid_amd64-dchroot)picca@barriere:~$ PYOPENCL_COMPILER_OUTPUT=1
looking in picca@sixs7:~/Debian/silx/silx/silx/opencl/test/test_addition.py
def setUp(self):
    if ocl is None:
        return
    self.shape = 4096
    self.data = numpy.random.random(self.shape).astype(numpy.float32)
    self.d_array_img =
Hello Sandro, this is strange because I have this in the control file:
Package: libufo-bin
Architecture: any
Depends: ${misc:Depends}, ${python3:Depends}, ${shlibs:Depends}
Suggests: ufo-core-doc
Description: Library for high-performance, GPU-based computing - tools
The UFO data processing
Hello, if it is like for my ufo-core package, this could be due to a script
file with a shebang using python instead of python3
Cheers
Fred
Maybe this is due to this
picca@cush:~/Debian/ufo-core/ufo-core/bin$ rgrep python *
ufo-mkfilter.in:#!/usr/bin/python
ufo-prof:#!/usr/bin/env python
I will replace python -> python3 and see what is going on
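A minimal sketch of that replacement, assuming only the first line of each script needs touching (the `fix_shebang` helper is mine, not from the package):

```python
from pathlib import Path

def fix_shebang(path: Path) -> bool:
    """Rewrite a 'python' shebang to 'python3'; return True if the file changed."""
    lines = path.read_text().splitlines(keepends=True)
    if (lines and lines[0].startswith("#!")
            and "python" in lines[0] and "python3" not in lines[0]):
        # Only the first 'python' occurrence on the shebang line is rewritten.
        lines[0] = lines[0].replace("python", "python3", 1)
        path.write_text("".join(lines))
        return True
    return False
```

Run over ufo-mkfilter.in and ufo-prof, this would turn both `#!/usr/bin/python` and `#!/usr/bin/env python` into their python3 equivalents while leaving already-correct shebangs alone.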
You can also look at the CI, now that it works :)
https://salsa.debian.org/science-team/veusz/pipelines/137494
Cheers
Frederic
A workaround for now is to install by hand:
apt install python3-scipy
reassign -1 silx
thanks
I tried a new build and I ended up with this error:
gpgv: unknown type of key resource 'trustedkeys.kbx'
gpgv: keyblock resource '/tmp/dpkg-verify-sig.Wwlhs1jL/trustedkeys.kbx':
General error
gpgv: Signature made Mon Dec 16 20:17:19 2019 UTC
gpgv: using RSA key
close 957430 6.5.1-3
thanks
Hello Andreas,
I just built ghmm by removing --with-gsl.
It seems that the gsl implementation of BLAS conflicts with the one provided in
atlas,
so --enable-gsl + --enable-atlas seems wrong...
+--+
| Summary
> Yes, good catch. The spec file for the openSUSE package has this [1]:
so it does not fit with our policy: do not hide problems ;)
The problem is that I do not have enough time to investigate... on a porter box
> Well, the test is obviously broken and upstream currently can't be bothered
> to fix
> it on non-x86 targets. He will certainly have to do it at some point given
> that ARM64
> is replacing more and more x86_64 systems, but I wouldn't bother, personally.
so what is the best solution in order
Ok, in that case, I think that a comment in the d/rules file is enough in
order to keep in mind that we have this issue with ppc64el.
Hello
Looking at the openSUSE log, I can find this:
[ 93s] + pytest-3.8 --ignore=_build.python2 --ignore=_build.python3
--ignore=_build.pypy3 -v -k 'not speed and not (test_model_nan_policy or
test_shgo_scipy_vs_lmfit_2)'
[ 97s] = test session starts
> I have a package of Spyder 4 waiting to upload, but it requires five
> packages to be accepted into unstable from NEW first (pyls-server,
> pyls-black, pyls-spyder, abydos, textdistance); once that happens, the
> rest of the packages are almost ready to go.
Maybe you can contact the ftpmaster
> Strangely enough, I've already done that ;-)
my bad.
Cheers
Fred
The full Python backtrace:
#8
#14 Frame 0x120debd80, for file
/home/picca/matplotlib-3.5.0/build/lib.linux-mips64-3.9/matplotlib/lines.py,
line 2888, in draw (self=,
figure=<...>, _transform=None, _transformSet=False, _visible=True,
_animated=False, _alpha=None, clipbox=None, _clippath=None,
Here, no errors during the build with numpy 1.19.5:
= 10892 passed, 83 skipped, 108 deselected, 19 xfailed, 2 xpassed, 2 warnings
in 1658.41s (0:27:38) =
but 109 for numpy 1.21...
= 14045 passed, 397 skipped, 1253 deselected, 20 xfailed, 2 xpassed, 2
warnings, 109 errors in 869.47s (0:14:29) =
I investigated a bit more, it seems that cover is wrong.
In a bullseye chroot it works
$ python3 ./test.py
(bullseye_mips64el-dchroot)picca@eller:~/matplotlib-3.5.0/build/lib.linux-mips64-3.9$
ls
matplotlib mpl_toolkits pylab.py test.py toto.png
I found that the test failed between the
Here the py-bt
(gdb) py-bt
Traceback (most recent call first):
File
"/home/picca/matplotlib-3.5.0/build/lib.linux-mips64-3.9/matplotlib/lines.py",
line 2888, in draw
File
"/home/picca/matplotlib-3.5.0/build/lib.linux-mips64-3.9/matplotlib/artist.py",
line 50, in draw_wrapper
return
If I run in the sid chroot, but with the binaries built from bullseye, it works.
(sid_mips64el-dchroot)picca@eller:~/matplotlib-3.5.0/build/lib.linux-mips64-3.9$
rm toto.png
(sid_mips64el-dchroot)picca@eller:~/matplotlib-3.5.0/build/lib.linux-mips64-3.9$
python3 test.py
I tested matplotlib built with numpy 1.17, 1.19 and 1.21; each time I got the
segfault.
another difference was the gcc compiler.
So I switched to gcc-10
(sid_mips64el-dchroot)picca@eller:~/matplotlib$ CC=gcc-10 python3 setup.py build
It failed with this error:
lto1: fatal error: bytecode stream in
Built with gcc-11 and -fno-lto, it does not work.
(sid_mips64el-dchroot)picca@eller:~/matplotlib/build/lib.linux-mips64-3.9$
../../../test.py
Segmentation fault
(sid_mips64el-dchroot)picca@eller:~/matplotlib/build/lib.linux-mips64-3.9$
PYTHONPATH=. ../../../test.py
Segmentation fault
I can confirm that the bullseye matplotlib does not produce a segfault
Here is the backtrace on mips64el:
#0
agg::pixfmt_alpha_blend_rgba,
agg::order_rgba>, agg::row_accessor >::blend_solid_hspan(int,
int, unsigned int, agg::rgba8T const&, unsigned char const*)
(covers=0x100 , c=..., len=, y=166, x=,
this=)
at
Bug reports are already filed on matplotlib:
#1000774 and #1000435
I will try to see if this is identical...
This small script triggers the segfault:
#!/usr/bin/env python3
import matplotlib
import matplotlib.pyplot as plt
plt.figure()
plt.title("foo")
plt.savefig("toto.png")
Hello Paul, just for info, I have already reported this issue here
https://github.com/g1257/dmrgpp/issues/38
cheers
Fred.
It seems that it is failing now:
https://ci.debian.net/packages/p/pyfai/
I am on 0.21.2 but I do not know if it solves this mask issue.
Cheers
Fred
Is it not better to use the
DEB__MAINT_APPEND
variable in order to deal with this issue ?
It seems that this is an issue in gcc, as observed when compiling tensorflow:
https://zenn.dev/nbo/scraps/8f1505e365d961
Hello François,
thanks a lot, I removed the NMU number and released a -2 package (uploaded).
thanks for your contribution to Debian.
Fred
In order to debug this, I started gdb,
set a breakpoint in init_module_scitbx_linalg_ext,
then a catch throw, and I ended up with this backtrace:
Catchpoint 2 (exception thrown), 0x770a90a1 in __cxxabiv1::__cxa_throw
(obj=0xb542e0, tinfo=0x772d8200 , dest=0x772c1290
) at
There is a fix from the upstream around enum.
https://github.com/boostorg/python/commit/a218babc8daee904a83f550fb66e5cb3f1cb3013
Fix enum_type_object type on Python 3.11
The enum_type_object type inherits from PyLong_Type which is not tracked
by the GC. Instances doesn't have to be tracked
Hello Anton, I have just pushed a few dependencies in the -dev package in the
salsa repo
I did not update the changelog.
Cheers
Fred
Hello Anton, I tried to check out paraview in order to add the -dev
dependencies, but I got this message:
$ git clone https://salsa.debian.org/science-team/paraview
Cloning into 'paraview'...
remote: Enumerating objects: 175624, done.
remote: Counting objects: 100% (78929/78929), done.
remote:
ok, it seems that I generated an orig.tar.gz with this timestamp (Thu Jan 1
00:00:00 1970).
I cannot remember which tool I used to generate this file.
gbp import-orig --uscan
or
deb-new-upstream
Nevertheless, why is it a serious bug ?
thanks
Frederic
> I am just the messenger here, if you disagree, please feel free to
> contact ftpmasters or lintian maintainers.
This was not a rant about this, I just wanted to understand what is going on :).
> Your package has been built successfully on (some) buildds, but then the
> binaries upload got