Re: glibc_2.27-8 build fails on Butcher (testing)
On 11/20/18 3:57 PM, Roberto C. Sánchez wrote:
>> I will report the logs (compressed) later. So far errors are given
>> only to stdout. I used
>>
>> debuild -b -uc -us > ~/glibc-stdout.log 2> ~/glibc-stderr.log &
>>
> That will make it difficult to correlate outputs and errors. Better
> would be:
>
> debuild -b -uc -us > ~/glibc.log 2>&1
>
> Regards,
>
> -Roberto

Ok, I'll do so.

Regards,
-Tetsuji
Re: glibc_2.27-8 build fails on Butcher (testing)
On Tue, Nov 20, 2018 at 03:47:44PM +0900, Tetsuji Rai wrote:
>
> I want to stick to debuild because it looks like a standard. I am
> recompiling with the logs.

Suit yourself. Just be advised that debuild is 'standard' because it
happens to be present and slightly more convenient than building
packages correctly (i.e., in a clean build environment).

> I will report the logs (compressed) later. So far errors are given
> only to stdout. I used
>
> debuild -b -uc -us > ~/glibc-stdout.log 2> ~/glibc-stderr.log &

That will make it difficult to correlate outputs and errors. Better
would be:

debuild -b -uc -us > ~/glibc.log 2>&1

Regards,

-Roberto

-- 
Roberto C. Sánchez
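[Editor's note: the order of the two redirections is what makes the suggested command work: `> file` must come before `2>&1`, because `2>&1` duplicates stderr onto whatever stdout points at *at that moment*. A minimal sketch of this, with made-up echo text and a temporary file standing in for the build log:

```shell
#!/bin/sh
# With '> log 2>&1', stdout is pointed at the log first, then stderr is
# duplicated onto it, so both streams land in the log, interleaved as
# emitted. With the reversed order '2>&1 > log', stderr would be
# duplicated onto the terminal (stdout's old target) before stdout is
# redirected, and errors would never reach the log.
log=$(mktemp)
{ echo "build step"; echo "build error" >&2; } > "$log" 2>&1
cat "$log"
rm -f "$log"
```

Running this prints both "build step" and "build error", confirming that a single log file captures the correlated streams.]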
Re: glibc_2.27-8 build fails on Butcher (testing)
On 11/20/18 3:30 PM, Roberto C. Sánchez wrote:
> On Tue, Nov 20, 2018 at 03:02:12PM +0900, Tetsuji Rai wrote:
>> As the tutorial ( https://wiki.debian.org/BuildingTutorial ) says, I
>> download with "apt-get source libc6" in my ~/work-glibc, and cd
>> glibc-2.27, then download build dependencies with "sudo apt-get
>> build-dep libc6", then rebuild with "debuild -b -uc -us".
>>
>> After a while, I got these errors. But I might have run "debuild -b
>> -uc -us" in ~/work-glibc/glibc-2.27/debian. Does it matter?
>>
> Please don't top post. It is considered impolite.
>
> There are few packages more complex than glibc. Building using
> 'debuild' instead of building inside of a clean pbuilder, cowbuilder,
> schroot, or other build environment is sure to invite problems.
>
> My recommendation is to use one of those approaches instead of
> 'debuild'.
>
> If you are interested in trying to get the build working with debuild
> then the entire build output will be needed. You can redirect the
> output to a file (make sure to capture both standard output and
> standard error) and then post it somewhere (it will most likely be
> too large to send to the list as an attachment).
>
> Regards,
>
> -Roberto

I want to stick to debuild because it looks like a standard. I am
recompiling with the logs. I will report the logs (compressed) later.
So far errors are given only to stdout. I used

debuild -b -uc -us > ~/glibc-stdout.log 2> ~/glibc-stderr.log &

Regards,
-Tetsuji
Re: glibc_2.27-8 build fails on Butcher (testing)
On Tue, Nov 20, 2018 at 03:02:12PM +0900, Tetsuji Rai wrote:
> As the tutorial ( https://wiki.debian.org/BuildingTutorial ) says, I
> download with "apt-get source libc6" in my ~/work-glibc, and cd
> glibc-2.27, then download build dependencies with "sudo apt-get
> build-dep libc6", then rebuild with "debuild -b -uc -us".
>
> After a while, I got these errors. But I might have run "debuild -b
> -uc -us" in ~/work-glibc/glibc-2.27/debian. Does it matter?
>
Please don't top post. It is considered impolite.

There are few packages more complex than glibc. Building using
'debuild' instead of building inside of a clean pbuilder, cowbuilder,
schroot, or other build environment is sure to invite problems.

My recommendation is to use one of those approaches instead of
'debuild'.

If you are interested in trying to get the build working with debuild
then the entire build output will be needed. You can redirect the
output to a file (make sure to capture both standard output and
standard error) and then post it somewhere (it will most likely be too
large to send to the list as an attachment).

Regards,

-Roberto

-- 
Roberto C. Sánchez
Re: Install & restore backup: what if I use LVM?
On Tue, 20 Nov 2018 at 01:22, Pascal Hambourg wrote:
> Much more than install with LVM from the start.

You can install LVM if you want, but it might be worthwhile to note
that you lose all the data if a disk fails underneath it. I had a bad
experience using LVM spanning more than one disk: when one disk
failed, everything was lost - even the data on the disk that did not
fail. Backups did help, but that experience made me decide never to
use LVM again.

Regards
Johann
-- 
Because experiencing your loyal love is better than life itself, my
lips will praise you. (Psalm 63:3)
Re: glibc_2.27-8 build fails on Butcher (testing)
As the tutorial ( https://wiki.debian.org/BuildingTutorial ) says, I
download with "apt-get source libc6" in my ~/work-glibc, and cd
glibc-2.27, then download build dependencies with "sudo apt-get
build-dep libc6", then rebuild with "debuild -b -uc -us".

After a while, I got these errors. But I might have run "debuild -b
-uc -us" in ~/work-glibc/glibc-2.27/debian. Does it matter?

Best regards,

On 11/20/18 2:30 PM, Roberto C. Sánchez wrote:
> On Tue, Nov 20, 2018 at 02:06:17PM +0900, Tetsuji Rai wrote:
>> Hi all,
>>
>> I'm trying to build several packages from their source packages,
>> however I have not succeeded in building glibc_2.27-8. It always
>> fails on ndbm.h and varargs.h (or stdarg.h) while building the
>> testsuite, although I have installed libgdbm-compat-dev (including
>> /usr/include/ndbm.h) and stdarg.h, varargs.h (in libstdc++-7-dev
>> and libstdc++-8-dev). Error messages are as follows. These appear
>> many times because they are tried many times.
>>
>> Anyone have clues? I am sure many people have done this.
>
> What commands are you using to build the package? This includes the
> commands to download the package and, if applicable, unpack/prepare
> it for the build.
>
> Regards,
>
> -Roberto
Re: glibc_2.27-8 build fails on Butcher (testing)
On Tue, Nov 20, 2018 at 02:06:17PM +0900, Tetsuji Rai wrote:
> Hi all,
>
> I'm trying to build several packages from their source packages,
> however I have not succeeded in building glibc_2.27-8. It always
> fails on ndbm.h and varargs.h (or stdarg.h) while building the
> testsuite, although I have installed libgdbm-compat-dev (including
> /usr/include/ndbm.h) and stdarg.h, varargs.h (in libstdc++-7-dev and
> libstdc++-8-dev). Error messages are as follows. These appear many
> times because they are tried many times.
>
> Anyone have clues? I am sure many people have done this.

What commands are you using to build the package? This includes the
commands to download the package and, if applicable, unpack/prepare it
for the build.

Regards,

-Roberto

-- 
Roberto C. Sánchez
glibc_2.27-8 build fails on Butcher (testing)
Hi all,

I'm trying to build several packages from their source packages,
however I have not succeeded in building glibc_2.27-8. It always fails
on ndbm.h and varargs.h (or stdarg.h) while building the testsuite,
although I have installed libgdbm-compat-dev (including
/usr/include/ndbm.h) and stdarg.h, varargs.h (in libstdc++-7-dev and
libstdc++-8-dev). Error messages are as follows. These appear many
times because they are tried many times.

Anyone have clues? I am sure many people have done this.

Thanks in advance!!

Best regards,
-Tetsuji

--- errors 1 ---
XFAIL: conform/UNIX98/ndbm.h/conform
original exit status 1
Testing <ndbm.h>
Checking whether <ndbm.h> is available... FAIL
Header not available
Compiler message:
---
/home/tetsuji/work-glibc/glibc-2.27/build-tree/amd64-libc/conform/UNIX98/ndbm.h/scratch/ndbm.h-test.c:1:10: fatal error: ndbm.h: No such file or directory
 #include <ndbm.h>
          ^~~~
compilation terminated.
---
Checking the namespace of "ndbm.h"... SKIP

--- errors 2 ---
XFAIL: conform/UNIX98/varargs.h/conform
original exit status 1
Testing <varargs.h>
---
Checking whether <varargs.h> is available... FAIL
Header not available
Compiler message:
---
In file included from /home/tetsuji/work-glibc/glibc-2.27/build-tree/amd64-libc/conform/UNIX98/varargs.h/scratch/varargs.h-test.c:1:0:
/usr/lib/gcc/x86_64-linux-gnu/7/include/varargs.h:4:2: error: #error "GCC no longer implements <varargs.h>."
 #error "GCC no longer implements <varargs.h>."
  ^
/usr/lib/gcc/x86_64-linux-gnu/7/include/varargs.h:5:2: error: #error "Revise your code to use <stdarg.h>."
 #error "Revise your code to use <stdarg.h>."
  ^
---
Checking the namespace of "varargs.h"... SKIP
Re: ssh
On Mon, Nov 19, 2018 at 12:12:50PM -0500, Michael Stone wrote:
> On Mon, Nov 19, 2018 at 09:43:29AM -0500, Jim Popovitch wrote:
>> On Mon, 2018-11-19 at 08:38 -0500, Michael Stone wrote:
>>> On Mon, Nov 19, 2018 at 08:32:09AM -0500, Greg Wooledge wrote:
>>>> If you're only going to login to the account using ssh keys, you
>>>> don't need to give it a valid password hash at all. Just put a
>>>> string of rubbish (English words qualify) in the hash field of
>>>> /etc/shadow.
>>>
>>> Don't do that. Just use a *.
>>
>> Something that's always bugged me... is there any difference between
>> using * or ! (both are valid)?
>
> ! locks the account, * is a convention that means "no password".

I should clarify that a bit: a ! locked account can't be used at all
(assuming that all login methods respect that convention) whereas the
* account can't use password authentication but may be able to use
other mechanisms like ssh keys. A completely blank field indicates an
empty password.
Re: Install & restore backup: what if I use LVM?
On 19/11/2018 at 16:34, solitone wrote:
> Another option would be: (1) Install the system as it was (i.e. with
> physical partitions); (2) Restore the backed up configuration/files;
> (3) Move to LVM.
>
> How cumbersome would point 3 be?

Much more than install with LVM from the start.
Re: discover and install specific package version
have you tried this version? It worked for me on top of debian stretch-slim: https://snapshot.debian.org/archive/debian/20180701T205743Z/pool/main/f/firefox-esr/firefox-esr_52.9.0esr-1~deb9u1_amd64.deb
Re: Install & restore backup: what if I use LVM?
Hello, On Mon, Nov 19, 2018 at 07:01:07AM +0100, solitone wrote: > Thanks to the back2l utility I have a full backup of Debian. Now I > would reinstall it and recover all the backed up files. However, I > didn’t use LVM and now I would. In this case, would the > adjustments needed from the original configuration be difficult? Probably the differences will be small and it will mostly work, however "mostly" can still result in a great deal of frustrating debugging, so personally if I were you I'd: - Install Debian again how I wanted, with LVM or whatever - Configure the services again, possibly referring to (but not simply bulk-overwriting existing directory trees with) my backups - Copy my data back into place from the backups It's not as quick as "press a button, there, it's re-imaged", but it avoids introducing new problems. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: how to backup to an encrypted usb drive? [OT: rsync metadata]
On Mon, Nov 19, 2018 at 09:50:12AM -0800, Rick Thomas wrote:
>>> On Nov 18, 2018, at 7:31 PM, Reco wrote:
>>> On Sun, Nov 18, 2018 at 11:56:27AM -0800, Rick Thomas wrote:
>>>> On 11/14/18, Reco wrote:
>>>> If you're content with losing all this metadata in your backup -
>>>> there are rsync, cpio or tar. Or all those 'backup solutions'
>>>> based on those.
>>>> On Wed, Nov 14, 2018 at 12:52:57PM -0500, Lee wrote:
>>>> Do I need all that metadata? This is for me at home so it's pretty
>>>> much a single user machine.
>>>
>>> That's for you to decide. I'd say you definitely need it for the
>>> backups of / and /var and can *probably* skip it for /home, but
>>> YMMV.
>>
>> Don't the options for rsync -aAHX preserve all the metadata? Is
>> there something besides
>
> Yep, there is at least one thing rsync loses along the way:
>
> # chattr +i /bin/ping
> # rsync -aHAX /bin/ping /tmp
> # lsattr /bin/ping
> ie--- /bin/ping
> # lsattr /tmp/ping
> -e--- /tmp/ping

> Fascinating…
> 1) Are there any other extended attributes that are not copied by
> rsync? Or is there something special about "immutable"?

Rsync should ignore any extended attribute listed at chattr(1) save
for 'e'.

> 2) Is this a bug or a feature?

Rather a lack of implementation. Some may consider it a bug, but rsync
has behaved like this for many years - see [1], for example.

> Should there be a bug report filed on this phenomenon?

Probably. The question is - who's going to write a patch that
implements such a feature?

> If not, should it be documented in the rsync(1) man page? Does that
> need a bug report?

Yes, definitely. But then again, chattr(1) is filesystem-specific. In
Debian it's customary to document corner cases in README.Debian, not
in the manpage.

Reco

[1] https://lists.samba.org/archive/rsync/2011-February/026039.html
Re: ssh
On Mon, 2018-11-19 at 12:12 -0500, Michael Stone wrote:
> On Mon, Nov 19, 2018 at 09:43:29AM -0500, Jim Popovitch wrote:
>> On Mon, 2018-11-19 at 08:38 -0500, Michael Stone wrote:
>>> On Mon, Nov 19, 2018 at 08:32:09AM -0500, Greg Wooledge wrote:
>>>> If you're only going to login to the account using ssh keys, you
>>>> don't need to give it a valid password hash at all. Just put a
>>>> string of rubbish (English words qualify) in the hash field of
>>>> /etc/shadow.
>>>
>>> Don't do that. Just use a *.
>>
>> Something that's always bugged me... is there any difference between
>> using * or ! (both are valid)?
>
> ! locks the account, * is a convention that means "no password".

Ack! Thanks!

-Jim P.
Re: how to backup to an encrypted usb drive? [OT: rsync metadata]
> On Nov 18, 2018, at 7:31 PM, Reco wrote:
>
> Hi.
>
> On Sun, Nov 18, 2018 at 11:56:27AM -0800, Rick Thomas wrote:
>> On 11/14/18, Reco wrote:
>> If you're content with losing all this metadata in your backup -
>> there are rsync, cpio or tar. Or all those 'backup solutions' based
>> on those.
>> On Wed, Nov 14, 2018 at 12:52:57PM -0500, Lee wrote:
>> Do I need all that metadata? This is for me at home so it's pretty
>> much a single user machine.
>>
>>> On Nov 14, 2018, at 10:26 AM, Reco wrote:
>>> That's for you to decide. I'd say you definitely need it for the
>>> backups of / and /var and can *probably* skip it for /home, but
>>> YMMV.
>>
>> Don't the options for rsync -aAHX preserve all the metadata? Is
>> there something besides
>
> Yep, there is at least one thing rsync loses along the way:
>
> # chattr +i /bin/ping
> # rsync -aHAX /bin/ping /tmp
> # lsattr /bin/ping
> ie--- /bin/ping
> # lsattr /tmp/ping
> -e--- /tmp/ping
>
> Reco

Fascinating…

1) Are there any other extended attributes that are not copied by
rsync? Or is there something special about "immutable"?

2) Is this a bug or a feature? Should there be a bug report filed on
this phenomenon? If not, should it be documented in the rsync(1) man
page? Does that need a bug report?

Enjoy!
Rick
Re: ssh
On Mon, Nov 19, 2018 at 09:43:29AM -0500, Jim Popovitch wrote:
> On Mon, 2018-11-19 at 08:38 -0500, Michael Stone wrote:
>> On Mon, Nov 19, 2018 at 08:32:09AM -0500, Greg Wooledge wrote:
>>> If you're only going to login to the account using ssh keys, you
>>> don't need to give it a valid password hash at all. Just put a
>>> string of rubbish (English words qualify) in the hash field of
>>> /etc/shadow.
>>
>> Don't do that. Just use a *.
>
> Something that's always bugged me... is there any difference between
> using * or ! (both are valid)?

! locks the account, * is a convention that means "no password".
Re: Install & restore backup: what if I use LVM?
> On 19 Nov 2018, at 16:59, Reco wrote:
>
> LVM requires certain kernel modules and hooks to be present in
> initramfs.
> If your current installation lacks them, I suggest you install lvm2
> before the backup to save yourself the hassle of regenerating
> initramfs after the restore.

Unfortunately I don't have the system any longer (I lost everything
while performing some risky partition resizing), so I'm too late - I
can't install lvm2 and then back up. I can only rely on a backup that
lacks those modules & hooks.

> Also, since you're using backup2l with the tar backend, you'll need
> to do something to restore all those capabilities extended
> attributes. A hint here is:
>
> grep setcap /var/lib/dpkg/info/*

Hmm... I had forgotten about this issue. I believe I'll end up
restoring just /home.
Re: pdfjoin loses web links in the PDFs
On 2018-11-19 6:58 a.m., Steve McIntyre wrote:
> Jonathan Dowland wrote:
>> On Sun, Nov 18, 2018 at 04:16:21PM -0500, Gary Dale wrote:
>>> This is frustrating. I'm running Debian/Buster (AMD64) and just
>>> finished creating web links (mostly mailto: links) in a two-part
>>> directory I created using Scribus. I exported the files to PDF on
>>> a Debian/Stretch machine because Buster uses a version of
>>> Ghostscript that breaks the PDF export in Scribus (they recently
>>> broke it on Stretch as well, but I put the older Ghostscript 9.20
>>> version on hold so I can still use Scribus).
>>>
>>> The PDF files I created worked fine individually but when I tried
>>> to merge them into a single PDF, the web links are lost. I'm not
>>> sure if this is a limitation of pdfjoin or a bug in the current
>>> implementation.
>>
>> That is frustrating. All I can suggest is experimenting with an
>> alternative PDF joiner. I believe pdftk can do it (pdftk *.pdf cat
>> output out.pdf)
>
> qpdf also works well for me, although the command line can be a
> little baroque...

It does seem a little odd to have to specify the file to be
manipulated as "empty" and then add the pages from the two files I
wanted to merge, but it did work. The links were preserved. In my case
the command line read:

qpdf --empty --pages cabinet.pdf 1-z clubs.pdf 1-z -- 2018-2019-directory.pdf

where the first two PDF files are the input and the last one is the
output.
Re: Install & restore backup: what if I use LVM?
On Mon, Nov 19, 2018 at 04:31:45PM +0100, solitone wrote:
>> On 19 Nov 2018, at 12:35, Jonathan Dowland wrote:
>>
>> On Mon, Nov 19, 2018 at 07:01:07AM +0100, solitone wrote:
>>> When I was playing with my disk's partition table I messed it up
>>> and lost everything. It was a dual boot system with macOS and
>>> Debian.
>>>
>>> Thanks to the back2l utility I have a full backup of Debian. Now I
>>> would reinstall it and recover all the backed up files. However, I
>>> didn't use LVM and now I would. In this case, would the
>>> adjustments needed from the original configuration be difficult?
>>
>> This rather depends on how back2l functions. I can't find a
>> reference to it in the Debian package repositories. Can you point
>> us at a URI that describes it?
>
> It simply results in a backup of the filesystem archived in a
> tarball. Once I reinstall Debian, I can restore the original
> configuration pulling in the original version of my files from that
> tarball. But this would work if the configuration were the same. If
> I install with LVM something would be different in terms of
> configuration, so some original config files wouldn't be right, and
> I'd need some manual adjustment. The point is: how much?

LVM requires certain kernel modules and hooks to be present in
initramfs. If your current installation lacks them, I suggest you
install lvm2 before the backup to save yourself the hassle of
regenerating initramfs after the restore.

You'll definitely need to adjust /etc/fstab, most likely
/etc/default/grub, and to update the bootloader.

Also, since you're using backup2l with the tar backend, you'll need to
do something to restore all those capabilities extended attributes. A
hint here is:

grep setcap /var/lib/dpkg/info/*

Reco
Re: Install & restore backup: what if I use LVM?
Another option would be:

(1) Install the system as it was (i.e. with physical partitions);
(2) Restore the backed up configuration/files;
(3) Move to LVM.

How cumbersome would point 3 be?
Re: Install & restore backup: what if I use LVM?
> On 19 Nov 2018, at 12:35, Jonathan Dowland wrote: > > On Mon, Nov 19, 2018 at 07:01:07AM +0100, solitone wrote: >> When I was playing with my disk's partition table I messed it up and >> lost everything. It was a dual boot system with macOS and Debian. >> >> Thanks to the back2l utility I have a full backup of Debian. Now I >> would reinstall it and recover all the backed up files. However, I >> didn’t use LVM and now I would. In this case, would the adjustments >> needed from the original configuration be difficult? > > This rather depends on how back2l functions. I can't find a reference to > it in the Debian package repositories. Can you point us at a URI that > describes it? It simply results in a backup of the filesystem archived in a tarball. Once I reinstall Debian, I can restore the original configuration pulling in the original version of my files from that tarball. But this would work if the configuration were the same. If I install with LVM something would be different in terms of configuration, so some original config files wouldn’t be right, and I’d need some manual adjustement. The point is: how much?
Re: Why has nouveau vs. NVIDIA problem not been addressed?
On 11/15/18 10:24 PM, Tom D. wrote: Thank you, Sir. I understand now. Not quite, you're still top posting. It is frowned upon greatly in these parts. -- My father, Victor Moore (Vic) used to say: "There are two Great Sins in the world... ..the Sin of Ignorance, and the Sin of Stupidity. Only the former may be overcome." R.I.P. Dad. http://linuxcounter.net/user/44256.html
Re: ssh
On Mon, 2018-11-19 at 08:38 -0500, Michael Stone wrote: > On Mon, Nov 19, 2018 at 08:32:09AM -0500, Greg Wooledge wrote: > > If you're only going to login to the account using ssh keys, you > > don't need to give it a valid password hash at all. Just put a > > string of rubbish (English words qualify) in the hash field of > > /etc/shadow. > > Don't do that. Just use a *. Something that's always bugged me... is there any difference between using * or ! (both are valid)? -Jim P.
Re: discover and install specific package version
On Sat, Nov 17, 2018 at 08:53:19AM +0100, john doe wrote: > Using Bash you could use functions or aliases: > > search_pkg() { aptitude search -F '%p %V' --disable-columns ${1}; } You probably want "$@" there (with quotes) instead of $1.
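[Editor's note: a quick illustration of why `"$@"` is preferable; the function names here are made up for the demo. Unquoted `$1` forwards only the first argument and is subject to word splitting, while quoted `"$@"` forwards every argument intact:

```shell
#!/bin/sh
# Hypothetical demo functions: the only difference is how the
# arguments are forwarded to printf.
with_dollar1() { printf '[%s]\n' $1; }     # unquoted $1: split on spaces
with_at()      { printf '[%s]\n' "$@"; }   # quoted "$@": args kept intact

with_dollar1 "libc6 dev"    # prints [libc6] and [dev] on separate lines
with_at      "libc6 dev"    # prints [libc6 dev] on one line
with_at      libc6 libgdbm  # each argument forwarded: [libc6] [libgdbm]
```

For a search wrapper like the one quoted, this matters as soon as the pattern contains spaces or the user passes extra aptitude options.]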
Re: ssh
On Mon, Nov 19, 2018 at 08:32:09AM -0500, Greg Wooledge wrote:
> On Mon, Nov 19, 2018 at 07:28:15AM +, Michael Howard wrote:
>> Don't get too hung up on it all. If the account needs login access
>> then give it. Create or use an account with a shell of your choice
>> and a secure password. You don't need to remember the password, as
>> you are using keys, so it can be ridiculously secure.
>
> If you're only going to login to the account using ssh keys, you
> don't need to give it a valid password hash at all. Just put a
> string of rubbish (English words qualify) in the hash field of
> /etc/shadow.

Don't do that. Just use a *.
Re: ssh
On Mon, Nov 19, 2018 at 07:28:15AM +, Michael Howard wrote:
> Don't get too hung up on it all.
>
> If the account needs login access then give it. Create or use an
> account with a shell of your choice and a secure password. You don't
> need to remember the password, as you are using keys, so it can be
> ridiculously secure.

If you're only going to login to the account using ssh keys, you don't
need to give it a valid password hash at all. Just put a string of
rubbish (English words qualify) in the hash field of /etc/shadow.

According to shadow(5):

    If the password field contains some string that is not a valid
    result of crypt(3), for instance ! or *, the user will not be able
    to use a unix password to log in (but the user may log in the
    system by other means).

    [...]

    A password field which starts with an exclamation mark means that
    the password is locked. The remaining characters on the line
    represent the password field before the password was locked.

So, just make sure you don't start it with a bang, and you should be
OK.
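[Editor's note: the shadow(5) rules quoted above can be condensed into a small case statement. This is an illustrative sketch (the function name and messages are made up), classifying the second field of an /etc/shadow entry:

```shell
#!/bin/sh
# Classify an /etc/shadow password field per shadow(5):
#   empty field -> no password required at all (dangerous)
#   leading '!' -> the password is locked
#   '*' (or any other non-crypt string) -> password login disabled,
#       but other mechanisms such as ssh keys may still work
classify_pw_field() {
    case "$1" in
        '')   echo "empty: passwordless login possible" ;;
        '!'*) echo "locked" ;;
        '*')  echo "no password, other auth allowed" ;;
        *)    echo "password hash (any invalid string acts like *)" ;;
    esac
}

classify_pw_field '*'            # prints: no password, other auth allowed
classify_pw_field '!$6$abc...'   # prints: locked
```

So "don't start it with a bang" corresponds to avoiding the second branch.]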
Re: latest Stretch update breaks Scribus!
On Sun, Nov 18, 2018 at 11:49:05PM -0500, Gary Dale wrote:
>> Of course, the world does not revolve around Scribus.
>
> No but it is a popular and important package that gives Linux a
> powerful publishing application.

I agree and have been very happy with Scribus in the past when I have
needed a solid publishing application. However, numerous other
applications also depend on ghostscript, both directly and indirectly.

>> Breaking existing applications is not taken lightly and the security
>> team goes to great lengths to prevent breakage altogether or to
>> minimize breakage when avoidance is not possible.
>
> There have been lots of security holes found in Ghostscript that seem
> to revolve around buffer overflows, which indicates to me that the
> Ghostscript developers are behind the times in their development
> tools. Reading the security notices about it makes me wonder what
> hasn't been found yet.

There are some applications and libraries (imagemagick is another that
immediately springs to mind) that just seem to be teeming with as yet
undiscovered security vulnerabilities. I say that because the
frequency with which new issues are reported does not seem to be
slowing down.

I think that part of it is the advances in analysis tools. For
example, many of the vulnerabilities I have seen reported and for
which I have either backported or developed fixes over the last year
or so have been found by fuzzing. That is something that was not done
10 or 20 years ago, and if it was done it was not done with the
sophistication and thoroughness seen today. Given that codebases like
ghostscript, tiff, imagemagick, and others have been around for 20
years or more in some cases it is not surprising that so many issues
are just waiting to be discovered.

> However the security holes in the 9.20 version which was used in
> Stretch/Stable until recently have been around for a long time.
> Presumably patches were made along the way so what was different
> this time?

That is not something that I can answer. However, based on my
experience with some other packages I can say that there are cases
where a vulnerability is identified and it takes a long time to
develop a proper fix. The Spectre and Meltdown vulnerabilities which
were first disclosed last year might fall into this category. Some
initial fixes were made to address the vulnerability and as time went
on, those fixes were refined to mitigate some of the performance
impact and improve on the implementation. I am not sure what the case
was with ghostscript, but it could have been something similar.

>> apt-get install ghostscript=9.20~dfsg-3.2+deb9u5
>> libgs9=9.20~dfsg-3.2+deb9u5 libgs9-common=9.20~dfsg-3.2+deb9u5
>
> I've already hunted down the packages and installed them so my
> virtual machine's version of Scribus is working again.
>
> Apparently the Scribus developers have fixed the incompatibility in
> the current development of 1.4.8 but Buster still uses 1.4.7.

It looks like 1.4.8 has not yet been released, so it might be
unrealistic to expect that it make its way into Debian at this point.
Perhaps you can contact the package maintainer to see if there is some
way you can help speed up the process of getting 1.4.8 into Debian
once it is released.

Regards,

-Roberto

-- 
Roberto C. Sánchez
Re: VMPK
On Mon, Nov 19, 2018 at 12:56:38PM +0100, Magnus Johansson wrote:
> Hello! When will VMPK 0.7.0 become available in the Debian software
> repository?

There's an outstanding bug to switch to version 0.6.2 [1]. Perhaps you
can contribute to this (or file a new bug, dunno, but referring to
this one makes sense, I guess).

The Fishing Rod [2], or how I found out: I've no idea about VMPK. But
I went to packages.debian.org [3] and searched for "VMPK" (important
is to choose "all sections" and "all suites"). Cool kids put the
search terms directly into the URL (the page teaches you, by example,
how to do that). Then I saw [4] that all suites were "on" VMPK 0.4.0.
Then I clicked on the "unstable" version [5] and there on "bug
reports" [6]. Voilà.

Why unstable? It is very important here to understand how new versions
enter Debian. They first enter "unstable". After some time, while dust
settles, they move to "testing". They stay there until they are
superseded by a new version coming from unstable or until testing
becomes, after a freeze period, "the new stable". This means: a
package version never changes in stable (yes, I crossed my fingers
behind my back: "almost never", or something). This is a promise
stable makes to you. Then there are backports and things.

Cheers

[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=870355
[2] https://en.wiktionary.org/wiki/give_a_man_a_fish_and_you_feed_him_for_a_day;_teach_a_man_to_fish_and_you_feed_him_for_a_lifetime
[3] https://packages.debian.org/
[4] https://packages.debian.org/search?keywords=vmpk&searchon=names&suite=all&section=all
[5] https://packages.debian.org/sid/vmpk
[6] https://bugs.debian.org/cgi-bin/pkgreport.cgi?pkg=vmpk;dist=unstable

-- 
tomás
Re: fetchmail
* On 2018 19 Nov 02:10 -0600, mick crane wrote:
> fetchmail for me had a problem with the ssl certificate of gmail
> which keeps changing seemingly depending on which server connect to.
> Probably is fixable but was easier to use getmail.

I don't have a gmail account so I haven't run into that. I only
receive POP3 mail at my n0nb.us domain so fetchmail still works well
for me.

- Nate

-- 
"The optimist proclaims that we live in the best of all possible
worlds. The pessimist fears this is true."

Web: http://www.n0nb.us GPG key: D55A8819 GitHub: N0NB
VMPK
Hello! When will VMPK 0.7.0 become available in the Debian software repository? Regards, Magnus Johansson
Re: pdfjoin loses web links in the PDFs
Jonathan Dowland wrote: >On Sun, Nov 18, 2018 at 04:16:21PM -0500, Gary Dale wrote: >>This is frustrating. I'm running Debian/Buster (AMD64) and just >>finished creating web links (mostly mailto: in a >>two-part directory I created using Scribus. I exported the files to >>PDF on a Debian/Stretch machine because Buster uses a version of >>Ghostscript that breaks the PDF export in Scribus (they recently broke >>it on Stretch as well, but I put the older Ghostscript 9.20 version on >>hold so I can still use Scribus). >> >>The PDF files I created worked fine individually but when I tried to >>merge them into a single PDF, the web links are lost. I'm not sure if >>this a limitation of pdfjoin or a bug in the current implementation. > >That is frustrating. All I can suggest is experimenting with an >alternative PDF joiner. I believe pdftk can do it (pdftk *pdf cat output >out.pdf) qpdf also works well for me, although the command line can be a little baroque... -- Steve McIntyre, Cambridge, UK.st...@einval.com "Further comment on how I feel about IBM will appear once I've worked out whether they're being malicious or incompetent. Capital letters are forecast." Matthew Garrett, http://www.livejournal.com/users/mjg59/30675.html
Re: pdfjoin loses web links in the PDFs
On Sun, Nov 18, 2018 at 04:16:21PM -0500, Gary Dale wrote:
> This is frustrating. I'm running Debian/Buster (AMD64) and just
> finished creating web links (mostly mailto: links) in a two-part
> directory I created using Scribus. I exported the files to PDF on a
> Debian/Stretch machine because Buster uses a version of Ghostscript
> that breaks the PDF export in Scribus (they recently broke it on
> Stretch as well, but I put the older Ghostscript 9.20 version on
> hold so I can still use Scribus).
>
> The PDF files I created worked fine individually but when I tried to
> merge them into a single PDF, the web links are lost. I'm not sure
> if this is a limitation of pdfjoin or a bug in the current
> implementation.

That is frustrating. All I can suggest is experimenting with an
alternative PDF joiner. I believe pdftk can do it (pdftk *.pdf cat
output out.pdf)

-- 
⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Jonathan Dowland
⢿⡄⠘⠷⠚⠋⠀ https://jmtd.net
⠈⠳⣄ Please do not CC me, I am subscribed to the list.
Re: Install & restore backup: what if I use LVM?
On Mon, Nov 19, 2018 at 07:01:07AM +0100, solitone wrote:
> When I was playing with my disk's partition table I messed it up and
> lost everything. It was a dual boot system with macOS and Debian.
>
> Thanks to the back2l utility I have a full backup of Debian. Now I
> would reinstall it and recover all the backed up files. However, I
> didn't use LVM and now I would. In this case, would the adjustments
> needed from the original configuration be difficult?

This rather depends on how back2l functions. I can't find a reference
to it in the Debian package repositories. Can you point us at a URI
that describes it?

-- 
⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Jonathan Dowland
⢿⡄⠘⠷⠚⠋⠀ https://jmtd.net
⠈⠳⣄ Please do not CC me, I am subscribed to the list.
Re: latest Stretch update breaks Scribus!
Gary Dale wrote: > On 2018-11-18 1:19 p.m., Roberto C. Sánchez wrote: > > On Sun, Nov 18, 2018 at 11:19:00AM -0500, Gary Dale wrote: > > > This is one of the those WTF moments. Despite the fact that Ghostscript > > > 9.25 > > > has been known to break Scribus since at least the start of the month, the > > > Stable version of Ghostscript has just been changed from 9.22 to 9.25. ... > I've already hunted down the packages and installed them so my virtual > machine's version of Scribus is working again. > > Apparently the Scribus developers have fixed the incompatibility in the > current development of 1.4.8 but Buster still uses 1.4.7. So another option would be for you to backport the fix, and another would be to just compile 1.4.8 for your system. -dsr-
Re: ssh
On 11/19/2018 8:28 AM, Michael Howard wrote:
> On 19/11/2018 02:46, Alan Taylor wrote:
>> Thanks Mike,
>>
>> I was slowly coming to that conclusion !
>> What would be best practice regarding a password for that account
>> (i.e. a system account such as backuppc that needs ssh access but
>> no shell access)?
>>
>> If I create the user with bash as the shell, I seem to have a few
>> options:
>> 1) don't set a password (i.e. no reference to password in the
>> adduser command). The man page says this results in the password
>> being "disabled". What does this actually mean for security ?
>> 2) use --disabled-password (same as 1 above ?)
>> 3) the --disabled-password option appears to be only available on
>> debian. Redhat derivatives only offer useradd which does not have
>> this switch ?
>>
>> Which would be the most secure, while still allowing ssh access ?
>
> Don't get too hung up on it all.
>
> If the account needs login access then give it. Create or use an
> account with a shell of your choice and a secure password. You don't
> need to remember the password, as you are using keys, so it can be
> ridiculously secure. A standard user can't do much harm if you don't
> give it any more privileges than it needs.

Some hints to better secure your setup:

Lock the password of the named account (1).
In '/etc/ssh/sshd_config', use a 'Match' condition (2) to only allow
public key authentication.
Further restrict the allowed commands in the authorized_keys file
(3, Format of the Authorized Keys File).

1) http://man7.org/linux/man-pages/man1/passwd.1.html
2) https://linux.die.net/man/5/sshd_config
3) https://www.ssh.com/ssh/authorized_keys/openssh#sec-Format-of-the-Authorized-Keys-File

Note that this e-mail is folded by my mailer.

-- 
John Doe
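[Editor's note: as a concrete sketch of hints 2 and 3 above (the user name `backuppc`, the key, and the pinned rsync command line are illustrative placeholders, not taken from the thread):

```
# /etc/ssh/sshd_config -- force key-only logins for one account
Match User backuppc
    PasswordAuthentication no
    AuthenticationMethods publickey

# ~backuppc/.ssh/authorized_keys -- pin this key to a single command
# and disable interactive extras
command="/usr/bin/rsync --server --sender . /srv/backup",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAA... backup-key
```

With `command=` set, whatever the client asks to run is ignored and the pinned command runs instead; the client's original request is exposed to that command in the SSH_ORIGINAL_COMMAND environment variable.]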
Re: fetchmail
On 2018-11-19 00:33, Nate Bargmann wrote:
> I installed the Sid version on this Buster installation and it works
> just fine. I simply downloaded it and installed it manually.
> Aptitude puts it in the Obsolete and Locally Created Packages
> section. Shrug.
>
> - Nate

fetchmail for me had a problem with the ssl certificate of gmail,
which keeps changing, seemingly depending on which server it connects
to. It's probably fixable but it was easier to use getmail.

mick
-- 
Key ID 4BFEBB31