Re: where are the crontab files in Trixie?
On 2024-02-27 14:13, Jeffrey Walton wrote:
On Tue, Feb 27, 2024 at 2:12 PM Gary Dale wrote:
On 2024-02-27 10:25, Kushal Kumaran wrote:
On Tue, Feb 27 2024 at 10:15:59 AM, Gary Dale wrote:

[...] Can anyone explain how Trixie is handling crontabs now?

This behavior has existed forever. I'm on bookworm, though, so no idea if anything is changing in trixie.

The debian wiki suggests that the handling of cron/anacron is evolving.

That sounds like a euphemism for "being killed off" by Systemd and its timers.

Jeff

There are a lot of things going on these days that don't seem quite ready for prime time. Examples include systemd networking, which remains woefully ill-equipped to deal with bonding and wifi. Wayland may have its good points, but when I try it, it messes up my desktop. I have my desktop scaled to 150% to help with my old eyes, but Wayland doesn't seem to apply the scaling to text, which is where I really need it. I'm hoping Plasma 6 will address the Wayland issues, at least.
Re: where are the crontab files in Trixie?
On 2024-02-27 10:32, Greg Wooledge wrote:
On Tue, Feb 27, 2024 at 10:15:59AM -0500, Gary Dale wrote:

I'm running Debian/Trixie on an AMD64 system. I have an old wifi adapter that Linux has problems with that works once I run:

/usr/sbin/modprobe brcmfmac
echo 13b1 0bdc > /sys/bus/usb/drivers/brcmfmac/new_id

However when I add those lines to the root's crontab using "# crontab -e" as

@reboot /usr/sbin/modprobe brcmfmac
@reboot echo 13b1 0bdc > /sys/bus/usb/drivers/brcmfmac/new_id

the second line fails. I get an e-mail stating "/bin/sh: 1: cannot create /sys/bus/usb/drivers/brcmfmac/new_id: Directory nonexistent"

Having two separate @reboot lines might run them both in parallel, rather than sequentially. It might be better to combine them into one shell command, or one script. Something like this, perhaps:

@reboot /usr/sbin/modprobe brcmfmac && echo 13b1 0bdc > /sys/bus/usb/drivers/brcmfmac/new_id

Or put the commands into a shell script, then run the script from crontab.

Yep. That works. Thanks.
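The script variant Greg mentions can be sketched like this; the path /usr/local/sbin/brcmfmac-usbid.sh is my assumption, not something given in the thread:

```shell
#!/bin/sh
# Hypothetical /usr/local/sbin/brcmfmac-usbid.sh: load the driver, then
# register the adapter's USB ID once the driver's sysfs directory exists.
# set -e makes the echo run only if modprobe succeeded.
set -e
/usr/sbin/modprobe brcmfmac
echo 13b1 0bdc > /sys/bus/usb/drivers/brcmfmac/new_id
```

With the script marked executable, the two @reboot entries collapse to one: @reboot /usr/local/sbin/brcmfmac-usbid.sh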
Re: where are the crontab files in Trixie?
On 2024-02-27 10:25, Kushal Kumaran wrote:
On Tue, Feb 27 2024 at 10:15:59 AM, Gary Dale wrote:

I'm running Debian/Trixie on an AMD64 system. I have an old wifi adapter that Linux has problems with that works once I run:

/usr/sbin/modprobe brcmfmac
echo 13b1 0bdc > /sys/bus/usb/drivers/brcmfmac/new_id

However when I add those lines to the root's crontab using "# crontab -e" as

@reboot /usr/sbin/modprobe brcmfmac
@reboot echo 13b1 0bdc > /sys/bus/usb/drivers/brcmfmac/new_id

the second line fails. I get an e-mail stating "/bin/sh: 1: cannot create /sys/bus/usb/drivers/brcmfmac/new_id: Directory nonexistent" I'm not sure if the modprobe is working or if the module is being loaded without it. It's likely that debian detects the need for the module and loads it.

Is it possible that the command is being run before the module is loaded? Consider putting both into a single command, perhaps by writing a script that does both things.

Anyway, that got me down the rabbit hole to try to find where the crontab file is.

ls -l /root/cron*
ls: cannot access '/root/cron*': No such file or directory

also

# whereis crontab
crontab: /usr/bin/crontab /etc/crontab /usr/share/man/man1/crontab.1.gz /usr/share/man/man5/crontab.5.gz

so it's not in the location that you'd expect. Nor can I find it in /etc/. The various cron files there don't contain the lines I'm looking for.

Editing/creating crontab files using "crontab -e" creates it in /var/spool/cron/crontabs.

Can anyone explain how Trixie is handling crontabs now?

This behavior has existed forever. I'm on bookworm, though, so no idea if anything is changing in trixie.

The debian wiki suggests that the handling of cron/anacron is evolving.
Re: where are the crontab files in Trixie?
On 2024-02-27 10:26, The Wanderer wrote:
On 2024-02-27 at 10:15, Gary Dale wrote:

Anyway, that got me down the rabbit hole to try to find where the crontab file is.

ls -l /root/cron*
ls: cannot access '/root/cron*': No such file or directory

also

# whereis crontab
crontab: /usr/bin/crontab /etc/crontab /usr/share/man/man1/crontab.1.gz /usr/share/man/man5/crontab.5.gz

so it's not in the location that you'd expect.

I'm not sure whereis is suitable for finding things like this. As its man page states, it's for finding "the binary, source, and manual page files for a command" - not the data files which the command may work with.

locate crontab also fails to find it, as does find / -name crontab

Nor can I find it in /etc/. The various cron files there don't contain the lines I'm looking for.

Can anyone explain how Trixie is handling crontabs now?

The first paragraph of crontab(1) states:

Each user can have their own crontab, and though these are files in /var/spool/cron/crontabs, they are not intended to be edited directly.

So, while I don't use per-user crontabs myself and so don't have experience with this personally, I would suggest looking in that directory - but not necessarily editing the files there, except via 'crontab -e' as you have already done.

Thanks. I missed that when I was reading the comments. I need to enlarge the text more, I guess.
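To confirm where "crontab -e" actually stores the file, a quick check along the lines suggested above might look like this (run as root; output varies per system):

```shell
# Per-user crontabs live under /var/spool/cron/crontabs on Debian.
ls -l /var/spool/cron/crontabs
# Read a crontab with crontab -l rather than editing the spool file directly.
crontab -l -u root
```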
where are the crontab files in Trixie?
I'm running Debian/Trixie on an AMD64 system. I have an old wifi adapter that Linux has problems with that works once I run:

/usr/sbin/modprobe brcmfmac
echo 13b1 0bdc > /sys/bus/usb/drivers/brcmfmac/new_id

However when I add those lines to the root's crontab using "# crontab -e" as

@reboot /usr/sbin/modprobe brcmfmac
@reboot echo 13b1 0bdc > /sys/bus/usb/drivers/brcmfmac/new_id

the second line fails. I get an e-mail stating "/bin/sh: 1: cannot create /sys/bus/usb/drivers/brcmfmac/new_id: Directory nonexistent"

I'm not sure if the modprobe is working or if the module is being loaded without it. It's likely that debian detects the need for the module and loads it.

Anyway, that got me down the rabbit hole to try to find where the crontab file is.

ls -l /root/cron*
ls: cannot access '/root/cron*': No such file or directory

also

# whereis crontab
crontab: /usr/bin/crontab /etc/crontab /usr/share/man/man1/crontab.1.gz /usr/share/man/man5/crontab.5.gz

so it's not in the location that you'd expect. Nor can I find it in /etc/. The various cron files there don't contain the lines I'm looking for.

However, running crontab -e as root definitely shows the file I expect to see. Specifically:

# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h  dom mon dow   command
@reboot /usr/sbin/modprobe brcmfmac
@reboot echo 13b1 0bdc > /sys/bus/usb/drivers/brcmfmac/new_id

Looking at systemd-timers doesn't show anything obvious either. Can anyone explain how Trixie is handling crontabs now?
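Since the thread wonders whether systemd timers are taking over from cron: a @reboot job maps more naturally to a oneshot service than to a timer. This is only a sketch under that assumption; the unit name is hypothetical and nothing in the thread confirms this setup:

```ini
# /etc/systemd/system/brcmfmac-usbid.service  (hypothetical unit name)
[Unit]
Description=Register legacy wifi adapter USB ID with brcmfmac
After=systemd-modules-load.service

[Service]
Type=oneshot
# Multiple ExecStart lines run sequentially for Type=oneshot, avoiding
# the parallel-@reboot ordering problem discussed in this thread.
ExecStart=/usr/sbin/modprobe brcmfmac
ExecStart=/bin/sh -c 'echo 13b1 0bdc > /sys/bus/usb/drivers/brcmfmac/new_id'

[Install]
WantedBy=multi-user.target
```

It would be enabled once with "systemctl enable brcmfmac-usbid.service" and then run at every boot.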
Re: running Jami in Trixie - possible locale issue
On 2024-02-26 22:47, Greg Wooledge wrote:
On Mon, Feb 26, 2024 at 10:10:45PM -0500, Gremlin wrote:
On 2/26/24 20:28, Gary Dale wrote:

$ locale
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
LANG=iu_CA.UTF-8
LANGUAGE=en_GB
LC_CTYPE="iu_CA.UTF-8"
LC_NUMERIC=en_CA.UTF-8
LC_TIME=en_CA.UTF-8
LC_COLLATE=en_CA.UTF-8
LC_MONETARY=en_CA.UTF-8
LC_MESSAGES="iu_CA.UTF-8"
LC_PAPER="iu_CA.UTF-8"
LC_NAME="iu_CA.UTF-8"
LC_ADDRESS="iu_CA.UTF-8"
LC_TELEPHONE="iu_CA.UTF-8"
LC_MEASUREMENT=en_CA.UTF-8
LC_IDENTIFICATION="iu_CA.UTF-8"
LC_ALL=

$ locale -a
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
C
C.utf8
en_CA.utf8
en_US.utf8
fr_CA.utf8
POSIX

Find out where LC_CTYPE and LC_MESSAGES are being set; they need to be changed.

No, you're not reading it correctly. Look at LANG. Look at the double quotes around LC_CTYPE and LC_MESSAGES (among others). LC_CTYPE and LC_MESSAGES are *not* set. They are deduced from LANG. It's LANG that has the weird setting. All of the other iu_CA entries are double-quoted, so they are derived from it.

If it was me, I would set /etc/default/locale to

# File generated by update-locale
LANG=C.UTF-8

and remove all references/assignments to any LC_ in all shell config files, then reboot and do a locale -a

Rebooting doesn't do anything useful here. Simply logging out and back in would be sufficient. But there are two points of view here:

1) Why is Gary using locales that are not generated?
2) Why is Gary using *these specific* locales?

I think you're approaching it from the point of view of "your settings are wrong, but you don't know where the settings are coming from, so find out, and fix them". Which is one valid POV.

Another valid POV is "the settings are set the way Gary wants them, but the locales aren't generated, so generate them, and then it'll work". Only Gary can tell us which of these is the right approach. Maybe he's a fluent Inuktitut speaker. All I can say is that it's hard to believe that someone would *accidentally* have LANG set to iu_CA.UTF-8. Usually that's the kind of thing one would remember doing.

The only unusual thing I've done was trying to set the locale to en_CA rather than en_US. However, my installation dates back a long time and Linux has changed a lot over the years. At one point, I believe, support for Canadian English was spotty, so I had an en_GB locale added. The iu_CA is weird and seems to vanish when I set the default locale to C. But no, I've never gone beyond dpkg-reconfigure locales and the GUI settings for locales - other than yesterday trying to force en_CA in .bash_profile, which I have now removed.
Re: running Jami in Trixie - possible locale issue
On 2024-02-26 22:10, Gremlin wrote:
On 2/26/24 20:28, Gary Dale wrote:
On 2024-02-26 17:31, Gremlin wrote:
On 2/26/24 17:18, Gary Dale wrote:
On 2024-02-26 16:03, Gremlin wrote:
On 2/26/24 14:36, Gary Dale wrote:

I'm running Debian/Trixie on an AMD64 system. I've installed jami from testing but it fails to start. When I run it from the command line, I get:

$ jami &
[1] 7804
$ Using Qt runtime version: 6.4.2
"notify server name: Plasma, vendor: KDE, version: 5.27.10, spec: 1.2"
"Using locale: en_GB"
terminate called after throwing an instance of 'std::runtime_error'
what(): locale::facet::_S_create_c_locale name not valid
[1]+ Aborted (core dumped) jami
garydale@transponder:~/mnt/archives/2024/Lions Cl

There might be something wrong with my locales but dpkg-reconfigure locales doesn't fix it. After running it, I still get this output:

$ locale
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
LANG=iu_CA.UTF-8
LANGUAGE=en_GB
LC_CTYPE="iu_CA.UTF-8"
LC_NUMERIC=en_CA.UTF-8
LC_TIME=en_CA.UTF-8
LC_COLLATE=en_CA.UTF-8
LC_MONETARY=en_CA.UTF-8
LC_MESSAGES="iu_CA.UTF-8"
LC_PAPER="iu_CA.UTF-8"
LC_NAME="iu_CA.UTF-8"
LC_ADDRESS="iu_CA.UTF-8"
LC_TELEPHONE="iu_CA.UTF-8"
LC_MEASUREMENT=en_CA.UTF-8
LC_IDENTIFICATION="iu_CA.UTF-8"
LC_ALL=

Please note that I have not selected iu_CA.utf8 nor en_GB in my locales. Any ideas on how to fix this? Thanks.

Edit /etc/locale.gen and enable the locale(s) you wish to use. Then as root:

locale-gen
dpkg-reconfigure locales

Nope. /etc/locale.gen was already correct. Running the commands then rebooting leaves me with the same error messages. I also set up a ~/.bash_profile to set LANG to en_CA.UTF-8 but that also had no effect. The exact contents are:

LANG="en_CA.UTF-8"
export LANG

You are making a mess; when about to make a mess, stop until you have researched your issue. Start at the beginning, not at the end. Did you reboot, or log out and log in? dpkg-reconfigure locales is supposed to set /etc/default/locale correctly; it runs update-locale, if I remember correctly.

cat /etc/locale.gen
# This file lists locales that you wish to have built. You can find a list
# of valid supported locales at /usr/share/i18n/SUPPORTED, and you can add
# user defined locales to /usr/local/share/i18n/SUPPORTED. If you change
# this file, you need to rerun locale-gen.
# C.UTF-8 UTF-8
# aa_DJ ISO-8859-1
^^ snip

cat /etc/default/locale
# File generated by update-locale
LANG=C.UTF-8

locale -a

I'm not making a mess, I'm trying to fix an existing mess. And yes, I've rebooted so many times today that I felt like I was running Windows.

Sure you have made a mess; the debian installer didn't select locales and assign them at random. I am thinking the following will BARF also:

localectl list-locales

Sorry, but I've never touched locales except through apt/dpkg. I think the problem more likely relates to older locales not being properly removed by the upgrade/modification processes.

$ localectl list-locales
C.UTF-8
en_CA.UTF-8
en_US.UTF-8
fr_CA.UTF-8

cat /etc/default/locale
LANG=en_CA.UTF-8
LANGUAGE=en_CA:en
LC_TIME=en_CA.UTF-8

# locale -a
C
C.utf8
en_CA.utf8
en_US.utf8
fr_CA.utf8
POSIX

Also:

$ locale -a
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
C
C.utf8
en_CA.utf8
en_US.utf8
fr_CA.utf8
POSIX

Find out where LC_CTYPE and LC_MESSAGES are being set; they need to be changed. If it was me, I would set /etc/default/locale to

# File generated by update-locale
LANG=C.UTF-8

and remove all references/assignments to any LC_ in all shell config files, then reboot and do a locale -a

Existing file before doing any changes:

$ cat /etc/default/locale
LANG=en_CA.UTF-8
LANGUAGE=en_CA:en
LC_TIME=en_CA.UTF-8

I have no idea where LC_CTYPE and LC_MESSAGES are being set. Nor do I understand why LANG should be set to C rather than en_CA. However, when I made that change and rebooted, the errors vanished.

$ locale -a
C
C.utf8
en_CA.utf8
en_US.utf8
fr_CA.utf8
POSIX

$ locale
LANG=en_US.UTF-8
LANGUAGE=en_US
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC=en_CA.UTF-8
LC_TIME=en_CA.UTF-8
LC_COLLATE=en_CA.UTF-8
LC_MONETARY=en_CA.UTF-8
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT=en_CA.UTF-8
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=

I can now successfully run jami! Thanks.
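For reference, the repair sequence discussed in this thread can be condensed into a short root session. This is a sketch only; en_CA.UTF-8 stands in for whichever locales are actually wanted:

```shell
# Uncomment the wanted locale in /etc/locale.gen (idempotent edit).
sed -i 's/^# *\(en_CA.UTF-8 UTF-8\)/\1/' /etc/locale.gen
locale-gen                       # rebuild the compiled locales
update-locale LANG=en_CA.UTF-8   # rewrites /etc/default/locale
# Log out and back in (no reboot needed), then verify:
locale -a
```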
Re: running Jami in Trixie - possible locale issue
On 2024-02-26 21:29, Max Nikulin wrote:

env | grep 'LC_\|LANG'
systemctl --user show-environment | grep 'LC_\|LANG'

$ env | grep 'LC_\|LANG'
LANGUAGE=en_GB
LC_MONETARY=en_CA.UTF-8
LANG=iu_CA.UTF-8
LC_MEASUREMENT=en_CA.UTF-8
LC_TIME=en_CA.UTF-8
LC_COLLATE=en_CA.UTF-8
LC_NUMERIC=en_CA.UTF-8

$ systemctl --user show-environment | grep 'LC_\|LANG'
LANG=iu_CA.UTF-8
LANGUAGE=en_GB
LC_TIME=en_CA.UTF-8
LC_COLLATE=en_CA.UTF-8
LC_MEASUREMENT=en_CA.UTF-8
LC_MONETARY=en_CA.UTF-8
LC_NUMERIC=en_CA.UTF-8

They agree. The en_GB seems to be coming from Plasma 5's Region & Language settings. However, I see the message that it is "unsupported", which seems appropriate. When I change it to American English, the en_GB disappears from the available settings. When I try to "Add More...", I'm only given the options of "C" and "American English", neither of which I want. However, the Spellcheck options do allow for English (Canada).
Re: running Jami in Trixie - possible locale issue
On 2024-02-26 20:43, Greg Wooledge wrote:
On Mon, Feb 26, 2024 at 08:28:01PM -0500, Gary Dale wrote:

$ locale
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
LANG=iu_CA.UTF-8
LANGUAGE=en_GB
LC_CTYPE="iu_CA.UTF-8"
LC_NUMERIC=en_CA.UTF-8
LC_TIME=en_CA.UTF-8
LC_COLLATE=en_CA.UTF-8
LC_MONETARY=en_CA.UTF-8
LC_MESSAGES="iu_CA.UTF-8"
LC_PAPER="iu_CA.UTF-8"
LC_NAME="iu_CA.UTF-8"
LC_ADDRESS="iu_CA.UTF-8"
LC_TELEPHONE="iu_CA.UTF-8"
LC_MEASUREMENT=en_CA.UTF-8
LC_IDENTIFICATION="iu_CA.UTF-8"
LC_ALL=

You've got three different locales mentioned here:

iu_CA.UTF-8
en_GB
en_CA.UTF-8

# locale -a
C
C.utf8
en_CA.utf8
en_US.utf8
fr_CA.utf8
POSIX

Out of the three that you're trying to use, only one has been generated. Either generate the two that you're missing, or stop using them.

I'm trying to stop using them. That's the point. How do I get rid of them? They show up no matter how many times I reconfigure my locales.
Re: running Jami in Trixie - possible locale issue
On 2024-02-26 17:31, Gremlin wrote:
On 2/26/24 17:18, Gary Dale wrote:
On 2024-02-26 16:03, Gremlin wrote:
On 2/26/24 14:36, Gary Dale wrote:

I'm running Debian/Trixie on an AMD64 system. I've installed jami from testing but it fails to start. When I run it from the command line, I get:

$ jami &
[1] 7804
$ Using Qt runtime version: 6.4.2
"notify server name: Plasma, vendor: KDE, version: 5.27.10, spec: 1.2"
"Using locale: en_GB"
terminate called after throwing an instance of 'std::runtime_error'
what(): locale::facet::_S_create_c_locale name not valid
[1]+ Aborted (core dumped) jami
garydale@transponder:~/mnt/archives/2024/Lions Cl

There might be something wrong with my locales but dpkg-reconfigure locales doesn't fix it. After running it, I still get this output:

$ locale
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
LANG=iu_CA.UTF-8
LANGUAGE=en_GB
LC_CTYPE="iu_CA.UTF-8"
LC_NUMERIC=en_CA.UTF-8
LC_TIME=en_CA.UTF-8
LC_COLLATE=en_CA.UTF-8
LC_MONETARY=en_CA.UTF-8
LC_MESSAGES="iu_CA.UTF-8"
LC_PAPER="iu_CA.UTF-8"
LC_NAME="iu_CA.UTF-8"
LC_ADDRESS="iu_CA.UTF-8"
LC_TELEPHONE="iu_CA.UTF-8"
LC_MEASUREMENT=en_CA.UTF-8
LC_IDENTIFICATION="iu_CA.UTF-8"
LC_ALL=

Please note that I have not selected iu_CA.utf8 nor en_GB in my locales. Any ideas on how to fix this? Thanks.

Edit /etc/locale.gen and enable the locale(s) you wish to use. Then as root:

locale-gen
dpkg-reconfigure locales

Nope. /etc/locale.gen was already correct. Running the commands then rebooting leaves me with the same error messages. I also set up a ~/.bash_profile to set LANG to en_CA.UTF-8 but that also had no effect. The exact contents are:

LANG="en_CA.UTF-8"
export LANG

You are making a mess; when about to make a mess, stop until you have researched your issue. Start at the beginning, not at the end. Did you reboot, or log out and log in? dpkg-reconfigure locales is supposed to set /etc/default/locale correctly; it runs update-locale, if I remember correctly.

cat /etc/locale.gen
# This file lists locales that you wish to have built. You can find a list
# of valid supported locales at /usr/share/i18n/SUPPORTED, and you can add
# user defined locales to /usr/local/share/i18n/SUPPORTED. If you change
# this file, you need to rerun locale-gen.
# C.UTF-8 UTF-8
# aa_DJ ISO-8859-1
^^ snip
Re: running Jami in Trixie - possible locale issue
On 2024-02-26 16:03, Gremlin wrote:
On 2/26/24 14:36, Gary Dale wrote:

I'm running Debian/Trixie on an AMD64 system. I've installed jami from testing but it fails to start. When I run it from the command line, I get:

$ jami &
[1] 7804
$ Using Qt runtime version: 6.4.2
"notify server name: Plasma, vendor: KDE, version: 5.27.10, spec: 1.2"
"Using locale: en_GB"
terminate called after throwing an instance of 'std::runtime_error'
what(): locale::facet::_S_create_c_locale name not valid
[1]+ Aborted (core dumped) jami
garydale@transponder:~/mnt/archives/2024/Lions Cl

There might be something wrong with my locales but dpkg-reconfigure locales doesn't fix it. After running it, I still get this output:

$ locale
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
LANG=iu_CA.UTF-8
LANGUAGE=en_GB
LC_CTYPE="iu_CA.UTF-8"
LC_NUMERIC=en_CA.UTF-8
LC_TIME=en_CA.UTF-8
LC_COLLATE=en_CA.UTF-8
LC_MONETARY=en_CA.UTF-8
LC_MESSAGES="iu_CA.UTF-8"
LC_PAPER="iu_CA.UTF-8"
LC_NAME="iu_CA.UTF-8"
LC_ADDRESS="iu_CA.UTF-8"
LC_TELEPHONE="iu_CA.UTF-8"
LC_MEASUREMENT=en_CA.UTF-8
LC_IDENTIFICATION="iu_CA.UTF-8"
LC_ALL=

Please note that I have not selected iu_CA.utf8 nor en_GB in my locales. Any ideas on how to fix this? Thanks.

Edit /etc/locale.gen and enable the locale(s) you wish to use. Then as root:

locale-gen
dpkg-reconfigure locales

Nope. /etc/locale.gen was already correct. Running the commands then rebooting leaves me with the same error messages. I also set up a ~/.bash_profile to set LANG to en_CA.UTF-8 but that also had no effect. The exact contents are:

LANG="en_CA.UTF-8"
export LANG
running Jami in Trixie - possible locale issue
I'm running Debian/Trixie on an AMD64 system. I've installed jami from testing but it fails to start. When I run it from the command line, I get:

$ jami &
[1] 7804
$ Using Qt runtime version: 6.4.2
"notify server name: Plasma, vendor: KDE, version: 5.27.10, spec: 1.2"
"Using locale: en_GB"
terminate called after throwing an instance of 'std::runtime_error'
what(): locale::facet::_S_create_c_locale name not valid
[1]+ Aborted (core dumped) jami
garydale@transponder:~/mnt/archives/2024/Lions Cl

There might be something wrong with my locales but dpkg-reconfigure locales doesn't fix it. After running it, I still get this output:

$ locale
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
LANG=iu_CA.UTF-8
LANGUAGE=en_GB
LC_CTYPE="iu_CA.UTF-8"
LC_NUMERIC=en_CA.UTF-8
LC_TIME=en_CA.UTF-8
LC_COLLATE=en_CA.UTF-8
LC_MONETARY=en_CA.UTF-8
LC_MESSAGES="iu_CA.UTF-8"
LC_PAPER="iu_CA.UTF-8"
LC_NAME="iu_CA.UTF-8"
LC_ADDRESS="iu_CA.UTF-8"
LC_TELEPHONE="iu_CA.UTF-8"
LC_MEASUREMENT=en_CA.UTF-8
LC_IDENTIFICATION="iu_CA.UTF-8"
LC_ALL=

Please note that I have not selected iu_CA.utf8 nor en_GB in my locales. Any ideas on how to fix this? Thanks.
Re: Can't list root directory
On 2024-02-01 02:37, Loren M. Lang wrote:
On January 31, 2024 1:28:37 PM PST, hw wrote:
On Wed, 2024-01-31 at 09:27 -0500, Gary Dale wrote:
On 2024-01-30 15:54, hw wrote:
On Mon, 2024-01-29 at 11:42 -0500, Gary Dale wrote:

I'm running Debian/Trixie on an AMD64 workstation. I've lost the ability to see the root directory even when I am logged in as root (su -). This has been happening intermittently for several months. I initially thought it might be related to a failing NVME drive that was part of a RAID1 array that is mounted as "/" but I replaced the device and the problem is still happening. [...]

What happens when you put the device you replaced back?

How could putting a known-failing device back in help? The problem existed before I replaced it and continues to exist after the replacement.

It sounded like you were able to list the root directory (at least sometimes) before you did the replacement. Manually failing the device (perhaps after adding it back first) could make a difference. I've seen such indefinite hangs only when an NFS share has become unreachable after it had been mounted. You could use clonezilla to make a copy and then perhaps convert the file system to btrfs.

Do you still have the problem when you remove one of the NVME storage things? Perhaps you have the equivalent of a bad SATA cable, or the mainboard doesn't like it when you access two of those at the same time, or something like that. Even simple network cables can behave very strangely, and NVME may be a bit more complicated than that.

Running fsck on every boot to work around an issue like this is certainly a bad idea. Doesn't fsck report anything? If it really makes a difference in itself, rather than creating some side effect that leads to the root directory being readable, it should report something. Perhaps you need to increase its verbosity. If there's no report, then it would look like a side effect and raise the question of what side effect it might be.

Does fsck run before the RAID has been brought up, or after? Is the RAID up when booting is completed? What does mdadm say about the device(s)? Can you still list the root directory when you manually fail either drive? What exactly are the circumstances under which you can and cannot list the root directory? You need to do some investigating and ask questions like those ...

Also, instead of doing "ls -l /", which will stat() every child folder under root, try "/bin/ls -f /" and see if that is successful. That will only do a readdir() on root itself. Also, it might be interesting to get a log of "strace ls -l /" to confirm exactly where the hang happens.

-Loren

Thanks Loren. /bin/ls -f works. The strace shows the hang is on /keybase. The strace did a really bad hang - ctrl-C wouldn't kill it. I've set the fsck count to 1 again, so I can reboot and take a look at it.
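Loren's two diagnostics are worth keeping as a pair, since they split the problem cleanly (the /keybase culprit is specific to this thread):

```shell
/bin/ls -f /    # readdir() only: lists entry names without touching them,
                # so it succeeds even when one entry's mount is dead
strace ls -l /  # ls -l stat()s each entry; the last syscall printed before
                # the hang names the unresponsive path (here, /keybase)
```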
Re: Can't list root directory
On 2024-01-31 12:02, Max Nikulin wrote:
On 29/01/2024 23:42, Gary Dale wrote:

"ls -l /" just hangs

It may dereference symlinks, call stat, etc. to colorize output. May it happen that you have automount points or something related to network mounts? Does "echo /*" hang? Even the bash prompt may do some funny stuff. I would try it from "dash". Can you install strace? E.g. copy files while booted from a live media.

Thanks everyone for the suggestions. I'll retune the array to not fsck every boot and see if the problem recurs so I can try your suggestions.
Re: Can't list root directory
On 2024-01-29 12:55, Hans wrote:

Hi Gary,

before losing any data, I suggest booting from a livefile Linux. Please use a modern livefile like Knoppix or Kali-Linux. If it is not a BIOS problem, you should see the device again and be able to mount it. If /root is on a separate partition, you can do some filesystem checks, like e2fsck or else.

And: most important, with a livefile system you can mount an external harddrive and back up all files. Thus, even when /dev/nvme*** has died or is partly broken, you can maybe restore /root on another partition.

Second: please check ACLs. Although I do not believe they are the reason for this, it is worth a look. Maybe you or someone else changed them accidentally.

Third idea: is the harddrive full? In the past I had the problem of not being able to do anything. The reason: my harddrive was completely full (some temporary file was the cause). Deleting this big file was the trick.

Just some ideas; maybe they help. Good luck!

Best
Hans

There is no problem seeing the root folder when I boot from a live distro. fsck never finds any significant issue. An ACL issue would be permanent; this comes and goes. I actually doubled the size of the root device when I put in the new NVME drive. When I set up the RAID array, I'd bought a 500G second drive to mirror the 256G original drive. When I replaced the 256G drive, I was able to expand the array to 500G (less a small amount for the EFI partition). The partition has lots of free space. As I said, running an fsck seems to fix the issue temporarily. I now run an fsck on every boot.
Re: Can't list root directory
On 2024-01-29 11:42, Gary Dale wrote: I'm running Debian/Trixie on an AMD64 workstation. I've lost the ability to see the root directory even when I am logged in as root (su -). This has been happening intermittently for several months. I initially thought it might be related to a failing NVMe drive that was part of a RAID1 array that is mounted as "/" but I replaced the device and the problem is still happening. I had been able to fix it by booting to SystemRescue and running an fsck on the device but it didn't work this time. The device checks out OK (even when using fsck /dev/mdX -f) but I still can't list the root. "ls -l /" just hangs, as do any attempts to see the root directory in a graphical file manager. In Dolphin this means there is nothing in the folders - and since that is the default starting point I have to manually enter a folder name (e.g. /home/me) in the location bar to be able to see anything - but even then the folders panel remains empty. Even running commands like df -h hang because they can't access the root folder. However the system is otherwise running normally. Strangely, in the past simply booting to a rescue shell then exiting would also work. I'd usually try to do an fsck on the raid device but that would always fail because it was mounted. The only thing I noticed that was unusual was that I rebooted after installing the latest Trixie updates this morning. That took about 10 minutes to shut down - 6 of which were spent waiting for a drkonqi process to finish. There was also a systemd message really late in the shutdown about /dev/md0 but that's not the root device. I'm used to Linux taking its time to shut down lately so I don't think this was related. The systemd shutdown just seems to be easily delayed. Any ideas on how I can restore my ability to see the root directory? OK, got it working again. I used tune2fs to force an fsck on every boot. This being an NVMe device, it's barely noticeable.
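The tune2fs change mentioned above can be sketched as follows (assuming an ext4 filesystem on the RAID device; /dev/md1 is a placeholder for the actual root array):

```shell
# A maximum mount count of 1 makes e2fsck check the filesystem at
# every boot, since the mount count always exceeds the maximum.
tune2fs -c 1 /dev/md1
# Confirm the new setting:
tune2fs -l /dev/md1 | grep -i 'mount count'
```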
Re: Can't list root directory
On 2024-01-30 15:54, hw wrote: On Mon, 2024-01-29 at 11:42 -0500, Gary Dale wrote: I'm running Debian/Trixie on an AMD64 workstation. I've lost the ability to see the root directory even when I am logged in as root (su -). This has been happening intermittently for several months. I initially thought it might be related to failing NVME drive that was part of a RAID1 array that is mounted as "/" but I replaced the device and the problem is still happening. [...] What happens when you put the device you replaced back? How could putting a known-failing device back in help? The problem existed before I replaced it and continues to exist after the replacement.
Can't list root directory
I'm running Debian/Trixie on an AMD64 workstation. I've lost the ability to see the root directory even when I am logged in as root (su -). This has been happening intermittently for several months. I initially thought it might be related to a failing NVMe drive that was part of a RAID1 array that is mounted as "/" but I replaced the device and the problem is still happening. I had been able to fix it by booting to SystemRescue and running an fsck on the device but it didn't work this time. The device checks out OK (even when using fsck /dev/mdX -f) but I still can't list the root. "ls -l /" just hangs, as do any attempts to see the root directory in a graphical file manager. In Dolphin this means there is nothing in the folders - and since that is the default starting point I have to manually enter a folder name (e.g. /home/me) in the location bar to be able to see anything - but even then the folders panel remains empty. Even running commands like df -h hang because they can't access the root folder. However the system is otherwise running normally. Strangely, in the past simply booting to a rescue shell then exiting would also work. I'd usually try to do an fsck on the raid device but that would always fail because it was mounted. The only thing I noticed that was unusual was that I rebooted after installing the latest Trixie updates this morning. That took about 10 minutes to shut down - 6 of which were spent waiting for a drkonqi process to finish. There was also a systemd message really late in the shutdown about /dev/md0 but that's not the root device. I'm used to Linux taking its time to shut down lately so I don't think this was related. The systemd shutdown just seems to be easily delayed. Any ideas on how I can restore my ability to see the root directory?
Is there a problem with Linux-image-6.1.0-16?
Several days ago my main server upgraded to kernel 6.1.0-16 but various other devices that are also running Bookworm seem stuck at 6.1.0-13. They are all using the same architecture. Some are using the same mirror as the server that upgraded. I haven't set any special policies on upgrades. Can anyone explain what's going on?
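One common cause, which may or may not apply here: a new kernel ABI (6.1.0-16 vs 6.1.0-13) arrives as a new package name, and plain `apt-get upgrade` never installs new packages, so machines updated that way hold the kernel back until a `dist-upgrade` (or apt's `full-upgrade`) is run. A hedged diagnostic sketch:

```shell
# Compare what each machine thinks the kernel metapackage's
# candidate version is; a stale mirror shows an older candidate.
apt-cache policy linux-image-amd64
# Simulate an upgrade; packages listed as "kept back" usually
# point at the explanation.
apt-get -s dist-upgrade
```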
Re: IMPORTANT: do NOT upgrade to new stable point release
On 2023-12-09 13:09, Dan Ritter wrote: https://fulda.social/@Ganneff/111551628003050712 https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1057843 The new kernel release is reported to contain an ext4 data corruption bug. It's prudent not to upgrade, or if you have started to upgrade, not to reboot, until a new kernel release is prepared. -dsr- Pleased to note that 6.1.0-15 seems to have hit the mirrors now. I assume this is the fixed version.
Re: IMPORTANT: do NOT upgrade to new stable point release
On 2023-12-10 11:56, Andrew M.A. Cater wrote: On Sun, Dec 10, 2023 at 11:50:18AM -0500, Gary Dale wrote: On 2023-12-09 14:18, Michael Kjörling wrote: On 9 Dec 2023 20:54 +0200, from ale...@nanoid.net (Alexis Grigoriou): I just upgraded to Bookworm this morning. I did reboot a couple of times but there seems to be no problem (yet). Is there anything I should look for or do other than rebooting? If you upgraded this morning, then I would expect that you are okay for now. Per #5 in https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1057843 the bug is present in kernel Debian package version 6.1.64-1. If you are on 6.1.55-1 (current Bookworm stable per last night) you _likely_ aren't affected. If you are on 6.1.55-1 (or earlier), just hold off on upgrades for now; and if you need to upgrade something else, take great care for now to ensure that no Linux kernel packages get upgraded to any version < 6.1.66, and preferably not < 6.1.66-1. For versions, check: * uname -v * dpkg -l linux-image-\* In that bug report thread, #21 lists 6.1.66 as fixed upstream, and #28 indicates that 6.1.66-1 includes the fix from upstream, and that it is being published. Any idea when the fixed version will hit stable? With headless servers, it's a pain to downgrade to a previous kernel version. Give them a little while: the release team are working on it right now as I type. I'm fairly sure they're pushing it out more or less immediately, once they're sure that it's built correctly and synced to all the appropriate places to be further synced to mirrors. "Now" is almost exactly Sun 10 Dec 16:55:43 UTC 2023. Andy (amaca...@debian.org) Thanks. I logged into each of my headless servers and removed the problematic kernel, then rebooted them, so they are all at 6.1.0-13 now.
Re: IMPORTANT: do NOT upgrade to new stable point release
On 2023-12-10 12:24, Greg Wooledge wrote: On Sun, Dec 10, 2023 at 05:09:15PM -, Curt wrote: On 2023-12-10, Andrew M.A. Cater wrote: "Now" is almost exactly Sun 10 Dec 16:55:43 UTC 2023 You mean in the Zulu Time Zone (as I am all at sea)? Use "date -u" to see current UTC time. That should be sufficient to let you know how long it has been since Andrew's "now". You're getting too complicated. The date stamp on his e-mail will display the correct local time (as you have set it) so I can see that he wrote it 30 minutes ago. That relative time is universal across time zones.
Re: IMPORTANT: do NOT upgrade to new stable point release
On 2023-12-09 14:18, Michael Kjörling wrote: On 9 Dec 2023 20:54 +0200, from ale...@nanoid.net (Alexis Grigoriou): I just upgraded to Bookworm this morning. I did reboot a couple of times but there seems to be no problem (yet). Is there anything I should look for or do other than rebooting? If you upgraded this morning, then I would expect that you are okay for now. Per #5 in https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1057843 the bug is present in kernel Debian package version 6.1.64-1. If you are on 6.1.55-1 (current Bookworm stable per last night) you _likely_ aren't affected. If you are on 6.1.55-1 (or earlier), just hold off on upgrades for now; and if you need to upgrade something else, take great care for now to ensure that no Linux kernel packages get upgraded to any version < 6.1.66, and preferably not < 6.1.66-1. For versions, check: * uname -v * dpkg -l linux-image-\* In that bug report thread, #21 lists 6.1.66 as fixed upstream, and #28 indicates that 6.1.66-1 includes the fix from upstream, and that it is being published. Any idea when the fixed version will hit stable? With headless servers, it's a pain to downgrade to a previous kernel version.
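To check whether an installed package predates the fix, Debian-style version strings can be compared with `dpkg --compare-versions`, or portably with `sort -V` (which agrees with dpkg for simple versions like these); a small sketch:

```shell
# sort -V understands dotted/dashed version numbering, so the
# vulnerable 6.1.64-1 sorts before the fixed 6.1.66-1.
installed="6.1.64-1"   # substitute the version from: dpkg -l 'linux-image-*'
fixed="6.1.66-1"
if [ "$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n1)" = "$installed" ] \
   && [ "$installed" != "$fixed" ]; then
  echo "older than the fix"   # prints: older than the fix
fi
```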
CUPS classes wrecking printing
I'm running Debian/Bookworm (stable) on an AMD64 system - a laptop. It's a fresh install of Debian from about 6 months back that has been kept up to date. Each December I am involved in an event that requires me to use 3 photo-printers to print a lot of 4x6 photos. It takes 2 or 3 printers to get the throughput so people aren't waiting for their photos. I've been doing this for a decade using various photo printers. I've always just set up a CUPS "photo" class and added the printers to it. Then I'd use lpr -P photo to send the output to whichever printer was free. I even did it last year using the same laptop and printers and things worked. This year, because it was a new OS install, I had to connect the printers and install them again. This required the gutenprint drivers for two of the printers while the newest seems to work "driverless". All the printers were tested individually and printed the CUPS test page perfectly. However when I sent something to the "photo" class, whichever printer received the job just printed a page of bands of colour. I could send a picture to an individual printer OK but not send it to the "photo" class. I got through the event by skipping the lpr -P photo... command and manually selecting a printer from Gwenview when I was viewing the picture earlier in the workflow (to verify it was worth printing). This was not ideal and I only got through it because this year's event was less than half its usual size. This was not an lpr problem because I also couldn't print to the class from Gwenview. CUPS classes seem to be broken.
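For reference, a CUPS class is built with `lpadmin`; a minimal sketch of the setup described above (printer names here are placeholders for the actual queues):

```shell
# Add two already-configured printers to a class named "photo";
# lpadmin creates the class on first use of -c.
lpadmin -p photoprinter1 -c photo
lpadmin -p photoprinter2 -c photo
# Make the class accept and start jobs:
cupsaccept photo
cupsenable photo
# Send a job to whichever member printer is free:
lpr -P photo test.jpg
```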
Re: network bonding on Debian/Trixie
On 2023-10-16 21:20, Igor Cicimov wrote: On Tue, Oct 17, 2023 at 12:12 PM Gary Dale wrote: On 2023-10-16 18:52, Igor Cicimov wrote: Hi, On Tue, Oct 17, 2023, 8:00 AM Gary Dale wrote: I'm trying to configure network bonding on an AMD64 system running Debian/Trixie. I've got a wired connection and a wifi connection, both of which work individually. I'd like them to work together to improve the throughput but for now I'm just trying to get the bond to work. However when I configure them, the wifi interface always shows down. # ip addr 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 <http://127.0.0.1/8> scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host noprefixroute valid_lft forever preferred_lft forever 2: enp10s0: mtu 1500 qdisc mq master bond0 state UP group default qlen 1000 link/ether 3c:7c:3f:ef:15:47 brd ff:ff:ff:ff:ff:ff 4: wlxc4411e319ad5: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether c4:41:1e:31:9a:d5 brd ff:ff:ff:ff:ff:ff 7: bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 3c:7c:3f:ef:15:47 brd ff:ff:ff:ff:ff:ff inet 192.168.1.20/24 <http://192.168.1.20/24> brd 192.168.1.255 scope global bond0 valid_lft forever preferred_lft forever inet6 fe80::3e7c:3fff:feef:1547/64 scope link proto kernel_ll valid_lft forever preferred_lft forever It does this even if I pull the cable from the wired connection. The wifi never comes up. 
Here's the /etc/network/interfaces file: auto lo iface lo inet loopback auto enp10s0 iface enp10s0 inet manual bond-master bond0 bond-mode 1 auto wlxc4411e319ad5 iface wlxc4411e319ad5 inet manual bond-master bond0 bond-mode 1 auto bond0 iface bond0 inet static address 192.168.1.20 netmask 255.255.255.0 network 192.168.1.0 gateway 192.168.1.1 bond-slaves enp10s0 wlxc4411e319ad5 bond-mode 1 bond-miimon 100 bond-downdelay 200 bond-updelay 200 I'd like to get it to work in a faster mode but for now the backup at least allows the networking to start without the wifi. Other modes seem to disable networking until both interfaces come up, which is not a good design decision IMHO. At least with mode 1, the network starts. Any ideas on how to get the wifi to work in bonding? Probably your wifi card does not support MII, check with: ~]# ethtool wlxc4411e319ad5 | grep "Link detected:" and: ~]# cat /proc/net/bonding/bond0 I'm assuming that no output is bad here. Still, I don't see why a device that works shouldn't be able to participate in a bond. As a network interface, the wifi device produces and responds to network traffic. Are you saying the bonding takes place below the driver level? I'm saying the bonding driver is doing its own link detection on the presented interfaces for failover purposes. It can use ARP or MII. You cannot enable MII on an interface that does not support that functionality. Use mii-tool to check both interfaces and see the difference. Apparently neither interface supports it. According to what I have read, calling mii-tool with no parameters should return a terse list of all interfaces that support it. However, when I try that, I get "No interface specified".
Moreover, # mii-tool enp10s0 SIOCGMIIPHY on 'enp10s0' failed: Operation not supported # mii-tool wlxc4411e319ad5 SIOCGMIIPHY on 'wlxc4411e319ad5' failed: Operation not supported which seems weird given that I have a recent, mainstream ASUS mainboard with a generic realtek onboard NIC that seems to be participating in the bonding. I've also not seen any warnings that bonding requires a specific (and apparently rare) type of NIC. Indeed, my laptop seems to fail over nicely between ethernet and wifi. Perhaps mii-tool is broken on Trixie?
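Since neither interface reports MII status, the bonding driver's ARP monitor is the usual alternative for link detection. An untested /etc/network/interfaces sketch for the bond stanza, reusing the addresses from the posted config (bond-miimon and bond-arp-interval are mutually exclusive, so the miimon lines are dropped):

```
auto bond0
iface bond0 inet static
    address 192.168.1.20
    netmask 255.255.255.0
    gateway 192.168.1.1
    bond-slaves enp10s0 wlxc4411e319ad5
    bond-mode active-backup
    # Probe the gateway every 1000 ms instead of polling MII status;
    # the slave is marked down when the probes go unanswered.
    bond-arp-interval 1000
    bond-arp-ip-target 192.168.1.1
```

Note that many wifi drivers also refuse to let the bond rewrite their MAC address, which is what the bonding option fail_over_mac=active works around; wifi slaves often stay down for that reason even with ARP monitoring.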
Re: network bonding on Debian/Trixie
On 2023-10-16 18:52, Igor Cicimov wrote: Hi, On Tue, Oct 17, 2023, 8:00 AM Gary Dale wrote: I'm trying to configure network bonding on an AMD64 system running Debian/Trixie. I've got a wired connection and a wifi connection, both of which work individually. I'd like them to work together to improve the throughput but for now I'm just trying to get the bond to work. However when I configure them, the wifi interface always shows down. # ip addr 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 <http://127.0.0.1/8> scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host noprefixroute valid_lft forever preferred_lft forever 2: enp10s0: mtu 1500 qdisc mq master bond0 state UP group default qlen 1000 link/ether 3c:7c:3f:ef:15:47 brd ff:ff:ff:ff:ff:ff 4: wlxc4411e319ad5: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether c4:41:1e:31:9a:d5 brd ff:ff:ff:ff:ff:ff 7: bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 3c:7c:3f:ef:15:47 brd ff:ff:ff:ff:ff:ff inet 192.168.1.20/24 <http://192.168.1.20/24> brd 192.168.1.255 scope global bond0 valid_lft forever preferred_lft forever inet6 fe80::3e7c:3fff:feef:1547/64 scope link proto kernel_ll valid_lft forever preferred_lft forever It does this even if I pull the cable from the wired connection. The wifi never comes up. Here's the /etc/network/interfaces file: auto lo iface lo inet loopback auto enp10s0 iface enp10s0 inet manual bond-master bond0 bond-mode 1 auto wlxc4411e319ad5 iface wlxc4411e319ad5 inet manual bond-master bond0 bond-mode 1 auto bond0 iface bond0 inet static address 192.168.1.20 netmask 255.255.255.0 network 192.168.1.0 gateway 192.168.1.1 bond-slaves enp10s0 wlxc4411e319ad5 bond-mode 1 bond-miimon 100 bond-downdelay 200 bond-updelay 200 I'd like to get it to work in a faster mode but for now the backup at least allows the networking to start without the wifi. 
Other modes seem to disable networking until both interfaces come up, which is not a good design decision IMHO. At least with mode 1, the network starts. Any ideas on how to get the wifi to work in bonding? Probably your wifi card does not support MII, check with: ~]# ethtool wlxc4411e319ad5 | grep "Link detected:" and: ~]# cat /proc/net/bonding/bond0 I'm assuming that no output is bad here. Still, I don't see why a device that works shouldn't be able to participate in a bond. As a network interface, the wifi device produces and responds to network traffic. Are you saying the bonding takes place below the driver level?
network bonding on Debian/Trixie
I'm trying to configure network bonding on an AMD64 system running Debian/Trixie. I've got a wired connection and a wifi connection, both of which work individually. I'd like them to work together to improve the throughput but for now I'm just trying to get the bond to work. However when I configure them, the wifi interface always shows down. # ip addr 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host noprefixroute valid_lft forever preferred_lft forever 2: enp10s0: mtu 1500 qdisc mq master bond0 state UP group default qlen 1000 link/ether 3c:7c:3f:ef:15:47 brd ff:ff:ff:ff:ff:ff 4: wlxc4411e319ad5: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether c4:41:1e:31:9a:d5 brd ff:ff:ff:ff:ff:ff 7: bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 3c:7c:3f:ef:15:47 brd ff:ff:ff:ff:ff:ff inet 192.168.1.20/24 brd 192.168.1.255 scope global bond0 valid_lft forever preferred_lft forever inet6 fe80::3e7c:3fff:feef:1547/64 scope link proto kernel_ll valid_lft forever preferred_lft forever It does this even if I pull the cable from the wired connection. The wifi never comes up. Here's the /etc/network/interfaces file: auto lo iface lo inet loopback auto enp10s0 iface enp10s0 inet manual bond-master bond0 bond-mode 1 auto wlxc4411e319ad5 iface wlxc4411e319ad5 inet manual bond-master bond0 bond-mode 1 auto bond0 iface bond0 inet static address 192.168.1.20 netmask 255.255.255.0 network 192.168.1.0 gateway 192.168.1.1 bond-slaves enp10s0 wlxc4411e319ad5 bond-mode 1 bond-miimon 100 bond-downdelay 200 bond-updelay 200 I'd like to get it to work in a faster mode but for now the backup at least allows the networking to start without the wifi. Other modes seem to disable networking until both interfaces come up, which is not a good design decision IMHO. 
At least with mode 1, the network starts. Any ideas on how to get the wifi to work in bonding?
Re: SMART error messages being sent to the wrong address
On 2023-10-04 02:43, Jeffrey Walton wrote: On Tue, Oct 3, 2023 at 9:32 PM Gary Dale wrote: I'm running Debian/Bookworm on a headless server. The box has had a variety of roles and names. At one time it was called fanny after the groundbreaking rock band and because it had a lot of fans in it. This latter attribute led to it being made into a file server and renamed BigData. The problem I'm having is that SMART error messages are being sent to root@fanny. instead of to me. /etc/aliases has all mail to root going to me, but because this is addressed to root on a machine with a different name, it goes out and I only get the message when it bounces (because the machine name no longer exists). I can't find where the e-mail address is being set. Tracing down the smartmontools config files didn't turn up any obvious problems. Can anyone point me in the right direction? If Greg and Charles' suggestions do not work, I would grep for it. $ sudo su - $ grep -iIR fanny /etc That should uncover places the old name shows up. Jeff Thanks Jeff. It looks like it came from /etc/mailname.
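For reference, Debian MTAs take the domain they append to unqualified recipients like "root" from /etc/mailname; a hedged fix sketch (the FQDN shown is a placeholder):

```shell
# See what domain is currently appended to bare addresses:
cat /etc/mailname
# As root, replace the stale hostname with the machine's current FQDN
# ("bigdata.example.com" is a placeholder):
echo bigdata.example.com > /etc/mailname
```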
Re: SMART error messages being sent to the wrong address
On 2023-10-04 03:22, to...@tuxteam.de wrote: On Wed, Oct 04, 2023 at 02:43:36AM -0400, Jeffrey Walton wrote: [...] $ sudo su - ...better spelt these days as "sudo -i" (or "sudo -s"), see sudo(1)'s man page. Or sudo bash - or whatever your preferred shell is.
Re: SMART error messages being sent to the wrong address
On 2023-10-03 22:31, Greg Wooledge wrote: On Tue, Oct 03, 2023 at 09:02:39PM -0500, John Hasler wrote: Add a CNAME record to your DNS. Mail delivery is supposed to ignore CNAME records. Perhaps you meant to say an MX record. But either way, the OP's question is still not answered -- where is the recipient address configured? My guess would be "in the system's MTA". I'm guessing the SMART tools are simply sending to "root" with no domain, and relying on the system's MTA to fill in the recipient domain. I'm guessing this MTA is not configured properly. The OP should verify this by running something like: echo test | mailx -s test root And see how the message headers get written, and where the message goes. If it turns out my guesses are right and the MTA is misconfigured, then we'll know the next steps. If it turns out my guesses are wrong, and the MTA is fine, then we'll have to figure out how these SMART tools are configured. Other mail from that server is being sent properly. It's just the SMART messages that are going to the wrong place.
Re: SMART error messages being sent to the wrong address
On 2023-10-03 23:39, Charles Curley wrote: On Tue, 3 Oct 2023 20:57:57 -0400 Gary Dale wrote: I can't find where the e-mail address is being set. Tracing down the smartmontools config files didn't turn up any obvious problems. The email address is set with the -m option in the file /etc/smartd.conf. I'd check that to see if you changed it from the default ('root'). And you should be able to set it to what you want. No. The -m option is "root".
SMART error messages being sent to the wrong address
I'm running Debian/Bookworm on a headless server. The box has had a variety of roles and names. At one time it was called fanny after the groundbreaking rock band and because it had a lot of fans in it. This latter attribute led to it being made into a file server and renamed BigData. The problem I'm having is that SMART error messages are being sent to root@fanny. instead of to me. /etc/aliases has all mail to root going to me, but because this is addressed to root on a machine with a different name, it goes out and I only get the message when it bounces (because the machine name no longer exists). I can't find where the e-mail address is being set. Tracing down the smartmontools config files didn't turn up any obvious problems. Can anyone point me in the right direction? Thanks
keybase upgrade / install fails
I'm running Debian/Bookworm on an AMD64 system. For the last two or three weeks I've been getting messages like the one below when I use apt: The keybase package doesn't seem to configure properly so apt keeps trying, and failing, to finish the package installation. Removing it (not purging) then reinstalling doesn't help. Setting up keybase (6.2.2-20230726175256.4464bfb32d) ... Autorestarting Keybase via systemd for mkdir: cannot stat ‘/keybase’: Transport endpoint is not connected chown: cannot access '/keybase': Transport endpoint is not connected chmod: cannot access '/keybase': Transport endpoint is not connected dpkg: error processing package keybase (--configure): installed keybase package post-installation script subprocess returned error exit status 1 Errors were encountered while processing: keybase E: Sub-process /usr/bin/dpkg returned an error code (1) Any suggestions?
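"Transport endpoint is not connected" on /keybase usually indicates a stale FUSE mount left behind after the Keybase filesystem daemon died; a hedged recovery sketch (to try before reinstalling):

```shell
# Detach the dead FUSE mount; may need sudo depending on who mounted it.
fusermount -u /keybase
# Let dpkg re-run the unfinished keybase postinst script:
sudo dpkg --configure -a
```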
Re: laptop stopped getting to desktop after latest updates [RESOLVED]
On 2023-05-19 23:32, Gary Dale wrote: I'm running Debian/Bookworm on an ASUS FA506IC laptop. It's got an AMD Ryzen processor but an NVidia graphics card that provides me with great evidence for why I had previously avoided NVidia cards. I'm running Bookworm because I couldn't get it to work on Bullseye. It'd been running OK with the proprietary drivers (but not the Nouveau) until earlier today. I don't use it very often so it was probably a few weeks since I last updated it. I had trouble the previous time I'd updated it too, but that was the move to the non-free-firmware section that messed it up. Once I added the new section to the sources, things worked again. The symptoms I'm getting are the same as what it used to display when I tried to use sddm to start the desktop. Gdm3 and lightdm both worked in the past but now I'm getting the same symptom with all of them - a blank screen with a cursor flashing in the top-left corner. At that point I can't even bring up a text console, but I can reboot (ctrl-alt-del still works). I can boot to a recovery mode and start the network, but not sure how to track this down. Removing and reinstalling the NVidia driver didn't help. Trying to start the desktop without the NVidia driver (and firmware) installed also didn't work. I still get the system booting to a blank screen with a flashing cursor in the top-left corner. Any ideas? Thanks. I got some time this afternoon to play around with the system. My first effort was to go back to Debian/Bullseye (11.7) and do a fresh install. Surprisingly, it worked this time. I could boot into Gnome without problems. However I had no wifi. Changing to sddm however led to failure to reach a login screen. Installing lightdm, though, let me select Plasma 5. This is all with the Nouveau drivers. However the wifi issue needed to be resolved. It turns out there are working (non-free) drivers available but they require a more recent kernel.
I added bullseye-backports and installed the newer kernel and firmware-misc-nonfree. This led to a failure to reach a login screen (in fact, the boot appears to hang after producing about 4 lines of output but booting to recovery mode let me see that it failed loading the Nouveau drivers). Installing the nvidia-driver allowed the system to reach a graphical login but the system was not very stable. The difference in kernel versions was giving me a lot of errors so I tried to upgrade completely to Bookworm. That got me back to the point I was at when I first posted my original problem. I couldn't get to a working graphical login. This led me back to the Debian site to download the latest Bookworm installer. The RC3 netinst is problematic - it didn't recognize my /home partition as formatted and ended up wiping it out. And when I rebooted into the installed system, it left me at a grub prompt. But when I rebooted and selected the boot partition manually through the BIOS, I was able to boot into the system. I ran update-grub and it's now booting properly. Surprisingly, I was even able to switch to sddm and get into Plasma 5 (Wayland!), all while using the Nouveau drivers. And wifi just worked!
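The backports step described above amounts to something like this (a sketch; assumes a bullseye-backports entry is already in sources.list, as the post says, and uses the usual amd64 kernel metapackage name):

```shell
# Pull the newer kernel and the non-free firmware from backports:
apt install -t bullseye-backports linux-image-amd64 firmware-misc-nonfree
```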
Re: laptop stopped getting to desktop after latest updates
Share with the Debian community the X server logs of "Debian" and "systemrescuecd". Groeten Geert Stappers First is the log from a session that failed. Below is a log from a previous session that worked. Sorry, didn't get one from a systemrescuecd session - I thought I'd copied it to a network share but now I can't find it... [ 15.585] X.Org X Server 1.21.1.7 X Protocol Version 11, Revision 0 [ 15.585] Current Operating System: Linux Aspect23 6.1.0-9-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.27-1 (2023-05-08) x86_64 [ 15.585] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-6.1.0-9-amd64 root=UUID=9562e6ee-0942-46a6-abdd-2e3a1a3a80bb ro quiet [ 15.585] xorg-server 2:21.1.7-3 (https://www.debian.org/support) [ 15.585] Current version of pixman: 0.42.2 [ 15.585] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 15.585] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 15.585] (==) Log file: "/var/log/Xorg.0.log", Time: Mon May 22 12:43:47 2023 [ 15.585] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 15.585] (==) No Layout section. Using the first Screen section. [ 15.585] (==) No screen section available. Using defaults. [ 15.585] (**) |-->Screen "Default Screen Section" (0) [ 15.585] (**) | |-->Monitor "" [ 15.585] (==) No monitor specified for screen "Default Screen Section". Using a default monitor configuration. [ 15.585] (==) Automatically adding devices [ 15.585] (==) Automatically enabling devices [ 15.585] (==) Automatically adding GPU devices [ 15.585] (==) Automatically binding GPU devices [ 15.585] (==) Max clients allowed: 256, resource mask: 0x1f [ 15.585] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 15.585] Entry deleted from font path. 
[ 15.585] (==) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/100dpi/:unscaled, /usr/share/fonts/X11/75dpi/:unscaled, /usr/share/fonts/X11/Type1, /usr/share/fonts/X11/100dpi, /usr/share/fonts/X11/75dpi, built-ins [ 15.585] (==) ModulePath set to "/usr/lib/xorg/modules" [ 15.585] (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices. [ 15.585] (II) Loader magic: 0x55b923332f00 [ 15.585] (II) Module ABI versions: [ 15.585] X.Org ANSI C Emulation: 0.4 [ 15.585] X.Org Video Driver: 25.2 [ 15.585] X.Org XInput driver : 24.4 [ 15.585] X.Org Server Extension : 10.0 [ 15.586] (++) using VT number 7 [ 15.586] (II) systemd-logind: logind integration requires -keeptty and -keeptty was not provided, disabling logind integration [ 15.586] (II) xfree86: Adding drm device (/dev/dri/card0) [ 15.586] (II) Platform probe for /sys/devices/pci:00/:00:01.1/:01:00.0/drm/card0 [ 15.589] (--) PCI: (1@0:0:0) 10de:25a2:1043:16ad rev 161, Mem @ 0xfb00/16777216, 0xfe/4294967296, 0xff/33554432, I/O @ 0xf000/128, BIOS @ 0x/524288 [ 15.589] (--) PCI:*(6@0:0:0) 1002:1636:1043:16ad rev 198, Mem @ 0xff1000/268435456, 0xff2000/2097152, 0xfc50/524288, I/O @ 0xd000/256 [ 15.589] (II) LoadModule: "glx" [ 15.589] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so [ 15.590] (II) Module glx: vendor="X.Org Foundation" [ 15.590] compiled for 1.21.1.7, module version = 1.0.0 [ 15.590] ABI class: X.Org Server Extension, version 10.0 [ 15.590] (II) Applying OutputClass "nvidia" to /dev/dri/card0 [ 15.590] loading driver: nvidia [ 15.712] (==) Matched nvidia as autoconfigured driver 0 [ 15.712] (==) Matched nouveau as autoconfigured driver 1 [ 15.712] (==) Matched nv as autoconfigured driver 2 [ 15.712] (==) Matched ati as autoconfigured driver 3 [ 15.712] (==) Matched modesetting as autoconfigured driver 4 [ 15.712] (==) Matched fbdev as autoconfigured driver 5 [ 15.712] (==) Matched vesa as 
autoconfigured driver 6 [ 15.712] (==) Assigned the driver to the xf86ConfigLayout [ 15.712] (II) LoadModule: "nvidia" [ 15.712] (II) Loading /usr/lib/xorg/modules/drivers/nvidia_drv.so [ 15.713] (II) Module nvidia: vendor="NVIDIA Corporation" [ 15.713] compiled for 1.6.99.901, module version = 1.0.0 [ 15.713] Module class: X.Org Video Driver [ 15.713] (II) LoadModule: "nouveau" [ 15.713] (II) Loading /usr/lib/xorg/modules/drivers/nouveau_drv.so [ 15.713] (II) Module nouveau: vendor="X.Org Foundation" [ 15.713] compiled for 1.21.1.3, module version = 1.0.17 [ 15.713] Module class: X.Org Video Driver [ 15.713] ABI class: X.Org Video Driver, version 25.2 [ 15.713] (II) LoadModule: "nv" [ 15.713]
Re: laptop stopped getting to desktop after latest updates
On 2023-05-19 23:32, Gary Dale wrote:

I'm running Debian/Bookworm on an ASUS FA506IC laptop. It's got an AMD Ryzen processor but an NVidia graphics card, which provides me with great evidence for why I had previously avoided NVidia cards. I'm running Bookworm because I couldn't get it to work on Bullseye. It had been running OK with the proprietary drivers (but not the Nouveau ones) until earlier today. I don't use it very often, so it was probably a few weeks since I last updated it. I had trouble the previous time I updated it too, but that was the move to the non-free-firmware section that messed it up. Once I added the new section to the sources, things worked again.

The symptoms I'm getting are the same as what it used to display when I tried to use sddm to start the desktop. Gdm3 and lightdm both worked in the past, but now I'm getting the same symptom with all of them - a blank screen with a cursor flashing in the top-left corner. At that point I can't even bring up a text console, but I can reboot (ctrl-alt-del still works). I can boot to a recovery mode and start the network, but I'm not sure how to track this down.

Removing and reinstalling the NVidia driver didn't help. Trying to start the desktop without the NVidia driver (and firmware) installed also didn't work. I still get the system booting to a blank screen with a flashing cursor in the top-left corner. Any ideas? Thanks.

Further to the above, I purged the nVidia drivers, then noticed that there was still some nVidia stuff left (e.g. nvidia-persistence), so I did a further purge. When I rebooted, the system stalled initially after a couple of ACPI errors (there were only those lines on the screen - it was barely starting), but a ctrl-alt-del later it got all the way to starting the nouveau drivers before stalling. I reinstalled the nVidia drivers and was back to the same problem.

Rebooting to recovery mode and checking the journal for the previous boot, there was a string of errors relating to lightdm failing to start, followed by a retry and the same error. I purged lightdm, rebooted and reinstalled it, but got the same errors. I don't believe this is a problem with lightdm because it is also happening with gdm3 and sddm. The only difference is that it's been happening with sddm since I got the laptop last November, whereas the problem with lightdm and gdm3 is much more recent.

I can run the system with the Nouveau drivers when booting from systemrescuecd (a couple of recent versions, including 10.0), so this is a Debian issue.
laptop stopped getting to desktop after latest updates
I'm running Debian/Bookworm on an ASUS FA506IC laptop. It's got an AMD Ryzen processor but an NVidia graphics card, which provides me with great evidence for why I had previously avoided NVidia cards. I'm running Bookworm because I couldn't get it to work on Bullseye. It had been running OK with the proprietary drivers (but not the Nouveau ones) until earlier today. I don't use it very often, so it was probably a few weeks since I last updated it. I had trouble the previous time I updated it too, but that was the move to the non-free-firmware section that messed it up. Once I added the new section to the sources, things worked again.

The symptoms I'm getting are the same as what it used to display when I tried to use sddm to start the desktop. Gdm3 and lightdm both worked in the past, but now I'm getting the same symptom with all of them - a blank screen with a cursor flashing in the top-left corner. At that point I can't even bring up a text console, but I can reboot (ctrl-alt-del still works). I can boot to a recovery mode and start the network, but I'm not sure how to track this down.

Removing and reinstalling the NVidia driver didn't help. Trying to start the desktop without the NVidia driver (and firmware) installed also didn't work. I still get the system booting to a blank screen with a flashing cursor in the top-left corner. Any ideas? Thanks.
Re: WiFi firmware issue in Bookworm
On 2023-02-09 22:09, The Wanderer wrote: On 2023-02-09 at 21:39, Gary Dale wrote:

I'm trying to use a Linksys AE1200 wifi usb dongle as a second network connection for my Bookworm workstation. The device shows up in lsusb but not in ip link. According to what I've found, it needs the brcmfmac driver module, which seems to be in the 6.1 kernel and loaded:

$ lsmod | grep brcmfmac
brcmfmac              360448  0
brcmutil               20480  1 brcmfmac
cfg80211             1122304  1 brcmfmac
mmc_core              208896  1 brcmfmac
usbcore               344064  10 xhci_hcd,snd_usb_audio,usbhid,snd_usbmidi_lib,usblp,usb_storage,uvcvideo,brcmfmac,xhci_pci,uas

I'm using KDE/Plasma as my desktop and plasma-nm is loaded. However, it too doesn't seem to think that there is a wifi network. Interestingly, the device works in Bullseye, as I installed Bullseye on the computer that used to use it. That really only required downloading the correct firmware package that contained the brcmfmac module. That package no longer exists in Bookworm.

Would that be firmware-brcm80211? That still exists in bookworm; it's just been moved to the new non-free-firmware component, so it won't be showing up if your sources.list doesn't reference that component (in addition to e.g. main, contrib, and/or non-free). There was another thread on this mailing list just within the past day that asked a similar question regarding another firmware package, and the replies to that question include links to the announcements about the new component.

Thanks. That points then to a problem with the packages.debian.org page - it doesn't seem to search the new section. I found the announcement when I searched for debian non-free firmware. Right now, if you don't know it exists, you can't find it. :(
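For anyone hitting the same wall: the fix The Wanderer describes amounts to adding the non-free-firmware component to the apt sources. A minimal sketch of that edit, run here against a throwaway copy of a bookworm-style sources.list so nothing real is touched (the file contents and the sed pattern are illustrative, not taken from any particular system):

```shell
# Sketch: enable the new non-free-firmware component on a COPY of a
# bookworm-style sources.list (the file contents below are illustrative).
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
deb http://deb.debian.org/debian bookworm main contrib non-free
deb http://deb.debian.org/debian-security bookworm-security main contrib non-free
EOF

# Append non-free-firmware to each deb line that does not already have it.
sed -i '/non-free-firmware/!s/^\(deb .*non-free\)$/\1 non-free-firmware/' "$tmp"
cat "$tmp"

# After making the same change to the real /etc/apt/sources.list you would run:
#   apt update && apt install firmware-brcm80211
```

Running the sed a second time is a no-op thanks to the /non-free-firmware/! guard, so the edit is safe to repeat.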
Re: WiFi firmware issue in Bookworm
On 2023-02-09 22:07, piorunz wrote: On 10/02/2023 02:39, Gary Dale wrote:

Interestingly, the device works in Bullseye, as I installed Bullseye on the computer that used to use it. That really only required downloading the correct firmware package that contained the brcmfmac module. That package no longer exists in Bookworm.

All you need to do is search for the package; it may have a different name under Bookworm. dmesg gives you the file name you want:

[ 176.573530] usb 1-3.1: firmware: failed to load brcm/brcmfmac43236b.bin (-2)

apt-file search brcmfmac43236b.bin
firmware-brcm80211: /lib/firmware/brcm/brcmfmac43236b.bin

Install package firmware-brcm80211.

As I identified, that package doesn't exist in Bookworm:

# apt install firmware-brcm80211
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package firmware-brcm80211
WiFi firmware issue in Bookworm
I'm trying to use a Linksys AE1200 wifi usb dongle as a second network connection for my Bookworm workstation. The device shows up in lsusb but not in ip link. According to what I've found, it needs the brcmfmac driver module, which seems to be in the 6.1 kernel and loaded:

$ lsmod | grep brcmfmac
brcmfmac              360448  0
brcmutil               20480  1 brcmfmac
cfg80211             1122304  1 brcmfmac
mmc_core              208896  1 brcmfmac
usbcore               344064  10 xhci_hcd,snd_usb_audio,usbhid,snd_usbmidi_lib,usblp,usb_storage,uvcvideo,brcmfmac,xhci_pci,uas

I'm using KDE/Plasma as my desktop and plasma-nm is loaded. However it too doesn't seem to think that there is a wifi network. Interestingly the device works in Bullseye as I installed Bullseye on the computer that used to use it. That really only required downloading the correct firmware package that contained the brcmfmac module. That package no longer exists in Bookworm.

Dmesg reveals the problem:

[ 176.393749] usb 1-3.1: new high-speed USB device number 6 using xhci_hcd
[ 176.546464] usb 1-3.1: New USB device found, idVendor=13b1, idProduct=0039, bcdDevice= 0.01
[ 176.546467] usb 1-3.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 176.546469] usb 1-3.1: Product: Linksys AE1200
[ 176.546470] usb 1-3.1: Manufacturer: Cisco
[ 176.546470] usb 1-3.1: SerialNumber: 0001
[ 176.573466] brcmfmac: brcmf_fw_alloc_request: using brcm/brcmfmac43236b for chip BCM43235/3
[ 176.573530] usb 1-3.1: firmware: failed to load brcm/brcmfmac43236b.bin (-2)
[ 176.573535] usb 1-3.1: firmware: failed to load brcm/brcmfmac43236b.bin (-2)
[ 176.573537] usb 1-3.1: Direct firmware load for brcm/brcmfmac43236b.bin failed with error -2
[ 458.854083] usb 1-3.1: USB disconnect, device number 6
[ 464.232185] usb 1-3.1: new high-speed USB device number 7 using xhci_hcd
[ 464.392635] usb 1-3.1: New USB device found, idVendor=13b1, idProduct=0039, bcdDevice= 0.01
[ 464.392642] usb 1-3.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 464.392644] usb 1-3.1: Product: Linksys AE1200
[ 464.392645] usb 1-3.1: Manufacturer: Cisco
[ 464.392646] usb 1-3.1: SerialNumber: 0001
[ 464.422271] brcmfmac: brcmf_fw_alloc_request: using brcm/brcmfmac43236b for chip BCM43235/3
[ 464.422449] usb 1-3.1: firmware: failed to load brcm/brcmfmac43236b.bin (-2)
[ 464.422462] usb 1-3.1: firmware: failed to load brcm/brcmfmac43236b.bin (-2)
[ 464.422465] usb 1-3.1: Direct firmware load for brcm/brcmfmac43236b.bin failed with error -2

Apparently the firmware isn't loading. Any ideas on how to fix this?
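As an aside, the -2 in those messages is -ENOENT: the kernel simply could not find the file under /lib/firmware. A small sketch (the sample log line is copied from the output above; the sed pattern is my own) of extracting the missing file name so it can be handed to apt-file:

```shell
# Sketch: pull the missing firmware file name out of a dmesg line so it can
# be fed to apt-file. The sample line mirrors the error shown above.
msg='usb 1-3.1: Direct firmware load for brcm/brcmfmac43236b.bin failed with error -2'
fw=$(printf '%s\n' "$msg" | sed -n 's/.*Direct firmware load for \([^ ]*\) failed.*/\1/p')
echo "$fw"    # brcm/brcmfmac43236b.bin

# Error -2 is -ENOENT: the file is simply not present under /lib/firmware.
# With the name in hand, the owning package can be found with:
#   apt-file search "$(basename "$fw")"
```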
Re: support for ASUS AC1200 USB-AC53 Nano wifi dongle
On 2023-02-08 10:55, Gary Dale wrote: On 2023-02-08 09:07, Gary Dale wrote: On 2023-02-08 00:55, Alexander V. Makartsev wrote: On 08.02.2023 09:07, Gary Dale wrote:

I thought this would be easier than it's turned out to be. There are Internet posts going back years about support for this device but nothing recent - including a 5-year-old Ubuntu post saying it works. Other wifi devices seem to be recognized out of the box or with a simple install of non-free firmware, but not this one - at least not in Bullseye or Bookworm. The adapter itself seems to be quite popular, so I'm hoping someone can provide some clues on how to make it work. Thanks.

Your device should be based on the "RTL8822B" chip from Realtek, so you need to install the "firmware-realtek" package. If after doing that you still don't get a functioning wifi network adapter, you might need to build the driver kernel module. [1] This is what I had to do to get a USB Bluetooth adapter from Asus to work without issues, even though it is supported by the kernel in "bullseye".

It is always best to include extra information about your setup when you are asking for help. At least output from these commands would be a start:

$ uname -a
$ lsusb -v -t
# journalctl -b 0 --no-pager | grep -iE "rtl|rtk_|firmware"

If the output is long you can use a "paste" service [2] and send us a link.

[1] https://www.asus.com/ca-en/networking-iot-servers/adapters/all-series/usb-ac53-nano/helpdesk_download/?model2Name=USB-AC53-Nano
[2] https://paste.debian.net/

--

Thanks Alexander, but installing firmware-realtek doesn't work. It was the first thing I tried. Secondly, the ASUS driver fails to compile under Bullseye & later. It throws an error:

1.5_33902.20190604_COEX20180928-6a6a/include/rtw_security.h:255:8: error: redefinition of ‘struct sha256_state’
  255 | struct sha256_state {
      |        ^~~~

This is the same error I find in various drivers from GitHub. They all seem to be for older kernels and no longer compile. The fact that drivers have existed for so long was one reason I thought the device should be reasonably supported by now.

I had considered posting the output of lsusb, but it simply shows that the device is recognized. Making it verbose returns a lot of capabilities information but not much else. Here it is:

/:  Bus 06.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/4p, 5000M
    ID 1d6b:0003 Linux Foundation 3.0 root hub
/:  Bus 05.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/4p, 480M
    ID 1d6b:0002 Linux Foundation 2.0 root hub
    |__ Port 1: Dev 3, If 0, Class=Vendor Specific Class, Driver=, 480M
        ID 0b05:184c ASUSTek Computer, Inc.
/:  Bus 04.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/2p, 1M
    ID 1d6b:0003 Linux Foundation 3.0 root hub
    |__ Port 2: Dev 2, If 0, Class=Mass Storage, Driver=uas, 5000M
        ID 0080:a001 Unknown JMS578 based SATA bridge
/:  Bus 03.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/2p, 480M
    ID 1d6b:0002 Linux Foundation 2.0 root hub
/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/8p, 1M
    ID 1d6b:0003 Linux Foundation 3.0 root hub
/:  Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/14p, 480M
    ID 1d6b:0002 Linux Foundation 2.0 root hub
    |__ Port 13: Dev 2, If 0, Class=Mass Storage, Driver=usb-storage, 480M
        ID 058f:6366 Alcor Micro Corp. Multi Flash Reader

The journalctl command returns nothing.

Found a github repository that compiles on Bullseye at https://github.com/morrownr/88x2bu. Then it's a matter of doing the following as root:

git clone https://github.com/morrownr/88x2bu-20210702   # date string may differ
cd 88x2bu-20210702
make clean
make
make install

then rebooting. The wifi dongle now shows in "ip addr".

Had the wrong git command - now corrected above.

Ended up having another issue after I got it installed (on a friend's machine that had been running Windows 7 badly but is now running Bullseye nicely). Their residence doesn't use a WiFi password, so I thought the device should just connect.

Turns out there was a device fingerprinting system in place that worked with an annual voucher number you had to enter to connect to the Internet. Once I got the number, things worked perfectly.
Re: support for ASUS AC1200 USB-AC53 Nano wifi dongle
On 2023-02-09 03:30, Anssi Saari wrote: Gary Dale writes:

I thought this would be easier than it's turned out to be. There are Internet posts going back years about support for this device but nothing recent - including a 5-year-old Ubuntu post saying it works. Other wifi devices seem to be recognized out of the box or with a simple install of non-free firmware, but not this one - at least not in Bullseye or Bookworm.

Hm. What I found was that the driver has been integrated in kernel 6.2, and if you need to build it for an older kernel, those are supported too. Versions 5.12-6.2 have community support and 4.19-5.11 are supported by Realtek. I don't know what that means exactly, though. Source: https://github.com/morrownr/88x2bu-20210702 I can try to build this with Debian's stable 5.10 kernel at some point, but I don't have the hardware.

Thanks. Found that github repo myself. I hope you are right about 6.2 integration, but I'm not sure we'll get there with Bookworm...
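If the 6.2 integration pans out, the practical question becomes whether the running kernel is already new enough. A hedged sketch of that check using a version-aware sort (the 6.2 threshold is just the figure quoted above, and the fallback repo name comes from the thread):

```shell
# Sketch: decide whether the running kernel is new enough (>= 6.2) to carry
# the in-tree driver, using sort -V for a correct version comparison.
need=6.2
have=$(uname -r | cut -d- -f1)   # e.g. "6.1.0" on a bookworm kernel
if [ "$(printf '%s\n%s\n' "$need" "$have" | sort -V | head -n1)" = "$need" ]; then
    echo "kernel $have: driver should already be in-tree"
else
    echo "kernel $have: an out-of-tree build (e.g. morrownr/88x2bu-20210702) is needed"
fi
```

sort -V matters here: a plain string comparison would wrongly rank 6.10 below 6.2.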
Re: support for ASUS AC1200 USB-AC53 Nano wifi dongle
On 2023-02-08 09:07, Gary Dale wrote: On 2023-02-08 00:55, Alexander V. Makartsev wrote: On 08.02.2023 09:07, Gary Dale wrote:

I thought this would be easier than it's turned out to be. There are Internet posts going back years about support for this device but nothing recent - including a 5-year-old Ubuntu post saying it works. Other wifi devices seem to be recognized out of the box or with a simple install of non-free firmware, but not this one - at least not in Bullseye or Bookworm. The adapter itself seems to be quite popular, so I'm hoping someone can provide some clues on how to make it work. Thanks.

Your device should be based on the "RTL8822B" chip from Realtek, so you need to install the "firmware-realtek" package. If after doing that you still don't get a functioning wifi network adapter, you might need to build the driver kernel module. [1] This is what I had to do to get a USB Bluetooth adapter from Asus to work without issues, even though it is supported by the kernel in "bullseye".

It is always best to include extra information about your setup when you are asking for help. At least output from these commands would be a start:

$ uname -a
$ lsusb -v -t
# journalctl -b 0 --no-pager | grep -iE "rtl|rtk_|firmware"

If the output is long you can use a "paste" service [2] and send us a link.

[1] https://www.asus.com/ca-en/networking-iot-servers/adapters/all-series/usb-ac53-nano/helpdesk_download/?model2Name=USB-AC53-Nano
[2] https://paste.debian.net/

--

Thanks Alexander, but installing firmware-realtek doesn't work. It was the first thing I tried. Secondly, the ASUS driver fails to compile under Bullseye & later. It throws an error:

1.5_33902.20190604_COEX20180928-6a6a/include/rtw_security.h:255:8: error: redefinition of ‘struct sha256_state’
  255 | struct sha256_state {
      |        ^~~~

This is the same error I find in various drivers from GitHub. They all seem to be for older kernels and no longer compile. The fact that drivers have existed for so long was one reason I thought the device should be reasonably supported by now.

I had considered posting the output of lsusb, but it simply shows that the device is recognized. Making it verbose returns a lot of capabilities information but not much else. Here it is:

/:  Bus 06.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/4p, 5000M
    ID 1d6b:0003 Linux Foundation 3.0 root hub
/:  Bus 05.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/4p, 480M
    ID 1d6b:0002 Linux Foundation 2.0 root hub
    |__ Port 1: Dev 3, If 0, Class=Vendor Specific Class, Driver=, 480M
        ID 0b05:184c ASUSTek Computer, Inc.
/:  Bus 04.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/2p, 1M
    ID 1d6b:0003 Linux Foundation 3.0 root hub
    |__ Port 2: Dev 2, If 0, Class=Mass Storage, Driver=uas, 5000M
        ID 0080:a001 Unknown JMS578 based SATA bridge
/:  Bus 03.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/2p, 480M
    ID 1d6b:0002 Linux Foundation 2.0 root hub
/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/8p, 1M
    ID 1d6b:0003 Linux Foundation 3.0 root hub
/:  Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/14p, 480M
    ID 1d6b:0002 Linux Foundation 2.0 root hub
    |__ Port 13: Dev 2, If 0, Class=Mass Storage, Driver=usb-storage, 480M
        ID 058f:6366 Alcor Micro Corp. Multi Flash Reader

The journalctl command returns nothing.

Found a github repository that compiles on Bullseye at https://github.com/morrownr/88x2bu. Then it's a matter of doing the following as root:

git clone https://github.com/morrownr/88x2bu
cd 88x2bu-20210702   # date string may differ
make clean
make
make install

then rebooting. The wifi dongle now shows in "ip addr".
Re: support for ASUS AC1200 USB-AC53 Nano wifi dongle
On 2023-02-08 00:55, Alexander V. Makartsev wrote: On 08.02.2023 09:07, Gary Dale wrote:

I thought this would be easier than it's turned out to be. There are Internet posts going back years about support for this device but nothing recent - including a 5-year-old Ubuntu post saying it works. Other wifi devices seem to be recognized out of the box or with a simple install of non-free firmware, but not this one - at least not in Bullseye or Bookworm. The adapter itself seems to be quite popular, so I'm hoping someone can provide some clues on how to make it work. Thanks.

Your device should be based on the "RTL8822B" chip from Realtek, so you need to install the "firmware-realtek" package. If after doing that you still don't get a functioning wifi network adapter, you might need to build the driver kernel module. [1] This is what I had to do to get a USB Bluetooth adapter from Asus to work without issues, even though it is supported by the kernel in "bullseye".

It is always best to include extra information about your setup when you are asking for help. At least output from these commands would be a start:

$ uname -a
$ lsusb -v -t
# journalctl -b 0 --no-pager | grep -iE "rtl|rtk_|firmware"

If the output is long you can use a "paste" service [2] and send us a link.

[1] https://www.asus.com/ca-en/networking-iot-servers/adapters/all-series/usb-ac53-nano/helpdesk_download/?model2Name=USB-AC53-Nano
[2] https://paste.debian.net/

--

Thanks Alexander, but installing firmware-realtek doesn't work. It was the first thing I tried. Secondly, the ASUS driver fails to compile under Bullseye & later. It throws an error:

1.5_33902.20190604_COEX20180928-6a6a/include/rtw_security.h:255:8: error: redefinition of ‘struct sha256_state’
  255 | struct sha256_state {
      |        ^~~~

This is the same error I find in various drivers from GitHub. They all seem to be for older kernels and no longer compile. The fact that drivers have existed for so long was one reason I thought the device should be reasonably supported by now.

I had considered posting the output of lsusb, but it simply shows that the device is recognized. Making it verbose returns a lot of capabilities information but not much else. Here it is:

/:  Bus 06.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/4p, 5000M
    ID 1d6b:0003 Linux Foundation 3.0 root hub
/:  Bus 05.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/4p, 480M
    ID 1d6b:0002 Linux Foundation 2.0 root hub
    |__ Port 1: Dev 3, If 0, Class=Vendor Specific Class, Driver=, 480M
        ID 0b05:184c ASUSTek Computer, Inc.
/:  Bus 04.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/2p, 1M
    ID 1d6b:0003 Linux Foundation 3.0 root hub
    |__ Port 2: Dev 2, If 0, Class=Mass Storage, Driver=uas, 5000M
        ID 0080:a001 Unknown JMS578 based SATA bridge
/:  Bus 03.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/2p, 480M
    ID 1d6b:0002 Linux Foundation 2.0 root hub
/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/8p, 1M
    ID 1d6b:0003 Linux Foundation 3.0 root hub
/:  Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/14p, 480M
    ID 1d6b:0002 Linux Foundation 2.0 root hub
    |__ Port 13: Dev 2, If 0, Class=Mass Storage, Driver=usb-storage, 480M
        ID 058f:6366 Alcor Micro Corp. Multi Flash Reader

The journalctl command returns nothing.
support for ASUS AC1200 USB-AC53 Nano wifi dongle
I thought this would be easier than it's turned out to be. There are Internet posts going back years about support for this device but nothing recent - including a 5-year-old Ubuntu post saying it works. Other wifi devices seem to be recognized out of the box or with a simple install of non-free firmware, but not this one - at least not in Bullseye or Bookworm. The adapter itself seems to be quite popular, so I'm hoping someone can provide some clues on how to make it work. Thanks.
Re: latest testing update broke my laptop
On 2022-12-19 23:38, Gary Dale wrote: On 2022-12-18 00:53, David Christensen wrote: On 12/17/22 13:00, Gary Dale wrote: On 2022-12-17 14:39, David Christensen wrote: On 12/17/22 04:44, Gary Dale wrote: On 2022-12-16 21:29, Gary Dale wrote:

My laptop no longer boots thanks to the latest update.

If you want a GNU/Linux distribution that "just works", one possibility is Debian Stable and "supported hardware". The former is easy -- download a d-i ISO. The latter can be anywhere from trivial to impossible to determine a priori; the practical answer is install and find out.

What is the manufacturer, model, and part number of your computer? What options does it have? What components have you added, changed, or removed? What external hardware is connected? Do you have a broadband Internet connection? What d-i media did you use? Where did you get it? Did you verify the checksum of the download and/or media?

Thanks David, but as I explained, Debian/Stable doesn't "just work". You need the second part of your condition, but it's hard to know if hardware is supported until you try it. And what doesn't work one week may work the next. I don't blame Debian in this case. It's clearly an nVidia problem. Normally I stay away from them when getting something for Linux, but I got a great Black Friday deal. That's why I even got a new laptop to begin with. Apart from the nVidia components, it seems to work fine. Added nothing - just removed the Windows partitions and installed Linux.

As I explained, I used the Debian netinst copied to a Ventoy USB. What was strange is that Stable has no problem installing (just problems running) but Testing seems to get hung up with the networking (when I tried a graphical install, it at least showed that was what it was doing. The text-based installer flashed something on the screen but never got around to doing more than the background colours - no text or progress bar - so I wasn't sure what it was doing).

Also, the current testing alpha netinst iso doesn't seem to work with Ventoy, which meant I had to dd it to its own usb stick. And yes, I only download the files from debian.org. Have you tried finding the Debian Testing netinst checksums? You can find them for the weekly builds if you look hard enough, but not the ones for the Alpha release. I thought maybe the alpha release would be a little more stable than a weekly build.

I can confirm that the problem with FAT32 was fixed by a reboot. I don't reboot every day normally. The laptop is an ASUS FA506ICB. I'll be filing a bug report or three later. Yesterday I just needed to get it working again, but I wanted to document the pulling of hair and gnashing of teeth - I suspect I may have to do this again...

STFW "ASUS FA506ICB linux" I am not seeing any promising hits. I would re-install Windows, run Debian Stable in a VM, STFW periodically, wait for a post by someone who succeeds running a GNU/Linux distribution on that machine, and try again. David

You didn't read what I wrote. I've actually got it running quite well. The only thing is that it seems to need the proprietary nVidia drivers - the Nouveau ones won't cut it. Also, there seems to be an issue with sddm - gdm3 and lightdm both work. I can recommend the laptop as a reasonable candidate for Linux. Apart from the need for proprietary drivers, which is something I blame nVidia for, it seems to work perfectly.

Actually, there is something else that doesn't work that doesn't bother me too much - the usb-c port doesn't work.
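On the checksum point: once the SHA256SUMS file for a given image can be located, verification is one command. A self-contained sketch, using locally created stand-in files so it can be run anywhere (a real check would use the actual ISO and the SHA256SUMS published alongside it on debian.org):

```shell
# Sketch: verify a downloaded installer image against its SHA256SUMS file.
# The files here are local stand-ins so the flow is reproducible as-is.
dir=$(mktemp -d) && cd "$dir"
printf 'stand-in installer contents\n' > debian-testing-amd64-netinst.iso
sha256sum debian-testing-amd64-netinst.iso > SHA256SUMS

# This is the actual verification step you would run on a real download;
# it prints "<name>: OK" on success and fails loudly on a mismatch.
sha256sum -c SHA256SUMS
```

The same flow works for any image for which Debian publishes SHA256SUMS, weekly builds included.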
Re: latest testing update broke my laptop
On 2022-12-20 04:16, Brad Rogers wrote: On Mon, 19 Dec 2022 23:33:19 -0500 Gary Dale wrote:

Hello Gary,

you need to start with Debian/Stable then upgrade

That's not correct. You *can* do it that way, but there are installer ISOs for testing.

Not with this laptop. The Debian/Testing installer failed. I can't guarantee that it will always fail, but in this case it wasn't working. I suspect it is the combination of AMD + nVidia that is the culprit. The Stable installer works (runs to completion) but the Testing one fails.
Re: latest testing update broke my laptop
On 2022-12-18 00:53, David Christensen wrote: On 12/17/22 13:00, Gary Dale wrote: On 2022-12-17 14:39, David Christensen wrote: On 12/17/22 04:44, Gary Dale wrote: On 2022-12-16 21:29, Gary Dale wrote:

My laptop no longer boots thanks to the latest update.

If you want a GNU/Linux distribution that "just works", one possibility is Debian Stable and "supported hardware". The former is easy -- download a d-i ISO. The latter can be anywhere from trivial to impossible to determine a priori; the practical answer is install and find out.

What is the manufacturer, model, and part number of your computer? What options does it have? What components have you added, changed, or removed? What external hardware is connected? Do you have a broadband Internet connection? What d-i media did you use? Where did you get it? Did you verify the checksum of the download and/or media?

Thanks David, but as I explained, Debian/Stable doesn't "just work". You need the second part of your condition, but it's hard to know if hardware is supported until you try it. And what doesn't work one week may work the next. I don't blame Debian in this case. It's clearly an nVidia problem. Normally I stay away from them when getting something for Linux, but I got a great Black Friday deal. That's why I even got a new laptop to begin with. Apart from the nVidia components, it seems to work fine. Added nothing - just removed the Windows partitions and installed Linux.

As I explained, I used the Debian netinst copied to a Ventoy USB. What was strange is that Stable has no problem installing (just problems running) but Testing seems to get hung up with the networking (when I tried a graphical install, it at least showed that was what it was doing. The text-based installer flashed something on the screen but never got around to doing more than the background colours - no text or progress bar - so I wasn't sure what it was doing).

Also, the current testing alpha netinst iso doesn't seem to work with Ventoy, which meant I had to dd it to its own usb stick. And yes, I only download the files from debian.org. Have you tried finding the Debian Testing netinst checksums? You can find them for the weekly builds if you look hard enough, but not the ones for the Alpha release. I thought maybe the alpha release would be a little more stable than a weekly build.

I can confirm that the problem with FAT32 was fixed by a reboot. I don't reboot every day normally. The laptop is an ASUS FA506ICB. I'll be filing a bug report or three later. Yesterday I just needed to get it working again, but I wanted to document the pulling of hair and gnashing of teeth - I suspect I may have to do this again...

STFW "ASUS FA506ICB linux" I am not seeing any promising hits. I would re-install Windows, run Debian Stable in a VM, STFW periodically, wait for a post by someone who succeeds running a GNU/Linux distribution on that machine, and try again. David

You didn't read what I wrote. I've actually got it running quite well. The only thing is that it seems to need the proprietary nVidia drivers - the Nouveau ones won't cut it. Also, there seems to be an issue with sddm - gdm3 and lightdm both work. I can recommend the laptop as a reasonable candidate for Linux. Apart from the need for proprietary drivers, which is something I blame nVidia for, it seems to work perfectly.
Re: latest testing update broke my laptop
On 2022-12-17 23:10, Keith Bainbridge wrote: On 17 December 2022 9:00:49 pm UTC, Gary Dale wrote: On 2022-12-17 14:39, David Christensen wrote: On 12/17/22 04:44, Gary Dale wrote: On 2022-12-16 21:29, Gary Dale wrote:

My laptop no longer boots thanks to the latest update. It stops after I select a normal boot - it goes to the text-mode console and displays an error message about: [ 0.717939] ACPI BIOS Error (bug). If I go into recovery mode, I don't get that error, but then it stops after a message about the nouveau driver. I never get to a command prompt.

I can boot from System Rescue CD. I get the same BIOS error message but then it continues on as if it wasn't important. I tried updating the BIOS but that did nothing to resolve the problem. I did a reinstall and the problem survives.

The problem actually started earlier in the day, when I did the apt full-upgrade. It updated the nvidia drivers so it wanted a reboot. When I rebooted, it refused to start sddm. It just sat there. I rebooted into recovery mode and changed to lightdm, which did the same thing. Gdm3 actually switched into a graphics mode before hanging. I purged the nvidia drivers and that was when the message cropped up.

I tried booting from System Rescue CD then switching into a bash shell on my / partition, but lost my DNS so I couldn't (re)install the nouveau drivers (didn't want to touch the nvidia ones again). I did try updating initramfs, in case there was some nvidia stuff hanging around, but it didn't help.

That led to me reinstalling. I copied the Bookworm netinst to my Ventoy USB stick, but it wouldn't boot, so I went back to Bullseye - which installed but wouldn't bring up a GUI. Booted to recovery mode, brought up the network and upgraded to Bookworm. That is where I am now - with the error message appearing after I leave the boot menu. This is basically a clean install - just done in two parts. My laptop had been running fine since I got it and installed Debian.

I couldn't get the Bookworm alpha install to work even when dd'd directly to a USB stick. However, I was able to get to a recovery mode from the Bullseye install on Ventoy. From there I added the nVidia drivers and that got me past the error message. I was able to eventually get to a recovery session from the installation on the laptop. Sddm simply refused to work, while gdm3 only seems to give me a Gnome desktop. After installing lightdm, I was able to get back to a Plasma desktop.

Along the way, I found that my (Debian/Bookworm) workstation won't read USB sticks formatted with FAT32! I'm hoping a reboot later will fix that. Anyway, sddm seems to have some real problems with nVidia drivers. My laptop on the other hand seems to need them even though non-Bookworm distros don't.

If you want a GNU/Linux distribution that "just works", one possibility is Debian Stable and "supported hardware". The former is easy -- download a d-i ISO. The latter can be anywhere from trivial to impossible to determine a priori; the practical answer is install and find out.

What is the manufacturer, model, and part number of your computer? What options does it have? What components have you added, changed, or removed? What external hardware is connected? Do you have a broadband Internet connection? What d-i media did you use? Where did you get it? Did you verify the checksum of the download and/or media? David

Thanks David, but as I explained, Debian/Stable doesn't "just work". You need the second part of your condition, but it's hard to know if hardware is supported until you try it. And what doesn't work one week may work the next. I don't blame Debian in this case. It's clearly an nVidia problem. Normally I stay away from them when getting something for Linux, but I got a great Black Friday deal. That's why I even got a new laptop to begin with. Apart from the nVidia components, it seems to work fine. Added nothing - just removed the Windows partitions and installed Linux.

As I explained, I used the Debian netinst copied to a Ventoy USB. What was strange is that Stable has no problem installing (just problems running) but Testing seems to get hung up with the networking (when I tried a graphical install, it at least showed that was what it was doing. The text-based installer flashed something on the screen but never got around to doing more than the background colours - no text or progress bar - so I wasn't sure what it was doing). Also, the current testing alpha netinst iso doesn't seem to work with Ventoy, which meant I had to dd it to its own usb stick. And yes, I only download the files from debian.org. Have you tried finding the Debian Testing netinst checksums? You can find them for the weekly builds if you look hard enough, but not the ones for the Alpha release. I thought maybe the alpha release would be a little more stable than a weekly build. I can confirm that the problem with FAT32 was fixed by a reboot.
Re: latest testing update broke my laptop
On 2022-12-17 14:58, Charles Curley wrote: On Sat, 17 Dec 2022 11:39:51 -0800 David Christensen wrote: … the practical answer is install and find out. There are other ways besides "install[ing] and find[ing] out". https://linux-hardware.org is a very useful tool. And one should consider contributing as well. "apt show hwinfo". Another way is to stick with known reliable vendors and product lines. I've had excellent results with IBM/Lenovo products, except for some dicey results with one Yoga laptop. For IBM/Lenovo computers, there is the ThinkWiki. https://www.thinkwiki.org/wiki/ThinkWiki A good bit of advice is: avoid bleeding-edge hardware unless you want to help Linux support it. And prices are lower for older or factory-refurbished hardware. I've been hearing some negative things about Lenovo for several years now, although I agree that IBM was a good bet. But even back then it wasn't always a slam dunk. I normally stick with AMD hardware, but in this case ASUS decided to pair AMD with nVidia. I'd planned to just use the Nouveau drivers, but that turned out to be a non-starter. :(
Re: latest testing update broke my laptop
On 2022-12-17 14:39, David Christensen wrote: On 12/17/22 04:44, Gary Dale wrote: On 2022-12-16 21:29, Gary Dale wrote: [...] If you want a GNU/Linux distribution that "just works", one possibility is Debian Stable and "supported hardware". The former is easy -- download a d-i ISO. The latter can be anywhere from trivial to impossible to determine a priori; the practical answer is install and find out. What is the manufacturer, model, and part number of your computer? What options does it have? What components have you added, changed, or removed? What external hardware is connected? Do you have a broadband Internet connection? What d-i media did you use? Where did you get it? Did you verify the checksum of the download and/or media? David Thanks David, but as I explained, Debian/Stable doesn't "just work". You need the second part of your condition, but it's hard to know if hardware is supported until you try it. And what doesn't work one week may work the next. I don't blame Debian in this case. It's clearly an nVidia problem. Normally I stay away from them when getting something for Linux, but I got a great Black Friday deal. That's why I even got a new laptop to begin with. Apart from the nVidia components, it seems to work fine. Added nothing - just removed the Windows partitions and installed Linux. As I explained, I used Debian netinst copied to a Ventoy USB. What was strange is that Stable has no problem installing (just problems running) but Testing seems to get hung up with the networking (when I tried a graphical install, it at least showed that was what it was doing. The text-based installer flashed something on the screen but never got around to doing more than the background colours - no text or progress bar - so I wasn't sure what it was doing). Also the current testing alpha netinst iso doesn't seem to work with Ventoy, which meant I had to dd it to its own USB stick. And yes, I only download the files from debian.org. Have you tried finding the Debian Testing netinst checksums? You can find them for the weekly builds if you look hard enough but not the ones for the Alpha release. I thought maybe the alpha release would be a little more stable than a weekly build. I can confirm that the problem with FAT32 was fixed by a reboot. I don't reboot every day normally, The la
Re: latest testing update broke my laptop
On 2022-12-16 21:29, Gary Dale wrote: [...] I couldn't get the Bookworm alpha install to work even when dd'd directly to a USB stick. However I was able to get to a recovery mode from the Bullseye install on Ventoy. From there I added the nVidia drivers and that got me past the error message. I was able to eventually get to a recovery session from the installation on the laptop. Sddm simply refused to work while gdm3 only seems to give me a Gnome desktop. After installing lightdm, I was able to get back to a Plasma desktop. Along the way, I found that my (Debian/Bookworm) workstation won't read USB sticks formatted with FAT32! I'm hoping a reboot later will fix that. Anyway, sddm seems to have some real problems with nVidia drivers. My laptop on the other hand seems to need them even though non-Bookworm distros don't.
latest testing update broke my laptop
My laptop no longer boots thanks to the latest update. It stops after I select a normal boot - it goes to the text mode console and displays an error message about: [ 0.717939] ACPI BIOS Error (bug). If I go into recovery mode, I don't get that error but then it stops after a message about the nouveau driver. I never get to a command prompt. I can boot from System Rescue CD. I get the same BIOS error message but then it continues on as if it wasn't important. I tried updating the BIOS but that did nothing to resolve the problem. I did a reinstall and the problem survives. The problem actually started earlier in the day, when I did the apt full-upgrade. It updated the nvidia drivers so it wanted a reboot. When I rebooted, it refused to start sddm. It just sat there. I rebooted into recovery mode and changed to lightdm, which did the same thing. Gdm3 actually switched into a graphics mode before hanging. I purged the nvidia drivers and that was when the message cropped up. I tried booting from System Rescue CD then switching into a bash shell on my / partition but lost my DNS so I couldn't (re)install the nouveau drivers (didn't want to touch the nvidia ones again). I did try updating initramfs, in case there was some nvidia stuff hanging around, but it didn't help. That led to me reinstalling. I copied the Bookworm netinst to my Ventoy USB stick, but it wouldn't boot so I went back to Bullseye - which installed but wouldn't bring up a GUI. Booted to recovery mode, brought up the network and upgraded to Bookworm. That is where I am now - with the error message appearing after I leave the boot menu. This is basically a clean install - just done in two parts. My laptop had been running fine since I got it and installed Debian.
Re: ASUS Laptops
On 2022-11-27 02:15, Georgi Naplatanov wrote: On 11/27/22 06:50, Gary Dale wrote: [...] Hi Gary, please attach the output of the dmesg and lspci commands. Kind regards Georgi The list server is rejecting my replies - probably because they are too large. However, it turns out I just didn't know how to connect to the wifi - the driver was there. And I found out how to turn off the keyboard lights (fn + down arrow until they go off) from a question elsewhere that asked something similar. I think there are likely still a few things I need to clean up, but they lack urgency right now.
ASUS Laptops
I've acquired an ASUS laptop (FA506IC-DS71-CA) and installed Debian on it. I started with Debian/Bullseye but had some problems starting a GUI (using sddm) so quickly did an apt full-upgrade to Bookworm. After installing the Realtek and Misc firmware, I'm able to boot into a Plasma5 desktop and things run as expected. I've got a couple of immediate annoyances however: 1) no wifi - the adapter may be by Mediatek; 2) the keys are constantly backlit with rotating colours. I'm fine with the Nouveau drivers. Video speed seems adequate since I'm not really into gaming. This just seemed like a decent laptop for the price. Hoping someone can point me to something (or give me some pointers) on how to get this set up better. The two "annoyances" really get to me. Other things I can probably live with for a while.
Re: grep replacement using sed is behaving oddly
On 2022-10-21 15:14, David Wright wrote: On Fri 21 Oct 2022 at 14:15:01 (-0400), Greg Wooledge wrote: On Fri, Oct 21, 2022 at 08:01:00PM +0200, to...@tuxteam.de wrote: On Fri, Oct 21, 2022 at 01:21:44PM -0400, Gary Dale wrote: [...] My command is: sed -i -s 's/\s*\//g' history.html Unfortunately, the replacement doesn't remove the line but rather leaves me with: <;"> This looks as if the <> in the regexp were interpreted as left and right word boundaries (but that would only be the case if you'd given the -E (or -r) option). Try explicitly adding the --posix option, perhaps... Gary is using non-POSIX syntax (specifically the \s), so that's not going to help unless he first changes his regular expression to be standard. The whitespace is tricky. I pasted the email into emacs, and I see that there are NO-BREAK SPACEs at the start, and one after "hr". Who knows whether they're really in the OP's files, or just put there by their MUA. I think you might be on to something with the \< and \> here. I can see absolutely no reason why Gary put backslashes in front of spaces and angle brackets here. I'm guessing the reason is guessing. The backslashes in front of the spaces are probably just noise, and can be ignored. The \< and \> on the other hand might be interpreted as something special, the same way \s is (because this is GNU sed, which loves to do nonstandard things).

unicorn:~$ echo 'abc xyz' | sed 's/<.*>//'
abc xyz
unicorn:~$ echo 'abc xyz' | sed 's/\<.*\>//'

unicorn:~$

So... yeah, \< and/or \> clearly have some special meaning to GNU sed. Good luck figuring out what that is. Word boundaries, as tomas said. The .*\> can be seen to have worked, as matching stopped after the end of the word "rem", leaving the punctuation behind.
For Gary's actual problem, simply removing the backslashes where they're not wanted would be a good start. Actually learning sed could be step 2. The man/info pages leave a lot to be desired. A table with columns showing the code, which invocations support it, and its effect (e.g. '\s', whether it's honoured under -e, --posix or -E, and that it matches all whitespace except NO-BREAK SPACE) might really help. As it is, unless you're looking at a real book, you get a table like: '\s' Matches whitespace characters (spaces and tabs). Newlines embedded in the pattern/hold spaces will also match: '\S' Matches non-whitespace characters. '\<' Matches the beginning of a word. '\>' Matches the end of a word. but it's next to impossible to keep track of whether you're in a section that's speaking POSIX, GNU, or some mid-20th century tradition. I feel obliged at this point to mention that parsing HTML with regular expressions is a fool's errand, and that sed should not be the tool of choice here. Nor should grep, nor any other RE-based tool. This goes triple when one doesn't even know the correct syntax for their RE. https://stackoverflow.com/q/1732348 To be fair, I'm not sure whether the OP is really trying to parse HTML, or just remove some similar strings that they see as redundant. Cheers, David. Thanks. This command sed -i '//d' *.html did the trick. I've gotten into the habit of escaping special characters rather than memorizing the full list of which ones need to be escaped. I do most of my editing in Kate but use sed from time to time when making the same change to all the files in a web site, as was the case here. Obviously I wasn't aware of the special meaning of \< and \> in sed... Thanks again.
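The word-boundary behaviour discussed above can be demonstrated directly with GNU sed; a minimal sketch (the sample strings here are mine, not from Gary's files):

```shell
# GNU sed treats \< and \> as (non-POSIX) word-boundary anchors, so
# \<.*\> matches from the start of the first word to the end of the last:
printf 'abc xyz\n' | sed 's/\<.*\>//'    # prints an empty line

# Unescaped < and > are ordinary characters and only match literally:
printf 'abc xyz\n' | sed 's/<.*>//'      # prints: abc xyz

# Deleting whole lines that match a pattern, as in the eventual fix,
# sidesteps most escaping questions:
printf 'keep\n<hr style="x">\nkeep\n' | sed '/<hr /d'
```

Note that \< and \> are GNU extensions; under sed --posix or on non-GNU seds they may behave differently.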
grep replacement using sed is behaving oddly
I'm hoping someone can tell me what I'm doing wrong. I have a line in a lot of HTML files that I'd like to remove. The line is: I'm testing the sed command to remove it on just one file. When it works, I'll run it against *.html. My command is: sed -i -s 's/\s*\//g' history.html Unfortunately, the replacement doesn't remove the line but rather leaves me with: <;"> The leading spaces, angle brackets and some punctuation (but not all) are left behind. Moreover, if I try to remove the EOL by adding \n after the \>, the replace fails (and yes, the closing bracket is the last character on the line). I get the same behaviour under both Bullseye and Bookworm, so I assume this is how sed is supposed to operate. However, when I try the same regex in Kate, it works. Is this a long-standing bug in sed or am I doing something wrong? Thanks
Re: How do I install PHPMailer on a Debian/Bullseye Apache2 server -- resolved
On 2022-09-07 17:49, Gareth Evans wrote: On 7 Sep 2022, at 22:24, Gareth Evans wrote: On 7 Sep 2022, at 22:01, Gareth Evans wrote: On 7 Sep 2022, at 21:27, Gareth Evans wrote: On 7 Sep 2022, at 17:55, Gary Dale wrote: I'm using a web hosting company that pretty much limits me to using PHPMailer on their servers for sending complex e-mails (e.g. with attachments). That is working. [...] However when I try it with my local Apache2 server, it doesn't work. [...] However the test .php file that works on the hosting company's server doesn't do anything on my local server. I try to load the page and get nothing - not even an error message. Hi Gary, If you expect output of some sort from the script, try putting ini_set("display_errors",1); ini_set("error_reporting",E_ALL); at the top of the script in question - this should show any errors on the page rather than having to look in /var/log/syslog, though that might be worthwhile too. Also $ php -l file.php (Lower case L after dash) Do you receive any bounce message in (iirc... or something like...) /var/spool/mail/username ? Not knowing whether you expect output, it could be that the script is working but the remote server rejects mail from non-routable (ie LAN) IPs - I don't think a bounce message is necessarily guaranteed though. Any difference sending with PHPMailer via SMTP ? Best wishes, Gareth Having said that re bounce messages, I can't remember if the error report concerned, if any, may actually be found in syslog - it's a while since I've seen one, as I gave up testing email from local machines for this reason. Possibly getting ahead of myself here, but it might also be worth mentioning greylisting as an issue to be aware of - particularly for email originating from non-routable addresses. "Mail from unrecognized servers is typically delayed by about 15 minutes, and could be delayed up to a few days for poorly configured sending systems."
https://en.m.wikipedia.org/wiki/Greylisting_(email) Also, does the (working) server run the same OS as your local machine? You may need to correct the location of sendmail (or whatever) in the local phpmailer's config. Owner / group / permissions for the phpmailer dir + files? I should have thought of that one first - it's a common PHP "white screen of death" cause. Thanks Gareth. I didn't notice your answer because it was in a different thread. I've been working directly on the remote host all afternoon until I got the php script doing (mostly) what I want. When I followed your suggestion, it showed errors relating to file permissions on a file that I wanted to write to. I changed them and now it's working the same as the remote host. However, the writing to the file wasn't part of the script in the morning. It was something I added after I got the script working on the remote host and wanted to add features. The basic script was just a test script with everything hardcoded. The one that is now working is actually driven by an HTML form, so just the SMTP login is hardcoded. The writing to a file was a late addition to create a cumulative .csv log of submissions. I just needed to allow the file to be written to in order to get this later script to work. Anyway, it looks like the issue has gone away. Thanks!
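The permissions fix that resolved this thread can be sketched as shell commands (the site path and file name below are illustrative assumptions; www-data is the stock Apache user on Debian):

```shell
# Create the CSV log the form script appends to, owned by the web-server
# user so PHP can open it for writing; mode 640 keeps other users out.
touch /var/www/mysite/submissions.csv
chown www-data:www-data /var/www/mysite/submissions.csv
chmod 640 /var/www/mysite/submissions.csv

# Quick check from the shell that the web user really can write to it:
sudo -u www-data sh -c 'echo test >> /var/www/mysite/submissions.csv'
```

Giving only the log file (not the whole site directory) write access limits the damage if the script is ever abused.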
How do I install PHPMailer on a Debian/Bullseye Apache2 server
I'm using a web hosting company that pretty much limits me to using PHPMailer on their servers for sending complex e-mails (e.g. with attachments). That is working. To get it to work, I used the zip archive from PHPMailer's github page and unzipped it into the site's public directory - so that there is a PHPMailer folder in the same folder that has the index.html file. I uploaded a test .php file and it sends mail. However when I try it with my local Apache2 server, it doesn't work. I've got PHP working because I have another PHP script that executes perfectly. However the test .php file that works on the hosting company's server doesn't do anything on my local server. I try to load the page and get nothing - not even an error message. I tried placing a copy of the PHPMailer files in the /usr/share/php folder and uncommented the include_path line in the /etc/php/8.1/apache2/php.ini file to ensure that the folder was in the path, but still no luck (various installation howtos say to make sure the PHPMailer folder is in the php include path). Does anyone have any ideas on how I can get this to work? I know that PHPMailer at the best of times is very finicky (e.g. it creates the same problem (no output) if I have a space in the (optional) second argument for the ->setFrom or ->addAddress methods), but in this case, the file is identical between the hosting company's working example and my local test. Nothing shows up in the various PHP logs nor in the syslog. Thanks.
how to change device number in RAID array
I'm running Debian/Bookworm on an AMD64 system. I recently added a second drive to it for use in a RAID1 array. However I'm now getting regular messages about "SparesMissing event on...". cat /proc/mdstat shows the problem: active raid1 sda1[0] sdb1[2] - the newly added drive is showing up as [2] instead of [1]. Apparently mdadm thinks there should be another drive sitting around as a spare. Is there a simple way to fix this? Thanks.
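The thread doesn't record a resolution; as a hedged sketch, one common cause of recurring SparesMissing events (an assumption here, since the mdadm.conf isn't shown) is a stale spares= count in the array's configuration line:

```shell
# Check how the array is declared; a "spares=1" token here tells mdadm
# to expect a hot spare, which triggers SparesMissing when none exists.
grep ^ARRAY /etc/mdadm/mdadm.conf

# Compare against what the kernel has actually assembled:
mdadm --detail --scan

# If mdadm.conf carries a stale spares=1, replace its ARRAY line with the
# --detail --scan output (or delete just the spares=1 token), then refresh
# the copy of the config embedded in the initramfs:
update-initramfs -u
```

The [2] vs [1] device number itself is generally cosmetic; it's the spare expectation in the config, not the slot number, that produces the warning mails.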
Re: Debian license issue
On 2022-06-01 09:14, Lidiya Pecherskaya wrote: Hello, Is it possible to get information on the type of license under which the Debian software is available? Thanks in advance. Most of the packages are distributed under a free license - usually GPL or MIT but sometimes others. Packages under the "non-free" section usually aren't - which is often because the source is not available.
Re: wtf just happened to my local staging web server
On 2022-05-05 02:37, Erwan David wrote: On 04/05/2022 at 19:01, Gary Dale wrote: [...] I tried restarting the apache2 service and got an error so I tried stopping it then starting it again - same error: root@TheLibrarian:~# service apache2 start It looks like you started it, not restart, thus the running apache is not killed [...] May 04 12:16:55 TheLibrarian systemd[1]: Starting The Apache HTTP Server... May 04 12:16:55 TheLibrarian apachectl[7935]: (98)Address already in use: AH00072: make_sock: could not bind to addre> May 04 12:16:55 TheLibrarian apachectl[7935]: (98)Address already in use: AH00072: make_sock: could not bind to addre> This is consistent with former apache still running at that time, and using the wanted ports. If you read my original e-mail, I tried both restarting it and starting it. Also, in my original e-mail, I identified that Apache2 wasn't running by running ps aux output through grep. Again, this confirms the systemctl message - as Greg Wooledge mentions in his reply to you. Greg Wooledge showed me how to diagnose the problem by identifying the process (nginx in this case) that was grabbing the ports Apache2 needed. Claudio Kuenzler also provided an alternative method of diagnosing the problem.
My problem is I'm not all that conversant in tracking down network issues, such as ports. I didn't know that lsof even had a port option. And I'm still getting used to systemctl / journalctl. Anyway, thanks for your attempt to help.
Re: wtf just happened to my local staging web server
On 2022-05-05 03:57, Stephan Seitz wrote: On Thu, May 05, 2022 at 09:30:42 +0200, Klaus Singvogel wrote: I think there are more. Yes, I only know wtf as „what the fuck”. Stephan Actually, it's "what the frack" - a nod to the Battlestar Galactica TV/movie franchise, which uses frack as the expletive of choice. These days "frack" also refers to a gas extraction process with terrible environmental consequences, thereby justifying its use as an expletive in the broader world. Fracking is derived from fracturing, the breaking of something, which is appropriate in the case of my staging server suddenly being broken.
Re: wtf just happened to my local staging web server
On 2022-05-04 13:21, Greg Wooledge wrote: On Wed, May 04, 2022 at 01:01:58PM -0400, Gary Dale wrote: May 04 12:16:55 TheLibrarian systemd[1]: Starting The Apache HTTP Server... May 04 12:16:55 TheLibrarian apachectl[7935]: (98)Address already in use: AH00072: make_sock: could not bind to addre> May 04 12:16:55 TheLibrarian apachectl[7935]: (98)Address already in use: AH00072: make_sock: could not bind to addre> Something else is using the ports that Apache wants to use. Assuming those ports are 80 and 443, you could use commands like this to see what's using them: lsof -i :80 lsof -i :443 If your configuration is telling Apache to use some other ports, then substitute your port numbers. Thanks. Somehow nginx got installed. Wondering if jitsi or nextcloud did that because I certainly didn't (doesn't seem likely though because they both failed). I guess I should pay more attention to the packages that get installed when I do apt full-upgrade... Usually I just scan to see if there is anything that I should reboot over.
wtf just happened to my local staging web server
My Apache2 file/print/web server is running Bullseye. I had to restart it yesterday evening to replace a disk drive. Otherwise the last reboot was a couple of weeks ago - I recall some updates to Jitsi - but I don't think there were any updates since then. Today I find that I can't get through to any of the sites on the server. Instead I get the Apache2 default web page. This happens with both Firefox and Chromium. This happens for all the staging sites (that I access as ".loc" through entries in my hosts file). My jitsi and nextcloud servers simply report failure to get to the server. I verified that the site files (-available and -enabled) haven't changed in months. I tried restarting the apache2 service and got an error so I tried stopping it then starting it again - same error:

root@TheLibrarian:~# service apache2 start
Job for apache2.service failed because the control process exited with error code.
See "systemctl status apache2.service" and "journalctl -xe" for details.
root@TheLibrarian:~# systemctl status apache2.service
● apache2.service - The Apache HTTP Server
     Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Wed 2022-05-04 12:16:55 EDT; 5s ago
       Docs: https://httpd.apache.org/docs/2.4/
    Process: 7932 ExecStart=/usr/sbin/apachectl start (code=exited, status=1/FAILURE)
        CPU: 29ms

May 04 12:16:55 TheLibrarian systemd[1]: Starting The Apache HTTP Server...
May 04 12:16:55 TheLibrarian apachectl[7935]: (98)Address already in use: AH00072: make_sock: could not bind to addre>
May 04 12:16:55 TheLibrarian apachectl[7935]: (98)Address already in use: AH00072: make_sock: could not bind to addre>
May 04 12:16:55 TheLibrarian apachectl[7935]: no listening sockets available, shutting down
May 04 12:16:55 TheLibrarian apachectl[7935]: AH00015: Unable to open logs
May 04 12:16:55 TheLibrarian apachectl[7932]: Action 'start' failed.
May 04 12:16:55 TheLibrarian apachectl[7932]: The Apache error log may have more information.
May 04 12:16:55 TheLibrarian systemd[1]: apache2.service: Control process exited, code=exited, status=1/FAILURE
May 04 12:16:55 TheLibrarian systemd[1]: apache2.service: Failed with result 'exit-code'.
May 04 12:16:55 TheLibrarian systemd[1]: Failed to start The Apache HTTP Server.

also

root@TheLibrarian:/var/log# journalctl -xe
░░ The job identifier is 4527.
May 04 12:50:49 TheLibrarian apachectl[8232]: (98)Address already in use: AH00072: make_sock: could not bind to addre>
May 04 12:50:49 TheLibrarian apachectl[8232]: (98)Address already in use: AH00072: make_sock: could not bind to addre>
May 04 12:50:49 TheLibrarian apachectl[8232]: no listening sockets available, shutting down
May 04 12:50:49 TheLibrarian apachectl[8232]: AH00015: Unable to open logs
May 04 12:50:49 TheLibrarian apachectl[8229]: Action 'start' failed.
May 04 12:50:49 TheLibrarian apachectl[8229]: The Apache error log may have more information.
May 04 12:50:49 TheLibrarian systemd[1]: apache2.service: Control process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ An ExecStart= process belonging to unit apache2.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 1.
May 04 12:50:49 TheLibrarian systemd[1]: apache2.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit apache2.service has entered the 'failed' state with result 'exit-code'.
May 04 12:50:49 TheLibrarian systemd[1]: Failed to start The Apache HTTP Server.
░░ Subject: A start job for unit apache2.service has failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit apache2.service has finished with a failure.
░░
░░ The job identifier is 4527 and the job result is failed.

As I said, I do get the default Apache2 page saying "It works" but that appears to be optimistic. ps aux | grep apache2 fails to show the service, which confirms the systemctl message that it isn't running. There is nothing in /var/log/apache2/error.log. The .1 log ends yesterday but only contains complaints about php7. Systemctl does report (above) "unable to open logs" so that would explain the lack of additional messages. The apache2 directory and its files are root:adm with only root having write privileges. I tried giving the adm group write privileges but that didn't work. Turns out the group is empty. Adding www-data to it didn't work either. Any ideas on how to track down the cause of the failure(s)? Thanks.
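For reference, the diagnosis that resolved this thread can be sketched as a short sequence (ports 80/443 and the nginx outcome come from the replies above; the `ss` invocation from iproute2 is an alternative I'm adding):

```shell
# Find which process already holds the ports Apache wants to bind:
lsof -i :80 -i :443
ss -ltnp '( sport = :80 or sport = :443 )'

# In this thread the culprit turned out to be an unexpectedly installed
# nginx; stop and disable it, then bring Apache back up:
systemctl disable --now nginx
systemctl restart apache2
```

The "(98)Address already in use" from make_sock is the giveaway that another listener owns the socket, not that Apache itself is misconfigured.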
Jitsi-meet fails intermittently
Tried to send this message a month ago but couldn't get it accepted by the list server due to the way it authenticates. Trying it again now that I've switched hosts to one that claims it accepts the list server's null-message test. I've also had to do the uninstall, reboot and reinstall one more time since then. This problem is becoming annoying because I expect stuff running on stable to work reliably. I've been running a jitsi-meet server for about 16 months now on Debian/Stable. Last Christmas it stopped working. The server was running and accepting connections but participants couldn't see or hear each other. I fixed the problem by doing apt remove jitsi-meet && apt autoremove, then rebooting and reinstalling. The removal and reinstall seem necessary, as neither a simple reinstall nor a reboot fixes the problem. It worked fine from then until last night, when I tested it prior to another meeting (I'd used it monthly between the previous failure and last night). I got the same symptoms, so I applied the same fix and it started working again. The jicofo.log and other jitsi logs show bridge failures, which explain the symptoms but don't point to a cause. I note that jitsi-meet depends on at least: jitsi-meet-prosody jitsi-meet-web jitsi-meet-web-config jitsi-videobridge2 lua-bitop lua-event lua-expat lua-filesystem lua-sec lua-socket lua5.2 prosody. Other packages are probably involved as well, but these are the ones that are unique to jitsi-meet on my server. Since the autoremove appears to be a necessary part of the fix, I suspect that something is getting corrupted in a dependency. I think it is likely in the Debian package management (I do full-upgrades and autoremoves roughly every week but otherwise rarely touch the server software). However, this is as far as my problem-tracking skills take me. Any ideas?
Re: is it possible to send e-mail via Yahoo's smtp servers using Thunderbird (78.13.0)?
On 2021-09-10 20:51, Gary Dale wrote: On 2021-09-10 18:32, jeremy ardley wrote: On 11/09/2021 6:26 am, Jeremy Ardley wrote: On 11/9/21 5:39 am, Gary Dale wrote: Does anyone have Thunderbird and Yahoo working together? I have it running on thunderbird. Both imap and smtp use ssl/tls and oauth2 smtp uses port 465 while imap uses port 993 I have some memory getting oauth2 to work may have been a bit of effort. This may be relevant. You need to remove any stored passwords after you apply oauth2 to an account. https://www.supertechcrew.com/thunderbird-oauth2-gmail/ Jeremy I've tried that already but I'll give it another go. I actually removed all my stored Thunderbird passwords for Yahoo before I recreated the account in Thunderbird. I may have spoken too soon. Got this message "An error occurred while sending mail. The mail server responded: Request failed; Mailbox unavailable. Please check the message and try again." when I tried to send an e-mail from the account. The only change I made to the settings was that I set a reply-to... The message went into my sent folder after I cleared the error (for the second time), but it isn't showing up in the online (yahoo.com) sent folder, although an earlier test message I sent is there. When I removed the reply-to header, the message went out without problems and showed up in the online sent folder as well. It looks like at least part of my problem was the use of a reply-to address. Found it. I also needed to remove the smtp server for Yahoo. When Thunderbird set up the account, it simply reused the existing smtp.mail.yahoo.com server definition. Even when I set that up correctly, it wasn't working. However when I deleted it along with the pop account, it recreated it from scratch and now it seems to work. I note that I now have 3 saved passwords for the account: 1) mailbox:// ... for the pop server 2) oauth:// 3) smtp:// ... The passwords are all the same - the very long computer-generated one.
Re: is it possible to send e-mail via Yahoo's smtp servers using Thunderbird (78.13.0)?
On 2021-09-10 20:51, Gary Dale wrote: On 2021-09-10 18:32, jeremy ardley wrote: On 11/09/2021 6:26 am, Jeremy Ardley wrote: On 11/9/21 5:39 am, Gary Dale wrote: Does anyone have Thunderbird and Yahoo working together? I have it running on thunderbird. Both imap and smtp use ssl/tls and oauth2 smtp uses port 465 while imap uses port 993 I have some memory getting oauth2 to work may have been a bit of effort. This may be relevant. You need to remove any stored passwords after you apply oauth2 to an account. https://www.supertechcrew.com/thunderbird-oauth2-gmail/ Jeremy I've tried that already but I'll give it another go. I actually removed all my stored Thunderbird passwords for Yahoo before I recreated the account in Thunderbird. Found it. I also needed to remove the smtp server for Yahoo. When Thunderbird set up the account, it simply reused the existing smtp.mail.yahoo.com server definition. Even when I set that up correctly, it wasn't working. However when I deleted it along with the pop account, it recreated it from scratch and now it seems to work. I note that I now have 3 saved passwords for the account: 1) mailbox:// ... for the pop server 2) oauth:// 3) smtp:// ... The passwords are all the same - the very long computer generated one.
Re: is it possible to send e-mail via Yahoo's smtp servers using Thunderbird (78.13.0)?
On 2021-09-10 18:11, Alexander V. Makartsev wrote: On 11.09.2021 02:39, Gary Dale wrote: I've got a Yahoo mail account (among others) that I use for a particular purpose. However it's been a while since I've been able to send e-mail from it using Yahoo's smtp servers. Instead I've been sending e-mail via another smtp server so the "From" address doesn't match the login domain. Gmail apparently now considers that to be sufficient reason to bounce the e-mail so I've been trying to get Thunderbird to use the Yahoo server to send mail for this account. So far I haven't been able to come up with any combination of settings, including removing the account and recreating it in Thunderbird, that allow the mail to go through. The messages I get either complain about the password or tell me "An error occurred while sending mail. The mail server responded: Request failed; Mailbox unavailable. Please check the message and try again." Does anyone have Thunderbird and Yahoo working together? Pretty sure, it's the issue with Yahoo¹ ², nothing to do with Thunderbird. Yahoo is following the same path as GMail, forcing their users to use web browsers as mail clients. In case of GMail you have to use same generated "app-password" for both IMAP and SMTP services. I guess the same principle will be for Yahoo. [1] https://help.yahoo.com/kb/account/temporary-access-insecure-sln27791.html [2] https://help.yahoo.com/kb/account/generate-manage-third-party-passwords-sln15241.html -- Yes, I've tried the app passwords without success. As near as I can tell, they are just a randomly generated secure password that is linked to a particular application as well as the account.
Re: is it possible to send e-mail via Yahoo's smtp servers using Thunderbird (78.13.0)?
On 2021-09-10 18:26, Jeremy Ardley wrote: On 11/9/21 5:39 am, Gary Dale wrote: I've got a Yahoo mail account (among others) that I use for a particular purpose. However it's been a while since I've been able to send e-mail from it using Yahoo's smtp servers. Instead I've been sending e-mail via another smtp server so the "From" address doesn't match the login domain. Gmail apparently now considers that to be sufficient reason to bounce the e-mail so I've been trying to get Thunderbird to use the Yahoo server to send mail for this account. So far I haven't been able to come up with any combination of settings, including removing the account and recreating it in Thunderbird, that allow the mail to go through. The messages I get either complain about the password or tell me "An error occurred while sending mail. The mail server responded: Request failed; Mailbox unavailable. Please check the message and try again." Does anyone have Thunderbird and Yahoo working together? I have it running on thunderbird. Both imap and smtp use ssl/tls and oauth2 smtp uses port 465 while imap uses port 993 I have some memory getting oauth2 to work may have been a bit of effort. I had a similar problem with a Rogers account that stopped working. It's one of the reasons I stopped using Rogers... They turned their e-mail over to Yahoo and Yahoo doesn't really support e-mail.
Re: is it possible to send e-mail via Yahoo's smtp servers using Thunderbird (78.13.0)?
On 2021-09-10 18:32, jeremy ardley wrote: On 11/09/2021 6:26 am, Jeremy Ardley wrote: On 11/9/21 5:39 am, Gary Dale wrote: Does anyone have Thunderbird and Yahoo working together? I have it running on thunderbird. Both imap and smtp use ssl/tls and oauth2 smtp uses port 465 while imap uses port 993 I have some memory getting oauth2 to work may have been a bit of effort. This may be relevant. You need to remove any stored passwords after you apply oauth2 to an account. https://www.supertechcrew.com/thunderbird-oauth2-gmail/ Jeremy I've tried that already but I'll give it another go. I actually removed all my stored Thunderbird passwords for Yahoo before I recreated the account in Thunderbird.
is it possible to send e-mail via Yahoo's smtp servers using Thunderbird (78.13.0)?
I've got a Yahoo mail account (among others) that I use for a particular purpose. However it's been a while since I've been able to send e-mail from it using Yahoo's smtp servers. Instead I've been sending e-mail via another smtp server so the "From" address doesn't match the login domain. Gmail apparently now considers that to be sufficient reason to bounce the e-mail so I've been trying to get Thunderbird to use the Yahoo server to send mail for this account. So far I haven't been able to come up with any combination of settings, including removing the account and recreating it in Thunderbird, that allow the mail to go through. The messages I get either complain about the password or tell me "An error occurred while sending mail. The mail server responded: Request failed; Mailbox unavailable. Please check the message and try again." Does anyone have Thunderbird and Yahoo working together?
Re: Dragon Player doesn't
On 2021-09-07 21:21, piorunz wrote: On 07/09/2021 18:04, Gary Dale wrote: I don't use Dragon Player normally but I was looking at it just now. When I right-click on a video file, select play with then choose Dragon Player to play it, it launches Dragon Player but doesn't play the file. When I select Play File from within Dragon Player, I can select a video to play, but again it doesn't play it. Has anyone else experienced this problem and/or come across a fix for it? I don't use this program, but if you run it via terminal, what does it say? Good point. I get a lot of error messages that say "WARNING: bool Phonon::FactoryPrivate::createBackend() phonon backend plugin could not be loaded". There are other messages that also mention phonon, like this sequence:
WARNING: Phonon::createPath: Cannot connect Phonon::MediaObject ( no objectName ) to Phonon::VideoWidget ( no objectName ).
WARNING: bool Phonon::FactoryPrivate::createBackend() phonon backend plugin could not be loaded
WARNING: Phonon::createPath: Cannot connect Phonon::MediaObject ( no objectName ) to Phonon::AudioOutput ( no objectName ).
Along with numerous other messages, including several similar to:
kf.coreaddons: no metadata found in "/usr/lib/x86_64-linux-gnu/qt5/plugins/kf5/kio/metainfo.so" "Failed to extract plugin meta data from '/usr/lib/x86_64-linux-gnu/qt5/plugins/kf5/kio/metainfo.so'"
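The "phonon backend plugin could not be loaded" warnings usually mean no Phonon backend package is installed. A quick check, sketched below; the plugin path is the usual Debian amd64 Qt5 location (an assumption, adjust for your architecture):

```shell
# look for an installed Phonon Qt5 backend plugin; suggest a package if none found
# (path is an assumption for Debian amd64)
BACKEND_DIR=/usr/lib/x86_64-linux-gnu/qt5/plugins/phonon4qt5_backend
ls "$BACKEND_DIR" 2>/dev/null | grep . \
  || echo "no Phonon backend found; try: apt install phonon4qt5-backend-vlc"
```

phonon4qt5-backend-gstreamer is the other backend Debian packages; either should satisfy Dragon Player.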
Dragon Player doesn't
I don't use Dragon Player normally but I was looking at it just now. When I right-click on a video file, select play with then choose Dragon Player to play it, it launches Dragon Player but doesn't play the file. When I select Play File from within Dragon Player, I can select a video to play, but again it doesn't play it. Has anyone else experienced this problem and/or come across a fix for it?
Re: upgrade to testing
On 2021-06-15 13:26, Wil wrote: How do I upgrade from Debian stable to Debian testing? It's not really an upgrade; it's more a switch in priorities. However, to answer your question directly, as root do either
sed -i -s 's/buster/bullseye/g' /etc/apt/sources.list
or
sed -i -s 's/stable/testing/g' /etc/apt/sources.list
depending on how your sources.list file refers to the current stable distribution. After that, do
apt update
apt full-upgrade
apt autoremove
then reboot.
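For anyone nervous about editing /etc/apt/sources.list in place, the substitution can be rehearsed on a sample line first; a sketch (the temp path and sample line are arbitrary, and in practice you would start from a copy of your real sources.list):

```shell
# rehearse the release rename on a throwaway file before touching the real one
printf 'deb http://deb.debian.org/debian buster main contrib\n' > /tmp/sources.list.new
sed -i 's/buster/bullseye/g' /tmp/sources.list.new
cat /tmp/sources.list.new   # -> deb http://deb.debian.org/debian bullseye main contrib
```

Once the output looks right, apply the same sed command to /etc/apt/sources.list itself.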
Re: mdadm and whole disk array members
On 2021-03-26 02:59, deloptes wrote: Gary Dale wrote: Perhaps it only works with virgin drives? Mine had been removed from another machine where they had been part of a different array. I zeroed the superblocks before creating the new array. I doubt that - IMO should be either the BIOS or the drives, or a combination of both It's a Gigabyte ROG STRIX B550-F board. The drives are Seagate Ironwolf and WD Red.
Re: mdadm and whole disk array members
On 2021-03-26 00:04, Felix Miata wrote: Gary Dale composed on 2021-03-25 21:19 (UTC-0400): From what I read in looking for solutions, the problem is common. I even tried one workaround of zapping any existing partition table on the drives. Nothing worked. "Zapped" exactly how? GPT tables are on both ends of the disks. Wiping the first sectors won't get the job done. sgdisk --zap wipes the partition tables.
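Felix's point about GPT living at both ends of the disk can be demonstrated; wipefs (util-linux) clears every signature it knows about, including the backup GPT header at the end. A sketch on a throwaway image file; on a real array member you would target /dev/sdX, which is destructive:

```shell
# build a small image, give it a GPT (primary + backup header), then wipe both
truncate -s 10M /tmp/fakedisk.img
echo 'label: gpt' | sfdisk /tmp/fakedisk.img >/dev/null
wipefs --all /tmp/fakedisk.img   # erases the GPT at the start AND the backup at the end
wipefs /tmp/fakedisk.img         # prints nothing once no signatures remain
```

sgdisk --zap (or --zap-all) does the equivalent for GPT structures specifically; wipefs also catches filesystem and RAID superblock signatures.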
Re: mdadm and whole disk array members
On 2021-03-25 21:14, Gary Dale wrote: On 2021-03-23 08:44, deloptes wrote: deloptes wrote: A friend told me that he found out it is a problem in some BIOSes with UEFI that can not handle a boot of md UEFI partition. Perhaps it also depends how they handle the raid of a whole disk. Are you trying to boot from that raid? Forgot to ask what is in your /etc/mdadm/mdadm.conf and the IDs of the disks IMO the problem is that if it is not a partition the mdadm can not assemble as it is looking for a partition, but not sure how grub or whatever handle it when you boot off the drive. The drives use normal /dev/sd* ids. They are not being booted from. I had updated /etc/mdadm/mdadm.conf with the new information for the array after creating it. When I did exactly the same thing after creating a single FD00 partition on the drives, everything worked. When I say "the same thing", I mean creating the array from the partitions instead of the whole drives.
Re: mdadm and whole disk array members
On 2021-03-23 11:45, Reco wrote: Hi. On Tue, Mar 23, 2021 at 01:44:23PM +0100, deloptes wrote: IMO the problem is that if it is not a partition the mdadm can not assemble as it is looking for a partition, My mdadm.conf says: # by default (built-in), scan all partitions (/proc/partitions) and all # containers for MD superblocks. alternatively, specify devices to scan, # using wildcards if desired. #DEVICE partitions containers And /proc/partitions always had whole disks, their partitions, lvm volumes and whatever else can be presented as a block device by the kernel. So mdadm is perfectly capable of assembling whole disk arrays, and it does so for me for more than 10 years. but not sure how grub or whatever handle it when you boot off the drive. GRUB2 can definitely boot from mdadm's RAID1 as it has an appropriate module for this specific task. Installing GRUB2 on mdadm array made of whole disks is tricky though. UEFI itself, on the other hand - definitely can not, unless you resort to some dirty hacks. After all, UEFI requires so-called "EFI System Partition" aka ESP. Reco From what I read in looking for solutions, the problem is common. I even tried one workaround of zapping any existing partition table on the drives. Nothing worked. Perhaps it only works with virgin drives? Mine had been removed from another machine where they had been part of a different array. I zeroed the superblocks before creating the new array.
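Reco's point about the built-in default can be made concrete. A minimal /etc/mdadm/mdadm.conf sketch for a whole-disk array; the ARRAY line below is a placeholder, and the real one should be generated with `mdadm --detail --scan`:

```
# /etc/mdadm/mdadm.conf (sketch)
# default built-in behaviour: scan everything in /proc/partitions,
# which includes whole disks as well as partitions
DEVICE partitions containers

# placeholder UUID - regenerate for your array with: mdadm --detail --scan
ARRAY /dev/md0 metadata=1.2 name=host:0 UUID=00000000:00000000:00000000:00000000
```

After editing mdadm.conf, `update-initramfs -u` is normally needed so the initramfs copy matches.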
Re: mdadm and whole disk array members
On 2021-03-23 08:44, deloptes wrote: deloptes wrote: A friend told me that he found out it is a problem in some BIOSes with UEFI that can not handle a boot of md UEFI partition. Perhaps it also depends how they handle the raid of a whole disk. Are you trying to boot from that raid? Forgot to ask what is in your /etc/mdadm/mdadm.conf and the IDs of the disks IMO the problem is that if it is not a partition the mdadm can not assemble as it is looking for a partition, but not sure how grub or whatever handle it when you boot off the drive. The drives use normal /dev/sd* ids. They are not being booted from. I had updated /etc/mdadm/mdadm.conf with the new information for the array after creating it. When I did exactly the same thing after creating a single FD00 partition on the drives, everything worked.
Re: mdadm and whole disk array members
On 2021-03-23 08:29, deloptes wrote: Gary Dale wrote: It's not just me but a lot of other people have been having the same problem. It's been reported many times as I discovered after trying to use whole disks. Moreover, the fixes that I'd used in the past don't seem to work reliably without partitions. A friend told me that he found out it is a problem in some BIOSes with UEFI that can not handle a boot of md UEFI partition. Perhaps it also depends how they handle the raid of a whole disk. Are you trying to boot from that raid? No.
Re: mdadm and whole disk array members
On 2021-03-22 18:49, Andy Smith wrote: Hi Gary, On Mon, Mar 22, 2021 at 06:20:56PM -0400, Gary Dale wrote: I suggest that, since it appears the developers can't get this work reliably, that the option to use the whole disk be removed and mdadm insist on using partitions. At the very least, mdadm --create should issue a warning that using a whole device instead of a partition may create problems. I've been using whole disks in mdadm arrays for more than 15 years across many many servers on Debian stable and have never experienced what you describe. There must be something else at play here. I suggest you post a detailed description of your problem to the linux-raid mailing list and hopefully someone can help debug it. https://raid.wiki.kernel.org/index.php/Linux_Raid#Mailing_list Cheers, Andy It's not just me but a lot of other people have been having the same problem. It's been reported many times as I discovered after trying to use whole disks. Moreover, the fixes that I'd used in the past don't seem to work reliably without partitions. There doesn't seem to be a downside to using partitions considering that partition tables have never taken up any significant amount of space. It was an interesting experiment / learning experience but I've decided that it's not worth going further so I'm going back to using disks with a single partition.
mdadm and whole disk array members
I've spent a few days experimenting with using whole disks in a RAID 5 array and have come to the conclusion that it simply doesn't work well enough to be used. The main problem I had was that mdadm seems to have problems assembling the array when it uses entire disks instead of partitions. Each time I restarted my computer, I would have to recreate the array. This causes the boot process to halt because /etc/mdadm/mdadm.conf and /etc/fstab both identify an array that should be started and mounted. Fortunately the create command was still in the bash history so I got the create parameters right. However, after I added another disk to the array, the original create command became obsolete. Plus the kernel assigned different drive letters to the drives once I plugged in a new drive, so I couldn't simply add the new drive to the create command. Fortunately I still had a decade-old script that would cycle through all combinations until it found one that would result in a mountable array (I had the script due to some problems I was having back in 2010). Unfortunately it didn't find any it could mount no matter what the order of the drives (which included one "missing"). I've found many other people complaining about similar issues when using whole disks to create mdadm RAID arrays. Some of these complaints go back many years, so this isn't new. I suggest that, since it appears the developers can't get this to work reliably, the option to use the whole disk be removed and mdadm insist on using partitions. At the very least, mdadm --create should issue a warning that using a whole device instead of a partition may create problems.
Re: Jitsi keeps failing when I want to use it [RESOLVED]
On 2021-03-06 15:54, Gary Dale wrote: On 2021-03-06 15:44, Gary Dale wrote: My phone lost its wifi so it only connects via the mobile data. In any event, the connection is through a public IP address. I've noticed one thing that puzzles me a little (after I tried removing and reinstalling Jitsi) and that is that the Debian/Buster package installs jitsi-videobridge while the jitsi install guide for Debian/Ubuntu talks about jitsi-videobridge2. On 2021-03-06 15:16, Dan Ritter wrote: Gary Dale wrote: I'm running a Jitis-meet server on a Debian/Buster AMD64 system. It usually works fine. However every time I want to actually host a meeting, it decides to act up. Yesterday evening I tested Jitsi with a meeting between my desktop system and my Android phone. Everything worked properly. I could see the video feed from both devices on both devices. Since it was working, I notified people of the meeting address. Today, I tried it again pre-meeting only to find the usual issue cropped up. While both devices can connect to the meeting, I can only see the video feed from the local device (I can see my desktop feed on the desktop browser and the Android feed on the phone). I tried creating a new meeting but I get the same problem. I've tried reinstalling Jitsi-meet (and also Jicofo and Jitsi-videobridge2), which worked in the past, but still no luck. Nor does rebooting the server help. Is your phone connected via wifi to your LAN or to a mobile data service? Try it both ways. I suspect a STUN/TURN NAT issue. -dsr- BTW: I didn't change anything on the jitsi server between it working and it not working. I did ssh to it in the morning to check for apt updates but there weren't any. OK, it turns out the problem was with my phone. I have no idea why it worked yesterday but not today but when I tried it with another device, things worked.
Re: Jitsi keeps failing when I want to use it
On 2021-03-06 15:44, Gary Dale wrote: My phone lost its wifi so it only connects via the mobile data. In any event, the connection is through a public IP address. I've noticed one thing that puzzles me a little (after I tried removing and reinstalling Jitsi) and that is that the Debian/Buster package installs jitsi-videobridge while the jitsi install guide for Debian/Ubuntu talks about jitsi-videobridge2. On 2021-03-06 15:16, Dan Ritter wrote: Gary Dale wrote: I'm running a Jitis-meet server on a Debian/Buster AMD64 system. It usually works fine. However every time I want to actually host a meeting, it decides to act up. Yesterday evening I tested Jitsi with a meeting between my desktop system and my Android phone. Everything worked properly. I could see the video feed from both devices on both devices. Since it was working, I notified people of the meeting address. Today, I tried it again pre-meeting only to find the usual issue cropped up. While both devices can connect to the meeting, I can only see the video feed from the local device (I can see my desktop feed on the desktop browser and the Android feed on the phone). I tried creating a new meeting but I get the same problem. I've tried reinstalling Jitsi-meet (and also Jicofo and Jitsi-videobridge2), which worked in the past, but still no luck. Nor does rebooting the server help. Is your phone connected via wifi to your LAN or to a mobile data service? Try it both ways. I suspect a STUN/TURN NAT issue. -dsr- BTW: I didn't change anything on the jitsi server between it working and it not working. I did ssh to it in the morning to check for apt updates but there weren't any.
Re: Jitsi keeps failing when I want to use it
On 2021-03-06 15:36, Henning Follmann wrote: On Sat, Mar 06, 2021 at 01:38:07PM -0500, Gary Dale wrote: I'm running a Jitis-meet server on a Debian/Buster AMD64 system. It usually works fine. However every time I want to actually host a meeting, it decides to act up. Yesterday evening I tested Jitsi with a meeting between my desktop system and my Android phone. Everything worked properly. I could see the video feed from both devices on both devices. Since it was working, I notified people of the meeting address. Today, I tried it again pre-meeting only to find the usual issue cropped up. While both devices can connect to the meeting, I can only see the video feed from the local device (I can see my desktop feed on the desktop browser and the Android feed on the phone). I tried creating a new meeting but I get the same problem. I've tried reinstalling Jitsi-meet (and also Jicofo and Jitsi-videobridge2), which worked in the past, but still no luck. Nor does rebooting the server help. Any ideas anyone? As usual, no logs => it didn't happen! Without any information one could only guess. -H # tail -n 32 jvb.log 2021-03-06 15:40:29.305 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.11S. Sticky failure: false 2021-03-06 15:40:39.306 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.11S. Sticky failure: false 2021-03-06 15:40:49.305 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.1S. Sticky failure: false 2021-03-06 15:40:59.305 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.11S. Sticky failure: false 2021-03-06 15:41:09.285 INFO: [23] VideobridgeExpireThread.expire#140: Running expire() 2021-03-06 15:41:09.306 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.11S. Sticky failure: false 2021-03-06 15:41:19.306 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.1S. 
Sticky failure: false 2021-03-06 15:41:29.305 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.1S. Sticky failure: false 2021-03-06 15:41:39.305 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.11S. Sticky failure: false 2021-03-06 15:41:49.306 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.1S. Sticky failure: false 2021-03-06 15:41:59.305 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.1S. Sticky failure: false 2021-03-06 15:42:09.285 INFO: [23] VideobridgeExpireThread.expire#140: Running expire() 2021-03-06 15:42:09.305 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.11S. Sticky failure: false 2021-03-06 15:42:19.305 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.1S. Sticky failure: false 2021-03-06 15:42:29.305 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.1S. Sticky failure: false 2021-03-06 15:42:39.305 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.11S. Sticky failure: false 2021-03-06 15:42:49.305 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.1S. Sticky failure: false 2021-03-06 15:42:59.305 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.1S. Sticky failure: false 2021-03-06 15:43:09.285 INFO: [23] VideobridgeExpireThread.expire#140: Running expire() 2021-03-06 15:43:09.306 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.11S. Sticky failure: false 2021-03-06 15:43:19.305 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.11S. Sticky failure: false 2021-03-06 15:43:29.305 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.1S. Sticky failure: false 2021-03-06 15:43:39.305 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.1S. 
Sticky failure: false 2021-03-06 15:43:49.305 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.1S. Sticky failure: false 2021-03-06 15:43:59.306 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.1S. Sticky failure: false 2021-03-06 15:44:09.285 INFO: [23] VideobridgeExpireThread.expire#140: Running expire() 2021-03-06 15:44:09.306 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.1S. Sticky failure: false 2021-03-06 15:44:19.305 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.1S. Sticky failure: false 2021-03-06 15:44:29.306 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.11S. Sticky failure: false 2021-03-06 15:44:39.305 INFO: [24] HealthChecker.run#170: Performed a successful health check in PT0.11S. Sticky failure: false 2021-03-06 15:44:49.306 INFO: [24] HealthChecker.run#
Re: Jitsi keeps failing when I want to use it
On 2021-03-06 15:36, Henning Follmann wrote: On Sat, Mar 06, 2021 at 01:38:07PM -0500, Gary Dale wrote: I'm running a Jitis-meet server on a Debian/Buster AMD64 system. It usually works fine. However every time I want to actually host a meeting, it decides to act up. Yesterday evening I tested Jitsi with a meeting between my desktop system and my Android phone. Everything worked properly. I could see the video feed from both devices on both devices. Since it was working, I notified people of the meeting address. Today, I tried it again pre-meeting only to find the usual issue cropped up. While both devices can connect to the meeting, I can only see the video feed from the local device (I can see my desktop feed on the desktop browser and the Android feed on the phone). I tried creating a new meeting but I get the same problem. I've tried reinstalling Jitsi-meet (and also Jicofo and Jitsi-videobridge2), which worked in the past, but still no luck. Nor does rebooting the server help. Any ideas anyone? As usual, no logs => it didn't happen! Without any information one could only guess. 
-H # tail -n 32 jicofo.log
Jicofo 2021-03-06 15:38:06.901 INFO: [32] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Added participant jid=shafeenasbirthdaypa...@conference.meet.rahim-dale.org/c397d521, bridge=jvbbrew...@internal.auth.meet.rahim-dale.org/3b2b133c-6e81-4d54-9563-48bc57c16876
Jicofo 2021-03-06 15:38:06.902 INFO: [32] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Region info, conference=22383: [[null, null]]
Jicofo 2021-03-06 15:38:06.906 INFO: [136] org.jitsi.jicofo.discovery.DiscoveryUtil.log() Doing feature discovery for shafeenasbirthdaypa...@conference.meet.rahim-dale.org/c397d521
Jicofo 2021-03-06 15:38:06.906 INFO: [32] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Added participant jid=shafeenasbirthdaypa...@conference.meet.rahim-dale.org/911a4557, bridge=jvbbrew...@internal.auth.meet.rahim-dale.org/3b2b133c-6e81-4d54-9563-48bc57c16876
Jicofo 2021-03-06 15:38:06.907 INFO: [32] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Region info, conference=22383: [[null, null, null]]
Jicofo 2021-03-06 15:38:06.908 INFO: [137] org.jitsi.jicofo.discovery.DiscoveryUtil.log() Doing feature discovery for shafeenasbirthdaypa...@conference.meet.rahim-dale.org/911a4557
Jicofo 2021-03-06 15:38:07.192 INFO: [136] org.jitsi.jicofo.discovery.DiscoveryUtil.log() Successfully discovered features for shafeenasbirthdaypa...@conference.meet.rahim-dale.org/c397d521 in 285
Jicofo 2021-03-06 15:38:07.215 INFO: [136] org.jitsi.jicofo.AbstractChannelAllocator.log() Using jvbbrew...@internal.auth.meet.rahim-dale.org/3b2b133c-6e81-4d54-9563-48bc57c16876 to allocate channels for: Participant[shafeenasbirthdaypa...@conference.meet.rahim-dale.org/c397d521]@423117319
Jicofo 2021-03-06 15:38:08.151 INFO: [137] org.jitsi.jicofo.discovery.DiscoveryUtil.log() Successfully discovered features for shafeenasbirthdaypa...@conference.meet.rahim-dale.org/911a4557 in 1243
Jicofo 2021-03-06 15:38:08.155 INFO: [137] org.jitsi.jicofo.AbstractChannelAllocator.log() Using jvbbrew...@internal.auth.meet.rahim-dale.org/3b2b133c-6e81-4d54-9563-48bc57c16876 to allocate channels for: Participant[shafeenasbirthdaypa...@conference.meet.rahim-dale.org/911a4557]@2134948675
Jicofo 2021-03-06 15:38:08.344 INFO: [136] org.jitsi.jicofo.ParticipantChannelAllocator.log() Sending session-initiate to: shafeenasbirthdaypa...@conference.meet.rahim-dale.org/c397d521
Jicofo 2021-03-06 15:38:08.366 INFO: [137] org.jitsi.jicofo.ParticipantChannelAllocator.log() Sending session-initiate to: shafeenasbirthdaypa...@conference.meet.rahim-dale.org/911a4557
Jicofo 2021-03-06 15:38:08.600 INFO: [32] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Got session-accept from: shafeenasbirthdaypa...@conference.meet.rahim-dale.org/c397d521
Jicofo 2021-03-06 15:38:08.614 INFO: [32] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Received session-accept from shafeenasbirthdaypa...@conference.meet.rahim-dale.org/c397d521 with accepted sources:Sources{ video: [ssrc=2796851205 ssrc=3366031894 ssrc=2837324841 ] audio: [ssrc=682686965 ] }@887521678
Jicofo 2021-03-06 15:38:08.619 WARNING: [32] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() No jingle session yet for shafeenasbirthdaypa...@conference.meet.rahim-dale.org/911a4557
Jicofo 2021-03-06 15:38:09.449 INFO: [32] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Got session-accept from: shafeenasbirthdaypa...@conference.meet.rahim-dale.org/911a4557
Jicofo 2021-03-06 15:38:09.458 INFO: [32] org.jitsi.jicofo.JitsiMeetConferenceImpl.log() Received session-accept from shafeenasbirthdaypa...@conference.meet.rahim-dale.org/911a4557 with accepted sources:Sources{ video: [ssrc=682624931 ssrc=616139617 ssrc=4223476417 ssrc=2608752843 ssrc=22331310 ssrc=4245847487 ] audio: [ssrc=4124579793 ] }@703862439
Jicofo 2021-03-06 15:38:41.565 INFO: [32] org.jitsi.jicofo.ChatRoomRoleAndPresence.log() C
Re: Jitsi keeps failing when I want to use it
My phone lost its wifi so it only connects via mobile data. In any event, the connection is through a public IP address. I've noticed one thing that puzzles me a little (after I tried removing and reinstalling Jitsi) and that is that the Debian/Buster package installs jitsi-videobridge while the jitsi install guide for Debian/Ubuntu talks about jitsi-videobridge2. On 2021-03-06 15:16, Dan Ritter wrote: Gary Dale wrote: I'm running a Jitsi-meet server on a Debian/Buster AMD64 system. It usually works fine. However every time I want to actually host a meeting, it decides to act up. Yesterday evening I tested Jitsi with a meeting between my desktop system and my Android phone. Everything worked properly. I could see the video feed from both devices on both devices. Since it was working, I notified people of the meeting address. Today, I tried it again pre-meeting only to find the usual issue cropped up. While both devices can connect to the meeting, I can only see the video feed from the local device (I can see my desktop feed on the desktop browser and the Android feed on the phone). I tried creating a new meeting but I get the same problem. I've tried reinstalling Jitsi-meet (and also Jicofo and Jitsi-videobridge2), which worked in the past, but still no luck. Nor does rebooting the server help. Is your phone connected via wifi to your LAN or to a mobile data service? Try it both ways. I suspect a STUN/TURN NAT issue. -dsr-
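[The STUN/TURN NAT suspicion above is a common cause of the "each client sees only its own video" symptom: media flows client-to-bridge, and if the videobridge only advertises its private address, remote clients never receive the other feeds. A hedged sketch of the usual fix for a bridge behind NAT, using the ice4j harvester properties that the jitsi-videobridge of that era read; the addresses below are placeholders, not the poster's actual configuration:]

```
# /etc/jitsi/videobridge/sip-communicator.properties (fragment; example addresses)
# the bridge's own LAN address:
org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=192.168.1.10
# the public address clients should be told to send media to:
org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=203.0.113.5
```

[UDP port 10000 (and TCP 4443, if fallback is enabled) must also be forwarded to the bridge, then jitsi-videobridge restarted.]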
Jitsi keeps failing when I want to use it
I'm running a Jitsi-meet server on a Debian/Buster AMD64 system. It usually works fine. However every time I want to actually host a meeting, it decides to act up. Yesterday evening I tested Jitsi with a meeting between my desktop system and my Android phone. Everything worked properly. I could see the video feed from both devices on both devices. Since it was working, I notified people of the meeting address. Today, I tried it again pre-meeting only to find the usual issue cropped up. While both devices can connect to the meeting, I can only see the video feed from the local device (I can see my desktop feed on the desktop browser and the Android feed on the phone). I tried creating a new meeting but I get the same problem. I've tried reinstalling Jitsi-meet (and also Jicofo and Jitsi-videobridge2), which worked in the past, but still no luck. Nor does rebooting the server help. Any ideas anyone?
Re: php-pear etc. on an Apache2 server [resolved]
On 2021-02-19 09:17, Gary Dale wrote: On 2021-02-18 09:06, Gary Dale wrote: On 2021-02-16 17:56, Gary Dale wrote: I'm running Buster on my local server which, among other things, I use for developing web sites before copying them onto a public host. I've recently been getting into a little php coding because there seems to be a lot of sample code out there for things I want to do. Usually they work right away when I try running them after having installed libapache2-mod-php. Right now I'm trying to get a script working that is actually fairly small and seems straightforward. It displays a form that allows you to send an e-mail with an attachment. I actually have some similar scripts working locally that do pretty much the same thing, but I'm trying to get this one to work because the host I use for one site that needs this type of form has broken the script I had been using (I also didn't like it because it seemed overly complicated and under-featured). This script uses the Pear libraries to handle the heavy lifting, which seems like a reasonable design decision. I installed the php-pear package and also php-mail-mime. Unfortunately, the script failed to work. It was uploading the file but failing to send the e-mail. I was able to find the line that was failing but it puzzles me. The line is $message = new Mail_mime(); which should be working. I ran the (frequently recommended) php-info page; nothing about mime there. I'd expect that is something php handles, though. I got the script to send an e-mail by removing all the mime parts and just using the php mail() function. However that's not really useful. I need the mime bits to add the attachment. Anyway, it looks like I need to do something more (besides restarting Apache2) to get php to use the php-mail-mime library but I'm not sure what. All the Debian information I've found just says "install the package php- and it will work". Can anyone help me here? The issue turned out to be that the script had an incorrect include.
It asked for Mail_Mime/mime.php when the actual include should have been Mail/mime.php. I suspect that the php package names may have changed since the author wrote the script and they never bothered to update it. Further to above, when I went to move the script to my host, I discovered that the cPanel php-pear installer used the package names from the original script. Their Mail_Mime package was actually called that while their Mail package doesn't include mime.php. My host's cPanel php-pear packages appear to be relatively recent as some of the documentation has dates from last year. Perhaps the different package names are a Debian thing? To make my confusion complete, even though the package is called Mail_Mime, to access the mime.php procedure, I need to point the include to the Mail directory, just like I had to locally. I'm sure that there is a logical reason for this somewhere... So my initial assumption was close to correct. Where the stuff installs isn't actually related to the package name.
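[When a PEAR package name and its install path disagree like this, the quickest check is to look at what the package actually put on disk and match the include to that. A minimal sketch; PHP_DIR here is a stand-in for PEAR's include root, which on Debian is typically /usr/share/php:]

```shell
# Locate mime.php under PEAR's include root. The directory it lives in
# (Mail/), not the package name (Mail_Mime), is what the PHP include must use.
PHP_DIR=${PHP_DIR:-/usr/share/php}
find "$PHP_DIR" -name 'mime.php' 2>/dev/null
```

[Where the pear CLI is available, `pear list-files Mail_Mime` shows the same package-to-path mapping directly.]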
Re: php-pear etc. on an Apache2 server [resolved]
On 2021-02-18 09:06, Gary Dale wrote: On 2021-02-16 17:56, Gary Dale wrote: I'm running Buster on my local server which, among other things, I use for developing web sites before copying them onto a public host. I've recently been getting into a little php coding because there seems to be a lot of sample code out there for things I want to do. Usually they work right away when I try running them after having installed libapache2-mod-php. Right now I'm trying to get a script working that is actually fairly small and seems straightforward. It displays a form that allows you to send an e-mail with an attachment. I actually have some similar scripts working locally that do pretty much the same thing, but I'm trying to get this one to work because the host I use for one site that needs this type of form has broken the script I had been using (I also didn't like it because it seemed overly complicated and under-featured). This script uses the Pear libraries to handle the heavy lifting, which seems like a reasonable design decision. I installed the php-pear package and also php-mail-mime. Unfortunately, the script failed to work. It was uploading the file but failing to send the e-mail. I was able to find the line that was failing but it puzzles me. The line is $message = new Mail_mime(); which should be working. I ran the (frequently recommended) php-info page; nothing about mime there. I'd expect that is something php handles, though. I got the script to send an e-mail by removing all the mime parts and just using the php mail() function. However that's not really useful. I need the mime bits to add the attachment. Anyway, it looks like I need to do something more (besides restarting Apache2) to get php to use the php-mail-mime library but I'm not sure what. All the Debian information I've found just says "install the package php- and it will work". Can anyone help me here? The issue turned out to be that the script had an incorrect include.
It asked for Mail_Mime/mime.php when the actual include should have been Mail/mime.php. I suspect that the php package names may have changed since the author wrote the script and they never bothered to update it. Further to above, when I went to move the script to my host, I discovered that the cPanel php-pear installer used the package names from the original script. Their Mail_Mime package was actually called that while their Mail package doesn't include mime.php. My host's cPanel php-pear packages appear to be relatively recent as some of the documentation has dates from last year. Perhaps the different package names are a Debian thing?
Re: Need Support for Dell XPS 15 7590, Hard Drive Make Micron 2300 NVMe 1024 GB
On 2021-02-18 12:25, Steve McIntyre wrote: g...@extremeground.com wrote: On 2021-02-18 09:48, Steve McIntyre wrote: zcor...@yahoo.com wrote: Just received a new laptop, and both Debian Stable, and Debian testing would not detect the hard drive. Any possibility this can be added to the to-do-list for developers? I bought a Dell XPS 15 7590 (2019) edition. Check in the BIOS settings - the drive may be configured in "RAID" mode. If so, switching to "AHCI" will most likely solve your problem. Good idea if it was a desktop and if the drive wasn't NVME (although 2020 saw some desktops with dual NVME slots). Seems a long shot for a laptop (even a used one as this might be). I wish you were right, but even in the space year 2020 it's still a thing! For an example, see: https://www.dell.com/community/XPS/Pros-Cons-AHCI-vs-Raid-On-XPS13-9300-NVMe/td-p/7636984 Interesting. Thanks for the information.
Re: rsync to NAS for backup
On 2021-02-18 12:22, to...@tuxteam.de wrote: On Thu, Feb 18, 2021 at 06:59:03PM +0200, Teemu Likonen wrote: * 2021-02-18 11:13:25-0500, Gary Dale wrote: rsync is a quick & dirty backup tactic but it's got limitations. 1) files may stay around forever in the backup even if you've deleted them from your main computer because you don't need them. 2) you only have one copy of a file and that only lasts until the next rsync. This limits your ability to restore from a backup before it is overwritten. rsync is not a good substitute for backups. No, it's not. It is a fantastic tool for backups :-) Rsync is a great backup program with the "--link-dest" option. Here is the idea in simplified code: [...] Absolutely. Time travel! Actually, I've implemented this at a customer's place. They were delighted. Where rsync shows some weaknesses is on big, fat files (think videos, one or several GB). Really huge directories (tens to hundreds of TB) were once a challenge, too, but I hear that they refined the scanning part in the meantime. No direct experience, though. And, oh, Gary: if you want to delete files which disappeared in the source, check out the --delete option. But this time-staggered backup with --link-dest is really great. Cheers While you can twist any tool to fit a task, real backup programs don't need to be twisted and do a better job. For example, backup retention policy is intuitive and easy to set. Some backup programs even factor out common blocks for de-duplication, which can save a lot of space. Hard-links only do that if the file name is the same. And when you need to restore a file, backup programs usually let you see when the files changed, then let you choose which version to restore. As for the delete option, it makes the rsync script even more complicated. A backup program will simply expire the file at the end of the retention period.
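[For readers who haven't seen the "time travel" trick being discussed: the --link-dest idea can be sketched in a few lines of shell. Unchanged files are hard-linked against the previous snapshot, so every snapshot looks like a complete copy but only changed files consume new space; --delete handles files that disappeared from the source. This is a minimal illustration, not a production backup script; paths and snapshot naming are up to you:]

```shell
# Time-staggered rsync backup using --link-dest.
# Usage: snapshot SRC_DIR DEST_DIR SNAPSHOT_NAME
# Snapshot names must sort chronologically (e.g. date +%Y-%m-%d).
snapshot() {
    src=$1 dest=$2 name=$3
    mkdir -p "$dest"
    # Most recent previous snapshot, if any (lexicographic = chronological).
    last=$(ls -1 "$dest" | tail -n 1)
    if [ -n "$last" ] && [ "$last" != "$name" ]; then
        # Hard-link files unchanged since the last snapshot instead of
        # copying them; --delete drops files removed from the source.
        rsync -a --delete --link-dest="$dest/$last" "$src/" "$dest/$name/"
    else
        # First snapshot: plain archive copy.
        rsync -a "$src/" "$dest/$name/"
    fi
}
```

[Run daily from cron as, say, `snapshot /home /mnt/nas/backups "$(date +%Y-%m-%d)"`; retention then reduces to deleting old snapshot directories.]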
Re: rsync to NAS for backup
On 2021-02-18 10:57, mick crane wrote: On 2021-02-15 12:39, mick crane wrote: On 2021-02-13 19:20, David Christensen wrote: On 2021-02-13 01:27, mick crane wrote: I made a mistake and instead of getting a PC for backup I got a NAS. I'm struggling to get to grips with it. If I rsync from PC to NAS, the NAS changes the owner/group of files to me/users which is probably no good for backing up. There's that problem then another that it won't let me login as root. I asked on the Synology forum but not getting a lot of joy. https://community.synology.com/enu/forum/1/post/141137 Anybody used these things can advise ? What is the model of the Synology NAS? What options -- CPU, memory, disks, bays, interfaces, PSU, whatever? Support page URL? Reading the forum post, it sounds like you damaged the sudoers file. The fix would appear to be doing a Mode 2 reset per Synology's instructions: https://www.synology.com/en-global/knowledgebase/DSM/tutorial/General_Setup/How_to_reset_my_Synology_NAS Once the NAS has been reset, figure out how to meet your needs within the framework provided by Synology. Follow the User Guide. Follow the Admin Guide. Do not mess around "under the hood" with a terminal and sudo. Make Synology earn your money. But if you want complete control, buy or build an x86_64/amd64 server, install Debian, and have at it. Thanks for the advice, folks. It was indeed user error: being in a rush and with blurred eyesight I mistook "%" for "#". We are making progress. Appears that to retain permissions you need root at both ends of rsync. Have keys working with ssh for users to the NAS (not helped by default permissions for .ssh files being wrong) and can su to root, so now need to get ssh working with keys with no passphrase for root and all should be good. Further to this, if it helps anybody: you can start sshd with the -d switch and at the same time run the client with the -vv switch, then you can see where it is falling down. Having telnet available helps: if you break sshd_config you can still telnet in and mend it.
mick rsync is a quick & dirty backup tactic but it's got limitations. 1) files may stay around forever in the backup even if you've deleted them from your main computer because you don't need them. 2) you only have one copy of a file and that only lasts until the next rsync. This limits your ability to restore from a backup before it is overwritten. Using a real backup program, which can run on your main computer to back up to the NAS, lets you define a retention policy so files no longer needed can be purged while you have multiple backups of files you are currently working on. rsync is not a good substitute for backups.
Re: Need Support for Dell XPS 15 7590, Hard Drive Make Micron 2300 NVMe 1024 GB
On 2021-02-18 09:48, Steve McIntyre wrote: zcor...@yahoo.com wrote: Just received a new laptop, and both Debian Stable, and Debian testing would not detect the hard drive. Any possibility this can be added to the to-do-list for developers? I bought a Dell XPS 15 7590 (2019) edition. Check in the BIOS settings - the drive may be configured in "RAID" mode. If so, switching to "AHCI" will most likely solve your problem. Good idea if it was a desktop and if the drive wasn't NVME (although 2020 saw some desktops with dual NVME slots). Seems a long shot for a laptop (even a used one as this might be).
Re: php-pear etc. on an Apache2 server [resolved]
On 2021-02-16 17:56, Gary Dale wrote: I'm running Buster on my local server which, among other things, I use for developing web sites before copying them onto a public host. I've recently been getting into a little php coding because there seems to be a lot of sample code out there for things I want to do. Usually they work right away when I try running them after having installed libapache2-mod-php. Right now I'm trying to get a script working that is actually fairly small and seems straightforward. It displays a form that allows you to send an e-mail with an attachment. I actually have some similar scripts working locally that do pretty much the same thing, but I'm trying to get this one to work because the host I use for one site that needs this type of form has broken the script I had been using (I also didn't like it because it seemed overly complicated and under-featured). This script uses the Pear libraries to handle the heavy lifting, which seems like a reasonable design decision. I installed the php-pear package and also php-mail-mime. Unfortunately, the script failed to work. It was uploading the file but failing to send the e-mail. I was able to find the line that was failing but it puzzles me. The line is $message = new Mail_mime(); which should be working. I ran the (frequently recommended) php-info page; nothing about mime there. I'd expect that is something php handles, though. I got the script to send an e-mail by removing all the mime parts and just using the php mail() function. However that's not really useful. I need the mime bits to add the attachment. Anyway, it looks like I need to do something more (besides restarting Apache2) to get php to use the php-mail-mime library but I'm not sure what. All the Debian information I've found just says "install the package php- and it will work". Can anyone help me here? The issue turned out to be that the script had an incorrect include. It asked for Mail_Mime/mime.php when the actual include should have been Mail/mime.php.
I suspect that the php package names may have changed since the author wrote the script and they never bothered to update it.
Re: FileZilla / ftp / GnuTLS error connecting to sites with Testing/Bullseye
On 2021-02-17 04:53, Andrei POPESCU wrote: On Ma, 16 feb 21, 16:45:13, Gary Dale wrote: I hear you, but the issue is that if I revert to a previous version, then I have to hold it to stop the buggy version from clobbering it every day. And I have to monitor the Testing version for changes to see when a fix is potentially available so I can remove the hold. Not just me but every user who is experiencing the bug also has to do this. This is what 'aptitude forbid-version' is for. Kind regards, Andrei Thanks. I wasn't aware of that option.
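[For anyone else hitting this: forbid-version blocks exactly one version rather than freezing the package, so the next (hopefully fixed) upload installs normally with no hold to remember. A sketch of the difference; the version string below is an example, not the actual buggy one:]

```
# forbid the currently offered (buggy) candidate version:
$ sudo aptitude forbid-version filezilla
# or name a specific version explicitly:
$ sudo aptitude forbid-version filezilla=3.52.2-1
# contrast with a hold, which blocks ALL upgrades until removed:
$ sudo apt-mark hold filezilla
$ sudo apt-mark unhold filezilla
```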
Re: FileZilla / ftp / GnuTLS error connecting to sites with Testing/Bullseye
On 2021-02-16 17:02, Philip Wyett wrote: On Tue, 2021-02-16 at 16:45 -0500, Gary Dale wrote: On 2021-02-13 03:02, Andrei POPESCU wrote: On Vi, 12 feb 21, 17:00:41, Gary Dale wrote: Which is why I think it would be useful to have a way to rollback a package when you can't fix it quickly. That way you aren't asking all the users to do it themselves and track the bug status individually. When the maintainers think they have a fix, it can go through the normal process... Debian doesn't support downgrading of packages. When dpkg installs another version of a package (typically newer) it basically overwrites the existing version and runs the corresponding package scripts from the to-be-installed version. A newer package may introduce changes that the older package (scripts) can't deal with. In practice it does work in many cases, except for those where it doesn't. Fixing them would require a time machine ;) A roll-back, especially if automatic, could introduce more issues than it fixes. Someone(tm) has to determine on a case by case basis whether rolling back makes sense and the system administrator is in the best position to do so. In theory the package Maintainer could provide a general "hint" that system administrators could choose to ignore (at their own risk). Currently the infrastructure for this doesn't exist[1] and, besides, I'd rather have Maintainers focus on fixing the newer package instead. Volunteer time is precious! [1] it would need support in the Debian archive software and APT at a minimum. Besides, there is already an arguably safer (though hackish) way to achieve that by uploading a package with version+really.the.old.version instead. In this case the Maintainer can also take care to adjust the package scripts accordingly.
Random example found on my system:
$ rmadison fonts-font-awesome
fonts-font-awesome | 4.2.0~dfsg-1                      | oldoldstable     | source, all
fonts-font-awesome | 4.7.0~dfsg-1                      | oldstable        | source, all
fonts-font-awesome | 5.0.10+really4.7.0~dfsg-1         | stable           | source, all
fonts-font-awesome | 5.0.10+really4.7.0~dfsg-4~bpo10+1 | buster-backports | source, all
fonts-font-awesome | 5.0.10+really4.7.0~dfsg-4         | testing          | source, all
fonts-font-awesome | 5.0.10+really4.7.0~dfsg-4         | unstable         | source, all
Kind regards, Andrei I hear you, but the issue is that if I revert to a previous version, then I have to hold it to stop the buggy version from clobbering it every day. And I have to monitor the Testing version for changes to see when a fix is potentially available so I can remove the hold. Not just me but every user who is experiencing the bug also has to do this. There is a kludge for this if the buggy version didn't contain critical security fixes - re-release the previous version with a slightly higher version number than the buggy one (e.g. 3.7.0-5a). When the bug is (finally) fixed, give the fixed version a slightly higher number still (e.g. 3.7.0-5b). Again this would only be done where it appears that fixing the bug may take time (it's been over a month now). If I were to do the alternative - pull packages from Sid - I have no real indication if they fix it or introduce even worse problems. I can only assume that the reason a fix hasn't made it down through Sid yet is that it's not simple. My suggestion isn't to make more work for maintainers but rather to take the time pressure off them without leaving us testers to jump through hoops. Hi, What appears to be the fixed version is in sid (3.7.0-7). It has to pass in sid for 10 days before migration to testing, see below link.
https://tracker.debian.org/pkg/gnutls28 https://metadata.ftp-master.debian.org/changelogs//main/g/gnutls28/gnutls28_3.7.0-7_changelog My testing with filezilla shows all to be working once more, though testing has been limited. Regards Phil Confirmed. Seems to work. You need to install libnettle and libgnutls from Sid as well.
Re: networking.service fails
On 2021-02-17 08:28, Andrei POPESCU wrote: On Mi, 17 feb 21, 00:01:01, Gary Dale wrote: On 2021-02-16 19:44, Dmitry Katsubo wrote: Dear Debian community, I am puzzled with the following problem. When my Debian 10.8 starts, the unit "networking.service" is marked as failed with the following reason:
root@debian:~ # systemctl status networking.service
● networking.service - Raise network interfaces
   Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Tue 2021-02-16 08:56:16 CET; 5h 27min ago
     Docs: man:interfaces(5)
  Process: 691 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=1/FAILURE)
 Main PID: 691 (code=exited, status=1/FAILURE)
Debian/Buster is still using Network Manager not systemd to control the network so I think the networking.service shouldn't be used. Well, systemd as init is starting everything so that necessarily includes starting "the network", which in practice means starting whatever network management framework is in use[1]. The 'networking.service' service is part of ifupdown, Debian's default network management framework (Priority: important). Network Manager is Priority: optional and typically installed as a Depends/Recommends of Desktop Environments. [1] this is applicable even for systemd's own network management framework systemd-networkd, which is included in the 'systemd' Debian package, but not activated by default. Kind regards, Andrei Sorry, it was midnight when I replied. However the failure is likely still due to the interfaces misconfiguration - probably reporting a failure to raise a non-existent interface.
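[If a non-existent or sometimes-absent interface is indeed the cause, one common remedy is to declare it allow-hotplug instead of auto in /etc/network/interfaces: `ifup -a` at boot only raises auto interfaces, so a missing NIC no longer fails networking.service, while udev still brings the interface up whenever it appears. A sketch with a placeholder interface name:]

```
# /etc/network/interfaces (fragment; "enp3s0" is a hypothetical name)

# before: raised by "ifup -a" at boot; unit fails if the device is absent
#auto enp3s0
#iface enp3s0 inet dhcp

# after: raised by udev when the device appears; absence is not an error
allow-hotplug enp3s0
iface enp3s0 inet dhcp
```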