Re: failing HDD, ddrescue says remaining time is 7104d
On 8/31/22 15:35, David Wright wrote:
> On Wed 31 Aug 2022 at 14:02:19 (-0700), David Christensen wrote:
> > On 8/31/22 06:25, ppr wrote:
> > > I would appreciate advice from the community about a failing hard
> > > drive.
> > >
> > > When booting up, the computer complained about /dev/sdb, which is an
> > > ext4 HDD with data (not the computer's main disk). dmesg shows
> > > `AE_NOT_FOUND` and `failed command: READ FPDMA QUEUED` messages
> > > (full dmesg log at https://hastebin.com/raw/jebelileru).
> > >
> > > It has finally booted after trying unsuccessfully to start /dev/sdb.
> >
> > Comment out the /etc/crypttab and/or /etc/fstab entries for the
> > failed drive. When you mount the drive, mount it read only.
>
> I don't think it's wise to mount this disk at all, and certainly not
> before everything that can be rescued from it has been obtained and
> copied/archived.

First sentence -- You don't want the OS to access the drive on the next
boot.

Second sentence -- I should have prefaced that with "after ddrescue has
finished".

> > Consider doing the work in chunks. You should already have sectors
> > 0-33 GB. Skip 33 GB and/or 34 GB. Do 35-100 GB. Then, 100-200 GB,
> > 200-300 GB, 300-400 GB, etc.. Get the good sectors first. Do the
> > problem sectors last.
>
> Agreed, though ddrescue should be able to do this more flexibly, and
> automatically, with -K.

RTFM [1], I don't know if I would use -K. Take a look at the examples
given at the end of section "4 Algorithm" and in "10 A small tutorial
with examples" (examples 3 and 5 look relevant to the OP).

David

[1] https://www.gnu.org/software/ddrescue/manual/ddrescue_manual.html
Re: failing HDD, ddrescue says remaining time is 7104d
On Wed 31 Aug 2022 at 14:02:19 (-0700), David Christensen wrote:
> On 8/31/22 06:25, ppr wrote:
> > I would appreciate advice from the community about a failing hard
> > drive.
> >
> > When booting up, the computer complained about /dev/sdb, which is an
> > ext4 HDD with data (not the computer's main disk). dmesg shows
> > `AE_NOT_FOUND` and `failed command: READ FPDMA QUEUED` messages
> > (full dmesg log at https://hastebin.com/raw/jebelileru).
> >
> > It has finally booted after trying unsuccessfully to start /dev/sdb.
>
> Comment out the /etc/crypttab and/or /etc/fstab entries for the failed
> drive. When you mount the drive, mount it read only.

I don't think it's wise to mount this disk at all, and certainly not
before everything that can be rescued from it has been obtained and
copied/archived.

> Consider doing the work in chunks. You should already have sectors
> 0-33 GB. Skip 33 GB and/or 34 GB. Do 35-100 GB. Then, 100-200 GB,
> 200-300 GB, 300-400 GB, etc.. Get the good sectors first. Do the
> problem sectors last.

Agreed, though ddrescue should be able to do this more flexibly, and
automatically, with -K.

Cheers,
David.
Re: failing HDD, ddrescue says remaining time is 7104d
On Wed 31 Aug 2022 at 15:27:04 (-0400), Michael Stone wrote:
> On Wed, Aug 31, 2022 at 03:25:36PM +0200, ppr wrote:
> > I did not try to mount the HDD. I plugged an external HDD (ext4)
> > and launched ddrescue. After two days it has recovered 33GB of 1TB
> > but the speed are now so slow it will take 7104 days to complete.
>
> is the img file still growing? in general you're going to have issues
> with error retries on external disks because the various layers don't
> play well together (including the sata/usb hardware in the enclosure).
> your best bet would be to try to hook the drive up internally, but if
> the disk is really dead nothing is going to help.

I was under the impression that an internal drive was being rescued, and
the copy and mapfile were being written to the external one. Or does the
SMART information tell you something I've overlooked?

Cheers,
David.
Re: failing HDD, ddrescue says remaining time is 7104d
On Wed 31 Aug 2022 at 15:25:36 (+0200), ppr wrote:
> Copying non-tried blocks... Pass 1 (forwards)^C
>
> Should I wait hoping for a speeding? Should I pass different option to
> ddrescue or use another tool?

I would look at two options you could try adding to your command line:

-K --skip-size could ascertain whether you've hit a really bad patch
that's holding everything up but can jump over it, or whether the rest
of the disk is just as bad.

-R --reverse will start an attempt from the end of the disk, and if
you're extremely lucky, it might copy most of the remaining 960-odd GB
of data. OTOH it might only confirm that the disk is closer to meeting
its maker than it was when you started the rescue.

Cheers,
David.
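For concreteness, those two suggestions might look like this on the OP's
command line (the image and mapfile paths are shortened here, and the
1Gi skip size is only a guess at a sensible starting value):

```shell
# Resume the same rescue, but jump ahead whenever a slow/bad patch is
# hit (-K, skip size 1 GiB); ddrescue returns to skipped areas later:
ddrescue -n -K 1Gi /dev/sdb image_HDD1.img recup.log

# Or sweep from the end of the disk towards the start (-R); the
# mapfile ensures already-rescued areas are not re-read:
ddrescue -n -R /dev/sdb image_HDD1.img recup.log
```

Because progress lives in the mapfile, either run can be interrupted
with Ctrl-C and resumed later without losing anything.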
Re: networking.service: start operation timed out [SOLVED]
On 8/31/22 11:03 AM, Jeremy Ardley wrote: > On 31/8/22 10:45 pm, Chuck Zmudzinski wrote: > > > > I don't use haproxy but I see there is a package for it in the Debian > > repos. I think what you are seeing should be reported as a bug in > > haproxy if you are using the Debian packaged version. The haproxy > > package should start haproxy at the appropriate time during boot, > > and systemd provides the ability to make services such as haproxy > > depend on certain systemd targets being reached before it tries to > > start, such as the network-online target which I think would be > > enough for haproxy to start. But in any case, you might report a bug > > in haproxy and see if the package maintainers can help you out if > > you are using the Debian packaged version. > > > haproxy does retry three (?) times over a period. The problem is my upstream > provider can take up to 10 minutes to provide a dhcp address and ipv6 RA. > > The network service does start correctly, but lapses into a retry mode when > it can't get the full delegation at once. > > haproxy requires a configured interface for it to bind to. Typically this > means bind to an IP address and port. If the solicitation to the upstream > router hasn't happened, there is no IP and port to bind. haproxy does have an > (undocumented?) retry feature to repeatedly try to bind over a period. > > If any bug request is to be logged, perhaps it should be for haproxy to have > configurable binding options including number of retries or time elapsed? > > Jeremy > It sounds like it should be either a request for better documentation on configuring retries over a long time period from the haproxy documentation or a bug with wishlist severity if haproxy currently cannot handle such a long time to wait for the address to be configured by the upstream router. It also seems to be a ridiculously long time (ten minutes) for your provider to configure your interface. 
I would look for a different provider if they can't or won't fix it. Chuck
Re: failing HDD, ddrescue says remaining time is 7104d
On 8/31/22 06:25, ppr wrote:
> I would appreciate advice from the community about a failing hard
> drive.
>
> When booting up, the computer complained about /dev/sdb, which is an
> ext4 HDD with data (not the computer's main disk). dmesg shows
> `AE_NOT_FOUND` and `failed command: READ FPDMA QUEUED` messages (full
> dmesg log at https://hastebin.com/raw/jebelileru).
>
> It has finally booted after trying unsuccessfully to start /dev/sdb.
>
> I launched smartctl which shows hard drive failure.
>
> ---
> # smartctl -H -i /dev/sdb
> smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.19.0-21-amd64] (local build)
> Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org
>
> === START OF INFORMATION SECTION ===
> Model Family:     Toshiba 3.5" DT01ACA... Desktop HDD
> Device Model:     TOSHIBA DT01ACA100
> Serial Number:    663X1XGNS
> LU WWN Device Id: 5 39 fe9dad918
> Firmware Version: MS2OA750
> User Capacity:    1 000 204 886 016 bytes [1,00 TB]
> Sector Sizes:     512 bytes logical, 4096 bytes physical
> Rotation Rate:    7200 rpm
> Form Factor:      3.5 inches
> Device is:        In smartctl database [for details use: -P show]
> ATA Version is:   ATA8-ACS T13/1699-D revision 4
> SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
> Local Time is:    Wed Aug 31 13:56:34 2022 CEST
> SMART support is: Available - device has SMART capability.
> SMART support is: Enabled
>
> === START OF READ SMART DATA SECTION ===
> SMART overall-health self-assessment test result: FAILED!
> Drive failure expected in less than 24 hours. SAVE ALL DATA.
> Failed Attributes:
> ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
>   2 Throughput_Performance  0x0005 037   037   054    Pre-fail Offline FAILING_NOW 3774
>   5 Reallocated_Sector_Ct   0x0033 001   001   005    Pre-fail Always  FAILING_NOW 2004
> ---
>
> I did not try to mount the HDD. I plugged an external HDD (ext4) and
> launched ddrescue. After two days it has recovered 33GB of 1TB but
> the speed are now so slow it will take 7104 days to complete.
> # ddrescue -n /dev/sdb /media/sara/2274a2da-1f02-4afd-a5c5-e8dcb1c02195/recup_HDD_sara/image_HDD1.img /media/sara/2274a2da-1f02-4afd-a5c5-e8dcb1c02195/recup_HDD_sara/recup.log
> GNU ddrescue 1.23
> Press Ctrl-C to interrupt
>       ipos:   33992 MB, non-trimmed:        0 B,  current rate:    636 B/s
>       opos:   33992 MB, non-scraped:        0 B,  average rate:    188 kB/s
>  non-tried:  966212 MB,  bad-sector:        0 B,    error rate:      0 B/s
>    rescued:   33992 MB,   bad areas:        0,        run time:  2d 2h  6m
> pct rescued:    3.39%, read errors:        0,  remaining time:  7104d 20h
>                               time since last successful read:         0s
> Copying non-tried blocks... Pass 1 (forwards)^C
>
> Should I wait hoping for a speeding? Should I pass different option to
> ddrescue or use another tool?

Unless you have enterprise grade equipment designed for 100% duty cycle
for 48 hours, I would kill the ddrescue job before your hardware is
destroyed. Both the failed drive and the destination drive will be in
heavy use while you attempt to recover sectors. At 100 MB/s,
transferring 1 TB will take nearly 3 hours (!). Make sure everything has
good power supplies and good cooling. Use the best drive you have for
the destination; an SSD will expedite this process and the steps that
follow. Ensure that the destination contains zeros for sectors not
recovered.

Comment out the /etc/crypttab and/or /etc/fstab entries for the failed
drive. When you mount the drive, mount it read only.

The challenge is figuring out the right options and strategies for using
ddrescue(1) to get as many good sectors as you can off the failing drive
before it dies completely. Fortunately or unfortunately, I have not
needed ddrescue(1) in many years; so, I would RTFM carefully and then
STFW for articles about using ddrescue(1) effectively.

Consider doing the work in chunks. You should already have sectors
0-33 GB. Skip 33 GB and/or 34 GB. Do 35-100 GB. Then, 100-200 GB,
200-300 GB, 300-400 GB, etc.. Get the good sectors first. Do the
problem sectors last.
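The chunking schedule above can be expressed directly with ddrescue's
-i (input position) and -s (size) options. A sketch, with the OP's
image and mapfile paths shortened; because every run reuses the same
mapfile, ddrescue merges all the pieces into one rescue:

```shell
# Hypothetical chunked schedule; adjust positions/sizes to taste.
ddrescue -n -i 35G  -s 65G  /dev/sdb image_HDD1.img recup.log   # 35-100 GB
ddrescue -n -i 100G -s 100G /dev/sdb image_HDD1.img recup.log   # 100-200 GB
ddrescue -n -i 200G -s 100G /dev/sdb image_HDD1.img recup.log   # 200-300 GB
# ...continue in 100 GB steps to the end of the disk, then finally
# come back for the problem region that was skipped:
ddrescue -n -i 33G  -s 2G   /dev/sdb image_HDD1.img recup.log   # 33-35 GB
```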
Once you have an image file containing whatever sectors you could recover, make the file read-only and back it up. Better yet, make two backups and put one off-site. To do the filesystem repair/ recovery work, make a copy of the image and work on the copy. If you make a mistake, you can throw away the copy and start over. I find it very useful to install Debian onto a good quality USB 3.0 flash drive, to use for system administration, maintenance, trouble-shooting, etc.. I prefer this approach over "live" distributions because I have a full Debian system and can install anything I want or need. I find it very useful to have a spare computer for maintenance and troubleshooting tasks. I find it very useful to use a version control system for system configuration files, system administration notes, etc.. I backup, archive, and image com
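The "freeze the image, then repair a copy" step might look like this.
The loop-device and partition names are illustrative, and whether you
need the -P partition scan depends on whether the rescue produced a
whole-disk image or a single-partition image:

```shell
chmod a-w image_HDD1.img                  # freeze the rescued image
cp --sparse=always image_HDD1.img work.img
chmod u+w work.img                        # the copy must stay writable for fsck
# Attach the *copy* as a loop device and repair that:
sudo losetup -f -P --show work.img        # prints e.g. /dev/loop0
sudo e2fsck -f /dev/loop0p1               # or /dev/loop0 if no partition table
sudo mount -o ro /dev/loop0p1 /mnt        # inspect the result read-only
```

If e2fsck makes things worse, delete work.img, re-copy from the frozen
image, and try different options.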
Re: failing HDD, ddrescue says remaining time is 7104d
On Wed, Aug 31, 2022 at 03:25:36PM +0200, ppr wrote: I did not try to mount the HDD. I plugged an external HDD (ext4) and launched ddrescue. After two days it has recovered 33GB of 1TB but the speed are now so slow it will take 7104 days to complete. is the img file still growing? in general you're going to have issues with error retries on external disks because the various layers don't play well together (including the sata/usb hardware in the enclosure). your best bet would be to try to hook the drive up internally, but if the disk is really dead nothing is going to help.
Re: failing HDD, ddrescue says remaining time is 7104d
On 8/31/22, to...@tuxteam.de wrote:
> On Wed, Aug 31, 2022 at 03:25:36PM +0200, ppr wrote:
>> I would appreciate advice from the community about a failing hard
>> drive.
>
> [...]
>
>> I did not try to mount the HDD. I plugged an external HDD (ext4) and
>> launched ddrescue. After two days it has recovered 33GB of 1TB but
>> the speed are now so slow it will take 7104 days to complete.
>
> External means an USB enclosure? Depending on the USB this might be
> the bottleneck.

My experience is that a session's reboot freshness affects transfer
statistics, too. In addition to starting with a fresh reboot, I will
also rsync single directories. Doing so means the system will be
churning away at, and choking on, less data while it grapples with
copying that data over to other media.

Since that method of attack leaves room for the human error of
accidentally skipping over directories, I'll run the entire setup one
last time at the end. Doing so does on occasion catch something I've
missed.

Cindy :)
--
Talking Rock, Pickens County, Georgia, USA
* runs with birdseed *
Re: failing HDD, ddrescue says remaining time is 7104d
On Wed, Aug 31, 2022 at 03:25:36PM +0200, ppr wrote:
> I would appreciate advice from the community about a failing hard drive.
> [...]
> I did not try to mount the HDD. I plugged an external HDD (ext4) and
> launched ddrescue. After two days it has recovered 33GB of 1TB but the
> speed are now so slow it will take 7104 days to complete.

External means a USB enclosure? Depending on the USB this might be the
bottleneck.

Cheers
-- t
Re: Substitute for archivemail
On 2022-08-31 08:47 -0400, Kenneth Parker wrote: > On Wed, Aug 31, 2022, 5:36 AM riveravaldez > wrote: > >> On 8/30/22, Anssi Saari wrote: >> > Leandro Noferini writes: >> > >> >> In these days I upgraded the server to bullseye and so I have not yet >> >> archivemail: what could I use as subsitute? >> > >> > I wonder about that too, >> >> Hi, not an archivemail user, but just in case it's useful: you can >> check the right column bottom section ('Similar packages') on Debian's >> archivemail package page to see if there's something relevant there >> (and if it's available in newer Debian versions): >> >> https://packages.debian.org/buster/archivemail > > > Okay. So archivemail hasn't been updated for Python 3 yet. s/ yet// Some people have tried, but gave up eventually, therefore the package has been removed. See https://bugs.debian.org/936146 for details. Cheers, Sven
Re: failing HDD, ddrescue says remaining time is 7104d
On Wed, 31 Aug 2022 15:25:36 +0200 ppr wrote: > Should I wait hoping for a speeding? Should I pass different option > to ddrescue or use another tool? -- Does anybody read signatures any more? https://charlescurley.com https://charlescurley.com/blog/
Re: networking.service: start operation timed out [SOLVED]
On 31/8/22 10:45 pm, Chuck Zmudzinski wrote:
> I don't use haproxy but I see there is a package for it in the Debian
> repos. I think what you are seeing should be reported as a bug in
> haproxy if you are using the Debian packaged version. The haproxy
> package should start haproxy at the appropriate time during boot, and
> systemd provides the ability to make services such as haproxy depend
> on certain systemd targets being reached before it tries to start,
> such as the network-online target which I think would be enough for
> haproxy to start. But in any case, you might report a bug in haproxy
> and see if the package maintainers can help you out if you are using
> the Debian packaged version.

haproxy does retry three (?) times over a period. The problem is my
upstream provider can take up to 10 minutes to provide a dhcp address
and ipv6 RA.

The network service does start correctly, but lapses into a retry mode
when it can't get the full delegation at once.

haproxy requires a configured interface for it to bind to. Typically
this means bind to an IP address and port. If the solicitation to the
upstream router hasn't happened, there is no IP and port to bind.
haproxy does have an (undocumented?) retry feature to repeatedly try to
bind over a period.

If any bug request is to be logged, perhaps it should be for haproxy to
have configurable binding options including number of retries or time
elapsed?

Jeremy
Re: networking.service: start operation timed out [SOLVED]
On 8/30/22 8:49 PM, Jeremy Ardley wrote: > On 30/8/22 9:56 am, Ross Boylan wrote: > > > > Now everything just works. > > > > Thanks again to everyone. > > > > There are probably some general lessons, though I'm not sure what they > > are. Clearly the systemd semantics tripped me up; it's kind of an odd > > beast. I understand one of its major goals was to allow startup to > > proceed in parallel, which is pretty asynchronous. But it has to > > assure that certain things happen in a certain order, which results in > > some things being synchronous and blocking. I'm surprised that a tool > > intended for use from the command line (systemctl) is blocking. > > > > Ross > > > > One of my problems with systemd is the that name resolution is by > default done by resolved. If resolved was bug free that might be O.K. > but it's not - and in a production environment it's not a safe option. > > A result of the use of resolved is the start-up and dependency logic. If > you start doing things outside of the plan, you run into all sorts of > problems. I use bind9 on my various machines and have had to go to some > lengths to take resolved out of the equation. > > On a similar but different topic. I have a router that connects to an > upstream server and also runs haproxy. The upstream connection uses DHCP > and IPv6 solicitation. The problem is haproxy fails to start when the > upstream connection is not established and configured quickly enough. > What would be very helpful is a systemd way to start haproxy when the > network is established 'as configured'. So far all I can do is run a > cron job to see if haproxy is running and if not, try and restart it. > There has to be a better way. > You are right, you should not need to use a cron job to start a service like haproxy. I don't use haproxy but I see there is a package for it in the Debian repos. I think what you are seeing should be reported as a bug in haproxy if you are using the Debian packaged version. 
The haproxy package should start haproxy at the appropriate time during boot, and systemd provides the ability to make services such as haproxy depend on certain systemd targets being reached before it tries to start, such as the network-online target which I think would be enough for haproxy to start. But in any case, you might report a bug in haproxy and see if the package maintainers can help you out if you are using the Debian packaged version. Best regards, Chuck
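As a sketch of the systemd mechanism described above, a drop-in for the
packaged haproxy unit might look like this (the file path follows the
usual drop-in convention; note that whether network-online.target
actually waits for a slow upstream prefix delegation depends on the
network manager in use, so this may not fully solve a 10-minute DHCP
delay on its own):

```ini
# /etc/systemd/system/haproxy.service.d/wait-online.conf
[Unit]
Wants=network-online.target
After=network-online.target

[Service]
# Belt and braces: keep retrying if the bind address still isn't there.
Restart=on-failure
RestartSec=30s
```

After creating the drop-in, run `systemctl daemon-reload` so systemd
picks it up.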
5 GHz hostapd stopped working
Hi. After I rebooted for a new kernel, I am having trouble with my 5 GHz
wifi access point using hostapd. My setup is Debian Testing, updated
almost daily except for the summer weeks.

Before summer, this configuration used to work:

interface=wlan1
ssid=cigaes_paris2
country_code=FR
# 36 48 ok
channel=40
wpa=2
wpa_passphrase=rzgZFlr6xOFZIYu9
hw_mode=a
ieee80211n=1
ieee80211ac=1
wpa_pairwise=TKIP CCMP
# the remaining lines are the default configuration
logger_syslog=-1
logger_syslog_level=2
logger_stdout=-1
logger_stdout_level=2
ctrl_interface=/var/run/hostapd
ctrl_interface_group=0
beacon_int=100
dtim_period=2
max_num_sta=255
rts_threshold=-1
fragm_threshold=-1
macaddr_acl=0
auth_algs=3
ignore_broadcast_ssid=0
wmm_enabled=1
wmm_ac_bk_cwmin=4
wmm_ac_bk_cwmax=10
wmm_ac_bk_aifs=7
wmm_ac_bk_txop_limit=0
wmm_ac_bk_acm=0
wmm_ac_be_aifs=3
wmm_ac_be_cwmin=4
wmm_ac_be_cwmax=10
wmm_ac_be_txop_limit=0
wmm_ac_be_acm=0
eapol_key_index_workaround=0
eap_server=0
own_ip_addr=127.0.0.1

Now, it fails:

○ hostapd@wlan1.service - Access point and authentication server for Wi-Fi and Ethernet (wlan1)
     Loaded: loaded (/lib/systemd/system/hostapd@.service; disabled; preset: enabled)
     Active: inactive (dead)
       Docs: man:hostapd(8)

Aug 31 16:14:19 ssecem systemd[1]: Stopping Access point and authentication server for Wi-Fi and Ethernet (wlan1)...
Aug 31 16:14:20 ssecem systemd[1]: hostapd@wlan1.service: Deactivated successfully.
Aug 31 16:14:20 ssecem systemd[1]: Stopped Access point and authentication server for Wi-Fi and Ethernet (wlan1).
Aug 31 16:14:20 ssecem systemd[1]: hostapd@wlan1.service: Consumed 1.965s CPU time.
Aug 31 16:14:20 ssecem systemd[1]: Starting Access point and authentication server for Wi-Fi and Ethernet (wlan1)...
Aug 31 16:14:20 ssecem hostapd[574214]: wlan1: interface state UNINITIALIZED->COUNTRY_UPDATE
Aug 31 16:14:20 ssecem systemd[1]: Started Access point and authentication server for Wi-Fi and Ethernet (wlan1).
Aug 31 16:14:25 ssecem hostapd[574215]: wlan1: IEEE 802.11 Configured channel (40) or frequency (5200) (secondary_channel=0) not found from the channel list of the current mode (2) IEEE 802.11a
Aug 31 16:14:25 ssecem hostapd[574215]: wlan1: IEEE 802.11 Hardware does not support configured channel
Aug 31 16:14:25 ssecem systemd[1]: hostapd@wlan1.service: Deactivated successfully.

I understand it is linked to country settings, and I am having trouble
with it. I have crda and wireless-regdb installed.

If I downgrade to wireless-regdb=2022.04.08-2~deb11u1 from stable, I get:

$ sudo COUNTRY=FR /lib/crda/crda
Failed to set regulatory domain: -7

If I upgrade to current wireless-regdb=2022.06.06-1, I get:

$ sudo COUNTRY=FR /lib/crda/crda
failed to open db file: No such file or directory

So there is something fishy going on. I also try setting the country
code from iw:

$ sudo iw reg set FR; sudo iw reg get; sudo iw reg get | sha256sum
global
country 98: DFS-UNSET
        (2400 - 2483 @ 40), (N/A, 20), (N/A)
        (5150 - 5250 @ 100), (N/A, 20), (0 ms), NO-OUTDOOR, DFS, AUTO-BW
        (5250 - 5350 @ 100), (N/A, 20), (0 ms), NO-OUTDOOR, DFS, AUTO-BW
        (5725 - 5850 @ 80), (N/A, 13), (N/A)
        (57240 - 59400 @ 2160), (N/A, 28), (N/A)
        (59400 - 63720 @ 2160), (N/A, 40), (N/A)
        (63720 - 65880 @ 2160), (N/A, 28), (N/A)

phy#0
country CN: DFS-FCC
        (2400 - 2483 @ 40), (N/A, 20), (N/A)
        (5150 - 5350 @ 80), (N/A, 20), (0 ms), DFS, AUTO-BW
        (5725 - 5850 @ 80), (N/A, 33), (N/A)
        (57240 - 59400 @ 2160), (N/A, 28), (N/A)
        (59400 - 63720 @ 2160), (N/A, 44), (N/A)
        (63720 - 65880 @ 2160), (N/A, 28), (N/A)
499cb9dd085d177c863bd39316ae27bf4618068bc519a6e079d94216ab8cb616  -
$ sudo iw reg set DE; sudo iw reg get | sha256sum
499cb9dd085d177c863bd39316ae27bf4618068bc519a6e079d94216ab8cb616  -
$ sudo iw reg set UK; sudo iw reg get | sha256sum
499cb9dd085d177c863bd39316ae27bf4618068bc519a6e079d94216ab8cb616  -
$ sudo iw reg set US; sudo iw reg get | sha256sum
499cb9dd085d177c863bd39316ae27bf4618068bc519a6e079d94216ab8cb616  -
$ sudo iw reg set CN; sudo iw reg get | sha256sum
499cb9dd085d177c863bd39316ae27bf4618068bc519a6e079d94216ab8cb616  -
$ sudo iw reg set FR; sudo iw reg get | sha256sum
499cb9dd085d177c863bd39316ae27bf4618068bc519a6e079d94216ab8cb616  -

→ iw reg set has absolutely no effect on the result of iw reg get.

Also, phy#0 is wlan0, which works but only supports 2.4 GHz; wlan1 is
phy#1, so it is completely absent from iw reg get. I can get this with
iw list:

Wiphy phy1
        wiphy index: 1
        max # scan SSIDs: 4
        max scan IEs length: 2243 bytes
        max # sched scan SSIDs: 0
        max # match sets: 0
        Retry short limit: 7
        Retry long limit: 4
        Coverage class: 0 (up to 0m)
        Device supports RSN-IBSS.
        Device supports AP-side u-APSD.
        Device supports T-DLS.
        Supported Ciphers:
                * WEP40 (00-0f-ac:1)
                * WEP104
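For anyone debugging the same kind of regulatory problem, a few
read-only commands can narrow down whether the kernel ever loaded a
regulatory database for phy1 (this assumes a reasonably recent iw;
exact output varies by kernel and regdb version):

```shell
iw phy phy1 channels        # per-channel flags such as "disabled" or "no IR"
iw reg get                  # regulatory domains the kernel is actually using
dmesg | grep -i cfg80211    # the regulatory core logs why hints were rejected
```

On bullseye and later the kernel reads the regdb directly from
/lib/firmware/regulatory.db, so a missing or broken wireless-regdb
package shows up there rather than through crda.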
failing HDD, ddrescue says remaining time is 7104d
I would appreciate advice from the community about a failing hard drive.

When booting up, the computer complained about /dev/sdb, which is an
ext4 HDD with data (not the computer's main disk). dmesg shows
`AE_NOT_FOUND` and `failed command: READ FPDMA QUEUED` messages (full
dmesg log at https://hastebin.com/raw/jebelileru).

It has finally booted after trying unsuccessfully to start /dev/sdb.

I launched smartctl which shows hard drive failure.

---
# smartctl -H -i /dev/sdb
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.19.0-21-amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Toshiba 3.5" DT01ACA... Desktop HDD
Device Model:     TOSHIBA DT01ACA100
Serial Number:    663X1XGNS
LU WWN Device Id: 5 39 fe9dad918
Firmware Version: MS2OA750
User Capacity:    1 000 204 886 016 bytes [1,00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Wed Aug 31 13:56:34 2022 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: FAILED!
Drive failure expected in less than 24 hours. SAVE ALL DATA.
Failed Attributes:
ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  2 Throughput_Performance  0x0005 037   037   054    Pre-fail Offline FAILING_NOW 3774
  5 Reallocated_Sector_Ct   0x0033 001   001   005    Pre-fail Always  FAILING_NOW 2004
---

I did not try to mount the HDD. I plugged an external HDD (ext4) and
launched ddrescue. After two days it has recovered 33GB of 1TB but the
speed is now so slow it will take 7104 days to complete.
# ddrescue -n /dev/sdb /media/sara/2274a2da-1f02-4afd-a5c5-e8dcb1c02195/recup_HDD_sara/image_HDD1.img /media/sara/2274a2da-1f02-4afd-a5c5-e8dcb1c02195/recup_HDD_sara/recup.log
GNU ddrescue 1.23
Press Ctrl-C to interrupt
      ipos:   33992 MB, non-trimmed:        0 B,  current rate:    636 B/s
      opos:   33992 MB, non-scraped:        0 B,  average rate:    188 kB/s
 non-tried:  966212 MB,  bad-sector:        0 B,    error rate:      0 B/s
   rescued:   33992 MB,   bad areas:        0,        run time:  2d 2h  6m
pct rescued:    3.39%, read errors:        0,  remaining time:  7104d 20h
                              time since last successful read:         0s
Copying non-tried blocks... Pass 1 (forwards)^C

Should I wait hoping for a speed-up? Should I pass different options to
ddrescue or use another tool?
Re: networking.service: start operation timed out [SOLVED]
On 31/8/22 9:16 pm, Anssi Saari wrote:
> I wonder what bugs Jeremy has found and reported against
> systemd-resolved though. I remember getting a big headache trying to
> get interface specific DNS configuration going only to eventually find
> out it really wasn't working in the version Debian packaged at the
> time.

My main problem was unexplained systemd-resolved slowdowns and timeouts
on some DNS queries. It may have been related to DNSSEC? The same
queries using named had no problems, so I switched to that for the local
resolver.

I've also just had a look at man systemd-resolved.service. The
configuration seems very complex, especially the multicast DNS and the
myriad variations relating to /etc/resolv.conf. If I have a spare week
and an urgent need for multicast DNS I'll work through it.

--
Jeremy
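For anyone who would rather tame those features than replace resolved
entirely, the relevant knobs live in resolved.conf. A fragment (the
values are examples for experimentation, not a recommendation):

```ini
# /etc/systemd/resolved.conf (fragment)
[Resolve]
LLMNR=no          # disable Link-Local Multicast Name Resolution
MulticastDNS=no   # disable mDNS
DNSSEC=no         # useful for ruling DNSSEC in or out as a timeout cause
```

Followed by `systemctl restart systemd-resolved` to apply.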
Re: networking.service: start operation timed out [SOLVED]
Greg Wooledge writes: > On Wed, Aug 31, 2022 at 08:49:29AM +0800, Jeremy Ardley wrote: >> One of my problems with systemd is the that name resolution is by default >> done by resolved. > > Not in Debian. > > unicorn:~$ systemctl status systemd-resolved > ● systemd-resolved.service - Network Name Resolution > Loaded: loaded (/lib/systemd/system/systemd-resolved.service; disabled; > ve> > Active: inactive (dead) >Docs: man:systemd-resolved.service(8) > man:org.freedesktop.resolve1(5) > > https://www.freedesktop.org/wiki/Software/systemd/writing-network-> > > https://www.freedesktop.org/wiki/Software/systemd/writing-resolver> > > That's the Debian default. I didn't have to disable it, although I > certainly *would* have, had the default been otherwise. I was wondering about the same thing, so far I've needed to explicitly enable systemd-resolved on Debian when I've wanted it. I wonder what bugs Jeremy has found and reported against systemd-resolved though. I remember getting a big headache trying to get interface specific DNS configuration going only to eventually find out it really wasn't working in the version Debian packaged at the time.
Re: Substitute for archivemail
On Wed, Aug 31, 2022, 5:36 AM riveravaldez wrote: > On 8/30/22, Anssi Saari wrote: > > Leandro Noferini writes: > > > >> In these days I upgraded the server to bullseye and so I have not yet > >> archivemail: what could I use as subsitute? > > > > I wonder about that too, > > Hi, not an archivemail user, but just in case it's useful: you can > check the right column bottom section ('Similar packages') on Debian's > archivemail package page to see if there's something relevant there > (and if it's available in newer Debian versions): > > https://packages.debian.org/buster/archivemail Okay. So archivemail hasn't been updated for Python 3 yet. Kind regards! Best regards, Kenneth Parker
Re: Substitute for archivemail
On 8/30/22, Anssi Saari wrote: > Leandro Noferini writes: > >> In these days I upgraded the server to bullseye and so I have not yet >> archivemail: what could I use as subsitute? > > I wonder about that too, Hi, not an archivemail user, but just in case it's useful: you can check the right column bottom section ('Similar packages') on Debian's archivemail package page to see if there's something relevant there (and if it's available in newer Debian versions): https://packages.debian.org/buster/archivemail Kind regards!
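Not a drop-in replacement, but archivemail's most common job (moving
Maildir messages older than N days into a compressed mbox) can be
approximated with find. The function name, the 180-day example cutoff,
and the mbox separator text below are made up for illustration, and
real archivemail also handles mbox/MH/IMAP:

```shell
#!/bin/sh
# Hypothetical stand-in for "archivemail --days N" on a Maildir:
# append messages older than N days to a gzipped mbox, then delete them.
archive_old_mail() {
    maildir=$1 archive=$2 days=$3
    find "$maildir" -type f -mtime +"$days" | while read -r msg; do
        printf 'From archivemail-substitute %s\n' "$(date -R)"  # mbox separator
        cat "$msg"
        printf '\n'                                             # blank line between messages
        rm "$msg"
    done | gzip >> "$archive"
}

# e.g.: archive_old_mail "$HOME/Maildir/cur" "$HOME/mail-archive.gz" 180
```

Concatenated gzip streams decompress cleanly, so repeated runs append
correctly; but since messages are deleted as they are archived, test on
a copy of the Maildir first.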
Problems with systemd-resolved (was Re: networking.service: start operation timed out [SOLVED])
On Wed, 2022-08-31 at 08:49 +0800, Jeremy Ardley wrote:
> A result of the use of resolved is the start-up and dependency logic.

Another problem I had with systemd-resolved on an Ubuntu box was that it
refuses to forward single-part names to the DNS server it got by DHCP
(names like 'printer1' as opposed to 'printer1.domain'). Instead, it
wants to try and resolve them as Link-Local Multicast Name Resolution.
I tried disabling that feature [1] but it still doesn't forward, so I
disabled systemd-resolved and hard-coded /etc/resolv.conf to point to my
DNS server.

My Debian boxes don't have this problem because, as Greg pointed out,
systemd-resolved isn't enabled on Debian by default (as of the current
Stable release.)

[1] https://bugs.launchpad.net/netplan/+bug/1777523/comments/8

--
Tixy
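For reference, the "disable resolved and hard-code resolv.conf"
workaround is only a couple of commands (the nameserver address below is
a placeholder for your own DNS server):

```shell
sudo systemctl disable --now systemd-resolved
sudo rm /etc/resolv.conf          # often a symlink to resolved's stub file
printf 'nameserver 192.168.1.53\n' | sudo tee /etc/resolv.conf
```

Depending on the setup, a DHCP client or network manager may rewrite
/etc/resolv.conf on lease renewal, so it may additionally need to be
protected (e.g. with chattr +i) or the rewriting disabled.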