Bug#1003461: Missing syslinux-utils package breaks network booting of live ISO images

2022-01-10 Thread Sébastien Béhuret
Package: live-boot
Severity: important
Version: 1:20210208

Dear Maintainers,

The Debian Live project includes code to detect Syslinux's MEMDISK and
enable network booting of live ISO images. However, this depends on the
presence of the binary /usr/bin/memdiskfind, which is provided by the
syslinux-utils package. That package is currently not included in any of
the Debian Live ISO images, which breaks network booting.

Suggested fix: Include the syslinux-utils package in Debian Live builds, or
at least the binary /usr/bin/memdiskfind.
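
For live-build based images, one possible way to do this (a sketch,
assuming a standard live-build configuration tree; the file name
memdisk.list.chroot is arbitrary) is a chroot package list:

$ mkdir -p config/package-lists
$ echo syslinux-utils > config/package-lists/memdisk.list.chroot
$ lb build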

The current code from the Debian Live project is reproduced below for
reference:

[1/2] File /usr/share/initramfs-tools/hooks/live from package
live-boot-initramfs-tools:

# Program: memdisk
if [ -x /usr/bin/memdiskfind ]
then
    [ "${QUIET}" ] || echo -n " memdisk"
    copy_exec /usr/bin/memdiskfind
    manual_add_modules phram
    manual_add_modules mtdblock
fi

[2/2] File /lib/live/boot/9990-main.sh from package live-boot:

if [ -x /usr/bin/memdiskfind ]
then
    if MEMDISK=$(/usr/bin/memdiskfind)
    then
        # We found a memdisk, set up phram
        # Sometimes "modprobe phram" can not successfully create /dev/mtd0.
        # Have to try several times.
        max_try=20
        while [ ! -c /dev/mtd0 ] && [ "$max_try" -gt 0 ]; do
            modprobe phram "phram=memdisk,${MEMDISK}"
            sleep 0.2
            if [ -c /dev/mtd0 ]; then
                break
            else
                rmmod phram
            fi
            max_try=$((max_try - 1))
        done

        # Load mtdblock, the memdisk will be /dev/mtdblock0
        modprobe mtdblock
    fi
fi
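
As an aside, one can verify that memdiskfind was actually copied into a
built initramfs with lsinitramfs (a sketch; adjust the initrd path to the
image being inspected):

$ lsinitramfs live/initrd.img | grep memdiskfind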

Many thanks and kind regards,
Sebastien


Bug#930796: spindown_time and force_spindown_time are broken in hdparm 9.58+ds-1

2019-10-10 Thread Sébastien Béhuret
Hi Alex,

Apologies for the late reply. I found what prevented the HDDs from entering
standby mode: udisksd. I am leaving the workarounds here for reference.

Intro: The smartd and udisksd [1] daemons regularly poll S.M.A.R.T. data
from drives, and HDDs with a standby (or spindown) timeout longer than the
polling interval may fail to enter standby. In the case of udisksd, drives
that are already spun down are usually not affected, and the standby
timeout applied by udisks2 itself seems to be unaffected.

Workarounds for smartd (examples below):
- Add the -i value/--interval=value option to smartd_opts in
/etc/default/smartmontools, using a value greater than the standby timeout.
- Add -n standby or -n standby,q to the DEVICESCAN statement in
/etc/smartd.conf to avoid checking disks that are in standby, and (with ,q)
suppress the log message to that effect so it does not cause a write to
disk.
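
For example (a sketch; the 7200-second interval is arbitrary and merely
needs to exceed the longest standby timeout in use):

# /etc/default/smartmontools
smartd_opts="--interval=7200"

# /etc/smartd.conf
DEVICESCAN -n standby,q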

Workaround for udisksd:
- Run systemctl mask udisks2 to prevent udisksd from being started.

Other possible workarounds include setting the standby timeout to a
duration lower than the default polling interval (1800 seconds for smartd,
10 minutes for udisksd), forcing a manual spindown with hdparm -y /dev/sdx,
or trying hd-idle as suggested earlier in this thread.
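
For reference, hdparm -S values from 1 to 240 mean value * 5 seconds, and
241 to 251 mean (value - 240) * 30 minutes, so a timeout below udisksd's
10-minute polling interval could be set with, e.g.:

$ hdparm -S 96 /dev/sdx   # 96 * 5 s = 8 minutes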

Best regards,
Sebastien

[1]
https://wiki.archlinux.org/index.php/Udisks#Broken_standby_timer_(udisks2)

On Wed, Oct 9, 2019 at 7:58 AM Alex Mestiashvili wrote:

>
>
> On 7/1/19 1:23 PM, Sébastien Béhuret wrote:
> > For USB or FireWire disks, APM & spindown_time options are
> ignored,
> > other options are applied, force_spindown_time will be applied
> too.
> > There is a bug, https://bugs.launchpad.net/bugs/515023
> > explaining why USB and FireWire drives are ignored, however the
> > situation might have improved since then.
> >
> >
> > I was unaware of this bug and never experienced this issue with
> > external USB drives. I do remember external USB drives going into
> > standby mode shortly after backup completion, but this does not
> > occur anymore in debian buster/testing. The drives in question do
> > not support APM so it makes sense given that -S36 is no longer
> > applied in this case.
> >
> >
> > Correction: The external USB drives that used to go into standby mode
> > were not getting any hdparm settings (not even -S36), as they were not
> > used in battery mode and the spindown feature is disabled by default for
> > USB drives in recent hdparm versions. This must have been an internal
> > feature of the WD drives, as documented here:
> > https://support-en.wd.com/app/answers/detail/a_id/16047
> >
> > The fact that automatic spindown does not work anymore for these drives
> > in buster/testing may indicate that there is some form of system noise
> > (but somehow this noise would not be sufficient to wake up the drives
> > after hdparm -y) or that something else is actively disabling automatic
> > spindown.
>
> Hi Sébastien,
>
> I still have no solution for hdparm, but as workaround one can try
> hd-idle which is now available via buster-backports or testing.
>
> Best regards,
> Alex
>


Bug#930796: spindown_time and force_spindown_time are broken in hdparm 9.58+ds-1

2019-07-01 Thread Sébastien Béhuret
>
> For USB or FireWire disks, APM & spindown_time options are ignored,
>> other options are applied, force_spindown_time will be applied too.
>> There is a bug, https://bugs.launchpad.net/bugs/515023
>> explaining why USB and FireWire drives are ignored, however the
>> situation might have improved since then.
>>
>
> I was unaware of this bug and never experienced this issue with external
> USB drives. I do remember external USB drives going into standby mode
> shortly after backup completion, but this does not occur anymore in debian
> buster/testing. The drives in question do not support APM so it makes sense
> given that -S36 is no longer applied in this case.
>

Correction: The external USB drives that used to go into standby mode were
not getting any hdparm settings (not even -S36), as they were not used in
battery mode and the spindown feature is disabled by default for USB drives
in recent hdparm versions. This must have been an internal feature of the
WD drives, as documented here:
https://support-en.wd.com/app/answers/detail/a_id/16047

The fact that automatic spindown no longer works for these drives in
buster/testing may indicate that there is some form of system noise
(although that noise would somehow not be sufficient to wake up the drives
after hdparm -y) or that something else is actively disabling automatic
spindown.


Bug#930796: spindown_time and force_spindown_time are broken in hdparm 9.58+ds-1

2019-06-29 Thread Sébastien Béhuret
On Fri, Jun 28, 2019 at 10:40 AM Alex Mestiashvili wrote:

> > With your solution I assume that /lib/udev/hdparm would call hdparm
> > twice on each HDD during udev invocation, once for non-spindown options
> > returned by /lib/hdparm/hdparm-functions, and once through
> > /usr/lib/pm-utils/power.d/95hdparm-apm for spindown options.
>
> Exactly, for the APM options, apm, spindown_time and force_spindown_time
> /lib/udev/hdparm will call "/usr/lib/pm-utils/power.d/95hdparm-apm"
> For the other options, hdparm will be called a second time. But I see no
> problem here. Please see the updated script here:
>
> https://salsa.debian.org/debian/hdparm/blob/930796/debian/udev-scripts/hdparm
>

The updated script looks just right, with the -B and -S options going
through 95hdparm-apm and the other options applied locally.

Thanks!



> With the new /lib/udev/hdparm, hdparm follows the logic below:
>
> No config (/etc/hdparm.conf doesn't list any drives):
>   * If disk supports APM, the defaults:
> - on boot, -B 254
> - on power, -B 254
> - on battery -B 128 -S36 (3 min)
>   * no APM support:
> - hdparm will not run (no config!)
> If disk config is present in /etc/hdparm.conf:
>   * disk supports APM
> - on boot, udev will call /lib/udev/hdparm, which in turn will call
>   /usr/lib/pm-utils/power.d/95hdparm-apm for apm options and hdparm
>   for other options.
> - on power, /usr/lib/pm-utils/power.d/95hdparm-apm
> - on battery, defaults -B 128 -S 36, use apm_battery and
>   spindown_time to set non-default values
>   * no APM support:
> - force_spindown_time and other options are applied,
>   apm and spindown_time are ignored
>

This is great default behavior. Calling 95hdparm-apm from /lib/udev/hdparm
also prevents setting options that laptop-mode-tools would normally handle.
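
For anyone reading along, a minimal /etc/hdparm.conf stanza exercising this
logic might look like the following (a sketch; the device path and values
are only examples):

/dev/sda {
    apm = 128
    spindown_time = 120
    force_spindown_time = 120
}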


> For USB or FireWire disks, APM & spindown_time options are ignored,
> other options are applied, force_spindown_time will be applied too.
> There is a bug, https://bugs.launchpad.net/bugs/515023
> explaining why USB and FireWire drives are ignored, however the
> situation might have improved since then.
>

I was unaware of this bug and never experienced this issue with external
USB drives. I do remember external USB drives going into standby mode
shortly after backup completion, but this no longer occurs in Debian
buster/testing. The drives in question do not support APM, so it makes
sense given that -S36 is no longer applied in this case.


>
> > Custom
> > scripts relying on hdparm_options() function in
> > /lib/hdparm/hdparm-functions would still fail if force_spindown_time is
> > used in /etc/hdparm.conf. I would suggest implementing the conversion
> > code directly into hdparm_options() function to avoid code duplication,
> > prevent misuse, and possibly avoid calling hdparm twice on each HDD.
>
> This makes sense, but
> 1. hdparm-functions is the Debian specific helper script. The chances
> that somebody will use it for custom scripts are very low.
> 2. force_spindown_time is a hackish workaround and in order to implement
> it I need to parse this option later in "95hdparm-apm" script.
> Implementing proper handling of "force_spindown_time" in
> hdparm-functions will result in bringing part of
> resume_hdparm_spindown() function from 95hdparm-apm in hdparm-functions
> code. I don't like this idea, but please feel free to implement and send
> me a patch. :)
>

The logic that you described above is just fine, and you are absolutely
right that it is unlikely that hdparm-functions is or will be used for
custom scripts. If the force_spindown_time hack were implemented in
hdparm-functions, it would also be necessary to detect laptop-mode-tools
and parse its configuration there, making things a little trickier.


> >
> > 4. Thanks for your feedback. I have done some experiments and it appears
> > that the -S issue comes from something else. I can only confirm that the
> > -S option was still working fine at the time of hdparm 9.56+ds-2 in
> > buster/testing (Fall 2018) and it had been working for over 5 years with
> > various kernel and hdparm versions. Between hdparm 9.56+ds-2 and hdparm
> > 9.58+ds-1, the kernel was updated (4.17.8-1 => 4.19.37-3) and there were
> > also changes in udev (239-7 => 241-3).
>
> To exclude hdparm, one can try to build hdparm 9.58 on a stretch system.
> Building it with make will also work.
>


You are right; unfortunately, I won't be able to run this test now.

I'm confident that hdparm -S is somehow broken in recent Debian buster due
to:
- Multiple drives are affected by the issue (internal and USB external)
- These drives will spin down successfully with hdparm -y, and will stay in
standby mode unless manually accessed (tested for over 48 hours)
- hdparm -S runs successfully but none of the delays work (tested delays
ranged from a few seconds to a few hours)
- It had been working flawlessly with this hardware running Debian testing
up to Fall 2018 (hdparm 9.56, kernel 4.17, udev 

Bug#930796: spindown_time and force_spindown_time are broken in hdparm 9.58+ds-1

2019-06-27 Thread Sébastien Béhuret
Hi Alex,

Thanks for your detailed reply.

2. I agree that it is appropriate to drop /etc/apm/event.d/20hdparm.

3. Your solution is OK: Calling /usr/lib/pm-utils/power.d/95hdparm-apm from
/lib/udev/hdparm would fix the force_spindown_time conversion issue.

With your solution, I assume that /lib/udev/hdparm would call hdparm twice
on each HDD during udev invocation: once for non-spindown options returned
by /lib/hdparm/hdparm-functions, and once through
/usr/lib/pm-utils/power.d/95hdparm-apm for spindown options. Custom scripts
relying on the hdparm_options() function in /lib/hdparm/hdparm-functions
would still fail if force_spindown_time is used in /etc/hdparm.conf. I
would suggest implementing the conversion code directly into the
hdparm_options() function to avoid code duplication, prevent misuse, and
possibly avoid calling hdparm twice on each HDD; see the sketch below.
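
To illustrate, the suggested conversion inside hdparm_options() could be as
small as an extra case branch (a hypothetical sketch; KEYWORD, VALUE and
OPTIONS are stand-ins for whatever names the function actually uses):

case "$KEYWORD" in
    force_spindown_time)
        # Translate the config keyword back to the short flag,
        # mirroring what is already done for spindown_time -> -S.
        OPTIONS="$OPTIONS -S${VALUE}"
        ;;
esac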

4. Thanks for your feedback. I have done some experiments and it appears
that the -S issue comes from something else. I can only confirm that the -S
option was still working fine at the time of hdparm 9.56+ds-2 in
buster/testing (Fall 2018) and it had been working for over 5 years with
various kernel and hdparm versions. Between hdparm 9.56+ds-2 and hdparm
9.58+ds-1, the kernel was updated (4.17.8-1 => 4.19.37-3) and there were
also changes in udev (239-7 => 241-3).

Below is a summary of what I did so far to try and debug hdparm -S:


A) hdparm versions tried:

$ hdparm-jessie -V
hdparm-jessie v9.43

$ hdparm-stretch -V
hdparm-stretch v9.51

$ hdparm-buster -V
hdparm-buster v9.58


B) What currently works for all versions:

$ hdparm -y /dev/sdx

/dev/sdx:
 issuing standby command

$ hdparm -C /dev/sdx

/dev/sdx:
 drive state is:  standby

## Accessing a mounted partition on /dev/sdx ##

$ hdparm -C /dev/sdx

/dev/sdx:
 drive state is:  active/idle

## Will still work if hdparm -y is repeated at this stage ##


C) What worked before at the time of hdparm 9.56+ds-2 (successful spindown
after the delay):

$ hdparm -S248 /dev/sdx

/dev/sdx:
 setting standby to 248 (4 hours)

## Other delays not tested ##


D) What does not work (anymore) for all versions (hdparm runs successfully
but the drive will not spin down after the delay):

$ hdparm -S1 /dev/sdx

/dev/sdx:
 setting standby to 1 (5 seconds)

$ hdparm -S10 /dev/sdx

/dev/sdx:
 setting standby to 10 (50 seconds)

$ hdparm -S241 /dev/sdx

/dev/sdx:
 setting standby to 241 (30 minutes)

$ hdparm -S248 /dev/sdx

/dev/sdx:
 setting standby to 248 (4 hours)
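
For completeness, the drive state after each -S test can be watched with a
simple loop; hdparm -C issues a CHECK POWER MODE command, which does not
wake a sleeping drive:

$ while sleep 60; do hdparm -C /dev/sdx; done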


Best regards,
Sebastien

On Mon, Jun 24, 2019 at 4:54 PM Alex Mestiashvili wrote:

>
> On 6/20/19 8:42 PM, Sébastien Béhuret wrote:
> > Package: hdparm
> > Version: 9.58+ds-1
> > Severity: serious
> >
> > Dear Maintainers,
> >
> > In this version of hdparm, a new option 'force_spindown_time' was
> > introduced to set the spindown time for disks that don't support APM.
> > This option is supposed to translate to hdparm -S, similarly to the
> > original option 'spindown_time'.
> >
> > hdparm package comes with 3 main scripts:
> >
> > 1) /usr/lib/pm-utils/power.d/95hdparm-apm
> > This script will translate 'force_spindown_time' to hdparm -S and apply
> > the option even if APM was not detected.
> > This is the desired behavior.
> >
> > 2) /etc/apm/event.d/20hdparm
> > This script will ignore /etc/hdparm.conf and apply hard-coded defaults
> > instead.
> > This behavior is unexpected.
> > Expected/Desired behavior: Read /etc/hdparm.conf and apply relevant
> options.
> >
> > 3) /lib/hdparm/hdparm-functions (sourced from /lib/udev/hdparm, which is
> > invoked by udev rule /lib/udev/rules.d/85-hdparm.rules)
> > - 'force_spindown_time' is buggy because it is not converted back to -S,
> > which leads to a syntax error during hdparm execution (e.g. hdparm
> > force_spindown_time$VALUE instead of hdparm -S$VALUE).
> > - Both options 'spindown_time' and 'force_spindown_time' are processed
> > even if APM is not supported. From the comments in the configuration
> > file (/etc/hdparm.conf), it is understood that 'spindown_time' will be
> > applied for APM disks only and 'force_spindown_time' for all disks (or
> > possibly for non-APM disks only).
> > - The scripts will also apply hard-coded defaults for -S and -B if APM
> > was detected. The hard-coded defaults differ from those used in
> > /etc/apm/event.d/20hdparm, leading to inconsistent behavior.
> >
> > 4) Additional issues with non-APM disks:
> > - Manually invoking hdparm -S$VALUE /dev/sdx is simply ignored even
> > though hdparm executes successfully. The disks do not spin down after
> > the time delay when there was no access.
> > - Manually invoking hdparm -y /dev/sdx will spin down the disks
> > immediately. The disks will not wake up unless 

Bug#930796: spindown_time and force_spindown_time are broken in hdparm 9.58+ds-1

2019-06-20 Thread Sébastien Béhuret
Package: hdparm
Version: 9.58+ds-1
Severity: serious

Dear Maintainers,

In this version of hdparm, a new option 'force_spindown_time' was
introduced to set the spindown time for disks that don't support APM.
This option is supposed to translate to hdparm -S, similarly to the
original option 'spindown_time'.

The hdparm package comes with 3 main scripts:

1) /usr/lib/pm-utils/power.d/95hdparm-apm
This script will translate 'force_spindown_time' to hdparm -S and apply the
option even if APM was not detected.
This is the desired behavior.

2) /etc/apm/event.d/20hdparm
This script will ignore /etc/hdparm.conf and apply hard-coded defaults
instead.
This behavior is unexpected.
Expected/Desired behavior: Read /etc/hdparm.conf and apply relevant options.

3) /lib/hdparm/hdparm-functions (sourced from /lib/udev/hdparm, which is
invoked by udev rule /lib/udev/rules.d/85-hdparm.rules)
- 'force_spindown_time' is buggy because it is not converted back to -S,
which leads to a syntax error during hdparm execution (e.g. hdparm
force_spindown_time$VALUE instead of hdparm -S$VALUE).
- Both options 'spindown_time' and 'force_spindown_time' are processed even
if APM is not supported. From the comments in the configuration file
(/etc/hdparm.conf), it is understood that 'spindown_time' will be applied
for APM disks only and 'force_spindown_time' for all disks (or possibly for
non-APM disks only).
- The scripts will also apply hard-coded defaults for -S and -B if APM was
detected. The hard-coded defaults differ from those used in
/etc/apm/event.d/20hdparm, leading to inconsistent behavior.

4) Additional issues with non-APM disks:
- Manually invoking hdparm -S$VALUE /dev/sdx is simply ignored even though
hdparm executes successfully: the disks do not spin down after the delay
elapses with no access.
- Manually invoking hdparm -y /dev/sdx will spin down the disks
immediately. The disks will not wake up unless they are accessed, which is
the expected behavior.

These were all working fine in hdparm 9.51+ds-1+deb9u1, which is the
current version in stretch.

In short, it is currently impossible to obtain a consistent and working
configuration for non-APM disks.

Many thanks and regards,
Sebastien Behuret


Bug#807208: Various bugs in /etc/init.d/tahoe-lafs

2016-05-10 Thread Sébastien Béhuret
Dear Ramakrishnan,

Thank you for the update.

It seems like the patch would fix the first point, but not the second and
third ones. I think it would be sufficient to add some basic checks in the
node_uid() function.

Many thanks and regards,
Sebastien


On Fri, Apr 29, 2016 at 4:36 PM, Ramakrishnan Muthukrishnan <
r...@rkrishnan.org> wrote:

> Sébastien Béhuret <sbehu...@gmail.com> writes:
>
> > Package: tahoe-lafs
> > Version: 1.10.2-2
> > Tags: patch
> >
> >
> > Dear Maintainer,
> >
> > There are a couple of bugs in /etc/init.d/tahoe-lafs:
> >
> > - When AUTOSTART is set to "none", the initscript attempts to start the
> > node “/var/lib/tahoe-lafs/none”.
> > - When AUTOSTART lists a non-existing node, the initscript attempts to
> > start it.
> > - When a node is not owned by any existing user (a node with a uid but
> > without a username), stat -c %U returns "UNKNOWN".
> >
> > The attached patch resolves these issues. However, for the third issue,
> it
> > may be a good idea to allow starting nodes that are not owned by a
> regular
> > user, perhaps by using sudo -u '#uid' -g '#uid' instead of su.
>
> Dear Sebastien,
>
> Sorry for a very late response to this bug. Thanks a lot for the report.
>
> For the first two points, I tried to solve the issue by exiting
> immediately. Do you think that will work?
>
> diff --git a/debian/tahoe-lafs.init b/debian/tahoe-lafs.init
> index 27a614b..548d77a 100755
> --- a/debian/tahoe-lafs.init
> +++ b/debian/tahoe-lafs.init
> @@ -77,6 +77,7 @@ start|stop|restart)
>  if [ $# -eq 0 ]; then
>  if [ "$AUTOSTART" = "none" ] || [ -z "$AUTOSTART" ]; then
>  log_warning_msg " Autostart disabled."
> +exit 0
>  fi
>  if [ "$AUTOSTART" = "all" ]; then
>  # all nodes shall be taken care of automatically
>
>
> Thanks
> Ramakrishnan
>


Bug#807208: Various bugs in /etc/init.d/tahoe-lafs

2015-12-06 Thread Sébastien Béhuret
Package: tahoe-lafs
Version: 1.10.2-2
Tags: patch


Dear Maintainer,

There are a couple of bugs in /etc/init.d/tahoe-lafs:

- When AUTOSTART is set to "none", the initscript attempts to start the
node “/var/lib/tahoe-lafs/none”.
- When AUTOSTART lists a non-existing node, the initscript attempts to
start it.
- When a node is not owned by any existing user (a node with a uid but
without a username), stat -c %U returns "UNKNOWN".

The attached patch resolves these issues. However, for the third issue, it
may be a good idea to allow starting nodes that are not owned by a regular
user, perhaps by using sudo -u '#uid' -g '#uid' instead of su.
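
To illustrate, the su call could hypothetically be replaced with something
along these lines (node_uid, node_gid and the tahoe invocation are
placeholders, not the initscript's actual code); sudo accepts numeric IDs
via the #uid syntax as long as the sudoers targetpw option is not set:

sudo -u "#${node_uid}" -g "#${node_gid}" -- tahoe restart "$CONFIG_DIR/${node_dir}"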

Regards,
Sebastien Behuret



*** /etc/init.d/tahoe-lafs	2015-09-16 06:40:00.0 +0100
--- tahoe-lafs	2015-12-06 13:41:44.487529209 +0000
***************
*** 39,45 ****

  node_uid () {
      local node_dir="$1"
!     stat -c %U "$CONFIG_DIR/${node_dir}"
  }

  _tahoe () {
--- 39,45 ----

  node_uid () {
      local node_dir="$1"
!     [ -d "$CONFIG_DIR/${node_dir}" ] && stat -c %U "$CONFIG_DIR/${node_dir}"
  }

  _tahoe () {
***************
*** 47,57 ****
--- 47,67 ----
      local node_name="$2"
      local node_uid=$(node_uid "$node_name")

+     if [ -z "$node_uid" ]; then
+         log_failure_msg "${node_name} node directory does not exist!"
+         return 1
+     fi
+
      if [ "$node_uid" = "root" ]; then
          log_failure_msg "${node_name} node directory shouldn't be owned by root!"
          return 1
      fi

+     if [ "$node_uid" = "UNKNOWN" ]; then
+         log_failure_msg "${node_name} node directory is not owned by any user!"
+         return 1
+     fi
+
      case "$action" in
          start|restart)
              su -s "/bin/sh" \
***************
*** 77,84 ****
      if [ $# -eq 0 ]; then
          if [ "$AUTOSTART" = "none" ] || [ -z "$AUTOSTART" ]; then
              log_warning_msg " Autostart disabled."
!         fi
!         if [ "$AUTOSTART" = "all" ]; then
              # all nodes shall be taken care of automatically
              for name in $(nodes_in $CONFIG_DIR); do
                  _tahoe "$command" "$name" || STATUS="$?"
--- 87,93 ----
      if [ $# -eq 0 ]; then
          if [ "$AUTOSTART" = "none" ] || [ -z "$AUTOSTART" ]; then
              log_warning_msg " Autostart disabled."
!         elif [ "$AUTOSTART" = "all" ]; then
              # all nodes shall be taken care of automatically
              for name in $(nodes_in $CONFIG_DIR); do
                  _tahoe "$command" "$name" || STATUS="$?"


Bug#755057: Empty /usr/share/php5/php.ini-production in package php5-common on jessie/sid

2014-07-17 Thread Sébastien Béhuret
Package: php5-common
Version: 5.6.0~rc2+dfsg-3

$ ls -l /usr/share/php5/php.ini-production
-rw-r--r-- 1 root root 0 Jul 11 12:38 /usr/share/php5/php.ini-production

This may also cause /var/lib/dpkg/info/libapache2-mod-php5.postinst to fail:

Setting up libapache2-mod-php5 (5.6.0~rc2+dfsg-3) ...
usage: fail($reason, $retval) at /usr/sbin/a2query line 168.
usage: fail($reason, $retval) at /usr/sbin/a2query line 168.
usage: fail($reason, $retval) at /usr/sbin/a2query line 168.
/var/lib/dpkg/info/libapache2-mod-php5.postinst: 284: [: !=: unexpected operator
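
For context, that last error is the classic unquoted-empty-variable pitfall
in sh; a minimal reproduction (hypothetical, not the actual postinst code):

state=""
[ $state != enabled ]     # expands to: [ != enabled ] -> "unexpected operator"
[ "$state" != enabled ]   # quoting keeps the test syntactically valid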