Bug#1005870: Unable to activate singularity container from within boot service script

2022-02-28 Thread Fox, Ron
Thanks for your work. I’m able to activate containers from within a script
run at boot time. The update did the trick.


Ron.


Ron Fox
Senior Scientist
Facility for Rare Isotope Beams
Michigan State University
640 South Shaw Lane
East Lansing, MI 48824, USA
Tel. 517-908-7349
Email: f...@frib.msu.edu




From: Nilesh Patra <nil...@nileshpatra.info>
Sent: Sunday, February 27, 2022 1:54 AM
To: Fox, Ron <f...@nscl.msu.edu>
Cc: 1005...@bugs.debian.org
Subject: Re: Unable to activate singularity container from within boot service script


On Wed, Feb 23, 2022 at 11:50:30PM +, Fox, Ron wrote:
> Thank you, we'll give it a try as soon as it migrates out to our internal
> mirrors.

Ron,
Any update on this? Is it there on your internal servers by now?

Regards,
Nilesh



Bug#1005870: Unable to activate singularity container from within boot service script

2022-02-27 Thread Fox, Ron
We just got it Friday. I'll test this tomorrow, thanks.
Ron

On Feb 27, 2022 01:54, Nilesh Patra  wrote:

On Wed, Feb 23, 2022 at 11:50:30PM +, Fox, Ron wrote:
> Thank you, we'll give it a try as soon as it migrates out to our internal
> mirrors.

Ron,
Any update on this? Is it there on your internal servers by now?

Regards,
Nilesh


Bug#1005870: Unable to activate singularity container from within boot service script

2022-02-27 Thread Fox, Ron
Thank you, we'll give it a try as soon as it migrates out to our internal
mirrors.

RF

On Feb 23, 2022 14:12, Nilesh Patra  wrote:


On Wed, 16 Feb 2022 12:58:57 + "Fox, Ron"  wrote:

> On Debian 11; note that this package comes from debian-unstable.
> I am attempting to activate a Singularity squashfs image from a script that
> runs at boot time.  Singularity segfaults with the debug/traceback in the
> attached file portmanager.
>
> Here is the tail of that file in case the bug reporting system does not
> support attachments:
> [...]

A new release was uploaded to unstable a few hours ago.
Can you please check whether the problem still persists with it?

Regards,
Nilesh
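
For reference, pulling the updated package in and re-testing amounts to roughly the following (a sketch; it assumes the mirror in use already carries the new upload, and the exact version string is not shown in this thread):

apt update
apt install singularity-container
apt policy singularity-container   # confirm which version is now installed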



Bug#1005870: Unable to activate singularity container from within boot service script

2022-02-16 Thread Fox, Ron
Package: singularity-container
Version: 3.5.2+ds2-1
Severity: important

On Debian 11; note that this package comes from debian-unstable.
I am attempting to activate a Singularity squashfs image from a script that runs
at boot time.  Singularity segfaults with the debug/traceback in the attached
file portmanager.

Here is the tail of that file in case the bug reporting system does not support 
attachments:

sandbox format initializer returned: not a directory image
DEBUG   [U=0,P=1210]   Init()            Check for sif image format
DEBUG   [U=0,P=1210]   Init()            sif format initializer returned: SIF magic not found
DEBUG   [U=0,P=1210]   Init()            Check for squashfs image format
DEBUG   [U=0,P=1210]   Init()            squashfs image format detected
DEBUG   [U=0,P=1210]   setSessionLayer() Overlay seems supported and allowed by kernel
DEBUG   [U=0,P=1210]   setSessionLayer() Attempting to use overlayfs (enable overlay = try)
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x606f8a]

goroutine 7 [running]:
github.com/sylabs/singularity/internal/pkg/runtime/engine/config/starter.(*Config).SetCapabilities(0xc109c8, {0x8e7c7e, 0x0}, {0xc00046eb00, 0x29, 0x0})
        /build/singularity-container-aOOjKg/singularity-container-3.5.2+ds2/_build/src/github.com/sylabs/singularity/internal/pkg/runtime/engine/config/starter/starter_linux.go:403 +0x26a
github.com/sylabs/singularity/internal/pkg/runtime/engine/singularity.(*EngineOperations).PrepareConfig(0xc0003b7d60, 0xc109c8)
        /build/singularity-container-aOOjKg/singularity-container-3.5.2+ds2/_build/src/github.com/sylabs/singularity/internal/pkg/runtime/engine/singularity/prepare_linux.go:140 +0x5ab
github.com/sylabs/singularity/internal/app/starter.StageOne(0x988d40, 0xc0ed08)
        /build/singularity-container-aOOjKg/singularity-container-3.5.2+ds2/_build/src/github.com/sylabs/singularity/internal/app/starter/stage_linux.go:27 +0x6a
main.startup()
        /build/singularity-container-aOOjKg/singularity-container-3.5.2+ds2/_build/src/github.com/sylabs/singularity/cmd/starter/main_linux.go:56 +0x1ed
created by main.main
        /build/singularity-container-aOOjKg/singularity-container-3.5.2+ds2/_build/src/github.com/sylabs/singularity/cmd/starter/main_linux.go:102 +0x25
VERBOSE [U=0,P=835]    wait_child()      stage 1 exited with status 2

Once the system is booted, I can activate the image in the same way
with no problem (same command).  I can even do this as an unprivileged user.
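
For illustration, the post-boot invocation that works, with the script's variables expanded, looks roughly like this (a sketch reconstructed from the definitions in the excerpt below, not a verbatim shell transcript):

singularity -d exec --bind /usr/opt/opt-buster:/usr/opt,/scratch --no-home \
    /usr/opt/nscl-buster.img \
    /usr/opt/daq/current/bin/DaqPortManager \
    -log /scratch/nscldaq/log/portmgr.log \
    -pidfile /scratch/nscldaq/run/portmgr.pid \
    /scratch/portmanager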

Here are excerpts from the script that show what I'm doing when the failure occurs:


#!/bin/sh
### BEGIN INIT INFO
# Provides:  nscldaq
# Required-Start:$network $time $named $remote_fs $syslog
# Required-Stop: $network $time $named $remote_fs $syslog
# Should-Start:  nscldaq
# Default-Start: 2 3 4 5
# Default-Stop:  0 1 6
# Short-Description: NSCL data acquisition daemons
# Description:   NSCL data acquisition daemons
### END INIT INFO
export _SYSTEMCTL_SKIP_REDIRECT=true

...


##
#  Some definitions:
#

#  We run the port manager and the ring master under the following singularity
#  container, with BUSTEROPT bound to /usr/opt.

SINGULARITY_CONTAINER="/usr/opt/nscl-buster.img"
USROPT="/usr/opt/opt-buster"

...
PIDDIR="/scratch/nscldaq/run"
LOGDIR="/scratch/nscldaq/log"

DAQHOME="/usr/opt/daq/current"
DAQBIN="${DAQHOME}/bin"

DAQPORTMANAGER="${DAQBIN}/DaqPortManager"
DAQPORTMANAGERPIDFILE="${PIDDIR}/portmgr.pid"
DAQPORTMANAGERLOGFILE="${LOGDIR}/portmgr.log"


...

start_portmanager() {
    nohup singularity -d exec --bind ${USROPT}:/usr/opt,/scratch --no-home \
        ${SINGULARITY_CONTAINER} \
        ${DAQPORTMANAGER} \
        -log ${DAQPORTMANAGERLOGFILE} \
        -pidfile ${DAQPORTMANAGERPIDFILE} \
        /scratch/portmanager 2>&1 &
    log_daemon_msg portmanager
    sleep 3 # Let the port manager start.
}


Note that I have attempted to do the same thing after converting the container 
image to a .sif file and that too failed with essentially the same result.
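
For reference, that conversion is typically done along these lines (a sketch; the output filename is illustrative), with SINGULARITY_CONTAINER then pointed at the .sif file:

# Build a SIF file from the existing squashfs image (output name is illustrative).
singularity build /usr/opt/nscl-buster.sif /usr/opt/nscl-buster.img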

Thank you for any help you might be able to provide. I'd be happy to provide any
additional information.

Ron.

