Re: Exiting an unconditional juju debug-hooks session

2017-06-05 Thread Dmitrii Shcherbakov
John,

Any of these will work:

- ./hooks/$JUJU_HOOK_NAME && tmux kill-session -t $JUJU_UNIT_NAME
- ./hooks/$JUJU_HOOK_NAME, C-a d
- C-a 0, exit, ./hooks/$JUJU_HOOK_NAME, C-a exit

This is because we have an `exec` on new session creation
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/client.go#L103
and on attachment to an existing session:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/client.go#L61

We have screen keys enabled for tmux, so the prefix is C-a.
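(For reference, "screen keys" typically amounts to a tmux configuration along these lines - a sketch of the usual setup, not necessarily the exact conf Juju generates:)

```shell
# Screen-style prefix, expressed as tmux commands (sketch; Juju sets this
# up in its own generated configuration, which may differ):
tmux set-option -g prefix C-a   # use C-a instead of the default C-b
tmux unbind-key C-b
tmux bind-key C-a send-prefix   # C-a C-a sends a literal C-a through
```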

However, it is interesting that on session detach I still get a
message (defined in a function associated with the trap on EXIT) that
says "Cleaning up the debug session" - it shouldn't be there after the
exec.

Looking at the process tree, I can see 4 bash processes under an sshd
process with the same script in base64:
http://paste.ubuntu.com/24781121/

It seems like bash (pid 3391) was forked 4 times consecutively, with
the last process (3404) `exec`-ing `tmux attach-session -t
{unit_name}`.

I will need to check why there are 4 of them, but detaching like
that is fine by me.
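To double-check where the extra bash processes come from, something like the following can be run inside the tmux window to print the ancestor chain of the current shell. This is a sketch that assumes a Linux /proc filesystem and ancestors whose comm names contain no spaces (true for bash/sshd):

```shell
# Walk up from the current shell to init, printing "pid (comm)" for each
# ancestor - useful for spotting the duplicated bash processes.
pid=$$
while [ "$pid" -gt 1 ] && [ -r "/proc/$pid/stat" ]; do
  # Fields in /proc/<pid>/stat: 1=pid, 2=(comm), 4=ppid (see proc(5));
  # splitting on spaces is only safe when comm itself has no spaces.
  comm=$(cut -d' ' -f2 "/proc/$pid/stat")
  echo "$pid $comm"
  pid=$(cut -d' ' -f4 "/proc/$pid/stat")
done
```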
Best Regards,
Dmitrii Shcherbakov

Field Software Engineer
IRC (freenode): Dmitrii-Sh


On Sun, Jun 4, 2017 at 6:56 PM, John Meinel  wrote:
> Doesn't the equivalent of ^A ^D (from screen) also work to just disconnect
> all sessions? (http://www.dayid.org/comp/tm.html says it would be ^B d). Or
> switching to session 0 and exiting that one first?
>
> I thought we had a quick way to disconnect, but it's possible you have to
> exit twice and that fast-firing hooks always catch a new window before you can
> exit a second time.
>
> John
> =:->
>
>
> On Sun, Jun 4, 2017 at 5:56 PM, Dmitrii Shcherbakov
>  wrote:
>>
>> Hi everybody,
>>
>> Currently if you do
>>
>> juju debug-hooks  # no event (hook) in particular
>>
>> each time there is a new event you will get a new tmux window open and
>> this will be done serially as there is no parallelism in hook
>> execution on a given logical machine. This is all good and intentional
>> but when you've observed the charm behavior and want to let it work
>> without your interference again, you need to end your tmux session.
>> This can be hard to do via the `exit [status]` shell builtin when you get a lot
>> of events (think of an OpenStack HA deployment) - each time you do
>>
>> ./hooks/$JUJU_HOOK_NAME && exit
>>
>> you are dropped into a session '0' and a new session is created for a
>> queued event for which you have to manually execute a hook and exit
>> again until you process the backlog.
>>
>> tmux list-windows
>> 0: bash- (1 panes) [239x62] [layout bbde,239x62,0,0,1] @1 # <---
>> dropping here after `exit`
>> 1: update-status* (1 panes) [239x62] [layout bbe0,239x62,0,0,3] @3
>> (active)
>>
>>
>> https://jujucharms.com/docs/stable/authors-hook-debug#running-a-debug-session
>> "Note: To allow Juju to continue processing events normally, you must
>> exit the hook execution with a zero return code (using the exit
>> command), otherwise all further events on that unit may be blocked
>> indefinitely."
>>
>> My initial thought was something like this - send SIGTERM to a child
>> of sshd which will terminate your ssh session:
>> unset n ; p=`pgrep -f 'tmux attach-session.*'$JUJU_UNIT_NAME` ; while
>> [ "$n" != "sshd" ] ; do pc=$p ; p=$(ps -o ppid= $p | tr -d ' ') ; echo
>> $p ; n=`basename $(readlink /proc/$p/exe || echo -n none)` ; done &&
>> kill $pc
>>
>> as an agent waits for an SSH client to exit:
>>
>> https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/server.go#L53
>>
>> After thinking about it some more, I thought it would be cleaner to
>> just kill a specific tmux session:
>>
>> tmux list-sessions
>> gluster/0: 2 windows (created Fri Jun  2 20:22:30 2017) [239x62]
>> (attached)
>>
>> ./hooks/$JUJU_HOOK_NAME && tmux kill-session -t $JUJU_UNIT_NAME
>> [exited]
>> Cleaning up the debug session
>> no server running on /tmp/tmux-0/default
>> Connection to 10.10.101.77 closed.
>>
>> The cleanup message comes from debugHooksClientScript that simply sets
>> up a bash trap on EXIT:
>>
>> https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/client.go#L51
>>
>> Judging by the code, it should be pretty safe to do so - unless there
>> is a debug session in a debug context for a particular unit, other
>> hooks will be executed regularly by an agent instead of creating a new
>> tmux window:
>>
>> https://github.com/juju/juju/blob/develop/worker/uniter/runner/runner.go#L225
>> debugctx := debug.NewHooksContext(runner.context.UnitName())
>> if session, _ := debugctx.FindSession(); session != nil && session.MatchHook(hookName) {
>>     logger.Infof("executing %s via debug-hooks", hookName)
>>     err = session.RunHook(hookName, runner.paths.GetCharmDir(), env)
>> } else {
>>     err = runner.runCharmHook(hookName, env, charmLocation)
>> }
>> return runner.context.Flush(hookName, err)
>>
>> There are two scripts:
>>
>> - a client script 

Re: Exiting an unconditional juju debug-hooks session

2017-06-04 Thread John Meinel
Doesn't the equivalent of ^A ^D (from screen) also work to just disconnect
all sessions? (http://www.dayid.org/comp/tm.html says it would be ^B d). Or
switching to session 0 and exiting that one first?

I thought we had a quick way to disconnect, but it's possible you have to
exit twice and that fast-firing hooks always catch a new window before you can
exit a second time.

John
=:->


On Sun, Jun 4, 2017 at 5:56 PM, Dmitrii Shcherbakov <
dmitrii.shcherba...@canonical.com> wrote:

> Hi everybody,
>
> Currently if you do
>
> juju debug-hooks  # no event (hook) in particular
>
> each time there is a new event you will get a new tmux window open and
> this will be done serially as there is no parallelism in hook
> execution on a given logical machine. This is all good and intentional
> but when you've observed the charm behavior and want to let it work
> without your interference again, you need to end your tmux session.
> This can be hard to do via the `exit [status]` shell builtin when you get a lot
> of events (think of an OpenStack HA deployment) - each time you do
>
> ./hooks/$JUJU_HOOK_NAME && exit
>
> you are dropped into a session '0' and a new session is created for a
> queued event for which you have to manually execute a hook and exit
> again until you process the backlog.
>
> tmux list-windows
> 0: bash- (1 panes) [239x62] [layout bbde,239x62,0,0,1] @1 # <---
> dropping here after `exit`
> 1: update-status* (1 panes) [239x62] [layout bbe0,239x62,0,0,3] @3 (active)
>
> https://jujucharms.com/docs/stable/authors-hook-debug#running-a-debug-session
> "Note: To allow Juju to continue processing events normally, you must
> exit the hook execution with a zero return code (using the exit
> command), otherwise all further events on that unit may be blocked
> indefinitely."
>
> My initial thought was something like this - send SIGTERM to a child
> of sshd which will terminate your ssh session:
> unset n ; p=`pgrep -f 'tmux attach-session.*'$JUJU_UNIT_NAME` ; while
> [ "$n" != "sshd" ] ; do pc=$p ; p=$(ps -o ppid= $p | tr -d ' ') ; echo
> $p ; n=`basename $(readlink /proc/$p/exe || echo -n none)` ; done &&
> kill $pc
>
> as an agent waits for an SSH client to exit:
> https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/server.go#L53
>
> After thinking about it some more, I thought it would be cleaner to
> just kill a specific tmux session:
>
> tmux list-sessions
> gluster/0: 2 windows (created Fri Jun  2 20:22:30 2017) [239x62] (attached)
>
> ./hooks/$JUJU_HOOK_NAME && tmux kill-session -t $JUJU_UNIT_NAME
> [exited]
> Cleaning up the debug session
> no server running on /tmp/tmux-0/default
> Connection to 10.10.101.77 closed.
>
> The cleanup message comes from debugHooksClientScript that simply sets
> up a bash trap on EXIT:
> https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/client.go#L51
>
> Judging by the code, it should be pretty safe to do so - unless there
> is a debug session in a debug context for a particular unit, other
> hooks will be executed regularly by an agent instead of creating a new
> tmux window:
> https://github.com/juju/juju/blob/develop/worker/uniter/runner/runner.go#L225
> debugctx := debug.NewHooksContext(runner.context.UnitName())
> if session, _ := debugctx.FindSession(); session != nil && session.MatchHook(hookName) {
>     logger.Infof("executing %s via debug-hooks", hookName)
>     err = session.RunHook(hookName, runner.paths.GetCharmDir(), env)
> } else {
>     err = runner.runCharmHook(hookName, env, charmLocation)
> }
> return runner.context.Flush(hookName, err)
>
> There are two scripts:
>
> - a client script executed via an ssh client when you run juju debug-hooks
> - a server script which is executed in the `RunHook` function by an
> agent and creates a new window for an existing tmux session.
>
> client side:
> https://github.com/juju/juju/blob/develop/cmd/juju/commands/debughooks.go#L137
> script := base64.StdEncoding.EncodeToString([]byte(unitdebug.ClientScript(debugctx, c.hooks)))
> innercmd := fmt.Sprintf(`F=$(mktemp); echo %s | base64 -d > $F; . $F`, script)
> args := []string{fmt.Sprintf("sudo /bin/bash -c '%s'", innercmd)}
> c.Args = args
> return c.sshCommand.Run(ctx)
>
> Server script:
> https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/server.go#L90
> Client script:
> https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/client.go#L49
>
> A worker waits until a client exits by monitoring a file lock at
> ClientExitFileLock:
> https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/server.go#L34
> https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/server.go#L53
> The path of the lock itself for a particular session:
> https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/common.go#L24
>
> ---
>
> If this approach with killing a tmux session is fine then I could
> create a PR for the doc repo and for the description in the
> 

Exiting an unconditional juju debug-hooks session

2017-06-04 Thread Dmitrii Shcherbakov
Hi everybody,

Currently if you do

juju debug-hooks  # no event (hook) in particular

each time there is a new event you will get a new tmux window open and
this will be done serially as there is no parallelism in hook
execution on a given logical machine. This is all good and intentional
but when you've observed the charm behavior and want to let it work
without your interference again, you need to end your tmux session.
This can be hard to do via the `exit [status]` shell builtin when you get a lot
of events (think of an OpenStack HA deployment) - each time you do

./hooks/$JUJU_HOOK_NAME && exit

you are dropped into a session '0' and a new session is created for a
queued event for which you have to manually execute a hook and exit
again until you process the backlog.

tmux list-windows
0: bash- (1 panes) [239x62] [layout bbde,239x62,0,0,1] @1 # <---
dropping here after `exit`
1: update-status* (1 panes) [239x62] [layout bbe0,239x62,0,0,3] @3 (active)

https://jujucharms.com/docs/stable/authors-hook-debug#running-a-debug-session
"Note: To allow Juju to continue processing events normally, you must
exit the hook execution with a zero return code (using the exit
command), otherwise all further events on that unit may be blocked
indefinitely."

My initial thought was something like this - send SIGTERM to a child
of sshd which will terminate your ssh session:
unset n
p=$(pgrep -f 'tmux attach-session.*'$JUJU_UNIT_NAME)
while [ "$n" != "sshd" ]; do
    pc=$p
    p=$(ps -o ppid= $p | tr -d ' ')
    echo $p
    n=$(basename $(readlink /proc/$p/exe || echo -n none))
done && kill $pc

as an agent waits for an SSH client to exit:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/server.go#L53

After thinking about it some more, I thought it would be cleaner to
just kill a specific tmux session:

tmux list-sessions
gluster/0: 2 windows (created Fri Jun  2 20:22:30 2017) [239x62] (attached)

./hooks/$JUJU_HOOK_NAME && tmux kill-session -t $JUJU_UNIT_NAME
[exited]
Cleaning up the debug session
no server running on /tmp/tmux-0/default
Connection to 10.10.101.77 closed.
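To avoid typing the two commands every time, the combination could be wrapped in a tiny helper. `finish_hook` is a hypothetical name, not part of Juju; it assumes the debug-hooks environment where JUJU_HOOK_NAME and JUJU_UNIT_NAME are set:

```shell
# finish_hook: run the current hook and, if it succeeds, tear down the
# whole debug session in one go (sketch; hypothetical helper).
finish_hook() {
  ./hooks/"$JUJU_HOOK_NAME" && tmux kill-session -t "$JUJU_UNIT_NAME"
}
```

Note that, just like the plain `tmux kill-session`, this only tears the session down when the hook exits zero; a failing hook leaves the session attached for further debugging.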

The cleanup message comes from debugHooksClientScript that simply sets
up a bash trap on EXIT:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/client.go#L51

Judging by the code, it should be pretty safe to do so - unless there
is a debug session in a debug context for a particular unit, other
hooks will be executed regularly by an agent instead of creating a new
tmux window:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/runner.go#L225
debugctx := debug.NewHooksContext(runner.context.UnitName())
if session, _ := debugctx.FindSession(); session != nil && session.MatchHook(hookName) {
    logger.Infof("executing %s via debug-hooks", hookName)
    err = session.RunHook(hookName, runner.paths.GetCharmDir(), env)
} else {
    err = runner.runCharmHook(hookName, env, charmLocation)
}
return runner.context.Flush(hookName, err)

There are two scripts:

- a client script executed via an ssh client when you run juju debug-hooks
- a server script which is executed in the `RunHook` function by an
agent and creates a new window for an existing tmux session.

client side:
https://github.com/juju/juju/blob/develop/cmd/juju/commands/debughooks.go#L137
script := base64.StdEncoding.EncodeToString([]byte(unitdebug.ClientScript(debugctx, c.hooks)))
innercmd := fmt.Sprintf(`F=$(mktemp); echo %s | base64 -d > $F; . $F`, script)
args := []string{fmt.Sprintf("sudo /bin/bash -c '%s'", innercmd)}
c.Args = args
return c.sshCommand.Run(ctx)
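The transport trick in that snippet - base64-encode the whole client script, then decode it into a temp file and source it on the remote end - can be reproduced locally without ssh (an illustrative stand-in, not Juju's actual script):

```shell
# Encode a script to base64, then decode and source it, mirroring the
# `F=$(mktemp); echo %s | base64 -d > $F; . $F` pattern used by the client.
script='echo hello from the decoded script'
encoded=$(printf '%s' "$script" | base64 | tr -d '\n')
F=$(mktemp)
echo "$encoded" | base64 -d > "$F"
. "$F"        # prints: hello from the decoded script
rm -f "$F"
```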

Server script:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/server.go#L90
Client script:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/client.go#L49

A worker waits until a client exits by monitoring a file lock at
ClientExitFileLock:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/server.go#L34
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/server.go#L53
The path of the lock itself for a particular session:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/common.go#L24
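The wait-on-file-lock mechanism can be illustrated with util-linux flock(1) and a throwaway temp file (the real lock path is the one defined in common.go; everything below is a local stand-in):

```shell
# The "server" blocks on a file lock that the "client" holds for the
# lifetime of its session; once the client exits, the lock is released
# and the server proceeds. The temp-file path is illustrative only.
LOCK=$(mktemp)
flock "$LOCK" sleep 1 &          # client: holds the lock while alive
sleep 0.2                        # let the client acquire the lock first
flock "$LOCK" true               # server: blocks until the client exits
echo "client exited, lock released"
```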

---

If this approach with killing a tmux session is fine then I could
create a PR for the doc repo and for the description in the
debugHooksServerScript to explicitly mention it.

I doubt it deserves a dedicated helper command; a more verbose explanation in the docs should suffice.

Has anybody else encountered the need to do the same?

Best Regards,
Dmitrii Shcherbakov

Field Software Engineer
IRC (freenode): Dmitrii-Sh

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju

