Re: Exiting an unconditional juju debug-hooks session

2017-06-04 Thread John Meinel
Doesn't the equivalent of ^A ^D (from screen) also work to just disconnect
from the session? (http://www.dayid.org/comp/tm.html says it would be ^B d.)
Or switching to window 0 and exiting that one first?

I thought we had a quick way to disconnect, but it's possible you have to
exit twice and that fast-firing hooks always catch a new window before you
can exit a second time.
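
As a sketch of what I mean (untested, and assuming the debug-hooks tmux
session keeps the default ^B prefix):

tmux detach-client    # or just press ^B then d inside the debug window
# once the attach returns, the client script's EXIT trap should clean up
# and the agent can carry on with the queued hooks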

John
=:->

Re: OS X VMS on JAAS

2017-06-04 Thread James Beedy
@john, @andrew thanks for the details here

On Sat, Jun 3, 2017 at 10:21 PM, Andrew Wilkins <
andrew.wilk...@canonical.com> wrote:

> On Sat, Jun 3, 2017 at 2:56 PM John Meinel  wrote:
>
>> You can add a manually provisioned machine to any model, as long as there
>> is connectivity from the machine to the controller. Now, I would have
>> thought initial setup was initiated by the Controller, but it's possible
>> that initial setup is actually initiated from the client.
>>
>
> Given the command:
>
> $ juju add-machine ssh:
>
> it goes something like this:
>
> 1. client connects to  via SSH, and performs basic hardware/OS
> discovery
> 2. client asks controller to add a machine entry, and controller returns a
> script to be executed on the target machine, using the discovered details,
> in order to fetch and install jujud
> 3. client executes that script over the SSH connection
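>
> For illustration, the whole flow is roughly (the address and charm here
> are made-up placeholders):
>
> juju add-machine ssh:ubuntu@192.168.56.10   # discover the VM, register it, install jujud
> juju deploy ubuntu --to 0                   # then place units on it, assuming it came up as machine 0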
>
>> Once initial setup is complete, then it is definitely true that all
>> connections are initiated from the agent running on the controlled machine
>> to the controller. The controller no longer tries to socket.connect to the
>> machine. (In 1.X 'actions' were initiated via ssh from the controller, but
>> in 2.X the agents listen to see if there are any actions to run like they
>> do for all other changes.)
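>>
>> A quick way to see that from a controlled machine (assuming the default
>> controller API port of 17070; the command is just an illustration):
>>
>> ss -tnp | grep 17070   # established connections here are opened by the local jujud agent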
>>
>> Now, given that he added a model in "us-east-1", if he ever did just a
>> plain "juju add-machine" or "juju deploy" (without --to), it would
>> definitely create a new instance in AWS and start configuring it, rather
>> than using your VM.
>>
>> That is why using something like the "lxd provider" would be a more
>> natural use case, but according to James the sticking point is having to
>> set up a controller in the first place. So "--to lxd:0" is easier for them
>> to think about than setting up a provider and letting it decide how to
>> allocate machines.
>>
>> Note, it probably wouldn't be possible to use JAAS to drive an LXD
>> provider, because *that* would have JAAS trying to make a direct
>> connection to your LXD agent in order to provision the next machine.
>> However, "--to lxd:0" has the local juju agent (running for 'machine 0')
>> talking to the local LXD agent in order to create a container.
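>>
>> For example (a sketch; the charm name is arbitrary):
>>
>> juju deploy ubuntu --to lxd:0   # the container is created by machine 0's agent via its local LXD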
>>
>> John
>> =:->
>>
>>
>> On Fri, Jun 2, 2017 at 6:28 PM, Jay Wren  wrote:
>>
>>> I do not understand how this works. Could someone with knowledge of how
>>> jujud on a controller communicates with jujud agents on units describe
>>> how that is done?
>>>
>>> My limited understanding must be wrong given that James has this working.
>>>
>>> This is what I thought:
>>>
>>> On most cloud providers: add-machine instructs the cloud provider to
>>> start a new instance, and the cloud-config passed to cloud-init includes
>>> how to download the jujud agent, run it, and configure it with public-key
>>> trust of the juju controller.
>>>
>>> On a manually added machine: the same thing, except that instead of
>>> cloud-init and cloud-config, an SSH connection is used to run the same
>>> commands.
>>>
>>> I had thought the juju controller was initiating the SSH connection to
>>> the address given in the add-machine command, and that a
>>> non-internet-routable address would simply not work, as the controller
>>> cannot open any TCP connection to it. This is where my understanding stops.
>>>
>>> Please, anyone, describe how this works?
>>> --
>>> Jay
>>>
>>>
>>> On Fri, Jun 2, 2017 at 9:42 AM, James Beedy 
>>> wrote:
>>>
 I think the primary advantage is less clutter for the end user: the
 difference between the end user having to bootstrap and control things from
 inside the VM vs. from their host. For some reason this small change made
 some of my users, who were previously not really catching on, far more apt
 to jump in. I personally like it because these little VMs go further when
 they don't have the controller on them as well. @jameinel totally, possibly
 I'll add the bridge bits in place of the lxd-proxy in that write-up, or
 possibly in another.

 ~James

 On Jun 2, 2017, at 12:56 AM, John Meinel 
 wrote:

 Interesting. I wouldn't have thought to use a manually added machine with
 JAAS to deploy applications to your local VirtualBox. Is there a reason
 this is easier than just "juju bootstrap lxd" from inside the VM?

 I suppose our default lxd provider puts the new containers on a NAT
 bridge, though you can reconfigure 'lxdbr0' to bridge your 'eth0' as well.
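
 For reference, that in-VM path is just a couple of commands (a sketch;
 'localhost' is the name the 2.x client gives the built-in LXD cloud, and
 the controller name is arbitrary):

 juju bootstrap localhost lxd-ctrl
 juju deploy ubuntu   # units land in local LXD containers on the VM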

 John
 =:->


 On Fri, Jun 2, 2017 at 8:33 AM, James Beedy 
 wrote:

> https://medium.com/@jamesbeedy/using-jaas-to-deploy-lxd-containers-to-virtualbox-vms-on-os-x-a06a8046756a
>

Exiting an unconditional juju debug-hooks session

2017-06-04 Thread Dmitrii Shcherbakov
Hi everybody,

Currently if you do

juju debug-hooks  # no event (hook) in particular

each time there is a new event you will get a new tmux window opened,
and this is done serially, as there is no parallelism in hook
execution on a given logical machine. This is all good and intentional,
but once you've observed the charm behavior and want to let it work
without your interference again, you need to end your tmux session.
This can be hard to do via the `exit [status]` shell builtin when you
get a lot of events (think of an OpenStack HA deployment): each time you do

./hooks/$JUJU_HOOK_NAME && exit

you are dropped into window '0', and a new window is created for the
next queued event, for which you have to manually execute the hook and
exit again, until you have processed the backlog.

tmux list-windows
0: bash- (1 panes) [239x62] [layout bbde,239x62,0,0,1] @1   # <--- dropping here after `exit`
1: update-status* (1 panes) [239x62] [layout bbe0,239x62,0,0,3] @3 (active)

https://jujucharms.com/docs/stable/authors-hook-debug#running-a-debug-session
"Note: To allow Juju to continue processing events normally, you must
exit the hook execution with a zero return code (using the exit
command), otherwise all further events on that unit may be blocked
indefinitely."
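
In shell terms (just a sketch of that note): a bare `exit` returns the status
of the last command, so after poking at a failed hook you would exit with an
explicit zero to keep the unit from being blocked:

./hooks/$JUJU_HOOK_NAME   # suppose this failed and you are done debugging
exit 0                    # return a zero code so further events are processed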

My initial thought was something like this: send SIGTERM to a child of
sshd, which will terminate your SSH session:

unset n
p=$(pgrep -f 'tmux attach-session.*'$JUJU_UNIT_NAME)   # the tmux client for this unit
while [ "$n" != "sshd" ]; do
  pc=$p; p=$(ps -o ppid= $p | tr -d ' '); echo $p      # walk up the process tree
  n=$(basename $(readlink /proc/$p/exe || echo -n none))
done && kill $pc                                       # kill the child of sshd

as an agent waits for an SSH client to exit:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/server.go#L53

After thinking about it some more, I thought it would be cleaner to
just kill a specific tmux session:

tmux list-sessions
gluster/0: 2 windows (created Fri Jun  2 20:22:30 2017) [239x62] (attached)

./hooks/$JUJU_HOOK_NAME && tmux kill-session -t $JUJU_UNIT_NAME
[exited]
Cleaning up the debug session
no server running on /tmp/tmux-0/default
Connection to 10.10.101.77 closed.

The cleanup message comes from debugHooksClientScript, which simply sets
up a bash trap on EXIT:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/client.go#L51

Judging by the code, it should be pretty safe to do so: unless there is
a debug session in the debug context for a particular unit and it matches
the hook name, hooks will be executed regularly by the agent instead of
in a new tmux window:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/runner.go#L225
debugctx := debug.NewHooksContext(runner.context.UnitName())
// run the hook via the debug session only if one is active and matches this hook
if session, _ := debugctx.FindSession(); session != nil && session.MatchHook(hookName) {
    logger.Infof("executing %s via debug-hooks", hookName)
    err = session.RunHook(hookName, runner.paths.GetCharmDir(), env)
} else {
    err = runner.runCharmHook(hookName, env, charmLocation)
}
return runner.context.Flush(hookName, err)

There are two scripts:

- a client script, executed via an ssh client when you run juju debug-hooks
- a server script, executed in the `RunHook` function by the agent, which
  creates a new window in an existing tmux session.

Client side:
https://github.com/juju/juju/blob/develop/cmd/juju/commands/debughooks.go#L137
script := base64.StdEncoding.EncodeToString([]byte(unitdebug.ClientScript(debugctx, c.hooks)))
innercmd := fmt.Sprintf(`F=$(mktemp); echo %s | base64 -d > $F; . $F`, script)
args := []string{fmt.Sprintf("sudo /bin/bash -c '%s'", innercmd)}
c.Args = args
return c.sshCommand.Run(ctx)

Server script:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/server.go#L90
Client script:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/client.go#L49

A worker waits until the client exits by monitoring a file lock at
ClientExitFileLock:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/server.go#L34
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/server.go#L53
The path of the lock itself for a particular session:
https://github.com/juju/juju/blob/develop/worker/uniter/runner/debug/common.go#L24

---

If this approach of killing a tmux session is fine, then I could
create a PR for the doc repo and for the description in
debugHooksServerScript to mention it explicitly.

I doubt it deserves a helper command; rather, a more verbose explanation
should be enough.

Has anybody else encountered the need to do the same?

Best Regards,
Dmitrii Shcherbakov

Field Software Engineer
IRC (freenode): Dmitrii-Sh

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev