Chad Smith has proposed merging ~chad.smith/cloud-init:analyze into 
cloud-init:master.

Requested reviews:
  cloud-init committers (cloud-init-dev)

For more details, see:
https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/328819

tools: Add tooling for basic cloud-init performance analysis.

This branch pulls in a significant portion of cloudinit-analyze into cloud-init 
proper. It adds a subcommand, "cloudinit analyze", to the cloud-init 
command-line utility for quick performance assessment of cloud-init stages and 
events.

On a cloud-init configured instance, running "cloudinit analyze blame", "show", 
or "dump" will report which cloud-init events spent the most wall time, 
allowing quick assessment of the most costly stages of cloud-init.
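
For example (a sketch; each subcommand defaults to reading
/var/log/cloud-init.log on the instance):

  $ cloud-init analyze blame
  $ cloud-init analyze show -i /var/log/cloud-init.log
  $ cloud-init analyze dump -o events.json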

This functionality is pulled from 
https://git.launchpad.net/~raharper/+git/cloudinit-analyze.
The cloudinit-analyze main script itself has been refactored a bit for 
inclusion as a subcommand of cloud-init proper.  There will be a follow-up 
branch at some point which will pull in more detailed strace profiling 
functionality, but that approach needs a bit more discussion.

Initial example usage docs are in cloudinit/analyze/README.md.

LP: #1709761


-- 
Your team cloud-init committers is requested to review the proposed merge of 
~chad.smith/cloud-init:analyze into cloud-init:master.
diff --git a/cloudinit/analyze/README.md b/cloudinit/analyze/README.md
new file mode 100644
index 0000000..e496d74
--- /dev/null
+++ b/cloudinit/analyze/README.md
@@ -0,0 +1,167 @@
+Getting Started
+===============
+
+You'll need a working LXD setup.  If you can lxc launch a machine then you're
+good to go.
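+
+# For example (hypothetical container name):
+# % lxc launch ubuntu-daily:yakkety smoke-test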
+
+1. lxc init ubuntu-daily:yakkety y1
+
+# for now, we need to adjust cloud-init's logging to emit messages in
+# a more useful format for parsing.
+#
+# NOTE: you need mount-image-callback
+# from https://code.launchpad.net/~smoser/cloud-utils/trunk.mic-lxd
+2. sudo ./bin/inject-payload.sh lxd:y1
+
+# now you can start it
+3. lxc start y1
+
+# now you can analyze it:
+# % bin/cloudinit-analyze --help
+# usage: cloudinit analyze <subcommand> [<args>]
+#
+# Subcommands:
+#    blame     Print list of executed stages ordered by time to init
+#    dump      Print an in-order list of events in JSON format
+#    show      Print an in-order list of stages with execution times
+#
+# Analyze cloud-init logs and data
+#
+# positional arguments:
+#   subcommand     subcommand to run
+# 
+# optional arguments:
+#   -h, --help  show this help message and exit
+
+# Print boot events, stages run, unit time and total time, per boot
+4. lxc exec y1 -- journalctl-cloudinitlog | ./bin/cloudinit-analyze show -i -
+
+
+# % $cmd | tail -n 20
+# |`->config-byobu previously ran @01.48600s +00.00000s
+# Finished stage: (modules-config) 00.03300 seconds 
+# 
+# Starting stage: modules-final
+# |`->config-rightscale_userdata previously ran @01.92300s +00.00000s
+# |`->config-scripts-vendor previously ran @01.92300s +00.00100s
+# |`->config-scripts-per-once previously ran @01.92400s +00.00000s
+# |`->config-scripts-per-boot ran successfully @01.92400s +00.00100s
+# |`->config-scripts-per-instance previously ran @01.92500s +00.00000s
+# |`->config-scripts-user previously ran @01.92500s +00.00100s
+# |`->config-ssh-authkey-fingerprints previously ran @01.92600s +00.00000s
+# |`->config-keys-to-console previously ran @01.92600s +00.00000s
+# |`->config-phone-home previously ran @01.92700s +00.00000s
+# |`->config-final-message ran successfully @01.92700s +00.00600s
+# |`->config-power-state-change previously ran @01.93300s +00.00000s
+# Finished stage: (modules-final) 00.01800 seconds 
+# 
+# Total Time: 0.25700 seconds
+# 
+# 2 boot records analyzed
+
+
+# dump the JSON event stream parsed from cloud-init.log
+5. lxc exec y1 -- journalctl-cloudinitlog | ./bin/cloudinit-analyze dump -i -
+
+#  {
+#   "description": "config-power-state-change previously ran",
+#   "event_type": "finish",
+#   "name": "modules-final/config-power-state-change",
+#   "origin": "cloudinit",
+#   "result": "SUCCESS",
+#   "timestamp": 1473722226.711
+#  },
+#  {
+#   "description": "running modules for final",
+#   "event_type": "finish",
+#   "name": "modules-final",
+#   "origin": "cloudinit",
+#   "result": "SUCCESS",
+#   "timestamp": 1473722226.713
+#  }
+# ]
+
+
+# sort events by time length (blame!)
+6. lxc exec y1 -- journalctl-cloudinitlog | ./bin/cloudinit-analyze blame -i -
+
+# -- Boot Record 01 --
+#      00.68700s (modules-config/config-locale)
+#      00.16800s (init-network/config-ssh)
+#      00.14600s (modules-config/config-grub-dpkg)
+#      00.07600s (modules-config/config-apt-configure)
+#      00.01900s (modules-final/config-keys-to-console)
+#      00.01900s (init-local/search-NoCloud)
+#      00.01500s (init-network/config-users-groups)
+#      00.00500s (modules-final/config-final-message)
+#      00.00500s (init-network/consume-vendor-data)
+#      00.00500s (init-network/config-growpart)
+#      00.00400s (init-network/config-set_hostname)
+#      00.00400s (init-network/config-resizefs)
+#      00.00300s (init-network/consume-user-data)
+#      00.00200s (modules-final/config-ssh-authkey-fingerprints)
+#      00.00200s (modules-final/config-scripts-per-instance)
+#      00.00200s (modules-final/config-phone-home)
+#      00.00200s (modules-config/config-snappy)
+#      00.00200s (modules-config/config-mounts)
+#      00.00200s (modules-config/config-fan)
+
+
+Testing with KVM
+================
+
+1. Download the current cloud image
+
+% wget https://cloud-images.ubuntu.com/daily/server/yakkety/current/yakkety-server-cloudimg-amd64.img
+
+2. Create snapshot image to preserve the original cloud-image.
+
+% qemu-img create -b yakkety-server-cloudimg-amd64.img -f qcow2 test-cloudinit.qcow2
+
+3. Inject the cloud-init modifications.  Note, this takes longer as it installs
+profiling packages.
+
+% ./bin/inject-payload.sh ./test-cloudinit.qcow2
+
+4. Create a seed image using `cloud-localds`
+
+% cat my-user-data
+#cloud-config
+password: passw0rd
+chpasswd: { expire: False }
+ssh_pwauth: True
+ssh_import_id: <gh:userid or lp:userid>
+% echo "instance-id: $(uuidgen || echo i-abcdefg)" > my-meta-data
+% cloud-localds my-seed.img my-user-data my-meta-data
+
+5. Launch your modified VM
+
+% kvm -m 512 -net nic -net user -redir tcp:2222::22 \
+   -drive file=test-cloudinit.qcow2,if=virtio,format=qcow2 \
+   -drive file=my-seed.img,if=virtio,format=raw
+
+6. Analyze the boot (blame, dump, show)
+
+% ssh -p 2222 ubuntu@localhost 'cat /var/log/cloud-init.log' | ./bin/cloudinit-analyze blame -i -
+
+7. Profiling data is present in /var/lib/cloud/data/cloud-init-<TIMESTAMP>-<PROFILER>
+
+On KVM and other VM-based images, with profiling enabled, blktrace IO traces
+and perf CPU stacks are recorded for later data analysis.
+
+# IO tracing on the cloud-image disk and seed device
+% ssh -p 2222 ubuntu@localhost
+% cd /var/lib/cloud/data/cloud-init-*-blktrace
+% sudo zcat vda1.blktrace.* | blkparse -i - -s
+
+# or for btt data
+% sudo gunzip *blktrace*
+% sudo blkparse -i vda1 -d bp.vda1.bin
+% sudo btt -A -i bp.vda1.bin | sudo tee bp.vda1.txt
+
+
+# Profiling of all CPUs during cloud-init run
+% ssh -p 2222 ubuntu@localhost
+% cd /var/lib/cloud/data/cloud-init-*-perf
+% sudo gunzip perf.data.gz
+% sudo perf report --stdio
diff --git a/cloudinit/analyze/__init__.py b/cloudinit/analyze/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/cloudinit/analyze/__init__.py
diff --git a/cloudinit/analyze/__main__.py b/cloudinit/analyze/__main__.py
new file mode 100644
index 0000000..0f28e7d
--- /dev/null
+++ b/cloudinit/analyze/__main__.py
@@ -0,0 +1,155 @@
+# Copyright (C) 2017 Canonical Ltd.
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+
+import argparse
+import re
+import sys
+
+from . import dump
+from . import show
+
+
+def get_parser(parser=None):
+    if not parser:
+        parser = argparse.ArgumentParser(
+            prog='cloudinit-analyze',
+            description='Analyze cloud-init logs and data')
+    subparsers = parser.add_subparsers(title='Subcommands', dest='subcommand')
+    subparsers.required = True
+
+    parser_blame = subparsers.add_parser(
+        'blame', help='Print list of executed stages ordered by time to init')
+    parser_blame.add_argument(
+        '-i', '--infile', action='store', dest='infile',
+        default='/var/log/cloud-init.log',
+        help='specify where to read input.')
+    parser_blame.add_argument(
+        '-o', '--outfile', action='store', dest='outfile', default='-',
+        help='specify where to write output.')
+    parser_blame.set_defaults(action=('blame', analyze_blame))
+
+    parser_show = subparsers.add_parser(
+        'show', help='Print list of in-order events during execution')
+    parser_show.add_argument('-f', '--format', action='store',
+                             dest='print_format', default='%I%D @%Es +%ds',
+                             help='specify formatting of output.')
+    parser_show.add_argument('-i', '--infile', action='store',
+                             dest='infile', default='/var/log/cloud-init.log',
+                             help='specify where to read input.')
+    parser_show.add_argument('-o', '--outfile', action='store',
+                             dest='outfile', default='-',
+                             help='specify where to write output.')
+    parser_show.set_defaults(action=('show', analyze_show))
+    parser_dump = subparsers.add_parser(
+        'dump', help='Dump cloud-init events in JSON format')
+    parser_dump.add_argument('-i', '--infile', action='store',
+                             dest='infile', default='/var/log/cloud-init.log',
+                             help='specify where to read input.')
+    parser_dump.add_argument('-o', '--outfile', action='store',
+                             dest='outfile', default='-',
+                             help='specify where to write output.')
+    parser_dump.set_defaults(action=('dump', analyze_dump))
+    return parser
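+
+# Note: get_parser() can extend an existing argparse parser; cmd/main.py
+# passes in its 'analyze' subparser so these subcommands hang off the
+# cloud-init CLI.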
+
+
+def analyze_blame(name, args):
+    """Report a list of records sorted by largest time delta.
+
+    For example:
+      30.210s (init-local) searching for datasource
+       8.706s (init-network) reading and applying user-data
+        166ms (modules-config) ....
+        807us (modules-final) ...
+
+    We generate event records by parsing cloud-init logs, formatting the
+    output, and sorting by each record's time delta ('delta').
+    """
+    (infh, outfh) = configure_io(args)
+    blame_format = '     %ds (%n)'
+    r = re.compile(r'(^\s+\d+\.\d+)', re.MULTILINE)
+    for idx, record in enumerate(show.show_events(_get_events(infh),
+                                                  blame_format)):
+        srecs = sorted(filter(r.match, record), reverse=True)
+        outfh.write('-- Boot Record %02d --\n' % (idx + 1))
+        outfh.write('\n'.join(srecs) + '\n')
+        outfh.write('\n')
+    outfh.write('%d boot records analyzed\n' % (idx + 1))
+
+
+def analyze_show(name, args):
+    """Generate output records using the 'standard' format to printing events.
+
+    Example output follows:
+        Starting stage: (init-local)
+          ...
+        Finished stage: (init-local) 0.105195 seconds
+
+        Starting stage: (init-network)
+          ...
+        Finished stage: (init-network) 0.339024 seconds
+
+        Starting stage: (modules-config)
+          ...
+        Finished stage: (modules-config) 0.NNN seconds
+
+        Starting stage: (modules-final)
+          ...
+        Finished stage: (modules-final) 0.NNN seconds
+    """
+    (infh, outfh) = configure_io(args)
+    for idx, record in enumerate(show.show_events(_get_events(infh),
+                                                  args.print_format)):
+        outfh.write('-- Boot Record %02d --\n' % (idx + 1))
+        outfh.write('The total time elapsed since completing an event is'
+                    ' printed after the "@" character.\n')
+        outfh.write('The time the event takes is printed after the "+" '
+                    'character.\n\n')
+        outfh.write('\n'.join(record) + '\n')
+    outfh.write('%d boot records analyzed\n' % (idx + 1))
+
+
+def analyze_dump(name, args):
+    """Dump cloud-init events in json format"""
+    (infh, outfh) = configure_io(args)
+    outfh.write(dump.json_dumps(_get_events(infh)) + '\n')
+
+
+def _get_events(infile):
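+    """Return a list of parsed events from a file-like input.
+
+    Try the input as a JSON event stream first; fall back to parsing it
+    as a cloud-init log.
+    """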
+    events, rawdata = show.load_events(infile, None)
+    if not events:
+        # the input was already consumed by load_events; reparse its text
+        events, _ = dump.dump_events(rawdata=rawdata)
+    return events
+
+
+def configure_io(args):
+    """Common parsing and setup of input/output files"""
+    if args.infile == '-':
+        infh = sys.stdin
+    else:
+        try:
+            infh = open(args.infile, 'r')
+        except (FileNotFoundError, PermissionError):
+            sys.stderr.write('Cannot open file %s\n' % args.infile)
+            sys.exit(1)
+
+    if args.outfile == '-':
+        outfh = sys.stdout
+    else:
+        try:
+            outfh = open(args.outfile, 'w')
+        except PermissionError:
+            sys.stderr.write('Cannot open file %s\n' % args.outfile)
+            sys.exit(1)
+
+    return (infh, outfh)
+
+
+if __name__ == '__main__':
+    parser = get_parser()
+    args = parser.parse_args()
+    (name, action_functor) = args.action
+    action_functor(name, args)
+
+# vi: ts=4 expandtab
diff --git a/cloudinit/analyze/dump.py b/cloudinit/analyze/dump.py
new file mode 100644
index 0000000..49a9d9b
--- /dev/null
+++ b/cloudinit/analyze/dump.py
@@ -0,0 +1,177 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+import calendar
+from datetime import datetime
+import json
+import subprocess
+import sys
+
+stage_to_description = {
+    'finished': 'finished running cloud-init',
+    'init-local': 'starting search for local datasources',
+    'init-network': 'searching for network datasources',
+    'init': 'searching for network datasources',
+    'modules-config': 'running config modules',
+    'modules-final': 'finalizing modules',
+    'modules': 'running modules for',
+    'single': 'running single module ',
+}
+
+# logger's asctime format
+CLOUD_INIT_ASCTIME_FMT = "%Y-%m-%d %H:%M:%S,%f"
+
+# journalctl -o short-precise
+CLOUD_INIT_JOURNALCTL_FMT = "%b %d %H:%M:%S.%f %Y"
+
+# other
+DEFAULT_FMT = "%b %d %H:%M:%S %Y"
+
+
+def parse_timestamp(timestampstr):
+    # default syslog time does not include the current year
+    months = [calendar.month_abbr[m] for m in range(1, 13)]
+    if timestampstr.split()[0] in months:
+        # Aug 29 22:55:26
+        FMT = DEFAULT_FMT
+        if '.' in timestampstr:
+            FMT = CLOUD_INIT_JOURNALCTL_FMT
+        dt = datetime.strptime(timestampstr + " " +
+                               str(datetime.now().year),
+                               FMT)
+        timestamp = dt.strftime("%s.%f")
+    elif "," in timestampstr:
+        # 2016-09-12 14:39:20,839
+        dt = datetime.strptime(timestampstr, CLOUD_INIT_ASCTIME_FMT)
+        timestamp = dt.strftime("%s.%f")
+    else:
+        # allow date(1) to handle other formats we don't expect
+        timestamp = parse_timestamp_from_date(timestampstr)
+
+    return float(timestamp)
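+
+# A usage sketch (timestamp values are illustrative); each call returns a
+# float epoch value:
+#   parse_timestamp('2016-09-12 14:39:20,839')  # cloud-init asctime format
+#   parse_timestamp('Aug 08 17:15:50.606811')   # journalctl short-precise
+#   parse_timestamp('Aug 29 22:55:26')          # syslog default, year added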
+
+
+def parse_timestamp_from_date(timestampstr):
+    out = subprocess.check_output(['date', '+%s.%3N', '-d', timestampstr])
+    timestamp = out.decode('ascii').strip()
+    return float(timestamp)
+
+
+def parse_ci_logline(line):
+    # Stage Starts:
+    # Cloud-init v. 0.7.7 running 'init-local' at \
+    #               Fri, 02 Sep 2016 19:28:07 +0000. Up 1.0 seconds.
+    # Cloud-init v. 0.7.7 running 'init' at \
+    #               Fri, 02 Sep 2016 19:28:08 +0000. Up 2.0 seconds.
+    # Cloud-init v. 0.7.7 finished at
+    # Aug 29 22:55:26 test1 [CLOUDINIT] handlers.py[DEBUG]: \
+    #               finish: modules-final: SUCCESS: running modules for final
+    # 2016-08-30T21:53:25.972325+00:00 y1 [CLOUDINIT] handlers.py[DEBUG]: \
+    #               finish: modules-final: SUCCESS: running modules for final
+    #
+    # Nov 03 06:51:06.074410 x2 cloud-init[106]: [CLOUDINIT] util.py[DEBUG]: \
+    #               Cloud-init v. 0.7.8 running 'init-local' at \
+    #               Thu, 03 Nov 2016 06:51:06 +0000. Up 1.0 seconds.
+    #
+    # 2017-05-22 18:02:01,088 - util.py[DEBUG]: Cloud-init v. 0.7.9 running \
+    #         'init-local' at Mon, 22 May 2017 18:02:01 +0000. Up 2.0 seconds.
+
+    separators = [' - ', ' [CLOUDINIT] ']
+    found = False
+    for sep in separators:
+        if sep in line:
+            found = True
+            break
+
+    if not found:
+        return None
+
+    (timehost, eventstr) = line.split(sep)
+
+    # journalctl -o short-precise
+    if timehost.endswith(":"):
+        timehost = " ".join(timehost.split()[0:-1])
+
+    if "," in timehost:
+        timestampstr, extra = timehost.split(",")
+        timestampstr += ",%s" % extra.split()[0]
+        if ' ' in extra:
+            hostname = extra.split()[-1]
+    else:
+        hostname = timehost.split()[-1]
+        timestampstr = timehost.split(hostname)[0].strip()
+    if 'Cloud-init v.' in eventstr:
+        event_type = 'start'
+        if 'running' in eventstr:
+            stage_and_timestamp = eventstr.split('running')[1].lstrip()
+            event_name, _ = stage_and_timestamp.split(' at ')
+            event_name = event_name.replace("'", "").replace(":", "-")
+            if event_name == "init":
+                event_name = "init-network"
+        else:
+            # don't generate a start for the 'finished at' banner
+            return None
+        event_description = stage_to_description[event_name]
+    else:
+        (pymodloglvl, event_type, event_name) = eventstr.split()[0:3]
+        event_description = eventstr.split(event_name)[1].strip()
+
+    event = {
+        'name': event_name.rstrip(":"),
+        'description': event_description,
+        'timestamp': parse_timestamp(timestampstr),
+        'origin': 'cloudinit',
+        'event_type': event_type.rstrip(":"),
+    }
+    if event['event_type'] == "finish":
+        result = event_description.split(":")[0]
+        desc = event_description.split(result)[1].lstrip(':').strip()
+        event['result'] = result
+        event['description'] = desc.strip()
+
+    return event
+
+
+def json_dumps(data):
+    return json.dumps(data, indent=1, sort_keys=True,
+                      separators=(',', ': '))
+
+
+def dump_events(cisource=None, rawdata=None):
+    events = []
+    event = None
+    CI_EVENT_MATCHES = ['start:', 'finish:', 'Cloud-init v.']
+
+    if not any([cisource, rawdata]):
+        raise ValueError('Either cisource or rawdata parameters are required')
+    if cisource and rawdata:
+        raise ValueError('cisource and rawdata parameters cannot both be set')
+
+    if rawdata:
+        data = rawdata.splitlines()
+    else:
+        data = cisource.readlines()
+
+    for line in data:
+        for match in CI_EVENT_MATCHES:
+            if match in line:
+                try:
+                    event = parse_ci_logline(line)
+                except ValueError:
+                    sys.stderr.write('Skipping invalid entry\n')
+                if event:
+                    events.append(event)
+
+    return events, data
+
+
+def main():
+    if len(sys.argv) > 1:
+        cisource = open(sys.argv[1])
+    else:
+        cisource = sys.stdin
+
+    # dump_events returns (events, rawdata); only the events are dumped
+    events, _ = dump_events(cisource)
+    return json_dumps(events)
+
+
+if __name__ == "__main__":
+    print(main())
diff --git a/cloudinit/analyze/show.py b/cloudinit/analyze/show.py
new file mode 100644
index 0000000..6b4c10c
--- /dev/null
+++ b/cloudinit/analyze/show.py
@@ -0,0 +1,241 @@
+#!/usr/bin/env python3
+#   Copyright (C) 2016 Canonical Ltd.
+#
+#   Author: Ryan Harper <ryan.har...@canonical.com>
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+import argparse
+import base64
+import datetime
+import json
+import os
+import sys
+
+#  An event:
+'''
+{
+    "description": "executing late commands",
+    "event_type": "start",
+    "level": "INFO",
+    "name": "cmd-install/stage-late",
+    "origin": "cloudinit",
+    "timestamp": 1461164249.1590767
+},
+{
+    "description": "executing late commands",
+    "event_type": "finish",
+    "level": "INFO",
+    "name": "cmd-install/stage-late",
+    "origin": "cloudinit",
+    "result": "SUCCESS",
+    "timestamp": 1461164249.1590767
+}
+'''
+format_key = {
+    '%d': 'delta',
+    '%D': 'description',
+    '%E': 'elapsed',
+    '%e': 'event_type',
+    '%I': 'indent',
+    '%l': 'level',
+    '%n': 'name',
+    '%o': 'origin',
+    '%r': 'result',
+    '%t': 'timestamp',
+    '%T': 'total_time',
+}
+
+formatting_help = " ".join(["{}: {}".format(k.replace('%', '%%'), v)
+                           for k, v in format_key.items()])
+
+
+def format_record(msg, event):
+    for i, j in format_key.items():
+        if i in msg:
+            # ensure consistent formatting of time values
+            if j in ['delta', 'elapsed', 'timestamp']:
+                msg = msg.replace(i, "{%s:08.5f}" % j)
+            else:
+                msg = msg.replace(i, "{%s}" % j)
+    return msg.format(**event)
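+
+# For example, with a minimal (hypothetical) event dict:
+#   format_record('%n took %ds', {'name': 'init-local', 'delta': 1.5})
+# returns 'init-local took 01.50000s'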
+
+
+def dump_event_files(event):
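+    """Decode and write out any base64-encoded files carried by an event.
+
+    Returns the list of local file names written.
+    """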
+    content = {k: v for k, v in event.items() if k not in ['content']}
+    files = content['files']
+    saved = []
+    for f in files:
+        fname = f['path']
+        fn_local = os.path.basename(fname)
+        fcontent = base64.b64decode(f['content']).decode('ascii')
+        with open(fn_local, 'w') as fh:
+            fh.write(fcontent)
+        saved.append(fn_local)
+
+    return saved
+
+
+def event_name(event):
+    if event:
+        return event.get('name')
+    return None
+
+
+def event_type(event):
+    if event:
+        return event.get('event_type')
+    return None
+
+
+def event_parent(event):
+    if event:
+        return event_name(event).split("/")[0]
+    return None
+
+
+def event_timestamp(event):
+    return float(event.get('timestamp'))
+
+
+def event_datetime(event):
+    return datetime.datetime.utcfromtimestamp(event_timestamp(event))
+
+
+def delta_seconds(t1, t2):
+    return (t2 - t1).total_seconds()
+
+
+def event_duration(start, finish):
+    return delta_seconds(event_datetime(start), event_datetime(finish))
+
+
+def event_record(start_time, start, finish):
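+    """Return a copy of the finish event annotated with 'delta' (the event
+    duration), 'elapsed' (seconds since the boot record started) and an
+    'indent' prefix derived from the event name's depth."""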
+    record = finish.copy()
+    record.update({
+        'delta': event_duration(start, finish),
+        'elapsed': delta_seconds(start_time, event_datetime(start)),
+        'indent': '|' + ' ' * (event_name(start).count('/') - 1) + '`->',
+    })
+
+    return record
+
+
+def total_time_record(total_time):
+    return 'Total Time: %3.5f seconds\n' % total_time
+
+
+def generate_records(events, blame_sort=False,
+                     print_format="(%n) %d seconds in %I%D",
+                     dump_files=False, log_datafiles=False):
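+    """Pair sorted start/finish events into per-boot lists of formatted
+    records.
+
+    A new boot record begins whenever a stage name repeats; each paired
+    event is rendered with format_record() using print_format.
+    """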
+
+    sorted_events = sorted(events, key=lambda x: x['timestamp'])
+    records = []
+    start_time = None
+    total_time = 0.0
+    stage_start_time = {}
+    stages_seen = []
+    boot_records = []
+
+    unprocessed = []
+    for e in range(0, len(sorted_events)):
+        event = sorted_events[e]
+        try:
+            next_evt = sorted_events[e + 1]
+        except IndexError:
+            next_evt = None
+
+        if event_type(event) == 'start':
+            if event.get('name') in stages_seen:
+                records.append(total_time_record(total_time))
+                boot_records.append(records)
+                records = []
+                start_time = None
+                total_time = 0.0
+
+            if start_time is None:
+                stages_seen = []
+                start_time = event_datetime(event)
+                stage_start_time[event_parent(event)] = start_time
+
+            # see if we have a pair
+            if event_name(event) == event_name(next_evt):
+                if event_type(next_evt) == 'finish':
+                    records.append(format_record(print_format,
+                                                 event_record(start_time,
+                                                              event,
+                                                              next_evt)))
+            else:
+                # This is a parent event
+                records.append("Starting stage: %s" % event.get('name'))
+                unprocessed.append(event)
+                stages_seen.append(event.get('name'))
+                continue
+        else:
+            prev_evt = unprocessed.pop()
+            if event_name(event) == event_name(prev_evt):
+                record = event_record(start_time, prev_evt, event)
+                records.append(format_record("Finished stage: "
+                                             "(%n) %d seconds ",
+                                             record) + "\n")
+                total_time += record.get('delta')
+            else:
+                # not a match, put it back
+                unprocessed.append(prev_evt)
+
+    records.append(total_time_record(total_time))
+    boot_records.append(records)
+    return boot_records
+
+
+def show_events(events, print_format):
+    return generate_records(events, print_format=print_format)
+
+
+def load_events(infile, rawdata=None):
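+    """Return (events, data): events is the JSON-decoded input, or None
+    if the input is not valid JSON; data is the raw text that was read."""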
+    if rawdata:
+        data = rawdata.read()
+    else:
+        data = infile.read()
+
+    j = None
+    try:
+        j = json.loads(data)
+    except json.JSONDecodeError:
+        pass
+
+    return j, data
+
+
+def main():
+    parser = argparse.ArgumentParser(
+        description=('cloudinit-print-log - '
+                     'pretty print and sort cloudinit logs'),
+        prog='cloudinit-print-log')
+    parser.add_argument('--format', action='store',
+                        dest='print_format',
+                        default='%I (%n) %D @%Es +%ds',
+                        help='specify formatting of output. ' +
+                        formatting_help)
+    parser.add_argument('infile', nargs='?', type=argparse.FileType('r'),
+                        help='Path to log to parse. Use - for stdin')
+
+    opts = parser.parse_args(sys.argv[1:])
+    if not opts.infile:
+        parser.print_help()
+        sys.exit(1)
+
+    j, _ = load_events(opts.infile)
+
+    records = show_events(j, opts.print_format)
+    for record in records:
+        print('The total time elapsed since completing an event is printed'
+              ' after the "@" character.')
+        print('The time the event takes is printed after the "+" character.')
+        print()
+        print("\n".join(record))
+
+
+if __name__ == '__main__':
+    main()
diff --git a/cloudinit/analyze/tests/test_dump.py b/cloudinit/analyze/tests/test_dump.py
new file mode 100644
index 0000000..72c15fb
--- /dev/null
+++ b/cloudinit/analyze/tests/test_dump.py
@@ -0,0 +1,208 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+from datetime import datetime
+import subprocess
+from textwrap import dedent
+
+from cloudinit.analyze.dump import (
+    dump_events, parse_ci_logline, parse_timestamp)
+from cloudinit.util import write_file
+from tests.unittests.helpers import CiTestCase
+
+
+class TestParseTimestamp(CiTestCase):
+
+    def test_parse_timestamp_handles_cloud_init_default_format(self):
+        """Logs with cloud-init detailed formats will be properly parsed."""
+        trusty_fmt = '%Y-%m-%d %H:%M:%S,%f'
+        trusty_stamp = '2016-09-12 14:39:20,839'
+
+        parsed = parse_timestamp(trusty_stamp)
+
+        # convert ourselves
+        dt = datetime.strptime(trusty_stamp, trusty_fmt)
+        expected = float(dt.strftime('%s.%f'))
+
+        # use date(1)
+        out = subprocess.check_output(['date', '+%s.%3N', '-d',
+                                      trusty_stamp])
+        timestamp = out.decode('ascii').strip()
+        date_ts = float(timestamp)
+
+        self.assertEqual(expected, parsed)
+        self.assertEqual(expected, date_ts)
+        self.assertEqual(date_ts, parsed)
+
+    def test_parse_timestamp_handles_syslog_adding_year(self):
+        """Syslog timestamps lack a year. Add year and properly parse."""
+        syslog_fmt = '%b %d %H:%M:%S %Y'
+        syslog_stamp = 'Aug 08 15:12:51'
+
+        # convert stamp ourselves by adding the missing year value
+        year = datetime.now().year
+        dt = datetime.strptime(syslog_stamp + " " + str(year), syslog_fmt)
+        expected = float(dt.strftime('%s.%f'))
+        parsed = parse_timestamp(syslog_stamp)
+
+        # use date(1)
+        out = subprocess.check_output(['date', '+%s.%3N', '-d',
+                                      syslog_stamp])
+        timestamp = out.decode('ascii').strip()
+        date_ts = float(timestamp)
+
+        self.assertEqual(expected, parsed)
+        self.assertEqual(expected, date_ts)
+        self.assertEqual(date_ts, parsed)
+
+    def test_parse_timestamp_handles_journalctl_format_adding_year(self):
+        """Journalctl precise timestamps lack a year. Add year and parse."""
+        journal_fmt = '%b %d %H:%M:%S.%f %Y'
+        journal_stamp = 'Aug 08 17:15:50.606811'
+
+        # convert stamp ourselves by adding the missing year value
+        year = datetime.now().year
+        dt = datetime.strptime(journal_stamp + " " + str(year), journal_fmt)
+        expected = float(dt.strftime('%s.%f'))
+        parsed = parse_timestamp(journal_stamp)
+
+        # use date(1)
+        out = subprocess.check_output(['date', '+%s.%6N', '-d',
+                                      journal_stamp])
+        timestamp = out.decode('ascii').strip()
+        date_ts = float(timestamp)
+
+        self.assertEqual(expected, parsed)
+        self.assertEqual(expected, date_ts)
+        self.assertEqual(date_ts, parsed)
+
+    def test_parse_unexpected_timestamp_format_with_date_command(self):
+        """Dump sends unexpected timestamp formats to data for processing."""
+        new_fmt = '%H:%M %m/%d %Y'
+        new_stamp = '17:15 08/08'
+
+        # convert stamp ourselves by adding the missing year value
+        year = datetime.now().year
+        dt = datetime.strptime(new_stamp + " " + str(year), new_fmt)
+        expected = float(dt.strftime('%s.%f'))
+        parsed = parse_timestamp(new_stamp)
+
+        # use date(1)
+        out = subprocess.check_output(['date', '+%s.%6N', '-d', new_stamp])
+        timestamp = out.decode('ascii').strip()
+        date_ts = float(timestamp)
+
+        self.assertEqual(expected, parsed)
+        self.assertEqual(expected, date_ts)
+        self.assertEqual(date_ts, parsed)
+
+
+class TestParseCILogLine(CiTestCase):
+
+    def test_parse_logline_returns_none_without_separators(self):
+        """When no separators are found, parse_ci_logline returns None."""
+        expected_parse_ignores = [
+            '', '-', 'adsf-asdf', '2017-05-22 18:02:01,088', 'CLOUDINIT']
+        for parse_ignores in expected_parse_ignores:
+            self.assertIsNone(parse_ci_logline(parse_ignores))
+
+    def test_parse_logline_returns_event_for_cloud_init_logs(self):
+        """parse_ci_logline returns an event parse from cloud-init format."""
+        line = ("2017-08-08 20:05:07,147 - util.py[DEBUG]: Cloud-init v. 0.7.9"
+               " running 'init-local' at Tue, 08 Aug 2017 20:05:07 +0000. Up"
+               " 6.26 seconds.")
+        expected = {
+            'description': 'starting search for local datasources',
+            'event_type': 'start',
+            'name': 'init-local',
+            'origin': 'cloudinit',
+            'timestamp': 1502244307.147}
+        self.assertEqual(expected, parse_ci_logline(line))
+
+    def test_parse_logline_returns_event_for_journalctl_logs(self):
+        """parse_ci_logline returns an event parse from journalctl format."""
+        line = ("Nov 03 06:51:06.074410 x2 cloud-init[106]: [CLOUDINIT]"
+                " util.py[DEBUG]: Cloud-init v. 0.7.8 running 'init-local' at"
+                "  Thu, 03 Nov 2016 06:51:06 +0000. Up 1.0 seconds.")
+        expected = {
+            'description': 'starting search for local datasources',
+            'event_type': 'start',
+            'name': 'init-local',
+            'origin': 'cloudinit',
+            'timestamp': 1509713466.07441}
+        self.assertEqual(expected, parse_ci_logline(line))
+
+    def test_parse_logline_returns_event_for_finish_events(self):
+        """parse_ci_logline returns a finish event for a parsed log line."""
+        line = ('2016-08-30T21:53:25.972325+00:00 y1 [CLOUDINIT]'
+                ' handlers.py[DEBUG]: finish: modules-final: SUCCESS: running'
+                ' modules for final')
+        expected = {
+            'description': 'running modules for final',
+            'event_type': 'finish',
+            'name': 'modules-final',
+            'origin': 'cloudinit',
+            'result': 'SUCCESS',
+            'timestamp': 1472594005.972}
+        self.assertEqual(expected, parse_ci_logline(line))
+
+
+SAMPLE_LOGS = dedent("""\
+Nov 03 06:51:06.074410 x2 cloud-init[106]: [CLOUDINIT] util.py[DEBUG]:\
+ Cloud-init v. 0.7.8 running 'init-local' at Thu, 03 Nov 2016\
+ 06:51:06 +0000. Up 1.0 seconds.
+2016-08-30T21:53:25.972325+00:00 y1 [CLOUDINIT] handlers.py[DEBUG]: finish:\
+ modules-final: SUCCESS: running modules for final
+""")
+
+
+class TestDumpEvents(CiTestCase):
+    maxDiff = None
+
+    def test_dump_cisource_and_rawdata_are_mutually_exclusive(self):
+        """Either cisource or rawdata must be provided, but not both."""
+        with self.assertRaises(ValueError) as context_manager:
+            dump_events()
+        self.assertEqual(
+            'Either cisource or rawdata parameters are required',
+            str(context_manager.exception))
+        with self.assertRaises(ValueError) as context_manager:
+            dump_events(cisource='something', rawdata='somethingelse')
+        self.assertEqual(
+            'cisource and rawdata parameters cannot both be set',
+            str(context_manager.exception))
+
+    def test_dump_events_with_rawdata(self):
+        """Rawdata is split and parsed into a tuple of events and data"""
+        events, data = dump_events(rawdata=SAMPLE_LOGS)
+        expected_data = SAMPLE_LOGS.splitlines()
+        expected_events = [{
+            'description': 'starting search for local datasources',
+            'event_type': 'start',
+            'name': 'init-local',
+            'origin': 'cloudinit',
+            'timestamp': 1509713466.07441}, {
+            'description': 'running modules for final',
+            'event_type': 'finish',
+            'name': 'modules-final',
+            'origin': 'cloudinit',
+            'result': 'SUCCESS',
+            'timestamp': 1472594005.972}]
+        self.assertEqual(expected_events, events)
+        self.assertEqual(expected_data, data)
+
+    def test_dump_events_with_cisource(self):
+        """Cisource file is read and parsed into a tuple of events and data."""
+        tmpfile = self.tmp_path('logfile')
+        write_file(tmpfile, SAMPLE_LOGS)
+        events, data = dump_events(cisource=open(tmpfile))
+        expected_events = [{
+            'description': 'starting search for local datasources',
+            'event_type': 'start',
+            'name': 'init-local',
+            'origin': 'cloudinit',
+            'timestamp': 1509713466.07441}, {
+            'description': 'running modules for final',
+            'event_type': 'finish',
+            'name': 'modules-final',
+            'origin': 'cloudinit',
+            'result': 'SUCCESS',
+            'timestamp': 1472594005.972}]
+        self.assertEqual(expected_events, events)
+        self.assertEqual(SAMPLE_LOGS.splitlines(), [d.strip() for d in data])
diff --git a/cloudinit/cmd/main.py b/cloudinit/cmd/main.py
index 139e03b..8e02627 100644
--- a/cloudinit/cmd/main.py
+++ b/cloudinit/cmd/main.py
@@ -9,6 +9,7 @@
 # Author: Juerg Haefliger <juerg.haefli...@hp.com>
 # Author: Joshua Harlow <harlo...@yahoo-inc.com>
 # Author: Andrew Jorgensen <ajorg...@amazon.com>
+# Author: Chad Smith <chad.sm...@canonical.com>
 #
 # This file is part of cloud-init. See LICENSE file for license information.
 
@@ -22,6 +23,7 @@ import traceback
 from cloudinit import patcher
 patcher.patch()  # noqa
 
+from cloudinit.analyze.__main__ import get_parser as analyze_parser
 from cloudinit import log as logging
 from cloudinit import netinfo
 from cloudinit import signal_handler
@@ -48,7 +50,7 @@ WELCOME_MSG_TPL = ("Cloud-init v. {version} running '{action}' at "
                    "{timestamp}. Up {uptime} seconds.")
 
 # Module section template
-MOD_SECTION_TPL = "cloud_%s_modules"
+MOD_SECTION_TPL = 'cloud_%s_modules'
 
 # Things u can query on
 QUERY_DATA_TYPES = [
@@ -72,18 +74,18 @@ LOG = logging.getLogger()
 # and we still want to print exceptions...
 def print_exc(msg=''):
     if msg:
-        sys.stderr.write("%s\n" % (msg))
+        sys.stderr.write('%s\n' % (msg))
     sys.stderr.write('-' * 60)
-    sys.stderr.write("\n")
+    sys.stderr.write('\n')
     traceback.print_exc(file=sys.stderr)
     sys.stderr.write('-' * 60)
-    sys.stderr.write("\n")
+    sys.stderr.write('\n')
 
 
 def welcome(action, msg=None):
     if not msg:
         msg = welcome_format(action)
-    util.multi_log("%s\n" % (msg),
+    util.multi_log('%s\n' % (msg),
                    console=False, stderr=True, log=LOG)
     return msg
 
@@ -116,11 +118,11 @@ def run_module_section(mods, action_name, section):
     if total_attempted == 0:
         msg = ("No '%s' modules to run"
                " under section '%s'") % (action_name, full_section_name)
-        sys.stderr.write("%s\n" % (msg))
+        sys.stderr.write('%s\n' % (msg))
         LOG.debug(msg)
         return []
     else:
-        LOG.debug("Ran %s modules with %s failures",
+        LOG.debug('Ran %s modules with %s failures',
                   len(which_ran), len(failures))
         return failures
 
@@ -164,9 +166,9 @@ def attempt_cmdline_url(path, network=True, cmdline=None):
     try:
         cmdline_name, url = parse_cmdline_url(cmdline)
     except KeyError:
-        return (logging.DEBUG, "No kernel command line url found.")
+        return (logging.DEBUG, 'No kernel command line url found.')
 
-    path_is_local = url.startswith("file://") or url.startswith("/")
+    path_is_local = url.startswith('file://') or url.startswith('/')
 
     if path_is_local and os.path.exists(path):
         if network:
@@ -225,8 +227,8 @@ def main_init(name, args):
     early_logs = []
     early_logs.append(
         attempt_cmdline_url(
-            path=os.path.join("%s.d" % CLOUD_CONFIG,
-                              "91_kernel_cmdline_url.cfg"),
+            path=os.path.join('%s.d' % CLOUD_CONFIG,
+                              '91_kernel_cmdline_url.cfg'),
             network=not args.local))
 
     # Cloud-init 'init' stage is broken up into the following sub-stages
@@ -245,7 +247,7 @@ def main_init(name, args):
     if not args.local:
         w_msg = welcome_format(name)
     else:
-        w_msg = welcome_format("%s-local" % (name))
+        w_msg = welcome_format('%s-local' % (name))
     init = stages.Init(ds_deps=deps, reporter=args.reporter)
     # Stage 1
     init.read_cfg(extract_fns(args))
@@ -253,18 +255,18 @@ def main_init(name, args):
     outfmt = None
     errfmt = None
     try:
-        early_logs.append((logging.DEBUG, "Closing stdin."))
+        early_logs.append((logging.DEBUG, 'Closing stdin.'))
         util.close_stdin()
         (outfmt, errfmt) = util.fixup_output(init.cfg, name)
     except Exception:
-        msg = "Failed to setup output redirection!"
+        msg = 'Failed to setup output redirection!'
         util.logexc(LOG, msg)
         print_exc(msg)
         early_logs.append((logging.WARN, msg))
     if args.debug:
         # Reset so that all the debug handlers are closed out
-        LOG.debug(("Logging being reset, this logger may no"
-                   " longer be active shortly"))
+        LOG.debug(('Logging being reset, this logger may no'
+                   ' longer be active shortly'))
         logging.resetLogging()
     logging.setupLogging(init.cfg)
     apply_reporting_cfg(init.cfg)
@@ -282,21 +284,21 @@ def main_init(name, args):
     try:
         init.initialize()
     except Exception:
-        util.logexc(LOG, "Failed to initialize, likely bad things to come!")
+        util.logexc(LOG, 'Failed to initialize, likely bad things to come!')
     # Stage 4
     path_helper = init.paths
     mode = sources.DSMODE_LOCAL if args.local else sources.DSMODE_NETWORK
 
     if mode == sources.DSMODE_NETWORK:
-        existing = "trust"
-        sys.stderr.write("%s\n" % (netinfo.debug_info()))
-        LOG.debug(("Checking to see if files that we need already"
-                   " exist from a previous run that would allow us"
-                   " to stop early."))
+        existing = 'trust'
+        sys.stderr.write('%s\n' % (netinfo.debug_info()))
+        LOG.debug(('Checking to see if files that we need already'
+                   ' exist from a previous run that would allow us'
+                   ' to stop early.'))
         # no-net is written by upstart cloud-init-nonet when network failed
         # to come up
         stop_files = [
-            os.path.join(path_helper.get_cpath("data"), "no-net"),
+            os.path.join(path_helper.get_cpath('data'), 'no-net'),
         ]
         existing_files = []
         for fn in stop_files:
@@ -304,27 +306,27 @@ def main_init(name, args):
                 existing_files.append(fn)
 
         if existing_files:
-            LOG.debug("[%s] Exiting. stop file %s existed",
+            LOG.debug('[%s] Exiting. stop file %s existed',
                       mode, existing_files)
             return (None, [])
         else:
-            LOG.debug("Execution continuing, no previous run detected that"
-                      " would allow us to stop early.")
+            LOG.debug('Execution continuing, no previous run detected that'
+                      ' would allow us to stop early.')
     else:
-        existing = "check"
+        existing = 'check'
         mcfg = util.get_cfg_option_bool(init.cfg, 'manual_cache_clean', False)
         if mcfg:
-            LOG.debug("manual cache clean set from config")
-            existing = "trust"
+            LOG.debug('manual cache clean set from config')
+            existing = 'trust'
         else:
-            mfile = path_helper.get_ipath_cur("manual_clean_marker")
+            mfile = path_helper.get_ipath_cur('manual_clean_marker')
             if os.path.exists(mfile):
-                LOG.debug("manual cache clean found from marker: %s", mfile)
-                existing = "trust"
+                LOG.debug('manual cache clean found from marker: %s', mfile)
+                existing = 'trust'
 
         init.purge_cache()
         # Delete the non-net file as well
-        util.del_file(os.path.join(path_helper.get_cpath("data"), "no-net"))
+        util.del_file(os.path.join(path_helper.get_cpath('data'), 'no-net'))
 
     # Stage 5
     try:
@@ -332,7 +334,7 @@ def main_init(name, args):
         # if in network mode, and the datasource is local
         # then work was done at that stage.
         if mode == sources.DSMODE_NETWORK and init.datasource.dsmode != mode:
-            LOG.debug("[%s] Exiting. datasource %s in local mode",
+            LOG.debug('[%s] Exiting. datasource %s in local mode',
                       mode, init.datasource)
             return (None, [])
     except sources.DataSourceNotFoundException:
@@ -341,35 +343,35 @@ def main_init(name, args):
         # found. When using upstart it will also mentions job failure
         # in console log if exit code is != 0.
         if mode == sources.DSMODE_LOCAL:
-            LOG.debug("No local datasource found")
+            LOG.debug('No local datasource found')
         else:
-            util.logexc(LOG, ("No instance datasource found!"
-                              " Likely bad things to come!"))
+            util.logexc(LOG, ('No instance datasource found!'
+                              ' Likely bad things to come!'))
         if not args.force:
             init.apply_network_config(bring_up=not args.local)
-            LOG.debug("[%s] Exiting without datasource in local mode", mode)
+            LOG.debug('[%s] Exiting without datasource in local mode', mode)
             if mode == sources.DSMODE_LOCAL:
                 return (None, [])
             else:
-                return (None, ["No instance datasource found."])
+                return (None, ['No instance datasource found.'])
         else:
-            LOG.debug("[%s] barreling on in force mode without datasource",
+            LOG.debug('[%s] barreling on in force mode without datasource',
                       mode)
 
     # Stage 6
     iid = init.instancify()
-    LOG.debug("[%s] %s will now be targeting instance id: %s. new=%s",
+    LOG.debug('[%s] %s will now be targeting instance id: %s. new=%s',
               mode, name, iid, init.is_new_instance())
 
     init.apply_network_config(bring_up=bool(mode != sources.DSMODE_LOCAL))
 
     if mode == sources.DSMODE_LOCAL:
         if init.datasource.dsmode != mode:
-            LOG.debug("[%s] Exiting. datasource %s not in local mode.",
+            LOG.debug('[%s] Exiting. datasource %s not in local mode.',
                       mode, init.datasource)
             return (init.datasource, [])
         else:
-            LOG.debug("[%s] %s is in local mode, will apply init modules now.",
+            LOG.debug('[%s] %s is in local mode, will apply init modules now.',
                       mode, init.datasource)
 
     # Give the datasource a chance to use network resources.
@@ -394,8 +396,8 @@ def main_init(name, args):
             # reason behind this...
             init.consume_data(PER_ALWAYS)
     except Exception:
-        util.logexc(LOG, "Consuming user data failed!")
-        return (init.datasource, ["Consuming user data failed!"])
+        util.logexc(LOG, 'Consuming user data failed!')
+        return (init.datasource, ['Consuming user data failed!'])
 
     apply_reporting_cfg(init.cfg)
 
@@ -407,11 +409,11 @@ def main_init(name, args):
         errfmt_orig = errfmt
         (outfmt, errfmt) = util.get_output_cfg(mods.cfg, name)
         if outfmt_orig != outfmt or errfmt_orig != errfmt:
-            LOG.warning("Stdout, stderr changing to (%s, %s)",
+            LOG.warning('Stdout, stderr changing to (%s, %s)',
                         outfmt, errfmt)
             (outfmt, errfmt) = util.fixup_output(mods.cfg, name)
     except Exception:
-        util.logexc(LOG, "Failed to re-adjust output redirection!")
+        util.logexc(LOG, 'Failed to re-adjust output redirection!')
     logging.setupLogging(mods.cfg)
 
     # give the activated datasource a chance to adjust
@@ -425,12 +427,12 @@ def main_init(name, args):
 
 def di_report_warn(datasource, cfg):
     if 'di_report' not in cfg:
-        LOG.debug("no di_report found in config.")
+        LOG.debug('no di_report found in config.')
         return
 
     dicfg = cfg.get('di_report', {})
     if not isinstance(dicfg, dict):
-        LOG.warning("di_report config not a dictionary: %s", dicfg)
+        LOG.warning('di_report config not a dictionary: %s', dicfg)
         return
 
     dslist = dicfg.get('datasource_list')
@@ -438,12 +440,12 @@ def di_report_warn(datasource, cfg):
         LOG.warning("no 'datasource_list' found in di_report.")
         return
     elif not isinstance(dslist, list):
-        LOG.warning("di_report/datasource_list not a list: %s", dslist)
+        LOG.warning('di_report/datasource_list not a list: %s', dslist)
         return
 
     # ds.__module__ is like cloudinit.sources.DataSourceName
     # where Name is the thing that shows up in datasource_list.
-    modname = datasource.__module__.rpartition(".")[2]
+    modname = datasource.__module__.rpartition('.')[2]
     if modname.startswith(sources.DS_PREFIX):
         modname = modname[len(sources.DS_PREFIX):]
     else:
@@ -471,13 +473,13 @@ def main_modules(action_name, args):
     #    the modules objects configuration
     # 5. Run the modules for the given stage name
     # 6. Done!
-    w_msg = welcome_format("%s:%s" % (action_name, name))
+    w_msg = welcome_format('%s:%s' % (action_name, name))
     init = stages.Init(ds_deps=[], reporter=args.reporter)
     # Stage 1
     init.read_cfg(extract_fns(args))
     # Stage 2
     try:
-        init.fetch(existing="trust")
+        init.fetch(existing='trust')
     except sources.DataSourceNotFoundException:
         # There was no datasource found, theres nothing to do
         msg = ('Can not apply stage %s, no datasource found! Likely bad '
@@ -490,15 +492,15 @@ def main_modules(action_name, args):
     mods = stages.Modules(init, extract_fns(args), reporter=args.reporter)
     # Stage 4
     try:
-        LOG.debug("Closing stdin")
+        LOG.debug('Closing stdin')
         util.close_stdin()
         util.fixup_output(mods.cfg, name)
     except Exception:
-        util.logexc(LOG, "Failed to setup output redirection!")
+        util.logexc(LOG, 'Failed to setup output redirection!')
     if args.debug:
         # Reset so that all the debug handlers are closed out
-        LOG.debug(("Logging being reset, this logger may no"
-                   " longer be active shortly"))
+        LOG.debug(('Logging being reset, this logger may no'
+                   ' longer be active shortly'))
         logging.resetLogging()
     logging.setupLogging(mods.cfg)
     apply_reporting_cfg(init.cfg)
@@ -531,37 +533,37 @@ def main_single(name, args):
     init.read_cfg(extract_fns(args))
     # Stage 2
     try:
-        init.fetch(existing="trust")
+        init.fetch(existing='trust')
     except sources.DataSourceNotFoundException:
         # There was no datasource found,
         # that might be bad (or ok) depending on
         # the module being ran (so continue on)
-        util.logexc(LOG, ("Failed to fetch your datasource,"
-                          " likely bad things to come!"))
-        print_exc(("Failed to fetch your datasource,"
-                   " likely bad things to come!"))
+        util.logexc(LOG, ('Failed to fetch your datasource,'
+                          ' likely bad things to come!'))
+        print_exc(('Failed to fetch your datasource,'
+                   ' likely bad things to come!'))
         if not args.force:
             return 1
     # Stage 3
     mods = stages.Modules(init, extract_fns(args), reporter=args.reporter)
     mod_args = args.module_args
     if mod_args:
-        LOG.debug("Using passed in arguments %s", mod_args)
+        LOG.debug('Using passed in arguments %s', mod_args)
     mod_freq = args.frequency
     if mod_freq:
-        LOG.debug("Using passed in frequency %s", mod_freq)
+        LOG.debug('Using passed in frequency %s', mod_freq)
         mod_freq = FREQ_SHORT_NAMES.get(mod_freq)
     # Stage 4
     try:
-        LOG.debug("Closing stdin")
+        LOG.debug('Closing stdin')
         util.close_stdin()
         util.fixup_output(mods.cfg, None)
     except Exception:
-        util.logexc(LOG, "Failed to setup output redirection!")
+        util.logexc(LOG, 'Failed to setup output redirection!')
     if args.debug:
         # Reset so that all the debug handlers are closed out
-        LOG.debug(("Logging being reset, this logger may no"
-                   " longer be active shortly"))
+        LOG.debug(('Logging being reset, this logger may no'
+                   ' longer be active shortly'))
         logging.resetLogging()
     logging.setupLogging(mods.cfg)
     apply_reporting_cfg(init.cfg)
@@ -574,10 +576,10 @@ def main_single(name, args):
                                             mod_args,
                                             mod_freq)
     if failures:
-        LOG.warning("Ran %s but it failed!", mod_name)
+        LOG.warning('Ran %s but it failed!', mod_name)
         return 1
     elif not which_ran:
-        LOG.warning("Did not run %s, does it exist?", mod_name)
+        LOG.warning('Did not run %s, does it exist?', mod_name)
         return 1
     else:
         # Guess it worked
@@ -592,28 +594,28 @@ def dhclient_hook(name, args):
 
 def status_wrapper(name, args, data_d=None, link_d=None):
     if data_d is None:
-        data_d = os.path.normpath("/var/lib/cloud/data")
+        data_d = os.path.normpath('/var/lib/cloud/data')
     if link_d is None:
-        link_d = os.path.normpath("/run/cloud-init")
+        link_d = os.path.normpath('/run/cloud-init')
 
-    status_path = os.path.join(data_d, "status.json")
-    status_link = os.path.join(link_d, "status.json")
-    result_path = os.path.join(data_d, "result.json")
-    result_link = os.path.join(link_d, "result.json")
+    status_path = os.path.join(data_d, 'status.json')
+    status_link = os.path.join(link_d, 'status.json')
+    result_path = os.path.join(data_d, 'result.json')
+    result_link = os.path.join(link_d, 'result.json')
 
     util.ensure_dirs((data_d, link_d,))
 
     (_name, functor) = args.action
 
-    if name == "init":
+    if name == 'init':
         if args.local:
-            mode = "init-local"
+            mode = 'init-local'
         else:
-            mode = "init"
-    elif name == "modules":
-        mode = "modules-%s" % args.mode
+            mode = 'init'
+    elif name == 'modules':
+        mode = 'modules-%s' % args.mode
     else:
-        raise ValueError("unknown name: %s" % name)
+        raise ValueError('unknown name: %s' % name)
 
     modes = ('init', 'init-local', 'modules-config', 'modules-final')
 
@@ -658,8 +660,8 @@ def status_wrapper(name, args, data_d=None, link_d=None):
         v1[mode]['errors'] = [str(e) for e in errors]
 
     except Exception as e:
-        util.logexc(LOG, "failed stage %s", mode)
-        print_exc("failed run of stage %s" % mode)
+        util.logexc(LOG, 'failed stage %s', mode)
+        print_exc('failed run of stage %s' % mode)
         v1[mode]['errors'] = [str(e)]
 
     v1[mode]['finished'] = time.time()
@@ -667,7 +669,7 @@ def status_wrapper(name, args, data_d=None, link_d=None):
 
     atomic_helper.write_json(status_path, status)
 
-    if mode == "modules-final":
+    if mode == 'modules-final':
         # write the 'finished' file
         errors = []
         for m in modes:
@@ -713,83 +715,74 @@ def main(sysv_args=None):
                         default=False)
 
     parser.set_defaults(reporter=None)
-    subparsers = parser.add_subparsers()
+    subparsers = parser.add_subparsers(title='Subcommands', dest='subcommand')
+    subparsers.required = True
 
     # Each action and its sub-options (if any)
-    parser_init = subparsers.add_parser('init',
-                                        help=('initializes cloud-init and'
-                                              ' performs initial modules'))
-    parser_init.add_argument("--local", '-l', action='store_true',
-                             help="start in local mode (default: %(default)s)",
-                             default=False)
-    # This is used so that we can know which action is selected +
-    # the functor to use to run this subcommand
+    parser_analyze = subparsers.add_parser(
+        'analyze', help='Analyze cloud-init logs and data')
+    # Construct analyze subcommand parser
+    parser_analyze = analyze_parser(parser_analyze)
+
+    parser_init = subparsers.add_parser(
+        'init', help='initializes cloud-init and performs initial modules')
+    parser_init.add_argument(
+        '--local', '-l', action='store_true', default=False,
+        help='start in local mode (default: %(default)s)')
     parser_init.set_defaults(action=('init', main_init))
 
     # These settings are used for the 'config' and 'final' stages
-    parser_mod = subparsers.add_parser('modules',
-                                       help=('activates modules using '
-                                             'a given configuration key'))
-    parser_mod.add_argument("--mode", '-m', action='store',
-                            help=("module configuration name "
-                                  "to use (default: %(default)s)"),
+    parser_mod = subparsers.add_parser(
+        'modules', help='activates modules using a given configuration key')
+    parser_mod.add_argument('--mode', '-m', action='store',
+                            help=('module configuration name '
+                                  'to use (default: %(default)s)'),
                             default='config',
                             choices=('init', 'config', 'final'))
     parser_mod.set_defaults(action=('modules', main_modules))
 
     # These settings are used when you want to query information
     # stored in the cloud-init data objects/directories/files
-    parser_query = subparsers.add_parser('query',
-                                         help=('query information stored '
-                                               'in cloud-init'))
-    parser_query.add_argument("--name", '-n', action="store",
-                              help="item name to query on",
-                              required=True,
-                              choices=QUERY_DATA_TYPES)
+    parser_query = subparsers.add_parser(
+        'query', help='query information stored in cloud-init')
+    parser_query.add_argument(
+        '--name', '-n', action='store', help='item name to query on',
+        required=True, choices=QUERY_DATA_TYPES)
     parser_query.set_defaults(action=('query', main_query))
 
     # This subcommand allows you to run a single module
-    parser_single = subparsers.add_parser('single',
-                                          help=('run a single module '))
-    parser_single.add_argument("--name", '-n', action="store",
-                               help="module name to run",
-                               required=True)
-    parser_single.add_argument("--frequency", action="store",
-                               help=("frequency of the module"),
-                               required=False,
-                               choices=list(FREQ_SHORT_NAMES.keys()))
-    parser_single.add_argument("--report", action="store_true",
-                               help="enable reporting",
-                               required=False)
-    parser_single.add_argument("module_args", nargs="*",
-                               metavar='argument',
-                               help=('any additional arguments to'
-                                     ' pass to this module'))
+    parser_single = subparsers.add_parser(
+        'single', help='run a single module')
+    parser_single.add_argument('--name', '-n', action='store', required=True,
+                               help='module name to run')
+    parser_single.add_argument('--frequency', action='store',
+                               choices=list(FREQ_SHORT_NAMES.keys()),
+                               help='frequency of the module')
+    parser_single.add_argument('--report', action='store_true',
+                               help='enable reporting')
+    parser_single.add_argument('module_args', nargs='*', metavar='argument',
+                               help=('any additional arguments to pass to '
+                                     'this module'))
     parser_single.set_defaults(action=('single', main_single))
 
-    parser_dhclient = subparsers.add_parser('dhclient-hook',
-                                            help=('run the dhclient hook'
-                                                  'to record network info'))
-    parser_dhclient.add_argument("net_action",
-                                 help=('action taken on the interface'))
+    parser_dhclient = subparsers.add_parser(
+        'dhclient-hook', help='run the dhclient hook to record network info')
+    parser_dhclient.add_argument('net_action',
+                                 help='action taken on the interface')
     parser_dhclient.add_argument("net_interface",
                                  help=('the network interface being acted'
                                        ' upon'))
     parser_dhclient.set_defaults(action=('dhclient_hook', dhclient_hook))
 
-    parser_features = subparsers.add_parser('features',
-                                            help=('list defined features'))
+    parser_features = subparsers.add_parser(
+        'features', help='list defined features')
     parser_features.set_defaults(action=('features', main_features))
 
     args = parser.parse_args(args=sysv_args)
 
-    try:
-        (name, functor) = args.action
-    except AttributeError:
-        parser.error('too few arguments')
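+    # A subcommand is required, so one of the subparsers above has run and
+    # its set_defaults call guarantees args.action is present.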
+    (name, functor) = args.action
 
     # Setup basic logging to start (until reinitialized)
-    # iff in debug mode...
     if args.debug:
         logging.setupBasicLogging()
 
diff --git a/tests/unittests/test_cli.py b/tests/unittests/test_cli.py
index 06f366b..646bf9a 100644
--- a/tests/unittests/test_cli.py
+++ b/tests/unittests/test_cli.py
@@ -31,9 +31,96 @@ class TestCLI(test_helpers.FilesystemMockingTestCase):
 
     def test_no_arguments_shows_error_message(self):
         exit_code = self._call_main()
-        self.assertIn('cloud-init: error: too few arguments',
-                      self.stderr.getvalue())
+        missing_subcommand_message = [
+            'too few arguments',  # python2.7 msg
+            'the following arguments are required: subcommand'  # python3 msg
+        ]
+        error = self.stderr.getvalue()
+        matches = [msg in error for msg in missing_subcommand_message]
+        self.assertTrue(
+            any(matches), 'Did not find error message for missing subcommand')
         self.assertEqual(2, exit_code)
 
+    def test_all_subcommands_represented_in_help(self):
+        """All known subparsers are represented in the cloud-init help doc."""
+        self._call_main()
+        error = self.stderr.getvalue()
+        expected_subcommands = ['analyze', 'init', 'modules', 'query',
+                                'single', 'dhclient-hook', 'features']
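+        # argparse renders the subcommand choices comma-joined in its usage
+        # output, e.g. {analyze,init,...}, so the joined list should appear.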
+        self.assertIn(','.join(expected_subcommands), error)
 
-# vi: ts=4 expandtab
+    @mock.patch('cloudinit.cmd.main.status_wrapper')
+    def test_init_subcommand_parser(self, m_status_wrapper):
+        """The subcommand 'init' calls status_wrapper passing init."""
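+        # 'init' dispatches through status_wrapper, which records stage
+        # timing and errors in status.json around the real handler; mocking
+        # it captures the (name, args) passed for this subcommand.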
+        self._call_main(['cloud-init', 'init'])
+        (name, parseargs) = m_status_wrapper.call_args_list[0][0]
+        self.assertEqual('init', name)
+        self.assertEqual('init', parseargs.subcommand)
+        self.assertEqual('init', parseargs.action[0])
+        self.assertEqual('main_init', parseargs.action[1].__name__)
+
+    @mock.patch('cloudinit.cmd.main.status_wrapper')
+    def test_modules_subcommand_parser(self, m_status_wrapper):
+        """The subcommand 'modules' calls status_wrapper passing modules."""
+        self._call_main(['cloud-init', 'modules'])
+        (name, parseargs) = m_status_wrapper.call_args_list[0][0]
+        self.assertEqual('modules', name)
+        self.assertEqual('modules', parseargs.subcommand)
+        self.assertEqual('modules', parseargs.action[0])
+        self.assertEqual('main_modules', parseargs.action[1].__name__)
+
+    def test_analyze_subcommand_parser(self):
+        """The subcommand 'analyze' calls the correct subparser."""
+        self._call_main(['cloud-init', 'analyze'])
+        # These subcommands are only valid under 'cloud-init analyze'
+        expected_subcommands = ['blame', 'show', 'dump']
+        error = self.stderr.getvalue()
+        self.assertIn(','.join(expected_subcommands), error)
+
+    def test_query_subcommand_parser_not_implemented(self):
+        """The subcommand 'query' is not implemented."""
+        with self.assertRaises(NotImplementedError) as context_manager:
+            self._call_main(['cloud-init', 'query', '--name', 'data'])
+        self.assertEqual(
+            "Action 'query' is not currently implemented",
+            str(context_manager.exception))
+
+    @mock.patch('cloudinit.cmd.main.main_single')
+    def test_single_subcommand(self, m_main_single):
+        """The subcommand 'single' calls main_single with valid args."""
+        self._call_main(['cloud-init', 'single', '--name', 'cc_ntp'])
+        (name, parseargs) = m_main_single.call_args_list[0][0]
+        self.assertEqual('single', name)
+        self.assertEqual('single', parseargs.subcommand)
+        self.assertEqual('single', parseargs.action[0])
+        self.assertFalse(parseargs.debug)
+        self.assertFalse(parseargs.force)
+        self.assertIsNone(parseargs.frequency)
+        self.assertEqual('cc_ntp', parseargs.name)
+        self.assertFalse(parseargs.report)
+
+    @mock.patch('cloudinit.cmd.main.dhclient_hook')
+    def test_dhclient_hook_subcommand(self, m_dhclient_hook):
+        """The subcommand 'dhclient-hook' calls dhclient_hook with args."""
+        self._call_main(['cloud-init', 'dhclient-hook', 'net_action', 'eth0'])
+        (name, parseargs) = m_dhclient_hook.call_args_list[0][0]
+        self.assertEqual('dhclient_hook', name)
+        self.assertEqual('dhclient-hook', parseargs.subcommand)
+        self.assertEqual('dhclient_hook', parseargs.action[0])
+        self.assertFalse(parseargs.debug)
+        self.assertFalse(parseargs.force)
+        self.assertEqual('net_action', parseargs.net_action)
+        self.assertEqual('eth0', parseargs.net_interface)
+
+    @mock.patch('cloudinit.cmd.main.main_features')
+    def test_features_hook_subcommand(self, m_features):
+        """The subcommand 'features' calls main_features with args."""
+        self._call_main(['cloud-init', 'features'])
+        (name, parseargs) = m_features.call_args_list[0][0]
+        self.assertEqual('features', name)
+        self.assertEqual('features', parseargs.subcommand)
+        self.assertEqual('features', parseargs.action[0])
+        self.assertFalse(parseargs.debug)
+        self.assertFalse(parseargs.force)
+
+# vi: ts=4 expandtab