Chad Smith has proposed merging ~chad.smith/cloud-init:ubuntu/disco into 
cloud-init:ubuntu/disco.

Commit message:
new upstream snapshot for release into disco

Nothing special here for the ubuntu_advantage config module, as
ubuntu-advantage-tools is the new CLI.

Requested reviews:
  cloud-init commiters (cloud-init-dev)
Related bugs:
  Bug #1825596 in cloud-init: "Azure reboot with unformatted ephemeral drive 
won't mount reformatted volume"
  https://bugs.launchpad.net/cloud-init/+bug/1825596
  Bug #1828479 in cloud-init: "Release 19.1"
  https://bugs.launchpad.net/cloud-init/+bug/1828479

For more details, see:
https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/367300
-- 
Your team cloud-init commiters is requested to review the proposed merge of 
~chad.smith/cloud-init:ubuntu/disco into cloud-init:ubuntu/disco.
diff --git a/ChangeLog b/ChangeLog
index 8fa6fdd..bf48fd4 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,120 @@
+19.1:
+  - freebsd: add chpasswd pkg in the image [Gonéri Le Bouder]
+  - tests: add Eoan release [Paride Legovini]
+  - cc_mounts: check if mount -a on no-change fstab path
+    [Jason Zions (MSFT)] (LP: #1825596)
+  - replace remaining occurrences of LOG.warn [Daniel Watkins]
+  - DataSourceAzure: Adjust timeout for polling IMDS [Anh Vo]
+  - Azure: Changes to the Hyper-V KVP Reporter [Anh Vo]
+  - git tests: no longer show warning about safe yaml.
+  - tools/read-version: handle errors [Chad Miller]
+  - net/sysconfig: only indicate available on known sysconfig distros
+    (LP: #1819994)
+  - packages: update rpm specs for new bash completion path
+    [Daniel Watkins] (LP: #1825444)
+  - test_azure: mock util.SeLinuxGuard where needed
+    [Jason Zions (MSFT)] (LP: #1825253)
+  - setup.py: install bash completion script in new location [Daniel Watkins]
+  - mount_cb: do not pass sync and rw options to mount
+    [Gonéri Le Bouder] (LP: #1645824)
+  - cc_apt_configure: fix typo in apt documentation [Dominic Schlegel]
+  - Revert "DataSource: move update_events from a class to an instance..."
+    [Daniel Watkins]
+  - Change DataSourceNoCloud to ignore file system label's case.
+    [Risto Oikarinen]
+  - cmd:main.py: Fix missing 'modules-init' key in modes dict
+    [Antonio Romito] (LP: #1815109)
+  - ubuntu_advantage: rewrite cloud-config module
+  - Azure: Treat _unset network configuration as if it were absent
+    [Jason Zions (MSFT)] (LP: #1823084)
+  - DatasourceAzure: add additional logging for azure datasource [Anh Vo]
+  - cloud_tests: fix apt_pipelining test-cases
+  - Azure: Ensure platform random_seed is always serializable as JSON.
+    [Jason Zions (MSFT)]
+  - net/sysconfig: write out SUSE-compatible IPv6 config [Robert Schweikert]
+  - tox: Update testenv for openSUSE Leap to 15.0 [Thomas Bechtold]
+  - net: Fix ipv6 static routes when using eni renderer
+    [Raphael Glon] (LP: #1818669)
+  - Add ubuntu_drivers config module [Daniel Watkins]
+  - doc: Refresh Azure walinuxagent docs [Daniel Watkins]
+  - tox: bump pylint version to latest (2.3.1) [Daniel Watkins]
+  - DataSource: move update_events from a class to an instance attribute
+    [Daniel Watkins] (LP: #1819913)
+  - net/sysconfig: Handle default route setup for dhcp configured NICs
+    [Robert Schweikert] (LP: #1812117)
+  - DataSourceEc2: update RELEASE_BLOCKER to be more accurate
+    [Daniel Watkins]
+  - cloud-init-per: POSIX sh does not support string subst, use sed
+    (LP: #1819222)
+  - Support locking user with usermod if passwd is not available.
+  - Example for Microsoft Azure data disk added. [Anton Olifir]
+  - clean: correctly determine the path for excluding seed directory
+    [Daniel Watkins] (LP: #1818571)
+  - helpers/openstack: Treat unknown link types as physical
+    [Daniel Watkins] (LP: #1639263)
+  - drop Python 2.6 support and our NIH version detection [Daniel Watkins]
+  - tip-pylint: Fix assignment-from-return-none errors
+  - net: append type:dhcp[46] only if dhcp[46] is True in v2 netconfig
+    [Kurt Stieger] (LP: #1818032)
+  - cc_apt_pipelining: stop disabling pipelining by default
+    [Daniel Watkins] (LP: #1794982)
+  - tests: fix some slow tests and some leaking state [Daniel Watkins]
+  - util: don't determine string_types ourselves [Daniel Watkins]
+  - cc_rsyslog: Escape possible nested set [Daniel Watkins] (LP: #1816967)
+  - Enable encrypted_data_bag_secret support for Chef
+    [Eric Williams] (LP: #1817082)
+  - azure: Filter list of ssh keys pulled from fabric [Jason Zions (MSFT)]
+  - doc: update merging doc with fixes and some additional details/examples
+  - tests: integration test failure summary to use traceback if empty error
+  - This is to fix https://bugs.launchpad.net/cloud-init/+bug/1812676
+    [Vitaly Kuznetsov]
+  - EC2: Rewrite network config on AWS Classic instances every boot
+    [Guilherme G. Piccoli] (LP: #1802073)
+  - netinfo: Adjust ifconfig output parsing for FreeBSD ipv6 entries
+    (LP: #1779672)
+  - netplan: Don't render yaml aliases when dumping netplan (LP: #1815051)
+  - add PyCharm IDE .idea/ path to .gitignore [Dominic Schlegel]
+  - correct grammar issue in instance metadata documentation
+    [Dominic Schlegel] (LP: #1802188)
+  - clean: cloud-init clean should not trace when run from within cloud_dir
+    (LP: #1795508)
+  - Resolve flake8 comparison and pycodestyle over-ident issues
+    [Paride Legovini]
+  - opennebula: also exclude epochseconds from changed environment vars
+    (LP: #1813641)
+  - systemd: Render generator from template to account for system
+    differences. [Robert Schweikert]
+  - sysconfig: On SUSE, use STARTMODE instead of ONBOOT
+    [Robert Schweikert] (LP: #1799540)
+  - flake8: use ==/!= to compare str, bytes, and int literals
+    [Paride Legovini]
+  - opennebula: exclude EPOCHREALTIME as known bash env variable with a
+    delta (LP: #1813383)
+  - tox: fix disco httpretty dependencies for py37 (LP: #1813361)
+  - run-container: uncomment baseurl in yum.repos.d/*.repo when using a
+    proxy [Paride Legovini]
+  - lxd: install zfs-linux instead of zfs meta package
+    [Johnson Shi] (LP: #1799779)
+  - net/sysconfig: do not write a resolv.conf file with only the header.
+    [Robert Schweikert]
+  - net: Make sysconfig renderer compatible with Network Manager.
+    [Eduardo Otubo]
+  - cc_set_passwords: Fix regex when parsing hashed passwords
+    [Marlin Cremers] (LP: #1811446)
+  - net: Wait for dhclient to daemonize before reading lease file
+    [Jason Zions] (LP: #1794399)
+  - [Azure] Increase retries when talking to Wireserver during metadata walk
+    [Jason Zions]
+  - Add documentation on adding a datasource.
+  - doc: clean up some datasource documentation.
+  - ds-identify: fix wrong variable name in ovf_vmware_transport_guestinfo.
+  - Scaleway: Support ssh keys provided inside an instance tag. [PORTE Loïc]
+  - OVF: simplify expected return values of transport functions.
+  - Vmware: Add support for the com.vmware.guestInfo OVF transport.
+    (LP: #1807466)
+  - HACKING.rst: change contact info to Josh Powers
+  - Update to pylint 2.2.2.
+
 18.5:
  - tests: add Disco release [Joshua Powers]
  - net: render 'metric' values in per-subnet routes (LP: #1805871)
diff --git a/cloudinit/config/cc_apt_configure.py b/cloudinit/config/cc_apt_configure.py
index e18944e..919d199 100644
--- a/cloudinit/config/cc_apt_configure.py
+++ b/cloudinit/config/cc_apt_configure.py
@@ -127,7 +127,7 @@ to ``^[\\w-]+:\\w``
 
 Source list entries can be specified as a dictionary under the ``sources``
 config key, with key in the dict representing a different source file. The key
-The key of each source entry will be used as an id that can be referenced in
+of each source entry will be used as an id that can be referenced in
 other config entries, as well as the filename for the source's configuration
 under ``/etc/apt/sources.list.d``. If the name does not end with ``.list``,
 it will be appended. If there is no configuration for a key in ``sources``, no
diff --git a/cloudinit/config/cc_mounts.py b/cloudinit/config/cc_mounts.py
index 339baba..123ffb8 100644
--- a/cloudinit/config/cc_mounts.py
+++ b/cloudinit/config/cc_mounts.py
@@ -439,6 +439,7 @@ def handle(_name, cfg, cloud, log, _args):
 
     cc_lines = []
     needswap = False
+    need_mount_all = False
     dirs = []
     for line in actlist:
         # write 'comment' in the fs_mntops, entry,  claiming this
@@ -449,11 +450,18 @@ def handle(_name, cfg, cloud, log, _args):
             dirs.append(line[1])
         cc_lines.append('\t'.join(line))
 
+    mount_points = [v['mountpoint'] for k, v in util.mounts().items()
+                    if 'mountpoint' in v]
     for d in dirs:
         try:
             util.ensure_dir(d)
         except Exception:
             util.logexc(log, "Failed to make '%s' config-mount", d)
+        # dirs is list of directories on which a volume should be mounted.
+        # If any of them does not already show up in the list of current
+        # mount points, we will definitely need to do mount -a.
+        if not need_mount_all and d not in mount_points:
+            need_mount_all = True
 
     sadds = [WS.sub(" ", n) for n in cc_lines]
     sdrops = [WS.sub(" ", n) for n in fstab_removed]
@@ -473,6 +481,9 @@ def handle(_name, cfg, cloud, log, _args):
         log.debug("No changes to /etc/fstab made.")
     else:
         log.debug("Changes to fstab: %s", sops)
+        need_mount_all = True
+
+    if need_mount_all:
         activate_cmds.append(["mount", "-a"])
         if uses_systemd:
             activate_cmds.append(["systemctl", "daemon-reload"])
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index 0998392..a47da0a 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -18,6 +18,8 @@ from .network_state import (
 
 LOG = logging.getLogger(__name__)
 NM_CFG_FILE = "/etc/NetworkManager/NetworkManager.conf"
+KNOWN_DISTROS = [
+    'opensuse', 'sles', 'suse', 'redhat', 'fedora', 'centos']
 
 
 def _make_header(sep='#'):
@@ -717,8 +719,8 @@ class Renderer(renderer.Renderer):
 def available(target=None):
     sysconfig = available_sysconfig(target=target)
     nm = available_nm(target=target)
-
-    return any([nm, sysconfig])
+    return (util.get_linux_distro()[0] in KNOWN_DISTROS
+            and any([nm, sysconfig]))
 
 
 def available_sysconfig(target=None):
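
The sysconfig change above stops the renderer from reporting itself available
on distros that merely happen to have NetworkManager installed (LP: #1819994):
both a known sysconfig distro and one of the existing NM/sysconfig probes must
now hold. Roughly, as a sketch (get_linux_distro() is assumed to return a
tuple such as ('centos', '7', ...)):

    KNOWN_DISTROS = ['opensuse', 'sles', 'suse', 'redhat', 'fedora', 'centos']

    def sysconfig_renderer_available(distro_name, nm_ok, scfg_ok):
        # Both conditions are required: a distro that actually uses
        # /etc/sysconfig networking, and at least one of the existing
        # NetworkManager / sysconfig tool probes succeeding.
        return distro_name in KNOWN_DISTROS and (nm_ok or scfg_ok)
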
diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py
index f55c31e..6d2affe 100644
--- a/cloudinit/net/tests/test_init.py
+++ b/cloudinit/net/tests/test_init.py
@@ -7,11 +7,11 @@ import mock
 import os
 import requests
 import textwrap
-import yaml
 
 import cloudinit.net as net
 from cloudinit.util import ensure_file, write_file, ProcessExecutionError
 from cloudinit.tests.helpers import CiTestCase, HttprettyTestCase
+from cloudinit import safeyaml as yaml
 
 
 class TestSysDevPath(CiTestCase):
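
The test modules now import cloudinit's safeyaml wrapper instead of bare
PyYAML, which is what silences the "safe yaml" warning mentioned in the
changelog. The idea boils down to always loading with an explicit SafeLoader;
a sketch of that pattern (not the actual cloudinit.safeyaml contents):

    import yaml

    def safe_load(blob):
        # Explicit SafeLoader: no arbitrary Python object construction,
        # and no PyYAML warning about calling load() without a Loader.
        return yaml.load(blob, Loader=yaml.SafeLoader)
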
diff --git a/cloudinit/reporting/handlers.py b/cloudinit/reporting/handlers.py
old mode 100644
new mode 100755
index 6d23558..10165ae
--- a/cloudinit/reporting/handlers.py
+++ b/cloudinit/reporting/handlers.py
@@ -5,7 +5,6 @@ import fcntl
 import json
 import six
 import os
-import re
 import struct
 import threading
 import time
@@ -14,6 +13,7 @@ from cloudinit import log as logging
 from cloudinit.registry import DictRegistry
 from cloudinit import (url_helper, util)
 from datetime import datetime
+from six.moves.queue import Empty as QueueEmptyError
 
 if six.PY2:
     from multiprocessing.queues import JoinableQueue as JQueue
@@ -129,24 +129,50 @@ class HyperVKvpReportingHandler(ReportingHandler):
     DESC_IDX_KEY = 'msg_i'
     JSON_SEPARATORS = (',', ':')
     KVP_POOL_FILE_GUEST = '/var/lib/hyperv/.kvp_pool_1'
+    _already_truncated_pool_file = False
 
     def __init__(self,
                  kvp_file_path=KVP_POOL_FILE_GUEST,
                  event_types=None):
         super(HyperVKvpReportingHandler, self).__init__()
         self._kvp_file_path = kvp_file_path
+        HyperVKvpReportingHandler._truncate_guest_pool_file(
+            self._kvp_file_path)
+
         self._event_types = event_types
         self.q = JQueue()
-        self.kvp_file = None
         self.incarnation_no = self._get_incarnation_no()
         self.event_key_prefix = u"{0}|{1}".format(self.EVENT_PREFIX,
                                                   self.incarnation_no)
-        self._current_offset = 0
         self.publish_thread = threading.Thread(
                 target=self._publish_event_routine)
         self.publish_thread.daemon = True
         self.publish_thread.start()
 
+    @classmethod
+    def _truncate_guest_pool_file(cls, kvp_file):
+        """
+        Truncate the pool file if it has not been truncated since boot.
+        This should be done exactly once for the file indicated by
+        KVP_POOL_FILE_GUEST constant above. This method takes a filename
+        so that we can use an arbitrary file during unit testing.
+        Since KVP is a best-effort telemetry channel we only attempt to
+        truncate the file once and only if the file has not been modified
+        since boot. Additional truncation can lead to loss of existing
+        KVPs.
+        """
+        if cls._already_truncated_pool_file:
+            return
+        boot_time = time.time() - float(util.uptime())
+        try:
+            if os.path.getmtime(kvp_file) < boot_time:
+                with open(kvp_file, "w"):
+                    pass
+        except (OSError, IOError) as e:
+            LOG.warning("failed to truncate kvp pool file, %s", e)
+        finally:
+            cls._already_truncated_pool_file = True
+
     def _get_incarnation_no(self):
         """
         use the time passed as the incarnation number.
@@ -162,20 +188,15 @@ class HyperVKvpReportingHandler(ReportingHandler):
 
     def _iterate_kvps(self, offset):
         """iterate the kvp file from the current offset."""
-        try:
-            with open(self._kvp_file_path, 'rb+') as f:
-                self.kvp_file = f
-                fcntl.flock(f, fcntl.LOCK_EX)
-                f.seek(offset)
+        with open(self._kvp_file_path, 'rb') as f:
+            fcntl.flock(f, fcntl.LOCK_EX)
+            f.seek(offset)
+            record_data = f.read(self.HV_KVP_RECORD_SIZE)
+            while len(record_data) == self.HV_KVP_RECORD_SIZE:
+                kvp_item = self._decode_kvp_item(record_data)
+                yield kvp_item
                 record_data = f.read(self.HV_KVP_RECORD_SIZE)
-                while len(record_data) == self.HV_KVP_RECORD_SIZE:
-                    self._current_offset += self.HV_KVP_RECORD_SIZE
-                    kvp_item = self._decode_kvp_item(record_data)
-                    yield kvp_item
-                    record_data = f.read(self.HV_KVP_RECORD_SIZE)
-                fcntl.flock(f, fcntl.LOCK_UN)
-        finally:
-            self.kvp_file = None
+            fcntl.flock(f, fcntl.LOCK_UN)
 
     def _event_key(self, event):
         """
@@ -207,23 +228,13 @@ class HyperVKvpReportingHandler(ReportingHandler):
 
         return {'key': k, 'value': v}
 
-    def _update_kvp_item(self, record_data):
-        if self.kvp_file is None:
-            raise ReportException(
-                "kvp file '{0}' not opened."
-                .format(self._kvp_file_path))
-        self.kvp_file.seek(-self.HV_KVP_RECORD_SIZE, 1)
-        self.kvp_file.write(record_data)
-
     def _append_kvp_item(self, record_data):
-        with open(self._kvp_file_path, 'rb+') as f:
+        with open(self._kvp_file_path, 'ab') as f:
             fcntl.flock(f, fcntl.LOCK_EX)
-            # seek to end of the file
-            f.seek(0, 2)
-            f.write(record_data)
+            for data in record_data:
+                f.write(data)
             f.flush()
             fcntl.flock(f, fcntl.LOCK_UN)
-            self._current_offset = f.tell()
 
     def _break_down(self, key, meta_data, description):
         del meta_data[self.MSG_KEY]
@@ -279,40 +290,26 @@ class HyperVKvpReportingHandler(ReportingHandler):
 
     def _publish_event_routine(self):
         while True:
+            items_from_queue = 0
             try:
                 event = self.q.get(block=True)
-                need_append = True
+                items_from_queue += 1
+                encoded_data = []
+                while event is not None:
+                    encoded_data += self._encode_event(event)
+                    try:
+                        # get all the rest of the events in the queue
+                        event = self.q.get(block=False)
+                        items_from_queue += 1
+                    except QueueEmptyError:
+                        event = None
                 try:
-                    if not os.path.exists(self._kvp_file_path):
-                        LOG.warning(
-                            "skip writing events %s to %s. file not present.",
-                            event.as_string(),
-                            self._kvp_file_path)
-                    encoded_event = self._encode_event(event)
-                    # for each encoded_event
-                    for encoded_data in (encoded_event):
-                        for kvp in self._iterate_kvps(self._current_offset):
-                            match = (
-                                re.match(
-                                    r"^{0}\|(\d+)\|.+"
-                                    .format(self.EVENT_PREFIX),
-                                    kvp['key']
-                                ))
-                            if match:
-                                match_groups = match.groups(0)
-                                if int(match_groups[0]) < self.incarnation_no:
-                                    need_append = False
-                                    self._update_kvp_item(encoded_data)
-                                    continue
-                        if need_append:
-                            self._append_kvp_item(encoded_data)
-                except IOError as e:
-                    LOG.warning(
-                        "failed posting event to kvp: %s e:%s",
-                        event.as_string(), e)
+                    self._append_kvp_item(encoded_data)
+                except (OSError, IOError) as e:
+                    LOG.warning("failed posting events to kvp, %s", e)
                 finally:
-                    self.q.task_done()
-
+                    for _ in range(items_from_queue):
+                        self.q.task_done()
             # when main process exits, q.get() will through EOFError
             # indicating we should exit this thread.
             except EOFError:
@@ -322,7 +319,7 @@ class HyperVKvpReportingHandler(ReportingHandler):
     # if the kvp pool already contains a chunk of data,
     # so defer it to another thread.
     def publish_event(self, event):
-        if (not self._event_types or event.event_type in self._event_types):
+        if not self._event_types or event.event_type in self._event_types:
             self.q.put(event)
 
     def flush(self):
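
Two behaviours change in the Hyper-V KVP handler above: the guest pool file is
truncated at most once per boot (and only if it predates the current boot),
and queued events are now drained and appended in batches. The truncation
guard amounts to comparing the file's mtime against the boot time derived from
uptime; a standalone sketch, assuming uptime_seconds comes from something like
util.uptime():

    import os
    import time

    def truncate_stale_pool_file(path, uptime_seconds):
        boot_time = time.time() - float(uptime_seconds)
        try:
            if os.path.getmtime(path) < boot_time:
                # File untouched since before this boot: empty it once.
                with open(path, "w"):
                    pass
        except (OSError, IOError):
            # KVP is best-effort telemetry; never fail the caller.
            pass
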
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index 76b1661..b7440c1 100755
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -57,7 +57,12 @@ AZURE_CHASSIS_ASSET_TAG = '7783-7084-3265-9085-8269-3286-77'
 REPROVISION_MARKER_FILE = "/var/lib/cloud/data/poll_imds"
 REPORTED_READY_MARKER_FILE = "/var/lib/cloud/data/reported_ready"
 AGENT_SEED_DIR = '/var/lib/waagent'
+
+# In the event where the IMDS primary server is not
+# available, it takes 1s to fallback to the secondary one
+IMDS_TIMEOUT_IN_SECONDS = 2
 IMDS_URL = "http://169.254.169.254/metadata/";
+
 PLATFORM_ENTROPY_SOURCE = "/sys/firmware/acpi/tables/OEM0"
 
 # List of static scripts and network config artifacts created by
@@ -407,7 +412,7 @@ class DataSourceAzure(sources.DataSource):
                 elif cdev.startswith("/dev/"):
                     if util.is_FreeBSD():
                         ret = util.mount_cb(cdev, load_azure_ds_dir,
-                                            mtype="udf", sync=False)
+                                            mtype="udf")
                     else:
                         ret = util.mount_cb(cdev, load_azure_ds_dir)
                 else:
@@ -582,9 +587,9 @@ class DataSourceAzure(sources.DataSource):
                         return
                     self._ephemeral_dhcp_ctx.clean_network()
                 else:
-                    return readurl(url, timeout=1, headers=headers,
-                                   exception_cb=exc_cb, infinite=True,
-                                   log_req_resp=False).contents
+                    return readurl(url, timeout=IMDS_TIMEOUT_IN_SECONDS,
+                                   headers=headers, exception_cb=exc_cb,
+                                   infinite=True, log_req_resp=False).contents
             except UrlError:
                 # Teardown our EphemeralDHCPv4 context on failure as we retry
                 self._ephemeral_dhcp_ctx.clean_network()
@@ -1291,8 +1296,8 @@ def _get_metadata_from_imds(retries):
     headers = {"Metadata": "true"}
     try:
         response = readurl(
-            url, timeout=1, headers=headers, retries=retries,
-            exception_cb=retry_on_url_exc)
+            url, timeout=IMDS_TIMEOUT_IN_SECONDS, headers=headers,
+            retries=retries, exception_cb=retry_on_url_exc)
     except Exception as e:
         LOG.debug('Ignoring IMDS instance metadata: %s', e)
         return {}
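
The Azure datasource now polls IMDS with a 2-second timeout instead of 1
second, leaving room for the documented 1-second failover to the secondary
IMDS server. Cloud-init itself goes through url_helper.readurl with retries
and an exception callback; a rough requests-based equivalent for illustration
only (the instance path and api-version below are illustrative, not taken from
this diff):

    import requests

    IMDS_TIMEOUT_IN_SECONDS = 2
    IMDS_URL = "http://169.254.169.254/metadata/"

    def fetch_imds(path="instance?api-version=2017-12-01"):
        resp = requests.get(IMDS_URL + path,
                            headers={"Metadata": "true"},
                            timeout=IMDS_TIMEOUT_IN_SECONDS)
        resp.raise_for_status()
        return resp.json()
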
diff --git a/cloudinit/sources/DataSourceCloudStack.py b/cloudinit/sources/DataSourceCloudStack.py
index d4b758f..f185dc7 100644
--- a/cloudinit/sources/DataSourceCloudStack.py
+++ b/cloudinit/sources/DataSourceCloudStack.py
@@ -95,7 +95,7 @@ class DataSourceCloudStack(sources.DataSource):
         start_time = time.time()
         url = uhelp.wait_for_url(
             urls=urls, max_wait=url_params.max_wait_seconds,
-            timeout=url_params.timeout_seconds, status_cb=LOG.warn)
+            timeout=url_params.timeout_seconds, status_cb=LOG.warning)
 
         if url:
             LOG.debug("Using metadata source: '%s'", url)
diff --git a/cloudinit/sources/DataSourceConfigDrive.py b/cloudinit/sources/DataSourceConfigDrive.py
index 564e3eb..571d30d 100644
--- a/cloudinit/sources/DataSourceConfigDrive.py
+++ b/cloudinit/sources/DataSourceConfigDrive.py
@@ -72,15 +72,12 @@ class DataSourceConfigDrive(openstack.SourceMixin, sources.DataSource):
             dslist = self.sys_cfg.get('datasource_list')
             for dev in find_candidate_devs(dslist=dslist):
                 try:
-                    # Set mtype if freebsd and turn off sync
-                    if dev.startswith("/dev/cd"):
+                    if util.is_FreeBSD() and dev.startswith("/dev/cd"):
                         mtype = "cd9660"
-                        sync = False
                     else:
                         mtype = None
-                        sync = True
                     results = util.mount_cb(dev, read_config_drive,
-                                            mtype=mtype, sync=sync)
+                                            mtype=mtype)
                     found = dev
                 except openstack.NonReadable:
                     pass
diff --git a/cloudinit/sources/DataSourceEc2.py b/cloudinit/sources/DataSourceEc2.py
index ac28f1d..5c017bf 100644
--- a/cloudinit/sources/DataSourceEc2.py
+++ b/cloudinit/sources/DataSourceEc2.py
@@ -208,7 +208,7 @@ class DataSourceEc2(sources.DataSource):
         start_time = time.time()
         url = uhelp.wait_for_url(
             urls=urls, max_wait=url_params.max_wait_seconds,
-            timeout=url_params.timeout_seconds, status_cb=LOG.warn)
+            timeout=url_params.timeout_seconds, status_cb=LOG.warning)
 
         if url:
             self.metadata_address = url2base[url]
diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py
index d3af05e..82c4c8c 100755
--- a/cloudinit/sources/helpers/azure.py
+++ b/cloudinit/sources/helpers/azure.py
@@ -20,6 +20,9 @@ from cloudinit.reporting import events
 
 LOG = logging.getLogger(__name__)
 
+# This endpoint matches the format as found in dhcp lease files, since this
+# value is applied if the endpoint can't be found within a lease file
+DEFAULT_WIRESERVER_ENDPOINT = "a8:3f:81:10"
 
 azure_ds_reporter = events.ReportEventStack(
     name="azure-ds",
@@ -297,7 +300,12 @@ class WALinuxAgentShim(object):
     @azure_ds_telemetry_reporter
     def _get_value_from_leases_file(fallback_lease_file):
         leases = []
-        content = util.load_file(fallback_lease_file)
+        try:
+            content = util.load_file(fallback_lease_file)
+        except IOError as ex:
+            LOG.error("Failed to read %s: %s", fallback_lease_file, ex)
+            return None
+
         LOG.debug("content is %s", content)
         option_name = _get_dhcp_endpoint_option_name()
         for line in content.splitlines():
@@ -372,9 +380,9 @@ class WALinuxAgentShim(object):
                           fallback_lease_file)
                 value = WALinuxAgentShim._get_value_from_leases_file(
                     fallback_lease_file)
-
         if value is None:
-            raise ValueError('No endpoint found.')
+            LOG.warning("No lease found; using default endpoint")
+            value = DEFAULT_WIRESERVER_ENDPOINT
 
         endpoint_ip_address = WALinuxAgentShim.get_ip_from_lease_value(value)
         LOG.debug('Azure endpoint found at %s', endpoint_ip_address)
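
The fallback endpoint above is stored in the same colon-separated hex form a
dhclient lease would carry, so it flows through the existing
get_ip_from_lease_value() path unchanged. Decoding that particular form yields
the well-known wireserver address 168.63.129.16, which is what the updated
tests assert; a sketch handling only this one encoding:

    def decode_wireserver_endpoint(lease_value):
        # "a8:3f:81:10" -> "168.63.129.16"
        return ".".join(str(int(octet, 16)) for octet in lease_value.split(":"))

    assert decode_wireserver_endpoint("a8:3f:81:10") == "168.63.129.16"
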
diff --git a/cloudinit/util.py b/cloudinit/util.py
index 385f231..ea4199c 100644
--- a/cloudinit/util.py
+++ b/cloudinit/util.py
@@ -1679,7 +1679,7 @@ def mounts():
     return mounted
 
 
-def mount_cb(device, callback, data=None, rw=False, mtype=None, sync=True,
+def mount_cb(device, callback, data=None, mtype=None,
              update_env_for_mount=None):
     """
     Mount the device, call method 'callback' passing the directory
@@ -1726,18 +1726,7 @@ def mount_cb(device, callback, data=None, rw=False, mtype=None, sync=True,
             for mtype in mtypes:
                 mountpoint = None
                 try:
-                    mountcmd = ['mount']
-                    mountopts = []
-                    if rw:
-                        mountopts.append('rw')
-                    else:
-                        mountopts.append('ro')
-                    if sync:
-                        # This seems like the safe approach to do
-                        # (ie where this is on by default)
-                        mountopts.append("sync")
-                    if mountopts:
-                        mountcmd.extend(["-o", ",".join(mountopts)])
+                    mountcmd = ['mount', '-o', 'ro']
                     if mtype:
                         mountcmd.extend(['-t', mtype])
                     mountcmd.append(device)
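
With the rw/sync knobs removed, mount_cb always mounts read-only
(LP: #1645824) and the command it builds reduces to roughly this sketch:

    def build_mount_cmd(device, mountpoint, mtype=None):
        cmd = ['mount', '-o', 'ro']
        if mtype:
            cmd.extend(['-t', mtype])
        cmd.extend([device, mountpoint])
        return cmd

    # build_mount_cmd('/dev/sr0', '/run/tmpmnt', mtype='udf')
    #   -> ['mount', '-o', 'ro', '-t', 'udf', '/dev/sr0', '/run/tmpmnt']
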
diff --git a/cloudinit/version.py b/cloudinit/version.py
index a2c5d43..ddcd436 100644
--- a/cloudinit/version.py
+++ b/cloudinit/version.py
@@ -4,7 +4,7 @@
 #
 # This file is part of cloud-init. See LICENSE file for license information.
 
-__VERSION__ = "18.5"
+__VERSION__ = "19.1"
 _PACKAGED_VERSION = '@@PACKAGED_VERSION@@'
 
 FEATURES = [
diff --git a/debian/changelog b/debian/changelog
index 0630854..8379093 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,30 @@
+cloud-init (19.1-1-gbaa47854-0ubuntu1~19.04.1) disco; urgency=medium
+
+  * New upstream snapshot.
+    - Azure: Return static fallback address as if failed to find endpoint
+      [Jason Zions (MSFT)]
+    - release 19.1 (LP: #1828479)
+    - freebsd: add chpasswd pkg in the image [Gonéri Le Bouder]
+    - tests: add Eoan release [Paride Legovini]
+    - cc_mounts: check if mount -a on no-change fstab path
+      [Jason Zions (MSFT)] (LP: #1825596)
+    - replace remaining occurrences of LOG.warn
+    - DataSourceAzure: Adjust timeout for polling IMDS [Anh Vo]
+    - Azure: Changes to the Hyper-V KVP Reporter [Anh Vo]
+    - git tests: no longer show warning about safe yaml. [Scott Moser]
+    - tools/read-version: handle errors [Chad Miller]
+    - net/sysconfig: only indicate available on known sysconfig distros
+      (LP: #1819994)
+    - packages: update rpm specs for new bash completion path (LP: #1825444)
+    - test_azure: mock util.SeLinuxGuard where needed
+      [Jason Zions (MSFT)] (LP: #1825253)
+    - setup.py: install bash completion script in new location
+    - mount_cb: do not pass sync and rw options to mount
+      [Gonéri Le Bouder] (LP: #1645824)
+    - cc_apt_configure: fix typo in apt documentation [Dominic Schlegel]
+
+ -- Chad Smith <chad.sm...@canonical.com>  Fri, 10 May 2019 21:11:57 -0600
+
 cloud-init (18.5-62-g6322c2dd-0ubuntu1) disco; urgency=medium
 
   * New upstream snapshot.
diff --git a/packages/redhat/cloud-init.spec.in b/packages/redhat/cloud-init.spec.in
index 6b2022b..057a578 100644
--- a/packages/redhat/cloud-init.spec.in
+++ b/packages/redhat/cloud-init.spec.in
@@ -205,7 +205,9 @@ fi
 %dir                    %{_sysconfdir}/cloud/templates
 %config(noreplace)      %{_sysconfdir}/cloud/templates/*
 %config(noreplace) %{_sysconfdir}/rsyslog.d/21-cloudinit.conf
-%{_sysconfdir}/bash_completion.d/cloud-init
+
+# Bash completion script
+%{_datadir}/bash-completion/completions/cloud-init
 
 %{_libexecdir}/%{name}
 %dir %{_sharedstatedir}/cloud
diff --git a/packages/suse/cloud-init.spec.in b/packages/suse/cloud-init.spec.in
index 26894b3..004b875 100644
--- a/packages/suse/cloud-init.spec.in
+++ b/packages/suse/cloud-init.spec.in
@@ -120,7 +120,9 @@ version_pys=$(cd "%{buildroot}" && find . -name version.py -type f)
 %config(noreplace) %{_sysconfdir}/cloud/cloud.cfg.d/README
 %dir               %{_sysconfdir}/cloud/templates
 %config(noreplace) %{_sysconfdir}/cloud/templates/*
-%{_sysconfdir}/bash_completion.d/cloud-init
+
+# Bash completion script
+%{_datadir}/bash-completion/completions/cloud-init
 
 %{_sysconfdir}/dhcp/dhclient-exit-hooks.d/hook-dhclient
 %{_sysconfdir}/NetworkManager/dispatcher.d/hook-network-manager
diff --git a/setup.py b/setup.py
index 186e215..fcaf26f 100755
--- a/setup.py
+++ b/setup.py
@@ -245,13 +245,14 @@ if not in_virtualenv():
         INITSYS_ROOTS[k] = "/" + INITSYS_ROOTS[k]
 
 data_files = [
-    (ETC + '/bash_completion.d', ['bash_completion/cloud-init']),
     (ETC + '/cloud', [render_tmpl("config/cloud.cfg.tmpl")]),
     (ETC + '/cloud/cloud.cfg.d', glob('config/cloud.cfg.d/*')),
     (ETC + '/cloud/templates', glob('templates/*')),
     (USR_LIB_EXEC + '/cloud-init', ['tools/ds-identify',
                                     'tools/uncloud-init',
                                     'tools/write-ssh-key-fingerprints']),
+    (USR + '/share/bash-completion/completions',
+     ['bash_completion/cloud-init']),
     (USR + '/share/doc/cloud-init', [f for f in glob('doc/*') if is_f(f)]),
     (USR + '/share/doc/cloud-init/examples',
         [f for f in glob('doc/examples/*') if is_f(f)]),
diff --git a/tests/cloud_tests/releases.yaml b/tests/cloud_tests/releases.yaml
index ec5da72..924ad95 100644
--- a/tests/cloud_tests/releases.yaml
+++ b/tests/cloud_tests/releases.yaml
@@ -129,6 +129,22 @@ features:
 
 releases:
     # UBUNTU =================================================================
+    eoan:
+        # EOL: Jul 2020
+        default:
+            enabled: true
+            release: eoan
+            version: 19.10
+            os: ubuntu
+            feature_groups:
+                - base
+                - debian_base
+                - ubuntu_specific
+        lxd:
+            sstreams_server: https://cloud-images.ubuntu.com/daily
+            alias: eoan
+            setup_overrides: null
+            override_templates: false
     disco:
         # EOL: Jan 2020
         default:
diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
index 53c56cd..427ab7e 100644
--- a/tests/unittests/test_datasource/test_azure.py
+++ b/tests/unittests/test_datasource/test_azure.py
@@ -163,7 +163,8 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
 
         m_readurl.assert_called_with(
             self.network_md_url, exception_cb=mock.ANY,
-            headers={'Metadata': 'true'}, retries=2, timeout=1)
+            headers={'Metadata': 'true'}, retries=2,
+            timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS)
 
     @mock.patch('cloudinit.url_helper.time.sleep')
     @mock.patch(MOCKPATH + 'net.is_up')
@@ -1375,12 +1376,15 @@ class TestCanDevBeReformatted(CiTestCase):
         self._domock(p + "util.mount_cb", 'm_mount_cb')
         self._domock(p + "os.path.realpath", 'm_realpath')
         self._domock(p + "os.path.exists", 'm_exists')
+        self._domock(p + "util.SeLinuxGuard", 'm_selguard')
 
         self.m_exists.side_effect = lambda p: p in bypath
         self.m_realpath.side_effect = realpath
         self.m_has_ntfs_filesystem.side_effect = has_ntfs_fs
         self.m_mount_cb.side_effect = mount_cb
         self.m_partitions_on_device.side_effect = partitions_on_device
+        self.m_selguard.__enter__ = mock.Mock(return_value=False)
+        self.m_selguard.__exit__ = mock.Mock()
 
     def test_three_partitions_is_false(self):
         """A disk with 3 partitions can not be formatted."""
@@ -1788,7 +1792,8 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
                                     headers={'Metadata': 'true',
                                              'User-Agent':
                                              'Cloud-Init/%s' % vs()
-                                             }, method='GET', timeout=1,
+                                             }, method='GET',
+                                    timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS,
                                     url=full_url)])
         self.assertEqual(m_dhcp.call_count, 2)
         m_net.assert_any_call(
@@ -1825,7 +1830,9 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
                                     headers={'Metadata': 'true',
                                              'User-Agent':
                                              'Cloud-Init/%s' % vs()},
-                                    method='GET', timeout=1, url=full_url)])
+                                    method='GET',
+                                    timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS,
+                                    url=full_url)])
         self.assertEqual(m_dhcp.call_count, 2)
         m_net.assert_any_call(
             broadcast='192.168.2.255', interface='eth9', ip='192.168.2.9',
diff --git a/tests/unittests/test_datasource/test_azure_helper.py b/tests/unittests/test_datasource/test_azure_helper.py
index 0255616..bd006ab 100644
--- a/tests/unittests/test_datasource/test_azure_helper.py
+++ b/tests/unittests/test_datasource/test_azure_helper.py
@@ -67,12 +67,17 @@ class TestFindEndpoint(CiTestCase):
         self.networkd_leases.return_value = None
 
     def test_missing_file(self):
-        self.assertRaises(ValueError, wa_shim.find_endpoint)
+        """wa_shim find_endpoint uses default endpoint if leasefile not found
+        """
+        self.assertEqual(wa_shim.find_endpoint(), "168.63.129.16")
 
     def test_missing_special_azure_line(self):
+        """wa_shim find_endpoint uses default endpoint if leasefile is found
+        but does not contain DHCP Option 245 (whose value is the endpoint)
+        """
         self.load_file.return_value = ''
         self.dhcp_options.return_value = {'eth0': {'key': 'value'}}
-        self.assertRaises(ValueError, wa_shim.find_endpoint)
+        self.assertEqual(wa_shim.find_endpoint(), "168.63.129.16")
 
     @staticmethod
     def _build_lease_content(encoded_address):
diff --git a/tests/unittests/test_handler/test_handler_mounts.py b/tests/unittests/test_handler/test_handler_mounts.py
index 8fea6c2..0fb160b 100644
--- a/tests/unittests/test_handler/test_handler_mounts.py
+++ b/tests/unittests/test_handler/test_handler_mounts.py
@@ -154,7 +154,15 @@ class TestFstabHandling(test_helpers.FilesystemMockingTestCase):
                        return_value=True)
 
         self.add_patch('cloudinit.config.cc_mounts.util.subp',
-                       'mock_util_subp')
+                       'm_util_subp')
+
+        self.add_patch('cloudinit.config.cc_mounts.util.mounts',
+                       'mock_util_mounts',
+                       return_value={
+                           '/dev/sda1': {'fstype': 'ext4',
+                                         'mountpoint': '/',
+                                         'opts': 'rw,relatime,discard'
+                                         }})
 
         self.mock_cloud = mock.Mock()
         self.mock_log = mock.Mock()
@@ -230,4 +238,24 @@ class TestFstabHandling(test_helpers.FilesystemMockingTestCase):
             fstab_new_content = fd.read()
             self.assertEqual(fstab_expected_content, fstab_new_content)
 
+    def test_no_change_fstab_sets_needs_mount_all(self):
+        '''verify unchanged fstab entries are mounted if not call mount -a'''
+        fstab_original_content = (
+            'LABEL=cloudimg-rootfs / ext4 defaults 0 0\n'
+            'LABEL=UEFI /boot/efi vfat defaults 0 0\n'
+            '/dev/vdb /mnt auto defaults,noexec,comment=cloudconfig 0 2\n'
+        )
+        fstab_expected_content = fstab_original_content
+        cc = {'mounts': [
+                 ['/dev/vdb', '/mnt', 'auto', 'defaults,noexec']]}
+        with open(cc_mounts.FSTAB_PATH, 'w') as fd:
+            fd.write(fstab_original_content)
+        with open(cc_mounts.FSTAB_PATH, 'r') as fd:
+            fstab_new_content = fd.read()
+            self.assertEqual(fstab_expected_content, fstab_new_content)
+        cc_mounts.handle(None, cc, self.mock_cloud, self.mock_log, [])
+        self.m_util_subp.assert_has_calls([
+            mock.call(['mount', '-a']),
+            mock.call(['systemctl', 'daemon-reload'])])
+
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index fd03deb..e85e964 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -9,6 +9,7 @@ from cloudinit.net import (
 from cloudinit.sources.helpers import openstack
 from cloudinit import temp_utils
 from cloudinit import util
+from cloudinit import safeyaml as yaml
 
 from cloudinit.tests.helpers import (
     CiTestCase, FilesystemMockingTestCase, dir2dict, mock, populate_dir)
@@ -21,7 +22,7 @@ import json
 import os
 import re
 import textwrap
-import yaml
+from yaml.serializer import Serializer
 
 
 DHCP_CONTENT_1 = """
@@ -3269,9 +3270,12 @@ class TestNetplanPostcommands(CiTestCase):
         mock_netplan_generate.assert_called_with(run=True)
         mock_net_setup_link.assert_called_with(run=True)
 
+    @mock.patch('cloudinit.util.SeLinuxGuard')
     @mock.patch.object(netplan, "get_devicelist")
     @mock.patch('cloudinit.util.subp')
-    def test_netplan_postcmds(self, mock_subp, mock_devlist):
+    def test_netplan_postcmds(self, mock_subp, mock_devlist, mock_sel):
+        mock_sel.__enter__ = mock.Mock(return_value=False)
+        mock_sel.__exit__ = mock.Mock()
         mock_devlist.side_effect = [['lo']]
         tmp_dir = self.tmp_dir()
         ns = network_state.parse_net_config_data(self.mycfg,
@@ -3572,7 +3576,7 @@ class TestNetplanRoundTrip(CiTestCase):
         # now look for any alias, avoid rendering them entirely
         # generate the first anchor string using the template
         # as of this writing, looks like "&id001"
-        anchor = r'&' + yaml.serializer.Serializer.ANCHOR_TEMPLATE % 1
+        anchor = r'&' + Serializer.ANCHOR_TEMPLATE % 1
         found_alias = re.search(anchor, content, re.MULTILINE)
         if found_alias:
             msg = "Error at: %s\nContent:\n%s" % (found_alias, content)
@@ -3826,6 +3830,41 @@ class TestNetRenderers(CiTestCase):
         self.assertRaises(net.RendererNotFoundError, renderers.select,
                           priority=['sysconfig', 'eni'])
 
+    @mock.patch("cloudinit.net.renderers.netplan.available")
+    @mock.patch("cloudinit.net.renderers.sysconfig.available_sysconfig")
+    @mock.patch("cloudinit.net.renderers.sysconfig.available_nm")
+    @mock.patch("cloudinit.net.renderers.eni.available")
+    @mock.patch("cloudinit.net.renderers.sysconfig.util.get_linux_distro")
+    def test_sysconfig_selected_on_sysconfig_enabled_distros(self, m_distro,
+                                                             m_eni, m_sys_nm,
+                                                             m_sys_scfg,
+                                                             m_netplan):
+        """sysconfig only selected on specific distros (rhel/sles)."""
+
+        # Ubuntu with Network-Manager installed
+        m_eni.return_value = False       # no ifupdown (ifquery)
+        m_sys_scfg.return_value = False  # no sysconfig/ifup/ifdown
+        m_sys_nm.return_value = True     # network-manager is installed
+        m_netplan.return_value = True    # netplan is installed
+        m_distro.return_value = ('ubuntu', None, None)
+        self.assertEqual('netplan', renderers.select(priority=None)[0])
+
+        # Centos with Network-Manager installed
+        m_eni.return_value = False       # no ifupdown (ifquery)
+        m_sys_scfg.return_value = False  # no sysconfig/ifup/ifdown
+        m_sys_nm.return_value = True     # network-manager is installed
+        m_netplan.return_value = False    # netplan is not installed
+        m_distro.return_value = ('centos', None, None)
+        self.assertEqual('sysconfig', renderers.select(priority=None)[0])
+
+        # OpenSuse with Network-Manager installed
+        m_eni.return_value = False       # no ifupdown (ifquery)
+        m_sys_scfg.return_value = False  # no sysconfig/ifup/ifdown
+        m_sys_nm.return_value = True     # network-manager is installed
+        m_netplan.return_value = False    # netplan is not installed
+        m_distro.return_value = ('opensuse', None, None)
+        self.assertEqual('sysconfig', renderers.select(priority=None)[0])
+
 
 class TestGetInterfaces(CiTestCase):
     _data = {'bonds': ['bond1'],
diff --git a/tests/unittests/test_reporting_hyperv.py b/tests/unittests/test_reporting_hyperv.py
old mode 100644
new mode 100755
index 2e64c6c..d01ed5b
--- a/tests/unittests/test_reporting_hyperv.py
+++ b/tests/unittests/test_reporting_hyperv.py
@@ -1,10 +1,12 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 from cloudinit.reporting import events
-from cloudinit.reporting import handlers
+from cloudinit.reporting.handlers import HyperVKvpReportingHandler
 
 import json
 import os
+import struct
+import time
 
 from cloudinit import util
 from cloudinit.tests.helpers import CiTestCase
@@ -13,7 +15,7 @@ from cloudinit.tests.helpers import CiTestCase
 class TestKvpEncoding(CiTestCase):
     def test_encode_decode(self):
         kvp = {'key': 'key1', 'value': 'value1'}
-        kvp_reporting = handlers.HyperVKvpReportingHandler()
+        kvp_reporting = HyperVKvpReportingHandler()
         data = kvp_reporting._encode_kvp_item(kvp['key'], kvp['value'])
         self.assertEqual(len(data), kvp_reporting.HV_KVP_RECORD_SIZE)
         decoded_kvp = kvp_reporting._decode_kvp_item(data)
@@ -26,57 +28,9 @@ class TextKvpReporter(CiTestCase):
         self.tmp_file_path = self.tmp_path('kvp_pool_file')
         util.ensure_file(self.tmp_file_path)
 
-    def test_event_type_can_be_filtered(self):
-        reporter = handlers.HyperVKvpReportingHandler(
-            kvp_file_path=self.tmp_file_path,
-            event_types=['foo', 'bar'])
-
-        reporter.publish_event(
-            events.ReportingEvent('foo', 'name', 'description'))
-        reporter.publish_event(
-            events.ReportingEvent('some_other', 'name', 'description3'))
-        reporter.q.join()
-
-        kvps = list(reporter._iterate_kvps(0))
-        self.assertEqual(1, len(kvps))
-
-        reporter.publish_event(
-            events.ReportingEvent('bar', 'name', 'description2'))
-        reporter.q.join()
-        kvps = list(reporter._iterate_kvps(0))
-        self.assertEqual(2, len(kvps))
-
-        self.assertIn('foo', kvps[0]['key'])
-        self.assertIn('bar', kvps[1]['key'])
-        self.assertNotIn('some_other', kvps[0]['key'])
-        self.assertNotIn('some_other', kvps[1]['key'])
-
-    def test_events_are_over_written(self):
-        reporter = handlers.HyperVKvpReportingHandler(
-            kvp_file_path=self.tmp_file_path)
-
-        self.assertEqual(0, len(list(reporter._iterate_kvps(0))))
-
-        reporter.publish_event(
-            events.ReportingEvent('foo', 'name1', 'description'))
-        reporter.publish_event(
-            events.ReportingEvent('foo', 'name2', 'description'))
-        reporter.q.join()
-        self.assertEqual(2, len(list(reporter._iterate_kvps(0))))
-
-        reporter2 = handlers.HyperVKvpReportingHandler(
-            kvp_file_path=self.tmp_file_path)
-        reporter2.incarnation_no = reporter.incarnation_no + 1
-        reporter2.publish_event(
-            events.ReportingEvent('foo', 'name3', 'description'))
-        reporter2.q.join()
-
-        self.assertEqual(2, len(list(reporter2._iterate_kvps(0))))
-
     def test_events_with_higher_incarnation_not_over_written(self):
-        reporter = handlers.HyperVKvpReportingHandler(
+        reporter = HyperVKvpReportingHandler(
             kvp_file_path=self.tmp_file_path)
-
         self.assertEqual(0, len(list(reporter._iterate_kvps(0))))
 
         reporter.publish_event(
@@ -86,7 +40,7 @@ class TextKvpReporter(CiTestCase):
         reporter.q.join()
         self.assertEqual(2, len(list(reporter._iterate_kvps(0))))
 
-        reporter3 = handlers.HyperVKvpReportingHandler(
+        reporter3 = HyperVKvpReportingHandler(
             kvp_file_path=self.tmp_file_path)
         reporter3.incarnation_no = reporter.incarnation_no - 1
         reporter3.publish_event(
@@ -95,7 +49,7 @@ class TextKvpReporter(CiTestCase):
         self.assertEqual(3, len(list(reporter3._iterate_kvps(0))))
 
     def test_finish_event_result_is_logged(self):
-        reporter = handlers.HyperVKvpReportingHandler(
+        reporter = HyperVKvpReportingHandler(
             kvp_file_path=self.tmp_file_path)
         reporter.publish_event(
             events.FinishReportingEvent('name2', 'description1',
@@ -105,7 +59,7 @@ class TextKvpReporter(CiTestCase):
 
     def test_file_operation_issue(self):
         os.remove(self.tmp_file_path)
-        reporter = handlers.HyperVKvpReportingHandler(
+        reporter = HyperVKvpReportingHandler(
             kvp_file_path=self.tmp_file_path)
         reporter.publish_event(
             events.FinishReportingEvent('name2', 'description1',
@@ -113,7 +67,7 @@ class TextKvpReporter(CiTestCase):
         reporter.q.join()
 
     def test_event_very_long(self):
-        reporter = handlers.HyperVKvpReportingHandler(
+        reporter = HyperVKvpReportingHandler(
             kvp_file_path=self.tmp_file_path)
         description = 'ab' * reporter.HV_KVP_EXCHANGE_MAX_VALUE_SIZE
         long_event = events.FinishReportingEvent(
@@ -132,3 +86,43 @@ class TextKvpReporter(CiTestCase):
             self.assertEqual(msg_slice['msg_i'], i)
             full_description += msg_slice['msg']
         self.assertEqual(description, full_description)
+
+    def test_not_truncate_kvp_file_modified_after_boot(self):
+        with open(self.tmp_file_path, "wb+") as f:
+            kvp = {'key': 'key1', 'value': 'value1'}
+            data = (struct.pack("%ds%ds" % (
+                    HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_KEY_SIZE,
+                    HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_VALUE_SIZE),
+                    kvp['key'].encode('utf-8'), kvp['value'].encode('utf-8')))
+            f.write(data)
+        cur_time = time.time()
+        os.utime(self.tmp_file_path, (cur_time, cur_time))
+
+        # reset this because the unit test framework
+        # has already polluted the class variable
+        HyperVKvpReportingHandler._already_truncated_pool_file = False
+
+        reporter = HyperVKvpReportingHandler(kvp_file_path=self.tmp_file_path)
+        kvps = list(reporter._iterate_kvps(0))
+        self.assertEqual(1, len(kvps))
+
+    def test_truncate_stale_kvp_file(self):
+        with open(self.tmp_file_path, "wb+") as f:
+            kvp = {'key': 'key1', 'value': 'value1'}
+            data = (struct.pack("%ds%ds" % (
+                HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_KEY_SIZE,
+                HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_VALUE_SIZE),
+                kvp['key'].encode('utf-8'), kvp['value'].encode('utf-8')))
+            f.write(data)
+
+        # set the time ways back to make it look like
+        # we had an old kvp file
+        os.utime(self.tmp_file_path, (1000000, 1000000))
+
+        # reset this because the unit test framework
+        # has already polluted the class variable
+        HyperVKvpReportingHandler._already_truncated_pool_file = False
+
+        reporter = HyperVKvpReportingHandler(kvp_file_path=self.tmp_file_path)
+        kvps = list(reporter._iterate_kvps(0))
+        self.assertEqual(0, len(kvps))
diff --git a/tools/build-on-freebsd b/tools/build-on-freebsd
index d23fde2..dc3b974 100755
--- a/tools/build-on-freebsd
+++ b/tools/build-on-freebsd
@@ -9,6 +9,7 @@ fail() { echo "FAILED:" "$@" 1>&2; exit 1; }
 depschecked=/tmp/c-i.dependencieschecked
 pkgs="
    bash
+   chpasswd
    dmidecode
    e2fsprogs
    py27-Jinja2
@@ -17,6 +18,7 @@ pkgs="
    py27-configobj
    py27-jsonpatch
    py27-jsonpointer
+   py27-jsonschema
    py27-oauthlib
    py27-requests
    py27-serial
@@ -28,12 +30,9 @@ pkgs="
 [ -f "$depschecked" ] || pkg install ${pkgs} || fail "install packages"
 touch $depschecked
 
-# Required but unavailable port/pkg: py27-jsonpatch py27-jsonpointer
-# Luckily, the install step will take care of this by installing it from pypi...
-
 # Build the code and install in /usr/local/:
-python setup.py build
-python setup.py install -O1 --skip-build --prefix /usr/local/ --init-system sysvinit_freebsd
+python2.7 setup.py build
+python2.7 setup.py install -O1 --skip-build --prefix /usr/local/ --init-system sysvinit_freebsd
 
 # Enable cloud-init in /etc/rc.conf:
 sed -i.bak -e "/cloudinit_enable=.*/d" /etc/rc.conf
diff --git a/tools/read-version b/tools/read-version
index e69c2ce..6dca659 100755
--- a/tools/read-version
+++ b/tools/read-version
@@ -71,9 +71,12 @@ if is_gitdir(_tdir) and which("git"):
         flags = ['--tags']
     cmd = ['git', 'describe', '--abbrev=8', '--match=[0-9]*'] + flags
 
-    version = tiny_p(cmd).strip()
+    try:
+        version = tiny_p(cmd).strip()
+    except RuntimeError:
+        version = None
 
-    if not version.startswith(src_version):
+    if version is None or not version.startswith(src_version):
         sys.stderr.write("git describe version (%s) differs from "
                          "cloudinit.version (%s)\n" % (version, src_version))
         sys.stderr.write(
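
tools/read-version previously crashed outright when 'git describe' failed
(for example in a shallow clone with no tags); it now catches the error and
reports the version mismatch instead. The pattern, as a sketch using
subprocess directly rather than the script's tiny_p helper:

    import subprocess

    def git_describe_or_none(cmd=('git', 'describe', '--abbrev=8',
                                  '--match=[0-9]*')):
        try:
            return subprocess.check_output(list(cmd)).decode().strip()
        except (subprocess.CalledProcessError, OSError):
            return None
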