Hello community,

here is the log from the commit of package urlwatch for openSUSE:Factory 
checked in at 2019-04-15 13:59:52
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/urlwatch (Old)
 and      /work/SRC/openSUSE:Factory/.urlwatch.new.17052 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "urlwatch"

Mon Apr 15 13:59:52 2019 rev:16 rq:694199 version:2.17

Changes:
--------
--- /work/SRC/openSUSE:Factory/urlwatch/urlwatch.changes        2019-01-28 20:50:28.669776545 +0100
+++ /work/SRC/openSUSE:Factory/.urlwatch.new.17052/urlwatch.changes     2019-04-15 14:00:01.100719035 +0200
@@ -1,0 +2,16 @@
+Mon Apr 15 08:29:05 UTC 2019 - [email protected]
+
+- Update to 2.17:
+  Added:
+  * XPath/CSS: Support for excluding elements (#333, by Chenfeng Bao)
+  * Add support for using external diff_tool on Windows (#373, by Chenfeng Bao)
+  * Document how to use Amazon Simple E-Mail Service "SES" (by mborsetti)
+  * Compare data with multiple old versions (compared_versions, #328, by Chenfeng Bao)
+  Fixed:
+  * YAML: Fix deprecation warnings (#367, by Florent Aide)
+  * Updated manpage with new options: Authentication, filter tests (Fixes #351)
+  * Text formatter: Do not emit empty lines for line_length=0 (Fixes #357)
+  Changed:
+  * SMTP configuration fix: Only use smtp.user config if it's a non-empty value
+
+-------------------------------------------------------------------
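As a quick illustration of the external diff_tool support mentioned in the changelog entry above: a job only has to name the command to run against the old and new snapshot files, and urlwatch accepts exit codes 0 and 1 from it. The sketch below is a hypothetical urls.yaml job entry for Windows; `fc` is one plausible tool choice, not something the package prescribes.

```yaml
# Hypothetical urls.yaml job entry (not from the package sources).
# Windows' built-in 'fc' exits 0 when files are identical and 1 when they
# differ, which is the convention urlwatch expects from a diff tool.
name: "status page (placeholder URL)"
url: "https://example.net/status.html"
diff_tool: "fc"
```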

Old:
----
  urlwatch-2.16.tar.gz

New:
----
  urlwatch-2.17.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ urlwatch.spec ++++++
--- /var/tmp/diff_new_pack.G3sqwJ/_old  2019-04-15 14:00:01.956719315 +0200
+++ /var/tmp/diff_new_pack.G3sqwJ/_new  2019-04-15 14:00:01.960719316 +0200
@@ -17,7 +17,7 @@
 
 
 Name:           urlwatch
-Version:        2.16
+Version:        2.17
 Release:        0
 Summary:        A tool for monitoring webpages for updates
 License:        BSD-3-Clause

++++++ urlwatch-2.16.tar.gz -> urlwatch-2.17.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/urlwatch-2.16/CHANGELOG.md new/urlwatch-2.17/CHANGELOG.md
--- old/urlwatch-2.16/CHANGELOG.md      2019-01-27 11:48:43.000000000 +0100
+++ new/urlwatch-2.17/CHANGELOG.md      2019-04-12 17:29:43.000000000 +0200
@@ -4,6 +4,23 @@
 
 The format mostly follows [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
+## [2.17] -- 2019-04-12
+
+### Added
+- XPath/CSS: Support for excluding elements (#333, by Chenfeng Bao)
+- Add support for using external `diff_tool` on Windows (#373, by Chenfeng Bao)
+- Document how to use Amazon Simple E-Mail Service "SES" (by mborsetti)
+- Compare data with multiple old versions (`compared_versions`, #328, by Chenfeng Bao)
+
+### Fixed
+- YAML: Fix deprecation warnings (#367, by Florent Aide)
+- Updated manpage with new options: Authentication, filter tests (Fixes #351)
+- Text formatter: Do not emit empty lines for `line_length=0` (Fixes #357)
+
+### Changed
+- SMTP configuration fix: Only use smtp.user config if it's a non-empty value
+
+
 ## [2.16] -- 2019-01-27
 
 ### Added
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/urlwatch-2.16/README.md new/urlwatch-2.17/README.md
--- old/urlwatch-2.16/README.md 2019-01-27 11:48:43.000000000 +0100
+++ new/urlwatch-2.17/README.md 2019-04-12 17:29:43.000000000 +0200
@@ -332,6 +332,27 @@
 password.
 
 
+E-MAIL VIA AMAZON SIMPLE EMAIL SERVICE (SES)
+--------------------------------------------
+
+Start the configuration editor: `urlwatch --edit-config`
+
+These are the keys you need to configure:
+
+- `report/email/enabled`: `true`
+- `report/email/from`: `you@verified_domain.com` (edit accordingly)
+- `report/email/method`: `smtp`
+- `report/email/smtp/host`: `email-smtp.us-west-2.amazonaws.com` (edit accordingly)
+- `report/email/smtp/user`: `ABCDEFGHIJ1234567890` (edit accordingly)
+- `report/email/smtp/keyring`: `true`
+- `report/email/smtp/port`: `587` (25 or 465 also work)
+- `report/email/smtp/starttls`: `true`
+- `report/email/to`: The e-mail address you want to send reports to
+
+The password is not stored in the config file, but in your keychain. To store
+the password, run: `urlwatch --smtp-login` and enter your password.
+
+
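Put together, the `report/email` part of the resulting urlwatch.yaml would look roughly like the sketch below; the host and SMTP user are the sample values from the list above, and both addresses are placeholders.

```yaml
# Sketch of the corresponding urlwatch.yaml excerpt (placeholder values).
report:
  email:
    enabled: true
    from: you@verified_domain.com            # sender verified in SES
    to: [email protected]                   # where reports should be sent
    method: smtp
    smtp:
      host: email-smtp.us-west-2.amazonaws.com
      user: ABCDEFGHIJ1234567890             # SES SMTP credential user name
      port: 587
      starttls: true
      keyring: true                          # password stored via --smtp-login
```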
 TESTING FILTERS
 ---------------
 
@@ -377,8 +398,8 @@
 ```
 
 
-USING XPATH AND CSS FILTERS WITH XML
---------------------------------
+USING XPATH AND CSS FILTERS WITH XML AND EXCLUSIONS
+---------------------------------------------------
 
 By default, XPath and CSS filters are set up for HTML documents. However,
 it is possible to use them for XML documents as well (these examples parse
@@ -400,6 +421,34 @@
   - html2text: re
 ```
 
+Another useful option with XPath and CSS filters is `exclude`. Elements selected
+by this `exclude` expression are removed from the final result. For example, the
+following job will not have any `<a>` tag in its results:
+
+```yaml
+url: https://example.org/
+filter:
+  - css:
+      selector: 'body'
+      exclude: 'a'
+```
+
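The `exclude` key is not specific to CSS: the `xpath` filter accepts it as well (next to `path` and `method`), as the new `xpath_exclude` test case further down in this diff shows. A minimal sketch:

```yaml
url: https://example.org/
filter:
  - xpath:
      path: '//div'
      exclude: "//*[@class='excl']"
```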
+
+COMPARE WITH SEVERAL LATEST SNAPSHOTS
+-------------------------------------
+If a webpage frequently changes between several known stable states, it may be
+desirable to have changes reported only if the webpage changes into a new
+unknown state. You can use `compared_versions` to do this.
+
+```yaml
+url: https://example.com/
+compared_versions: 3
+```
+
+In this example, changes are only reported if the webpage becomes different from
+the latest three distinct states. The differences are shown relative to the
+closest match.
+
 
 MIGRATION FROM URLWATCH 1.x
 ---------------------------
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/urlwatch-2.16/lib/urlwatch/__init__.py new/urlwatch-2.17/lib/urlwatch/__init__.py
--- old/urlwatch-2.16/lib/urlwatch/__init__.py  2019-01-27 11:48:43.000000000 +0100
+++ new/urlwatch-2.17/lib/urlwatch/__init__.py  2019-04-12 17:29:43.000000000 +0200
@@ -12,5 +12,5 @@
 __author__ = 'Thomas Perl <[email protected]>'
 __license__ = 'BSD'
 __url__ = 'https://thp.io/2008/urlwatch/'
-__version__ = '2.16'
+__version__ = '2.17'
 __user_agent__ = '%s/%s (+https://thp.io/2008/urlwatch/info.html)' % (pkgname, __version__)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/urlwatch-2.16/lib/urlwatch/command.py new/urlwatch-2.17/lib/urlwatch/command.py
--- old/urlwatch-2.16/lib/urlwatch/command.py   2019-01-27 11:48:43.000000000 +0100
+++ new/urlwatch-2.17/lib/urlwatch/command.py   2019-04-12 17:29:43.000000000 +0200
@@ -265,7 +265,7 @@
                 print('Please configure the SMTP hostname in the config first.')
                 success = False
 
-            smtp_username = smtp_config.get('user', config['from'])
+            smtp_username = smtp_config.get('user', None) or config['from']
             if not smtp_username:
                 print('Please configure the SMTP user in the config first.')
                 success = False
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/urlwatch-2.16/lib/urlwatch/filters.py new/urlwatch-2.17/lib/urlwatch/filters.py
--- old/urlwatch-2.16/lib/urlwatch/filters.py   2019-01-27 11:48:43.000000000 +0100
+++ new/urlwatch-2.17/lib/urlwatch/filters.py   2019-04-12 17:29:43.000000000 +0200
@@ -372,7 +372,7 @@
 
     def __init__(self, filter_kind, subfilter, expr_key):
         self.filter_kind = filter_kind
-        self.expression, self.method = self.parse_subfilter(
+        self.expression, self.method, self.exclude = self.parse_subfilter(
             filter_kind, subfilter, expr_key, self.EXPR_NAMES[filter_kind])
         self.parser = (etree.HTMLParser if self.method == 'html' else etree.XMLParser)()
         self.data = ''
@@ -384,17 +384,19 @@
         if isinstance(subfilter, str):
             expression = subfilter
             method = 'html'
+            exclude = None
         elif isinstance(subfilter, dict):
             if expr_key not in subfilter:
                 raise ValueError('Need %s for filtering' % (expr_name,))
             expression = subfilter[expr_key]
             method = subfilter.get('method', 'html')
+            exclude = subfilter.get('exclude')
             if method not in ('html', 'xml'):
                 raise ValueError('%s method must be "html" or "xml", got %r' % (filter_kind, method))
         else:
             raise ValueError('%s subfilter must be a string or dict' % (filter_kind,))
 
-        return expression, method
+        return expression, method, exclude
 
     def feed(self, data):
         self.data += data
@@ -406,6 +408,62 @@
 
         return etree.tostring(element, pretty_print=True, method=self.method, encoding='unicode', with_tail=False)
 
+    @staticmethod
+    def _remove_element(element):
+        parent = element.getparent()
+        if parent is None:
+            # Do not exclude root element
+            return
+        if isinstance(element, etree._ElementUnicodeResult):
+            if element.is_tail:
+                parent.tail = None
+            elif element.is_text:
+                parent.text = None
+            elif element.is_attribute:
+                del parent.attrib[element.attrname]
+        else:
+            previous = element.getprevious()
+            if element.tail is not None:
+                if previous is not None:
+                    previous.tail = previous.tail + element.tail if previous.tail else element.tail
+                else:
+                    parent.text = parent.text + element.tail if parent.text else element.tail
+            parent.remove(element)
+
+    @classmethod
+    def _reevaluate(cls, element):
+        if cls._orphaned(element):
+            return None
+        if isinstance(element, etree._ElementUnicodeResult):
+            parent = element.getparent()
+            if parent is None:
+                return element
+            if element.is_tail:
+                return parent.tail
+            elif element.is_text:
+                return parent.text
+            elif element.is_attribute:
+                return parent.attrib.get(element.attrname)
+        else:
+            return element
+
+    @staticmethod
+    def _orphaned(element):
+        if isinstance(element, etree._ElementUnicodeResult):
+            parent = element.getparent()
+            if ((element.is_tail and parent.tail is None)
+                    or (element.is_text and parent.text is None)
+                    or (element.is_attribute and parent.attrib.get(element.attrname) is None)):
+                return True
+            else:
+                element = parent
+        try:
+            tree = element.getroottree()
+            path = tree.getpath(element)
+            return element is not tree.xpath(path)[0]
+        except (ValueError, IndexError):
+            return True
+
     def _get_filtered_elements(self):
         try:
             root = etree.fromstring(self.data, self.parser)
@@ -418,10 +476,17 @@
             root = etree.fromstring(self.data, self.parser)
         if root is None:
             return []
+        excluded_elems = None
         if self.filter_kind == 'css':
-            return root.cssselect(self.expression)
+            selected_elems = root.cssselect(self.expression)
+            excluded_elems = root.cssselect(self.exclude) if self.exclude else None
         elif self.filter_kind == 'xpath':
-            return root.xpath(self.expression)
+            selected_elems = root.xpath(self.expression)
+            excluded_elems = root.xpath(self.exclude) if self.exclude else None
+        if excluded_elems is not None:
+            for el in excluded_elems:
+                self._remove_element(el)
+        return [el for el in map(self._reevaluate, selected_elems) if el is not None]
 
     def get_filtered_data(self):
         return '\n'.join(self._to_string(element) for element in self._get_filtered_elements())
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/urlwatch-2.16/lib/urlwatch/handler.py new/urlwatch-2.17/lib/urlwatch/handler.py
--- old/urlwatch-2.16/lib/urlwatch/handler.py   2019-01-27 11:48:43.000000000 +0100
+++ new/urlwatch-2.17/lib/urlwatch/handler.py   2019-04-12 17:29:43.000000000 +0200
@@ -47,6 +47,7 @@
         self.verb = None
         self.old_data = None
         self.new_data = None
+        self.history_data = {}
         self.timestamp = None
         self.exception = None
         self.traceback = None
@@ -55,9 +56,12 @@
         self.error_ignored = False
 
     def load(self):
-        self.old_data, self.timestamp, self.tries, self.etag = self.cache_storage.load(self.job, self.job.get_guid())
+        guid = self.job.get_guid()
+        self.old_data, self.timestamp, self.tries, self.etag = self.cache_storage.load(self.job, guid)
         if self.tries is None:
             self.tries = 0
+        if self.job.compared_versions and self.job.compared_versions > 1:
+            self.history_data = self.cache_storage.get_history_data(guid, self.job.compared_versions)
 
     def save(self):
         if self.new_data is None and self.exception is not None:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/urlwatch-2.16/lib/urlwatch/jobs.py new/urlwatch-2.17/lib/urlwatch/jobs.py
--- old/urlwatch-2.16/lib/urlwatch/jobs.py      2019-01-27 11:48:43.000000000 +0100
+++ new/urlwatch-2.17/lib/urlwatch/jobs.py      2019-04-12 17:29:43.000000000 +0200
@@ -166,7 +166,7 @@
 
 class Job(JobBase):
     __required__ = ()
-    __optional__ = ('name', 'filter', 'max_tries', 'diff_tool')
+    __optional__ = ('name', 'filter', 'max_tries', 'diff_tool', 'compared_versions')
 
     # determine if hyperlink "a" tag is used in HtmlReporter
     LOCATION_IS_URL = False
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/urlwatch-2.16/lib/urlwatch/reporters.py new/urlwatch-2.17/lib/urlwatch/reporters.py
--- old/urlwatch-2.16/lib/urlwatch/reporters.py 2019-01-27 11:48:43.000000000 +0100
+++ new/urlwatch-2.17/lib/urlwatch/reporters.py 2019-04-12 17:29:43.000000000 +0200
@@ -35,6 +35,7 @@
 import email.utils
 import itertools
 import logging
+import os
 import sys
 import time
 import cgi
@@ -109,19 +110,20 @@
 
     def unified_diff(self, job_state):
         if job_state.job.diff_tool is not None:
-            with tempfile.NamedTemporaryFile() as old_file, tempfile.NamedTemporaryFile() as new_file:
-                old_file.write(job_state.old_data.encode('utf-8'))
-                old_file.flush()
-                new_file.write(job_state.new_data.encode('utf-8'))
-                new_file.flush()
-                cmdline = shlex.split(job_state.job.diff_tool) + [old_file.name, new_file.name]
+            with tempfile.TemporaryDirectory() as tmpdir:
+                old_file_path = os.path.join(tmpdir, 'old_file')
+                new_file_path = os.path.join(tmpdir, 'new_file')
+                with open(old_file_path, 'w+b') as old_file, open(new_file_path, 'w+b') as new_file:
+                    old_file.write(job_state.old_data.encode('utf-8'))
+                    new_file.write(job_state.new_data.encode('utf-8'))
+                cmdline = shlex.split(job_state.job.diff_tool) + [old_file_path, new_file_path]
                 proc = subprocess.Popen(cmdline, stdout=subprocess.PIPE)
                 stdout, _ = proc.communicate()
                 # Diff tools return 0 for "nothing changed" or 1 for "files differ", anything else is an error
                 if proc.returncode in (0, 1):
                     return stdout.decode('utf-8')
                 else:
-                    raise subprocess.CalledProcessError(result, cmdline)
+                    raise subprocess.CalledProcessError(proc.returncode, cmdline)
 
         timestamp_old = email.utils.formatdate(job_state.timestamp, localtime=1)
         timestamp_new = email.utils.formatdate(time.time(), localtime=1)
@@ -258,12 +260,12 @@
             details.extend(details_part)
 
         if summary:
-            sep = line_length * '='
-            yield from itertools.chain(
+            sep = (line_length * '=') or None
+            yield from (part for part in itertools.chain(
                 (sep,),
                 ('%02d. %s' % (idx + 1, line) for idx, line in enumerate(summary)),
                 (sep, ''),
-            )
+            ) if part is not None)
 
         if show_details:
             yield from details
@@ -301,11 +303,12 @@
 
         summary_part.append(pretty_summary)
 
-        sep = line_length * '-'
+        sep = (line_length * '-') or None
         details_part.extend((sep, summary, sep))
         if content is not None:
             details_part.extend((content, sep))
-        details_part.extend(('', ''))
+        details_part.extend(('', '') if sep else ('',))
+        details_part = [part for part in details_part if part is not None]
 
         return summary_part, details_part
 
@@ -348,7 +351,7 @@
         cfg = self.report.config['report']['text']
         line_length = cfg['line_length']
 
-        separators = (line_length * '=', line_length * '-', '-- ')
+        separators = (line_length * '=', line_length * '-', '-- ') if line_length else ()
         body = '\n'.join(super().submit())
 
         for line in body.splitlines():
@@ -393,7 +396,7 @@
             logger.debug('Not sending e-mail (no changes)')
             return
         if self.config['method'] == "smtp":
-            smtp_user = self.config['smtp'].get('user', self.config['from'])
+            smtp_user = self.config['smtp'].get('user', None) or self.config['from']
             mailer = SMTPMailer(smtp_user, self.config['smtp']['host'], self.config['smtp']['port'],
                                 self.config['smtp']['starttls'], self.config['smtp']['keyring'])
         elif self.config['method'] == "sendmail":
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/urlwatch-2.16/lib/urlwatch/storage.py new/urlwatch-2.17/lib/urlwatch/storage.py
--- old/urlwatch-2.16/lib/urlwatch/storage.py   2019-01-27 11:48:43.000000000 +0100
+++ new/urlwatch-2.17/lib/urlwatch/storage.py   2019-04-12 17:29:43.000000000 +0200
@@ -78,6 +78,7 @@
             'method': 'smtp',
             'smtp': {
                 'host': 'localhost',
+                'user': '',
                 'port': 25,
                 'starttls': True,
                 'keyring': True,
@@ -288,7 +289,7 @@
         filename = args[0]
         if filename is not None and os.path.exists(filename):
             with open(filename) as fp:
-                return yaml.load(fp)
+                return yaml.load(fp, Loader=yaml.SafeLoader)
 
 
 class YamlConfigStorage(BaseYamlFileStorage):
@@ -307,7 +308,7 @@
         filename = args[0]
         if filename is not None and os.path.exists(filename):
             with open(filename) as fp:
-                return [JobBase.unserialize(job) for job in yaml.load_all(fp) if job is not None]
+                return [JobBase.unserialize(job) for job in yaml.load_all(fp, Loader=yaml.SafeLoader) if job is not None]
 
     def save(self, *args):
         jobs = args[0]
@@ -318,7 +319,7 @@
 
     def load(self, *args):
         with open(self.filename) as fp:
-            return [JobBase.unserialize(job) for job in yaml.load_all(fp) if job is not None]
+            return [JobBase.unserialize(job) for job in yaml.load_all(fp, Loader=yaml.SafeLoader) if job is not None]
 
 
 class UrlsTxt(BaseTxtFileStorage, UrlsBaseFileStorage):
@@ -457,6 +458,20 @@
 
         return None, None, 0, None
 
+    def get_history_data(self, guid, count=1):
+        history = {}
+        if count < 1:
+            return history
+        for data, timestamp in CacheEntry.query(self.db, CacheEntry.c.data // CacheEntry.c.timestamp,
+                                                order_by=minidb.columns(CacheEntry.c.timestamp.desc, CacheEntry.c.tries.desc),
+                                                where=(CacheEntry.c.guid == guid)
+                                                & ((CacheEntry.c.tries == 0) | (CacheEntry.c.tries == None))):  # noqa
+            if data not in history:
+                history[data] = timestamp
+                if len(history) >= count:
+                    break
+        return history
+
     def save(self, job, guid, data, timestamp, tries, etag=None):
         self.db.save(CacheEntry(guid=guid, timestamp=timestamp, data=data, tries=tries, etag=etag))
         self.db.commit()
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/urlwatch-2.16/lib/urlwatch/worker.py new/urlwatch-2.17/lib/urlwatch/worker.py
--- old/urlwatch-2.16/lib/urlwatch/worker.py    2019-01-27 11:48:43.000000000 +0100
+++ new/urlwatch-2.17/lib/urlwatch/worker.py    2019-04-12 17:29:43.000000000 +0200
@@ -30,6 +30,7 @@
 
 import concurrent.futures
 import logging
+import difflib
 
 import requests
 
@@ -86,15 +87,22 @@
                 report.error(job_state)
 
         elif job_state.old_data is not None:
-            if job_state.old_data.splitlines() != job_state.new_data.splitlines():
-                report.changed(job_state)
-                job_state.tries = 0
-                job_state.save()
-            else:
+            matched_history_time = job_state.history_data.get(job_state.new_data)
+            if matched_history_time:
+                job_state.timestamp = matched_history_time
+            if matched_history_time or job_state.new_data == job_state.old_data:
                 report.unchanged(job_state)
                 if job_state.tries > 0:
                     job_state.tries = 0
                     job_state.save()
+            else:
+                close_matches = difflib.get_close_matches(job_state.new_data, job_state.history_data, n=1)
+                if close_matches:
+                    job_state.old_data = close_matches[0]
+                    job_state.timestamp = job_state.history_data[close_matches[0]]
+                report.changed(job_state)
+                job_state.tries = 0
+                job_state.save()
         else:
             report.new(job_state)
             job_state.tries = 0
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/urlwatch-2.16/share/man/man1/urlwatch.1 new/urlwatch-2.17/share/man/man1/urlwatch.1
--- old/urlwatch-2.16/share/man/man1/urlwatch.1 2019-01-27 11:48:43.000000000 +0100
+++ new/urlwatch-2.17/share/man/man1/urlwatch.1 2019-04-12 17:29:43.000000000 +0200
@@ -1,22 +1,11 @@
-.TH URLWATCH "1" "January 2016" "urlwatch 2.0" "User Commands"
+.TH URLWATCH "1" "January 2019" "urlwatch 2.16" "User Commands"
 .SH NAME
-urlwatch \- a tool for monitoring webpages for updates
-.SH USAGE
-.B urlwatch
-[\-h] [\-\-version] [\-v]
-.IP
-[\-\-urls FILE] [\-\-config FILE] [\-\-hooks FILE] [\-\-cache FILE]
-.IP
-[\-\-list] [\-\-add JOB] [\-\-delete JOB]
-.IP
-[\-\-edit] [\-\-edit\-config] [\-\-edit\-hooks]
-.IP
-[\-\-features] [\-\-gc\-cache]
-.PP
+urlwatch \- monitors webpages for you
+.SH SYNOPSIS
+.B urlwatch [options]
 .SH DESCRIPTION
-.PP
 urlwatch is intended to help you watch changes in webpages and get notified
-(via email, in your terminal or with a custom-written reporter class) of any
+(via e\-mail, in your terminal or through various third party services) of any
 changes. The change notification will include the URL that has changed and
 a unified diff of what has changed.
 .SS "optional arguments:"
@@ -42,6 +31,16 @@
 .TP
 \fB\-\-cache\fR FILE
 use FILE as cache database
+.SS "Authentication:"
+.TP
+\fB\-\-smtp\-login\fR
+Enter password for SMTP (store in keyring)
+.TP
+\fB\-\-telegram\-chats\fR
+List telegram chats the bot is joined to
+.TP
+\fB\-\-test\-slack\fR
+Send a test notification to Slack
 .SS "job list management:"
 .TP
 \fB\-\-list\fR
@@ -52,6 +51,9 @@
 .TP
 \fB\-\-delete\fR JOB
 delete job by location or index
+.TP
+\fB\-\-test\-filter\fR JOB
+test filter output of job by location or index
 .SS "interactive commands ($EDITOR/$VISUAL):"
 .TP
 \fB\-\-edit\fR
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/urlwatch-2.16/test/data/filter_tests.yaml new/urlwatch-2.17/test/data/filter_tests.yaml
--- old/urlwatch-2.16/test/data/filter_tests.yaml       2019-01-27 11:48:43.000000000 +0100
+++ new/urlwatch-2.17/test/data/filter_tests.yaml       2019-04-12 17:29:43.000000000 +0200
@@ -63,6 +63,21 @@
     expected_result: |-
         foo
         bar
+xpath_exclude:
+    filter:
+        xpath:
+            path: //div
+            exclude: //*[@class='excl'] | //*/@class
+    data: |
+        <html><head></head><body>
+        <div class="excl">you don't want to see me</div>
+        <div class="foo">f<span class="excl">interrupt!</span>o<span class="excl">interrupt!</span>o</div>
+        <div id="bar">bar</div>
+        </body></html>
+    expected_result: |
+        <div>foo</div>
+        
+        <div id="bar">bar</div>
 css:
     filter: css:div
     data: |
@@ -74,6 +89,19 @@
         <div>foo</div>
         
         <div>bar</div>
+css_exclude:
+    filter:
+        css:
+            selector: div
+            exclude: '.excl, #bar'
+    data: |
+        <html><head></head><body>
+        <div class="excl">you don't want to see me</div>
+        <div class="foo">f<span class="excl">interrupt!</span>o<span class="excl">interrupt!</span>o</div>
+        <div id="bar">bar</div>
+        </body></html>
+    expected_result: |
+        <div class="foo">foo</div>
 grep:
     filter: grep:blue
     data: |
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/urlwatch-2.16/test/test_handler.py new/urlwatch-2.17/test/test_handler.py
--- old/urlwatch-2.16/test/test_handler.py      2019-01-27 11:48:43.000000000 +0100
+++ new/urlwatch-2.17/test/test_handler.py      2019-04-12 17:29:43.000000000 +0200
@@ -76,7 +76,7 @@
 
 def test_pep8_conformance():
     """Test that we conform to PEP-8."""
-    style = pycodestyle.StyleGuide(ignore=['E501', 'E402'])
+    style = pycodestyle.StyleGuide(ignore=['E501', 'E402', 'W503'])
 
     py_files = [y for x in os.walk(os.path.abspath('.')) for y in glob(os.path.join(x[0], '*.py'))]
     py_files.append(os.path.abspath('urlwatch'))

