Hello community,

here is the log from the commit of package youtube-dl for openSUSE:Factory 
checked in at 2018-12-24 11:48:18
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/youtube-dl (Old)
 and      /work/SRC/openSUSE:Factory/.youtube-dl.new.28833 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "youtube-dl"

Mon Dec 24 11:48:18 2018 rev:90 rq:660863 version:2018.12.17

Changes:
--------
--- /work/SRC/openSUSE:Factory/youtube-dl/youtube-dl.changes    2018-12-11 
15:49:20.722106259 +0100
+++ /work/SRC/openSUSE:Factory/.youtube-dl.new.28833/youtube-dl.changes 
2018-12-24 11:48:19.933084298 +0100
@@ -0,0 +1,16 @@
+Sat Dec 22 15:34:11 UTC 2018 - [email protected]
+
+- Update to new upstream release 2018.12.17
+  * ard: Improve geo restricted videos extraction
+  * ard: Fix subtitles extraction
+  * ard: Improve extraction robustness
+  * ard: Relax URL regular expression
+  * acast: Add support for embed.acast.com/play.acast.com
+  * iprima: Relax URL regular expression
+  * vrv: Fix initial state extraction
+  * youtube: Fix mark watched
+  * safari: Add support for learning.oreilly.com
+  * youtube: Fix multifeed extraction
+  * lecturio: Improve subtitles extraction
+  * uol: Fix format URL extraction
+

Old:
----
  youtube-dl-2018.12.09.tar.gz
  youtube-dl-2018.12.09.tar.gz.sig

New:
----
  youtube-dl-2018.12.17.tar.gz
  youtube-dl-2018.12.17.tar.gz.sig

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-youtube-dl.spec ++++++
--- /var/tmp/diff_new_pack.KdvLek/_old  2018-12-24 11:48:20.485083814 +0100
+++ /var/tmp/diff_new_pack.KdvLek/_new  2018-12-24 11:48:20.489083810 +0100
@@ -19,7 +19,7 @@
 %define modname youtube-dl
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 Name:           python-youtube-dl
-Version:        2018.12.09
+Version:        2018.12.17
 Release:        0
 Summary:        A python module for downloading from video sites for offline 
watching
 License:        SUSE-Public-Domain AND CC-BY-SA-3.0

++++++ youtube-dl.spec ++++++
--- /var/tmp/diff_new_pack.KdvLek/_old  2018-12-24 11:48:20.509083793 +0100
+++ /var/tmp/diff_new_pack.KdvLek/_new  2018-12-24 11:48:20.513083789 +0100
@@ -17,7 +17,7 @@
 
 
 Name:           youtube-dl
-Version:        2018.12.09
+Version:        2018.12.17
 Release:        0
 Summary:        A tool for downloading from video sites for offline watching
 License:        SUSE-Public-Domain AND CC-BY-SA-3.0

++++++ youtube-dl-2018.12.09.tar.gz -> youtube-dl-2018.12.17.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/youtube-dl/ChangeLog new/youtube-dl/ChangeLog
--- old/youtube-dl/ChangeLog    2018-12-09 17:11:30.000000000 +0100
+++ new/youtube-dl/ChangeLog    2018-12-16 23:37:46.000000000 +0100
@@ -1,3 +1,21 @@
+version 2018.12.17
+
+Extractors
+* [ard:beta] Improve geo restricted videos extraction
+* [ard:beta] Fix subtitles extraction
+* [ard:beta] Improve extraction robustness
+* [ard:beta] Relax URL regular expression (#18441)
+* [acast] Add support for embed.acast.com and play.acast.com (#18483)
+* [iprima] Relax URL regular expression (#18515, #18540)
+* [vrv] Fix initial state extraction (#18553)
+* [youtube] Fix mark watched (#18546)
++ [safari] Add support for learning.oreilly.com (#18510)
+* [youtube] Fix multifeed extraction (#18531)
+* [lecturio] Improve subtitles extraction (#18488)
+* [uol] Fix format URL extraction (#18480)
++ [ard:mediathek] Add support for classic.ardmediathek.de (#18473)
+
+
 version 2018.12.09
 
 Core
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/youtube-dl/README.md new/youtube-dl/README.md
--- old/youtube-dl/README.md    2018-12-09 17:11:32.000000000 +0100
+++ new/youtube-dl/README.md    2018-12-16 23:37:49.000000000 +0100
@@ -1024,7 +1024,7 @@
     ```
 5. Add an import in 
[`youtube_dl/extractor/extractors.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/extractors.py).
 6. Run `python test/test_download.py TestDownload.test_YourExtractor`. This 
*should fail* at first, but you can continually re-run it until you're done. If 
you decide to add more than one test, then rename ``_TEST`` to ``_TESTS`` and 
make it into a list of dictionaries. The tests will then be named 
`TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, 
`TestDownload.test_YourExtractor_2`, etc. Note that tests with `only_matching` 
key in test's dict are not counted in.
-7. Have a look at 
[`youtube_dl/extractor/common.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py)
 for possible helper methods and a [detailed description of what your extractor 
should and may 
return](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py#L74-L252).
 Add tests and code for as many as you want.
+7. Have a look at 
[`youtube_dl/extractor/common.py`](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py)
 for possible helper methods and a [detailed description of what your extractor 
should and may 
return](https://github.com/rg3/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L94-L303).
 Add tests and code for as many as you want.
 8. Make sure your code follows [youtube-dl coding 
conventions](#youtube-dl-coding-conventions) and check the code with 
[flake8](https://pypi.python.org/pypi/flake8). Also make sure your code works 
under all [Python](https://www.python.org/) versions claimed supported by 
youtube-dl, namely 2.6, 2.7, and 3.2+.
 9. When the tests pass, [add](https://git-scm.com/docs/git-add) the new files 
and [commit](https://git-scm.com/docs/git-commit) them and 
[push](https://git-scm.com/docs/git-push) the result, like this:
 
@@ -1045,7 +1045,7 @@
 
 ### Mandatory and optional metafields
 
-For extraction to work youtube-dl relies on metadata your extractor extracts 
and provides to youtube-dl expressed by an [information 
dictionary](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py#L75-L257)
 or simply *info dict*. Only the following meta fields in the *info dict* are 
considered mandatory for a successful extraction process by youtube-dl:
+For extraction to work youtube-dl relies on metadata your extractor extracts 
and provides to youtube-dl expressed by an [information 
dictionary](https://github.com/rg3/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L94-L303)
 or simply *info dict*. Only the following meta fields in the *info dict* are 
considered mandatory for a successful extraction process by youtube-dl:
 
  - `id` (media identifier)
  - `title` (media title)
@@ -1053,7 +1053,7 @@
 
 In fact only the last option is technically mandatory (i.e. if you can't 
figure out the download location of the media the extraction does not make any 
sense). But by convention youtube-dl also treats `id` and `title` as mandatory. 
Thus the aforementioned metafields are the critical data that the extraction 
does not make any sense without and if any of them fail to be extracted then 
the extractor is considered completely broken.
 
-[Any 
field](https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py#L149-L257)
 apart from the aforementioned ones are considered **optional**. That means 
that extraction should be **tolerant** to situations when sources for these 
fields can potentially be unavailable (even if they are always available at the 
moment) and **future-proof** in order not to break the extraction of general 
purpose mandatory fields.
+[Any 
field](https://github.com/rg3/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L188-L303)
 apart from the aforementioned ones are considered **optional**. That means 
that extraction should be **tolerant** to situations when sources for these 
fields can potentially be unavailable (even if they are always available at the 
moment) and **future-proof** in order not to break the extraction of general 
purpose mandatory fields.
 
 #### Example
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/youtube-dl/test/testdata/cookies/session_cookies.txt 
new/youtube-dl/test/testdata/cookies/session_cookies.txt
--- old/youtube-dl/test/testdata/cookies/session_cookies.txt    2018-12-03 
01:03:07.000000000 +0100
+++ new/youtube-dl/test/testdata/cookies/session_cookies.txt    2018-12-16 
23:36:54.000000000 +0100
@@ -2,5 +2,5 @@
 # http://curl.haxx.se/rfc/cookie_spec.html
 # This is a generated file!  Do not edit.
 
+www.foobar.foobar      FALSE   /       TRUE            YoutubeDLExpiresEmpty   
YoutubeDLExpiresEmptyValue
 www.foobar.foobar      FALSE   /       TRUE    0       YoutubeDLExpires0       
YoutubeDLExpires0Value
-www.foobar.foobar      FALSE   /       TRUE    0       YoutubeDLExpiresEmpty   
YoutubeDLExpiresEmptyValue
Binary files old/youtube-dl/youtube-dl and new/youtube-dl/youtube-dl differ
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/youtube-dl/youtube-dl.1 new/youtube-dl/youtube-dl.1
--- old/youtube-dl/youtube-dl.1 2018-12-09 17:12:06.000000000 +0100
+++ new/youtube-dl/youtube-dl.1 2018-12-16 23:38:26.000000000 +0100
@@ -2091,7 +2091,7 @@
 \f[C]youtube_dl/extractor/common.py\f[] 
(https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py)
 for possible helper methods and a detailed description of what your
 extractor should and may
-return 
(https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py#L74-L252).
+return 
(https://github.com/rg3/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L94-L303).
 Add tests and code for as many as you want.
 .IP " 8." 4
 Make sure your code follows youtube\-dl coding conventions and check the
@@ -2144,7 +2144,7 @@
 .PP
 For extraction to work youtube\-dl relies on metadata your extractor
 extracts and provides to youtube\-dl expressed by an information
-dictionary 
(https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py#L75-L257)
+dictionary 
(https://github.com/rg3/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L94-L303)
 or simply \f[I]info dict\f[].
 Only the following meta fields in the \f[I]info dict\f[] are considered
 mandatory for a successful extraction process by youtube\-dl:
@@ -2165,7 +2165,7 @@
 extracted then the extractor is considered completely broken.
 .PP
 Any
-field 
(https://github.com/rg3/youtube-dl/blob/master/youtube_dl/extractor/common.py#L149-L257)
+field 
(https://github.com/rg3/youtube-dl/blob/7f41a598b3fba1bcab2817de64a08941200aa3c8/youtube_dl/extractor/common.py#L188-L303)
 apart from the aforementioned ones are considered \f[B]optional\f[].
 That means that extraction should be \f[B]tolerant\f[] to situations
 when sources for these fields can potentially be unavailable (even if
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/acast.py 
new/youtube-dl/youtube_dl/extractor/acast.py
--- old/youtube-dl/youtube_dl/extractor/acast.py        2018-12-03 
01:02:57.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/acast.py        2018-12-16 
23:36:54.000000000 +0100
@@ -17,25 +17,15 @@
 
 class ACastIE(InfoExtractor):
     IE_NAME = 'acast'
-    _VALID_URL = 
r'https?://(?:www\.)?acast\.com/(?P<channel>[^/]+)/(?P<id>[^/#?]+)'
+    _VALID_URL = r'''(?x)
+                    https?://
+                        (?:
+                            (?:(?:embed|www)\.)?acast\.com/|
+                            play\.acast\.com/s/
+                        )
+                        (?P<channel>[^/]+)/(?P<id>[^/#?]+)
+                    '''
     _TESTS = [{
-        # test with one bling
-        'url': 
'https://www.acast.com/condenasttraveler/-where-are-you-taipei-101-taiwan',
-        'md5': 'ada3de5a1e3a2a381327d749854788bb',
-        'info_dict': {
-            'id': '57de3baa-4bb0-487e-9418-2692c1277a34',
-            'ext': 'mp3',
-            'title': '"Where Are You?": Taipei 101, Taiwan',
-            'description': 'md5:a0b4ef3634e63866b542e5b1199a1a0e',
-            'timestamp': 1196172000,
-            'upload_date': '20071127',
-            'duration': 211,
-            'creator': 'Concierge',
-            'series': 'Condé Nast Traveler Podcast',
-            'episode': '"Where Are You?": Taipei 101, Taiwan',
-        }
-    }, {
-        # test with multiple blings
         'url': 
'https://www.acast.com/sparpodcast/2.raggarmordet-rosterurdetforflutna',
         'md5': 'a02393c74f3bdb1801c3ec2695577ce0',
         'info_dict': {
@@ -50,6 +40,12 @@
             'series': 'Spår',
             'episode': '2. Raggarmordet - Röster ur det förflutna',
         }
+    }, {
+        'url': 
'http://embed.acast.com/adambuxton/ep.12-adam-joeschristmaspodcast2015',
+        'only_matching': True,
+    }, {
+        'url': 
'https://play.acast.com/s/rattegangspodden/s04e09-styckmordet-i-helenelund-del-22',
+        'only_matching': True,
     }]
 
     def _real_extract(self, url):
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/ard.py 
new/youtube-dl/youtube_dl/extractor/ard.py
--- old/youtube-dl/youtube_dl/extractor/ard.py  2018-12-03 01:03:07.000000000 
+0100
+++ new/youtube-dl/youtube_dl/extractor/ard.py  2018-12-16 23:36:54.000000000 
+0100
@@ -8,20 +8,23 @@
 from ..utils import (
     determine_ext,
     ExtractorError,
-    qualities,
     int_or_none,
     parse_duration,
+    qualities,
+    str_or_none,
+    try_get,
     unified_strdate,
-    xpath_text,
+    unified_timestamp,
     update_url_query,
     url_or_none,
+    xpath_text,
 )
 from ..compat import compat_etree_fromstring
 
 
 class ARDMediathekIE(InfoExtractor):
     IE_NAME = 'ARD:mediathek'
-    _VALID_URL = 
r'^https?://(?:(?:www\.)?ardmediathek\.de|mediathek\.(?:daserste|rbb-online)\.de|one\.ard\.de)/(?:.*/)(?P<video_id>[0-9]+|[^0-9][^/\?]+)[^/\?]*(?:\?.*)?'
+    _VALID_URL = 
r'^https?://(?:(?:(?:www|classic)\.)?ardmediathek\.de|mediathek\.(?:daserste|rbb-online)\.de|one\.ard\.de)/(?:.*/)(?P<video_id>[0-9]+|[^0-9][^/\?]+)[^/\?]*(?:\?.*)?'
 
     _TESTS = [{
         # available till 26.07.2022
@@ -51,8 +54,15 @@
         # audio
         'url': 
'http://mediathek.rbb-online.de/radio/Hörspiel/Vor-dem-Fest/kulturradio/Audio?documentId=30796318&topRessort=radio&bcastId=9839158',
         'only_matching': True,
+    }, {
+        'url': 
'https://classic.ardmediathek.de/tv/Panda-Gorilla-Co/Panda-Gorilla-Co-Folge-274/Das-Erste/Video?bcastId=16355486&documentId=58234698',
+        'only_matching': True,
     }]
 
+    @classmethod
+    def suitable(cls, url):
+        return False if ARDBetaMediathekIE.suitable(url) else 
super(ARDMediathekIE, cls).suitable(url)
+
     def _extract_media_info(self, media_info_url, webpage, video_id):
         media_info = self._download_json(
             media_info_url, video_id, 'Downloading media JSON')
@@ -293,7 +303,7 @@
 
 
 class ARDBetaMediathekIE(InfoExtractor):
-    _VALID_URL = 
r'https://beta\.ardmediathek\.de/[a-z]+/player/(?P<video_id>[a-zA-Z0-9]+)/(?P<display_id>[^/?#]+)'
+    _VALID_URL = 
r'https://(?:beta|www)\.ardmediathek\.de/[^/]+/(?:player|live)/(?P<video_id>[a-zA-Z0-9]+)(?:/(?P<display_id>[^/?#]+))?'
     _TESTS = [{
         'url': 
'https://beta.ardmediathek.de/ard/player/Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhdG9ydC9mYmM4NGM1NC0xNzU4LTRmZGYtYWFhZS0wYzcyZTIxNGEyMDE/die-robuste-roswita',
         'md5': '2d02d996156ea3c397cfc5036b5d7f8f',
@@ -307,12 +317,18 @@
             'upload_date': '20180826',
             'ext': 'mp4',
         },
+    }, {
+        'url': 
'https://www.ardmediathek.de/ard/player/Y3JpZDovL3N3ci5kZS9hZXgvbzEwNzE5MTU/',
+        'only_matching': True,
+    }, {
+        'url': 
'https://www.ardmediathek.de/swr/live/Y3JpZDovL3N3ci5kZS8xMzQ4MTA0Mg',
+        'only_matching': True,
     }]
 
     def _real_extract(self, url):
         mobj = re.match(self._VALID_URL, url)
         video_id = mobj.group('video_id')
-        display_id = mobj.group('display_id')
+        display_id = mobj.group('display_id') or video_id
 
         webpage = self._download_webpage(url, display_id)
         data_json = 
self._search_regex(r'window\.__APOLLO_STATE__\s*=\s*(\{.*);\n', webpage, 'json')
@@ -323,43 +339,62 @@
             'display_id': display_id,
         }
         formats = []
+        subtitles = {}
+        geoblocked = False
         for widget in data.values():
-            if widget.get('_geoblocked'):
-                raise ExtractorError('This video is not available due to 
geoblocking', expected=True)
-
+            if widget.get('_geoblocked') is True:
+                geoblocked = True
             if '_duration' in widget:
-                res['duration'] = widget['_duration']
+                res['duration'] = int_or_none(widget['_duration'])
             if 'clipTitle' in widget:
                 res['title'] = widget['clipTitle']
             if '_previewImage' in widget:
                 res['thumbnail'] = widget['_previewImage']
             if 'broadcastedOn' in widget:
-                res['upload_date'] = unified_strdate(widget['broadcastedOn'])
+                res['timestamp'] = unified_timestamp(widget['broadcastedOn'])
             if 'synopsis' in widget:
                 res['description'] = widget['synopsis']
-            if '_subtitleUrl' in widget:
-                res['subtitles'] = {'de': [{
+            subtitle_url = url_or_none(widget.get('_subtitleUrl'))
+            if subtitle_url:
+                subtitles.setdefault('de', []).append({
                     'ext': 'ttml',
-                    'url': widget['_subtitleUrl'],
-                }]}
+                    'url': subtitle_url,
+                })
             if '_quality' in widget:
-                format_url = widget['_stream']['json'][0]
-
-                if format_url.endswith('.f4m'):
+                format_url = url_or_none(try_get(
+                    widget, lambda x: x['_stream']['json'][0]))
+                if not format_url:
+                    continue
+                ext = determine_ext(format_url)
+                if ext == 'f4m':
                     formats.extend(self._extract_f4m_formats(
                         format_url + '?hdcore=3.11.0',
                         video_id, f4m_id='hds', fatal=False))
-                elif format_url.endswith('m3u8'):
+                elif ext == 'm3u8':
                     formats.extend(self._extract_m3u8_formats(
-                        format_url, video_id, 'mp4', m3u8_id='hls', 
fatal=False))
+                        format_url, video_id, 'mp4', m3u8_id='hls',
+                        fatal=False))
                 else:
+                    # HTTP formats are not available when geoblocked is True,
+                    # other formats are fine though
+                    if geoblocked:
+                        continue
+                    quality = str_or_none(widget.get('_quality'))
                     formats.append({
-                        'format_id': 'http-' + widget['_quality'],
+                        'format_id': ('http-' + quality) if quality else 
'http',
                         'url': format_url,
                         'preference': 10,  # Plain HTTP, that's nice
                     })
 
+        if not formats and geoblocked:
+            self.raise_geo_restricted(
+                msg='This video is not available due to geoblocking',
+                countries=['DE'])
+
         self._sort_formats(formats)
-        res['formats'] = formats
+        res.update({
+            'subtitles': subtitles,
+            'formats': formats,
+        })
 
         return res
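The hardening above replaces direct indexing (`widget['_stream']['json'][0]`) with youtube-dl's `try_get` helper so a malformed widget no longer aborts extraction. A minimal reimplementation of the pattern for illustration (simplified; the real helper lives in `youtube_dl/utils.py`):

```python
def try_get(src, getter, expected_type=None):
    # Return getter(src), swallowing the lookup errors a missing or
    # oddly-shaped widget would raise; optionally type-check the result.
    try:
        value = getter(src)
    except (AttributeError, KeyError, TypeError, IndexError):
        return None
    if expected_type is None or isinstance(value, expected_type):
        return value
    return None

# Hypothetical widget dicts shaped like the ARD Apollo state
widget = {'_quality': 'high', '_stream': {'json': ['https://example.invalid/v.m3u8']}}
assert try_get(widget, lambda x: x['_stream']['json'][0]) == 'https://example.invalid/v.m3u8'
# A widget without the expected shape yields None instead of raising
assert try_get({'_quality': 'auto'}, lambda x: x['_stream']['json'][0]) is None
```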
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/iprima.py 
new/youtube-dl/youtube_dl/extractor/iprima.py
--- old/youtube-dl/youtube_dl/extractor/iprima.py       2018-12-03 
01:03:07.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/iprima.py       2018-12-16 
23:36:54.000000000 +0100
@@ -12,7 +12,7 @@
 
 
 class IPrimaIE(InfoExtractor):
-    _VALID_URL = 
r'https?://(?:play|prima|www)\.iprima\.cz/(?:[^/]+/)*(?P<id>[^/?#&]+)'
+    _VALID_URL = r'https?://(?:[^/]+)\.iprima\.cz/(?:[^/]+/)*(?P<id>[^/?#&]+)'
     _GEO_BYPASS = False
 
     _TESTS = [{
@@ -44,6 +44,21 @@
     }, {
         'url': 'http://www.iprima.cz/filmy/desne-rande',
         'only_matching': True,
+    }, {
+        'url': 
'https://zoom.iprima.cz/10-nejvetsich-tajemstvi-zahad/posvatna-mista-a-stavby',
+        'only_matching': True,
+    }, {
+        'url': 'https://krimi.iprima.cz/mraz-0/sebevrazdy',
+        'only_matching': True,
+    }, {
+        'url': 'https://cool.iprima.cz/derava-silnice-nevadi',
+        'only_matching': True,
+    }, {
+        'url': 'https://love.iprima.cz/laska-az-za-hrob/slib-dany-bratrovi',
+        'only_matching': True,
+    }, {
+        'url': 'https://autosalon.iprima.cz/motorsport/7-epizoda-1',
+        'only_matching': True,
     }]
 
     def _real_extract(self, url):
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/lecturio.py 
new/youtube-dl/youtube_dl/extractor/lecturio.py
--- old/youtube-dl/youtube_dl/extractor/lecturio.py     2018-12-03 
01:03:07.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/lecturio.py     2018-12-16 
23:36:54.000000000 +0100
@@ -136,9 +136,15 @@
             cc_url = url_or_none(cc_url)
             if not cc_url:
                 continue
-            sub_dict = automatic_captions if 'auto-translated' in cc_label 
else subtitles
             lang = self._search_regex(
-                r'/([a-z]{2})_', cc_url, 'lang', default=cc_label.split()[0])
+                r'/([a-z]{2})_', cc_url, 'lang',
+                default=cc_label.split()[0] if cc_label else 'en')
+            original_lang = self._search_regex(
+                r'/[a-z]{2}_([a-z]{2})_', cc_url, 'original lang',
+                default=None)
+            sub_dict = (automatic_captions
+                        if 'auto-translated' in cc_label or original_lang
+                        else subtitles)
             sub_dict.setdefault(self._CC_LANGS.get(lang, lang), []).append({
                 'url': cc_url,
             })
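The improved subtitle handling distinguishes auto-translated tracks by a second language code in the URL path (`/xx_yy_`). A standalone sketch of that classification (the regexes come from the diff; the URLs here are hypothetical):

```python
import re

def classify_track(cc_url, cc_label):
    # Track language: first two-letter code in the path, else the label
    mobj = re.search(r'/([a-z]{2})_', cc_url)
    lang = mobj.group(1) if mobj else (cc_label.split()[0] if cc_label else 'en')
    # A second code (/xx_yy_) captures the original language: treat such
    # tracks as automatic captions rather than authored subtitles
    original_lang = re.search(r'/[a-z]{2}_([a-z]{2})_', cc_url)
    is_automatic = bool(original_lang) or 'auto-translated' in cc_label
    return lang, is_automatic

assert classify_track('https://example.invalid/subs/en_lecture.vtt', 'English') == ('en', False)
assert classify_track('https://example.invalid/subs/de_en_lecture.vtt', 'German') == ('de', True)
```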
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/safari.py 
new/youtube-dl/youtube_dl/extractor/safari.py
--- old/youtube-dl/youtube_dl/extractor/safari.py       2018-12-03 
01:02:58.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/safari.py       2018-12-16 
23:36:54.000000000 +0100
@@ -15,10 +15,10 @@
 
 
 class SafariBaseIE(InfoExtractor):
-    _LOGIN_URL = 'https://www.safaribooksonline.com/accounts/login/'
+    _LOGIN_URL = 'https://learning.oreilly.com/accounts/login/'
     _NETRC_MACHINE = 'safari'
 
-    _API_BASE = 'https://www.safaribooksonline.com/api/v1'
+    _API_BASE = 'https://learning.oreilly.com/api/v1'
     _API_FORMAT = 'json'
 
     LOGGED_IN = False
@@ -76,7 +76,7 @@
     IE_DESC = 'safaribooksonline.com online video'
     _VALID_URL = r'''(?x)
                         https?://
-                            (?:www\.)?safaribooksonline\.com/
+                            
(?:www\.)?(?:safaribooksonline|learning\.oreilly)\.com/
                             (?:
                                 
library/view/[^/]+/(?P<course_id>[^/]+)/(?P<part>[^/?\#&]+)\.html|
                                 
videos/[^/]+/[^/]+/(?P<reference_id>[^-]+-[^/?\#&]+)
@@ -104,6 +104,9 @@
     }, {
         'url': 
'https://www.safaribooksonline.com/videos/python-programming-language/9780134217314/9780134217314-PYMC_13_00',
         'only_matching': True,
+    }, {
+        'url': 
'https://learning.oreilly.com/videos/hadoop-fundamentals-livelessons/9780133392838/9780133392838-00_SeriesIntro',
+        'only_matching': True,
     }]
 
     _PARTNER_ID = '1926081'
@@ -160,7 +163,7 @@
 
 class SafariApiIE(SafariBaseIE):
     IE_NAME = 'safari:api'
-    _VALID_URL = 
r'https?://(?:www\.)?safaribooksonline\.com/api/v1/book/(?P<course_id>[^/]+)/chapter(?:-content)?/(?P<part>[^/?#&]+)\.html'
+    _VALID_URL = 
r'https?://(?:www\.)?(?:safaribooksonline|learning\.oreilly)\.com/api/v1/book/(?P<course_id>[^/]+)/chapter(?:-content)?/(?P<part>[^/?#&]+)\.html'
 
     _TESTS = [{
         'url': 
'https://www.safaribooksonline.com/api/v1/book/9780133392838/chapter/part00.html',
@@ -185,7 +188,7 @@
     _VALID_URL = r'''(?x)
                     https?://
                         (?:
-                            (?:www\.)?safaribooksonline\.com/
+                            
(?:www\.)?(?:safaribooksonline|learning\.oreilly)\.com/
                             (?:
                                 library/view/[^/]+|
                                 api/v1/book|
@@ -213,6 +216,9 @@
     }, {
         'url': 
'https://www.safaribooksonline.com/videos/python-programming-language/9780134217314',
         'only_matching': True,
+    }, {
+        'url': 
'https://learning.oreilly.com/videos/hadoop-fundamentals-livelessons/9780133392838',
+        'only_matching': True,
     }]
 
     @classmethod
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/teachable.py 
new/youtube-dl/youtube_dl/extractor/teachable.py
--- old/youtube-dl/youtube_dl/extractor/teachable.py    2018-12-03 
01:03:07.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/teachable.py    2018-12-16 
23:36:54.000000000 +0100
@@ -135,7 +135,6 @@
     @staticmethod
     def _extract_url(webpage, source_url):
         if not TeachableIE._is_teachable(webpage):
-            print('NOT TEACHABLE')
             return
         if re.match(r'https?://[^/]+/(?:courses|p)', source_url):
             return '%s%s' % (TeachableBaseIE._URL_PREFIX, source_url)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/uol.py 
new/youtube-dl/youtube_dl/extractor/uol.py
--- old/youtube-dl/youtube_dl/extractor/uol.py  2018-12-03 01:02:59.000000000 
+0100
+++ new/youtube-dl/youtube_dl/extractor/uol.py  2018-12-16 23:36:54.000000000 
+0100
@@ -61,7 +61,7 @@
             'height': 360,
         },
         '5': {
-            'width': 1080,
+            'width': 1280,
             'height': 720,
         },
         '6': {
@@ -80,6 +80,10 @@
             'width': 568,
             'height': 320,
         },
+        '11': {
+            'width': 640,
+            'height': 360,
+        }
     }
 
     def _real_extract(self, url):
@@ -111,19 +115,31 @@
             'ver': video_data.get('numRevision', 2),
             'r': 'http://mais.uol.com.br',
         }
+        for k in ('token', 'sign'):
+            v = video_data.get(k)
+            if v:
+                query[k] = v
+
         formats = []
         for f in video_data.get('formats', []):
             f_url = f.get('url') or f.get('secureUrl')
             if not f_url:
                 continue
+            f_url = update_url_query(f_url, query)
             format_id = str_or_none(f.get('id'))
+            if format_id == '10':
+                formats.extend(self._extract_m3u8_formats(
+                    f_url, video_id, 'mp4', 'm3u8_native',
+                    m3u8_id='hls', fatal=False))
+                continue
             fmt = {
                 'format_id': format_id,
-                'url': update_url_query(f_url, query),
+                'url': f_url,
+                'source_preference': 1,
             }
             fmt.update(self._FORMATS.get(format_id, {}))
             formats.append(fmt)
-        self._sort_formats(formats)
+        self._sort_formats(formats, ('height', 'width', 'source_preference', 
'tbr', 'ext'))
 
         tags = []
         for tag in video_data.get('tags', []):
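The uol fix threads `token` and `sign` into every format URL via `update_url_query`. A simplified standalone version of that helper (an illustrative reimplementation, not the `youtube_dl.utils` code; the parameter values are hypothetical):

```python
from urllib.parse import parse_qs, urlencode, urlparse, urlunparse

def update_url_query(url, query):
    # Merge extra parameters into an existing URL, keeping what is there
    parsed = urlparse(url)
    params = parse_qs(parsed.query)
    params.update({k: [v] for k, v in query.items()})
    return urlunparse(parsed._replace(query=urlencode(params, doseq=True)))

query = {'ver': '2', 'token': 'abc', 'sign': 'def'}  # hypothetical values
print(update_url_query('http://mais.uol.com.br/v.mp4?r=1', query))
```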
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/vrv.py 
new/youtube-dl/youtube_dl/extractor/vrv.py
--- old/youtube-dl/youtube_dl/extractor/vrv.py  2018-12-03 01:02:59.000000000 
+0100
+++ new/youtube-dl/youtube_dl/extractor/vrv.py  2018-12-16 23:36:54.000000000 
+0100
@@ -120,8 +120,10 @@
             url, video_id,
             headers=self.geo_verification_headers())
         media_resource = self._parse_json(self._search_regex(
-            r'window\.__INITIAL_STATE__\s*=\s*({.+?})</script>',
-            webpage, 'inital state'), video_id).get('watch', 
{}).get('mediaResource') or {}
+            [
+                r'window\.__INITIAL_STATE__\s*=\s*({.+?})(?:</script>|;)',
+                r'window\.__INITIAL_STATE__\s*=\s*({.+})'
+            ], webpage, 'inital state'), video_id).get('watch', 
{}).get('mediaResource') or {}
 
         video_data = media_resource.get('json')
         if not video_data:
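The vrv fix passes a list of patterns to `_search_regex`, falling back from the strict form (JSON terminated by `;` or `</script>`) to a greedy one. The same cascade can be sketched standalone (a simplified stand-in for the InfoExtractor method, with synthetic page HTML):

```python
import json
import re

def search_json(patterns, html):
    # Try each pattern in order and parse the first hit, mirroring
    # _search_regex called with a list of regexes
    for pattern in patterns:
        mobj = re.search(pattern, html)
        if mobj:
            return json.loads(mobj.group(1))
    return None

html = '<script>window.__INITIAL_STATE__ = {"watch": {"mediaResource": {"json": {"id": 1}}}};</script>'
state = search_json([
    r'window\.__INITIAL_STATE__\s*=\s*({.+?})(?:</script>|;)',
    r'window\.__INITIAL_STATE__\s*=\s*({.+})',
], html)
media_resource = state.get('watch', {}).get('mediaResource') or {}
print(media_resource)
```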
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/youtube.py 
new/youtube-dl/youtube_dl/extractor/youtube.py
--- old/youtube-dl/youtube_dl/extractor/youtube.py      2018-12-03 
01:02:59.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/youtube.py      2018-12-16 
23:36:54.000000000 +0100
@@ -48,6 +48,7 @@
     unified_strdate,
     unsmuggle_url,
     uppercase_escape,
+    url_or_none,
     urlencode_postdata,
 )
 
@@ -1386,8 +1387,11 @@
             self._downloader.report_warning(err_msg)
             return {}
 
-    def _mark_watched(self, video_id, video_info):
-        playback_url = video_info.get('videostats_playback_base_url', 
[None])[0]
+    def _mark_watched(self, video_id, video_info, player_response):
+        playback_url = url_or_none(try_get(
+            player_response,
+            lambda x: 
x['playbackTracking']['videostatsPlaybackUrl']['baseUrl']) or try_get(
+            video_info, lambda x: x['videostats_playback_base_url'][0]))
         if not playback_url:
             return
         parsed_playback_url = compat_urlparse.urlparse(playback_url)
@@ -1712,30 +1716,36 @@
             else:
                 video_description = ''
 
-        if 'multifeed_metadata_list' in video_info and not 
smuggled_data.get('force_singlefeed', False):
+        if not smuggled_data.get('force_singlefeed', False):
             if not self._downloader.params.get('noplaylist'):
-                entries = []
-                feed_ids = []
-                multifeed_metadata_list = 
video_info['multifeed_metadata_list'][0]
-                for feed in multifeed_metadata_list.split(','):
-                    # Unquote should take place before split on comma (,) 
since textual
-                    # fields may contain comma as well (see
-                    # https://github.com/rg3/youtube-dl/issues/8536)
-                    feed_data = 
compat_parse_qs(compat_urllib_parse_unquote_plus(feed))
-                    entries.append({
-                        '_type': 'url_transparent',
-                        'ie_key': 'Youtube',
-                        'url': smuggle_url(
-                            '%s://www.youtube.com/watch?v=%s' % (proto, 
feed_data['id'][0]),
-                            {'force_singlefeed': True}),
-                        'title': '%s (%s)' % (video_title, 
feed_data['title'][0]),
-                    })
-                    feed_ids.append(feed_data['id'][0])
-                self.to_screen(
-                    'Downloading multifeed video (%s) - add --no-playlist to 
just download video %s'
-                    % (', '.join(feed_ids), video_id))
-                return self.playlist_result(entries, video_id, video_title, 
video_description)
-            self.to_screen('Downloading just video %s because of 
--no-playlist' % video_id)
+                multifeed_metadata_list = try_get(
+                    player_response,
+                    lambda x: 
x['multicamera']['playerLegacyMulticameraRenderer']['metadataList'],
+                    compat_str) or try_get(
+                    video_info, lambda x: x['multifeed_metadata_list'][0], 
compat_str)
+                if multifeed_metadata_list:
+                    entries = []
+                    feed_ids = []
+                    for feed in multifeed_metadata_list.split(','):
+                        # Unquote should take place before split on comma (,) 
since textual
+                        # fields may contain comma as well (see
+                        # https://github.com/rg3/youtube-dl/issues/8536)
+                        feed_data = 
compat_parse_qs(compat_urllib_parse_unquote_plus(feed))
+                        entries.append({
+                            '_type': 'url_transparent',
+                            'ie_key': 'Youtube',
+                            'url': smuggle_url(
+                                '%s://www.youtube.com/watch?v=%s' % (proto, 
feed_data['id'][0]),
+                                {'force_singlefeed': True}),
+                            'title': '%s (%s)' % (video_title, 
feed_data['title'][0]),
+                        })
+                        feed_ids.append(feed_data['id'][0])
+                    self.to_screen(
+                        'Downloading multifeed video (%s) - add --no-playlist 
to just download video %s'
+                        % (', '.join(feed_ids), video_id))
+                    return self.playlist_result(entries, video_id, 
video_title, video_description)
+            else:
+                self.to_screen('Downloading just video %s because of 
--no-playlist' % video_id)
 
         if view_count is None:
             view_count = extract_view_count(video_info)
@@ -2116,7 +2126,7 @@
 
         self._sort_formats(formats)
 
-        self.mark_watched(video_id, video_info)
+        self.mark_watched(video_id, video_info, player_response)
 
         return {
             'id': video_id,
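The rewritten multifeed branch still splits the metadata list on `,` and unquotes each entry before parsing it as a query string; a standalone sketch of that order of operations with synthetic feed data (not real YouTube metadata):

```python
from urllib.parse import parse_qs, unquote_plus

# Synthetic multifeed metadata: two percent-encoded feeds joined by ','.
# A comma inside a textual field arrives as %2C, so the plain ',' split
# is safe and each entry is unquoted afterwards (see issue 8536).
metadata_list = 'id=abc123&title=Main%2C+wide+shot,id=def456&title=Goal+cam'

entries = []
for feed in metadata_list.split(','):
    feed_data = parse_qs(unquote_plus(feed))
    entries.append((feed_data['id'][0], feed_data['title'][0]))

print(entries)  # [('abc123', 'Main, wide shot'), ('def456', 'Goal cam')]
```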
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/youtube-dl/youtube_dl/version.py 
new/youtube-dl/youtube_dl/version.py
--- old/youtube-dl/youtube_dl/version.py        2018-12-09 17:11:30.000000000 
+0100
+++ new/youtube-dl/youtube_dl/version.py        2018-12-16 23:37:46.000000000 
+0100
@@ -1,3 +1,3 @@
 from __future__ import unicode_literals
 
-__version__ = '2018.12.09'
+__version__ = '2018.12.17'

