Hello community,

here is the log from the commit of package youtube-dl for openSUSE:Factory 
checked in at 2019-08-05 10:40:45
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/youtube-dl (Old)
 and      /work/SRC/openSUSE:Factory/.youtube-dl.new.4126 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "youtube-dl"

Mon Aug  5 10:40:45 2019 rev:112 rq:720655 version:2019.08.02

Changes:
--------
--- /work/SRC/openSUSE:Factory/youtube-dl/python-youtube-dl.changes     2019-07-18 15:20:50.476139553 +0200
+++ /work/SRC/openSUSE:Factory/.youtube-dl.new.4126/python-youtube-dl.changes   2019-08-05 10:41:14.907299753 +0200
@@ -1,0 +2,21 @@
+Fri Aug  2 12:16:31 UTC 2019 - Ismail Dönmez <[email protected]>
+
+- Update to new upstream release 2019.08.02
+  * [yahoo:japannews] Add support for yahoo.co.jp (#21698, #21265)
+  * [discovery] Add support for go.discovery.com URLs
+  * [youtube:playlist] Relax video regular expression (#21844)
+  * [generic] Restrict --default-search schemeless URLs detection pattern
+    (#21842)
+  * [vrv] Fix CMS signing query extraction (#21809)
+  * [youtube] Fix and improve title and description extraction (#21934)
+  * [tvigle] Add support for HLS and DASH formats (#21967)
+  * [tvigle] Fix extraction (#21967)
+  * [yandexvideo] Add support for DASH formats (#21971)
+  * [discovery] Use API call for video data extraction (#21808)
+  * [mgtv] Extract format_note (#21881)
+  * [tvn24] Fix metadata extraction (#21833, #21834)
+  * [dlive] Relax URL regular expression (#21909)
+  * [openload] Add support for oload.best (#21913)
+  * [youtube] Improve metadata extraction for age gate content (#21943)
+
+-------------------------------------------------------------------
@@ -12,0 +34,8 @@
+
+-------------------------------------------------------------------
+Mon Jul  1 18:43:48 UTC 2019 - Jan Engelhardt <[email protected]>
+
+- Update to new upstream release 2019.07.02
+  * Introduce random_user_agent and use as default User-Agent (closes #21546)
+  * dailymotion: add support for embeds made via the DM.player js call
+  * openload: Add support for oload.biz
--- /work/SRC/openSUSE:Factory/youtube-dl/youtube-dl.changes    2019-07-18 15:20:51.880139302 +0200
+++ /work/SRC/openSUSE:Factory/.youtube-dl.new.4126/youtube-dl.changes  2019-08-05 10:41:15.219299717 +0200
@@ -1,0 +2,21 @@
+Fri Aug  2 12:16:31 UTC 2019 - Ismail Dönmez <[email protected]>
+
+- Update to new upstream release 2019.08.02
+  * [yahoo:japannews] Add support for yahoo.co.jp (#21698, #21265)
+  * [discovery] Add support for go.discovery.com URLs
+  * [youtube:playlist] Relax video regular expression (#21844)
+  * [generic] Restrict --default-search schemeless URLs detection pattern
+    (#21842)
+  * [vrv] Fix CMS signing query extraction (#21809)
+  * [youtube] Fix and improve title and description extraction (#21934)
+  * [tvigle] Add support for HLS and DASH formats (#21967)
+  * [tvigle] Fix extraction (#21967)
+  * [yandexvideo] Add support for DASH formats (#21971)
+  * [discovery] Use API call for video data extraction (#21808)
+  * [mgtv] Extract format_note (#21881)
+  * [tvn24] Fix metadata extraction (#21833, #21834)
+  * [dlive] Relax URL regular expression (#21909)
+  * [openload] Add support for oload.best (#21913)
+  * [youtube] Improve metadata extraction for age gate content (#21943)
+
+-------------------------------------------------------------------

Old:
----
  youtube-dl-2019.07.16.tar.gz
  youtube-dl-2019.07.16.tar.gz.sig

New:
----
  youtube-dl-2019.08.02.tar.gz
  youtube-dl-2019.08.02.tar.gz.sig

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-youtube-dl.spec ++++++
--- /var/tmp/diff_new_pack.DtI6Mt/_old  2019-08-05 10:41:16.071299619 +0200
+++ /var/tmp/diff_new_pack.DtI6Mt/_new  2019-08-05 10:41:16.075299618 +0200
@@ -19,7 +19,7 @@
 %define modname youtube-dl
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 Name:           python-youtube-dl
-Version:        2019.07.16
+Version:        2019.08.02
 Release:        0
 Summary:        A python module for downloading from video sites for offline watching
 License:        SUSE-Public-Domain AND CC-BY-SA-3.0

++++++ youtube-dl.spec ++++++
--- /var/tmp/diff_new_pack.DtI6Mt/_old  2019-08-05 10:41:16.103299615 +0200
+++ /var/tmp/diff_new_pack.DtI6Mt/_new  2019-08-05 10:41:16.103299615 +0200
@@ -17,7 +17,7 @@
 
 
 Name:           youtube-dl
-Version:        2019.07.16
+Version:        2019.08.02
 Release:        0
 Summary:        A tool for downloading from video sites for offline watching
 License:        SUSE-Public-Domain AND CC-BY-SA-3.0

++++++ youtube-dl-2019.07.16.tar.gz -> youtube-dl-2019.08.02.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/ChangeLog new/youtube-dl/ChangeLog
--- old/youtube-dl/ChangeLog    2019-07-15 19:01:39.000000000 +0200
+++ new/youtube-dl/ChangeLog    2019-08-02 00:37:46.000000000 +0200
@@ -1,3 +1,34 @@
+version 2019.08.02
+
+Extractors
++ [tvigle] Add support for HLS and DASH formats (#21967)
+* [tvigle] Fix extraction (#21967)
++ [yandexvideo] Add support for DASH formats (#21971)
+* [discovery] Use API call for video data extraction (#21808)
++ [mgtv] Extract format_note (#21881)
+* [tvn24] Fix metadata extraction (#21833, #21834)
+* [dlive] Relax URL regular expression (#21909)
++ [openload] Add support for oload.best (#21913)
+* [youtube] Improve metadata extraction for age gate content (#21943)
+
+
+version 2019.07.30
+
+Extractors
+* [youtube] Fix and improve title and description extraction (#21934)
+
+
+version 2019.07.27
+
+Extractors
++ [yahoo:japannews] Add support for yahoo.co.jp (#21698, #21265)
++ [discovery] Add support for go.discovery.com URLs
+* [youtube:playlist] Relax video regular expression (#21844)
+* [generic] Restrict --default-search schemeless URLs detection pattern
+  (#21842)
+* [vrv] Fix CMS signing query extraction (#21809)
+
+
 version 2019.07.16
 
 Extractors
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/docs/supportedsites.md new/youtube-dl/docs/supportedsites.md
--- old/youtube-dl/docs/supportedsites.md       2019-07-15 19:01:46.000000000 +0200
+++ new/youtube-dl/docs/supportedsites.md       2019-08-02 00:37:53.000000000 +0200
@@ -1117,6 +1117,7 @@
  - **Yahoo**: Yahoo screen and movies
  - **yahoo:gyao**
  - **yahoo:gyao:player**
+ - **yahoo:japannews**: Yahoo! Japan News
  - **YandexDisk**
  - **yandexmusic:album**: Яндекс.Музыка - Альбом
  - **yandexmusic:playlist**: Яндекс.Музыка - Плейлист
Binary files old/youtube-dl/youtube-dl and new/youtube-dl/youtube-dl differ
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/__init__.py new/youtube-dl/youtube_dl/__init__.py
--- old/youtube-dl/youtube_dl/__init__.py       2019-07-15 18:59:55.000000000 +0200
+++ new/youtube-dl/youtube_dl/__init__.py       2019-08-02 00:37:30.000000000 +0200
@@ -94,7 +94,7 @@
             if opts.verbose:
                 write_string('[debug] Batch file urls: ' + repr(batch_urls) + '\n')
         except IOError:
-            sys.exit('ERROR: batch file could not be read')
+            sys.exit('ERROR: batch file %s could not be read' % opts.batchfile)
     all_urls = batch_urls + [url.strip() for url in args]  # batch_urls are already striped in read_batch_urls
     _enc = preferredencoding()
     all_urls = [url.decode(_enc, 'ignore') if isinstance(url, bytes) else url for url in all_urls]
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/downloader/dash.py new/youtube-dl/youtube_dl/downloader/dash.py
--- old/youtube-dl/youtube_dl/downloader/dash.py        2019-07-15 18:59:56.000000000 +0200
+++ new/youtube-dl/youtube_dl/downloader/dash.py        2019-08-02 00:37:17.000000000 +0200
@@ -53,7 +53,7 @@
                 except compat_urllib_error.HTTPError as err:
                     # YouTube may often return 404 HTTP error for a fragment causing the
                     # whole download to fail. However if the same fragment is immediately
-                    # retried with the same request data this usually succeeds (1-2 attemps
+                    # retried with the same request data this usually succeeds (1-2 attempts
                     # is usually enough) thus allowing to download the whole file successfully.
                     # To be future-proof we will retry all fragments that fail with any
                     # HTTP error.
                     # HTTP error.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/downloader/ism.py new/youtube-dl/youtube_dl/downloader/ism.py
--- old/youtube-dl/youtube_dl/downloader/ism.py 2019-07-15 18:59:56.000000000 +0200
+++ new/youtube-dl/youtube_dl/downloader/ism.py 2019-08-02 00:37:17.000000000 +0200
@@ -146,7 +146,7 @@
             sps, pps = codec_private_data.split(u32.pack(1))[1:]
             avcc_payload = u8.pack(1)  # configuration version
             avcc_payload += sps[1:4]  # avc profile indication + profile compatibility + avc level indication
-            avcc_payload += u8.pack(0xfc | (params.get('nal_unit_length_field', 4) - 1))  # complete represenation (1) + reserved (11111) + length size minus one
+            avcc_payload += u8.pack(0xfc | (params.get('nal_unit_length_field', 4) - 1))  # complete representation (1) + reserved (11111) + length size minus one
             avcc_payload += u8.pack(1)  # reserved (0) + number of sps (0000001)
             avcc_payload += u16.pack(len(sps))
             avcc_payload += sps
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/common.py new/youtube-dl/youtube_dl/extractor/common.py
--- old/youtube-dl/youtube_dl/extractor/common.py       2019-07-15 18:59:56.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/common.py       2019-08-02 00:37:18.000000000 +0200
@@ -220,7 +220,7 @@
                         * "preference" (optional, int) - quality of the image
                         * "width" (optional, int)
                         * "height" (optional, int)
-                        * "resolution" (optional, string "{width}x{height"},
+                        * "resolution" (optional, string "{width}x{height}",
                                         deprecated)
                         * "filesize" (optional, int)
     thumbnail:      Full URL to a video thumbnail image.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/ctsnews.py new/youtube-dl/youtube_dl/extractor/ctsnews.py
--- old/youtube-dl/youtube_dl/extractor/ctsnews.py      2019-07-15 19:00:11.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/ctsnews.py      2019-08-02 00:37:18.000000000 +0200
@@ -5,6 +5,7 @@
 from ..utils import unified_timestamp
 from .youtube import YoutubeIE
 
+
 class CtsNewsIE(InfoExtractor):
     IE_DESC = '華視新聞'
     _VALID_URL = r'https?://news\.cts\.com\.tw/[a-z]+/[a-z]+/\d+/(?P<id>\d+)\.html'
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/discovery.py new/youtube-dl/youtube_dl/extractor/discovery.py
--- old/youtube-dl/youtube_dl/extractor/discovery.py    2019-07-15 18:59:56.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/discovery.py    2019-08-02 00:37:30.000000000 +0200
@@ -5,23 +5,17 @@
 import string
 
 from .discoverygo import DiscoveryGoBaseIE
-from ..compat import (
-    compat_str,
-    compat_urllib_parse_unquote,
-)
-from ..utils import (
-    ExtractorError,
-    try_get,
-)
+from ..compat import compat_urllib_parse_unquote
+from ..utils import ExtractorError
 from ..compat import compat_HTTPError
 
 
 class DiscoveryIE(DiscoveryGoBaseIE):
     _VALID_URL = r'''(?x)https?://
         (?P<site>
+            (?:(?:www|go)\.)?discovery|
             (?:www\.)?
                 (?:
-                    discovery|
                     investigationdiscovery|
                     discoverylife|
                     animalplanet|
@@ -40,15 +34,15 @@
                     cookingchanneltv|
                     motortrend
                 )
-        )\.com(?P<path>/tv-shows/[^/]+/(?:video|full-episode)s/(?P<id>[^./?#]+))'''
+        )\.com/tv-shows/[^/]+/(?:video|full-episode)s/(?P<id>[^./?#]+)'''
     _TESTS = [{
-        'url': 'https://www.discovery.com/tv-shows/cash-cab/videos/dave-foley',
+        'url': 'https://go.discovery.com/tv-shows/cash-cab/videos/riding-with-matthew-perry',
         'info_dict': {
-            'id': '5a2d9b4d6b66d17a5026e1fd',
+            'id': '5a2f35ce6b66d17a5026e29e',
             'ext': 'mp4',
-            'title': 'Dave Foley',
-            'description': 'md5:4b39bcafccf9167ca42810eb5f28b01f',
-            'duration': 608,
+            'title': 'Riding with Matthew Perry',
+            'description': 'md5:a34333153e79bc4526019a5129e7f878',
+            'duration': 84,
         },
         'params': {
             'skip_download': True,  # requires ffmpeg
@@ -56,20 +50,16 @@
     }, {
         'url': 'https://www.investigationdiscovery.com/tv-shows/final-vision/full-episodes/final-vision',
         'only_matching': True,
+    }, {
+        'url': 'https://go.discovery.com/tv-shows/alaskan-bush-people/videos/follow-your-own-road',
+        'only_matching': True,
     }]
     _GEO_COUNTRIES = ['US']
     _GEO_BYPASS = False
+    _API_BASE_URL = 'https://api.discovery.com/v1/'
 
     def _real_extract(self, url):
-        site, path, display_id = re.match(self._VALID_URL, url).groups()
-        webpage = self._download_webpage(url, display_id)
-
-        react_data = self._parse_json(self._search_regex(
-            r'window\.__reactTransmitPacket\s*=\s*({.+?});',
-            webpage, 'react data'), display_id)
-        content_blocks = react_data['layout'][path]['contentBlocks']
-        video = next(cb for cb in content_blocks if cb.get('type') == 'video')['content']['items'][0]
-        video_id = video['id']
+        site, display_id = re.match(self._VALID_URL, url).groups()
 
         access_token = None
         cookies = self._get_cookies(url)
@@ -79,27 +69,33 @@
         if auth_storage_cookie and auth_storage_cookie.value:
             auth_storage = self._parse_json(compat_urllib_parse_unquote(
                 compat_urllib_parse_unquote(auth_storage_cookie.value)),
-                video_id, fatal=False) or {}
+                display_id, fatal=False) or {}
             access_token = auth_storage.get('a') or auth_storage.get('access_token')
 
         if not access_token:
             access_token = self._download_json(
-                'https://%s.com/anonymous' % site, display_id, query={
+                'https://%s.com/anonymous' % site, display_id,
+                'Downloading token JSON metadata', query={
                     'authRel': 'authorization',
-                    'client_id': try_get(
-                        react_data, lambda x: x['application']['apiClientId'],
-                        compat_str) or '3020a40c2356a645b4b4',
+                    'client_id': '3020a40c2356a645b4b4',
                     'nonce': ''.join([random.choice(string.ascii_letters) for 
_ in range(32)]),
                     'redirectUri': 
'https://fusion.ddmcdn.com/app/mercury-sdk/180/redirectHandler.html?https://www.%s.com'
 % site,
                 })['access_token']
 
-        try:
-            headers = self.geo_verification_headers()
-            headers['Authorization'] = 'Bearer ' + access_token
+        headers = self.geo_verification_headers()
+        headers['Authorization'] = 'Bearer ' + access_token
 
+        try:
+            video = self._download_json(
+                self._API_BASE_URL + 'content/videos',
+                display_id, 'Downloading content JSON metadata',
+                headers=headers, query={
+                    'slug': display_id,
+                })[0]
+            video_id = video['id']
             stream = self._download_json(
-                'https://api.discovery.com/v1/streaming/video/' + video_id,
-                display_id, headers=headers)
+                self._API_BASE_URL + 'streaming/video/' + video_id,
+                display_id, 'Downloading streaming JSON metadata', headers=headers)
         except ExtractorError as e:
             if isinstance(e.cause, compat_HTTPError) and e.cause.code in (401, 403):
                 e_description = self._parse_json(
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/dlive.py new/youtube-dl/youtube_dl/extractor/dlive.py
--- old/youtube-dl/youtube_dl/extractor/dlive.py        2019-07-15 18:59:56.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/dlive.py        2019-08-02 00:37:30.000000000 +0200
@@ -9,8 +9,8 @@
 
 class DLiveVODIE(InfoExtractor):
     IE_NAME = 'dlive:vod'
-    _VALID_URL = r'https?://(?:www\.)?dlive\.tv/p/(?P<uploader_id>.+?)\+(?P<id>[a-zA-Z0-9]+)'
-    _TEST = {
+    _VALID_URL = r'https?://(?:www\.)?dlive\.tv/p/(?P<uploader_id>.+?)\+(?P<id>[^/?#&]+)'
+    _TESTS = [{
         'url': 'https://dlive.tv/p/pdp+3mTzOl4WR',
         'info_dict': {
             'id': '3mTzOl4WR',
@@ -20,7 +20,10 @@
             'timestamp': 1562011015,
             'uploader_id': 'pdp',
         }
-    }
+    }, {
+        'url': 'https://dlive.tv/p/pdpreplay+D-RD-xSZg',
+        'only_matching': True,
+    }]
 
     def _real_extract(self, url):
         uploader_id, vod_id = re.match(self._VALID_URL, url).groups()
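The relaxed `dlive:vod` pattern above now captures ids containing characters beyond `[a-zA-Z0-9]`, such as the hyphenated `D-RD-xSZg` in the new test. A minimal standalone check of the new pattern (the regex is copied from the diff; the helper function name is mine, not part of youtube-dl):

```python
import re

# New DLiveVODIE._VALID_URL from the diff above; the id group was relaxed
# from [a-zA-Z0-9]+ to [^/?#&]+ so hyphenated ids are captured whole.
DLIVE_VOD_RE = re.compile(
    r'https?://(?:www\.)?dlive\.tv/p/(?P<uploader_id>.+?)\+(?P<id>[^/?#&]+)')

def parse_dlive_vod(url):
    """Return (uploader_id, vod_id) or None if the URL does not match."""
    m = DLIVE_VOD_RE.match(url)
    return m.group('uploader_id', 'id') if m else None
```

With the old id group, `re.match` on `https://dlive.tv/p/pdpreplay+D-RD-xSZg` would stop at the first hyphen and extract only `D`; the new character class takes everything up to a path, query, or fragment delimiter.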
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/extractors.py new/youtube-dl/youtube_dl/extractor/extractors.py
--- old/youtube-dl/youtube_dl/extractor/extractors.py   2019-07-15 18:59:57.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/extractors.py   2019-08-02 00:37:18.000000000 +0200
@@ -1448,6 +1448,7 @@
     YahooSearchIE,
     YahooGyaOPlayerIE,
     YahooGyaOIE,
+    YahooJapanNewsIE,
 )
 from .yandexdisk import YandexDiskIE
 from .yandexmusic import (
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/generic.py new/youtube-dl/youtube_dl/extractor/generic.py
--- old/youtube-dl/youtube_dl/extractor/generic.py      2019-07-15 18:59:57.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/generic.py      2019-08-02 00:37:18.000000000 +0200
@@ -2226,7 +2226,7 @@
                 default_search = 'fixup_error'
 
             if default_search in ('auto', 'auto_warning', 'fixup_error'):
-                if '/' in url:
+                if re.match(r'^[^\s/]+\.[^\s/]+/', url):
                     self._downloader.report_warning('The url doesn\'t specify the protocol, trying with http')
                     return self.url_result('http://' + url)
                 elif default_search != 'fixup_error':
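The hunk above tightens `--default-search` handling: instead of promoting any input containing a slash to a URL, only input shaped like `host.tld/...` is retried with an `http://` prefix. A minimal sketch of the new check (the pattern is taken verbatim from the diff; the function name is mine):

```python
import re

# New detection pattern from generic.py: require host.tld before the first
# slash, with no whitespace, before assuming schemeless input is a URL.
SCHEMELESS_URL_RE = re.compile(r'^[^\s/]+\.[^\s/]+/')

def looks_like_schemeless_url(s):
    """True if s plausibly is a URL missing its scheme."""
    return SCHEMELESS_URL_RE.match(s) is not None
```

This is what fixes #21842: a search query that merely happens to contain a slash no longer gets mistaken for a schemeless URL.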
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/leeco.py new/youtube-dl/youtube_dl/extractor/leeco.py
--- old/youtube-dl/youtube_dl/extractor/leeco.py        2019-07-15 18:59:57.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/leeco.py        2019-08-02 00:37:19.000000000 +0200
@@ -326,7 +326,7 @@
             elif play_json.get('code'):
                 raise ExtractorError('Letv cloud returned error %d' % play_json['code'], expected=True)
             else:
-                raise ExtractorError('Letv cloud returned an unknwon error')
+                raise ExtractorError('Letv cloud returned an unknown error')
 
         def b64decode(s):
             return compat_b64decode(s).decode('utf-8')
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/mgtv.py new/youtube-dl/youtube_dl/extractor/mgtv.py
--- old/youtube-dl/youtube_dl/extractor/mgtv.py 2019-07-15 18:59:57.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/mgtv.py 2019-08-02 00:37:30.000000000 +0200
@@ -82,6 +82,7 @@
                 'http_headers': {
                     'Referer': url,
                 },
+                'format_note': stream.get('name'),
             })
         self._sort_formats(formats)
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/openload.py new/youtube-dl/youtube_dl/extractor/openload.py
--- old/youtube-dl/youtube_dl/extractor/openload.py     2019-07-15 18:59:57.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/openload.py     2019-08-02 00:37:30.000000000 +0200
@@ -243,7 +243,7 @@
 
 
 class OpenloadIE(InfoExtractor):
-    _DOMAINS = r'(?:openload\.(?:co|io|link|pw)|oload\.(?:tv|biz|stream|site|xyz|win|download|cloud|cc|icu|fun|club|info|press|pw|life|live|space|services|website)|oladblock\.(?:services|xyz|me)|openloed\.co)'
+    _DOMAINS = r'(?:openload\.(?:co|io|link|pw)|oload\.(?:tv|best|biz|stream|site|xyz|win|download|cloud|cc|icu|fun|club|info|press|pw|life|live|space|services|website)|oladblock\.(?:services|xyz|me)|openloed\.co)'
     _VALID_URL = r'''(?x)
                     https?://
                         (?P<host>
@@ -369,6 +369,9 @@
         'url': 'https://oload.biz/f/bEk3Gp8ARr4/',
         'only_matching': True,
     }, {
+        'url': 'https://oload.best/embed/kkz9JgVZeWc/',
+        'only_matching': True,
+    }, {
         'url': 'https://oladblock.services/f/b8NWEgkqNLI/',
         'only_matching': True,
     }, {
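The only change to `_DOMAINS` above is the added `best` alternative in the `oload` group. A quick standalone check that the updated alternation recognizes the new mirror (the pattern string is copied from the diff; the helper function is mine):

```python
import re

# Updated OpenloadIE._DOMAINS alternation from the diff, with 'best' added
# to the oload.* TLD list.
DOMAINS = (
    r'(?:openload\.(?:co|io|link|pw)'
    r'|oload\.(?:tv|best|biz|stream|site|xyz|win|download|cloud|cc|icu|fun'
    r'|club|info|press|pw|life|live|space|services|website)'
    r'|oladblock\.(?:services|xyz|me)|openloed\.co)')

def is_openload_host(url):
    """True if the URL contains one of the recognized Openload domains."""
    return re.search(DOMAINS, url) is not None
```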
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/rtlnl.py new/youtube-dl/youtube_dl/extractor/rtlnl.py
--- old/youtube-dl/youtube_dl/extractor/rtlnl.py        2019-07-15 18:59:58.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/rtlnl.py        2019-08-02 00:37:20.000000000 +0200
@@ -32,7 +32,7 @@
             'duration': 1167.96,
         },
     }, {
-        # best format avaialble a3t
+        # best format available a3t
         'url': 'http://www.rtl.nl/system/videoplayer/derden/rtlnieuws/video_embed.html#uuid=84ae5571-ac25-4225-ae0c-ef8d9efb2aed/autoplay=false',
         'md5': 'dea7474214af1271d91ef332fb8be7ea',
         'info_dict': {
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/soundcloud.py new/youtube-dl/youtube_dl/extractor/soundcloud.py
--- old/youtube-dl/youtube_dl/extractor/soundcloud.py   2019-07-15 18:59:58.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/soundcloud.py   2019-08-02 00:37:20.000000000 +0200
@@ -197,7 +197,7 @@
                 'skip_download': True,
             },
         },
-        # not avaialble via api.soundcloud.com/i1/tracks/id/streams
+        # not available via api.soundcloud.com/i1/tracks/id/streams
         {
             'url': 'https://soundcloud.com/giovannisarani/mezzo-valzer',
             'md5': 'e22aecd2bc88e0e4e432d7dcc0a1abf7',
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/tvigle.py new/youtube-dl/youtube_dl/extractor/tvigle.py
--- old/youtube-dl/youtube_dl/extractor/tvigle.py       2019-07-15 18:59:58.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/tvigle.py       2019-08-02 00:37:30.000000000 +0200
@@ -9,6 +9,8 @@
     float_or_none,
     int_or_none,
     parse_age_limit,
+    try_get,
+    url_or_none,
 )
 
 
@@ -23,11 +25,10 @@
     _TESTS = [
         {
             'url': 'http://www.tvigle.ru/video/sokrat/',
-            'md5': '36514aed3657d4f70b4b2cef8eb520cd',
             'info_dict': {
                 'id': '1848932',
                 'display_id': 'sokrat',
-                'ext': 'flv',
+                'ext': 'mp4',
                 'title': 'Сократ',
                 'description': 'md5:d6b92ffb7217b4b8ebad2e7665253c17',
                 'duration': 6586,
@@ -37,7 +38,6 @@
         },
         {
             'url': 'http://www.tvigle.ru/video/vladimir-vysotskii/vedushchii-teleprogrammy-60-minut-ssha-o-vladimire-vysotskom/',
-            'md5': 'e7efe5350dd5011d0de6550b53c3ba7b',
             'info_dict': {
                 'id': '5142516',
                 'ext': 'flv',
@@ -62,7 +62,7 @@
             webpage = self._download_webpage(url, display_id)
             video_id = self._html_search_regex(
                 (r'<div[^>]+class=["\']player["\'][^>]+id=["\'](\d+)',
-                 r'var\s+cloudId\s*=\s*["\'](\d+)',
+                 r'cloudId\s*=\s*["\'](\d+)',
                  r'class="video-preview current_playing" id="(\d+)"'),
                 webpage, 'video id')
 
@@ -90,21 +90,40 @@
         age_limit = parse_age_limit(item.get('ageRestrictions'))
 
         formats = []
-        for vcodec, fmts in item['videos'].items():
+        for vcodec, url_or_fmts in item['videos'].items():
             if vcodec == 'hls':
-                continue
-            for format_id, video_url in fmts.items():
-                if format_id == 'm3u8':
+                m3u8_url = url_or_none(url_or_fmts)
+                if not m3u8_url:
+                    continue
+                formats.extend(self._extract_m3u8_formats(
+                    m3u8_url, video_id, ext='mp4', 
entry_protocol='m3u8_native',
+                    m3u8_id='hls', fatal=False))
+            elif vcodec == 'dash':
+                mpd_url = url_or_none(url_or_fmts)
+                if not mpd_url:
+                    continue
+                formats.extend(self._extract_mpd_formats(
+                    mpd_url, video_id, mpd_id='dash', fatal=False))
+            else:
+                if not isinstance(url_or_fmts, dict):
                     continue
-                height = self._search_regex(
-                    r'^(\d+)[pP]$', format_id, 'height', default=None)
-                formats.append({
-                    'url': video_url,
-                    'format_id': '%s-%s' % (vcodec, format_id),
-                    'vcodec': vcodec,
-                    'height': int_or_none(height),
-                    'filesize': int_or_none(item.get('video_files_size', {}).get(vcodec, {}).get(format_id)),
-                })
+                for format_id, video_url in url_or_fmts.items():
+                    if format_id == 'm3u8':
+                        continue
+                    video_url = url_or_none(video_url)
+                    if not video_url:
+                        continue
+                    height = self._search_regex(
+                        r'^(\d+)[pP]$', format_id, 'height', default=None)
+                    filesize = int_or_none(try_get(
+                        item, lambda x: x['video_files_size'][vcodec][format_id]))
+                    formats.append({
+                        'url': video_url,
+                        'format_id': '%s-%s' % (vcodec, format_id),
+                        'vcodec': vcodec,
+                        'height': int_or_none(height),
+                        'filesize': filesize,
+                    })
         self._sort_formats(formats)
 
         return {
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/tvn24.py new/youtube-dl/youtube_dl/extractor/tvn24.py
--- old/youtube-dl/youtube_dl/extractor/tvn24.py        2019-07-15 18:59:58.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/tvn24.py        2019-08-02 00:37:30.000000000 +0200
@@ -4,6 +4,7 @@
 from .common import InfoExtractor
 from ..utils import (
     int_or_none,
+    NO_DEFAULT,
     unescapeHTML,
 )
 
@@ -17,10 +18,22 @@
             'id': '1584444',
             'ext': 'mp4',
             'title': '"Święta mają być wesołe, dlatego, ludziska, wszyscy pod jemiołę"',
-            'description': 'Wyjątkowe orędzie Artura Andrusa, jednego z gości "Szkła kontaktowego".',
+            'description': 'Wyjątkowe orędzie Artura Andrusa, jednego z gości Szkła kontaktowego.',
             'thumbnail': 're:https?://.*[.]jpeg',
         }
     }, {
+        # different layout
+        'url': 'https://tvnmeteo.tvn24.pl/magazyny/maja-w-ogrodzie,13/odcinki-online,1,4,1,0/pnacza-ptaki-i-iglaki-odc-691-hgtv-odc-29,1771763.html',
+        'info_dict': {
+            'id': '1771763',
+            'ext': 'mp4',
+            'title': 'Pnącza, ptaki i iglaki (odc. 691 /HGTV odc. 29)',
+            'thumbnail': 're:https?://.*',
+        },
+        'params': {
+            'skip_download': True,
+        },
+    }, {
         'url': 'http://fakty.tvn24.pl/ogladaj-online,60/53-konferencja-bezpieczenstwa-w-monachium,716431.html',
         'only_matching': True,
     }, {
@@ -35,18 +48,21 @@
     }]
 
     def _real_extract(self, url):
-        video_id = self._match_id(url)
+        display_id = self._match_id(url)
 
-        webpage = self._download_webpage(url, video_id)
+        webpage = self._download_webpage(url, display_id)
 
-        title = self._og_search_title(webpage)
+        title = self._og_search_title(
+            webpage, default=None) or self._search_regex(
+            r'<h\d+[^>]+class=["\']magazineItemHeader[^>]+>(.+?)</h',
+            webpage, 'title')
 
-        def extract_json(attr, name, fatal=True):
+        def extract_json(attr, name, default=NO_DEFAULT, fatal=True):
             return self._parse_json(
                 self._search_regex(
                     r'\b%s=(["\'])(?P<json>(?!\1).+?)\1' % attr, webpage,
-                    name, group='json', fatal=fatal) or '{}',
-                video_id, transform_source=unescapeHTML, fatal=fatal)
+                    name, group='json', default=default, fatal=fatal) or '{}',
+                display_id, transform_source=unescapeHTML, fatal=fatal)
 
         quality_data = extract_json('data-quality', 'formats')
 
@@ -59,16 +75,24 @@
             })
         self._sort_formats(formats)
 
-        description = self._og_search_description(webpage)
+        description = self._og_search_description(webpage, default=None)
         thumbnail = self._og_search_thumbnail(
             webpage, default=None) or self._html_search_regex(
             r'\bdata-poster=(["\'])(?P<url>(?!\1).+?)\1', webpage,
             'thumbnail', group='url')
 
+        video_id = None
+
         share_params = extract_json(
-            'data-share-params', 'share params', fatal=False)
+            'data-share-params', 'share params', default=None)
         if isinstance(share_params, dict):
-            video_id = share_params.get('id') or video_id
+            video_id = share_params.get('id')
+
+        if not video_id:
+            video_id = self._search_regex(
+                r'data-vid-id=["\'](\d+)', webpage, 'video id',
+                default=None) or self._search_regex(
+                r',(\d+)\.html', url, 'video id', default=display_id)
 
         return {
             'id': video_id,
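The new fallback chain above derives the numeric video id when share params are absent: first `data-vid-id` in the page, then the trailing `,<digits>.html` in the URL itself. The URL-based fallback alone can be sketched like this (the regex is from the diff; the helper name is mine):

```python
import re

# Last-resort id extraction from the tvn24 diff: article URLs end in
# ,<numeric id>.html, so pull the digits between the final comma and
# the .html suffix.
def vid_id_from_url(url, default=None):
    m = re.search(r',(\d+)\.html', url)
    return m.group(1) if m else default
```

Note the pattern only matches a comma-delimited run of digits immediately followed by `.html`, so the earlier `,13` and `,1,4,1,0` path segments in the new test URL are skipped.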
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/vrv.py new/youtube-dl/youtube_dl/extractor/vrv.py
--- old/youtube-dl/youtube_dl/extractor/vrv.py  2019-07-15 18:59:59.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/vrv.py  2019-08-02 00:37:21.000000000 +0200
@@ -64,7 +64,15 @@
 
     def _call_cms(self, path, video_id, note):
         if not self._CMS_SIGNING:
-            self._CMS_SIGNING = self._call_api('index', video_id, 'CMS Signing')['cms_signing']
+            index = self._call_api('index', video_id, 'CMS Signing')
+            self._CMS_SIGNING = index.get('cms_signing') or {}
+            if not self._CMS_SIGNING:
+                for signing_policy in index.get('signing_policies', []):
+                    signing_path = signing_policy.get('path')
+                    if signing_path and signing_path.startswith('/cms/'):
+                        name, value = signing_policy.get('name'), signing_policy.get('value')
+                        if name and value:
+                            self._CMS_SIGNING[name] = value
         return self._download_json(
             self._API_DOMAIN + path, video_id, query=self._CMS_SIGNING,
            note='Downloading %s JSON metadata' % note, headers=self.geo_verification_headers())
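
The CMS-signing fallback in this vrv.py hunk can be restated as a self-contained function. The sample `index` payload below is a hypothetical illustration of the API shape the code defends against:

```python
def build_cms_signing(index):
    # Mirror the new logic: prefer a top-level 'cms_signing' dict; otherwise
    # collect name/value pairs from signing policies whose path targets /cms/.
    signing = index.get('cms_signing') or {}
    if not signing:
        for policy in index.get('signing_policies', []):
            path = policy.get('path')
            if path and path.startswith('/cms/'):
                name, value = policy.get('name'), policy.get('value')
                if name and value:
                    signing[name] = value
    return signing

# Hypothetical API response for illustration:
index = {'signing_policies': [
    {'path': '/cms/v2', 'name': 'Policy', 'value': 'abc'},
    {'path': '/core/', 'name': 'Other', 'value': 'xyz'},
]}
print(build_cms_signing(index))  # {'Policy': 'abc'}
```

This is what fixed the CMS signing query extraction (#21809): the index endpoint stopped returning `cms_signing` directly, so the signing parameters are now scraped from `signing_policies`.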
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/yahoo.py new/youtube-dl/youtube_dl/extractor/yahoo.py
--- old/youtube-dl/youtube_dl/extractor/yahoo.py        2019-07-15 18:59:59.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/yahoo.py        2019-08-02 00:37:22.000000000 +0200
@@ -1,12 +1,14 @@
 # coding: utf-8
 from __future__ import unicode_literals
 
+import hashlib
 import itertools
 import json
 import re
 
 from .common import InfoExtractor, SearchInfoExtractor
 from ..compat import (
+    compat_str,
     compat_urllib_parse,
     compat_urlparse,
 )
@@ -18,7 +20,9 @@
     int_or_none,
     mimetype2ext,
     smuggle_url,
+    try_get,
     unescapeHTML,
+    url_or_none,
 )
 
 from .brightcove import (
@@ -556,3 +560,130 @@
                'https://gyao.yahoo.co.jp/player/%s/' % video_id.replace(':', '/'),
                 YahooGyaOPlayerIE.ie_key(), video_id))
         return self.playlist_result(entries, program_id)
+
+
+class YahooJapanNewsIE(InfoExtractor):
+    IE_NAME = 'yahoo:japannews'
+    IE_DESC = 'Yahoo! Japan News'
+    _VALID_URL = r'https?://(?P<host>(?:news|headlines)\.yahoo\.co\.jp)[^\d]*(?P<id>\d[\d-]*\d)?'
+    _GEO_COUNTRIES = ['JP']
+    _TESTS = [{
+        'url': 'https://headlines.yahoo.co.jp/videonews/ann?a=20190716-00000071-ann-int',
+        'info_dict': {
+            'id': '1736242',
+            'ext': 'mp4',
+            'title': 'ムン大統領が対日批判を強化“現金化”効果は?(テレビ朝日系(ANN)) - Yahoo!ニュース',
+            'description': '韓国の元徴用工らを巡る裁判の原告が弁護士が差し押さえた三菱重工業の資産を売却して - Yahoo!ニュース(テレビ朝日系(ANN))',
+            'thumbnail': r're:^https?://.*\.[a-zA-Z\d]{3,4}$',
+        },
+        'params': {
+            'skip_download': True,
+        },
+    }, {
+        # geo restricted
+        'url': 'https://headlines.yahoo.co.jp/hl?a=20190721-00000001-oxv-l04',
+        'only_matching': True,
+    }, {
+        'url': 'https://headlines.yahoo.co.jp/videonews/',
+        'only_matching': True,
+    }, {
+        'url': 'https://news.yahoo.co.jp',
+        'only_matching': True,
+    }, {
+        'url': 'https://news.yahoo.co.jp/byline/hashimotojunji/20190628-00131977/',
+        'only_matching': True,
+    }, {
+        'url': 'https://news.yahoo.co.jp/feature/1356',
+        'only_matching': True
+    }]
+
+    def _extract_formats(self, json_data, content_id):
+        formats = []
+
+        video_data = try_get(
+            json_data,
+            lambda x: x['ResultSet']['Result'][0]['VideoUrlSet']['VideoUrl'],
+            list)
+        for vid in video_data or []:
+            delivery = vid.get('delivery')
+            url = url_or_none(vid.get('Url'))
+            if not delivery or not url:
+                continue
+            elif delivery == 'hls':
+                formats.extend(
+                    self._extract_m3u8_formats(
+                        url, content_id, 'mp4', 'm3u8_native',
+                        m3u8_id='hls', fatal=False))
+            else:
+                formats.append({
+                    'url': url,
+                    'format_id': 'http-%s' % compat_str(vid.get('bitrate', '')),
+                    'height': int_or_none(vid.get('height')),
+                    'width': int_or_none(vid.get('width')),
+                    'tbr': int_or_none(vid.get('bitrate')),
+                })
+        self._remove_duplicate_formats(formats)
+        self._sort_formats(formats)
+
+        return formats
+
+    def _real_extract(self, url):
+        mobj = re.match(self._VALID_URL, url)
+        host = mobj.group('host')
+        display_id = mobj.group('id') or host
+
+        webpage = self._download_webpage(url, display_id)
+
+        title = self._html_search_meta(
+            ['og:title', 'twitter:title'], webpage, 'title', default=None
+        ) or self._html_search_regex('<title>([^<]+)</title>', webpage, 'title')
+
+        if display_id == host:
+            # Headline page (w/ multiple BC playlists) ('news.yahoo.co.jp', 'headlines.yahoo.co.jp/videonews/', ...)
+            stream_plists = re.findall(r'plist=(\d+)', webpage) or re.findall(r'plist["\']:\s*["\']([^"\']+)', webpage)
+            entries = [
+                self.url_result(
+                    smuggle_url(
+                        'http://players.brightcove.net/5690807595001/HyZNerRl7_default/index.html?playlistId=%s' % plist_id,
+                        {'geo_countries': ['JP']}),
+                    ie='BrightcoveNew', video_id=plist_id)
+                for plist_id in stream_plists]
+            return self.playlist_result(entries, playlist_title=title)
+
+        # Article page
+        description = self._html_search_meta(
+            ['og:description', 'description', 'twitter:description'],
+            webpage, 'description', default=None)
+        thumbnail = self._og_search_thumbnail(
+            webpage, default=None) or self._html_search_meta(
+            'twitter:image', webpage, 'thumbnail', default=None)
+        space_id = self._search_regex([
+            r'<script[^>]+class=["\']yvpub-player["\'][^>]+spaceid=([^&"\']+)',
+            r'YAHOO\.JP\.srch\.\w+link\.onLoad[^;]+spaceID["\' ]*:["\' ]+([^"\']+)',
+            r'<!--\s+SpaceID=(\d+)'
+        ], webpage, 'spaceid')
+
+        content_id = self._search_regex(
+            r'<script[^>]+class=["\']yvpub-player["\'][^>]+contentid=(?P<contentid>[^&"\']+)',
+            webpage, 'contentid', group='contentid')
+
+        json_data = self._download_json(
+            'https://feapi-yvpub.yahooapis.jp/v1/content/%s' % content_id,
+            content_id,
+            query={
+                'appid': 'dj0zaiZpPVZMTVFJR0FwZWpiMyZzPWNvbnN1bWVyc2VjcmV0Jng9YjU-',
+                'output': 'json',
+                'space_id': space_id,
+                'domain': host,
+                'ak': hashlib.md5('_'.join((space_id, host)).encode()).hexdigest(),
+                'device_type': '1100',
+            })
+        formats = self._extract_formats(json_data, content_id)
+
+        return {
+            'id': content_id,
+            'title': title,
+            'description': description,
+            'thumbnail': thumbnail,
+            'formats': formats,
+        }
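
The `ak` query parameter built in the hunk above is just an MD5 digest over the space id and host. A minimal sketch (the sample space id is a hypothetical value, not one taken from the site):

```python
import hashlib

def compute_ak(space_id, host):
    # 'ak' is the MD5 hex digest of "<space_id>_<host>", exactly as the
    # feapi-yvpub.yahooapis.jp query in YahooJapanNewsIE builds it.
    return hashlib.md5('_'.join((space_id, host)).encode()).hexdigest()

# Hypothetical space id for illustration:
print(compute_ak('2078512230', 'headlines.yahoo.co.jp'))
```

Both inputs come from the article page itself (the `spaceid` regexes and the matched host), so the extractor needs no extra credentials beyond the static `appid`.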
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/yandexvideo.py new/youtube-dl/youtube_dl/extractor/yandexvideo.py
--- old/youtube-dl/youtube_dl/extractor/yandexvideo.py  2019-07-15 18:59:59.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/yandexvideo.py  2019-08-02 00:37:30.000000000 +0200
@@ -3,6 +3,7 @@
 
 from .common import InfoExtractor
 from ..utils import (
+    determine_ext,
     int_or_none,
     url_or_none,
 )
@@ -47,6 +48,10 @@
         # episode, sports
        'url': 'https://yandex.ru/?stream_channel=1538487871&stream_id=4132a07f71fb0396be93d74b3477131d',
         'only_matching': True,
+    }, {
+        # DASH with DRM
+        'url': 'https://yandex.ru/portal/video?from=morda&stream_id=485a92d94518d73a9d0ff778e13505f8',
+        'only_matching': True,
     }]
 
     def _real_extract(self, url):
@@ -59,13 +64,22 @@
                 'disable_trackings': 1,
             })['content']
 
-        m3u8_url = url_or_none(content.get('content_url')) or url_or_none(
+        content_url = url_or_none(content.get('content_url')) or url_or_none(
             content['streams'][0]['url'])
         title = content.get('title') or content.get('computed_title')
 
-        formats = self._extract_m3u8_formats(
-            m3u8_url, video_id, 'mp4', entry_protocol='m3u8_native',
-            m3u8_id='hls')
+        ext = determine_ext(content_url)
+
+        if ext == 'm3u8':
+            formats = self._extract_m3u8_formats(
+                content_url, video_id, 'mp4', entry_protocol='m3u8_native',
+                m3u8_id='hls')
+        elif ext == 'mpd':
+            formats = self._extract_mpd_formats(
+                content_url, video_id, mpd_id='dash')
+        else:
+            formats = [{'url': content_url}]
+
         self._sort_formats(formats)
 
         description = content.get('description')
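
The yandexvideo.py change above replaces a hardcoded HLS path with a dispatch on the URL's extension. A rough standalone equivalent (youtube-dl's `determine_ext` is more thorough; this sketch only looks at the URL path):

```python
from urllib.parse import urlparse

def classify_stream(content_url):
    # Pick the extraction path from the file extension, as the new code
    # does: '.m3u8' -> HLS formats, '.mpd' -> DASH formats, anything
    # else is treated as a single progressive format.
    ext = urlparse(content_url).path.rpartition('.')[2]
    if ext == 'm3u8':
        return 'hls'
    if ext == 'mpd':
        return 'dash'
    return 'progressive'

print(classify_stream('https://example.com/master.m3u8'))   # hls
print(classify_stream('https://example.com/manifest.mpd'))  # dash
```

This is what adds DASH support (#21971) without losing the existing HLS and plain-URL cases.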
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/youtube.py new/youtube-dl/youtube_dl/extractor/youtube.py
--- old/youtube-dl/youtube_dl/extractor/youtube.py      2019-07-15 19:00:11.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/youtube.py      2019-08-02 00:37:30.000000000 +0200
@@ -1700,6 +1700,15 @@
         def extract_token(v_info):
            return dict_get(v_info, ('account_playback_token', 'accountPlaybackToken', 'token'))
 
+        def extract_player_response(player_response, video_id):
+            pl_response = str_or_none(player_response)
+            if not pl_response:
+                return
+            pl_response = self._parse_json(pl_response, video_id, fatal=False)
+            if isinstance(pl_response, dict):
+                add_dash_mpd_pr(pl_response)
+                return pl_response
+
         player_response = {}
 
         # Get video info
@@ -1722,7 +1731,10 @@
                 note='Refetching age-gated info webpage',
                 errnote='unable to download video info webpage')
             video_info = compat_parse_qs(video_info_webpage)
+            pl_response = video_info.get('player_response', [None])[0]
+            player_response = extract_player_response(pl_response, video_id)
             add_dash_mpd(video_info)
+            view_count = extract_view_count(video_info)
         else:
             age_gate = False
             video_info = None
@@ -1745,11 +1757,7 @@
                     is_live = True
                 sts = ytplayer_config.get('sts')
                 if not player_response:
-                    pl_response = str_or_none(args.get('player_response'))
-                    if pl_response:
-                        pl_response = self._parse_json(pl_response, video_id, fatal=False)
-                        if isinstance(pl_response, dict):
-                            player_response = pl_response
+                    player_response = extract_player_response(args.get('player_response'), video_id)
            if not video_info or self._downloader.params.get('youtube_include_dash_manifest', True):
                 add_dash_mpd_pr(player_response)
                # We also try looking in get_video_info since it may contain different dashmpd
@@ -1781,9 +1789,7 @@
                     get_video_info = compat_parse_qs(video_info_webpage)
                     if not player_response:
                        pl_response = get_video_info.get('player_response', [None])[0]
-                        if isinstance(pl_response, dict):
-                            player_response = pl_response
-                            add_dash_mpd_pr(player_response)
+                        player_response = extract_player_response(pl_response, video_id)
                     add_dash_mpd(get_video_info)
                     if view_count is None:
                         view_count = extract_view_count(get_video_info)
@@ -1820,16 +1826,11 @@
         video_details = try_get(
             player_response, lambda x: x['videoDetails'], dict) or {}
 
-        # title
-        if 'title' in video_info:
-            video_title = video_info['title'][0]
-        elif 'title' in player_response:
-            video_title = video_details['title']
-        else:
+        video_title = video_info.get('title', [None])[0] or video_details.get('title')
+        if not video_title:
             self._downloader.report_warning('Unable to extract video title')
             video_title = '_'
 
-        # description
        description_original = video_description = get_element_by_id("eow-description", video_webpage)
         if video_description:
 
@@ -1854,11 +1855,7 @@
             ''', replace_url, video_description)
             video_description = clean_html(video_description)
         else:
-            fd_mobj = re.search(r'<meta name="description" content="([^"]+)"', video_webpage)
-            if fd_mobj:
-                video_description = unescapeHTML(fd_mobj.group(1))
-            else:
-                video_description = ''
+            video_description = self._html_search_meta('description', video_webpage) or video_details.get('shortDescription')
 
         if not smuggled_data.get('force_singlefeed', False):
             if not self._downloader.params.get('noplaylist'):
@@ -2432,7 +2429,7 @@
                         (%(playlist_id)s)
                     )""" % {'playlist_id': YoutubeBaseInfoExtractor._PLAYLIST_ID_RE}
     _TEMPLATE_URL = 'https://www.youtube.com/playlist?list=%s'
-    _VIDEO_RE = r'href="\s*/watch\?v=(?P<id>[0-9A-Za-z_-]{11})&amp;[^"]*?index=(?P<index>\d+)(?:[^>]+>(?P<title>[^<]+))?'
+    _VIDEO_RE = r'href="\s*/watch\?v=(?P<id>[0-9A-Za-z_-]{11})(?:&amp;(?:[^"]*?index=(?P<index>\d+))?(?:[^>]+>(?P<title>[^<]+))?)?'
     IE_NAME = 'youtube:playlist'
     _TESTS = [{
        'url': 'https://www.youtube.com/playlist?list=PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re',
@@ -2455,6 +2452,8 @@
         'info_dict': {
             'title': '29C3: Not my department',
             'id': 'PLwP_SiAcdui0KVebT0mU9Apz359a4ubsC',
+            'uploader': 'Christiaan008',
+            'uploader_id': 'ChRiStIaAn008',
         },
         'playlist_count': 95,
     }, {
@@ -2463,6 +2462,8 @@
         'info_dict': {
             'title': '[OLD]Team Fortress 2 (Class-based LP)',
             'id': 'PLBB231211A4F62143',
+            'uploader': 'Wickydoo',
+            'uploader_id': 'Wickydoo',
         },
         'playlist_mincount': 26,
     }, {
@@ -2471,6 +2472,8 @@
         'info_dict': {
             'title': 'Uploads from Cauchemar',
             'id': 'UUBABnxM4Ar9ten8Mdjj1j0Q',
+            'uploader': 'Cauchemar',
+            'uploader_id': 'Cauchemar89',
         },
         'playlist_mincount': 799,
     }, {
@@ -2488,13 +2491,17 @@
         'info_dict': {
             'title': 'JODA15',
             'id': 'PL6IaIsEjSbf96XFRuNccS_RuEXwNdsoEu',
+            'uploader': 'milan',
+            'uploader_id': 'UCEI1-PVPcYXjB73Hfelbmaw',
         }
     }, {
        'url': 'http://www.youtube.com/embed/_xDOZElKyNU?list=PLsyOSbh5bs16vubvKePAQ1x3PhKavfBIl',
         'playlist_mincount': 485,
         'info_dict': {
-            'title': '2017 華語最新單曲 (2/24更新)',
+            'title': '2018 Chinese New Singles (11/6 updated)',
             'id': 'PLsyOSbh5bs16vubvKePAQ1x3PhKavfBIl',
+            'uploader': 'LBK',
+            'uploader_id': 'sdragonfang',
         }
     }, {
         'note': 'Embedded SWF player',
@@ -2503,13 +2510,16 @@
         'info_dict': {
             'title': 'JODA7',
             'id': 'YN5VISEtHet5D4NEvfTd0zcgFk84NqFZ',
-        }
+        },
+        'skip': 'This playlist does not exist',
     }, {
        'note': 'Buggy playlist: the webpage has a "Load more" button but it doesn\'t have more videos',
        'url': 'https://www.youtube.com/playlist?list=UUXw-G3eDE9trcvY2sBMM_aA',
         'info_dict': {
             'title': 'Uploads from Interstellar Movie',
             'id': 'UUXw-G3eDE9trcvY2sBMM_aA',
+            'uploader': 'Interstellar Movie',
+            'uploader_id': 'InterstellarMovie1',
         },
         'playlist_mincount': 21,
     }, {
@@ -2534,6 +2544,7 @@
         'params': {
             'skip_download': True,
         },
+        'skip': 'This video is not available.',
         'add_ie': [YoutubeIE.ie_key()],
     }, {
        'url': 'https://youtu.be/yeWKywCrFtk?list=PL2qgrgXsNUG5ig9cat4ohreBjYLAPC0J5',
@@ -2545,7 +2556,6 @@
             'uploader_id': 'backuspagemuseum',
            'uploader_url': r're:https?://(?:www\.)?youtube\.com/user/backuspagemuseum',
             'upload_date': '20161008',
-            'license': 'Standard YouTube License',
             'description': 'md5:800c0c78d5eb128500bffd4f0b4f2e8a',
             'categories': ['Nonprofits & Activism'],
             'tags': list,
@@ -2557,6 +2567,16 @@
             'skip_download': True,
         },
     }, {
+        # https://github.com/ytdl-org/youtube-dl/issues/21844
+        'url': 'https://www.youtube.com/playlist?list=PLzH6n4zXuckpfMu_4Ff8E7Z1behQks5ba',
+        'info_dict': {
+            'title': 'Data Analysis with Dr Mike Pound',
+            'id': 'PLzH6n4zXuckpfMu_4Ff8E7Z1behQks5ba',
+            'uploader_id': 'Computerphile',
+            'uploader': 'Computerphile',
+        },
+        'playlist_mincount': 11,
+    }, {
         'url': 'https://youtu.be/uWyaPkt-VOI?list=PL9D9FC436B881BA21',
         'only_matching': True,
     }, {
@@ -2722,6 +2742,8 @@
         'info_dict': {
             'id': 'UUKfVa3S1e4PHvxWcwyMMg8w',
             'title': 'Uploads from lex will',
+            'uploader': 'lex will',
+            'uploader_id': 'UCKfVa3S1e4PHvxWcwyMMg8w',
         }
     }, {
         'note': 'Age restricted channel',
@@ -2731,6 +2753,8 @@
         'info_dict': {
             'id': 'UUs0ifCMCm1icqRbqhUINa0w',
             'title': 'Uploads from Deus Ex',
+            'uploader': 'Deus Ex',
+            'uploader_id': 'DeusExOfficial',
         },
     }, {
         'url': 'https://invidio.us/channel/UC23qupoDRn9YOAVzeoxjOQA',
@@ -2815,6 +2839,8 @@
         'info_dict': {
             'id': 'UUfX55Sx5hEFjoC3cNs6mCUQ',
             'title': 'Uploads from The Linux Foundation',
+            'uploader': 'The Linux Foundation',
+            'uploader_id': 'TheLinuxFoundation',
         }
     }, {
         # Only available via https://www.youtube.com/c/12minuteathlete/videos
@@ -2824,6 +2850,8 @@
         'info_dict': {
             'id': 'UUVjM-zV6_opMDx7WYxnjZiQ',
             'title': 'Uploads from 12 Minute Athlete',
+            'uploader': '12 Minute Athlete',
+            'uploader_id': 'the12minuteathlete',
         }
     }, {
         'url': 'ytuser:phihag',
@@ -2917,7 +2945,7 @@
         'playlist_mincount': 4,
         'info_dict': {
             'id': 'ThirstForScience',
-            'title': 'Thirst for Science',
+            'title': 'ThirstForScience',
         },
     }, {
         # with "Load more" button
@@ -2934,6 +2962,7 @@
             'id': 'UCiU1dHvZObB2iP6xkJ__Icw',
             'title': 'Chem Player',
         },
+        'skip': 'Blocked',
     }]
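
The `extract_player_response` helper factored out in the youtube.py hunks above can be sketched standalone. This is an illustration of the parsing shape, not the extractor's exact method; `video_id` is kept only for signature parity with the diff:

```python
import json

def extract_player_response(raw, video_id=None):
    # 'player_response' arrives as a JSON-encoded string (or may be
    # absent). Parse it tolerantly and only accept a dict result, as the
    # consolidated helper does for all three call sites in the diff.
    if not raw:
        return None
    try:
        parsed = json.loads(raw)
    except ValueError:
        return None
    return parsed if isinstance(parsed, dict) else None

print(extract_player_response('{"videoDetails": {"title": "t"}}'))  # {'videoDetails': {'title': 't'}}
```

Centralizing this parse is what lets the age-gate path (#21943) reuse the same metadata fallbacks as the normal watch-page path.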
 
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/version.py new/youtube-dl/youtube_dl/version.py
--- old/youtube-dl/youtube_dl/version.py        2019-07-15 19:01:39.000000000 +0200
+++ new/youtube-dl/youtube_dl/version.py        2019-08-02 00:37:46.000000000 +0200
@@ -1,3 +1,3 @@
 from __future__ import unicode_literals
 
-__version__ = '2019.07.16'
+__version__ = '2019.08.02'

