Hello community,

here is the log from the commit of package youtube-dl for openSUSE:Factory checked in at 2019-10-22 15:45:02
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/youtube-dl (Old)
 and      /work/SRC/openSUSE:Factory/.youtube-dl.new.2352 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "youtube-dl"

Tue Oct 22 15:45:02 2019 rev:118 rq:741613 version:2019.10.22

Changes:
--------
--- /work/SRC/openSUSE:Factory/youtube-dl/python-youtube-dl.changes     2019-10-17 12:22:43.487106736 +0200
+++ /work/SRC/openSUSE:Factory/.youtube-dl.new.2352/python-youtube-dl.changes   2019-10-22 15:45:04.081690892 +0200
@@ -2 +2,11 @@
-Wed Oct 16 17:42:32 UTC 2019 - Sebastien CHAVAUX <[email protected]>
+Mon Oct 21 18:19:17 UTC 2019 - Jan Engelhardt <[email protected]>
+
+- Update to release 2019.10.22
+  * atresplayer: fix extraction
+  * dumpert: fix extraction
+  * mit: Remove support for video.mit.edu
+  * twitch: update VOD URL matching
+  * facebook: Bypass download rate limits
+
+-------------------------------------------------------------------
+Wed Oct 16 17:37:41 UTC 2019 - Sebastien CHAVAUX <[email protected]>
--- /work/SRC/openSUSE:Factory/youtube-dl/youtube-dl.changes    2019-10-17 12:22:43.711106173 +0200
+++ /work/SRC/openSUSE:Factory/.youtube-dl.new.2352/youtube-dl.changes  2019-10-22 15:45:04.265691102 +0200
@@ -1,0 +2,10 @@
+Mon Oct 21 18:19:17 UTC 2019 - Jan Engelhardt <[email protected]>
+
+- Update to release 2019.10.22
+  * atresplayer: fix extraction
+  * dumpert: fix extraction
+  * mit: Remove support for video.mit.edu
+  * twitch: update VOD URL matching
+  * facebook: Bypass download rate limits
+
+-------------------------------------------------------------------

Old:
----
  youtube-dl-2019.10.16.tar.gz
  youtube-dl-2019.10.16.tar.gz.sig

New:
----
  youtube-dl-2019.10.22.tar.gz
  youtube-dl-2019.10.22.tar.gz.sig

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-youtube-dl.spec ++++++
--- /var/tmp/diff_new_pack.boPDmw/_old  2019-10-22 15:45:05.245692219 +0200
+++ /var/tmp/diff_new_pack.boPDmw/_new  2019-10-22 15:45:05.253692229 +0200
@@ -19,7 +19,7 @@
 %define modname youtube-dl
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 Name:           python-youtube-dl
-Version:        2019.10.16
+Version:        2019.10.22
 Release:        0
 Summary:        A Python module for downloading from video sites for offline watching
 License:        SUSE-Public-Domain AND CC-BY-SA-3.0

++++++ youtube-dl.spec ++++++
--- /var/tmp/diff_new_pack.boPDmw/_old  2019-10-22 15:45:05.293692274 +0200
+++ /var/tmp/diff_new_pack.boPDmw/_new  2019-10-22 15:45:05.301692283 +0200
@@ -17,7 +17,7 @@
 
 
 Name:           youtube-dl
-Version:        2019.10.16
+Version:        2019.10.22
 Release:        0
 Summary:        A tool for downloading from video sites for offline watching
 License:        SUSE-Public-Domain AND CC-BY-SA-3.0

++++++ youtube-dl-2019.10.16.tar.gz -> youtube-dl-2019.10.22.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/ChangeLog new/youtube-dl/ChangeLog
--- old/youtube-dl/ChangeLog    2019-10-15 22:26:43.000000000 +0200
+++ new/youtube-dl/ChangeLog    2019-10-21 19:08:59.000000000 +0200
@@ -1,3 +1,26 @@
+version 2019.10.22
+
+Core
+* [utils] Improve subtitles_filename (#22753)
+
+Extractors
+* [facebook] Bypass download rate limits (#21018)
++ [contv] Add support for contv.com
+- [viewster] Remove extractor
+* [xfileshare] Improve extractor (#17032, #17906, #18237, #18239)
+    * Update the list of domains
+    + Add support for aa-encoded video data
+    * Improve jwplayer format extraction
+    + Add support for Clappr sources
+* [mangomolo] Fix video format extraction and add support for player URLs
+* [audioboom] Improve metadata extraction
+* [twitch] Update VOD URL matching (#22395, #22727)
+- [mit] Remove support for video.mit.edu (#22403)
+- [servingsys] Remove extractor (#22639)
+* [dumpert] Fix extraction (#22428, #22564)
+* [atresplayer] Fix extraction (#16277, #16716)
+
+
 version 2019.10.16
 
 Core
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/docs/supportedsites.md new/youtube-dl/docs/supportedsites.md
--- old/youtube-dl/docs/supportedsites.md       2019-10-15 22:26:46.000000000 +0200
+++ new/youtube-dl/docs/supportedsites.md       2019-10-21 19:09:02.000000000 +0200
@@ -183,6 +183,7 @@
  - **ComedyCentralShortname**
  - **ComedyCentralTV**
 - **CondeNast**: Condé Nast media group: Allure, Architectural Digest, Ars Technica, Bon Appétit, Brides, Condé Nast, Condé Nast Traveler, Details, Epicurious, GQ, Glamour, Golf Digest, SELF, Teen Vogue, The New Yorker, Vanity Fair, Vogue, W Magazine, WIRED
+ - **CONtv**
  - **Corus**
  - **Coub**
  - **Cracked**
@@ -784,7 +785,6 @@
  - **Seeker**
  - **SenateISVP**
  - **SendtoNews**
- - **ServingSys**
  - **Servus**
  - **Sexu**
  - **SeznamZpravy**
@@ -1005,7 +1005,6 @@
  - **Viddler**
  - **Videa**
  - **video.google:search**: Google Video search
- - **video.mit.edu**
  - **VideoDetective**
  - **videofy.me**
  - **videomore**
@@ -1023,7 +1022,6 @@
  - **vier:videos**
  - **ViewLift**
  - **ViewLiftEmbed**
- - **Viewster**
  - **Viidea**
  - **viki**
  - **viki:channel**
@@ -1097,7 +1095,7 @@
  - **WWE**
  - **XBef**
  - **XboxClips**
- - **XFileShare**: XFileShare based sites: DaClips, FileHoot, GorillaVid, MovPod, PowerWatch, Rapidvideo.ws, TheVideoBee, Vidto, Streamin.To, XVIDSTAGE, Vid ABC, VidBom, vidlo, RapidVideo.TV, FastVideo.me
+ - **XFileShare**: XFileShare based sites: ClipWatching, GoUnlimited, GoVid, HolaVid, Streamty, TheVideoBee, Uqload, VidBom, vidlo, VidLocker, VidShare, VUp, XVideoSharing
  - **XHamster**
  - **XHamsterEmbed**
  - **XHamsterUser**
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/test/test_utils.py new/youtube-dl/test/test_utils.py
--- old/youtube-dl/test/test_utils.py   2019-10-15 22:26:25.000000000 +0200
+++ new/youtube-dl/test/test_utils.py   2019-10-21 19:07:39.000000000 +0200
@@ -74,6 +74,7 @@
     str_to_int,
     strip_jsonp,
     strip_or_none,
+    subtitles_filename,
     timeconvert,
     unescapeHTML,
     unified_strdate,
@@ -261,6 +262,11 @@
         self.assertEqual(replace_extension('.abc', 'temp'), '.abc.temp')
         self.assertEqual(replace_extension('.abc.ext', 'temp'), '.abc.temp')
 
+    def test_subtitles_filename(self):
+        self.assertEqual(subtitles_filename('abc.ext', 'en', 'vtt'), 
'abc.en.vtt')
+        self.assertEqual(subtitles_filename('abc.ext', 'en', 'vtt', 'ext'), 
'abc.en.vtt')
+        self.assertEqual(subtitles_filename('abc.unexpected_ext', 'en', 'vtt', 
'ext'), 'abc.unexpected_ext.en.vtt')
+
     def test_remove_start(self):
         self.assertEqual(remove_start(None, 'A - '), None)
         self.assertEqual(remove_start('A - B', 'A - '), 'B')
Binary files old/youtube-dl/youtube-dl and new/youtube-dl/youtube-dl differ
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/YoutubeDL.py new/youtube-dl/youtube_dl/YoutubeDL.py
--- old/youtube-dl/youtube_dl/YoutubeDL.py      2019-10-15 22:26:25.000000000 +0200
+++ new/youtube-dl/youtube_dl/YoutubeDL.py      2019-10-21 19:07:39.000000000 +0200
@@ -1814,7 +1814,7 @@
             ie = self.get_info_extractor(info_dict['extractor_key'])
             for sub_lang, sub_info in subtitles.items():
                 sub_format = sub_info['ext']
-                sub_filename = subtitles_filename(filename, sub_lang, 
sub_format)
+                sub_filename = subtitles_filename(filename, sub_lang, 
sub_format, info_dict.get('ext'))
                 if self.params.get('nooverwrites', False) and 
os.path.exists(encodeFilename(sub_filename)):
                     self.to_screen('[info] Video subtitle %s.%s is already 
present' % (sub_lang, sub_format))
                 else:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/atresplayer.py new/youtube-dl/youtube_dl/extractor/atresplayer.py
--- old/youtube-dl/youtube_dl/extractor/atresplayer.py  2019-10-15 22:26:25.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/atresplayer.py  2019-10-21 19:07:39.000000000 +0200
@@ -1,202 +1,118 @@
+# coding: utf-8
 from __future__ import unicode_literals
 
-import time
-import hmac
-import hashlib
 import re
 
 from .common import InfoExtractor
-from ..compat import compat_str
+from ..compat import compat_HTTPError
 from ..utils import (
     ExtractorError,
-    float_or_none,
     int_or_none,
-    sanitized_Request,
     urlencode_postdata,
-    xpath_text,
 )
 
 
 class AtresPlayerIE(InfoExtractor):
-    _VALID_URL = 
r'https?://(?:www\.)?atresplayer\.com/television/[^/]+/[^/]+/[^/]+/(?P<id>.+?)_\d+\.html'
+    _VALID_URL = 
r'https?://(?:www\.)?atresplayer\.com/[^/]+/[^/]+/[^/]+/[^/]+/(?P<display_id>.+?)_(?P<id>[0-9a-f]{24})'
     _NETRC_MACHINE = 'atresplayer'
     _TESTS = [
         {
-            'url': 
'http://www.atresplayer.com/television/programas/el-club-de-la-comedia/temporada-4/capitulo-10-especial-solidario-nochebuena_2014122100174.html',
-            'md5': 'efd56753cda1bb64df52a3074f62e38a',
+            'url': 
'https://www.atresplayer.com/antena3/series/pequenas-coincidencias/temporada-1/capitulo-7-asuntos-pendientes_5d4aa2c57ed1a88fc715a615/',
             'info_dict': {
-                'id': 'capitulo-10-especial-solidario-nochebuena',
+                'id': '5d4aa2c57ed1a88fc715a615',
                 'ext': 'mp4',
-                'title': 'Especial Solidario de Nochebuena',
-                'description': 'md5:e2d52ff12214fa937107d21064075bf1',
-                'duration': 5527.6,
-                'thumbnail': r're:^https?://.*\.jpg$',
+                'title': 'Capítulo 7: Asuntos pendientes',
+                'description': 'md5:7634cdcb4d50d5381bedf93efb537fbc',
+                'duration': 3413,
+            },
+            'params': {
+                'format': 'bestvideo',
             },
             'skip': 'This video is only available for registered users'
         },
         {
-            'url': 
'http://www.atresplayer.com/television/especial/videoencuentros/temporada-1/capitulo-112-david-bustamante_2014121600375.html',
-            'md5': '6e52cbb513c405e403dbacb7aacf8747',
-            'info_dict': {
-                'id': 'capitulo-112-david-bustamante',
-                'ext': 'flv',
-                'title': 'David Bustamante',
-                'description': 'md5:f33f1c0a05be57f6708d4dd83a3b81c6',
-                'duration': 1439.0,
-                'thumbnail': r're:^https?://.*\.jpg$',
-            },
+            'url': 
'https://www.atresplayer.com/lasexta/programas/el-club-de-la-comedia/temporada-4/capitulo-10-especial-solidario-nochebuena_5ad08edf986b2855ed47adc4/',
+            'only_matching': True,
         },
         {
-            'url': 
'http://www.atresplayer.com/television/series/el-secreto-de-puente-viejo/el-chico-de-los-tres-lunares/capitulo-977-29-12-14_2014122400174.html',
+            'url': 
'https://www.atresplayer.com/antena3/series/el-secreto-de-puente-viejo/el-chico-de-los-tres-lunares/capitulo-977-29-12-14_5ad51046986b2886722ccdea/',
             'only_matching': True,
         },
     ]
-
-    _USER_AGENT = 'Dalvik/1.6.0 (Linux; U; Android 4.3; GT-I9300 Build/JSS15J'
-    _MAGIC = 'QWtMLXs414Yo+c#_+Q#K@NN)'
-    _TIMESTAMP_SHIFT = 30000
-
-    _TIME_API_URL = 'http://servicios.atresplayer.com/api/admin/time.json'
-    _URL_VIDEO_TEMPLATE = 
'https://servicios.atresplayer.com/api/urlVideo/{1}/{0}/{1}|{2}|{3}.json'
-    _PLAYER_URL_TEMPLATE = 
'https://servicios.atresplayer.com/episode/getplayer.json?episodePk=%s'
-    _EPISODE_URL_TEMPLATE = 'http://www.atresplayer.com/episodexml/%s'
-
-    _LOGIN_URL = 'https://servicios.atresplayer.com/j_spring_security_check'
-
-    _ERRORS = {
-        'UNPUBLISHED': 'We\'re sorry, but this video is not yet available.',
-        'DELETED': 'This video has expired and is no longer available for 
online streaming.',
-        'GEOUNPUBLISHED': 'We\'re sorry, but this video is not available in 
your region due to right restrictions.',
-        # 'PREMIUM': 'PREMIUM',
-    }
+    _API_BASE = 'https://api.atresplayer.com/'
 
     def _real_initialize(self):
         self._login()
 
+    def _handle_error(self, e, code):
+        if isinstance(e.cause, compat_HTTPError) and e.cause.code == code:
+            error = self._parse_json(e.cause.read(), None)
+            if error.get('error') == 'required_registered':
+                self.raise_login_required()
+            raise ExtractorError(error['error_description'], expected=True)
+        raise
+
     def _login(self):
         username, password = self._get_login_info()
         if username is None:
             return
 
-        login_form = {
-            'j_username': username,
-            'j_password': password,
-        }
+        self._request_webpage(
+            self._API_BASE + 'login', None, 'Downloading login page')
 
-        request = sanitized_Request(
-            self._LOGIN_URL, urlencode_postdata(login_form))
-        request.add_header('Content-Type', 'application/x-www-form-urlencoded')
-        response = self._download_webpage(
-            request, None, 'Logging in')
-
-        error = self._html_search_regex(
-            r'(?s)<ul[^>]+class="[^"]*\blist_error\b[^"]*">(.+?)</ul>',
-            response, 'error', default=None)
-        if error:
-            raise ExtractorError(
-                'Unable to login: %s' % error, expected=True)
+        try:
+            target_url = self._download_json(
+                'https://account.atresmedia.com/api/login', None,
+                'Logging in', headers={
+                    'Content-Type': 'application/x-www-form-urlencoded'
+                }, data=urlencode_postdata({
+                    'username': username,
+                    'password': password,
+                }))['targetUrl']
+        except ExtractorError as e:
+            self._handle_error(e, 400)
 
-    def _real_extract(self, url):
-        video_id = self._match_id(url)
+        self._request_webpage(target_url, None, 'Following Target URL')
 
-        webpage = self._download_webpage(url, video_id)
+    def _real_extract(self, url):
+        display_id, video_id = re.match(self._VALID_URL, url).groups()
 
-        episode_id = self._search_regex(
-            r'episode="([^"]+)"', webpage, 'episode id')
+        try:
+            episode = self._download_json(
+                self._API_BASE + 'client/v1/player/episode/' + video_id, 
video_id)
+        except ExtractorError as e:
+            self._handle_error(e, 403)
 
-        request = sanitized_Request(
-            self._PLAYER_URL_TEMPLATE % episode_id,
-            headers={'User-Agent': self._USER_AGENT})
-        player = self._download_json(request, episode_id, 'Downloading player 
JSON')
-
-        episode_type = player.get('typeOfEpisode')
-        error_message = self._ERRORS.get(episode_type)
-        if error_message:
-            raise ExtractorError(
-                '%s returned error: %s' % (self.IE_NAME, error_message), 
expected=True)
+        title = episode['titulo']
 
         formats = []
-        video_url = player.get('urlVideo')
-        if video_url:
-            format_info = {
-                'url': video_url,
-                'format_id': 'http',
-            }
-            mobj = 
re.search(r'(?P<bitrate>\d+)K_(?P<width>\d+)x(?P<height>\d+)', video_url)
-            if mobj:
-                format_info.update({
-                    'width': int_or_none(mobj.group('width')),
-                    'height': int_or_none(mobj.group('height')),
-                    'tbr': int_or_none(mobj.group('bitrate')),
-                })
-            formats.append(format_info)
-
-        timestamp = int_or_none(self._download_webpage(
-            self._TIME_API_URL,
-            video_id, 'Downloading timestamp', fatal=False), 1000, time.time())
-        timestamp_shifted = compat_str(timestamp + self._TIMESTAMP_SHIFT)
-        token = hmac.new(
-            self._MAGIC.encode('ascii'),
-            (episode_id + timestamp_shifted).encode('utf-8'), hashlib.md5
-        ).hexdigest()
-
-        request = sanitized_Request(
-            self._URL_VIDEO_TEMPLATE.format('windows', episode_id, 
timestamp_shifted, token),
-            headers={'User-Agent': self._USER_AGENT})
-
-        fmt_json = self._download_json(
-            request, video_id, 'Downloading windows video JSON')
-
-        result = fmt_json.get('resultDes')
-        if result.lower() != 'ok':
-            raise ExtractorError(
-                '%s returned error: %s' % (self.IE_NAME, result), 
expected=True)
-
-        for format_id, video_url in fmt_json['resultObject'].items():
-            if format_id == 'token' or not video_url.startswith('http'):
-                continue
-            if 'geodeswowsmpra3player' in video_url:
-                # f4m_path = video_url.split('smil:', 1)[-1].split('free_', 
1)[0]
-                # f4m_url = 
'http://drg.antena3.com/{0}hds/es/sd.f4m'.format(f4m_path)
-                # this videos are protected by DRM, the f4m downloader doesn't 
support them
+        for source in episode.get('sources', []):
+            src = source.get('src')
+            if not src:
                 continue
-            video_url_hd = video_url.replace('free_es', 'es')
-            formats.extend(self._extract_f4m_formats(
-                video_url_hd[:-9] + '/manifest.f4m', video_id, f4m_id='hds',
-                fatal=False))
-            formats.extend(self._extract_mpd_formats(
-                video_url_hd[:-9] + '/manifest.mpd', video_id, mpd_id='dash',
-                fatal=False))
+            src_type = source.get('type')
+            if src_type == 'application/vnd.apple.mpegurl':
+                formats.extend(self._extract_m3u8_formats(
+                    src, video_id, 'mp4', 'm3u8_native',
+                    m3u8_id='hls', fatal=False))
+            elif src_type == 'application/dash+xml':
+                formats.extend(self._extract_mpd_formats(
+                    src, video_id, mpd_id='dash', fatal=False))
         self._sort_formats(formats)
 
-        path_data = player.get('pathData')
-
-        episode = self._download_xml(
-            self._EPISODE_URL_TEMPLATE % path_data, video_id,
-            'Downloading episode XML')
-
-        duration = float_or_none(xpath_text(
-            episode, './media/asset/info/technical/contentDuration', 
'duration'))
-
-        art = episode.find('./media/asset/info/art')
-        title = xpath_text(art, './name', 'title')
-        description = xpath_text(art, './description', 'description')
-        thumbnail = xpath_text(episode, './media/asset/files/background', 
'thumbnail')
-
-        subtitles = {}
-        subtitle_url = xpath_text(episode, './media/asset/files/subtitle', 
'subtitle')
-        if subtitle_url:
-            subtitles['es'] = [{
-                'ext': 'srt',
-                'url': subtitle_url,
-            }]
+        heartbeat = episode.get('heartbeat') or {}
+        omniture = episode.get('omniture') or {}
+        get_meta = lambda x: heartbeat.get(x) or omniture.get(x)
 
         return {
+            'display_id': display_id,
             'id': video_id,
             'title': title,
-            'description': description,
-            'thumbnail': thumbnail,
-            'duration': duration,
+            'description': episode.get('descripcion'),
+            'thumbnail': episode.get('imgPoster'),
+            'duration': int_or_none(episode.get('duration')),
             'formats': formats,
-            'subtitles': subtitles,
+            'channel': get_meta('channel'),
+            'season': get_meta('season'),
+            'episode_number': int_or_none(get_meta('episodeNumber')),
         }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/audioboom.py new/youtube-dl/youtube_dl/extractor/audioboom.py
--- old/youtube-dl/youtube_dl/extractor/audioboom.py    2019-10-15 22:26:25.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/audioboom.py    2019-10-21 19:07:39.000000000 +0200
@@ -2,22 +2,25 @@
 from __future__ import unicode_literals
 
 from .common import InfoExtractor
-from ..utils import float_or_none
+from ..utils import (
+    clean_html,
+    float_or_none,
+)
 
 
 class AudioBoomIE(InfoExtractor):
     _VALID_URL = 
r'https?://(?:www\.)?audioboom\.com/(?:boos|posts)/(?P<id>[0-9]+)'
     _TESTS = [{
-        'url': 
'https://audioboom.com/boos/4279833-3-09-2016-czaban-hour-3?t=0',
-        'md5': '63a8d73a055c6ed0f1e51921a10a5a76',
+        'url': 'https://audioboom.com/posts/7398103-asim-chaudhry',
+        'md5': '7b00192e593ff227e6a315486979a42d',
         'info_dict': {
-            'id': '4279833',
+            'id': '7398103',
             'ext': 'mp3',
-            'title': '3/09/2016 Czaban Hour 3',
-            'description': 'Guest:   Nate Davis - NFL free agency,   Guest:   
Stan Gans',
-            'duration': 2245.72,
-            'uploader': 'SB Nation A.M.',
-            'uploader_url': 
r're:https?://(?:www\.)?audioboom\.com/channel/steveczabanyahoosportsradio',
+            'title': 'Asim Chaudhry',
+            'description': 'md5:2f3fef17dacc2595b5362e1d7d3602fc',
+            'duration': 4000.99,
+            'uploader': 'Sue Perkins: An hour or so with...',
+            'uploader_url': 
r're:https?://(?:www\.)?audioboom\.com/channel/perkins',
         }
     }, {
         'url': 
'https://audioboom.com/posts/4279833-3-09-2016-czaban-hour-3?t=0',
@@ -32,8 +35,8 @@
         clip = None
 
         clip_store = self._parse_json(
-            self._search_regex(
-                
r'data-new-clip-store=(["\'])(?P<json>{.*?"clipId"\s*:\s*%s.*?})\1' % video_id,
+            self._html_search_regex(
+                r'data-new-clip-store=(["\'])(?P<json>{.+?})\1',
                 webpage, 'clip store', default='{}', group='json'),
             video_id, fatal=False)
         if clip_store:
@@ -47,14 +50,15 @@
 
         audio_url = from_clip('clipURLPriorToLoading') or 
self._og_search_property(
             'audio', webpage, 'audio url')
-        title = from_clip('title') or self._og_search_title(webpage)
-        description = from_clip('description') or 
self._og_search_description(webpage)
+        title = from_clip('title') or self._html_search_meta(
+            ['og:title', 'og:audio:title', 'audio_title'], webpage)
+        description = from_clip('description') or 
clean_html(from_clip('formattedDescription')) or 
self._og_search_description(webpage)
 
         duration = float_or_none(from_clip('duration') or 
self._html_search_meta(
             'weibo:audio:duration', webpage))
 
-        uploader = from_clip('author') or self._og_search_property(
-            'audio:artist', webpage, 'uploader', fatal=False)
+        uploader = from_clip('author') or self._html_search_meta(
+            ['og:audio:artist', 'twitter:audio:artist_name', 'audio_artist'], 
webpage, 'uploader')
         uploader_url = from_clip('author_url') or self._html_search_meta(
             'audioboo:channel', webpage, 'uploader url')
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/contv.py new/youtube-dl/youtube_dl/extractor/contv.py
--- old/youtube-dl/youtube_dl/extractor/contv.py        1970-01-01 01:00:00.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/contv.py        2019-10-21 19:07:39.000000000 +0200
@@ -0,0 +1,118 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+from ..utils import (
+    float_or_none,
+    int_or_none,
+)
+
+
+class CONtvIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?contv\.com/details-movie/(?P<id>[^/]+)'
+    _TESTS = [{
+        'url': 
'https://www.contv.com/details-movie/CEG10022949/days-of-thrills-&-laughter',
+        'info_dict': {
+            'id': 'CEG10022949',
+            'ext': 'mp4',
+            'title': 'Days Of Thrills & Laughter',
+            'description': 'md5:5d6b3d0b1829bb93eb72898c734802eb',
+            'upload_date': '20180703',
+            'timestamp': 1530634789.61,
+        },
+        'params': {
+            # m3u8 download
+            'skip_download': True,
+        },
+    }, {
+        'url': 
'https://www.contv.com/details-movie/CLIP-show_fotld_bts/fight-of-the-living-dead:-behind-the-scenes-bites',
+        'info_dict': {
+            'id': 'CLIP-show_fotld_bts',
+            'title': 'Fight of the Living Dead: Behind the Scenes Bites',
+        },
+        'playlist_mincount': 7,
+    }]
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        details = self._download_json(
+            'http://metax.contv.live.junctiontv.net/metax/2.5/details/' + 
video_id,
+            video_id, query={'device': 'web'})
+
+        if details.get('type') == 'episodic':
+            seasons = self._download_json(
+                
'http://metax.contv.live.junctiontv.net/metax/2.5/seriesfeed/json/' + video_id,
+                video_id)
+            entries = []
+            for season in seasons:
+                for episode in season.get('episodes', []):
+                    episode_id = episode.get('id')
+                    if not episode_id:
+                        continue
+                    entries.append(self.url_result(
+                        'https://www.contv.com/details-movie/' + episode_id,
+                        CONtvIE.ie_key(), episode_id))
+            return self.playlist_result(entries, video_id, 
details.get('title'))
+
+        m_details = details['details']
+        title = details['title']
+
+        formats = []
+
+        media_hls_url = m_details.get('media_hls_url')
+        if media_hls_url:
+            formats.extend(self._extract_m3u8_formats(
+                media_hls_url, video_id, 'mp4',
+                m3u8_id='hls', fatal=False))
+
+        media_mp4_url = m_details.get('media_mp4_url')
+        if media_mp4_url:
+            formats.append({
+                'format_id': 'http',
+                'url': media_mp4_url,
+            })
+
+        self._sort_formats(formats)
+
+        subtitles = {}
+        captions = m_details.get('captions') or {}
+        for caption_url in captions.values():
+            subtitles.setdefault('en', []).append({
+                'url': caption_url
+            })
+
+        thumbnails = []
+        for image in m_details.get('images', []):
+            image_url = image.get('url')
+            if not image_url:
+                continue
+            thumbnails.append({
+                'url': image_url,
+                'width': int_or_none(image.get('width')),
+                'height': int_or_none(image.get('height')),
+            })
+
+        description = None
+        for p in ('large_', 'medium_', 'small_', ''):
+            d = m_details.get(p + 'description')
+            if d:
+                description = d
+                break
+
+        return {
+            'id': video_id,
+            'title': title,
+            'formats': formats,
+            'thumbnails': thumbnails,
+            'description': description,
+            'timestamp': float_or_none(details.get('metax_added_on'), 1000),
+            'subtitles': subtitles,
+            'duration': float_or_none(m_details.get('duration'), 1000),
+            'view_count': int_or_none(details.get('num_watched')),
+            'like_count': int_or_none(details.get('num_fav')),
+            'categories': details.get('category'),
+            'tags': details.get('tags'),
+            'season_number': int_or_none(details.get('season')),
+            'episode_number': int_or_none(details.get('episode')),
+            'release_year': int_or_none(details.get('pub_year')),
+        }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/dumpert.py new/youtube-dl/youtube_dl/extractor/dumpert.py
--- old/youtube-dl/youtube_dl/extractor/dumpert.py      2019-10-15 22:26:26.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/dumpert.py      2019-10-21 19:07:39.000000000 +0200
@@ -1,20 +1,17 @@
 # coding: utf-8
 from __future__ import unicode_literals
 
-import re
-
 from .common import InfoExtractor
-from ..compat import compat_b64decode
 from ..utils import (
+    int_or_none,
     qualities,
-    sanitized_Request,
 )
 
 
 class DumpertIE(InfoExtractor):
-    _VALID_URL = 
r'(?P<protocol>https?)://(?:www\.)?dumpert\.nl/(?:mediabase|embed)/(?P<id>[0-9]+/[0-9a-zA-Z]+)'
+    _VALID_URL = 
r'(?P<protocol>https?)://(?:(?:www|legacy)\.)?dumpert\.nl/(?:mediabase|embed|item)/(?P<id>[0-9]+[/_][0-9a-zA-Z]+)'
     _TESTS = [{
-        'url': 'http://www.dumpert.nl/mediabase/6646981/951bc60f/',
+        'url': 'https://www.dumpert.nl/item/6646981_951bc60f',
         'md5': '1b9318d7d5054e7dcb9dc7654f21d643',
         'info_dict': {
             'id': '6646981/951bc60f',
@@ -24,46 +21,60 @@
             'thumbnail': r're:^https?://.*\.jpg$',
         }
     }, {
-        'url': 'http://www.dumpert.nl/embed/6675421/dc440fe7/',
+        'url': 'https://www.dumpert.nl/embed/6675421_dc440fe7',
+        'only_matching': True,
+    }, {
+        'url': 'http://legacy.dumpert.nl/mediabase/6646981/951bc60f',
+        'only_matching': True,
+    }, {
+        'url': 'http://legacy.dumpert.nl/embed/6675421/dc440fe7',
         'only_matching': True,
     }]
 
     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('id')
-        protocol = mobj.group('protocol')
-
-        url = '%s://www.dumpert.nl/mediabase/%s' % (protocol, video_id)
-        req = sanitized_Request(url)
-        req.add_header('Cookie', 'nsfw=1; cpc=10')
-        webpage = self._download_webpage(req, video_id)
-
-        files_base64 = self._search_regex(
-            r'data-files="([^"]+)"', webpage, 'data files')
-
-        files = self._parse_json(
-            compat_b64decode(files_base64).decode('utf-8'),
-            video_id)
+        video_id = self._match_id(url).replace('_', '/')
+        item = self._download_json(
+            'http://api-live.dumpert.nl/mobile_api/json/info/' + 
video_id.replace('/', '_'),
+            video_id)['items'][0]
+        title = item['title']
+        media = next(m for m in item['media'] if m.get('mediatype') == 'VIDEO')
 
         quality = qualities(['flv', 'mobile', 'tablet', '720p'])
-
-        formats = [{
-            'url': video_url,
-            'format_id': format_id,
-            'quality': quality(format_id),
-        } for format_id, video_url in files.items() if format_id != 'still']
+        formats = []
+        for variant in media.get('variants', []):
+            uri = variant.get('uri')
+            if not uri:
+                continue
+            version = variant.get('version')
+            formats.append({
+                'url': uri,
+                'format_id': version,
+                'quality': quality(version),
+            })
         self._sort_formats(formats)
 
-        title = self._html_search_meta(
-            'title', webpage) or self._og_search_title(webpage)
-        description = self._html_search_meta(
-            'description', webpage) or self._og_search_description(webpage)
-        thumbnail = files.get('still') or self._og_search_thumbnail(webpage)
+        thumbnails = []
+        stills = item.get('stills') or {}
+        for t in ('thumb', 'still'):
+            for s in ('', '-medium', '-large'):
+                still_id = t + s
+                still_url = stills.get(still_id)
+                if not still_url:
+                    continue
+                thumbnails.append({
+                    'id': still_id,
+                    'url': still_url,
+                })
+
+        stats = item.get('stats') or {}
 
         return {
             'id': video_id,
             'title': title,
-            'description': description,
-            'thumbnail': thumbnail,
-            'formats': formats
+            'description': item.get('description'),
+            'thumbnails': thumbnails,
+            'formats': formats,
+            'duration': int_or_none(media.get('duration')),
+            'like_count': int_or_none(stats.get('kudos_total')),
+            'view_count': int_or_none(stats.get('views_total')),
         }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/extractors.py new/youtube-dl/youtube_dl/extractor/extractors.py
--- old/youtube-dl/youtube_dl/extractor/extractors.py   2019-10-15 22:26:36.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/extractors.py   2019-10-21 19:07:39.000000000 +0200
@@ -231,6 +231,7 @@
     RtmpIE,
 )
 from .condenast import CondeNastIE
+from .contv import CONtvIE
 from .corus import CorusIE
 from .cracked import CrackedIE
 from .crackle import CrackleIE
@@ -644,7 +645,7 @@
 from .ministrygrid import MinistryGridIE
 from .minoto import MinotoIE
 from .miomio import MioMioIE
-from .mit import TechTVMITIE, MITIE, OCWMITIE
+from .mit import TechTVMITIE, OCWMITIE
 from .mitele import MiTeleIE
 from .mixcloud import (
     MixcloudIE,
@@ -995,7 +996,6 @@
 from .seeker import SeekerIE
 from .senateisvp import SenateISVPIE
 from .sendtonews import SendtoNewsIE
-from .servingsys import ServingSysIE
 from .servus import ServusIE
 from .sevenplus import SevenPlusIE
 from .sexu import SexuIE
@@ -1323,7 +1323,6 @@
     ViewLiftIE,
     ViewLiftEmbedIE,
 )
-from .viewster import ViewsterIE
 from .viidea import ViideaIE
 from .vimeo import (
     VimeoIE,
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/facebook.py new/youtube-dl/youtube_dl/extractor/facebook.py
--- old/youtube-dl/youtube_dl/extractor/facebook.py     2019-10-15 22:26:26.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/facebook.py     2019-10-21 19:07:39.000000000 +0200
@@ -405,6 +405,11 @@
         if not formats:
             raise ExtractorError('Cannot find video formats')
 
+        # Downloads with browser's User-Agent are rate limited. Working around
+        # with non-browser User-Agent.
+        for f in formats:
+            f.setdefault('http_headers', {})['User-Agent'] = 
'facebookexternalhit/1.1'
+
         self._sort_formats(formats)
 
         video_title = self._html_search_regex(
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/generic.py new/youtube-dl/youtube_dl/extractor/generic.py
--- old/youtube-dl/youtube_dl/extractor/generic.py      2019-10-15 22:26:36.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/generic.py      2019-10-21 19:07:39.000000000 +0200
@@ -2962,10 +2962,14 @@
 
         # Look for Mangomolo embeds
         mobj = re.search(
-            
r'''(?x)<iframe[^>]+src=(["\'])(?P<url>(?:https?:)?//(?:www\.)?admin\.mangomolo\.com/analytics/index\.php/customers/embed/
+            r'''(?x)<iframe[^>]+src=(["\'])(?P<url>(?:https?:)?//
+                (?:
+                    admin\.mangomolo\.com/analytics/index\.php/customers/embed|
+                    player\.mangomolo\.com/v1
+                )/
                 (?:
                     video\?.*?\bid=(?P<video_id>\d+)|
-                    
index\?.*?\bchannelid=(?P<channel_id>(?:[A-Za-z0-9+/=]|%2B|%2F|%3D)+)
+                    
(?:index|live)\?.*?\bchannelid=(?P<channel_id>(?:[A-Za-z0-9+/=]|%2B|%2F|%3D)+)
                 ).+?)\1''', webpage)
         if mobj is not None:
             info = {
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/mangomolo.py new/youtube-dl/youtube_dl/extractor/mangomolo.py
--- old/youtube-dl/youtube_dl/extractor/mangomolo.py    2019-10-15 22:26:27.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/mangomolo.py    2019-10-21 19:07:39.000000000 +0200
@@ -10,18 +10,21 @@
 
 
 class MangomoloBaseIE(InfoExtractor):
+    _BASE_REGEX = 
r'https?://(?:admin\.mangomolo\.com/analytics/index\.php/customers/embed/|player\.mangomolo\.com/v1/)'
+
     def _get_real_id(self, page_id):
         return page_id
 
     def _real_extract(self, url):
         page_id = self._get_real_id(self._match_id(url))
-        webpage = self._download_webpage(url, page_id)
+        webpage = self._download_webpage(
+            'https://player.mangomolo.com/v1/%s?%s' % (self._TYPE, 
url.split('?')[1]), page_id)
         hidden_inputs = self._hidden_inputs(webpage)
         m3u8_entry_protocol = 'm3u8' if self._IS_LIVE else 'm3u8_native'
 
         format_url = self._html_search_regex(
             [
-                r'file\s*:\s*"(https?://[^"]+?/playlist\.m3u8)',
+                r'(?:file|src)\s*:\s*"(https?://[^"]+?/playlist\.m3u8)',
                 r'<a[^>]+href="(rtsp://[^"]+)"'
             ], webpage, 'format url')
         formats = self._extract_wowza_formats(
@@ -39,14 +42,16 @@
 
 
 class MangomoloVideoIE(MangomoloBaseIE):
-    IE_NAME = 'mangomolo:video'
-    _VALID_URL = 
r'https?://admin\.mangomolo\.com/analytics/index\.php/customers/embed/video\?.*?\bid=(?P<id>\d+)'
+    _TYPE = 'video'
+    IE_NAME = 'mangomolo:' + _TYPE
+    _VALID_URL = MangomoloBaseIE._BASE_REGEX + r'video\?.*?\bid=(?P<id>\d+)'
     _IS_LIVE = False
 
 
 class MangomoloLiveIE(MangomoloBaseIE):
-    IE_NAME = 'mangomolo:live'
-    _VALID_URL = 
r'https?://admin\.mangomolo\.com/analytics/index\.php/customers/embed/index\?.*?\bchannelid=(?P<id>(?:[A-Za-z0-9+/=]|%2B|%2F|%3D)+)'
+    _TYPE = 'live'
+    IE_NAME = 'mangomolo:' + _TYPE
+    _VALID_URL = MangomoloBaseIE._BASE_REGEX + 
r'(live|index)\?.*?\bchannelid=(?P<id>(?:[A-Za-z0-9+/=]|%2B|%2F|%3D)+)'
     _IS_LIVE = True
 
     def _get_real_id(self, page_id):
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/mit.py new/youtube-dl/youtube_dl/extractor/mit.py
--- old/youtube-dl/youtube_dl/extractor/mit.py  2019-10-15 22:26:27.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/mit.py  2019-10-21 19:07:39.000000000 +0200
@@ -65,30 +65,6 @@
         }
 
 
-class MITIE(TechTVMITIE):
-    IE_NAME = 'video.mit.edu'
-    _VALID_URL = r'https?://video\.mit\.edu/watch/(?P<title>[^/]+)'
-
-    _TEST = {
-        'url': 
'http://video.mit.edu/watch/the-government-is-profiling-you-13222/',
-        'md5': '7db01d5ccc1895fc5010e9c9e13648da',
-        'info_dict': {
-            'id': '21783',
-            'ext': 'mp4',
-            'title': 'The Government is Profiling You',
-            'description': 'md5:ad5795fe1e1623b73620dbfd47df9afd',
-        },
-    }
-
-    def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        page_title = mobj.group('title')
-        webpage = self._download_webpage(url, page_title)
-        embed_url = self._search_regex(
-            r'<iframe .*?src="(.+?)"', webpage, 'embed url')
-        return self.url_result(embed_url)
-
-
 class OCWMITIE(InfoExtractor):
     IE_NAME = 'ocw.mit.edu'
     _VALID_URL = r'^https?://ocw\.mit\.edu/courses/(?P<topic>[a-z0-9\-]+)'
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/servingsys.py new/youtube-dl/youtube_dl/extractor/servingsys.py
--- old/youtube-dl/youtube_dl/extractor/servingsys.py   2019-10-15 22:26:28.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/servingsys.py   1970-01-01 01:00:00.000000000 +0100
@@ -1,72 +0,0 @@
-from __future__ import unicode_literals
-
-from .common import InfoExtractor
-from ..utils import (
-    int_or_none,
-)
-
-
-class ServingSysIE(InfoExtractor):
-    _VALID_URL = 
r'https?://(?:[^.]+\.)?serving-sys\.com/BurstingPipe/adServer\.bs\?.*?&pli=(?P<id>[0-9]+)'
-
-    _TEST = {
-        'url': 
'http://bs.serving-sys.com/BurstingPipe/adServer.bs?cn=is&c=23&pl=VAST&pli=5349193&PluID=0&pos=7135&ord=[timestamp]&cim=1?',
-        'info_dict': {
-            'id': '5349193',
-            'title': 'AdAPPter_Hyundai_demo',
-        },
-        'playlist': [{
-            'md5': 'baed851342df6846eb8677a60a011a0f',
-            'info_dict': {
-                'id': '29955898',
-                'ext': 'flv',
-                'title': 'AdAPPter_Hyundai_demo (1)',
-                'duration': 74,
-                'tbr': 1378,
-                'width': 640,
-                'height': 400,
-            },
-        }, {
-            'md5': '979b4da2655c4bc2d81aeb915a8c5014',
-            'info_dict': {
-                'id': '29907998',
-                'ext': 'flv',
-                'title': 'AdAPPter_Hyundai_demo (2)',
-                'duration': 34,
-                'width': 854,
-                'height': 480,
-                'tbr': 516,
-            },
-        }],
-        'params': {
-            'playlistend': 2,
-        },
-        '_skip': 'Blocked in the US [sic]',
-    }
-
-    def _real_extract(self, url):
-        pl_id = self._match_id(url)
-        vast_doc = self._download_xml(url, pl_id)
-
-        title = vast_doc.find('.//AdTitle').text
-        media = vast_doc.find('.//MediaFile').text
-        info_url = self._search_regex(r'&adData=([^&]+)&', media, 'info URL')
-
-        doc = self._download_xml(info_url, pl_id, 'Downloading video info')
-        entries = [{
-            '_type': 'video',
-            'id': a.attrib['id'],
-            'title': '%s (%s)' % (title, a.attrib['assetID']),
-            'url': a.attrib['URL'],
-            'duration': int_or_none(a.attrib.get('length')),
-            'tbr': int_or_none(a.attrib.get('bitrate')),
-            'height': int_or_none(a.attrib.get('height')),
-            'width': int_or_none(a.attrib.get('width')),
-        } for a in doc.findall('.//AdditionalAssets/asset')]
-
-        return {
-            '_type': 'playlist',
-            'id': pl_id,
-            'title': title,
-            'entries': entries,
-        }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/twitch.py new/youtube-dl/youtube_dl/extractor/twitch.py
--- old/youtube-dl/youtube_dl/extractor/twitch.py       2019-10-15 22:26:28.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/twitch.py       2019-10-21 19:07:39.000000000 +0200
@@ -248,7 +248,7 @@
                     https?://
                         (?:
                             
(?:(?:www|go|m)\.)?twitch\.tv/(?:[^/]+/v(?:ideo)?|videos)/|
-                            player\.twitch\.tv/\?.*?\bvideo=v
+                            player\.twitch\.tv/\?.*?\bvideo=v?
                         )
                         (?P<id>\d+)
                     '''
@@ -306,6 +306,9 @@
     }, {
         'url': 'https://www.twitch.tv/northernlion/video/291940395',
         'only_matching': True,
+    }, {
+        'url': 'https://player.twitch.tv/?video=480452374',
+        'only_matching': True,
     }]
 
     def _real_extract(self, url):
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/viewster.py new/youtube-dl/youtube_dl/extractor/viewster.py
--- old/youtube-dl/youtube_dl/extractor/viewster.py     2019-10-15 22:26:28.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/viewster.py     1970-01-01 01:00:00.000000000 +0100
@@ -1,217 +0,0 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
-import re
-
-from .common import InfoExtractor
-from ..compat import (
-    compat_HTTPError,
-    compat_urllib_parse_unquote,
-)
-from ..utils import (
-    determine_ext,
-    ExtractorError,
-    int_or_none,
-    parse_iso8601,
-    sanitized_Request,
-    HEADRequest,
-    url_basename,
-)
-
-
-class ViewsterIE(InfoExtractor):
-    _VALID_URL = 
r'https?://(?:www\.)?viewster\.com/(?:serie|movie)/(?P<id>\d+-\d+-\d+)'
-    _TESTS = [{
-        # movie, Type=Movie
-        'url': 
'http://www.viewster.com/movie/1140-11855-000/the-listening-project/',
-        'md5': 'e642d1b27fcf3a4ffa79f194f5adde36',
-        'info_dict': {
-            'id': '1140-11855-000',
-            'ext': 'mp4',
-            'title': 'The listening Project',
-            'description': 'md5:bac720244afd1a8ea279864e67baa071',
-            'timestamp': 1214870400,
-            'upload_date': '20080701',
-            'duration': 4680,
-        },
-    }, {
-        # series episode, Type=Episode
-        'url': 
'http://www.viewster.com/serie/1284-19427-001/the-world-and-a-wall/',
-        'md5': '9243079a8531809efe1b089db102c069',
-        'info_dict': {
-            'id': '1284-19427-001',
-            'ext': 'mp4',
-            'title': 'The World and a Wall',
-            'description': 'md5:24814cf74d3453fdf5bfef9716d073e3',
-            'timestamp': 1428192000,
-            'upload_date': '20150405',
-            'duration': 1500,
-        },
-    }, {
-        # serie, Type=Serie
-        'url': 'http://www.viewster.com/serie/1303-19426-000/',
-        'info_dict': {
-            'id': '1303-19426-000',
-            'title': 'Is It Wrong to Try to Pick up Girls in a Dungeon?',
-            'description': 'md5:eeda9bef25b0d524b3a29a97804c2f11',
-        },
-        'playlist_count': 13,
-    }, {
-        # unfinished serie, no Type
-        'url': 
'http://www.viewster.com/serie/1284-19427-000/baby-steps-season-2/',
-        'info_dict': {
-            'id': '1284-19427-000',
-            'title': 'Baby Steps—Season 2',
-            'description': 'md5:e7097a8fc97151e25f085c9eb7a1cdb1',
-        },
-        'playlist_mincount': 16,
-    }, {
-        # geo restricted series
-        'url': 'https://www.viewster.com/serie/1280-18794-002/',
-        'only_matching': True,
-    }, {
-        # geo restricted video
-        'url': 
'https://www.viewster.com/serie/1280-18794-002/what-is-extraterritoriality-lawo/',
-        'only_matching': True,
-    }]
-
-    _ACCEPT_HEADER = 'application/json, text/javascript, */*; q=0.01'
-
-    def _download_json(self, url, video_id, note='Downloading JSON metadata', 
fatal=True, query={}):
-        request = sanitized_Request(url)
-        request.add_header('Accept', self._ACCEPT_HEADER)
-        request.add_header('Auth-token', self._AUTH_TOKEN)
-        return super(ViewsterIE, self)._download_json(request, video_id, note, 
fatal=fatal, query=query)
-
-    def _real_extract(self, url):
-        video_id = self._match_id(url)
-        # Get 'api_token' cookie
-        self._request_webpage(
-            HEADRequest('http://www.viewster.com/'),
-            video_id, headers=self.geo_verification_headers())
-        cookies = self._get_cookies('http://www.viewster.com/')
-        self._AUTH_TOKEN = 
compat_urllib_parse_unquote(cookies['api_token'].value)
-
-        info = self._download_json(
-            'https://public-api.viewster.com/search/%s' % video_id,
-            video_id, 'Downloading entry JSON')
-
-        entry_id = info.get('Id') or info['id']
-
-        # unfinished serie has no Type
-        if info.get('Type') in ('Serie', None):
-            try:
-                episodes = self._download_json(
-                    'https://public-api.viewster.com/series/%s/episodes' % 
entry_id,
-                    video_id, 'Downloading series JSON')
-            except ExtractorError as e:
-                if isinstance(e.cause, compat_HTTPError) and e.cause.code == 
404:
-                    self.raise_geo_restricted()
-                else:
-                    raise
-            entries = [
-                self.url_result(
-                    'http://www.viewster.com/movie/%s' % episode['OriginId'], 
'Viewster')
-                for episode in episodes]
-            title = (info.get('Title') or info['Synopsis']['Title']).strip()
-            description = info.get('Synopsis', {}).get('Detailed')
-            return self.playlist_result(entries, video_id, title, description)
-
-        formats = []
-        for language_set in info.get('LanguageSets', []):
-            manifest_url = None
-            m3u8_formats = []
-            audio = language_set.get('Audio') or ''
-            subtitle = language_set.get('Subtitle') or ''
-            base_format_id = audio
-            if subtitle:
-                base_format_id += '-%s' % subtitle
-
-            def concat(suffix, sep='-'):
-                return (base_format_id + '%s%s' % (sep, suffix)) if 
base_format_id else suffix
-
-            medias = self._download_json(
-                'https://public-api.viewster.com/movies/%s/videos' % entry_id,
-                video_id, fatal=False, query={
-                    'mediaTypes': ['application/f4m+xml', 
'application/x-mpegURL', 'video/mp4'],
-                    'language': audio,
-                    'subtitle': subtitle,
-                })
-            if not medias:
-                continue
-            for media in medias:
-                video_url = media.get('Uri')
-                if not video_url:
-                    continue
-                ext = determine_ext(video_url)
-                if ext == 'f4m':
-                    manifest_url = video_url
-                    video_url += '&' if '?' in video_url else '?'
-                    video_url += 'hdcore=3.2.0&plugin=flowplayer-3.2.0.1'
-                    formats.extend(self._extract_f4m_formats(
-                        video_url, video_id, f4m_id=concat('hds')))
-                elif ext == 'm3u8':
-                    manifest_url = video_url
-                    m3u8_formats = self._extract_m3u8_formats(
-                        video_url, video_id, 'mp4', m3u8_id=concat('hls'),
-                        fatal=False)  # m3u8 sometimes fail
-                    if m3u8_formats:
-                        formats.extend(m3u8_formats)
-                else:
-                    qualities_basename = self._search_regex(
-                        r'/([^/]+)\.csmil/',
-                        manifest_url, 'qualities basename', default=None)
-                    if not qualities_basename:
-                        continue
-                    QUALITIES_RE = r'((,\d+k)+,?)'
-                    qualities = self._search_regex(
-                        QUALITIES_RE, qualities_basename,
-                        'qualities', default=None)
-                    if not qualities:
-                        continue
-                    qualities = list(map(lambda q: int(q[:-1]), 
qualities.strip(',').split(',')))
-                    qualities.sort()
-                    http_template = re.sub(QUALITIES_RE, r'%dk', 
qualities_basename)
-                    http_url_basename = url_basename(video_url)
-                    if m3u8_formats:
-                        self._sort_formats(m3u8_formats)
-                        m3u8_formats = list(filter(
-                            lambda f: f.get('vcodec') != 'none', m3u8_formats))
-                    if len(qualities) == len(m3u8_formats):
-                        for q, m3u8_format in zip(qualities, m3u8_formats):
-                            f = m3u8_format.copy()
-                            f.update({
-                                'url': video_url.replace(http_url_basename, 
http_template % q),
-                                'format_id': f['format_id'].replace('hls', 
'http'),
-                                'protocol': 'http',
-                            })
-                            formats.append(f)
-                    else:
-                        for q in qualities:
-                            formats.append({
-                                'url': video_url.replace(http_url_basename, 
http_template % q),
-                                'ext': 'mp4',
-                                'format_id': 'http-%d' % q,
-                                'tbr': q,
-                            })
-
-        if not formats and not info.get('VODSettings'):
-            self.raise_geo_restricted()
-
-        self._sort_formats(formats)
-
-        synopsis = info.get('Synopsis') or {}
-        # Prefer title outside synopsis since it's less messy
-        title = (info.get('Title') or synopsis['Title']).strip()
-        description = synopsis.get('Detailed') or (info.get('Synopsis') or 
{}).get('Short')
-        duration = int_or_none(info.get('Duration'))
-        timestamp = parse_iso8601(info.get('ReleaseDate'))
-
-        return {
-            'id': video_id,
-            'title': title,
-            'description': description,
-            'timestamp': timestamp,
-            'duration': duration,
-            'formats': formats,
-        }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/xfileshare.py new/youtube-dl/youtube_dl/extractor/xfileshare.py
--- old/youtube-dl/youtube_dl/extractor/xfileshare.py   2019-10-15 22:26:28.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/xfileshare.py   2019-10-21 19:07:39.000000000 +0200
@@ -4,37 +4,64 @@
 import re
 
 from .common import InfoExtractor
+from ..compat import compat_chr
 from ..utils import (
     decode_packed_codes,
     determine_ext,
     ExtractorError,
     int_or_none,
-    NO_DEFAULT,
+    js_to_json,
     urlencode_postdata,
 )
 
 
+# based on openload_decode from 2bfeee69b976fe049761dd3012e30b637ee05a58
+def aa_decode(aa_code):
+    symbol_table = [
+        ('7', '((゚ー゚) + (o^_^o))'),
+        ('6', '((o^_^o) +(o^_^o))'),
+        ('5', '((゚ー゚) + (゚Θ゚))'),
+        ('2', '((o^_^o) - (゚Θ゚))'),
+        ('4', '(゚ー゚)'),
+        ('3', '(o^_^o)'),
+        ('1', '(゚Θ゚)'),
+        ('0', '(c^_^o)'),
+    ]
+    delim = '(゚Д゚)[゚ε゚]+'
+    ret = ''
+    for aa_char in aa_code.split(delim):
+        for val, pat in symbol_table:
+            aa_char = aa_char.replace(pat, val)
+        aa_char = aa_char.replace('+ ', '')
+        m = re.match(r'^\d+', aa_char)
+        if m:
+            ret += compat_chr(int(m.group(0), 8))
+        else:
+            m = re.match(r'^u([\da-f]+)', aa_char)
+            if m:
+                ret += compat_chr(int(m.group(1), 16))
+    return ret
+
+
 class XFileShareIE(InfoExtractor):
     _SITES = (
-        (r'daclips\.(?:in|com)', 'DaClips'),
-        (r'filehoot\.com', 'FileHoot'),
-        (r'gorillavid\.(?:in|com)', 'GorillaVid'),
-        (r'movpod\.in', 'MovPod'),
-        (r'powerwatch\.pw', 'PowerWatch'),
-        (r'rapidvideo\.ws', 'Rapidvideo.ws'),
+        (r'clipwatching\.com', 'ClipWatching'),
+        (r'gounlimited\.to', 'GoUnlimited'),
+        (r'govid\.me', 'GoVid'),
+        (r'holavid\.com', 'HolaVid'),
+        (r'streamty\.com', 'Streamty'),
         (r'thevideobee\.to', 'TheVideoBee'),
-        (r'vidto\.(?:me|se)', 'Vidto'),
-        (r'streamin\.to', 'Streamin.To'),
-        (r'xvidstage\.com', 'XVIDSTAGE'),
-        (r'vidabc\.com', 'Vid ABC'),
+        (r'uqload\.com', 'Uqload'),
         (r'vidbom\.com', 'VidBom'),
         (r'vidlo\.us', 'vidlo'),
-        (r'rapidvideo\.(?:cool|org)', 'RapidVideo.TV'),
-        (r'fastvideo\.me', 'FastVideo.me'),
+        (r'vidlocker\.xyz', 'VidLocker'),
+        (r'vidshare\.tv', 'VidShare'),
+        (r'vup\.to', 'VUp'),
+        (r'xvideosharing\.com', 'XVideoSharing'),
     )
 
     IE_DESC = 'XFileShare based sites: %s' % ', '.join(list(zip(*_SITES))[1])
-    _VALID_URL = 
(r'https?://(?P<host>(?:www\.)?(?:%s))/(?:embed-)?(?P<id>[0-9a-zA-Z]+)'
+    _VALID_URL = 
(r'https?://(?:www\.)?(?P<host>%s)/(?:embed-)?(?P<id>[0-9a-zA-Z]+)'
                   % '|'.join(site for site in list(zip(*_SITES))[0]))
 
     _FILE_NOT_FOUND_REGEXES = (
@@ -43,82 +70,14 @@
     )
 
     _TESTS = [{
-        'url': 'http://gorillavid.in/06y9juieqpmi',
-        'md5': '5ae4a3580620380619678ee4875893ba',
-        'info_dict': {
-            'id': '06y9juieqpmi',
-            'ext': 'mp4',
-            'title': 'Rebecca Black My Moment Official Music Video 
Reaction-6GK87Rc8bzQ',
-            'thumbnail': r're:http://.*\.jpg',
-        },
-    }, {
-        'url': 'http://gorillavid.in/embed-z08zf8le23c6-960x480.html',
-        'only_matching': True,
-    }, {
-        'url': 'http://daclips.in/3rso4kdn6f9m',
-        'md5': '1ad8fd39bb976eeb66004d3a4895f106',
-        'info_dict': {
-            'id': '3rso4kdn6f9m',
-            'ext': 'mp4',
-            'title': 'Micro Pig piglets ready on 16th July 2009-bG0PdrCdxUc',
-            'thumbnail': r're:http://.*\.jpg',
-        }
-    }, {
-        'url': 'http://movpod.in/0wguyyxi1yca',
-        'only_matching': True,
-    }, {
-        'url': 'http://filehoot.com/3ivfabn7573c.html',
+        'url': 'http://xvideosharing.com/fq65f94nd2ve',
+        'md5': '4181f63957e8fe90ac836fa58dc3c8a6',
         'info_dict': {
-            'id': '3ivfabn7573c',
+            'id': 'fq65f94nd2ve',
             'ext': 'mp4',
-            'title': 'youtube-dl test video \'äBaW_jenozKc.mp4.mp4',
+            'title': 'sample',
             'thumbnail': r're:http://.*\.jpg',
         },
-        'skip': 'Video removed',
-    }, {
-        'url': 'http://vidto.me/ku5glz52nqe1.html',
-        'info_dict': {
-            'id': 'ku5glz52nqe1',
-            'ext': 'mp4',
-            'title': 'test'
-        }
-    }, {
-        'url': 'http://powerwatch.pw/duecjibvicbu',
-        'info_dict': {
-            'id': 'duecjibvicbu',
-            'ext': 'mp4',
-            'title': 'Big Buck Bunny trailer',
-        },
-    }, {
-        'url': 'http://xvidstage.com/e0qcnl03co6z',
-        'info_dict': {
-            'id': 'e0qcnl03co6z',
-            'ext': 'mp4',
-            'title': 'Chucky Prank 2015.mp4',
-        },
-    }, {
-        # removed by administrator
-        'url': 'http://xvidstage.com/amfy7atlkx25',
-        'only_matching': True,
-    }, {
-        'url': 'http://vidabc.com/i8ybqscrphfv',
-        'info_dict': {
-            'id': 'i8ybqscrphfv',
-            'ext': 'mp4',
-            'title': 're:Beauty and the Beast 2017',
-        },
-        'params': {
-            'skip_download': True,
-        },
-    }, {
-        'url': 'http://www.rapidvideo.cool/b667kprndr8w',
-        'only_matching': True,
-    }, {
-        'url': 'http://www.fastvideo.me/k8604r8nk8sn/FAST_FURIOUS_8_-_Trailer_italiano_ufficiale.mp4.html',
-        'only_matching': True,
-    }, {
-        'url': 'http://vidto.se/1tx1pf6t12cg.html',
-        'only_matching': True,
     }]
 
     @staticmethod
@@ -131,10 +90,9 @@
                 webpage)]
 
     def _real_extract(self, url):
-        mobj = re.match(self._VALID_URL, url)
-        video_id = mobj.group('id')
+        host, video_id = re.match(self._VALID_URL, url).groups()
 
-        url = 'http://%s/%s' % (mobj.group('host'), video_id)
+        url = 'https://%s/' % host + ('embed-%s.html' % video_id if host in ('govid.me', 'vidlo.us') else video_id)
         webpage = self._download_webpage(url, video_id)
 
         if any(re.search(p, webpage) for p in self._FILE_NOT_FOUND_REGEXES):
@@ -142,7 +100,7 @@
 
         fields = self._hidden_inputs(webpage)
 
-        if fields['op'] == 'download1':
+        if fields.get('op') == 'download1':
             countdown = int_or_none(self._search_regex(
                 r'<span id="countdown_str">(?:[Ww]ait)?\s*<span 
id="cxc">(\d+)</span>\s*(?:seconds?)?</span>',
                 webpage, 'countdown', default=None))
@@ -160,13 +118,37 @@
             (r'style="z-index: [0-9]+;">([^<]+)</span>',
              r'<td nowrap>([^<]+)</td>',
              r'h4-fine[^>]*>([^<]+)<',
-             r'>Watch (.+) ',
+             r'>Watch (.+)[ <]',
              r'<h2 class="video-page-head">([^<]+)</h2>',
-             r'<h2 style="[^"]*color:#403f3d[^"]*"[^>]*>([^<]+)<'),  # streamin.to
+             r'<h2 style="[^"]*color:#403f3d[^"]*"[^>]*>([^<]+)<',  # streamin.to
+             r'title\s*:\s*"([^"]+)"'),  # govid.me
             webpage, 'title', default=None) or self._og_search_title(
             webpage, default=None) or video_id).strip()
 
-        def extract_formats(default=NO_DEFAULT):
+        for regex, func in (
+                (r'(eval\(function\(p,a,c,k,e,d\){.+)', decode_packed_codes),
+                (r'(゚.+)', aa_decode)):
+            obf_code = self._search_regex(regex, webpage, 'obfuscated code', default=None)
+            if obf_code:
+                webpage = webpage.replace(obf_code, func(obf_code))
+
+        formats = []
+
+        jwplayer_data = self._search_regex(
+            [
+                r'jwplayer\("[^"]+"\)\.load\(\[({.+?})\]\);',
+                r'jwplayer\("[^"]+"\)\.setup\(({.+?})\);',
+            ], webpage,
+            'jwplayer data', default=None)
+        if jwplayer_data:
+            jwplayer_data = self._parse_json(
+                jwplayer_data.replace(r"\'", "'"), video_id, js_to_json)
+            if jwplayer_data:
+                formats = self._parse_jwplayer_data(
+                    jwplayer_data, video_id, False,
+                    m3u8_id='hls', mpd_id='dash')['formats']
+
+        if not formats:
             urls = []
             for regex in (
                     r'(?:file|src)\s*:\s*(["\'])(?P<url>http(?:(?!\1).)+\.(?:m3u8|mp4|flv)(?:(?!\1).)*)\1',
@@ -177,6 +159,12 @@
                     video_url = mobj.group('url')
                     if video_url not in urls:
                         urls.append(video_url)
+
+            sources = self._search_regex(
+                r'sources\s*:\s*(\[(?!{)[^\]]+\])', webpage, 'sources', default=None)
+            if sources:
+                urls.extend(self._parse_json(sources, video_id))
+
             formats = []
             for video_url in urls:
                 if determine_ext(video_url) == 'm3u8':
@@ -189,21 +177,13 @@
                         'url': video_url,
                         'format_id': 'sd',
                     })
-            if not formats and default is not NO_DEFAULT:
-                return default
-            self._sort_formats(formats)
-            return formats
-
-        formats = extract_formats(default=None)
-
-        if not formats:
-            webpage = decode_packed_codes(self._search_regex(
-                r"(}\('(.+)',(\d+),(\d+),'[^']*\b(?:file|embed)\b[^']*'\.split\('\|'\))",
-                webpage, 'packed code'))
-            formats = extract_formats()
+        self._sort_formats(formats)
 
         thumbnail = self._search_regex(
-            r'image\s*:\s*["\'](http[^"\']+)["\'],', webpage, 'thumbnail', default=None)
+            [
+                r'<video[^>]+poster="([^"]+)"',
+                r'(?:image|poster)\s*:\s*["\'](http[^"\']+)["\'],',
+            ], webpage, 'thumbnail', default=None)
 
         return {
             'id': video_id,
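
The rewritten extractor above drops the old extract_formats fallback, which un-packed the p,a,c,k,e,d JavaScript only after the plain regexes came up empty. The new code first decodes any packed or AAEncode blob it finds, substitutes the result back into the page text, and only then runs the jwplayer/sources regexes. A minimal standalone sketch of that step follows; the deobfuscate() wrapper is illustrative and not part of the patch, but decode_packed_codes already lives in youtube_dl.utils and aa_decode is the helper added above.

import re

from youtube_dl.utils import decode_packed_codes
from youtube_dl.extractor.xfileshare import aa_decode  # added by this patch

def deobfuscate(webpage):
    # Replace obfuscated JavaScript with its decoded form so that the usual
    # file/src/jwplayer regexes can see the real stream URLs.
    for regex, func in (
            (r'(eval\(function\(p,a,c,k,e,d\){.+)', decode_packed_codes),
            (r'(゚.+)', aa_decode)):
        m = re.search(regex, webpage)
        if m:
            webpage = webpage.replace(m.group(1), func(m.group(1)))
    return webpage
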
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/postprocessor/ffmpeg.py new/youtube-dl/youtube_dl/postprocessor/ffmpeg.py
--- old/youtube-dl/youtube_dl/postprocessor/ffmpeg.py   2019-10-15 22:26:29.000000000 +0200
+++ new/youtube-dl/youtube_dl/postprocessor/ffmpeg.py   2019-10-21 19:07:39.000000000 +0200
@@ -393,7 +393,7 @@
             sub_ext = sub_info['ext']
             if ext != 'webm' or ext == 'webm' and sub_ext == 'vtt':
                 sub_langs.append(lang)
-                sub_filenames.append(subtitles_filename(filename, lang, sub_ext))
+                sub_filenames.append(subtitles_filename(filename, lang, sub_ext, ext))
             else:
                 if not webm_vtt_warn and ext == 'webm' and sub_ext != 'vtt':
                     webm_vtt_warn = True
@@ -606,9 +606,9 @@
                 self._downloader.to_screen(
                     '[ffmpeg] Subtitle file for %s is already in the requested format' % new_ext)
                 continue
-            old_file = subtitles_filename(filename, lang, ext)
+            old_file = subtitles_filename(filename, lang, ext, info.get('ext'))
             sub_filenames.append(old_file)
-            new_file = subtitles_filename(filename, lang, new_ext)
+            new_file = subtitles_filename(filename, lang, new_ext, info.get('ext'))
 
             if ext in ('dfxp', 'ttml', 'tt'):
                 self._downloader.report_warning(
@@ -616,7 +616,7 @@
                     'which results in style information loss')
 
                 dfxp_file = old_file
-                srt_file = subtitles_filename(filename, lang, 'srt')
+                srt_file = subtitles_filename(filename, lang, 'srt', info.get('ext'))
 
                 with open(dfxp_file, 'rb') as f:
                     srt_data = dfxp2srt(f.read())
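
The ffmpeg.py hunks above all pass the real media extension through to subtitles_filename, so the language/format suffix is attached to the proper base name. A rough, self-contained illustration of the embed case follows; the filename and subtitle dict are made up, and the four-argument call only exists after this patch:

from youtube_dl.utils import subtitles_filename

filename = 'clip.mkv'            # downloaded media file (hypothetical)
ext = 'mkv'                      # extension reported in the info dict
subtitles = {'en': {'ext': 'vtt'}, 'de': {'ext': 'srt'}}

sub_filenames = []
for lang, sub_info in subtitles.items():
    sub_ext = sub_info['ext']
    # WebM only takes WebVTT; other containers accept any subtitle format
    if ext != 'webm' or sub_ext == 'vtt':
        sub_filenames.append(subtitles_filename(filename, lang, sub_ext, ext))

print(sub_filenames)             # ['clip.en.vtt', 'clip.de.srt']
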
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/utils.py new/youtube-dl/youtube_dl/utils.py
--- old/youtube-dl/youtube_dl/utils.py  2019-10-15 22:26:29.000000000 +0200
+++ new/youtube-dl/youtube_dl/utils.py  2019-10-21 19:07:39.000000000 +0200
@@ -2906,8 +2906,8 @@
         return default_ext
 
 
-def subtitles_filename(filename, sub_lang, sub_format):
-    return filename.rsplit('.', 1)[0] + '.' + sub_lang + '.' + sub_format
+def subtitles_filename(filename, sub_lang, sub_format, expected_real_ext=None):
+    return replace_extension(filename, sub_lang + '.' + sub_format, expected_real_ext)
 
 
 def date_from_str(date_str):
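
The utils.py change is what makes that extra argument useful: the old implementation always cut the filename at its last dot, while replace_extension only strips the trailing suffix when it matches the expected media extension. Two made-up filenames show the difference:

from youtube_dl.utils import subtitles_filename

# Suffix matches the real extension: same result as before.
subtitles_filename('talk.mp4', 'en', 'vtt', 'mp4')
# -> 'talk.en.vtt'

# Last dot is not the media extension (a name ending in '.0'): the old
# code returned 'release v2.en.vtt', silently dropping part of the name;
# the new code keeps the name intact.
subtitles_filename('release v2.0', 'en', 'vtt', 'mp4')
# -> 'release v2.0.en.vtt'
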
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/version.py new/youtube-dl/youtube_dl/version.py
--- old/youtube-dl/youtube_dl/version.py        2019-10-15 22:26:43.000000000 +0200
+++ new/youtube-dl/youtube_dl/version.py        2019-10-21 19:08:59.000000000 +0200
@@ -1,3 +1,3 @@
 from __future__ import unicode_literals
 
-__version__ = '2019.10.16'
+__version__ = '2019.10.22'
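
The version bump is the usual last hunk of a release diff. Once the updated package is installed, a quick check from a Python prompt should report the new string:

from youtube_dl.version import __version__
print(__version__)   # expected: '2019.10.22'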

