Hello community,

here is the log from the commit of package youtube-dl for openSUSE:Factory checked in at 2019-01-15 09:17:39
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/youtube-dl (Old)
 and      /work/SRC/openSUSE:Factory/.youtube-dl.new.28833 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "youtube-dl"

Tue Jan 15 09:17:39 2019 rev:92 rq:665362 version:2019.01.10

Changes:
--------
--- /work/SRC/openSUSE:Factory/youtube-dl/python-youtube-dl.changes     2019-01-08 12:31:43.000060527 +0100
+++ /work/SRC/openSUSE:Factory/.youtube-dl.new.28833/python-youtube-dl.changes  2019-01-15 09:18:26.650119823 +0100
@@ -1,0 +2,12 @@
+Thu Jan 10 21:50:10 UTC 2019 - Sebastien CHAVAUX <[email protected]>
+
+- Update to new upstream release 2019.01.10
+  * Embed subtitles with non-standard language codes
+  * Add language codes replaced in 1989 revision of ISO 639
+    to ISO639Utils
+  * youtube: Extract live HLS URL from player response
+  * Add support for outsidetv.com, National Geographic,
+    playplus.tv, gaia.com, hungama.com
+  * Use JW Platform Delivery API V2 and add support for more URLs
+
+-------------------------------------------------------------------
youtube-dl.changes: same change

Old:
----
  youtube-dl-2019.01.02.tar.gz
  youtube-dl-2019.01.02.tar.gz.sig

New:
----
  youtube-dl-2019.01.10.tar.gz
  youtube-dl-2019.01.10.tar.gz.sig

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-youtube-dl.spec ++++++
--- /var/tmp/diff_new_pack.KmQWNm/_old  2019-01-15 09:18:29.626117045 +0100
+++ /var/tmp/diff_new_pack.KmQWNm/_new  2019-01-15 09:18:29.630117041 +0100
@@ -19,7 +19,7 @@
 %define modname youtube-dl
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 Name:           python-youtube-dl
-Version:        2019.01.02
+Version:        2019.01.10
 Release:        0
 Summary:        A python module for downloading from video sites for offline watching
 License:        SUSE-Public-Domain AND CC-BY-SA-3.0

++++++ youtube-dl.spec ++++++
--- /var/tmp/diff_new_pack.KmQWNm/_old  2019-01-15 09:18:29.650117023 +0100
+++ /var/tmp/diff_new_pack.KmQWNm/_new  2019-01-15 09:18:29.654117019 +0100
@@ -17,7 +17,7 @@
 
 
 Name:           youtube-dl
-Version:        2019.01.02
+Version:        2019.01.10
 Release:        0
 Summary:        A tool for downloading from video sites for offline watching
 License:        SUSE-Public-Domain AND CC-BY-SA-3.0

++++++ youtube-dl-2019.01.02.tar.gz -> youtube-dl-2019.01.10.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/ChangeLog new/youtube-dl/ChangeLog
--- old/youtube-dl/ChangeLog    2019-01-02 17:52:51.000000000 +0100
+++ new/youtube-dl/ChangeLog    2019-01-10 17:26:46.000000000 +0100
@@ -1,3 +1,29 @@
+version 2019.01.10
+
+Core
+* [extractor/common] Use episode name as title in _json_ld
++ [extractor/common] Add support for movies in _json_ld
+* [postprocessor/ffmpeg] Embed subtitles with non-standard language codes
+  (#18765)
++ [utils] Add language codes replaced in 1989 revision of ISO 639
+  to ISO639Utils (#18765)
+
+Extractors
+* [youtube] Extract live HLS URL from player response (#18799)
++ [outsidetv] Add support for outsidetv.com (#18774)
+* [jwplatform] Use JW Platform Delivery API V2 and add support for more URLs
++ [fox] Add support for National Geographic (#17985, #15333, #14698)
++ [playplustv] Add support for playplus.tv (#18789)
+* [globo] Set GLBID cookie manually (#17346)
++ [gaia] Add support for gaia.com (#14605)
+* [youporn] Fix title and description extraction (#18748)
++ [hungama] Add support for hungama.com (#17402, #18771)
+* [dtube] Fix extraction (#18741)
+* [tvnow] Fix and rework extractors and prepare for a switch to the new API
+  (#17245, #18499)
+* [carambatv:page] Fix extraction (#18739)
+
+
 version 2019.01.02
 
 Extractors
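The "language codes replaced in 1989 revision of ISO 639" entry above refers to the three two-letter codes that the 1989 revision deprecated in favour of new spellings. A minimal sketch of such a mapping (the dict and function names here are illustrative, not the actual ISO639Utils API):

```python
# Two-letter codes retired in the 1989 revision of ISO 639 and their
# replacements; youtube-dl's ISO639Utils now accepts the old spellings too.
DEPRECATED_ISO639_1 = {
    'iw': 'he',  # Hebrew
    'in': 'id',  # Indonesian
    'ji': 'yi',  # Yiddish
}

def normalize_lang(code):
    """Map a deprecated two-letter code to its current equivalent."""
    return DEPRECATED_ISO639_1.get(code, code)
```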
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/docs/supportedsites.md new/youtube-dl/docs/supportedsites.md
--- old/youtube-dl/docs/supportedsites.md       2019-01-02 17:52:54.000000000 +0100
+++ new/youtube-dl/docs/supportedsites.md       2019-01-10 17:26:53.000000000 +0100
@@ -320,6 +320,7 @@
  - **Fusion**
  - **Fux**
  - **FXNetworks**
+ - **Gaia**
  - **GameInformer**
  - **GameOne**
  - **gameone:playlist**
@@ -370,6 +371,8 @@
  - **HRTiPlaylist**
  - **Huajiao**: 花椒直播
  - **HuffPost**: Huffington Post
+ - **Hungama**
+ - **HungamaSong**
  - **Hypem**
  - **Iconosquare**
  - **ign.com**
@@ -540,8 +543,6 @@
  - **MyviEmbed**
  - **MyVisionTV**
  - **n-tv.de**
- - **natgeo**
- - **natgeo:episodeguide**
  - **natgeo:video**
  - **Naver**
  - **NBA**
@@ -642,6 +643,7 @@
  - **orf:oe1**: Radio Österreich 1
  - **orf:tvthek**: ORF TVthek
  - **OsnatelTV**
+ - **OutsideTV**
  - **PacktPub**
  - **PacktPubCourse**
  - **PandaTV**: 熊猫TV
@@ -666,6 +668,7 @@
  - **Pinkbike**
  - **Pladform**
  - **play.fm**
+ - **PlayPlusTV**
  - **PlaysTV**
  - **Playtvak**: Playtvak.cz, iDNES.cz and Lidovky.cz
  - **Playvid**
@@ -934,7 +937,9 @@
  - **TVNet**
  - **TVNoe**
  - **TVNow**
- - **TVNowList**
+ - **TVNowAnnual**
+ - **TVNowNew**
+ - **TVNowSeason**
  - **TVNowShow**
  - **tvp**: Telewizja Polska
  - **tvp:embed**: Telewizja Polska
Binary files old/youtube-dl/youtube-dl and new/youtube-dl/youtube-dl differ
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/carambatv.py new/youtube-dl/youtube_dl/extractor/carambatv.py
--- old/youtube-dl/youtube_dl/extractor/carambatv.py    2019-01-02 17:52:02.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/carambatv.py    2019-01-04 16:33:13.000000000 +0100
@@ -82,6 +82,12 @@
         webpage = self._download_webpage(url, video_id)
 
         videomore_url = VideomoreIE._extract_url(webpage)
+        if not videomore_url:
+            videomore_id = self._search_regex(
+                r'getVMCode\s*\(\s*["\']?(\d+)', webpage, 'videomore id',
+                default=None)
+            if videomore_id:
+                videomore_url = 'videomore:%s' % videomore_id
         if videomore_url:
             title = self._og_search_title(webpage)
             return {
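The carambatv fix above adds a fallback that pulls a numeric Videomore id out of an inline `getVMCode(...)` call when no embed URL is found. The regex from the diff, in isolation (the sample page snippet is made up):

```python
import re

# Regex copied from the fallback added in the diff above.
VM_CODE_RE = r'getVMCode\s*\(\s*["\']?(\d+)'

def extract_videomore_id(webpage):
    # Returns the numeric id from a getVMCode("...") call, or None.
    m = re.search(VM_CODE_RE, webpage)
    return m.group(1) if m else None
```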
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/common.py new/youtube-dl/youtube_dl/extractor/common.py
--- old/youtube-dl/youtube_dl/extractor/common.py       2019-01-02 17:52:02.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/common.py       2019-01-04 16:33:13.000000000 +0100
@@ -1239,17 +1239,27 @@
                 if expected_type is not None and expected_type != item_type:
                     return info
                 if item_type in ('TVEpisode', 'Episode'):
+                    episode_name = unescapeHTML(e.get('name'))
                     info.update({
-                        'episode': unescapeHTML(e.get('name')),
+                        'episode': episode_name,
                         'episode_number': int_or_none(e.get('episodeNumber')),
                         'description': unescapeHTML(e.get('description')),
                     })
+                    if not info.get('title') and episode_name:
+                        info['title'] = episode_name
                     part_of_season = e.get('partOfSeason')
                     if isinstance(part_of_season, dict) and part_of_season.get('@type') in ('TVSeason', 'Season', 'CreativeWorkSeason'):
                         info['season_number'] = int_or_none(part_of_season.get('seasonNumber'))
                     part_of_series = e.get('partOfSeries') or e.get('partOfTVSeries')
                     if isinstance(part_of_series, dict) and part_of_series.get('@type') in ('TVSeries', 'Series', 'CreativeWorkSeries'):
                         info['series'] = unescapeHTML(part_of_series.get('name'))
+                elif item_type == 'Movie':
+                    info.update({
+                        'title': unescapeHTML(e.get('name')),
+                        'description': unescapeHTML(e.get('description')),
+                        'duration': parse_duration(e.get('duration')),
+                        'timestamp': unified_timestamp(e.get('dateCreated')),
+                    })
                 elif item_type in ('Article', 'NewsArticle'):
                     info.update({
                         'timestamp': parse_iso8601(e.get('datePublished')),
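The `_json_ld` changes above do two things: fall back to the episode name when no title was extracted, and map schema.org `Movie` objects to basic fields. A standalone, simplified sketch of those two branches (using stdlib `html.unescape` in place of youtube-dl's helpers):

```python
import html

def json_ld_info(e, info=None):
    """Simplified sketch of the new _json_ld branches for episodes and movies."""
    info = dict(info or {})
    item_type = e.get('@type')
    if item_type in ('TVEpisode', 'Episode'):
        episode_name = html.unescape(e.get('name') or '') or None
        info['episode'] = episode_name
        # New behaviour: use the episode name as title when none is set yet.
        if not info.get('title') and episode_name:
            info['title'] = episode_name
    elif item_type == 'Movie':
        info['title'] = html.unescape(e.get('name') or '') or None
        info['description'] = html.unescape(e.get('description') or '') or None
    return info
```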
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/dtube.py new/youtube-dl/youtube_dl/extractor/dtube.py
--- old/youtube-dl/youtube_dl/extractor/dtube.py        2019-01-02 17:52:03.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/dtube.py        2019-01-04 16:33:13.000000000 +0100
@@ -15,16 +15,16 @@
 class DTubeIE(InfoExtractor):
     _VALID_URL = r'https?://(?:www\.)?d\.tube/(?:#!/)?v/(?P<uploader_id>[0-9a-z.-]+)/(?P<id>[0-9a-z]{8})'
     _TEST = {
-        'url': 'https://d.tube/#!/v/benswann/zqd630em',
-        'md5': 'a03eaa186618ffa7a3145945543a251e',
+        'url': 'https://d.tube/#!/v/broncnutz/x380jtr1',
+        'md5': '9f29088fa08d699a7565ee983f56a06e',
         'info_dict': {
-            'id': 'zqd630em',
+            'id': 'x380jtr1',
             'ext': 'mp4',
-            'title': 'Reality Check: FDA\'s Disinformation Campaign on Kratom',
-            'description': 'md5:700d164e066b87f9eac057949e4227c2',
-            'uploader_id': 'benswann',
-            'upload_date': '20180222',
-            'timestamp': 1519328958,
+            'title': 'Lefty 3-Rings is Back Baby!! NCAA Picks',
+            'description': 'md5:60be222088183be3a42f196f34235776',
+            'uploader_id': 'broncnutz',
+            'upload_date': '20190107',
+            'timestamp': 1546854054,
         },
         'params': {
             'format': '480p',
@@ -48,7 +48,7 @@
         def canonical_url(h):
             if not h:
                 return None
-            return 'https://ipfs.io/ipfs/' + h
+            return 'https://video.dtube.top/ipfs/' + h
 
         formats = []
         for q in ('240', '480', '720', '1080', ''):
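The dtube fix above swaps the dead `ipfs.io` gateway for `video.dtube.top`. The helper it changes reduces to:

```python
def canonical_url(h):
    # Same shape as the extractor's helper: build the new gateway URL,
    # returning None for an empty or missing IPFS hash.
    if not h:
        return None
    return 'https://video.dtube.top/ipfs/' + h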
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/extractors.py new/youtube-dl/youtube_dl/extractor/extractors.py
--- old/youtube-dl/youtube_dl/extractor/extractors.py   2019-01-02 17:52:03.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/extractors.py   2019-01-04 16:33:13.000000000 +0100
@@ -411,6 +411,7 @@
 from .funnyordie import FunnyOrDieIE
 from .fusion import FusionIE
 from .fxnetworks import FXNetworksIE
+from .gaia import GaiaIE
 from .gameinformer import GameInformerIE
 from .gameone import (
     GameOneIE,
@@ -469,6 +470,10 @@
 )
 from .huajiao import HuajiaoIE
 from .huffpost import HuffPostIE
+from .hungama import (
+    HungamaIE,
+    HungamaSongIE,
+)
 from .hypem import HypemIE
 from .iconosquare import IconosquareIE
 from .ign import (
@@ -682,11 +687,7 @@
     MyviEmbedIE,
 )
 from .myvidster import MyVidsterIE
-from .nationalgeographic import (
-    NationalGeographicVideoIE,
-    NationalGeographicIE,
-    NationalGeographicEpisodeGuideIE,
-)
+from .nationalgeographic import NationalGeographicVideoIE
 from .naver import NaverIE
 from .nba import NBAIE
 from .nbc import (
@@ -828,6 +829,7 @@
     ORFOE1IE,
     ORFIPTVIE,
 )
+from .outsidetv import OutsideTVIE
 from .packtpub import (
     PacktPubIE,
     PacktPubCourseIE,
@@ -856,6 +858,7 @@
 from .pinkbike import PinkbikeIE
 from .pladform import PladformIE
 from .playfm import PlayFMIE
+from .playplustv import PlayPlusTVIE
 from .plays import PlaysTVIE
 from .playtvak import PlaytvakIE
 from .playvid import PlayvidIE
@@ -1193,7 +1196,9 @@
 from .tvnoe import TVNoeIE
 from .tvnow import (
     TVNowIE,
-    TVNowListIE,
+    TVNowNewIE,
+    TVNowSeasonIE,
+    TVNowAnnualIE,
     TVNowShowIE,
 )
 from .tvp import (
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/fox.py new/youtube-dl/youtube_dl/extractor/fox.py
--- old/youtube-dl/youtube_dl/extractor/fox.py  2019-01-02 17:52:03.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/fox.py  2019-01-04 16:33:13.000000000 +0100
@@ -1,11 +1,11 @@
 # coding: utf-8
 from __future__ import unicode_literals
 
+# import json
+# import uuid
+
 from .adobepass import AdobePassIE
-from .uplynk import UplynkPreplayIE
-from ..compat import compat_str
 from ..utils import (
-    HEADRequest,
     int_or_none,
     parse_age_limit,
     parse_duration,
@@ -16,7 +16,7 @@
 
 
 class FOXIE(AdobePassIE):
-    _VALID_URL = r'https?://(?:www\.)?fox\.com/watch/(?P<id>[\da-fA-F]+)'
+    _VALID_URL = r'https?://(?:www\.)?(?:fox\.com|nationalgeographic\.com/tv)/watch/(?P<id>[\da-fA-F]+)'
     _TESTS = [{
         # clip
         'url': 'https://www.fox.com/watch/4b765a60490325103ea69888fb2bd4e8/',
@@ -43,41 +43,47 @@
         # episode, geo-restricted, tv provided required
         'url': 'https://www.fox.com/watch/30056b295fb57f7452aeeb4920bc3024/',
         'only_matching': True,
+    }, {
+        'url': 'https://www.nationalgeographic.com/tv/watch/f690e05ebbe23ab79747becd0cc223d1/',
+        'only_matching': True,
     }]
+    # _access_token = None
+
+    # def _call_api(self, path, video_id, data=None):
+    #     headers = {
+    #         'X-Api-Key': '238bb0a0c2aba67922c48709ce0c06fd',
+    #     }
+    #     if self._access_token:
+    #         headers['Authorization'] = 'Bearer ' + self._access_token
+    #     return self._download_json(
+    #         'https://api2.fox.com/v2.0/' + path, video_id, data=data, headers=headers)
+
+    # def _real_initialize(self):
+    #     self._access_token = self._call_api(
+    #         'login', None, json.dumps({
+    #             'deviceId': compat_str(uuid.uuid4()),
+    #         }).encode())['accessToken']
 
     def _real_extract(self, url):
         video_id = self._match_id(url)
 
         video = self._download_json(
-            'https://api.fox.com/fbc-content/v1_4/video/%s' % video_id,
+            'https://api.fox.com/fbc-content/v1_5/video/%s' % video_id,
             video_id, headers={
                 'apikey': 'abdcbed02c124d393b39e818a4312055',
                 'Content-Type': 'application/json',
                 'Referer': url,
             })
+        # video = self._call_api('vodplayer/' + video_id, video_id)
 
         title = video['name']
         release_url = video['videoRelease']['url']
-
-        description = video.get('description')
-        duration = int_or_none(video.get('durationInSeconds')) or int_or_none(
-            video.get('duration')) or parse_duration(video.get('duration'))
-        timestamp = unified_timestamp(video.get('datePublished'))
-        rating = video.get('contentRating')
-        age_limit = parse_age_limit(rating)
+        # release_url = video['url']
 
         data = try_get(
             video, lambda x: x['trackingData']['properties'], dict) or {}
 
-        creator = data.get('brand') or data.get('network') or video.get('network')
-
-        series = video.get('seriesName') or data.get(
-            'seriesName') or data.get('show')
-        season_number = int_or_none(video.get('seasonNumber'))
-        episode = video.get('name')
-        episode_number = int_or_none(video.get('episodeNumber'))
-        release_year = int_or_none(video.get('releaseYear'))
-
+        rating = video.get('contentRating')
         if data.get('authRequired'):
             resource = self._get_mvpd_resource(
                 'fbc-fox', title, video.get('guid'), rating)
@@ -86,6 +92,18 @@
                     'auth': self._extract_mvpd_auth(
                         url, video_id, 'fbc-fox', resource)
                 })
+        m3u8_url = self._download_json(release_url, video_id)['playURL']
+        formats = self._extract_m3u8_formats(
+            m3u8_url, video_id, 'mp4',
+            entry_protocol='m3u8_native', m3u8_id='hls')
+        self._sort_formats(formats)
+
+        duration = int_or_none(video.get('durationInSeconds')) or int_or_none(
+            video.get('duration')) or parse_duration(video.get('duration'))
+        timestamp = unified_timestamp(video.get('datePublished'))
+        creator = data.get('brand') or data.get('network') or video.get('network')
+        series = video.get('seriesName') or data.get(
+            'seriesName') or data.get('show')
 
         subtitles = {}
         for doc_rel in video.get('documentReleases', []):
@@ -98,36 +116,19 @@
             }]
             break
 
-        info = {
+        return {
             'id': video_id,
             'title': title,
-            'description': description,
+            'formats': formats,
+            'description': video.get('description'),
             'duration': duration,
             'timestamp': timestamp,
-            'age_limit': age_limit,
+            'age_limit': parse_age_limit(rating),
             'creator': creator,
             'series': series,
-            'season_number': season_number,
-            'episode': episode,
-            'episode_number': episode_number,
-            'release_year': release_year,
+            'season_number': int_or_none(video.get('seasonNumber')),
+            'episode': video.get('name'),
+            'episode_number': int_or_none(video.get('episodeNumber')),
+            'release_year': int_or_none(video.get('releaseYear')),
             'subtitles': subtitles,
         }
-
-        urlh = self._request_webpage(HEADRequest(release_url), video_id)
-        video_url = compat_str(urlh.geturl())
-
-        if UplynkPreplayIE.suitable(video_url):
-            info.update({
-                '_type': 'url_transparent',
-                'url': video_url,
-                'ie_key': UplynkPreplayIE.ie_key(),
-            })
-        else:
-            m3u8_url = self._download_json(release_url, video_id)['playURL']
-            formats = self._extract_m3u8_formats(
-                m3u8_url, video_id, 'mp4',
-                entry_protocol='m3u8_native', m3u8_id='hls')
-            self._sort_formats(formats)
-            info['formats'] = formats
-        return info
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/gaia.py new/youtube-dl/youtube_dl/extractor/gaia.py
--- old/youtube-dl/youtube_dl/extractor/gaia.py 1970-01-01 01:00:00.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/gaia.py 2019-01-04 16:33:13.000000000 +0100
@@ -0,0 +1,98 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+from ..compat import compat_str
+from ..utils import (
+    int_or_none,
+    str_or_none,
+    strip_or_none,
+    try_get,
+)
+
+
+class GaiaIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?gaia\.com/video/(?P<id>[^/?]+).*?\bfullplayer=(?P<type>feature|preview)'
+    _TESTS = [{
+        'url': 'https://www.gaia.com/video/connecting-universal-consciousness?fullplayer=feature',
+        'info_dict': {
+            'id': '89356',
+            'ext': 'mp4',
+            'title': 'Connecting with Universal Consciousness',
+            'description': 'md5:844e209ad31b7d31345f5ed689e3df6f',
+            'upload_date': '20151116',
+            'timestamp': 1447707266,
+            'duration': 936,
+        },
+        'params': {
+            # m3u8 download
+            'skip_download': True,
+        },
+    }, {
+        'url': 'https://www.gaia.com/video/connecting-universal-consciousness?fullplayer=preview',
+        'info_dict': {
+            'id': '89351',
+            'ext': 'mp4',
+            'title': 'Connecting with Universal Consciousness',
+            'description': 'md5:844e209ad31b7d31345f5ed689e3df6f',
+            'upload_date': '20151116',
+            'timestamp': 1447707266,
+            'duration': 53,
+        },
+        'params': {
+            # m3u8 download
+            'skip_download': True,
+        },
+    }]
+
+    def _real_extract(self, url):
+        display_id, vtype = re.search(self._VALID_URL, url).groups()
+        node_id = self._download_json(
+            'https://brooklyn.gaia.com/pathinfo', display_id, query={
+                'path': 'video/' + display_id,
+            })['id']
+        node = self._download_json(
+            'https://brooklyn.gaia.com/node/%d' % node_id, node_id)
+        vdata = node[vtype]
+        media_id = compat_str(vdata['nid'])
+        title = node['title']
+
+        media = self._download_json(
+            'https://brooklyn.gaia.com/media/' + media_id, media_id)
+        formats = self._extract_m3u8_formats(
+            media['mediaUrls']['bcHLS'], media_id, 'mp4')
+        self._sort_formats(formats)
+
+        subtitles = {}
+        text_tracks = media.get('textTracks', {})
+        for key in ('captions', 'subtitles'):
+            for lang, sub_url in text_tracks.get(key, {}).items():
+                subtitles.setdefault(lang, []).append({
+                    'url': sub_url,
+                })
+
+        fivestar = node.get('fivestar', {})
+        fields = node.get('fields', {})
+
+        def get_field_value(key, value_key='value'):
+            return try_get(fields, lambda x: x[key][0][value_key])
+
+        return {
+            'id': media_id,
+            'display_id': display_id,
+            'title': title,
+            'formats': formats,
+            'description': strip_or_none(get_field_value('body') or get_field_value('teaser')),
+            'timestamp': int_or_none(node.get('created')),
+            'subtitles': subtitles,
+            'duration': int_or_none(vdata.get('duration')),
+            'like_count': int_or_none(try_get(fivestar, lambda x: x['up_count']['value'])),
+            'dislike_count': int_or_none(try_get(fivestar, lambda x: x['down_count']['value'])),
+            'comment_count': int_or_none(node.get('comment_count')),
+            'series': try_get(node, lambda x: x['series']['title'], compat_str),
+            'season_number': int_or_none(get_field_value('season')),
+            'season_id': str_or_none(get_field_value('series_nid', 'nid')),
+            'episode_number': int_or_none(get_field_value('episode')),
+        }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/globo.py new/youtube-dl/youtube_dl/extractor/globo.py
--- old/youtube-dl/youtube_dl/extractor/globo.py        2019-01-02 17:52:03.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/globo.py        2019-01-04 16:33:13.000000000 +0100
@@ -72,7 +72,7 @@
             return
 
         try:
-            self._download_json(
+            glb_id = (self._download_json(
                'https://login.globo.com/api/authentication', None, data=json.dumps({
                     'payload': {
                         'email': email,
@@ -81,7 +81,9 @@
                     },
                 }).encode(), headers={
                     'Content-Type': 'application/json; charset=utf-8',
-                })
+                }) or {}).get('glbId')
+            if glb_id:
+                self._set_cookie('.globo.com', 'GLBID', glb_id)
         except ExtractorError as e:
             if isinstance(e.cause, compat_HTTPError) and e.cause.code == 401:
                 resp = self._parse_json(e.cause.read(), None)
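The globo change stores the GLBID cookie itself instead of relying on the login endpoint to set it; note the `(... or {}).get('glbId')` guard, which tolerates a missing or empty JSON response. That guard in isolation:

```python
def extract_glb_id(response):
    # response may be None when the login request failed to parse;
    # treat that the same as a response with no glbId field.
    return (response or {}).get('glbId')
```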
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/hungama.py new/youtube-dl/youtube_dl/extractor/hungama.py
--- old/youtube-dl/youtube_dl/extractor/hungama.py      1970-01-01 01:00:00.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/hungama.py      2019-01-04 16:33:13.000000000 +0100
@@ -0,0 +1,117 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+from ..utils import (
+    int_or_none,
+    urlencode_postdata,
+)
+
+
+class HungamaIE(InfoExtractor):
+    _VALID_URL = r'''(?x)
+                    https?://
+                        (?:www\.)?hungama\.com/
+                        (?:
+                            (?:video|movie)/[^/]+/|
+                            tv-show/(?:[^/]+/){2}\d+/episode/[^/]+/
+                        )
+                        (?P<id>\d+)
+                    '''
+    _TESTS = [{
+        'url': 'http://www.hungama.com/video/krishna-chants/39349649/',
+        'md5': 'a845a6d1ebd08d80c1035126d49bd6a0',
+        'info_dict': {
+            'id': '2931166',
+            'ext': 'mp4',
+            'title': 'Lucky Ali - Kitni Haseen Zindagi',
+            'track': 'Kitni Haseen Zindagi',
+            'artist': 'Lucky Ali',
+            'album': 'Aks',
+            'release_year': 2000,
+        }
+    }, {
+        'url': 'https://www.hungama.com/movie/kahaani-2/44129919/',
+        'only_matching': True,
+    }, {
+        'url': 'https://www.hungama.com/tv-show/padded-ki-pushup/season-1/44139461/episode/ep-02-training-sasu-pathlaag-karing/44139503/',
+        'only_matching': True,
+    }]
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+
+        webpage = self._download_webpage(url, video_id)
+
+        info = self._search_json_ld(webpage, video_id)
+
+        m3u8_url = self._download_json(
+            'https://www.hungama.com/index.php', video_id,
+            data=urlencode_postdata({'content_id': video_id}), headers={
+                'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
+                'X-Requested-With': 'XMLHttpRequest',
+            }, query={
+                'c': 'common',
+                'm': 'get_video_mdn_url',
+            })['stream_url']
+
+        formats = self._extract_m3u8_formats(
+            m3u8_url, video_id, ext='mp4', entry_protocol='m3u8_native',
+            m3u8_id='hls')
+        self._sort_formats(formats)
+
+        info.update({
+            'id': video_id,
+            'formats': formats,
+        })
+        return info
+
+
+class HungamaSongIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?hungama\.com/song/[^/]+/(?P<id>\d+)'
+    _TEST = {
+        'url': 'https://www.hungama.com/song/kitni-haseen-zindagi/2931166/',
+        'md5': 'a845a6d1ebd08d80c1035126d49bd6a0',
+        'info_dict': {
+            'id': '2931166',
+            'ext': 'mp4',
+            'title': 'Lucky Ali - Kitni Haseen Zindagi',
+            'track': 'Kitni Haseen Zindagi',
+            'artist': 'Lucky Ali',
+            'album': 'Aks',
+            'release_year': 2000,
+        }
+    }
+
+    def _real_extract(self, url):
+        audio_id = self._match_id(url)
+
+        data = self._download_json(
+            'https://www.hungama.com/audio-player-data/track/%s' % audio_id,
+            audio_id, query={'_country': 'IN'})[0]
+
+        track = data['song_name']
+        artist = data.get('singer_name')
+
+        m3u8_url = self._download_json(
+            data.get('file') or data['preview_link'],
+            audio_id)['response']['media_url']
+
+        formats = self._extract_m3u8_formats(
+            m3u8_url, audio_id, ext='mp4', entry_protocol='m3u8_native',
+            m3u8_id='hls')
+        self._sort_formats(formats)
+
+        title = '%s - %s' % (artist, track) if artist else track
+        thumbnail = data.get('img_src') or data.get('album_image')
+
+        return {
+            'id': audio_id,
+            'title': title,
+            'thumbnail': thumbnail,
+            'track': track,
+            'artist': artist,
+            'album': data.get('album_name'),
+            'release_year': int_or_none(data.get('date')),
+            'formats': formats,
+        }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/jwplatform.py new/youtube-dl/youtube_dl/extractor/jwplatform.py
--- old/youtube-dl/youtube_dl/extractor/jwplatform.py   2019-01-02 17:52:03.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/jwplatform.py   2019-01-04 16:33:13.000000000 +0100
@@ -7,8 +7,8 @@
 
 
 class JWPlatformIE(InfoExtractor):
-    _VALID_URL = r'(?:https?://content\.jwplatform\.com/(?:feeds|players|jw6)/|jwplatform:)(?P<id>[a-zA-Z0-9]{8})'
-    _TEST = {
+    _VALID_URL = r'(?:https?://(?:content\.jwplatform|cdn\.jwplayer)\.com/(?:(?:feed|player|thumb|preview|video|manifest)s|jw6|v2/media)/|jwplatform:)(?P<id>[a-zA-Z0-9]{8})'
+    _TESTS = [{
         'url': 'http://content.jwplatform.com/players/nPripu9l-ALJ3XQCI.js',
         'md5': 'fa8899fa601eb7c83a64e9d568bdf325',
         'info_dict': {
@@ -19,7 +19,10 @@
             'upload_date': '20081127',
             'timestamp': 1227796140,
         }
-    }
+    }, {
+        'url': 'https://cdn.jwplayer.com/players/nPripu9l-ALJ3XQCI.js',
+        'only_matching': True,
+    }]
 
     @staticmethod
     def _extract_url(webpage):
@@ -34,5 +37,5 @@
 
     def _real_extract(self, url):
         video_id = self._match_id(url)
-        json_data = self._download_json('http://content.jwplatform.com/feeds/%s.json' % video_id, video_id)
+        json_data = self._download_json('https://cdn.jwplayer.com/v2/media/' + video_id, video_id)
         return self._parse_jwplayer_data(json_data, video_id)
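The widened `_VALID_URL` above now also accepts cdn.jwplayer.com hosts and the v2/media API paths. The regex can be exercised directly (pattern copied from the new line in the diff):

```python
import re

# _VALID_URL from the new jwplatform.py, split for readability.
VALID_URL = (r'(?:https?://(?:content\.jwplatform|cdn\.jwplayer)\.com/'
             r'(?:(?:feed|player|thumb|preview|video|manifest)s|jw6|v2/media)/'
             r'|jwplatform:)(?P<id>[a-zA-Z0-9]{8})')

def match_id(url):
    # Returns the 8-character media id, or None for unsupported URLs.
    m = re.match(VALID_URL, url)
    return m.group('id') if m else None
```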
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/nationalgeographic.py new/youtube-dl/youtube_dl/extractor/nationalgeographic.py
--- old/youtube-dl/youtube_dl/extractor/nationalgeographic.py   2019-01-02 17:52:04.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/nationalgeographic.py   2019-01-04 16:33:13.000000000 +0100
@@ -1,15 +1,9 @@
 from __future__ import unicode_literals
 
-import re
-
 from .common import InfoExtractor
-from .adobepass import AdobePassIE
-from .theplatform import ThePlatformIE
 from ..utils import (
     smuggle_url,
     url_basename,
-    update_url_query,
-    get_element_by_class,
 )
 
 
@@ -64,132 +58,3 @@
                 {'force_smil_url': True}),
             'id': guid,
         }
-
-
-class NationalGeographicIE(ThePlatformIE, AdobePassIE):
-    IE_NAME = 'natgeo'
-    _VALID_URL = r'https?://channel\.nationalgeographic\.com/(?:(?:(?:wild/)?[^/]+/)?(?:videos|episodes)|u)/(?P<id>[^/?]+)'
-
-    _TESTS = [
-        {
-            'url': 'http://channel.nationalgeographic.com/u/kdi9Ld0PN2molUUIMSBGxoeDhD729KRjQcnxtetilWPMevo8ZwUBIDuPR0Q3D2LVaTsk0MPRkRWDB8ZhqWVeyoxfsZZm36yRp1j-zPfsHEyI_EgAeFY/',
-            'md5': '518c9aa655686cf81493af5cc21e2a04',
-            'info_dict': {
-                'id': 'vKInpacll2pC',
-                'ext': 'mp4',
-                'title': 'Uncovering a Universal Knowledge',
-                'description': 'md5:1a89148475bf931b3661fcd6ddb2ae3a',
-                'timestamp': 1458680907,
-                'upload_date': '20160322',
-                'uploader': 'NEWA-FNG-NGTV',
-            },
-            'add_ie': ['ThePlatform'],
-        },
-        {
-            'url': 'http://channel.nationalgeographic.com/u/kdvOstqYaBY-vSBPyYgAZRUL4sWUJ5XUUPEhc7ISyBHqoIO4_dzfY3K6EjHIC0hmFXoQ7Cpzm6RkET7S3oMlm6CFnrQwSUwo/',
-            'md5': 'c4912f656b4cbe58f3e000c489360989',
-            'info_dict': {
-                'id': 'Pok5lWCkiEFA',
-                'ext': 'mp4',
-                'title': 'The Stunning Red Bird of Paradise',
-                'description': 'md5:7bc8cd1da29686be4d17ad1230f0140c',
-                'timestamp': 1459362152,
-                'upload_date': '20160330',
-                'uploader': 'NEWA-FNG-NGTV',
-            },
-            'add_ie': ['ThePlatform'],
-        },
-        {
-            'url': 'http://channel.nationalgeographic.com/the-story-of-god-with-morgan-freeman/episodes/the-power-of-miracles/',
-            'only_matching': True,
-        },
-        {
-            'url': 'http://channel.nationalgeographic.com/videos/treasures-rediscovered/',
-            'only_matching': True,
-        },
-        {
-            'url': 'http://channel.nationalgeographic.com/the-story-of-god-with-morgan-freeman/videos/uncovering-a-universal-knowledge/',
-            'only_matching': True,
-        },
-        {
-            'url': 'http://channel.nationalgeographic.com/wild/destination-wild/videos/the-stunning-red-bird-of-paradise/',
-            'only_matching': True,
-        }
-    ]
-
-    def _real_extract(self, url):
-        display_id = self._match_id(url)
-        webpage = self._download_webpage(url, display_id)
-        release_url = self._search_regex(
-            r'video_auth_playlist_url\s*=\s*"([^"]+)"',
-            webpage, 'release url')
-        theplatform_path = self._search_regex(r'https?://link\.theplatform\.com/s/([^?]+)', release_url, 'theplatform path')
-        video_id = theplatform_path.split('/')[-1]
-        query = {
-            'mbr': 'true',
-        }
-        is_auth = self._search_regex(r'video_is_auth\s*=\s*"([^"]+)"', webpage, 'is auth', fatal=False)
-        if is_auth == 'auth':
-            auth_resource_id = self._search_regex(
-                r"video_auth_resourceId\s*=\s*'([^']+)'",
-                webpage, 'auth resource id')
-            query['auth'] = self._extract_mvpd_auth(url, video_id, 'natgeo', auth_resource_id)
-
-        formats = []
-        subtitles = {}
-        for key, value in (('switch', 'http'), ('manifest', 'm3u')):
-            tp_query = query.copy()
-            tp_query.update({
-                key: value,
-            })
-            tp_formats, tp_subtitles = self._extract_theplatform_smil(
-                update_url_query(release_url, tp_query), video_id, 'Downloading %s SMIL data' % value)
-            formats.extend(tp_formats)
-            subtitles = self._merge_subtitles(subtitles, tp_subtitles)
-        self._sort_formats(formats)
-
-        info = self._extract_theplatform_metadata(theplatform_path, display_id)
-        info.update({
-            'id': video_id,
-            'formats': formats,
-            'subtitles': subtitles,
-            'display_id': display_id,
-        })
-        return info
-
-
-class NationalGeographicEpisodeGuideIE(InfoExtractor):
-    IE_NAME = 'natgeo:episodeguide'
-    _VALID_URL = r'https?://channel\.nationalgeographic\.com/(?:wild/)?(?P<id>[^/]+)/episode-guide'
-    _TESTS = [
-        {
-            'url': 'http://channel.nationalgeographic.com/the-story-of-god-with-morgan-freeman/episode-guide/',
-            'info_dict': {
-                'id': 'the-story-of-god-with-morgan-freeman-season-1',
-                'title': 'The Story of God with Morgan Freeman - Season 1',
-            },
-            'playlist_mincount': 6,
-        },
-        {
-            'url': 'http://channel.nationalgeographic.com/underworld-inc/episode-guide/?s=2',
-            'info_dict': {
-                'id': 'underworld-inc-season-2',
-                'title': 'Underworld, Inc. - Season 2',
-            },
-            'playlist_mincount': 7,
-        },
-    ]
-
-    def _real_extract(self, url):
-        display_id = self._match_id(url)
-        webpage = self._download_webpage(url, display_id)
-        show = get_element_by_class('show', webpage)
-        selected_season = self._search_regex(
-            r'<div[^>]+class="select-seasons[^"]*".*?<a[^>]*>(.*?)</a>',
-            webpage, 'selected season')
-        entries = [
-            self.url_result(self._proto_relative_url(entry_url), 'NationalGeographic')
-            for entry_url in re.findall('(?s)<div[^>]+class="col-inner"[^>]*?>.*?<a[^>]+href="([^"]+)"', webpage)]
-        return self.playlist_result(
-            entries, '%s-%s' % (display_id, selected_season.lower().replace(' ', '-')),
-            '%s - %s' % (show, selected_season))
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/outsidetv.py new/youtube-dl/youtube_dl/extractor/outsidetv.py
--- old/youtube-dl/youtube_dl/extractor/outsidetv.py    1970-01-01 01:00:00.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/outsidetv.py    2019-01-04 16:33:13.000000000 +0100
@@ -0,0 +1,28 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+
+
+class OutsideTVIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?outsidetv\.com/(?:[^/]+/)*?play/[a-zA-Z0-9]{8}/\d+/\d+/(?P<id>[a-zA-Z0-9]{8})'
+    _TESTS = [{
+        'url': 'http://www.outsidetv.com/category/snow/play/ZjQYboH6/1/10/Hdg0jukV/4',
+        'md5': '192d968fedc10b2f70ec31865ffba0da',
+        'info_dict': {
+            'id': 'Hdg0jukV',
+            'ext': 'mp4',
+            'title': 'Home - Jackson Ep 1 | Arbor Snowboards',
+            'description': 'md5:41a12e94f3db3ca253b04bb1e8d8f4cd',
+            'upload_date': '20181225',
+            'timestamp': 1545742800,
+        }
+    }, {
+        'url': 'http://www.outsidetv.com/home/play/ZjQYboH6/1/10/Hdg0jukV/4',
+        'only_matching': True,
+    }]
+
+    def _real_extract(self, url):
+        jw_media_id = self._match_id(url)
+        return self.url_result(
+            'jwplatform:' + jw_media_id, 'JWPlatform', jw_media_id)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/playplustv.py new/youtube-dl/youtube_dl/extractor/playplustv.py
--- old/youtube-dl/youtube_dl/extractor/playplustv.py   1970-01-01 01:00:00.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/playplustv.py   2019-01-04 16:33:13.000000000 +0100
@@ -0,0 +1,109 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+import json
+import re
+
+from .common import InfoExtractor
+from ..compat import compat_HTTPError
+from ..utils import (
+    clean_html,
+    ExtractorError,
+    int_or_none,
+    PUTRequest,
+)
+
+
+class PlayPlusTVIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?playplus\.tv/VOD/(?P<project_id>[0-9]+)/(?P<id>[0-9a-f]{32})'
+    _TEST = {
+        'url': 'https://www.playplus.tv/VOD/7572/db8d274a5163424e967f35a30ddafb8e',
+        'md5': 'd078cb89d7ab6b9df37ce23c647aef72',
+        'info_dict': {
+            'id': 'db8d274a5163424e967f35a30ddafb8e',
+            'ext': 'mp4',
+            'title': 'Capítulo 179 - Final',
+            'description': 'md5:01085d62d8033a1e34121d3c3cabc838',
+            'timestamp': 1529992740,
+            'upload_date': '20180626',
+        },
+        'skip': 'Requires account credential',
+    }
+    _NETRC_MACHINE = 'playplustv'
+    _GEO_COUNTRIES = ['BR']
+    _token = None
+    _profile_id = None
+
+    def _call_api(self, resource, video_id=None, query=None):
+        return self._download_json('https://api.playplus.tv/api/media/v2/get' + resource, video_id, headers={
+            'Authorization': 'Bearer ' + self._token,
+        }, query=query)
+
+    def _real_initialize(self):
+        email, password = self._get_login_info()
+        if email is None:
+            self.raise_login_required()
+
+        req = PUTRequest(
+            'https://api.playplus.tv/api/web/login', json.dumps({
+                'email': email,
+                'password': password,
+            }).encode(), {
+                'Content-Type': 'application/json; charset=utf-8',
+            })
+
+        try:
+            self._token = self._download_json(req, None)['token']
+        except ExtractorError as e:
+            if isinstance(e.cause, compat_HTTPError) and e.cause.code == 401:
+                raise ExtractorError(self._parse_json(
+                    e.cause.read(), None)['errorMessage'], expected=True)
+            raise
+
+        self._profile = self._call_api('Profiles')['list'][0]['_id']
+
+    def _real_extract(self, url):
+        project_id, media_id = re.match(self._VALID_URL, url).groups()
+        media = self._call_api(
+            'Media', media_id, {
+                'profileId': self._profile,
+                'projectId': project_id,
+                'mediaId': media_id,
+            })['obj']
+        title = media['title']
+
+        formats = []
+        for f in media.get('files', []):
+            f_url = f.get('url')
+            if not f_url:
+                continue
+            file_info = f.get('fileInfo') or {}
+            formats.append({
+                'url': f_url,
+                'width': int_or_none(file_info.get('width')),
+                'height': int_or_none(file_info.get('height')),
+            })
+        self._sort_formats(formats)
+
+        thumbnails = []
+        for thumb in media.get('thumbs', []):
+            thumb_url = thumb.get('url')
+            if not thumb_url:
+                continue
+            thumbnails.append({
+                'url': thumb_url,
+                'width': int_or_none(thumb.get('width')),
+                'height': int_or_none(thumb.get('height')),
+            })
+
+        return {
+            'id': media_id,
+            'title': title,
+            'formats': formats,
+            'thumbnails': thumbnails,
+            'description': clean_html(media.get('description')) or media.get('shortDescription'),
+            'timestamp': int_or_none(media.get('publishDate'), 1000),
+            'view_count': int_or_none(media.get('numberOfViews')),
+            'comment_count': int_or_none(media.get('numberOfComments')),
+            'tags': media.get('tags'),
+        }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/tvnow.py new/youtube-dl/youtube_dl/extractor/tvnow.py
--- old/youtube-dl/youtube_dl/extractor/tvnow.py        2019-01-02 17:52:05.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/tvnow.py        2019-01-04 16:33:13.000000000 +0100
@@ -10,8 +10,9 @@
     int_or_none,
     parse_iso8601,
     parse_duration,
-    try_get,
+    str_or_none,
     update_url_query,
+    urljoin,
 )
 
 
@@ -24,8 +25,7 @@
 
     def _call_api(self, path, video_id, query):
         return self._download_json(
-            'https://api.tvnow.de/v3/' + path,
-            video_id, query=query)
+            'https://api.tvnow.de/v3/' + path, video_id, query=query)
 
     def _extract_video(self, info, display_id):
         video_id = compat_str(info['id'])
@@ -108,6 +108,11 @@
                         (?!(?:list|jahr)(?:/|$))(?P<id>[^/?\#&]+)
                     '''
 
+    @classmethod
+    def suitable(cls, url):
+        return (False if TVNowNewIE.suitable(url) or TVNowSeasonIE.suitable(url) or TVNowAnnualIE.suitable(url) or TVNowShowIE.suitable(url)
+                else super(TVNowIE, cls).suitable(url))
+
     _TESTS = [{
         'url': 'https://www.tvnow.de/rtl2/grip-das-motormagazin/der-neue-porsche-911-gt-3/player',
         'info_dict': {
@@ -116,7 +121,6 @@
             'ext': 'mp4',
             'title': 'Der neue Porsche 911 GT 3',
             'description': 'md5:6143220c661f9b0aae73b245e5d898bb',
-            'thumbnail': r're:^https?://.*\.jpg$',
             'timestamp': 1495994400,
             'upload_date': '20170528',
             'duration': 5283,
@@ -161,136 +165,314 @@
         info = self._call_api(
             'movies/' + display_id, display_id, query={
                 'fields': ','.join(self._VIDEO_FIELDS),
-                'station': mobj.group(1),
             })
 
         return self._extract_video(info, display_id)
 
 
-class TVNowListBaseIE(TVNowBaseIE):
-    _SHOW_VALID_URL = r'''(?x)
-                    (?P<base_url>
-                        https?://
-                            (?:www\.)?tvnow\.(?:de|at|ch)/[^/]+/
-                            (?P<show_id>[^/]+)
-                    )
+class TVNowNewIE(InfoExtractor):
+    _VALID_URL = r'''(?x)
+                    (?P<base_url>https?://
+                        (?:www\.)?tvnow\.(?:de|at|ch)/
+                        (?:shows|serien))/
+                        (?P<show>[^/]+)-\d+/
+                        [^/]+/
+                        episode-\d+-(?P<episode>[^/?$&]+)-(?P<id>\d+)
                     '''
 
-    def _extract_list_info(self, display_id, show_id):
-        fields = list(self._SHOW_FIELDS)
-        fields.extend('formatTabs.%s' % field for field in self._SEASON_FIELDS)
-        fields.extend(
-            'formatTabs.formatTabPages.container.movies.%s' % field
-            for field in self._VIDEO_FIELDS)
-        return self._call_api(
-            'formats/seo', display_id, query={
-                'fields': ','.join(fields),
-                'name': show_id + '.php'
-            })
-
-
-class TVNowListIE(TVNowListBaseIE):
-    _VALID_URL = r'%s/(?:list|jahr)/(?P<id>[^?\#&]+)' % TVNowListBaseIE._SHOW_VALID_URL
+    _TESTS = [{
+        'url': 'https://www.tvnow.de/shows/grip-das-motormagazin-1669/2017-05/episode-405-der-neue-porsche-911-gt-3-331082',
+        'only_matching': True,
+    }]
 
-    _SHOW_FIELDS = ('title', )
-    _SEASON_FIELDS = ('id', 'headline', 'seoheadline', )
-    _VIDEO_FIELDS = ('id', 'headline', 'seoUrl', )
+    def _real_extract(self, url):
+        mobj = re.match(self._VALID_URL, url)
+        base_url = re.sub(r'(?:shows|serien)', '_', mobj.group('base_url'))
+        show, episode = mobj.group('show', 'episode')
+        return self.url_result(
+            # Rewrite new URLs to the old format and use extraction via old API
+            # at api.tvnow.de as a loophole for bypassing premium content checks
+            '%s/%s/%s' % (base_url, show, episode),
+            ie=TVNowIE.ie_key(), video_id=mobj.group('id'))
+
+
+class TVNowNewBaseIE(InfoExtractor):
+    def _call_api(self, path, video_id, query={}):
+        result = self._download_json(
+            'https://apigw.tvnow.de/module/' + path, video_id, query=query)
+        error = result.get('error')
+        if error:
+            raise ExtractorError(
+                '%s said: %s' % (self.IE_NAME, error), expected=True)
+        return result
+
+
+"""
+TODO: new apigw.tvnow.de based version of TVNowIE. Replace old TVNowIE with it
+when api.tvnow.de is shut down. This version can't bypass premium checks though.
+class TVNowIE(TVNowNewBaseIE):
+    _VALID_URL = r'''(?x)
+                    https?://
+                        (?:www\.)?tvnow\.(?:de|at|ch)/
+                        (?:shows|serien)/[^/]+/
+                        (?:[^/]+/)+
+                        (?P<display_id>[^/?$&]+)-(?P<id>\d+)
+                    '''
 
     _TESTS = [{
-        'url': 'https://www.tvnow.de/rtl/30-minuten-deutschland/list/aktuell',
+        # episode with annual navigation
+        'url': 'https://www.tvnow.de/shows/grip-das-motormagazin-1669/2017-05/episode-405-der-neue-porsche-911-gt-3-331082',
         'info_dict': {
-            'id': '28296',
-            'title': '30 Minuten Deutschland - Aktuell',
+            'id': '331082',
+            'display_id': 'grip-das-motormagazin/der-neue-porsche-911-gt-3',
+            'ext': 'mp4',
+            'title': 'Der neue Porsche 911 GT 3',
+            'description': 'md5:6143220c661f9b0aae73b245e5d898bb',
+            'thumbnail': r're:^https?://.*\.jpg$',
+            'timestamp': 1495994400,
+            'upload_date': '20170528',
+            'duration': 5283,
+            'series': 'GRIP - Das Motormagazin',
+            'season_number': 14,
+            'episode_number': 405,
+            'episode': 'Der neue Porsche 911 GT 3',
         },
-        'playlist_mincount': 1,
     }, {
-        'url': 'https://www.tvnow.de/vox/ab-ins-beet/list/staffel-14',
+        # rtl2, episode with season navigation
+        'url': 'https://www.tvnow.de/shows/armes-deutschland-11471/staffel-3/episode-14-bernd-steht-seit-der-trennung-von-seiner-frau-allein-da-526124',
         'only_matching': True,
     }, {
-        'url': 'https://www.tvnow.de/rtl2/grip-das-motormagazin/jahr/2018/3',
+        # rtlnitro
+        'url': 'https://www.tvnow.de/serien/alarm-fuer-cobra-11-die-autobahnpolizei-1815/staffel-13/episode-5-auf-eigene-faust-pilot-366822',
+        'only_matching': True,
+    }, {
+        # superrtl
+        'url': 'https://www.tvnow.de/shows/die-lustigsten-schlamassel-der-welt-1221/staffel-2/episode-14-u-a-ketchup-effekt-364120',
+        'only_matching': True,
+    }, {
+        # ntv
+        'url': 'https://www.tvnow.de/shows/startup-news-10674/staffel-2/episode-39-goetter-in-weiss-387630',
+        'only_matching': True,
+    }, {
+        # vox
+        'url': 'https://www.tvnow.de/shows/auto-mobil-174/2017-11/episode-46-neues-vom-automobilmarkt-2017-11-19-17-00-00-380072',
+        'only_matching': True,
+    }, {
+        'url': 'https://www.tvnow.de/shows/grip-das-motormagazin-1669/2017-05/episode-405-der-neue-porsche-911-gt-3-331082',
         'only_matching': True,
     }]
 
-    @classmethod
-    def suitable(cls, url):
-        return (False if TVNowIE.suitable(url)
-                else super(TVNowListIE, cls).suitable(url))
+    def _extract_video(self, info, url, display_id):
+        config = info['config']
+        source = config['source']
 
-    def _real_extract(self, url):
-        base_url, show_id, season_id = re.match(self._VALID_URL, url).groups()
+        video_id = compat_str(info.get('id') or source['videoId'])
+        title = source['title'].strip()
+
+        paths = []
+        for manifest_url in (info.get('manifest') or {}).values():
+            if not manifest_url:
+                continue
+            manifest_url = update_url_query(manifest_url, {'filter': ''})
+            path = self._search_regex(r'https?://[^/]+/(.+?)\.ism/', manifest_url, 'path')
+            if path in paths:
+                continue
+            paths.append(path)
 
-        list_info = self._extract_list_info(season_id, show_id)
+            def url_repl(proto, suffix):
+                return re.sub(
+                    r'(?:hls|dash|hss)([.-])', proto + r'\1', re.sub(
+                        r'\.ism/(?:[^.]*\.(?:m3u8|mpd)|[Mm]anifest)',
+                        '.ism/' + suffix, manifest_url))
 
-        season = next(
-            season for season in list_info['formatTabs']['items']
-            if season.get('seoheadline') == season_id)
-
-        title = list_info.get('title')
-        headline = season.get('headline')
-        if title and headline:
-            title = '%s - %s' % (title, headline)
+            formats = self._extract_mpd_formats(
+                url_repl('dash', '.mpd'), video_id,
+                mpd_id='dash', fatal=False)
+            formats.extend(self._extract_ism_formats(
+                url_repl('hss', 'Manifest'),
+                video_id, ism_id='mss', fatal=False))
+            formats.extend(self._extract_m3u8_formats(
+                url_repl('hls', '.m3u8'), video_id, 'mp4',
+                'm3u8_native', m3u8_id='hls', fatal=False))
+            if formats:
+                break
         else:
-            title = headline or title
+            if try_get(info, lambda x: x['rights']['isDrm']):
+                raise ExtractorError(
+                    'Video %s is DRM protected' % video_id, expected=True)
+            if try_get(config, lambda x: x['boards']['geoBlocking']['block']):
+                raise self.raise_geo_restricted()
+            if not info.get('free', True):
+                raise ExtractorError(
+                    'Video %s is not available for free' % video_id, expected=True)
+        self._sort_formats(formats)
+
+        description = source.get('description')
+        thumbnail = url_or_none(source.get('poster'))
+        timestamp = unified_timestamp(source.get('previewStart'))
+        duration = parse_duration(source.get('length'))
+
+        series = source.get('format')
+        season_number = int_or_none(self._search_regex(
+            r'staffel-(\d+)', url, 'season number', default=None))
+        episode_number = int_or_none(self._search_regex(
+            r'episode-(\d+)', url, 'episode number', default=None))
+
+        return {
+            'id': video_id,
+            'display_id': display_id,
+            'title': title,
+            'description': description,
+            'thumbnail': thumbnail,
+            'timestamp': timestamp,
+            'duration': duration,
+            'series': series,
+            'season_number': season_number,
+            'episode_number': episode_number,
+            'episode': title,
+            'formats': formats,
+        }
+
+    def _real_extract(self, url):
+        display_id, video_id = re.match(self._VALID_URL, url).groups()
+        info = self._call_api('player/' + video_id, video_id)
+        return self._extract_video(info, video_id, display_id)
+"""
+
+
+class TVNowListBaseIE(TVNowNewBaseIE):
+    _SHOW_VALID_URL = r'''(?x)
+                    (?P<base_url>
+                        https?://
+                            (?:www\.)?tvnow\.(?:de|at|ch)/(?:shows|serien)/
+                            [^/?#&]+-(?P<show_id>\d+)
+                    )
+                    '''
+
+    @classmethod
+    def suitable(cls, url):
+        return (False if TVNowNewIE.suitable(url)
+                else super(TVNowListBaseIE, cls).suitable(url))
+
+    def _extract_items(self, url, show_id, list_id, query):
+        items = self._call_api(
+            'teaserrow/format/episode/' + show_id, list_id,
+            query=query)['items']
 
         entries = []
-        for container in season['formatTabPages']['items']:
-            items = try_get(
-                container, lambda x: x['container']['movies']['items'],
-                list) or []
-            for info in items:
-                seo_url = info.get('seoUrl')
-                if not seo_url:
-                    continue
-                video_id = info.get('id')
-                entries.append(self.url_result(
-                    '%s/%s/player' % (base_url, seo_url), TVNowIE.ie_key(),
-                    compat_str(video_id) if video_id else None))
+        for item in items:
+            if not isinstance(item, dict):
+                continue
+            item_url = urljoin(url, item.get('url'))
+            if not item_url:
+                continue
+            video_id = str_or_none(item.get('id') or item.get('videoId'))
+            item_title = item.get('subheadline') or item.get('text')
+            entries.append(self.url_result(
+                item_url, ie=TVNowNewIE.ie_key(), video_id=video_id,
+                video_title=item_title))
 
-        return self.playlist_result(
-            entries, compat_str(season.get('id') or season_id), title)
+        return self.playlist_result(entries, '%s/%s' % (show_id, list_id))
 
 
-class TVNowShowIE(TVNowListBaseIE):
-    _VALID_URL = TVNowListBaseIE._SHOW_VALID_URL
+class TVNowSeasonIE(TVNowListBaseIE):
+    _VALID_URL = r'%s/staffel-(?P<id>\d+)' % TVNowListBaseIE._SHOW_VALID_URL
+    _TESTS = [{
+        'url': 'https://www.tvnow.de/serien/alarm-fuer-cobra-11-die-autobahnpolizei-1815/staffel-13',
+        'info_dict': {
+            'id': '1815/13',
+        },
+        'playlist_mincount': 22,
+    }]
+
+    def _real_extract(self, url):
+        _, show_id, season_id = re.match(self._VALID_URL, url).groups()
+        return self._extract_items(
+            url, show_id, season_id, {'season': season_id})
 
-    _SHOW_FIELDS = ('id', 'title', )
-    _SEASON_FIELDS = ('id', 'headline', 'seoheadline', )
-    _VIDEO_FIELDS = ()
 
+class TVNowAnnualIE(TVNowListBaseIE):
+    _VALID_URL = r'%s/(?P<year>\d{4})-(?P<month>\d{2})' % TVNowListBaseIE._SHOW_VALID_URL
     _TESTS = [{
-        'url': 'https://www.tvnow.at/vox/ab-ins-beet',
+        'url': 'https://www.tvnow.de/shows/grip-das-motormagazin-1669/2017-05',
         'info_dict': {
-            'id': 'ab-ins-beet',
-            'title': 'Ab ins Beet!',
+            'id': '1669/2017-05',
         },
-        'playlist_mincount': 7,
-    }, {
-        'url': 'https://www.tvnow.at/vox/ab-ins-beet/list',
-        'only_matching': True,
+        'playlist_mincount': 2,
+    }]
+
+    def _real_extract(self, url):
+        _, show_id, year, month = re.match(self._VALID_URL, url).groups()
+        return self._extract_items(
+            url, show_id, '%s-%s' % (year, month), {
+                'year': int(year),
+                'month': int(month),
+            })
+
+
+class TVNowShowIE(TVNowListBaseIE):
+    _VALID_URL = TVNowListBaseIE._SHOW_VALID_URL
+    _TESTS = [{
+        # annual navigationType
+        'url': 'https://www.tvnow.de/shows/grip-das-motormagazin-1669',
+        'info_dict': {
+            'id': '1669',
+        },
+        'playlist_mincount': 73,
     }, {
-        'url': 'https://www.tvnow.de/rtl2/grip-das-motormagazin/jahr/',
-        'only_matching': True,
+        # season navigationType
+        'url': 'https://www.tvnow.de/shows/armes-deutschland-11471',
+        'info_dict': {
+            'id': '11471',
+        },
+        'playlist_mincount': 3,
     }]
 
     @classmethod
     def suitable(cls, url):
-        return (False if TVNowIE.suitable(url) or TVNowListIE.suitable(url)
+        return (False if TVNowNewIE.suitable(url) or TVNowSeasonIE.suitable(url) or TVNowAnnualIE.suitable(url)
                 else super(TVNowShowIE, cls).suitable(url))
 
     def _real_extract(self, url):
         base_url, show_id = re.match(self._VALID_URL, url).groups()
 
-        list_info = self._extract_list_info(show_id, show_id)
+        result = self._call_api(
+            'teaserrow/format/navigation/' + show_id, show_id)
+
+        items = result['items']
 
         entries = []
-        for season_info in list_info['formatTabs']['items']:
-            season_url = season_info.get('seoheadline')
-            if not season_url:
-                continue
-            season_id = season_info.get('id')
-            entries.append(self.url_result(
-                '%s/list/%s' % (base_url, season_url), TVNowListIE.ie_key(),
-                compat_str(season_id) if season_id else None,
-                season_info.get('headline')))
+        navigation = result.get('navigationType')
+        if navigation == 'annual':
+            for item in items:
+                if not isinstance(item, dict):
+                    continue
+                year = int_or_none(item.get('year'))
+                if year is None:
+                    continue
+                months = item.get('months')
+                if not isinstance(months, list):
+                    continue
+                for month_dict in months:
+                    if not isinstance(month_dict, dict) or not month_dict:
+                        continue
+                    month_number = int_or_none(list(month_dict.keys())[0])
+                    if month_number is None:
+                        continue
+                    entries.append(self.url_result(
+                        '%s/%04d-%02d' % (base_url, year, month_number),
+                        ie=TVNowAnnualIE.ie_key()))
+        elif navigation == 'season':
+            for item in items:
+                if not isinstance(item, dict):
+                    continue
+                season_number = int_or_none(item.get('season'))
+                if season_number is None:
+                    continue
+                entries.append(self.url_result(
+                    '%s/staffel-%d' % (base_url, season_number),
+                    ie=TVNowSeasonIE.ie_key()))
+        else:
+            raise ExtractorError('Unknown navigationType')
 
-        return self.playlist_result(entries, show_id, list_info.get('title'))
+        return self.playlist_result(entries, show_id)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/youporn.py new/youtube-dl/youtube_dl/extractor/youporn.py
--- old/youtube-dl/youtube_dl/extractor/youporn.py      2019-01-02 17:52:05.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/youporn.py      2019-01-04 16:33:13.000000000 +0100
@@ -68,11 +68,9 @@
         request.add_header('Cookie', 'age_verified=1')
         webpage = self._download_webpage(request, display_id)
 
-        title = self._search_regex(
-            [r'(?:video_titles|videoTitle)\s*[:=]\s*(["\'])(?P<title>(?:(?!\1).)+)\1',
-             r'<h1[^>]+class=["\']heading\d?["\'][^>]*>(?P<title>[^<]+)<'],
-            webpage, 'title', group='title',
-            default=None) or self._og_search_title(
+        title = self._html_search_regex(
+            r'(?s)<div[^>]+class=["\']watchVideoTitle[^>]+>(.+?)</div>',
+            webpage, 'title', default=None) or self._og_search_title(
             webpage, default=None) or self._html_search_meta(
             'title', webpage, fatal=True)
 
@@ -134,7 +132,11 @@
             formats.append(f)
         self._sort_formats(formats)
 
-        description = self._og_search_description(webpage, default=None)
+        description = self._html_search_regex(
+            r'(?s)<div[^>]+\bid=["\']description["\'][^>]*>(.+?)</div>',
+            webpage, 'description',
+            default=None) or self._og_search_description(
+            webpage, default=None)
         thumbnail = self._search_regex(
             r'(?:imageurl\s*=|poster\s*:)\s*(["\'])(?P<thumbnail>.+?)\1',
             webpage, 'thumbnail', fatal=False, group='thumbnail')
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/youtube.py new/youtube-dl/youtube_dl/extractor/youtube.py
--- old/youtube-dl/youtube_dl/extractor/youtube.py      2019-01-02 17:52:05.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/youtube.py      2019-01-04 16:33:13.000000000 +0100
@@ -1931,31 +1931,38 @@
                         'http_chunk_size': 10485760,
                     }
                 formats.append(dct)
-        elif video_info.get('hlsvp'):
-            manifest_url = video_info['hlsvp'][0]
-            formats = []
-            m3u8_formats = self._extract_m3u8_formats(
-                manifest_url, video_id, 'mp4', fatal=False)
-            for a_format in m3u8_formats:
-                itag = self._search_regex(
-                    r'/itag/(\d+)/', a_format['url'], 'itag', default=None)
-                if itag:
-                    a_format['format_id'] = itag
-                    if itag in self._formats:
-                        dct = self._formats[itag].copy()
-                        dct.update(a_format)
-                        a_format = dct
-                a_format['player_url'] = player_url
-                # Accept-Encoding header causes failures in live streams on Youtube and Youtube Gaming
-                a_format.setdefault('http_headers', {})['Youtubedl-no-compression'] = 'True'
-                formats.append(a_format)
         else:
-            error_message = clean_html(video_info.get('reason', [None])[0])
-            if not error_message:
-                error_message = extract_unavailable_message()
-            if error_message:
-                raise ExtractorError(error_message, expected=True)
-            raise ExtractorError('no conn, hlsvp or url_encoded_fmt_stream_map information found in video info')
+            manifest_url = (
+                url_or_none(try_get(
+                    player_response,
+                    lambda x: x['streamingData']['hlsManifestUrl'],
+                    compat_str)) or
+                url_or_none(try_get(
+                    video_info, lambda x: x['hlsvp'][0], compat_str)))
+            if manifest_url:
+                formats = []
+                m3u8_formats = self._extract_m3u8_formats(
+                    manifest_url, video_id, 'mp4', fatal=False)
+                for a_format in m3u8_formats:
+                    itag = self._search_regex(
+                        r'/itag/(\d+)/', a_format['url'], 'itag', default=None)
+                    if itag:
+                        a_format['format_id'] = itag
+                        if itag in self._formats:
+                            dct = self._formats[itag].copy()
+                            dct.update(a_format)
+                            a_format = dct
+                    a_format['player_url'] = player_url
+                    # Accept-Encoding header causes failures in live streams on Youtube and Youtube Gaming
+                    a_format.setdefault('http_headers', {})['Youtubedl-no-compression'] = 'True'
+                    formats.append(a_format)
+            else:
+                error_message = clean_html(video_info.get('reason', [None])[0])
+                if not error_message:
+                    error_message = extract_unavailable_message()
+                if error_message:
+                    raise ExtractorError(error_message, expected=True)
+                raise ExtractorError('no conn, hlsvp, hlsManifestUrl or url_encoded_fmt_stream_map information found in video info')
 
         # uploader
         video_uploader = try_get(
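The hunk above replaces the hlsvp-only branch with a fallback chain: prefer the HLS manifest URL from the player response, then fall back to the legacy `video_info['hlsvp']` entry. A minimal self-contained sketch of that pattern, using simplified stand-ins for youtube-dl's `try_get` and `url_or_none` helpers and toy data (not the real extractor output):

```python
def try_get(src, getter, expected_type=None):
    # Simplified try_get: swallow lookup errors, optionally type-check.
    try:
        v = getter(src)
    except (AttributeError, KeyError, TypeError, IndexError):
        return None
    if expected_type is None or isinstance(v, expected_type):
        return v
    return None


def url_or_none(url):
    # Simplified url_or_none: accept only http(s) URL strings.
    if isinstance(url, str) and url.startswith(('http://', 'https://')):
        return url
    return None


# Hypothetical inputs mimicking the two places the manifest URL may live.
player_response = {
    'streamingData': {'hlsManifestUrl': 'https://example.com/master.m3u8'}}
video_info = {'hlsvp': ['https://example.com/legacy.m3u8']}

# Same fallback chain as in the hunk: player response first, hlsvp second.
manifest_url = (
    url_or_none(try_get(
        player_response,
        lambda x: x['streamingData']['hlsManifestUrl'], str)) or
    url_or_none(try_get(
        video_info, lambda x: x['hlsvp'][0], str)))
```

Because `try_get` swallows KeyError/TypeError, a missing `streamingData` simply yields None and the legacy `hlsvp` value is used instead.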
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/postprocessor/ffmpeg.py new/youtube-dl/youtube_dl/postprocessor/ffmpeg.py
--- old/youtube-dl/youtube_dl/postprocessor/ffmpeg.py   2019-01-02 17:52:05.000000000 +0100
+++ new/youtube-dl/youtube_dl/postprocessor/ffmpeg.py   2019-01-04 16:33:13.000000000 +0100
@@ -384,9 +384,8 @@
             opts += ['-c:s', 'mov_text']
         for (i, lang) in enumerate(sub_langs):
             opts.extend(['-map', '%d:0' % (i + 1)])
-            lang_code = ISO639Utils.short2long(lang)
-            if lang_code is not None:
-                opts.extend(['-metadata:s:s:%d' % i, 'language=%s' % lang_code])
+            lang_code = ISO639Utils.short2long(lang) or lang
+            opts.extend(['-metadata:s:s:%d' % i, 'language=%s' % lang_code])
 
         temp_filename = prepend_extension(filename, 'temp')
         self._downloader.to_screen('[ffmpeg] Embedding subtitles in \'%s\'' % filename)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/utils.py new/youtube-dl/youtube_dl/utils.py
--- old/youtube-dl/youtube_dl/utils.py  2019-01-02 17:52:05.000000000 +0100
+++ new/youtube-dl/youtube_dl/utils.py  2019-01-04 16:33:13.000000000 +0100
@@ -2968,6 +2968,7 @@
         'gv': 'glv',
         'ha': 'hau',
         'he': 'heb',
+        'iw': 'heb',  # Replaced by he in 1989 revision
         'hi': 'hin',
         'ho': 'hmo',
         'hr': 'hrv',
@@ -2977,6 +2978,7 @@
         'hz': 'her',
         'ia': 'ina',
         'id': 'ind',
+        'in': 'ind',  # Replaced by id in 1989 revision
         'ie': 'ile',
         'ig': 'ibo',
         'ii': 'iii',
@@ -3091,6 +3093,7 @@
         'wo': 'wol',
         'xh': 'xho',
         'yi': 'yid',
+        'ji': 'yid',  # Replaced by yi in 1989 revision
         'yo': 'yor',
         'za': 'zha',
         'zh': 'zho',
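The utils.py hunk above adds aliases for the two-letter codes retired in the 1989 revision of ISO 639, so that `iw`, `in`, and `ji` resolve to the same three-letter code as their modern replacements `he`, `id`, and `yi`. A small sketch of the resulting lookup (toy subset, not the full ISO639Utils table):

```python
# Toy subset of the short2long table after the patch: legacy 1989-revision
# codes map to the same ISO 639-2 code as their replacements.
SHORT2LONG = {
    'he': 'heb',
    'iw': 'heb',  # replaced by 'he' in 1989 revision
    'id': 'ind',
    'in': 'ind',  # replaced by 'id' in 1989 revision
    'yi': 'yid',
    'ji': 'yid',  # replaced by 'yi' in 1989 revision
}


def short2long(code):
    # Mirrors ISO639Utils.short2long: None for unknown two-letter codes.
    return SHORT2LONG.get(code)
```

This matters in practice because YouTube still serves some subtitle tracks tagged with the legacy codes, and without the aliases the ffmpeg postprocessor above could not map them to a three-letter code.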
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/version.py new/youtube-dl/youtube_dl/version.py
--- old/youtube-dl/youtube_dl/version.py        2019-01-02 17:52:51.000000000 +0100
+++ new/youtube-dl/youtube_dl/version.py        2019-01-10 17:26:46.000000000 +0100
@@ -1,3 +1,3 @@
 from __future__ import unicode_literals
 
-__version__ = '2019.01.02'
+__version__ = '2019.01.10'

