Hello community,

here is the log from the commit of package youtube-dl for openSUSE:Factory checked in at 2020-11-26 23:14:15
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/youtube-dl (Old)
 and      /work/SRC/openSUSE:Factory/.youtube-dl.new.5913 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "youtube-dl"

Thu Nov 26 23:14:15 2020 rev:145 rq:850826 version:2020.11.26

Changes:
--------
--- /work/SRC/openSUSE:Factory/youtube-dl/python-youtube-dl.changes     2020-11-23 16:28:25.820742751 +0100
+++ /work/SRC/openSUSE:Factory/.youtube-dl.new.5913/python-youtube-dl.changes   2020-11-26 23:15:26.141039516 +0100
@@ -1,0 +2,21 @@
+Wed Nov 25 20:40:49 UTC 2020 - Jan Engelhardt <jeng...@inai.de>
+
+- Update to release 2020.11.26
+  * cda, nrk: fix extraction
+  * youtube: improve music metadata and license extraction
+  * medaltv: Add new extractor
+  * bbc: fix BBC News videos extraction, BBC Three clip extraction
+  * vlive: Add support for post URLs
+
+-------------------------------------------------------------------
+Mon Nov 23 18:26:17 UTC 2020 - Jan Engelhardt <jeng...@inai.de>
+
+- Update to release 2020.11.24
+  * pinterest: Add extractor
+  * extractor/common: add generic support for akamai http format
+    extraction
+  * skyit: add support for multiple Sky Italia websites
+  * pinterest: Add support for large collections (more than 25
+    pins)
+
+-------------------------------------------------------------------
--- /work/SRC/openSUSE:Factory/youtube-dl/youtube-dl.changes    2020-11-24 22:13:49.423570290 +0100
+++ /work/SRC/openSUSE:Factory/.youtube-dl.new.5913/youtube-dl.changes  2020-11-26 23:15:26.801040029 +0100
@@ -1,0 +2,10 @@
+Wed Nov 25 20:40:49 UTC 2020 - Jan Engelhardt <jeng...@inai.de>
+
+- Update to release 2020.11.26
+  * cda, nrk: fix extraction
+  * youtube: improve music metadata and license extraction
+  * medaltv: Add new extractor
+  * bbc: fix BBC News videos extraction, BBC Three clip extraction
+  * vlive: Add support for post URLs
+
+-------------------------------------------------------------------

Old:
----
  youtube-dl-2020.11.24.tar.gz
  youtube-dl-2020.11.24.tar.gz.sig

New:
----
  youtube-dl-2020.11.26.tar.gz
  youtube-dl-2020.11.26.tar.gz.sig

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-youtube-dl.spec ++++++
--- /var/tmp/diff_new_pack.FR773b/_old  2020-11-26 23:15:27.469040547 +0100
+++ /var/tmp/diff_new_pack.FR773b/_new  2020-11-26 23:15:27.477040554 +0100
@@ -19,7 +19,7 @@
 %define modname youtube-dl
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 Name:           python-youtube-dl
-Version:        2020.11.24
+Version:        2020.11.26
 Release:        0
 Summary:        A Python module for downloading from video sites for offline watching
 License:        SUSE-Public-Domain AND CC-BY-SA-3.0

++++++ youtube-dl.spec ++++++
--- /var/tmp/diff_new_pack.FR773b/_old  2020-11-26 23:15:27.497040569 +0100
+++ /var/tmp/diff_new_pack.FR773b/_new  2020-11-26 23:15:27.501040573 +0100
@@ -17,7 +17,7 @@
 
 
 Name:           youtube-dl
-Version:        2020.11.24
+Version:        2020.11.26
 Release:        0
 Summary:        A tool for downloading from video sites for offline watching
 License:        SUSE-Public-Domain AND CC-BY-SA-3.0

++++++ youtube-dl-2020.11.24.tar.gz -> youtube-dl-2020.11.26.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/ChangeLog new/youtube-dl/ChangeLog
--- old/youtube-dl/ChangeLog    2020-11-23 18:23:11.000000000 +0100
+++ new/youtube-dl/ChangeLog    2020-11-25 21:05:47.000000000 +0100
@@ -1,3 +1,22 @@
+version 2020.11.26
+
+Core
+* [downloader/fragment] Set final file's mtime according to last fragment's
+  Last-Modified header (#11718, #18384, #27138)
+
+Extractors
++ [spreaker] Add support for spreaker.com (#13480, #13877)
+* [vlive] Improve extraction for geo-restricted videos
++ [vlive] Add support for post URLs (#27122, #27123)
+* [viki] Fix video API request (#27184)
+* [bbc] Fix BBC Three clip extraction
+* [bbc] Fix BBC News videos extraction
++ [medaltv] Add support for medal.tv (#27149)
+* [youtube] Improve music metadata and license extraction (#26013)
+* [nrk] Fix extraction
+* [cda] Fix extraction (#17803, #24458, #24518, #26381)
+
+
 version 2020.11.24
 
 Core
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/docs/supportedsites.md new/youtube-dl/docs/supportedsites.md
--- old/youtube-dl/docs/supportedsites.md       2020-11-23 18:23:14.000000000 +0100
+++ new/youtube-dl/docs/supportedsites.md       2020-11-25 21:05:51.000000000 +0100
@@ -471,6 +471,7 @@
  - **massengeschmack.tv**
  - **MatchTV**
  - **MDR**: MDR.DE and KiKA
+ - **MedalTV**
  - **media.ccc.de**
  - **media.ccc.de:lists**
  - **Medialaan**
@@ -839,6 +840,10 @@
  - **Sport5**
  - **SportBox**
  - **SportDeutschland**
+ - **Spreaker**
+ - **SpreakerPage**
+ - **SpreakerShow**
+ - **SpreakerShowPage**
  - **SpringboardPlatform**
  - **Sprout**
  - **sr:mediathek**: Saarländischer Rundfunk
@@ -1055,6 +1060,7 @@
  - **vk:wallpost**
  - **vlive**
  - **vlive:channel**
+ - **vlive:post**
  - **Vodlocker**
  - **VODPl**
  - **VODPlatform**
Binary files old/youtube-dl/youtube-dl and new/youtube-dl/youtube-dl differ
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/downloader/fragment.py new/youtube-dl/youtube_dl/downloader/fragment.py
--- old/youtube-dl/youtube_dl/downloader/fragment.py    2020-11-23 18:22:27.000000000 +0100
+++ new/youtube-dl/youtube_dl/downloader/fragment.py    2020-11-25 21:05:35.000000000 +0100
@@ -97,12 +97,15 @@
 
     def _download_fragment(self, ctx, frag_url, info_dict, headers=None):
         fragment_filename = '%s-Frag%d' % (ctx['tmpfilename'], ctx['fragment_index'])
-        success = ctx['dl'].download(fragment_filename, {
+        fragment_info_dict = {
             'url': frag_url,
             'http_headers': headers or info_dict.get('http_headers'),
-        })
+        }
+        success = ctx['dl'].download(fragment_filename, fragment_info_dict)
         if not success:
             return False, None
+        if fragment_info_dict.get('filetime'):
+            ctx['fragment_filetime'] = fragment_info_dict.get('filetime')
         down, frag_sanitized = sanitize_open(fragment_filename, 'rb')
         ctx['fragment_filename_sanitized'] = frag_sanitized
         frag_content = down.read()
@@ -258,6 +261,13 @@
             downloaded_bytes = ctx['complete_frags_downloaded_bytes']
         else:
             self.try_rename(ctx['tmpfilename'], ctx['filename'])
+            if self.params.get('updatetime', True):
+                filetime = ctx.get('fragment_filetime')
+                if filetime:
+                    try:
+                        os.utime(ctx['filename'], (time.time(), filetime))
+                    except Exception:
+                        pass
             downloaded_bytes = os.path.getsize(encodeFilename(ctx['filename']))
 
         self._hook_progress({
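
A note on the hunk above: the fragment downloader now remembers the
'filetime' derived from the last fragment's Last-Modified header and
applies it to the merged file via os.utime(). A minimal standalone
sketch of that idea using only the standard library (the file name and
header value below are invented for illustration):

    import calendar
    import email.utils
    import os
    import time

    def apply_last_modified(filename, last_modified):
        # Parse an RFC 2822 date like 'Wed, 25 Nov 2020 20:05:35 GMT'
        # into a Unix timestamp (youtube-dl stores this as 'filetime').
        parsed = email.utils.parsedate(last_modified)
        if parsed is None:
            return
        try:
            # atime becomes "now", mtime the server-reported time.
            os.utime(filename, (time.time(), calendar.timegm(parsed)))
        except Exception:
            pass  # best effort, mirroring the diff

    # Hypothetical usage:
    # apply_last_modified('video.mp4', 'Wed, 25 Nov 2020 20:05:35 GMT')
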
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/bbc.py new/youtube-dl/youtube_dl/extractor/bbc.py
--- old/youtube-dl/youtube_dl/extractor/bbc.py  2020-11-23 18:22:27.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/bbc.py  2020-11-25 21:05:35.000000000 +0100
@@ -981,7 +981,7 @@
         group_id = self._search_regex(
             r'<div[^>]+\bclass=["\']video["\'][^>]+\bdata-pid=["\'](%s)' % self._ID_REGEX,
             webpage, 'group id', default=None)
-        if playlist_id:
+        if group_id:
             return self.url_result(
                 'https://www.bbc.co.uk/programmes/%s' % group_id,
                 ie=BBCCoUkIE.ie_key())
@@ -1092,10 +1092,26 @@
             self._search_regex(
                 r'(?s)bbcthreeConfig\s*=\s*({.+?})\s*;\s*<', webpage,
                 'bbcthree config', default='{}'),
-            playlist_id, transform_source=js_to_json, fatal=False)
-        if bbc3_config:
+            playlist_id, transform_source=js_to_json, fatal=False) or {}
+        payload = bbc3_config.get('payload') or {}
+        if payload:
+            clip = payload.get('currentClip') or {}
+            clip_vpid = clip.get('vpid')
+            clip_title = clip.get('title')
+            if clip_vpid and clip_title:
+                formats, subtitles = self._download_media_selector(clip_vpid)
+                self._sort_formats(formats)
+                return {
+                    'id': clip_vpid,
+                    'title': clip_title,
+                    'thumbnail': dict_get(clip, ('poster', 'imageUrl')),
+                    'description': clip.get('description'),
+                    'duration': parse_duration(clip.get('duration')),
+                    'formats': formats,
+                    'subtitles': subtitles,
+                }
             bbc3_playlist = try_get(
-                bbc3_config, lambda x: x['payload']['content']['bbcMedia']['playlist'],
+                payload, lambda x: x['content']['bbcMedia']['playlist'],
                 dict)
             if bbc3_playlist:
                 playlist_title = bbc3_playlist.get('title') or playlist_title
@@ -1118,6 +1134,39 @@
                 return self.playlist_result(
                     entries, playlist_id, playlist_title, playlist_description)
 
+        initial_data = self._parse_json(self._search_regex(
+            r'window\.__INITIAL_DATA__\s*=\s*({.+?});', webpage,
+            'preload state', default='{}'), playlist_id, fatal=False)
+        if initial_data:
+            def parse_media(media):
+                if not media:
+                    return
+                for item in (try_get(media, lambda x: x['media']['items'], list) or []):
+                    item_id = item.get('id')
+                    item_title = item.get('title')
+                    if not (item_id and item_title):
+                        continue
+                    formats, subtitles = self._download_media_selector(item_id)
+                    self._sort_formats(formats)
+                    entries.append({
+                        'id': item_id,
+                        'title': item_title,
+                        'thumbnail': item.get('holdingImageUrl'),
+                        'formats': formats,
+                        'subtitles': subtitles,
+                    })
+            for resp in (initial_data.get('data') or {}).values():
+                name = resp.get('name')
+                if name == 'media-experience':
+                    parse_media(try_get(resp, lambda x: x['data']['initialItem']['mediaItem'], dict))
+                elif name == 'article':
+                    for block in (try_get(resp, lambda x: x['data']['blocks'], list) or []):
+                        if block.get('type') != 'media':
+                            continue
+                        parse_media(block.get('model'))
+            return self.playlist_result(
+                entries, playlist_id, playlist_title, playlist_description)
+
         def extract_all(pattern):
             return list(filter(None, map(
                 lambda s: self._parse_json(s, playlist_id, fatal=False),
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/cda.py new/youtube-dl/youtube_dl/extractor/cda.py
--- old/youtube-dl/youtube_dl/extractor/cda.py  2020-11-23 18:22:27.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/cda.py  2020-11-25 21:05:35.000000000 +0100
@@ -5,10 +5,16 @@
 import re
 
 from .common import InfoExtractor
+from ..compat import (
+    compat_chr,
+    compat_ord,
+    compat_urllib_parse_unquote,
+)
 from ..utils import (
     ExtractorError,
     float_or_none,
     int_or_none,
+    merge_dicts,
     multipart_encode,
     parse_duration,
     random_birthday,
@@ -107,8 +113,9 @@
             r'Odsłony:(?:\s|&nbsp;)*([0-9]+)', webpage,
             'view_count', default=None)
         average_rating = self._search_regex(
-            r'<(?:span|meta)[^>]+itemprop=(["\'])ratingValue\1[^>]*>(?P<rating_value>[0-9.]+)',
-            webpage, 'rating', fatal=False, group='rating_value')
+            (r'<(?:span|meta)[^>]+itemprop=(["\'])ratingValue\1[^>]*>(?P<rating_value>[0-9.]+)',
+             r'<span[^>]+\bclass=["\']rating["\'][^>]*>(?P<rating_value>[0-9.]+)'), webpage, 'rating', fatal=False,
+            group='rating_value')
 
         info_dict = {
             'id': video_id,
@@ -123,6 +130,24 @@
             'age_limit': 18 if need_confirm_age else 0,
         }
 
+        # Source: https://www.cda.pl/js/player.js?t=1606154898
+        def decrypt_file(a):
+            for p in ('_XDDD', '_CDA', '_ADC', '_CXD', '_QWE', '_Q5', '_IKSDE'):
+                a = a.replace(p, '')
+            a = compat_urllib_parse_unquote(a)
+            b = []
+            for c in a:
+                f = compat_ord(c)
+                b.append(compat_chr(33 + (f + 14) % 94) if 33 <= f and 126 >= f else compat_chr(f))
+            a = ''.join(b)
+            a = a.replace('.cda.mp4', '')
+            for p in ('.2cda.pl', '.3cda.pl'):
+                a = a.replace(p, '.cda.pl')
+            if '/upstream' in a:
+                a = a.replace('/upstream', '.mp4/upstream')
+                return 'https://' + a
+            return 'https://' + a + '.mp4'
+
         def extract_format(page, version):
             json_str = self._html_search_regex(
                 r'player_data=(\\?["\'])(?P<player_data>.+?)\1', page,
@@ -141,6 +166,8 @@
                 video['file'] = codecs.decode(video['file'], 'rot_13')
                 if video['file'].endswith('adc.mp4'):
                     video['file'] = video['file'].replace('adc.mp4', '.mp4')
+            elif not video['file'].startswith('http'):
+                video['file'] = decrypt_file(video['file'])
             f = {
                 'url': video['file'],
             }
@@ -179,4 +206,6 @@
 
         self._sort_formats(formats)
 
-        return info_dict
+        info = self._search_json_ld(webpage, video_id, default={})
+
+        return merge_dicts(info_dict, info)
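
The decrypt_file() helper added above is, at its core, a fixed
substitution over the printable ASCII range (codes 33..126). A tiny
sketch of just that transform, renamed shift_chars here for clarity:

    def shift_chars(s):
        return ''.join(
            chr(33 + (ord(c) + 14) % 94) if 33 <= ord(c) <= 126 else c
            for c in s)

    # The mapping is an involution (-80 == +14 mod 94), so applying it
    # twice restores the original string:
    assert shift_chars(shift_chars('abc')) == 'abc'
    print(shift_chars('abc'))  # -> 234
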
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/extractors.py new/youtube-dl/youtube_dl/extractor/extractors.py
--- old/youtube-dl/youtube_dl/extractor/extractors.py   2020-11-23 18:22:33.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/extractors.py   2020-11-25 21:05:35.000000000 +0100
@@ -606,6 +606,7 @@
 from .massengeschmacktv import MassengeschmackTVIE
 from .matchtv import MatchTVIE
 from .mdr import MDRIE
+from .medaltv import MedalTVIE
 from .mediaset import MediasetIE
 from .mediasite import (
     MediasiteIE,
@@ -1081,6 +1082,12 @@
 from .sport5 import Sport5IE
 from .sportbox import SportBoxIE
 from .sportdeutschland import SportDeutschlandIE
+from .spreaker import (
+    SpreakerIE,
+    SpreakerPageIE,
+    SpreakerShowIE,
+    SpreakerShowPageIE,
+)
 from .springboardplatform import SpringboardPlatformIE
 from .sprout import SproutIE
 from .srgssr import (
@@ -1374,6 +1381,7 @@
 )
 from .vlive import (
     VLiveIE,
+    VLivePostIE,
     VLiveChannelIE,
 )
 from .vodlocker import VodlockerIE
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/medaltv.py new/youtube-dl/youtube_dl/extractor/medaltv.py
--- old/youtube-dl/youtube_dl/extractor/medaltv.py      1970-01-01 01:00:00.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/medaltv.py      2020-11-25 21:05:35.000000000 +0100
@@ -0,0 +1,131 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+from ..compat import compat_str
+from ..utils import (
+    ExtractorError,
+    float_or_none,
+    int_or_none,
+    str_or_none,
+    try_get,
+)
+
+
+class MedalTVIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?medal\.tv/clips/(?P<id>[0-9]+)'
+    _TESTS = [{
+        'url': 'https://medal.tv/clips/34934644/3Is9zyGMoBMr',
+        'md5': '7b07b064331b1cf9e8e5c52a06ae68fa',
+        'info_dict': {
+            'id': '34934644',
+            'ext': 'mp4',
+            'title': 'Quad Cold',
+            'description': 'Medal,https://medal.tv/desktop/',
+            'uploader': 'MowgliSB',
+            'timestamp': 1603165266,
+            'upload_date': '20201020',
+            'uploader_id': 10619174,
+        }
+    }, {
+        'url': 'https://medal.tv/clips/36787208',
+        'md5': 'b6dc76b78195fff0b4f8bf4a33ec2148',
+        'info_dict': {
+            'id': '36787208',
+            'ext': 'mp4',
+            'title': 'u tk me i tk u bigger',
+            'description': 'Medal,https://medal.tv/desktop/',
+            'uploader': 'Mimicc',
+            'timestamp': 1605580939,
+            'upload_date': '20201117',
+            'uploader_id': 5156321,
+        }
+    }]
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        webpage = self._download_webpage(url, video_id)
+
+        hydration_data = self._parse_json(self._search_regex(
+            r'<script[^>]*>\s*(?:var\s*)?hydrationData\s*=\s*({.+?})\s*</script>',
+            webpage, 'hydration data', default='{}'), video_id)
+
+        clip = try_get(
+            hydration_data, lambda x: x['clips'][video_id], dict) or {}
+        if not clip:
+            raise ExtractorError(
+                'Could not find video information.', video_id=video_id)
+
+        title = clip['contentTitle']
+
+        source_width = int_or_none(clip.get('sourceWidth'))
+        source_height = int_or_none(clip.get('sourceHeight'))
+
+        aspect_ratio = source_width / source_height if source_width and source_height else 16 / 9
+
+        def add_item(container, item_url, height, id_key='format_id', item_id=None):
+            item_id = item_id or '%dp' % height
+            if item_id not in item_url:
+                return
+            width = int(round(aspect_ratio * height))
+            container.append({
+                'url': item_url,
+                id_key: item_id,
+                'width': width,
+                'height': height
+            })
+
+        formats = []
+        thumbnails = []
+        for k, v in clip.items():
+            if not (v and isinstance(v, compat_str)):
+                continue
+            mobj = re.match(r'(contentUrl|thumbnail)(?:(\d+)p)?$', k)
+            if not mobj:
+                continue
+            prefix = mobj.group(1)
+            height = int_or_none(mobj.group(2))
+            if prefix == 'contentUrl':
+                add_item(
+                    formats, v, height or source_height,
+                    item_id=None if height else 'source')
+            elif prefix == 'thumbnail':
+                add_item(thumbnails, v, height, 'id')
+
+        error = clip.get('error')
+        if not formats and error:
+            if error == 404:
+                raise ExtractorError(
+                    'That clip does not exist.',
+                    expected=True, video_id=video_id)
+            else:
+                raise ExtractorError(
+                    'An unknown error occurred ({0}).'.format(error),
+                    video_id=video_id)
+
+        self._sort_formats(formats)
+
+        # Necessary because the id of the author is not known in advance.
+        # Won't raise an issue if no profile can be found as this is optional.
+        author = try_get(
+            hydration_data, lambda x: list(x['profiles'].values())[0], dict) or {}
+        author_id = str_or_none(author.get('id'))
+        author_url = 'https://medal.tv/users/{0}'.format(author_id) if author_id else None
+
+        return {
+            'id': video_id,
+            'title': title,
+            'formats': formats,
+            'thumbnails': thumbnails,
+            'description': clip.get('contentDescription'),
+            'uploader': author.get('displayName'),
+            'timestamp': float_or_none(clip.get('created'), 1000),
+            'uploader_id': author_id,
+            'uploader_url': author_url,
+            'duration': int_or_none(clip.get('videoLengthSeconds')),
+            'view_count': int_or_none(clip.get('views')),
+            'like_count': int_or_none(clip.get('likes')),
+            'comment_count': int_or_none(clip.get('comments')),
+        }
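
Once registered in extractors.py (diff above), the new extractor can be
exercised through the ordinary embedding API; a sketch using the URL
from the first test case (requires network access):

    import youtube_dl

    with youtube_dl.YoutubeDL({'skip_download': True}) as ydl:
        info = ydl.extract_info(
            'https://medal.tv/clips/34934644/3Is9zyGMoBMr', download=False)
        print(info.get('id'), info.get('title'), info.get('uploader'))
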
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/nrk.py new/youtube-dl/youtube_dl/extractor/nrk.py
--- old/youtube-dl/youtube_dl/extractor/nrk.py  2020-11-23 18:22:27.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/nrk.py  2020-11-25 21:05:35.000000000 +0100
@@ -9,6 +9,7 @@
     compat_urllib_parse_unquote,
 )
 from ..utils import (
+    determine_ext,
     ExtractorError,
     int_or_none,
     js_to_json,
@@ -16,185 +17,13 @@
     parse_age_limit,
     parse_duration,
     try_get,
+    url_or_none,
 )
 
 
 class NRKBaseIE(InfoExtractor):
     _GEO_COUNTRIES = ['NO']
 
-    _api_host = None
-
-    def _real_extract(self, url):
-        video_id = self._match_id(url)
-
-        api_hosts = (self._api_host, ) if self._api_host else self._API_HOSTS
-
-        for api_host in api_hosts:
-            data = self._download_json(
-                'http://%s/mediaelement/%s' % (api_host, video_id),
-                video_id, 'Downloading mediaelement JSON',
-                fatal=api_host == api_hosts[-1])
-            if not data:
-                continue
-            self._api_host = api_host
-            break
-
-        title = data.get('fullTitle') or data.get('mainTitle') or data['title']
-        video_id = data.get('id') or video_id
-
-        entries = []
-
-        conviva = data.get('convivaStatistics') or {}
-        live = (data.get('mediaElementType') == 'Live'
-                or data.get('isLive') is True or conviva.get('isLive'))
-
-        def make_title(t):
-            return self._live_title(t) if live else t
-
-        media_assets = data.get('mediaAssets')
-        if media_assets and isinstance(media_assets, list):
-            def video_id_and_title(idx):
-                return ((video_id, title) if len(media_assets) == 1
-                        else ('%s-%d' % (video_id, idx), '%s (Part %d)' % (title, idx)))
-            for num, asset in enumerate(media_assets, 1):
-                asset_url = asset.get('url')
-                if not asset_url:
-                    continue
-                formats = self._extract_akamai_formats(asset_url, video_id)
-                if not formats:
-                    continue
-                self._sort_formats(formats)
-
-                # Some f4m streams may not work with hdcore in fragments' URLs
-                for f in formats:
-                    extra_param = f.get('extra_param_to_segment_url')
-                    if extra_param and 'hdcore' in extra_param:
-                        del f['extra_param_to_segment_url']
-
-                entry_id, entry_title = video_id_and_title(num)
-                duration = parse_duration(asset.get('duration'))
-                subtitles = {}
-                for subtitle in ('webVtt', 'timedText'):
-                    subtitle_url = asset.get('%sSubtitlesUrl' % subtitle)
-                    if subtitle_url:
-                        subtitles.setdefault('no', []).append({
-                            'url': compat_urllib_parse_unquote(subtitle_url)
-                        })
-                entries.append({
-                    'id': asset.get('carrierId') or entry_id,
-                    'title': make_title(entry_title),
-                    'duration': duration,
-                    'subtitles': subtitles,
-                    'formats': formats,
-                })
-
-        if not entries:
-            media_url = data.get('mediaUrl')
-            if media_url:
-                formats = self._extract_akamai_formats(media_url, video_id)
-                self._sort_formats(formats)
-                duration = parse_duration(data.get('duration'))
-                entries = [{
-                    'id': video_id,
-                    'title': make_title(title),
-                    'duration': duration,
-                    'formats': formats,
-                }]
-
-        if not entries:
-            MESSAGES = {
-                'ProgramRightsAreNotReady': 'Du kan dessverre ikke se eller høre programmet',
-                'ProgramRightsHasExpired': 'Programmet har gått ut',
-                'NoProgramRights': 'Ikke tilgjengelig',
-                'ProgramIsGeoBlocked': 'NRK har ikke rettigheter til å vise dette programmet utenfor Norge',
-            }
-            message_type = data.get('messageType', '')
-            # Can be ProgramIsGeoBlocked or ChannelIsGeoBlocked*
-            if 'IsGeoBlocked' in message_type:
-                self.raise_geo_restricted(
-                    msg=MESSAGES.get('ProgramIsGeoBlocked'),
-                    countries=self._GEO_COUNTRIES)
-            raise ExtractorError(
-                '%s said: %s' % (self.IE_NAME, MESSAGES.get(
-                    message_type, message_type)),
-                expected=True)
-
-        series = conviva.get('seriesName') or data.get('seriesTitle')
-        episode = conviva.get('episodeName') or data.get('episodeNumberOrDate')
-
-        season_number = None
-        episode_number = None
-        if data.get('mediaElementType') == 'Episode':
-            _season_episode = data.get('scoresStatistics', {}).get('springStreamStream') or \
-                data.get('relativeOriginUrl', '')
-            EPISODENUM_RE = [
-                r'/s(?P<season>\d{,2})e(?P<episode>\d{,2})\.',
-                r'/sesong-(?P<season>\d{,2})/episode-(?P<episode>\d{,2})',
-            ]
-            season_number = int_or_none(self._search_regex(
-                EPISODENUM_RE, _season_episode, 'season number',
-                default=None, group='season'))
-            episode_number = int_or_none(self._search_regex(
-                EPISODENUM_RE, _season_episode, 'episode number',
-                default=None, group='episode'))
-
-        thumbnails = None
-        images = data.get('images')
-        if images and isinstance(images, dict):
-            web_images = images.get('webImages')
-            if isinstance(web_images, list):
-                thumbnails = [{
-                    'url': image['imageUrl'],
-                    'width': int_or_none(image.get('width')),
-                    'height': int_or_none(image.get('height')),
-                } for image in web_images if image.get('imageUrl')]
-
-        description = data.get('description')
-        category = data.get('mediaAnalytics', {}).get('category')
-
-        common_info = {
-            'description': description,
-            'series': series,
-            'episode': episode,
-            'season_number': season_number,
-            'episode_number': episode_number,
-            'categories': [category] if category else None,
-            'age_limit': parse_age_limit(data.get('legalAge')),
-            'thumbnails': thumbnails,
-        }
-
-        vcodec = 'none' if data.get('mediaType') == 'Audio' else None
-
-        for entry in entries:
-            entry.update(common_info)
-            for f in entry['formats']:
-                f['vcodec'] = vcodec
-
-        points = data.get('shortIndexPoints')
-        if isinstance(points, list):
-            chapters = []
-            for next_num, point in enumerate(points, start=1):
-                if not isinstance(point, dict):
-                    continue
-                start_time = parse_duration(point.get('startPoint'))
-                if start_time is None:
-                    continue
-                end_time = parse_duration(
-                    data.get('duration')
-                    if next_num == len(points)
-                    else points[next_num].get('startPoint'))
-                if end_time is None:
-                    continue
-                chapters.append({
-                    'start_time': start_time,
-                    'end_time': end_time,
-                    'title': point.get('title'),
-                })
-            if chapters and len(entries) == 1:
-                entries[0]['chapters'] = chapters
-
-        return self.playlist_result(entries, video_id, title, description)
-
 
 class NRKIE(NRKBaseIE):
     _VALID_URL = r'''(?x)
@@ -202,13 +31,13 @@
                             nrk:|
                             https?://
                                 (?:
-                                    (?:www\.)?nrk\.no/video/PS\*|
+                                    (?:www\.)?nrk\.no/video/(?:PS\*|[^_]+_)|
                                     v8[-.]psapi\.nrk\.no/mediaelement/
                                 )
                             )
-                            (?P<id>[^?#&]+)
+                            (?P<id>[^?\#&]+)
                         '''
-    _API_HOSTS = ('psapi.nrk.no', 'v8-psapi.nrk.no')
+
     _TESTS = [{
         # video
         'url': 'http://www.nrk.no/video/PS*150533',
@@ -240,8 +69,76 @@
     }, {
         'url': 'https://v8-psapi.nrk.no/mediaelement/ecc1b952-96dc-4a98-81b9-5296dc7a98d9',
         'only_matching': True,
+    }, {
+        'url': 'https://www.nrk.no/video/dompap-og-andre-fugler-i-piip-show_150533',
+        'only_matching': True,
+    }, {
+        'url': 'https://www.nrk.no/video/humor/kommentatorboksen-reiser-til-sjos_d1fda11f-a4ad-437a-a374-0398bc84e999',
+        'only_matching': True,
     }]
 
+    def _extract_from_playback(self, video_id):
+        manifest = self._download_json(
+            'http://psapi.nrk.no/playback/manifest/%s' % video_id,
+            video_id, 'Downloading manifest JSON')
+
+        playable = manifest['playable']
+
+        formats = []
+        for asset in playable['assets']:
+            if not isinstance(asset, dict):
+                continue
+            if asset.get('encrypted'):
+                continue
+            format_url = url_or_none(asset.get('url'))
+            if not format_url:
+                continue
+            if asset.get('format') == 'HLS' or determine_ext(format_url) == 'm3u8':
+                formats.extend(self._extract_m3u8_formats(
+                    format_url, video_id, 'mp4', entry_protocol='m3u8_native',
+                    m3u8_id='hls', fatal=False))
+        self._sort_formats(formats)
+
+        data = self._download_json(
+            'http://psapi.nrk.no/playback/metadata/%s' % video_id,
+            video_id, 'Downloading metadata JSON')
+
+        preplay = data['preplay']
+        titles = preplay['titles']
+        title = titles['title']
+        alt_title = titles.get('subtitle')
+
+        description = preplay.get('description')
+        duration = parse_duration(playable.get('duration')) or parse_duration(data.get('duration'))
+
+        thumbnails = []
+        for image in try_get(
+                preplay, lambda x: x['poster']['images'], list) or []:
+            if not isinstance(image, dict):
+                continue
+            image_url = url_or_none(image.get('url'))
+            if not image_url:
+                continue
+            thumbnails.append({
+                'url': image_url,
+                'width': int_or_none(image.get('pixelWidth')),
+                'height': int_or_none(image.get('pixelHeight')),
+            })
+
+        return {
+            'id': video_id,
+            'title': title,
+            'alt_title': alt_title,
+            'description': description,
+            'duration': duration,
+            'thumbnails': thumbnails,
+            'formats': formats,
+        }
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        return self._extract_from_playback(video_id)
+
 
 class NRKTVIE(NRKBaseIE):
     IE_DESC = 'NRK TV and NRK Radio'
@@ -380,6 +277,181 @@
         'only_matching': True,
     }]
 
+    _api_host = None
+
+    def _extract_from_mediaelement(self, video_id):
+        api_hosts = (self._api_host, ) if self._api_host else self._API_HOSTS
+
+        for api_host in api_hosts:
+            data = self._download_json(
+                'http://%s/mediaelement/%s' % (api_host, video_id),
+                video_id, 'Downloading mediaelement JSON',
+                fatal=api_host == api_hosts[-1])
+            if not data:
+                continue
+            self._api_host = api_host
+            break
+
+        title = data.get('fullTitle') or data.get('mainTitle') or data['title']
+        video_id = data.get('id') or video_id
+
+        entries = []
+
+        conviva = data.get('convivaStatistics') or {}
+        live = (data.get('mediaElementType') == 'Live'
+                or data.get('isLive') is True or conviva.get('isLive'))
+
+        def make_title(t):
+            return self._live_title(t) if live else t
+
+        media_assets = data.get('mediaAssets')
+        if media_assets and isinstance(media_assets, list):
+            def video_id_and_title(idx):
+                return ((video_id, title) if len(media_assets) == 1
+                        else ('%s-%d' % (video_id, idx), '%s (Part %d)' % (title, idx)))
+            for num, asset in enumerate(media_assets, 1):
+                asset_url = asset.get('url')
+                if not asset_url:
+                    continue
+                formats = self._extract_akamai_formats(asset_url, video_id)
+                if not formats:
+                    continue
+                self._sort_formats(formats)
+
+                # Some f4m streams may not work with hdcore in fragments' URLs
+                for f in formats:
+                    extra_param = f.get('extra_param_to_segment_url')
+                    if extra_param and 'hdcore' in extra_param:
+                        del f['extra_param_to_segment_url']
+
+                entry_id, entry_title = video_id_and_title(num)
+                duration = parse_duration(asset.get('duration'))
+                subtitles = {}
+                for subtitle in ('webVtt', 'timedText'):
+                    subtitle_url = asset.get('%sSubtitlesUrl' % subtitle)
+                    if subtitle_url:
+                        subtitles.setdefault('no', []).append({
+                            'url': compat_urllib_parse_unquote(subtitle_url)
+                        })
+                entries.append({
+                    'id': asset.get('carrierId') or entry_id,
+                    'title': make_title(entry_title),
+                    'duration': duration,
+                    'subtitles': subtitles,
+                    'formats': formats,
+                })
+
+        if not entries:
+            media_url = data.get('mediaUrl')
+            if media_url:
+                formats = self._extract_akamai_formats(media_url, video_id)
+                self._sort_formats(formats)
+                duration = parse_duration(data.get('duration'))
+                entries = [{
+                    'id': video_id,
+                    'title': make_title(title),
+                    'duration': duration,
+                    'formats': formats,
+                }]
+
+        if not entries:
+            MESSAGES = {
+                'ProgramRightsAreNotReady': 'Du kan dessverre ikke se eller høre programmet',
+                'ProgramRightsHasExpired': 'Programmet har gått ut',
+                'NoProgramRights': 'Ikke tilgjengelig',
+                'ProgramIsGeoBlocked': 'NRK har ikke rettigheter til å vise dette programmet utenfor Norge',
+            }
+            message_type = data.get('messageType', '')
+            # Can be ProgramIsGeoBlocked or ChannelIsGeoBlocked*
+            if 'IsGeoBlocked' in message_type:
+                self.raise_geo_restricted(
+                    msg=MESSAGES.get('ProgramIsGeoBlocked'),
+                    countries=self._GEO_COUNTRIES)
+            raise ExtractorError(
+                '%s said: %s' % (self.IE_NAME, MESSAGES.get(
+                    message_type, message_type)),
+                expected=True)
+
+        series = conviva.get('seriesName') or data.get('seriesTitle')
+        episode = conviva.get('episodeName') or data.get('episodeNumberOrDate')
+
+        season_number = None
+        episode_number = None
+        if data.get('mediaElementType') == 'Episode':
+            _season_episode = data.get('scoresStatistics', {}).get('springStreamStream') or \
+                data.get('relativeOriginUrl', '')
+            EPISODENUM_RE = [
+                r'/s(?P<season>\d{,2})e(?P<episode>\d{,2})\.',
+                r'/sesong-(?P<season>\d{,2})/episode-(?P<episode>\d{,2})',
+            ]
+            season_number = int_or_none(self._search_regex(
+                EPISODENUM_RE, _season_episode, 'season number',
+                default=None, group='season'))
+            episode_number = int_or_none(self._search_regex(
+                EPISODENUM_RE, _season_episode, 'episode number',
+                default=None, group='episode'))
+
+        thumbnails = None
+        images = data.get('images')
+        if images and isinstance(images, dict):
+            web_images = images.get('webImages')
+            if isinstance(web_images, list):
+                thumbnails = [{
+                    'url': image['imageUrl'],
+                    'width': int_or_none(image.get('width')),
+                    'height': int_or_none(image.get('height')),
+                } for image in web_images if image.get('imageUrl')]
+
+        description = data.get('description')
+        category = data.get('mediaAnalytics', {}).get('category')
+
+        common_info = {
+            'description': description,
+            'series': series,
+            'episode': episode,
+            'season_number': season_number,
+            'episode_number': episode_number,
+            'categories': [category] if category else None,
+            'age_limit': parse_age_limit(data.get('legalAge')),
+            'thumbnails': thumbnails,
+        }
+
+        vcodec = 'none' if data.get('mediaType') == 'Audio' else None
+
+        for entry in entries:
+            entry.update(common_info)
+            for f in entry['formats']:
+                f['vcodec'] = vcodec
+
+        points = data.get('shortIndexPoints')
+        if isinstance(points, list):
+            chapters = []
+            for next_num, point in enumerate(points, start=1):
+                if not isinstance(point, dict):
+                    continue
+                start_time = parse_duration(point.get('startPoint'))
+                if start_time is None:
+                    continue
+                end_time = parse_duration(
+                    data.get('duration')
+                    if next_num == len(points)
+                    else points[next_num].get('startPoint'))
+                if end_time is None:
+                    continue
+                chapters.append({
+                    'start_time': start_time,
+                    'end_time': end_time,
+                    'title': point.get('title'),
+                })
+            if chapters and len(entries) == 1:
+                entries[0]['chapters'] = chapters
+
+        return self.playlist_result(entries, video_id, title, description)
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        return self._extract_from_mediaelement(video_id)
+
 
 class NRKTVEpisodeIE(InfoExtractor):
     _VALID_URL = r'https?://tv\.nrk\.no/serie/(?P<id>[^/]+/sesong/\d+/episode/\d+)'
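
The rewritten NRKIE above now builds formats from the psapi playback
manifest, keeping only unencrypted HLS assets. A sketch of that
selection step over a hand-made manifest dict (not real psapi.nrk.no
output):

    manifest = {
        'playable': {
            'assets': [
                {'url': 'https://example.invalid/master.m3u8',
                 'format': 'HLS', 'encrypted': False},
                {'url': 'https://example.invalid/drm.m3u8',
                 'format': 'HLS', 'encrypted': True},
            ],
        },
    }

    hls_urls = [
        asset['url']
        for asset in manifest['playable']['assets']
        if isinstance(asset, dict)
        and not asset.get('encrypted')
        and asset.get('format') == 'HLS']
    print(hls_urls)  # only the unencrypted asset survives
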
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/spreaker.py new/youtube-dl/youtube_dl/extractor/spreaker.py
--- old/youtube-dl/youtube_dl/extractor/spreaker.py     1970-01-01 01:00:00.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/spreaker.py     2020-11-25 21:05:35.000000000 +0100
@@ -0,0 +1,176 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+import itertools
+
+from .common import InfoExtractor
+from ..compat import compat_str
+from ..utils import (
+    float_or_none,
+    int_or_none,
+    str_or_none,
+    try_get,
+    unified_timestamp,
+    url_or_none,
+)
+
+
+def _extract_episode(data, episode_id=None):
+    title = data['title']
+    download_url = data['download_url']
+
+    series = try_get(data, lambda x: x['show']['title'], compat_str)
+    uploader = try_get(data, lambda x: x['author']['fullname'], compat_str)
+
+    thumbnails = []
+    for image in ('image_original', 'image_medium', 'image'):
+        image_url = url_or_none(data.get('%s_url' % image))
+        if image_url:
+            thumbnails.append({'url': image_url})
+
+    def stats(key):
+        return int_or_none(try_get(
+            data,
+            (lambda x: x['%ss_count' % key],
+             lambda x: x['stats']['%ss' % key])))
+
+    def duration(key):
+        return float_or_none(data.get(key), scale=1000)
+
+    return {
+        'id': compat_str(episode_id or data['episode_id']),
+        'url': download_url,
+        'display_id': data.get('permalink'),
+        'title': title,
+        'description': data.get('description'),
+        'timestamp': unified_timestamp(data.get('published_at')),
+        'uploader': uploader,
+        'uploader_id': str_or_none(data.get('author_id')),
+        'creator': uploader,
+        'duration': duration('duration') or duration('length'),
+        'view_count': stats('play'),
+        'like_count': stats('like'),
+        'comment_count': stats('message'),
+        'format': 'MPEG Layer 3',
+        'format_id': 'mp3',
+        'container': 'mp3',
+        'ext': 'mp3',
+        'thumbnails': thumbnails,
+        'series': series,
+        'extractor_key': SpreakerIE.ie_key(),
+    }
+
+
+class SpreakerIE(InfoExtractor):
+    _VALID_URL = r'''(?x)
+                    https?://
+                        api\.spreaker\.com/
+                        (?:
+                            (?:download/)?episode|
+                            v2/episodes
+                        )/
+                        (?P<id>\d+)
+                    '''
+    _TESTS = [{
+        'url': 'https://api.spreaker.com/episode/12534508',
+        'info_dict': {
+            'id': '12534508',
+            'display_id': 'swm-ep15-how-to-market-your-music-part-2',
+            'ext': 'mp3',
+            'title': 'EP:15 | Music Marketing (Likes) - Part 2',
+            'description': 'md5:0588c43e27be46423e183076fa071177',
+            'timestamp': 1502250336,
+            'upload_date': '20170809',
+            'uploader': 'SWM',
+            'uploader_id': '9780658',
+            'duration': 1063.42,
+            'view_count': int,
+            'like_count': int,
+            'comment_count': int,
+            'series': 'Success With Music (SWM)',
+        },
+    }, {
+        'url': 'https://api.spreaker.com/download/episode/12534508/swm_ep15_how_to_market_your_music_part_2.mp3',
+        'only_matching': True,
+    }, {
+        'url': 'https://api.spreaker.com/v2/episodes/12534508?export=episode_segments',
+        'only_matching': True,
+    }]
+
+    def _real_extract(self, url):
+        episode_id = self._match_id(url)
+        data = self._download_json(
+            'https://api.spreaker.com/v2/episodes/%s' % episode_id,
+            episode_id)['response']['episode']
+        return _extract_episode(data, episode_id)
+
+
+class SpreakerPageIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?spreaker\.com/user/[^/]+/(?P<id>[^/?#&]+)'
+    _TESTS = [{
+        'url': 'https://www.spreaker.com/user/9780658/swm-ep15-how-to-market-your-music-part-2',
+        'only_matching': True,
+    }]
+
+    def _real_extract(self, url):
+        display_id = self._match_id(url)
+        webpage = self._download_webpage(url, display_id)
+        episode_id = self._search_regex(
+            (r'data-episode_id=["\'](?P<id>\d+)',
+             r'episode_id\s*:\s*(?P<id>\d+)'), webpage, 'episode id')
+        return self.url_result(
+            'https://api.spreaker.com/episode/%s' % episode_id,
+            ie=SpreakerIE.ie_key(), video_id=episode_id)
+
+
+class SpreakerShowIE(InfoExtractor):
+    _VALID_URL = r'https?://api\.spreaker\.com/show/(?P<id>\d+)'
+    _TESTS = [{
+        'url': 'https://www.spreaker.com/show/3-ninjas-podcast',
+        'info_dict': {
+            'id': '4652058',
+        },
+        'playlist_mincount': 118,
+    }]
+
+    def _entries(self, show_id):
+        for page_num in itertools.count(1):
+            episodes = self._download_json(
+                'https://api.spreaker.com/show/%s/episodes' % show_id,
+                show_id, note='Downloading JSON page %d' % page_num, query={
+                    'page': page_num,
+                    'max_per_page': 100,
+                })
+            pager = try_get(episodes, lambda x: x['response']['pager'], dict)
+            if not pager:
+                break
+            results = pager.get('results')
+            if not results or not isinstance(results, list):
+                break
+            for result in results:
+                if not isinstance(result, dict):
+                    continue
+                yield _extract_episode(result)
+            if page_num == pager.get('last_page'):
+                break
+
+    def _real_extract(self, url):
+        show_id = self._match_id(url)
+        return self.playlist_result(self._entries(show_id), playlist_id=show_id)
+
+
+class SpreakerShowPageIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?spreaker\.com/show/(?P<id>[^/?#&]+)'
+    _TESTS = [{
+        'url': 'https://www.spreaker.com/show/success-with-music',
+        'only_matching': True,
+    }]
+
+    def _real_extract(self, url):
+        display_id = self._match_id(url)
+        webpage = self._download_webpage(url, display_id)
+        show_id = self._search_regex(
+            r'show_id\s*:\s*(?P<id>\d+)', webpage, 'show id')
+        return self.url_result(
+            'https://api.spreaker.com/show/%s' % show_id,
+            ie=SpreakerShowIE.ie_key(), video_id=show_id)
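
The paging loop in SpreakerShowIE._entries() above follows a common
pattern: request page 1, 2, ... until the API's pager reports the last
page. A generic sketch with a stubbed fetch function standing in for
the real _download_json() call:

    import itertools

    def fetch_page(page_num):
        # Stub: pretend the API serves 3 pages of two episodes each.
        return {'results': ['ep%d' % (2 * page_num - 1),
                            'ep%d' % (2 * page_num)],
                'last_page': 3}

    def entries():
        for page_num in itertools.count(1):
            pager = fetch_page(page_num)
            results = pager.get('results')
            if not results:
                break
            for result in results:
                yield result
            if page_num == pager.get('last_page'):
                break

    print(list(entries()))  # -> ['ep1', 'ep2', 'ep3', 'ep4', 'ep5', 'ep6']
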
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/viki.py new/youtube-dl/youtube_dl/extractor/viki.py
--- old/youtube-dl/youtube_dl/extractor/viki.py 2020-11-23 18:22:27.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/viki.py 2020-11-25 21:05:35.000000000 +0100
@@ -20,6 +20,7 @@
     parse_age_limit,
     parse_iso8601,
     sanitized_Request,
+    std_headers,
 )
 
 
@@ -226,8 +227,10 @@
 
         resp = self._download_json(
             'https://www.viki.com/api/videos/' + video_id,
-            video_id, 'Downloading video JSON',
-            headers={'x-viki-app-ver': '4.0.57'})
+            video_id, 'Downloading video JSON', headers={
+                'x-client-user-agent': std_headers['User-Agent'],
+                'x-viki-app-ver': '4.0.57',
+            })
         video = resp['video']
 
         self._check_errors(video)
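
The viki.py fix above simply sends one extra header alongside the app
version; for reference, the header dict it now builds (std_headers is
the real youtube_dl.utils dict carrying the default User-Agent):

    from youtube_dl.utils import std_headers

    headers = {
        'x-client-user-agent': std_headers['User-Agent'],
        'x-viki-app-ver': '4.0.57',
    }
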
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/vlive.py new/youtube-dl/youtube_dl/extractor/vlive.py
--- old/youtube-dl/youtube_dl/extractor/vlive.py        2020-11-23 18:22:27.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/vlive.py        2020-11-25 21:05:35.000000000 +0100
@@ -13,6 +13,8 @@
     ExtractorError,
     int_or_none,
     merge_dicts,
+    str_or_none,
+    strip_or_none,
     try_get,
     urlencode_postdata,
 )
@@ -66,6 +68,10 @@
     }, {
         'url': 'https://www.vlive.tv/embed/1326',
         'only_matching': True,
+    }, {
+        # works only with gcc=KR
+        'url': 'https://www.vlive.tv/video/225019',
+        'only_matching': True,
     }]
 
     def _real_initialize(self):
@@ -100,26 +106,26 @@
             raise ExtractorError('Unable to log in', expected=True)
 
     def _call_api(self, path_template, video_id, fields=None):
-        query = {'appId': self._APP_ID}
+        query = {'appId': self._APP_ID, 'gcc': 'KR'}
         if fields:
             query['fields'] = fields
-        return self._download_json(
-            'https://www.vlive.tv/globalv-web/vam-web/' + path_template % video_id, video_id,
-            'Downloading %s JSON metadata' % path_template.split('/')[-1].split('-')[0],
-            headers={'Referer': 'https://www.vlive.tv/'}, query=query)
-
-    def _real_extract(self, url):
-        video_id = self._match_id(url)
-
         try:
-            post = self._call_api(
-                'post/v1.0/officialVideoPost-%s', video_id,
-                'author{nickname},channel{channelCode,channelName},officialVideo{commentCount,exposeStatus,likeCount,playCount,playTime,status,title,type,vodId}')
+            return self._download_json(
+                'https://www.vlive.tv/globalv-web/vam-web/' + path_template % video_id, video_id,
+                'Downloading %s JSON metadata' % path_template.split('/')[-1].split('-')[0],
+                headers={'Referer': 'https://www.vlive.tv/'}, query=query)
+                headers={'Referer': 'https://www.vlive.tv/'}, query=query)
         except ExtractorError as e:
             if isinstance(e.cause, compat_HTTPError) and e.cause.code == 403:
                 self.raise_login_required(json.loads(e.cause.read().decode())['message'])
             raise
 
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+
+        post = self._call_api(
+            'post/v1.0/officialVideoPost-%s', video_id,
+            'author{nickname},channel{channelCode,channelName},officialVideo{commentCount,exposeStatus,likeCount,playCount,playTime,status,title,type,vodId}')
+
         video = post['officialVideo']
 
         def get_common_fields():
@@ -170,6 +176,83 @@
                 raise ExtractorError('Unknown status ' + status)
 
 
+class VLivePostIE(VLiveIE):
+    IE_NAME = 'vlive:post'
+    _VALID_URL = r'https?://(?:(?:www|m)\.)?vlive\.tv/post/(?P<id>\d-\d+)'
+    _TESTS = [{
+        # uploadType = SOS
+        'url': 'https://www.vlive.tv/post/1-20088044',
+        'info_dict': {
+            'id': '1-20088044',
+            'title': 'Hola estrellitas la tierra les dice hola (si era así no?) Ha...',
+            'description': 'md5:fab8a1e50e6e51608907f46c7fa4b407',
+        },
+        'playlist_count': 3,
+    }, {
+        # uploadType = V
+        'url': 'https://www.vlive.tv/post/1-20087926',
+        'info_dict': {
+            'id': '1-20087926',
+            'title': 'James Corden: And so, the baby becamos the Papa💜😭💪😭',
+        },
+        'playlist_count': 1,
+    }]
+    _FVIDEO_TMPL = 'fvideo/v1.0/fvideo-%%s/%s'
+    _SOS_TMPL = _FVIDEO_TMPL % 'sosPlayInfo'
+    _INKEY_TMPL = _FVIDEO_TMPL % 'inKey'
+
+    def _real_extract(self, url):
+        post_id = self._match_id(url)
+
+        post = self._call_api(
+            'post/v1.0/post-%s', post_id,
+            'attachments{video},officialVideo{videoSeq},plainBody,title')
+
+        video_seq = str_or_none(try_get(
+            post, lambda x: x['officialVideo']['videoSeq']))
+        if video_seq:
+            return self.url_result(
+                'http://www.vlive.tv/video/' + video_seq,
+                VLiveIE.ie_key(), video_seq)
+
+        title = post['title']
+        entries = []
+        for idx, video in enumerate(post['attachments']['video'].values()):
+            video_id = video.get('videoId')
+            if not video_id:
+                continue
+            upload_type = video.get('uploadType')
+            upload_info = video.get('uploadInfo') or {}
+            entry = None
+            if upload_type == 'SOS':
+                download = self._call_api(
+                    self._SOS_TMPL, video_id)['videoUrl']['download']
+                formats = []
+                for f_id, f_url in download.items():
+                    formats.append({
+                        'format_id': f_id,
+                        'url': f_url,
+                        'height': int_or_none(f_id[:-1]),
+                    })
+                self._sort_formats(formats)
+                entry = {
+                    'formats': formats,
+                    'id': video_id,
+                    'thumbnail': upload_info.get('imageUrl'),
+                }
+            elif upload_type == 'V':
+                vod_id = upload_info.get('videoId')
+                if not vod_id:
+                    continue
+                inkey = self._call_api(self._INKEY_TMPL, video_id)['inKey']
+                entry = self._extract_video_info(video_id, vod_id, inkey)
+            if entry:
+                entry['title'] = '%s_part%s' % (title, idx)
+                entries.append(entry)
+        return self.playlist_result(
+            entries, post_id, title, strip_or_none(post.get('plainBody')))
+
+
 class VLiveChannelIE(VLiveBaseIE):
     IE_NAME = 'vlive:channel'
     _VALID_URL = r'https?://(?:channels\.vlive\.tv|(?:(?:www|m)\.)?vlive\.tv/channel)/(?P<id>[0-9A-Z]+)'
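
One detail of the new VLivePostIE worth spelling out is the doubled
percent sign in _FVIDEO_TMPL: the class-level %-substitution fills in
the endpoint name first and leaves a '%s' placeholder for the video id.
A quick worked example ('12345' is a made-up id):

    _FVIDEO_TMPL = 'fvideo/v1.0/fvideo-%%s/%s'
    _SOS_TMPL = _FVIDEO_TMPL % 'sosPlayInfo'
    _INKEY_TMPL = _FVIDEO_TMPL % 'inKey'

    print(_SOS_TMPL)              # fvideo/v1.0/fvideo-%s/sosPlayInfo
    print(_INKEY_TMPL % '12345')  # fvideo/v1.0/fvideo-12345/inKey
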
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/youtube.py new/youtube-dl/youtube_dl/extractor/youtube.py
--- old/youtube-dl/youtube_dl/extractor/youtube.py      2020-11-23 18:22:33.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/youtube.py      2020-11-25 21:05:35.000000000 +0100
@@ -2162,7 +2162,7 @@
         # Youtube Music Auto-generated description
         release_date = release_year = None
         if video_description:
-            mobj = re.search(r'(?s)Provided to YouTube by [^\n]+\n+(?P<track>[^·]+)·(?P<artist>[^\n]+)\n+(?P<album>[^\n]+)(?:.+?℗\s*(?P<release_year>\d{4})(?!\d))?(?:.+?Released on\s*:\s*(?P<release_date>\d{4}-\d{2}-\d{2}))?(.+?\nArtist\s*:\s*(?P<clean_artist>[^\n]+))?', video_description)
+            mobj = re.search(r'(?s)(?P<track>[^·\n]+)·(?P<artist>[^\n]+)\n+(?P<album>[^\n]+)(?:.+?℗\s*(?P<release_year>\d{4})(?!\d))?(?:.+?Released on\s*:\s*(?P<release_date>\d{4}-\d{2}-\d{2}))?(.+?\nArtist\s*:\s*(?P<clean_artist>[^\n]+))?.+\nAuto-generated by YouTube\.\s*$', video_description)
             if mobj:
                 if not track:
                     track = mobj.group('track').strip()
@@ -2179,6 +2179,34 @@
                 if release_year:
                     release_year = int(release_year)
 
+        yt_initial_data = self._extract_yt_initial_data(video_id, video_webpage)
+        contents = try_get(yt_initial_data, lambda x: x['contents']['twoColumnWatchNextResults']['results']['results']['contents'], list) or []
+        for content in contents:
+            rows = try_get(content, lambda x: x['videoSecondaryInfoRenderer']['metadataRowContainer']['metadataRowContainerRenderer']['rows'], list) or []
+            multiple_songs = False
+            for row in rows:
+                if try_get(row, lambda x: x['metadataRowRenderer']['hasDividerLine']) is True:
+                    multiple_songs = True
+                    break
+            for row in rows:
+                mrr = row.get('metadataRowRenderer') or {}
+                mrr_title = try_get(
+                    mrr, lambda x: x['title']['simpleText'], compat_str)
+                mrr_contents = try_get(
+                    mrr, lambda x: x['contents'][0], dict) or {}
+                mrr_contents_text = try_get(mrr_contents, [lambda x: x['simpleText'], lambda x: x['runs'][0]['text']], compat_str)
+                if not (mrr_title and mrr_contents_text):
+                    continue
+                if mrr_title == 'License':
+                    video_license = mrr_contents_text
+                elif not multiple_songs:
+                    if mrr_title == 'Album':
+                        album = mrr_contents_text
+                    elif mrr_title == 'Artist':
+                        artist = mrr_contents_text
+                    elif mrr_title == 'Song':
+                        track = mrr_contents_text
+
         m_episode = re.search(
             r'<div[^>]+id="watch7-headline"[^>]*>\s*<span[^>]*>.*?>(?P<series>[^<]+)</a></b>\s*S(?P<season>\d+)\s*•\s*E(?P<episode>\d+)</span>',
             video_webpage)
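
A worked example of the rewritten music-metadata regex above, applied
to an invented Auto-generated description (the label, artist and album
names are illustrative):

    import re

    video_description = (
        'Provided to YouTube by SomeLabel\n\n'
        'Song Title · Some Artist\n\nSome Album\n\n'
        '℗ 2020 SomeLabel\n\nReleased on: 2020-11-20\n\n'
        'Auto-generated by YouTube.')

    mobj = re.search(
        r'(?s)(?P<track>[^·\n]+)·(?P<artist>[^\n]+)\n+(?P<album>[^\n]+)'
        r'(?:.+?℗\s*(?P<release_year>\d{4})(?!\d))?'
        r'(?:.+?Released on\s*:\s*(?P<release_date>\d{4}-\d{2}-\d{2}))?'
        r'(.+?\nArtist\s*:\s*(?P<clean_artist>[^\n]+))?'
        r'.+\nAuto-generated by YouTube\.\s*$', video_description)
    print(mobj.group('track').strip())   # Song Title
    print(mobj.group('artist').strip())  # Some Artist
    print(mobj.group('release_date'))    # 2020-11-20
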
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/version.py new/youtube-dl/youtube_dl/version.py
--- old/youtube-dl/youtube_dl/version.py        2020-11-23 18:23:11.000000000 +0100
+++ new/youtube-dl/youtube_dl/version.py        2020-11-25 21:05:47.000000000 +0100
@@ -1,3 +1,3 @@
 from __future__ import unicode_literals
 
-__version__ = '2020.11.24'
+__version__ = '2020.11.26'