Hello community,

here is the log from the commit of package streamlink for openSUSE:Factory checked in at 2024-07-08 19:07:22
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/streamlink (Old)
 and      /work/SRC/openSUSE:Factory/.streamlink.new.2080 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "streamlink"

Mon Jul  8 19:07:22 2024 rev:14 rq:1185837 version:6.8.2

Changes:
--------
--- /work/SRC/openSUSE:Factory/streamlink/streamlink.changes    2024-06-21 16:04:05.764025622 +0200
+++ /work/SRC/openSUSE:Factory/.streamlink.new.2080/streamlink.changes  2024-07-08 19:07:42.597532556 +0200
@@ -1,0 +2,12 @@
+Fri Jul  5 07:25:07 UTC 2024 - Richard Rahl <[email protected]>
+
+- update to 6.8.2:
+  * douyin: new plugin
+  * huya: fixed stream URLs
+  * pluzz: fixed API URL, stream tokens and validation schemas
+  * twitch: added info log messages about ad break durations
+  * twitch: fixed clip URLs
+  * twitch: fixed discontinuity warning spam in certain circumstances
+  * vidio: fixed stream tokens, added metadata
+
+-------------------------------------------------------------------

Old:
----
  streamlink-6.8.1.tar.gz
  streamlink-6.8.1.tar.gz.asc

New:
----
  streamlink-6.8.2.tar.gz
  streamlink-6.8.2.tar.gz.asc

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ streamlink.spec ++++++
--- /var/tmp/diff_new_pack.V7uIbk/_old  2024-07-08 19:07:43.805576736 +0200
+++ /var/tmp/diff_new_pack.V7uIbk/_new  2024-07-08 19:07:43.805576736 +0200
@@ -24,7 +24,7 @@
 %define         psuffix %nil
 %endif
 Name:           streamlink%{psuffix}
-Version:        6.8.1
+Version:        6.8.2
 Release:        0
 Summary:        Program to pipe streams from services into a video player
 License:        Apache-2.0 AND BSD-2-Clause

++++++ streamlink-6.8.1.tar.gz -> streamlink-6.8.2.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/streamlink-6.8.1/CHANGELOG.md new/streamlink-6.8.2/CHANGELOG.md
--- old/streamlink-6.8.1/CHANGELOG.md   2024-06-18 14:42:16.000000000 +0200
+++ new/streamlink-6.8.2/CHANGELOG.md   2024-07-04 20:45:42.000000000 +0200
@@ -1,5 +1,21 @@
 # Changelog
 
+## streamlink 6.8.2 (2024-07-04)
+
+Patch release:
+
+- Updated plugins:
+  - douyin: new plugin ([#6059](https://github.com/streamlink/streamlink/pull/6059))
+  - huya: fixed stream URLs ([#6058](https://github.com/streamlink/streamlink/pull/6058))
+  - pluzz: fixed API URL, stream tokens and validation schemas ([#6048](https://github.com/streamlink/streamlink/pull/6048))
+  - twitch: added info log messages about ad break durations ([#6051](https://github.com/streamlink/streamlink/pull/6051))
+  - twitch: fixed clip URLs ([#6045](https://github.com/streamlink/streamlink/pull/6045))
+  - twitch: fixed discontinuity warning spam in certain circumstances ([#6022](https://github.com/streamlink/streamlink/pull/6022))
+  - vidio: fixed stream tokens, added metadata ([#6057](https://github.com/streamlink/streamlink/pull/6057))
+
+[Full changelog](https://github.com/streamlink/streamlink/compare/6.8.1...6.8.2)
+
+
 ## streamlink 6.8.1 (2024-06-18)
 
 Patch release:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/streamlink-6.8.1/PKG-INFO new/streamlink-6.8.2/PKG-INFO
--- old/streamlink-6.8.1/PKG-INFO       2024-06-18 14:42:51.425662800 +0200
+++ new/streamlink-6.8.2/PKG-INFO       2024-07-04 20:46:17.149702800 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: streamlink
-Version: 6.8.1
+Version: 6.8.2
 Summary: Streamlink is a command-line utility that extracts streams from various services and pipes them into a video player of choice.
 Author: Streamlink
 Author-email: [email protected]
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/streamlink-6.8.1/dev-requirements.txt new/streamlink-6.8.2/dev-requirements.txt
--- old/streamlink-6.8.1/dev-requirements.txt   2024-06-18 14:42:16.000000000 +0200
+++ new/streamlink-6.8.2/dev-requirements.txt   2024-07-04 20:45:42.000000000 +0200
@@ -15,10 +15,10 @@
 coverage[toml]
 
 # code-linting
-ruff ==0.4.9
+ruff ==0.5.0
 
 # typing
-mypy ==1.10.0
+mypy ==1.10.1
 lxml-stubs
 trio-typing
 types-freezegun
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/streamlink-6.8.1/docs/_build/man/streamlink.1 new/streamlink-6.8.2/docs/_build/man/streamlink.1
--- old/streamlink-6.8.1/docs/_build/man/streamlink.1   2024-06-18 14:42:45.000000000 +0200
+++ new/streamlink-6.8.2/docs/_build/man/streamlink.1   2024-07-04 20:46:11.000000000 +0200
@@ -27,7 +27,7 @@
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "STREAMLINK" "1" "Jun 18, 2024" "6.8.1" "Streamlink"
+.TH "STREAMLINK" "1" "Jul 04, 2024" "6.8.2" "Streamlink"
 .SH NAME
 streamlink \- extracts streams from various services and pipes them into a video player of choice
 .SH SYNOPSIS
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/streamlink-6.8.1/docs/changelog.md new/streamlink-6.8.2/docs/changelog.md
--- old/streamlink-6.8.1/docs/changelog.md      2024-06-18 14:42:16.000000000 +0200
+++ new/streamlink-6.8.2/docs/changelog.md      2024-07-04 20:45:42.000000000 +0200
@@ -1,5 +1,21 @@
 # Changelog
 
+## streamlink 6.8.2 (2024-07-04)
+
+Patch release:
+
+- Updated plugins:
+  - douyin: new plugin ([#6059](https://github.com/streamlink/streamlink/pull/6059))
+  - huya: fixed stream URLs ([#6058](https://github.com/streamlink/streamlink/pull/6058))
+  - pluzz: fixed API URL, stream tokens and validation schemas ([#6048](https://github.com/streamlink/streamlink/pull/6048))
+  - twitch: added info log messages about ad break durations ([#6051](https://github.com/streamlink/streamlink/pull/6051))
+  - twitch: fixed clip URLs ([#6045](https://github.com/streamlink/streamlink/pull/6045))
+  - twitch: fixed discontinuity warning spam in certain circumstances ([#6022](https://github.com/streamlink/streamlink/pull/6022))
+  - vidio: fixed stream tokens, added metadata ([#6057](https://github.com/streamlink/streamlink/pull/6057))
+
+[Full changelog](https://github.com/streamlink/streamlink/compare/6.8.1...6.8.2)
+
+
 ## streamlink 6.8.1 (2024-06-18)
 
 Patch release:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/streamlink-6.8.1/setup.py new/streamlink-6.8.2/setup.py
--- old/streamlink-6.8.1/setup.py       2024-06-18 14:42:51.429662700 +0200
+++ new/streamlink-6.8.2/setup.py       2024-07-04 20:46:17.149702800 +0200
@@ -90,5 +90,5 @@
         cmdclass=get_cmdclasses(cmdclass),
         entry_points=entry_points,
         data_files=data_files,
-        version="6.8.1",
+        version="6.8.2",
     )
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/streamlink-6.8.1/src/streamlink/_version.py new/streamlink-6.8.2/src/streamlink/_version.py
--- old/streamlink-6.8.1/src/streamlink/_version.py     2024-06-18 14:42:51.429662700 +0200
+++ new/streamlink-6.8.2/src/streamlink/_version.py     2024-07-04 20:46:17.149702800 +0200
@@ -1 +1 @@
-__version__ = "6.8.1"
+__version__ = "6.8.2"
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/streamlink-6.8.1/src/streamlink/plugin/api/useragents.py new/streamlink-6.8.2/src/streamlink/plugin/api/useragents.py
--- old/streamlink-6.8.1/src/streamlink/plugin/api/useragents.py        2024-06-18 14:42:16.000000000 +0200
+++ new/streamlink-6.8.2/src/streamlink/plugin/api/useragents.py        2024-07-04 20:45:42.000000000 +0200
@@ -1,10 +1,10 @@
-ANDROID = "Mozilla/5.0 (Linux; Android 14) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.6422.147 Mobile Safari/537.36"
-CHROME = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36"
+ANDROID = "Mozilla/5.0 (Linux; Android 14) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.6478.122 Mobile Safari/537.36"
+CHROME = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36"
 CHROME_OS = "Mozilla/5.0 (X11; CrOS x86_64 15633.69.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.6045.212 Safari/537.36"
-FIREFOX = "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:126.0) Gecko/20100101 Firefox/126.0"
+FIREFOX = "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:127.0) Gecko/20100101 Firefox/127.0"
 IE_11 = "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"
 IPHONE = "Mozilla/5.0 (iPhone; CPU iPhone OS 17_5_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.4.1 Mobile/15E148 Safari/604.1"
-OPERA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36 OPR/111.0.0.0"
+OPERA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36 OPR/111.0.0.0"
 SAFARI = "Mozilla/5.0 (Macintosh; Intel Mac OS X 14_5) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.4.1 Safari/605.1.15"
 
 # Backwards compatibility
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/streamlink-6.8.1/src/streamlink/plugins/douyin.py new/streamlink-6.8.2/src/streamlink/plugins/douyin.py
--- old/streamlink-6.8.1/src/streamlink/plugins/douyin.py       1970-01-01 01:00:00.000000000 +0100
+++ new/streamlink-6.8.2/src/streamlink/plugins/douyin.py       2024-07-04 20:45:42.000000000 +0200
@@ -0,0 +1,153 @@
+"""
+$description Chinese live-streaming platform owned by ByteDance.
+$url live.douyin.com
+$type live
+$metadata id
+$metadata author
+$metadata title
+"""
+
+import logging
+import re
+import uuid
+from typing import Dict
+
+from streamlink.plugin import Plugin, pluginmatcher
+from streamlink.plugin.api import validate
+from streamlink.stream.http import HTTPStream
+from streamlink.utils.url import update_scheme
+
+
+log = logging.getLogger(__name__)
+
+
+@pluginmatcher(
+    re.compile(
+        r"https?://(?:live\.)?douyin\.com/(?P<room_id>[^/?]+)",
+    ),
+)
+class Douyin(Plugin):
+    _STATUS_LIVE = 2
+
+    QUALITY_WEIGHTS: Dict[str, int] = {}
+
+    @classmethod
+    def stream_weight(cls, key):
+        weight = cls.QUALITY_WEIGHTS.get(key)
+        if weight:
+            return weight, key
+
+        return super().stream_weight(key)
+
+    SCHEMA_ROOM_STORE = validate.all(
+        {
+            "roomInfo": {
+                # "room" and "anchor" keys are missing on invalid channels
+                validate.optional("room"): validate.all(
+                    {
+                        "id_str": str,
+                        "status": int,
+                        "title": str,
+                    },
+                    validate.union_get(
+                        "status",
+                        "id_str",
+                        "title",
+                    ),
+                ),
+                validate.optional("anchor"): validate.all(
+                    {"nickname": str},
+                    validate.get("nickname"),
+                ),
+            },
+        },
+        validate.union_get(
+            ("roomInfo", "room"),
+            ("roomInfo", "anchor"),
+        ),
+    )
+
+    SCHEMA_STREAM_STORE = validate.all(
+        {
+            "streamData": {
+                "H264_streamData": {
+                    # "stream" value is `none` on offline/invalid channels
+                    "stream": validate.none_or_all(
+                        {
+                            str: validate.all(
+                                {
+                                    "main": {
+                                        # HLS stream URLs are multivariant streams but only with a single media playlist,
+                                        # so avoid using HLS in favor of having reduced stream lookup/start times
+                                        "flv": validate.any("", validate.url()),
+                                        "sdk_params": validate.all(
+                                            validate.parse_json(),
+                                            {"vbitrate": int},
+                                            validate.get("vbitrate"),
+                                        ),
+                                    },
+                                },
+                                validate.union_get(
+                                    ("main", "sdk_params"),
+                                    ("main", "flv"),
+                                ),
+                            ),
+                        },
+                    ),
+                },
+            },
+        },
+        validate.get(("streamData", "H264_streamData", "stream")),
+    )
+
+    def _get_streams(self):
+        data = self.session.http.get(
+            self.url,
+            cookies={
+                "__ac_nonce": uuid.uuid4().hex[:21],
+            },
+            schema=validate.Schema(
+                re.compile(r"self\.__pace_f\.push\(\[\d,(?P<json_string>\"[a-z]:.+?\")]\)</script>"),
+                validate.none_or_all(
+                    validate.get("json_string"),
+                    validate.parse_json(),
+                    validate.transform(lambda s: re.sub(r"^[a-z]:", "", s)),
+                    validate.parse_json(),
+                    list,
+                    validate.filter(lambda item: isinstance(item, dict) and "state" in item),
+                    validate.length(1),
+                    validate.get((0, "state")),
+                    {
+                        "roomStore": self.SCHEMA_ROOM_STORE,
+                        "streamStore": self.SCHEMA_STREAM_STORE,
+                    },
+                    validate.union_get(
+                        "roomStore",
+                        "streamStore",
+                    ),
+                ),
+            ),
+        )
+        if not data:
+            return
+
+        (room_info, self.author), stream_data = data
+        if not room_info:
+            return
+
+        status, self.id, self.title = room_info
+        if status != self._STATUS_LIVE:
+            log.info("The channel is currently offline")
+            return
+
+        for name, (vbitrate, url) in stream_data.items():
+            if not url:
+                continue
+            self.QUALITY_WEIGHTS[name] = vbitrate
+            url = update_scheme("https://", url, force=True)
+            yield name, HTTPStream(self.session, url)
+
+        log.debug(f"{self.QUALITY_WEIGHTS=!r}")
+
+
+__plugin__ = Douyin
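The new douyin plugin's schema digs the player state out of a double-encoded `self.__pace_f.push(...)` payload: the regex captures a JSON string literal, which is decoded once, stripped of its single-letter flight-data prefix (e.g. `a:`), then decoded again into a list that contains the `state` object. A minimal standalone sketch of that extraction, using a fabricated HTML fragment that merely mimics the expected structure:

```python
import json
import re

# Fabricated page fragment mirroring the double-encoded payload shape
html = (
    '<script>self.__pace_f.push([1,"a:[null,{\\"state\\":'
    '{\\"roomStore\\":{},\\"streamStore\\":{}}}]"])</script>'
)

match = re.search(
    r"self\.__pace_f\.push\(\[\d,(?P<json_string>\"[a-z]:.+?\")]\)</script>",
    html,
)
inner = json.loads(match["json_string"])  # first decode: yields 'a:[null,{...}]'
inner = re.sub(r"^[a-z]:", "", inner)     # strip the single-letter prefix
items = json.loads(inner)                 # second decode: a plain JSON list
state = next(i["state"] for i in items if isinstance(i, dict) and "state" in i)
print(sorted(state))  # ['roomStore', 'streamStore']
```

The real plugin performs the same two-pass decode through `validate.parse_json()` and `validate.transform()` in its schema pipeline.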
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/streamlink-6.8.1/src/streamlink/plugins/huya.py new/streamlink-6.8.2/src/streamlink/plugins/huya.py
--- old/streamlink-6.8.1/src/streamlink/plugins/huya.py 2024-06-18 14:42:16.000000000 +0200
+++ new/streamlink-6.8.2/src/streamlink/plugins/huya.py 2024-07-04 20:45:42.000000000 +0200
@@ -8,31 +8,41 @@
 """
 
 import base64
+import hashlib
 import logging
+import random
 import re
+import sys
+import time
 from html import unescape as html_unescape
 from typing import Dict
-from urllib.parse import parse_qsl
+from urllib.parse import parse_qsl, unquote
 
 from streamlink.plugin import Plugin, pluginmatcher
 from streamlink.plugin.api import validate
 from streamlink.stream.http import HTTPStream
-from streamlink.utils.url import update_qsd, update_scheme
+from streamlink.utils.url import update_scheme
 
 
 log = logging.getLogger(__name__)
 
 
-@pluginmatcher(re.compile(
-    r"https?://(?:www\.)?huya\.com/(?P<channel>[^/?]+)",
-))
+@pluginmatcher(
+    re.compile(
+        r"https?://(?:www\.)?huya\.com/(?P<channel>[^/?]+)",
+    ),
+)
 class Huya(Plugin):
     QUALITY_WEIGHTS: Dict[str, int] = {}
 
-    _QUALITY_WEIGHTS_OVERRIDE = {
-        "source_hy": -1000,  # SSLCertVerificationError
+    _STREAM_URL_QUERYSTRING_PARAMS = "wsTime", "fm", "ctype", "fs"
+
+    _CONSTANTS = {
+        "t": 100,
+        "ver": 1,
+        "sv": 2401090219,
+        "codec": 264,
     }
-    _STREAM_URL_QUERYSTRING_PARAMS = "wsSecret", "wsTime"
 
     @classmethod
     def stream_weight(cls, key):
@@ -43,81 +53,119 @@
         return super().stream_weight(key)
 
     def _get_streams(self):
-        data = self.session.http.get(self.url, schema=validate.Schema(
-            validate.parse_html(),
-            validate.xml_xpath_string(".//script[contains(text(),'var hyPlayerConfig = {')][1]/text()"),
-            validate.none_or_all(
-                re.compile(r"""(?P<q>"?)stream(?P=q)\s*:\s*(?:"(?P<base64>.+?)"|(?P<json>\{.+?})\s*}\s*;)"""),
-            ),
-            validate.none_or_all(
-                validate.any(
-                    validate.all(
-                        validate.get("base64"),
-                        str,
-                        validate.transform(base64.b64decode),
-                    ),
-                    validate.all(
-                        validate.get("json"),
-                        str,
-                    ),
+        data = self.session.http.get(
+            self.url,
+            schema=validate.Schema(
+                validate.parse_html(),
+                validate.xml_xpath_string(".//script[contains(text(),'var hyPlayerConfig = {')][1]/text()"),
+                validate.none_or_all(
+                    re.compile(r"""(?P<q>"?)stream(?P=q)\s*:\s*(?:"(?P<base64>.+?)"|(?P<json>\{.+?})\s*}\s*;)"""),
                 ),
-                validate.parse_json(),
-                {
-                    "data": [{
-                        "gameLiveInfo": {
-                            "liveId": str,
-                            "nick": str,
-                            "roomName": str,
-                        },
-                        "gameStreamInfoList": [validate.all(
+                validate.none_or_all(
+                    validate.any(
+                        validate.all(
+                            validate.get("base64"),
+                            str,
+                            validate.transform(base64.b64decode),
+                        ),
+                        validate.all(
+                            validate.get("json"),
+                            str,
+                        ),
+                    ),
+                    validate.parse_json(),
+                    {
+                        "data": [
                             {
-                                "sCdnType": str,
-                                "iPCPriorityRate": int,
-                                "sStreamName": str,
-                                "sFlvUrl": str,
-                                "sFlvUrlSuffix": str,
-                                "sFlvAntiCode": validate.all(str, validate.transform(html_unescape)),
+                                "gameLiveInfo": {
+                                    "liveId": str,
+                                    "nick": str,
+                                    "roomName": str,
+                                },
+                                "gameStreamInfoList": [
+                                    validate.all(
+                                        {
+                                            "sCdnType": str,
+                                            "sStreamName": str,
+                                            "sFlvUrl": str,
+                                            "sFlvUrlSuffix": str,
+                                            "sFlvAntiCode": validate.all(str, validate.transform(html_unescape)),
+                                        },
+                                        validate.union_get(
+                                            "sCdnType",
+                                            "sStreamName",
+                                            "sFlvUrl",
+                                            "sFlvUrlSuffix",
+                                            "sFlvAntiCode",
+                                        ),
+                                    ),
+                                ],
                             },
-                            validate.union_get(
-                                "sCdnType",
-                                "iPCPriorityRate",
-                                "sStreamName",
-                                "sFlvUrl",
-                                "sFlvUrlSuffix",
-                                "sFlvAntiCode",
-                            )),
                         ],
-                    }],
-                },
-                validate.get(("data", 0)),
-                validate.union_get(
-                    ("gameLiveInfo", "liveId"),
-                    ("gameLiveInfo", "nick"),
-                    ("gameLiveInfo", "roomName"),
-                    "gameStreamInfoList",
+                        "vMultiStreamInfo": [{"iBitRate": int}],
+                    },
+                    validate.union_get(
+                        ("data", 0, "gameLiveInfo", "liveId"),
+                        ("data", 0, "gameLiveInfo", "nick"),
+                        ("data", 0, "gameLiveInfo", "roomName"),
+                        ("data", 0, "gameStreamInfoList"),
+                        "vMultiStreamInfo",
+                    ),
                 ),
             ),
-        ))
+        )
         if not data:
             return
 
-        self.id, self.author, self.title, streamdata = data
+        self.id, self.author, self.title, streamdata, v_multi_stream_info = data
 
         self.session.http.headers.update({
             "Origin": "https://www.huya.com",
             "Referer": "https://www.huya.com/",
         })
 
-        for cdntype, priority, streamname, flvurl, suffix, anticode in streamdata:
-            qs = {k: v for k, v in dict(parse_qsl(anticode)).items() if k in self._STREAM_URL_QUERYSTRING_PARAMS}
-            url = update_scheme("https://", f"{flvurl}/{streamname}.{suffix}")
-            url = update_qsd(url, qs)
-
-            name = f"source_{cdntype.lower()}"
-            self.QUALITY_WEIGHTS[name] = self._QUALITY_WEIGHTS_OVERRIDE.get(name, priority)
-            yield name, HTTPStream(self.session, url)
+        for cdn_type, stream_name, flvurl, suffix, anticode in streamdata:
+            for v_stream_info in v_multi_stream_info:
+                i_bit_rate = v_stream_info["iBitRate"]
+                qs = {k: v for k, v in dict(parse_qsl(anticode)).items() if k in self._STREAM_URL_QUERYSTRING_PARAMS}
+                url = update_scheme("https://", f"{flvurl}/{stream_name}.{suffix}")
+                params = self._get_stream_params(
+                    qs.get("fm", ""),
+                    qs.get("fs", ""),
+                    qs.get("ctype", "huya_live"),
+                    qs.get("wsTime", ""),
+                    stream_name,
+                    i_bit_rate,
+                )
+                name = f"{cdn_type.lower()}_{'source' if i_bit_rate == 0 else f'{i_bit_rate}k'}"
+                weight = sys.maxsize if i_bit_rate == 0 else i_bit_rate
+                self.QUALITY_WEIGHTS[name] = weight
+                yield name, HTTPStream(self.session, url, params=params)
 
         log.debug(f"QUALITY_WEIGHTS: {self.QUALITY_WEIGHTS!r}")
 
+    def _get_stream_params(self, fm, fs, ctype, ws_time, stream_name, i_bit_rate):
+        uid = random.randint(12340000, 12349999)
+        convert_uid = (uid << 8 | uid >> (32 - 8)) & 0xFFFFFFFF
+        timestamp = int(time.time() * 1000)
+        seqid = uid + timestamp
+        ws_secret_prefix = base64.b64decode(unquote(fm).encode()).decode().split("_")[0]
+        ws_secret_hash = hashlib.md5(f'{seqid}|{ctype}|{self._CONSTANTS["t"]}'.encode()).hexdigest()
+        ws_secret = hashlib.md5(
+            f"{ws_secret_prefix}_{convert_uid}_{stream_name}_{ws_secret_hash}_{ws_time}".encode(),
+        ).hexdigest()
+        params = {
+            "wsSecret": ws_secret,
+            "wsTime": ws_time,
+            "ctype": ctype,
+            "fs": fs,
+            "seqid": seqid,
+            "u": convert_uid,
+            "sdk_sid": timestamp,
+            "ratio": i_bit_rate,
+        }
+        params.update(self._CONSTANTS)
+        return params
+
 
 __plugin__ = Huya
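The new `_get_stream_params` above is the substance of the "huya: fixed stream URLs" change: it builds the `wsSecret` token from two chained MD5 digests tied to a rotated random uid, the stream name, and `wsTime`. A standalone sketch of just that derivation; the `fm`, `ctype` and `wsTime` values below are fabricated stand-ins for the ones the plugin parses out of `sFlvAntiCode`:

```python
import base64
import hashlib
import random
import time
from urllib.parse import unquote

# Fabricated inputs; real values come from the sFlvAntiCode query string
fm = base64.b64encode(b"PREFIX_more_material").decode()
ctype = "huya_live"
ws_time = "66aabbcc"
stream_name = "12345-67890-abcdef"
t_constant = 100  # mirrors the plugin's _CONSTANTS["t"]

uid = random.randint(12340000, 12349999)
convert_uid = (uid << 8 | uid >> (32 - 8)) & 0xFFFFFFFF  # 32-bit rotate-left by 8
timestamp = int(time.time() * 1000)
seqid = uid + timestamp

# First MD5 over "seqid|ctype|t", then a second MD5 binding uid, stream and time
ws_secret_prefix = base64.b64decode(unquote(fm).encode()).decode().split("_")[0]
ws_secret_hash = hashlib.md5(f"{seqid}|{ctype}|{t_constant}".encode()).hexdigest()
ws_secret = hashlib.md5(
    f"{ws_secret_prefix}_{convert_uid}_{stream_name}_{ws_secret_hash}_{ws_time}".encode(),
).hexdigest()
print(ws_secret_prefix, len(ws_secret))  # PREFIX 32
```

The resulting hex digest is what the plugin sends as the `wsSecret` query parameter alongside `seqid`, `u` and `sdk_sid`.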
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/streamlink-6.8.1/src/streamlink/plugins/pluzz.py new/streamlink-6.8.2/src/streamlink/plugins/pluzz.py
--- old/streamlink-6.8.1/src/streamlink/plugins/pluzz.py        2024-06-18 14:42:16.000000000 +0200
+++ new/streamlink-6.8.2/src/streamlink/plugins/pluzz.py        2024-07-04 20:45:42.000000000 +0200
@@ -16,23 +16,23 @@
 from streamlink.stream.dash import DASHStream
 from streamlink.stream.hls import HLSStream
 from streamlink.utils.times import localnow
-from streamlink.utils.url import update_qsd
 
 
 log = logging.getLogger(__name__)
 
 
-@pluginmatcher(re.compile(r"""
-    https?://(?:
-        (?:www\.)?france\.tv/
-        |
-        (?:.+\.)?francetvinfo\.fr/
-    )
-""", re.VERBOSE))
+@pluginmatcher(
+    name="francetv",
+    pattern=re.compile(r"https?://(?:[\w-]+\.)?france\.tv/"),
+)
+@pluginmatcher(
+    name="francetvinfofr",
+    pattern=re.compile(r"https?://(?:[\w-]+\.)?francetvinfo\.fr/"),
+)
 class Pluzz(Plugin):
     PLAYER_VERSION = "5.51.35"
     GEO_URL = "https://geoftv-a.akamaihd.net/ws/edgescape.json"
-    API_URL = "https://player.webservices.francetelevisions.fr/v1/videos/{video_id}"
+    API_URL = "https://k7.ftven.fr/videos/{video_id}"
 
     def _get_streams(self):
         self.session.http.headers.update({
@@ -56,6 +56,7 @@
             video_id = self.session.http.get(self.url, schema=validate.Schema(
                 validate.parse_html(),
                 validate.any(
+                    # default francetv player
                     validate.all(
                         validate.xml_xpath_string(".//script[contains(text(),'window.FTVPlayerVideos')][1]/text()"),
                         str,
@@ -68,18 +69,19 @@
                         [{"videoId": str}],
                         validate.get((0, "videoId")),
                     ),
+                    # francetvinfo.fr overseas live stream
                     validate.all(
-                        validate.xml_xpath_string(".//script[contains(text(),'new Magnetoscope')][1]/text()"),
+                        validate.xml_xpath_string(".//script[contains(text(),'magneto:{videoId:')][1]/text()"),
                         str,
-                        validate.regex(re.compile(
-                            r"""player\.load\s*\(\s*{\s*src\s*:\s*(?P<q>['"])(?P<video_id>.+?)(?P=q)\s*}\s*\)\s*;""",
-                        )),
+                        validate.regex(re.compile(r"""magneto:\{videoId:(?P<q>['"])(?P<video_id>.+?)(?P=q)""")),
                         validate.get("video_id"),
                     ),
+                    # francetvinfo.fr news article
                     validate.all(
                         validate.xml_xpath_string(".//*[@id][contains(@class,'francetv-player-wrapper')][1]/@id"),
                         str,
                     ),
+                    # francetvinfo.fr videos
                     validate.all(
                         validate.xml_xpath_string(".//*[@data-id][contains(@class,'magneto')][1]/@data-id"),
                         str,
@@ -92,47 +94,54 @@
             return
         log.debug(f"Video ID: {video_id}")
 
-        api_url = update_qsd(self.API_URL.format(video_id=video_id), {
-            "country_code": country_code,
-            "w": 1920,
-            "h": 1080,
-            "player_version": self.PLAYER_VERSION,
-            "domain": urlparse(self.url).netloc,
-            "device_type": "mobile",
-            "browser": "chrome",
-            "browser_version": CHROME_VERSION,
-            "os": "ios",
-            "gmt": localnow().strftime("%z"),
-        })
-        video_format, token_url, url, self.title = self.session.http.get(api_url, schema=validate.Schema(
-            validate.parse_json(),
-            {
-                "video": {
-                    "workflow": validate.any("token-akamai", "dai"),
-                    "format": validate.any("dash", "hls"),
-                    "token": validate.url(),
-                    "url": validate.url(),
-                },
-                "meta": {
-                    "title": str,
-                },
+        video_format, token_url, url, self.title = self.session.http.get(
+            self.API_URL.format(video_id=video_id),
+            params={
+                "country_code": country_code,
+                "w": 1920,
+                "h": 1080,
+                "player_version": self.PLAYER_VERSION,
+                "domain": urlparse(self.url).netloc,
+                "device_type": "mobile",
+                "browser": "chrome",
+                "browser_version": CHROME_VERSION,
+                "os": "ios",
+                "gmt": localnow().strftime("%z"),
             },
-            validate.union_get(
-                ("video", "format"),
-                ("video", "token"),
-                ("video", "url"),
-                ("meta", "title"),
+            schema=validate.Schema(
+                validate.parse_json(),
+                {
+                    "video": {
+                        "format": validate.any("dash", "hls"),
+                        "token": {
+                            "akamai": validate.url(),
+                        },
+                        "url": validate.url(),
+                    },
+                    "meta": {
+                        "title": str,
+                    },
+                },
+                validate.union_get(
+                    ("video", "format"),
+                    ("video", "token", "akamai"),
+                    ("video", "url"),
+                    ("meta", "title"),
+                ),
             ),
-        ))
+        )
 
-        data_url = update_qsd(token_url, {
-            "url": url,
-        })
-        video_url = self.session.http.get(data_url, schema=validate.Schema(
-            validate.parse_json(),
-            {"url": validate.url()},
-            validate.get("url"),
-        ))
+        video_url = self.session.http.get(
+            token_url,
+            params={
+                "url": url,
+            },
+            schema=validate.Schema(
+                validate.parse_json(),
+                {"url": validate.url()},
+                validate.get("url"),
+            ),
+        )
 
         if video_format == "dash":
             yield from DASHStream.parse_manifest(self.session, video_url).items()
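The reworked pluzz code above replaces `update_qsd` with request `params` and moves to a two-request flow: fetch video metadata from the new `k7.ftven.fr` endpoint, read the Akamai token URL from the now-nested `video.token.akamai` key, then exchange the raw stream URL for a signed one. A sketch of that flow with a stubbed transport; the response shapes and example URLs below are illustrative, not the live API:

```python
from typing import Any, Callable, Dict, Optional

def resolve_video_url(get: Callable[..., Dict[str, Any]], video_id: str) -> str:
    """Resolve a playable URL: metadata lookup, then Akamai token exchange."""
    meta = get(f"https://k7.ftven.fr/videos/{video_id}")
    token_url = meta["video"]["token"]["akamai"]  # nested key in the new schema
    stream_url = meta["video"]["url"]
    # The token service signs the raw stream URL passed via the "url" parameter
    signed = get(token_url, params={"url": stream_url})
    return signed["url"]

# Stub standing in for session.http.get plus JSON parsing (responses are made up)
def fake_get(url: str, params: Optional[Dict[str, str]] = None) -> Dict[str, Any]:
    if url.startswith("https://k7.ftven.fr/"):
        return {"video": {"token": {"akamai": "https://token.example/sign"},
                          "url": "https://cdn.example/live.mpd"}}
    return {"url": params["url"] + "?hdnts=signed"}

print(resolve_video_url(fake_get, "some-video-id"))
# https://cdn.example/live.mpd?hdnts=signed
```

In the plugin itself, both requests go through `self.session.http.get` with a `validate.Schema` enforcing the response shape at each step.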
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/streamlink-6.8.1/src/streamlink/plugins/twitch.py new/streamlink-6.8.2/src/streamlink/plugins/twitch.py
--- old/streamlink-6.8.1/src/streamlink/plugins/twitch.py       2024-06-18 14:42:16.000000000 +0200
+++ new/streamlink-6.8.2/src/streamlink/plugins/twitch.py       2024-07-04 20:45:42.000000000 +0200
@@ -15,14 +15,16 @@
 import argparse
 import base64
 import logging
+import math
 import re
 import sys
+from collections import deque
 from contextlib import suppress
 from dataclasses import dataclass, replace as dataclass_replace
 from datetime import datetime, timedelta
 from json import dumps as json_dumps
 from random import random
-from typing import ClassVar, Mapping, Optional, Tuple, Type
+from typing import ClassVar, Deque, List, Mapping, Optional, Tuple, Type
 from urllib.parse import urlparse
 
 from requests.exceptions import HTTPError
@@ -62,35 +64,54 @@
 
 
 class TwitchM3U8(M3U8[TwitchHLSSegment, HLSPlaylist]):
-    def __init__(self, *args, **kwargs):
+    def __init__(self, *args, **kwargs) -> None:
         super().__init__(*args, **kwargs)
-        self.dateranges_ads = []
+        self.dateranges_ads: List[DateRange] = []
 
 
 class TwitchM3U8Parser(M3U8Parser[TwitchM3U8, TwitchHLSSegment, HLSPlaylist]):
     __m3u8__: ClassVar[Type[TwitchM3U8]] = TwitchM3U8
     __segment__: ClassVar[Type[TwitchHLSSegment]] = TwitchHLSSegment
 
+    @parse_tag("EXT-X-TWITCH-LIVE-SEQUENCE")
+    def parse_ext_x_twitch_live_sequence(self, value):
+        # Unset discontinuity state if the previous segment was not an ad,
+        # as the following segment won't be an ad
+        if self.m3u8.segments and not self.m3u8.segments[-1].ad:
+            self._discontinuity = False
+
     @parse_tag("EXT-X-TWITCH-PREFETCH")
     def parse_tag_ext_x_twitch_prefetch(self, value):
         segments = self.m3u8.segments
         if not segments:  # pragma: no cover
             return
         last = segments[-1]
+
         # Use the average duration of all regular segments for the duration of prefetch segments.
         # This is better than using the duration of the last segment when regular segment durations vary a lot.
         # In low latency mode, the playlist reload time is the duration of the last segment.
         duration = last.duration if last.prefetch else sum(segment.duration for segment in segments) / float(len(segments))
+
         # Use the last duration for extrapolating the start time of the prefetch segment, which is needed for checking
         # whether it is an ad segment and matches the parsed date ranges or not
         date = last.date + timedelta(seconds=last.duration)
+
         # Always treat prefetch segments after a discontinuity as ad segments
+        # (discontinuity tag inserted after last regular segment)
+        # Don't reset discontinuity state: the date extrapolation might be inaccurate,
+        # so all following prefetch segments should be considered an ad after a discontinuity
         ad = self._discontinuity or self._is_segment_ad(date)
+
+        # Since we don't reset the discontinuity state in prefetch segments for the purpose of ad detection,
+        # set the prefetch segment's discontinuity attribute based on ad transitions
+        discontinuity = ad != last.ad
+
         segment = dataclass_replace(
             last,
             uri=self.uri(value),
             duration=duration,
             title=None,
+            discontinuity=discontinuity,
             date=date,
             ad=ad,
             prefetch=True,
@@ -108,6 +129,15 @@
         ad = self._is_segment_ad(self._date, self._extinf.title if self._extinf else None)
         segment: TwitchHLSSegment = super().get_segment(uri, ad=ad, prefetch=False)  # type: ignore[assignment]
 
+        # Special case where Twitch incorrectly inserts discontinuity tags between segments of the live content
+        if (
+            segment.discontinuity
+            and not segment.ad
+            and self.m3u8.segments
+            and not self.m3u8.segments[-1].ad
+        ):
+            segment.discontinuity = False
+
         return segment
 
     def _is_segment_ad(self, date: Optional[datetime], title: Optional[str] = None) -> bool:
@@ -130,8 +160,9 @@
     writer: "TwitchHLSStreamWriter"
     stream: "TwitchHLSStream"
 
-    def __init__(self, reader, *args, **kwargs):
-        self.had_content = False
+    def __init__(self, reader, *args, **kwargs) -> None:
+        self.had_content: bool = False
+        self.logged_ads: Deque[str] = deque(maxlen=10)
         super().__init__(reader, *args, **kwargs)
 
     def _playlist_reload_time(self, playlist: TwitchM3U8):  # type: ignore[override]
@@ -162,6 +193,27 @@
         if self.stream.disable_ads and self.playlist_sequence == -1 and not self.had_content:
             log.info("Waiting for pre-roll ads to finish, be patient")
 
+        # log the duration of whole advertisement breaks
+        for daterange_ads in playlist.dateranges_ads:
+            if not daterange_ads.duration:  # pragma: no cover
+                continue
+
+            ads_id: Optional[str] = (
+                daterange_ads.x.get("X-TV-TWITCH-AD-COMMERCIAL-ID")
+                or daterange_ads.x.get("X-TV-TWITCH-AD-ROLL-TYPE")
+            )
+            if not ads_id or ads_id in self.logged_ads:
+                continue
+            self.logged_ads.append(ads_id)
+
+            # use Twitch's own ads duration metadata if available
+            try:
+                duration = math.ceil(float(daterange_ads.x.get("X-TV-TWITCH-AD-POD-FILLED-DURATION", "")))
+            except ValueError:
+                duration = math.ceil(daterange_ads.duration.total_seconds())
+
+            log.info(f"Detected advertisement break of {duration} second{'s' if duration != 1 else ''}")
+
         return super().process_segments(playlist)
 
 
@@ -607,19 +659,30 @@
         return parse_json(token, exception=PluginError, schema=schema)
 
 
-@pluginmatcher(re.compile(r"""
-    https?://(?:(?P<subdomain>[\w-]+)\.)?twitch\.tv/
-    (?:
-        videos/(?P<videos_id>\d+)
-        |
-        (?P<channel>[^/?]+)
-        (?:
-            /v(?:ideo)?/(?P<video_id>\d+)
-            |
-            /clip/(?P<clip_name>[^/?]+)
-        )?
-    )
-""", re.VERBOSE))
+@pluginmatcher(
+    name="player",
+    pattern=re.compile(
+        r"https?://player\.twitch\.tv/\?.+",
+    ),
+)
+@pluginmatcher(
+    name="clip",
+    pattern=re.compile(
+        r"https?://(?:clips\.twitch\.tv|(?:[\w-]+\.)?twitch\.tv/(?:[\w-]+/)?clip)/(?P<clip_id>[^/?]+)",
+    ),
+)
+@pluginmatcher(
+    name="vod",
+    pattern=re.compile(
+        r"https?://(?:[\w-]+\.)?twitch\.tv/(?:[\w-]+/)?v(?:ideos?)?/(?P<video_id>\d+)",
+    ),
+)
+@pluginmatcher(
+    name="live",
+    pattern=re.compile(
+        r"https?://(?:(?!clips\.)[\w-]+\.)?twitch\.tv/(?P<channel>(?!v(?:ideos?)?/|clip/)[^/?]+)/?(?:\?|$)",
+    ),
+)
 @pluginargument(
     "disable-ads",
     action="store_true",
@@ -695,27 +758,21 @@
 
     def __init__(self, *args, **kwargs):
         super().__init__(*args, **kwargs)
-        match = self.match.groupdict()
-        parsed = urlparse(self.url)
-        self.params = parse_qsd(parsed.query)
-        self.subdomain = match.get("subdomain")
-        self.video_id = None
-        self.channel = None
-        self.clip_name = None
-        self._checked_metadata = False
 
-        if self.subdomain == "player":
-            # pop-out player
-            if self.params.get("video"):
-                self.video_id = self.params["video"]
-            self.channel = self.params.get("channel")
-        elif self.subdomain == "clips":
-            # clip share URL
-            self.clip_name = match.get("channel")
-        else:
-            self.channel = match.get("channel") and match.get("channel").lower()
-            self.video_id = match.get("video_id") or match.get("videos_id")
-            self.clip_name = match.get("clip_name")
+        params = parse_qsd(urlparse(self.url).query)
+
+        self.channel = self.match["channel"] if self.matches["live"] else None
+        self.video_id = self.match["video_id"] if self.matches["vod"] else None
+        self.clip_id = self.match["clip_id"] if self.matches["clip"] else None
+
+        if self.matches["player"]:
+            self.channel = params.get("channel")
+            self.video_id = params.get("video")
+
+        try:
+            self.time_offset = hours_minutes_seconds_float(params.get("t", "0"))
+        except ValueError:
+            self.time_offset = 0
 
         self.api = TwitchAPI(
             session=self.session,
@@ -724,6 +781,8 @@
         )
         self.usher = UsherService(session=self.session)
 
+        self._checked_metadata = False
+
         def method_factory(parent_method):
             def inner():
                 if not self._checked_metadata:
@@ -741,8 +800,8 @@
         try:
             if self.video_id:
                 data = self.api.metadata_video(self.video_id)
-            elif self.clip_name:
-                data = self.api.metadata_clips(self.clip_name)
+            elif self.clip_id:
+                data = self.api.metadata_clips(self.clip_id)
             elif self.channel:
                 data = self.api.metadata_channel(self.channel)
             else:  # pragma: no cover
@@ -825,18 +884,11 @@
         return self._get_hls_streams(url, restricted_bitrates, force_restart=True)
 
     def _get_hls_streams(self, url, restricted_bitrates, **extra_params):
-        time_offset = self.params.get("t", 0)
-        if time_offset:
-            try:
-                time_offset = hours_minutes_seconds_float(time_offset)
-            except ValueError:
-                time_offset = 0
-
         try:
             streams = TwitchHLSStream.parse_variant_playlist(
                 self.session,
                 url,
-                start_offset=time_offset,
+                start_offset=self.time_offset,
                 # Check if the media playlists are accessible:
                 # This is a workaround for checking the GQL API for the channel's live status,
                 # which can be delayed by up to a minute.
@@ -874,7 +926,7 @@
 
     def _get_clips(self):
         try:
-            sig, token, streams = self.api.clips(self.clip_name)
+            sig, token, streams = self.api.clips(self.clip_id)
         except (PluginError, TypeError):
             return
 
@@ -884,7 +936,7 @@
     def _get_streams(self):
         if self.video_id:
             return self._get_hls_streams_video()
-        elif self.clip_name:
+        elif self.clip_id:
             return self._get_clips()
         elif self.channel:
             return self._get_hls_streams_live()
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/streamlink-6.8.1/src/streamlink/plugins/vidio.py new/streamlink-6.8.2/src/streamlink/plugins/vidio.py
--- old/streamlink-6.8.1/src/streamlink/plugins/vidio.py        2024-06-18 14:42:16.000000000 +0200
+++ new/streamlink-6.8.2/src/streamlink/plugins/vidio.py        2024-07-04 20:45:42.000000000 +0200
@@ -2,10 +2,12 @@
 $description Indonesian & international live TV channels and video on-demand service. OTT service from Vidio.
 $url vidio.com
 $type live, vod
+$metadata id
+$metadata title
 """
+
 import logging
 import re
-from urllib.parse import urlsplit, urlunsplit
 from uuid import uuid4
 
 from streamlink.plugin import Plugin, pluginmatcher
@@ -24,7 +26,6 @@
     tokens_url = "https://www.vidio.com/live/{id}/tokens"
 
     def _get_stream_token(self, stream_id, stream_type):
-        log.debug("Getting stream token")
         return self.session.http.post(
             self.tokens_url.format(id=stream_id),
             params={"type": stream_type},
@@ -35,19 +36,28 @@
             },
             schema=validate.Schema(
                 validate.parse_json(),
-                {"token": str},
-                validate.get("token"),
+                {
+                    "token": str,
+                    "hls_url": validate.any("", validate.url()),
+                    "dash_url": validate.any("", validate.url()),
+                },
+                validate.union_get(
+                    "token",
+                    "hls_url",
+                    "dash_url",
+                ),
             ),
         )
 
-    def _get_streams(self):
-        stream_id, has_token, hls_url, dash_url = self.session.http.get(
+    def _get_stream_data(self):
+        return self.session.http.get(
             self.url,
             schema=validate.Schema(
                 validate.parse_html(),
                 validate.xml_find(".//*[@data-video-id]"),
                 validate.union((
                     validate.get("data-video-id"),
+                    validate.get("data-video-title"),
                     validate.all(
                         validate.get("data-video-has-token"),
                         validate.transform(lambda val: val and val != "false"),
@@ -58,28 +68,25 @@
             ),
         )
 
-        if dash_url and has_token:
-            token = self._get_stream_token(stream_id, "dash")
-            parsed = urlsplit(dash_url)
-            dash_url = urlunsplit(parsed._replace(path=f"{token}{parsed.path}"))
-            return DASHStream.parse_manifest(
-                self.session,
-                dash_url,
-                headers={"Referer": "https://www.vidio.com/"},
-            )
+    def _get_streams(self):
+        self.session.http.headers.update({
+            "Origin": "https://www.vidio.com/",
+            "Referer": self.url,
+        })
 
-        if not hls_url:
-            return
+        self.id, self.title, has_token, hls_url, dash_url = self._get_stream_data()
+        log.debug(f"{self.id=} {has_token=}")
 
+        params = {}
         if has_token:
-            token = self._get_stream_token(stream_id, "hls")
-            hls_url = f"{hls_url}?{token}"
-
-        return HLSStream.parse_variant_playlist(
-            self.session,
-            hls_url,
-            headers={"Referer": "https://www.vidio.com/"},
-        )
+            token, hls_url, dash_url = self._get_stream_token(self.id, "dash")
+            log.trace(f"{token=}")
+            params.update([param.split("=", 1) for param in (token.split("&") if token else [])])
+
+        if hls_url:
+            return HLSStream.parse_variant_playlist(self.session, hls_url, params=params)
+        if dash_url:
+            return DASHStream.parse_manifest(self.session, dash_url, params=params)
 
 
 __plugin__ = Vidio
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/streamlink-6.8.1/src/streamlink.egg-info/PKG-INFO new/streamlink-6.8.2/src/streamlink.egg-info/PKG-INFO
--- old/streamlink-6.8.1/src/streamlink.egg-info/PKG-INFO       2024-06-18 14:42:51.000000000 +0200
+++ new/streamlink-6.8.2/src/streamlink.egg-info/PKG-INFO       2024-07-04 20:46:16.000000000 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: streamlink
-Version: 6.8.1
+Version: 6.8.2
 Summary: Streamlink is a command-line utility that extracts streams from various services and pipes them into a video player of choice.
 Author: Streamlink
 Author-email: [email protected]
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/streamlink-6.8.1/src/streamlink.egg-info/SOURCES.txt new/streamlink-6.8.2/src/streamlink.egg-info/SOURCES.txt
--- old/streamlink-6.8.1/src/streamlink.egg-info/SOURCES.txt    2024-06-18 14:42:51.000000000 +0200
+++ new/streamlink-6.8.2/src/streamlink.egg-info/SOURCES.txt    2024-07-04 20:46:17.000000000 +0200
@@ -145,6 +145,7 @@
 src/streamlink/plugins/dlive.py
 src/streamlink/plugins/dogan.py
 src/streamlink/plugins/dogus.py
+src/streamlink/plugins/douyin.py
 src/streamlink/plugins/drdk.py
 src/streamlink/plugins/earthcam.py
 src/streamlink/plugins/euronews.py
@@ -424,6 +425,7 @@
 tests/plugins/test_dlive.py
 tests/plugins/test_dogan.py
 tests/plugins/test_dogus.py
+tests/plugins/test_douyin.py
 tests/plugins/test_drdk.py
 tests/plugins/test_earthcam.py
 tests/plugins/test_euronews.py
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/streamlink-6.8.1/tests/plugins/test_douyin.py new/streamlink-6.8.2/tests/plugins/test_douyin.py
--- old/streamlink-6.8.1/tests/plugins/test_douyin.py   1970-01-01 01:00:00.000000000 +0100
+++ new/streamlink-6.8.2/tests/plugins/test_douyin.py   2024-07-04 20:45:42.000000000 +0200
@@ -0,0 +1,14 @@
+from streamlink.plugins.douyin import Douyin
+from tests.plugins import PluginCanHandleUrl
+
+
+class TestPluginCanHandleUrlDouyin(PluginCanHandleUrl):
+    __plugin__ = Douyin
+
+    should_match_groups = [
+        ("https://live.douyin.com/ROOM_ID", {"room_id": "ROOM_ID"}),
+    ]
+
+    should_not_match = [
+        "https://live.douyin.com",
+    ]
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/streamlink-6.8.1/tests/plugins/test_pluzz.py new/streamlink-6.8.2/tests/plugins/test_pluzz.py
--- old/streamlink-6.8.1/tests/plugins/test_pluzz.py    2024-06-18 14:42:16.000000000 +0200
+++ new/streamlink-6.8.2/tests/plugins/test_pluzz.py    2024-07-04 20:45:42.000000000 +0200
@@ -6,17 +6,31 @@
     __plugin__ = Pluzz
 
     should_match = [
-        "https://www.france.tv/france-2/direct.html",
-        "https://www.france.tv/france-3/direct.html",
-        "https://www.france.tv/france-4/direct.html",
-        "https://www.france.tv/france-5/direct.html",
-        "https://www.france.tv/franceinfo/direct.html",
-        "https://www.france.tv/france-2/journal-20h00/141003-edition-du-lundi-8-mai-2017.html",
-        "https://france3-regions.francetvinfo.fr/bourgogne-franche-comte/direct/franche-comte",
-        "https://www.francetvinfo.fr/en-direct/tv.html",
-        "https://www.francetvinfo.fr/meteo/orages/inondations-dans-le-gard-plus-de-deux-mois-de-pluie-en-quelques-heures-des"
-        + "-degats-mais-pas-de-victime_4771265.html",
-        "https://la1ere.francetvinfo.fr/info-en-continu-24-24",
-        "https://la1ere.francetvinfo.fr/programme-video/france-3_outremer-ledoc/diffusion/"
-        + "2958951-polynesie-les-sages-de-l-ocean.html",
+        ("francetv", "https://www.france.tv/france-2/direct.html"),
+        ("francetv", "https://www.france.tv/france-3/direct.html"),
+        ("francetv", "https://www.france.tv/france-4/direct.html"),
+        ("francetv", "https://www.france.tv/france-5/direct.html"),
+        ("francetv", "https://www.france.tv/franceinfo/direct.html"),
+        ("francetv", "https://www.france.tv/france-2/journal-20h00/141003-edition-du-lundi-8-mai-2017.html"),
+
+        (
+            "francetvinfofr",
+            "https://www.francetvinfo.fr/en-direct/tv.html",
+        ),
+        (
+            "francetvinfofr",
+            "https://www.francetvinfo.fr/meteo/orages/inondations-dans-le-gard-plus-de-deux-mois-de-pluie-en-quelques-heures-des-degats-mais-pas-de-victime_4771265.html",
+        ),
+        (
+            "francetvinfofr",
+            "https://france3-regions.francetvinfo.fr/bourgogne-franche-comte/direct/franche-comte",
+        ),
+        (
+            "francetvinfofr",
+            "https://la1ere.francetvinfo.fr/programme-video/france-3_outremer-ledoc/diffusion/2958951-polynesie-les-sages-de-l-ocean.html",
+        ),
+        (
+            "francetvinfofr",
+            "https://la1ere.francetvinfo.fr/info-en-continu-24-24",
+        ),
     ]
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/streamlink-6.8.1/tests/plugins/test_twitch.py new/streamlink-6.8.2/tests/plugins/test_twitch.py
--- old/streamlink-6.8.1/tests/plugins/test_twitch.py   2024-06-18 14:42:16.000000000 +0200
+++ new/streamlink-6.8.2/tests/plugins/test_twitch.py   2024-07-04 20:45:42.000000000 +0200
@@ -20,38 +20,58 @@
     __plugin__ = Twitch
 
     should_match_groups = [
-        ("https://www.twitch.tv/twitch", {
-            "subdomain": "www",
-            "channel": "twitch",
-        }),
-        ("https://www.twitch.tv/videos/150942279", {
-            "subdomain": "www",
-            "videos_id": "150942279",
-        }),
-        ("https://clips.twitch.tv/ObservantBenevolentCarabeefPhilosoraptor", {
-            "subdomain": "clips",
-            "channel": "ObservantBenevolentCarabeefPhilosoraptor",
-        }),
-        ("https://www.twitch.tv/weplaydota/clip/FurryIntelligentDonutAMPEnergyCherry-akPRxv7Y3w58WmFq", {
-            "subdomain": "www",
-            "channel": "weplaydota",
-            "clip_name": "FurryIntelligentDonutAMPEnergyCherry-akPRxv7Y3w58WmFq",
-        }),
-        ("https://www.twitch.tv/twitch/video/292713971", {
-            "subdomain": "www",
-            "channel": "twitch",
-            "video_id": "292713971",
-        }),
-        ("https://www.twitch.tv/twitch/v/292713971", {
-            "subdomain": "www",
-            "channel": "twitch",
-            "video_id": "292713971",
+        (("live", "https://www.twitch.tv/CHANNELNAME"), {
+            "channel": "CHANNELNAME",
         }),
+        (("live", "https://www.twitch.tv/CHANNELNAME?"), {
+            "channel": "CHANNELNAME",
+        }),
+        (("live", "https://www.twitch.tv/CHANNELNAME/"), {
+            "channel": "CHANNELNAME",
+        }),
+        (("live", "https://www.twitch.tv/CHANNELNAME/?"), {
+            "channel": "CHANNELNAME",
+        }),
+
+        (("vod", "https://www.twitch.tv/videos/1963401646"), {
+            "video_id": "1963401646",
+        }),
+        (("vod", "https://www.twitch.tv/dota2ti/v/1963401646"), {
+            "video_id": "1963401646",
+        }),
+        (("vod", "https://www.twitch.tv/dota2ti/video/1963401646"), {
+            "video_id": "1963401646",
+        }),
+        (("vod", "https://www.twitch.tv/videos/1963401646?t=1h23m45s"), {
+            "video_id": "1963401646",
+        }),
+
+        (("clip", "https://clips.twitch.tv/GoodEndearingPassionfruitPMSTwin-QfRLYDPKlscgqt-4"), {
+            "clip_id": "GoodEndearingPassionfruitPMSTwin-QfRLYDPKlscgqt-4",
+        }),
+        (("clip", "https://www.twitch.tv/clip/GoodEndearingPassionfruitPMSTwin-QfRLYDPKlscgqt-4"), {
+            "clip_id": "GoodEndearingPassionfruitPMSTwin-QfRLYDPKlscgqt-4",
+        }),
+        (("clip", "https://www.twitch.tv/lirik/clip/GoodEndearingPassionfruitPMSTwin-QfRLYDPKlscgqt-4"), {
+            "clip_id": "GoodEndearingPassionfruitPMSTwin-QfRLYDPKlscgqt-4",
+        }),
+
+        (("player", "https://player.twitch.tv/?parent=twitch.tv&channel=CHANNELNAME"), {}),
+        (("player", "https://player.twitch.tv/?parent=twitch.tv&video=1963401646"), {}),
+        (("player", "https://player.twitch.tv/?parent=twitch.tv&video=1963401646&t=1h23m45s"), {}),
     ]
 
     should_not_match = [
         "https://www.twitch.tv",
         "https://www.twitch.tv/",
+        "https://www.twitch.tv/videos/",
+        "https://www.twitch.tv/dota2ti/v",
+        "https://www.twitch.tv/dota2ti/video/",
+        "https://clips.twitch.tv/",
+        "https://www.twitch.tv/clip/",
+        "https://www.twitch.tv/lirik/clip/",
+        "https://player.twitch.tv/",
+        "https://player.twitch.tv/?",
     ]
 
 
@@ -189,7 +209,7 @@
             duration=1,
             attrid="foo",
             classname="/",
-            custom={"X-TV-TWITCH-AD-URL": "/"},
+            custom={"X-TV-TWITCH-AD-ROLL-TYPE": "PREROLL"},
         )
 
         segments = self.subject([
@@ -203,7 +223,10 @@
 
     @patch("streamlink.plugins.twitch.log")
     def test_hls_disable_ads_has_preroll(self, mock_log):
-        daterange = TagDateRangeAd(duration=4)
+        daterange = TagDateRangeAd(
+            duration=4,
+            custom={"X-TV-TWITCH-AD-ROLL-TYPE": "PREROLL"},
+        )
         segments = self.subject([
             Playlist(0, [daterange, Segment(0), Segment(1)]),
             Playlist(2, [daterange, Segment(2), Segment(3)]),
@@ -217,11 +240,16 @@
         assert mock_log.info.mock_calls == [
             call("Will skip ad segments"),
             call("Waiting for pre-roll ads to finish, be patient"),
+            call("Detected advertisement break of 4 seconds"),
         ]
 
     @patch("streamlink.plugins.twitch.log")
-    def test_hls_disable_ads_has_midstream(self, mock_log):
-        daterange = TagDateRangeAd(start=DATETIME_BASE + timedelta(seconds=2), duration=2)
+    def test_hls_disable_ads_has_midroll(self, mock_log):
+        daterange = TagDateRangeAd(
+            start=DATETIME_BASE + timedelta(seconds=2),
+            duration=2,
+            custom={"X-TV-TWITCH-AD-ROLL-TYPE": "MIDROLL", "X-TV-TWITCH-AD-COMMERCIAL-ID": "123"},
+        )
         segments = self.subject([
             Playlist(0, [Segment(0), Segment(1)]),
             Playlist(2, [daterange, Segment(2), Segment(3)]),
@@ -234,11 +262,64 @@
         assert all(self.called(s) for s in segments.values()), "Downloads all segments"
         assert mock_log.info.mock_calls == [
             call("Will skip ad segments"),
+            call("Detected advertisement break of 2 seconds"),
+        ]
+
+    @patch("streamlink.plugins.twitch.log")
+    def test_hls_disable_ads_has_preroll_and_midstream(self, mock_log):
+        ads1a = TagDateRangeAd(
+            start=DATETIME_BASE,
+            duration=2,
+            custom={"X-TV-TWITCH-AD-ROLL-TYPE": "PREROLL"},
+        )
+        ads1b = TagDateRangeAd(
+            start=DATETIME_BASE,
+            duration=1,
+        )
+        ads2 = TagDateRangeAd(
+            start=DATETIME_BASE + timedelta(seconds=4),
+            duration=4,
+            custom={
+                "X-TV-TWITCH-AD-ROLL-TYPE": "MIDROLL",
+                "X-TV-TWITCH-AD-COMMERCIAL-ID": "123",
+            },
+        )
+        ads3 = TagDateRangeAd(
+            start=DATETIME_BASE + timedelta(seconds=8),
+            duration=1,
+            custom={
+                "X-TV-TWITCH-AD-ROLL-TYPE": "MIDROLL",
+                "X-TV-TWITCH-AD-COMMERCIAL-ID": "456",
+                "X-TV-TWITCH-AD-POD-FILLED-DURATION": ".9",
+            },
+        )
+        segments = self.subject([
+            Playlist(0, [ads1a, ads1b, Segment(0)]),
+            Playlist(1, [ads1a, ads1b, Segment(1)]),
+            Playlist(2, [Segment(2), Segment(3)]),
+            Playlist(4, [ads2, Segment(4), Segment(5)]),
+            Playlist(6, [ads2, Segment(6), Segment(7)]),
+            Playlist(8, [ads3, Segment(8), Segment(9)], end=True),
+        ], streamoptions={"disable_ads": True, "low_latency": False})
+
+        self.await_write(10)
+        data = self.await_read(read_all=True)
+        assert data == self.content(segments, cond=lambda s: s.num not in (0, 1, 4, 5, 6, 7, 8)), "Filters out all ad segments"
+        assert all(self.called(s) for s in segments.values()), "Downloads all segments"
+        assert mock_log.info.mock_calls == [
+            call("Will skip ad segments"),
+            call("Waiting for pre-roll ads to finish, be patient"),
+            call("Detected advertisement break of 2 seconds"),
+            call("Detected advertisement break of 4 seconds"),
+            call("Detected advertisement break of 1 second"),
         ]
 
     @patch("streamlink.plugins.twitch.log")
     def test_hls_no_disable_ads_has_preroll(self, mock_log):
-        daterange = TagDateRangeAd(duration=2)
+        daterange = TagDateRangeAd(
+            duration=2,
+            custom={"X-TV-TWITCH-AD-ROLL-TYPE": "PREROLL"},
+        )
         segments = self.subject([
             Playlist(0, [daterange, Segment(0), Segment(1)]),
             Playlist(2, [Segment(2), Segment(3)], end=True),
@@ -248,7 +329,9 @@
         data = self.await_read(read_all=True)
         assert data == self.content(segments), "Doesn't filter out segments"
         assert all(self.called(s) for s in segments.values()), "Downloads all segments"
-        assert mock_log.info.mock_calls == [], "Doesn't log anything"
+        assert mock_log.info.mock_calls == [
+            call("Detected advertisement break of 2 seconds"),
+        ]
 
     @patch("streamlink.plugins.twitch.log")
     def test_hls_low_latency_has_prefetch(self, mock_log):
@@ -305,7 +388,10 @@
 
     @patch("streamlink.plugins.twitch.log")
     def test_hls_low_latency_has_prefetch_has_preroll(self, mock_log):
-        daterange = TagDateRangeAd(duration=4)
+        daterange = TagDateRangeAd(
+            duration=4,
+            custom={"X-TV-TWITCH-AD-ROLL-TYPE": "PREROLL"},
+        )
         segments = self.subject([
             Playlist(0, [daterange, Segment(0), Segment(1), Segment(2), Segment(3)]),
             Playlist(4, [Segment(4), Segment(5), Segment(6), Segment(7), SegmentPrefetch(8), SegmentPrefetch(9)], end=True),
@@ -316,11 +402,17 @@
         assert data == self.content(segments, cond=lambda s: s.num > 1), "Skips first two segments due to reduced live-edge"
         assert not any(self.called(s) for s in segments.values() if s.num < 2), "Skips first two preroll segments"
         assert all(self.called(s) for s in segments.values() if s.num >= 2), "Downloads all remaining segments"
-        assert mock_log.info.mock_calls == [call("Low latency streaming (HLS live edge: 2)")]
+        assert mock_log.info.mock_calls == [
+            call("Low latency streaming (HLS live edge: 2)"),
+            call("Detected advertisement break of 4 seconds"),
+        ]
 
     @patch("streamlink.plugins.twitch.log")
     def test_hls_low_latency_has_prefetch_disable_ads_has_preroll(self, mock_log):
-        daterange = TagDateRangeAd(duration=4)
+        daterange = TagDateRangeAd(
+            duration=4,
+            custom={"X-TV-TWITCH-AD-ROLL-TYPE": "PREROLL"},
+        )
         self.subject([
             Playlist(0, [daterange, Segment(0), Segment(1), Segment(2), Segment(3)]),
             Playlist(4, [Segment(4), Segment(5), Segment(6), Segment(7), SegmentPrefetch(8), SegmentPrefetch(9)], end=True),
@@ -332,6 +424,7 @@
             call("Will skip ad segments"),
             call("Low latency streaming (HLS live edge: 2)"),
             call("Waiting for pre-roll ads to finish, be patient"),
+            call("Detected advertisement break of 4 seconds"),
         ]
 
     @patch("streamlink.plugins.twitch.log")
@@ -341,8 +434,13 @@
         Seg, Pre = Segment, SegmentPrefetch
         ads = [
             Tag("EXT-X-DISCONTINUITY"),
-            TagDateRangeAd(start=DATETIME_BASE + timedelta(seconds=3), duration=4),
+            TagDateRangeAd(
+                start=DATETIME_BASE + timedelta(seconds=3),
+                duration=4,
+                custom={"X-TV-TWITCH-AD-ROLL-TYPE": "MIDROLL"},
+            ),
         ]
+        tls = Tag("EXT-X-TWITCH-LIVE-SEQUENCE", 7)
         # noinspection PyTypeChecker
         segments = self.subject([
             # regular stream data with prefetch segments
@@ -354,9 +452,9 @@
             # still no prefetch segments while ads are playing
             Playlist(3, [*ads, Seg(3), Seg(4), Seg(5), Seg(6)]),
             # new prefetch segments on the first regular segment occurrence
-            Playlist(4, [*ads, Seg(4), Seg(5), Seg(6), Seg(7), Pre(8), Pre(9)]),
-            Playlist(5, [*ads, Seg(5), Seg(6), Seg(7), Seg(8), Pre(9), Pre(10)]),
-            Playlist(6, [*ads, Seg(6), Seg(7), Seg(8), Seg(9), Pre(10), Pre(11)]),
+            Playlist(4, [*ads, Seg(4), Seg(5), Seg(6), tls, Seg(7), Pre(8), Pre(9)]),
+            Playlist(5, [*ads, Seg(5), Seg(6), tls, Seg(7), Seg(8), Pre(9), Pre(10)]),
+            Playlist(6, [*ads, Seg(6), tls, Seg(7), Seg(8), Seg(9), Pre(10), Pre(11)]),
             Playlist(7, [Seg(7), Seg(8), Seg(9), Seg(10), Pre(11), Pre(12)], end=True),
         ], streamoptions={"disable_ads": True, "low_latency": True})
 
@@ -366,11 +464,15 @@
         assert mock_log.info.mock_calls == [
             call("Will skip ad segments"),
             call("Low latency streaming (HLS live edge: 2)"),
+            call("Detected advertisement break of 4 seconds"),
         ]
 
     @patch("streamlink.plugins.twitch.log")
     def test_hls_low_latency_no_prefetch_disable_ads_has_preroll(self, mock_log):
-        daterange = TagDateRangeAd(duration=4)
+        daterange = TagDateRangeAd(
+            duration=4,
+            custom={"X-TV-TWITCH-AD-ROLL-TYPE": "PREROLL"},
+        )
         self.subject([
             Playlist(0, [daterange, Segment(0), Segment(1), Segment(2), Segment(3)]),
             Playlist(4, [Segment(4), Segment(5), Segment(6), Segment(7)], end=True),
@@ -382,6 +484,7 @@
             call("Will skip ad segments"),
             call("Low latency streaming (HLS live edge: 2)"),
             call("Waiting for pre-roll ads to finish, be patient"),
+            call("Detected advertisement break of 4 seconds"),
             call("This is not a low latency stream"),
         ]
 
@@ -395,6 +498,36 @@
         self.await_read(read_all=True)
        assert self.thread.reader.worker.playlist_reload_time == pytest.approx(23 / 3)
 
+    @patch("streamlink.stream.hls.hls.log")
+    def test_hls_prefetch_after_discontinuity(self, mock_log):
+        segments = self.subject([
+            Playlist(0, [Segment(0), Segment(1)]),
+            Playlist(2, [Segment(2), Segment(3), Tag("EXT-X-DISCONTINUITY"), SegmentPrefetch(4), SegmentPrefetch(5)]),
+            Playlist(6, [Segment(6), Segment(7)], end=True),
+        ], streamoptions={"disable_ads": True, "low_latency": True})
+
+        self.await_write(8)
+        assert self.await_read(read_all=True) == self.content(segments, cond=lambda seg: seg.num not in (4, 5))
+        assert mock_log.warning.mock_calls == [
+            call("Encountered a stream discontinuity. This is unsupported and will result in incoherent output data."),
+        ]
+
+    @patch("streamlink.stream.hls.hls.log")
+    def test_hls_ignored_discontinuity(self, mock_log):
+        Seg, Pre = Segment, SegmentPrefetch
+        discontinuity = Tag("EXT-X-DISCONTINUITY")
+        tls = Tag("EXT-X-TWITCH-LIVE-SEQUENCE", 1234)  # value is irrelevant
+        segments = self.subject([
+            Playlist(0, [Seg(0), discontinuity, Seg(1)]),
+            Playlist(2, [Seg(2), Seg(3), discontinuity, Seg(4), Seg(5)]),
+            Playlist(6, [Seg(6), Seg(7), discontinuity, tls, Pre(8), Pre(9)]),
+            Playlist(10, [Seg(10), Seg(11), discontinuity, tls, Pre(12), discontinuity, tls, Pre(13)], end=True),
+        ], streamoptions={"disable_ads": True, "low_latency": True})
+
+        self.await_write(14)
+        assert self.await_read(read_all=True) == self.content(segments)
+        assert mock_log.warning.mock_calls == []
+
 
 class TestTwitchAPIAccessToken:
     @pytest.fixture(autouse=True)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/streamlink-6.8.1/tests/plugins/test_vidio.py new/streamlink-6.8.2/tests/plugins/test_vidio.py
--- old/streamlink-6.8.1/tests/plugins/test_vidio.py    2024-06-18 14:42:16.000000000 +0200
+++ new/streamlink-6.8.2/tests/plugins/test_vidio.py    2024-07-04 20:45:42.000000000 +0200
@@ -8,6 +8,7 @@
     should_match = [
        "https://www.vidio.com/live/204-sctv-tv-stream",
        "https://www.vidio.com/live/5075-dw-tv-stream",
+        "https://www.vidio.com/live/777-metro-tv",
        "https://www.vidio.com/watch/766861-5-rekor-fantastis-zidane-bersama-real-madrid",
     ]
 

++++++ streamlink.keyring ++++++
--- /var/tmp/diff_new_pack.V7uIbk/_old  2024-07-08 19:07:44.053585806 +0200
+++ /var/tmp/diff_new_pack.V7uIbk/_new  2024-07-08 19:07:44.057585952 +0200
@@ -1,6 +1,6 @@
 -----BEGIN PGP PUBLIC KEY BLOCK-----
-Comment: Hostname: 
 Version: Hockeypuck 2.2
+Comment: Hostname: 
 
 xjMEZLWhshYJKwYBBAHaRw8BAQdAu0sD5Ez8mfroVXpEohGHAeH1H2xduEHsYHkG
 IciKHdzNMlN0cmVhbWxpbmsgc2lnbmluZyBrZXkgPHN0cmVhbWxpbmtAcHJvdG9u
