Script 'mail_helper' called by obssrc
Hello community,

Here is the log from the commit of package you-get for openSUSE:Factory, checked in at 2024-05-22 21:32:03
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/you-get (Old)
 and      /work/SRC/openSUSE:Factory/.you-get.new.1880 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "you-get"

Wed May 22 21:32:03 2024 rev:46 rq:1175698 version:0.4.1700

Changes:
--------
--- /work/SRC/openSUSE:Factory/you-get/you-get.changes  2022-12-12 17:41:31.677735071 +0100
+++ /work/SRC/openSUSE:Factory/.you-get.new.1880/you-get.changes        2024-05-22 21:32:29.650016808 +0200
@@ -1,0 +2,8 @@
+Wed May 22 07:21:43 UTC 2024 - Luigi Baldoni <aloi...@gmx.com>
+
+- Update to version 0.4.1700
+  * Bilibili: fix extraction
+  * TikTok: fix extraction
+  * X (Twitter): fix extraction
+
+-------------------------------------------------------------------

Old:
----
  you-get-0.4.1650.tar.gz

New:
----
  you-get-0.4.1700.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ you-get.spec ++++++
--- /var/tmp/diff_new_pack.Hthqaw/_old  2024-05-22 21:32:30.326041533 +0200
+++ /var/tmp/diff_new_pack.Hthqaw/_new  2024-05-22 21:32:30.326041533 +0200
@@ -1,7 +1,7 @@
 #
 # spec file for package you-get
 #
-# Copyright (c) 2022 SUSE LLC
+# Copyright (c) 2024 SUSE LLC
 #
 # All modifications and additions to the file contributed by third parties
 # remain the property of their copyright owners, unless otherwise agreed
@@ -17,7 +17,7 @@
 
 
 Name:           you-get
-Version:        0.4.1650
+Version:        0.4.1700
 Release:        0
 Summary:        Dumb downloader that scrapes the web
 License:        MIT

++++++ you-get-0.4.1650.tar.gz -> you-get-0.4.1700.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/you-get-0.4.1650/.github/workflows/python-package.yml new/you-get-0.4.1700/.github/workflows/python-package.yml
--- old/you-get-0.4.1650/.github/workflows/python-package.yml   2022-12-11 18:15:46.000000000 +0100
+++ new/you-get-0.4.1700/.github/workflows/python-package.yml   2024-05-22 01:58:47.000000000 +0200
@@ -1,5 +1,4 @@
 # This workflow will install Python dependencies, run tests and lint with a variety of Python versions
-# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
 
 name: develop
 
@@ -16,7 +15,7 @@
     strategy:
       fail-fast: false
       matrix:
-        python-version: [3.7, 3.8, 3.9, '3.10', 3.11-dev, pypy-3.8, pypy-3.9]
+        python-version: [3.7, 3.8, 3.9, '3.10', '3.11', '3.12', pypy-3.8, pypy-3.9, pypy-3.10]
 
     steps:
     - uses: actions/checkout@v3
@@ -26,7 +25,7 @@
         python-version: ${{ matrix.python-version }}
     - name: Install dependencies
       run: |
-        python -m pip install --upgrade pip
+        python -m pip install --upgrade pip setuptools
         pip install flake8 pytest
         if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
     - name: Lint with flake8
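The matrix bump above adds Python 3.12, where the stdlib `distutils` module was removed (PEP 632) and setuptools no longer comes pre-installed in fresh environments; that is presumably why the workflow now upgrades `setuptools` alongside pip. A small probe of that situation (a sketch; the result depends on how the interpreter was provisioned):

```python
import importlib.util
import sys

# On 3.12+, distutils is gone from the stdlib; setup.py-based builds
# therefore need setuptools installed explicitly (it vendors its own
# distutils shim).
print(sys.version_info[:2])
has_setuptools = importlib.util.find_spec('setuptools') is not None
print('setuptools available:', has_setuptools)
```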
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/you-get-0.4.1650/LICENSE.txt new/you-get-0.4.1700/LICENSE.txt
--- old/you-get-0.4.1650/LICENSE.txt    2022-12-11 18:15:46.000000000 +0100
+++ new/you-get-0.4.1700/LICENSE.txt    2024-05-22 01:58:47.000000000 +0200
@@ -1,6 +1,6 @@
 MIT License
 
-Copyright (c) 2012-2020 Mort Yao <mort....@gmail.com> and other contributors
+Copyright (c) 2012-2024 Mort Yao <mort....@gmail.com> and other contributors
               (https://github.com/soimort/you-get/graphs/contributors)
 Copyright (c) 2012 Boyu Guo <iam...@gmail.com>
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/you-get-0.4.1650/README.md new/you-get-0.4.1700/README.md
--- old/you-get-0.4.1650/README.md      2022-12-11 18:15:46.000000000 +0100
+++ new/you-get-0.4.1700/README.md      2024-05-22 01:58:47.000000000 +0200
@@ -63,9 +63,9 @@
 
 ### Option 1: Install via pip
 
-The official release of `you-get` is distributed on [PyPI](https://pypi.python.org/pypi/you-get), and can be installed easily from a PyPI mirror via the [pip](https://en.wikipedia.org/wiki/Pip_\(package_manager\)) package manager. Note that you must use the Python 3 version of `pip`:
+The official release of `you-get` is distributed on [PyPI](https://pypi.python.org/pypi/you-get), and can be installed easily from a PyPI mirror via the [pip](https://en.wikipedia.org/wiki/Pip_\(package_manager\)) package manager: (Note that you must use the Python 3 version of `pip`)
 
-    $ pip3 install you-get
+    $ pip install you-get
 
 ### Option 2: Install via [Antigen](https://github.com/zsh-users/antigen) (for Zsh users)
 
@@ -80,16 +80,18 @@
 Alternatively, run
 
 ```
-$ [sudo] python3 setup.py install
+$ cd path/to/you-get
+$ [sudo] python -m pip install .
 ```
 
 Or
 
 ```
-$ python3 setup.py install --user
+$ cd path/to/you-get
+$ python -m pip install . --user
 ```
 
-to install `you-get` to a permanent path.
-to install `you-get` to a permanent path. (And don't omit the dot `.` representing the current directory)
 
 You can also use the [pipenv](https://pipenv.pypa.io/en/latest) to install the `you-get` in the Python virtual environment.
 
@@ -107,7 +109,7 @@
 $ git clone git://github.com/soimort/you-get.git
 ```
 
-Then put the cloned directory into your `PATH`, or run `./setup.py install` to install `you-get` to a permanent path.
+Then put the cloned directory into your `PATH`, or run `python -m pip install path/to/you-get` to install `you-get` to a permanent path.
 
 ### Option 5: Homebrew (Mac only)
 
@@ -134,7 +136,7 @@
 Based on which option you chose to install `you-get`, you may upgrade it via:
 
 ```
-$ pip3 install --upgrade you-get
+$ pip install --upgrade you-get
 ```
 
 or download the latest release via:
@@ -146,7 +148,7 @@
 In order to get the latest ```develop``` branch without messing up the PIP, you can try:
 
 ```
-$ pip3 install --upgrade git+https://github.com/soimort/you-get@develop
+$ pip install --upgrade git+https://github.com/soimort/you-get@develop
 ```
 
 ## Getting Started
@@ -266,25 +268,20 @@
 Size:       0.06 MiB (66482 Bytes)
 
 Downloading rms.jpg ...
-100.0% (  0.1/0.1  MB) ├████████████████████████████████████████┤[1/1]  127 kB/s
+ 100% (  0.1/  0.1MB) ├████████████████████████████████████████┤[1/1]  127 kB/s
 ```
 
 Otherwise, `you-get` will scrape the web page and try to figure out if there's anything interesting to you:
 
 ```
-$ you-get http://kopasas.tumblr.com/post/69361932517
+$ you-get https://kopasas.tumblr.com/post/69361932517
 Site:       Tumblr.com
-Title:      kopasas
-Type:       Unknown type (None)
-Size:       0.51 MiB (536583 Bytes)
-
-Site:       Tumblr.com
-Title:      tumblr_mxhg13jx4n1sftq6do1_1280
+Title:      [tumblr] tumblr_mxhg13jx4n1sftq6do1_640
 Type:       Portable Network Graphics (image/png)
-Size:       0.51 MiB (536583 Bytes)
+Size:       0.11 MiB (118484 Bytes)
 
-Downloading tumblr_mxhg13jx4n1sftq6do1_1280.png ...
-100.0% (  0.5/0.5  MB) ├████████████████████████████████████████┤[1/1]   22 MB/s
+Downloading [tumblr] tumblr_mxhg13jx4n1sftq6do1_640.png ...
+ 100% (  0.1/  0.1MB) ├████████████████████████████████████████┤[1/1]   22 MB/s
 ```
 
 **Note:**
@@ -374,82 +371,81 @@
 | Site | URL | Videos? | Images? | Audios? |
 | :--: | :-- | :-----: | :-----: | :-----: |
 | **YouTube** | <https://www.youtube.com/>    |✓| | |
-| **Twitter** | <https://twitter.com/>        |✓|✓| |
-| VK          | <http://vk.com/>              |✓|✓| |
-| Vine        | <https://vine.co/>            |✓| | |
+| **X (Twitter)** | <https://x.com/>        |✓|✓| |
+| VK          | <https://vk.com/>              |✓|✓| |
 | Vimeo       | <https://vimeo.com/>          |✓| | |
-| Veoh        | <http://www.veoh.com/>        |✓| | |
+| Veoh        | <https://www.veoh.com/>        |✓| | |
 | **Tumblr**  | <https://www.tumblr.com/>     |✓|✓|✓|
-| TED         | <http://www.ted.com/>         |✓| | |
+| TED         | <https://www.ted.com/>         |✓| | |
 | SoundCloud  | <https://soundcloud.com/>     | | |✓|
 | SHOWROOM    | <https://www.showroom-live.com/> |✓| | |
 | Pinterest   | <https://www.pinterest.com/>  | |✓| |
-| MTV81       | <http://www.mtv81.com/>       |✓| | |
+| MTV81       | <https://www.mtv81.com/>       |✓| | |
 | Mixcloud    | <https://www.mixcloud.com/>   | | |✓|
-| Metacafe    | <http://www.metacafe.com/>    |✓| | |
-| Magisto     | <http://www.magisto.com/>     |✓| | |
+| Metacafe    | <https://www.metacafe.com/>    |✓| | |
+| Magisto     | <https://www.magisto.com/>     |✓| | |
 | Khan Academy | <https://www.khanacademy.org/> |✓| | |
 | Internet Archive | <https://archive.org/>   |✓| | |
 | **Instagram** | <https://instagram.com/>    |✓|✓| |
-| InfoQ       | <http://www.infoq.com/presentations/> |✓| | |
-| Imgur       | <http://imgur.com/>           | |✓| |
-| Heavy Music Archive | <http://www.heavy-music.ru/> | | |✓|
-| Freesound   | <http://www.freesound.org/>   | | |✓|
+| InfoQ       | <https://www.infoq.com/presentations/> |✓| | |
+| Imgur       | <https://imgur.com/>           | |✓| |
+| Heavy Music Archive | <https://www.heavy-music.ru/> | | |✓|
+| Freesound   | <https://www.freesound.org/>   | | |✓|
 | Flickr      | <https://www.flickr.com/>     |✓|✓| |
-| FC2 Video   | <http://video.fc2.com/>       |✓| | |
+| FC2 Video   | <https://video.fc2.com/>       |✓| | |
 | Facebook    | <https://www.facebook.com/>   |✓| | |
-| eHow        | <http://www.ehow.com/>        |✓| | |
-| Dailymotion | <http://www.dailymotion.com/> |✓| | |
-| Coub        | <http://coub.com/>            |✓| | |
-| CBS         | <http://www.cbs.com/>         |✓| | |
-| Bandcamp    | <http://bandcamp.com/>        | | |✓|
-| AliveThai   | <http://alive.in.th/>         |✓| | |
-| interest.me | <http://ch.interest.me/tvn>   |✓| | |
-| **755<br/>ナナゴーゴー** | <http://7gogo.jp/> |✓|✓| |
-| **niconico<br/>ニコニコ動画** | <http://www.nicovideo.jp/> |✓| | |
-| **163<br/>网易视频<br/>网易云音乐** | <http://v.163.com/><br/><http://music.163.com/> |✓| |✓|
-| 56网     | <http://www.56.com/>           |✓| | |
-| **AcFun** | <http://www.acfun.cn/>        |✓| | |
-| **Baidu<br/>百度贴吧** | <http://tieba.baidu.com/> |✓|✓| |
-| 爆米花网 | <http://www.baomihua.com/>     |✓| | |
-| **bilibili<br/>哔哩哔哩** | <http://www.bilibili.com/> |✓|✓|✓|
-| 豆瓣     | <http://www.douban.com/>       |✓| |✓|
-| 斗鱼     | <http://www.douyutv.com/>      |✓| | |
-| 凤凰视频 | <http://v.ifeng.com/>          |✓| | |
-| 风行网   | <http://www.fun.tv/>           |✓| | |
-| iQIYI<br/>爱奇艺 | <http://www.iqiyi.com/> |✓| | |
-| 激动网   | <http://www.joy.cn/>           |✓| | |
-| 酷6网    | <http://www.ku6.com/>          |✓| | |
-| 酷狗音乐 | <http://www.kugou.com/>        | | |✓|
-| 酷我音乐 | <http://www.kuwo.cn/>          | | |✓|
-| 乐视网   | <http://www.le.com/>           |✓| | |
-| 荔枝FM   | <http://www.lizhi.fm/>         | | |✓|
-| 懒人听书 | <http://www.lrts.me/>          | | |✓|
-| 秒拍     | <http://www.miaopai.com/>      |✓| | |
-| MioMio弹幕网 | <http://www.miomio.tv/>    |✓| | |
-| MissEvan<br/>猫耳FM | <http://www.missevan.com/> | | |✓|
+| eHow        | <https://www.ehow.com/>        |✓| | |
+| Dailymotion | <https://www.dailymotion.com/> |✓| | |
+| Coub        | <https://coub.com/>            |✓| | |
+| CBS         | <https://www.cbs.com/>         |✓| | |
+| Bandcamp    | <https://bandcamp.com/>        | | |✓|
+| AliveThai   | <https://alive.in.th/>         |✓| | |
+| interest.me | <https://ch.interest.me/tvn>   |✓| | |
+| **755<br/>ナナゴーゴー** | <https://7gogo.jp/> |✓|✓| |
+| **niconico<br/>ニコニコ動画** | <https://www.nicovideo.jp/> |✓| | |
+| **163<br/>网易视频<br/>网易云音乐** | <https://v.163.com/><br/><https://music.163.com/> |✓| |✓|
+| 56网     | <https://www.56.com/>           |✓| | |
+| **AcFun** | <https://www.acfun.cn/>        |✓| | |
+| **Baidu<br/>百度贴吧** | <https://tieba.baidu.com/> |✓|✓| |
+| 爆米花网 | <https://www.baomihua.com/>     |✓| | |
+| **bilibili<br/>哔哩哔哩** | <https://www.bilibili.com/> |✓|✓|✓|
+| 豆瓣     | <https://www.douban.com/>       |✓| |✓|
+| 斗鱼     | <https://www.douyutv.com/>      |✓| | |
+| 凤凰视频 | <https://v.ifeng.com/>          |✓| | |
+| 风行网   | <https://www.fun.tv/>           |✓| | |
+| iQIYI<br/>爱奇艺 | <https://www.iqiyi.com/> |✓| | |
+| 激动网   | <https://www.joy.cn/>           |✓| | |
+| 酷6网    | <https://www.ku6.com/>          |✓| | |
+| 酷狗音乐 | <https://www.kugou.com/>        | | |✓|
+| 酷我音乐 | <https://www.kuwo.cn/>          | | |✓|
+| 乐视网   | <https://www.le.com/>           |✓| | |
+| 荔枝FM   | <https://www.lizhi.fm/>         | | |✓|
+| 懒人听书 | <https://www.lrts.me/>          | | |✓|
+| 秒拍     | <https://www.miaopai.com/>      |✓| | |
+| MioMio弹幕网 | <https://www.miomio.tv/>    |✓| | |
+| MissEvan<br/>猫耳FM | <https://www.missevan.com/> | | |✓|
 | 痞客邦   | <https://www.pixnet.net/>      |✓| | |
-| PPTV聚力 | <http://www.pptv.com/>         |✓| | |
-| 齐鲁网   | <http://v.iqilu.com/>          |✓| | |
-| QQ<br/>腾讯视频 | <http://v.qq.com/>      |✓| | |
-| 企鹅直播 | <http://live.qq.com/>          |✓| | |
-| Sina<br/>新浪视频<br/>微博秒拍视频 | <http://video.sina.com.cn/><br/><http://video.weibo.com/> |✓| | |
-| Sohu<br/>搜狐视频 | <http://tv.sohu.com/> |✓| | |
-| **Tudou<br/>土豆** | <http://www.tudou.com/> |✓| | |
-| 阳光卫视 | <http://www.isuntv.com/>       |✓| | |
-| **Youku<br/>优酷** | <http://www.youku.com/> |✓| | |
-| 战旗TV   | <http://www.zhanqi.tv/lives>   |✓| | |
-| 央视网   | <http://www.cntv.cn/>          |✓| | |
-| Naver<br/>네이버 | <http://tvcast.naver.com/>     |✓| | |
-| 芒果TV   | <http://www.mgtv.com/>         |✓| | |
-| 火猫TV   | <http://www.huomao.com/>       |✓| | |
-| 阳光宽频网 | <http://www.365yg.com/>      |✓| | |
+| PPTV聚力 | <https://www.pptv.com/>         |✓| | |
+| 齐鲁网   | <https://v.iqilu.com/>          |✓| | |
+| QQ<br/>腾讯视频 | <https://v.qq.com/>      |✓| | |
+| 企鹅直播 | <https://live.qq.com/>          |✓| | |
+| Sina<br/>新浪视频<br/>微博秒拍视频 | <https://video.sina.com.cn/><br/><https://video.weibo.com/> |✓| | |
+| Sohu<br/>搜狐视频 | <https://tv.sohu.com/> |✓| | |
+| **Tudou<br/>土豆** | <https://www.tudou.com/> |✓| | |
+| 阳光卫视 | <https://www.isuntv.com/>       |✓| | |
+| **Youku<br/>优酷** | <https://www.youku.com/> |✓| | |
+| 战旗TV   | <https://www.zhanqi.tv/lives>   |✓| | |
+| 央视网   | <https://www.cntv.cn/>          |✓| | |
+| Naver<br/>네이버 | <https://tvcast.naver.com/>     |✓| | |
+| 芒果TV   | <https://www.mgtv.com/>         |✓| | |
+| 火猫TV   | <https://www.huomao.com/>       |✓| | |
+| 阳光宽频网 | <https://www.365yg.com/>      |✓| | |
 | 西瓜视频 | <https://www.ixigua.com/>      |✓| | |
 | 新片场 | <https://www.xinpianchang.com/>      |✓| | |
 | 快手 | <https://www.kuaishou.com/>      |✓|✓| |
 | 抖音 | <https://www.douyin.com/>      |✓| | |
 | TikTok | <https://www.tiktok.com/>      |✓| | |
-| 中国体育(TV) | <http://v.zhibo.tv/> </br><http://video.zhibo.tv/>    |✓| | |
+| 中国体育(TV) | <https://v.zhibo.tv/> </br><https://video.zhibo.tv/>    |✓| | |
 | 知乎 | <https://www.zhihu.com/>      |✓| | |
 
 For all other sites not on the list, the universal extractor will take care of finding and downloading interesting resources from the page.
@@ -462,7 +458,7 @@
 
 ## Getting Involved
 
-You can reach us on the Gitter channel [#soimort/you-get](https://gitter.im/soimort/you-get) (here's how you [set up your IRC client](http://irc.gitter.im) for Gitter). If you have a quick question regarding `you-get`, ask it there.
+You can reach us on the Gitter channel [#soimort/you-get](https://gitter.im/soimort/you-get) (here's how you [set up your IRC client](https://irc.gitter.im) for Gitter). If you have a quick question regarding `you-get`, ask it there.
 
 If you are seeking to report an issue or contribute, please make sure to read [the guidelines](https://github.com/soimort/you-get/blob/develop/CONTRIBUTING.md) first.
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/you-get-0.4.1650/SECURITY.md new/you-get-0.4.1700/SECURITY.md
--- old/you-get-0.4.1650/SECURITY.md    1970-01-01 01:00:00.000000000 +0100
+++ new/you-get-0.4.1700/SECURITY.md    2024-05-22 01:58:47.000000000 +0200
@@ -0,0 +1,5 @@
+# Security Policy
+
+## Reporting a Vulnerability
+
+Please report security issues to <mort.yao+you-...@gmail.com>.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/you-get-0.4.1650/setup.py new/you-get-0.4.1700/setup.py
--- old/you-get-0.4.1650/setup.py       2022-12-11 18:15:46.000000000 +0100
+++ new/you-get-0.4.1700/setup.py       2024-05-22 01:58:47.000000000 +0200
@@ -5,7 +5,20 @@
 
 PROJ_METADATA = '%s.json' % PROJ_NAME
 
-import os, json, imp
+import importlib.util
+import importlib.machinery
+
+def load_source(modname, filename):
+    loader = importlib.machinery.SourceFileLoader(modname, filename)
+    spec = importlib.util.spec_from_file_location(modname, filename, loader=loader)
+    module = importlib.util.module_from_spec(spec)
+    # The module is always executed and not cached in sys.modules.
+    # Uncomment the following line to cache the module.
+    # sys.modules[module.__name__] = module
+    loader.exec_module(module)
+    return module
+
+import os, json
 here = os.path.abspath(os.path.dirname(__file__))
 proj_info = json.loads(open(os.path.join(here, PROJ_METADATA), encoding='utf-8').read())
 try:
@@ -13,7 +26,7 @@
 except:
     README = ""
 CHANGELOG = open(os.path.join(here, 'CHANGELOG.rst'), encoding='utf-8').read()
-VERSION = imp.load_source('version', os.path.join(here, 'src/%s/version.py' % PACKAGE_NAME)).__version__
+VERSION = load_source('version', os.path.join(here, 'src/%s/version.py' % PACKAGE_NAME)).__version__
 
 from setuptools import setup, find_packages
 setup(
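For context on the hunk above: `imp.load_source` disappeared with the `imp` module in Python 3.12, and the new `load_source` helper rebuilds it on `importlib`. A self-contained sketch of the same pattern (the throwaway `version.py` written here is illustrative, not part of the package):

```python
import importlib.machinery
import importlib.util
import os
import tempfile

def load_source(modname, filename):
    # Equivalent of the removed imp.load_source(), built on importlib,
    # following the same approach as the new setup.py.
    loader = importlib.machinery.SourceFileLoader(modname, filename)
    spec = importlib.util.spec_from_file_location(modname, filename, loader=loader)
    module = importlib.util.module_from_spec(spec)
    loader.exec_module(module)
    return module

# Demo: load a throwaway version.py, the way setup.py reads __version__.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, 'version.py')
    with open(path, 'w') as f:
        f.write("__version__ = '0.4.1700'\n")
    print(load_source('version', path).__version__)  # 0.4.1700
```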
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/you-get-0.4.1650/src/you_get/common.py new/you-get-0.4.1700/src/you_get/common.py
--- old/you-get-0.4.1650/src/you_get/common.py  2022-12-11 18:15:46.000000000 +0100
+++ new/you-get-0.4.1700/src/you_get/common.py  2024-05-22 01:58:47.000000000 +0200
@@ -111,8 +111,8 @@
     'wanmen'           : 'wanmen',
     'weibo'            : 'miaopai',
     'veoh'             : 'veoh',
-    'vine'             : 'vine',
     'vk'               : 'vk',
+    'x'                : 'twitter',
     'xiaokaxiu'        : 'yixia',
     'xiaojiadianvideo' : 'fc2video',
     'ximalaya'         : 'ximalaya',
@@ -138,13 +138,14 @@
 insecure = False
 m3u8 = False
 postfix = False
+prefix = None
 
 fake_headers = {
-    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',  # noqa
+    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
     'Accept-Charset': 'UTF-8,*;q=0.5',
     'Accept-Encoding': 'gzip,deflate,sdch',
     'Accept-Language': 'en-US,en;q=0.8',
-    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.74 Safari/537.36 Edg/79.0.309.43',  # noqa
+    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36 Edg/123.0.2420.97'  # Latest Edge
 }
 
 if sys.stdout.isatty():
@@ -351,6 +352,7 @@
     conn.set_debuglevel(debuglevel)
     conn.request("GET", url, headers=headers)
     resp = conn.getresponse()
+    logging.debug('getHttps: %s' % resp.getheaders())
     set_cookie = resp.getheader('set-cookie')
 
     data = resp.read()
@@ -361,7 +363,7 @@
         pass
 
     conn.close()
-    return str(data, encoding='utf-8'), set_cookie
+    return str(data, encoding='utf-8'), set_cookie  # TODO: support raw data
 
 
 # DEPRECATED in favor of get_content()
@@ -1014,6 +1016,8 @@
     title = tr(get_filename(title))
     if postfix and 'vid' in kwargs:
         title = "%s [%s]" % (title, kwargs['vid'])
+    if prefix is not None:
+        title = "[%s] %s" % (prefix, title)
     output_filename = get_output_filename(urls, title, ext, output_dir, merge)
     output_filepath = os.path.join(output_dir, output_filename)
 
@@ -1563,10 +1567,14 @@
         help='Do not download captions (subtitles, lyrics, danmaku, ...)'
     )
     download_grp.add_argument(
-        '--postfix', action='store_true', default=False,
+        '--post', '--postfix', dest='postfix', action='store_true', default=False,
         help='Postfix downloaded files with unique identifiers'
     )
     download_grp.add_argument(
+        '--pre', '--prefix', dest='prefix', metavar='PREFIX', default=None,
+        help='Prefix downloaded files with string'
+    )
+    download_grp.add_argument(
         '-f', '--force', action='store_true', default=False,
         help='Force overwriting existing files'
     )
@@ -1689,6 +1697,7 @@
     global insecure
     global m3u8
     global postfix
+    global prefix
     output_filename = args.output_filename
     extractor_proxy = args.extractor_proxy
 
@@ -1726,6 +1735,7 @@
         insecure = True
 
     postfix = args.postfix
+    prefix = args.prefix
 
     if args.no_proxy:
         set_http_proxy('')
@@ -1846,9 +1856,12 @@
         )
     else:
         try:
-            location = get_location(url) # t.co isn't happy with fake_headers
+            try:
+                location = get_location(url) # t.co isn't happy with fake_headers
+            except:
+                location = get_location(url, headers=fake_headers)
         except:
-            location = get_location(url, headers=fake_headers)
+            location = get_location(url, headers=fake_headers, get_method='GET')
 
         if location and location != url and not location.startswith('/'):
             return url_to_module(location)
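The new `--prefix` option above composes with the existing `--postfix` when the output title is built. A minimal sketch of the combined decoration logic (the function name `decorate_title` is illustrative; in `common.py` this lives inline, driven by the module-level `prefix`/`postfix` globals set from the CLI args):

```python
# Sketch (assumed names): --postfix appends the unique video id,
# the new --prefix prepends a user-supplied string.
def decorate_title(title, vid=None, postfix=False, prefix=None):
    if postfix and vid is not None:
        title = "%s [%s]" % (title, vid)
    if prefix is not None:
        title = "[%s] %s" % (prefix, title)
    return title

print(decorate_title('clip', vid='BV1xx', postfix=True, prefix='bilibili'))
# [bilibili] clip [BV1xx]
```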
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/you-get-0.4.1650/src/you_get/extractors/__init__.py new/you-get-0.4.1700/src/you_get/extractors/__init__.py
--- old/you-get-0.4.1650/src/you_get/extractors/__init__.py     2022-12-11 18:15:46.000000000 +0100
+++ new/you-get-0.4.1700/src/you_get/extractors/__init__.py     2024-05-22 01:58:47.000000000 +0200
@@ -74,7 +74,6 @@
 from .ucas import *
 from .veoh import *
 from .vimeo import *
-from .vine import *
 from .vk import *
 from .w56 import *
 from .wanmen import *
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/you-get-0.4.1650/src/you_get/extractors/bilibili.py new/you-get-0.4.1700/src/you_get/extractors/bilibili.py
--- old/you-get-0.4.1650/src/you_get/extractors/bilibili.py     2022-12-11 18:15:46.000000000 +0100
+++ new/you-get-0.4.1700/src/you_get/extractors/bilibili.py     2024-05-22 01:58:47.000000000 +0200
@@ -42,6 +42,8 @@
         {'id': 'jpg', 'quality': 0},
     ]
 
+    codecids = {7: 'AVC', 12: 'HEVC', 13: 'AV1'}
+
     @staticmethod
     def height_to_quality(height, qn):
         if height <= 360 and qn <= 16:
@@ -70,7 +72,7 @@
 
     @staticmethod
     def bilibili_api(avid, cid, qn=0):
-        return 'https://api.bilibili.com/x/player/playurl?avid=%s&cid=%s&qn=%s&type=&otype=json&fnver=0&fnval=16&fourk=1' % (avid, cid, qn)
+        return 'https://api.bilibili.com/x/player/playurl?avid=%s&cid=%s&qn=%s&type=&otype=json&fnver=0&fnval=4048&fourk=1' % (avid, cid, qn)
 
     @staticmethod
     def bilibili_audio_api(sid):
@@ -98,7 +100,8 @@
         appkey, sec = ''.join([chr(ord(i) + 2) for i in entropy[::-1]]).split(':')
         params = 'appkey=%s&cid=%s&otype=json&qn=%s&quality=%s&type=' % (appkey, cid, qn, qn)
         chksum = hashlib.md5(bytes(params + sec, 'utf8')).hexdigest()
-        return 'https://interface.bilibili.com/v2/playurl?%s&sign=%s' % (params, chksum)
+        return 'https://api.bilibili.com/x/player/wbi/v2?%s&sign=%s' % (params, chksum)
+
 
     @staticmethod
     def bilibili_live_api(cid):
@@ -115,7 +118,7 @@
     @staticmethod
     def bilibili_space_channel_api(mid, cid, pn=1, ps=100):
         return 'https://api.bilibili.com/x/space/channel/video?mid=%s&cid=%s&pn=%s&ps=%s&order=0&jsonp=jsonp' % (mid, cid, pn, ps)
-   
+
     @staticmethod
     def bilibili_space_collection_api(mid, cid, pn=1, ps=30):
         return 'https://api.bilibili.com/x/polymer/space/seasons_archives_list?mid=%s&season_id=%s&sort_reverse=false&page_num=%s&page_size=%s' % (mid, cid, pn, ps)
@@ -123,7 +126,7 @@
     @staticmethod
     def bilibili_series_archives_api(mid, sid, pn=1, ps=100):
         return 'https://api.bilibili.com/x/series/archives?mid=%s&series_id=%s&pn=%s&ps=%s&only_normal=true&sort=asc&jsonp=jsonp' % (mid, sid, pn, ps)
-    
+
     @staticmethod
     def bilibili_space_favlist_api(fid, pn=1, ps=20):
         return 'https://api.bilibili.com/x/v3/fav/resource/list?media_id=%s&pn=%s&ps=%s&order=mtime&type=0&tid=0&jsonp=jsonp' % (fid, pn, ps)
@@ -222,6 +225,10 @@
             if 'videoData' in initial_state:
                 # (standard video)
 
+                # warn if cookies are not loaded
+                if cookies is None:
+                    log.w('You will need login cookies for 720p formats or above. (use --cookies to load cookies.txt.)')
+
                 # warn if it is a multi-part video
                 pn = initial_state['videoData']['videos']
                 if pn > 1 and not kwargs.get('playlist'):
@@ -302,11 +309,10 @@
                 if 'dash' in playinfo['data']:
                     audio_size_cache = {}
                     for video in playinfo['data']['dash']['video']:
-                        # prefer the latter codecs!
                         s = self.stream_qualities[video['id']]
-                        format_id = 'dash-' + s['id']  # prefix
+                        format_id = f"dash-{s['id']}-{self.codecids[video['codecid']]}"  # prefix
                         container = 'mp4'  # enforce MP4 container
-                        desc = s['desc']
+                        desc = s['desc'] + ' ' + video['codecs']
                         audio_quality = s['audio_quality']
                         baseurl = video['baseUrl']
                         size = self.url_size(baseurl, headers=self.bilibili_headers(referer=self.url))
@@ -329,7 +335,7 @@
                                                             'src': [[baseurl]], 'size': size}
 
             # get danmaku
-            self.danmaku = get_content('http://comment.bilibili.com/%s.xml' % cid)
+            self.danmaku = get_content('https://comment.bilibili.com/%s.xml' % cid, headers=self.bilibili_headers(referer=self.url))
 
         # bangumi
         elif sort == 'bangumi':
@@ -408,7 +414,7 @@
                                                         'src': [[baseurl], [audio_baseurl]], 'size': size}
 
             # get danmaku
-            self.danmaku = get_content('http://comment.bilibili.com/%s.xml' % cid)
+            self.danmaku = get_content('https://comment.bilibili.com/%s.xml' % cid, headers=self.bilibili_headers(referer=self.url))
 
         # vc video
         elif sort == 'vc':
@@ -590,7 +596,7 @@
                                                         'src': [[baseurl]], 'size': size}
 
         # get danmaku
-        self.danmaku = get_content('http://comment.bilibili.com/%s.xml' % cid)
+        self.danmaku = get_content('https://comment.bilibili.com/%s.xml' % cid, headers=self.bilibili_headers(referer=self.url))
 
     def extract(self, **kwargs):
         # set UA and referer for downloading
@@ -747,13 +753,20 @@
         elif sort == 'space_channel_series':
             m = re.match(r'https?://space\.?bilibili\.com/(\d+)/channel/seriesdetail\?.*sid=(\d+)', self.url)
             mid, sid = m.group(1), m.group(2)
-            api_url = self.bilibili_series_archives_api(mid, sid)
-            api_content = get_content(api_url, headers=self.bilibili_headers(referer=self.url))
-            archives_info = json.loads(api_content)
-            # TBD: channel of more than 100 videos
+            pn = 1
+            video_list = []
+            while True:
+                api_url = self.bilibili_series_archives_api(mid, sid, pn)
+                api_content = get_content(api_url, headers=self.bilibili_headers(referer=self.url))
+                archives_info = json.loads(api_content)
+                video_list.extend(archives_info['data']['archives'])
+                if len(video_list) < archives_info['data']['page']['total'] and len(archives_info['data']['archives']) > 0:
+                    pn += 1
+                else:
+                    break
 
-            epn, i = len(archives_info['data']['archives']), 0
-            for video in archives_info['data']['archives']:
+            epn, i = len(video_list), 0
+            for video in video_list:
                 i += 1; log.w('Extracting %s of %s videos ...' % (i, epn))
                 url = 'https://www.bilibili.com/video/av%s' % video['aid']
                 self.__class__().download_playlist_by_url(url, **kwargs)
@@ -761,13 +774,20 @@
         elif sort == 'space_channel_collection':
             m = re.match(r'https?://space\.?bilibili\.com/(\d+)/channel/collectiondetail\?.*sid=(\d+)', self.url)
             mid, sid = m.group(1), m.group(2)
-            api_url = self.bilibili_space_collection_api(mid, sid)
-            api_content = get_content(api_url, headers=self.bilibili_headers(referer=self.url))
-            archives_info = json.loads(api_content)
-            # TBD: channel of more than 100 videos
+            pn = 1
+            video_list = []
+            while True:
+                api_url = self.bilibili_space_collection_api(mid, sid, pn)
+                api_content = get_content(api_url, headers=self.bilibili_headers(referer=self.url))
+                archives_info = json.loads(api_content)
+                video_list.extend(archives_info['data']['archives'])
+                if len(video_list) < archives_info['data']['page']['total'] and len(archives_info['data']['archives']) > 0:
+                    pn += 1
+                else:
+                    break
 
-            epn, i = len(archives_info['data']['archives']), 0
-            for video in archives_info['data']['archives']:
+            epn, i = len(video_list), 0
+            for video in video_list:
                 i += 1; log.w('Extracting %s of %s videos ...' % (i, epn))
                 url = 'https://www.bilibili.com/video/av%s' % video['aid']
                 self.__class__().download_playlist_by_url(url, **kwargs)
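The two hunks above replace the old single 100-entry request (the former "TBD: channel of more than 100 videos") with a paging loop that stops when the collected list reaches `data.page.total` or a page comes back empty. A standalone sketch of that loop, with `fetch_page` as a stand-in for the `bilibili_*_api` + `get_content` calls:

```python
# Sketch of the new channel/collection pagination (assumed helper names).
def collect_archives(fetch_page):
    pn = 1
    video_list = []
    while True:
        data = fetch_page(pn)['data']
        video_list.extend(data['archives'])
        # Keep paging while we have fewer items than the reported total
        # and the last page was non-empty; otherwise stop.
        if len(video_list) < data['page']['total'] and len(data['archives']) > 0:
            pn += 1
        else:
            break
    return video_list

# Fake API returning 250 videos in pages of 100, mimicking the JSON shape.
videos = [{'aid': i} for i in range(250)]
def fake_fetch(pn, ps=100):
    return {'data': {'page': {'total': len(videos)},
                     'archives': videos[(pn - 1) * ps:pn * ps]}}

print(len(collect_archives(fake_fetch)))  # 250
```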
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/you-get-0.4.1650/src/you_get/extractors/imgur.py new/you-get-0.4.1700/src/you_get/extractors/imgur.py
--- old/you-get-0.4.1650/src/you_get/extractors/imgur.py        2022-12-11 18:15:46.000000000 +0100
+++ new/you-get-0.4.1700/src/you_get/extractors/imgur.py        2024-05-22 01:58:47.000000000 +0200
@@ -13,9 +13,11 @@
     ]
 
     def prepare(self, **kwargs):
+        self.ua = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36 Edg/123.0.2420.97'
+
         if re.search(r'imgur\.com/a/', self.url):
             # album
-            content = get_content(self.url)
+            content = get_content(self.url, headers=fake_headers)
             album = match1(content, r'album\s*:\s*({.*}),') or \
                     match1(content, r'image\s*:\s*({.*}),')
             album = json.loads(album)
@@ -39,7 +41,7 @@
 
         elif re.search(r'i\.imgur\.com/', self.url):
             # direct image
-            _, container, size = url_info(self.url)
+            _, container, size = url_info(self.url, faker=True)
             self.streams = {
                 'original': {
                     'src': [self.url],
@@ -51,10 +53,10 @@
 
         else:
             # gallery image
-            content = get_content(self.url)
+            content = get_content(self.url, headers=fake_headers)
             url = match1(content, r'meta property="og:video"[^>]+(https?://i.imgur.com/[^"?]+)') or \
                 match1(content, r'meta property="og:image"[^>]+(https?://i.imgur.com/[^"?]+)')
-            _, container, size = url_info(url)
+            _, container, size = url_info(url, headers={'User-Agent': fake_headers['User-Agent']})
             self.streams = {
                 'original': {
                     'src': [url],
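The imgur hunks above all route requests through spoofed browser headers. A minimal, self-contained sketch of that pattern, where `build_request` is a hypothetical stand-in for you-get's own request plumbing (`get_content()`, `url_info()`):

```python
# Sketch of the header-spoofing pattern in the imgur hunks above: every
# request now carries a browser-like User-Agent, since imgur rejects the
# default urllib one. `build_request` is a hypothetical helper, not part of
# you-get itself.
import urllib.request

FAKE_HEADERS = {
    'User-Agent': ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                   'AppleWebKit/537.36 (KHTML, like Gecko) '
                   'Chrome/124.0.0.0 Safari/537.36'),
}

def build_request(url, headers=None):
    # urllib normalizes header names, so 'User-Agent' is stored as 'User-agent'.
    return urllib.request.Request(url, headers=headers or {})

req = build_request('https://imgur.com/a/example', headers=FAKE_HEADERS)
print(req.get_header('User-agent'))
```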
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/you-get-0.4.1650/src/you_get/extractors/tiktok.py new/you-get-0.4.1700/src/you_get/extractors/tiktok.py
--- old/you-get-0.4.1650/src/you_get/extractors/tiktok.py       2022-12-11 18:15:46.000000000 +0100
+++ new/you-get-0.4.1700/src/you_get/extractors/tiktok.py       2024-05-22 01:58:47.000000000 +0200
@@ -27,12 +27,12 @@
     tt_chain_token = r1('tt_chain_token=([^;]+);', set_cookie)
     headers['Cookie'] = 'tt_chain_token=%s' % tt_chain_token
 
-    data = r1(r'window\[\'SIGI_STATE\'\]=(.*?);window\[\'SIGI_RETRY\'\]', html) or \
-        r1(r'<script id="SIGI_STATE" type="application/json">(.*?)</script>', html)
+    data = r1(r'<script id="__UNIVERSAL_DATA_FOR_REHYDRATION__" type="application/json">(.*?)</script>', html)
     info = json.loads(data)
-    downloadAddr = info['ItemModule'][vid]['video']['downloadAddr']
-    author = info['ItemModule'][vid]['author']  # same as uniqueId
-    nickname = info['UserModule']['users'][author]['nickname']
+    itemStruct = info['__DEFAULT_SCOPE__']['webapp.video-detail']['itemInfo']['itemStruct']
+    downloadAddr = itemStruct['video']['downloadAddr']
+    author = itemStruct['author']['uniqueId']
+    nickname = itemStruct['author']['nickname']
     title = '%s [%s]' % (nickname or author, vid)
 
     mime, ext, size = url_info(downloadAddr, headers=headers)
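The TikTok fix above amounts to: the page state moved from `SIGI_STATE` to a `__UNIVERSAL_DATA_FOR_REHYDRATION__` script tag, and the item now sits under `__DEFAULT_SCOPE__` → `webapp.video-detail`. A runnable sketch of the new extraction path against a mocked page (the HTML, URLs, and user names below are made up for illustration):

```python
# Mocked TikTok page containing the new rehydration script tag; the JSON
# mirrors the path the diff above reads, with illustrative values.
import json
import re

html = (
    '<script id="__UNIVERSAL_DATA_FOR_REHYDRATION__" type="application/json">'
    + json.dumps({
        '__DEFAULT_SCOPE__': {
            'webapp.video-detail': {
                'itemInfo': {
                    'itemStruct': {
                        'video': {'downloadAddr': 'https://example.com/v.mp4'},
                        'author': {'uniqueId': 'someuser',
                                   'nickname': 'Some User'},
                    }
                }
            }
        }
    })
    + '</script>'
)

# Same regex shape as the new extractor line, applied to the mocked page.
m = re.search(r'<script id="__UNIVERSAL_DATA_FOR_REHYDRATION__" '
              r'type="application/json">(.*?)</script>', html)
info = json.loads(m.group(1))
item = info['__DEFAULT_SCOPE__']['webapp.video-detail']['itemInfo']['itemStruct']
title = '%s [%s]' % (item['author']['nickname'], '1234567890')
print(title)  # → Some User [1234567890]
```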
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/you-get-0.4.1650/src/you_get/extractors/tumblr.py new/you-get-0.4.1700/src/you_get/extractors/tumblr.py
--- old/you-get-0.4.1650/src/you_get/extractors/tumblr.py       2022-12-11 18:15:46.000000000 +0100
+++ new/you-get-0.4.1700/src/you_get/extractors/tumblr.py       2024-05-22 01:58:47.000000000 +0200
@@ -6,7 +6,6 @@
 from .universal import *
 from .dailymotion import dailymotion_download
 from .vimeo import vimeo_download
-from .vine import vine_download
 
 def tumblr_download(url, output_dir='.', merge=True, info_only=False, **kwargs):
     if re.match(r'https?://\d+\.media\.tumblr\.com/', url):
@@ -82,16 +81,16 @@
             except: pass
 
         if tuggles:
-            size = sum([tuggles[t]['size'] for t in tuggles])
-            print_info(site_info, page_title, None, size)
+            #size = sum([tuggles[t]['size'] for t in tuggles])
+            #print_info(site_info, page_title, None, size)
 
-            if not info_only:
-                for t in tuggles:
-                    title = tuggles[t]['title']
-                    ext = tuggles[t]['ext']
-                    size = tuggles[t]['size']
-                    url = tuggles[t]['url']
-                    print_info(site_info, title, ext, size)
+            for t in tuggles:
+                title = '[tumblr] ' + tuggles[t]['title']
+                ext = tuggles[t]['ext']
+                size = tuggles[t]['size']
+                url = tuggles[t]['url']
+                print_info(site_info, title, ext, size)
+                if not info_only:
                     download_urls([url], title, ext, size,
                                   output_dir=output_dir)
             return
@@ -125,9 +124,6 @@
             elif re.search(r'dailymotion\.com', iframe_url):
                 dailymotion_download(iframe_url, output_dir, merge=merge, info_only=info_only, **kwargs)
                 return
-            elif re.search(r'vine\.co', iframe_url):
-                vine_download(iframe_url, output_dir, merge=merge, info_only=info_only, **kwargs)
-                return
             else:
                 iframe_html = get_content(iframe_url)
                 real_url = r1(r'<source src="([^"]*)"', iframe_html)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/you-get-0.4.1650/src/you_get/extractors/twitter.py new/you-get-0.4.1700/src/you_get/extractors/twitter.py
--- old/you-get-0.4.1650/src/you_get/extractors/twitter.py      2022-12-11 18:15:46.000000000 +0100
+++ new/you-get-0.4.1700/src/you_get/extractors/twitter.py      2024-05-22 01:58:47.000000000 +0200
@@ -4,7 +4,6 @@
 
 from ..common import *
 from .universal import *
-from .vine import vine_download
 
 def extract_m3u(source):
     r1 = get_content(source)
@@ -23,7 +22,7 @@
     if re.match(r'https?://mobile', url): # normalize mobile URL
         url = 'https://' + match1(url, r'//mobile\.(.+)')
 
-    if re.match(r'https?://twitter\.com/i/moments/', url): # moments
+    if re.match(r'https?://twitter\.com/i/moments/', url): # FIXME: moments
         html = get_html(url, faker=True)
         paths = re.findall(r'data-permalink-path="([^"]+)"', html)
         for path in paths:
@@ -34,102 +33,49 @@
                              **kwargs)
         return
 
-    html = get_html(url, faker=True) # now it seems faker must be enabled
-    screen_name = r1(r'twitter\.com/([^/]+)', url) or r1(r'data-screen-name="([^"]*)"', html) or \
-        r1(r'<meta name="twitter:title" content="([^"]*)"', html)
-    item_id = r1(r'twitter\.com/[^/]+/status/(\d+)', url) or r1(r'data-item-id="([^"]*)"', html) or \
-        r1(r'<meta name="twitter:site:id" content="([^"]*)"', html)
+    m = re.match('^https?://(mobile\.)?(x|twitter)\.com/([^/]+)/status/(\d+)', url)
+    assert m
+    screen_name, item_id = m.group(3), m.group(4)
     page_title = "{} [{}]".format(screen_name, item_id)
 
-    try:
-        authorization = 'Bearer AAAAAAAAAAAAAAAAAAAAANRILgAAAAAAnNwIzUejRCOuH5E6I8xnZz4puTs%3D1Zv7ttfk8LF81IUq16cHjhLTvJu4FA33AGWWjCpTnA'
-
-        # FIXME: 403 with cookies
-        ga_url = 'https://api.twitter.com/1.1/guest/activate.json'
-        ga_content = post_content(ga_url, headers={'authorization': authorization})
-        guest_token = json.loads(ga_content)['guest_token']
-
-        api_url = 'https://api.twitter.com/2/timeline/conversation/%s.json?tweet_mode=extended' % item_id
-        api_content = get_content(api_url, headers={'authorization': authorization, 'x-guest-token': guest_token})
-
-        info = json.loads(api_content)
-        if item_id not in info['globalObjects']['tweets']:
-            # something wrong here
-            #log.wtf('[Failed] ' + info['timeline']['instructions'][0]['addEntries']['entries'][0]['content']['item']['content']['tombstone']['tombstoneInfo']['richText']['text'], exit_code=None)
-            assert False
-
-        elif 'extended_entities' in info['globalObjects']['tweets'][item_id]:
-            # if the tweet contains media, download them
-            media = info['globalObjects']['tweets'][item_id]['extended_entities']['media']
-
-        elif 'entities' in info['globalObjects']['tweets'][item_id]:
-            # if the tweet contains media from another tweet, download it
-            expanded_url = None
-            for j in info['globalObjects']['tweets'][item_id]['entities']['urls']:
-                if re.match(r'^https://twitter.com/.*', j['expanded_url']):
-                    # FIXME: multiple valid expanded_url's?
-                    expanded_url = j['expanded_url']
-            if expanded_url is not None:
-                item_id = r1(r'/status/(\d+)', expanded_url)
-                assert False
-
-        elif info['globalObjects']['tweets'][item_id].get('is_quote_status') == True:
-            # if the tweet does not contain media, but it quotes a tweet
-            # and the quoted tweet contains media, download them
-            item_id = info['globalObjects']['tweets'][item_id]['quoted_status_id_str']
-
-            api_url = 'https://api.twitter.com/2/timeline/conversation/%s.json?tweet_mode=extended' % item_id
-            api_content = get_content(api_url, headers={'authorization': authorization, 'x-guest-token': guest_token})
-
-            info = json.loads(api_content)
-
-            if 'extended_entities' in info['globalObjects']['tweets'][item_id]:
-                media = info['globalObjects']['tweets'][item_id]['extended_entities']['media']
-            else:
-                # quoted tweet has no media
-                return
-
-        else:
-            # no media, no quoted tweet
-            return
-
-    except:
-        authorization = 'Bearer AAAAAAAAAAAAAAAAAAAAAPYXBAAAAAAACLXUNDekMxqa8h%2F40K4moUkGsoc%3DTYfbDKbT3jJPCEVnMYqilB28NHfOPqkca3qaAxGfsyKCs0wRbw'
-
-        # FIXME: 403 with cookies
-        ga_url = 'https://api.twitter.com/1.1/guest/activate.json'
-        ga_content = post_content(ga_url, headers={'authorization': authorization})
-        guest_token = json.loads(ga_content)['guest_token']
-
-        api_url = 'https://api.twitter.com/1.1/statuses/show/%s.json?tweet_mode=extended' % item_id
-        api_content = get_content(api_url, headers={'authorization': authorization, 'x-guest-token': guest_token})
-        info = json.loads(api_content)
-        media = info['extended_entities']['media']
-
-    for medium in media:
-        if 'video_info' in medium:
-            variants = medium['video_info']['variants']
-            variants = sorted(variants, key=lambda kv: kv.get('bitrate', 0))
-            title = item_id + '_' + variants[-1]['url'].split('/')[-1].split('?')[0].split('.')[0]
-            urls = [ variants[-1]['url'] ]
+    # FIXME: this API won't work for protected or nsfw contents
+    api_url = 'https://cdn.syndication.twimg.com/tweet-result?id=%s&token=!' % item_id
+    content = get_content(api_url)
+    info = json.loads(content)
+
+    author = info['user']['name']
+    url = 'https://twitter.com/%s/status/%s' % (info['user']['screen_name'], item_id)
+    full_text = info['text']
+
+    if 'photos' in info:
+        for photo in info['photos']:
+            photo_url = photo['url']
+            title = item_id + '_' + photo_url.split('.')[-2].split('/')[-1]
+            urls = [ photo_url + ':orig' ]
             size = urls_size(urls)
-            mime, ext = variants[-1]['content_type'], 'mp4'
+            ext = photo_url.split('.')[-1]
 
-            print_info(site_info, title, mime, size)
+            print_info(site_info, title, ext, size)
             if not info_only:
                 download_urls(urls, title, ext, size, output_dir, merge=merge)
 
-        else:
-            title = item_id + '_' + medium['media_url_https'].split('.')[-2].split('/')[-1]
-            urls = [ medium['media_url_https'] + ':orig' ]
+    if 'video' in info:
+        for mediaDetail in info['mediaDetails']:
+            if 'video_info' not in mediaDetail: continue
+            variants = mediaDetail['video_info']['variants']
+            variants = sorted(variants, key=lambda kv: kv.get('bitrate', 0))
+            title = item_id + '_' + variants[-1]['url'].split('/')[-1].split('?')[0].split('.')[0]
+            urls = [ variants[-1]['url'] ]
             size = urls_size(urls)
-            ext = medium['media_url_https'].split('.')[-1]
+            mime, ext = variants[-1]['content_type'], 'mp4'
 
             print_info(site_info, title, ext, size)
             if not info_only:
                 download_urls(urls, title, ext, size, output_dir, merge=merge)
 
+    # TODO: should we deal with quoted tweets?
+
 
-site_info = "Twitter.com"
+site_info = "X.com"
 download = twitter_download
 download_playlist = playlist_not_supported('twitter')
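The rewritten X/Twitter flow above drops the guest-token API entirely: it fetches `cdn.syndication.twimg.com/tweet-result` and reads photos and video variants straight from the JSON. A self-contained sketch of that selection logic, run against a mocked response (the `info` dict and its URLs are made up, not real API output):

```python
# Mocked syndication-API response mirroring the fields the new extractor
# reads: user, text, photos, and mediaDetails with bitrate-tagged variants.
info = {
    'user': {'name': 'Example', 'screen_name': 'example'},
    'text': 'hello',
    'photos': [{'url': 'https://pbs.twimg.com/media/abcd1234.jpg'}],
    'video': {},
    'mediaDetails': [{
        'video_info': {'variants': [
            {'bitrate': 256000, 'content_type': 'video/mp4',
             'url': 'https://video.twimg.com/low.mp4?tag=1'},
            {'bitrate': 832000, 'content_type': 'video/mp4',
             'url': 'https://video.twimg.com/high.mp4?tag=1'},
        ]}
    }],
}
item_id = '1234567890'

# Photos: fetch the original-size file by appending ':orig', as the diff does.
photo_urls = [p['url'] + ':orig' for p in info.get('photos', [])]

# Videos: sort variants by bitrate and keep the highest one.
best = None
if 'video' in info:
    for detail in info['mediaDetails']:
        if 'video_info' not in detail:
            continue
        variants = sorted(detail['video_info']['variants'],
                          key=lambda kv: kv.get('bitrate', 0))
        best = variants[-1]['url']

print(photo_urls[0])  # → https://pbs.twimg.com/media/abcd1234.jpg:orig
print(best)           # → https://video.twimg.com/high.mp4?tag=1
```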
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/you-get-0.4.1650/src/you_get/extractors/vine.py new/you-get-0.4.1700/src/you_get/extractors/vine.py
--- old/you-get-0.4.1650/src/you_get/extractors/vine.py 2022-12-11 18:15:46.000000000 +0100
+++ new/you-get-0.4.1700/src/you_get/extractors/vine.py 1970-01-01 01:00:00.000000000 +0100
@@ -1,36 +0,0 @@
-#!/usr/bin/env python
-
-__all__ = ['vine_download']
-
-from ..common import *
-import json
-
-
-def vine_download(url, output_dir='.', merge=True, info_only=False, **kwargs):
-    html = get_content(url)
-
-    video_id = r1(r'vine.co/v/([^/]+)', url)
-    title = r1(r'<title>([^<]*)</title>', html)
-    stream = r1(r'<meta property="twitter:player:stream" content="([^"]*)">', html)
-    if not stream:  # https://vine.co/v/.../card
-        stream = r1(r'"videoUrl":"([^"]+)"', html)
-        if stream:
-            stream = stream.replace('\\/', '/')
-        else:
-            posts_url = 'https://archive.vine.co/posts/' + video_id + '.json'
-            json_data = json.loads(get_content(posts_url))
-            stream = json_data['videoDashUrl']
-            title = json_data['description']
-            if title == "":
-                title = json_data['username'].replace(" ", "_") + "_" + video_id
-
-    mime, ext, size = url_info(stream)
-
-    print_info(site_info, title, mime, size)
-    if not info_only:
-        download_urls([stream], title, ext, size, output_dir, merge=merge)
-
-
-site_info = "Vine.co"
-download = vine_download
-download_playlist = playlist_not_supported('vine')
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/you-get-0.4.1650/src/you_get/processor/ffmpeg.py new/you-get-0.4.1700/src/you_get/processor/ffmpeg.py
--- old/you-get-0.4.1650/src/you_get/processor/ffmpeg.py        2022-12-11 18:15:46.000000000 +0100
+++ new/you-get-0.4.1700/src/you_get/processor/ffmpeg.py        2024-05-22 01:58:47.000000000 +0200
@@ -128,7 +128,7 @@
 
 def ffmpeg_concat_ts_to_mkv(files, output='output.mkv'):
     print('Merging video parts... ', end="", flush=True)
-    params = [FFMPEG] + LOGLEVEL + ['-isync', '-y', '-i']
+    params = [FFMPEG] + LOGLEVEL + ['-y', '-i']
     params.append('concat:')
     for file in files:
         if os.path.isfile(file):
@@ -175,7 +175,7 @@
     if FFMPEG == 'avconv':
         params += ['-c', 'copy']
     else:
-        params += ['-c', 'copy', '-absf', 'aac_adtstoasc']
+        params += ['-c', 'copy', '-bsf:a', 'aac_adtstoasc']
     params.extend(['--', output])
 
     if subprocess.call(params, stdin=STDIN) == 0:
@@ -229,7 +229,7 @@
     if FFMPEG == 'avconv':
         params += ['-c', 'copy']
     else:
-        params += ['-c', 'copy', '-absf', 'aac_adtstoasc']
+        params += ['-c', 'copy', '-bsf:a', 'aac_adtstoasc']
     params.extend(['--', output])
 
     subprocess.check_call(params, stdin=STDIN)
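The ffmpeg hunks above swap the obsolete `-absf` alias for the current `-bsf:a` (audio bitstream filter) syntax. A sketch of how the resulting argument list is assembled; `FFMPEG`, `LOGLEVEL`, and the file names are illustrative placeholders, not values from the package:

```python
# Build an ffmpeg argv the way the patched merge helpers do, using the
# modern '-bsf:a aac_adtstoasc' form instead of the removed '-absf' alias.
FFMPEG = 'ffmpeg'
LOGLEVEL = ['-loglevel', 'quiet']
files = ['part1.ts', 'part2.ts']
output = 'output.mp4'

params = [FFMPEG] + LOGLEVEL + ['-y', '-i']
# concat: protocol joins the TS parts into one input stream.
params.append('concat:' + '|'.join(files))
# Copy both streams; convert AAC from ADTS to MP4-friendly framing.
params += ['-c', 'copy', '-bsf:a', 'aac_adtstoasc']
params.extend(['--', output])

print(' '.join(params))
```

Passing the list to `subprocess.call(params)` (as the real code does) would then invoke ffmpeg without shell quoting issues.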
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/you-get-0.4.1650/src/you_get/version.py new/you-get-0.4.1700/src/you_get/version.py
--- old/you-get-0.4.1650/src/you_get/version.py 2022-12-11 18:15:46.000000000 +0100
+++ new/you-get-0.4.1700/src/you_get/version.py 2024-05-22 01:58:47.000000000 +0200
@@ -1,4 +1,4 @@
 #!/usr/bin/env python
 
 script_name = 'you-get'
-__version__ = '0.4.1650'
+__version__ = '0.4.1700'
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/you-get-0.4.1650/tests/test.py new/you-get-0.4.1700/tests/test.py
--- old/you-get-0.4.1650/tests/test.py  2022-12-11 18:15:46.000000000 +0100
+++ new/you-get-0.4.1700/tests/test.py  2024-05-22 01:58:47.000000000 +0200
@@ -19,6 +19,7 @@
 class YouGetTests(unittest.TestCase):
     def test_imgur(self):
         imgur.download('http://imgur.com/WVLk5nD', info_only=True)
+        imgur.download('https://imgur.com/we-should-have-listened-WVLk5nD', info_only=True)
 
     def test_magisto(self):
         magisto.download(
@@ -40,10 +41,10 @@
         #)
 
     def test_acfun(self):
-        acfun.download('https://www.acfun.cn/v/ac11701912', info_only=True)
+        acfun.download('https://www.acfun.cn/v/ac44560432', info_only=True)
 
-    #def test_bilibili(self):
-    #    bilibili.download('https://www.bilibili.com/video/BV1sL4y177sC', info_only=True)
+    def test_bilibili(self):
+        bilibili.download('https://www.bilibili.com/video/BV1sL4y177sC', info_only=True)
 
     #def test_soundcloud(self):
         ## single song
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/you-get-0.4.1650/you-get.json new/you-get-0.4.1700/you-get.json
--- old/you-get-0.4.1650/you-get.json   2022-12-11 18:15:46.000000000 +0100
+++ new/you-get-0.4.1700/you-get.json   2024-05-22 01:58:47.000000000 +0200
@@ -22,6 +22,8 @@
     "Programming Language :: Python :: 3.8",
     "Programming Language :: Python :: 3.9",
     "Programming Language :: Python :: 3.10",
+    "Programming Language :: Python :: 3.11",
+    "Programming Language :: Python :: 3.12",
     "Topic :: Internet",
     "Topic :: Internet :: WWW/HTTP",
     "Topic :: Multimedia",
