
How to Download Other Users' Videos on Douyin

To batch-download a Douyin creator's videos and save each video's text content, you can use Python's requests and beautifulsoup libraries. The steps are as follows:
1. Use the requests library to fetch the HTML of the creator's profile page.
```python
import requests

url = 'https://www.douyin.com/user/xxxxxx'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 Edge/16.16299'
}
response = requests.get(url, headers=headers)
html = response.text
```
Here, xxxxxx is the creator's Douyin ID.
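Before parsing, it is worth verifying that the request actually succeeded and returned a body. A minimal sketch of such a check (`check_response` is a hypothetical helper, not part of the original steps):

```python
def check_response(status_code: int, body: str) -> str:
    # Raise if the fetch failed or returned nothing; otherwise
    # hand the HTML body on to the parsing step.
    if status_code != 200 or not body:
        raise RuntimeError(f'Request failed: HTTP {status_code}')
    return body
```

You would call it as `html = check_response(response.status_code, response.text)` before handing `html` to BeautifulSoup.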
2. Use the beautifulsoup library to parse the HTML and extract the creator's video list.
```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'html.parser')
video_list = soup.find_all('div', {'class': 'video-card'})
```
Here, 'video-card' is the class name of a Douyin video card.
3. For each video, use a regular expression to extract the download link, then download the video with requests.
```python
import re

for video in video_list:
    # playAddr values are escaped in the page source, so unescape them.
    video_url = re.findall(r'"playAddr":"(.*?)"', str(video))[0].encode('utf-8').decode('unicode_escape')
    video_title = video.find('p', {'class': 'desc'}).text
    video_response = requests.get(video_url, headers=headers)
    with open(video_title + '.mp4', 'wb') as f:
        f.write(video_response.content)
```
Here, video_url is the video's download link and video_title is the video's title.
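Since video_title is used directly as a filename, a title containing characters such as `/` or `:` would make the `open()` call fail. A small sketch of a sanitizer (`safe_filename` is my own name, not from the original code) could guard against that:

```python
import re

def safe_filename(title: str, max_len: int = 80) -> str:
    # Replace characters that are illegal in Windows/Unix filenames
    # with underscores, and trim overly long titles.
    cleaned = re.sub(r'[\\/:*?"<>|\r\n]', '_', title).strip()
    return cleaned[:max_len] or 'untitled'
```

With this in place the write becomes `open(safe_filename(video_title) + '.mp4', 'wb')`.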
4. For each video, fetch its detail page, extract the text content, and save it to a text file.
```python
for video in video_list:
    video_title = video.find('p', {'class': 'desc'}).text
    # Follow the link to the video's detail page and pull out its text.
    video_html = video.find('a', {'class': 'video-title'}).get('href')
    video_response = requests.get(video_html, headers=headers)
    video_soup = BeautifulSoup(video_response.text, 'html.parser')
    video_text = video_soup.find('div', {'class': 'body'}).text
    with open(video_title + '.txt', 'w', encoding='utf-8') as f:
        f.write(video_text)
```
Here, video_html is the link to the video's detail page and video_text is the video's text content.
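The `decode('unicode_escape')` step in the extraction exists because `playAddr` values in the page source escape slashes as `\u002F`. A self-contained illustration (the URL here is made up for the example):

```python
import re

# Escaped form as it would appear in the raw page source.
raw = '"playAddr":"https:\\u002F\\u002Fv.douyin.com\\u002Fabc"'
match = re.findall(r'"playAddr":"(.*?)"', raw)[0]
url = match.encode('utf-8').decode('unicode_escape')
print(url)  # https://v.douyin.com/abc
```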
The complete code is as follows:
```python
import requests
from bs4 import BeautifulSoup
import re

url = 'https://www.douyin.com/user/xxxxxx'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 Edge/16.16299'
}
response = requests.get(url, headers=headers)
html = response.text
soup = BeautifulSoup(html, 'html.parser')
video_list = soup.find_all('div', {'class': 'video-card'})
for video in video_list:
    # Extract and unescape the download link, then save the video.
    video_url = re.findall(r'"playAddr":"(.*?)"', str(video))[0].encode('utf-8').decode('unicode_escape')
    video_title = video.find('p', {'class': 'desc'}).text
    video_response = requests.get(video_url, headers=headers)
    with open(video_title + '.mp4', 'wb') as f:
        f.write(video_response.content)
    # Fetch the detail page and save the video's text content.
    video_html = video.find('a', {'class': 'video-title'}).get('href')
    video_response = requests.get(video_html, headers=headers)
    video_soup = BeautifulSoup(video_response.text, 'html.parser')
    video_text = video_soup.find('div', {'class': 'body'}).text
    with open(video_title + '.txt', 'w', encoding='utf-8') as f:
        f.write(video_text)
```