Scrape the Maoyan movies TOP100 board (http://maoyan.com/board/4?offset=90)
1) Content to scrape: movie title, starring cast, release date, and poster image URL, stored in a MariaDB database (see the table sketch after this list);
2) Save every poster image locally as /mnt/maoyan/<movie title>.png
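The INSERT statement in the script below writes five values per row (rank, poster URL, title, stars, release time), but the original post never shows the table definition. A minimal sketch of a compatible MariaDB table follows; the column names and types here are assumptions, not taken from the article:

import pymysql as mysql

# Assumed schema: column names/types are guesses matching the five scraped
# fields; adjust them to your own table if it already exists.
ddl = '''
CREATE TABLE IF NOT EXISTS maoyan_top100 (
    rank_no     VARCHAR(8),
    img_url     VARCHAR(255),
    film_name   VARCHAR(128),
    stars       VARCHAR(255),
    releasetime VARCHAR(64)
) DEFAULT CHARSET=utf8;
'''

conn = mysql.connect(user='root', passwd='root', db='python',
                     charset='utf8', autocommit=True)
with conn.cursor() as cur:
    cur.execute(ddl)
conn.close()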
Code:
import os
import re
import pymysql as mysql
from urllib import request
from urllib.request import urlopen

# MariaDB connection settings
u = 'root'
p = 'root'
d = 'python'
sql = 'insert into maoyan_top100 values(%s,%s,%s,%s,%s)'

# The board is paged by the offset parameter, 10 movies per page
url = 'http://maoyan.com/board/4?offset='
# Capture groups: rank, poster image url, movie title, stars, release time
pattern = r'<dd>[\s\S]*?board-index.*?>(\d+)</i>[\s\S]*?<img data-src="(http://.+?)" alt="(.*?)"[\s\S]*?star">[\s]*(.*?)[\s]*?</p>[\s\S]*?releasetime">[\s]*(.*?)[\s]*?</p>'
myAgent = "Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0"

conn = mysql.connect(user=u, passwd=p, db=d, charset='utf8', autocommit=True)
cur = conn.cursor()

# Make sure the local output directory for posters exists
os.makedirs('maoyan_images', exist_ok=True)

def write_to_mysql(item):
    # Insert all tuples scraped from one page in a single batch
    cur.executemany(sql, item)

def save_picture(rank, img_url, film_name):
    # Download the poster and save it as <rank>_<title>.jpg
    img_content = urlopen(img_url).read()
    img_name = 'maoyan_images/' + rank + '_' + film_name + '.jpg'
    with open(img_name, 'wb') as f:
        f.write(img_content)

def main():
    for i in range(10):
        page_url = url + str(i * 10)
        myrequest = request.Request(page_url, headers={'User-Agent': myAgent})
        page_content = urlopen(myrequest).read().decode('utf-8')
        items = re.findall(pattern, page_content)
        # [('1', 'http://p1.meituan.net/movie/20803f59291c47e1e116c11963ce019e68711.jpg@160w_220h_1e_1c', '霸王別姬', '主演:張國榮,張豐毅,鞏俐', '上映時間:1993-01-01')...]
        write_to_mysql(items)
        for item in items:
            save_picture(item[0], item[1], item[2])

if __name__ == '__main__':
    main()
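As a quick sanity check (not part of the original script), the inserted rows can be read back with a short query; this sketch assumes the same connection settings used by the scraper above:

import pymysql as mysql

# Reuses the connection parameters from the scraper; adjust if yours differ.
conn = mysql.connect(user='root', passwd='root', db='python', charset='utf8')
with conn.cursor() as cur:
    cur.execute('SELECT * FROM maoyan_top100 LIMIT 3')
    for row in cur.fetchall():
        print(row)
conn.close()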
Scraping results:
1) Poster images saved locally
2) Records written to the database