This article shows how to scrape news articles with Python. It is short and easy to follow; let's walk through it together.
This is a simple Python news-collection case: from the list page to the detail pages, through to saving each article as a .txt file. The site's page structure is fairly regular, simple and clear, so collecting and saving the news content is straightforward.
Libraries used
requests,time,re,UserAgent,etree
import requests,time,re
from fake_useragent import UserAgent
from lxml import etree
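fake_useragent is used here to randomize the User-Agent header on every request, which helps avoid trivial blocking. A minimal sketch of what it produces:

from fake_useragent import UserAgent

ua = UserAgent()
headers = {'User-Agent': ua.random}  # a freshly randomized browser User-Agent each access
print(headers)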
List page
Parse the article links on the list page with XPath:
href_list=req.xpath('//ul[@class="news-list"]/li/a/@href')
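Putting those pieces together, fetching the list page and extracting the links might look like the sketch below (the URL and XPath follow the source code; error handling is omitted for brevity):

import requests
from fake_useragent import UserAgent
from lxml import etree

url = 'https://yz.chsi.com.cn/kyzx/jyxd/'
headers = {'User-Agent': UserAgent().random}
html = requests.get(url, headers=headers).content.decode('utf-8')
req = etree.HTML(html)
# The hrefs are relative paths; join them with the site root before requesting
href_list = req.xpath('//ul[@class="news-list"]/li/a/@href')
for href in href_list:
    print(f'https://yz.chsi.com.cn{href}')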
Detail page
Parse the article content with XPath:
h3=req.xpath('//div[@class="title-box"]/h3/text()')[0]
author=req.xpath('//div[@class="title-box"]/span[@class="news-from"]/text()')[0]
details=req.xpath('//div[@class="content-l detail"]/p/text()')
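To see how these XPath expressions behave, you can run them against a small inline HTML fragment mirroring the page structure (the fragment below is a made-up stand-in, not the real page):

from lxml import etree

sample = '''
<div class="title-box">
  <h3>Example headline</h3>
  <span class="news-from">Example source 2020-01-01</span>
</div>
<div class="content-l detail">
  <p>First paragraph.</p>
  <p>Second paragraph.</p>
</div>
'''
req = etree.HTML(sample)
h3 = req.xpath('//div[@class="title-box"]/h3/text()')[0]
author = req.xpath('//div[@class="title-box"]/span[@class="news-from"]/text()')[0]
details = req.xpath('//div[@class="content-l detail"]/p/text()')
print(h3, author, details)  # Example headline ... ['First paragraph.', 'Second paragraph.']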
Format the content:
detail='\n'.join(details)
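details comes back as a list of strings, one per p tag, so joining with newlines rebuilds the article body. For example:

details = ['First paragraph.', 'Second paragraph.']
detail = '\n'.join(details)
print(detail)  # prints the two paragraphs on separate lines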
Format the title, replacing characters that are illegal in filenames:
pattern = r"[\/\\\:\*\?\"\<\>\|]"
new_title = re.sub(pattern, "_", title)  # replace illegal characters with underscores
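Since the title becomes the filename, the pattern strips out every character that Windows forbids in filenames. A quick check with a hypothetical title:

import re

pattern = r"[\/\\\:\*\?\"\<\>\|]"
title = "2021 Exam: What's new?"    # hypothetical title containing illegal characters
print(re.sub(pattern, "_", title))  # 2021 Exam_ What's new_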
Save the data as a .txt file:
def save(self, h3, author, detail):
    with open(f'{h3}.txt', 'w', encoding='utf-8') as f:
        f.write('%s%s%s%s%s' % (h3, '\n', detail, '\n', author))
    print(f"Saved {h3}.txt successfully!")
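One caveat: opening the file in 'w' mode silently overwrites an earlier article whose sanitized title collides. A sketch of one way around that (the uniquify helper is hypothetical, not part of the original script):

import os

def uniquify(path):
    # Append a counter until the filename is unused: title.txt, title(1).txt, ...
    base, ext = os.path.splitext(path)
    n = 1
    while os.path.exists(path):
        path = f'{base}({n}){ext}'
        n += 1
    return path

with open(uniquify('example.txt'), 'w', encoding='utf-8') as f:
    f.write('title\nbody\nauthor')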
Iterate over the collected data, handled with yield:
def get_tasks(self):
    data_list = self.parse_home_list(self.url)
    for item in data_list:
        yield item
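Because parse_home_list is itself a generator, get_tasks yields items lazily: nothing is fetched until the caller actually iterates. A standalone illustration of that pattern:

def produce():
    for i in range(3):
        print(f'fetching item {i}')  # runs only when the consumer asks for the next item
        yield i

def get_tasks():
    for item in produce():
        yield item

for data in get_tasks():
    print('got', data)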
Program run and collection output (screenshots omitted).
Full source code for reference:
# -*- coding: UTF-8 -*-
import requests, time, re
from fake_useragent import UserAgent
from lxml import etree


class RandomHeaders(object):
    ua = UserAgent()

    @property
    def random_headers(self):
        return {
            'User-Agent': self.ua.random,
        }


class Spider(RandomHeaders):
    def __init__(self, url):
        self.url = url

    def parse_home_list(self, url):
        response = requests.get(url, headers=self.random_headers).content.decode('utf-8')
        req = etree.HTML(response)
        href_list = req.xpath('//ul[@class="news-list"]/li/a/@href')
        print(href_list)
        for href in href_list:
            item = self.parse_detail(f'https://yz.chsi.com.cn{href}')
            yield item

    def parse_detail(self, url):
        print(f">>Crawling {url}")
        try:
            response = requests.get(url, headers=self.random_headers).content.decode('utf-8')
            time.sleep(2)
        except Exception as e:
            print(e.args)
            return self.parse_detail(url)
        else:
            req = etree.HTML(response)
            try:
                h3 = req.xpath('//div[@class="title-box"]/h3/text()')[0]
                h3 = self.validate_title(h3)
                author = req.xpath('//div[@class="title-box"]/span[@class="news-from"]/text()')[0]
                details = req.xpath('//div[@class="content-l detail"]/p/text()')
                detail = '\n'.join(details)
                print(h3, author, detail)
                self.save(h3, author, detail)
                return h3, author, detail
            except IndexError:
                print(">>>Scrape error, backing off; retrying in 5s..")
                time.sleep(5)
                return self.parse_detail(url)

    @staticmethod
    def validate_title(title):
        pattern = r"[\/\\\:\*\?\"\<\>\|]"
        new_title = re.sub(pattern, "_", title)  # replace illegal characters with underscores
        return new_title

    def save(self, h3, author, detail):
        with open(f'{h3}.txt', 'w', encoding='utf-8') as f:
            f.write('%s%s%s%s%s' % (h3, '\n', detail, '\n', author))
        print(f"Saved {h3}.txt successfully!")

    def get_tasks(self):
        data_list = self.parse_home_list(self.url)
        for item in data_list:
            yield item


if __name__ == "__main__":
    url = "https://yz.chsi.com.cn/kyzx/jyxd/"
    spider = Spider(url)
    for data in spider.get_tasks():
        print(data)
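A couple of design points worth noting: parse_detail sleeps two seconds after every successful request as a simple politeness delay rather than real rate limiting, and it retries failed pages by calling itself recursively; for long crawls, a loop-based retry with a capped attempt count would be safer than unbounded recursion.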
That covers how to scrape news articles with Python. If you picked up some knowledge or skills here, feel free to share this so more people can see it.