To extract a single table, the code is as follows:
#!/usr/bin/env python3
# _*_ coding=utf-8 _*_
import csv
from urllib.request import urlopen
from urllib.error import HTTPError  # HTTPError lives in urllib.error
from bs4 import BeautifulSoup

try:
    html = urlopen("http://en.wikipedia.org/wiki/Comparison_of_text_editors")
except HTTPError:
    print("not found")
    exit(1)  # stop here instead of continuing with an undefined html

bsObj = BeautifulSoup(html, "html.parser")
# find() returns None when no match, so the None check below actually works;
# findAll(...)[0] would raise IndexError before the check could run
table = bsObj.find("table", {"class": "wikitable"})
if table is None:
    print("no table")
    exit(1)

rows = table.findAll("tr")
csvFile = open("editors.csv", 'wt', newline='', encoding='utf-8')
writer = csv.writer(csvFile)
try:
    for row in rows:
        csvRow = []
        for cell in row.findAll(['td', 'th']):
            csvRow.append(cell.get_text())
        writer.writerow(csvRow)
finally:
    csvFile.close()
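One detail worth noting: get_text() on Wikipedia table cells usually includes trailing newlines and padding, so the CSV ends up with stray whitespace. A minimal sketch of stripping each cell before writing (the sample rows below are made up for illustration, standing in for what get_text() would return):

```python
import csv
import io

# Sample cell texts as get_text() might return them, with stray whitespace
rows = [["Editor\n", " License "], ["Vim\n", "Vim license\n"]]

buf = io.StringIO()
writer = csv.writer(buf)
for row in rows:
    # strip() each cell so the CSV holds clean values
    writer.writerow([cell.strip() for cell in row])

print(buf.getvalue())
```

The same one-line change applies inside the script above: build csvRow from cell.get_text().strip().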
To extract all tables, the code is as follows:
#!/usr/bin/env python3
# _*_ coding=utf-8 _*_
import csv
from urllib.request import urlopen
from urllib.error import HTTPError  # HTTPError lives in urllib.error
from bs4 import BeautifulSoup

try:
    html = urlopen("http://en.wikipedia.org/wiki/Comparison_of_text_editors")
except HTTPError:
    print("not found")
    exit(1)  # stop here instead of continuing with an undefined html

bsObj = BeautifulSoup(html, "html.parser")
# findAll() returns an empty list (never None) when nothing matches
tables = bsObj.findAll("table", {"class": "wikitable"})
if not tables:
    print("no table")
    exit(1)

for i, table in enumerate(tables, start=1):
    fileName = "table%s.csv" % i
    rows = table.findAll("tr")
    csvFile = open(fileName, 'wt', newline='', encoding='utf-8')
    writer = csv.writer(csvFile)
    try:
        for row in rows:
            csvRow = []
            for cell in row.findAll(['td', 'th']):
                csvRow.append(cell.get_text())
            writer.writerow(csvRow)
    finally:
        csvFile.close()
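After the loop finishes, it can be worth reading one of the generated files back with csv.reader to confirm the rows survived the round trip intact. A minimal self-contained sketch (the file name and rows here are made up for illustration; the real script writes table1.csv, table2.csv, and so on into the working directory):

```python
import csv
import os
import tempfile

# Write a small CSV the same way the script does, then read it back
path = os.path.join(tempfile.mkdtemp(), "table1.csv")
with open(path, "wt", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerows([["Editor", "License"], ["Vim", "Vim license"]])

# Read the file back and inspect the recovered rows
with open(path, newline="", encoding="utf-8") as f:
    rows = list(csv.reader(f))
print(len(rows), rows[0])
```

Passing newline='' on both open() calls, as the script does, lets the csv module manage line endings itself and avoids blank rows on Windows.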