This article explains how to use the Python scraping library BeautifulSoup to traverse the document tree and operate on tags. The walkthrough is detailed and should be a useful reference, so it is worth reading to the end.
Python is a cross-platform, interpreted, interactive, object-oriented scripting language. It was originally designed for writing automation scripts, but with continuing releases and new features it is now commonly used to build standalone tools and large projects.
Below is a worked example of using BeautifulSoup to traverse the document tree and operate on tags.
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""

from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc, 'lxml')
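In practice the HTML to parse usually comes from a live page rather than an inline string. A minimal sketch of that pattern, assuming the requests library is installed and the URL is reachable (both are illustrative assumptions, not part of the original example):

import requests
from bs4 import BeautifulSoup

# Hypothetical URL, purely for illustration; the rest of this article
# keeps using the `soup` built from html_doc above.
resp = requests.get("http://example.com/")
resp.raise_for_status()                       # fail fast on HTTP errors
page_soup = BeautifulSoup(resp.text, 'lxml')  # same parsing step as above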
I. Child Nodes
A Tag may contain several strings or other Tags; these are all child nodes of that Tag. BeautifulSoup provides many attributes for navigating and iterating over children.
1. Getting a Tag by its name
print(soup.head)
print(soup.title)
<head><title>The Dormouse's story</title></head>
<title>The Dormouse's story</title>
Accessing a tag by name only returns the first matching Tag; to get every Tag of a given kind, use the find_all method.
soup.find_all('a')
[<a class="sister" href="http://example.com/elsie" rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" id="link1">Elsie</a>, <a class="sister" href="http://example.com/lacie" rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" id="link3">Tillie</a>]
2. The contents attribute: returns a Tag's child nodes as a list
head_tag = soup.head
head_tag.contents
[<title>The Dormouse's story</title>]
title_tag = head_tag.contents[0]
title_tag
<title>The Dormouse's story</title>
title_tag.contents
["The Dormouse's story"]
3. children: iterate over the child nodes with this attribute
for child in title_tag.children:
    print(child)
The Dormouse's story
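Note that .children returns an iterator rather than a list; if you need to index into the children or loop over the same object more than once, wrap it in a list. A small sketch:

# .children yields the same direct children as .contents, but lazily
children = list(title_tag.children)
print(children)       # ["The Dormouse's story"]
print(len(children))  # 1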
4. descendants: both contents and children return only direct children, whereas descendants recursively iterates over all of a tag's descendants
for child in head_tag.children:
    print(child)
<title>The Dormouse's story</title>
for child in head_tag.descendants:
    print(child)
<title>The Dormouse's story</title>
The Dormouse's story
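The same attribute also works on the BeautifulSoup object itself, which makes it easy to see how many nodes the parsed document contains in total. A quick sketch (the exact count depends on the parser and on whitespace handling, so treat the number as illustrative):

# Every tag and string in the whole document, in depth-first order
print(len(list(soup.descendants)))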
5. string: if a tag has exactly one child of type NavigableString, the tag's .string returns that child
title_tag.string
"The Dormouse's story"
If a tag has only a single child node, .string returns that child's NavigableString (so head_tag.string falls through to its <title> child).
head_tag.string
"The Dormouse's story"
If a tag has more than one child, .string cannot tell which child's content it should refer to, so it returns None.
print(soup.html.string)
None
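When .string returns None because there are multiple children, a common alternative is the tag's get_text() method, which concatenates all text in the subtree. A minimal sketch (the separator and strip arguments shown are just one reasonable choice):

# get_text() gathers every string below the tag into one Python string
text = soup.html.get_text(separator=' ', strip=True)
print(text)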
6. strings and stripped_strings
If a tag contains more than one string, you can iterate over them with .strings.
for string in soup.strings:
    print(string)
The Dormouse's story

The Dormouse's story

Once upon a time there were three little sisters; and their names were

Elsie
,

Lacie
 and

Tillie
;
and they lived at the bottom of a well.

...
The output of .strings contains many spaces and blank lines; use .stripped_strings to strip this extra whitespace.
for string in soup.stripped_strings:
    print(string)
The Dormouse's story
The Dormouse's story
Once upon a time there were three little sisters; and their names were
Elsie
,
Lacie
and
Tillie
;
and they lived at the bottom of a well.
...
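A handy follow-up is to join the stripped strings back into a single readable line; a small sketch of that pattern:

# Re-assemble the visible text of the document into one string
flat_text = ' '.join(soup.stripped_strings)
print(flat_text)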
II. Parent Nodes
1. parent: get an element's parent node
title_tag = soup.title
title_tag.parent
<head><title>The Dormouse's story</title></head>
Strings have parent nodes too.
title_tag.string.parent
<title>The Dormouse's story</title>
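Following .parent all the way up, the parent of the top-level <html> tag is the BeautifulSoup object itself, and the BeautifulSoup object's own parent is None. A short sketch to check this:

html_tag = soup.html
print(type(html_tag.parent))  # <class 'bs4.BeautifulSoup'>
print(soup.parent)            # None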
2. parents: recursively get all of an element's ancestors
link = soup.a
for parent in link.parents:
    if parent is None:
        print(parent)
    else:
        print(parent.name)
p
body
html
[document]
III. Sibling Nodes
sibling_soup = BeautifulSoup("<a><b>text1</b><c>text2</c></b></a>", 'lxml')
print(sibling_soup.prettify())
<html>
 <body>
  <a>
   <b>
    text1
   </b>
   <c>
    text2
   </c>
  </a>
 </body>
</html>
1. next_sibling and previous_sibling
sibling_soup.b.next_sibling
<c>text2</c>
sibling_soup.c.previous_sibling
<b>text1</b>
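Since the <b> tag is the first child at its level there is nothing before it, so its .previous_sibling is None, and symmetrically the last child's .next_sibling is None; a quick check:

print(sibling_soup.b.previous_sibling)  # None
print(sibling_soup.c.next_sibling)      # None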
In a real document, a tag's .next_sibling or .previous_sibling is usually a string or whitespace rather than another tag.
soup.find_all('a')
[<a class="sister" href="http://example.com/elsie" rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" id="link1">Elsie</a>, <a class="sister" href="http://example.com/lacie" rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" id="link3">Tillie</a>]
soup.a.next_sibling  # the next_sibling of the first <a></a> is ',\n'
',\n'
soup.a.next_sibling.next_sibling
<a class="sister" href="http://example.com/lacie" rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" rel="external nofollow" id="link2">Lacie</a>
2. next_siblings and previous_siblings
for sibling in soup.a.next_siblings:
    print(repr(sibling))
',\n'
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
' and\n'
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
';\nand they lived at the bottom of a well.'
for sibling in soup.find(id="link3").previous_siblings:
    print(repr(sibling))
' and\n'
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
',\n'
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
'Once upon a time there were three little sisters; and their names were\n'
IV. Going Backward and Forward
1. next_element and previous_element
These point to the next or previous object (string or tag) in parse order, i.e. the successor and predecessor of a node in a depth-first traversal of the document.
last_a_tag = soup.find("a", id="link3")
print(last_a_tag.next_sibling)
print(last_a_tag.next_element)
;
and they lived at the bottom of a well.
Tillie
last_a_tag.previous_element
' and\n'
2. next_elements and previous_elements
With .next_elements and .previous_elements you can move forward or backward through the document's parsed content, in the same order the parser originally encountered it.
for element in last_a_tag.next_elements:
    print(repr(element))
'Tillie'
';\nand they lived at the bottom of a well.'
'\n'
<p class="story">...</p>
'...'
'\n'
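.previous_elements works the same way in the other direction, walking back toward the start of the document; a short sketch (only the first few results are shown in the comments, for brevity):

for element in last_a_tag.previous_elements:
    print(repr(element))
# ' and\n'
# 'Lacie'
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
# ...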
That is everything in "How to use the Python scraping library BeautifulSoup to traverse the document tree and operate on tags". Thanks for reading! Hopefully what was shared here is helpful; for more on related topics, follow the 億速云 industry news channel.