Many readers find the topic of this article, "How to use aiohttp in Python", hard to grasp, so it walks through the essentials step by step, in detail. It is meant as a practical reference; hopefully you will come away with something useful after reading it.
aiohttp is an asynchronous HTTP networking library built on top of asyncio. It provides both a server side and a client side.
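The examples in this article all use the client side. For completeness, here is a minimal sketch of the server side using aiohttp.web; the route names and port are arbitrary choices for illustration, not part of the original article:

from aiohttp import web

async def handle(request):
    # Read an optional path parameter and respond with plain text
    name = request.match_info.get('name', 'world')
    return web.Response(text=f'Hello, {name}!')

app = web.Application()
app.add_routes([web.get('/', handle), web.get('/{name}', handle)])

if __name__ == '__main__':
    # Starts a blocking event loop serving on http://localhost:8080
    web.run_app(app, port=8080)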
A basic GET request with the client looks like this:

import aiohttp
import asyncio

async def fetch(session, url):
    # session.get() returns an async context manager
    async with session.get(url) as response:
        # response.text() is a coroutine, so it must be awaited
        return await response.text(), response.status

async def main():
    # ClientSession is also an async context manager
    async with aiohttp.ClientSession() as session:
        html, status = await fetch(session, 'https://cuiqingcai.com')
        print(f'html: {html[:100]}...')
        print(f'status: {status}')

if __name__ == '__main__':
    # On Python 3.7+ there is no need to manage the event loop explicitly;
    # asyncio.run(main()) can replace the line below
    asyncio.get_event_loop().run_until_complete(main())
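Query parameters do not need to be spliced into the URL by hand; session.get() accepts a params argument. A small sketch (the URL and parameter values here are just examples):

import aiohttp
import asyncio

async def main():
    async with aiohttp.ClientSession() as session:
        # The params dict is encoded into the query string automatically,
        # equivalent to requesting https://httpbin.org/get?name=germey&age=25
        async with session.get('https://httpbin.org/get',
                               params={'name': 'germey', 'age': 25}) as response:
            print(await response.text())

asyncio.run(main())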
Besides GET, the session supports the other common HTTP methods with the same calling convention:

session.post('http://httpbin.org/post', data=b'data')
session.put('http://httpbin.org/put', data=b'data')
session.delete('http://httpbin.org/delete')
session.head('http://httpbin.org/get')
session.options('http://httpbin.org/get')
session.patch('http://httpbin.org/patch', data=b'data')

Each of these returns a coroutine and must be awaited (or used with async with), just like session.get().
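A note on the data parameter, shown as a hedged sketch below: passing a dict sends a form-encoded body, while the json keyword serializes a dict into a JSON body (the URL and field names are only examples):

import aiohttp
import asyncio

async def main():
    async with aiohttp.ClientSession() as session:
        # A dict passed as data= is sent as application/x-www-form-urlencoded
        async with session.post('https://httpbin.org/post',
                                data={'name': 'germey'}) as response:
            print(await response.json())
        # json= serializes the dict and sends it as application/json
        async with session.post('https://httpbin.org/post',
                                json={'name': 'germey'}) as response:
            print(await response.json())

asyncio.run(main())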
The response object exposes the status code, the headers, and the body in several forms:

print('status:', response.status)       # HTTP status code
print('headers:', response.headers)     # response headers
print('body:', await response.text())   # body decoded as text
print('bytes:', await response.read())  # raw body bytes
print('json:', await response.json())   # body parsed as JSON
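One detail worth knowing: response.json() raises aiohttp.ContentTypeError when the server does not declare an application/json content type; passing content_type=None skips that check. A minimal sketch:

import aiohttp
import asyncio

async def main():
    async with aiohttp.ClientSession() as session:
        async with session.get('https://httpbin.org/get') as response:
            try:
                data = await response.json()
            except aiohttp.ContentTypeError:
                # Parse the body regardless of the declared content type
                data = await response.json(content_type=None)
            print(data)

asyncio.run(main())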
Timeouts are configured through aiohttp.ClientTimeout and passed to the session:

import aiohttp
import asyncio

async def main():
    # Set an overall timeout of 1 second for the whole request
    timeout = aiohttp.ClientTimeout(total=1)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        async with session.get('https://httpbin.org/get') as response:
            print('status:', response.status)

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(main())
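Beyond total, ClientTimeout also accepts finer-grained fields (connect, sock_connect, sock_read), and an exceeded limit surfaces as asyncio.TimeoutError. A sketch, with the numeric values chosen arbitrarily:

import aiohttp
import asyncio

async def main():
    timeout = aiohttp.ClientTimeout(
        total=10,        # hard cap for the whole operation
        connect=2,       # time allowed to acquire a connection
        sock_connect=2,  # time allowed for the TCP handshake
        sock_read=5,     # max gap between bytes while reading the body
    )
    async with aiohttp.ClientSession(timeout=timeout) as session:
        try:
            async with session.get('https://httpbin.org/delay/3') as response:
                print('status:', response.status)
        except asyncio.TimeoutError:
            print('request timed out')

asyncio.run(main())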
When firing off a large batch of requests, limit concurrency with asyncio.Semaphore so the target server is not overwhelmed. Here 10,000 tasks are created, but at most 5 requests are in flight at any moment:

import asyncio
import aiohttp

# Allow at most 5 concurrent requests
CONCURRENCY = 5
semaphore = asyncio.Semaphore(CONCURRENCY)
URL = 'https://www.baidu.com'
session = None

async def scrape_api():
    # Each task must acquire the semaphore before making its request
    async with semaphore:
        print('scraping', URL)
        async with session.get(URL) as response:
            await asyncio.sleep(1)
            return await response.text()

async def main():
    global session
    session = aiohttp.ClientSession()
    scrape_index_tasks = [asyncio.ensure_future(scrape_api()) for _ in range(10000)]
    await asyncio.gather(*scrape_index_tasks)
    # Close the session explicitly, since it is not used as a context manager here
    await session.close()

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(main())
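An alternative approach (not used in the original example) is to cap the connection pool itself with aiohttp.TCPConnector, which bounds simultaneous connections without a semaphore; a minimal sketch:

import aiohttp
import asyncio

async def fetch(session, url):
    async with session.get(url) as response:
        return response.status

async def main():
    # limit=5 caps the pool at 5 simultaneous connections;
    # extra requests simply wait for a connection to free up
    connector = aiohttp.TCPConnector(limit=5)
    async with aiohttp.ClientSession(connector=connector) as session:
        tasks = [fetch(session, 'https://www.baidu.com') for _ in range(20)]
        print(await asyncio.gather(*tasks))

asyncio.run(main())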
Finally, a fuller example: fetch 100 index pages of a book API concurrently, with logging and error handling:

import asyncio
import aiohttp
import logging
import json

logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s - %(levelname)s: %(message)s')

INDEX_URL = 'https://dynamic5.scrape.center/api/book/?limit=18&offset={offset}'
DETAIL_URL = 'https://dynamic5.scrape.center/api/book/{id}'
PAGE_SIZE = 18
PAGE_NUMBER = 100
CONCURRENCY = 5

semaphore = asyncio.Semaphore(CONCURRENCY)
session = None

async def scrape_api(url):
    async with semaphore:
        try:
            logging.info('scraping %s', url)
            async with session.get(url) as response:
                return await response.json()
        except aiohttp.ClientError:
            logging.error('error occurred while scraping %s', url, exc_info=True)

async def scrape_index(page):
    # Each index page holds PAGE_SIZE items, so the offset advances page by page
    url = INDEX_URL.format(offset=PAGE_SIZE * (page - 1))
    return await scrape_api(url)

async def main():
    global session
    session = aiohttp.ClientSession()
    scrape_index_tasks = [asyncio.ensure_future(scrape_index(page))
                          for page in range(1, PAGE_NUMBER + 1)]
    results = await asyncio.gather(*scrape_index_tasks)
    logging.info('results %s', json.dumps(results, ensure_ascii=False, indent=2))
    # Close the session explicitly, since it is not used as a context manager here
    await session.close()

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(main())
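DETAIL_URL is defined above but never used; a natural next step is to pull the ids out of the index results and fetch each detail page. The sketch below is an assumption-laden extension, not part of the original: it presumes each index response has a results list whose items carry an id field, which you should verify against the actual API:

async def scrape_detail(book_id):
    # Reuse the same semaphore-guarded helper for detail pages
    url = DETAIL_URL.format(id=book_id)
    return await scrape_api(url)

async def scrape_details(index_results):
    ids = []
    for index_data in index_results:
        if not index_data:
            continue  # an index request may have failed and returned None
        # Assumed response shape: {'results': [{'id': ...}, ...]}
        ids.extend(item['id'] for item in index_data.get('results', []))
    tasks = [asyncio.ensure_future(scrape_detail(book_id)) for book_id in ids]
    return await asyncio.gather(*tasks)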
That wraps up "How to use aiohttp in Python". Hopefully this walkthrough has been helpful; for more related content, follow the Yisu Cloud (億速云) industry news channel.