When working with very large files, the following approaches avoid loading the whole file into memory at once:
1. Read line by line. A Python file object is a lazy iterator, so only one line is held in memory at a time:

with open('filename.txt', 'r') as file:
    for line in file:
        ...  # process this line's data
2. Read in fixed-size chunks. In text mode, file.read(n) returns at most n characters, so memory use is bounded by the chunk size:

with open('filename.txt', 'r') as file:
    chunk_size = 1024  # characters to read per call
    while True:
        data = file.read(chunk_size)
        if not data:  # empty string means end of file
            break
        ...  # process the current chunk of data
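As a concrete variant of the chunked pattern above, here is a minimal sketch that hashes a large file in binary mode; hashlib is from the standard library, and 'filename.bin' is a hypothetical input file. Opening with 'rb' makes each read return an exact number of bytes rather than characters:

import hashlib

sha256 = hashlib.sha256()
with open('filename.bin', 'rb') as file:  # hypothetical input file
    while True:
        data = file.read(1024 * 1024)  # 1 MiB of bytes per read
        if not data:
            break
        sha256.update(data)  # fold each chunk into the running digest
print(sha256.hexdigest())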
3. Wrap the file in a generator. This keeps the lazy, line-at-a-time behavior while hiding the file handling behind a reusable function:

def read_file(filename):
    with open(filename, 'r') as file:
        for line in file:
            yield line

for line in read_file('filename.txt'):
    ...  # process this line's data
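Generators also compose: each stage below stays lazy, so only one line is in memory at any point. This is a sketch assuming a hypothetical log file named 'app.log' and the read_file generator defined above:

def error_lines(lines):
    for line in lines:
        if 'ERROR' in line:  # keep only lines mentioning errors
            yield line

count = sum(1 for _ in error_lines(read_file('app.log')))
print(f'{count} error lines found')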
4. Use pandas for large CSV files. Passing chunksize makes read_csv return an iterator of DataFrames instead of one large frame; note that chunksize counts rows, not bytes:

import pandas as pd

chunk_size = 1000  # rows per chunk
for chunk in pd.read_csv('filename.csv', chunksize=chunk_size):
    ...  # process the current chunk (an ordinary DataFrame)
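Per-chunk results usually need to be combined at the end. A minimal sketch, assuming filename.csv has a hypothetical numeric column named 'amount':

import pandas as pd

total = 0.0
rows = 0
for chunk in pd.read_csv('filename.csv', chunksize=1000):
    total += chunk['amount'].sum()  # partial sum for this chunk
    rows += len(chunk)
print(f'{rows} rows, mean amount = {total / rows:.2f}')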
With any of these approaches, memory usage is bounded by the size of a single line or chunk rather than by the size of the file, so large files can be processed without running out of memory.