This article shows how to predict missing values in Python, with a detailed worked example that readers can follow along with and adapt. We'll work with the Wine Reviews dataset. First, let's read the data into a pandas DataFrame:
import pandas as pd
df = pd.read_csv("winemag-data-130k-v2.csv")
Next, let's print the first five rows of the data:
print(df.head())
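If you just want a per-column count of missing values, `df.isnull().sum()` is also handy. A minimal sketch, using a small made-up DataFrame in place of the full wine CSV:

```python
import numpy as np
import pandas as pd

# Tiny synthetic stand-in for the wine data (the real CSV has
# columns like 'points' and 'price'; these values are invented).
df = pd.DataFrame({
    "points": [88, 91, 85, 90, 87],
    "price": [20.0, np.nan, 15.0, np.nan, 30.0],
})

# Count the missing values in each column.
missing_counts = df.isnull().sum()
print(missing_counts)
```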
Let's take a random sample of 500 records from this data. This will help speed up model training and testing, though readers can easily change the sample size:
import pandas as pd
df = pd.read_csv("winemag-data-130k-v2.csv").sample(n=500, random_state=42)
Now, let's print the info for the data, which will show us which columns are missing values:
print(df.info())
Several columns have fewer than 500 non-null values, which corresponds to missing data. First, let's consider building a model that imputes missing 'price' values from 'points'. To start, let's print the correlation between 'price' and 'points':
print("Correlation: ", df['points'].corr(df['price']))
We see a weak positive correlation. Let's build a linear regression model that predicts 'price' from 'points'. First, let's import the 'LinearRegression' module from scikit-learn:
from sklearn.linear_model import LinearRegression
Now, let's split the data for training and testing. We want to predict missing values, but we should validate our predictions against actual 'price' values. Let's filter out the missing values by selecting only rows with positive prices:
import numpy as np
df_filter = df[df['price'] > 0].copy()
We can also initialize lists to store the predictions and actual values:
y_pred = []
y_true = []
We will use K-fold cross-validation to validate our model. Let's import the 'KFold' module from scikit-learn; we will use 10 folds:
from sklearn.model_selection import KFold
kf = KFold(n_splits=10, shuffle=True, random_state=42)  # setting random_state requires shuffle=True
for train_index, test_index in kf.split(df_filter):
    df_test = df_filter.iloc[test_index]
    df_train = df_filter.iloc[train_index]
We can now define our inputs and outputs:
for train_index, test_index in kf.split(df_filter):
    ...
    X_train = np.array(df_train['points']).reshape(-1, 1)
    y_train = np.array(df_train['price']).reshape(-1, 1)
    X_test = np.array(df_test['points']).reshape(-1, 1)
    y_test = np.array(df_test['price']).reshape(-1, 1)
And fit our linear regression model:
for train_index, test_index in kf.split(df_filter):
    ...
    model = LinearRegression()
    model.fit(X_train, y_train)
Now let's generate and store our predictions:
for train_index, test_index in kf.split(df_filter):
    ...
    y_pred.append(model.predict(X_test)[0])
    y_true.append(y_test[0])
Now let's evaluate the model's performance using the mean squared error:
from sklearn.metrics import mean_squared_error
print("Mean Square Error: ", mean_squared_error(y_true, y_pred))
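Putting the pieces above together, the whole cross-validation loop can be sketched end to end. This version uses synthetic data (price roughly linear in points plus noise) rather than the wine CSV, so it runs standalone:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

# Synthetic stand-in: 200 wines whose price is roughly linear in points.
rng = np.random.RandomState(42)
points = rng.randint(80, 100, size=200)
price = 2.5 * points - 180 + rng.normal(0, 5, size=200)
df_filter = pd.DataFrame({"points": points, "price": price})

y_pred, y_true = [], []
kf = KFold(n_splits=10, shuffle=True, random_state=42)
for train_index, test_index in kf.split(df_filter):
    df_train = df_filter.iloc[train_index]
    df_test = df_filter.iloc[test_index]
    X_train = df_train[["points"]].to_numpy()
    y_train = df_train["price"].to_numpy()
    X_test = df_test[["points"]].to_numpy()
    y_test = df_test["price"].to_numpy()
    model = LinearRegression()
    model.fit(X_train, y_train)
    # As in the article, keep one prediction/actual pair per fold.
    y_pred.append(model.predict(X_test)[0])
    y_true.append(y_test[0])

mse = mean_squared_error(y_true, y_pred)
print("Mean Square Error:", mse)
```

With a noise standard deviation of 5, the error should land near the noise floor rather than at zero.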
Not great. We can improve on this by training only on prices at or below the mean plus one standard deviation, which removes extreme outliers:
df_filter = df[df['price'] <= df['price'].mean() + df['price'].std()].copy()
...
print("Mean Square Error: ", mean_squared_error(y_true, y_pred))
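The filtering step can be illustrated in isolation. A minimal sketch with made-up prices, where a single extreme value is dropped by the mean-plus-one-standard-deviation threshold:

```python
import pandas as pd

# Invented prices; 500 is an obvious outlier.
prices = pd.Series([10, 12, 15, 11, 14, 13, 500])

# Keep only values at or below mean + one standard deviation.
threshold = prices.mean() + prices.std()
filtered = prices[prices <= threshold]
print(filtered.tolist())
```

Note that the outlier itself inflates both the mean and the standard deviation, so this filter only trims the most extreme tail.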
While this significantly improves performance, it comes at the cost of being unable to accurately impute prices for the most expensive wines. Instead of predicting price with a single-feature regression model, we can use a tree-based model such as a random forest, which can handle both categorical and numerical variables.
Let's build a random forest regression model that predicts wine 'price' from 'country', 'province', 'variety', 'winery', and 'points'. First, let's convert the categorical variables into category codes that a random forest model can handle:
df['country_cat'] = df['country'].astype('category')
df['country_cat'] = df['country_cat'].cat.codes
df['province_cat'] = df['province'].astype('category')
df['province_cat'] = df['province_cat'].cat.codes
df['winery_cat'] = df['winery'].astype('category')
df['winery_cat'] = df['winery_cat'].cat.codes
df['variety_cat'] = df['variety'].astype('category')
df['variety_cat'] = df['variety_cat'].cat.codes
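To see what this transformation actually produces, here is a mini example on an invented 'country' column. `cat.codes` assigns integer codes in sorted category order (and would map missing values to -1):

```python
import pandas as pd

# Hypothetical mini example of the category-code conversion above.
df = pd.DataFrame({"country": ["Italy", "France", "Italy", "Spain"]})
df["country_cat"] = df["country"].astype("category").cat.codes
print(df)
```

Here France sorts first and gets code 0, Italy gets 1, and Spain gets 2.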
Let's increase the random sample size to 5,000:
df = pd.read_csv("winemag-data-130k-v2.csv").sample(n=5000, random_state=42)
Next, let's import the random forest regressor module from scikit-learn. We can also define the list of features used to train the model:
from sklearn.ensemble import RandomForestRegressor
features = ['points', 'country_cat', 'province_cat', 'winery_cat', 'variety_cat']
Let's train a random forest with 1,000 estimators and a maximum depth of 1,000. Then, let's generate predictions and append them to new lists:
y_pred_rf = []
y_true_rf = []
for train_index, test_index in kf.split(df_filter):
    df_test = df_filter.iloc[test_index]
    df_train = df_filter.iloc[train_index]
    X_train = np.array(df_train[features])
    y_train = np.array(df_train['price'])
    X_test = np.array(df_test[features])
    y_test = np.array(df_test['price'])
    model = RandomForestRegressor(n_estimators=1000, max_depth=1000, random_state=42)
    model.fit(X_train, y_train)
    y_pred_rf.append(model.predict(X_test)[0])
    y_true_rf.append(y_test[0])
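The key advantage here is that the forest consumes the category codes alongside the numeric 'points' feature. A small self-contained sketch with synthetic data (fewer trees than above, to keep it fast):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in: price depends on a numeric feature ('points')
# and a category-coded feature ('country_cat').
rng = np.random.RandomState(42)
n = 100
points = rng.randint(80, 100, size=n)
country_cat = rng.randint(0, 3, size=n)
price = 2.0 * points + 5.0 * country_cat + rng.normal(0, 1, size=n)
X = np.column_stack([points, country_cat])

model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X, price)
score = model.score(X, price)  # R^2 on the training data
print("Training R^2:", score)
```

Training-set R^2 is optimistic, of course; the cross-validation loop above is what gives an honest estimate.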
Finally, let's evaluate the mean squared error for both the random forest and linear regression models:
print("Mean Square Error (Linear Regression): ", mean_squared_error(y_true, y_pred))
print("Mean Square Error (Random Forest): ", mean_squared_error(y_true_rf, y_pred_rf))
We see that the random forest model performs better. Now, let's use our models to predict the missing price values and display the predictions:
df_missing = df[df['price'].isnull()].copy()
X_test_lr = np.array(df_missing['points']).reshape(-1, 1)
X_test_rf = np.array(df_missing[features])
X_train_lr = np.array(df_filter['points']).reshape(-1, 1)
y_train_lr = np.array(df_filter['price']).reshape(-1, 1)
X_train_rf = np.array(df_filter[features])
y_train_rf = np.array(df_filter['price'])
model_lr = LinearRegression()
model_lr.fit(X_train_lr, y_train_lr)
print("Linear regression predictions: ", model_lr.predict(X_test_lr)[0][0])
model_rf = RandomForestRegressor(n_estimators=1000, max_depth=1000, random_state=42)
model_rf.fit(X_train_rf, y_train_rf)
print("Random forests regression predictions: ", model_rf.predict(X_test_rf)[0])
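The article prints only the first prediction; to actually impute, you can assign the predictions back into the rows where 'price' is null. A minimal self-contained sketch with synthetic data (not the wine CSV):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Invented data: two prices are missing.
df = pd.DataFrame({
    "points": [85, 90, 95, 88, 92],
    "price": [10.0, 20.0, np.nan, 16.0, np.nan],
})
known = df[df["price"].notnull()]
missing = df[df["price"].isnull()]

model = LinearRegression()
model.fit(known[["points"]], known["price"])

# Write the predictions back into the original frame.
df.loc[df["price"].isnull(), "price"] = model.predict(missing[["points"]])
print(df)
```

After this assignment the 'price' column has no remaining nulls.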
I'll stop here, but I encourage you to experiment with feature selection and hyperparameter tuning to see whether you can improve performance. I also encourage you to extend this imputation approach to fill in missing values in categorical fields such as 'region_1' and 'designation'. There, you could build a tree-based classification model that predicts the missing categories from the categorical and numerical features.