This article explains how to save a trained model in PyTorch and load it again later for use.
After training a model on your data and reaching a satisfactory result, you do not want to retrain it every time it is needed. Instead, save the trained model once, then load it whenever it has to be used.
A model is essentially a set of parameters stored in a particular structure, so there are two ways to save it:
One way is to save the entire model and later load the whole thing back directly; this is simple but consumes more memory.
The other is to save only the model's parameters; when the model is needed again, create a new model with the same structure and load the saved parameters into it.
(1) Save only the model's parameter dictionary (recommended)
# Save
torch.save(the_model.state_dict(), PATH)

# Load
the_model = TheModelClass(*args, **kwargs)
the_model.load_state_dict(torch.load(PATH))
(2) Save the entire model
# Save
torch.save(the_model, PATH)

# Load
the_model = torch.load(PATH)
PyTorch keeps a model's parameters in a dictionary (the state_dict), so all we really have to do is save that dictionary and load it again later.
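To get a feel for what this dictionary contains, you can print the keys and tensor shapes of any module's state_dict(). The tiny linear model below is only an illustrative sketch and is not part of the LSTM example that follows:

import torch
import torch.nn as nn

# A tiny example model, used only to show what a state_dict contains
example = nn.Linear(in_features=3, out_features=2)

for name, tensor in example.state_dict().items():
    print(name, tuple(tensor.shape))
# Prints something like:
# weight (2, 3)
# bias (2,)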
For example, build a simple LSTM network, train it, and after training save its parameter dictionary to a file named rnn.pt in the same folder:
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers):
        super(LSTM, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # Initial hidden and cell states
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)

        # Forward propagate the LSTM
        out, _ = self.lstm(x, (h0, c0))  # out: tensor of shape (batch_size, seq_length, hidden_size)
        out = self.fc(out)
        return out


rnn = LSTM(input_size=1, hidden_size=10, num_layers=2).to(device)

optimizer = torch.optim.Adam(rnn.parameters(), lr=0.001)  # optimize all model parameters
loss_func = nn.MSELoss()

# train_tensor and train_labels_tensor are assumed to have been prepared earlier
for epoch in range(1000):
    output = rnn(train_tensor)                     # model output
    loss = loss_func(output, train_labels_tensor)  # MSE loss
    optimizer.zero_grad()                          # clear gradients for this training step
    loss.backward()                                # backpropagation, compute gradients
    optimizer.step()                               # apply gradients
    output_sum = output

# Save the model's parameter dictionary
torch.save(rnn.state_dict(), 'rnn.pt')
Once it is saved, the trained model can be reloaded and used to process data:
# Test the saved model
m_state_dict = torch.load('rnn.pt')
new_m = LSTM(input_size=1, hidden_size=10, num_layers=2).to(device)
new_m.load_state_dict(m_state_dict)
predict = new_m(test_tensor)
A few words of explanation: when saving, rnn.state_dict() is the parameter dictionary of the rnn model. To test the saved model, first load that parameter dictionary:
m_state_dict = torch.load('rnn.pt')
Then instantiate a new LSTM object, making sure the constructor arguments are the same as those used to create rnn, i.e. the two models have the same structure:
new_m = LSTM(input_size=1, hidden_size=10, num_layers=2).to(device)
Next, load the previously saved parameters into this new model:
new_m.load_state_dict(m_state_dict)
Finally, the new model can be used to process data:
predict = new_m(test_tensor)
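One practical detail the snippet above does not show: when the reloaded model is used purely for inference, it is common in PyTorch to switch it to evaluation mode and disable gradient tracking. A minimal sketch, assuming test_tensor is the same test data as above:

new_m.eval()           # put layers such as dropout/batch-norm into eval behaviour
with torch.no_grad():  # no gradients are needed for pure inference
    predict = new_m(test_tensor)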
The second approach saves the entire model. The training code is essentially the same; only the final torch.save call changes, writing the model object itself to rnn1.pt:

class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers):
        super(LSTM, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # Initial hidden and cell states
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)

        # Forward propagate the LSTM
        out, _ = self.lstm(x, (h0, c0))  # out: tensor of shape (batch_size, seq_length, hidden_size)
        out = self.fc(out)
        return out


rnn = LSTM(input_size=1, hidden_size=10, num_layers=2).to(device)
print(rnn)

optimizer = torch.optim.Adam(rnn.parameters(), lr=0.001)  # optimize all model parameters
loss_func = nn.MSELoss()

for epoch in range(1000):
    output = rnn(train_tensor)                     # model output
    loss = loss_func(output, train_labels_tensor)  # MSE loss
    optimizer.zero_grad()                          # clear gradients for this training step
    loss.backward()                                # backpropagation, compute gradients
    optimizer.step()                               # apply gradients
    output_sum = output

# Save the entire model (structure and parameters)
torch.save(rnn, 'rnn1.pt')
After saving, the whole model can be loaded back and used to process data directly:
new_m = torch.load('rnn1.pt')
predict = new_m(test_tensor)
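Two caveats about the whole-model approach, which follow from general PyTorch behaviour rather than from this example: torch.load unpickles the saved object, so the LSTM class definition must be importable in the script that loads it, and a model saved on a GPU can be remapped to the CPU with the map_location argument. A hedged sketch:

# The LSTM class definition must be importable here, otherwise unpickling fails.
# map_location remaps tensors saved on GPU onto the CPU; this sketch also assumes
# the global `device` used inside forward() is set to 'cpu' in this script.
device = torch.device('cpu')
new_m = torch.load('rnn1.pt', map_location=device)
predict = new_m(test_tensor.to(device))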