This article explains how to run Keras/TensorFlow training in child processes using Python's multiprocessing module. The material is adapted from a Stack Overflow answer and is meant as a practical reference; I hope you find it useful.

The basic pattern looks like this:
from multiprocessing import Process

def training_function(...):
    import keras  # the import must happen inside the child process
    ...

if __name__ == '__main__':
    p = Process(target=training_function, args=(...,))
    p.start()
Original source: https://stackoverflow.com/questions/42504669/keras-tensorflow-and-multiprocessing-in-python
1. DO NOT LOAD KERAS INTO YOUR MAIN ENVIRONMENT. If you want to load Keras / Theano / TensorFlow, do it only in the function environment. E.g. don't do this:
import keras

def training_function(...):
    ...
but do the following:
def training_function(...):
    import keras
    ...
2. Run the work connected with each model in a separate process: I usually create workers that do the job (e.g. training, tuning, scoring) and run them in separate processes. What is nice about this is that all the memory used by such a process is completely freed when the process finishes. This helps with the memory problems you usually run into when using multiprocessing, or even when running multiple models in one process. So it looks e.g. like this:
import multiprocessing

def _training_worker(train_params):
    import keras  # import only inside the worker process
    model = obtain_model(train_params)
    model.fit(train_params)
    send_message_to_main_process(...)

def train_new_model(train_params):
    training_process = multiprocessing.Process(target=_training_worker,
                                               args=(train_params,))
    training_process.start()
    get_message_from_training_process(...)
    training_process.join()
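The message-passing helpers above (obtain_model, send_message_to_main_process, get_message_from_training_process) are placeholders from the original answer. A minimal sketch of one way to wire them up is shown below, using a multiprocessing.Queue to pass the result back; the tiny model and random data are invented purely for illustration.

import multiprocessing

def _training_worker(train_params, result_queue):
    # Keras/TensorFlow is imported only inside the child process,
    # so all GPU/graph memory is released when the process exits.
    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    x = np.random.rand(256, 10)
    y = np.random.rand(256, 1)

    model = Sequential([Dense(32, activation='relu', input_shape=(10,)),
                        Dense(1)])
    model.compile(optimizer='adam', loss='mse')
    history = model.fit(x, y, epochs=train_params['epochs'], verbose=0)

    # Send only plain Python data back; the model itself stays in the child.
    result_queue.put(history.history['loss'][-1])

def train_new_model(train_params):
    result_queue = multiprocessing.Queue()
    p = multiprocessing.Process(target=_training_worker,
                                args=(train_params, result_queue))
    p.start()
    final_loss = result_queue.get()  # blocks until the worker reports back
    p.join()
    return final_loss

if __name__ == '__main__':
    print(train_new_model({'epochs': 3}))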
A different approach is simply to prepare separate scripts for the different model actions. But this may cause memory errors, especially when your models are memory-consuming. NOTE that for this reason it is better to make your execution strictly sequential, as sketched below.
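A minimal sketch of that script-per-action approach, assuming each action lives in its own file (the script names here are hypothetical): each script runs in its own interpreter, strictly one after another, so its Keras/TensorFlow memory is released before the next step starts.

import subprocess
import sys

# Hypothetical per-action scripts; each one imports Keras itself.
scripts = ['train_model.py', 'tune_model.py', 'score_model.py']

for script in scripts:
    # check=True aborts the pipeline if a step fails; running the steps
    # sequentially keeps only one model in memory at a time.
    subprocess.run([sys.executable, script], check=True)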