This article explains how to create customized InceptionV3 and CNN architectures for indoor and outdoor fire detection. It should be a useful reference for interested readers; I hope you get a lot out of it, so let me walk you through it.

Recent advances in embedded processing have made it possible for vision-based systems to detect fire during surveillance using convolutional neural networks. In this article, two customized CNN models are implemented that provide a cost-effective fire-detection architecture for surveillance video. The first is a basic CNN customized and inspired by the AlexNet architecture; we will implement it, look at its output and limitations, and then build a customized InceptionV3 model. To balance efficiency and accuracy, the models are fine-tuned with the target problem and the nature of fire data in mind. We will use three different datasets to train our models.
Creating the customized CNN architecture
import tensorflow as tf
import keras_preprocessing
from keras_preprocessing import image
from keras_preprocessing.image import ImageDataGenerator

TRAINING_DIR = "Train"
training_datagen = ImageDataGenerator(rescale=1./255,
                                      horizontal_flip=True,
                                      rotation_range=30,
                                      height_shift_range=0.2,
                                      fill_mode='nearest')

VALIDATION_DIR = "Validation"
validation_datagen = ImageDataGenerator(rescale=1./255)

train_generator = training_datagen.flow_from_directory(TRAINING_DIR,
                                                       target_size=(224, 224),
                                                       class_mode='categorical',
                                                       batch_size=64)

validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,
                                                              target_size=(224, 224),
                                                              class_mode='categorical',
                                                              batch_size=16)
from tensorflow.keras.optimizers import Adam

# Basic CNN inspired by the AlexNet architecture
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(96, (11, 11), strides=(4, 4), activation='relu', input_shape=(224, 224, 3)),
    tf.keras.layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2)),
    tf.keras.layers.Conv2D(256, (5, 5), activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2)),
    tf.keras.layers.Conv2D(384, (5, 5), activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(2048, activation='relu'),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Dense(1024, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(2, activation='softmax')   # two classes: fire / no fire
])

model.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=0.0001),
              metrics=['acc'])

history = model.fit(train_generator,
                    steps_per_epoch=15,
                    epochs=50,
                    validation_data=validation_generator,
                    validation_steps=15)
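The introduction promised a look at this basic model's output and limitations; the original code does not show how to do that, but one common way is to plot the accuracy and loss curves recorded in the history object returned by model.fit. The snippet below is a minimal sketch of that, assuming Matplotlib is installed; it is not part of the original article.

import matplotlib.pyplot as plt

# history.history holds the per-epoch metrics recorded by model.fit
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)

plt.plot(epochs, acc, label='Training accuracy')
plt.plot(epochs, val_acc, label='Validation accuracy')
plt.legend()
plt.figure()

plt.plot(epochs, loss, label='Training loss')
plt.plot(epochs, val_loss, label='Validation loss')
plt.legend()
plt.show()

A widening gap between the training and validation curves is the quickest way to spot the overfitting limitations this small architecture tends to show.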
Creating the customized InceptionV3 model
We start by creating the ImageDataGenerator for the customized InceptionV3. The dataset contains three classes, but for this article we will use only two of them. It has 1,800 images for training and 200 images for validation. I also added 8 living-room images to introduce some noise into the dataset.
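flow_from_directory infers the class labels from the sub-folder names, so the training and validation directories must each contain one sub-folder per class. The snippet below is a small sanity-check sketch; the folder names "Fire" and "Neutral" are assumptions for illustration and may differ in your copy of the dataset.

import os

# Hypothetical class folders; adjust to the actual sub-folder names in your dataset
for split in ("Train", "/content/FIRE-SMOKE-DATASET/Test"):
    for cls in ("Fire", "Neutral"):
        folder = os.path.join(split, cls)
        if os.path.isdir(folder):
            print(split, cls, len(os.listdir(folder)), "images")
        else:
            print("Missing folder:", folder)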
import tensorflow as tf
import keras_preprocessing
from keras_preprocessing import image
from keras_preprocessing.image import ImageDataGenerator

TRAINING_DIR = "Train"
training_datagen = ImageDataGenerator(rescale=1./255,
                                      zoom_range=0.15,
                                      horizontal_flip=True,
                                      fill_mode='nearest')

VALIDATION_DIR = "/content/FIRE-SMOKE-DATASET/Test"
validation_datagen = ImageDataGenerator(rescale=1./255)

train_generator = training_datagen.flow_from_directory(TRAINING_DIR,
                                                       target_size=(224, 224),
                                                       shuffle=True,
                                                       class_mode='categorical',
                                                       batch_size=128)

validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,
                                                              target_size=(224, 224),
                                                              class_mode='categorical',
                                                              shuffle=True,
                                                              batch_size=14)
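flow_from_directory assigns class indices alphabetically by folder name, and the real-time test at the end of this article assumes index 0 is the fire class. A quick check (not in the original code) confirms the mapping before training:

# Mapping from class name to index, e.g. {'Fire': 0, 'Neutral': 1}; exact names depend on your folders
print(train_generator.class_indices)
print(validation_generator.class_indices)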
from tensorflow.keras.applications.inception_v3 import InceptionV3
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D, Input, Dropout

input_tensor = Input(shape=(224, 224, 3))
# ImageNet-pretrained InceptionV3 without its top classification layers
base_model = InceptionV3(input_tensor=input_tensor, weights='imagenet', include_top=False)

# Custom classification head
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(2048, activation='relu')(x)
x = Dropout(0.25)(x)
x = Dense(1024, activation='relu')(x)
x = Dropout(0.2)(x)
predictions = Dense(2, activation='softmax')(x)

model = Model(inputs=base_model.input, outputs=predictions)

# First stage: train only the new head, keeping the pretrained base frozen
for layer in base_model.layers:
    layer.trainable = False

model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['acc'])

history = model.fit(train_generator,
                    steps_per_epoch=14,
                    epochs=20,
                    validation_data=validation_generator,
                    validation_steps=14)
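The fine-tuning step below freezes "the first 249 layers", which in the stock Keras InceptionV3 roughly marks the start of the last two Inception blocks. The exact layer count can vary between Keras versions, so the short check below is an illustrative sketch, not part of the original code; it simply prints the total layer count and the layers around that boundary.

# Inspect the layer count and the layers around the fine-tuning boundary
print("Total layers in the model:", len(model.layers))
for i, layer in enumerate(model.layers[245:255], start=245):
    print(i, layer.name, layer.trainable)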
# To train the top 2 Inception blocks, freeze the first 249 layers and unfreeze the rest
for layer in model.layers[:249]:
    layer.trainable = False
for layer in model.layers[249:]:
    layer.trainable = True

# Recompile the model for these modifications to take effect
from tensorflow.keras.optimizers import SGD
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy', metrics=['acc'])

history = model.fit(train_generator,
                    steps_per_epoch=14,
                    epochs=10,
                    validation_data=validation_generator,
                    validation_steps=14)
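The real-time test below loads a saved model file, but the training code never shows the save step. A minimal sketch, assuming the Keras HDF5 format that the loading code expects ('InceptionV3.h5'):

# Save the fine-tuned model so the real-time script can load it later
model.save('InceptionV3.h5')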
Real-time testing
import cv2
import numpy as np
from PIL import Image
import tensorflow as tf
from keras.preprocessing import image

# Load the saved model
model = tf.keras.models.load_model('InceptionV3.h5')

video = cv2.VideoCapture(0)

while True:
    _, frame = video.read()

    # Convert the captured frame into RGB
    im = Image.fromarray(frame, 'RGB')

    # Resize to 224x224 because the model was trained with this image size
    im = im.resize((224, 224))
    img_array = image.img_to_array(im)
    img_array = np.expand_dims(img_array, axis=0) / 255

    # Call predict on the model to detect 'fire' in the frame
    probabilities = model.predict(img_array)[0]
    prediction = np.argmax(probabilities)

    # If the prediction is 0, there is fire in the frame
    if prediction == 0:
        frame = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
        print(probabilities[prediction])

    cv2.imshow("Capturing", frame)
    key = cv2.waitKey(1)
    if key == ord('q'):
        break

video.release()
cv2.destroyAllWindows()
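If no webcam is available, the same model can first be checked against a single image file. The sketch below mirrors the preprocessing used in the webcam loop; the file path 'test_fire.jpg' is a hypothetical example, not part of the original article.

from PIL import Image
import numpy as np
import tensorflow as tf
from keras.preprocessing import image

model = tf.keras.models.load_model('InceptionV3.h5')

# 'test_fire.jpg' is a hypothetical example path
im = Image.open('test_fire.jpg').convert('RGB').resize((224, 224))
img_array = np.expand_dims(image.img_to_array(im), axis=0) / 255
probabilities = model.predict(img_array)[0]
print("Predicted class index:", np.argmax(probabilities), "probabilities:", probabilities)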
Thank you for reading this article carefully. I hope this post on "How to create customized InceptionV3 and CNN architectures for indoor and outdoor fire detection" has been helpful. Please continue to support 億速云 and follow its industry news channel, where more related knowledge is waiting for you!