This article explains how to implement multi-object tracking with OpenCV in Python (with the equivalent C++ code shown alongside), walking through a complete working example step by step.
Most beginners in computer vision and machine learning start with object detection. If you are a beginner, you may wonder why we need object tracking at all — can't we simply detect the objects in every frame?
Let's look at a few reasons why tracking is useful:
First, when multiple objects (say, people) are detected in a video frame, tracking helps establish the identity of each object across frames.
Second, object detection can sometimes fail while tracking still succeeds, because the tracker takes into account the object's location and appearance in the previous frame.
Third, some tracking algorithms are very fast because they perform a local search rather than a global one. We can therefore get very high throughput by running object detection only every n-th frame and tracking the objects in the frames in between.
So why not track an object indefinitely after the first detection? A tracker can sometimes lose the object it is following — for example, when the object moves too much between frames the tracker may fail to keep up. It is therefore common to run the detector again after tracking for a while, as sketched below.
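As a rough sketch of that detect-every-n-frames pattern (the DETECT_EVERY interval, the placeholder detect_objects function, and its hard-coded box are assumptions for illustration, not part of this article's code), the flow might look like this in Python:

import cv2

DETECT_EVERY = 30  # assumed re-detection interval, in frames


def detect_objects(frame):
    # Placeholder detector: replace with a real one (e.g. a DNN-based detector).
    # Here we just return one hard-coded (x, y, w, h) box for illustration.
    return [(100, 100, 50, 80)]


def track_with_periodic_detection(videoPath):
    cap = cv2.VideoCapture(videoPath)
    multiTracker = None
    frame_idx = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if multiTracker is None or frame_idx % DETECT_EVERY == 0:
            # Re-detect and rebuild the multi-tracker from the fresh detections.
            multiTracker = cv2.MultiTracker_create()
            for box in detect_objects(frame):
                multiTracker.add(cv2.TrackerCSRT_create(), frame, box)
        else:
            # Intermediate frames: cheap local search only.
            ok, boxes = multiTracker.update(frame)
    frame_idx += 1
    cap.release()

In a real system you would also draw or otherwise use the boxes on the intermediate frames and decide how to reconcile new detections with the objects already being tracked.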
In this tutorial we will focus only on the tracking part; the objects to be tracked are obtained by specifying bounding boxes around them.
OpenCV's MultiTracker class provides an implementation of multi-object tracking. It is only a rudimentary implementation, however, because it simply runs the individual trackers and performs no joint optimization over the tracked objects.
A multi-object tracker is simply a collection of single-object trackers. We start by defining a function that takes a tracker type as input and creates the corresponding tracker object.
OpenCV offers eight different tracker types: BOOSTING, MIL, KCF, TLD, MEDIANFLOW, GOTURN, MOSSE, and CSRT. The GOTURN tracker is not used in this article. The general flow is: given the tracker name, return a single-object tracker, and then build the multi-tracker from these.
C++ code:

vector<string> trackerTypes = {"BOOSTING", "MIL", "KCF", "TLD", "MEDIANFLOW", "GOTURN", "MOSSE", "CSRT"};

/**
 * @brief Create a Tracker By Name object — initialize a tracker of the given type
 *
 * @param trackerType
 * @return Ptr<Tracker>
 */
Ptr<Tracker> createTrackerByName(string trackerType)
{
    Ptr<Tracker> tracker;
    if (trackerType == trackerTypes[0])
        tracker = TrackerBoosting::create();
    else if (trackerType == trackerTypes[1])
        tracker = TrackerMIL::create();
    else if (trackerType == trackerTypes[2])
        tracker = TrackerKCF::create();
    else if (trackerType == trackerTypes[3])
        tracker = TrackerTLD::create();
    else if (trackerType == trackerTypes[4])
        tracker = TrackerMedianFlow::create();
    else if (trackerType == trackerTypes[5])
        tracker = TrackerGOTURN::create();
    else if (trackerType == trackerTypes[6])
        tracker = TrackerMOSSE::create();
    else if (trackerType == trackerTypes[7])
        tracker = TrackerCSRT::create();
    else
    {
        cout << "Incorrect tracker name" << endl;
        cout << "Available trackers are: " << endl;
        for (vector<string>::iterator it = trackerTypes.begin(); it != trackerTypes.end(); ++it)
        {
            std::cout << " " << *it << endl;
        }
    }
    return tracker;
}
Python code:
from __future__ import print_function
import sys
import cv2
from random import randint

trackerTypes = ['BOOSTING', 'MIL', 'KCF', 'TLD', 'MEDIANFLOW', 'GOTURN', 'MOSSE', 'CSRT']

def createTrackerByName(trackerType):
    # Create a tracker based on tracker name
    if trackerType == trackerTypes[0]:
        tracker = cv2.TrackerBoosting_create()
    elif trackerType == trackerTypes[1]:
        tracker = cv2.TrackerMIL_create()
    elif trackerType == trackerTypes[2]:
        tracker = cv2.TrackerKCF_create()
    elif trackerType == trackerTypes[3]:
        tracker = cv2.TrackerTLD_create()
    elif trackerType == trackerTypes[4]:
        tracker = cv2.TrackerMedianFlow_create()
    elif trackerType == trackerTypes[5]:
        tracker = cv2.TrackerGOTURN_create()
    elif trackerType == trackerTypes[6]:
        tracker = cv2.TrackerMOSSE_create()
    elif trackerType == trackerTypes[7]:
        tracker = cv2.TrackerCSRT_create()
    else:
        tracker = None
        print('Incorrect tracker name')
        print('Available trackers are:')
        for t in trackerTypes:
            print(t)
    return tracker
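One caveat before going further: the constructor names above match the OpenCV version this article was written against. In newer opencv-contrib-python builds (roughly 4.5.1 and later) several of them, along with MultiTracker itself, were moved to the cv2.legacy submodule, so a small compatibility shim may be needed — a minimal sketch, to be adjusted against your installed version:

import cv2

# Minimal compatibility sketch: fall back to cv2 itself on older builds where
# the legacy submodule does not exist. Verify against your installed version.
api = getattr(cv2, 'legacy', cv2)
multiTracker = api.MultiTracker_create()
tracker = api.TrackerMOSSE_create()   # e.g. MOSSE lives only under cv2.legacy there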
The multi-object tracker needs two inputs: a video frame and the locations (bounding boxes) of all the objects we want to track.
Given this information, the tracker follows the location of these objects in all subsequent frames. In the code below we first load the video with the VideoCapture class and read the first frame, which will later be used to initialize the MultiTracker.
C++ code:

// Set tracker type. Change this to try different trackers.
string trackerType = trackerTypes[6];

// set default values for tracking algorithm and video
string videoPath = "video/run.mp4";

// Initialize MultiTracker with tracking algo — bounding boxes
vector<Rect> bboxes;

// create a video capture object to read videos
cv::VideoCapture cap(videoPath);
Mat frame;

// quit if unable to read video file
if (!cap.isOpened())
{
    cout << "Error opening video file " << videoPath << endl;
    return -1;
}

// read first frame
cap >> frame;
Python code:
# Set video to load
videoPath = "video/run.mp4"

# Create a video capture object to read videos
cap = cv2.VideoCapture(videoPath)

# Read first frame
success, frame = cap.read()

# quit if unable to read the video file
if not success:
    print('Failed to read video')
    sys.exit(1)
Next, we need to locate the objects we want to track in the first frame. OpenCV provides a function called selectROIs that pops up a GUI for selecting bounding boxes (also called regions of interest, or ROIs). The C++ version, selectROIs, lets you select multiple bounding boxes at once, but the Python version used here exposes only selectROI, which returns a single box, so in Python we call it in a loop to collect multiple boxes. For each object we also pick a random color for its bounding box. With selectROI you draw a box on the image and press ENTER to confirm it and move on to the next box; press ESC to finish selection and start the program.
C++ code:
// Get bounding boxes for first frame
// selectROI's default behaviour is to draw box starting from the center
// when fromCenter is set to false, you can draw box starting from top left corner
bool showCrosshair = true;
bool fromCenter = false;
cout << "\n==========================================================\n";
cout << "OpenCV says press c to cancel objects selection process" << endl;
cout << "It doesn't work. Press Escape to exit selection process" << endl;
cout << "\n==========================================================\n";
cv::selectROIs("MultiTracker", frame, bboxes, showCrosshair, fromCenter);

// quit if there are no objects to track
if (bboxes.size() < 1)
    return 0;

vector<Scalar> colors;
getRandomColors(colors, bboxes.size());

// Fill the vector with random colors
void getRandomColors(vector<Scalar>& colors, int numColors)
{
    RNG rng(0);
    for (int i = 0; i < numColors; i++)
        colors.push_back(Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)));
}
Python code:
## Select boxes
bboxes = []
colors = []

# OpenCV's selectROI function doesn't work for selecting multiple objects in Python
# So we will call this function in a loop till we are done selecting all objects
while True:
    # draw bounding boxes over objects
    # selectROI's default behaviour is to draw box starting from the center
    # when fromCenter is set to false, you can draw box starting from top left corner
    bbox = cv2.selectROI('MultiTracker', frame)
    bboxes.append(bbox)
    colors.append((randint(0, 255), randint(0, 255), randint(0, 255)))
    print("Press q to quit selecting boxes and start tracking")
    print("Press any other key to select next object")
    k = cv2.waitKey(0) & 0xFF
    if (k == 113):  # q is pressed
        break

print('Selected bounding boxes {}'.format(bboxes))
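As an aside, newer Python builds of OpenCV also expose cv2.selectROIs, which lets you draw all the boxes in one call; a minimal sketch, assuming your build provides it:

# Sketch, assuming cv2.selectROIs is available in your OpenCV build.
# Draw each box and press SPACE or ENTER to confirm it; press ESC when done.
rois = cv2.selectROIs('MultiTracker', frame, True, False)  # showCrosshair, fromCenter
bboxes = [tuple(map(int, r)) for r in rois]                # each ROI is (x, y, w, h)
colors = [(randint(0, 255), randint(0, 255), randint(0, 255)) for _ in bboxes]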
So far we have read the first frame and obtained the bounding boxes around the objects. That is all the information we need to initialize the multi-object tracker. We first create a MultiTracker object and add one single-object tracker per object we want to track. In this example we use the CSRT single-object tracker, but you can try other tracker types by changing the trackerType variable below to one of the eight trackers mentioned at the beginning of this article. The CSRT tracker is not the fastest, but it produced the best results in many of the cases we tried.
You can also mix different tracker types within the same MultiTracker, although of course that rarely makes sense. In practice only a few are worth using: CSRT gives the highest accuracy, KCF offers the best overall balance of speed and accuracy, and MOSSE is the fastest.
The MultiTracker class is just a wrapper around these single-object trackers. As we saw in the previous post, a single-object tracker is initialized with the first frame and a bounding box indicating the location of the object we want to track; MultiTracker passes this information on to the single-object trackers it wraps internally.
C++ code:

// Create multitracker
Ptr<MultiTracker> multiTracker = cv::MultiTracker::create();

// initialize multitracker
for (int i = 0; i < bboxes.size(); i++)
{
    multiTracker->add(createTrackerByName(trackerType), frame, Rect2d(bboxes[i]));
}
Python code:
# Specify the tracker type
trackerType = "CSRT"

# Create MultiTracker object
multiTracker = cv2.MultiTracker_create()

# Initialize MultiTracker
for bbox in bboxes:
    multiTracker.add(createTrackerByName(trackerType), frame, bbox)
Finally, our MultiTracker is ready and we can track multiple objects in new frames. We use the MultiTracker class's update method to locate the objects in each new frame, and each tracked object's bounding box is drawn in its own color.
The update function returns True or False: it returns False when tracking fails. The C++ code below checks this flag; the Python code does not. Note, however, that even when update returns False it keeps updating and still returns bounding boxes, so once it returns False it is advisable to stop tracking (or re-run detection).
C++ code:

while (cap.isOpened())
{
    // get frame from the video
    cap >> frame;

    // stop the program if reached end of video
    if (frame.empty())
    {
        break;
    }

    // update the tracking result with new frame
    bool ok = multiTracker->update(frame);
    if (ok == true)
    {
        cout << "Tracking success" << endl;
    }
    else
    {
        cout << "Tracking failure" << endl;
    }

    // draw tracked objects
    for (unsigned i = 0; i < multiTracker->getObjects().size(); i++)
    {
        rectangle(frame, multiTracker->getObjects()[i], colors[i], 2, 1);
    }

    // show frame
    imshow("MultiTracker", frame);

    // quit on ESC key (code 27)
    if (waitKey(1) == 27)
    {
        break;
    }
}
Python code:
# Process video and track objects
while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break

    # get updated location of objects in subsequent frames
    success, boxes = multiTracker.update(frame)

    # draw tracked objects
    for i, newbox in enumerate(boxes):
        p1 = (int(newbox[0]), int(newbox[1]))
        p2 = (int(newbox[0] + newbox[2]), int(newbox[1] + newbox[3]))
        cv2.rectangle(frame, p1, p2, colors[i], 2, 1)

    # show frame
    cv2.imshow('MultiTracker', frame)

    # quit on ESC button
    if cv2.waitKey(1) & 0xFF == 27:  # Esc pressed
        break
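If you want the Python loop to react to tracking failure the way the C++ version does, a variant of the loop above (reusing the same cap, multiTracker, and colors) might look like this; whether you break, re-detect, or just log the failure is up to you:

# Sketch: the same loop, but honouring the flag returned by update().
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    success, boxes = multiTracker.update(frame)
    if not success:
        print('Tracking failure')  # boxes are still returned, but unreliable
        break                      # or: re-run detection and rebuild the MultiTracker

    for i, newbox in enumerate(boxes):
        p1 = (int(newbox[0]), int(newbox[1]))
        p2 = (int(newbox[0] + newbox[2]), int(newbox[1] + newbox[3]))
        cv2.rectangle(frame, p1, p2, colors[i], 2, 1)

    cv2.imshow('MultiTracker', frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc pressed
        break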
In effect, multi-object tracking just creates multiple single-object trackers, each following one object. If you want to combine it with object detection and set the object boxes yourself, simply push a Rect for each box:
// set the detection boxes yourself
// x, y, width, height
//bboxes.push_back(Rect(388, 155, 30, 40));
//bboxes.push_back(Rect(492, 205, 50, 80));
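In Python the same idea is just a list of (x, y, width, height) tuples in place of the selectROI loop — a minimal sketch (the coordinates are the sample values from the C++ comments above, purely illustrative):

# Sketch: supply the boxes yourself (e.g. from a detector) instead of selectROI.
bboxes = [(388, 155, 30, 40), (492, 205, 50, 80)]   # (x, y, width, height)
colors = [(randint(0, 255), randint(0, 255), randint(0, 255)) for _ in bboxes]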
Overall, the accuracy is about the same as running a single-object tracker, while the runtime is roughly 5 to 7 times higher, depending on the algorithm; a simple way to measure this on your own video is sketched below.
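A rough way to check that is to time each update() call — a sketch reusing the cap and multiTracker set up above (the numbers will depend on resolution, tracker type, and the number of boxes):

import time

# Sketch: measure the mean per-frame cost of MultiTracker.update().
frame_times = []
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    t0 = time.perf_counter()
    multiTracker.update(frame)
    frame_times.append(time.perf_counter() - t0)

if frame_times:
    print('mean update time: {:.1f} ms'.format(1000 * sum(frame_times) / len(frame_times)))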
The complete code follows.
C++:
// Opencv_MultiTracker.cpp : This file contains the "main" function. Program execution begins and ends here.
//

#include "pch.h"
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/tracking.hpp>

using namespace cv;
using namespace std;

vector<string> trackerTypes = {"BOOSTING", "MIL", "KCF", "TLD", "MEDIANFLOW", "GOTURN", "MOSSE", "CSRT"};

/**
 * @brief Create a Tracker By Name object — initialize a tracker of the given type
 *
 * @param trackerType
 * @return Ptr<Tracker>
 */
Ptr<Tracker> createTrackerByName(string trackerType)
{
    Ptr<Tracker> tracker;
    if (trackerType == trackerTypes[0])
        tracker = TrackerBoosting::create();
    else if (trackerType == trackerTypes[1])
        tracker = TrackerMIL::create();
    else if (trackerType == trackerTypes[2])
        tracker = TrackerKCF::create();
    else if (trackerType == trackerTypes[3])
        tracker = TrackerTLD::create();
    else if (trackerType == trackerTypes[4])
        tracker = TrackerMedianFlow::create();
    else if (trackerType == trackerTypes[5])
        tracker = TrackerGOTURN::create();
    else if (trackerType == trackerTypes[6])
        tracker = TrackerMOSSE::create();
    else if (trackerType == trackerTypes[7])
        tracker = TrackerCSRT::create();
    else
    {
        cout << "Incorrect tracker name" << endl;
        cout << "Available trackers are: " << endl;
        for (vector<string>::iterator it = trackerTypes.begin(); it != trackerTypes.end(); ++it)
        {
            std::cout << " " << *it << endl;
        }
    }
    return tracker;
}

/**
 * @brief Get the Random Colors object — fill the vector with random colors
 *
 * @param colors
 * @param numColors
 */
void getRandomColors(vector<Scalar> &colors, int numColors)
{
    RNG rng(0);
    for (int i = 0; i < numColors; i++)
    {
        colors.push_back(Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)));
    }
}

int main(int argc, char *argv[])
{
    // Set tracker type. Change this to try different trackers.
    string trackerType = trackerTypes[7];

    // set default values for tracking algorithm and video
    string videoPath = "video/run.mp4";

    // Initialize MultiTracker with tracking algo — bounding boxes
    vector<Rect> bboxes;

    // create a video capture object to read videos
    cv::VideoCapture cap(videoPath);
    Mat frame;

    // quit if unable to read video file
    if (!cap.isOpened())
    {
        cout << "Error opening video file " << videoPath << endl;
        return -1;
    }

    // read first frame
    cap >> frame;

    // draw bounding boxes over objects in the first frame
    /* Draw a box on the image, then press ENTER to confirm it and draw the next one.
       Press ESC to finish selection and start the program. */
    cout << "\n==========================================================\n";
    cout << "OpenCV says press c to cancel objects selection process" << endl;
    cout << "It doesn't work. Press Esc to exit selection process" << endl;
    cout << "\n==========================================================\n";
    cv::selectROIs("MultiTracker", frame, bboxes, false);

    // set the detection boxes yourself
    // x, y, width, height
    //bboxes.push_back(Rect(388, 155, 30, 40));
    //bboxes.push_back(Rect(492, 205, 50, 80));

    // quit if there are no objects to track
    if (bboxes.size() < 1)
    {
        return 0;
    }

    vector<Scalar> colors;
    // assign a color to each box
    getRandomColors(colors, bboxes.size());

    // Create multitracker
    Ptr<MultiTracker> multiTracker = cv::MultiTracker::create();

    // initialize multitracker
    for (int i = 0; i < bboxes.size(); i++)
    {
        multiTracker->add(createTrackerByName(trackerType), frame, Rect2d(bboxes[i]));
    }

    // process video and track objects
    cout << "\n==========================================================\n";
    cout << "Started tracking, press ESC to quit." << endl;

    while (cap.isOpened())
    {
        // get frame from the video
        cap >> frame;

        // stop the program if reached end of video
        if (frame.empty())
        {
            break;
        }

        // update the tracking result with new frame
        bool ok = multiTracker->update(frame);
        if (ok == true)
        {
            cout << "Tracking success" << endl;
        }
        else
        {
            cout << "Tracking failure" << endl;
        }

        // draw tracked objects
        for (unsigned i = 0; i < multiTracker->getObjects().size(); i++)
        {
            rectangle(frame, multiTracker->getObjects()[i], colors[i], 2, 1);
        }

        // show frame
        imshow("MultiTracker", frame);

        // quit on ESC key (code 27)
        if (waitKey(1) == 27)
        {
            break;
        }
    }

    waitKey(0);

    return 0;
}
Python:
from __future__ import print_function
import sys
import cv2
from random import randint

trackerTypes = ['BOOSTING', 'MIL', 'KCF', 'TLD', 'MEDIANFLOW', 'GOTURN', 'MOSSE', 'CSRT']


def createTrackerByName(trackerType):
    # Create a tracker based on tracker name
    if trackerType == trackerTypes[0]:
        tracker = cv2.TrackerBoosting_create()
    elif trackerType == trackerTypes[1]:
        tracker = cv2.TrackerMIL_create()
    elif trackerType == trackerTypes[2]:
        tracker = cv2.TrackerKCF_create()
    elif trackerType == trackerTypes[3]:
        tracker = cv2.TrackerTLD_create()
    elif trackerType == trackerTypes[4]:
        tracker = cv2.TrackerMedianFlow_create()
    elif trackerType == trackerTypes[5]:
        tracker = cv2.TrackerGOTURN_create()
    elif trackerType == trackerTypes[6]:
        tracker = cv2.TrackerMOSSE_create()
    elif trackerType == trackerTypes[7]:
        tracker = cv2.TrackerCSRT_create()
    else:
        tracker = None
        print('Incorrect tracker name')
        print('Available trackers are:')
        for t in trackerTypes:
            print(t)
    return tracker


if __name__ == '__main__':

    print("Default tracking algorithm is CSRT \n"
          "Available tracking algorithms are:\n")
    for t in trackerTypes:
        print(t)

    trackerType = "CSRT"

    # Set video to load
    videoPath = "video/run.mp4"

    # Create a video capture object to read videos
    cap = cv2.VideoCapture(videoPath)

    # Read first frame
    success, frame = cap.read()
    # quit if unable to read the video file
    if not success:
        print('Failed to read video')
        sys.exit(1)

    ## Select boxes
    bboxes = []
    colors = []

    # OpenCV's selectROI function doesn't work for selecting multiple objects in Python
    # So we will call this function in a loop till we are done selecting all objects
    while True:
        # draw bounding boxes over objects
        # selectROI's default behaviour is to draw box starting from the center
        # when fromCenter is set to false, you can draw box starting from top left corner
        bbox = cv2.selectROI('MultiTracker', frame)
        bboxes.append(bbox)
        colors.append((randint(64, 255), randint(64, 255), randint(64, 255)))
        print("Press q to quit selecting boxes and start tracking")
        print("Press any other key to select next object")
        k = cv2.waitKey(0) & 0xFF
        if (k == 113):  # q is pressed
            break

    print('Selected bounding boxes {}'.format(bboxes))

    ## Initialize MultiTracker
    # There are two ways you can initialize multitracker
    # 1. tracker = cv2.MultiTracker("CSRT")
    #    All the trackers added to this multitracker
    #    will use CSRT algorithm as default
    # 2. tracker = cv2.MultiTracker()
    #    No default algorithm specified

    # Initialize MultiTracker with tracking algo
    # Specify tracker type

    # Create MultiTracker object
    multiTracker = cv2.MultiTracker_create()

    # Initialize MultiTracker
    for bbox in bboxes:
        multiTracker.add(createTrackerByName(trackerType), frame, bbox)

    # Process video and track objects
    while cap.isOpened():
        success, frame = cap.read()
        if not success:
            break

        # get updated location of objects in subsequent frames
        success, boxes = multiTracker.update(frame)

        # draw tracked objects
        for i, newbox in enumerate(boxes):
            p1 = (int(newbox[0]), int(newbox[1]))
            p2 = (int(newbox[0] + newbox[2]), int(newbox[1] + newbox[3]))
            cv2.rectangle(frame, p1, p2, colors[i], 2, 1)

        # show frame
        cv2.imshow('MultiTracker', frame)

        # quit on ESC button
        if cv2.waitKey(1) & 0xFF == 27:  # Esc pressed
            break
That covers how to implement multi-object tracking with OpenCV in Python. Thanks for reading.