
Deploying a YOLO Object Detection Model with C++


Thanks to Ultralytics, running an object detection model in Python has become very easy. But what about running a YOLO model in C++? This article shows you how to run a YOLO model in C++ using nothing but the OpenCV library, and it covers running a YOLOv11 model on the CPU rather than the GPU. Running the model on a GPU requires installing CUDA, cuDNN, and so on, steps that can be confusing and time-consuming. I will write another article in the future on running YOLO models with CUDA support.

For now, all you need is the OpenCV library. If you haven't installed it yet, you can follow this tutorial: https://www.youtube.com/watch?v=CnXUTG9XYGI&t=159s.

1. Clone the ultralytics repository to export the model to ONNX

  • Create a new folder and name it whatever you like. Open a terminal and clone the ultralytics repository into this folder. We will use this repository to export the model to ONNX format.
git clone https://github.com/ultralytics/ultralytics

  • I will use the yolov11s.pt model, but you can use a custom YOLOv11 model; the process stays the same. You can download a pretrained model from this link (https://github.com/ultralytics/ultralytics) or use your own.

Now let's export the model to ONNX with the code below. Various parameters can be set during the conversion; the second export call in the code spells out the commonly used parameters with explanations.

from ultralytics import YOLO
 
# Load a model from its YOLO .pt weights file
model = YOLO("/path/to/best.pt")
 
# Minimal export to ONNX format
model.export(format="onnx")

# The same export with the commonly used parameters spelled out
model.export(
    format="onnx",      # export format: ONNX
    imgsz=(640, 640),   # input image size
    keras=False,        # do not export to Keras format
    optimize=False,     # mobile optimization; only applies to TorchScript exports
    half=False,         # do not enable FP16 quantization
    int8=False,         # do not enable INT8 quantization
    dynamic=False,      # do not enable dynamic input sizes
    simplify=True,      # simplify the ONNX model
    opset=None,         # use the latest opset version
    workspace=4.0,      # maximum workspace size (GiB) for TensorRT optimization
    nms=False,          # do not add NMS (non-maximum suppression)
    batch=1,            # batch size
    device="cpu"        # export device: "cpu" for CPU, or a GPU index such as "0"
)
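Before moving on, it can be worth sanity-checking that OpenCV's DNN module can actually read the exported file. Below is a minimal sketch (not part of the original walkthrough; adjust "yolov11s.onnx" to your own exported file) that loads the model and prints its output layer names. If it runs without throwing, the model is readable:

#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    // Load the exported ONNX model (change the path to your own file)
    cv::dnn::Net net = cv::dnn::readNet("yolov11s.onnx");

    // Print the unconnected output layer names; a successful load and
    // a sensible layer list means OpenCV can parse the model
    for (const auto &name : net.getUnconnectedOutLayersNames())
        std::cout << name << std::endl;
    return 0;
}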

2. Create a TXT file to store the YOLO model's labels

This step is very simple: you just need to create a txt file to store the labels. If you are using a pretrained YOLO model like me, you can download the txt file directly from this link: https://github.com/amikelive/coco-labels/blob/master/coco-labels-2014_2017.txt.

If you have a custom model, create a new txt file, write your labels in it, and name the file whatever you like; the expected format is shown below.
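The format is simply one label per line, in the same order as the class indices the model was trained with. For reference, the first few lines of the COCO label file look like this (80 lines in total):

person
bicycle
car
motorcycle
airplane
bus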

3. Create the CMakeLists.txt file

Now let's create a CMakeLists.txt file. This file is required to compile the C++ program with CMake. If you installed OpenCV from the link I shared, you should already have CMake installed.

cmake_minimum_required(VERSION 3.10)
project(cpp-yolo-detection) # your folder name here


# Find OpenCV
set(OpenCV_DIR C:/Libraries/opencv/build) # path to opencv
find_package(OpenCV REQUIRED)


add_executable(object-detection object-detection.cpp) # your file name


# Link OpenCV libraries
target_link_libraries(object-detection ${OpenCV_LIBS})

4. The code

Finally, the last step. I used code from this repository, but I modified some parts and added comments to help you understand it better.

#include <fstream>
#include <opencv2/opencv.hpp>




// Load labels from coco-classes.txt file
std::vector<std::string> load_class_list()
{
    std::vector<std::string> class_list;
    // change this path to your own txt file that contains the labels
    std::ifstream ifs("C:/Users/sirom/Desktop/cpp-ultralytics/coco-classes.txt");
    std::string line;
    while (getline(ifs, line))
    {
        class_list.push_back(line);
    }
    return class_list;
}


// Model 
void load_net(cv::dnn::Net &net)
{   
    // change this path to your model path 
    auto result = cv::dnn::readNet("C:/Users/sirom/Desktop/cpp-ultralytics/yolov5s.onnx");


    std::cout << "Running on CPU/n";
    result.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
    result.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);
 
    net = result;
}
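As an aside (not used in this article's CPU setup): if your OpenCV was built with CUDA support, switching to GPU inference only requires swapping the two backend/target lines inside load_net:

    // GPU variant: requires an OpenCV build compiled with CUDA enabled
    result.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
    result.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);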


const std::vector<cv::Scalar> colors = {cv::Scalar(255, 255, 0), cv::Scalar(0, 255, 0), cv::Scalar(0, 255, 255), cv::Scalar(255, 0, 0)};


// You can change these parameters to obtain better results
const float INPUT_WIDTH = 640.0;
const float INPUT_HEIGHT = 640.0;
const float SCORE_THRESHOLD = 0.5;
const float NMS_THRESHOLD = 0.5;
const float CONFIDENCE_THRESHOLD = 0.5;


struct Detection
{
    int class_id;
    float confidence;
    cv::Rect box;
};


// Pad the input image to a square canvas (YOLOv5-style preprocessing)
cv::Mat format_yolov5(const cv::Mat &source) {
    int col = source.cols;
    int row = source.rows;
    int _max = MAX(col, row);
    cv::Mat result = cv::Mat::zeros(_max, _max, CV_8UC3);
    source.copyTo(result(cv::Rect(0, 0, col, row)));
    return result;
}


// Detection function
void detect(cv::Mat &image, cv::dnn::Net &net, std::vector<Detection> &output, const std::vector<std::string> &className) {
    cv::Mat blob;


    // Format the input image to fit the model input requirements
    auto input_image = format_yolov5(image);
    
    // Convert the image into a blob and set it as input to the network
    cv::dnn::blobFromImage(input_image, blob, 1./255., cv::Size(INPUT_WIDTH, INPUT_HEIGHT), cv::Scalar(), true, false);
    net.setInput(blob);
    std::vector<cv::Mat> outputs;
    net.forward(outputs, net.getUnconnectedOutLayersNames());


    // Scaling factors to map the bounding boxes back to original image size
    float x_factor = input_image.cols / INPUT_WIDTH;
    float y_factor = input_image.rows / INPUT_HEIGHT;
    
    float *data = (float *)outputs[0].data;


    const int dimensions = 85; // 4 box coords + 1 objectness score + 80 class scores
    const int rows = 25200;    // number of candidate detections in YOLOv5's 640x640 output
    
    std::vector<int> class_ids; // Stores class IDs of detections
    std::vector<float> confidences; // Stores confidence scores of detections
    std::vector<cv::Rect> boxes;   // Stores bounding boxes


   // Loop through all the rows to process predictions
    for (int i = 0; i < rows; ++i) {


        // Get the confidence of the current detection
        float confidence = data[4];


        // Process only detections with confidence above the threshold
        if (confidence >= CONFIDENCE_THRESHOLD) {
            
            // Get class scores and find the class with the highest score
            float * classes_scores = data + 5;
            cv::Mat scores(1, className.size(), CV_32FC1, classes_scores);
            cv::Point class_id;
            double max_class_score;
            minMaxLoc(scores, 0, &max_class_score, 0, &class_id);


            // If the class score is above the threshold, store the detection
            if (max_class_score > SCORE_THRESHOLD) {


                confidences.push_back(confidence);
                class_ids.push_back(class_id.x);


                // Calculate the bounding box coordinates
                float x = data[0];
                float y = data[1];
                float w = data[2];
                float h = data[3];
                int left = int((x - 0.5 * w) * x_factor);
                int top = int((y - 0.5 * h) * y_factor);
                int width = int(w * x_factor);
                int height = int(h * y_factor);
                boxes.push_back(cv::Rect(left, top, width, height));
            }
        }


        data += dimensions; // advance to the next detection row
    }


    // Apply Non-Maximum Suppression
    std::vector<int> nms_result;
    cv::dnn::NMSBoxes(boxes, confidences, SCORE_THRESHOLD, NMS_THRESHOLD, nms_result);


    // Draw the NMS filtered boxes and push results to output
    for (size_t i = 0; i < nms_result.size(); i++) {
        int idx = nms_result[i];


        // Only push the filtered detections
        Detection result;
        result.class_id = class_ids[idx];
        result.confidence = confidences[idx];
        result.box = boxes[idx];
        output.push_back(result);


        // Draw the final NMS bounding box and label
        cv::rectangle(image, boxes[idx], cv::Scalar(0, 255, 0), 3);
        std::string label = className[class_ids[idx]];
        cv::putText(image, label, cv::Point(boxes[idx].x, boxes[idx].y - 5), cv::FONT_HERSHEY_SIMPLEX, 2, cv::Scalar(255, 255, 255), 2);
    }
}
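Note that the parsing loop above assumes YOLOv5's output layout: 25200 rows of 85 values (4 box coordinates, 1 objectness score, 80 class scores). YOLOv8 and YOLOv11 models export a [1, 84, 8400] tensor instead: there is no objectness column, and detections run along the last axis, so the output must be transposed before row-wise processing. Below is a minimal sketch of how the loop changes (detect_v11 is a hypothetical name; it reuses the constants, Detection struct, and format_yolov5 helper from above, and omits drawing for brevity):

void detect_v11(cv::Mat &image, cv::dnn::Net &net, std::vector<Detection> &output, const std::vector<std::string> &className) {
    cv::Mat blob;
    auto input_image = format_yolov5(image); // same square padding as above
    cv::dnn::blobFromImage(input_image, blob, 1./255., cv::Size(INPUT_WIDTH, INPUT_HEIGHT), cv::Scalar(), true, false);
    net.setInput(blob);
    std::vector<cv::Mat> outputs;
    net.forward(outputs, net.getUnconnectedOutLayersNames());

    // Output shape is [1, 84, 8400]: flatten to 2-D and transpose so each
    // row becomes one candidate detection of the form [x, y, w, h, 80 scores]
    cv::Mat out = outputs[0].reshape(1, outputs[0].size[1]); // 84 x 8400
    cv::transpose(out, out);                                 // 8400 x 84

    float x_factor = input_image.cols / INPUT_WIDTH;
    float y_factor = input_image.rows / INPUT_HEIGHT;

    std::vector<int> class_ids;
    std::vector<float> confidences;
    std::vector<cv::Rect> boxes;

    for (int i = 0; i < out.rows; ++i) {
        float *data = out.ptr<float>(i);
        // No objectness column: the class scores start right after the box
        cv::Mat scores(1, (int)className.size(), CV_32FC1, data + 4);
        cv::Point class_id;
        double max_class_score;
        cv::minMaxLoc(scores, 0, &max_class_score, 0, &class_id);
        if (max_class_score > SCORE_THRESHOLD) {
            confidences.push_back((float)max_class_score);
            class_ids.push_back(class_id.x);
            float x = data[0], y = data[1], w = data[2], h = data[3];
            boxes.push_back(cv::Rect(int((x - 0.5f * w) * x_factor),
                                     int((y - 0.5f * h) * y_factor),
                                     int(w * x_factor), int(h * y_factor)));
        }
    }

    std::vector<int> nms_result;
    cv::dnn::NMSBoxes(boxes, confidences, SCORE_THRESHOLD, NMS_THRESHOLD, nms_result);
    for (int idx : nms_result)
        output.push_back({class_ids[idx], confidences[idx], boxes[idx]});
}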




int main(int argc, char **argv)
{   
    // Load class list 
    std::vector<std::string> class_list = load_class_list();


    // Load input image
    std::string image_path = cv::samples::findFile("C:/Users/sirom/Desktop/cpp-ultralytics/test2.jpg");
    cv::Mat frame = cv::imread(image_path, cv::IMREAD_COLOR);


    // Load the model
    cv::dnn::Net net;
    load_net(net);


    // Vector to store detection results
    std::vector<Detection> output;
    // Run detection on the input image
    detect(frame, net, output, class_list);


    // Save the result to a file
    cv::imwrite("C:/Users/sirom/Desktop/cpp-ultralytics/result.jpg", frame);


    while (true)
    {       
        // display image
        cv::imshow("image",frame);


        // Exit the loop if any key is pressed
        if (cv::waitKey(1) != -1)
        {
            std::cout << "finished by user\n";
            break;
        }
    }


    std::cout << "Processing complete. Image saved /n";
    return 0;
}

5. Compile and run the code

mkdir build
cd build 
cmake ..
cmake --build .
.\Debug\object-detection.exe
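One practical note: with the Visual Studio generator, the last two commands above build and run a Debug binary, which is noticeably slower for DNN inference. Assuming that generator, the Release equivalent is:

cmake --build . --config Release
.\Release\object-detection.exe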