
Level Up | Optimizing YOLOv8 for Faster Inference

If you have a high-end GPU, TensorRT is the best choice. However, if you are working on a machine with an Intel CPU, OpenVINO is the way to go.

For a research project, I needed to reduce YOLOv8's inference time. In that study I used my own computer rather than Google Colab. My machine has an Intel i5 (12th gen) processor, and my GPU is an NVIDIA GeForce RTX 3050. These details matter because some of the methods below use the CPU while others use the GPU.

Using the Original Model

為了測(cè)試,我們使用了Ultralytics提供的YOLOv8n.pt模型,并使用bus.jpg圖像進(jìn)行評(píng)估。我們將分析獲得的時(shí)間值和結(jié)果。要了解模型的性能,還要知道它運(yùn)行在哪個(gè)設(shè)備上——無(wú)論是使用CUDA GPU還是CPU。


# cuda
import cv2
import matplotlib.pyplot as plt
from ultralytics import YOLO
import torch

yolov8model = YOLO("yolov8n.pt")
img = cv2.imread("bus.jpg")
results = yolov8model.predict(source=img, device='cuda')

# OpenCV loads images as BGR; convert to RGB for matplotlib
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# draw each detection: bounding box, class ID, and confidence
for result in results:
    boxes = result.boxes
    for box in boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        confidence = box.conf[0].item()
        class_id = int(box.cls[0].item())

        cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (255, 0, 0), 2)
        cv2.putText(img, f'ID: {class_id} Conf: {confidence:.2f}',
                    (int(x1), int(y1)-10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 2)

# confirm which device the weights actually live on
used_device = next(yolov8model.model.parameters()).device
print("Model is running on:", used_device)
plt.figure(figsize=(10, 10))
plt.imshow(img)
plt.axis('off')
plt.show()


# cpu
import cv2
import matplotlib.pyplot as plt
from ultralytics import YOLO
import torch

yolov8model = YOLO("yolov8n.pt")
img = cv2.imread("bus.jpg")
results = yolov8model.predict(source=img, device='cpu')


img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

for result in results:
    boxes = result.boxes
    for box in boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        confidence = box.conf[0].item()
        class_id = int(box.cls[0].item())

        cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (255, 0, 0), 2)
        cv2.putText(img, f'ID: {class_id} Conf: {confidence:.2f}', 
                    (int(x1), int(y1)-10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 2)

plt.figure(figsize=(10, 10))
plt.imshow(img)
plt.axis('off')
plt.show()

used_device = next(yolov8model.model.parameters()).device
print("Model is running on:", used_device)

現(xiàn)在,我們有一個(gè)起點(diǎn)。具體來說,對(duì)于bus.jpg圖像,模型在CPU上的推理時(shí)間是199.7毫秒,在GPU上是47.2毫秒。

Pruning

The first method we use is pruning the model. Pruning modifies the model to create a more efficient version. Some methods alter the model itself, while others change the input or act directly on inference. In pruning, the least important or least influential connections in the model are removed. This yields a smaller, faster model, but it can hurt accuracy.


import torch
import torch.nn.utils.prune as prune
from ultralytics import YOLO

def prune_model(model, amount=0.3):
    # zero out the `amount` fraction of Conv2d weights with the smallest L1 norm
    for module in model.modules():
        if isinstance(module, torch.nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")  # make the pruning permanent
    return model

model = YOLO("yolov8n.pt")
# results = model.val(data="coco.yaml")
# print(f"mAP50-95: {results.box.map}")

torch_model = model.model
print(torch_model)

print("Pruning model...")
pruned_torch_model = prune_model(torch_model, amount=0.1)
print("Model pruned.")

model.model = pruned_torch_model

print("Saving pruned model...")
model.save("yolov8n_trained_pruned.pt")
print("Pruned model saved.")

Normally you would validate against a dataset for comparison; here, however, we use the generic yolov8n.pt model, which was trained on a dataset of roughly 18 GB, and the coco.yaml file is not used in this example.
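If you do want to quantify how much accuracy the pruning costs without downloading the full COCO set, one option is to validate on the small coco128 subset that Ultralytics downloads automatically. A sketch along the lines of the commented-out val call above, using the same metric names:

from ultralytics import YOLO

pruned = YOLO("yolov8n_trained_pruned.pt")
metrics = pruned.val(data="coco128.yaml")  # 128-image COCO subset, auto-downloaded
print(f"mAP50-95: {metrics.box.map:.4f}")
print(f"mAP50:    {metrics.box.map50:.4f}")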

I will share the GPU results, and we will update the comparison chart, since the timings can change as different parameters are applied. I could not fully pin down why the timings vary, but it is likely due to memory or other factors.


# cuda pruned
import cv2
import matplotlib.pyplot as plt
from ultralytics import YOLO
import torch

yolov8model = YOLO("yolov8n_trained_pruned.pt")
img = cv2.imread("bus.jpg")
results = yolov8model.predict(source=img, device='cuda')


img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

for result in results:
    boxes = result.boxes
    for box in boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        confidence = box.conf[0].item()
        class_id = int(box.cls[0].item())

        cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (255, 0, 0), 2)
        cv2.putText(img, f'ID: {class_id} Conf: {confidence:.2f}', 
                    (int(x1), int(y1)-10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 2)
        
used_device = next(yolov8model.model.parameters()).device
print("Model is running on:", used_device)
plt.figure(figsize=(10, 10))
plt.imshow(img)
plt.axis('off')
plt.show()

As you can see, the results are a bit confusing; the IDs and detections are inaccurate.

Still, comparing inference times, the pruned model performs slightly better than the original on both CPU and GPU. The trade-off with pruning is that it degrades the results, even as it reduces the model's inference time.
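Keep in mind that unstructured L1 pruning only zeroes individual weights; the tensors keep their shapes, so dense GPU kernels do roughly the same amount of work and any speedup stays modest. As a sanity check, the sketch below (assuming the pruned file saved earlier) counts how many Conv2d weights are actually zero, which should be close to the amount=0.1 we passed:

import torch
from ultralytics import YOLO

pruned_model = YOLO("yolov8n_trained_pruned.pt").model

total, zeros = 0, 0
for module in pruned_model.modules():
    if isinstance(module, torch.nn.Conv2d):
        w = module.weight
        total += w.numel()
        zeros += (w == 0).sum().item()

print(f"Conv2d weights: {total}, zeroed: {zeros} ({100 * zeros / total:.1f}%)")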

Changing the Batch Size

When setting the batch size for training or prediction, the number of frames the model processes simultaneously is critical. I wrote a loop to find the optimal batch size, because increasing the batch size can sometimes have a negative effect. However, I noticed that the optimal batch size changed on every attempt. I tried averaging the results, but that approach proved insufficient. To illustrate my findings, I will share a chart from my initial trials, with the optimal batch size highlighted by a red dot.

import cv2
import matplotlib.pyplot as plt
from ultralytics import YOLO
import torch
import time

yolov8model = YOLO("yolov8n.pt")
img = cv2.imread("bus.jpg")
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

inference_times = []

# time a prediction at each batch size from 1 to 40
for batch_size in range(1, 41):
    start_time = time.time()
    results = yolov8model.predict(source=img_rgb, device='cuda', batch=batch_size)
    end_time = time.time()
    inference_time = end_time - start_time

    inference_times.append((batch_size, inference_time))
    print(f"Batch Size: {batch_size}, Inference Time: {inference_time:.4f} seconds")

plt.figure(figsize=(10, 5))
batch_sizes = [bt[0] for bt in inference_times]
times = [bt[1] for bt in inference_times]

# locate the fastest batch size
min_time_index = times.index(min(times))
min_batch_size = batch_sizes[min_time_index]
min_inference_time = times[min_time_index]

plt.plot(batch_sizes, times, marker='o')
plt.plot(min_batch_size, min_inference_time, 'ro', markersize=8)  # red dot = optimum
plt.title('Inference Time vs. Batch Size')
plt.xlabel('Batch Size')
plt.ylabel('Inference Time (seconds)')
plt.xticks(batch_sizes)
plt.grid()
plt.show()

# rerun prediction with the best batch size and draw the detections
best_results = yolov8model.predict(source=img_rgb, device='cuda', batch=min_batch_size)

for result in best_results:
    boxes = result.boxes
    for box in boxes:
        x1, y1, x2, y2 = box.xyxy[0].cpu().numpy()
        conf = box.conf[0].cpu().numpy()
        cls = int(box.cls[0].cpu().numpy())

        cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 2)
        cv2.putText(img, f'Class: {cls}, Conf: {conf:.2f}', (int(x1), int(y1) - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)

plt.figure(figsize=(10, 10))
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.title(f'Results with Batch Size {min_batch_size}')
plt.axis('off')
plt.show()
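One likely reason the "optimal" batch size moves between attempts is that the first predictions include one-off costs such as CUDA initialization, and single-run timings are noisy. The sketch below, reusing yolov8model and img_rgb from above, adds a warm-up phase and averages several runs per batch size; this should reduce, though not eliminate, the variance:

import time
import statistics

N_WARMUP, N_RUNS = 3, 10

# warm-up: absorb CUDA init and first-run overhead before timing
for _ in range(N_WARMUP):
    yolov8model.predict(source=img_rgb, device='cuda', verbose=False)

for batch_size in range(1, 41):
    runs = []
    for _ in range(N_RUNS):
        start = time.time()
        yolov8model.predict(source=img_rgb, device='cuda',
                            batch=batch_size, verbose=False)
        runs.append(time.time() - start)
    print(f"Batch Size: {batch_size}, "
          f"Mean: {statistics.mean(runs):.4f}s, Stdev: {statistics.stdev(runs):.4f}s")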

Hardware Acceleration Methods

Another option for improving YOLOv8's performance is hardware acceleration. Several tools exist for this purpose, such as TensorRT and OpenVINO.

1. TensorRT

TensorRT optimizes inference efficiency on NVIDIA hardware. For this part, I used Google Colab with a T4 GPU to compare the standard model against the TensorRT-optimized one. Let's start with how to convert our model to TensorRT format. First, upload the model file to Colab, then run the following code:

from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# export to TensorRT; creates 'yolov8n.engine' (requires an NVIDIA GPU with TensorRT)
model.export(format="engine")

We then test the model with bus.jpg: the TensorRT-optimized model's inference time is 6.6 ms, compared to 6.9 ms for the standard model. Based on these results, the TensorRT model performs slightly better than the standard one on the more advanced T4 hardware.


import cv2
import matplotlib.pyplot as plt
from ultralytics import YOLO
import torch

yolov8model = YOLO('yolov8n.engine')  

img = cv2.imread("bus.jpg")

results = yolov8model.predict(source=img, device='cuda')

img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

for result in results:
    boxes = result.boxes
    for box in boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        confidence = box.conf[0].item()
        class_id = int(box.cls[0].item())

        cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (255, 0, 0), 2)
        cv2.putText(img, f'ID: {class_id} Conf: {confidence:.2f}', 
                    (int(x1), int(y1)-10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 2)

used_device = yolov8model.device
print("Model is running on:", used_device)

plt.figure(figsize=(10, 10))
plt.imshow(img)
plt.axis('off')
plt.show()

2. OpenVINO

OpenVINO is a toolkit designed primarily to optimize model performance, especially on Intel hardware. It can significantly improve CPU performance, often delivering up to a 3x speedup in typical use. Let's start by converting our model to OpenVINO format.

from ultralytics import YOLO

# Load a YOLOv8n PyTorch model
model = YOLO("yolov8n.pt")

# Export the model
model.export(format="openvino")  # creates 'yolov8n_openvino_model/'

# Load the exported OpenVINO model
ov_model = YOLO("yolov8n_openvino_model/")

import cv2
import matplotlib.pyplot as plt
from ultralytics import YOLO


yolov8model = YOLO('yolov8n_openvino_model/', task="detect")  


img = cv2.imread("bus.jpg")


results = yolov8model.predict(source=img)

img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

for result in results:
    boxes = result.boxes
    for box in boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        confidence = box.conf[0].item()
        class_id = int(box.cls[0].item())

        cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (255, 0, 0), 2)
        cv2.putText(img, f'ID: {class_id} Conf: {confidence:.2f}', 
                    (int(x1), int(y1)-10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 2)

plt.figure(figsize=(10, 10))
plt.imshow(img)
plt.axis('off')
plt.show()

As you can see, the OpenVINO model's inference time on the CPU drops slightly. Below are the comparison results for the different methods I tried.

In short, if you have a high-end GPU, TensorRT is the best choice; if you are working on a machine with an Intel CPU, OpenVINO is the way to go. Different runs produce different inference times, so each method was tested multiple times to observe the differences.
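For repeated comparisons like these, Ultralytics also ships a benchmark helper that exports the model to each supported format and times it on a small dataset. A sketch, assuming the relevant runtimes (e.g. TensorRT, OpenVINO) are installed; rows for missing runtimes simply fail:

from ultralytics.utils.benchmarks import benchmark

# benchmarks yolov8n across export formats; device=0 selects the first GPU
benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, device=0)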

責(zé)任編輯:趙寧寧 來源: 小白玩轉(zhuǎn)Python
相關(guān)推薦

2024-07-25 08:25:35

2024-01-29 09:29:02

計(jì)算機(jī)視覺模型

2025-02-24 09:50:21

2024-05-15 09:16:05

2024-07-22 13:49:38

YOLOv8目標(biāo)檢測(cè)開發(fā)

2023-12-06 08:30:02

Spring項(xiàng)目

2011-08-29 17:16:29

Ubuntu

2024-11-18 17:31:27

2019-03-15 15:00:49

Webpack構(gòu)建速度前端

2009-09-04 11:34:31

NetBeans優(yōu)化

2011-09-11 02:58:12

Windows 8build微軟

2023-10-14 15:22:22

2021-08-02 10:50:57

性能微服務(wù)數(shù)據(jù)

2025-02-07 14:52:11

2019-03-18 15:35:45

WebCSS前端

2024-07-11 08:25:34

2024-10-25 08:30:57

計(jì)算機(jī)視覺神經(jīng)網(wǎng)絡(luò)YOLOv8模型

2024-11-01 07:30:00

2009-07-01 15:02:56

JSP程序JSP操作

2025-01-24 07:37:19

計(jì)算機(jī)視覺熱力圖YOLOv8
點(diǎn)贊
收藏

51CTO技術(shù)棧公眾號(hào)