Detecting License Plates in Video Files with YOLO and EasyOCR
This article shows how to detect license plates in a video file using YOLO (You Only Look Once) and EasyOCR (optical character recognition) in Python. The approach relies on deep learning to detect and read plates in near real time.
Prerequisites
Before we begin, make sure the following Python packages are installed:
pip install opencv-python ultralytics easyocr Pillow numpy
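If you want to confirm the installation before running the full pipeline, a quick import check like the following is enough. This snippet is my own addition, not part of the original walkthrough; the version attributes used are the standard ones exposed by OpenCV and NumPy.

# Quick, optional sanity check that all required packages import correctly
import cv2
import numpy as np
import easyocr
from PIL import Image
from ultralytics import YOLO

print("OpenCV version:", cv2.__version__)
print("NumPy version:", np.__version__)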
Implementation Steps
Step 1: Initialize the Libraries
We start by importing the required libraries: OpenCV for video handling, YOLO (from the ultralytics package) for object detection, and EasyOCR for reading the text on detected plates.
import cv2
from ultralytics import YOLO
import easyocr
from PIL import Image
import numpy as np
# Initialize EasyOCR reader
reader = easyocr.Reader(['en'], gpu=False)
# Load your YOLO model (replace with your model's path)
model = YOLO('best_float32.tflite', task='detect')
# Open the video file (replace with your video file path)
video_path = 'sample4.mp4'
cap = cv2.VideoCapture(video_path)
# Create a VideoWriter object (optional, if you want to save the output)
output_path = 'output_video.mp4'
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
out = cv2.VideoWriter(output_path, fourcc, 30.0, (640, 480)) # Adjust frame size if necessary
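The VideoWriter above hard-codes 30 fps and a 640x480 frame size. As a minimal sketch (my own addition, not in the original code), you can instead derive the writer settings from the opened capture and from the frame-skipping factor used in Step 2, so the saved video plays back at roughly the original speed. It assumes the same 640x480 resize and the frame_skip value of 3 used below.

# Sketch: derive VideoWriter settings from the source video
frame_skip = 3                                 # same value as in Step 2
src_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0    # fall back to 30 if the source FPS is unknown
out_fps = max(src_fps / frame_skip, 1.0)       # only every frame_skip-th frame is written
out_size = (640, 480)                          # must match the cv2.resize() size used below

out = cv2.VideoWriter(output_path, fourcc, out_fps, out_size)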
Step 2: Process the Video Frames
We read frames from the video one at a time, run the detector on each frame we keep, and then apply OCR to read the text on any detected plate. To improve performance, we process only every third frame and skip the rest.
# Frame skipping factor (adjust as needed for performance)
frame_skip = 3  # Process only every 3rd frame
frame_count = 0

while cap.isOpened():
    ret, frame = cap.read()  # Read a frame from the video
    if not ret:
        break  # Exit loop if there are no frames left

    # Skip frames
    if frame_count % frame_skip != 0:
        frame_count += 1
        continue  # Skip processing this frame

    # Resize the frame (optional, adjust size as needed)
    frame = cv2.resize(frame, (640, 480))  # Resize to 640x480

    # Make predictions on the current frame
    results = model.predict(source=frame)

    # Iterate over results and draw predictions
    for result in results:
        boxes = result.boxes  # Get the boxes predicted by the model
        for box in boxes:
            class_id = int(box.cls)  # Get the class ID
            confidence = box.conf.item()  # Get confidence score
            coordinates = box.xyxy[0]  # Get box coordinates as a tensor

            # Extract and convert box coordinates to integers
            x1, y1, x2, y2 = map(int, coordinates.tolist())

            # Draw the box on the frame
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

            # Try to apply OCR on the detected region
            try:
                # Ensure coordinates are within frame bounds
                r0 = max(0, x1)
                r1 = max(0, y1)
                r2 = min(frame.shape[1], x2)
                r3 = min(frame.shape[0], y2)

                # Crop license plate region
                plate_region = frame[r1:r3, r0:r2]

                # Convert to format compatible with EasyOCR
                plate_image = Image.fromarray(cv2.cvtColor(plate_region, cv2.COLOR_BGR2RGB))
                plate_array = np.array(plate_image)

                # Use EasyOCR to read text from the plate
                plate_number = reader.readtext(plate_array)
                if plate_number:  # Only annotate if OCR returned at least one result
                    concat_number = ' '.join([number[1] for number in plate_number])
                    number_conf = np.mean([number[2] for number in plate_number])

                    # Draw the detected text on the frame
                    cv2.putText(
                        img=frame,
                        text=f"Plate: {concat_number} ({number_conf:.2f})",
                        org=(r0, r1 - 10),
                        fontFace=cv2.FONT_HERSHEY_SIMPLEX,
                        fontScale=0.7,
                        color=(0, 0, 255),
                        thickness=2
                    )
            except Exception as e:
                print(f"OCR Error: {e}")

    # Show the frame with detections
    cv2.imshow('Detections', frame)

    # Write the frame to the output video (optional)
    out.write(frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break  # Exit loop if 'q' is pressed

    frame_count += 1  # Increment frame count
# Release resources
cap.release()
out.release() # Release the VideoWriter object if used
cv2.destroyAllWindows()
Code explanation:
- Initialize EasyOCR: set up the EasyOCR reader to recognize English characters.
- Load the YOLO model: the model is loaded from a file path; replace it with the path to your own trained model.
- Read the video: open the video file with OpenCV and, if you want to save the annotated output, create a VideoWriter.
- Resize and process frames: read the frames one by one, resize them, and run the model to predict license-plate locations.
- Draw the detections: draw the bounding box of each detected plate on the frame and crop the plate region for OCR.
- Run OCR: EasyOCR reads the text in the cropped plate image, and the recognized text with its confidence is drawn on the frame (the structure of EasyOCR's output is sketched after this list).
- Output the video: the processed frames are displayed in a window and can optionally be saved to an output video file.
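To make the OCR step more concrete, here is a minimal standalone sketch (my own illustration; the file name plate_crop.jpg is hypothetical) of how EasyOCR's readtext output is structured: each entry is a (bounding box, text, confidence) tuple, which is why the loop above takes number[1] for the text and number[2] for the confidence.

import easyocr

reader = easyocr.Reader(['en'], gpu=False)

# 'plate_crop.jpg' stands in for a cropped plate image
results = reader.readtext('plate_crop.jpg')

for bbox, text, conf in results:
    # bbox: the four corner points of the detected text region
    # text: the recognized string; conf: a confidence score between 0 and 1
    print(f"{text} (confidence {conf:.2f})")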
Conclusion
This walkthrough shows how to combine YOLO and EasyOCR to detect and read license plates in a video file. Following these steps, you can build a similar system for your own use case; adjust the parameters and tune the model as your data and hardware require. A couple of such adjustments are sketched below.
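As one example of this kind of tuning (a sketch under assumptions, not part of the original code), you can filter out weak detections with a confidence threshold when calling the model, and enable GPU inference for EasyOCR if a CUDA-capable PyTorch install is available; the threshold value here is illustrative.

# EasyOCR on GPU (requires CUDA-enabled PyTorch)
reader = easyocr.Reader(['en'], gpu=True)

# Drop low-confidence detections and silence per-frame logging
results = model.predict(source=frame, conf=0.5, verbose=False)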