Real-Time Fire and Smoke Detection with a Custom-Trained YOLOv8 Model and Flask

This article presents a custom-trained YOLOv8 model built specifically for fire and smoke detection. The dataset used for training is available on Kaggle, and a training script is also available if the model ever needs to be retrained.

In recent years, advances in artificial intelligence and machine learning have transformed many industries, including public safety. These technologies have made remarkable progress in fire and smoke detection, which is critical for early-warning systems and efficient emergency response. One of the most effective ways to achieve this is to combine the powerful object-detection capabilities of YOLOv8 with the flexibility of Flask, a lightweight Python web framework. Together they form a robust real-time fire and smoke detection solution that runs on a video stream.

Dataset:

https://www.kaggle.com/code/deepaknr/yolov8-fire-and-smoke-detection?source=post_page-----79058b024b09--------------------------------

Training script:
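
As a rough sketch (not the original training script), fine-tuning a YOLOv8 model on the fire and smoke dataset with the ultralytics API typically looks like the snippet below; the base weights, dataset YAML path, and hyperparameters are assumptions rather than the settings actually used for the published model.

from ultralytics import YOLO

# Start from pretrained YOLOv8 nano weights and fine-tune on the fire/smoke dataset.
model = YOLO('yolov8n.pt')
model.train(
    data='fire_smoke.yaml',  # hypothetical dataset YAML listing train/val images and the fire/smoke classes
    epochs=100,              # assumed value
    imgsz=640,               # assumed value
    batch=16,                # assumed value
)
# After training, the best checkpoint (e.g. runs/detect/train/weights/best.pt)
# is what YOLOV8_MODEL_PATH should point to in the Flask app below.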

Practical Example: Fire and Smoke Detection with YOLOv8 and Flask

Imagine a practical scenario in which you need to monitor an industrial site that carries a fire risk. By setting up a live video feed from cameras and applying the fire-detection capabilities of the YOLOv8 model, you can spot fire or smoke early and prevent a potential disaster. The following Python code shows how to integrate YOLOv8 with Flask for fire and smoke detection.

import os
import cv2
import numpy as np
from flask import Flask, render_template, Response, request
from werkzeug.utils import secure_filename
from ultralytics import YOLO

app = Flask(__name__)

# Path to the custom-trained YOLOv8 weights (e.g. the best.pt produced by training)
YOLOV8_MODEL_PATH = 'path-to-yolov8-model'

# Video formats accepted by the upload endpoint
ALLOWED_EXTENSIONS = {'mp4', 'avi', 'mov'}
video_path = None


def allowed_file(filename):
    # Accept only files whose extension is in ALLOWED_EXTENSIONS
    return '.' in filename and filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS


# Load the custom-trained fire/smoke model once at startup
model = YOLO(YOLOV8_MODEL_PATH)


@app.route('/')
def index():
    # Landing page containing the upload form and the video stream
    return render_template('index.html')


@app.route('/upload', methods=['POST'])
def upload():
    global video_path
    if 'file' not in request.files:
        return 'No file part', 400

    file = request.files['file']

    if file and allowed_file(file.filename):
        # Save the upload under uploads/ and remember its path for streaming
        filename = secure_filename(file.filename)
        filepath = os.path.join('uploads', filename)
        file.save(filepath)
        video_path = filepath
        return render_template('index.html')

    return 'Invalid file type', 400


def generate_frames():
    global video_path
    if video_path is None:
        return None
    cap = cv2.VideoCapture(video_path)

    alpha = 0.4  # opacity of the white highlight drawn over each detection
    while True:
        success, frame = cap.read()
        if not success:
            break

        # Run inference on the frame; keep detections with confidence >= 0.35
        result = model(frame, verbose=False, conf=0.35)[0]

        bboxes = np.array(result.boxes.xyxy.cpu(), dtype="int")
        classes = np.array(result.boxes.cls.cpu(), dtype="int")
        confidence = np.array(result.boxes.conf.cpu(), dtype="float")

        for cls, bbox, conf in zip(classes, bboxes, confidence):
            (x1, y1, x2, y2) = bbox
            object_name = model.names[cls]
            # Orange box for fire, grey box for the other class (smoke); colours are BGR
            if object_name == 'fire':
                color = (19, 127, 240)
            else:
                color = (145, 137, 132)

            # Blend a semi-transparent white layer over the detected region
            cropped_image = frame[int(y1):int(y2), int(x1):int(x2)]
            white_layer = np.ones(cropped_image.shape, dtype=np.uint8) * 255
            cropped_image = cv2.addWeighted(cropped_image, 1 - alpha, white_layer, alpha, 0)
            frame[int(y1):int(y2), int(x1):int(x2)] = cropped_image

            # Filled label background, bounding box, and class/confidence text
            cv2.rectangle(frame, (x1, y1 - 30), (x1 + 200, y1), color, -1)
            cv2.rectangle(frame, (x1, y1), (x2, y2), color, 2)
            cv2.putText(frame, f"{object_name.capitalize()}: {conf * 100:.2f}%", (x1, y1 - 5), cv2.FONT_HERSHEY_DUPLEX,
                        0.8, (255, 255, 255), 1)

        # Encode the annotated frame as JPEG and emit it as one part of the MJPEG stream
        ret, buffer = cv2.imencode('.jpg', frame)
        frame = buffer.tobytes()

        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

    cap.release()


@app.route('/video_feed')
def video_feed():
    # Stream the annotated frames as an MJPEG response
    return Response(generate_frames(), mimetype='multipart/x-mixed-replace; boundary=frame')


if __name__ == '__main__':
    os.makedirs('uploads', exist_ok=True)

    app.run(host='0.0.0.0', port=5000, debug=True)

Key functions explained:

  • def generate_frames(): This function pulls frames from the uploaded video and runs the YOLOv8 model on each one to detect the objects of interest, specifically fire and smoke. Bounding boxes with the corresponding class labels (fire, smoke) are rendered onto the frame, and a semi-transparent white overlay is applied to each detected region to improve visibility. The processed frames are encoded as JPEG and emitted continuously to form the video stream.
  • def video_feed(): This route streams the processed video frames as an HTTP response via the generate_frames function, sending a sequence of JPEG images to the web client with the MIME type multipart/x-mixed-replace. On the browser side the stream is consumed by a simple upload page; a minimal sketch of such a page follows this list.
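
The app above expects a templates/index.html containing an upload form and an image tag pointing at /video_feed; the template itself is not part of the original listing. The following is only a minimal sketch of what it needs to contain, shown inline with Flask's render_template_string so the example stays self-contained Python. The 'file' field name and the /upload and /video_feed URLs match the routes above; everything else (layout, sizes) is an assumption.

# Hedged sketch of the page templates/index.html would provide, written as a
# standalone Flask example using render_template_string.
from flask import Flask, render_template_string

INDEX_PAGE = """
<!doctype html>
<title>Fire and Smoke Detection</title>
<h1>Upload a video</h1>
<form action="/upload" method="post" enctype="multipart/form-data">
  <!-- the field must be named 'file' to match the /upload route -->
  <input type="file" name="file">
  <input type="submit" value="Upload">
</form>
<!-- the browser renders the MJPEG stream served by /video_feed as a live image -->
<img src="/video_feed" width="720">
"""

app = Flask(__name__)


@app.route('/')
def index():
    return render_template_string(INDEX_PAGE)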

Application startup:

if __name__ == '__main__':
    os.makedirs('uploads', exist_ok=True)
    app.run(host='0.0.0.0', port=5000, debug=True)

When the script is run directly, it makes sure the uploads directory exists and then starts the Flask application on port 5000, listening on all interfaces (0.0.0.0).
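
To sanity-check the running service without the browser UI, the upload endpoint can also be exercised from Python. This is a hypothetical client-side snippet using the requests library; the file name is a placeholder, and the only details taken from the app above are the 'file' form field and the /upload and /video_feed routes.

# Hypothetical client-side check of the /upload endpoint.
import requests

# Any local .mp4/.avi/.mov clip works; the file name here is just a placeholder.
with open('sample_fire_clip.mp4', 'rb') as f:
    resp = requests.post('http://localhost:5000/upload', files={'file': f})

print(resp.status_code)  # 200 once the upload is accepted
# The annotated stream is then available at http://localhost:5000/video_feed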

