
Finally Making Sense of Loss Functions in Machine Learning!

This article walks through seven loss functions commonly used in machine learning: Mean Squared Error (MSE), Mean Absolute Error (MAE), Huber Loss, Cross-Entropy Loss, Hinge Loss, Intersection over Union (IoU), and Kullback-Leibler (KL) divergence. Each comes with a manual TensorFlow implementation alongside the built-in equivalent.

1. Mean Squared Error (MSE)

MSE is one of the most commonly used loss functions for regression tasks.

It measures the average squared error between the model's predictions and the actual values.

Formula:

$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$

Characteristics:

  • Because errors are squared, MSE penalizes large errors more heavily.
  • It is sensitive to outliers.
import tensorflow as tf
import matplotlib.pyplot as plt

class MeanSquaredError_Loss:
    """
    This class provides two methods to calculate Mean Squared Error Loss.
    """
    def __init__(self):
        pass

    @staticmethod
    def mean_squared_error_manual(y_true, y_pred):
        
        squared_difference = tf.square(y_true - y_pred)
        loss = tf.reduce_mean(squared_difference)
        return loss

    @staticmethod
    def mean_squared_error_tf(y_true, y_pred):
        
        mse = tf.keras.losses.MeanSquaredError()
        loss = mse(y_true, y_pred)
        return loss

if __name__ == "__main__":
    def mean_squared_error_test(N=10, C=10):
    
        # Generate random data
        y_true = tf.random.uniform(shape=(N, ), minval=-C, maxval=C, dtype=tf.float32)
        y_pred = tf.random.uniform(shape=(N, ), minval=-C, maxval=C, dtype=tf.float32)


        # Test the MeanSquaredError_Loss class
        mse_manual = MeanSquaredError_Loss.mean_squared_error_manual(y_true, y_pred)
        print(f"mean_squared_error_manual: {mse_manual}")

        mse_tf = MeanSquaredError_Loss.mean_squared_error_tf(y_true, y_pred)
        print(f"mean_squared_error_tensorflow: {mse_tf}")
        print()

        # Plot the points on a graph
        plt.figure(figsize=(8, 6))
        plt.scatter(y_true.numpy(), y_pred.numpy(), color='blue', label='Predicted vs Actual')
        plt.plot([-C, C], [-C, C], 'r--', label='Ideal Line')  # Diagonal line representing ideal predictions

        plt.title(f"Predictions vs Actuals\nMean Squared Error: {mse_manual.numpy():.4f}")
        plt.xlabel('Actual Values')
        plt.ylabel('Predicted Values')
        plt.legend()
        plt.grid(True)
        plt.show()

    mean_squared_error_test()
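
In practice, a loss is rarely called by hand during training; it is passed to model.compile. Below is a minimal sketch, assuming a hypothetical single-output regression model (the layer sizes and the commented-out training data are placeholders):

import tensorflow as tf

# Hypothetical single-output regression model; layer sizes are placeholders
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1)
])

# The loss can be passed by name ('mse') or as a loss object
model.compile(optimizer='adam', loss=tf.keras.losses.MeanSquaredError())
# model.fit(x_train, y_train, epochs=10)  # x_train / y_train are placeholders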


2. Mean Absolute Error (MAE)

MAE is another loss function for regression tasks; it computes the mean of the absolute differences between predictions and actual values.

Formula:

$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|$

Characteristics:

  • MAE is less sensitive to outliers than MSE because the errors are not squared.
  • It is more intuitive, directly reflecting the average magnitude of the error.
import tensorflow as tf
import matplotlib.pyplot as plt

class MeanAbsoluteError_Loss:
    """
    This class provides two methods to calculate Mean Absolute Error Loss.
    """
    def __init__(self):
        pass

    @staticmethod
    def mean_absolute_error_manual(y_true, y_pred):
        absolute_difference = tf.math.abs(y_true - y_pred)
        loss = tf.reduce_mean(absolute_difference)
        return loss

    @staticmethod
    def mean_absolute_error_tf(y_true, y_pred):
        mae = tf.keras.losses.MeanAbsoluteError()
        loss = mae(y_true, y_pred)
        return loss

if __name__ == "__main__":
    def mean_absolute_error_test(N=10, C=10):
        # Generate random data
        y_true = tf.random.uniform(shape=(N, ), minval=-C, maxval=C, dtype=tf.float32)
        y_pred = tf.random.uniform(shape=(N, ), minval=-C, maxval=C, dtype=tf.float32)


        # Test the MeanAbsoluteError_Loss class
        mae_manual = MeanAbsoluteError_Loss.mean_absolute_error_manual(y_true, y_pred)
        print(f"mean_absolute_error_manual: {mae_manual}")

        mae_tf = MeanAbsoluteError_Loss.mean_absolute_error_tf(y_true, y_pred)
        print(f"mean_absolute_error_tensorflow: {mae_tf}")
        print()

        # Plot the points on a graph
        plt.figure(figsize=(8, 6))
        plt.scatter(y_true.numpy(), y_pred.numpy(), color='blue', label='Predicted vs Actual')
        plt.plot([-C, C], [-C, C], 'r--', label='Ideal Line')  # Diagonal line representing ideal predictions

        plt.title(f"Predictions vs Actuals\nMean Absolute Error: {mae_manual.numpy():.4f}")
        plt.xlabel('Actual Values')
        plt.ylabel('Predicted Values')
        plt.legend()
        plt.grid(True)
        plt.show()

    mean_absolute_error_test()
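
To see the outlier behaviour described above, compare both losses on the same data with and without one corrupted point. A minimal sketch, assuming the MeanSquaredError_Loss and MeanAbsoluteError_Loss classes defined above are in scope (the values are illustrative):

import tensorflow as tf

# Illustrative values: the last prediction in y_outlier is deliberately far off
y_true = tf.constant([1., 2., 3., 4.], dtype=tf.float32)
y_clean = tf.constant([1.1, 2.1, 2.9, 4.2], dtype=tf.float32)
y_outlier = tf.constant([1.1, 2.1, 2.9, 14.2], dtype=tf.float32)

for name, y_pred in [("clean", y_clean), ("outlier", y_outlier)]:
    mse = MeanSquaredError_Loss.mean_squared_error_manual(y_true, y_pred)
    mae = MeanAbsoluteError_Loss.mean_absolute_error_manual(y_true, y_pred)
    print(f"{name}: MSE={mse.numpy():.4f}, MAE={mae.numpy():.4f}")

A single bad point inflates MSE quadratically while MAE grows only linearly, which is exactly the sensitivity difference listed above.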


3. Huber Loss

Huber Loss sits between MSE and MAE: for small errors it behaves like MSE, and for large errors it behaves like MAE.

This makes it more stable in the presence of outliers.

Formula:

$L_\delta(y, \hat{y}) = \begin{cases} \frac{1}{2}(y - \hat{y})^2 & \text{if } |y - \hat{y}| \le \delta \\ \delta\left(|y - \hat{y}| - \frac{1}{2}\delta\right) & \text{otherwise} \end{cases}$

Characteristics:

  • More robust to outliers, while remaining sensitive to small errors.
import tensorflow as tf
import matplotlib.pyplot as plt

class Huber_Loss:
    """
    This class provides two methods to calculate Huber Loss.
    """
    def __init__(self, delta=1.0):
        # delta sets the error threshold where the loss switches from quadratic to linear
        self.delta = delta

    def huber_loss_manual(self, y_true, y_pred):
        # Quadratic penalty for errors <= delta, linear penalty beyond it
        error = tf.math.abs(y_true - y_pred)
        is_small_error = tf.math.less_equal(error, self.delta)
        small_error_loss = tf.math.square(error) / 2
        large_error_loss = self.delta * (error - (0.5 * self.delta))
        loss = tf.where(is_small_error, small_error_loss, large_error_loss)
        loss = tf.reduce_mean(loss)
        return loss

    def huber_loss_tf(self, y_true, y_pred):
        
        huber_loss = tf.keras.losses.Huber(delta = self.delta)(y_true, y_pred)
        return huber_loss

if __name__ == "__main__":
    def huber_loss_test(N=10, C=10):
        # Generate random data
        y_true = tf.random.uniform(shape=(N, ), minval=-C, maxval=C, dtype=tf.float32)
        y_pred = tf.random.uniform(shape=(N, ), minval=-C, maxval=C, dtype=tf.float32)


        # Test the Huber_Loss class
        huber = Huber_Loss() 
        hl_manual = huber.huber_loss_manual(y_true, y_pred)
        print(f"huber_loss_manual: {hl_manual}")

        hl_tf = huber.huber_loss_tf(y_true, y_pred)
        print(f"huber_loss_tensorflow: {hl_tf}")
        print()

        # Plot the points on a graph
        plt.figure(figsize=(8, 6))
        plt.scatter(y_true.numpy(), y_pred.numpy(), color='blue', label='Predicted vs Actual')
        plt.plot([-C, C], [-C, C], 'r--', label='Ideal Line')  # Diagonal line representing ideal predictions

        plt.title(f"Predictions vs Actuals\nHuber Loss: {hl_manual.numpy():.4f}")
        plt.xlabel('Actual Values')
        plt.ylabel('Predicted Values')
        plt.legend()
        plt.grid(True)
        plt.show()

    huber_loss_test()
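
A quick way to see the effect of delta is to evaluate the same residuals under several delta values. A minimal sketch, assuming the Huber_Loss class above is in scope (the data points are illustrative):

import tensorflow as tf

# Same residuals evaluated under several delta values; the last point is an outlier
y_true = tf.constant([0., 0., 0.], dtype=tf.float32)
y_pred = tf.constant([0.5, 1.0, 5.0], dtype=tf.float32)

for delta in [0.5, 1.0, 2.0]:
    loss = Huber_Loss(delta=delta).huber_loss_manual(y_true, y_pred)
    print(f"delta={delta}: huber_loss={loss.numpy():.4f}")

Smaller delta values treat more of the errors linearly (MAE-like), while larger values keep the quadratic (MSE-like) regime wider.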


4. Cross-Entropy Loss

Cross-Entropy Loss is widely used in classification tasks, especially binary and multi-class problems.

It measures the difference between the probability distribution output by the model and the actual class distribution.

Formula:

$L = -\sum_{i} y_i \log(\hat{y}_i)$

For binary classification:

$L = -\left[\, y \log(\hat{y}) + (1 - y)\log(1 - \hat{y}) \,\right]$

Characteristics:

  • The loss is low when the predicted probabilities match the actual labels, and high otherwise.
  • Particularly effective for optimizing classification problems.
import tensorflow as tf
import matplotlib.pyplot as plt

class Cross_Entropy_Loss:
    """
    This class provides two methods to calculate Cross-Entropy Loss.
    """
    def __init__(self):
        pass

    def cross_entropy_loss_manual(self, y_true, y_pred):
        # Normalize predictions to a probability distribution, then apply -sum(p * log(q))
        y_pred /= tf.reduce_sum(y_pred)
        epsilon = tf.keras.backend.epsilon()
        y_pred_new = tf.clip_by_value(y_pred, epsilon, 1.)  # avoid log(0)
        loss = -tf.reduce_sum(y_true * tf.math.log(y_pred_new))
        return loss

    def cross_entropy_loss_tf(self, y_true, y_pred):
        loss = tf.keras.losses.categorical_crossentropy(y_true, y_pred)
        return loss

if __name__ == "__main__":
    def cross_entropy_loss_test(N=10, C=1):
        # Generate random non-negative data and normalize to probability distributions
        y_true = tf.random.uniform(shape=(N, ), minval=0, maxval=C, dtype=tf.float32)
        y_pred = tf.random.uniform(shape=(N, ), minval=0, maxval=C, dtype=tf.float32)
        y_true /= tf.reduce_sum(y_true)
        y_pred /= tf.reduce_sum(y_pred)


        # Test the Cross_Entropy_Loss class
        cross_entropy = Cross_Entropy_Loss() 
        ce_manual = cross_entropy.cross_entropy_loss_manual(y_true, y_pred)
        print(f"cross_entropy_loss_manual: {ce_manual}")

        ce_tf = cross_entropy.cross_entropy_loss_tf(y_true, y_pred)
        print(f"cross_entropy_loss_tensorflow: {ce_tf}")
        print()

        # Plot the points on a graph
        plt.figure(figsize=(8, 6))
        plt.scatter(y_true.numpy(), y_pred.numpy(), color='blue', label='Predicted vs Actual')
        plt.plot([0, C], [0, C], 'r--', label='Ideal Line')  # Diagonal line representing ideal predictions

        plt.title(f"Predictions vs Actuals\nCross-Entropy Loss: {ce_manual.numpy():.4f}")
        plt.xlabel('Actual Values')
        plt.ylabel('Predicted Values')
        plt.legend()
        plt.grid(True)
        plt.show()

    cross_entropy_loss_test()
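
For the binary case, the same idea reduces to the two-term formula above. A minimal sketch (the labels and probabilities here are illustrative), comparing a manual computation against tf.keras.losses.BinaryCrossentropy:

import tensorflow as tf

# Illustrative binary cross-entropy: labels are 0/1, predictions are probabilities
y_true = tf.constant([1., 0., 1., 1.], dtype=tf.float32)
y_pred = tf.constant([0.9, 0.2, 0.7, 0.6], dtype=tf.float32)

epsilon = tf.keras.backend.epsilon()
p = tf.clip_by_value(y_pred, epsilon, 1. - epsilon)  # avoid log(0)
bce_manual = -tf.reduce_mean(y_true * tf.math.log(p) + (1. - y_true) * tf.math.log(1. - p))
bce_tf = tf.keras.losses.BinaryCrossentropy()(y_true, y_pred)

print(bce_manual.numpy(), bce_tf.numpy())  # the two values should agree closely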

5. Hinge Loss

Hinge Loss is typically used in support vector machines (SVMs).

It encourages the model to score the correct class higher than the incorrect classes by at least a margin (usually 1).

Formula:

$L = \max\left(0,\; 1 + \max_{j \neq y} \hat{s}_j - \hat{s}_y\right)$

where $\hat{s}_y$ is the score of the correct class; for binary labels $y \in \{-1, +1\}$ this reduces to $\max(0,\; 1 - y\,\hat{y})$.

Characteristics:

  • Forces the model to create a "margin" around the correct class, making classification more robust.
  • Well suited to optimizing linear classifiers.
import tensorflow as tf
import matplotlib.pyplot as plt

class Hinge_Loss:
    """
    This class provides two methods to calculate Hinge Loss.
    """
    def __init__(self):
        pass

    def hinge_loss_manual(self, y_true, y_pred):
        # Categorical hinge: true-class score vs the best wrong-class score
        pos = tf.reduce_sum(y_true * y_pred, axis=-1)
        neg = tf.reduce_max((1 - y_true) * y_pred, axis=-1)
        loss = tf.maximum(0., neg - pos + 1)
        return loss

    def hinge_loss_tf(self, y_true, y_pred):
        
        loss = tf.keras.losses.CategoricalHinge()(y_true, y_pred)
        return loss

if __name__ == "__main__":
    def hinge_loss_test(N=10, C=10):

        # Generate a random one-hot target and random real-valued class scores
        true_class = tf.random.uniform(shape=(), minval=0, maxval=N, dtype=tf.int32)
        y_true = tf.one_hot(true_class, depth=N, dtype=tf.float32)
        y_pred = tf.random.uniform(shape=(N, ), minval=-C, maxval=C, dtype=tf.float32)


        # Test the Hinge_Loss class
        hinge = Hinge_Loss()
        hl_manual = hinge.hinge_loss_manual(y_true, y_pred)
        print(f"hinge_loss_manual: {hl_manual}")

        hl_tf = hinge.hinge_loss_tf(y_true, y_pred)
        print(f"hinge_loss_tensorflow: {hl_tf}")
        print()

        # Plot the points on a graph
        plt.figure(figsize=(8, 6))
        plt.scatter(y_true.numpy(), y_pred.numpy(), color='blue', label='Predicted vs Actual')
        plt.plot([-C, C], [-C, C], 'r--', label='Ideal Line')  # Diagonal line representing ideal predictions

        plt.title(f"Predictions vs Actuals\nHinge Loss: {hl_manual.numpy()}")
        plt.xlabel('Actual Values')
        plt.ylabel('Predicted Values')
        plt.legend()
        plt.grid(True)
        plt.show()

    hinge_loss_test()
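
The classic SVM formulation uses labels in {-1, +1}, as in the reduced formula above. A minimal sketch of that binary variant (the scores here are illustrative), checked against tf.keras.losses.Hinge:

import tensorflow as tf

# Illustrative binary hinge loss with labels in {-1, +1} and raw decision scores
y_true = tf.constant([1., -1., 1., -1.], dtype=tf.float32)
y_pred = tf.constant([0.8, -0.5, -0.2, 0.3], dtype=tf.float32)

hinge_manual = tf.reduce_mean(tf.maximum(0., 1. - y_true * y_pred))
hinge_tf = tf.keras.losses.Hinge()(y_true, y_pred)

print(hinge_manual.numpy(), hinge_tf.numpy())  # the two values should agree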

6. Intersection Over Union (IoU)

IoU is commonly used in object-detection tasks to measure the overlap between a predicted bounding box and the ground-truth box.

Formula:

$\mathrm{IoU} = \dfrac{|A \cap B|}{|A \cup B|}$

where $A$ and $B$ are the predicted and ground-truth regions.

Characteristics:

  • Ranges from 0 to 1, where 1 means perfect overlap and 0 means no overlap.
  • Used to evaluate the accuracy of bounding-box predictions.
import tensorflow as tf
import matplotlib.pyplot as plt

class IOU:
    def __init__(self):
        pass

    def IOU_manual(self, y_true, y_pred):
        # Count pixels where both masks are 1 (intersection) and where either is 1 (union)
        intersection = tf.reduce_sum(tf.cast(tf.logical_and(tf.equal(y_true, 1), tf.equal(y_pred, 1)), dtype=tf.float32))
        union = tf.reduce_sum(tf.cast(tf.logical_or(tf.equal(y_true, 1), tf.equal(y_pred, 1)), dtype=tf.float32))
        iou = intersection / union
        return iou

    def IOU_tf(self, y_true, y_pred):
        iou_metric = tf.keras.metrics.IoU(num_classes=2, target_class_ids=[1])
        iou_metric.update_state(y_true, y_pred)
        iou = iou_metric.result()
        return iou

if __name__ == "__main__":
    def IOU_test():
        y_true = tf.constant([[0, 1, 1, 0], 
                              [0, 1, 1, 0], 
                              [0, 0, 0, 0], 
                              [0, 0, 0, 0]], dtype=tf.float32)  # Example binary mask (ground truth)

        y_pred = tf.constant([[0, 1, 1, 0], 
                              [1, 1, 0, 0], 
                              [0, 0, 0, 0], 
                              [0, 0, 0, 0]], dtype=tf.float32)  # Example binary mask (prediction)

        iou = IOU()

        iou_manual = iou.IOU_manual(y_true, y_pred)
        print(f"IOU_manual: {iou_manual}")

        iou_tf = iou.IOU_tf(y_true, y_pred)
        print(f"IOU_tensorflow: {iou_tf}")

    IOU_test()
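
The mask-based version above generalizes directly to bounding boxes, where intersection and union become rectangle areas. A minimal sketch, assuming boxes are given as (x1, y1, x2, y2) corner coordinates (the boxes here are illustrative):

import tensorflow as tf

def box_iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2
    x1 = tf.maximum(box_a[0], box_b[0])
    y1 = tf.maximum(box_a[1], box_b[1])
    x2 = tf.minimum(box_a[2], box_b[2])
    y2 = tf.minimum(box_a[3], box_b[3])

    # Clamp to zero when the boxes do not overlap
    intersection = tf.maximum(0., x2 - x1) * tf.maximum(0., y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - intersection
    return intersection / union

print(box_iou(tf.constant([0., 0., 2., 2.]), tf.constant([1., 1., 3., 3.])).numpy())  # 1/7 ≈ 0.1429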

7. Kullback-Leibler (KL) Divergence

KL divergence is an asymmetric measure of the difference between two probability distributions, commonly used in generative models and variational autoencoders.

Formula:

$D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{i} P(i) \log \dfrac{P(i)}{Q(i)}$

Characteristics:

  • The KL divergence is 0 when P and Q are identical.
  • Well suited to evaluating the difference between the model's predicted probability distribution and a target distribution.
import tensorflow as tf
import matplotlib.pyplot as plt

class Kullback_Leibler:
    """
    This class provides two methods to calculate Kullback-Leibler Loss.
    """
    def __init__(self):
        pass

    def kullback_leibler_manual(self, y_true, y_pred):
        # Clip both distributions away from zero, then apply sum(p * log(p / q))
        epsilon = tf.keras.backend.epsilon()
        y_true = tf.clip_by_value(y_true, epsilon, 1)
        y_pred = tf.clip_by_value(y_pred, epsilon, 1)

        loss = tf.reduce_sum(y_true * tf.math.log(y_true / y_pred), axis=-1)
        return loss

    def kullback_leibler_tf(self, y_true, y_pred):
        loss = tf.reduce_sum(tf.keras.losses.KLDivergence()(y_true, y_pred))
        return loss

if __name__ == "__main__":
    def kullback_leibler_test(N=5, C=1):
        # Generate random data
        y_true = tf.random.uniform(shape=(N, ), minval=0, maxval=C, dtype=tf.float32)
        y_pred = tf.random.uniform(shape=(N, ), minval=0, maxval=C, dtype=tf.float32)

        # Normalize both vectors into probability distributions
        y_true /= tf.reduce_sum(y_true)
        y_pred /= tf.reduce_sum(y_pred)

        # Test the kullback_leibler class
        kl = Kullback_Leibler() 
        kl_manual = kl.kullback_leibler_manual(y_true, y_pred)
        print(f"kullback_leibler_manual: {kl_manual}")

        kl_tf = kl.kullback_leibler_tf(y_true, y_pred)
        print(f"kullback_leibler_tensorflow: {kl_tf}")
        print()

        # Plot the points on a graph
        plt.figure(figsize=(8, 6))
        plt.scatter(y_true.numpy(), y_pred.numpy(), color='blue', label='Predicted vs Actual')
        plt.plot([0, C], [0, C], 'r--', label='Ideal Line')  # Diagonal line representing ideal predictions

        plt.title(f"Predictions vs Actuals\nKullback-Leibler Loss: {kl_manual.numpy()}")
        plt.xlabel('Actual Values')
        plt.ylabel('Predicted Values')
        plt.legend()
        plt.grid(True)
        plt.show()

    kullback_leibler_test()
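
In a variational autoencoder, the KL term is usually computed in closed form between the encoder's Gaussian and a standard normal prior, using KL(N(mu, sigma^2) || N(0, 1)) = -0.5 * sum(1 + log sigma^2 - mu^2 - sigma^2). A minimal sketch of that term (mu and log_var stand in for hypothetical encoder outputs):

import tensorflow as tf

# Closed-form KL(N(mu, sigma^2) || N(0, 1)) used as the VAE regularizer
mu = tf.constant([0.5, -0.3], dtype=tf.float32)       # hypothetical encoder means
log_var = tf.constant([0.1, -0.2], dtype=tf.float32)  # hypothetical encoder log-variances

kl = -0.5 * tf.reduce_sum(1. + log_var - tf.square(mu) - tf.exp(log_var))
print(kl.numpy())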



Editor: 武曉燕 | Source: 小寒聊python