
MetaMath: A New Math-Reasoning Language Model that Trains Backward Thinking into Large Models

To improve both the forward and backward reasoning abilities of large language models, researchers from Cambridge, HKUST, and Huawei built the MetaMathQA dataset on top of two widely used math datasets (GSM8K and MATH): a broad-coverage, high-quality mathematical reasoning dataset consisting of 395K forward and backward math question-answer pairs generated by large language models.

Complex mathematical reasoning is an important benchmark for evaluating the reasoning ability of large language models. The math reasoning datasets in common use today are limited in sample size and question diversity, which leaves large language models prone to the "reversal curse": a model trained on "A is B" fails to generalize to "B is A" [1]. In mathematical reasoning, this manifests as follows: given a math problem, a language model is good at solving it by forward reasoning but lacks the ability to solve it by backward reasoning. Yet backward reasoning is very common in math problems, as the two examples below show (a worked sketch of the algebra each backward question requires follows the examples).

1. A classic problem: chickens and rabbits in one cage

  • Forward reasoning: A cage holds 23 chickens and 12 rabbits. How many heads and how many feet are in the cage?
  • Backward reasoning: Chickens and rabbits share a cage. Counting from the top there are 35 heads; counting from the bottom there are 94 feet. How many chickens and how many rabbits are in the cage?

2. A GSM8K problem

  • Forward reasoning: James buys 5 packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. How much did he pay?
  • Backward reasoning: James buys x packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. How much did he pay? If we know the answer to the above question is 110, what is the value of unknown variable x?
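
To make the contrast concrete, here is the algebra that each backward question above boils down to; the equations are spelled out here for illustration and are not taken from the paper.

```latex
% Backward chicken-and-rabbit problem: x chickens, y rabbits.
\begin{align*}
x + y &= 35 && \text{(heads)}\\
2x + 4y &= 94 && \text{(feet)}\\
2(35 - y) + 4y &= 94 \;\Rightarrow\; 2y = 24 \;\Rightarrow\; y = 12,\; x = 23.
\end{align*}

% Backward GSM8K example: recover x from the known total of 110.
\[
4x \cdot 5.50 = 110 \;\Rightarrow\; 22x = 110 \;\Rightarrow\; x = 5
\]
```

Forward reasoning only has to evaluate a known expression left to right; backward reasoning has to set up and solve an equation for the masked quantity, which is exactly where current models fall short.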

To improve both forward and backward reasoning, the researchers from Cambridge, HKUST, and Huawei propose the MetaMathQA dataset, built from two widely used math datasets (GSM8K and MATH): a broad-coverage, high-quality math reasoning dataset of 395K forward and backward question-answer pairs generated by large language models. On MetaMathQA they fine-tune LLaMA-2 to obtain MetaMath, a large language model specialized in mathematical reasoning (both forward and backward), which reaches state-of-the-art results on math reasoning benchmarks. The MetaMathQA dataset and MetaMath models at several scales have been open-sourced for researchers to use (a loading sketch follows the links below).


  • Project page: https://meta-math.github.io/
  • Paper: https://arxiv.org/abs/2309.12284
  • Dataset: https://huggingface.co/datasets/meta-math/MetaMathQA
  • Models: https://huggingface.co/meta-math
  • Code: https://github.com/meta-math/MetaMath
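
As a minimal sketch of how to pull down the released resources, the snippet below uses the Hugging Face `datasets` and `transformers` libraries. The dataset ID follows directly from the link above; the exact model repository name, its prompt template, and the dataset fields are assumptions that should be checked against the pages linked above.

```python
# Minimal sketch: load the MetaMathQA dataset and a MetaMath checkpoint from the Hugging Face Hub.
# The dataset ID comes from the link above; the model repo name is an assumption -- check
# https://huggingface.co/meta-math for the exact identifiers and the recommended prompt template.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# MetaMathQA: ~395K forward/backward question-answer pairs.
metamath_qa = load_dataset("meta-math/MetaMathQA", split="train")
print(metamath_qa[0])  # inspect the fields (augmented question and its chain-of-thought answer)

# A MetaMath checkpoint fine-tuned from LLaMA-2 (repo name assumed).
model_name = "meta-math/MetaMath-7B-V1.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Feeding the raw question is a rough probe; the project README's instruction template
# should be preferred in practice.
question = ("James buys x packs of beef that are 4 pounds each. The price of beef is "
            "$5.50 per pound. How much did he pay? If we know the answer to the above "
            "question is 110, what is the value of unknown variable x?")
inputs = tokenizer(question, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```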

The paper also constructs a backward-reasoning dataset, GSM8K-Backward. Experiments show that current methods (e.g., SFT, RFT, WizardMath) perform far worse on backward-reasoning problems than on forward ones; by contrast, the MetaMath models achieve strong performance in both directions.


Method


MetaMathQA combines four data augmentation methods (a prompt-construction sketch follows the list):

1. Answer Augmentation: given a question, use a large language model to generate chains of thought that reach the correct answer, and add them as augmented data.

  • Question: James buys 5 packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. How much did he pay?
  • Answer: James buys 5 packs of beef that are 4 pounds each, so he buys a total of 5 * 4 = 20 pounds of beef. The price of beef is $5.50 per pound, so he pays 20 * $5.50 = $110. The answer is: 110.

2. Rephrasing Question (question rewriting augmentation): given the original (meta-)question, use a large language model to rewrite the question and generate a chain of thought that reaches the correct answer as augmented data.

  • Question: What is the total amount that James paid when he purchased 5 packs of beef, each weighing 4 pounds, at a price of $5.50 per pound?
  • Answer: Each pack of beef weighs 4 pounds, so 5 packs weigh 4 * 5 = 20 pounds in total. The price per pound of beef is $5.50, so the total cost for 20 pounds is 20 * $5.50 = $110. Therefore, James paid a total of $110. The answer is: 110.

3. FOBAR Question (backward question augmentation): given the original question, mask a number in its conditions as x, state the original answer, and ask the model to recover x, yielding a backward question; a correct chain-of-thought solution to this backward question is then generated as augmented data (appended backward condition: "If we know the answer to the above question is 110, what is the value of unknown variable x?").

  • Question: James buys x packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. How much did he pay? If we know the answer to the above question is 110, what is the value of unknown variable x?
  • Answer: James buys x packs of beef that are 4 pounds each, so he buys a total of 4x pounds of beef. The price of beef is $5.50 per pound, so the total cost of the beef is 5.50 * 4x = 22x. We are given that the total cost is $110, so we can write: 22x = 110. Dividing both sides by 22, we get: x = 5. The value of x is 5.

4. Self-Verification Question (backward question augmentation): building on FOBAR, use a large language model to rewrite the interrogative part of the backward question as a declarative statement for augmentation (rewriting example: "How much did he pay?" with the answer 110 becomes "He paid 110").

  • Question: James buys x packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. He paid 110. What is the value of unknown variable x?
  • Answer: To solve this problem, we need to determine the value of x, which represents the number of packs of beef that James bought. Each pack of beef weighs 4 pounds and costs $5.50 per pound. The total amount James paid is $110. We can set up the equation as follows: Number of packs of beef * Weight per pack * Price per pound = Total amount paid; x * 4 * $5.50 = $110; Let's simplify and solve for x: 4x * $5.50 = $110; 22x = $110; To isolate x, we divide both sides of the equation by 22: 22x / 22 = $110 / 22; x = $5; The value of x is 5.
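
Viewed together, the four augmentations are four ways of constructing prompts for a data-generating LLM around one seed question. The sketch below illustrates that construction on the running beef example; the prompt wording, the number-masking rule, and the `generate` stub are illustrative assumptions, not the paper's exact pipeline.

```python
# Illustrative sketch of the four MetaMathQA augmentations applied to one seed question.
# The prompt wording, the number-masking rule, and the `generate` stub are assumptions;
# they are not the paper's exact implementation.
import re
from typing import Callable

SEED_QUESTION = ("James buys 5 packs of beef that are 4 pounds each. "
                 "The price of beef is $5.50 per pound. How much did he pay?")
SEED_ANSWER = "110"

FOBAR_SUFFIX = ("If we know the answer to the above question is {answer}, "
                "what is the value of unknown variable x?")


def answer_augmentation(question: str, generate: Callable[[str], str]) -> str:
    """AnsAug: ask the LLM for a new chain-of-thought solution to the same question."""
    return generate(f"Solve the problem step by step.\nQuestion: {question}\nAnswer:")


def rephrase_question(question: str, generate: Callable[[str], str]) -> str:
    """Rephrasing: ask the LLM to restate the question with the same meaning."""
    return generate(f"Rewrite this question in different words, keeping the same meaning:\n{question}")


def fobar_question(question: str, answer: str) -> str:
    """FOBAR: mask the first number in the question with x and append the backward suffix."""
    masked = re.sub(r"\d+(\.\d+)?", "x", question, count=1)
    return f"{masked} {FOBAR_SUFFIX.format(answer=answer)}"


def self_verification_question(backward_question: str, generate: Callable[[str], str]) -> str:
    """SV: rewrite the interrogative part of the backward question as a statement,
    e.g. 'How much did he pay? ... the answer ... is 110' becomes 'He paid 110.'"""
    return generate("Rewrite the question below so that the known answer is stated "
                    f"declaratively and only x is asked for:\n{backward_question}")


if __name__ == "__main__":
    backward = fobar_question(SEED_QUESTION, SEED_ANSWER)
    print(backward)
    # -> "James buys x packs of beef that are 4 pounds each. The price of beef is
    #     $5.50 per pound. How much did he pay? If we know the answer to the above
    #     question is 110, what is the value of unknown variable x?"
```

Only the FOBAR question itself is built mechanically (masking plus a fixed suffix); the rephrased questions, the SV rewrites, and the chain-of-thought solutions for every variant all come from LLM calls.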

Experimental Results


Experiments on two common math reasoning benchmarks (GSM8K and MATH) show that, without relying on external tools (e.g., a code interpreter), MetaMath significantly outperforms existing open-source LLMs. The MetaMath-7B model reaches 66.5% accuracy on GSM8K and 19.8% on MATH, exceeding the previous state-of-the-art models of the same size by 11.6 and 9.1 percentage points, respectively. Notably, MetaMath-70B reaches 82.3% accuracy on GSM8K, surpassing GPT-3.5-Turbo.

According to the Superficial Alignment Hypothesis [2], a large language model's capabilities come from pretraining, and data from downstream tasks merely activates the latent abilities learned during pretraining. This raises two important questions: (i) what kind of data most effectively activates this latent knowledge, and (ii) why is one dataset better than another at such activation?

Why does MetaMathQA help? It raises the quality of the chain-of-thought data (Perplexity)


The researchers computed the perplexity of the LLaMA-2-7B model on answer-only data, on the GSM8K CoT data, and on each portion of the MetaMathQA data. The perplexity of the MetaMathQA portions is markedly lower than that of the other two datasets, highlighting that the data is inherently easier to learn and may therefore be better at eliciting the model's latent knowledge.
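
As a rough illustration of what such a measurement involves, here is a minimal sketch that scores one training sample's perplexity with an off-the-shelf causal LM via Hugging Face `transformers`; the checkpoint name and the toy sample are placeholders, and this is not the paper's evaluation code.

```python
# Minimal sketch: per-sample perplexity of a causal LM on a piece of training text.
# Model name and the toy text are placeholders; this is not the paper's evaluation script.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; gated on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
model.eval()

def perplexity(text: str) -> float:
    """Average next-token cross-entropy over the sample, exponentiated."""
    enc = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])  # HF shifts labels internally
    return math.exp(out.loss.item())

sample = ("Question: James buys 5 packs of beef that are 4 pounds each. The price of beef "
          "is $5.50 per pound. How much did he pay?\nAnswer: 5 * 4 = 20 pounds, "
          "20 * $5.50 = $110. The answer is: 110.")
print(perplexity(sample))  # lower values indicate easier-to-learn text
```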

Why does MetaMathQA help? It increases the diversity of the chain-of-thought data (Diversity)

Comparing the diversity gain of the data against the accuracy gain of the model, the researchers find that adding the same amount of augmented data from Rephrasing, FOBAR, and SV brings a clear diversity gain and a significant boost in accuracy. In contrast, relying on answer augmentation alone quickly saturates accuracy; once saturated, adding more AnsAug data yields only limited improvement.
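
To give the notion of "diversity gain" a concrete shape, here is one simple proxy one might compute: the fraction of question n-grams in an augmentation set that never appear in the base set. This metric and the helper names are illustrative assumptions; the article does not specify the paper's own diversity measure.

```python
# Illustrative diversity proxy: share of new distinct question n-grams contributed by an
# augmentation set. A stand-in for intuition only, not the paper's metric.
from typing import Iterable, Set, Tuple


def ngrams(text: str, n: int = 3) -> Set[Tuple[str, ...]]:
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}


def diversity_gain(base_questions: Iterable[str], new_questions: Iterable[str]) -> float:
    """Fraction of n-grams in the new questions that never appear in the base set."""
    base: Set[Tuple[str, ...]] = set()
    for q in base_questions:
        base |= ngrams(q)
    new: Set[Tuple[str, ...]] = set()
    for q in new_questions:
        new |= ngrams(q)
    fresh = new - base
    return len(fresh) / max(len(new), 1)


base = ["James buys 5 packs of beef that are 4 pounds each. How much did he pay?"]
rephrased = ["What is the total amount James paid for 5 packs of beef, each weighing 4 pounds?"]
print(diversity_gain(base, rephrased))  # higher value -> more novel phrasing in the augmented set
```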

Editor: 張燕妮 | Source: 機器之心