
RAG Advanced Optimization: Mastering Post-Retrieval Processing

Published 2024-09-25 10:16

Using the methods from the previous post, RAG高級優(yōu)化:一文看盡query的轉(zhuǎn)換之路, we retrieved a set of relevant chunks. In this post we cover optimizations applied to those chunks before they are fed to the LLM. They help the model better understand the contextual knowledge and produce the best possible answer:

  • Long-text Reorder
  • Contextual compression
  • Refine
  • Emotion Prompt

Long-text Reorder

The experiments in the paper Lost in the Middle: How Language Models Use Long Contexts show that LLMs attend well to documents at the beginning and end of the context but poorly to those in the middle. The retrieved documents can therefore be reordered by their relevance to the query, placing the most relevant ones at the two ends.


The core logic can be seen in LangChain's implementation:

from typing import List

from langchain_core.documents import Document


def _litm_reordering(documents: List[Document]) -> List[Document]:
    """Lost in the middle reorder: the less relevant documents will be at the
    middle of the list and more relevant elements at beginning / end.
    See: https://arxiv.org/abs/2307.03172"""

    documents.reverse()
    reordered_result = []
    for i, value in enumerate(documents):
        if i % 2 == 1:
            reordered_result.append(value)
        else:
            reordered_result.insert(0, value)
    return reordered_result
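A quick sanity check of the reordering, using plain strings in place of `Document` objects (the data is made up; the interleaving logic is the same as above):

```python
from typing import List


def litm_reordering(documents: List[str]) -> List[str]:
    """Same "lost in the middle" reorder, applied to plain strings:
    the most relevant items end up at the two ends of the list,
    the least relevant in the middle."""
    documents = list(reversed(documents))
    reordered_result: List[str] = []
    for i, value in enumerate(documents):
        if i % 2 == 1:
            reordered_result.append(value)
        else:
            reordered_result.insert(0, value)
    return reordered_result


# Documents sorted by descending relevance: doc1 is the most relevant.
docs = ["doc1", "doc2", "doc3", "doc4", "doc5"]
print(litm_reordering(docs))  # ['doc1', 'doc3', 'doc5', 'doc4', 'doc2']
```

Note how `doc1` and `doc2` (the most relevant) land at the front and back, while `doc5` (the least relevant) ends up in the middle, where the model pays the least attention.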

Contextual compression

In essence, an LLM judges the relevance of each retrieved document to the user's query, and only the k most relevant ones are returned.

from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor
from langchain_openai import OpenAI

# `retriever` is assumed to be an existing base retriever,
# e.g. one created via `vectorstore.as_retriever()`.
llm = OpenAI(temperature=0)
compressor = LLMChainExtractor.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=retriever
)
 
compressed_docs = compression_retriever.get_relevant_documents(
    "What did the president say about Ketanji Jackson Brown"
)
print(compressed_docs)
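The underlying idea can be sketched without any LLM call: score each retrieved chunk against the query and keep only the top-k. Below, naive word overlap stands in for the LLM's relevance judgment; the scoring function and documents are illustrative placeholders, not LangChain's actual implementation:

```python
from typing import List


def compress_by_relevance(query: str, docs: List[str], k: int = 2) -> List[str]:
    """Keep only the k chunks most relevant to the query.

    Word overlap is a crude stand-in for the LLM relevance judgment
    that LLMChainExtractor performs.
    """
    query_terms = set(query.lower().split())

    def score(doc: str) -> int:
        # Number of query terms that also appear in the chunk.
        return len(query_terms & set(doc.lower().split()))

    return sorted(docs, key=score, reverse=True)[:k]


docs = [
    "the president praised ketanji brown jackson",
    "the weather was sunny in washington",
    "ketanji brown jackson was nominated to the supreme court",
]
query = "what did the president say about ketanji brown jackson"
print(compress_by_relevance(query, docs, k=2))
```

The two chunks that mention the query's subject survive; the off-topic weather chunk is dropped, which is exactly the kind of context pruning the LLM-based compressor performs.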

Refine

The answer generated by the LLM is further rewritten against additional context to improve its accuracy. This is mainly prompt engineering; a reference prompt:

The original query is as follows: {query_str}
We have provided an existing answer: {existing_answer}
We have the opportunity to refine the existing answer (only if needed) with some more context below.
------------
{context_msg}
------------
Given the new context, refine the original answer to better answer the query. If the context isn't useful, return the original answer.
Refined Answer:
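The refine loop built around this prompt can be sketched as follows: answer from the first chunk, then iterate over the remaining chunks, asking the model to refine the running answer each time. `call_llm` is a hypothetical placeholder for your model call:

```python
from typing import Callable, List

# The refine prompt from above.
REFINE_TMPL = """The original query is as follows: {query_str}
We have provided an existing answer: {existing_answer}
We have the opportunity to refine the existing answer (only if needed) with some more context below.
------------
{context_msg}
------------
Given the new context, refine the original answer to better answer the query. If the context isn't useful, return the original answer.
Refined Answer: """


def refine_answer(
    query: str, chunks: List[str], call_llm: Callable[[str], str]
) -> str:
    """Answer from the first chunk, then refine with each later chunk."""
    answer = call_llm(f"Context: {chunks[0]}\nQuery: {query}\nAnswer: ")
    for chunk in chunks[1:]:
        # Each pass sees the previous answer plus one new chunk,
        # so the context stays small even with many retrieved chunks.
        answer = call_llm(
            REFINE_TMPL.format(
                query_str=query, existing_answer=answer, context_msg=chunk
            )
        )
    return answer
```

One LLM call per chunk keeps each prompt short, at the cost of latency proportional to the number of chunks.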

Emotion Prompt

This is also a prompt-engineering technique; the idea comes from a Microsoft paper:

Large Language Models Understand and Can Be Enhanced by Emotional Stimuli

In the paper, the Microsoft researchers show that adding emotion-related cues to the prompt helps LLMs produce higher-quality answers.

Reference prompts:

emotion_stimuli_dict = {
    "ep01": "Write your answer and give me a confidence score between 0-1 for your answer. ",
    "ep02": "This is very important to my career. ",
    "ep03": "You'd better be sure.",
    # add more from the paper here!!
}
 
# NOTE: ep06 is the combination of ep01, ep02, ep03
emotion_stimuli_dict["ep06"] = (
    emotion_stimuli_dict["ep01"]
    + emotion_stimuli_dict["ep02"]
    + emotion_stimuli_dict["ep03"]
)
 
 
from llama_index.prompts import PromptTemplate
 
 
qa_tmpl_str = """\
Context information is below.
---------------------
{context_str}
---------------------
Given the context information and not prior knowledge, \
answer the query.
{emotion_str}
Query: {query_str}
Answer: \
"""
qa_tmpl = PromptTemplate(qa_tmpl_str)
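To see what the model actually receives, the template can be filled in with one of the stimuli. The version below uses plain `str.format` so it runs without llama_index installed; the context and query strings are made up:

```python
# Same QA template as above, with an emotion stimulus slot.
qa_tmpl_str = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the query.\n"
    "{emotion_str}\n"
    "Query: {query_str}\n"
    "Answer: "
)

# "ep02" stimulus from the dict above; context/query are hypothetical.
prompt = qa_tmpl_str.format(
    context_str="LangChain is a framework for building LLM applications.",
    emotion_str="This is very important to my career. ",
    query_str="What is LangChain?",
)
print(prompt)
```

The emotional cue is simply appended between the instructions and the query; no other part of the RAG pipeline changes.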


This article is reproduced from the WeChat public account 哎呀AIYA.

Original link: https://mp.weixin.qq.com/s/-orAp5c6LGfnse83Lgy7Jw
