PydanticAI: A New Python-Based Agent Framework for Building Production-Grade LLM-Powered Applications

Published on 2024-12-16 13:16

01. Overview

With AI technology advancing rapidly, more and more applications rely on large language models (LLMs) to provide intelligent features. Putting an LLM into real production, however, is far from easy. Developers typically face challenges such as:

  • Inconsistent model responses, with no guarantee that output is always accurate.
  • Weak robustness, making it hard to keep the system stable under high concurrency.
  • Poor type safety, leading to data structures that do not match expectations.

Against this backdrop, building an LLM application that gives users reliable, accurate, and context-appropriate output has become the developer's top priority. Traditional approaches fall short here, especially when high-quality, structured responses are required, and solutions are hard to scale quickly.

Today, let's look at a new framework built specifically to address these pain points: PydanticAI.

02. What Is PydanticAI?

PydanticAI is a Python-based agent framework developed by the well-known Pydantic team. It was designed to help developers build production-grade LLM applications. PydanticAI seamlessly combines Pydantic's strong type validation with a high degree of flexibility and compatibility across LLM models.

This means developers can switch freely between different LLMs while still enjoying the reliability and safety that Pydantic brings. This "model agnosticism" greatly improves development efficiency; in a fast-changing AI ecosystem, developers can flexibly choose the best model for their business needs.

03. Core Features of PydanticAI

1. Type-Safe Response Validation

One of PydanticAI's most distinctive features is strong type validation of LLM output. With Pydantic, developers can ensure that the data structures a model returns match expectations exactly. This is especially important in production, where inconsistent responses can trigger system errors and hurt the user experience.

Example: a developer wants the LLM to return a user-info object (username, email, and age). PydanticAI automatically validates that these fields have the expected types and formats. Even when the model occasionally returns wrong or missing data, the framework catches it promptly and flags it, keeping the system reliable.
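
A minimal sketch of that scenario, using the same Agent / result_type pattern as the full example later in this article; the UserInfo model, field constraints, and prompt are illustrative, not part of PydanticAI itself:

from pydantic import BaseModel, Field
from pydantic_ai import Agent

# Illustrative result model: the agent's output must validate against it.
class UserInfo(BaseModel):
    username: str = Field(description='Display name of the user')
    email: str = Field(description='Contact email address')
    age: int = Field(description='Age in years', ge=0, le=150)

user_agent = Agent(
    'openai:gpt-4o',
    result_type=UserInfo,
    system_prompt='Extract the user profile from the message.',
)

# If the model returns data that does not fit UserInfo (wrong type, missing field),
# validation fails and the agent asks the model to try again.
result = user_agent.run_sync('I am Jane Doe, jane@example.com, 29 years old.')
print(result.data)  # e.g. UserInfo(username='Jane Doe', email='jane@example.com', age=29)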

2. Streaming Response Support

PydanticAI supports generating and validating streamed responses. This is especially valuable when handling high-concurrency requests or large volumes of data. A real-time chat system or a video-captioning tool, for example, can validate data as it arrives instead of waiting for the full response, improving overall performance.
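
A rough sketch of streamed text output, based on the run_stream API described in the PydanticAI documentation; exact method names and behavior may differ between versions:

import asyncio
from pydantic_ai import Agent

stream_agent = Agent('openai:gpt-4o', system_prompt='Answer briefly.')

async def stream_answer() -> None:
    # run_stream opens a streamed run; stream_text() yields the response text
    # as it arrives, so it can be displayed (and validated) before the run finishes.
    async with stream_agent.run_stream('Why do streamed responses feel faster?') as result:
        async for text in result.stream_text():
            print(text)

asyncio.run(stream_answer())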

3. Logfire Integration: Debugging and Monitoring

PydanticAI integrates with Logfire to provide debugging and monitoring. Developers can use Logfire to trace system logs, diagnose issues, and resolve failures quickly. This level of observability matters for production applications, where any problem needs to be located and fixed fast.
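
A minimal sketch of enabling Logfire: logfire.configure() is the standard entry point, and how much PydanticAI detail shows up in traces depends on the installed versions and integration settings:

import logfire
from pydantic_ai import Agent

# Configure Logfire once at startup; with the PydanticAI integration in place,
# agent runs, model requests, and tool calls can be inspected in the Logfire UI.
logfire.configure()

agent = Agent('openai:gpt-4o', system_prompt='Be concise.')
result = agent.run_sync('Say hello.')
print(result.data)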

4. Model Agnosticism

PydanticAI does not depend on any particular LLM. Whether it is OpenAI's GPT series, Meta's LLaMA, or another open-source model, developers can plug it in easily. This flexibility offers more options across business scenarios and avoids lock-in to a single technology stack.
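
Because the model is just an identifier passed to the Agent (or chosen at run time), switching providers is a one-line change. A sketch reusing the two model names that appear in the example code later in this article:

from pydantic_ai import Agent

SYSTEM_PROMPT = 'Be concise, reply with one sentence.'

# Same agent logic, different backends: only the model identifier changes.
openai_agent = Agent('openai:gpt-4o', system_prompt=SYSTEM_PROMPT)
gemini_agent = Agent('gemini-1.5-flash', system_prompt=SYSTEM_PROMPT)

for agent in (openai_agent, gemini_agent):
    result = agent.run_sync('What does "model agnostic" mean for an agent framework?')
    print(result.data)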

04. What PydanticAI Brings to Developers

1. Improved Reliability

PydanticAI's type validation and structured responses help developers dramatically reduce runtime errors. From a small chatbot to a complex enterprise application, PydanticAI keeps system output consistent and reliable.

2. Higher Development Efficiency

Thanks to its clean, easy-to-use interface and built-in features, developers can focus on core business logic instead of spending time on low-level concerns such as data validation and error handling. According to early user feedback, PydanticAI noticeably shortens development cycles, letting teams bring products to market faster.

3. Faster Iteration

PydanticAI provides evaluation-driven development tooling, so developers can quickly tune and test their LLM setup and confirm it performs well before going live. The built-in debugging and monitoring features further support continuous optimization and efficient operations.
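
One concrete form this takes is PydanticAI's testing support: the project documents a TestModel that stands in for a real LLM so agent logic can be exercised offline. A rough sketch assuming the TestModel / Agent.override API from the PydanticAI docs; names may change between releases:

from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel

agent = Agent('openai:gpt-4o', system_prompt='Be concise.')

def test_agent_smoke():
    # Swap the real model for TestModel so the test is fast, offline, and deterministic.
    with agent.override(model=TestModel()):
        result = agent.run_sync('Hello')
        assert result.data is not None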

4. Lower Operations Cost

With Logfire's debugging and monitoring, teams can identify and fix problems faster, reducing downtime caused by system failures. For applications running in production, that means higher user satisfaction and lower operating costs.

05. In Practice: Early User Feedback on PydanticAI

Although PydanticAI is still a young framework, it has already attracted plenty of developer attention. Early users report that it makes complex LLM tasks notably simpler and more efficient to handle.

  • Case 1: A startup used PydanticAI to build an internal company chatbot that helps employees quickly look up policies and information. Thanks to strong type validation and streaming responses, the bot kept delivering accurate answers while handling thousands of concurrent requests.
  • Case 2: A large e-commerce company used PydanticAI to overhaul its customer-service system, significantly cutting development time and reducing user complaints caused by inconsistent LLM responses.

# Usage example

from dataclasses import dataclass

from pydantic import BaseModel, Field
from pydantic_ai import Agent, RunContext

from bank_database import DatabaseConn


# SupportDependencies is used to pass data, connections, and logic into the model that will be needed when running
# system prompt and tool functions. Dependency injection provides a type-safe way to customise the behavior of your agents.
@dataclass
class SupportDependencies:
    customer_id: int
    db: DatabaseConn


# This pydantic model defines the structure of the result returned by the agent.
class SupportResult(BaseModel):
    support_advice: str = Field(description='Advice returned to the customer')
    block_card: bool = Field(description="Whether to block the customer's card")
    risk: int = Field(description='Risk level of query', ge=0, le=10)


# This agent will act as first-tier support in a bank.
# Agents are generic in the type of dependencies they accept and the type of result they return.
# In this case, the support agent has type `Agent[SupportDependencies, SupportResult]`.
support_agent = Agent(
    'openai:gpt-4o',
    deps_type=SupportDependencies,
    # The response from the agent is guaranteed to be a SupportResult;
    # if validation fails, the agent is prompted to try again.
    result_type=SupportResult,
    system_prompt=(
        'You are a support agent in our bank, give the '
        'customer support and judge the risk level of their query.'
    ),
)


# Dynamic system prompts can make use of dependency injection.
# Dependencies are carried via the `RunContext` argument, which is parameterized with the `deps_type` from above.
# If the type annotation here is wrong, static type checkers will catch it.
@support_agent.system_prompt
async def add_customer_name(ctx: RunContext[SupportDependencies]) -> str:
    customer_name = await ctx.deps.db.customer_name(id=ctx.deps.customer_id)
    return f"The customer's name is {customer_name!r}"


# `tool` lets you register functions which the LLM may call while responding to a user.
# Again, dependencies are carried via `RunContext`, any other arguments become the tool schema passed to the LLM.
# Pydantic is used to validate these arguments, and errors are passed back to the LLM so it can retry.
@support_agent.tool
async def customer_balance(
    ctx: RunContext[SupportDependencies], include_pending: bool
) -> float:
    """Returns the customer's current account balance."""
    # The docstring of a tool is also passed to the LLM as the description of the tool.
    # Parameter descriptions are extracted from the docstring and added to the parameter schema sent to the LLM.
    balance = await ctx.deps.db.customer_balance(
        id=ctx.deps.customer_id,
        include_pending=include_pending,
    )
    return balance


...  # In a real use case, you'd add more tools and a longer system prompt


async def main():
    deps = SupportDependencies(customer_id=123, db=DatabaseConn())
    # Run the agent asynchronously, conducting a conversation with the LLM until a final response is reached.
    # Even in this fairly simple case, the agent will exchange multiple messages with the LLM as tools are called to retrieve a result.
    result = await support_agent.run('What is my balance?', deps=deps)
    # The result will be validated with Pydantic to guarantee it is a `SupportResult`, since the agent is generic,
    # it'll also be typed as a `SupportResult` to aid with static type checking.
    print(result.data)
    """
    support_advice='Hello John, your current account balance, including pending transactions, is $123.45.' block_card=False risk=1
    """

    result = await support_agent.run('I just lost my card!', deps=deps)
    print(result.data)
    """
    support_advice="I'm sorry to hear that, John. We are temporarily blocking your card to prevent unauthorized transactions." block_card=True risk=8
    """


from pydantic_ai import Agent

# Define a very simple agent, including the model to use; you can also set the model when running the agent.
agent = Agent(
    'gemini-1.5-flash',
    # Register a static system prompt using a keyword argument to the agent.
    # For more complex dynamically-generated system prompts, see the example below.
    system_prompt='Be concise, reply with one sentence.',
)

# Run the agent synchronously, conducting a conversation with the LLM.
# Here the exchange should be very short: PydanticAI will send the system prompt and the user query to the LLM,
# the model will return a text response. See the bank support agent example above for a more complex run.
result = agent.run_sync('Where does "hello world" come from?')
print(result.data)
"""
The first known use of "hello, world" was in a 1974 textbook about the C programming language."""

06. Looking Ahead

As AI technology continues to advance, tools like PydanticAI will play an increasingly important role in the industry. Whether you are building a simple conversational bot or a complex intelligent system, PydanticAI gives developers powerful support.

Going forward, we can expect more developers to adopt the tool, helping push LLM technology into wider use across industries.

07. Conclusion

If you are looking for a framework that makes LLM development simpler and more efficient, PydanticAI is well worth trying. With type safety, streaming response support, and debugging and monitoring tools, it offers developers a one-stop path from development to production.

Give PydanticAI a try and supercharge your LLM application development!

References:

  1. https://github.com/pydantic/pydantic-ai


This article is reposted from the WeChat official account "Halo咯咯" (author: 基咯咯).

Original link: https://mp.weixin.qq.com/s/ZQsdR1qHsi0BRjBPcXc4-g

