mcp server list

client

References:

stdio client

Creation and invocation flow:

  1. Start the server

    • The server parameters specify how the server is launched:

      1. the python command
      2. the .py file to run
      3. environment variables
  2. Start the client session

    • The session is tied to the server connection through the (read, write) stream pair
  3. The session queries the capabilities the server exposes

    • prompts: session.list_prompts()
    • tools: session.list_tools()
    • resources: session.list_resources()
  4. Generate a prompt:

    • session.get_prompt("prompt_name", arguments={k: v, ...})
  5. Read a resource:

    • session.read_resource("resource_uri")

      • e.g.:

        content, mime_type = await session.read_resource("file://some/path")
        
  6. Call a tool

    • session.call_tool("tool_name", arguments={k: v, ...})

      • e.g.:

        result = await session.call_tool("tool-name", arguments={"arg1": "value"})
        
    • The tool_name and arguments here are produced by the model's tool-use response

      1. When calling the model, put the tools into the ChatCompletion request; the response then contains fields such as content.type == 'tool_use', content.name and content.input (this is what the Anthropic example returns)
  7. Tool invocation flow:

    1. Convert the list_tools() result -> into a format the model can consume, and send it along with the user's query

      • OpenAI function calling
      • LangChain tools
      • Anthropic tool use
    2. response -> the model decides to use a tool -> extract the tool-call data

      • tool_name
      • arguments
    3. Call the tool

      • session.call_tool("tool_name", arguments=arguments)
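Steps 7.2-7.3 boil down to pulling the tool name and the JSON-encoded arguments out of the model's tool-call response. `parse_tool_call` below is a hypothetical helper over the OpenAI-style tool_calls dict shape (the full client further down does the same on the real response object):

```python
import json


def parse_tool_call(tool_call: dict) -> tuple[str, dict]:
    """Extract (tool_name, arguments) from one OpenAI-style tool_call entry.

    The model returns arguments as a JSON string, while
    session.call_tool() expects a dict, hence the json.loads.
    """
    fn = tool_call["function"]
    args = json.loads(fn["arguments"]) if fn["arguments"] else {}
    return fn["name"], args


# the dict shape mirrors the tool_calls entry shown in the run output below
tool_call = {
    "id": "call_123",
    "type": "function",
    "function": {"name": "temperature_predict",
                 "arguments": '{"longitude": 100, "lattitude": 88}'},
}
name, args = parse_tool_call(tool_call)
print(name, args)  # temperature_predict {'longitude': 100, 'lattitude': 88}
```

The pair then feeds straight into step 7.3: `session.call_tool(name, arguments=args)`.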
from mcp import ClientSession, StdioServerParameters, types
from mcp.client.stdio import stdio_client

# Create server parameters for stdio connection
server_params = StdioServerParameters(
    command="python",  # Executable
    args=["example_server.py"],  # Optional command line arguments
    env=None,  # Optional environment variables
)
async def run():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(
            # sampling_callback is optional; handle_sampling_message is
            # defined in the official SDK example and omitted here
            read, write, sampling_callback=handle_sampling_message
        ) as session:
            # Initialize the connection
            await session.initialize()
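
Steps 3-6 then run against the initialized session. The sketch below swaps in a FakeSession (a stand-in class, not part of the mcp package) purely to show the call sequence without a live server:

```python
import asyncio


class FakeSession:
    """Stand-in for mcp.ClientSession, only to illustrate the call order
    of steps 3-6; a real client would use the session from the example above."""

    async def list_tools(self):
        return ["temperature_predict"]

    async def get_prompt(self, name, arguments=None):
        return f"prompt:{name}"

    async def read_resource(self, uri):
        return ("file contents", "text/plain")

    async def call_tool(self, name, arguments=None):
        return {"tool": name, "args": arguments}


async def demo(session):
    tools = await session.list_tools()                       # step 3: discover capabilities
    prompt = await session.get_prompt("p", arguments={})     # step 4: generate a prompt
    content, mime = await session.read_resource("file://x")  # step 5: read a resource
    # step 6: call a tool with model-supplied arguments
    return await session.call_tool(tools[0], arguments={"longitude": 1.0, "lattitude": 2.0})


result = asyncio.run(demo(FakeSession()))
print(result)  # {'tool': 'temperature_predict', 'args': {'longitude': 1.0, 'lattitude': 2.0}}
```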

Connecting to a streamable http server (client side)

from mcp.client.streamable_http import streamablehttp_client
from mcp import ClientSession


async def main():
    # Connect to a streamable HTTP server
    async with streamablehttp_client("example/mcp") as (
        read_stream,
        write_stream,
        _,
    ):
        # Create a session using the client streams
        async with ClientSession(read_stream, write_stream) as session:
            # Initialize the connection
            await session.initialize()
            # Call a tool
            tool_result = await session.call_tool("echo", {"message": "hello"})

Converting the session.list_tools() result
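
The tool list returned by session.list_tools() has to be reshaped before the model can use it. A minimal sketch for the OpenAI tools format, operating on plain dicts (the real Tool objects expose name, description and inputSchema as attributes, as convert_tool_format further below shows):

```python
def mcp_tool_to_openai(name: str, description: str, input_schema: dict) -> dict:
    """Map an MCP tool's name/description/inputSchema onto the
    OpenAI function-calling tools format."""
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": input_schema["properties"],
                "required": input_schema.get("required", []),
            },
        },
    }


schema = {
    "properties": {"longitude": {"type": "number"}, "lattitude": {"type": "number"}},
    "required": ["longitude", "lattitude"],
}
print(mcp_tool_to_openai("temperature_predict", "tomorrow temperature forecast", schema))
```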

fastapi + streamable http + mcp example

server code

# -*- coding: utf-8; -*-
import contextlib
import math
import random
from typing import Any

from fastapi import FastAPI
from mcp.server.fastmcp import FastMCP


mcp_app = FastMCP("model predicting")


@mcp_app.tool()
def temperature_predict(longitude: float, lattitude: float) -> float:
    """tomorrow temperature forecast for a given longitude,lattitude"""
    return random.random()


# Create a combined lifespan to manage both session managers
@contextlib.asynccontextmanager
async def lifespan(app: FastAPI):
    async with contextlib.AsyncExitStack() as stack:
        await stack.enter_async_context(mcp_app.session_manager.run())
        # await stack.enter_async_context(math.mcp.session_manager.run())
        yield


app = FastAPI(lifespan=lifespan)  # note: FastAPI must be created with lifespan=lifespan so the FastMCP session manager gets started


app.mount("/model_predict", mcp_app.streamable_http_app())

if __name__ == '__main__':
    mcp_app.run(transport="streamable-http", )

How to run

  1. Using FastMCP.run(transport="streamable-http")

    python server.py
    
  2. Using fastapi + uvicorn

    uvicorn mcp_server:app --host 0.0.0.0 --port 8044
    
    1. The FastMCP session_manager must first be run inside the FastAPI lifespan
    2. mount must be given a path
    3. On the client side, the route to connect to is path/mcp
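
To summarize, the two run modes map to different client URLs (port 8000 with path /mcp is FastMCP's default; 8044 and /model_predict come from the uvicorn command and the mount above):

```shell
# mode 1: FastMCP's own runner, serves at http://localhost:8000/mcp by default
python server.py

# mode 2: uvicorn serving the FastAPI app; the sub-app is mounted at /model_predict,
# so the client connects to http://localhost:8044/model_predict/mcp
uvicorn mcp_server:app --host 0.0.0.0 --port 8044
```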

client code

# -*- coding: utf-8; -*-
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


async def main():
    async with streamablehttp_client("http://localhost:8044/model_predict/mcp") as (sread, swrite, _):  # server run via fastapi + uvicorn
    # async with streamablehttp_client("http://localhost:8000/mcp") as (sread, swrite, _):  # server run via FastMCP.run()
        async with ClientSession(sread, swrite) as session:
            await session.initialize()
            # print(await session.list_prompts())
            print(await session.list_tools())

if __name__ == '__main__':
    import asyncio
    asyncio.run(main())

A complete mcp streamable http client + openai tools-calling client

# -*- coding: utf-8; -*-
import json

import openai
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client



def convert_tool_format(tool):
    converted_tool = {
        "type": "function",
        "function": {
            "name": tool.name,
            "description": tool.description,
            "parameters": {
                "type": "object",
                "properties": tool.inputSchema["properties"],
                "required": tool.inputSchema["required"],
            },
        },
    }
    return converted_tool


async def main():
    async with streamablehttp_client("http://localhost:8044/model_predict/mcp") as (
        sread,
        swrite,
        _,
    ):
        # async with streamablehttp_client("http://localhost:8000/mcp") as (sread, swrite, _):
        async with ClientSession(sread, swrite) as session:
            await session.initialize()
            # print(await session.list_prompts())
            resp_tools = await session.list_tools()
            print(resp_tools.tools)
            available_tools = [convert_tool_format(t) for t in resp_tools.tools]

            print(f"{available_tools = }")
            tool_result = await session.call_tool(
                "temperature_predict",
                arguments={"longitude": 88.88, "lattitude": 10.10},
            )

            print(f"{tool_result = }\n {tool_result.content}")
            # provider configuration (pick one)
            # base_url = "https://openrouter.ai/api/v1"
            # MODEL = "openai/gpt-4o-mini"  # or "deepseek/deepseek-r1-0528:free"
            base_url = "https://api.siliconflow.cn/v1"
            api_key = "<YOUR API KEY>"
            # MODEL = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
            MODEL = "deepseek-ai/DeepSeek-V3"

            openai_client = openai.OpenAI(
                base_url=base_url,
                api_key=api_key,
            )
            messages = [
                {
                    "role": "user",
                    "content": "I want to take part in a party at (100, 88) of longittude and lattitude, Give me some wearing suggestions based on the temperature",
                }
            ]

            completion = openai_client.chat.completions.create(
                tools=available_tools,
                model=MODEL,
                messages=messages,
            )
            resp = completion.choices[0].message
            messages.append(resp.model_dump())
            print(f"{resp = }")

            # resp = {'content': '', 'refusal': None, 'role': 'assistant', 'annotations': None, 'audio': None, 'function_call': None, 'tool_calls': [{'id': 'call_Lwe5euiu0sH8R4kg9CZYL5jp', 'function': {'arguments': '{"longitude":100,"lattitude":88}', 'name': 'temperature_predict'}, 'type': 'function', 'index': 0}], 'reasoning': None}
            if resp.tool_calls:
                tool_name = resp.tool_calls[0].function.name
                tool_args = resp.tool_calls[0].function.arguments
                if tool_args:
                    tool_args = json.loads(tool_args)
                else:
                    tool_args = {}

                try:
                    tool_result = await session.call_tool(
                        name=tool_name, arguments=tool_args
                    )
                except Exception as e:
                    print(e)
                    tool_result = None

                print(f"\n\n{tool_result = }")

                if tool_result:
                    if isinstance(tool_result.content, str):
                        content = tool_result.content
                    elif isinstance(tool_result.content, list):
                        content = tool_result.content[0]
                        if not isinstance(content, str):
                            content = content.text
                    else:
                        content = str(tool_result.content)

                    messages.append(
                        {
                            "role": "tool",
                            "content": content,
                            "tool_call_id": resp.tool_calls[0].id, # 必须提供
                            "name": tool_name,
                        }
                    )

                    print(f'\n\n{messages[-1] = }\n')
                    resp = openai_client.chat.completions.create(
                        model=MODEL,
                        messages=messages,
                    )
                    print(f"\n\n{resp = }")

                    messages.append(resp.choices[0].message.model_dump())

                    print(f'--'*10)
                    from pprint import pprint
                    pprint(messages)


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())

python /data/sawyer/npu-3-sawyer/source/mdq-expstep-wukuang/client.py

[Tool(name='temperature_predict', description='tomorrow temperature forecast for a given longitude,lattitude', inputSchema={'properties': {'longitude': {'title': 'Longitude', 'type': 'number'}, 'lattitude': {'title': 'Lattitude', 'type': 'number'}}, 'required': ['longitude', 'lattitude'], 'title': 'temperature_predictArguments', 'type': 'object'}, annotations=None)]

available_tools = [{'type': 'function', 'function': {'name': 'temperature_predict', 'description': 'tomorrow temperature forecast for a given longitude,lattitude', 'parameters': {'type': 'object', 'properties': {'longitude': {'title': 'Longitude', 'type': 'number'}, 'lattitude': {'title': 'Lattitude', 'type': 'number'}}, 'required': ['longitude', 'lattitude']}}}]

tool_result = CallToolResult(meta=None, content=[TextContent(type='text', text='temperature tomorrow: 26.74℃', annotations=None)], isError=False)
 [TextContent(type='text', text='temperature tomorrow: 26.74℃', annotations=None)]

resp = ChatCompletionMessage(content='', refusal=None, role='assistant', annotations=None, audio=None, function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_y3XaVUTGUmqjaYJt9fWVwMRF', function=Function(arguments='{"longitude":100,"lattitude":88}', name='temperature_predict'), type='function', index=0)], reasoning=None)

tool_result = CallToolResult(meta=None, content=[TextContent(type='text', text='temperature tomorrow: 3.03℃', annotations=None)], isError=False)

resp = ChatCompletion(id='gen-1750323842-3sYR6s9VtrJd4oXhzTDM', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content="The expected temperature at your location (100° longitude, 88° latitude) is approximately 3.03°C. Here are some wearing suggestions to keep you comfortable at this cool temperature:\n\n1. Layer Up: \n - Wear a thermal base layer (top and bottom) under your clothes for added warmth.\n - A long-sleeve shirt or a lightweight sweater as a middle layer.\n\n2. Outer Layer:\n - A warm, insulated jacket, preferably windproof, to protect against the chill.\n\n3. Bottoms:\n - Wear comfortable jeans or thermal leggings. If you tend to get cold easily, consider thicker trousers.\n\n4. Accessories:\n - A knit hat or beanie to keep your head warm.\n - A scarf to protect your neck and provide extra warmth.\n - Gloves or mittens to keep your hands cozy.\n\n5. Footwear:\n - Insulated boots or warm shoes with thick socks to keep your feet warm.\n\n6. Optional:\n - If it's windy or there's a chance of rain, consider a waterproof outer layer or an umbrella.\n\nMake sure to check if it might get colder in the evening so you can adjust your layers accordingly!", refusal=None, role='assistant', annotations=None, audio=None, function_call=None, tool_calls=None, reasoning=None), native_finish_reason='stop')], created=1750323842, model='openai/gpt-4o-mini', object='chat.completion', service_tier=None, system_fingerprint='fp_34a54ae93c', usage=CompletionUsage(completion_tokens=251, prompt_tokens=75, total_tokens=326, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=None, audio_tokens=None, reasoning_tokens=0, rejected_prediction_tokens=None), prompt_tokens_details=PromptTokensDetails(audio_tokens=None, cached_tokens=0)), provider='OpenAI')


[{'content': 'I want to take part in a party at (100, 88) of longittude and ' 'lattitude, Give me some wearing suggestions based on the ' 'temperature', 'role': 'user'}, {'annotations': None, 'audio': None, 'content': '', 'function_call': None, 'reasoning': None, 'refusal': None, 'role': 'assistant', 'tool_calls': [{'function': {'arguments': '{"longitude":100,"lattitude":88}', 'name': 'temperature_predict'}, 'id': 'call_y3XaVUTGUmqjaYJt9fWVwMRF', 'index': 0, 'type': 'function'}]}, {'content': [TextContent(type='text', text='temperature tomorrow: 3.03℃', annotations=None)], 'name': 'temperature_predict', 'role': 'tool', 'tool_call_id': 'call_y3XaVUTGUmqjaYJt9fWVwMRF'}, {'annotations': None, 'audio': None, 'content': 'The expected temperature at your location (100° longitude, 88° ' 'latitude) is approximately 3.03°C. Here are some wearing ' 'suggestions to keep you comfortable at this cool temperature:\n' '\n' '1. Layer Up: \n' ' - Wear a thermal base layer (top and bottom) under your ' 'clothes for added warmth.\n' ' - A long-sleeve shirt or a lightweight sweater as a middle ' 'layer.\n' '\n' '2. Outer Layer:\n' ' - A warm, insulated jacket, preferably windproof, to protect ' 'against the chill.\n' '\n' '3. Bottoms:\n' ' - Wear comfortable jeans or thermal leggings. If you tend to ' 'get cold easily, consider thicker trousers.\n' '\n' '4. Accessories:\n' ' - A knit hat or beanie to keep your head warm.\n' ' - A scarf to protect your neck and provide extra warmth.\n' ' - Gloves or mittens to keep your hands cozy.\n' '\n' '5. Footwear:\n' ' - Insulated boots or warm shoes with thick socks to keep your ' 'feet warm.\n' '\n' '6. Optional:\n' " - If it's windy or there's a chance of rain, consider a " 'waterproof outer layer or an umbrella.\n' '\n' 'Make sure to check if it might get colder in the evening so you ' 'can adjust your layers accordingly!', 'function_call': None, 'reasoning': None, 'refusal': None, 'role': 'assistant', 'tool_calls': None}]

mcp server collection

Code execution

CodeRunner

Runs many kinds of code, e.g. python, javascript, …

Search APIs

brave search

The API comes in two forms:

  • web

    • raw search-engine results
  • ai

    • results post-processed by AI

Pricing:

  • 2000 free web search calls

    • then $3 / 1000 calls
  • 1000 free ai search calls

tavily

AI search (results may be less factual than plain web search, but the information is denser)

Quota:

  • 1000 free calls

google/serper api

Google search

Pricing:

  • $1 / 1000 calls

mcp + langgraph

References:

Key points:

  • Use the tool node: ToolNode

    from langgraph.prebuilt import ToolNode, tools_condition
    
  • Invoking the graph:

    math_response = await graph.ainvoke({"messages": "what's (3 + 5) x 12?"})
    

Caveats

  1. A tool may return an empty result

    • ToolMessage(content='')
  2. Even when the tool does return a result, in the tool_call -> model_call step the model may not summarize the tool output or reconsider the user's question based on it

    • which leaves the AIMessage empty
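
One workaround for caveat 1 (my own sketch, not part of the langgraph API): substitute a placeholder string before an empty tool result reaches the model, so the follow-up model_call has something to work with:

```python
def ensure_tool_content(content: str, tool_name: str) -> str:
    """Fallback for empty tool results: instead of appending
    ToolMessage(content=''), give the model a note it can reason about."""
    return content if content else f"(tool {tool_name} returned no output)"


print(ensure_tool_content("", "temperature_predict"))
# (tool temperature_predict returned no output)
```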