first commit
9
.env.example
Executable file
@@ -0,0 +1,9 @@
# Downstream Ollama address
OLLAMA_PROXY_TARGET=http://127.0.0.1:11434

# Default model name (may be empty; if you need to force a specific one, it can also be set in the config)
# OLLAMA_DEFAULT_MODEL=Huihui-Qwen3.5-27B-Claude-4.6-Opus-abliterated-4bit

# Port the proxy service listens on
PROXY_PORT=11435
PROXY_HOST=127.0.0.1
6
.gitignore
vendored
Executable file
@@ -0,0 +1,6 @@
node_modules/
dist/
.env
.DS_Store
*.log
temp_out.json
80
README.md
Executable file
@@ -0,0 +1,80 @@
# OpenClaw Ollama Toolcall Proxy

A lightweight, local HTTP compatibility proxy between OpenClaw and Ollama. It fixes the problem where some large models emit function calls (tool calls) as XML text inside `content`, which never triggers the local agent.

## Background

When an agent framework such as `OpenClaw` connects to an open-source model served by `Ollama`:

- Some models (e.g. certain fine-tuned variants), because of how they were aligned, sometimes call tools from the reply (`content`) using XML-style markup such as `<function=xxx>`.
- Downstream clients such as OpenClaw usually only accept standard, structured JSON `tool_calls`.
- This project bridges that gap: the proxy receives responses containing these unparsed tags, extracts and cleans them, and reassembles them into structured `tool_calls` that conform to the standard OpenAI/Ollama protocol.
- If the model already emits proper `tool_calls`, the proxy passes the response through transparently; it is also compatible with native field bodies such as `<thinking>`.
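For illustration, the core of this extraction can be sketched in a few lines of standalone TypeScript (a simplified sketch; the actual implementation lives in `src/parsers/xml-toolcall.ts`):

```typescript
// Simplified, standalone sketch of the XML-to-tool_calls extraction.
const content =
  '<function=read>\n<parameter=path>\n/tmp/test.txt\n</parameter>\n</function>';

// Non-greedy, multiline match of each <function=NAME>...</function> block
const functionRegex = /<function=([^>]+)>([\s\S]*?)<\/function>/g;

const toolCalls: { name: string; arguments: string }[] = [];
let m: RegExpExecArray | null;
while ((m = functionRegex.exec(content)) !== null) {
  // Collect <parameter=...> values inside this function block
  const paramRegex = /<parameter=([^>]+)>([\s\S]*?)<\/parameter>/g;
  const args: Record<string, string> = {};
  let p: RegExpExecArray | null;
  while ((p = paramRegex.exec(m[2])) !== null) {
    args[p[1].trim()] = p[2].trim();
  }
  // The standard protocol expects `arguments` as a JSON string
  toolCalls.push({ name: m[1].trim(), arguments: JSON.stringify(args) });
}

console.log(toolCalls);
// → [ { name: 'read', arguments: '{"path":"/tmp/test.txt"}' } ]
```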

## Tech Stack and Features

* **Cross-platform**: Written in Node.js + TypeScript; runs on Linux (including WSL), macOS, and Windows with very little resource overhead.
* **Fastify core**: High-performance HTTP proxying. Requests are forwarded with the built-in Fetch API, with no extra heavyweight dependencies.
* **Automatic rewriting**: A standalone `xml-toolcall` parser rewrites the response body.
* **Environment-variable control**: Service mapping is exposed and controlled via `.env`.

---

## Quick Start

### 1. Prerequisites

- Node.js installed (v18+ recommended)
- npm / yarn / pnpm

### 2. Clone and Install

```bash
# Clone the repository
git clone ssh://git@gitea.jmsu.top:2222/lingyuzeng/openclaw-ollama-toolcall-proxy.git
cd openclaw-ollama-toolcall-proxy

# Install dependencies
npm install
```

### 3. Configuration

Copy the environment variable template:

```bash
cp .env.example .env
```

Open the `.env` file; the following settings are available:

- `OLLAMA_PROXY_TARGET=http://<your-Ollama-IP>:11434` (the proxy routes requests here, e.g. a machine on your LAN)
- `PROXY_PORT=11435` (the port this proxy listens on, chosen to avoid Ollama's default 11434)
- `OLLAMA_DEFAULT_MODEL=` (a fallback model name, applied when the request carries no model parameter)
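For example, a `.env` pointing at an Ollama instance on another LAN machine might look like this (the IP address is illustrative):

```ini
OLLAMA_PROXY_TARGET=http://192.168.1.50:11434
PROXY_PORT=11435
PROXY_HOST=127.0.0.1
# OLLAMA_DEFAULT_MODEL=Huihui-Qwen3.5-27B-Claude-4.6-Opus-abliterated-4bit
```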

### 4. Run the Service

Start in development/debug mode (with hot reload):

```bash
npm run dev
```

Production mode:

```bash
# Build with the TypeScript compiler first
npm run build

# Run from the build output
npm run start
```

---

## Manual Testing

To check that the environment and forwarding rules work, use the sample JSON provided in this repository against the new port:

```bash
# Make sure the target is the proxy listener (default 11435)
curl -s http://127.0.0.1:11435/api/chat \
  -H 'Content-Type: application/json' \
  -d @test/fixtures/openclaw-like-request.json
```

You should then see the fully structured calls both in the console and in the returned result. When the proxy detects a rewrite, it logs: `[INFO] Rewriting response: found 1 tool calls in XML content`.
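If the rewrite succeeded, the returned body should have roughly this shape (model name and call id will differ):

```json
{
  "model": "...",
  "message": {
    "role": "assistant",
    "content": "",
    "tool_calls": [
      {
        "id": "call_...",
        "type": "function",
        "function": {
          "name": "read",
          "arguments": "{\"path\":\"/tmp/test.txt\"}"
        }
      }
    ]
  },
  "done": true
}
```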

## Unit and Integration Tests

The project ships with test scripts built on Vitest.

```bash
npm run test
```

This runs the `xml-toolcall` parser test cases and the end-to-end integration tests.
2886
package-lock.json
generated
Normal file
File diff suppressed because it is too large
23
package.json
Executable file
@@ -0,0 +1,23 @@
{
  "name": "openclaw-ollama-toolcall-proxy",
  "version": "1.0.0",
  "description": "A proxy layer between OpenClaw and Ollama to rewrite XML tool calls into structured tool_calls.",
  "main": "dist/index.js",
  "scripts": {
    "start": "node dist/index.js",
    "dev": "tsx watch src/index.ts",
    "build": "tsc",
    "test": "vitest run",
    "test:watch": "vitest"
  },
  "dependencies": {
    "dotenv": "^16.4.5",
    "fastify": "^4.26.2"
  },
  "devDependencies": {
    "@types/node": "^20.12.7",
    "tsx": "^4.7.2",
    "typescript": "^5.4.5",
    "vitest": "^1.5.2"
  }
}
17
src/config.ts
Executable file
@@ -0,0 +1,17 @@
import dotenv from 'dotenv';
import path from 'path';

// Load environment variables, preferring a local .env file
dotenv.config({ path: path.resolve(process.cwd(), '.env') });

export const config = {
  // Proxy listen configuration
  host: process.env.PROXY_HOST || '127.0.0.1',
  port: process.env.PROXY_PORT ? parseInt(process.env.PROXY_PORT, 10) : 11435,

  // Actual address of the downstream Ollama instance
  targetUrl: process.env.OLLAMA_PROXY_TARGET || 'http://127.0.0.1:11434',

  // Default model: used when the request carries no model, or as a default test configuration
  defaultModel: process.env.OLLAMA_DEFAULT_MODEL || 'Huihui-Qwen3.5-27B-Claude-4.6-Opus-abliterated-4bit',
};
22
src/index.ts
Executable file
@@ -0,0 +1,22 @@
import { buildServer } from './server';
import { config } from './config';
import { logger } from './utils/logger';

async function start() {
  const server = buildServer();

  try {
    const address = await server.listen({
      port: config.port,
      host: config.host,
    });
    logger.info(`Server listening at ${address}`);
    logger.info(`Target Ollama URL: ${config.targetUrl}`);
    logger.info(`Default Model: ${config.defaultModel}`);
  } catch (err: any) {
    logger.error('Failed to start server:', err);
    process.exit(1);
  }
}

start();
1
src/parsers/index.ts
Executable file
@@ -0,0 +1 @@
export * from './xml-toolcall';
55
src/parsers/xml-toolcall.ts
Executable file
@@ -0,0 +1,55 @@
import { ParsedToolCall } from '../types/toolcall';
import { logger } from '../utils/logger';

/**
 * Parses XML-style tool calls from the message content.
 * Expected formats:
 * <function=read>
 * <parameter=path>
 * /tmp/test.txt
 * </parameter>
 * </function>
 *
 * Or wrapped in <tool_call></tool_call>
 *
 * Returns an array of parsed tool calls.
 */
export function parseXmlToolCalls(content: string): ParsedToolCall[] {
  if (!content) return [];

  const results: ParsedToolCall[] = [];

  // Match each <function=NAME>...</function> block
  // We use `[\s\S]*?` for non-greedy multiline matching
  const functionRegex = /<function=([^>]+)>([\s\S]*?)<\/function>/g;

  let match;
  while ((match = functionRegex.exec(content)) !== null) {
    const name = match[1].trim();
    const innerContent = match[2];

    // Parse parameters inside the function block
    const args: Record<string, any> = {};
    const paramRegex = /<parameter=([^>]+)>([\s\S]*?)<\/parameter>/g;

    let paramMatch;
    while ((paramMatch = paramRegex.exec(innerContent)) !== null) {
      const paramName = paramMatch[1].trim();
      const paramValue = paramMatch[2].trim();
      args[paramName] = paramValue;
    }

    // Sometimes arguments are JSON encoded strings inside XML, or we can just pass them as strings.
    results.push({
      name,
      args
    });
  }

  // Debug logging if we found anything
  if (results.length > 0) {
    logger.debug(`Parsed ${results.length} tool calls from XML.`);
  }

  return results;
}
40
src/proxy/forward.ts
Executable file
@@ -0,0 +1,40 @@
import { config } from '../config';
import { rewriteResponse } from './response-rewriter';
import { logger } from '../utils/logger';

export async function forwardChatRequest(requestBody: any): Promise<any> {
  const targetHost = config.targetUrl;
  const targetEndpoint = `${targetHost}/api/chat`;

  // Inject default model if not provided
  if (!requestBody.model && config.defaultModel) {
    requestBody.model = config.defaultModel;
  }

  logger.info(`Forwarding chat request to ${targetEndpoint} for model: ${requestBody.model}`);

  const options: RequestInit = {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Accept': 'application/json'
    },
    body: JSON.stringify(requestBody)
  };

  const response = await fetch(targetEndpoint, options);

  if (!response.ok) {
    const errorText = await response.text();
    logger.error(`Ollama upstream error ${response.status}: ${errorText}`);
    throw new Error(`Upstream returned ${response.status}: ${errorText}`);
  }

  // Assuming it's not a stream for now
  const responseData = await response.json();

  // Rewrite if necessary
  const rewrittenData = rewriteResponse(responseData);

  return rewrittenData;
}
59
src/proxy/response-rewriter.ts
Executable file
@@ -0,0 +1,59 @@
import { OllamaChatResponse, ToolCall } from '../types/ollama';
import { parseXmlToolCalls } from '../parsers';
import { logger } from '../utils/logger';

/**
 * Rewrites the Ollama response to include structured tool calls if missing
 * but present in XML tags within the content.
 */
export function rewriteResponse(response: OllamaChatResponse): OllamaChatResponse {
  // If the response isn't properly formed or has no message, return as is
  if (!response || !response.message) {
    return response;
  }

  // If the response already has tool_calls, do nothing
  if (response.message.tool_calls && response.message.tool_calls.length > 0) {
    return response;
  }

  const content = response.message.content;
  if (!content) {
    return response;
  }

  // Try to parse XML tool calls from content
  const parsedCalls = parseXmlToolCalls(content);

  if (parsedCalls.length > 0) {
    logger.info(`Rewriting response: found ${parsedCalls.length} tool calls in XML content`);

    // Construct standard tool_calls
    const standardToolCalls: ToolCall[] = parsedCalls.map((call, index) => {
      // Ensure arguments are correctly stringified as expected by standard OpenAI/Ollama APIs
      let argumentsString = '{}';
      try {
        argumentsString = JSON.stringify(call.args);
      } catch (e) {
        logger.error('Failed to stringify arguments for tool call', call.args);
      }

      return {
        id: `call_${Date.now()}_${index}`,
        type: 'function',
        function: {
          name: call.name,
          arguments: argumentsString,
        }
      };
    });

    // We can decide to either clear the content or keep it.
    // Usually, if we parsed tool calls, we clear the content to avoid confusion.
    // But retaining it is also fine. Let's clear the XML parts or the whole content to be safe.
    response.message.tool_calls = standardToolCalls;
    response.message.content = '';
  }

  return response;
}
30
src/routes/ollama.ts
Executable file
@@ -0,0 +1,30 @@
import { FastifyInstance, FastifyPluginAsync } from 'fastify';
import { forwardChatRequest } from '../proxy/forward';
import { logger } from '../utils/logger';

const ollamaRoutes: FastifyPluginAsync = async (server: FastifyInstance) => {
  server.post('/api/chat', async (request, reply) => {
    try {
      const body = request.body as any;

      // Currently only supporting non-streaming requests in this proxy MVP
      if (body?.stream === true) {
        // As per requirements: return a clear error or pass through without rewriting.
        // We return a clear error for now, because stream parsing is out of scope for the MVP.
        reply.status(400).send({
          error: "Streaming is not supported by this proxy MVP. Please set stream=false."
        });
        return;
      }

      const response = await forwardChatRequest(body);

      reply.status(200).send(response);
    } catch (error: any) {
      logger.error('Error handling /api/chat:', error.message);
      reply.status(500).send({ error: error.message });
    }
  });
};

export default ollamaRoutes;
17
src/server.ts
Executable file
@@ -0,0 +1,17 @@
import fastify, { FastifyInstance } from 'fastify';
import { config } from './config';
import ollamaRoutes from './routes/ollama';

export function buildServer(): FastifyInstance {
  const server = fastify({ logger: false }); // Using our custom logger instead

  // Basic health check
  server.get('/', async () => {
    return { status: 'ok', service: 'openclaw-ollama-toolcall-proxy' };
  });

  // Register routes
  server.register(ollamaRoutes);

  return server;
}
30
src/types/ollama.ts
Executable file
@@ -0,0 +1,30 @@
export interface OllamaMessage {
  role: string;
  content: string;
  tool_calls?: ToolCall[];
}

export interface ToolCall {
  id: string;
  type: 'function';
  function: {
    name: string;
    arguments: string; // JSON string or directly object in some variations (but usually stringified JSON)
  };
}

export interface OllamaChatRequest {
  model: string;
  messages: OllamaMessage[];
  stream?: boolean;
  tools?: any[];
  [key: string]: any;
}

export interface OllamaChatResponse {
  model: string;
  created_at?: string;
  message: OllamaMessage;
  done: boolean;
  [key: string]: any;
}
4
src/types/toolcall.ts
Executable file
@@ -0,0 +1,4 @@
export interface ParsedToolCall {
  name: string;
  args: Record<string, any>;
}
10
src/utils/logger.ts
Executable file
@@ -0,0 +1,10 @@
export const logger = {
  info: (...args: any[]) => console.log(`[INFO ${new Date().toISOString()}]`, ...args),
  warn: (...args: any[]) => console.warn(`[WARN ${new Date().toISOString()}]`, ...args),
  error: (...args: any[]) => console.error(`[ERROR ${new Date().toISOString()}]`, ...args),
  debug: (...args: any[]) => {
    if (process.env.DEBUG) {
      console.debug(`[DEBUG ${new Date().toISOString()}]`, ...args);
    }
  }
};
19
test/fixtures/ollama-toolcalls-response.json
vendored
Executable file
@@ -0,0 +1,19 @@
{
  "model": "Huihui-Qwen3.5-27B-Claude-4.6-Opus-abliterated-4bit",
  "created_at": "2024-05-01T10:00:00.000000Z",
  "message": {
    "role": "assistant",
    "content": "",
    "tool_calls": [
      {
        "id": "call_abc123",
        "type": "function",
        "function": {
          "name": "read",
          "arguments": "{\"path\":\"/tmp/test.txt\"}"
        }
      }
    ]
  },
  "done": true
}
9
test/fixtures/ollama-xml-response.json
vendored
Executable file
@@ -0,0 +1,9 @@
{
  "model": "Huihui-Qwen3.5-27B-Claude-4.6-Opus-abliterated-4bit",
  "created_at": "2024-05-01T10:00:00.000000Z",
  "message": {
    "role": "assistant",
    "content": "I will read the file for you.\n<tool_call>\n<function=read>\n<parameter=path>\n/tmp/test.txt\n</parameter>\n</function>\n</tool_call>"
  },
  "done": true
}
32
test/fixtures/openclaw-like-request.json
vendored
Executable file
@@ -0,0 +1,32 @@
{
  "model": "hotwa/qwen35-9b-agent:latest",
  "stream": false,
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "请读取 /tmp/test.txt 的内容"
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "read",
        "description": "Read a file from disk",
        "parameters": {
          "type": "object",
          "properties": {
            "path": {
              "type": "string"
            }
          },
          "required": ["path"]
        }
      }
    }
  ]
}
78
test/integration.proxy.test.ts
Executable file
@@ -0,0 +1,78 @@
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { buildServer } from '../src/server';
import { FastifyInstance } from 'fastify';
import fs from 'fs';
import path from 'path';

describe('Proxy Integration Test', () => {
  let server: FastifyInstance;

  beforeEach(() => {
    server = buildServer();
    // In vitest we can mock the global fetch
    global.fetch = vi.fn();
  });

  afterEach(async () => {
    await server.close();
    vi.restoreAllMocks();
  });

  it('proxies request and rewrites XML response to tool_calls', async () => {
    // Read fixtures
    const requestFixturePath = path.join(__dirname, 'fixtures', 'openclaw-like-request.json');
    const responseFixturePath = path.join(__dirname, 'fixtures', 'ollama-xml-response.json');

    const requestJson = JSON.parse(fs.readFileSync(requestFixturePath, 'utf8'));
    const responseJson = JSON.parse(fs.readFileSync(responseFixturePath, 'utf8'));

    // Mock fetch to return the ollama-xml-response.json
    (global.fetch as any).mockResolvedValue({
      ok: true,
      json: async () => responseJson
    });

    const response = await server.inject({
      method: 'POST',
      url: '/api/chat',
      payload: requestJson
    });

    expect(response.statusCode).toBe(200);
    const body = JSON.parse(response.payload);

    // Verify proxy forwarded it
    expect(global.fetch).toHaveBeenCalledTimes(1);
    const fetchArgs = (global.fetch as any).mock.calls[0];
    expect(fetchArgs[0]).toContain('/api/chat');

    // The fixture request carries its own model, so it must be forwarded unchanged;
    // the default model is only injected when the request has none.
    const upstreamBody = JSON.parse(fetchArgs[1].body);
    expect(upstreamBody.model).toBe('hotwa/qwen35-9b-agent:latest');

    // Verify response was rewritten
    expect(body.message.content).toBe("");
    expect(body.message.tool_calls).toBeDefined();
    expect(body.message.tool_calls).toHaveLength(1);
    expect(body.message.tool_calls[0].function.name).toBe('read');
    expect(JSON.parse(body.message.tool_calls[0].function.arguments)).toEqual({
      path: "/tmp/test.txt"
    });
  });

  it('rejects streaming requests cleanly', async () => {
    const requestFixturePath = path.join(__dirname, 'fixtures', 'openclaw-like-request.json');
    const requestJson = JSON.parse(fs.readFileSync(requestFixturePath, 'utf8'));
    requestJson.stream = true;

    const response = await server.inject({
      method: 'POST',
      url: '/api/chat',
      payload: requestJson
    });

    expect(response.statusCode).toBe(400);
    const body = JSON.parse(response.payload);
    expect(body.error).toContain('Streaming is not supported');
    expect(global.fetch).not.toHaveBeenCalled();
  });
});
52
test/response-rewriter.test.ts
Executable file
@@ -0,0 +1,52 @@
import { describe, it, expect } from 'vitest';
import { rewriteResponse } from '../src/proxy/response-rewriter';
import { OllamaChatResponse } from '../src/types/ollama';

describe('Response Rewriter', () => {
  it('rewrites XML tool call in content into structured tool_calls', () => {
    const inputResponse: OllamaChatResponse = {
      model: "test-model",
      done: true,
      message: {
        role: "assistant",
        content: "<function=read>\n<parameter=path>\n/tmp/test.txt\n</parameter>\n</function>"
      }
    };

    const result = rewriteResponse(inputResponse);

    expect(result.message.content).toBe("");
    expect(result.message.tool_calls).toBeDefined();
    expect(result.message.tool_calls).toHaveLength(1);

    const toolCall = result.message.tool_calls![0];
    expect(toolCall.type).toBe('function');
    expect(toolCall.function.name).toBe('read');

    const argsObject = JSON.parse(toolCall.function.arguments);
    expect(argsObject).toEqual({ path: '/tmp/test.txt' });
  });

  it('does not touch response that already has tool_calls', () => {
    const inputResponse: OllamaChatResponse = {
      model: "test-model",
      done: true,
      message: {
        role: "assistant",
        content: "Here are the calls",
        tool_calls: [
          {
            id: "123",
            type: "function",
            function: { name: "read", arguments: "{}" }
          }
        ]
      }
    };

    const result = rewriteResponse(inputResponse);

    expect(result.message.content).toBe("Here are the calls"); // not cleared
    expect(result.message.tool_calls).toHaveLength(1);
  });
});
48
test/xml-toolcall.test.ts
Executable file
@@ -0,0 +1,48 @@
import { describe, it, expect } from 'vitest';
import { parseXmlToolCalls } from '../src/parsers/xml-toolcall';

describe('XML ToolCall Parser', () => {
  it('parses basic XML tool call correctly', () => {
    const content = `
<tool_call>
<function=read>
<parameter=path>
/tmp/test.txt
</parameter>
</function>
</tool_call>
`;

    const result = parseXmlToolCalls(content);
    expect(result).toHaveLength(1);
    expect(result[0].name).toBe('read');
    expect(result[0].args).toEqual({ path: '/tmp/test.txt' });
  });

  it('parses multiple parameters correctly', () => {
    const content = `
<function=write>
<parameter=path>
/tmp/a.txt
</parameter>
<parameter=content>
hello
</parameter>
</function>
`;

    const result = parseXmlToolCalls(content);
    expect(result).toHaveLength(1);
    expect(result[0].name).toBe('write');
    expect(result[0].args).toEqual({
      path: '/tmp/a.txt',
      content: 'hello'
    });
  });

  it('ignores normal text', () => {
    const content = `I will read the file. Let me check the system.`;
    const result = parseXmlToolCalls(content);
    expect(result).toHaveLength(0);
  });
});
14
tsconfig.json
Executable file
@@ -0,0 +1,14 @@
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "CommonJS",
    "rootDir": "./src",
    "outDir": "./dist",
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "strict": true,
    "skipLibCheck": true,
    "resolveJsonModule": true
  },
  "include": ["src/**/*"]
}