
llm.do MCP

Model Context Protocol reference for llm.do - a unified gateway for large language models (LLMs)

Overview

The Model Context Protocol (MCP) provides AI models with direct access to llm.do through a standardized interface.

Installation

pnpm add @modelcontextprotocol/sdk

Configuration

Add to your MCP server configuration:

{
  "mcpServers": {
    "llm": {
      "command": "npx",
      "args": ["-y", "@dotdo/mcp-server"],
      "env": {
        "DO_API_KEY": "your-api-key"
      }
    }
  }
}

Tools

llm/invoke

Main tool for llm.do operations.

{
  "name": "llm/invoke",
  "description": "Unified gateway for large language models (Large Language Models (LLMs))",
  "inputSchema": {
    "type": "object",
    "properties": {
      "operation": {
        "type": "string",
        "description": "Operation to perform"
      },
      "parameters": {
        "type": "object",
        "description": "Operation parameters"
      }
    },
    "required": ["operation"]
  }
}
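Using the MCP client shown under Custom Integration below, an invocation matching this schema might look like the following (the complete operation name is illustrative - substitute an operation your deployment actually exposes):

// Hypothetical call conforming to the llm/invoke schema
const result = await client.callTool({
  name: 'llm/invoke',
  arguments: {
    operation: 'complete', // required
    parameters: { prompt: 'Hello' }, // optional, operation-specific
  },
})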

Usage in AI Models

Claude Desktop

// ~/Library/Application Support/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "llm": {
      "command": "npx",
      "args": ["-y", "@dotdo/mcp-server", "--tool=llm"],
      "env": {
        "DO_API_KEY": "undefined"
      }
    }
  }
}

OpenAI GPTs

# Custom GPT configuration
tools:
  - type: mcp
    server: llm
    operations:
      - invoke
      - query
      - execute

Custom Integration

import { Client } from '@modelcontextprotocol/sdk/client/index.js'
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js'

const transport = new StdioClientTransport({
  command: 'npx',
  args: ['-y', '@dotdo/mcp-server', '--tool=llm'],
})

const client = new Client(
  {
    name: 'llm-client',
    version: '1.0.0',
  },
  {
    capabilities: {},
  }
)

await client.connect(transport)

// Call tool
const result = await client.callTool({
  name: 'llm/invoke',
  arguments: {
    operation: 'llm',
    parameters: {},
  },
})
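The returned result carries a content array; a minimal sketch of reading a text payload from it (assuming the server responds with a single text item):

// Read the first text content item from the tool result
const [first] = result.content
if (first?.type === 'text') {
  console.log(first.text)
}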

Tool Definitions

Available Tools

{
  "tools": [
    {
      "name": "llm/invoke",
      "description": "Invoke llm.do",
      "inputSchema": {
        /* ... */
      }
    },
    {
      "name": "llm/query",
      "description": "Query llm.do resources",
      "inputSchema": {
        /* ... */
      }
    },
    {
      "name": "llm/status",
      "description": "Check llm.do status",
      "inputSchema": {
        /* ... */
      }
    }
  ]
}
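Clients can discover these definitions at runtime via the SDK's listTools method:

// Enumerate the tools the server exposes
const { tools } = await client.listTools()
console.log(tools.map((t) => t.name)) // e.g. ['llm/invoke', 'llm/query', 'llm/status']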

Resources

Available Resources

{
  "resources": [
    {
      "uri": "llm://config",
      "name": "Llm Configuration",
      "mimeType": "application/json"
    },
    {
      "uri": "llm://docs",
      "name": "Llm Documentation",
      "mimeType": "text/markdown"
    }
  ]
}
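Resources are fetched by URI. A sketch using the SDK's readResource method:

// Fetch the llm://config resource
const { contents } = await client.readResource({ uri: 'llm://config' })
console.log(contents[0]) // { uri, mimeType, text }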

Prompts

Pre-configured Prompts

{
  "prompts": [
    {
      "name": "llm-quick-start",
      "description": "Quick start guide for llm.do",
      "arguments": []
    },
    {
      "name": "llm-best-practices",
      "description": "Best practices for llm.do",
      "arguments": []
    }
  ]
}
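Prompts are retrieved with the SDK's getPrompt method:

// Fetch the quick-start prompt and inspect its messages
const prompt = await client.getPrompt({
  name: 'llm-quick-start',
  arguments: {},
})
console.log(prompt.messages)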

Examples

Basic Usage

// AI model calls tool via MCP
mcp call llm/invoke

With Parameters

// Call with parameters
await client.callTool(
  {
    name: 'llm/invoke',
    arguments: {
      operation: 'process',
      parameters: {
        // Operation-specific parameters
      },
    },
  },
  undefined, // result schema (defaults to CallToolResultSchema)
  { timeout: 30000 }
)

Error Handling

try {
  const result = await client.callTool({
    name: 'llm/invoke',
    arguments: { operation: 'process' },
  })
  return result
} catch (error) {
  if (error.code === 'TOOL_NOT_FOUND') {
    console.error('llm tool not available')
  } else {
    throw error
  }
}
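With the TypeScript SDK specifically, protocol-level failures surface as McpError instances. A sketch of distinguishing them (mapping a missing tool to ErrorCode.MethodNotFound is an assumption about how the server reports it):

import { McpError, ErrorCode } from '@modelcontextprotocol/sdk/types.js'

try {
  await client.callTool({ name: 'llm/invoke', arguments: { operation: 'process' } })
} catch (error) {
  if (error instanceof McpError && error.code === ErrorCode.MethodNotFound) {
    // Assumed: the server reports unknown tools as MethodNotFound
    console.error('llm tool not available on this server')
  } else {
    throw error
  }
}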

AI Integration Patterns

Agentic Workflows

// AI agent uses llm.do in workflow
const workflow = {
  steps: [
    {
      tool: 'llm/invoke',
      operation: 'analyze',
      input: 'user-data',
    },
    {
      tool: 'llm/invoke',
      operation: 'transform',
      input: 'analysis-result',
    },
  ],
}
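A minimal executor for this shape of workflow loops over the steps and feeds each result forward (a sketch; userData stands in for the initial input, and the result-piping convention is an assumption):

// Run the steps sequentially, piping each result into the next step's input
let input: unknown = userData
for (const step of workflow.steps) {
  const result = await client.callTool({
    name: step.tool,
    arguments: { operation: step.operation, parameters: { input } },
  })
  input = result.content
}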

Chain of Thought

AI models can reason about llm.do operations:

User: "I need to process this data"

AI: "I'll use the llm tool to:
1. Validate the data format
2. Process it through llm.do
3. Return the results

Let me start..."

[Calls: mcp call llm/invoke]

Server Implementation

Custom MCP Server

import { Server } from '@modelcontextprotocol/sdk/server/index.js'
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js'
import { CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js'

const server = new Server(
  {
    name: 'llm-server',
    version: '1.0.0',
  },
  {
    capabilities: {
      tools: {},
      resources: {},
      prompts: {},
    },
  }
)

// Register the tool-call handler (the SDK keys handlers by request schema, not by name string)
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === 'llm/invoke') {
    // Handle the llm.do operation (handleLlmOperation is a placeholder for your implementation)
    const result = await handleLlmOperation(request.params.arguments)
    return {
      content: [
        {
          type: 'text',
          text: JSON.stringify(result),
        },
      ],
    }
  }
  throw new Error(`Unknown tool: ${request.params.name}`)
})

const transport = new StdioServerTransport()
await server.connect(transport)
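For clients to discover llm/invoke, the server also needs a tools/list handler, registered before connecting the transport. A minimal sketch:

import { ListToolsRequestSchema } from '@modelcontextprotocol/sdk/types.js'

// Advertise llm/invoke so clients can discover it
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: 'llm/invoke',
      description: 'Invoke llm.do',
      inputSchema: {
        type: 'object',
        properties: {
          operation: { type: 'string', description: 'Operation to perform' },
          parameters: { type: 'object', description: 'Operation parameters' },
        },
        required: ['operation'],
      },
    },
  ],
}))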

Best Practices

  1. Tool Design - Keep tools focused and single-purpose
  2. Error Messages - Provide clear, actionable errors
  3. Documentation - Include examples in tool descriptions
  4. Rate Limiting - Implement appropriate limits
  5. Security - Validate all inputs from AI models (see the validation sketch after this list)
  6. Monitoring - Track tool usage and errors
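For item 5, a minimal validation sketch using zod (the schema mirrors the llm/invoke inputSchema; zod is the validation library the MCP SDK itself builds on):

import { z } from 'zod'

// Schema mirroring the llm/invoke inputSchema
const InvokeArgs = z.object({
  operation: z.string().min(1),
  parameters: z.record(z.unknown()).optional(),
})

// Call at the top of the tools/call handler before acting on the request
function validateInvokeArgs(raw: unknown) {
  return InvokeArgs.parse(raw) // throws ZodError on malformed input
}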