4  Model Context Protocol

4.1 Overview

The Model Context Protocol (MCP) [1] is an open standard for connecting AI applications to external systems. Released by Anthropic in November 2024, MCP provides a standardized way for AI assistants to access data sources, execute tools, and interact with domain-specific systems.

Think of MCP as a “USB-C port for AI applications”—a universal interface that allows any MCP-compatible AI host (Claude Desktop, VS Code, custom applications) to connect to any MCP server providing specialized capabilities.

4.2 Architecture

MCP follows a client-server architecture with three key participants:

  • MCP Host: The AI application (Claude Desktop, VS Code) that coordinates connections
  • MCP Client: A component within the host that maintains a connection to one MCP server
  • MCP Server: A program that provides context (tools, resources) to clients

┌─────────────────────────────────────────────────────────────┐
│                  MCP Host (AI Application)                  │
│               Claude Desktop / VS Code / etc.               │
│  ┌───────────────────────────────────────────────────────┐  │
│  │                      MCP Client                       │  │
│  │   - Maintains connection to server                    │  │
│  │   - Discovers available tools/resources               │  │
│  │   - Routes tool calls from LLM                        │  │
│  └───────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────┘
                              │
                              │ JSON-RPC 2.0
                              │ (stdio or HTTP)
                              ▼
┌─────────────────────────────────────────────────────────────┐
│                         MCP Server                          │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐          │
│  │   Tools     │  │  Resources  │  │   Prompts   │          │
│  │             │  │             │  │             │          │
│  │ sysml_parse │  │ sysml://    │  │ (templates) │          │
│  │ repo_read   │  │ repo://     │  │             │          │
│  │ sysml_valid │  │             │  │             │          │
│  └─────────────┘  └─────────────┘  └─────────────┘          │
└─────────────────────────────────────────────────────────────┘

4.2.1 Transport Mechanisms

MCP supports two transport layers:

  Transport   Use Case               Characteristics
  ─────────   ────────────────────   ────────────────────────────────────────────
  stdio       Local processes        Claude Desktop, VS Code; no network overhead
  HTTP        Remote/CI deployment   Team servers, CI pipelines (e.g., GitLab CI)

The open-mcp-sysml server currently supports stdio transport for local development with Claude Desktop. HTTP transport for remote/CI deployment is planned for Phase 2.
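
For local stdio use, a server is registered in Claude Desktop's claude_desktop_config.json under the mcpServers key. The launch command below is an assumption about how open-mcp-sysml is packaged; Claude Desktop starts the listed command and speaks JSON-RPC over the process's stdin/stdout:

```json
{
  "mcpServers": {
    "open-mcp-sysml": {
      "command": "open-mcp-sysml",
      "args": []
    }
  }
}
```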

4.2.2 Protocol Flow

  1. Initialize: Client and server negotiate capabilities
  2. Discover: Client lists available tools and resources
  3. Execute: Client calls tools or reads resources as needed
  4. Notify: Server sends real-time updates when state changes
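
The steps above can be sketched as JSON-RPC 2.0 messages. The method names (initialize, tools/list, tools/call) come from the MCP specification; the tool name and arguments in step 3 are illustrative:

```python
import json

# 1. Initialize: the client proposes a protocol version and its capabilities.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# 2. Discover: the client asks which tools the server exposes.
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# 3. Execute: the client invokes a tool with schema-conforming arguments.
call_tool = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "sysml_validate",
        "arguments": {"source": "part def Vehicle;"},
    },
}

# Over the stdio transport, each message travels as a single line of JSON.
for message in (initialize, list_tools, call_tool):
    print(json.dumps(message))
```

Notifications (step 4) use the same envelope but omit the id field, since no response is expected.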

4.3 MCP Primitives

MCP defines three core primitives that servers expose to clients:

4.3.1 Tools

Tools are executable functions that AI applications can invoke. Each tool has:

  • Name: Unique identifier (e.g., sysml_parse)
  • Description: What the tool does
  • Input Schema: JSON Schema defining expected parameters
  • Output: Structured response (text, JSON, errors)

Tools enable AI assistants to take actions—reading files, validating models, committing changes—rather than just providing information.
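
Putting these four parts together, a tool declaration for sysml_parse might look as follows. The MCP format (name, description, JSON Schema inputSchema) is standard; the specific parameters are assumptions drawn from the tool table in Section 4.4.1:

```python
import json

# Hypothetical tools/list entry for sysml_parse. The detail_level values
# L0/L1/L2 mirror the output levels mentioned in Section 4.4.1.
sysml_parse_tool = {
    "name": "sysml_parse",
    "description": "Extract elements from SysML v2 source text.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "source": {
                "type": "string",
                "description": "SysML v2 source text to parse",
            },
            "detail_level": {
                "type": "string",
                "enum": ["L0", "L1", "L2"],
                "description": "Amount of element detail to return",
            },
        },
        "required": ["source"],
    },
}

print(json.dumps(sysml_parse_tool, indent=2))
```

The input schema lets the host validate arguments before sending them and gives the LLM a machine-readable description of what the tool accepts.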

4.3.2 Resources

Resources are read-only data sources accessed via URI patterns. They provide contextual information without side effects:

  • sysml://examples/vehicle — bundled example model
  • repo://myorg/project/file/model.sysml — file from Git repository

Resources let AI assistants browse and read project content without executing operations.
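
A server resolves such URIs by pattern. The sketch below is a hypothetical helper, not the server's actual code; it splits the two schemes into their components:

```python
from urllib.parse import urlparse

def resolve_resource(uri: str) -> dict:
    """Split a resource URI into its components (illustrative only).

    Handles two patterns:
      sysml://examples/{name}
      repo://{project}/file/{path}
    """
    parsed = urlparse(uri)
    if parsed.scheme == "sysml" and parsed.netloc == "examples":
        return {"kind": "example", "name": parsed.path.lstrip("/")}
    if parsed.scheme == "repo":
        # urlparse places the first project segment in netloc.
        segments = [parsed.netloc] + parsed.path.lstrip("/").split("/")
        split = segments.index("file")  # literal separator in the pattern
        return {
            "kind": "repo_file",
            "project": "/".join(segments[:split]),
            "path": "/".join(segments[split + 1:]),
        }
    raise ValueError(f"unsupported resource URI: {uri}")

print(resolve_resource("sysml://examples/vehicle"))
print(resolve_resource("repo://myorg/project/file/model.sysml"))
```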

4.3.3 Prompts

Prompts are reusable interaction templates that help structure LLM conversations. While MCP supports prompts, our SysML v2 server does not implement them—the tools and resources provide sufficient capability for MBSE workflows.

4.4 SysML v2 Server Design

The SysML v2 MCP server exposes tools (and, in a later phase, resources) tailored for AI-augmented MBSE workflows. The design aligns with the requirements in Section 9.3.1.

4.4.1 Tool Definitions

Implemented (Phase 1):

  Tool                     Purpose                            Inputs                           Output
  ──────────────────────   ────────────────────────────────   ──────────────────────────────   ─────────────────────────────
  sysml_parse              Extract elements from SysML text   source, detail_level             Element list (JSON, L0/L1/L2)
  sysml_validate           Validate SysML v2 syntax           source                           Parse diagnostics
  sysml_list_definitions   List definitions in SysML text     source                           Definition names and types
  repo_list_files          List files in Git repository       project, path, ref, sysml_only   File list
  repo_get_file            Read file from Git repository      project, path, ref               File content

Planned (Phase 2+):

  Tool          Purpose                             Status
  ───────────   ─────────────────────────────────   ───────
  sysml_query   Query elements by type/properties   Planned
  repo_commit   Commit file changes                 Planned

4.4.2 Resource URIs (Planned)

Resource URIs are not yet implemented; the current server exposes tools only. Planned resource patterns:

  Pattern                        Example                                  Description
  ────────────────────────────   ──────────────────────────────────────   ──────────────────────
  sysml://examples/{name}        sysml://examples/vehicle                 Bundled example models
  repo://{project}/file/{path}   repo://myorg/models/file/vehicle.sysml   Git repository file

4.4.3 Typical Workflow

A systems engineer asks their AI assistant about requirements in a SysML project:

┌───────────────────────────────────────────────────────────────┐
│  User: "What requirements are defined in this project?"       │
└───────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌───────────────────────────────────────────────────────────────┐
│  1. AI calls repo_list_files(project="myorg/vehicle")         │
│     → Returns: ["requirements.sysml", "architecture.sysml"]   │
└───────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌───────────────────────────────────────────────────────────────┐
│  2. AI calls repo_get_file(path="requirements.sysml")         │
│     → Returns: SysML v2 source text                           │
└───────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌───────────────────────────────────────────────────────────────┐
│  3. AI calls sysml_parse(source=<file content>)               │
│     → Returns: [{type: "RequirementDefinition", name: "..."}] │
└───────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌───────────────────────────────────────────────────────────────┐
│  AI: "This project defines 12 requirements including..."      │
└───────────────────────────────────────────────────────────────┘

The AI can continue the conversation—suggesting improvements, drafting new requirements, validating changes—all while maintaining full project context through the MCP server.
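
Condensed into code, the chain looks as follows. The three functions are local stubs standing in for MCP tool calls, and the SysML snippets they return are invented for illustration:

```python
def repo_list_files(project: str, sysml_only: bool = True) -> list:
    # Stub for the repo_list_files tool.
    return ["requirements.sysml", "architecture.sysml"]

def repo_get_file(project: str, path: str) -> str:
    # Stub for the repo_get_file tool.
    return {
        "requirements.sysml": "requirement def MaxSpeed;",
        "architecture.sysml": "part def Vehicle;",
    }[path]

def sysml_parse(source: str) -> list:
    # Crude stand-in for the real parser: recognize top-level definitions.
    elements = []
    for line in source.splitlines():
        line = line.strip()
        if line.startswith("requirement def "):
            elements.append({"type": "RequirementDefinition",
                             "name": line.split()[2].rstrip(";")})
        elif line.startswith("part def "):
            elements.append({"type": "PartDefinition",
                             "name": line.split()[2].rstrip(";")})
    return elements

project = "myorg/vehicle"
requirements = []
for path in repo_list_files(project):            # step 1: discover files
    source = repo_get_file(project, path)        # step 2: fetch content
    for element in sysml_parse(source):          # step 3: extract elements
        if element["type"] == "RequirementDefinition":
            requirements.append(element["name"])

print(f"This project defines {len(requirements)} requirement(s): {requirements}")
```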

4.5 Implementation Considerations

4.5.1 Error Handling

The server handles degraded conditions gracefully:

  Condition                             Behavior
  ───────────────────────────────────   ─────────────────────────────────────────────────────────────────
  Invalid SysML syntax                  Return parse errors with line numbers via tree-sitter diagnostics
  Git provider authentication failure   Return clear error with remediation steps
  File not found                        Return structured error with available paths
  SysML v2 API unavailable              Planned: fall back to local parsing (API integration is Phase 2+)
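
For tool failures, MCP reports errors inside the tool result (a content list plus an isError flag) rather than as transport-level faults, so the LLM can read the message and react. A minimal sketch, with an invented tree-sitter-style diagnostic:

```python
import json

def tool_error(message: str, hint: str = "") -> dict:
    # Build an MCP-style tool result that flags an error. The content/isError
    # shape follows the MCP tools/call result format.
    text = f"{message}\nHint: {hint}" if hint else message
    return {"content": [{"type": "text", "text": text}], "isError": True}

# Example: surfacing a parse diagnostic with a line number (values invented).
result = tool_error(
    "Parse error at line 12: expected ';' after part definition",
    hint="Check the statement terminator and re-run sysml_validate.",
)
print(json.dumps(result, indent=2))
```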

4.5.2 Security

  • Git provider tokens passed via environment variable (e.g., GITLAB_TOKEN, GITHUB_TOKEN)
  • Tokens never logged or included in error messages
  • Input validation on tool parameters (paths, project identifiers) mitigates injection attacks
  • HTTP transport supports TLS for remote deployment
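
The first two points can be sketched in a few lines; the helper names are hypothetical:

```python
import os

def load_token(provider: str = "GITLAB") -> str:
    # Read the Git provider token from the environment only, never from
    # CLI arguments or config files where it could leak into logs.
    token = os.environ.get(f"{provider}_TOKEN", "")
    if not token:
        raise RuntimeError(
            f"{provider}_TOKEN is not set; export it before starting the server."
        )
    return token

def redact(message: str, token: str) -> str:
    # Scrub the token from any text bound for logs or error messages.
    return message.replace(token, "[REDACTED]") if token else message

# Example: a provider error echoing the request URL must be scrubbed first.
token = "glpat-example123"  # illustrative placeholder, not a real token
error = f"401 Unauthorized: https://gitlab.example.com/api?private_token={token}"
print(redact(error, token))
```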

4.5.3 Deployment Modes

  Mode    Transport   Configuration           Use Case
  ─────   ─────────   ─────────────────────   ────────────────────
  Local   stdio       Claude Desktop config   Individual engineer
  Team    HTTP        Docker/Podman           Shared team server
  CI/CD   HTTP        CI service container    Automated validation

See Section 10.13 for detailed deployment architecture.