4  Model Context Protocol

4.1 Overview

The Model Context Protocol (MCP) [1] is an open standard for connecting AI applications to external systems. Released by Anthropic in November 2024, MCP provides a standardized way for AI assistants to access data sources, execute tools, and interact with domain-specific systems.

Think of MCP as a “USB-C port for AI applications”—a universal interface that allows any MCP-compatible AI host (Claude Desktop, VS Code, custom applications) to connect to any MCP server providing specialized capabilities.

4.2 Architecture

MCP follows a client-server architecture with three key participants:

  • MCP Host: The AI application (Claude Desktop, VS Code) that coordinates connections
  • MCP Client: A component within the host that maintains a connection to one MCP server
  • MCP Server: A program that provides context (tools, resources) to clients

┌─────────────────────────────────────────────────────────────┐
│                  MCP Host (AI Application)                  │
│               Claude Desktop / VS Code / etc.               │
│  ┌───────────────────────────────────────────────────────┐  │
│  │                      MCP Client                       │  │
│  │   - Maintains connection to server                    │  │
│  │   - Discovers available tools/resources               │  │
│  │   - Routes tool calls from LLM                        │  │
│  └───────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────┘
                              │
                              │ JSON-RPC 2.0
                              │ (stdio or HTTP)
                              ▼
┌─────────────────────────────────────────────────────────────┐
│                         MCP Server                          │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐          │
│  │   Tools     │  │  Resources  │  │   Prompts   │          │
│  │             │  │             │  │             │          │
│  │ sysml_parse │  │ sysml://    │  │ (templates) │          │
│  │ gitlab_read │  │ gitlab://   │  │             │          │
│  │ sysml_valid │  │             │  │             │          │
│  └─────────────┘  └─────────────┘  └─────────────┘          │
└─────────────────────────────────────────────────────────────┘

4.2.1 Transport Mechanisms

MCP supports two transport layers:

  Transport   Use Case               Characteristics
  ─────────   ────────────────────   ────────────────────────────────────────────
  stdio       Local processes        Claude Desktop, VS Code; no network overhead
  HTTP        Remote/CI deployment   Team servers, GitLab CI pipelines

The SysML v2 MCP server supports both transports, enabling local development with Claude Desktop and remote deployment for CI/CD integration.
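For local use, Claude Desktop discovers stdio servers through its claude_desktop_config.json file. A sketch of such an entry follows; the server key ("sysml-v2") and the module name (sysml_mcp_server) are illustrative, not the actual packaging:

```json
{
  "mcpServers": {
    "sysml-v2": {
      "command": "python",
      "args": ["-m", "sysml_mcp_server"],
      "env": { "GITLAB_TOKEN": "<your-token>" }
    }
  }
}
```

Claude Desktop launches the listed command as a child process and speaks JSON-RPC to it over stdin/stdout, which is why no port or URL appears in the stdio configuration.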

4.2.2 Protocol Flow

  1. Initialize: Client and server negotiate capabilities
  2. Discover: Client lists available tools and resources
  3. Execute: Client calls tools or reads resources as needed
  4. Notify: Server sends real-time updates when state changes
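The four phases map onto JSON-RPC 2.0 messages. A minimal sketch of one message per phase, with payloads abbreviated; the protocol version string and client name are illustrative:

```python
import json

# 1. Initialize: client proposes a protocol version and capabilities.
initialize = {
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {"protocolVersion": "2025-03-26",
               "capabilities": {},
               "clientInfo": {"name": "example-host", "version": "0.1"}},
}

# 2. Discover: client asks which tools the server exposes.
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# 3. Execute: client invokes a tool by name with schema-conforming arguments.
call_tool = {
    "jsonrpc": "2.0", "id": 3, "method": "tools/call",
    "params": {"name": "sysml_parse",
               "arguments": {"source": "part def Vehicle;"}},
}

# 4. Notify: server-initiated update with no "id" (a JSON-RPC notification,
#    so the client sends no response).
notify = {"jsonrpc": "2.0", "method": "notifications/tools/list_changed"}

for msg in (initialize, list_tools, call_tool, notify):
    print(json.dumps(msg))
```

Note the structural difference in step 4: requests carry an id and expect a reply, while notifications omit the id entirely.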

4.3 MCP Primitives

MCP defines three core primitives that servers expose to clients:

4.3.1 Tools

Tools are executable functions that AI applications can invoke. Each tool has:

  • Name: Unique identifier (e.g., sysml_parse)
  • Description: What the tool does
  • Input Schema: JSON Schema defining expected parameters
  • Output: Structured response (text, JSON, errors)

Tools enable AI assistants to take actions—reading files, validating models, committing changes—rather than just providing information.
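As a sketch, the sysml_parse tool from Section 4.4.1 might be advertised to clients like this. The schema shape follows JSON Schema; the exact descriptions and the check_args helper are illustrative:

```python
# Tool descriptor: name, description, and a JSON Schema for its inputs.
sysml_parse_tool = {
    "name": "sysml_parse",
    "description": "Extract elements (parts, requirements, actions) "
                   "from SysML v2 source text.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "source": {"type": "string",
                       "description": "SysML v2 source text"},
        },
        "required": ["source"],
    },
}

def check_args(tool: dict, args: dict) -> None:
    """Reject a call that is missing required parameters before execution."""
    missing = [k for k in tool["inputSchema"]["required"] if k not in args]
    if missing:
        raise ValueError(f"missing required argument(s): {missing}")

check_args(sysml_parse_tool, {"source": "part def Vehicle;"})  # passes
```

The input schema serves double duty: the host validates arguments against it, and the LLM reads it to learn what parameters the tool expects.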

4.3.2 Resources

Resources are read-only data sources accessed via URI patterns. They provide contextual information without side effects:

  • sysml://examples/vehicle — bundled example model
  • gitlab://myorg/project/file/model.sysml — file from GitLab

Resources let AI assistants browse and read project content without executing operations.

4.3.3 Prompts

Prompts are reusable interaction templates that help structure LLM conversations. While MCP supports prompts, our SysML v2 server does not implement them—the tools and resources provide sufficient capability for MBSE workflows.

4.4 SysML v2 Server Design

The SysML v2 MCP server exposes tools and resources tailored for AI-augmented MBSE workflows. The design aligns with the requirements in Section 8.3.1.

4.4.1 Tool Definitions

  Tool                 Purpose                            Inputs                            Output
  ───────────────────  ─────────────────────────────────  ────────────────────────────────  ───────────────────
  sysml_parse          Extract elements from SysML text   source                            Element list (JSON)
  gitlab_read_file     Read a .sysml file from GitLab     project, path, ref                File content
  gitlab_list_models   List .sysml files in a directory   project, path                     File list
  sysml_validate       Validate via the SysML v2 API      source                            Validation result
  sysml_query          Query elements by type             project, element_type             Element list
  gitlab_commit        Commit file changes                project, branch, files, message   Commit URL
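To illustrate the output shape of sysml_parse, here is a deliberately naive sketch that matches "kind def Name" declarations with a regular expression. A real implementation would use a proper SysML v2 parser; only the shape of the returned element list is the point here:

```python
import json
import re

# Maps SysML v2 definition keywords to element type names.
KIND_MAP = {
    "part": "PartDefinition",
    "requirement": "RequirementDefinition",
    "action": "ActionDefinition",
    "attribute": "AttributeDefinition",
}

def sysml_parse(source: str) -> list[dict]:
    """Extract '<kind> def <Name>' declarations (regex sketch only)."""
    pattern = re.compile(r"\b(part|requirement|action|attribute)\s+def\s+(\w+)")
    return [{"type": KIND_MAP[kind], "name": name}
            for kind, name in pattern.findall(source)]

sample = """
package VehicleModel {
    part def Vehicle;
    requirement def MaxSpeedReq;
}
"""
print(json.dumps(sysml_parse(sample)))
# → [{"type": "PartDefinition", "name": "Vehicle"},
#    {"type": "RequirementDefinition", "name": "MaxSpeedReq"}]
```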

4.4.2 Resource URIs

  Pattern                          Example                                    Description
  ───────────────────────────────  ─────────────────────────────────────────  ──────────────────────
  sysml://examples/{name}          sysml://examples/vehicle                   Bundled example models
  gitlab://{project}/file/{path}   gitlab://myorg/models/file/vehicle.sysml   GitLab repository file
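A sketch of how the server might route these two URI patterns. Splitting the GitLab URI on the first "/file/" segment is an illustrative simplification; it assumes project paths never contain that segment pair:

```python
def resolve_resource(uri: str) -> dict:
    """Map a resource URI onto the data source it refers to (sketch)."""
    if uri.startswith("sysml://examples/"):
        # Bundled example model: everything after the prefix is the name.
        return {"kind": "example", "name": uri.removeprefix("sysml://examples/")}
    if uri.startswith("gitlab://"):
        # GitLab file: "{project}/file/{path}" after the scheme.
        rest = uri.removeprefix("gitlab://")
        project, _, path = rest.partition("/file/")
        if not path:
            raise ValueError(f"unrecognized gitlab resource URI: {uri}")
        return {"kind": "gitlab_file", "project": project, "path": path}
    raise ValueError(f"unknown resource scheme: {uri}")

print(resolve_resource("gitlab://myorg/models/file/vehicle.sysml"))
# → {'kind': 'gitlab_file', 'project': 'myorg/models', 'path': 'vehicle.sysml'}
```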

4.4.3 Typical Workflow

A systems engineer asks their AI assistant about requirements in a SysML project:

┌───────────────────────────────────────────────────────────────┐
│  User: "What requirements are defined in this project?"       │
└───────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌───────────────────────────────────────────────────────────────┐
│  1. AI calls gitlab_list_models(project="myorg/vehicle")      │
│     → Returns: ["requirements.sysml", "architecture.sysml"]   │
└───────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌───────────────────────────────────────────────────────────────┐
│  2. AI calls gitlab_read_file(path="requirements.sysml")      │
│     → Returns: SysML v2 source text                           │
└───────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌───────────────────────────────────────────────────────────────┐
│  3. AI calls sysml_parse(source=<file content>)               │
│     → Returns: [{type: "RequirementDefinition", name: "..."}] │
└───────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌───────────────────────────────────────────────────────────────┐
│  AI: "This project defines 12 requirements including..."      │
└───────────────────────────────────────────────────────────────┘

The AI can continue the conversation—suggesting improvements, drafting new requirements, validating changes—all while maintaining full project context through the MCP server.
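The same three-step chain can be simulated with stub functions standing in for the MCP tool calls, backed by an in-memory repository instead of GitLab. The file contents and element extraction below are illustrative stand-ins:

```python
import re

# In-memory stand-in for a GitLab project's model files.
FAKE_REPO = {
    "requirements.sysml": "requirement def MaxSpeedReq; requirement def RangeReq;",
    "architecture.sysml": "part def Vehicle;",
}

def gitlab_list_models(project: str) -> list[str]:
    return sorted(FAKE_REPO)                       # step 1: discover model files

def gitlab_read_file(project: str, path: str) -> str:
    return FAKE_REPO[path]                         # step 2: fetch source text

def sysml_parse(source: str) -> list[dict]:        # step 3: extract elements
    return [{"type": "RequirementDefinition", "name": n}
            for n in re.findall(r"requirement\s+def\s+(\w+)", source)]

files = gitlab_list_models("myorg/vehicle")
source = gitlab_read_file("myorg/vehicle", "requirements.sysml")
reqs = sysml_parse(source)
print([r["name"] for r in reqs])  # → ['MaxSpeedReq', 'RangeReq']
```

In the real workflow the LLM, not application code, decides this call sequence at runtime, based on the tool descriptions discovered in the protocol's discovery phase.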

4.5 Implementation Considerations

4.5.1 Error Handling

The server handles degraded conditions gracefully:

  Condition                       Behavior
  ──────────────────────────────  ───────────────────────────────────────────────
  SysML v2 API unavailable        Fall back to local parsing (no full validation)
  GitLab authentication failure   Return a clear error with remediation steps
  Invalid SysML syntax            Return parse errors with line numbers
  Network timeout                 Configurable timeout with retry guidance
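The first fallback, switching to local parsing when the API is unreachable, can be sketched as follows. The remote client here is a stub that simulates an outage, and the local check is deliberately shallow:

```python
def remote_validate(source: str) -> dict:
    """Stand-in for the SysML v2 API client; simulates an outage."""
    raise TimeoutError("SysML v2 API unreachable")

def local_parse(source: str) -> dict:
    """Shallow syntactic check only; no full semantic validation."""
    balanced = source.count("{") == source.count("}")
    return {"valid": balanced, "mode": "local",
            "warning": "no full validation"}

def sysml_validate(source: str) -> dict:
    try:
        return remote_validate(source)
    except (TimeoutError, ConnectionError):
        return local_parse(source)         # graceful degradation

result = sysml_validate("part def Vehicle { }")
print(result["mode"])  # → local
```

Surfacing the "warning" field in the tool output matters: the AI assistant can then tell the user that the result is a degraded local check rather than a full API validation.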

4.5.2 Security

  • GitLab Personal Access Token passed via environment variable (GITLAB_TOKEN)
  • Tokens never logged or included in error messages
  • Input validation prevents injection attacks
  • HTTP transport supports TLS for remote deployment
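The first two points can be sketched as reading the token from GITLAB_TOKEN and scrubbing it from any text destined for logs or error responses. The token value below is illustrative:

```python
import os

def get_token() -> str:
    """Read the GitLab token from the environment; fail with guidance if absent."""
    token = os.environ.get("GITLAB_TOKEN")
    if not token:
        raise RuntimeError(
            "GITLAB_TOKEN is not set; create a GitLab Personal Access Token "
            "and export it before starting the server."
        )
    return token

def redact(message: str, token: str) -> str:
    """Strip the token from any text destined for logs or error responses."""
    return message.replace(token, "[REDACTED]")

os.environ["GITLAB_TOKEN"] = "glpat-example123"   # illustrative value only
msg = redact("auth failed for token glpat-example123", get_token())
print(msg)  # → auth failed for token [REDACTED]
```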

4.5.3 Deployment Modes

  Mode    Transport   Configuration           Use Case
  ──────  ─────────   ─────────────────────   ─────────────────────
  Local   stdio       Claude Desktop config   Individual engineer
  Team    HTTP        Docker/Podman           Shared team server
  CI/CD   HTTP        GitLab CI service       Automated validation

See Section 9.10 for detailed deployment architecture.