Guide to integrating NeuralMind with MCP tools, graphify, and other development workflows.
NeuralMind requires a knowledge graph generated by graphify to function. Graphify analyzes your codebase and creates a structured representation of code entities and their relationships.
pip install graphify
# Navigate to your project
cd /path/to/your/project
# Generate knowledge graph
graphify update .
# This creates:
# - graphify-out/graph.json (knowledge graph)
# - graphify-out/GRAPH_REPORT.md (analysis report)
# - graphify-out/cache/ (processing cache)
The knowledge graph contains:
{
  "nodes": [
    {
      "id": "unique_id",
      "name": "authenticate_user",
      "type": "function",
      "file_path": "auth/handlers.py",
      "description": "Validates user credentials",
      "community": 5
    }
  ],
  "edges": [
    {
      "source": "node_id_1",
      "target": "node_id_2",
      "type": "calls"
    }
  ],
  "communities": [
    {
      "id": 5,
      "name": "Authentication",
      "description": "User authentication and authorization"
    }
  ]
}
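A quick way to sanity-check the generated graph is to load graphify-out/graph.json and count what it contains. The sketch below is a hypothetical helper (not part of graphify) and assumes only the fields shown in the schema above:

```python
import json
from collections import Counter
from pathlib import Path

def summarize_graph(graph: dict) -> dict:
    """Count nodes, edges, and communities in a graphify-style graph dict."""
    nodes = graph.get("nodes", [])
    edges = graph.get("edges", [])
    return {
        "nodes": len(nodes),
        "edges": len(edges),
        "communities": len(graph.get("communities", [])),
        # Tally node/edge types, e.g. {"function": 120, "class": 30}
        "node_types": dict(Counter(n.get("type", "unknown") for n in nodes)),
        "edge_types": dict(Counter(e.get("type", "unknown") for e in edges)),
    }

if __name__ == "__main__":
    graph_file = Path("graphify-out/graph.json")
    if graph_file.exists():
        print(summarize_graph(json.loads(graph_file.read_text())))
```

If the node or edge counts come back as zero after a graphify run, the graph likely was not generated for the directory you expected.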
Update the graph when code changes:
# Manual update
graphify update /path/to/project
# Then rebuild NeuralMind index
neuralmind build /path/to/project
Add to .git/hooks/post-commit:
#!/bin/bash
# Automatically update knowledge graph after commits
PROJECT_ROOT=$(git rev-parse --show-toplevel)
# Update graph
graphify update "$PROJECT_ROOT" 2>/dev/null
# Rebuild NeuralMind index
neuralmind build "$PROJECT_ROOT" 2>/dev/null
Make it executable:
chmod +x .git/hooks/post-commit
NeuralMind includes a Model Context Protocol (MCP) server for seamless integration with AI coding assistants.
The MCP server exposes NeuralMind’s functionality as tools that AI assistants can call:
| Tool | Description |
|---|---|
| neuralmind_wakeup | Get wake-up context for a project |
| neuralmind_query | Query project with natural language |
| neuralmind_search | Semantic search across codebase |
| neuralmind_skeleton | Graph-backed file view |
| neuralmind_recursive_query | Decompose and explore complex questions |
| neuralmind_query_docs | Search reference documents (PDFs, DOCX) |
| neuralmind_build | Build/rebuild neural index |
| neuralmind_stats | Get project statistics |
| neuralmind_benchmark | Run performance benchmark |
# Using the CLI entry point
neuralmind-mcp
# Or as a Python module
python -m neuralmind.mcp_server
# With custom port (if supported)
neuralmind-mcp --port 8080
Claude Desktop supports MCP servers natively.
The config file location depends on your platform:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json

Add the server entry:
{
  "mcpServers": {
    "neuralmind": {
      "command": "neuralmind-mcp",
      "args": [],
      "env": {}
    }
  }
}
Once configured, Claude can:
User: What does the authentication module do in /projects/myapp?
Claude: [Calls neuralmind_query with project_path and question]
Based on the codebase analysis, the authentication module...
If NeuralMind is installed in a virtual environment:
{
  "mcpServers": {
    "neuralmind": {
      "command": "/path/to/venv/bin/neuralmind-mcp",
      "args": [],
      "env": {
        "VIRTUAL_ENV": "/path/to/venv"
      }
    }
  }
}
Cursor IDE supports MCP through its AI features. Add the server to Cursor's MCP settings:
{
  "neuralmind": {
    "command": "neuralmind-mcp"
  }
}
Create .cursor/mcp.json in your project:
{
  "servers": {
    "neuralmind": {
      "command": "neuralmind-mcp",
      "env": {
        "NEURALMIND_PROJECT": "${workspaceFolder}"
      }
    }
  }
}
For building your own MCP client integration:
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def query_neuralmind(project_path: str, question: str):
    server_params = StdioServerParameters(
        command="neuralmind-mcp",
        args=[]
    )
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Call the query tool
            result = await session.call_tool(
                "neuralmind_query",
                {
                    "project_path": project_path,
                    "question": question
                }
            )
            return result

# Usage
result = asyncio.run(query_neuralmind(
    "/path/to/project",
    "How does authentication work?"
))
print(result)
NeuralMind MCP tools follow these schemas:
{
  "neuralmind_wakeup": {
    "input": {
      "project_path": "string (required)"
    },
    "output": {
      "context": "string",
      "tokens": "integer",
      "layers": "array"
    }
  },
  "neuralmind_query": {
    "input": {
      "project_path": "string (required)",
      "question": "string (required)"
    },
    "output": {
      "context": "string",
      "tokens": "integer",
      "reduction_ratio": "number",
      "layers": "array",
      "communities": "array"
    }
  },
  "neuralmind_search": {
    "input": {
      "project_path": "string (required)",
      "query": "string (required)",
      "limit": "integer (optional, default 10)"
    },
    "output": {
      "results": "array of search results"
    }
  },
  "neuralmind_build": {
    "input": {
      "project_path": "string (required)",
      "force": "boolean (optional, default false)"
    },
    "output": {
      "nodes_processed": "integer",
      "nodes_embedded": "integer",
      "communities": "integer",
      "time_elapsed": "number"
    }
  },
  "neuralmind_stats": {
    "input": {
      "project_path": "string (required)"
    },
    "output": {
      "node_count": "integer",
      "community_count": "integer",
      "last_build": "string"
    }
  },
  "neuralmind_benchmark": {
    "input": {
      "project_path": "string (required)"
    },
    "output": {
      "results": "array",
      "averages": "object"
    }
  }
}
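When scripting against these tools, it helps to catch malformed calls before they reach the server. The helper below is a hypothetical client-side check derived only from the "(required)" markers in the schemas above:

```python
# Required inputs per tool, taken from the "(required)" fields in the schemas.
REQUIRED_ARGS = {
    "neuralmind_wakeup": {"project_path"},
    "neuralmind_query": {"project_path", "question"},
    "neuralmind_search": {"project_path", "query"},
    "neuralmind_build": {"project_path"},
    "neuralmind_stats": {"project_path"},
    "neuralmind_benchmark": {"project_path"},
}

def validate_tool_call(tool: str, args: dict) -> list:
    """Return a list of problems; an empty list means the call looks well-formed."""
    if tool not in REQUIRED_ARGS:
        return [f"unknown tool: {tool}"]
    missing = REQUIRED_ARGS[tool] - set(args)
    return [f"missing required argument: {name}" for name in sorted(missing)]
```

For example, validate_tool_call("neuralmind_query", {"project_path": "."}) reports that the required question argument is missing.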
NeuralMind can use NVIDIA NIM (free, 80+ models) for LLM-based question decomposition in recursive queries:
# Get free API key at https://build.nvidia.com
export NVIDIA_API_KEY="nvapi-..."
# Recursive queries will use NVIDIA for decomposition when available
neuralmind_recursive_query(project_path=".", question="How does auth work?")
Base URL: https://integrate.api.nvidia.com/v1 (OpenAI-compatible)
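Because the endpoint is OpenAI-compatible, any OpenAI-style HTTP client works against it. The sketch below uses only the standard library; the decomposition prompt is illustrative, not NeuralMind's actual internal prompt:

```python
import json
import os
import urllib.request

NIM_CHAT_URL = "https://integrate.api.nvidia.com/v1/chat/completions"

def decomposition_payload(question: str,
                          model: str = "meta/llama-3.3-70b-instruct") -> dict:
    """Build an OpenAI-style chat request asking for sub-questions.

    The system prompt is an illustrative stand-in for NeuralMind's own.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Decompose the question into 2-4 focused "
                        "sub-questions, one per line."},
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,
    }

def call_nim(payload: dict) -> dict:
    """POST the payload to NVIDIA NIM, authenticating with NVIDIA_API_KEY."""
    request = urllib.request.Request(
        NIM_CHAT_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

if __name__ == "__main__" and "NVIDIA_API_KEY" in os.environ:
    reply = call_nim(decomposition_payload("How does auth work?"))
    print(reply["choices"][0]["message"]["content"])
```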
Recommended models:
- google/gemma-3-4b-it (fast)
- meta/llama-3.3-70b-instruct (reliable)
- qwen/qwen3-coder-480b-a35b-instruct (specialist)

Add NeuralMind to your CI pipeline:
# .github/workflows/neuralmind.yml
name: Update NeuralMind Index

on:
  push:
    branches: [main]
    paths:
      - '**.py'
      - '**.js'
      - '**.ts'

jobs:
  update-index:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Install dependencies
        run: |
          pip install graphify neuralmind

      - name: Update knowledge graph
        run: graphify update .

      - name: Build NeuralMind index
        run: neuralmind build .

      - name: Run benchmark
        run: neuralmind benchmark . --json > benchmark.json

      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: neuralmind-index
          path: |
            graphify-out/graph.json
            graphify-out/neuralmind_db/
            benchmark.json
Validate NeuralMind setup before commits:
# .pre-commit-config.yaml
repos:
  - repo: local
    hooks:
      - id: neuralmind-check
        name: Check NeuralMind Index
        entry: bash -c 'neuralmind stats . || echo "Warning: NeuralMind index not built"'
        language: system
        pass_filenames: false
        always_run: true
Create .vscode/tasks.json:
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "NeuralMind: Build Index",
      "type": "shell",
      "command": "neuralmind build ${workspaceFolder}",
      "problemMatcher": [],
      "group": "build"
    },
    {
      "label": "NeuralMind: Wake-up Context",
      "type": "shell",
      "command": "neuralmind wakeup ${workspaceFolder}",
      "problemMatcher": [],
      "presentation": {
        "reveal": "always",
        "panel": "new"
      }
    },
    {
      "label": "NeuralMind: Query",
      "type": "shell",
      "command": "neuralmind query ${workspaceFolder} \"${input:question}\"",
      "problemMatcher": [],
      "presentation": {
        "reveal": "always",
        "panel": "new"
      }
    },
    {
      "label": "NeuralMind: Benchmark",
      "type": "shell",
      "command": "neuralmind benchmark ${workspaceFolder}",
      "problemMatcher": [],
      "presentation": {
        "reveal": "always",
        "panel": "new"
      }
    }
  ],
  "inputs": [
    {
      "id": "question",
      "type": "promptString",
      "description": "Enter your question about the codebase"
    }
  ]
}
Add to keybindings.json:
[
  {
    "key": "ctrl+shift+n w",
    "command": "workbench.action.tasks.runTask",
    "args": "NeuralMind: Wake-up Context"
  },
  {
    "key": "ctrl+shift+n q",
    "command": "workbench.action.tasks.runTask",
    "args": "NeuralMind: Query"
  }
]
In JetBrains IDEs, configure an External Tool with:
Name: NeuralMind Query
Program: neuralmind
Arguments: query $ProjectFileDir$ "$Prompt$"
Working directory: $ProjectFileDir$
#!/bin/bash
# update_knowledge.sh - Update graphify and NeuralMind
set -e

PROJECT_PATH="${1:-.}"
FORCE_REBUILD="${2:-false}"

echo "Updating knowledge system for: $PROJECT_PATH"

# Update graphify
echo "Running graphify update..."
graphify update "$PROJECT_PATH"

# Build NeuralMind
echo "Building NeuralMind index..."
if [ "$FORCE_REBUILD" = "true" ]; then
    neuralmind build "$PROJECT_PATH" --force
else
    neuralmind build "$PROJECT_PATH"
fi

# Show stats
echo ""
echo "=== Index Statistics ==="
neuralmind stats "$PROJECT_PATH"

echo ""
echo "Done!"
#!/bin/bash
# query_and_copy.sh - Query and copy result to clipboard
PROJECT_PATH="$1"
QUESTION="$2"

if [ -z "$PROJECT_PATH" ] || [ -z "$QUESTION" ]; then
    echo "Usage: $0 <project_path> <question>"
    exit 1
fi

RESULT=$(neuralmind query "$PROJECT_PATH" "$QUESTION")
echo "$RESULT"

# Copy to clipboard
if command -v pbcopy &> /dev/null; then
    echo "$RESULT" | pbcopy
    echo -e "\n[Copied to clipboard]"
elif command -v xclip &> /dev/null; then
    echo "$RESULT" | xclip -selection clipboard
    echo -e "\n[Copied to clipboard]"
fi
#!/usr/bin/env python3
"""Process multiple projects with NeuralMind."""
from pathlib import Path
from neuralmind import NeuralMind

def process_projects(project_paths: list, questions: list):
    """Process multiple projects with common questions."""
    results = []
    for project_path in project_paths:
        path = Path(project_path)
        if not path.exists():
            print(f"Skipping {project_path}: not found")
            continue
        print(f"\nProcessing: {project_path}")
        try:
            mind = NeuralMind(str(path))
            mind.build()
            project_results = {
                'project': project_path,
                'stats': mind.get_stats(),
                'queries': []
            }
            for question in questions:
                result = mind.query(question)
                project_results['queries'].append({
                    'question': question,
                    'tokens': result.budget.total,
                    'reduction': result.reduction_ratio
                })
            results.append(project_results)
        except Exception as e:
            print(f"Error processing {project_path}: {e}")
    return results

if __name__ == '__main__':
    projects = [
        '/path/to/project1',
        '/path/to/project2',
    ]
    questions = [
        "How does authentication work?",
        "What are the main API endpoints?",
    ]
    results = process_projects(projects, questions)

    # Print summary
    for r in results:
        print(f"\n{r['project']}: {r['stats']['node_count']} nodes")
        for q in r['queries']:
            print(f"  - {q['question'][:40]}... ({q['tokens']} tokens, {q['reduction']:.1f}x)")
#!/usr/bin/env python3
"""Watch for changes and auto-rebuild NeuralMind index."""
import sys
import time
import subprocess
from pathlib import Path
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class CodeChangeHandler(FileSystemEventHandler):
    def __init__(self, project_path: str):
        self.project_path = project_path
        self.last_rebuild = 0
        self.debounce_seconds = 5

    def on_modified(self, event):
        if event.is_directory:
            return
        # Only watch code files
        extensions = {'.py', '.js', '.ts', '.java', '.go', '.rs'}
        if Path(event.src_path).suffix not in extensions:
            return
        # Debounce
        current_time = time.time()
        if current_time - self.last_rebuild < self.debounce_seconds:
            return
        self.last_rebuild = current_time
        print(f"Change detected: {event.src_path}")
        self.rebuild()

    def rebuild(self):
        print("Rebuilding knowledge graph...")
        subprocess.run(['graphify', 'update', self.project_path], check=True)
        print("Rebuilding NeuralMind index...")
        subprocess.run(['neuralmind', 'build', self.project_path], check=True)
        print("Done!\n")

if __name__ == '__main__':
    project_path = sys.argv[1] if len(sys.argv) > 1 else '.'
    event_handler = CodeChangeHandler(project_path)
    observer = Observer()
    observer.schedule(event_handler, project_path, recursive=True)
    observer.start()
    print(f"Watching {project_path} for changes...")
    print("Press Ctrl+C to stop\n")
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()