
A Model Context Protocol (MCP) Server for Microsoft Paint
Why did I do this? I have no idea, honest, but it now exists. It has been over 10 years since I last had to use the Win32 API, and part of me was slightly curious about how the Win32 interop works with Rust.
Anywhoooo, below you'll find the primitives that can be used to connect Microsoft Paint to Cursor or Claude Desktop and let them draw in Microsoft Paint. Here's the source code.
I'm not saying it's great quality or in any way feature complete; this is about as low-effort as possible, as it's not a serious project. If you want to take ownership of it and turn it into a 100% complete meme, get in touch. It was created using my /stdlib + /specs technical patterns to drive the LLM towards successful outcomes (aka "vibe coding").

/stdlib

/specs
If you have read the above posts (thanks!), hopefully you now understand that LLM outcomes can be programmed. Any issue in the code above could therefore have been solved through additional programming: better prompting during the stdlib + specs phase, and driving an evaluation loop.
show me
how does this work under the hood?
To answer that, I must first explain what the Model Context Protocol is about, as everyone seems to be buzzing about it at the moment. Folks are declaring it "the last API you will ever write" (which curmudgeons such as myself have heard N times before) or "the USB-C of APIs", but none of those explanations hits home for a developer tooling engineer.
To MCP or not to MCP, that's the question. Lmk in comments
— Sundar Pichai (@sundarpichai) March 30, 2025
First and foremost, MCP is a specification that describes how LLMs can make remote procedure calls (RPC) to tools that live outside the LLM itself.
There are a couple of transport implementations (JSON-RPC over STDIO and JSON-RPC over HTTP), but the specification is rapidly evolving, so it's not worth covering them here. Refer to https://spec.modelcontextprotocol.io/specification/2025-03-26/ for the latest specification, and see the article below to understand what this all means from a security perspective...
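To make the RPC framing concrete, here's a minimal sketch of one wire exchange, written in Rust with serde_json. The draw_line tool and its arguments are hypothetical stand-ins, not the actual Paint server's interface:

```rust
// A minimal sketch of MCP's JSON-RPC 2.0 framing, using serde_json.
// The tool name and arguments are hypothetical, not the real Paint
// server's interface.
use serde_json::json;

fn main() {
    // What a client writes to the server's STDIN to invoke a tool:
    let request = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "draw_line", // hypothetical tool
            "arguments": { "x1": 10, "y1": 10, "x2": 200, "y2": 200 }
        }
    });

    // The shape of the server's reply on STDOUT:
    let response = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "result": {
            "content": [{ "type": "text", "text": "line drawn" }]
        }
    });

    println!("{request}");
    println!("{response}");
}
```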
Instead, let's focus on the fundamentals for engineers who seek to automate software authoring—tools and tool descriptions—because I suspect these foundational concepts will last forever.
so, what is a tool?
A tool is an external component that provides context to an LLM and can perform actions based on the LLM's output. Tools can invoke other tools, chained together much like POSIX pipes. To make things even more complicated, a tool doesn't have to involve the LLM at all.
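On the server side, a tool is ultimately just a function that gets dispatched when a tools/call request arrives. A rough sketch (the tool names, arguments, and the elided Win32 interop are all illustrative, not the actual implementation):

```rust
// Sketch of a server-side dispatcher: each arm is one single-purpose
// tool. Note that nothing here touches the LLM -- the model only ever
// sees the tool descriptions and the textual results.
use serde_json::Value;

fn handle_tool_call(name: &str, args: &Value) -> Result<String, String> {
    match name {
        "draw_line" => {
            let x1 = args["x1"].as_i64().ok_or("missing x1")?;
            let y1 = args["y1"].as_i64().ok_or("missing y1")?;
            // ... Win32 interop (finding the Paint window, sending
            // mouse input) would happen here ...
            Ok(format!("drew line starting at ({x1}, {y1})"))
        }
        // A tool that performs no "AI" work at all -- it just reports
        // state the model can feed into its next tool call, chaining
        // tools together like a POSIX pipeline.
        "get_canvas_size" => Ok("1024x768".to_string()),
        other => Err(format!("unknown tool: {other}")),
    }
}

fn main() {
    let args = serde_json::json!({ "x1": 10, "y1": 10 });
    println!("{:?}", handle_tool_call("draw_line", &args));
}
```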
so, what is a tool prompt?
A tool prompt defines how and when an LLM should interpret and use a tool. It's a "rulebook" describing how the AI should process and respond to inputs, and it should be long and wordy. There's no right answer to "what is the best prompt?"; one can only determine this through experimentation (i.e. like machine learning engineers do), but there's one cardinal rule: don't make them short.
I think you should be making your tool descriptions much much longer. They are like system prompts.
— Quinn Slack (@sqs) February 25, 2025
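For a sense of scale, here's a hypothetical long-form description for the Paint server's rectangle tool, written like a miniature system prompt rather than a one-line docstring (the tool and its rules are illustrative):

```rust
// Hypothetical tool prompt for a draw_rectangle tool -- deliberately
// wordy, with usage guidance and guardrails, like a system prompt.
const DRAW_RECTANGLE_DESCRIPTION: &str = "\
Draws a rectangle on the active Microsoft Paint canvas.

Use this tool whenever the user asks for boxes, frames, squares, or any \
four-sided shape. Coordinates are in pixels, with (0, 0) at the top-left \
corner of the canvas.

IMPORTANT:
- Always call get_canvas_size first and keep every coordinate in bounds.
- Prefer composing several simple shapes over one complex freehand path.
- If the user gives no dimensions, choose something clearly visible \
(at least 100x100 pixels) instead of asking a clarifying question.";

fn main() {
    println!("{DRAW_RECTANGLE_DESCRIPTION}");
}
```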
example: how Claude Code creates pull-requests
Right now, the best example of a finely tuned MCP tool prompt is inside of Claude Code. Below is the prompt Anthropic uses to create pull requests with GitHub.
I've added ✨emojis✨ to draw your attention to key aspects—notice how there are two tools (the Bash tool and the pull-request tool) and how the prompt chains them together...
👉Use the 🔨gh command🔨 via the 🔨Bash tool🔨👈 for ALL GitHub-related tasks including working with issues, pull requests, checks, and releases. 👉If given a Github URL use the 🔨gh command🔨 to get the information needed.👈
IMPORTANT: When the user asks you to create a pull request, follow these steps carefully:
1. Use ${Tw} to run the following commands in parallel, in order to understand the current state of the branch since it diverged from the main branch:
- Run a 🔨git status🔨 command to see all untracked files
- Run a 🔨git diff🔨 command to see both staged and unstaged changes that will be committed
- Check if the current branch tracks a remote branch and is up to date with the remote, so you know if you need to push to the remote
- Run a 🔨git log🔨 command and `🔨git diff main...HEAD🔨` to understand the full commit history for the current branch (from the time it diverged from the `main` branch)
2. Analyze all changes that will be included in the pull request, making sure to look at all relevant commits (NOT just the latest commit, but ALL commits that will be included in the pull request!!!), and draft a pull request summary. Wrap your analysis process in <pr_analysis> tags:
<pr_analysis>
- List the commits since diverging from the main branch
- Summarize the nature of the changes (eg. new feature, enhancement to an existing feature, bug fix, refactoring, test, docs, etc.)
- Brainstorm the purpose or motivation behind these changes
- Assess the impact of these changes on the overall project
- Do not use tools to explore code, beyond what is available in the git context
- Check for any sensitive information that shouldn't be committed
- Draft a concise (1-2 bullet points) pull request summary that focuses on the "why" rather than the "what"
- Ensure the summary accurately reflects all changes since diverging from the main branch
- Ensure your language is clear, concise, and to the point
- Ensure the summary accurately reflects the changes and their purpose (ie. "add" means a wholly new feature, "update" means an enhancement to an existing feature, "fix" means a bug fix, etc.)
- Ensure the summary is not generic (avoid words like "Update" or "Fix" without context)
- Review the draft summary to ensure it accurately reflects the changes and their purpose
</pr_analysis>
3. Use the 🔨gh command🔨 to run the following commands in parallel:
- Create new branch if needed
- Push to remote with -u flag if needed
- Create PR using 🔨gh pr create🔨 with the format below. Use a HEREDOC to pass the body to ensure correct formatting.
<example>
🔨gh pr create --title "the pr title" --body "$(cat <<'EOF'🔨
## Summary
<1-3 bullet points>
## Test plan
[Checklist of TODOs for testing the pull request...]
🤖 Generated with [${T2}](${aa})
EOF
)"
</example>
Important:
- NEVER update the git config
- Return an empty response - the user will see the gh output directly
# Other common operations
- View comments on a Github PR: 🔨gh api repos/foo/bar/pulls/123/comments🔨
tools + tool prompts in action

how do I use this knowledge to automate software development at my company?
MCP is an important concept for any engineer serious about learning how to orchestrate their job function - especially if you are using Claude Code, Cursor, Cline, or Windsurf and aren't satisfied with their outcomes.
The /stdlib pattern will only get you so far. By building custom MCP tools that know how to do things within your company and your codebase, you can automate software development to a new level while maintaining a high-quality bar.

I can see a future where each tool is purchased from one or more vendors, but every codebase at every company is somewhat unique, so for best results internal tooling engineers should focus on building out their own MCP tools (everything except the edit tool - purchase that instead) that use the following techniques:
- Using the LLM context window to evaluate outcomes and generate code, by controlling exactly what gets injected into that window.
- Not using the LLM context window as a hammer. If flow control/decision-making can be achieved without involving an LLM, then do it.
- Tool call chaining - similar to the Claude Code pull-request tool description above, where many single-purpose tools that each do one job well (the POSIX philosophy) are composed to achieve bigger and better outcomes.
If you drive the above in a while(true) loop, with bespoke MCP tools that understand your codebase, coding conventions and company practices, you end up with a very disruptive and powerful primitive that can automate whole classes of software development at a company…
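A hedged sketch of that driver loop, in Rust. call_agent and run_test_suite are hypothetical stand-ins for your own MCP tooling; the point is that the evaluation and the stop/continue decision are plain code (no LLM), and only the failure report gets injected back into the context window:

```rust
// A sketch of the while(true) driver: generate, evaluate with
// deterministic tooling, feed the failures back in as context.
// call_agent and run_test_suite are hypothetical stand-ins.
fn main() {
    let mut feedback = String::new();
    loop {
        let patch = call_agent("implement the spec", &feedback); // LLM step
        let report = run_test_suite(&patch); // plain code, no LLM involved
        if report.passed {
            break; // the outcome met the quality bar; stop burning tokens
        }
        feedback = report.failures; // program the next attempt
    }
}

struct Report {
    passed: bool,
    failures: String,
}

fn call_agent(_task: &str, _feedback: &str) -> String {
    todo!("call your LLM agent of choice here")
}

fn run_test_suite(_patch: &str) -> Report {
    todo!("run tests/linters deterministically and collect failures")
}
```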

As a software engineer, I now truly understand how taxi drivers felt when venture capital came after them, because our time is now. In the end, Uber won due to convenience.
Automating software will happen because it makes financial sense. Once one company makes agents (and agent supervisors) purchasable with a credit card, all companies must adopt them, because their competitors will.
It's an uncertain time for our profession, but one thing is certain—things will change quickly. Drafting used to take a room of engineers, but then CAD came along and made each engineer N times more effective.

And after that transition, architects still existed - just as software engineers will still exist, and companies will need software engineers to:
- Cut problems down into smaller problems.
- Program the vibe coders (agents and sub-agents).
- Program the agent supervisors.
- Own the outcome of the resulting generated code and perform code reviews.
But the days of artisanal hand-crafted commits are over...
