Intro
Over the past few days, I've been exploring how to design intelligent agents using large language models (LLMs). What started as a simple weather bot quickly turned into a deep dive into tool orchestration, modularity, and the infrastructure decisions that separate demos from production-grade systems. Here's a snapshot of what I'm learning.

1. Prompting Alone Isn't Enough

While LLMs are powerful, they can't do everything. Some tasks require real-time data, structured logic, or external APIs. Prompting is great for nuance and natural interaction. But as soon as reliability, modularity, or auditability matters, prompting alone breaks down.

2. MCP: The Protocol Layer for Tool Use

Anthropic's Model Context Protocol (MCP) introduces a clean separation between model and tool. Instead of hardcoding API calls in your app, MCP servers expose structured capabilities ("tools") via JSON-RPC. LLM clients call these tools and use the structured output as part of their reasoning.

This allows for:

  • Secure, auditable tool execution
  • Interoperability between agents and tools
  • Swappable, versioned tool logic
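To make the model/tool separation concrete, here's a rough sketch of the JSON-RPC 2.0 envelope an MCP client sends to invoke a tool, and the kind of structured reply it gets back. The tool name `get_weather` and its arguments are hypothetical examples, and real servers first advertise their tools via `tools/list` — this just shows the shape of the wire format.

```python
import json

# Hypothetical tool call: the client names a tool and passes structured
# arguments instead of the app hardcoding an API call.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",            # assumed tool name
        "arguments": {"city": "Berlin"},  # assumed argument schema
    },
}

# What a server's reply might look like: structured content the LLM
# client can feed back into its reasoning loop.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "18°C, partly cloudy"}],
    },
}

wire = json.dumps(request)  # what actually travels over stdio/HTTP
print(json.loads(wire)["method"])  # -> tools/call
```

Because every call is an explicit, inspectable message like this, you get the auditability and swappability for free: log the envelopes, and replace the server behind a tool name without touching the client.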

3. LangChain vs LangGraph vs MCP: Who Does What?

Here's the breakdown:

  • LangChain: The glue layer. Handles prompts, memory, output parsing, tool wrappers.
  • LangGraph: Orchestrates flow. State machines and control logic for complex agents.
  • MCP: The tool interface. Exposes tools that the agent can call in a standard way.

These layers compose: LangGraph uses LangChain components inside its nodes, LangChain can call MCP tools, and MCP tools can be swapped in and out without changing the agent logic.
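To illustrate the division of labor, here's a hand-rolled toy state machine in the spirit of LangGraph — this is *not* the real LangGraph API, just the concept: nodes transform a shared state dict, and edges decide which node runs next. The node names and stubbed behavior are invented for the example.

```python
# Toy orchestration sketch (not the LangGraph API). Nodes are plain
# functions over a shared state dict; EDGES defines the control flow.

def plan(state):
    # In a real agent, a LangChain-layer node would call the LLM here.
    state["plan"] = ["fetch", "answer"]
    return state

def fetch(state):
    # In a real agent, this node would invoke an MCP tool.
    state["data"] = "18°C in Berlin"
    return state

def answer(state):
    state["output"] = f"It's {state['data']}."
    return state

NODES = {"plan": plan, "fetch": fetch, "answer": answer}
EDGES = {"plan": "fetch", "fetch": "answer", "answer": None}

def run(start, state):
    """Walk the graph from `start` until a node has no outgoing edge."""
    node = start
    while node is not None:
        state = NODES[node](state)
        node = EDGES[node]
    return state

final = run("plan", {})
print(final["output"])  # -> It's 18°C in Berlin.
```

The point of the sketch: the orchestration layer only knows about state and edges. What happens *inside* a node (prompting, tool calls) belongs to the layers below it.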

4. Don't Turn Everything Into a Tool

Yes, tools are reusable and powerful. But if you turn everything into a tool, your system becomes unmaintainable. Use tools for:

  • Deterministic, testable logic
  • Capabilities shared across agents
  • Calls that require external data or permission gating

Keep trivial or fast-changing logic in the prompt or client.
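As a small example of the first criterion, here's what "deterministic, testable logic" looks like as a tool candidate. The function name and the schema format are illustrative assumptions, not tied to any particular framework.

```python
# Sketch: logic worth tool-ifying is deterministic and testable in
# isolation, without an LLM in the loop.

def convert_temperature(celsius: float) -> float:
    """Same input, same output — trivially unit-testable."""
    return celsius * 9 / 5 + 32

# A schema the agent layer could expose for this tool; the exact
# format depends on your server framework (this one is assumed).
TOOL_SPEC = {
    "name": "convert_temperature",
    "inputSchema": {
        "type": "object",
        "properties": {"celsius": {"type": "number"}},
        "required": ["celsius"],
    },
}

assert convert_temperature(100.0) == 212.0  # verified without any model
```

Logic like this earns its place as a tool; a one-off formatting rule that changes weekly does not.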

5. Modularity Enables Intelligence

The real magic of agentic systems is composition:

  • get_weather() ➞ get_outfit() ➞ get_reminder()
  • goal + memory + planner + tools = autonomy

This works only if tools are composable, inspectable, and interchangeable. That's the vision MCP supports.
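The chain above can be sketched with stub implementations — the three function bodies here are invented placeholders, but they show the property that matters: each tool's output is the next tool's input, so any one of them can be swapped out without touching the others.

```python
# Stubbed composition of the get_weather -> get_outfit -> get_reminder
# chain. Implementations are hypothetical; only the interfaces matter.

def get_weather(city: str) -> dict:
    return {"city": city, "temp_c": 8, "rain": True}  # stubbed data

def get_outfit(weather: dict) -> str:
    return "raincoat" if weather["rain"] else "t-shirt"

def get_reminder(outfit: str) -> str:
    return f"Reminder: bring your {outfit} today."

# The "planner" here is just function composition.
message = get_reminder(get_outfit(get_weather("Berlin")))
print(message)  # -> Reminder: bring your raincoat today.
```

In an agentic system, a planner would choose and sequence these calls at runtime instead of hardcoding the pipeline — but the composability contract is the same.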

Final Thought

I'm no longer thinking about LLMs as chatbots. I'm thinking of them as orchestrators of context, logic, and tools. Building agentic systems is a different mindset — one that requires real software design discipline. It's exciting, overwhelming, and exactly where the frontier is.