
Over the past few weeks, we’ve spoken with many platform teams about how they’re experimenting with AI in their developer portals and in their workflows. While there’s a lot of experimentation in various directions, one pattern is already abundantly clear: AI isn’t a future add-on, it’s already an expectation, both for platform teams and broader engineering organizations.
But like everything in (platform) engineering, adoption isn’t linear. It’s messy, exploratory, and sometimes wildly inventive. Teams are hacking together tools, wiring up LLMs, and figuring out what “AI” actually means in the day-to-day of developer experience and platform engineering. It’s the wild west at the moment, and we thought we’d share some of our observations from the front lines.
What Platform Teams Are Trying
We’ve seen a few consistent experiments in the wild:
- RAG over TechDocs: Letting engineers ask natural language questions over their internal documentation using Retrieval Augmented Generation (RAG). More than one customer has built a version of this themselves using OpenAI and their TechDocs content, wired through the developer portal UI or third-party integration (Slack is especially popular).
- Slack + Catalog Q&A: “Who owns this service?” is still one of the most common questions. Several teams have built Slack bots to pull ownership annotations from the catalog, and some are layering in code insights too - like linking to recent deployments or PRs.
- Agent-like automations: Some teams are building agent workflows and expressing interest in agent-driven automations. For example, the idea of automatically creating Jira tickets for failing scorecards has come up in multiple conversations as a natural evolution from today’s Tech Insights visibility.
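To make the Slack + Catalog Q&A pattern concrete, here is a minimal sketch of an ownership-lookup bot. Everything in it is illustrative: the catalog data lives in an in-memory dict and the service names are invented, where a real bot would query the developer portal’s API.

```python
# Minimal sketch of a Slack-style "who owns this service?" bot.
# CATALOG stands in for a real developer-portal catalog API;
# all entries here are hypothetical examples.
import re

CATALOG = {
    "payments-service": {"owner": "team-payments", "recent_pr": "#482"},
    "streetlights-api": {"owner": "team-iot", "recent_pr": "#97"},
}

def answer(question: str) -> str:
    # Pull the service name out of questions like "who owns payments-service?"
    match = re.search(r"who owns ([\w-]+)", question.lower())
    if not match:
        return "Sorry, I can only answer ownership questions right now."
    name = match.group(1)
    entity = CATALOG.get(name)
    if entity is None:
        return f"I couldn't find '{name}' in the catalog."
    return f"{name} is owned by {entity['owner']} (latest PR: {entity['recent_pr']})."
```

Layering in code insights, as some teams are doing, is then just a matter of joining more fields (deployments, PRs) into the same response.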
What’s Hard Right Now
Despite the momentum, we’re also hearing consistent friction:
- Context engineering: AI outputs are only as good as the context they get. Teams are struggling with what data to pass, how to structure it, and how to keep it fresh across multiple sources. Too much context can overwhelm the model; too little leads to incomplete answers.
- LLM hallucination: Without a structured source of truth, LLMs invent scaffolder actions, make up ownership metadata, or reference outdated or non-existent docs. The quality is unpredictable.
- Tool fragmentation: Documentation lives in TechDocs, Confluence, Notion, Google Docs. Data is siloed. One customer told us: “We want developers to be able to ask a question, and it shouldn’t matter where the answer lives - TechDocs or Confluence or Slack or wherever.”
- Lack of discoverability and reusability: Internal agents are often one-offs. Teams struggle to expose them in a central place. There’s no “catalog of agents” yet - just siloed agents. All the usual caveats here about reusability and discoverability apply.
- Governance and ownership: Once you’ve got three agents running, who owns them? Who updates them when the data model changes? This quickly becomes tech debt if left unstructured.
What We’re Doing About It at Roadie
We’re leaning into this space because we believe AI + platform engineering is a natural fit. Here’s how we’re helping:
1. Cataloging Agents and MCP Servers
As AI agents start creeping into more parts of the internal developer experience - answering questions, triggering automations, guiding workflows - a new kind of infrastructure is quietly taking shape. These aren’t services or systems in the traditional sense, but they’re just as operationally important. And yet, for now, most teams are flying blind.
Right now, AI agents live in the shadows. Someone built a Slack bot last quarter. Who owns it? Someone else built RAG for TechDocs. Where’s the codebase? A third team spun up a tool that opens Jira tickets based on scorecard failures. These AI tools are useful, but they’re not documented, discoverable, or governable. Nobody knows who owns them. Nobody knows if they follow security and reliability best practices, or even if they still work.
If this sounds familiar, it’s because it’s the same chaos we saw with microservices before developer portals came along. So here’s the idea: what if we treat internal agents the same way we treat services or systems? What if we give them a proper home in the catalog, with metadata, ownership, links to their code, a purpose tag, and maybe even their scopes and capabilities?
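One way this could look, borrowing the familiar catalog-info.yaml conventions from Backstage-style catalogs: the `ai-agent` type, the tags, and every field value below are illustrative assumptions, not a finalized schema.

```yaml
# Illustrative sketch only: modeling an internal agent as a catalog entity.
# The type, tags, and links below are assumptions, not a settled schema.
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: techdocs-rag-bot
  description: Answers natural-language questions over internal TechDocs.
  tags:
    - ai-agent
    - rag
  links:
    - url: https://github.com/example-org/techdocs-rag-bot
      title: Source code
spec:
  type: ai-agent        # hypothetical component type for agents
  lifecycle: experimental
  owner: platform-team
```

Once an agent has an entry like this, the usual catalog machinery (ownership, search, scorecards) applies to it for free.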
We’ve already started floating this model in conversations with customers. The feedback is consistent: “Yes, please. We need to know what exists, who owns it, what it does, and whether or not we can trust it. It’s maybe not a huge problem now that we only have two or three, but it’s going to be an issue soon enough.”
And this isn’t just for visibility. Once agents are modeled in the catalog, a lot of downstream benefits open up:
- Discoverability: Engineers can browse or search for “agents that help with onboarding” or “Slack bots connected to Roadie.”
- Governance: Platform teams can track which agents are connected to which datasets, where PII might flow, or whether an agent is deprecated.
- Ownership and auditing: Just like services, agents need owners. Catalog entries can help enforce that.
- Integration: Once agents are cataloged, they can show up in scorecards, dependency graphs, and documentation - just like any other component.
This is an initial observation, but it fits a broader pattern: as AI agents become part of the developer experience fabric, they deserve the same first-class treatment as other pieces of platform infrastructure.
Any mature IDP can help by making agents first-class citizens in the developer ecosystem: discoverable, governed, and owned just like services. When agents are in the catalog, they’re easier to monitor, trust, evolve, and reuse. At Roadie, we’re building native support for this because we see it becoming table stakes for running AI at scale.
2. MCP Server: Making Roadie’s Metadata AI-Ready
One of the biggest issues platform teams run into when trying to use LLMs is that the data they need isn’t structured or accessible in a way that AI can reliably work with. You end up with hallucinations, brittle prompts, and tools that kind of work, sometimes, if you’re lucky. And that’s not good enough, especially for workflows where trust and correctness matter.
That’s why we’re building MCP (Model Context Protocol) servers into Roadie. Roadie is already the canonical source of truth for your software, so the idea is simple: make everything inside Roadie (your catalog, your scorecards, your API specs, your scaffolder actions) available to other tools, including AI agents and copilots, through a structured and queryable API surface.
The use cases are compelling:
- Developers working in their IDE can ask a copilot to “generate a Python client for the Streetlights API” and have the LLM automatically retrieve the correct OpenAPI spec via MCP.
- Slack agents that can respond to questions like “Who owns this service?” by pulling ownership directly from Roadie’s catalog, without users having to context switch and log into another window.
- Agents that can look up Tech Insights data (like whether a service is failing a scorecard) and take action or report it, without needing human intervention.
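To give a feel for the shape of this, here is a deliberately stripped-down, SDK-free sketch of an MCP-style tool surface over catalog metadata. The tool registry, catalog data, and function names are all hypothetical stand-ins; a real server would use an MCP SDK and speak the actual protocol.

```python
# Simplified stand-in for an MCP-style tool surface over portal metadata.
# A real server would use an MCP SDK and speak the protocol; this sketch
# only shows the shape: named tools an agent can discover and invoke
# with structured arguments.

CATALOG = {  # hypothetical catalog data
    "streetlights-api": {
        "owner": "team-iot",
        "openapi_spec": "https://example.com/specs/streetlights.yaml",
        "scorecards": {"production-readiness": "failing"},
    },
}

TOOLS = {}

def tool(fn):
    """Register a function as a callable, discoverable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_owner(service: str) -> str:
    return CATALOG[service]["owner"]

@tool
def get_openapi_spec(service: str) -> str:
    return CATALOG[service]["openapi_spec"]

@tool
def get_scorecard_status(service: str, scorecard: str) -> str:
    return CATALOG[service]["scorecards"][scorecard]

def call_tool(name: str, **kwargs):
    # An agent resolves a tool by name, then calls it with structured args.
    return TOOLS[name](**kwargs)
```

The point of the pattern: the LLM never guesses at ownership or specs; it calls a named tool and gets back the catalog’s answer.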
An IDP should act as the live, structured source of truth that AI agents and developer tools can trust, in much the same way it already serves human developers. Exposing this data through standard protocols like MCP means your automation and AI layers always work from the latest, correct information. Roadie has already begun implementing this so our customers’ portals can be both the human and machine-facing source of truth.
3. Multi-Source RAG: Unifying Your Docs into One Knowledge Graph
Ask any platform team what slows down developer onboarding, and you’ll hear the same thing: “We have docs, but they’re all over the place.” There’s TechDocs, Confluence, internal wikis, Notion, Google Docs, even Slack threads. It’s no wonder teams are looking at RAG to unify this mess and make information queryable with natural language.
We’re hearing from teams who want their developers to be able to ask questions like “How do I provision a new database?” and get a real answer, without needing to know where that answer lives!
But this only works if the RAG system can reach across all your documentation sources. Most tools today are single-source: they might work over TechDocs, but not Confluence. Or vice versa. That’s why we’re looking at multi-source RAG support as part of Roadie’s broader AI strategy.
The intention is to build a unified knowledge graph that spans TechDocs, Confluence, and other internal documentation sources. That way, developers can get answers in any channel (Slack, their IDE, within Roadie) regardless of where the answer is, and platform teams don’t have to duplicate effort or field the same Slack questions over and over.
We’ve heard teams ask for this explicitly: “We want our engineers to ask a question once and get the right answer no matter which system it’s in.”
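The retrieval half of that idea can be sketched in a few lines: one index spanning several sources, ranked by relevance to the question. The sketch below uses naive word overlap and invented documents purely for illustration; a real system would use embeddings and a vector store.

```python
# Minimal multi-source retrieval sketch: one index spanning several doc
# sources, ranked by naive word overlap. A real system would use embeddings
# and a vector store; the documents here are hypothetical.
import re

DOCS = [
    {"source": "TechDocs",
     "text": "To provision a new database, run the db-provision template in the portal."},
    {"source": "Confluence",
     "text": "On-call rotation schedules are managed in PagerDuty."},
    {"source": "Notion",
     "text": "Database provisioning requires approval from the platform team."},
]

def tokenize(text: str) -> set:
    return set(re.findall(r"[a-z0-9-]+", text.lower()))

def retrieve(question: str, k: int = 2) -> list:
    """Return the top-k documents across all sources by word overlap."""
    q = tokenize(question)
    scored = [(len(q & tokenize(d["text"])), d) for d in DOCS]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for score, d in scored[:k] if score > 0]
```

Because every source feeds the same index, “How do I provision a new database?” surfaces relevant hits from both TechDocs and Notion in one pass, which is exactly the “ask once, answer anywhere” behavior teams are asking for.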
One of the most valuable things an IDP can do is act as the unifying layer for knowledge, irrespective of whether it lives in TechDocs, Confluence, or anywhere else. Multi-source RAG turns the IDP into a single place where developers (and their AI copilots) can find answers without hunting. Roadie’s goal is to be that connective tissue for our customers.
4. Making Scaffolding AI-Ready
Roadie’s Scaffolder is basically a UI form that does stuff - an easy way for developers to fill in parameters and create components or request infrastructure changes. But with the rise of AI-assisted development, platform teams are thinking about how the Scaffolder could become a programmable interface, something that can be invoked by agents or copilots directly from an IDE or chat environment. This reframing turns the Scaffolder into a backend surface, not just a frontend tool.
We’ve heard from teams who want to use LLMs to help author templates from scratch, generate the right parameters, and even guide developers through the scaffolding process conversationally. The challenge they face is that LLMs often hallucinate or suggest unsupported actions - because they don’t have access to the source of truth.
This is where Roadie can help. Since Roadie maintains the canonical configuration of Scaffolder actions per tenant, we can expose that metadata (through an MCP server) in a structured, machine-readable format. That allows developers to point their LLMs at a live, up-to-date catalog of what’s actually supported in their environment.
The result? AI that’s context-aware and grounded in real platform capabilities. Instead of manually digging through docs or trial-and-error authoring, developers get accurate suggestions, faster feedback, and the ability to trigger safer automation from within their IDEs.
Teams are imagining workflows like:
- “Create a new microservice” prompts in Slack that create a template based on the available actions and automatically open a PR.
- IDE-based copilots that suggest valid actions and parameters based on intent as you write a new template.
- AI agents that help debug broken scaffolder flows by understanding action compatibility.
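One way to ground an LLM in what the Scaffolder actually supports is to validate its suggested steps against the live action registry before anything runs. The sketch below illustrates the idea; the action names and parameter sets are invented for the example, and a real implementation would fetch them from the portal (e.g. via MCP) rather than hard-code them.

```python
# Sketch: validate LLM-suggested scaffolder steps against a registry of
# supported actions before executing anything. The action names and
# parameter sets here are invented for illustration.

SUPPORTED_ACTIONS = {  # would be fetched live from the portal
    "fetch:template": {"url", "values"},
    "publish:github": {"repoUrl", "defaultBranch"},
}

def validate_steps(steps: list) -> list:
    """Return human-readable problems; an empty list means the plan is valid."""
    problems = []
    for i, step in enumerate(steps):
        action = step.get("action")
        if action not in SUPPORTED_ACTIONS:
            problems.append(f"step {i}: unsupported action '{action}'")
            continue
        unknown = set(step.get("input", {})) - SUPPORTED_ACTIONS[action]
        if unknown:
            problems.append(f"step {i}: unknown parameters {sorted(unknown)}")
    return problems
```

Rejecting hallucinated actions at this layer is what makes “create a microservice from Slack” safe to automate: the agent can only propose plans the platform can actually execute.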
Any IDP can add value by making its automation surfaces - whether that’s Scaffolder, Ansible playbooks, or other tools - accessible to both humans and AI in a safe, structured way. That means exposing live metadata, enforcing supported actions, and reducing the gap between what teams want to automate and what they can safely automate. At Roadie, we’re starting with Scaffolder but see this as a broader pattern.
What Comes Next?
If your team is experimenting with AI, building internal agents, or even just asking “Where do we start?”, we’d love to chat. You’re not alone; the best practices are being invented right now, and we’d love to explore that together.