
Extend your AI Agent: adding MCP support is still a wild west effort


AI agents are cool and all… but let’s be honest—they get way cooler when they can actually do stuff in the real world. Sure, they can chat, summarize, and spit out ideas, but how do you hook them up to external tools and systems? That’s when the magic happens.

For example, if you ask an AI agent about the current weather, it will hallucinate an answer based on whatever it picked up from its training data. So how do we make a chatbot talk to a weather API to get the current weather instead? It would be really cool to link the AI agent to a weather API. MCP is a protocol that defines how an AI agent and a REST API should communicate with each other.
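To give an impression of what that looks like: an MCP server describes each capability as a "tool" with a name, a description and a JSON schema for its input. A weather tool could be advertised roughly like this (a made-up example, not a real API):

```json
{
  "name": "get_current_weather",
  "description": "Returns the current weather for a city",
  "inputSchema": {
    "type": "object",
    "properties": {
      "city": { "type": "string" }
    },
    "required": ["city"]
  }
}
```

The AI agent reads these tool definitions and decides on its own when to call which tool to answer a prompt.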

That’s the idea I’m chasing with Apie. In my previous article I talked about the concept of engines in Apie and the lack of MVC: you provide domain objects and Apie provides, for example, a REST API on top of them. Wouldn't it be cool if all you needed to connect your domain objects to an AI agent were to require apie/mcp-server as a Composer dependency? I’ve been working on the next big step for my library: adding MCP (Model Context Protocol) support, so a non-technical user can use plain text to do API calls. To my knowledge, no library yet offers an MCP server next to a REST API out of the box without coding.

Picking an MCP package

When I started development of this package I had no experience with, or knowledge of, what MCP is all about or how an AI agent communicates with an MCP server. For that reason it seemed easier to use an existing package for setting up an MCP server. I found two Composer packages that seemed like a good fit:

php-mcp/server

This was the first package I checked. It looks like modern PHP: you write a PHP service class and add some attributes to link it to MCP. While I liked the concept behind it, I could not use it in my case, as I wanted apie/mcp-server to generate the tools based on whatever domain objects are available. I would either need to dive deep into the library or do unconventional things like generating these service classes, which sounded like over-engineering.

logiscape/mcp-sdk-php

When I looked at it, I initially thought very little of this package. It is a direct port of the Python version of the official MCP SDK, and the creator used AI to make the package. The lack of good documentation does not help either. However, it does what it should do, and I could make it work despite my initial impression that the package is of low quality.

Test-driven development

I had little knowledge of how MCP works, and I also knew very little about logiscape/mcp-sdk-php given its sparse documentation. So to me the best approach for making this package was test-driven development. I take the example server code, put it in a symfony/console command, try to run it inside an integration test and...

the test waits forever or crashes if I run it like this:

echo 'Hi!' | php vendor/bin/phpunit
The reason is simple. The console command picks stdin/stdout as the transport layer and uses it in the test. Since 'Hi!' is not valid JSON, let alone a JSON-RPC message, the code crashes. Of course you can write tests that spawn an external PHP process with symfony/process, but it can be made easier.
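For reference, an MCP client is expected to open the conversation with a JSON-RPC 2.0 initialize request on stdin; it looks roughly like this (the clientInfo values here are made up):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "1.0.0" }
  }
}
```

Anything else on stdin, like a plain 'Hi!', is a protocol violation.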

Transport options in MCP

MCP follows the JSON-RPC 2.0 spec to send messages and does this either locally over stdin/stdout, by piping text from a terminal, or remotely by running a server that accepts SSE requests. Both transport options are awkward to use inside tests.
With stdin/stdout transport I need to run the actual test in a separate PHP process, which means no code coverage measurement and the possibility of flaky tests when the machine is under heavy load.
If I use the SSE request transport I cannot test the console command at all. Also, the integration tests currently run my own test application to exercise the library under different frameworks; I would need to add extra functionality to handle SSE requests.
I asked the creator of the package for advice on how to write proper integration or unit tests, but got the response I expected: he tests his code manually.

So I fixed it by making my own transport layer that works with an in-memory array:
class InMemoryTransport implements Transport
{
    private bool $isRunning = false;

    /** @var array<int, JsonRpcMessage> */
    private array $writtenMessages = [];

    /**
     * @param array<int, JsonRpcMessage> $messages
     */
    public function __construct(
        private array $messages
    ) {
    }

    public function readMessage(): ?JsonRpcMessage
    {
        if (!$this->isRunning) {
            throw new StopRunnerException('Transport is not running');
        }
        $res = array_shift($this->messages) ?: null;
        if (empty($this->messages)) {
            $this->isRunning = false;
        }
        return $res;
    }

    public function writeMessage(JsonRpcMessage $message): void
    {
        if (!$this->isRunning) {
            throw new StopRunnerException('Transport is not running');
        }
        $this->writtenMessages[] = $message;
    }

    /** @return array<int, JsonRpcMessage> */
    public function getWrittenMessages(): array
    {
        return $this->writtenMessages;
    }

    public function start(): void
    {
        $this->isRunning = true;
    }

    public function stop(): void
    {
        $this->isRunning = false;
        $this->messages = [];
    }
}
So in the test I make sure the transport class used is this class! And this was big: I can now easily write integration tests without needing an actual AI agent or an awkward test setup. The only thing I did add was the StopRunnerException, because normally the code never stops unless the transport layer throws an error.
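As a sketch of how such a test can drive the transport (the JsonRpcMessage, StopRunnerException and Transport classes below are simplified stand-ins to keep the example self-contained, not the real logiscape/mcp-sdk-php types; the getWrittenMessages() getter is something I added here to inspect the output):

```php
<?php
// Stand-ins for the library types, only so this sketch runs on its own.
final class JsonRpcMessage
{
    public function __construct(public readonly array $payload)
    {
    }
}

final class StopRunnerException extends RuntimeException
{
}

interface Transport
{
    public function readMessage(): ?JsonRpcMessage;
    public function writeMessage(JsonRpcMessage $message): void;
    public function start(): void;
    public function stop(): void;
}

// A compact version of the in-memory transport idea from above.
final class InMemoryTransport implements Transport
{
    private bool $isRunning = false;
    /** @var array<int, JsonRpcMessage> */
    private array $writtenMessages = [];

    /** @param array<int, JsonRpcMessage> $messages */
    public function __construct(private array $messages)
    {
    }

    public function readMessage(): ?JsonRpcMessage
    {
        if (!$this->isRunning) {
            throw new StopRunnerException('Transport is not running');
        }
        $res = array_shift($this->messages) ?: null;
        if (empty($this->messages)) {
            // Queue drained: the transport stops itself, which ends the runner.
            $this->isRunning = false;
        }
        return $res;
    }

    public function writeMessage(JsonRpcMessage $message): void
    {
        if (!$this->isRunning) {
            throw new StopRunnerException('Transport is not running');
        }
        $this->writtenMessages[] = $message;
    }

    /** @return array<int, JsonRpcMessage> */
    public function getWrittenMessages(): array
    {
        return $this->writtenMessages;
    }

    public function start(): void
    {
        $this->isRunning = true;
    }

    public function stop(): void
    {
        $this->isRunning = false;
        $this->messages = [];
    }
}

// The test seeds a scripted conversation and inspects what was written back.
$transport = new InMemoryTransport([
    new JsonRpcMessage(['jsonrpc' => '2.0', 'id' => 1, 'method' => 'initialize']),
    new JsonRpcMessage(['jsonrpc' => '2.0', 'id' => 2, 'method' => 'tools/list']),
]);
$transport->start();
$first = $transport->readMessage();
$transport->writeMessage(new JsonRpcMessage(['jsonrpc' => '2.0', 'id' => 1, 'result' => []]));
$second = $transport->readMessage();
```

In the real package the server runner drives this read/write loop; here the calls are made by hand just to show the bookkeeping. Note that reading the last queued message flips the transport to "not running", so any write after that throws the StopRunnerException and ends the run.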

The real test: integrate it in an agent

So we made a package called apie/mcp-server that uses a third-party library to run an MCP server, and we wrote tests to verify it does not crash. Now let's see what happens if we link it to an AI agent!

Gemini

I started with gemini cli first, because I could easily test it without having to spend money. And... Gemini could not find the application path, so you have to specify the full absolute path to make it work. This is the Gemini config file for setting up the MCP server, located at .gemini/settings.json:
{
  "mcpServers": {
    "Apie": {
      "command": "php",
      "args": ["/var/www/html/bin/console", "apie:mcp-server"],
      "env": {
        "API_KEY": "$MY_API_TOKEN"
      },
      "cwd": "/var/www/html/",
      "timeout": 30000,
      "trust": true
    }
  }
}
Gemini complained that the tool names were not unique. So after fixing the tool names, I could see the available tools!

So the test-driven development approach seemed to work! But then I tried to prompt for anything:
After some searching this seems to be a very common issue with Gemini. The Gemini API is very strict on tool definitions, and with every chat request the entire set of MCP tools is sent to the LLM. Looking through GitHub issues I found that a very common problem is that MCP provides a JSON schema, but the Gemini API refuses any value for "format" that is not "enum" or "date-time". To make matters even more annoying, the validation error is in the response body, but the error message does not display the response body at all! So, based on the information in a GitHub ticket, I simplified the generated schema to make Gemini happy. Sadly it was not enough, and no matter what I tried I could not make it work with Gemini!
There is still an open ticket to improve support. To me it's odd that they validate this at all, because you have very little control over the schema an MCP server generates. And it would benefit me greatly to see why the request is invalid. This shows the immature state of AI agents very clearly! I tried to debug it, even putting console.log statements in the node_modules folder, but had no success in figuring out why Gemini still does not like the tools.
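To illustrate the kind of schema that trips Gemini up (a made-up example, not Apie's actual output): the Gemini API rejects a property like this, because "uuid" is not one of the accepted "format" values:

```json
{
  "type": "object",
  "properties": {
    "id": {
      "type": "string",
      "format": "uuid"
    }
  }
}
```

The simplification boils down to stripping the "format" keyword, so the property becomes a plain {"type": "string"}.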

Codex

Codex is the AI bot written by OpenAI, so I was expecting MCP support. It claims to support it, but like most chatbot claims this seems to be a hallucination, because I could not find anything that made it connect to my MCP command. In fact, when I asked questions about MCP it did not even know about MCP and thought I wanted to connect with the Minecraft protocol for the video game Minecraft!

Vscode + Copilot

Setting it up in the IDE is also easy to do. To add your own MCP server, all you have to do is add a .vscode/mcp.json with all MCP server definitions in it. The IDE also shows a button to start/restart/stop the MCP server and has an error log.
I wanted Vscode to connect with the Apie playground to test this while working on the library. The playground runs inside Docker, so I had to set it up to run through Docker:
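For reference, a sketch of what such a Docker-based entry in .vscode/mcp.json can look like (the service name, paths and compose arguments here are assumptions, not my actual configuration):

```json
{
  "servers": {
    "apie": {
      "type": "stdio",
      "command": "docker",
      "args": ["compose", "exec", "-T", "php", "php", "bin/console", "apie:mcp-server"]
    }
  }
}
```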

But sadly it failed when I tried to start it! The error message seemed to be Docker Compose finding no configuration. If I ran the command myself locally it worked just fine. After some fiddling and a reboot the setup suddenly did work!

It gave me a list of invalid properties (even though I did not find this output very clear):
At least it gives me validation errors, unlike gemini cli! On examining the errors I could finally find out what was wrong, and it has to do with the PHP function json_encode always encoding an empty PHP array as [] and never as {}. Normally you would use stdClass in this case to always encode an empty map as {}, but that breaks the MCP Composer package again, which wants an array. Luckily it also accepts tools without a properties setting, so I remove the entire property if it is an empty array. After making the change and restarting it, I have a working MCP server!
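The quirk in a nutshell, plus the workaround (stripEmptyProperties is a hypothetical helper to show the idea, not the actual Apie code):

```php
<?php
// An empty PHP array serializes as a JSON list, never as a JSON object:
var_dump(json_encode([]));             // string(2) "[]"
var_dump(json_encode(new stdClass())); // string(2) "{}"

// Hypothetical workaround: drop the "properties" key entirely when it is
// an empty array, so the tool schema stays valid for clients that expect
// "properties" to be a JSON object.
function stripEmptyProperties(array $toolSchema): array
{
    if (($toolSchema['properties'] ?? null) === []) {
        unset($toolSchema['properties']);
    }
    return $toolSchema;
}

$schema = ['type' => 'object', 'properties' => []];
echo json_encode(stripEmptyProperties($schema)); // {"type":"object"}
```

PHP also has a JSON_FORCE_OBJECT flag for json_encode, but that turns every array into an object, including genuine lists, so it does not help here.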

I open copilot chat in vscode and switch to 'Agent' mode. I click on the tools box and I have a list of tools:

And it seems to finally work:


I did have to fix some error handling, since Vscode waited forever when an error occurred. Once that was fixed, Vscode handled my MCP server very well.

Copilot still has trouble with polymorphic Apie entities, but I have to admit Copilot helps you get around it and eventually makes a good call.

And of course I need to be logged in to the application to create a user, but again Vscode handles this perfectly. This is great, because MCP in its current state is known to be very weak on security.
I could have tested other agents like Claude and RooCode, but I decided that making it work in Vscode is good enough. I expect similar results with Claude and RooCode anyway.

Conclusion

So what can we conclude from making our own MCP server that uses Apie?
  • First of all, Apie is awesome and it integrates well with AI systems. The Apie integration was actually the easy part of getting everything done.
  • Test-driven development helped me at the start, because I knew nothing about MCP. Even though the tests ran successfully, that was still not enough to make it actually work with an AI agent without changes.
  • Much of the MCP tooling is still very much focused on the 'happy path' and has a terrible developer experience when you hit an error.
  • Many AI-related libraries are vibe coded themselves, or it is easy to see they were built with AI. The number of design issues and easy-to-spot bugs is staggering!
  • Unidirectional asynchronous MCP communication over SSE is limited in PHP frameworks, as frameworks wait for the request to complete. Luckily we do not need it and I have not seen an AI agent that forces it.
  • MCP still has very limited security features, so the best way of running it is locally with the stdio transport.
  • It seems weird that any API refuses the schema of an MCP server, as the user has no control over what schema an MCP server submits.


