I like trying new technology. Not just using it, but poking at it, figuring out how it works, and bending it until it works the way I want it to.
Lately I've been experimenting with AI agents, specifically OpenClaw. And they are pretty impressive. But there’s a problem: I don’t trust them. Not fully anyway. And definitely not with things like sending e-mails in my name.
So naturally, I built something to solve that problem:
👉 https://github.com/ArktIQ-IT/ai-email-gateway
This is AI Email Gateway, a small system designed to let AI agents help me with e-mail without ever giving them access to actually send one.
Because when it comes to AI agents, I still think:
There should always be a human in the middle.
My self-hosted AI agent journey started with OpenClaw, the hyped-up open-source AI agent. I run it in a separate VM and grant it capabilities skill by skill, so it only gets access to exactly what it needs. I've also integrated it with my private Slack, which puts both my smart house and my AI assistant just a message away.
The first thing I tried was letting it analyze logs using the log-dive skill against my Kibana instance, to which I send all my logs from various servers and services.
It was wild.
OpenClaw could navigate messy logs, find patterns, and suggest possible actions far faster than I could ever have done. Fair enough, for simple inquiries I'm faster than the AI, provided that I already have the Kibana interface up and running. But keep in mind, with OpenClaw I can simply query from my mobile phone, via Slack, things like: "Did my last rollout of service X lead to errors in my logs?" And I get replies, and even suggestions on how to fix the problem. Pretty amazing.
That’s when I started thinking (well, not really, but that is what the AI suggested for dramatic effect):
What if it could help with email too?
An AI email assistant sounds great. But the moment you let an AI send or manage email on your behalf, things can get uncomfortable, as the Meta director of AI safety experienced.
AI agents are unpredictable, occasionally confidently wrong, and happy to take action on their own. That’s not a great combination for something like email. I don’t want an AI to send mail in my name, reply on my behalf, or delete anything without my explicit approval. So instead of letting AI send or manage my inbox… I decided it never should.
The solution became the AI Email Gateway. Instead of connecting an AI agent directly to an email server, the agent talks to a controlled gateway.
The gateway can read mail and prepare drafts. What it cannot do is send email, delete email, or get up to anything else creative.
That part is still up to the human. This creates a simple but powerful safety model:
The AI can help.
But you stay in control.
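To make that safety model concrete, here's a minimal sketch of the idea in Python, using the standard library's imaplib. This is not the actual gateway code, and the class and method names are my own invention; the point is simply that the capability surface exposed to the agent contains read and draft operations, and nothing else.

```python
import email.message
import imaplib


class EmailGateway:
    """A deliberately limited mailbox wrapper: it can read and draft, never send.

    Hypothetical sketch -- the real gateway's API may differ.
    """

    def __init__(self, host: str, user: str, password: str, port: int = 993):
        self._imap = imaplib.IMAP4_SSL(host, port)
        self._imap.login(user, password)

    def list_subjects(self, mailbox: str = "INBOX", limit: int = 10) -> list[str]:
        # Read-only select: the agent cannot even mark messages as read.
        self._imap.select(mailbox, readonly=True)
        _, data = self._imap.search(None, "ALL")
        subjects = []
        for num in data[0].split()[-limit:]:
            # BODY.PEEK avoids setting the \Seen flag on the server.
            _, msg_data = self._imap.fetch(num, "(BODY.PEEK[HEADER.FIELDS (SUBJECT)])")
            subjects.append(msg_data[0][1].decode(errors="replace").strip())
        return subjects

    def save_draft(self, to_addr: str, subject: str, body: str) -> None:
        # The only "write" allowed: append a draft for a human to review and send.
        msg = email.message.EmailMessage()
        msg["To"] = to_addr
        msg["Subject"] = subject
        msg.set_content(body)
        self._imap.append("Drafts", "(\\Draft)", None, msg.as_bytes())

    # Note what is absent: no send(), no SMTP connection, no delete().
```

The safety guarantee is structural, not behavioral: there is no send path for the AI to misuse, no matter what it decides to do.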
One important design choice was to not tie this to any specific email provider. The gateway works with any IMAP-enabled mailbox.
So whether you're using Gmail, Outlook, or a self-hosted mail server, it should work just fine as long as the mailbox speaks IMAP.
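That provider-agnostic design is one of the nice things about IMAP: checking whether any given mailbox is reachable takes only a few lines. The hosts below are common public endpoints plus a placeholder; substitute your own provider's.

```python
import imaplib

# Common IMAP endpoints -- examples only, substitute your own provider's host.
PROVIDERS = {
    "gmail": ("imap.gmail.com", 993),
    "outlook": ("outlook.office365.com", 993),
    "self-hosted": ("mail.example.com", 993),  # hypothetical placeholder
}


def speaks_imap(host: str, port: int = 993, timeout: float = 5.0) -> bool:
    """Return True if the server answers IMAP over SSL (no credentials needed)."""
    try:
        conn = imaplib.IMAP4_SSL(host, port, timeout=timeout)
        conn.logout()
        return True
    except OSError:
        return False
```

If this returns True for your mailbox, the gateway has everything it needs to work with it.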
Here’s the funny part. I naturally used AI to build the gateway. And I also used AI to help write this article.
But:
AI helped — a human decided.
I adjusted quite a few things before I was happy with the result. Both with this article, and the gateway code.
AI can assist.
But I still think it shouldn't operate without guardrails and oversight.
When I wanted to integrate this with OpenClaw, I needed to build a ClawHub skill. It's based on Agent Skills, with a twist, it seems.
The challenge? The documentation isn’t really part of AI models’ training data yet. But that turned out to be easy to solve.
I simply pointed an AI at the OpenClaw and ClawHub documentation pages, pasted in an existing OpenClaw skill that describes how to create skills, and that was enough for it to understand the format and help generate the skill.
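For the curious, an Agent Skill is essentially a folder with a SKILL.md file: a YAML frontmatter block (name, description) followed by free-form instructions for the agent. The sketch below is hypothetical, not the published skill, and ClawHub's twist may add fields on top of this:

```markdown
---
name: safer-email-assistant
description: Read mail and prepare drafts through the AI Email Gateway. Never sends.
---

# Safer email assistant

Use the gateway's read and draft endpoints only.
If the user asks you to send an email, explain that sending
is reserved for humans, and save a draft instead.
```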
So:
I used an OpenClaw skill to help write an OpenClaw skill.
Very meta.
The final skill is here:
👉 https://clawhub.ai/remimikalsen/a-safer-email-assistant
The end result is a system where the AI can read my mail and suggest drafts. But it cannot send anything. Only a human can.
That means the worst-case scenario is something like:
“The AI suggested a weird draft.”
Which is perfectly acceptable.
You just delete it.
That’s a very different risk profile than letting an AI fire off emails autonomously.
Is this a groundbreaking idea? Probably not. There might already be similar tools out there. But this one solves my needs, and now it’s available for anyone else who wants it.
It’s simple, transparent, and it keeps the human in control.
If you want to experiment with safer AI email assistants, you can find the project here:
👉 https://github.com/ArktIQ-IT/ai-email-gateway
It’s open source and free to use.
If it helps someone else experiment with AI agents in a safer way, then it has already done its job.
Because AI agents are powerful. But I still prefer having a human in the loop.
This blog post was mainly written by an AI under my oversight, but I read through every word, corrected it here and there, and added my human touch too. I still don't like the tone of voice of AIs, and it's honestly annoying how they use many very short sentences in a row to underline a point, but hey, at least it makes my point come across. And even though the AI did the writing, it's still based on my notes, drafts and ideas. And it was I who hit the submit button.