Make Real
Mahbub Rahman
Available for new projects

Hire a Developer for Custom AI Agents

Build AI that takes action, not just chats.

View My Work

EXECUTIVE SUMMARY

Mahbub Rahman builds custom, autonomous AI agents and workflow automations using tool calling, LangGraph, and specialized APIs to automate complex startup operations.

The Technical Reality

Chatbots are a conversational interface; Agents are state machines. I build deterministic agentic workflows using LangGraph and function calling, allowing LLMs to safely query databases, trigger external APIs, and request human-in-the-loop permission before taking destructive actions.

WHY FOUNDERS COME TO ME

Chat is passive. You already know this.
THE LIMITATION

Chatbots are lazy UX.

Users don't want to chat; they want tasks done. You need an AI agent that can autonomously call external APIs, read databases, and execute workflows in the background.

Action-oriented AI
THE ORCHESTRATION

Multi-step reasoning fails.

When you ask an LLM to do five things, it forgets step three. You need stateful agent architectures (like LangGraph) with memory, human-in-the-loop approvals, and cyclical reasoning.

Deterministic Workflows
THE OUTPUT

Parsing text is a nightmare.

If your app relies on regexing the LLM's response, it will break tomorrow. You need strict Structured Outputs and function calling so the AI returns perfect, typed JSON every time.

100% Typed JSON
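
In practice, "typed JSON" means the LLM's reply is parsed and validated against a schema before your app touches it. Here is a minimal TypeScript sketch of that idea: a hand-rolled type guard stands in for Zod and the provider's structured-output mode, and the `Invoice` shape is purely illustrative.

```typescript
// Illustrative shape the LLM is asked to return (not a real client schema).
interface Invoice {
  customerId: string;
  amountCents: number;
  approved: boolean;
}

// Parse the raw model reply and reject anything that doesn't match the
// schema, instead of regexing text and hoping for the best.
function parseInvoice(raw: string): Invoice {
  const data = JSON.parse(raw); // throws on malformed JSON
  if (
    typeof data.customerId !== "string" ||
    !Number.isInteger(data.amountCents) ||
    typeof data.approved !== "boolean"
  ) {
    throw new Error("LLM output failed schema validation");
  }
  return data as Invoice;
}
```

With a real provider you'd pass the schema to the API so the model is constrained to emit it; the validation step stays as a safety net either way.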

WHAT I BUILD WITH

Tools for autonomy. No hand-offs required.

From database to deployment. I own the whole thing.

AGENT FRAMEWORK
LangGraph
Vercel AI SDK
CrewAI
TOOLING
Function Calling
Structured Outputs
BACKEND
Next.js
REST APIs
Webhooks
INFRASTRUCTURE
PostgreSQL
Redis
Cron Jobs

HOW IT WORKS

Engineering autonomy.

We give LLMs hands and eyes, safely.

01

Tool Definition

The API layer

We define precise, Zod-validated tools (e.g., 'update_crm_record', 'send_email') that the LLM is allowed to call, ensuring it understands the exact parameters required.
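A stripped-down sketch of what a tool registry looks like, with required-parameter checks in plain TypeScript (in production this is Zod schemas wired into the provider's function-calling API; the tool names and fields here are illustrative):

```typescript
// Each tool declares what it does, which parameters it requires, and a
// handler. The agent can only call tools declared in this registry.
type Tool = {
  description: string;
  requiredParams: string[];
  handler: (args: Record<string, unknown>) => string;
};

const tools: Record<string, Tool> = {
  update_crm_record: {
    description: "Update a field on a CRM contact",
    requiredParams: ["contactId", "field", "value"],
    handler: (args) => `updated ${args.contactId}`,
  },
};

// Dispatch a tool call: unknown tools and missing parameters are rejected
// before anything executes.
function callTool(name: string, args: Record<string, unknown>): string {
  const tool = tools[name];
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  for (const p of tool.requiredParams) {
    if (!(p in args)) throw new Error(`Missing parameter: ${p}`);
  }
  return tool.handler(args);
}
```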

02

State Graph Setup

Routing logic

Using LangGraph, we define the decision tree. The agent receives a task, plans the steps, executes a tool, evaluates the result, and loops until the goal is met.
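The loop above, boiled down to a TypeScript sketch: a scripted `decide` function stands in for the real LLM call, and a single `search` tool stands in for the tool layer. The shape is the point: decide, act, feed the observation back, repeat until done (with a step cap so the loop can't run away).

```typescript
// One agent step: either call a tool or declare the task finished.
type Step = { action: "search" | "done"; input: string };

// Plan → act → observe loop. `decide` plays the role of the LLM; each
// tool result is appended to history so the next decision can see it.
function runAgent(
  decide: (history: string[]) => Step,
  tools: { search: (q: string) => string },
  maxSteps = 5
): string[] {
  const history: string[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const step = decide(history);
    if (step.action === "done") {
      history.push(`final: ${step.input}`);
      break;
    }
    const observation = tools.search(step.input);
    history.push(`observed: ${observation}`); // result loops back into context
  }
  return history;
}
```

LangGraph formalizes exactly this: nodes for the model and tools, edges for the routing logic, plus checkpointing so the loop survives restarts.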

03

Safety & Memory

Human-in-the-loop

We implement persistence (so the agent remembers past interactions) and approval gates, requiring human confirmation before the agent executes sensitive actions.
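The approval gate reduces to a small amount of code. This sketch is synchronous for clarity; in a real system the "approver" is an async notification plus a paused, persisted agent state. The tool names are illustrative.

```typescript
// Tools that must never run without explicit human sign-off.
const SENSITIVE = new Set(["delete_record", "send_mass_email"]);

// Stand-in for the real flow: notify a human, persist state, await a
// boolean decision via API.
type Approver = (toolName: string) => boolean;

// Read-only tools pass straight through; sensitive ones are blocked
// unless the approver says yes.
function executeWithGate(
  toolName: string,
  run: () => string,
  approve: Approver
): string {
  if (SENSITIVE.has(toolName) && !approve(toolName)) {
    return `blocked: ${toolName} awaiting approval`;
  }
  return run();
}
```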

COMMON QUESTIONS

Questions founders always ask me.

How can I trust an AI to push the button?

By design, agents cannot execute code they haven't been given explicitly as a 'Tool'. For highly sensitive tools (like sending a mass email or deleting a DB record), I implement a 'Human-in-the-loop' interrupt. The agent pauses its execution state, sends a notification to you, and waits for a boolean approval via an API before proceeding.

What happens when a tool call fails?

The beauty of a graph-based agent architecture is error recovery. If a tool call fails, the error message is fed back into the agent's context window. The agent can read the error, understand what went wrong, and autonomously retry with corrected parameters.
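Stripped of the LLM specifics, that recovery loop is just this (the `attempt` callback stands in for one LLM-plus-tool round trip; the growing `context` array stands in for the model's context window):

```typescript
// Retry loop: every failure is appended to the context so the next
// attempt can see what went wrong and correct its parameters.
function callWithRecovery(
  attempt: (context: string[]) => string,
  maxRetries = 3
): string {
  const context: string[] = [];
  for (let i = 0; i < maxRetries; i++) {
    try {
      return attempt(context);
    } catch (err) {
      context.push(`tool error: ${(err as Error).message}`); // fed back to the model
    }
  }
  throw new Error("exhausted retries");
}
```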

Doesn't this use more LLM calls, and cost more?

Yes. Because an agentic workflow involves a loop of thinking, acting, and observing, a single user request might trigger 3 to 10 underlying LLM calls. We mitigate this by using smaller, faster models (like GPT-4o-mini or Claude 3.5 Haiku) for intermediate reasoning steps, only escalating to larger models for complex logic.
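The escalation policy itself is trivial to express. A sketch, with placeholder model names rather than specific provider IDs:

```typescript
// Kinds of work an agent does inside one user request.
type StepKind = "route" | "summarize" | "extract" | "plan";

// Mechanical steps go to the cheap, fast model; only planning and other
// complex logic escalates to the large model.
function pickModel(kind: StepKind): string {
  return kind === "plan" ? "large-model" : "small-fast-model";
}

// Count how often a run actually pays for the large model.
function countLargeCalls(steps: StepKind[]): number {
  return steps.filter((k) => pickModel(k) === "large-model").length;
}
```

So a five-call run might hit the expensive model only once, which is where most of the cost saving comes from.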

READY?

Let's build something real.

30 minutes. No pitch. No pressure. Just an honest conversation about your project and whether I can actually help.

✓ Free 30-min call ✓ No commitment ✓ You'll know after 1 chat