
How I Deployed My Own AI Chatbot with n8n and a Custom Proxy (2025 Case Study)

Vedad Termiz

Tech Insights & Digital Productivity

I built and deployed a secure AI chatbot using n8n, hosted on a private server and connected through a custom reverse proxy. This post breaks down how I set it up, solved major integration bugs, and hardened it with proper security—so you can build your own without wasting weeks debugging.

🧩 Introduction: Why I Built It

I needed a flexible and private chatbot solution for handling user queries and integrating with automated workflows. Most hosted chatbot services were too restrictive or expensive. So I decided to self-host an AI-powered chatbot using n8n, an open-source automation tool. What seemed like a weekend project… turned into a frustrating multi-week challenge.

💡 The Goal

  • 🧠 AI chatbot powered by OpenAI (GPT-4o)
  • 💬 Chat interface on a custom subdomain
  • 🔐 End-to-end control over security and traffic
  • 🧱 Self-hosted with Docker on a VPS
  • 🌐 Public access only through HTTPS, protected by Basic Auth

🛠️ How I Set It Up

1. VPS and Server Prep

I rented a small VPS, installed Ubuntu, and configured it with:

  • SSH key-only login
  • Disabled root access
  • UFW firewall and Fail2Ban
  • Docker + Docker Compose stack
✅ Tip: Lock down the server before anything else. Bots will start scanning your IP the moment it's live.
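
For reference, the lockdown amounts to a handful of commands on a fresh Ubuntu install. A rough sketch (adjust package names, ports, and service names to your own distro and setup):

# firewall: allow only SSH, HTTP, and HTTPS, then switch it on
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable

# brute-force protection for SSH
sudo apt install fail2ban

# in /etc/ssh/sshd_config, enforce key-only login and no root:
#   PermitRootLogin no
#   PasswordAuthentication no
sudo systemctl restart ssh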

2. Running n8n via Docker

services:
  n8n:
    image: n8nio/n8n                        # official n8n image from Docker Hub
    environment:
      - N8N_BASIC_AUTH_USER=admin           # credentials for the n8n UI (use a real secret)
      - N8N_BASIC_AUTH_PASSWORD=strongpass
      - N8N_PORT=5678
    ports:
      - "127.0.0.1:5678:5678"               # bind to localhost so only the reverse proxy can reach n8n
    volumes:
      - ./data:/home/node/.n8n              # persist workflows and credentials across restarts
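
Depending on your n8n version, you'll likely also want to tell n8n its public URL so the webhook links it generates point at the proxied domain rather than localhost. A sketch of the extra lines I'd add to the service (the domain is the same placeholder used in the widget snippet below):

    environment:                            # additions to the environment list above
      - N8N_HOST=your-subdomain.com         # public hostname served by the proxy
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://your-subdomain.com/   # base URL n8n uses when building webhook URLs
    restart: unless-stopped                 # bring n8n back up after reboots or crashes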

3. Reverse Proxy with NGINX

location /webhook/chat {
  proxy_pass http://localhost:5678;          # forward chat traffic to the n8n container
  proxy_http_version 1.1;                    # HTTP/1.1 is required for WebSocket upgrades
  proxy_set_header Upgrade $http_upgrade;    # pass the WebSocket handshake through
  proxy_set_header Connection 'upgrade';
  proxy_set_header Host $host;               # keep the original host for n8n's URL handling
  proxy_set_header X-Real-IP $remote_addr;   # preserve the client IP for logs and rate limiting
}
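
For completeness, that location block sits inside a normal HTTPS server block, roughly along these lines (the certificate paths assume a Let's Encrypt/certbot setup, and your-subdomain.com is a placeholder):

server {
  listen 443 ssl;
  server_name your-subdomain.com;

  # certbot-managed certificate (default Let's Encrypt paths)
  ssl_certificate     /etc/letsencrypt/live/your-subdomain.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/your-subdomain.com/privkey.pem;

  # Basic Auth in front of the n8n editor UI (the chat webhook gets exempted later)
  auth_basic           "Restricted";
  auth_basic_user_file /etc/nginx/.htpasswd;

  # ... the /webhook/chat location from above goes here ...
}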

4. Embedding the Widget

<script src="https://cdn.jsdelivr.net/npm/@n8n/chat@latest"></script>
<n8n-chat
  endpoint="https://your-subdomain.com/webhook/chat"
  title="Ask Me Anything!"
  />
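
Note that depending on which @n8n/chat version you're on, the embed may instead use the module-import form from the package docs, where createChat is pointed at the chat webhook URL:

<link href="https://cdn.jsdelivr.net/npm/@n8n/chat/dist/style.css" rel="stylesheet" />
<script type="module">
  // load createChat from the CDN bundle and point it at the proxied webhook
  import { createChat } from 'https://cdn.jsdelivr.net/npm/@n8n/chat/dist/chat.bundle.es.js';

  createChat({
    webhookUrl: 'https://your-subdomain.com/webhook/chat',
  });
</script>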

🧨 What Went Wrong (And Why It Took Weeks)

  • ❌ CORS issues: the embedded widget's cross-origin calls to the webhook kept getting blocked
  • ❌ Auth conflicts: Basic Auth in front of n8n also blocked the public chat webhook
  • ❌ WebSocket timeouts: long-lived chat connections kept dropping through the proxy
  • ❌ Cookie errors: session cookies were rejected until the cookie policy was tuned

✅ Fixes: I ended up whitelisting /webhook/chat from auth and tuning CORS and cookie policies manually.
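
The shape of those fixes, roughly, was a few extra directives in the /webhook/chat location. A sketch (the allowed origin and timeouts are placeholders to tune for your own site and traffic; the cookie tweaks depend on your session setup):

location /webhook/chat {
  # exempt the public chat endpoint from the site-wide Basic Auth
  auth_basic off;

  # let the embedding site call the webhook cross-origin
  add_header Access-Control-Allow-Origin "https://your-site.com" always;
  add_header Access-Control-Allow-Methods "GET, POST, OPTIONS" always;
  add_header Access-Control-Allow-Headers "Content-Type" always;

  # answer CORS preflight requests without hitting n8n
  if ($request_method = OPTIONS) {
    return 204;
  }

  # keep long-lived chat/WebSocket connections from being cut off
  proxy_read_timeout 300s;
  proxy_send_timeout 300s;

  proxy_pass http://localhost:5678;
  proxy_http_version 1.1;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection 'upgrade';
  proxy_set_header Host $host;
}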

🔒 Final Security Enhancements

  • Rate limiting via limit_req
  • CSP and Referrer-Policy headers
  • Fail2Ban for brute-force attempts
  • No ports open except 443/80
  • Session cookies with Secure + SameSite=Strict
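
For the rate limiting and headers, a minimal sketch (the zone name, rate, and header values are illustrative; a strict CSP in particular may need loosening depending on what else the domain serves):

# in the http {} block: a shared zone keyed on client IP, roughly 5 requests/second
limit_req_zone $binary_remote_addr zone=chat_limit:10m rate=5r/s;

# in the /webhook/chat location: allow short bursts, reject the rest
limit_req zone=chat_limit burst=10 nodelay;

# hardening headers on the server block
add_header Content-Security-Policy "default-src 'self'" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;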

📈 Results

Metric          | Value
----------------|------------------
Latency         | ~1.2s avg
Widget success  | 100% stable
Time to build   | ~3 weeks 😅
Monthly cost    | < $10 (VPS)

💡 Key Lessons

  • WebSockets are fragile—test them early
  • CORS will quietly break everything if it isn't configured correctly
  • Avoid exposing /webhook/ to the public without proper protection
  • Start with security, not after launch

✅ Conclusion

Deploying your own AI chatbot is empowering—but only if you’re ready to dive into server config, CORS debugging, and reverse proxy hell. Now that it’s working, I wouldn’t go back to SaaS chatbots.

👉 Want My Configs?

I’m happy to share my sanitized NGINX + Docker templates. Just drop a comment below or message me privately—I'll send them your way.

📬 Contact

For help, questions, or a custom setup, contact me at HVTEQ.