n8n + OpenAI: Building a Self-Hosted AI Automation System
n8n is the open-source alternative to Make.com — and when self-hosted, it gives you AI automation with complete data privacy, no per-execution pricing, and full control over your infrastructure.
When Self-Hosted Automation Makes Sense
Make.com and n8n both enable powerful AI automation. The choice depends on your priorities.
| Factor | Make.com | n8n (self-hosted) |
|---|---|---|
| Pricing model | Per operation — costs scale with volume | Self-hosted: infrastructure cost only |
| Data residency | Data passes through Make servers | All data stays on your server |
| Setup complexity | Low — sign up and build | Medium — requires server/Docker setup |
| Maintenance | Zero — fully managed | You manage updates and uptime |
| Customisation | Limited to available modules | Full — write custom JavaScript nodes |
| Best for | Teams that want fast setup and managed reliability | Teams with data privacy requirements or high execution volumes |
📌 n8n is the right choice for businesses processing sensitive customer data (healthcare, legal, financial) where passing data through a third-party platform creates compliance risk — or for high-volume automations where Make.com’s per-operation pricing becomes prohibitive.
Getting n8n Running in Under 30 Minutes
n8n can be deployed on any VPS, AWS EC2, or DigitalOcean Droplet using Docker.
```bash
# Install Docker if not already installed
curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh

# Create a directory for n8n data (-p avoids an error if it already exists)
mkdir -p ~/.n8n

# Run n8n with Docker
docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  -v ~/.n8n:/home/node/.n8n \
  -e N8N_BASIC_AUTH_ACTIVE=true \
  -e N8N_BASIC_AUTH_USER=admin \
  -e N8N_BASIC_AUTH_PASSWORD=yourpassword \
  n8nio/n8n

# Access n8n at http://your-server-ip:5678
```
📌 For production, run n8n behind an Nginx reverse proxy with SSL. Use Docker Compose with a PostgreSQL database for persistence rather than the default SQLite. n8n’s documentation covers the production deployment setup in detail.
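A minimal Compose sketch along those lines. The `DB_TYPE` and `DB_POSTGRESDB_*` variables are n8n’s documented PostgreSQL settings; the service names, volume names, and the `changeme` password are placeholders you should replace:

```yaml
version: "3.8"
services:
  postgres:
    image: postgres:16
    restart: always
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: changeme   # placeholder -- use a real secret
      POSTGRES_DB: n8n
    volumes:
      - pg_data:/var/lib/postgresql/data

  n8n:
    image: n8nio/n8n
    restart: always
    ports:
      - "5678:5678"
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: changeme
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      - postgres

volumes:
  pg_data:
  n8n_data:
```

Run it with `docker compose up -d`; n8n will use PostgreSQL instead of SQLite, so workflow data survives container rebuilds.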
API Configuration
n8n has a native OpenAI node. Configure it once and reuse across all workflows.
Add OpenAI credentials
In n8n, go to Credentials and create a new OpenAI credential. Enter your API key. n8n stores it encrypted in your local database — it never leaves your server. Name it clearly (e.g., ‘OpenAI Production’) so you can identify it across workflows.
Add an OpenAI node to your workflow
In any workflow, click + and search for OpenAI. Select the ‘Message a Model’ operation. Choose your saved credential, select the model (gpt-4o-mini or gpt-4o), and configure your system and user messages using n8n’s expression syntax for dynamic values.
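Under the hood, ‘Message a Model’ issues a standard Chat Completions request. A sketch of the equivalent payload (the field names follow OpenAI’s public API; the system prompt and example email are invented):

```javascript
// Sketch of the Chat Completions payload the OpenAI node assembles.
// The system prompt and email text here are illustrative only.
function buildChatRequest(emailBody) {
  return {
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: "You classify customer emails." },
      { role: "user", content: emailBody },
    ],
  };
}

const req = buildChatRequest("Hi, my invoice total looks wrong.");
console.log(req.model); // "gpt-4o-mini"
```

The node fills the `user` message from your expression-driven fields, so each execution sends a different payload.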
Access dynamic data with expressions
n8n uses double-curly-brace expressions to reference data from previous nodes. For example: {{ $json.email_body }} pulls the email body from the previous node’s output. Use these expressions in your OpenAI prompt to personalise each API call.
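As a simplified illustration of how those placeholders resolve, here is a toy substitution function (this is not n8n’s actual expression engine, which supports far more than plain field lookups):

```javascript
// Toy illustration of n8n-style expression resolution:
// replaces {{ $json.field }} placeholders with values from the
// previous node's output object.
function resolveExpressions(template, json) {
  return template.replace(
    /\{\{\s*\$json\.(\w+)\s*\}\}/g,
    (_, key) => (key in json ? String(json[key]) : ""),
  );
}

const prompt = resolveExpressions(
  "Summarise this email: {{ $json.email_body }}",
  { email_body: "Please cancel my subscription." },
);
console.log(prompt);
// "Summarise this email: Please cancel my subscription."
```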
Parse and route the response
The OpenAI node returns the response as a JSON object. Access the reply with {{ $json.message.content }}. Pass this to downstream nodes — a database write, an email send, a Slack message, or another API call.
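A Code-node sketch of that step: pull the reply out of the node’s output and shape a record for a downstream node. The input shape mirrors the `$json.message.content` path above; the output field names are invented for this example:

```javascript
// Sketch: extract the model's reply from the OpenAI node's output
// and shape a record for a downstream node (e.g. a database write).
// Field names in the returned object are illustrative.
function toDownstreamPayload(item) {
  const reply = item.message.content;
  return {
    reply,
    replyLength: reply.length,
    processedAt: new Date().toISOString(),
  };
}

const payload = toDownstreamPayload({
  message: { content: "Category: billing" },
});
console.log(payload.reply); // "Category: billing"
```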
A Self-Hosted Use Case Walk-Through
This example builds a self-hosted medical document analysis workflow where patient data never leaves your infrastructure.
Trigger: File uploaded to local storage
A webhook node receives notification when a new document is uploaded to your self-hosted file server. n8n reads the file — never uploading it to a cloud storage provider.
Extract text locally
An Execute Command node runs a local Python script (pdfplumber or similar) to extract text from the PDF. The text stays on your server.
Analyse with OpenAI (or local model)
Pass the extracted text to OpenAI for analysis — or, for maximum privacy, use a locally hosted model via Ollama. n8n supports HTTP requests to local endpoints.
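A sketch of the request an HTTP Request node would send to a local Ollama instance. The `/api/generate` route and its `model`/`prompt`/`stream` fields follow Ollama’s documented API; the model name and prompt wording are examples:

```javascript
// Build the body for a POST to a local Ollama endpoint.
// Route and field names follow Ollama's /api/generate API;
// the model name and prompt text are examples only.
function buildOllamaRequest(documentText) {
  return {
    url: "http://localhost:11434/api/generate",
    body: {
      model: "llama3.1",
      prompt: `Summarise the key findings in this document:\n\n${documentText}`,
      stream: false, // return one complete response, not a token stream
    },
  };
}

const req = buildOllamaRequest("Sample extracted text.");
console.log(req.url); // "http://localhost:11434/api/generate"
```

In n8n, paste the URL into an HTTP Request node and map the body fields; because the endpoint is localhost, the document text never crosses the network boundary.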
Write results to local database
The analysis results are written to a PostgreSQL database running on the same server. Zero data leaves your infrastructure at any point.
Which Should You Use?
Choose Make.com when…
You want to be live in hours, not days. You process moderate volumes (under 50,000 operations/month). You do not have data sovereignty requirements. You want a fully managed platform with no maintenance overhead.
Choose n8n self-hosted when…
You process sensitive data (healthcare, legal, financial) that cannot leave your infrastructure. Your automation volume is high enough that Make.com pricing becomes significant. You need custom JavaScript logic in your workflows. You have DevOps capability to manage a server.
Use both when…
Non-sensitive, lower-volume automations run on Make.com for simplicity. Sensitive data workflows run on self-hosted n8n. Many businesses run hybrid stacks — the tools are complementary, not competing.
How much does self-hosting n8n cost?
A DigitalOcean or Linode VPS with 2GB RAM and 1 vCPU (approximately $12-18/month) runs n8n comfortably for most SME automation workloads. Add OpenAI API costs per execution. For high-volume workflows, total cost is typically 60-80% lower than equivalent Make.com usage.
Is n8n as capable as Make.com?
For AI automation workflows, yes — the core capability is comparable. Make.com has more pre-built app integrations (1000+ vs n8n’s 400+). For apps not natively supported in n8n, the HTTP Request node handles any REST API. n8n’s JavaScript Code node is more powerful than Make.com’s equivalent.
Can I use n8n Cloud instead of self-hosting?
Yes. n8n offers a managed cloud version starting at $24/month. It gives you n8n’s workflow power and custom JavaScript capability with cloud-managed infrastructure — a middle ground between Make.com and full self-hosting.
Want an AI Automation System Built for Your Business?
SA Solutions builds automation systems on Make.com, n8n, and Bubble.io — selecting the right platform for your data requirements, volume, and budget.
