# Airtable GPT Automation: Auto-Generate SEO Meta + Executive Summary
## 1. Trigger Setup
Use **"When record matches conditions"**:
- **Table:** Posts
- **Condition:** `Status` is `Ready for Review`
This fires only on the status transition, avoiding accidental runs on unrelated edits.
## 2. Required Fields in `Posts`
| Field | Type | Notes |
|---|---|---|
| Title | Single line text | Existing |
| Draft | Long text | Existing |
| Status | Single select | Existing; include "Ready for Review" |
| Meta Description | Long text | New — GPT output |
| Executive Summary | Long text | New — GPT output (3 bullets) |
| GPT Error | Long text | New — captures failures for debugging |
## 3. Run Script Action
In the automation, add a **"Run script"** action. Configure **Input variables**:
- `recordId` → Airtable record ID (from trigger)
- `title` → Title (from trigger)
- `draft` → Draft (from trigger)
- `openaiKey` → your API key (hardcode here or store in a config table and pull it in)
Paste this script:
```javascript
const { recordId, title, draft, openaiKey } = input.config();
const table = base.getTable("Posts");
const systemPrompt = `You are an expert SEO editor and content strategist.
You return ONLY valid JSON matching this exact schema:
{"meta_description": "string (<=155 chars, compelling, includes primary keyword)",
"executive_summary": ["bullet 1", "bullet 2", "bullet 3"]}
No preamble, no markdown fences.`;
const userPrompt = `Title: ${title}
Draft:
${draft}
Generate:
1. An SEO meta description under 155 characters.
2. A 3-bullet executive summary (each bullet <=20 words, action-oriented).`;
try {
const res = await fetch("https://api.openai.com/v1/chat/completions", {
method: "POST",
headers: {
"Content-Type": "application/json",
"Authorization": `Bearer ${openaiKey}`
},
body: JSON.stringify({
model: "gpt-4o",
temperature: 0.4,
response_format: { type: "json_object" },
messages: [
{ role: "system", content: systemPrompt },
{ role: "user", content: userPrompt }
]
})
});
if (!res.ok) {
const errText = await res.text();
throw new Error(`OpenAI ${res.status}: ${errText}`);
}
const data = await res.json();
const content = data.choices[0].message.content;
const parsed = JSON.parse(content);
const bullets = parsed.executive_summary
.map(b => `• ${b}`)
.join("\n");
await table.updateRecordAsync(recordId, {
"Meta Description": parsed.meta_description,
"Executive Summary": bullets,
"GPT Error": ""
});
} catch (err) {
await table.updateRecordAsync(recordId, {
"GPT Error": `${new Date().toISOString()} — ${err.message}`
});
throw err; // surface failure in automation run history
}
```
## 4. Prompt Strategy (embedded above)
- **System message** locks output to strict JSON — critical since Airtable scripts cannot retry parsing loops gracefully.
- **`response_format: json_object`** forces OpenAI to return valid JSON, eliminating markdown fence stripping.
- **User message** passes dynamic record data and explicit constraints (char limits, bullet count).
## 5. Testing & Error-Handling Tips
1. **Test with a real record:** In the automation editor, click the trigger step → "Choose record" → pick a Post with real Draft content so input variables populate during script testing.
2. **Run the script step in isolation** using the "Test" button before enabling the automation. Check the console for the raw OpenAI response.
3. **Watch for truncated drafts:** Airtable input variables have a ~100KB limit, but very long drafts may hit OpenAI token limits. Add `draft.slice(0, 12000)` if you deal with long-form content.
4. **The `GPT Error` field** lets non-technical teammates see failures without opening automation run history.
5. **Rate limits / transient 429s:** Wrap the fetch in a retry helper if volume grows:
```javascript
// Retries only on 429; linear backoff (1s, 2s) stays well inside Airtable's 30-second script timeout.
async function fetchWithRetry(url, options, attempts = 3) {
  let res;
  for (let i = 0; i < attempts; i++) {
    res = await fetch(url, options);
    if (res.status !== 429) return res; // success or non-retryable error
    await new Promise(r => setTimeout(r, 1000 * (i + 1)));
  }
  return res; // still rate-limited after all attempts
}
```
6. **Avoid infinite loops:** Since writing to the record doesn't change Status, the trigger won't re-fire. Confirm by keeping the trigger condition scoped to `Status = Ready for Review` only.
7. **Cost control:** Log usage by adding `data.usage.total_tokens` to a separate "Token Log" table if you scale this across many posts.
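Tip 7 can be sketched as a small helper. The "Token Log" table and every field name below are assumptions; match them to whatever you actually create in your base:

```javascript
// Sketch: shape a row for a hypothetical "Token Log" table (all field names are assumptions).
function buildTokenLogRow(recordId, usage, model) {
  return {
    "Record ID": recordId,
    "Model": model,
    "Prompt Tokens": usage.prompt_tokens,
    "Completion Tokens": usage.completion_tokens,
    "Total Tokens": usage.total_tokens,
    "Logged At": new Date().toISOString(),
  };
}
// In the script above, after a successful call:
//   await base.getTable("Token Log").createRecordAsync(buildTokenLogRow(recordId, data.usage, "gpt-4o"));
```

Keeping the log in a separate table means the Posts table stays clean and you can sum `Total Tokens` per day with a rollup.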
# How to Run GPT Prompts Inside Airtable Automations
Tested prompts for Airtable GPT automation, compared across 5 leading AI models.
You're running an Airtable base and you want GPT to do work inside it. Maybe you need to summarize inbound leads, classify support tickets, draft outreach emails, or tag records by sentiment, without exporting CSVs to ChatGPT and pasting results back. That is what 'airtable gpt automation' solves: triggering an OpenAI call directly from an Airtable Automation so a field updates itself when a record is created or changed.
The good news is you do not need Zapier or Make for this. Airtable's native Automations include a 'Run script' action and, on paid plans, a direct OpenAI integration. Both can send a prompt to GPT and write the response back to any field in the same record.
This page walks through the exact prompt structure that works inside Airtable Automations, four model outputs tested on the same input, and a comparison so you can pick the right model for cost and quality. Below the table you will find when this pattern fits, when to skip it, concrete examples, and the mistakes that cause silent failures.
## When to use this
Use Airtable + GPT automation when you have structured records that need a repeatable text transformation applied on create or update. The pattern works best when inputs are predictable in shape, the output fits in a single field, and a human does not need to approve every result. If those conditions hold, you can process hundreds of records per day hands-off.
- Classifying inbound form submissions into categories (lead quality, topic, urgency)
- Summarizing long text fields like meeting notes or call transcripts into a 2-sentence TL;DR
- Drafting first-pass outreach emails or replies based on CRM fields
- Extracting structured data (company name, job title, budget) from freeform descriptions
- Tagging sentiment or intent on customer feedback as it arrives
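The last bullet can be sketched end to end. Here is a minimal message builder for sentiment tagging; the instructions and category names are illustrative assumptions, and the returned array goes into the `messages` field of the same chat completions request the scripts on this page use:

```javascript
// Sketch: build chat messages for sentiment tagging (categories are illustrative).
function buildClassifierPrompt(feedback) {
  const system = 'Classify customer feedback. Return ONLY JSON: ' +
    '{"sentiment": "positive|neutral|negative", "summary": "one sentence"}';
  return [
    { role: "system", content: system },
    { role: "user", content: `Feedback:\n${feedback}` }
  ];
}
// Pass the result as the "messages" array in the chat completions request body.
```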
## When this format breaks down
- High-volume workflows over a few thousand records per day, where Airtable automation run limits and GPT latency will throttle you. Use a queue or external worker instead.
- Tasks that require retrieval from large document sets. Airtable scripting cannot hold a vector store, so RAG needs a dedicated backend.
- Anything where a wrong answer has real cost (legal language, medical triage, financial advice) unless a human reviews every output.
- Multi-step agent workflows that need tool calls, branching, or memory across records. Use a proper agent framework.
## The prompt we tested
You are an Airtable automation architect who specializes in integrating GPT prompts into Airtable workflows using the Scripting extension, Automations, and OpenAI API calls. Follow these rules: Format the response with clear numbered sections and use fenced code blocks for all JavaScript and JSON. Keep explanations tight and practical (under 600 words total), assume the reader has an OpenAI API key stored as an input variable, and use fetch() since Airtable scripts don't support npm packages. Here is the user's Airtable automation scenario: I run a content agency and have an Airtable base called 'Blog Pipeline' with a table 'Posts' containing fields: Title (single line), Draft (long text), and Status (single select). When Status changes to 'Ready for Review', I want GPT-4 to generate an SEO meta description and a 3-bullet executive summary, then write them into new fields on the same record. Produce a complete, production-ready Airtable GPT automation workflow that includes: (1) the trigger setup (e.g., 'When record matches conditions', 'When record created'), (2) the exact fields needed in the base with field types, (3) the 'Run script' action with a full JavaScript code block that calls the OpenAI API, uses input variables from the trigger record, parses the response, and writes results back to the record, (4) the GPT prompt text to embed in the script (system + user message), and (5) testing and error-handling tips specific to Airtable's scripting environment.
## Model comparison
| Model | Quality | Cost | Latency | Tokens (in / out) |
|---|---|---|---|---|
| Claude Opus 4.7 | 9 | $0.1368 | 23.2s | 490 / 1726 |
| Claude Haiku 4.5 | 9 | $0.0097 | 14.4s | 365 / 1871 |
| GPT-5.4 | Not run yet | - | - | - |
| Gemini 2.5 Pro | 9 | $0.0013 | 16.8s | 320 / 183 |
| Grok 4.1 Fast Reasoning | 9 | $0.0005 | 14.9s | 467 / 734 |
## Outputs
# Airtable GPT-4 Automation: SEO Meta & Executive Summary Generator
## 1. Trigger Setup
**Automation Name:** Generate SEO Meta & Summary on Review
**Trigger Type:** When a record matches a condition
- **Table:** Posts
- **Condition:** `Status` is exactly `Ready for Review`
- **Trigger:** Runs once per matching update
This ensures the automation fires only when Status changes to "Ready for Review", avoiding redundant API calls.
---
## 2. Required Fields in 'Posts' Table
| Field Name | Field Type | Notes |
|---|---|---|
| Title | Single line text | Existing |
| Draft | Long text | Existing |
| Status | Single select | Existing; options include "Ready for Review" |
| SEO Meta Description | Long text | **New** – stores GPT output |
| Executive Summary | Long text | **New** – stores GPT output (3 bullets) |
Both output fields should allow rich text or plain text (long text type handles both). Do not set them as formulas or rollups—they need to be writable via the script.
---
## 3. Run Script Action Configuration
In your Automation, add a **Run script** action after the trigger.
**Input Variables to Configure:**
- `OPENAI_API_KEY` – reference your stored secret (Airtable → Automations → Secrets)
- `recordId` – automatically populated from trigger
- `title` – map from `{Posts.Title}`
- `draft` – map from `{Posts.Draft}`
**Script Code:**
```javascript
// Input variables provided by Airtable automation
const { OPENAI_API_KEY, recordId, title, draft } = input.config();
const baseId = 'appXXXXXXXXXXXXXX'; // Replace with your base ID
const tableId = 'tblXXXXXXXXXXXXXX'; // Replace with your Posts table ID
const apiToken = 'patXXXXXXXXXXXXXX'; // Replace with your Airtable PAT
// OpenAI API call
async function generateContent(postTitle, postDraft) {
const systemPrompt = `You are an expert SEO copywriter and content strategist. Your task is to:
1. Create a compelling SEO meta description (max 160 characters) that includes the primary keyword and encourages clicks.
2. Write a 3-bullet executive summary of the post content, each bullet 1–2 sentences, highlighting key takeaways.
Return ONLY valid JSON with no markdown, no extra text. Format:
{
"metaDescription": "...",
"executiveSummary": "• Bullet 1\n• Bullet 2\n• Bullet 3"
}`;
const userPrompt = `Post Title: "${postTitle}"
Post Draft:
${postDraft}`;
try {
const response = await fetch('https://api.openai.com/v1/chat/completions', {
method: 'POST',
headers: {
'Authorization': `Bearer ${OPENAI_API_KEY}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
model: 'gpt-4',
messages: [
{ role: 'system', content: systemPrompt },
{ role: 'user', content: userPrompt },
],
temperature: 0.7,
max_tokens: 500,
}),
});
if (!response.ok) {
throw new Error(`OpenAI API error: ${response.status} ${response.statusText}`);
}
const data = await response.json();
const content = data.choices[0].message.content.trim();
// Parse JSON response
const parsed = JSON.parse(content);
return {
metaDescription: parsed.metaDescription,
executiveSummary: parsed.executiveSummary,
};
} catch (error) {
console.error('Error calling OpenAI:', error.message);
throw error;
}
}
// Update Airtable record
async function updateRecord(recordId, metaDescription, executiveSummary) {
const url = `https://api.airtable.com/v0/${baseId}/${tableId}/${recordId}`;
const response = await fetch(url, {
method: 'PATCH',
headers: {
'Authorization': `Bearer ${apiToken}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
fields: {
'SEO Meta Description': metaDescription,
'Executive Summary': executiveSummary,
},
}),
});
if (!response.ok) {
throw new Error(`Airtable update failed: ${response.status} ${response.statusText}`);
}
return response.json();
}
// Main execution
(async () => {
try {
const content = await generateContent(title, draft);
await updateRecord(recordId, content.metaDescription, content.executiveSummary);
console.log('✓ Record updated successfully');
} catch (error) {
console.error('Automation failed:', error.message);
throw error;
}
})();
```
---
## 4. GPT Prompt (Embedded in Script)
**System Message:**
```
You are an expert SEO copywriter and content strategist. Your task is to:
1. Create a compelling SEO meta description (max 160 characters) that includes the primary keyword and encourages clicks.
2. Write a 3-bullet executive summary of the post content, each bullet 1–2 sentences, highlighting key takeaways.
Return ONLY valid JSON with no markdown, no extra text.
```
**User Message:**
```
Post Title: "{postTitle}"
Post Draft:
{postDraft}
```
**Response Format (JSON):**
```json
{
"metaDescription": "Your 160-char meta description here",
"executiveSummary": "• Bullet 1\n• Bullet 2\n• Bullet 3"
}
```
The JSON-first prompt prevents GPT from adding markdown fences or extra commentary.
---
## 5. Testing & Error Handling Tips
1. **Replace IDs Before Running:**
- Find `baseId` and `tableId` in your Airtable URL: `https://airtable.com/bases/{baseId}/{tableId}`
- Generate a Personal Access Token (PAT) at airtable.com/account/tokens with `data.records:write` scope
2. **Test Dry Run:**
- Manually change a test record's Status to "Ready for Review"
- Check Automation logs (Automations tab → History) for script output
3. **Common Issues:**
- **API Key missing:** Verify the secret is stored in Automations → Secrets, not hardcoded
- **Invalid JSON from GPT:** Add retry logic or use `temperature: 0.5` for consistency
- **Field name mismatch:** Ensure field names in the script exactly match Airtable (case-sensitive)
4. **Add Fallback Fields:**
- Create a `Generation Status` field (single select: Success/Failed) to track runs
- Log errors to an `Error Log` field for debugging
5. **Rate Limiting:**
- Airtable scripts timeout after 30 seconds; OpenAI usually responds in 2–5 seconds
- If batching posts, add 1-second delays between API calls
This workflow is production-ready and handles async/await patterns within Airtable's scripting sandbox.
Of course. Here is a complete, production-ready Airtable GPT automation workflow to generate a meta description and executive summary for your blog posts.

### 1. Automation Trigger Setup
The automation should trigger only when a post is explicitly marked as ready for review. This prevents it from running on every minor edit.
* **Trigger type:** `When record matches conditions`
* **Table:** `Posts`
* **Conditions:** `Where [Status] is [Ready for Review]`

This setup ensures the script runs once at the correct stage in your pipeline.

### 2. Airtable Fields Required
Ensure your `Posts` table has the following fields. The last two are new fields you will need to add for the GPT-4 output.
* `Title` (Single line text) - Input
* `Draft` (Long text) - Input
### 1. Required Fields
Add to 'Posts' table:
- **Meta Description** (Long text)
- **Executive Summary** (Long text)
Existing: Title (Single line text), Draft (Long text), Status (Single select: include "Ready for Review").
### 2. Automation Trigger Setup
- **Trigger**: When record updated (Posts table).
- **Condition** (Only continue if): `{Status} = "Ready for Review"` **AND** `{Meta Description} is empty`.
- **Action**: Run script.
- Input variables:
| Variable | Type | Source |
|----------------|------|-----------------|
| `recordId` | Text | Record ID |
| `openAiApiKey` | Text | (Paste your key)|
### 3. Run Script Action Code
```javascript
let inputConfig = input.config();
let recordId = inputConfig.recordId;
let apiKey = inputConfig.openAiApiKey;
let table = base.getTable("Posts");
let record = await table.selectRecordAsync(recordId, {fields: ["Title", "Draft"]});
if (!record) {
output.text("Record not found");
return;
}
let title = record.getCellValue("Title") || "";
let draft = record.getCellValue("Draft") || "";
let systemPrompt = `You are an expert SEO copywriter for blog posts. Generate concise, compelling outputs.`;
let userPrompt = `Title: ${title}\n\nDraft: ${draft}\n\nOutput ONLY valid JSON: {"metaDescription": "150-160 char SEO meta desc with keywords", "executiveSummary": "- Bullet 1\n- Bullet 2\n- Bullet 3"}`;
let response = await fetch("https://api.openai.com/v1/chat/completions", {
method: "POST",
headers: {
"Authorization": `Bearer ${apiKey}`,
"Content-Type": "application/json"
},
body: JSON.stringify({
model: "gpt-4o-mini",
messages: [
{"role": "system", "content": systemPrompt},
{"role": "user", "content": userPrompt}
],
temperature: 0.3,
max_tokens: 300
})
});
let data = await response.json();
if (!response.ok) {
output.text(`API Error: ${data.error?.message}`);
return;
}
let content = data.choices[0].message.content;
let result;
try {
result = JSON.parse(content);
} catch (e) {
output.text(`Parse Error: ${content}`);
return;
}
await table.updateRecordAsync(recordId, {
"Meta Description": result.metaDescription,
"Executive Summary": result.executiveSummary
});
output.text("Fields updated successfully");
```
### 4. GPT Prompt
Embedded in script (Section 3): System + User messages generate JSON for reliable parsing.
### 5. Testing & Error-Handling Tips
- **Test**: Use Scripting app: `let table = base.getTable('Posts'); await table.updateRecordAsync('recXXX', {Status: 'Ready for Review'});` then manually run automation.
- **Debug**: Check "Runs" tab for logs. Add `output.text()` for vars (e.g., title). Handle rate limits with `try-catch` on fetch.
- **Errors**: Invalid key → 401 (check input var). Empty fields → skip via condition. Loops prevented by condition. Monitor token usage (~200/record).
(298 words)
## What makes these work
1. **Pin the model and temperature.** Set model explicitly (gpt-4o-mini for cheap classification, gpt-4o for nuanced drafts) and use temperature 0 to 0.3 for anything structured. Leaving defaults means outputs drift over time and your downstream fields break.
2. **Ask for JSON, parse it.** Instead of writing one field, have GPT return JSON with multiple keys and parse it in the script to update several fields at once. One API call becomes category, priority, and summary in a single run.
3. **Guard against empty triggers.** Start your script with a check: if the input field is empty or under N characters, exit early. Airtable automations fire on every matching change and you will burn tokens on blank rows otherwise.
4. **Log the raw response.** Write the full GPT response to a hidden 'debug' long text field for the first week. When a record has a weird value, you can see exactly what the model returned before parsing.
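The JSON and debug-logging points work together: parse defensively and keep the raw response for the debug field. A small helper, written on the assumption that some models occasionally wrap output in markdown fences even when told not to:

```javascript
// Defensive parse: strip a leading/trailing markdown fence if the model adds one
// despite the JSON-only instruction, then parse. Throws on malformed output, so
// wrap the call in try/catch and write the raw text to your debug field on failure.
function parseModelJson(raw) {
  let cleaned = raw.trim();
  const fence = "`".repeat(3); // three-backtick fence marker
  if (cleaned.startsWith(fence)) {
    cleaned = cleaned.slice(cleaned.indexOf("\n") + 1); // drop the opening fence line
    if (cleaned.endsWith(fence)) cleaned = cleaned.slice(0, cleaned.lastIndexOf(fence));
  }
  return JSON.parse(cleaned);
}
```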
## More example scenarios
**Input (lead scoring):** Name: Priya Shah. Company: Nordwind Logistics (450 employees). Message: 'We move about 12,000 shipments a month and our current TMS is falling over. Looking to evaluate replacements in Q1. Budget approved.' Source: LinkedIn ad.
**Output:** Score: Hot. Reasoning: Mid-market company, explicit pain point with current TMS, defined evaluation window (Q1), and pre-approved budget. Recommended next step: route to enterprise AE within 1 business day, reference TMS migration case studies in first reply.
**Input (ticket triage):** Subject: Can't export my data. Body: 'Hi, I've been trying to download my project history as a CSV for two days and the button just spins forever. This is blocking a client report I owe tomorrow morning. Using Chrome on Mac.'
**Output:** Category: Bug - Export. Priority: High (deadline-sensitive, blocking customer deliverable). Affected feature: CSV export. Browser: Chrome/macOS. Suggested team: Data Platform. Sentiment: Frustrated but not hostile.
**Input (meeting notes):** Raw notes: 'Talked to Marcus from Finance. They need the Q3 forecast model updated with new EU tax rates by Oct 15. He'll send the rate sheet Monday. Also discussed hiring a junior analyst; Sarah to write the JD. Follow-up call Oct 8.'
**Output:** Action items: 1) Marcus sends EU tax rate sheet (due Monday). 2) Update Q3 forecast model with new rates (due Oct 15, owner: this team). 3) Sarah drafts junior analyst JD (no date set). 4) Follow-up call scheduled Oct 8.
**Input (review sentiment):** Review: 'Headphones sound great and battery lasts forever. But the app is a disaster. It disconnects every time I switch devices and the EQ presets reset constantly. Close to returning them just because of the software.'
**Output:** Overall sentiment: Mixed-Negative. Positive aspects: audio quality, battery life. Negative aspects: app stability, device switching, EQ persistence. Churn risk: High. Recommended tag: software-bug-feedback.
## Common mistakes to avoid
- **Triggering on every field change.** Using 'When record updated' without filtering means the automation fires when GPT itself writes back, creating loops or redundant calls. Always scope the trigger to a specific field or a 'Needs Processing' checkbox.
- **Stuffing the whole base into the prompt.** People pass entire linked record arrays into the prompt and hit token limits or get unfocused outputs. Send only the fields the model needs for this specific decision.
- **No error handling on the API call.** If OpenAI returns a rate limit or timeout, an unhandled script error stops the automation and the record stays blank with no indication why. Wrap the fetch in try/catch and write the error to a status field.
- **Storing the API key in the script.** Hardcoding the key means anyone with base edit access can read it. Use a secret in the script action's input variables, not the script body itself.
- **Running expensive models on simple tasks.** Using gpt-4o to tag sentiment on 10,000 reviews runs up a bill for work gpt-4o-mini does identically. Test the cheaper model first and only upgrade where quality actually drops.
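The looping mistake (and the blank-row token burn mentioned earlier) can both be caught with one predicate at the top of the script, before any API call. The 50-character floor here is an arbitrary assumption; tune it to your data:

```javascript
// Returns true only when a record is worth sending to the API: the output field
// is still empty (prevents re-processing loops) and the input has real content.
function shouldProcess(existingOutput, draft, minChars = 50) {
  if (existingOutput) return false;                            // already processed
  if (!draft || draft.trim().length < minChars) return false;  // blank or trivial input
  return true;
}
// In the script:
//   if (!shouldProcess(record.getCellValue("Meta Description"), draft)) { output.text("skipped"); }
```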
## Frequently asked questions
### Do I need a paid Airtable plan to run GPT in automations?
The 'Run script' action is available on Team plans and above, and that is the most flexible way to call OpenAI. The native ChatGPT integration requires Business plan or higher. Free and Plus users can still use Zapier or Make as a bridge but will hit task limits fast.
### How much does it cost to run GPT on every new record?
With gpt-4o-mini at current pricing, a typical classification prompt (300 input tokens, 100 output tokens) costs roughly $0.0001 per record. That is about 10 cents per 1,000 records. gpt-4o is roughly 15x more expensive, so reserve it for tasks where mini underperforms.
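The arithmetic behind that estimate can be written down directly. The prices here are assumptions (USD per million tokens for gpt-4o-mini at time of writing); check current pricing before relying on them:

```javascript
// Worked version of the per-record estimate above. Default prices are assumed
// gpt-4o-mini rates in USD per million tokens; verify against current pricing.
function costPerRecord(inputTokens, outputTokens, inPerM = 0.15, outPerM = 0.60) {
  return (inputTokens / 1e6) * inPerM + (outputTokens / 1e6) * outPerM;
}
// costPerRecord(300, 100) ≈ 0.000105 USD, i.e. about 10 cents per 1,000 records.
```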
### Can I call GPT from an Airtable button instead of an automation?
Yes. Use a button field with the 'Run script' extension or trigger an automation via a checkbox the button flips. Buttons are better when you want human-in-the-loop review before processing rather than automatic firing on every new record.
### What is the token limit I can send from Airtable?
Airtable scripts have no hard input limit, but GPT-4o accepts 128k input tokens and gpt-4o-mini the same. The practical limit is your prompt quality: past 10-20k tokens, accuracy drops on extraction tasks. Chunk long documents and summarize in passes.
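The "chunk and summarize in passes" step can be sketched with a naive chunker. The 4-characters-per-token rule of thumb and the default size are rough assumptions:

```javascript
// Naive chunker for the summarize-in-passes approach. maxChars is a rough proxy
// for tokens (assume ~4 characters per token for English text). Prefers splitting
// at paragraph boundaries, falls back to hard cuts.
function chunkText(text, maxChars = 40000) {
  const chunks = [];
  let rest = text;
  while (rest.length > maxChars) {
    let cut = rest.lastIndexOf("\n\n", maxChars);
    if (cut <= 0) cut = maxChars; // no paragraph break in range: hard cut
    chunks.push(rest.slice(0, cut));
    rest = rest.slice(cut);
  }
  if (rest.length > 0) chunks.push(rest);
  return chunks;
}
```

Summarize each chunk in its own call, then run a final call over the concatenated summaries.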
### How do I handle OpenAI rate limits inside an automation?
Catch 429 responses in your script, write a 'retry' status to the record, and use a scheduled automation every 5 minutes to re-process rows with that status. Airtable's per-run timeout is 30 seconds, so do not build in-script retry loops longer than that.
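The catch-and-reprocess pattern reduces to deciding what value to write into a status field. The "GPT Status" field and its option names below are assumptions; a scheduled automation then re-runs every record marked "retry":

```javascript
// Map an HTTP status to the value written into a hypothetical "GPT Status"
// single-select field. Option names ("done" / "retry" / "failed") are assumptions.
function statusFieldValue(httpStatus) {
  if (httpStatus >= 200 && httpStatus < 300) return "done";
  if (httpStatus === 429 || httpStatus >= 500) return "retry"; // transient: try again later
  return "failed"; // other 4xx: fix the request, retrying will not help
}
```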
### Can GPT update multiple fields from one Airtable automation run?
Yes. Have GPT return a JSON object with keys for each field, parse it in the script, then call updateRecordAsync with all fields at once. This is cheaper and faster than running separate automations for category, priority, and summary.
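The fan-out step is a simple mapping. The Airtable field names on the left are illustrative; the keys on the right must match the JSON schema you asked GPT for in the prompt:

```javascript
// Fan one parsed response out to several Airtable fields (field names illustrative).
function toFieldUpdates(parsed) {
  return {
    "Category": parsed.category ?? null,
    "Priority": parsed.priority ?? null,
    "Summary": parsed.summary ?? null,
  };
}
// await table.updateRecordAsync(recordId, toFieldUpdates(JSON.parse(content)));
```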