Build a Slack Bot That Summarizes Channels Using AI
Tested prompts for automating Slack message summaries with AI, compared across five leading models.
If you're searching for how to automate Slack message summaries with AI, you're probably drowning in channels. Maybe you manage a team across three timezones and #general has 400 unread messages every Monday. Maybe you run a customer support workspace where #incidents pings all night and leadership wants a Monday-morning digest. Either way, manually scrolling through a week of threads is a tax on your attention you don't need to pay.
The fix is a Slack bot that pulls messages on a schedule, feeds them to an LLM with a tight summarization prompt, and posts the summary back into a channel or DM. It's not a new product you buy. It's a 100-line workflow connecting Slack's conversations.history API, a model like GPT-4o or Claude, and a cron trigger in something like n8n, Zapier, or a small Python script.
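That workflow can be sketched in plain Python with no dependencies beyond the standard library. This is a minimal sketch, not a hardened implementation: it assumes `SLACK_BOT_TOKEN` and `OPENAI_API_KEY` environment variables, a `gpt-4o-mini` model name, and OpenAI's chat completions endpoint; swap in your own vendor and channel IDs.

```python
"""Minimal daily digest sketch: fetch Slack messages, summarize, post back."""
import json
import os
import time
import urllib.parse
import urllib.request


def slack_call(method: str, token: str, **params) -> dict:
    """POST form-encoded params to a Slack Web API method and decode the JSON."""
    req = urllib.request.Request(
        f"https://slack.com/api/{method}",
        data=urllib.parse.urlencode(params).encode(),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def format_messages(messages: list[dict]) -> str:
    """Flatten raw Slack messages into 'user: text' lines, oldest first."""
    ordered = sorted(messages, key=lambda m: float(m["ts"]))
    return "\n".join(
        f"{m.get('user', 'unknown')}: {m['text']}" for m in ordered if m.get("text")
    )


def build_prompt(channel: str, transcript: str) -> str:
    return (
        "You summarize Slack channel activity for busy team members. "
        "Output three sections: 'Key Decisions', 'Action Items' "
        "(with @person and deadline), and 'Open Questions'. Keep it under "
        "200 words, preserve @mentions and links, skip small talk.\n\n"
        f"#{channel}, last 24 hours:\n{transcript}"
    )


def summarize(prompt: str) -> str:
    """Call OpenAI's chat completions endpoint (swap in your own vendor)."""
    body = json.dumps({"model": "gpt-4o-mini",
                       "messages": [{"role": "user", "content": prompt}]})
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=body.encode(),
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


def run_digest(source_channel: str, digest_channel: str) -> None:
    token = os.environ["SLACK_BOT_TOKEN"]
    history = slack_call("conversations.history", token,
                         channel=source_channel,
                         oldest=time.time() - 86400, limit=200)
    prompt = build_prompt("product-launch", format_messages(history["messages"]))
    slack_call("chat.postMessage", token,
               channel=digest_channel, text=summarize(prompt))
```

Trigger `run_digest` from cron (say, `0 8 * * 1-5`) or any scheduler; n8n and Zapier replace the plumbing, but the prompt and structure stay the same.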
This page walks through the exact prompt structure, shows what several leading models produce from the same input, and tells you when this setup is the right answer versus when you should reach for something else.
When to use this
This approach fits when you have read-access to a Slack channel, messages are mostly text (not heavy media), and you want a recurring digest rather than real-time analysis. It works best when the summary audience is humans who want to skim, not systems that need structured data extraction for downstream automation.
- Daily or weekly digest of a busy #general, #random, or #announcements channel for execs who don't live in Slack
- On-call handoff summary that condenses overnight #incidents activity into a morning brief
- Customer support channel digest grouping recurring complaints and flagged tickets
- Community Slack or Discord-style workspace where moderators need a recap of key discussions
- Post-meeting recap when your team uses a Slack huddle channel for async updates
When this format breaks down
- You need real-time alerts on specific keywords. Use Slack's native keyword notifications or a regex-based trigger instead, not an LLM summarizer.
- The channel contains regulated data (PHI, PCI, customer PII under GDPR) and you haven't cleared the model vendor for that data class. Summarizing it through a third-party API is a compliance problem.
- You want to extract structured data (tickets, action items, decisions) for a database. Use a function-calling or JSON-mode prompt, not a free-text summarizer.
- The channel has fewer than 20 messages per day. A summary isn't worth the token cost or engineering overhead; people can just read the channel.
The prompt we tested
You are an AI assistant that summarizes Slack channel activity for busy team members. Review the Slack messages provided and produce a concise, scannable summary following the rules below.

Rules: Output a summary with three sections: 'Key Decisions' (bullet points), 'Action Items' (bullets with @person and deadline if mentioned), and 'Open Questions' (bullets). Keep the total length under 200 words, preserve @mentions and links, and skip small talk or off-topic chatter.

Slack messages to summarize: #product-launch channel, last 24 hours:

@sarah: Hey team, the landing page copy is ready for review — can someone check by EOD Thursday? Link: https://docs.google.com/xyz
@mike: I'll review tonight. Also, legal flagged the pricing claim on slide 4 — we need to remove 'guaranteed ROI' before launch.
@sarah: Good catch. @jen can you update the deck?
@jen: On it, will have v2 by Wednesday morning.
@mike: Quick q — are we still launching March 15 or did that slip?
@sarah: Still March 15. Confirmed with exec team yesterday.
@dan: lol nice meme earlier
@jen: should we run a beta with 10 customers first?

Produce the summary now.
Model comparison
| Model | Quality (1-10) | Cost per run | Latency | Tokens (in / out) |
|---|---|---|---|---|
| Claude Opus 4.7 | 9 | $0.0208 | 3.2s | 439 / 189 |
| Claude Haiku 4.5 | 9 | $0.0009 | 1.5s | 308 / 122 |
| GPT-5.4 | 9 | $0.0309 | 9.5s | 273 / 704 |
| Gemini 2.5 Pro | 9 | $0.0010 | 7.7s | 291 / 129 |
| Grok 4.1 Fast Reasoning | 9 | $0.0001 | 3.7s | 420 / 89 |
Outputs
# #product-launch Summary **Key Decisions** - Launch date confirmed for March 15 (per exec team approval) - "Guaranteed ROI" claim must be removed from slide 4 per legal requirements **Action Items** - @mike: Review landing page copy by EOD Thursday (https://docs.google.com/xyz) - @jen: Update deck to v2 removing pricing claim by Wednesday morning **Open Questions** - Should a beta launch with 10 customers be conducted before March 15 release?
Key Decisions - Launch date confirmed: March 15 (@sarah, confirmed with exec team). - Remove “guaranteed ROI” claim from slide 4 per legal (@mike). Action Items - @mike: Review landing page copy by EOD Thursday (link: https://docs.google.com/xyz); starting tonight. - @jen: Update deck to remove “guaranteed ROI” on slide 4; deliver v2 by Wednesday morning. Open Questions - @jen: Should we run a beta with 10 customers first (10 customers) before launch?
Here is a summary of the #product-launch channel activity: **Key Decisions** * The launch date is officially confirmed for March 15. * The phrase 'guaranteed ROI' will be removed from the presentation deck following a flag from the legal team. **Action Items** * @mike: Review the landing page copy tonight. Link: https://docs.google.com/xyz * @jen: Update the presentation deck with the required legal change by Wednesday morning. **Open Questions** * Should we run a beta test with 10 customers before the launch?
### Key Decisions - Launch date confirmed for March 15. - Remove 'guaranteed ROI' claim from slide 4 before launch. ### Action Items - @mike: Review landing page copy tonight (https://docs.google.com/xyz); original ask EOD Thursday. - @jen: Update deck to v2 by Wednesday morning. ### Open Questions - Should we run a beta with 10 customers first? (@jen)
What makes these work
1. Chunk by thread, not by message
Slack conversations are threaded. If you flatten everything into a single message list, the model loses context about what reply belongs to what. Group messages by thread_ts before sending to the LLM and preserve parent-child structure.
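A sketch of that grouping step, assuming raw message dicts from the Slack API (`ts` and `thread_ts` are real Slack fields; the indented-arrow rendering is just one way to show the hierarchy to the model):

```python
from collections import defaultdict


def group_by_thread(messages: list[dict]) -> dict[str, list[dict]]:
    """Group raw Slack messages under their thread parent; top-level
    messages key on their own ts."""
    threads: dict[str, list[dict]] = defaultdict(list)
    for msg in messages:
        threads[msg.get("thread_ts", msg["ts"])].append(msg)
    for replies in threads.values():
        replies.sort(key=lambda m: float(m["ts"]))  # chronological within thread
    return dict(threads)


def render_threads(threads: dict[str, list[dict]]) -> str:
    """Indent replies under their parent so the LLM sees the hierarchy."""
    lines = []
    for parent_ts in sorted(threads, key=float):
        parent, *replies = threads[parent_ts]
        lines.append(f"{parent.get('user', '?')}: {parent.get('text', '')}")
        lines.extend(f"    ↳ {r.get('user', '?')}: {r.get('text', '')}"
                     for r in replies)
    return "\n".join(lines)
```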
2. Include usernames and timestamps
A summary that says 'someone deployed a fix' is useless. Pass real display names and ISO timestamps into the prompt and tell the model to preserve them. Readers need to know who did what and when so they can follow up.
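One way to do that, assuming you have already resolved user IDs to display names (the `names` dict below stands in for a lookup built from `users.list`):

```python
from datetime import datetime, timezone


def format_line(msg: dict, names: dict[str, str]) -> str:
    """Render one Slack message as '[UTC time] display_name: text'."""
    when = datetime.fromtimestamp(float(msg["ts"]), tz=timezone.utc)
    user = names.get(msg.get("user", ""), msg.get("user", "unknown"))
    return f"[{when:%Y-%m-%d %H:%M}] {user}: {msg.get('text', '')}"
```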
3. Set a hard output structure
Tell the model exactly what sections to produce: Decisions, Action Items, Open Questions, Flags. A free-form summary drifts week to week and becomes hard to skim. Structured output makes the digest scannable in under 15 seconds.
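A cheap way to enforce that, assuming the three-section template used in the prompt above: check that every section heading appears in the output, and re-run the call once with a terser reminder if not.

```python
REQUIRED_SECTIONS = ("Key Decisions", "Action Items", "Open Questions")


def follows_template(summary: str) -> bool:
    """Post-check: did the model emit all required section headings?
    If this returns False, retry the LLM call before posting."""
    return all(section in summary for section in REQUIRED_SECTIONS)
```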
4. Truncate smartly, not by character count
If a channel exceeds your token budget, drop low-signal messages (reactions-only, single-emoji replies, bot noise) before truncating. A naive head/tail cut loses the middle of active discussions, which is usually where the actual decisions happened.
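A sketch of that filtering-before-truncating order. The low-signal heuristics here (bot posts, join/leave events, emoji-only replies) are assumptions to tune for your workspace; `bot_id` and `subtype` are real Slack message fields.

```python
def is_low_signal(msg: dict) -> bool:
    """Heuristic: drop bot posts, join/leave events, and emoji-only replies."""
    if msg.get("bot_id") or msg.get("subtype") in {"channel_join", "channel_leave"}:
        return True
    text = msg.get("text", "").strip()
    emoji_only = bool(text) and all(
        part.startswith(":") and part.endswith(":") for part in text.split())
    return not text or emoji_only


def trim_to_budget(messages: list[dict], max_chars: int = 40_000) -> list[dict]:
    """Filter low-signal messages first; only then drop the oldest, so the
    middle of active discussions survives a tight token budget."""
    kept = [m for m in messages if not is_low_signal(m)]
    total = sum(len(m.get("text", "")) for m in kept)
    while kept and total > max_chars:
        total -= len(kept.pop(0).get("text", ""))  # oldest-first ordering assumed
    return kept
```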
More example scenarios
Scenario: 12 messages from #incidents between 10pm and 6am: 2 PagerDuty alerts for elevated API latency on the payments service, a thread where Priya rolled back deploy v4.2.1, Raj noting Redis memory at 89%, and a resolved alert at 4:17am. Include timestamps and usernames.
Sample output: Overnight summary (10pm-6am): Payments API hit elevated latency twice (11:04pm, 2:30am). Priya rolled back deploy v4.2.1 at 11:20pm, which resolved the first alert. Raj flagged Redis memory at 89% and recommended scaling before market open. Final alert cleared at 4:17am. Open item: Redis capacity review.
Scenario: About 80 customer support messages forwarded into #product-feedback over 7 days. Mix of bug reports, feature requests, and praise. Tag each theme with frequency.
Sample output: This week's product feedback themes: (1) CSV export bug on reports over 10k rows, mentioned 14 times, highest priority. (2) Requests for Outlook calendar integration, 9 mentions. (3) Praise for the new dark mode, 6 mentions. (4) Confusion about the new billing page, 5 mentions. Suggested action: prioritize export fix this sprint.
Scenario: Friday afternoon messages in #marketing-team covering campaign launches, copy reviews, and a debate about holiday email timing. Summarize decisions and open questions separately.
Sample output: Decisions made Friday: Holiday email drops Nov 28 at 9am ET (Jess approved). Landing page copy v3 signed off by Marcus. Instagram carousel scheduled for Monday 11am. Open questions: Should we A/B test subject lines for the holiday send? Budget approval still pending from Finance for paid retargeting.
Scenario: 300+ messages across #introductions, #help, and #show-and-tell over 24 hours in a 5,000-member developer community. Flag any unanswered questions older than 4 hours and any posts needing moderation.
Sample output: 24-hour community digest: 47 new member intros, 12 help questions (3 unanswered past 4 hours, linked below), 8 show-and-tell posts with strong engagement. Moderation flags: one post in #help containing a suspicious affiliate link (already removed by auto-mod), one heated thread in #show-and-tell about framework preferences, currently civil but worth watching.
Common mistakes to avoid
- Summarizing everything, including bot noise
Deploy bots, GitHub bots, and PagerDuty bots post constantly. If you don't filter them out, your summary becomes 80% 'bot posted a status update'. Filter by user type or bot_id before the LLM sees anything.
- Running summaries too often
Hourly summaries of a low-traffic channel produce empty or repetitive digests and burn tokens. Match frequency to channel velocity: daily for active channels, weekly for most, on-demand for quiet ones.
- Ignoring Slack's rate limits
conversations.history is tier 3 (50+ requests per minute) but paginated. Pulling a week of messages from a busy channel means multiple API calls. Add retry logic with backoff, or you'll get silent failures on Monday mornings.
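A sketch of both concerns, with the pagination loop split from the retry wrapper so the logic can be tested without the network. Here `call` is any function that performs one Slack API request and returns the decoded JSON; the names are illustrative, not a library API.

```python
import time
import urllib.error


def paginate(call, **params) -> list[dict]:
    """Accumulate messages across cursor pages until next_cursor is empty."""
    messages, cursor = [], ""
    while True:
        page_params = {**params, **({"cursor": cursor} if cursor else {})}
        payload = call(**page_params)
        if not payload.get("ok"):
            raise RuntimeError(f"Slack error: {payload.get('error')}")
        messages.extend(payload["messages"])
        cursor = (payload.get("response_metadata") or {}).get("next_cursor", "")
        if not cursor:
            return messages


def with_backoff(call, max_retries: int = 5):
    """Retry a Slack call on HTTP 429, honoring Retry-After when present."""
    def wrapped(**params):
        for attempt in range(max_retries):
            try:
                return call(**params)
            except urllib.error.HTTPError as err:
                if err.code != 429:
                    raise
                time.sleep(int(err.headers.get("Retry-After", 2 ** attempt)))
        raise RuntimeError("rate-limited too many times; giving up")
    return wrapped
```

Usage would look like `paginate(with_backoff(my_history_call), channel="C123", oldest=week_ago)`, where `my_history_call` is your own one-request function.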
- Posting the summary back into the same channel
This creates a loop where next week's summary includes last week's summary. Post to a dedicated #digests channel or send as a DM to the intended reader.
- Not handling private channels and DMs correctly
Your bot can only read channels it's been invited to. If leadership expects a summary of private channels, you need explicit invites and probably a user token with appropriate scopes, not just a bot token.
Frequently asked questions
What Slack API scopes do I need to build a summarizer bot?
At minimum: channels:history and channels:read for public channels, groups:history for private channels the bot is invited to, and chat:write to post the summary. If you want to read DMs or summarize across all channels a user can see, you'll need a user token with im:history and mpim:history.
Which AI model works best for Slack summarization?
For most teams, GPT-4o-mini or Claude Haiku give the best cost-to-quality ratio; they handle 50-200 message summaries cleanly for a fraction of a cent. Step up to GPT-4o or Claude Sonnet if your channels include technical content where nuance matters, like engineering incidents or legal discussion.
How do I handle Slack threads in the summary?
Fetch threads separately using conversations.replies for each parent message that has reply_count > 0. Then format them in the prompt as nested structures so the model understands the hierarchy. Summarizing threads flat loses too much context.
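A sketch of that expansion step. It assumes `call(method, **params)` is your own wrapper around the Slack Web API; `reply_count`, `ts`, and `thread_ts` are real Slack message fields.

```python
def expand_threads(call, channel: str,
                   messages: list[dict]) -> dict[str, list[dict]]:
    """Fetch the full thread for each parent that has replies; keep plain
    top-level messages as single-element threads."""
    threads: dict[str, list[dict]] = {}
    for msg in messages:
        if msg.get("reply_count", 0) > 0:
            payload = call("conversations.replies", channel=channel, ts=msg["ts"])
            threads[msg["ts"]] = payload["messages"]  # parent first, then replies
        elif msg.get("thread_ts") in (None, msg["ts"]):
            threads[msg["ts"]] = [msg]  # unthreaded message
    return threads
```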
Can I do this without code using Zapier or n8n?
Yes. n8n has native Slack and OpenAI nodes and a cron trigger, so a daily summary workflow takes about 30 minutes to build. Zapier works too but gets expensive at high message volumes because of task-based pricing. For anything beyond a single channel, a small Python script on a cron is cheaper and more flexible.
How much does it cost to run Slack summaries with AI?
For a channel with 500 messages per week, expect roughly 8-15k input tokens per summary. On GPT-4o-mini that's under a cent per run. Even daily summaries across 10 channels come in under $3 per month. Model cost is rarely the constraint; engineering time is.
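The arithmetic is easy to sanity-check. A back-of-envelope cost model, where the ~25 tokens-per-message average and the per-million-token prices are placeholder assumptions (check your vendor's current price sheet):

```python
def digest_cost(messages: int, avg_tokens_per_msg: int = 25,
                output_tokens: int = 300,
                in_price_per_m: float = 0.15,
                out_price_per_m: float = 0.60) -> float:
    """Dollar cost of one digest run. Default prices are illustrative
    (GPT-4o-mini-class list prices); swap in your model's actual rates."""
    input_tokens = messages * avg_tokens_per_msg
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000
```

At 500 messages that works out to about 12.5k input tokens and roughly $0.002 per run, consistent with the under-a-cent figure above.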
Is it safe to send Slack messages to an LLM API?
Depends on your data and vendor. OpenAI, Anthropic, and Azure OpenAI all offer no-training-on-data terms for API usage, and Azure and AWS Bedrock add enterprise controls. If your Slack contains regulated data, get a signed BAA or equivalent and check your company's data classification policy before wiring anything up.