Diagnose week-one drop-off in your paid Slack community in 30 minutes

Most operators of paid Slack communities suspect they are losing new members in the first week. Few have actually measured it. This post walks through three queries you can run today — with nothing but the Slack Web API, a CSV export, and a spreadsheet — to find out whether your week-one drop-off is mild, real, or acute. No tool required. No new accounts to sign up for.

If you run a paid community on Slack and you have ever wondered “is my onboarding actually working,” this is the cheapest answer you can get. It will take you less than thirty minutes. The output is three numbers, with benchmarks, and a clear next step for each.

What you need before you start

One important note before we start: these queries read the messages your members have already posted. They do not surface anything that was not already visible to a workspace admin. If you are not comfortable running this on your own data, you should not be running a paid community on Slack — the platform’s admin model is what it is.

Query 1 — Week-one post rate (the headline number)

This is the single most diagnostic measure of whether your community has a week-one drop-off problem. The question:

Of the members who joined your workspace 8–14 days ago, what percentage posted at least one message in their first seven days?

The 8–14-day window is deliberate — you want a cohort whose first week has fully closed (so day-7 is in the past), but you do not want stale data. Each week you re-run, you get a fresh seven-day cohort.

The mechanic in three steps:

  1. Call users.list. Filter to non-bot, non-guest, non-deleted members who joined 8–14 days ago. (Slack’s user object does not expose a clean join timestamp; its updated field is only a rough proxy. For paid communities the cleaner source is your billing system: pull the list of members who started a subscription 8–14 days ago and join on email.)
  2. For each member, call users.conversations (passing their user ID) to enumerate the public channels they belong to, then conversations.history on each, filtering messages where user equals the member’s ID and ts falls within seven days of their join. You only need to know whether they posted at least once — you do not need the message text. The first match short-circuits the loop.
  3. Divide: members who posted in week one / members in cohort. That is your week-one post rate.
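The three steps above reduce to simple cohort math once you have, for each member, a join timestamp and the timestamps of their messages (gathered via users.conversations plus conversations.history as described, or joined from your billing export). A minimal sketch, on synthetic data; all timestamps are epoch seconds, matching Slack's ts format:

```python
WEEK = 7 * 24 * 3600  # seven days, in epoch seconds

def posted_in_week_one(join_ts, message_ts):
    # message_ts: this member's message timestamps, e.g. collected by
    # filtering conversations.history results to their user ID.
    return any(join_ts <= ts < join_ts + WEEK for ts in message_ts)

def week_one_post_rate(cohort):
    # cohort: list of (join_ts, [message_ts, ...]) for members who joined
    # 8-14 days ago. Returns the headline number as a fraction.
    posters = sum(posted_in_week_one(j, msgs) for j, msgs in cohort)
    return posters / len(cohort) if cohort else 0.0
```

For example, a four-member cohort in which two members posted inside their first-week window yields a rate of 0.5.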

Benchmarks (from public community-ops conversations and the private exports we have seen):

Query 2 — Time-to-first-post, median

This number tells you when your members activate — if at all. It is the second-highest-leverage measurement after query 1, and it answers a different question: of the members who do post, are they posting fast enough that the activation moment compounds, or are they posting so late that they are already half-disengaged?

The mechanic:

  1. For the same cohort as query 1, filter to the subset who posted in week one (the numerator from query 1).
  2. For each, compute (first_message_ts - join_ts) in days.
  3. Take the median across the cohort. (The mean is misleading here: one member who posts on day 7 drags it upward even when most of the cohort posted on day 1.)
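A sketch of that computation using Python's statistics module, assuming you already have each week-one poster's join and first-message timestamps:

```python
import statistics

DAY = 24 * 3600  # one day in epoch seconds

def median_days_to_first_post(posters):
    # posters: list of (join_ts, first_message_ts) for the members who
    # posted in week one (the numerator cohort from query 1).
    deltas = [(first - join) / DAY for join, first in posters]
    return statistics.median(deltas)
```

Three posters activating on days 1, 2, and 6 give a median of 2.0; the mean (3.0) would overstate it, which is exactly the distortion the step above warns about.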

Benchmarks:

Query 3 — Day-90 retention by week-one post status

This is the query that turns the first two numbers from interesting to irrefutable. It answers: how much retention does a week-one post actually buy you?

The mechanic:

  1. Take a cohort that joined ninety days ago (give or take a week — you want the day-90 mark to have passed).
  2. Split the cohort into two groups: posted in week one and did not post in week one. Use the same query-1 logic but on the older cohort.
  3. For each group, count how many are still “active” today. The strict definition, “visited the workspace in the past fourteen days,” requires Slack’s admin analytics; the simpler proxy, which is usually good enough, is “posted at least one message in the past thirty days.” Divide each group’s active count by that group’s size.
  4. Compare the two retention rates.
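The split-and-compare step can be sketched as follows, again on synthetic data; “active” here uses the simpler proxy of at least one post in the past thirty days:

```python
DAY = 24 * 3600  # one day in epoch seconds

def retention_split(cohort, now):
    # cohort: list of (join_ts, [message_ts, ...]) for members who joined
    # about ninety days before `now`. Returns (week-one posters' retention,
    # non-posters' retention), each divided by its own group's size.
    buckets = {True: [0, 0], False: [0, 0]}  # posted_week_one -> [active, size]
    for join_ts, msgs in cohort:
        posted = any(join_ts <= ts < join_ts + 7 * DAY for ts in msgs)
        active = any(ts >= now - 30 * DAY for ts in msgs)
        buckets[posted][0] += active
        buckets[posted][1] += 1
    return tuple(a / n if n else 0.0 for a, n in (buckets[True], buckets[False]))
```

The two numbers it returns are the pair you compare in step 4.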

What you will almost certainly find: a 2×–4× gap. The week-one posters retain at 50–70% by day 90. The week-one non-posters retain at 15–30%. This is the gap you are leaving on the table every cohort. If you are paying $100–500 per acquired seat and losing one in three seats in week one, the math on closing this gap is brutal — usually one saved cohort pays for a year of any tooling that solves it.
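To make that acquisition math concrete, a back-of-envelope sketch. The $100–500 CAC range and the one-in-three loss rate come from the paragraph above; the cohort size and recovery fraction are illustrative assumptions you should replace with your own numbers:

```python
cac = 200            # $ per acquired seat; the text's range is $100-500
cohort_size = 60     # assumed new seats per cohort (plug in yours)
week_one_loss = 1 / 3  # seats effectively lost in week one
recovered = 0.5      # assumed fraction of that loss an intervention saves

seats_saved = cohort_size * week_one_loss * recovered  # roughly 10 seats
value_per_cohort = seats_saved * cac                   # roughly $2,000
```

Even at these conservative assumptions, a single cohort's recovered revenue lands in the low thousands of dollars, which is the “one saved cohort pays for a year of tooling” point.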

What to do once you have the numbers

The interventions are not subtle, and they do not need to be:

You can build this yourself, or you can use us

Everything above is buildable with the Slack Web API, a cron job, and a Postgres table. Several operators have built it. The reason most have not is that the day-3 conditional logic and the day-7 scorecard email are non-trivial to keep working — Slack rate-limits, schema changes, sub-channel onboarding edge cases — and the ongoing maintenance burden ends up larger than the initial build. The right call depends on whether your engineering bandwidth is the bottleneck or your member retention is.

Foothold is the same diagnosis turned into a one-click Slack install: it computes these three numbers continuously, runs the day-0 DM and day-3 conditional nudge automatically, and emails you the day-7 scorecard every Monday. The early access waitlist is open below.