FunnelScout
Completed. Multi-tenant B2B SaaS on GoHighLevel - weekly AI pipeline analysis, Stripe billing, and webhooks that return before heavy work runs.
Small GHL agencies run 5-15 client accounts with no analyst. Pipeline issues show up late - often after the client asks why leads stopped closing. If you're checking dashboards by hand, you miss patterns across accounts until the damage is obvious.
FunnelScout is a B2B SaaS layer on GoHighLevel's marketplace OAuth: connect once, pull every sub-account, stream opportunity updates through webhooks, and run a multi-step Claude pass per client on a schedule. Every Monday the owner gets a digest - three revenue recommendations per client with dollar estimates. Stripe handles subscriptions, every query is tenant-scoped, and every Claude call leaves a cost-log row.
Acknowledge webhooks fast - heavy work runs in the background
GHL and Stripe need fast responses - GHL webhooks time out, Stripe expects a quick 200. Analysis takes tens of seconds. So routes verify signatures, hand work to Inngest, and return. No heavy work on the HTTP thread.
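A minimal sketch of the verify-then-queue pattern, using Node's crypto primitives. The handler names, event name, and in-memory queue are illustrative stand-ins (the real app hands off to Inngest), but the shape is the point: verify, enqueue, return.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Stand-in for an Inngest send; the real background worker picks these up.
type QueuedJob = { name: string; payload: unknown };
const jobQueue: QueuedJob[] = [];

function verifySignature(rawBody: string, signature: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const given = Buffer.from(signature, "hex");
  // Length check first: timingSafeEqual throws on mismatched lengths.
  return given.length === expected.length && timingSafeEqual(given, expected);
}

function handleWebhook(rawBody: string, signature: string, secret: string): number {
  if (!verifySignature(rawBody, signature, secret)) return 401;
  // Hand off immediately; the tens-of-seconds analysis runs in the background.
  jobQueue.push({ name: "opportunity.updated", payload: JSON.parse(rawBody) });
  return 200; // respond before any heavy work
}
```

The HTTP thread never does more than a hash comparison and a queue push, so the response lands well inside GHL's and Stripe's timeouts.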
Same event or double-click cannot create duplicate work
GHL retries webhooks. People double-click Run analysis now. Inngest retries failed jobs. Opportunity events dedupe in the database; each analysis run is keyed so the same window can't queue twice. Click all you want - you still get one run.
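A sketch of the run keying, with assumed names: one key per agency and analysis window, backed here by an in-memory set where the real app uses a unique database constraint.

```typescript
// One key per (agency, window start): retries, double-clicks, and Inngest
// re-deliveries all collapse to the same key, so only one run is queued.
function analysisRunKey(agencyId: string, windowStart: Date): string {
  // Normalize to the UTC day the window opens so repeats map to the same key.
  return `${agencyId}:${windowStart.toISOString().slice(0, 10)}`;
}

const queued = new Set<string>(); // stand-in for a unique DB constraint

function tryQueueRun(agencyId: string, windowStart: Date): boolean {
  const key = analysisRunKey(agencyId, windowStart);
  if (queued.has(key)) return false; // same window already queued: drop it
  queued.add(key);
  return true;
}
```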
Encrypt GHL tokens at rest - one leak is the whole agency
One leaked GHL token exposes the whole agency, not one user. BetterAuth won't encrypt custom columns for you, so I encrypt access and refresh tokens with AES-256-GCM and a fresh IV each time before storage. Lose the key and you lose every connection - that's written down.
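A sketch of that encryption path with Node's built-in crypto. The stored-field layout (`iv:tag:ciphertext`, base64) is an assumption; the essentials are AES-256-GCM, a fresh 12-byte IV per write, and the auth tag stored alongside so tampering fails loudly.

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

function encryptToken(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // fresh IV every time: never reuse one with GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return [iv, cipher.getAuthTag(), ct].map((b) => b.toString("base64")).join(":");
}

function decryptToken(stored: string, key: Buffer): string {
  const [iv, tag, ct] = stored.split(":").map((s) => Buffer.from(s, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // a wrong key or altered ciphertext throws at final()
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```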
Billing tier checked before analysis spends money
Analysis costs money. I check the plan before any Claude call - not after tokens are spent. At-limit accounts still run; only over-limit is blocked. An audit once found an off-by-one that stopped people exactly at the cap - fixed and covered by a test.
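The boundary reduces to one comparison. A hypothetical version of the check, showing the off-by-one the audit caught:

```typescript
// Gate checked before any Claude call. An account exactly at its cap may still
// run; only strictly over the cap is blocked.
function canRunAnalysis(runsUsed: number, planLimit: number): boolean {
  // The audited bug was `<` here, which blocked accounts exactly at the cap.
  return runsUsed <= planLimit;
}
```

A test pins the boundary so the comparison can't silently regress.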
1. OAuth to GHL - One flow through GoHighLevel OAuth: approve access, and sub-accounts land in the dashboard. Tokens never sit in plaintext in the database.
2. Backfill and stream - After connect, a job pulls about three months of opportunity history. Live updates come through signed webhooks: verify, queue, return 200 - processing happens in the background.
3. Weekly fan-out - Inngest cron (Monday 9 AM Pacific) runs one analysis per active sub-account. Concurrency stays capped so Anthropic rate limits don't spike.
4. Three-step Claude pass - Per client: metrics summary, anomaly pass, then revenue recommendations. Each Claude call logs cost.
5. Digest email - When the run finishes, Resend sends the owner the digest - same path for manual Run analysis now, just triggered differently.
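The capped fan-out can be sketched as a concurrency-limited map. This is a minimal stand-in for Inngest's built-in concurrency option, not the real mechanism: at most `limit` analyses are in flight at once.

```typescript
// Run `fn` over all items with at most `limit` in flight, preserving order.
async function mapWithLimit<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++; // safe: no await between the check and the increment
      results[i] = await fn(items[i]);
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
  return results;
}
```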
Every query scoped to the agency
Multi-tenancy isn't a sidebar toggle. Every Drizzle query in loaders, API routes, and Inngest jobs filters by agency. Skip it once and you leak another shop's data.
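A sketch of the convention, with the Drizzle call replaced by an in-memory filter so it runs standalone: data-access helpers take `agencyId` as a required argument and build it into the filter, so a query without a tenant can't be expressed.

```typescript
type Opportunity = { id: string; agencyId: string; value: number };

// In the real app this is a Drizzle query with
// `where(eq(opportunities.agencyId, agencyId))`; the filter here stands in.
function opportunitiesForAgency(rows: Opportunity[], agencyId: string): Opportunity[] {
  return rows.filter((r) => r.agencyId === agencyId);
}
```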
One place for product logic, thin HTTP handlers
Product logic lives in lib/. Background jobs call lib/ and stay thin. app/api/ only covers webhooks, OAuth, the job runner, and auth. Components render - no database calls in the tree. Schema changes mean a migration on purpose.
Retries should not double-call Claude or duplicate cost rows
Inside Inngest, each expensive step is wrapped so retries don't redo finished work. Otherwise a partial failure could bill Claude twice. Same idea as webhook deduping.
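A toy version of what Inngest's `step.run` provides, with a Map standing in for Inngest's persisted step state: a finished step's result is stored, so a retried function reuses it instead of calling Claude again.

```typescript
const completed = new Map<string, unknown>(); // stand-in for persisted step state

// Run `fn` once per stepId; on retry, return the stored result instead of
// re-running the expensive call.
async function runStep<T>(stepId: string, fn: () => Promise<T>): Promise<T> {
  if (completed.has(stepId)) return completed.get(stepId) as T;
  const result = await fn();
  completed.set(stepId, result);
  return result;
}
```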
AI cost logged as the agent runs
The multi-step agent writes cost during the run, not in a separate logging pass. Per-org and per-day spend stays visible months later.
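Sketched, with assumed table and field names: the cost row is written inside the step wrapper, so a crash mid-run still leaves the spend for completed steps recorded.

```typescript
type CostRow = { orgId: string; day: string; step: string; nanodollars: bigint };
const costLog: CostRow[] = []; // stand-in for a cost-log table insert

// Run one billed step and record its cost immediately, keyed by org and day,
// so per-org and per-day queries keep working months later.
async function runBilledStep<T>(
  orgId: string,
  step: string,
  fn: () => Promise<{ result: T; costNano: bigint }>,
): Promise<T> {
  const { result, costNano } = await fn();
  costLog.push({ orgId, day: new Date().toISOString().slice(0, 10), step, nanodollars: costNano });
  return result;
}
```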
Token pricing math stays exact at volume
Usage dollars are computed in BigInt, then converted once at the end. Float math drifts over thousands of small charges - tests lock 0.0105 to 0.0105.
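One way to do it, sketched with made-up rates (not Anthropic's real price sheet): accumulate in nanodollars (1e-9 USD) as BigInt, and format to a decimal string exactly once at the end.

```typescript
const NANO = 1_000_000_000n; // nanodollars per dollar

// Rates are quoted per million tokens, so divide once here - still exact.
function costNanodollars(tokens: bigint, ratePerMTokNano: bigint): bigint {
  return (tokens * ratePerMTokNano) / 1_000_000n;
}

// The single conversion point: BigInt in, decimal string out, no floats anywhere.
function formatDollars(nano: bigint): string {
  const whole = nano / NANO;
  const frac = (nano % NANO).toString().padStart(9, "0").replace(/0+$/, "");
  return frac ? `${whole}.${frac}` : `${whole}`;
}
```

At a hypothetical $3 per million tokens, 3,500 tokens comes out as exactly "0.0105" - the kind of equality the float version eventually loses.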
Webhooks and OAuth locked down the usual way
GHL HMAC checks use timingSafeEqual. Stripe reads the raw body for signatures. OAuth state is strict. CSP uses per-request nonces so Stripe.js loads without unsafe-inline.
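The nonce piece can be sketched as follows; the header is simplified to the one directive that matters here, and the real policy carries more.

```typescript
import { randomBytes } from "node:crypto";

// Per-request CSP nonce: the same value goes in the header and on the
// <script nonce="..."> tag, so Stripe.js loads without `unsafe-inline`.
function cspForRequest(): { nonce: string; header: string } {
  const nonce = randomBytes(16).toString("base64");
  return {
    nonce,
    header: `script-src 'self' 'nonce-${nonce}' https://js.stripe.com`,
  };
}
```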
Failed runs show up in the UI
Each analysis has status and errors. If Claude fails or the chain throws, the row is failed, Sentry sees it, and the UI can show it.
Missing secrets fail at startup, not on first traffic
Configuration validates on boot - including the GHL token encryption secret with a guard against placeholder values. I'd rather crash deploy than discover a missing key under a customer.
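A sketch of the boot-time check; the variable names and placeholder list are assumptions. The caller throws at startup if the returned list is non-empty, before the server takes traffic.

```typescript
// Values that mean "someone forgot to set this", caught before first traffic.
const PLACEHOLDERS = new Set(["changeme", "replace-me", "secret"]);

function validateConfig(env: Record<string, string | undefined>): string[] {
  const errors: string[] = [];
  for (const key of ["DATABASE_URL", "GHL_TOKEN_ENCRYPTION_KEY", "STRIPE_SECRET_KEY"]) {
    const value = env[key]?.trim().toLowerCase();
    if (value === undefined || value === "") errors.push(`${key} is missing`);
    else if (PLACEHOLDERS.has(value)) errors.push(`${key} is a placeholder value`);
  }
  return errors;
}
```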
- Swap the in-memory webhook rate limit for Upstash Redis (or similar). On Vercel, each instance has its own Map, so limits don't add up. Documented - I'd still fix it before I trusted it under abuse.
- Ship account deletion behind a real legal review - right now it's a known gap for anyone who needs full data purge on request.
- Use a separate secret for OAuth state HMAC vs BetterAuth sessions - today one rotation moves both. Low risk day to day; I'd split it before a real security review.
- Add a deployment pipeline that runs migrations in a defined order - manual deploys are fine for a solo build until a missed migration becomes an incident.
- Add one full E2E path: sign up, verify email, connect GHL, run analysis, get mail. Integration tests cover the sharp edges; the happy path still isn't one script end to end.