Case study

FunnelScout

Multi-tenant SaaS with scheduled AI analysis per account. Each tenant isolated, usage tracked, AI budget controlled per plan. The hard part wasn't the model. It was making sure tenant A never touched tenant B's data - and that one heavy user couldn't burn through the budget for everyone else.

The problem

Small GHL agencies usually run 5-15 client accounts without anyone acting as a real analyst. Pipeline issues show up late, usually after the client is already asking what happened. If you're checking dashboards by hand, you're already behind.

What I built

FunnelScout is a SaaS layer on top of GoHighLevel's marketplace OAuth. Connect once, pull every sub-account, stream opportunity updates through webhooks, and run a multi-step Claude analysis per client on a schedule. Every Monday the owner gets a digest with three revenue recommendations per client and estimated impact. Stripe subscriptions, tenant boundaries, and cost logging for every Claude call are all built into the product.

Screenshots
Landing page positioning the product around weekly analysis and usage-aware billing.


Decisions that matter

Acknowledge webhooks fast - heavy work runs in the background

GHL and Stripe both want fast responses. The analysis itself takes tens of seconds. So routes verify signatures, hand the work to Inngest, and return. Heavy work does not get to sit on the HTTP thread.
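
A minimal sketch of that shape as a Next.js route handler - the header name, event name, and the verifyGhlSignature helper (sketched later under security) are assumptions, not GHL's exact contract:

```typescript
// app/api/webhooks/ghl/route.ts - verify, enqueue, return. No heavy work here.
import { inngest } from "@/lib/inngest";             // hypothetical client location
import { verifyGhlSignature } from "@/lib/security"; // sketched further down

export async function POST(req: Request) {
  const rawBody = await req.text(); // the raw body, exactly as signed
  const signature = req.headers.get("x-wh-signature") ?? "";

  if (!verifyGhlSignature(rawBody, signature)) {
    return new Response("bad signature", { status: 401 });
  }

  // Hand the payload to the background queue and acknowledge immediately.
  await inngest.send({
    name: "ghl/opportunity.updated",
    data: JSON.parse(rawBody),
  });

  return Response.json({ ok: true }); // back in milliseconds, not tens of seconds
}
```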

Same event or double-click cannot create duplicate work

GHL retries webhooks. People double-click Run analysis now. Inngest retries failed jobs. Opportunity events dedupe in the database, and each analysis window is keyed so it cannot queue twice. Click all you want. You still get one run.
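
A sketch of the window keying, assuming Inngest's idempotency option and a hypothetical event shape - retries and double-clicks collapse onto the same key:

```typescript
export const runAnalysis = inngest.createFunction(
  {
    id: "run-analysis",
    // Events resolving to a key Inngest has already seen are dropped, so
    // "same sub-account + same analysis window" can only queue once.
    idempotency: "event.data.subAccountId + '-' + event.data.weekKey",
    // Cap parallel runs so Anthropic rate limits stay flat.
    concurrency: { limit: 5 },
  },
  { event: "analysis/requested" },
  async ({ event, step }) => {
    // ... the three-step Claude pass, sketched under "How it works"
  }
);
```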

Encrypt GHL tokens at rest - one leak is the whole agency

One leaked GHL token exposes the whole agency, not one user. BetterAuth does not encrypt custom columns for you, so I encrypt access and refresh tokens with AES-256-GCM and a fresh IV before storage. Lose the key and you lose every connection. That part is documented too.
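
A sketch of the shape with Node's built-in crypto - the env var name is hypothetical; the algorithm and fresh-IV rule are the ones named above:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// 32-byte key from the environment, validated at boot (see startup checks).
const key = Buffer.from(process.env.GHL_TOKEN_SECRET!, "hex");

export function encryptToken(plaintext: string): string {
  const iv = randomBytes(12); // fresh IV every time - GCM breaks badly on reuse
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag(); // auth tag: a tampered row fails to decrypt
  return [iv, tag, ciphertext].map((b) => b.toString("base64")).join(".");
}

export function decryptToken(stored: string): string {
  const [iv, tag, ciphertext] = stored.split(".").map((p) => Buffer.from(p, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```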

Billing tier checked before analysis spends money

Analysis costs money. I check the plan before any Claude call, not after tokens are gone. At-limit accounts still run. Only over-limit is blocked. An audit once caught an off-by-one at the cap, which is exactly the kind of bug that gets expensive if nobody notices.
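
A sketch of the gate, with hypothetical helper and limit values pulled from the pricing tiers - the whole bug class lives in one comparison:

```typescript
// Hypothetical limits mirroring the plans.
const PLAN_LIMITS: Record<string, number> = {
  starter: 5,
  agency: 15,
  pro: Number.POSITIVE_INFINITY,
};

// Called before any Claude call, never after.
export function canRunAnalysis(plan: string, activeSubAccounts: number): boolean {
  const limit = PLAN_LIMITS[plan] ?? 0;
  // <= keeps at-limit accounts running; only strictly over-limit is blocked.
  // Writing < here is exactly the off-by-one the audit caught.
  return activeSubAccounts <= limit;
}
```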

How it works
Agency: connect to Monday digest

1. OAuth to GHL - One flow through GoHighLevel OAuth: approve access, sub-accounts land in the dashboard. Tokens never sit in plaintext in the database.
2. Backfill and stream - After connect, a job pulls about three months of opportunity history. Live updates come through signed webhooks: verify, queue, return 200 - processing happens in the background.
3. Weekly fan-out - Inngest cron (Monday 9 AM Pacific) runs one analysis per active sub-account. Concurrency stays capped so Anthropic rate limits don't spike (see the sketch after this list).
4. Three-step Claude pass - Per client: metrics summary, anomaly pass, then revenue recommendations. Each Claude call logs cost.
5. Digest email - When the run finishes, Resend sends the owner the digest - same path for manual Run analysis now, just triggered differently.
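
A sketch of the fan-out from step 3, assuming hypothetical getActiveSubAccounts and currentIsoWeek helpers in lib/ - the throttling itself lives on the analysis function's concurrency cap (see the idempotency sketch above):

```typescript
// Monday 9 AM Pacific: one event per active sub-account.
export const weeklyFanOut = inngest.createFunction(
  { id: "weekly-fan-out" },
  { cron: "TZ=America/Los_Angeles 0 9 * * 1" },
  async ({ step }) => {
    const accounts = await step.run("load-active-accounts", () =>
      getActiveSubAccounts()
    );

    // Fan out; the analysis function's concurrency cap does the throttling.
    await step.sendEvent(
      "fan-out",
      accounts.map((a) => ({
        name: "analysis/requested",
        data: { subAccountId: a.id, weekKey: currentIsoWeek() },
      }))
    );
  }
);
```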
Billing: subscribe without support tickets

  • Checkout - Stripe Checkout for Starter ($49/mo, 5 sub-accounts), Agency ($99/mo, 15), and Pro ($199/mo, uncapped in product terms).
  • Webhook updates state - Subscription rows update from verified Stripe webhooks - raw body preserved for signature verification, same discipline as GHL (sketch below).
  • Customer portal - Stripe Customer Portal for cancel, upgrade, and payment method changes - no custom billing UI to maintain.
  • Limits before spend - If you're over plan on sub-accounts, analysis doesn't start - the expensive path never opens.
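
The Stripe handler from the webhook item, sketched - the one rule is to read the raw body before anything parses it:

```typescript
// app/api/webhooks/stripe/route.ts
import Stripe from "stripe";
import { inngest } from "@/lib/inngest";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

export async function POST(req: Request) {
  const rawBody = await req.text(); // unparsed - constructEvent verifies this exact string
  const signature = req.headers.get("stripe-signature") ?? "";

  let event: Stripe.Event;
  try {
    event = stripe.webhooks.constructEvent(
      rawBody,
      signature,
      process.env.STRIPE_WEBHOOK_SECRET!
    );
  } catch {
    return new Response("bad signature", { status: 400 });
  }

  // Verified event updates subscription state in the background; ack fast.
  await inngest.send({ name: "stripe/event.received", data: event });
  return new Response("ok");
}
```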
What matters in production

Every query scoped to the agency

Multi-tenancy is not a sidebar toggle. Every Drizzle query in loaders, API routes, and Inngest jobs filters by agency. Skip it once and you leak another shop's data.
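
The shape of it, with hypothetical table and session names - the tenant id comes from the authenticated session, never from the request:

```typescript
import { and, eq } from "drizzle-orm";

// Every loader, API route, and job reads through this filter.
const rows = await db
  .select()
  .from(opportunities)
  .where(
    and(
      eq(opportunities.agencyId, session.agencyId), // the tenant boundary
      eq(opportunities.subAccountId, subAccountId)
    )
  );
```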

One place for product logic, thin HTTP handlers

Product logic lives in lib/. Background jobs call into lib/ and stay thin. app/api/ only handles webhooks, OAuth, the job runner, and auth. Components render. They do not run database logic from the tree.

Retries should not double-call Claude or duplicate cost rows

Inside Inngest, each expensive step is wrapped so retries do not redo finished work. Otherwise a partial failure could bill Claude twice and duplicate the cost rows. Same idea as webhook deduping. Same reason too.
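
A sketch of the wrapping - Inngest records each step's result, so a retry replays finished steps from the log instead of re-executing them. Helper names are hypothetical:

```typescript
// Inside the analysis function: each Claude call is its own step.
// If step 3 throws, the retry replays steps 1 and 2 from their recorded
// results - no second bill, no duplicate cost rows.
const metrics = await step.run("metrics-summary", () =>
  summarizeMetrics(subAccountId)
);
const anomalies = await step.run("anomaly-pass", () =>
  findAnomalies(metrics)
);
const recommendations = await step.run("revenue-recommendations", () =>
  recommendRevenue(metrics, anomalies)
);
```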

AI cost logged as the agent runs

The multi-step agent writes cost during the run, not in a separate logging pass later. Per-org and per-day spend stays visible months later, which is when people suddenly care.
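
A sketch with the Anthropic SDK and a hypothetical aiCostLogs table - the insert sits next to the call, inside the same step, not in a later pass:

```typescript
const msg = await anthropic.messages.create({
  model: "claude-sonnet-4-5", // whichever model the step uses
  max_tokens: 2048,
  messages: [{ role: "user", content: prompt }],
});

// Written as the agent runs: per-org, per-day spend is queryable later.
await db.insert(aiCostLogs).values({
  agencyId,
  step: "metrics-summary",
  inputTokens: msg.usage.input_tokens,
  outputTokens: msg.usage.output_tokens,
});
```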

Token pricing math stays exact at volume

Usage dollars are computed in BigInt and converted once at the end. Float math drifts over thousands of small charges. That seems harmless until finance starts asking questions.
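
A sketch of the integer path - the prices are examples, not the real rate card. Everything stays BigInt until the display edge:

```typescript
// Prices as integer micro-dollars per million tokens ($3.00 => 3_000_000n).
const INPUT_PER_MTOK = 3_000_000n;
const OUTPUT_PER_MTOK = 15_000_000n;

export function usageMicroDollars(inputTokens: bigint, outputTokens: bigint): bigint {
  // Exact integer math - no float drift across thousands of small charges.
  return (inputTokens * INPUT_PER_MTOK + outputTokens * OUTPUT_PER_MTOK) / 1_000_000n;
}

// Sum rows in BigInt, convert exactly once for display:
const dollars = Number(usageMicroDollars(12_345n, 678n)) / 1_000_000; // 0.047205
```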

Webhooks and OAuth locked down the usual way

GHL HMAC checks use timingSafeEqual. Stripe reads the raw body for signatures. OAuth state is strict. CSP uses per-request nonces so Stripe.js loads without unsafe-inline. Boring security work. Very useful security work.
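
The GHL check, assuming an HMAC-SHA256 scheme and a hypothetical secret name - this is the verifyGhlSignature the webhook route above imports:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

export function verifyGhlSignature(rawBody: string, signature: string): boolean {
  const expected = createHmac("sha256", process.env.GHL_WEBHOOK_SECRET!)
    .update(rawBody)
    .digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // Length check first: timingSafeEqual throws on mismatched lengths,
  // and comparing lengths leaks nothing useful here.
  return a.length === b.length && timingSafeEqual(a, b);
}
```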

Failed runs show up in the UI

Each analysis has status and errors. If Claude fails or the chain throws, the row is marked failed, Sentry sees it, and the UI can show it. Silent failure is how trust dies in products like this.

Missing secrets fail at startup, not on first traffic

Configuration validates on boot, including the GHL token encryption secret with a guard against placeholder values. I'd rather crash on deploy than discover a missing key under a customer.
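
A sketch of the boot-time guard - zod is my assumption here, any schema validator works, and the names are hypothetical:

```typescript
import { z } from "zod";

// Parsed once at module load: a missing or placeholder secret crashes
// the deploy instead of surfacing under a customer's first request.
export const env = z
  .object({
    GHL_TOKEN_SECRET: z
      .string()
      .length(64) // 32 bytes, hex-encoded
      .refine((s) => !/^(changeme|placeholder|xxx)/i.test(s), "placeholder value"),
    STRIPE_WEBHOOK_SECRET: z.string().min(1),
    ANTHROPIC_API_KEY: z.string().min(1),
  })
  .parse(process.env);
```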

What I'd tighten
  • I'd swap the in-memory webhook rate limit for Redis or something similar. On Vercel, each instance has its own Map, so the current limit does not actually hold under abuse.
  • I'd ship account deletion only after a real legal review. Right now that is a known gap for anyone who needs a full purge on request.
  • I'd split the OAuth state HMAC secret from the BetterAuth session secret. Low risk day to day. Still not how I'd leave it before a real security review.
  • I'd add a deployment pipeline that runs migrations in a defined order. Manual deploys are fine until one missed migration becomes an incident.
  • I'd add one full end-to-end path: sign up, verify email, connect GHL, run analysis, get the mail. The sharp edges are covered. The happy path still deserves one script all the way through.
Stack

Core

Next.js 16 · TypeScript · Supabase · PostgreSQL · Drizzle ORM · Inngest · BetterAuth · Stripe

Supporting

Anthropic API · Resend · Sentry · Vercel
View live →