Product Case Study

From Enterprise AI
to Main Street

How I built an AI-powered business from zero, translating enterprise tooling into real solutions for local operators who need tech that actually works.

Humberto "Beto" Salgado · Senior Product Manager · AI-Enabled Platforms
TL;DR
~10 min read

Most PMs ship and move on. I stay. I find what customers actually need, build it with them, and keep the relationship going long after the launch deck closes.

That's been the job for fifteen years across enterprise tech, federal programs, and now AI builds for small businesses. This case study walks through one of them, end to end, for a four-attorney law firm in Orland Park.

01 · The Problem

They're not lost on AI.
They're out of time.

Most local operators I sit down with already know AI can help them. They've seen the demos. They've watched colleagues experiment. The honest question I get most often, halfway through a coffee, isn't "what is this?" It's "why can't we do this ourselves?"

They could. They're capable. They just don't have a free Tuesday afternoon to figure out which AI tools fit their workflow, which ones break the moment a real customer hits them, and which ones are worth the monthly fee. They're running their business. Figuring out AI doesn't fit between the actual work.

That's the gap. Not "explain AI to people who don't get it." Translate enterprise AI into something a four-person practice can actually adopt, in a way that protects their time instead of asking for more of it.

Who I see, day to day

The Homeowner

Smart home gear from five brands, none talking to each other.

The Small Business Owner

Heard AI can help but doesn't know where to start.

The Local Operator

Running on spreadsheets because nobody has shown them automation.

The Trust Gap

Everyone selling "AI" speaks a language these people don't recognize.

02 · Discovery Before the Demo

It started over coffee.
In Orland Park.

Techo Tuesday's first AI engagement came through a friend. He introduced me to a family member who works at a four-attorney personal injury practice in Orland Park. The intro was casual, and the conversation stayed that way. We talked through their day, where things felt repetitive, where things broke down.

The pain that surfaced wasn't a missing tool. The firm doesn't run 24/7, but client inquiries don't follow office hours. Phone calls, web forms, and emails kept arriving after the lights went out, and the work to triage them landed on paralegals and the receptionist, either as late-night follow-ups, paperwork, and email catch-up, or as a backlog to clear before the next day's actual work could start. They'd thought about automation. Nobody had explained what it would actually look like for an office their size.

I didn't pitch. I asked questions. I listened. Then I proposed a small starting point: a two-week trial where they'd show me how a typical day actually runs.

What discovery actually looked like · Method
Coffee, not a sprint. One real conversation. No interview deck, no discovery framework.
A typical-day walkthrough. They showed me how calls, forms, and emails get handled today. Who touches each one. Where the wait times come from.
Three days in their office. I sat with the workflow in person. Watched the tools they actually use. Picked up on what wasn't in the documentation.
Anonymized intake samples. I took real intake patterns home, customer info redacted, so I could prototype against their actual cases instead of imagined ones.

From observation to prototype

The shadowing made the build choices obvious. Inquiries arriving after hours meant a capture-and-hold problem, not a staffing problem. Repetitive intake questions meant a qualification rubric, not a chat model. The morning-catch-up pattern meant the right output wasn't a real-time alert. It was a prepped queue waiting on the screen at 8 AM.

I built a working prototype in Claude before anything was implemented in their environment. Walked them through it in person. Showed them how it would handle their actual cases. Once they could see it working, we ran a two-week trial against real client traffic. The trial proved the model. From there: build, refinement, delivery, training, with ongoing support.

03 · The System I Built

A live intake agent, end to end.

Inbound inquiries land across four channels. The agent qualifies, routes, books, and hands off, all before the office opens.

How I worked the problem
From signal to system, in five moves.
01 · Saw — 23% of weekly intake arrived outside staffed hours. The firm was drowning in after-hours intake: new-client conversion lost, conflicts caught after booking. Signal: a voicemail backlog every morning.

02 · Framed — Qualify, route, book; never advise. A triage agent, not a replacement; the line stays bright on purpose. Scope: three jobs in, one job out.

03 · Proved — Validated on 50 historical inquiries: 91% routed right, 6% sent to review, 3% wrong. Two weeks, no build commitment yet. Result: bet earned, build greenlit.

04 · Built — Pilot first, scale second: a two-week pilot, a six-week build, then live. Two attorneys ran it before firm-wide rollout. Killed: auto-reply to declines (wrong tone).

05 · Grew — From 1 to 4 practice areas served, built to expand without rework: the qualification rubric is config, not code. The original build assumed one.
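The move-05 choice is worth making concrete: because the qualification rubric is config rather than code, a new practice area is a data edit, not a rewrite. Here is a minimal sketch of that idea in Python — the field names, practice areas, and `in_scope` helper are illustrative assumptions, not the firm's actual configuration:

```python
# Hypothetical sketch: each practice area is a data entry, so expanding from
# one area to four is an edit to this dict, not a rewrite of the pipeline.
RUBRIC = {
    "personal_injury": {"jurisdictions": ["IL"], "required": ["incident_date", "contact"]},
    "estate_planning": {"jurisdictions": ["IL"], "required": ["matter_type", "contact"]},
}

def in_scope(inquiry: dict) -> bool:
    """True when the inquiry matches a configured practice area and
    jurisdiction and carries the intake fields that area requires."""
    rules = RUBRIC.get(inquiry.get("practice_area"))
    return (
        rules is not None
        and inquiry.get("jurisdiction") in rules["jurisdictions"]
        and all(field in inquiry for field in rules["required"])
    )

# An in-scope personal-injury inquiry passes the scope check:
print(in_scope({"practice_area": "personal_injury", "jurisdiction": "IL",
                "incident_date": "2024-06-11", "contact": "M. Rivera"}))  # → True
# An unconfigured practice area fails it:
print(in_scope({"practice_area": "family_law", "jurisdiction": "IL"}))    # → False
```

Adding a fourth practice area under this shape is one more `RUBRIC` entry; the pipeline code never changes.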
The system in action
Inbound · After Hours 19:42:15 · TUE
Channel: Web form
Caller intent: New client · personal injury · auto accident
Locale: Orland Park, IL
Agent engaged
Pipeline · Live
v2.1 · last tuned 3 days ago
01 Capture
Web Form Phone Email Chat
Four channels in. One source.
This run came in via web form.
This week
Inquiries 184 · Avg 4.2 min
02 Qualify
94%
Personal Injury · IL · No conflicts.
Confidence cleared the auto-route threshold.
Threshold
85% required · 94% achieved
Classification breakdown
Case type: Personal Injury
Jurisdiction: IL · in scope
Conflict screen: No match
Urgency tier: Standard · 7 days
Confidence: 94%
03 Route
Qualified
Review
Decline
Routed to qualified consult queue.
No escalation. No human review needed.
Reason
94% · in scope · no conflicts
Routing logic
≥ 85% · in scope → Auto-book
60–84% · partial fit → Manual review
< 60% · out of scope → Decline + refer
Threshold tuned across two weeks of pilot feedback. Lower thresholds cost attorney time.
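The three routing bands can be stated in a few lines. This is a sketch of the lane logic only, using the tuned thresholds described above; the function name and lane labels are mine, not the production build's:

```python
def route(confidence: float, in_scope: bool) -> str:
    """Map a classified inquiry to a routing lane using the tuned bands:
    >= 85% and in scope auto-books, 60-84% goes to manual review,
    below 60% or out of scope is declined with a referral."""
    if not in_scope or confidence < 0.60:
        return "decline_and_refer"
    if confidence >= 0.85:
        return "auto_book"
    return "manual_review"

print(route(0.94, True))   # → auto_book (the run shown here: 94%, in scope)
print(route(0.70, True))   # → manual_review
print(route(0.50, True))   # → decline_and_refer
```

Keeping the bands in one small function makes the pilot's threshold tuning a one-number change rather than a logic change.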
04 Act
Booked
Confirmed
Drafted
Logged
Thursday 2:00 PM with Atty. M.K.
Confirmation sent. Intake packet drafted.
End-to-end
4 seconds from intake to booking
Generated outputs
Confirmation subject: "Your consultation is confirmed"
Calendar event: Consult · M. Rivera · 30 min
Intake packet: 4-page summary, ready
CRM record: Contact + matter created
05 Hand off
7 overnight · 2 flagged
Morning brief, attorney-ready.
Two cases flagged for human review.
Needs review
J. Patel · possible conflict #2024-118
How it performed
System Instrumentation
Last 12 weeks
Weekly inquiries handled: 184/wk, trending up
Manual intake hours per week: ~2 hrs, down 80%
847 requests handled
0.8s avg agent response
94% auto-route rate
$0 marginal cost per run
Last night's queue · 9 captured · 6 shown
17:33:08 · Flagged · possible conflict · J. Patel · #2024-118
18:54:21 · Declined · out of jurisdiction · referral sent
19:18:03 · Booked · estate planning · D. Foster · Atty. R. S.
19:31:42 · Manual review · auto accident · low confidence · R. Chen
19:42:18 · Booked · personal injury · M. Rivera · Atty. M. K.
08:00:00 · Morning brief delivered · 7 overnight · 2 flagged for review
Why I made these calls
Decisions I made & why · Product judgment
The threshold · Why auto-route at 85% confidence: Below 85%, false routings cost more attorney time than manual review saves. We tested 80, 85, and 90 against two weeks of historical inquiries before settling. ↳ tradeoff: precision over throughput
Where the agent stops · Why the agent never gives legal advice: Liability risk plus unclear value. The agent qualifies and routes; humans practice law. The line is bright on purpose, even when callers push. ↳ tradeoff: scope discipline
The notification model · Why a morning brief, not real-time alerts: After-hours pings trained attorneys to mute notifications. Batched delivery preserved attention and made the morning queue feel earned, not noisy. ↳ tradeoff: signal over speed
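The batched-delivery decision is simple to sketch: overnight events accumulate in a queue and collapse into one 08:00 message, never live pings. A toy illustration in Python — the event shape (`status`, `who`, `reason`) is invented for the example, not the system's actual schema:

```python
# Toy sketch of the batched-brief model: overnight events queue up and are
# delivered as a single morning message instead of real-time alerts.
def morning_brief(events: list[dict]) -> str:
    """Collapse the overnight queue into one attorney-ready summary."""
    flagged = [e for e in events if e["status"] == "flagged"]
    lines = [f"Morning brief · {len(events)} overnight · {len(flagged)} flagged"]
    for e in flagged:
        lines.append(f"Needs review: {e['who']} · {e['reason']}")
    return "\n".join(lines)

overnight = [
    {"status": "booked", "who": "M. Rivera", "reason": ""},
    {"status": "flagged", "who": "J. Patel", "reason": "possible conflict"},
]
print(morning_brief(overnight))
# → Morning brief · 2 overnight · 1 flagged
#   Needs review: J. Patel · possible conflict
```

Nothing here fires at 19:42; the queue simply waits for the scheduled delivery, which is the whole point of the design.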
What this meant
My role · Where I added value
What I owned
Scoped the agent's decision boundaries and escalation criteria
Defined the qualification rubric and confidence thresholds
Designed the human-in-the-loop handoff and morning brief spec
Set SLA targets and instrumentation requirements
What I delegated
Twilio voice and SMS integration
Claude API calls and prompt implementation
Calendar API and CRM webhook plumbing
What the firm couldn't do before · Outcome
Triage 7+ after-hours inquiries before 8 AM, every day
Catch potential conflicts before booking instead of after
Hand attorneys a prepped queue every morning, not a backlog
Stay reachable 24/7 without staffing 24/7
04 · The Tool Stack

Five tools.
Each one earned its place.

The build runs on five integrations. Two were already part of the firm's day-to-day, three were new. I matched the stack to how they already work, then filled the gaps. The job of the tool is to fit the operator, not the other way around.

Twilio VOICE + SMS GATEWAY

Evaluated against Bandwidth. Twilio's Studio editor and friendlier dashboard meant the receptionist could see what the agent was doing without me there.

Claude REASONING + INTAKE

I'd been working with Claude across other Techo Tuesday work and trust how it handles ambiguity. For an attorney's office where the agent has to know when not to answer, that mattered more than raw capability.

Calendar BOOKING SURFACE

They already used it. Don't introduce a new tool when the existing one works. The agent writes to their calendar. The staff sees what they always see. No training overhead.

Teams STAFF HANDOFF + BRIEF

They were already on Microsoft 365. Same logic as Calendar: stay where the team already lives. The agent posts the morning brief into a channel they check first thing.

Zapier ORCHESTRATION

Picked over n8n. n8n is more powerful for self-hosted work, but Zapier's hosted simplicity wins for a four-person firm. The receptionist can read a Zap and understand what's happening. n8n's editor would scare a non-technical operator.

05 · GTM in Six Months

Six months. Three moves.

Techo Tuesday is the lab where I stay sharp on AI-enabled product work. The Orland Park engagement was the first build. Here's what comes next, and what each move proves about how I think.

Months 1–2 · Deepen

Voice Agent for the same firm.

They asked. I'm learning the build now. The lesson is the one every senior PM eventually internalizes: a satisfied client is the cheapest distribution channel you have. Don't chase the next logo before you've fully served the one you're with.

Months 3–4 · Productize

Templated AI intake for SMBs.

What I built for the law firm wasn't bespoke. The receptionist's pain is the receptionist's pain whether it's a law office, a dental practice, or a small accounting firm. Take the playbook, identify what's reusable (discovery framework, prompt patterns, orchestration scaffolding), turn it into a four-week engagement at a fixed price. This is the move that separates consulting from product thinking.

Months 5–6 · Distribute

Three to five paying engagements.

The pipeline I'm working now. Adjacent professional-services firms across Chicagoland. Each one is a chance to refine the productized offering against real client variance. By month six the question isn't "do I have a service" but "is the service tight enough to run without me, and what's the next layer to add."

Six-month rhythm: build, productize, distribute.
The same loop I'd run at any stage company.

06 · What I Learned

Fifteen-plus years told me what to expect.
The build confirmed it.

The Orland Park engagement was my first paying AI build. I went in nervous. I came out understanding something I'd actually known for years but hadn't put into words.

AI didn't change the job. It just gave me a more capable tool to bring into the room.
01

The problems don't change. The tools do.

I've worked with Fortune 500 vendors, OEMs, federal partners, enterprise customers, and now a four-attorney law firm. Different industries, different scales, different stakes. The actual problem in the room was always the same: someone has work they can't keep up with, someone is trying to sell them a tool that doesn't quite fit, and someone in the middle has to figure out what's actually being asked. AI didn't make that work obsolete. It just gave me more leverage inside it.

02

The receptionist shaped every technical choice.

Twilio over Bandwidth, because the Studio editor was readable to a non-engineer. Zapier over n8n, for the same reason. Microsoft Teams instead of introducing Slack, because they were already on Microsoft 365. Google Calendar instead of a new booking system, because they already used it. Every "right" technical answer started from "what does the receptionist already do, and where does this fit." The system that ships is the system the operators understand.

03

The trust got built before the demo.

Three days in their office, asking questions, listening more than talking. By the time I showed them the Claude prototype, they already knew I understood the work. The demo confirmed it. It didn't sell it. If I'd led with the demo, I'd still be sending follow-up emails. The senior partner didn't hire AI. They hired the person who'd already shown they were paying attention.

07 · Why This Matters

It always comes back to the room.

You can read this case study and walk away thinking it's about AI. It isn't. It's about a Tuesday morning at a small law firm in Orland Park, and a partner who needed someone to listen before they needed someone to build.

I started in network engineering fifteen years ago. Cables, racks, packets. The rooms were always the same: a customer with a problem they couldn't fully describe, a vendor with a product that didn't quite fit, and a person in the middle who had to make it work. That's the job I've been doing ever since.

The technology changed. The job didn't.

AI doesn't change the job either. It's a more capable tool. The PM who can sit across the table, hear what's actually being asked, and shape the work around it is still the one who ships. That's how I built this system. That's how I'll build the next one.