Argus – AI‑powered B2B SaaS for Medicare Compliance and Clawback Protection

Shoreline Medical Administration is a startup that protects wound care providers from financial loss by reviewing claim‑related clinical records and offering a guarantee against insurance clawbacks resulting from Medicare non‑compliance.

In late 2025, the company was looking to unlock new levels of growth in part by launching a proprietary SaaS platform, Argus, in a move that would help position them to scale sustainably and operate more efficiently. I had the opportunity to help them bring their vision for Argus to life.

Role: Lead Designer
Timeline: Sep–Nov 2025 (6 weeks)
Collaborators: 1 Engineer, CTO, Dir. of Product, PM
Tools: Figma, Miro, Loom, Typeform, LLMs
Industry: Healthtech

Ownership & Impact

I led the end-to-end design of Argus, partnering with Shoreline Medical's engineering, product, and operations teams to translate business constraints and user insight into scalable product architecture. Over six weeks, I ran rapid research, defined the core product architecture and interaction systems, negotiated design priorities within tight constraints, and owned the UI design. My contributions helped position Shoreline for immediate efficiency gains and long-term growth.

A winning business model limited by its tooling

Shoreline was already a profitable Medicare compliance business, earning a percentage on successfully defended reimbursements and handling tens of millions in claims each month. When I joined, their system consisted of two apps built in Retool.

Client-facing app
where clinics could upload and manage claim-related documents

Internal Ops app
where specialists received documents from clients and evaluated them using an internal AI review service

By leveraging a fledgling internal AI review service used by the Ops team, Shoreline planned to transform the client-facing component of their system into a SaaS platform where clinicians could run compliance reviews on their own.

From concept to beta in six weeks

The Stakes

WISeR—a new CMS regulatory model set to roll out in January 2026—would subject Medicare claims in some states to automated AI scrutiny before approval, likely resulting in a significant rise in denials. Being first-to-market with a tool that mitigated WISeR's strict adjudication would give Shoreline a significant edge.

Design Mandate

Transform Shoreline's existing Retool portal into a multi-tenant, HIPAA-compliant platform that could:

  • Enable self-service review without ops team intervention
  • Surface AI-flagged issues in a way users could actually understand and act on
  • Support rapid iteration—users upload, get feedback, fix locally, resubmit
  • Surface a clear upgrade path to Shoreline's Guarantee service
  • Scale to 20+ organizations with 80%+ MAU, without linear headcount growth

Domain immersion & design priorities

I wasted no time familiarizing myself with Shoreline's operations, Medicare compliance for graft procedures, and client workflows. I studied the current Retool implementation, dove into clinical documentation, and hosted structured sessions with Shoreline's Reviewers, its Compliance & Auditing Specialist, and the CTO.

Key findings

Human reviewers on the Shoreline Operations team do not use the AI feedback
I was surprised to learn that the sole purpose of the AI review service was to assist the operations team, yet no one actually used it... except for one Reviewer, Benjamin, who typically just scanned the feedback for critical fails as a starting point. The team was bought into the vision of automated review, but no one trusted the AI yet.

Both Shoreline Reviewers and Clients regularly bypass the Retool ecosystem altogether, running review cycles in their inbox instead
The existing Retool apps created friction in the review workflow. Users found it easier to communicate via email, downloading and uploading documents manually rather than using the purpose-built tools.

Non-clinical office administrators are primary users
Contrary to initial assumptions, the people most frequently interacting with the system were office administrators, not clinicians. This discovery significantly influenced the interface design priorities.

Clients spend a nontrivial amount of time trying to decipher where they are in the process
Status visibility was a major pain point. Clients frequently reached out to Shoreline staff just to understand the current state of their submissions, creating unnecessary overhead for both parties.

The focus needed to be on improving usability, maximizing trust, and helping clients confidently navigate a high-stakes, multi-part, multi-day workflow. I documented design priorities based on my findings to supplement Shoreline's feature wishlist and assist with ideation.

Benchmarking + Goal setting

Little to no analytics were available. We wanted a snapshot of sentiment and quantifiable metrics to demonstrate the project's ROI, so the interview script and a survey were designed to capture that information.

  • ≤20% of pre-guarantee submissions requiring human reviewer intervention
  • ≥70% of first-round errors flagged by AI addressed and resolved during first resubmission
  • ≤5% abandoned submissions
  • 0 direct email requests from Argus users to Guarantee a submission
  • >75 SUS score
  • >40 NPS score

Early exploration & scope refinement

Together with the product team, I mapped out the MVP product requirements and flow, using insights from my conversations with Operations to highlight friction points and guide feature discussions and prioritization. Distinct goal-based workflows began to emerge and the exercise led to collaborative, exploratory end-to-end sketching, where early screen concepts began to take shape.

Key early design concepts locked in

  • IA with "Encounter" as the core object, defined as one patient plus one date of service, supporting multiple claims
  • AI feedback visible in same view as Encounter content
  • A file detail screen that displays scoped feedback next to a preview of the source
  • Multi‑step Encounter creation flow that begins with uploading Clinical Notes and uses extracted data to validate and guide
  • Clinic‑based multi‑tenant architecture organized around user global ID to support outsourced administration and encourage expansion
  • HITL mechanism for flagging AI feedback perceived to be inaccurate
  • Automatic document versioning upon resubmission
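The concepts above imply a small core object model. A minimal sketch follows; all names and fields are illustrative assumptions, not Shoreline's actual schema:

```typescript
// Illustrative sketch of the core object model (names are hypothetical).
// An Encounter is one patient + one date of service, owned by a clinic
// (the multi-tenant boundary) and able to support multiple claims.

interface Clinic {
  id: string;               // tenant boundary
  name: string;
}

interface Encounter {
  id: string;
  clinicId: string;         // every object is scoped to a tenant
  patientId: string;
  dateOfService: string;    // ISO date
  claimIds: string[];       // one Encounter can back multiple claims
  documentIds: string[];
}

interface DocumentVersion {
  version: number;          // bumped automatically on resubmission
  uploadedAt: string;
  suggestionIds: string[];  // feedback stays scoped to the version it reviewed
}

interface ClinicalDocument {
  id: string;
  encounterId: string;
  versions: DocumentVersion[];
}

// Automatic versioning on resubmission: append a new version rather than
// overwrite, so earlier feedback remains tied to the version it reviewed.
function resubmit(doc: ClinicalDocument, uploadedAt: string): ClinicalDocument {
  const next: DocumentVersion = {
    version: doc.versions.length + 1,
    uploadedAt,
    suggestionIds: [],
  };
  return { ...doc, versions: [...doc.versions, next] };
}
```

Keeping suggestions on the version rather than the document is what later makes a clean, auditable feedback history possible.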

Critical to be explored further

During exploration we identified statuses and feedback as two underlying systems that were driving the core user experience. Optimizing these would be fundamental to creating an MVP that "just worked".

UI foundations

The existing Shoreline brand came across as warm but too dated and underpowered for a high‑trust, AI-powered SaaS UI in 2025. I partnered with the Head of Marketing to create a lightweight extension of the existing guidelines sufficient to support the project while staying visually cohesive with Shoreline.

Color

The primary color was refined to a deeper, more dynamic shade of Shoreline teal that comfortably met WCAG 2.2 AA contrast requirements. System colors were established, including a grayscale and the RAG values we would need for validation and feedback.

We agreed that "Guaranteeing" an Encounter should feel elevated and visually distinct without competing with system CTAs. I explored using Shoreline 'sunshine' yellow to create a gradient which would be reserved for Guarantee-related elements.

Font

Work Sans was friendly but didn't signal expertise, and felt especially flimsy at smaller font sizes.

Funnel Sans was chosen instead for its balance of approachability and competence, as well as strong legibility at any scale.

Logo

An idea for a logomark came to me during a run one evening. Branding wasn't in scope for the project but I couldn't help but draft a concept! I pitched it to the team the next day and was met with positive feedback.

Developing core workflows

In the early stages of the project we uncovered two underlying paradigms that were driving the core user workflows. Optimizing these would be fundamental to creating an MVP that "just worked".

Feedback Interaction Model (Jump to solution)

I learned in Discovery that feedback from the AI review service wasn't being used because it lacked credibility and was hard to digest. If we expected non‑experts to embrace the AI recommendations, I needed to design a feedback system that was organized, scored, written, and interactive in a way that would earn their trust.

Object State Model & Statuses (Jump to solution)

In the current implementation, Encounter and document statuses were manually managed by Shoreline reviewers. Argus would have no human intermediary, so transitions needed to be automated based on object states. Additionally, user research revealed that Encounter status labels were confusing, leaving users unclear about where things stood or what to do next. I needed to redefine object states to be congruent with the new self‑serve model and map them to clearer, more helpful statuses.

Because these systems were tightly coupled, I decided to work them out pragmatically through rapid, iterative cycles of lo-fi prototyping and review, rather than in isolation. This also gave me the chance to test out some high-level UI decisions on the encounter and file detail pages.

My first attempt at tying everything together missed the mark, but it did succeed in fleshing out finer details and edge cases and, most importantly, exposing some differing interpretations of operational flow among the cross-functional team.

To align everyone and get the answers I needed to continue, I created a service blueprint as a shared source of truth.

Iterating on the prototype and blueprint in parallel helped close gaps in our understanding of object state architecture and even led operations to adopt a change to their process. Each successive cycle informed documentation and shaped the UI, bringing us closer to an MVP-ready solution.

Object State Model & Statuses

Objects had different lifecycles, states, and relationships to other objects. For this reason, they were allowed to have different statuses. Variances were documented and used to keep complex workflows predictable and extendable.
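With no human intermediary to set statuses by hand, transitions have to be derived from object state and events. A hypothetical sketch of that idea, using invented status and event names rather than Argus's actual ones:

```typescript
// Hypothetical sketch of automated status transitions: the Encounter's
// status advances in response to events, never by manual assignment.

type EncounterStatus =
  | "Draft"
  | "In Review"
  | "Changes Requested"
  | "Ready to Guarantee"
  | "Guaranteed";

type EncounterEvent =
  | "submit"
  | "issues_found"
  | "resubmit"
  | "review_passed"
  | "guarantee";

// Each status permits only specific transitions, which keeps a complex,
// multi-day workflow predictable and easy to extend with new states.
const transitions: Record<EncounterStatus, Partial<Record<EncounterEvent, EncounterStatus>>> = {
  "Draft":              { submit: "In Review" },
  "In Review":          { issues_found: "Changes Requested", review_passed: "Ready to Guarantee" },
  "Changes Requested":  { resubmit: "In Review" },
  "Ready to Guarantee": { guarantee: "Guaranteed" },
  "Guaranteed":         {},                 // terminal state
};

function nextStatus(current: EncounterStatus, event: EncounterEvent): EncounterStatus {
  const next = transitions[current][event];
  if (!next) throw new Error(`"${event}" is not a valid event from "${current}"`);
  return next;
}
```

Because other objects (documents, suggestions) have their own lifecycles, each would get its own transition table rather than sharing one global status list.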

Design infrastructure handoff

With the architecture set and the brand approved, I handed off the layout, theme (type and color), and styled but non-custom core components (buttons, menus, etc.). This also functioned as a trial run for the eventual full handoff.

Construction of key components

Confident in the layouts of the Encounter and file detail pages, parts of which had survived multiple iterations, I began to formalize global components for each.

  • Encounter sidebar
  • Suggestion
  • File sidebar

The pivot

Midway through Week 3, Shoreline's leadership made a strategic decision: the AI model wasn't ready for public-facing self-service (as evidenced by reviewers ignoring it), and the sense of urgency had also somewhat relaxed, as WISeR faced significant opposition and legislative challenges that cast doubt on its scheduled January 1, 2026, rollout. Rushing an immature AI experience for no reason could damage the brand in an arena where trust is paramount. Instead, the MVP would be an improved client portal that would replace the Retool implementation, poised to evolve into the envisioned form when the model reached an acceptable level of maturity.

Shoreline wanted to keep the blueprint for the AI self-service future vision we had created, but now also wanted to take a phased approach and begin with an MVP that got them halfway there. Fortunately, abstracting the user goals from features early on paid dividends: the foundation was strong. After a close examination of the new PRD, it was apparent the core workflow was the same, the statuses held true, and many of the tasks remained intact. Only the paths and features supporting those goals would differ, at first. This spared us a full-scale overhaul, but some of my screens and components would need to work overtime and be carefully crafted to embrace the change in phase 2 and beyond.

What would need to change:

  • Encounter submission — creating a new encounter would remain the same, since OCR was reliable enough to implement in phase 1. But instead of triggering the AI reviewer service on creation, the encounter would be submitted to Retool, where the ops team would receive it exactly as they always had. Status-based communication was important here.
  • Feedback loop — instead of improving their submission by iterating with the AI review service, clients would be iterating with a human reviewer. To stay in line with available functionality and avoid additional work on the backend Retool ops application, individual "suggestions" that were linked to one or more elements of an encounter were out for phase 1.
  • Encounter & document details — a downstream impact of not having individual suggestions meant doc completion and encounter completion rate rollup wouldn't be possible. This would affect what we were able to expose in tables.
  • Notifications & Tasks — the infrastructure would be the same, the content would change.

Meaningful features remaining in for MVP: tasks, notifications, versioning, and patient match during creation (the AI model wasn't ready, but OCR was reliable).

Features now out (deferred) for MVP: self-onboarding, asynchronous processing/non-blocking UI.

The new challenge

How could I create this proposed "phase 1" of Argus without necessitating large updates to the existing Ops Retool app, while improving the current client portal experience, and delivering it all in a set of screens and components that could painlessly evolve to handle the "phase 2" vision of AI self-service with minimal effort from design & engineering?

Well, first... I would need more time. Shoreline extended my contract.

Then, I got to work. Below are the results.

Final designs

Workflow: Create Encounter

  • Multi-step flow
  • Bulk/batch upload of supporting files permitted
  • Fully manual → assume & confirm model
  • Auto patient match

Workflow: Using Feedback to make adjustments

  • View Feedback
    • Structured feedback
      • Phase 2 (AI): suggestion issue type (critical/recommended), file vs encounter scoped suggestion, single file vs multi-file suggestion, suggestion status, and suggestion metadata
    • Feedback + actions in context
      • Retool: disjointed
      • Phase 1 (human): relevant notes appear alongside file in file view
      • Phase 2 (AI): highlight with yellow 1/many on hover over suggestion; highlight actual extracted data in file view when possible, when suggestion selected; actions available in this view
    • System status visibility
      • Retool: none
      • Phase 1 (human): milestones help formulate mental model of process; messaging with spot illustrations during periods of waiting
      • Phase 2 (AI): readiness % rollup with sub scores; loading design for AI latency
    • Tasks & Notifications
      • Retool: n/a
      • Phase 1 (human): enables client to easily pick up where they left off; shortlist to tackle
      • Phase 2 (AI): a late addition, not deeply explored, but it would closely mirror phase 1; long-running review service processing shown as radial progress paired with a notification
  • Flag Feedback
    • HITL
      • Retool: n/a
      • Phase 1 (human): n/a
      • Phase 2 (AI): mechanism for flagging AI suggestions that are inaccurate

Workflow: Manage Uploaded Docs

  • Organized history
    • Retool: list dump with dates, no visibility into feedback, just able to open resubmits
    • Phase 1 (human): faux chat provided running feedback and noted upload/resubmit events; versions available, file view would only show notes relevant to specific version by looking at date metadata
    • Phase 2 (AI): suggestion statuses (to do, done, ignored) + keeping suggestions scoped to specific version gave full history in file view, neatly organized; helpful down the road for auditing
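The phase 1 trick of scoping notes to a version via date metadata can be sketched as follows; the interfaces and function are hypothetical illustrations, not the shipped implementation:

```typescript
// Hypothetical sketch of phase 1 note scoping: with no per-suggestion
// linkage in Retool, the file view attributes each reviewer note to the
// document version that was current when the note was written.

interface ReviewNote { text: string; createdAt: string }  // ISO timestamps
interface Version { version: number; uploadedAt: string }

function notesForVersion(notes: ReviewNote[], versions: Version[], target: number): ReviewNote[] {
  // Sort versions chronologically (ISO strings compare lexicographically).
  const sorted = [...versions].sort((a, b) => a.uploadedAt.localeCompare(b.uploadedAt));
  return notes.filter((note) => {
    // The version current at the note's timestamp is the latest one
    // uploaded at or before it.
    const current = sorted.filter((v) => v.uploadedAt <= note.createdAt).pop();
    return current?.version === target;
  });
}
```

Phase 2 replaces this date-based inference with suggestions explicitly scoped to a version, which is what makes the neatly organized, audit-friendly history possible.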

Workflow: Manage Encounter

  • Status-driven progressive action
    • Retool: all actions possible at all stages
    • Phase 1 (human): status system helped guide user to next steps
    • Phase 2 (AI): status system and feedback system worked together to guide user intuitively through the steps
  • Upload additional files
  • Room to grow — Encounter details screen designed for unknowns and future features such as tracking multiple wounds in a single encounter
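The contrast with Retool's "all actions at all stages" can be illustrated with a small action-gating sketch; the statuses and actions here are assumptions for illustration:

```typescript
// Illustrative sketch of status-driven progressive action: the UI derives
// which actions to surface from the Encounter's current status instead of
// exposing every action at every stage (statuses/actions are hypothetical).

type Status = "Draft" | "In Review" | "Changes Requested" | "Ready to Guarantee";
type Action = "edit" | "upload_files" | "resubmit" | "guarantee";

const allowedActions: Record<Status, Action[]> = {
  "Draft":              ["edit", "upload_files"],
  "In Review":          [],                        // waiting state: show milestones, not actions
  "Changes Requested":  ["upload_files", "resubmit"],
  "Ready to Guarantee": ["guarantee"],
};

function canPerform(status: Status, action: Action): boolean {
  return allowedActions[status].includes(action);
}
```

In phase 1 the status alone guides the user; in phase 2 the same gating composes with the feedback system so suggestions and available actions reinforce each other.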

Workflow: Guarantee Encounter

A human vibe, a difference maker among mostly sterile clinical apps: spot illustrations, weighty thumbnails (encounter + file), a faux chat that gave a personalized touch, a conversational rather than commanding voice and tone, and dark mode enabled.

Reflection

This project was less about getting everything right in the MVP and more about leaning into assumptions with a high ROI if we were right and easy recovery if we were wrong.
