Transforming Healthcare with AI-Driven Insights: A UX/UI Design Case Study for Intermountain Health
Background
Intermountain Health had developed an internal analytics platform used by clinical and executive teams to monitor patient outcomes, department performance, and operational trends. As part of an effort to make data more actionable, the analytics group integrated machine learning models that automatically generated insights based on recent patterns. These insights were displayed on dashboards and intended to support faster decision-making.
However, early versions of the insights were difficult to understand. They relied on technical language drawn directly from the models and often lacked clarity and context. Users reported skipping over these sections or misinterpreting the insights. Some insights were helpful, but others were too vague, too detailed, or misaligned with how clinicians and administrators processed information in high-pressure settings.
The analytics team recognized the need to enhance the presentation of AI-generated insights. They asked me to lead a redesign effort to make insights clearer, more human-readable, and more usable in daily workflows.
My Role
As the UX lead on this project, I was responsible for redesigning the presentation of AI-generated insights across the analytics platform. I partnered with data analysts, product managers, and clinicians to translate machine-generated outputs into clear, actionable summaries that supported real-time decision-making.
My role included reviewing existing insight structures, leading design strategy, writing and testing new insight patterns, and collaborating with developers to ensure the new model could be integrated with live data. I also worked closely with stakeholders to validate usability improvements and ensure the insights aligned with clinical and administrative workflows.
Research and Insights
To inform the redesign, I began by studying dozens of machine-generated summaries produced by internal data science models. These insights were based on trends in patient outcomes, staffing performance, and operational workflows. While technically sound, they were often dense, inconsistent, or disconnected from the dashboard visuals.
I conducted interviews and usability testing with clinical leaders, analysts, and hospital administrators to learn how they interpreted the insights and what they found frustrating. Many users ignored the insights entirely or misunderstood them. Some insights lacked context, others required manual verification, and a few introduced uncertainty rather than clarity.
A clear pattern emerged. Users wanted concise, trustworthy summaries that highlighted meaningful changes and explained their significance. They preferred a headline-and-detail structure that used plain language, an active tone, and a strong visual connection to the metric being referenced. Insights needed to read as a callout, not a wall of text.
This research provided a clear direction for rewriting the summaries, rethinking the layout, and designing an interaction model that made insights worthwhile without overwhelming users.
Design Process
The original approach surfaced AI-generated summaries in a drawer labeled “Alerts.” These alerts were triggered when the system detected significant changes in metrics related to equity, operations, or performance. The drawer could be opened from a button in the dashboard interface, and summaries were presented in a linear list. While the design was functional, early feedback showed that both the interaction model and the content needed refinement.
Some users found the term “Alerts” too strong for what were often routine or contextual updates. They associated the word with high-priority emergencies and felt it could lead to alarm fatigue. Based on this input, we tested alternative labels and settled on “Notes,” which aligned better with how users wanted to engage with the insights: informational, not urgent.

Early testing revealed that “Notes” was preferred over “Alerts” as a label.
In addition, most potential users expressed a strong preference for only seeing a note or alert when something required their attention. They did not want the interface cluttered with AI-generated summaries for every metric, especially if nothing meaningful had changed. This helped shape the logic for when notes would appear. The insight module would only activate when a change crossed a threshold for urgency or significance, reducing unnecessary interruptions and reinforcing trust in the system.
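To make the activation rule concrete, here is a minimal sketch of how such threshold logic could be expressed. This is an illustration only: the names (InsightCandidate, shouldSurfaceNote) and the threshold values are hypothetical and are not taken from Intermountain's production system.

```typescript
// Hypothetical sketch of the "only show a note when it matters" rule.
// All names and thresholds are illustrative, not production values.

interface InsightCandidate {
  metricId: string;          // dashboard metric the insight refers to
  percentChange: number;     // change detected by the model, e.g. 12 means +12%
  confidence: number;        // model confidence, 0 to 1
  urgency: "routine" | "notable" | "urgent";
}

const SIGNIFICANCE_THRESHOLD = 10; // minimum % change worth surfacing
const CONFIDENCE_THRESHOLD = 0.8;  // suppress low-confidence insights

function shouldSurfaceNote(candidate: InsightCandidate): boolean {
  // Urgent changes always surface; routine ones must clear both thresholds.
  if (candidate.urgency === "urgent") return true;
  return (
    Math.abs(candidate.percentChange) >= SIGNIFICANCE_THRESHOLD &&
    candidate.confidence >= CONFIDENCE_THRESHOLD
  );
}
```

The key design decision this encodes is that silence is the default: a note appears only when a change is both large enough and confident enough to be worth a clinician's attention.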
Another key design insight came from how users wanted to interact with these summaries. Most preferred a visible, concise note on the metric card itself—something that would quickly signal when a change had occurred. Rather than opening a drawer to browse multiple notes, users wanted a short AI-driven summary embedded directly on the card. The drawer then became a secondary layer, offering more context only when needed.

Visual hierarchy and tone were refined through multiple iterations and key stakeholder feedback. I replaced the original alert icon with a subtle visual marker that paired well with the new “Notes” label. Color and typography were adjusted to support scannability and minimize unnecessary urgency.

Separate feedback focused on the content of the summaries. Working with potential users during usability tests, we reviewed dozens of actual outputs from the machine learning models. The original insights were technically accurate but often written in long, dense paragraphs. They included technical jargon and lacked visual or structural ties to the metrics they described. Users reported that the insights were difficult to scan, inconsistent in tone, and occasionally unclear about their relevance.
To address the content issues, I rewrote dozens of real AI-generated summaries using a structure that was easier to understand and trust. Each summary started with a short, bolded headline calling out the most important change. A second sentence followed with a plain-language explanation that gave quick context and used familiar clinical or operational terms. This format helped users quickly understand what had changed and why it mattered—without needing to decode a paragraph of technical language.
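A minimal sketch of that two-part structure as data is shown below, assuming a simple headline-plus-detail shape tied back to a metric. The type, field names, and example content are invented for illustration and do not reflect the platform's actual schema or real hospital data.

```typescript
// Illustrative shape for a rewritten insight: a bold headline plus one
// plain-language sentence of context, linked to the metric it describes.
interface InsightSummary {
  metricId: string;   // ties the note to the card it appears on
  headline: string;   // short, bolded statement of the most important change
  detail: string;     // one-sentence, plain-language explanation of why it matters
}

// Representative example of the rewritten format (content is invented).
const example: InsightSummary = {
  metricId: "ed-wait-time",
  headline: "ED wait times rose 14% this week",
  detail: "The increase follows lower evening staffing and may affect discharge targets.",
};
```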
The final design included micro-interactions like hover states and expandable detail views, all of which could be generated dynamically from model output. This ensured that as the machine learning models improved, the system could adapt while maintaining usability and trust.

Outcome and Impact
The redesigned insight modules were implemented across the analytics platform and adopted by clinical and executive users. Following the launch, feedback indicated that the new summaries were significantly easier to understand and more trustworthy.

Users began referencing insights during meetings and using them to guide follow-up questions or deeper analysis. Platform analytics showed a 20 percent increase in insight interaction. Users spent more time hovering, expanding, and following up on summaries.
The clear tone and structured layout helped clinical teams adopt a shared understanding of trends and changes. This redesign became the foundation for future AI-driven summaries across the platform and demonstrated that even complex content can be made usable through thoughtful design.