Viva Goals - Copilot
When we introduced OKRs into Viva Goals, we envisioned a future where goal tracking and alignment would feel seamless, empowering teams at every level to stay in sync. But for many enterprise users, the experience wasn’t living up to that promise. Goals were set—and then forgotten. Check-ins dropped off. And users, overwhelmed by complexity, began losing context and confidence in the product’s value.
One question stood out: "What if understanding and acting on your goals was as easy as asking a colleague for an update?"
That question sparked the beginning of Copilot in Viva Goals.
❗ The Problem
Through usage analysis and feedback sessions, we uncovered three core issues:
Users struggled to understand the context and progress of their goals
Check-ins were often missed or done poorly due to lack of clarity
Our existing UX did little to guide users through the next steps
OKRs are powerful—but only when engaged with continuously. We had to remove the friction of consuming goal information while helping users navigate what to do next.
🧭 My Role
As the designer responsible for Copilot, I led the effort to embed AI in a way that supported—not overwhelmed—users. This meant:
Defining Copilot’s interaction model and entry points
Partnering with PMs and content strategists to shape prompts that felt clear, trustworthy, and useful
Creating flexible UI patterns for summary, suggestion, and fallback flows
Aligning tightly with engineering on implementation boundaries, response reliability, and latency concerns
💡 The Approach
Rather than building a chatbot, we focused on small, contextual moments where Copilot could step in:
"Summarize this goal"
"What’s my progress so far?"
"What should I do next?"
We embedded these prompts into existing UI, like goal detail cards and check-in pages. The key was to surface value where users already were, without changing their habits or flow.
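One way to picture this entry-point model is as a small piece of configuration that maps each prompt to the surfaces where it appears. This is an illustrative sketch only—the type and function names are assumptions, not the actual Viva Goals code.

```typescript
// Each Copilot prompt declares which UI surfaces it belongs on, so the
// same prompt catalog can drive goal cards and check-in pages alike.
// (Hypothetical model; not the product implementation.)
type Surface = "goalCard" | "checkInPage";

interface CopilotPrompt {
  id: string;
  label: string; // text shown to the user
  surfaces: Surface[]; // where this prompt is rendered
}

const PROMPTS: CopilotPrompt[] = [
  { id: "summarize", label: "Summarize this goal", surfaces: ["goalCard"] },
  { id: "progress", label: "What's my progress so far?", surfaces: ["goalCard", "checkInPage"] },
  { id: "next", label: "What should I do next?", surfaces: ["checkInPage"] },
];

// Returns the prompts a given surface should render.
function promptsFor(surface: Surface): CopilotPrompt[] {
  return PROMPTS.filter((p) => p.surfaces.includes(surface));
}
```

Modeling the prompts as data rather than hard-coding them per page is what lets the same catalog surface value wherever users already are.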
🎨 Design Highlights
Light-touch prompts embedded into goal cards
Inline summaries with collapsible states
Suggested actions tied directly to check-in history
Simple fallback messages and graceful error states

⚙️ Collaboration & Constraints
Working with our engineers, we had to:
Manage the unpredictable nature of LLM responses
Build graceful fallback states and loading behaviors
Balance automation with trust and user control
This meant building in guardrails—like clear prompt affordances and editable suggestions—so users felt in control.
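A minimal sketch of that kind of guardrail, assuming a generic asynchronous model call: wrap the call with a timeout, treat empty responses as failures, and fall back to a clear, non-alarming message. The names (`withFallback`, `FALLBACK_TEXT`) are hypothetical, not the actual implementation.

```typescript
// Graceful-degradation wrapper for an unpredictable model call
// (illustrative sketch; names are assumptions).
const FALLBACK_TEXT =
  "Copilot couldn't generate a response right now. Try again, or review the goal details directly.";

async function withFallback(
  call: () => Promise<string>,
  timeoutMs = 5000,
): Promise<{ text: string; fromFallback: boolean }> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error("timeout")), timeoutMs);
  });
  try {
    const text = await Promise.race([call(), timeout]);
    // An empty or whitespace-only response is treated as a failure too.
    if (!text.trim()) throw new Error("empty response");
    return { text, fromFallback: false };
  } catch {
    // Errors, timeouts, and empty responses all degrade to the same
    // predictable fallback state instead of surfacing raw failures.
    return { text: FALLBACK_TEXT, fromFallback: true };
  } finally {
    clearTimeout(timer);
  }
}
```

The design point is that every failure mode converges on one calm, recoverable state—which is what lets users keep trusting the feature when the model misbehaves.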
🧪 Testing & Refinement
We ran a series of moderated tests with enterprise users to gauge:
Comprehension of summaries
Trust in AI-suggested actions
Willingness to engage with nudges
We learned that clarity beats cleverness. The more actionable and specific Copilot's input was, the more likely users were to engage.
📈 Results
While exact figures remain confidential, the impact was clear:
Faster comprehension of goal context
Notable lift in check-in frequency for users exposed to Copilot prompts
Positive feedback on the unobtrusive and helpful tone of suggestions
🎓 What I Learned
Designing with AI means designing for uncertainty—and for trust. It’s not about flashy features, but meaningful, invisible support. Small, well-placed nudges did more for engagement than any fully automated system could have.
This project reminded me that the best design doesn’t shout. It listens, waits, and speaks only when it has something truly useful to say.