Viva Goals - Copilot
When we introduced different types of goal setting into Viva Goals, we envisioned a future where goal tracking and alignment would feel seamless, empowering teams at every level to stay in sync. But for many enterprise users, the experience wasn't living up to that promise. Goals were set, then forgotten. Check-ins dropped off. And users, overwhelmed by complexity, began losing context and confidence in the product's value.
One question stood out: "What if understanding and acting on your goals was as easy as asking a colleague for an update?"
That question sparked the beginning of Copilot in Viva Goals.
❗ The Problem
Through usage analysis and feedback sessions, we uncovered three core issues:
Users struggled to understand the context and progress of their goals
Check-ins were often missed or done poorly due to lack of clarity
Our existing UX did little to guide users through the next steps
OKRs are powerful, but only when teams engage with them continuously. We had to remove the friction from consuming goals while helping users navigate them better.
Major Goals:
Increase adoption of Viva Goals by making OKRs easier to access, update, and act on.
Reduce friction in check-ins by helping users quickly summarize progress and identify next steps.
Build trust in AI assistance by ensuring outputs are reliable, editable, and contextually relevant.
Create consistency across workflows so users don’t have to leave their natural flow to get AI support.
Success Metrics:
Engagement, adoption, efficiency of Copilot in helping users form goals, and user satisfaction
🧭 My Role
As the designer responsible for Copilot, I led the effort to embed AI in a way that supported—not overwhelmed—users. This meant:
Defining Copilot’s interaction model and entry points
Partnering with PMs and content strategists to shape prompts that felt clear, trustworthy, and useful
Creating flexible UI patterns for summary, suggestion, and fallback flows
Aligning tightly with engineering on implementation boundaries, response reliability, and latency concerns
💡 Approach and Strategy
Analyzed current workflows to identify friction in OKR adoption, check-ins, and context gathering.
Positioned Copilot as an embedded assistant rather than a chatbot, surfacing contextual prompts within goal detail cards and check-in flows.
Designed lightweight AI interactions like “Summarize this goal” or “What’s my progress?” to reduce effort without disrupting workflows.
Maintained user trust by keeping outputs editable, writing transparent language, and designing fallback/error states for LLM unpredictability.
Partnered with engineering to address performance, latency, and reliability so AI assistance felt seamless and dependable.
Balanced guidance with control, ensuring AI reduced friction while users remained in charge of final actions.
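To illustrate what "lightweight AI interactions" like "Summarize this goal" involve behind the scenes, here is a minimal sketch of how a contextual prompt might be assembled from the goal data Copilot already has. The field names and prompt wording are my own illustrative assumptions, not the actual Viva Goals implementation:

```python
def build_summary_prompt(goal: dict, checkins: list[dict]) -> str:
    """Assemble an LLM prompt from goal context the user can already see.

    `goal` and `checkins` are hypothetical shapes assumed for this sketch.
    """
    lines = [
        f"Goal: {goal['title']} (owner: {goal['owner']}, progress: {goal['progress']:.0%})",
        "Recent check-ins:",
    ]
    # Keep only the most recent updates so the prompt stays small and focused.
    for c in checkins[-3:]:
        lines.append(f"- {c['date']}: {c['note']}")
    lines.append("Summarize current status and suggest one next step in two sentences.")
    return "\n".join(lines)
```

Grounding the prompt in visible goal data is one way to keep responses contextually relevant without asking the user to type anything.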
🎨 Design Process and Highlights
Building on the research done for Viva Goals overall, we ran an envisioning sprint that I drove alongside another senior designer. This helped us quickly articulate our goals for Copilot in Viva Goals and how we could bring it seamlessly into user workflows across various Microsoft products. Key patterns that emerged:
Light-touch prompts embedded into goal cards
Inline summaries with collapsible states
Suggested actions tied directly to check-in history
Simple fallback messages and graceful error states
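Designing for LLM unpredictability meant the UI could never assume a usable response. As a minimal sketch of the pattern (the names `CopilotResult`, `render_summary`, and the fallback copy are hypothetical, not the shipped implementation), the idea is to validate every response and degrade gracefully:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CopilotResult:
    text: str
    is_fallback: bool  # lets the UI render a graceful error state

FALLBACK_MESSAGE = (
    "I couldn't generate a summary for this goal right now. "
    "You can still review the check-in history below."
)

def render_summary(raw_response: Optional[str], max_len: int = 600) -> CopilotResult:
    """Validate an LLM response before showing it; fall back gracefully."""
    if raw_response is None or not raw_response.strip():
        return CopilotResult(FALLBACK_MESSAGE, is_fallback=True)
    text = raw_response.strip()
    if len(text) > max_len:
        # Truncate at a sentence boundary rather than failing outright.
        text = text[:max_len].rsplit(".", 1)[0] + "."
    return CopilotResult(text, is_fallback=False)
```

Treating the fallback as a first-class result, rather than an exception, is what lets the "unhappy path" get the same design attention as the happy one.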

Detailed designs can be shown if needed.
⚙️ Collaboration & Constraints
Data fragmentation: OKR progress data lived across multiple tools (e.g., Azure DevOps, Jira, Excel, CRM), making it difficult to fetch a complete picture for Copilot responses.
Integration limits: Some third-party systems had limited or inconsistent APIs, constraining how deeply we could integrate their data into Copilot.
Data freshness: Updates in external tools weren’t always real-time, meaning Copilot risked surfacing outdated or incomplete progress information.
Access & permissions: Not all users had the same access rights across connected systems, requiring Copilot to respect security and privacy boundaries when pulling context.
Consistency of data models: Each tool represented “goals,” “tasks,” or “metrics” differently, so we had to normalize data to provide a unified and comprehensible Copilot output.
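The normalization work in the last constraint can be sketched as a set of per-source adapters that map tool-specific records onto one unified shape. The field names below ("summary", "System.Title", "percent_done", etc.) are illustrative assumptions about third-party payloads, not real connector schemas:

```python
from typing import Callable, Dict

# Unified shape: {"title": str, "progress": float (0..1), "source": str}
UnifiedGoal = dict

# One adapter per connected tool; field names are assumptions for this sketch.
ADAPTERS: Dict[str, Callable[[dict], UnifiedGoal]] = {
    "jira": lambda r: {
        "title": r["summary"],
        "progress": r.get("percent_done", 0) / 100,
        "source": "jira",
    },
    "azure_devops": lambda r: {
        "title": r["System.Title"],
        "progress": r.get("Completed", 0) / max(r.get("Total", 1), 1),
        "source": "azure_devops",
    },
}

def normalize(source: str, record: dict) -> UnifiedGoal:
    """Map a tool-specific record onto the unified goal shape."""
    adapter = ADAPTERS.get(source)
    if adapter is None:
        # Unknown sources surface an error instead of silently bad data.
        raise ValueError(f"No adapter for source: {source}")
    return adapter(record)
```

Normalizing at the boundary meant Copilot's summaries could speak one consistent vocabulary of goals and progress, regardless of where the data originated.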
🧪 Testing & Refinement
We ran a series of moderated tests with enterprise users to gauge:
Comprehension of summaries
Trust in AI-suggested actions
Willingness to engage with nudges
We learned that clarity beats cleverness: the more actionable and specific Copilot's suggestions were, the more likely users were to engage.
📈 Results
In our private preview, we observed:
30% increase in weekly goal check-ins during pilot rollout
Significant uplift in adoption: more than half of test users engaged with Copilot features within the first two weeks
Users reported higher clarity when summarizing OKRs and reduced time spent preparing check-ins
Product leadership highlighted Copilot as a key differentiator in Microsoft Viva’s AI narrative
🎓 What I Learned
My biggest learnings from this project were:
Low-friction AI works best: embedding prompts into existing workflows drove much higher adoption than standalone chatbots
Trust is crucial: giving users the ability to edit or discard AI outputs reduced skepticism and increased usage
LLM unpredictability needs design: fallback states and graceful errors were as important as the “happy path”
Cross-team alignment pays off: early involvement of PMs, engineers, and content strategists avoided costly rework and kept the design realistic