AI Concierge
Core Problem
Interakt’s users juggle various business responsibilities, yet had to manually re-create well-performing campaigns whenever they wanted to extend, re-target, or re-launch them, leading to user frustration.
Target Audience
SMB Users - Owners and CEOs of SMBs, Marketing & Sales Managers, Digital Marketing Managers
Enterprise Users - Marketing Heads/Managers, Business Development Heads/Managers
My Role
In this project, I was involved in key stakeholder discussions to define business goals and led a design thinking workshop with senior members of the product and engineering teams to define problems, ideate solutions, and validate concepts. I also worked with the PM, marketing, and sales teams on GTM efforts.
Key Collaborators
1 Lead Designer, 1 UX Researcher, 1 Product Director, 1 Engineering Manager, 2 ML Engineers, 1 Front-end Engineer, 1 Senior Marketing Manager, and the Business Head of Interakt
Major Goals/ Success Metrics
Make the notification centre more interactive to increase user engagement
Reduce the time needed for repeat campaign creation and launch, allowing users to allocate time to other essential duties.
Track the adoption rate of AI Concierge among our existing customers
Assess the number of new customers who utilized AI Concierge within the initial month.
Monitor the number of repeat campaigns generated via notification nudges
Problem Discovery
During a key leadership meeting, an analysis of our platform's performance metrics revealed some concerning trends:
Low User Engagement on Repeat Campaigns - Repeat campaigns allowed users to easily re-deploy successful campaigns, but the low engagement numbers indicated this feature was being underutilized.
Poor Notification Adherence - Users were frequently missing or ignoring the notifications we sent about the performance of their active campaigns.
My Approach
In collaboration with the UX researcher, I compiled a list of clients who had initiated repeat campaigns and then conducted user interviews to identify the primary pain points in campaign creation within Interakt. I also worked with the PM to explore the feasibility of extracting revenue data from our analytics. We found that:
Campaign deployment was treated as a set-it-and-forget-it process rather than an active cycle of monitoring and organization
The platform lacked functionality allowing users to extend, re-target, or re-launch an existing campaign, necessitating manual creation and input for each new campaign.
Additionally, users often disregarded notifications, relying solely on metrics to understand campaign performance
Repeat campaigns had once constituted 18% of our revenue but had since dropped to 6%
The key question was: How might we simplify the process of repeating and monitoring campaigns so that users remain actively engaged throughout the entire lifecycle?
Toward this, I led a design thinking workshop with senior product and engineering folks to define challenges, brainstorm solutions, and validate concepts. Leveraging our recent collaboration with OpenAI to explore innovative approaches, we considered:
An AI-powered feature capable of leveraging data from past campaigns to proactively encourage users to extend, relaunch, or retarget successful campaigns.
An interactive notification hub enabling users to efficiently repeat campaigns based on performance.
An automated system to identify the target audience, configure responses, schedule the campaign, and provide a preview, reducing user effort by 90% and leaving them only to review and approve.
A gamification element featuring badges, where users earn higher-level badges by taking affirmative actions on notifications.
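To make the automated-system idea concrete, here is a minimal sketch of the flow it describes: check whether a past campaign performed well enough to warrant a nudge, then pre-fill the audience, schedule, and name of a repeat campaign so the user only reviews and approves. All names, fields, and thresholds are illustrative assumptions, not Interakt's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Campaign:
    """Hypothetical summary of a past campaign's performance."""
    name: str
    audience_segment: str
    open_rate: float        # fraction of recipients who opened the message
    conversion_rate: float  # fraction who took the desired action

def should_nudge(campaign: Campaign, open_threshold: float = 0.4,
                 conversion_threshold: float = 0.05) -> bool:
    """Only well-performing campaigns qualify for a repeat nudge."""
    return (campaign.open_rate >= open_threshold
            and campaign.conversion_rate >= conversion_threshold)

def draft_repeat_campaign(campaign: Campaign, launch_in_days: int = 2) -> dict:
    """Pre-fill everything so the user's only remaining effort is review/approve."""
    launch_date = (datetime.now() + timedelta(days=launch_in_days)).date()
    return {
        "name": f"{campaign.name} (repeat)",
        "audience_segment": campaign.audience_segment,
        "scheduled_for": launch_date.isoformat(),
        "status": "awaiting_approval",
    }

past = Campaign("Diwali Sale", "repeat-buyers", open_rate=0.62, conversion_rate=0.11)
if should_nudge(past):
    draft = draft_repeat_campaign(past)
    print(draft["name"], draft["status"])
```

The design choice worth noting is that the system never auto-launches: every pre-filled draft lands in an "awaiting_approval" state, keeping the user in control of the final send.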
Constraints and Challenges
Ensuring consistency and appropriateness of the training data
Balancing between AI-generated campaign content and the necessity for personalized campaigns tailored to individual user data
Integrating AI-powered features seamlessly into users' existing workflow and campaign management processes
Designs




Implementation
Since this was our first foray into an AI feature, we collaborated closely with the ML team by:
Involving them early on to assess implementation feasibility
Ensuring design decisions were communicated clearly
Actively seeking feedback to surface and address any technical constraints
Incorporating the ML model into our A/B testing to evaluate whether any hallucinations or biases needed to be accounted for
We were able to implement the feature per the designs within the given timeline.
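One way a hallucination check like the one above might look in practice is a simple guardrail that flags claims in AI-generated campaign copy that are not supported by the source campaign data, such as an invented discount percentage. This is a hypothetical sketch; the function names, regex, and fields are assumptions for illustration, not the model-evaluation pipeline we actually shipped.

```python
import re

def find_unsupported_claims(generated_copy: str, source_facts: set[str]) -> list[str]:
    """Return numeric claims (e.g. discount percentages) in the generated
    copy that do not appear in the known facts for this campaign."""
    claims = re.findall(r"\d+%", generated_copy)
    return [claim for claim in claims if claim not in source_facts]

# Known facts pulled from the original campaign's data (illustrative).
facts = {"20%"}

copy_ok = "Relaunch your sale: 20% off for repeat buyers!"
copy_bad = "Relaunch your sale: 50% off for repeat buyers!"

print(find_unsupported_claims(copy_ok, facts))   # supported claim, nothing flagged
print(find_unsupported_claims(copy_bad, facts))  # flags the invented "50%"
```

Checks like this can run on every A/B-test variant before it reaches a client, so hallucinated offers are caught programmatically rather than by manual review alone.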
User Testing, GTM activities
Prior to the full-scale launch, we conducted:
An internal evaluation using sample data to assess the performance and identify potential issues with the AI model.
A preliminary A/B test with a sample of 5% of our client base (the feature was released to 400 clients)
Encouraging results were observed, including a notable 68% notification open rate, with 75% of users who engaged with the announcement hub subsequently taking an affirmative action.
Following this phase, I collaborated with the marketing manager to refine the value proposition, ensuring that our platform's seamless user experience was effectively communicated in marketing materials.
Results and Learnings
Just before release, further integration testing was undertaken to address some issues with the AI model, which delayed the feature's official release to earlier this year. The results after 3 months:
1,813 merchants have engaged with the announcement hub
3,000 nudges have had affirmative actions taken on them
126 repeat campaigns have been created through the notification centre
Key Learnings
Gained better insight into how our ML team works and tests
Established key feedback loops, with the ML team in particular, to ensure we stayed aligned on design decisions