AI Shopping Assistant

My Role

UX Designer & UI Designer

Platform

Mobile & Desktop

Date

January 2025-Present

Tools

Figma, FigJam, Lottie, Tableau & Countly

Project Overview

In a fast-growing fintech and commerce ecosystem, users increasingly face decision fatigue while shopping online. Our goal was to build an AI-powered shopping assistant that could support users in making faster, more confident, and personalized shopping decisions—while also supporting business metrics such as conversion and engagement.

Goals

Business Goals

  • Increase user engagement and retention

  • Boost conversion and average basket size

  • Provide a single, cohesive shopping experience combining speed, personalization, and trust

User Goals

  • Discover relevant products without spending too much time

  • Compare alternatives more easily

  • Feel guided and confident when making a purchase decision

Research & Discovery

User Interviews

  • 2 Interview Rounds

  • 40 Participants in Total

Competitor Analysis

  • 4 Shopping-Related Competitors

  • 7 AI Tools (Web and App)

Visual Test for Navigation

Affinity Map: Visual Test

We conducted two separate user interview rounds with 20 participants each:

  • Round 1: Understanding expectations from AI assistants and where to place the AI feature in the app. We tested three placement alternatives in this round.

  • Round 2: Usability testing with a high-level prototype to gather feedback on the assistant’s tone, recommendation relevance, and interaction experience.


From the interviews, we synthesized insights using affinity diagrams and then iterated on them.

I’d also like to see related campaigns or offers tied to the products.

The interface gives the impression that I’ve entered an AI-powered experience.

Auto-swiping prompts feel clickable and make me want to interact!

The dynamic structure grabs attention more effectively. I’d only engage if something catches my interest.

User Feedback from the Usability Test of the First Prototype

Affinity Map: Usability Test of the First Prototype

Constraints

This project had some notable constraints that influenced our design:

• Unestablished UX Patterns Among Users: One of the biggest challenges was the lack of well-established UX expectations among the target user base. Since users had limited experience with AI-powered assistants in shopping contexts, their expectations, needs, and pain points were often vague or inconsistent. As a result, we acknowledged that the true value of the feature might only become clear after release, through real-world usage and feedback loops.

• Technical Limitations Due to LLM Integration: This was a highly technical project involving collaboration with data scientists and the integration of large language models (LLMs). The development teams themselves were exploring new territories with the LLM infrastructure, which introduced many constraints.

• Chat-Based Interface: Recommendations needed to appear simultaneously within the conversation UI (not one-by-one); a sketch of this constraint is shown below.

These constraints shaped how we structured the UI and information hierarchy.
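
To illustrate the simultaneous-recommendation constraint, here is a minimal TypeScript sketch, assuming a hypothetical streaming source that emits recommendation items one by one; instead of rendering each item as it arrives, items are buffered and handed to the UI in a single batch.

```typescript
// A minimal sketch of the "render simultaneously" constraint, assuming a
// hypothetical streaming source that emits recommendation items one by one.
// Items are buffered instead of rendered as they arrive, and the UI callback
// fires once with the complete set so all cards appear together in the chat.

interface Recommendation {
  title: string;
  price: string;
}

async function collectRecommendations(
  stream: AsyncIterable<Recommendation>,
  onComplete: (items: Recommendation[]) => void,
): Promise<void> {
  const buffer: Recommendation[] = [];
  for await (const item of stream) {
    buffer.push(item); // hold each item back instead of rendering it immediately
  }
  onComplete(buffer); // render the full set of product cards at once
}
```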

First-Time User Onboarding

Chat Layout

Notes

  • LLM response formatting dictated design decisions: even minor visual details like capital letters in system-suggested prompts had a direct impact on LLM interpretation and response time.

  • Product card design and prompt structure had to be optimized around how the AI parsed and returned data.

  • NLP-based prompt handling meant design decisions required close collaboration with back-end and AI teams, making it essential to understand the full technical pipeline; a simplified sketch of this handoff is shown below.
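
A minimal sketch of the kind of glue code these notes implied, assuming a hypothetical response contract: suggested prompts are normalized before being sent to the model (since capitalization affected interpretation), and the structured product payload is parsed defensively before the card component renders it. Field names are illustrative, not the production schema.

```typescript
// Hypothetical glue code between the LLM response and the product card UI.
// The field names and response contract are illustrative assumptions, not
// the production schema.

interface ProductCard {
  title: string;
  price: string;
  imageUrl?: string; // images often arrive later than the text fields
}

// Capitalization in system-suggested prompts affected how the model
// interpreted them, so prompts are normalized before being sent.
function normalizePrompt(prompt: string): string {
  return prompt.trim().toLowerCase();
}

// Parse the structured recommendation payload defensively before the card
// component renders it.
function parseProductCards(payload: unknown): ProductCard[] {
  if (!Array.isArray(payload)) return [];
  return payload
    .filter((item): item is Record<string, unknown> => typeof item === "object" && item !== null)
    .map((item) => ({
      title: String(item.title ?? ""),
      price: String(item.price ?? ""),
      imageUrl: typeof item.imageUrl === "string" ? item.imageUrl : undefined,
    }))
    .filter((card) => card.title.length > 0);
}
```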

Design Process

  • Empathize: Conducted interviews to understand pain points and AI expectations

  • Define: Created personas, user journeys (for new and existing users), and problem statements around decision fatigue and overload

  • Ideate: Ran collaborative workshops to ideate flows, product surfaces, and features

  • Prototype: Created mid- and high-fidelity prototypes for the chat-based AI recommendation experience

  • Test: Validated with users, iterated based on insights, and A/B tested UI variants

Design Solutions & Key Features

1. Swiping Prompts

Prompts are designed to feel interactive and tappable, encouraging users to express preferences quickly. The auto-swipe behavior reinforces a sense of movement and flow.
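
As a rough sketch only, the auto-swipe behavior could be wired up in React along these lines; the interval length, prompt copy, and component shape are placeholder assumptions rather than the shipped implementation.

```tsx
import { useEffect, useState } from "react";

// Placeholder prompt copy for illustration.
const PROMPTS = [
  "Find a birthday gift idea",
  "Compare wireless earbuds",
  "Show me deals on running shoes",
];

// Hypothetical auto-swiping prompt chip: the visible prompt advances on a
// timer, and tapping it sends that prompt to the assistant.
export function SwipingPrompts({ onSelect }: { onSelect: (prompt: string) => void }) {
  const [index, setIndex] = useState(0);

  useEffect(() => {
    const timer = setInterval(() => {
      setIndex((i) => (i + 1) % PROMPTS.length); // auto-advance to the next prompt
    }, 3000); // interval length is an assumption
    return () => clearInterval(timer);
  }, []);

  return <button onClick={() => onSelect(PROMPTS[index])}>{PROMPTS[index]}</button>;
}
```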

2. Human-Centered Interaction Flow

Despite the technical complexity, the focus was on creating a user-friendly, curiosity-driven journey—encouraging users to explore, tap, and engage at their own pace without cognitive overload.

3. Instant Access to Product Details

Users can tap on recommended items to explore them further via a seamless in-app webview, supporting continuous and fluid browsing behavior.

4. AI-Driven Personalization

The interface uses an LLM-based engine to suggest products tailored to users’ past interactions and behavioral data, creating a sense of intelligent assistance.
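
Illustrative only: one way behavioral signals could be folded into the request sent to the LLM-based recommendation engine. The signal names and message format below are assumptions, not the actual pipeline.

```typescript
// Assumed behavioral signals; the real personalization inputs are not shown here.
interface UserContext {
  recentCategories: string[];
  recentSearches: string[];
}

// Build a chat-style request that gives the model lightweight shopping context.
function buildRecommendationRequest(userMessage: string, context: UserContext) {
  const contextSummary =
    `Recently browsed categories: ${context.recentCategories.join(", ") || "none"}. ` +
    `Recent searches: ${context.recentSearches.join(", ") || "none"}.`;

  return [
    { role: "system", content: `You are a shopping assistant. ${contextSummary}` },
    { role: "user", content: userMessage },
  ];
}
```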

Handling Asynchronous Image Loads in a Stream-Based LLM Environment

In traditional e-commerce, product visuals are prioritized in vertical card layouts (as shown on the left). However, our LLM-powered AI shopping assistant required rethinking this approach because of the streaming nature of text responses and the slower loading of images from external sources.

To address this, we implemented a left-to-right horizontal card layout, prioritizing the early display of product titles and pricing information, while allowing the image to load progressively on the right.

From a UX perspective, this shift to a content-first approach helped bridge the performance gap caused by delayed media assets, aligning the visual hierarchy with what could be rendered immediately. Rather than delaying the entire UI until all elements were available, we embraced a stream-friendly interaction model, optimized for the unique behavior of LLM-based systems.
This experience taught us how LLMs don’t just affect backend logic, but deeply impact frontend decisions as well—from layout structure to perceived performance—and how designers must adapt to build developer-friendly, resilient interfaces in emerging AI contexts.

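A simplified sketch of the content-first, horizontal card described above, assuming a React front end; prop names and styling are illustrative. The text block renders as soon as the streamed fields arrive, while the image slot keeps a stable footprint until the external asset finishes loading.

```tsx
import { useState } from "react";

// Sketch of the content-first horizontal card: title and price render as soon
// as the streamed text arrives, while the image slot on the right keeps a
// stable footprint until the external asset finishes loading. Prop names and
// inline styles are illustrative.

interface StreamedProductCardProps {
  title: string;
  price: string;
  imageUrl?: string; // may still be unknown while the response is streaming
}

export function StreamedProductCard({ title, price, imageUrl }: StreamedProductCardProps) {
  const [imageLoaded, setImageLoaded] = useState(false);

  return (
    <div style={{ display: "flex", flexDirection: "row", alignItems: "center" }}>
      {/* Text block renders immediately on the left */}
      <div style={{ flex: 1 }}>
        <strong>{title}</strong>
        <div>{price}</div>
      </div>

      {/* Image slot on the right fills in when the asset is ready */}
      <div style={{ width: 96, height: 96 }}>
        {imageUrl && (
          <img
            src={imageUrl}
            alt={title}
            onLoad={() => setImageLoaded(true)}
            style={{ width: "100%", height: "100%", opacity: imageLoaded ? 1 : 0 }}
          />
        )}
      </div>
    </div>
  );
}
```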

First Month

Users mistook this area for payment support.

Second Month

After refining the placeholder and welcome message, user input became more relevant to product discovery.

Optimizing the First Interaction with AI

15% Drop in Out-of-Context Inputs

Because AI shopping assistants are still unfamiliar to many users, people often mistook the prompt area for a support channel, especially for payment-related help.

In the first month, this caused a high out-of-context rate.

After updating the greeting text and input placeholder to make the assistant's role clearer, we observed a 15% drop in irrelevant inputs.
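
For measurement, a rough sketch of how out-of-context inputs might be flagged is shown below; the keyword list and the trackEvent helper are hypothetical placeholders, since the actual rate was monitored through the product analytics setup.

```typescript
// Rough sketch of how out-of-context inputs could be flagged for measurement.
// The keyword list and the trackEvent helper are hypothetical placeholders.

const OUT_OF_CONTEXT_KEYWORDS = ["payment", "refund", "card declined", "invoice"];

// Placeholder for whatever analytics call the product uses.
declare function trackEvent(name: string, data: Record<string, unknown>): void;

function logAssistantInput(message: string): void {
  const lower = message.toLowerCase();
  const outOfContext = OUT_OF_CONTEXT_KEYWORDS.some((keyword) => lower.includes(keyword));

  trackEvent("ai_assistant_input", {
    outOfContext, // aggregated over time to compute the out-of-context rate
    length: message.length,
  });
}
```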

Takeaways

Deeper Understanding of LLM Behavior: Through this project, I gained firsthand experience with how LLMs process prompts and how their internal mechanisms can shape UX and UI decisions—right down to capitalization or product card layout.

Developer-Friendly Design Mindset: Working closely with developers and data scientists highlighted the importance of design systems that are not only user-centric but also implementation-aware. By understanding the AI model’s constraints and how front-end systems fetch and display AI-generated responses, I was able to create designs that were more aligned with engineering realities.

Designing in Uncertainty: The ambiguity around user expectations for AI assistance taught me to lean more heavily on user interviews and iterative validation. It also emphasized the importance of planning for post-launch learnings and adaptability.