AnDI
AI-powered manual assistant for science labs
2024 - 2025
Figma prototype showcasing the AI-powered RAG search solution for scientific documentation (designed by me)
4,000 pages. $600,000 machines.
That’s the reality for scientists operating equipment like the NMR Spectrometer at national labs. When questions come up mid-experiment, they often dig through dense manuals or wait for senior staff — slowing research and increasing anxiety.
I created AnDI, an AI-powered assistant that helps scientists get instant answers from complex documentation, freeing them to focus on discovery, not page-flipping.
Originally designed with a voice UI for hands-free use, I pivoted to a text-based interface after usability testing revealed safety concerns — lab users couldn’t wear noise-canceling headphones in loud environments.
Role
Product Designer & Researcher · Intrapreneur
Team
1 Product designer & researcher (me)
1 UX research intern
3 Developers (AI, front-end, back-end)
1 Project manager
Context
Born from a fast-paced, 8-hour company hackathon
Impact
$100k
R&D investment
Won internal funding to validate the concept through user research and expand development toward real lab partnerships
3
Organizations & Partners
Nuclear Magnetic Resonance (NMR), a PNNL lab
Grid Launch Pad, a PNNL lab
Advanced Metering (U.S. Dept. of Energy)
2/5
hackathon awards
Out of 15 teams, our solution won:
“Most WOW Factor”
“Most Complete Solution”
NMR Lab Chief Scientist
Target audience, during Alpha testing
"We can pop this (AnDI) up in my wet lab and start using it today. It’s ready to go. Everyone from top to bottom will benefit from this (AnDI)."
Hackathon approach: user-research focused problem solving
Hour 0 - 1: Framing the problem
On the day of the hackathon, in just 20 minutes, I scoped the problem space by aligning AI possibilities with development constraints in collaboration with the project manager. I quickly drafted an outline for contextual inquiry to guide our user research during the hackathon day, ensuring we grounded our ideas in real lab needs from the start.
Lab Technician
Target audience, during hackathon day
"Reading the documentation was time consuming, and I was really afraid I would break it ($750k equipment)."
Hour 1 - 2: Contextual inquiries
"What was the most difficult part of your experiment?"
From this contextual inquiry and direct user interviews, I identified three recurring challenges that AI could meaningfully address:
Repetitive labeling of lab samples
Organizing and transferring emergency freezer inventory
Time-consuming and anxiety-inducing equipment operation
These findings gave us a human-centered foundation for scoping feasible, high-impact AI solutions within the remaining hackathon hours.

Pain point 1: Space management challenges

Pain point 2: The physical lab space where I conducted contextual inquiries

Pain point 3: Innovation blocked by equipment operation roadblocks
Hour 2 - 3: Synthesis, feasibility mapping & early ideation
With early user insights in hand, I worked with developers to assess what was feasible within the hackathon’s 8-hour limit. We reviewed all three identified pain points and prioritized them based on impact vs. complexity:
Not feasible in the hackathon timeframe:
Labeling lab samples (would require advanced ML + object detection)
Freezer inventory management (complex tracking logic and physical constraints)
Technically achievable:
Operating complex lab equipment — a high-anxiety, high-friction task that could be improved with documentation access via AI
The Concept: Voice-driven AI assistant
I proposed a voice-activated assistant powered by Retrieval-Augmented Generation (RAG), designed to retrieve precise instructions from dense documentation in real time.
It was tailored specifically for lab conditions:
🔊 Loud ambient noise
🧪 Hands-on equipment use
⏱️ Time-sensitive decision-making
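At its core, the RAG pattern retrieves the most relevant manual excerpts for a question and hands only those to the language model. A minimal sketch of that retrieval step is below; it uses a toy bag-of-words similarity in place of real embeddings, and all chunk text and names are illustrative, not AnDI's actual implementation.

```python
# Minimal sketch of the RAG retrieval step: score manual chunks against a
# query, keep the top match, and build a grounded prompt for the model.
# Bag-of-words cosine similarity stands in for real embeddings.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    q = Counter(query.lower().split())
    ranked = sorted(chunks, key=lambda c: cosine(q, Counter(c.lower().split())), reverse=True)
    return ranked[:k]

# Illustrative manual excerpts (hypothetical content)
manual_chunks = [
    "To shim the NMR magnet, open the shim panel and select auto-shim.",
    "Sample changer maintenance: replace the gripper pads every 6 months.",
    "Emergency shutdown: press the red quench button only if instructed.",
]
context = retrieve("how do I shim the magnet", manual_chunks, k=1)
prompt = "Answer using only this excerpt:\n" + "\n".join(context)
```

Because the model only sees retrieved excerpts, answers stay anchored to the documentation rather than the model's general knowledge, which is what makes the approach viable for high-stakes equipment.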
The Vision
If a scientist could simply ask a question out loud and get an answer — without pausing their work or flipping through a 4,000-page manual — they’d operate faster, with greater confidence and less stress.
I led the ideation workshop for the hackathon team to home in on one idea
Hour 4 - 8: Storyboarding
To help our audience — many of whom weren’t scientists — clearly imagine the experience, I chose to focus on storyboarding instead of building a UI. This allowed the team to spend our limited hours communicating the idea and its impact, rather than debating visual details.
I used a whiteboard to draw out the experience and align the team on one concept
I framed the story like this:
Imagine you're working on a brand-new piece of research equipment. It's exciting — and a little daunting. You don’t want to break a $750K machine, but you’re not sure where to start.
That’s where InsDo (early name for AnDI) steps in.
You ask questions using voice commands and instantly get answers pulled from thousands of pages of documentation. You never need to stop your work, flip through a binder, or wait for someone more experienced.
InsDo cross-references multiple documents to provide a full picture of the machine and the lab it's in.
It’s your hands-free, AI-powered research companion.
Pitch impact: Turning insight into recognition
I collaborated with the team to build a compelling pitch deck that brought our concept to life. It included:
A hook: “The average scientific manual is 4,000 pages.”
Target audience framing
Storyboards illustrating the user journey
A recorded demo of the RAG-powered interaction
This storytelling-first approach helped our team stand out among the 15 competing teams, ultimately earning:
🏆 “Most WOW Factor”
🏆 “Most Complete Solution”
💰 $100K in internal R&D investment to develop and validate the product
Post-hackathon: From concept to AI-driven product
Following the $100K investment awarded after the hackathon, I transitioned into leading design execution under real-world resource and budget constraints.
As the lead product designer, I guided the development team to build from the storyboard, core workflow, and interaction principles inspired by Google’s NotebookLM. This approach allowed us to move quickly, stay aligned with the original vision, and begin validating the solution through real usability testing.
Letting users lead: Validating AnDI in the lab
Prototyping strategy for AI
Rather than focusing on a polished UI first, I made the strategic choice to let engineering lead with backend development.
RAG-based systems generate unpredictable responses, making it difficult to prototype convincingly without a working engine.
This decision allowed us to test with real AI behavior, not just a staged Figma prototype.
Pivoting from voice to typed interaction
The original idea centered on a voice-first assistant for hands-free operation. But usability testing surfaced a critical blocker: in noisy, safety-focused labs, headphones aren't allowed, much as pedestrians avoid noise-canceling audio in high-risk settings.
Based on this insight, I shifted AnDI’s primary mode to a desktop-first, typed interface, keeping voice as a secondary feature for remote scenarios.
Real-world validation
User testing confirmed both the need and practicality of AnDI for:
Small to mid-sized lab equipment (under $500K)
In-person and remote lab settings
Paper-based, documentation-heavy workflows
A particularly validating moment: a lab scientist offered to deploy AnDI on an old laptop so their team could start using it immediately: clear evidence of value even without full deployment.
Designing for real use and future growth
Following usability testing, I synthesized findings using affinity diagramming and translated them into actionable design goals. Drawing inspiration from NotebookLM, I focused the UI on three pillars to power AnDI's usability and scalability:
Goal 1: Organize documentation by physical context
To bridge the gap between digital tools and real lab environments, I designed the UI to reflect the physical ontology of a lab — aligning with how users naturally group and access information:
Lab Spaces → Equipment → Documentation Search (via RAG)
Included shortcuts for recently used equipment to support repetitive workflows and reduce navigation friction.
This system-to-reality alignment helped users find documentation faster and stay grounded in familiar lab structures.
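The Lab Spaces → Equipment → Documentation hierarchy, plus the recent-equipment shortcuts, can be sketched as a simple data model. This is a hypothetical illustration of the structure, not AnDI's codebase; all class and field names are assumptions.

```python
# Hypothetical data model mirroring the lab's physical ontology:
# Lab Space -> Equipment -> Documents, which scope the RAG search.
from dataclasses import dataclass, field

@dataclass
class Equipment:
    name: str
    documents: list[str] = field(default_factory=list)  # manual IDs / paths

@dataclass
class LabSpace:
    name: str
    equipment: list[Equipment] = field(default_factory=list)

@dataclass
class Workspace:
    spaces: list[LabSpace] = field(default_factory=list)
    recent: list[Equipment] = field(default_factory=list)  # shortcut row in the UI

    def open_equipment(self, eq: Equipment) -> list[str]:
        """Surface the equipment's documents and record it as a recent shortcut."""
        self.recent = [eq] + [e for e in self.recent if e is not eq]
        return eq.documents

nmr = Equipment("NMR Spectrometer", ["nmr_operations.pdf", "nmr_safety.pdf"])
ws = Workspace(spaces=[LabSpace("Wet Lab B", [nmr])])
docs = ws.open_equipment(nmr)  # these documents become the RAG search scope
```

Opening a piece of equipment both narrows the search scope to its manuals and bumps it to the top of the shortcut list, which is what supports the repetitive workflows observed in testing.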
Goal 2: Streamlined, context-aware RAG search
RAG is the heart of AnDI’s functionality, so I designed the interface to support the complexity of natural language queries:
Conducted competitive analysis of Claude, ChatGPT, and NotebookLM
Optimized the UI for longer prompts (100–300 tokens) — reflecting real user behavior in searching complex procedures
Introduced a document switcher dropdown, allowing users to swap or add documents mid-task without disrupting context
Document switcher
To avoid disrupting task flow, I implemented a contextual dropdown for users to switch or add documents on the fly — keeping them anchored in their current interaction.
This approach balanced AI power with task continuity, ensuring users stayed in flow while retrieving accurate, relevant information.
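The key property of the document switcher is that swapping documents changes the retrieval scope without touching conversation history. A minimal sketch of that session state, with illustrative names only:

```python
# Sketch of the document-switcher idea: conversation history persists while
# the set of documents scoped to the RAG search changes underneath it.
class Session:
    def __init__(self, documents: set[str]):
        self.documents = set(documents)            # current RAG search scope
        self.history: list[tuple[str, str]] = []   # (question, answer) turns

    def add_document(self, doc: str) -> None:
        self.documents.add(doc)                    # widen scope mid-task

    def swap_document(self, old: str, new: str) -> None:
        self.documents.discard(old)
        self.documents.add(new)                    # history untouched: user stays in flow

session = Session({"nmr_operations.pdf"})
session.history.append(("How do I load a sample?", "..."))
session.swap_document("nmr_operations.pdf", "nmr_maintenance.pdf")
```

Keeping scope and history as separate pieces of state is what lets the dropdown feel like a lightweight context change rather than starting a new conversation.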
Goal 3: Enable future collaboration and ecosystem expansion
Beyond day-one functionality, I designed AnDI with future partnerships and lab ecosystem growth in mind:
Rather than opening AnDI to everyone, I framed it as an exclusive early-access opportunity, a deliberate move to spark curiosity, drive feedback, and start conversations with those most eager to see it succeed.
Future partnership 1: Lab tour integration
To align with digital twin initiatives, I designed space in the UI to host lab tour documentation and shared environments, laying the foundation for future visualization and simulation use cases.

Future partnership 2: Crowdsourced document sharing
Usability testing revealed a key pain point: locating digital documentation was difficult. I proposed a crowdsourced hub where researchers could upload and share manuals — similar to academic platforms.
The shift AnDI enables
Before
Dense, 4,000-page paper manuals
Scientists reliant on mentors for routine questions
High anxiety around breaking expensive lab equipment
After
Organized, searchable digital documentation
Increased confidence through instant access to guidance
Technical takeaway: Speeding up with a Figma UI kit
For this project, I used the SnowUI Figma UI Kit to conserve time and design resources, allowing me to communicate the core experience in under 16 hours.
It was a valuable exercise in onboarding a design system I hadn’t used before — starting slow, learning the logic behind the components, and then ramping up quickly to produce high-quality UI.
For future projects, I've learned to start with foundational elements like buttons and to redesign existing patterns first; it helps me internalize the logic of the system before creating anything new.
AI Takeaway: Designing for practical impact, not just hype
Working on AnDI deepened my understanding of what it means to design for AI in real-world, high-stakes environments. AI is powerful — but it’s not magic. The core of good AI design still comes down to human-centered thinking: reducing cognitive load, supporting decision-making, and meeting people where they are.
Instead of chasing novelty, I focused on delivering practical, trustworthy value — designing around RAG’s unpredictability, grounding interactions in lab workflows, and knowing when to pivot (like deprioritizing VUI based on safety and usability constraints).
AnDI reminded me that AI design isn’t about showing off the model — it’s about amplifying the user.