
Redesigning search (internal platform)
The internal search tool had become slow, cluttered, and hard to trust. Our cross-functional team, spanning engineering, data science, and product, redesigned it from the ground up. I led the design effort, working closely with colleagues to simplify filters, introduce semantic search, and create a more intuitive interface. The result? An increase in expert matches, faster workflows, and a much smoother experience for staff.
Challenge
The internal expert search tool was a key part of the company’s workflow, used by staff to match experts to research projects. The tool was hard to use, filters were overwhelming, and search results weren’t always relevant. Staff were relying on memory, workarounds, and manual filtering to get their jobs done.
Project goal:
Redesign the search experience to make it faster, smarter, and more intuitive, enabling internal staff to find the right expert more efficiently.
Original platform search view.
Discovery & problem definition
I started by interviewing internal staff across departments to understand how they used the tool and where it fell short.
The key issues became clear:
15+ filters, many rarely used
Basic keyword-only search that returned poor results
Clunky UI with limited flexibility (no multi-select, no free text)
From this, we focused on two primary goals:
Improve the relevance of search results
Make the interface faster and more intuitive to use
We also identified an opportunity to introduce semantic search, allowing the tool to understand the context behind a query and surface relevant experts beyond exact keyword matches.
Collating user feedback from the interviews
Prototyping & testing
After some terrible ideas, I developed two initial layout concepts:
Combined view: filters and results on a single screen
Split view: filters on one screen, results on another
We tested both with internal users. They preferred the split view, finding it cleaner and easier to focus on reviewing expert results without distraction.
User testing began to inform key areas of focus, such as filter behaviour (single vs. multi-select), filter types, and how we could present AI-suggested matches.
Combined and split view concepts
UI design & collaboration
I started work on the overall search area, defining the required filters and their behaviours and ensuring design decisions were clearly documented.
Close collaboration with engineers was essential. We aligned early on component behaviour, technical constraints, and performance expectations. This helped avoid any design/dev friction and ensured the new filters and AI suggestions would work reliably.
I also began looking at other small improvements highlighted by user insights during the initial interviews, for example being able to save and share search results.
Defining filter interactions
Pilot & feedback loop
Engineers built a coded prototype, which we pilot-tested with around fifty users across various projects. We also set up metrics to validate its effectiveness.
We ran the pilot for around three months to gather insights and data and to gain early validation.
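For context, below is a rough sketch of how metrics like these might be derived from pilot usage events. The event names, timestamps, and session structure are hypothetical and not the platform's actual analytics.

```python
# A rough sketch of deriving the pilot metrics from usage events.
# Event names, timestamps, and the session structure are hypothetical.
from statistics import mean

def refinements_per_session(sessions):
    """Average number of times a user refined their search within a session."""
    return mean(sum(1 for e in s if e["type"] == "search_refined") for s in sessions)

def seconds_to_first_expert(sessions):
    """Average time (in seconds) from opening search to adding the first expert."""
    durations = []
    for s in sessions:
        opened = next(e["t"] for e in s if e["type"] == "search_opened")
        added = next((e["t"] for e in s if e["type"] == "expert_added"), None)
        if added is not None:
            durations.append(added - opened)
    return mean(durations)

# Tiny illustrative session: one refinement, expert added 80 seconds in.
sessions = [[
    {"type": "search_opened", "t": 0},
    {"type": "search_refined", "t": 25},
    {"type": "expert_added", "t": 80},
]]
print(refinements_per_session(sessions), seconds_to_first_expert(sessions))  # 1 80
```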
Refined search UI screens
Results
✅ Decrease in average search refinements per session
Old search: 2.4
New search: 1.1
Change: -1.3

✅ Decrease in average time to add first expert
Old search: 2 min 15 sec
New search: 1 min 20 sec
Change: -55 sec

✅ Average satisfaction score of 4.1 out of 5 for ease of use
Based on user satisfaction survey responses asking how easy it was to find an expert via the search

The data is based on a small sample size, so it isn’t hard proof, but it is a good early signal that users were finding more relevant experts faster.
💡 Key learnings
Challenging legacy elements
This was essential to simplifying the search. Reviewing real usage data helped us confidently remove filters and focus on what users actually needed.
Making search smarter with AI
Working with AI-powered semantic search taught me how different it is from traditional keyword-based search. Instead of looking for exact matches, the AI looks for contextual relevance, understanding the intent behind a query and matching it to expert profiles even if the wording isn’t identical.
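To make that difference concrete, here is a minimal sketch of keyword matching versus embedding-based semantic matching. The model, library, and example text are purely illustrative and are not the platform's actual search pipeline.

```python
# A minimal sketch of keyword vs. semantic matching using an off-the-shelf
# embedding model; everything here is illustrative, not the production pipeline.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed example model

query = "renewable energy policy emerging markets"
profiles = [
    "Economist focused on solar and wind regulation across Southeast Asia",
    "Specialist covering offshore oil drilling and pipeline logistics",
]

# Keyword search: a profile only matches if it shares a literal word with the query.
def keyword_match(query, text):
    return bool(set(query.lower().split()) & set(text.lower().split()))

keyword_hits = [p for p in profiles if keyword_match(query, p)]

# Semantic search: rank profiles by embedding similarity, so contextually related
# profiles surface even when the wording is completely different.
scores = util.cos_sim(model.encode(query), model.encode(profiles))[0]
ranked = sorted(zip(profiles, scores.tolist()), key=lambda pair: pair[1], reverse=True)

print(keyword_hits)  # [] -- no shared words, so keyword search finds nothing
print(ranked[0][0])  # the solar/wind regulation economist should rank first
```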
Clearly documenting all potential interaction states for form elements is essential
It helps prevent ambiguity during development and ensures a smoother handoff to engineers.
🥵 Challenges
Search transparency
Users wanted clarity on why certain experts appeared in search results and how they matched the criteria. We uncovered this need later in the process, highlighting a gap in early discovery.
Expert results layout
Although it wasn’t originally in scope, the results view needed refinement. Scanning profiles wasn't efficient, which likely affected user engagement and contributed to weaker performance on some key metrics.
Load times
With multiple filters and database requests, load times became a concern. We had to strike a balance between flexibility and performance: too many requests slowed down the experience.
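One common way to manage this trade-off is to collect filter changes and send a single combined query after a short pause, rather than firing a request on every individual change. The sketch below illustrates that pattern only; the class, callback, and timing value are hypothetical, not what we shipped.

```python
# A hedged sketch of batching filter changes into one combined request.
# The class, callback, and debounce value are hypothetical, not the production code.
import threading

DEBOUNCE_SECONDS = 0.3  # illustrative pause before firing the combined query

class FilterQueryBatcher:
    """Collects filter selections and issues a single combined search request."""

    def __init__(self, run_search):
        self.run_search = run_search  # callable that actually queries the expert database
        self.filters = {}
        self._timer = None

    def update(self, name, values):
        # Record the latest selection for this filter.
        self.filters[name] = values
        # Cancel any request scheduled for an earlier change...
        if self._timer is not None:
            self._timer.cancel()
        # ...and schedule one request that covers every active filter.
        self._timer = threading.Timer(
            DEBOUNCE_SECONDS, self.run_search, args=[dict(self.filters)]
        )
        self._timer.start()

# Three rapid filter changes end up as a single combined backend request.
batcher = FilterQueryBatcher(lambda f: print("combined query:", f))
batcher.update("sector", ["Energy"])
batcher.update("region", ["APAC"])
batcher.update("seniority", ["Director+"])
```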
Refined search UI screens
Role: Senior product designer
Project duration: 6 months
Team: Project manager, Development, Data science.
Responsibilities: Discovery and user research • Ideation and prototyping • Validation and testing • UI and interaction design
Tools: Miro • Figma