Challenge

How could we turn support conversations into useful knowledge that could easily be referenced by chatbots and other customer service agents?

Solution

A system and LLM prompt to efficiently extract the primary question and answer within a support conversation.

My Role

Staff AI product designer


Personal Takeaways

This continues to be a tough project: I maintain strong product intuition that it is valuable, but for various reasons it has failed to live up to its potential. Perhaps I haven’t yet learned the art of dropping a project?


SITUATION

After the launch of our LLM-powered AI chatbot Fin, it became more important to ensure that our customers were fully prepared to make use of it. To do so, they would need high-caliber help center material to serve as Fin's knowledge base.

We had some customers with a lot of conversational data (that is, historical customer service chat conversations), but potentially not a lot of knowledge base material (that is, help center articles and content). We wanted to find a way to extract that knowledge from the conversational data, and bring it into a format and structure where it could be used efficiently by Fin. We strongly believed this could provide significant value for our customers.


PROJECT GOALS

Ultimately this project was an experiment. We had hypotheses that this data existed, that the data was valuable, and that our customers had no easy way to access and leverage the data. We sought to prove whether those hypotheses were true.


DESIGN APPROACH

My primary contribution to the initial stage of this project was two-fold: working with an ML scientist to shape an LLM output that captured the “meaty” part of a support conversation, and designing a method for getting in-the-moment confirmation from a customer support rep. (We initially wanted to test a workflow where reps would review and approve the extracted content while finishing up the conversation at hand, before moving on to another one.)

My first week on the project was spent providing feedback and direction to the ML scientist on shaping our LLM prompt: excluding extraneous information, and consolidating and streamlining the content of the conversation until we had a pithy representation of its main question and answer.
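To give a flavor of this kind of prompt shaping, here is a minimal sketch of an extraction prompt. The wording, format, and helper function are all illustrative assumptions, not the prompt we actually shipped:

```python
# Hypothetical sketch of a Q&A extraction prompt for a support conversation.
# The instructions and output format shown here are illustrative only.

def build_extraction_prompt(transcript: str) -> str:
    """Build an LLM prompt that distills a support conversation
    into its primary question and answer."""
    return (
        "You will be given a customer support conversation.\n"
        "Extract the single most important question the customer asked\n"
        "and the answer the support rep gave.\n"
        "Exclude greetings, small talk, and account-specific details.\n"
        "Respond in exactly this format:\n"
        "Question: <one-sentence question>\n"
        "Answer: <concise answer>\n\n"
        f"Conversation:\n{transcript}"
    )

prompt = build_extraction_prompt("Customer: How do I reset my password? ...")
```

The key design pressure was on the exclusion rules: without them, the model tended to carry over conversational filler rather than a reusable, pithy snippet.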

Then I worked with a designer from the Inbox team to create a modal experience that let service reps quickly view, edit, and approve the extracted snippet.


PROTOTYPE + TESTING

Like many of our features, we first launched a prototype to our internal customer service (CS) team at Intercom. We set an upper limit on how many times any individual CS rep would receive the modal prompt, so that we wouldn’t place an undue burden on them.
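The cap itself can be sketched as a simple per-rep daily counter. The cap value and in-memory store below are assumptions for illustration; the real system presumably tracked this server-side:

```python
from collections import defaultdict
from datetime import date
from typing import Optional

# Hypothetical per-rep daily cap on modal prompts (value is illustrative).
DAILY_CAP = 3

_shown: dict = defaultdict(int)  # (rep_id, day) -> times shown

def should_show_modal(rep_id: str, day: Optional[date] = None) -> bool:
    """Return True (and record the showing) if this rep is still
    under today's cap; otherwise return False."""
    key = (rep_id, day or date.today())
    if _shown[key] >= DAILY_CAP:
        return False
    _shown[key] += 1
    return True
```

Counting per (rep, day) means the budget resets naturally at midnight without any scheduled cleanup job.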

Feedback was generally positive about the content of the extracted snippets. The modal experience was disruptive to a rep’s mental flow when finishing up a conversation, but because we’d limited the number of times it could interrupt them each day, we determined it was an acceptable risk and decided to launch to more customers.


LAUNCH + OUTCOMES

We initially launched with some beta customers while running an A/B test to determine whether the generated snippets actually improved the resolution rate of the Fin AI chatbot. We saw a small but statistically significant improvement, so we made the feature available to all customers.


Details

URL: Launch post
Date: June 2023
Role: Staff AI product designer
Tools: Figma, GPT-4