One of the critical issues surrounding AI chatbots is user trust. We've all experienced some lame support bot that provides no support at all. Sometimes, it parrots a FAQ at us; sometimes it doesn't even do that (and those FAQs hardly ever answer actual questions frequently asked by users—they are marketing collateral). Not answering our actual questions is the least of it. One system I tried responded with nothing other than "I don't understand your question." Multiple attempts with rephrasing changed nothing, and there was no access to humans.
This is a surefire way to lose customers. Your customer-support system is *part of the product*, as essential as anything written by programmers (sometimes more so). When it fails, the product fails. It is not a "cost center," and it's not something separate from the product itself; it's integral. A bad support system is a giant bug right in your customer's face.
In the worst cases, I stop using a particular product altogether and go to a competitor. If Claude or ChatGPT can provide better support for your product than that chatbot you spent millions developing, what exactly have you accomplished? When you hide sources of real information from public AIs (e.g., a "forum" that requires a login) so your lame system is the only option, I'll go elsewhere.
So, "what's a mother to do?" (Do people even know where that quote came from anymore?)
The solution, as is often the case, goes back to feedback. You need to design these systems to be both externally evaluated and self-evaluating, and then, in practice, adjust them based on that feedback. Ask your users if you solved their problems (right there in the chat system). Look at patterns of use (is someone asking the same thing over and over? Are there always the same follow-ups?). Find out which questions really are asked most frequently, then engineer your AI context to answer those questions better. Evaluate the prompts. (The prompt "I NEED TO TALK TO A FUCKING HUMAN BEING" in all caps is a clue.) Even typing patterns can be used to detect user frustration.
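To make those frustration signals concrete, here's a minimal sketch in Python. The thresholds, regexes, and function names are all made up for illustration, not taken from any real system; the point is only that the signals mentioned above (all-caps shouting, explicit requests for a human, the same question asked repeatedly) are cheap to detect and log.

```python
import re

# Hypothetical frustration signals: all-caps messages, explicit requests for
# a human, and near-identical repeated questions. Thresholds are invented.
CAPS_RATIO_THRESHOLD = 0.7
HUMAN_REQUEST = re.compile(r"\b(human|agent|person|representative)\b", re.IGNORECASE)

def caps_ratio(text: str) -> float:
    """Fraction of alphabetic characters that are uppercase."""
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return 0.0
    return sum(c.isupper() for c in letters) / len(letters)

def looks_frustrated(message: str, history: list[str]) -> bool:
    """Return True if this message shows signs of user frustration."""
    if caps_ratio(message) > CAPS_RATIO_THRESHOLD and len(message) > 10:
        return True  # shouting in all caps
    if HUMAN_REQUEST.search(message):
        return True  # explicitly asking for a person
    # The same question asked again suggests the bot's last answer failed.
    normalized = message.strip().lower()
    if any(normalized == prior.strip().lower() for prior in history[-3:]):
        return True
    return False

if __name__ == "__main__":
    history = ["How do I reset my password?"]
    print(looks_frustrated("How do I reset my password?", history))      # True: repeat
    print(looks_frustrated("I NEED TO TALK TO A HUMAN BEING", history))  # True: caps + "human"
    print(looks_frustrated("Thanks, that fixed it!", history))           # False
```

Whatever signals you actually use, the output belongs in the same feedback loop as your user surveys and usage logs, so you can see which questions trip the bot up and fix the context behind them.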
So, gather as much feedback as you can, then adjust the system. This is not a one-time "we'll write a chatbot and then fire all the support people" solution. It's an ongoing process, and those human support people are an integral part of the system. You need a fallback when (not if) the AI fails.
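The fallback doesn't have to be elaborate. Here's a hypothetical routing rule (the confidence score and the 0.6 cutoff are assumptions, not anything from a real product) showing the shape of a handoff that gets a frustrated user to a person before they give up:

```python
from dataclasses import dataclass

@dataclass
class BotReply:
    text: str
    confidence: float  # hypothetical score from whatever model you use

def route(reply: BotReply, user_is_frustrated: bool) -> str:
    """Decide whether the bot answers or a human takes over.

    The 0.6 cutoff is arbitrary; tune it against the feedback you gather.
    """
    if user_is_frustrated or reply.confidence < 0.6:
        return "escalate_to_human"
    return "send_bot_reply"

print(route(BotReply("Try resetting your password.", 0.9), False))  # send_bot_reply
print(route(BotReply("I don't understand.", 0.2), False))           # escalate_to_human
print(route(BotReply("Here's a FAQ link.", 0.95), True))            # escalate_to_human
```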