We are all excited about adding new AI features to our products, and chatbots are among the most popular. Chatbots change (and in some cases improve) how we interact with users, making customer service faster, smarter, and more accessible. But amid this excitement, let us not forget to do our due diligence and put security first.
In this article, we will talk about securing chatbots and the APIs behind them. Because if we skip the security checks, that helpful bot could quickly become a massive vulnerability.
Don't take my word for it, though. Let us look at some past incidents that came from chasing the AI buzz without due diligence, at what happens when chatbots are not secured properly, and at what it cost those involved.
In August 2025, security researchers found a serious flaw in Lenovo's AI customer service assistant.
By typing a very specific prompt, they tricked the chatbot into replying with hidden HTML code rather than plain text. When the web application rendered that hidden code in a support agent's browser, it executed and leaked the live session cookies of Lenovo's customer support agents directly to the attackers. Anyone with those stolen cookies could bypass the login screen entirely, hijack an agent's account, and read through live chats and past customer records.
The lesson here is that we must always sanitize both the input going into a chatbot and the output coming out of it. Our web applications should never trust or execute code generated by an AI.
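As a minimal sketch of that lesson, here is one way to escape a bot reply on the server before it ever reaches the browser. The function name and the malicious payload are illustrative, not taken from the Lenovo report:

```python
import html

def render_bot_reply(raw_reply: str) -> str:
    """Escape a chatbot reply so any HTML it contains is displayed
    as plain text instead of being executed by the browser."""
    return html.escape(raw_reply)

# An illustrative reply hiding an <img> tag that would exfiltrate
# cookies if the page rendered it as real HTML:
malicious = '<img src=x onerror="steal(document.cookie)">'
print(render_bot_reply(malicious))
# The tag comes out as inert text: &lt;img src=x ...&gt;
```

In a real application you would pair this with your templating engine's auto-escaping and a Content-Security-Policy header, rather than relying on a single function.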
Read more about the Lenovo Chatbot Flaw here
In July 2025, security researchers discovered a flaw in the AI recruitment chatbot "Olivia" used by McDonald's, which was built by a third-party vendor named Paradox.ai.
The researchers found a login portal for the chatbot's backend API and simply guessed the administrator password, which was shockingly set to "123456". Because the system lacked basic multi-factor authentication, they gained full access to over 64 million job applicant records.
When we integrate a third-party chatbot, we adopt its security posture into our own environment. This proves that our security is truly only as strong as our weakest vendor.
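To make the lesson concrete, here is a hedged sketch of the kind of guard that would have stopped that login: a blocklist of common passwords, a minimum length, and a mandatory second factor. All names here are hypothetical, not Paradox.ai's actual code:

```python
# Hypothetical admin-login guard. A real system would check against a
# full breached-password list and delegate MFA to an identity provider.
COMMON_PASSWORDS = {"123456", "password", "admin", "qwerty", "letmein"}

def admin_login_allowed(password: str, mfa_verified: bool) -> bool:
    if password in COMMON_PASSWORDS:
        return False  # "123456" never gets through
    if len(password) < 12:
        return False  # enforce a minimum length
    if not mfa_verified:
        return False  # MFA is mandatory for admin portals
    return True
```

The point is not the specific rules but that both checks exist: a guessable password alone failed McDonald's vendor, and missing MFA turned that one weak credential into full access.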
Read more about the McDonald's Chatbot Breach here
In August 2025, attackers targeted Drift, a popular customer service chatbot platform used by hundreds of major tech companies.
The hackers managed to steal a master OAuth token from an internal code repository. This token was the digital permission slip the chatbot used to authenticate against backend systems such as Salesforce. By abusing this single token, the attackers extracted massive amounts of customer contact details and support logs from over 700 different organizations.
This highlights a classic API security failure. When our API tokens are compromised, attackers bypass the frontend entirely and communicate directly with our databases.
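One practical defence is simply keeping the token out of the repository in the first place. Here is a minimal sketch, assuming the token lives in an environment variable populated by a secrets manager; the variable name CRM_API_TOKEN is my own example:

```python
import os

def load_api_token(env_var: str = "CRM_API_TOKEN") -> str:
    """Read the backend API token from the environment (populated by a
    secrets manager) instead of committing it to the code repository."""
    token = os.environ.get(env_var)
    if not token:
        # Fail fast at startup rather than running with no credential.
        raise RuntimeError(f"{env_var} is not set; refusing to start")
    return token
```

Scoping the token to the minimum permissions it needs and rotating it on a schedule also limits the blast radius if it does leak.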
Read more about the Salesloft Drift Breach here
What makes chatbots so risky? It usually comes down to how they connect to your internal systems via APIs.
Integrating AI tools into our products is not going to slow down anytime soon, so we need a solid baseline for security. Before you push that new chat feature to production, I highly recommend running through a standard evaluation.
Here is a practical security checklist to keep on hand when building or buying chatbots for your website or app. Following these steps will help you catch the most common API flaws before they turn into costly problems:

- Sanitize and escape all chatbot input and output, and never render or execute AI-generated code.
- Enforce strong, unique credentials and multi-factor authentication on every admin portal and backend API.
- Vet third-party chatbot vendors carefully; their security posture becomes yours.
- Keep API tokens out of code repositories; store them in a secrets manager, scope them to the minimum permissions needed, and rotate them regularly.
- Authenticate and rate-limit every API the chatbot can reach, and log and monitor that traffic for unusual access patterns.
Let us build cool things, but let us make sure we build them safely. Our users trust us with their personal and financial data, and we have a serious responsibility to protect it.
Securing the APIs that power these chatbots is not a one-time task that you can just check off a list and forget about. It requires continuous testing, logging, and monitoring to adapt to new threats.
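As a small illustration of that monitoring, here is a sketch that counts and logs chatbot API calls per client credential, so a single stolen token suddenly pulling data at the scale of the Drift breach would stand out in the logs. All names and the threshold are illustrative:

```python
import logging
from collections import defaultdict

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("chatbot-api")

# Running tally of calls per client credential.
request_counts: dict = defaultdict(int)

def record_api_call(client_id: str, endpoint: str,
                    alert_threshold: int = 1000) -> int:
    """Log every chatbot API call and flag clients whose volume
    spikes past a threshold, so abused tokens stand out quickly."""
    request_counts[client_id] += 1
    count = request_counts[client_id]
    log.info("client=%s endpoint=%s total=%d", client_id, endpoint, count)
    if count > alert_threshold:
        log.warning("client=%s exceeded %d calls; possible token abuse",
                    client_id, alert_threshold)
    return count
```

In production this tally would live in a shared metrics store and feed real alerting, but even this much visibility is more than many of the breached systems above had.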
See you on the next one!