Is Your New Chatbot Secured?

March 1, 2026
chatbot

We are all excited about adding AI features to our products, and chatbots are among the most popular. Chatbots change (and in some cases improve) how we interact with users, making customer service faster, smarter, and more accessible. But amid this excitement, let us not forget to do our due diligence and put security first.

In this article, we will talk about securing chatbots and the APIs behind them, because if we skip the security checks, that helpful bot can quickly become a massive vulnerability.

When Did Chatbots Go Bad?

Don't take my word for it, though. Let us look at some past incidents that resulted from chasing the AI buzz: what happens when chatbots are not secured properly, and what it costs those involved.

1. Lenovo and the Chatbot Session Hijack

In August 2025, security researchers found a serious flaw in Lenovo's AI customer service assistant.

By typing a very specific prompt, they tricked the chatbot into replying with hidden HTML code rather than plain text. When the web application tried to process that hidden code, it leaked the live session cookies of Lenovo's customer support agents directly to the attackers. Anyone with those stolen cookies could bypass the login screen entirely, hijack an agent's account, and read through live chats and past customer records.

The lesson here is that we must always sanitize both the input going into a chatbot and the output coming out of it. Our web applications should never trust and execute code generated by an AI.
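As a minimal sketch of the output-sanitization half of that lesson, here is what context-aware escaping looks like in Python before a web frontend renders a model's reply. The helper name is hypothetical; the point is that the browser must receive the bot's text as plain text, never as markup:

```python
import html

def render_bot_reply(raw_reply: str) -> str:
    """Escape an AI-generated reply so the browser renders it as plain
    text instead of interpreting any embedded HTML or JavaScript."""
    return html.escape(raw_reply)

# A reply smuggling a tag that would run attacker JavaScript...
malicious = '<img src=x onerror="stealCookies()">'

# ...comes out inert once escaped:
print(render_bot_reply(malicious))
# &lt;img src=x onerror=&quot;stealCookies()&quot;&gt;
```

Real templating engines (Jinja2, React's JSX) do this escaping by default; the Lenovo-style failure mode is rendering model output with that escaping explicitly turned off.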

Read more about the Lenovo Chatbot Flaw here

2. McDonald's and the Paradox.ai Chatbot

In July 2025, security researchers discovered a flaw in the AI recruitment chatbot "Olivia" used by McDonald's, which was built by a third-party vendor named Paradox.ai.

The researchers found a login portal for the chatbot's backend API and simply guessed the administrator password, which was shockingly set to "123456". Because the system lacked basic multi-factor authentication, they gained full access to over 64 million job applicant records.

When we integrate a third-party chatbot, we adopt its security posture into our own environment. This incident shows that our security is only as strong as our weakest vendor.

Read more about the McDonald's Chatbot Breach here

3. The Salesloft Drift and Salesforce OAuth Breach

In August 2025, attackers targeted Drift, a popular customer service chatbot platform used by hundreds of major tech companies.

The hackers managed to steal a master OAuth token from an internal code repository. This token was the digital permission slip the chatbot used to authenticate and interact with backend APIs, such as Salesforce. By abusing this single token, the attackers extracted massive amounts of customer contact details and support logs from over 700 different organizations.

This highlights a classic API security failure. When our API tokens are compromised, attackers bypass the frontend entirely and communicate directly with our databases.
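The mitigation here is twofold: keep integration tokens out of source control, and scope them narrowly. A minimal sketch in Python, where the `CRM_API_TOKEN` environment variable is a hypothetical stand-in for whatever secrets manager you actually use:

```python
import os

def auth_headers() -> dict:
    """Build the Authorization header for the chatbot's CRM calls.

    The token lives in the environment (or a secrets manager), never in
    the repository, so a leaked codebase does not also leak API access.
    """
    token = os.environ.get("CRM_API_TOKEN")  # hypothetical variable name
    if not token:
        raise RuntimeError("CRM_API_TOKEN is not configured")
    return {"Authorization": f"Bearer {token}"}
```

Pair this with short token lifetimes and regular rotation, so that even a token that does leak expires before attackers can exploit it at scale.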

Read more about the Salesloft Drift Breach here

The Risks We Need to Manage

What makes chatbots so risky? It usually comes down to how they connect to your internal systems via APIs.

  • Overly Permissive APIs: Chatbots need to fetch data to be helpful, but giving them unlimited access is dangerous. If an API serving the chatbot has access to your entire customer database without strict boundaries, a compromised bot could result in a major data leak. I frequently see chatbots assigned administrative roles simply to make the integration work faster, which is a critical misconfiguration.
  • Injection Attacks: Chatbots accept user input. If your backend APIs blindly trust the text coming from the chat window, attackers can use prompt injection or malicious scripts to trick the system into handing over sensitive data or executing unauthorized commands. If the API concatenates input directly into a database query, the chatbot essentially becomes an open terminal for the attacker.
  • Authentication and Authorization Flaws: Sometimes, chatbots are integrated without proper identity checks. If the API serving the chatbot does not verify exactly who is asking for the information, a clever user might be able to manipulate the request to see another person's account details. This is Broken Object Level Authorization (BOLA) in action. The chatbot might ask the backend for a specific order number, but if the API does not verify that the current session owns that order, it will happily return the data to anyone who asks.
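To make the BOLA point concrete, here is a minimal Python sketch of the ownership check the order-lookup API should perform. All names here (`Order`, `get_order_status`, the in-memory store) are illustrative, not from any particular framework:

```python
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    owner_id: str
    status: str

# Illustrative in-memory store standing in for the real database.
ORDERS = {
    "A-1001": Order("A-1001", "user-42", "shipped"),
    "A-1002": Order("A-1002", "user-99", "processing"),
}

def get_order_status(session_user_id: str, order_id: str) -> str:
    """Return an order's status only if the requesting session owns it."""
    order = ORDERS.get(order_id)
    # Object-level authorization: the order existing is not enough,
    # the current session must actually own it.
    if order is None or order.owner_id != session_user_id:
        # Identical error for "missing" and "not yours", so attackers
        # cannot enumerate which order IDs exist.
        raise PermissionError("order not found")
    return order.status

get_order_status("user-42", "A-1001")    # -> "shipped"
# get_order_status("user-42", "A-1002") would raise PermissionError
```

The vulnerable version of this function skips the `owner_id` comparison and returns any order the chatbot asks for, which is exactly the flaw described above.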

Your Chatbot Security Checklist

Integrating AI tools into our products is not going to slow down anytime soon, so we need a solid baseline for security. Before you push that new chat feature to production, I highly recommend running through a standard evaluation.

Here is a practical security checklist to keep on hand when building or buying chatbots for your website or app. Following these steps will help you catch the most common API flaws before they turn into costly problems:

  • Enforce the Principle of Least Privilege: Grant your chatbot API only the permissions it needs to do its job. Do not give it full read and write access to your primary database if it only needs to check order statuses. Always scope your OAuth tokens strictly.
  • Sanitize All Inputs: Treat every word typed into the chatbot as untrusted. Ensure your APIs validate and sanitize the data before processing it to prevent injection attacks. Use parameterized queries on the backend and context-aware output encoding on the frontend.
  • Implement Rate Limiting: Protect your backend APIs from being overwhelmed by automated attacks. Limit the number of requests the chatbot can make in a given timeframe to prevent trivial brute-force attacks and denial-of-service conditions.
  • Vet Your Vendors: If you are buying a chatbot service, ask hard questions about their security. Ensure they use multi-factor authentication and rotate their API keys regularly. Request their latest penetration testing reports.
  • Mask Sensitive Data: Configure your APIs so they never send full credit card numbers or raw passwords to the chatbot interface. If the bot does not need to display it, the API should simply not send it.
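As one worked example from the checklist, the rate-limiting step can be sketched as a simple token bucket in Python. The numbers below (a burst of 5 requests, refilling one per second) are arbitrary placeholders, not recommendations:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: `capacity` caps the burst
    size, `rate` is how many tokens refill per second."""

    def __init__(self, capacity: int, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; otherwise reject the request."""
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(capacity=5, rate=1.0)
# The first five calls in a rapid burst succeed; the sixth is throttled.
```

In production you would keep one bucket per user or API key and enforce it at the gateway in front of the chatbot's backend, but the decision logic is the same.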

Let us build cool things, but let us make sure we build them safely. Our users are trusting us with their personal and financial data, and we have a strict responsibility to protect it.

Securing the APIs that power these chatbots is not a one-time task that you can just check off a list and forget about. It requires continuous testing, logging, and monitoring to adapt to new threats.

See you on the next one!
