AI Cost by Feature: Why Your Chatbot Might Be Eating Your Margins
We assumed our document summarisation feature was the expensive one. Long inputs, long outputs — it had to be. We were wrong. Our chatbot — the feature we thought was cheap — was 41% of our AI bill. We'd never looked at it by feature. When we did, the numbers made us wince. AI cost by feature is the breakdown most teams never get. It changes how you price, what you gate, and where you optimise.
Why chatbots are often the culprit
Every message is a request. Long conversations mean you're sending the full context — system prompt plus history — with every call. Output tokens add up fast. We had one power user doing 3,000 requests per month. Free tier. Multiply that by a few hundred free users and the cost explodes. Chatbot AI cost sneaks up because volume compounds. We hadn't seen it because we were looking at total spend, not spend by feature. Which AI feature costs most? We had no idea. Turns out it was the one we thought was cheap.
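To see why the cost compounds, here's a minimal sketch of the billing maths. The token counts are illustrative assumptions, not our real figures; the point is the shape of the curve when every call resends the system prompt plus the full history.

```python
# Illustrative sketch: why chatbot cost compounds per conversation.
# SYSTEM_PROMPT_TOKENS and AVG_MESSAGE_TOKENS are assumed numbers.

SYSTEM_PROMPT_TOKENS = 400   # assumed system prompt size
AVG_MESSAGE_TOKENS = 80      # assumed tokens per user or assistant turn

def input_tokens_for_conversation(n_messages: int) -> int:
    """Total input tokens billed across n user messages when every
    call resends the system prompt plus the full history so far."""
    total = 0
    history = 0
    for _ in range(n_messages):
        total += SYSTEM_PROMPT_TOKENS + history + AVG_MESSAGE_TOKENS
        history += 2 * AVG_MESSAGE_TOKENS  # user turn + reply join the history
    return total

print(input_tokens_for_conversation(1))   # → 480
print(input_tokens_for_conversation(30))  # → 84000
```

A 30-message chat bills roughly six times what 30 independent one-message requests would, because the history term grows with every turn. That growth, times a few hundred free users, is the explosion.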
How to get an AI feature cost breakdown
Tag each request with a feature name — chatbot, summarisation, code gen, email drafting. Aggregate by feature. The breakdown will surprise you. We found chatbot at 41%, summarisation at 22%, code gen at 18%. An AI feature cost breakdown like that gives you a clear picture. Once you have it, you can gate the expensive ones, cap usage for free users, or switch models for specific features.
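The aggregation itself is a few lines once the tags are in place. This is a sketch under assumed per-token prices and a hypothetical request log; your log would come from wherever you record API calls.

```python
# Sketch: aggregate spend by feature tag. Prices and the request log
# are illustrative assumptions, not real rates or real traffic.
from collections import defaultdict

PRICE_PER_1K_INPUT = 0.0005   # assumed $ per 1K input tokens
PRICE_PER_1K_OUTPUT = 0.0015  # assumed $ per 1K output tokens

requests = [  # each logged call carries its feature tag and token counts
    {"feature": "chatbot", "input_tokens": 1200, "output_tokens": 300},
    {"feature": "summarisation", "input_tokens": 4000, "output_tokens": 500},
    {"feature": "chatbot", "input_tokens": 2500, "output_tokens": 400},
    {"feature": "code_gen", "input_tokens": 900, "output_tokens": 800},
]

def cost_share_by_feature(log):
    """Return each feature's share of total spend, as a percentage."""
    totals = defaultdict(float)
    for r in log:
        totals[r["feature"]] += (
            r["input_tokens"] / 1000 * PRICE_PER_1K_INPUT
            + r["output_tokens"] / 1000 * PRICE_PER_1K_OUTPUT
        )
    grand = sum(totals.values())
    return {f: round(100 * c / grand, 1) for f, c in totals.items()}

print(cost_share_by_feature(requests))  # chatbot tops even this tiny log
```

Swap in real prices and a real log and the same few lines produce the 41% / 22% / 18% view.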
What we did with the data
We gated the chatbot behind a higher tier. Capped free users at 50 messages per day. Switched summarisation to a cheaper model. Margin on both features improved. The AI cost by feature view we finally had — which AI feature costs most, which customers use it, which tier burns margin — drove every pricing decision we made that quarter.
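The free-tier cap is simple to enforce at the request boundary. A minimal sketch, assuming an in-memory counter; in production this would live in Redis or the billing database, and the limit is the one named above.

```python
# Sketch of a per-day message cap for free-tier users.
# The in-memory dict is illustrative; use a shared store in production.
import datetime

FREE_DAILY_LIMIT = 50
_usage: dict = {}  # (user_id, date) -> messages sent today

def allow_message(user_id: str, tier: str) -> bool:
    """True if this user may send another chatbot message today."""
    if tier != "free":
        return True  # paid tiers are uncapped in this sketch
    key = (user_id, datetime.date.today())
    count = _usage.get(key, 0)
    if count >= FREE_DAILY_LIMIT:
        return False
    _usage[key] = count + 1
    return True
```

Checking the cap before the model call means a capped user costs you a dictionary lookup, not a request.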
PerUnit breaks down cost by customer, feature, and tier. No data engineering. See which features are burning margin before you scale further.