Token Counter for LLMs
Paste your prompt and instantly see how many tokens it uses in each model, with an API cost estimate. Free, no signup.
Estimates are based on each model's documented BPE patterns (±5% for typical text). No provider ships an official JavaScript tokenizer; for exact counts, use the official Python SDKs.
How does token counting work?
Language models don't process text as characters or words — they use tokens, text units defined by each model's tokenizer. In English, 1 token typically corresponds to ~4 characters or ~¾ of a word.
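The ~4-characters-per-token rule of thumb can be sketched in a few lines. This is a hypothetical illustration of the heuristic, not any model's actual tokenizer: real BPE tokenizers diverge from it, especially on code, non-English text, and unusual whitespace.

```python
import math

CHARS_PER_TOKEN = 4  # documented average for English text; an approximation

def estimate_tokens(text: str) -> int:
    """Rough token estimate from character count (heuristic, not a real tokenizer)."""
    return math.ceil(len(text) / CHARS_PER_TOKEN)

prompt = "Paste your prompt and instantly see how many tokens it uses."
print(estimate_tokens(prompt))  # 60 characters -> roughly 15 tokens
```

For exact counts you would instead call the provider's tokenizer (for example, OpenAI's `tiktoken` Python package), which applies the model's real BPE merge rules.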
Values in this tool are estimates based on documented averages per model. For exact counts, use each provider's official tokenization endpoint. For practical cost planning and prompt sizing, the estimate is accurate enough.
The cost estimate shows only the input cost (sending the prompt). Output cost depends on the size of the model's response, which varies by task.
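The input-cost arithmetic reduces to one line: token count divided by one million, times the per-million input price. The price in the example below is a placeholder, not any provider's current rate; check each provider's pricing page.

```python
def input_cost_usd(prompt_tokens: int, price_per_million_usd: float) -> float:
    """Cost of sending the prompt (input tokens only; output cost is separate)."""
    return prompt_tokens / 1_000_000 * price_per_million_usd

# e.g. a 1,500-token prompt at an assumed $3.00 per million input tokens:
print(round(input_cost_usd(1500, 3.00), 6))  # 0.0045
```

Output cost follows the same formula with the response's token count and the (usually higher) per-million output price, which is why it cannot be known until the model replies.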