Token Counter

Count the characters, words, and estimated tokens in your text, and see what percentage of each major model's context window it would use (GPT-4, GPT-3.5, Claude 3, Gemini). Runs entirely in your browser.


Context window usage

- GPT-4 8K: 8,192 tokens
- GPT-4 32K: 32,768 tokens
- GPT-4 128K: 131,072 tokens
- GPT-3.5 16K: 16,384 tokens
- Claude 3: 200,000 tokens
- Gemini: 128,000 tokens

What Is a Token Counter?

A token counter estimates how many tokens your text will consume when sent to an AI model. Tokens are the units models read and write; in English, one token is roughly 4 characters or 0.75 words. Knowing your token count helps you stay within context limits and estimate API costs.

This tool uses two heuristics: words × 1.3 (a GPT-style approximation) and characters ÷ 4. It shows estimates for multiple models and what percentage of each model's context window your text would use.
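The two heuristics above can be sketched in a few lines of Python. The constants 1.3 and 4 come straight from the rules described here; a real tokenizer (such as OpenAI's tiktoken) will give somewhat different counts:

```python
def estimate_tokens(text: str) -> dict:
    """Estimate token count using two rough heuristics."""
    words = len(text.split())
    chars = len(text)
    return {
        "words": words,
        "chars": chars,
        "by_words": round(words * 1.3),  # GPT-style word approximation
        "by_chars": round(chars / 4),    # ~4 characters per token
    }

print(estimate_tokens("Count tokens in your text before sending it to an API."))
```

The two estimates usually land close to each other for ordinary English prose; they diverge for code, non-English text, or text with many long words.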

Context Windows by Model

GPT-4: 8K, 32K, or 128K tokens depending on variant. GPT-3.5-turbo: 16K. Claude 3: up to 200K. Gemini: 128K. The context window includes both your input and the model's output. If your prompt is 4,000 tokens and the model can generate 4,000 more, you're at the 8K limit. Plan accordingly.
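That budgeting check can be sketched as follows. The context sizes are the ones listed above; the model keys and function are illustrative, not any provider's API:

```python
CONTEXT_WINDOWS = {
    "gpt-4-8k": 8_192,
    "gpt-4-32k": 32_768,
    "gpt-4-128k": 131_072,
    "gpt-3.5-16k": 16_384,
    "claude-3": 200_000,
    "gemini": 128_000,
}

def fits(model: str, prompt_tokens: int, max_output_tokens: int) -> bool:
    """Input and output share one window, so both must fit together."""
    return prompt_tokens + max_output_tokens <= CONTEXT_WINDOWS[model]

# The example from the text: a 4,000-token prompt plus up to 4,000
# output tokens just fits inside the 8K window.
print(fits("gpt-4-8k", 4_000, 4_000))  # True: 8,000 <= 8,192
```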

When Token Count Matters

Token count matters when: building long system prompts, summarizing documents, batching multiple messages, or estimating API costs. Most APIs charge per token. A 10K-token prompt costs more than a 1K-token prompt. Use this tool to trim or restructure content before sending.
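Cost estimation is just token count times a per-token rate. The $0.01-per-1K-token rate below is a placeholder for illustration, not a real price; check your provider's pricing page for current rates:

```python
def estimate_cost(tokens: int, usd_per_1k_tokens: float) -> float:
    """Rough API cost: providers bill per token, usually quoted per 1K."""
    return tokens / 1_000 * usd_per_1k_tokens

HYPOTHETICAL_RATE = 0.01  # placeholder: USD per 1,000 tokens

print(f"${estimate_cost(10_000, HYPOTHETICAL_RATE):.2f}")  # 10K-token prompt
print(f"${estimate_cost(1_000, HYPOTHETICAL_RATE):.2f}")   # 1K-token prompt
```

Whatever the actual rate, the ratio holds: the 10K-token prompt costs ten times the 1K-token one, which is why trimming before sending pays off.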

