
Every model has a native tongue.
The question is whether yours matches.

Tokenizer efficiency is a proxy for how well a model's vocabulary fits your language. The same content in Thai needs 2.8× more tokens than in English — a smaller effective context window, worse reasoning over the same text, and a higher cost multiplier.
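The fertility ratio above can be sketched in a few lines. This is a minimal illustration, not mothertoken's implementation; the token counts below are placeholders standing in for real parallel-text measurements:

```python
# fertility vs english: how many tokens the same content needs
# relative to its english translation, under one tokenizer.
def fertility(tokens_target: int, tokens_english: int) -> float:
    """Token-count ratio for parallel text; 1.0 = parity with English."""
    return tokens_target / tokens_english

# illustrative counts for one parallel paragraph (not measured data)
tokens_english = 100
tokens_thai = 280

f = fertility(tokens_thai, tokens_english)
print(f"fertility: {f:.1f}x")             # 2.8x more tokens, same content
print(f"effective context: {1 / f:.0%}")  # usable window shrinks to ~36%
```

The inverse of fertility is the fraction of the context window left for actual content: at 2.8×, a 128k-token window behaves like a ~46k-token one.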

your language across tokenizers
# | tokenizer | chars/token | fertility vs english | efficiency
cost multiplier

* mothertoken owns no pricing data. the rtc multiplier is yours to apply: whatever you pay per token × multiplier = what your language actually costs.
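The footnote's formula is plain arithmetic. A quick sketch, using a placeholder rate rather than any real provider pricing:

```python
# what you pay per 1M tokens x your language's multiplier
# = the effective price for the same content in your language.
def effective_price(price_per_1m: float, multiplier: float) -> float:
    """Nominal per-1M-token price scaled by the language's token multiplier."""
    return price_per_1m * multiplier

# placeholder rate: $10 per 1M tokens at a 2.8x multiplier (illustrative)
print(f"${effective_price(10.0, 2.8):.2f} per 1M tokens of equivalent content")
```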