Traum Tokenizer

Tokenization Model

Overview

Traum Tokenizer is a high-performance, specialized tokenizer designed for next-generation Large Language Models (LLMs) and specifically optimized for the Flash-SLM project. Developed after extensive research into existing tokenizers such as GPT-2 and BERT, Traum Tokenizer addresses the need for a balanced trade-off between compression efficiency, training speed, and linguistic understanding.

Because it uses a byte-level BPE (Byte-Pair Encoding) algorithm, it never produces unknown or encoding-error tokens, making it robust across diverse text types.
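As a minimal illustration of this property, the sketch below encodes text containing emoji and CJK characters and checks that no unknown token appears. It assumes the assemsabry/traum-tokenizer repository referenced in the Usage section below and the standard Hugging Face Transformers API; it is not part of the official examples.

from transformers import AutoTokenizer

# Byte-level BPE covers all 256 byte values, so arbitrary UTF-8 input
# should never map to an unknown token.
tokenizer = AutoTokenizer.from_pretrained("assemsabry/traum-tokenizer")

samples = [
    "plain ASCII text",
    "mixed input with emoji 🤖 and CJK characters 漢字",
]

for text in samples:
    ids = tokenizer.encode(text, add_special_tokens=False)
    if tokenizer.unk_token_id is not None:
        # No id in the output should be the unknown-token id.
        assert tokenizer.unk_token_id not in ids
    print(f"{len(ids)} tokens -> {tokenizer.decode(ids)!r}")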

Key Features

  • Massive Training Scale – Trained on a diverse dataset of 20 billion tokens.
  • Expanded Vocabulary – Vocabulary more than 15,000 tokens larger than GPT-2's for better terminology representation (see the sketch after this list).
  • Precision Engineering – Optimized for reasoning, mathematical symbols, and structural code.
  • Optimized for Efficiency – Maximizes training throughput and inference quality for Small Language Models (SLMs).
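As a quick sanity check on the vocabulary claim above, the two vocabulary sizes can be compared once both tokenizers are loaded. This is a sketch using the assemsabry/traum-tokenizer repository from the Usage section and the public gpt2 checkpoint; it is not an official verification script.

from transformers import AutoTokenizer

traum = AutoTokenizer.from_pretrained("assemsabry/traum-tokenizer")
gpt2 = AutoTokenizer.from_pretrained("gpt2")  # GPT-2 vocabulary: 50,257 tokens

# Per the feature list, the difference should exceed 15,000 entries.
print(len(traum), len(gpt2), len(traum) - len(gpt2))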

Performance Benchmarks

Category           Traum   GPT-2   LLaMA
English Text        2.80    2.80    2.33
Math Logic          1.00    1.00    0.83
Code Syntax         2.57    2.57    2.57
Reasoning (CoT)     7.00    3.50    3.11

Values represent Characters per Token. Higher = More Efficient.
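For reference, the characters-per-token metric can be computed with a small helper along the following lines (a sketch; the benchmark corpora themselves are not published with this card):

def chars_per_token(tokenizer, texts):
    # Average number of characters covered by a single token;
    # higher values mean stronger compression.
    total_chars = sum(len(t) for t in texts)
    total_tokens = sum(
        len(tokenizer.encode(t, add_special_tokens=False)) for t in texts
    )
    return total_chars / total_tokens

Applying this helper to a domain-specific sample (English prose, math expressions, code, or chain-of-thought traces) yields figures directly comparable to the table above.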

Benchmark Analysis

English: Traum matches GPT-2 (2.80 characters per token) and outperforms the LLaMA tokenizer (2.33), putting it in line with industry standards.

Mathematics: On par with GPT-2 (1.00 characters per token) and more efficient than LLaMA (0.83), capturing mathematical structures with high precision.

Code: Performance is consistent with current state-of-the-art tokenizers, matching both GPT-2 and LLaMA at 2.57 characters per token.

Reasoning (CoT): Exhibits very high compression on reasoning traces (7.00 characters per token), double GPT-2 (3.50) and more than double LLaMA (3.11). Future versions (v2) will further enhance linguistic nuance.

Usage

Load the tokenizer via the Hugging Face Transformers library:

Python (Transformers Library)

"keyword">from transformers "keyword">import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("assemsabry/traum-tokenizer")

# Example usage
text = "The quick brown fox jumps over the lazy dog."
tokens = tokenizer.encode(text)

print(f"Encoded tokens: {tokens}")
print(f"Decoded text: {tokenizer.decode(tokens)}")

Future Development

Traum Tokenizer is the foundational component for a series of upcoming open-source AI models designed for high-efficiency reasoning. These models will be released on the official account. Based on community interest and feedback, the tokenizer architecture may be fully open-sourced for broad use in the future.