Algorithms

Unicode Text Segmentation

Algorithms for finding boundaries in text: grapheme cluster, word, and sentence boundaries. Essential for cursor movement, text selection, and text processing.


Boundaries in Unicode Text

Text is not a flat sequence of code points — it has structure. Users think in terms of characters, words, and sentences. But Unicode code points do not map cleanly to these concepts. A single visible character (a grapheme cluster) can span multiple code points. A "word" means different things in English, Japanese, and Arabic. A sentence boundary after a period is ambiguous when periods also appear in abbreviations and numbers.

Unicode Text Segmentation (UAX #29) defines algorithms for finding grapheme cluster boundaries, word boundaries, and sentence boundaries. These algorithms are the foundation for correct cursor movement, text selection, word counting, and spell checking in any Unicode-aware application.

The Grapheme Cluster Problem

Python's len() function counts code points, not user-perceived characters:

# Emoji with ZWJ sequence: 1 visible character, 7 code points
family = "\U0001F468\u200D\U0001F469\u200D\U0001F467\u200D\U0001F466"
print(len(family))        # 7 (code points)
# User sees: 👨‍👩‍👧‍👦 (one family emoji)

# Combining characters
cafe = "cafe\u0301"       # e + combining acute = é
print(len(cafe))           # 5 (code points)
print(len("café"))         # 4 (precomposed NFC)
# Both render as "café" — 4 user-perceived characters

# Flag emoji: 2 regional indicator symbols = 1 flag
flag = "\U0001F1FA\U0001F1F8"  # 🇺🇸
print(len(flag))           # 2 (code points)
# User sees: 🇺🇸 (1 flag)
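The mismatch bites hardest when slicing: cutting at a code point index can strip a combining mark or split an emoji sequence in half. A minimal stdlib sketch of the hazard:

```python
import unicodedata

cafe = "cafe\u0301"            # "café" in decomposed (NFD) form
# Slicing by code point index silently drops the combining accent:
print(cafe[:4])                # 'cafe' — the é lost its accent

# Slicing a flag emoji in half leaves a lone regional indicator:
flag = "\U0001F1FA\U0001F1F8"  # 🇺🇸
print(flag[:1])                # '🇺' — no longer a flag

# Normalizing to NFC helps for accents (é becomes one code point),
# but emoji sequences have no precomposed form:
print(unicodedata.normalize("NFC", cafe)[:4])  # 'café'
```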

A grapheme cluster is the minimal unit a user thinks of as a single character. UAX #29 defines grapheme cluster boundary rules that handle:

- Base + combining marks
- Hangul syllable sequences (jamo combining rules)
- Regional indicator pairs (flags)
- Zero Width Joiner (ZWJ) sequences (family/profession emoji)
- Extend characters (tags, emoji modifiers)
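Normalization illustrates one of these cases but not the others: NFC composes a Hangul jamo sequence into a single precomposed code point, while flags and ZWJ sequences stay multi-code-point under every normalization form — only the UAX #29 boundary rules treat them as one unit. A stdlib sketch:

```python
import unicodedata

# Hangul: L + V + T jamo compose into one precomposed syllable under NFC
gak = "\u1100\u1161\u11A8"                       # ᄀ + ᅡ + ᆨ
print(len(gak))                                  # 3 code points
print(len(unicodedata.normalize("NFC", gak)))    # 1 — U+AC01 '각'

# Flags have no precomposed form; NFC leaves the pair untouched,
# so only a UAX #29 segmenter sees it as a single grapheme cluster
flag = "\U0001F1FA\U0001F1F8"                    # 🇺🇸
print(len(unicodedata.normalize("NFC", flag)))   # still 2
```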

Using UAX #29 in Python

The grapheme package provides UAX #29-compliant grapheme cluster segmentation:

# pip install grapheme
import grapheme

family = "\U0001F468\u200D\U0001F469\u200D\U0001F467\u200D\U0001F466"
print(grapheme.length(family))           # 1
print(list(grapheme.graphemes(family)))  # ['👨‍👩‍👧‍👦']

text = "Hello, 世界! 🌍"
print(grapheme.length(text))             # 12 (user-perceived chars)

# Safe string slicing (by grapheme, not code point)
print(grapheme.slice(text, 0, 5))        # 'Hello'

For industrial-strength segmentation including word and sentence boundaries, use ICU via PyICU:

from icu import BreakIterator, Locale

text = "Don't stop. Dr. Smith arrived at 3.14 PM."
bi = BreakIterator.createSentenceInstance(Locale("en_US"))
bi.setText(text)
start = 0
for end in bi:
    print(repr(text[start:end]))
    start = end
# UAX #29 keeps "3.14" together (no break between a period and a digit).
# Abbreviation handling varies: the plain rules may still break after
# "Dr. "; avoiding that needs ICU's suppression data (filtered iterators).
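Word boundaries need the same machinery: naive whitespace splitting works passably for English but fails outright for languages written without spaces, which is where ICU's dictionary-based word iterator (`BreakIterator.createWordInstance`) earns its keep. A quick stdlib-only illustration of the failure mode:

```python
en = "Don't stop."
ja = "今日は良い天気です"   # "The weather is nice today" — no spaces

print(en.split())   # ["Don't", 'stop.'] — near-words, punctuation attached
print(ja.split())   # ['今日は良い天気です'] — one unsplittable token
# ICU's BreakIterator.createWordInstance(Locale('ja')) segments such
# text into dictionary words instead
```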

Quick Facts

Property Value
Specification Unicode Standard Annex #29 (UAX #29)
Boundary types Grapheme cluster, word, sentence
Python len() Counts code points, not grapheme clusters
Python package grapheme (pip install grapheme)
Full ICU support PyICU BreakIterator.createCharacterInstance() etc. (ICU calls grapheme clusters "characters")
ZWJ sequences Zero Width Joiner (U+200D) joins emoji into single grapheme cluster
Regional indicators Two regional indicator letters form a single flag grapheme cluster
Hangul Jamo sequences (L + V + T) form a single syllable grapheme cluster

Related Terms

Other terms in Algorithms

Case Folding

Mapping characters to a common case form for case-insensitive comparison. More comprehensive …

Grapheme Cluster Boundary

Rules (UAX#29) for determining where one user-perceived character ends and another begins. …

NFC (Canonical Composition)

Normalization Form C: decompose, then canonically recompose, producing the shortest form. Recommended for data storage and interchange; the standard form on the web.

NFD (Canonical Decomposition)

Normalization Form D: full canonical decomposition without recomposition. Used by the HFS+ file system on macOS. é (U+00E9) → e + ◌́ (U+0065 + U+0301).

NFKC (Compatibility Composition)

Normalization Form KC: compatibility decomposition followed by canonical composition. Unifies visually similar characters (ﬁ → fi, ² → 2, Ⅳ → IV). Used for identifier comparison.

NFKD (Compatibility Decomposition)

Normalization Form KD: compatibility decomposition without recomposition. The strongest normalization, and the one that loses the most formatting information.

String Comparison

Comparing Unicode strings requires normalization (NFC/NFD) and optionally collation (locale-aware sorting). Binary …

Unicode Bidirectional Algorithm (UBA)

The algorithm that determines the display order of mixed-direction text (e.g. English + Arabic) using characters' bidirectional categories and explicit directional overrides.

Unicode Normalization

The process of converting Unicode text to a standard canonical form. Four forms: NFC (composition), NFD (decomposition), NFKC (compatibility composition), NFKD (compatibility decomposition).

Unicode Collation Algorithm (UCA)

The standard algorithm for comparing and sorting Unicode strings with a multi-level comparison: base characters → accents → case → tie-breakers. Supports locale customization.