Algorithms

String Comparison

Comparing Unicode strings requires normalization (NFC/NFD) and optionally collation (locale-aware sorting). Binary comparison of code points alone gives incorrect results for equivalent strings.

What is Unicode String Comparison?

Unicode string comparison is the process of determining whether two Unicode strings are equal, or which comes first in a sorted order. It sounds simple, but Unicode's encoding flexibility makes naive byte-by-byte comparison unreliable. The same text can be encoded in multiple valid ways, and language-specific sorting rules vary enormously across the world's scripts and locales.

Binary Comparison Pitfalls

The most obvious approach — comparing strings byte-by-byte or code-point-by-code-point — fails silently in many real-world situations. Consider the character é. It can be encoded as:

  • U+00E9 — a single precomposed code point (NFC form)
  • U+0065 U+0301 — the base letter e followed by a combining acute accent (NFD form)

These two sequences are canonically equivalent under the Unicode Standard but are binary-unequal. A naive comparison would declare them different strings, even though they represent identical text. This causes bugs in search, deduplication, password checks, and username lookups.
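The mismatch is easy to see by inspecting the code points directly (a minimal illustration using Python's standard unicodedata module):

```python
import unicodedata

decomposed = "e\u0301"   # NFD spelling of é: two code points
precomposed = "\u00e9"   # NFC spelling of é: one code point

# Identical on screen, different underneath:
print(decomposed == precomposed)          # False
print(len(decomposed), len(precomposed))  # 2 1
print([unicodedata.name(c) for c in decomposed])
# ['LATIN SMALL LETTER E', 'COMBINING ACUTE ACCENT']
```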

NFC Normalization Before Comparing

The standard defense is to normalize both strings to the same Unicode normalization form before comparing. NFC (Canonical Decomposition followed by Canonical Composition) is the recommended form for most applications because it produces compact, precomposed forms that work well with legacy systems.

import unicodedata

a = "e\u0301"          # e + combining acute (NFD-style)
b = "\u00e9"           # precomposed é (NFC-style)

a == b                                        # False — binary comparison
unicodedata.normalize("NFC", a) == unicodedata.normalize("NFC", b)  # True

Always normalize to NFC before storing usernames, email addresses, or any text that will be compared for equality.
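A minimal sketch of that rule in practice, normalizing once at write time and once at lookup time (the register/lookup helpers and the users dict are hypothetical names for illustration):

```python
import unicodedata

def canonical(s: str) -> str:
    """Normalize to NFC so equivalent spellings map to one key."""
    return unicodedata.normalize("NFC", s)

users = {}

def register(name: str) -> None:
    users[canonical(name)] = name  # store under the normalized key

def lookup(name: str) -> bool:
    return canonical(name) in users

register("Ame\u0301lie")       # registered with a decomposed é
print(lookup("Am\u00e9lie"))   # True — the precomposed spelling finds it
```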

Locale-Aware Collation with ICU

Sorting order is a separate problem from equality. In English, ä typically sorts near a, but in Swedish, ä sorts after z. In French, accents are compared from right-to-left as a tiebreaker. These rules are formalized in the Unicode Collation Algorithm (UCA) and implemented by the ICU library (International Components for Unicode).

In Python, the third-party pyuca package implements the UCA directly, while the locale module exposes the C library's locale-aware collation (locale.strxfrm as a sort key). In JavaScript, Intl.Collator wraps ICU directly.

// JavaScript locale-aware sort
const words = ["ä", "z", "a"];
words.sort(new Intl.Collator("sv").compare); // Swedish: ["a", "z", "ä"]
words.sort(new Intl.Collator("de").compare); // German: ["a", "ä", "z"]
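For contrast, Python's built-in sorted() knows nothing about locales: it orders strings by raw code point. The result below happens to coincide with Swedish order, but only because ä (U+00E4) numerically follows z (U+007A); a UCA implementation such as pyuca or PyICU is needed for genuine dictionary order:

```python
words = ["ä", "z", "a"]

# Code-point order: a (U+0061) < z (U+007A) < ä (U+00E4)
print(sorted(words))  # ['a', 'z', 'ä']
```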

Case-Insensitive Comparison via Case Folding

Converting both strings to lowercase before comparing does not work correctly across all scripts. Unicode defines case folding (documented in CaseFolding.txt) as a locale-independent way to erase case distinctions for comparison purposes. Case folding handles edge cases like the German ß, which folds to ss, and the Greek capital letter sigma Σ, which folds to σ.

# Python case folding
"Straße".casefold() == "STRASSE".casefold()  # True
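Case folding combines with normalization: the Unicode Standard's canonical caseless match compares NFD(casefold(NFD(x))) for each string, so that both case and encoding differences are erased. A sketch using only the standard library:

```python
import unicodedata

def caseless_equal(a: str, b: str) -> bool:
    """Canonical caseless match: NFD(casefold(NFD(s))) for both sides."""
    def key(s: str) -> str:
        return unicodedata.normalize(
            "NFD", unicodedata.normalize("NFD", s).casefold())
    return key(a) == key(b)

print(caseless_equal("Straße", "STRASSE"))  # True — ß folds to ss
print(caseless_equal("e\u0301", "\u00c9"))  # True — decomposed é vs É
```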

Quick Facts

  • Key pitfall: canonically equivalent strings are binary-unequal
  • Recommended normalization: NFC for general text, NFD for internal processing
  • Collation standard: Unicode Collation Algorithm (UCA), CLDR locale rules
  • ICU library: International Components for Unicode
  • Case folding spec: Unicode CaseFolding.txt (UCD)
  • Python module: unicodedata (normalize, casefold)
  • JS API: Intl.Collator, String.prototype.normalize()

Related Terms

More terms in Algorithms

Case Folding

Mapping characters to a common case form for case-insensitive comparison. More comprehensive …

Grapheme Cluster Boundary

Rules (UAX#29) for determining where one user-perceived character ends and another begins. …

NFC (Canonical Composition)

Normalization Form C: decomposes, then canonically recomposes, producing the shortest form. Recommended for storing and exchanging data; the standard form on the Web.

NFD (Canonical Decomposition)

Normalization Form D: fully decomposes without recomposing. Used by the macOS HFS+ file system. é (U+00E9) → e + ◌́ (U+0065 + U+0301).

NFKC (Compatibility Composition)

Normalization Form KC: compatibility decomposition followed by canonical composition. Unifies visually similar characters (ﬁ → fi, ² → 2, Ⅳ → IV). Used for comparing identifiers.

NFKD (Compatibility Decomposition)

Normalization Form KD: compatibility decomposition without recomposing. The strongest normalization, and the one that discards the most formatting information.

Unicode Text Segmentation

Algorithms for finding text boundaries: grapheme cluster, word, and sentence boundaries. Essential for cursor movement, text selection, and text processing.

Unicode Bidirectional Algorithm (UBA)

The algorithm that determines the display order of mixed-direction text (e.g., English + Arabic) using each character's bidirectional category and explicit directional overrides.

Unicode Normalization

The process of converting Unicode text into a standard canonical form. Four forms: NFC (composition), NFD (decomposition), NFKC (compatibility composition), NFKD (compatibility decomposition).

Unicode Collation Algorithm (UCA)

The standard algorithm for comparing and sorting Unicode strings using a multi-level comparison: base characters → accents → case → tiebreakers. Supports per-locale customization.