Algorithms

Composition Exclusions

Characters excluded from canonical composition (NFC) to prevent non-starter decompositions from recomposing and to keep the algorithm stable. Listed in CompositionExclusions.txt.


Why Some Decomposed Characters Are Never Recomposed

NFC (Canonical Composition) is defined as: first apply NFD (full canonical decomposition), then apply canonical composition. You might expect that NFD then compose would always restore the original precomposed characters. But it does not — some characters that decompose in NFD are explicitly excluded from recomposition in NFC. These are called composition exclusions.

The exclusion mechanism is what makes canonical composition an algorithm rather than a simple inversion of the decomposition mapping. It ensures that NFC is a well-defined unique form: every string has exactly one NFC representation, and that NFC is stable across Unicode versions.
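This uniqueness is easy to observe with Python's standard unicodedata module: every canonically equivalent spelling of the same character lands on the same NFC form.

```python
import unicodedata

# Three canonically equivalent spellings of Å: the singleton ANGSTROM SIGN,
# the precomposed letter, and the fully decomposed sequence.
forms = ["\u212B", "\u00C5", "A\u030A"]

# All three normalize to the single NFC representation U+00C5.
nfc_forms = {unicodedata.normalize("NFC", s) for s in forms}
assert nfc_forms == {"\u00C5"}
```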

Three Categories of Composition Exclusions

Unicode defines three types of characters that are excluded from canonical composition:

1. Singletons

Characters whose canonical decomposition is a single code point (not a base + combining sequence). For example:

  • U+212B ANGSTROM SIGN (Å) decomposes to U+00C5 (LATIN CAPITAL LETTER A WITH RING ABOVE)
  • U+2126 OHM SIGN (Ω) decomposes to U+03A9 (GREEK CAPITAL LETTER OMEGA)

These are singletons: each canonically decomposes to a different (but canonically equivalent) single code point. The composition algorithm never maps one lone code point to another, so the originals can never come back. Note that NFD decomposes fully: U+212B first becomes U+00C5, which in turn decomposes to U+0041 U+030A (A + combining ring above); NFC then composes that sequence to U+00C5, never re-creating U+212B. Likewise U+2126 ends up as U+03A9, not U+2126.
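A quick check with unicodedata confirms that both singletons settle on their canonical equivalents after an NFD → NFC round trip:

```python
import unicodedata

# Singletons never survive the NFD -> NFC round trip: composition lands on
# the canonical equivalent, never on the original code point.
for ch in ("\u212B", "\u2126"):  # ANGSTROM SIGN, OHM SIGN
    nfc = unicodedata.normalize("NFC", unicodedata.normalize("NFD", ch))
    print(f"U+{ord(ch):04X} -> " + " ".join(f"U+{ord(c):04X}" for c in nfc))
# U+212B -> U+00C5
# U+2126 -> U+03A9
```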

2. Non-Starter Decompositions

Characters that decompose to a sequence where the first character is itself a combining mark (Canonical Combining Class > 0). These cannot participate in composition because the composition algorithm requires a starter (CCC=0) as the base character.

Example: U+0344 COMBINING GREEK DIALYTIKA TONOS decomposes to U+0308 U+0301 (COMBINING DIAERESIS + COMBINING ACUTE ACCENT). Since U+0308 has CCC=230, the NFD result begins with a combining mark that cannot serve as a base for further composition. (U+0340 COMBINING GRAVE TONE MARK, which decomposes to the single code point U+0300, is technically a singleton rather than a non-starter decomposition.)
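The behavior can be checked directly; U+0344 COMBINING GREEK DIALYTIKA TONOS, whose canonical decomposition U+0308 U+0301 starts with a non-starter, can never be rebuilt by NFC:

```python
import unicodedata

# U+0344 decomposes to two non-starters (U+0308, U+0301, both CCC=230), so
# composition, which needs a starter (CCC=0) on the left, can never rebuild it.
nfd = unicodedata.normalize("NFD", "\u0344")
print(" ".join(f"U+{ord(c):04X}" for c in nfd))  # U+0308 U+0301
assert unicodedata.combining("\u0308") == 230

# Even with a base letter in front, NFC composes a + diaeresis to U+00E4 (ä)
# but never re-creates U+0344 itself.
assert "\u0344" not in unicodedata.normalize("NFC", "a\u0344")
```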

3. The Composition Exclusion Table

Beyond singletons and non-starter decompositions, Unicode maintains an explicit Composition Exclusion Table of characters excluded for stability and historical reasons. This list includes:

  • Post-composition version compatibility: characters added to Unicode after the composition algorithm was frozen, where recomposing them would change the NFC form of existing text (breaking the NFC stability guarantee)
  • Explicitly excluded: Characters that should round-trip through NFD/NFC as their decomposed form

The explicit exclusions are listed in CompositionExclusions.txt (the Composition_Exclusion property); the derived Full_Composition_Exclusion property in DerivedNormalizationProps.txt additionally covers singletons and non-starter decompositions.
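At runtime the explicit list is easy to load: each data line in CompositionExclusions.txt is a single hex code point, optionally followed by a # comment. A minimal parser sketch (the local file path in the usage comment is an assumption; the file itself ships with the Unicode Character Database):

```python
def parse_exclusions(lines):
    """Parse CompositionExclusions.txt-style lines into a set of code points."""
    excluded = set()
    for line in lines:
        data = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if data:
            excluded.add(int(data, 16))
    return excluded

# Usage sketch (path is an assumption):
# with open("CompositionExclusions.txt", encoding="utf-8") as f:
#     EXCLUSIONS = parse_exclusions(f)
print(parse_exclusions(["0958  # DEVANAGARI LETTER QA"]))  # {2392}
```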

Implementing Composition Exclusion

import unicodedata

def is_composition_excluded(cp: int) -> bool:
    """Simplified check: covers singletons and non-starter decompositions.
    The explicit exclusion table still requires CompositionExclusions.txt."""
    decomp = unicodedata.decomposition(chr(cp))
    if not decomp:
        return False  # No decomposition, so not relevant
    parts = decomp.split()
    if parts[0].startswith('<'):
        return False  # Compatibility decomposition: never canonically composed anyway
    if len(parts) == 1:
        return True   # Singleton: canonical decomposition is a single code point
    if unicodedata.combining(chr(int(parts[0], 16))) > 0:
        return True   # Non-starter decomposition: first code point has CCC > 0
    return False  # May still be on the explicit exclusion list

# is_composition_excluded(0x212B)  -> True  (singleton: ANGSTROM SIGN)
# is_composition_excluded(0x0344)  -> True  (non-starter decomposition)

# Verify that U+212B (ANGSTROM SIGN) normalizes to U+00C5, never back to U+212B
angstrom = "\u212B"
nfd = unicodedata.normalize("NFD", angstrom)
nfc_of_nfd = unicodedata.normalize("NFC", nfd)
print(hex(ord(angstrom)))               # 0x212b
print([hex(ord(c)) for c in nfd])       # ['0x41', '0x30a'] (A + combining ring above)
print(hex(ord(nfc_of_nfd)))             # 0xc5, NOT restored to 0x212b

Why This Matters for Implementors

If you are implementing a Unicode normalization algorithm from scratch (for a new language runtime or embedded system), you must:

  1. Obtain the full Composition Exclusion list from Unicode data files
  2. Check this list before performing any canonical composition step
  3. Ensure your NFC implementation produces stable output across Unicode versions

For most developers using a standard library (unicodedata, ICU, Java's Normalizer), composition exclusions are handled automatically. But understanding them explains why unicodedata.normalize("NFC", "\u212B") returns "\u00C5" instead of "\u212B" — the Angstrom sign is a composition exclusion singleton.

Quick Facts

Property Value
Unicode data files CompositionExclusions.txt (Composition_Exclusion) and DerivedNormalizationProps.txt (Full_Composition_Exclusion)
Category 1 Singletons: decompose to a single code point
Category 2 Non-starter decompositions: first code point in decomp has CCC > 0
Category 3 Explicit exclusion table (stability guarantee)
Purpose Ensures NFC is a unique, stable canonical form
Key example U+212B ANGSTROM SIGN → U+00C5 (not recomposed back to U+212B)
Python verification unicodedata.normalize("NFC", unicodedata.normalize("NFD", s))
NFC stability Unicode guarantees NFC of NFC-safe text is stable across versions

Related Terms

Other Terms in Algorithms

Case Folding

Mapping characters to a common case form for case-insensitive comparison. More comprehensive …

Grapheme Cluster Boundary

Rules (UAX#29) for determining where one user-perceived character ends and another begins. …

NFC (Canonical Composition)

Normalization Form C: decompose, then canonically recompose, producing the shortest form. Recommended for storing and exchanging data, and the standard form on the Web.

NFD (Canonical Decomposition)

Normalization Form D: full decomposition without recomposition. Used by the HFS+ file system on macOS. é (U+00E9) → e + ◌́ (U+0065 + U+0301).

NFKC (Compatibility Composition)

Normalization Form KC: compatibility decomposition followed by canonical composition. Unifies visually similar characters (ﬁ → fi, ² → 2, Ⅳ → IV). Used for comparing identifiers.

NFKD (Compatibility Decomposition)

Normalization Form KD: compatibility decomposition without recomposition. The strongest normalization, and the one that discards the most formatting information.

String Comparison

Comparing Unicode strings requires normalization (NFC/NFD) and optionally collation (locale-aware sorting). Binary …

Unicode Text Segmentation

Algorithms for finding boundaries in text: grapheme cluster, word, and sentence boundaries. Essential for cursor movement, text selection, and text processing.

Unicode Bidirectional Algorithm (UBA)

The algorithm that determines the display order of mixed-direction text (e.g. English + Arabic) using characters' bidirectional categories and explicit directional overrides.

Unicode Normalization

The process of converting Unicode text into a standard canonical form. Four forms: NFC (composition), NFD (decomposition), NFKC (compatibility composition), NFKD (compatibility decomposition).