Composition Exclusion
Characters excluded from canonical composition (NFC) to prevent non-starter decomposition and ensure algorithmic stability. Listed in CompositionExclusions.txt.
Why Some Decomposed Characters Are Never Recomposed
NFC (Canonical Composition) is defined as: first apply NFD (full canonical decomposition), then apply canonical composition. You might expect that NFD then compose would always restore the original precomposed characters. But it does not — some characters that decompose in NFD are explicitly excluded from recomposition in NFC. These are called composition exclusions.
The exclusion mechanism is what makes canonical composition an algorithm rather than a simple inversion of the decomposition mapping. It ensures that NFC is a well-defined unique form: every string has exactly one NFC representation, and that NFC is stable across Unicode versions.
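This uniqueness is easy to observe with Python's unicodedata: several canonically equivalent spellings of the same letter all collapse to one NFC form.

```python
import unicodedata

# Three canonically equivalent spellings of the letter Å:
precomposed = "\u00C5"   # LATIN CAPITAL LETTER A WITH RING ABOVE
decomposed  = "A\u030A"  # A + COMBINING RING ABOVE
angstrom    = "\u212B"   # ANGSTROM SIGN (a composition-exclusion singleton)

# All three collapse to the same unique NFC representation:
forms = {unicodedata.normalize("NFC", s) for s in (precomposed, decomposed, angstrom)}
print(forms)  # {'Å'} — exactly one form, U+00C5
```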
Three Categories of Composition Exclusions
Unicode defines three types of characters that are excluded from canonical composition:
1. Singletons
Characters whose canonical decomposition is a single code point (not a base + combining sequence). For example:
- U+212B ANGSTROM SIGN (Å) decomposes to U+00C5 (LATIN CAPITAL LETTER A WITH RING ABOVE)
- U+2126 OHM SIGN (Ω) decomposes to U+03A9 (GREEK CAPITAL LETTER OMEGA)
These are singletons: their canonical decomposition is a different (but canonically equivalent) single code point. The composition algorithm only combines a starter with a following combining mark — it never maps one lone code point to another — so singletons can never be recomposed. Note that NFD carries the decomposition all the way through: U+212B first maps to U+00C5, which decomposes further to U+0041 U+030A; NFC then recomposes that pair to U+00C5 (not U+212B). Likewise U+2126 maps to U+03A9, which NFC leaves unchanged (not re-creating U+2126).
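The Ohm sign makes a compact demonstration, since U+03A9 has no further decomposition of its own:

```python
import unicodedata

ohm = "\u2126"  # OHM SIGN, a singleton composition exclusion
nfd = unicodedata.normalize("NFD", ohm)
nfc = unicodedata.normalize("NFC", ohm)

print(hex(ord(nfd)))  # 0x3a9 — GREEK CAPITAL LETTER OMEGA
print(hex(ord(nfc)))  # 0x3a9 — NFC does not re-create U+2126
```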
2. Non-Starter Decompositions
Characters that decompose to a sequence where the first character is itself a combining mark (Canonical Combining Class > 0). These cannot participate in composition because the composition algorithm requires a starter (CCC=0) as the base character.
Example: U+0344 COMBINING GREEK DIALYTIKA TONOS decomposes to U+0308 U+0301 (COMBINING DIAERESIS + COMBINING ACUTE ACCENT). Since U+0308 has CCC=230, the decomposition begins with a non-starter, so the composition algorithm can never rebuild it and NFC leaves the two-mark sequence as is. (U+0340 COMBINING GRAVE TONE MARK, which decomposes to the single mark U+0300 COMBINING GRAVE ACCENT, is classified as a singleton rather than a non-starter decomposition.)
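A quick check with unicodedata shows the behavior: U+0344 COMBINING GREEK DIALYTIKA TONOS decomposes to a sequence of two combining marks, and NFC keeps it decomposed.

```python
import unicodedata

s = "\u0344"  # COMBINING GREEK DIALYTIKA TONOS — non-starter decomposition
nfd = unicodedata.normalize("NFD", s)
nfc = unicodedata.normalize("NFC", s)

print([hex(ord(c)) for c in nfd])   # ['0x308', '0x301']
print([hex(ord(c)) for c in nfc])   # ['0x308', '0x301'] — stays decomposed
print(unicodedata.combining("\u0308"))  # 230 — the first code point is a non-starter
```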
3. The Composition Exclusion Table
Beyond singletons and non-starter decompositions, Unicode maintains an explicit Composition Exclusion Table of characters excluded for stability and historical reasons. This list includes:
- Post-composition version compatibility: Characters added to Unicode after the composition algorithm was defined, where recomposing them would change the NFC form of existing text (breaking NFC stability guarantee)
- Script-specific exclusions: characters (such as the Devanagari letters with nukta, U+0958..U+095F, and precomposed Hebrew presentation forms) whose user communities prefer the decomposed form, so they round-trip through NFD/NFC as their decomposed sequence
The explicit list is maintained in CompositionExclusions.txt (the Composition_Exclusion property); the derived Full_Composition_Exclusion property in DerivedNormalizationProps.txt additionally folds in the singletons and non-starter decompositions.
Implementing Composition Exclusion
```python
import unicodedata

def is_composition_excluded(cp: int) -> bool:
    char = chr(cp)
    decomp = unicodedata.decomposition(char)
    if not decomp:
        return False  # No decomposition at all — not relevant
    parts = decomp.split()
    if parts[0].startswith('<'):
        # Compatibility decomposition (e.g. '<compat>', '<font>') — these are
        # not canonical decompositions, so composition exclusion does not apply.
        return False
    if len(parts) == 1:
        return True  # Singleton: canonical decomposition to a single code point
    if unicodedata.combining(chr(int(parts[0], 16))) > 0:
        return True  # Non-starter decomposition: first code point has CCC > 0
    return False  # Simplified check — explicit exclusions need the exclusion table
```
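Since Python's unicodedata does not expose the Composition_Exclusion property directly, a practical approximation of the derived Full_Composition_Exclusion property (covering all three categories, including the explicit table) is the round-trip test: a character with a canonical decomposition that NFC does not restore must be excluded. A sketch under that assumption:

```python
import unicodedata

def is_full_composition_exclusion(cp: int) -> bool:
    """Approximate the derived Full_Composition_Exclusion property.

    A code point is excluded if it has a canonical decomposition
    but NFC does not compose back to it.
    """
    char = chr(cp)
    decomp = unicodedata.decomposition(char)
    if not decomp or decomp.startswith("<"):
        return False  # no decomposition, or a compatibility (not canonical) one
    return unicodedata.normalize("NFC", char) != char

print(is_full_composition_exclusion(0x212B))  # True  — ANGSTROM SIGN (singleton)
print(is_full_composition_exclusion(0x0344))  # True  — non-starter decomposition
print(is_full_composition_exclusion(0x0958))  # True  — DEVANAGARI QA (explicit exclusion)
print(is_full_composition_exclusion(0x00C5))  # False — Å recomposes to itself
```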
```python
# Verify that U+212B (ANGSTROM SIGN) is never restored by NFC
angstrom = "\u212B"
nfd = unicodedata.normalize("NFD", angstrom)  # fully decomposed, not just U+00C5
nfc = unicodedata.normalize("NFC", nfd)

print(hex(ord(angstrom)))          # 0x212b
print([hex(ord(c)) for c in nfd])  # ['0x41', '0x30a'] — A + COMBINING RING ABOVE
print(hex(ord(nfc)))               # 0xc5 — recomposed to Å, NOT restored to 0x212b
```
Why This Matters for Implementors
If you are implementing a Unicode normalization algorithm from scratch (for a new language runtime or embedded system), you must:
- Obtain the full Composition Exclusion list from Unicode data files
- Check this list before performing any canonical composition step
- Ensure your NFC implementation produces stable output across Unicode versions
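As an illustration of the second point, here is a minimal sketch (with hypothetical, hand-filled table contents) of how a from-scratch implementation might consult the exclusion set when composing a starter/mark pair:

```python
from typing import Optional

# Pair table derived from UnicodeData.txt canonical decompositions
# (tiny illustrative subset, not the real table):
COMPOSE = {
    (0x0041, 0x030A): 0x00C5,  # A + COMBINING RING ABOVE -> Å
    (0x0915, 0x093C): 0x0958,  # DEVANAGARI KA + NUKTA -> QA (but QA is excluded!)
}

# Full composition exclusions (subset of DerivedNormalizationProps.txt):
FULL_EXCLUSIONS = {0x0958}

def try_compose(starter: int, mark: int) -> Optional[int]:
    """Return the primary composite, unless it is a composition exclusion."""
    composite = COMPOSE.get((starter, mark))
    if composite is None or composite in FULL_EXCLUSIONS:
        return None  # leave the pair decomposed
    return composite

print(hex(try_compose(0x0041, 0x030A)))  # 0xc5 — composed
print(try_compose(0x0915, 0x093C))       # None — exclusion keeps it decomposed
```

In practice, real implementations usually strip excluded composites out of the pair table at build time rather than checking a separate set at runtime; the effect is the same.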
For most developers using a standard library (unicodedata, ICU, Java's Normalizer), composition exclusions are handled automatically. But understanding them explains why unicodedata.normalize("NFC", "\u212B") returns "\u00C5" instead of "\u212B" — the Angstrom sign is a composition exclusion singleton.
Quick Facts
| Property | Value |
|---|---|
| Unicode data files | CompositionExclusions.txt (explicit list); DerivedNormalizationProps.txt — Full_Composition_Exclusion (derived) |
| Category 1 | Singletons: decompose to a single code point |
| Category 2 | Non-starter decompositions: first code point in decomp has CCC > 0 |
| Category 3 | Explicit exclusion table (stability guarantee) |
| Purpose | Ensures NFC is a unique, stable canonical form |
| Key example | U+212B ANGSTROM SIGN → U+00C5 (not recomposed back to U+212B) |
| Python verification | unicodedata.normalize("NFC", unicodedata.normalize("NFD", s)) |
| NFC stability | Unicode guarantees NFC of NFC-safe text is stable across versions |