Decomposition
The process of mapping a character to a sequence of its constituent parts. Canonical decomposition preserves meaning (é → e + ◌́); compatibility decomposition may change it (ﬁ → fi).
What Is a Decomposition Mapping?
A decomposition mapping tells you how a Unicode character can be broken down into a sequence of simpler characters. There are two kinds:
- Canonical decomposition: the character is identical in meaning and rendering to its decomposed sequence. For example, U+00E9 LATIN SMALL LETTER E WITH ACUTE (é) canonically decomposes to U+0065 LATIN SMALL LETTER E + U+0301 COMBINING ACUTE ACCENT.
- Compatibility decomposition: the character is only compatible (semantically similar, possibly different appearance) with its decomposed sequence. For example, the ligature U+FB01 fi (fi) compatibility-decomposes to U+0066 f + U+0069 i, and U+00B2 ² (superscript two) decomposes to U+0032 2.
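The difference is easy to verify in Python with the standard-library `unicodedata` module: canonically equivalent forms round-trip through normalization, while a compatibility decomposition produces genuinely different text.

```python
import unicodedata

# Canonical equivalence: composed and decomposed é are interchangeable.
composed = "\u00E9"     # é as a single code point
decomposed = "e\u0301"  # e + COMBINING ACUTE ACCENT
assert composed != decomposed  # different code point sequences...
assert unicodedata.normalize("NFC", decomposed) == composed   # ...same character
assert unicodedata.normalize("NFD", composed) == decomposed

# Compatibility decomposition changes the text: the ligature becomes two letters.
assert unicodedata.normalize("NFKD", "\uFB01") == "fi"
# NFD leaves the ligature alone, because it has no canonical decomposition.
assert unicodedata.normalize("NFD", "\uFB01") == "\uFB01"
```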
Normalization Forms
The four Unicode Normalization Forms are defined in terms of decomposition and canonical composition:
| Form | Decomposition | Composition |
|---|---|---|
| NFD | Canonical | No |
| NFC | Canonical | Yes (canonical) |
| NFKD | Compatibility | No |
| NFKC | Compatibility | Yes (canonical) |
The following script applies NFD, NFKD, and NFC to a few representative characters and shows the raw decomposition data for each:

```python
import unicodedata

samples = [
    ("\u00E9", "é e+acute"),        # canonical
    ("\u00C5", "Å A+ring"),         # canonical
    ("\uFB01", "ﬁ fi ligature"),    # compatibility
    ("\u00B2", "² superscript 2"),  # compatibility
    ("\u2126", "Ω OHM SIGN"),       # canonical → U+03A9 GREEK CAPITAL OMEGA
]

for char, label in samples:
    raw = unicodedata.decomposition(char)
    nfd = unicodedata.normalize("NFD", char)
    nfkd = unicodedata.normalize("NFKD", char)
    nfc = unicodedata.normalize("NFC", nfd)
    print(f"{label}")
    print(f"  decomposition() raw : {raw!r}")
    print(f"  NFD  : {[f'U+{ord(c):04X}' for c in nfd]}")
    print(f"  NFKD : {[f'U+{ord(c):04X}' for c in nfkd]}")
    print(f"  NFC  : {[f'U+{ord(c):04X}' for c in nfc]}")
```
The unicodedata.decomposition() function returns the raw decomposition field from UnicodeData.txt, or an empty string if the character has no decomposition. A leading tag in angle brackets such as <compat>, <font>, <circle>, or <wide> indicates a compatibility decomposition; no tag means the decomposition is canonical.
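That raw string is easy to parse. The helper below (a sketch; the function name is our own, not part of `unicodedata`) classifies a character as having no decomposition, a canonical one, or a compatibility one identified by its tag:

```python
import unicodedata

def decomposition_type(char: str) -> str:
    """Return "none", "canonical", or the compatibility tag (e.g. "compat")."""
    raw = unicodedata.decomposition(char)
    if not raw:
        return "none"           # no decomposition at all
    if raw.startswith("<"):
        # e.g. "<compat> 0066 0069" -> "compat"
        return raw[1 : raw.index(">")]
    return "canonical"          # untagged: canonical decomposition

print(decomposition_type("a"))        # "none"
print(decomposition_type("\u00E9"))   # "canonical"
print(decomposition_type("\uFB01"))   # "compat"
print(decomposition_type("\u00B2"))   # "super"
```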
Practical Implications
- Search and indexing: NFKC normalization lets you match ﬁle (with the U+FB01 ligature) against file, or ² against 2. Many search engines apply NFKC before indexing.
- Security: decomposition can reveal confusable characters. U+2126 OHM SIGN (Ω) and U+03A9 GREEK CAPITAL LETTER OMEGA (Ω) look identical and are canonically equivalent, so an application that compares usernames should normalize first.
- Identifiers: Python 3 normalizes identifiers to NFKC (PEP 3131).
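A minimal sketch of that username comparison, assuming NFKC alone is the chosen policy (the `canonical_id` helper is hypothetical; real systems often add case folding and other checks on top):

```python
import unicodedata

def canonical_id(name: str) -> str:
    # NFKC folds canonical variants (OHM SIGN vs. Greek omega) and
    # compatibility variants (ligatures, superscripts, fullwidth forms)
    # into a single representative form.
    return unicodedata.normalize("NFKC", name)

# OHM SIGN vs. GREEK CAPITAL LETTER OMEGA: visually identical usernames.
assert canonical_id("\u2126hm") == canonical_id("\u03A9hm")
# Fullwidth Latin letters collapse to ASCII.
assert canonical_id("\uFF41\uFF42\uFF43") == "abc"
# Ligature vs. plain letters.
assert canonical_id("\uFB01le") == "file"
```

Comparing the normalized forms rather than the raw strings closes the most obvious spoofing gaps, though it does not address cross-script confusables such as Latin "a" vs. Cyrillic "а".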
Quick Facts
| Property | Value |
|---|---|
| Unicode property name | Decomposition_Mapping |
| Short alias | dm |
| Types | Canonical, Compatibility (16 tags: <compat>, <font>, <circle>, etc.) |
| Python function | unicodedata.decomposition(char) → raw string |
| Normalization function | unicodedata.normalize(form, string) |
| Forms | NFD, NFC, NFKD, NFKC |
| Spec reference | Unicode Standard Annex #15 (UAX #15) |