Canonical Equivalence
Two character sequences that are semantically identical and should be treated as equivalent, e.g. é (U+00E9) ≡ e + ◌́ (U+0065 + U+0301).
What Is Canonical Equivalence?
Two Unicode strings are canonically equivalent if they represent the same abstract character sequence and should be treated as identical in all Unicode-conforming operations. They look the same, are pronounced the same, and have the same semantic value—the only difference is how the code points are arranged.
The most common example of canonical equivalence is a precomposed character versus a base letter followed by a combining diacritic:
- U+00F1 LATIN SMALL LETTER N WITH TILDE (ñ) — a single code point
- U+006E LATIN SMALL LETTER N + U+0303 COMBINING TILDE — two code points
These two sequences are canonically equivalent. They must render identically and compare as equal after normalization.
Canonical Normalization Forms
Unicode defines two canonical normalization forms:
| Form | Description |
|---|---|
| NFD (Canonical Decomposition) | Break all precomposed characters into base + combining marks; apply canonical ordering |
| NFC (Canonical Composition) | Apply NFD, then recompose into precomposed characters where possible |
```python
import unicodedata

# Two ways to write Spanish "ñ"
precomposed = "\u00F1"       # ñ as a single code point
decomposed = "\u006E\u0303"  # n + combining tilde

# They look the same:
print(precomposed, decomposed)
# ñ ñ

# But they are NOT equal as raw Python strings:
print(precomposed == decomposed)
# False
print(len(precomposed), len(decomposed))
# 1 2

# After NFC normalization they are equal:
nfc_pre = unicodedata.normalize("NFC", precomposed)
nfc_dec = unicodedata.normalize("NFC", decomposed)
print(nfc_pre == nfc_dec)
# True

# After NFD normalization they are also equal:
nfd_pre = unicodedata.normalize("NFD", precomposed)
nfd_dec = unicodedata.normalize("NFD", decomposed)
print(nfd_pre == nfd_dec)
# True
print(len(nfd_pre), len(nfd_dec))
# 2 2 (both are now decomposed)
```
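Canonical ordering, the second step of NFD in the table above, also matters when several combining marks attach to one base character. A small sketch using the classic q-with-two-dots example from UAX #15:

```python
import unicodedata

# Two orderings of the same combining marks on "q":
a = "q\u0307\u0323"  # q + combining dot above (ccc 230) + combining dot below (ccc 220)
b = "q\u0323\u0307"  # q + combining dot below + combining dot above

print(a == b)
# False as raw strings

# NFD sorts combining marks by canonical combining class (ccc),
# so both sequences end up as q + dot below + dot above:
nfd_a = unicodedata.normalize("NFD", a)
nfd_b = unicodedata.normalize("NFD", b)
print(nfd_a == nfd_b)
# True
```

Reordering only applies between marks with different nonzero combining classes; marks with the same class keep their original order, since their relative position is semantically significant.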
Why This Matters
String comparison: Any application that compares user input against stored data must normalize both sides to the same form. Passwords, usernames, and search queries can silently differ due to canonical equivalence. The Python unicodedata.normalize("NFC", s) call is the standard fix.
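The fix described above can be wrapped in a small helper that normalizes both sides before comparing; `equivalent` is a hypothetical name for illustration, not a standard API:

```python
import unicodedata

def equivalent(a: str, b: str) -> bool:
    """Return True if a and b are canonically equivalent (hypothetical helper)."""
    return unicodedata.normalize("NFC", a) == unicodedata.normalize("NFC", b)

print(equivalent("\u00F1", "n\u0303"))  # True: both are ñ
print(equivalent("cafe", "caf\u00E9"))  # False: genuinely different strings
```

Either NFC or NFD works here, as long as both sides use the same form; NFC is the usual choice because it is also the preferred form for storage and interchange.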
Database storage: Most databases store text byte-for-byte without normalizing it. PostgreSQL, for example, offers an explicit normalize() SQL function (since version 13) but does not normalize on insert, and MySQL's comparison behavior depends on the collation. Mixing NFC and NFD strings in the same column can cause subtle lookup failures.
File systems: macOS HFS+ normalizes filenames to a variant of NFD; Windows NTFS and Linux ext4 store names as given, without normalizing. The same file name ñ.txt can therefore be stored as different byte sequences on different systems, causing sync tools to create duplicates.
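A sync tool can guard against this by grouping names under a single normalization form. A minimal sketch, with `find_equivalent_names` as a hypothetical helper operating on an in-memory list of names:

```python
import unicodedata
from collections import defaultdict

def find_equivalent_names(names):
    """Group filenames that are canonically equivalent (differ only in normalization)."""
    groups = defaultdict(list)
    for name in names:
        groups[unicodedata.normalize("NFC", name)].append(name)
    return {key: group for key, group in groups.items() if len(group) > 1}

# "ñ.txt" as written by an NFD-normalizing system vs. an NFC one:
dupes = find_equivalent_names(["\u00F1.txt", "n\u0303.txt", "a.txt"])
print(dupes)  # one group containing both spellings of ñ.txt
```

In a real tool the same idea applies to directory listings: treat names as duplicates of each other when their normalized forms collide, rather than comparing raw bytes.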
Quick Facts
| Property | Value |
|---|---|
| Concept | Canonical equivalence |
| Normalization forms | NFD, NFC |
| Python function | unicodedata.normalize("NFC", s) / "NFD" |
| Common pitfall | Comparing strings without normalizing first |
| Opposite concept | Compatibility equivalence (looser, NFKD/NFKC) |
| Spec reference | Unicode Standard Annex #15 (UAX #15) |