Unicode Normalization
The process of converting Unicode text into a standard canonical form. There are four forms: NFC (composition), NFD (decomposition), NFKC (compatibility composition), and NFKD (compatibility decomposition).
Why the Same Text Can Look Identical But Be Different
Consider the letter é. You can encode it two ways: as a single precomposed character é (U+00E9, LATIN SMALL LETTER E WITH ACUTE) or as the sequence e (U+0065) followed by the combining acute accent ´ (U+0301). Both render identically, both are valid Unicode — but they are different byte sequences and will not compare as equal with a naive string comparison.
This is the core problem Unicode Normalization solves. Without normalization, a word typed on macOS (which historically prefers decomposed forms) can fail to match the identical word stored on a Linux system (which typically stores composed forms). Searching, sorting, deduplication, and hashing all break when you have silent encoding differences.
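To make the mismatch concrete, here is a minimal Python sketch of the two encodings of café (the variable names are illustrative):

```python
import unicodedata

composed = "caf\u00e9"    # precomposed é (U+00E9)
decomposed = "cafe\u0301" # e (U+0065) + combining acute (U+0301)

# They render identically but are different code point sequences
assert composed != decomposed

# Normalizing both to the same form makes them compare equal
assert unicodedata.normalize("NFC", composed) == unicodedata.normalize("NFC", decomposed)
```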
The Four Normal Forms
Unicode defines four normalization forms, each serving different needs:
| Form | Process | What it does |
|---|---|---|
| NFC | Canonical Decomposition + Canonical Composition | Decompose then recompose — most compact canonical form |
| NFD | Canonical Decomposition | Fully decompose to base + combining marks |
| NFKC | Compatibility Decomposition + Canonical Composition | Like NFC but also folds compatibility variants |
| NFKD | Compatibility Decomposition | The most aggressive decomposition |
The "K" variants additionally fold compatibility characters — characters that are semantically equivalent but visually or historically distinct, such as ﬁ (LATIN SMALL LIGATURE FI, U+FB01) → fi, or ² (superscript 2, U+00B2) → 2.
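A quick sketch of this folding, using the two examples above — note that plain NFC leaves compatibility characters untouched, while NFKC folds them:

```python
import unicodedata

# NFC preserves compatibility characters; NFKC folds them
print(unicodedata.normalize("NFC", "\ufb01"))   # ﬁ (ligature unchanged)
print(unicodedata.normalize("NFKC", "\ufb01"))  # fi (two letters)
print(unicodedata.normalize("NFKC", "\u00b2"))  # 2
```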
Using Normalization in Python
Python's unicodedata module provides normalization through a single function:
```python
import unicodedata

text = "caf\u00e9"  # café with precomposed é (NFC)
nfd = unicodedata.normalize("NFD", text)

print(len(text))  # 4
print(len(nfd))   # 5 (e + combining acute)

# Round trip: recomposing the decomposed string restores the original
assert unicodedata.normalize("NFC", nfd) == text

# Checking which form a string is already in (Python 3.8+)
print(unicodedata.is_normalized("NFC", text))  # True
print(unicodedata.is_normalized("NFD", text))  # False
```
A safe comparison pattern for user-facing text:
```python
def normalize_for_comparison(s: str) -> str:
    return unicodedata.normalize("NFC", s.casefold())
```
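A self-contained sketch of that pattern in use (redefining the helper so the example runs on its own):

```python
import unicodedata

def normalize_for_comparison(s: str) -> str:
    return unicodedata.normalize("NFC", s.casefold())

# Different case AND different normalization form still match
assert normalize_for_comparison("Caf\u00e9") == normalize_for_comparison("cafe\u0301")
```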
Quick Facts
| Property | Value |
|---|---|
| Unicode standard | The Unicode Standard, Section 3.11; UAX #15 (Unicode Normalization Forms) |
| Python module | unicodedata.normalize(form, string) |
| Valid form names | "NFC", "NFD", "NFKC", "NFKD" |
| Web standard | W3C recommends NFC for all web content |
| macOS file system | HFS+ stores filenames in a variant of NFD |
| Idempotency | Applying normalization twice gives the same result |
| Related concept | Canonical equivalence, compatibility equivalence |
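The idempotency property from the table can be checked directly — normalizing an already-normalized string is a no-op:

```python
import unicodedata

s = "cafe\u0301"  # decomposed café
once = unicodedata.normalize("NFC", s)
twice = unicodedata.normalize("NFC", once)

# Applying normalization twice gives the same result as applying it once
assert once == twice
```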
Related Terms
More in Algorithms
Mapping characters to a common case form for case-insensitive comparison. More comprehensive …
Rules (UAX#29) for determining where one user-perceived character ends and another begins. …
Normalization Form C: canonical decomposition followed by canonical composition, producing the shortest form; recommended for data storage and interchange and the standard form for the Web.
Normalization Form D: full canonical decomposition without recomposition; used by the macOS HFS+ file system. é (U+00E9) → e + ◌́ (U+0065 + U+0301).
Normalization Form KC: compatibility decomposition followed by canonical composition, folding visually similar characters (ﬁ→fi, ²→2, Ⅳ→IV); used for identifier comparison.
Normalization Form KD: compatibility decomposition without recomposition; the most aggressive normalization, and the one that discards the most formatting information.
Comparing Unicode strings requires normalization (NFC/NFD) and optionally collation (locale-aware sorting). Binary …
An algorithm that determines the display order of mixed-direction text (e.g. English + Arabic) using characters' bidirectional categories and explicit direction overrides.
Rules that determine where text may break to the next line, based on character properties, CJK word boundaries, and line-break opportunities.
The standard algorithm for comparing and sorting Unicode strings via multi-level comparison (base characters → diacritics → case → tiebreakers), with support for locale-specific customization.