NFD (Canonical Decomposition)
Normalization Form D: full decomposition without recomposition; used by the macOS HFS+ file system. é (U+00E9) → e + ◌́ (U+0065 + U+0301).
NFD: Decomposing Characters to Their Components
NFD (Normalization Form D — Canonical Decomposition) is the form where every composed character is broken down into its constituent parts: a base character followed by one or more combining marks in canonical order. Unlike NFC which recomposes after decomposing, NFD leaves everything decomposed.
For é (U+00E9, LATIN SMALL LETTER E WITH ACUTE), NFD yields the two-code-point sequence e (U+0065) + ́ (U+0301, COMBINING ACUTE ACCENT). For a string like "über", NFD yields u + combining diaeresis + b + e + r — five code points for a four-character word.
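Those code-point counts are easy to verify with Python's standard `unicodedata` module (a quick illustrative check, not part of the original text):

```python
import unicodedata

# NFD expands each precomposed character into base + combining mark(s)
e_acute = "\u00e9"                        # é as a single code point
nfd = unicodedata.normalize("NFD", e_acute)
print(len(e_acute), len(nfd))             # 1 2
print([f"U+{ord(c):04X}" for c in nfd])   # ['U+0065', 'U+0301']

# "über" (4 user-perceived characters) becomes 5 code points under NFD
print(len(unicodedata.normalize("NFD", "\u00fcber")))  # 5
```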
When NFD is Used
NFD is the internal form used by Apple's HFS+ file system: when macOS writes a filename to an HFS+ volume, it stores it in NFD. (APFS, its successor, preserves filenames as given but compares them in a normalization-insensitive way.) This is why filenames created on a Mac can cause issues when transferred to Linux systems: the file café is stored as 5 code points (NFD) on HFS+, while most Linux applications expect 4 code points (NFC). Python's os module reports filenames exactly as the file system stores them, so on HFS+ you get the NFD form unless you normalize it yourself.
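A minimal sketch of defending against this when ingesting Mac-originated filenames (the helper name `to_nfc` is hypothetical, not a standard API):

```python
import unicodedata

def to_nfc(name: str) -> str:
    """Normalize a possibly-decomposed (NFD) filename to NFC."""
    return unicodedata.normalize("NFC", name)

# A filename as HFS+ would store it: 5 code points, decomposed
hfs_name = "cafe\u0301"       # c, a, f, e + COMBINING ACUTE ACCENT
print(len(hfs_name))          # 5
print(len(to_nfc(hfs_name)))  # 4 -- é recomposed to U+00E9
```

Normalizing at the boundary (right after `os.listdir` or equivalent) keeps the rest of the program working with a single canonical form.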
NFD is also useful when you need to strip diacritics:
```python
import unicodedata

def strip_accents(text: str) -> str:
    # Decompose to base + combining marks, then remove all combining marks
    nfd = unicodedata.normalize("NFD", text)
    return "".join(
        c for c in nfd
        if unicodedata.category(c) != "Mn"  # Mn = Mark, Nonspacing
    )

print(strip_accents("naïve"))   # naive
print(strip_accents("résumé"))  # resume
print(strip_accents("über"))    # uber
```
Combining Mark Order
A subtle but important aspect of NFD: when multiple combining marks follow a base character, Unicode specifies their order via the Canonical Combining Class (CCC). Marks with lower CCC values come first (base characters and other non-reordered characters have CCC=0). The acute accent has CCC=230; the cedilla has CCC=202. NFD ensures combining marks are always in this canonical order, which is necessary for correct canonical equivalence testing.
```python
import unicodedata

# Check the canonical combining class of a character
print(unicodedata.combining("\u0301"))  # 230 (COMBINING ACUTE ACCENT)
print(unicodedata.combining("\u0327"))  # 202 (COMBINING CEDILLA)
print(unicodedata.combining("a"))       # 0 (not a combining mark)
```
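The reordering itself can be observed directly: two strings that attach the same marks in different orders are distinct code-point sequences, but NFD sorts the marks by CCC and makes them identical (a small demonstration, assuming only the standard `unicodedata` module):

```python
import unicodedata

# Same base and marks, attached in different orders
a = "e\u0301\u0327"  # e + acute (CCC 230) + cedilla (CCC 202)
b = "e\u0327\u0301"  # e + cedilla (CCC 202) + acute (CCC 230)
print(a == b)        # False: different code-point sequences

# NFD sorts marks by CCC (lower first), so both normalize identically
print(unicodedata.normalize("NFD", a) == unicodedata.normalize("NFD", b))  # True
```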
Quick Facts
| Property | Value |
|---|---|
| Full name | Normalization Form Canonical Decomposition |
| Algorithm | Recursive canonical decomposition, then CCC sorting |
| macOS HFS+ | Filenames stored in NFD |
| Typical use | Diacritic stripping, internal processing, canonical comparison |
| Python | unicodedata.normalize("NFD", s) |
| Handles compatibility chars? | No |
| Relation to NFC | NFD then compose = NFC |
| String length | Equal or longer than NFC (decomposed forms use more code points) |
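The last two table rows can be checked in a few lines (an illustrative sketch, again using only `unicodedata`):

```python
import unicodedata

s = "d\u00e9j\u00e0 vu"  # "déjà vu" with precomposed é and à
nfd = unicodedata.normalize("NFD", s)
nfc = unicodedata.normalize("NFC", s)

# NFD is always equal to or longer than NFC
print(len(nfd) >= len(nfc))  # True

# "NFD then compose = NFC": composing the NFD result yields NFC
print(unicodedata.normalize("NFC", nfd) == nfc)  # True
```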
Related Terms
More in Algorithms
Mapping characters to a common case form for case-insensitive comparison. More comprehensive …
Rules (UAX#29) for determining where one user-perceived character ends and another begins. …
Normalization Form C: decompose, then canonically recompose, producing the shortest form; recommended for data storage and exchange, and the standard form for the Web.
Normalization Form KC: compatibility decomposition followed by canonical composition; merges visually similar characters (ﬁ→fi, ²→2, Ⅳ→IV); used for identifier comparison.
Normalization Form KD: compatibility decomposition without recomposition; the most aggressive normalization form, losing the most formatting information.
Comparing Unicode strings requires normalization (NFC/NFD) and optionally collation (locale-aware sorting). Binary …
The algorithm that determines the display order of mixed-direction text (e.g., English + Arabic) using characters' bidirectional categories and explicit directional overrides.
Rules that determine where text may break to the next line, based on character properties, CJK word boundaries, and break opportunities.
The standard algorithm for comparing and sorting Unicode strings via multi-level comparison (base characters → diacritics → case → tiebreakers), with support for locale customization.
Algorithms for finding boundaries of all kinds in text: grapheme cluster, word, and sentence boundaries; essential for cursor movement, text selection, and text processing.