NFKD (Compatibility Decomposition)
Normalization Form KD: compatibility decomposition without recomposition. It is the most aggressive of the normalization forms and discards the most formatting information.
NFKD: The Most Aggressive Decomposition
NFKD (Normalization Form KD — Compatibility Decomposition) is the most expansive of the four normalization forms. It applies both canonical decomposition (like NFD) and compatibility decomposition, breaking everything down to its most primitive components without any recomposition.
Where NFD decomposes é → e + combining acute, NFKD does that and also decomposes compatibility characters such as ligatures, superscripts, and fullwidth variants. Of the four normalization forms, NFKD produces the longest representation of a string.
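Fullwidth variants, common in CJK text, show the difference clearly: canonical-only NFD leaves them alone, while NFKD folds them to their ASCII counterparts. A quick check with the standard library:

```python
import unicodedata

fullwidth = "\uff21\uff22\uff23"  # fullwidth "ＡＢＣ" (U+FF21..U+FF23)
print(unicodedata.normalize("NFD", fullwidth) == fullwidth)  # True: NFD is canonical-only
print(unicodedata.normalize("NFKD", fullwidth))              # ABC: compatibility folded
```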
NFKD vs NFKC: When to Choose Each
Both NFKD and NFKC apply compatibility folding. The difference is the final step: NFKC recomposes canonical sequences back into precomposed characters, while NFKD does not.
```python
import unicodedata

# é as precomposed (U+00E9)
text = "caf\u00e9"
nfkc_result = unicodedata.normalize("NFKC", text)
nfkd_result = unicodedata.normalize("NFKD", text)
print(len(nfkc_result))  # 4 — é recomposed to a single code point
print(len(nfkd_result))  # 5 — é left as e + combining acute

# "fi" ligature (U+FB01)
ligature = "\ufb01nd"
print(unicodedata.normalize("NFKC", ligature))  # "find" — 4 chars
print(unicodedata.normalize("NFKD", ligature))  # "find" — 4 chars (ﬁ decomposes to f + i with no combining marks, so the forms agree)

# Superscript two (U+00B2)
sup = "x\u00b2"
print(unicodedata.normalize("NFKC", sup))  # "x2" — ² folded to the digit 2
print(unicodedata.normalize("NFKD", sup))  # "x2" — same (the digit has nothing further to decompose)
```
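The relationship between the two forms can be verified directly: applying canonical composition (NFC) to NFKD output reproduces NFKC for any input string.

```python
import unicodedata

samples = ["caf\u00e9", "\ufb01nd", "x\u00b2", "na\u00efve"]
for s in samples:
    via_nfkd = unicodedata.normalize("NFC", unicodedata.normalize("NFKD", s))
    direct = unicodedata.normalize("NFKC", s)
    assert via_nfkd == direct  # NFC(NFKD(s)) == NFKC(s)
```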
Practical Uses of NFKD
Diacritic stripping with compatibility folding: NFKD is the standard starting point when you want to remove accents AND fold compatibility characters:
```python
import unicodedata

def aggressive_normalize(text: str) -> str:
    # 1. NFKD: compatibility fold + decompose
    nfkd = unicodedata.normalize("NFKD", text)
    # 2. Drop all combining marks (accents, etc.)
    stripped = "".join(
        c for c in nfkd
        if unicodedata.category(c) != "Mn"
    )
    return stripped

print(aggressive_normalize("fiancée"))  # fiancee
print(aggressive_normalize("résumé"))   # resume
print(aggressive_normalize("naïve²"))   # naive2
```
Database full-text search: Some search systems use NFKD + accent stripping as a pre-processing step to improve recall — a search for "resume" will match "résumé".
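A minimal sketch of that pre-processing step, with `search_key` as a hypothetical helper name (systems such as PostgreSQL's `unaccent` extension perform the equivalent fold server-side):

```python
import unicodedata

def search_key(text: str) -> str:
    # Fold compatibility characters, drop combining marks, then lowercase
    nfkd = unicodedata.normalize("NFKD", text)
    return "".join(c for c in nfkd if unicodedata.category(c) != "Mn").lower()

docs = ["Résumé tips", "resume template", "CAFÉ menu"]
query = "resume"
hits = [d for d in docs if search_key(query) in search_key(d)]
print(hits)  # both the accented and unaccented documents match
```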
Fingerprinting and deduplication: NFKD provides a canonical key for detecting near-duplicate strings that differ only in how they encode the same visual text.
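For deduplication, the NFKD string itself can serve as the key: visually identical texts that differ only in encoding collapse to one value. A small sketch:

```python
import unicodedata

# Visually identical strings that differ in how they are encoded
variants = [
    "o\ufb03ce",   # "office" written with the ffi ligature (U+FB03)
    "office",      # plain ASCII
    "caf\u00e9",   # precomposed é (U+00E9)
    "cafe\u0301",  # decomposed e + combining acute
]
keys = {unicodedata.normalize("NFKD", v) for v in variants}
print(len(keys))  # 2: one key per distinct word, encoding differences collapsed
```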
The Information Loss Warning
Like NFKC, NFKD is a lossy transformation. Once you strip superscripts, ligatures, and combining marks, you cannot recover the original. Only use NFKD for derived index keys or search normalization — never as your storage format.
Quick Facts
| Property | Value |
|---|---|
| Full name | Normalization Form Compatibility Decomposition |
| Algorithm | Compatibility decomposition + canonical decomposition + CCC sort |
| Relation to NFKC | NFKD followed by canonical composition (NFC) yields NFKC |
| String length | Longest of all four forms |
| Python | unicodedata.normalize("NFKD", s) |
| Lossy? | Yes |
| Typical use | Diacritic stripping, aggressive search normalization, deduplication keys |
| Does NOT recompose | Unlike NFKC — all decomposed sequences stay decomposed |
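The "CCC sort" step in the table refers to canonical ordering: combining marks with different Canonical Combining Class values are put into a deterministic order, so equivalent mark sequences compare equal. A quick demonstration:

```python
import unicodedata

# acute (CCC 230) typed before cedilla (CCC 202) on the same base letter
s = "c\u0301\u0327"
n = unicodedata.normalize("NFKD", s)
print([f"U+{ord(c):04X}" for c in n])   # cedilla now precedes acute
print(unicodedata.combining("\u0327"))  # 202
print(unicodedata.combining("\u0301"))  # 230
```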