
NFKD (Compatibility Decomposition)

Normalization Form KD: compatibility decomposition without recomposition. The most aggressive of the normalization forms, and the one that loses the most formatting information.


NFKD: The Most Aggressive Decomposition

NFKD (Normalization Form KD — Compatibility Decomposition) is the most expansive of the four normalization forms. It applies both canonical decomposition (like NFD) and compatibility decomposition, breaking everything down to its most primitive components without any recomposition.

Where NFD decomposes é into e + combining acute, NFKD does that and additionally decomposes compatibility characters such as ligatures, superscripts, and fullwidth variants. The result is the longest representation of a string among the four normalization forms.
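Listing the code points before and after NFKD makes the expansion visible. A small illustration (the sample characters here, the ﬁ ligature, a fullwidth letter, and the long s with dot above, are chosen for this sketch):

```python
import unicodedata

def codepoints(s: str) -> list[str]:
    # Name each code point so the decomposition is easy to read
    return [f"U+{ord(c):04X} {unicodedata.name(c)}" for c in s]

# U+FB01 LATIN SMALL LIGATURE FI -> "f" + "i"
print(codepoints(unicodedata.normalize("NFKD", "\ufb01")))

# U+FF21 FULLWIDTH LATIN CAPITAL LETTER A -> "A"
print(codepoints(unicodedata.normalize("NFKD", "\uff21")))

# U+1E9B LATIN SMALL LETTER LONG S WITH DOT ABOVE
# canonically decomposes to long s + combining dot, then the
# long s compatibility-folds to plain "s"
print(codepoints(unicodedata.normalize("NFKD", "\u1e9b")))
```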

NFKD vs NFKC: When to Choose Each

Both NFKD and NFKC apply compatibility folding. The difference is the final step: NFKC recomposes canonical sequences back into precomposed characters, while NFKD does not.

import unicodedata

# é as precomposed (U+00E9)
text = "caf\u00e9"

nfkc_result = unicodedata.normalize("NFKC", text)
nfkd_result = unicodedata.normalize("NFKD", text)

print(len(nfkc_result))  # 4 — é recomposed to single code point
print(len(nfkd_result))  # 5 — é left as e + combining acute

# ﬁ ligature (U+FB01)
ligature = "\ufb01nd"
print(unicodedata.normalize("NFKC", ligature))   # "find" — 4 chars
print(unicodedata.normalize("NFKD", ligature))   # "find" — 4 chars (ﬁ expands to f + i; no combining marks involved)

# Superscript
sup = "x\u00b2"
print(unicodedata.normalize("NFKC", sup))  # "x2" — ² folds to 2
print(unicodedata.normalize("NFKD", sup))  # "x2" — same (nothing left to recompose)

Practical Uses of NFKD

Diacritic stripping with compatibility folding: NFKD is the standard starting point when you want to remove accents AND fold compatibility characters:

import unicodedata

def aggressive_normalize(text: str) -> str:
    # 1. NFKD: compatibility fold + decompose
    nfkd = unicodedata.normalize("NFKD", text)
    # 2. Drop all combining marks (accents, etc.)
    stripped = "".join(
        c for c in nfkd
        if unicodedata.category(c) != "Mn"
    )
    return stripped

print(aggressive_normalize("fiancée"))   # fiancee
print(aggressive_normalize("résumé"))   # resume
print(aggressive_normalize("naïve²"))   # naive2

Database full-text search: Some search systems use NFKD + accent stripping as a pre-processing step to improve recall — a search for "resume" will match "résumé".
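A minimal sketch of such a pre-processing step, applied identically to indexed text and to queries (the helper name normalize_for_search is hypothetical, not taken from any particular search engine):

```python
import unicodedata

def normalize_for_search(text: str) -> str:
    # Hypothetical index/query normalizer: NFKD, strip combining
    # marks, then case-fold for case-insensitive matching.
    decomposed = unicodedata.normalize("NFKD", text)
    stripped = "".join(
        c for c in decomposed
        if unicodedata.category(c) != "Mn"
    )
    return stripped.casefold()

# Index entry and query reduce to the same key
print(normalize_for_search("Résumé"))  # resume
print(normalize_for_search("resume"))  # resume
```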

Fingerprinting and deduplication: NFKD provides a canonical key for detecting near-duplicate strings that differ only in how they encode the same visual text.
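One way to sketch this is to group strings by their NFKD form, so precomposed/decomposed and compatibility variants fall into the same bucket (the helper name dedup_key is illustrative):

```python
import unicodedata
from collections import defaultdict

def dedup_key(s: str) -> str:
    # NFKD collapses precomposed vs. decomposed encodings and
    # compatibility variants into one representation
    return unicodedata.normalize("NFKD", s)

strings = [
    "caf\u00e9",    # precomposed é
    "cafe\u0301",   # e + combining acute
    "\ufb01le",     # ﬁ ligature
    "file",         # plain ASCII
]

groups = defaultdict(list)
for s in strings:
    groups[dedup_key(s)].append(s)

for key, members in groups.items():
    print(ascii(key), "->", len(members), "variants")
```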

The Information Loss Warning

Like NFKC, NFKD is a lossy transformation. Once you strip superscripts, ligatures, and combining marks, you cannot recover the original. Only use NFKD for derived index keys or search normalization — never as your storage format.
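The irreversibility is easy to demonstrate: two distinct originals can map to the same NFKD output, so no inverse mapping exists:

```python
import unicodedata

a = "x\u00b2"   # x followed by U+00B2 SUPERSCRIPT TWO
b = "x2"        # plain ASCII

# Distinct originals, identical NFKD results; the original
# superscript cannot be recovered from the normalized form.
print(a == b)
print(unicodedata.normalize("NFKD", a) ==
      unicodedata.normalize("NFKD", b))
```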

Quick Facts

Full name: Normalization Form KD (Compatibility Decomposition)
Algorithm: compatibility decomposition + canonical decomposition + canonical (CCC) reordering
Relation to NFKC: NFKD followed by canonical composition = NFKC
String length: longest of all four forms
Python: unicodedata.normalize("NFKD", s)
Lossy? Yes
Typical uses: diacritic stripping, aggressive search normalization, deduplication keys
Recomposition: none (unlike NFKC, all decomposed sequences stay decomposed)
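The relation "NFKD followed by canonical composition = NFKC" can be checked directly, since NFKC is defined as compatibility decomposition plus canonical composition:

```python
import unicodedata

samples = ["caf\u00e9", "\ufb01nd", "x\u00b2", "na\u00efve"]

for s in samples:
    # Composing an NFKD string canonically yields the NFKC form
    via_nfkd = unicodedata.normalize("NFC",
                                     unicodedata.normalize("NFKD", s))
    direct = unicodedata.normalize("NFKC", s)
    assert via_nfkd == direct
    print(ascii(s), "->", ascii(direct))
```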

Related Terms

Other Terms in Algorithms

Case Folding

Mapping characters to a common case form for case-insensitive comparison. More comprehensive …

Grapheme Cluster Boundary

Rules (UAX#29) for determining where one user-perceived character ends and another begins. …

NFC (Canonical Composition)

Normalization Form C: decomposes, then canonically recomposes, producing the shortest form. Recommended for data storage and interchange, and the standard form on the Web.

NFD (Canonical Decomposition)

Normalization Form D: full decomposition without recomposition. Used by the HFS+ file system on macOS. é (U+00E9) → e + ◌́ (U+0065 + U+0301).

NFKC (Compatibility Composition)

Normalization Form KC: canonical composition after compatibility decomposition. Unifies visually similar characters (ﬁ → fi, ² → 2, Ⅳ → IV). Used for comparing identifiers.

String Comparison

Comparing Unicode strings requires normalization (NFC/NFD) and optionally collation (locale-aware sorting). Binary …

Unicode Text Segmentation

Algorithms for finding boundaries in text: grapheme cluster, word, and sentence boundaries. Essential for cursor movement, text selection, and text processing.

Unicode Bidirectional Algorithm (UBA)

An algorithm that determines the display order of mixed-direction text (e.g., English + Arabic) using characters' bidirectional categories and explicit directional overrides.

Unicode Normalization

The process of converting Unicode text into a standard normal form. Four forms: NFC (composition), NFD (decomposition), NFKC (compatibility composition), NFKD (compatibility decomposition).

Unicode Collation Algorithm (UCA)

The standard algorithm for comparing and sorting Unicode strings using a multi-level comparison: base characters → accents → case → tie-breakers. Supports locale customization.