NFKC (Compatibility Composition)
Normalization Form KC: compatibility decomposition followed by canonical composition. Unifies visually similar characters (ﬁ → fi, ² → 2, Ⅳ → IV). Used for identifier comparison.
NFKC: Normalization for Identifiers and Search
NFKC (Normalization Form KC — Compatibility Decomposition followed by Canonical Composition) is the most useful normalization form for search engines, programming language identifiers, and any system that needs to treat visually or semantically similar characters as equivalent.
The "K" in NFKC marks the compatibility forms (the letter "C" was already taken by "Composition") — beyond the canonical decompositions that NFC handles, NFKC also folds compatibility characters: Unicode code points that exist for historical or typographic reasons but represent the same abstract character as something already encoded.
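The distinction is visible in the Unicode Character Database: canonical decompositions are untagged, while compatibility decompositions carry a formatting tag. A quick way to inspect this from Python:

```python
import unicodedata

# Canonical decompositions have no tag; compatibility decompositions
# carry a marker such as <compat>, <super>, or <wide> in the UCD.
print(unicodedata.decomposition("\u00e9"))  # 'é' → '0065 0301' (canonical)
print(unicodedata.decomposition("\u00b2"))  # '²' → '<super> 0032'
print(unicodedata.decomposition("\ufb01"))  # 'ﬁ' → '<compat> 0066 0069'
print(unicodedata.decomposition("\uff21"))  # 'Ａ' → '<wide> 0041'
```

NFC ignores the tagged mappings; NFKC (and NFKD) apply them.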
What Compatibility Folding Does
Compatibility decompositions cover thousands of characters across many categories:
| Original | After NFKC | Category |
|---|---|---|
| ﬁ (U+FB01) | fi | Ligature |
| ² (U+00B2) | 2 | Superscript |
| ℃ (U+2103) | °C | Symbol |
| Ａ (U+FF21) | A | Fullwidth |
| ａ (U+FF41) | a | Fullwidth |
| ① (U+2460) | 1 | Enclosed |
| ㎞ (U+338F) | km | Compatibility CJK |
| ﾅ (U+FF85) | ナ | Halfwidth Katakana |
```python
import unicodedata

# Ligature folding: U+FB01 (ﬁ) becomes the two letters "fi"
assert unicodedata.normalize("NFKC", "\ufb01le") == "file"

# Fullwidth ASCII folding (common in CJK input)
fw = "\uff21\uff11\uff22\uff12"  # fullwidth "Ａ１Ｂ２"
ascii_eq = unicodedata.normalize("NFKC", fw)
print(ascii_eq)  # "A1B2"

# Superscript folding
assert unicodedata.normalize("NFKC", "x\u00b2") == "x2"

# Combined with casefold for case-insensitive search
def normalize_for_search(s: str) -> str:
    return unicodedata.normalize("NFKC", s).casefold()

# NFKC also folds the ideographic space (U+3000) to a plain space
print(normalize_for_search("ＨＥＬＬＯ　ＷＯＲＬＤ"))  # "hello world"
```
NFKC in Standards
Python identifiers: Python 3 applies NFKC normalization to identifiers (PEP 3131). That means ＭｙＣｌａｓｓ (fullwidth) is valid and refers to the same name as MyClass.
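This is directly observable in CPython — a variable bound under a fullwidth name can be read back under its ASCII form, because both identifiers NFKC-normalize to the same string:

```python
import unicodedata

# CPython NFKC-normalizes identifiers at parse time (PEP 3131),
# so the fullwidth spelling below binds the ASCII name "MyValue".
ＭｙＶａｌｕｅ = 42          # fullwidth letters in the source
print(MyValue)               # same identifier after normalization: 42
assert unicodedata.normalize("NFKC", "ＭｙＶａｌｕｅ") == "MyValue"
```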
PRECIS (RFC 8264): The successor to stringprep for usernames and passwords uses NFKC as a normalization step.
IDNA (Internationalized Domain Names): IDNA2003's Nameprep profile applies NFKC during domain-name processing (IDNA2008 replaced this with a property-based approach).
Password hashing: Many systems apply NFKC before hashing to ensure ﬁnance (with the U+FB01 ligature) and finance hash the same way.
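A minimal sketch of that pre-hashing step — the function name is illustrative, and a real system would use a slow KDF such as bcrypt or Argon2 rather than plain SHA-256:

```python
import hashlib
import unicodedata

def hash_password(password: str) -> bytes:
    # Fold compatibility variants before hashing so visually identical
    # inputs (e.g. the "fi" ligature vs plain "fi") match the same digest.
    normalized = unicodedata.normalize("NFKC", password)
    return hashlib.sha256(normalized.encode("utf-8")).digest()

# "ﬁnance" (U+FB01) and "finance" hash identically after NFKC
assert hash_password("\ufb01nance") == hash_password("finance")
```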
NFKC vs NFC: The Trade-off
NFKC loses information — ² and 2 are distinct characters with different semantic roles, but NFKC makes them identical. This is intentional for search and identifiers but wrong for document storage. Never use NFKC as your storage format if you need to preserve superscripts, ligatures, or enclosed numbers.
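The loss is easy to demonstrate: folding is one-way and idempotent, so once a superscript is gone there is nothing to recover it from.

```python
import unicodedata

# Once "x²" becomes "x2" there is no record that the 2 was a
# superscript. Store the original (NFC at most); fold to NFKC
# only at comparison time.
original = "x\u00b2 + E = mc\u00b2"
folded = unicodedata.normalize("NFKC", original)
print(folded)  # 'x2 + E = mc2'
assert unicodedata.normalize("NFKC", folded) == folded  # idempotent
assert folded != original  # the superscript distinction is gone
```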
Quick Facts
| Property | Value |
|---|---|
| Full name | Normalization Form KC (Compatibility Decomposition, followed by Canonical Composition) |
| Algorithm | NFKD first, then canonical composition |
| Python identifiers | Normalized with NFKC (PEP 3131) |
| PRECIS framework | RFC 8264 uses NFKC |
| Python | unicodedata.normalize("NFKC", s) |
| Lossy? | Yes — compatibility distinctions are discarded |
| Typical use | Search normalization, identifier comparison, password processing |
| Handles accents? | Yes (also decomposes/recomposes canonical characters) |
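On the last point: NFKC is a superset of NFC's canonical work, so decomposed accents are recomposed in the same pass that folds compatibility characters.

```python
import unicodedata

# NFKC performs canonical composition like NFC does:
decomposed = "e\u0301"  # 'e' + combining acute accent
assert unicodedata.normalize("NFKC", decomposed) == "\u00e9"  # 'é'

# ...and canonical singletons are replaced in the same pass:
# U+212B ANGSTROM SIGN becomes U+00C5 LATIN CAPITAL LETTER A WITH RING
assert unicodedata.normalize("NFKC", "\u212bngstr\u00f6m") == "\u00c5ngstr\u00f6m"
```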
Related Terms
Other terms in Algorithms
Mapping characters to a common case form for case-insensitive comparison. More comprehensive …
Rules (UAX#29) for determining where one user-perceived character ends and another begins. …
Normalization Form C: decomposes and then canonically recomposes, producing the shortest form. Recommended for data storage and interchange; the standard form on the Web.
Normalization Form D: fully decomposes without recomposing. Used by macOS's HFS+ file system. é (U+00E9) → e + ◌́ (U+0065 + U+0301).
Normalization Form KD: compatibility decomposition without recomposition. The strongest normalization, and the one that discards the most formatting information.
Comparing Unicode strings requires normalization (NFC/NFD) and optionally collation (locale-aware sorting). Binary …
Algorithms for finding text boundaries: grapheme cluster, word, and sentence boundaries. Essential for cursor movement, text selection, and text processing.
An algorithm that determines the display order of mixed-direction text (e.g. English + Arabic) using characters' bidirectional categories and explicit directional overrides.
The process of converting Unicode text to a standard normal form. Four forms: NFC (composed), NFD (decomposed), NFKC (compatibility composed), NFKD (compatibility decomposed).
The standard algorithm for comparing and sorting Unicode strings via multi-level comparison: base letters → accents → case → tie-breakers. Supports locale customization.