Algorithms

NFC (Canonical Composition)

Normalization Form C: decomposes characters and then canonically recomposes them, producing the shortest form. Recommended for storing and exchanging data, and the standard form on the Web.


NFC: The Web's Default Normal Form

NFC (Normalization Form C — Canonical Composition) is the most widely used Unicode normalization form. It works in two passes: first, it decomposes all characters into their canonical base + combining mark sequences (like NFD), then it recomposes them back into precomposed characters wherever the Unicode standard defines a canonical composition.

The result is the shortest canonical representation of a string. For most Latin-script text, NFC means characters like é, ü, and ñ are stored as single code points rather than two-code-point sequences. For text that is already in NFC (pure ASCII, for instance), normalization is a no-op.
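The two-pass behavior can be sketched with the standard library (a minimal illustration; the intermediate NFD step is shown explicitly only to mirror the algorithm's description):

```python
import unicodedata

s = "Caf\u00e9"  # "Café" with precomposed é (U+00E9)

# Pass 1 conceptually matches NFD: canonical decomposition
nfd = unicodedata.normalize("NFD", s)    # C, a, f, e, U+0301 → 5 code points
# Pass 2 recomposes wherever the standard defines a canonical composition
nfc = unicodedata.normalize("NFC", nfd)
assert nfc == s and len(nfc) == 4

# Marks with no precomposed counterpart stay decomposed:
# there is no single code point for "q with acute"
assert unicodedata.normalize("NFC", "q\u0301") == "q\u0301"
```

Note that NFC(s) equals NFC(NFD(s)) for any string, which is why the two passes can be described independently.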

The W3C's Character Model for the World Wide Web recommends NFC for web content. Most databases, APIs, and programming environments assume NFC. HTTP headers, JSON payloads, and HTML source files are all expected to use NFC.

macOS user-space applications generally use NFC (despite HFS+ using NFD internally — the OS translates at the file system boundary). Windows and Linux also default to NFC in most contexts. If you are writing text to a file, database, or API and you want maximum interoperability, NFC is the right choice.
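The file system boundary issue can be sketched as follows (the decomposed filename here is illustrative, constructed by hand rather than read from a real NFD-based file system):

```python
import unicodedata

# A filename as an NFD-based file system might report it (illustrative)
from_fs = "re\u0301sume\u0301.txt"   # "résumé.txt", decomposed: e + U+0301
typed = "r\u00e9sum\u00e9.txt"       # the same name as a user would type it (NFC)

assert from_fs != typed              # raw code-point sequences differ
# Normalizing to NFC before comparing or storing makes them match
assert unicodedata.normalize("NFC", from_fs) == typed
```

Normalizing filenames to NFC at the point where they enter your application avoids spurious mismatches between user input and file-system listings.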

Python Examples

import unicodedata

# e + combining acute → é (one code point)
decomposed = "e\u0301"      # NFD form: 2 code points
composed = unicodedata.normalize("NFC", decomposed)

print(repr(decomposed))     # 'e\u0301'
print(repr(composed))       # 'é'  (U+00E9, one code point)
print(len(decomposed))      # 2
print(len(composed))        # 1

# Normalize user input before storing
def store_text(text: str) -> str:
    return unicodedata.normalize("NFC", text)

# NFC is idempotent
s = "caf\u00e9"
assert unicodedata.normalize("NFC", s) == s
assert unicodedata.is_normalized("NFC", s)

NFC does NOT fold compatibility characters. The ﬁ ligature (U+FB01) remains a single ligature code point under NFC. For search and identifier normalization, where you want ﬁ to compare equal to fi, use NFKC instead.
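The difference is easy to check directly:

```python
import unicodedata

lig = "\ufb01"  # LATIN SMALL LIGATURE FI, U+FB01

# NFC applies only canonical mappings, so the ligature survives
assert unicodedata.normalize("NFC", lig) == lig
# NFKC also applies compatibility mappings and folds it to two letters
assert unicodedata.normalize("NFKC", lig) == "fi"
```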

Quick Facts

Full name: Normalization Form Canonical Composition
Algorithm: NFD first, then canonical composition
Typical use: Web content, databases, API responses, user input storage
W3C standard: Recommended for web content (Character Model for the WWW)
Python: unicodedata.normalize("NFC", s)
Handles compatibility chars? No; use NFKC for that
Idempotent? Yes
Comparison to NFD: Usually equal or shorter (each composed character saves at least one code point)

Related Terms

Other terms in Algorithms

Case Folding

Mapping characters to a common case form for case-insensitive comparison. More comprehensive …

Grapheme Cluster Boundary

Rules (UAX#29) for determining where one user-perceived character ends and another begins. …

NFD (Canonical Decomposition)

Normalization Form D: fully decomposes without recomposing. Used internally by macOS's HFS+ file system. é (U+00E9) → e + ◌́ (U+0065 + U+0301).

NFKC (Compatibility Composition)

Normalization Form KC: compatibility decomposition followed by canonical composition. Unifies visually similar characters (ﬁ→fi, ²→2, Ⅳ→IV). Used for comparing identifiers.

NFKD (Compatibility Decomposition)

Normalization Form KD: compatibility decomposition without recomposition. The most aggressive normalization, and the one that loses the most formatting information.

String Comparison

Comparing Unicode strings requires normalization (NFC/NFD) and optionally collation (locale-aware sorting). Binary …

Unicode Text Segmentation

Algorithms for finding boundaries in text: grapheme cluster, word, and sentence boundaries. Essential for cursor movement, text selection, and text processing.

Unicode Bidirectional Algorithm (UBA)

The algorithm that determines the display order of mixed-direction text (e.g., English plus Arabic) using characters' bidirectional categories and explicit directional overrides.

Unicode Normalization

The process of converting Unicode text to a standard normal form. Four forms: NFC (composition), NFD (decomposition), NFKC (compatibility composition), NFKD (compatibility decomposition).

Unicode Collation Algorithm (UCA)

The standard algorithm for comparing and sorting Unicode strings via multi-level comparison: base characters → accents → case → tie-breakers. Supports locale customization.