NFC (Canonical Composition)
Normalization Form C: decompose first, then canonically recompose, producing the shortest form. Recommended for data storage and interchange; the Web's standard form.
NFC: The Web's Default Normal Form
NFC (Normalization Form C — Canonical Composition) is the most widely used Unicode normalization form. It works in two passes: first, it decomposes all characters into their canonical base + combining mark sequences (like NFD), then it recomposes them back into precomposed characters wherever the Unicode standard defines a canonical composition.
The result is the shortest canonical representation of a string. For most Latin-script text, NFC means characters like é, ü, and ñ are stored as single code points rather than two-code-point sequences. For text that is already in NFC (pure ASCII, for instance), normalization is a no-op.
Why NFC is the Recommended Default
The W3C's Character Model for the World Wide Web recommends NFC for all web content. Most databases, APIs, and programming environments assume NFC. HTTP headers, JSON payloads, and HTML source files are all expected to use NFC.
macOS user-space applications generally use NFC (despite HFS+ using NFD internally — the OS translates at the file system boundary). Windows and Linux also default to NFC in most contexts. If you are writing text to a file, database, or API and you want maximum interoperability, NFC is the right choice.
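That file-system boundary is exactly where mismatches bite. A minimal sketch, using a hypothetical `same_filename` helper, of normalizing to NFC before comparison so that a decomposed name coming back from an HFS+ volume matches the composed name a user typed:

```python
import unicodedata

def same_filename(a: str, b: str) -> bool:
    # Hypothetical helper: compare names regardless of their source form
    return unicodedata.normalize("NFC", a) == unicodedata.normalize("NFC", b)

hfs_name = "cafe\u0301"   # NFD, as an HFS+ volume may return it
user_name = "caf\u00e9"   # NFC, as the user typed it

print(hfs_name == user_name)               # False: byte-for-byte different
print(same_filename(hfs_name, user_name))  # True: canonically equivalent
```

Normalizing both sides (rather than assuming either form) keeps the comparison correct no matter which layer produced the string.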
Python Examples
```python
import unicodedata

# e + combining acute (U+0301) recomposes to é (one code point)
decomposed = "e\u0301"                               # NFD form: 2 code points
composed = unicodedata.normalize("NFC", decomposed)

print(repr(decomposed))  # 'e\u0301'
print(repr(composed))    # 'é' (U+00E9)
print(len(decomposed))   # 2
print(len(composed))     # 1

# Normalize user input before storing
def store_text(text: str) -> str:
    return unicodedata.normalize("NFC", text)

# NFC is idempotent: normalizing already-NFC text is a no-op
s = "caf\u00e9"
assert unicodedata.normalize("NFC", s) == s
assert unicodedata.is_normalized("NFC", s)  # Python 3.8+
```
NFC does NOT fold compatibility characters. The fi ligature ﬁ (U+FB01) remains ﬁ under NFC. For search and identifier normalization where you want ﬁ == fi, use NFKC instead.
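A minimal sketch of that distinction:

```python
import unicodedata

lig = "\ufb01"  # ﬁ, LATIN SMALL LIGATURE FI

# NFC leaves compatibility characters untouched
print(unicodedata.normalize("NFC", lig) == lig)    # True

# NFKC folds the ligature into two plain letters
print(unicodedata.normalize("NFKC", lig))          # 'fi'
print(unicodedata.normalize("NFKC", lig) == "fi")  # True
```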
Quick Facts
| Property | Value |
|---|---|
| Full name | Normalization Form Canonical Composition |
| Algorithm | NFD first, then canonical composition |
| Typical use | Web content, databases, API responses, user input storage |
| W3C standard | Recommended for web content (Character Model for the WWW) |
| Python | unicodedata.normalize("NFC", s) |
| Handles compatibility chars? | No — use NFKC for that |
| Idempotent? | Yes |
| Comparison to NFD | Usually equal or shorter (composed chars save one code point each) |
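The last row can be checked directly. A small sketch counting code points in both forms (the sample string is illustrative):

```python
import unicodedata

s = "h\u00e9llo \u00f1o\u00f1o"  # "héllo ñoño": three precomposed letters
nfd = unicodedata.normalize("NFD", s)
nfc = unicodedata.normalize("NFC", nfd)

print(len(nfd))  # 13: each accent becomes a separate combining mark
print(len(nfc))  # 10: accents fold back into precomposed letters
```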
Related Terms
More from Algorithms
Mapping characters to a common case form for case-insensitive comparison. More comprehensive …
Rules (UAX#29) for determining where one user-perceived character ends and another begins. …
Normalization Form D: full canonical decomposition without recomposition; used internally by the macOS HFS+ file system. é (U+00E9) → e + ◌́ (U+0065 + U+0301).
Normalization Form KC: compatibility decomposition followed by canonical composition; folds visually similar characters (ﬁ→fi, ²→2, Ⅳ→IV). Used for identifier comparison.
Normalization Form KD: compatibility decomposition without recomposition; the most aggressive normalization form, discarding the most formatting information.
Comparing Unicode strings requires normalization (NFC/NFD) and optionally collation (locale-aware sorting). Binary …
The algorithm that determines the display order of mixed-direction text (e.g., English + Arabic) using characters' bidirectional categories and explicit directional overrides.
Rules that determine where text may break to the next line, based on character properties, CJK word boundaries, and break opportunities.
The standard algorithm for comparing and sorting Unicode strings via multi-level comparison (base characters → diacritics → case → tie-breakers), with locale-specific customization.
Algorithms for finding boundaries of various kinds in text: grapheme cluster, word, and sentence boundaries; essential for cursor movement, text selection, and text processing.