Character Encoding
A system that maps characters to byte sequences for digital storage and transmission. Every text file has an encoding; the question is whether that encoding is correctly declared.
What is Character Encoding?
Character encoding is the system by which abstract characters (letters, digits, symbols, ideographs) are mapped to numeric values that computers can store and process. Every piece of text in a computer — every email, web page, source code file, database record — exists as a sequence of bytes, and a character encoding is the specification that says which byte patterns correspond to which characters.
Without an agreed-upon character encoding, there is no text: only meaningless bytes. The history of character encoding is a history of different communities independently building their own mappings, the problems that caused when those systems met, and ultimately the creation of Unicode to provide a single universal system.
The Three-Layer Model
Understanding character encoding requires separating three distinct concepts that are often conflated:
1. Character repertoire (the "what"): The set of abstract characters that the system can represent. Unicode's repertoire includes all human writing systems — over 149,000 characters as of Unicode 15.1. ASCII's repertoire is 128 characters.
2. Coded character set (the "which number"): An assignment of a unique number (code point) to each character in the repertoire. In Unicode, the letter 'A' is always U+0041 regardless of how it is stored. In ASCII, 'A' is 65. These numbers are abstract — they do not specify how bytes are arranged in memory.
3. Character encoding form / transfer encoding (the "how stored"): The specification for how code point numbers are serialized into bytes. For Unicode code point U+0041, UTF-8 stores it as a single byte 0x41, UTF-16 stores it as two bytes 0x41 0x00 (LE) or 0x00 0x41 (BE), and UTF-32 stores it as four bytes 0x41 0x00 0x00 0x00 (LE).
For pre-Unicode single-byte encodings like ASCII or ISO 8859-1, the code point number and the byte value are the same, so the distinction collapses. For Unicode with multiple transfer encodings (UTF-8, UTF-16, UTF-32), the distinction is crucial.
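The three layers can be seen directly in Python, where `ord()` exposes the code point (layer 2) and `str.encode()` produces the serialized bytes of each encoding form (layer 3), matching the U+0041 example above:

```python
ch = 'A'

# Layer 2: coded character set — 'A' is assigned code point U+0041 (decimal 65)
assert ord(ch) == 0x41

# Layer 3: encoding forms serialize that same abstract number differently
assert ch.encode('utf-8') == b'\x41'                  # one byte
assert ch.encode('utf-16-le') == b'\x41\x00'          # two bytes, little-endian
assert ch.encode('utf-32-le') == b'\x41\x00\x00\x00'  # four bytes, little-endian
```

The code point never changes; only its byte serialization does.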
Why Encoding Declaration Matters
Text has no intrinsic meaning without its encoding. The byte sequence 0x63 0x61 0x66 0xE9 could mean:
- café (if decoded as ISO 8859-1 or Windows-1252)
- a decoding error (if decoded as UTF-8 — 0xE9 introduces a three-byte sequence, but no valid continuation bytes follow)
- different garbage (if decoded as Shift-JIS or EUC-KR)
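The ambiguity of those four bytes is easy to reproduce: the same byte string decodes cleanly under Latin-1 but raises an exception under UTF-8.

```python
data = bytes([0x63, 0x61, 0x66, 0xE9])

# Under ISO 8859-1 / Windows-1252, every byte maps to exactly one character
assert data.decode('latin-1') == 'café'

# Under UTF-8, 0xE9 announces a three-byte sequence that never arrives
try:
    data.decode('utf-8')
except UnicodeDecodeError as e:
    print('UTF-8 decode failed:', e)
```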
This is why encoding declarations are required in HTML (<meta charset="utf-8">), HTTP headers (Content-Type: text/html; charset=utf-8), XML (<?xml version="1.0" encoding="utf-8"?>), and Python source files (# -*- coding: utf-8 -*-, though Python 3 defaults to UTF-8).
Code Examples
# The same string stored differently by encoding
text = 'café'
encodings = ['ascii', 'latin-1', 'utf-8', 'utf-16-le', 'utf-32-le']
for enc in encodings:
    try:
        encoded = text.encode(enc)
        print(f'{enc:12}: {encoded.hex()} ({len(encoded)} bytes)')
    except UnicodeEncodeError as e:
        print(f'{enc:12}: ERROR — {e}')
# ascii       : ERROR — 'ascii' codec can't encode character '\xe9' (é is not ASCII)
# latin-1     : 636166e9 (4 bytes)
# utf-8       : 636166c3a9 (5 bytes — é becomes 2 bytes)
# utf-16-le   : 630061006600e900 (8 bytes — 2 bytes per char)
# utf-32-le   : 630000006100000066000000e9000000 (16 bytes — 4 bytes per char)
<!-- HTML: always declare encoding in the first 1024 bytes -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8"> <!-- Browser uses this to decode the rest of the page -->
<title>My Page</title>
</head>
The Encoding Detection Problem
When encoding is not declared, software must guess. Encoding detection (charset detection) is an imperfect science:
- BOM detection: Byte Order Marks are reliable when present.
- Statistical analysis: Libraries like chardet analyze byte frequency distributions and multi-byte patterns to guess encodings. They work well for large samples of natural language text but can be fooled by short strings or unusual content.
- Context heuristics: A file served by a Japanese web server with a Japanese domain name is likely Shift-JIS or UTF-8; a file from a Russian mail server is likely KOI8-R or Windows-1251.
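BOM detection, the one reliable method above, needs only the standard library. A minimal sketch (`sniff_bom` is a hypothetical helper name) checks the known BOM prefixes, longest first, so UTF-32's four-byte mark is not mistaken for UTF-16's two-byte prefix:

```python
import codecs

def sniff_bom(data: bytes):
    """Return an encoding name if data starts with a known BOM, else None."""
    # Order matters: BOM_UTF32_LE begins with the same two bytes as BOM_UTF16_LE,
    # so the four-byte marks must be tested first.
    boms = [
        (codecs.BOM_UTF32_LE, 'utf-32-le'),
        (codecs.BOM_UTF32_BE, 'utf-32-be'),
        (codecs.BOM_UTF8, 'utf-8-sig'),
        (codecs.BOM_UTF16_LE, 'utf-16-le'),
        (codecs.BOM_UTF16_BE, 'utf-16-be'),
    ]
    for bom, name in boms:
        if data.startswith(bom):
            return name
    return None

print(sniff_bom(b'\xef\xbb\xbfhello'))  # utf-8-sig
print(sniff_bom(b'hello'))              # None — no BOM, must fall back to guessing
```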
The best practice is always to declare the encoding explicitly and never rely on detection.
Quick Facts
| Property | Value |
|---|---|
| Synonym | Charset, codepage, text encoding |
| Key components | Repertoire, coded character set, encoding form |
| Universal standard | Unicode (with UTF-8/16/32 encoding forms) |
| Web default | UTF-8 (WHATWG, RFC 8259) |
| Python 3 default | UTF-8 for source files (stream I/O follows the locale unless UTF-8 mode is enabled) |
| Detection library | chardet, charset-normalizer (Python) |
Common Pitfalls
Conflating "Unicode" with "UTF-8." Unicode is a coded character set (assigning code points to characters). UTF-8 is one encoding form for those code points. A string can be Unicode but stored as UTF-16 or UTF-32 — it is still "Unicode." Saying "encode this string as Unicode" is ambiguous; saying "encode as UTF-8" is precise.
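The distinction is concrete: one Unicode string, many byte serializations. The euro sign (code point U+20AC) stays the same character whichever encoding form stores it:

```python
s = '€'  # one Unicode character, code point U+20AC
assert ord(s) == 0x20AC

# Two different byte serializations of the same Unicode string
utf8_bytes = s.encode('utf-8')       # b'\xe2\x82\xac' — 3 bytes
utf16_bytes = s.encode('utf-16-le')  # b'\xac\x20'     — 2 bytes
assert utf8_bytes != utf16_bytes

# Both decode back to the identical Unicode string
assert utf8_bytes.decode('utf-8') == utf16_bytes.decode('utf-16-le') == s
```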
Assuming text files have a consistent encoding. A file might be mostly UTF-8 but contain a few Windows-1252 bytes embedded by a text editor that mixed encodings. The Python errors parameter (errors='replace', errors='ignore', errors='surrogateescape') controls how to handle such mixed content.
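The three `errors` strategies behave quite differently on such mixed content. For a mostly-UTF-8 byte string with one stray Windows-1252 byte, only `surrogateescape` preserves the original bytes losslessly:

```python
raw = b'caf\xe9'  # stray Windows-1252 byte (0xE9) in otherwise ASCII/UTF-8 text

# 'replace' substitutes U+FFFD, the replacement character — lossy but visible
assert raw.decode('utf-8', errors='replace') == 'caf\ufffd'

# 'ignore' silently drops the bad byte — lossy and invisible
assert raw.decode('utf-8', errors='ignore') == 'caf'

# 'surrogateescape' smuggles the byte through as a lone surrogate (U+DCE9)
lossless = raw.decode('utf-8', errors='surrogateescape')
assert lossless == 'caf\udce9'

# ...which round-trips back to the exact original bytes
assert lossless.encode('utf-8', errors='surrogateescape') == raw
```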
The Python 2 vs. 3 transition. Python 2 str was bytes; Python 3 str is Unicode. The most common Python 2→3 migration bug is forgetting to handle encoding explicitly when reading files or making HTTP requests.
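A habit that avoids this class of bug is passing `encoding=` explicitly to `open()` instead of relying on the platform default. A small self-contained sketch (the filename is illustrative):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'note.txt')

# Write text with an explicit encoding; the platform default may be something else
with open(path, 'w', encoding='utf-8') as f:
    f.write('café')

# Read it back with the same explicit encoding
with open(path, 'r', encoding='utf-8') as f:
    assert f.read() == 'café'

# Opening in binary mode shows the actual UTF-8 serialization on disk
with open(path, 'rb') as f:
    assert f.read() == b'caf\xc3\xa9'
```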