Word Boundary
A position between words as determined by the Unicode word boundary rules: not naive splitting on spaces, but correct handling of CJK scripts (which use no spaces), contractions, and numbers.
What is a "Word"?
English speakers intuitively know where words start and end: spaces and punctuation serve as dividers. But for a computer processing multilingual text, "word" is a surprisingly complex concept. Chinese, Japanese, and Thai use no spaces between words. German compounds like "Donaudampfschifffahrtsgesellschaft" are single orthographic words. English contractions like "don't" and "I've" may be one or two tokens depending on the application.
UAX #29 Word Boundary rules provide an algorithmic definition of word boundaries that works reasonably well across scripts, making them suitable for applications that need to tokenize text, implement double-click selection, count words, or process natural language.
Word Boundary Rules Overview
The UAX #29 word boundary algorithm assigns each character a Word_Break property value and applies a table of rules to determine whether a boundary exists between adjacent characters. Key properties:
| Property | Examples | Meaning |
|---|---|---|
| ALetter | A–Z, a–z, accented letters | Part of a word |
| Numeric | 0–9 | Part of a numeric run |
| MidLetter | ' · | Allowed within a word (contractions) |
| MidNum | , . | Allowed within a number (1,000 or 3.14) |
| ExtendNumLet | _ | Word extender (identifiers) |
| WSegSpace | regular spaces | Word boundary position |
| Newline | CR, LF, NEL | Word boundary position |
Notable rules:
- Contractions: don't is ONE word token because an apostrophe (MidLetter) between two ALetter characters is not a boundary.
- Numbers: 3.14 and 1,000 are single tokens because . and , between digits are MidNum characters.
- Identifiers: my_variable is one token because _ is ExtendNumLet.
- Email/URL: the default rules do not keep full email addresses or URLs together as single tokens; applications that want them whole must tailor the rules.
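To make the rule table concrete, here is a minimal toy sketch of the mechanism, not the real algorithm: it classifies characters with rough stand-ins for the Word_Break properties (str.isalpha() only approximates ALetter, and '.' is lumped in with MidNum as in the table above) and applies just the run, contraction, and number rules:
# Toy classifier: rough stand-ins for the real Word_Break property data
def wb_property(ch):
    if ch.isalpha():
        return "ALetter"        # approximation; real ALetter excludes CJK etc.
    if ch.isdigit():
        return "Numeric"
    if ch == "_":
        return "ExtendNumLet"
    if ch in "'\u00B7":
        return "MidLetter"
    if ch in ",.":
        return "MidNum"         # simplification; '.' is really MidNumLet
    return "Other"

WORD_ISH = ("ALetter", "Numeric", "ExtendNumLet")

def toy_tokens(text):
    """Split text using a tiny subset of the UAX #29 rules."""
    out, start = [], 0
    props = [wb_property(c) for c in text]
    for i in range(1, len(text)):
        a, b = props[i - 1], props[i]
        nxt = props[i + 1] if i + 1 < len(text) else "Other"
        if a in WORD_ISH and b in WORD_ISH:
            continue   # WB5/WB8-WB10/WB13: no break inside a letter/digit/_ run
        if a == "ALetter" and b == "MidLetter" and nxt == "ALetter":
            continue   # WB6: no break before the apostrophe in don't
        if a == "MidLetter" and b == "ALetter" and i >= 2 and props[i - 2] == "ALetter":
            continue   # WB7: no break after it either
        if a == "Numeric" and b == "MidNum" and nxt == "Numeric":
            continue   # WB12: no break before , or . inside a number
        if a == "MidNum" and b == "Numeric" and i >= 2 and props[i - 2] == "Numeric":
            continue   # WB11: no break after it either
        out.append(text[start:i])   # WB999: otherwise, break everywhere
        start = i
    out.append(text[start:])
    return [t for t in out if any(wb_property(c) in WORD_ISH for c in t)]

print(toy_tokens("Don't pay 1,000.50 for my_variable!"))
# ["Don't", 'pay', '1,000.50', 'my_variable']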
CJK and Scripts Without Spaces
For Chinese, Japanese, and Thai, the default UAX #29 rules fall back to a simplified approach: every character becomes its own "word". Real word segmentation for these scripts requires language-specific processing (statistical models, dictionaries):
# UAX #29 treats each CJK character as a separate word token
# For real Japanese segmentation, use MeCab or SudachiPy
# For real Chinese, use jieba or pkuseg
# For real Thai, use PyThaiNLP
import jieba # pip install jieba
tokens = list(jieba.cut("我喜欢学习自然语言处理"))
print(tokens) # ['我', '喜欢', '学习', '自然语言处理']
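ICU itself goes further than the bare UAX #29 defaults: its word BreakIterator ships with dictionaries for Thai and several other Southeast Asian scripts (plus frequency data for Chinese/Japanese), so PyICU can often produce real words for these scripts too. A small sketch; the Thai sample string and the expected split are illustrative:
# ICU's word BreakIterator applies built-in dictionaries for Thai,
# so it can do better than one-token-per-character here.
from icu import BreakIterator, Locale

thai = "สวัสดีครับ"   # "hello" + polite particle (illustrative sample)
bi = BreakIterator.createWordInstance(Locale("th_TH"))
bi.setText(thai)
boundaries = [0] + list(bi)        # iterating yields the boundaries after 0
words = [thai[s:e] for s, e in zip(boundaries, boundaries[1:])]
print(words)                       # expected: ['สวัสดี', 'ครับ']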
Python and ICU Word Segmentation
from icu import BreakIterator, Locale

# user@example.com is a placeholder address for illustration
text = "Don't stop, it's 3.14 o'clock. user@example.com"
bi = BreakIterator.createWordInstance(Locale("en_US"))
bi.setText(text)

start = 0
for end in bi:
    token = text[start:end]
    rule_status = bi.getRuleStatus()
    # 200-299: letter-based word; 100-199: number; 0: non-word (space/punctuation)
    if rule_status != 0:
        print(f"word: {repr(token)}")
    start = end
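With the sample string above, this should print Don't, stop, it's, o'clock, user, and example.com as letter-based tokens and 3.14 as a number: apostrophes and periods between letters or digits are Mid* characters and do not break, while the @ sign, spaces, and trailing punctuation do.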
Quick Facts
| Property | Value |
|---|---|
| Specification | UAX #29, Section 4 (Word Boundaries) |
| Contractions | Apostrophe between letters is NOT a boundary |
| CJK text | No inter-character breaks by default — language tools needed |
| Thai | No space-based segmentation — requires dictionary/ML approach |
| Double-click selection | Should use word boundary algorithm |
| Search engines | Use language-specific tokenizers, not raw UAX #29 |
| Python ICU | BreakIterator.createWordInstance(Locale("en_US")) |
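As a sketch of the double-click row above: given a click at a character offset, the surrounding word can be found by pairing ICU's following() and previous() boundary lookups (word_at is a hypothetical helper name):
# A minimal double-click selection sketch: find the word boundaries
# surrounding a character offset. word_at is a hypothetical helper.
from icu import BreakIterator, Locale

def word_at(text, click_offset):
    bi = BreakIterator.createWordInstance(Locale("en_US"))
    bi.setText(text)
    end = bi.following(click_offset)   # first boundary after the click
    start = bi.previous()              # step back to the preceding boundary
    return text[start:end]

print(word_at("Don't stop believing", 2))   # -> "Don't"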