
# Word Boundary

The position between words as defined by the Unicode word segmentation rules (UAX #29). Smarter than a naive split on spaces: it correctly handles CJK scripts (written without spaces), contractions, and numbers.

## What is a "Word"?

English speakers intuitively know where words start and end: spaces and punctuation serve as dividers. But for a computer processing multilingual text, "word" is a surprisingly complex concept. Chinese, Japanese, and Thai use no spaces between words. German compounds like "Donaudampfschifffahrtsgesellschaft" are single orthographic words. English contractions like "don't" and "I've" may be one or two tokens depending on the application.

The UAX #29 word boundary rules provide an algorithmic definition of word boundaries that works reasonably well across scripts, suitable for applications that need to tokenize text, implement double-click selection, count words, or process natural language.
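As a quick illustration of where naive whitespace splitting falls short (plain Python, standard library only):

```python
# Naive whitespace splitting: tolerable for English, useless for CJK.
text_en = "Don't stop, it's 3.14 o'clock."
print(text_en.split())
# ["Don't", 'stop,', "it's", '3.14', "o'clock."] -- punctuation stays glued

text_zh = "我喜欢学习自然语言处理"  # Chinese: no spaces between words
print(text_zh.split())
# ['我喜欢学习自然语言处理'] -- the whole sentence comes back as one "word"
```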

## Word Boundary Rules Overview

The UAX #29 word boundary algorithm assigns each character a word break property and applies an ordered table of rules to decide whether a boundary exists between each pair of adjacent characters. Key properties:

| Property | Examples | Meaning |
|----------|----------|---------|
| Letter | A–Z, a–z, accented letters | Part of a word |
| Numeric | 0–9 | Part of a numeric run |
| MidLetter | ' · | Allowed within a word (contractions) |
| MidNum | , . | Allowed within a number (1,000 or 3.14) |
| ExtendNumLet | _ | Word extender (identifiers) |
| WSegSpace | regular spaces | Word boundary position |
| Newline | CR, LF, NEL | Word boundary position |

Notable rules (see the toy sketch below):

- Contractions: don't is ONE word token because an apostrophe between two Letter characters is not a boundary (rules WB6/WB7; U+0027 actually carries the Single_Quote property but is treated like MidLetter by those rules).
- Numbers: 3.14 and 1,000 are single tokens because . and , between digits act as MidNum characters.
- Identifiers: my_variable is one token because _ is ExtendNumLet.
- Email/URL: the default rules do NOT keep a full email address or URL together. The full stop also joins letters (it is MidNumLet in the full property set), so example.com stays one token, but @ belongs to no mid class, so user@example.com splits at the @; keeping whole addresses together requires tailored rules.
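To make the mid-character idea concrete, here is a toy sketch of the rule shape. It is not the real UAX #29 rule table (it lumps MidLetter, MidNum, and MidNumLet together, and `toy_word_tokens` is a name invented for this example), but it shows why don't and 3.14 survive as single tokens:

```python
def toy_word_tokens(text: str) -> list[str]:
    """Toy approximation of UAX #29 word tokens: runs of word characters,
    where a single ' . or , is absorbed when flanked by word characters.
    The real rules are stricter (e.g. , only joins digits)."""
    tokens, i, n = [], 0, len(text)
    while i < n:
        if text[i].isalnum() or text[i] == "_":  # Letter/Numeric/ExtendNumLet
            j = i + 1
            while j < n:
                c = text[j]
                if c.isalnum() or c == "_":
                    j += 1
                elif c in "'.," and j + 1 < n and text[j + 1].isalnum():
                    j += 2                       # mid character: no boundary
                else:
                    break
            tokens.append(text[i:j])
            i = j
        else:
            i += 1                               # spaces/punctuation: skip
    return tokens

print(toy_word_tokens("Don't stop, it's 3.14 o'clock."))
# ["Don't", 'stop', "it's", '3.14', "o'clock"]
```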

## CJK and Scripts Without Spaces

For Chinese, Japanese, and Thai, the default UAX #29 rules take a deliberately simple approach: nearly every character is its own "word" at the UAX #29 level (Han ideographs, Hiragana, and Thai letters each form single-character tokens; only runs like Katakana stay together). Real word segmentation for these scripts requires language-specific processing (dictionaries, statistical models), which implementations such as ICU layer on top of the defaults:

```python
# UAX #29 alone treats each CJK character as a separate word token.
# For real Japanese segmentation, use MeCab or SudachiPy;
# for Chinese, jieba or pkuseg; for Thai, PyThaiNLP.

import jieba  # pip install jieba

tokens = list(jieba.cut("我喜欢学习自然语言处理"))
print(tokens)
# ['我', '喜欢', '学习', '自然语言处理'] (exact split depends on
# jieba's dictionary version)
```
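The same pattern applies to Thai; a minimal sketch with PyThaiNLP, assuming its default dictionary-based newmm engine (the example sentence is chosen here for illustration):

```python
from pythainlp.tokenize import word_tokenize  # pip install pythainlp

print(word_tokenize("ผมชอบเรียนภาษาไทย"))  # "I like studying Thai"
# e.g. ['ผม', 'ชอบ', 'เรียน', 'ภาษาไทย'] -- exact split depends on the dictionary
```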

## Python and ICU Word Segmentation

```python
from icu import BreakIterator, Locale

text = "Don't stop, it's 3.14 o'clock. user@example.com"
bi = BreakIterator.createWordInstance(Locale("en_US"))
bi.setText(text)

start = 0
for end in bi:  # iterating yields successive boundary offsets
    token = text[start:end]
    rule_status = bi.getRuleStatus()
    # getRuleStatus() classifies the segment ending at this boundary:
    #   200-299: letter-based word, 100-199: number,
    #   0: non-word (spaces, punctuation)
    if rule_status != 0:
        print(f"word: {token!r}")
    start = end
```
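With the sample text above, this should print the word tokens Don't, stop, it's, 3.14, and o'clock, plus user and example.com as two separate tokens: the full stop inside example.com is a mid character, but the @ is not, so the default rules split the address there (see the email/URL note above).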

## Quick Facts

| Aspect | Detail |
|--------|--------|
| Specification | UAX #29, Section 4 (Word Boundaries) |
| Contractions | Apostrophe between letters is NOT a boundary |
| CJK text | Each character is its own token by default; language tools needed |
| Thai | No space-based segmentation; requires a dictionary/ML approach |
| Double-click selection | Should use the word boundary algorithm |
| Search engines | Use language-specific tokenizers, not raw UAX #29 |
| Python ICU | BreakIterator.createWordInstance(Locale("en_US")) |
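As a sketch of the double-click case, an editor can find the word under a caret offset with the standard BreakIterator navigation calls (following/previous); `word_at` is a hypothetical helper name for this example:

```python
from icu import BreakIterator, Locale

def word_at(text: str, offset: int) -> str:
    """Return the segment around a click at offset (0 <= offset < len(text)).
    Clicking inside a space run returns that run."""
    bi = BreakIterator.createWordInstance(Locale("en_US"))
    bi.setText(text)
    end = bi.following(offset)  # first boundary after the click position
    start = bi.previous()       # step back to the boundary before it
    return text[start:end]

print(word_at("Don't stop now", 2))  # "Don't" -- click lands inside the word
```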
