pip_services3_expressions.tokenizers package
Subpackages
- pip_services3_expressions.tokenizers.generic package
- Submodules
- pip_services3_expressions.tokenizers.generic.CCommentState module
- pip_services3_expressions.tokenizers.generic.CppCommentState module
- pip_services3_expressions.tokenizers.generic.GenericCommentState module
- pip_services3_expressions.tokenizers.generic.GenericNumberState module
- pip_services3_expressions.tokenizers.generic.GenericQuoteState module
- pip_services3_expressions.tokenizers.generic.GenericSymbolState module
- pip_services3_expressions.tokenizers.generic.GenericTokenizer module
- pip_services3_expressions.tokenizers.generic.GenericWhitespaceState module
- pip_services3_expressions.tokenizers.generic.GenericWordState module
- pip_services3_expressions.tokenizers.generic.SymbolNode module
- pip_services3_expressions.tokenizers.generic.SymbolRootNode module
- Module contents
- Submodules
- pip_services3_expressions.tokenizers.utilities package
Submodules
- pip_services3_expressions.tokenizers.AbstractTokenizer module
- pip_services3_expressions.tokenizers.ICommentState module
- pip_services3_expressions.tokenizers.INumberState module
- pip_services3_expressions.tokenizers.IQuoteState module
- pip_services3_expressions.tokenizers.ISymbolState module
- pip_services3_expressions.tokenizers.ITokenizer module
- pip_services3_expressions.tokenizers.ITokenizerState module
- pip_services3_expressions.tokenizers.IWhitespaceState module
- pip_services3_expressions.tokenizers.IWordState module
- pip_services3_expressions.tokenizers.Token module
- pip_services3_expressions.tokenizers.TokenType module
Module contents
-
class
pip_services3_expressions.tokenizers.
AbstractTokenizer
Bases:
pip_services3_expressions.tokenizers.ITokenizer.ITokenizer
, abc.ABC
Implements an abstract tokenizer class.
-
clear_character_states
()
-
get_character_state
(symbol: int) → pip_services3_expressions.tokenizers.ITokenizerState.ITokenizerState
-
has_next_token
() → bool Checks whether the next token exists.
- Returns
True if the scanner has a next token.
-
next_token
() → pip_services3_expressions.tokenizers.Token.Token Gets the next token from the scanner.
- Returns
The next token, or None if there are no more tokens left.
-
property
scanner
-
set_character_state
(from_symbol: int, to_symbol: int, state: pip_services3_expressions.tokenizers.ITokenizerState.ITokenizerState)
-
tokenize_buffer
(buffer: str) → List[pip_services3_expressions.tokenizers.Token.Token] Tokenizes a string buffer into a list of token structures.
- Parameters
buffer – A string buffer to be tokenized.
- Returns
A list of token structures.
-
tokenize_buffer_to_strings
(buffer: str) → List[str] Tokenizes a string buffer into a list of strings.
- Parameters
buffer – A string buffer to be tokenized.
- Returns
A list of token strings.
-
tokenize_stream
(scanner: pip_services3_expressions.io.IScanner.IScanner) → List[pip_services3_expressions.tokenizers.Token.Token] Tokenizes a textual stream into a list of token structures.
- Parameters
scanner – A textual stream to be tokenized.
- Returns
A list of token structures.
-
tokenize_stream_to_string
(scanner: pip_services3_expressions.io.IScanner.IScanner) → List[str] Tokenizes a textual stream into a list of strings.
- Parameters
scanner – A textual stream to be tokenized.
- Returns
A list of token strings.
-
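The pull-style contract above (check, then pull) can be sketched with a self-contained stand-in. ToyTokenizer below is illustrative only, not the real library class: it splits on whitespace, but mirrors the has_next_token()/next_token()/tokenize_buffer_to_strings() shape.

```python
# Minimal sketch of the pull-style API that AbstractTokenizer exposes.
# ToyTokenizer is a stand-in, not a library class.
from typing import List, Optional

class ToyTokenizer:
    def __init__(self, buffer: str) -> None:
        self._words: List[str] = buffer.split()
        self._index = 0

    def has_next_token(self) -> bool:
        # True if the scanner has a next token.
        return self._index < len(self._words)

    def next_token(self) -> Optional[str]:
        # The next token, or None if there are no more tokens left.
        if not self.has_next_token():
            return None
        word = self._words[self._index]
        self._index += 1
        return word

    def tokenize_buffer_to_strings(self) -> List[str]:
        # Drain the tokenizer into a list, as tokenize_buffer_to_strings does.
        result = []
        while self.has_next_token():
            result.append(self.next_token())
        return result

tokens = ToyTokenizer("1.23 <= 12.3").tokenize_buffer_to_strings()
print(tokens)  # ['1.23', '<=', '12.3']
```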
-
class
pip_services3_expressions.tokenizers.
ICommentState
Bases:
pip_services3_expressions.tokenizers.ITokenizerState.ITokenizerState
, abc.ABC
Defines an interface for tokenizer state that processes comments.
-
class
pip_services3_expressions.tokenizers.
INumberState
Bases:
pip_services3_expressions.tokenizers.ITokenizerState.ITokenizerState
, abc.ABC
Defines an interface for a tokenizer state that processes numbers: integers, floats, and hexadecimals.
-
class
pip_services3_expressions.tokenizers.
IQuoteState
Bases:
pip_services3_expressions.tokenizers.ITokenizerState.ITokenizerState
, abc.ABC
Defines an interface for tokenizer state that processes quoted strings.
-
class
pip_services3_expressions.tokenizers.
ISymbolState
Bases:
pip_services3_expressions.tokenizers.ITokenizerState.ITokenizerState
, abc.ABC
Defines an interface for tokenizer state that processes delimiters.
-
add
(value: str, token_type: pip_services3_expressions.tokenizers.TokenType.TokenType) Add a multi-character symbol.
- Parameters
value – The symbol to add, such as “=:=”
token_type – The token type
-
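Multi-character symbols such as "=:=" are typically matched with a character trie (the generic subpackage's SymbolNode/SymbolRootNode play this role). A self-contained sketch of the idea, with illustrative names rather than the library's actual API:

```python
# Sketch of multi-character symbol matching with a character trie,
# the idea behind ISymbolState.add(). Names are illustrative.
class SymbolTrie:
    def __init__(self):
        self._children = {}
        self._token_type = None  # set when a complete symbol ends here

    def add(self, value: str, token_type: str) -> None:
        node = self
        for ch in value:
            node = node._children.setdefault(ch, SymbolTrie())
        node._token_type = token_type

    def longest_match(self, text: str) -> str:
        # Greedily consume characters, remembering the longest
        # registered symbol seen so far.
        node, best, length = self, "", 0
        for ch in text:
            node = node._children.get(ch)
            if node is None:
                break
            length += 1
            if node._token_type is not None:
                best = text[:length]
        return best

trie = SymbolTrie()
trie.add("<", "Symbol")
trie.add("<=", "Symbol")
trie.add(":=", "Symbol")
print(trie.longest_match("<= 12.3"))  # <=
print(trie.longest_match(":= x"))     # :=
```

Because the trie remembers the longest complete symbol seen so far, "<=" wins over "<" when both are registered.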
-
class
pip_services3_expressions.tokenizers.
ITokenizer
Bases:
abc.ABC
A tokenizer divides a string into tokens. This class is highly customizable with regard to exactly how this division occurs, but it also has defaults that are suitable for many languages. This class assumes that the character values read from the string lie in the range 0-255. For example, the Unicode value of a capital A is 65, so chr(65) prints out a capital A.
The behavior of a tokenizer depends on its character state table. This table is an array of 256 TokenizerState states. The state table decides which state to enter upon reading a character from the input string. For example, by default, upon reading an 'A', a tokenizer will enter a "word" state. This means the tokenizer will ask a WordState object to consume the 'A', along with the characters after the 'A' that form a word. The state's responsibility is to consume characters and return a complete token.
The default table sets a SymbolState for every character from 0 to 255, and then overrides this with:

From  To    State
0     ' '   whitespaceState
'a'   'z'   wordState
'A'   'Z'   wordState
160   255   wordState
'0'   '9'   numberState
'-'   '-'   numberState
'.'   '.'   numberState
'"'   '"'   quoteState
"'"   "'"   quoteState
'/'   '/'   slashState
In addition to allowing modification of the state table, this class makes each of the states above available. Some of these states are customizable. For example, wordState allows customization of what characters can be part of a word, after the first character.
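The default table can be reconstructed directly from the ranges listed above. In this sketch the states are plain strings standing in for the real state objects:

```python
# Sketch of the default character state table: every slot starts as
# symbolState, then the documented ranges override it. State names
# are strings here; the real table holds state objects.
table = ["symbolState"] * 256

def set_range(frm: int, to: int, state: str) -> None:
    for code in range(frm, to + 1):
        table[code] = state

set_range(0, ord(' '), "whitespaceState")
set_range(ord('a'), ord('z'), "wordState")
set_range(ord('A'), ord('Z'), "wordState")
set_range(160, 255, "wordState")
set_range(ord('0'), ord('9'), "numberState")
set_range(ord('-'), ord('-'), "numberState")
set_range(ord('.'), ord('.'), "numberState")
set_range(ord('"'), ord('"'), "quoteState")
set_range(ord("'"), ord("'"), "quoteState")
set_range(ord('/'), ord('/'), "slashState")

print(table[ord('A')])  # wordState
print(table[ord('7')])  # numberState
print(table[ord('+')])  # symbolState
```

Any character not covered by an override, such as '+', falls through to the initial symbolState.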
-
has_next_token
() → bool Checks whether the next token exists.
- Returns
True if the scanner has a next token.
-
next_token
() → pip_services3_expressions.tokenizers.Token.Token Gets the next token from the scanner.
- Returns
The next token, or None if there are no more tokens left.
-
tokenize_buffer
(buffer: str) → List[pip_services3_expressions.tokenizers.Token.Token] Tokenizes a string buffer into a list of token structures.
- Parameters
buffer – A string buffer to be tokenized.
- Returns
A list of token structures.
-
tokenize_buffer_to_strings
(buffer: str) → List[str] Tokenizes a string buffer into a list of strings.
- Parameters
buffer – A string buffer to be tokenized.
- Returns
A list of token strings.
-
tokenize_stream
(scanner: pip_services3_expressions.io.IScanner.IScanner) → List[pip_services3_expressions.tokenizers.Token.Token] Tokenizes a textual stream into a list of token structures.
- Parameters
scanner – A textual stream to be tokenized.
- Returns
A list of token structures.
-
tokenize_stream_to_string
(scanner: pip_services3_expressions.io.IScanner.IScanner) → List[str] Tokenizes a textual stream into a list of strings.
- Parameters
scanner – A textual stream to be tokenized.
- Returns
A list of token strings.
-
whitespace_state
: pip_services3_expressions.tokenizers.IWhitespaceState.IWhitespaceState
-
-
class
pip_services3_expressions.tokenizers.
ITokenizerState
Bases:
abc.ABC
A tokenizerState returns a token, given a scanner, an initial character read from the scanner, and a tokenizer that is conducting an overall tokenization of the scanner. The tokenizer will typically have a character state table that decides which state to use, depending on an initial character. If a single character is insufficient, a state such as SlashState will read a second character, and may delegate to another state, such as SlashStarState. This prospect of delegation is the reason that the
next_token()
method has a tokenizer argument.
-
next_token
(scanner: pip_services3_expressions.io.IScanner.IScanner, tokenizer: ITokenizer) → pip_services3_expressions.tokenizers.Token.Token Gets the next token from the stream started from the character linked to this state.
- Parameters
scanner – A textual stream to be tokenized.
tokenizer – A tokenizer class that controls the process.
- Returns
The next token from the top of the stream.
-
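The delegation described above (SlashState reading a second character before routing) can be sketched with simplified stand-ins. These classes only illustrate the idea; the real states work against an IScanner and return Token objects:

```python
# Sketch of state delegation: a slash state reads one more character and
# hands off to a comment state, or falls back to a symbol token.
class SlashState:
    def next_token(self, text: str, pos: int):
        # pos points at the '/' that routed the tokenizer into this state.
        if pos + 1 < len(text) and text[pos + 1] == '/':
            return CppCommentState().next_token(text, pos)
        return ("Symbol", "/")

class CppCommentState:
    def next_token(self, text: str, pos: int):
        # Consume characters up to the end of the line.
        end = text.find("\n", pos)
        end = len(text) if end == -1 else end
        return ("Comment", text[pos:end])

print(SlashState().next_token("a / b", 2))       # ('Symbol', '/')
print(SlashState().next_token("// note\nx", 0))  # ('Comment', '// note')
```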
-
class
pip_services3_expressions.tokenizers.
IWhitespaceState
Bases:
pip_services3_expressions.tokenizers.ITokenizerState.ITokenizerState
, abc.ABC
Defines an interface for a tokenizer state that processes whitespace characters.
-
clear_whitespace_chars
() Clears definitions of whitespace characters.
-
set_whitespace_chars
(from_symbol: int, to_symbol: int, enable: bool) Establish the given characters as whitespace to ignore.
- Parameters
from_symbol – First character index of the interval.
to_symbol – Last character index of the interval.
enable – True if this state should ignore characters in the given range.
-
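A range-based API like this is commonly backed by a per-character boolean map. The class below is a self-contained stand-in, not the library implementation:

```python
# Sketch of how set_whitespace_chars()/clear_whitespace_chars() can be
# backed by a 256-entry boolean map over character codes.
class WhitespaceMap:
    def __init__(self):
        self._map = [False] * 256

    def set_whitespace_chars(self, from_symbol: int, to_symbol: int, enable: bool) -> None:
        # Mark (or unmark) every character code in the inclusive range.
        for code in range(from_symbol, to_symbol + 1):
            self._map[code] = enable

    def clear_whitespace_chars(self) -> None:
        self._map = [False] * 256

    def is_whitespace(self, symbol: int) -> bool:
        return 0 <= symbol < 256 and self._map[symbol]

ws = WhitespaceMap()
ws.set_whitespace_chars(0, ord(' '), True)  # control characters and space
print(ws.is_whitespace(ord(' ')))  # True
print(ws.is_whitespace(ord('a')))  # False
```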
-
class
pip_services3_expressions.tokenizers.
IWordState
Bases:
pip_services3_expressions.tokenizers.ITokenizerState.ITokenizerState
, abc.ABC
Defines an interface for a tokenizer state that processes words, identifiers, or keywords.
-
clear_word_chars
() Clears definitions of word chars.
-
set_word_chars
(from_symbol: int, to_symbol: int, enable: bool) Establish characters in the given range as valid characters for part of a word after the first character. Note that the tokenizer must determine which characters are valid as the beginning character of a word.
- Parameters
from_symbol – First character index of the interval.
to_symbol – Last character index of the interval.
enable – True if this state should use characters in the given range.
-
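The note above is worth making concrete: the word state only controls which characters may continue a word, while the tokenizer's state table decides which characters may start one. An illustrative stand-in:

```python
# Sketch of word consumption: the character at `pos` already started the
# word (the state table routed us here); this state consumes only the
# characters allowed to continue it.
def read_word(text: str, pos: int, word_chars: set) -> str:
    end = pos + 1
    while end < len(text) and text[end] in word_chars:
        end += 1
    return text[pos:end]

# Characters permitted after the first character of a word (assumed set).
continuing = set("abcdefghijklmnopqrstuvwxyz0123456789_")
print(read_word("x42 + y", 0, continuing))  # x42
```

Note that '4' and '2' continue the word here even though the default state table would never start a word with a digit.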
-
class
pip_services3_expressions.tokenizers.
Token
(type: pip_services3_expressions.tokenizers.TokenType.TokenType, value: str, line: int, column: int) Bases:
object
A token represents a logical chunk of a string. For example, a typical tokenizer would break the string “1.23 <= 12.3” into three tokens: the number 1.23, a less-than-or-equal symbol, and the number 12.3. A token is a receptacle, and relies on a tokenizer to decide precisely how to divide a string into tokens.
-
property
column
The column number where the token is.
-
property
line
The line number where the token is.
-
property
type
The token type.
-
property
value
The token value.
-
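The "1.23 <= 12.3" example from the description can be pictured as three such receptacles. The dataclass below is a stand-in mirroring the four read-only properties, not the library class itself:

```python
# Sketch of the Token receptacle with the four properties listed above.
from dataclasses import dataclass

@dataclass(frozen=True)
class Token:
    type: str    # the TokenType category
    value: str   # the token text
    line: int    # line number where the token is
    column: int  # column number where the token is

# The example string "1.23 <= 12.3", split into three tokens:
tokens = [
    Token("Float", "1.23", 1, 1),
    Token("Symbol", "<=", 1, 6),
    Token("Float", "12.3", 1, 9),
]
print([t.value for t in tokens])  # ['1.23', '<=', '12.3']
```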
-
class
pip_services3_expressions.tokenizers.
TokenType
(value) Bases:
enum.Enum
Types (categories) of tokens such as “number”, “symbol” or “word”.
-
Comment
= 12
-
Eof
= 1
-
Eol
= 2
-
Float
= 3
-
HexDecimal
= 5
-
Integer
= 4
-
Keyword
= 10
-
Number
= 6
-
Quoted
= 8
-
Special
= 13
-
Symbol
= 7
-
Unknown
= 0
-
Whitespace
= 11
-
Word
= 9
-
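A common use of these categories is filtering a token stream, e.g. dropping whitespace before parsing. The enum below mirrors the values listed above; the stream is illustrative data:

```python
# Sketch of filtering a token stream by category with the TokenType enum.
from enum import Enum

class TokenType(Enum):
    Unknown = 0
    Eof = 1
    Eol = 2
    Float = 3
    Integer = 4
    HexDecimal = 5
    Number = 6
    Symbol = 7
    Quoted = 8
    Word = 9
    Keyword = 10
    Whitespace = 11
    Comment = 12
    Special = 13

stream = [(TokenType.Float, "1.23"), (TokenType.Whitespace, " "),
          (TokenType.Symbol, "<="), (TokenType.Whitespace, " "),
          (TokenType.Float, "12.3")]

# Drop whitespace tokens, as a parser typically would:
significant = [v for t, v in stream if t is not TokenType.Whitespace]
print(significant)  # ['1.23', '<=', '12.3']
```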