Module documentation for jinja2.lexer

Implements a Jinja / Python combination lexer. The ``Lexer`` class is used to do some preprocessing: it filters out invalid operators, such as the bitshift operators that are not allowed in templates, and separates template code from Python code in expressions.
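The ``Failure`` class listed below is the mechanism behind that operator filtering. A minimal sketch of the idea (class shape and message are assumed, not the exact Jinja implementation): a rule for a disallowed operator maps to a ``Failure``, so matching it raises an error instead of yielding a token.

```python
# Illustrative sketch: the names and signatures here are assumptions.
class TemplateSyntaxError(Exception):
    pass

class Failure:
    """Raise a TemplateSyntaxError when the rule it belongs to matches."""

    def __init__(self, message: str) -> None:
        self.message = message

    def __call__(self) -> None:
        raise TemplateSyntaxError(self.message)

# A lexer rule for a disallowed operator like "<<" would carry a
# Failure instead of a token type:
fail_on_shift = Failure("unexpected operator '<<'")
try:
    fail_on_shift()
except TemplateSyntaxError as exc:
    print(exc)  # unexpected operator '<<'
```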

Class Failure Class that raises a `TemplateSyntaxError` if called. Used by the `Lexer` to specify known errors.
Class Lexer Class that implements a lexer for a given environment. Automatically created by the environment class, so you usually don't have to create one yourself.
Class OptionalLStrip A special tuple for marking a point in the state that can have lstrip applied.
Class Token A single lexer token; a named tuple of ``(lineno, type, value)``.
Class TokenStream A token stream is an iterable that yields :class:`Token`\s. The parser however does not iterate over it but calls :meth:`next` to go one token ahead. The current active token is stored as :attr:`current`.
Class TokenStreamIterator The iterator for token streams. Iterates over the stream until the eof token is reached.
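A minimal sketch of the ``TokenStream`` protocol described above (the real class has more helpers, and the shapes here are assumed): the parser reads ``.current`` and calls ``next(stream)`` to advance one token rather than iterating.

```python
from collections import deque
from typing import Iterable, NamedTuple

class Token(NamedTuple):
    lineno: int
    type: str
    value: str

TOKEN_EOF = "eof"

class TokenStream:
    """Sketch: iterable over tokens, but a parser normally inspects
    .current and advances explicitly with next()."""

    def __init__(self, tokens: Iterable[Token]) -> None:
        self._queue = deque(tokens)
        self.current = Token(1, "initial", "")
        next(self)  # load the first real token into .current

    def __iter__(self) -> "TokenStream":
        return self

    def __next__(self) -> Token:
        token = self.current
        if self._queue:
            self.current = self._queue.popleft()
        else:
            # Past the end, .current stays pinned at eof.
            self.current = Token(token.lineno, TOKEN_EOF, "")
        return token

stream = TokenStream([Token(1, "name", "foo"), Token(1, "integer", "42")])
print(stream.current.value)  # foo
next(stream)
print(stream.current.type)   # integer
```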
Function compile_rules Compiles all the rules from the environment into a list of rules.
Function count_newlines Count the number of newline characters in the string. This is useful for extensions that filter a stream.
Function describe_token Returns a description of the token.
Function describe_token_expr Like `describe_token` but for token expressions.
Function get_lexer Return a lexer which is probably cached.
Constant TOKEN_ADD Undocumented
Constant TOKEN_ASSIGN Undocumented
Constant TOKEN_BLOCK_BEGIN Undocumented
Constant TOKEN_BLOCK_END Undocumented
Constant TOKEN_COLON Undocumented
Constant TOKEN_COMMA Undocumented
Constant TOKEN_COMMENT Undocumented
Constant TOKEN_COMMENT_BEGIN Undocumented
Constant TOKEN_COMMENT_END Undocumented
Constant TOKEN_DATA Undocumented
Constant TOKEN_DIV Undocumented
Constant TOKEN_DOT Undocumented
Constant TOKEN_EOF Undocumented
Constant TOKEN_EQ Undocumented
Constant TOKEN_FLOAT Undocumented
Constant TOKEN_FLOORDIV Undocumented
Constant TOKEN_GT Undocumented
Constant TOKEN_GTEQ Undocumented
Constant TOKEN_INITIAL Undocumented
Constant TOKEN_INTEGER Undocumented
Constant TOKEN_LBRACE Undocumented
Constant TOKEN_LBRACKET Undocumented
Constant TOKEN_LINECOMMENT Undocumented
Constant TOKEN_LINECOMMENT_BEGIN Undocumented
Constant TOKEN_LINECOMMENT_END Undocumented
Constant TOKEN_LINESTATEMENT_BEGIN Undocumented
Constant TOKEN_LINESTATEMENT_END Undocumented
Constant TOKEN_LPAREN Undocumented
Constant TOKEN_LT Undocumented
Constant TOKEN_LTEQ Undocumented
Constant TOKEN_MOD Undocumented
Constant TOKEN_MUL Undocumented
Constant TOKEN_NAME Undocumented
Constant TOKEN_NE Undocumented
Constant TOKEN_OPERATOR Undocumented
Constant TOKEN_PIPE Undocumented
Constant TOKEN_POW Undocumented
Constant TOKEN_RAW_BEGIN Undocumented
Constant TOKEN_RAW_END Undocumented
Constant TOKEN_RBRACE Undocumented
Constant TOKEN_RBRACKET Undocumented
Constant TOKEN_RPAREN Undocumented
Constant TOKEN_SEMICOLON Undocumented
Constant TOKEN_STRING Undocumented
Constant TOKEN_SUB Undocumented
Constant TOKEN_TILDE Undocumented
Constant TOKEN_VARIABLE_BEGIN Undocumented
Constant TOKEN_VARIABLE_END Undocumented
Constant TOKEN_WHITESPACE Undocumented
Variable float_re Undocumented
Variable ignore_if_empty Undocumented
Variable ignored_tokens Undocumented
Variable integer_re Undocumented
Variable newline_re Undocumented
Variable operator_re Undocumented
Variable operators Undocumented
Variable reverse_operators Undocumented
Variable string_re Undocumented
Variable whitespace_re Undocumented
Class _Rule Undocumented
Function _describe_token_type Undocumented
Variable _lexer_cache Undocumented
def compile_rules(environment): (source)

Compiles all the rules from the environment into a list of rules.

Parameters
environment: Environment (Undocumented)
Returns
t.List[t.Tuple[str, str]] (Undocumented)
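A hedged sketch of what rule compilation involves (names and input shape are assumed, not Jinja's exact code): delimiter strings from the environment are regex-escaped and sorted longest-first, so longer delimiters are tried before any shorter prefix they share.

```python
import re
from typing import List, Tuple

def compile_rules(delimiters: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
    # Sort by literal length, longest first, so e.g. a three-character
    # delimiter wins over a two-character prefix of it.
    rules = sorted(
        ((len(value), name, re.escape(value)) for name, value in delimiters),
        reverse=True,
    )
    return [(name, pattern) for _, name, pattern in rules]

print(compile_rules([("block_begin", "{%"), ("comment_begin", "{#")]))
```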
def count_newlines(value): (source)

Count the number of newline characters in the string. This is useful for extensions that filter a stream.

Parameters
value: str (Undocumented)
Returns
int (Undocumented)
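A self-contained sketch of the counting logic; the regex assumes the module's ``newline_re`` matches ``\r\n``, ``\r``, or ``\n`` as alternatives, so a Windows line ending counts once.

```python
import re

newline_re = re.compile(r"(\r\n|\r|\n)")

def count_newlines(value: str) -> int:
    # Each match is one logical newline, so "\r\n" counts once.
    return len(newline_re.findall(value))

print(count_newlines("a\nb\r\nc\r"))  # 3
```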
def describe_token(token): (source)

Returns a description of the token.

Parameters
token: Token (Undocumented)
Returns
str (Undocumented)
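A sketch of the assumed behaviour: a name token is best described by its own value, while other token types map to a human-readable label. The label table here is illustrative, not Jinja's full table.

```python
from typing import NamedTuple

class Token(NamedTuple):
    lineno: int
    type: str
    value: str

# Illustrative labels only; the real mapping is larger.
_labels = {"eof": "end of template", "comment_begin": "begin of comment"}

def describe_token(token: Token) -> str:
    if token.type == "name":
        return token.value
    return _labels.get(token.type, token.type)

print(describe_token(Token(1, "name", "endfor")))  # endfor
print(describe_token(Token(1, "eof", "")))         # end of template
```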
def describe_token_expr(expr): (source)

Like `describe_token` but for token expressions.

Parameters
expr: str (Undocumented)
Returns
str (Undocumented)
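A sketch of the assumed expression format: a token expression is either a bare type or ``"type:value"``, and name expressions are described by their value.

```python
def describe_token_expr(expr: str) -> str:
    if ":" in expr:
        type_, value = expr.split(":", 1)
        if type_ == "name":
            return value
    else:
        type_ = expr
    # Sketch: the real function maps types to readable labels here.
    return type_

print(describe_token_expr("name:endfor"))  # endfor
print(describe_token_expr("integer"))      # integer
```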
def get_lexer(environment): (source)

Return a lexer which is probably cached.

Parameters
environment: Environment (Undocumented)
Returns
Lexer (Undocumented)
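"Probably cached" can be read as: the module-level ``_lexer_cache`` holds only weak references, so a lexer is reused while something else keeps it alive but may be rebuilt after garbage collection. A sketch under that assumption (keying by a tuple of settings is illustrative):

```python
import weakref
from typing import Tuple

class Lexer:
    def __init__(self, key: Tuple[str, ...]) -> None:
        self.key = key

# Weak values: cached lexers disappear once no strong reference remains.
_lexer_cache: "weakref.WeakValueDictionary" = weakref.WeakValueDictionary()

def get_lexer(key: Tuple[str, ...]) -> Lexer:
    lexer = _lexer_cache.get(key)
    if lexer is None:
        lexer = Lexer(key)
        _lexer_cache[key] = lexer
    return lexer

a = get_lexer(("{%", "%}"))
b = get_lexer(("{%", "%}"))
print(a is b)  # True
```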
TOKEN_ADD = (source)

Undocumented

Value
intern('add')
TOKEN_ASSIGN = (source)

Undocumented

Value
intern('assign')
TOKEN_BLOCK_BEGIN = (source)

Undocumented

Value
intern('block_begin')
TOKEN_BLOCK_END = (source)

Undocumented

Value
intern('block_end')
TOKEN_COLON = (source)

Undocumented

Value
intern('colon')
TOKEN_COMMA = (source)

Undocumented

Value
intern('comma')
TOKEN_COMMENT = (source)

Undocumented

Value
intern('comment')
TOKEN_COMMENT_BEGIN = (source)

Undocumented

Value
intern('comment_begin')
TOKEN_COMMENT_END = (source)

Undocumented

Value
intern('comment_end')
TOKEN_DATA = (source)

Undocumented

Value
intern('data')
TOKEN_DIV = (source)

Undocumented

Value
intern('div')
TOKEN_DOT = (source)

Undocumented

Value
intern('dot')
TOKEN_EOF = (source)

Undocumented

Value
intern('eof')
TOKEN_EQ = (source)

Undocumented

Value
intern('eq')
TOKEN_FLOAT = (source)

Undocumented

Value
intern('float')
TOKEN_FLOORDIV = (source)

Undocumented

Value
intern('floordiv')
TOKEN_GT = (source)

Undocumented

Value
intern('gt')
TOKEN_GTEQ = (source)

Undocumented

Value
intern('gteq')
TOKEN_INITIAL = (source)

Undocumented

Value
intern('initial')
TOKEN_INTEGER = (source)

Undocumented

Value
intern('integer')
TOKEN_LBRACE = (source)

Undocumented

Value
intern('lbrace')
TOKEN_LBRACKET = (source)

Undocumented

Value
intern('lbracket')
TOKEN_LINECOMMENT = (source)

Undocumented

Value
intern('linecomment')
TOKEN_LINECOMMENT_BEGIN = (source)

Undocumented

Value
intern('linecomment_begin')
TOKEN_LINECOMMENT_END = (source)

Undocumented

Value
intern('linecomment_end')
TOKEN_LINESTATEMENT_BEGIN = (source)

Undocumented

Value
intern('linestatement_begin')
TOKEN_LINESTATEMENT_END = (source)

Undocumented

Value
intern('linestatement_end')
TOKEN_LPAREN = (source)

Undocumented

Value
intern('lparen')
TOKEN_LT = (source)

Undocumented

Value
intern('lt')
TOKEN_LTEQ = (source)

Undocumented

Value
intern('lteq')
TOKEN_MOD = (source)

Undocumented

Value
intern('mod')
TOKEN_MUL = (source)

Undocumented

Value
intern('mul')
TOKEN_NAME = (source)

Undocumented

Value
intern('name')
TOKEN_NE = (source)

Undocumented

Value
intern('ne')
TOKEN_OPERATOR = (source)

Undocumented

Value
intern('operator')
TOKEN_PIPE = (source)

Undocumented

Value
intern('pipe')
TOKEN_POW = (source)

Undocumented

Value
intern('pow')
TOKEN_RAW_BEGIN = (source)

Undocumented

Value
intern('raw_begin')
TOKEN_RAW_END = (source)

Undocumented

Value
intern('raw_end')
TOKEN_RBRACE = (source)

Undocumented

Value
intern('rbrace')
TOKEN_RBRACKET = (source)

Undocumented

Value
intern('rbracket')
TOKEN_RPAREN = (source)

Undocumented

Value
intern('rparen')
TOKEN_SEMICOLON = (source)

Undocumented

Value
intern('semicolon')
TOKEN_STRING = (source)

Undocumented

Value
intern('string')
TOKEN_SUB = (source)

Undocumented

Value
intern('sub')
TOKEN_TILDE = (source)

Undocumented

Value
intern('tilde')
TOKEN_VARIABLE_BEGIN = (source)

Undocumented

Value
intern('variable_begin')
TOKEN_VARIABLE_END = (source)

Undocumented

Value
intern('variable_end')
TOKEN_WHITESPACE = (source)

Undocumented

Value
intern('whitespace')
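All of the token-type constants above are built with ``sys.intern``, which guarantees a single shared string object per value. A short illustration of why that helps:

```python
from sys import intern

TOKEN_ADD = intern("add")

# Interned strings are deduplicated, so token-type checks can rely on
# fast identity comparison rather than character-by-character equality.
assert intern("add") is TOKEN_ADD
print("identity check passed")
```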
float_re = (source)

Undocumented

ignore_if_empty = (source)

Undocumented

ignored_tokens = (source)

Undocumented

integer_re = (source)

Undocumented

newline_re = (source)

Undocumented

operator_re = (source)

Undocumented

operators = (source)

Undocumented

reverse_operators = (source)

Undocumented

string_re = (source)

Undocumented

whitespace_re = (source)

Undocumented

def _describe_token_type(token_type): (source)

Undocumented

Parameters
token_type: str (Undocumented)
Returns
str (Undocumented)

_lexer_cache = (source)

Undocumented