class documentation

Lexer for YAML, a human-friendly data serialization language.

.. versionadded:: 0.11

Method get_tokens_unprocessed Split ``text`` into ``(index, tokentype, text)`` triples. If ``context`` is given, use this lexer context instead.
Method parse_block_scalar_empty_line Process an empty line in a block scalar.
Method parse_block_scalar_indent Process indentation spaces in a block scalar.
Method parse_plain_scalar_indent Process indentation spaces in a plain scalar.
Method reset_indent Reset the indentation levels.
Method save_indent Save a possible indentation level.
Method set_block_scalar_indent Set an explicit indentation level for a block scalar.
Method set_indent Set the previously saved indentation level.
Method something Do not produce empty tokens.
Class Variable aliases Undocumented
Class Variable filenames Undocumented
Class Variable mimetypes Undocumented
Class Variable name Undocumented
Class Variable tokens Undocumented
Class Variable url Undocumented

Inherited from Lexer (via ExtendedRegexLexer, RegexLexer):

Method __init__ Undocumented
Method __repr__ Undocumented
Method add_filter Add a new stream filter to this lexer.
Method analyse_text Has to return a float between ``0`` and ``1`` that indicates if a lexer wants to highlight this text. Used by ``guess_lexer``. If this method returns ``0``, the text will never be highlighted with this lexer; if it returns ``1``, highlighting with this lexer is guaranteed.
Method get_tokens Return an iterable of (tokentype, value) pairs generated from `text`. If `unfiltered` is set to `True`, the filtering mechanism is bypassed even if filters are defined.
Class Variable alias_filenames Undocumented
Class Variable priority Undocumented
Instance Variable encoding Undocumented
Instance Variable ensurenl Undocumented
Instance Variable filters Undocumented
Instance Variable options Undocumented
Instance Variable stripall Undocumented
Instance Variable stripnl Undocumented
Instance Variable tabsize Undocumented
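As a usage sketch of the inherited ``Lexer`` interface (this assumes the class is Pygments' ``YamlLexer``; the constructor options and the ``'raiseonerror'`` filter name are the standard Pygments ones):

```python
from pygments.lexers import YamlLexer

# The standard Lexer options (stripnl, stripall, tabsize, encoding,
# ensurenl, ...) are accepted by the constructor.
lexer = YamlLexer(stripnl=False, tabsize=4)

# add_filter() installs a stream filter by name; 'raiseonerror' makes
# the lexer raise an exception if it emits an Error token.
lexer.add_filter('raiseonerror')

# get_tokens() yields filtered (tokentype, value) pairs.
tokens = list(lexer.get_tokens("key: value\n"))
```

Concatenating the token values reconstructs the input text, which is a convenient sanity check when experimenting with lexer options.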
def get_tokens_unprocessed(self, text=None, context=None):

Split ``text`` into ``(index, tokentype, text)`` triples, where ``index`` is the character offset of the token in the input. If ``context`` is given, use this lexer context instead.
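A minimal usage sketch (assuming this is Pygments' ``YamlLexer``; unlike ``get_tokens``, this method yields raw, unfiltered triples):

```python
from pygments.lexers import YamlLexer

lexer = YamlLexer()
# Each item is an (index, tokentype, value) triple, where index is the
# character offset of the token in the input string.
triples = list(lexer.get_tokens_unprocessed("key: value\n"))
```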

def parse_block_scalar_empty_line(indent_token_class, content_token_class):

Process an empty line in a block scalar.

def parse_block_scalar_indent(token_class):

Process indentation spaces in a block scalar.

def parse_plain_scalar_indent(token_class):

Process indentation spaces in a plain scalar.

def reset_indent(token_class):

Reset the indentation levels.

def save_indent(token_class, start=False):

Save a possible indentation level.

def set_block_scalar_indent(token_class):

Set an explicit indentation level for a block scalar.

def set_indent(token_class, implicit=False):

Set the previously saved indentation level.

def something(token_class):

Do not produce empty tokens.
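The helpers above are callback factories used in the lexer's token rules. As an illustration of the pattern (a hedged re-creation, not necessarily the exact library code), ``something`` can be sketched as a factory whose callback yields a token only when the match is non-empty, then advances the context position:

```python
import re
from pygments.token import Text

def something(token_class):
    """Do not produce empty tokens (sketch of the callback pattern)."""
    def callback(lexer, match, context):
        text = match.group()
        if not text:
            # Empty match: emit nothing and leave the context untouched.
            return
        yield match.start(), token_class, text
        context.pos = match.end()
    return callback

class _Ctx:
    # Hypothetical stand-in for the lexer context; only .pos is needed here.
    pos = 0

m = re.match(r"\w+", "plain scalar")
ctx = _Ctx()
out = list(something(Text)(None, m, ctx))
```

The same factory shape underlies the other helpers: each returns a callback that inspects the match (and the saved indentation state) before deciding which tokens to emit and how to advance the context.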

filenames: list[str]

Undocumented

mimetypes: list[str]

Undocumented
