class documentation

Lexer for Shen source code.

.. versionadded:: 2.1

Method get_tokens_unprocessed Split ``text`` into ``(index, tokentype, value)`` tuples.
Constant BUILTINS Undocumented
Constant BUILTINS_ANYWHERE Undocumented
Constant DECLARATIONS Undocumented
Constant MAPPINGS Undocumented
Constant SPECIAL_FORMS Undocumented
Class Variable aliases Undocumented
Class Variable filenames Undocumented
Class Variable mimetypes Undocumented
Class Variable name Undocumented
Class Variable symbol_name Undocumented
Class Variable tokens Undocumented
Class Variable url Undocumented
Class Variable valid_name Undocumented
Class Variable valid_symbol_chars Undocumented
Class Variable variable Undocumented
Method _process_declaration Undocumented
Method _process_declarations Undocumented
Method _process_signature Undocumented
Method _process_symbols Undocumented
Method _relevant Undocumented

Inherited from Lexer (via RegexLexer):

Method __init__ Undocumented
Method __repr__ Undocumented
Method add_filter Add a new stream filter to this lexer.
Method analyse_text Has to return a float between ``0`` and ``1`` that indicates if a lexer wants to highlight this text. Used by ``guess_lexer``. If this method returns ``0``, the text will not be highlighted with this lexer in any case; if it returns ``1``, highlighting with this lexer is guaranteed.
Method get_tokens Return an iterable of (tokentype, value) pairs generated from `text`. If `unfiltered` is set to `True`, the filtering mechanism is bypassed even if filters are defined.
Class Variable alias_filenames Undocumented
Class Variable priority Undocumented
Instance Variable encoding Undocumented
Instance Variable ensurenl Undocumented
Instance Variable filters Undocumented
Instance Variable options Undocumented
Instance Variable stripall Undocumented
Instance Variable stripnl Undocumented
Instance Variable tabsize Undocumented
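The inherited ``Lexer`` methods listed above are the usual public entry points. A minimal usage sketch, assuming the ``pygments`` package is installed (the sample source string is made up):

```python
from pygments.lexers import ShenLexer

lexer = ShenLexer()

# get_tokens yields (tokentype, value) pairs; with the default
# `ensurenl` option the reconstructed output always ends in a newline.
pairs = list(lexer.get_tokens("(print hello)"))
assert all(len(pair) == 2 for pair in pairs)
assert ''.join(value for _, value in pairs).endswith('\n')

# add_filter attaches a token-stream filter by name; 'keywordcase'
# is one of the filters shipped with Pygments.
lexer.add_filter('keywordcase', case='upper')
```

``analyse_text`` is consulted by ``pygments.lexers.guess_lexer`` to score candidate lexers; since it is listed here as inherited, guessing falls back on the base-class behaviour.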
def get_tokens_unprocessed(self, text): (source)

Split ``text`` into ``(index, tokentype, value)`` tuples.
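A sketch of calling the method directly (assuming ``pygments`` is installed); note that, unlike ``get_tokens``, the yielded items are triples carrying a source-position index first:

```python
from pygments.lexers import ShenLexer

source = "(print (+ 1 2))"
triples = list(ShenLexer().get_tokens_unprocessed(source))

# Each item is an (index, tokentype, value) triple, and concatenating
# the values in order reconstructs the input text exactly.
assert all(len(t) == 3 for t in triples)
assert ''.join(value for _, _, value in triples) == source
```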

BUILTINS: tuple[str, ...] = (source)

Undocumented

Value
('==',
 '=',
 '*',
 '+',
 '-',
 '/',
 '<',
...
BUILTINS_ANYWHERE: tuple[str, ...] = (source)

Undocumented

Value
('where', 'skip', '>>', '_', '!', '<e>', '<!>')
DECLARATIONS: tuple[str, ...] = (source)

Undocumented

Value
('datatype',
 'define',
 'defmacro',
 'defprolog',
 'defcc',
 'synonyms',
 'declare',
...
MAPPINGS = (source)

Undocumented

Value
{s: Keyword for s in DECLARATIONS}
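The value above is a dict comprehension mapping each declaration name to the ``Keyword`` token type. A standalone sketch of the same lookup-table pattern, using a subset of the names since the full ``DECLARATIONS`` tuple is truncated on this page (the ``relabel`` helper is hypothetical, not part of the lexer):

```python
from pygments.token import Keyword, Literal

# Subset of the declaration names shown in the DECLARATIONS entry above.
DECLARATIONS = ('datatype', 'define', 'defmacro', 'defprolog',
                'defcc', 'synonyms', 'declare')
MAPPINGS = {s: Keyword for s in DECLARATIONS}

# Hypothetical helper: promote a plain symbol token to its mapped
# token type, leaving unmapped symbols untouched.
def relabel(token, value):
    return MAPPINGS.get(value, token), value

assert relabel(Literal.String.Symbol, 'define') == (Keyword, 'define')
assert relabel(Literal.String.Symbol, 'foo') == (Literal.String.Symbol, 'foo')
```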
SPECIAL_FORMS: tuple[str, ...] = (source)

Undocumented

Value
('lambda',
 'get',
 'let',
 'if',
 'cases',
 'cond',
 'put',
...

aliases: list[str] = (source)

Undocumented

filenames: list[str] = (source)

Undocumented

mimetypes: list[str] = (source)

Undocumented

name: str = (source)

Undocumented

symbol_name = (source)

Undocumented

Undocumented

valid_name = (source)

Undocumented

valid_symbol_chars: str = (source)

Undocumented

variable = (source)

Undocumented

def _process_declaration(self, declaration, tokens): (source)

Undocumented

def _process_declarations(self, tokens): (source)

Undocumented

def _process_signature(self, tokens): (source)

Undocumented

def _process_symbols(self, tokens): (source)

Undocumented

def _relevant(self, token): (source)

Undocumented
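The undocumented ``_process_*`` helpers above post-process the raw token stream so that declaration heads and the names they introduce receive more specific token types. An illustrative check, assuming ``pygments`` is installed (the exact relabeling is an internal detail and may vary across versions):

```python
from pygments.lexers import ShenLexer
from pygments.token import Keyword

toks = list(ShenLexer().get_tokens_unprocessed("(define id X -> X)"))
by_value = {value: token for _, token, value in toks}

# 'define' is one of the DECLARATIONS, so the symbol-processing pass
# relabels it via MAPPINGS as a Keyword rather than a plain symbol.
assert by_value['define'] is Keyword
```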