class documentation

A Scheme lexer.

This parser is checked with pastes from the LISP pastebin at http://paste.lisp.org/ to cover as much syntax as possible. It supports the full Scheme syntax as defined in R5RS.

.. versionadded:: 0.6
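For reference, a minimal usage sketch with the standard Pygments API (assuming this class is Pygments' ``SchemeLexer``; the code snippet and formatter choice are illustrative, not part of this documentation)::

    from pygments import highlight
    from pygments.lexers import SchemeLexer
    from pygments.formatters import TerminalFormatter

    # Highlight a small Scheme expression for terminal output.
    code = '(define (square x) (* x x))'
    print(highlight(code, SchemeLexer(), TerminalFormatter()))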

Methods:

  decimal_cb - Undocumented
  get_tokens_unprocessed - Split ``text`` into (tokentype, text) pairs.

Class Variables:

  aliases - Undocumented
  complex_ - Undocumented
  decimal - Undocumented
  digit - Undocumented
  filenames - Undocumented
  flags - Undocumented
  mimetypes - Undocumented
  name - Undocumented
  naninf - Undocumented
  num - Undocumented
  number_rules - Undocumented
  prefix - Undocumented
  radix - Undocumented
  real - Undocumented
  token_end - Undocumented
  tokens - Undocumented
  ureal - Undocumented
  url - Undocumented
  valid_name - Undocumented

Inherited from Lexer (via RegexLexer):

Methods:

  __init__ - Undocumented
  __repr__ - Undocumented
  add_filter - Add a new stream filter to this lexer.
  analyse_text - Has to return a float between ``0`` and ``1`` that indicates if a lexer wants to highlight this text. Used by ``guess_lexer``. If this method returns ``0``, the lexer won't highlight the text in any case; if it returns ``1``, highlighting with this lexer is guaranteed.
  get_tokens - Return an iterable of (tokentype, value) pairs generated from ``text``. If ``unfiltered`` is set to ``True``, the filtering mechanism is bypassed even if filters are defined. (A usage sketch follows this list.)

Class Variables:

  alias_filenames - Undocumented
  priority - Undocumented

Instance Variables:

  encoding - Undocumented
  ensurenl - Undocumented
  filters - Undocumented
  options - Undocumented
  stripall - Undocumented
  stripnl - Undocumented
  tabsize - Undocumented
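A hedged sketch of how these inherited members are typically used (``stripnl`` and the ``keywordcase`` filter are standard Pygments names; the input snippet is illustrative)::

    from pygments.lexers import SchemeLexer

    # Keyword options passed to the constructor land in ``self.options``;
    # ``stripnl`` is one of the standard Lexer options.
    lexer = SchemeLexer(stripnl=False)

    # Attach a stream filter by name; filters accumulate in ``self.filters``.
    lexer.add_filter('keywordcase', case='upper')

    # ``get_tokens`` yields (tokentype, value) pairs, applying the attached
    # filters unless ``unfiltered=True`` is passed.
    for tokentype, value in lexer.get_tokens('(+ 1 2)'):
        print(tokentype, repr(value))

    # ``analyse_text`` is consulted by ``guess_lexer``: a lexer that
    # implements it returns a score in [0, 1] for how likely it is that
    # the given text is in its language.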
def decimal_cb(self, match):

Undocumented

def get_tokens_unprocessed(self, text):

Split ``text`` into (tokentype, text) pairs. ``stack`` is the initial stack (default: ``['root']``).
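Note that despite the wording "pairs", in current Pygments this method yields (index, tokentype, value) triples, where the index is the character offset of the token within ``text``. A hedged sketch::

    from pygments.lexers import SchemeLexer

    lexer = SchemeLexer()
    # Each item is (index, tokentype, value); the index is the starting
    # offset of ``value`` in the input string.
    for index, tokentype, value in lexer.get_tokens_unprocessed('(define x 42)'):
        print(index, tokentype, repr(value))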

complex_ =

Undocumented

number_rules: dict =

Undocumented

token_end: str =

Undocumented

valid_name: str =

Undocumented
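Although these class variables are undocumented here, in a Pygments ``RegexLexer`` subclass string variables like ``valid_name`` are typically regular-expression fragments composed into the ``tokens`` state machine. A generic, hedged sketch of that pattern (the fragment and toy lexer below are illustrative, not the actual SchemeLexer definitions from pygments/lexers/lisp.py)::

    from pygments.lexer import RegexLexer
    from pygments.token import Name, Number, Whitespace

    # Illustrative regex fragment in the style of ``valid_name``.
    valid_name = r"[\w!$%&*+,/:<=>?@^~|-]+"

    class MiniSchemeLexer(RegexLexer):
        """Toy lexer showing how ``name``, ``aliases`` and ``tokens`` fit together."""
        name = 'MiniScheme'
        aliases = ['minischeme']
        filenames = []  # the real lexer lists patterns such as '*.scm' here

        tokens = {
            'root': [
                (r'\s+', Whitespace),
                (r'-?\d+', Number.Integer),
                (valid_name, Name.Variable),
            ],
        }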