class documentation

Parse `code` lines and yield "classified" tokens.

Arguments:

  code       -- string of source code to parse,
  language   -- formal language the code is written in,
  tokennames -- either 'long', 'short', or 'none' (see below).

Merge subsequent tokens of the same token-type.

Iterating over an instance yields the tokens as ``(tokentype, value)``
tuples. The value of `tokennames` configures the naming of the tokentype:

  'long':  downcased full token type name,
  'short': short name defined by pygments.token.STANDARD_TYPES
           (= class argument used in pygments html output),
  'none':  skip lexical analysis.
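A minimal usage sketch. The import path is an assumption (this page appears to describe a lexer wrapper around pygments, e.g. ``docutils.utils.code_analyzer.Lexer``); the token names shown in the comments are illustrative and depend on the installed pygments version::

    # Assumed import path -- adjust to wherever this class actually lives.
    from docutils.utils.code_analyzer import Lexer

    lexer = Lexer('def f():\n    return 42\n', 'python', tokennames='short')
    for tokentype, value in lexer:
        # Each item is a (tokentype, value) tuple; with tokennames='short'
        # the type uses the abbreviated pygments class argument,
        # e.g. 'k' for keywords, 'n' for names.
        print(tokentype, repr(value))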

Method __init__ Set up a lexical analyzer for `code` in `language`.
Method __iter__ Parse self.code and yield "classified" tokens.
Method merge Merge subsequent tokens of the same token-type.
Instance Variable code string of source code to parse.
Instance Variable language formal language the code is written in.
Instance Variable lexer pygments lexer used for the analysis (unset when lexical analysis is skipped).
Instance Variable tokennames token naming scheme: 'long', 'short', or 'none'.
def __init__(self, code, language, tokennames='short'): (source)

Set up a lexical analyzer for `code` in `language`.
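A hedged sketch of what this set-up plausibly involves, assuming pygments' ``get_lexer_by_name`` is used to resolve `language` and that lexical analysis is skipped for ``tokennames='none'`` (the actual implementation may differ, e.g. in how unknown languages are handled)::

    from pygments.lexers import get_lexer_by_name

    class Lexer:
        # Sketch only -- not the actual implementation.
        def __init__(self, code, language, tokennames='short'):
            self.code = code
            self.language = language
            self.tokennames = tokennames
            self.lexer = None
            if tokennames != 'none':
                # Resolve the pygments lexer that will tokenize `code`.
                self.lexer = get_lexer_by_name(language)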

def __iter__(self): (source)

Parse self.code and yield "classified" tokens.
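A hedged sketch of the classification step, assuming the raw tokens come from the pygments lexer and that 'short' names are looked up in ``pygments.token.STANDARD_TYPES`` (a hypothetical helper for illustration, not the library's exact code)::

    from pygments.token import STANDARD_TYPES

    def classify(lexer, code, tokennames='short'):
        # Illustrates the two naming schemes described in the class docstring.
        for tokentype, value in lexer.get_tokens(code):
            if tokennames == 'long':
                # downcased full token type name,
                # e.g. Token.Name.Builtin -> 'name.builtin'
                name = '.'.join(part.lower() for part in tokentype)
            else:  # 'short'
                # class argument used in pygments html output,
                # e.g. Token.Keyword -> 'k'
                name = STANDARD_TYPES.get(tokentype, '')
            yield name, value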

def merge(self, tokens): (source)

Merge subsequent tokens of the same token-type. Also strip the final newline (added by pygments).
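An illustrative sketch of the merging behaviour described above (not the library's exact code): consecutive tokens sharing a token-type are concatenated into one token, and the trailing newline that pygments appends is stripped from the last token::

    def merge(tokens):
        # Combine runs of tokens that share a token-type.
        lasttype = None
        lastval = ''
        for tokentype, value in tokens:
            if tokentype == lasttype:
                lastval += value
            else:
                if lasttype is not None:
                    yield lasttype, lastval
                lasttype, lastval = tokentype, value
        # pygments appends a final newline to the lexed text; strip it here.
        if lastval.endswith('\n'):
            lastval = lastval[:-1]
        if lastval:
            yield lasttype, lastval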

code = (source)

String of source code to parse (as passed to `__init__`).

language = (source)

Formal language the code is written in (as passed to `__init__`).

lexer = (source)

The pygments lexer used to analyze `code`; left unset when `tokennames` is 'none' and lexical analysis is skipped.

tokennames = (source)

Token naming scheme passed to `__init__`: 'long', 'short', or 'none'.