class documentation

Lexer for Multipurpose Internet Mail Extensions (MIME) data. This lexer is designed to process nested multipart data.

It assumes that the given data contains both a header and a body, split at an empty line. If no valid header is found, the entire data is treated as body.

Additional options accepted:

`MIME-max-level`
    Maximum recursion level for nested MIME structures. Any negative number is treated as unlimited. (default: -1)

`Content-Type`
    Treat the data as a specific content type. Useful when the header is missing; otherwise the lexer tries to parse the content type from the header. (default: `text/plain`)

`Multipart-Boundary`
    Set the default multipart boundary delimiter. This option is only used when `Content-Type` is `multipart` and the header is missing; by default the lexer tries to parse the boundary from the header. (default: None)

`Content-Transfer-Encoding`
    Treat the data as using a specific transfer encoding; otherwise the lexer tries to parse the encoding from the header. (default: None)

New in version 2.5.

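A minimal usage sketch (not taken from this page; the sample message and the option values are made up for illustration) showing how these options can be passed to the lexer:

    from pygments.lexers import MIMELexer

    # Sample MIME fragment (made up for illustration).
    data = (
        "Content-Type: text/plain; charset=us-ascii\n"
        "Content-Transfer-Encoding: 7bit\n"
        "\n"
        "Hello, world.\n"
    )

    # The option names contain hyphens, so they are passed via dict unpacking.
    options = {
        "MIME-max-level": 2,           # limit recursion into nested multiparts
        "Content-Type": "text/plain",  # fallback when no header is present
    }
    lexer = MIMELexer(**options)

    for tokentype, value in lexer.get_tokens(data):
        print(tokentype, repr(value))
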
Method __init__ Undocumented
Method get_body_tokens Undocumented
Method get_bodypart_tokens Undocumented
Method get_content_type_subtokens Undocumented
Method get_header_tokens Undocumented
Method store_content_transfer_encoding Undocumented
Method store_content_type Undocumented
Class Variable aliases Undocumented
Class Variable attention_headers Undocumented
Class Variable mimetypes Undocumented
Class Variable name Undocumented
Class Variable tokens Undocumented
Instance Variable boundary Undocumented
Instance Variable content_transfer_encoding Undocumented
Instance Variable content_type Undocumented
Instance Variable max_nested_level Undocumented

Inherited from RegexLexer:

Method get_tokens_unprocessed Split ``text`` into (tokentype, text) pairs.
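
A small sketch of calling this method (assumed usage, not from this page). Note that current Pygments releases yield (index, tokentype, value) tuples from this generator, where index is the character offset of each token:

    from pygments.lexers import MIMELexer

    text = "Content-Type: text/plain\n\nhello\n"  # made-up sample input
    lexer = MIMELexer()

    for index, tokentype, value in lexer.get_tokens_unprocessed(text):
        print(index, tokentype, repr(value))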

Inherited from Lexer (via RegexLexer):

Method __repr__ Undocumented
Method add_filter Add a new stream filter to this lexer.
Method analyse_text Has to return a float between ``0`` and ``1`` that indicates if a lexer wants to highlight this text. Used by ``guess_lexer``. If this method returns ``0`` it won't highlight it in any case; if it returns ``1``, highlighting with this lexer is guaranteed.
Method get_tokens Return an iterable of (tokentype, value) pairs generated from `text`. If `unfiltered` is set to `True`, the filtering mechanism is bypassed even if filters are defined. (See the usage sketch after this list.)
Class Variable alias_filenames Undocumented
Class Variable filenames Undocumented
Class Variable priority Undocumented
Class Variable url Undocumented
Instance Variable encoding Undocumented
Instance Variable ensurenl Undocumented
Instance Variable filters Undocumented
Instance Variable options Undocumented
Instance Variable stripall Undocumented
Instance Variable stripnl Undocumented
Instance Variable tabsize Undocumented
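
A short usage sketch for the inherited Lexer API listed above (assumed usage; the sample text and the filter choice are illustrative only):

    from pygments.lexers import MIMELexer, guess_lexer

    text = (
        "MIME-Version: 1.0\n"
        "Content-Type: text/plain\n"
        "\n"
        "body text\n"
    )  # made-up sample input

    lexer = MIMELexer()

    # Attach a built-in filter by name; it applies to the token stream
    # produced by get_tokens unless unfiltered=True is passed.
    lexer.add_filter('whitespace')

    for tokentype, value in lexer.get_tokens(text):
        print(tokentype, repr(value))

    # guess_lexer ranks registered lexers by their analyse_text scores (0..1).
    print(guess_lexer(text))
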
def __init__(self, **options): (source)

Undocumented

def get_body_tokens(self, match): (source)

Undocumented

def get_bodypart_tokens(self, text): (source)

Undocumented

def get_content_type_subtokens(self, match): (source)

Undocumented

def get_header_tokens(self, match): (source)

Undocumented

def store_content_transfer_encoding(self, match): (source)

Undocumented

def store_content_type(self, match): (source)

Undocumented

aliases: list[str] = (source)

Undocumented

attention_headers: set[str] = (source)

Undocumented

mimetypes: list[str] = (source)

Undocumented

name: str = (source)

Undocumented

tokens = (source)

Undocumented

boundary = (source)

Undocumented

content_transfer_encoding = (source)

Undocumented

content_type = (source)

Undocumented

max_nested_level = (source)

Undocumented