class documentation

class _timelex(object): (source)


Lexer that breaks a date/time string into lexical tokens for consumption by the parser.

Class Method isnum Whether the next character is part of a number
Class Method isspace Whether the next character is whitespace
Class Method isword Whether the next character is part of a word
Class Method split Break the string s into a list of tokens
Method __init__ Initialize the lexer over a string or character stream
Method __iter__ Return the iterator object itself
Method __next__ Return the next token, raising StopIteration when the input is exhausted
Method get_token This function breaks the time string into lexical units (tokens), which can be parsed by the parser. Lexical units are demarcated by changes in the character set, so any continuous string of letters or of numbers is considered one unit.
Method next Alias of __next__, retained for Python 2 compatibility
Instance Variable charstack Characters read from the stream but pushed back for later tokenization
Instance Variable eof Whether the end of the input stream has been reached
Instance Variable instream The stream of characters being tokenized
Instance Variable tokenstack Tokens queued ahead of time when an ambiguous dot-separated string is split into multiple tokens
Class Variable _split_decimal Compiled regular expression used to split dot- or comma-separated tokens
@classmethod
def isnum(cls, nextchar): (source)

Whether the next character is part of a number

@classmethod
def isspace(cls, nextchar): (source)

Whether the next character is whitespace

@classmethod
def isword(cls, nextchar): (source)

Whether the next character is part of a word

@classmethod
def split(cls, s): (source)

Break the string s into a list of tokens; equivalent to collecting every token produced by a lexer constructed over s.
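As a rough illustration of where split places token boundaries, here is a hedged, stdlib-only approximation. The function name naive_split is my own; unlike the real lexer, this version drops whitespace tokens and omits the dot/decimal disambiguation described under get_token.

```python
import re

def naive_split(s):
    # Simplified stand-in for _timelex.split: cut the string wherever the
    # character class changes (letters vs. digits vs. anything else).
    # Whitespace tokens and dot/decimal handling are intentionally omitted.
    return re.findall(r"[A-Za-z]+|\d+|\S", s)

print(naive_split("2009-09-20 4:30"))  # → ['2009', '-', '09', '-', '20', '4', ':', '30']
```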

def __init__(self, instream): (source)

Initialize the lexer over instream, which may be a string or a file-like object of characters.

def __iter__(self): (source)

Return the iterator object itself, so the lexer can be consumed in a for loop.

def __next__(self): (source)

Return the next token from get_token, raising StopIteration when the input is exhausted.

def get_token(self): (source)

This function breaks the time string into lexical units (tokens), which can be parsed by the parser. Lexical units are demarcated by changes in the character set, so any continuous string of letters or of numbers is considered one unit. The main complication arises from the fact that dots ('.') can serve both as separators (e.g. "Sep.20.2009") and as decimal points (e.g. "4:30:21.447"). It is therefore necessary to read the full context of any dot-separated string before breaking it into tokens; to that end, this function maintains a "token stack", for when the ambiguous context demands that multiple tokens be parsed at once.
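The behavior described above can be sketched as a small state machine. This is an illustrative reimplementation run over a whole string, not dateutil's actual code: the names tokenize and _SPLIT_DECIMAL are stand-ins, and the real lexer additionally treats ',' as a decimal point, which is omitted here.

```python
import re

_SPLIT_DECIMAL = re.compile(r"([.,])")  # stand-in for the _split_decimal class variable

def tokenize(s):
    """Sketch of the get_token state machine. States: 'a' = word,
    '0' = number, 'a.'/'0.' = dotted word/number whose dots may turn out
    to be separators rather than decimal points."""
    tokens = []
    token = state = None
    seenletters = False

    def flush():
        # Resolve an ambiguous dotted token: letters, more than one dot,
        # or a trailing dot mean the dots were separators, so the token is
        # re-split (this is what the token stack is used for in get_token).
        nonlocal token, state, seenletters
        if state in ('a.', '0.') and (seenletters or token.count('.') > 1
                                      or token[-1] in '.,'):
            tokens.extend(t for t in _SPLIT_DECIMAL.split(token) if t)
        else:
            tokens.append(token)
        token, state, seenletters = None, None, False

    i = 0
    while i < len(s):
        ch = s[i]
        if state is None:
            if ch.isalpha():
                token, state, seenletters = ch, 'a', True
            elif ch.isdigit():
                token, state = ch, '0'
            else:
                tokens.append(ch)  # whitespace or punctuation: one-char token
        elif state == 'a' and ch.isalpha():
            token += ch
        elif state == 'a' and ch == '.':
            token, state = token + ch, 'a.'
        elif state == '0' and ch.isdigit():
            token += ch
        elif state == '0' and ch == '.':
            token, state = token + ch, '0.'
        elif state == 'a.' and (ch == '.' or ch.isalpha()):
            seenletters = True
            token += ch
        elif state == 'a.' and ch.isdigit() and token[-1] == '.':
            token, state = token + ch, '0.'
        elif state == '0.' and (ch == '.' or ch.isdigit()):
            token += ch
        elif state == '0.' and ch.isalpha() and token[-1] == '.':
            token, state = token + ch, 'a.'
        else:
            if state == 'a.':
                seenletters = True  # a dot surrounded by letters was seen
            flush()
            continue  # re-examine ch with a fresh state
        i += 1
    if state is not None:
        flush()
    return tokens

print(tokenize("Sep.20.2009"))   # → ['Sep', '.', '20', '.', '2009']
print(tokenize("4:30:21.447"))   # → ['4', ':', '30', ':', '21.447']
```

Note how the same character, '.', ends up as a separator token in the first call but stays embedded in the decimal token "21.447" in the second, which is why the full dotted run must be read before any token is emitted.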

def next(self): (source)

Alias of __next__, retained for Python 2 compatibility.

charstack: list = (source)

Stack of characters that have been read from the stream but pushed back for later tokenization.

eof = (source)

Whether the end of the input stream has been reached.

instream = (source)

The stream of characters being tokenized.

tokenstack: list = (source)

Tokens produced ahead of time by get_token when an ambiguous dot-separated string is split; they are returned before any further input is read.

_split_decimal = (source)

Compiled regular expression used to split ambiguous dot- or comma-separated tokens.