
Checker for string formatting operations.

Class StringConstantChecker Check string literals.
Class StringFormatChecker Checks string formatting operations to ensure that the format string is valid and the arguments match the format string.
Function arg_matches_format_type Undocumented
Function get_access_path Given a list of format specifiers, returns the final access path (e.g. a.b.c[0][1]).
Function register Undocumented
Function str_eval Mostly replicate `ast.literal_eval(token)` manually to avoid any performance hit.
Constant DOUBLE_QUOTED_REGEX Undocumented
Constant MSGS Undocumented
Constant OTHER_NODES Undocumented
Constant QUOTE_DELIMITER_REGEX Undocumented
Constant SINGLE_QUOTED_REGEX Undocumented
Function _get_quote_delimiter Returns the quote character used to delimit this token string.
Function _is_long_string Is this string token a "longstring" (is it triple-quoted)?
Function _is_quote_delimiter_chosen_freely Was there a non-awkward option for the quote delimiter?
Constant _AST_NODE_STR_TYPES Undocumented
Constant _PREFIXES Undocumented
def arg_matches_format_type(arg_type: SuccessfulInferenceResult, format_type: str) -> bool: (source)

Undocumented

def get_access_path(key: str|Literal[0], parts: list[tuple[bool, str]]) -> str: (source)

Given a list of format specifiers, returns the final access path (e.g. a.b.c[0][1]).
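
A self-contained sketch of the mapping described above, assuming parts holds (is_attribute, specifier) pairs; the helper name and the shape of parts are assumptions for illustration, not taken from the source.

def get_access_path_sketch(key, parts):
    # Append either an attribute access (".name") or a subscript ("[name]")
    # to the base key for every (is_attribute, specifier) pair.
    path = []
    for is_attribute, specifier in parts:
        path.append(f".{specifier}" if is_attribute else f"[{specifier!r}]")
    return str(key) + "".join(path)

print(get_access_path_sketch("a", [(True, "b"), (True, "c"), (False, 0), (False, 1)]))
# -> a.b.c[0][1]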

def register(linter: PyLinter): (source)

Undocumented
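
register is the usual pylint plugin entry point. A minimal sketch of what it typically does, assuming the two checker classes listed above accept the linter as their only constructor argument (their bodies are not shown here):

from pylint.lint import PyLinter

def register(linter: PyLinter) -> None:
    # Hand both checkers defined in this module to the running linter.
    linter.register_checker(StringFormatChecker(linter))
    linter.register_checker(StringConstantChecker(linter))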

def str_eval(token: str) -> str: (source)

Mostly replicate `ast.literal_eval(token)` manually to avoid any performance hit. Unlike `ast.literal_eval`, this supports f-strings. We have to support all string literal notations: https://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals
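
For illustration, assuming this module is pylint.checkers.strings, the expected behavior on raw source tokens looks roughly like this (outputs are inferred from the description above, not captured from a run):

from pylint.checkers.strings import str_eval  # assumed module path

print(str_eval("'hello'"))          # expected: hello
print(str_eval('f"{x}!"'))          # expected: {x}!  (f-strings are accepted)
print(str_eval('r"""raw text"""'))  # expected: raw text (prefix and triple quotes stripped)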

DOUBLE_QUOTED_REGEX = (source)

Undocumented

Value
re.compile(f"""({'|'.join(_PREFIXES)})?""\"""")

MSGS = (source)

Undocumented

Value
{'E1300': ('Unsupported format character %r (%#02x) at index %d',
           'bad-format-character',
           'Used when an unsupported format character is used in a format string.'),
 'E1301': ('Format string ends in middle of conversion specifier',
           'truncated-format-string',
           'Used when a format string terminates before the end of a conversion 
...
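
For context, the snippet below shows %-formatting mistakes that match the message descriptions above; the mapping of each line to a message id is inferred from those descriptions, and the lines also fail at runtime, which is exactly what the checker guards against statically.

# 'y' is not a supported conversion type: expected to be flagged as E1300 (bad-format-character).
print("value: %y" % (1,))

# The format string ends in the middle of a conversion specifier:
# expected to be flagged as E1301 (truncated-format-string).
print("value: %" % ())
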
OTHER_NODES = (source)

Undocumented

Value
(nodes.Const,
 nodes.List,
 nodes.Lambda,
 nodes.FunctionDef,
 nodes.ListComp,
 nodes.SetComp,
 nodes.GeneratorExp)
QUOTE_DELIMITER_REGEX = (source)

Undocumented

Value
re.compile(f"""({'|'.join(_PREFIXES)})?("|')""", re.DOTALL)
SINGLE_QUOTED_REGEX = (source)

Undocumented

Value
re.compile(f"""({'|'.join(_PREFIXES)})?'''""")
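
A small demonstration of how these three patterns behave, using an illustrative subset of _PREFIXES (the full set is truncated further below, so the subset here is an assumption):

import re

_PREFIXES = {"r", "u", "f", "rb"}  # illustrative subset; the real set is larger
DOUBLE_QUOTED_REGEX = re.compile(f"({'|'.join(_PREFIXES)})?\"\"\"")
SINGLE_QUOTED_REGEX = re.compile(f"({'|'.join(_PREFIXES)})?'''")
QUOTE_DELIMITER_REGEX = re.compile(f"({'|'.join(_PREFIXES)})?(\"|')", re.DOTALL)

print(bool(DOUBLE_QUOTED_REGEX.match('"""docstring"""')))  # True: triple double quotes
print(bool(SINGLE_QUOTED_REGEX.match("r'''raw long'''")))  # True: prefixed triple single quotes
print(bool(DOUBLE_QUOTED_REGEX.match("'short'")))          # False: not a long string
print(QUOTE_DELIMITER_REGEX.match('f"text"').group(2))     # ": the delimiter character
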
def _get_quote_delimiter(string_token: str) -> str: (source)

Returns the quote character used to delimit this token string.

This function checks whether the token is a well-formed string.

Args:
    string_token: The token to be parsed.

Returns:
    A string containing solely the first quote delimiter character in the given string.

Raises:
    ValueError: No quote delimiter characters are present.
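
Illustrative expectations, assuming the module path pylint.checkers.strings; the values follow from the contract above rather than from executing this private helper:

from pylint.checkers.strings import _get_quote_delimiter  # assumed module path

print(_get_quote_delimiter("'foo'"))   # expected: '
print(_get_quote_delimiter('r"bar"'))  # expected: "
# _get_quote_delimiter("foo") is expected to raise ValueError (no delimiter present).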

def _is_long_string(string_token: str) -> bool: (source)

Is this string token a "longstring" (is it triple-quoted)?

Long strings are triple-quoted as defined in
https://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals

This function only checks characters up through the open quotes. Because it's meant to
be applied only to tokens that represent string literals, it doesn't bother to check
for close-quotes (demonstrating that the literal is a well-formed string).

Args:
    string_token: The string token to be parsed.

Returns:
    A boolean representing whether this token matches a longstring regex.
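
Illustrative expectations based on the description above (assumed module path; outputs not captured from a run):

from pylint.checkers.strings import _is_long_string  # assumed module path

print(_is_long_string('"""triple-quoted"""'))    # expected: True
print(_is_long_string("r'''raw longstring'''"))  # expected: True (prefix allowed)
print(_is_long_string("'ordinary'"))             # expected: False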

def _is_quote_delimiter_chosen_freely(string_token: str) -> bool: (source)

Was there a non-awkward option for the quote delimiter?

Args:
    string_token: The quoted string whose delimiters are to be checked.

Returns:
    Whether there was a choice in this token's quote character that would not have
    involved backslash-escaping an interior quote character. Long strings are excepted
    from this analysis under the assumption that their quote characters are set by
    policy.
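
Illustrative expectations based on the description above (assumed module path; the exact return values are an inference from the stated contract):

from pylint.checkers.strings import _is_quote_delimiter_chosen_freely  # assumed module path

# Either delimiter would have worked here: no interior quote to escape.
print(_is_quote_delimiter_chosen_freely('"hello"'))  # expected: True
# Switching to single quotes would force escaping the apostrophe, so the
# delimiter was effectively dictated by the content.
print(_is_quote_delimiter_chosen_freely('"it\'s"'))  # expected: False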

_AST_NODE_STR_TYPES: tuple[str, ...] = (source)

Undocumented

Value
('__builtin__.unicode', '__builtin__.str', 'builtins.str')
_PREFIXES: set[str] = (source)

Undocumented

Value
set(['r',
     'u',
     'R',
     'U',
     'f',
     'F',
     'fr',
...