class AbstractRunner(ABC):
Known subclasses: kedro.runner.ParallelRunner, kedro.runner.SequentialRunner, kedro.runner.ThreadRunner
AbstractRunner is the base class for all Pipeline runner implementations.
Method | __init__ | Instantiates the runner class. |
Method | create_default_data_set | Factory method for creating the default dataset for the runner. |
Method | run | Run the Pipeline using the datasets provided by catalog and save results back to the same objects. |
Method | run_only_missing | Run only the missing outputs from the Pipeline using the datasets provided by catalog, and save results back to the same objects. |
Method | _run | The abstract interface for running pipelines, assuming that the inputs have already been checked and normalized by run(). |
Method | _suggest_resume_scenario | Suggest a command to the user to resume a run after it fails. The run should be started from the point closest to the failure for which persisted input exists. |
Instance Variable | _is_async | Undocumented |
Property | _logger | Undocumented |
def __init__(self, is_async: bool = False):
Instantiates the runner class.
Parameters | |
is_async:bool | If True, the node inputs and outputs are loaded and saved asynchronously with threads. Defaults to False. |
def create_default_data_set(self, ds_name: str) -> AbstractDataSet:
Factory method for creating the default dataset for the runner.
Parameters | |
ds_name:str | Name of the missing dataset. |
Returns | |
AbstractDataSet | An instance of an implementation of AbstractDataSet to be used for all unregistered datasets. |
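The factory-method contract above can be sketched in plain Python. `MemoryStore` and `SketchRunner` below are illustrative stand-ins, not the real kedro API, which would return a `MemoryDataSet` here.

```python
from typing import Any


class MemoryStore:
    """Stand-in for an in-memory dataset implementation."""

    def __init__(self) -> None:
        self._data: Any = None

    def save(self, data: Any) -> None:
        self._data = data

    def load(self) -> Any:
        return self._data


class SketchRunner:
    def create_default_data_set(self, ds_name: str) -> MemoryStore:
        # Any dataset a pipeline produces but the catalog does not
        # register gets backed by this default implementation.
        return MemoryStore()


runner = SketchRunner()
store = runner.create_default_data_set("unregistered_output")
store.save(42)
```

Because the runner decides the default dataset type, subclasses can swap in a different backing store (e.g. one safe for multiprocessing) without touching the catalog.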
def run(self, pipeline: Pipeline, catalog: DataCatalog, hook_manager: PluginManager = None, session_id: str = None) -> Dict[str, Any]:
Run the Pipeline using the datasets provided by catalog and save results back to the same objects.
Parameters | |
pipeline:Pipeline | The Pipeline to run. |
catalog:DataCatalog | The DataCatalog from which to fetch data. |
hook_manager:PluginManager | The PluginManager to activate hooks. |
session_id:str | The id of the session. |
Returns | |
Dict[str, Any] | Any node outputs that cannot be processed by the DataCatalog. These are returned in a dictionary, where the keys are defined by the node outputs. |
Raises | |
ValueError | Raised when Pipeline inputs cannot be satisfied. |
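A minimal sketch of run()'s contract: validate that the catalog can satisfy the pipeline's free inputs (raising ValueError otherwise), delegate execution, then hand back any outputs the catalog cannot persist. The classes and the `run` function are simplified stand-ins, not kedro's actual implementation.

```python
from typing import Any, Dict, Set


class SketchPipeline:
    def __init__(self, inputs: Set[str], outputs: Set[str]) -> None:
        self.inputs = inputs
        self.outputs = outputs


class SketchCatalog:
    def __init__(self, datasets: Dict[str, Any]) -> None:
        self._datasets = datasets

    def exists(self, name: str) -> bool:
        return name in self._datasets


def run(pipeline: SketchPipeline, catalog: SketchCatalog) -> Dict[str, Any]:
    # Inputs the pipeline needs but the catalog cannot provide.
    unsatisfied = {n for n in pipeline.inputs if not catalog.exists(n)}
    if unsatisfied:
        raise ValueError(f"Pipeline input(s) {unsatisfied} not found in the DataCatalog")
    # ... _run(pipeline, catalog) would execute the nodes here ...
    # Outputs with no registered dataset are returned to the caller.
    free_outputs = {n for n in pipeline.outputs if not catalog.exists(n)}
    return {name: None for name in free_outputs}
```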
def run_only_missing(self, pipeline: Pipeline, catalog: DataCatalog, hook_manager: PluginManager) -> Dict[str, Any]:
Run only the missing outputs from the Pipeline using the datasets provided by catalog, and save results back to the same objects.
Parameters | |
pipeline:Pipeline | The Pipeline to run. |
catalog:DataCatalog | The DataCatalog from which to fetch data. |
hook_manager:PluginManager | The PluginManager to activate hooks. |
Returns | |
Dict[str, Any] | Any node outputs that cannot be processed by the DataCatalog. These are returned in a dictionary, where the keys are defined by the node outputs. |
Raises | |
ValueError | Raised when Pipeline inputs cannot be satisfied. |
def _run(self, pipeline: Pipeline, catalog: DataCatalog, hook_manager: PluginManager, session_id: str = None):
The abstract interface for running pipelines, assuming that the inputs have already been checked and normalized by run().
Parameters | |
pipeline:Pipeline | The Pipeline to run. |
catalog:DataCatalog | The DataCatalog from which to fetch data. |
hook_manager:PluginManager | The PluginManager to activate hooks. |
session_id:str | The id of the session. |
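The split between run() and _run() is a template-method pattern: the base class owns validation, subclasses own execution. The sketch below illustrates that shape with simplified, hypothetical classes (kedro's real runners operate on Pipeline/DataCatalog objects, not bare callables).

```python
from abc import ABC, abstractmethod
from typing import Callable, Dict, List


class SketchAbstractRunner(ABC):
    def run(self, nodes: List[Callable[[Dict], Dict]], data: Dict) -> Dict:
        # Public entry point: check inputs, then delegate.
        if not nodes:
            raise ValueError("Pipeline contains no nodes")
        return self._run(nodes, data)

    @abstractmethod
    def _run(self, nodes: List[Callable[[Dict], Dict]], data: Dict) -> Dict:
        """Execute the nodes; inputs are already validated by run()."""


class SketchSequentialRunner(SketchAbstractRunner):
    def _run(self, nodes, data):
        for node in nodes:  # run nodes one at a time, in order
            data.update(node(data))
        return data
```

A parallel or threaded runner would override only _run with a different scheduling strategy, which is how ParallelRunner and ThreadRunner relate to SequentialRunner.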
def _suggest_resume_scenario(self, pipeline: Pipeline, done_nodes: Iterable[Node], catalog: DataCatalog):
Suggest a command to the user to resume a run after it fails. The run should be started from the point closest to the failure for which persisted input exists.
Parameters | |
pipeline:Pipeline | the Pipeline of the run. |
done_nodes:Iterable[Node] | the ``Node``s that executed successfully. |
catalog:DataCatalog | the DataCatalog of the run. |
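The resume heuristic can be sketched as a filter over the not-yet-done nodes: a node is a safe restart point only if every input it needs has a persisted copy to load from. The dict/set data structures here are simplified stand-ins for kedro's Pipeline and DataCatalog.

```python
from typing import Dict, List, Set


def suggest_resume_nodes(
    pipeline: Dict[str, Set[str]],  # node name -> its input dataset names
    done_nodes: Set[str],
    persisted: Set[str],            # dataset names with persisted copies
) -> List[str]:
    remaining = [n for n in pipeline if n not in done_nodes]
    # Keep only nodes whose inputs can all be loaded from storage;
    # these are the points closest to the failure a user can resume from.
    return sorted(n for n in remaining if pipeline[n] <= persisted)
```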