remote¶
Remote module to execute commands on hosts via Cumin.
- exception spicerack.remote.RemoteCheckError[source]¶
Bases:
spicerack.exceptions.SpicerackCheckError
Custom exception class for check errors of this module.
- exception spicerack.remote.RemoteClusterExecutionError(results: List[Tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]], failures: List[spicerack.remote.RemoteExecutionError])[source]¶
Bases:
spicerack.remote.RemoteError
Custom exception class for collecting multiple execution errors on a cluster.
Override the parent constructor to add failures and results as attributes.
- exception spicerack.remote.RemoteError[source]¶
Bases:
spicerack.exceptions.SpicerackError
Custom exception class for errors of this module.
- exception spicerack.remote.RemoteExecutionError(retcode: int, message: str)[source]¶
Bases:
spicerack.remote.RemoteError
Custom exception class for remote execution errors.
Override parent constructor to add the return code attribute.
- class spicerack.remote.LBRemoteCluster(config: cumin.Config, remote_hosts: spicerack.remote.RemoteHosts, conftool: spicerack.confctl.ConftoolEntity)[source]¶
Bases:
spicerack.remote.RemoteHostsAdapter
Class usable to operate on a cluster of servers with pooling/depooling logic in conftool.
Initialize the instance.
- Parameters
config (cumin.Config) -- cumin configuration.
remote_hosts (spicerack.remote.RemoteHosts) -- the instance to act on the remote hosts.
conftool (spicerack.confctl.ConftoolEntity) -- the conftool entity to operate on.
- reload_services(services: List[str], svc_to_depool: List[str], *, batch_size: int = 1, batch_sleep: Optional[float] = None, verbose: bool = True) List[Tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]] [source]¶
Reload services in batches, removing the host from all the affected services first.
- Parameters
services (list) -- a list of services to act upon.
svc_to_depool (list) -- a list of services (in conftool) to depool.
batch_size (int) -- the batch size for Cumin, as an integer. Defaults to 1.
batch_sleep (float, optional) -- the batch sleep in seconds between groups of runs.
verbose (bool, optional) -- whether to print Cumin's output and progress bars to stdout/stderr.
- Returns
cumin.transports.BaseWorker.get_results to allow iterating over the results.
- Return type
list
- Raises
spicerack.remote.RemoteExecutionError, spicerack.remote.RemoteClusterExecutionError -- if the Cumin execution returns a non-zero exit code.
- restart_services(services: List[str], svc_to_depool: List[str], *, batch_size: int = 1, batch_sleep: Optional[float] = None, verbose: bool = True) List[Tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]] [source]¶
Restart services in batches, removing the host from all the affected services first.
- Parameters
services (list) -- a list of services to act upon.
svc_to_depool (list) -- a list of services (in conftool) to depool.
batch_size (int) -- the batch size for Cumin, as an integer. Defaults to 1.
batch_sleep (float, optional) -- the batch sleep in seconds between groups of runs.
verbose (bool, optional) -- whether to print Cumin's output and progress bars to stdout/stderr.
- Returns
cumin.transports.BaseWorker.get_results to allow iterating over the results.
- Return type
list
- Raises
spicerack.remote.RemoteExecutionError, spicerack.remote.RemoteClusterExecutionError -- if the Cumin execution returns a non-zero exit code.
- run(*commands: Union[str, cumin.transports.Command], svc_to_depool: Optional[List[str]] = None, batch_size: int = 1, batch_sleep: Optional[float] = None, is_safe: bool = False, max_failed_batches: int = 0, print_output: bool = True, print_progress_bars: bool = True) List[Tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]] [source]¶
Run commands while depooling servers in groups of batch_size.
For clusters behind a load balancer, we typically want to be able to depool a server from a specific service, then run any number of commands on it, and finally repool it.
We also want to ensure that at most N hosts are depooled at any time. Since Cumin doesn't have pre- and post-execution hooks, we break the remote run into smaller groups and execute one group at a time, in parallel on all the servers of that group. Note this works a bit differently from Cumin's moving window: here we wait for the execution on all servers in a group before moving on to the next.
- Parameters
*commands (str, cumin.transports.Command) -- Arbitrary number of commands to execute.
svc_to_depool (list) -- A list of services (in conftool) to depool.
batch_size (int, optional) -- the batch size for Cumin, as an integer. Defaults to 1.
batch_sleep (float, optional) -- the batch sleep in seconds to use before scheduling the next batch of hosts.
is_safe (bool, optional) -- whether the command is safe to run also in dry-run mode because it's a read-only command that doesn't modify the state.
max_failed_batches (int, optional) -- Maximum number of batches that can fail. Defaults to 0.
print_output (bool, optional) -- whether to print Cumin's output to stdout.
print_progress_bars (bool, optional) -- whether to print Cumin's progress bars to stderr.
- Returns
cumin.transports.BaseWorker.get_results to allow iterating over the results.
- Return type
list
- Raises
spicerack.remote.RemoteExecutionError, spicerack.remote.RemoteClusterExecutionError -- if the Cumin execution returns a non-zero exit code.
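The group-wise semantics described above can be sketched in plain Python. This is a hypothetical illustration, not the spicerack implementation: depool, run and repool are stub callables standing in for the conftool and Cumin interactions.

```python
# Hypothetical sketch of LBRemoteCluster.run's batching: hosts are split
# into groups of batch_size, and each group is fully depooled, acted
# upon, and repooled before the next group starts.
def batched(hosts, batch_size):
    """Yield successive groups of at most batch_size hosts."""
    for i in range(0, len(hosts), batch_size):
        yield hosts[i:i + batch_size]

def run_in_batches(hosts, batch_size, depool, run, repool):
    """Apply depool/run/repool to each group in turn (stubs, not conftool)."""
    results = []
    for group in batched(hosts, batch_size):
        depool(group)
        try:
            results.extend(run(group))
        finally:
            repool(group)  # always repool, even if the run failed
    return results
```

For example, with five hosts and batch_size=2 the groups are [h1, h2], [h3, h4], [h5], and repooling of one group always completes before the next group is depooled.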
- class spicerack.remote.Remote(config: str, dry_run: bool = True)[source]¶
Bases:
object
Remote class to interact with Cumin.
Initialize the instance.
- Parameters
config (str) -- the Cumin configuration to load.
dry_run (bool, optional) -- whether this is a DRY-RUN.
- query(query_string: str, use_sudo: bool = False) spicerack.remote.RemoteHosts [source]¶
Execute a Cumin query and return the matching hosts.
- Parameters
query_string (str) -- the Cumin query string to select the target hosts.
use_sudo (bool, optional) -- if True, prepend 'sudo -i' to every command.
- Returns
RemoteHosts instance matching the given query.
- Return type
spicerack.remote.RemoteHosts
- query_confctl(conftool: spicerack.confctl.ConftoolEntity, **tags: str) spicerack.remote.LBRemoteCluster [source]¶
Execute a conftool node query and return the matching hosts.
- Parameters
conftool (spicerack.confctl.ConftoolEntity) -- the conftool instance for the node type objects.
tags -- Conftool tags for node type objects as keyword arguments.
- Returns
LBRemoteCluster instance matching the given query.
- Return type
spicerack.remote.LBRemoteCluster
- Raises
- class spicerack.remote.RemoteHosts(config: cumin.Config, hosts: ClusterShell.NodeSet.NodeSet, dry_run: bool = True, use_sudo: bool = False)[source]¶
Bases:
object
Remote Executor class.
This class can be extended to customize the interaction with remote hosts passing a custom factory function to spicerack.remote.Remote.query.
Initialize the instance.
- Parameters
config (cumin.Config) -- the configuration for Cumin.
hosts (ClusterShell.NodeSet.NodeSet) -- the hosts to target for the remote execution.
dry_run (bool, optional) -- whether this is a DRY-RUN.
use_sudo (bool, optional) -- if True, prepend 'sudo -i' to every command.
- Raises
spicerack.remote.RemoteError -- if no hosts were provided.
- static results_to_list(results: Iterator[Tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]], callback: Optional[Callable] = None) List[Tuple[ClusterShell.NodeSet.NodeSet, Any]] [source]¶
Extract execution results into a list converting them with an optional callback.
Todo
move it directly into Cumin.
- Parameters
results (generator) -- generator returned by run_sync() and run_async() to iterate over the results.
callback (callable, optional) -- an optional callable to apply to each result output (it can be multiline). The callback will be called with the string output as its only parameter and must return the extracted value. The return type can be chosen freely.
- Returns
a list of 2-element tuples with the hosts (ClusterShell.NodeSet.NodeSet) as the first item and the extracted output (str) as the second. A list is used because NodeSet instances are not hashable.
- Return type
list
- Raises
spicerack.remote.RemoteError -- if unable to run the callback.
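The callback pattern above can be sketched as follows. This is a hypothetical example, not spicerack code: plain strings stand in for the ClusterShell NodeSet and MsgTreeElem objects that Cumin would return, and the sample output is made up.

```python
# Sketch of a results_to_list-style callback: the callable receives the
# output string for one group of hosts and returns the extracted value.
def first_int(output):
    """Extract the leading integer from a (possibly multiline) output."""
    return int(output.splitlines()[0].split()[0])

# Plain strings stand in for NodeSet / MsgTreeElem objects.
results = [("host[1-2]", "42 packages upgradable\n"), ("host3", "7 packages upgradable\n")]
extracted = [(hosts, first_int(output)) for hosts, output in results]
# extracted == [("host[1-2]", 42), ("host3", 7)]
```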
- run_async(*commands: Union[str, cumin.transports.Command], success_threshold: float = 1.0, batch_size: Optional[Union[int, str]] = None, batch_sleep: Optional[float] = None, is_safe: bool = False, print_output: bool = True, print_progress_bars: bool = True) Iterator[Tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]] [source]¶
Execute commands on hosts matching a query via Cumin in async mode.
- Parameters
*commands (str, cumin.transports.Command) -- arbitrary number of commands to execute on the target hosts.
success_threshold (float, optional) -- the ratio of hosts that must succeed to consider the execution successful; must be between 0.0 and 1.0.
batch_size (int, str, optional) -- the batch size for Cumin, either as a percentage (e.g. 25%) or an absolute number (e.g. 5).
batch_sleep (float, optional) -- the batch sleep in seconds to use in Cumin before scheduling the next host.
is_safe (bool, optional) -- whether the command is safe to run also in dry-run mode because it's a read-only command that doesn't modify the state.
print_output (bool, optional) -- whether to print Cumin's output to stdout.
print_progress_bars (bool, optional) -- whether to print Cumin's progress bars to stderr.
- Returns
cumin.transports.BaseWorker.get_results to allow to iterate over the results.
- Return type
generator
- Raises
spicerack.remote.RemoteExecutionError -- if the Cumin execution returns a non-zero exit code.
- run_sync(*commands: Union[str, cumin.transports.Command], success_threshold: float = 1.0, batch_size: Optional[Union[int, str]] = None, batch_sleep: Optional[float] = None, is_safe: bool = False, print_output: bool = True, print_progress_bars: bool = True) Iterator[Tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]] [source]¶
Execute commands on hosts matching a query via Cumin in sync mode.
- Parameters
*commands (str, cumin.transports.Command) -- arbitrary number of commands to execute on the target hosts.
success_threshold (float, optional) -- the ratio of hosts that must succeed to consider the execution successful; must be between 0.0 and 1.0.
batch_size (int, str, optional) -- the batch size for Cumin, either as a percentage (e.g. 25%) or an absolute number (e.g. 5).
batch_sleep (float, optional) -- the batch sleep in seconds to use in Cumin before scheduling the next host.
is_safe (bool, optional) -- whether the command is safe to run also in dry-run mode because it's a read-only command that doesn't modify the state.
print_output (bool, optional) -- whether to print Cumin's output to stdout.
print_progress_bars (bool, optional) -- whether to print Cumin's progress bars to stderr.
- Returns
cumin.transports.BaseWorker.get_results to allow to iterate over the results.
- Return type
generator
- Raises
spicerack.remote.RemoteExecutionError -- if the Cumin execution returns a non-zero exit code.
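The success_threshold semantics can be sketched as a simple ratio check. This is an illustrative assumption about the threshold comparison, not code from Cumin or spicerack.

```python
# Sketch of success_threshold: an execution counts as successful when
# the fraction of hosts that succeeded meets or exceeds the threshold.
def run_succeeded(n_success, n_total, success_threshold=1.0):
    """Return True if the ratio of successful hosts meets the threshold."""
    return n_success / n_total >= success_threshold

assert run_succeeded(10, 10)                        # default 1.0 requires all hosts
assert not run_succeeded(9, 10)                     # one failure fails the run
assert run_succeeded(8, 10, success_threshold=0.8)  # 80% is enough here
```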
- split(n_slices: int) Iterator[spicerack.remote.RemoteHosts] [source]¶
Split the current remote into n_slices RemoteHosts instances.
- Parameters
n_slices (int) -- the number of slices to split the hosts into.
- Yields
The RemoteHosts instances for the subset of nodes.
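The slicing can be sketched on a plain list of hostnames; the real method operates on a ClusterShell NodeSet, and the evenly-sized contiguous slices shown here are an assumption for illustration.

```python
# Sketch of splitting a host list into n_slices slices, as even as
# possible, mirroring the intent of RemoteHosts.split().
def split_hosts(hosts, n_slices):
    """Yield n_slices contiguous slices of hosts, as even as possible."""
    n_slices = min(n_slices, len(hosts))  # never yield empty slices
    base, extra = divmod(len(hosts), n_slices)
    start = 0
    for i in range(n_slices):
        size = base + (1 if i < extra else 0)
        yield hosts[start:start + size]
        start += size

slices = list(split_hosts(["h1", "h2", "h3", "h4", "h5"], 2))
# slices == [["h1", "h2", "h3"], ["h4", "h5"]]
```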
- uptime(print_progress_bars: bool = True) List[Tuple[ClusterShell.NodeSet.NodeSet, float]] [source]¶
Get current uptime.
- Parameters
print_progress_bars (bool, optional) -- whether to print Cumin's progress bars to stderr.
- Returns
a list of 2-element tuples with the hosts (ClusterShell.NodeSet.NodeSet) as the first item and the uptime (float) as the second.
- Return type
list
- Raises
spicerack.remote.RemoteError -- if unable to parse the output as an uptime.
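On Linux, the first field of /proc/uptime is the uptime in seconds; a per-host parser along these lines would produce the float values returned above. This is a sketch of such parsing, not the verified spicerack implementation.

```python
# Sketch of extracting the uptime from `cat /proc/uptime`-style output:
# the first whitespace-separated field is the uptime in seconds.
def parse_uptime(output):
    """Parse /proc/uptime-style output into uptime seconds as a float."""
    return float(output.split()[0])

seconds = parse_uptime("1067.12 3715.64")
# seconds == 1067.12
```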
- wait_reboot_since(since: datetime.datetime, print_progress_bars: bool = True) None [source]¶
Poll the host until it is reachable and has an uptime lower than the provided datetime.
- Parameters
since (datetime.datetime) -- the time after which the host should have booted.
print_progress_bars (bool, optional) -- whether to print Cumin's progress bars to stderr.
- Raises
spicerack.remote.RemoteCheckError -- if unable to connect to the host or the uptime is higher than expected.
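The check being polled for can be sketched as a comparison between the host's uptime and the time elapsed since the given datetime. This is an illustrative sketch of the condition, not spicerack code.

```python
from datetime import datetime, timedelta

# Sketch of the wait_reboot_since condition: the host counts as rebooted
# when its uptime is smaller than the seconds elapsed since `since`.
def rebooted_since(since, now, uptime_seconds):
    """Return True if uptime_seconds implies the host booted after since."""
    return uptime_seconds < (now - since).total_seconds()

since = datetime(2023, 1, 1, 12, 0, 0)
now = since + timedelta(minutes=10)
assert rebooted_since(since, now, 300.0)       # 5 min uptime: booted after since
assert not rebooted_since(since, now, 900.0)   # 15 min uptime: booted before since
```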
- property hosts: ClusterShell.NodeSet.NodeSet¶
Getter for the hosts property.
- Returns
a copy of the targeted hosts.
- Return type
ClusterShell.NodeSet.NodeSet
- class spicerack.remote.RemoteHostsAdapter(remote_hosts: spicerack.remote.RemoteHosts)[source]¶
Bases:
object
Base adapter to write classes that expand the capabilities of RemoteHosts.
This adapter class is a helper to reduce duplication when writing classes that need to add capabilities to a RemoteHosts instance. The goal is not to extend RemoteHosts but to delegate to its instances. This class fits when a single RemoteHosts instance is enough; for more complex cases, in which multiple RemoteHosts instances must be orchestrated, it's fine to not extend this class and create a standalone one.
Initialize the instance.
- Parameters
remote_hosts (spicerack.remote.RemoteHosts) -- the instance to act on the remote hosts.
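The delegation pattern described above can be sketched as follows. This is a hypothetical example: StubRemoteHosts stands in for a real spicerack.remote.RemoteHosts instance, and the service name is purely illustrative.

```python
# Stub standing in for spicerack.remote.RemoteHosts in this sketch.
class StubRemoteHosts:
    def run_sync(self, *commands):
        return [("host1", "ran: " + "; ".join(commands))]

# Adapter-style class: it holds a RemoteHosts instance and delegates to
# it, instead of subclassing RemoteHosts directly.
class MyServiceHosts:
    """Adds a service-specific capability on top of a RemoteHosts instance."""

    def __init__(self, remote_hosts):
        self._remote_hosts = remote_hosts

    def restart_service(self):
        return self._remote_hosts.run_sync("systemctl restart myservice")

adapter = MyServiceHosts(StubRemoteHosts())
results = adapter.restart_service()
# results == [("host1", "ran: systemctl restart myservice")]
```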