remote
Remote module to execute commands on hosts via Cumin.
- exception spicerack.remote.RemoteCheckError
Bases: SpicerackCheckError
Custom exception class for check errors of this module.
- exception spicerack.remote.RemoteClusterExecutionError(results: list[tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]], failures: list[spicerack.remote.RemoteExecutionError])
Bases: RemoteError
Custom exception class for collecting multiple execution errors on a cluster.
Override the parent constructor to add failures and results as attributes.
- Parameters:
results (list[tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]]) -- the remote results.
failures (list[spicerack.remote.RemoteExecutionError]) -- the list of exceptions raised in the cluster execution.
- exception spicerack.remote.RemoteError
Bases: SpicerackError
Custom exception class for errors of this module.
- exception spicerack.remote.RemoteExecutionError(retcode: int, message: str, results: collections.abc.Iterator[tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]]) -> None
Bases: RemoteError
Custom exception class for remote execution errors.
Override the parent constructor to add the return code attribute.
- Parameters:
retcode (int) -- the return code of the remote execution.
message (str) -- the exception message.
results (collections.abc.Iterator[tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]]) -- the Cumin results.
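A minimal sketch of handling this exception; the command and the hosts RemoteHosts instance are hypothetical:

    from spicerack.remote import RemoteExecutionError

    try:
        hosts.run_sync("systemctl restart nginx")  # hosts: a RemoteHosts instance
    except RemoteExecutionError as e:
        # The retcode attribute added by the constructor carries the exit code.
        print(f"Remote execution failed with return code {e.retcode}")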
- class spicerack.remote.LBRemoteCluster(config: cumin.Config, remote_hosts: spicerack.remote.RemoteHosts, conftool: spicerack.confctl.ConftoolEntity) -> None
Bases: RemoteHostsAdapter
Class usable to operate on a cluster of servers with pooling/depooling logic in conftool.
Initialize the instance.
- Parameters:
config (cumin.Config) -- the Cumin configuration.
remote_hosts (spicerack.remote.RemoteHosts) -- the instance to act on the remote hosts.
conftool (spicerack.confctl.ConftoolEntity) -- the conftool entity to operate on.
- reload_services(services: list[str], svc_to_depool: list[str], *, batch_size: int = 1, batch_sleep: float | None = None, verbose: bool = True) -> list[tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]]
Reload services in batches, removing the host from all the affected services first.
- Parameters:
services (list[str]) -- the list of service names to reload.
svc_to_depool (list[str]) -- a list of services (in conftool) to depool.
batch_size (int, default: 1) -- the batch size for cumin, as an integer. Defaults to 1.
batch_sleep (typing.Optional[float], default: None) -- the batch sleep between groups of runs.
verbose (bool, default: True) -- whether to print Cumin's output and progress bars to stdout/stderr.
- Return type:
list[tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]]
- Returns:
What cumin.transports.BaseWorker.get_results() returns, to allow iterating over the results.
- Raises:
spicerack.remote.RemoteExecutionError, spicerack.remote.RemoteClusterExecutionError -- if the Cumin execution returns a non-zero exit code.
- restart_services(services: list[str], svc_to_depool: list[str], *, batch_size: int = 1, batch_sleep: float | None = None, verbose: bool = True) -> list[tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]]
Restart services in batches, removing the host from all the affected services first.
- Parameters:
services (list[str]) -- the list of service names to restart.
svc_to_depool (list[str]) -- a list of services (in conftool) to depool.
batch_size (int, default: 1) -- the batch size for cumin, as an integer. Defaults to 1.
batch_sleep (typing.Optional[float], default: None) -- the batch sleep between groups of runs.
verbose (bool, default: True) -- whether to print Cumin's output and progress bars to stdout/stderr.
- Return type:
list[tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]]
- Returns:
What cumin.transports.BaseWorker.get_results() returns, to allow iterating over the results.
- Raises:
spicerack.remote.RemoteExecutionError, spicerack.remote.RemoteClusterExecutionError -- if the Cumin execution returns a non-zero exit code.
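A usage sketch covering both methods, assuming lb_cluster was obtained via Remote.query_confctl() and that the unit and conftool service names are hypothetical:

    # 'nginx.service' and 'nginx' are hypothetical names; adjust to your cluster.
    lb_cluster.restart_services(
        ["nginx.service"],  # services: the units to restart
        ["nginx"],          # svc_to_depool: conftool services to depool first
        batch_size=2,       # at most 2 hosts depooled at a time
    )
    # reload_services() takes the same arguments and reloads instead of restarting.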
- run(*commands: str | cumin.transports.Command, svc_to_depool: list[str] | None = None, batch_size: int = 1, batch_sleep: float | None = None, is_safe: bool = False, max_failed_batches: int = 0, print_output: bool = True, print_progress_bars: bool = True) -> list[tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]]
Run commands while depooling servers in groups of batch_size.
For clusters behind a load balancer, we typically want to be able to depool a server from a specific service, run any number of commands on it, and finally repool it.
We also want to ensure that at most N hosts are depooled at any time. Given that Cumin doesn't have pre- and post-execution hooks, we break the remote run into smaller groups and execute on one group at a time, in parallel on all the servers of that group. Note that this works a bit differently from Cumin's moving window: here we wait for the execution to complete on all servers in a group before moving on to the next one.
- Parameters:
*commands (typing.Union[str, cumin.transports.Command]) -- arbitrary number of commands to execute.
svc_to_depool (typing.Optional[list[str]], default: None) -- a list of services (in conftool) to depool.
batch_size (int, default: 1) -- the batch size for cumin, as an integer. Defaults to 1.
batch_sleep (typing.Optional[float], default: None) -- the batch sleep in seconds to use before scheduling the next batch of hosts.
is_safe (bool, default: False) -- whether the command is safe to run also in dry-run mode because it's read-only.
max_failed_batches (int, default: 0) -- maximum number of batches that can fail. Defaults to 0.
print_output (bool, default: True) -- whether to print Cumin's output to stdout.
print_progress_bars (bool, default: True) -- whether to print Cumin's progress bars to stderr.
- Return type:
list[tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]]
- Returns:
What cumin.transports.BaseWorker.get_results() returns, to allow iterating over the results.
- Raises:
spicerack.remote.RemoteExecutionError, spicerack.remote.RemoteClusterExecutionError -- if the Cumin execution returns a non-zero exit code.
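A sketch of run(), with hypothetical commands and conftool service name:

    # Run two commands on at most 3 depooled hosts at a time, tolerating
    # one failed batch.
    results = lb_cluster.run(
        "apt-get update",
        "apt-get -y upgrade",
        svc_to_depool=["myservice"],
        batch_size=3,
        max_failed_batches=1,
    )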
- class spicerack.remote.Remote(config: str, dry_run: bool = True) -> None
Bases: object
Remote class to interact with Cumin.
Initialize the instance.
- Parameters:
config (str) -- the path to the Cumin configuration file.
dry_run (bool, default: True) -- whether this is a DRY-RUN.
- query(query_string: str, use_sudo: bool = False) -> spicerack.remote.RemoteHosts
Execute a Cumin query and return the matching hosts.
- Parameters:
query_string (str) -- the Cumin query string to select the target hosts.
use_sudo (bool, default: False) -- if True, prepend sudo -i to every command executed on the matching hosts.
- Return type:
spicerack.remote.RemoteHosts
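A minimal sketch of creating a Remote and querying hosts; the configuration path and hostnames are hypothetical:

    from spicerack.remote import Remote

    remote = Remote("/etc/spicerack/cumin_config.yaml", dry_run=True)
    hosts = remote.query("host[1-5].example.org")  # a RemoteHosts instance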
- query_confctl(conftool: spicerack.confctl.ConftoolEntity, **tags: str) -> spicerack.remote.LBRemoteCluster
Execute a conftool node query and return the matching hosts.
- Parameters:
conftool (spicerack.confctl.ConftoolEntity) -- the conftool instance for the node type objects.
tags (str) -- Conftool tags for node type objects as keyword arguments.
- Raises:
spicerack.remote.RemoteError -- if unable to query the hosts.
- Return type:
spicerack.remote.LBRemoteCluster
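A sketch, assuming confctl is a spicerack.confctl.ConftoolEntity obtained elsewhere; the tag names and values are hypothetical and depend on the conftool schema:

    lb_cluster = remote.query_confctl(confctl, cluster="appserver", datacenter="eqiad")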
- class spicerack.remote.RemoteHosts(config: cumin.Config, hosts: ClusterShell.NodeSet.NodeSet, dry_run: bool = True, use_sudo: bool = False) -> None
Bases: object
Remote executor class.
This class can be extended to customize the interaction with remote hosts by passing a custom factory function to spicerack.remote.Remote.query.
Initialize the instance.
- Parameters:
config (cumin.Config) -- the configuration for Cumin.
hosts (ClusterShell.NodeSet.NodeSet) -- the hosts to target for the remote execution.
dry_run (bool, default: True) -- whether this is a DRY-RUN.
use_sudo (bool, default: False) -- if True, prepend sudo -i to every command.
- Raises:
spicerack.remote.RemoteError -- if no hosts were provided.
- get_subset(subset: ClusterShell.NodeSet.NodeSet) -> spicerack.remote.RemoteHosts
Return a new RemoteHosts instance with a subset of the existing set of hosts.
- Parameters:
subset (ClusterShell.NodeSet.NodeSet) -- the subset of hosts to select. They must all be part of the current set.
- Raises:
spicerack.remote.RemoteError -- if any host in the subset is not part of the instance hosts.
- Return type:
spicerack.remote.RemoteHosts
- Returns:
a new instance with only the subset hosts.
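A sketch with hypothetical hostnames, which must be part of the current set:

    from ClusterShell.NodeSet import NodeSet

    subset = hosts.get_subset(NodeSet("host[1-2].example.org"))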
- reboot(batch_size: int = 1, batch_sleep: float | None = 180.0) -> None
Reboot hosts.
- Parameters:
batch_size (int, default: 1) -- how many hosts to reboot in parallel.
batch_sleep (typing.Optional[float], default: 180.0) -- how long to sleep between one reboot and the next.
- Return type:
None
- static results_to_list(results: collections.abc.Iterator[tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]], callback: collections.abc.Callable | None = None) -> list[tuple[ClusterShell.NodeSet.NodeSet, Any]]
Extract execution results into a list, converting them with an optional callback.
Todo
Move it directly into Cumin.
- Parameters:
results (collections.abc.Iterator[tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]]) -- the generator returned by run_sync() and run_async() to iterate over the results.
callback (typing.Optional[collections.abc.Callable], default: None) -- an optional callable to apply to each result output (which can be multiline). The callback will be called with the string output as its only parameter and must return the extracted value. The return type can be chosen freely.
- Return type:
list[tuple[ClusterShell.NodeSet.NodeSet, Any]]
- Returns:
A list of 2-element tuples with the hosts ClusterShell.NodeSet.NodeSet as the first item and the extracted output (a str by default) as the second. This is needed because NodeSet instances are not hashable.
- Raises:
spicerack.remote.RemoteError -- if unable to run the callback.
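A sketch that parses each host's output into a float with a callback; the hosts instance and the command are illustrative:

    from spicerack.remote import RemoteHosts

    raw = hosts.run_sync("cat /proc/loadavg", is_safe=True)
    loads = RemoteHosts.results_to_list(
        raw, callback=lambda output: float(output.split()[0])
    )
    # loads is a list of (NodeSet, float) 2-element tuples.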
- run_async(*commands: str | cumin.transports.Command, success_threshold: float = 1.0, batch_size: int | str | None = None, batch_sleep: float | None = None, is_safe: bool = False, print_output: bool = True, print_progress_bars: bool = True) -> collections.abc.Iterator[tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]]
Execute commands on hosts matching a query via Cumin in async mode and return its results.
- Parameters:
*commands (typing.Union[str, cumin.transports.Command]) -- arbitrary number of commands to execute on the target hosts.
success_threshold (float, default: 1.0) -- the ratio of hosts that must succeed to consider the execution successful; must be between 0.0 and 1.0.
batch_size (typing.Union[int, str, None], default: None) -- the batch size for cumin, either as a percentage (e.g. 25%) or an absolute number (e.g. 5).
batch_sleep (typing.Optional[float], default: None) -- the batch sleep in seconds to use in Cumin before scheduling the next host.
is_safe (bool, default: False) -- whether the command is safe to run also in dry-run mode because it's a read-only command that doesn't modify the state.
print_output (bool, default: True) -- whether to print Cumin's output to stdout.
print_progress_bars (bool, default: True) -- whether to print Cumin's progress bars to stderr.
- Raises:
spicerack.remote.RemoteExecutionError -- if the Cumin execution returns a non-zero exit code.
- Return type:
collections.abc.Iterator[tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]]
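A sketch that tolerates up to 10% of hosts failing, in batches of 25% of the hosts; the command is hypothetical:

    results = hosts.run_async(
        "systemctl is-active nginx",
        success_threshold=0.9,
        batch_size="25%",
    )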
- run_sync(*commands: str | cumin.transports.Command, success_threshold: float = 1.0, batch_size: int | str | None = None, batch_sleep: float | None = None, is_safe: bool = False, print_output: bool = True, print_progress_bars: bool = True) -> collections.abc.Iterator[tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]]
Execute commands on hosts matching a query via Cumin in sync mode and return its results.
- Parameters:
*commands (typing.Union[str, cumin.transports.Command]) -- arbitrary number of commands to execute on the target hosts.
success_threshold (float, default: 1.0) -- the ratio of hosts that must succeed to consider the execution successful; must be between 0.0 and 1.0.
batch_size (typing.Union[int, str, None], default: None) -- the batch size for cumin, either as a percentage (e.g. 25%) or an absolute number (e.g. 5).
batch_sleep (typing.Optional[float], default: None) -- the batch sleep in seconds to use in Cumin before scheduling the next host.
is_safe (bool, default: False) -- whether the command is safe to run also in dry-run mode because it's a read-only command that doesn't modify the state.
print_output (bool, default: True) -- whether to print Cumin's output to stdout.
print_progress_bars (bool, default: True) -- whether to print Cumin's progress bars to stderr.
- Raises:
spicerack.remote.RemoteExecutionError -- if the Cumin execution returns a non-zero exit code.
- Return type:
collections.abc.Iterator[tuple[ClusterShell.NodeSet.NodeSet, ClusterShell.MsgTree.MsgTreeElem]]
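A minimal iteration sketch, assuming hosts is a RemoteHosts instance; ClusterShell's MsgTreeElem.message() returns the raw output bytes:

    # A read-only command, so it is safe to execute in dry-run mode too.
    for nodeset, output in hosts.run_sync("uptime", is_safe=True):
        print(f"{nodeset}: {output.message().decode()}")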
- split(n_slices: int) -> collections.abc.Iterator[RemoteHosts]
Split the current remote hosts into n_slices RemoteHosts instances.
- Parameters:
n_slices (int) -- the number of slices to split the remote hosts into.
- Yields:
spicerack.remote.RemoteHosts -- the instances for the subsets of nodes.
- Return type:
collections.abc.Iterator[RemoteHosts]
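A sketch that operates on the hosts in three slices; the command is illustrative:

    for slice_hosts in hosts.split(3):
        slice_hosts.run_sync("true", print_output=False)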
- uptime(print_progress_bars: bool = True) -> list[tuple[ClusterShell.NodeSet.NodeSet, float]]
Get the current uptime.
- Parameters:
print_progress_bars (bool, default: True) -- whether to print Cumin's progress bars to stderr.
- Return type:
list[tuple[ClusterShell.NodeSet.NodeSet, float]]
- Returns:
A list of 2-element tuple instances with the hosts ClusterShell.NodeSet.NodeSet as the first item and the float uptime as the second item.
- Raises:
spicerack.remote.RemoteError -- if unable to parse the output as an uptime.
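A sketch that filters hosts by uptime, assuming the returned values are seconds:

    recently_rebooted = [nodes for nodes, up in hosts.uptime() if up < 3600.0]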
- wait_reboot_since(since: datetime.datetime, print_progress_bars: bool = True) -> None
Poll the host until it is reachable and has an uptime lower than the provided datetime.
- Parameters:
since (datetime.datetime) -- the time after which the host should have booted.
print_progress_bars (bool, default: True) -- whether to print Cumin's progress bars to stderr.
- Raises:
spicerack.remote.RemoteCheckError -- if unable to connect to the host or if the uptime is higher than expected. In DRY-RUN mode, it raises only if unable to connect.
- Return type:
None
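A sketch of the common reboot-and-wait pattern on a single-host instance; the use of a naive UTC timestamp is an assumption, depending on the host clocks:

    from datetime import datetime

    reboot_time = datetime.utcnow()  # assumption: naive UTC timestamp
    single_host.reboot(batch_size=1)
    single_host.wait_reboot_since(reboot_time)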
- class spicerack.remote.RemoteHostsAdapter(remote_hosts: spicerack.remote.RemoteHosts) -> None
Bases: object
Base adapter to write classes that expand the capabilities of RemoteHosts.
This adapter is a helper class that reduces duplication when writing classes that need to add capabilities to a RemoteHosts instance. The goal is not to extend RemoteHosts but to delegate to its instances. This class fits when a single RemoteHosts instance is enough; for more complex cases, in which multiple RemoteHosts instances must be orchestrated, it's fine to create a standalone class instead of extending this one.
Initialize the instance.
- Parameters:
remote_hosts (spicerack.remote.RemoteHosts) -- the instance to act on the remote hosts.
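A hypothetical adapter subclass; the assumption that the wrapped instance is exposed as self._remote_hosts is not confirmed by this page:

    from spicerack.remote import RemoteHostsAdapter

    class ExampleClusterHosts(RemoteHostsAdapter):
        """Hypothetical adapter adding a domain-specific capability."""

        def disk_usage(self):
            # Assumption: the wrapped RemoteHosts is stored as self._remote_hosts.
            return self._remote_hosts.run_sync("df -h /", is_safe=True)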