This document applies only to Git-based repositories, which as of this writing (October 2015) does NOT include MediaWiki deploys.

Process model

Scap’s basic architecture consists of a main scap deploy process run by the user on the deployment host and a number of spawned scap deploy-local subprocesses running over SSH that perform the actual work of each deployment “stage”.
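The fan-out pattern can be sketched as follows. This is an illustrative sketch, not scap's actual implementation: the real SSH invocation, flags, and error handling differ, and the injectable `run` callable exists here only to make the sketch testable.

```python
import subprocess
from typing import Callable, Dict, List


def deploy_local_cmd(target: str, stage: str) -> List[str]:
    # Illustrative command line only; the real invocation scap sends
    # over SSH may carry additional flags and environment setup.
    return ["ssh", target, "scap", "deploy-local", "-s", stage]


def run_stage(targets: List[str], stage: str,
              run: Callable[[List[str]], int] = subprocess.call) -> Dict[str, int]:
    """Run one deployment stage across a group of targets, serially,
    collecting each target's exit status."""
    return {target: run(deploy_local_cmd(target, stage)) for target in targets}
```

In the real tool the degree of parallelism within a stage is configurable; this sketch shows only the serial case.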

Structured logging events are sent back over the existing SSH channel, as line-wise JSON, where they are parsed and fed back into a unified logging stream. Optionally, a scap deploy-log process may be started by the user to filter and view the logging stream during or after the run.
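Consuming such a channel amounts to decoding one JSON object per line. A minimal sketch of that parsing step (the event field names used in the test, such as `host` and `msg`, are invented for illustration and are not scap's actual schema):

```python
import json
from typing import Iterable, List


def parse_log_stream(lines: Iterable[str]) -> List[dict]:
    """Decode line-wise JSON log events from an SSH channel,
    ignoring blank lines and any non-JSON output mixed in."""
    events = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            events.append(json.loads(line))
        except ValueError:  # json.JSONDecodeError subclasses ValueError
            continue        # skip non-JSON noise on the channel
    return events
```

Skipping undecodable lines keeps the unified stream robust when a remote command writes plain text to the same channel.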

[Diagram: the deployer runs scap deploy on the deploy host; a logger writes to scap/log/{tag}.log, which deploy-log {filter} can read (also served via httpd). Over SSH, each target runs deploy-local -s {stage}, which works with the git repo, jinja-templated config, systemctl, the service, and checks.]

Process flow

Scap’s overall deployment process is represented in the following diagram, with a detailed explanation below.

[Diagram: process flow. On the deploy host: resolve targets, prepare config, prepare repo, group targets, deploy each group, deploy complete. On the deploy targets, each group runs the stages in order:
  config: fetch template, provide secrets (via puppet), combine vars, render new config, perform checks
  fetch: fetch repo, checkout revision, update submodules, perform checks
  promote: link repo, link config, restart service, perform checks
  finalize: update state, delete old revs, perform checks]

After some preparation of the local repo and configuration, the main deployment process is run for each of the configured target groups. This process is composed of four distinct stages: config, fetch, promote, and finalize, run across the group's targets in that order. Each stage may execute anywhere from fully serially to highly in parallel, again depending on configuration. For fine-tuning of the groups and stage concurrency, see server_groups and batch_size under Available configuration variables.
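As a rough illustration, a configuration using those two variables might look like the fragment below. Only server_groups and batch_size are named in this document; the section header, group name, and exact key/value syntax are assumptions, so consult Available configuration variables for the authoritative form.

```ini
# Hypothetical configuration fragment -- syntax and section name assumed.
[global]
# Deploy to the canary group first, then the default group.
server_groups: canary, default
# Run each stage on at most 3 targets at a time within a group.
batch_size: 3
```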