\page jobqueue_design Job queue design

Notes on the Job queuing system architecture.

\section intro Introduction

The data model consists of the following main components:
* The Job object represents a particular deferred task that happens in the
  background. All jobs subclass the Job object and put the main logic in the
  function called run() (see the sketch after this list).
* The JobQueue object represents a particular queue of jobs of a certain type.
  For example, there may be a queue for email jobs and a queue for CDN purge
  jobs.
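
For illustration, a minimal Job subclass might look like the sketch below. The
class name, job type name, and the 'urls' parameter are hypothetical; a real
job type also needs an entry in $wgJobClasses.

\code{.php}
// Hypothetical job class, for illustration only
class ExampleCdnPurgeJob extends Job {
	public function __construct( Title $title, array $params ) {
		// 'exampleCdnPurge' is the job type name used for queue routing
		parent::__construct( 'exampleCdnPurge', $title, $params );
	}

	// The main logic of the deferred task lives in run()
	public function run() {
		foreach ( $this->params['urls'] as $url ) {
			// ...send a purge request for $url...
		}
		return true; // success lets the queue acknowledge (ack) the job
	}
}
\endcode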

\section jobqueue Job queues

Each job type has its own queue and is associated with a storage medium. One
queue might save its jobs in redis while another one would use a database.

The storage medium is defined by the queue class. Before using a job type, you
must define in $wgJobTypeConf a mapping of that job type to a queue class.
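
For example, a configuration might map most job types to the database queue and
a specific type to redis. This is only a sketch: the 'enotifNotify' mapping and
the redis connection details are assumptions, not required settings.

\code{.php}
// In LocalSettings.php (illustrative values)
$wgJobTypeConf['default'] = [
	'class' => 'JobQueueDB', // store jobs in the `job` table
	'order' => 'random',
	'claimTTL' => 3600,
];
$wgJobTypeConf['enotifNotify'] = [
	'class' => 'JobQueueRedis', // store jobs in a redis server
	'redisServer' => 'localhost:6379',
	'redisConfig' => [ 'password' => null ],
	'claimTTL' => 3600,
];
\endcode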

The factory class JobQueueGroup provides helper functions for:
- getting the queue for a given job type
- routing new job insertions to the proper queue
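
For example, a brief sketch using the hypothetical job class from the
introduction (JobQueueGroup::singleton() is the usual entry point here):

\code{.php}
// Route a new job to whatever queue is configured for its type
JobQueueGroup::singleton()->push(
	new ExampleCdnPurgeJob( $title, [ 'urls' => [ 'https://example.org/wiki/Foo' ] ] )
);

// Get the JobQueue object that backs a given job type
$queue = JobQueueGroup::singleton()->get( 'exampleCdnPurge' );
\endcode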

The following queue classes are available:
* JobQueueDB (stores jobs in the `job` table in a database)
* JobQueueRedis (stores jobs in a redis server)

All queue classes support some basic operations (though some may be no-ops):
* enqueueing a batch of jobs
* dequeueing a single job
* acknowledging that a job is completed
* checking if the queue is empty
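
In code, a simplified (non-production) use of these operations might look like
the sketch below, assuming $queue was obtained from JobQueueGroup as above;
real job execution normally goes through runJobs.php rather than a loop like
this.

\code{.php}
$queue->push( [ $jobA, $jobB ] ); // enqueue a batch of jobs

while ( !$queue->isEmpty() ) {    // check if the queue is empty
	$job = $queue->pop();         // dequeue a single job
	if ( !$job ) {
		break; // nothing claimable right now
	}
	if ( $job->run() ) {
		$queue->ack( $job );      // acknowledge that the job completed
	}
}
\endcode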

Some queue classes (like JobQueueDB) may dequeue jobs in random order, while
others might dequeue jobs in exact FIFO order. Callers should thus not assume
that jobs are executed in FIFO order.
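
Where a queue class supports more than one order, the order is typically
selected per job type via the 'order' setting in $wgJobTypeConf. A sketch,
reusing the 'default' entry from above (the exact set of supported orders
depends on the queue class):

\code{.php}
// Ask the default queue class to dequeue jobs in timestamp order rather
// than random order (supported orders vary by queue class)
$wgJobTypeConf['default']['order'] = 'timestamp';
\endcode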

Also note that not all queue classes have the same reliability guarantees.
In-memory queues may lose data when restarted, depending on snapshot and journal
settings (including journal fsync() frequency). Some queue types may remove
jobs entirely when dequeued while leaving the ack() function as a no-op; if such
a job is dequeued by a job runner that crashes before completing it, the job
will be lost. Some jobs, like purging CDN caches after a template change, may
not require durable queues, whereas other jobs might be more important.

\section aggregator Job queue aggregator

The aggregators are used by nextJobDB.php, a script that returns a random ready
queue (on any wiki in the farm) that can be used with runJobs.php. This can be
used in conjunction with any scripts that handle wiki farm job queues. Note that
$wgLocalDatabases defines which wikis are in the wiki farm.
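
For example, a small farm might define the following (the wiki database names
are illustrative assumptions):

\code{.php}
// The wikis that make up the farm, by database name
$wgLocalDatabases = [ 'enwiki', 'frwiki', 'dewiki' ];
\endcode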

Since each job type has its own queue, and wiki farms may have many wikis,
there might be a large number of queues to keep track of. To avoid wasting
large amounts of time polling empty queues, aggregators exist to keep track
of which queues are ready.

The following queue aggregator classes are available:
* JobQueueAggregatorRedis (uses a redis server to track ready queues)
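
The aggregator is selected with $wgJobQueueAggregator. A redis-backed setup
might look like the sketch below; the parameter names and connection details
here are assumptions and should be checked against the aggregator class in use.

\code{.php}
$wgJobQueueAggregator = [
	'class' => 'JobQueueAggregatorRedis',
	'redisServers' => [ 'localhost:6379' ], // assumed parameter name
	'redisConfig' => [ 'password' => null ],
];
\endcode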

Some aggregators cache data for a few minutes while others may always be up to
date. This can be an important factor for jobs that need a low pickup time (or
latency).

\section jobs Job idempotence

Callers should also try to make jobs maintain correctness when executed twice.
This is useful for queues that actually implement ack(), since they may recycle
dequeued but un-acknowledged jobs back into the queue to be attempted again. If
a runner dequeues a job, runs it, but then crashes before calling ack(), the
job may be returned to the queue and run a second time. Jobs like cache purging
can happen several times without any correctness problems. However, a
pathological case would be a bug that causes the failure to repeat
systematically; for example, a job may always throw a DB error at the end of
run(). This is trickier to solve and more obnoxious for things like email jobs.
For such jobs, it might be useful to use a queue that does not retry jobs.
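
As a sketch of one way to keep run() idempotent, a job can recompute derived
state from current data and overwrite it, rather than applying an incremental
change, so a repeated execution converges to the same result. The class, job
type, table, and field names below are hypothetical.

\code{.php}
class ExampleRecountJob extends Job {
	public function __construct( Title $title, array $params ) {
		parent::__construct( 'exampleRecount', $title, $params );
	}

	public function run() {
		$dbw = wfGetDB( DB_MASTER );
		// Recompute the value from scratch and overwrite the row; running
		// this twice yields the same result, so a recycled job is harmless
		$count = (int)$dbw->selectField(
			'pagelinks',
			'COUNT(*)',
			[ 'pl_from' => $this->title->getArticleID() ],
			__METHOD__
		);
		$dbw->replace(
			'example_counts', // hypothetical table
			[ 'ec_page' ],
			[ 'ec_page' => $this->title->getArticleID(), 'ec_count' => $count ],
			__METHOD__
		);
		return true;
	}
}
\endcode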