$status = StatusValue::newGood();

$n = count( $performOps );
if ( $n > self::MAX_BATCH_SIZE ) {
	$status->fatal( 'backend-fail-batchsize', $n, self::MAX_BATCH_SIZE );

	return $status; // too many operations in one batch
}

$batchId = $journal->getTimestampedUUID();
$ignoreErrors = !empty( $opts['force'] );
$journaled = empty( $opts['nonJournaled'] );
$maxConcurrency = isset( $opts['concurrency'] ) ? $opts['concurrency'] : 1;

$entries = []; // file journal entry list
$predicates = FileOp::newPredicates(); // account for previous ops in pre-checks
$curBatch = []; // concurrent FileOp sub-batch
$curBatchDeps = FileOp::newDependencies(); // paths used in the current sub-batch
$pPerformOps = []; // ordered list of concurrent FileOp sub-batches
$lastBackend = null; // name of the last op's backend

// Do pre-checks for each operation; abort on failure
foreach ( $performOps as $index => $fileOp ) {
	$backendName = $fileOp->getBackend()->getName();
	$fileOp->setBatchId( $batchId );
	// Start a new concurrent sub-batch if this op depends on paths used by the
	// current sub-batch, the concurrency limit was reached, or the backend changed
	if ( $fileOp->dependsOn( $curBatchDeps )
		|| count( $curBatch ) >= $maxConcurrency
		|| ( $backendName !== $lastBackend && count( $curBatch ) )
	) {
		$pPerformOps[] = $curBatch; // push this sub-batch
		$curBatch = []; // start a new sub-batch
		$curBatchDeps = FileOp::newDependencies();
	}
	$lastBackend = $backendName;
	$curBatch[$index] = $fileOp; // keep the original op index
	// Update the list of affected paths in this sub-batch
	$curBatchDeps = $fileOp->applyDependencies( $curBatchDeps );
	// Simulate performing the operation
	$oldPredicates = $predicates;
	$subStatus = $fileOp->precheck( $predicates ); // updates $predicates
	$status->merge( $subStatus );
	if ( $subStatus->isOK() ) {
		if ( $journaled ) { // log the planned changes to the journal
			$entries = array_merge( $entries,
				$fileOp->getJournalEntries( $oldPredicates, $predicates ) );
		}
	} else { // operation failed the pre-check
		$status->success[$index] = false;
		if ( !$ignoreErrors ) {
			return $status; // abort
		}
	}
}
// Push the last sub-batch
if ( count( $curBatch ) ) {
	$pPerformOps[] = $curBatch;
}

// Log the operations in the file journal
if ( count( $entries ) ) {
	$subStatus = $journal->logChangeBatch( $entries, $batchId );
	if ( !$subStatus->isOK() ) {
		return $subStatus; // abort
	}
}

if ( $ignoreErrors ) { // treat pre-check fatals as mere warnings
	$status->setResult( true, $status->value );
}
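The sub-batch grouping rule in the pre-check loop can be sketched in isolation. The following is a standalone illustration, not MediaWiki code: `partitionIntoSubBatches()` and its array-of-`'backend'` inputs are hypothetical stand-ins for `FileOp` objects, and the path-dependency check (`dependsOn()`) is omitted for brevity.

```php
<?php
// Hypothetical sketch of the sub-batch partitioning rule used above: a new
// concurrent sub-batch starts whenever the concurrency limit is reached or
// the backend changes (the real code also splits on path dependencies).
function partitionIntoSubBatches( array $ops, int $maxConcurrency ): array {
	$subBatches = [];
	$curBatch = [];
	$lastBackend = null;
	foreach ( $ops as $index => $op ) {
		$backendName = $op['backend'];
		if ( count( $curBatch ) >= $maxConcurrency
			|| ( $backendName !== $lastBackend && count( $curBatch ) )
		) {
			$subBatches[] = $curBatch; // push this sub-batch
			$curBatch = []; // start a new one
		}
		$lastBackend = $backendName;
		$curBatch[$index] = $op; // keep the original op index
	}
	if ( count( $curBatch ) ) {
		$subBatches[] = $curBatch; // push the last sub-batch
	}
	return $subBatches;
}

// Five ops on two backends with a concurrency limit of 2:
$ops = [
	[ 'backend' => 'local' ],
	[ 'backend' => 'local' ],
	[ 'backend' => 'local' ],
	[ 'backend' => 's3' ],
	[ 'backend' => 's3' ],
];
$batches = partitionIntoSubBatches( $ops, 2 );
echo count( $batches ), "\n"; // prints "3": sub-batches [0,1], [2], [3,4]
```

Note that ops keep their original indexes inside each sub-batch, which is what lets the caller map per-op statuses back into `$status->success[$index]`.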
$aborted = false; // set to true on unexpected errors
foreach ( $pPerformOps as $performOpsBatch ) {
	if ( $aborted ) { // batch aborted; log the remaining ops as failed for recovery
		foreach ( $performOpsBatch as $i => $fileOp ) {
			$status->success[$i] = false;
			$performOpsBatch[$i]->logFailure( 'attempt_aborted' );
		}
		continue;
	}
	$statuses = [];
	$opHandles = [];
	// Get the backend; all ops in a sub-batch use the same backend
	$backend = reset( $performOpsBatch )->getBackend();
	// Get the operation handles, or do the op now if the sub-batch has just one op
	foreach ( $performOpsBatch as $i => $fileOp ) {
		if ( !isset( $status->success[$i] ) ) { // didn't already fail in precheck()
			$subStatus = ( count( $performOpsBatch ) > 1 )
				? $fileOp->attemptAsync()
				: $fileOp->attempt();
			if ( $subStatus->value instanceof FileBackendStoreOpHandle ) {
				$opHandles[$i] = $subStatus->value; // deferred
			} else {
				$statuses[$i] = $subStatus; // already done
			}
		}
	}
	// Try to do all the operations concurrently
	$statuses = $statuses + $backend->executeOpHandlesInternal( $opHandles );
	// Marshall and merge all the responses (blocking)
	foreach ( $performOpsBatch as $i => $fileOp ) {
		if ( !isset( $status->success[$i] ) ) { // didn't already fail in precheck()
			$subStatus = $statuses[$i];
			$status->merge( $subStatus );
			if ( $subStatus->isOK() ) {
				$status->success[$i] = true;
			} else {
				$status->success[$i] = false;
				$aborted = true; // set abort flag; $predicates would now be wrong
			}
		}
	}
}
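The merge of synchronous and deferred results relies on PHP's array union operator: `+` keeps every key of the left operand and only adds keys missing from it, so a status recorded synchronously is never overwritten by the handle results. A minimal standalone illustration (variable names and status strings are hypothetical, standing in for `StatusValue` objects):

```php
<?php
// Hypothetical sketch of the "$statuses = $statuses + ..." union step:
// the left operand's keys win, so synchronous results are preserved and
// only the missing op indexes are filled in from the executed handles.
$statuses = [ 0 => 'done-sync' ]; // op 0 ran synchronously via attempt()
$fromHandles = [ 1 => 'done-async', 2 => 'done-async' ]; // deferred handle results
$statuses = $statuses + $fromHandles;
// Every op index now has exactly one status, keyed by its original index.
```

This is why `executeOpHandlesInternal()` can return results keyed only by the handles it was given: the union re-assembles a complete per-op status map.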
FileBackendStore helper class for performing asynchronous file operations.
Class for handling file operation journaling.
logChangeBatch(array $entries, $batchId)
Log changes made by a batch file operation.
getTimestampedUUID()
Get a statistically unique ID string.
Helper class for representing batch file operations.
static runParallelBatches(array $pPerformOps, StatusValue $status)
Attempt a list of file operation sub-batches in series.
static attempt(array $performOps, array $opts, FileJournal $journal)
Attempt to perform a series of file operations.
static newDependencies()
Get a new empty dependency tracking array for paths read/written to.
static newPredicates()
Get a new empty predicates array for precheck().
Generic operation result class. Has a warning/error list, a boolean status, and an arbitrary value.
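The summary above can be made concrete with a minimal sketch of such a result object. This is a hypothetical `SimpleStatus` class assuming only the behavior named in the summary (error list, boolean status, arbitrary value, per-op success map); the real StatusValue API is considerably richer.

```php
<?php
// Hypothetical minimal StatusValue-like result object (not MediaWiki code).
class SimpleStatus {
	public $value = null;   // arbitrary result value
	public $success = [];   // per-operation success map, as used by attempt()
	private $errors = [];   // warning/error list
	private $ok = true;     // boolean status

	public static function newGood( $value = null ): self {
		$status = new self();
		$status->value = $value;
		return $status;
	}

	// Record a fatal error and mark the status as failed
	public function fatal( string $message, ...$params ): void {
		$this->errors[] = [ 'message' => $message, 'params' => $params ];
		$this->ok = false;
	}

	// Absorb another status: concatenate errors, AND the boolean statuses
	public function merge( SimpleStatus $other ): void {
		$this->errors = array_merge( $this->errors, $other->errors );
		$this->ok = $this->ok && $other->ok;
	}

	public function isOK(): bool {
		return $this->ok;
	}
}
```

In the batch code above, `merge()` is what folds each per-op `$subStatus` into the overall batch status, while `$status->success[$index]` tracks the per-op outcomes independently of the aggregate boolean.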