MediaWiki REL1_30
populateIpChanges.php
<?php

use MediaWiki\MediaWikiServices;

require_once __DIR__ . '/Maintenance.php';

/**
 * Maintenance script that will find all rows in the revision table where
 * rev_user = 0 (user is an IP), and copy relevant fields to the ip_changes
 * table.
 *
 * @ingroup Maintenance
 */
class PopulateIpChanges extends LoggedUpdateMaintenance {
	public function __construct() {
		parent::__construct();

		$this->addDescription( <<<TEXT
This script will find all rows in the revision table where the user is an IP,
and copy relevant fields to the ip_changes table. This backfilled data will
then be available when querying for IP ranges at Special:Contributions.
TEXT
		);
		$this->addOption( 'rev-id', 'The rev_id to start copying from. Default: 0', false, true );
		$this->addOption(
			'max-rev-id',
			'The rev_id to stop at. Default: result of MAX(rev_id)',
			false,
			true
		);
		$this->addOption(
			'throttle',
			'Wait this many milliseconds after copying each batch of revisions. Default: 0',
			false,
			true
		);
		$this->addOption( 'force', 'Run regardless of whether the database says it\'s been run already' );
	}

	public function doDBUpdates() {
		$dbw = $this->getDB( DB_MASTER );

		if ( !$dbw->tableExists( 'ip_changes' ) ) {
			$this->error( 'ip_changes table does not exist', true );
		}

		$lbFactory = MediaWikiServices::getInstance()->getDBLoadBalancerFactory();
		$dbr = $this->getDB( DB_REPLICA, [ 'vslow' ] );
		$throttle = intval( $this->getOption( 'throttle', 0 ) );
		$maxRevId = intval( $this->getOption( 'max-rev-id', 0 ) );
		$start = $this->getOption( 'rev-id', 0 );
		$end = $maxRevId > 0
			? $maxRevId
			: $dbw->selectField( 'revision', 'MAX(rev_id)', false, __METHOD__ );

		if ( empty( $end ) ) {
			$this->output( "No revisions found, aborting.\n" );
			return true;
		}

		$blockStart = $start;
		$attempted = 0;
		$inserted = 0;

		$this->output( "Copying IP revisions to ip_changes, from rev_id $start to rev_id $end\n" );

		while ( $blockStart <= $end ) {
			// Walk the revision table in windows of mBatchSize rev_ids.
			$blockEnd = min( $blockStart + $this->mBatchSize, $end );
			$rows = $dbr->select(
				'revision',
				[ 'rev_id', 'rev_timestamp', 'rev_user_text' ],
				[ "rev_id BETWEEN $blockStart AND $blockEnd", 'rev_user' => 0 ],
				__METHOD__
			);

			$numRows = $rows->numRows();

			if ( !$rows || $numRows === 0 ) {
				$blockStart = $blockEnd + 1;
				continue;
			}

			$this->output( "...checking $numRows revisions for IP edits that need copying, " .
				"between rev_ids $blockStart and $blockEnd\n" );

			$insertRows = [];
			foreach ( $rows as $row ) {
				// Make sure this is really an IP, e.g. not the maintenance user or an imported revision.
				if ( IP::isValid( $row->rev_user_text ) ) {
					$insertRows[] = [
						'ipc_rev_id' => $row->rev_id,
						'ipc_rev_timestamp' => $row->rev_timestamp,
						'ipc_hex' => IP::toHex( $row->rev_user_text ),
					];

					$attempted++;
				}
			}

			if ( $insertRows ) {
				// IGNORE makes reruns safe: rows that already exist are skipped.
				$dbw->insert( 'ip_changes', $insertRows, __METHOD__, 'IGNORE' );

				$inserted += $dbw->affectedRows();
			}

			// Keep replicas from lagging behind the backfill, then honour --throttle.
			$lbFactory->waitForReplication();
			usleep( $throttle * 1000 );

			$blockStart = $blockEnd + 1;
		}

		$this->output( "Attempted to insert $attempted IP revisions, $inserted actually done.\n" );

		return true;
	}

	protected function getUpdateKey() {
		return 'populate ip_changes';
	}
}

$maintClass = "PopulateIpChanges";
require_once RUN_MAINTENANCE_IF_MAIN;
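
Like any MediaWiki maintenance script, this one is run from the command line in the wiki's root directory. A minimal sketch of an invocation, assuming a standard checkout layout (the option values here are illustrative, not recommendations):

    php maintenance/populateIpChanges.php --rev-id 0 --max-rev-id 500000 --throttle 100

Passing --force reruns the backfill even if the updatelog table (consulted via getUpdateKey() above) already records it as done.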
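For context on what the loop writes into ipc_hex, here is a minimal sketch of how the IP class normalizes addresses into fixed-width, sortable hex keys, which is what lets Special:Contributions answer range queries with a simple BETWEEN. It assumes a MediaWiki runtime where the IP class is autoloaded; the sample values reflect the documented IPv4/IPv6 hex forms and are illustrative, not output captured from this script.

    // Illustrative sketch, not part of populateIpChanges.php.
    // IPv4: each octet becomes two uppercase hex digits.
    $hex4 = IP::toHex( '192.0.2.1' );   // 'C0000201'

    // IPv6: a 'v6-' prefix plus 32 zero-padded hex digits.
    $hex6 = IP::toHex( '2001:db8::1' ); // 'v6-20010DB8000000000000000000000001'

    // A CIDR range maps to a pair of hex endpoints, usable as BETWEEN
    // bounds over the ipc_hex column.
    list( $rangeStart, $rangeEnd ) = IP::parseRange( '192.0.2.0/24' );
    // $rangeStart === 'C0000200', $rangeEnd === 'C00002FF'

Because the keys are fixed-width and zero-padded, lexicographic comparison matches numeric IP order, so an index on ipc_hex is all the range lookup needs.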