site — MediaWiki sites#
Library module representing MediaWiki sites (wikis).
BaseSite — Base Class for Sites#
Objects with site methods independent of the communication interface.
- class pywikibot.site._basesite.BaseSite(code, fam=None, user=None)[source]#
Bases:
ComparableMixin
Site methods that are independent of the communication interface.
- Parameters:
code (str) – the site’s language code
fam (str or pywikibot.family.Family) – wiki family name (optional)
user (str) – bot user name (optional)
- linktrail()#
Return regex for trailing chars displayed as part of a link.
See also
Deprecated since version 7.3: Only supported as APISite method. Use APISite.linktrail instead.
- Return type:
str
- category_redirects(fallback: str = '_default')#
Return list of category redirect templates.
See also
- Return type:
list[str]
- get_edit_restricted_templates()#
Return tuple of edit restricted templates.
Added in version 3.0.
- Return type:
tuple[str, …]
- get_archived_page_templates()#
Return tuple of archived page templates.
Added in version 3.0.
- Return type:
tuple[str, …]
- disambig(fallback='_default')#
Return list of disambiguation templates.
See also
- Parameters:
fallback (str | None)
- Return type:
list[str]
- protocol()#
The protocol to use to connect to the site.
May be overridden to return ‘http’. Other protocols are not supported.
Changed in version 8.2: https is returned instead of http.
See also
- Returns:
protocol that this family uses
- property code#
The identifying code for this Site equal to the wiki prefix.
By convention, this is usually an ISO language code, but it does not have to be.
- property doc_subpage: tuple#
Return the documentation subpage for this Site.
- property family#
The Family object for this Site’s wiki family.
- isInterwikiLink(text)[source]#
Return True if text is in the form of an interwiki link.
If a link object constructed using “text” as the link text parses as belonging to a different site, this method returns True.
- property lang#
The ISO language code for this Site.
Presumed to be equal to the site code, but this can be overridden.
- lock_page(page, block=True)[source]#
Lock page for writing. Must be called before writing any page.
We don’t want different threads trying to write to the same page at the same time, even to different sections.
- Parameters:
page (Page) – the page to be locked
block (bool) – if true, wait until the page is available to be locked; otherwise, raise an exception if page can’t be locked
- property namespaces#
Return dict of valid namespaces on this wiki.
- ns_normalize(value)[source]#
Return canonical local form of namespace name.
- Parameters:
value (str) – A namespace name
- pagename2codes()[source]#
Return list of localized PAGENAMEE tags for the site.
- Return type:
list[str]
- pagenamecodes()[source]#
Return list of localized PAGENAME tags for the site.
- Return type:
list[str]
- redirect()[source]#
Return a default redirect tag for the site.
Changed in version 8.4: return a single generic redirect tag instead of a list of tags. For the list use redirects() instead.
- Return type:
str
- property redirect_regex: Pattern[str]#
Return a compiled regular expression matching on redirect pages.
Group 1 in the regex match object will be the target title.
A redirect starts with hash (#), followed by a keyword, then arbitrary stuff, then a wikilink. The wikilink may contain a label, although this is not useful.
Added in version 8.4: moved from APISite.
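The redirect syntax described above can be sketched with a plain regular expression. This is an illustrative simplification, not the regex Pywikibot builds: the real pattern is derived from the site's localized magic words, while the keyword below is hard-coded.

```python
import re

# Simplified stand-in for redirect_regex: hash, a keyword, arbitrary
# text, then a wikilink whose target title is captured as group 1.
redirect_regex = re.compile(
    r'#(?:REDIRECT)\s*(?:.*?)\[\[(.*?)(?:\]|\|)',
    re.IGNORECASE,
)

m = redirect_regex.match('#REDIRECT [[Main Page|label]]')
print(m.group(1))  # 'Main Page'
```

Group 1 holds the target title; the optional label after the pipe is ignored, matching the behavior described above.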
- redirects()[source]#
Return list of generic redirect tags for the site.
See also redirect() for the default redirect tag.
Added in version 8.4.
- Return type:
list[str]
- sametitle(title1, title2)[source]#
Return True if title1 and title2 identify the same wiki page.
title1 and title2 may be unequal but still identify the same page, if they use different aliases for the same namespace.
- Parameters:
title1 (str)
title2 (str)
- Return type:
bool
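To illustrate why unequal strings can identify the same page, here is a sketch of the kind of normalization involved. The alias table and rules below are made up for the example and are far simpler than what Pywikibot actually does.

```python
# Hypothetical namespace aliases; real aliases come from the wiki.
NS_ALIASES = {'wp': 'wikipedia', 'project': 'wikipedia'}

def normalize(title: str) -> str:
    """Resolve a namespace alias, unify underscores and first-letter case."""
    ns, sep, rest = title.partition(':')
    if sep:
        ns = NS_ALIASES.get(ns.strip().lower(), ns.strip().lower())
        title = f'{ns}:{rest.strip()}'
    title = title.replace('_', ' ')
    return title[:1].upper() + title[1:]

def sametitle(title1: str, title2: str) -> bool:
    return normalize(title1) == normalize(title2)

print(sametitle('WP:Sandbox', 'Wikipedia:Sandbox'))  # True
print(sametitle('main_page', 'Main page'))           # True
```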
- property sitename#
String representing this Site’s name and code.
- property throttle#
Return this Site’s throttle. Initialize a new one if needed.
- unlock_page(page)[source]#
Unlock page. Call as soon as a write operation has completed.
- Parameters:
page (Page) – the page to be unlocked
- Return type:
None
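The lock_page/unlock_page contract can be sketched with one threading.Lock per title. This is a conceptual stand-in, not the Site implementation, and the page titles are arbitrary.

```python
import threading

# One Lock per page title; the registry itself is guarded too.
_page_locks: dict[str, threading.Lock] = {}
_registry_lock = threading.Lock()

def lock_page(title: str, block: bool = True) -> None:
    with _registry_lock:
        lock = _page_locks.setdefault(title, threading.Lock())
    if not lock.acquire(blocking=block):
        raise RuntimeError(f'page {title!r} is already locked')

def unlock_page(title: str) -> None:
    _page_locks[title].release()

# Pair the calls around every write, e.g. with try/finally:
lock_page('Sandbox')
try:
    pass  # ... perform the page write here ...
finally:
    unlock_page('Sandbox')
```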
- property use_hard_category_redirects#
Hard redirects are used for this site.
Originally created as a property for future use in a proposal to replace category redirect templates with hard redirects. This was never implemented and is not used inside the framework.
Deprecated since version 8.5.
APISite — API Interface for Sites#
Objects representing API interface to MediaWiki site.
- class pywikibot.site._apisite.APISite(code, fam=None, user=None)[source]#
Bases:
BaseSite, EchoMixin, FlowMixin, GeneratorsMixin, GeoDataMixin, GlobalUsageMixin, LinterMixin, PageImagesMixin, ProofreadPageMixin, TextExtractsMixin, ThanksFlowMixin, ThanksMixin, UrlShortenerMixin, WikibaseClientMixin
API interface to MediaWiki site.
Do not instantiate directly; use the pywikibot.Site factory function.
- Parameters:
code (str)
fam (str | pywikibot.family.Family | None)
user (str | None)
- property article_path: str#
Get the nice article path without $1.
Deprecated since version 7.0: Replaced by
articlepath()
- property articlepath: str#
Get the nice article path with placeholder.
Added in version 7.0: Replaces
article_path()
- static assert_valid_iter_params(msg_prefix, start, end, reverse, is_ts=True)[source]#
Validate iterating API parameters.
- Parameters:
msg_prefix (str) – The calling method name
start (datetime | int | str) – The start value to compare
end (datetime | int | str) – The end value to compare
reverse (bool) – The reverse option
is_ts (bool) – When comparing timestamps (with is_ts=True) the start is usually greater than end. Comparing titles this is vice versa.
- Raises:
AssertionError – start/end values are not comparable types or are in the wrong order
- Return type:
None
- blockuser(user, expiry, reason, anononly=True, nocreate=True, autoblock=True, noemail=False, reblock=False, allowusertalk=False)[source]#
Block a user for a certain amount of time and for a certain reason.
See also
- Parameters:
user (pywikibot.page.User) – The username/IP to be blocked without a namespace.
expiry (datetime.datetime | str | bool) –
The length or date/time when the block expires. If ‘never’, ‘infinite’ or ‘indefinite’, it never does. If the value is given as a str, it is parsed by PHP’s strtotime function; the relative formats that function accepts are described in the PHP documentation. It is recommended not to use a str if possible, to be independent of the API.
reason (str) – The reason for the block.
anononly (bool) – Disable anonymous edits for this IP.
nocreate (bool) – Prevent account creation.
autoblock (bool) – Automatically block the last used IP address and all subsequent IP addresses from which this account logs in.
noemail (bool) – Prevent user from sending email through the wiki.
reblock (bool) – If the user is already blocked, overwrite the existing block.
allowusertalk (bool) – Whether the user can edit their talk page while blocked.
- Returns:
The data retrieved from the API request.
- Return type:
dict[str, Any]
- categoryinfo(category)[source]#
Retrieve data on contents of category.
- Parameters:
category (Category)
- Return type:
dict[str, int]
- compare(old, diff, difftype='table')[source]#
Corresponding method to the ‘action=compare’ API action.
Hint
Use the diff.html_comparator() function to parse the result.
See also
- Parameters:
old (_CompType) – starting revision ID, title, Page, or Revision
diff (_CompType) – ending revision ID, title, Page, or Revision
difftype (str) – type of diff. One of ‘table’ or ‘inline’.
- Returns:
Returns an HTML string of a diff between two revisions.
- Return type:
str
- data_repository()[source]#
Return the data repository connected to this site.
- Returns:
The data repository if one is connected or None otherwise.
- Return type:
DataSite | None
- delete(page, reason, *, deletetalk=False, oldimage=None)[source]#
Delete a page or a specific old version of a file from the wiki.
Requires appropriate privileges.
See also
Page to be deleted can be given either as Page object or as pageid. To delete a specific version of an image the oldimage identifier must be provided.
Added in version 6.1: renamed from deletepage.
Changed in version 6.1: keyword only parameter oldimage was added.
Changed in version 7.1: keyword only parameter deletetalk was added.
Changed in version 8.1: raises exceptions.NoPageError if page does not exist.
- Parameters:
page (BasePage | int | str) – Page to be deleted or its pageid.
reason (str) – Deletion reason.
deletetalk (bool) – Also delete the talk page, if it exists.
oldimage (str | None) – oldimage id of the file version to be deleted. If a BasePage object is given with page parameter, it has to be a FilePage.
- Raises:
TypeError, ValueError – page has wrong type/value.
- Return type:
None
- deleterevs(targettype, ids, *, hide=None, show=None, reason='', target=None)[source]#
Delete or undelete specified page revisions, file versions or logs.
See also
If more than one target id is provided, the same action is taken for all of them.
Added in version 6.0.
- Parameters:
targettype (str) – Type of target. One of “archive”, “filearchive”, “logging”, “oldimage”, “revision”.
ids (int | str | list[int | str]) – Identifiers for the revision, log, file version or archive.
hide (str | list[str] | None) – What to delete. Can be “comment”, “content”, “user” or a combination of them in pipe-separated form such as “comment|user”.
show (str | list[str] | None) – What to undelete. Can be “comment”, “content”, “user” or a combination of them in pipe-separated form such as “comment|user”.
reason (str) – Deletion reason.
target (pywikibot.page.Page | str | None) – Page object or page title, if required for the type.
- Return type:
None
- editpage(page, summary=None, minor=True, notminor=False, bot=True, recreate=True, createonly=False, nocreate=False, watch=None, **kwargs)[source]#
Submit an edit to be saved to the wiki.
See also
BasePage.save()
(should be preferred)
- Parameters:
page (BasePage) – The Page to be saved. By default its .text property will be used as the new text to be saved to the wiki
summary (str | None) – The edit summary for the modification (optional, but most wikis strongly encourage its use)
minor (bool) – if True (default), mark edit as minor
notminor (bool) – if True, override account preferences to mark edit as non-minor
recreate (bool) – if True (default), create new page even if this title has previously been deleted
createonly (bool) – if True, raise an error if this title already exists on the wiki
nocreate (bool) – if True, raise an exceptions.NoCreateError exception if the page does not exist
watch (str | None) – Specify how the watchlist is affected by this edit; set to one of watch, unwatch, preferences, nochange:
watch — add the page to the watchlist
unwatch — remove the page from the watchlist
preferences — use the preference settings (default)
nochange — don’t change the watchlist
If None (default), follow bot account’s default settings
bot (bool) – if True and bot right is given, mark edit with bot flag
kwargs (Any)
- Keyword Arguments:
text (str) – Overrides Page.text
section (int | str) – Edit an existing numbered section or a new section (‘new’)
prependtext (str) – Prepend text. Overrides Page.text
appendtext (str) – Append text. Overrides Page.text.
undo (int) – Revision id to undo. Overrides Page.text
- Returns:
True if edit succeeded, False if it failed
- Raises:
AbuseFilterDisallowedError – This action has been automatically identified as harmful, and therefore disallowed
CaptchaError – config.solve_captcha is False and saving the page requires solving a captcha
CascadeLockedPageError – The page is protected with protection cascade
EditConflictError – an edit conflict occurred
Error – No text to be saved or API editing not enabled on site or user is not authorized to edit, create pages or create image redirects on site or bot is not logged in and anon users are not authorized to edit, create pages or to create image redirects or the edit was filtered or the content is too big
KeyError – No ‘result’ found in API response
LockedNoPageError – The page title is protected
LockedPageError – The page has been protected to prevent editing or other actions
NoCreateError – The page you specified doesn’t exist and nocreate is set
NoPageError – recreate is disabled and page does not exist
PageCreatedConflictError – The page you tried to create has been created already
PageDeletedConflictError – The page has been deleted in meantime
SpamblacklistError – The title is blacklisted as spam
TitleblacklistError – The title is blacklisted
ValueError – text keyword is used with one of the override keywords appendtext, prependtext or undo or more than one of the override keywords are used or no text keyword is used together with section keyword.
- Return type:
bool
- expand_text(text, title=None, includecomments=None)[source]#
Parse the given text for preprocessing and rendering.
E.g. expand templates and strip comments if the includecomments parameter is not True. Keeps text inside <nowiki></nowiki> tags unchanged, etc. Can be used to parse magic parser words like {{CURRENTTIMESTAMP}}.
- Parameters:
text (str) – text to be expanded
title (str | None) – page title without section
includecomments (bool | None) – if True do not strip comments
- Return type:
str
- property file_extensions: list[str]#
File extensions enabled on the wiki.
Added in version 8.4.
Changed in version 9.2: also include extensions from the image repository
- static fromDBName(dbname, site=None)[source]#
Create a site from a database name using the sitematrix.
Changed in version 8.3.3: changed from classmethod to staticmethod.
- get_globaluserinfo(user=None, force=False)[source]#
Retrieve globaluserinfo from site and cache it.
Added in version 7.0.
- Parameters:
user (str | int | None) – The user name or user ID whose global info is retrieved. Defaults to the current user.
force (bool) – Whether the cache should be discarded.
- Returns:
A dict with the following keys and values:
id: user id (numeric str)
home: dbname of home wiki
registration: registration date as Timestamp
groups: list of groups (could be empty)
rights: list of rights (could be empty)
editcount: global editcount
- Raises:
TypeError – Inappropriate argument type of ‘user’
- Return type:
dict[str, Any]
- get_parsed_page(page)[source]#
Retrieve parsed text of the page using action=parse.
Changed in version 7.1: raises KeyError instead of AssertionError
See also
- Parameters:
page (BasePage)
- Return type:
str
- get_property_names(force=False)[source]#
Get property names for pages_with_property().
See also
- Parameters:
force (bool) – force to retrieve userinfo ignoring cache
- Return type:
list[str]
- get_searched_namespaces(force=False)[source]#
Retrieve the default searched namespaces for the user.
If no user is logged in, it returns the namespaces used by default. Otherwise it returns the user preferences. It caches the last result and returns it, if the username or login status hasn’t changed.
- Parameters:
force (bool) – Whether the cache should be discarded.
- Returns:
The namespaces which are searched by default.
- Return type:
set[Namespace]
- get_tokens(types, *args, **kwargs)[source]#
Preload one or multiple tokens.
Usage
>>> site = pywikibot.Site()
>>> tokens = site.get_tokens([])  # get all tokens
>>> list(tokens.keys())  # result depends on user
['createaccount', 'login']
>>> tokens = site.get_tokens(['csrf', 'patrol'])
>>> list(tokens.keys())
['csrf', 'patrol']
>>> token = site.get_tokens(['csrf']).get('csrf')  # get a single token
>>> token
'a9f...0a0+\\'
>>> token = site.get_tokens(['unknown'])  # try an invalid token
... # invalid token names show a warning and the key is not in the result
WARNING: API warning (tokens) of unknown format: {'warnings': 'Unrecognized value for parameter "type": foo'}
{}
You should not call this method directly, especially if you only need a specific token. Use the tokens property instead.
Changed in version 8.0: the all parameter is deprecated. Use an empty list for types instead.
Note
args and kwargs are not used; they are kept for the deprecation warning only.
See also
- Parameters:
types (list[str]) – the types of token (e.g., “csrf”, “login”, “patrol”). If the list is empty all available tokens are loaded. See API documentation for full list of types.
- Returns:
a dict with retrieved valid tokens.
- Return type:
dict[str, str]
- getcategoryinfo(category)[source]#
Retrieve data on contents of category.
See also
- Parameters:
category (Category)
- Return type:
None
- getcurrenttimestamp()[source]#
Return the server time as a MediaWiki timestamp string.
It calls server_time first, so it queries the server to get the current server time.
- Returns:
the server time (as ‘yyyymmddhhmmss’)
- Return type:
str
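The ‘yyyymmddhhmmss’ shape can be produced with strftime; the fixed UTC moment below is arbitrary, standing in for the queried server time.

```python
from datetime import datetime, timezone

# Arbitrary fixed moment instead of a live server_time query.
dt = datetime(2024, 5, 17, 9, 30, 5, tzinfo=timezone.utc)
mw_timestamp = dt.strftime('%Y%m%d%H%M%S')
print(mw_timestamp)  # '20240517093005'
```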
- getmagicwords(word)[source]#
Return list of localized “word” magic words for the site.
- Parameters:
word (str)
- Return type:
list[str]
- getredirtarget(page, *, ignore_section=True)[source]#
Return page object for the redirect target of page.
Added in version 9.3: ignore_section parameter
See also
- Parameters:
page (BasePage) – page to search redirects for
ignore_section (bool) – do not include section to the target even the link has one
- Returns:
redirect target of page
- Raises:
CircularRedirectError – page is a circular redirect
InterwikiRedirectPageError – the redirect target is on another site
IsNotRedirectPageError – page is not a redirect
RuntimeError – no redirects found
SectionError – the section is not found on target page and ignore_section is not set
- Return type:
pywikibot.page.Page
- property globaluserinfo: dict[str, Any]#
Retrieve globaluserinfo of the current user from site.
To get globaluserinfo for a given user or user ID use the get_globaluserinfo() method instead.
Added in version 3.0.
- has_all_mediawiki_messages(keys, lang=None)[source]#
Confirm that the site defines a set of MediaWiki messages.
- Parameters:
keys (Iterable[str]) – names of MediaWiki messages
lang (str | None) – a language code, default is self.lang
- Return type:
bool
- property has_data_repository: bool#
Return True if site has a shared data repository like Wikidata.
- has_extension(name)[source]#
Determine whether the extension name is loaded.
- Parameters:
name (str) – The extension to check for, case sensitive
- Returns:
If the extension is loaded
- Return type:
bool
- has_group(group)[source]#
Return true if and only if the user is a member of specified group.
Possible values of ‘group’ may vary depending on wiki settings, but will usually include bot.
See also
- Parameters:
group (str)
- Return type:
bool
- property has_image_repository: bool#
Return True if site has a shared image repository like Commons.
- has_mediawiki_message(key, lang=None)[source]#
Determine if the site defines a MediaWiki message.
- Parameters:
key (str) – name of MediaWiki message
lang (str | None) – a language code, default is self.lang
- Return type:
bool
- has_right(right)[source]#
Return true if and only if the user has a specific right.
Possible values of ‘right’ may vary depending on wiki settings.
See also
- Parameters:
right (str) – a specific right to be validated
- Return type:
bool
- image_repository()[source]#
Return Site object for image repository e.g. commons.
- Return type:
BaseSite | None
- interwiki(prefix)[source]#
Return the site for a corresponding interwiki prefix.
- Raises:
pywikibot.exceptions.SiteDefinitionError – if the url given in the interwiki table doesn’t match any of the existing families.
KeyError – if the prefix is not an interwiki prefix.
- Parameters:
prefix (str)
- Return type:
- interwiki_prefix(site)[source]#
Return the interwiki prefixes going to that site.
The interwiki prefixes are ordered first by length (shortest first) and then alphabetically.
interwiki(prefix) is not guaranteed to equal site (i.e. the parameter passed to this function).
- Parameters:
site (BaseSite) – The targeted site, which might be this site itself.
- Raises:
KeyError – if there is no interwiki prefix for that site.
- Return type:
list[str]
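The ordering rule described above (shortest first, then alphabetically) maps directly onto a two-part sort key; the prefixes below are invented for illustration.

```python
# Hypothetical interwiki prefixes pointing at the same site.
prefixes = ['wikipedia', 'w', 'wp', 'en']
ordered = sorted(prefixes, key=lambda p: (len(p), p))
print(ordered)  # ['w', 'en', 'wp', 'wikipedia']
```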
- isBot(username)[source]#
Return True if username is a bot user.
- Parameters:
username (str)
- Return type:
bool
- is_blocked(force=False)[source]#
Return True when logged in user is blocked.
To check whether a user can perform an action, the method has_right should be used.
See also
Added in version 7.0: The force parameter.
- Parameters:
force (bool) – Whether the cache should be discarded.
- Return type:
bool
- is_image_repository()[source]#
Return True if Site object is the image repository.
- Return type:
bool
- is_locked(user=None, force=False)[source]#
Return True when given user is locked globally.
Added in version 7.0.
- Parameters:
user (str | int | None) – The user name or user ID. Defaults to the current user.
force (bool) – Whether the cache should be discarded.
- Return type:
bool
- is_oauth_token_available()[source]#
Check whether OAuth token is set for this site.
- Return type:
bool
- is_uploaddisabled()[source]#
Return True if upload is disabled on site.
Example:
>>> site = pywikibot.Site('commons')
>>> site.is_uploaddisabled()
False
>>> site = pywikibot.Site('wikidata')
>>> site.is_uploaddisabled()
True
- Return type:
bool
- property lang: str#
Return the code for the language of this Site.
- linktrail()[source]#
Build linktrail regex from siteinfo linktrail.
Letters that can follow a wikilink and are regarded as part of this link. This depends on the linktrail setting in LanguageXx.php
Added in version 7.3.
- Returns:
The linktrail regex.
- Return type:
str
- list_to_text(args)[source]#
Convert a list of strings into human-readable text.
The MediaWiki messages ‘and’ and ‘word-separator’ are used as separator between the last two arguments. If more than two arguments are given, other arguments are joined using MediaWiki message ‘comma-separator’.
- Parameters:
args (Iterable[str]) – text to be expanded
- Return type:
str
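The joining rule above can be sketched in plain Python, with the MediaWiki messages replaced by hard-coded English stand-ins (the real method fetches ‘comma-separator’, ‘and’ and ‘word-separator’ from the wiki).

```python
COMMA = ', '   # stand-in for the 'comma-separator' message
AND = ' and'   # stand-in for the 'and' message
SPACE = ' '    # stand-in for the 'word-separator' message

def list_to_text(args) -> str:
    """Join strings with commas, using 'and' before the last item."""
    args = list(args)
    if not args:
        return ''
    if len(args) == 1:
        return args[0]
    return COMMA.join(args[:-1]) + AND + SPACE + args[-1]

print(list_to_text(['red', 'green', 'blue']))  # 'red, green and blue'
```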
- loadimageinfo(page, history=False, url_width=None, url_height=None, url_param=None, timestamp=None)[source]#
Load image info from api and save in page attributes.
The following properties are loaded: timestamp, user, comment, url, size, sha1, mime, mediatype, archivename and bitdepth. metadata is loaded only if history is False. If url_width, url_height or url_param is given, the additional properties thumbwidth, thumbheight, thumburl and responsiveUrls are given.
Note
Parameters validation and error handling left to the API call.
Changed in version 8.2: mediatype and bitdepth properties were added.
Changed in version 8.6: Added the timestamp parameter. Metadata are loaded only if history is False.
See also
- Parameters:
history (bool) – if true, return the image’s version history
url_width (int | None) – get info for a thumbnail with given width
url_height (int | None) – get info for a thumbnail with given height
url_param (str | None) – get info for a thumbnail with given param
timestamp (Timestamp | None) – timestamp of the image’s version to retrieve. It has effect only if history is False. If omitted, the latest version will be fetched.
page (FilePage)
- Return type:
None
- loadpageinfo(page, preload=False)[source]#
Load page info from api and store in page attributes.
See also
- Parameters:
page (BasePage)
preload (bool)
- Return type:
None
- loadpageprops(page)[source]#
Load page props for the given page.
- Parameters:
page (BasePage)
- Return type:
None
- local_interwiki(prefix)[source]#
Return whether the interwiki prefix is local.
A local interwiki prefix is handled by the target site like a normal link. So if that link also contains an interwiki link it does follow it as long as it’s a local link.
- Raises:
pywikibot.exceptions.SiteDefinitionError – if the url given in the interwiki table doesn’t match any of the existing families.
KeyError – if the prefix is not an interwiki prefix.
- Parameters:
prefix (str)
- Return type:
bool
- logged_in()[source]#
Verify the bot is logged into the site as the expected user.
The expected usernames are those provided as the user parameter at instantiation.
- Return type:
bool
- login(autocreate=False, user=None, *, cookie_only=False)[source]#
Log the user in if not already logged in.
Changed in version 8.0: lazy load cookies when logging in. This was dropped in 8.0.4.
Changed in version 8.0.4: the cookie_only parameter was added and cookies are loaded whenever the site is initialized.
See also
- Parameters:
autocreate (bool) – if true, allow auto-creation of the account using unified login
user (str | None) – bot user name. Overrides the username set by BaseSite initializer parameter or user config setting
cookie_only (bool) – Only try to login from cookie but do not force to login with username/password settings.
- Raises:
pywikibot.exceptions.NoUsernameError – Username is not recognised by the site.
- Return type:
None
- logout()[source]#
Logout of the site and load details for the logged out user.
Also logs out of the global account if linked to the user.
See also
- Raises:
APIError – Logout is not available when OAuth enabled.
- Return type:
None
- property logtypes: set[str]#
Return a set of log types available on current site.
- property maxlimit: int#
Get the maximum limit of pages to be retrieved.
Added in version 7.0.
- mediawiki_message(key, lang=None)[source]#
Fetch the text for a MediaWiki message.
- Parameters:
key (str) – name of MediaWiki message
lang (str | None) – a language code, default is self.lang
- Return type:
str
- mediawiki_messages(keys, lang=None)[source]#
Fetch the text of a set of MediaWiki messages.
The returned dict uses each key to store the associated message.
See also
- Parameters:
keys (Iterable[str]) – MediaWiki messages to fetch
lang (str | None) – a language code, default is self.lang
- Return type:
OrderedDict[str, str]
- merge_history(source, dest, timestamp=None, reason=None)[source]#
Merge revisions from one page into another.
See also
page.BasePage.merge_history()
(should be preferred)
Revisions dating up to the given timestamp in the source will be moved into the destination page history. History merge fails if the timestamps of source and dest revisions overlap (all source revisions must be dated before the earliest dest revision).
- Parameters:
source (BasePage) – Source page from which revisions will be merged
dest (BasePage) – Destination page to which revisions will be merged
timestamp (Timestamp | None) – Revisions from this page dating up to this timestamp will be merged into the destination page (if not given or False, all revisions will be merged)
reason (str | None) – Optional reason for the history merge
- Raises:
APIError – unexpected APIError
Error – expected APIError or unexpected response
NoPageError – source or dest does not exist
PageSaveRelatedError – source is equal to dest
- Return type:
None
- messages()[source]#
Return true if the user has new messages, and false otherwise.
Deprecated since version 8.0: Replaced by userinfo['messages'].
- Return type:
bool
- property months_names: list[tuple[str, str]]#
Obtain month names from the site messages.
The list is zero-indexed, ordered by month in calendar, and should be in the original site language.
- Returns:
list of tuples (month name, abbreviation)
- movepage(page, newtitle, summary, movetalk=True, noredirect=False, movesubpages=True)[source]#
Move a Page to a new title.
See also
Changed in version 7.2: The movesubpages parameter was added.
- Parameters:
page (BasePage) – the Page to be moved (must exist)
newtitle (str) – the new title for the Page
summary (str) – edit summary (required!)
movetalk (bool) – if True (default), also move the talk page if possible
noredirect (bool) – if True, suppress creation of a redirect from the old title to the new one
movesubpages (bool) – Rename subpages, if applicable.
- Returns:
Page object with the new title
- Return type:
pywikibot.page.Page
- property mw_version: MediaWikiVersion#
Return version() as a tools.MediaWikiVersion object.
Cache the result for 24 hours.
- namespace(num, all_ns=False, all='[deprecated name of all_ns]')[source]#
Return string containing local name of namespace ‘num’.
If optional argument all_ns is true, return all recognized values for this namespace.
Changed in version 9.0: all parameter was renamed to all_ns.
- Parameters:
num (int) – Namespace constant.
all_ns (bool) – If True return a
Namespace
object. Otherwise return the namespace name.
- Returns:
local name or
Namespace
object
- nice_get_address(title)[source]#
Return shorter URL path to retrieve page titled ‘title’.
- Parameters:
title (str)
- Return type:
str
- page_can_be_edited(page, action='edit')[source]#
Determine if the page can be modified.
Return True if the bot has the required permission level for the given action type.
See also
page.BasePage.has_permission() (should be preferred)
- Parameters:
page (BasePage) – a pywikibot.page.BasePage object
action (str) – a valid restriction type like ‘edit’, ‘move’
- Raises:
ValueError – invalid action parameter
- Return type:
bool
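Example (illustrative; the result depends on the page's protection and the bot account's rights):
>>> site = pywikibot.Site('wikipedia:test')
>>> page = pywikibot.Page(site, 'Main Page')
>>> site.page_can_be_edited(page, 'edit')  # False if e.g. sysop-protected
False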
- page_from_repository(item)[source]#
Return a Page for this site object specified by Wikibase item.
Usage:
>>> site = pywikibot.Site('wikipedia:zh')
>>> page = site.page_from_repository('Q131303')
>>> page.title()
'Hello World'
This method is able to upcast categories:
>>> site = pywikibot.Site('commons')
>>> page = site.page_from_repository('Q131303')
>>> page.title()
'Category:Hello World'
>>> page
Category('Category:Hello World')
It also works for wikibase repositories:
>>> site = pywikibot.Site('wikidata')
>>> page = site.page_from_repository('Q5296')
>>> page.title()
'Wikidata:Main Page'
If no page exists for a given site, None is returned:
>>> site = pywikibot.Site('wikidata')
>>> page = site.page_from_repository('Q131303')
>>> page is None
True
Changed in version 7.7: No longer raise NotImplementedError if used with a Wikibase site.
- Parameters:
item (str) – id of the item, in the form "Q###"
- Returns:
Page, or Category object given by Wikibase item number for this site object.
- Raises:
pywikibot.exceptions.UnknownExtensionError – site has no Wikibase extension
- Return type:
Page | None
- page_isredirect(page)[source]#
Return True if and only if page is a redirect.
- Parameters:
page (BasePage)
- Return type:
bool
- page_restrictions(page)[source]#
Return a dictionary reflecting page protections.
Example:
>>> site = pywikibot.Site('wikipedia:test')
>>> page = pywikibot.Page(site, 'Main Page')
>>> site.page_restrictions(page)
{'edit': ('sysop', 'infinity'), 'move': ('sysop', 'infinity')}
See also
page.BasePage.protection() (should be preferred)
- Parameters:
page (BasePage)
- Return type:
dict[str, tuple[str, str]]
- pagename2codes()[source]#
Return list of localized PAGENAMEE tags for the site.
- Return type:
list[str]
- pagenamecodes()[source]#
Return list of localized PAGENAME tags for the site.
- Return type:
list[str]
- protect(page, protections, reason, expiry=None, **kwargs)[source]#
(Un)protect a wiki page. Requires protect right.
- Parameters:
protections (dict[str, str | None]) – A dict mapping a type of protection to the protection level of that type. Refer to protection_types() for valid restriction types and protection_levels() for valid restriction levels. If None is given, that protection will be skipped.
reason (str) – Reason for the action
expiry (datetime.datetime | str | None) – When the protection should expire. This expiry will be applied to all protections. If None, 'infinite', 'indefinite', 'never', or '' is given, there is no expiry.
page (BasePage)
kwargs (Any)
- Return type:
None
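Example (a hedged sketch; the page title, levels and reason are illustrative, and the bot account needs the protect right):
>>> site = pywikibot.Site('wikipedia:test')
>>> page = pywikibot.Page(site, 'Some vandalised page')  # hypothetical
>>> site.protect(page, {'edit': 'autoconfirmed', 'move': 'sysop'},
...              reason='Persistent vandalism', expiry='1 week')
Passing an empty string as the level removes the protection:
>>> site.protect(page, {'edit': '', 'move': ''}, reason='No longer needed')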
- protection_levels()[source]#
Return the protection levels available on this site.
Example:
>>> site = pywikibot.Site('wikipedia:test')
>>> sorted(site.protection_levels())
['', 'autoconfirmed', ... 'sysop', 'templateeditor']
See also
Siteinfo._get_default()
- Returns:
protection levels available
- Return type:
set[str]
- protection_types()[source]#
Return the protection types available on this site.
Example:
>>> site = pywikibot.Site('wikipedia:test')
>>> sorted(site.protection_types())
['create', 'edit', 'move', 'upload']
See also
Siteinfo._get_default()
- Returns:
protection types available
- Return type:
set[str]
- purgepages(pages, forcelinkupdate=False, forcerecursivelinkupdate=False, converttitles=False, redirects=False)[source]#
Purge the server’s cache for one or multiple pages.
- Parameters:
pages (list[BasePage]) – list of Page objects
redirects (bool) – Automatically resolve redirects.
converttitles (bool) – Convert titles to other variants if necessary. Only works if the wiki’s content language supports variant conversion.
forcelinkupdate (bool) – Update the links tables.
forcerecursivelinkupdate (bool) – Update the links table, and update the links tables for any page that uses this page as a template.
- Returns:
True if API returned expected response; False otherwise
- Return type:
bool
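Example (illustrative):
>>> site = pywikibot.Site('wikipedia:test')
>>> pages = [pywikibot.Page(site, 'Main Page')]
>>> site.purgepages(pages, forcelinkupdate=True)
True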
- ratelimit(action)[source]#
Get the rate limit for a given action.
This method gets the rate limit for a given action and returns a tools.collections.RateLimit namedtuple which has the following fields and properties:
group — the current user group returned by the API; if the user is not logged in, the group will be 'ip'
hits — rate limit hits; API requests should not exceed this limit value for the given action
seconds — time base in seconds for the maximum hits
delay — (property) calculated as seconds per hit; may be used for wait cycles
ratio — (property) inverse of delay, calculated as hits per second; the result may be infinite
If the user has the 'noratelimit' right, maxlimit() is used for hits and seconds will be 0; 'noratelimit' is returned as the group in that case.
If no rate limit is found for the given action, maxlimit() is used for hits and seconds will be config.put_throttle; 'unknown' is returned as the group in that case.
Examples:
This is an example for a bot user which is not logged in; the rate limit user group is 'ip':
>>> site = pywikibot.Site()
>>> limit = site.ratelimit('edit')  # get rate limit for 'edit' action
>>> limit
RateLimit(group='ip', hits=8, seconds=60)
>>> limit.delay  # delay and ratio must be accessed as attributes
7.5
>>> site.ratelimit('purge').hits  # get purge hits
30
>>> group, *limit = site.ratelimit('urlshortcode')
>>> group  # the user is not logged in, so we get 'ip' as group
'ip'
>>> limit  # starred assignment is allowed for the fields
[10, 120]
After logging in to the site the rate limit will change; the user group might be 'user':
>>> limit = site.ratelimit('edit')
>>> limit
RateLimit(group='user', hits=90, seconds=60)
>>> limit.ratio
1.5
>>> limit = site.ratelimit('urlshortcode')  # no action limit found
>>> group, *limits = limit
>>> group  # the group is 'unknown' because the action was not found
'unknown'
>>> limits  # hits is maxlimit and seconds is config.put_throttle
[50, 10]
>>> site.maxlimit, pywikibot.config.put_throttle
(50, 10)
If a user is logged in and has no rate limit, e.g. bot accounts, we always get a default RateLimit namedtuple like this:
>>> site.has_right('noratelimit')
True
>>> limit = site.ratelimit('any_action')  # maxlimit is used
>>> limit
RateLimit(group='noratelimit', hits=500, seconds=0)
>>> limit.delay, limit.ratio
(0.0, inf)
Note
It is not verified whether the action parameter has a valid value.
See also
tools.collections.RateLimit
for RateLimit examples.
Added in version 9.0.
- Parameters:
action (str) – action which might be limited
- Returns:
RateLimit tuple with group, hits and seconds fields and properties for delay and ratio.
- Return type:
RateLimit
- redirects()[source]#
Return a list of localized tags for the site without preceding ‘#’.
See also
Added in version 8.4.
- Return type:
list[str]
- rollbackpage(page, **kwargs)[source]#
Roll back page to version before last user’s edits.
See also
The keyword arguments are those supported by the rollback API.
As a precaution against errors, this method will fail unless the page history contains at least two revisions, and at least one that is not by the same user who made the last edit.
- Parameters:
page (BasePage) – the Page to be rolled back (must exist)
kwargs (Any)
- Keyword Arguments:
user – the last user whose edits should be rolled back; default is page.latest_revision.user
- Return type:
None
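Example (hedged; the page is hypothetical and the bot account needs the rollback right):
>>> site = pywikibot.Site('wikipedia:test')
>>> page = pywikibot.Page(site, 'Vandalised page')  # hypothetical
>>> site.rollbackpage(page)  # revert the last user's consecutive edits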
- server_time()[source]#
Return a Timestamp object representing the current server time.
It uses the ‘time’ property of the siteinfo ‘general’. It’ll force a reload before returning the time.
- Returns:
the current server time
- Return type:
Timestamp
- simple_request(**kwargs)[source]#
Create a request by defining all kwargs as parameters.
Added in version 7.1: _simple_request became a public method.
- Parameters:
kwargs (Any)
- Return type:
- stash_info(file_key, props=None)[source]#
Get the stash info for a given file key.
See also
- Parameters:
file_key (str)
props (list[str] | None)
- Return type:
dict[str, Any]
- property tokens: TokenWallet#
Return the TokenWallet collection.
The TokenWallet collection holds all available tokens. The tokens are loaded via the get_tokens() method with the first token request and are retained until the TokenWallet is cleared.
Usage:
>>> site = pywikibot.Site()
>>> token = site.tokens['csrf']
>>> token
'df8...9e6+\\'
>>> 'csrf' in site.tokens  # check whether the token exists
True
>>> 'invalid' in site.tokens
False
>>> token = site.tokens['invalid']
Traceback (most recent call last):
...
KeyError: "Invalid token 'invalid' for user ...
>>> site.tokens.clear()  # clears the internal cache
>>> site.tokens['csrf']  # get a new token
'1c8...9d3+\\'
>>> del site.tokens  # another variant to clear the cache
Changed in version 8.0: tokens attribute became a property to enable the deleter.
Warning
A deprecation warning is shown if the token name is outdated, see API:Tokens (action).
See also
API:Tokens for valid token types
- unblockuser(user, reason=None)[source]#
Remove the block for the user.
See also
- Parameters:
user (pywikibot.page.User) – The username/IP without a namespace.
reason (str | None) – Reason for the unblock.
- Return type:
dict[str, Any]
- undelete(page, reason, *, revisions=None, fileids=None)[source]#
Undelete page from the wiki. Requires appropriate privilege level.
See also
Added in version 6.1: renamed from undelete_page.
Changed in version 6.1: fileids parameter was added; keyword argument required for revisions.
- Parameters:
page (BasePage) – Page to be undeleted.
reason (str) – Undeletion reason.
revisions (list[str] | None) – List of timestamps to restore. If None, restores all revisions.
fileids (list[int | str] | None) – List of fileids to restore.
- Return type:
None
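Example (hedged; the page and timestamp are illustrative, and the bot account needs the undelete right):
>>> site = pywikibot.Site('wikipedia:test')
>>> page = pywikibot.Page(site, 'Accidentally deleted page')  # hypothetical
>>> site.undelete(page, 'Deleted in error')  # restore all revisions
>>> site.undelete(page, 'Partial restore',
...               revisions=['2023-01-01T00:00:00Z'])  # only this revision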
- upload(filepage, **kwargs)[source]#
Upload a file to the wiki.
See also
Either source_filename or source_url, but not both, must be provided.
Changed in version 6.0: keyword arguments required for all parameters except filepage.
Changed in version 6.2: asynchronous upload is used if the asynchronous parameter is set.
For keyword arguments refer to pywikibot.site._upload.Uploader.
- Parameters:
filepage (pywikibot.page.FilePage) – a FilePage object from which the wiki-name of the file will be obtained.
kwargs (Any)
- Returns:
True if the upload was successful, False otherwise.
- Return type:
bool
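Example (a hedged sketch; the file names and description are illustrative, and the account needs upload rights):
>>> site = pywikibot.Site('commons')
>>> filepage = pywikibot.FilePage(site, 'File:Example photo.jpg')  # hypothetical
>>> site.upload(filepage, source_filename='/path/to/photo.jpg',
...             comment='Initial upload', text='{{Information|...}}')
True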
- property userinfo: dict[str, Any]#
Retrieve userinfo from site and store in _userinfo attribute.
To force retrieving userinfo ignoring cache, just delete this property.
Usage:
>>> site = pywikibot.Site('test')
>>> info = site.userinfo
>>> info['id']  # 0 for anonymous (IP) users
0
>>> info['name']  # username or IP
'92.198.174.192'
>>> info['groups']
['*']
>>> info['rights']
['createaccount', 'read', 'edit', 'createpage', 'createtalk', ...]
>>> info['messages']
False
>>> del site.userinfo  # delete userinfo cache
>>> 'blockinfo' in site.userinfo
False
>>> 'anon' in site.userinfo
True
Useful alternatives to the userinfo property:
has_group() to verify group membership
has_right() to verify that the user has a given right
logged_in() to verify that the user is logged in to a site
See also
Changed in version 8.0: Use API formatversion 2.
- Returns:
A dict with the following keys and values:
id: user id (int)
name: username (if user is logged in)
anon: present if user is not logged in
groups: list of groups (could be empty)
rights: list of rights (could be empty)
messages: True if user has a new message on talk page (bool)
blockinfo: present if user is blocked (dict)
- validate_tokens(types)[source]#
Validate if requested tokens are acceptable.
Valid tokens may depend on mw version.
Deprecated since version 8.0.
- Parameters:
types (list[str])
- Return type:
list[str]
- version()[source]#
Return live project version number as a string.
Use
mw_version
to compare MediaWiki versions.- Return type:
str
Objects representing the API interface to MediaWiki site extensions.
- class pywikibot.site._extensions.EchoMixin[source]#
Bases:
object
APISite mixin for Echo extension.
- notifications(**kwargs)[source]#
Yield Notification objects from the Echo extension.
- Keyword Arguments:
format (Optional[str]) – If specified, notifications will be returned formatted this way. Its value is either model, special or None. Default is special.
See also
API:Notifications for other keywords.
- class pywikibot.site._extensions.FlowMixin[source]#
Bases:
object
APISite mixin for Structured Discussions extension.
Deprecated since version 9.4.0: The Structured Discussions extension, formerly known as the Flow extension, is not maintained and will be removed. Users are encouraged to stop using it. (T371180)
See also
- create_new_topic(page, title, content, content_format)[source]#
Deprecated.
Create a new topic on a Flow board.
- Parameters:
page (Board) – A Flow board
title (str) – The title of the new topic (must be in plaintext)
content (str) – The content of the topic’s initial post
content_format (str (either 'wikitext' or 'html')) – The content format of the supplied content
- Returns:
The metadata of the new topic
- Return type:
dict
- delete_post(post, reason)[source]#
Deprecated.
Delete a Flow post.
- Parameters:
post (Post) – A Flow post
reason (str) – The reason to delete the post
- Returns:
Metadata returned by the API
- Return type:
dict
- delete_topic(page, reason)[source]#
Deprecated.
Delete a Flow topic.
- Parameters:
page (Topic) – A Flow topic
reason (str) – The reason to delete the topic
- Returns:
Metadata returned by the API
- Return type:
dict
- hide_post(post, reason)[source]#
Deprecated.
Hide a Flow post.
- Parameters:
post (Post) – A Flow post
reason (str) – The reason to hide the post
- Returns:
Metadata returned by the API
- Return type:
dict
- hide_topic(page, reason)[source]#
Deprecated.
Hide a Flow topic.
- Parameters:
page (Topic) – A Flow topic
reason (str) – The reason to hide the topic
- Returns:
Metadata returned by the API
- Return type:
dict
- load_board(page)[source]#
Deprecated.
Retrieve the data for a Flow board.
- Parameters:
page (Board) – A Flow board
- Returns:
A dict representing the board’s metadata.
- Return type:
dict
- load_post_current_revision(page, post_id, content_format)[source]#
Deprecated.
Retrieve the data for a post to a Flow topic.
- Parameters:
page (Topic) – A Flow topic
post_id (str) – The UUID of the Post
content_format (str) – The content format used for the returned content; must be either ‘wikitext’, ‘html’, or ‘fixed-html’
- Returns:
A dict representing the post data for the given UUID.
- Return type:
dict
- load_topic(page, content_format)[source]#
Deprecated.
Retrieve the data for a Flow topic.
- Parameters:
page (Topic) – A Flow topic
content_format (str) – The content format to request the data in. Must be either 'wikitext', 'html', or 'fixed-html'
- Returns:
A dict representing the topic’s data.
- Return type:
dict
- load_topiclist(page, *, content_format='wikitext', limit=100, sortby='newest', toconly=False, offset=None, offset_id=None, reverse=False, include_offset=False)[source]#
Deprecated.
Retrieve the topiclist of a Flow board.
Changed in version 8.0: All parameters except page are keyword only parameters.
- Parameters:
page (pywikibot.flow.Board) – A Flow board
content_format (str) – The content format to request the data in. must be either ‘wikitext’, ‘html’, or ‘fixed-html’
limit (int) – The number of topics to fetch in each single request.
sortby (str) – Algorithm to sort topics by (‘newest’ or ‘updated’).
toconly (bool) – Whether to only include information for the TOC.
offset (Timestamp | str | None) – The timestamp to start at (when sortby is ‘updated’).
offset_id (str | None) – The topic UUID to start at (when sortby is ‘newest’).
reverse (bool) – Whether to reverse the topic ordering.
include_offset (bool) – Whether to include the offset topic.
- Returns:
A dict representing the board’s topiclist.
- Return type:
dict[str, Any]
- lock_topic(page, lock, reason)[source]#
Deprecated.
Lock or unlock a Flow topic.
- Parameters:
page (Topic) – A Flow topic
lock (bool) – Whether to lock (True) or unlock the topic
reason (str) – The reason to lock or unlock the topic
- Returns:
Metadata returned by the API
- Return type:
dict
- moderate_post(post, state, reason)[source]#
Deprecated.
Moderate a Flow post.
- Parameters:
post (Post) – A Flow post
state (str) – The new moderation state
reason (str) – The reason to moderate the topic
- Returns:
Metadata returned by the API
- Return type:
dict
- moderate_topic(page, state, reason)[source]#
Deprecated.
Moderate a Flow topic.
- Parameters:
page (Topic) – A Flow topic
state (str) – The new moderation state
reason (str) – The reason to moderate the topic
- Returns:
Metadata returned by the API
- Return type:
dict
- reply_to_post(page, reply_to_uuid, content, content_format)[source]#
Deprecated.
Reply to a post on a Flow topic.
- Parameters:
page (Topic) – A Flow topic
reply_to_uuid (str) – The UUID of the Post to create a reply to
content (str) – The content of the reply
content_format (str) – The content format used for the supplied content; must be either 'wikitext' or 'html'
- Returns:
Metadata returned by the API
- Return type:
dict
- restore_post(post, reason)[source]#
Deprecated.
Restore a Flow post.
- Parameters:
post (Post) – A Flow post
reason (str) – The reason to restore the post
- Returns:
Metadata returned by the API
- Return type:
dict
- restore_topic(page, reason)[source]#
Deprecated.
Restore a Flow topic.
- Parameters:
page (Topic) – A Flow topic
reason (str) – The reason to restore the topic
- Returns:
Metadata returned by the API
- Return type:
dict
- summarize_topic(page, summary)[source]#
Deprecated.
Add summary to Flow topic.
- Parameters:
page (Topic) – A Flow topic
summary (str) – The text of the summary
- Returns:
Metadata returned by the API
- Return type:
dict
- class pywikibot.site._extensions.GeoDataMixin[source]#
Bases:
object
APISite mixin for GeoData extension.
- class pywikibot.site._extensions.GlobalUsageMixin[source]#
Bases:
object
APISite mixin for Global Usage extension.
- globalusage(page, total=None)[source]#
Iterate global image usage for a given FilePage.
- Parameters:
page (FilePage) – the page to return global image usage for.
total – iterate no more than this number of pages in total.
- Raises:
TypeError – input page is not a FilePage.
pywikibot.exceptions.SiteDefinitionError – Site could not be defined for a returned entry in API response.
- class pywikibot.site._extensions.LinterMixin[source]#
Bases:
object
APISite mixin for Linter extension.
- linter_pages(lint_categories=None, total=None, namespaces=None, pageids=None, lint_from=None)[source]#
Return a generator of pages containing linter errors.
- Parameters:
lint_categories (an iterable that returns values (str), or a pipe-separated string of values.) – categories of lint errors
total (int) – if not None, yielding this many items in total
namespaces (iterable of str or Namespace key, or a single instance of those types. May be a '|' separated list of namespace identifiers.) – only iterate pages in these namespaces
pageids (an iterable that returns pageids (str or int), or a comma- or pipe-separated string of pageids (e.g. '945097,1483753, 956608' or '945097|483753|956608')) – only include lint errors from the specified pageids
lint_from (str representing digit or integer) – Lint ID to start querying from
- Returns:
pages with Linter errors.
- Return type:
Iterable[Page]
- class pywikibot.site._extensions.PageImagesMixin[source]#
Bases:
object
APISite mixin for PageImages extension.
- class pywikibot.site._extensions.ProofreadPageMixin[source]#
Bases:
object
APISite mixin for ProofreadPage extension.
- loadpageurls(page)[source]#
Load URLs from api and store in page attributes.
Load URLs to images for a given page in the “Page:” namespace. No effect for pages in other namespaces.
Added in version 8.6.
See also
- Parameters:
page (pywikibot.page.BasePage)
- Return type:
None
- property proofread_index_ns#
Return Index namespace for the ProofreadPage extension.
- property proofread_levels#
Return Quality Levels for the ProofreadPage extension.
- property proofread_page_ns#
Return Page namespace for the ProofreadPage extension.
- class pywikibot.site._extensions.TextExtractsMixin[source]#
Bases:
object
APISite mixin for TextExtracts extension.
Added in version 7.1.
- extract(page, *, chars=None, sentences=None, intro=True, plaintext=True)[source]#
Retrieve an extract of a page.
- Parameters:
page (Page) – The Page object for which the extract is read
chars (int | None) – How many characters to return. Actual text returned might be slightly longer.
sentences (int | None) – How many sentences to return
intro (bool) – Return only content before the first section
plaintext (bool) – if True, return extracts as plain text instead of limited HTML
- Return type:
str
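Example (illustrative; the extract text naturally changes as the article is edited, so no output is shown):
>>> site = pywikibot.Site('wikipedia:en')
>>> page = pywikibot.Page(site, 'Pywikibot')
>>> intro = site.extract(page, sentences=2)  # first two sentences, plain text
>>> html = site.extract(page, chars=200, plaintext=False)  # limited HTML instead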
- class pywikibot.site._extensions.ThanksFlowMixin[source]#
Bases:
object
APISite mixin for Thanks and Structured Discussions extension.
Deprecated since version 9.4.0: The Structured Discussions extension, formerly known as the Flow extension, is not maintained and will be removed. Users are encouraged to stop using it. (T371180)
See also
- class pywikibot.site._extensions.ThanksMixin[source]#
Bases:
object
APISite mixin for Thanks extension.
- thank_revision(revid, source=None)[source]#
Corresponding method to the ‘action=thank’ API action.
- Parameters:
revid (int) – Revision ID for the revision to be thanked.
source (str) – A source for the thanking operation.
- Raises:
APIError – On thanking oneself or other API errors.
- Returns:
The API response.
- class pywikibot.site._extensions.UrlShortenerMixin[source]#
Bases:
object
APISite mixin for UrlShortener extension.
- create_short_link(url)[source]#
Return a shortened link.
Note that on Wikimedia wikis only metawiki supports this action, and this wiki can process links to all WM domains.
- Parameters:
url (str) – The link to reduce, with protocol prefix.
- Returns:
The reduced link, without protocol prefix.
- Return type:
str
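Example (hedged; the exact shortcode returned is unpredictable):
>>> site = pywikibot.Site('meta')
>>> short = site.create_short_link('https://en.wikipedia.org/wiki/Special:Version')
>>> short.startswith('w.wiki/')  # Wikimedia short links use the w.wiki domain
True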
- class pywikibot.site._extensions.WikibaseClientMixin[source]#
Bases:
object
APISite mixin for WikibaseClient extension.
Objects representing API generators for a MediaWiki site.
- class pywikibot.site._generators.GeneratorsMixin[source]#
Bases:
object
API generators mixin to MediaWiki site.
- allcategories(start='!', prefix='', total=None, reverse=False, content=False)[source]#
Iterate categories used (which need not have a Category page).
Iterator yields Category objects. Note that, in practice, links that were found on pages that have been deleted may not have been removed from the database table, so this method can return false positives.
See also
- Parameters:
start (str) – Start at this category title (category need not exist).
prefix (str) – Only yield categories starting with this string.
reverse (bool) – if True, iterate in reverse Unicode lexicographic order (default: iterate in forward order)
content (bool) – if True, load the current content of each iterated page (default False); note that this means the contents of the category description page, not the pages that are members of the category
total (int | None)
- Return type:
Iterable[Category]
- alldeletedrevisions(*, namespaces=None, reverse=False, content=False, total=None, **kwargs)[source]#
Yield all deleted revisions.
See also
Warning
user keyword argument must be given together with start or end.
- Parameters:
namespaces (NamespaceArgType) – Only iterate pages in these namespaces
reverse (bool) – Iterate oldest revisions first (default: newest)
content (bool) – If True, retrieve the content of each revision
total (int | None) – Number of revisions to retrieve
- Keyword Arguments:
from (str) – Start listing at this title
to (str) – Stop listing at this title
prefix (str) – Search for all page titles that begin with this value
excludeuser (str) – Exclude revisions by this user
tag (str) – Only list revisions tagged with this tag
user (str) – List revisions by this user
start – Iterate revisions starting at this Timestamp
end – Iterate revisions ending at this Timestamp
prop (list[str]) – Which properties to get. Defaults are
ids
,timestamp
,flags
,user
, andcomment
(if the bot has the right to view).
- Return type:
Generator[dict[str, Any], None, None]
- allimages(start='!', prefix='', minsize=None, maxsize=None, reverse=False, sha1=None, sha1base36=None, total=None, content=False)[source]#
Iterate all images, ordered by image title.
Yields FilePages, but these pages need not exist on the wiki.
See also
- Parameters:
start (str) – start at this title (name need not exist)
prefix (str) – only iterate titles starting with this substring
minsize (int | None) – only iterate images of at least this many bytes
maxsize (int | None) – only iterate images of no more than this many bytes
reverse (bool) – if True, iterate in reverse lexicographic order
sha1 (str | None) – only iterate images (it is theoretically possible there could be more than one) with this sha1 hash
sha1base36 (str | None) – same as sha1 but in base 36
content (bool) – if True, load the current content of each iterated page (default False); note that this means the content of the image description page, not the image itself
total (int | None)
- Return type:
Iterable[FilePage]
- alllinks(start='', prefix='', namespace=0, unique=False, fromids=False, total=None)[source]#
Iterate all links to pages (which need not exist) in one namespace.
Note
In practice, links that were found on pages that have been deleted may not have been removed from the links table, so this method can return false positives.
Caution
unique parameter is no longer supported by MediaWiki 1.43 or higher. Pywikibot uses tools.itertools.filter_unique() in that case, which might be memory intensive. Use it with care.
Important
Using a namespace option different from 0 needs a lot of time on the Wikidata site. You have to increase the read timeout part of socket_timeout in Http Settings in your user-config.py file, or increase it partially within your code like:
from pywikibot import config
save_timeout = config.socket_timeout  # save the timeout config
config.socket_timeout = save_timeout[0], 60
...  # your code here
config.socket_timeout = save_timeout  # restore timeout config
The minimum read timeout value should be 60 seconds in that case.
See also
- Parameters:
start (str) – Start at this title (page need not exist).
prefix (str) – Only yield pages starting with this string.
namespace (SingleNamespaceType) – Iterate pages from this (single) namespace
unique (bool) – If True, only iterate each link title once (default: False)
fromids (bool) – if True, include the pageid of the page containing each link (default: False) as the ‘_fromid’ attribute of the Page; cannot be combined with unique
total (int | None)
- Raises:
KeyError – the namespace identifier was not resolved
TypeError – the namespace identifier has an inappropriate type such as bool, or an iterable with more than one namespace
- Return type:
Generator[Page, None, None]
- allpages(start='!', prefix='', namespace=0, filterredir=None, filterlanglinks=None, minsize=None, maxsize=None, protect_type=None, protect_level=None, reverse=False, total=None, content=False)[source]#
Iterate pages in a single namespace.
See also
- Parameters:
start (str) – Start at this title (page need not exist).
prefix (str) – Only yield pages starting with this string.
namespace (SingleNamespaceType) – Iterate pages from this (single) namespace
filterredir (bool | None) – if True, only yield redirects; if False (and not None), only yield non-redirects (default: yield both)
filterlanglinks (bool | None) – if True, only yield pages with language links; if False (and not None), only yield pages without language links (default: yield both)
minsize (int | None) – if present, only yield pages at least this many bytes in size
maxsize (int | None) – if present, only yield pages at most this many bytes in size
protect_type (str | None) – only yield pages that have a protection of the specified type
protect_level (str | None) – only yield pages that have protection at this level; can only be used if protect_type is specified
reverse (bool) – if True, iterate in reverse Unicode lexicographic order (default: iterate in forward order)
content (bool) – if True, load the current content of each iterated page (default False)
total (int | None)
- Raises:
KeyError – the namespace identifier was not resolved
TypeError – the namespace identifier has an inappropriate type such as bool, or an iterable with more than one namespace
- Return type:
Iterable[Page]
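Example (illustrative; the titles yielded depend on the wiki):
>>> site = pywikibot.Site('wikipedia:test')
>>> for page in site.allpages(start='Py', namespace=0, filterredir=False, total=3):
...     print(page.title())  # up to three non-redirect article titles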
- allusers(start='!', prefix='', group=None, total=None)[source]#
Iterate registered users, ordered by username.
Iterated values are dicts containing ‘name’, ‘editcount’, ‘registration’, and (sometimes) ‘groups’ keys. ‘groups’ will be present only if the user is a member of at least 1 group, and will be a list of str; all the other values are str and should always be present.
See also
- Parameters:
start (str) – start at this username (name need not exist)
prefix (str) – only iterate usernames starting with this substring
group (str | None) – only iterate users that are members of this group
total (int | None)
- Return type:
Iterable[dict[str, str | list[str]]]
- blocks(starttime=None, endtime=None, reverse=False, blockids=None, users=None, iprange=None, total=None)[source]#
Iterate all current blocks, in order of creation.
The iterator yields dicts containing keys corresponding to the block properties.
See also
Note
logevents only logs user blocks, while this method iterates all blocks including IP ranges.
Warning
iprange
parameter cannot be used together withusers
.- Parameters:
starttime (Timestamp | None) – start iterating at this Timestamp
endtime (Timestamp | None) – stop iterating at this Timestamp
reverse (bool) – if True, iterate oldest blocks first (default: newest)
blockids (int | str | Iterable[int | str] | None) – only iterate blocks with these id numbers. Numbers must be separated by ‘|’ if given by a str.
users (str | Iterable[str] | None) – only iterate blocks affecting these usernames or IPs
iprange (str | None) – a single IP or an IP range. Ranges broader than IPv4/16 or IPv6/19 are not accepted.
total (int | None) – total amount of block entries
- Return type:
Iterable[dict[str, Any]]
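Example (illustrative; the entries returned depend on the wiki's block log):
>>> site = pywikibot.Site('wikipedia:test')
>>> for block in site.blocks(total=2):
...     print(block['user'], block['expiry'])  # who is blocked and until when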
- botusers(total=None)[source]#
Iterate bot users.
Iterated values are dicts containing ‘name’, ‘userid’, ‘editcount’, ‘registration’, and ‘groups’ keys. ‘groups’ will be present only if the user is a member of at least 1 group, and will be a list of str; all the other values are str and should always be present.
- Parameters:
total (int | None)
- Return type:
Generator[dict[str, Any], None, None]
- broken_redirects(total=None)[source]#
Yield Pages with broken redirects from Special:BrokenRedirects.
- Parameters:
total (int | None) – number of pages to return
- Return type:
Iterable[Page]
- categorymembers(category, *, namespaces=None, sortby=None, reverse=False, starttime=None, endtime=None, total=None, startprefix=None, endprefix=None, content=False, member_type=None)[source]#
Iterate members of specified category.
You should not use this method directly; instead use one of the following:
Changed in version 4.0: parameters except category are keyword arguments only.
Changed in version 8.0: raises TypeError instead of Error if no Category is specified
See also
- Parameters:
category (Category) – The Category to iterate.
namespaces (NamespaceArgType) – If present, only return category members from these namespaces. To yield subcategories or files, use parameter member_type instead.
sortby (str | None) – determines the order in which results are generated, valid values are “sortkey” (default, results ordered by category sort key) or “timestamp” (results ordered by time page was added to the category)
reverse (bool) – if True, generate results in reverse order (default False)
starttime (Timestamp | None) – if provided, only generate pages added after this time; not valid unless sortby=”timestamp”
endtime (Timestamp | None) – if provided, only generate pages added before this time; not valid unless sortby=”timestamp”
startprefix (str | None) – if provided, only generate pages >= this title lexically; not valid if sortby=”timestamp”
endprefix (str | None) – if provided, only generate pages < this title lexically; not valid if sortby=”timestamp”
content (bool) – if True, load the current content of each iterated page (default False)
member_type (str | Iterable[str] | None) – member type; values must be page, subcat or file. If member_type includes page and is used in conjunction with sortby=”timestamp”, the API may limit results to only pages in the first 50 namespaces.
total (int | None)
- Raises:
KeyError – a namespace identifier was not resolved
TypeError – a namespace identifier has an inappropriate type such as NoneType or bool
TypeError – no Category is specified
ValueError – invalid values given
- Return type:
Iterable[Page]
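The documented constraints among sortby, the time bounds, and the prefix bounds can be expressed as a small validation sketch. This is a hypothetical helper for illustration, not the actual pywikibot implementation:

```python
def check_categorymembers_args(sortby=None, starttime=None, endtime=None,
                               startprefix=None, endprefix=None):
    """Reject the parameter combinations ruled out above (sketch)."""
    # starttime/endtime are only meaningful when ordering by timestamp
    if (starttime or endtime) and sortby != 'timestamp':
        raise ValueError('starttime/endtime require sortby="timestamp"')
    # prefix bounds apply to sortkey ordering, not timestamp ordering
    if (startprefix or endprefix) and sortby == 'timestamp':
        raise ValueError(
            'startprefix/endprefix are not valid with sortby="timestamp"')
```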
- deadendpages(total=None)[source]#
Yield Page objects retrieved from Special:Deadendpages.
- Parameters:
total (int | None) – number of pages to return
- Return type:
Iterable[Page]
- deletedrevs(titles=None, start=None, end=None, reverse=False, content=False, total=None, **kwargs)[source]#
Iterate deleted revisions.
Each value returned by the iterator will be a dict containing the ‘title’ and ‘ns’ keys for a particular Page and a ‘revisions’ key whose value is a list of revisions in the same format as recentchanges, plus a ‘content’ element with key ‘*’ if the content parameter is set. For older wikis a ‘token’ key is also given with the content request.
See also
Note
either titles or revids must be set but not both
- Parameters:
start – Iterate revisions starting at this Timestamp
end – Iterate revisions ending at this Timestamp
reverse (bool) – Iterate oldest revisions first (default: newest)
content (bool) – If True, retrieve the content of each revision
total (int | None) – number of revisions to retrieve
- Keyword Arguments:
revids – Get revisions by their ID
user – List revisions by this user
excludeuser – Exclude revisions by this user
tag – Only list revisions tagged with this tag
prop – Which properties to get. Defaults are ids, user, comment, flags and timestamp
- Return type:
Generator[dict[str, Any], None, None]
- double_redirects(total=None)[source]#
Yield Pages with double redirects from Special:DoubleRedirects.
- Parameters:
total (int | None) – number of pages to return
- Return type:
Iterable[Page]
- exturlusage(url=None, protocol=None, namespaces=None, total=None, content=False)[source]#
Iterate Pages that contain links to the given URL.
See also
- Parameters:
url (str | None) – The URL to search for (with or without the protocol prefix); this may include a ‘*’ as a wildcard, only at the start of the hostname
namespaces (list[int] | None) – list of namespace numbers to restrict the search to
total (int | None) – Maximum number of pages to retrieve in total
protocol (str | None) – Protocol to search for, likely http or https, http by default. Full list shown on Special:LinkSearch wikipage
content (bool)
- Return type:
Iterable[Page]
- filearchive(start=None, end=None, reverse=False, total=None, **kwargs)[source]#
Iterate archived files.
Yields dicts of file archive information.
See also
- Parameters:
start (str | None) – start at this title (name need not exist)
end (str | None) – end at this title (name need not exist)
reverse (bool) – if True, iterate in reverse lexicographic order
total (int | None) – maximum number of pages to retrieve in total
- Keyword Arguments:
prefix – only iterate titles starting with this substring
sha1 – only iterate image with this sha1 hash
sha1base36 – same as sha1 but in base 36
prop – Image information to get. Default is timestamp
- Return type:
Iterable[dict[str, Any]]
- imageusage(image, *, namespaces=None, filterredir=None, total=None, content=False)[source]#
Iterate Pages that contain links to the given FilePage.
See also
Changed in version 7.2: all parameters except image are keyword only.
- Parameters:
image (FilePage) – the image to search for (FilePage need not exist on the wiki)
namespaces (NamespaceArgType) – If present, only iterate pages in these namespaces
filterredir (bool | None) – if True, only yield redirects; if False (and not None), only yield non-redirects (default: yield both)
total (int | None) – iterate no more than this number of pages in total
content (bool) – if True, load the current content of each iterated page (default False)
- Raises:
KeyError – a namespace identifier was not resolved
TypeError – a namespace identifier has an inappropriate type such as NoneType or bool
- Return type:
Iterable[Page]
- load_pages_from_pageids(pageids)[source]#
Return a page generator from pageids.
Pages are iterated in the same order as in the underlying pageids.
Pageids are filtered so that only one page is returned for duplicate pageids.
- Parameters:
pageids (str | Iterable[int | str]) – an iterable that returns pageids (str or int), or a comma- or pipe-separated string of pageids (e.g. ‘945097,1483753, 956608’ or ‘945097|483753|956608’)
- Return type:
Generator[Page, None, None]
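The accepted pageids formats and the duplicate filtering can be sketched as a small pure-Python helper. This is an illustrative sketch of the normalization step only, not the pywikibot implementation:

```python
def normalize_pageids(pageids):
    """Split a comma- or pipe-separated pageid string, strip whitespace,
    and drop duplicates while preserving order (illustrative sketch)."""
    if isinstance(pageids, str):
        pageids = pageids.replace('|', ',').split(',')
    seen = set()
    result = []
    for pid in pageids:
        pid = str(pid).strip()
        if pid and pid not in seen:
            seen.add(pid)
            result.append(pid)
    return result
```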
- loadrevisions(page, *, content=False, section=None, **kwargs)[source]#
Retrieve revision information and store it in page object.
By default, retrieves the last (current) revision of the page, unless any of the optional parameters revids, startid, endid, starttime, endtime, rvdir, user, excludeuser, or total are specified. Unless noted below, all parameters not specified default to False.
If rvdir is False or not specified, startid must be greater than endid if both are specified; likewise, starttime must be greater than endtime. If rvdir is True, these relationships are reversed.
See also
- Parameters:
page (Page) – retrieve revisions of this Page and hold the data.
content (bool) – if True, retrieve the wiki-text of each revision; otherwise, only retrieve the revision metadata (default)
section (int | None) – if specified, retrieve only this section of the text (content must be True); section must be given by number (top of the article is section 0), not name
- Keyword Arguments:
revids – retrieve only the specified revision ids (raise Exception if any of revids does not correspond to page)
startid – retrieve revisions starting with this revid
endid – stop upon retrieving this revid
starttime – retrieve revisions starting at this Timestamp
endtime – stop upon reaching this Timestamp
rvdir – if false, retrieve newest revisions first (default); if true, retrieve oldest first
user – retrieve only revisions authored by this user
excludeuser – retrieve all revisions not authored by this user
total – number of revisions to retrieve
- Raises:
ValueError – invalid startid/endid or starttime/endtime values
pywikibot.exceptions.Error – revids belonging to a different page
- Return type:
None
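The ordering rule relating startid/endid and starttime/endtime to rvdir can be captured in a short validation sketch (a hypothetical helper; pywikibot performs an equivalent check internally before querying):

```python
def check_rev_bounds(rvdir=False, startid=None, endid=None,
                     starttime=None, endtime=None):
    """Enforce the documented ordering: with rvdir False, start bounds must
    be greater than end bounds; with rvdir True, the reverse (sketch)."""
    if startid is not None and endid is not None:
        if (startid < endid) is not rvdir:
            raise ValueError('startid/endid do not match rvdir direction')
    if starttime is not None and endtime is not None:
        if (starttime < endtime) is not rvdir:
            raise ValueError('starttime/endtime do not match rvdir direction')
```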
- logevents(logtype=None, user=None, page=None, namespace=None, start=None, end=None, reverse=False, tag=None, total=None)[source]#
Iterate all log entries.
See also
Note
logevents with logtype='block' only logs user blocks whereas site.blocks iterates all blocks including IP ranges.
- Parameters:
logtype (str | None) – only iterate entries of this type (see mediawiki api documentation for available types)
user (str | None) – only iterate entries that match this user name
page (str | Page | None) – only iterate entries affecting this page
namespace (NamespaceArgType) – namespace(s) to retrieve logevents from
start (str | Timestamp | None) – only iterate entries from and after this Timestamp
end (str | Timestamp | None) – only iterate entries up to and through this Timestamp
reverse (bool) – if True, iterate oldest entries first (default: newest)
tag (str | None) – only iterate entries tagged with this tag
total (int | None) – maximum number of events to iterate
Note
due to an API limitation, if the namespace parameter contains multiple namespaces, log entries from all namespaces will be fetched from the API and filtered later during iteration.
- Raises:
KeyError – the namespace identifier was not resolved
TypeError – the namespace identifier has an inappropriate type such as bool, or an iterable with more than one namespace
- Return type:
Iterable[pywikibot.logentries.LogEntry]
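The client-side filtering mentioned in the note above amounts to keeping only entries whose namespace is in the requested set. A minimal sketch, using plain dicts with an 'ns' key as stand-ins for LogEntry objects:

```python
def filter_by_namespace(entries, namespaces):
    """Yield only entries whose 'ns' value is in namespaces; pass
    everything through when namespaces is None (illustrative sketch)."""
    if namespaces is None:
        yield from entries
        return
    wanted = set(namespaces)
    for entry in entries:
        if entry['ns'] in wanted:
            yield entry
```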
- lonelypages(total=None)[source]#
Yield Pages retrieved from Special:Lonelypages.
- Parameters:
total (int | None) – number of pages to return
- Return type:
Iterable[Page]
- longpages(total=None)[source]#
Yield Pages and lengths from Special:Longpages.
Yields tuples of Page object and length (int).
- Parameters:
total (int | None) – number of pages to return
- Return type:
Generator[tuple[Page, int], None, None]
- newpages(user=None, returndict=False, start=None, end=None, reverse=False, bot=False, redirect=False, excludeuser=None, patrolled=None, namespaces=None, total=None)[source]#
Yield new articles (as Page objects) from recent changes.
Starts with the newest article and fetches the number of articles specified in the first argument.
The objects yielded are dependent on parameter returndict. When true, it yields a tuple composed of a Page object and a dict of attributes. When false, it yields a tuple composed of the Page object, timestamp (str), length (int), an empty string, username or IP address (str), comment (str).
- Parameters:
namespaces (NamespaceArgType) – only iterate pages in these namespaces
returndict (bool)
reverse (bool)
bot (bool)
redirect (bool)
total (int | None)
- Raises:
KeyError – a namespace identifier was not resolved
TypeError – a namespace identifier has an inappropriate type such as NoneType or bool
- Return type:
Generator[tuple[Page, dict[str, Any]], None, None] | Generator[tuple[Page, str, int, str, str, str], None, None]
- page_embeddedin(page, *, filter_redirects=None, namespaces=None, total=None, content=False)[source]#
Iterate all pages that embed the given page as a template.
See also
- Parameters:
page (Page) – The Page to get inclusions for.
filter_redirects – If True, only return redirects that embed the given page. If False, only return non-redirect links. If None, return both (no filtering).
namespaces (NamespaceArgType) – If present, only return links from the namespaces in this list.
content (bool) – if True, load the current content of each iterated page (default False)
total (int | None)
- Raises:
KeyError – a namespace identifier was not resolved
TypeError – a namespace identifier has an inappropriate type such as NoneType or bool
- Return type:
Iterable[Page]
- page_extlinks(page, *, total=None)[source]#
Yield all external links on page, yielding URL strings.
See also
- Parameters:
page (Page)
total (int | None)
- Return type:
Generator[str, None, None]
- page_redirects(page, *, filter_fragments=None, namespaces=None, total=None, content=False)[source]#
Iterate all redirects to the given page.
See also
Added in version 7.0.
- Parameters:
page (Page) – The Page to get redirects for.
filter_fragments (bool | None) – If True, only return redirects with fragments. If False, only return redirects without fragments. If None, return both (no filtering).
namespaces (NamespaceArgType) – Only return redirects from the namespaces
total (int | None) – maximum number of redirects to retrieve in total
content (bool) – load the current content of each redirect
- Return type:
Iterable[Page]
- pagebacklinks(page, *, follow_redirects=False, filter_redirects=None, namespaces=None, total=None, content=False)[source]#
Iterate all pages that link to the given page.
See also
- Parameters:
page (Page) – The Page to get links to.
follow_redirects (bool) – Also return links to redirects pointing to the given page.
filter_redirects – If True, only return redirects to the given page. If False, only return non-redirect links. If None, return both (no filtering).
namespaces (NamespaceArgType) – If present, only return links from the namespaces in this list.
total (int | None) – Maximum number of pages to retrieve in total.
content (bool) – if True, load the current content of each iterated page (default False)
- Raises:
KeyError – a namespace identifier was not resolved
TypeError – a namespace identifier has an inappropriate type such as NoneType or bool
- Return type:
Iterable[Page]
- pagecategories(page, *, total=None, content=False)[source]#
Iterate categories to which page belongs.
See also
- pageimages(page, *, total=None, content=False)[source]#
Iterate images used (not just linked) on the page.
See also
- pagelanglinks(page, *, total=None, include_obsolete=False, include_empty_titles=False)[source]#
Yield all interlanguage links on page, yielding Link objects.
Changed in version 6.2: include_empty_titles parameter was added.
See also
- pagelinks(page, *, namespaces=None, follow_redirects=False, total=None, content=False)[source]#
Yield internal wikilinks contained (or transcluded) on page.
See also
- Parameters:
namespaces (NamespaceArgType) – Only iterate pages in these namespaces (default: all)
follow_redirects (bool) – if True, yields the target of any redirects, rather than the redirect page
total (int | None) – iterate no more than this number of pages in total
content (bool) – if True, load the current content of each iterated page
page (pywikibot.page.BasePage)
- Raises:
KeyError – a namespace identifier was not resolved
TypeError – a namespace identifier has an inappropriate type such as NoneType or bool
- Return type:
Generator[Page, None, None]
- pagereferences(page, *, follow_redirects=False, filter_redirects=None, with_template_inclusion=True, only_template_inclusion=False, namespaces=None, total=None, content=False)[source]#
Convenience method combining pagebacklinks and page_embeddedin.
- Parameters:
namespaces (NamespaceArgType) – If present, only return links from the namespaces in this list.
follow_redirects (bool)
filter_redirects (bool | None)
with_template_inclusion (bool)
only_template_inclusion (bool)
total (int | None)
content (bool)
- Raises:
KeyError – a namespace identifier was not resolved
TypeError – a namespace identifier has an inappropriate type such as NoneType or bool
- Return type:
Iterable[Page]
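Combining the two underlying iterators is essentially a chained, order-preserving deduplication with an optional total cap. A minimal sketch over plain page titles (hypothetical helper, not the pywikibot implementation):

```python
from itertools import chain

def combine_references(backlinks, embeddedin, *, total=None):
    """Merge two page streams, drop duplicates while preserving order,
    and stop after total items if given (illustrative sketch)."""
    seen = set()
    for title in chain(backlinks, embeddedin):
        if title in seen:
            continue
        seen.add(title)
        yield title
        if total is not None:
            total -= 1
            if total == 0:
                return
```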
- pages_with_property(propname, *, total=None)[source]#
Iterate Page objects from Special:PagesWithProp.
See also
- Parameters:
propname (str) – must be a valid property.
total (int | None) – number of pages to return
- Returns:
return a generator of Page objects
- Return type:
iterator
- pagetemplates(page, *, content=False, namespaces=None, total=None)[source]#
Iterate pages transcluded (not just linked) on the page.
- Parameters:
content (bool) – if True, load the current content of each iterated page (default False)
namespaces (NamespaceArgType) – Only iterate pages in these namespaces
total (int | None) – maximum number of pages to retrieve in total
page (Page)
- Raises:
KeyError – a namespace identifier was not resolved
TypeError – a namespace identifier has an inappropriate type such as NoneType or bool
UnsupportedPageError – a Page object is not supported due to namespace restriction
- Return type:
Iterable[Page]
- patrol(rcid=None, revid=None, revision=None)[source]#
Return a generator of patrolled pages.
See also
Pages to be patrolled are identified by rcid, revid or revision. At least one of the parameters is mandatory. See https://www.mediawiki.org/wiki/API:Patrol.
- Parameters:
rcid (int | str | Iterable[int] | Iterable[str] | None) – an int/string/iterable/iterator providing rcid of pages to be patrolled.
revid (int | str | Iterable[int] | Iterable[str] | None) – an int/string/iterable/iterator providing revid of pages to be patrolled.
revision (pywikibot.page.Revision | Iterable[pywikibot.page.Revision] | None) – a Revision/iterable/iterator providing Revision objects of pages to be patrolled.
- Return type:
Generator[dict[str, int | str], None, None]
- preloadpages(pagelist, *, groupsize=None, templates=False, langlinks=False, pageprops=False, categories=False, content=True, quiet=True)[source]#
Return a generator to a list of preloaded pages.
Pages are iterated in the same order as in the underlying pagelist. In case of duplicates within a groupsize batch, only the first entry is returned.
Changed in version 7.6: content parameter was added.
Changed in version 7.7: categories parameter was added.
Changed in version 8.1: groupsize is maxlimit by default. quiet parameter was added. No longer show the “Retrieving pages from site” message by default.
- Parameters:
pagelist (Iterable[Page]) – an iterable that returns Page objects
groupsize (int | None) – how many Pages to query at a time. If None (default), maxlimit is used.
templates (bool) – preload pages (typically templates) transcluded in the provided pages
langlinks (bool) – preload all language links from the provided pages to other languages
pageprops (bool) – preload various properties defined in page content
categories (bool) – preload page categories
content (bool) – preload page content
quiet (bool) – If True (default), do not show the “Retrieving pages” message
- Return type:
Generator[Page, None, None]
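The batching behaviour described above (groups of at most groupsize pages, duplicates dropped within each batch) can be sketched with hashable stand-ins for Page objects. This is an illustrative helper, not the pywikibot implementation:

```python
def batches(pagelist, groupsize):
    """Split an iterable into batches of at most groupsize items,
    dropping duplicates within each batch (illustrative sketch)."""
    batch, seen = [], set()
    for page in pagelist:
        if page in seen:
            continue
        seen.add(page)
        batch.append(page)
        if len(batch) == groupsize:
            yield batch
            batch, seen = [], set()
    if batch:
        yield batch
```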
- protectedpages(namespace=0, protect_type='edit', level=False, total=None, type='[deprecated name of protect_type]')[source]#
Return protected pages depending on protection level and type.
For protection types other than ‘create’ it uses APISite.allpages, while for ‘create’ it uses the ‘query+protectedtitles’ module.
Changed in version 9.0: type parameter was renamed to protect_type.
See also
- Parameters:
namespace (NamespaceArgType) – The searched namespace.
protect_type (str) – The protection type to search for (default ‘edit’).
level (str | bool) – The protection level (like ‘autoconfirmed’). If False it shows all protection levels.
total (int | None)
- Returns:
The pages which are protected.
- querypage(special_page, total=None)[source]#
Iterate Page objects retrieved from Special:{special_page}.
Generic function for all special pages supported by the site MW API.
See also
Changed in version 9.0: Raises ValueError instead of AssertionError if special_page is invalid.
- Parameters:
special_page (str) – Special page to query
total (int | None) – number of pages to return
- Raises:
ValueError – special_page is not supported in SpecialPages.
- Return type:
Iterable[Page]
- randompages(total=None, namespaces=None, redirects=False, content=False)[source]#
Iterate a number of random pages.
Pages are listed in a fixed sequence, only the starting point is random.
See also
Changed in version 9.0: Raises TypeError instead of AssertionError if redirects is invalid.
- Parameters:
total (int | None) – the maximum number of pages to iterate
namespaces (NamespaceArgType) – only iterate pages in these namespaces.
redirects (bool | None) – if True, include only redirect pages in results; if False, do not include redirects; if None, include both types (default: False).
content (bool) – if True, load the current content of each iterated page (default False).
- Raises:
KeyError – a namespace identifier was not resolved
TypeError – a namespace identifier has an inappropriate type such as NoneType or bool
TypeError – unsupported redirects parameter
- Return type:
Iterable[Page]
- recentchanges(*, start=None, end=None, reverse=False, namespaces=None, changetype=None, minor=None, bot=None, anon=None, redirect=None, patrolled=None, top_only=False, total=None, user=None, excludeuser=None, tag=None)[source]#
Iterate recent changes.
See also
- Parameters:
start (Timestamp) – Timestamp to start listing from
end (Timestamp) – Timestamp to end listing at
reverse (bool) – if True, start with oldest changes (default: newest)
namespaces (NamespaceArgType) – only iterate pages in these namespaces
changetype (str | None) – only iterate changes of this type (“edit” for edits to existing pages, “new” for new pages, “log” for log entries)
minor (bool | None) – if True, only list minor edits; if False, only list non-minor edits; if None, list all
bot (bool | None) – if True, only list bot edits; if False, only list non-bot edits; if None, list all
anon (bool | None) – if True, only list anon edits; if False, only list non-anon edits; if None, list all
redirect (bool | None) – if True, only list edits to redirect pages; if False, only list edits to non-redirect pages; if None, list all
patrolled (bool | None) – if True, only list patrolled edits; if False, only list non-patrolled edits; if None, list all
top_only (bool) – if True, only list changes that are the latest revision (default False)
user (str | list[str] | None) – if not None, only list edits by this user or users
excludeuser (str | list[str] | None) – if not None, exclude edits by this user or users
tag (str | None) – a recent changes tag
total (int | None)
- Raises:
KeyError – a namespace identifier was not resolved
TypeError – a namespace identifier has an inappropriate type such as NoneType or bool
- Return type:
Iterable[dict[str, Any]]
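The minor/bot/anon/redirect/patrolled parameters all follow the same three-valued convention: True keeps only matching edits, False keeps only non-matching ones, and None disables the filter. A minimal sketch over plain dicts with boolean flags (hypothetical helper, not the pywikibot implementation):

```python
def match_tristate(entry, *, minor=None, bot=None, anon=None):
    """Return True if entry passes every enabled tri-state filter:
    True = only matching, False = only non-matching, None = no filter
    (illustrative sketch; entry is a plain dict of boolean flags)."""
    for flag, wanted in (('minor', minor), ('bot', bot), ('anon', anon)):
        if wanted is not None and bool(entry.get(flag)) is not wanted:
            return False
    return True
```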
- redirectpages(total=None)[source]#
Yield redirect pages from Special:ListRedirects.
- Parameters:
total (int | None) – number of pages to return
- Return type:
Iterable[Page]
- search(searchstring, *, namespaces=None, where=None, total=None, content=False)[source]#
Iterate Pages that contain the searchstring.
Note that this may include non-existing Pages if the wiki’s database table contains outdated entries.
Changed in version 7.0: Default of the where parameter has been changed from ‘text’ to None; the behaviour then depends on the installed search engine, which is ‘text’ on CirrusSearch. Raises APIError instead of Error if searchstring is not set or the where parameter is wrong.
See also
- Parameters:
searchstring (str) – the text to search for
where (str | None) – Where to search; value must be “text”, “title”, “nearmatch” or None (many wikis do not support all search types)
namespaces (NamespaceArgType) – search only in these namespaces (defaults to all)
content (bool) – if True, load the current content of each iterated page (default False)
total (int | None)
- Raises:
KeyError – a namespace identifier was not resolved
TypeError – a namespace identifier has an inappropriate type such as NoneType or bool
APIError – The “gsrsearch” parameter must be set: searchstring parameter is not set
APIError – Unrecognized value for parameter “gsrwhat”: wrong where parameter is given
- Return type:
Iterable[Page]
- shortpages(total=None)[source]#
Yield Pages and lengths from Special:Shortpages.
Yields tuples of Page object and length (int).
- Parameters:
total (int | None) – number of pages to return
- Return type:
Generator[tuple[Page, int], None, None]
- uncategorizedcategories(total=None)[source]#
Yield Categories from Special:Uncategorizedcategories.
- Parameters:
total (int | None) – number of pages to return
- Return type:
Iterable[Page]
- uncategorizedfiles(total=None)#
Yield FilePages from Special:Uncategorizedimages.
- Parameters:
total (int | None) – number of pages to return
- Return type:
Iterable[Page]
- uncategorizedimages(total=None)[source]#
Yield FilePages from Special:Uncategorizedimages.
- Parameters:
total (int | None) – number of pages to return
- Return type:
Iterable[Page]
- uncategorizedpages(total=None)[source]#
Yield Pages from Special:Uncategorizedpages.
- Parameters:
total (int | None) – number of pages to return
- Return type:
Iterable[Page]
- uncategorizedtemplates(total=None)[source]#
Yield Pages from Special:Uncategorizedtemplates.
- Parameters:
total (int | None) – number of pages to return
- Return type:
Iterable[Page]
- unusedcategories(total=None)[source]#
Yield Category objects from Special:Unusedcategories.
- Parameters:
total (int | None) – number of pages to return
- Return type:
Iterable[Page]
- unusedfiles(total=None)[source]#
Yield FilePage objects from Special:Unusedimages.
- Parameters:
total (int | None) – number of pages to return
- Return type:
Iterable[Page]
- unwatchedpages(total=None)[source]#
Yield Pages from Special:Unwatchedpages (requires Admin privileges).
- Parameters:
total (int | None) – number of pages to return
- Return type:
Iterable[Page]
- usercontribs(user=None, userprefix=None, start=None, end=None, reverse=False, namespaces=None, minor=None, total=None, top_only=False)[source]#
Iterate contributions by a particular user.
Iterated values are in the same format as recentchanges.
- Parameters:
user (str | None) – Iterate contributions by this user (name or IP)
userprefix (str | None) – Iterate contributions by all users whose names or IPs start with this substring
start – Iterate contributions starting at this Timestamp
end – Iterate contributions ending at this Timestamp
reverse (bool) – Iterate oldest contributions first (default: newest)
namespaces (NamespaceArgType) – only iterate pages in these namespaces
minor (bool | None) – if True, iterate only minor edits; if False and not None, iterate only non-minor edits (default: iterate both)
total (int | None) – limit result to this number of pages
top_only (bool) – if True, iterate only edits which are the latest revision (default: False)
- Raises:
pywikibot.exceptions.Error – either user or userprefix must be non-empty
KeyError – a namespace identifier was not resolved
TypeError – a namespace identifier has an inappropriate type such as NoneType or bool
- Return type:
Iterable[dict[str, Any]]
- users(usernames)[source]#
Iterate info about a list of users by name or IP.
See also
- Parameters:
usernames (Iterable[str]) – a list of user names
- Return type:
Iterable[dict[str, Any]]
- wantedcategories(total=None)[source]#
Yield Pages from Special:Wantedcategories.
- Parameters:
total (int | None) – number of pages to return
- Return type:
Iterable[Page]
- wantedfiles(total=None)[source]#
Yield Pages from Special:Wantedfiles.
- Parameters:
total (int | None) – number of pages to return
- Return type:
Iterable[Page]
- wantedpages(total=None)[source]#
Yield Pages from Special:Wantedpages.
- Parameters:
total (int | None) – number of pages to return
- Return type:
Iterable[Page]
- wantedtemplates(total=None)[source]#
Yield Pages from Special:Wantedtemplates.
- Parameters:
total (int | None) – number of pages to return
- Return type:
Iterable[Page]
- watched_pages(force=False, total=None, *, with_talkpage=True)[source]#
Return watchlist.
Note
watched_pages is a restartable generator. See tools.collections.GeneratorWrapper for its usage.
See also
Added in version 8.1: the with_talkpage parameter.
- Parameters:
force (bool) – Reload watchlist
total (int | None) – if not None, limit the generator to yielding this many items in total
with_talkpage (bool) – if false, ignore talk pages and special pages
- Returns:
generator of pages in watchlist
- Return type:
Iterable[Page]
- watchlist_revs(start=None, end=None, reverse=False, namespaces=None, minor=None, bot=None, anon=None, total=None)[source]#
Iterate revisions to pages on the bot user’s watchlist.
Iterated values will be in same format as recentchanges.
See also
- Parameters:
start – Iterate revisions starting at this Timestamp
end – Iterate revisions ending at this Timestamp
reverse (bool) – Iterate oldest revisions first (default: newest)
namespaces (NamespaceArgType) – only iterate pages in these namespaces
minor (bool | None) – if True, only list minor edits; if False (and not None), only list non-minor edits
bot (bool | None) – if True, only list bot edits; if False (and not None), only list non-bot edits
anon (bool | None) – if True, only list anon edits; if False (and not None), only list non-anon edits
total (int | None)
- Raises:
KeyError – a namespace identifier was not resolved
TypeError – a namespace identifier has an inappropriate type such as NoneType or bool
- Return type:
Iterable[dict[str, Any]]
DataSite
— API Interface for Wikibase#
Objects representing API interface to Wikibase site.
- class pywikibot.site._datasite.DataSite(*args, **kwargs)[source]#
Bases:
APISite
Wikibase data capable site.
- addClaim(entity, claim, bot=True, summary=None, tags=None)[source]#
Add a claim.
Changed in version 9.4: tags parameter was added
- Parameters:
entity (pywikibot.page.WikibaseEntity) – Entity to modify
claim (pywikibot.page.Claim) – Claim to be added
bot (bool) – Whether to mark the edit as a bot edit
summary (str | None) – Edit summary
tags (str | None) – Change tags to apply to the revision
- Return type:
None
- add_form(lexeme, form, *, bot=True, baserevid=None)[source]#
Add a form.
- Parameters:
lexeme (LexemePage) – Lexeme to modify
form (LexemeForm) – Form to be added
bot (bool)
- Keyword Arguments:
bot – Whether to mark the edit as a bot edit
baserevid – Base revision id override, used to detect conflicts.
- Return type:
dict
- changeClaimTarget(claim, snaktype='value', bot=True, summary=None, tags=None)[source]#
Set the claim target to the value of the provided claim target.
Changed in version 9.4: tags parameter was added
- Parameters:
claim (Claim) – The source of the claim target value
snaktype (str) – An optional snaktype (‘value’, ‘novalue’ or ‘somevalue’). Default: ‘value’
bot (bool) – Whether to mark the edit as a bot edit
summary (str | None) – Edit summary
tags (str | None) – Change tags to apply to the revision
- property concept_base_uri#
Return the base uri for concepts/entities.
- Returns:
concept base uri
- Return type:
str
- editEntity(entity, data, bot=True, **kwargs)[source]#
Edit entity.
Note
This method is unable to create entities other than
item
if dict with API parameters was passed to entity parameter.Changed in version 9.4: tags keyword argument was added
- Parameters:
entity (pywikibot.page.WikibaseEntity | dict) – Page to edit, or dict with API parameters to use for entity identification.
data (dict) – data updates
bot (bool) – Whether to mark the edit as a bot edit.
- Keyword Arguments:
baserevid (int) – The numeric identifier for the revision to base the modification on. This is used for detecting conflicts during save.
clear (bool) – If set, the complete entity is emptied before proceeding. The entity will not be saved before it is filled with the data, possibly with parts excluded.
summary (str) – Summary for the edit. Will be prepended by an automatically generated comment. The length limit of the autocomment together with the summary is 260 characters. Be aware that everything above that limit will be cut off.
tags (Iterable[str] | str) – Change tags to apply to the revision.
- Returns:
New entity data
- Return type:
dict
- editQualifier(claim, qualifier, new=False, bot=True, summary=None, tags=None)[source]#
Create/Edit a qualifier.
Changed in version 7.0: deprecated baserevid parameter was removed
Changed in version 9.4: tags parameter was added
- Parameters:
claim (Claim) – A Claim object to add the qualifier to
qualifier (Claim) – A Claim object to be used as a qualifier
new (bool) – Whether to create a new one if the qualifier already exists
bot (bool) – Whether to mark the edit as a bot edit
summary (str | None) – Edit summary
tags (str | None) – Change tags to apply to the revision
- Raises:
ValueError – The claim cannot have a qualifier.
- editSource(claim, source, new=False, bot=True, summary=None, tags=None)[source]#
Create/Edit a source.
Changed in version 7.0: deprecated baserevid parameter was removed
Changed in version 9.4: tags parameter was added
- Parameters:
claim (Claim) – A Claim object to add the source to.
source (Claim) – A Claim object to be used as a source.
new (bool) – Whether to create a new one if the “source” already exists.
bot (bool) – Whether to mark the edit as a bot edit.
summary (str | None) – Edit summary.
tags (str | None) – Change tags to apply to the revision.
- Raises:
ValueError – The claim cannot have a source.
- edit_form_elements(form, data, *, bot=True, baserevid=None)[source]#
Edit lexeme form elements.
- Parameters:
form (LexemeForm) – Form
data (dict) – data updates
bot (bool)
- Keyword Arguments:
bot – Whether to mark the edit as a bot edit
baserevid – Base revision id override, used to detect conflicts.
- Returns:
New form data
- Return type:
dict
- getPropertyType(prop)[source]#
Obtain the type of a property.
Deprecated since version 9.5: Use
get_property_type()
instead.
- get_entity_for_entity_id(entity_id)[source]#
Return a new instance for given entity id.
- Raises:
pywikibot.exceptions.NoWikibaseEntityError – there is no entity with the id
- Returns:
a WikibaseEntity subclass
- Return type:
- get_namespace_for_entity_type(entity_type)[source]#
Return namespace for given entity type.
- Returns:
corresponding namespace
- Return type:
- get_property_type(prop)[source]#
Obtain the type of a property.
This is used specifically because we can cache the value for a much longer time (near infinite).
Added in version 9.5.
- Raises:
NoWikibaseEntityError – prop does not exist
- Parameters:
prop (Property)
- Return type:
str
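A minimal sketch of looking up a property's datatype. The property id and the datatype strings listed are assumptions based on common Wikibase datatypes; the live calls are commented out:

```python
# Common Wikibase datatype strings that get_property_type() may return
# (an assumption, not an exhaustive list):
COMMON_DATATYPES = {'wikibase-item', 'string', 'time', 'quantity',
                    'monolingualtext', 'external-id', 'commonsMedia', 'url'}

# import pywikibot
# repo = pywikibot.Site('wikidata', 'wikidata').data_repository()
# prop = pywikibot.PropertyPage(repo, 'P31')  # hypothetical property id
# datatype = repo.get_property_type(prop)     # e.g. 'wikibase-item'
```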
- get_repo_for_entity_type(entity_type)[source]#
Get the data repository for the entity type.
When no foreign repository is defined for the entity type, the method returns this repository itself even if it does not support that entity type either.
Added in version 8.0.
- Raises:
ValueError – when invalid entity type was provided
- Parameters:
entity_type (str)
- Return type:
- linkTitles(page1, page2, bot=True)[source]#
Link two pages together.
Changed in version 9.4: tags parameter was added
- loadcontent(identification, *props)[source]#
Fetch the current content of a Wikibase item.
This is called loadcontent since wbgetentities does not support fetching old revisions. Eventually this will get replaced by an actual loadrevisions.
- Parameters:
identification (dict) – Parameters used to identify the page(s)
props – the optional properties to fetch.
- mergeItems(from_item, to_item, ignore_conflicts=None, summary=None, bot=True, tags=None)[source]#
Merge two items together.
Changed in version 9.4: tags parameter was added
- Parameters:
from_item (ItemPage) – Item to merge from
to_item (ItemPage) – Item to merge into
ignore_conflicts (list[str] | None) – Which type of conflicts (‘description’, ‘sitelink’, and ‘statement’) should be ignored
summary (str | None) – Edit summary
bot (bool) – Whether to mark the edit as a bot edit
tags (str | None) – Change tags to apply to the revision
- Returns:
dict API output
- Return type:
dict
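A hedged sketch of a merge call; the item ids are purely illustrative and the calls are commented out because merging requires a live repository and the necessary permissions:

```python
# Ignore clashing descriptions while merging (other conflict types
# would still abort the merge):
ignore_conflicts = ['description']

# import pywikibot
# repo = pywikibot.Site('wikidata', 'wikidata').data_repository()
# duplicate = pywikibot.ItemPage(repo, 'Q111111111')  # hypothetical ids
# target = pywikibot.ItemPage(repo, 'Q42')
# result = repo.mergeItems(duplicate, target,
#                          ignore_conflicts=ignore_conflicts,
#                          summary='Merge duplicate items')
```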
- mergeLexemes(from_lexeme, to_lexeme, summary=None, *, bot=True)[source]#
Merge two lexemes together.
- Parameters:
from_lexeme (LexemePage) – Lexeme to merge from
to_lexeme (LexemePage) – Lexeme to merge into
summary (str) – Edit summary
bot (bool)
- Keyword Arguments:
bot – Whether to mark the edit as a bot edit
- Returns:
dict API output
- Return type:
dict
- parsevalue(datatype, values, options=None, language=None, validate=False)[source]#
Send data values to the wikibase parser for interpretation.
Added in version 7.5.
See also
- Parameters:
datatype (str) – datatype of the values being parsed. Refer the API for a valid datatype.
values (list[str]) – list of values to be parsed
options (dict[str, Any] | None) – any additional options for wikibase parser (for time, ‘precision’ should be specified)
language (str | None) – code of the language to parse the value in
validate (bool) – whether parser should provide data validation as well as parsing
- Returns:
list of parsed values
- Raises:
ValueError – parsing failed due to some invalid input values
- Return type:
list[Any]
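A minimal sketch of parsing a time value, assuming a data repository `repo`; the precision values follow the Wikibase time convention (9 = year, 11 = day), and the live call is commented out:

```python
# For the 'time' datatype the parser needs a 'precision' option:
options = {'precision': 11}       # 11 = day precision
values = ['8 February 1994']      # placeholder input string

# import pywikibot
# repo = pywikibot.Site('wikidata', 'wikidata').data_repository()
# parsed = repo.parsevalue('time', values, options=options,
#                          language='en', validate=True)
```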
- preload_entities(pagelist, groupsize=50)[source]#
Yield subclasses of WikibaseEntity with content prefilled.
Note
Pages will be iterated in a different order than in the underlying pagelist.
- Parameters:
pagelist (Iterable[WikibaseEntity | Page]) – an iterable that yields either WikibaseEntity objects, or Page objects linked to an ItemPage.
groupsize (int) – how many pages to query at a time
- Return type:
Generator[WikibaseEntity, None, None]
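A hedged sketch of batch-preloading items instead of fetching them one by one; the item ids are placeholders and the calls are commented out because they need a live repository:

```python
item_ids = ['Q1', 'Q2', 'Q3']  # placeholder entity ids

# import pywikibot
# repo = pywikibot.Site('wikidata', 'wikidata').data_repository()
# pages = [pywikibot.ItemPage(repo, qid) for qid in item_ids]
# for entity in repo.preload_entities(pages, groupsize=50):
#     # content is already loaded; no extra request per entity
#     print(entity.id)
```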
- property property_namespace#
Return namespace for properties.
- Returns:
property namespace
- Return type:
- removeClaims(claims, bot=True, summary=None, tags=None)[source]#
Remove claims.
Changed in version 7.0: deprecated baserevid parameter was removed
Changed in version 9.4: tags parameter was added
- Parameters:
claims (list[Claim]) – Claims to be removed
bot (bool) – Whether to mark the edit as a bot edit
summary (str | None) – Edit summary
tags (str | None) – Change tags to apply to the revision
- removeSources(claim, sources, bot=True, summary=None, tags=None)[source]#
Remove sources.
Changed in version 7.0: deprecated baserevid parameter was removed
Changed in version 9.4: tags parameter was added
- remove_form(form, *, bot=True, baserevid=None)[source]#
Remove a form.
- Parameters:
form (LexemeForm) – Form to be removed
bot (bool)
- Keyword Arguments:
bot – Whether to mark the edit as a bot edit
baserevid – Base revision id override, used to detect conflicts.
- Return type:
dict
- remove_qualifiers(claim, qualifiers, bot=True, summary=None, tags=None)[source]#
Remove qualifiers.
Changed in version 7.0: deprecated baserevid parameter was removed
Changed in version 9.4: tags parameter was added
- save_claim(claim, summary=None, bot=True, tags=None)[source]#
Save the whole claim to the wikibase site.
Changed in version 9.4: tags parameter was added
- Parameters:
claim (pywikibot.page.Claim) – The claim to save
bot (bool) – Whether to mark the edit as a bot edit
summary (str | None) – Edit summary
tags (str | None) – Change tags to apply to the revision
- Raises:
NoPageError – missing the snak value
NotImplementedError –
claim.isReference
orclaim.isQualifier
is given
- search_entities(search, language, total=None, **kwargs)[source]#
Search for pages or properties that contain the given text.
- Parameters:
search (str) – Text to find.
language (str) – Language to search in.
total (int | None) – Maximum number of pages to retrieve in total, or None in case of no limit.
- Returns:
‘search’ list from API output.
- Return type:
Generator
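A hedged sketch of consuming the search generator; the keys listed are the typical fields of each entry in the API 'search' list (an assumption), and the live calls are commented out:

```python
# Typical keys of each search hit dict (assumed, not exhaustive):
hit_keys = ('id', 'label', 'description')

# import pywikibot
# repo = pywikibot.Site('wikidata', 'wikidata').data_repository()
# for hit in repo.search_entities('Douglas Adams', 'en', total=5):
#     print(hit['id'], hit.get('label'))
```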
- property sparql_endpoint#
Return the sparql endpoint url, if any has been set.
- Returns:
sparql endpoint url
- Return type:
str|None
- tabular_data_repository()[source]#
Return Site object for the tabular data repository, e.g. Commons.
- wbsetaliases(itemdef, aliases, **kwargs)[source]#
Set aliases for a single Wikibase entity.
See self._wbset_action() for parameters
- wbsetdescription(itemdef, description, **kwargs)[source]#
Set description for a single Wikibase entity.
See self._wbset_action()
Obsolete Sites
— Outdated Sites#
Objects representing obsolete MediaWiki sites.
- class pywikibot.site._obsoletesites.ClosedSite(code, fam=None, user=None)[source]#
Bases:
APISite
Site closed to read-only mode.
- Parameters:
code (str)
fam (str | pywikibot.family.Family | None)
user (str | None)
Siteinfo
— Site Info Container#
Objects representing site info data contents.
- class pywikibot.site._siteinfo.Siteinfo(site)[source]#
Bases:
Container
A ‘dictionary’ like container for siteinfo.
This class queries the server to get the requested siteinfo property. Optionally it can cache this directly in the instance so that later requests don’t need to query the server.
All values of the siteinfo property ‘general’ are directly available.
Initialise it with an empty cache.
- BOOLEAN_PROPS = {'general': ['imagewhitelistenabled', 'langconversion', 'titleconversion', 'rtl', 'readonly', 'writeapi', 'variantarticlepath', 'misermode', 'uploadsenabled'], 'magicwords': ['case-sensitive'], 'namespaces': ['subpages', 'content', 'nonincludable']}#
- WARNING_REGEX = re.compile('Unrecognized values? for parameter ["\\\']siprop["\\\']: (.+?)\\.?')#
- get(key, get_default=True, cache=True, expiry=False)[source]#
Return a siteinfo property.
It will never throw an APIError that merely states the siteinfo property doesn’t exist; instead it will use the default value.
See also
_get_siteinfo
- Parameters:
key (str) – The name of the siteinfo property.
get_default (bool) – Whether to return the default value instead of raising a KeyError if the key is invalid.
cache (bool) – Caches the result internally so that future accesses via this method won’t query the server.
expiry (datetime | float | bool) – If the cache is older than the expiry it ignores the cache and queries the server to get the newest value.
- Returns:
The gathered property
- Raises:
KeyError – If the key is not a valid siteinfo property and the get_default option is set to False.
- Return type:
Any
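A hedged sketch of typical get() calls, assuming an APISite instance `site`; the calls are commented out because they query a live server, and the expiry value shown is a float interpreted per this method's expiry parameter:

```python
# Hypothetical cache policy: refresh values cached longer than a day ago.
expiry_days = 1

# general = site.siteinfo.get('general')                       # cached
# restrictions = site.siteinfo.get('restrictions',
#                                  expiry=expiry_days)         # refresh if stale
# stats = site.siteinfo.get('statistics', cache=False)         # never cache
```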
- get_requested_time(key)[source]#
Return when ‘key’ was successfully requested from the server.
If the property is actually in the siprop ‘general’ it returns the last request from the ‘general’ siprop.
- Parameters:
key (str) – The siprop value or a property of ‘general’.
- Returns:
The last time the siprop of ‘key’ was requested.
- Return type:
None (never), False (default), datetime.datetime (cached)
Namespace
— Namespace Object#
Objects representing Namespaces of MediaWiki site.
- class pywikibot.site._namespace.BuiltinNamespace(value, names=None, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]#
Bases:
IntEnum
Builtin namespace enum.
- CATEGORY = 14#
- CATEGORY_TALK = 15#
- FILE = 6#
- FILE_TALK = 7#
- HELP = 12#
- HELP_TALK = 13#
- MAIN = 0#
- MEDIA = -2#
- MEDIAWIKI = 8#
- MEDIAWIKI_TALK = 9#
- PROJECT = 4#
- PROJECT_TALK = 5#
- SPECIAL = -1#
- TALK = 1#
- TEMPLATE = 10#
- TEMPLATE_TALK = 11#
- USER = 2#
- USER_TALK = 3#
- property canonical: str#
Canonical form of MediaWiki built-in namespace.
Added in version 7.1.
- class pywikibot.site._namespace.MetaNamespace(name, bases, dic)[source]#
Bases:
ABCMeta
Metaclass for Namespace attribute settings.
Added in version 9.0.
Set Namespace.FOO to BuiltinNamespace.FOO for each builtin ns.
- class pywikibot.site._namespace.Namespace(id, canonical_name=None, custom_name=None, aliases=None, **kwargs)[source]#
Bases:
Iterable
,ComparableMixin
Namespace site data object.
This is backwards compatible with the structure of entries in site._namespaces which were a list of:
[customised namespace, canonical namespace name?, namespace alias*]
If the canonical_name is not provided for a namespace between -2 and 15, the MediaWiki built-in names are used. Image and File are aliases of each other by default.
If only one of canonical_name and custom_name are available, both properties will have the same value.
Changed in version 9.0: metaclass from
MetaNamespace
- Parameters:
canonical_name (str | None) – Canonical name
custom_name (str | None) – Name defined in server LocalSettings.php
aliases (list[str] | None) – Aliases
- CATEGORY = 14#
- CATEGORY_TALK = 15#
- FILE = 6#
- FILE_TALK = 7#
- HELP = 12#
- HELP_TALK = 13#
- MAIN = 0#
- MEDIA = -2#
- MEDIAWIKI = 8#
- MEDIAWIKI_TALK = 9#
- PROJECT = 4#
- PROJECT_TALK = 5#
- SPECIAL = -1#
- TALK = 1#
- TEMPLATE = 10#
- TEMPLATE_TALK = 11#
- USER = 2#
- USER_TALK = 3#
- classmethod builtin_namespaces(case='first-letter')[source]#
Return a dict of the builtin namespaces.
- Parameters:
case (str)
- canonical_namespaces = {-2: 'Media', -1: 'Special', 0: '', 1: 'Talk', 2: 'User', 3: 'User talk', 4: 'Project', 5: 'Project talk', 6: 'File', 7: 'File talk', 8: 'MediaWiki', 9: 'MediaWiki talk', 10: 'Template', 11: 'Template talk', 12: 'Help', 13: 'Help talk', 14: 'Category', 15: 'Category talk'}#
- class pywikibot.site._namespace.NamespacesDict(namespaces)[source]#
Bases:
Mapping
An immutable dictionary containing the Namespace instances.
It adds a deprecation message when called, since the ‘namespaces’ property of APISite was formerly callable.
Create new dict using the given namespaces.
- lookup_name(name)[source]#
Find the Namespace for a name also checking aliases.
- Parameters:
name (str) – Name of the namespace.
- Return type:
Namespace | None
- lookup_normalized_name(name)[source]#
Find the Namespace for a name also checking aliases.
The name has to be normalized and must be lower case.
- Parameters:
name (str) – Name of the namespace.
- Return type:
Namespace | None
- resolve(identifiers)[source]#
Resolve namespace identifiers to obtain Namespace objects.
Identifiers may be any value for which int() produces a valid namespace id, except bool, or any string which Namespace.lookup_name successfully finds. A numerical string is resolved as an integer.
- Parameters:
identifiers (iterable of str or Namespace key, or a single instance of those types) – namespace identifiers
- Returns:
list of Namespace objects in the same order as the identifiers
- Raises:
KeyError – a namespace identifier was not resolved
TypeError – a namespace identifier has an inappropriate type such as NoneType or bool
- Return type:
list[Namespace]
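A minimal sketch, assuming a Site instance `site`: resolve() accepts a mix of ints, numeric strings and namespace names. The expected ids in the comment assume the default namespace setup from the canonical_namespaces table above; the live call is commented out:

```python
# Mixed identifiers: builtin id, numeric string, canonical names.
identifiers = [0, '1', 'Template', 'Category talk']

# namespaces = site.namespaces.resolve(identifiers)
# [ns.id for ns in namespaces]  # expected: [0, 1, 10, 15]
```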
TokenWallet
— Token Wallet#
Objects representing api tokens.
- class pywikibot.site._tokenwallet.TokenWallet(site)[source]#
Bases:
Container
Container for tokens.
You should not use this container class directly; use
APISite.tokens
instead, which gives access to the site’s TokenWallet instance.
- Parameters:
site (APISite)
- clear()[source]#
Clear the self._tokens cache. Tokens are reloaded when needed.
Added in version 8.0.
- load_tokens(*args, **kwargs)[source]#
Clear cache to lazy load tokens when needed.
Deprecated since version 8.0: Use
clear()
instead.
Changed in version 8.0: Clear the cache instead of loading tokens. All parameters are ignored.
- Parameters:
args (Any)
kwargs (Any)
- Return type:
None
- update_tokens(tokens)[source]#
Return a list of new tokens for a given list of tokens.
This method can be used if a token is outdated and has to be renewed but the token type is unknown and we only have the old token. It first gets the token names from all given tokens, clears the cache and returns fresh new tokens of the found types.
Usage:
>>> import pywikibot
>>> site = pywikibot.Site()
>>> tokens = [site.tokens['csrf']]
>>> new_tokens = site.tokens.update_tokens(tokens)
r._params['token'] = r.site.tokens.update_tokens(r._params['token'])
Added in version 8.0.
- Parameters:
tokens (list[str])
- Return type:
list[str]
Uploader
— Uploader Interface#
Objects representing API upload to MediaWiki site.
- class pywikibot.site._upload.Uploader(site, filepage, *, source_filename=None, source_url=None, comment=None, text=None, watch=False, chunk_size=0, asynchronous=False, ignore_warnings=False, report_success=None)[source]#
Bases:
object
Uploader class to upload a file to the wiki.
Added in version 7.1.
- Parameters:
site (pywikibot.site.APISite) – The current site to work on
filepage (FilePage) – a FilePage object from which the wiki-name of the file will be obtained.
source_filename (str | None) – path to the file to be uploaded
source_url (str | None) – URL of the file to be uploaded
comment (str | None) – Edit summary; if this is not provided, then filepage.text will be used. An empty summary is not permitted. This may also serve as the initial page text (see below).
text (str | None) – Initial page text; if this is not set, then filepage.text will be used, or comment.
watch (bool) – If true, add filepage to the bot user’s watchlist
chunk_size (int) – The chunk size in bytes for chunked uploading (see API:Upload#Chunked_uploading). It will only upload in chunks, if the chunk size is positive but lower than the file size.
asynchronous (bool) – Make potentially large file operations asynchronous on the server side when possible.
ignore_warnings (bool or callable or iterable of str) –
It may be a static boolean, a callable returning a boolean, or an iterable. The callable gets a list of UploadError instances; the iterable should contain the warning codes for which an equivalent callable would return True, i.e. all UploadError codes are in that list. If the result is False the file upload won’t continue; otherwise any warning is disabled and the upload is reattempted.
Note
If report_success is True or None it’ll raise an UploadError exception if the static boolean is False.
report_success (bool | None) – If the upload was successful it’ll print a success message and if ignore_warnings is set to False it’ll raise an UploadError if a warning occurred. If it’s None (default) it’ll be True if ignore_warnings is a bool and False otherwise. If it’s True or None ignore_warnings must be a bool.
- submit(request, result, data_result, ignore_warnings, ignore_all_warnings, report_success, file_key)[source]#
Submit request and return whether upload was successful.
- Parameters:
data_result (str | None)
- Return type:
bool
- upload()[source]#
Check for required parameters to upload and run the job.
- Returns:
Whether the upload was successful.
- Return type:
bool
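A hedged sketch of a chunked upload; the site, file names and paths are placeholders, and the calls are commented out because uploading requires a live site, upload rights and an actual file on disk:

```python
# Upload in 1 MiB chunks; chunked uploading is only used when
# chunk_size is positive and smaller than the file size.
chunk_size = 1024 * 1024

# import pywikibot
# from pywikibot.site._upload import Uploader
# site = pywikibot.Site('commons', 'commons')
# filepage = pywikibot.FilePage(site, 'File:Example upload.jpg')
# uploader = Uploader(site, filepage,
#                     source_filename='/path/to/local/file.jpg',
#                     comment='Initial upload',
#                     chunk_size=chunk_size,
#                     ignore_warnings=False)
# success = uploader.upload()
```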
- upload_warnings = {'bad-prefix': 'Target filename has a bad prefix {msg}.', 'badfilename': 'Target filename is invalid.', 'duplicate': 'Uploaded file is a duplicate of {msg}.', 'duplicate-archive': 'The file is a duplicate of a deleted file {msg}.', 'duplicate-version': 'The upload is an exact duplicate of older version(s) of this file.', 'empty-file': 'File {msg} is empty.', 'exists': 'File {msg} already exists.', 'exists-normalized': 'File exists with different extension as {msg!r}.', 'filetype-unwanted-type': 'File {msg} type is unwanted type.', 'no-change': 'The upload is an exact duplicate of the current version of this file.', 'page-exists': 'Target filename exists but with a different file {msg}.', 'was-deleted': 'The file {msg} was previously deleted.'}#