page — MediaWiki Pages#
Interface of various types of MediaWiki pages.
Imports in pywikibot module
The following classes and functions are imported in the pywikibot module and can also be used as pywikibot members:
- class page.BaseLink(title, namespace=None, site=None)[source]#
Bases:
ComparableMixin
A MediaWiki link (local or interwiki).
Has the following attributes:
title: The title of the page linked to (str); does not include namespace or section
namespace: The Namespace object of the page linked to
site: The Site object for the wiki linked to
- Parameters:
title (str) – the title of the page linked to; does not include namespace or section
namespace (int, pywikibot.Namespace or str) – the namespace of the page linked to. Can be provided as either an int, a Namespace instance or a str, defaults to the MAIN namespace.
site (pywikibot.Site or str) – the Site object for the wiki linked to. Can be provided as either a Site instance or a db key, defaults to pywikibot.Site().
- astext(onsite=None)[source]#
Return a text representation of the link.
- Parameters:
onsite – if specified, present as a (possibly interwiki) link from the given site; otherwise, present as an internal link on the site.
- Return type:
str
- classmethod fromPage(page)[source]#
Create a BaseLink to a Page.
- Parameters:
page (pywikibot.page.Page) – target pywikibot.page.Page
- Return type:
pywikibot.page.BaseLink
- lookup_namespace()[source]#
Look up the namespace given the provided namespace id or name.
- Return type:
pywikibot.Namespace
- property namespace#
Return the namespace of the link.
- Return type:
pywikibot.Namespace
- ns_title(onsite=None)[source]#
Return full page title, including namespace.
- Parameters:
onsite – site object; if specified, present the title using the onsite local namespace, otherwise use the canonical namespace of self.
- Raises:
pywikibot.exceptions.InvalidTitleError – no corresponding namespace is found in onsite
- property site#
Return the site of the link.
- Return type:
pywikibot.Site
- class page.BasePage(source, title='', ns=0)[source]#
Bases:
ComparableMixin
BasePage: Base object for a MediaWiki page.
This object only implements internally methods that do not require reading from or writing to the wiki. All other methods are delegated to the Site object.
Will be subclassed by Page, WikibasePage, and FlowPage.
Instantiate a Page object.
Three calling formats are supported:
If the first argument is a Page, create a copy of that object. This can be used to convert an existing Page into a subclass object, such as Category or FilePage. (If the title is also given as the second argument, creates a copy with that title; this is used when pages are moved.)
If the first argument is a Site, create a Page on that Site using the second argument as the title (may include a section), and the third as the namespace number. The namespace number is mandatory, even if the title includes the namespace prefix. This is the preferred syntax when using an already-normalized title obtained from api.php or a database dump. WARNING: may produce invalid objects if page title isn’t in normal form!
If the first argument is a BaseLink, create a Page from that link. This is the preferred syntax when using a title scraped from wikitext, URLs, or another non-normalized source.
- Parameters:
source (pywikibot.page.BaseLink (or subclass), pywikibot.page.Page (or subclass), or pywikibot.page.Site) – the source of the page
title (str) – normalized title of the page; required if source is a Site, ignored otherwise
ns (int) – namespace number; required if source is a Site, ignored otherwise
- applicable_protections()[source]#
Return the protection types allowed for that page.
Example:
>>> site = pywikibot.Site('wikipedia:test')
>>> page = pywikibot.Page(site, 'Main Page')
>>> sorted(page.applicable_protections())
['edit', 'move']
See also
- Return type:
set[str]
- autoFormat()[source]#
Return date.getAutoFormat dictName and value, if any.
Value can be a year, date, etc., and dictName is ‘YearBC’, ‘Year_December’, or another dictionary name. Please note that two entries may have exactly the same autoFormat, but be in two different namespaces, as some sites have categories with the same names. Regular titles return (None, None).
- backlinks(follow_redirects=True, filter_redirects=None, namespaces=None, total=None, content=False)[source]#
Return an iterator for pages that link to this page.
- Parameters:
follow_redirects (bool) – if True, also iterate pages that link to a redirect pointing to the page.
filter_redirects (bool | None) – if True, only iterate redirects; if False, omit redirects; if None, do not filter
namespaces – only iterate pages in these namespaces
total (int | None) – iterate no more than this number of pages in total
content (bool) – if True, retrieve the content of the current version of each referring page (default False)
- Return type:
Iterable[Page]
- botMayEdit()[source]#
Determine whether the active bot is allowed to edit the page.
This will be True if the page doesn’t contain {{bots}} or {{nobots}} or any other template from edit_restricted_templates list in x_family.py file, or it contains them and the active bot is allowed to edit this page. (This method is only useful on those sites that recognize the bot-exclusion protocol; on other sites, it will always return True.)
The framework enforces this restriction by default. It is possible to override this by setting ignore_bot_templates=True in the user config file (user-config.py), or by using page.put(force=True).
- Return type:
bool
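The bot-exclusion convention that botMayEdit() honours can be illustrated with a simplified, self-contained sketch. This is not pywikibot's actual implementation (which also honours the family's edit_restricted_templates list); the template parsing here is deliberately minimal:

```python
import re

def bot_may_edit(text: str, username: str) -> bool:
    """Simplified sketch of the {{bots}}/{{nobots}} exclusion check.

    Illustrative only: real wikitext allows more template variants
    than these three exact forms.
    """
    # {{nobots}} forbids all bots.
    if re.search(r'\{\{nobots\}\}', text):
        return False
    # {{bots|deny=...}} forbids the listed bots (or all).
    match = re.search(r'\{\{bots\|deny=([^}]*)\}\}', text)
    if match:
        denied = {name.strip() for name in match.group(1).split(',')}
        return 'all' not in denied and username not in denied
    # {{bots|allow=...}} permits only the listed bots (or all).
    match = re.search(r'\{\{bots\|allow=([^}]*)\}\}', text)
    if match:
        allowed = {name.strip() for name in match.group(1).split(',')}
        return 'all' in allowed or username in allowed
    # No exclusion template: editing is allowed.
    return True
```

On sites that do not recognize the protocol the real method simply returns True, as noted above.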
- categories(with_sort_key=False, total=None, content=False)[source]#
Iterate categories that the article is in.
- Parameters:
with_sort_key (bool) – if True, include the sort key in each Category.
total (int | None) – iterate no more than this number of pages in total
content (bool) – if True, retrieve the content of the current version of each category description page (default False)
- Returns:
a generator that yields Category objects.
- Return type:
generator
- change_category(old_cat, new_cat, summary=None, sort_key=None, in_place=True, include=None, show_diff=False)[source]#
Remove page from old_cat and add it to new_cat.
Added in version 7.0: The show_diff parameter.
- Parameters:
old_cat (pywikibot.page.Category) – category to be removed
new_cat (pywikibot.page.Category or None) – category to be added, if any
summary (str | None) – string to use as an edit summary
sort_key – sort key to use for the added category. Unused if new_cat is None or if in_place=True. If sort_key=True, the sort key used for old_cat will be used.
in_place (bool) – if True, change categories in place rather than rearranging them.
include (list[str] | None) – list of tags not to be disabled by default in relevant textlib functions, where CategoryLinks can be searched.
show_diff (bool) – show changes between oldtext and newtext (default: False)
- Returns:
True if page was saved successfully, otherwise False.
- Return type:
bool
- property content_model#
Return the content model for this page.
If it cannot be reliably determined via the API, None is returned.
- contributors(total=None, starttime=None, endtime=None)[source]#
Compile contributors of this page with edit counts.
- Parameters:
total (int | None) – iterate no more than this number of revisions in total
starttime – retrieve revisions starting at this Timestamp
endtime – retrieve revisions ending at this Timestamp
- Returns:
number of edits for each username
- Return type:
collections.Counter
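The Counter returned by contributors() maps usernames to edit counts, so it can be consumed with the standard Counter API. A hedged sketch with invented usernames standing in for a real result:

```python
from collections import Counter

# contributors() returns a collections.Counter of username -> edit
# count; the names and counts below are invented for illustration.
edits = Counter({'Alice': 42, 'Bob': 17, 'Carol': 3})

# Most active contributor first, as with any Counter.
top_user, top_count = edits.most_common(1)[0]

# Total number of revisions covered by the tally.
total_edits = sum(edits.values())
```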
- coordinates(primary_only=False)[source]#
Return a list of Coordinate objects for points on the page.
Uses the MediaWiki extension GeoData.
- Parameters:
primary_only (bool) – Only return the coordinate indicated to be primary
- Returns:
A list of Coordinate objects or a single Coordinate if primary_only is True
- Return type:
list of Coordinate or Coordinate or None
- create_short_link(permalink=False, with_protocol=True)[source]#
Return a shortened link that points to that page.
If shared_urlshortner_wiki is defined in family config, it’ll use that site to create the link instead of the current wiki.
- Parameters:
permalink (bool) – If true, the link will point to the actual revision of the page.
with_protocol (bool) – If true, and if it’s not already included, the link will have http(s) protocol prepended. On Wikimedia wikis the protocol is already present.
- Returns:
The reduced link.
- Raises:
APIError – urlshortener-ratelimit exceeded
- Return type:
str
- property data_repository#
Return the Site object for the data repository.
- defaultsort(force=False)[source]#
Extract value of the {{DEFAULTSORT:}} magic word from the page.
- Parameters:
force (bool) – force updating from the live site
- Return type:
str | None
- delete(reason=None, prompt=True, mark=False, automatic_quit=False, *, deletetalk=False)[source]#
Delete the page from the wiki. Requires administrator status.
Changed in version 7.1: keyword only parameter deletetalk was added.
- Parameters:
reason (str | None) – The edit summary for the deletion, or rationale for deletion if requesting. If None, ask for it.
deletetalk (bool) – Also delete the talk page, if it exists.
prompt (bool) – If true, prompt user for confirmation before deleting.
mark (bool) – If true, and user does not have sysop rights, place a speedy-deletion request on the page instead. If false, non-sysops will be asked before marking pages for deletion.
automatic_quit (bool) – show also the quit option, when asking for confirmation.
- Returns:
the function returns an integer, with values as follows:
0 – no action was done
1 – page was deleted
-1 – page was marked for deletion
- Return type:
int
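A caller can map the documented return codes of delete() to messages for reporting. The mapping helper below is illustrative; only the three integer codes are part of the documented interface:

```python
# Integer codes documented for BasePage.delete().
DELETE_RESULTS = {
    0: 'no action was done',
    1: 'page was deleted',
    -1: 'page was marked for deletion',
}

def describe_delete_result(code: int) -> str:
    """Translate a delete() return code into a log-friendly message."""
    return DELETE_RESULTS.get(code, f'unknown result code {code}')
```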
- property depth: int#
Return the depth/subpage level of the page.
Check if the namespace allows subpages. Not allowed subpages means depth is always 0.
- editTime()[source]#
Return timestamp of last revision to page.
Deprecated since version 8.0: Use latest_revision.timestamp instead.
- Return type:
Timestamp
- embeddedin(filter_redirects=None, namespaces=None, total=None, content=False)[source]#
Return an iterator for pages that embed this page as a template.
- Parameters:
filter_redirects (bool | None) – if True, only iterate redirects; if False, omit redirects; if None, do not filter
namespaces – only iterate pages in these namespaces
total (int | None) – iterate no more than this number of pages in total
content (bool) – if True, retrieve the content of the current version of each embedding page (default False)
- Return type:
Iterable[Page]
- exists()[source]#
Return True if page exists on the wiki, even if it’s a redirect.
If the title includes a section, return False if this section isn’t found.
- Return type:
bool
- expand_text(force=False, includecomments=False)[source]#
Return the page text with all templates and parser words expanded.
- Parameters:
force (bool) – force updating from the live site
includecomments (bool) – if True, HTML comments are kept in the expanded text; otherwise they are stripped.
- Return type:
str
- extlinks(total=None)[source]#
Iterate all external URLs (not interwiki links) from this page.
- Parameters:
total (int | None) – iterate no more than this number of pages in total
- Returns:
a generator that yields str objects containing URLs.
- Return type:
Iterable[str]
- extract(variant='plain', *, lines=None, chars=None, sentences=None, intro=True)[source]#
Retrieve an extract of this page.
Added in version 7.1.
- Parameters:
variant (str) – The variant of extract: either ‘plain’ for plain text, ‘html’ for limited HTML (both exclude templates and any text formatting), or ‘wiki’ for bare wikitext, which also includes any templates, for example.
lines (int | None) – if not None, wrap the extract into lines with width of 79 chars and return a string with that given number of lines.
chars (int | None) – How many characters to return. Actual text returned might be slightly longer.
sentences (int | None) – How many sentences to return
intro (bool) – Return only content before the first section
- Raises:
NoPageError – given page does not exist
NotImplementedError – “wiki” variant does not support the sentences parameter.
ValueError – variant parameter must be “plain”, “html” or “wiki”
- Return type:
str
See also
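The lines parameter described above wraps the extract into 79-character lines and keeps the given number of them. A sketch of that documented behaviour (this mirrors the description, not pywikibot's exact code):

```python
import textwrap

def clip_extract(text: str, lines: int) -> str:
    """Wrap text to 79-character lines and keep only the first
    ``lines`` of them, as the ``lines`` parameter is documented to do.
    """
    wrapped = textwrap.wrap(text, width=79)
    return '\n'.join(wrapped[:lines])
```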
- get(force=False, get_redirect=False)[source]#
Return the wiki-text of the page.
This will retrieve the page from the server if it has not been retrieved yet, or if force is True. Exceptions should be caught by the calling code.
Example:
>>> import pywikibot
>>> site = pywikibot.Site('mediawiki')
>>> page = pywikibot.Page(site, 'Pywikibot')
>>> page.get(get_redirect=True)
'#REDIRECT[[Manual:Pywikibot]]'
>>> page.get()
Traceback (most recent call last):
...
pywikibot.exceptions.IsRedirectPageError: ... is a redirect page.
Changed in version 9.2: exceptions.SectionError is raised if the section() does not exist.
See also
text property
- Parameters:
force (bool) – reload all page attributes, including errors.
get_redirect (bool) – return the redirect text, do not follow the redirect, do not raise an exception.
- Raises:
NoPageError – The page does not exist.
IsRedirectPageError – The page is a redirect.
SectionError – The section does not exist on a page with a # link.
- Return type:
str
- getCategoryRedirectTarget()[source]#
If this is a category redirect, return the target category title.
- Return type:
- getDeletedRevision(timestamp, content=False, **kwargs)[source]#
Return a particular deleted revision by timestamp.
See also
- Returns:
a list of [date, editor, comment, text, restoration marker]. text will be None, unless content is True (or has been retrieved earlier). If timestamp is not found, returns empty list.
- Parameters:
content (bool)
- Return type:
list
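The five-item list documented above can be unpacked positionally. The sample values below are invented for illustration; only the field order comes from the documentation:

```python
# getDeletedRevision() returns [date, editor, comment, text,
# restoration marker]; these sample values are invented stand-ins.
revision = ['2024-01-01T00:00:00Z', 'ExampleEditor', 'an edit summary',
            None, False]

date, editor, comment, text, marked = revision

# text stays None unless the revision was fetched with content=True.
has_text = text is not None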
- getOldVersion(oldid, force=False)[source]#
Return text of an old revision of this page.
- Parameters:
oldid – The revid of the revision desired.
force (bool)
- Return type:
str
- getRedirectTarget(*, ignore_section=True)[source]#
Return a Page object for the target this Page redirects to.
Added in version 9.3: ignore_section parameter
See also
- Parameters:
ignore_section (bool) – do not include section to the target even the link has one
- Raises:
CircularRedirectError – page is a circular redirect
InterwikiRedirectPageError – the redirect target is on another site
IsNotRedirectPageError – page is not a redirect
RuntimeError – no redirects found
SectionError – the section is not found on target page and ignore_section is not set
- Return type:
- getReferences(follow_redirects=True, with_template_inclusion=True, only_template_inclusion=False, filter_redirects=False, namespaces=None, total=None, content=False)[source]#
Return an iterator of all pages that refer to or embed the page.
If you need a full list of referring pages, use
pages = list(s.getReferences())
- Parameters:
follow_redirects (bool) – if True, also iterate pages that link to a redirect pointing to the page.
with_template_inclusion (bool) – if True, also iterate pages where self is used as a template.
only_template_inclusion (bool) – if True, only iterate pages where self is used as a template.
filter_redirects (bool) – if True, only iterate redirects to self.
namespaces – only iterate pages in these namespaces
total (int | None) – iterate no more than this number of pages in total
content (bool) – if True, retrieve the content of the current version of each referring page (default False)
- Return type:
Iterable[Page]
- getVersionHistoryTable(reverse=False, total=None)[source]#
Return the version history as a wiki table.
- Parameters:
reverse (bool)
total (int | None)
- get_parsed_page(force=False)[source]#
Retrieve parsed text (via action=parse) and cache it.
Changed in version 7.1: force parameter was added; _get_parsed_page becomes a public method.
- Parameters:
force (bool) – force updating from the live site
- Return type:
str
See also
- has_content()[source]#
Return whether the page has been loaded.
Non-existing pages are considered loaded.
Added in version 7.6.
- Return type:
bool
- has_deleted_revisions()[source]#
Return True if the page has deleted revisions.
Added in version 4.2.
- Return type:
bool
- has_permission(action='edit')[source]#
Determine whether the page can be modified.
Return True if the bot has the permission of needed restriction level for the given action type:
>>> site = pywikibot.Site('test')
>>> page = pywikibot.Page(site, 'Main Page')
>>> page.has_permission()
False
>>> page.has_permission('move')
False
>>> page.has_permission('invalid')
Traceback (most recent call last):
...
ValueError: APISite.page_can_be_edited(): Invalid value "invalid" ...
See also
- Parameters:
action (str) – a valid restriction type like ‘edit’, ‘move’; default is ‘edit’.
- Raises:
ValueError – invalid action parameter
- Return type:
bool
- property image_repository#
Return the Site object for the image repository.
- imagelinks(total=None, content=False)[source]#
Iterate FilePage objects for images displayed on this Page.
- Parameters:
total (int | None) – iterate no more than this number of pages in total
content (bool) – if True, retrieve the content of the current version of each image description page (default False)
- Returns:
a generator that yields FilePage objects.
- Return type:
Iterable[FilePage]
- interwiki(expand=True)[source]#
Yield interwiki links in the page text, excluding language links.
- Parameters:
expand (bool) – if True (default), include interwiki links found in templates transcluded onto this page; if False, only iterate interwiki links found in this page’s own wikitext
- Returns:
a generator that yields Link objects
- Return type:
Generator[Link, None, None]
- isCategoryRedirect()[source]#
Return True if this is a category redirect page, False otherwise.
- Return type:
bool
- isDisambig()[source]#
Return True if this is a disambiguation page, False otherwise.
By default, it uses the Disambiguator extension’s result. The identification relies on the presence of the __DISAMBIG__ magic word, which may also be transcluded.
If the Disambiguator extension isn’t activated for the given site, the identification relies on the presence of specific templates. First load a list of template names from the Family file via BaseSite.disambig(); if the value in the Family file is not found, look for the list on [[MediaWiki:Disambiguationspage]]. If this page does not exist, take the MediaWiki message. ‘Template:Disambig’ is always assumed to be a default, and will be appended regardless of its existence.
- Return type:
bool
- isIpEdit()[source]#
Return True if last editor was unregistered.
Deprecated since version 9.3: Use latest_revision.anon instead.
- Return type:
bool
- isStaticRedirect(force=False)[source]#
Determine whether the page is a static redirect.
A static redirect must be a valid redirect, and contain the magic word __STATICREDIRECT__.
Changed in version 7.0: __STATICREDIRECT__ can be transcluded
- Parameters:
force (bool) – Bypass local caching
- Return type:
bool
- iterlanglinks(total=None, include_obsolete=False)[source]#
Iterate all inter-language links on this page.
- Parameters:
total (int | None) – iterate no more than this number of pages in total
include_obsolete (bool) – if true, yield even Link objects whose site is obsolete
- Returns:
a generator that yields Link objects.
- Return type:
Iterable[Link]
- itertemplates(total=None, *, content=False, namespaces=None)[source]#
Iterate Page objects for templates used on this Page.
This method yields pages embedded as templates even if they are not in the TEMPLATE: namespace. The retrieved pages are not cached, but they can be yielded from the cache of a previous templates() call.
Added in version 2.0.
Changed in version 9.2: namespaces parameter was added; all parameters except total must be given as keyword arguments.
- Parameters:
- Return type:
Iterable[Page]
- langlinks(include_obsolete=False)[source]#
Return a list of all inter-language Links on this page.
- Parameters:
include_obsolete (bool) – if true, return even Link objects whose site is obsolete
- Returns:
list of Link objects.
- Return type:
list[Link]
- lastNonBotUser()[source]#
Return name or IP address of last human/non-bot user to edit page.
Determine the most recent human editor out of the last revisions. If no human user could be retrieved, return None.
If the edit was done by a bot which is no longer flagged as ‘bot’, i.e. which is not returned by Site.botusers(), it will be returned as a non-bot edit.
- Return type:
str | None
- property latest_revision: Revision#
Return the current revision for this page.
Example:
>>> site = pywikibot.Site()
>>> page = pywikibot.Page(site, 'Main Page')
>>> # get the latest timestamp of that page
>>> edit_time = page.latest_revision.timestamp
>>> type(edit_time)
<class 'pywikibot.time.Timestamp'>
See also
- property latest_revision_id#
Return the current revision id for this page.
- linkedPages(*args, **kwargs)[source]#
Iterate Pages that this Page links to.
Only returns pages from “normal” internal links. Embedded templates are omitted but links within them are returned. All interwiki and external links are omitted.
For the parameters, refer to APISite.pagelinks.
Added in version 7.0: the follow_redirects keyword argument.
Deprecated since version 7.0: the positional arguments.
See also
- Keyword Arguments:
namespaces – Only iterate pages in these namespaces (default: all)
follow_redirects – if True, yields the target of any redirects, rather than the redirect page
total – iterate no more than this number of pages in total
content – if True, load the current content of each page
- Return type:
Generator[BasePage, None, None]
- loadDeletedRevisions(total=None, **kwargs)[source]#
Retrieve deleted revisions for this Page.
Stores all revisions’ timestamps, dates, editors and comments in self._deletedRevs attribute.
- Returns:
iterator of timestamps (which can be used to retrieve revisions later on).
- Return type:
generator
- Parameters:
total (int | None)
- markDeletedRevision(timestamp, undelete=True)[source]#
Mark the revision identified by timestamp for undeletion.
See also
- Parameters:
undelete (bool) – if False, mark the revision to remain deleted.
- merge_history(dest, timestamp=None, reason=None)[source]#
Merge revisions from this page into another page.
See also
APISite.merge_history() for details.
- Parameters:
dest (BasePage) – Destination page to which revisions will be merged
timestamp (Timestamp | None) – Revisions from this page dating up to this timestamp will be merged into the destination page (if not given or False, all revisions will be merged)
reason (str | None) – Optional reason for the history merge
- Return type:
None
- move(newtitle, reason=None, movetalk=True, noredirect=False, movesubpages=True)[source]#
Move this page to a new title.
Changed in version 7.2: The movesubpages parameter was added
- Parameters:
newtitle (str) – The new page title.
reason (str | None) – The edit summary for the move.
movetalk (bool) – If true, move this page’s talk page (if it exists)
noredirect (bool) – if move succeeds, delete the old page (usually requires sysop privileges, depending on wiki settings)
movesubpages (bool) – Rename subpages, if applicable.
- Return type:
- moved_target()[source]#
Return a Page object for the target this Page was moved to.
If this page was not moved, it will raise a NoMoveTargetError. This method also works if the source was already deleted.
See also
- Raises:
NoMoveTargetError – page was not moved
- Return type:
- property oldest_revision: Revision#
Return the first revision of this page.
Example:
>>> site = pywikibot.Site()
>>> page = pywikibot.Page(site, 'Main Page')
>>> # get the creation timestamp of that page
>>> creation_time = page.oldest_revision.timestamp
>>> type(creation_time)
<class 'pywikibot.time.Timestamp'>
See also
- page_image()[source]#
Return the most appropriate image on the page.
Uses the MediaWiki extension PageImages.
- Returns:
A FilePage object
- Return type:
pywikibot.page.FilePage
- property pageid: int#
Return pageid of the page.
- Returns:
pageid or 0 if page does not exist
- permalink(oldid=None, percent_encoded=True, with_protocol=False)[source]#
Return the permalink URL of an old revision of this page.
- Parameters:
oldid – The revid of the revision desired.
percent_encoded (bool) – if false, the title in the link is not percent-encoded.
with_protocol (bool) – if true, http or https prefixes will be included before the double slash.
- Return type:
str
- preloadText()[source]#
The text returned by EditFormPreloadText.
See API module “info”.
Application: on Wikisource wikis, text can be preloaded even if a page does not exist, if an Index page is present.
- Return type:
str
- properties(force=False)[source]#
Return the properties of the page.
- Parameters:
force (bool) – force updating from the live site
- Return type:
dict
- protect(reason=None, protections=None, **kwargs)[source]#
Protect or unprotect a wiki page. Requires protect right.
Valid protection levels are '' (equivalent to None), 'autoconfirmed', 'sysop' and 'all'. 'all' means everyone is allowed, i.e. that protection type will be unprotected.
In order to unprotect a type of permission, the protection level shall be either set to 'all' or '' or skipped in the protections dictionary.
Expiry of protections can be set via kwargs, see Site.protect() for details. By default there is no expiry for the protection types.
See also
- Parameters:
protections (dict[str, str | None] | None) –
A dict mapping type of protection to protection level of that type. Allowed protection types for a page can be retrieved by applicable_protections(). Defaults to None, which means unprotect all protection types.
Example:
{'move': 'sysop', 'edit': 'autoconfirmed'}
reason (str | None) – Reason for the action, default is None and will set an empty string.
- Return type:
None
- protection()[source]#
Return a dictionary reflecting page protections.
Example:
>>> site = pywikibot.Site('wikipedia:test')
>>> page = pywikibot.Page(site, 'Main Page')
>>> page.protection()
{'edit': ('sysop', 'infinity'), 'move': ('sysop', 'infinity')}
- Return type:
dict[str, tuple[str, str]]
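The returned dict maps each protection type to a (level, expiry) tuple, as in the doctest above. A small sketch of consuming that documented shape (the helper name is invented):

```python
def summarize_protections(prot: dict[str, tuple[str, str]]) -> list[str]:
    """Format a protection() result for display, one line per type."""
    return [f'{action}: {level} until {expiry}'
            for action, (level, expiry) in sorted(prot.items())]

# Sample input shaped like the doctest above.
summary = summarize_protections(
    {'edit': ('sysop', 'infinity'), 'move': ('sysop', 'infinity')})
```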
- purge(**kwargs)[source]#
Purge the server’s cache for this page.
- Keyword Arguments:
redirects – Automatically resolve redirects.
converttitles – Convert titles to other variants if necessary. Only works if the wiki’s content language supports variant conversion.
forcelinkupdate – Update the links tables.
forcerecursivelinkupdate – Update the links table, and update the links tables for any page that uses this page as a template.
- Return type:
bool
- put(newtext, summary=None, watch=None, minor=True, bot=True, force=False, asynchronous=False, callback=None, show_diff=False, botflag='[deprecated name of bot]', **kwargs)[source]#
Save the page with the contents of the first argument as the text.
This method is maintained primarily for backwards-compatibility. For new code, using save() is preferred; see that method’s docs for all parameters not listed here.
Added in version 7.0: The show_diff parameter.
Changed in version 9.3: botflag parameter was renamed to bot.
Changed in version 9.4: edits cannot be marked as bot edits if the bot account has no bot right. Therefore a None argument for the bot parameter was dropped.
See also
- Parameters:
newtext (str) – The complete text of the revised page.
show_diff (bool) – show changes between oldtext and newtext (default: False)
summary (str | None)
watch (str | None)
minor (bool)
bot (bool)
force (bool)
asynchronous (bool)
- redirects(*, filter_fragments=None, namespaces=None, total=None, content=False)[source]#
Return an iterable of redirects to this page.
- Parameters:
filter_fragments (bool | None) – If True, only return redirects with fragments. If False, only return redirects without fragments. If None, return both (no filtering).
namespaces (int | str | Namespace | Iterable[int | str | Namespace] | None) – only return redirects from these namespaces
total (int | None) – maximum number of redirects to retrieve in total
content (bool) – load the current content of each redirect
- Return type:
Iterable[Page]
Added in version 7.0.
- revision_count(contributors=None)[source]#
Determine number of edits from contributors.
- Parameters:
contributors (iterable of str or User, a single pywikibot.User, a str or None) – contributor usernames
- Returns:
number of edits for all provided usernames
- Return type:
int
- revisions(reverse=False, total=None, content=False, starttime=None, endtime=None)[source]#
Generator which loads the version history as Revision instances.
- Parameters:
reverse (bool)
total (int | None)
content (bool)
- save(summary=None, watch=None, minor=True, bot=True, force=False, asynchronous=False, callback=None, apply_cosmetic_changes=None, quiet=False, botflag='[deprecated name of bot]', **kwargs)[source]#
Save the current contents of page’s text to the wiki.
Changed in version 7.0: boolean watch parameter is deprecated
Changed in version 9.3: botflag parameter was renamed to bot.
Changed in version 9.4: edits cannot be marked as bot edits if the bot account has no bot right. Therefore a None argument for the bot parameter was dropped.
Hint
When setting up OAuth or BotPassword login, you have to grant High-volume (bot) access to get the bot right, even if the account is a member of the bots group granted by bureaucrats. Otherwise edits cannot be marked with the bot flag and the bot argument will be ignored.
See also
- Parameters:
summary (str | None) – The edit summary for the modification (optional, but most wikis strongly encourage its use)
watch (str | None) –
Specify how the watchlist is affected by this edit, set to one of watch, unwatch, preferences, nochange:
watch — add the page to the watchlist
unwatch — remove the page from the watchlist
preferences — use the preference settings (Default)
nochange — don’t change the watchlist
If None (default), follow bot account’s default settings
minor (bool) – if True, mark this edit as minor
bot (bool) – if True, mark this edit as made by a bot if the user has the bot right (default); if False, do not mark it as a bot edit.
force (bool) – if True, ignore botMayEdit() setting
asynchronous (bool) – if True, launch a separate thread to save asynchronously
callback – a callable object that will be called after the page put operation. This object must take two arguments: (1) a Page object, and (2) an exception instance, which will be None if the page was saved successfully. The callback is intended for use by bots that need to keep track of which saves were successful.
apply_cosmetic_changes (bool | None) – Overwrites the cosmetic_changes configuration value to this value unless it’s None.
quiet (bool) – enable/disable successful save operation message; defaults to False. In asynchronous mode, if True, it is up to the calling bot to manage the output e.g. via callback.
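The callback passed to save() must accept two arguments, a Page object and an exception (None on success). A minimal sketch of that documented contract, using strings as stand-ins for real Page objects:

```python
# Collect failed saves for later inspection, as a bot might do.
failures = []

def on_save_done(page, exc):
    """Callback matching the documented (page, exception) signature;
    exc is None when the page was saved successfully."""
    if exc is not None:
        failures.append((page, exc))

# Simulating the framework invoking the callback after two saves
# (strings stand in for Page objects in this sketch).
on_save_done('Page A', None)
on_save_done('Page B', RuntimeError('edit conflict'))
```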
- section()[source]#
Return the name of the section this Page refers to.
The section is the part of the title following a # character, if any. If no section is present, return None.
- Return type:
str | None
- property site#
Return the Site object for the wiki on which this Page resides.
- Return type:
pywikibot.Site
- templates(*, content=False, namespaces=None)[source]#
Return a list of Page objects for templates used on this Page.
This method returns a list of pages which are embedded as templates even if they are not in the TEMPLATE: namespace. This method caches the result. If namespaces is used, all pages are retrieved and cached but the result is filtered.
Changed in version 2.0: a list of pywikibot.Page is returned instead of a list of template titles. The given pages may have namespaces different from the TEMPLATE namespace. The get_redirect parameter was removed.
Changed in version 9.2: namespaces parameter was added; all parameters must be given as keyword arguments.
See also
- property text: str#
Return the current (edited) wikitext, loading it if necessary.
This property should be preferred over get(). If the page does not exist, an empty string will be returned. For a redirect it returns the redirect page content and does not raise an exceptions.IsRedirectPageError exception.
Example:
>>> import pywikibot
>>> site = pywikibot.Site('mediawiki')
>>> page = pywikibot.Page(site, 'Pywikibot')
>>> page.text
'#REDIRECT[[Manual:Pywikibot]]'
>>> page.text = 'PWB Framework'
>>> page.text
'PWB Framework'
>>> page.text = None  # reload from wiki
>>> page.text
'#REDIRECT[[Manual:Pywikibot]]'
>>> del page.text  # other way to reload from wiki
To save the modified text
save()
is one possible method.
- Returns:
text of the page
- title(*, underscore=False, with_ns=True, with_section=True, as_url=False, as_link=False, allow_interwiki=True, force_interwiki=False, textlink=False, as_filename=False, insite=None, without_brackets=False)[source]#
Return the title of this Page, as a string.
- Parameters:
underscore (bool) – (not used with as_link) if true, replace all ‘ ‘ characters with ‘_’
with_ns (bool) – if false, omit the namespace prefix. If this option is false and used together with as_link return a labeled link like [[link|label]]
with_section (bool) – if false, omit the section
as_url (bool) – (not used with as_link) if true, quote title as if in an URL
as_link (bool) – if true, return the title in the form of a wikilink
allow_interwiki (bool) – (only used if as_link is true) if true, format the link as an interwiki link if necessary
force_interwiki (bool) – (only used if as_link is true) if true, always format the link as an interwiki link
textlink (bool) – (only used if as_link is true) if true, place a ‘:’ before Category: and Image: links
as_filename (bool) – (not used with as_link) if true, replace any characters that are unsafe in filenames
insite – (only used if as_link is true) a site object where the title is to be shown. Default is the current family/lang given by -family and -lang or -site option i.e. config.family and config.mylang
without_brackets (bool) – (cannot be used with as_link) if true, remove the last pair of brackets (usually removes disambiguation brackets).
- Return type:
str
- toggleTalkPage()[source]#
Return other member of the article-talk page pair for this Page.
If self is a talk page, returns the associated content page; otherwise, returns the associated talk page. The returned page need not actually exist on the wiki.
- Returns:
Page or None if self is a special page.
- Return type:
Page | None
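The article/talk pairing behind toggleTalkPage() follows MediaWiki's namespace numbering, where each subject namespace (an even number) is paired with the talk namespace one above it. A minimal sketch of that rule; the helper name is illustrative, not part of the pywikibot API:

```python
def toggle_namespace(ns: int) -> int:
    """Return the paired talk/subject namespace number.

    MediaWiki assigns even numbers to subject namespaces and the
    following odd number to the matching talk namespace, e.g.
    0 (Main) <-> 1 (Talk), 14 (Category) <-> 15 (Category talk).
    Special namespaces (negative numbers) have no talk pair, which
    is why toggleTalkPage() returns None for special pages.
    """
    if ns < 0:
        raise ValueError('special namespaces have no talk page')
    return ns + 1 if ns % 2 == 0 else ns - 1
```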
- touch(callback=None, bot=False, botflag='[deprecated name of bot]', **kwargs)[source]#
Make a touch edit for this page.
See the
save
method docs for all parameters. The following parameters will be overridden by this method: summary, watch, minor, force, asynchronous.
Parameter bot is False by default.
minor and bot parameters are set to
False
which prevents hiding the edit when it becomes a real edit due to a bug.
Note
This discards content saved to self.text.
Changed in version 9.2: botflag parameter was renamed to bot.
- Parameters:
bot (bool)
- undelete(reason=None)[source]#
Undelete revisions based on the markers set by previous calls.
If no calls have been made since
loadDeletedRevisions()
, everything will be restored.
Simplest case:
Page(...).undelete('This will restore all revisions')
More complex:
page = Page(...)
revs = page.loadDeletedRevisions()
for rev in revs:
    if ...:  # decide whether to undelete a revision
        page.markDeletedRevision(rev)  # mark for undeletion
page.undelete('This will restore only selected revisions.')
- Parameters:
reason (str | None) – Reason for the action.
- Return type:
None
- userName()[source]#
Return name or IP address of last user to edit page.
Deprecated since version 9.3: Use
latest_revision.user
instead.
- Return type:
str
- class page.Category(source, title='', sort_key=None)[source]#
Bases:
Page
A page in the Category: namespace.
All parameters are the same as for Page() Initializer.
- Parameters:
title (str)
- articles(*, recurse=False, total=None, **kwargs)[source]#
Yield all articles in the current category.
Yields all pages in the category that are not subcategories. Duplicates are filtered. To enable duplicates use
members()
withmember_type=['page', 'file']
instead.
Usage:
>>> site = pywikibot.Site('wikipedia:test')
>>> cat = pywikibot.Category(site, 'Pywikibot')
>>> list(cat.articles())
[Page('Pywikibot nobots test')]
>>> for p in cat.articles(recurse=1, namespaces=2, total=3):
...     print(p.depth)
...
2
3
4
Warning
Categories may have infinite recursions of subcategories. If the
recurse
option is given as
True
or an
int
value and this value is less than
sys.getrecursionlimit()
, a
RecursionError
may be raised. Be careful when passing this generator to a collection in such a case.
Changed in version 8.0: all parameters are keyword arguments only.
- Parameters:
recurse (int | bool) – if not False or 0, also iterate articles in subcategories. If an int, limit recursion to this number of levels. (Example:
recurse=1
will iterate articles in first-level subcats, but no deeper.)
total (int | None) – iterate no more than this number of pages in total (at all levels)
kwargs (Any) – Additional parameters. Refer to
APISite.categorymembers()
for complete list (member_type excluded).
- Return type:
Generator[Page, None, None]
- aslink(sort_key=None)[source]#
Return a link to place a page in this Category.
Warning
Use this only to generate a “true” category link, not for interwikis or text links to category pages.
Usage:
>>> site = pywikibot.Site('wikipedia:test')
>>> cat = pywikibot.Category(site, 'Foo')
>>> cat.aslink()
'[[Category:Foo]]'
>>> cat = pywikibot.Category(site, 'Foo', sort_key='bar')
>>> cat.aslink()
'[[Category:Foo|bar]]'
>>> cat.aslink('baz')
'[[Category:Foo|baz]]'
- Parameters:
sort_key (str | None) – The sort key for the article to be placed in this Category; if omitted, default sort key is used.
- Return type:
str
- property categoryinfo: dict[str, Any]#
Return a dict containing information about the category.
The dict contains values for numbers of pages, subcategories, files, and total contents.
See also
- isEmptyCategory()[source]#
Return True if category has no members (including subcategories).
- Return type:
bool
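The emptiness check can be derived from the categoryinfo dict documented below for the categoryinfo property. A sketch, assuming the dict carries the API's 'pages', 'subcats' and 'files' counters; the helper is illustrative, not pywikibot's implementation:

```python
def is_empty_category(categoryinfo: dict) -> bool:
    """True if the category holds no pages, subcategories or files."""
    return not any(categoryinfo.get(key, 0)
                   for key in ('pages', 'subcats', 'files'))
```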
- members(*, recurse=False, total=None, **kwargs)[source]#
Yield all category contents (subcats, pages, and files).
Usage:
>>> site = pywikibot.Site('wikipedia:test')
>>> cat = pywikibot.Category(site, 'Pywikibot')
>>> list(cat.members(member_type='subcat'))
[Category('Category:Subpage testing')]
>>> list(cat.members(member_type=['page', 'file']))
[Page('Pywikibot nobots test')]
Calling this method with
member_type='subcat'
is equal to calling
subcategories()
. Calling this method with
member_type=['page', 'file']
is equal to calling
articles()
, except that the latter will filter duplicates.
See also
Warning
Categories may have infinite recursions of subcategories. If the
recurse
option is given as
True
or an
int
value and this value is less than
sys.getrecursionlimit()
, a
RecursionError
may be raised. Be careful when passing this generator to a collection in such a case.
Changed in version 8.0: all parameters are keyword arguments only. Additional parameters are supported.
- Parameters:
recurse (bool) – if not False or 0, also iterate articles in subcategories. If an int, limit recursion to this number of levels. (Example:
recurse=1
will iterate articles in first-level subcats, but no deeper.)
total (int | None) – iterate no more than this number of pages in total (at all levels)
kwargs (Any) – Additional parameters. Refer to
APISite.categorymembers()
for complete list.
- Return type:
Generator[Page, None, None]
- newest_pages(total=None)[source]#
Return pages in a category ordered by the creation date.
If two or more pages are created at the same time, the pages are returned in the order they were added to the category. The most recently added page is returned first.
It only returns pages ordered from newest to oldest, because determining the oldest page in a category would require checking all of its pages. Checking the category with the most recently added pages first, however, makes it possible to yield every page which was created after the currently checked page was added (and thus no page created after any cached page was added before the currently checked one).
- Parameters:
total (int | None) – The total number of pages queried.
- Returns:
A page generator of all pages in a category ordered by the creation date. From newest to oldest.
Note
It currently returns only Page instances, not a subclass of it where possible. This might change, so don't expect to get only Page instances.
- Return type:
Generator[Page, None, None]
- subcategories(*, recurse=False, **kwargs)[source]#
Yield all subcategories of the current category.
Usage:
>>> site = pywikibot.Site('wikipedia:en')
>>> cat = pywikibot.Category(site, 'Contents')
>>> next(cat.subcategories())
Category('Category:Wikipedia administration')
>>> len(list(cat.subcategories(recurse=2, total=50)))
50
Subcategories of the same level of each subtree are yielded first before the next subcategories level are yielded. For example having this category tree:
A
+-- B
|   +-- E
|   |   +-- H
|   +-- F
|   +-- G
+-- C
|   +-- I
|   |   +-- E
|   |   +-- H
|   +-- J
|       +-- K
|       +-- L
|       +-- G
+-- D
Subcategories are yielded in the following order: B, C, D, E, F, G, H, I, J, E, H, K, L, G
See also
Warning
Categories may have infinite recursions of subcategories. If the
recurse
option is given as
True
or an
int
value and this value is less than
sys.getrecursionlimit()
, a
RecursionError
may be raised. Be careful when passing this generator to a collection in such a case.
Changed in version 8.0: all parameters are keyword arguments only. Additional parameters are supported. The order in which subcategories are yielded was changed. The old order was B, E, H, F, G, C, I, E, H, J, K, L, G, D.
- Parameters:
recurse (int | bool) – if not False or 0, also iterate articles in subcategories. If an int, limit recursion to this number of levels. (Example:
recurse=1
will iterate articles in first-level subcats, but no deeper.)
kwargs (Any) – Additional parameters. Refer to
APISite.categorymembers()
for complete list (member_type excluded).
- Return type:
Generator[Page, None, None]
- class page.Claim(site, pid, snak=None, hash=None, is_reference=False, is_qualifier=False, rank='normal', **kwargs)[source]#
Bases:
Property
A Claim on a Wikibase entity.
Claims are standard claims as well as references and qualifiers.
Defined by the “snak” value, supplemented by site + pid
- Parameters:
site (pywikibot.site.DataSite) – Repository where the property of the claim is defined. Note that this does not have to correspond to the repository where the claim has been stored.
pid – property id, with “P” prefix
snak – snak identifier for claim
hash – hash identifier for references
is_reference (bool) – whether specified claim is a reference
is_qualifier (bool) – whether specified claim is a qualifier
rank (str) – rank for claim
- SNAK_TYPES = ('value', 'somevalue', 'novalue')#
- TARGET_CONVERTER = {'commonsMedia': <function Claim.<lambda>>, 'geo-shape': <bound method WbDataPage.fromWikibase of <class 'pywikibot._wbtypes.WbGeoShape'>>, 'globe-coordinate': <bound method Coordinate.fromWikibase of <class 'pywikibot._wbtypes.Coordinate'>>, 'monolingualtext': <function Claim.<lambda>>, 'quantity': <bound method WbQuantity.fromWikibase of <class 'pywikibot._wbtypes.WbQuantity'>>, 'tabular-data': <bound method WbDataPage.fromWikibase of <class 'pywikibot._wbtypes.WbTabularData'>>, 'time': <bound method WbTime.fromWikibase of <class 'pywikibot._wbtypes.WbTime'>>, 'wikibase-form': <function Claim.<lambda>>, 'wikibase-item': <function Claim.<lambda>>, 'wikibase-lexeme': <function Claim.<lambda>>, 'wikibase-property': <function Claim.<lambda>>, 'wikibase-sense': <function Claim.<lambda>>}#
- addQualifier(qualifier, **kwargs)[source]#
Add the given qualifier.
- Parameters:
qualifier (pywikibot.page.Claim) – the qualifier to add
- addSource(claim, **kwargs)[source]#
Add the claim as a source.
- Parameters:
claim (Claim) – the claim to add
- Return type:
None
- addSources(claims, **kwargs)[source]#
Add the claims as one source.
- Parameters:
claims (list of Claim) – the claims to add
- changeSnakType(value=None, **kwargs)[source]#
Save the new snak value.
TODO: Is this function really needed?
- Return type:
None
- changeTarget(value=None, snaktype='value', **kwargs)[source]#
Set the target value in the data repository.
- Parameters:
value (object) – The new target value.
snaktype (str) – The new snak type (‘value’, ‘somevalue’, or ‘novalue’).
- Return type:
None
- classmethod fromJSON(site, data)[source]#
Create a claim object from JSON returned in the API call.
Changed in version 9.4: print a warning if the Claim.type is not given and missing in the wikibase.
- Parameters:
data (dict[str, Any]) – JSON containing claim data
- Return type:
- getSnakType()[source]#
Return the type of snak.
- Returns:
str (‘value’, ‘somevalue’ or ‘novalue’)
- Return type:
str
- getTarget()[source]#
Return the target value of this Claim.
None is returned if no target is set
- Returns:
object
- has_qualifier(qualifier_id, target)[source]#
Check whether Claim contains specified qualifier.
- Parameters:
qualifier_id (str) – id of the qualifier
target – qualifier target to check presence of
- Returns:
true if the qualifier was found, false otherwise
- Return type:
bool
- property on_item: WikibaseEntity | None#
Return the entity this claim is attached to.
- classmethod qualifierFromJSON(site, data)[source]#
Create a Claim for a qualifier from JSON.
Qualifier objects are represented a bit differently than references, but I'm not sure if this even requires its own function.
- Return type:
pywikibot.page.Claim
- classmethod referenceFromJSON(site, data)[source]#
Create a dict of claims from reference JSON returned in the API call.
Reference objects are represented a bit differently, and require some more handling.
- Return type:
dict
- removeQualifier(qualifier, **kwargs)[source]#
Remove the qualifier. Calls removeQualifiers().
- Parameters:
qualifier (pywikibot.page.Claim) – the qualifier to remove
- Return type:
None
- removeQualifiers(qualifiers, **kwargs)[source]#
Remove the qualifiers.
- Parameters:
qualifiers (list of Claim) – the qualifiers to remove
- Return type:
None
- removeSource(source, **kwargs)[source]#
Remove the source. Calls removeSources().
- Parameters:
source (Claim) – the source to remove
- Return type:
None
- removeSources(sources, **kwargs)[source]#
Remove the sources.
- Parameters:
sources (list of Claim) – the sources to remove
- Return type:
None
- same_as(other, ignore_rank=True, ignore_quals=False, ignore_refs=True)[source]#
Check if two claims are the same.
- Parameters:
ignore_rank (bool)
ignore_quals (bool)
ignore_refs (bool)
- Return type:
bool
- setSnakType(value)[source]#
Set the type of snak.
- Parameters:
value (str ('value', 'somevalue', or 'novalue')) – Type of snak
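The accepted values are exactly those listed in SNAK_TYPES above. A sketch of the validation this implies; the helper is hypothetical, not pywikibot's implementation:

```python
# The three snak types defined by Wikibase, as in Claim.SNAK_TYPES.
SNAK_TYPES = ('value', 'somevalue', 'novalue')

def validate_snak_type(value: str) -> str:
    """Return value unchanged if it is a valid snak type, else raise."""
    if value not in SNAK_TYPES:
        raise ValueError(f'{value!r} is not one of {SNAK_TYPES}')
    return value
```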
- setTarget(value)[source]#
Set the target value in the local object.
- Parameters:
value (object) – The new target value.
- Raises:
ValueError – if value is not of the type required for the Claim type.
- target_equals(value)[source]#
Check whether the Claim’s target is equal to specified value.
The function checks for:
WikibaseEntity ID equality
WbTime year equality
Coordinate equality, regarding precision
WbMonolingualText text equality
direct equality
- Parameters:
value – the value to compare with
- Returns:
true if the Claim’s target is equal to the value provided, false otherwise
- Return type:
bool
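The checks listed above can be read as a cascade over the target's attributes. The attribute names (id, year, text) mirror the Wikibase value types; this is an illustration of the comparison order, not pywikibot's actual code:

```python
def target_matches(target, value) -> bool:
    """Illustrative cascade mirroring Claim.target_equals()."""
    if hasattr(target, 'id') and isinstance(value, str):
        return target.id == value          # WikibaseEntity ID equality
    if hasattr(target, 'year') and isinstance(value, int):
        return target.year == value        # WbTime year equality
    if hasattr(target, 'text') and isinstance(value, str):
        return target.text == value        # WbMonolingualText text equality
    return target == value                 # direct equality
```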
- class page.FileInfo(file_revision, filepage)[source]#
Bases:
object
A structure holding imageinfo of latest rev. of FilePage.
All keys of the API imageinfo dictionary are mapped to FileInfo attributes. Attributes can be retrieved both as self['key'] and as self.key.
- Following attributes will be returned:
timestamp, user, comment, url, size, sha1, mime, metadata (lazily)
archivename (not for latest revision)
see
Site.loadimageinfo()
for details.Note
timestamp will be cast to
pywikibot.Timestamp()
.
Changed in version 7.7: raises KeyError instead of AttributeError if FileInfo is used as Mapping.
Changed in version 8.6: Metadata are loaded lazily. Added filepage parameter.
Initialize the class using the dict from
APISite.loadimageinfo
.- property metadata#
Return metadata.
Added in version 8.6.
- class page.FilePage(source, title='', *, ignore_extension=False)[source]#
Bases:
Page
A subclass of Page representing a file description page.
Supports the same interface as Page except the ns parameter, plus some added methods.
Changed in version 8.4: check for valid extensions.
Changed in version 9.3: ignore_extension parameter was added
- Parameters:
source (pywikibot.page.BaseLink (or subclass), pywikibot.page.Page (or subclass), or pywikibot.page.Site) – the source of the page
title (str) – normalized title of the page; required if source is a Site, ignored otherwise
ignore_extension (bool) – prevent extension check
- Raises:
ValueError – Either the title is not in the file namespace or does not have a valid extension and ignore_extension was not set.
- data_item()[source]#
Convenience function to get the associated Wikibase item of the file.
If WikibaseMediaInfo extension is available (e.g., on Commons), the method returns the associated mediainfo entity. Otherwise, it falls back to the behavior of
BasePage.data_item()
.
Added in version 6.5.
- Return type:
pywikibot.page.WikibaseEntity
- download(filename=None, chunk_size=102400, revision=None, *, url_width=None, url_height=None, url_param=None)[source]#
Download the file of this FilePage to the given filename.
Usage examples:
Download an image:
>>> site = pywikibot.Site('wikipedia:test')
>>> file = pywikibot.FilePage(site, 'Pywikibot MW gear icon.svg')
>>> file.download()
True
Pywikibot_MW_gear_icon.svg was downloaded.
Download a thumbnail:
>>> file.download(url_param='120px') True
The suffix has changed and Pywikibot_MW_gear_icon.png was downloaded.
Added in version 8.2: url_width, url_height and url_param parameters.
Changed in version 8.2: filename argument may be also a path-like object or an iterable of path segments.
Note
The filename suffix is adjusted if the target url's suffix differs, which may be the case if a thumbnail is loaded.
Warning
If a file already exists, it will be overwritten without further notice.
See also
API:Imageinfo for new parameters
- Parameters:
filename (str | PathLike | Iterable[str] | None) – filename where to save file. If
None
,self.title(as_filename=True, with_ns=False)
will be used. If an Iterable is specified the items will be used as path segments. To specify the user directory path you have to use either~
or~user
as first path segment e.g.~/foo
or('~', 'foo')
as filename. If only the user directory specifier is given, the title is used as filename like for None. If the suffix is missing or different from the url (which can happen if a url_width, url_height or url_param argument is given), the file suffix is adjusted.
chunk_size (int) – the size of each chunk to be received and written to file.
revision (FileInfo | None) – file revision to download. If None
latest_file_info
will be used; otherwise the provided revision will be used.
url_width (int | None) – download thumbnail with given width
url_height (int | None) – download thumbnail with given height
url_param (str | None) – download thumbnail with given param
- Returns:
True if download is successful, False otherwise.
- Raises:
IOError – if filename cannot be written for any reason.
- Return type:
bool
- file_is_shared()[source]#
Check if the file is stored on any known shared repository.
Changed in version 7.0: return False if the file does not exist on the shared image repository instead of raising NoPageError.
- Return type:
bool
- property file_is_used: bool#
Check whether the file is used at this site.
Added in version 7.1.
- getFileVersionHistoryTable()[source]#
Return the version history in the form of a wiki table.
- Return type:
str
- getImagePageHtml()[source]#
Download the file page, and return the HTML, as a string.
Caches the HTML code, so that if you run this method twice on the same FilePage object, the page will only be downloaded once.
- Return type:
str
- get_file_history()[source]#
Return the file’s version history.
- Returns:
a dict mapping the timestamp of each entry to an instance of FileInfo()
- Return type:
dict
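Because the returned dict is keyed by timestamp, the newest revision can be picked with max() over the keys. A sketch under the assumption that the keys sort chronologically (true for both pywikibot.Timestamp and ISO 8601 strings); the helper is illustrative:

```python
def latest_file_entry(history: dict):
    """Return (timestamp, file_info) of the newest file revision."""
    if not history:
        raise ValueError('empty file history')
    newest = max(history)  # timestamps sort chronologically
    return newest, history[newest]
```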
- get_file_info(ts)[source]#
Retrieve and store information of a specific Image rev. of FilePage.
This function will also load metadata. It is also used as a helper in FileInfo to load metadata lazily.
Added in version 8.6.
- Parameters:
ts – timestamp of the Image rev. to retrieve
- Returns:
instance of FileInfo()
- Return type:
dict
- get_file_url(url_width=None, url_height=None, url_param=None)[source]#
Return the url or the thumburl of the file described on this page.
Fetch the information if not available.
Once retrieved, file information will also be accessible as
latest_file_info
attributes, named as in API:Imageinfo. If url_width, url_height or url_param is given, the additional properties
thumbwidth
,
thumbheight
,
thumburl
and
responsiveUrls
are provided.
Parameters validation and error handling left to the API call.
See also
- Parameters:
url_width (int | None) – get info for a thumbnail with given width
url_height (int | None) – get info for a thumbnail with given height
url_param (str | None) – get info for a thumbnail with given param
- Returns:
latest file url or thumburl
- Return type:
str
- globalusage(total=None)[source]#
Iterate all global usage for this page.
See also
- Parameters:
total – iterate no more than this number of pages in total
- Returns:
a generator that yields Pages also on sites different from self.site.
- Return type:
generator
- property latest_file_info#
Retrieve and store information of latest Image rev. of FilePage.
At the same time, the whole history of Image is fetched and cached in self._file_revisions
- Returns:
instance of FileInfo()
- property oldest_file_info#
Retrieve and store information of oldest Image rev. of FilePage.
At the same time, the whole history of Image is fetched and cached in self._file_revisions
- Returns:
instance of FileInfo()
- upload(source, **kwargs)[source]#
Upload this file to the wiki.
Keyword arguments are passed to the site.upload() method.
- Parameters:
source (str) – Path or URL to the file to be uploaded.
- Keyword Arguments:
comment – Edit summary; if this is not provided, then filepage.text will be used. An empty summary is not permitted. This may also serve as the initial page text (see below).
text – Initial page text; if this is not set, then filepage.text will be used, or comment.
watch – If true, add filepage to the bot user’s watchlist
ignore_warnings –
It may be a static boolean, a callable returning a boolean, or an iterable. The callable gets a list of UploadError instances; an iterable is equivalent to a callable returning True exactly when all UploadError codes are contained in that list of warning codes. If the result is False the upload will not continue; otherwise all warnings are disabled and the upload is reattempted.
Note
If report_success is True or None, an UploadError exception is raised if the static boolean is False.
chunk_size – The chunk size in bytes for chunked uploading (see API:Upload#Chunked_uploading). It will only upload in chunks if the chunk size is positive but lower than the file size.
report_success – If the upload was successful it’ll print a success message and if ignore_warnings is set to False it’ll raise an UploadError if a warning occurred. If it’s None (default) it’ll be True if ignore_warnings is a bool and False otherwise. If it’s True or None ignore_warnings must be a bool.
- Returns:
True if the upload was successful, False otherwise.
- Return type:
bool
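The chunk_size rule above ("only upload in chunks if the chunk size is positive but lower than the file size") reduces to a simple predicate, sketched here for illustration:

```python
def uses_chunked_upload(chunk_size: int, file_size: int) -> bool:
    """True if an upload with this chunk_size would be chunked."""
    return 0 < chunk_size < file_size
```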
- usingPages(**kwargs)[source]#
Yield Pages on which the file is displayed.
Deprecated since version 7.4: Use
using_pages()
instead.
- using_pages(**kwargs)[source]#
Yield Pages on which the file is displayed.
For parameters refer to
APISite.imageusage()
Usage example:
>>> site = pywikibot.Site('wikipedia:test')
>>> file = pywikibot.FilePage(site, 'Pywikibot MW gear icon.svg')
>>> used = list(file.using_pages(total=10))
>>> len(used)
2
>>> used[0].title()
'Pywikibot'
See also
Changed in version 7.2: all parameters from
APISite.imageusage()
are available.
Changed in version 7.4: renamed from
usingPages()
.
- class page.ItemPage(site, title=None, ns=None)[source]#
Bases:
WikibasePage
Wikibase entity of type ‘item’.
A Wikibase item may be defined by either a ‘Q’ id (qid), or by a site & title.
If an item is defined by site & title, once an item’s qid has been looked up, the item is then defined by the qid.
- Parameters:
site (pywikibot.site.DataSite) – data repository
title (str) – identifier of item, “Q###”, -1 or None for an empty item.
- DATA_ATTRIBUTES: dict[str, Any] = {'aliases': <class 'pywikibot.page._collections.AliasesDict'>, 'claims': <class 'pywikibot.page._collections.ClaimCollection'>, 'descriptions': <class 'pywikibot.page._collections.LanguageDict'>, 'labels': <class 'pywikibot.page._collections.LanguageDict'>, 'sitelinks': <class 'pywikibot.page._collections.SiteLinkCollection'>}#
- entity_type = 'item'#
- classmethod fromPage(page, lazy_load=False)[source]#
Get the ItemPage for a Page that links to it.
- Parameters:
page (pywikibot.page.Page) – Page to look for corresponding data item
lazy_load (bool) – Do not raise NoPageError if either page or corresponding ItemPage does not exist.
- Return type:
pywikibot.page.ItemPage
- Raises:
pywikibot.exceptions.NoPageError – There is no corresponding ItemPage for the page
pywikibot.exceptions.WikiBaseError – The site of the page has no data repository.
- classmethod from_entity_uri(site, uri, lazy_load=False)[source]#
Get the ItemPage from its entity uri.
- Parameters:
site (pywikibot.site.DataSite) – The Wikibase site for the item.
uri (str) – Entity uri for the Wikibase item.
lazy_load (bool) – Do not raise NoPageError if ItemPage does not exist.
- Return type:
pywikibot.page.ItemPage
- Raises:
TypeError – Site is not a valid DataSite.
ValueError – Site does not match the base of the provided uri.
pywikibot.exceptions.NoPageError – Uri points to non-existent item.
- get(force=False, get_redirect=False, *args, **kwargs)[source]#
Fetch all item data, and cache it.
- Parameters:
force (bool) – override caching
get_redirect (bool) – return the item content, do not follow the redirect, do not raise an exception.
- Raises:
NotImplementedError – a value in args or kwargs
IsRedirectPageError – instance is a redirect page and get_redirect is not True
- Returns:
actual data which entity holds
- Return type:
dict[str, Any]
Note
dicts returned by this method are references to the content of this entity; modifying them may indirectly cause unwanted changes to the live content
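Given the note above, a defensive deep copy is one way to edit the returned data safely. Sketched with a plain dict standing in for the structure returned by get(); the helper name is illustrative:

```python
import copy

def editable_labels(entity_data: dict) -> dict:
    """Return a deep copy of the labels mapping, safe to modify
    without touching the entity's cached content."""
    return copy.deepcopy(entity_data.get('labels', {}))
```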
- getID(numeric=False, force=False)[source]#
Get the entity identifier.
- Parameters:
numeric (bool) – Strip the first letter and return an int
force (bool) – Force an update of new data
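With numeric=True the leading entity letter is stripped, so 'Q42' becomes 42. The conversion itself is just a slice-and-cast; sketched here for illustration:

```python
def numeric_entity_id(entity_id: str) -> int:
    """Strip the leading letter of an entity id: 'Q42' -> 42."""
    return int(entity_id[1:])
```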
- getRedirectTarget(*, ignore_section=True)[source]#
Return the redirect target for this page.
Added in version 9.3: ignore_section parameter
See also
- Parameters:
ignore_section (bool) – do not include the section in the target even if the link has one
- Raises:
CircularRedirectError – page is a circular redirect
InterwikiRedirectPageError – the redirect target is on another site
Error – target page has wrong content model
IsNotRedirectPageError – page is not a redirect
RuntimeError – no redirects found
SectionError – the section is not found on target page and ignore_section is not set
- getSitelink(site, force=False)[source]#
Return the title for the specific site.
If the item doesn’t have a link to that site, raise NoSiteLinkError.
Changed in version 8.1: raises NoSiteLinkError instead of NoPageError.
- Parameters:
site (pywikibot.Site or database name) – Site to find the linked page of.
force (bool) – override caching
get_redirect – return the item content, do not follow the redirect, do not raise an exception.
- Raises:
IsRedirectPageError – instance is a redirect page
NoSiteLinkError – site is not in
sitelinks
- Return type:
str
- iterlinks(family=None)[source]#
Iterate through all the sitelinks.
- Parameters:
family (str|pywikibot.family.Family) – string/Family object which represents what family of links to iterate
- Returns:
iterator of pywikibot.Page objects
- Return type:
iterator
- mergeInto(item, **kwargs)[source]#
Merge the item into another item.
- Parameters:
item (pywikibot.page.ItemPage) – The item to merge into
- Return type:
None
- removeSitelink(site, **kwargs)[source]#
Remove a sitelink.
A site can either be a Site object, or it can be a dbName.
- Parameters:
site (LANGUAGE_IDENTIFIER)
- Return type:
None
- removeSitelinks(sites, **kwargs)[source]#
Remove sitelinks.
Sites should be a list, with values either being Site objects, or dbNames.
- Parameters:
sites (list[LANGUAGE_IDENTIFIER])
- Return type:
None
- setSitelink(sitelink, **kwargs)[source]#
Set a sitelink. Calls
setSitelinks()
.A sitelink can be a Page object, a BaseLink object or a
{'site': dbname, 'title': title}
dictionary.
Refer to
WikibasePage.editEntity()
for asynchronous and callback usage.
- Parameters:
sitelink (SITELINK_TYPE)
- Return type:
None
- setSitelinks(sitelinks, **kwargs)[source]#
Set sitelinks.
sitelinks should be a list. Each item in the list can either be a Page object, a BaseLink object, or a dict with key for ‘site’ and a value for ‘title’.
Refer to
editEntity()
for asynchronous and callback usage.
- Parameters:
sitelinks (list[SITELINK_TYPE])
- Return type:
None
- set_redirect_target(target_page, create=False, force=False, keep_section=False, save=True, botflag='[deprecated name of bot]', **kwargs)[source]#
Make the item redirect to another item.
You need to define an extra argument to make this work, like
save=True
.
Changed in version 9.3: botflag keyword parameter was renamed to bot.
- Parameters:
target_page (ItemPage | str) – target of the redirect, this argument is required.
force (bool) – if true, set the redirect target even if the page is not a redirect.
create (bool)
keep_section (bool)
save (bool)
- title(**kwargs)[source]#
Return ID as title of the ItemPage.
If the ItemPage was lazy-loaded via ItemPage.fromPage, this method will fetch the Wikibase item ID for the page, potentially raising NoPageError with the page on the linked wiki if it does not exist, or does not have a corresponding Wikibase item ID.
This method also refreshes the title if the id property was set. i.e. item.id = ‘Q60’
All optional keyword parameters are passed to the superclass.
- title_pattern = 'Q[1-9]\\d*'#
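The title_pattern above is a regular expression that valid item IDs must fully match ('Q' prefix, no leading zeros); it can be reused directly for a quick validity check:

```python
import re

# Same pattern as ItemPage.title_pattern.
ITEM_TITLE_PATTERN = re.compile(r'Q[1-9]\d*')

def is_valid_item_id(title: str) -> bool:
    """True if title is a syntactically valid item ID like 'Q42'."""
    return ITEM_TITLE_PATTERN.fullmatch(title) is not None
```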
- class page.LexemeForm(repo, id_=None)[source]#
Bases:
LexemeSubEntity
Wikibase lexeme form.
- DATA_ATTRIBUTES: dict[str, Any] = {'claims': <class 'pywikibot.page._collections.ClaimCollection'>, 'representations': <class 'pywikibot.page._collections.LanguageDict'>}#
- edit_elements(data, **kwargs)[source]#
Update form elements.
- Parameters:
data (dict) – Data to be saved
- Return type:
None
- entity_type = 'form'#
- get(force=False)[source]#
Fetch all form data, and cache it.
- Parameters:
force (bool) – override caching
- Return type:
dict
Note
dicts returned by this method are references to the content of this entity; modifying them may indirectly cause unwanted changes to the live content
- title_pattern = 'L[1-9]\\d*-F[1-9]\\d*'#
- class page.LexemePage(site, title=None)[source]#
Bases:
WikibasePage
Wikibase entity of type ‘lexeme’.
Basic usage sample:
>>> import pywikibot
>>> repo = pywikibot.Site('wikidata')
>>> L2 = pywikibot.LexemePage(repo, 'L2')  # create a Lexeme page
>>> list(L2.claims)  # access the claims
['P5402', 'P5831', 'P12690']
>>> len(L2.forms)  # access the forms
3
>>> F1 = L2.forms[0]  # access the first form
>>> list(F1.claims)  # access its claims
['P898']
>>> len(L2.senses)  # access the senses
2
>>> S1 = L2.senses[0]  # access the first sense
>>> list(S1.claims)  # and its claims
['P5137', 'P5972', 'P2888']
- Parameters:
site (pywikibot.site.DataSite) – data repository
title (str or None) – identifier of lexeme, “L###”, -1 or None for an empty lexeme.
- DATA_ATTRIBUTES: dict[str, Any] = {'claims': <class 'pywikibot.page._collections.ClaimCollection'>, 'forms': <class 'pywikibot.page._wikibase.LexemeFormCollection'>, 'lemmas': <class 'pywikibot.page._collections.LanguageDict'>, 'senses': <class 'pywikibot.page._wikibase.LexemeSenseCollection'>}#
- add_form(form, **kwargs)[source]#
Add a form to the lexeme.
- Parameters:
form (Form) – The form to add
- Keyword Arguments:
bot – Whether to flag as bot (if possible)
asynchronous – if True, launch a separate thread to add form asynchronously
callback – a callable object that will be called after the claim has been added. It must take two arguments: (1) a LexemePage object, and (2) an exception instance, which will be None if the entity was saved successfully. This is intended for use by bots that need to keep track of which saves were successful.
- Return type:
None
- entity_type = 'lexeme'#
- get(force=False, get_redirect=False, *args, **kwargs)[source]#
Fetch all lexeme data, and cache it.
- Parameters:
force (bool) – override caching
get_redirect (bool) – if True, return the lexeme content even if it is a redirect; do not follow the redirect and do not raise an exception.
- Raises:
NotImplementedError – if a value is given in args or kwargs
Note
The dicts returned by this method are references to the content of this entity; modifying them may indirectly cause unwanted changes to the live content.
- mergeInto(lexeme, **kwargs)[source]#
Merge the lexeme into another lexeme.
- Parameters:
lexeme (LexemePage) – The lexeme to merge into
- remove_form(form, **kwargs)[source]#
Remove a form from the lexeme.
- Parameters:
form (LexemeForm) – The form to remove
- Return type:
None
- title_pattern = 'L[1-9]\\d*'#
- class page.LexemeSense(repo, id_=None)[source]#
Bases:
LexemeSubEntity
Wikibase lexeme sense.
- DATA_ATTRIBUTES: dict[str, Any] = {'claims': <class 'pywikibot.page._collections.ClaimCollection'>, 'glosses': <class 'pywikibot.page._collections.LanguageDict'>}#
- entity_type = 'sense'#
- title_pattern = 'L[1-9]\\d*-S[1-9]\\d*'#
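The title_pattern attributes documented above can be used to validate entity identifiers before constructing a page object. A minimal sketch using Python's re module with the patterns shown here (the helper name is illustrative, not part of the API):

```python
import re

# Patterns copied from the title_pattern attributes documented above.
TITLE_PATTERNS = {
    'item': r'Q[1-9]\d*',
    'lexeme': r'L[1-9]\d*',
    'form': r'L[1-9]\d*-F[1-9]\d*',
    'sense': r'L[1-9]\d*-S[1-9]\d*',
}

def is_valid_entity_id(entity_type, entity_id):
    """Check whether entity_id fully matches the pattern for entity_type."""
    return re.fullmatch(TITLE_PATTERNS[entity_type], entity_id) is not None

print(is_valid_entity_id('form', 'L2-F1'))  # prints: True
print(is_valid_entity_id('item', 'Q0'))     # prints: False (ids never start with 0)
```

re.fullmatch is used rather than re.match so that a form id such as 'L2-F1' does not incorrectly pass the plain lexeme pattern.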
- class page.Link(text, source=None, default_namespace=0)[source]#
Bases:
BaseLink
A MediaWiki wikitext link (local or interwiki).
Constructs a Link object based on a wikitext link and a source site.
Extends BaseLink by the following attributes:
section: The section of the page linked to (str or None); this contains any text following a ‘#’ character in the title
anchor: The anchor text (str or None); this contains any text following a ‘|’ character inside the link
- Parameters:
text (str) – the link text (everything appearing between [[ and ]] on a wiki page)
source (Site or BasePage) – the Site on which the link was found (not necessarily the site to which the link refers)
default_namespace (int) – a namespace to use if the link does not contain one (defaults to 0)
- Raises:
UnicodeError – text could not be converted to unicode.
- property anchor: str#
Return the anchor of the link.
- astext(onsite=None)[source]#
Return a text representation of the link.
- Parameters:
onsite – if specified, present as a (possibly interwiki) link from the given site; otherwise, present as an internal link on the source site.
- classmethod create_separated(link, source, default_namespace=0, section=None, label=None)[source]#
Create a new instance but overwrite section or label.
The returned Link instance is already parsed.
- Parameters:
link (str) – The original link text.
source (Site) – The source of the link.
default_namespace (int) – The namespace this link uses when no namespace is defined in the link text.
section (None or str) – The new section replacing the one in link. If None (default) it doesn’t replace it.
label – The new label replacing the one in link. If None (default) it doesn’t replace it.
- classmethod fromPage(page, source=None)[source]#
Create a Link to a Page.
- Parameters:
page (pywikibot.page.Page) – target Page
source (Site) – Link from site source
- Return type:
pywikibot.page.Link
- illegal_titles_pattern = re.compile('[\\x00-\\x1f\\x23\\x3c\\x3e\\x5b\\x5d\\x7b\\x7c\\x7d\\x7f]|%[0-9A-Fa-f]{2}|&[A-Za-z0-9\x80-ÿ]+;|&#[0-9]+;|&#x[0-9A-Fa-f]+;')#
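The illegal_titles_pattern above rejects characters MediaWiki forbids in page titles, along with percent-encoded sequences and HTML entities. A small sketch applying the documented pattern (the wrapper function is hypothetical, added for illustration):

```python
from __future__ import annotations

import re

# Pattern copied from Link.illegal_titles_pattern as documented above:
# control chars, # < > [ ] { | } DEL, percent-encodings and HTML entities.
illegal_titles_pattern = re.compile(
    '[\\x00-\\x1f\\x23\\x3c\\x3e\\x5b\\x5d\\x7b\\x7c\\x7d\\x7f]'
    '|%[0-9A-Fa-f]{2}'
    '|&[A-Za-z0-9\\x80-\\xff]+;'
    '|&#[0-9]+;|&#x[0-9A-Fa-f]+;'
)

def first_illegal_sequence(title: str) -> str | None:
    """Return the first forbidden sequence in title, or None if clean."""
    match = illegal_titles_pattern.search(title)
    return match.group() if match else None

print(first_illegal_sequence('Foo|Bar'))    # prints: |
print(first_illegal_sequence('Main Page'))  # prints: None
```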
- classmethod langlinkUnsafe(lang, title, source)[source]#
Create a “lang:title” Link linked from source.
Assumes that lang and title are already clean; no checks are made.
- Parameters:
lang (str) – target site code (language)
title (str) – target Page
source (Site) – Link from site source
- Return type:
pywikibot.page.Link
- property namespace#
Return the namespace of the link.
- Return type:
pywikibot.Namespace
- parse_site()[source]#
Parse only enough text to determine which site the link points to.
This method does not parse anything after the first “:”; links with multiple interwiki prefixes (such as “wikt:fr:Parlais”) need to be re-parsed on the first linked wiki to get the actual site.
- Returns:
The family name and site code for the linked site. If the site is not supported by the configured families it returns None instead of a str.
- Return type:
tuple
- property section: str#
Return the section of the link.
- property site#
Return the site of the link.
- Return type:
pywikibot.Site
- property title: str#
Return the title of the link.
- class page.MediaInfo(repo, id_=None)[source]#
Bases:
WikibaseEntity
Interface for MediaInfo entities on Commons.
Added in version 6.5.
- Parameters:
repo (DataSite) – Entity repository.
id (str or None, -1 and None mean non-existing) – Entity identifier.
id_ (str | None)
- DATA_ATTRIBUTES: dict[str, Any] = {'labels': <class 'pywikibot.page._collections.LanguageDict'>, 'statements': <class 'pywikibot.page._collections.ClaimCollection'>}#
- addClaim(claim, bot=True, **kwargs)[source]#
Add a claim to the MediaInfo.
Added in version 8.5.
- Parameters:
claim (pywikibot.page.Claim) – The claim to add
bot (bool) – Whether to flag as bot (if possible)
- editLabels(labels, **kwargs)[source]#
Edit MediaInfo labels (e.g. captions).
labels should be a dict, with the key as a language or a site object. The value should be the string to set it to. You can set it to
''
to remove the label.
Usage:
>>> repo = pywikibot.Site('commons', 'commons')
>>> page = pywikibot.FilePage(repo, 'File:Sandbox-Test.svg')
>>> item = page.data_item()
>>> item.editLabels({'en': 'Test file.'})
Added in version 8.5.
- Parameters:
labels (LANGUAGE_TYPE)
- Return type:
None
- entity_type = 'mediainfo'#
- get(force=False)[source]#
Fetch all MediaInfo entity data and cache it.
Note
The dicts returned by this method are references to the content of this entity; modifying them may indirectly cause unwanted changes to the live content.
Changed in version 9.0: Added pageid, ns, title, lastrevid, modified and id values to the
_content
attribute when it is loaded.
- Parameters:
force (bool) – override caching
- Raises:
NoWikibaseEntityError – if this entity doesn’t exist
- Returns:
actual data which entity holds
- Return type:
dict
- getID(numeric=False)[source]#
Get the entity identifier.
See also
- Parameters:
numeric (bool) – Strip the first letter and return an int
- Raises:
NoWikibaseEntityError – if this entity is associated with a non-existing file
- Return type:
str | int
- removeClaims(claims, **kwargs)[source]#
Remove the claims from the MediaInfo.
Added in version 8.5.
- Parameters:
claims (list or Claim) – list of claims to be removed
- Return type:
None
- title()[source]#
Return ID as title of the MediaInfo.
Added in version 9.4.
See also
- Raises:
NoWikibaseEntityError – if this entity is associated with a non-existing file
- Returns:
the entity identifier
- Return type:
str
- title_pattern = 'M[1-9]\\d*'#
- class page.Page(source, title='', ns=0)[source]#
Bases:
BasePage
,WikiBlameMixin
Page: A MediaWiki page.
Instantiate a Page object.
- Parameters:
title (str)
- get_best_claim(prop)[source]#
Return the first best Claim for this page.
Return the first ‘preferred’ ranked Claim specified by Wikibase property or the first ‘normal’ one otherwise.
Added in version 3.0.
- Parameters:
prop (str) – property id, “P###”
- Returns:
Claim object given by Wikibase property number for this page object.
- Return type:
Claim or None
- Raises:
UnknownExtensionError – site has no Wikibase extension
- property raw_extracted_templates#
Extract templates and parameters.
This method uses
textlib.extract_templates_and_params()
. Disabled parts and whitespace are stripped, except for whitespace in anonymous positional arguments.
- Return type:
list of (str, OrderedDict)
- set_redirect_target(target_page, create=False, force=False, keep_section=False, save=True, botflag='[deprecated name of bot]', **kwargs)[source]#
Change the page’s text to point to the redirect page.
Changed in version 9.3: botflag keyword parameter was renamed to bot.
- Parameters:
target_page (Page | str) – target of the redirect, this argument is required.
create (bool) – if True, create the redirect even if the page doesn’t exist.
force (bool) – if True, set the redirect target even if the page doesn’t exist or is not a redirect.
keep_section (bool) – if the old redirect links to a section and the new one doesn’t, use the old redirect’s section.
save (bool) – if True, save the page immediately.
kwargs – Arguments which are used for saving the page directly afterwards, like summary for edit summary.
- templatesWithParams()[source]#
Return templates used on this Page.
The templates are extracted by
raw_extracted_templates()
, with positional arguments placed first in order, and each named argument appearing as ‘name=value’.
All parameter keys and values for each template are stripped of whitespace.
- Returns:
a list of tuples with one tuple for each template invocation in the page, with the template Page as the first entry and a list of parameters as the second entry.
- Return type:
list[tuple[Page, list[str]]]
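The shape of the return value can be illustrated without a live site. The sketch below extracts simple, non-nested template invocations from wikitext and orders positional arguments before named ones, mirroring the documented output shape (template name first, then a list of parameter strings); it is a rough simplification for illustration, not the textlib implementation:

```python
import re

def simple_templates_with_params(text):
    """Extract non-nested {{name|arg|name=value}} invocations.

    Returns a list of (template_name, [positional..., 'name=value'...])
    tuples, roughly mirroring templatesWithParams() output.
    """
    result = []
    for match in re.finditer(r'\{\{([^{}]+)\}\}', text):
        name, *params = [p.strip() for p in match.group(1).split('|')]
        positional = [p for p in params if '=' not in p]
        named = [p for p in params if '=' in p]
        result.append((name, positional + named))
    return result

text = 'Intro {{Infobox|name=Ada|mathematician}} and {{stub}}.'
print(simple_templates_with_params(text))
# [('Infobox', ['mathematician', 'name=Ada']), ('stub', [])]
```

Unlike this sketch, the real method returns a Page object (not a string) for each template and handles nesting and disabled parts via textlib.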
- class page.Property(site, id, datatype=None)[source]#
Bases:
object
A Wikibase property.
While every Wikibase property has a Page on the data repository, this object is for when the property is used as part of another concept where the property is not _the_ Page of the property.
For example, a claim on an ItemPage has many property attributes, and so it subclasses this Property class, but a claim does not have Page like behaviour and semantics.
- Parameters:
site (pywikibot.site.DataSite) – data repository
id (str) – id of the property
datatype (str | None) – datatype of the property; if not given, it will be queried via the API
- getID(numeric=False)[source]#
Get the identifier of this property.
- Parameters:
numeric (bool) – Strip the first letter and return an int
- property type: str#
Return the type of this property.
Changed in version 9.4: raises
NoWikibaseEntityError
if property does not exist.- Raises:
NoWikibaseEntityError – property does not exist
- types = {'commonsMedia': <class 'pywikibot.page._filepage.FilePage'>, 'external-id': <class 'str'>, 'geo-shape': <class 'pywikibot._wbtypes.WbGeoShape'>, 'globe-coordinate': <class 'pywikibot._wbtypes.Coordinate'>, 'math': <class 'str'>, 'monolingualtext': <class 'pywikibot._wbtypes.WbMonolingualText'>, 'musical-notation': <class 'str'>, 'quantity': <class 'pywikibot._wbtypes.WbQuantity'>, 'string': <class 'str'>, 'tabular-data': <class 'pywikibot._wbtypes.WbTabularData'>, 'time': <class 'pywikibot._wbtypes.WbTime'>, 'url': <class 'str'>, 'wikibase-form': <class 'pywikibot.page._wikibase.LexemeForm'>, 'wikibase-item': <class 'pywikibot.page._wikibase.ItemPage'>, 'wikibase-lexeme': <class 'pywikibot.page._wikibase.LexemePage'>, 'wikibase-property': <class 'pywikibot.page._wikibase.PropertyPage'>, 'wikibase-sense': <class 'pywikibot.page._wikibase.LexemeSense'>}#
- value_types = {'commonsMedia': 'string', 'external-id': 'string', 'geo-shape': 'string', 'globe-coordinate': 'globecoordinate', 'math': 'string', 'musical-notation': 'string', 'tabular-data': 'string', 'url': 'string', 'wikibase-form': 'wikibase-entityid', 'wikibase-item': 'wikibase-entityid', 'wikibase-lexeme': 'wikibase-entityid', 'wikibase-property': 'wikibase-entityid', 'wikibase-sense': 'wikibase-entityid'}#
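The value_types table above maps a property datatype to the serialized value type used by the Wikibase API. A minimal lookup sketch over the documented table, assuming (from the table's omissions, e.g. 'time' and 'quantity') that datatypes absent from it serialize as themselves:

```python
# value_types copied from the Property.value_types mapping documented above.
VALUE_TYPES = {
    'commonsMedia': 'string', 'external-id': 'string', 'geo-shape': 'string',
    'globe-coordinate': 'globecoordinate', 'math': 'string',
    'musical-notation': 'string', 'tabular-data': 'string', 'url': 'string',
    'wikibase-form': 'wikibase-entityid', 'wikibase-item': 'wikibase-entityid',
    'wikibase-lexeme': 'wikibase-entityid',
    'wikibase-property': 'wikibase-entityid',
    'wikibase-sense': 'wikibase-entityid',
}

def value_type_for(datatype):
    """Serialized value type for a datatype; fall back to the datatype
    itself for types absent from the table (e.g. 'time', 'quantity')."""
    return VALUE_TYPES.get(datatype, datatype)

print(value_type_for('url'))   # prints: string
print(value_type_for('time'))  # prints: time
```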
- class page.PropertyPage(source, title=None, datatype=None)[source]#
Bases:
WikibasePage
,Property
A Wikibase entity in the property namespace.
Should be created as:
PropertyPage(DataSite, 'P21')
or:
PropertyPage(DataSite, datatype='url')
- Parameters:
source (pywikibot.site.DataSite) – data repository property is on
title (str) – identifier of property, like “P##”, “-1” or None for an empty property.
datatype (str) – Datatype for a new property.
- DATA_ATTRIBUTES: dict[str, Any] = {'aliases': <class 'pywikibot.page._collections.AliasesDict'>, 'claims': <class 'pywikibot.page._collections.ClaimCollection'>, 'descriptions': <class 'pywikibot.page._collections.LanguageDict'>, 'labels': <class 'pywikibot.page._collections.LanguageDict'>}#
- entity_type = 'property'#
- get(force=False, *args, **kwargs)[source]#
Fetch the property entity, and cache it.
- Parameters:
force (bool) – override caching
- Raises:
NotImplementedError – if a value is given in args or kwargs
- Returns:
actual data which entity holds
- Return type:
dict
Note
The dicts returned by this method are references to the content of this entity; modifying them may indirectly cause unwanted changes to the live content.
- getID(numeric=False)[source]#
Get the identifier of this property.
- Parameters:
numeric (bool) – Strip the first letter and return an int
- newClaim(*args, **kwargs)[source]#
Helper function to create a new claim object for this property.
- Return type:
- title_pattern = 'P[1-9]\\d*'#
- class page.Revision(**kwargs)[source]#
Bases:
Mapping
A structure holding information about a single revision of a Page.
Each data item can be accessed either by its key or as an attribute with the attribute name equal to the key e.g.:
>>> r = Revision(comment='Sample for Revision access')
>>> r.comment == r['comment']
True
>>> r.comment
'Sample for Revision access'
See also
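The dual key/attribute access shown in the doctest above can be reproduced with a small Mapping subclass; this is a sketch of the behaviour only, not the Revision implementation:

```python
from collections.abc import Mapping

class AttrMapping(Mapping):
    """Mapping whose items are also readable as attributes,
    mirroring the documented Revision access pattern."""

    def __init__(self, **kwargs):
        self._data = dict(kwargs)

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails.
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name) from None

r = AttrMapping(comment='Sample for Revision access')
print(r.comment == r['comment'])  # prints: True
```

Subclassing collections.abc.Mapping supplies __contains__, keys(), items() and get() for free once the three abstract methods are defined.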
- class page.SiteLink(title, site=None, badges=None)[source]#
Bases:
BaseLink
A single sitelink in a Wikibase item.
Extends BaseLink by the following attribute:
badges: Any badges associated with the sitelink
Added in version 3.0.
- Parameters:
title (str) – the title of the linked page including namespace
site (pywikibot.Site or str) – the Site object for the wiki linked to. Can be provided as either a Site instance or a db key, defaults to pywikibot.Site().
badges ([ItemPage]) – list of badges
- class page.User(source, title='')[source]#
Bases:
Page
A class that represents a Wiki user.
This class also represents the Wiki page User:<username>
Initializer for a User object.
All parameters are the same as for the Page() initializer.
- Parameters:
title (str)
- block(*args, **kwargs)[source]#
Block user.
Refer to the
APISite.blockuser
method for parameters.
- Returns:
None
- contributions(total=500, **kwargs)[source]#
Yield tuples describing this user’s edits.
Each tuple is composed of a pywikibot.Page object, the revision id, the edit timestamp and the comment. Pages returned are not guaranteed to be unique.
Example:
>>> site = pywikibot.Site('wikipedia:test')
>>> user = pywikibot.User(site, 'pywikibot-test')
>>> contrib = next(user.contributions(reverse=True))
>>> len(contrib)
4
>>> contrib[0].title()
'User:John Vandenberg/appendtext test'
>>> contrib[1]
504588
>>> str(contrib[2])
'2022-03-04T17:36:02Z'
>>> contrib[3]
''
See also
- Parameters:
total (int | None) – limit result to this number of pages
- Keyword Arguments:
start – Iterate contributions starting at this Timestamp
end – Iterate contributions ending at this Timestamp
reverse – Iterate oldest contributions first (default: newest)
namespaces – only iterate pages in these namespaces
showMinor – if True, iterate only minor edits; if False and not None, iterate only non-minor edits (default: iterate both)
top_only – if True, iterate only edits which are the latest revision (default: False)
- Returns:
tuple of pywikibot.Page, revid, pywikibot.Timestamp, comment
- Return type:
Generator[tuple[Page, int, Timestamp, str | None], None, None]
- deleted_contributions(*, total=500, **kwargs)[source]#
Yield tuples describing this user’s deleted edits.
Added in version 5.5.
- Parameters:
total (int | None) – Limit results to this number of pages
- Keyword Arguments:
start – Iterate contributions starting at this Timestamp
end – Iterate contributions ending at this Timestamp
reverse – Iterate oldest contributions first (default: newest)
namespaces – Only iterate pages in these namespaces
- Return type:
- editCount(force=False)[source]#
Return edit count for a registered user.
Always returns 0 for ‘anonymous’ users.
- Parameters:
force (bool) – if True, forces reloading the data from API
- Return type:
int
- property first_edit: tuple[Page, int, Timestamp, str | None] | None#
Return first user contribution.
- Returns:
first user contribution entry as a tuple of pywikibot.Page, revid, pywikibot.Timestamp and comment
- gender(force=False)[source]#
Return the gender of the user.
- Parameters:
force (bool) – if True, forces reloading the data from API
- Returns:
‘male’, ‘female’ or ‘unknown’
- Return type:
str
- getUserPage(subpage='')[source]#
Return a Page object relative to this user’s main page.
- Parameters:
subpage (str) – subpage part to be appended to the main page title (optional)
- Returns:
Page object of user page or user subpage
- Return type:
- getUserTalkPage(subpage='')[source]#
Return a Page object relative to this user’s main talk page.
- Parameters:
subpage (str) – subpage part to be appended to the main talk page title (optional)
- Returns:
Page object of user talk page or user talk subpage
- Return type:
- getprops(force=False)[source]#
Return properties about the user.
- Parameters:
force (bool) – if True, forces reloading the data from API
- Return type:
dict
- groups(force=False)[source]#
Return a list of groups to which this user belongs.
The list of groups may be empty.
- Parameters:
force (bool) – if True, forces reloading the data from API
- Returns:
groups property
- Return type:
list
- isBlocked(force=False)[source]#
Determine whether the user is currently blocked.
Deprecated since version 7.0: use
is_blocked()
instead.
- Parameters:
force (bool) – if True, forces reloading the data from API
- Return type:
bool
- isEmailable(force=False)[source]#
Determine whether emails may be sent to this user through MediaWiki.
- Parameters:
force (bool) – if True, forces reloading the data from API
- Return type:
bool
- isRegistered(force=False)[source]#
Determine if the user is registered on the site.
It is possible to have a page named User:xyz and not have a corresponding user with username xyz.
The page does not need to exist for this method to return True.
- Parameters:
force (bool) – if True, forces reloading the data from API
- Return type:
bool
- is_blocked(force=False)[source]#
Determine whether the user is currently blocked.
Changed in version 7.0: renamed from
isBlocked()
method; can also detect range blocks.
- Parameters:
force (bool) – if True, forces reloading the data from API
- Return type:
bool
- is_locked(force=False)[source]#
Determine whether the user is currently locked globally.
Added in version 7.0.
- Parameters:
force (bool) – if True, forces reloading the data from API
- Return type:
bool
- property is_thankable: bool#
Determine if the user has thanks notifications enabled.
Note
This doesn’t accurately determine whether thanks notifications are enabled for the user. Privacy of thanks preferences is under discussion; please see T57401#2216861 and T120753#1863894.
- property last_edit: tuple[Page, int, Timestamp, str | None] | None#
Return last user contribution.
- Returns:
last user contribution entry as a tuple of pywikibot.Page, revid, pywikibot.Timestamp and comment
- property last_event#
Return last user activity.
- Returns:
last user log entry
- Return type:
LogEntry or None
- logevents(**kwargs)[source]#
Yield user activities.
- Keyword Arguments:
logtype – only iterate entries of this type (see mediawiki api documentation for available types)
page – only iterate entries affecting this page
namespace – namespace to retrieve logevents from
start – only iterate entries from and after this Timestamp
end – only iterate entries up to and through this Timestamp
reverse – if True, iterate oldest entries first (default: newest)
tag – only iterate entries tagged with this tag
total – maximum number of events to iterate
- Return type:
iterable
- registration(force=False)[source]#
Fetch registration date for this user.
- Parameters:
force (bool) – if True, forces reloading the data from API
- Return type:
Timestamp | None
- renamed_target()[source]#
Return a User object for the target this user was renamed to.
If this user was not renamed, it will raise a
NoRenameTargetError
.
Usage:
>>> site = pywikibot.Site('wikipedia:de')
>>> user = pywikibot.User(site, 'Foo')
>>> user.isRegistered()
False
>>> target = user.renamed_target()
>>> target.isRegistered()
True
>>> target.title(with_ns=False)
'Foo~dewiki'
>>> target.renamed_target()
Traceback (most recent call last):
...
pywikibot.exceptions.NoRenameTargetError: Rename target user ...
Added in version 9.4.
- Raises:
NoRenameTargetError – user was not renamed
- Return type:
- rights(force=False)[source]#
Return user rights.
- Parameters:
force (bool) – if True, forces reloading the data from API
- Returns:
user rights
- Return type:
list
- send_email(subject, text, ccme=False)[source]#
Send an email to this user via MediaWiki’s email interface.
- Parameters:
subject (str) – the subject header of the mail
text (str) – mail body
ccme (bool) – if True, sends a copy of this email to the bot
- Raises:
NotEmailableError – the user of this User is not emailable
UserRightsError – logged in user does not have ‘sendemail’ right
- Returns:
operation successful indicator
- Return type:
bool
- unblock(reason=None)[source]#
Remove the block for the user.
- Parameters:
reason (str | None) – Reason for the unblock.
- Return type:
None
- uploadedImages(total=10)[source]#
Yield tuples describing files uploaded by this user.
Each tuple is composed of a pywikibot.Page, the timestamp (str in ISO8601 format), comment (str) and a bool for pageid > 0. Pages returned are not guaranteed to be unique.
- Parameters:
total (int) – limit result to this number of pages
- property username: str#
The username.
Convenience method that returns the title of the page with namespace prefix omitted, which is the username.
- class page.WikibaseEntity(repo, id_=None)[source]#
Bases:
object
The base interface for Wikibase entities.
Each entity is identified by a data repository it belongs to and an identifier.
- Variables:
DATA_ATTRIBUTES – dictionary which maps data attributes (e.g., ‘labels’, ‘claims’) to appropriate collection classes (e.g.,
LanguageDict
,
ClaimCollection
)
entity_type – entity type identifier
title_pattern – regular expression which matches all possible entity ids for this entity type
- Parameters:
repo (DataSite) – Entity repository.
id (str or None, -1 and None mean non-existing) – Entity identifier.
id_ (str | None)
- DATA_ATTRIBUTES: dict[str, Any] = {}#
- concept_uri()[source]#
Return the full concept URI.
- Raises:
NoWikibaseEntityError – if this entity’s id is not known
- Return type:
str
- editEntity(data=None, **kwargs)[source]#
Edit an entity using Wikibase
wbeditentity
API.
- This function is wrapped around by:
See also
Changed in version 8.0.1: Copy snak IDs/hashes (T327607)
- Parameters:
data (ENTITY_DATA_TYPE | None) – Data to be saved
- Return type:
None
- get(force=False)[source]#
Fetch all entity data and cache it.
- Parameters:
force (bool) – override caching
- Raises:
NoWikibaseEntityError – if this entity doesn’t exist
- Returns:
actual data which entity holds
- Return type:
dict
- getID(numeric=False)[source]#
Get the identifier of this entity.
- Parameters:
numeric (bool) – Strip the first letter and return an int
- Return type:
int | str
- get_data_for_new_entity()[source]#
Return data required for creation of a new entity.
Override it if you need.
- Return type:
dict
- classmethod is_valid_id(entity_id)[source]#
Whether the string can be a valid id of the entity type.
- Parameters:
entity_id (str) – The ID to test.
- Return type:
bool
- property latest_revision_id: int | None#
Get the revision identifier for the most recent revision of the entity.
- Return type:
int or None if it cannot be determined
- Raises:
NoWikibaseEntityError – if the entity doesn’t exist
- class page.WikibasePage(site, title='', **kwargs)[source]#
Bases:
BasePage
,WikibaseEntity
Mixin base class for Wikibase entities which are also pages (e.g. items).
There should be no need to instantiate this directly.
If title is provided, either ns or entity_type must also be provided, and will be checked against the title parsed using the Page initialisation logic.
- Parameters:
site (pywikibot.site.DataSite) – Wikibase data site
title (str) – normalized title of the page
- Keyword Arguments:
ns – namespace
entity_type – Wikibase entity type
- Raises:
TypeError – incorrect use of parameters
ValueError – incorrect namespace
pywikibot.exceptions.Error – title parsing problems
NotImplementedError – the entity type is not supported
- addClaim(claim, bot=True, **kwargs)[source]#
Add a claim to the entity.
- Parameters:
claim (pywikibot.page.Claim) – The claim to add
bot (bool) – Whether to flag as bot (if possible)
- Keyword Arguments:
asynchronous – if True, launch a separate thread to add claim asynchronously
callback – a callable object that will be called after the claim has been added. It must take two arguments: (1) a WikibasePage object, and (2) an exception instance, which will be None if the entity was saved successfully. This is intended for use by bots that need to keep track of which saves were successful.
- Return type:
None
- botMayEdit()[source]#
Return whether bots may edit this page.
Because there is currently no way to mark a Wikibase page as one that bots shouldn’t edit, this always returns True. The content of the page is not text but a dict, so the original approach (searching for a template) doesn’t apply.
- Returns:
True
- Return type:
bool
- editAliases(aliases, **kwargs)[source]#
Edit entity aliases.
aliases should be a dict, with the key as a language or a site object. The value should be a list of strings.
Refer to
editEntity()
for asynchronous and callback usage.
Usage:
>>> repo = pywikibot.Site('wikidata:test')
>>> item = pywikibot.ItemPage(repo, 'Q68')
>>> item.editAliases({'en': ['pwb test item']})
- Parameters:
aliases (ALIASES_TYPE)
- Return type:
None
- editDescriptions(descriptions, **kwargs)[source]#
Edit entity descriptions.
descriptions should be a dict, with the key as a language or a site object. The value should be the string to set it to. You can set it to
''
to remove the description.
Refer to
editEntity()
for asynchronous and callback usage.
Usage:
>>> repo = pywikibot.Site('wikidata:test')
>>> item = pywikibot.ItemPage(repo, 'Q68')
>>> item.editDescriptions({'en': 'Pywikibot test'})
- Parameters:
descriptions (LANGUAGE_TYPE)
- Return type:
None
- editEntity(data=None, **kwargs)[source]#
Edit an entity using Wikibase
wbeditentity
API.
- This function is wrapped around by:
It supports asynchronous and callback keyword arguments. The callback function is intended for use by bots that need to keep track of which saves were successful. The minimal callback function signature is:
def my_callback(page: WikibasePage, err: Optional[Exception]) -> Any:
The arguments are:
page
a
WikibasePage
object
err
an Exception instance, which will be None if the page was saved successfully
See also
- Parameters:
data (ENTITY_DATA_TYPE | None) – Data to be saved
kwargs (Any)
- Keyword Arguments:
asynchronous (bool) – if True, launch a separate thread to edit asynchronously
callback (Callable[[WikibasePage, Optional[Exception]], Any]) – a callable object that will be called after the entity has been updated. It must take two arguments, see above.
- Return type:
None
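The callback contract described above (a two-argument callable receiving the page and an optional exception) can be sketched as a plain function that records failed saves. To keep the sketch self-contained, the page parameter is loosened to Any and the invocations below simulate what the asynchronous save machinery would do:

```python
from __future__ import annotations

from typing import Any

failed_saves: list[tuple[Any, Exception]] = []

def my_callback(page: Any, err: Exception | None) -> None:
    """Minimal editEntity callback: remember pages whose save failed."""
    if err is not None:
        failed_saves.append((page, err))

# Simulated invocations, as the async save machinery would perform them:
my_callback('Q68-page', None)                  # successful save: ignored
my_callback('Q69-page', RuntimeError('boom'))  # failed save: recorded
print(len(failed_saves))  # prints: 1
```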
- editLabels(labels, **kwargs)[source]#
Edit entity labels.
labels should be a dict, with the key as a language or a site object. The value should be the string to set it to. You can set it to
''
to remove the label.
Refer to
editEntity()
for asynchronous and callback usage.
Usage:
>>> repo = pywikibot.Site('wikidata:test')
>>> item = pywikibot.ItemPage(repo, 'Q68')
>>> item.editLabels({'en': 'Test123'})
- Parameters:
labels (LANGUAGE_TYPE)
- Return type:
None
- get(force=False, *args, **kwargs)[source]#
Fetch all page data, and cache it.
- Parameters:
force (bool) – override caching
- Raises:
NotImplementedError – if a value is given in args or kwargs
- Returns:
actual data which entity holds
- Return type:
dict
Note
The dicts returned by this method are references to the content of this entity; modifying them may indirectly cause unwanted changes to the live content.
- property latest_revision_id: int#
Get the revision identifier for the most recent revision of the entity.
- Return type:
int
- Raises:
pywikibot.exceptions.NoPageError – if the entity doesn’t exist
- namespace()[source]#
Return the number of the namespace of the entity.
- Returns:
Namespace id
- Return type:
int
- page.html2unicode(text, ignore=None, exceptions=None)[source]#
Replace HTML entities with equivalent unicode.
- Parameters:
ignore (list of int) – HTML entities to ignore
text (str)
- Return type:
str
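The behaviour of html2unicode (minus the ignore/exceptions filtering) parallels the standard library's html.unescape; the sketch below uses that stdlib function to demonstrate the entity replacement, behind a hypothetical wrapper name:

```python
import html

def entities_to_unicode(text):
    """Replace named and numeric HTML entities with unicode characters,
    analogous to html2unicode() without the ignore/exceptions filtering."""
    return html.unescape(text)

print(entities_to_unicode('caf&eacute; &amp; &#8364;1'))  # prints: café & €1
```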
page._collections
Wikibase Entity Structures#
Structures holding data for Wikibase entities.
- class pywikibot.page._collections.AliasesDict(data=None)[source]#
Bases:
BaseDataDict
A structure holding aliases for a Wikibase entity.
It is a mapping from a language to a list of strings.
- classmethod normalizeData(data)[source]#
Helper function to expand data into the Wikibase API structure.
Changed in version 7.7: raises TypeError if data value is not a list.
- Parameters:
data (dict) – Data to normalize
- Returns:
The dict with normalized data
- Raises:
TypeError – data values must be a list
- Return type:
dict
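The expansion performed by normalizeData can be illustrated with the standard Wikibase API alias structure, in which each alias string becomes a {'language': ..., 'value': ...} record. This sketch assumes that structure and raises TypeError for non-list values, per the versioned note above; it is not the AliasesDict implementation:

```python
def normalize_aliases(data):
    """Expand {'en': ['a', 'b']} into the Wikibase API alias structure."""
    normalized = {}
    for lang, values in data.items():
        if not isinstance(values, list):
            raise TypeError(f'Alias value for {lang!r} must be a list')
        normalized[lang] = [{'language': lang, 'value': v} for v in values]
    return normalized

print(normalize_aliases({'en': ['pwb', 'pywikibot']}))
# {'en': [{'language': 'en', 'value': 'pwb'},
#         {'language': 'en', 'value': 'pywikibot'}]}
```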
- class pywikibot.page._collections.ClaimCollection(repo)[source]#
Bases:
MutableMapping
A structure holding claims for a Wikibase entity.
- classmethod normalizeData(data)[source]#
Helper function to expand data into the Wikibase API structure.
- Parameters:
data – Data to normalize
- Returns:
The dict with normalized data
- Return type:
dict
- class pywikibot.page._collections.LanguageDict(data=None)[source]#
Bases:
BaseDataDict
A structure holding language data for a Wikibase entity.
Language data are mappings from a language to a string. It can be labels, descriptions and others.
- class pywikibot.page._collections.SiteLinkCollection(repo, data=None)[source]#
Bases:
MutableMapping
A structure holding SiteLinks for a Wikibase item.
- Parameters:
repo (pywikibot.site.DataSite) – the Wikibase site on which badges are defined
- static getdbName(site)[source]#
Helper function to obtain a dbName for a Site.
- Parameters:
site (pywikibot.site.BaseSite or str) – The site to look up.
- class pywikibot.page._collections.SubEntityCollection(repo, data=None)[source]#
Bases:
MutableSequence
Ordered collection of sub-entities indexed by their ids.
- Parameters:
repo (pywikibot.site.DataSite) – Wikibase site
data (iterable) – iterable of LexemeSubEntity
page._decorators
— Page Decorators#
Decorators for Page objects.
- page._decorators.allow_asynchronous(func)[source]#
Decorator to make it possible to run a BasePage method asynchronously.
This is done when the method is called with kwarg
asynchronous=True
. Optionally, you can also provide the kwarg callback: a callable that receives the page as its first argument and, as its second argument, an exception that occurred during saving in the second thread, or None if saving succeeded.
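A decorator with the behaviour described above can be sketched with threading: run the wrapped method inline by default, or in a new thread when asynchronous=True, passing the page and any exception to an optional callback. This is a simplified illustration, not the pywikibot implementation:

```python
import functools
import threading

def allow_asynchronous(func):
    """Run func(page, ...) inline, or in a thread if asynchronous=True.

    An optional callback(page, exc_or_None) is invoked after the call.
    """
    @functools.wraps(func)
    def wrapper(page, *args, asynchronous=False, callback=None, **kwargs):
        def do_call():
            err = None
            try:
                func(page, *args, **kwargs)
            except Exception as exc:  # hand the failure to the callback
                err = exc
            if callback is not None:
                callback(page, err)
        if asynchronous:
            thread = threading.Thread(target=do_call)
            thread.start()
            return thread
        do_call()
        return None
    return wrapper

@allow_asynchronous
def save(page, text):
    print(f'saving {page}: {text}')

results = []
t = save('User:Example', 'hello', asynchronous=True,
         callback=lambda page, err: results.append((page, err)))
t.join()
print(results)  # prints: [('User:Example', None)]
```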
page._revision
— Page Revision#
Object representing page revision.
- class page._revision.Revision(**kwargs)[source]#
Bases:
Mapping
A structure holding information about a single revision of a Page.
Each data item can be accessed either by its key or as an attribute with the attribute name equal to the key e.g.:
>>> r = Revision(comment='Sample for Revision access')
>>> r.comment == r['comment']
True
>>> r.comment
'Sample for Revision access'
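The dual key/attribute access can be sketched with a minimal Mapping subclass (an illustration of the pattern, not Revision's actual code):

```python
from collections.abc import Mapping


class AttrMapping(Mapping):
    """Minimal mapping whose items are also readable as attributes,
    mirroring the access pattern Revision provides."""

    def __init__(self, **kwargs):
        self._data = kwargs

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

    def __getattr__(self, name):
        # Fall back to item lookup for unknown attributes.
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name) from None
```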
See also
page._toolforge
module#
Object representing interface to toolforge tools.
Added in version 7.7.
- class page._toolforge.WikiBlameMixin[source]#
Bases:
object
Page mixin for main authorship.
Added in version 7.7.
- WIKIBLAME_CODES = ('ar', 'de', 'en', 'es', 'eu', 'fr', 'hu', 'id', 'it', 'ja', 'nl', 'pl', 'pt', 'tr')#
Supported wikipedia site codes
- authorship(n=None, *, min_chars=0, min_pct=0.0, max_pct_sum=None, revid=None, date=None)[source]#
Retrieve authorship attribution of an article.
This method uses XTools/Authorship to retrieve the authors measured by character count.
Sample:
>>> import pywikibot
>>> site = pywikibot.Site('wikipedia:en')
>>> page = pywikibot.Page(site, 'Pywikibot')
>>> auth = page.authorship()
>>> auth
{'1234qwer1234qwer4': (68, 100.0)}
Important
Only implemented for main namespace pages, and only wikipedias of WIKIBLAME_CODES are supported.
See also
Added in version 9.3: this method replaces main_authors().
- Parameters:
n (int | None) – Only return the first n or fewer authors.
min_chars (int) – Only return authors with more than min_chars changed characters.
min_pct (float) – Only return authors with more than min_pct percent of edits.
max_pct_sum (float | None) – Only return authors until the percentage sum reaches max_pct_sum.
revid (int | None) – The revision id up to which authors should be found. If None or 0, the latest revision is used. Cannot be used together with date.
date (DATETYPE) – The revision date up to which authors should be found. If None, it is ignored. Cannot be used together with revid. If given as a string, it must be in the form YYYY-MM-DD.
- Returns:
Character count and percentage of edits for each username.
- Raises:
ImportError – missing wikitextparser module.
NotImplementedError – unsupported site or unsupported namespace.
Error – Error response from xtools.
requests.exceptions.HTTPError – 429 Client Error: Too Many Requests for url; login to meta family first.
NoPageError – The page does not exist.
- Return type:
dict[str, tuple[int, float]]
- main_authors(onlynew=NotImplemented)[source]#
Retrieve the 5 topmost main authors of an article.
Sample:
>>> import pywikibot
>>> site = pywikibot.Site('wikipedia:eu')
>>> page = pywikibot.Page(site, 'Python (informatika)')
>>> auth = page.main_authors()
>>> auth.most_common(1)
[('Ksarasola', 82)]
Important
Only implemented for main namespace pages, and only wikipedias of WIKIBLAME_CODES are supported.
See also
Changed in version 9.2: do not use any wait cycles due to 366100.
Changed in version 9.3: https://xtools.wmcloud.org/authorship/ is used to retrieve authors.
Deprecated since version 9.3: use authorship() instead.
- Returns:
Percentage of edits for each username
- Raises:
ImportError – missing wikitextparser module.
NotImplementedError – unsupported site or unsupported namespace.
Error – Error response from xtools.
NoPageError – The page does not exist.
requests.exceptions.HTTPError – 429 Client Error: Too Many Requests for url; login to meta family first.