MediaWiki API (part 2)¶
This notebook:
- uses the query & parse actions of the MediaWiki API, which we can use to work with wiki pages as (versioned and hypertextual) technotexts
Epicpedia¶
Reference: Epicpedia (2008), Annemieke van der Hoek (from: https://diversions.constantvzw.org/wiki/index.php?title=Eventual_Consistency#Towards_diffractive_technotexts)
In Epicpedia (2008), Annemieke van der Hoek creates a work that makes use of the underlying history that lies beneath the surface of each Wikipedia article.[20] Inspired by the work of Bertolt Brecht and the notion of Epic Theater, Epicpedia presents Wikipedia articles as screenplays, where each edit becomes an utterance performed by a cast of characters (both major and minor) over a span of time, typically many years. The work uses the Wikipedia API to retrieve, for a given article, the sequence of revisions, their corresponding user handles, the summary messages (which allow editors to describe the nature of their edits), and the timestamps, and then produces a differential reading.
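To make the idea concrete, here is a minimal sketch of how revision metadata could be rendered as screenplay-style utterances. The dict keys (user, timestamp, comment) match what the MediaWiki revisions prop returns; the formatting itself is an illustrative choice, not Epicpedia's actual output, and the sample revision record is hand-made.

```python
# Sketch: render MediaWiki revision metadata as a screenplay-style "utterance",
# in the spirit of Epicpedia. The keys (user, timestamp, comment) follow
# action=query&prop=revisions&rvprop=user|timestamp|comment.

def format_utterance(revision):
    user = revision.get('user', 'Anonymous')
    timestamp = revision.get('timestamp', '')
    comment = revision.get('comment', '') or '(no edit summary)'
    return f'{user.upper()} ({timestamp}):\n    {comment}'

# Example with a hand-made revision record:
sample = {'user': 'Editor1', 'timestamp': '2008-05-01T12:00:00Z', 'comment': 'fixed a typo'}
print(format_utterance(sample))
```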
import urllib.request
import json
from IPython.display import JSON  # IPython JSON renderer
Query & Parse¶
We will work again with the Dérive page on the wiki: https://pzwiki.wdka.nl/mediadesign/D%C3%A9rive (I moved it here to make the URL a bit simpler)
And use the API help page on the PZI wiki as our main reference: https://pzwiki.wdka.nl/mw-mediadesign/api.php
# query the wiki page Dérive
request = 'https://pzwiki.wdka.nl/mw-mediadesign/api.php?action=query&titles=D%C3%A9rive&format=json'
response = urllib.request.urlopen(request).read()
data = json.loads(response)
JSON(data)
# parse the wiki page Dérive
request = 'https://pzwiki.wdka.nl/mw-mediadesign/api.php?action=parse&page=D%C3%A9rive&format=json'
response = urllib.request.urlopen(request).read()
data = json.loads(response)
JSON(data)
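Instead of hand-encoding the accented title as D%C3%A9rive, we can let Python build the query string for us. A minimal sketch using urllib.parse.urlencode (the endpoint and parameters are the same ones used above):

```python
import urllib.parse

# Sketch: build the API URL with urlencode, so "Dérive" is
# percent-encoded automatically instead of written out by hand.
endpoint = 'https://pzwiki.wdka.nl/mw-mediadesign/api.php'
params = {'action': 'parse', 'page': 'Dérive', 'format': 'json'}
request = endpoint + '?' + urllib.parse.urlencode(params)
print(request)
```

This produces the same URL as before, with the title safely percent-encoded.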
Links, contributors, edit history¶
We can ask the API for different kinds of material/information about the page.
Such as:
- a list of wiki links
- a list of external links
- a list of images
- a list of edits
- a list of contributors
- page information
- reverse links (What links here?)
- ...
We can use the query action again, to ask for these things:
# wiki links: prop=links
request = 'https://pzwiki.wdka.nl/mw-mediadesign/api.php?action=query&prop=links&titles=D%C3%A9rive&format=json'
response = urllib.request.urlopen(request).read()
data = json.loads(response)
JSON(data)
# external links: prop=extlinks
request = 'https://pzwiki.wdka.nl/mw-mediadesign/api.php?action=query&prop=extlinks&titles=D%C3%A9rive&format=json'
response = urllib.request.urlopen(request).read()
data = json.loads(response)
JSON(data)
# images: prop=images
request = 'https://pzwiki.wdka.nl/mw-mediadesign/api.php?action=query&prop=images&titles=D%C3%A9rive&format=json'
response = urllib.request.urlopen(request).read()
data = json.loads(response)
JSON(data)
# edit history: prop=revisions
request = 'https://pzwiki.wdka.nl/mw-mediadesign/api.php?action=query&prop=revisions&titles=D%C3%A9rive&format=json'
response = urllib.request.urlopen(request).read()
data = json.loads(response)
JSON(data)
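By default prop=revisions only returns the most recent revision; adding rvlimit and rvprop=user|timestamp|comment to the query fetches more of the history with the fields Epicpedia uses. Either way, the revisions sit inside a nested query → pages → pageid structure. A minimal sketch of digging them out (the sample response dict here is hand-made, not fetched from the wiki):

```python
# Sketch: pull the revision list out of the nested response structure
# returned by action=query&prop=revisions.

def get_revisions(data):
    pages = data['query']['pages']
    page = next(iter(pages.values()))  # single-title query: one page entry
    return page.get('revisions', [])

# Hand-made sample response, shaped like the API's JSON:
sample_response = {
    'query': {
        'pages': {
            '12345': {
                'pageid': 12345,
                'title': 'Dérive',
                'revisions': [
                    {'user': 'Editor1', 'timestamp': '2021-01-01T00:00:00Z', 'comment': 'start'},
                ],
            }
        }
    }
}

for rev in get_revisions(sample_response):
    print(rev['timestamp'], rev['user'], rev['comment'])
```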
# contributors: prop=contributors
request = 'https://pzwiki.wdka.nl/mw-mediadesign/api.php?action=query&prop=contributors&titles=D%C3%A9rive&format=json'
response = urllib.request.urlopen(request).read()
data = json.loads(response)
JSON(data)
# page information: prop=info
request = 'https://pzwiki.wdka.nl/mw-mediadesign/api.php?action=query&prop=info&titles=D%C3%A9rive&format=json'
response = urllib.request.urlopen(request).read()
data = json.loads(response)
JSON(data)
# reverse links (What links here?): prop=linkshere + lhlimit=100 (max. nr of results)
# (note: this example queries the Prototyping page)
request = 'https://pzwiki.wdka.nl/mw-mediadesign/api.php?action=query&prop=linkshere&lhlimit=100&titles=Prototyping&format=json'
response = urllib.request.urlopen(request).read()
data = json.loads(response)
JSON(data)
Use the data responses in Python (and save data in variables)¶
# For example with the action=parse request
request = 'https://pzwiki.wdka.nl/mw-mediadesign/api.php?action=parse&page=D%C3%A9rive&format=json'
response = urllib.request.urlopen(request).read()
data = json.loads(response)
JSON(data)
text = data['parse']['text']['*']
print(text)

title = data['parse']['title']
print(title)

images = data['parse']['images']
print(images)
Use these variables to generate HTML pages¶
# open an HTML file to write to
output = open('myfilename.html', 'w')
# write to this HTML file you just opened
output.write(text)
# (the number that appears as cell output, e.g. 2813, is write()'s return value: the number of characters written)
# close the file again (so the buffered contents are actually written to disk)
output.close()
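The parse response gives us a fragment of HTML, not a full page. A minimal sketch of wrapping it in an HTML shell before saving; the stand-in values, the filename, and the page skeleton are illustrative choices (with open(...) closes the file automatically, so no separate output.close() is needed):

```python
# Sketch: wrap the parsed wiki text in a minimal HTML shell and save it.
text = '<p>Hello wiki</p>'   # stands in for data['parse']['text']['*']
title = 'Dérive'             # stands in for data['parse']['title']

html = f'''<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>{title}</title></head>
<body>
{text}
</body>
</html>'''

# "with" closes (and flushes) the file automatically
with open('myfilename.html', 'w', encoding='utf-8') as output:
    output.write(html)
```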
Use these variables to generate HTML pages (using the template language Jinja)¶
Jinja (template language): https://jinja.palletsprojects.com/
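A minimal sketch of rendering the variables from the parse response with a Jinja template. The template string and the sample values are illustrative; this requires the jinja2 package (pip install jinja2), and in a real page you would pass in the text, title, and images variables saved above.

```python
# Sketch: render page variables into HTML with a Jinja template.
from jinja2 import Template

template = Template('''<h1>{{ title }}</h1>
<ul>
{% for image in images %}  <li>{{ image }}</li>
{% endfor %}</ul>''')

html = template.render(title='Dérive', images=['Map.jpg', 'Walk.png'])
print(html)
```

Keeping the template separate from the data means the same layout can be reused for every page you fetch from the wiki.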