commit 7563356d1392a5d1eb7edc0e0bae5a61a4912904
Author: Michael Murtaugh
Date:   Sat Mar 11 10:29:21 2017 +0100

    new site

diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..7307eca
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,13 @@
+*.pyc
+*~
+drop/
+drop.json
+tiles/
+lib/
+venv/
+fonts/
+xpub.node.json
+drop.node.json
+archive.json
+about.json
+index.json
diff --git a/about.txt b/about.txt
new file mode 100644
index 0000000..984fd32
--- /dev/null
+++ b/about.txt
@@ -0,0 +1,11 @@
+The new study path Experimental Publishing is the merging of two stories: a singular one, and a slightly more general one.
+ The singular story is that of the Media Design and Communication Master, which for a decade has established a critical approach to the ambiguous notion of media.
+ In past years we have welcomed a wide range of practitioners from the cultural field (visual and digital artists, graphic designers, musicians, performance artists, architects, fine press book makers, and computer programmers) to help them develop a practice that explores the social, technical, cultural and political dimensions of their work, and ultimately, as we have become accustomed to saying, to encourage them to design their own media.
+

Such an approach has resulted in a rich variety of projects and writings: browser plugins that connect Amazon purchase buttons to Pirate Bay torrent links, chat systems and audio feedback loops built on networks of tape reels, autonomous phone- and computer-based voicemail networks, theatre scripts based on Wikipedia page histories, peer-to-peer workflows for graphic designers, generative artists' books and concrete poetry EPUBs, secret social networks and file sharing hidden in the trash can of your computer desktop, wikis for publishing precarious materials, weblogs of emerging forms of online artistic publishing, and many other amazing things.

+ The common point of these projects is that they all look at particular issues, tensions, and conflicts relevant to their authors' fields of practice, and communicate concerns that matter to a much broader public. Why? Because they all open a conversation about the cultural diversity, the systems, and the networks of humans and machines that constitute our society.
+ This aspect of communicating, sharing, informing, and thinking about how things are made public and circulate in public space is what links us today to this other, more general story: that of publishing. Originally rooted in print media, the notion of publishing has in recent decades been both culturally diffused and appropriated well beyond its original domain. This does not mean that publishing has lost its sharpness; in fact, this Cambrian explosion of new publishing potentials has demonstrated how central publishing has become to a diversity of networked practices.
+ From app stores to art book fairs and zine shops, from darknets to sneakernets, from fansubs to on-demand services, and from tweeting to whistleblowing, the act of making things public, that is to say publishing, has become pivotal in an age infused with myriad media technologies.
+ What is more, the tension between the publishing heritage and novel forms of producing and sharing information has shown that old dichotomies, such as analog versus digital or local versus global, have grown increasingly irrelevant, given their bond with hybrid media practices based on both old and new technologies, and their existence within mixed human and machine networks.
+ In sum, by experimental publishing we mean to engage with a broad set of intermingled and collaborative practices, both inherited and still to be invented, so as to critically explore and actively engage with an ecosystem in which multi-layered interactions occur that are:
+ ... social, technical, cultural and political;
+ involving actors both human and algorithmic;
+ and mediated by networks of distribution and communication of varying scales and visibility.
+ For this journey, we seek students motivated to challenge the protocols of publishing (in all its (im)possible forms) using play, fiction, and ambiguity as methods and strategies of production and presentation, in order to experiment on the threshold of what is possible, desirable, allowed, or disruptive in this ever-expanding field.
\ No newline at end of file
diff --git a/index.html b/index.html
new file mode 100644
index 0000000..4bde282
--- /dev/null
+++ b/index.html
@@ -0,0 +1,436 @@
+ + + + + + + + + + +
+
+
+
+
+ + + + + diff --git a/makefile b/makefile new file mode 100644 index 0000000..ed495ae --- /dev/null +++ b/makefile @@ -0,0 +1,27 @@ +all: index.json + +archive.json: + python scripts/mediawiki.py gallery --name archive --recursive \ + https://pzwiki.wdka.nl/mediadesign/Category:2016 \ + https://pzwiki.wdka.nl/mediadesign/Category:2015 \ + https://pzwiki.wdka.nl/mediadesign/Category:2014 \ + https://pzwiki.wdka.nl/mediadesign/Category:2013 \ + https://pzwiki.wdka.nl/mediadesign/Category:2012 \ + https://pzwiki.wdka.nl/mediadesign/Category:2011 \ + https://pzwiki.wdka.nl/mediadesign/Category:2010 \ + https://pzwiki.wdka.nl/mediadesign/Category:2009 \ + https://pzwiki.wdka.nl/mediadesign/Category:2008 \ + https://pzwiki.wdka.nl/mediadesign/Category:2007 \ + https://pzwiki.wdka.nl/mediadesign/Category:2006 \ + https://pzwiki.wdka.nl/mediadesign/Category:2005 \ + https://pzwiki.wdka.nl/mediadesign/Category:2004 > archive.json + +drop.node.json: drop.json + cat drop.json | python scripts/leaflet.py gallery --recursive --direction 2 > drop.node.json + +about.json: + python scripts/texthierarchy.py < about.txt > about.json + +index.json: archive.json about.json drop.node.json + python scripts/includenodes.py xpub.top.json > index.json + diff --git a/scripts/html5tidy b/scripts/html5tidy new file mode 100755 index 0000000..aa5034c --- /dev/null +++ b/scripts/html5tidy @@ -0,0 +1,166 @@ +#!/usr/bin/python +from __future__ import print_function +from html5lib import parse +import os, sys +from argparse import ArgumentParser +from xml.etree import ElementTree as ET + + +def etree_indent(elem, level=0): + i = "\n" + level*" " + if len(elem): + if not elem.text or not elem.text.strip(): + elem.text = i + " " + if not elem.tail or not elem.tail.strip(): + elem.tail = i + for elem in elem: + etree_indent(elem, level+1) + if not elem.tail or not elem.tail.strip(): + elem.tail = i + else: + if level and (not elem.tail or not elem.tail.strip()): + elem.tail = i + +def get_link_type (url): + lurl = url.lower() + if lurl.endswith(".html") or lurl.endswith(".htm"): + return "text/html" + elif lurl.endswith(".txt"): + return "text/plain" + elif lurl.endswith(".rss"): + return "application/rss+xml" + elif lurl.endswith(".atom"): + return "application/atom+xml" + elif lurl.endswith(".json"): + return "application/json" + elif lurl.endswith(".js") or lurl.endswith(".jsonp"): + return "text/javascript" + +def pluralize (x): + if type(x) == list or type(x) == tuple: + return x + else: + return (x,) + +def html5tidy (doc, charset="utf-8", title=None, scripts=None, links=None, indent=False): + if scripts: + script_srcs = [x.attrib.get("src") for x in doc.findall(".//script")] + for src in pluralize(scripts): + if src not in script_srcs: + script = ET.SubElement(doc.find(".//head"), "script", src=src) + script_srcs.append(src) + + if links: + existinglinks = {} + for elt in doc.findall(".//link"): + href = elt.attrib.get("href") + if href: + existinglinks[href] = elt + for link in links: + linktype = link.get("type") or get_link_type(link["href"]) + if link["href"] in existinglinks: + elt = existinglinks[link["href"]] + elt.attrib["rel"] = link["rel"] + else: + elt = ET.SubElement(doc.find(".//head"), "link", href=link["href"], rel=link["rel"]) + if linktype: + elt.attrib["type"] = linktype + if "title" in link: + elt.attrib["title"] = link["title"] + + if charset: + meta_charsets = [x.attrib.get("charset") for x in doc.findall(".//meta") if x.attrib.get("charset") != None] + if not meta_charsets: + meta = 
ET.SubElement(doc.find(".//head"), "meta", charset=charset)
+
+    if title != None:
+        titleelt = doc.find(".//title")
+        if titleelt == None:
+            titleelt = ET.SubElement(doc.find(".//head"), "title")
+        titleelt.text = title
+
+    if indent:
+        etree_indent(doc)
+    return doc
+
+
+if __name__ == "__main__":
+    p = ArgumentParser("")
+    p.add_argument("input", nargs="?", default=None)
+    p.add_argument("--indent", default=False, action="store_true")
+    p.add_argument("--mogrify", default=False, action="store_true", help="modify file in place")
+    p.add_argument("--method", default="html", help="method, default: html, values: html, xml, text")
+    p.add_argument("--output", default=None, help="")
+    p.add_argument("--title", default=None, help="ensure/add title tag in head")
+    p.add_argument("--charset", default="utf-8", help="ensure/add meta tag with charset")
+    p.add_argument("--script", action="append", default=[], help="ensure/add script tag")
+    # links, see https://www.w3.org/TR/html5/links.html#links
+    p.add_argument("--stylesheet", action="append", default=[], help="ensure/add style link")
+    p.add_argument("--alternate", action="append", default=[], nargs="+", help="ensure/add alternate links (optionally followed by a title and type)")
+    p.add_argument("--next", action="append", default=[], nargs="+", help="ensure/add alternate link")
+    p.add_argument("--prev", action="append", default=[], nargs="+", help="ensure/add alternate link")
+    p.add_argument("--search", action="append", default=[], nargs="+", help="ensure/add search link")
+    p.add_argument("--rss", action="append", default=[], nargs="+", help="ensure/add alternate link of type application/rss+xml")
+    p.add_argument("--atom", action="append", default=[], nargs="+", help="ensure/add alternate link of type application/atom+xml")
+
+    args = p.parse_args()
+    links = []
+    def add_links (links, items, rel, _type=None):
+        for href in items:
+            d = {}
+            d["rel"] = rel
+            if _type:
+                d["type"] = _type
+
+            if type(href) == list:
+                if len(href) == 1:
+                    d["href"] = href[0]
+                elif len(href) == 2:
+                    d["href"] = href[0]
+                    d["title"] = href[1]
+                elif len(href) == 3:
+                    d["href"] = href[0]
+                    d["title"] = href[1]
+                    d["type"] = href[2]
+                else:
+                    continue
+            else:
+                d["href"] = href
+
+            links.append(d)
+    for rel in ("stylesheet", "alternate", "next", "prev", "search"):
+        add_links(links, getattr(args, rel), rel)
+    for item in args.rss:
+        add_links(links, item, rel="alternate", _type="application/rss+xml")
+    for item in args.atom:
+        add_links(links, item, rel="alternate", _type="application/atom+xml")
+
+    # INPUT
+    if args.input:
+        fin = open(args.input)
+    else:
+        fin = sys.stdin
+
+    doc = parse(fin, namespaceHTMLElements=False)
+    if fin != sys.stdin:
+        fin.close()
+    html5tidy(doc, scripts=args.script, links=links, title=args.title, indent=args.indent)
+
+    # OUTPUT
+    tmppath = None
+    if args.output:
+        fout = open(args.output, "w")
+    elif args.mogrify:
+        tmppath = args.input+".tmp"
+        fout = open(tmppath, "w")
+    else:
+        fout = sys.stdout
+
+    print (ET.tostring(doc, method=args.method), file=fout)
+
+    if fout != sys.stdout:
+        fout.close()
+
+    if tmppath:
+        os.rename(args.input, args.input+"~")
+        os.rename(tmppath, args.input)
diff --git a/scripts/imagetile2.py b/scripts/imagetile2.py
new file mode 100755
index 0000000..b9df4f7
--- /dev/null
+++ b/scripts/imagetile2.py
@@ -0,0 +1,63 @@
+#!/usr/bin/env python
+
+from PIL import Image
+import re
+
+def fitbox (boxw, boxh, w, h):
+    rw = boxw
+    rh = int(rw * (float(h) / w))
+    if (rh >= boxh):
+        rh = boxh
+        rw = int(rh * (float(w) / h))
+    return rw, rh
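+
+# Sanity check of fitbox's aspect-fit arithmetic (illustrative values, not
+# part of the original file): a landscape image is bound by the box width,
+# a portrait one by the box height.
+#   fitbox(256, 256, 1024, 768) -> (256, 192)
+#   fitbox(256, 256, 768, 1024) -> (192, 256)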
+def tile_image (im, maxz=0, tilew=256, tileh=256, base=".", template="z{0[z]}y{0[y]}x{0[x]}.jpg", bgcolor=(0,0,0)):
+    z = 0
+    boxw, boxh = tilew, tileh
+
+    alpha = bgcolor != None # not template.endswith("jpg")
+
+    while True:
+        rw, rh = fitbox(boxw, boxh, im.size[0], im.size[1])
+        rim = im.resize((rw, rh), Image.ANTIALIAS)
+        if bgcolor:
+            tim = Image.new("RGB", (boxw, boxh), bgcolor)
+            tim.paste(rim, (0, 0))
+        else:
+            tim = Image.new("RGBA", (boxw, boxh))
+            tim.paste(rim, (0, 0))
+
+        rows, cols = 2**z, 2**z
+        for r in range(rows):
+            for c in range(cols):
+                ix = c*tilew
+                iy = r*tileh
+                cim = tim.crop((ix, iy, ix+tilew, iy+tileh))
+                op = base + template.format({'z':z, 'x':c, 'y':r})
+                # if not alpha:
+                #     cim = cim.convert("RGB")
+                cim.save(op)
+
+        z += 1
+        if z>maxz:
+            break
+        boxw *= 2
+        boxh *= 2
+
+if __name__ == "__main__":
+    from argparse import ArgumentParser
+    p = ArgumentParser("tile an image")
+    p.add_argument("--tilewidth", type=int, default=256, help="default: 256")
+    p.add_argument("--tileheight", type=int, default=256, help="default: 256")
+    p.add_argument("input")
+    p.add_argument("--output", default="./tile", help="output path, default: ./tile")
+    p.add_argument("--tilename", default="Z{z}Y{y}X{x}.jpg", help="template for tiles, default: Z{z}Y{y}X{x}.jpg")
+    p.add_argument("--background", default="0,0,0", help="background color, default: 0,0,0")
+    p.add_argument("--zoom", type=int, default=0, help="default 0")
+    args = p.parse_args()
+    im = Image.open(args.input)
+    tilename = re.sub(r"\{(.+?)\}", r"{0[\1]}", args.tilename)
+    background = tuple([int(x) for x in args.background.split(",")])
+    tile_image (im, args.zoom, args.tilewidth, args.tileheight, args.output, tilename, background)
diff --git a/scripts/includenodes.py b/scripts/includenodes.py
new file mode 100644
index 0000000..2486c40
--- /dev/null
+++ b/scripts/includenodes.py
@@ -0,0 +1,28 @@
+from __future__ import print_function
+from argparse import ArgumentParser
+
+ap = ArgumentParser("")
+ap.add_argument("input")
+args = ap.parse_args()
+
+import json
+
+with open(args.input) as f:
+    node = json.load(f)
+
+def expand (node):
+    if node == None:
+        return node
+    retnode = node
+    if "@include" in node:
+        with open(node['@include']) as f:
+            retnode = json.load(f)
+        if "text" in node:
+            retnode['text'] = node['text']
+    if "children" in retnode:
+        retnode['children'] = [expand(c) for c in retnode['children']]
+
+    return retnode
+
+print (json.dumps(expand(node), indent=2))
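+
+# Example (mirrors the makefile's index.json rule): given a node like
+#   { "text": "", "children": [ { "@include": "about.json" } ] }
+# `python scripts/includenodes.py xpub.top.json` substitutes each
+# {"@include": ...} child with the parsed contents of the referenced file;
+# a sibling "text" key, when present, overrides the included node's text.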
diff --git a/scripts/leaflet.py b/scripts/leaflet.py
new file mode 100755
index 0000000..77b92dd
--- /dev/null
+++ b/scripts/leaflet.py
@@ -0,0 +1,711 @@
+#!/usr/bin/env python
+
+
+from __future__ import print_function, division
+from argparse import ArgumentParser
+from imagetile2 import tile_image
+from PIL import Image
+import os, json, sys, re, datetime, urlparse
+from math import ceil, log
+
+
+"""
+Maybe a better name for this script is tiling or tiler, as it's not particularly leaflet-specific.
+"""
+
+def tiles_path_for (n):
+    return n + ".tiles"
+
+def autolink (text):
+    def sub (m):
+        return u'<a href="{0}">LINK</a>'.format(m.group(0))
+    return re.sub(r"https?://[\S]+", sub, text, flags=re.I)
+
+def parse8601 (t, fmt=None):
+    """ simple 8601 parser that doesn't care about more than YMDHMS """
+    # 2016-11-16T14:13:40.379857
+    m = re.search(r"(?P<year>\d\d\d\d)-(?P<month>\d\d)-(?P<day>\d\d)T(?P<hour>\d\d):(?P<minute>\d\d):(?P<second>\d\d)", t)
+    if m:
+        d = m.groupdict()
+        ret = datetime.datetime(int(d['year']), int(d['month']), int(d['day']), int(d['hour']), int(d['minute']), int(d['second']))
+        if fmt:
+            return ret.strftime(fmt)
+        else:
+            return ret
+
+class tiles_wrapper (object):
+    """ Image wrapper abstraction... include URL to original + caption """
+    def __init__(self, path, url=None, text=None, tilename="z{0[z]}y{0[y]}x{0[x]}.png"):
+        self.path = path
+        # self.item = item
+        self.url = url
+        self.text = text
+        self.tilename = tilename
+
+    def get_tile_path (self, z, y, x):
+        return os.path.join(self.path, self.tilename.format({'z':z,'y':y,'x':x}))
+
+    def zoom (self):
+        """ return serialized version of self """
+        node = {}
+        node['zoomable'] = True
+        if self.text:
+            node['text'] = self.text
+        else:
+            # autotext is a link to the url showing the basename
+            _, basename = os.path.split(self.url)
+            node['text'] = u'<a href="{0}">{1}</a>'.format(self.url, basename)
+        node['url'] = self.url
+        node['image'] = self.get_tile_path(0, 0, 0)
+        return node
+
+    def zoom_recursive (self, caption, x=0, y=0, z=0, maxzoom=3):
+        """ old style zoom in place -- ie render self to child nodes """
+        node = {}
+        node['text'] = self.text
+        node['image'] = self.get_tile_path(z, y, x)
+        if z < maxzoom:
+            kids = []
+            for r in range(2):
+                for c in range(2):
+                    kids.append(self.zoom_recursive(caption, (x*2)+c, (y*2)+r, z+1, maxzoom))
+            node['children'] = kids
+        return node
+
+def cell_layout(items, w=2):
+    i = 0
+    for r in range(w):
+        for c in range(w):
+            if i < len(items):
+                yield items[i], c, r
+            i += 1
+
+def fourup (images, tilewidth=256, tileheight=256):
+    # NB: the bodies of fourup and gridrender were lost in extraction; both
+    # are minimal reconstructions based on their call sites: paste up to four
+    # child tiles into a 2x2 grid and scale the result back down to one tile.
+    im = Image.new("RGB", (tilewidth*2, tileheight*2))
+    for src, c, r in cell_layout(images):
+        im.paste(Image.open(src), (c*tilewidth, r*tileheight))
+    return im.resize((tilewidth, tileheight), Image.ANTIALIAS)
+
+def gridrender (items, basename, tilewidth=256, tileheight=256, direction=None, z=0):
+    # Minimal reconstruction (see note on fourup above): lay nodes out four
+    # to a cell, recursing on the remainder; direction is accepted for
+    # call-site compatibility.
+    node = {'text': ''}
+    node['children'] = children = [x for x in items[:3]]
+    rest = items[3:]
+    if len(rest) == 1:
+        children.append(rest[0])
+    elif rest:
+        children.append(gridrender(rest, basename, tilewidth, tileheight, direction, z+1))
+    imgs = [x.get("image") for x in children if x != None and x.get("image")]
+    if imgs:
+        newim = fourup(imgs, tilewidth, tileheight)
+        newimpath = u"{0}.z{1}.png".format(basename, z)
+        newim.save(newimpath)
+        node['image'] = newimpath
+    return node
+
+def recursiverender (items, basename, tilewidth=256, tileheight=256, direction=3, z=0):
+    # (signature reconstructed from the call sites here and in mediawiki.py)
+    node = {}
+    # if len(items) >= 1 and 'date' in items[0].item:
+    #     node['text'] = items[0].item['date']
+    # else:
+    #     node['text'] = ''
+    # node['image'] = ''
+    node['children'] = cc = [None, None, None, None]
+    ai = 0
+    for x in items[:3]:
+        # cap = os.path.splitext(os.path.basename(x.path))[0]
+        # cc.append(x) # x.zoom()
+        if (ai == direction):
+            ai += 1
+        cc[ai] = x
+        ai += 1
+
+    rest = items[3:]
+    if rest:
+        # recurse
+        # cc.append(recursiverender(rest, basename, tilewidth, tileheight, z+1))
+        cc[direction] = recursiverender(rest, basename, tilewidth, tileheight, direction, z+1)
+
+    newim = fourup([x.get("image") for x in node['children'] if x != None and x.get("image")], tilewidth, tileheight)
+    # simplified name works just because there's only one generated tile per level
+    newimpath = u"{0}.z{1}.png".format(basename, z)
+    newim.save(newimpath)
+    node['image'] = newimpath
+
+    return node
+
+def layoutxyz (n, x=0, y=0, z=0, outnode=None):
+    # print ("layout", n, x, y, z, file=sys.stderr)
+    if outnode is None: # avoid a shared mutable default argument
+        outnode = {}
+    outnode["{0},{1},{2}".format(x,y,z)] = {
+        "text": n['text'],
+        "image": n['image']
+    }
+    if 'children' in n:
+        for child, cx, cy in cell_layout(n['children']):
+            layoutxyz(child, (x*2)+cx, (y*2)+cy, z+1, outnode)
+    return outnode
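+
+# Illustrative addressing (not from the original source): layoutxyz flattens
+# a node tree into tile coordinates keyed "x,y,z"; a root with four children
+# yields keys "0,0,0" (root), then "0,0,1", "1,0,1", "0,1,1" and "1,1,1".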
+def html (node, title):
+    # NB: the original markup of this template (including how `node` was
+    # embedded) was lost in extraction; this is a minimal reconstruction.
+    page = u"""<!DOCTYPE html>
+<html>
+<head>
+<meta charset="utf-8" />
+<title>""" + title + u"""</title>
+<link rel="stylesheet" href="lib/leaflet/leaflet.css" />
+<link rel="stylesheet" href="styles.css" />
+<script src="lib/leaflet/leaflet.js"></script>
+</head>
+<body>
+<div id="map"></div>
+<script>
+var rootnode = """ + json.dumps(node) + u""";
+</script>
+</body>
+</html>
+"""
+    return page
+
+def make_gallery(args):
+    """
+    to do -- separate the actual tiling process...
+    make tiling a separate pass ON THE ACTUAL NODE JSON
+
+    NB: this command accepts two different kinds of input.
+    1. One or more images as (argv) arguments -or-
+    2. A JSON stream (one object per line) on stdin.
+    """
+
+    bgcolor = None # (0, 0, 0)
+
+    items = []
+    if args.input:
+        for x in args.input:
+            i = {'url': x}
+            items.append(i)
+    else:
+        for line in sys.stdin:
+            line = line.rstrip()
+            if line and not line.startswith("#"):
+                item = json.loads(line)
+                items.append(item)
+
+    # Ensure / Generate tiles per image
+    items.sort(key=lambda x: x['url'])
+    tiles = []
+    for item in items:
+        n = item['url']
+        # print (n, file=sys.stderr)
+        path = os.path.join(args.tilespath, n)
+        # TODO date format...
+        caption = ''
+        if 'text' in item or 'date' in item:
+            caption += u'<p class="caption">'
+        if 'text' in item:
+            caption += u'<span class="text">{0}</span>'.format(autolink(item['text']))
+        if 'date' in item:
+            dt = parse8601(item['date'], "%d %b %Y")
+            caption += u'<span class="date">{0}</span>'.format(dt)
+        if 'url' in item:
+            ext = os.path.splitext(urlparse.urlparse(item['url']).path)[1]
+            if ext:
+                ext = ext[1:].upper()
+            caption += u'<a class="url" href="{0}">{1}</a>'.format(item['url'], ext)
+        if 'text' in item or 'date' in item:
+            caption += u'</p>'
+
+        t = tiles_wrapper(path, item['url'], text=caption)
+        tiles.append(t)
+        tile0 = t.get_tile_path(0, 0, 0) # os.path.join(path, args.tilename.format({'x': 0, 'y': 0, 'z': 0}))
+        if not os.path.exists(tile0) or args.force:
+            print ("Tiling {0}".format(n), file=sys.stderr)
+            try:
+                im = Image.open(n)
+                try:
+                    os.makedirs(path)
+                except OSError:
+                    pass
+                tile_image(im, args.zoom, args.tilewidth, args.tileheight, path+"/", args.tilename, bgcolor)
+                # tiles.append(t)
+
+            except IOError as e:
+                print ("Missing {0}, skipping".format(n), file=sys.stderr)
+                tiles = tiles[:-1]
+
+    # DO THE LAYOUT, generating intermediate tiles (zoom outs)
+    if args.reverse:
+        tiles.reverse()
+    tiles = [t.zoom() for t in tiles]
+    basename = os.path.join(args.tilespath, args.name)
+    if args.recursive:
+        root_node = recursiverender(tiles, basename, args.tilewidth, args.tileheight, args.direction)
+    else:
+        root_node = gridrender(tiles, basename, args.tilewidth, args.tileheight)
+
+    # OUTPUT ROOT NODE
+    if args.html:
+        print (html(root_node, args.name))
+    else:
+        print (json.dumps(root_node, indent=args.indent))
+
+
+if __name__ == "__main__":
+
+    ap = ArgumentParser("")
+
+    ap.add_argument("--basepath", default=".")
+    ap.add_argument("--baseuri", default="")
+
+    ap.add_argument("--tilespath", default="tiles")
+
+    ap.add_argument("--tilewidth", type=int, default=256)
+    ap.add_argument("--tileheight", type=int, default=256)
+    ap.add_argument("--zoom", type=int, default=3)
+
+    ap.add_argument("--tilename", default="z{0[z]}y{0[y]}x{0[x]}.png")
+    ap.add_argument("--reverse", default=False, action="store_true")
+    ap.add_argument("--indent", default=2, type=int)
+    ap.add_argument("--recursive", default=False, action="store_true")
+
+    ap.add_argument("--force", default=False, action="store_true")
+
+    subparsers = ap.add_subparsers(help='sub-command help')
+    ap_gallery = subparsers.add_parser('gallery', help='Create a grid gallery of images')
+    ap_gallery.add_argument("input", nargs="*")
+    ap_gallery.add_argument("--html", default=False, action="store_true")
+    ap_gallery.add_argument("--recursive", default=False, action="store_true")
+    ap_gallery.add_argument("--direction", type=int, default=3, help="cell to recursively expand into, 0-3, default: 3 (bottom-right)")
+    ap_gallery.add_argument("--name", default="gallery")
+    ap_gallery.set_defaults(func=make_gallery)
+
+    args = ap.parse_args()
+    args.func(args)
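+
+# Example invocations (the second mirrors the makefile's drop.node.json rule;
+# the first, positional image arguments, is inferred from the argparse setup
+# above):
+#   python scripts/leaflet.py gallery img/a.jpg img/b.jpg > gallery.json
+#   cat drop.json | python scripts/leaflet.py gallery --recursive --direction 2 > drop.node.json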
""" + galleryitems = t.findall(".//li[@class='gallerybox']") + items = [] + for i in galleryitems: + image_link = i.find(".//a[@class='image']") + src = None + captiontext = None + article = None + + if image_link != None: + src = image_link.attrib.get("href") + # src = src.split("/")[-1] + + caption = i.find(".//*[@class='gallerytext']") + if caption: + captiontext = ET.tostring(caption, method="html") + articlelink = caption.find(".//a") + if articlelink != None: + article = articlelink.attrib.get("href") + + # f = wiki.Pages[imgname] + # items.append((f.imageinfo['url'], captiontext)) + items.append((src, captiontext, article)) + return items + +def mwfilepage_to_url (wiki, url): + filename = urllib.unquote(url.split("/")[-1]) + page = wiki.Pages[filename] + return page, page.imageinfo['url'] + +def url_to_path (url): + """ https://pzwiki.wdka.nl/mediadesign/File:I-could-have-written-that_these-are-the-words_mb_300dpi.png """ + path = urllib.unquote(urlparse.urlparse(url).path) + return "/".join(path.split("/")[3:]) + +def wiki_absurl (wiki, url): + ret = '' + if type(wiki.host) == tuple: + ret = wiki.host[0]+"://"+wiki.host[1] + else: + ret = "http://"+wiki.host + + return urlparse.urljoin(ret, url) + +def wiki_title_to_url (wiki, title): + """ relies on wiki.site['base'] being set to the public facing URL of the Main page """ + ret = '' + parts = urlparse.urlparse(wiki.site['base']) + base, main_page = os.path.split(parts.path) + ret = parts.scheme+"://"+parts.netloc+base + p = wiki.pages[title] + ret += "/" + p.normalize_title(p.name) + return ret + +def ensure_wiki_image_tiles (wiki, imagepageurl, text='', basepath="tiles", force=False, bgcolor=None, tilewidth=256, tileheight=256, zoom=3): + print ("ensure_wiki_image_tiles", imagepageurl, file=sys.stderr) + page, imageurl = mwfilepage_to_url(wiki, imagepageurl) + path = os.path.join(basepath, url_to_path(imageurl)) + print ("imageurl, path", imageurl, path, file=sys.stderr) + ret = tiles_wrapper(path, imagepageurl, text=text) + tp = ret.get_tile_path(0, 0, 0) + if os.path.exists(tp) and not force: + return ret + + try: + os.makedirs(path) + except OSError: + pass + im = Image.open(urlopen(imageurl)) + tile_image(im, zoom, tilewidth, tileheight, path+"/", ret.tilename, bgcolor) + return ret + +def textcell (paras): + node = {} + node['text'] = paras[:1] + moretext = paras[1:] + if moretext: + node['children'] = [textcell([x]) for x in moretext] + return node + +def name_to_path (name): + return name.replace("/", "_") + + +def render_article (wiki, ref, basepath="tiles", depth=0, maxdepth=3): + print ("render_article", ref, file=sys.stderr) + if type(ref) == Page: + page = ref + title = page.name + ref = wiki_title_to_url(wiki, page.name) + elif ref.startswith("http"): + title = wiki_url_to_title(ref) + page = wiki.pages[title] + else: + title = ref + page = wiki.pages[title] + ref = wiki_title_to_url(wiki, page.name) + # pagetext = page.text() + # print ("WIKI PARSE", title, file=sys.stderr) + parse = wiki.parse(page=title) + html = parse['text']['*'] + # print ("GOT HTML ", html, file=sys.stderr) + tree = html5lib.parse(html, treebuilder="etree", namespaceHTMLElements=False) + body = tree.find("./body") + paras = [] + images = [] + imgsrcs = {} + + for c in body: + if c.tag == "p": + # filter out paras like


but checking text-only render length + ptext = ET.tostring(c, encoding="utf-8", method="text").strip() + if len(ptext) > 0: + ptext = ET.tostring(c, encoding="utf-8", method="html").strip() + paras.append(ptext) + + elif c.tag == "ul" and c.attrib.get("class") != None and "gallery" in c.attrib.get("class"): + # print ("GALLERY") + gallery = parse_gallery(c) + # Ensure image is downloaded ... at least the 00 image... + for src, caption, article in gallery: + src = wiki_absurl(wiki, src) + if src in imgsrcs: + continue + imgsrcs[src] = True + print ("GalleryImage", src, caption, article, file=sys.stderr) + # if article and depth < maxdepth: + # article = wiki_absurl(wiki, article) + # images.append(render_article(wiki, article, caption, basepath, depth+1, maxdepth)) + # else: + images.append(ensure_wiki_image_tiles(wiki, src, caption, basepath).zoom()) + + for a in body.findall('.//a[@class="image"]'): + caption = a.attrib.get("title", '') + src = wiki_absurl(wiki, a.attrib.get("href")) + # OEI... skippin svg for the moment (can't go straight to PIL) + if src.endswith(".svg"): + continue + print (u"Image_link {0}:'{1}'".format(src, caption).encode("utf-8"), file=sys.stderr) + if src in imgsrcs: + continue + imgsrcs[src] = True + images.append(ensure_wiki_image_tiles(wiki, src, caption, basepath).zoom()) + + print ("{0} paras, {1} images".format(len(paras), len(images)), file=sys.stderr) + + + if title == None: + title = page.name + + basename = "tiles/" + name_to_path(page.name) + + # gallerynode = gridrender(images, basename) + # return gallerynode + cells = [] + if len(paras) > 0: + cells.append(textcell(paras)) + cells.extend(images) + + ret = recursiverender(cells, basename) + ret['text'] = u"""

<p class="caption"><span class="text">{0}</span> <a class="url" href="{1}">WIKI</a></p>

""".format(title, ref) + if images: + ret['image'] = images[0]['image'] + return ret + + # article = {} + # article['text'] = title + # article['children'] = children = [] + # children.append(textcell(paras)) + # for iz in images[:2]: + # if 'image' not in article and 'image' in iz: + # article['image'] = iz['image'] + # children.append(iz) + # restimages = images[2:] + # if len(restimages) == 1: + # children.append(restimages[0]) + # elif len(restimages) > 1: + # children.append(gridrender(restimages, basename)) + # return article + +def render_category (wiki, cat, output="tiles"): + print ("Render Category", cat, file=sys.stderr) + # if type(cat) == Page: + # page = ref + # title = page.name + # ref = wiki_title_to_url(wiki, page.name) + if cat.startswith("http"): + title = wiki_url_to_title(cat) + cat = wiki.pages[title] + else: + title = ref + cat = wiki.pages[cat] + # ref = wiki_title_to_url(wiki, cat.name) + print ("cat", cat, file=sys.stderr) + pages = [] + for m in cat.members(): + pages.append(m) + pages.sort(key=lambda x: x.name) + pagenodes = [render_article(wiki, x.name) for x in pages] + for page, node in zip(pages, pagenodes): + node['text'] = u"""

<p class="caption"><span class="text">{0}</span> <a class="url" href="{1}">WIKI</a></p>

""".format(page.name, wiki_title_to_url(wiki, page.name)) + ret = gridrender(pagenodes, output+"/"+cat.name.replace(":", "_")) + ret['text'] = u"""

<p class="caption"><a class="url" href="{0}">{1}</a></p>

""".format(wiki_title_to_url(wiki, cat.name), cat.name) + return ret + # for p in pages: + # print (p.name, wiki_title_to_url(wiki, p.name)) + +def make_category (args): + wiki = Site((args.wikiprotocol, args.wikihost), path=args.wikipath) + root_node = render_category(wiki, args.category) + if args.html: + print (html(root_node, "")) + else: + print (json.dumps(root_node, indent=2)) + + +def make_article (args): + wiki = Site((args.wikiprotocol, args.wikihost), path=args.wikipath) + root_node = render_article(wiki, args.wikipage) + if args.html: + print (html(root_node, "")) + else: + print (json.dumps(root_node, indent=2)) + +def make_gallery(args): + wiki = Site((args.wikiprotocol, args.wikihost), path=args.wikipath) + # apiurl = args.wikiprotocol+"://"+args.wikihost+args.wikipath+"api.php" + if len(args.wikipage) == 1: + root_node = render_article(wiki, args.wikipage[0]) + else: + children = [] + for wikipage in args.wikipage: + print ("rendering", wikipage, file=sys.stderr) + if "Category:" in wikipage: + print ("rendering", wikipage, file=sys.stderr) + cnode = render_category(wiki, wikipage, args.output) + else: + cnode = render_article(wiki, wikipage) + children.append(cnode) + if args.recursive: + root_node = recursiverender(children, args.output+"/"+args.name, direction=1) + else: + root_node = gridrender(children, args.output+"/"+args.name, direction=1) + + if args.html: + print (html(root_node, "")) + else: + print (json.dumps(root_node, indent=2)) + + +def testwiki (args): + return Site((args.wikiprotocol, args.wikihost), path=args.wikipath) + +if __name__ == "__main__": + + ap = ArgumentParser("") + ap.add_argument("--wikiprotocol", default="https") + ap.add_argument("--wikihost", default="pzwiki.wdka.nl") + ap.add_argument("--wikipath", default="/mw-mediadesign/") + ap.add_argument("--wikishortpath", default="/mediadesign/") + + ap.add_argument("--tilewidth", type=int, default=256) + ap.add_argument("--tileheight", type=int, default=256) + # ap.add_argument("--zoom", type=int, default=3) + + ap.add_argument("--output", default="tiles") + # ap.add_argument("--title", default="TITLE") + + + subparsers = ap.add_subparsers(help='sub-command help') + ap_article = subparsers.add_parser('article', help='Render an article') + ap_article.add_argument("wikipage") + ap_article.add_argument("--html", default=False, action="store_true") + ap_article.set_defaults(func=make_article) + + ap_gallery = subparsers.add_parser('gallery', help='Render a gallery of articles') + ap_gallery.add_argument("wikipage", nargs="+") + ap_gallery.add_argument("--html", default=False, action="store_true") + ap_gallery.add_argument("--recursive", default=False, action="store_true") + ap_gallery.add_argument("--direction", type=int, default=3, help="cell to recursively expand into, 0-3, default: 3 (bottom-right)") + ap_gallery.add_argument("--name", default=None) + ap_gallery.set_defaults(func=make_gallery) + + ap_gallery = subparsers.add_parser('testwiki', help='Render a gallery of articles') + ap_gallery.set_defaults(func=testwiki) + + ap_article = subparsers.add_parser('category', help='Render an article') + ap_article.add_argument("category") + ap_article.add_argument("--html", default=False, action="store_true") + ap_article.set_defaults(func=make_category) + + + + args = ap.parse_args() + ret = args.func(args) + diff --git a/scripts/texthierarchy.py b/scripts/texthierarchy.py new file mode 100644 index 0000000..e009f7d --- /dev/null +++ b/scripts/texthierarchy.py @@ -0,0 +1,32 @@ +from __future__ import 
print_function +from html5lib import parse +import sys, json +from xml.etree import ElementTree as ET + + +def process (f): + stack = [] + for line in f: + line = line.rstrip() + if line: + level = 0 + while line.startswith("\t"): + line = line[1:] + level += 1 + print (level, line, file=sys.stderr) + node = { + 'text': line, + 'level': level, + 'children': [] + } + while len(stack) > level: + stack.pop() + if len(stack): + stack[len(stack)-1]['children'].append(node) + stack.append(node) + return stack[0] + +if __name__ == "__main__": + n = process(sys.stdin) + import json + print (json.dumps(n, indent=2)) \ No newline at end of file diff --git a/styles.css b/styles.css new file mode 100644 index 0000000..ad485bd --- /dev/null +++ b/styles.css @@ -0,0 +1,76 @@ +@font-face { + font-family: "Libertinage x"; + src: url("fonts/Libertinage-x.ttf"); +} +@font-face { + font-family: "OSP-DIN"; + src: url("fonts/OSP-DIN.ttf"); +} + +body { + margin: 5em; + font-family: "Libertinage x", serif; + font-size: 1.1em; + color: #2d2020; + background: #f2eee3; +} +#map { + background: #f2eee3 !important; +} +div.tile { + color: #2d2020; + position: absolute; + pointer-events: auto; /* this enables links */ +} +div.tile img.imagetile { + position: absolute; + left: 0; top: 0; + z-index: 0; +} +div.tile div.text { + position: absolute; + left: 0; top: 0; + z-index: 1; + font-family: sans-serif; + font-size: 15px; + line-height: 18px; + /*text-shadow: 1px 1px 2px black;*/ + padding-right: 10px; + padding-left: 0px; + margin-top: 0px; +} +div.tile div.text p { + margin: 0; + hyphens: auto; +} +div.tile div.text a { + margin: 0; + text-decoration: none; + color: #f2eee3; + background: #ed4e47; +} +div.tile div.text a:hover {} +div.coords { + pointer-events: none; + display: none; +} +.leaflet-overlay-pane { + z-index: 0 !important; /* hack to put the x underneath */ +} +p.caption {} +p.caption span.text { + /*background: #444;*/ +} +p.caption span.date { + padding-left: 8px; + /*background: #444;*/ + /*color: #AAA;*/ +} +p.caption a.url { + padding-left: 8px; + /*color: #FF0;*/ +} +p.caption a.url:hover { + /*background: #FF0;*/ + /*color: black;*/ +} diff --git a/xpub.top.json b/xpub.top.json new file mode 100644 index 0000000..ef959bf --- /dev/null +++ b/xpub.top.json @@ -0,0 +1,14 @@ +{ + "text": "", + "children": [ + { "@include": "about.json" }, + { + "text": "

Here you will find a frequently updated stream of images and posts reflecting current events in the course.",
+      "@include": "drop.node.json"
+    },
+    {
+      "text": "In the archive you will find a vast collection of final projects from over 10 years of Media Design (including Networked Media and Lens-Based).",
+      "@include": "archive.json"
+    }
+  ]
+}